Notice: The content of this article is based on publicly available knowledge. Wherever applicable, the author has provided links to articles that have published this information. No confidential information that is unavailable on public forums is presented here.
I just got back from a week-long trip to the Bay Area (almost my second home). One of the nicest things about the Bay Area is the constant spirit of innovation and entrepreneurship that drives the people there. I remember meeting an old friend of mine three years ago, when he excitedly described how he was working on an initiative to “unwire” large metro-scale areas by placing access points and implementing an efficient inter-access-point routing protocol to offer a paraNetwork. When I heard it then, I wished him the best of luck, but was fairly positive that this effort would be a disaster. After all, there is the Internet, there are cable companies, there are DSL providers. So why do we need this technology?
Today, I had lunch with him again. And now he is excited, not because he thinks it is a great concept, but because his dream is being implemented. Google recently partnered with them to unwire Mountain View (ref: here). Earthlink signed up with them and Motorola to provide city-wide WiFi access to 5 major cities (ref: here), including the now-famous ‘Philadelphia – City of Brotherly Love’ WiFi initiative (ref: here).
The company is Tropos Networks, based in Sunnyvale, CA.
This article, however, is more about the implications of this technology. Until a technology is proven, there are always cynics (including me). I have been involved in the VoIP industry long enough to have seen the same cynicism when VoIP first promised to be an alternative to the PSTN. Until Vonage came in and proved it, no one really bought it. Today, pretty much every company is in this game. Broadband IP (wired) has come of age and matured. Unwired broadband is going through the same ‘proof is in the pudding’ cycles that its wired sibling faced a couple of years ago.
The right technology, The right time, The right cost.
Verizon today talks about delivering FiOS with whopping megabit speeds, and so does cable. Compared to that, the ‘proven’ WiFi technologies go up to 54Mbps (I am not yet talking about the 200+ Mbps promises of 802.11n and the like – 802.11n and 802.16e have only just come through the IEEE, and they are years away from adoption in the consumer space. OEM buildout is happening today, obviously). However, infrastructure is all about ‘doing the job well at the right time’. Ethernet’s popularity was not about being the most efficient protocol; it clearly was not, when introduced. Alternative protocols did a better job of collision management and guaranteed delivery. However, Ethernet met the requirements and was cheap and simple. SMTP is another example.
Slapping a WiFi access point inside your house is a very different beast from a multi-access-point technology that can provide you connectivity as you move from one coverage area to another, or, say, drive at 75mph on your local highway. The biggest challenge is to provide the user a reliable, high-bandwidth connection with a constant IP address while at the same time addressing true mobility. This is what companies such as Tropos provide. There are many more in the fray, including Cisco and Nortel (more on that later).
So anyway, back to the fundamental question: “Why do we need lamp-post networking when we have Cable, DSL and the like? They offer higher bandwidth at home. On the other hand, carriers like Verizon are deploying HSDPA and similar technologies that will also offer high-speed data access with mobility.”
The simple answer is timing. Today, Tropos is deployed successfully at over 300 sites. Cisco has seen this success and has put its entire weight behind its own version of mesh networking. Unlike Nortel, which chose to deploy OSPF as the routing protocol between access points, Cisco has developed its own ‘secret sauce’ routing protocol, just as Tropos did years ago. Today, Tropos is clearly two years ahead of the game. Whether Cisco or others can catch up is another question. The point for consumers is that competition is healthy. Bigwigs such as Cisco will channel millions of dollars into perfecting the technology in this space, while smaller fish like Tropos will continue to innovate to stay in the lead. Net-Net: the consumer profits irrespective of which company wins.
Alternative solutions such as HSDPA, or mobility extensions to offerings like Verizon’s FiOS, are not there yet. It takes years to perfect a technology and to ensure that power, signal loss, interference and security are addressed appropriately before it becomes an acceptable solution to the market.
While such technologies are perfected, companies like Cisco and Tropos are proving that mesh works in a municipal area with the well-proven 802.11b and 802.11g technologies. The core of this technology is efficient routing. With the IEEE ratification of 802.16e and 802.11n, plugging these higher-bandwidth chips into the access points becomes a much simpler problem to solve incrementally. Most users today really don’t need more than 802.11g (typically 20Mbps in real life). With the convergence of rich media, it is expected that bandwidth requirements will increase to 100Mbps – but we are not there yet. By the time we are, I bet the newer .11x standards will be hardened as well to serve that market.
The cost for this technology is just about right as well. As an example, to unwire Mountain View, Google’s install cost for 400 nodes is approximately $1,000,000, with a recurring cost of $17,000 per annum (ref: here).
In other words, for just a million bucks, you can set up a complete access and distribution infrastructure to offer high-speed bandwidth with mobility to around 70,000 residents. If 60% of the population uses the service (and why not) at, say, $15 per month, with say 30% of that going as profit to the ISP (after taking into account maintenance, support and data center costs), do the math. Recovering the investment is a non-issue. This same model can be applied to any large community.
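For the skeptics, here is that back-of-the-envelope math spelled out, using only the figures above (the 60% uptake and 30% margin are the article's own illustrative assumptions, not reported numbers):

```python
# Back-of-the-envelope payback calculation for the Mountain View deployment.
install_cost = 1_000_000   # one-time install cost for ~400 nodes (approx.)
recurring_pa = 17_000      # recurring cost per annum
residents = 70_000         # population covered
uptake = 0.60              # assumed share of residents subscribing
monthly_fee = 15           # dollars per subscriber per month
profit_margin = 0.30       # assumed share of revenue left as ISP profit

subscribers = residents * uptake
monthly_profit = subscribers * monthly_fee * profit_margin - recurring_pa / 12
payback_months = install_cost / monthly_profit

print(f"{subscribers:.0f} subscribers, payback in {payback_months:.1f} months")
```

Under those assumptions the install cost is recovered in well under a year, which is why I call investment recovery a non-issue.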
Single vs. Dual radio
While Tropos uses a single radio in its access points (2.4GHz), companies such as Nortel and Cisco chose to use dual radios in theirs (2.4GHz and 5GHz). This is all about claims of bandwidth. Cisco and Nortel claim that a second radio gives them more spectrum to offer more bandwidth to their users. Folks like Tropos claim that dual radio is more hype than reality and that the cost of dual-radio interference negates its supposed benefit. This difference of opinion is documented all around the web, including here.
It was about time someone ran a practical test rather than relying on theoretical formulas. Rongdi Chan, working at the Network Research Center at Tsinghua University, recently published his findings on the throughput of the dual-radio Nortel solution versus that of the single-radio Tropos solution. That report can be found here.
In summary, what he found was that as the number of hops increases (access point to access point transfers), the Tropos throughput fell off as 1/n, while the Nortel throughput decreased as (1/2)^n. Quite contrary to the theoretical assumption that two radios confer a double advantage, eh? The reason? An efficient routing protocol affects throughput more significantly than adding another radio channel. I expect the Cisco performance to be better, though, assuming that their routing protocol is more efficient.
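To see how dramatic the difference is, here is a quick sketch of the two reported scaling laws. The values are fractions of the one-hop baseline, purely to illustrate the shapes of the curves; they are not measurements from the report:

```python
# Reported fall-off curves vs. hop count n:
#   single-radio (Tropos): throughput ~ 1/n
#   dual-radio (Nortel):   throughput ~ (1/2)^n
# Values are fractions of single-hop throughput, for illustration only.
hop_counts = range(1, 6)
tropos = [1 / n for n in hop_counts]     # linear (1/n) fall-off
nortel = [0.5 ** n for n in hop_counts]  # exponential (1/2)^n fall-off

for n, t, d in zip(hop_counts, tropos, nortel):
    print(f"{n} hops: 1/n = {t:.3f}, (1/2)^n = {d:.4f}")
```

By four hops the 1/n curve still retains a quarter of the baseline, while the (1/2)^n curve is down to a sixteenth. That gap, not radio count, is what the routing protocol buys you.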
Net-Net (ElusiveCheese hates the term 'Net-Net' - he thinks it makes me a marketing dweeb)
To the consumer, such innovative technologies drive down cost and deliver incremental benefits quickly. The majority of application needs today are met by 802.11g and even 802.11b throughput rates. Leveraging this technology and providing true mobility now is a far better approach than waiting to offer a similar service only once 500Mbps becomes possible. This way, the infrastructure gets validated incrementally, people get used to this new freedom, and application providers can start offering innovative solutions over such a paranet right away, taking quick baby steps to perfect the ecosystem, as opposed to waiting for a revolutionary change three years down the line only to see it come crashing down a year later because it is just not robust enough. There is a reason why any successful product rolls out as pre-alpha, beta, 0.1 and then 1.0, people! Starting with 2.0 right away is, well, a recipe for disaster.