Wednesday, December 20, 2006

IMS WiMAX interworking - Policy - Part II



(click here for Part I)

In a previous article, I talked about how IMS, 3G and WiMAX fit together at a conceptual level. As promised, in this article let’s delve a bit into what it means to interwork IMS and WiMAX. I wanted to get this article out by the end of this year, so that we have some ‘meat’ to this thread before I forget all about this in 2007.

Note: as usual, click on images to make them larger

What needs to be interworked ?

At a macro level, we know that IMS is a session control layer, while the WiMAX Forum’s NWG efforts stop at IP connectivity. So when we think of interworking, we need to focus on:


  1. Policy – how does one ensure that network policies such as QoS and admission control are enforced uniformly?
  2. Security – how does one ensure that a subscriber is authenticated at both the WiMAX and the IMS layer, since they provide different levels of service? (For example, a UE may connect to a WiMAX network, but may not be allowed to place a call via IMS due to call barring.)
  3. Charging – how does one ensure that voice/video/data access events in the WiMAX network propagate to the IMS charging and policy specification?
  4. Session continuity – when a WiMAX UE moves between BTSs (micro-mobility) or ASNs (macro-mobility), what changes need to occur at the IMS level to ensure that an existing session is continuous?
  5. Service continuity – when a WiMAX UE-side handover occurs, what effect does it have on the ‘service’ being executed – a notch up from the ‘session’ in the previous point? For example, assume that the UE is in a video conference and switches to another WiMAX ASN with less bandwidth than the previously connected ASN, resulting in loss of video and a downgrade to a lower codec rate. To ensure service continuity, the centralized Application Server hosting the conference would need to be notified of this change, and the change may need to be propagated to other participants.

But what is the model of interworking ?
Before we figure out an approach, we need to answer some key questions. For example:
  1. Who owns the WiMAX network and who owns the IMS network? This is a very important question. If Sprint-Nextel went the WiMAX way and Verizon went the IMS+3G way, there are going to be business rules that govern which edge nodes of each network get to talk to each other. In fact, ownership is one of the biggest challenges of interworking. I’ve spoken to countless enterprise companies who are considering IMS interworking, but are paranoid about exposing their corporate data to a centralized carrier’s HSS, for instance.
  2. What is the relationship model? Master-slave or peer-to-peer? For example, consider a situation where, say, Cingular owns both 3G and WiMAX spectrum and deploys an end-to-end cross-bred solution. In this situation, since it is all owned by one operator, one could deploy a ‘Policy Control Point’ (the guy who says ‘Hey buddy – reduce QoS to 128kbps’) in one network and a ‘Policy Enforcement Point’ (the guy who says ‘Okey Dokey! Let me act on that asap’) in the other. However, if you assume a case where two competing operators just want to offer interworking, but not yield control, it may be necessary to deploy both the Control and Enforcement nodes independently in each network, and define a ‘peer’ protocol between the Policy Control nodes.

A simple model
As I mentioned earlier, the business and technical permutations to this solution make it an extremely wide area. For simplicity, let us assume the following:
(Don't ask me why I selected 'Mo' - I wanted to be more innovative than saying 'X' and this is the level of my creativity...)
  1. Operator ‘Mo’ owns both 3G and WiMAX networks
  2. Operator ‘Mo’ decides to make IMS the ‘Master’ control layer and decides to deploy WiMAX as yet another access stratum, under control and direction from the IMS layer.

Understanding Policy Interworking and PCC


In previous releases of IMS, the P-CSCF and the GGSN would often play the roles of the ‘Policy Decision Function’ (PDF) and the ‘Policy Enforcement Point’ (PEP): the P-CSCF would set policy and control based on the network and user profile, while the GGSN would allow/disallow media PDP contexts and flows based on those instructions. The 3GPP group then felt that the policy functions were too closely tied to the core elements and that there was a need to separate the concept of ‘policy’ out. It was also important to isolate charging and tie it to policy in a way that lends itself to a heterogeneous network (i.e. make it IP-CAN independent). This is exactly what the SA2 group of 3GPP was tasked with, and the result is specified in TS 23.203, “Policy and Charging Control architecture” – a better evolution of the older PDF and PEP. In this new model, the following important nodes are defined:
  1. PCRF (Policy and Charging Rules Function) – a logical node that creates ‘rules’ for setting both policy and charging. These rules can combine several parameters: for example, ‘User Joe is only allowed voice and video’ and ‘network X can have a max bandwidth of 256kbps’ are combined to arrive at ‘User Joe is only allowed voice and video at 256kbps while connected to X’.
  2. PCEF (Policy and Charging Enforcement Function) – acts on the rules from the PCRF and enforces them. The protocol between the PCRF and PCEF happens to be Diameter (mapped to the Gx interface).
Sidenote: I am still skeptical that in reality the PCRF and PCEF will be sold as 'separate entities', but I do know of a few companies that have started selling independent nodes for policy. One would need to wait and see whether this becomes a scalable money-making model for them, or whether most deployments will simply use a policy solution from the same vendor that sold them the core IMS CSCF infrastructure.
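The rule-combination idea above can be sketched in a few lines of code. This is purely illustrative: the field names and profile structures below are invented for this example; a real PCRF derives PCC rules from the attributes defined in 3GPP TS 23.203.

```python
# Illustrative sketch only: profile/rule field names are invented here,
# not taken from the 3GPP specifications.

def combine_rules(user_profile, network_profile):
    """Combine per-user and per-network constraints into one effective rule."""
    return {
        # Media types must be allowed for the user AND supported by the network.
        "allowed_media": [m for m in user_profile["allowed_media"]
                          if m in network_profile["supported_media"]],
        # The effective bandwidth is capped by the tighter constraint.
        "max_bandwidth_kbps": min(user_profile["max_bandwidth_kbps"],
                                  network_profile["max_bandwidth_kbps"]),
    }

joe = {"allowed_media": ["voice", "video"], "max_bandwidth_kbps": 512}
network_x = {"supported_media": ["voice", "video", "data"],
             "max_bandwidth_kbps": 256}

rule = combine_rules(joe, network_x)
print(rule)  # {'allowed_media': ['voice', 'video'], 'max_bandwidth_kbps': 256}
```

The point is simply that the PCRF is the single place where user-level and network-level attributes meet before anything is pushed down for enforcement.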

It is not possible to cover all the differences and enhancements that PCC brings over older, pre-PCC releases in this blog note. In short, it specifies concrete charging rules, charging models, the concept of a service data flow, more fine-grained QoS control and explicit interfaces to the other key nodes that participate in a service flow (like the charging system, Application Servers, CSCFs and HSS), while ideally remaining generic enough to adapt to any IP-CAN.

This is what the PCC architecture looks like (credit: 3GPP TS 23.203):





Mapping to WiMAX Network Reference Model

Now that we understand how PCC works, let us take a look at the WiMAX NRM (credit: WiMAX Forum Stage 2 NRM):





In short, the ASN (Access Service Network) provides the access layer connectivity and QoS to the Mobile Station (or UE). The CSN (Connectivity Service Network) provides connectivity services – usually AAA, IP address allocation, security (NAT/firewalls) etc. are part of the CSN. Not shown in this diagram is that WiMAX has already specified QoS service-flow management and enforcement points in the ASN for access QoS and management: the SFM (Service Flow Manager) and the SFA (Service Flow Authorization). So it would make sense to use them as the interface to IMS’ PCC.

Remember that we assumed, for simplicity, that there is only one owner and there are no federation & sharing issues. Taking this forward then:
  1. We assume that the WiMAX ‘CSN’ is replaced with IMS for session control, plus DHCP + Home Agent for IP-CAN allocation
  2. The PCRF can then reside in the CSN of WiMAX. And since IMS is part of the CSN, the PCRF can effectively talk to the HSS, CSCFs and Application Servers for service-flow interactions
  3. The PCEF will also be part of the CSN, and will communicate with the PCRF via Gx (Diameter)
  4. The PCEF will then possibly talk to the WiMAX ASN SFA for enforcing rules in the WiMAX ASN

The last point may be confusing. Why can’t the PCEF be part of the ASN? Well, it sure can, and like I said, there are multiple ways to slice and dice this. But placed this way, the PCEF can act as a ‘mediator’ between the ASN and the PCRF. The WiMAX ASN SFA does not understand how to interact with PCRFs, and it uses a different mechanism for QoS enforcement. Therefore, the CSN-hosted PCEF can receive requests from the PCRF and translate them to the interface the SFA expects, and vice versa. This also results in less re-engineering at the ASN. Of course, that means the interface between the PCEF and SFA is “new”. The IMS-WiMAX interworking folks call this Gx’ (i.e. think of it as Gx, but modified to work in WiMAX). So the final diagram comes out to:
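The ‘mediator’ role the PCEF plays here can be sketched as a simple translation function. To be clear about assumptions: the rule fields and the SFA request format below are invented for illustration; the real Gx and Gx’ payloads are Diameter AVPs defined by 3GPP and the WiMAX Forum, not Python dictionaries.

```python
# Hypothetical sketch of the CSN-hosted PCEF translating a PCRF rule
# (arriving over Gx) into the request shape the WiMAX ASN SFA expects
# (over Gx'). All field names are invented for this example.

def gx_to_sfa(pcc_rule):
    """Translate a PCRF-issued rule into an SFA service-flow request."""
    return {
        "service_flow_id": pcc_rule["rule_name"],
        "direction": pcc_rule["flow_direction"],          # 'uplink'/'downlink'
        "max_sustained_rate_kbps": pcc_rule["max_bandwidth_kbps"],
        # A closed gate at the PCRF maps to the ASN refusing admission.
        "admit": pcc_rule["gate_status"] == "open",
    }

pcc_rule = {"rule_name": "voice-rt-1", "flow_direction": "uplink",
            "max_bandwidth_kbps": 128, "gate_status": "open"}

sfa_request = gx_to_sfa(pcc_rule)
print(sfa_request)
```

The value of this arrangement is exactly what the paragraph above argues: the SFA never needs to learn Diameter/Gx semantics, because the PCEF absorbs the translation in both directions.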





There you go: a simplified approach, at the 10,000-foot level. In the next few articles, I will talk about the other aspects: security interworking, session interworking and service interworking.

Thursday, December 7, 2006

IMS vs. 3G vs. Wimax – Part I (Basics)


Disclaimer: I don’t know if and when there will be a part II, but I have a lot to say in this area, and I get the feeling that a single post will not be enough. So let me at least start the article naming in the right way. As I decide to author followups, I will link each post to the other.

Updated Dec 20 2006: Link to part II

There is a lot of confusion and misunderstanding in the market today about technologies that could potentially co-exist or replace one another. This article presents my view of how IMS, 3G and WiMAX fit into the larger equation.

First, let us talk about these conceptual layers (you will find that most network architectures are based on derivations of these layered principles):





Starting from bottom up:

Access Layer: Simply put, the access stratum involves all the physical characteristics and mechanisms needed to connect devices with each other. This may be WiMAX radio engineering (802.16, 802.16e etc.), WiFi engineering, UMTS RAN or other technologies. The job of the access stratum is to define and implement the protocols required to physically connect multiple nodes of a network together. As an analogy, when you connect your desktop to an ethernet cord, the ethernet-related protocols (defined in IEEE 802.3 and related specs) are responsible for ensuring that your computer is able to ‘physically reach’ the wired router sitting somewhere in the back end.

IP Connectivity Layer: Most next generation network architectures have chosen to use IP as their connectivity layer. In other words, designers would like to abstract the upper layers from the access stratum such that when people build applications on top, they see IP packets, IP addresses and the IP-based mechanisms that have become the de-facto standard of the Internet. Of course, that means someone needs to figure out how to run IP over the access stratum – and there are a lot of ‘convergence subsystems’ that define elaborate technologies to achieve just this. This layer should be able to create an “IP pipe” over the “physical pipe”, such that the layers on top can use IP addressing to communicate without worrying about the intricacies of how the communication really happens.

Session Management Layer: Now that we have an end-to-end IP pipe, where I can reach elements using IP addressing, how does one communicate with them? How do I discover another user? How do I request a phone call? Or a video call? How do I, as a network provider, set policies (example: Bob is barred from calling), charge (example: $10 flat rate for push-to-talk calls, $17 flat rate for 2-way audio) and provision the network? This is what the Session Management Layer defines.

Mapping 3G, WiMAX and IMS to these layers:

3G is the full set of layers – it defines protocols from the access stratum through the session layer. Specifically, in the access stratum, it defined UMTS and W-CDMA as popular cellular technologies. As time passed, 3GPP realized that disruptive technologies such as metro WiFi and WiMAX could serve as alternate access technologies, and that in the future, as these alternate access strata get laid out, there may be a large subscriber base that does not need cellular protocols at all and only uses WiMAX, for example.

WiMAX, as it stands today, specifies the protocols for the access stratum (the IEEE 802.16, 802.16e and related families). On top of this, the WiMAX Forum defined a Network Reference Model which uses the ‘access stratum’ of the 802.16x specifications and specifies mechanisms for running IP over it. It also specifies, in great depth, the convergence subsystem required to run a WiMAX-based system and end up with an end-to-end IP pipe with good stuff like mobility management, QoS management, resource management and the rest, at the IP layer.

IMS is the ‘top-most’ stack of the 3G protocols. It uses SIP as the preferred choice for session management and specifies policy, charging, location, routing, services and session management, so that devices connected to the network via an IP pipe can ‘do things’ like place calls and send IMs, and networks can figure out how to charge, bill, provision and manage.

So when people say ‘IMS is dead, long live WiMAX’ or the other way around, they are probably mixing things up.

As an example, the WiMAX Forum has long considered adopting the work done by 3GPP/IMS, to see how IMS as a session management layer could fit on top of the WiMAX NRM. WiMAX needs some session management layer, correct? Now, one could create proprietary session management layers on top which are not standardized (many of the current WiMAX trials use their own walled-in session management layers), but there are a few problems with this:

A network architecture takes years to ‘thaw out’. Any network architecture that accounts for session management (so please don’t cite IP and HTTP), subscriber management, and heterogeneous network interworking issues (for example, a WiMAX caller calling a PSTN user, or WiMAX-GPRS handoffs) requires a lot of thought and design. Are you sure that if you start down your own path, four years down the line you will not have re-invented the very thing that IMS already is?
The motivation for an open standards network is just that – something that is validated and vetted by multiple organizations, so that both users and networks benefit from multi-vendor interoperability.

On the other side, it is true that the IMS layer carries within it a lot of ‘baggage’ that may or may not apply to a pure WiMAX network (call continuity anchoring at the IMS level is one example).

The key therefore, is to select a relevant subset of the IMS that is useful for WiMAX as perceived today, and then continue to build on top as the network evolves.

In short, while I think it is reasonable to say ‘long live WiMAX, UMTS is dead’ or ‘long live UMTS, WiMAX is dead’, it is incorrect to say that for IMS vs. WiMAX. They don’t play in the same layer!

Specifically:





Right, I do have a lot to say. And I think I should end this article here. In subsequent parts, I would like to talk about how WiMAX and IMS coexist and the interworking mechanisms, including ‘Why you may eventually re-invent a lot of IMS even if you think you don’t need it’.

So there you have it, a promise for Part II and Part III at least...

Updated Dec 20 2006: Link to part II

Monday, November 27, 2006

Preparing for your first customer meeting





The first meeting with a customer often goes a long way toward your being able to build a strong relationship. Here are some things I would do in preparation for a first meeting with a customer:

  • Research the customer - If the customer has a website, read their product/service offerings. When you visit the customer, instead of saying "Tell us what you do" - rephrase it as, "Based on what I researched, your company is involved in X,Y and Z. I would be keen to hear your perspective on where the company is headed". You would be surprised how many sales people I know who walk into a customer meeting with absolutely no idea what they do.
  • Treat Research as 'input' not 'output' - As an extension to the above, do not conclude on what a customer does, simply by research. It is important to ask the customer about their perspective, since it is often more detailed (and sometimes, rather different) than a web report. Doing research is showing respect to the customer - that you want to know their business. But don't use it to put words in your customer's mouth.
  • Research their competition - I remember walking into a meeting with a large service provider in Canada. The first question they shot across was "Before we engage with you, we want to know if you understand our space. Tell us who you think our competition is". Fortunately, we were well prepared for that question. Had we not been, we would have lost respect that very moment.
  • If your customer is a public company, read their 10K reports - if you don't have time for the entire report, read the summary, at the least. It gives you valuable information about their pain points, their competition and more
  • Keep an account map ready to fill in - One of the most important things in first-level meetings is to assess who's who across the table. You will have the 'paper pushers' - those who talk a lot, but have little standing in the decision process; the 'gatekeepers' - the designated folks who keep vendors at arm's length so that the real decision makers are not harassed; the 'trusted lieutenants' who affect the decision process; and 'the decision makers'. In large organizations, it is critical for you to know who you are speaking with and how the organization is charted out. This will ensure you spend the right amount of energy opening the right channels.
  • Keep a list of key questions to ask - Many people think asking customers about anything is a bad thing. Not so. If you need to know an organization chart, ask away. If you need to know some product details, ask away. At best your customer will avoid a direct response, but more often than not, it works.
  • Make sure you have updated business cards - avoid scratching out titles/details before handing over your card. I've seen people scratching out titles that say "Director" to "Sr. Director" and then pass on to the customer. Really, does it matter to the customer or are you stroking your ego ?
  • If possible, create a targeted presentation - research the customer space and modify your generic presentation to include information you think the customer is interested in. If you are not sure about some solutions, create a one-pager with a summary of those solutions - if the customer shows interest, get into it; otherwise, move on.
  • Put your best foot forward - I cannot stress enough how important it is to completely impress the customer at the first meeting. If you think there is someone in your team who can help with this, make sure s/he comes along. This is also why I believe that the best people in a company need to have a field responsibility. You will not believe how many times I have heard "Oh, yes, Joe is a great technical guy, but he should be involved only when the customer relationship blooms more" - while I understand that the best guys cannot be available for every first-level meeting, for the important ones, make sure they are. You never know how a meeting turns, and having him there is better than saying "Oh yes, we have all the expertise, but let me get back to you on that".
  • Keep a list of the 'pain points' - any customer has things in his own product that he is proud of. At the same time, any customer has pain points that need to be addressed. As you talk to the customer, keep filling these in. You need to address how you will solve his pain points.
  • Make sure you are 10 mins ahead of time, but not 1 hour! - Make sure you are there 10-15 mins ahead of time. But not a full hour ahead! If you are an hour ahead, unless you have an existing relationship, don't call the customer or press him to start soon. If you do, it is more likely that the customer will need to re-arrange his current commitments or will just curtly ask you to wait it out. In any case, an hour ahead is better than a minute late. I've seen senior executives turn cold during the meeting because they waited 5 mins in the conference room and you were not there.
  • Ask for business - many people I know shy away from asking for business. You are not there for a personal beer party. You are there for business. And always remember that a customer will give you business only if it solves a problem for them. So it is a two way street. Never shy from asking for business.
  • Make sure you create minutes of meeting (MoM) with a clear action plan for followup. Also make sure the MoM is distributed to the customer for validation, with a clear indication of what was discussed, what the resulting actions are, who owns each action and the due date by which it will be addressed.


Monday, October 16, 2006

Identity Based Encryption (IBE)




Lazy days are just perfect for me to catch up with reading. This Saturday, as I was browsing through the Internet reading up on new (at least for me) trends and technologies, I came across a recent I-D on a scheme called Identity Based Encryption (IBE) here. The premise and applicability of this technology seemed pretty interesting, so I read more here, here and other places. This technology is currently being pioneered by a relatively new company, called Voltage Security.
I don't claim to understand the complex mathematics, so I am going to restrict my comments to its applicability. Simply put, IBE is not a complete replacement for existing asymmetric cryptographic algorithms. It provides a mechanism whereby an arbitrary string can be used by the 'sender' as a means to encrypt a message. Based on that identity string, the receiver can obtain a private key to decrypt it, as long as the receiver can satisfactorily prove to some 'key server' that it is the rightful owner of that 'arbitrary identity' string. This eliminates the certificate exchanges needed before communication can take place in traditional PKI schemes.

This makes more sense when we apply a deployment model to it. Consider for example, two parties: usera@att.com and userb@vzw.com

In the current mechanism of PKI based security, the following happens:


  1. UserA contacts a key server to obtain the certificate of UserB. Let us assume that the key server for the vzw domain resides at vzw.com.
  2. UserA then needs to check the certificate against a revocation list, to ensure that the certificate has not been revoked for some reason
  3. UserA should also check whether this certificate has been signed as authentic by some central authority (say, Verisign)
  4. UserA then extracts UserB's public key from the certificate, encrypts the message, and sends it off
  5. UserB, assuming that it has its private key, is now able to decrypt the message
  6. (If UserB did not have his private key, or it needed to be refreshed, he would contact his key server at vzw.com securely to obtain it)
There are several issues with this approach:
  1. The process of certificate management and verification is expensive for UserA
  2. For this to work, UserB must have had a public certificate created in the first place, or UserA cannot even contact UserB securely
  3. The mechanism of directly distributing public keys (the long string of digits you usually see in mails and on sites that say 'My Public RSA Key is below:') binds the key statically to the associated identity (this will become clear when I talk about the advantages of IBE)
With IBE, steps 1-4 are greatly reduced. Here is what happens with IBE:
  1. usera@att.com wants to communicate securely with userb@vzw.com. First, UserA contacts vzw.com to obtain what is known as the master public key (the public parameters) for the vzw.com domain. (Remember, we are assuming a deployment model where each master domain manages its own security, and hence each primary domain has its own unique master key. Nothing stops multiple domains from using some central master key server, however.)
  2. Next, UserA uses the identity string userb@vzw.com as UserB's public key and encrypts the message using it together with the received master public key. What this means is that UserB receives an encrypted S/MIME message with the From, To and other routing headers intact, but a garbled text body
  3. UserB now contacts its key server at vzw.com and performs a security exchange proving it is the rightful owner of the identity. Once satisfied, vzw.com provides UserB with its private key, which UserB can then use to decrypt
  4. Once UserB has received its private key, it does not need to contact vzw.com's key server each time - it can continue using the same key henceforth, based on the expiry and policies set by Verizon's key server (this is the same as PKI). In other words, IBE essentially provides a mapping between an arbitrary string and the private key that will eventually be used to decrypt the message.
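The flow above can be sketched as a toy simulation. Be clear about the assumptions: real IBE (e.g. Boneh-Franklin) uses bilinear pairings so that the sender needs only the domain's public parameters; the HMAC below collapses that public-key math away entirely, and exists only to show the workflow - private keys derived on demand from the identity string, with no per-user key database at the server. The class and function names are invented for this example.

```python
import hashlib
import hmac

# Toy simulation ONLY. This is NOT real IBE cryptography: the pairing-based
# math that lets a sender encrypt with just the identity string and public
# parameters is replaced by an HMAC. The aim is to show the key server's
# workflow - stateless, on-demand private-key extraction from an identity.

class KeyServer:
    def __init__(self, master_secret: bytes):
        self._master_secret = master_secret  # never leaves the server

    def extract_private_key(self, identity: str) -> bytes:
        # Derived deterministically from the identity string: the server
        # keeps no key database, which is why it scales statelessly.
        return hmac.new(self._master_secret, identity.encode(),
                        hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Stand-in for the real encryption/decryption step (XOR is involutive,
    # so the same function both encrypts and decrypts here).
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

server = KeyServer(b"vzw.com-master-secret")

# UserB authenticates (out of band, step 3 above) and obtains a private
# key for its identity; the key is reproducible at any time.
priv = server.extract_private_key("userb@vzw.com")
ciphertext = xor_stream(priv, b"hello userB")
assert xor_stream(priv, ciphertext) == b"hello userB"
```

Note how deriving the key twice yields the same bytes - that determinism is what makes the "stateless key server" property in the ramifications list below possible.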
This has some very interesting ramifications:
  1. UserA can now send secure emails to UserB without the problem of first obtaining his public key
  2. The computational power required of UserA is reduced (think cell phones and battery consumption)
  3. UserB can choose to set up its private key only after receiving a secure message from UserA
  4. Since the outgoing encryption is based on a string, ad-hoc policies are very simple to implement without the cost of re-issuing/revoking certificates
  5. Since keys are generated on demand, the key server is essentially stateless, which lends itself to better scaling of the key server (thanks to a person in the 'know' who read this and reminded me of the point)
Consider some use-cases:
  1. Assume that the avaya.com corporation has deployed this scheme. Then any email encrypted with “avaya.com” can generate a private key that applies to all Avaya employees. Similarly, an Avaya employee could send an ad-hoc encrypted email to “sip.avaya.com” for the entire SIP development organization. These strings, and their relation to an appropriate private key, are therefore dynamic: you don’t need to pre-create dozens of certificates for such relationships. If the key server for a domain does not want to honor an identity string, it can reject it.
Finally, since the mathematical foundation allows association with arbitrary strings, each domain can set its own key generation rules. This brings us to my last interesting read, “Fuzzy IBE”. In this approach, the authors extend the scheme to allow for a degree of fuzziness. When UserB communicates with vzw’s key server to prove he is indeed userb@vzw.com, instead of requiring an exact match, the server and the client (server = VZW key server, client = UserB) negotiate a set of attributes which defines B. The server can choose to grant UserB its private key even if not all attributes match exactly. The degree of error tolerance is key, however, and the paper discusses algorithms to securely prove identity given a particular tolerance.

Why is this useful? Consider, for example, the new phones being launched with voice recognition or biometric scans. Such identity proofs are a combination of multiple attributes, and there is no guarantee that they are all the same all the time. An iris scan could go astray if you just got punched in the eye by the boyfriend of a girl you were trying to warm up to. Or your voice recognition identity may go astray if you happened to be partying all night, screaming ‘Who Let The Dogs Out’.

So all in all, IBE offers a very convenient alternative to standard certificate mechanisms that will hopefully help in domain-based security systems by greatly reducing the pains that plague the certificate community. In addition, being able to map a URI (identity) to an encryption mechanism should greatly help its deployment in the VoIP space as well.
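The error-tolerance idea in Fuzzy IBE can be illustrated with a small sketch. A caveat on assumptions: in the real scheme the threshold is enforced cryptographically via secret sharing over the attribute set, not by an if-statement at the server; the snippet below only models the policy decision described above, and the attribute names are invented.

```python
# Illustrative sketch only: real Fuzzy IBE enforces the attribute
# threshold cryptographically; this models just the grant/deny decision.

def attributes_match(enrolled: set, presented: set, tolerance: int) -> bool:
    """Grant the key if at most `tolerance` enrolled attributes are missing."""
    return len(enrolled - presented) <= tolerance

# Hypothetical biometric enrollment for userb@vzw.com.
enrolled = {"iris:a1", "voice:v7", "fingerprint:f3", "pin:1234"}
presented = {"iris:a1", "voice:v7", "pin:1234"}  # fingerprint scan failed

print(attributes_match(enrolled, presented, tolerance=1))  # True
print(attributes_match(enrolled, presented, tolerance=0))  # False
```

The whole design question is then picking a tolerance that tolerates a bad fingerprint scan without letting an impostor who matches only one or two attributes walk away with the private key.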

Friday, October 13, 2006

Conveniences of the future




Okay, not a technology post, really. Just some light-hearted cynicism. We haven't had one of those posts for a while now. Life is not all about technology, ya know ?

Yes, yes, I know: with the advent of the all-pervasive IP pipe, you are going to be able to wave a hand and discover your friend's contact from Google with its advanced mind-reader engine. You will be able to click on a webpage and call your friend (let's not worry about minor details like how your friend's contact would be on the web in the first place, and whether he really wants that or not). You will be able to discover his presence and call him when he is free, and all that good stuff. Of course, there is IM as well, and you could just IM me your contact. Great stuff!

But in the meantime, let us assume that the vast majority of users, who still use a phone as we know it, start getting used to this new world... After all, why blame them? Most of the devices I see today have a dialpad on them, even if it happens to be a soft-phone. So until UI innovations come in, I'd expect people's usability patterns to remain the same for a while.


Since the dawn of VoIP, people have been constantly saying that with VoIP telephony, you can now dial by user names, and it makes life so much simpler than remembering a number.

1. Have you ever tried typing 'billybob@myverizon.net' on your phone vs. 3101457865?

2. Do you really believe that when this form of identity becomes popular, you will actually get an easy-to-remember address like 'billybob'? I'd bet it would be more like 'billybob_0012'. If you work for a Tier 1 company with 10,000+ employees, you probably know that already, just by looking at your email. In this case, we are talking about subscriber bases 10x-100x that size.

3. Do you really think people will remember who is who, when you have addresses like samlikesfood@sip.yahoo.com, sam@myisp.com, samusr124@sip.vzw.net ?

4. If you believe that no one will dial user names, and that it would all be in an address book, then whether it's numbers or user names, it doesn't really matter now, does it? And incidentally, I can bet you make several sporadic phone calls to people you don't want in your address book.

Finally,
old-me: What's your contact no.?
old-you: 3011563865
old-me: I'm sorry, that's 301-15-what ?
old-you: 301-15-..6...3...8..6...5
old-me: thanks

Next generation conversation:
me: what's your contact ?
you: suzie281264@m-world.att.net
me: Is that suzie with a 'z' or an 's' ?
you: 'z'
me: Is that suzie 81264?
you: No, 281264. It's my birthday - that was the best id I could get that was available.
me: I am sorry, I couldn't understand your accent following the @, it is 'aim' ?
you: No, the letter 'aym'
me: underscore or dash ?
you: What's that ?
me: I mean, the symbol after the @ and before 'world'. What is it ?
you: Oh ok, it is a hyphen. What's a dash ?
me: Never mind
me: great suzie281264@m-world.attnet
you: no, 'att DOT net'
me: cr*p. Do you have a phone number ?

Friday, October 6, 2006

It's not VoIP, AJAX, Web 2.0 - it's SaaS (Software as a Service)


The topic itself is not that new. Anyone who is anyone has at some point posted at least some sort of ramble about how AJAX is either a cure for cancer, or how it has been tried in the past and has failed (and will fail) miserably. Amidst all the hoopla and badly thought-out articles, I read one here which I believe the author put real thought into. I liked it.

In the past year, I’ve been spending a lot of time on how to effectively merge the ‘web 2.0’ world (to re-use a much used term) into the world I think I know relatively well, the VoIP world.

Here is the problem: most of the people busy adding “2.0”, “3.0” etc. to marketing terms don’t spend much time getting their arms around what the model of entry/execution/exit really is, and focus only on the technologies.

I remember a conversation with a friend a few months ago, where in a moment of excitement, he exclaimed “The client is interested in new technologies such as AJAX, google maps, presence, IM, RSS, voip and presence. Can we slap together something that shows a group of people located on google maps, including presence tags, throwing out special RSS feeds, talking and chatting – and all of this happening simultaneously ?”

This cartoon, from Randy Glasbergen, depicts the situation exactly.


Copyright notice: From "Today's Cartoon by Randy Glasbergen", posted with special permission. For many more cartoons, please visit Randy's site @ www.glasbergen.com

News flash – slapping individually successful services together is more often a recipe for immense failure rather than fabulous vision. And that is exactly where this entire “Web 2.0” community is headed. Very few actually ask some key questions:

  • How is it different from what happens today ?

  • Why would people use it ?

  • How do I beat existing competition ?

  • How do I keep ahead of future competition ?

  • What is my revenue model ?

  • Who is my target market ?

A colleague of mine recently commented, "You know, this entire web 2.0 evolution, if done correctly, could have the same impact as the evolution from books to movies"

So back to what I am doing these days: I’ve been working on interesting models of how all these diverse technologies could be applied to the VoIP world deployed today. I certainly cannot talk about specifics, since it's company confidential, but the reason I bring it up is that I’ve validated it with several key players in the media and telecom industry, and they all agree with this vision. And this makes me believe that there is a future in this space - but only if it is treated holistically.

Looking at it from the AJAX level is like trying to figure out if your car is a keeper by looking at just the quality of its air-conditioning. AJAX is not the engine here; it's a useful presentation layer. The engine is the business model - that is what ties everything together. For example, a good buddy of mine had a useful insight when talking about the relevance of AJAX: "How does it matter to you if AJAX succeeds or not? Tomorrow, Flash could get more aggressive in technology and marketing. So why tie your business model to a presentation layer technology?" How true. When you think about the end-to-end model, AJAX becomes less relevant.

Understanding the model – SAAS (Software as a Service)
First things first – AJAX is a technology (well, a collection of technologies, really). It is NOT a business model. Web 2.0 is a marketing mechanism, not a business model. The business model that will govern whether AJAX and related technologies succeed is called ‘Software As A Service’ (SAAS).

Yes, SAAS has been around for a long time. It is somewhat similar to the ASP model in terms of ‘being hosted’, but there are significant differences. Yes, it’s the same four-letter acronym that Google is ultimately betting on. It’s the same four-letter acronym that Ray Ozzie is trying to implement at Microsoft. It’s the same four-letter acronym behind Amazon’s EC2 and S3 initiatives. It is the same four-letter acronym that Salesforce is all about.

Simply put, it is a shift in thinking. Instead of selling ‘software’ as a chunk (the Microsoft desktop OS model), you sell ‘what it does’ and customers ‘pay as they use it’. This has several ramifications in terms of cost of deployment, potential for better content aggregation, and overall reduced cost for both the producer and the consumer. The cumulative costs of SaaS and perpetual licensing typically converge after about three years - i.e. perpetual theoretically becomes cheaper - but only assuming no new software or functionality upgrades, which is not realistic. I’m not going to explain this further – do a Google search for SAAS and you can read about its benefits.
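The claimed convergence can be sketched with toy numbers. All figures below are invented for illustration; the only thing taken from the text is the shape of the curves, with a crossover around year three or four:

```python
# Hypothetical SaaS-vs-perpetual break-even sketch. The subscription
# price, license price, and maintenance rate are invented numbers.

def cumulative_costs(years, saas_per_year=40_000,
                     perpetual_license=90_000, maintenance_rate=0.18):
    """Return per-year cumulative totals for both pricing models."""
    saas, perp = [], []
    for y in range(1, years + 1):
        saas.append(saas_per_year * y)
        # perpetual: one-time license plus annual maintenance fees
        perp.append(perpetual_license * (1 + maintenance_rate * y))
    return saas, perp

saas, perp = cumulative_costs(5)
for y, (s, p) in enumerate(zip(saas, perp), start=1):
    print(f"year {y}: SaaS ${s:,} vs perpetual ${p:,.0f}")
```

With these assumed numbers, SaaS is cheaper for the first three years and perpetual wins from year four onward - and, as the paragraph notes, any mid-life upgrade fee on the perpetual side pushes that crossover further out.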

SAAS has existed before. What’s different now ?

  • Broadband deployment rates have increased significantly since 2001. More than 60% of the US internet community is broadband enabled, and worldwide broadband adoption is growing at roughly 210% annually. In other words, the infrastructure is now capable of handling SAAS (sorry, I don't remember the growth numbers exactly - refer to my earlier article on broadband evolution, where I researched and reported the exact figures)

  • Technologies such as AJAX have proved to be an effective presentation layer for the SAAS model, where desktop powered features can be offered on the net. The browser-server experience has vastly improved due to these improved technologies (you no longer need a browser refresh to get a single byte of data)

  • Applications have been rolled out by technology pioneers such as Google, Writely, Zoho and others. I have always maintained that those who prove a technology works are not necessarily the ones who succeed; those who watch these folks plough in the money, and then step in when the technology is mature, are usually the ones who gain. These applications, live in the market today, prove that the technology is ready - it’s the business model that is needed. So when people say Gmail has 25% of the subscriber base of Hotmail and is therefore no biggie, I don’t think they get the picture.

Understanding the business – SAAS (Software as a Service)
Before we understand what implications SAAS has, let us first try and figure out:
a) What is the value chain ?
b) Who are the players ?
c) What are the risks ?
d) For each step in the value chain, what are the potential revenue models ?

And then, ask, “Where can I play?”

So here is a diagram that I drew up, to give us a structure we can put our arms around before debating whether AJAX has the potential to overthrow mankind and preside in the Oval Office. I am not going to comment on this diagram right away - a lot of it is self-explanatory. I would like YOU to comment on it, however.

How do you interpret what you see below ?

(Please click on the image for a larger version. After you click, your browser may still be resizing the image, making the text look jagged; either disable that feature, or download the image and view it directly.)

Monday, October 2, 2006

SIP and Skype, P2P and Supernodes - what a melee





Gaaah. I happened to bump into Slashdot today and read this:
“…is that when you install the Skype client, it will drain system resources by running as a supernode from time to time”
and concluding that the author would sooner use SIP than Skype.

The implication being that with SIP, you are free from such issues !

Let’s get the facts straight:

  • P2P is an architecture, SIP is a protocol. Skype is a product, and Skype uses its own proprietary protocol (you can call it ‘Skype Protocol’ if you want).
  • A SuperNode system forms a fundamental design choice of many existing P2P networks, including Skype, Kazaa, Grokster and several other massively scaled networks.
  • Today, most SIP deployments use a centralized architecture. In other words, all your SIP phones register with some central server, and your calls are routed through some central proxy. If they fail, you cannot reach other users, or will have to attempt to call them directly (not as simple as it sounds, because the person sitting in your buddy list as sippal@myisp.com may actually be user457@001dxp.bbcppcspool.myisp.com, and this complex ID is mapped to the simpler one by the proxy/location server that went down).
  • There is currently frantic work going on on the p2psip mailing list, attempting to solve the following issues:
      • How does one map a SIP flow over a P2P network ?
      • Does it make sense to deploy a SIP overlay over a P2P network using the common architectural principles of existing P2P networks (like DHT, for instance), or,
      • Does it make sense to deploy a P2P network over the SIP protocol ?
In other words, if you haven’t figured it out yet, supernodes have nothing to do with SIP. If you still haven't got it: a P2P SIP network could, and most likely would, also use supernodes. One good way to avoid using supernodes is, um, say, uh, to use what we know as a centralized network.
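The centralized mapping mentioned above - a friendly ID like sippal@myisp.com bound to a long, routable contact by a registrar/location server - can be sketched as a trivial in-memory location service (the names and addresses are illustrative, and a real registrar also handles expiry, authentication, and multiple bindings per user):

```python
# Toy sketch of a SIP registrar / location service: a public
# address-of-record (AOR) is bound to the real contact at REGISTER
# time, and the proxy looks it up for each call. If this central
# store is down, the friendly AOR cannot be resolved.

class LocationService:
    def __init__(self):
        self._bindings = {}

    def register(self, aor, contact):
        """Store the AOR -> contact binding (what a SIP REGISTER does)."""
        self._bindings[aor] = contact

    def lookup(self, aor):
        """Proxy-side lookup; returns None for unknown users."""
        return self._bindings.get(aor)

ls = LocationService()
ls.register("sip:sippal@myisp.com",
            "sip:user457@001dxp.bbcppcspool.myisp.com")
print(ls.lookup("sip:sippal@myisp.com"))
```

In a P2P SIP overlay, this same binding would live in the distributed network (e.g. in a DHT) instead of on one central server - which is exactly what the p2psip work is about.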

Some Basics
Supernodes are often cited as a necessary evil for a large-scale P2P network. Let us first spend a bit of time understanding how P2P networks differ from centralized networks.

The biggest difference is that in a pure P2P network, there is no well-known or centralized node that is almost always available. P2P networks are plagued by problems of churn (a node may be in the network at one moment and disappear the next because the user logs out) and of location & routing (how do you locate a user, Joe, if you only know his name, but not how to get to him?).

To address such elementary issues, which do not exist in centralized networks, several implementations use very effective algorithms such as DHTs (Distributed Hash Tables), which establish a mapping between a unique encoded key and the content to be retrieved, such that the key can be used as a primary identifier to locate the data. A client trying to locate that data generates the key, and the key traverses the P2P network, using a defined protocol, until the node(s) storing data for that key respond. Of course, this is an oversimplification; there have been several improvements to optimize DHT routing, including alternate architectural suggestions for routing and location in P2P networks.
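The key-to-node mapping idea can be shown in a few lines. This is a toy consistent-hashing sketch, not any particular DHT: real designs such as Chord or Kademlia add per-node routing tables, replication, and churn handling on top of this basic idea.

```python
import hashlib
from bisect import bisect_right

# Toy DHT lookup: node IDs and data keys are hashed onto one ring,
# and a key is stored on / retrieved from the first node whose ID
# follows the key's position on the ring (consistent hashing).

def ring_id(value, bits=32):
    """Hash a string to a position on a 2**bits ring."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2 ** bits)

class ToyDHT:
    def __init__(self, nodes):
        # sorted (position, node-name) pairs forming the ring
        self.ring = sorted((ring_id(n), n) for n in nodes)

    def responsible_node(self, key):
        pos = ring_id(key)
        points = [p for p, _ in self.ring]
        i = bisect_right(points, pos) % len(self.ring)  # wrap around
        return self.ring[i][1]

dht = ToyDHT(["nodeA", "nodeB", "nodeC", "nodeD"])
print(dht.responsible_node("user:joe"))  # same key always maps to the same node
```

The point is that any node can compute where a key lives from the hash alone, with no central registrar - which is exactly what makes the approach attractive under churn.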

Now enter supernodes. Why do we need them ?
Well, let’s put it this way. Networks are not made the same around the world. At any point in time, there will be users on high-speed cable, medium-speed DSL, or low-speed dial-up, and they all want to communicate, and locate each other, effectively. Supernodes are intermediate nodes, between the source and the destination, which can provide additional services to other clients. Here are some things a supernode could do:
  • If User A wants to reach User B, and a SuperNode (SN) knows a shorter way to reach B, it may act as a router, routing User A directly to B and avoiding the message having to hop across multiple networks
  • If User A is behind a firewall and wants to talk to User B, but needs a media relay server outside its firewall to route media through, the SN, if it has enough spare CPU cycles, may agree to serve as A’s external relay server for the session. During the session, if the SN's CPU gets busy (say the owner of the SN decides to make a call), it can drop the relay role and A will look for another
  • If User A tries to reach User B and finds B offline, the SN may agree to act as a ‘voice mail’ service for A: receive the voice mail, send it off to a central voice mail server, and delete its copy. Of course, the voice mail is typically encrypted with a key the SN does not know, so it is, for the most part, storing a bunch of encrypted bits for A.
The catch is that the “SN” could be you. It could be any client in the P2P network that has spare cycles to participate in other activities, not related to you, which make the network more efficient.
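A client's decision to volunteer for those roles is typically a local heuristic. The sketch below is hypothetical - the thresholds are invented for illustration, and real clients (Skype, Kazaa) keep their actual criteria proprietary - but it captures the shape of the trade-off, including dropping the relay role when the owner needs the machine:

```python
# Invented heuristic for supernode promotion/demotion. Thresholds are
# illustrative assumptions, not from any real client.

def should_act_as_supernode(publicly_reachable, cpu_load,
                            uplink_kbps, uptime_hours):
    return (publicly_reachable        # not stuck behind NAT/firewall
            and cpu_load < 0.50       # spare cycles to donate
            and uplink_kbps >= 512    # enough bandwidth to relay media
            and uptime_hours >= 2)    # low churn risk

def should_drop_relay_role(cpu_load):
    # e.g. the supernode's owner starts a call of their own
    return cpu_load > 0.80

print(should_act_as_supernode(True, 0.2, 1024, 6))   # True
print(should_act_as_supernode(False, 0.2, 1024, 6))  # False: behind NAT
```

Who controls these thresholds - the user or the network - is precisely the implementation controversy discussed below.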

But that’s the challenge of a P2P architecture. People choose a P2P network because it is scalable and fault tolerant. In principle, there is no centralization: each node can take over a function required to keep the network running, and a discovery protocol is defined to find such ad-hoc nodes. However, in gaining distribution and fault tolerance, P2P networks must deal with efficiency (how fast is the network?) and with the effects of churn (if there is no accountability for nodes, and you cannot guarantee their availability in the network, how do you provide services like, say, voice mail - who accepts the voicemail?).

If we don’t have supernodes, the network will need to re-discover itself each time and will not be as efficient as users want it to be. If we don’t have supernodes, the challenges of navigating networks increase (the firewall was one example above). If we don’t have supernodes - that is, if a client refuses to behave as anything more than a client - how do we provide services which, for example, need to kick in when some client is not online ?

The problem with SuperNodes is not in architecture, its in….
…Implementation! The problem with supernodes is that some networks do not allow you to specify when you choose to be a supernode. And not without reason: if each participant decides to switch off supernode functionality, network performance degrades substantially. On the other hand, if the network decides the supernode threshold for you, then you have no control over your computer’s resources. You need to trust the network.

An implementation can take this to its limits – you may find your CPU choked at times – which is typically the result of a bad threshold implementation.

Unfortunately, the industry is now calling supernodes malware! I’ve seen lawsuit applications, and I’ve seen marketing statements from new VoIP companies that say ‘We don’t do supernodes’ as if it were evil.

You can’t have your cake
and eat it too. Supernodes are an important piece of the puzzle of well-performing P2P networks. If you take them away, you may as well go back to the centralized model of operation, and keep 'P2P' in your marketing collateral for when your clients can directly call each other's IP in a peer-to-peer fashion. Hooray !

Friday, September 22, 2006

A-IMS from Verizon and Buddies: A Good thing as I see it



Ever since ‘A-IMS’ was announced by Verizon some months ago, blogs and columns have mushroomed all around, with comments ranging from ‘Will this set back IMS deployments for several years?’ to ‘I just completed reading the specifications and it looks interesting’.



Here is how I see it: Think of A-IMS as a deployable product packaging of the standards that 3GPP/3GPP2 have been creating. Read it again: A-IMS as a deployable product packaging. In other words, Verizon (and buddies) have looked at existing specifications, asked “For it to be successfully deployed in MY NETWORK, what do we need?”, and proceeded to fill in the ‘blanks’.

And this is a great thing. Left to themselves, standards always aim for utopia. In the meantime, vendors suffer deployment blues because certain ‘real’ problems are left open, to be addressed later. Most architects will agree that a live deployment only uses 30% of a utopian network design, and this is exactly why we always have vendor incompatibilities as standards evolve (ever worked with a Cisco IAD in the early SIP days?).

The nice thing about A-IMS is that, because it is vendor controlled and not a standards consortium, they are not forced to take the ‘most generic path’. For example, they have taken a firm stand and have detailed procedures on not only what a policy format should look like, but also what it should contain.

Not to be left behind, another set of operators/vendors has recently gotten together to form what they call NGMN, to take IMS towards realization along a path they think is correct. At first glance, this may give one the idea that it will result in architectural forks. Get real, buddy – even with 3GPP/3GPP2 as the only standards, different vendor OEM products have a tough time talking to each other. I remember doing end-to-end IMS deployment consulting for an operator – when we spoke to the vendors, each one pitched an “end-end” network fully comprising their own products (or partners'). So when I asked them “So are you fully standards compliant?”, the common answer was “Of course. But we can guarantee that only if you use our products end-end”. So there.

The reason I like A-IMS is that it was done with immediate deployment concerns in mind. Matters that have not yet been resolved in 3GPP have been tackled and given due priority, even if it means a solution that suits Verizon’s existing EVDO network. It’s a stake in the ground.

So without much ado, some of the major A-IMS ‘additions’ and my interpretations are:

Policy Manager – While 3GPP defines the interfaces (Go) and the envelope formats for policy, it does not outline when a policy kicks in, what SLA is to be adhered to, or what the corrective actions are. The A-IMS Policy Manager takes the standard 3G PDF and extends it with realizations.

Application Manager – A perfect example of product packaging. A-IMS has lumped all the control ‘-F’ functions (S/P/I-CSCF) together and called the result the Application Manager (AM). It’s an entity that controls session state. This is also why A-IMS says they have ‘simplified’ the network. Actually, that is quite untrue – they have simply shown a realization where 3G leaves things as logical functions. So in A-IMS speak, this one node routes, validates, filters, inspects and finally hands messages over to application servers.

Services Data Manager (SDM) – The SDM is the A-IMS version of the ‘universal data repository’. In many ways, it is like the proposed ‘GUP’ (Generic User Profile) extension of the HSS. In essence, it acts not only as a repository of data for standard HSS services, but also allows proprietary data to be stored in it via fixed interfaces accessible by 3rd-party application servers. This eases data management for the network.

Bearer Manager (BM) – This has been one of my biggest pain points with the state of standards today. As it stands, while control-plane policing is defined, its relation to the bearer plane is severely lacking. Specifically, in 3G, signaling and media for an invoked service travel on different paths (e.g. RTP through the GGSN, SIP through the CSCFs), so one needs to be able to specify policies that identify and correlate streams at the ‘business logic’ level, not just the ‘message level, via embedded identifiers’. A-IMS takes a step forward and specifies rules/associations for managing the entire service stream as a single entity. To purists, this means that A-IMS is stepping into defining ‘what a service could be’ – and I like it. I like to know what a service is, I like the concept of a ‘Service Identifier’, and I hate it when people compare services to a generic programming language, as far as deployment realities go.

Breaking multi-level authentication – One of the goals of 3G was to isolate layers from each other, so that 3G could work across all access networks. While this is architecturally great, it also induces performance problems if no layer can assume the functionality of another. This was sorely felt in security negotiations, with the RAN, IP-CAN, and IMS all performing their own authentication and security negotiations. A-IMS has put a stake in the ground, selecting the EAP framework, and has specified mechanisms by which one layer's keys can be used to ‘compute’ keys for other layers, decreasing latency and making it easier to perform ‘single sign on’ deployments (FYI, EAP is used in the WiMAX Network Reference Model as well).
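The general idea of deriving per-layer keys from a single authentication can be illustrated as follows. This is only a conceptual sketch: the label scheme and key sizes here are invented, while EAP and A-IMS define their own key hierarchies (e.g. the MSK/EMSK produced by the EAP method):

```python
import hashlib
import hmac

# Illustration only: expand one master key from the EAP exchange into
# independent per-layer keys, so each layer need not rerun a full
# authentication. Labels and sizes are assumptions, not A-IMS spec.

def derive_layer_key(master_key: bytes, layer_label: str) -> bytes:
    """Derive a 32-byte layer key via HMAC-SHA256 over a layer label."""
    return hmac.new(master_key, layer_label.encode(), hashlib.sha256).digest()

msk = b"\x01" * 32  # stand-in for the master key from EAP authentication
ran_key = derive_layer_key(msk, "RAN")
ims_key = derive_layer_key(msk, "IMS")
print(ran_key != ims_key)  # True: distinct keys per layer, one sign-on
```

The security property being bought here is that each layer gets its own key material, yet only one expensive authentication run is needed - which is the latency win the paragraph describes.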

The hype of ‘SIP and non-SIP applications’ – This is the most commonly quoted ‘enhancement’. In IMS, the AS only sees ‘ISC’ (SIP), and non-SIP to SIP conversion is done by adaptors. The problem is that no one really specified how the adaptations would be done – sort of a ‘fabulous goal – now tell us how to do this conversion too, please’. A-IMS instead introduces a “Services Broker”, which, along with the Application Manager and Policy Manager, can interface with both SIP and non-SIP interfaces directly. In addition, the “Services Broker” is also a feature interaction manager (much needed).

Platform security threats – One of the areas sorely missing from 3G is any recommendation on how to prevent platform-level attacks such as DDoS, DoS, man-in-the-middle and others. A-IMS again takes a stab at this and recommends a unified model that covers security in three dimensions - I think that’s the X.805 framework - mapping how threats (destruction, corruption, removal, disclosure, interruption) affect the security dimensions (access control, authentication, non-repudiation, data confidentiality, communication security, data integrity, availability, privacy) across the end user, control, bearer and management layers.


So all in all, I like A-IMS. I don’t care if it created a fork in 3G standards. Chances are that Verizon, Lucent, Motorola, Qualcomm and Nortel will push a lot of these ‘filled-in gaps’ into the standards, and this will fuel the standards further. Speaking of NGMN (from China Mobile, NTT DoCoMo, KPN, Vodafone, Orange, Sprint-Nextel etc.), I haven’t seen the details yet. But I hope they fill in more gaps as well.

Thursday, September 21, 2006

From My Heart To Yours




This is not a post about management or technology, but something of utmost importance to us technologists. Do you like solving big problems? Read on...


SAHC, an exciting non-profit, was started by the Bay Area's El Camino Hospital, South Asian physicians, specialists, and generous donors. I am pleased to let you know that the center is out of its pilot phase and is now open. There was a well-attended opening ceremony yesterday, with a who's-who of the South Asian community making their pitch for getting screened.

I want to do my part and share my experience with you. A few years back, on a plane ride to India, I read an interesting piece in India Today about preliminary research being done in Singapore, London, and Chicago (Dr. Enas Enas) on a genetic anomaly among South Asians that increased their chances of fatal heart attacks by 400%. Kaiser was also noticing an abnormal number of fatal heart attacks in the Indian community in the Bay Area.

I kept track of these events, learned of SAHC, and got screened a few months back, confirming a few early markers. Once this was confirmed, a case worker was assigned to me and the SAHC hooked up with my primary care physician. They also sponsored a free fitness instructor at the YMCA and assigned a nutritionist to work with Meera and me on diet choices. Thankfully, I can postpone getting on drugs for a little bit longer. Best of all, the service was all free, and Aetna picked up a significant portion of the advanced lipid tests. I spent $69 in total for such world-class service.

My long blog post is meant to convince each and every one of you South Asians to get screened at http://www.southasianheartcenter.org/. This epidemic is real and will likely affect you. It does not matter if you:

* Work out
* Are rich
* Are vegetarian
* Are thin
* Are stress free
* Have had no other complications
* Have borderline cholesterol readings
* Are a woman

Please make time to sign up and get tested. Look at all the positives you will get by simply signing up:

* You contribute to some very cutting-edge research that will save the lives of many of your friends and millions of South Asians. By 2010, India will have 60% of the world's CAD burden. The median age of a South Asian CAD victim is fast dropping to the late 30s/early 40s.

* You will be in control of events in the eventuality of a cardiac event or a stroke. You will be armed with all the relevant information. Your risk is already two times the US national average based on existing data. Your risk increases 4-8 times if you have adopted a western lifestyle, smoke, or drink.

* A majority of South Asians in the US are just beginning to enter the danger zone. 5% of all ER cardiac events in the Bay Area are due to South Asians. You could be next! Act now!

* With changed lifestyle choices, you will indirectly contribute to combating childhood obesity/diabetes in the community and give our children a better future!

Please do sign up.

Friday, August 25, 2006

Amazon's S3 and EC2 - classic application long tail




While there are a multitude of opinions on the web about what motivates Amazon to offer their S3 and recently released EC2 infrastructure, and whether they can hope to make any margins from the service, I think there is a bigger game at play.

The Amazon S3 and EC2 infrastructure offers a unique opportunity to small and medium-sized businesses. By almost commoditizing infrastructure, Amazon is effectively telling ISVs 'Use our network to offer anything to anyone' at scalability, performance and speed levels that rival the best infrastructures, like those hosted by Google.

Amazon started with S3, which offered affordable storage with carrier-class redundancy, followed by EC2, which offered affordable computing with massive redundancy and reliability. In other words, an effective cluster of computers ready to run your software, with huge storage for data - all over a high-performance network.

Amazon, in my view, has struck at the 'root': an affordable and powerful infrastructure that will let anyone deploy services such as email, VoD, conferencing and whatever else, without the performance bottlenecks of the 'open internet'. If Amazon had started this as an 'application' like mail, it would address only a niche market. Instead, Amazon is 'long-tailing' the applications - letting the software vendors decide the killer application and their target audience. Whatever is hosted, it's on Amazon's servers and network.

By making the infrastructure available before the applications, Amazon has effectively moved the ‘innovation’ part to the community. Compare this to companies such as Google, which launch several innovative applications and are, in parallel, building their own grid network, which may one day be released to the world for ‘fast and effective computing’.

I think Amazon got it right over Google this time. Commoditize infrastructure, long-tail the application hosting. Let the ISVs create millions of niche markets and bring in business for themselves.

Added bonus: imagine the amount of content that will incrementally be made available to Amazon for personalized services. After all, one of the most important requirements of personalized services is the availability of multiple content streams for users.

Wednesday, August 23, 2006

An engineer turned star salesman



Whoever said sales is done better by folks who wear Versace suits ?
It's all about passion. If you believe in it, you make others believe in it. If you try to hard-sell, you put off most people.

Enjoy this bit of passionate humor. Click on Play below (around 6 mins.)

 

Monday, July 3, 2006

Employee loaded costs



One of the things that always amazes me is how ill-informed employees are about their 'total cost to the company', often referred to as 'loaded cost'. Simply put, the pre-tax salary in your offer letter is only a part of what your employer pays for you. I find it pretty silly when an employee leaves one company for a 5% hike in base salary without calculating what other 'hidden' costs may not be paid by the new employer. To give you an idea, here is a sample breakdown of what constitutes your 'loaded cost' to your employer, assuming your salary is $100,000 (an easy number for computations). There are some approximations, but for the most part, this will give you a good idea of the costs involved.

This also gives employers an idea of how much they are really spending per employee. Many senior employees who are involved in budgeting and planning are often clueless about their real costs and take only 'paper salary' as their total cost, which is way off the mark.

Assumptions:
  • # of employees = 40
  • 2 VPs, 4 Directors, 3 sales (typically this is more, but let us take this model to compute S&G costs)
Employee loaded costs:
  • Base - $100,000
  • Timeoff/leave - $5,555.56 (assuming 15 days PTO)
  • Company Mandatory contribution to benefits: $4,000
  • trainings costs per employee per yr: $4,000
  • HR Costs (including recruitment): $1,500
  • Employer taxes on behalf of employee: $6,813 (FUTA, FICA, Medicare etc.)
  • Office space and general expenses (rent apportioned, stationery, phone, employee travel ,etc.) - $13,020
  • Company Benefits contribution - $28,000 (insurance taxes, medicare, health premiums)
  • Payroll related expenses - $10,000
  • S&G per employee - $42,237.50 (cost of sales)
Summing all of the above, the loaded cost is approximately $215,126 for an employee whose pre-tax income is $100,000.
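As a sanity check, the line items above can be summed directly (the figures are the post's own sample numbers):

```python
# The sample loaded-cost line items from the list above, for a
# $100,000 base salary at a hypothetical 40-person company.
loaded_cost = {
    "base_salary": 100_000.00,
    "time_off": 5_555.56,            # 15 days PTO
    "mandatory_benefits": 4_000.00,
    "training": 4_000.00,
    "hr_and_recruitment": 1_500.00,
    "employer_taxes": 6_813.00,      # FUTA, FICA, Medicare, etc.
    "office_and_general": 13_020.00, # rent, stationery, phone, travel
    "benefits_contribution": 28_000.00,
    "payroll_expenses": 10_000.00,
    "sales_and_general": 42_237.50,  # S&G per employee
}
total = sum(loaded_cost.values())
print(f"${total:,.2f}")  # roughly 2.15x the $100,000 base salary
```

In other words, the base salary is well under half the real per-head cost; a 5% base-salary hike at a new employer can easily be wiped out by weaker benefits elsewhere in this list.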

Wednesday, June 21, 2006

H.325 - Is a rehash necessary ?




At the 2006 ITU-T workshop, a presentation was made on the 'next' generation of protocols: 'H.325'!

The primary pitch for H.325:
a) It offers a 'centralized' model of operation, as opposed to SIP and H.323, which put intelligence in the device. SIP cannot do this (*cough* tell that to the SIP Centrex guys *cough* B2B)

Secondary pitches for H.325 :
b) Reduced complexity (haven't we heard that before?)
c) Rapid service creation (we have seen this on marketing slideware for years)
d) Better 'capability negotiation' (which does not need a new protocol - it's a behavioural change that can be adapted to any existing protocol)
e) "Truly" take advantage of IP networks (not sure what that really means)
f) Better NAT/FW traversal (enough already! We have come to a point where Skype's NAT traversal works everywhere. Why bother again?)
g) It says "SIP is largely equated to voice" (not at all, really)

And finally:

"H.325 was launched by ITU SG-16 to meet NGN requirements and overcome limitations of 'legacy' systems" - oh brother ! Join the queue please, we have 3GPP, 3GPP2, TISPAN and a bunch of other SDOs trying to 'solve the next generation' problem.

The interesting part is that H.325 is a 'new effort', with requirements to be confirmed by end-2007 and 2008 as the year for protocol definition.


We spent until 2000 battling over protocol choice for deployment (SIP vs. H.323). For better or worse, that battle was won by SIP. Today, we have 3GPP, TISPAN, IPTV, Cable and several other forums that have brought out architectures meeting the initial requirements of their respective networks. More importantly, OEMs and carriers have invested billions of dollars in getting those networks ready, with SIP support.

Is this really a time to launch YAAAPHWANH ? (Yet Another Attempt At Promoting H.323 With Another New H.325 ?)

I personally think this effort is too late. Some of the problems are real - but instead of dusting off a has-been protocol and adding bells to it, why not add the missing features to SIP ?

Wednesday, May 17, 2006

Free PSTN calling from Skype - marketing gimmick with a punch ?



On May 15 2006, I received an email from Skype (as millions more would also have) which said:

"Starting from today it doesn't matter if it's a Skype-to-Skype call or a call to landline or mobile phone - it's free as long as you're calling from within the US or Canada to US or Canadian phone number."
I must admit, I was ecstatic. Not because it would save me some marginal cash on my cell phone bill, but because this was what I considered to be the next "gutsy" step needed after Vonage.

I shot out an email to several of my colleagues and clients saying:
(This is a public email list with non-confidential information)

"For those who use Skype, I received an email (its on their website too) that Skype is now offering Skype-PSTN outbound calling free. This is not a trial or a 'special run' - but supposedly a permanent feature.

In other words, 'SkypeOut', which was a charged service, is now essentially free.

The catch, however, is that this is only free for US and Canada right now (local calls).
One way to look at it is that with the $4B Skype acquisition by eBay, they have a lot of money to try new things. Skype believes that by making basic calling free, they will attract more people to sign up and 'buy in' to value-added services (like SkypeIn and voice mail).

For the skeptics who have been through the bubble burst, this is reminiscent of the 'get market share first, money will follow' mantra that resulted in the demise of several hundred startups.


However, at another level, it takes a gutsy step like this to focus on the real 'value proposition' of VoIP. Is it 'cheaper voice calls' or is it 'value-added services' ? Skype seems to think the latter.

Gmail was an example of a 'next step' for email systems, where disk space was commoditized, throwing mud in the face of email providers that charged more for more space. Google then converted their email system into a massive ad-generated revenue scheme as well as a platform for 'value-added services' (like automatic UPS tracking for an email containing a UPS tracking number, an asynchronous UI for faster response times, Google Maps integration, etc.).

So whether this effort bombs for Skype or not, I think it will have an effect similar to what Vonage did to VoIP: perk up and scare the other providers enough to concentrate on value-added services.
If this effort bombs, I guess Skype will still have a few billion to lick its wounds on. But if it succeeds, and Skype can prove, as Vonage did for basic VoIP, that a combination of volumes, value-added services and targeted advertising (a perfect time for them to insert audio advertisements as replacements for ringtones; hey, if it's free, you will bear it) makes a workable business model, then it's the next step of action to all the talk and hype, folks!
I wish Skype the best of luck, but somehow, I think the 'second buyers', as they are called, profit the most. Either way, the landscape-is-a-changing."

Within a few hours, I noticed that this was a staged marketing effort by Skype, who changed their implicitly stated "indefinitely free" period to an explicit:

"Skype is a little program for making free calls within the US and Canada to all phones until the end of the year."
To be honest, I was a little cheesed. I dislike marketing gimmicks that don't project the complete picture in the first shot. I just think it is 'not cricket', as the Brits would say.

Even so, I think a six-month freebie effort by Skype to increase their dismal 20% North American share packs a punch in 'shaking up the charge for basic calling' concept. There is a big difference between charging $0.01 a minute and charging $0.00 a minute. The former associates a 'value' with the scheme, implying that the provider thinks there is money to be earned, maybe through volume. The latter de-values the scheme completely and states: "There is no money to be made in basic calling. Move on, and do something better to earn money."

And that is why I was initially very excited. It's not about the money. It's about a breakthrough in thinking. The six-month staged marketing cap disappointed me, but not completely. It is still a 'statement' that bears impact.

I did a bit of calculation on what it may cost Skype to offer this service, based on assumptions that may be incorrect, but it gives a general idea.


Number of Skype users: 100 million (source)
North American penetration: 20%
Number of Skype users actually active every day: 5 million (source: http://saunderslog.com/2006/05/02/reading-between-the-margins-vonage-vs-skype/)
Number of Skype users in US + Canada active every day: approx. 1 million
(Actually I think it is more: there are more broadband connections in the US, so the chunk of active Skype users should be larger. I am being conservative.)
Current promotion: it seems only US/Canada residents can call US/Canada. The rest of the world cannot call US/Canada for free.
Assumptions:
Discounted termination fee from VoIP/PSTN providers in bulk: $0.01 per minute
Duration of a call: 3 mins
# of PSTN calls per user per day: 3


% of US & Canada active users who adopt | Termination cost ONLY to Skype (excludes management/infrastructure cost)
10%                                     | $1.6m
40%                                     | $6.4m
60%                                     | $9.7m
80%                                     | $12m


Thursday, April 27, 2006

Broadband-IP



What is ‘Broadband’?

The definition of what 'Broadband IP' constitutes is fairly open to interpretation. While there are numerical definitions of what a 'Broadband' connection is, these numbers vary depending on the operator (and what it wants to advertise) and the nature of the service. As an example, if you are in an IM text chat session with a friend and are not exchanging rich media, does it really matter whether you are on a 44 kbps link or a 3 Mbps link? However, if you were in a Skype audio/video chat with a colleague, you would certainly feel the difference between a 44 kbps link (choppy voice/video) and a 3 Mbps link (smooth video and audio). Therefore, the definition of 'Broadband' over IP is closely tied to the nature of the service that is to be delivered.
Having said that, for the scope of this paper, let us assume that broadband is any data service that offers a data rate higher than 56 kbps. Practically, most wireline broadband providers start at a minimal 'broadband' rate of 128 kbps/256 kbps. While early versions of second-generation wireless technologies such as 1xRTT and 1xEV-DO offer lower data rates, it is a safe assumption that a majority of 'broadband' usage is above the 128 kbps data rate.

Growth of Broadband

The success of a service depends largely on two factors: a) the customer's need for the service and b) the quality of the service. In some cases, the first factor is so important that users are willing to accept degradation in quality (the second factor). The worldwide proliferation of cell phones, with dropped calls and jittery voice, is a good example of a service where the uniqueness of the offering (mobility) overshadowed the quality concerns. Having said that, cell phone coverage has significantly evolved over the years, since eventually consumers will expect 'acceptable' quality. Therefore, it is important to analyze the growth of broadband before we can discuss the relevance of the services on top of it. WebsiteOptimization has some interesting data on broadband growth, showing that in the USA in 2000, 'dial-up' users were 74% of the internet-connected population, while by 2006 this had dropped to 36% (in other words, broadband users are now 64% of the total internet population). In terms of the proliferation of the Internet across countries, InternetWorldStats reports that Internet users in the USA are currently 68.6% of the US population, while worldwide, Internet users are at 16% of the total world population, up from 5% in 2000.
The relevance of broadband penetration is a function of the internet penetration of the region in question as well as the market segment within that region. As an example, Asia's overall internet usage growth since 2000 is over 210%, the EU's is 180%, and North America's is 110% (source)
In short, this tells us three things:
  • It is obvious that different regions will have different growth numbers for internet usage (a function of the economy, among other factors). However, each region has a very high growth rate, which implies that internet penetration is increasing rapidly everywhere.
  • As internet penetration grows, the percentage of broadband users among Internet users is expected to grow substantially as well (as shown above by the individual statistics for the US as one regional example), both because broadband prices are coming down and because there are new innovations for delivering broadband to the last mile (more on this later).
  • Finally, from a service-deployment perspective, the growing penetration of broadband within this healthy worldwide internet growth means the 'last mile' (from the provider to the user) is capable of delivering media-rich services. After all, even if the core network is connected via high-speed fiber, a bandwidth-restricted connection to the user severely limits the usability of the internet for high-speed data services from the user's perspective.
Factors that changed the ‘Communication Landscape’ as we know it


The rapid growth and related excitement about leveraging the Internet for communications can be broadly attributed to some key technologies and market shifts in the past decade:

  • The penetration of the Internet around the world – discussed in detail in previous sections.
  • Voice over IP and Vonage – In the 1990s, using the Internet to deliver voice calls was a much-talked-about topic. (Actually, voice over IP as a concept is older, and there were several proprietary protocols that dealt with specific requirements, but it took until the late 90s for standardization momentum to pick up.) Competing protocols such as H.323 and SIP were introduced, both of which promised to utilize the internet to deliver global voice calling around the world. Using the internet, a de-regulated and distributed packet network, to deliver voice calls was exciting to everybody except the incumbent carriers, because it allowed new entrants to play in the communications space without heavy investment in infrastructure. Detecting this trend, incumbent carriers also reduced calling rates while trialing VoIP themselves. However, no one was really ready to deploy VoIP for consumers (there were several enterprise VoIP deployments, where QoS is less of an issue). That was until Vonage took a leap forward and introduced VoIP calling for the masses. For a low monthly rate, they offered a bundle of enhanced services that LECs charged a premium for. This effort had a worldwide ripple effect on two fronts. First, consumers and providers realized VoIP as a technology had the potential to work well and threaten to replace PSTN lines (admittedly, there are still issues with emergency calling and overall consistent quality, which are being addressed). Second, it was a clear message to the LECs that this was a disruptive technology that could potentially displace them from a multi-billion dollar market.
  • 3GPP – While Vonage and a slew of 'me too' providers penetrated the wireline world, demonstrating Broadband IP as a viable replacement for PSTN lines, there was still a large void in the 'mobile world'. The challenges of a mobile network are much greater than in fixed networks (QoS, roaming agreements, location tracking, emergency calling, latency, air-interface optimization and more). From a technology perspective it was fairly obvious that it was only a matter of time before Broadband IP somehow penetrated the mobile world and threatened the monopoly of wireless carriers. However, before that happened there needed to be significant investment in preparing the wireless network for the IP infusion. 3GPP took on the responsibility of defining an all-IP architecture (in stages) which would provide carriers a migration path from circuit-switched to packet-based networks, thereby letting them 'stay in the game', retain their customer base and fight the challenges posed by alternate technologies that threatened to bypass their networks (next point). Interestingly, 3G licenses were auctioned at astronomically high costs to carriers. Once those costs were sunk, unfortunately, the worldwide market dipped significantly during 2001-2003 and revived again at the end of 2004. By then, carriers had already invested too much in 3G licenses and it was (and is) in their interest to make it a reality soon. In 2006, there are already several 3GPP trials in the market and we expect to see all-IP 3G deployments by Q2 2007.
  • Disruptive technologies: WiFi, Skype, Peer2Peer and others – While all of this happened, several technology and market innovations drove consumers toward 'using the Internet for communication needs' faster than ever before. The first was WiFi: the industry rapidly made progress in WiFi and related technologies, letting the user access a high-bandwidth internet connection with limited mobility. The range and quality continued to improve with newer 802.11 amendments such as 802.11e and beyond. In addition, recognizing that new standards take years to deploy and stabilize, innovations by companies such as Tropos, Cisco and others resulted in new network topologies that increased the range of existing WiFi standards by creating a mesh of interworking routers hosted on city lamp-posts, providing a larger mobility range to users. At the same time, services such as Skype were introduced which provided excellent audio/video quality and easy connectivity with the PSTN, further driving consumers to adopt broadband IP. Finally, solutions such as Skype demonstrated that the Internet was ready to provide VoIP communications in a peer2peer fashion: you don't need to sign up with some 'central provider' to reach another user (as long as the other user is also on the Internet); the Internet was the 'provider'.
  • Google – Finally, this has to be said: Google happened. This was not just a demonstration of a 'search engine' company. It ushered in a new era of communication, where Google demonstrated how different streams of content (voice, video, data) could be effectively grouped together and personalized in such a way that the Internet suddenly became a repository of 'useful information' for each individual instead of 'raw data', all through a single interface. In addition, they also demonstrated en masse how technologies such as AJAX could be used to present attractive user interfaces that compete with the effectiveness of local desktop applications.

Finally, what are the applications of Broadband-IP ?


We talked about infrastructure and broadband penetration. We talked about disruptive technologies. So how do we tie this together end to end?


Imagine that you are on your desktop PC chatting with a friend. In addition to chatting, you are also sharing photographs from your flickr account from a recent trip the two of you took. You are looking at the photographs together as well as collaboratively editing them, adding notes about what you were doing in various photos. Of course, your 'chat' uses 'voice' (talking), 'video' (you have a web-cam) as well as 'data' (sharing textual notes). While conversing, you decide to drive down to your friend's house, so you click a button and the 'session' automatically gets transferred to your cell phone, using VoIP. However, your current network operator does not provide you with enough bandwidth to host a video call, so the session automatically drops the video stream when the call transfers to your cell phone. At the same time, you use Google Maps on your cell phone for exact driving directions to your friend's house. Your friend, in turn, can track your current location from home, because you have allowed him to view your location updates. That way, if you are lost, your friend knows where you are without you having to figure it out!


Of course, the scenario could go on and on, with combinations that enhance the definition of what we know communication to be today.
So really, as an answer to the applications of Broadband-IP: the scope is enormous, almost infinite, limited only by one's imagination and by fair business reasoning about which services make sense to deploy, which services a user would be willing to pay for, and which should be free with alternate revenue mechanisms (advertising, for example).


Crystal Ball

So what's the next wave? Well, really, 3G and Fixed Mobile Convergence are too new not to be considered the next wave. But beyond that, another area I see contributing significantly to the 'end-to-end Broadband IP' dream is Home Networking. In 2001, I did quite a bit of work with Telcordia, Columbia University and others to define extensions to SIP for home networking. Unfortunately, the economy tanked and the timing was just not right. But I am betting this field will be the next to mesh into a common protocol infrastructure in this decade. We shall see.