
Monday, October 16, 2006

Identity Based Encryption (IBE)

Lazy days are just perfect for me to catch up on reading. This Saturday, as I was browsing the Internet reading up on trends and technologies new (at least to me), I came across a recent I-D on a scheme called Identity Based Encryption (IBE). The premise and applicability of this technology seemed pretty interesting, so I read more here, here and in other places. This technology is currently being pioneered by a relatively new company called Voltage Security.
I don't claim to understand the complex mathematics, so I am going to restrict my comments to its applicability. Simply put, IBE is not a complete replacement for existing asymmetric cryptographic algorithms. It provides a mechanism by which an arbitrary string can be used by the 'sender' as the means to encrypt a message. Based on that identity string, the receiver can obtain a private key to decrypt it, as long as the receiver can satisfactorily prove to some 'Key Server' that it is the rightful owner of that arbitrary identity string. This eliminates the certificate exchanges that must take place before communication in traditional PKI schemes.

This makes more sense when we apply a deployment model to it. Consider, for example, two parties: userA and userB.

In the current mechanism of PKI based security, the following happens:

  1. UserA contacts a key server to obtain the certificate of userB. Let us assume that the key server for userB's domain resides at some well-known address
  2. UserA then needs to compare the certificate against a revocation list, to ensure that the certificate has not been revoked for some reason
  3. UserA should also check whether this certificate has been signed as authentic by some central authority (say, Verisign)
  4. UserA then extracts userB's public key from the certificate, encrypts the message, and sends it off
  5. UserB, assuming that it has its private key, is now able to decrypt the message
  6. (If userB did not have his private key, or it needed to be refreshed, he would securely contact his domain's key server to obtain it)
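The certificate checks in steps 1-4 can be sketched in a few lines. This is a toy stand-in only: real PKI uses RSA/ECDSA certificate chains, while here an HMAC under a secret known only to the "CA" plays the role of the CA signature, and a plain set plays the role of the revocation list. All names and values are invented for illustration.

```python
import hmac
import hashlib

CA_SECRET = b"toy-ca-secret"     # held only by the (toy) certificate authority
REVOKED = {"serial-0042"}        # toy certificate revocation list (CRL)

def sign_cert(identity, public_key, serial):
    """CA issues a certificate binding an identity to a public key."""
    body = f"{serial}|{identity}|{public_key}".encode()
    sig = hmac.new(CA_SECRET, body, hashlib.sha256).hexdigest()
    return {"serial": serial, "identity": identity,
            "public_key": public_key, "sig": sig}

def validate_cert(cert):
    """What userA must do before trusting userB's public key."""
    # Step 2: check the revocation list
    if cert["serial"] in REVOKED:
        return False
    # Step 3: verify the CA's signature over the certificate body
    body = f"{cert['serial']}|{cert['identity']}|{cert['public_key']}".encode()
    expected = hmac.new(CA_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["sig"])

cert = sign_cert("userB@example.com", "pubkey-bytes", "serial-0007")
assert validate_cert(cert)                                       # good cert
assert not validate_cert(sign_cert("x@x", "k", "serial-0042"))   # revoked
```

Even in toy form, notice that the sender carries the whole burden: fetch, check revocation, verify the signing chain, and only then extract the key.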
There are several issues with this approach:
  1. The process of certificate management and verification is expensive for userA
  2. For this to work, userB must have had a public certificate created in the first place, or userA cannot even begin secure communication with userB
  3. The mechanism of directly distributing public keys (the long string of digits you often see in mails and on sites that say 'My Public RSA Key is below:') binds the key statically to the associated identity (this will become clear when I talk about the advantages of IBE)
With IBE, steps 1-4 are greatly reduced. Here is what happens with IBE:
  1. UserA wants to communicate securely with userB. First, userA obtains what is known as the Master Public key (the public parameters) for userB's domain (remember, we are assuming a deployment model where each master domain manages its own security, and hence each primary domain will have its own unique master key. Nothing stops multiple domains from using some central master key server, however)
  2. Next, userA uses userB's identity string as the public key and encrypts the message with it, together with the received master public key. What this means is that userB receives an encrypted S/MIME message with the From, To and other routing headers intact, but a garbled text body
  3. UserB now contacts its domain's key server and performs a security exchange proving that it is the rightful owner of the identity. Once satisfied, the key server provides userB with its private key, which userB can then use to decrypt
  4. Once userB has received its private key, it does not need to contact the domain's key server each time - it can continue using the same key henceforth, based on the expiry and policies set by that key server (this is the same as PKI). In other words, IBE essentially provides a mapping between an arbitrary string and the private key that will eventually be used to decrypt the message.
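The key-server side of the steps above can be sketched as follows. Big caveat: this is NOT real IBE math. Real IBE uses bilinear pairings so that the *sender* can encrypt using only the identity string plus the domain's public parameters; the toy below only models the server-side idea that the private key for any identity is derived on demand from one master secret, with nothing stored per user. All names are invented.

```python
import hmac
import hashlib

# Held only by the domain's key server (the "master private key").
MASTER_SECRET = b"domain-master-secret"

def extract_private_key(identity: str) -> bytes:
    """Derive the private key for an identity string on demand.

    Deterministic: the same identity always yields the same key, so the
    server keeps no per-user database - it is essentially stateless.
    """
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

k1 = extract_private_key("userB@vzw.example")
k2 = extract_private_key("userB@vzw.example")
k3 = extract_private_key("sip-dev-list@avaya.example")
assert k1 == k2    # stateless re-derivation, no stored state needed
assert k1 != k3    # every identity string maps to its own private key
```

This derive-on-demand property is exactly why the key server scales so well, and why ad-hoc identity strings (a list address, a date-scoped address) need no pre-provisioning.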
This has some very interesting ramifications:
  1. userA can now send secure emails to userB without the problem of first obtaining userB's public key
  2. The computational power required of userA is reduced (think cell phones and battery consumption)
  3. userB can choose to set up its private key after receiving a secure message from userA
  4. Since the outgoing encryption is based on a string, ad-hoc policies are very simple to implement without the cost of re-issuing/revoking certificates
  5. Since keys are generated on demand, the key server is essentially stateless, which lends itself to better scaling of the key server (thanks to a person in the 'know' who read this and reminded me of the point)
Consider some use-cases:
  1. Assume that a corporation such as Avaya has deployed this scheme. Then an email encrypted with a company-wide identity string can generate a private key that applies to all Avaya employees. Similarly, an Avaya employee could send an ad-hoc encrypted email to a single list identity covering its entire SIP development organization. These strings, and their relation to an appropriate private key, are therefore dynamic. You don't need to pre-create dozens of certificates for such relationships. If a key server for a domain does not want to honor a particular identity string, it can reject it.
Finally, since the mathematical foundation allows association with arbitrary strings, each domain can set its own key generation rules. This brings me to my last interesting read, "Fuzzy IBE". In this approach, the authors extend the scheme to allow for 'fuzziness of accuracy'. When userB communicates with its domain's key server to prove who he is, instead of requiring exactness, the server and the client negotiate a set of attributes which defines B. The server can choose to grant userB its private key even if not all attributes match exactly. The degree of error tolerance, however, is key, and the paper discusses algorithms to securely prove identity given a particular tolerance.

Why is this useful? Consider, for example, the new phones being launched with voice recognition or biometric scans. Such identity proofs are a combination of multiple attributes, and there is no guarantee that they are all the same all the time. An iris scan could go astray if you just got punched in the eye by the boyfriend of a girl you were trying to warm up to. Or your voice recognition identity may go astray if you happened to be partying all night before, screaming 'Who Let The Dogs Out'.

So all in all, IBE offers a very convenient alternative to standard certificate mechanisms that will hopefully help domain-based security systems by greatly reducing the pains that plague the certificate community. In addition, being able to map a URI (identity) to an encryption mechanism should greatly help its deployment in the VoIP space as well.
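The Fuzzy-IBE matching rule can be modeled in a couple of lines: grant the key when the presented attributes overlap the enrolled identity in at least d places. This only models the threshold rule - the actual scheme enforces it cryptographically (secret sharing over pairings), which is not reproduced here, and the attribute values below are invented.

```python
def fuzzy_match(enrolled: set, presented: set, d: int) -> bool:
    """Grant decryption if at least d attributes match the enrolled identity."""
    return len(enrolled & presented) >= d

enrolled  = {"iris:7f3a", "voice:c21b", "pin:9911", "face:a0d4"}
# The black eye knocks out the iris scan, but three attributes still match:
presented = {"iris:????", "voice:c21b", "pin:9911", "face:a0d4"}

assert fuzzy_match(enrolled, presented, d=3)       # tolerance of one error
assert not fuzzy_match(enrolled, presented, d=4)   # exact match required
```

The whole design question is picking d: too low and an impostor with a few stolen attributes gets a key; too high and a swollen eye locks you out.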

Friday, October 13, 2006

Conveniences of the future

Okay, not a technology post, really. Just some light-hearted cynicism. We haven't had one of those posts for a while now. Life is not all about technology, ya know ?

Yes, yes, I know: with the advent of the all-pervasive IP pipe, you are going to be able to wave a hand and discover your friend's contact from Google with its advanced mind-reader engine. You will be able to click on a webpage and call your friend (let's not worry about minor details like how your friend's contact would be on the web in the first place, and whether he really wants it there or not). You will be able to discover his presence and call him when he is free, and all that good stuff. Of course, there is IM as well, and you could just IM me your contact. Great stuff!

But in the meantime, let us assume that the vast majority of users, who still use a phone as we know it, start getting used to this new world... After all, why blame them? Most of the devices I see today have a dialpad on them, even if it happens to be a soft-phone. So till UI innovations come in, I'd expect people's usability patterns to remain the same for a while.

Since the dawn of VoIP, people have been constantly saying that with VoIP telephony, you can now dial by user names, and it makes life so much simpler than remembering a number.

1. Have you ever tried typing a full user@domain address on your phone vs. 3101457865?

2. Do you really believe that when this form of identity becomes popular, you will actually get an easy-to-remember address like 'billybob'? I'd bet it would be more like 'billybob_0012'. If you work for a Tier 1 company with 10,000+ employees, you probably know that already, just from looking at your email. In this case, we are talking about subscriber bases 10x - 100x that size.

3. Do you really think people will remember who is who, when everyone's address is a near-identical user@domain string?

4. If you believe that no one will dial user names, and that it would all be in an address book, then whether it's numbers or user names, it doesn't really matter now, does it? And incidentally, I can bet you make several sporadic phone calls to people you don't want in your address book.

old-me: What's your contact no.?
old-you: 3011563865
old-me: I'm sorry, that's 301-15-what?
old-you: 301-15-..6...3...8..6...5
old-me: thanks

Next generation conversation:
me: what's your contact ?
you: suzie281264 at m-world dot att dot net
me: Is that suzie with a 'z' or an 's' ?
you: 'z'
me: Is that suzie 81264?
you: No, 281264. It's my birthday - that was the best ID I could get that was available.
me: I am sorry, I couldn't understand your accent following the @. Is it 'aim'?
you: No, the letter 'aym'
me: underscore or dash ?
you: What's that ?
me: I mean, the symbol after the @ and before 'world'. What is it ?
you: Oh ok, it is a hyphen. What's a dash ?
me: Never mind
me: great suzie281264@m-world.attnet
you: no, 'att DOT net'
me: cr*p. Do you have a phone number ?

Friday, October 6, 2006

It's not VoIP, AJAX, Web 2.0 - it's SAAS (Software as a Service)

The topic itself is not that new. Anyone who is anyone has in the past posted at least some sort of ramble about how either AJAX is a cure for cancer, or how it has been tried in the past and has failed, and will fail, miserably. Amidst all the hoopla and badly thought-out articles, I read one here which I believe the author put thought into. I liked it.

In the past year, I’ve been spending a lot of time on how to effectively merge the ‘web 2.0’ world (to re-use a much used term) into the world I think I know relatively well, the VoIP world.

Here is the problem: most of the people busy adding “2.0”, “3.0” etc. to marketing terms don’t spend too much time putting their arms around what the model of entry/execution/exit really is and focus only on the technologies.

I remember a conversation with a friend a few months ago, where in a moment of excitement he exclaimed, “The client is interested in new technologies such as AJAX, google maps, presence, IM, RSS and VoIP. Can we slap together something that shows a group of people located on google maps, including presence tags, throwing out special RSS feeds, talking and chatting – and all of this happening simultaneously?”

This cartoon, from Randy Glasbergen, depicts the situation exactly.

Copyright notice: From "Today's Cartoon by Randy Glasbergen", posted with special permission. For many more cartoons, please visit Randy's site @

News flash – slapping individually successful services together is more often a recipe for immense failure than for fabulous vision. And that is exactly where much of this “Web 2.0” community is headed. Very few actually ask some key questions:

  • How is it different from what happens today ?

  • Why would people use it ?

  • How do I beat existing competition ?

  • How do I keep ahead of future competition ?

  • What is my revenue model ?

  • Who is my target market ?

A colleague of mine recently commented, "You know, this entire web 2.0 evolution, if done correctly, could have the same impact as the evolution from books to movies"

So back to what I am doing these days: I’ve been working on interesting models of how all these diverse technologies could be applied to the VoIP world deployed today. I certainly cannot talk about what it is, since it’s company confidential, but the reason I bring it up is that I’ve validated it with several key players in the media and telecom industry and they all agree with this vision. And this makes me believe that there is a future in this space – but only if treated holistically.

Looking at it from the AJAX level is like trying to figure out if your car is a keeper by looking at just the quality of its air-conditioning. AJAX is not the engine here. It’s a useful presentation layer. The engine is the business model – that is what ties everything together. For example, a good buddy of mine had a useful insight when talking about the relevance of AJAX: “How does it matter to you if AJAX succeeds or not? Tomorrow, Flash could get more aggressive in technology and marketing. So why tie your business model to a presentation layer technology?” How true. When you think about the end-to-end model, AJAX becomes less relevant.

Understanding the model – SAAS (Software as a Service)
First things first – AJAX is a technology (well, a collection of them, really). It is NOT a business model. Web 2.0 is a marketing mechanism, not a business model. The business model which will govern whether AJAX and related technologies succeed is called ‘Software As A Service’ (SAAS).

Yes, SAAS has been around for a long time. It is somewhat similar to the ASP model in terms of ‘being hosted’, but there are significant differences. Yes, it’s the same four-letter acronym which Google is ultimately betting on. It’s the same four-letter acronym which Ray Ozzie is trying to implement in Microsoft. It’s the same four-letter acronym behind Amazon’s EC2 and S3 initiatives. It is the same four-letter acronym that Salesforce is all about.

Simply put, it is a shift in thinking. Instead of selling ‘software’ as a chunk (the Microsoft desktop OS model), you sell ‘what it does’ and customers ‘pay as they use it’. This has several ramifications in terms of cost of deployment, potential for better content aggregation and overall reduced cost for both the producer and the consumer. The usual pricing models of SaaS and perpetual licensing typically converge after about 3 years – i.e. perpetual theoretically becomes cheaper – but only assuming no new software or functionality upgrades, which is not realistic. I’m not going to explain this further – do a Google search for SAAS and you can read about its benefits.
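The 3-year convergence claim is just arithmetic, and a tiny sketch makes it concrete. The prices below are made up purely for illustration; the point is only that the break-even month is the up-front cost divided by the subscription fee - and that the comparison ignores upgrade costs, which is rarely realistic.

```python
# Hypothetical prices, chosen only to illustrate the break-even arithmetic.
PERPETUAL = 3600       # one-time perpetual license
SAAS_MONTHLY = 100     # monthly subscription

def breakeven_months(perpetual: int, monthly: int) -> int:
    """Month after which cumulative subscription fees exceed the license cost."""
    months = 0
    while months * monthly < perpetual:
        months += 1
    return months

assert breakeven_months(PERPETUAL, SAAS_MONTHLY) == 36   # ~3 years, as above
```

Fold in even one paid upgrade cycle on the perpetual side and the crossover point moves out further - which is the author's point about the comparison being unrealistic.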

SAAS has existed before. What’s different now ?

  • Broadband deployment rates have increased significantly since 2001. More than 60% of the US Internet community is broadband-enabled, and worldwide broadband adoption is growing at roughly 210% annually. In other words, the infrastructure is capable of handling SAAS now (sorry, I don't remember the growth numbers exactly - refer to my earlier article on broadband evolution, where I researched and reported the exact figures)

  • Technologies such as AJAX have proved to be an effective presentation layer for the SAAS model, where desktop powered features can be offered on the net. The browser-server experience has vastly improved due to these improved technologies (you no longer need a browser refresh to get a single byte of data)

  • Applications have been rolled out by technology pioneers such as Google, Writely, Zoho and others. I have always maintained that those who prove a technology works are not necessarily the ones who succeed. Those who watch these folks plough in the money, and then step in when the technology is mature are usually the folks who gain. These applications, which are live in the market today prove that the technology is ready. It’s the business model that is needed. So when people say gmail has 25% of the subscriber base of a hotmail and therefore no biggie, I don’t think they get the picture.

Understanding the business – SAAS (Software as a Service)
Before we understand what implications SAAS has, let us first try and figure out:
a) What is the value chain ?
b) Who are the players ?
c) What are the risks ?
d) For each step in the value chain, what are the potential revenue models ?

And then, ask, “Where can I play?”

So here is a diagram that I drew up, to put some structure around the discussion before we talk about whether AJAX has the potential to overthrow mankind and preside in the Oval Office. I am not going to comment on this diagram right away - a lot of it is self-explanatory. I would like YOU to comment on it, however.

How do you interpret what you see below ?

(Please click on the image for a larger version. After you click on it, your browser may still be resizing the image, making the text jagged, so either disable that feature, or download the image and then see it)

Monday, October 2, 2006

SIP and Skype, P2P and Supernodes - what a melee

Gaaah. I happened to bump into Slashdot today and read this:
“…is that when you install the Skype client, it will drain system resources by running as a supernode from time to time”
and finally concluding that the author would more likely use SIP than Skype.

Implicitly implying that with SIP, you are free from such issues!

Let’s get the facts straight:

  • P2P is an architecture, SIP is a protocol. Skype is a product, and Skype uses its own proprietary protocol (you can call it ‘Skype Protocol’ if you want).
  • A SuperNode system forms a fundamental design choice of many existing P2P networks, including Skype, Kazaa, Grokster and several other massively scaled networks.
  • Today, most SIP deployments use a centralized architecture. In other words, all your SIP phones register with some central server and some central proxy, and your calls are routed through them. If they fail, you cannot reach other users, or you will have to attempt to call them directly (not as simple as it sounds, because the person who sits in your buddy list under a simple address may actually have a much longer registered address, and that complex ID is mapped to the simpler one by the proxy/location server that just went down)
  • There is currently frantic work going on in the p2psip mailing list, which is attempting to answer the following questions:
  • How does one map a SIP flow over a P2P network?
  • Does it make sense to deploy a SIP overlay over a P2P network built on the common architectural principles of existing P2P networks (like DHTs, for instance), or,
  • Does it make sense to deploy a P2P network over the SIP protocol?
In other words, if you haven't figured it out yet: Supernodes have nothing to do with SIP. If you still haven't got it: a P2P SIP network could, and most likely would, also use Supernodes. One good way to avoid using supernodes is, um, say, uh, to use what we know as a centralized network.

Some Basics
Supernodes are often cited as a necessary evil for a massively scaled P2P network. Let us first spend a bit of time understanding how P2P networks differ from centralized networks.

The biggest difference is that in a pure P2P network, there is no well-known or centralized node that is almost always available. P2P networks are plagued by problems of churn (a node may be in the network at one point in time and disappear the next moment because the user logs out) and of location & routing (how do you locate a user, Joe, if you only know his name, but not how to get to him?).

To address such elementary issues, which do not exist in centralized networks, several implementations use very effective algorithms, such as DHTs (Distributed Hash Tables), which establish a mapping between a unique encoded key and the content that needs to be retrieved, in such a way that the key can be used as a primary identifier to locate the data. A client trying to locate that data generates the key, and the key traverses the P2P network, using a defined protocol, until the node(s) that store data related to the key respond. Of course, this is an oversimplification. There have been several improvements to optimize DHT routing, including alternative architectural suggestions for routing and location in P2P networks.
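The key-to-node mapping can be sketched with a toy consistent-hashing ring: every node and every key hashes onto the same circular space, and a key belongs to the first node clockwise from its hash. Real DHTs (Chord, Kademlia) layer O(log N) routing tables on top of this idea; the sketch below just shows how any client can compute the same key-to-node mapping with no central server. Node names are invented.

```python
import hashlib
from bisect import bisect_left

def ring_pos(name: str) -> int:
    """Hash a node name or a key onto the ring as a big integer."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class ToyDHT:
    def __init__(self, nodes):
        # Sort nodes by their position on the ring.
        self.ring = sorted((ring_pos(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        """A key lives on the first node at or clockwise from its hash."""
        i = bisect_left(self.ring, (ring_pos(key), ""))
        return self.ring[i % len(self.ring)][1]   # wrap around the ring

dht = ToyDHT(["node-a", "node-b", "node-c"])
# Every client derives the same mapping independently - no central registry:
assert dht.node_for("joe") == dht.node_for("joe")
```

Churn is what makes the real problem hard: when a node leaves, its arc of the ring must be taken over by a neighbor, which is exactly the re-discovery cost discussed below.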

Now enter supernodes. Why do we need them?
Well, let's put it this way: networks are not made the same around the world. At any one point in time, there will be users on high-speed cable, medium-speed DSL, or low-speed dial-up. And they all want to communicate, and locate each other, effectively. Supernodes are intermediate nodes, between the source and the destination, which can provide additional services to other clients. Here are some things a supernode could do:
  • If User A wants to reach User B, and a SuperNode (SN) knows a shorter way to reach B, it may act as a router and route User A directly to B, avoiding the message having to hop across multiple networks
  • If User A is behind a firewall and wants to talk to User B, but needs a media relay server outside its firewall to route media through, the SN, if it has enough spare CPU cycles, may agree to serve as A's external relay server for the session. During the session, if the SN's CPU gets busy (say the owner of the SN decides to make a call), it can drop the role of relay server and A will look for another
  • If User A tries to reach User B and finds B offline, the SN may agree to act as a 'voice mail' service for A: receive the voice mail, send it off to a central voice mail server, and delete its copy. Of course, the voice mail is typically encrypted with a key that the SN does not know, so it is, for the most part, storing a bunch of encrypted bits for A.
The catch is that the "SN" could be you. It could be any client in the P2P network that has spare cycles to participate in other activities, not related to you, which make the network more efficient.
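The relay scenario above - take on the supernode role while idle, drop it when the owner needs the machine - can be sketched as a simple threshold policy. This is an invented illustration, not how any particular client (Skype or otherwise) actually implements it; the load numbers and names are arbitrary.

```python
class Client:
    """A P2P client that opportunistically acts as a supernode relay."""

    def __init__(self, cpu_load=0.1):
        self.cpu_load = cpu_load
        self.relaying_for = set()   # peers whose media we currently relay

    def accept_relay(self, peer, threshold=0.5):
        """Agree to relay for a peer only while we have spare capacity."""
        if self.cpu_load < threshold:
            self.relaying_for.add(peer)
            return True
        return False                # too busy to play supernode right now

    def local_call_started(self):
        """The owner needs the machine: drop the supernode role."""
        self.cpu_load = 0.9
        dropped, self.relaying_for = self.relaying_for, set()
        return dropped              # these peers must find another relay

sn = Client()
assert sn.accept_relay("userA")            # idle, so we help userA's session
assert sn.local_call_started() == {"userA"}  # owner calls; userA re-discovers
assert not sn.accept_relay("userC")        # and no new relay work is accepted
```

The threshold parameter is the whole controversy discussed later: who gets to set it - you, or the network?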

But that’s the challenge of a P2P architecture. People choose a P2P network because it is scalable, and fault tolerant. In principle, there is no centralization, and each node can take over a functionality required to keep the network running, and there is a discovery protocol defined to find out such ad-hoc nodes. However, in gaining distribution and fault tolerancy, P2P networks need to deal with efficiency (how fast is the network) and effects of churn (if there is no accountability for nodes, and you cannot make guarantees of their availability status in the network [churn]. how do you provide services, like, say, voice mail, - who accepts the voicemail ?)

If we don’t have supernodes, the network wil need to re-discover itself each time, and will not be as efficient as users want it to be. If we don’t have supernodes, navigating networks and solving their challenges increase (firewall was one example above). If we don’t have supernodes, that is, if a client refuses to behave as anything more than a client, how do we provide services which, for example, need to kick in if some client is not online ?

The problem with SuperNodes is not in the architecture, it's in…
…the implementation! The problem with supernodes is that some networks do not allow you to specify when you choose to be a supernode. And not without reason: if each participant decided to switch off supernode functionality, network performance would degrade substantially. On the other hand, if the network decides for you what the supernode load threshold is, then you have no control over your computer's resources. You need to trust the network.

An implementation can take this to its limits - you may find your CPU choked at times - which is typically the result of a bad threshold implementation.

Unfortunately, the industry is now calling SuperNodes malware! I've seen lawsuit applications, and I've seen marketing statements from new VoIP companies that say 'We don't do Supernodes', as if it were evil.

You can’t have the cake
and eat it too. Supernodes are an important piece of the puzzle of a well-performing P2P network. If you take them away, you may as well go back to the centralized model of operation, and keep P2P in your marketing collateral for what happens when your clients can directly call each other's IPs in peer-to-peer fashion. Hooray!