SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for February 18th, 2008

NYT Correctly Calls the Biggest Microhoo Challenge (Hint: It’s the Microsoft Rift With the Web)

Posted by Bob Warfield on February 18, 2008

I’ve been complaining about Microsoft’s unnecessary Rift with the Web for quite a while, and occasionally taking grief for it from Microsoft supporters. The NYT has a great article about how the incompatibility of Microsoft’s proprietary technology with Yahoo’s open web technology is the biggest obstacle the combined entity will face. When we reach a stage where even the NYT sees that Microsoft has a Rift, it’s pretty hard to deny the Rift exists.

What if the merger signalled the end of the Rift? Wouldn’t it send an amazing message to the world if Microsoft flat out abandoned their insistence on their own platforms? They don’t have to kill those platforms; they just need to quit insisting on them to the exclusion of all else. Can you imagine a world with Microsoft pushing Linux, PHP, MySQL, and all the rest?

No, me neither.  I guess they’re in for a rough ride!

Posted in platforms, strategy, Web 2.0 | Leave a Comment »

The Hidden Gems that Social Graphs Bring to Light

Posted by Bob Warfield on February 18, 2008

YouNoodle.com claims to be able to predict which startups are more likely to succeed. While I hesitate to endorse the idea that their predictions are accurate, the methods are quite interesting. According to the NY Times article that introduced YouNoodle from stealth:

their algorithm uses sophisticated modeling pertaining to how social capital and networks can affect an organization’s performance.

They also say that they are focusing in general on assessing the experiences and social and business contacts of entrepreneurs who start a company, and on how the entrepreneurs within that company might fit with one another. They will not disclose precisely what factors they use to predict a start-up’s success, or how their algorithm processes those factors.

If you visit the site, YouNoodle is still not revealing all, but it looks a lot like a social network for startups.  Here’s the entry for startup SnapTalent, for example.

What does all this have to do with the likelihood of success? I can only speculate, but my speculation goes something like this, and is based on my own experiences as an entrepreneur. I suspect the Social Graph piece is attempting to determine whether the founders are connected enough to reach critical mass in getting their ideas noticed. That fits well with “using sophisticated modeling pertaining to how social capital and networks can influence an organization’s performance.”

Can a startup succeed without social capital?  I doubt it.  After all, they have to get the word out somehow.  I’ve suggested in the past that maybe startups ought to insist on having a top-notch blogger on staff, but these guys are taking it another step up altogether.  They seem to insist that a startup be sufficiently linked in to the right groups of people.  It’s a fascinating premise.  As we interact with the web, we leave behind our footprints.  Suitable forensic research can determine from those footprints something about our networking and sales skills.
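Just to make the intuition concrete, here’s a toy sketch in Python of one way a “connectedness” score could work. To be clear, YouNoodle discloses none of its actual factors, so the graph, the names, and the two-hop-reach metric below are all invented for illustration:

    from collections import deque

    # Toy who-knows-whom graph (every name here is made up).
    GRAPH = {
        "alice": {"bob", "carol", "dave"},
        "bob":   {"alice", "erin"},
        "carol": {"alice", "frank", "grace"},
        "dave":  {"alice"},
        "erin":  {"bob"},
        "frank": {"carol"},
        "grace": {"carol", "heidi"},
        "heidi": {"grace"},
    }

    def reach(person, max_hops=2):
        """People reachable within max_hops introductions: a crude
        stand-in for how far a founder's word of mouth can travel."""
        seen, frontier = {person}, deque([(person, 0)])
        while frontier:
            node, hops = frontier.popleft()
            if hops == max_hops:
                continue  # don't expand past the hop budget
            for contact in GRAPH.get(node, ()):
                if contact not in seen:
                    seen.add(contact)
                    frontier.append((contact, hops + 1))
        return len(seen) - 1  # don't count the founder themselves

    for founder in ("alice", "dave"):
        print(founder, "can reach", reach(founder), "people in 2 hops")
    # alice can reach 6 people; dave, with one contact, only 3.

A real model would surely weight who the contacts are, not just count them, but even this crude score separates the well-wired founder from the isolated one.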

A lot more is likely possible from a detailed analysis of Social Interactions on the Web.  LinkedIn recently announced they were going after the investment community.  Tim O’Reilly described the service thusly:

While the service isn’t going live for several months, Mike outlined the core of the value proposition, which I could sum up as a Web 2.0 version of the Gerson Lehrman Group‘s expert network. Gerson (or GLG as it is often called) has made a splash in investment research by assembling a network of experts on virtually any topic. Subscribers pay a hefty subscription fee for access to that network.

Think about it.  If it’s true that it isn’t what you know but who you know, the Internet could be the ultimate wealth enabler.  This kind of information is very new in the world.  Even ten years ago, what mechanism would have been available to assess from afar how well networked various people are?  Social Graphs are revealing new and compelling hidden gems of insight that can be mined for various purposes.

What other uses can such information be put to?

Posted in venture, Web 2.0 | Leave a Comment »

Amazon Ran Out of Capacity

Posted by Bob Warfield on February 18, 2008

As I suggested in my original post on the topic, Amazon’s recent S3 outage was due to running out of capacity.  Specifically, they ran out of authentication capacity.  In part, this problem was due to the fact that Amazon wasn’t monitoring exactly this part of their capacity envelope very well.  High Contrast has the Amazon quote telling us that it was also due to just a few customers radically increasing their load on the system in an unpredictable way:

the surge was caused by at least one very large customer plus several other customers suddenly and unexpectedly increasing their usage. 

So far, most of the pundits are in something of a denial mode. They argue that nothing really new and interesting is happening here. All services go down, including the electric company. Vinnie Mirchandani says corporate data centers have been going down far more often than 99.999% uptime allows since forever. Folks like Nick Carr seem to feel the biggest issue in this outage was that users didn’t have timely information, and Amazon is fixing that.

This all misses a bigger point. What these writers are doing is attempting to apply the old standards and methods to the new world of Cloud Computing. The trouble is, there is something genuinely new at work here that goes beyond the inevitability of some outages and the need to be more transparent with customers about what is going on. The problem Amazon and other would-be cloud platform purveyors face is predictability. The world they deal in is radically less predictable than the corporate data centers of old, because today’s Internet has much lower friction and higher connectivity between web sites, which makes load spikes increasingly sudden and intense. The low-friction web enables a cascade-of-dominoes effect that the far less twitchy web of the past never saw.

The premise of any large computing infrastructure is that by sharing the load across many customers (and in Amazon’s case, sharing excess capacity from their core retail business), we enable headroom for such load spikes.  But how realistic is that concept?

Consider this Alexa plot of CNN and Flickr traffic over time:

[Alexa chart: CNN and Flickr traffic over time]

Do these two curves look predictable to you? Take CNN, for example. Handling its big spikes requires 2-3x overload capacity. Flickr is a little less crazy except for one massive event that involved a doubling in a very short time. That latter event was permanent in its effect, so if you were counting on temporarily borrowing some headroom, you would have had to keep it in place indefinitely and grow from there. Ironically, that chart was brought to my attention at the Amazon Startup Project, where it was used to sell the idea that Amazon Web Services gives a startup unlimited headroom it couldn’t afford to purchase on its own.

These charts are displaying non-linear behaviour, the hardest of all phenomena to predict. This non-linearity is becoming more and more common because the Internet has become extremely viral. It is crosslinked, the very meaning of the word “web”, and messages travel along the links with almost no friction. Viral has become a virtue, and much of the current innovation is focused on how to make the viral spread of information more likely. Social Networks are all about such behaviour.

Take a look again at those CNN spikes. Now let’s imagine your cloud computing infrastructure is hosting a bunch of different blogging, micro-blogging, video, photo sharing, and other social sites. The CNN spikes no doubt represent something newsworthy happening. The greatest likelihood is that each spike will be echoed at some level across all of these sites that are in the business of spreading information. Friction has been lowered to the point where it is almost non-existent when it comes to the spread of memes on the Internet. We have major spikes from world events, such as the assassination of a world leader. On the Internet, we can also have major spikes from such inane moments as Scoble shedding tears of delight over new Microsoft secret software. And the whole thing is wired together. That one tear on Scoble’s cheek breeds a thousand or more accounts ranging from poking fun to trying to guess what this secret software is. There is a ravenous beast poised over the keyboard, waiting to pass anything interesting on to its network of other ravenous beasts.
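To put some toy numbers on the shared-headroom problem, here’s a back-of-the-envelope simulation in Python. Everything in it is invented for illustration: ten tenants, 2.5x aggregate headroom, spikes at 5x a tenant’s normal load. The only point is the comparison: when spikes hit tenants independently, the headroom nearly always absorbs them; when one newsworthy event spikes every site at once, the same headroom gets blown through far more often:

    import random

    random.seed(42)

    TENANTS  = 10
    AVG_LOAD = 100                          # arbitrary load units per tenant
    CAPACITY = 2.5 * TENANTS * AVG_LOAD     # 2.5x aggregate headroom
    SPIKE    = 5 * AVG_LOAD                 # a spiking tenant runs at 5x normal
    P_SPIKE  = 0.05                         # chance of a spike in a given hour
    HOURS    = 100_000

    def outage_rate(correlated):
        outages = 0
        for _ in range(HOURS):
            if correlated:
                # One shared event (big news, a hot meme) spikes everyone at once.
                spiking = TENANTS if random.random() < P_SPIKE else 0
            else:
                # Each tenant spikes on its own schedule, independently.
                spiking = sum(random.random() < P_SPIKE for _ in range(TENANTS))
            load = (TENANTS - spiking) * AVG_LOAD + spiking * SPIKE
            if load > CAPACITY:
                outages += 1
        return outages / HOURS

    print("independent spikes -> outage rate", outage_rate(False))  # ~0.1%
    print("correlated spikes  -> outage rate", outage_rate(True))   # ~5%

Same average load in both cases; the only thing that changes is the correlation, and it makes roughly a fifty-fold difference in how often the shared capacity is exceeded.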

All of this is decidedly non-linear behaviour and impossible to predict. The answer is that major cloud computing infrastructure providers will need to have considerable excess capacity on tap at all times to avoid outages. Take Amazon. Bandwidth to their web services now exceeds the total traffic to all of their other properties. What might once have been a nice remaindering business, allowing them to resell their excess capacity, is now driving the need for more capacity. They have just a few choices. They can invest in a lot more hardware and lower the margins on their business, or they can implement some strategy to limit the availability of the service to some customers. It strains credulity to think they’ll limit capacity to their retail business. How will they decide? Tiered pricing of some kind?

Think in terms of other unexpected networked events. I’m reminded of financial markets and the law of unintended consequences. Look at today’s housing market. Remember Long Term Capital, the hedge fund whose Nobel Laureates had mathematical proofs they would continue making money, right up until the fund unpredictably collapsed. BTW, this sort of thing used to happen with the electrical grid too. In both cases, the financial markets and the electrical grid, elaborate means were put into place to artificially inject friction and damp the machine’s oscillations before it could destroy itself. There are elaborate rules in the stock exchanges about shorting stocks that are falling. They inject a form of friction back into those markets to prevent total free fall.

Perhaps this points the way to new technology for Cloud Computing infrastructure. A gentle injection of the right kind of friction, at the right point, for a limited time, might prevent sudden massive spikes and outages. It’s an area ripe for innovation. Meanwhile, Amazon could sorely use some competition. If a customer could contract for emergency capacity from elsewhere, or even better, if the Cloud Computing providers could share slack capacity as the electrical companies do, it would be tremendously helpful when the inevitable load spikes arrive.
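What might that injected friction look like? One plausible shape, and this is purely a sketch of a standard technique, not anything Amazon has said they do, is a classic token bucket in front of each customer’s calls. Steady growth passes through untouched; a sudden vertical spike gets smeared out instead of toppling the shared service:

    import time

    class TokenBucket:
        """Admit `rate` requests/second on average, with bursts up to
        `burst`; beyond that, requests are refused until tokens refill."""
        def __init__(self, rate, burst):
            self.rate = rate
            self.capacity = burst
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill tokens for the time elapsed since the last request.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # friction: the caller waits or retries

    # Hypothetical per-customer limit: 50 requests/sec, bursts up to 200.
    bucket = TokenBucket(rate=50, burst=200)
    admitted = sum(bucket.allow() for _ in range(1000))
    print(admitted, "of 1000 instantaneous requests admitted")  # roughly 200

The nice property is that the friction is temporary and targeted: only the customer generating the spike feels it, and only for as long as the spike outruns their allowance, much like the trading curbs that slow a falling stock without closing the exchange.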

Posted in amazon, data center, platforms, saas, Web 2.0 | 3 Comments »