SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for November 7th, 2007

Amazon Web Services Continues to Mature, Google to Follow Soon?

Posted by Bob Warfield on November 7, 2007

Amazon recently announced the availability of S3 in Europe, something customers have been clamoring for.  This both reduces latency for US-based companies serving European users and makes it easier for European companies to embrace the service.  Presumably Amazon themselves have datacenters in all sorts of nifty places, and an East Asian center would be a worthwhile next step as they continue to roll out the service.  The new capability works with a “location constraint” that identifies where an S3 storage bucket is to be located.  The default remains US datacenters.
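For the curious, the constraint surfaces in S3’s REST API as a small XML document sent in the body of the PUT Bucket request.  Here’s a minimal sketch in Python of building that payload (the helper function is mine, not part of any Amazon library):

```python
# Sketch: the XML body S3's PUT Bucket request expects when you want
# the bucket created in Europe rather than the default US datacenters.
def create_bucket_body(location):
    """Build the CreateBucketConfiguration payload for a PUT Bucket request.

    Passing None omits the body entirely, which gives the US default.
    """
    if location is None:
        return ""
    return (
        "<CreateBucketConfiguration>"
        f"<LocationConstraint>{location}</LocationConstraint>"
        "</CreateBucketConfiguration>"
    )

# "EU" is the constraint value for the new European buckets.
print(create_bucket_body("EU"))
```

If Amazon expands the option to East and West Coast US locations as developers have requested, presumably it would just mean new values for that one element.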

When I attended Amazon Startup Project, there was a lot of interest among the developers present in gaining some control over exactly where the machine resources they were purchasing from Amazon materialized.  This manifested at a couple of levels.  First was the ability to access multiple datacenters for redundancy.  The “location constraint” option could be expanded to include East and West Coast US locations, for example.  The second request was an ability to specify that machines could be in the same datacenter, but ought to be on separate racks, again to increase resilience in the event of a failure that impacts a whole rack.  Note that S3 already has a lot of this kind of redundancy built in, and it is more EC2 (the ability to buy raw Linux machines) that we’re dealing with here.  I can imagine Amazon will get to all of this in the fullness of time.  The requests are neither unreasonable nor should they be all that difficult to implement.

Meanwhile Red Hat has announced SaaS pricing for their Red Hat Linux when offered on EC2, an interesting development.  It sounds like a good thing, but I’m still trying to decide whether their pricing makes sense.  It’s $19/month per user plus 21 to 94 cents per compute hour.  In exchange you get Tech Support and access to all the RHEL (Red Hat Enterprise Linux) apps.  My problem is they want to charge $19/month per user plus an additional hourly charge that’s as much as Amazon wants for the EC2 hardware itself (at least the “small” configuration).  That sounds like a lot, particularly the “per user” piece.  Perhaps they meant to say “per server”, but that isn’t how the release is worded.  We’ll have to wait and see if there is clarification later.  The bottom line on this is that systems software companies are starting to take notice and view Amazon as another platform to support.
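To put the numbers in perspective, here’s a back-of-the-envelope comparison.  I’m assuming an always-on “small” EC2 instance at Amazon’s $0.10/hour rate and an average 730-hour month, and treating Red Hat’s quoted hourly rate as an add-on on top of the EC2 charge, which is how the release reads:

```python
# Rough monthly cost of one always-on EC2 "small" instance,
# with and without Red Hat's announced add-on pricing.
HOURS_PER_MONTH = 730  # average hours in a month

ec2_small = 0.10 * HOURS_PER_MONTH         # Amazon's hardware charge alone
redhat_low = 19 + 0.21 * HOURS_PER_MONTH   # low end of Red Hat's hourly range
redhat_high = 19 + 0.94 * HOURS_PER_MONTH  # high end of the range

print(f"EC2 small alone:        ${ec2_small:.2f}/month")
print(f"Red Hat add-on (low):   ${redhat_low:.2f}/month")
print(f"Red Hat add-on (high):  ${redhat_high:.2f}/month")
```

Even at the low end, the add-on roughly triples the cost of a small instance; at the high end it’s nearly ten times the hardware charge.  That’s why the “per user” wording matters so much.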

Speaking of systems software, we’re still waiting to see a persistent database solution.  When I attended Amazon Startup Project, they mentioned some sort of persistent database support would be available by the end of this year, and at least one of the entrepreneurs who spoke said they were beta testing the solution.  This is a gaping hole in the offering, and one I’m sure they’ll be filling as soon as they can.  What remains to be seen is whether they offer a solution developed by Amazon, or whether a partner steps up.  For example, mySQL could offer something along the lines of what Red Hat is doing.  Looking at the Red Hat pricing, and thinking about some of the things I’ve heard about mySQL (1 in 10,000 “customers” actually pays for mySQL), I wonder if this sort of thing doesn’t provide an opportunity for these vendors to deal themselves a new hand in how they engage with customers.  It wouldn’t be the first time a transition to a SaaS model radically changed the rules.  We’ll have to see how it all works out.

Meanwhile, Amazon keeps ticking along with a pretty good pace of announcements around the service.  Recall we’ve gotten bigger servers and SLAs, two things that were much in demand around the time I went to Startup Project.  On the SLA front, Amazon is coming through with flying colors.  Read/Write Web tells us they’re hitting four 9’s, an extra “9” beyond what their SLA promises.  That’s a solid number that even big companies struggle to achieve.
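An extra nine is a bigger jump than it sounds.  A quick calculation of the downtime each level permits per year (taking three nines as the promised level, which is what “an extra 9” on top implies):

```python
# Downtime budget per year at a given availability level.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours_per_year(availability):
    """Hours of allowed downtime per year at the given availability fraction."""
    return (1 - availability) * HOURS_PER_YEAR

three_nines = downtime_hours_per_year(0.999)   # roughly the SLA promise
four_nines = downtime_hours_per_year(0.9999)   # what they're reportedly hitting

print(f"99.9%  -> {three_nines:.2f} hours/year")
print(f"99.99% -> {four_nines * 60:.0f} minutes/year")
```

Three nines allows nearly nine hours of downtime a year; four nines allows under an hour.  Each added nine cuts the budget by a factor of ten.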

There are two areas of challenge that I’ll be interested to watch as events continue to unfold.  First, there remains a suspicion that Amazon Web Services is largely a remaindering operation: that Amazon isn’t even trying to make money with it, but rather to recover the cost of server capacity that the rest of their business leaves underutilized.  If this is true, then at some point service levels will degrade as the excess capacity is used up and Amazon fails to invest in keeping ahead of that curve.  While it’s possible, I’m skeptical.  I think their current pricing actually does let them make money.  There is certainly a fair premium being charged relative to other hosting services, and they must have relentlessly driven service costs down and invested in nearly total automation of the infrastructure.  If the service is profitable or nearly so, we can count on them to keep investing in it as it grows.

This brings me to my second thought.  So far, Amazon has largely focused on delivering capabilities they had to build for their core business anyway.  At some point, the average customer’s needs will deviate from Amazon’s view of how web architectures should be built.  They’ve said, for example, that there are no plans to offer their key-value storage system Dynamo as a service.  There may be a lot of reasons for this.  Ironically, it may not be multitenant, so they may fear opening it up is too risky for the overall business: not enough isolation between tenants.  An alternative view is that for all but the very largest of web sites, a service like Dynamo is just not necessary.  Most sites want mySQL or the equivalent.  Whether we choose to view that choice as enlightened is immaterial.  Many very large web sites get built around vanilla relational technology and wind up working fine without anything exotic like Dynamo.

My question in all of this is how much further Amazon will go.  If they’re just milking technologies they’ve built that have broad applicability, they will have a decision to make: how heavily to invest in technologies, delivered via Web Services, that have no benefit for the rest of Amazon’s business.  My guess is they’ll go slowly on such investments, preferring to see partners develop the technologies.  An interesting first test will be what happens around persistent database support.

All in all, the service continues to have a bright future.  People who want to directly equate the raw cost of servers to the cost of on-demand utility computing are not making an apples-to-apples comparison in my mind.  They miss a lot of the benefits that services like Amazon’s offer, benefits that are just not available in a raw server or even a virtual appliance setting.  Businesses have to decide how much they value those services, but early indications for S3, Amazon’s highest value-add service, are very positive.  If you don’t see the additional value, go with a different alternative.  There are many options in today’s market, with more all the time.

Speaking of more all the time, I’ve been hearing rumblings that Google may announce their equivalent shortly.  I’m all ears for that one!  I will be interested to see if it’s more Google vaporware (i.e. you won’t really be able to use it until end of 2008) or if it’s something that’s ready to go immediately.

Related Articles

Update on Red Hat:  the add-on pricing is per server, not per user, as I had speculated.

Posted in amazon, platforms, saas, strategy, Web 2.0 | 1 Comment »

Legacy Code: Another SaaS Advantage

Posted by Bob Warfield on November 7, 2007

Billy Marshall wrote recently about the “triple threat of legacy code”.  Billy is in the software appliance business, so he is arguing that the way appliances deal with legacy code is a compelling advantage, and he is right.  The software vendors he is speaking to are concerned about three threats legacy code presents to their business:

Higher Support Costs:  I’m very familiar with this one, having lived it.  With conventional enterprise software, over time you wind up supporting a lot of different versions of the software for a lot of different reasons.  It mostly boils down to this: the more choice you give your customers, the more choice they’ll exercise.  Forget customers being conscientious about staying current–some will, but the vast majority will not.  They’ll stay on whatever version they installed and got running until a compelling event happens.  And why shouldn’t they?  After all, if it ain’t broke, don’t fix it.  Just one little problem: the act of doing nothing eventually breaks things for everyone.

It gets harder and harder for the vendor to support the older versions.  Over time, 3rd party technologies the old version is dependent on become obsolete and unsupported.  Think old versions of the database, operating system, app server, and almost anything else.  Customers, meanwhile, encounter bugs that are likely already fixed in the latest version and could’ve been avoided altogether had they stayed current.  Sometimes these bugs are quite serious.  Often there are ticking time bombs that a customer is certain to encounter if they stay with a release long enough.  Yet the idea that things are working is strangely calming and leads the customer to ignore warnings in release notes for later versions until they’re on the brink of some crucial event when it’s very difficult to roll in a new release.  This leads to hot fixes and other unsafe practices.  With enough patches, a customer can wind up on a version that is entirely unique to one single installation.  This is never a good place to be, but it’s all too easy to get there with the best of intentions.

Meanwhile, the mainline code base, the one the majority of new users are on, is moving further and further away.  Customers who make no effort to stay current are toting up a hidden cost that will one day come due.  The longer they wait, the more expensive it will be when they finally have to upgrade.

Lower Revenue:  This is another familiar speed bump in the life of a conventional enterprise software company.  The Engineering group labors to produce a shiny new module that is much in demand throughout the installed base.  There’s just one problem:  it only works with the latest release of the flagship.  This one is maddening to customers and salespeople alike.  The salespeople want to know why the new modules can’t work with the old releases.  The Engineers roll their eyes and wonder how they’ll ever get the module built if they have to support so much legacy code.  Customers just want the new module without the high cost of a full upgrade, so they’re displeased and disillusioned as well.

Competitive Risk:  This one is the worst.  You’ve got customers who’ve waited so long to upgrade that they now face almost a complete reimplementation.  Given the extremely high costs and pain associated with this, they naturally decide to reopen the entire decision cycle.  Suddenly they’re looking at whether they want to reimplement or replace.

SaaS vendors avoid all of this by keeping customers painlessly current.  A virtual appliance can potentially do this if and only if the customer will allow the vendor to use the facilities of the appliance to keep them painlessly up to date.  A failure to do that means the customer is not on current code and an expensive migration will be required, appliance or not. 

While it’s doable in the appliance case, it seems considerably harder to manage than the SaaS alternative.  If nothing else, consider the requirements of sandboxing to make the transition easy.  There has to be some period when both systems are running in parallel.  The shorter this period can be, the easier for all concerned.  It’s not clear with an appliance how that gets orchestrated.  The SaaS vendor can build this into their data center operations and use excess capacity to migrate a few customers at a time relatively automatically.  The virtual appliance, by contrast, is likely to be short on machine capacity unless it is somehow prepared to flex that capacity for the migration.  The reason is that the appliance is likely sized for the normal case of running the app, rather than the odd case of running two versions of the app for a short time while things are migrated.

Savvy SaaS vendors I’ve talked to are using the Legacy Code situation as an opportunity to come in at the Competitive Risk stage, when a customer faces a huge migration cost, and offer them an alternative: go SaaS at that point.  Given the relative age of a lot of the Enterprise Software out there today, we should see steadily rising opportunity for this sort of thing as time goes on.  This also presents a cautionary tale to companies like Oracle that are bent on acquisitions.  If they rock the boat too much after acquiring a company, for example by requiring mass upgrades, they make themselves vulnerable to SaaS vendors and other competitors who want to swoop in at a moment of uncertainty and pain and gain converts.  If they keep support too broad, their internal costs will be higher.  It’s a lose-lose choice for anyone in the Legacy Code business.

Posted in business, saas, strategy | 1 Comment »