SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for the ‘cloud’ Category

NoSQL is a Premature Optimization

Posted by Bob Warfield on July 22, 2011

There’s been a lot of back and forth lately from the NoSQL crowd around Michael Stonebraker’s contention that reliance on relational technology and MySQL has trapped Facebook in a ‘fate worse than death.’   This was reported in a GigaOm post by Derrick Harris.  Harris reports in a later post that most of the reaction to Stonebraker’s contention was negative:

By and large, the responses weren’t positive. Some singled out Stonebraker as out of touch or as just trying to sell a product. Some pointed to the popularity of MySQL as evidence of its continued relevance. Many questioned how Stonebraker dare question the wisdom of Facebook’s top-of-the-line database engineers.

Harris, Jim Starkey, Paul Mikesell, and Curt Monash all take a stab at rehabilitating Stonebraker’s argument in the second post.  Their argument boils down to, “Yeah, Facebook did it, but only because they have great engineers, spent a fortune, and endured a lot of pain.  There are easier ways.”

Sorry fellas, time to annoy the digerati again, and so soon after bashing Social Media.  I disagree with their contention, which is well expressed in the article by this Jim Starkey quote:

If a company has plans for its web application to scale and start driving a lot of traffic, Starkey said, he can’t imagine why it would build that new application using MySQL.

In fact, I would argue that starting with NoSQL because you think you might someday have enough traffic and scale to warrant it is a premature optimization, and as such, should be avoided by smaller and even medium sized organizations.  You will have plenty of time to switch to NoSQL as and if it becomes helpful.  Until that time, NoSQL is an expensive distraction you don’t need.

The best example I see for why that’s the way to look at NoSQL comes from Netflix, which is mentioned towards the end of the article.  I went through several expositions by Netflix engineers on their experience transitioning from an Oracle Relational data center to one based on NoSQL in the form of Amazon’s SimpleDB and then later Cassandra (the latter is still an ongoing transition as I understand it).  You’re welcome to read the same sources; I’ve listed them at the bottom.

Netflix decided to move to the Cloud in late 2008 to early 2009, after an outage prompted them to consider what it would take to engineer their way to significantly higher uptime.  They concluded they couldn’t build data centers fast enough, and that as soon as one was built it was swamped for capacity and out of date.  They agree with Amazon’s Werner Vogels that building data centers represents “undifferentiated heavy lifting”, and is therefore to be avoided, so they bet heavily on the Cloud.  These are smart technologists who have been very transparent about their experiences, so it’s worth learning from them.  Werner Vogels’ reaction to Stonebraker’s remarks about Facebook is an apt way to start:

Scaling data systems in real life has humbled me.  I would not dare criticize an architecture that holds social graphs of 750M and works.

The gist of the argument for NoSQL being a premature optimization is straightforward and rests on 3 points:

Point 1:  NoSQL technologies require more investment than Relational to get going with. 

The remarks from Netflix are pretty clear on this.  From the Netflix “Tech” blog:

Adopting the non-relational model in general is not easy, and Netflix has been paying a steep pioneer tax while integrating these rapidly evolving and still maturing NoSQL products. There is a learning curve and an operational overhead.

Or, as Sid Anand says, “How do you translate relational concepts, where there is an entire industry built up on an understanding of those concepts, to NoSQL?”

Companies embarking on NoSQL are dealing with less mature tools, less available talent that is familiar with the tools, and in general fewer available patterns and know-how with which to apply the new technology.  This creates a greater tax on being able to adopt the technology.  That sounds a lot like what we expect to see in premature optimizations to me.

Point 2:  There is no particular advantage to NoSQL until you reach scales that require it.  In fact it is the opposite, given Point 1.

It’s harder to use.  You wind up having to do more in your application layer to make up for things Relational does that NoSQL doesn’t, things you may be relying on.  Take consistency, for example.  As Anand says in his video, “Non-relational systems are not consistent.  Some, like Cassandra, will heal the data.  Some will not.  If yours doesn’t, you will spend a lot of time writing consistency checkers to deal with it.”  This is just one of many issues involved with being productive with NoSQL.
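
A consistency checker of the sort Anand describes can start out very small: periodically sweep two replicas, find keys that diverge, and repair them last-write-wins.  The sketch below is a toy illustration of that pattern in Python, with made-up record shapes; it is not the API of Cassandra or any particular store:

```python
# Minimal last-write-wins consistency checker for two replicas.
# Records are {key: (value, timestamp)}; a real store would page through keys.

def check_and_repair(replica_a, replica_b):
    """Find keys where replicas diverge and repair with the newest write."""
    repaired = []
    for key in set(replica_a) | set(replica_b):
        a, b = replica_a.get(key), replica_b.get(key)
        if a == b:
            continue  # consistent, nothing to do
        # A missing record counts as timestamp 0 so the existing copy wins.
        winner = max(a or (None, 0), b or (None, 0), key=lambda rec: rec[1])
        replica_a[key] = replica_b[key] = winner
        repaired.append(key)
    return repaired

a = {"user:1": ("alice", 100), "user:2": ("bob", 90)}
b = {"user:1": ("alice", 100), "user:2": ("bobby", 95), "user:3": ("carol", 50)}
fixed = check_and_repair(a, b)
print(sorted(fixed))   # the keys that had diverged and were repaired
print(a == b)          # replicas now agree
```

The point of the toy is the amount of machinery relational databases give you for free: every line of this is code you write and maintain yourself once you leave them behind.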

Point 3:  If you are fortunate enough to need the scaling, you will have the time to migrate to NoSQL and it isn’t that expensive or painful to do so when the time comes.

The root of premature optimization is engineers hating the thought of rewriting.  Their code has to do everything exactly right the first time or it’s crap code.  But what if you don’t even understand the problem well enough to write “good” code at first?  Maybe you need to see how users interact with it, what sorts of bottlenecks exist, and how the code will evolve.  Perhaps your startup will have to pivot a time or two before you’ve even started building the right product.  Wouldn’t it be great to be able to use more productive tools while you go through that process?  Isn’t that how we think about modern programming?

Yes it is, and the only reason not to think that way is if we have reason to believe that a migration will be, to use Stonebraker’s words, “a fate worse than death.”  The trouble is, it isn’t a fate worse than death.  And yes, it will help to have great engineers, but by the time you get to the volumes that require NoSQL, you’ll be able to afford them, and even then, it isn’t that bad.

Netflix’s story is a great one in this respect.  They went about their NoSQL migration in a clever way.  They built a bi-directional replication between Oracle and SimpleDB, and then they started moving over one app at a time.   They did this against a mature system rather than a new, buggy system untested by users.  As a result, things went pretty quickly and pretty smoothly.  That’s how engineers are supposed to work: bravo Netflix!

I have a note out to Adrian Cockcroft to ask how long it took, but already I have found a reference to Sid Anand doing the initial “forklifting” of a billion records from Oracle to SimpleDB in about 9 months, and they went on from there.  When Sid Anand was asked what the most complex query was to convert from Oracle to NoSQL, he said, “There weren’t really any.”  He went on to say you wouldn’t convert your transactional data anyway, and that was pretty much it.
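
The replicate-then-cut-over pattern Netflix used can be sketched as a small facade: every write is mirrored to both stores, and reads follow a per-app flag so each app flips independently once its data checks out.  The in-memory dicts below are stand-ins for Oracle and SimpleDB; this is an illustration of the pattern, not Netflix’s code:

```python
# Dual-write facade: mirror writes to old and new stores, flip reads per app.
class MigratingStore:
    def __init__(self):
        self.oracle = {}       # stands in for the legacy relational store
        self.simpledb = {}     # stands in for the NoSQL target
        self.cut_over = set()  # apps whose reads now come from the new store

    def write(self, app, key, value):
        # Writes go to both sides so either can serve reads at any time.
        self.oracle[(app, key)] = value
        self.simpledb[(app, key)] = value

    def read(self, app, key):
        store = self.simpledb if app in self.cut_over else self.oracle
        return store[(app, key)]

    def cutover(self, app):
        self.cut_over.add(app)  # one app at a time, and reversible

store = MigratingStore()
store.write("queue", "movie:1", "Heat")
print(store.read("queue", "movie:1"))  # served by the Oracle stand-in
store.cutover("queue")
print(store.read("queue", "movie:1"))  # same answer from the SimpleDB stand-in
```

Because the cutover is a per-app flag rather than a big bang, a bad migration can be rolled back by removing the app from the set while the dual writes keep both stores current.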


The world loves to see things in black and white.  It sells more papers.  Therefore, because some situations benefit from NoSQL for scaling, we hear a hue and cry that everyone must embrace NoSQL immediately.  Poppycock.  You can go a long, long way with SQL-based approaches: they’re more proven, they’re cheaper, and they’re easier.  Start out there, and if the horse you’re riding is strong enough to carry you to NoSQL scaling levels, you can tackle that when the time comes.  Meanwhile, avoid premature optimizations.  You don’t have time for them.  Let all these guys with NoSQL startups make their money elsewhere.  You need to stay agile and focused on your next minimum viable deliverable.

Extra! Extra!

This post is now, at least for a time, the biggest post ever for Smoothspan.  Be sure to check out my follow up post:  Biggest Post Ever Redux:  NoSQL as a More Flexible Solution?

Articles on the Netflix NoSQL Transition

Sid Anand’s Experience “Forklifting” the Data from Oracle into SimpleDB

Adrian Cockcroft’s NoSQL Migration Slides

Sid Anand’s QCon Video on NoSQL at Netflix

Posted in cloud, data center | 47 Comments »

The Two Most Desirable Features of a Platform as a Service

Posted by Bob Warfield on July 5, 2011

Big Data?


Uber DB scaling?

Mad Hadoopishness?

Faster app development?

Universal Social Connectivity?

Not so much.  Some platform or other claims all of those things, but the two most desirable features of a PaaS appear to be revenue generation and commodity pricing, not necessarily in that order.

Let me first say that I’ve come to view AppStores as PaaS offerings of a sort, and their ability to generate revenue for developers drives those developers into their arms.  This also applies to more traditional PaaS offerings such as Salesforce’s  As nearly as I can tell from talking to folks and scanning the blogosphere for activity, people write for either because they need to integrate with the Salesforce CRM app or because they want the revenue tailwind you can get via

The commodity pricing piece applies to what many prefer to call IaaS, or Infrastructure as a Service, with Amazon Web Services being the most widely used example so far.  Let’s go ahead and keep them in our PaaS list for this discussion and ignore the IaaS moniker.  If we start cutting out things like Amazon and for various reasons, there is so little left it’s hard to talk about it.  I would submit we may have divided up the market into too many A-A-S’s a little prematurely, without waiting to see what would stick.

This is the gist of my problem, BTW.  There’s been plenty of time for PaaS to get big, but it is kind of lumbering along.  Yes, we get things like Heroku.  Doesn’t that qualify largely under the Commodity category, though?  We’ve already talked about and Amazon too.  But in this Cloudy-Cloudy-*aaSy-*aaSy world, what else is flying high?  There’s room for a ton of things, but you have to check the revenue and commodity boxes first.

Here is the problem for all you would-be PaaS vendors out there–it’s darned hard to get design wins if your platform involves heavy lock-in, heavy rewriting, too much cost, or isn’t otherwise a requirement for a Minimum Viable Product.   Yup, there’s that darned MVP concept rearing its ugly head again.  The trouble is, it’s no longer just something the Cool Kids bandy about while sipping lattes.  It’s become something of a requirement for survival and an article of How Things Are Done In The Valley.  You can’t get capital to fool around with fancy products any more, so you have to go MVP all the way because you’re likely starving on your own nickel until you do.  In addition to the VC’s, Agile thinking and a general suspicion of Premature Optimization have really made it hard to sell Feature Density.  The trouble with that list of things at the top is they’re either not that commonly needed, they involve dealing with scale you should be so lucky as to get to (and are therefore not MVP material), or they solve problems you’re convinced you can easily solve as you’re starting your journey (e.g. multitenant).  Yes, those problems are harder than you think, but it doesn’t matter.  They don’t look harder when you’re eyeing what it takes to get an MVP off the ground.

PaaS vendors, you have a couple of choices available to deal with this.  You can ignore it, arguing that it’s early days yet and your time will come.  It’s pretty hard to dispute that, because it is early days yet.  But don’t get too target-fixated at such an early stage either, lest some upstart take the early days away.  You can still be Zucked pretty darned easily precisely because it is early days.  Note: To be “Zucked” is to be treated to the same fate Facebook dished out to MySpace.  Call it Fast Follower on Steroids with a Heavy Case of Rabies.  It’s not a pleasant thing to happen to your life’s dream.

Okay, so what’s the alternative for would-be PaaS Masters of the Universe?

Hey, this is a good time for the Cloud.  You know that.  We hear less and less whining about security and all that.  Clever marketers are even now letting IT get just a little bit infected with “Private Clouds”, a potent Trojan Horse strategy to win over their hearts and minds.  Whatever it takes, resistance is futile.  Once their apps and data are on my servers, it’s only a matter of reconfiguring the subnets and voila!  All your Apps are belong to my Cloud!

Given that is the case, there may be some first mover, network effect, and momentum issues to think about.  In other words, stop being so darned pure to your vision and line up as many customers as you can as fast as you can.  That’s how we win in this part of the Bubble, um I mean Business, Cycle.

What the heck does that mean?

I’m glad you asked, but it should be obvious:  PaaS vendors need to embrace these two desirable features and nail them before worrying about much else.  There are two simple questions:

1.  Are you directly delivering revenue producing traffic to your customers by virtue of some aspect of your PaaS?

2.  Do you have an offering that lets people buy into some commodity-priced-let’s-get-started-without-boiling-the-ocean version of your PaaS?

If you do, Hallelujah Brothers and Sisters!  You have a shot at the promised land and you can now start looking at more potent differentiation rather than table stakes.  If you don’t, back up and let’s figure something out, because this PaaS stuff can turn into a Darwin Test if you’re not careful.

There is room for innovation on the commodity side, but it’s getting harder.  The days may be gone when you can deliver commodity infrastructure.  Storage and CPU like S3 and EC2, in other words.  If you aren’t already spun up and doing good things, you need to start skating where the puck is going to be.  I have offered up my PaaS-as-bite-sized-pieces strategy before (Sell the Condiments, not the Sandwiches).  Check it out.  Lots to commoditize there.

There is also room for tons of innovation on the “delivering revenue producing traffic” front.  We’ve seen it for mobile platforms, in fact, one could argue the appstores are the defining element there coupled with the relative desirability of the platforms to their users.  The participants seem to be pretty mercenary about those two related dimensions.  I am mystified about why Amazon, which ought to understand App Stores better than anyone, is not doing so great with their Android App Store and doesn’t appear to have one at all for Amazon Web Services.  The latter is inexcusable.  There’s got to be all sorts of opportunity to create an App Store there.

Salesforce keeps wanting to be a major PaaS vendor, but they somehow misunderstand the data they were among the first to collect.  Yes, they do have the revenue producing traffic piece nailed.  But they seem to be very much in denial about whether that is the main reason people will use, and totally opposed to solving the commodity issue.  Every time I talk to an entrepreneur or investor who wonders whether they should use or one of its offshoots, I ask them to consider a simple thought experiment.  Look up Salesforce’s current cost of service as a percentage of revenue.  Take the cost they will be charged to use and divide by SFDC’s cost of service.  That number is what they must charge to have the same margins as Salesforce, and that assumes they don’t spend another dime on anything else to deliver their service.  Most of the time that makes for a short conversation.  “Oh, I didn’t think about it like that.  There’s no way we can be competitive if we have to charge that much.”  And so it goes with a lot of other PaaS offerings too, BTW.  Perhaps Heroku is their way of covering both bases and a sign that they do understand.
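
That thought experiment is just one division, but it’s worth making concrete.  A sketch with hypothetical numbers (neither Salesforce’s actual cost-of-service ratio nor any vendor’s real fees):

```python
# Back-of-envelope: what must an app built on a PaaS charge to match
# the PaaS vendor's own gross margin?  All numbers here are hypothetical.

paas_fee_per_user = 50.0       # $/user/month the PaaS charges you
vendor_cost_of_service = 0.16  # vendor's cost of service as a fraction of revenue

# If the PaaS fee is your entire cost of service, your price must satisfy
# price * vendor_cost_of_service >= paas_fee_per_user to match their margin.
required_price = paas_fee_per_user / vendor_cost_of_service
print(f"${required_price:.2f}/user/month")  # before spending a dime on anything else
```

With those made-up inputs the answer comes out north of $300 per user per month, which is why the conversation tends to be short.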

For other PaaS vendors, maybe there is a sliver of hope.  If you can change that cost of service to be cost of service plus some cost of marketing, because the PaaS will deliver revenue producing traffic, you can afford to pay more.  Heck, Steve Jobs gets 30% just for delivering the traffic and not helping you in much of any other way.

The revenue producing traffic is by far the hardest thing to do.  You can’t materialize it out of thin air.  You either already have a solid traffic stream you can repurpose (that’s what Salesforce did), or you may have to look at partnering opportunities.  For those that have a stream, now is your chance to enter the PaaS business in an interesting way.  Casting eyes around, Adobe is well positioned for this.  Get an AppStore together, Adobe, and link your dev tools and *aaS efforts to it.  There are bound to be others too.  Open Source vendors in the tools business, maybe you have this sort of opportunity as well.  IBM, Oracle, HP, and whoever else: this is a huge opportunity for you.  Maybe Cisco too, as I think about it.  Trouble is, your Big Sales Force may think it hampers them in some way.  Ignore their parochial interests and charge ahead.  This is a Silver Bullet for PaaS and Cloud ascendancy.

Give it some thought.  It’s high time for some break out PaaS action.

Posted in amazon, bootstrapping, business, cloud, platforms, saas, venture | 7 Comments »

CIO’s: You’ve Got a Target On Your Back, Use It!

Posted by Bob Warfield on May 24, 2011

This week’s InfoBoom post is all about security and Sony’s recent epidemic of hacker attacks.  I can’t imagine any CIO watching the Sony/Hacker drama unfold who wouldn’t wonder whether it could happen to their organization.  I was reminded of it again when I read this morning about “Another day, another attack on Sony.”

Clearly Sony had very much underestimated their risks, but isn’t it likely almost everyone has too? So far, Sony has estimated $171 million in costs relating to these attacks.  In this post, I look at whether a strategy similar to Netflix’s “Chaos Monkey” might help CIO’s feel a little more secure.

Check it out on InfoBoom.

Posted in business, cloud, data center, strategy | Leave a Comment »

Amazon: The Hidden Empire

Posted by Bob Warfield on May 13, 2011

An amazing read about Amazon and the strategy of creating a huge success on the web:

Amazon: The Hidden Empire by faberNovel

Why can’t the other Big Seattle Tech Company be remotely this smart?

Posted in amazon, business, cloud, service, strategy | Leave a Comment »

People Using Amazon Cloud: Get Some Cheap Insurance At Least

Posted by Bob Warfield on April 23, 2011

I’m reading through Twitter streams, Amazon Forums, and other news sources trying to get a sense of how users are responding and what their problems are.  It’s pretty appalling out there.  B2B companies admitting they have no recent backups and just have to wait for it to come back online.  A company doing cardiac monitoring in the Amazon Cloud claiming patients’ lives are at stake and desperately seeking assistance.  The list goes on.

There’s some basic insurance any company using the Amazon Cloud needs to take out first chance they get.  It’s not hard, it’s not expensive, it’s not push a button and get hot failover to multiple Clouds, and it won’t fix your problems if you’re caught in the current outage.  But it will at least give you a little more maneuvering room.  Many of the accounts I’m reading boil down to a lack of options other than waiting because they have no accessible backup data.  In other words, they’d love to bring up their sites again in another Amazon Region, but they can’t because they’re missing access to a reasonably current data backup, or their Amazon Machine Images are all in the affected region, or they face issues along those lines.

Companies need the Cloud equivalent of offsite backup.  At a minimum, you need to be sure you can get access to a backup of your infrastructure–all the AMI’s and Data needed to restart.  Storage is cheap.  Heck, if you’re totally paranoid, turn the tables and back up the Cloud to your own datacenter, which consists of just the backup infrastructure.  At least that way you’ve always got the data.  Yes, there will be latency issues and that data will not be up to the minute.  But look at all that’s happened.  Suppose you could’ve spun up in another region having lost 2 hours of data.  Not good, not good at all.  But is it really worse than waiting over 24 hours, or would you be feeling blessed about now if you could’ve done it 2 hours into the emergency?  These are the kind of trade offs to be thinking about for disaster recovery.  It’s chewing gum and baling wire until you get an architecture that’s more resilient, but it sure beats not having any choices and waiting.

Another thing: make sure you test your backups.  Do they restore?  Can you go through the exercise of spinning up in another region to see that it works?  Don’t just test once and forget about it.  Pick an interval and retest.  Make it routine so you know it works.
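
A restore drill doesn’t need to be elaborate to be worth automating: write the backup, restore it into a clean location, and verify a checksum, on a schedule rather than once.  A minimal sketch (file paths and data shapes are made up for illustration; your real drill would restore into another region, not a temp file):

```python
import hashlib
import json
import os
import tempfile

def checksum(data):
    """Stable digest of the dataset so original and restore can be compared."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

def backup(data, path):
    with open(path, "w") as f:
        json.dump(data, f)

def restore_drill(data, path):
    """Back up, restore, and verify the round trip matches byte-for-byte."""
    backup(data, path)
    with open(path) as f:
        restored = json.load(f)
    return checksum(restored) == checksum(data)

records = {"customer:1": {"plan": "pro"}, "customer:2": {"plan": "free"}}
path = os.path.join(tempfile.gettempdir(), "drill.json")
print(restore_drill(records, path))  # True means the backup actually restores
```

Wire something like this to a scheduler and alert on a False, and “do they restore?” stops being a question you answer during the outage.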

Staging all the data to other locations is not that expensive compared to continuously running dual failover infrastructure.  That’s one of the beauties of elasticity.

There’s a lot of grumbling about how hard it is to failover to other regions and how expensive.  Nothing is harder than explaining to your customers why your site is down.  But at least get some cheap insurance in place so you have options the next time this happens.  And there will be a next time, no matter whether it is Amazon, some other Cloud provider, or your own datacenter.  There is always a next time.

While you’re at it, consider some other cheap insurance:

– Do you have a way to communicate with your customers when your site is down?  An ops blog that you’re sure is hosted in a different cloud is cheap and cheerful.

– Can you at least get your web site home page showing?  Think about how to get DNS access and a place to host that don’t rely 100% on one Cloud provider.

– Is there something about your app that would make partial access in an outage valuable?  For example, on a customer service app, being able to log trouble tickets as email during an outage or scheduled downtime would be extremely helpful.  Mail is cheap and easy to offer as alternate infrastructure, and it is also easy to imagine piping the email messages through a converter that would file them as tickets when the site came back up.  It’s not hard to imagine being able to queue many kinds of transaction this way in an emergency.  What are the key limited-functionality areas your users will want to have access to in an emergency?

– For some apps, it is easier to provide high availability for reading than for writing.  Can you arrange that in an emergency, reading is still possible, just not writing or creating new objects?  Customers are a lot more tractable if they know they still have access to their data, but just can’t create new data for a while.  For example, a bookmarking site that lets me access my bookmarks but not create new ones during an outage is much less threatening than one that just brings up its Fail Whale equivalent on me.
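
That last read-only idea takes surprisingly little machinery: a flag that keeps reads flowing while writes fail fast with an honest message instead of a Fail Whale.  A toy sketch (class and message names are invented for illustration):

```python
class DegradableStore:
    """Serves reads during an outage; rejects writes with a clear message."""
    def __init__(self):
        self.data = {}
        self.read_only = False

    def get(self, key):
        return self.data.get(key)  # reads always work

    def put(self, key, value):
        if self.read_only:
            raise RuntimeError("Maintenance in progress: your data is safe "
                               "and readable, but changes are paused.")
        self.data[key] = value

store = DegradableStore()
store.put("bookmark:1", "")
store.read_only = True            # outage begins
print(store.get("bookmark:1"))    # existing data still accessible
try:
    store.put("bookmark:2", "")
except RuntimeError as e:
    print(e)                      # the honest message users actually see
```

The hard part in a real system is plumbing the flag through every write path, but the user experience payoff is exactly the bookmarking example above.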

Welcome to the world of Disaster Recovery.  Disasters have a User Experience too.  Have you planned your customer’s Disaster UX yet?

Posted in amazon, cloud | 12 Comments »

What to Do When Your Cloud is Down

Posted by Bob Warfield on April 21, 2011

Heroku status is down

This post is on behalf of the Enterprise CIO Forum and HP.

As I write this, Amazon is having a major East Coast outage that has affected Heroku, Foursquare, Quora, Reddit and others.  Heroku’s status page is just the sound of a lost sheep bleating repeatedly for its mother in heavy fog.  What’s a poor sheep to do about this problem anyway?  After all, isn’t a Cloud-based service dead once its Cloud is dead?

Rather than wringing our hands and shaking our heads about “That Darned Cloud, I knew this would happen”, let’s talk about it a bit, because there are some things that can and should be done.  Enterprises wanting to adopt the Cloud will want to have thought through these issues and not just avoided them by avoiding the Cloud.  In the end, they’re issues every IT group faces with their own infrastructure and there are strategies that can be used to minimize the damage.

I remember a conversation with a customer when I was a young Vice President of R&D at Borland, then a half a billion dollar a year software company (I miss it).  This particular customer was waxing eloquent about our Quattro Pro spreadsheet, but they just had one problem they wanted us to solve: they wanted Quattro Pro not to lose any data if the user was editing and there was a power outage.

I was flabbergasted.  “It’s a darned computer, it dies when you shut off the power!” I sputtered in only slightly more professional terms.  Of course I was wrong and hadn’t really thought the problem through.  With suitable checkpoints and logging, this is actually a fairly straightforward problem to solve and most of the software I use today deals with it just fine, thank you very much.

So it is with the Cloud.  Your first reaction may be, “We’re a Cloud Service, of course we go down if our Cloud goes down!”  But, it isn’t that black and white.  I like John Dodge’s thought that the Cloud should be treated just like rubber, sugar, and steel.  When Goodyear first started buying rubber from others, when Ford bought steel, and when Hershey’s bought sugar, do you think they didn’t take steps to ensure their suppliers wouldn’t control them?  Or take Apple.  Reports are that Japan’s recent tragedies aren’t impacting them much at all and that they’re absolutely sticking with their Japanese suppliers.  This has to come down to Apple and their suppliers having had a plan in place that was robust enough to weather even a disaster of these proportions.

What can be done?

First, this particular Amazon outage is apparently a regional outage, limited to the Virginia datacenter.   A look at Amazon’s status as I write this shows the West Coast infrastructure is doing okay:

Amazon: one up, one down

Most SaaS companies have to get huge before they can afford multiple physical data centers if they own the data centers.  But if you’re using a Cloud that offers multiple physical locations, you can have the extra security of multiple physical data centers very cheaply.  The trick is, you have to make use of it, but it’s just software.   A service like Heroku could’ve decided to spread the applications it’s hosting evenly over the two regions, or gone even further afield to offshore regions.

This is one of the dark sides of multitenancy, and an unnecessary one at that.  Architects should be designing not for one single super apartment for all tenants, but for a relatively few apartments, and the operational flexibility to make it easy via dashboard to automatically allocate their tenants to whatever apartments they like, and then change their minds and seamlessly migrate them to new accommodations as needed.  This is a powerful tool that ultimately will make it easier to scale the software too, assuming its usage is decomposable to minimize communication between the apartments.  Some apps (Twitter!) are not so easily decomposed.
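
The “relatively few apartments” approach boils down to a small piece of bookkeeping: a tenant-to-apartment map the ops dashboard can edit, plus a migration step that copies a tenant’s rows and flips the pointer.  A toy sketch with in-memory apartments standing in for real regions:

```python
class ApartmentManager:
    """Allocate tenants to apartments (shards) and migrate them on demand."""
    def __init__(self, apartments):
        self.apartments = {name: {} for name in apartments}
        self.placement = {}  # tenant -> apartment name

    def allocate(self, tenant, apartment):
        self.placement[tenant] = apartment

    def write(self, tenant, key, value):
        self.apartments[self.placement[tenant]][(tenant, key)] = value

    def migrate(self, tenant, target):
        source = self.apartments[self.placement[tenant]]
        rows = {k: v for k, v in source.items() if k[0] == tenant}
        self.apartments[target].update(rows)  # copy the tenant's rows over
        for k in rows:
            del source[k]                     # then vacate the old apartment
        self.placement[tenant] = target       # flip the pointer last

mgr = ApartmentManager(["us-east", "us-west"])
mgr.allocate("acme", "us-east")
mgr.write("acme", "cfg", {"plan": "pro"})
mgr.migrate("acme", "us-west")
print(mgr.placement["acme"], mgr.apartments["us-east"])  # moved, old shard empty
```

A real migration would have to handle writes arriving mid-copy, but the shape is the same: copy, verify, flip the pointer, clean up.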

This, then, is a pretty basic question to ask of your infrastructure provider: “How easy do you make it for me to access multiple physical data centers with attendant failover and backups?”  In this case, Amazon offers the capability, but Heroku effectively took it away from those who added Heroku to their stack.  I suspect they’ll address this issue pretty shortly, but it would’ve been a good question to explore earlier, no?  Meanwhile, what about the other vendors you may be using that build on top of Amazon?  Do they make it easy to spread things around and not get taken out if one Amazon region goes down?  If not, why not?

Here’s the answer you’d like to hear:

We take full advantage of Amazon’s multiple regions.  We’ll make it easy if one goes down for your app to be up and running on the other within an SLA of X.

Note that they may charge you extra for that service and it may therefore be optional, but at least you’ve made an informed choice.  Certainly all the necessary underpinnings are available from Amazon to support it.  Note that there are some operational niceties I won’t get into too deeply here, but I do want to mention in passing that it is also possible to offer a continuum of answers to the above question that have to do with the SLA.  For example, at my last startup, we were in the Cloud as a Customer Service app and decided we wanted to be able to bring back the service in another region if the one we were in totally failed within 20 minutes and with no more than 5 minutes of data loss.  That pretty much dictated how we needed to use S3 (which is slow, but automatically ships your data to multiple physical data centers), EBS, and EC2 to deliver those SLA’s.  Smart users and PaaS vendors will look into packaging several options because you should be backed up to S3 regardless, so what you’re basically arguing about and paying extra for is how “warm” the alternate site is and how much has to be spun up from scratch via S3.
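
Those SLA numbers translate directly into engineering constraints: the checkpoint interval bounds your worst-case data loss (RPO), and the sum of cold-start steps is your recovery time (RTO).  A back-of-envelope sketch with hypothetical timings, not our actual figures:

```python
# RPO: worst-case data loss is bounded by the checkpoint interval.
# RTO: recovery time is the sum of everything you must do from scratch.
# All timings below are hypothetical illustrations.

rpo_target_min = 5
checkpoint_interval_min = 5        # must be <= the RPO target
assert checkpoint_interval_min <= rpo_target_min

rto_target_min = 20
recovery_steps = {                 # minutes per step, cold-starting via S3
    "detect failure and decide": 5,
    "launch EC2 instances": 5,
    "restore data from S3": 8,
    "repoint DNS": 2,
}
total = sum(recovery_steps.values())
print(f"cold recovery: {total} min (target {rto_target_min})")
print("meets SLA" if total <= rto_target_min else "need a warmer standby")
```

If the cold-start sum blows the budget, that’s the signal to pay for a warmer standby; the “warmth” you buy is exactly the steps you get to cross off that list.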

Another observation about this outage:  it is largely focused on EBS latency, though there is also talk of difficulty connecting to some EC2 instances.  This is the second time in recent history we’ve heard of some major EBS issues.  We read that Reddit had gone down over EBS latency issues less than a month ago.  Clearly anyone using EBS needs to be thinking about failure as a likely possibility.  In fact, the ReadWriteWeb article I linked to implies Reddit had been seeing EBS problems for quite some time.  One wonders if Heroku has too.

What will you do if you’re using EBS and it fails?  Reddit says they’re rearchitecting to avoid EBS.  That’s certainly one approach, but there may be others.  Amazon provides considerable flexibility in the combination of local disk, EBS, and S3 to fashion alternatives.  The trick is in making your infrastructure sufficiently metadata driven, and in having thought through the scenarios and tested them well enough, that you can adapt in real-time when problems develop.  In this respect, I have seen Netflix admonish that the only way to test is to keep taking down aspects of your production infrastructure and making sure the system adapts properly.  That’s likely another good question to ask your PaaS and Cloud vendors–“Do you take down production infrastructure to test your failover?”  Of course you’d like to see that and not just take their word for it too.
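
The Netflix-style test can be modeled even at toy scale: kill a random node and assert the system still answers.  (Real chaos testing does this to production infrastructure, as noted above; the simulation below just shows the shape of the assertion.)

```python
import random

class Cluster:
    """Toy replicated cluster: any live node can answer any request."""
    def __init__(self, n):
        self.nodes = {f"node-{i}": True for i in range(n)}  # name -> alive

    def kill_random_node(self):
        victim = random.choice([n for n, up in self.nodes.items() if up])
        self.nodes[victim] = False
        return victim

    def serve(self, request):
        live = [n for n, up in self.nodes.items() if up]
        if not live:
            raise RuntimeError("total outage")
        return f"{request} handled by {random.choice(live)}"

cluster = Cluster(3)
victim = cluster.kill_random_node()      # the chaos step
response = cluster.serve("GET /status")  # must still succeed with 2 of 3 up
print(response.startswith("GET /status handled by"))  # survived the kill
```

The value isn’t the simulation; it’s making “kill something, then assert service” a routine you actually run, rather than a property you assume.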

I haven’t even touched on the possibilities of utilizing multiple Cloud vendors to ensure further redundancy and failover options.  It would be fascinating to see a PaaS service like S3 that is redundant across multiple data centers and multiple cloud vendors.  That seems like a real winner for building the kind of services that will be resilient to these kinds of outages.  It’s early days yet for the Cloud, even though some days it seems like Amazon has won.  There’s plenty of opportunity for innovators to create new solutions that avoid the problems we see today.  Even the experts like Heroku aren’t utilizing the Cloud as well as they should be.

Now is your chance!

This post is on behalf of the Enterprise CIO Forum and HP.

Related Articles

James Cohen has some good thoughts on how to work around Amazon outages.

I tweeted: “The beauty of Cloud: We can blame Amazon instead of our IT when we’re down. Except we really can’t:

Excellent discussion here about how Netflix has a ton of assets on AWS and was unaffected.  In their words, they run on 3 regions and architected so that losing 1 would leave them running.  As Netflix says, “It’s cheaper than the cost of being down.”  Amen.  I’m seeing some anonymous posts whining about the exact definition of zones versus regions, what’s a poor EU service to do, etc., etc.  Study Netflix.  They’re up.  These other services are down.  Oh, and forget the anonymous comments.  Give your name like a real person and don’t be a lightweight.

Lots of comments here and there also that multi-cloud redundancy is hard.  Aside from the fact that this particular incident didn’t require multiple clouds, consider that it is fantastically easier to go multi-cloud than it is to build multiple physical data centers.  Salesforce was almost a billion-dollar-a-year company before they built a second data center.  Speaking of which, I bet they want to chat with the folks at Heroku now that they own them.

Clay Loveless gets failover in the Cloud.  JustinB, not so much; he’s too ready to take Amazon’s word about their features.  Makes me wonder whether folks who came to AWS early, saw it buggy, and got used to dealing with that are better able to handle problems like today’s.  When you run a service, it’s your problem, even when your Cloud vendor fails.  Gotta figure it out.

Lydia Leong (Gartner) and I don’t always agree, but she’s spot on with her analysis of the Amazon “failure” and what customers should have been doing about it.

EngineYard apparently was set to offer multiple AWS regions in Beta, and accelerated the rollout to mitigate AWS problems for their customers.  Read their Twitter log on it.  I’d love to hear from some of the customers who tried it how well it worked.

Posted in amazon, apple, business, cloud, strategy | 10 Comments »

Give Me a Layer to Stand On, And I’ll Move You to the Cloud

Posted by Bob Warfield on April 14, 2011

This post is on behalf of the Enterprise CIO Forum and HP.  It’s the first in a series of posts sponsored by the Enterprise CIO Forum and HP; thanks for the sponsorship!

Apologies to Archimedes, who is supposed to have said:

“Give me a place to stand and I’ll move the Earth.”

He was speaking of leverage and the idea that a great weight can be moved with very little force given the right amount of leverage.  Leverage is what IT needs to make an orderly progression to the Cloud without having to expend too much force.

Where software is concerned, leverage often comes in the form of finding the right infrastructure layer from which to effect the desired transformation.  A good layer may come in the form of some standard that becomes the Rosetta stone whereby lots of different implementations are made equal and customers have more choices.  It may come from a particular abstraction that is powerful enough to sit on top of formerly quite different paradigms and make them all work alike. Leverage may come in the form of a new box of some kind that insulates one set of connections from the other, once again making the implementations on the other side of the box equal.

In order to move to the Cloud in an orderly and proficient manner, IT still needs the right layers to stand on.  Cloud providers like Amazon have done a little bit of this work, but there is a lot more still to be done.  You can see the difference in a service like Amazon’s, that looks very familiar to someone with at least a background in virtualization, versus a service like Google App Engine that forces you to more radically change how you think about infrastructure.  A good Cloud layer will minimize that disruptive thinking, perhaps at some cost of performance, in order to achieve greater agility.

Ideally, the right Layers meet IT halfway to the Cloud, forcing them to change much less than a direct jump into the Cloud, and facilitating the process of making different infrastructure variations look the same.  HP’s Hybrid Delivery is all about creating the right methodology and services to enable IT to think about their own data centers, private clouds, and public clouds as similarly as possible, as John Dodge points out over on the Enterprise CIO Forum.  Dodge views the right layers as facilitating agility, and that’s exactly what a good layer ought to do.

The next step beyond methodology and services will be technologies that act as layers.  IT will be able to refactor their infrastructure into components that are best left in the corporate data center, components that need a private cloud, and components that can thrive in public clouds.  The test of the best technologies will be how agile and transparent this refactoring can be, as well as the completeness of the new layer in terms of solving the key problems:

–  Security and Authentication across the different infrastructures.

–  Latency issues that will arise when some data and API’s are leaving the data center and picking up a much more expensive round trip cost due to the latency of the Cloud.

–  Management and Monitoring:  How do we make it easy to do these jobs in the same way no matter which infrastructure is involved?
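In code terms, such a layer boils down to one contract that every infrastructure implements, so IT writes against the layer rather than against any one vendor.  A minimal Python sketch, with hypothetical class and method names:

```python
from abc import ABC, abstractmethod

class Infrastructure(ABC):
    """The 'layer': one contract, many infrastructures behind it."""
    @abstractmethod
    def provision(self, app): ...
    @abstractmethod
    def health(self, app): ...

class CorporateDataCenter(Infrastructure):
    def provision(self, app):
        return f"racked {app} on-prem"
    def health(self, app):
        return "ok"

class PublicCloud(Infrastructure):
    def provision(self, app):
        return f"launched {app} instances in the cloud"
    def health(self, app):
        return "ok"

def deploy(app, infra: Infrastructure):
    """IT writes against the layer, not the infrastructure behind it."""
    return infra.provision(app), infra.health(app)

# The same deploy call works unchanged across infrastructures.
for infra in (CorporateDataCenter(), PublicCloud()):
    result, status = deploy("mail-server", infra)
    assert status == "ok"
```

The real work, of course, is hidden behind those two methods: security, latency, and monitoring all have to be handled consistently on each side of the interface.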

There are many more dimensions such Cloud Refactoring Layers will need to address, but those serve as a reasonable framework to start a discussion.  As mentioned before, an appropriate Layer could take many different forms.  Imagine, for example, a Cloud Appliance.  Instead of installing a rack of blade servers, suppose you could insert a device in your rack that talked to servers in the Cloud and made managing and using them look very similar to having the physical blade servers right in your data center.  What an interesting device this would be.  Imagine a virtual Cloud “server” that acts as if it has 128 cores (or however many it’s appropriate to share per network connection in your rack based on the latency and bandwidth capacity of your Cloud access), terabytes of data, compatibility with the management tools you know and love, a secure bulletproof connection to its Cloud backend that couldn’t “leak” into your network, and so on.  Yet the device would sit there in your rack taking very little space and power.  Consider it a Cloud “Force Multiplier” for your data center.

You wouldn’t have to entrust anything too sensitive to your Cloud Appliance.  Perhaps it simply allows you to offload some capacity from your Data Center to make room for more mission critical apps.  You could envision application dedicated versions of such an appliance aimed at apps like Mail Servers, Web Servers, or Sharepoint and other Social Apps.  Or, perhaps the appliance would be tasked with doing backups for PC’s and the non-critical servers.  Why mess with tape and other physical media when you can get multiple physical location redundant backups very easily from the Cloud?  Such an appliance would be exactly the kind of leveraged Layer I’m talking about.  It would make it easy for IT to start shifting apps to the Cloud without undue strain.

Such appliances already exist and are referred to as “Virtual Cluster Appliances.”  Expect a lot more to develop along these lines.

This post is on behalf of the Enterprise CIO Forum and HP.

Posted in cloud | Leave a Comment »

Any Hardware Company Not Investing Big in Cloud is Nuts

Posted by Bob Warfield on April 7, 2011

If the Cloud is here to stay, and the trend to move apps into the Cloud is only going to get stronger, then any hardware or infrastructure company that doesn’t invest big in the Cloud is nuts.

The problem these vendors face is “Cloud as disintermediator”.  Companies that buy into the Cloud are letting the Cloud purveyor make the datacenter decisions.  And Clouds aggregate a lot of those decisions under one roof to gain their economies of scale.  It’s like waking up one morning as a hardware vendor and discovering your best customers had switched to another hardware vendor.  The difference is you can see it coming, and you have a chance to do something about it.

In fact, hardware and infrastructure vendors have quite a lot they can do about it:

– Make it a point not to lose Cloud deals.  When Cloud vendors come knocking, lock them up.  There can’t be as many Clouds as there are traditional vendors, and each one will have a lot of inertia to stay with their original infrastructure choices.  Make sure they choose you.

– Start your own Cloud.  This one may be pretty risky if it locks you out of being the basic nuts and bolts of other popular Clouds, but you have to at least consider it.  At the same time, Cloud vendors will need to decide how they feel about using say IBM hardware with IBM pushing their own Cloud.  It muddies the waters.

– Invest in building products that are uniquely suited to the Cloud.  There’ve been some fascinating glimpses of what large scale Cloud data centers need.  There is a ton of intellectual property opportunity in the world of the Cloud.  Now is the time to start staking your claims to it.  Get out your blank sheets of paper, sign up with some big Clouds to work on their needs, and you will wind up with some good ideas.  That’s only the start.  In the commodity world of the Cloud, execution will be everything.

– While we’re on the subject of execution, think what the razor thin margins demanded by the Cloud mean to your business model of today.  Think what they mean to the needs of the Cloud vendors.  Skate as fast as you can to where that puck is going to be.

I liken the Cloud a bit to what CNC machinery did to the manufacturing world.  We went from a situation where each and every machine required a skilled manual machinist to run it, to a world where computer-controlled machine tools can be run 4 or 5 to an operator and the operator can be far less skilled than a master machinist.  The same could be said for servers.  We’re moving from a world where every app had its own servers and lots of wasted capacity, to a world where we share that capacity internally via virtualization, and then finally to a world where we’re sharing capacity that is time sliced across many customers in huge data centers to wring every last drop of utilization.  Yes, there are drivers like Big Data that offset some of that, but it is hard to escape the possibility that in the long run, the world may not need as many servers as it once did.
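The utilization math behind that consolidation is simple back-of-envelope arithmetic; the numbers below are purely illustrative, not measured:

```python
# 100 apps, each on a dedicated server idling at ~10% average utilization,
# consolidated onto shared, time-sliced servers run much hotter.
dedicated_servers = 100
avg_utilization = 0.10
target_utilization = 0.70   # what a dense, multi-tenant cloud might sustain

shared_servers = dedicated_servers * avg_utilization / target_utilization
assert round(shared_servers) == 14   # roughly 100 boxes down to 14
```

Even with generous slack added back for peaks and redundancy, the direction is clear, which is exactly why the long-run server count could shrink.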

(This article motivated by a Tweet by Jeff Kaplan about Dell investing $1B to expand its Cloud solutions.)

Related Articles

To get an idea just how different hardware purpose-built for the Cloud can be, check out Facebook’s servers.  They’ve just open-sourced the designs for their server hardware, which is quite a bit more energy-efficient than their leased data center hardware.  The difference in efficiency shows how much of an advantage can be had by specifically targeting the Cloud.  Facebook was smart to open source, if only because it makes it more likely the design will be commoditized and hence easier for them to buy even more cheaply.

Posted in cloud | 3 Comments »

Fred Wilson is Still Wrong About Streaming Music and Amazon’s Locker Will Rock

Posted by Bob Warfield on March 29, 2011

Fred and I have tangled before over the issue of owning your music versus streaming it.  Fred continues undaunted in his latest post, a reaction to Amazon’s Music Locker announcements:

I don’t get the idea of music locker services like the one Amazon just announced. If I’m going to stream music from the cloud, why should I continue to buy files and collect them? I’ve been a Rhapsody subscriber for something like 11 or 12 years and I love subscription streaming services. I’ve just started using on my Android and on the web and I love it too.

Locker services seem like they are designed to continue the physical model of collecting music and buying music when there is a new and better way – just subscribe to music dial tone and listen to whatever you want wherever you want.

I’m bearish on locker services and bullish on subscription streaming services.

Naturally, I am equally as undaunted.  The answer for why we differ so much is a simple one: people interact with their music in different ways.  Fred seems to love variety.  In a comment to my original post, he says:

i may be wrong. it happens all the time!

but it’s how i see all of this playing out

i own close to a thousand vinyl records

i own at least that many CDs

i have a terabyte server full of mp3s

all of that is available in our home to our whole family

and yet we listen to rhapsody and other streaming services close to 80% of the time

it’s just easier. we don’t have to wonder if we own it. we just decide what we want to listen to and then play it.

I like variety too, and I use streaming services sometimes when I want to go in search of it.  But I like curation better.  My curation.  The songs I know and love I want to be able to hear when I want to hear them.  Music is not background ambience for me.  I literally can’t have it on when I’m working, or pretty soon I’m more focused on the music than the work.  Moreover, I’m concerned about the future when I don’t own the bits.

Look, things change.  The music industry is arbitrary and capricious.  Fred himself has suffered the outrageous slings and arrows of their behavior to the point he eventually confessed to pirating some music to get access to it.

I don’t like the possibility that the streaming service I’ve paid for falls into a dispute with a music label and suddenly can’t stream some artist I love.  I don’t like the idea that some streaming service concentrates so much power they become a monopoly and decide to charge per listen or some such nonsense.  We only just got the Beatles on iTunes after years fer cryin’ out loud.  Let’s keep as much power as possible in the hands of the music lovers and not the record labels or distribution (e.g. streaming) channels.  I know the latter offer better ways for VC’s and Startups to make money, but that doesn’t mean I have to like it or support it.

If I am going to make the emotional commitment to the music, I want to be in control.  Amazon’s Locker Service is the ideal way to have my cake and eat it too.  I can get all the benefits of streaming and keep all the benefits of owning.  Fred may not value the benefits of owning, and that’s great for him.  He should stick to streaming.  But his bearish predictions for the non-streaming world don’t reflect the whole universe of ways people interact with their music.

BTW, this is no different than insisting you be in control of your data when you sign up with a SaaS software company.  If they can’t deliver regular backups of your data whenever you want to see it, it’s time to start asking why.

Related Articles

For the record, I agree with the sentiment that this isn’t innovative.  We’re past the Early Adopters.  Amazon’s service is the sound the Chasm makes when you’re already across it.  In light of that, I will see Fred’s Bearish call on lockers and raise with my own Bearish call on streamers.  Streaming music businesses that can’t also offer a locker service are going to be limited to either casual use as a second service and not the System of Musical Truth that is my music collection, or they’re going to be limited to the portion of the musical audience who, like Fred, don’t require a music collection.  In short, there will be a ceiling on their success if they can’t support both models, particularly in light of Amazon and Apple’s distribution strength.

Posted in business, cloud, strategy, user interface, venture, Web 2.0 | 4 Comments »

Does the Internet Mean There Can Only Be One?

Posted by Bob Warfield on March 8, 2011

I read with interest today Hubspot’s coverage of their new monster VC round.  They’ve raised a $32M Series D from Sequoia, Google, and Salesforce–certainly an all-star cast.

There’s a lot of interesting data in these announcements, such as Hubspot’s view of what market shares look like for the Marketing Automation category:

If true, and we should wait to hear what the other vendors have to say before concluding it is, it suggests Hubspot is blowing away their competition at Eloqua and Marketo, and not by just a little.  That’s pretty big news too.

But there was one part of these announcements that really caught my eye.  Brian Halligan says:

In industries formed prior to the internet, oligopolies naturally formed where there is a market leader holding 20% market share, a 2nd place competitor having 18% or so, a 3rd having 15%, etc.  In industries that have formed in the last 10 or so years, the opposite seems to be happening where the winner takes all (or at least 80% of the market cap in that given industry).  A few examples include Amazon, VMWare, Zappos, Google, and even Groupon.

Dharmesh Shah follows with:

For the following leading companies, see if you can name the #2 player and #3 in their category.  You have 30 seconds, I’ll wait:

  • Amazon
  • NetFlix
  • VMWare
  • eBay

Difficult, isn’t it?  Chances are you struggled a bit with coming up with the #2 and failed completely to come up with #3.  The point here is, as these tech categories evolved, the #1 player became so dominant that we often don’t even know who #2 and #3 are.

I don’t know about you, but I’m skeptical about this new “rule”.  There’ve been so many “rules” that the Internet has supposedly changed in some form or fashion.  I think it’s worth delving into this one.  First, is it really true that there can be only “one”, or is there something about this list or this environment that makes it a temporary aberration?

First, we could as well have asked whether there can be “only one” SaaS company in each category.  Certainly that market is closer to what HubSpot is than these companies they’re holding up as examples.

While there are not tons of companies, there is often more than one public SaaS company in a category.  I’m going to call that a strike against the “only one” hypothesis.  But, I will point out, that it is very difficult to fund a new SaaS company today and they take a lot of capital.  It may very well be that a factor at work here that has nothing to do with the Internet is the funding environment.  VC’s today are focused on companies that can be bootstrapped before they bring their millions to bear.  HubSpot got their first capital before we had fully entered that era.  It would be hard to found a company today on a slide show and team, which is where most of the SaaS world started.  So that’s a factor that has changed, but that could change back.  Personally, I think that when VC’s get tired of funding 12 different add-ons to each popular service, each with no perceivable barrier to entry, and each at the mercy of services like Twitter, they may start to look for opportunities with more substance than the usual Consumer Internet Plays that need no marketing.  From that perspective, the more firms like HubSpot that succeed, the better.  But, for the time being, we’re immersed in Dot Com Bubble 2.0 as huge valuations roil around us in markets where “there can only be one.”

Second, some of these companies mentioned have profound network effects.  That’s an ideal reason for there only to be one.  eBay is the best example.  I did an auction e-commerce business called PriceRadar that was aimed at delivering some cool optimal merchandising and selling tools for online auctions.  When we started there were circa 8 auction houses and more being announced all the time.  There were going to be not only huge horizontal auctions like eBay, but every major Internet service would have one (like Yahoo!), and there would be vertical auctions for industry (DoveBid).  Within 2 years very little was left except for eBay.  That’s how strong the network effects are for that business.  Netflix has network effects.  How many subscriptions to movies will a household tolerate?  Amazon may have network effects.  They are the online superstore merchandise-wise, they control some key franchises like books, they sell readers that read their books and create further network lock-in, and Clouds may have network effects due to latency.

The upshot of network effects is that there is a very short window for competitors to respond.  If they don’t, the compound interest associated with the network effect and the lock in makes it impossible to catch up.  That should be a sobering thought if you’re competing in a market with network effects, but it isn’t clear to me that Hubspot is.  Do companies plug and unplug their marketing automation software?  To some extent they do.  I was given that perspective by no less an authority than a key executive at one of the three Marketing Automation companies I’ve mentioned so far in this post.  Color me skeptical about network effects for these guys.
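For what it’s worth, that “compound interest” is easy to see in a toy model where the growth rate rises with the installed base; the rates and bonus below are invented for illustration, not empirical:

```python
def grow(users, periods, base_rate=0.05, network_bonus=1e-8):
    """Each period, growth = base rate plus a bonus proportional to the
    installed base -- the compounding effect of a network-effect market."""
    for _ in range(periods):
        users *= 1 + base_rate + network_bonus * users
    return users

leader = grow(1_000_000, 24)    # responded inside the window
laggard = grow(500_000, 24)     # started at half the base
assert leader > 2 * laggard     # the 2:1 gap widens every period
```

Because the leader’s larger base earns a larger bonus, the ratio between the two only grows, which is the sense in which the window to respond closes for good.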

What else leads to just one?

Platforms, which are related to network effects.  Sometimes they become so pervasive you must deal with them.  Google owns search.  Facebook is another.  The network effects aren’t as striking as eBay’s when you deal with a platform, but it is a function of needing to be compatible with the status quo and it being too hard to reinvent all the wheels you get with the platform.  Oracle and SAP have platforms in this sense.

What about VMWare?

That one is pretty easy–there are VM managers that are just as popular as VMWare, but they’re Open Source.  You could argue MySQL was just as popular as the big DB vendors, but never hit their revenues because they were Open Source.  This is a scary thing about building a business around Open Source–you may succeed without getting much for it.  It’s very tricky to find exactly the right balance that ignites passion while delivering profits.

How about properties like Groupon?

Man, hard to believe they won’t see a #2 and #3 that make good money.  Living Social is already on that road.  Moreover, there’s been a spate of articles lately that are finally recognizing that coupons aren’t really even all that unique and they may not be the best thing for you and your customers in terms of fostering a long-term relationship.  It’s like the world started switching from newspapers to online media and forgot to bring their coupons along.  So, wow, Groupon is great, I have coupons again!  And then pretty soon we’ve got coupons coming from 18 different mailing lists, we’ve got flash shopping sites, we’ve got small businesses getting hit with tons of visitors who buy below cost and then never come back, and we realize we weren’t missing all that much.

I’m not buying “there can only be one”.  There may only be one if the others don’t get moving soon enough, and the Internet may shorten that window, but that’s all it does.

What do you think?

Related Articles

David Raab, longtime marketing automation expert, raises a little heck with the idea that the game is over, there can be only one, and it is Hubspot.  He also pointed to the little inconsistency in Hubspot’s graph of lead sources, which shows email and not inbound to be the lion’s share.  That caught my eye too.  Read David’s article to see what Dharmesh had to say on that one (good explanation).

Posted in business, cloud, Marketing, strategy, venture | 2 Comments »
