SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for the ‘data center’ Category

Minimizing the Cost of SaaS Operations

Posted by Bob Warfield on March 29, 2010

SaaS software is much more dependent on being run by the numbers than conventional on-premises software, because the expenses are front-loaded and the revenues are back-loaded.  SAP learned this the hard way with its Business By Design product, for example.  If you run the numbers, there is a high degree of correlation between a low cost of delivering the service and high growth rates among public SaaS companies.  It isn’t hard to understand–every dollar spent delivering the service is a dollar that can’t be spent to find new customers or improve the service.

So how do you lower your cost to deliver a SaaS service? 

At my last gig, Helpstream, we got our cost down to 5 cents per seat per year.  I’ve talked to a lot of SaaS folks, and nobody I’ve met yet has come even close.  In fact, they largely don’t believe me when I tell them what the figures were.  The camp that is willing to believe immediately wants to know how we did it.  That’s the subject of this “Learnings” blog post.  The formula is relatively complex, so I’ll break it down section by section, and I’ll apologize up front for the long post.

Attitude Matters:  Be Obsessed with Lowering Cost of Service

You get what you expect and inspect.  Never was a truer thing said than in this case.  It was a deep-seated part of the Helpstream culture and strategy that Cost of Service had to be incredibly low.  So low that we could exist on an advertising model if we had to.  While we never had to go that route, a lot was invested up front, when it mattered, to get the job done.  Does your organization have the religion about cutting service cost, or are there 5 or 6 other things that you consider more important?

Go Multi-tenant, and Probably go Coarse-grained Multi-tenant

Are you betting you can do SaaS well enough with a bunch of virtual machines, or did you build a multi-tenant architecture?  I’m skeptical about your chances if you are in the former camp, unless your customers are very, very big.  Even so, the peculiar requirements of very big customers (they will insist on doing things their way and you will cave) will drive your costs up.

Multi-tenancy lets you amortize a lot of costs so that they’re paid once and benefit a lot of customers.  It helps smooth loads: when one customer hits a peak, the others probably aren’t peaking.  It clears the way to massive operations automation, which is much harder in a virtual machine scenario.

Multi-tenancy comes in a lot of flavors.  For this discussion, let’s consider fine-grained versus coarse-grained.  Fine-grained is the Salesforce model.  You put all the customers together in each table and use a field to extract them out again.  Lots of folks love that model, even to a religious degree that decrees only this model is true multi-tenancy.  I don’t agree.  Fine-grained is less efficient.  Whoa!  Sacrilege!  But true, because you’re constantly doing the work of separating one tenant’s records from another.  Even if developers are protected from worrying about it by clever layering of code, it can’t help but require more machine resources to constantly sift records.

Coarse-grained means every customer gets their own database, but these many databases are all on the same instance of the database server.  This is the model we used at Helpstream.  It turns out that a relatively vanilla MySQL architecture can support thousands of tenants per server.  That’s plenty!  Moreover, it requires fewer machine resources and it scales better.  A thread associated with a tenant gets access to the one database right up front and can quit worrying about the other customers right then.  A server knows that the demands on a table only come from one customer, and it can allocate CPUs table by table.  Good stuff, relatively easy to build, and very efficient.
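
To make the coarse-grained model concrete, here is a minimal Java/JDBC sketch of tenant routing when each tenant owns a whole database on a shared MySQL server.  The host name, database naming convention, and credential handling are placeholders of mine, not Helpstream’s actual implementation:

```java
// Minimal sketch of coarse-grained tenant routing, assuming one MySQL
// database per tenant on a shared server. Host name, naming convention,
// and credentials are hypothetical placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class TenantConnectionFactory {
    private static final String DB_SERVER = "jdbc:mysql://pod1.example.com:3306/";

    // Each tenant's data lives in its own database (e.g. "tenant_acme"),
    // so once a thread is connected, no per-row tenant filtering is needed.
    public Connection connectionFor(String tenantId, String user, String password)
            throws SQLException {
        String database = "tenant_" + tenantId;  // coarse-grained isolation
        return DriverManager.getConnection(DB_SERVER + database, user, password);
    }
}
```

Once the connection is handed out, every query that thread runs is automatically scoped to the one tenant, which is exactly the efficiency argument above.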

The one downside of coarse grain I have discovered is that it’s hard to analyze all the data across customers, because it’s all in separate tables.  Perhaps the answer is a data warehouse constructed especially for the purpose of such analysis that’s fed from the individual tenant schemas.

Go Cloud and Get Out of the Datacenter Business

Helpstream ran in the Amazon Cloud using EC2, EBS, and S3.  We had help from OpSource because you can’t run mail servers in the Amazon Cloud–the IPs are already largely blacklisted due to spammers using Amazon.  Hey, spammers want a low cost of ops too!

Being able to spin up new servers and storage incrementally, nearly instantly (usually way less than 10 minutes for us to create a new multi-tenant “pod”), and completely from a set of APIs radically cuts costs.  Knowing Amazon is dealing with a lot of the basics, like the network infrastructure and replicating storage to multiple physical locations, saves costs.  Not having to crawl around cages, unpack servers, or replace things that go bad is priceless.
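
As an illustration of what “completely from a set of APIs” looks like, here is a minimal sketch using the AWS SDK for Java to launch a pod server from a pre-built image.  The AMI ID, instance type, and credentials are placeholders; this is not Helpstream’s actual provisioning code:

```java
// Minimal sketch of launching a new server through the EC2 API.
// The AMI ID, instance type, and credentials are placeholders.
import java.util.List;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.RunInstancesRequest;

public class PodLauncher {
    public static void main(String[] args) {
        AmazonEC2 ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Launch one server from an image with the app stack already baked
        // in -- no cages to crawl, no boxes to unpack.
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-12345678")   // hypothetical pod image
                .withInstanceType("m1.large")
                .withMinCount(1)
                .withMaxCount(1);

        List<Instance> launched =
                ec2.runInstances(request).getReservation().getInstances();
        System.out.println("New pod server: " + launched.get(0).getInstanceId());
    }
}
```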

Don’t mess around.  Unless your application requires some very special hardware configuration that is unavailable from any Cloud, get out of the data center business.  This is especially true for small startups, which can’t afford things like redundant data centers in multiple locations.  Unfortunately, it is a hard-to-impossible transition for large SaaS vendors that are already thoroughly embedded in their Ops infrastructure.  Larry Dignan wrote a great post capturing how Helpstream managed the transition to Amazon.

Build a Metadata-Driven Architecture

I failed to include this one in my first go-round because I took it for granted that people build metadata-driven architectures when they build multi-tenancy.  But that’s only partially true, and a metadata-driven architecture is a very important thing to build.

Metadata literally means data about data.  For much of the Enterprise Software world, the application is controlled by code, not data.  Want some custom fields?  Somebody has to go write some custom code to create and access the fields.  Want to change the look and feel of a page?  Go modify the HTML or AJAX directly.

Having all that custom code is anathema, because it can break, it has to be maintained, it’s brittle and inflexible, and it is expensive to create.  At Helpstream, we were metadata happy, and proud of it.  You could get on the web site and provision a new workspace in less than a minute–it was completely automated.  Upgrades for all customers were automated.  A tremendous amount of customization was available through configuration of our business rules platform.  Metadata gives your operations automation a potent place to tie in as well.
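
Here is a minimal sketch of what the custom-fields half of that looks like when it’s metadata instead of code: field definitions are rows in a table, and one generic code path reads them.  The table and column names are hypothetical, not Helpstream’s actual schema:

```java
// Minimal sketch of metadata-driven custom fields: adding a field is an
// INSERT into a metadata table, not a code change. Names are hypothetical.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class CustomFieldCatalog {
    // Define a new custom field for an entity (e.g. "case") as pure data.
    public void defineField(Connection db, String entity, String name, String type)
            throws SQLException {
        PreparedStatement stmt = db.prepareStatement(
                "INSERT INTO field_defs (entity, field_name, field_type) VALUES (?, ?, ?)");
        stmt.setString(1, entity);
        stmt.setString(2, name);
        stmt.setString(3, type);
        stmt.executeUpdate();
    }

    // Generic code reads the definitions and can render forms, validate
    // input, or generate SQL -- one code path serves every tenant.
    public List<String> fieldsFor(Connection db, String entity) throws SQLException {
        PreparedStatement stmt = db.prepareStatement(
                "SELECT field_name FROM field_defs WHERE entity = ?");
        stmt.setString(1, entity);
        ResultSet rs = stmt.executeQuery();
        List<String> names = new ArrayList<String>();
        while (rs.next()) {
            names.add(rs.getString("field_name"));
        }
        return names;
    }
}
```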

Open Source Only:  No License Fees!

I know of SaaS businesses that say over half their operating costs are Oracle licenses.  That stuff is expensive.  Not for us.  Helpstream had not a single license fee to pay anywhere.  Java, MySQL, Lucene, and a host of other components were there to do the job.

This mentality extends to using commodity hardware and Linux versus some fancy box and an OS that costs money too.  See for example Salesforce’s switch.

Automate Operations to Death

Whatever your Operations personnel do, let’s hope it is largely automating and not firefighting.  Target key areas of operational flexibility up front.  For us, this was system monitoring, upgrades, new workspace provisioning, and the flexibility to migrate workspaces (our name for a single tenant) to different pods (multi-tenant instances). 

Every time there is a fire to be fought, you have to ask several questions and potentially do more automation:

1.  Did the customer discover the problem and bring it to your attention?  If so, you need more monitoring.  You should always know before your customer does.

2.  Did you know immediately what the problem was, or did you have to do a lot of digging to diagnose it?  If you had to dig, you need to pump up your logging and diagnostics.  BTW, the most common Ops issue is, “Your service is too slow.”  This is painful to diagnose.  It is often an issue with the customer’s own network infrastructure, for example.  Make sure to hit this one hard.  You need to know how many milliseconds were needed for each leg of the journey.  We didn’t finish this one, but were actively thinking of implementing capabilities like Google’s, which use code at the client to tell when a page seems slow.  Our pages all carried a comment that told how long the server side took (see the sketch after this list).  By comparing that with a client-side measure of time, we would’ve been able to tell whether it was “us” or “them” more easily.

3.  Did you have to perform a manual operation or write code to fix the problem?  If so, you need to automate whatever it was.
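
Here is a minimal sketch of the server-side half of the timing idea from point 2: a servlet filter that appends the server’s elapsed time to each page.  The client-side measurement it would be compared against is elided, and a production version would wrap the response to guarantee the output buffer is still open:

```java
// Minimal sketch: stamp each page with server-side elapsed time so it can
// be compared against a client-side measurement to separate "us" vs "them".
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class ServerTimingFilter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        chain.doFilter(req, resp);  // let the page render
        long elapsed = System.currentTimeMillis() - start;
        // Append the timing as an HTML comment, as described above.
        // (Sketch only: assumes the page used getWriter() and the response
        // buffer has not been fully flushed yet.)
        resp.getWriter().write("<!-- server time: " + elapsed + " ms -->");
    }
}
```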

This all says something about the skillset your Ops people need, BTW.  It also argues for making Ops part of Engineering, because you can see how much impact all of this has on the product’s architecture.

Hit the Highlights of Efficient Architecture

Without going down the rathole of premature optimization, there is a lot of basic stuff that every architecture should have.  Thread pooling.  Good clean multi-threading that isn’t going to deadlock.  Idempotent operations and good use of transactions with rollback in the face of errors.  Idempotency means that if the operation fails, you can just do it again and everything will be okay.  Smart use of caching, but not too much caching.  How does your client respond to dropped connections?  How many round trips does the client require to render a high-traffic page?
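
To illustrate the idempotency-plus-transactions point, here is a minimal JDBC sketch against MySQL.  It assumes a hypothetical table with a unique key on external_id, so retrying after any failure can’t create a duplicate, and the rollback guarantees nothing is left half-done:

```java
// Minimal sketch of an idempotent write with transaction rollback.
// Assumes a MySQL table "cases" with a unique key on external_id.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CaseWriter {
    // Safe to call again after any failure: the unit of work either
    // commits once or leaves no trace.
    public void recordCase(Connection db, String externalId, String title)
            throws SQLException {
        boolean oldAutoCommit = db.getAutoCommit();
        db.setAutoCommit(false);
        try {
            PreparedStatement stmt = db.prepareStatement(
                    "INSERT INTO cases (external_id, title) VALUES (?, ?) "
                  + "ON DUPLICATE KEY UPDATE title = VALUES(title)"); // MySQL upsert
            stmt.setString(1, externalId);
            stmt.setString(2, title);
            stmt.executeUpdate();
            db.commit();    // all or nothing
        } catch (SQLException e) {
            db.rollback();  // leave the database untouched so a retry is safe
            throw e;
        } finally {
            db.setAutoCommit(oldAutoCommit);
        }
    }
}
```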

We used Java instead of one of the newer, slower languages.  Sorry, didn’t mean to be pejorative, and I know this is a religious minefield, but we got value from Java’s innate performance.  PHP and Python are pretty cool, but I’m not sure they’re what you want when you’re trying to squeeze every last drop of operations cost out of your system.  The LAMP stack is cheap up front, but SaaS is forever.

Carefully Match Architecture with SLA’s

The Enterprise Software and IT world is obsessed with things like failover.  Can I reach over and unplug this server and automatically fail over to another server without the users ever noticing?  That’s the ideal.  But it may be a premature optimization for your particular application.  As Donald Knuth put it, “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.”

Ask yourself: how much is enough?  We settled on 10 minutes with no data loss.  If our system crashed hard and had to be completely restarted, it was good enough if we could do that in less than 10 minutes with no loss of data.  That meant no failover was required, which greatly simplified our architecture.

To implement this, we ran a second MySQL server replicated from the main instance and captured EBS backup snapshots from that second server.  This took the load of snapshotting off the main server and gave us a cheaper alternative to true full failover.  If the main server died, it could be brought back up again in less than 10 minutes with the EBS volume mounted, and away we would go.  The Amazon infrastructure makes this type of architecture easy to build and very successful.  Note that with coarse-grained multi-tenancy, one could even share the backup server across multiple multi-tenant instances.
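
Here is a minimal sketch of the kind of check the backup automation can run against that second MySQL server before snapshotting: confirm replication is healthy and caught up.  The 30-second lag threshold is an assumption, and the EBS snapshot call itself is elided:

```java
// Minimal sketch: gate an EBS snapshot on replica health, via JDBC.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ReplicaSnapshotGate {
    public boolean safeToSnapshot(Connection replica) throws SQLException {
        Statement stmt = replica.createStatement();
        ResultSet rs = stmt.executeQuery("SHOW SLAVE STATUS");
        if (!rs.next()) {
            return false;  // not configured as a replica at all
        }
        boolean running = "Yes".equals(rs.getString("Slave_IO_Running"))
                       && "Yes".equals(rs.getString("Slave_SQL_Running"));
        long lag = rs.getLong("Seconds_Behind_Master");
        if (rs.wasNull()) {
            return false;  // replication broken; lag unknown
        }
        return running && lag < 30;  // assumed "caught up" threshold
    }
}
```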

Don’t Overlook the Tuning!

Tuning is probably the first thing you thought of with respect to cutting costs, right?  Developers love tuning.  It’s so satisfying to make a program run faster or scale better.  That’s probably because it is an abstract measure that doesn’t involve a customer growling about something that’s making them unhappy.

Tuning is important, but it is the last thing we did.  It was almost all MySQL tuning too.  Databases are somewhat the root of all evil in this kind of software, followed closely by networks and the Internet.  We owe a great debt of gratitude to the experts at Percona.  It doesn’t matter how smart you are, if the other guys already know the answer through experience, they win.  Percona has a LOT of experience here, folks.

Conclusion

Long-winded, I know.  Sorry about that, but you have to fit a lot of pieces together to really keep the costs down.  The good news is that a lot of these pieces (metadata-driven architecture, cloud computing, and so on) deliver benefits in all sorts of other ways besides lowering the cost to deliver the service.  Probably the thing I am most proud of about Helpstream was just how much software we delivered with very few developers.  We never had more than 5 while I was there.  Part of the reason for that is our architecture really was a “whole greater than the sum of its parts” sort of thing.  Of course a large part was also that these developers were absolute rock stars too!

Posted in cloud, data center, ec2, enterprise software, platforms, saas, software development

10 Things You Don’t Need to Do In the Clouds

Posted by Bob Warfield on May 25, 2009

Sometimes a breakthrough paradigm shift eliminates the need for all kinds of things.  Word processors and laser printers killed a lot of other things that were once thriving including typewriters, liquid paper, and Linotype machines.  So it is with the Cloud.  When I chat with my Director of Operations at Helpstream, we’re always chuckling about how much better life is in the Amazon Cloud for our company.

As I read through unread blog posts with Google Reader, I’m going to note 10 things we don’t need to worry about since we’re in the Cloud:

1.  NetApp’s new DataDomain data de-duping product.  NetApp bought a company with a cool technology.  Plug it in place of your tape backups and you can back up to hard disk, because this thing eliminates redundant data–sort of a very backup-savvy compression algorithm.  But if you’re in the Cloud, who cares?  Your Cloud vendor worries about this stuff.  You just buy storage by the gigabyte, as much as you like, and do whatever.  Backup already looks like a hard disk with S3 and especially Elastic Block Store.  This is one whole chunk of costs and complexity you can safely ignore, because it just doesn’t matter to you, and you couldn’t install it in your vendor’s Cloud if it did.

2.  Server power consumption.  It’s out of your hands.  Sure, there are really cool new technologies, like Dell’s Fortuna server that is the size of a hard disk and uses 20-30W.  But it doesn’t matter.  You aren’t choosing the servers in your Cloud.  The good news is that any really large-scale Cloud vendor like Amazon will be choosing servers with great performance per watt, because it lowers their cost basis.  If they’re selling a commodity, like EC2, they’ll have to pass those savings on to you too.  Best of all, you can feel good about these being greener solutions than you’re likely to have the expertise to create in your own data center.

3.  Worrying about big iron or little iron (or little big iron, where a proprietary CPU is in a small chassis).  Should I run the best servers Sun (or some other Big Iron vendor) can provide?  Or should I just run lots of little commodity “Lintel” (Linux + Intel/AMD) boxes?  Quit worrying about it, because you can’t affect this.  In all likelihood your Cloud vendor has Lintel.  You have no idea which hardware brand they use, so you can quit caring about that too.  All those specs, which rack form factor, yada, yada, just don’t matter any more.  You have a handful of virtual machines you can choose from.  There are relatively few specifications to focus on for those virtual machines.  Someone else has probably already figured out how to set up memcached or whatever on those machines and how to optimize the software for that footprint.  You should certainly try some experiments because your software may be different, but the search space is sharply limited.  That’s a good thing, isn’t it?  Now you can focus, instead of poring over a gillion spec sheets outer joined to a gillion different purchase deals.

4.  Worrying about MIPs in general.  As Om Malik so correctly points out, it’s the megabits (of connectivity), not the MIPs, that count these days.  We haven’t been able to get more MIPs like we used to for a while, because of the multicore crisis.  Sure, we get more cores, but we don’t get faster clock speeds.  Everyone is ooohing and aaaahing that the iPhone will get a 1.5x faster CPU.  Does anyone remember back when you got a PC twice as fast every 18 months?  They never felt twice as fast.  Most of the time you could only tell if you went back to the slower machine, which seemed sooooo slooooow.  People will hardly notice the faster iPhone, unless they go back to an older one.  Meanwhile, those in the clouds can get all the MIPs they want, provided they’re ready to use elastically scaled cores loosely coupled over a LAN.

5.  Wholesale bandwidth costs.  Why worry about it if all your data is in the Cloud?  All you care about is how fast an individual browser can access that Cloud.  Granted, a big office requires a fair bit of bandwidth, but nothing like a data center.  Moreover, your Cloud vendor probably has multiple data centers in multiple geographies as well as CDN capabilities, so you are now geographically distributed in terms of connectivity.

6.  Which load balancing box to buy.   Forget about it.  Your Cloud vendor does this for you, and even if they didn’t, you’ll have to use software because you don’t get to install any custom hardware in their Cloud.  With the advent of Amazon offering load balancing as a service of their Cloud, all you need to think about is how to use it with your application.  Life gets simpler and more focused again.

7.  Hardware monitoring.  Amazon’s new CloudWatch service tracks all the usual low-level monitoring (CPU load, disk I/O, network I/O, and so on) at one-minute intervals.  The data is kept around for two weeks.  This is all stuff you’d otherwise have to monitor somehow.  You’d have to find some monitoring software, install it, learn how to use it, yada, yada.  With CloudWatch, you just have to learn to use what’s already there.  Amazon had to get this and a lot of other things to work just to have a Cloud.  You get a handy assist from that.  People who want to compare Amazon on a raw server cost basis never look at these kinds of costs.

8.  Creating multiple data centers for redundancy and for multiple geographies.  Werner Vogels, Amazon’s CTO, makes it sound so simple:

The Amazon Elastic Compute Cloud (Amazon EC2) embodies much of what makes infrastructure as a service such a powerful technology; it enables our customers to build secure, fault-tolerant applications that can scale up and down with demand, at low cost. Core in achieving these levels of efficiency and fault-tolerance is the ability to acquire and release compute resources in a matter of minutes, and in different Availability Zones.

Elastic availability of compute resources in multiple different Availability Zones (e.g. datacenters) in a matter of minutes?  First, it’s impossible for small companies to afford multiple redundant data centers.  They all have to reach a certain scale before they can deal with that.  The Cloud levels that playing field so anyone and everyone can afford it from day 1.  Just the sanity of having your data backed up to S3 with multiple copies in different physical locations is wonderful.  Second, even when you reach the size of being able to afford multiple data centers, it is a hugely expensive and complex undertaking.  Why would you ever want to deal with this if you didn’t have to?

9.  Exactly how to configure complex software like MySQL for my particular server instances.  Most of the Clouds have libraries of machine images where somebody else (hopefully even the vendor who made the software) has set it all up, blessed it, snapshotted the image, and made it available.  Mount that image on an EC2 virtual server and away you go with something you know works.  Even if you are not on Amazon and don’t have Amazon Machine Images like that, other clouds have these options too.  3Tera, for example, builds software for Cloud owners and has what they call their Enterprise App Store.  These are pre-configured and ready-to-run instances.

10.  Worrying that your engineers are spending valuable time on infrastructure and, worse, physically visiting that infrastructure, instead of doing something that gives your company a distinct competitive advantage.  Why build a datacenter if everyone else has one?  Let them make that investment while you invest elsewhere.  Werner Vogels gives a great example that is timely since the Indy 500 just ran Sunday.  The Indy 500’s site has a unique problem.  It requires a huge amount of resources to deliver a rich user experience: multiple video streams, including views from the cockpits of drivers’ cars, with audio feeds and telemetry.  The challenge, as Vogels puts it, was that it isn’t used very frequently:

This is a high load application but it only runs three times a year. They found that they had to move a lot of engineers into data centers to keep their servers up. When they moved to cloud infrastructure they made 75% cost savings, the majority of which was on the people side; now they can manage everything from their armchair at home.

So there you have it.  10 things you don’t have to deal with if your data center is in the Cloud.  These are 10 things based on the pseudo-random collection of blog posts in my Google Reader RSS feeds.  There are many more out there, and I’m not even going to claim these are the 10 most important things.

Don’t you need fewer things to worry about so you can focus on what actually makes the difference?

Posted in amazon, cloud, data center

Jon Hansen’s Cloud Computing in the SaaS World

Posted by Bob Warfield on May 1, 2009

Jon Hansen runs an excellent podcasting/Internet radio show called PI Window on Business. Recently I was invited to join one of these casts but couldn’t due to another commitment, so I wanted to pass along this program instead, because it concerns Cloud Computing and SaaS. Host Jon Hansen is talking with guest and blogger Michael Dunham from Scio Consulting. They’re talking about the recent McKinsey study that claims the Cloud does not deliver any savings for large organizations, and the program starts with a pretty decent introduction to Cloud Computing.

The key differentiator in Cloud Computing is that, like SaaS, it is a Service. The customer pays for it without worrying how it works. The entire infrastructure in the background is transparent. That’s as it should be, BTW. There isn’t a lot of value, and there is tremendous cost, in having to be aware of every implementation detail to use a service. Dunham likens the Cloud to the telecommunications infrastructure that’s existed for a long time.

I wanted to go back over a number of issues raised in the show and give my own perspectives. This will be a longish post, because it was a half hour show that touched on a lot of issues.

There were lots of interesting parallels raised in the show. One theme is that Cloud really isn’t something completely new. As mentioned, we’ve had telecommunications and other kinds of Clouds for a long time. One of Hansen’s first questions was, “Who needs to be involved with the Cloud?” Will it be confined to an Oligarchy of Concentrated Expertise with the Large Players?

First thing to note is that the Cloud benefits from scale. It is essentially a commoditization phenomenon. There’s not a lot of benefit in buying services from a Cloud that is too small, unless those services are truly unique. That will therefore drive scale on the vendor side of the equation, except for more specialized kinds of Cloud. A lot of people I talk to wonder if Amazon isn’t already so far ahead of the pack scale-wise that it will be hard to catch them. The good news is that their offering is pretty generic, so there is an opportunity to differentiate. The bad news is that except for the largest possible companies, the IBMs, Sun/Oracles, and the like, Amazon may already be too far along, and so it will be essential for the other players to differentiate.

“Is it safe to say the expansion is occurring as the market is decentralizing?”

I don’t think of it so much as decentralizing as changing the locus of centralization. We move from centralizing around large corporate IT datacenters to centralizing around relatively fewer Cloud vendors’ large datacenters. One of the reasons companies are starting to scramble on the vendor side is this centralization.

The Cloud aggregates transactions. If I am selling servers pre-Cloud, I have lots and lots of customers. Win or lose any particular one, and it is not a big concern. There are a lot of fish in the sea. But whichever vendor is lucky enough to close Amazon on their servers wins a whole ton of virtual accounts (Amazon’s customers) by default. The stakes are much, much higher. We will see Cloud providers dictating to such vendors the same way Walmart and GE dictate to their suppliers how business will be done. Major new forces are being created in the market because Cloud vendors represent the collective buying power of all their customers.

“So the general user and population don’t need to worry about this?”

The general user already spends most of their computing time in the Cloud. Every web app is in the Cloud from the standpoint of an end user. Don’t we already spend the majority of our time on our PCs in web apps? So we have the reverse of the “last mile” problem of residential Internet access. The “last mile” is in place as we use all these web apps. The Cloud is about the “first mile,” where the datacenter begins. What is your web browser connected to?

As the Enterprise grows increasingly distributed, this again favors the Cloud which is purpose-built for a webby world. It’s no accident one of the biggest Cloud vendors is also one of the oldest and most successful Web businesses. They know how to do that stuff!

“What about reliability, dependency on expertise, and support for the Cloud?”

Well, how reliable are your web apps? Do they crash more often than your Microsoft apps (LOL)? Mine sure don’t.

Are we happy with where the expertise lies with these apps? Tim Chou started the SaaS business at Oracle many years ago, and he is the first one I heard talking about the idea of “Who better?” Would you rather have your apps supported by your internal IT? Not me. They’re smart, but this is the first and only time they’ve ever run whatever app we’re talking about, and they didn’t build that app. I want the vendor to run the app for me and support it. They developed it, and they’ve run it for a lot of customers over a much longer period than my IT people. They’re the world’s foremost experts. Who better?

“What is the importance of standards?”

Michael Dunham was very concerned that we haven’t evolved enough standards yet in the Cloud world. I’m a lot less concerned. Amazon is a very straightforward service to adopt. The differences between what you have to know there are no greater than the differences between Sun SPARC Solaris vs HP/UX vs IBM AIX vs Wintel vs Linux Intel in a traditional data center. They’re no different than Oracle vs DB2 vs SQL Server vs whatever other platforms. In fact, they’re actually much less because a big part of what Amazon provides comes from these very same standards already. They haven’t added that much. I think it’s pretty straightforward to understand and take advantage of it. Standards are not holding us back to any appreciable degree, though IT loves to clamor for standards. It’s just their way of delegating some of the responsibility to understand.

What’s much harder for IT is the loss of control. They’ve built a lot of distinctive competencies over thousands of procurement decisions made over many years. They’re loath to revisit that fabric. But the advantages compel them to at least consider it.

“How is the organization involved with Cloud decisions? IT? Purchasing? Does this hasten the obsolescence of the CIO?”

There is a profound impact, no doubt about it. But I don’t think that impact is really that different from mega trends that have been at work in IT for a long time. IT has largely gotten away from being the arms and legs. They manage the arms and legs. I was having coffee with a friend from a Big SI the other day. She was lamenting they hadn’t yet embraced the Cloud, but she went on to say it was for the best because Customers lose control, there are security risks, and all the other chestnuts.

I responded that customers had already lost control and had all the same risks. Most of them are not running their IT today. The bulk of the people costs are going to outsourcers of one kind or another, either overseas, or IBM, or some other large service organization. I’ve worked with Fortune 500 companies that told me only 5-15% of the IT employees were actually employees of the company versus outsourcers. Why is the Cloud so different? She blinked, laughed, and agreed.

“Is there a concern that the big players don’t have enough experience with the Cloud? Remember, nobody ever got fired for buying IBM. Is this a big leveler?”

First, the big players are not blind to all of this. The Cloud and SaaS are highly disruptive. It’s very hard for them to flick a switch and be there overnight. The cost to their business model is just too high, and as was mentioned on the program, it touches every part of the organization and every aspect of doing business.

With that said, big license sales are slowing and have been for some time.  Maintenance is becoming an increasing component.  Acquisition of other companies’ maintenance bases has become the growth vehicle.  Ultimately, the source of organic growth will be Cloud/SaaS.

As I say, the big players are not blind.  I wrote the post on the Red Cloud.  I believe Oracle made the Sun acquisition largely in response to the whole Cloud movement.  Moreover, Oracle has been active for many years with a SaaS business.  It’s doing very well, though they don’t advertise it very loudly.  SAP is less far along with Business By Design, but clearly they also see they need to be developing the expertise.

For the time being, there is still a tremendous advantage for newer players. It is more of an architectural advantage. I’m talking about both their software and their organizational architectures. As was mentioned on the program, it’s easier to start clean sheet for the Cloud than convert after the fact. The bigger the company, the harder to convert, and the slower that conversion must be.

“McKinsey recently said the Cloud is more style than substance because:

–  Nobody agrees on the definition
–  It doesn’t scale to Enterprise
–  It distracts attention from areas where tangible value can be unleashed.

Why would they say that? Are they being influenced because they’re in line with the old model and vendors?”

First, I did not think the McKinsey report reflected a very deep analysis. The coverage I saw on it was universally negative. From my perspective, they picked a conclusion and then drew up an analysis that supported their conclusion, so yes, I’d say they’re part of the Old School “Military Industrial Complex” around IT. They have an agenda.

SaaS eliminates a lot of value from the ecosystem for third parties like McKinsey precisely because it is a service, and that’s what McKinsey and the SIs are in the business of delivering.

That particular report did a lot of silly things in analyzing the cost of the Cloud for larger organizations. The per-server costs for corporate IT were ridiculously low compared to many other estimates I’ve seen (the power costs alone, from data center studies I had seen, were a big fraction of what McKinsey claimed). They burdened the analysis with a lot of costs that were irrelevant to the choice of Cloud or Data Center. That just added a lot of fixed costs that masked variable cost differences.

Given their great name, the study really doesn’t reflect very well on their expertise. But it will be a handy piece of collateral for those who want some air cover from the advantages the Cloud is bringing and the disruptions that entails for the industry.

Posted in amazon, business, cloud, data center

A Vision for Oracle’s Cloud Platform: The Red Cloud

Posted by Bob Warfield on April 22, 2009

Helpstream’s CEO (my boss), Tony Nemelka, penned a great piece on the Helpstream Blog about what the Oracle/Sun acquisition might mean.  A lot of attention has been focused on the potential for negative outcomes.  Will Oracle kill MySQL, or at least damage it with worse-than-faint praise (fascinating post by fellow Enterprise Irregular Josh Greenbaum)?  Others note (quite rightly) that Oracle can’t really kill MySQL because of its Open Source basis.  I found WordPress founder Matt Mullenweg’s post to be particularly eloquent on this.

What I found intriguing about Tony’s post was the more positive scenario it envisions.  Read the post, but let me summarize for purposes of this discussion.

Tony was recently in Japan and has a long history there, having been an executive for PeopleSoft, Epiphany, and then Adobe in charge of the region.  Needless to say, his contacts are pretty high up the food chain, so they know what’s going on.  In that world, the System Integrators are the gatekeepers for the market.  They’re very powerful, and the interesting discovery Tony made is that they absolutely love Force.com.  It’s not hard to see why.  The SaaS model squeezes the SI ecosystem.  The normal meat-and-potatoes business around just getting on-premises software installed is greatly reduced.  The business of just keeping the lights on is almost non-existent for SaaS.  Yet SIs have a lot to bring to the table.  A good SI often understands the Domain, its Best Practices, and the key Business Processes better even than the software vendor.  Having access to a SaaS platform makes it possible for the SI to turn that valuable knowledge into product which can then be sold.  That’s why having a platform on which to do that is so important to them.

Tony goes on to speculate that Oracle is picking up the components necessary to create such a platform.  If nothing else, Oracle’s Japanese SIs are screaming that they need one.  I have to imagine SIs everywhere are grokking the essential value of a platform to the SaaS ecosystem.  There’s nothing about the Japanese market that would make that a unique requirement there.  So far, Oracle is really stuck in that department.  I suppose they would argue the coming Fusion represents such a platform.  At the same time, Sun, like every big hardware vendor, was hard at work crafting a Cloud strategy.  They all know the Cloud is a tidal wave that can profoundly impact their businesses.  The Cloud represents the federation of many smaller deals into fewer gargantuan deals.  One oversimplified way to view it is as a whole new sales and marketing channel.  Failing to suit up for the game guarantees a loss, and the stakes are high.

Some I’ve talked to say that Oracle just “doesn’t get it.”  They don’t believe in SaaS, they don’t understand SaaS, and they can never execute this kind of nuanced strategy until it is way too late. 

The idea that Oracle wouldn’t try or wouldn’t deliver anything is not something I’d want to put too much money on.  It’ll probably take them longer than anyone would like, or they might surprise us too.  Even Oracle can’t really fight the Cloud/SaaS tidal wave.  Remember that the guy at the top believes in it: Larry Ellison has put his money into SaaS companies many times.  It happens to be inconvenient at the moment for the various financial metrics Wall Street cares about for Oracle to switch wholesale to SaaS, but even that is something they can manage over time.  Also, we know Sun was working hard on their own Cloud strategy.  Suddenly there is a lot more Cloud DNA coming into Oracle.

I suspect Oracle’s vision of what a PaaS (Cloud Platform) is will be a lot different from what you or I might choose.  If nothing else, it won’t be a clean-sheet-of-paper approach.  Let’s think about what it might be.

First, I would expect the initial version will not be multitenant.  Multitenancy as it is delivered today is too deep in the guts of the application.  It forces too much change to architectures, which creates too much adoption friction for a platform at the outset.  Oracle will want to deliver the promise of running any app on their Cloud Platform, or at least any app built to run on their now very comprehensive stack.  It spans hardware (SPARC et al) to OS (Solaris) to DB (Oracle/MySQL) to App Server (BEA) to Business Intelligence (Hyperion) to <you get the idea, phew!>. 

Second, if not multitenant, what?  Think virtualization.   Don’t they need to buy VMWare to get that?  They may buy VMWare, but they don’t need it for virtualization.  Sun Solaris has a wonderful virtualization capability built right in.  I’ve used it before to create a SaaS application and it works extremely well.   Imagine an Amazon-like capability to start up these virtual Solaris machines.  If Oracle is smart, and they usually are, you’ll be able to start up virtual “appliances” in the Red Cloud (my new name for Oracle’s Cloud) that deliver database, app server, and many other functions.  Consider:

–  Storage:  This one is obvious.  Sun has a big storage business, Jonathan Schwartz has blogged about their great ZFS storage technology, and so Oracle can easily deliver the “Amazon S3” storage piece of the Red Cloud.

–  Identity Management:  Control over who logs on to the Cloud.  Sun is strong in Identity Management, for example.  Oracle already had a business there.  The combination of the two would create a leader to rival IBM and possibly be the world leader.  Having Identity Management built into the Red Cloud would be a decided work savings over Amazon, for example.  And Identity Management is one of those things on-prem apps are used to farming out to another module.  Hence it would facilitate their migration to the Red Cloud.

–  Business Intelligence:  This is another of those modules everyone wants to OEM instead of having to build for themselves.  It’s an ecosystem component, like Identity Management, ETL, and a host of other things.  Oracle can again deliver a virtual appliance in the Red Cloud that makes it simple to connect.

–  Integration:  This will be essential to making the Red Cloud Appliances work.  But when you have one vendor that controls so much of the stack, they should be able to make it work better than anyone else.  This is where the vision of being the “Apple for the Enterprise” can best be seen.  This is where a lot of the Fusion work, as well as work from BEA could tie in.

OK, that’s a pretty darned impressive first tier vision if you ask me.  It’s taking the old pitch about buying a suite from one vendor and ratcheting it up significantly in terms of scope.  Not everyone will buy into it, but Big IT might just need something like this before they can start moving to the Cloud.   If Oracle can deliver such a thing, it will be an enormous business.  Someone I was talking to recently said they thought Microsoft was Oracle’s ultimate acquisition target, but I think Larry has a shot at rivaling much bigger entities if he can execute.  The IBM’s and HP’s are starting to appear on his horizon.

But there are some warts.  It will be a hodgepodge unless some Fusion-like glue can make it all fit.  It will be an enormously complex offering to market and sell.  It will be nearly impossible to take in the scope of it or understand all of it.  All of the Enterprise complexity that Big IT loves, but that SaaS has tried to ameliorate, will be there.  Granted, this is better than the old on-prem complexity.  Oracle can deliver at least some of the SaaS advantages.  But what about cost?  Can we really get the SaaS cost advantages without multitenancy?  In a word, “No!”  I just wrote an article about the pitfalls of thinking virtualization is a substitute for true multitenancy.  I stand by every word I said.  But Oracle has a unique opportunity to virtualize multitenancy itself, not in the first iteration of the Red Cloud, but in later iterations.

Whoa!  What the heck is Bob on about now?  Virtualizing multitenancy, what does he mean?

What I mean is a mechanism whereby single tenant applications can be made to have all the benefits of multitenancy without radical architectural change.  There are two ways this can happen. 

The first approach is to have the database itself virtualize multitenancy.  The most popular model for multitenancy is what I call columnar: a column in the database tells which tenant each record belongs to.  A suitable feature set in the database server can completely automate this and make it radically simpler for the application to work along this model.  A second common model gives every tenant their own set of tables.  Here again, special support in the database can radically simplify the implementation.  Note that Oracle already does partitioning (they may have invented it, but I am not sure of that), which is a feature that makes a bunch of physical tables look like a single table and that greatly improves scalability.  So now Oracle could deliver a Multitenant DB Appliance in the Red Cloud.
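
For concreteness, here is what the columnar model looks like from the application side today, with no special database support: every single query has to carry the tenant predicate (table and column names are hypothetical).  That per-query sifting is exactly what a multitenant-aware database server could automate away:

```java
// Minimal sketch of the columnar multitenancy model: every row carries a
// tenant column, and every query must filter on it. Names are hypothetical.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ColumnarTenantQueries {
    public int openCaseCount(Connection db, long tenantId) throws SQLException {
        // The application layer has to inject this predicate everywhere;
        // forget it once and tenants can see each other's data.
        PreparedStatement stmt = db.prepareStatement(
                "SELECT COUNT(*) FROM cases WHERE tenant_id = ? AND status = 'open'");
        stmt.setLong(1, tenantId);
        ResultSet rs = stmt.executeQuery();
        rs.next();
        return rs.getInt(1);
    }
}
```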

The second approach attacks the costs of being single tenant.  In my post on the subject, I talked about the idea of fixed costs and variable costs.  The difficulty is that the database server uses up a bunch of system resources on each virtual machine even if there is no data loaded into it.  This is because the database server is unaware of the other database servers in their respective virtual machines.  They are isolated.  But what if they were aware of each other and could pool those fixed-cost resources to reduce the overhead?  What if the operating system itself facilitated this?  Those fixed costs could be dramatically reduced.  Moreover, if a comprehensive integrated feature set were aimed at reducing the cost of administering such a system, we would likely start closing in on true multitenant efficiency levels.

That would be my vision for Oracle’s Red Cloud.  It would be a first-class Cloud platform, although much more Enterprisey and Old School than the New Cloud Age SaaS and Amazon-style Cloud visions.  Oracle and BigIT will view that as an advantage.  The biggest challenge in all this will be execution.  It is a gargantuan task on a level Oracle has never delivered before.  It involves both coordinating a lot of existing parts and pieces as well as delivering some genuine innovation (Virtual Multitenancy will be non-trivial!).   They may not be able to pull it off.  But the stakes are very high, and they will have years to work at it.

What do you think of the Red Cloud?

Posted in amazon, business, cloud, data center, saas, strategy

Can Corporate IT Operate as Efficiently as Salesforce.com?

Posted by Bob Warfield on April 10, 2009

Phil Wainewright penned an interesting analysis of what I had been calling Salesforce.com’s “Green Crystals,” but more commonly known as multitenancy.  In the past I’ve said that the exact technology involved there was not really the important thing, it was just an interesting side story told for marketing purposes.  Put another way, there are lots of ways to skin the multitenancy cat, but that isn’t even the important cat to be after.

At one point in his post, Phil frames the discussion in a particularly interesting light:

So are Salesforce.com’s green crystals just a marketing ploy, or is it really something that customers can’t easily replicate in their own private clouds, or by adopting rival cloud platforms?

I like this statement from a couple of angles.  First, it asks the legitimate question of whether the non-SaaS world can learn some interesting things from SaaS technology.  Second, it ultimately leads into a discussion of some technology around query optimization that Salesforce has patented, and whether those patents have given them an interesting monopoly now that will be denied others.

But before we delve into all that patented query optimization, let me make another controversial statement:

It’s just more green crystals and it is not the source of Salesforce’s advantage.

Why do I keep bringing up this silly green crystals notion?  It is human nature to want a reason to believe.  Marketing has trained us to look for the “silver bullet advantage.”  Green crystals marketing is a term I first heard coined at Borland years ago when I was VP of Engineering.  Our products had loads of advantages over the competition, but marketing wanted one single reason they could tell customers why we had all those advantages, because it’s impossible to make a compelling sale off hundreds of small advantages.  The term “Green Crystals” comes from soap marketing.  Why is my soap better than yours?  Because it has Green Crystals, that’s why!

Likewise, SaaS has a myriad of advantages over other delivery vehicles, and one SaaS vendor may have a myriad of advantages over another.  But it helps to have Green Crystals.  In the past, it was enough just to be a SaaS vendor, but the new breed had to differentiate themselves from the old hosted ASP model.  Hence Marc Benioff brilliantly coined the multitenant argument.  Now there are getting to be a lot of SaaS vendors with multitenancy, so maybe it’s time to coin something new, “patented query optimization,” anyone?

Whether you believe they’re just Green Crystals or not, their only advantage to the customer is that they enable SaaS companies to deliver their service at a lower cost.  I have argued in the past that most of the savings is not due to multitenancy (or patented query optimization), but rather automation of the people costs.  I stand by that argument, but I want to drill down on it further with some real numbers.

Let’s start by considering the cost to deliver the service for a few SaaS companies.  Here, Salesforce really does have a tremendous advantage over most of its brethren.  Consider these numbers taken from the latest 10K filings for these SaaS companies:

–  Salesforce delivers $1 in revenue at a cost of about 12 cents to deliver the service.

–  Concur delivers $1 of revenue at a cost of about 32 cents–almost 3x what Salesforce spends.

–  Companies like Success Factors (about 35 cents) and Omniture (36 cents) have similarly much higher costs than Salesforce.

You can go analyze all the public SaaS companies, but the last time I did so, I don’t remember anyone doing better than Salesforce.  They know how to deliver their service cheaply!

Now the next thing to look into is what the cost components might be.  The point of multitenancy (and virtualization, for that matter) is to share infrastructure as needed, amortizing it across a lot of customers, so that costs are saved through a reduction in waste.  All of Salesforce runs on only 1,000 machines.

That sounds really impressive.  Perhaps the competition is simply running 3x as many servers because of their inefficient multi-tenant architectures.  Is that the source of the cost differential?  Well, let’s start by figuring out what 1,000 machines cost.  I’m a Cloud guy, so I would buy them from Amazon.  I will assume that Salesforce can run their own data center at least as cheaply, so the cost of 1,000 servers on Amazon running 24x7x52 ought to be a conservative number.  It comes to about $4.7 million, which is less than 4% of Salesforce’s cost to deliver their service.    The competition would have to be using more than 5x as much hardware as Salesforce to account for that kind of cost differential from hardware alone.  Now you see why I have a hard time believing the Green Crystals are telling the whole story.

What else may be at work here?  It’s the cost of the operations people that drives the lion’s share of what it costs to deliver a service.  Consider that a fairly low-paid but technical ops person (DBAs and the like) may have a loaded cost of perhaps $120K per year.  That cost is equivalent to 25 of those servers, BTW.  So how many ops people does the gap between the cost of the hardware and the overall Salesforce service cost represent?  About 1,000!
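
To make the arithmetic explicit (the hourly rate is simply backed out of the figures above, not a quoted Amazon price): 1,000 servers x 24 x 7 x 52 is about 8.7 million server-hours a year, so $4.7 million works out to a bit over 50 cents per server-hour, or roughly $4,700 per server per year.  $120K divided by $4,700 gives the 25-servers-per-person equivalence.  And if that $4.7 million of hardware is less than 4% of the cost to deliver the service, the total is north of $120 million; subtract the hardware and divide the remainder by $120K per head, and you land in the neighborhood of 1,000 people.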

Now I am not trying to say all of the cost differential is due to the cost of such ops people, but it does bear thinking about.  Certainly there are other costs involved too.   I have a friend running a SaaS company who privately says his Oracle licenses are the biggest cost after the personnel and ahead of hardware.  Most SaaS companies didn’t build on Open Source technology, so they are going to have all sorts of software license costs.  Still, I assert most of their cost is people.

Salesforce must therefore be radically more efficient at automating what it is taking other companies a lot of warm bodies to deliver.  So if Phil’s corporate clients want to even begin to approach the efficiency of a Salesforce (let alone another SaaS vendor), they have to focus on that kind of automation.

How good can it get?  That’s an interesting question.  Salesforce is using 1,000 servers to deliver a service to 1.5 million people, according to the Techcrunch article I linked to above.  Consider these numbers for Facebook that are taken from a great posting over at Diamond Notes:

–  Facebook at that time (early 2008) had 1800 MySQL servers, 10,000 web servers, and 800 Memcached servers.  They had a total of 12,600 servers, in other words.

–  We’re hearing Facebook has 200 million users today; let’s assume they had 100 million a year ago.  It may have been less, but we need to make a guess.  That means they’re housing nearly 8,000 users per server, versus Salesforce housing about 1,500.

–  Even more interesting are the figures on how many MySQL DBAs each company in that survey uses.  Facebook has just 2 DBAs keeping 1800 servers happy!  That’s a couple of orders of magnitude better than anyone else in the survey.  Clearly Facebook has been very successful in automating a lot of operations work.

All of these factors will play a role.  However, given that a single operations person is equivalent to 25 servers in cost, and given that we have the evidence of companies like Facebook, massive automation of the manual tasks is the big cost reduction opportunity, more so than making more efficient use of the hardware.

Imagine being able to keep Salesforce’s 1,000 servers humming with just 10 or 20 ops people.  Can corporate IT ever get there?  Sounds very hard to me when you consider all the advantages a company like Salesforce has over Corporate IT.  If nothing else, they can change the software to facilitate the automation instead of having to go in after the fact with third party tools.

Posted in data center, saas

Catching Up With 3Tera in the Clouds

Posted by Bob Warfield on March 1, 2009

Recently I had a chance to catch up with 3Tera CEO Barry Lynn and SVP of Sales and Marketing Bert Armijo.  It’s been a little while since I chatted with these guys and they’ve been busy!

It’s now been roughly 3 years since their first beta test.  Incidentally, they claim that beta makes them the first Cloud vendor, since Amazon S3’s beta was 1 month later!  Not sure selling Cloud infrastructure is the same as selling the Cloud like Amazon (that’d be like making the gold pans before the gold is found), but I do applaud their pioneer spirit.  If not the first, they’re certainly among a very small group of original Cloud Thinkers.

Good Catching Up With You Guys.  What’s New Since We Talked?

Our latest version is 2.48, which was recently released.  The big change there is we’ve added support for Solaris and Windows, and there is integrated monitoring.  We now have service on 4 continents, soon to be 5.  And we have customers taking advantage of that.  A customer can get presence on 4 continents in a day with 3Tera.

How Many Customers Do You Have Now?

Several hundred live customers, mostly through partners.

We have a number of hosting partners, and we’re always looking for more.

Tell Us About Your Partnering Strategy

People are starting to realize the need for private clouds.  People are starting to get it.  Federal Government and Large Enterprise want it.  There are legal restrictions on where data can be put.

For a long time partnering was unique to us.  People in the space all wanted to build their own cloud.  Our customers can work with multiple operators from Day 1.  Our customers do this on a daily basis.  It’s routine.  We have a button for it in the GUI to automate it.  Backing up to multiple points of presence, for example.

The product is maturing and we’re starting to see a change in the types of customers coming on board.  Don’t know if it’s the economy or the Cloud industry.  A year ago, most customers were web or SaaS.  Now the vast majority are Enterprises.  It is a profitable and stable business, though it puts a different set of requirements on the product.

Why Enterprise?  I’ve Talked to a Lot of SaaS Companies Having Difficulty With Large Deals.

First, I haven’t heard that SaaS companies are having any particular problem with Enterprise sales.  There’s stress everywhere, but we don’t see the Enterprise as particularly stressed.  When business is good, companies want the control over price that SaaS offers.  But when business is bad, they want economies of scale.

We love this economy.  Everything requires a bit of luck.  Here’s what’s going to happen.  This is the hardest hitting recession we’ve yet seen in the shortest period of time.  Some companies want quick ROI investment, particularly around saving money.  Others get completely frozen and don’t do anything.  There’ll be companies in both of those camps.

<The frozen camp is where I’m hearing the Enterprise problems.  Larger orgs seem more prone to freezing.>

If you look at things like Siebel or other SaaS having a problem, where customers cut back is in discretionary vertical functionality.  Do I have to do it at all?  We greatly lower the cost of almost everything.  We don’t build apps, we build platforms.  So we replace something non-discretionary with something also non-discretionary but cheaper.  Others are making discretionary spending cheaper.

A datacenter upgrade or tech refresh cycle was poised to happen (the last one was back in the dotcom days).  Now they’ve lost the budget for that, so it’s, “How do I run my business?”

You can’t run a business without IT infrastructure.  I may get rid of my cable TV, but I still have to buy food.  I just want cheaper food.  That’s what we’re out to do.  We can show them the catalyst for cheaper IT infrastructure.  We can even enhance quality while saving. 

People get the same level or more control as when the hardware was in their datacenter.  In fact, it’s more, because they have better tools to abstract large distributed systems.

How Do You Get the Word Out?

We look more like a web company there.  We don’t have a big field sales force.  We don’t have big Enterprise software contracts.  What we’ve done is to simplify and create a small incremental purchasing decision.  Even multinationals can start for a few hundred dollars a month, increasing spend as they see value.  That eliminates long eval cycles and the committee sale.  We’re more efficient and we pass that along to the customer.  We’re more focused on value instead of artificial billings like services and support. 

<This incremental pay-as-you-go cost is what I love about the Clouds.  We’ve seen it at my own company Helpstream when using Amazon.>

So we use telemarketing, or what a lot of people call Sales 2.0.  A lot of sales are Webex.  We only go visit customers who have an established footprint with us.

We minimize the onboarding cost and eliminate the lock-in.  We avoid APIs that people have to write code to—that creates lock-in, which worries customers.  This minimizes the perception of risk.

We have a full blown disruptive product.  It is subscription based.  It’s incremental.  You can try it out slowly and then move quickly when you’re convinced.  That helps a lot in this economy. 

Customers save the capex because they don’t have to own the software.  They save personnel costs because people don’t manage servers, they just manage applications using our platform.  Saving those costs, together with faster time to market, really is a cheaper and better proposition.

We use a combination of methods to attack the marketing problem.  We are voracious practitioners of PR.  PR offers so much more value than any ad we could place ever would, whether that’s a Google ad or a print ad.  Having someone write about you and put their intelligence into it creates value.

We also do some Google ads, though we have cut back on that a little bit.  It is valuable because it brings new people into the space.  It causes them to go find the PR.  We also do a few conferences.  We don’t go to big trade shows, but there are some decent focused small conferences.  We like conferences with a few hundred people because we can spend time educating someone there.

Often these conferences are vertical or geographic.  For example, there are Cloud Conferences for Government people.

Most of our leads are inbound.  Soon, we want to look at more outbound techniques, but without spending a huge fortune.  For example, we’ll be at the Web 2.0 show in SF.  We’re also doing the Sitcom Cloud Event in NY.  We’re doing Forrester’s Cloud Event.  We actively participate in Cloud Camp.

Tell Us More About Your Partner Strategy

When we started, it wasn't clear this was the right path.  We got a lot of pushback, but we stayed committed to it.  Cloud is not going to be a one-size-fits-all market.  There are a lot of different purchasers and a lot of different requirements.  Banks want their systems of record in their own data centers.  Healthcare likewise.  Europe has a lot of laws about this.  There are many geographical issues, even involving the physical limitations of the speed of light.

The level of service customers need and can afford is also all over the map.  One company can’t build a data center that meets all of those requirements.  We see a Federation of Clouds where users can take their workload to where their requirements are met.

It’s very popular to have developer systems in a different area than production.  The latter has geography, redundancy, and other requirements.  We transfer the workload seamlessly from one to the other, which is powerful.

We're a tiny little startup that has facilities and points of presence on three continents.  Startups couldn't begin to do that in the past.  We have customers on military contracts that have very special security requirements.

Only by partnering could we meet all of these requirements.  Our job is to build the best possible enabling platform.

We’re very conscious of our partners and want to make sure they make money.  We don’t charge them up front or make them sign up for huge commitments.  Its win-win, customers save money, but partners make even more money with us than on their own.

The real strategic value is there will be an evolution over the next couple of years.  Many companies are just not in the infrastructure business.  Yes, they spend billions, but at some point they’re going to stop building that infrastructure and start using the Cloud.  It’ll start slow, they’ll move bits and pieces, but at some point, they won’t need to own datacenters.  There is a whole industry growing up to service this.

What About Amazon, Google, Microsoft, et al?

Cloud computing will be a federation of many, many clouds.  There are thousands of telcos in the world.  We see cloud computing playing out the same way.  Of course we see Amazon, Google, and the others playing in that game.

There should be standards to increase interoperability and make things better for everyone.  Networking is a very successful example of this today.  Telephones with rotary dials still work.  Other industries struggle to get to that point.

Why Not Amazon Today?

What does that mean? 

<Bob laughing, “I want your graphical management tool working for me in the Amazon Cloud!”>

We set out to do a particular thing.  Amazon's cloud didn't exist back then.  We set out to make it easy to deal with large systems.  We built an underlying infrastructure to support the user interface that you see.  AppLogic's Cloudware infrastructure identifies pieces that can be broken out as services.  We are starting to see how to do that.

There’s the UI, there’s a grid OS, we look at heartbeats, failures, etc.  We have a catalog system, we have a metering system, and we generate billing information.  Each could’ve been a company.  But we built it all together as a seamless whole.

Now that we understand how this all fits together, we can look and see how to do it on systems that have fewer services.  EC2 is one of those targets.  We’ve been open about that.  It won’t happen in a month or two, but it’s something we’d like to do.  Amazon is one of several.

We don’t want to be seen as a front end for a cloud. 

Thanks Guys, Great Discussion!

<3Tera remains one of the Cloud Leaders that I like to keep an eye on.  They’re enabling the hosting world to build their own Clouds using 3Tera’s platform.  That ensures a lot more Clouds will be available with lots of interesting features and distinctions.  It’s all good for the end users!>

 

Posted in amazon, business, cloud, data center, Marketing, Partnering, saas, strategy | Leave a Comment »

Are We Surprised Cisco Will Build Cloud Computing Servers to Compete With HP, Dell, et al?

Posted by Bob Warfield on January 20, 2009

A fascinating article from the NY Times just hit Techmeme.  It’s all about how Cisco is planning to start building servers.  To be precise, servers equipped with virtualization software.

The article expresses some surprise at this move, but it seems terribly obvious now that the news is out.  After all, there is already a move afoot to create Cloud Servers.  These are stripped-down machines best suited to doing nothing but being the commoditized, ubiquitous guts of some massive Cloud data center where there will be thousands of them.  The emphasis will be on reliability and cost efficiency.  I've likened the advantages of such highly standard machines to the business advantage Southwest Airlines gets by standardizing all of their aircraft as 737's.  Companies like Google have already seen the light and taken these steps.

When you look at a server not as a complex machine optimized for maximum performance, but as an interchangeable box optimized for value and low cost of ownership, doesn’t it suddenly sound a lot more like the boxes Cisco has traditionally been making?  Don’t they know a lot about how to do that sort of thing?  And what did we think was inside those Cisco network boxes anyway?  Surprise, they’re mostly just computers with special software.  Cisco already has a huge leg up on how to do this stuff.

Mind you, these boxes are not strictly for the Cloud, but the vision of highly standardized corporate datacenters, where what matters about the machines is virtualization and efficiency more than absolute maximum throughput, is pretty much what the Cloud wants anyway.

It’s going to be interesting to watch, but this isn’t the first or the last time that the Cloud will change the dynamics of the marketplace.

Posted in cloud, data center | 2 Comments »

If You Thought SaaS Was Annoying, The Cloud Babies Will Piss You Off!

Posted by Bob Warfield on January 7, 2009

I’ve been enjoying a spirited exchange with some of the Enteprise Irregulars around SaaS and Big Software for the Enterprise.  I won’t bore you with too many of the details, but we wound up in one of the classic cul de sacs these arguments often do.  Big Software was expressing their annoyance that once again incredible magic was being claimed, “Because it was SaaS.”  They were so annoyed at all the hype they percieved SaaS to be, and felt it was duping customers into believing too much in the name of SaaS.  If you read this blog at all (or have had a look at my resume), you will know I am an unabashed SaaS supporter, so when I hear someone shaking their head and bemoaning that SaaS is just a lot of hype, I spring into action.  Like any good evangelista, I launched into a long sermon about the many innovations SaaS has brought about that would be appropriate for any Enterprise (Big Software) Software to adopt regardless of whether they have a SaaS offering.

As it was happening, I was surprised myself at how it was coming out.  I'm not sure I had ever heard anyone say SaaS had innovations that should be copied back into on-prem software before, but as I was waxing forth on the topic, I realized it was one of those things that had been germinating in the back of my mind for quite a while.  Let's talk about that for a minute and then I'll get into the whole Cloud Babies thing.

What innovations has SaaS created that others would do well to adopt?  I’m talking about product architecture and functionality here.  Largely, it boils down to the idea of making software that is flexible without requiring expensive custom SI work.  Big ERP is legendary for the amount of expensive SI work that is required to install it.  The cost of such work is extraordinary, and the price tag when that work goes awry has created some legendary scandals in the Big ERP history books.  Getting away from all that is one of the promises of SaaS, and as I was quick to point out in that debate, it’s not just hype.  The economics of SaaS won’t support the expensive SI customization work. 

So how do SaaS vendors deal with the problem?  Let me be the first to admit that a lot of them don't.  They just restrict the scope of their offering and you live with that.  Sometimes that means the offering can only be successful for Small or Medium sized businesses, and Big Enterprises can't make use of it.  But that's not the best answer.  The best answer is to find a way to deliver the flexibility in a way that doesn't require expensive custom work.  There are two ways the SaaS world tackles this: for some problems metadata is the answer, and for other problems end-user-approachable self-service customization works.  Let me give some examples of each.

Metadata is literally “data about data”.  As such, it is a beautiful thing.  Let’s consider the database.  It is very common for different organizations to want to be able to customize the database to their own purposes.  Let’s say you have a record that keeps information about your customers.  A lot of this information will be common, and could be standardized.  We all want the customer’s name, their address, phone number, and perhaps a few other things.  But then there will also be a lot of things that differ from one organization to the next.  Perhaps one wants to assign a specific sales person to each customer.  Another wants to record that customer’s birthday (obviously this is a much smaller organization than the first!).  And so on.  Without metadata, each database has to be customized and changed.  With metadata, rather than changing each database, you build the idea of custom fields in, and then you can just tell the database what the custom fields will be in each case but the structure needn’t change.  Metadata is not unique to SaaS, but it is an important part of the “multitenant” concept.  It makes it possible for all those tenants to live in the same database, but still get to have all their custom fields.

Metadata also enables that second method for flexibility.  Customizing a database without metadata is going to require someone to get into the database, modify the schema, make sure reports are modified to deal with the new schema, make sure the schema changes don’t break the product, and on and on.  Such work is definitely the province of expensive and highly technical experts.  However, once we have metadata, we can create a simple user interface that lets almost anyone add new fields, and that handles all the rest automatically.  Suddenly we have made what had been a difficult and expensive technical task approachable in a self-service way by non-technical customers.  Not only that, but they can make these changes quickly and easily, and they can even iterate on them until they get it just right.
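To make the pattern concrete, here’s a minimal sketch in Python with SQLite.  This is not Salesforce’s (or anyone else’s) actual schema; the tables and field names are invented for illustration.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, phone TEXT);
    CREATE TABLE custom_fields (tenant TEXT, field_name TEXT, field_type TEXT);
    CREATE TABLE custom_values (customer_id INTEGER, field_name TEXT, value TEXT);
    """)

    def add_custom_field(tenant, name, field_type):
        # The "self-service" step: adding a field is just a metadata row,
        # not a schema change, so a non-technical admin can do it.
        db.execute("INSERT INTO custom_fields VALUES (?, ?, ?)",
                   (tenant, name, field_type))

    add_custom_field("acme", "assigned_rep", "text")
    add_custom_field("mom_and_pop", "birthday", "date")

    db.execute("INSERT INTO customers VALUES (1, 'Jane Doe', '555-1212')")
    db.execute("INSERT INTO custom_values VALUES (1, 'assigned_rep', 'Fred')")

    # Reading a record merges the fixed columns with the custom fields.
    row = db.execute("SELECT name, phone FROM customers WHERE id = 1").fetchone()
    extras = dict(db.execute(
        "SELECT field_name, value FROM custom_values WHERE customer_id = 1"))
    print(row, extras)  # ('Jane Doe', '555-1212') {'assigned_rep': 'Fred'}

Every tenant shares the same structure; only the metadata rows differ, which is exactly what lets the database stay untouched as customers customize.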

Hopefully you can see why making expensive “flexibility customization” easy like this is essential to SaaS.  It makes no sense to sign up for cheap monthly Software as a Service and then have to spend millions to get it customized before you can use it.  Salesforce.com and others have done a fabulous job figuring out how to deliver this kind of thing.  There were a few non-SaaS companies doing this earlier, but nobody had made it an end-to-end requirement for the whole application install experience before the SaaS world came along and its economics made it imperative.  One example of a company that did this sort of thing to good effect was Business Objects.  Its essential BI innovation was to make it possible for the DB experts to define the metadata needed to make querying the objects easy.  My old alma mater Callidus Software was another.  Our software computed Sales Compensation, which requires a lot of complex business logic.  Most of the players required expensive custom work to create comp plans, but we offered a product where business analysts could create the comp plans using formulas a lot like what you’d find in Excel.
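As a tiny illustration of that last idea (and emphatically not Callidus’s actual engine), here’s a Python sketch of safely evaluating an analyst-written, Excel-like formula against one rep’s numbers; the plan and figures are made up.

    import ast
    import operator as op

    # Whitelist of arithmetic operators the formula language supports.
    ALLOWED = {ast.Add: op.add, ast.Sub: op.sub,
               ast.Mult: op.mul, ast.Div: op.truediv}

    def evaluate(formula, variables):
        """Evaluate arithmetic over named variables, nothing else."""
        def walk(node):
            if isinstance(node, ast.BinOp) and type(node.op) in ALLOWED:
                return ALLOWED[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Name):
                return variables[node.id]
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(formula, mode="eval").body)

    # The analyst writes the comp plan; the engine evaluates it per rep.
    plan = "base + sales * rate + (sales - quota) * accelerator"
    print(evaluate(plan, {"base": 2000, "sales": 120000, "rate": 0.02,
                          "quota": 100000, "accelerator": 0.01}))  # 4600.0

The point is that the flexibility lives in data the analyst controls, not in custom code an SI has to write.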

The time is ripe, I would argue, for Big Software to be examined for opportunities to apply the same lessons.  Much Big Software is a couple of generations older than the SaaS products of our time, so it isn’t surprising there should be some innovations worth looking at.  And in fact, Big Software are no dummies either.  See, for example, this discussion with Henning Kagermann about SAP’s changes in thinking about how to customize business processes.  Their Business By Design offering is not only a SaaS offering, but also a new generation concept for On-premises, and it is ripe with these sorts of ideas.  SAP has long been one of the customization heavyweights, but the pendulum seems to be swinging to the idea that next generation architectures might need to find ways to maintain flexibility while reducing the cost of customization.

Adoption of these new ideas by the mainstream, even outside of SaaS, will be a good thing for all concerned.  But such adoption usually signals the maturation of an area, and this triggered little warning bells in my head.  If Big Software is upset and annoyed at the SaaS upstarts, who will upset and annoy the SaaS guys?  Who will unleash not just all the hype and disruption, but, like SaaS, a set of innovations that SaaS, Big Software, and others will want to adopt too?  We’ve got a billion dollar SaaS leader in Salesforce, a gaggle of successful SaaS public companies still growing rapidly, an economic climate set to magnify the SaaS advantage further, and a number of exciting SaaS startups such as my own Helpstream.  The other thing I’ve noted is that when bubbles burst and everyone is wringing their hands in anguish, just as the hype from the last binge is dying down and consolidation is setting in, that’s usually when the next cycle is being born.  You just have to look around for it, and it’s probably right there in plain sight.  Enter the Cloud Babies.

I call them Cloud Babies not out of any desire to denigrate, but because the Cloud is still in its infancy.  I am intentionally distinguishing SaaS from the Cloud too.  I mean the Cloud in the sense of Amazon, and perhaps Force.com.  The Cloud as a platform and a datacenter that is not only not the customer’s datacenter, but not even the software vendor’s datacenter.  I mean utility computing and everything that implies.

The Cloud Babies will be just as annoying to those not yet on the Cloud as SaaS is for those not yet selling (or buying) SaaS.  It’s going to seem ridiculously overhyped.  It’s going to seem like it isn’t real, that it won’t last, and that it will only matter to certain market segments or to small businesses, but never large enterprises.  In fact, you can already read most of that out there.  But I have already seen enough of the Cloud (Helpstream moved to Amazon recently) to know that there is a lot more to it than that.  There is a kernel of hard reality to it.  The Cloud is disruptive.  It will lead to innovation.  It will lead to architecture changes that give fundamental advantages.  If you thought the Sequoia memo of doom about what startups should do in this economy was serious, they missed an important point.  Any startup running their own datacenter today is at a huge disadvantage to those who are already in the Cloud.

I saw on Twitter earlier today that Fred Wilson means to sell GOOG and AAPL tomorrow and buy AMZN.  I agree.  If the SaaS Guys were annoying, you ain’t seen nothing yet.  The Cloud Babies are really gonna piss you off!

Posted in amazon, cloud, data center, enterprise software, platforms, saas | 5 Comments »

Cloud Computing Keiretsu: VMWare + Elastra, Amazon + RightScale + EngineYard

Posted by Bob Warfield on December 31, 2008

A keiretsu (系列, lit. “system” or “series”) is a set of companies with interlocking business relationships and shareholdings.  So says Wikipedia.

As the competitive landscape for the Cloud Computing World begins to take shape, forming Keiretsus is one of the most important things for the players to be doing at this time.  Most of them do not have enough technology they can call their own to create a total solution for their brand.  Even mighty Amazon, which comes closest, benefits from software like RightScale’s asset management suite (see my interview of RightScale’s CEO, Michael Crandell). 

As such, there will be a rush in these early days to round out complete suites through partnerships.  This is a normal business pattern, and it’s good news for both the partners and customers.  It will lead to more complete solutions, better integration between the components, and greater standardization, and it signals the legitimization of the market and the beginnings of a shift from early adopters to the mainstream.  In short, it is a sign of great health and momentum for Cloud Computing.

The latest news of the impending Keiretsu was the joint announcement that Elastra will partner with VMWare.  I saw that one on Twitter this morning, BTW.  It made me think of the various Keiretsu as revolving around 3 different major markets, each of which corresponds somewhat to an equivalent market in the conventional world:

[Diagram: the three Cloud keiretsu markets (Pure Cloud, Infrastructure Sellers, and Vertical Cloud) and the crossover players between them]

The Pure Cloud is the Amazon-style market.  It offers generic Cloud Infrastructure to all comers.  It’s Cloud as a Service, and is analogous to the Software as a Service world.  This world keeps costs extremely low, relying mostly on Open Source software (e.g. the Xen hypervisor) and internally developed software (S3 or the Elastic Block Store).  The model works great at delivering very low costs.  My company, Helpstream, saved a bundle recently by switching.

The VMWare/Elastra combination is also interesting.  In my (so far cloudy) framework of different Cloud models, I put VMWare in the same category as 3Tera (another great company I’ve interviewed).  These companies are charging for software analogous to what Amazon built or took from Open Source (not precisely, but it plays the same role as Amazon’s Xen hypervisor and the Elastic Block Store they built).  Presumably the customers for this type of Keiretsu will be hosting providers that want to play in the Cloud but do not have enough software development capability themselves to get there.  There is also a market for corporations that want to create private data centers that take advantage of Cloud technology, or that are Cloud compatible so they can build composite apps that span the Cloud and the private data center.  This is an interesting market, to be sure, it’s just different than the “mainstream” (if I can use that word this early in the Cloud cycle) Cloud Market that companies like Amazon or Google represent.  It most closely resembles the perpetual license software market, with its vendors selling licenses for infrastructure.  In fact it isn’t an exact analog, because a lot of this stuff is services, but metaphorically, it’s pretty close.

The last analogous market is what I call the Vertical Cloud.  Seems like there is always a vertical opportunity to be had in any megamarket, and this can be quite a lucrative field to play in.  In the Cloud world, the verticals are represented by Google and Intuit, who have created special-purpose cloud platforms that cater to particular desires.  One could even view things like Facebook applications as existing in the Vertical Cloud market.

Next we have the crossover players.  RightScale is both a useful component for the Amazon world and a company offering to deliver its service on clouds more like the ones the Infrastructure Sellers operate in.  There are many reasons why this is a valuable niche.  To learn more, check out my interview of RightScale CEO Michael Crandell.  Elastra is another product available today in two of the Cloud markets.  Shifting over to the other side, we see Ruby on Rails as a specialized vertical being delivered on the Amazon Cloud by EngineYard and Heroku, hence they’re in the crossover space between the two types of markets.

There are a lot of other players I’ve left off the diagram, just to keep it clean and obvious at a glance what’s happening.  For example, Force.com is trying to be both a Vertical, where it is ideal for creating add-ons to the Salesforce ecosystem, and a true Cloud as a Service where any generic application could be built.  To make it even more interesting, Force.com connects to both Amazon and Google.  This framework provides a way of thinking about possible future combinations or products.  What are other vertical crossover opportunities between the generic cloud and the verticals?  What will be the crossover opportunities between the infrastructure and vertical worlds?  Will there be more than just the 3 big markets, or are the analogs to conventional software markets convincing enough to tell us this is probably all there is?

It’ll be interesting to watch the Clouds continue to evolve and see what unfolds!

Posted in cloud, data center, Partnering, platforms | 2 Comments »

Cloud Computing Servers are Like Southwest Airlines’ 737’s

Posted by Bob Warfield on December 30, 2008

What features does a server absolutely positively have to have to be a candidate for a big cloud data center?  What features would put it ahead of other servers in the eyes of the manager writing the checks for that cloud computing data center?

There’s an interesting article out about how Rackable Systems (and presumably others) are building machines inpsired by Google that answer those questions better than ever before.  We’re talking about features like heat-resistant processors, motherboards that contain 2 servers, and that only need one power supply voltage instead of 2 or 3.

One thing the cloud will do is force standardization and penny-shaving at the hardware (and software) end.  When Amazon, Google, or one of the others is building a big cloud data center, they want utility-grade computing.  It has to be dense on MIPS value, meaning it is really compact and cheap for the amount of CPU power delivered.  Designs that add 25% to the cost to deliver an extra 10% in power won’t cut it.  The Cloud is more concerned with simply delivering more cores, and enough memory, disk, and network speed to keep them happy.  Closing a deal to build standard hardware for a big cloud vendor will be hugely valuable, and in fact, Rackable started out life building systems for Google.
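A quick back-of-the-envelope in Python makes the 25%-for-10% point plain; the base server price and performance figures here are hypothetical.

    # Hypothetical server: $1000 buys 100 units of performance.
    base_cost, base_perf = 1000.0, 100.0
    premium_cost, premium_perf = base_cost * 1.25, base_perf * 1.10

    print(base_cost / base_perf)        # 10.0 dollars per unit of performance
    print(premium_cost / premium_perf)  # ~11.36, worse value, so the Cloud
                                        # buyer just racks more cheap boxes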

It’s going to be interesting to watch the Cloud Server market evolve.   Reading these articles reminds me of Southwest Airlines, which dramatically improved its cost savings by standardizing on just one kind of airplane, the 737.  Not only did that one-size-fit-all for Southwest, but it made their maintenance costs dramatically lower becaues they can standardize on spare parts for one aircraft, mechanics trained on one, and so on.

Cool beans!

Related Articles

James Urquhart, one of the top cloud bloggers, has given us mention in his related post.  There’s some further great analysis from James as well, so check him out!

Posted in cloud, data center | 1 Comment »

 