SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Interview With 3Tera’s Peter Nickolov and Bert Armijo, Part 3

Posted by Bob Warfield on October 26, 2007

Overview

3Tera is part of a new breed of utility computing services that also includes Amazon Web Services.  If you missed Part 1 or Part 2 of the interview, they’re worth a read!

As always in these interviews, my remarks are parenthetical, any good ideas are those of the 3Tera folks, and any foolishness is my responsibility alone.

Utility Computing as a Business

You’ve signed 100 customers in just a year; what’s your Sales and Marketing secret?

3Tera:  We’re still in the early growth phase, our true hockey stick is yet to come, and we expect growth to accelerate.  Right now we’re focused on getting profitable.

We don’t have a secret, really.  We have a very good story to tell.  We’re attending lots of conferences, we’re buying AdWords, we’re getting the word out through bloggers like yourself, and we’re getting a lot of referrals from happy customers.

The truth is, the utility computing story is big.  People hear about Amazon and start looking at it, and pretty soon they find us.  It’s going to get a lot bigger.  Read their blogs:  Jonathan Schwartz at Sun and Steve Ballmer at Microsoft are out talking to hosters.  Hosting used to be viewed as a lousy business, but the better hosters today are growing at 30-40% a year.  This is big news.

Bob:  (I think their growth in just a year has been remarkable for any company, and it speaks volumes about the excitement around these kinds of offerings.  Utility computing is the wave of the future, a ton of software is moving into the clouds, and the economics of managing the infrastructure demand that vendors take a look at offerings like 3Tera.  We’re only going to see this trend get stronger.)

Tell us more about your business model

3Tera:  We offer both hosted (SaaS) and on-premises versions.  As we said, 80% choose the hosted option.  The other 20% are large enterprises that want to do things in their own data center.  British Telecom is an example of that.

We sell directly on behalf of our hosting providers, and there are also hosting providers that have reseller licenses.  Either way, the customer sees one bill from whoever sold them the grid.

Bob:  (This is quite an interesting hybrid business model.  Giving customers the option to take things on-premises is notable, but even more striking is how few actually take that approach:  just 20%, and those mostly larger enterprises.  It would make sense for a vendor offering both models to draw a line that reserves on-premises for only the largest deals anyway.  3Tera’s partnering model with the hosting providers is also worth noting.)

How do you see the hosting and infrastructure business changing over time?

3Tera:  There are huge forces at work for centralization.  Today, if you are running fewer than 1,000 servers, you should be hosting, because you just can’t do it cost-effectively yourself.  Over time, that number is going up due to a couple of factors.

First, a growing body of regulation affects data centers.  Europe is already there, and the US is not far behind.  There are lots of rules surrounding privacy and data retention, for example.  If I take your picture to make a badge so you can visit, I have to ask your permission.  I have to follow regulations that dictate how long I can keep that picture on file before I dispose of it.  All of this is being expressed as certifications for data centers, such as SAS 70.  There are other, more stringent standards out there and on the way.  The cost of adhering to these in your own data center is prohibitive.  Why do it if you can use a hosted data center that has already made the investment and gotten it done?

Second, there’s simple physics.  More and more, datacenters are a function of electricity.  That’s power for the machines and power for the cooling.  I talked to a smaller telco near here recently that was planning an upgrade to its datacenter.  This was not a new datacenter, just an upgrade, and not that big a datacenter by telco standards.

The upgrade required an additional 10 megawatts of power.  The total budget was something like $100 million.  These are big numbers.  The amount of effort required to get approval for another 10 megawatts alone is staggering.  There are all kinds of regulations, EPA sign-offs, and the like.
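
Bob:  (To put those numbers in perspective, that budget works out to roughly $10 per watt of added capacity.  That’s a crude ratio, of course, since the $100 million presumably covers more than the power buildout alone.)

```python
# Back-of-the-envelope: cost of the telco upgrade per watt of new capacity.
budget_dollars = 100_000_000         # "something like $100 million"
added_watts = 10 * 1_000_000         # the additional 10 megawatts
print(budget_dollars / added_watts)  # => 10.0, i.e. roughly $10 per watt
```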

Longer-term, once you remove the requirement for humans to touch the servers, it opens up possibilities.  Why do we put data centers in urban areas?  So people can touch their machines.  If people didn’t have to touch them, we’d put the data centers next to power plants.  We’d change the physical topology and cooling requirements to be much more efficient.

We want people to think of servers the way they think about fluorescent tubes in the office.  If a light goes out, you don’t start paging people and rushing around 24×7 to fix it.  You probably don’t fix it at all.  You wait until 6 or 8 are out, and then you send someone around to do it all at once, so it’s cost-effective.  Meanwhile, there is enough light from the other tubes that you can live without it.  It’s the same with servers once they’re part of a grid.
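
Bob:  (The tube analogy maps naturally onto a batched repair policy.  Below is a minimal sketch in Python; it’s my illustration, not 3Tera’s actual software, and the threshold of 8 simply echoes their 6-or-8 tubes example.)

```python
# A toy "fluorescent tube" repair policy: failed servers are tolerated
# silently until enough accumulate that one site visit can fix them all.

REPAIR_BATCH_SIZE = 8  # hypothetical threshold, like waiting for 6-8 dead tubes

class Grid:
    def __init__(self, servers):
        self.active = set(servers)
        self.failed = set()

    def on_failure(self, server):
        # Pull the dead server out of rotation; spare capacity absorbs the
        # loss, so nobody gets paged and nothing happens 24x7.
        self.active.discard(server)
        self.failed.add(server)
        if len(self.failed) >= REPAIR_BATCH_SIZE:
            self.dispatch_technician()

    def dispatch_technician(self):
        # One trip replaces the whole batch, which is what makes it cheap.
        print(f"Repair ticket: replace {sorted(self.failed)} in one visit")
        self.active |= self.failed  # repaired servers rejoin the grid
        self.failed.clear()

if __name__ == "__main__":
    grid = Grid(f"server-{i:03d}" for i in range(100))
    for i in range(8):
        grid.on_failure(f"server-{i:03d}")  # only the eighth failure dispatches anyone
```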

Conclusion

The changes in the industry mentioned at the end of the interview are quite interesting.  Legislation is one I had not heard about before, but it makes total sense.  Power density is something I’d heard about from several sources, including the blogosphere, but also more directly.  I met with one SaaS vendor’s Director of IT Operations who said the growth at their datacenter is extremely visible, and he mentioned they think about it in terms of backup power.  When the SaaS vendor first set up at the colo facility, it had two 2-megawatt backup generators.  The last time my friend was there, that number had grown to 24 units generating about 50 megawatts of backup power.  For perspective, the average person in the US accounts for about 12,000 watts of total energy use, so 50 megawatts is enough for a city of over 4,000 people.
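
For anyone who wants to check the math, here it is spelled out.  I’m assuming the later generator units are the same 2-megawatt size as the original pair:

```python
# Back-of-the-envelope check of the backup-power numbers above.
units = 24
watts_per_unit = 2_000_000           # assumed: same 2 MW size as the first two
print(units * watts_per_unit / 1e6)  # => 48.0 MW, i.e. "about 50 megawatts"

watts_per_person = 12_000             # average total US energy use per person
print(50_000_000 / watts_per_person)  # => ~4,167, a city of over 4,000 people
```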

Another fellow I had coffee with this morning runs all the product development and IT for a large, well-known consumer-focused web company.  He mentioned they now do all of their datacenter planning around power consumption, and they had recently changed some architectures to reduce that consumption, even to the point of asking one of their hardware vendors to improve the machinery along those lines.

These kinds of trends are only going to lead to further datacenter centralization and more computing moving into the cloud:  to increase efficiency, to centralize management and make it cheaper, and to load balance so fewer watts are wasted idling.
