SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for October 26th, 2007

SaaS, Cloud Computing, and Liability for Security Breaches

Posted by Bob Warfield on October 26, 2007

There’s an interesting post by Larry Dignan about the TJX legal activity surrounding a data breach that exposed customers’ credit card information.  It seems the banks are suing TJX to make it their problem; in the past, banks and credit card companies had to eat the expense.  The TJX breach involved the theft from its systems of over 100 million credit card numbers by unknown intruders.  This wasn’t a case of a tape falling off a truck or a bug in software inadvertently publishing the numbers: criminals hacked into the system over an 18-month period to steal the data.  It is the largest such theft ever reported.  In addition, personal data on about 451,000 individuals was stolen in 2003 by accessing a system related to returns of merchandise without receipts.

The lawsuit’s allegations about how the breach happened draw on the conclusions of TJX’s own consultants, who did forensic work after the crime.  They found that, among other things, the company failed to comply with 9 of the 12 PCI-DSS requirements:

–  An improperly configured wireless network.  Note that this is outside a secure datacenter–an important point to keep in mind with supposedly secure apps.

–  Outdated WEP encryption on the wireless network, a weakness many other retailers share.

–  Failure to segment sensitive data and treat it differently.

–  Retention of credit card information that shouldn’t have been retained under PCI-DSS standards.

–  Apparently just one store in Florida was compromised sufficiently to lead to the calamity.  The data, about 80 GB, was transferred over the Internet to a site in California. 

–  In addition, a sniffer was installed on the network to capture credit card info, which was being transmitted in the clear (see the sketch below).

–  The consultant who was deposed noted that they had never seen such a “void of monitoring and capture via logs of activity.”

The list goes on for pages if you read the legal filings, but this gives a clear picture of what went on.  According to the plaintiffs, at least, TJX was even aware of many of the problems from audits the year before but had failed to fix them.
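To make the “transmitted in the clear” bullet concrete, here is a minimal sketch in Python of what sending card data only over an encrypted, certificate-verified channel looks like at the application layer.  The host name, port, and record format are hypothetical, and this is not a description of TJX’s systems or of any specific PCI-DSS control; it just illustrates why a sniffer on the wire captures only ciphertext.

```python
import json
import socket
import ssl

PAYMENT_HOST = "payments.example.com"   # hypothetical endpoint, not a real processor
PAYMENT_PORT = 443

def send_transaction(record: dict) -> None:
    """Send a transaction record over TLS so a network sniffer sees only ciphertext."""
    context = ssl.create_default_context()             # verifies the server certificate
    context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions

    with socket.create_connection((PAYMENT_HOST, PAYMENT_PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=PAYMENT_HOST) as tls:
            tls.sendall(json.dumps(record).encode("utf-8"))

if __name__ == "__main__":
    # The record carries a token rather than a raw card number, echoing the point
    # about not retaining data PCI-DSS says you shouldn't keep.  Field names are invented.
    send_transaction({"store_id": "0123", "card_token": "tok_hypothetical", "amount": 19.99})
```

Nothing about this is exotic; the point is that encryption of cardholder data in transit is an ordinary application-level decision, not a heroic one.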

I read this after posting the last installment of my 3Tera interview, and it struck a chord with something from that conversation.  The 3Tera guys talk about how data privacy regulations are increasingly driving datacenter centralization and outsourcing.  This is a great concrete example, and it cuts both ways.

It puts a greater onus on those running datacenters to keep them secure or face the consequences, which means more pressure on SaaS and other cloud computing vendors to deliver a very high quality product.  But it also means IT organizations will face even greater expenses securing their in-house software.  If nothing else, the article on the TJX breach linked above mentions they weren’t even sure of much of what was stolen because they had routinely deleted the data.  What are the chances TJX and others will decide to archive a lot more information in the wake of all this?

Inevitably, this will lead to exactly the kind of legislation the 3Tera guys mention, and that legislation will drive the kinds of certification companies need to hold for various types of data.  Europe already has a head start on this front, but the US will surely follow.  It may not even take legislation; the civil legal system may create its own ramifications for how datacenters operate.

Let me give an example.  As I understand it, damages relating to the use of stolen IP in software are dramatically lower if you can show that your organization took steps to verify it wasn’t using someone else’s IP.  There are companies today that make a business of scanning your source code for suspicious entries.  The CFO or General Counsel may mandate such scanning simply because it is cheap insurance relative to treble damages: by doing it, you can demonstrate your organization took reasonable steps.

So it is with these security issues.  The more certifications you hold along the lines of SAS-70, the more opportunity you have to tell the lawyers that your organization took reasonable steps and therefore shouldn’t be held liable, or at least not for as much.  SAS-70, incidentally, is an auditing standard promulgated by the AICPA.  There are many more such standards out there, such as those associated with Sarbanes-Oxley.  When you look at the impact Sarbanes-Oxley has had on small public companies, it isn’t hard to see that this kind of thing will drive the SMB market to do more and more in the cloud using SaaS and other mechanisms, because they just won’t be able to afford to certify their own projects the way the bigger companies can.

In TJX’s case, they would have benefited from using a more standardized product, running in a more modern datacenter, with all the safeguards and certifications in place.  A modern thin client could run HTTPS, which provides far stronger encryption than the nearly useless WEP protocol TJX was using on its wireless network.  In short, it’s hard to see how a respectable SaaS vendor would have fallen into the same traps, precisely because customers would have insisted on audits like PCI-DSS and insisted the guidelines were followed.  For their part, SaaS vendors should be touting those things as advantages and amortizing the cost over multiple tenants, making the additional security cheaper for customers.  Sure, a SaaS vendor could make a mistake, but doing so leaves the vendor, more than the customer, holding the liability.

I can already hear the arguments: if TJX had only done the right things with their on-premises software, there would have been no issue, so why is Bob making this out to be a SaaS thing?  If the trend to take the litigation route on these matters continues, companies will have a lot more to think about before undertaking to accept all of that liability themselves.  Let’s also consider that TJX apparently knew of the deficiencies but did not take action.  Why not?  I have no data to support this, but in my experience it almost always boils down to cost.  TJX probably had the best of intentions but lacked the resources, budget, and time to make the fixes before time ran out for them.  It was a costly mistake, and again, cost is exactly the kind of thing SaaS greatly alleviates.

Posted in business, saas, strategy | 1 Comment »

Interview With 3Tera’s Peter Nickolov and Bert Armijo, Part 3

Posted by Bob Warfield on October 26, 2007

Overview

3Tera is one of the new breed of utility computing services such as Amazon Web Services. If you missed Part 1 or Part 2 of the interview, they’re worth a read!

As always in these interviews, my remarks are parenthetical, any good ideas are those of the 3Tera folks, and any foolishness is my responsibility alone.

Utility Computing as a Business

You’ve signed 100 customers in just a year.  What’s your Sales and Marketing secret?

3Tera:  We’re still in the early growth phase, our true hockey stick is yet to come, and we expect growth to accelerate.  Right now we’re focused on getting profitable.

We don’t have a secret, really.  We have a very good story to tell.  We’re attending lots of conferences, we’re buying AdWords, we’re getting the word out through bloggers like yourself, and we’re getting a lot of referrals from happy customers.

The truth is, the utility computing story is big.  People hear about Amazon and they start looking at it, and pretty soon they find us.  It’s going to get a lot bigger.  If you read their blogs, Jonathan Schwartz at Sun and Steve Ballmer at Microsoft are out talking to hosters.  Hosting used to be viewed as a lousy business, but the better hosters today are growing at 30-40% a year.  This is big news.

Bob:  (I think their growth in just a year has been remarkable for any company, and speaks highly to the excitement around these kinds of offerings.  Utility computing is the wave of the future, there is a ton of software moving into the clouds, and the economics of managing the infrastructure demand vendors take a look at offerings like 3Tera.  We’re only going to see this trend getting stronger.)

Tell us more about your business model

3Tera:  We offer both hosted (SaaS) and on-premises versions.  As we said, 80% choose the hosted option.  The other 20% are large enterprises that want to do things in their own data center.  British Telecom is an example of that.

We sell directly on behalf of our hosting providers, and there are also hosting providers that have reseller licenses.  Either way, the customer sees one bill from whoever sold them the grid.

Bob:  (This is quite an interesting hybrid business model.  Giving customers the option to take things on-premises is interesting, but even more interesting is how few actually take that approach:  just 20%, and those mostly larger enterprises.  It would make sense to me for a vendor looking to offer both models to draw a line that forces on-premises only for the largest deals anyway.  3Tera’s partnering model with the hosting providers is also quite interesting.)

How do you see the hosting and infrastructure business changing over time?

3Tera:  There are huge forces at work for centralization.  Today, if you are running fewer than 1,000 servers, you should be hosting, because you just can’t do it cost-effectively yourself.  Over time, that number is going up due to a couple of factors.

First, there is starting to be a lot of regulation that affects data centers.  Europe is already there and the US is not far behind.  There are lots of rules surrounding privacy and data retention, for example.  If I take your picture to make a badge so you can visit, I have to ask your permission.  I have to follow regulations that dictate how long I can keep that picture on file before I dispose of it.  All of this is being expressed as certifications for data centers such as SAS-70.  There are other, more stringent standards out there and on the way.  The cost of adhering to these in your own data center is prohibitive.  Why do it if you can use a hosted data center that has already made the investment and gotten it done?

Second, there is simple physics.  More and more, datacenters are a function of electricity: power for the machines and power for the cooling.  I talked to a smaller telco near here recently that was planning an upgrade to their datacenter.  This was not a new datacenter, just an upgrade, and not that big a datacenter by telco standards.

The upgrade required an additional 10 megawatts of power, and the total budget was something like $100 million.  These are big numbers.  The amount of effort required to get approval for another 10 megawatts alone is staggering: all kinds of regulations, EPA sign-offs, and the like.

Longer-term, once you remove the requirement for humans to touch the servers, it opens up possibilities.  Why do we put data centers in urban areas?  So people can touch their machines.  If people didn’t have to touch them, we’d put the data centers next to power plants.  We’d change the physical topology and cooling requirements to be much more efficient.

We want people to think of servers the way they think about fluorescent tubes in the office.  If a light goes out, you don’t start paging people and rushing around 24×7 to fix it.  You probably don’t fix it at all.  You wait until 6 or 8 are out and then you send someone around to do it all at once, so it’s cost effective.  Meanwhile, there is enough light available from other tubes so you can live without it.  It’s the same with servers once they’re part of a grid.
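Bob:  (To make the fluorescent-tube analogy concrete, here’s a toy sketch in Python of that batching policy.  The grid class, node names, and repair threshold are all my own illustration, hypothetical and nothing to do with 3Tera’s actual implementation.)

```python
# Toy illustration of the "fluorescent tube" maintenance policy described above:
# tolerate individual failures and batch the physical repairs.
REPAIR_BATCH_THRESHOLD = 6   # hypothetical: don't send anyone until this many nodes are dark

class Grid:
    def __init__(self, node_names):
        self.healthy = {name: True for name in node_names}

    def mark_failed(self, name):
        """A single failure is not an emergency; workloads shift to the healthy nodes."""
        self.healthy[name] = False

    def healthy_nodes(self):
        return [n for n, ok in self.healthy.items() if ok]

    def maintenance_due(self):
        """Like waiting for several tubes to burn out before sending someone around."""
        failed = sum(1 for ok in self.healthy.values() if not ok)
        return failed >= REPAIR_BATCH_THRESHOLD

grid = Grid(f"node{i:02d}" for i in range(24))
grid.mark_failed("node07")
print(len(grid.healthy_nodes()), grid.maintenance_due())   # 23 False: keep running as-is
```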

Conclusion

The changes in the industry mentioned at the end of the interview are quite interesting.  Legislation is not a driver I had heard about before, but it makes total sense.  Power density is something I’d heard about from several sources, including the blogosphere, but also more directly.  I met with one SaaS vendor’s Director of IT Operations who said the growth at their datacenter is extremely visible, and he mentioned they think about it in terms of backup power.  When the SaaS vendor first set up at the colo facility, it had 2 x 2 megawatt backup generators.  The last time my friend was there, that number had grown to 24 units generating about 50 megawatts of backup power.  For perspective, an average person in the US accounts for roughly 12,000 watts of energy use, so 50 megawatts is enough for a city of over 4,000 people.
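For anyone who wants to check that arithmetic, here is the back-of-the-envelope calculation using the post’s own rough numbers (the 12,000-watt per-capita figure is an approximation, not a measurement):

```python
# Back-of-the-envelope check of the figures above, using the post's own rough numbers.
backup_capacity_mw = 50          # 24 generators at roughly 2 MW each
watts_per_person = 12_000        # rough US per-capita energy use cited above

people_supported = backup_capacity_mw * 1_000_000 / watts_per_person
print(round(people_supported))   # ~4167, hence "a city of over 4,000 people"
```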

Another fellow I had coffee with this morning runs all the product development and IT for a large, well-known consumer-focused web company.  He mentioned they now do all of their datacenter planning around power consumption, and had recently changed some architectures to reduce that consumption, even to the point of asking one of their hardware vendors to improve the machinery along those lines.

These kinds of trends are only going to lead to further datacenter centralization and more computing moving into the cloud: to increase efficiency, to centralize management and make it cheaper, and to load balance so that fewer watts are wasted on idle machines.

Posted in data center, grid, platforms, saas, Web 2.0 | Leave a Comment »

 