The Eternal SaaS Loss of Control Angst
Posted by Bob Warfield on November 11, 2007
Just caught another wringing of the SaaS-loss-of-control hands. Niall Sclater, via Aloof Architectures, is worried that if teachers use 3rd-party sites (the “lots of small pieces” strategy) instead of a purpose-built and maintained system running in the university data center, it will be a disaster when those 3rd-party sites go down, because students won’t be able to get their work done on time. In this case the panic is over SlideShare, which evidently went down for some maintenance.
This sort of thing comes up over and over again where SaaS is concerned. What will we do if the service is down? We have no control, we’ll be stuck!
I’ve got bad news: you’re stuck whether you rely on SaaS or your own On-premises solution. Either one can go down. The question is, “Which one is more likely to go down?”, closely followed by, “Which one will be faster bringing things back up?”
Having seen both sides of this fence, I can’t help but place my bets on SaaS. Why? Because the SaaS side can bring the full resources of the people who wrote the software, together with the foremost experts on running it, to bear on the problem. The SaaS vendor can invest in actually doing all of the things that others like to talk about but often don’t implement very well: things like multiple redundant offsite backups and hot failover capabilities. I’ve lived through these fire drills, and let me tell you, IT was almost never going to get the software back up and running without help anyway unless the problem was relatively minor. Escalations to the vendor are a matter of course. Unfortunately, if the vendor is at the other end of the phone line trying to talk the customer’s IT people through it, things are an order of magnitude harder than if the software is running in the vendor’s own datacenter, where they can touch it and make it work.
Here’s another one to consider. The SaaS vendor is in the business of uptime to a much greater degree than in-house IT. Why? Because a multitenant stumble carries with it a much greater cost of failure than some temporary downtime at a single customer. This stuff has to work for the SaaS vendor, and most are turning in admirable scores for their SLAs.
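To make those SLA scores concrete, here is a minimal sketch (my own illustration, not from the post) of what common availability percentages translate to in allowed downtime per year. Real SLAs define measurement windows and exclusions, so this is only a back-of-the-envelope conversion:

```python
# Rough translation of SLA availability figures into allowed downtime.
# Illustrative arithmetic only; actual SLA terms vary by vendor.

MINUTES_PER_YEAR = 365 * 24 * 60  # ignoring leap years for simplicity

def yearly_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year permitted at a given availability level."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for sla in (0.99, 0.999, 0.9999):
    print(f"{sla:.2%} uptime -> {yearly_downtime_minutes(sla):,.0f} min/year")
```

The jump from “two nines” to “three nines” is the difference between days and hours of outage a year, which is why a vendor whose whole business rides on that number invests in it differently than a single IT shop does.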
Consider also the testing that goes on with SaaS, and the ability of the SaaS vendor to head off problems for customers before they materialize across much of the user base. At any given time, many customers will be doing many different things with the software. Much more than a single customer’s usage could hope to cover. Much more than in-house QA can hope to cover. As such, problems get flushed out quickly. The savvy SaaS vendor is fixing those problems inline and rolling the fixes out to everyone before most customers ever see the problem. I’ve talked to On-premises companies that report 40-70% of tech support problems are fixed in the latest release. Customers are reporting those problems because they’re not on the latest release. With SaaS, everyone stays with the latest release, so a huge number of problems are never experienced.
The next time you’re worrying about a loss of control, ask yourself: who do you want to have in control? The world’s foremost experts on the software you’re using, or some folks who may be quite good, but who are far less experienced? The answer is really not that hard once you think about it.
7 Responses to “The Eternal SaaS Loss of Control Angst”
troywing said
Couldn’t agree more Bob,
On-premise product versioning is a vicious cycle. Customers will upgrade to new versions only on their own schedules, forcing on-premise vendors to devote more resources to supporting them, and the customer loses out because they won’t get the benefit of continuous product improvement that SaaS provides. It gets worse each time you bring out a new release.
You’re also hamstrung by the environments that are “certified” within a customer organization. I once had to manage an on-premise product which had releases for 3 versions of SQL Server and 3 versions of Oracle. It was a nightmare trying to get identical features delivered to all the customers without causing onsite problems. It generated a never-ending quality issue, with changes in one version “breaking” another. It’s a problem that is never encountered in SaaS.
sclater said
Hello Bob
My point was not that universities should never outsource key online services – I agree that they can be carried out just as well, and often better, by external companies. However, it is risky to rely on 3rd-party software sitting out there on the Internet for your core services unless you establish a business relationship with the companies hosting them. There is a growing philosophy that universities should just point their students at a whole range of social software sites rather than hosting anything centrally, and I argue that this is not a solution for the delivery of formal distance learning at scale in my postings: The VLE is dead. Long live the VLE and Reinventing the Wheel.
Niall Sclater
smoothspan said
Niall, welcome!
You bring up an interesting shade of gray in an otherwise black-and-white discussion. I’m interpreting your comment as a focus on the use of free social sites, where you can have no formal business relationship, versus paid-for SaaS sites. It’s a good point. The free social sites do have a lot less at stake with their ad models and are not really promising to be “business quality”.
Best,
BW
beyondportals said
Excellent points, Bob!
As a Microsoft Level-4 multi-tenant SaaS provider, we also get similar questions from time to time, and the kind of answer that you provide here in this post usually helps, simply because it makes eminent sense.
That’s when they start thinking about all those times they had escalated the issue to their vendor and their vendor couldn’t do much over the phone, etc.
I think this is more a psychological issue than a real operational issue and eventually people will start to see the obvious advantages of SaaS solutions.
I think we SaaS providers also have the responsibility to educate the public at large by explaining in depth the security measures we take to address such concerns.
Here is a related page from our web site where we have explained in great detail how we maintain the physical security of our client data:
http://www.web2expense.com/w2excms.nsf/$All/3F9BCF96A890958C8525738B0077699C?OpenDocument
Keep up the good work! Best regards.
Don Thompson
http://www.web2expense.com
http://www.beyondportals.com
smoothspan said
I think you’re right, Don. There are stats out there that show the first SaaS purchase for an organization is the hardest. As more and more companies try and succeed with SaaS, it lowers the threshold for other SaaS vendors to get in. It’s great that you’ve put a lot of data on your site about how your own firm works.
Best,
BW
psychemedia said
“In this case the panic is over slideshare, which evidently went down for some maintenance.”
One thing that Niall hasn’t commented on is what happens when an institutional system goes down – which is exactly what appears to have happened to the Open University authentication system on the same day as the Slideshare outage.
The consequences of the central system going down are catastrophic for the user. However, if one piece of the SaaS jigsaw goes down, the user may still be able to access other parts of the mix, assuming they can log in.
That said, one thing I hadn’t thought through before is that whilst single sign-on over distributed apps is obviously desirable for the user, it does introduce a single point of failure across the component services covered by that single sign-on.
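The single-point-of-failure observation can be put in numbers. Here is a minimal sketch (my own illustration, not from the comment) of how a shared sign-on service caps the availability of every app behind it, assuming independent failures:

```python
# Sketch: user-visible availability of an app reached through single sign-on.
# Assumes the SSO service and the app fail independently, which is a
# simplification; correlated failures would make the picture worse.

def effective_availability(sso: float, app: float) -> float:
    """An app is usable only when both it AND the sign-on service are up."""
    return sso * app

# Two apps, each 99.5% available on their own, behind a 99.5% SSO service:
for app in (0.995, 0.995):
    print(round(effective_availability(0.995, app), 4))
```

Without shared sign-on, the two apps fail independently, so one outage leaves the other usable; with it, every app is simultaneously unreachable whenever the sign-on service is down, and each app’s effective availability drops below the sign-on service’s own figure.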
sclater said
Can’t argue with that, Psychemedia. It is indeed catastrophic if your central authentication system goes down, thus denying you access to the rest of your systems. If you’re going to rely on a central authentication system, it needs to be extremely robust and be given the highest priority for support.