SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for January, 2011

Should We View Internet Censorship as a Human Rights Violation?

Posted by Bob Warfield on January 28, 2011

This is an uncomfortable topic because it sounds petty.  Lori Kozlowski must feel uncomfortable too when she starts by saying, “Without being dramatic, it makes one wonder if access to the Internet at this point in history is actually a human right.”

After all, much worse things are being done to people in this world than Internet censorship.  Why even consider it for the list?

The Internet is certainly up in arms about Egypt being taken off the Internet.  There’s a Monster long tail of articles on Techmeme about it.  GigaOm tells how it was done.  It only took about 2 hours to completely isolate the country.  That’s pretty scary, and there is talk of an Internet Kill Switch for the US too, although more to stop cyber terrorists than to use as Egypt has, one would hope.

Is this just the Digerati contemplating their navels, or is it something truly important?  Does it rise to the level of Human Rights?

Our Founding Fathers were very keen on the right to free speech.  It is an essential freedom in our culture.  The Founding Fathers understood that it is silence and isolation that lead to tyranny getting away with it.  That’s where I’m coming from on this.  That’s what makes it an essential human right.  It isn’t the speech itself, it’s what’s being talked about.  Think about what it would mean to you to suddenly be cut off completely amid violence over the span of 2 hours. 

I have been a bit disturbed when we look too greedily at lists of where the Internet growth is highest and make those our targets without wondering what else is going on in those places and whether their Internet is our Internet.  Fortunately, there seems to be only one entry on that list where this is a concern, but it sure is high up the list.  Do you want to do business with a place like that?  Can you make a buck some other way?  Despite some of my recent negativity towards Google (no link on purpose), you have to give them their props for pulling out of China, even for a time.

Postscript

Sounds like President Obama views the Internet as a basic human right.  Good for him.

Posted in Uncategorized | 2 Comments »

13 Reasons Not to Go to the Cloud

Posted by Bob Warfield on January 28, 2011

This week’s sponsored InfoBoom post is something a little different for me.  As long-time readers will know, I am a total Cloud fanatic.  However, Michael Coté wrote a note saying he’s been researching reasons not to go to the Cloud.  I’ll bite.  There is almost always a lot to be learned by considering the reverse of any proposition, no matter how good it may seem.  If nothing else, you will discover things are never as rosy as they seem, or you may discover a way to eliminate a negative and make it better than ever.

Check out the article, therefore, for 13 reasons not to go to the Cloud.  I didn’t set out to make it 13, but given my preference for the Cloud, it was just Karma that made it such an unsavory number!

PS May be a busy day for Smoothspan.  I’ve got a couple of other posts I want to get out and I’m not sure I can wait!

Posted in cloud | 1 Comment »

Skeptical About Amazon’s Bulk E-Mail Service (Beware the Cloud Neighbors)

Posted by Bob Warfield on January 25, 2011

Amazon has announced yet another web service, this time a Bulk E-Mailing Service.  I love the Amazon strategy of supplying a powerful PaaS (Platform as a Service) one service at a time.  In fact, doing a PaaS this way was one of my key recommendations in, “Sell the Condiments not the Sandwiches.”  I’m generally a huge proponent of Amazon Web Services, but this time I’m skeptical.

Here’s where I’m coming from:

At my last company, we were big users of AWS and we loved it.  Before we went down that route, I canvassed my network for gotchas.  The number one piece of advice we got was not to put our email on Amazon.  Why?  Because too many spammers had already used the service and gotten most of the IP space blacklisted.

See that part in my title, “Beware the Cloud Neighbors?”  I’ve been noodling on this issue ever since having heard those warnings about Amazon.  It’s really the fundamental concern most people have about the Cloud:  is there something my neighbors will do in the Cloud that will hurt me?  Usually the worry is that the neighbors will try to break in.  But in this case, we have a different problem.  Because the neighbors set up the equivalent of too many meth labs in the neighborhood (hate the spam!), the whole neighborhood got a bad reputation.  I was told at the time that Amazon was trying to work out ways to get un-blacklisted, but as you probably know, that’s really hard. 

This sort of thing was on my mind all during the Wikileaks stories too.  All these Cloud services were kicking Wikileaks out as fast as they could.  I kept wondering what impact a bad actor (perceived, I personally don’t have a problem with Wikileaks) could have on the overall Cloud.  Maybe that’s the reason they had to be kicked out so summarily and quickly.  I got a note from a blog reader that the service I use, WordPress.com, is completely blocked in China.  Here again, a Cloud was being punished for what were presumably the actions of a relatively few of its inhabitants.  Keep in mind the possibility for collateral damage from misbehaving Cloud Neighbors as you look at Clouds.  I expect they will ultimately have to think harder about their T’s and C’s, maybe winding up like some gated communities with all sorts of rules about how you have to behave to live in their Cloud.

Meanwhile, let’s get back to Amazon’s Bulk E-Mail Service.  As I read the announcement, I kept looking for signs of how the service would fix the blacklisting and prevent renewed bad behavior.  The good news is that there are some teeth there.  New users are on a probationary period while Amazon monitors your behavior:

Once your application is up and running, the next step is to request production access using the SES Production Access Request Form. We’ll review your request and generally contact you within 24 hours. Once granted production access, you will no longer have to verify the destination addresses and you’ll be able to send email to any address. SES will begin to increase your daily sending quota and your maximum send rate based on a number of factors including the amount of email that you send, the number of rejections and bounces that occur, and the number of complaints that it generates. This will occur gradually over time as your activities provide evidence that you are using SES in a responsible manner. The combination of sandbox access, production access, and the gradual increase in quotas will allow us to help ensure high deliverability for all customers of SES. This is exactly the same process that bulk senders of email, including Amazon.com, use to “season” their sending channels.
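In developer terms, that sandbox-then-production flow is simple to picture.  Here’s a minimal Python sketch using the boto library’s SES support (the addresses are placeholders, and in the sandbox every recipient has to be verified before you can mail them):

    import boto

    # Connect using AWS credentials from the environment or boto config.
    ses = boto.connect_ses()

    # Sandbox rule: sender and recipient must both be verified first.
    # SES emails each address a confirmation link.
    ses.verify_email_address('sender@example.com')
    ses.verify_email_address('recipient@example.com')

    # Send a simple text message; quota and send rate start out small.
    ses.send_email(source='sender@example.com',
                   subject='Welcome!',
                   body='Thanks for signing up.',
                   to_addresses=['recipient@example.com'])

    # See how much runway Amazon has granted your account so far.
    print ses.get_send_quota()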

I think that’s great, so where does my skepticism come from?

Two sources, really.  First is the concern that whatever residual blacklisting is still out there will make things tough even if you follow the rules.  Perhaps Amazon has an endless supply of all-new clean IP addresses reserved for the new service.  If so, I’d love to hear that.  If not, why can’t someone fire up a mail server on each of however many EC2 instances they own and spam away?  That’s how the blacklisting came about in the first place.  I suspect they must have a clean room for the mail service, as Amazon’s own E-Commerce mail certainly gets through, so let’s stand by to hear something about that.  Maybe someone from Amazon will reply with a comment to this post.  The second source is just vague unease about whether they’ve taken it far enough.  I have had the dubious pleasure of seeing some of the seamy underbelly of the spam world as part of researching how to bootstrap startups and small businesses.  You don’t have to look far before you run into this rather large community of people and say, “Aha, now I get who these spammers are!”  They have nothing of value to offer you by way of product, but they will say and do anything, as often as they can, to get you to part with money, a click so someone else gives them the money, or whatever they’re after.

As you look at some of the small business email marketing tools, they have varying degrees of spam resistance baked in.  They do this out of a sheer need for survival.  As Amazon points out, deliverability is the key.  Some of these services got into the hands of spammers early in their careers and watched deliverability for all customers plummet.  It’s interesting to read about what they had to do to recover.  Many of them have hard and fast requirements for things like “double opt-in”, where the prospect has to not only sign up for your newsletter, but also has to click an opt-in link that is solely controlled by the marketing software provider.  They’ve seen the case where the spammer rigged the signup form to be completely misleading and they want no doubt that the person really really does want to be on the mailing list.  You also see requirements to have an opt-out link in every piece of email that goes out, to the point where sometimes the marketing software provider puts one in and you have no control over it.
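To see why double opt-in is so hard for a spammer to subvert, here’s a minimal sketch of the mechanism (the names and the mail call are hypothetical, and a real provider would persist all this in a database):

    import os, hashlib

    pending = {}        # token -> email address awaiting confirmation
    confirmed = set()   # the actual mailing list

    def send_confirmation_link(email, token):
        # Stub: the provider, not the list owner, emails a link like
        # https://provider.example.com/confirm?token=<token>
        pass

    def signup(email):
        # The signup form only creates a *pending* entry; nothing gets
        # mailed to the list yet.  The token is unguessable.
        token = hashlib.sha1(os.urandom(16)).hexdigest()
        pending[token] = email
        send_confirmation_link(email, token)

    def confirm(token):
        # Only a click on the emailed link, proof that the recipient
        # controls the inbox and really wants the mail, gets them onto
        # the list.  A rigged signup form can't forge that click.
        email = pending.pop(token, None)
        if email:
            confirmed.add(email)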

Amazon is not undertaking anything that direct.  Rather, they’re ramping up users slowly and waiting to see if there is feedback that the user is spamming.  In the long run that works, and they surely know a lot more about it than I do, but it doesn’t seem quite as foolproof as what others have done.  I’m sure a lot of organizations will prefer not to have Draconian controls over their email; it’s just not clear to me that waiting until spamming occurs will work, at least unless you have an unlimited supply of IPs to burn.

ReadWriteWeb has a piece that leaves the impression Amazon’s prices are so far below competitors that the only thing left is support.  I don’t think that’s right because I think deliverability will be the issue.  Word gets around fast on whether you have it or not and whether it is reliable.  I took a look at the four competitors mentioned in the article with some thoughts:

CritSend:  CritSend has a deliverability engine that checks for the likelihood a message would be flagged as spam before it is sent.  It automatically manages bounces and spam notifications.  It automatically analyzes and repackages each email, optimizing it with legal and technical compliance additions.  They take care of CAN SPAM compliance for you and append your emails with specific CAN SPAM content footers, DomainKeys signatures and SPF/SenderID.  They also handle all aspects of unsubscription management for you, including automatically appending your unsubscription message to your emails and full unsubscription request processing.  They store unsubscription addresses, bounces and spam report clicks.  Their addresses are filtered from your deliveries, and you may edit them through the website dashboard or export them in real time through an API query.  (For a taste of what just one of these checks involves, see the sketch after this list.)

Postmark:  Sounds simpler, but they still handle things like whitelisting, ISP throttling, reverse DNS, feedback loops, content scanning, and delivery monitoring to ensure your emails get to the inbox.

SendGrid also discusses their automated handling of unsubscribe and similar capabilities.
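To give a feel for how mundane yet fiddly this plumbing is, here’s a minimal sketch of just one of the checks mentioned above, looking up a domain’s SPF record (it uses the dnspython library, and the domain is a placeholder):

    import dns.resolver   # the dnspython library

    def spf_record(domain):
        # SPF policies live in TXT records beginning with 'v=spf1'.
        # Receiving mail servers use them to decide whether a sending
        # IP is authorized for the domain; a missing or broken record
        # hurts deliverability.
        for rdata in dns.resolver.query(domain, 'TXT'):
            txt = ''.join(rdata.strings)
            if txt.startswith('v=spf1'):
                return txt
        return None

    print spf_record('example.com')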

Are you beginning to see how sophisticated some of this stuff is in order to prevent spammers from clogging up a mail delivery service?  Do you need all this?  Hard to say.  Having seen a little bit of the spammers and knowing how much gets through despite such measures, you have to wonder though.  Amazon probably has all these bases covered, but it isn’t clear from their announcement that it is at the level of these other players.  I’d wait for confirmation via references from good-sized and reputable e-commerce players.  After all, it’s basically your reputation that’s at stake as well as your ability to communicate with your customers.

Posted in business, cloud, strategy | 3 Comments »

3 Challenges Larry Page Faces for Google

Posted by Bob Warfield on January 24, 2011

There are 3 challenges Larry Page must overcome to fundamentally make Google a bigger, better, and more successful company that customers love and that does no evil:

Lack of Customer Focus

Google is all about the algorithms, but algorithms are not customer-focused.  They’re off doing whatever they were made to do, and if that’s not the right thing, whatever customer gets in the algorithm’s way is out of luck.  Google is legendary for its lack of customer support.  If you happen to fall into the cracks where the algorithms can’t help you, you’re stuck.  There are no warm bodies, no Zappos-style people you can talk to.

For a long time, Google has made it past this issue on the sheer golly-gee-whiz-what-a-great-story-is-Google factor.  It has been such an amazing company playing in such hip spaces that the story made the company feel “real” and substituted for a more traditional, customer-focused connection with its customers.  It’s a great brand, but a brand has to have real people behind it to hold up.  Its founders have done some things to the algorithms that are arguably very customer-focused.  For example, the discussion of how AdWords factors in “quality” and not just the dollars is one such case.  It’s not enough, particularly at a time when there is a growing consensus that the quality thing is failing.

Google is at a crossroads.  It has reached the level of scale that triggers the antibodies and scares people.  Our little reptilian brains are signalling Danger loud and clear, as they always do when one entity gets too powerful.  If Google doesn’t develop a more human side and get more in touch with its customers, it will wind up where Microsoft and Oracle are: two companies that in many ways have great products, but that are very much not loved because they’re not customer focused.  Side note on customer focus and not doing evil: don’t have the guy charged with fixing your spam problems airing your competitors’ dirty laundry.  Those are vintage Microsoft and Oracle tactics, and that’s not what you guys are supposed to be.  Take the high road always, or get comfortable with being another Microsoft or Oracle.  You could argue that it doesn’t seem to have hurt those two companies much, but I disagree.  They reached unstoppable momentum at a time when markets were far less frictionless and people were far less suspicious (partially because they hadn’t yet been abused by these behemoths).  The markets now fight them at every turn.  Microsoft seems to be succumbing, Oracle not so much, because their Enterprise software business has far greater lock-in.  Larry, take note: Google has less lock-in than either Microsoft or Oracle.  Your empire came to you as a million iron filings seeking out a magnet.  Think about what that means and play nice: there are other magnets coming along all the time.

Figuring out how to get the human touch won’t be easy.  It probably requires a cultural change of some sort, and those are always hard.  It probably requires some new Executive talent that really gets these things.  The existing culture’s antibodies will want to reject these strange new thoughts and Larry, you’ve just gotten back to the top so you may not be so hip to it either.  But it needs to be done.  If it were me calling the shots, I would find three key executives to fix this problem:

A VP of HR (horrors to an algorithms crowd, and I can’t believe I’m writing this as an Engineer, but…) who is someone like Patty McCord at Netflix.  Someone who will be deadly serious about making sure you have the right kind of customer-focused culture and the right kind of people to keep it vigorous.  Heck, someone who might make sure your people leave, if they leave, for all the right reasons, and not just to move on to the next hot Google-like thing.

A VP of User Experience:  Algorithms do not have a User Experience, or if they do, it is either as brief as possible (welcome to original Google) or not very pleasant.  Google wants to play in all sorts of spaces where algorithms are not sufficient and User Experience is critical.  Find some awesome user experience executive and give them the absolute power to make the User Experience right.  This person has to make designers and other UX peeps alpha dogs right alongside the algorithm geeks.  It’ll be hard, but it’s essential.

A VP of Customer Service who understands that the customer controls the conversation, that it’s about engaging with customers, and that your customers are people, and not just biotics who can be optimized into paying you more money by your algorithms.  Get someone from a place with insanely great customer service like Zappos.

You’re going to have to stand behind these people and let them change Google for the better.  You’ll have to help them change the ways of many of your loyal cohort.  It won’t be easy.  There will be few of them and a lot of antibodies in the beginning.  But if they can succeed, even just slowly at first, it will make a huge difference for Google.

Too Much Waiting for Lightning to Strike

Google is a company that has been very very lucky.  There, it’s out in the open.  It isn’t an indictment of your skills and talent–when you have the combination of fantastic skills and talent coupled with fantastic luck, you get an incredible success.  Bill Gates is absolutely brilliant, and Microsoft was incredibly lucky.  Companies that come and then go suddenly may simply be companies that only had the luck and then it ran out.

The problem with being so lucky is your model for success has luck baked in to a huge degree; you just can’t help it, it’s all you’ve known.  But that means you’re waiting for lightning to strike.  We can turn that into an algorithm if you like, so it makes sense to an algorithmic culture: you’re practicing Darwinian selection.  You’re planting a thousand seeds to see which ones will sprout.  You’re a mile wide and an inch deep, and that’s the dark side of waiting for lightning to strike.  You’re not building businesses from the ground up very well.  You throw something over the wall to the world that’s half-baked, and if lightning doesn’t strike, you mostly ignore it.  You don’t know how to overcome the gravity well if it is too deep and reach escape velocity through hard work, perseverance, and brilliance, with a near-total lack of luck.  You’re missing out on one of the advantages of being big: you shouldn’t need to count on luck so much anymore.  You can’t count on it, not to maintain the growth rates that are baked into your stock valuation for years to come.  You need a system that can overcome luck.

This problem goes to your ability to focus on improvement until a product is sufficiently successful, or until it delights your customers.  The world used to tease Microsoft about taking 3 releases to get any of their products right.  They are a company that did not wait for lightning to strike.  They built businesses through relentless improvement.  Why are Google Apps still not 100% compatible with Microsoft Office?  It isn’t that hard to do, but you haven’t done it.  I wrote about it way back in 2008, and it still isn’t done.  It isn’t like Office is even that much of a moving target, yet you can’t hit it.  I can only conclude it’s because you’re not trying.

You’re waiting for lightning to strike.  You’re being fatalistic and letting the market decide.  You do a little bit, and if you don’t get an overwhelming response in a very short time, you move on.  That’s how Google came to be, but it is not a model for building great companies and products.  It’s just an eye-witness account of what it means to be lucky.  Don’t depend on luck for your product strategy.

Don’t Become Microsoft

Microsoft had that relentless execution, but they never had innovation.  Their version of Darwinian selection was to let other companies spend their money to figure out where lightning would strike.  When they saw the lightning strike, they’d mobilize, head on over, and try to take the lightning away from whoever had it in their bottle.  It happened time and again for all of their successful businesses.  They didn’t have the first or the best GUI.  They didn’t have the first or the best PC operating system.  They didn’t have the first or the best spreadsheet, slide software, C compilers, or most other things. 

Since you’re big, and if you decide to quit waiting for lightning to strike, there will be a tremendous urge to become Microsoft.  You’re already doing it.  You had to have your own browser.  You’ve built a couple of operating systems (one seems a good idea, Android; the other leaves me shaking my head).  You want to go after Microsoft’s Office franchise.  You’ve got staff members talking trash about competitors’ spam.  Etc.

You can do all that, and you’ll continue to grow.  But you can’t do that and be customer focused.  You can’t do that and “Do No Evil.”  Being a rapaciously hyper-competitive robber baron will tend to piss people off.  Fred Wilson talks politely about what you should not do by way of destroying innovation.  Heed his words.

There are two good models that do not involve being rapaciously hyper-competitive robber barons:

Do your own innovation, do it well, bring it to fruition, and delight customers.  Apple is a great example.

Let innovators grow their businesses to stable self-sustaining scale.  Buy the businesses, keep them independent, and nurture them.  Warren Buffett’s companies or Johnson and Johnson work this way.  Oracle does not.  Since they are a rapacious robber baron, they buy companies not to nurture them, but to milk locked-in customers of their last IT penny.  You don’t have their lock-in and you don’t want to be that kind of company, so avoid that path!

It’ll Be Tough Balancing These Three

It’ll be tough balancing these three.  They will tend to fight against one another, against your existing culture, and against your tendency to want to just take what you think you’re entitled to as all big businesses do.  It will take charismatic leadership, great people, a great culture, and a strong will.  Nobody ever said it would be easy.  Larry, you’re young and you have plenty of time and energy.  You’ve built something special.  Don’t lose it just to get bigger.

Posted in business, strategy | Leave a Comment »

What Will GPU-On-The-CPU Mean for Analytics?

Posted by Bob Warfield on January 21, 2011

This week’s InfoBoom-sponsored post is all about Intel’s announcement that it would be shipping chips that include an integrated graphics processor (GPU) on the same chip as the CPU under their so-called Sandy Bridge architecture.  A lot of folks probably ignored the announcement thinking it meant better video games for their kids and perhaps their laptops, but there is a more interesting way to look at the impact of such architectures for business.

GPUs are not just for graphics, as it turns out.  They’re extremely powerful supercomputers in their own right, with vector processing and other capabilities that are well on par with the Cray supercomputers that ruled during my college computer science days.  The Cray X-MP ran at 941 Megaflops back in the day and was considered strategic weapons grade computing.  The latest Nvidia CUDA GPUs weigh in at 1 Teraflop or better, so they are fully capable of keeping up with the old Cray.  What happens when that kind of power is available on every CPU?  A whole lot of power, and a whole lot of change.
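To put that in business terms, the win comes from data-parallel work: the same arithmetic applied across millions of values at once, which is exactly the shape of most analytics.  A minimal CPU-side sketch of the style in Python with numpy (a GPU runs the identical pattern, just fanned out across thousands of cores):

    import numpy as np

    # Ten million synthetic sales records.
    n = 10000000
    units = np.random.randint(1, 100, size=n)
    price = np.random.uniform(5.0, 500.0, size=n)

    # One vectorized expression computes revenue for every record at
    # once; no explicit loop, just the kind of data-parallel operation
    # vector hardware was built for.
    revenue = units * price
    print revenue.sum(), revenue.mean(), revenue.max()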

Check out my post over on InfoBoom to find out!

Posted in cloud, software development | Leave a Comment »

Google Says Spam a Priority, Not So Much a Problem

Posted by Bob Warfield on January 21, 2011

Official response from Matt Cutts on the Google Search Spam problem.  The long and the short of it?  There is acknowledgement that there has been a “slight uptick” in recent months but that things are much better than they were 5 years ago:

The short answer is that according to the evaluation metrics that we’ve refined over more than a decade, Google’s search quality is better than it has ever been in terms of relevance, freshness and comprehensiveness.

Now that all sounds great, until you think about it.  When I wrote about this issue recently, I stated, “If there is blame to be made, I would blame sloth more than conspiracy.”  Let’s go with that and assume no conspiracy and that Google simply needs to work harder.  As everyone knows, Google is an algorithms-driven company.  So here is the conundrum:

If you have a lot of spam because your algorithm isn’t good enough at rejecting it, should you believe your algorithm when it says your “search quality is better than it has ever been in terms of relevance, freshness, and comprehensiveness?”

Two other comments and then I’ll leave this topic: 

First is that I appreciated the reference to Smoothspan’s recent article, “Silly rabbits: Google is for Spam not for Search”:

One misconception that we’ve seen in the last few weeks is the idea that Google doesn’t take as strong action on spammy content in our index if those sites are serving Google ads.

While I continue not to think there is a conspiracy, it is important to note that Google monetizes the spammers, drives traffic to their sites, and so benefits.  All the more reason to be very squeaky clean.

And speaking of being very squeaky clean, did anyone else think it was at least vaguely humorous that Google’s Spam Czar, Monsieur Cutts of the “Move along, these are not the spam droids you’re looking for” memo, was spending his time breaking stories about the spamminess of his competitors when he could have been improving his own company’s record there?  Consider that while Facebook’s #2 advertiser was a spammer, we have absolutely no way of quantifying how much revenue Google gets from spammers monetizing their spam through Google’s ads.  What if there are some in Google’s top 10?

How about we all focus on dialing down our own spam, let the Internet police the competitors, and get on with some more relevant search results.  Speaking of more relevant results, I love Google’s Custom Search Engines.  Great way to reduce spam per this ReadWriteWeb article if you want a specialized search for a particular domain that’s narrow enough to have a manageable number of contributors.

Posted in Marketing, strategy, Web 2.0 | 3 Comments »

Database.com, nice name, shame about the platform (Huh?)

Posted by Bob Warfield on January 20, 2011

Matt McAdams writes an interesting guest post for Phil Wainewright’s ZDNet blog, Software as Services.  I knew when I read the snarky title, “Database.com, nice name, shame about the platform,” I had to check it out.

The key issue McAdams seems to have with Database.com (aside from the fact his own company’s vision of PaaS is much different, more in a minute on that) is latency.  He contends that if you separate the database from the application layer, you’re facing many more round trips between the application and the database.  Given the long latency of such trips, your application’s performance may degrade to the point that your app “will run at perhaps one tenth the speed (that is, page loads will take ten times longer) than a web app whose code and data are colocated.”  McAdams calls this a non-starter.
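His arithmetic is easy to check with made-up but plausible numbers (the latencies below are my assumptions, purely for illustration):

    # Hypothetical page that issues 30 sequential queries to render.
    queries = 30
    colocated_rtt = 0.001   # ~1 ms round trip inside one datacenter
    cross_dc_rtt = 0.030    # ~30 ms round trip between datacenters

    print 'colocated db: %.2f s' % (queries * colocated_rtt)   # 0.03 s
    print 'remote db:    %.2f s' % (queries * cross_dc_rtt)    # 0.90 s

Once you add back the fixed costs of rendering and shipping the page to the browser, a 10x overall slowdown is quite believable.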

If using Database.com or similar database-in-the-cloud services results in a 10x slower app, then McAdams is right, but there are already ample existence proofs that this need not be the case.  There are also some other interesting considerations that may make it not the case for particular database-in-the-cloud services (hereafter DaaS to avoid all that typing!).

Let’s start with the latter.  Salesforce’s offering is, of course, done from their data centers.  From that standpoint, you’re going to pay whatever the datacenter-to-datacenter latency is to access it if your application logic is in some other Cloud, such as Amazon.  That’s a bit of a setback, to be sure.  But what if your DaaS provider is in your Cloud?  I bring this up because some are.  Of course some Clouds, such as Amazon, offer DaaS services of various kinds.  In addition, it’s worth looking at whether your DaaS vendor hosts from their own datacenter, or from a publicly accessible Cloud like Amazon.  I know from having talked to ClearDB‘s CEO, Cashton Coleman, that their service is in the Amazon Cloud.

This is an important issue when you’re buying your application infrastructure as a service rather than having it hosted in your own datacenter.  It also creates a network effect for Clouds, which is something I’ve had on my mind for a long time and written about before as being a tremendous advantage for Cloud market leaders like Amazon.  These network effects will be an increasing issue for PaaS vendors of all kinds, and one that bears looking at if you’re considering a PaaS.  The next time you are contemplating using some web service as infrastructure, no matter what it may be, you need to look into where it’s hosted and consider whether it makes sense to prefer a vendor who is colocated in the same Cloud as your own applications and data.  Consider, for example, even simple things like getting backups of your data or bulk loading.  When you have a service in the same Cloud, like ClearDB, it becomes that much cheaper and easier.

Okay, so latency can be managed by being in the same Cloud.  In this case, Database.com is not in the same Cloud, so what’s next?  Before leaving the latency issue, if I were calling the shots at Salesforce, I’d think about building a very high bandwidth pipe with lower latency into the Amazon Cloud.  This has been done before and is an interesting strategy for maximizing an affinity with a particular vendor.  For example, I wrote some time ago about Joyent’s high speed connection to Facebook.

Getting back to how to deal with latency, why not write apps that don’t need all those round trips?  It helps to put together some kind of round-trip minimization in any event, even to make your own datacenter more efficient.  There are architectures that are specifically predicated on this sort of thing, and I’m a big believer in one whose ultimate incarnation I’ve taken to calling “Fat SaaS“.  A pure Fat SaaS application minimizes its dependency on the Cloud and moves as much processing as possible into the client.  Ideally, the Cloud winds up being just a data repository (that’s what Database.com and ClearDB are) along with some real-time messaging capabilities to facilitate cross-client communication.  The technology is available today to build SaaS applications using Fat SaaS.  There are multiple DaaS offerings to serve as the data stores, and many of them are capable of serious scalability while they’re at it.  There are certainly messaging services of various kinds available.  And lastly, there is technology available for the client, such as the Adobe AIR ecosystem.  It’s amazing what can be done with such a simple collection of components: rich UX, very fast response, and all the advantages of SaaS.  The fast response is courtesy of not being bound to the datastore for each and every transaction, since you have capabilities like SQLite.  Once you get used to the idea, it’s quite liberating, in fact.
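Here’s a minimal sketch of the Fat SaaS pattern in Python (the bulk-sync endpoint is hypothetical): the client works against a local SQLite store at local speed and pushes its changes to the Cloud data repository in batches, rather than paying a round trip per transaction.

    import json, sqlite3, urllib2

    # Local store: every read and most writes happen here, instantly.
    db = sqlite3.connect('local_cache.db')
    db.execute('CREATE TABLE IF NOT EXISTS orders '
               '(id INTEGER PRIMARY KEY, payload TEXT, synced INTEGER)')

    def save_order(order_id, payload):
        db.execute('INSERT OR REPLACE INTO orders VALUES (?, ?, 0)',
                   (order_id, json.dumps(payload)))
        db.commit()

    def sync():
        # Push all unsynced rows to the Cloud datastore in one batch:
        # one round trip instead of one per record.
        rows = db.execute('SELECT id, payload FROM orders '
                          'WHERE synced = 0').fetchall()
        if rows:
            body = json.dumps([{'id': i, 'payload': p} for i, p in rows])
            urllib2.urlopen('https://daas.example.com/bulk', body)
            db.execute('UPDATE orders SET synced = 1 WHERE synced = 0')
            db.commit()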

Surprisingly, many have seen this model, though they may not have thought much about it.  As McAdams points out, “Database.com will do better with developers of mobile apps, which contain the user interface and the app code in the same bundle.”  Yup, many mobile apps are Fat SaaS.  The architecture becomes a lot more interesting when you start thinking about how popular apps are on mobile versus apps in browsers and why.  This also points the way towards some of the types of apps that are particularly well suited to Fat SaaS: complex games, rich content creation applications, and a variety of things where the simple fill-in-the-form-and-do-the-workflow metaphor just isn’t right.  There are also advantages for cost and scaling, where Fat SaaS works great so long as your application’s usage patterns are largely hub and spoke.  What I mean by that is that there isn’t a lot of need to do cross-client processing in bulk.  Sure, a record may pass through several users’ clients on its journey, but you don’t have to do complex business logic that involves looking at thousands of those records across many clients very often.  When you’re doing hub-and-spoke, the client hardware is free in Fat SaaS.  It may be cheap in an elastic Cloud, but it’s hard to beat free!

The situation for aggregate processing looks worse for Fat SaaS, but it isn’t necessarily black and white.  Very often we find systems whose transaction processing behavior is a lot different than what we want for Business Intelligence.  Hence we wind up optimizing two different datastores–one the transaction processing store, the other the BI Data Mart.  The same approach can work here.

One last comment about McAdams’ post.  He concludes with a pitch for his company’s products:

The bigger criticism applies to all DaaS offerings, and it’s this: they’re solving the wrong problem. Making databases more accessible to programmers who already know how to use databases is nice and all, but how about making databases more accessible to business users? 

Why not let non-programmers build web apps without writing code? To do this, Database.com would have to include things that Salesforce.com discloses explicitly are not part of the platform: page layout tools, reports, dashboards, and so on. These can be built with Force.com, but using Force.com requires programming knowledge.

It seems to me the more innovative cloud DB players are the companies providing a cloud database with a complete, integrated app development platform that requires no coding, only point-and-click configuration. These platforms, like TrackVia (the author’s company) or Intuit’s QuickBase, are doing more to change how cloud apps get built than the better-known DaaS vendors are.

As I’ve said in previous posts, I’m not a fan of the “anyone can program with our special point and click application” ideas.  The demos look whizzy, but this is really not innovation, and the tools are pretty limited in how far they’ll take you.  We’ve had similar tools for a long time under earlier guises such as 4GL, or later guises such as the PC desktop databases like dBase or Microsoft Access.  We’ve seen it in languages with UI Builders like Visual Basic or Adobe Flex.  I have to say, we’ve seen a lot more point-and-click programming tools than we have DaaS offerings.  In addition, the idea that the problem a DaaS solves is one that is already solved, in other words that they’re trying to make understanding databases easier for programmers who already understand databases, is also pretty far off base (and surprising, given McAdams’ past in DBA software).

DaaS is all about two things that many developers are usually not so good at: Scaling and Automating Operations.  There is a lot of work required for either, and it’s work that the majority of developers, however much domain expertise they have for their app area, are typically short on.

Posted in cloud, software development | Leave a Comment »

Seth: Netflix Avoided the Tragically Knowable

Posted by Bob Warfield on January 14, 2011

Seth Godin is perhaps my all time favorite blogger, and I have recommended him frequently.  We mostly agree, but not this time.  His recent post is trying to convey the need to “just do it” as Nike says, without a lot of testing and analysis.  Godin is big on getting on with it and not giving yourself excuses to hesitate, and I don’t disagree with that sentiment, but that doesn’t mean you can’t test.

His recent post is about Netflix, a company whose founders I know well.  I remember sitting at the Starbucks in downtown Los Altos when Marc Randolph, who originated the idea for Netflix and was its first CEO, first told me about it.  I worked for Reed Hastings, who acquired my startup Integrity QA.  Reed met Marc because Marc was our VP of Marketing at Integrity.

One of my all-time favorite business anecdotes involves Marc Randolph telling an audience about a marketing program that went untested, turned into a disaster, and whose end result, as he put it, had been tragically knowable.

So let’s look at Seth Godin’s assertions about diving in and think about how to dive in prudently.  You should read his post, where he gives Netflix credit for being a culture of testing, but goes on to suggest that testing isn’t what made their success. 

Except they didn’t test the model of renting DVDs by mail for a monthly fee.

Seth, you’d be surprised at what can be tested and how.  My recollection is that there was some survey work done before the initial round of financing was raised.  You also have to consider that a startup itself, in its very early stages, is a test.  You’re financed with a fraction of the money it takes to really know, precisely in order to verify that traction is available.  Yes, there will be serious ramifications in that your company may not get another chance to test something else, but if you go into it thinking of it as a test that has to succeed, you will understand how your Board views what you are about.

And they didn’t test the model of having an innovative corporate culture.

Well yeah, they did.  The culture at Netflix is an evolution of culture from Reed and Patty McCord’s (she runs HR) earlier days at Pure Atria.  Reed has always been a person who believed in people and culture more than almost anything and they got a chance to try even more interesting things at Netflix.

And they didn’t test the idea of betting the company on a switch to online delivery.

Sorry, Seth, but here too, it got tested.  They didn’t go cold turkey or whole hog (that’s some farm animals!) by dumping the mail model.  They ramped it up, and if you don’t think they’d have backed off again if it didn’t work, you should get to know them better.

Testing is not a tactic, it’s a mindset and a culture all its own–one that’s key for startups and any companies trying to get into new markets.

What do you do absent enough information to make a sound decision?  There are three choices I’ve seen employed:

–  You go with your gut.  By the time there is enough information, it’s too late.  Put all the wood behind the arrow your gut says is the one.

–  You go ask somebody.  Find a consultant or luminary.  Be careful how you ask or you’re just trading your gut for theirs.  Even the highest paid consultants don’t necessarily have the right answers.

–  You conduct rapid experiments to collect the data you need to confirm your early hunches.

I rank those in order from least to most desirable from my perspective.  Not much use for the gut.  It’s better than no decision at all, anything is.  But it is hugely risky and largely a matter of luck, or at least an inability to have enough introspection on your intuition to know why and explain it to others.

Where consultants are concerned, if they can’t explain the “why” by giving you new and well-corroborated data on which to make a decision, I wouldn’t take their gut either.  Most of them don’t do what you do every day.  At best they talk to people who do and may or may not have understood what they heard or know how to apply it to your circumstances.

The fast experiment approach is the one that avoids those Tragically Knowable mistakes.  I often see people moan and roll their eyes.  They just want to get on with making the big bet and rolling the dice.  Do the hard work.  Know the knowable.  Then make your decision and sleep well at night.  For those who say it isn’t knowable, you’d be surprised.
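A rapid experiment doesn’t have to be elaborate, either.  Here’s a minimal sketch of reading one: a two-proportion z-test comparing conversion between a control cell and a test cell (the counts are made up for illustration):

    import math

    def z_score(conv_a, n_a, conv_b, n_b):
        # Two-proportion z-test: is B's conversion rate really better
        # than A's, or is the difference just noise?
        p_a, p_b = conv_a / float(n_a), conv_b / float(n_b)
        p = (conv_a + conv_b) / float(n_a + n_b)
        se = math.sqrt(p * (1 - p) * (1.0 / n_a + 1.0 / n_b))
        return (p_b - p_a) / se

    # Hypothetical campaign test: 5,000 mailers per cell.
    print z_score(400, 5000, 465, 5000)   # ~2.3, above the ~1.96
                                          # threshold for 95% confidence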

Related Articles

From someone who was there, more confirmation that testing works!

Posted in business, strategy | 2 Comments »

IBM’s Watson: The Future of Business Analytics?

Posted by Bob Warfield on January 13, 2011

IBM’s Watson software is the subject of this week’s SmoothSpan InfoBoom-sponsored posting.

In it, I discuss the ramifications of IBM’s remarkable Watson software for Business in the next 10 years.  It’s amazing to watch the videos of IBM’s Watson participating in Jeopardy rounds against the human champions.  Watson answers faster and more accurately, and it seemingly knows everything.  The milieu of information that has to be cataloged in order to have a shot at winning Jeopardy is vast and includes history, literature, politics, arts and entertainment, and science, not to mention algorithms capable of natural language reasoning to be able to decipher the question and arrive at a specific answer.

Imagine having a resource like Watson tirelessly available for your organization’s needs.  Would it be the ultimate business intelligence tool?  Ask any question, Jeopardy-style, and the machine could cross index every bit of data available in any datamart your corporation has for an answer.  How much more effective would meetings be if you could ask such questions and quickly get answers?

Check out my post on InfoBoom for more…

Posted in saas | Leave a Comment »

Gartner: The Cloud is Not a Contract

Posted by Bob Warfield on January 12, 2011

There is a bit of a joust on between Gartner, GigaOm, and likely others over the recent Gartner Magic Quadrant for Cloud Infrastructure.  The Internet loves a good fight!

Gartner launched their magic quadrant with some fanfare on December 22.  Immediately after the holidays, on January 4, GigaOm’s Derrick Harris threw down the gauntlet by bluntly saying, “Gartner just flat got it wrong.”  Can’t get much more black and white than that.  His reasoning is as follows:

Initially, it seems inconceivable that anybody could rank IaaS providers and not list Amazon Web Services among the leaders. Until, that is, one looks at Gartner’s ranking criteria, which is skewed against ever placing a pure cloud provider in that quadrant. Large web hosts and telcos certainly have a wider range of offerings and more enterprise-friendly service levels, but those aren’t necessarily what cloud computing is all about. Cloud IaaS is about letting users get what they need, when they need it — ideally, with a credit card. It doesn’t require requisitioning servers from the IT department, signing a contract for any predefined time period or paying for services beyond the computing resources.

I have to say, he is right.  It is obvious and absurd not to rank Amazon Web Services at least among the leaders.  If you’re going to take that step, it’s a bold one, and needs to be expressed up front with no ambiguity and leading with a willingness to have a big discussion about it.  Gartner didn’t do that either.  They just presented their typical blah, blah, blah report.  For weaknesses, which presumably got Amazon moved out of the ranks of leaders, they offer the following:

  • No managed services.
  • No colocation, dedicated nonvirtualized servers (often used for databases), or private non-Internet connectivity.
  • The weakest cloud compute SLA of any of the evaluated competing public cloud compute services.  They offer 99.95% uptimes instead of the 99.99% of many others and the penalties are capped.
  • Support and other items are unbundled.
  • Amazon’s offering is developer-centric, rather than enterprise-oriented, although it has significant traction in large enterprises. Its services are normally purchased online with a credit card; traditional corporate invoicing must be negotiated as a special request. Prospective customers who want to speak with a sales representative can fill out an online form to request contact; Amazon does have field sales and solutions engineering. Amazon will negotiate and sign contracts known as Enterprise Agreements, but customers often report that the negotiation process is frustrating.

My first reaction to reading those negatives is they make a pretty good list of criteria for differentiating an old-fashioned managed hosting data center from a real Cloud service.  Does Gartner understand what the Cloud really is, what it is about, and how to engage with it successfully? 

For her part, the lead analyst, Lydia Leong, responded the day after the GigaOm post.  Her response, predictably, is to disagree with Derrick’s quoted paragraph above, saying:

I dispute Derrick’s assertion of what cloud IaaS is about. I think the things he cites above are cool, and represent a critical shake-up in thinking about IT access, but it’s not ultimately what the whole cloud IaaS market is about.

Lemme get this straight: the Cloud IaaS market is about (since I will negate Derrick’s remarks that Lydia disagrees with):

  • Eliminating Pure Cloud vendors from serious consideration.  You must have non-Cloud offerings to play.
  • Eliminating the self-service aspect of letting users get what they need, when they need it–ideally, with a credit card.
  • Eliminating the possibility for self-service without a contract negotiation.

Newsflash for you folks at Gartner: the Cloud is Not a Contract.  It is a Service, but it is not a legion of Warm Bodies.  It’s not about being sucked up to by field sales and solutions engineering (“You’re a handsome and powerful man!”).

I can understand that Lydia’s clients mention the need for elaborate contracts with detailed provisions unique to their circumstances.  When that happens, and when it is so at odds with the landscape of a fundamentally new development that respecting it prevents you from naming legitimate leaders like Amazon as leaders, there are two ways you can proceed.  The easy thing is to cave to your clients, since they’re paying the bills, and concoct a scenario where the clients get what they think they want.  The hard thing is to show some leadership, earn your fees, and explain to the client, or at least to the vendors slighted, why your recommendation is right.

Let’s put on our analyst hats, leave Gartner’s misguided analysis, and look squarely at the issue of, “How should we be looking at the issue of Contracts and the Cloud?”

As I have already said, “The Cloud is not about contracts.”  What it is about is commoditization through scale and through sharing of resources, which leads to what we call elasticity.  That’s the first tier.  The second tier is that it is about the automation of operations through APIs, not feet in the datacenter cages.  All the rest is hype.  It is this unique combination of scale, sharing of resources, elasticity, and the automation of ops through APIs that makes the Cloud a new paradigm.  That’s how the Cloud delivers savings.  It’s not that hard to understand once you look at it in those terms.
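That second tier is worth dwelling on, because in a real Cloud, “feet in the cages” becomes a few lines of code against the provider’s API.  A minimal sketch using Amazon’s API via the boto library (the AMI ID is a placeholder):

    import boto

    # Provisioning is an API call, not a ticket filed with the hosting
    # provider's operations staff.
    ec2 = boto.connect_ec2()
    reservation = ec2.run_instances('ami-12345678',   # placeholder AMI
                                    min_count=4, max_count=4,
                                    instance_type='m1.small')

    # Elasticity in reverse is just as automated.
    for instance in reservation.instances:
        instance.terminate()

Try writing that against a managed hosting contract.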

Now what is the impact of contracts on all that?  First, a contract cannot make an ordinary datacenter into a Cloud no matter who owns it unless it addresses those issues.  Clouds are Clouds because they have those qualities and not because some contract or marketer has labeled them as such.  Second, arbitrary contracts have the power to turn Clouds into ordinary hosted data centers: 

A contract can destroy a Cloud’s essential “Cloudness”!

I wanted to put that in bold and set it apart because it is important.  When you, Mr. handsome and powerful IT leader, are negotiating hard with your Cloud vendor, you have the power to destroy what it was you thought you were buying.  Your biggest risk is not that they will say, “No”; it is that they might say, “Yes” if you wave a big enough check.  Those who have made the mistake of getting exactly what they wanted on a big Enterprise Software implementation, only to have it go very far wrong because what they wanted was not what the software really did, will know what I am talking about.

How do we avoid having a contract destroy “Cloudness?”  This is simple:

Never sign a contract with your Cloud provider that interferes with their ability to commoditize through scale, sharing, and automation of operations.

If they are smart, the Cloud provider will never let it get to that stage.  This is one reason Amazon won’t negotiate much in a contract.  Negotiating fees for services offered is fine.  That does not interfere with the critical “Cloudness” qualities (I am steadfastly refusing the term Cloudiness so as not to deprecate my central thesis!).  BTW, there are very close corollaries for SaaS, which is why SaaS vendors are also much more limited relative to On-prem vendors in what they can negotiate, and why they try to hew to the same line Amazon has.  This stuff didn’t get invented for Cloud or out of disdain for customers; there are real economic impacts.

Let’s try a simple example.  Your firm wants to secure Cloud resources from a provider who has some sort of maintenance outage provision for the infrastructure.  It’s a made-up example for Cloud (mostly), but it is on point for SaaS and easy to understand, so let me continue.  Your firm finds that maintenance window to be unacceptable because you have aligned your own maintenance windows with those of another vendor.  If you accept the Cloud vendor’s window, you will now have two windows to present to your constituents, and that is unacceptable.  So you want to negotiate a change in the contracts.  Sounds very innocent, doesn’t it?  I’ve been through this exact scenario at SaaS companies where customers wanted this to be done for the same legitimate and logical reasons.

But consider it from the Cloud provider’s point of view.  If they have a special maintenance window for you, they have to take a portion of their infrastructure and change how it works.  Unless they have other customers that want exactly the same terms, they will have to dedicate that infrastructure to your use.  Can you see the problem?  We have now eliminated the ability to commoditize that infrastructure through sharing and scale.  In addition, depending on how their automated operations function, they may or may not be applicable to your specialized resources.  It isn’t just a matter of changing a schedule for some ops people–the automated ops is either set up to deal with this kind of flexibility or it isn’t.

That was an example for a maintenance window, but any deviation you negotiate in a contract that impacts scale, sharing, or automated ops can have the same impact.  Here are more examples:

–  You want to change the disk quotas or cpus on your instances.

–  Your SLA requirements damage some baked-in aspect of the provider’s automated ops infrastructure.  This is easy to have happen.  You insist on network bandwidth that requires a different switching fabric, or whatever.

–  You want to limit how and when machines can be patched by the provider.

–  You want to put your machines on private subnets, something Gartner suggests should be possible and something many who think the idea of a Private Cloud is Marketing BS decry.

That list can be a mile long.  When you get done ruling out all the things you really can’t negotiate without un-Clouding your Cloud, you’re going to see that relatively simple contracts such as you can already negotiate with Amazon are all that’s left.  Congratulations!  Unlike Gartner, you now understand what a Cloud is and how to take advantage of it. 

And remember, the next time you’re negotiating a Cloud contract, be careful what you wish for–you just might get it.

Postscript

Lydia Leong, one of the Gartner analysts who did the Magic Quadrant discussed here, has responded.  I read her response as boiling down to agreement with a lot of what I’ve said, excusing the disagreements on the grounds that the Magic Quadrant concerned both Cloud and Web Hosting, and calling attention to Gartner’s plan to do a mid-year pure-Cloud Magic Quadrant.  The latter is a good idea, and one would think it had to be at least partially motivated by more negative feedback than just my own about this current Magic Quadrant.  Trying to compare Cloud vendors against Web Hosting vendors on the same quadrant is an apples-to-oranges exercise at best and misleading at worst.

In any event, I thank Lydia for her gracious response and look forward to seeing the Cloud Pure Play Magic Quadrant.  That’s where the meat will be!

Posted in business, cloud, data center, saas, service, strategy | 3 Comments »