SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for November, 2007

Making All Software Into Tools Reduces Risk

Posted by Bob Warfield on November 30, 2007

Here’s a radical idea:

All software should be a tool or language because it reduces your risk of failure.

I’ve written before that if an area is important enough, it eventually becomes a language.  We’ve watched it happen over and over again, sometimes in the most unlikely places.  For example, Adobe made its start by turning printing into a language called Postscript.  It gave them enormous advantages over the competition.  Here’s another prediction: we won’t see the real universal Social Graph until it is expressed as a language, not an application.  Why?  Because as I have said before, it is a living, breathing, dynamically changing entity that means a lot of different things to a lot of different people.

Why do I say tools and languages are lower risk?  Certainly this flies in the face of conventional wisdom.  Many VC’s, for example, want no part of a tools play.  They see it as extremely risky for a variety of reasons: dealing with IT and other technical types seems hard, as customers they are seen as too demanding, they want it all for free as Open Source, and so on.  That’s one particular market, and there are answers to those questions, but making software into a tool does not necessarily require that you sell to that market.  Sometimes, the tool nature is almost invisible.  Spreadsheets, for example, are really languages.  Creating a spreadsheet is an odd form of programming.  In fact, it’s a great one, because it’s probably the most widely used and understood language that’s ever been created.

Here’s another way to look at it.  I recently wrote an article called “Why Small Software Teams Grow Large”.  It was a response to a number of questions that had come up over my proposition that you need a small team to write really great software.  Here is a question that relates to this post:

Aren’t all the great small team examples tools?

Linux, Delphi, and Quattro Pro are described by Chris as “generic tools built without regard for any specific business logic.”  There are two ways to think about this.  Chris takes the path that says the example is irrelevant because most software isn’t that way.  I take the path of saying that the example is spot on because all software should become a language if done right.  The benefit?  Your small team is enormously more productive, as are your customers, if you can actually make the language something they can grasp and use.  There is a reason these small teams built languages (though I’m not sure I view Linux as a tool/language).  In fact, Quattro Pro had no less than 18 interpreters buried in the guts.  Very few of them were surfaced for end users; most were there to make the code simpler and the product more powerful.

You see what’s at work there?  Languages/Tools make it possible for smaller teams to be even more productive.  If you buy into the idea that small teams are the best, surely you want them to have the best possible tools as well.  What could be better than to make the software they’re working on a special-purpose tool?

I’m certainly not the first one to think of things this way.  There is an entire area based on the idea of Domain Specific Languages.  The idea is to create a language around the problem you’re trying to solve in order to make it easier to solve the problem.  Tools like Ruby on Rails or Lisp are extremely good at that task.
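
To make that concrete, here is a minimal sketch of an internal DSL, written in Python purely as an illustration; the pricing domain and every rule in it are hypothetical.  The rules read like statements about the business, and a tiny interpreter executes them:

```python
# A minimal internal DSL sketch: pricing rules expressed as declarative data
# plus a tiny interpreter.  All names and rules here are hypothetical.
from dataclasses import dataclass

@dataclass
class Order:
    amount: float        # order total in dollars
    customer_tier: str   # e.g. "gold", "silver", "standard"

# The "language": each rule is a (condition, action) pair over an Order.
# Domain experts can read and extend this list without touching the engine.
PRICING_RULES = [
    (lambda o: o.customer_tier == "gold", lambda o: o.amount * 0.90),
    (lambda o: o.amount > 100_000,        lambda o: o.amount * 0.95),
    (lambda o: True,                      lambda o: o.amount),  # default: list price
]

def price(order: Order) -> float:
    """Interpret the rules: the first rule whose condition matches wins."""
    for condition, action in PRICING_RULES:
        if condition(order):
            return action(order)
    return order.amount

print(price(Order(amount=120_000, customer_tier="standard")))  # 114000.0
print(price(Order(amount=50_000, customer_tier="gold")))       # 45000.0
```

The value isn’t the few lines of engine; it’s that the behavior now lives in rules the team, and eventually the customers, can read and change without rebuilding the application.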

Rather than dive into a highly technical tangent, let’s get back to the theme here.  Why would making your software into a tool reduce risk?  We’ve already hit on one aspect: it can make the developers much more productive.  Here is another: a tool makes it much easier for the software to adapt to the changing needs of customers.  I do a fair amount of what I call “Termite Inspection” for various VC’s around the Valley.  Some of these engagements boil down to, “This company has a great idea, but they seem to be stuck in a rut.  What should they do?”  Many times what has happened is a company started out with a great idea and some knowledge of the domain.  They built a piece of software that is a very literal embodiment of their view of the domain.  So long as the whole domain and all the customers fit their view, life is good.  However, for most non-trivial (i.e., interesting market size) domains, nobody knows the whole domain.  You learn as you go.  Customers throw curve balls.  If the only way to adapt to that is to either keep adding tons of features to the software, or do expensive custom work on each deal, that’s way too much friction to scale.  A domain specific language makes it possible to maneuver a bit at low cost and give customers what they want.

I finally put two and two together on just how important this can really be when reading Fred Wilson’s excellent article on Why Startups Fail.  Interestingly, he classifies all of his failures (and presumably others) into two categories:

1) It was a dumb idea and we realized it early on and killed the investment. I’ve only been involved in one investment in this category personally although I’ve lived through a bunch like this over the years in the partnerships I’ve been in.
2) It was a decent idea but directionally incorrect, it was hugely overfunded, the burn rate was taken to levels way beyond reason, and it became impossible to adapt the business in a financially viable manner.

Can you see where this is going?  Category 1 won’t be helped, and Fred correctly says you can identify this and cut your losses early on.  Building the software as a tool is a perfect antidote to #2.  The “decent idea but directionally incorrect” is strikingly similar to Marc Andreessen’s concept of achieving a product/market fit, which he says is the only thing that matters.  Fred Wilson gives us a great anecdote:

Dick Costolo, co-founder of FeedBurner, describes a startup as the process of going down lots of dark alleys only to find that they are dead ends. Dick describes the art of a successful deal as figuring out they are dead ends quickly and trying another and another until you find the one paved with gold.

Achieving the product/market fit is a matter of trial and error.  It is searching through an unknown territory filled with dead ends.  To survive, succeed, and prosper, your software needs maximum flexibility and adaptability.  That’s the definition of a Tool.  It can be adapted even as the requirements keep changing radically, and it can be adapted very cheaply and efficiently.

The principle should be clear by now, but there are troubling questions of practice. 

First, creating a tool sounds harder than writing an application.  My answer is that it’s harder mainly in terms of who can do it and who can’t.  Once the tool is created, any developer should be able to use it to create new features.  Creating it is the province of developers a notch or two above the lowest common denominator.  The good news is that you don’t need too many of them.  Given my preferred maximum team size of 10 developers, you should easily get by if 2 or 3 of the 10 are language creators.  Look for people who’ve done it before and are comfortable with the idea.  Look for people who are adept with dynamic languages and tools like Ruby on Rails.

Second, delivering a tool sounds like a nightmare for users.  What do you do if your users are not technical?  Just because you’ve delivered a tool doesn’t mean you have to leave sharp blades and flaming torches hanging out of the toolbox.  Nothing is easier to attach a user interface to than a language.  That’s one of its great beauties.  But here’s a real quantum leap: combine your application-as-tool with a UI-as-tool for true power and flexibility.  I’ve been working with Adobe’s Flex along those lines and the results are amazing.  A similar combination would be pairing something like PHP, which can be viewed as UI-as-tool, with your application.  This is how the various social Web 2.0 sites have gotten going so quickly and cheaply.  PHP straddles both sides of the spectrum and allowed these organisms to evolve very quickly.  In the end, you will wind up with a much better user experience because you’ll be able to quickly evolve your UI based on feedback.  Take it into usability testing early, and the combination of application and UI languages will make it straightforward to act on recommendations.
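
As a rough sketch of the UI-as-tool half of that combination (the field names and widget types below are invented, and a real system would be far richer), the same metadata that drives the application language can also drive the user interface:

```python
# Sketch: drive a UI from the same metadata the application language uses.
# Field names, labels, and widget types are invented for illustration.
FIELDS = [
    {"name": "customer_name", "label": "Customer",   "type": "text"},
    {"name": "amount",        "label": "Amount ($)", "type": "number"},
    {"name": "tier",          "label": "Tier",       "type": "choice",
     "options": ["standard", "silver", "gold"]},
]

def render_form(fields) -> str:
    """Emit a crude HTML form straight from the field metadata."""
    rows = []
    for f in fields:
        if f["type"] == "choice":
            opts = "".join(f"<option>{o}</option>" for o in f["options"])
            widget = f'<select name="{f["name"]}">{opts}</select>'
        else:
            widget = f'<input name="{f["name"]}" type="{f["type"]}">'
        rows.append(f'<label>{f["label"]}: {widget}</label>')
    return "<form>" + "".join(rows) + "</form>"

print(render_form(FIELDS))
```

When a customer throws a curve ball, you change the metadata and both the engine and the UI pick it up, which is exactly the kind of cheap adaptability argued for above.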

I hope you can get a sense from this post of how powerful a competitive weapon having a language versus an application can be.  It actually reduces your costs, increases your flexibility, and lets you respond with ultimate nimbleness to market demands until you’ve found the best possible product/market fit.

Posted in software development, strategy, venture | 10 Comments »

Interview With Lucid Era’s Ken Rudin, Part 3

Posted by Bob Warfield on November 30, 2007

This is part 3 of my interview with LucidEra CEO Ken Rudin.  If you missed Part 1 or Part 2, be sure to go back and check them out.  As always in these interviews, my remarks are parenthetical, any good ideas are those of Ken Rudin, and any foolishness is my responsibility alone.

How does SaaS affect the sales process?  Walk us through a typical SaaS sales cycle from initial lead generation to closing.

Ken:  Our sales cycle consists of a meeting, often via teleconference, and showing the demo.  90% want a 10-day free trial.  Our trial actually improves your sales organization and delivers value.  It’s on your own data, so it’s real value.  During the trial, every other day we send the customer a killer report, which contains a cool insight about your business.

Bob:  How much do you need to invest to make your trial run on the customer’s real data?

Ken:  Customers using Salesforce.com we can get running very quickly.  Most of our customers are there.  We have a prebuilt connector to SFDC.  We can also talk to Goldmine or Siebel CRM On Demand.  We’ll take their data in a spreadsheet for all tier 1 solutions.  On the financial side we have connectors to Oracle Financials and NetSuite.  In terms of customization, we can handle custom fields.  We need a little work if we have to remap some fields.

Bob:  (Being able to run a trial that is essentially a full install is a huge competitive advantage for Lucid Era.  It means there is almost no friction involved in getting the customer to see what the product means for them.  This is the sort of model most enterprise software companies only dream about.)

What platform is your software running on?  (e.g. Solaris+Oracle, etc.)

Ken:  Lucid Era is almost all Java except some core C++ for high performance.  It’s Linux and all open source.  We use Broadbase’s database which is good at queries and analysis.  It’s a column store.  We bought and open sourced it.  It’s now LucidDB.  We also run an open source OLAP engine called Mondrian.

Bob:  (This was interesting news.  It seems that LucidEra didn’t just take the tried and true MySQL route.  It isn’t surprising, because Business Intelligence places some radically different demands on the database.  But this marks the first company I’ve talked to that uses the new column store technology.  This is a technology that Michael Stonebraker and other database luminaries have been talking about a lot recently.  David DeWitt offers a good overview of the technology.  Suffice it to say that most existing databases access the data by rows; a column store can still give you rows, but it works on columns.  This yields higher performance for a lot of operations because you’re not really ready to look at rows until you’ve done a lot of column processing.  For example, finding all sales transactions > $100,000 is a matter of scanning just the sales transaction column.  This is an oversimplification, but the benchmarks are promising.  LucidEra has also gone after an OLAP engine, which is similar to the technology Arbor pioneered that was acquired by Hyperion.  It’s another alternative to row-based databases that is ideal for slicing and dicing data in a lot of different ways when you need maximum flexibility and don’t necessarily know what questions you will ask in advance.  It’s also interesting to note that they bought the Broadbase technology and open sourced it.  This creates a community around the code so LucidEra doesn’t have to bear the full burden of support and development by themselves.)
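
Bob:  (To spell out the row-versus-column point, here’s a toy sketch in Python.  It is only an illustration of the idea, not how LucidDB actually stores or scans data.)

```python
# Toy contrast between row-oriented and column-oriented filtering.
# This is only a sketch of the idea, not how LucidDB is implemented.

# Row store: each record carries every field.
ROWS = [
    {"id": 1, "region": "West", "amount": 250_000},
    {"id": 2, "region": "East", "amount": 40_000},
    {"id": 3, "region": "West", "amount": 120_000},
]

# Column store: one array per column; row i is the i-th entry of each array.
COLUMNS = {
    "id":     [1, 2, 3],
    "region": ["West", "East", "West"],
    "amount": [250_000, 40_000, 120_000],
}

def big_sales_row_store(rows, threshold=100_000):
    # Touches every field of every row just to test one column.
    return [r["id"] for r in rows if r["amount"] > threshold]

def big_sales_column_store(cols, threshold=100_000):
    # Scans only the 'amount' column, then pulls the matching ids.
    return [cols["id"][i]
            for i, amt in enumerate(cols["amount"]) if amt > threshold]

print(big_sales_row_store(ROWS))        # [1, 3]
print(big_sales_column_store(COLUMNS))  # [1, 3]
```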

How big was your Engineering team and how long did it take?

Ken:  Significant investment was made.  Bigger than many others.

Bob:  (Ken was very cagey about this one.  I have a hunch he doesn’t want to make it easy for competitors to triangulate on what it takes to build a knock off.  Given the technologies he has described, I have no trouble believing LucidEra has some technology barriers to entry.)

Is Lucid Era a multitenant app?  Why is multitenancy so important for SaaS companies?

Ken:  We are totally multitenant.  It’s important to me, not so much the customer.  It keeps costs down.  We share operational resources internally.  We have the option to put a customer on a box.

Bob:  (This is a pretty standard answer for a new SaaS company.  It is important to keep operational costs down.  A SaaS company has to achieve a 16:1 improvement in cost of operation versus on-premises to be competitive.  That’s a tough hurdle, and one that surprises most people when they see it quantified as 16:1.  Multitenancy helps tremendously, but there’s a lot more going on than that.  It’s also interesting that they offer the option of putting a customer on their own box, presumably at a higher cost.  All SaaS companies have multiple instances, though they don’t like to talk about it much.  This is not to say they give customers the option of their own instance as LucidEra does.  Rather, it’s an operational convenience for the SaaS vendor.  Multiple instances may be used to stage new versions of the software, to split up loads, or even to make sure there are instances available at more than one datacenter in case disaster strikes.  Depending on the economics, it seems to me that it would make sense to let customers pay more for an instance if they’re particularly paranoid about keeping their data separate.)
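
Bob:  (For readers who haven’t seen multitenancy spelled out, here’s a minimal sketch of the shared-schema flavor.  The table and column names are invented and real SaaS implementations vary widely; the point is simply that every row carries a tenant identifier and every query is scoped to it.)

```python
# Minimal sketch of shared-schema multitenancy: every row carries a tenant_id,
# and every query filters on it.  Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE opportunities (
        tenant_id TEXT NOT NULL,
        name      TEXT,
        amount    REAL
    )
""")
conn.executemany(
    "INSERT INTO opportunities VALUES (?, ?, ?)",
    [("acme", "Big Deal", 250_000),
     ("acme", "Small Deal", 8_000),
     ("globex", "Other Deal", 75_000)],
)

def pipeline_for(tenant_id: str):
    """All customers share one database, but every query is scoped to a tenant."""
    cur = conn.execute(
        "SELECT name, amount FROM opportunities WHERE tenant_id = ?",
        (tenant_id,),
    )
    return cur.fetchall()

print(pipeline_for("acme"))    # [('Big Deal', 250000.0), ('Small Deal', 8000.0)]
print(pipeline_for("globex"))  # [('Other Deal', 75000.0)]
```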

Conclusion

Lucid Era is a fascinating business.  They’re in an interesting place now that the old school BI vendors have largely been bought.  Even more interesting is their approach of delivering a solution rather than a tool, and getting the customer up and running as the first stage of the “sales” cycle.  Add to that some radical new technology in the form of the column store LucidDB, and we’re seeing a total reinvention of what BI means.  We’ll see more of this sort of thing where conventional vendors are relegated to the Tools category and the SaaS vendor is providing finished solutions that can deliver value extremely quickly.

Let’s keep an eye on these guys and see where it leads!

Posted in saas | Leave a Comment »

MySpace for Sharing and Expression, Facebook for Networking?

Posted by Bob Warfield on November 29, 2007

It’s been interesting to watch the foot race between MySpace and Facebook, and even more interesting to watch people’s reactions to the two.  Somehow, MySpace got pigeonholed by the Digerati as a great first try ultimately doomed by Facebook’s better mousetrap.  These people complain that MySpace is too ugly and unstructured to be useful.

Andrew Chen points out “that MySpace is still far ahead on stats.”

What’s up with this?  Andrew says, “Silicon Valley people aren’t MySpace users,” that they don’t understand the use cases, and he brings up the notion of people who:

 “insist on every product being “Googley.” What I mean by that is:

  • Simple
  • Functional
  • Easy”

He winds up drawing an analogy between MySpace and scrapbooking, which is good, but it isn’t the whole story.  A callous interpretation of Andrew’s remarks might even lead one to believe he is saying that MySpace is for groups that are “economically challenged.”  The underclass, according to an article Andrew links to.  Trailer Trash and Red Necks of the Jeff Foxworthy persuasion if we want to be more blunt, and believe me, the article Andrew links to gets extremely blunt.

I think there is a lot more at work here.  Part of it has to do with what people do on these networks, and part of it has to do with my concept of Web 2.0 Learning Styles.  My thesis is that MySpace is about giving the common man a way to express himself and share that expression.  In that sense, Andrew’s scrapbooking is a good analogy.  Really, any craft or art form is a good analogy, including blogging, because a big part of it is a desire to express oneself.  Expression is very much an ongoing process.

Facebook seems more goal-oriented.  That goal seems to be networking:  How many friends can I notch up?  How influential are they?  Can I get a hot date (for those who subscribe to the idea that Facebook is largely about college dating)?  The opportunities for expression on Facebook seem dramatically more limited.  Perhaps it’s a natural outgrowth of the goals.  If Facebook is all about courtship rituals of one kind or another (yes, business networking is definitely a courtship), then expression can be a dangerous thing.  If we express too freely, we may turn off the object of our pursuits.  So instead of putting the expression out there as a reflection of our personal taste and talents, which is risky, the expression is channeled into cutesy, low-risk, multiple choice options.  “You have 1 gift to give.”  My gift today is Thanksgiving leftovers.  This allows Facebook users to be cute, coy, or flirtatious, but without putting much of their real persona on the line.  Personally, I’d rather meet someone’s fuller expression of themselves before I feel like they are a “friend”.

There is another aspect that I think about frequently, and this is my concept of Web 2.0 Learning Styles.  The Myers Briggs test is based on certain personality traits.  It inspired me to come up with a similar approach to web services after I saw the love/hate reactions to things like Twitter and Scoble’s videos versus his blogging.  I could see that there was a place for more than one style, but that people prefer styles, sometimes intensely.  Savvy marketers and designers need to cater to this.  They need to figure out how to help people to self-select the communication style they prefer and then serve up content using that style that accomplishes the goal.  You can clearly see the MySpace/Facebook differences here too.  Here was my original casting of it:

Web 2.0 Personalities

In this case, I show MySpace as a more Free Form and less Structured Facebook.  I almost think I should have categorized Facebook over on the Text side of the line, or perhaps defined that dimension as “Simple Media” and “Multi Media”.  There is definitely a difference in richness of expression, with Facebook being “Googlier” as Andrew Chen says.

Getting back to Andrew’s original remarks, why then does the Valley not like MySpace?  Because engineers and business people are often more into structure, participation, and simplicity.  It’s the visual thinkers and the intuitive crowd that will prefer MySpace (or similar avenues of expression).

Related Articles

faberNovel consulting reaches a very similar conclusion about Facebook and MySpace.  On their quadrant, they position MySpace as being about Public Exposition and Fantasized Identity.  Facebook is about Real Identity and Qualitative Contacts.  I prefer to stick to my view that MySpacers are “expressing themselves” as much as they’re creating “fantasized identities”, but there is truth in both views.

Posted in Marketing, strategy, user interface, Web 2.0 | 5 Comments »

A Kindle User After My Own Heart

Posted by Bob Warfield on November 29, 2007

Go read Josh Taylor’s post on how he took a Kindle to the Caribbean and why he has fallen into “deep like” for the device after that.  Being able to travel without a suitcase full of books was the first lightbulb that lit for me when I heard about Kindle.  The truth is, I’d seen an eBook a long, long time before Kindle.  I can’t even remember whose it was, but we’re talking before Blackberry even existed.  It was a lame device back then, but I would still have bought one but for lack of decent book selection.  Despite O’Reilly not being there yet, I think Amazon has the means to fix the selection problem, and the device is certainly light years ahead of most of what we’ve seen even if many are still unconvinced.

Tidbits from Taylor’s post:

  • Taylor loved the Kindle’s screen for reading text, but says graphics, even black and white pictures, are almost hopeless.  I still haven’t personally seen a Kindle, but my friend Song Huang was recently telling me how impressive the eInk display is.  He saw one at a conference somewhere and was convinced they had just stuck a piece of paper behind glass as a mockup.  When the thing updated and showed it wasn’t paper, he was blown away.  I’d love to hear whether line art looks good on a Kindle.  That’s the sort of thing I’d want if reading a technical book, although it’s a shame actual pictures are so poor–it’ll make it hard to see screen shots.
  • As to the UI, Taylor loves the navigation but laments you can’t put Kindle away without accidentally flipping a page.  Has no one ever been reading their paperback, nodding off, dropped the book and lost their place entirely?  Must be my age if I’m the only one.  He also had an incident where his wife went to the beach without a proper charge and the Kindle died.  Doh!  Hate when that happens!

I also liked learning that Amazon will let you grab the first chapter of any book free to see if you like it before purchasing.  As I wrote in my original Kindle post, there are lots of ways the buying experience can be enhanced by Kindle.  One of my minor book purchasing peccadilloes is an inability to keep track of all the authors and which of their novels I already have.  Every now and then I wind up with two copies of something.  Nothing worse than diving into what you think is a new offering from a favorite author only to discover you’ve already read the book!  I want to be able to get into a book club for my faves whereby I get notified as soon as something new is available and I can get the book with one click.  BTW, Amazon is famous for patenting one-click ordering (I believe they recently lost that patent, too).  I would expect them to try to patent a lot of the new stuff behind Kindle.  Patents are not my favorite thing, but they are a fact of life.

Scoble ran an interview on the street with a woman who wanted to see his Kindle while he was giving a talk at Stanford.  I came away from the interview with a slightly different reaction than I think Scoble and others may have.  There is a view that Kindle’s foibles are disastrous, but I’m not at all convinced.  Scoble points out that this woman hit many of his complaints almost immediately:

Notice that she accidentally hits the “next” button. That she tries to use it as a touch screen. That she is bugged by the refresh rate. But, she, like me, is interested enough to want to buy one (she’s the first that I’ve shown it to that has that reaction). Imagine if Amazon had designed it better? Imagine how many more people would want it.

The thing is, if you watch the video, none of that bothered her.  She made an assumption that is common outside Silicon Valley: if the thing didn’t work as she expected it to, it was not a problem, it just meant she needed to learn.  Sometimes I think we get too focused on a particular view of how things have to work in the Valley, and we’re way over the top critical when they don’t.  Many successful products are riddled with inconsistencies, but work so well compared to the alternatives that we ignore them.  I’m typing this in WordPress and let me tell you, it has at least as many UI foibles as Kindle, but it doesn’t matter, and it’s wildly successful.

I do agree with Scoble that if Kindle had been as perfect as the iPhone or iPod from the get-go, if it had been just as sexy and just as “right”, Kindle would be a much bigger success.  However, let’s reflect on two thoughts.  First, Josh Taylor remarks that the Kindle must be popular because you can’t get one.  Note that this may not be the whole story.  Amazon may be limiting supply for a variety of reasons: they want to understand usage patterns better to see if they can make money, they want to respond to user criticisms without having a ton of inventory, or they want to make sure it doesn’t damage their lucrative Christmas season.  Second, the iPhone and iPod were not first generation devices in their categories.  I suppose we can argue that Kindle isn’t either, but it seems to me the precursors of the Apple products were much closer to success than Kindle’s precursors are.

All this has, um, kindled my desire to have a Kindle.  Still not sure I’ll put it on the Christmas list (you can’t seem to get one anyway), but my birthday is early in the year.  I just hope to see the rumored Apple Tablet device before I have to pull the trigger.  Wouldn’t it be awesome if Amazon took the Open Road and had an OEM offering for other eBook builders?  Wouldn’t it be even more awesome if the Apple Tablet picked up the backend of the Kindle service and accessed it from its own UI?  Whoa!  Stranger things have happened, but not often…

Posted in amazon, platforms, user interface, Web 2.0, wireless | 3 Comments »

Giant Global Graph: Do You Need A Clue?

Posted by Bob Warfield on November 29, 2007

Sir Tim, who more or less invented the World Wide Web, recently did a blog post entitled the Giant Global Graph.  It’s a long, rambling post that touches on multiple themes.  It is a logical, reductionist discussion that only geeks are equipped to fully understand and appreciate.  That’s not because it is a superior way to organize an article, but only because our minds are pitifully linear compared to more intuitive thinkers.  Opinions vary on how well these themes go together, but the primary insight behind the whole article is that the notion of additional structure for the web, beyond mere hyperlinks, in the form of a graph is valuable and far reaching.  Let me say it again, slightly differently:  Berners-Lee is on about the idea of additional structure and content for the web beyond hyperlinks.  Hyperlinks are navigational.  They convey some meaning beyond navigation, but not much.  Perhaps the most famous example is Google’s PageRank, which makes the assumption that lots of links to a page indicate the page may be of more value to a searcher than a page with few links into it.  There may be other things one can intuit by examining hyperlinks, but it’s hard.  Making it easy, and especially making it easy for computers, is what Tim Berners-Lee wants to accomplish with his Semantic Web notions.  As long as we’re layering weird but related notions into this mashup, I’d like to add one I haven’t seen the other commentators write about: the use of what are essentially web hyperlinks (a bit more, but close) to allow computers to interact directly with one another in a practice that has been called RESTful architecture.

It’s quite amazing, really, what’s possible with a clean, simple, and well designed architecture like the web.  The danger is that if we extend it as Sir Tim proposes, we need to do so just as elegantly.  There’s a lot out there now, and a lot of moving parts interacting with what’s out there.  Adding sand to the machinery is not helpful.  So what exactly did Sir Tim’s latest missive bring to the table?

There’s a nice historical/layered architectural view of what the Internet and World Wide Web are and how they differ.  Put simply, the Internet is the generic plumbing that lets computers talk to each other internationally through standard protocols.  The World Wide Web is a notion of documents that users interact with over the Internet.  Both are what mathematicians call graphs, which are nothing more than nodes with connections between them.  The Internet is a graph of computers.  The World Wide Web is a graph of documents.  Again, for “graph” substitute “network of nodes with connections between them.”  Pretty easy so far, no?  Okay, we’re a third of the way through the post, and we’re going to kick things up a notch.

The next concept TBL brings to the table is, “It’s not the documents, it is the things they are about which are important”.  He goes on to say this is obvious, but I don’t think it is as obvious as he thinks once you consider the real ramifications.  TBL wants to somehow factor out the core ideas in these documents and use those ideas to create another kind of graph, which he calls the Semantic Web or Giant Global Graph.  These core ideas become the nodes of the graph, and they link together documents and related ideas in interesting ways.
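
To make “a graph of the things the documents are about” a little less abstract, here is a toy sketch that uses plain Python tuples to stand in for Semantic Web style triples.  The URIs and facts are made up:

```python
# Toy Semantic Web / GGG sketch: facts as (subject, predicate, object) triples.
# The URIs and facts are made up for illustration.
TRIPLES = [
    ("http://example.org/people/alice",   "worksFor", "http://example.org/companies/acme"),
    ("http://example.org/people/alice",   "hasBlog",  "http://alice.example.org/blog"),
    ("http://example.org/people/alice",   "knows",    "http://example.org/people/bob"),
    ("http://example.org/companies/acme", "homepage", "http://acme.example.org"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the parts of the pattern that are given."""
    return [t for t in TRIPLES
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# "Pick a person and see everything the graph knows about them":
for s, p, o in query(subject="http://example.org/people/alice"):
    print(p, "->", o)
```

Trivial as it is, the subject-predicate-object pattern is what lets a machine answer questions like the address book scenario below without having to parse any prose.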

Why?  Because computers are actually pretty lousy at reading plain English (or any other language) and figuring out what those underlying ideas are.  For example, TBL mentions things like:

- Biologists want proteins, drugs, or genes.  BTW, any profession or interest area will have a big list of jargon that is peculiar to that interest area and that should be factored and graphed for any web document.

– Business People want customers, products, and sales information. 

– People in general want Social Relationship information, and that is what people refer to as the Social Graph.

You see where he is coming from?  I wasn’t trying to be insulting with my post title.  When I ask, “Do you need a clue?”, I’m referring to this new graph structure as providing clues to computers about what the heck is actually on a web page, so that you can use the web pages in novel ways that are hard today but very useful if you can get a fully annotated Semantic Web, er, GGG.  This is not the easiest thing in the world to do, as you can imagine.  There is a heck of a lot of work involved in doing all that annotation, and a lot of it may have to be done by hand.

However, if we are very very clever, some useful pieces may become automatic.  Take the Social Graph.  If we create our own Social Graph about our relationships with people, it may contain enough information that the web can meaningfully change how it interacts with us as regards those people.  Today, we look at it as happening in the context of a Social Network, but it should not be limited.  Why can’t I go into my address book, pick a person, and reach out with high certainty into the entire web to see as much as possible about that person?  Where are their blogs and home pages?  Which Social Networks do they belong to?  What articles quote them?  What company do they work for?  If I visit the company’s web site, wouldn’t it be cool to be able to tell who I know that works there?

A couple of things should be coming clearer now.  First, I hope you can see why many of us (and now TBL), recognize the term “Social Graph” as being separate from “Social Network.”  The Social Graph can be so much more than a particular web site focused on Social Networking.  It can literally impact every aspect of your web experience.  And it is a collection of data that is at once both very open and very private and personal.  There are pieces we want everyone to see, and pieces we want to keep entirely to ourselves.  It is a very tangled web we are weaving.  It grows and morphs constantly.  We would like to start building it once and never start over.  This is why I’ve said the Real Social Graph Hasn’t Shown Itself Yet.

Here is another way to think about the GGG or Semantic Web.  The web of today is manual and literal.  You create a concrete link between documents.  You traverse the links.  They are largely fixed and relatively inviolate.  This is a good thing.  You don’t want to lose track of a thing.  But that is only one form of navigation.  Sometimes you don’t know where the first bread crumb on the trail is.  For that, you need search tools of various kinds.  The Semantic Web can inform the search process much more fully than keywords and PageRank.  Beyond search, we would like a living web that restructures itself as it learns.  A change in one place can ripple through this graph structure to have far reaching and beneficial effects.  Suddenly the map of the web can be personalized around your interests, knowledge levels, relationships, and needs.  That’s pretty cool!

TBL winds up with a cautionary note about control.  Each of these layers has involved some loss of control.  First we gave up the idea of private networks to get to the Internet.  Anyone can be on it, including your worst enemies, competitors, criminals, and other evildoers relative to you.  Second, the World Wide Web involved a loss of document control.  Everything went to HTML instead of native document formats.  HTML involves a lot of loss of control.  It has gotten better, but real page layout and typography aficionados cringe.  Now we’re talking about sharing that graph data.  Sharing it safely requires two components, a lock and a key.  You hold the key.  Your Social Graph is your set of friends and relationships.  The lock is the set of pages that the key unlocks.  There is cooperation on both sides.  And, as TBL points out, this loss of control doesn’t have to mean that someone can access data they have no right to access.  It is important that you maintain control.  Even though the Internet is not a private network, you can still run HTTPS to encrypt the packets, or even some other protocol.  We routinely trust sensitive information to the Internet these days because there are cultural patterns for how it’s done and real technology to help protect us.  These things have yet to evolve for the next graph layer that TBL wants us to construct, but they are necessary infrastructure for this all to work.

What has been the reaction of others to this? 

Umair, as usual, gets it, and points out an important pitfall to avoid: the social graph is not web 3.0 (and the converse is also important:  web 3.0 is not the social graph).  I hope from my post above it is easier to see how the Social Graph is a subset of the GGG, and how it is also different than Social Networks.  GGG is a lot bigger than just Social.

Stowe Boyd, for example, had been very anti-Social Graph, but now says he “gives”.  Boyd was right to insist on more clarity before giving, but he is still suspicious that TBL is somehow trying to hitch a free ride on Social Graphs for his Semantic Web.  I think that suspicion is needless.  I hope you can see from my notes above what it’s all about, this Semantic Web.  The reason Stowe sees so much more fire around Social Networking is that this is an area where users have found sufficient value to create Social Graphs for themselves.  So far, we haven’t seen much action elsewhere, but we should.

My guess is that there are other areas of sufficient interest to generate spontaneous volunteer work of the kind we see around Social Networking.  There just needs to be the right enablement.  Perhaps it will be some form or flavor of the bookmarking trend that surrounds sites like Digg.  Perhaps it will be around online retailing.  Merchants have uniform means of referring to products in the form of standardized EDI, Bar Code, SKU, and other information.  Perhaps if shopping search got dramatically better when such information was available in the GGG for a page, it would drive many to add the information.

Anne Zelenka on GigaOM takes a dim view of all this.  She feels that computers are poorly suited to understanding relationships, and that trying to shoehorn the Social Graph into the Semantic Web sells it short.  I can’t seem to find a good concrete objection or counter example in her article though, other than a vague sense of unease about it all.  She cites another of her articles that talks about the downsides of a distributed and open social graph.  My problem is I can’t see where TBL is advocating this.  In fact, I’m not sure I see where anyone is.  We all want control over our Social Graph.  Remember my analogy of the lock and key.  I’m the only one with my key.  Also go back to the example of how the Internet involved a loss of control but that various standards came into play so that privacy could still be preserved.  That has to be done for the GGG as well.  There will be lots of kinds of data there that we may not want out roaming freely.  Facebook’s tracking of what you purchase with Beacon is another great example of a GGG like Graph Structure (call it the Global Purchases Graph or GPG) that some folks are upset at losing control over.  In other words, with some maturity in the standards, the tradeoffs can be extremely palatable and do not amount to putting all that data right out in the open.  That we don’t have this today is another reason why I say nobody has yet seen the Real Social Graph or the Real GGG either for that matter.

One thing that still seems missing from many of the other commentaries I’ve read is that this GGG notion puts a lot of the value that is currently being delivered by proprietary platforms like Facebook back into the Web.  That’s a better and more open model.  It’s in the best interests of everyone to pursue that model.  In fact, GGG tries to go beyond just being a Social Graph precisely so that it becomes a general-purpose means of capturing almost any kind of structural annotation and linkages around the Web’s document model.  That sort of thing can help future-proof a big idea so it runs further before we find ourselves having to add a fourth and fifth layer on top of the first two that are already there.

In conclusion, I think TBL did a good job tying his vision back to some immediate realities, thereby making it more concrete and touchable.  It still has quite a ways to go, but it’s great to still be in early days and not have to worry about whether Facebook and Google have a permanent and irrevocable stranglehold on all innovation.  My own little contribution is that now that we’ve tied Social Graphs into the Semantic Web, I’d like to see REST somehow get tied in.  After all, why shouldn’t we annotate REST APIs on web pages too so that they can easily be found and connected to?  It’s like putting electrical sockets in a room for future use.

Related Articles:

I guess it’s not just me seeing a connection between the GGG and REST, see Discipline and Punishment for more.

Posted in saas | 5 Comments »

Verizon Drops the Open Bomb: Maybe The Old School Is Starting To Get It

Posted by Bob Warfield on November 27, 2007

My hat is off to whoever at Verizon decided to open up their network for “any app, any device.”  To date, the carriers have had a stranglehold on which handsets they offer and which apps are on the handsets.  Tampering with the standard offering from your carrier has been difficult and extremely risky.  The Apple iPhone triggered a storm of controversy when Apple started “bricking” phones that users had tampered with to unlock.  Verizon has done a complete about-face relative to the rest of the industry in announcing their new open approach.  Opinions vary on what it means, but it certainly is dramatic.  There is a short list of standards the devices must adhere to, some a little odd (CDMA versus GSM), and phones must be certified in a new lab facility Verizon is building.

Ben Worthen at the WSJ says it means Verizon’s mobile devices are on their way to becoming as useful as PCs are today.  I don’t know about them being as useful, but an open device with a vibrant ecosystem of apps to choose from can’t help but make mobile devices a whale of a lot more useful.  Various friends in the wireless industry have lamented since they started that the biggest challenge is distribution.  You can build the software, but you can’t get it onto the device without elaborate negotiations with Luddite carriers.  These are huge Enterprise-class deals with similarly huge and lengthy sales cycles.  One friend had a huge deal fall through with a major carrier simply because they did a reorg just before the deal was about to close.  I’ve seen that happen selling Enterprise software, and it is one of the most frustrating things imaginable because you get to start the sales cycle all over again.  Quite simply, these carriers have been the bottleneck to innovation, and given how monolithic carriers in the US are, we have lagged the rest of the world considerably in mobile infrastructure and innovation.

There is considerable speculation over whether this was all triggered by Google’s Android announcements.  It hardly matters whether it was or it wasn’t, but it’s the sort of thing the blogosphere likes to gossip about.  After watching the round and round between Google, its OpenSocial partners, and Facebook, I have to say that Verizon looks brilliant from a PR perspective compared to Facebook.  ZDNet’s Larry Dignan suspects Verizon will have a fair portfolio of 3rd-party apps up in 2008, well before Google even gets their initiative off the ground.  That’s certainly what Verizon has to be aiming for, and given the pent-up demand for distribution among wireless startups, there is likely no shortage of such apps ready to go today.

The point is that an open ecosystem can drive growth and opportunity far beyond what a walled garden can in today’s edge economics.  The world has been exploited by too many walled gardens, so it’s leery of granting too much power to perceived walled gardens, particularly when credible open alternatives are available.  This trend has moved in fits and starts.  The recent web world would like to think they invented the idea, but the truth is that it’s been going on a long time: those later to the party drop the Open Bomb to trump earlier arrivals.  It sure worked great for Sun against Apollo in the 80’s and 90’s, for example.  Unix crunched Apollo’s Domain, but has itself been overshadowed by Linux in recent years.

The question now is, when will the next carrier sign up?  There must be considerable gnashing and moaning about the risks.  Most people don’t understand the warped economics of most of these telcos.  In many cases, they offer services to get attention, but they hope against hope that the service won’t be popular, because it costs too much to support.  Some of my in-the-know friends indicate video on phones is more or less in this category for many vendors.  There is some thought that these offerings were intentionally crippled to keep the cost down.  At the same time, text messaging is something of a cash cow, while other areas have been commoditized.  None of this is consistent across countries, or necessarily even across carriers, and it’s all a moving target.  Opening up to run any device and any app is a gutsy play.  What if the devices and apps that turn out to be popular tank profitability?  In fact, there are a lot of unknowns and risks.  It seems likely Verizon is going to pass a lot of costs it used to eat back to customers, for example.  This is all part of the unbundling, but if the economics don’t work out well, it will severely limit the value of the open capability.  Hopefully we can count on competitive dynamics to make such pricing issues short lived.  Om Malik predicts it will open the market up to cheap Chinese handsets, for example.  Details of the exact business model are hazy.  There may be surcharges for apps, or bandwidth may be limited or too expensive.

And what other ripples does this create in the pond?  What does it do for Apple, whose iPhone is pretty darned closed, thank you very much?  What of the Google Open Handset (aka Android) initiative?  Will Verizon join up?  Some say it was already rumored before this latest move.  Beyond all that, what if everyone’s playbooks are finally updated with the conclusion that it’s good to drop the Open Bomb early in this much more connected world of the new millennium?  Wouldn’t that be a gas?  Much more disruption in many industries, followed by new bursts of innovation and lots of entrepreneurial opportunity.  Now that’s what I’m talking about!

Related Articles

Erik Schonfeld discusses Verizon’s “Two Tiered” strategy, which he characterizes as one tier for Verizon’s valued installed base (you can’t sell them your apps) and another for the new customers who pay for an “open” phone.  I’m not surprised.  This is all part and parcel of the business model issues that surround these devices, as I discuss above.  Verizon can’t very well offer the capability to everyone who has an existing contract, because they might select an app that causes Verizon to start losing money under that contract.  I still think Verizon has made a positive move, and more should follow.

So many see Verizon’s move as being a result of some other company’s strategic genius.  Scott Karp says it’s all a result of a brilliant Apple conspiracy plan.  Every big company I’ve ever seen is the world’s worst conspirator, but maybe.  Just because I’m paranoid does not mean the whole world isn’t out to get me.

Woodrow gets it right by saying:

Whether you question VZ’s motives or not is largely irrelevant. This IS a revolutionary move. And, while I think Verizon is reacting to market conditions; they are doing so far more aggressively and proactively than most have come to expect of BIG TELCO.

Precisely my point.

Daniel Berninger over on GigaOm muses about the impacts on Verizon’s business model of “any app, any device.”  For example, he says that SMS messaging is hugely profitable, but doesn’t that go away in an open world?  Yes it does, and Verizon or others will have to rethink their pricing.  This is precisely why they can’t offer it to existing contracts as I explain above.

Posted in business, strategy, wireless | 7 Comments »

Why Small Software Teams Grow Large And Other Software Development Conundrums

Posted by Bob Warfield on November 27, 2007

Ever since I can remember, there has been a background noise in the software development community around a relatively small set of topics:

– Are small teams better than big teams?

– Does language matter?

– What’s the best language?

– How do we achieve quality?

– Is software art and talent driven, or is it engineering and process driven?

The list is probably longer, but you get the idea.  Lately I’ve been involved in some back and forth over team sizes.  For the record, I think small talented teams crush big teams with tons of process in terms of productivity and final results.  They build better software in less time.  It’s been a good discussion, but commenters on various threads bring up issues that are important to address.  Let’s see if we can’t tackle a few here.

Chris Winters’ post is a good place to start, as Chris delivers a lot of meat in a very few lines.

Don’t these small team pundits completely ignore the timeframe?

As Carmack said, “a team of 3 focused and creative people can accomplish almost anything” — given sufficient time. But time is the knob that many (most?) projects have little control over. Everybody has competitors, and everything needs to get out yesterday.

This is truly Mythical Man Month territory.  You can deliver a project sooner with more than 3 developers, but I have serious doubts about delivering a core module with more than 10.  The question is whether what you are building is amenable to being broken down into what are essentially very separate projects.  Sometimes this is possible, more often it isn’t.  To make it possible will require some architectural investment to keep the modules separate, and that investment is serialized into the schedule.

A much more effective knob to twiddle when it comes to time to market is scope.  Cut scope.  Cut it early and cut it again if the project is drifting off schedule.  It is amazing how much scope can really be done away with and still have a release that completely satisfies commercial needs and makes customers happy.  It is a mistake to go for the many-year feature abundanzas that result in the kind of products our friends in Redmond like to ship once every 5 years.

Does “small team” include QA?  People to gather requirements?  People to write documentation?

Great question!  First, on the issue of QA, I have never found a point of diminishing returns, and I have had the luxury of spending a LOT of money on QA during my Borland days.  Simply put, it is extremely difficult to find all the bugs.  No matter how many we could find in a week with a huge team of testers, a small core team of developers was able to keep up.  We started with a ratio of 1 QA per developer and added more QA on top of that as we moved towards shipment.  Those days are probably gone, but boy did it ever help. 

We got a lot of mileage out of mobilizing the company and customers to assist.  We paid bug bounties to these folks based on the severity of the bug, and I know of at least one case where a very talented SE paid for a kitchen remodel with the funds.  OTOH, I had friends working on Microsoft Windows XP where much larger developer teams were involved, and even more humongous QA teams.  According to one friend, they completely deleted the entire bug database and started over again multiple times during the project because the developers simply couldn’t keep up.

As a final thought on this, the number of resources you can bring to bear outside the core developers is a function of the communication load they put on the developers.  Communicating via a bug tracking system is pretty efficient, hence lots of QA didn’t bring down the team.  Having a doc or QA person want to camp out so the developer can educate them is not going to scale.

Isn’t there a huge distinction between creating something and maintaining/improving it?

No, not necessarily.  If your view of improving something is to canvass the user base for every possible feature, build a giant laundry list, and build all of that, you’re going to find it’s very daunting.  But you’re also going to find you quickly defocus the conceptual integrity of your product and wind up with bloatware that does not make your customers happy.  The art of great software is in understanding what’s important to the user experience versus what is merely urgent.  A small team can keep a product fresh and current for years if they’re good at this because they don’t waste time on a load of features that clog things up.

Exceptions?  Beware the things that require tons of code yet qualify as only 1 more feature on the checklist.  Platforms are the arch example of that.  Supporting many printers and display adapters was the old school.  Supporting many databases or app servers is more recent.  It’s not worth it and it is a huge sink on resources.

Aren’t all the great small team examples tools?

Linux, Delphi, and Quattro Pro are described by Chris as “generic tools built without regard for any specific business logic.”  There are two ways to think about this.  Chris takes the path that says the example is irrelevant because most software isn’t that way.  I take the path of saying that the example is spot on because all software should become a language if done right.  The benefit?  Your small team is enormously more productive, as are your customers, if you can actually make the language something they can grasp and use.  There is a reason these small teams built languages (though I’m not sure I view Linux as a tool/language).  In fact, Quattro Pro had no less than 18 interpreters buried in the guts.  Very few of them were surfaced for end users; most were there to make the code simpler and the product more powerful.

Do we know what we’re building?

Chris hints on another big problem when he says:

And doing all that for non-trivial businesses, with a small team, no matter what language, is a tough job. (Not even mentioning figuring out what it is you’re supposed to build.) In a short amount of time? Approaching impossible.

The one thing that will sink a project faster than too many cooks with not enough talent is poor requirements at the outset.  If you truly don’t know what you’re building, you’re doomed.  You need to see pretty clearly what it is before the first line of code is written.  If you must build a prototype to get there, consider it time and money well spent.

By the way, creating a DSL for your domain is a way of creating a formal specification.  It takes rigor to create a language.  Hand waving won’t get you very far.  Also note that a giant laundry list of use cases and feature requests is not a specification.  It’s a wish list.  You need to understand the commercial dynamics of what really matters to customers, why it matters, and have a pretty good sketch of how you’ll solve their problem.
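
As a sketch of what “the DSL is the specification” can mean in practice (the approval domain and the thresholds below are hypothetical), the text a business person can read and sign off on is the same text the system executes:

```python
# Sketch: a declarative spec that doubles as executable rules.
# The domain (order approval) and the thresholds are hypothetical.
APPROVAL_SPEC = """
discount <= 10   -> auto_approve
discount <= 25   -> manager_approval
otherwise        -> vp_approval
"""

def decide(discount: float) -> str:
    """Interpret the spec top to bottom; the first matching line wins.
    eval() is fine for a sketch; a real DSL would use a proper parser."""
    for line in APPROVAL_SPEC.strip().splitlines():
        condition, outcome = (part.strip() for part in line.split("->"))
        if condition == "otherwise" or eval(condition, {}, {"discount": discount}):
            return outcome
    raise ValueError("spec has no matching rule")

print(decide(5))    # auto_approve
print(decide(18))   # manager_approval
print(decide(40))   # vp_approval
```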

Don’t kid yourself that a giant team running around gathering requirements and hacking out use cases in monolithic modules is building software, business or otherwise.  It’s creating an unmaintainable mess that makes nobody but the IT guys who assembled that laundry list happy.  Certainly the business users it lands on are going to find it isn’t what they thought they were getting, and it is almost impossible to change it into what they wanted.

So then why do small teams grow large?

This is my own editorial, but I’m surprised it isn’t talked about more.  We’ve all seen teams start small and grow large as a product succeeds.  Why does this happen?  The prevailing wisdom seems to be that as you gain customers, their demands for new functionality outstrip what the small team could provide.  I don’t believe it.  I’ve looked at a lot of software releases, and by release 3 or 4, it’s all too easy to get caught up delivering much sound and fury signifying nothing.  Did that release 3 or 4 really turn out that much more functionality than 1 or 2?  Was it really revolutionary?  Were the features all of the same caliber as the original?

No, I don’t think so.

Another source is failed negotiations between VP’s of Engineering and CEO’s.  Most CEO’s do not understand creating software.  They come from fields like Sales and Marketing where throwing dollars and bodies at problems can make a difference.  So they want their VP of Engineering to do the same.  They don’t want to hear that the team is running flat out and can’t produce more.  They want what they want when they want it, and they’ll write the check to make it happen.  I have not personally run afoul of this (I am stubborn about refusing to agree to do something I know will fail), but I have heard of it.  This is not the big issue, though.

In my experience managing multiple release cycles for something like 50 products, team growth almost always boils down to aspirations and career growth.  People get tired of working on the same code base.  They get tired of working on the boring parts of the code base.  They want new challenges.  They want career growth.  They want to be architects and managers.  Who can blame them?  It’s human to want these things.

Pretty soon, a person who was a senior developer is an architect or a manager.  We carve some subsystem off and hand it over to them.  Or, we hire someone incredibly junior to be in charge of some unimportant code that the senior guys want nothing more to do with.  A fiefdom is born and the big team is on its way.  This is a mistake!

We are better off to create entirely new products as a path for career development.  If we can’t justify a product, or worse, can’t justify giving a person a product, then we need to be honest about that.  You may lose the person as they seek opportunity elsewhere, but you may also need to get some new blood into your small team.

This points up something to look for in your hiring practices.  Beware too much naked ambition among developers.  The guys that push for promotion every year can be awesome, but they will be high maintenance, and they will try to push you into doing something that’s bad for the team and perhaps even bad for themselves.  Beware especially the talented developer who wants to move into management because they want to make the decisions on architecture.  The developers who are in love with the act of creation and who have great chemistry with their coworkers are the gems.  They will not push you for much more than clear and exciting direction on where to take the product next.  Don’t exploit these guys either.  Reward the heck out of them.  After all, it doesn’t take too many to work miracles.  Make sure they’re happy, whatever it takes.

Give me a few guys like that who are really good and there isn’t any piece of software I can’t get built faster and better than the big team.  We’ll also keep that software vigorous and cutting edge for years too, and we’ll run circles around the competition.

Posted in software development, strategy | 7 Comments »

Interview With LucidEra's Ken Rudin, Part 2

Posted by Bob Warfield on November 27, 2007

This is part 2 of my interview with LucidEra CEO Ken Rudin.  If you missed Part 1, be sure to go back and check it out!  As always in these interviews, my remarks are parenthetical, any good ideas are those of Ken Rudin, and any foolishness is my responsibility alone.

What was the fundraising like?  What do VC’s look for in a SaaS business plan?

Ken:  It's been interesting.  I feel very fortunate about what we raised, the valuations, and the great Board team.  The A round went quickly: 7 weeks from first meeting to close.  It went quickly, but not because of the idea and team.  Deals go slowly because VC's want to think and talk.  Nobody needed to think about our deal.  The majority hated it.  A few loved it and went for it.

When we presented it was 3 guys and 19 PowerPoint slides.  No demo.  Most people said BI is complex, it involves integration which is bad for on-demand, and it’s heavily customized.  But I knew better.  As a consultant at Emergence, I could see people kept reinventing the wheel.  They didn’t all need completely unique solutions.  The phrasing was different, but the essence was the same.   The world really was smaller than people thought.  And it worked.  We weren’t out to meet the needs of 100%, but we can do it for 80%.

Traditional BI is a tool that says we can do anything, but that’s a disadvantage.  It’s like managing a nuclear reactor because all the code is custom.  But we can keep identical schemas and use metadata to make it custom enough.
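
(On Ken's point about identical schemas plus metadata, here's a rough sketch of what that pattern can look like.  This is purely illustrative on my part, not LucidEra's actual design: every customer shares the same schema and the same canned report, and a small bag of per-tenant metadata supplies the customization.)

```python
# Illustrative only: one shared schema and report, customized per tenant
# through metadata rather than custom code.  Tenant names and fields are made up.
TENANT_METADATA = {
    "acme":   {"revenue_label": "Bookings",  "currency": "USD"},
    "globex": {"revenue_label": "Net Sales", "currency": "EUR"},
}

def render_report(tenant, rows):
    """Render the same canned report for every tenant, shaped by its metadata."""
    meta = TENANT_METADATA[tenant]
    lines = [f"{meta['revenue_label']} by quarter ({meta['currency']})"]
    for quarter, amount in rows:
        lines.append(f"  {quarter}: {amount:,.0f}")
    return "\n".join(lines)

print(render_report("acme", [("Q1", 1200000), ("Q2", 1350000)]))
```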

Bob:  (You never know how VC's will react to a deal.  It only takes a couple to bite.  My last VC funded deal involved talking to 15 VC's and the last 2 funded the deal.  The other 13 passed.  These days, there's a lot of talk as well about going to VC's later in the cycle, after a version 1.0 product has been built and a few customers are using it in Beta.  Angels often fund the first stage.  I've had a couple of VC's indicate that they no longer fund a slide show.  LucidEra is proof that you can buck that trend if you get your idea in front of the right audience.)

What were your biggest fears about LucidEra, and how did they turn out?

Ken:  One of my biggest fears was that we had to sell over the phone.  We couldn't afford travel and 6 month sales cycles.  This was unproven for me.  A lot of the VC's wouldn't believe it either.  They thought we'd need a consultative sale.  But we had a prebuilt solution, not a tool.  It's finished.  I can show it and they can take it or leave it very cheaply and quickly.  Even our 2 largest customers we only saw at a trade show at our booth; otherwise we never met face to face.

What really surprised you as you got deeper into LucidEra that you hadn't anticipated?

Ken:  The biggest surprise we've seen just in the last 6 weeks is how fast the big companies have jumped in.  Our plan was mid-market focused.  We didn't even try to market to our bigger customers; they came to us at a trade show and we closed them within 30 days.

Another surprise was channels.  We're working with partners.  It took a while for us to figure out what type would work for us.  Traditional SI's have no interest, which was a negative surprise.  Traditional SI's want to bill a lot of hours for custom work.  We stumbled across some SI's who are purely SaaS focused and discovered that SaaS has been as disruptive to SI's as it has been to software vendors.  Pure plays are coming on strong and have figured out how to create a high margin business around SaaS.  They may teach or use the tool for you, but they don't have to set it up.  They help do it faster by making it multiple choice and prescriptive.  More importantly, they help with best practices.  Which choices should I take for my business?  This has a lot more IP, and isn't just competing with commodity offshore rates for code slinging.  It also generates repeat business.  More management consulting.  Sales effectiveness optimization.  Now it becomes an annual service and not just a project.

Bob: (This sentiment has been echoed by every SaaS company I’ve talked to.  The traditional SI’s largely haven’t found their footing yet in the SaaS world.  They will often speak positively about it so as not to be perceived as critical of a major trend, but they don’t know how to actually engage their businesses with it.)

Do you think SaaS is an inevitable bridge that every ISV has to cross in some form or fashion?

Ken:  No.  Even if it were, that would be a real challenge for them.  There are some that will do just fine staying where they are.  Not many, but there are some.  The majority will have to deal with it.  Just because cars came along doesn't mean there are no horses; they are still around.  There's just a lot less need.  On-demand dramatically reduces the need for conventional on-premises software.

Bob:  Where are the likely survivors?

Ken:  There are some government mandated things involving military or top secret.

Bob:  It’s pretty hard for companies to embrace both models.

Ken:  Mixed companies have a civil war inside.  This is true for SI’s too.  But the mixed company may not really be behind SaaS.  They just think it is the thing to do.  It’s fashionable.

Any other advice for those who would start a SaaS business?

Ken:  I think for me, don’t just think, rethink.  What I mean is SaaS is not taking an existing enterprise software application and delivering it as SaaS.  All you’ve done is change delivery.  Rethink the whole purpose of the application.  We redesigned what BI means.  It’s not a tool, it’s a solution.  People don’t want drills, they want holes.

Who are the SaaS companies you look up to?

Ken:  Salesforce.com is the poster child, and I have personal background there as well.  I also have a lot of connections with NetSuite.  I spent a lot of time on their advisory board.  I remember discussing churn rates and hardware and so on that they won't talk about outside.  I look up to them.  They have done a phenomenal job.  NetSuite is SAP on-demand.  Salesforce is best-of-breed CRM on-demand.

What are your thoughts about Salesforce’s Force?

Ken:  We’re big fans of it.  We believe in opening the platform and you’ll see us do it with our own platform.  They have great buzz and some successes, but they have a lot of work ahead.  It is still early stage. 

One of the things I think they’ll run into is that as it becomes more flexible with Apex code, they open a Pandora’s Box.  No-software is no longer true.  Someone can write code that doesn’t work, and it isn’t Salesforce’s fault.  The brand delivers a certain promise, but 3rd party apps can be challenging.  It’s not simple anymore.

Are they encouraging too much customization?  Is it Siebel again?  If so, there will be lawsuits between customers, SI’s and Salesforce when customization fails.  It’s a risk, not a prediction.  I’m not saying this is bound to happen, just that it could if the situation isn’t managed very carefully.

What are your thoughts on being in the BI market where all the significant traditional players have been bought?

Ken:  It creates a great opportunity for us.  The last big independent got bought.  All the tall trees in the forest are cut down, so the undergrowth has a chance.  Those guys will not innovate for years, they’ll be integrating not innovating.  All the new innovation will come from smaller players like us.

We also said, wow, if major companies are buying these guys for such high prices, there is probably some real value in BI.  It makes customers start thinking about what their BI strategy should be.

Bob:  (It’s an interesting situation, where the big pure plays all got bought by even bigger companies.  Oracle got Hyperion, SAP got Business Objects, and recently IBM got Cognos.  Having watched these sorts of acquisitions for years, I don’t have any problem believing Ken’s view that we won’t see much innovation in BI from them for a long long time.  Integration is a difficult process, and often the best people choose to move on, or at least become very distracted.)

Do you think we’ll see something similar in all the other perpetual markets?  What does it mean for SaaS in general?

Ken:  For any segment that has large independents, we will see something similar.  Any time there are too many small players there will be consolidation.  Eventually, I think we’ll see that in the SaaS SI world to get critical mass.

It's the end of the current cycle.  The next generation steps up.  It happened with CRM.  SFDC took over and the rest were acquired.  It's happening with BI.  These shifts don't take away the need from customers; they just change the players.

Bob:  (I agree wholeheartedly that we’re at the end of the cycle for on-premises software.  It’s very hard to get a new on-premises enterprise company funded no matter what the idea may be.  I just had lunch with someone with a small private on-premises company and they’re having a tough time.  They know they have to get a SaaS offering going.)

Can the big guys get into SaaS successfully?  SAP with ByDesign?  Oracle now talking more about it?

Ken:  It’s like changing your DNA.  I wouldn’t say it’s impossible, but it is as close as you can imagine.  I call it the “battle within” or the “civil war.”  All the divisions are involved, not just sales.  Marketing, Engineering, Finance, everybody hates it.  Marketing hates it because you’re marketing against your own products.  SaaS sells on simpler.  Simpler than what?  You’re slamming your cash cow.  You can’t say anything interesting without damaging one product or the other.

Bob:  (Again, this is a sentiment echoed almost everywhere I’ve visited. The best approach to transition is to try to isolate some areas that can be pure SaaS with no in-house on-premises competitor.  New products and verticals work well.  I call this the “protected game preserve” strategy and I’ve written about it before.)

Next Installment

In our next installment, we'll get Ken's perspectives on the sales process as well as the scoop on LucidEra's innovative database architecture.  Stay tuned by getting on the data feed or e-mail list for the blog.  Just check out the options in the little box below my picture at the top of the page.
 

Posted in business, saas, strategy | 2 Comments »

One Cloud, Two Clouds, Four Clouds, More?

Posted by Bob Warfield on November 26, 2007

GigaOm writes recently that the world may only need 5 clouds, echoing a misquote attributed to Thomas J. Watson at IBM.  Nick Carr is much closer to the likely outcome when he writes about Vertical Clouds, an interesting article well worth a read.

The reality is that we have not yet settled on exactly what product the clouds are offering.  Today, we're at the lowest common denominator of Linux dial tone along with bulk storage.  That's Amazon EC2 and S3 in a nutshell.  It may be that after a suitable period of consolidation the world only needs 5 or so Linux dial tone offerings.  Given the number of nearly identical services offering web and email servers, we seem to be a long, long way from that consolidation.
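
To make the "Linux dial tone plus bulk storage" baseline concrete, here's a minimal sketch using the boto3 SDK (a later AWS library, used purely for illustration); the AMI ID and bucket name are placeholders, not real resources.

```python
# A minimal sketch of the lowest-common-denominator cloud: rent a generic
# Linux box (EC2) and park some bytes in bulk storage (S3).
# Assumes the boto3 SDK; the AMI ID and bucket are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3")

# "Linux dial tone": one commodity server, indistinguishable from anyone else's.
ec2.run_instances(ImageId="ami-0123456789abcdef0", InstanceType="t2.micro",
                  MinCount=1, MaxCount=1)

# "Bulk storage": a blob of bytes under a key, nothing more.
s3.put_object(Bucket="example-bucket", Key="backups/db.dump", Body=b"...")
```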

I like Carr's idea better.  Linux dial tone is useful, but ultimately doesn't take utility computing very far along the path to its true potential.  That potential extends well beyond broadly generic services and into vertically oriented spaces just as non-cloud computing products do.  Just take a look at the plethora of database offerings alone.  There are Open Source databases like MySQL, column store and other specialized DB's for Business Intelligence, Enterprise mainstays like Oracle and DB2, database hardware like Teradata, and so on.  Each of these is filling a particular niche and ecosystem.  Each of those niches can spawn at least one specialized cloud to service the interests of the niche.  The clouds are a long way from being mature enough to start taking on multiple niches at once.

The linkages between clouds will also be interesting.  Someone remarked to me that the problem with the cloud is there isn’t just one, there are many, and there are walls between them.  We’re starting to see those walls break down in some cases.  Look at the offer by Joyent to do hosting of Facebook apps.  The offer is free to the first 3500 developers to sign up.  How do they do it?  By means of a special cloud-to-cloud link:

There is also no latency. We have set up a direct physical fiber optic line between the Joyent data center and Facebook’s data center. Somewhere under San Francisco bay, there is a multiple-gigabit-per-second fiber line capable of pumping massive traffic.

Fiber is remarkably cheap.  I would expect to see more partnerships between non-competing cloud vendors who provide connections between their clouds that offer advantages in terms of bandwidth, cost for bandwidth, and latency, just like this example.  Imagine you're creating an enterprise application of some kind.  Perhaps it's even CRM and you want to host a component on Force.  But Force is expensive, and you don't want everything there.  Perhaps you'll also need a connection to WebEx so you can do teledemos with customers.  Lastly, you want some kind of Business Intelligence capability that goes well beyond what Force offers.  Now let's suppose you discover a utility computing vendor that has special cloud connections out to Force and WebEx, and offers BI capabilities as part of their service.  That would be exciting to you, and probably to a lot of other vendors.

Here's another one that matters: geography.  Which geographies does your cloud vendor cover, and how does that map back to your business?  Amazon recently announced the ability to target S3 data to their European datacenter.  Much more will follow.  Connections to the CDN's will also factor in here.  There are a lot of other scenarios that could develop.  Suppose Facebook decides hosting is a way to monetize.  If you want to tie into their Social Graph database, you have to build your application in their cloud.  Hmmm.  That's a head scratcher.  What could companies like Google do by persuading you to move into their cloud?  What if there was an economic rationale to save both parties money by colocating?  Perhaps it becomes cheaper for Google to search your content and part of that savings is passed on to make it cheaper for you to host your content inside Google's cloud.
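
As a concrete (and hedged) illustration of the geography point, here's roughly what pinning data to a European datacenter looks like with the boto3 SDK; the bucket name is hypothetical and the region identifier is a later-day one, not necessarily the setting Amazon exposed at the time.

```python
# Hedged sketch: create an S3 bucket constrained to a European region so the
# data physically lives there.  Bucket name and region are illustrative.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")
s3.create_bucket(
    Bucket="example-eu-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
s3.put_object(Bucket="example-eu-bucket", Key="customers.csv", Body=b"...")
```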

Get ready for a lot of complexity and choice to be injected into the cloud computing picture.  The connections that take place between different categories of Enterprise software are pretty well understood.  What's less well understood is how they will manifest themselves as connections between clouds.  It seems clear that there are many opportunities for success out there.  Many more than just five clouds will be needed.  In fact, Business Development people should take note: before too long, a piece of the puzzle will involve asking which clouds are directly connected to which other clouds.

Cloud computing is about to get much more interesting!

Posted in amazon, data center, saas, strategy, Web 2.0 | Leave a Comment »

Will MS Office Or Oracle Be Slaughtered First By The Cloud?

Posted by Bob Warfield on November 26, 2007

There's a spate of articles about the new Live Documents service, a cloud-based SaaS challenger to Microsoft Office.  Like any good entrepreneur, CEO and founder Sabeer Bhatia claims his new baby and its relatives will displace MS Office by 2010.  That's right around the corner, but wildly optimistic.  ZDNet's Dan Farber calls these cloud Office wannabes “ankle biters”, and with good reason.  They don't come close to posing a threat to Microsoft yet.  It's not for lack of trying.  There's a pretty good crowd of them out there now.  Live Documents has joined a cast that includes Google, Zimbra (now owned by Yahoo), Zoho, ThinkFree, Adobe, and others.

Why can’t these guys take over by 2010?  Because every story you read is largely a man bites dog story.  There’s little to no discussion of amazing new features these products offer that would give a reason to switch.  The mere fact that products calling themselves Office equivalents can exist in the cloud without needing to be installed seems to be newsworthy enough.  There are no great roundup reviews that are getting big attention in the blogosphere that play them off against each other.  What one does find are articles telling us about the introduction of features that seem painfully elementary.  It wasn’t so long ago that the Google Spreadsheet learned how to hide columns, for example.  Even as TechCrunch writes “While Live Documents Yaps, Zoho Delivers,” Stowe Boyd writes that he can’t get Zoho to play nicely with Google Gears, even though the ability to work disconnected is the big newly announced feature for Zoho.  Apparently the Live Docs messaging annoys Michael Arrington, who writes:

New product press releases unencumbered by the complexities of releasing actual software set off alarm bells. And when those press releases are so boastful as to suggest that the (unlaunched) product can hurt a competitor’s $20 billion revenue stream, the alarm bells get much louder…

So far Live Documents is nothing more than bullshit and smokescreens. That may have been the way to do business when Bhatia co-founded Hotmail in 1996, but his software is going to have to survive on its own in a hyper competitive marketplace when it actually launches. Hubris alone won’t do it.

From my perspective, this matter-of-fact attitude of pasting together a bunch of things out on the web without worrying too much about whether they work well is a problem for an Office Killer.  Microsoft Office represents basic literacy in the business world.  Give it up and you may find you're unable to speak the lingua franca of others you must communicate with day to day.  Real challengers have to keep this in mind.  I was a General in the last Office Suite Wars, having fathered Quattro Pro at Borland.  We made considerable inroads (Quattro Pro sold on the order of $100 million its first year) but ultimately fell by the wayside because we lacked a word processor to go with our suite.  The one thing I can tell you is that absolute 100% compatibility with the market leader of the time, significant innovation over that market leader, and significant economic advantage (we were much cheaper at the outset) were key to the success we did achieve.  I don't have a sense these upstarts have achieved any of these ideals.

There is a market that I think is more interesting when we look at who might become a Cloud Casualty sooner rather than later.  I'm speaking of Oracle, of course, and specifically of the database server business.  MySQL appears poised for an IPO, but beyond that there is a raft of contenders out there who have achieved a lot relative to Oracle.  There are fairly Oracle-compatible products like EnterpriseDB.  There are products that have serious innovations such as the column-based DB's.  And the economic advantages are undeniable.  Unlike the Office Suite arena, it gets harder and harder to find significant killer features that Oracle offers that don't exist in the Open Source DB world.  We've seen extremely large web sites powered by MySQL and some of these others, for example.  It would be hard to claim Oracle is dramatically more scalable in the face of the evidence, although one can likely conclude it remains easier to scale and more performant.

The problem is that the costs associated with Oracle licenses are positively usurious.  A friend who runs a SaaS company says his Oracle costs are bigger than his hardware costs.  I asked him why he continues to use it and he indicated his CTO was convinced he had to for scalability.  At my last company, Callidus, we ran some tests and were surprised at how performant these solutions were compared to Oracle.  I believe that currently, it's an inertia thing.  People are sure they can get Oracle to scale, they have people on board who know how, and unless cost is a serious issue from the get-go (as it can be with startups focused on ad revenues), the tried and true is chosen.  When enough people have experience scaling MySQL and its relatives, that inertia will have gone away.  Venture Capitalists tell me most of their portfolio companies are there.  As companies built on technologies like MySQL mature and people move on to other jobs, the word spreads.  Businesses running old-school on-premises software will be the last bastions of that inertia; realistically, I've still talked to members of that elite group who are keeping COBOL CICS and IBM AS/400 software alive.

It won't take an awful lot of shift before Oracle feels the pain.  The problem is that Oracle depends on this business as a cash cow to finance its other expansion.  Knock 20-25% off the top through erosion to these upstarts and it will dramatically change the Oracle profit picture for the worse.  Here's another interesting strategic point to consider: utility computing ties together larger tectonic plates that can result in greater and more sudden market shifts.  Imagine companies like Amazon deciding to offer database dial-tone in their rent-a-clouds.  The database is the most labor intensive and problematic piece of software in the whole suite.  If someone automates those problems away, promises scalability on a utility computing grid, and handles normal SQL, many will rush at the opportunity.  Such aggregation of users can drive a lot of license fees away in a hurry.  Such services also take away a lot of the inertia issue in a couple of ways.  First, the cloud service has to deal with the operational knowledge required for server care and feeding.  Customers won't have to.  Second, these cloud vendors are hardly lightweights.  Names like Amazon, Google, and Microsoft are bandied about.  The fear of dealing with some Open Source vendor that is viewed as too small and flaky is greatly ameliorated.  An additional sweetener is that the competitive strength of users of such services would be much greater versus their competitors who still use expensive Oracle and have to manage the servers themselves.
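
If you're wondering what database dial-tone would mean in practice, here's a hypothetical sketch: the application keeps speaking plain SQL, and the only thing that changes is the endpoint it connects to.  The host names and credentials are made up.

```python
# Hypothetical sketch of database dial-tone: standard SQL over a standard
# driver, with only the connection endpoint changing when the database moves
# from a self-managed server to a hosted utility service.
import mysql.connector  # any DB-API driver would do

# Before: a server your own team installs, patches, backs up, and scales.
# conn = mysql.connector.connect(host="db01.internal.example.com", ...)

# After: the same SQL against an endpoint someone else operates for you.
conn = mysql.connector.connect(
    host="db.cloud-provider.example.com",  # made-up hosted endpoint
    user="app", password="secret", database="crm",
)
cur = conn.cursor()
cur.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
for region, total in cur.fetchall():
    print(region, total)
conn.close()
```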

Despite interest by folks like Nick Carr in Live Documents, breaking Oracle's stranglehold on the database world is a more likely spot for the disruption of the old school to show up first.  It would mark quite a change.

Posted in amazon, data center, enterprise software, grid, platforms, strategy | 3 Comments »