SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for October, 2007

What is Talent if Not Good Design for Programmers? (Talent is No Myth for Programmers)

Posted by Bob Warfield on October 31, 2007

I love the Discipline and Punishment blog, but he’s taken a left turn at Albuquerque when he talks about the Myth of Talent for Software Engineering and then goes on to say:

What is the equivalent of “capital” in programming? It’s not ‘talent’ in any meaningful sense. Rather I’d suggest it’s what most people call “good design.” What’s really paying dividends in any large project is good design simply because design adds value faster than it adds cost. Good design, purely because of its “compound interest”-like nature, is largely what drives success in software and what separates successful projects from unsuccessful projects. The many software projects that fail (really they are abandoned) inevitably do so because of bad design.

I absolutely agree that it is the design and not the languages that matter most, but how else do we measure talent among programmers if not by good design? Who do we look up to most among the programmers we know? It’s the Uber Architects. These are the ones who have the good design ideas. Yeah, sure, there are those guys who crank out devilishly clever but unmaintainable code. In the old days they were called hackers, back before that term was a good thing; it was a term of derision. Great software engineers were the ones with great designs.

Let me offer one more reason why talent matters. If you really do believe it takes fewer people to write good code, and there is an overwhelming body of evidence to support that, how can you not also believe that with only a few slots on the team, talent matters even more?

In struggling to understand whether I’ve misunderstood the original post, I almost get a sense that the wrong meaning is being attributed to the word “Talent” when I see terms like “Rock Star Programmer” being thrown in as bad examples of “Talent”. There are certainly situations where the talent label is misapplied to someone who, in fact, does not have the talent. Don’t blame that on the idea that talent is valuable!

I sense a bit of a tendency to feel that the design is handed to programmers, and hence that design and talent are separate. That’s a bad idea! Design is a participative process under the watchful eye of the right benevolent but fascist dictators. A good dictator creates an overall framework that leaves a ton of freedom for individual programmers to express their own design sense in their corner of the world. A programmer who just wants to be handed all the design detail on a stone tablet so he can “write code” is not someone I want to hire or work with on my small teams. The guys I want will leave if they don’t get a hand in design. You will understand a design so much more fully if you participated in creating it. By the same token, I hate the idea of architects who never write a line of code because they’re too busy thinking of great architectures for someone else to build.

There is also a certain discomfort with the idea that talent implies really great programmers are born and not made. Let me lay that one to rest too. I’ve hired hundreds of programmers to work on really difficult software in small teams, and I have seen overwhelming evidence that great programmers, programmers with talent, are very definitely born and not made. We’ve built languages and tools of all shapes and sizes, database servers, desktop application software, web software, data mining, genetic algorithms, and a host of other stuff ranging from the mundane to the completely weird. I’ve worked with extremely famous programmers at companies like Borland, and I’ve worked with brilliant folks who are obscure. The one area I don’t have experience with is games, but I think that world believes you’re born and not made too.

Across those hundreds of programmers hired, only one pattern reliably led to more great programmers. It had nothing to do with age or education. It had nothing to do with giving them clever tests in the interview process. It had to do with whether they’d ever done at least one great thing, and whether they could communicate that thing well enough to expose some native design sense. I’ve seen guys with fantastic MIT or Stanford CS degrees (including graduate degrees) who could not program their way out of a paper bag. I’ve seen high school dropouts who started programming relatively late code circles around the average CS grad.

I’m sorry folks, but talent matters, talent is the ability to design software systems and algorithms, and you either got it or you ain’t.

Posted in software development, strategy | 5 Comments »

Serendipity is the Key to Code Reuse

Posted by Bob Warfield on October 31, 2007

In watching yet another ping pong game of compiled versus dynamic languages, I found myself mostly siding with the dynamic language crowd in the form of Steve Vinoski and Patrick Logan. This time around the discussion was all about what to do with “average programmers”. It got started when others posited that dynamic languages and RESTful architectures are for those who view programming as art, while statically checked contracts, whether we’re talking languages or SOA’s, are for those who want to view programming as engineering. I don’t want to get sidetracked by all of that background, because I will write about it some other time (is programming art or engineering, what to do with average programmers, yada, yada). For this post, I was struck by one of the linked tributaries that make the blogosphere such a powerful idea generator: that sudden tunneling of ideas between relatively unrelated spaces that sparks creativity.

Specifically, Vinoski mentioned almost completely in passing that REST leads to serendipitous code reuse. He provides just the one word, serendipity, with the link I’ve added to the word. Well, I can’t resist diving down a good-looking rabbit hole when I see one, just to see what else there may be, and sure enough, it was worth the trip. Stuart Charlton has a neat little post on serendipity and code reuse that crystallized some fuzzy thinking for me. Stuart is an Enterprise Architect for BEA, which you would think ought to predispose him to be much less enlightened, but being an INTP, I guess he couldn’t help blurting out some good stuff about REST. Here is the goodness of Stu:

One problem with SOA is that it is very “heavy”, with a partial focus, like CBD before it, on planned reuse.

In some industries, planned “product line” reuse has been shown to work, such as with car platforms. It’s also appropriate for very general purpose programming libraries, etc., and could also be appropriate in software (there’s a fair amount of “software product lines” literature out there).

From this viewpoint, “build it and maybe people will use it later” is a bad thing. SOA proponents really dislike this approach, where one exposes thousands of services in hopes of serendipity — because it never actually happens.

Yet, on the Web, we do this all the time. The Web architecture is all about serendipity, and letting a thousand information flowers bloom, regardless of whether it serves some greater, over arching, aligned need. We expose resources based on use, but the constraints on the architecture enables reuse without planning. Serendipity seems to result from good linking habits, stable URIs, a clear indication of the meaning of a particular resource, and good search algorithms to harvest & rank this meaning.

This led my own little ESTJ overly top-down and logical mind to instantly make a list or continuum of choices surrounding code reuse, SOA, REST, and dynamic languages:

1.  Preplan everything and rigidly enforce the contracts:  Heavy SOA.  No hope for serendipity.

2.  Expose thousands of services in hopes of serendipity.  Lots of work if you use SOA-style mechanisms.  That’s why SOA proponents say it never happens.

3.  Use a lightweight protocol that exposes thousands of services almost for free: REST, we love you!

4.  Expose a language that can create any service as needed.  Whoa!  Where did that come from?

I’ll admit, #4 popped up unbidden.  In my defense, I have a sort of “Warfield’s Law” when it comes to computers that leaps to mind more frequently than it should:

All things become languages if they’re important enough.

If Adobe can make a language out of printing, I don’t see why this doesn’t merit a language too. And the Law holds true. Computers do one thing uniquely: they are Turing machines. Languages are the manifestation of that. If it isn’t a language, it could be done by a toaster at some level and doesn’t need a real computer. Conversely, if you don’t make it a language, you’re robbing yourself of the full power a computer can offer.

Getting back to the problem at hand, what could be better than a dynamic language for enabling serendipitous code reuse? We can reorganize available code assets under the latest DSL of the day to solve an entirely new problem. In the grand scheme of these distributed computing conglomerations, REST becomes a sort of assembly language for these DSL’s. It gives us the universal impedance match we need to hook up our components. The dynamic language massages that somewhat generic socket into something more recognizable and powerful for the particular domain we want to conquer today. Tomorrow we’ll build a new one, and it’ll be easy, because we have lots of RESTful components lying about and a nifty dynamic language that’s potent at composing them.
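Here is a minimal sketch of that composing role, assuming a couple of hypothetical JSON-over-HTTP resources (the URLs and field names are invented for illustration, not any particular product’s API):

    import json
    from urllib.request import urlopen

    # Two hypothetical RESTful resources, each exposed as plain JSON over HTTP.
    CUSTOMERS_URL = "http://example.com/crm/customers/{id}"
    ORDERS_URL = "http://example.com/billing/orders?customer={id}"

    def fetch(url):
        # GET a resource and parse it as JSON; REST plays the assembly-language role here.
        with urlopen(url) as response:
            return json.load(response)

    def customer_summary(customer_id):
        # The "DSL of the day": compose two unrelated services into one domain-shaped answer.
        customer = fetch(CUSTOMERS_URL.format(id=customer_id))
        orders = fetch(ORDERS_URL.format(id=customer_id))
        return {
            "name": customer.get("name"),
            "open_orders": [o for o in orders if o.get("status") == "open"],
        }

    print(customer_summary(42))

The glue is a few lines of a dynamic language, and the domain vocabulary is just whatever wrapper functions we choose to put around the resources we happen to have lying about.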

Perhaps I’ll run this thing on a Pile of Lamps while I’m at it.  I say that only partially in jest, because the Pile O’ Lamps crowd are speaking my language, at least in terms of making things into languages.  Aloof Schipperke writes that he sees SPOTs when thinking about the Pile of Lamps.  He goes through an abstraction exercise that got me thinking again.  Here is the original Lamp:

  • Linux
  • Apache
  • MySQL
  • P{HP,erl,ython}

Aloof says, “Wait, it’s more like this if we go up 10,000 feet”:

  • Scripting Language
  • HTTP
  • Operating System
  • Database

Do you see Warfield’s Law (everything becomes a language) at work yet? What did the Scripting Language replace for many architectures? The middle tier. The application server. Devilishly complex, painful to operate, often extremely inefficient. Beans of every variety abound. J2EE et al, ad nauseam. Why put up with it? Replace it with (drumroll please!):

A Scripting Language!

Now can we successfully combine all of these ingredients?   

  • Component Bus for Reuse and Communication:  (RESTful or similar) 
  • Scripting Language
  • HTTP
  • Operating System
  • Database
  • Utility Computing Infrastructure

That seems to me a potent next generation architecture for web software. 
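As a sketch of what “replace the app server with a scripting language” looks like in practice, here is a complete HTTP resource served straight from the scripting tier using nothing but the Python standard library (the resource, port, and payload are invented for illustration):

    import json
    from wsgiref.simple_server import make_server

    # A resource exposed directly by the scripting tier: no bean containers,
    # no deployment descriptors, just HTTP in and JSON out.
    STATUS = {"service": "inventory", "healthy": True}

    def app(environ, start_response):
        if environ.get("PATH_INFO") == "/status":
            body = json.dumps(STATUS).encode("utf-8")
            start_response("200 OK", [("Content-Type", "application/json")])
            return [body]
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]

    make_server("", 8000, app).serve_forever()

Swap in whichever nifty dynamic language you prefer; the point is how little ceremony stands between the script and HTTP.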

Replacing pieces of code, framework, or whatever with a language is an extremely powerful chess move. It’s turning a pawn into a queen. It’s not easy to do, as most are not facile at creating languages. But why bother? There are lots of nifty dynamic languages to choose from. Now you’ve got another excuse to go look one up.

Posted in platforms, software development | 3 Comments »

Ubiquitous Social Networks for Business

Posted by Bob Warfield on October 31, 2007

Google’s OpenSocial API’s could make a lot of sense for Business. I started thinking along these lines after seeing how many of the initial partners in the initiative were quite business related: companies like Salesforce, LinkedIn, Plaxo, and Oracle, for example. I asked how much longer businesses would swim upstream trying to use Facebook for business networking and communities if OpenSocial plugs directly into real business networks and communities.

This all raises the question: what would businesses do with Ubiquitous Social Networking?

Note that Google’s OpenSocial doesn’t get us there yet; it merely clears away the critical cross-Network linkage issues and lowers the barriers to write apps that will run on all participating networks.  Businesses still need easy to customize components that support the API’s to let them build special purpose Social Networks that they can live with.  I can’t believe that sort of thing is far behind, though.  Either Google will build it or someone else will. 

Let’s brainstorm a bit about what it means:

Let’s start with the mechanisms by which many people are introduced to businesses: product registration and sales lead registration. Let’s postulate that rather than this information disappearing into the black hole it usually does, you sign up to be a part of the community of customers and other interested parties the business is supporting via Social Networking. So, you fill out the usual bit of information and perhaps a little bit more, and you’re on board.

We don’t know at this stage enough about OpenSocial to say, but let’s even imagine that sign up is streamlined because the API’s can go back to the Google mother ship and save you re-entering data everywhere you go.  In that sense, perhaps it becomes the Universal Identity Fabric for the Internet that the ZDNet folks talk about.  Note that this is just getting you into community.  Anything that involves money changing hands or more security continues to operate as it always has and will remain secure and armor plated–no Open API’s needed there.

Now you’re part of a community.  You can participate in conversations with other community members and with the business.  Benefits?  Let’s see:

–  Improved Customer Service:  Some of the best and most responsive customer service I’ve ever seen comes not from huge companies but from communities. Empower interested individuals to help their fellow customers with problems, and make sure there is enough oversight that if the help stalls or wrong answers are given (or other noxious hijinks need attention), someone from the company picks up the thread. I don’t know about you, but I much prefer online support to calling some faraway call center and fighting my way through some person who knows nothing but the script in front of their face. It would be cheaper for the business too.

–  Faster Sales Cycle:  Great communities facilitate sales.  I’ve seen this over and over again.  The crudest form is classic Enterprise references, but it can be far far better than that with a real community.  Prospects get in and get infected by the enthusiasm.  Even if they see a problem, so long as it is worked out quickly, they’ll see it as a positive.  Chances are they’ll learn positives that no salesperson could ever think to give them as they watch what others are doing with products.

–  Customer Satisfaction:  People like to be part of a community. It breeds loyalty. Better service and a happier sales cycle, fostered by fellow loyalists who are being paid nothing, all add to customer satisfaction and loyalty.

Given the initial entrée, sub-groups can be a function of the community too. This lets businesses crowdsource all kinds of input from the customer base. Interested parties will naturally gravitate in and out of the appropriate groups. Whether the business is looking for input on what to build next or is actively trying to recruit new employees from these communities (often a fantastic source, BTW), it becomes much simpler for all concerned.

Okay, that’s a pretty conventional step up for some conventional processes. But now let’s postulate what can happen if all of these business communities start to be connected via the Octopus that is Google. Let’s assume that information can pass through the membrane below the surface of the API’s and go back to Google, where it gets repackaged into new services. This is likely a voluntary process, but Google will be smart about making sure incentives are aligned to encourage as much sharing as possible. They have a valuable soft dollar currency, BTW, in the form of their ubiquitous advertising, which virtually every business with an online presence must participate in. If they have to, they can trade some of that to encourage the right behaviour. My guess is they won’t have to.

What happens when we start the comingling?

Suddenly, we have cross-business knowledge of customers. The benefits here are many as well. Your office supplies source will already know which toner cartridge fits the printer you ordered and registered from Dell. The camera store knows which accessories fit the new digital camera you got for your birthday. The photography enthusiast site you’ve joined automatically offers to hook you up with the group of fellow enthusiasts who have the same camera. They know from another photo site you’ve been visiting to hook you up with the landscape photography crowd rather than the portrait takers. Because you have a membership with the National Audubon Society, you’re also steered to the wildlife photographers.

It’s almost creepy, I know, but it could also be strangely wonderful in terms of time saved and chance connections that never would have happened otherwise.  Why will businesses share their valuable information?  I know, it’s YOUR valuable information, but they see it as theirs and they’ll want you to sign off on it as such.  They won’t share contact information, but preference information remains pretty gray.  The reason they’ll share anything is they’ll be incented to do so.  Businesses could pay one another referral fees for successful transactions, for example.  That’s why the printer company is incented to tell the office supply outfit which printer you have and which toner works with it.  Google could use their ad dollars as a universal exchange currency to incent the behaviour.  Make it available to their ad engine and they cut you a break on your ads.  If they structure it right, it will be businesses trading Google ad dollars around and eventually spending even more of those dollars by making click throughs go up.

Google Operating System calls the new concept a “meta-Social Network”.  Stowe Boyd agrees that’s what it is.  I like that term, as that’s exactly what I think the long-term potential will be.

This is a beautiful example of using “open” to trump “proprietary”, and it will have far reaching consequences (although Stowe says it isn’t really “open”, just “more open”). Marc Andreessen called Facebook “a dramatic leap forward for the Internet industry” and went on to say, “Open Social is the next big leap forward!” In that same post, he indicates the combined pool of users for the initial partners is about 100 million, which is 2x Facebook already. So much for some of the early hand wringing over whether developers would find it interesting enough. Scoble certainly views this as a threat when he asks, “Will Google Friendster Facebook?” To learn more as a developer, Brady Forrest has a succinct post over on O’Reilly.

It will likely take a couple of iterations and perhaps even some new API’s before the full vision I’m painting can be realized, but the starting gun has been fired. I’m reminded of the scene in the movie Trading Places where the Dukes explain commodity trading to Eddie Murphy. The “good part”, as they call it, is that the traders make money whether the commodity is going up or down, so long as people are buying and selling. Google owns this commodity exchange, and OpenSocial is going to make it even more valuable while spinning off benefits that will make the web much more intelligent and aware of everyone’s personal choices.

Related Articles

What’s Next Google?  (You won the battle, now what about the war?)

Google is in the Cat Bird Seat for Identity Matching With OpenSocial

Posted in Marketing, strategy, Web 2.0 | 9 Comments »

For Google, The Internet Is Their Social Network (Plus, Business Communities and Cookie Marketing)

Posted by Bob Warfield on October 31, 2007

For Google, the Internet literally is their Social Network. I first made that pronouncement in a post on whether the giants should buy, build, or integrate, and it looks like Google does indeed want to make the Internet their social network. Their chosen vehicle for doing this is a set of open API’s to be announced tomorrow called “OpenSocial”. If a social network agrees to participate and offer these API’s, developers will be able to access them to do three things on the network:

  1. Access profile information about the user.
  2. Access friends data for the user, the social graph, in other words.
  3. Access activities, such as news feeds.

Applications (widgets for social networks, in other words) can be created using these API’s without recourse to the special markup languages most networks currently require. Instead, normal Javascript and HTML are used. This will make it easier to port existing applications to use the API’s and play on participating social networks. The host social network gets to set its own rules about things like advertising and other policies in the apps.
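As a rough illustration of those three capabilities, here is what a widget’s data access boils down to. This is a hypothetical Python/REST sketch; the real API’s are Javascript, and the endpoints below are invented rather than OpenSocial’s actual calls:

    import json
    from urllib.request import urlopen

    BASE = "http://social.example.com/api"   # hypothetical host endpoint

    def get_profile(user_id):
        # 1. Profile information about the user.
        return json.load(urlopen(f"{BASE}/people/{user_id}"))

    def get_friends(user_id):
        # 2. The user's friends, i.e. the social graph.
        return json.load(urlopen(f"{BASE}/people/{user_id}/friends"))

    def get_activities(user_id):
        # 3. Activities, such as news feed entries.
        return json.load(urlopen(f"{BASE}/activities/{user_id}"))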

The charter hosts form quite a potent group and include Orkut, Salesforce, LinkedIn, Ning, Hi5, Plaxo, Friendster, Viadeo and Oracle.  Charter applications include Flixster, iLike, RockYou and Slide, which are some of the key apps from Facebook.

This marks quite a warning shot across the bow of Facebook.  The timing is interesting.  Google probably could have impacted Facebook’s recent financing by preannouncing, but they chose not to.  I think it’s a more clever strategy because it creates a more intangible fear.  If Facebook had gotten funded at their desired valuation anyway, they could downplay the whole Google thing and say the world had voted with their pocketbooks that it didn’t matter and they still got a $15B valuation and $500M in cash.  Moreover, Google may have acted as a spoiler to get Microsoft to commit to Facebook, only to pull the rug out from under them with an open strategy.  Proprietary Microsoft is in bed with proprietary Facebook and out in the cold with the rest of the web.  Doh!

For Google, the Internet is its Walled Garden.  Why should it create sub-gardens?

There are lots of ramifications to ponder, including Business Communities and the future of Social Network Marketing. The mouth waters at the possibilities OpenSocial may bring. It’s hard to see how Facebook can compete with this simply by adding groupings to their own world. Google has turned the walled garden inside out with this new API. A couple of brief notes follow.

Business Communities 

Take special note of the business players in the mix, companies like Salesforce, LinkedIn, Plaxo, and Oracle, for example.  First thought is how much longer will businesses swim upstream trying to use Facebook for business networking and communities if OpenSocial plugs directly into real business networks and communities?

I think there is a huge opportunity for businesses to get hooked up with communities, and Google is going to be right in the middle of all that as are these early movers to the platform.  Imagine the possibilities to be had by linking these kinds of disparate services into collaborative uber communities.  What if companies use this as a mechanism for product registration and introduction to their own communities?  What are the possibilities for tying all of that together?  And how can Google’s advertising machine take advantage of such goings on?

Of Dirty Cookies and Social Network Marketing

Speaking of advertising, there’s been a lot of news lately about Facebook’s “Social Ad Network”. The idea is that Facebook will plant cookies that tell other sites you visit what your interests are so they can better target ads to you. Presumably, Google could make use of this API to completely short-circuit Facebook’s Social Ad Network by tying their knowledge (via the API) of your profile and interests back to their own AdSense system. You’ve got to figure the combination of knowing the profile across potentially multiple social networks together with tying into AdSense’s other insights would produce a truly killer ad platform.

There’s one tiny little fly in this ointment for either company, and that’s the idea that for some people, such “dirty cookies” are downright creepy from a privacy standpoint.  I have to admit, I am somewhat in that camp myself.

Interesting times we live in. If these antics manage to build a box around Facebook that caps its potential, we will be seeing yet another hole in the bubble that will start to let some of the momentum leak out.

Related Articles

Ubiquitous Social Networking for Business

Robin Harris says Google Bluffed Microsoft into Overpaying. I agree; as I mention above, Google played this one in a very clever way.

Posted in business, strategy, Web 2.0 | 7 Comments »

How Many Tenants For a Multitenant SaaS Architecture?

Posted by Bob Warfield on October 30, 2007

We’ve talked about the cost advantages multitenancy can bring, up to 16:1 compared to a single tenant per instance.  But how many tenants do we have to put into an instance to get those kinds of savings?  In other words, what metrics should we shoot for?

Once again, we can turn to statements made around Salesforce.com to get an idea. For example, Michael Dell says that Salesforce was running 40 Dell PowerEdge servers at one point in time. If we go back into Salesforce’s EDGAR filings, we can see that they had 6,700 customers (tenants) and 134,000 seats when running on the 40 Dell servers.

With a little math, we conclude that if we view the 40 servers as one instance, Salesforce was able to stack in 168 tenants per server, and 3,350 seats per server.  I’ve queried my contacts at various other SaaS companies and learned that this is pretty high in their estimation.  Admittedly, the CRM application is not very taxing on machine resources relative to a lot of other applications.  It’s basically fill in a form and keep track of the data with a little reporting.  Transaction rates are governed by the rate at which sales cycles change and are added, which is pretty slow among Enterprise transaction rates.  Based on that, let’s view Salesforce as the practical zenith.
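For anyone who wants to check the math (treating all 40 servers as one big instance, per the figures above):

    tenants = 6_700     # customers from the EDGAR filings
    seats = 134_000     # seats at the same point in time
    servers = 40        # Dell PowerEdge servers, per Michael Dell

    print(tenants / servers)   # ~168 tenants per server
    print(seats / servers)     # 3,350 seats per server
    print(seats / tenants)     # ~20 seats per tenant, a figure that comes up again below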

It does seem like we can draw some conclusions pretty quickly. Most virtualization strategies are not going to let you run 168 single-tenant instances on a single server. Figures I’ve heard are in the single digits. If nothing else, virtualization means you don’t modify your software. Virtualization is therefore good at sharing variable costs, those costs your software is designed to vary anyway, but not so good at sharing fixed costs.

One of the SaaS vendors that uses virtualization mentioned a good example of this fixed versus variable cost issue.  He mentioned that adding a domain to an app server has a fixed minimum cost to them of 1GB of RAM.  His reaction to the SFDC figure was to say that running 170 zones (what he calls their virtual partitions) was a non-starter because it would mean the server needed 170 GB of RAM.  A true multitenant architecture could share the app server resource requirements across many tenants.

Note that this fellow I talked to wasn’t sweating it very much.  His business calls for fewer larger SaaS tenants than Salesforce.  Their average is in the hundreds of seats per tenant, not 20 or so like Salesforce, so they naturally have many fewer tenants.  All of this has to factor into your decision making when considering multitenancy versus virtualization for SaaS.

A last consideration is that even loyal multitenancy fans still keep multiple instances, each of them multitenant.  There are a variety of reasons to do this that boil down to operational convenience.  If nothing else, it’s easier to build and provision an instance and then migrate customers to it when doing upgrades.  Opinions vary on how many servers go into an instance, but vendors I talked to are thinking along the lines of 50 to 100 at the high end.

Utility computing infrastructure with painless scale up/scale down would have a bearing on this.  I’ll be writing more about that over time.

Posted in platforms, saas | 2 Comments »

The Value of SaaS vs Maintenance Recurring Revenue

Posted by Bob Warfield on October 30, 2007

Oracle’s bid for BEA valued the company at 7.5x maintenance revenue.  According to Credit Suisse, past Oracle acquisitions have all fallen into the 7x to 8x range.  BEA asked for 9x. 

This made me wonder about the value of SaaS recurring revenue. After all, if the Oracles of the world are primarily after nice, safe recurring revenue streams, maintenance is one thing, but it’s computed as a fraction of the license price, usually in the 15-20% range. Why not look at companies that get 100% recurring revenue for their software?

Here is a quick look at those figures for some publicly traded SaaS companies:

[Table: SaaS Valuation]

The average is 9, which is pretty close to the 7x-8x Oracle wants to pay for recurring maintenance revenues.  A small premium for growth might make sense for these younger SaaS companies.
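A hypothetical example makes the comparison concrete. The license base and the 18% maintenance rate below are invented for illustration; the multiples are the ones quoted above:

    license_sales = 100.0                 # $M of perpetual license sold (hypothetical)
    maintenance = license_sales * 0.18    # $18M of recurring maintenance revenue

    value_at_7_5x = 7.5 * maintenance     # what an Oracle-style bid pays for that stream
    value_at_9x = 9.0 * maintenance       # the same recurring dollars valued like SaaS revenue

    print(value_at_7_5x, value_at_9x)     # 135.0 vs 162.0: same neighborhood, modest SaaS premium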

It’s always interesting when two relatively unrelated things, in this case a multiple on maintenance revenue and a multiple on SaaS revenue, match up so well.  That’s usually telling us we’re on the right track.

Posted in business, saas, strategy | 2 Comments »

How Will This Web 2.0 Bubble Burst?

Posted by Bob Warfield on October 30, 2007

The cries that the present Web 2.0 world is a bubble about to burst are becoming increasingly strident. Emotions are starting to run away, and sometimes it gets a little ugly with the name calling. We are at that classic point where bubble criers abound and those who believe there is no bubble are becoming increasingly aggressive in their attempts to disagree. Doesn’t it feel just like it did the last time? What’s left to do to wrap up this bubble and move on?

Bubbles don’t go away until everyone is in.  It’s never quite everyone, but it’s so many that there’s nobody left to keep the momentum of joining going.  When everyone is on the train, the next step is for something to falter.  Something pretty big.  In the last bubble, I remember IPO’s going off like clockwork, and then suddenly, at the peak of bubble mania, WebVan’s IPO faltered seriously.  People who got pre-IPO shares lost money.  I’m sure there were more incidents like this going on, but that’s the one I knew about.

Picture the scene.  Everyone is finally on the train.  All the naysayers who had doubted you could make billions of dollars with sock puppets selling pet food had given up.  They’d seen too many make those billions and decided what was happening was too big to fight, so they joined too.  Then the train derailed.  Look how fast a bubble can come unwound once that happens.  One reason is all those naysayers.  They got on knowing they shouldn’t have, and at the first sign of trouble they were facing naked fear.  They never had the courage of their convictions, only the courage of seeing others succeed, and now they’d seen failure.  Right where they used to predict failure would be.  The latecomers were the pragmatists, not the visionaries.  They are data sifters, not intuitive, and this new data of failure can make them change course on a dime.  Thus begins the rush to the exits.  The true true believers will stay on long after it’s too late.  They were visionaries, not pragmatists, or they wouldn’t have been there in the first place.  They fly on intuition, and while the situation may smell funny, it’s hard to shake a strong intuition.  They don’t operate on hindsight.  They usually don’t spot the trouble coming as easily as they create a new direction.  The threat is that the wide eyed optimism that fueled the bubble in the beginning will be gone and it will take more than vision to keep going.  That’s a rather emotional view of what happens, but it is a natural outcome of a competitive system reaching equilibrium. 

How can we tell what stage we’re in?  Watch the fast followers: they are the canaries in the coal mine.  They’ve eliminated the bad genes that the first movers mistakenly had in their formulas and kept only the good.  We know we’re in the fast follower stage when we read about new companies that are endless combinations of familiar things.  When was the last time you heard of a fundamentally new thing like a Wiki, Blog, or Social Network?  Maybe we count the video stuff like Joost, but it has fast follower Hulu.  The fast followers are the sure sign that the white space green field open running is done, and it’s all about competition.  Huge markets will sustain a certain amount of competition without impacting growth rates, but eventually, fatigue sets in, there is minimal differentiation, and people give up looking at new things and choose the market leader.

The market leaders will survive the bubble.  They will even prosper as markets shed competitors that were a dead weight on their growth.  Market leaders will have reached enough critical mass to weather the bursting and emerge stronger than ever before.  This is one reason why folks like Jack Welch have said the time to invest in growth is during adversity when others are cutting back.  Your growth investment goes so much further if you’re actually in a position to be a market leader.

Given all the angel money and bootstrapping, this is a funny age. The last bubble saw lots of IPOs, and hence tremendous collateral damage was done to non-combatants. This time we’re seeing merger frenzy. It isn’t so much the non-combatants as market leaders from the present (Google) and the past (Microsoft) that are providing the exits. Hopefully there will be a little less irrational exuberance as these companies use the tool of acquisition to fill out their portfolios and compete with one another. Perhaps we’ll have a kinder, gentler bubble bursting, but the end result will be the same. Only a few in each category can survive; it may just take a little longer and involve less sound and fury getting there.

What does a complete portfolio look like for an Uber-player like a Google, Microsoft, or even a Yahoo?  My Web 2.0 Personality Map seems like a good map of the territory.  To reach all the personality types, you’d want a presence in every square of the matrix:

[Figure: Web 2.0 Personality Map]

The properties are clearly laid out.  Note that there are many more services out there that fit in a given square than I’ve listed.  Some of these are already taken.  Master moves such as Microsoft acquiring Yahoo would fill in many squares with a single stroke. 

But such a combination raises the issue that properties have to want to get bought, at a valuation that is compatible with the purchaser’s pocketbook and notions of propriety. This is starting to happen when we see deals like Zimbra. It was a good multiple for the VC’s, but nothing spectacular. Clearly they were becoming concerned their niche was getting crowded with bigger players and their ability to compete was diminishing. So it will be as each acquisition plays out: others in the space become increasingly nervous. It is a game of musical chairs, and there are not enough seats for everyone.

Remember too that these are largely private companies. Public companies can be coerced under the duress of fiduciary responsibility to be reasonable. Oracle has polished this to a fine art, and the backlash for those who resist is painful and ultimately futile. Just ask Dave Duffield, who fought hard to keep PeopleSoft out of Oracle’s clutches. No, in this case we’re dealing with founders and investors. It’s largely a game of fear of failure versus the bird in the hand. The right chemistry is in place. People are starting to wonder about the bubble, and there are some WebVan-like events too, such as eBay’s spectacular failure with Skype. There are subtle clues as well. Why was the story on Microsoft’s initial Facebook investment $500M but the final figure was $250M?

The canaries are beginning to nod off in their cages.  Best not to go too much deeper into the coal mine!

Stay tuned, as I have a post I’m penning on where I think the next big bubble opportunity will be.  It’s a doozy, it’s a logical progression, and it may even be near term enough to keep this market going strong on some slightly different cylinders.

Related Stories

Automattic (WordPress) spurns $200 million acquisition offer

R/W Web asks, “Are we still changing the web?”  When I took the poll, 33% said there’s still plenty of innovation, 28% said it’s slowing, and 22% said there was not enough. Given that the audience for this is probably a little skewed, I’d say there’s still some sand in the hourglass. OTOH, the author, Richard MacManus, drags a lot of things into this category that I don’t think belong, such as SaaS, Utility Computing, and the Mobile Web. They are phenomena in their own right that are on a separate bubble clock and not so far along. They are driven by forces other than Web 2.0 and can continue on if Web 2.0 falters a bit. Mobile, I suppose, can prolong the mainstream Web 2.0 by increasing access to it. But ask yourself: how good a job did Blackberry do reinvigorating email?

Posted in business, strategy, Web 2.0 | 9 Comments »

User-Contributed Data Auditing?

Posted by Bob Warfield on October 29, 2007

We’ve all read the headlines about user-generated content, which has become a big deal in the modern Web world.  The rap is that only about 1% of that content is any good.  While many writers are down on the idea, 1% of the web is huge and growing like crazy, so it matters.  There’s another rash of articles dealing with user-generated metadata or user-generated structure.  These are all worthwhile concepts that show how people using the web can add back tremendous value. 

It’s the Data, Stupid is the title of a thought-provoking WSJ piece that made me think of another wonderful user-generated contribution. Ben Worthen, the article’s author, has this to say:

One of the hottest techs right now is software that helps business people view and interact with their companies’ data. But none of this software will help a lick if the data you’re working with isn’t any good.

This is a huge problem for businesses right now. There’s an old joke making the rounds of BI circles to the effect that the answers gleaned from the first BI project were great, right up until the second BI project started producing answers that didn’t agree with the first; at that point we were worse off than before, when at least ignorance was bliss. According to Worthen, Accenture has surveyed businesses and learned that only 29% make data quality a part of their initiatives. Having worked in Enterprise software for some time, I can tell you that most of the data one encounters out there has problems of one kind or another. Nearly any project has to deal with cleaning up the data well enough for whatever purpose it is to be used for.

But during my travels, I came upon a unique situation where users were actually incented to massively improve the quality of the data.  I’m speaking of the Incentive Compensation experience at Callidus Software.  Most people don’t think about it, but there is a huge amount of data that must be collected to calculate sales compensation that is directly relevant to many questions on the revenue side of the business.  The data mart associated with such systems can easily answer questions such as:

  • Who were the most important customers for each period and how is the relationship evolving in terms of repeat business?
  • What were the most important products sold?
  • Who were the best sales people?
  • What were the best territories?

There are many more examples, but this data was solid gold for our customers.  I got this news at one user conference some years ago when I was essentially mobbed by a group of customers.  One of them indicated that while their title was VP of Compensation, they were producing reports out of the incentive data mart for virtually every functional organization in the company.  The reason people were looking in the comp datamart was that the information there was unusually accurate.  Since salespeople were being compensated on it, and since they had visibility to see the data through a web page our software provided, an entire group of people were suddenly motivated to clean up the data.  It helped that our software could actually reward them for doing this by going back and restating results.  The  VP’s of Compensation for these customers were demanding tools to help them with delivering this information to others, so eventually we built the TrueAnalytics product.

The moral of the story is to find ways to incent your users to clean up the data and they will do so, spectacularly.  But they need visibility and a reason to care before it can happen.

Posted in business, saas, strategy | 6 Comments »

Practical Experiences With Never Rewriting Software

Posted by Bob Warfield on October 29, 2007

Dharmesh Shah says we should (almost) never rewrite software, and he is right.  Rewrites come about, he says, from the following causes (the comments are mine):

1.  The Code Sucks:  Given who wrote the old code versus the new code, why won’t the new code suck too? If your code is brittle, refactor it. Fixing brittleness is one of the things refactoring does, and it does so without breaking the world for a long time as a side effect. If things are so bad refactoring won’t help, you probably need to rebuild the R&D team first, and then they’ll insist the code be rewritten. The prognosis is not good for this sort of thing, but sometimes you can recover. Unfortunately, the fish rots from the head. Who hired the original team? Are they hiring the new team? Is the company really prepared to hold position for a major rewrite and team rebuild? This is dire experimental surgery that I hope you’ll avoid at all costs.

2.  “We’re So Much Smarter Now”:  There is often a grain of truth here, but what will it do to your installed base to do a major rewrite that expunges what you see as flawed assumptions? Note that you can refactor more than just code. Refactor the user model. Get it right without breaking the old ways. You can be smarter without boiling the ocean.

3.  We Picked the Wrong Platform/Language:  I haven’t seen this one come up so often in the way Shah describes, where new guys hired in want a change. We’ve tended to focus so much on good chemistry for communication purposes that we hire birds of a feather. See my article on using fewer people to build better software for more. However, sometimes platforms are forced on us, and Shah gives similar examples in his write up. I can think of three major examples throughout my nearly 25 year career. First was client server: there was definitely a time to shift from using a desktop DB engine to client server on various projects I was aware of. Next was Windows versus character-based DOS UI’s. Last was the Internet. These are all such radical paradigm shifts that unless you were amazingly prescient or just an amazing architect, you probably had to contemplate some major rewrite-level work to get there. I have also presided over a language shift. We ported Quattro Pro (the DOS version; Windows was in C++) from an unsupported (but reasonably mature) Modula-2 compiler to Turbo Pascal while at Borland. The short story is it wasn’t worth it. It took a lot of work despite how similar the languages were, and despite promises of how much better code Turbo Pascal would generate, the product actually ran a bit slower. You’d have a hard time convincing me a rewrite to change languages was ever worth it.

I want to go back to the refactoring. This is something I’ve always intuitively done. We used to call it “iterative middle-out programming”. It’s the idea that you keep things running at all times and your changes are measured transformations. Even testing can be approached in this way, although that’s the subject for another post. I firmly believe that schedules and product roadmaps need to allocate refactoring time. This is time spent that delivers no feature a product manager, customer, or salesperson would recognize as such, but that improves the code. Usually, not too much can be allocated there. I strive to hold out 20% on top of whatever schedule pad is there, and yes, I will sacrifice that 20% if we fall behind. About every 3rd or 4th major release, one must invest more, say 40%, in housekeeping.

If you do all that, it is amazing how long a code base can remain viable. The Modula-2 version of Quattro Pro on DOS lasted for well over 10 years with no signs of distress. Like a house, the code must have good “bones”. That is, the top-down architecture has to be right and can’t fight you too much. But if you have those “good bones”, you can keep running with the same code base, add tons of functionality, and the team is amazingly much happier too. Part of it is that the refactoring time lets them exercise their computer science tendencies and not just work on yet another mundane feature. All great developers need to scratch that itch. They want to put a clever algorithm or structure in place. It makes the code better, but it makes the team better too. Part of it is rotating developers onto new projects if they get too bored. Let the new young Turks have a shot. Make sure the new guys know they’re the ones paying the bills.

So remember:  budget time for refactoring, and skip the major rewrite.  Everyone will be happier!

Posted in saas | 3 Comments »

A Pile of Lamps Needs a Brain

Posted by Bob Warfield on October 28, 2007

Continuing the discussion of a Pile of Lamps (a clustered Lamp stack in more prosaic terms), Aloof Schipperke writes about how such a thing might manage its consumption of machines on a utility computing fabric:

Techniques for managing large sets of machines tend to either highly centralized or highly decentralized. Centralized solutions tend to come from system administration circles as ways to cope with large quantities of machines. Decentralized solutions tend to come from the parallel computing space where algorithms are designed to take advantage of large quantities of machines.

Neither approach tends to provide much coupling between management actions and application conditions. Neither approach seems well adapted for any form of semi-intelligent dynamic configuration of multi-layer web application. Neither of them seem well suited for non-trivial quantities of loosely coupled LAMP stacks.

Aloof has been contemplating whether a better approach might be to have the machines converse amongst themselves in some way.  He envisions machines getting together when loads become too challenging and deciding to spawn another machine to take some of the load on.

Let’s drop back and consider this more generally.  First, we have a unique capability emerging in hosted utility grids.  These range from systems like Amazon’s Web Services to 3Tera’s ability to create grids at their hosting partners.  It started with the grid computing movement which sought to use “spare” computers on demand, and has now become a full blown commercially available service.  Applications can order and provision a new server literally on 10 minutes notice, use it for a period of time, and then release the machine back to the pool only paying for the time they’ve used.  This differs markedly from stories such as iLike’s, who had to drive around in a truck borrowing servers everywhere they could, and then physically connect them up.  Imagine how much easier it could have been to push a button and bring on the extra servers on 10 minutes notice as they were needed.
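The rental model is worth spelling out. In rough sketch form (the GridClient below is an invented stand-in, not Amazon’s or 3Tera’s actual API, and the rate is made up):

    class GridClient:
        """Invented stand-in for a utility computing service."""

        def provision_server(self):
            # In practice this is roughly a 10-minute wait, then you have a live machine.
            return {"id": "box-1", "hourly_rate": 0.10}

        def release_server(self, server):
            # Hand the machine back to the pool; billing stops here.
            pass

    grid = GridClient()
    server = grid.provision_server()
    try:
        hours_used = 6                               # ride out the traffic spike
        cost = hours_used * server["hourly_rate"]    # pay only for the time actually used
    finally:
        grid.release_server(server)

    print(cost)   # versus owning and powering the box year-round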

Second, we have the problem of how to manage such a system.  This is Aloof’s problem.  Just because we can provision a new machine on 10 minutes notice doesn’t mean a lot of other things:

  • It doesn’t mean our application is architected to take advantage of another machine. 
  • It doesn’t mean we can reconfigure our application to take advantage in 10 minutes.
  • It doesn’t mean we have a system in place that knows when it’s time to add a machine, or take one back off.

This requires another generation of thinking beyond what’s typically been implemented.  New variable cost infrastructure has to trickle down into fixed cost architectures.  For me, this sort of problem always boils down to finding the right granularity of “object” to think about.  Is the machine the object?  Whether or not it is, our software layers must take account of machines as objects because that’s how we pay for them.

So to attack this problem, we need to understand a collection of questions (a toy sketch in code follows the list):

  1. What is to be our unit of scalability?  A machine?  A process?  A thread?  A component of some kind?  At some level, the unit has to map to a machine so we can properly allocate on a utility grid.
  2. How do we allocate activity to our scalability units?  Examples include load balancing and database partitioning.  Abstractly, we need some hashing function that selects the scalability unit to allocate work (data, compute crunching, web page serving, etc.) to.
  3. What is the mechanism to rebalance?  When a scalability unit reaches saturation by some measure, we must rebalance the system.  We change the hashing function in #2 and we have a mechanism to redistribute without losing anything while the process is happening.  We also must understand how we measure saturation or load for our particular domain.
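Here is the toy sketch referred to above, covering questions 2 and 3 with naive modulo hashing over named scaling units. Everything is invented for illustration; a real system would likely prefer consistent hashing so that adding a unit moves as few keys as possible:

    import hashlib

    class ScalingPool:
        def __init__(self, units):
            self.units = list(units)   # question 1: here a unit is just a named Lamp cluster

        def unit_for(self, key):
            # Question 2: a stable hash maps work (a user, a row, a URL) to a unit.
            digest = hashlib.md5(key.encode("utf-8")).hexdigest()
            return self.units[int(digest, 16) % len(self.units)]

        def rebalance(self, new_unit, all_keys):
            # Question 3: adding a unit changes the hash, so some keys must migrate.
            old_home = {k: self.unit_for(k) for k in all_keys}
            self.units.append(new_unit)
            return {k: (old_home[k], self.unit_for(k))
                    for k in all_keys if old_home[k] != self.unit_for(k)}

    pool = ScalingPool(["lamp-1", "lamp-2"])
    print(pool.unit_for("user:bob"))
    print(pool.rebalance("lamp-3", ["user:bob", "user:alice", "user:carol"]))   # who must move, and where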

Let’s cast this back to the world of a Pile of Lamps.  A traditional Lamp stack scaling effort is going to view each component of the stack separately.  The web piece is separate from the data piece, so we have different answers for the 3 issues on each of the 2 tiers.  Pile of Lamps changes how we factor the problem.  If I understand the concept correctly, instead of independently scaling the two tiers, we will simply add more Lamp clusters, each of which is a quasi-independent system.

This means we have to add a #4 to the first 3.  It was implicit anyway:

    4.  How do the scaling units communicate when the resources needed to finish some work are not all present within the scaling unit?

Let’s say we’re using a Pile of Lamps to create a service like Twitter.  As long as the folks I’m following are on the same scaling unit as me, life is good.  But eventually, I will follow someone on another scaling unit.  If the Pile of Lamps is clever, it makes this transparent in some way.  If it can do that, the other three issues are at least things we can go about doing behind the scenes without bothering developers to handle it in their code.  If not, we’ll have to build a layer into our application code that makes it transparent for most of the rest of the code.
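Here is a sketch of what that transparency might look like; the unit names, routing rule, and URL are all invented. Application code asks for followers and never learns whether the answer was local or a RESTful hop away:

    import json
    from urllib.request import urlopen

    LOCAL_UNIT = "lamp-1"
    LOCAL_FOLLOWERS = {"bob": ["alice", "carol"]}   # the slice of data living on this unit

    def unit_for(user):
        # Stand-in for the hashing function from the earlier sketch.
        return "lamp-1" if user < "m" else "lamp-2"

    def followers(user):
        # The transparency layer: answer locally if we can, otherwise ask the owning unit.
        if unit_for(user) == LOCAL_UNIT:
            return LOCAL_FOLLOWERS.get(user, [])
        # Question #4: the remote unit exposes the same data RESTfully (hypothetical URL).
        return json.load(urlopen(f"http://{unit_for(user)}.example.com/followers/{user}"))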

I think Aloof’s musings about whether #3 can be done as conversations between the machines will be clearer once the Pile of Lamps idea is mapped out more fully in terms of all four questions.

Posted in grid, multicore, platforms, strategy, Web 2.0 | 1 Comment »

 