SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for August, 2007

Scoble and I Are On a Similar Wavelength About Social Graph Based Searching

Posted by Bob Warfield on August 26, 2007

I couldn’t believe the serendipity.  Not long after writing my post about Searching Blogs Instead of Google to avoid Spam, I read Robert Scoble’s excellent piece about Social Graph Based Search (or here for the meat).  We’re very much on the same wavelength here!  Scoble’s videos do a great job of explaining why the blog search method works.  In essence, Page Rank (Google’s search algorithm) is just too easy for SEOs (Search Engine Optimizers) to cheat on: create a ton of links to your page, populate it with the right keywords, and you can trick Google into sending you a zillion people no matter what trash you may have put there.

Scoble uses the term “SEO Resistant Search”.  Ironically, SEO Resistance, or rather, better relevance, is the reason most people use Google.  But this whole approach is ideal for the Open Search Engine Initiative I’ve proposed already.

Good reading here, thanks Scoble: it is indeed the basis for a whole new kind of search engine!

Posted in Marketing, Open Source, Partnering, platforms, saas, Web 2.0 | Leave a Comment »

Stop Googling and Search for Blogs! (aka Web 2.0 to reduce Spam)

Posted by Bob Warfield on August 25, 2007

I’ve recently changed my search tactics for the Internet.  After combing through blogs for some time and using Google’s Blog Search or Technorati to do it, I’ve reached the conclusion that there’s often a lot less Spam in the blogosphere than there is in the more general web.  This tends to make my searches more efficient.

We’re all very familiar with how much dross is retrieved by the average Google search.  I’m doing well if more than 2 or 3 of the results on a page are actually useful, while the rest turns out to be shopping bots, sites having nothing to do with what I was interested in that happened to have the same words, and so on.  The blogosphere seems to pack a lot more useful results into a typical blog search in my experience.  In fact, I’ve tried a lot of the Alternative Search Engines, and nothing has come close to being as helpful as searching blogs instead of the Web!

I knock on wood about this, because if the world switches, the marketing folk will turn from SEO to BEO (Blog Engine Optimization!?!!) and we’ll be back in the same boat.

Think of Blog Searching as a Hyper-Amplified version of Google’s famous Page Rank algorithm.  Page Rank is the secret sauce behind why Google yields better searches.  It was invented at Stanford by Larry Page (hey, that’s why it’s called Page Rank, not because it ranks Pages!), and Sergey Brin seized on it to build an empire.  The basic insight behind Page Rank is that if someone links to a page, it is an implicit vote in favor of that page.  Ergo, having lots of links to the page means the page is a far, far better page than one that has the same keywords but no links to it.  Think of it as Web Democracy in Action, where the links are like votes.  Imagine how much loftier the rank should be if some individual has actually labored to produce content about the topic in the form of a blog?  And what if said page of blog content is also then Page Ranked?  As Donald Sutherland’s character Oddball said in Kelly’s Heroes, “Like wow man, how can we lose with so many positive waves?”
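
The links-as-votes idea can be sketched as a toy power iteration over a made-up link graph.  This is a simplified illustration of the Page Rank concept, not Google’s actual implementation; the pages and damping factor are invented for the example:

```python
# Toy Page Rank: links are votes, iterated until the ranks settle.
# The tiny link graph below is invented purely for illustration.
links = {
    "blog_a": ["hub"],
    "blog_b": ["hub"],
    "spam":   [],          # a keyword-stuffed page nobody links to
    "hub":    ["blog_a"],
}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # power iteration
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        if not outlinks:  # dangling page: spread its rank evenly
            for p in pages:
                new_rank[p] += damping * rank[page] / len(pages)
        else:
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
    rank = new_rank

# "hub" collects votes from two blogs, so it outranks the
# keyword-only "spam" page even if both contain the same keywords.
print(sorted(rank, key=rank.get, reverse=True))
```

The point of the sketch is how mechanical the voting is: nothing checks the *content* of a page, which is exactly why manufacturing links lets SEOs game it.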

Here is the other cool thing about blog searching: 

I may be the exception, but I find that the majority of my searches involve things I will be interested in for a long period of time.  I’m not just looking for some throw-away bit of information.  When you find a blog that has valuable info on one of these topics, you can throw it into your Blog Reader and get high quality content from the same source coming at you in the future without the need to revisit the search.  That’s been really helpful when you need to mainline some information right into the old brain cells and make sure you stay up to date on it.

You can’t get away from Googling the web altogether, because you may want information that is not typically found in a blog.  For example, when you want to find a corporate web site (although, here again, I’ve taken to finding a company’s blog before bothering with the corporate site when I am researching a new company) or when you are shopping.  Shopping, BTW, has gotten so painful on the net recently because there are so many middlemen who want to get between you and finding someone who is actually selling or reviewing your product.  They add no value to me, but they certainly are an impediment.

I’ve heard it said that one of the great advantages of the Web 2.0 is access to Spam-free (more like minimal-Spam) channels of communication that you know are coming from or going to individuals who are part of your Social Network and that are therefore trusted. 

Now here is the bad side of my experience with doing this.  The darned blog search engines are buggy!  Technorati’s woes in this respect are well known.  However, Google recently threw me for a loop by presenting this message:

The Google Bronx Cheer

People who’ve watched me online say I type really fast, but I had no idea I was fast enough to trigger this nuisance!  It seems to take 15 minutes or half an hour to reset, and it seems to come back pretty easily once you set it off.  I am assuming it has something to do with how frequently you search, but maybe something else triggers it.  In any event, it had me revisiting Technorati for search which I hadn’t done in a while!


Posted in Marketing, Web 2.0 | 7 Comments »

The Psychology of SaaS and Web 2.0 Persuasion (and Selling)

Posted by Bob Warfield on August 24, 2007

I recently came across Hummer Winblad VC Will Price’s excellent summary of Robert Cialdini’s book, Influence: The Psychology of Persuasion.  The basic thesis is that people rely on a relatively simple framework of cues and heuristics for decision making because we simply don’t have time to completely analyze every decision.  Understanding these decision-making heuristics is a powerful tool for persuading people.  It’s an excellent foundation for marketers, sales people, and negotiators to have at their fingertips.

The six guiding principles are:

1. Reciprocation:  If I do something for you, you are in my debt and must do something for me.

2. Commitment and Consistency:  Everyone wants to be perceived as having the integrity to deliver on their commitments, and as being consistent in their views.  We think badly of those who are inconsistent and don’t honor their commitments.

3. Social Proof:  When we don’t have time to reach a conclusion ourselves, we look at the conclusion others have reached, and we prefer looking to people we respect who are close to us in background.

4. Liking:  We are more likely to say yes to people we like.

5. Authority:  We feel compelled to follow the authority figures in our lives.

6. Scarcity:  Scarce items are perceived to be more valuable.  We are more motivated by fear of loss or fear we’ll be left out than by thought of gain.  We fear deadlines, budgets, and other artificial forms of scarcity.

Most good marketers, sales people, and negotiators will look at the list and consider that it’s a fine list but that they haven’t learned much new from it. 
My reaction on seeing the list was to try to cast it into a theory for how SaaS and Web 2.0 companies succeed.  I could feel a lot of my own thoughts on SaaS and Web 2.0 resonating with Cialdini’s 6 principles.  It shouldn’t be too surprising that they did.  After all, the heart of their challenge is to persuade people to adopt their services. 

Towards that end, here is my list of questions for you, ideas for responses, and examples for each of the 6 principles of persuasion:

Reciprocation

How do you place your customers in the position of needing to reciprocate?  First you have to give them something of value.

What have you given customers without their asking that has value to them?

This area is rich with ideas and examples.  Free trial offers and Web 2.0 services that live off advertising (often users perceive the service as a free gift) are the most common.  Here are some other possibilities:

– Gifts:  Give your best customers a gift of some kind.  One of my old employers had a special customer advocacy group whose members were like concierges for the customer’s experience with us.  It was brilliant, and the advocates made sure customers who were great references got gifts.

– Proof of Concept:  A willingness to invest your resources in a proof of concept, one the customer would see as an expensive project if they had to do it themselves, creates feelings of reciprocation.  In competitive situations, make sure you offer the POC first!

– Give to Get Negotiation:  Give up something right up front and then wait for the right moment to ask for reciprocation.

– Book:  Write a book that educates customers in a useful way about your space.  There is a long history of the success of this: Arbor and Cognos both did it in the BI world, Tom Siebel wrote a book in the early days of CRM, and there were others.  Services like LuLu make this so very easy.  Give the book as a gift and make sure it offers real content and therefore value.

– Palms Up Networking:  This is more for people than organizations.  It’s 5-time CEO Christine Comaford-Lynch’s approach to networking that got her into the White House, and a brilliant example of reciprocation in action.

– Give someone a home or a voice:  Isn’t this what Web 2.0 is all about?  MySpace and Facebook give people an online home.  Blogs give people a voice.  What can you give someone online that they will value more than what it costs you to provide it?

– Win-win:  How does your product or service help your customers make money?

In closing on this principle, the ultimate reciprocation comes after you deliver a great product or service.  I used to come out of the Callidus User Group meetings charged for months because customers were so happy with the results they’d gotten with our software that it was infectious.  Those customers were eager to help Callidus succeed further.

Commitment and Consistency

What are the principles you want your business to be absolutely known by?

What are you doing to commit your entire organization to that consistency?

You can’t be surprised that, when making fast decisions without full analysis, people prefer vendors that don’t waffle all over the place and change their minds.  It’s hard enough to make such decisions without trying to hit a moving target at the same time.

Look at some of the great brands in the world and you’ll see their commitment and consistency:

– Starbucks:  Committed to a great coffee-centered experience.  Absolutely consistent: the Starbucks experience is the same wherever I go, even in Italy, Turkey, and Greece on a recent cruise.

– Apple:  We all know Apple to be uncompromisingly committed to what they call “insanely great products.”

– Ferrari or Porsche:  Each different, each entirely committed to their view of what it takes to sit at the pinnacle of automotive design.

These brands have hugely loyal fans because of their commitment and consistency to ideals their audiences love.  These companies have taken commitment and consistency almost too far in the minds of many business people, but their formula works.  And we all know of cases where companies lost their way relative to their original social contract and commitment to customers.  It didn’t go well for them, did it?

Don’t be afraid to talk passionately and often about your values: it’s part of the act of making the commitment.

Stay pretty high level on what you are committed to.  Customer Satisfaction should be number one, followed by Shareholder Value and Value for the people in your organization who make it all happen.  In my experience, that’s the order needed to build a great business and the rest will follow.

Before I go too far down this road: don’t allow consistency to become the “hobgoblin of little minds” either.  Screamer policies exist because policies down in the weeds should always be overridden by higher policies that commit organizations to customer satisfaction.  Even if it means a little inconsistency in the minor rules, the major message is something people can always rely on.

Lastly, I read a study long ago about Six Sigma and customer satisfaction at hotels that I’ll always remember.  Six Sigma is the science of changing the process so mistakes are impossible.  You would think it is ideal for a luxury hotel, right?  One of the more famous ones tried Six Sigma for a while and had to scrap it.  They found that customers who never encountered a problem were less satisfied than customers who occasionally encountered a small problem and then got to see the hotel go way out of its way to fix it.

This is part of demonstrating commitment.  Seth Godin calls it “follow through.”  Time and again I have watched customers decide in favor of a vendor they felt would stand behind them and put their success above all else, even if the customer felt the product in question didn’t have quite all the bells and whistles they wanted.  It was the commitment and consistency that won the customers over.

Social Proof

How will you build social proof for your desired market?

Who are the key influencers that lead others to your product or service?

Social Proof can be a chicken and egg dilemma until you realize it is classic early adopter marketing.  Crowds move as herds, but there are always individuals that lead the herds.  Have you identified those individuals and given them the right cues so they can lead the herd your way?

In Malcolm Gladwell’s great book The Tipping Point, he identifies three kinds of people that are the key to the herd behavior:  Connectors, Mavens, and Salesmen.  Connectors have big social circles that can reach a large audience.  Mavens are the experts people know to turn to for advice.  Salesmen are the charismatic people who get the message out, often unconsciously.

Interestingly, these three types of people may not necessarily even be people in the Web 2.0 world.  Consider Facebook and their widget API.  If you create a widget for Facebook, you are using Facebook as a Connector (because it has a huge social network), a Maven (because it’s the authority in social networking right now), and a Salesman (because it’s “hip” to be on Facebook).  Don’t miss the opportunity to harness entire herds like this to help create your own herd.

Partners can fit into the same categories if you choose them correctly.  Invariably you will find partners acting as Mavens (who are the thought leaders in your space?), Connectors (which big VARs and SIs have virtually every Global 2000 player in their customer lists?), and Salesmen (you did properly incent your partners to act on your behalf, didn’t you?).

Of course PR is the science of how to identify these types out in the press and analyst communities and get them interested in your cause.

This also explains why companies are so eager to turn their customers into “Brand Ambassadors”, but also why it is so hard.  In an ideal world, each and every one of your customers wants to tell everyone they know how great your product is.  It’s a lofty goal that’s very hard to achieve.  It merits its very own application of the 6 principles to properly persuade your customers to become your Brand Ambassadors.

Liking

Why will people like your company and the people they deal with?

Are you and your company likable?  Are you working to improve your likability?

Why are salespeople often such likable folks?  We say it’s because it’s essential to have social skills to be good at sales, but there’s more to it than that.  It has to do with the propensity to buy from someone you like, or to agree with someone you like, in which case you’ve bought their idea.

As a voice for your company, you should endeavor to be likable by those who would buy your products.  That doesn’t mean you have to be a total suck-up to your customers and influencers.  In fact, we all know suck-ups that aren’t very well liked at all.  Perhaps it’s better to look at it as being someone who is well respected, a more professional definition of likability.  Think of it as being a person or a company that others would want to emulate out of respect for what you’ve accomplished.  Folks like Howard Schultz at Starbucks or Sam Walton always come off as likable.  One of the best examples from the computer industry is probably John Chambers at Cisco.

Companies should cultivate likable images too.  Give back to your community through charity and you harness both the principle of reciprocation and likability.  It’s also good for the soul and genuinely helps others out.  Be humble and self-deprecating.  Be generous in recognizing the contribution of others.

We all know of Tyrant Kings and Captains of Industry who do not project much likability.  Yet they often have charisma of another sort—the charisma of power.  That still counts in a lot of ways, but it isn’t something that can be wielded until much later in a company’s evolution, and there is no substitute for true likability combined with power.  Just ask Bill Clinton!

Think back to my earlier example of the Customer Advocates.  One way of looking at their job was it was all about making sure every customer had a personal friend inside the company.  You could do a lot worse than to make sure that happens for your customers and organization!

The Web has changed some of the aspects of how this works, BTW.  Likability needs to be transmitted via the written word in many cases.  Yes, podcasts and videos are higher bandwidth, but you need to think about how to get your likability out over all the web media.  The individuals charged with getting the word out have to be content creators who are good at being likable.  One of the all-time great bloggers and brand ambassadors (at one time for Microsoft) is Robert Scoble.  Tim Ferriss, commenting on his interview with Scoble, had this to say:

   One thing impresses me about Robert more than all of his credentials: he smiles more than almost anyone I know.

It’s that likability thing again!

Take a look at personal blogs from folks like Facebook’s Mark Zuckerberg.  Heck, just look at his picture.  That fella is trying to be well liked!


Authority

How can you position your company as the authority?

Successfully positioning your company as the authority in your space has got to be one of the key steps to Nirvana.  If your name comes up in every discussion of the topic, congratulations!  You have succeeded.  If it doesn’t, you have a lot of work to do, but fear not, you can apply the 6 principles to this task as well.

How to become the authority using the 6 principles:

Reciprocity:  Give away valuable content in the area you want to be an authority.  Open Source is one of the ultimate examples and it put big names like Linus Torvalds on the map (say, he looks likable in his picture on Wikipedia too!). 

Commitment and Consistency:  Be totally focused on the area you want to be recognized as an authority for. 

Social Proof:  Get others talking about you as an authority.  This is an ideal role for VAR and SI partners as well as industry analysts.  Identify those people whose own special authority is deciding who others should listen to and win their hearts and minds.  Get the word out about these others who believe in you.

Liking:  Be a likable authority, not an insufferable authority!

Authority:  You are likely an authority on something already, right?  Get it out there so people can see you as an authority on something and look for the halo effect.  Also, write.  Nothing like seeing something in print to help establish authority.  Lastly, conduct yourself as an authority.  As Christine Comaford-Lynch’s interview puts it:

    If you try adopting supreme self-confidence, even for a day, you’ll be stunned by how the world responds. It treats you as if you deserve everything you ask for.

Scarcity:  More on this in a moment, but the real authorities are scarce because they are in demand.  I’m not suggesting you adopt the common tactic of being hard to reach, keeping people waiting, and late to every meeting, but get the scarcity message across.

Scarcity

How do you create the impression of scarcity when selling your offering?  Ah, now we understand why Web 2.0 companies have infinitely long beta tests that are by invitation only.  It isn’t because they’re trying to be secretive, although it could be related to their ability to deliver their product.  More importantly, it builds the cachet of scarcity.  Go try to buy a Ferrari (even if you don’t have the cash, it’s an interesting experience).  You will likely be told you have to put down a deposit and get on a waiting list.  Same for Harley Davidson.  Are you sure these guys really can’t build their cars and motorcycles fast enough to suit demand?  Hogwash!  They’re creating scarcity because it persuades.

There are dozens of ways you can create scarcity around your products and personal time.  If you are successful with the other 5 principles, you probably won’t have to make up the scarcity–it’ll be a fact of your day to day existence that you have a tiger by the tail and you dare not let go!

Newsflash:  Seth Godin writes a great post that is relevant to reciprocation: what’s the most generous thing you could give to your best customer, best friend, or most important prospect?  Thought provoking!

Related Articles:

If you liked this post, also take a gander at my read on Web 2.0 Personality Types!


Posted in business, Marketing, Partnering, saas, Web 2.0 | 6 Comments »

Please Read AltSearchEngines

Posted by Bob Warfield on August 23, 2007

Today we’re guest authors on Charles Knight’s excellent AltSearchEngines blog.  Aside from being incredibly flattered by the opportunity, I was happy to give back to this community that Charles is building around the idea that maybe 2 or 3 giant search engines are not enough.  I’ve been reading his blog for some time and have learned many fascinating things doing so.

I hope you will enjoy it as much as I have!

Posted in saas | Leave a Comment »

Why Don’t Search Startups Share Data, Part 2

Posted by Bob Warfield on August 22, 2007

I mentioned in an earlier post that search startups ought to look into a divide and conquer approach when crawling the web.  After all, one of the biggest complaints about a lot of interesting search services is that they don’t find as much as Google does.  TechCrunch, for example, complains that Microsoft’s new Tafiti produces search results that are “not as relevant as Google or Yahoo“.  And yet, they also admit Tafiti is beautiful (as an aside, it is very cool and worth a look to see what Microsoft’s Flex killer, Silverlight, can do for a web site).  If the Alt search sites band together to do the basic crawling and crunching using Google’s MapReduce-style algorithms (possibly based on the open-sourced Hadoop that Yahoo is pushing), they could share one of the bigger costs of being in business and ameliorate the huge advantage in reach that the biggest players have over them.
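
As a sketch of what “share the crawl, compute your own index” could look like, here is a minimal single-process MapReduce-style job that builds an inverted index from crawl output.  The sample pages and function names are invented for illustration; a real Hadoop job would distribute the map, shuffle, and reduce phases across many machines:

```python
from collections import defaultdict

# Invented sample of shared crawl output: (url, page text) pairs.
crawled_pages = [
    ("example.com/a", "saas search engines share data"),
    ("example.com/b", "search startups share crawling costs"),
]

def map_phase(url, text):
    """Map: emit (word, url) pairs for an inverted index."""
    for word in text.split():
        yield word, url

def reduce_phase(word, urls):
    """Reduce: collapse each word's postings into a sorted list."""
    return word, sorted(set(urls))

# Shuffle: group intermediate pairs by key, as Hadoop does between phases.
groups = defaultdict(list)
for url, text in crawled_pages:
    for word, u in map_phase(url, text):
        groups[word].append(u)

index = dict(reduce_phase(w, us) for w, us in groups.items())
print(index["share"])  # both sample pages mention "share"
```

The appeal for Alt search engines is that the expensive part, the shared crawl feeding `crawled_pages`, is done once, while each engine supplies its own map and reduce logic to compute a differentiated index.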

ZDNet bloggers Dan Farber and Larry Dignan ask whether open-sourced Hadoop can give Yahoo the leverage it needs to close the gap with Google.  Their first words are that “Open source is always friend to the No. 2 player in a market and always the enemy of the top dog.”  I don’t think Hadoop by itself is enough, but if Yahoo were to create a collaborative search service, maybe it would be.  In fact, what if search were much more like Facebook, only more open (hey, if Scoble can do it with a hotel, I can do it with a search engine!)?  In a manner similar to my “Web Hosting Plan for World Domination“, Yahoo could undertake a plan for “Search Engine World Domination”.  Here’s how it would work:

–  Yahoo builds up the Hadoop Open Source infrastructure for Web Crawling.  Alt Search engines can tie back into that to get the raw data and avoid doing their own crawling.  Even GigaOm says “The biggest hindrance to any search start-up taking on Google (or Microsoft, Ask or Yahoo for that matter) is the high cost of infrastructure.”  Let’s share those costs and further defray them by having a big player like Yahoo help out.

–  Yahoo can also offer up the Hadoop scaffolding to do any massively parallel processing these Alt Search Engines need to compute their indices.  Think of it as being like Amazon’s EC2 and S3, but purpose-built to simplify search engines.  People are already asking Amazon for Search Engine AMIs, so there is clearly interest.

–  Now here is the Facebook piece of the puzzle:  Yahoo needs to turn this whole infrastructure play into a Social Networking play.  That means they offer Search Widgets to any Social Network that wants them, and they let you personalize your own search experience by collecting the widgets you like.  Most importantly, Yahoo creates basic widgets that reflect their current search offering, but they allow the Alt Search Engines to make widgets that package their search functionality.  Take a look at Tafiti and see how it lets you select different “views”.  Those views are widgets!

–  Yahoo gets a big new channel for its ads, and it graciously shares the revenues with the widget builders because that’s what makes the world go round.  Perhaps they even have virtual dollars that can be used to pay for the infrastructure using ad revenue, although I personally think they should give away as much infrastructure as possible to attract the Alt Search crowd to their platform.

Don Dodge, meanwhile, is wondering what the exit strategy is for the almost 1,000 startups out there trying to peddle alternative search engines.  It sure seems to me that creating this search widget social network solves a big problem for Yahoo and at the same time creates a lot of new opportunity for the exit strategies of these engines.  Suddenly, they have access to large volumes of data they couldn’t afford and a distribution channel in which to build an audience.

Open Source Swarm Competition in the Search Engine Space is Born!


Posted in amazon, business, ec2, grid, Marketing, Open Source, Partnering, software development, user interface, venture, Web 2.0 | 3 Comments »

Are You Red Shifted? (aka Do you use Utility Computing, Web 2.0, and Every Other Cool Thing???)

Posted by Bob Warfield on August 21, 2007

Sun’s CTO Greg Papadopoulos has been espousing what he calls the Red Shift theory of computing.  Aside from having its own Wikipedia entry, the Red Shift theory has something to offer to a variety of audiences.  It gives permission to believe the computing industry is entering another period of hypergrowth.  It provides commentary on Moore’s Law, and which types of problems may or may not encounter the Multicore Crisis.  It gives a reason to believe Sun can once again regain its lost glories by leading the Red Shifted contingents.  And lastly, it provides yet another way to talk about whether your organization is a hip Web 2.0 “Red Shifted” organization, or whether you’re one of those oh-so-yesterday “Blue Shifted” deals.

What then is the Red Shift theory?  For starters, red and blue shift have to do with Doppler effects on light that tell us whether stars or galaxies are moving towards us or away from us, and hence whether the universe is expanding or contracting.  Ignore all of that; it has little to do with the theory at hand, which simply uses the terminology as packaging.  Simply put, Papadopoulos postulates that demand for computing resources is segmented into a hyper-growth “Red Shifted” segment and a much slower growing “Blue Shifted” segment.  The definitions here are relative to Moore’s Law: “fast” means demand growing much faster than Moore’s Law, and “slow” means demand growing much slower.
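
The fast/slow split is easy to make concrete with a little compounding arithmetic.  Taking Moore’s Law as a doubling roughly every two years (about 41% per year), and using invented growth rates for the two segments:

```python
# Compare annual demand growth against Moore's Law capacity growth.
moore = 2 ** 0.5 - 1  # ~41%/yr: capacity doubles roughly every two years

# Hypothetical segments (growth rates invented for illustration).
segments = {
    "blue: financial transactions": 0.05,   # GDP-like growth
    "red: Web 2.0 service":         0.80,   # faster than Moore's Law
}

ratios = {}
for name, rate in segments.items():
    # Demand relative to hardware capacity after 5 years of compounding.
    ratios[name] = ((1 + rate) / (1 + moore)) ** 5
    shift = "red-shifted" if rate > moore else "blue-shifted"
    print(f"{name}: demand/capacity after 5 yrs = {ratios[name]:.2f} ({shift})")
```

The ratio is the whole story: below 1.0, hardware improvements outrun demand and your infrastructure spend shrinks; above 1.0, compounding demand swamps Moore’s Law and you need ever more machines, which is the Red Shifted buyer Sun is courting.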

Slow growth is fueled by demand that grows, well, slowly.  This demand is spurred mainly by the use of computers to manage conventional financial transactions.  In other words, this segment is what most of the Enterprise Software Industry does today.  If much of your computer usage is built around this kind of activity, you live in a Blue Shifted world.  We shouldn’t expect much to happen in this world, and if we are bored with Enterprise Software today, it’s because too much of it is doing the basic plumbing for the Blue Shifted world.  That world isn’t suddenly going to wake up and start going gangbusters again; it’s done.  In fact, consolidation, virtualization, and more power-efficient components will be the dominant activities as enterprises try to reduce costs for core applications and services.  Virtualization et al will further reduce growth as the Blue Sector figures out how to use what it has ever more efficiently.  Growth here has regressed to the mean of GDP growth, which is slow indeed by computer industry standards.  End of an era.

The Red Shifted world is the more exciting world.  The thought leader for Red Shift is Web 2.0.  Not far out of the limelight are such applications as financial market simulations, drug industry research, and computer animation.  The shift to SaaS (growing 43 percent annually, according to a recent report by RBC Capital Markets), while it involves moving Blue Zone applications, will also deliver Red Shifted growth because of the rate of conversion.  Demand for computing from these Red Shifted applications is slated to increase at a rate faster than Moore’s Law, which is voracious indeed.  To make matters even more interesting, Papadopoulos goes on to argue that companies who embrace Red Shifted applications will grow much faster than those that stick to their Blue Shift knitting.  To paraphrase Will Smith, “I gotta get me summa dat Red Shift!”

Of course the theory goes on to describe a bright future for Sun, which is well positioned to deliver scale-efficient, or as they call it, “brutally” efficient infrastructure.  Microsoft and IDC agree with the vision, so they see something bright in their futures around it too.  Sun goes on to project that at some point in the not too distant future, there will be just 5 massive data centers worldwide doing all of this business.  Wow!  Of course they haven’t heard about my web hosting plan for world domination yet, so maybe there will be 5+1, where the “+1” is a consortium of much smaller vendors delivering a shared utility computing fabric.

Personally, I like and agree with many aspects of the Red Shift Theory.  I’ve said many times during my Enterprise Career that Moore’s Law has passed up the growth in financial transactions and that this will lead to a cheapening of infrastructure cost for conventional Enterprise applications.  It’s about time too.  The centralization and skillset focus that SaaS brings to the table will bring further economies of scale to the table and make traditional computing still cheaper.  I also agree that bringing in the Web 2.0 collaborative-connectedness paradigm is another world changer, and one we are much closer to first steps on than SaaS.

There are some aspects of the theory I wonder about, however.  For example, Sun is convinced that their “brutal efficiency” mantra means Big Iron/Big Servers.  When you have a hammer, everything is a nail.  The trouble is that market experience seems to imply commodity computing is a lot cheaper than Big Iron.  Google, Yahoo, Amazon, et al are built on Lintel (Linux + Intel-compatible) machines that are cheap.  Their infrastructure lashes together thousands of these boxes.  My experience at Callidus Software with our grid-computing based Enterprise software was that only the database benefited from expensive Big Iron boxes.  Moreover, having written our app to run on its own mini-grid, we tended to minimize the DB as much as possible, to the point where only 25% of the CPUs had to be Big Iron DB-class machines.  Interestingly, that 25% often cost as much as the remaining 75%!  Because of this, I’m much more sold on utility computing infrastructures delivered on commodity Lintel stacks.  Just the sort of thing Amazon offers with EC2 and S3.
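
The 25%/75% observation falls out of simple back-of-envelope arithmetic.  With invented prices (say a DB-class box at 3x the cost of a commodity box), the 25% big-iron tier costs as much as the 75% commodity tier:

```python
# Back-of-envelope cluster cost split (all numbers invented).
total_nodes = 40
db_fraction = 0.25                 # only the DB tier needs big iron
db_nodes = int(total_nodes * db_fraction)
app_nodes = total_nodes - db_nodes

big_iron_price = 30_000            # per DB-class box (hypothetical)
commodity_price = 10_000           # per Lintel box (hypothetical)

db_cost = db_nodes * big_iron_price      # 10 boxes at 3x the price
app_cost = app_nodes * commodity_price   # 30 commodity boxes
print(db_cost, app_cost)  # the quarter of big-iron nodes matches the rest
```

Which is exactly why shrinking the DB tier, as we did at Callidus, is the lever that matters: every node moved from the big-iron column to the commodity column cuts its cost by the price multiple.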

The other interesting viewpoint I came across was a number of folks who had concluded that the practical upshot of the Red Shift theory is that the future belongs to the database, not the processor.  The argument is that if you're data intensive, you are by definition in the Red Zone, so companies like credit card processors are there.  I don't agree.  Database scaling will be an important axis for Red Shifted companies to master, but the database is an effect, not a cause, and not all data-intensive activities like credit card processing will necessarily grow at rates faster than Moore's Law.  I do think we'll see some fundamental changes in how the world works with databases, and perhaps those changes will break Oracle's hegemony at the high end, but the DB is the tail wagging the dog.  I also think the data has huge value, and we're just beginning to entertain the idea that the data may in fact be more valuable than the software, particularly in a Web 2.0 context.  However, first you have to be doing something that generates all those data volumes and delivers truly valuable data, and that gets us back to the original Red Shift argument about which kinds of apps qualify.

This is all just another side of the whole Multicore Crisis too.  After all, what do you do if your business is growing faster than Moore's Law and you just bought the biggest machine they make?  Lest you laugh, this is actually what happened to eBay during the bad old days of their service outages.  We were located on their campus in Campbell, and I used to watch the satellite crews set up to interview the eBayers about what had gone wrong.  Fundamentally, they had a monolithic architecture doing their auction search, and once they had it running on the biggest machine Sun could sell them, they were stuck.  This precipitated a total rewrite to get things to scale horizontally.  I'm sure it was a harrowing experience, but we will see it play out over and over again as the Red Shift collides with the Multicore Crisis.

It’s an exciting world we live in!



Posted in amazon, business, data center, ec2, grid, Marketing, multicore, saas, Web 2.0 | Leave a Comment »

How Does Virtualization Impact Hosting Providers? (A Secret Blueprint for Web Hosting World Domination)

Posted by Bob Warfield on August 16, 2007

I’ve written in the past about data centers growing ever larger and more complex in the era of SaaS and Web 2.0.  My friend Chris Cabrera, CEO of SaaS provider Xactly, recently commented along similar lines  when asked about the VMWare IPO. 

Now Isabel Wang who really understands the hosting world has written a great post on the impact of virtualization (in the wake of VMWare’s massive IPO) on the web hosting business.  I took away several interesting messages from Isabel’s post:

- Virtualization will be essential to the success of hosters because it lets them offer their service more economically by upping server utilization.  It's an open question whether those economies are passed to the customer or the bottom line.

- These technologies help address the performance and scalability issues that keep a lot of folks awake at night.  Amazon's Bezos and Microsoft's Ray Ozzie realize this, and that's why they're rushing full speed ahead into this market.  They've solved the problems for their own organizations and see a great opportunity to help others and make money along the way.

- The market has moved on from crude partitioning techniques to much more sophisticated and flexible approaches.  Virtualization in data centers will be layered, and will involve physical server virtualization, a utility computing fabric comprised of pools of servers across multiple facilities, application frameworks such as Amazon Web Services, and shared services such as identity management.  This complexity tells us the virtualization wars are just beginning, and VMWare isn't even close to locking it all up, BTW.

- This can all be a little threatening to the established hosting vendors.  Much of their expertise is tied up in building racks of servers, keeping them cool, and hot swapping the things that break.  The new generation requires them to develop sophisticated software infrastructure, which is not something they've been asked to do in the past.  It may wind up being something they don't have the expertise to do either.  These are definitely the ingredients of paradigm shifts and disruptive technologies!

We’re talking about nothing less than utility computing here, folks.  It’s a radical step-up in the value hosting can offer, and it fits what customers really want to achieve.  Hosting customers want infinite variability, fine granularity of offering, and real-time load tracking without downtime like the big crash in San Fran that recently took out a bunch of Web 2.0 companies.  They want help creating this flexibility in their own applications.  They want billing that is cost effective and not monolithic.  Billing that lets them buy (sorry to use this here) On-demand.  After all, their own businesses are selling On-demand and they want to match expenses to revenue as closely as possible to create the hosting equivalent of just in time inventory. Call it just in time scaling or just in time MIPS.  Most of all, they want to focus their energies on their distinctive competencies and let the hoster solve these hard problems painlessly on their behalf.

When I read what folks like Amazon and the Microsofties have to say about it, I’m reminded of the Intel speeches of yore  that talked about how chip fabs would become so expensive to build that only a very few companies would have the luxury of owning them and Intel would be one of those companies.  Google, for example, spends $600 million on each data center.  Big companies love to use big infrastructure costs to create the walls around their very own gardens!  Why should the hosting world be any different?

The trouble is, the big guys also have a point.  To paraphrase a particular blog title, "Data centers are a pain in the SaaS".  They are a pain in the Web 2.0 too.  Or, as Amazon Chief Technology Officer Werner Vogels said, "Building data centers requires technologists and engineering staff to spend 70% of their efforts on undifferentiated heavy lifting."

Does this mean the big guys like Amazon and Microsoft (and don’t forget others like Sun Grid) will use software layers atop their massive data centers to massively centralize and monopolize data centers?  Here’s where it gets interesting, and I see winning strategies for both the largest and smaller players.

First, the big players worry about how to beat each other, not the little guys.  Amazon knows Microsoft will come gunning for them, because they must.  Can Amazon really out innovate Microsoft at software?  Maybe.  The world needs an alternative to Microsoft anyway.  But the answer when competing against players like Microsoft and IBM has historically been to play the “Open System vs. Monolithic Proprietary System” card.  It has worked time and time again, even allowing the open system to beat better products (sorry Sun, the Apollo was better way back when!).

How does Amazon do this to win the massive data center wars?  It’s straightforward:  they place key components of Amazon Web Services into the Open Source community while keeping critical gate keeping functions closed and under their control.  This lets them “franchise” out AWS to other data centers.  If you are a web hoster and you can offer to resell capacity that is accessible with Amazon’s API’s, wouldn’t that be an attractive way to quit worrying so much about it?  Wouldn’t it make the Amazon API dramatically more attractive if you knew there would be other players supporting it? 

Amazon, meanwhile, takes a smaller piece of a bigger pie.  They charge their franchisees for the key pieces they hold onto to make the whole thing work.  Perhaps they keep the piece needed to provision a server and get back an IP, and charge a small tax to bring a new server for EC2 or S3 online in another data center.  How about doing the load balancing and failover bits?  Wouldn't you like it if you could buy capacity accessed through a common API that can fail over to any participating data center in the world?  How about being able to move your SaaS data center to take advantage of better pricing simply by reprovisioning any or all of the machines in your private cloud?  How about being able to tell your customers your SaaS or Web 2.0 offering is that much safer for them to choose because it is data center agnostic?
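To make the franchise idea concrete, here is a purely hypothetical sketch (none of these classes or calls exist in AWS; every name is invented): the gatekeeping piece the franchiser retains is the registry that provisions a server and hands back an IP, while any participating data center supplies the actual box:

```python
# Hypothetical sketch only -- no such AWS franchise API exists.
# The franchiser keeps the closed registry/provisioning gatekeeper;
# independent data centers join the pool and supply capacity.

class FranchiseRegistry:
    """The closed, monetized piece: maps requests to any participating DC."""
    def __init__(self):
        self.datacenters = {}          # dc name -> list of free IPs

    def join(self, dc_name, free_ips):
        self.datacenters[dc_name] = list(free_ips)

    def provision(self):
        # Pick any data center with capacity; the caller never cares which,
        # which is exactly what makes the offering data center agnostic.
        for dc, ips in self.datacenters.items():
            if ips:
                return dc, ips.pop()   # the per-server "small tax" point
        raise RuntimeError("no capacity in any participating data center")

registry = FranchiseRegistry()
registry.join("amazon-us-east", ["10.0.0.1"])
registry.join("hoster-xyz", ["10.1.0.1", "10.1.0.2"])
dc, ip = registry.provision()
```

Failover falls out of the same shape: if one data center's pool empties or goes dark, provisioning simply lands in another, behind the same API.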

BTW, any of the big players could opt to play this trump card.  It just means getting out of the “I want to own the whole thing” game of chicken and taking that smaller piece of a bigger pie.  Would you buy infrastructure from Google or Yahoo if they offered such a deal?  Why not?  Whoever opens their system gains a big advantage over those who keep theirs monolithic.  It answers many of the objections raised in an O’Reilly post about what to do if Amazon decides to get out of the business or has a hiccup.

Second, doesn’t that still mean the smaller players of less than Amazon/Google/Microsoft stature are out in the cold?  Not yet.  Not if they act quickly, before the software layers needed to get to first base become too deep and there are too many who have adopted those layers.  What the smaller players need to do is immediately launch a collaborative Open Source project to develop Amazon-compatible API’s that anyone can deploy.  Open Source trumps Open System which trumps Closed Monoliths.  It leverages a larger community to act in their own enlightened self-interest to solve a problem no single one of these players can probably afford to solve on their own.  Moreover, this is the kind of problem the Uber Geeks love to work on, so you’ll get some volunteers.

Can it be done?  I haven't looked at it in great detail, but the API's look simple enough today that I will argue it is within the scope of a relatively near-term Open Source initiative.  This is especially true if a small consortium got together and started pushing.  One comment from that same O'Reilly blog post said, "From an engineering standpoint, there's not much magic involved in EC2.  Will you suffer for a while without the nifty management interface? Sure. Could you build your own using Ruby or PHP in a few days? Yep."  I don't know if it's that easy, but it sure sounds doable.  By the way, the "nifty management interface" is another gatekeeper Amazon might hold on to and monetize.

But wait, won’t Amazon sue?  Perhaps.  Perhaps it tips their hands to Open Source it themselves.  Legal protection of API’s is hard.  The players could start from a different API and simple build a connector that lets their different API also work seamlessly with Amazon and arrive at the same endpoint—developers who write to that API can use Amazon or any other provider that supports the API.

You only need three services to get going:  EC2, S3, and a service Amazon should have provided that I will call the "Elastic Data Cloud".  It offers mySQL without the pain of losing your data if the EC2 instance goes down.  By the way, this is also something a company bent on dominating virtualization or data center infrastructure could undertake, something a hardware vendor could build and sell to favor their hardware, and something some other player could go after.  The mySQL service, for example, would make sense for mySQL themselves to build.  One can envision similar services and their associated machine images becoming a requirement at some point if you want to sell to SaaS and Web companies.  Big Enterprise might undertake to use this set of API's and infrastructure to remainder unused capacity in their data centers (unlikely, they're skittish), help them manage their data centers (yep, they need provisioning solutions), use outsourcers to get apps distributed and hardened for disaster recovery, and the like.

So there you have it, hosting providers, virtualizers, and software vendors:  a blueprint for world domination.  I hope you go for it. I’m building stuff I’d like to host on such a platform, and I’m sure others are too!

Note that the game is already afoot with Citrix having bought XenSource.  Why does this put things in play?  Because Amazon EC2 is built around Xen.  Hmmmm…

Some late breaking news: 

There’s been a lot of blogging lately over whether Yahoo’s support of Open Sourced Hadoop will help them close the gap against Google.  As ZDNet points out, “Open source is always friend to the No. 2 player in a market and always the enemy of the top dog.”  That’s basically my point on the Secret Blueprint for Web Hosting World Domination.


Posted in amazon, business, data center, ec2, grid, multicore, Open Source, Partnering, saas, venture, Web 2.0 | 8 Comments »

Does Data Volume Trump the Multicore Crisis?

Posted by Bob Warfield on August 14, 2007

Bill de hÓra says Data Trumps Processing.  His blog post elaborates that he sees the scaling of the DBMS as a far bigger problem for most software developers than the multicore crisis (how to effectively utilize the many cores that are appearing on chips).  I saw the post linked to on Making It Stick, which talks a lot about multicore.  Since I’m intrigued by all things multicore, I took a look, but came away in fairly strong disagreement with the proposition.  Let me walk you through the two reasons I am not so concerned about data volume.

First, we have to ask where the data volumes are coming from and at what rate they're growing.  In the Enterprise world, data volumes are largely a function of transactions.  That's not the only source, but it is by far the biggest.  Here's a news flash:  transaction volumes do not grow at anything like the Moore's Law rate of doubling every 18 months, except for relatively small companies that start with lower volumes and grow until they've nearly saturated their markets.  My old company Callidus Software worked with data volumes at the extreme high end of the Enterprise scale when calculating sales commissions.  We paid the sales commissions for companies like Allstate, Sprint Nextel, and Washington Mutual Bank.  To pay those commissions we had to analyze every transaction coming in.  Terabytes were routine stuff.  To make matters worse, we kept both a highly normalized transaction processing store as well as a denormalized data mart for reporting and analysis.
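To put rough numbers on that gap (the transaction growth rate is an assumption for illustration): doubling every 18 months compounds to about 59% a year, so even a business growing transactions at a brisk 20% a year falls further behind the hardware curve every year:

```python
# Illustrative growth rates only; the ~59% figure follows from doubling
# every 18 months: 2 ** (12/18) - 1 ≈ 0.587 per year.
moore_annual = 2 ** (12 / 18)   # ~1.587x compute per year (Moore's Law)
txn_annual = 1.20               # assume 20% annual transaction growth

compute, transactions = 1.0, 1.0
for year in range(5):
    compute *= moore_annual
    transactions *= txn_annual

# After 5 years, compute has grown ~10x while transactions grew ~2.5x.
print(round(compute, 1), round(transactions, 2))
```

Unless your data volumes compound at Moore's Law rates, the hardware keeps lapping the workload, which is the heart of the argument.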

There are potential sources of data that grow at Moore's Law rates.  We can talk about data acquisition from sensors, or even about analyzing the web itself, but for the most part, we start with a volume and deal with it.  Also, let's leave aside data volumes that are high in bytes, but not so high in terms of rows and columns.  Media sharing generates a lot of bytes, but the complexity and scalability issues come about when you have equivalent terabytes of structured data without the padding.

The second reason I’m less concerned is that the tools are set up to deal with this problem.  Oracle scales beautifully to very large numbers of cores.  That’s always been one of their competitive weapons, and when I used to be a VP there, we managed benchmarks against the competition by simply lining up more cores than they could handle (SQL Server was the usual victim) and then crushing the numbers by using all the cores.  To make matters even more one sided, the data world already uses an inherently parallel language: SQL.  They’re not trying to hack Java or C++ to add parallelism after the fact.  This frees the DBMS vendors to do all sorts of dirty work under the covers without threatening the legacy code base.  And the vendors have done a good job, or at least Oracle has and DB2 is also quite good in this respect.  They will continue to get better.

Now where might the data side be a problem?  First, Oracle and DB2 are not necessarily answers people like to hear.  They are relatively expensive.  Even SQL Server is not cheap and isn't very "Web" fashionable.  What the world wants is for their LAMP-based implementation to scale out to vast proportions.  In other words, mySQL and the other Open Source players.  Here we may have a problem.  Those products are great, but they're not necessarily set up to scale in this way, at least not to hundreds of cpus.  That leaves programmers holding the bag and having to code around these issues on those platforms.  If someone wanted to perform a pious task, it would be to address these scalability issues for these platforms.  After all, as I've mentioned, it can be tackled deep down in the bowels of the code without messing about too much with the code that accesses the DBMS.

A second issue is DBMS skills.  They're not hard, especially not as hard as writing fine grained parallel code in a conventional language, but they are relatively hard to come by.  I guess it isn't a sexy enough field for a lot of great programmers.  Bill de hÓra mentions in his post that "Every web 2.0 scaling war story I've heard indicates RDBMS access becomes the fundamental design challenge."  I hope you're not too surprised, but I agree with Bill, and would add a corollary that every Enterprise scaling war story I've heard indicates RDBMS access is the fundamental design challenge.  Where Bill and I might differ is that in the majority of cases I've seen, it has been due to the database developers doing something silly that was not that hard to fix.  Sometimes even the smart guys make a mistake by not understanding the usage patterns of the system very well.  DBMS programming turns out to be an iterative process.  If you rush your Facebook applet out without much testing (load tests are expensive), run it against mySQL, and it hits 1 million users a week later, you're not going to get much sleep for a while.

Then there is a category where the DBMS forces you closer to multicore.  Bill alludes to this too when he comments that "things like joins, 3nf, triggers, and integrity constraints have to go – in other words, key features of RDBMSes, the very reasons you'd decide to use one, get in the way. The RDBMS is reduced to an indexed filesystem."  These are all areas where folks who are not DBMS gods fear to tread, especially when the performance stakes are high.  Eventually we come to grips with the idea that the DBMS is solving a communication problem with a member of the computing ecosystem that is as remote in round trip times as the Age of the Dinosaurs on Earth.  I'm referring to the disk drive, of course.  The cpus are shielded behind two layers of cache, a bunch of RAM, disk caches in the OS or app, more caches on the disk controller, RAID striping, and who knows what all else.  It ain't enough!  So when the going gets tough, you use the DBMS just exactly the way Bill says, as an indexed file system.  Guess what?  Building your own joins across multiple cpus, implementing integrity constraints when the data is fed in or modified, and so on are all easier than writing fine grained parallel code in Java.  Most systems will find this has to be done for only a relatively small fraction of their overall schema.
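For the "indexed file system" style Bill describes, rolling your own join really is mundane work compared to fine grained threading.  A minimal in-application hash join (the table and column names are invented for the example):

```python
# Minimal application-side hash join -- the kind of "do the DBMS's job
# yourself" code you write once joins move out of the database.
def hash_join(left_rows, right_rows, key):
    """Join two lists of dicts on a shared key, like an SQL inner join."""
    index = {}                              # build phase: index the right side
    for row in right_rows:
        index.setdefault(row[key], []).append(row)
    joined = []                             # probe phase: scan the left side
    for row in left_rows:
        for match in index.get(row[key], []):
            joined.append({**row, **match})
    return joined

orders = [{"cust_id": 1, "total": 100}, {"cust_id": 2, "total": 50}]
customers = [{"cust_id": 1, "name": "Acme"}]
result = hash_join(orders, customers, "cust_id")
# Only the order with a matching customer survives, as in an inner join.
```

The probe phase partitions trivially across cpus by splitting `left_rows`, which is exactly the kind of coarse-grained parallelism that stays manageable.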

Besides which, where else can I get such a great indexed file system?


Posted in business, grid, multicore, software development | Leave a Comment »

What Shape is Your Mission Statement? Tag Clouds for Marketing and Meaning…

Posted by Bob Warfield on August 13, 2007

Recently I needed to set up some meetings with folks to introduce them to SmoothSpan.  These are folks who won’t necessarily drop everything and come running, so I wanted to do something a bit unusual to tease them and get their interest flowing.  I hit on the idea of sending them a Tag Cloud that describes SmoothSpan’s business.  Sorry, I can’t share the cloud with you yet, we’re still very much in stealth, but I got to thinking about the idea and the reaction people had to it and decided it might just be a helpful tool for various marketing purposes.

If you’re wondering what a Tag Cloud is, there’s one over to the left that shows the relative importance of various tags to the different posts in this blog.  As I write this, the Tag Cloud says we’re heavy on SaaS, Partnering, and business, but there are a lot of other interesting tidbits there as well.  I started with this blog’s Tag Cloud (lest you think the posts here are random!) and augmented it by hand to represent all of the different thoughts I wanted people to have as they learn about SmoothSpan.

Tag Clouds can be created in a lot of ways.  All they need is a list of terms and a weight associated with the term.  Examples include: frequency of tag use in articles, frequency of word or term use in a body of text, popularity of search terms, articles visited, or the like.

Think about uses for tag clouds in your own business.  Here are some ideas:

1. To introduce folks to your company in a hip Web 2.0 sort of way.

2. Have you created the Tag Cloud that describes your business or product?  Do your employees, investors, and customers agree that this is what you do?  If not, you may need to work harder on your messaging.

3. Put one on the back of your business cards.  You weren't using the back for anything, anyway, were you?

4. Put one at the bottom of your emails.

5. Use them to check whether your blog, company web site, or PR is on target with the messaging you hope for.

6. Use them to test market concepts without narrowing things down to too much detail.  I'm envisioning some sort of online survey where folks see different versions of a Tag Cloud that they rate.

7. Use it as a more abstract and less contrived mission statement.  Done right, it lets the reader decide which part of your proposition most interests them.  This latter process, BTW, is an essential part of any successful sales cycle: you have to figure out which problem the customer could solve with your product.

8. Consider one as a navigation tool for your corporate web site.

9. How about using a Tag Cloud to show your skills, career, and interests at a glance?

When we come out of stealth, I would expect our Blog’s Tag Cloud to rapidly converge until it matches the corporate mission Tag Cloud I just created.  Meanwhile, if you want to make your own Tag Cloud, it’s pretty easy to do with Excel and Microsoft Word.  I’ll walk you through it.

First, list the terms or tags you want to use in a column.  Any order will do.  Next, assign a weight to each term, and sort the terms into ascending order from lowest to highest weight.  Now, decide how many buckets you want to divide your terms into.  Each bucket gets successively bolder formatting.  I left the first bucket in the default font, bolded the second, bumped up the type size on the third, bolded and bumped the type size on the fourth, and so on.  You can also use colors to either emphasize more, or to create a second dimension.  I used 4 colors in my mission blog:

Red:  SaaS/Enterprise/Business

Blue:  Web 2.0/Social Networking

Green:  Technology

Yellow-Orange:  User Experience/Simplicity/Design Sense/Visuals
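The Excel recipe above boils down to a few lines of code.  A sketch (the terms and weights are invented for illustration) that sorts terms by weight and assigns each to one of four formatting buckets, boldest last:

```python
# Same recipe as the Excel walkthrough: sort terms by weight, split them
# into N buckets, and give each bucket successively bolder formatting.
def bucket_terms(weights, n_buckets=4):
    """Map each term to a bucket 0..n_buckets-1 (higher = bolder)."""
    ranked = sorted(weights, key=weights.get)     # ascending weight
    per_bucket = -(-len(ranked) // n_buckets)     # ceiling division
    return {term: i // per_bucket for i, term in enumerate(ranked)}

weights = {"saas": 9, "web2.0": 7, "multicore": 4, "grid": 3,
           "marketing": 2, "partnering": 6, "amazon": 5, "ec2": 1}
buckets = bucket_terms(weights)
# "saas" carries the most weight, so it lands in the boldest bucket.
```

From there, rendering is just a matter of mapping each bucket number to a font size, weight, or color.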

There are tools out there on the Internet that will also create tag clouds by analyzing text.  I haven't evaluated any of them, so I won't recommend one, but perhaps someone will submit a comment to this post with some ideas along those lines.

Have fun, Tag Clouds are a useful tool for getting across Marketing and Meaning!


Posted in business, Marketing, user interface, Web 2.0 | 5 Comments »

Software for Retailers: Interesting New SaaS Niche?

Posted by Bob Warfield on August 10, 2007

Isabel Wang mentioned something in passing that got me to thinking.  She wondered if there was software that automatically optimizes the inventory at a retailer.  The thought was sparked by a conversation about DayJet, a jet airplane taxi service whose main claim to fame is an extremely sophisticated plane scheduling algorithm.  There was also the mention of Amazon's auto-recommendation technology accounting for 35% of their sales.

Both of these technologies are here today and have been for some time, though they are often closely guarded secrets and competitive weapons.

I worked with Louis Borders once upon a time and can tell you that Borders Books got its start using just such a piece of software.  Books are extremely expensive to ship, so you want to make sure that whatever you get will sell.  This is compounded by a superstore format with even more titles on display.  The Borders software would analyze everything from the shape of a best seller's curve over time to regional interests to determine which books to send where.  This is how Louis Borders (later of Webvan) got into the tech world from having been a bookseller.

My last startup, iMiner/PriceRadar performed the Amazon auto-recommendation trick for eBay.  They wound up buying the software from us, but we never could convince them to use it for that purpose.  I wish I’d had the statistic on Amazon at the time! 

Walmart and the other mega-stores have similar technology by this point.  What I haven't seen is such technology packaged in a SaaS form factor and offered to small and medium businesses. 

Could retail optimization for small and medium stores that don’t have proprietary solutions be the next killer SaaS business plan?

Better yet, someone needs to create the "Wholesale Amazon" that will offer to do sophisticated inventory planning and other big-retailer software services for free if only you buy your merchandise from them.


Posted in amazon, business, saas, venture | 4 Comments »
