SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for August, 2007

Domain Specific Social Networking, Anyone?

Posted by Bob Warfield on August 31, 2007

While we’re slicing and dicing behavioral aspects of Social Networking, here is another dimension to contemplate:  Domain Specific Social Networking.  The idea comes about as a derivative of Domain Specific Languages.  As Wikipedia puts it, a Domain Specific Language is a programming language created for a specific task (i.e. its “domain”).  I think a nicer name is “problem-oriented language”, but I’m not in charge.  For example, GraphViz’s odd little DOT language is used exclusively for describing a particular kind of diagram.
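
To make that concrete, here’s a tiny sketch: a few lines of Python emit a diagram description in GraphViz’s DOT language (the names and file names are my inventions for illustration).  Notice the DSL does all the domain work; the general-purpose language is just the chauffeur:

    # The interesting "program" here is written in DOT, a language whose
    # only domain is describing graphs.  Python merely writes it to disk.
    dot_source = """
    digraph referrals {
        Me   -> John [label="worked together"];
        John -> Jane [label="great web designer"];
    }
    """

    with open("referrals.dot", "w") as f:
        f.write(dot_source)

    # Rendering assumes the GraphViz tools are installed:
    #   dot -Tpng referrals.dot -o referrals.png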

Let me give another related example.  We see quite a few Domain Specific Search Engines.  Spock is for searching for people.  CureHunter is for searching medical literature.  Amazon is for searching for things to buy.  In each case, we can see how the tool does a better job than more generic tools because it incorporates features and knowledge specific to the domain.  In effect, it makes the problem easier by focusing on a subset.

Back to Domain Specific Social Networking.  I’ll argue that LinkedIn is a DSSN (sorry, got tired of all that typing), while Facebook and MySpace are just Social Networks.  Why?  Because LinkedIn is pretty well tuned to standard business networking activities.  Where is John working these days?  Who do I know at the customer who might help me out selling?  We need to hire a great web designer, who can we find by referral?

What’s the point?  There are a couple, and it really depends on what you are trying to accomplish.  Once again, we have a trade-off between “all things to all people” and “exactly the right product for a particular purpose”.  Focus your thinking around what you’re trying to accomplish and weigh that trade-off.  If you’re creating (or harnessing someone else’s) social network for marketing purposes to “get the word out”, you probably want something pretty general.  OTOH, if you have some specific purpose, perhaps enabling collective decision making, you might want to consider some sort of DSSN that is better suited to your task.

Before leaving this topic, let’s also consider another trick that can help blend general Social Networks with the DSSN concept:  Widgets.  Specialized Widgets are a great way to get a quick and dirty DSSN operating just long enough to solve a particular problem.  A great example of this is Market Research.  An online survey injected into an existing Social Network can get you much-needed decision making input.  What about calls to action?  How about revamping the age-old idea of a contest by supercharging it with some Web 2.0 juice?  We can inject a contest that involves giving us something we want.  Perhaps it’s ideas for new products or some such.  Perhaps it’s getting people to try your product (take the Pepsi Challenge).  Perhaps you want a guest speaker on your blog, or the most interesting customer success story.  There are a million ways to use contests, surveys, voting, and all the rest to get this done.

How about DSSN’s and Widgets designed to help boost your partner’s success or give your customer some special advantage or gift?  Maybe the Widget is the gift, and it also helps you build a DSSN around the recipients.

We’ll see a lot more on the Widget front as people work out how to employ them to create temporary DSSN’s around some initiative they want to drive.

Related Articles:

Social Media Today:  A Social Network for people who want to talk about Social Networks.

Posted in Marketing, saas, Web 2.0 | Leave a Comment »

With SaaS Are You 100% Helpless Or Much Better Off?

Posted by Bob Warfield on August 31, 2007

I can’t help but comment on some of the doings out there by John Dvorak and Matt Asay.  Troublemaker that he is (heck, I love John, he’s been stirring the pot ever since I can remember), Dvorak kicked the whole thing off with a rant about how maybe moving everyone’s software into the cloud is really not such a good idea.  It makes us more vulnerable, and John tickles our sense of paranoia into a throbbing feeling of vague unease:

All this proves is that these Web-based applications cannot be trusted.

Matt makes the source of the paranoia seemingly much more concrete:

In a SaaS world, even more than in the traditional proprietary world, you are 100% helpless if something goes wrong. Not only do you not have the code (as you would if you had a on-premise desktop/server software), but you also don’t have the IT staff to be able to go in to troubleshoot. You’re completely reliant on the vendor.

I say seemingly, because both these guys are dead wrong.  There are some folks like Amy Wohl, Fryer’s Blog, and others chiming in with their two cents, but I wanted to give my own spin.

It may seem that SaaS leaves you helpless (I’m not even sure I agree with that), but the reality is that you’re actually better off.  Let’s pick apart some of this stuff and get a real perspective on what’s going on.

Let’s just start with the proposition about the code.  Who in the Enterprise has the code for their Oracle DB servers?  What about the code for SAP?  No takers?  And what happens when there is a big time problem with your On-premise ERP application?  Is your IT staff fixing it?  Or are you running to your ERP vendor to get them to do it?  Haven’t we reached a point where IT is writing very little code, and isn’t that a good thing because the code they do write is hopefully something to give their business a unique advantage?  And what about the issue that SaaS is largely a phenomenon of Small and Medium Businesses?  Aren’t these organizations even less likely to have IT staff writing code on apps that can be bought?

My experience building and selling mission critical enterprise software leads me to a much different conclusion than what the armchair quarterbacks are touting.  If your mission critical software goes down, you are not fooling around in IT with source code, you are on the line to Tech Support, and if it’s really bad, you are on the line to the Executives or CEO of your vendor and whatever VAR installed the silly thing demanding immediate action. 

Let’s walk through an unfolding IT crisis and see what happens with typical On-premise vs SaaS Enterprise Software.  I think you’ll be surprised if you haven’t thought about this or lived through the exercise yourself. 

I have to make a point in favor of SaaS right up front as we walk through the chronology.  If the SaaS vendor and their offering are up to snuff, they know you have a problem almost as soon as you do, and potentially before.  In fact, SaaS vendors are much better positioned than their On-premise cousins to have taken care of the problem before you ever have a chance to see it.  Huh?  Isn’t this just testing your code properly?  Why does a SaaS vendor have an advantage?  Testing is important, but SaaS vendors have another huge advantage over On-premise: they can much more easily fix problems before you encounter them.

I did some benchmarking among SaaS and Enterprise vendors by going out and talking to a number of my peers.  One of the questions I asked was quite illuminating: 

How many of the problems reported to your Tech Support organization had already been fixed in some release of your software?

I tended to get two strongly polarized responses.  The vendor either had no idea, or more interestingly, they concluded that well over 50% of problems were already fixed in some release of the software.  I can tell you that my experience was definitely in the “well over 50%” camp, hence I wanted to see if others were seeing the same.  Since SaaS vendors are upgrading you constantly and transparently to the latest versions, half of all Tech Support incidents are being nipped in the bud before the customer even sees them.  Imagine what it does for customer satisfaction when half your Tech Support incidents are eliminated.  Bugs are going to slip through for any software, but with SaaS, we can fix the bug for everyone as soon as even one customer (or our internal testers) finds it.

While on the subject of testing, consider how much easier it is for the SaaS vendors to test because everyone runs the same version of the software.  Testers don’t have to split their time and energy among multiple platforms.  They don’t have to waste time certifying hot fixes that will only matter to one customer who stubbornly refuses to upgrade.  Worries about the interactions between patches and hot fixes are a thing of the past for a SaaS vendor. 

Continuing with the process, you’ve now gotten your ISV’s attention and it’s time to diagnose the problem.  Diagnosis is another area where the SaaS vendor has a huge advantage.  Before diagnosis can proceed, the ISV has to narrow the search down to one of three categories:

–  It’s a bug or product problem. 

–  It’s pilot error (the FAA is right, this is usually the reason).  But which pilot?  Was the software improperly customized or installed?  Are users asking the software to do something it just doesn’t do?  Is the documentation, help, or UI confusing customers (that’s only partially pilot error)?

–  It’s an environmental problem.  I’ve had customer DBAs delete all the indexes on my app’s DB, and then someone calls and says, “The app has crashed.”  Well no, it didn’t crash, it just ran so slowly you wished it had.  Who deleted the bleepin’ indexes anyway?  In another case, a customer had a faulty router between 40 CPUs of transaction processing and the DB.  Guess what that does for throughput, and why weren’t they on the same subnet anyway?

The customer (and the VARs, unfortunately) will always start out assuming there is a bug.  The ISV starts out assuming the problem is first pilot error, second an environmental problem, and only after eliminating the first two do they move on to the assumption there is a bug.  Diagnosis almost always takes longer than coming up with a fix, often days longer while your organization is in agony over the problem. 

Now let’s consider how the two vendor types (On-premise and SaaS) triage through the three diagnosis categories.  The On-premise ISV will start out walking the customer through what’s happening over the phone, hoping to identify a misconception or user error right there.  We’ve all sat through these painful sessions where the Tech Support guy just doesn’t seem to get it: they can’t see your problem and it’s painful to explain over a phone.  Now let’s consider the SaaS vendor.  The software runs inside their data center, not yours.  That gives them much more direct visibility into what you are doing with the application.  Clever vendors will set it up so they can see exactly what’s happening on your screen step-by-step.  Advantage SaaS.

If the phone session fails, the next step is to check the bug database and see if anyone has reported anything similar.  But wait, the SaaS vendor can eliminate most of this because bugs get fixed for all customers as soon as they’re reported.  Meanwhile, the On-premise guys are going to tell you to install this new version or that patch.  Never mind that this can be a huge undertaking for Enterprise Software.  I’ve seen cases where just upgrading to a new app server version that’s known to fix the problem can’t be done because it would interfere with other 3rd party software running on the same server.  Drat!  We’ve also all been through doing the upgrade or reinstall on something and, despite Tech’s assurances, it made no improvement whatsoever; it just wasted our time.

Are you beginning to see how this works?  No?  Let’s keep going.  The phone talk-through didn’t help.  New versions and patches didn’t help.  Now we have to try to duplicate your problem on our in-house hardware as a final check against environmental issues.  The ease with which some DBA or person in the IT shop can mess up the carefully nurtured environment for a piece of software is frightening.  Database servers are filled with parameters, and DBAs like nothing better than fooling with them to get a little more performance.  Worse, some enterprise software just has to be constantly tweaked to keep it happy.  Of course with SaaS, your software is already running on our hardware.  The people who wrote the software have dictated the ideal environment.  You don’t have to wonder if you are the largest customer running on a particular stack that has hardly been tested.  Every customer runs the same software version on the optimal reference stack, and that environment is tended by experts.  Lastly, you are most likely part of a multi-tenant environment, meaning that if you are the only one with a problem, that’s another immediate tip-off that it isn’t environmental.  SaaS, move ahead three squares; On-premise, do not pass “Go”, do not collect $200.

Now we’re at the last, most finicky, and most contentious stage.  We know it isn’t environmental.  We know it isn’t a problem with a prior release.  We know it isn’t user error because we can see exactly what you are doing.  It has taken an On-premise vendor a huge effort and probably quite a bit of time to establish all this.  The SaaS vendor knows within the first phone call if they’re set up right, and already has a huge advantage.  Even better, they may have eliminated the problem before you encountered it.  But assuming that’s all failed, we now have to fix a bug.   

Even here the SaaS advantage is huge.  Customers of On-premise ISVs frequently want the developers to fly out to their site to fix a problem.  It makes them feel better, because after the painful diagnosis process, they’ve gotten the message up close and personal about how hard remote diagnosis is.  Guess what?  With SaaS, the best and the brightest are already at your data center!  When you buy SaaS, no matter who you are, you get the red carpet treatment that is usually reserved for the best and biggest customers.  And the developers are much happier and more productive.  You can never fly everyone to the customer, but with SaaS all the developers are together, and anyone who needs to help can easily jump in or give feedback.

Let’s consider a last issue in terms of preventative maintenance.  If you’ve been in the Enterprise Software game as I have, you will know that not all customers and SI partners follow your Best Practices recommendations.  Sometimes they can’t, sometimes they won’t, sometimes they just didn’t know, but always they should.  This can range from how the software is installed to the infrastructure supporting it to day-to-day operations.  With SaaS, the vendor can ensure that the latest Best Practices are always used with their software.  Six Sigma devotees will recognize the enormous advantage SaaS has with respect to repeatability.  On-premise vendors will know firsthand the tremendous variability of customer experience with the same software because of all the factors one has to get right before it all comes together.  With SaaS, the vendor is in control of many more of those factors, and this inevitably leads to better Customer Satisfaction.

As to the “helpless” slant, the real power to help one’s self was never a matter of laying hands on software or hardware.  It has always been the ability to bring pressure on the vendor.  Once again, advantage SaaS.  The On-premise ISV cashed your software license check a long time ago.  At this stage they are gambling only with maintenance and their reputation.  The stakes are higher with SaaS.  They get their payments monthly, not all up front.  A monthly SaaS payment is bigger than a maintenance payment, and by the way, most of the problems with these systems strike early on, so the SaaS vendor has a lot at risk from the beginning.  As Chris Cabrera at SaaS vendor Xactly is fond of saying, “We have to earn your business every day.”

I hope all of this makes it clear how much better off the customer is with SaaS.  It’s yet another testimony to what the final “S” for Service in the acronym really means (if you’ll pardon the double meaning): 

SaaS simply works better.

Related Articles:

Things that SaaS Customers No Longer Have to Worry About

SaaS Resolves the Software Prisoner’s Dilemma


Posted in business, saas, strategy | 9 Comments »

Marc Andreessen on Hiring a Professional CEO

Posted by Bob Warfield on August 30, 2007

I loved Part 9 of Marc’s series on how to be an entrepreneur wherein he minces no words about the prospects for hiring a Professional CEO:

Don’t.

If you don’t have anyone on your founding team who is capable of being CEO, then sell your company — now.

The rest of the series is extremely good too, though mostly not this black and white.

Posted in business, venture | Leave a Comment »

You’ve Already Had a Multicore Crisis and Just Didn’t Realize It!

Posted by Bob Warfield on August 30, 2007

As I was driving along pondering the imponderables, I suddenly realized the folks talking about the Multicore Crisis have gotten it all wrong.  For those who haven’t heard of it, the Multicore Crisis is basically concern about what happens as chipmakers shift from delivering ever-faster clock speeds in step with Moore’s Law to delivering ever more processor cores on the same chip.  The crisis comes about because it’s much harder to write truly parallel software than it is to just let the chip get faster and run conventional software twice as fast every 18-24 months.  No less a figure than Microsoft’s Craig Mundie has proclaimed that we are 10 years away from having the proper languages and other tools to efficiently harness the hardware that will exist in a multicore world.

Some of the pundits in the blogosphere have argued that we have plenty of time to get ready for the Multicore Crisis, and that all the hubbub today is just hype and hand wringing.  They will do projections that say it’s easy with a couple of cores to just give one to the OS, save the other for the app, and see an immediate speedup.  By the time there are enough cores on a chip that this quits working, 10 years will have gone by and we’ll have all those great new tools needed to harness the big chips.  There are some pretty good rebuttals for this already, BTW.

Never mind that quad-core chips have already shipped, motherboards are cheaply available to put two of these together in a “V8” 8-core configuration, and 8-core chips are nearly here from Intel and already here from Sun.  Never mind that Intel has an 80-core chip in their labs and there are startups looking at 64 cores in the relatively near term.  Let’s also forget that with 4 cores shipped now and 8 cores due out next year, we will see 64 cores in more like 6 years than 10, going by standard Moore’s Law rates.  Despite all that, it’s all going to be okay.  Really!

Here is my problem with all this back and forth:  we’ve already hit the Multicore Brick Wall without leaving skid marks, and most people just don’t realize it!  I hear the crowd out there now, beyond the klieg lights, grumbling in the dark, “What’s he on about now?”  Patience, please.  What multicore teaches us is that someday we will expect software to scale linearly.  That’s Alpha Geek Speak for: if I double the number of available cores, I want my software to run twice as fast.  Hallelujah!  I’m back to getting twice the speed every 18-24 months, just like in the heyday of Moore’s Law.  In the post-clockspeed-doubling world that’s coming, this will be a requirement, or all computing progress grinds to a halt (that means the money stops: true crisis), or so say the Multicored Chicken Littles.
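
For the mathematically inclined, Amdahl’s law (not part of the original rant, but the standard way to see it) captures exactly why linear scaling is so hard.  If only a fraction p of a program’s work can run in parallel, the speedup on N cores is:

    S(N) = \frac{1}{(1-p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1-p}

Even with 95% of the work parallelized, infinite cores buy you at most a 20x speedup.  Doubling cores only doubles throughput when essentially everything parallelizes.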

Linear Scalability is hard to do, but ironically, it is nothing new.  Guess what?  We’ve already been fighting with “scalability” for a long time.  Can you see where I’m going with this?  Let me give you some examples.

Once upon a time eBay was plagued by terrible outages.  Analysts stated that this was due to eBay’s failure to build a redundant, scalable web architecture.  One of my startups was located on eBay’s campus in Campbell, and the story we heard at the local Starbucks was interesting.  It seems eBay had built out their original architecture around the idea of running a 3rd party search engine on one giant machine.  Eventually, they reached a point where they had purchased the largest server Sun had to offer.  Unfortunately, being a Red Shifted business, they were growing at a rate faster than Moore’s Law, and hence faster than Sun could provide them more powerful machines!  Or, as eBay themselves put it in a presentation on their architectural evolution, “By November 1999, the database servers approached their limits of physical growth.”

In August of 1999, Meg Whitman hired Maynard Webb on the heels of all this to fix it.  The fix (despite many protestations that at least some of the problem was due to issues with eBay’s vendors like Sun) boiled down to rearchitecting the very fabric of eBay to allow for:

    “clustering the servers for greater availability, dividing the workload among its Oracle databases.”

Wow!  Deja Vu all over again.  They needed to find a way to harness more cores to keep up with the load:  eBay had a Multicore Crisis in 1999!  

When I worked for Oracle, we used to employ the Multicore Crisis to make sure our server won the benchmarks against competitors.  It was easy.  Just insist on running the benchmark on a server that had more CPUs than Microsoft SQL Server could utilize.  If Oracle could run on 2x the CPUs and keep them all efficiently humming away, we would run 2x as fast on the same hardware.  As I recall, at first SQL Server could utilize just 4 cores.  At some point, and after a lot of pain, they upped it to 8.  I’ve worked on big Enterprise projects where we successfully harnessed well over 100 CPUs.

Which brings me to my last company, Callidus Software.  We used scalability as a powerful competitive weapon.  We had built a grid computing infrastructure to run our incentive compensation software.  The competition literally had to throw in the towel at certain volume levels.  Beyond here there be scalability dragons.  There’s nothing quite like competing in a deal where you know your competition can’t produce a single happy reference at the volume levels the prospect requires.

More recently, the Skype VOIP service was down for an extended time due to what was basically a scaling problem.  Microsoft forced some updates through to Windows users, Windows had to reboot (what else is new), and suddenly there were millions of rebooted machines trying to log onto Skype all at the same time.  Skype’s explanation was:

Our software’s peer-to-peer network management algorithm was not tuned to take into account a combination of high load and supernode rebooting.

Consider the costs to businesses that depend on Skype.  Looking closer to home, investors in Skype’s owner eBay saw a loss of $1B in market value as the drama unfolded.  A Multicore Crisis can be really bad for your business!  As more and more of the computing world turns to centralized models like SaaS and Web 2.0, it becomes more important than ever to solve the Multicore Crisis, or at least the Scalability Crisis, for these businesses to succeed.

If we want to move beyond this, SaaS and Web 2.0 sites have to be architected for massive scalability, particularly since they’re so often built on cost-effective Lintel (Linux on commodity Intel boxes) architectures.  In addition, companies need to invest in utility computing at the hosting end so they can rapidly increase (or decrease) the hardware they have online when demand hits.  One example of a utility computing service would be Amazon’s EC2 and S3 services, which let you dynamically provision a machine in their data center in about 10 minutes.
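
To give a flavor of what “dynamically provision” means in code, here’s a minimal sketch using Amazon’s boto3 Python library (the AMI ID and instance type below are made-up placeholders, not real values):

    import boto3

    # Hypothetical values: substitute your own machine image and size.
    AMI_ID = "ami-0123456789abcdef0"
    INSTANCE_TYPE = "t3.small"

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def scale_out(count):
        """Provision `count` fresh machines when demand spikes."""
        response = ec2.run_instances(
            ImageId=AMI_ID,
            InstanceType=INSTANCE_TYPE,
            MinCount=count,
            MaxCount=count,
        )
        return [i["InstanceId"] for i in response["Instances"]]

    def scale_in(instance_ids):
        """Release the machines when the spike passes, stopping the meter."""
        ec2.terminate_instances(InstanceIds=instance_ids)

The point is the shape of the operation: capacity becomes a function call you make when load arrives, not a purchase order you make months ahead of it.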

Have you ever encountered massive outages on a new and rapidly growing service?  Perhaps a newly minted Web 2.0 startup?  Perhaps you’ve been really unlucky and encountered the problem as your company tried to install a mission critical piece of Enterprise Software.  Post a comment here to share your experiences.  I know many of you have already had a Multicore Crisis, and now you know what to look for.

For those who are thinking you’ll worry about the Multicore Crisis in 10 years when it’s an easy problem to solve, remember:

You’ve already had a Multicore Crisis and just didn’t know it!

Related Articles:

A Picture of the Multicore Crisis:  See a timeline of it unfolding.

Multicore Language Timetable


Posted in amazon, ec2, grid, multicore, platforms, saas, software development, Web 2.0 | 8 Comments »

Is Support a Cost Center or a Product? (If you do SaaS or Open Source, It’s a Product!)

Posted by Bob Warfield on August 29, 2007

I always find the RedMonk blog interesting, and this time it’s Coté’s post on Making Money in Open Source on Support.  He says some things that got me frothing at the keyboard again.

Developers Need Support, But It Is Seldom Offered With Enough Bandwidth

First, on the likelihood you can make money selling support to ISV’s:

“In general, I’ve found that ISV programmers (people who write applications [packaged or SaaS delivered] to sell to other companies, not “corporate developers” who write in-house applications) are less prone to use support for software, closed or open source.”

and:

“This is the kind of mentality I encounter among programmers quite a lot: it’s insulting to them to suggest that they need help.”

Let me explain the ISV perspective, because that’s the world I’ve lived in all my career.  This has amazingly little to do with machismo, being too cheap, or being insulted.  Rather, it has everything to do with bandwidth.  We’ve all used Tech Support.  Who do you know that loves it?  Sitting for hours on an 800 line being tortured by music and, worse, that painful interruption telling you how important your call is to them.  So why don’t they answer, then?  How would you like to be waiting on some Tech Support guy to tell you all the standard stuff (take 2 reinstalls and call me in the morning) while some high dollar Enterprise customer is chewing your CEO’s ear off about why your mission critical software doesn’t work?  You know it’s going to take 3 escalations to more senior folk before they even understand what you’re trying to tell them, and meanwhile your CEO is ready to fly your entire team to Nowhere, Iowa to work at the customer’s site just to placate them.  Been there, done that.

The insult is not that I need the help, it’s that you think you’re helping by doing that to me! 

If you want to make money selling support, treat it as a product, not a cost center.  Don’t send me to Bangalore.  Don’t put a guy on the phone with my architect who can’t carry my Alpha Geek’s jock strap.  Get somebody real who can go toe to toe.  But is that really a viable model?  Can you afford to hire Alpha Geeks to deliver support?  Probably not, because they will only do it if they have that rare combination of Alpha Geekdom and craving human contact so much they’ll take it under the duress of Tech Support.

SaaS companies come off better in this respect because it’s easier for them to put their Alpha Geeks onto the problem.  The Alpha Geeks watch the problem as it unfolds and directly access the data center to fix it.  They don’t have to struggle to remote control the customer through an On-premise fix.  Service for SaaS vendors is a product, not a cost center, but it’s also a product that is cheaper for SaaS companies to deliver, because the service can often be delivered without them leaving their desk, and sometimes without the customer even knowing.  The latter is a problem too if you want the customer to value the service, but we’ll save that for later.

Despite all that, Enterprise ISVs still spend time on the phone to companies like Oracle and Microsoft trying to get help, and they keep their maintenance paid up because you never know.

Support Types

Coté offers a good list of types of support provided:  bugs, scaling, configuring, upgrades, finger pointing (or proving who’s “right”: the software or the user), re-setting expectations (the software actually won’t scramble an egg inside its shell like the sales guy said), and your million dollar nightmare (customizing and supporting out-of-date deployments).

He goes on to suggest some other types peculiar to open source:  Updates, Platform Certification (“Stacks”), Product Improvements and New Features.  Yup, been there and delivered those.  There are two that deserve more discussion: being an advocate to the community, and professional services customization gigs.

On being an advocate, this seems so far removed from the realm of what Tech Support does that I wouldn’t even include it here.  I’ve seen this work best when Marketing or Sales handled it as part of customer relationship management.  Yes, Tech should be able to do that, but I’ve rarely seen the skill set and mentality located there to be successful.

On professional services to customize, yes, this is a very real opportunity to sell more product.  It isn’t really a support issue, though.  Yes, many open source consumers will just modify the code themselves, but I’d hesitate to claim that happens the majority of the time.  See my thoughts on Professional Services end runs though, as it is something you must guard against.

I want to cover a few types I’ve run across that are also biggies for support but aren’t mentioned:

Education.  A tremendous number of Technical Support calls boil down to the customer needing to ask a question.  The question may be so complex or confusing that it gets escalated all the way to the Chief Architect, but it’s still a valid role.  Answering questions is important, and I would think even more so for Open Source, where the questions may have to do with how the code operates.

Professional Services End Runs:  This has been the source of more bad feelings all around than any other phenomenon I’ve seen.  Here is the scenario.  The customer buys the product, but for a variety of reasons they do not have the money to get it properly installed:

–  They picked a bad VAR (low cost bidder, anyone?) who spent the budget without delivering.

–  They never could get enough budget internally for a proper engagement, and chose to finesse it this way because they’re cheap.  Don’t laugh, I’ve had customers ’fess up that this is exactly what’s happening and persuade CEOs to deal with it in the interest of future business.

–   The absolute worst:  Your own professional services people failed.  It may not even be their fault, but now the Customer has Righteous Indignation on their side and you will get that software installed or else!

–  Almost as Bad:  Your organization measures performance and pays on those measurements in such a way that one organization throws another under the bus to make their bonus.  Professional Services throws Tech under the bus.  Or it’s implicit, and Tech hires cheap people who can’t deliver, and everything is escalated to Engineering.  Yuck!

Companies that make Services a Product instead of a Cost Center are much better set up to think about these problems and deliver a better experience to their customers.

All your tech support are belong to us!

I didn’t see much rumination about Oracle’s attempt to steal Red Hat’s business.  One of the problems of basing your model around selling support for an open source product is differentiation.  Perhaps you can achieve better focus, but someone else can take a credible shot at stealing your business.  At the very least this will exert downward pressure on your pricing.


Posted in business, Open Source, saas, strategy | 4 Comments »

Luddites Need SaaS Too (Or Don’t Get Stuck On the Far Side Of the Chasm!)

Posted by Bob Warfield on August 29, 2007

I want to share an anecdote today that is a ringing endorsement for SaaS, and that made me think that Geoffrey Moore’s Chasm can hang you up whichever direction you’re trying to cross.

This morning, I wanted to share my slide presentation on SmoothSpan with a friend who is remote.  Without thinking about it too much, I simply emailed him the presentation and arranged a time to get on a conference call with him.  The fun began shortly after the call started, when he announced that Windows couldn’t find an application to open the files with.  Doh!  My friend has been out of Tech for a while, and spends most of his time messing with digital photography.  He hadn’t bothered to buy a copy of Office 2007, and the file formats are incompatible.  “No problem”, says I, and I guided him through the process of downloading Microsoft’s file format converter, which lets the old versions read the new files.  Several megabytes of small talk later, he was ready to try, but there was still an issue.  The majority of my friend’s time on a computer is spent in PhotoShop, and so he has a Macintosh.  The Windows utility didn’t work on the Mac!

Now we were really getting frustrated, we’d wasted the first 10 minutes of the call, and I was tired of it.  My first thought was to get him a PDF, but darn, I hadn’t installed the PDF converter yet.  So, I resaved the files in the old Office format and resent them to my friend.  Two minutes later we launched into the slideshow.  But wait, there’s more, and it’s much worse than a set of steak knives.  My friend immediately starts critiquing all sorts of visual problems with the slide that were completely invisible to me on my machine.  It seems that Microsoft has a few problems saving to the old formats.  Drat!

Ironically, one of the earliest slides in the presentation mentions SaaS, and so my friend wanted to know why people like SaaS.  I was running 20 minutes late, was very frustrated by the obstacles we’d encountered, but suddenly, the light went on.  We had just lived through one of the best examples of why even my friend (who isn’t a Luddite at all, but just doesn’t have a reason to keep up with Office apps) could have benefited had I been using a SaaS service like WebEx to deliver the slides rather than exchanging files and asking him to run an application on his desktop.

SaaS would have been up-to-date with the latest versions without my friend even having to know about it, there would have been a lot fewer steps on his end even if everything had worked out well from the beginning, and both of us could’ve focused on the meat of the discussion instead of fooling around trying to make software work.    We were able to laugh about it, but the experience really drove the SaaS point home to my pal, who had basically gotten stuck on the far side of Geoffrey Moore’s Chasm. 

Moore talks about companies having to cross the vast chasm from early adopters (the hipsters who like everything new because it’s new) to the mainstream to achieve mega success.  Sometimes the mainstream gets stuck a little too far to the right and misses out on some really good stuff like SaaS and Web 2.0.  Stepping back across that chasm can really help.  After this experience, I almost think SaaS offers even more value the less forward looking your organization may be.  After all, you’ll be the ones staggering under the load of layers of software that hasn’t been updated in an age.  It’s shades of disadvantaged countries that have found it easier to build cell phone networks than to install all the copper needed for a full land-line telco infrastructure.  Maybe that’s the way to abandon antiquated IT infrastructure and jump a couple of generations ahead: let the SaaS guys keep you up to date.

Read/Write Web had an interesting post recently on Rethinking the Chasm.  The thesis is that change is coming so rapidly that it’s hard to attract and hold the attention of enough early adopters to establish your idea before they’ve moved on to the latest Shiny New Thing.  Perhaps there are a few Luddites stuck on the far side who have enough pain that they’d make an interesting target market.  This was certainly our experience selling Enterprise Software at Callidus.  The best customers were at both ends of the Chasm Spectrum.  Early Adopters wanted it because it was new.  The Luddites wanted it because their infrastructure was so old and broken it was causing tremendous pain to the Business.


Posted in Marketing, saas, strategy | 3 Comments »

Web 2.0 Personality Types

Posted by Bob Warfield on August 29, 2007

There’s a Facebook application called Personal DNA that I came across recently.  It’s like a Myers-Briggs personality test.  When I took it, I found my personality was “Encouraging Leader”.  It’s a fortune cookie, but at least the fortune was good for my aspiration set.

For a long time I was pretty skeptical about these personality tests, but after I took the Myers-Briggs test twice and realized what it said about the teams I was involved with, I became more interested.  The first time around, I learned that a person I’d partnered with through more startups and other jobs than anyone else had exactly the same personality type, except that he was the “Introvert” and I was the “Extrovert”.  Talk about having each other’s backs!  It was actually perfect.  Because our learning styles were the same, our communication was extremely high bandwidth and very effective.  The risk would be that we could be blind-sided when dealing with people far different from that personality type.

I’ve seen that happen too.  The second time I took the test, it was with an entire executive team.  After, we were presented with a map that showed how everyone fit in relative to one another and relative to the center of mass for the whole team.  I remember my first reaction to it was to say, “Hey, that’s like a seating chart for who everyone sits with at lunch!”  Needless to say, those individuals who were far away on the “seating chart” presented difficulties.  It was harder for us to communicate our ideas to one another and collaborate.

One of the important points they teach you on these personality tests is that there is no “best” personality trait.  They are all effective (it’s great fun to look up the famous people of your type!), but the value is in understanding how to communicate with someone who has different traits than yours.

This got me to thinking about the Web and whether there might not be some sort of “Personality Type Test for Web 2.0 Software”.  Let me give you an example.  Guy Kawasaki recently announced he was going to Twitter.  I’m not a Twitter guy.  I like Instant Messenger-type software because it lets me know when people are online that I may not otherwise reach out to, and because it can be faster than a phone call.  Random tweets having nothing to do with anything of immediate importance seem like noise to me.  In fact, one day I had a blog post opened that had an embedded Kyte video from Scoble.  Kyte includes a chat capability, and this thing was on a browser tab that wasn’t my current tab while I worked on something else (I’d planned to read the post shortly).  The constant beeps and squawks were so distracting I finally stopped what I was doing to deal with it.

This brings me to the first of several “Web 2.0 Personality Dimensions” one might define.  Let’s call it your “Interruptability”:

Interrupted | Deferred

If you are the “Interrupted” type, you love multi-tasking.  You crave feeds.  You want data feeding you constantly and driving your day.  The “Deferred” type wants to be more in control of their time.  They want to take time out to focus on things without interruption.

Let’s continue with the Scoble example, because it generated a lot of comments on his site from people who complained they don’t want to have to watch videos to get his messages.  They wanted him to go back to writing.  We’ll call this one “Media Preference”:

Text | Video/Multimedia

I can see the test questions already:  “Do you prefer voicemail or email for quick communications with colleagues?” 

Note that if there is, in fact, a pronounced personality trait here, then Scoble is risking a lot if he plans to focus on more video and do less writing.  I say that because he grew his audience through the medium of text so he has a self-selected audience of text lovers who may or may not like video.  If people tend to polarize, there will be trouble and he’ll have to deal with rebellion and rebalancing his readership towards folks that like the new medium.  OTOH, maybe there’s just a business trend issue, such as ReadWriteWeb talks about in their Will Podcasting Survive post. 

There are a couple of other obvious “Web 2.0 Personality Dimensions” one could posit:

Free Form | Structured

Some sites bring a lot of structure, while others are very free form.  Facebook is much more structured than MySpace, which many have argued is a good thing, particularly for those who want to socially network professionally.  LinkedIn is even more structured than Facebook.  There is a continuum.  I remember when I first came across Dave Winer’s ThinkTank outline processor.  I was enthralled because this is the way I think.  When Lotus shipped a word processor called Manuscript that was totally structured around a very rigid view of outlining and styles, I loved that too.  Eventually I discovered that a lot of people, indeed most people, don’t write by outlining first.  In fact, they may never choose to look at an outline.  Hence Microsoft Word has truly useless outlining functionality.  It’s just good enough to demo, and little more.

What about the desire and tendency to become involved with the information associated with the Social Network?

Watcher | Participator/Shaper

I’ll always remember the Peter Sellers line, “I like to watch”, from the movie Being There, but I digress.  A Watcher is just that.  They prefer to absorb.  One of the things the personality trait tests try to determine is whether you respond immediately and interactively, or whether you want to take the information away and process it for a while.  This category is all about how you want to process the information you receive from the web.  A Watcher is pretty happy with a search engine and perhaps a reader.  A Participator wants to get involved.  Perhaps the involvement is fairly passive: they’d like to tag the information or attach private comments to it for their own use.  Perhaps they keep a bookmark on del.icio.us.  There are increasingly aggressive ways to get involved, however.  Perhaps you will post a comment to this blog post.  Slightly more aggressive participation would be to publish your del.icio.us bookmarks.  You may be such an aggressive participator that you write an entire blog post or series of posts about someone else’s presentation.  This can get dicey if the other guy is also an aggressive participator and you disagree, but it’s what makes the Web go round.

Since the primary value of Web 2.0 is collaboration, the Participators play a key role.  With no Participators, there can be no Web 2.0.

How about this one:

Clean Simple UI | Rich Internet Applications

Much has been said about Google’s clean user interface.  Microsoft’s Tafiti goes to the opposite extreme.  Note that the page I linked to for Tafiti belongs to a person who has chosen a blog look and feel that even matches Tafiti’s scheme pretty well.  They stopped short of calling it a Google-killer, but you do have to ask yourself where you come down on a love or hate of AJAX and Rich Internet Applications.

I could go on pulling these dimensions out of thin air, but my point is really that folks thinking about creating or adopting Web 2.0 in whole or in part should think about whether the audiences they want to collaborate with are likely to prefer one style or another, or whether they need to open all channels of communication to make sure some corner of the Web 2.0 Personality Space doesn’t get disenfranchised.

It almost makes me wonder if being able to “skin” your Social Networking Experience by taking some sort of preference test wouldn’t also be helpful.

This isn’t as far-fetched as you may think.  Savvy marketing and sales people have been using personality traits for a long time to understand how best to reach and influence their customers:

–  How to think about Brands vs Personality Types

–  Know your audience’s Personality Traits

–  Decide whether to show it or tell it in a presentation based on the audience’s traits

–  Here’s a great link on the subject from a social network:  LinkedIn

Why not Web 2.0 interface design too?

If nothing else, a “Web 2.0 Personality Traits” theory explains why such a diversity of Web 2.0 experiences seems to be successful, and it may point the way for how to make future Web 2.0 efforts even more successful.  If Mark Cuban is right that the Web has stabilized, it simply means that there is an offering out there to tickle the full spectrum of personalities.  What remains, to fully digest Web 2.0, is to formalize this thinking and understand how to leverage it efficiently.

(Postscript:  I started out life in this business doing UI design.  The design for Borland’s Quattro Pro spreadsheet is one example.  The notebook tab UI for spreadsheets originated with Quattro, and Borland eventually won a substantial sum suing Microsoft over its use in Excel.)

Related Articles

Check out Part 2 of the Web 2.0 Personality Series where we slot existing services into the model.

Part 3 of the Web 2.0 Personality Series tells how to target the various personalities.

Also see how Fred Thompson and Twitter Leverage Web 2.0 Personality Styles.


Posted in Marketing, ria, user interface, Web 2.0 | 25 Comments »

Business Web 2.0 Demands a Different Trust Fabric Than Social Web 2.0

Posted by Bob Warfield on August 28, 2007

Web 2.0 is all about collaboration.  In fact, I mentally substitute the word “collaboration” for Web 2.0 in any context where I’m having a difficult time understanding and it generally makes things much clearer.  Thanks to O’Reilly for that!

Harvard professor Andrew McAfee got me thinking about the differences in the Trust Fabric (how Social Networks govern Trusted access to information) between Business users of Web 2.0 and Social users of Web 2.0.  His post, The Great Decoupling, gives some exciting insights into how information flows within corporate hierarchies.  The gist is that information has historically flowed hierarchically within organizations, because decision making requires information, and gathering and disseminating information used to be costly.  These “economics of information distribution” have largely restricted information flow to corporate hierarchies, but the times they are a-changin’.  Both of those costs, to gather and to disseminate, have been greatly diminished and commoditized by the Web.  Nowhere is this effect more strongly felt than in Web 2.0, which has become the ultimate evolution of that trend.

MIT’s Tom Malone has gone on to write about the effects of this in his new book, The Future of Work, where he couples the changing topology of information transfer with the hypothesis that decision making will follow suit.


This all leads to massive decentralization of decision making in organizations.  It all sounds great, right?  But then McAfee starts to disagree a bit with Malone for a very solid reason:

But the fundamental rule about where decision rights should go has nothing to do with information costs themselves. Instead, it has to do with knowledge. The ground rule is: align decision rights with relevant knowledge.

This is an absolutely crucial insight.  McAfee goes on to give an example wherein some loan officers at a bank make better credit risk assessments than other officers.  Clearly, decentralizing the decision making to all the loan officers is counterproductive; it should be further centralized to just those officers who make the best decisions.

There are many other examples.  I like to think in terms of categories of information that need to stay within the corporate hierarchy but that are essential to decision making.  Here are three examples:

Morale:  Do we really want everyone to know how poorly some initiative is going?  How will it help to tell those who can’t make a difference and would only be depressed by the knowledge?  Is it fair to expose some internal squabble that was mostly sound and fury signifying nothing?  Won’t that just unfairly tarnish some otherwise good people’s reputations and make them less effective?

Governance:  Is the information legal and appropriate for everyone to know in this age of SOX and Securities Laws?

Competitive Advantage:  Do I want to risk giving my competitors access to key information because I’ve distributed it too broadly?

The headlong rush the Web brings to expose everything to everyone scares the heck out of most corporate types.  Their two biggest requests for Web 2.0 initiatives are Governance and Security, and the reasons for it are exactly what we’ve been discussing.  It isn’t just that they have “control issues”.  There are sound business reasons why controls have to be in place.

McAfee alludes to this as well when he says:

The net result of disappearing information costs won’t necessarily be decentralization. It will instead be the decoupling of information flows and decision rights. Organization designers will be able to allocate decision rights without worrying about how costly it will be to get required information to deciders. Leaders will be able to ask “Who should make this decision?” without adding “Keeping in mind that it’s going to be slow, difficult, and expensive to get them the general knowledge they’ll need.”

The set of rules and data structures that define this mashup between information flows and decision rights will be an essential component of any broad-based Web 2.0 initiative for Business.  It defines differences in how the Trust Fabric for a Business Social Network has to operate versus how a Social Social Network (sorry!) has to work.  This makes the Web 2.0 problem for business considerably more difficult than just firing up the LAMP stack to deliver a MySpace or Facebook clone to your chosen community of employees or customers.  It requires an odd combination of totally control-centric Enterprise Software think with the laissez faire mentality of the Web.
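
To make the decoupling concrete, here’s a minimal, hypothetical sketch (the roles, categories, and decisions are invented for illustration) of a trust fabric that keeps read access and decision rights in separate structures, so widening information flow never silently widens decision rights:

    from dataclasses import dataclass, field

    @dataclass
    class TrustFabric:
        # Who may see each category of information (category -> roles).
        read_access: dict = field(default_factory=dict)
        # Who holds the right to make each decision (decision -> roles).
        decision_rights: dict = field(default_factory=dict)

        def can_read(self, role, category):
            return role in self.read_access.get(category, set())

        def can_decide(self, role, decision):
            return role in self.decision_rights.get(decision, set())

    fabric = TrustFabric()
    # Information flows broadly: every loan officer sees the credit data...
    fabric.read_access["credit-history"] = {"loan-officer", "senior-loan-officer"}
    # ...but the decision right stays with the officers who decide best
    # (McAfee's rule: align decision rights with relevant knowledge).
    fabric.decision_rights["approve-loan"] = {"senior-loan-officer"}

    assert fabric.can_read("loan-officer", "credit-history")
    assert not fabric.can_decide("loan-officer", "approve-loan")

Notice that McAfee’s decoupling shows up directly in the schema: information flow and decision rights live in separate tables and are governed separately.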

Dion Hinchcliffe over at ZDNet says:

Silicon Valley proper has for the most part become thoroughly bored with the Web 2.0 meme despite the largely superficial presence of the most powerful Web 2.0 concepts in many online products and services.

At the same time, mainstream business is just now getting ready for Web 2.0 adoption and are beginning to incorporate the underlying technologies, platforms, and concepts into their IT departments and lines of business.

The trouble is that so far Silicon Valley has largely built Closed Social Web 2.0.  It’s bored with that and is somewhat thinking about how to build Open Social Web 2.0.  There’s a lot of discussion about Social Graphs and Trust Fabric going on everywhere, from Scoble’s Plan for Google’s Demise to the two Marcs’ (Canter and Andreessen) Plans to Steal Away Facebook’s Crown.  Seems everyone agrees they want Web 2.0, and they want it delivered in such a way that it is open.  Business needs to make sure it has a seat at the table as well, so that its unique needs can also be met.

Related Articles

Mashups and “The Future of Work” in Enterprise 2.0

Why Can’t I Search My Enterprise Data As Well As Google Searches the Internet?

Business (And Social) Alternatives to Page Rank


Posted in business, Partnering, platforms, user interface, Web 2.0 | 3 Comments »

Bumps in My Internet Journey (Links): August 27, 2007

Posted by Bob Warfield on August 27, 2007

This post introduces a new weekly feature for the SmoothSpan blog where I’ll list noteworthy links I came across during the prior week.  Call them “Bumps in My Internet Journey” because they made me stop and think!  If they made me think enough, eventually you’ll see a blog post about it.

Why Mahalo, TechMeme, and Facebook are going to kick Google’s butt in four years:  Scoble’s got an interesting take.  Everyone hates search engine Spam!

Attention Economy:  All You Need to Know:  Nice overview of Read/Write Web’s Attention Economy concept.  Eventually I’ll blog about this…

Denormalizing Your Way to Speed and Profit:  Because databases are an agent of the Devil when it comes to massive scalability.

Yahoo Pig and Google Sawzall:  Wherein MapReduce and Hadoop become Languages.  Don’t worry, becoming a Language is the most exalted thing in computing.  Adobe even made printers speak a language (PostScript)!

Profiting from the Content Delivery Network Wars:  They haven’t seen anything yet.  Amazon and the other big guys will roll up Akamai et al as part of the hosting package you get when you buy their utility computing service.

What Makes an Idea Viral:  Seth Godin is always worth listening to.

Werner Vogels Tells Us About the Amazon Technology Platform:  As well as interesting glimpses into their culture.

By 2014, We’ll Have 1000-core Chips:  The amazing Tile64 has shipped with 64 cores.  Available today, in case you thought the Multicore Crisis was far in the future!

Posted in business, grid, Marketing, multicore, Open Source, Partnering, platforms, saas, software development, Web 2.0 | Leave a Comment »

Social Graph Search Engines, Part 3

Posted by Bob Warfield on August 27, 2007

I’ve inadvertently stumbled on a topic that’s getting tremendous attention, so I wanted to sum up my thoughts here before moving on.

Let’s start with Robert Scoble’s post, Why Mahalo, TechMeme, and Facebook are going to kick Google’s butt in four years.  Scoble’s proposition, which he communicates well in three videos, is that by using the Social Graph, we can dramatically reduce SEO Spam in search results.  The source of the Spam is that Google’s current PageRank algorithm is just too easy to game.  You simply need the right keywords together with as many sites as possible linking to the page with the keywords.  That combination does not guarantee a human will see anything of value on the resulting page.  With a Social Graph Search, you weigh in human judgement about the quality of the page.  You do this by considering factors such as:

–  Did someone from your Trust Fabric (i.e. your social network) write the page?

–  Was it written or linked to by people who you know share similar interests because of the social graph?

And so on.  It’s a good idea, and I had already modified my own search habits to take advantage of a less grandiose form of this by choosing to search blogs before doing a general Internet-wide Google.

If you read through the comments on Scoble’s post, there seem to be two big issues with this proposition.  First, it’s simply very hard to get over the idea that Google is huge, omnipotent, and can simply tweak their algorithms to adopt a Social Graph Search approach.  Second, there is a lot of doubt about the feasibility of human augmented search.

Let’s start with the feasibility of human augmented search.  I think people are looking at it too literally.  Perhaps companies like Mahalo are also in the camp of being too literal.  If we have to wait for humans to laboriously research each topic and select the best links in some systematic way, I agree that sounds painful.  Not only that, but it seems like it’s been done.  Isn’t that what About.com did?  I’ve landed on their site only rarely and never found it to be helpful.  But look at it another way.  I do a lot of research on the web, and there’s nothing better than hitting pay dirt by finding someone’s list of bookmarks or a link to the seminal article that started some new idea.  Today, I find these things by tracking through blogs, and it’s a lot faster than tracking through Google.  Couldn’t that be automated?

So in a nutshell, my argument about feasibility is that there are already structures in the Web that are mostly Spam-free and can be tapped to deliver improved search.  The blogosphere is one such.  I can already see an objection:  won’t the SEOs just shift their attention to polluting these new information watering holes with their Spam?  Perhaps, but many of them are more amenable to Spam defenses because they’re not just randomly selected web pages.  Let me give you an example.  Let’s say we’re going to harness bookmarks in the blogosphere.  Let’s have the human editorial staff simply identify a core nucleus of trusted blogs by reading them.  We’ll take blogs like Scobleizer, TechCrunch, and Mashable.  Let’s tag pages on those sites with a “Bob” Rank (because hey, PageRank is named after Larry Page!).  The BobRank for pages our editorial staff have validated is infinite (in Computer Science, that means it’s a really big number, like 1 billion).  Every page linked from a 1 billion BobRank page gets a rank of 1 billion minus 1.  Pages linked from those pages get 1 billion minus 1 minus 1.  And so on.  The BobRank is telling us how far away from objectively validated acceptability a page is.

We can envision many embellishments.  For example, we can augment the BobRank with a FriendRank.  Have my friends in my Social Graph written or bookmarked the page?  Well then, up the BobRank by 1 million.  Have their friends?  Up it by 1 million minus 1.  And once again, we have a super PageRank-style algorithm where all the people in the Social Graph are helping out.
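
Here’s a quick sketch of the scheme as just described, with hypothetical stand-ins (a dict for the link graph, sets for the social graph) for the real crawl data:

    from collections import deque

    SEED_RANK = 1_000_000_000   # "infinite": a really big number, per above
    FRIEND_BONUS = 1_000_000

    def bob_rank(link_graph, trusted_seeds):
        """Each page's rank is SEED_RANK minus its link-distance from the
        nearest editorially validated seed page (breadth-first search)."""
        ranks = {page: SEED_RANK for page in trusted_seeds}
        queue = deque(trusted_seeds)
        while queue:
            page = queue.popleft()
            for linked in link_graph.get(page, ()):
                if linked not in ranks:   # first BFS visit = shortest path
                    ranks[linked] = ranks[page] - 1
                    queue.append(linked)
        return ranks

    def friend_bonus(page_authors, friends, friends_of_friends):
        """FriendRank: +1 million if a friend wrote or bookmarked the page,
        one less than that for a friend of a friend."""
        def bonus(page):
            authors = page_authors.get(page, set())
            if authors & friends:
                return FRIEND_BONUS
            if authors & friends_of_friends:
                return FRIEND_BONUS - 1
            return 0
        return bonus

    # Hypothetical usage: the combined score orders the search results.
    link_graph = {"scobleizer.com": {"somepost.com"},
                  "somepost.com": {"deeplink.com"}}
    ranks = bob_rank(link_graph, trusted_seeds=["scobleizer.com", "techcrunch.com"])

The first breadth-first visit to a page is its shortest link-distance from a trusted seed, which is exactly the “minus 1 per hop” rule above.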

How do we efficiently gather and process all this information?  That brings me to the first objection, that Google is so big it can just do this itself.  That’s very true, BTW.  But it doesn’t mean everyone else has to lose out.  The rest of the AltSearch community simply needs a Queen from which to launch Swarm Competition.  I’ve already mentioned how this could be done around Yahoo.  Let’s revisit, in light of the Social Graph-based strategy, how Yahoo can jump about two generations ahead.

First, Yahoo owns enough properties (such as del.icio.us) that they have Social Graphs out the wazoo to analyze and create a “Jerry” Rank (they certainly won’t call it the “Bob” Rank, doh!).  But secondly, they can open up their world and leverage the AltSearch community of something like 1,000 different search startups.  My proposal was that they turn search itself into a Social Network.  They can do that by leveraging all these social graphs, by releasing their own open Identity and Trust Fabric (e.g. Social Graph) APIs that their partners must share to participate, and by making their web crawling data and massively parallel horsepower available via Hadoop.
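
If Yahoo did open its crawl data via Hadoop, the inbound-link lists that feed a BobRank-style computation would be a natural first map/reduce job.  Here’s a hypothetical Hadoop Streaming pair in Python (the input record format is invented for illustration):

    # mapper.py -- input record: "<source_url> <target_url> <target_url> ..."
    # Emits one (target, source) pair per outbound link so the reducer can
    # group each page with the set of pages linking to it.
    import sys

    for line in sys.stdin:
        fields = line.split()
        if len(fields) < 2:
            continue
        source, targets = fields[0], fields[1:]
        for target in targets:
            print(f"{target}\t{source}")

    # reducer.py -- Streaming hands the reducer mapper output sorted by key,
    # so all the sources for one target arrive together.
    import sys

    current_target, sources = None, []
    for line in sys.stdin:
        target, source = line.rstrip("\n").split("\t")
        if target != current_target and current_target is not None:
            print(f"{current_target}\t{','.join(sources)}")
            sources = []
        current_target = target
        sources.append(source)
    if current_target is not None:
        print(f"{current_target}\t{','.join(sources)}")

The pair runs with the stock streaming jar (hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input crawl -output links), and the output is the link graph the partners would mine.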

Okay, we see how Yahoo benefits.  Big hype value.  Big news value.  Promulgating a key open standard for Identity and Trust Fabric.  Leveraging everyone else’s social graphs to improve their search and generally creating network effects that haven’t been seen since eBay rolled up the auction world.  How do the partners benefit?

To continue the idea of building a Search Engine around a Social Network, what else do social networks have?  Widgets!  Look how excited everyone got about Facebook Widgets!  Yahoo can bestow on their partners the ability to create search widgets.  People in the Search Network get to choose which widgets they like.  From a UI standpoint, it might not be too different from how Microsoft shows different search views in Tafiti.  Remember the little ring with the icons?  Or it could be a cleaner, textier Google-style, sorry, definitely a Yahoo-style list.  What’s on the list is determined by what widgets you signed up for, but Yahoo will also give you a basic set for general purposes.

To this we also add feeds, photos, and all the rest of the stuff that makes up a Social Network.

Still think someone can’t get there with a decent start?  This vision is my bet for what search engines look like in a couple of years, as well as an example of something I’ll talk more about:

Any web offering can be turned into a Social Network, and there are huge Competitive Advantages to doing so.


Posted in business, Marketing, platforms, saas, Web 2.0 | 5 Comments »