SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for the ‘Open Source’ Category

Amazon Web Services: The De Facto Cloud API?

Posted by Bob Warfield on July 12, 2010

Read a couple of posts last week that coalesced some thoughts I’d been having into this one.  First was the fascinating rumor about a Google EC2 clone.  Hat tip to High Scalability Blog for putting me on to this one.  The second was James Urquhart’s musings about the desirability of the Amazon APIs as a standard.

James and most of his commenters are worried that we might standardize on something that doesn’t have all the bells and whistles.  Guess what, guys: standards NEVER have all the bells and whistles.  By the time people get done arguing about them and they get enough momentum and use to be more than just standards in name, the world has moved on.  So what?  If we waited for standards to be perfect, we’d have none.  The point of standards is that they’re good enough to help reduce friction in areas where innovation at every turn is no longer desirable.

Are we there yet with the Cloud?

I think so.  I’m not saying there is no innovation left to do–there is tons of it coming, and it’ll all be good.  But, there is value at this stage in having a standard too.  There has been a lot of success demonstrated with the Amazon Cloud from companies large and small.  I would be curious to hear whether Amazon thinks their API should be a standard (and whether they’re prepared to give up some control in exchange), or whether they think it still has too much growing up to do. 

I look at it like this:

First, the wonderful thing about standards is there are so many to choose from.  Just because we anoint Amazon as one such standard does not mean there can never be others that are completely different.  It does not mean the standard can never have a superset or optional features to cover many of the cases Urquhart raises.  And remember, there is still that period of making it into a standard during which it can be molded a bit before it settles in.

Second, speaking of supersets, I think it is important to think about Cloud Standards as a layered model.  I would definitely not put all of Amazon’s offerings into the standard.  Perhaps just EC2, EBS, and S3.  Note that S3 is spreading even more rapidly than EC2 as a de facto standard.  These three seem pretty benign, reasonably abstract (meaning they don’t expose too much ugly detail about what goes on underneath), and reasonably proven.

So what do we get for it?

If the standard works and matters, we get a lot more vendors supporting the standard.  That’s a good thing, and something I have to believe every Amazon Web Services client would very much like to see happen.  The second thing we get is that some, but not all, of the innovators will quit trying to reinvent the wheel that is the Amazon layer (IaaS if you must use an ‘aaS name for it) and they will move on to other layers.  Depending on how much more innovation you think that layer needs, this is a good thing.  For those that think it is a bad thing, there is still time for someone with a better mousetrap to put it out there and show us.  But what if making it a standard causes a ton of innovation around how to solve some of the problems like I/O bandwidth behind the scenes without affecting the API?  Wouldn’t that be an excellent thing?  Heck, I’d love to be able to add an “11” to the AWS I/O speed control, who wouldn’t?

But time is a-wasting.  For whatever reasons, it seems like the other players are very late to the party relative to Amazon.  If they wait too long, Amazon gets a de facto standard before the market has much leverage to pry loose some control.  This is arguably what happened around the IBM PC, DOS, and Windows.  BTW, you read that like it’s a bad thing, but it wasn’t.  We accomplished a heck of a lot by not having to continue innovating over the bus, the BIOS, and so on.  It just could’ve been a lot better if there hadn’t been so much control vested in Microsoft and IBM.  As I write this, I wonder a little bit whether players like Google really aren’t taking too long.  Maybe AppEngine is the exact image of what they think the Cloud should look like, and the problem is the world just didn’t bite the way they have with AWS.  Google could be saying, “Okay, we hear you, so we’ll do it your way now.”  There has to be some reason they’d endorse S3 and potentially EC2.

If Google’s ready to go there, why not the rest of us?

Posted in amazon, cloud, Open Source, platforms | 10 Comments »

Adobe: 7 Things You Should Do With Flash/Flex

Posted by Bob Warfield on June 21, 2010

Dear Adobe:

Apple has started the anti-Flash/Flex snowball rolling, and it is getting steadily bigger.  It’s a perfect storm, because they’ve got the platforms that are perfectly suited to Flash, their platforms are wildly popular, and your faithful audience desperately wants to be there.  But that’s not all.  They didn’t just prohibit Flash, they have called a lot of attention to a credible competitor: HTML5.  I know, I know.  It will be a long time before HTML5 is everything Flash is today.  It’s not even close right now, and a lot of people have conflated media delivery with Rich User Experience in ways that unfairly diminish your platform.  Get over it.  Economic pressures (aka naked greed and envy to be on these precious Apple platforms) have created a hill of growing height, and the water that is developer mindshare is rapidly flowing down that hill and away from Flash.

What can you do?

Lame ads won’t help.  Complaining about it won’t help.  Technology and innovation can help.  If you move quickly, and you have some things in your camp that buy you time (Android), you can still salvage the situation.

Here is what it takes:

1.  Absolute Single Minded Focus on Performance and Stability

People have concluded your platform is buggy and slow.  It doesn’t matter if you agree or not, the customer is always right.  When you hear McAskill at Smugmug and Adler at Scribd railing about your stuff, it’s time to move from denial to acceptance.  Their voices and those of many others are too loud and being spread too efficiently to pretend it isn’t so.  It’s past time to deal with it, in fact.  You need to embrace this problem, own it, and deliver the solution as quickly as possible.  The solutions can take many forms.  My recommendations are part of this post, and this point is more about declaring a focus both publicly and internally and owning the problem.  You don’t have to say, “We agree, our platform is slow and buggy.”  You do have to say:

“We have a great platform and our customers have told us to make it dramatically faster and more stable.  That’s our #1 priority, and here’s how we plan to do it…”

2.  Stability:  Quality + Security.  Get a Czar.

I’ll define Stability as consisting of equal parts Quality and Security.  Your customers are finding too many bugs.  There are too many public security issues.  This is happening at a time when you can ill afford it.  Get a Czar nominated and equipped to deal with this area.  Apportion your development cycles between performance and stability, give the stability cycles to the Czar and just do it.  The Czar needs to rapidly do the following:

– Identify the most egregious problems you have missed that are troubling your customers.  Look outwardly not inwardly to find them.  Fix that first tranche rapidly.

– Upgrade your automated testing so regressions are under control.

– Put in place a culture of quality that ensures that every single release is better than the last one.

– Investigate whether some of the quality issues don’t stem from education issues.  If customers are approaching it wrong, or don’t know how things work, they may be seeing behavior that is exactly what’s expected, but that looks to them like a bug.  Do not let this be an excuse for thinking you don’t have real bugs too.

– Be transparent about the plans and the results.

Get this stuff fixed and make sure it stays fixed.  This is not a, “Let’s fix the top 100/1000/or whatever bugs,” thing.  It’s a cultural change accompanied by results.

3.  Build a High Performance Native Compiler

Yes, I know, it is wonderful that Flash programs work everywhere.  But you are dealing with a Performance perception problem and a company that says they will only let Native tools in the tent.  Figure out how to kill both birds with one stone.  Not every platform needs a native compiler.  But, if Facebook can afford to build a PHP compiler for performance’s sake, you definitely can afford to do this.  If you don’t have any serious compiler gurus, get some on board.  While you’re at it, build an optimizer for your interpreted stuff too.  You need a two-pronged attack:

–  Better bytecodes with the usual optimizations that matter closer to the language–operator strength reduction and all that (see the sketch just after this list).

–  Killer native compiler that will run circles around your bytecode stuff when it needs to.
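
To make the bytecode bullet concrete, here is a minimal, hypothetical sketch of operator strength reduction, the sort of classic optimization a smarter bytecode pass could apply for you.  Python is used purely for illustration; this is the shape of the transformation, not Adobe’s compiler:

    def row_offsets_naive(rows, width):
        # Each iteration pays for a multiplication: offset = i * width.
        return [i * width for i in range(rows)]

    def row_offsets_reduced(rows, width):
        # Strength-reduced: the repeated multiply becomes a running addition.
        offsets = []
        offset = 0
        for _ in range(rows):
            offsets.append(offset)
            offset += width  # a cheap add replaces i * width
        return offsets

    # Same answers, cheaper arithmetic inside the loop.
    assert row_offsets_naive(1000, 64) == row_offsets_reduced(1000, 64)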

If you do it right, it should be possible to pick and choose which classes are native and which are bytecode within the same Flash app.  You will also need to provide infrastructure that makes it easy to serve up the right native version to whatever platform is being used by the consumer.  Don’t make your developers figure that out.  BTW, you need to get this into Beta in less than 12 months.  You don’t have much time.

There is an old saying, “If you want people to make a new decision, you must give them new information.”  This pair of developments is the new information for performance haters.

4.  Revisit the Asynchrony of Flash and Embrace Multicore

This may just be baked too deeply into the programming model, but it sucks.  Sometimes programs want to be able to block until something happens, and when they can’t, they wind up wasting their time and your mobile device’s battery life to no good end.  This asynchronous stuff is a throwback to not having a real multi-thread model for Flash, and in the Multicore Era, that’s a liability.  Sure, current mobile platforms don’t have many cores.  It doesn’t matter.  #3 is really only a stopgap.  In the Multicore Era, if you want to completely crush the competition on performance, do it with more cores.  When I was at Oracle, it was all about building benchmarks that could use more cores than SQL Server.  Once you use more cores than the other guys, you become almost exponentially faster.

And while you’re at it, you will deliver a model that is much friendlier to developers.  Being able to deal with multiple threads and blocking should be the basis for Flex 5.
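
For a feel of what a blocking, thread-friendly model buys developers, here is a tiny sketch.  Python’s threading module stands in for a hypothetical Flex 5 concurrency API; it illustrates the model, not any actual Adobe API:

    import threading

    data_ready = threading.Event()
    result = {}

    def producer():
        result["value"] = 42     # pretend this is a slow network or disk operation
        data_ready.set()         # wake anyone blocked on the event

    def consumer():
        data_ready.wait()        # blocks here: no polling loop, no wasted battery
        print("got", result["value"])

    threading.Thread(target=producer).start()
    consumer()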

5.  Embrace the GPU and Knock ‘Em Dead With 3D

Last point on performance.  For most machines, the graphics processor is the most powerful CPU in the machine.  That’s a big surprise to many, but hey, it’s true.  That sucker has got vector processing going on just like the old Cray supercomputers.  There are companies building supercomputers out of them, for Heaven’s Sake.  Our freakin’ Air Force uses the GPUs in Sony Playstations to build supercomputers fer cryin’ out loud.  I know it is a pain to go native on the GPU.  Sometimes the OS doesn’t help you very much.  But you have to find a way to get your developers’ hands on those beautiful MIPS.  This is especially true since Flash is all about the visuals.  While you’re at it, build a killer 3D subsystem for Flash so peeps can create virtual reality, 3D modelers, CAD/CAM, killer FPS games, and a whole lot of other things you haven’t even thought of.  With #’s 3, 4, and 5, nobody will be able to touch you on the performance front.
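
To see why the vector model matters, here is a toy sketch with NumPy standing in for a hypothetical GPU-backed API (an assumption purely for illustration): one wide operation replaces an element-by-element loop.

    import numpy as np

    pixels = np.random.rand(100_000).astype(np.float32)

    # Scalar thinking: touch each pixel one at a time.
    brightened_loop = np.empty_like(pixels)
    for i in range(pixels.size):
        brightened_loop[i] = min(pixels[i] * 1.2, 1.0)

    # Vector thinking: one wide operation over the whole array, the way a GPU
    # (or any vector unit) wants to work.
    brightened_vec = np.minimum(pixels * 1.2, 1.0)

    assert np.allclose(brightened_loop, brightened_vec)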

6.  Bring Back the Flock with a Cross Compiler

In many ways, Apple’s insistence on anything but Flash is like an old-fashioned shelf space war straight out of the pages of Ye Olde Shrink Wrapped Software.  If I build my app in not-Flash, it is a pain for me to go back to Flash no matter how much I like it.  It is worse if my not-Flash is on a super hot platform, because I kind of want to just keep writing not-Flash on that platform once I get hooked.

Here is the thing: if HTML5 is really as limiting as Flash devotees claim, it should be trivial to translate the limited functionality of HTML5 to Flash.  While you are at it, please undertake the slightly less trivial task of moving Objective-C to Flash.  Sound hard?  It is a little, but not any harder than all the other stuff you need to do.  Besides, I’ll bet you can do this one as a joint project with Google.  Why?  Easy:

Take any award-winning iPlatform app.  Feed source into your new cross compiler.  Push a button.  Get back an award-winning Android app written in Flash.  BTW, you can have it on your desktop or anywhere else too. 

Now what iPlatform developer could resist that if it all works great?  Don’t think of it as aiding the enemy by giving developers an opportunity to start from HTML5 with less downside.  The developers that will use this are already lost to you, and you need to bring back your flock.

7.  Keep Your HTML5 Powder Dry

By now Adobe, I expect you’re really feeling pretty unhappy with this post.  The stuff I am telling you needs to be done is not easy, and it won’t be cheap.  At the same time, you know in your heart that this is what it really takes to win this war.  It’s about to get worse.  HTML5 is coming.  All of those other steps will only allow you to maintain your lead for longer. 

You need to recreate everything that is great about your platform on HTML5.  But you need to keep it in the backroom until the timing is right.  Don’t dribble it out.  Do big quantum leap releases.  Your first one should not be an also-ran.  It should establish Adobe as the premier resource for HTML5 developers.

It’s just that simple

As I’ve said, this is a hard road.  But, if you don’t follow it, if you don’t dig down deep and go to war now in a meaningful way, you won’t catch up later.  You’ve got a great platform.  If you want to keep it, you know what to do.

Posted in apple, flex, Marketing, Open Source, platforms, ria, strategy, user interface | 6 Comments »

Apple, Adobe, Punctuated Equilibrium, and Commoditization

Posted by Bob Warfield on April 13, 2010

Lots of kerfuffle around Apple’s refusal and downright blocking of Flash on its platforms.  Developers like John Gruber are stating the obvious in explaining why Apple is behaving this way.  Techcrunch is piling on, since Steve Jobs is referring people to Gruber.

BFD, I explained all this years ago and in more than one post.  It really is extremely obvious what’s going on.  The more interesting question is, “Where will it lead?”

I’ve written also about how platform vendors should behave like Switzerland.  Apple most assuredly is not behaving like Switzerland.  Under the banner of making insanely great products, they’re doing the equivalent of being a Swiss bank that makes you agree to terms that effectively let them own all of your customers and tax all of your revenues.  Evil?  Probably.  Yes, let’s make no mistake, there is Evil here in the sense of what Google used to proclaim they would not do.  Apple wants to lock up customers and tax them for every cent of revenue they possibly can.

For the time being, this strategy will work great, but Apple should be aware that they are using artificial scarcity to drive demand.  The user experience offered by the iPhone and iPad is sublime, and there are no good enough alternatives yet.  Hence the artificial scarcity.  But it is artificial in the sense that sooner or later someone will produce a good enough alternative, at which point we get commoditization.  I’ve written about the parallels of evolutionary biology and business quite a lot.  Apple’s case is no different.  The “i” products have created a punctuated equilibrium: a fundamental change that gives those organisms a tremendous advantage.  They rapidly gain share in the ecosystem as a result.  However, Nature abhors such ownership, and so do the markets.  Other organisms will copy the traits that made the upstarts successful.  They will blunt these advantages and slow their growth.  There are too many dollars at stake driving that desire.

Can it happen? 

Well, it’s getting harder.  Apple has the full benefit of network effects.  Consider the online auction space.  Once upon a time there were a huge number of auction houses.  Today, there is largely only eBay.  A network effect happens when each new member of the network brings even more value than the previous joiner brought.  It is an exponential growth in value, and hence switching costs.  At some point, it is too hard for new players to show value.  The danger in Apple’s products is therefore not the compelling UI’s, but the switching costs.  Take my Kindle.  In a very short time I have put several hundred books on it.  Would I switch over to Apple’s formats?  No.  I’d lose my several hundred books, or at the least, I would have to fool around trying to remember which format and reader I need to use to access a particular book.  That’s a network effect.  Cell phone companies do the same with their contracts.  Ironically, I want to upgrade my iPhone and give it to my son so he gets a hand-me-down upgrade.  I couldn’t until recently because my AT&T contract wouldn’t let me.  Apple is doing the same around apps and media with the iPhone and iPad.  Facebook and Twitter, BTW, are huge because of network effects, and it will be hard to ever unplug that advantage now that it is built.

Several things need to happen very quickly in order to thwart Apple’s growing network effects.  First, people need to be aware that this is what’s happening.  Make sure you understand the network effects and lock-in that you’re becoming a participant of.  Be careful, because there are even stronger variants than what Apple is offering.  I have argued in the past with Fred Wilson about streaming media.  Get roped into allowing your media to be strictly streamed by a few providers and you are well and truly locked in.  Second, Microsoft, Google, and whoever else needs to wake up.  You have a major problem with your Product Management and Strategy people.  Why are you unable to successfully copy Apple’s products to produce a “good enough” alternative?   Microsoft, in particular, you did it before with Windows and Mac.  Your new Kin phone seems to be a flop from the get go.  Did you forget that your role is to commoditize other people’s ideas?  Sounds tawdry, and it has certainly been proclaimed as Evil, but looked at in this kind of light it really isn’t.  It’s about providing alternatives.  Google, your Android is much better, but geez, it needs to get better a lot faster.  Time is running out as Apple builds network effects.

Where are the Open Source community and standards-making bodies in all this?  Open formats and digital rights management that transfers across devices are critical.  Portability and interconnection rule when defusing network effects.  Even if big third parties like Microsoft and Google are just sailing under the Open Banner to make money, they’re doing a service by blunting a growing monopoly.  Media owners, do you want to be marginalized by Apple?  That’s what will happen.  You backed Amazon down on Kindle (first on text to speech and then on pricing) before it was too late because you were afraid.  What are you doing with Apple?  Alternate content distribution channel owners (pretty clearly I am talking to you, Netflix, and you, Amazon–maybe you two should merge), what is your plan to navigate these waters?

What can Apple do if it senses a revolt?  Lots of things.  It’s being extremely heavy handed at the moment.  I never heard anyone accuse Apple of humility, after all.  Totally blocking Flash is not necessary to preserve lock-in.  One could argue there is enough valuable Flash content that it is slightly counter-productive, in fact.  Apple is simply having trouble dealing with the meta-nature of computers.  It is hard to define a Turing-complete platform like Flash in such a way that it can’t become a Trojan horse.  In fact, it may be impossible.  Instead, it is necessary to individually vet each Flash application and have the flexibility to revoke the authorization of that app if it misbehaves.  For Adobe’s part (and any other would-be platform developer), they should facilitate allowing this kind of control.  It is better to have more Flash applications than fewer, even if they must operate in shackles in the Apple world.  Apple knows how to peacefully coexist.  Their PC business has flourished since they finally adopted Intel chips and allowed virtual machines to run Windows.

I give this another 2-3 years before it becomes impossible to change the page.  If that happens, we’re stuck with the monopoly until there is another punctuated equilibrium.  That happened with Microsoft, and it took years and the Internet to get there.  It happened with Intel’s microprocessor dominance.  In both cases, the resultant products were inferior even to many peers during their time, but the network effects were too strong and the better peers didn’t get in position soon enough.  A failure to stop one of these snowballs is how monopolies are built, and they’re extremely lucrative.  Each day you wait it becomes a little bit harder.  That’s the nature of network effects.

Posted in apple, business, flex, Marketing, Open Source, platforms, strategy | 5 Comments »

MySQL and BEA: Oracle and Sun Will Be At Each Other’s Throats!

Posted by Bob Warfield on January 16, 2008

Big news today is that Sun is buying MySQL and Oracle is buying BEA. This creates a couple of strange bedfellows, to say the least. BEA is inextricably wrapped up in Sun’s Java business (is it really a business or just a hobby, given the revenues it doesn’t produce?), which gives the two a reason to get closer together. On the other hand, there is hardly a bigger threat imaginable to Oracle’s core database server business than MySQL, which has got to push the two companies further apart. What a tangled web!  Is Sun leaving Oracle to its own devices in order to pursue cloud computing?  Sure looks like it!

Let’s analyze these moves a bit. I want to start with BEA and Oracle.

As we all know, Oracle started that courtship dance not long ago and was rebuffed for not offering enough.  Amusingly, they closed almost exactly at the midpoint of the prices the two argued were “fair” at the outset.  Meanwhile, the recession is really setting in, stock prices are falling, and Oracle’s offer went up.  Ever since Cisco’s John Chambers mused about IT spending slowing, it has become a widely accepted article of faith that it will happen. So shall it be said, so shall it be written, Mr. Chambers. That’s a very bad thing for BEA, which is primarily selling to that market. The corporate IT market is their bread and butter for a number of reasons. Many ISV’s and web companies will look to Open Source solutions like Tomcat or JBoss to reduce costs. Corporate IT wants the superior support of a big player like BEA. The darker truth is that big Java seems to be falling out of favor among the bleeding edge crowd. Java itself gets a lot of criticism, but is strong enough to take it. J2EE is another matter, though there is still a huge amount of it going on. There is also the matter of the steady ascendancy of RESTful architecture while BEA is one of the lynchpins of Big SOA.  There is already posturing about the importance of BEA to Oracle Fusion.  If it is so important, Fusion may be born with an obsolete architecture from day one.

The long and short of it is that any competent tea leaf reader (is there any such thing?) would conclude that this was a good move for BEA: let themselves be bought before their curve has crested too much more. For Oracle’s part, it’s a further opportunity to consolidate their Big Corporate IT Hegemony and to feed their acquisition-based growth machine. I am not qualified to say whether they paid too much or not, but I do think the value curve for BEA is falling and will continue to fall post-acquisition. They are way late on the innovation curve, which looks to me like it has already fallen.  In short, BEA is a pure bean counting exercise: milk the revenue tail as efficiently as possible and then move on.  For this Oracle paid $8.5B.  Not surprisingly, even though it is a much bigger transaction, there is much less about it on the blogosphere as I write this than about the other transaction.

Speaking of which, let’s turn to the Sun+MySQL combination.  Jonathan Schwartz gets a bit artsy with his blog post introducing the acquisition, which he calls “Teach dolphins to fly.”  The metaphor is apropos.  Schwartz says that MySQL is the biggest database up-and-comer in the world of network computing (that’s how we say cloud computing without offending the dolphins that haven’t figured out how to fly yet).  What Sun will bring to the table is credibility, solidity, and support.  He talks about the Fortune 500 needing all that in the guise of:

Global Enterprise Support for MySQL – so that traditional enterprises looking for the same mission critical support they’ve come to expect with proprietary databases can have that peace of mind with MySQL, as well.

That business of “proprietary databases” means Oracle.  Jonathan just fired a good-sized projectile across your bow, Mr. Ellison.  What do you think of that?

I know what I think.  Getting my tea leaf reading union card back out, I compare these two big acquisitions and walk away with a view that Oracle paid $8.5B to carve up an older steer and have a BBQ, while Sun paid $1B to buy the most promising racehorse to win the Kentucky Derby.  What a brilliant move for Sun!  Now they’ve united a couple of the big elements out there, Java being one and MySQL the other.  They could stand to add a decent scripting language, but unlike Microsoft’s typical tactics, they’ve learned not to pursue a scorched earth policy towards other platforms, so they are peacefully coexisting until a better cohabitation arrangement comes along.

We talked a little about the Oracle transaction being a good deal for BEA:  it’s a lucrative exit from declining fortunes.  What about MySQL?  Zack Urlocker comments about the rumor everyone knew, that MySQL had been poised to go public.  Let me tell you: this is a far better move.  Savvy private companies get right to the IPO altar, and then they find someone to buy them for a premium over what they would go out at.  What they gain in return is potentially huge.  The best possible example of this was VMWare.  Now look where they are.  I will argue that would not have been possible without the springboard of EMC.  At least not this quickly.  Sun offers the same potential for MySQL.  It is truly the biggest open source deal in history.  It’s also a watershed liquidity event for a highly technical, platform-based offering amid a sea of consumer web offerings.  The VCs have been pretty tepid about new deals like MySQL.  Perhaps this will help more innovations get funded.

What do others have to say about the deal?

 – Tim O’Reilly echoes the themes of the big open source deal and the importance of the database to the platform.

 – Larry Dignan picks up on my rather combative title theme by pointing out that it puts Sun at war with the major DB vendors:  Microsoft, IBM and Oracle.  Personally, I think any overt combat will hurt those three.  The Open Source movement holds the higher moral ground and it just won’t be good PR to buck that too publicly.  Dignan sounds like he is making a little light of Schwartz’s conference call remark that it is the most important acquisition in Sun’s history, but I think that is no exaggeration on Jonathan’s part.  This is a hugely strategic move that affects every aspect of how Sun interfaces with the world computing ecosystem including its customers, many partners, and its future.  When Dignan asks what else Sun needs, I would argue a decent scripting language.  Since Google already has Python in hand, what about buying a company like Zend to get a leg up on PHP?  Last point from Larry is he asks, “If Sun makes MySQL more enterprise acceptable does that diminish its mojo with startups? Does it matter?”  Bottom line: improvements for the Enterprise in no way diminish what makes MySQL attractive to startups, providing Sun minds its manners.  So far it has been a good citizen.  With regards to, “Does it matter?”  Yes, it matters hugely.  MySQL is tapped into all the megatrends that lead to the future.  Startups are a part of that.  Of course that matters.

One other thought I’ve had:  what if Sun decides to build the ultimate database appliance?  I’m talking about order it, plug your CAT5 cable in, and forget about it.  Do for databases what disk arrays did for storage.  That seems to me a powerful combination.  Database servers require a painful amount of care and feeding to install and administer properly.  If Sun can convert them to appliances, it kills two birds with one stone.  First, it becomes a powerful incentive to buy more Sun hardware.  This will even help more fully monetize MySQL, which apparently only gets revenue from 1 in 10,000 users.  Second, it could radically simplify and commoditize a piece of the software and cloud computing fabric that is currently expensive and painful.  Such a move would be a radical revolution that would perforce drive a huge revenue opportunity for Sun.  They have enough smart people between Sun and MySQL to pull it off if they have the will.

Conclusion

Sun has made an uncannily good move in acquiring MySQL.  As Wired points out:

One company that won’t be thrilled by the news is Oracle, makers of the Oracle database which has managed to seduce a large segment of the enterprise market into the proprietary Oracle on the basis that the open source options lacked support.

With Sun backing the free MySQL option (and offering paid support) Oracle suddenly looks a bit expensive.

How else can you simultaneously lay a bet on owning a substantial piece of the computing fabric that all future roads are pointing to and send a big chill down Larry Ellison’s spine for the low low price of just $1B?  Awesome move, Jonathan!

Related Articles

VARGuy says the acquisition means Sun finally matters again.  $1B is cheap to “finally matter again!”

Posted in business, enterprise software, Open Source, Partnering, platforms, saas, soa, strategy, Web 2.0 | 9 Comments »

Pile O’ LAMPs: What Would Fielding Say?

Posted by Bob Warfield on October 21, 2007

I’ve been pondering the Pile O’ Lamps concept that I first read about in Aloof Architecture and Process Perfection.  Read the posts yourself for the horse’s mouth, but to me, the Pile O’ Lamps concept is basically asking whether a computing grid of LAMP stacks is a worthwhile architectural construct that could be highly reusable for a variety of applications.  I say grid, because in my mind, it achieves maximal potential if deployed flexibly on a utility computing fabric such as Amazon EC2 where it can automatically flex to a larger cluster based on load requirements.  If it is fixed in size by configuration (which still means changeable, just not as quickly and automatically), I guess it would be more proper to call it a LAMP cluster.
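
Just to make the flexing idea concrete, here is a toy sketch of how such a controller might pick the number of LAMP nodes from a simple requests-per-node target.  The capacity numbers are invented purely for illustration:

    import math

    def desired_nodes(requests_per_sec, per_node_capacity=200, min_nodes=2):
        # Hypothetical capacity figures; a real controller would measure these.
        return max(min_nodes, math.ceil(requests_per_sec / per_node_capacity))

    for load in (50, 500, 5000):
        print(f"{load} req/s -> {desired_nodes(load)} LAMP nodes")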

LAMP refers to Linux as the OS, Apache as the web server, mySQL as the database, and a “P” language (usually PHP or Python) as the language used to implement the application.  It has become almost ubiquitous as a superfast way to bring up a new web application.  There are some shortcomings, but by and large, it remains one of the simplest ways to get the job done and still have the thing continue to work if you move into the big time.  A Pile of Lamps architecture would presumably simplify scaling by building it in at the outset rather than trying to tack it on later.

In general, I love the idea.  People are effectively doing what it calls for all the time anyway, they just do so in an ad hoc manner.  I got ambitious this Sunday morning and thought I’d drag out Fielding’s Dissertation and see how the idea stacks up.  If you’ve never had a look at Roy Fielding’s Architectural Styles and the Design of Network-Based Software Architectures, you missed out on a beautiful piece of work from the man that co-designed the Internet protocols.  This particular document sets forth the REST (Representational State Transfer) architecture.  What’s cool about it is that Fielding has a framework that he uses to evaluate the various components of REST that is applicable to a lot of other network architecture problems.  See Chapter 3 of the Dissertation for details, but that is my favorite part of the document. 

His concept is to create a scorecard for various network architectural components, and then use that scorecard together with the domain requirements of the design problem to arrive at an optimal architecture.  He says that’s how he got to REST, and it certainly seems to make sense as you read the Dissertation.  Here is a rendition of his ranking criteria for the models he considers:

Fielding Framework

A “0” means the architectural style is beneficial to some domains and not others.  Positive means the style has benefit and negative means it is a poorer choice.

The components that make up REST look like this:

RESTful according to Fielding

There are 3 components that go into it:

  • Layered Cached Stateless Client Server:  The row marked LCS+C$SS
  • Uniform Interface, which isn’t in the original Fielding taxonomy, but which he says adds the qualities listed.
  • Code on Demand:  This is the ability of the web to send code to the client for execution based on what it requests.  So, for example, Flash or AJAX.

The “RESTful Result” is simply the total of the other attributes.  You can see it hits pretty darned well on most of the categories with the exception of Network efficiency.  As noted, this primarily means it isn’t suited to extremely fine grained communication, but is fine for a web page.  Pretty cool framework, eh?
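
In other words, the scorecard is additive.  You can mimic the arithmetic with something as simple as the following; the property names and scores here are made up for illustration and are not Fielding’s actual table:

    # Hypothetical per-property scores; a derived style's score is just the sum
    # of its constituent styles' scores, as described above.
    layered_cached_stateless_cs = {"scalability": 2, "simplicity": 1, "net_efficiency": -1}
    uniform_interface           = {"scalability": 0, "simplicity": 1, "net_efficiency": -1}
    code_on_demand              = {"scalability": 0, "simplicity": -1, "net_efficiency": 1}

    def combine(*styles):
        total = {}
        for style in styles:
            for prop, score in style.items():
                total[prop] = total.get(prop, 0) + score
        return total

    print(combine(layered_cached_stateless_cs, uniform_interface, code_on_demand))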

Incidentally, Fielding’s framework really dumps on CORBA for all the right reasons.  Give it a read to see why.

Now let’s look at the Pile of Lamps.  Note that we aren’t trying to compare it to REST–they solve different problems.  Fielding tells us to do the analysis based on our domain, so put aside the RESTful scores, they aren’t meaningful to compare to anything but REST competitors.  Here is the result for Pile of Lamps:

Pile of Lamps

I view the LAMP stack as Layered Client Server, which is already a decent style.  A Pile of Lamps seems to me to be basically adding a cached and replicated capability to the LAMP stack, so I add the cached/replicated repository to the equation.  You can see that it amplifies the LAMP stack while taking nothing away.  Basically, it makes it more efficient, more scalable, and it delivers those benefits in a simple way.  This makes total sense to me, given the concept.
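
To ground the cached/replicated idea, here is a minimal read-through cache sketch.  A plain dict stands in for memcached and a function stands in for the mySQL query; it is purely an illustration of the pattern, not anyone’s actual implementation:

    cache = {}

    def fetch_user(user_id, query_db):
        key = f"user:{user_id}"
        if key in cache:             # hot path: served from the cache tier
            return cache[key]
        row = query_db(user_id)      # miss: fall through to the database
        cache[key] = row             # populate so later requests share the hit
        return row

    # Example with a fake database call; the second fetch never touches the "database".
    print(fetch_user(7, lambda uid: {"id": uid, "name": "alice"}))
    print(fetch_user(7, lambda uid: {"id": uid, "name": "alice"}))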

One can use the framework to fiddle with other potential additions to the Pile of Lamps idea.  For example, what if statelessness were pervasive in this paradigm?  I leave further refinement of the idea to readers, commenters, and the original authors, but it looks promising to me.  I’d also encourage others to delve into Fielding’s work.  It has application well beyond just describing REST.

Related Articles

A Pile of Lamps Needs a Brain

Posted in grid, multicore, Open Source, platforms, software development, strategy, Web 2.0 | 3 Comments »

Making a Business of Open Source

Posted by Bob Warfield on October 4, 2007

I have little to add here other than to say that Joe Cooper has a great essay on how you go about making a business that makes money out of what looks at first glance like free software.  It’s worth a read if you’ve ever been curious about the business of open source.  His comments about hosted open source, which is a lot like combined SaaS and Open Source, are particularly useful.

Posted in business, Open Source, saas, strategy | 1 Comment »

Platform Vendors Have to Be Switzerland

Posted by Bob Warfield on October 1, 2007

Platforms are big news these days.  Everyone wants to be a platform:  becoming a platform is even part of Yahoo’s plans to turn the company around. 

What do you look for in a platform vendor?  Yes, the features and functionality provided by the platform are important.  And yes, the community that is already there is also important.  But what about how a platform conducts itself?  Think of a platform like the popular notion of a Swiss Bank:  an extremely safe place to put your faith and assets.  A place that has no interest whatsoever in anything but safeguarding those assets.  Most platforms are not like Swiss Banks.  They are bent on World Domination.  They get confused about their loyalties when greed sets in.  They start to compete with those who placed their trust in them.  Those are the platforms you want to think twice about betting on.  Consider what it will be like to try to make a living on that platform, or to commit your data and energy to using the platform. 

Platforms often come about because a great application started a frenzy that others wanted to be part of.  Facebook is in that category.  We’ve yet to see whether Facebook will be a friendly platform owner, or predatory, but it’s something folks wonder about for obvious reasons.  Zuckerberg is saying some of the right things, at least, when he suggests that widgets should have a life both on and off Facebook.  That implies he doesn’t insist on total domination.  On the other hand, some are afraid that at least other Social Networks like LinkedIn have reason to fear.  Yet, the other Social Networks are direct competitors, not consumers of Facebook’s platform, so isn’t it kosher to fire a shot or two in their direction?  It’s still too early to tell whether Herr Zuckerberg will be a taciturn Swiss Banker, or whether he’s bent on World Domination.  I am cautiously optimistic about the early signs.

The Apple iPhone is another great-application-begets-platform story, and one that seems to have gone bad.  Clearly, Apple is not acting as Switzerland here.  They are bent on world domination.  In fact, their weapon of choice, “bricking” of iPhones, has sparked a startling backlash where once there was nothing but raves.  Steve Jobs does not want to nurture you on his platform, he wants to control your every thought and action.  I’m sure he feels it’s for your own good, but is this what we want from our platform vendor?  I should say not, and it’s been a long time since a killer app came to roost on Apple’s platform as its first and only hunting grounds.

Still other companies are in the odd role that their admirers beg them to be platforms, but they have no intention of sharing any part of their pie.  eBay has always had the view that they owned every penny of opportunity surrounding their platform.  Many have tried to join on, but eBay itself makes that all but impossible.  It isn’t surprising: their Disney executives grew up on the idea that Mickey was a franchise, not a platform, and nobody was entitled to a piece of Mickey’s pie.  Apple is very similar, particularly in view of the latest iPhone shenanigans.  In Apple’s case, they’re control freaks for the sake of their conceptual integrity, whereas with eBay, it’s just business.

Microsoft gives us an object lesson on platform owners who are not Switzerland.  It started innocently enough with things like BASIC and DOS, but soon, Microsoft wanted to compete with everyone, taking full advantage of their platform in every way they could to do so.  For a while in Silicon Valley every VC pitch had to include a discussion of why Microsoft wouldn’t just take your business away when they wanted it.  Arguably the strategy worked well for a time, but they seem to have reached the limits of it.  I see the last gasp as their abandonment of all things Java in a tiff with Sun over who could own the platform.  Microsoft will say they had little choice, but they’ve pursued it as a bit of a scorched earth policy by working overtime to ensure that the Java web and the .NET web are as incompatible as can be and still call it the web.  Consequently, they lost the hearts and minds of many who would embrace platforms.  Arguably, very little of the current cloud computing renaissance involves Microsoft as a result.

In the land of Marc Andreesen’s 3 kinds of platforms post, he mentions several of the third kind including Ning, Salesforce, and Second Life.  FWIW, I find Andreesen and Ning fit the Swiss Banker profile pretty well.  As we look back over Andreesen’s history, he seems to have done a good job nurturing platforms along the way.  Ning has no particular Social Network of their own, except to help other Social Networkers to use Ning more effectively. 

Salesforce is an interesting case.  Clearly, the other Marc (Benioff), is bent on world domination.  The question is to what degree he would compromise his platform aspirations to do so.  Perhaps he can turn into the Swiss Banker over time, but it seems an unlikely role at the moment.  He came up through the ranks of Oracle, which is not typically a breeding ground for the Swiss Banker mentality.  Still, we should watch closely and reserve judgement for a bit.  There are signs of life around Force, but it remains to be seen whether they’re higher forms of life, or just people looking to siphon off the halo effect of Salesforce’s community and greatness. 

Adobe has been a platform owner for some time with pdf and Flash.  But they’re also an application company as we were reminded by the announcements surrounding their acquisition of Web 2.0 word processor BuzzWord.  As Scoble says, they’re going for Microsoft’s throat.  But does this mean they might one day go for their platform user’s throats too?  I suspect not.  Adobe has always been very deliberate in their actions and they’ve done a good job nurturing their platforms.  BuzzWord is happening at a time when Microsoft essentially owns the document world.  Choosing to go after one of the giant platform monopolists doesn’t seem to me like bad behaviour for a platform vendor.

The ultimate Swiss Banker is of course Open Source.  How can you go wrong here?  The platform you’re depending on has been placed in the Open Source community and so how bad can things get in terms of the owner/creator being the Anti-Swiss?  Hence we see why so many prefer Open Source.  Perhaps Open Source is a way for Marc Benioff to gain full trust, though I have a terrible time seeing Force ever being an Open Source platform.  It just doesn’t seem like their style.

The next time you’re shopping for a platform, remember that platforms involve a big investment.  Try thinking of it in terms of Swiss Bankers.  Which one fits the profile?

Posted in Open Source, Partnering, platforms, saas, strategy, venture, Web 2.0 | 12 Comments »

To Escape the Multicore Crisis, Go Out Not Up

Posted by Bob Warfield on September 29, 2007

Of course, you should never go up in a burning building; go out instead.  Amazon’s Werner Voegels sees the Multicore Crisis in much the same way:

Only focusing on 50X just gives you faster Elephants, not the revolutionary new breeds of animals that can serve us better.

Voegels is writing there about Michael Stonebreaker’s claims that he can demonstrate a database architecture that outperforms conventional databases by a factor of 50X.  Stonebreaker is no one to take lightly: he’s accomplished a lot of innovation in his career so far and he isn’t nearly done.  He advocates replacing the Oracle (and mySQL) style databases (which he calls legacy databases) with a collection of special purpose databases that are optimized for particular tasks such as OLTP or data warehousing.  It’s not unlike the concept I and others have talked about, that the one-language-fits-all paradigm is all wrong and you’d do better to adopt polyglot programming.

I like Stonebreaker’s work.  While I want the ability to scale out to any level that Voegels suggests, I will take the 50X improvement as a basic building block and then scale that out if I can.  That’s a significant scaling factor even looked at in the terms of the Multicore Language Timetable.  It’s nearly 8 years of Moore’s Cycles.  I’m also mindful that databases are the doorway to the I/O side of the equation, which is often a lot harder to scale out.  Backing an engine that’s 50X faster at sucking the bits off the disk with memcached ought to lead to some pretty amazing performance.

But Voegels is right: in the long term we need to see different beasts than the elephants.  It was with that thought in mind that I’ve been reading with interest articles about Sequoia, an open source database clustering technology that makes a collection of database servers look like one more powerful server.  It can be used to increase performance and reliability.  It’s worth noting that Sequoia can be installed for any Java app using JDBC without modifying the app.  Their clever moniker for their technology is RAIDb:  Redundant Array of Inexpensive Databases.  There are different levels of RAIDb just as there are RAID levels that allow for partitioning, mirroring, and replication.  The choice of level or combinations of levels governs whether your application gets more performance, more reliability, or both.
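
For flavor, here is a minimal sketch of the mirroring (RAIDb-1 style) idea: writes go to every replica, and reads get load-balanced across them.  This is my own illustration of the concept, not Sequoia’s actual driver or API:

    import itertools

    class MirroredDatabase:
        def __init__(self, replicas):
            self.replicas = replicas
            self._reader = itertools.cycle(replicas)  # round-robin read balancing

        def write(self, key, value):
            for replica in self.replicas:             # keep every copy in sync
                replica[key] = value

        def read(self, key):
            return next(self._reader)[key]            # any replica can answer

    db = MirroredDatabase([{}, {}, {}])
    db.write("answer", 42)
    print(db.read("answer"), db.read("answer"))       # served by different replicas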

Sequoia is not a panacea, but for some types of benchmarks, such as TPC-W, it shows a nearly linear speedup as more CPUs are added.  It seems likely that a combination of approaches, such as Stonebreaker’s specialized databases for particular niches and clustering approaches like Sequoia, all running on a utility computing fabric such as Amazon’s EC2, will finally break the multicore logjam for databases.

Posted in amazon, ec2, grid, multicore, Open Source, platforms, software development | 4 Comments »

70% of the Software You Build is Wasted (Part 1 of Series of Tool/Platform Rants)

Posted by Bob Warfield on September 4, 2007

The headline gives a gruesome statistic, but it is probably understated.  At least 70% of the software you build is wasted because you are constantly reinventing the wheel by building components that do not deliver any competitive differentiation to your offering.  You have to build them for the offering to work, but they’re things that everyone in your space also provides.  Often, they are things that everyone in every space provides. 

I was having lunch with a CTO friend recently and broached this subject to him in a deliberately provocative way.  After he got past my delivery, he sighed and commented that he agreed.  Every new job he takes requires reinventing the same wheels all over again.  Another friend who is a marketing guy had exactly the same reaction even though he isn’t a techie.  He knew exactly how much was being invested in Engineering to build stuff that he couldn’t put in a press release or otherwise tout.  He referred to this work as a tax on innovation.  The non-differentiated stuff would only ever be average, even if you did it extremely well.  It would be average because it didn’t matter that it be any better than average: it wasn’t a competitive differentiator.  Therefore you couldn’t afford to make it better than average if you were focusing your business properly.

I find it incredibly bleak to consider that 70% of the lines of code being written will be average at best and will likely make no difference to the business.

Consider the example of Security to give a flavor for what I’m talking about.  In the Enterprise World where I’ve spent a lot of my career, Security is a key area of functionality that CIO’s and IT guys want to know about before they’ll even think about buying your product.  There is a long list of features and questions they have to be briefed on.  How will you interface with our LDAP server?  How will you deliver single sign on under our portal standard?  Do you support our (1 of 257 distinct formulas we’ve seen so far) exact recipe for how we want passwords and cookies to be handled by thin clients?  All of the real offerings need to have good answers for all of these questions, so it’s not giving you any proprietary advantage to solve these problems; it’s just part of the cost of doing business.

For SaaS companies, the tax is even higher.  At a perpetual software company, you install inside the firewall.  Many security issues for SaaS are taken for granted once you’re inside the firewall.  There is a lot of IT glue already in place to enforce things like password policies (it has to change every 90 days except when there is a Harvest Moon, there must be at least 13.5 characters, 1 of which is a symbol from the Greek alphabet, 3 of which are the square root of your Mother’s first pet’s name, yada, yada, yada) and to let other vendors shoulder some of the burden for things like monitoring whether the software is functioning properly.  As a SaaS vendor, you have to build all this glue that your perpetual peers take for granted.  Is it any wonder that the common wisdom has become that SaaS takes more investment capital?
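
To make that glue concrete, here is a minimal sketch of the kind of undifferentiated code every SaaS vendor ends up writing: a configurable password-policy check.  The rules below are invented for illustration only:

    import re
    from datetime import date, timedelta

    # Hypothetical policy settings; real ones come from whatever the customer demands.
    POLICY = {
        "min_length": 12,
        "require_symbol": True,
        "max_age_days": 90,
    }

    def password_ok(password, last_changed):
        if len(password) < POLICY["min_length"]:
            return False
        if POLICY["require_symbol"] and not re.search(r"[^A-Za-z0-9]", password):
            return False
        if date.today() - last_changed > timedelta(days=POLICY["max_age_days"]):
            return False   # expired: force a reset
        return True

    print(password_ok("correct-horse-battery", date.today()))

Nobody will ever buy your product because of code like this, but every enterprise checklist demands it.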

There are many more examples:

  • Forms and UI:  Most UI is really pretty similar, with a slightly different application of surface level cosmetics for branding.  This is a good thing, because it means you understand the vocabulary if not the language when you see the UI of most web software.  Yet, a staggering amount of work goes into recreating this wheel over and over again.
  • Database Connections and Persistence:  The data goes in a database in most cases, whether we’re talking mySQL or Oracle.  Isn’t it amazing how much effort still has to go into this connection lo these many years since E.F. Codd postulated relational databases?  And how long have we known that object oriented languages need a way to put objects into the database and then get them back out, something we call a “persistence layer”?  Yet, we frequently have to build or at least extensively modify some component to make this happen (see the sketch just after this list).
  • Reporting, Messaging, Data Feeds, yada, yada, yada.  The list of things companies build that already exist in some form or fashion is huge.  NIH is alive and well in today’s Software 2.0 world. 
  • Scalability:  Caches and similar contrivances get recreated over and over again as traffic builds up on your web site.
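
Here is a tiny sketch of that persistence-layer wheel, the hand-rolled object-to-row mapping teams keep rebuilding.  The standard library’s sqlite3 stands in for mySQL or Oracle; it is an illustration of the chore, not a recommendation:

    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        name: str

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    def save(user):
        # Map the object to a row by hand: the chore a persistence layer hides.
        conn.execute("INSERT OR REPLACE INTO users VALUES (?, ?)", (user.id, user.name))

    def load(user_id):
        row = conn.execute("SELECT id, name FROM users WHERE id = ?", (user_id,)).fetchone()
        return User(*row) if row else None

    save(User(1, "alice"))
    print(load(1))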

There are many more examples that I’ll leave to the readers.  How does this happen?  I blame three candidates, one of which is cultural, and the other two are technological, yet enabled by the cultural quirk:

Software Developers are Producers Not Consumers of Code

Code reusability is hard, as anyone who has tried to herd the cats (developers) all together towards some form of reusability or core technology will tell you.  Every developer will loudly proclaim that code must be reused.  They will immediately follow this up by demanding to work in a core technology group that will produce the greatest code for sharing since sliced bread.  What they’re really saying is, “Everyone should reuse code, but it has to be my code they reuse.”  Take any software developer who is well regarded by his peers, collect some of his code, destroy all comments and other information that would connect the code to the star, and give the code to another engineer telling him he has to use it or maintain it.  The recipient can be a star or just one of the troops in the trenches, it doesn’t matter.  In 99 out of 100 cases, the recipient will loudly proclaim that the code is completely unusable and will have to be rewritten. 

Programmers hate to reuse code because they hate to read and understand code.  In the old days it was called “NIH” syndrome.

By now you may be thinking that “Software Developers are Doers, Not Learners”.  In practice, I have not found this to be true, but I have found that the organizations the developers work for rarely invest in letting their developers learn, hence the end result is they’re stuck doing the same old thing in the same old Curly Braced Language:

The Tyranny of One-Size-Fits-All Curly Brace Languages

If everyone else’s code is crap, which we have established by now, I’d better have a language that lets me write anything, because you never know what I might have to write.  I’d better have a curly brace language: a real Alpha Geek’s Power Tool for Programming.  The Curly Brace Languages are C++, Java, and C#.  They are all descendants of the mighty C language, which was created in 1972 in order to build Unix, an operating system.  Because of this, C is considered a Systems Programming Language, although many used it to create Application Software.  Because Systems Programming is about creating absolutely the gnarliest, most difficult types of software, such as operating systems and even new language compilers, it has to be able to get down to the finest levels of detail without making any assumptions.  It can’t get in your way, in other words.  Wikipedia puts it amusingly by saying that Systems Software talks to hardware while Application Software talks to people.

Unfortunately, for that 70% of wheel reinvention I mention above, not getting in your way means the Curly Braced Language also doesn’t help you much.  You do all the heavy lifting!  OTOH, should you need to write code that talks to hardware (does your business really need that?), the Curly Braced Language is indispensable.  Blogger Russell Beattie puts it extremely well in Java Needs An Overhaul:

There’s something about the Java culture which just seems to encourage obtuse solutions over simplicity. 

It isn’t the culture though; it’s the language that encourages obtuse solutions.  It’s deep object oriented programming.  It’s the fact that the Curly Brace Languages have become the assembly languages of our day, and there is nothing more obtuse than a big assembly language program.  Don’t believe Curly Braces = Assembly Language? Consider the following strengths of Curly Braces, which are essentially the same as assembly language:

– You can talk directly to the hardware (yes, you can do Systems programming, but does your project really need to?)

– Because I get better performance (yes, but not in a multicore world where massive scalability not tight loops will rule)

– Because I might have to do some gnarly cool down-in-the-weeds Geek thing that can only be done by a Curly Brace Language (yes, but how often must you do this?)

Enough said.  You can get the job done with Curly Braces, but 70% of your work is wasted because you have an electron microscope when you really needed a pair of reading glasses.  Smart companies are now using more than one language, a practice called Polyglot Programming.  They know the dangers of overly focusing on a single Curly Braced Language for all programming.  I’ll have more to say about Polyglot Programming in a future post.

The Application Framework Tower of Babble

So what happens with my Curly Brace Language when I want to do Application Programming instead of Systems Programming?  Well, unless I am crazy enough to ship an OS with my application (don’t laugh, it has been done in the past!), I need help talking to the OS that’s already in place.  That’s because the Curly Braced Language is so busy not getting in your way that it doesn’t help you much either.  It can talk to your hardware, but scarcely knows your operating system.  So, the stuff that isn’t built in comes to you via the Application Framework (what used to be called libraries).  Without an Application Framework, Curly Braced Languages can’t do much except write, “Hello, World.”  Unfortunately, that’s been done and is no longer monetizable.  Time to move on.

App Frameworks are supposed to be standardized so everyone can reuse that code, but fortunately, the best thing about standards is there are so many to choose from.  I recently read a 4-part article about Web Application Frameworks for the Python language and lost count of how many different frameworks were mentioned.  It was quite a good article, but you get a sense from it that App Frameworks can become another excuse to write more code, especially if your language allows it.  The other problem with them is they are wicked hard to learn and understand.  Learning to write code in C is not bad at all.  The original book on C was 272 pages of clear, easy to read text.  The original tome for learning to write for Microsoft Windows, Charles Petzold’s classic Programming Windows, was a dense 1478 pages!  Did I mention learning Application Frameworks is hard?  That’s 5 times as much reading for the framework as for the language!

Quoting Beattie’s “Java Needs An Overhaul” on Frameworks gives a flavor of what it’s like and why it’s broken:

As a Java developer, I was always so amazed at how difficult it was to use the standard Java Class Libraries for day-to-day tasks. Every app out there ends up having to include 20MB of .jars in order to get even the simplest functionality working because Java libraries are so low-level and incomplete.

What's worse is that most of the frameworks are not that well implemented.  In fact, there is no great framework that solves the whole 70% of problems that make up the tax we're talking about here.  So many large organizations wind up writing their own framework, thereby empowering the internal crowd that wants code reuse, so long as it is their code being reused.  The proliferation of these homegrown frameworks inside large companies is so extreme that the benefits are usually lost.  Hence more 70% tax burden.

Hey Wait, What Happened to Object Oriented Programming?

Yessir, making it easier to reuse code was one of the big promises of OOP.  I love OOP and have loved it since encountering Smalltalk.  I confess I'm an odd soul: LISP was the first programming language I learned, which raises a lot of eyebrows.  I was a General in the OOP Wars where Borland C++ battled it out against Microsoft C++.  I even ran a startup that sold a Modula-2 compiler, of all things, for a little while in the '80s.  OOP is a powerful tool, but I have two criticisms of it.  First, as it is traditionally deployed by the Curly Braced Languages, it is incredibly baroque: extremely powerful, yet almost impossible to master.  Experienced devotees of the Curly Braced OOP Priesthood will tell you that these constructs are exquisitely precise in letting them refactor their code along whatever architectural designs they desire, but out of the universe of people who can write code, a much smaller universe can do object oriented programming.  This is a shame, because the original concepts for object oriented programming came out of languages like Smalltalk (and Simula) that were designed to make programming approachable by anyone.  There is a growing suspicion that OOP really doesn't help productivity much at all; I'm not yet ready to enter that camp myself.  However, more than one person has said that the reusability of C++ is not significantly better than C's, and I agree.

The second issue I have is that OOP doesn't really facilitate code reuse very well.  It isn't service oriented; it's about controlling the fine-grained behavior of objects in intricate ways.  In fact, one could argue that it makes a lot of code much harder for someone else to read and understand because of all the things that happen implicitly, in many and varied locations.  Certainly anyone who has ever walked through a complex inheritance scenario using all the OOP bells and whistles in a Curly Braced language, on code someone else wrote, will tell you it was a harrowing adventure at best.  The old computed GOTO in FORTRAN has nothing on OOP when it comes to the power to obscure meaning.
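Here's a contrived Python sketch (my own, not from anyone's codebase) of the kind of implicit, spread-out behavior I mean.  To know what save() actually does, a reader has to trace the method resolution order across a base class, a mixin, and a subclass; now imagine the same thing five levels deep in a Curly Braced codebase you didn't write.

```python
# Contrived example: behavior is scattered across a base class, a mixin,
# and a subclass, and none of it is visible at the call site.
class Base:
    def save(self):
        self.validate()      # which validate() runs depends on the subclass
        self._write()

    def validate(self):
        pass

    def _write(self):
        print("Base._write: persisting")

class AuditMixin:
    def _write(self):
        print("AuditMixin: logging the write")
        super()._write()     # resolved by the MRO, not by reading this class

class Order(AuditMixin, Base):
    def validate(self):
        print("Order.validate: checking fields")

Order().save()
# Order.validate: checking fields
# AuditMixin: logging the write
# Base._write: persisting
```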

What About Open Source?

Open Source has spawned a lot of code reuse in certain areas, more so than any other trend I've seen in my career.  Definitely more than Object Oriented Programming.  I think it's great, and I am a believer, but it has its limitations too.  A lot of Open Source code is meant either to be reused as-is or to be extensively modified.  That is, it's more like software that's so cheap you may as well reuse it than software that is actually designed to be more reusable than other software.  Truly reusable code would not need much modification to repurpose it, or the modification involved would be trivial.

That leaves Open Source reusability down to build versus buy.  If the code to be reused is sufficiently simple to just write, most developers will not select Open Source.  If the code is extremely complex, and the economics or schedule do not allow for a rewrite, Open Source comes to the rescue.  This tends to push Open Source code sharing toward the grandiose and away from the prosaic.  MySQL would be a heck of a thing for an app company to have to write before it could get on with developing expense reimbursement software.

Given this trade-off, I see most Open Source code reuse as being more a matter of module reuse.  Some fairly large and complex Open Sourced subsystem gets packaged up with some glue code and becomes the centerpiece of an important part of an application.  That’s great, but it doesn’t seem to whittle much off the 70%.
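As a hypothetical illustration of that module-reuse pattern, here's the sort of glue an expense-reimbursement app might wrap around SQLite, reused wholesale via Python's sqlite3 module.  The table and function names are made up; the point is that the Open Source subsystem is the centerpiece and the application contributes only a thin layer on top.

```python
import sqlite3

# The reused subsystem (SQLite) does the hard work; the application
# supplies only a schema and a couple of convenience calls.
def open_expense_db(path="expenses.db"):          # path is hypothetical
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS expenses (
                        id INTEGER PRIMARY KEY,
                        employee TEXT,
                        amount REAL,
                        submitted TEXT)""")
    return conn

def add_expense(conn, employee, amount, submitted):
    conn.execute(
        "INSERT INTO expenses (employee, amount, submitted) VALUES (?, ?, ?)",
        (employee, amount, submitted))
    conn.commit()
```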

What’s the Answer?

If you want to quit wasting 70% of your effort on software, you're going to have to discover a way to reuse code, preferably code that (Gasp!) other people wrote anyway.  Forget the Curly Braced Power Tool perspective for a minute and get back to the service oriented perspective, because that's where true code reuse comes from.  Use the power tools to create the proprietary advantage that you currently only get to spend 30% of your time and resources on.  For the 70% of functionality that is undifferentiated, look for a simpler, service-oriented approach, without making it so simple as to be unworkable.  That minimizes the amount of learning your developers have to do before they can reuse the code components.  This is why REST is rapidly becoming more popular than SOAP as a protocol for Service Oriented Architectures.  It's simpler.
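To show what I mean by "simpler," here's roughly what consuming a RESTful service looks like from the client side.  The endpoint is hypothetical and this is only a sketch, but notice there's no WSDL, no generated stubs, and no envelope to parse: just a URL, an HTTP GET, and a little JSON.

```python
import json
from urllib.request import urlopen

# Hypothetical REST endpoint: one URL per resource, plain HTTP verbs,
# and a JSON body.  Compare that with the tooling SOAP typically requires.
EXPENSE_URL = "https://api.example.com/expenses/42"

with urlopen(EXPENSE_URL) as response:
    expense = json.loads(response.read())
print(expense["amount"])
```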

A lot of things succeed because they are simpler.  C, in its day, was far simpler than Algol or PL/I or even COBOL.  C++ was simpler than the overblown Ada.  And Java simplified a lot of the issues that were on the C++ programmer's mind.  Lately, scripting languages like PHP, Python, and Ruby have succeeded because they're simpler than the Curly Braced Languages.

There is no one-size-fits-all, so why not choose a couple of sizes for different occasions?  Martin Fowler (author of one of my favorite books on Enterprise Patterns) puts it well when he says, “we will see multiple languages used in projects with people choosing a language for what it can do in the same way that people choose frameworks now.”  Or, as the Meme Agora blog puts it, we are entering an era of Polyglot Programming.

PS:  While you’re thinking about Polyglot Programming, consider that the Multicore Crisis is going to start kicking sand into a lot of the old machinery sometime soon anyway.  The Curly Braced Languages will be the ones hardest hit by it because they’re closest to the cores.
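For a rough sense of what "massive scalability, not tight loops" looks like in practice, here's a sketch (my own, and deliberately simplistic) of spreading coarse-grained work across cores with Python's multiprocessing module rather than hand-tuning one inner loop.

```python
from multiprocessing import Pool

# Coarse-grained parallelism: divide the work into chunks and let one worker
# per core handle each chunk, instead of micro-optimizing a single loop.
def summarize(chunk):
    return sum(chunk)    # stand-in for real per-chunk work

if __name__ == "__main__":
    chunks = [range(i, i + 1000000) for i in range(0, 8000000, 1000000)]
    with Pool() as pool:                 # defaults to one worker per core
        totals = pool.map(summarize, chunks)
    print(sum(totals))
```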

Related Articles:

Who doesn’t love Java?  (Part 2 of the Tool/Platform rants)

ESB vs REST (Another Case for Multi-Language Programming)

Java Device Drivers from Sun:  For those who don’t think you can talk to hardware in Java, here’s a detailed paper on how it works, and info on the Java Device Driver Kit (JDDK).  If you’re uncomfortable reconciling the notion of device drivers running on virtual machines, Sun makes a good case for it.  Their argument is that by letting the VM run as part of the kernel, you can create device drivers that are independent of the underlying CPU’s instruction set.  This is particularly important to Sun who have both SPARC and x86 to worry about.


Posted in Open Source, platforms, saas, Saas developers, software development | 16 Comments »

Is Support a Cost Center or a Product? (If you do SaaS or Open Source, It’s a Product!)

Posted by Bob Warfield on August 29, 2007

I always find the RedMonk blog interesting, and this time it's Coté's post on Making Money in Open Source on Support.  He says some things that got me frothing at the keyboard again. 

Developers Need Support, But It Is Seldom Offered With Enough Bandwidth

First, on the likelihood you can make money selling support to ISV’s:

“In general, I’ve found that ISV programmers (people who write applications [packaged or SaaS delivered] to sell to other companies, not “corporate developers” who write in-house applications) are less prone to use support for software, closed or open source.”

and:

“This is the kind of mentality I encounter among programmers quite a lot: it’s insulting to them to suggest that they need help.”

Let me explain about the ISV perspective, because that's the world I've lived in all my career.  This has amazingly little to do with machismo, being too cheap, or being insulted.  Rather, it has everything to do with bandwidth.  We've all used Tech Support.  Who do you know that loves it?  Sitting for hours on an 800 line, tortured by hold music and that even more painful interruption telling you how important your call is to them.  So why don't they answer, then?  How would you like to be waiting on some Tech Support guy to tell you all the standard stuff (take 2 reinstalls and call me in the morning) while some high-dollar Enterprise customer is chewing your CEO's ear off about why your mission-critical software doesn't work?  You know it's going to take 3 escalations to more senior folks before they even understand what you're trying to tell them, and meanwhile your CEO is ready to fly your entire team to Nowhere, Iowa to work at the customer's site just to placate them.  Been there, done that.

The insult is not that I need the help, it’s that you think you’re helping by doing that to me! 

If you want to make money selling support, treat it as a product, not a cost center.  Don't send me to Bangalore.  Don't put a guy on the phone with my architect who can't carry my Alpha Geek's jock strap.  Get somebody real who can go toe to toe.  But is that really a viable model?  Can you afford to hire Alpha Geeks to deliver support?  Probably not, because they will only do it if they have that rare combination of Alpha Geekdom and a craving for human contact so strong they'll take it even under the duress of Tech Support.

SaaS companies come off better in this respect because it's easier for them to put their Alpha Geeks onto the problem.  The Alpha Geeks watch the problem as it unfolds and directly access the data center to fix it.  They don't have to struggle to remote-control the customer through an On-premise fix.  Service for SaaS vendors is a product, not a cost center, but it's also a product that is cheaper for SaaS companies to deliver, because the service can often be delivered without anyone leaving their desk, and sometimes without the customer even knowing.  The latter is a problem too if you want the customer to value the service, but we'll save that for later.

Despite all that, Enterprise ISVs still spend time on the phone with companies like Oracle and Microsoft trying to get help, and they keep their maintenance paid up because you never know. 

Support Types

Coté offers a good list of the types of support provided:  bugs, scaling, configuring, upgrades, finger pointing (or proving who's "right": the software or the user), re-setting expectations (the software actually won't scramble an egg inside its shell like the sales guy said), and your million dollar nightmare (customizing and supporting out-of-date deployments).

He goes on to suggest some other types peculiar to open source:  Updates, Platform Certification (“Stacks”), Product Improvements and New Features.  Yup, been there and delivered those.  There are two that deserve more discussion: being an advocate to the community, and professional services customization gigs.

On being an advocate to the community, this seems so far removed from the realm of what Tech Support does that I wouldn't even include it here.  I've seen this work best when Marketing or Sales handled it as part of customer relationship management.  Yes, Tech Support should be able to do it, but I've rarely seen the skill set and mentality located there to be successful.

On professional services to customize, yes, this is a very real opportunity to sell more product.  It isn't really a support issue, though.  Yes, many open source consumers will just modify the code themselves, but I would hesitate to claim that happens the majority of the time.  Do see my thoughts on Professional Services End Runs below, though, as they are something you must guard against.

I want to cover a few types I’ve run across that are also biggies for support but aren’t mentioned:

Education:  A tremendous number of Technical Support calls boil down to the customer needing to ask a question.  The question may be so complex or confusing that it gets escalated all the way to the Chief Architect, but it's still a valid role.  Answering questions is important, and I would think even more so for Open Source, where the questions may have to do with how the code operates.

Professional Services End Runs:  This has been the source of more bad feelings all around than any other phenomenon I’ve seen.  Here is the scenario.  The customer buys the product, but for a variety of reasons they do not have the money to get it properly installed:

–  They picked a bad VAR (low cost bidder, anyone?) who spent the budget without delivering.

–  They never could get enough budget internally for a proper engagement, and they choose to finesse it this way because they're cheap.  Don't laugh: I've had customers 'fess up that this is exactly what's happening and then persuade CEOs to deal with it in the interest of future business.

–   The absolute worst:  Your own professional services people failed.  It may not even be their fault, but now the Customer has Righteous Indignation on their side and you will get that software installed or else!

–  Almost as bad:  Your organization measures performance and pays on those measurements in such a way that one organization throws another under the bus to make its bonus.  Professional Services throws Tech Support under the bus.  Or it's implicit: Tech Support hires cheap people who can't deliver, and everything gets escalated to Engineering.  Yuck!

Companies that make Services a Product instead of a Cost Center are much better set up to think about these problems and deliver a better experience to their customers.

All your tech support are belong to us!

I didn’t see much rumination about Oracle’s attempt to steal Red Hat’s business.  One of the problems of basing your model around selling support for an open source product is differentiation.  Perhaps you can achieve better focus, but someone else can take a credible shot at stealing your business.  At the very least this will exert downward pressure on your pricing.


Posted in business, Open Source, saas, strategy | 4 Comments »