SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for the ‘soa’ Category

MySQL and BEA: Oracle and Sun Will Be At Each Other’s Throats!

Posted by Bob Warfield on January 16, 2008

Big news today is that Sun is buying MySQL and Oracle is buying BEA. This creates a couple of strange bedfellows to say the least. BEA is inextricably wrapped up in Sun’s Java business (is it really a business or just a hobby given the revenues it doesn’t produce?), which gives the two a reason to get closer together. On the other hand, there is hardly a bigger threat imaginable to Oracle’s core database server business than MySQL, which has got to push the two companies further apart. What a tangled web!  Is Sun leaving Oracle to its own devices in order to pursue cloud computing?  Sure looks like it!

Let’s analyze these moves a bit. I want to start with BEA and Oracle.

As we all know, Oracle started that courtship dance not long ago and was rebuffed for not offering enough.  Amusingly, they closed almost exactly at the midpoint of the prices the two argued were “fair” at the outset.  Meanwhile, the recession is really setting in, stock prices are falling, and Oracle’s offer went up.  Ever since Cisco’s John Chambers mused that IT spending would slow, it has become a widely accepted article of faith that it will. So shall it be said, so shall it be written, Mr. Chambers. That’s a very bad thing for BEA, which sells primarily into that market. The corporate IT market is their bread and butter for a number of reasons. Many ISV’s and web companies will look to Open Source solutions like Tomcat or JBoss to reduce costs. Corporate IT wants the superior support of a big player like BEA. The darker truth is that big Java seems to be falling out of favor among the bleeding edge crowd. Java itself gets a lot of criticism, but is strong enough to take it. J2EE is another matter, though there is still a huge amount of it going on. There is also the matter of the steady ascendancy of RESTful architecture, while BEA is one of the lynchpins of Big SOA.  There is already posturing about the importance of BEA to Oracle Fusion.  If it is so important, Fusion may be born with an obsolete architecture from day one.

The long and the short is that any competent tea leaf reader (is there any such thing?) would conclude that this was a good move for BEA: let themselves be bought before their curve crests much further. For Oracle’s part, it’s a further opportunity to consolidate their Big Corporate IT Hegemony and to feed their acquisition-based growth machine. I am not qualified to say whether they paid too much, but I do think the value curve for BEA is falling and will continue to fall post-acquisition. They are way late on the innovation curve, which looks to me to have already crested.  In short, BEA is a pure bean counting exercise: milk the revenue tail as efficiently as possible and then move on.  For this Oracle paid $8.5B.  Not surprisingly, even though it is a much bigger transaction, there is much less about it in the blogosphere as I write this than about the other transaction.

Speaking of which, let’s turn to the Sun+MySQL combination.  Jonathan Schwartz gets a bit artsy with his blog post introducing the acquisition, which he calls “Teach dolphins to fly.”  The metaphor is apropos.  Schwartz says that MySQL is the biggest up-and-coming database in the world of network computing (that’s how we say cloud computing without offending the dolphins that haven’t figured out how to fly yet).  What Sun will bring to the table is credibility, solidity, and support.  He talks about the Fortune 500 needing all that in the guise of:

Global Enterprise Support for MySQL – so that traditional enterprises looking for the same mission critical support they’ve come to expect with proprietary databases can have that peace of mind with MySQL, as well.

That business of “proprietary databases” means Oracle.  Jonathan just fired a good sized projectile across your bow, Mr. Ellison.  What do you think of that?

I know what I think.  Getting my tea leaf reading union card back out, I compare these two big acquisitions and walk away with a view that Oracle paid $8.5B to carve up an older steer and have a BBQ, while Sun paid $1B to buy the most promising race horse running in the Kentucky Derby.  What a brilliant move for Sun!  Now they’ve united a couple of the big elements out there, Java being one and MySQL the other.  They could stand to add a decent scripting language, but unlike Microsoft’s typical tactics, they’ve learned not to pursue a scorched earth policy towards other platforms, so they are peacefully coexisting until a better cohabitation arrangement comes along.

We talked a little about the Oracle transaction being a good deal for BEA:  it’s a lucrative exit from declining fortunes.  What about MySQL?  Zack Urlocker comments about the rumor everyone knew, that MySQL had been poised to go public.  Let me tell you: this is a far better move.  Savvy private companies get right to the IPO altar, and then they find someone to buy them for a premium over what they would go out at.  What they gain in return is potentially huge.  The best possible example of this was VMWare.  Now look where they are.  I will argue that would not have been possible without the springboard of EMC.  At least not this quickly.  Sun offers the same potential for MySQL.  It is truly the biggest open source deal in history.  It’s also a watershed liquidity event for a highly technical, platform-based offering amid a sea of consumer web offerings.  The VC’s have been pretty tepid about new deals like MySQL.  Perhaps this will help more innovations get funded.

What do others have to say about the deal?

 – Tim O’Reilly echoes the themes of big open source and the importance of the database to the platform.

 – Larry Dignan picks up on my rather combative title theme by pointing out that it puts Sun at war with the major DB vendors:  Microsoft, IBM, and Oracle.  Personally, I think any overt combat will hurt those three.  The Open Source movement holds the higher moral ground, and it just won’t be good PR to buck that too publicly.  Dignan sounds like he is making a little light of Schwartz’s conference call remark that it is the most important acquisition in Sun’s history, but I think that is no exaggeration on Jonathan’s part.  This is a hugely strategic move that affects every aspect of how Sun interfaces with the world computing ecosystem, including its customers, many partners, and its future.  When Dignan asks what else Sun needs, I would argue a decent scripting language.  Since Google already has Python in hand, what about buying a company like Zend to get a leg up on PHP?  Larry’s last point is a pair of questions: “If Sun makes MySQL more enterprise acceptable does that diminish its mojo with startups? Does it matter?”  Bottom line: improvements for the Enterprise in no way diminish what makes MySQL attractive to startups, provided Sun minds its manners.  So far it has been a good citizen.  With regard to “Does it matter?”  Yes, it matters hugely.  MySQL is tapped into all the megatrends that lead to the future.  Startups are a part of that.  Of course that matters.

One other thought I’ve had:  what if Sun decides to build the ultimate database appliance?  I’m talking order it, plug in your CAT5 cable, and forget about it.  Do for databases what disk arrays did for storage.  That seems to me a powerful combination.  Database servers require a painful amount of care and feeding to install and administer properly.  If Sun can convert them to appliances, it kills two birds with one stone.  First, it becomes a powerful incentive to buy more Sun hardware.  This will even help more fully monetize MySQL, which apparently only gets revenue from 1 in 10,000 users.  Second, it could radically simplify and commoditize a piece of the software and cloud computing fabric that is currently expensive and painful.  Such a move would be a radical step that would perforce drive a huge revenue opportunity for Sun.  They have enough smart people between Sun and MySQL to pull it off if they have the will.

Conclusion

Sun has made an uncannily good move in acquiring MySQL.  As Wired points out:

One company that won’t be thrilled by the news is Oracle, makers of the Oracle database which has managed to seduce a large segment of the enterprise market into the proprietary Oracle on the basis that the open source options lacked support.

With Sun backing the free MySQL option (and offering paid support) Oracle suddenly looks a bit expensive.

How else can you simultaneously lay a bet on owning a substantial piece of the computing fabric that all future roads are pointing to and send a big chill down Larry Ellison’s spine for the low low price of just $1B?  Awesome move, Jonathan!

Related Articles

VARGuy says the acquisition means Sun finally matters again.  $1B is cheap to “finally matter again!”

Posted in business, enterprise software, Open Source, Partnering, platforms, saas, soa, strategy, Web 2.0 | 9 Comments »

Coté’s Excellent Description of the Microsoft Web Rift

Posted by Bob Warfield on January 2, 2008

Coté’s latest RedMonk post perfectly captures my reservations about Microsoft, which I refer to as their “rift with the web”.  Here are the relevant passages:

Microsoft frameworks are plagued by lock-in fears. That is, you’re either a 100% Microsoft coder or a 0% Microsoft coder. Sure, that’s an exaggeration, but the more nuanced consequences are that something intriguing like Astoria will play best with Microsoft coders, unlike Amazon’s web services which will play well with any coder.

This thing he calls “lock-in fear,” and the extreme polarization (encouraged by Microsoft’s rhetoric, tactics, and track record) that says you’re either all-Microsoft or no-Microsoft, is my “web rift”.  I’ve written about this a couple of times before, and it never fails to raise the ire of some Microsoft fan or other.

This particular RedMonk post is chiefly concerned, it seems to me, with making sure that Microsoft’s Astoria doesn’t disappear under the continuous din of innovation Amazon is putting up for us lately around their cloud computing services.  The essential new thing about Astoria in Coté’s mind is that it is a RESTful framework rather than a .NET framework:

If you’re just coding to a URL, that’s not quite so bad as coding to a .Net library and all the Microsoft baggage and tool-chain needed to support that.

I agree, but I think Coté has tangled together two issues that don’t have to be: how software components communicate versus whether a solution is hosted/SaaS or not.  One can imagine components communicating RESTfully without any need for them to be hosted in SaaS fashion.  My own bias would be to go ahead and host, but it’s not a requirement, and to put the two together as one is conflating the issue (don’t you love that word “conflate” that has drifted into common use here in the valley?).  Coté’s contention that hosting itself will reduce fears of lock-in is also pretty hard to swallow.  While I am again a whole-hearted advocate of hosting, giving your software over to a hosted environment and all of its attendant API’s (RESTful or not) is a big step towards lock-in, no matter how you look at it.  This is again an area where Microsoft’s old school monopolist behavior won’t serve it well.  There will be fear, perhaps unreasonable, that Microsoft will take unfair advantage if handed the keys to your kingdom by hosting on their cloud infrastructure.  The problem is that they’ve failed to conduct themselves as the Swiss do in matters of banking.  They are voracious competitors and seemingly always will be.  It isn’t enough for them to win; others must lose.
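To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical endpoint names) of what “just coding to a URL” looks like.  The client code is identical whether the component is self-hosted in your own data center or delivered SaaS-style from somebody’s cloud; only the base URL changes, which is exactly why the two issues don’t have to be conflated:

    import json
    import urllib.request

    def get_customer(base_url, customer_id):
        # Fetch a record from a RESTful endpoint; no vendor library or
        # tool-chain required, just HTTP and a URL.
        url = "%s/customers/%s" % (base_url, customer_id)
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read())

    # Same code, different deployment decision:
    # get_customer("http://localhost:8080", "42")         # self-hosted
    # get_customer("https://api.example-saas.com", "42")  # hosted/SaaS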

There are several other interesting points made in the post.  For example, on the issue of hosting, Coté wonders why big companies are so slow to launch.  It’s very true; we’ve seen it with all the big players.  The answer is perhaps that they have more to lose and are more likely to reach the win-lose decision point too quickly and with too much momentum to gracefully correct the problems.  Startups have a distinct advantage in this.  It’s not clear to me why more big companies don’t fund startups expressly to deal with the issue, with the intention of acquiring them later if things work out.

Coté also makes a hugely important point about the value of self-service:

My sense is that unless it’s all delivered as a URL with dead-simple docs and pricing (check out the page for SimpleDB), any given technology won’t work out at web-scale.

Put another way, these new technologies need to be completely self-service. If a developer has to ever talk with a human from the company or team offering the project, something has gone wrong.

Self-service is a crucial part of viral growth potential in today’s world.  Any company that releases a product without at least a semblance of a plan for how to make it self-service at some point down the road is laying the foundations for failure.

Related Articles

Latest Microsoft Office service packs decommit support for Older File Formats: Especially Competitors

This is typical of Microsoft’s “we don’t have to be nice, we’re the phone company” (apologies to Lily Tomlin) behavior.  Accessing your old files requires you to delve into the registry.  Microsoft claims these old files are a security risk.  It’s tacky, it’s more lock-in, and it’s more evidence that MSFT is up to their same old tricks.

Posted in amazon, platforms, saas, soa, software development, strategy | 8 Comments »

Eventual Consistency Is Not That Scary

Posted by Bob Warfield on December 22, 2007

Amazon’s new SimpleDB offering, like many other post-modern databases such as CouchDB, offers massive scaling potential if users will accept eventual consistency.  It feels like a weighty decision.  Cast in the worst possible light, eventual consistency means the database will sometimes return the wrong answer in the interests of allowing it to keep scaling.  Gasp!  What good is a database that returns the wrong answer?  Why bother? 

Often waiting for the write answer (sorry, that inadvertent slip makes for a good pun, so I’ll leave it in place) returns a different kind of wrong answer.  Specifically, it may not return an answer at all.  The system may simply appear to hang.

How does all this come about?  Largely, it’s a function of how fast changes in the database can be propagated to the point they’re available to everyone reading from the database.  For small numbers of users (i.e. we’re not scaling at all), this is easy.  There is one copy of the data sitting in a table structure, we lock up the readers so they can’t access it whenever we change that data, and everyone always gets the right answer.  Of course, solving simple problems is always easy.  It’s solving the hard problems that lands us the big bucks.  So how do we scale that out?  When we reach a point where we are delivering that information from that one single place as fast as it can be delivered, we have no choice but to make more places to deliver from.  There are many different mechanisms for replicating the data and making it all look like one big happy (but sometimes inconsistent) database; let’s look at them.

Once again, this problem may be simpler when cast in a certain way.  The most common and easiest approach is to keep one single structure as the source of truth for writing, and then replicate out changes to many other databases for reading.  All the common database software supports this.  If your single database could handle 100 users consistently, you can imagine that if each of those 100 users were instead another database you were replicating to, suddenly you could handle 100 * 100 users, or 10,000 users.  Now we’re scaling.  There are schemes to replicate the replicas, and so on.  Note that in this scenario, all writing must still be done on the one single database.  This is okay, because for many problems, perhaps even the majority, readers far outnumber writers.  In fact, this works so well that we may not even use databases for the replication.  Instead, we might consider a vast in-memory cache.  Software such as memcached does this for us quite nicely, with another order of magnitude performance boost since reading things in memory is dramatically faster than trying to read from disk.
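Here is a minimal sketch of that single-writer, many-reader pattern (the class and method names are hypothetical, not any particular product’s API): all writes funnel through the one source of truth, reads spread across the replicas, and a memcached-style layer absorbs the hottest reads.

    import random

    class ReadScaledStore:
        def __init__(self, primary, replicas, cache):
            self.primary = primary    # the single source of truth for writes
            self.replicas = replicas  # read-only copies fed by replication
            self.cache = cache        # in-memory layer, e.g. memcached

        def write(self, key, value):
            self.primary.put(key, value)  # all writes go through one place
            self.cache.delete(key)        # invalidate; replicas catch up later

        def read(self, key):
            value = self.cache.get(key)   # memory first: fastest by far
            if value is None:
                value = random.choice(self.replicas).get(key)  # spread the read load
                self.cache.set(key, value)
            return value                  # may be stale until replication finishes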

Okay, that’s pretty cool, but is it consistent?  This will depend on how fast you can replicate the data.  If you can get every database and cache in the system up to date between consecutive read requests, you are sure to be consistent.  In fact, it just has to get done between read requests for any piece of data that changed, which is a much lower bar to hurdle.  If consistency is critical, the system may be designed to inhibit reading until changes have propagated.  It takes some very clever algorithms to do this well without throwing a spanner into the works and bringing the system to its knees performance-wise.
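One way to picture “inhibit reading until changes have propagated” is to have the Mother Server stamp every write with a sequence number and only let a replica answer a read once it has applied at least that sequence.  A minimal sketch, assuming hypothetical replica objects that expose their applied sequence number:

    import time

    def consistent_read(key, replicas, min_seq, timeout=1.0):
        # Serve the read only from a replica that has caught up to min_seq.
        deadline = time.time() + timeout
        while time.time() < deadline:
            for replica in replicas:
                if replica.applied_seq >= min_seq:  # far enough along?
                    return replica.get(key)
            time.sleep(0.01)  # wait for replication to advance
        raise TimeoutError("no replica caught up; reads are inhibited")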

Still, we can get pretty far.  Suppose your database can service 100 users with reads and writes and keep it all consistent with appropriate performance.  Let’s say we replace those 100 users with 100 copies of your database to get up to 10,000 users.  It’s now going to take twice as long.  During the first half, we’re copying changes from the Mother Server to all of the children.  The second half, we’re serving the answers to the readers requesting them.  Let’s say we can keep the overall time the same just by halving how many are served.  So the Mother Server talks to 50 children.  Now we can scale to 50 * 50 = 2500 users.  Not nearly as good, but still much better than not scaling at all.  We can go 3 layers deep and have Mother serve 33 children, each serving 33 grandchildren, to get to 33 * 33 * 33 = 35,937 users.  Not bad, but Google’s founders can still sleep soundly at night.  The reality is we probably can handle a lot more than 100 on our Mother Server.  Perhaps she’s good for 1000.  Now the 3-layered scheme will get us all the way to 333*333*333 = 36 million.  That starts to wake up the sound sleepers, or perhaps makes them restless.  Yet that also means we’re using over 100,000 servers: 1 Mother talks to 333 children, who each have 333 grandchildren.  It’s a pretty wasteful scheme.
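The arithmetic above is just exponents.  Here it is as a snippet (a sketch matching the numbers in the paragraph) you can play with:

    def tree_capacity(fan_out, depth):
        # A replication tree serves fan_out ** depth readers, at the cost
        # of roughly that many servers.
        return fan_out ** depth

    print(tree_capacity(100, 2))  # 10,000 users from one layer of replicas
    print(tree_capacity(33, 3))   # 35,937 users
    print(tree_capacity(333, 3))  # ~36.9 million users -- and over 100,000 servers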

Well, let’s bring in Eventual Consistency to reduce the waste.  Assume you are a startup CEO.  You are having a great day, because you are reading the wonderful review of your service in Techcrunch.  It seems like the IPO will be just around the corner after all that gushing does its inevitable work and millions suddenly find their way to your site.  Just at the peak of your bliss, the CTO walks in and says she has good news and bad news.  The bad news is the site is crashing and angry emails are pouring in.  The other bad news is that to fix it “right”, so that the data stays consistent, she needs your immediate approval to purchase 999 servers so she can set up a replicated scheme that runs 1 Mother Server (which you already own) and 999 children.  No way, you say.  What’s the good news?  With a sly smile, she tells you that if you’re willing to tolerate a little eventual consistency, your site could get by on a lot fewer servers than 999.

Suppose you are willing to have it take twice as long as normal for data to be up to date.  The readers will read just as fast; it’s just that if they’re reading something that changed, it won’t be correct until the second consecutive read or page refresh.  Our old model could handle 1,000 users on one server, replicate to 999 servers to handle 1 million users, and had to go to 3 tiers (333 * 333 * 333) to reach the next level at 36 million while still serving everything consistently and just as fast.  If we relax the “just as fast”, we can let our Mother Server feed 2,000 children at half the speed and reach 2000 * 1000 = 2 million users with about 2,000 servers.  If we run 4x slower on writes, we can get 4000 * 1000 = 4 million users with 4,000 servers, instead of needing 100,000 servers to push into those ranges consistently.  Eventually things will bog down and thrash, but you can see how tolerating Eventual Consistency can radically reduce your machine requirements in this simple architecture.  BTW, we all run into Eventual Consistency all the time on the web, whether or not we know it.  I use Google Reader to read blogs and WordPress to write this blog.  Any time a page refresh shows you a different result when you didn’t change anything, you may be looking at Eventual Consistency.  Even when I suspect others changed something, Google Reader still comes along frequently, says an error occurred, and asks me to refresh.  It’s telling me they relied on Eventual Consistency and I have an inconsistent result.

As I mentioned, these approaches can still be wasteful of servers because of all the data copies that are flowing around.  This leads us to wonder, “What’s the next alternative?”  Instead of just using servers to copy data to other servers, which is a prime source of the waste, we could try to employ what’s called a sharded or Federated architecture.  In this approach, there is only one copy of each piece of data, but we’re dividing up that data so that each server is only responsible for a small subset of it.  Let’s say we have a database keeping up with our inventory for a big shopping site.  It’s really important to have it be consistent so that when people buy, they know the item was in stock.  Hey, it’s a contrived example and we know we can cheat on it, but go with it.  Let’s further suppose we have 100,000 SKU’s, or different kinds of items, in our inventory.  We can divide this across 100 servers by letting each server be responsible for 1,000 items.  Then we write some code that acts as the go-between with the servers.  It simply checks the query to see what you are looking for and sends your query to the correct sub-server.  Voila, you have a sharded architecture that scales very efficiently.  Our replicated model would blow out 99 copies from the 1 server, and it could be about 50 times faster on reads (or handle 50x the users, using my gross one-half estimate for replication time), but it was no faster at all on writes.  That wouldn’t work for our inventory problem because writes are so common during the Christmas shopping season.
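A minimal sketch of that go-between (names are hypothetical; a range scheme is shown because it matches the example, though real systems often hash the key instead):

    SHARDS = 100
    SKUS_PER_SHARD = 1000  # 100,000 SKUs / 100 servers

    def shard_for(sku_id):
        # SKUs 0-999 live on shard 0, 1000-1999 on shard 1, and so on.
        return sku_id // SKUS_PER_SHARD

    def check_inventory(servers, sku_id):
        # Only one server holds the single copy of this row, so only that
        # server is consulted -- reads and writes both scale this way.
        return servers[shard_for(sku_id)].get_stock(sku_id)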

Now what are the pitfalls of sharding?  First, there is some assembly required.  Actually, there is a lot of assembly required.  It’s complicated to build such architectures.  Second, it may be very hard to load balance the shards.  Just dividing up the product inventory across 100 servers is not necessarily helpful.  You would want to use a knowledge of access patterns to divide the products so the load on each server is about the same.  If all the popular products wound up on one server, you’d have a scaling disaster.  These balances can change over time and have to be updated, which brings more complexity.  Some say you never stop fiddling with the tuning of a sharded architecture, but at least we don’t have Eventual Consistency.  Hmmm, or do we?  If you can ever get into a situation where there is more than one copy of the data and the one you are accessing is not up to date, Eventual Consistency could rear up as a design choice made by the DB owners.  In that case, they just give you the wrong answer and move on.

How can this happen in the sharded world?  It’s all about that load balancing.  Suppose our load balancer needs to move some data to a different shard.  Suppose the startup just bought 10 more servers and wants to create 10 additional shards.  While that data is in motion, there are still users on the site.  What do we tell them?  Sometimes companies can shut down the service to keep everything consistent while changes are made.  Certainly that is one answer, but it may annoy your users greatly.  Another answer is to tolerate Eventual Consistency while things are in motion, with a promise of a return to full consistency when the shards are done rebalancing.  Here is a case where the Eventual Consistency doesn’t last all that long, so maybe that’s better than the case where it happens a lot.
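This rebalancing pain is why many sharded systems reach for consistent hashing (my example, not a technique the post itself names): when a shard is added, only the keys adjacent to its position on the hash ring move, so far less data is ever in motion.  A minimal sketch:

    import bisect
    import hashlib

    class ConsistentHashRing:
        def __init__(self, nodes):
            self._ring = sorted((self._hash(n), n) for n in nodes)

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        def add_node(self, node):
            # Only the keys between the new node and its ring neighbor
            # must move; everything else stays put.
            bisect.insort(self._ring, (self._hash(node), node))

        def node_for(self, key):
            h = self._hash(key)
            idx = bisect.bisect(self._ring, (h, ""))  # first node clockwise of key
            return self._ring[idx % len(self._ring)][1]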

Note that consistency is often in the eye of the beholder.  If we’re talking Internet users, ask yourself how much harm there would be if a page refresh delivered a different result.  In many applications, the user may even expect or welcome a different result.  An email program that suddenly shows mail after a refresh is not at all unexpected.  That the user didn’t know the mail was already on the server at the time of the first refresh doesn’t really hurt them.  There are cases where absolute consistency is very important.  Go back to the sharded database example.  It is normal to expect every single product in the inventory to have a unique id that lets us find it.  Those ids have to be unique and consistent across all of the shards.  It is crucially important that any id changes are up to date before anything else is done, or the system can get really corrupted.  So, we may create a mechanism to generate consistent ids across shards.  This adds still more architectural complexity.
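One common scheme for such a mechanism (a sketch of the general idea, not any particular vendor’s design) is to give each shard a disjoint slice of the id space, so ids are unique by construction with no cross-shard coordination at write time:

    class ShardIdGenerator:
        def __init__(self, shard_id, num_shards):
            self.shard_id = shard_id
            self.num_shards = num_shards
            self._counter = 0

        def next_id(self):
            # Shard 2 of 100 issues 2, 102, 202, ...; shard 3 issues
            # 3, 103, 203, ... -- never a collision, never a lock.
            new_id = self._counter * self.num_shards + self.shard_id
            self._counter += 1
            return new_id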

There are nightmare scenarios where it becomes impossible to shard efficiently.  I will oversimplify to make it easy (and not necessarily correct), but I hope you will get the idea.  Suppose you’re dealing with operations that affect many different objects.  The objects divide into shards naturally when examined individually, but the operations between the objects span many shards.  Perhaps the relationships are such that there is no way to divide the objects across machines without every single operation hitting many shards instead of one.  Hitting many shards invalidates the sharding approach.  In times like this, we will again be tempted to opt for Eventual Consistency.  We’ll get to hitting all the shards in our own sweet time, and any accesses before that update is finished will just live with inconsistent results.  Such scenarios can arise where there is no obvious good sharding algorithm, or where the relationships between the objects (perhaps it’s some sort of real time collaborative application where people are bouncing around touching objects unpredictably) change much too quickly to rebalance the shards.  One really common case of an operation hitting many shards is queries.  You can’t anticipate all queries such that any of them can be processed within a single shard unless you sharply limit the expressiveness of the query tools and languages.
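The query problem boils down to scatter-gather: if you can’t tell which shards hold matching rows, you must ask all of them and merge the results, which forfeits most of sharding’s benefit.  A minimal sketch, assuming shard objects with a hypothetical select method:

    from concurrent.futures import ThreadPoolExecutor

    def scatter_gather_query(shards, predicate):
        # Every shard gets hit ("scatter")...
        with ThreadPoolExecutor(max_workers=len(shards)) as pool:
            partials = list(pool.map(lambda s: s.select(predicate), shards))
        # ...and the partial results are merged ("gather"). The whole
        # query runs at the speed of the slowest shard.
        results = []
        for part in partials:
            results.extend(part)
        return results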

I hope you come away from this discussion with some new insights:

–  Inconsistency derives from having multiple copies of the data that are not all in sync.

–  We need multiple copies to scale.  This is easiest for reads.  Scaling writes is much harder.

–  We can keep copies consistent at the expense of slowing everything down to wait for consistency.  The savings in relaxing this can be quite large.

–  We can somewhat balance that expense with increasingly complex architecture.  Sharding, for example, is more efficient than replication, but it gets very complex and can still break down.

–  It’s still cheaper to allow for Eventual Consistency, and in many applications, the user experience is just as good.

Big web sites realized all this long ago.  That’s why sites like Amazon have systems like SimpleDB and Dynamo that are built from the ground up with Eventual Consistency in mind.  You need to look very carefully at your application to know what’s good or bad, and also understand what the performance envelope is for the Eventual Consistency.  Here are some thoughts from the blogosphere:

Dare Obasanjo

The documentation for the PutAttributes method has the following note:

Because Amazon SimpleDB makes multiple copies of your data and uses an eventual consistency update model, an immediate GetAttributes or Query request (read) immediately after a DeleteAttributes or PutAttributes request (write) might not return the updated data.

This may or may not be a problem depending on your application. It may be OK for a del.icio.us style application if it took a few minutes before your tag updates were applied to a bookmark but the same can’t be said for an application like Twitter. What would be useful for developers would be if Amazon gave some more information around the delayed propagation such as average latency during peak and off-peak hours.

Here I think Dare’s example of Twitter suffering from Eventual Consistency is interesting.  In Twitter, we follow micro-blog postings.  What would be the impact of Eventual Consistency?  Of course it depends on the exact nature of the consistency, but let’s look at our replicated reader approach.  Recall that in the Eventual Consistency version, we simply tolerate reads coming in so fast that some of the replicated read servers are not up to date.  However, they are up to date with respect to a certain point in time, just not necessarily the present.  In other words, I could read at 10:00 am and get results on one server that are up to date through 10:00 am and on another results only up to date through 9:59 am.  For Twitter, depending on which server my session is connected to, my feeds may update a little behind the times.  Is that the end of the world?  For Twitter users, if they are engaged in a real time conversation, it means the person with the delayed feed may write something that looks out of sequence to the person with the up to date feed whenever the two are in a back and forth chat.  OTOH, if Twitter degraded to that mode rather than taking longer and longer to accept input or do updates, wouldn’t that be better?
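For completeness, here’s a sketch of one way an application can paper over the read-after-write gap the SimpleDB docs describe: retry the read briefly until it reflects the write, then settle for the eventually consistent answer.  The store client and its version field are hypothetical, not the actual SimpleDB API:

    import time

    def read_own_write(store, key, expected_version, retries=5, delay=0.2):
        for _ in range(retries):
            item = store.get(key)
            if item is not None and item.version >= expected_version:
                return item       # propagation caught up with us
            time.sleep(delay)     # give replication a moment
        return store.get(key)     # accept the eventually consistent answer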

Erik Onnen

Onnen wrote a post called “Socializing Eventual Consistency” that has two important points.  First, many developers are not used to talking about Eventual Consistency.  The knee jerk reaction is that it’s bad, not the right thing, or an unnecessary compromise for anyone but a huge player like Amazon.  It’s almost like a macho thing.  Onnen lacked the right examples and vocabulary to engage his peers when it was time to decide about it.  Hopefully all the chatter about Amazon’s SimpleDB and other massively scalable sites will get more familiarity flowing around these concepts.  I hope this article also makes it easier.

His other point is that when push comes to shove, most business users will prefer availability over consistency.  I think that is a key point.  It’s also a big takeaway from the next blog:

Werner Vogels

Amazon’s CTO posted to try to make Eventual Consistency and its tradeoffs clearer for all.  He lays a lot of good theoretical groundwork that boils down to explaining that there are tradeoffs and you can’t have it all.  This is similar to the message I’ve tried to convey above.  Eventually, you have to keep multiple copies of the data to scale.  Once that happens, it becomes harder and harder to maintain consistency and still scale.  Vogels provides a full taxonomy of concepts (i.e. Monotonic Write Consistency et al) with which to think about all this and evaluate the tradeoffs.  He also does a good job pointing out how often even conventional RDBMS’s wind up dealing with inconsistency.  Some of the best (and least obvious to many) examples include the idea that your mechanism for backups is often not fully consistent.  The right answer for many systems is to require that writes always work, but that reads are only eventually consistent.

Conclusion

I’ve covered a lot of the consistency-related tradeoffs involved in database systems for large web architectures.  Rest assured that unless you are pretty unsuccessful, you will have to deal with this stuff.  Get ahead of the curve and understand for your application what the consistency requirements will be.  Do not start out being unnecessarily consistent.  That’s a premature optimization that can bite you in many ways.  Relaxing consistency as much as possible while still delivering a good user experience can lead to radically better scaling as well as making your life simpler.  Eventual Consistency is nothing to be afraid of.  Rather, it’s a key concept and tactic to be aware of.

Personally, I would seriously look into solutions like Amazon’s SimpleDB while I was at it.

Posted in amazon, data center, enterprise software, grid, platforms, soa, software development | 6 Comments »

SAP’s A1S Brings Competition to SaaS for the First Time

Posted by Bob Warfield on September 15, 2007

On September 19, SAP will bring competition to the SaaS world for the first time.  Everyone else in Enterprise Software will be affected. 

Seasoned SaaS executives will dislike my conjecture that there’s been little competition prior to A1S, but I don’t think it’s far from the truth.  There has been some choice in the SaaS market, but it’s largely been a green field opportunity where choice between SaaS offerings didn’t matter as much as choosing SaaS over On-Premises.  Just to underscore the point that A1S is all about competition, SAP has chosen to launch their product during Salesforce.com’s Dreamforce week.  Nothing like the stranger walking out onto the street at high noon to call the hometown guy out!

What does it mean?  Well for starters, it means SaaS has come of age and it’s here to stay.  SAP doesn’t choose its moves at random.  It’s an extremely deliberate company that puts a lot of power behind each stroke.  It will execute relentlessly until it achieves its goals, which may take quite some time.  That’s okay, because SAP has been a very patient company in the past.  Having this stamp of approval on the market, not to mention having a big player begin to invest in growing the market further, should accelerate the SaaS market’s growth.  More importantly, it means that the chasm has shifted.  SaaS is mainstream, and that means this announcement affects everyone involved with Enterprise Software.

At the 800lb gorilla end of the market, SAP has stolen a march on arch-rival Oracle’s Fusion efforts, which has got to feel good to SAP loyalists.  The description of Fusion as being heavily SOA and SaaS enabled sounds much like A1S.  Oracle can’t be counted out, but it is interesting to watch the pendulum swing back and forth between these two companies as they struggle for dominance at the top of the heap.  Time will tell whether A1S and more importantly SaaS form the next major competitive axis for the pendulum to swing on.

For those Enterprise vendors that still haven’t figured out the SaaS conundrum, the window where you could bury your heads in the sand has just officially closed.  You need to have a SaaS strategy now.  There’s no more time for stalling.  If you’re public, the pain of a switch will be massive.  Lack of a SaaS game plan will start to show up in the form of pointed questions from analysts and investors.  You will need a good story, and they’ll be monitoring closely how well you execute on it.  If you’re private, perhaps part of a leveraged buyout consortium, you’d better be reinventing yourself as SaaS and not just fiddling with the numbers.  If you’re planning on being acquired, you would do well to have a story about how you help the acquirer’s SaaS strategy.

The A1S launch and subsequent scrambling affects the technology landscape as well.  It puts considerably more teeth into the whole SOA thing than we’ve seen in the past.  It’s more urgent to implement SOA rather than just talk about it, because it is the central nervous system behind A1S and a critical enabler for SaaS.  This opens the door to a domino scenario:  as more and more companies open their Enterprise fabric with SOA, it becomes easier to contemplate more SaaS projects.  This is a spiraling positive feedback loop that will help accelerate SaaS adoption further.

Adoption of SaaS overseas has been slow, but this will change, particularly in Europe where SAP is very strong.  If a European company blesses the trend, adoption can accelerate in Europe.  Europe and the rest of the world represent an incredible market growth opportunity for the SaaS world if it starts firing on all cylinders, as well as insulation from short term economic woes that may only affect the US.

As more and more computing moves into the cloud, the industry that serves hosting and data centers will have to look on that trend and decide how to succeed.  SaaS needs a different offering than a lot of hosting providers are used to.  Investments in complex data center management such as HP’s Opsware acquisition are likely a good move.  Products aimed at IT’s internal data centers or departmental data centers are moving into difficult territory.  Internet technology will stay hot or get hotter.  Cisco can regard all of this as good news.

These developments are extremely positive for the SaaS world, but there will be pitfalls and pain for both SAP and the other old school players as they try to execute a move to SaaS.  Some of it will be real, some just positioning games spun by the new kids on the block.  I’ve written a two-part series on the problems SaaS brings for On-Premises companies.  It’s an extremely disruptive game for the Old School because it forces them to choose which way a customer will be sold up front, and it sharply defines short term and long term benefits in a way that brings short term pain to the On-premises company in exchange for long term benefits.  As if to underscore how tough this tension between models can be, Hasso Plattner recently had to backpedal a bit on whether A1S would cannibalize the installed base because of exactly these concerns.  SAP is heavily positioning A1S as a mid-market solution.  Part of that may be the reality of whether Big Enterprise is quite ready to embrace SaaS yet, and part of that may be SAP trying to construct a Protected Game Preserve for their SaaS offering that protects the core business.

It’s interesting to contemplate just how much new business SAP is getting anyway.  Like Oracle, they’ve got a raft of maintenance and professional services engagements that make up the bulk of their revenues.  It’s unlikely an existing customer would rip out a successful solution in order to switch it over to SaaS.  Therefore, swapping an increasing percentage of their new business to SaaS may not have quite the same negative effects on revenue deferral as it would for a less mature company.  It seems that even when it comes to something as disruptive as SaaS, scale still makes it easier for the big guys.

The SaaS world will be watching carefully how well A1S delivers on the SaaS formula.  As ZDNet puts it, SAP is known for being liquid concrete poured into the organization.  That doesn’t sound too much like the nimble experience we’ve come to expect from SaaS vendors.  Oracle has had a SaaS-like offering of its products for a long time, and they’d tell you they’re in the SaaS business, but they aren’t really, according to folks like NetSuite and others I’ve heard from.  If A1S turns out to be almost-SaaS, it won’t make much more of a difference than Oracle has, other than to further legitimize the markets.  As so many have pointed out, SaaS is about Service, but more importantly, it’s about an experience that can only be achieved by a thorough combination of the right software and service.  Otherwise hosting would’ve succeeded.  SaaS is a lot more than just hosting.

This move by SAP also represents a big splash in the partner world.  As I look over the Google Blog Search results for “A1S” in the last week, a lot of it is focused around discussions on the impact it will have on partners and the SAP job market for consultants.  Barbara Darrow puts it well when she says, “SaaS is still viewed by many in the channel as the ultimate in disintermediation.”  This is one of those deep dark secrets that shows another disruptive side of SaaS.  I’ve written about it on a couple of occasions, starting with my post Is SaaS Toxic for Partners?

To a large extent, partners are in just as bad or a worse position than On-premises ISV’s.  They are part of the shrinking trailing edge that is the province of very late adopters.  The problem for partners, VARs, SI’s, and consultants with SaaS is twofold.  First, SaaS commoditizes a lot of the heavy lifting partners used to do around deploying a new On-premises application.  A properly delivered SaaS application radically reduces the workload there and has historically shifted a lot of the work to the SaaS vendor inside their data center.  That’s probably not going to change with A1S if it’s a first class SaaS offering.

Second, SaaS doesn’t afford partners a lot of opportunity to create IP.  SaaS tends to be a set of isolated islands in the big sea that is the Cloud.  Traditionally, IT has had a large arsenal of tools they could bring in to augment their On-premises software.  Call them bandaids or extensions, but they formed an ecosystem around the Enterprise Software.  Business Intelligence, ETL tools, security products, and a whole raft of other businesses were built on this promise.  For the most part, it was the promise of being able to tie directly into the fabric underlying the Enterprise Software.  Direct access to database tables was one of the most common mechanisms this ecosystem operated on.  That’s no longer possible as computing moves to the cloud.  It will affect those secondary ISV’s that live around the databases of these big Enterprise apps, and it will also affect partners who often created IP around their access to the rich soup of that ecosystem.  

All of these partner/ecosystem businesses now have to reinvent themselves.  They need to find new ways of differentiating and providing value within the confines of what the cloud has to offer.  Several writers chide Salesforce because their announcement of minor repositioning and small extensions to their application platform business can’t compete with the A1S announcement.  And yet, Salesforce is struggling to show us what a viable ecosystem in the SaaS world might look like.

It’s going to be exciting to watch all of this unfold.  Every Enterprise software company should be assembling their best and brightest to map out their position and strategy with respect to SaaS and A1S.  SaaS is here today and if you don’t heed that, you’ll be gone tomorrow.

Related Articles:

Great post over on the Lucid Era blog about the challenges conventional ISV’s face in moving to SaaS.  I’ve written about the same challenges before and suggested a “protected game preserve” strategy to help overcome them.

Zoli echoes many of the same sentiments over at his blog.


Posted in business, data center, Partnering, saas, soa, strategy | 10 Comments »

 