SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for April 23rd, 2008

From TwitPitches to TwitQuiries…

Posted by Bob Warfield on April 23, 2008

Stowe Boyd is writing about his TwitPitches idea again.  This is his practice of using Twitter for people to send him their elevator pitch.

I didn’t like them coming in his blog RSS feed to me–as I wrote, it isn’t what I expected or wanted from that venue.  But I can see the value.  Stowe wants to get through a blizzard of incoming PR requests to pitch ideas to him quickly.  He doesn’t want to read lengthy PR releases or unnecessary chit chat from people he doesn’t know.  He just wants to evaluate as quickly as possible whether it makes sense to dig any deeper or whether he just wants to hit the delete key.

In many ways this sort of quick inquiry is what I’ve always used messaging systems for.  In one of my jobs my team was spread all over a building.  It was easy to use AOL IM (pre-Twitter days!) to find out if someone was busy or had time to meet if I headed over.  Messaging was faster and easier than the phone and it worked well.

Stowe is on to something here, but why stop at pitches?  And why put it all exclusively through Twitter? 

I’d love to have a “fast lane” for inquiries tied into my email account.  I’d love to have a special Twitter “channel” for these things.  It’d be great if I could be reached by either venue.  Ditto phone calls and voicemail.  The overriding consideration would be to keep it brief (Twitter’s 140 character limit is fine) and to the point.  Better yet, give it some flavors.  A “yes/no” TwitQuiry has a built-in Yes/No button.  Bang one and the sender gets their answer.  “Time” would be another goodie.  It works for, “How long until you can take a call from me?”  Or, “When will you be done with that preso you promised me?”

It probably makes sense to let people create their own Tweetlets (applets for TwitQuiries) that do these things.  I can imagine them being really useful even for certain kinds of business processes.  A tiny little bit of structure aimed at streamlining keystrokes will go a long way.  There are lots of tiny interpersonal transactions that could be facilitated in this way.
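
To make the Tweetlet idea concrete, here is a minimal sketch in Python, offered purely as an illustration: none of the names below come from Twitter’s actual API, and the canned choices are my own invention.

```python
# Hypothetical sketch of a "Tweetlet": a tiny structured inquiry that fits in
# a 140-character message and offers canned, one-keystroke responses.
# None of these names are from Twitter's API; they are illustrative only.

from dataclasses import dataclass, field
from typing import List

MAX_CHARS = 140  # Twitter's message limit

@dataclass
class Tweetlet:
    sender: str
    question: str
    choices: List[str] = field(default_factory=lambda: ["Yes", "No"])

    def __post_init__(self):
        if len(self.question) > MAX_CHARS:
            raise ValueError("A TwitQuiry must fit in %d characters" % MAX_CHARS)

    def answer(self, choice: str) -> str:
        # One keystroke for the recipient: pick a canned choice and the
        # reply goes straight back to the sender.
        if choice not in self.choices:
            raise ValueError("Pick one of: %s" % ", ".join(self.choices))
        return "@%s %s" % (self.sender, choice)

# The "time" flavor is just a different set of canned choices:
call_me = Tweetlet("bobwarfield", "How long until you can take a call?",
                   choices=["5 min", "30 min", "After lunch", "Tomorrow"])
print(call_me.answer("30 min"))   # -> "@bobwarfield 30 min"
```

The point is how little machinery is involved: the structure lives in the canned choices, not in the message, which is exactly what keeps the keystrokes down.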

Could be something to this yet.

Posted in user interface, Web 2.0 | 1 Comment »

Bug or Architecture Flaw? (Fail or No Fail)

Posted by Bob Warfield on April 23, 2008

Blaine Cook, Twitter’s lead architect, has left the company.  Predictably, the blogosphere is flaying him pretty hard.  Michael Arrington asks, “Amateur Hour Over At Twitter?”

Ouch!

Blaine, I feel for you.  People expect and are pretty tolerant of a few bugs, but the problems at Twitter have been going on for long enough that it’s clear there were deep-seated architectural flaws that were not going away very soon.  Twitter is taking the right steps–they’ve got a new VP of Engineering and Ops as well as two new scaling experts.  Cook, rightly or wrongly, is firmly under the bus.

What follows next is important.  The new gang has a limited window in which to fix the problem.  This won’t be easy.  Fixing deep architecture issues on a live system that can’t keep up is one of those nightmare scenarios that’s painful beyond belief. 

What can we learn from this? 

First, Twitter is just the latest example of an important service that has all the ingredients for success except for the ability to scale properly. 

I gave up on Technorati for the last time not long ago for similar reasons.  For a long time it was my blogging hub.  I used it for search and to monitor how well my own messages were penetrating the blogosphere.  But it was wildly inconsistent.  It was easy enough to switch to Google Blog Search–after all, they are the search experts.  But that Technorati Authority seemed like it was worth hanging around for.

And then one day my Authority dropped almost 100 points.  In one day I went from over 300 to just a little over 200.  What’s up with this?  I waited for it to come back–at various points in the past, something similar had happened and then corrected itself in a day or two.  No such luck.

Eventually I stayed away long enough that it was time to log in again.  I didn’t remember my account info, so I simply searched for SmoothSpan to find my blog.  There was my answer for what had happened:  I saw two SmoothSpans!  One had my old 300-plus authority, the other the new, lower one.

But I could tell neither was really right.  In other words, the true authority was some mix of sites from one and some from the other.  Thinking about how this could happen revealed a classic architecture flaw: Technorati had created more than one record for the same thing and couldn’t keep them straight.
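
Just to illustrate the kind of guard that prevents this class of flaw, here is a small Python sketch (hypothetical, certainly not Technorati’s code, and the authority numbers are made up) that derives one canonical key per blog before writing a record, so URL variants can never spawn duplicates.

```python
# Hypothetical illustration: derive one canonical key per blog so that URL
# variants can never create a second record for the same site.

from urllib.parse import urlparse

def canonical_key(url):
    parsed = urlparse(url.lower())
    host = parsed.netloc
    if host.startswith("www."):           # treat www and non-www as one site
        host = host[4:]
    return host + parsed.path.rstrip("/")

records = {}

def register_blog(url, authority):
    key = canonical_key(url)
    records[key] = authority              # update the one record; never add a twin

register_blog("http://smoothspan.wordpress.com/", 305)
register_blog("http://www.smoothspan.wordpress.com", 210)
print(records)   # {'smoothspan.wordpress.com': 210}: one record, not two
```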

Twitter’s situation is similar.  Supposedly they were rolling out a new caching system of some kind when their latest troubles hit.  Caches create more than one copy of the data intentionally, to make it easier to scale.  The trick is to keep it all running smoothly and to feed the cache from the one true version of the data.
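
For what it’s worth, here is a bare-bones cache-aside sketch in Python.  It illustrates the general principle (one authoritative store, disposable copies in the cache that get thrown away on writes), not Twitter’s actual system.

```python
# Generic cache-aside sketch, not Twitter's design: the database is the single
# source of truth, and the cache holds disposable copies of it.

class CacheAsideStore:
    def __init__(self, database):
        self.db = database       # authoritative store (a dict, for brevity)
        self.cache = {}          # disposable copies

    def read(self, key):
        if key in self.cache:                # cache hit: cheap
            return self.cache[key]
        value = self.db[key]                 # cache miss: go to the source
        self.cache[key] = value              # keep a copy for next time
        return value

    def write(self, key, value):
        self.db[key] = value                 # update the one true version...
        self.cache.pop(key, None)            # ...and discard the stale copy

store = CacheAsideStore({"timeline:bob": ["tweet 1"]})
print(store.read("timeline:bob"))            # miss, then hits on later reads
store.write("timeline:bob", ["tweet 1", "tweet 2"])
print(store.read("timeline:bob"))            # fresh copy after the write
```

Get the invalidation step wrong, or let two caches disagree about which store is authoritative, and you end up with exactly the two-records-for-one-thing symptom described above.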

Second learning point:  It may be a bad idea to worry about scaling later.  This topic has been debated from time to time.  Some have advanced the notion that to worry about scaling up front is a premature optimization.  Scaling is not a premature optimization!  It is fundamental architecture.  The number of developers who can deliver a highly scalable web property is a tiny fraction of the number of developers who can get “almost there”.  The difference between having one architecture versus the other is a healthy heaping of FAIL.

My last company, Callidus Software, understood scaling.  We got it so well it became a major differentiator for the product.  We could literally go to customers where the likes of Oracle and SAP (who supposedly understood scaling) could not go because their systems wouldn’t handle the scale.  There’s nothing quite like being the only game in town for a big customer that has to have a solution.

Scaling is something the Cloud Platform world may eventually deliver for us.  So far, they are more about Utility Computing, which is the ability to add more machines quickly and easily.  That’s Amazon’s model.  Whether your software can use more machines (i.e. whether it scales) is up to you.  Teasing apart the aspects of an application that make it scalable and handing them over to a platform will be a ticklish business.  It’s likely to require a good deal of rewriting.  One way or another, your team will get it right the first time, do a rewrite under fire, or rewrite for a Cloud Platform that shows how it’s done.

Does your team understand scaling?  Really?  How do you know?

Related Articles

A recent interview with Blaine Cook.  Interesting note on eventual consistency: Twitter only allows APIs to update once per minute.  The team is five full-time developers, and that includes one new person.  Tight team, but that’s good.

Michael Arrington mentions he read this post in a Seesmic video.  Scan down through the comments to see it.  He’s involved in a big slugfest over whether this was character assassination on Cook, whether the fault lies with Ruby on Rails, and so on and so forth.  It’s important not to lose track, amid all that emotional content, of the main issue here: scaling matters, a relatively small set of developers have lived through it and know what to do, and it is hard to fix after the fact.

Despite the fact that there’s basically a flamewar on Techcrunch over this, others seem to reach a similar conclusion.  Larry Dignan has a good post over at ZDNet.

Posted in platforms, saas, Web 2.0 | 3 Comments »

Microsoft Mesh: All Your Devices and Data Are Belong to Us

Posted by Bob Warfield on April 23, 2008

I’ll bet your blog reader is overflowing with posts about Microsoft’s Mesh announcement this morning.  Sorry to add one more, but it is an important announcement and there is some analysis I want to get out onto the table.

Mesh Product Director Mike Zintel sums up the Mesh vision well: 

“The coolest thing about Live Mesh is how it smashes the abrupt mental switch that I have to make today as I move between being ‘on the web’ and ‘in an application.’”

“At the core of Mesh is [the] concept of a customer’s mesh, or collection of devices, applications and data that an individual owns or regularly uses. The Mesh Account Service persists the relationship among these resources and authorizes access to them. The mesh is the foundation for a model where customers will ultimately license applications to their mesh, as opposed to an instantiation of Windows, Mac or a mobile account or a web site. Such applications will be seamlessly installed and run from their mesh and application settings persisted across their mesh.”

Ray Ozzie adds to this that the Web is “the Hub of our social mesh and our device mesh.”  He goes on to say that, “in scenarios ranging from productivity to media and entertainment, social mesh notions of linking, sharing, ranking and tagging will become as familiar as File, Edit and View.”

Mesh starts out for individuals, which I suspect is intended to minimize adoption friction.  Lots of cool-sounding functionality has been considered and incorporated into the design.  There are objects that range from data to applications, and data is not limited to files.  There is some sort of role-based security around these objects, and objects can be stored in all sorts of places.  Most importantly, there is pub/sub synchronization, replication, and update alerts and feeds to keep people aware when changes are made.
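
To picture how that pub/sub piece might hang together, here is a toy sketch in Python.  The names (MeshObject, Device) are invented for illustration and have nothing to do with Microsoft’s actual Live Mesh APIs.

```python
# Toy pub/sub sketch of the mesh idea, with invented names rather than anything
# from Live Mesh itself: each data object has a feed of updates, devices
# subscribe to it, and every change is pushed out to all subscribers.

class MeshObject:
    def __init__(self, name):
        self.name = name
        self.version = 0
        self.subscribers = []            # devices that want update alerts

    def subscribe(self, device):
        self.subscribers.append(device)

    def update(self, change):
        self.version += 1
        for device in self.subscribers:  # replicate the change everywhere
            device.notify(self.name, self.version, change)

class Device:
    def __init__(self, name):
        self.name = name
        self.replica = {}                # local copies of mesh objects

    def notify(self, obj_name, version, change):
        self.replica[obj_name] = (version, change)
        print("%s synced %s v%d: %s" % (self.name, obj_name, version, change))

photos = MeshObject("vacation-photos")
for d in (Device("laptop"), Device("phone")):
    photos.subscribe(d)
photos.update("added IMG_0042.jpg")      # both devices get the alert and sync
```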

At the moment, Live Mesh is an invitation-only “technology preview.”  The current version only does synchronization of Windows computers, but the intent is to extend to other devices.  Mobile and Mac support are promised within the next year.

How is this really different?  Josh Catone mentions similarities with Dropbox, SugarSync, and Microsoft’s own FolderShare.  Interestingly, not long ago I saw a demo of software from SoonR that syncs across PCs and mobile devices.  It works today for a number of devices.  Their motto is “The Anywhere Workforce.”  I vividly remember a couple of years ago seeing a PowerPoint slideshow played back over a Motorola RAZR at my home using SoonR.  How long before Microsoft has made it that far?

As Catone points out, the difference is that Live Mesh is intended to be a platform.  It is feeds of all shapes and sizes plugged into your data and applications.  These feeds are used to sync your data objects, and to keep you and others abreast of when these updates are happening.  Since it’s a platform, the feed mechanisms are open and accessible to third parties.  There is a demo, for example, of Twitter tweets being synced to the Mesh Notifier.  Clearly Microsoft will want a piece of the burgeoning feed aggregation world.  FriendFeed and friends, look out!  There is also an offline component, which provides one possible answer to the likes of Google Gears or Adobe AIR.

In a nutshell, that’s what Live Mesh is, now what does it mean?  Will it work?  What can we do to prepare for it?

Analysis

Scoble loves it.  A veritable toolbox of feeds going every which way.  Everything is a feed, including your, um, feeds.  I’m not surprised Scoble loves it, but his description doesn’t make it sound like the one highly differentiated must-have thing that will keep Microsoft a vital part of all our lives.  In particular, if your promise is connectivity via feeds between all things, how do you reconcile that with this passage from Scoble:

Unfortunately they aren’t even close to being finished. Mac support? Coming in the future. Nokia support? Unclear. iPhone support? Ask Steve Jobs (translation: will be very limited due to Apple’s complete control of that platform). Firefox support? Yes! Linux support? What’s that?

Can a ubiquitous “operating system” that stores all your data, manages all your feeds, and connects it all to every computer and mobile device you own succeed if it is owned and operated by a company with a reputation for taking unfair advantage of any access you give it?  Isn’t this the very poster child for something you would want to be very Open Source and very Swiss in its dealings?

Phil Wainewright sees this issue clearly when he brings up “how the company seems to lurch from launching fresh, Web-savvy solutions one day but then falling back into its crusty old server-centric habits the next.”

Clearly such a platform will rely on third parties to get excited about supporting it.  Will Steve Jobs let it onto the iPhone when he hasn’t even allowed the Flash Player?  Will Sony support it wholeheartedly when they see the Xbox team has a huge lead on them because of the usual Microsoft “our guys get all the early advantages” ploy?

Erick Schonfeld captures the flavor of a nagging doubt that has been in my mind.  He sees Live Mesh as a response to efforts to take browser-based apps onto the desktop and into Microsoft’s face.  Live Mesh is taking applications off the desktop and pushing them into the Web’s face.  They demo genealogy changes being updated whenever a family member makes a change, but Erick points to web-based Geni, which already does this automatically.

Which one is better?  Do you want lots of copies of data being synced, or one copy being edited collaboratively by many?  This debate has been waged for years, but the general consensus has tended towards one copy with collaborative editing.  Desktop-think wants to keep going the other way.  And it is very convenient if it’s just you in charge of your data–this blog, with me editing it and it coming to many of you via RSS, is a case in point.  But if many of us have to work on it, a Wiki is a better idea.

The Live Mesh vision is then useful, but probably overreaching.  It isn’t the “one great thing to restore Microsoft’s dominance.”  There are a lot of questions about whether Microsoft will reach a critical mass in getting others to play along with it.  Suppose they follow their standard playbook (and I see no deviation yet):

– Get their teams early exposure.

– Preannounce to freeze the market.

– By the time others can get there, the Microsoft apps are all doing it better than anyone else will be able to match for some time.

– Use this as the competitive edge to increase share for Microsoft apps and position the others as being “not quite up to the Windows standard.”

That strategy doesn’t work in the web world and will backfire mightily if they try it.  Imagine Live Mesh with nobody but Microsoft’s own apps talking to it, along with whatever files they can drag along and whatever open APIs their own engineers tie into.  Provided it is well implemented, it becomes a modestly useful feature, not a platform.

Here is my next concern.  There are 100 engineers at work on Live Mesh already, and lots of key functionality (like version control) nowhere in sight.  Aside from the Tactics of Monopoly, the other Fail mode is creating a giant monolith of software.  Vista is a painful example of how far things can go wrong.  Mesh is, at its core, another attempt to rework the document and folder file system.  Microsoft promised this in Longhorn for years but never delivered.  Now Microsoft is adding to that challenge the need to build something that (sorry!) meshes well with the web.  That’s no small order.

Many are saying this is all Ozzie, and is his third iteration of a vision that started with Lotus Notes.  Groove was the second, lest we forget.  Is this the right vision to be on?  Did the first two iterations demonstrate enough goodness that we want to build this stuff into the OS and fabric of every PC and device we own?  It doesn’t seem like it, but perhaps.  OTOH, what if we were all using an online backup service of one kind or another (Mozy et al), and we could access the backups from any device, publish an RSS feed of the versions, and so on?  That would get us much of the way there without a new platform.

The world used to say Microsoft gets it right by the Third Try.  Microsoft is a little slow.  After the Third Try comes the Fourth System Effect (with apologies to Brooks’ Second System Effect), where they go way overboard and manage to produce a Vista.

Lest I leave on that purely negative note, it’s always helpful to ask what they should have done or what they should now do.  It’s the “What would Google do?” sort of game, although it’s more like, “What do modern web companies in general do?”  Google’s recent AppEngine announcement is a prime example that touches all the bases:

–  Start small:  one language (Python), one application type (web apps).  You can build something small quickly without 100 cooks in the kitchen and make it tight.

–  Involve the community:  10,000 betas, first come, first served, no special favorites, and ramping up almost immediately to 20,000 betas.

–  Be open:  The SDK was open-sourced on day 1.  It didn’t take long for the community to take the SDK and bring it up on Amazon Web Services.  Google doesn’t care, it’s all good.  It’s Open.

–  Piggyback on an innocuous beginning:  AppEngine is built on technology Google had created to do lots of other things internally.

–  No special advantages:  Google usually integrates after the launch of a new service so that everyone is on an even playing field and the service gets out the door faster.

There are lots of ways a service like LiveMesh could’ve followed this formula.  I’ll let you fill in the blanks, but consider this too:  there are a fair number of organizations out in the wild that can follow such a formula (and some are already far along the path).  We don’t really need Microsoft to get there.  That’s the piece Microsoft needs to wrap their heads around better.

BTW, the mental barrier between being on the web and being in an application (going back to Mike Zintel) is already broken.  I don’t feel it a bit when I spend most of my day in the browser using web apps.

I guess this is what Microsoft is worried about.

Posted in platforms, saas, strategy, Web 2.0 | 4 Comments »

 