SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for February, 2013

Check on Your SaaS Company’s Hosting Provider, Avoid Firehost

Posted by Bob Warfield on February 25, 2013

[Image: Page.ly down screen]

My CNCCookbook blog is experiencing its second outage so far this month.  That’s a cause for visitor unhappiness and potentially lost business.  I use Page.ly because I believe in SaaS services.  CNCCookbook is bootstrapped, and I try not to spend any of my time doing something that I can easily have done for me by a SaaS provider, like hosting a WordPress blog.  Page.ly has been pretty good in most respects, though far from perfect on uptime.  Frankly, there have been too many outages, and two in one month is starting to be a bit much.  Their story is that their hosting provider, FireHost, has created both of these problems.

It’s even affected the Page.ly blog, just as the last outage did.  Ironically, I wouldn’t be writing this post at all except that Page.ly’s blog showed me the same screen I’m showing here when I tried to comment that maybe it was time they thought about FireHost alternatives.

Whatever’s going on at FireHost, and however much it saves Page.ly to use FireHost instead of some more reliable service, it’s not worth it, guys.  It’s making you look bad, and by extension, that makes my business using your service look bad.  The good news is that if this continues, it is very straightforward to migrate to one of Page.ly’s competitors.  I also have experience with WPEngine from a prior company, and found them to be more performant and a nicer service, but quite a bit more expensive.  Perhaps some of that expense is going to a better hoster for their service.  At CNCCookbook, we use Amazon for our own services and I can’t remember the last time we had an outage.  Maybe we’ve had one, and it was solved by the simple expedient of rebooting our EC2 instance.

In the end, if I do move the CNCCookbook blog, I will be checking who the new provider uses as their hoster.  If it’s FireHost, there’s not much point in moving.  Some service should start aggregating uptime data on the hosting services.  It would be good to know who your SaaS provider uses–unless they’re huge, they probably don’t have their own servers–and how reliable that provider has been over time.  While it may not seem like it, it is in every SaaS company’s best interest to cooperate with such data collection, simply because it shines a light on the hosting providers and pressures them to rise to the next level of reliability.  As it stands, they’re a step removed and much harder to track.

Sorry Page.ly and Firehost–no links for you.  Not happy today.

Posted in business, cloud | 3 Comments »

Charging for Your Product is About 2000 Times More Effective than Relying on Ad Revenue

Posted by Bob Warfield on February 22, 2013

I was reading Gabriel Weinberg’s piece on the depressing math behind consumer-facing apps.  He’s talking about the conversion rates required for people to actually use such apps, and since he refers to the Facebooks and Twitters of the world, I got to thinking about the additional conversion rate imposed by an ad-based revenue model.  Just for grins, I put together a comparison between the numbers Gabriel uses and the numbers from my bootstrapped company, CNCCookbook.  The difference is stark:

Ad-Based Revenue Model                    | CNCCookbook (Selling a B2B and B2C Product)
Conversion from impression to user: 5%    | Conversion from visitor to Trial: 0.50%
Ad clickthrough rate: 0.10%               | Trial purchase rate: 13%
Clickthrough revenue: $1.00               | Average order size: $152.03
Value of an impression: $0.00005          | Value of a visitor: $0.10 (1,976.35 times better)

Let’s walk through it.

Both sites have visitors who convert to something more.  In the case of the Ad-Revenue model, presumably it is a person who creates an account on a Facebook or Twitter-like site, thereby becoming a user.  Gabe says the conversion rate for a really strong property might be 5%.  It can be much lower, more like 1 to 3%.  I went with the optimistic 5%–the math is already hard enough without contemplating 1%.  In the case of CNCCookbook, the conversion is from visitor to Trial user for the software.  We have a 30 day free trial on all our products.

From becoming a User or Trial User, the next conversion is monetization.  For the Ad-Revenue model, I did a quick search for clickthrough rates on display advertising and came up with 0.1%.  Sure, you might get your Users to click on more than one ad over time, but let’s keep these numbers simple.  They’re not going to click on 2,000 ads to even the score, after all.  For CNCCookbook, we have a very high conversion rate from trials–about 13%.  I view that as a commentary on the high quality of our software–people like it if they try it.  I understand conversions in the 5% range are more common, so you may be forgiven for concluding the ad revenue model is only 1,000 times less effective than charging for a product.

Okay, given those conversion rates, we take the average revenue per transaction and multiply all that on through to find the value of an impression.  What is it worth to you to bring another visitor to your site?
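
To make the arithmetic concrete, here’s the multiplication from the table above as a quick Python sketch.  There’s nothing in it beyond the table’s own numbers:

```python
# Value of one visitor/impression under each model, using the numbers from the table.

# Ad-based revenue model
impression_to_user = 0.05      # 5% of visitors become users
ad_clickthrough = 0.001        # 0.1% of users click an ad
revenue_per_click = 1.00       # $1.00 per click
value_per_impression = impression_to_user * ad_clickthrough * revenue_per_click

# CNCCookbook model
visitor_to_trial = 0.005       # 0.5% of visitors start a trial
trial_to_purchase = 0.13       # 13% of trials buy
avg_order_size = 152.03        # average order in dollars
value_per_visitor = visitor_to_trial * trial_to_purchase * avg_order_size

print(value_per_impression)                      # about $0.00005 per impression
print(value_per_visitor)                         # about $0.0988 per visitor
print(value_per_visitor / value_per_impression)  # roughly 1,976x
```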

In this analysis at least, it’s pretty easy to see why bootstrappers need to be charging for their products rather than relying on ad revenue.  Unless you happen to have an amazingly viral product, it’s just too hard.  You have to rack up way too much traffic to get to interesting revenue levels.

Or, to put it like 37Signals:  Charge for your products, Dummy!

Posted in bootstrapping, business, strategy, venture | 4 Comments »

How Many Software Companies Monitor Their Software as Well as Tesla Monitors its Cars?

Posted by Bob Warfield on February 14, 2013

The unfolding story of how the New York Times’ negative review of the Tesla Model S may have actually been faked is a cautionary tale for software vendors.  Basically, there is enough instrumentation and feedback built into the Tesla S that Elon Musk was able to “shred” the review, as Dan Frommer writes.  The graphical plot of exactly what was happening with annotations is particularly damning:

[Image: NY Times Tesla speed chart, annotated]

It’ll be fascinating to see how the NYT responds.  It’s hard to imagine how they do anything but investigate Broder and ultimately move him along elsewhere.  To do anything less would suggest very little journalistic integrity.

My question for you: since you’re reading this blog, you’re likely involved in high tech hardware or software at some level, so how does your product compare in terms of how well it can monitor what your users are doing with it?

I’m fascinated with the idea of closing the feedback loop for the good of customers.  Yes, it’s great Musk can catch the NYT in a bogus review, and perhaps you will catch a reviewer too, but the potential for improving your customer’s experience is of much greater value to your product.  This may seem like a Big-Company-Only idea, but I’m pursuing it with a vengeance for my SaaS bootstrap company (CNCCookbook) because I need precise feedback that pinpoints where I can do the most good for my users with the scarce resources I have available.  I can tell you from experience that the tools are available and straightforward.  You can have the data for very little effort invested.
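
To give a flavor of how little effort is involved, here’s a minimal sketch of the kind of in-app usage telemetry I’m talking about.  The endpoint, event names, and fields are hypothetical, strictly for illustration; the point is that one small function sprinkled at a few milestones gets you the data:

```python
import json
import time
import urllib.request

# Hypothetical telemetry endpoint -- substitute whatever analytics or
# customer success service you actually use.
TELEMETRY_URL = "https://example.com/api/events"

def track(user_id, event, properties=None):
    """Record one usage milestone, e.g. 'installed', 'configured', 'first_job_run'."""
    payload = {
        "user_id": user_id,
        "event": event,
        "timestamp": time.time(),
        "properties": properties or {},
    }
    req = urllib.request.Request(
        TELEMETRY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=2)

# Example call at a milestone the product cares about:
# track("user-123", "configured", {"machine": "mill", "controller": "Mach3"})
```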

The next thing I am after is to automate responses to that data.  I’ve been reading the blog of a company called Totango with some interest.  They essentially want to provide SaaS automation for a Customer Success team.  Various folks have written about the importance of Customer Success and I’m also a big believer.  My thoughts at this point are to start out relatively simple.  I want to understand the early lifecycle of my products and be able to trigger automated actions based on that cycle.  For example:

Step 1:  Installation

Monitor the first time the customer has successfully logged into the product.  Offer increasing amounts of help via emails once a day until they achieve this milestone.  The emails can start with self-service help resources of various kinds and eventually escalate to offering a call or help webinar.  The goal is to get the customer properly installed.  (There’s a rough sketch of this kind of milestone-triggered drip after Step 4 below.)

Step 2:  Configuration

This seems like part of installing, but in fact there is significant post-installation configuration needed for CNC Manufacturing software.  Same sort of thing: provide daily emails with increasing levels of help until the system determines that the user has properly configured the system.  This is also an opportunity to collect information.  We provide canned configurations for the most common cases, and finding out which tranche of cases to target next is very helpful.

Step 3:  The Path to Power Usage

It’d be great if everyone who signed up for our 30 day free trial actually got to see and understand all of the features that set our product apart.  I’ve seen some other products like Dropbox (Full disclosure: they give me another 250MB of storage if you use that link and then sign up. If you’d rather I didn’t get the extra storage, use this link instead. If you sign up, they’ll give you a link where you can get 250MB free too.) walk customers through a usage maturity exercise.  They’ve somewhat gamified it by giving out some of their “currency” in the form of extra storage if you complete the tasks.  My goals here would be to get everyone to see as many of our unique functions as possible during the 30 day trial.

Step 4:  The Holy Grail: Referrals

If all this goes well, and the customer gets through the Trial, understands the unique capabilities of our products, and likes the product well enough to buy it, then the final stage in this incarnation is to ask them to refer others they know who might like it.
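
Here’s the rough sketch promised back in Step 1 of what the trigger logic behind Steps 1 and 2 might look like.  The milestone names, email copy, and the shape of the user record are all hypothetical; the point is just that a small daily job over the telemetry data is enough to drive the escalation:

```python
# Hypothetical daily job: for each trial user, find the first milestone they
# haven't reached and send the next escalating nudge.

ESCALATION = {
    # unmet milestone -> ordered nudges, one per day stuck
    "installed": [
        "Here's the quick-start guide.",
        "Here's a short install video.",
        "Want a call or a webinar?  Just reply to this email.",
    ],
    "configured": [
        "Here are canned configurations for common machines.",
        "Here's how to build a custom configuration.",
        "We can walk you through setup live if you like.",
    ],
}

def next_nudge(user):
    """Return (milestone, message) for the first unmet milestone, or None if all are met."""
    for milestone, nudges in ESCALATION.items():
        if milestone not in user["milestones_reached"]:
            day = min(user["days_stuck"], len(nudges) - 1)
            return milestone, nudges[day]
    return None

# Example user record, as the telemetry sketched earlier might have built it:
user = {"email": "trial@example.com", "milestones_reached": {"installed"}, "days_stuck": 1}
print(next_nudge(user))   # ('configured', "Here's how to build a custom configuration.")
```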

That’s a pretty simple roadmap for how to create some closed-loop feedback of telemetry and drip email that improves your customer’s experience.  So I’ll ask again:

Is your company set up to monitor your users as successfully as Tesla monitors its drivers?  Why not?  I’ve used a lot of software where it is pretty clear they’re not monitoring much at all.  I’ve even talked to some of them to encourage change, and they seem receptive.

If you have a story about what sort of work along these lines you’re doing, please share it in the comments below.  I’m very curious.  I think we have the potential to personalize the experience for our customers like never before.

Posted in business, cloud, customer service, software development, strategy, user interface | 7 Comments »

Just Got My Vanity Plates from LinkedIn

Posted by Bob Warfield on February 12, 2013

I recently got a notice from LinkedIn stating that my profile was in the top 1% out of 200 million in terms of how many people had viewed it.  So, they sent me my vanity shot:

[Image: LinkedIn top 1% profile notice]

It’s a nice letter.  I admit I puzzled over who could be spending so much time checking out my profile–it seems like far more than 1 in 100 of LinkedIn’s 200 million members would be ahead of me in terms of attention and name recognition.  I would count most of my own LinkedIn contacts, for starters.

However, it didn’t take much thought to conclude this was probably due to my bootstrapped company CNCCookbook.  We get about 1.5 million visits a year to the web site, making it one of the top CNC sites and almost certainly the most popular CNC blog.

In other words, my marketing is working.  That’s a good thing in a bootstrapped SaaS company.  What a great Age we live in when a SaaS company can be created by just one man and reach so many.

Posted in bootstrapping, business | Leave a Comment »

Big Data is a Small Market Compared to Suburban Data

Posted by Bob Warfield on February 2, 2013

Big Data is all the rage, and it seems to be one of the prime targets for new entrepreneurial ventures since VC-dom started to move from Consumer Internet to Enterprise recently.  Yet I remain skeptical about Big Data for a variety of reasons.  As I’ve noted before, it seems to be a premature optimization for most companies.  That post angered the Digerati who are quite taken with their NoSQL shiny objects, but there have been others since who reach much the same conclusion.  The truth is, Moore’s Law scales faster than most organizations can scale their creation of data.  Yes, a few out of millions of companies are large enough to really need Big Data, and yes, it is so fashionable right now that many who don’t need it will be talking about it and using it just so they can be part of the new new thing.  But they’re risking the problems many have had when they adopt the new new thing for fashion rather than because it solves real problems they have.

This post is not really about Big Data, other than to point out that I think it is a relatively small market in the end.  It’ll go the way of Object Oriented Databases: it will launch some helpful new ideas, the best of which will be adopted by the entrenched vendors before the newcomers can reach interesting scale, just as happened to the OODB companies.  So it will be with Hadoop, NoSQL, and the rest of the Big Data Mafia.  For those who want a head start on the next wave, one destined to be much more horizontal, much larger, and of much greater appeal, I offer the notion of Suburban Data.

While I shudder at the thought of any new buzzwords, Suburban Data is what I’ve come up with when thinking about the problem of massively parallel architectures that are so loosely coupled (or perhaps not coupled at all) that they don’t need to deal with many of the hard consistency problems of Big Data.  They don’t have to, because they are architectures optimized to create a Suburb of very loosely coordinated and relatively small collections of data.  Think of Big Data’s problems as being those of the inner city, where there is tremendous congestion, real estate is extremely expensive, and it makes sense to build up, not out.  Think Manhattan.  It’s very sexy and a wonderful place to visit, but a lot of us wouldn’t want to live there.  Suburban Data, on the other hand, is all about the suburbs.  Instead of building giant apartment buildings where everyone is in very close proximity, Suburban Data is about maximizing the potential of detached single family dwellings.  It’s decentralized, and there is no need for excruciatingly difficult parallel algorithms to ration scarce services and enforce consistency across terabytes.

Let’s consider a few Real World application examples.

WordPress.com is a great place to start.  It consists of many instances of WordPress blogs.  Anyone who likes can get one for free.  I have several, including this Smoothspan Blog.  Most of the functionality offered by wp.com does not have to coordinate between individual blogs.  Rather, it’s all about administering a very large number of blogs that individually make very modest demands on the underlying architecture.  Yes, there are some features that are coordinated, but the vast majority of the functionality, and the functionality I tend to use, is not.  If the WordPress.com example makes sense to you, web site hosting services are another obvious one.  They just want to give out instances as cheaply as possible.  Every blog or website is its own single family home.

There are a lot of examples along these lines in the Internet world.  Any offering where the need to communicate and coordinate between different tenants is minimized is a good candidate.  Another huge area of opportunity for Suburban Data is SaaS companies of all kinds.  Unless a SaaS company is exclusively focused on extremely large customers, the requirements of an average SaaS instance in the multi-tenant architecture are modest.  What customers want is precisely the detached single family dwelling, at least from a User Experience perspective.  Given that SaaS is the new way of the world, and even a solo bootstrapper can create a successful SaaS offering, this is truly a huge market.  The potential here is staggering, because this is the commodity market.

Look at the major paradigm shifts that have come before and most have amounted to a very similar (metaphorically) transition.  We went from huge centralized mainframes to mini-computers.  We went from mini-computers to PC’s.  Many argue we’re in the midst of going from PC’s to Mobile.  Suburban Data is all about how to create architectures that are optimal for creating Suburbs of users.

What might such architectures look like?

First, I think it is safe to say that while existing technologies such as virtualization and the growing number of server hardware architectures optimized for data center use (Facebook and Google have proprietary hardware architectures for their servers) are a start, there is a lot more that’s possible and the job has hardly begun.  Being the next Oracle in this space will require a completely clean-sheet design from top to bottom.  I’m not going to map the architecture out in great detail because it’s early days and frankly I don’t know all the details.  But let’s Blue Sky a bit.

Imagine an architecture that puts at least 128 x86-compatible cores (we need a commodity instruction set for our Suburbs), along with all the RAM and Flash Disc storage they need, onto the equivalent of a memory stick for today’s desktop PC’s.  Because power and cooling are two of the biggest challenges in modern data centers, the Core Stick will use the most miserly architectures possible–we want a lot of cores with reasonable but not extravagant clock speeds.  Think per-core power consumption suitable for Mobile Devices more than desktops.  For software, let’s imagine these cores run an OS Kernel that’s built around virtualization and the needs of Suburban Data from the ground up.  Further, there is a service layer running on top of the OS that’s also optimized for the Suburban Data world but has the basics all ready to go: Apache Web Server and MySQL.  In short, you have 128 Amazon EC2 instances potent enough to run 90% of the web sites on the Internet.  Now let’s create backplanes that fit a typical 19″ rack set up with all the right UPS and DC power capabilities the big data centers already know how to do well.  The name of the game will be Core Density.  We get 128 cores on a memory stick, and let’s say 128 sticks in a 1U rack mount, so we can support 16K web instances in one of those rack mounts.
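
Just to make the density arithmetic explicit, here’s the back-of-envelope math behind that 16K figure, with a per-core power budget thrown in.  The 2-watt figure is purely my illustrative assumption for a mobile-class core, not a spec:

```python
# Back-of-envelope math for the hypothetical Core Stick rack unit.
cores_per_stick = 128
sticks_per_1u = 128
instances_per_1u = cores_per_stick * sticks_per_1u   # 16,384 -- the "16K" figure

# Assumed mobile-class power budget per core (illustrative only).
watts_per_core = 2.0
watts_per_1u = instances_per_1u * watts_per_core      # ~33 kW, which is why miserly cores
                                                       # and serious cooling matter so much

print(instances_per_1u, watts_per_1u)
```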

There will be many valuable problems to solve with such architectures, and hence many opportunities for new players to make money.  Consider what has to be done to reinvent hierarchical storage management for such architectures.  We’ve got a local Flash disc with each core, but it is probably relatively small.  Hence we need access to storage on a hierarchical basis, so we can consume as much as we want and it just works seamlessly.  Or consider communicating with and managing the cores.  The only connections to the Core Stick should be very high speed Ethernet and power.  Perhaps we’ll want some out-of-band control signals for security’s sake as well.  Want to talk to one of these little gems?  Just fire up the browser and connect to its IP address.  BTW, we probably want full software network fabric capabilities on the stick.

It’ll take quite a while to design, build, and mature such architectures.  That’s fine; it’ll give us several more Moore’s Law cycles in which to cement the inevitability of these architectures.

You see what I mean when I say this is a whole new ballgame and a much bigger market than Big Data?  It goes much deeper and will wind up being the fabric of the Internet and Cloud of tomorrow.

Posted in business, cloud, data center, enterprise software, multicore, platforms, saas, service | 2 Comments »

 