SmoothSpan Blog

For Executives, Entrepreneurs, and other Digerati who need to know about SaaS and Web 2.0.

Archive for the ‘cloud’ Category

Major Data Loss Bug in WPEngine and Yoast SEO Plugin

Posted by Bob Warfield on June 1, 2017


What would you think if I told you that while using the most popular WordPress SEO plugin on one of the most popular WordPress hosting platforms you’d lose a blog post approximately every 30 days and be locked out of your account?  It’ll cost you the time it took to write the blog post plus the time to unsnarl things with the hoster each time.

Pretty ugly, right?

Well that’s exactly what can happen if you use WPEngine as your host and the Yoast SEO Plugin.  I’ve been around this track twice in the last 30 days.  Both times WordPress froze while I was editing a blog post.  Both times I lost the entire post with no backup.  I’m not sure why the normal incremental saves failed.  Both times I was completely locked out of my site–my IP address was blocked.  And both times I had to have my IP address whitelisted to get back in.

Guess what?

It’s just going to keep happening too.  The folks at WPEngine are completely adamant that what they’re doing is right and the only way to keep their platform secure.  There’s just one problem–I use multiple platforms for my WordPress blogs and I use Yoast on all of them (it’s the most popular SEO plugin there is) and only WPEngine has these problems.

Worse, WPEngine recommends and endorses Yoast.  If it’s such a bad actor, why wouldn’t they be blocking me from even installing it?

Every time my ISP gives me a new IP address, I will go through all of this again.  Another blog post trashed.  More hours wasted.  And all of it tragically avoidable.

During my second go-round with their tech support, I was told that what’s happening is Yoast is sending links with unsafe characters.  They indicated this was somehow my problem.  So, I asked them to help me reconfigure to avoid the problem.  After tracking down a senior tech, here’s the list of URLs containing unsafe characters they gave me:

[Screenshot: WPEngine’s list of flagged URLs containing unsafe characters]

Gobbledygook to be sure, but more importantly, it’s nothing I’ve typed in as a link.  And after some back and forth, I confirmed it’s nothing we can configure Yoast to stop doing.
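For the curious, “unsafe characters” here means characters RFC 3986 says don’t belong in a URL unencoded–spaces, square brackets, and the like.  A quick Python sketch shows what proper encoding looks like (the query string below is my own made-up illustration, not one of the actual URLs from their list):

```python
from urllib.parse import quote

# Hypothetical query fragment of the kind an SEO plugin might send
# while analyzing a draft -- NOT an actual URL from WPEngine's list.
raw = 'prominent_words[]=copyblogger&title=Top 15 Marketing Masters'

# RFC 3986 "unsafe" characters like spaces and square brackets get
# percent-encoded before the request goes over the wire; '=' and '&'
# are kept as-is since they delimit the query string.
encoded = quote(raw, safe='=&')
print(encoded)
# Brackets become %5B%5D and spaces become %20.
```

The point being: brackets and spaces in a query string are routine plugin behavior, easily handled by encoding–or by simply not blocking an authenticated site administrator.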

You see, it’s tracking the prominent words in the article as I type them.  You can see I was writing an article about “copyblogger”.  It’s one of my Top 15 Marketing Masters profiles where I go through and analyze the marketing tactics of the Top 15 online marketers I follow.

Presumably, if I type enough of the right things fast enough, WPEngine’s security bots are triggered and my IP is frozen out.  Of course, WPEngine is adamant that this is sound practice and that really it must be Yoast that’s at fault.  As a software developer, I look at this and call BS.

This is all innocuous stuff.  If they were going to have a problem with this stuff at all, they should not be locking out IPs that they’ve authenticated as the site owners.  Maybe others, but not a properly logged in administrator.

In fact, I would submit that there should be no case where they cause data loss based on anything I can type into a blog post.  Yes, perhaps if I misconfigure some serious system level setting, MAYBE.  But not just because I’m writing a blog post.

There are so many ways the WPEngine folks could work around this and prevent data loss.  I have a hard time believing I’m the only one it happens to.  Certainly the tech support rep knew exactly what was going on when I got in touch.  But, for whatever reason, it continues.  I couldn’t even get the rep to escalate me to his supervisor.

It may be time for me to find a better hoster.  This is just silly to keep losing data so frequently.

Posted in cloud, customer service, platforms, service | Leave a Comment »

Oh Dear, the Green Pundits Don’t Understand the Cloud or Multitenancy

Posted by Bob Warfield on January 16, 2015

Recently I was drawn into a discussion of how Green the Cloud is, where I responded as follows:

SaaS is going to come out ahead of any reasonable calculation of carbon emissions versus on-prem. Multi-tenancy is just a lot more efficient. Look at the data centers of companies like Google, Amazon, and Facebook. Most corporates wish they could come close as they watch these companies dictate every detail right down to the exact design of servers in order to minimize their costs. As everyone agrees, most of that cost is energy.

So choose SaaS if you’re worried about carbon, and yes, it could become another axis of competition in terms of exactly which Cloud provider does it best.

Tom Raftery immediately responded:

The answer is that it depends, tbh. It depends entirely on the carbon intensity of the data centre (where it sources its energy), not the efficiency of the data centre.

If you have a data centre with a PUE of 1.2, and it is 50% coal powered (not atypical in North America, Germany, Poland, and others, for example), it will have a higher CO2 footprint than a data centre with a PUE of 3.0 powered primarily by renewables – again I have run the numbers on this and published them.

Similarly with on-prem. If I have an app that I’m running in-house, and I’m based in a country like Spain, France, Denmark, or any other country where the electricity has a low carbon intensity, then moving to the cloud would likely increase the CO2 footprint of my application. Especially if the cloud provider is based in the US, which has 45% of its electricity generated from coal.

Tom is the chief analyst for Greenmonk, which writes about this sort of thing for a living.  He’s been quoted by others who are in the same camp such as Brian Proffitt on ReadWriteWeb.  And who wouldn’t love a nice juicy story to put those darned Cloud vendors in their place?  Those guys have been riding high for too long and ought to be brought down a notch or two, harrumph.

I have a lot of problems with this kind of math–it just doesn’t tell the whole story.

First, I can’t imagine why Tom wants to be on record as saying that PUE (Power Usage Effectiveness) just doesn’t matter.  Sure, he has found some examples where CO2 footprint overwhelmed PUE, but to say the answer depends entirely (his word) on the sources of the data center’s energy and not on the efficiency of the data center just seems silly to me.  Are there no data centers anywhere in the world where PUE matters?  Did all the Cloud data centers with great PUE just magically get situated where the carbon footprints are lousy enough that PUE can’t matter?

I’m very skeptical that could be the case.  You must consider both PUE and CO2 per kilowatt-hour–how could we not, when we’re talking per kilowatt-hour and PUE determines how many kilowatts are required?

Here’s another one to think about.  If this whole PUE/CO2 thing matters enough to affect the economics of a Cloud vendor, we should expect them to build data centers in regions with lower-carbon energy.  Since they put up new data centers constantly, that’s not going to take them very long at all.  Some are talking about adding solar to existing facilities as well.  Now, do we want to lay odds that corporate data centers will be rebuilt and applications transferred as quickly for the same reasons?  If you’re running corporate IT and you have a choice of selecting a Cloud data center with better numbers or building out a new data center yourself, which one will get you the results faster?  And remember, once we are comparing apples to apples on CO2, those Cloud vendors’ unnaturally low PUEs are going to start to haunt you even more as they run with fewer kilowatt-hours.
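The arithmetic here is simple enough to sketch.  Emissions per kilowatt-hour of IT load are roughly PUE times the grid’s carbon intensity, so both numbers matter (the intensities below are illustrative round figures, not measured grid data):

```python
# Back-of-envelope check that PUE and grid carbon intensity BOTH matter.
# Intensities are assumed round numbers (kg CO2 per kWh of grid energy).
def co2_per_it_kwh(pue, grid_kg_per_kwh):
    # PUE = total facility energy / IT energy, so every kWh of IT load
    # draws PUE kWh from the grid.
    return pue * grid_kg_per_kwh

coal_heavy = co2_per_it_kwh(1.2, 0.50)   # efficient DC on a dirty grid
renewables = co2_per_it_kwh(3.0, 0.05)   # inefficient DC on a clean grid
best_case  = co2_per_it_kwh(1.2, 0.05)   # efficient DC on the clean grid

# Tom's point holds in isolation (0.60 vs 0.15 kg per IT kWh), but once
# a vendor sites its low-PUE data center on the cleaner grid, the good
# PUE wins by another 2.5x (0.06 kg per IT kWh).
```

So yes, a clean grid can beat a good PUE today–but only until the low-PUE data centers relocate, at which point PUE is the remaining differentiator.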

Multitenancy Trumps all this PUE and CO2 Talk

But there’s a bigger problem here in that all data centers are not equal in another much more important way than either PUE or fuel source CO2 footprints.  That problem is multitenancy.  In fact, what we really want to know is CO2 emissions per seat served–that’s the solution everyone is buying.  Data centers get built in order to deliver seats of some application or another, they’re a means to an end, and delivering seats is that end.  The capacity they need to have, the number and type of servers, and hence the ultimate kilowatts consumed and carbon footprint produced is a function of seats.  Anyone looking purely at data centers and not seats served is not seeing the whole picture.  After all, if I run a corporation that has a datacenter, it’s fair to charge the carbon from that datacenter against my corporation.  But if I am subscribing to some number of seats of some Cloud application, I should only be charged the carbon footprint needed to deliver just those seats.  Why would I pay the carbon footprint needed to deliver seats to unrelated organizations?  I wouldn’t.

Corporate data centers have been doing better over time with virtualization at being more efficient.  They get a lot more seats onto a server than they used to.  The days of having a separate collection of hardware for each app are gone except for the very most intensive apps.  But that efficiency pales in comparison to true multitenancy.  If you wonder why, read my signature article about it.  I’ll run it down quickly here too.

Consider using virtual machines to run 10 apps.  Through the magic of the VM, we can install 10 copies of the OS, 10 copies of the Database Server, and 10 copies of the app.  Voila, we can run it all on one machine instead of 10.  That’s pretty cool!  Now what does Multitenancy do that the VMs have to compete with?  Let’s try an example where we’re trying to host the same software for 10 companies using VMs.  We do as mentioned and install the 10 copies of each kind of software, and now we can host 10 tenants.  But with multitenancy, we install 1 copy of the OS, 1 copy of the Database, and 1 copy of the app.  Then we run all 10 tenants in the same app.  In fact, with the savings we get from not having to run all the VMs, we can actually host more like 1000 tenants versus 10.

But it gets better.  With the Virtual Machine solution, we will need to make sure each VM has enough resources to support the peak usage loads it will encounter.  There’s not really a great way to “flex” our usage.  With Multitenancy, we need a machine that supports the peak load of all the tenants at any moment in time on the system.  We can choose to bring capacity on and off line at will, and in fact, that’s our business.  For a given and very large number of seats–larger than almost any single corporate application–would we rather bet the corporation can be more efficient with on-prem software in its wholly owned data center, or that the SaaS vendor will pull off far greater efficiency given that its software is purpose-built to do so?  My bet is on the SaaS vendor, and not by a little, but by a lot.  The SaaS vendor will beat the corporate data center by a minimum of 10-20x and more likely 100x on this metric.  You only have to look at the financials of a SaaS company to see this.  Their cost to deliver service is a very small part of their overall expenses, yet most SaaS apps represent a considerable savings over the cost of on-prem even though they carry the cost of delivering the service, which the on-prem vendor does not.
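To put rough numbers on the seats-per-emissions point, here’s a back-of-envelope sketch using the 10-vs-1000 tenant counts from above (the per-server energy figure is an assumed round number, nothing more):

```python
# Rough sketch of why seats-per-server dwarfs PUE differences.
KWH_PER_SERVER_YEAR = 5000           # assumed annual draw of one loaded server

vm_tenants = 10                      # 10 full OS/DB/app stacks per machine
multitenant_tenants = 1000           # one shared stack, many tenants

vm_kwh_per_tenant = KWH_PER_SERVER_YEAR / vm_tenants             # 500 kWh/seat
mt_kwh_per_tenant = KWH_PER_SERVER_YEAR / multitenant_tenants    # 5 kWh/seat

# A 100x gap in energy (and hence CO2) per seat -- far more than any
# plausible spread in PUE or grid carbon intensity can make up.
print(vm_kwh_per_tenant / mt_kwh_per_tenant)   # -> 100.0
```

Whatever the exact server wattage, the ratio is what matters, and the ratio is set by tenants per machine.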

Conclusion

Raftery says “energy use” and “emissions produced” have been conflated to mean the same thing.  I say he’s absolutely right about that, but he hasn’t seen the bigger picture: it’s not energy use or emissions produced in isolation, it’s seats delivered per unit of emissions produced.  It’s having the flexibility to make a difference rapidly.  And that is why we bet on the Cloud when it comes to being Green.

Posted in cloud, data center, enterprise software, saas, strategy | 1 Comment »

Authentication as a Service: Slow Progress, But Are We There Yet?

Posted by Bob Warfield on July 11, 2014

Authentication as a Service solves a problem every Cloud Developer, mobile or desktop, has to solve.  As one player in the space, AuthRocket, puts it:

Do you really want to write code for users, forgotten passwords, permissions, and admin panels again?

To that I would add, “Do you really want to have to be a world class expert on that stuff to make sure you don’t leave some gaping security hole out of ignorance?”  I think the answer is a resounding, “NO!” to both questions.  Why do it in this world of Agile Development, Lean Startups, and Minimum Viable Products?  It’s one of those things everyone does (and should do) pretty much the same way from a user’s perspective, so there is no opportunity for differentiation.  You have to do it right because the downside of security problems is huge.  You have to do it right up front to protect your customer’s data and your investment (so nobody gets to use your products for free).  There’s basically very little upside to rolling your own (it’ll only slow you down) and tremendous downside.  Hence, you’d like to buy a service.

I keep going around this block for my own company’s (CNCCookbook) products, and I surely would like to get off that merry-go-round.  I wanted to buy this some time ago, and have written about it for quite a while.  For example, in an article I wrote 4 years ago on PaaS Strategy (Platform as a Service), I suggested login would be an ideal service for a PaaS to offer, with these words:

Stuff like your login and authentication subsystem.  You’re not really going to try to build a better login and authentication system, are you?

I sound just like AuthRocket there, don’t I?  I’m sure that’s not the earliest mention I’ve made, because I’ve been looking for this stuff for a long time now.  As I say, I had to roll my own because I couldn’t find a good solution.  I would still like to replace the solution that CNCCookbook uses with a nice Third-Party service.  I only have a few very generic requirements:

–  It has to offer what I need.  Basically that’s Email + Password login with all the account and forgotten password management interactions handled for me.  It would be very nice if they do Federated Login using the other popular web services like Amazon, Facebook, Twitter, Google, or whatever.  It would also be very nice if it could do two-factor login.  The latter two are optional.

–  It has to work well.  I judge this by who has adopted it and how it is reviewed.

–  It has to be here for the long haul.  I’ll judge this by size of customer base and quality of backers.  AuthRocket, for example, is still at the invitation-only Beta stage.  That’s too early for me.  I have mature products and don’t want to have to change out this service too often.

–  It has to be easy for me to access the APIs.  I prefer a nice RESTful API, but I will take a platform-specific API for my chosen development platform: Adobe Flex.  And no, I don’t want to debate that platform–it has worked fabulously well for me, the products are mature, and I am not looking to switch.

–  It has to be easy to tie it back to securing my data in the Amazon Web Services Cloud.

–  Optional Bonus:  It helps me solve the problem of disconnected data access.  My apps are Adobe AIR apps.  You download and can run without a web connection for a period of time.  This is important to my audience, but means I’ve got to use data models that keep local copies and sync with the Cloud when they get connected.

While my apps are not yet available on iOS or Android, all of those things are almost exactly the same problems any mobile app developer faces.  Therefore, this ought to be a hotbed of activity, and I guess it is, but so far I can never seem to find the right solution for me–and I don’t think I’m asking for anything all that crazy.  Let me tell you a little bit about my 2 most recent near misses.

Amazon Cognito

I was very excited to read about Amazon’s new Cognito service.  At CNCCookbook we’re big Amazon believers, and use all sorts of their services.  Unfortunately, at least until Cognito, they didn’t really have a good service for solving CNCCookbook’s authentication problems.  They had IAM, which is a very complicated, very heavyweight, very Big Corporate IT kind of solution.  It looked kind of like maybe you could do it if you had to, but you’d still wind up writing all the darned password management stuff and it looked like it was going to be a real ordeal.  Mostly, I think of IAM as the tool used to define roles for how broad classes of users can access the various other Amazon offerings.  I wanted another service of some kind to be the simpler, friendlier front end to IAM.  Enter Cognito, and it sure sounded good:

Amazon Cognito lets you securely store, manage, and sync user identities and app data in the AWS Cloud, and manage and sync this data across multiple devices and OS platforms. You can do this with just a few lines of code, and your app can work the same, regardless of whether a user’s devices are online or offline.

You can use Amazon Cognito directly from your mobile app without building or maintaining any backend infrastructure. Amazon Cognito handles secure app data storage and sync, enabling you to focus on your app experiences, instead of the heavy lifting of creating and managing a user data sync solution.

A guy like me loves the part about, “You can do this with just a few lines of code” followed by “without building or maintaining any backend infrastructure.”  Now that’s what I’m talking about, I gotta get me some of this!

It’s nearly all there:

–  Amazon is an outfit that can be trusted for the long haul.

–  REST API’s are no problem, that’s how Amazon prefers to operate.

–  Tie back to other Amazon Web Services?  Puh-lease, who do you think you’re talking to, of course one Amazon Service talks to the others!

–  Sync?  Yeah, baby, that’s what Cognito is all about.  More potential time savings for yours truly.

Oops, just one little shortcoming: it only does Federated Login via Amazon, Facebook, or Google.  That’s cool and all, but where’s my Email + Password login so I can seamlessly move customers over to it?  Maybe I missed it, maybe it’s coming, or maybe Amazon just doesn’t think it’s important.  Can I live with forcing my users to make sure they have either an Amazon, Facebook, or Google account?  Yeah, I guess maybe, but we sell a B2B app and it sure seems kind of unprofessional somehow.

Amazon, can you please fill this hole ASAP?

Firebase

I hear fabulous things about Firebase, I really do.  People seem to love it.  It’s chock full of great functionality, and on the surface of it, Firebase should fit my needs.  Yet, when I dig in deep, I find that the login piece is kind of a red-headed stepchild.  Yeah, they advertise Email + Password Login, and they even tell you how to do it.  But there’s no RESTful API available for it.  They list all the right operations:

–  Login, and returns a token
–  Create a new user account
–  Changing passwords
–  Password reset emails
–  Deleting accounts
etc.

However, it appears those operations are handled by a client library that’s specific to each development platform.  If you use one of their chosen platforms, it’s ok.  If not, you can only use their REST APIs for the Cloud Database–no login functionality.  That’s going nowhere for me.  It would’ve been so much nicer had they packaged what’s in the client library in their Cloud and provided RESTful APIs for the functions I’ve listed.  As I told them when I made the suggestion, that makes their offering accessible to virtually every language and platform with the least effort for them, instead of just the few they support.
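To make the suggestion concrete, here’s the shape of the RESTful surface I’m wishing for.  Every endpoint below is hypothetical–it’s what I’d like Firebase (or anyone) to expose, not an API that actually exists:

```python
import json

BASE = "https://auth.example.com/v1"   # hypothetical service, not a real endpoint

def request(method, path, **body):
    # Builds the request a client would send; nothing is called over the wire.
    return {"method": method, "url": BASE + path,
            "body": json.dumps(body) if body else None}

login          = request("POST", "/sessions", email="me@example.com", password="secret")
create_account = request("POST", "/users", email="me@example.com", password="secret")
reset_password = request("POST", "/password_resets", email="me@example.com")
delete_account = request("DELETE", "/users/me")

# A successful POST /sessions would return a token the client presents on
# subsequent calls -- exactly what the platform-specific client libraries
# do today, just reachable from any language that can speak HTTP.
```

That’s the whole ask: five or so endpoints, plain JSON over HTTPS.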

Conclusion:  Crowd Sourcing?

Hey, I’m open to suggestions and the Wisdom of the Crowd.  Maybe someone out there knows of a service that meets my requirements.  They seem pretty generic and I’m frankly surprised I still can’t find such a thing after all these years of building almost anything you can imagine as a service.  We’re not very far away from it.  Either Amazon or Firebase could add the functionality pretty easily.  I’m hoping maybe I’ll get lucky in the next 6 months or so.  If anyone knows the right people in those organizations (or their competition), pass this post along to them.


Posted in bootstrapping, business, cloud, mobile, platforms, saas, service, software development | 3 Comments »

Jump Starting a Small Business With Cloud Services

Posted by Bob Warfield on November 18, 2013

So you want to start a small business, perhaps a bootstrapped tech company?  Good for you, I enjoy mine immensely.  Let me suggest you adopt a rule that I’ve used for a long time:

If it’s available in the Cloud, use the Cloud Service.  Don’t roll your own or manage your own server, even if it is a server in the Cloud.

The thing about a good Cloud Provider (or SaaS service, if you prefer), is that their service is their business.  If they’re doing it right, they can afford to know a lot more about it, do the job a lot better, and deliver it a lot more cheaply than you can.  Meanwhile, you have plenty of work to occupy your time.  Keep your focus on doing those things that uniquely differentiate your business and delegate the rest to the Cloud whenever you can.

That’s the high-level mindset.  Using this approach I have consistently taken companies that had significant IT burdens and gotten them down to where it takes a talented IT guy maybe 1/3 of their time to keep things humming along smoothly.  This for sites that have millions of visitors a year–plenty for most small businesses.  BTW, my instructions to the IT guy were to spend that 1/3 of time automating themselves out of a job.  They’ll never get there, of course, but all progress in that direction is helpful.

Why So Much Cloud Emphasis?

Let’s drill down on why I think that’s the way for small businesses to go.

First, there’s no need to deal with hardware and so that whole time-consuming effort of ordering the servers and setting them up is eliminated. You can turn cloud-based services on or off in seconds.

Second, the cloud-based services know how to manage their services because that’s all they do.  Suppose you choose to base your web presence on WordPress. You could deal with setting up the WP server on an Amazon instance and still be in the Cloud, but now you have to manage it (keep all the security updates going, run backups, optimize for speed, etc.). That takes time and expertise.

Or, you can let a service like Page.ly, ManageWP, or WordPress.com do all that heavy lifting for you.  Now you don’t even have to think about it much—it just happens and they follow industry best practices it would be hard for a small business to emulate.

Third, you can scale up and scale down. Small business traffic is very bursty. One day some big site like Techcrunch writes about you and your site is melting down—nobody can access it. You needed to scale up fast! The next day you’re back to your normal small business traffic. If you had invested the time and money in big scale, it’s wasted on those days laying idle. But, if you choose the right cloud-based host they can scale up and scale down automatically for you.

BTW, this is critical for good Google results as they penalize slow sites on SEO.

Okay, How Can My Business Use the Cloud to Best Effect?

I’ll cover this one by what I see as the critical business phases:

1. Reach your audience

Job #1 has got to be creating a web presence that lets you reach your audience. You need to do this even before you have a product to sell them, because you’ll need to take advantage of the time you spend building product to optimize that audience touch point. Towards that end, you’ll want the following:

–  Web Site with Blog: I highly recommend building that around WordPress using a WordPress Cloud Hosting service. It lets you leverage the huge WordPress ecosystem, which means lots of off-the-shelf plugins and know-how to make your web site sing with minimal effort on your part.

–  Analytics and A/B Testing: Get hooked up with Google Analytics via a plugin for WordPress so you can monitor what people do on your site and use that feedback to improve your Audience attraction and engagement. A/B Testing lets you try pages side by side to see which one works best. It takes time to optimize, so don’t wait until you’re ready with product. Start day one trying things to see what works.

–  Social Media: Get your Facebook, Twitter, and LinkedIn pages going ASAP. If nothing else, you need to nail down your presence and brand in those places. Use WordPress plugins to automate the interconnection of Social Media with your web site.

–  Domain: Don’t bother picking a company name until you nail your custom web domain. Read up on SEO aspects of that to make sure the domain is helping you pull traffic. Get yourself a DNS service such as DNSMadeEasy or one that Amazon provides. This will let you tie together disparate Cloud Based services under your brand and domain.  The DNS decides what computers actually get the message when someone types in a URL.  It’s like the central switchboard of your web presence and you’ll use it for all sorts of things.  It’s also your lifeline if some emergency strikes a Cloud provider and you need to bypass them to get to an alternate of some kind.

–  Email: Gear up both your firm’s employee email plus an email service you can use to email customers.  Start building your mailing list day one so you have as big a list as possible available to help when you’re ready to launch. I like services like MailChimp for the mass mailings and services like Google Apps for employee email. Be sure your email service includes easy integration to your WordPress blog and start a weekly email newsletter from day 1.

–  SEO: Learn to master your own SEO activities. It affects every aspect of your web presence. You have two audiences—people and the machines that are performing search at places like Google or Bing. You can’t afford to fail either audience. There are a variety of Cloud Services that can help you with this.

–  Surveys: You need all the feedback you can get to guide your efforts to reach your audience. I like SurveyMonkey and Qualaroo.  Survey Monkey does complex surveys.  Qualaroo does neat little spur of the moment unobtrusive surveys.  Both are extremely useful.  I use Survey Monkey for targeted surveys that go out via email and blog posts.  You get a survey when your free trial ends.  I use them to research market topics.  They’re great for creating interesting content–people love to read survey results.  Qualaroo is on key web pages asking:

“Would you recommend this product?”  on the download page

“What articles should we be writing?”  on the blog

“What can we do to make this product more likely something you could buy and use?”  on the pricing page and elsewhere.

–  Customer Service:  Customer Service isn’t just about fixing product problems.  It’s about giving your audience a way to reach you and a way to reach each other to engage.  As such, it’s worth setting these systems up from Day 1.  For my businesses, I want a Customer Service solution that offers a pretty big menu:

Trouble Ticketing.  This is the classic Customer Service app but it’s the one you’ll use the least often if you’re doing it right.  Consider Trouble Tickets to be a failure.  A failure to prevent the problem before it started.  A failure in documentation or user interface/experience.  A failure to communicate.  The customer’s point of last resort.  You have to have Trouble Ticketing, but you want to do everything in your power to make sure Customers never have to use it.

Idea Storming:  I love giving customers every possible way to provide feedback.  Ideation is the ability to put an idea on an idea board and vote on it.  Give customers a fixed scarce number of votes and then pay attention.  Whatever rises to the top on the voting is something you need to deal with.

Forums:  Own your own forums even though there are lots of forums out there.  Make them private and require some form of sign up.  This is your exclusive User Club.  Be very responsive on the forums.  Go there first and Trouble Tickets second.  If you help someone with a problem on the forums, others can see the answer and potentially be helped in the future.  If you help someone by closing a Trouble Ticket, you only helped them and the effort is not leveraged.

Knowledge Base:  You want a KB integrated with the rest of the Customer Service experience so that as someone enters a Trouble Ticket, they are directed to KB articles that can potentially help.

I use a service called User Voice to do all those things except the forums.  I use a free BBS service for that.

2. Build your product

If you’ve got a software company, or perhaps an e-commerce company, you’ve got to build some software.  There are helpful Cloud services here too:

–  Source Control: You need source control day one.  Being without it is like jumping out of an airplane without a parachute.  I like Github but there are lots of others.

–  Bug Tracking: For bug tracking and the like, Atlassian and others have this base covered. Don’t confuse it with Customer Service software, which I will cover under E-Commerce.

–  Online information resources: There are so many here I can’t begin to count, but we live in an age where there are literally thousands of developers helping each other online in all kinds of ways.  StackExchange can answer almost any technical question you might have. Online forums are there too for more specific areas.

–  Consulting: Need quick design work but don’t have a designer on staff yet? Need a specialized piece of code written that’s just part of your solution but nobody knows how? Need a little extra testing help or maybe some tech writing? There are tons of services like Elance that can get you some high quality temporary help.

3. E-Commerce

For this stage, you have a vibrant audience, big and growing email list, and your product has had a successful free Beta test. Time to start charging. Here are some things you may need to take the order, process payments, and handle the accounting:

–  Shopping Cart: If you chose WordPress, there are tons of plugins to help. But, they’re not the only game in town either.

–  Payment Processing: Who will process credit cards for you? Lots of possibilities ranging from Paypal to Stripe.  Be sure your processor covers International sales and any special needs you may have, like recurring payments for subscription services.

–  Accounting: A lot of these services can connect to QuickBooks to make your bookkeeping easier.  Scope that out in advance.

How Do I Choose the Right Service?

With so many different kinds of Cloud Service, it is hard to be specific. So, I’ll talk about the generic:

– Look for an online and vocal fan club for the service. It doesn’t take long with Google to see which services are loved and which ones are marginal.

– Look for companies similar to yours that use the service proving someone else has tried it and succeeded. Try to contact those companies and see what they think of the service. I’m not talking competitors—they won’t help. But there are always similar kinds of companies that don’t compete at all.

– Make sure you have a roadmap for what you need your services to be able to do for at least the next 2 years. Get your developers and others to review the proposed service against the roadmap and make sure you won’t have to switch down the road. It’s a good exercise to have that Roadmap available anyway—it’s just a wish list of everything you want to do for Marketing, E-Commerce, and Product over the next 2 years.

– Get your developers to look carefully at the published APIs for the services. Even if you won’t be using any APIs early, someday you might. The quality of the APIs is an indication of how well architected the service is, too.

Conclusion

You can build a pretty amazing online Customer Experience if you make full use of available Cloud Services as described.  If you have to build all of it yourself, set up the servers, do the backups, install all the updates, and so on, you'll be wasting a lot of time that could be spent doing other things.

Posted in bootstrapping, business, cloud | Leave a Comment »

Don’t Bury the Map With the Treasure: Thin Clients Trump Apps in Walled Gardens

Posted by Bob Warfield on July 10, 2013

One of the questions every SaaS company will have to be able to answer for their customers is, “What happens if you go under?”  It’s actually a fascinating question, and one you have a chance as a vendor to think about and turn to your advantage.  For example, one of my SaaS ventures was Helpstream.  We had the unpleasant experience of being shut down by our VC’s shortly after the 2008 crash, but we tried to do well by our customers.  As it turned out, our architecture made it very straightforward for us to offer those folks the chance to host their own Helpstream instance and keep going rather than have to stop cold turkey.  There are still customers live on the software as a result.  I won’t go into all the details of how this was accomplished, but suffice it to say our architecture made us very nimble about being able to create multi-tenant apartment complexes that could house anywhere from 1 to a couple of thousand tenants on standard Amazon EC2 + S3 infrastructure.  Thus it was trivial for us to set up a customer as their own tenant in their own apartment house and hand them the keys.  This is not something you could say about something like, say, Salesforce.com, or many other SaaS offerings.  Building on a commodity cloud like Amazon can have its virtues.
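For illustration, the apartment-complex idea can be sketched as little more than a capacity-aware lookup table. Everything here is hypothetical (the names, the 2,000-tenant default); the real Helpstream plumbing involved EC2 and S3 provisioning, not an in-memory dict:

```python
class ApartmentComplex:
    """One deployment (think: an EC2 cluster plus S3 storage) housing 1..N tenants."""
    def __init__(self, name: str, capacity: int):
        self.name = name
        self.capacity = capacity
        self.tenants: set[str] = set()

    def full(self) -> bool:
        return len(self.tenants) >= self.capacity

class TenantDirectory:
    """Maps each tenant to its apartment complex; new complexes open as old ones fill."""
    def __init__(self, capacity_per_complex: int = 2000):
        self.capacity = capacity_per_complex
        self.complexes: list[ApartmentComplex] = []
        self.homes: dict[str, ApartmentComplex] = {}

    def place(self, tenant_id: str) -> str:
        if not self.complexes or self.complexes[-1].full():
            self.complexes.append(
                ApartmentComplex(f"complex-{len(self.complexes)}", self.capacity))
        home = self.complexes[-1]
        home.tenants.add(tenant_id)
        self.homes[tenant_id] = home
        return home.name

    def evacuate(self, tenant_id: str) -> ApartmentComplex:
        """Hand a customer the keys: move them into a complex all their own."""
        self.homes[tenant_id].tenants.discard(tenant_id)
        solo = ApartmentComplex(f"solo-{tenant_id}", capacity=1)
        solo.tenants.add(tenant_id)
        self.complexes.append(solo)
        self.homes[tenant_id] = solo
        return solo
```

The point of the design is that `evacuate` is cheap: because every tenant already lives behind the same abstraction, spinning a customer out into their own single-tenant home is a bookkeeping change, not a rewrite.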

In the perpetual on-premises license days, we had source code escrows.  In the SaaS/Cloud era, it makes sense to codify what happens in the event of a dissolution of some sort.  As the Helpstream example shows, it’s possible to do something that makes enormous sense for customers and thereby give them a greater sense of security, something that the Cloud is not often known for.

Unfortunately, things also go on in the Cloud that have nothing to do with a particular vendor, but that actually make things much worse for customers.  I present the example of Feedly and the Apple App Store.

As most of you will know, Google discontinued Google Reader, forcing those of us who need such a thing to seek alternatives.  I looked at a good half dozen during the warning period and eventually settled on Feedly.  Let me be clear that this is still not a decision I regret, but I am forced to endure a not so pleasant aspect of the way Feedly works on my iPad.  Feedly is set up to seamlessly transfer you from Google Reader to Feedly.  That part is good.  What is less good is that Google changed some aspects of the API and created a little problem for the Feedly app.  Feedly works great for me on my desktop, because I can access it via web browser as a thin client.  It is dead to me on my iPad because of this problem.  Feedly mistakenly thinks it is overloaded with users, a surprisingly plausible story in the wake of Google Reader shutting down.  In fact, this is not the case.  There is simply a bug that causes the iOS Feedly app to mistakenly report this problem.

Now here is the problem:

Since iOS is a walled garden, and Feedly has to wait until Apple approves a fixed version of the app, they are stuck.  It’s been 7 days and the app still doesn’t work and a fix has not been approved.  As my headline says, the map is buried with the treasure because Apple is preventing them from fixing a very obvious problem.  Feedly has no real answer for this, and Apple isn’t telling them an ETA on approval either.  It’s hard to be impressed with either Apple or Feedly based on how all of this is rolling out.  You’d think whatever process Apple uses would be aware of how many people use Feedly (it’s millions) and could find a way to expedite an obvious fix.  Apparently the Monarchy of Cupertino cannot be bothered with such mundane details as customer happiness.

Meanwhile, I have to ask myself, “Why can’t I run the Feedly thin client in the Safari browser on iOS?”  That would be so handy right about now.  Yet, they seem to have been at pains to ensure that if you are on an iPad, you surely must use their app and are to be prevented from accessing the thin client that works so well on my desktop and that would have prevented this nuisance.

Folks, the next time you’re using your tablet and you go to some website and it offers to download an app, skip it.  That app is not going to improve your user experience enough to be worth the trouble.  You are only going to encourage them not to keep their thin client working well on your platform.  And someday, you may wish the map hadn’t been buried with the treasure the way the Feedly guys did it.  Don’t frequent the Walled Garden.  Don’t encourage it at all unless you absolutely must.

This was all tragically avoidable, and I hope Feedly will take note and pave the way for their thin client to work on iOS so the next time they don’t have to wait on Apple.  Those of you at other companies, don’t let this happen to your customers!

Posted in apple, cloud, saas, service, strategy | 3 Comments »

Check on Your SaaS Company’s Hosting Provider, Avoid Firehost

Posted by Bob Warfield on February 25, 2013

PagelyDown

My CNCCookbook blog is experiencing its second outage so far this month.  That’s a cause for visitor unhappiness and potentially lost business.  I use Page.ly, because I believe in SaaS services.  CNCCookbook is bootstrapped, and I try not to spend any of my time at all doing something that I can easily have done for me by a SaaS provider, like hosting a WordPress blog.  Page.ly has been pretty good in most respects, though far from perfect in terms of outages.  Frankly, there have been too many outages and having two in one month is starting to be a bit much.  Their story is that their hosting provider, FireHost, has created both of these problems.

It’s even affected the Page.ly blog, as it did during the last outage.  Ironically, I wouldn’t be posting this blog except that Page.ly’s blog went to the same screen I’m showing here when I attempted to comment that maybe it was time they thought about Firehost alternatives.

Whatever’s going on at Firehost, and however much it saves Page.ly to use Firehost instead of some more reliable service, it’s not worth it guys.  It’s making you look bad, and by extension, that makes my business using your service look bad.  The good news is if it continues, it is very straightforward to migrate to Page.ly’s competitors.  I also have experience with WPEngine from a prior company, and found them to be more performant and a nicer service, but quite a bit more expensive.  Perhaps some of that expense is going to a better hoster for their service.  At CNCCookbook, we use Amazon for our own services and I can’t remember the last time we had an outage.  Maybe once have we had one, and it involved the simple expedient of rebooting our EC2 instance.

In the end, if I do move the CNCCookbook blog, I will be checking who the new provider uses as their hoster.  If it’s FireHost, there’s not much point in moving.  Some service should start aggregating uptime data on the hosting services.  It would be good to know who your SaaS provider uses–unless they’re huge they probably don’t have their own servers–and how reliable that provider has been over time.  While it may not seem like it, it will be in every SaaS company’s best interests to cooperate with such data collection simply because it shines a light on the hosting providers that will require them to rise to the next level of reliability.  As it stands, they’re a step removed and much harder to track.
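The aggregation itself is simple arithmetic once you have the outage log. A sketch of the headline number such a service would publish (the dates below are made up for illustration, and outages are assumed to fall inside the reporting period):

```python
import datetime

def uptime_percent(period_start, period_end, outages):
    """Percent of the period a service was up, given (start, end) outage pairs."""
    total = (period_end - period_start).total_seconds()
    down = sum((end - start).total_seconds() for start, end in outages)
    return 100.0 * (total - down) / total

# A hypothetical February with two outages: 1.5 hours and 2 hours.
feb = (datetime.datetime(2013, 2, 1), datetime.datetime(2013, 3, 1))
outages = [
    (datetime.datetime(2013, 2, 10, 3, 0), datetime.datetime(2013, 2, 10, 4, 30)),
    (datetime.datetime(2013, 2, 25, 14, 0), datetime.datetime(2013, 2, 25, 16, 0)),
]
print(round(uptime_percent(*feb, outages), 3))  # 99.479 -- a long way from "five nines"
```

Two outages of a few hours each already drops you to roughly 99.5% for the month, which is exactly why per-hoster numbers like this would be worth publishing.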

Sorry Page.ly and Firehost–no links for you.  Not happy today.

Posted in business, cloud | 3 Comments »

How Many Software Companies Monitor Their Software as Well as Tesla Monitors its Cars?

Posted by Bob Warfield on February 14, 2013

The unfolding story of how the New York Times’ negative review of the Tesla Model S may have actually been faked is a cautionary tale for software vendors.  Basically, there is enough instrumentation and feedback built into the Tesla S that Elon Musk was able to “shred” the review, as Dan Frommer writes.  The graphical plot of exactly what was happening with annotations is particularly damning:

NY Times Tesla Speed Chart

It’ll be fascinating to see how the NYT responds.  Hard to imagine how they do anything but investigate Broder and ultimately move him along elsewhere.  To do much else would imply very little journalistic integrity.

My question for you is that since you’re reading this blog and are likely somehow involved in high tech hardware or software at some level, how does your product compare in terms of how well it can monitor what your users are doing with your product?

I’m fascinated with the idea of closing the feedback loop for the good of customers.  Yes, it’s great Musk can catch the NYT in a bogus review, and perhaps you will catch a reviewer too, but the potential for improving your customer’s experience is of much greater value to your product.  This may seem like a Big-Company-Only idea, but I’m pursuing it with a vengeance for my SaaS bootstrap company (CNCCookbook) because I need precise feedback that pinpoints where I can do the most good for my users with the scarce resources I have available.  I can tell you from experience that the tools are available and straightforward.  You can have the data for very little effort invested.
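For concreteness, here's a minimal sketch of the kind of instrumentation I mean: counting feature-level usage events so you can see where scarce development effort will do the most good. The feature names are hypothetical, and a real product would batch-post events to an analytics endpoint rather than hold them in memory:

```python
from collections import Counter
from datetime import datetime, timezone

class UsageTelemetry:
    """Minimal in-app event recorder for feature-level usage monitoring."""
    def __init__(self):
        self.events = []          # (timestamp, user_id, feature) tuples
        self.counts = Counter()   # feature -> total uses

    def record(self, user_id: str, feature: str):
        self.events.append((datetime.now(timezone.utc), user_id, feature))
        self.counts[feature] += 1

    def top_features(self, n: int = 5):
        """The features users actually touch most -- where tuning pays off."""
        return self.counts.most_common(n)

    def users_of(self, feature: str) -> set:
        """Who has discovered a given feature; the gap is your onboarding work."""
        return {user for _, user, f in self.events if f == feature}
```

Even this little counter answers the two questions that matter most: which features carry the product, and which ones nobody has found yet.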

The next thing I am after is to automate responses to that data.  I’ve been reading the blog of a company called Totango with some interest.  They essentially want to provide SaaS automation for a Customer Success team.  Various folks have written about the importance of Customer Success and I’m also a big believer.  My thoughts at this point are to start out relatively simple.  I want to understand the early lifecycle of my products and be able to trigger automated actions based on that cycle.  For example:

Step 1:  Installation

Monitor the first time the customer has successfully logged into the product.  Offer increasing amounts of help via emails once a day until they achieve this milestone.  The emails can start with self-service help resources of various kinds and eventually escalate to offering a call or help webinar.  The goal is to get the customer properly installed.

Step 2:  Configuration

This seems like part of installing, but in fact there is significant post installation configuration needed for CNC Manufacturing software.  Same sort of thing: provide daily emails with increasing levels of help until the system determines that the user has properly configured the system.  Also, this is an opportunity to collect information.  We provide canned configuration for the most common cases and finding out what the next tranche of cases to target should be is very helpful.

Step 3:  The Path to Power Usage

It’d be great if everyone who signed up for our 30 day free trial actually got to see and understand all of the features that set our product apart.  I’ve seen some other products like Dropbox (Full disclosure: they give me another 250MB of storage if you use that link and then sign up. If you’d rather I didn’t get the extra storage, use this link instead. If you sign up, they’ll give you a link where you can get 250MB free too.) walk customers through a usage maturity exercise.  They’ve somewhat gamified it by giving out some of their “currency” in the form of extra storage if you complete the tasks.  My goals here would be to get everyone to see as many of our unique functions as possible during the 30 day trial.

Step 4:  The Holy Grail: Referrals

If all this goes well, the customer gets through the Trial, understands the unique capabilities of our products, and likes the product well enough to buy it, then the final stage in this incarnation is to ask them to refer others they know who might like the product.
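The four steps above boil down to a small decision function: find the first lifecycle milestone a customer hasn't hit, see how long they've been stuck there, and pick the most-escalated touch that's due. A sketch, with the milestone names and day thresholds invented for illustration:

```python
# Lifecycle milestones in order; a customer is "stuck" at the first one unmet.
MILESTONES = ["installed", "configured", "power_user", "purchased"]

# Escalating touches per milestone as (days stuck, action) pairs.
ESCALATIONS = {
    "installed":  [(1, "email: getting-started guide"),
                   (3, "email: install video"),
                   (7, "email: offer a setup call")],
    "configured": [(1, "email: configuration checklist"),
                   (5, "email: invite to config webinar")],
    "power_user": [(2, "email: feature tour, part 1"),
                   (6, "email: feature tour, part 2")],
    "purchased":  [(25, "email: trial ending soon, time to buy")],
}

def next_touch(achieved, days_stuck):
    """Return (milestone, action): the most-escalated touch now due for the
    first unmet milestone, or the referral ask once everything is met."""
    for milestone in MILESTONES:
        if milestone not in achieved:
            due = [action for days, action in ESCALATIONS[milestone]
                   if days_stuck >= days]
            return (milestone, due[-1] if due else None)
    return (None, "email: ask for a referral")
```

Run daily against the telemetry, this is the whole closed loop: the product reports milestones, the drip campaign reacts, and nobody falls silently out of the trial.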

That’s a pretty simple roadmap for how to create some closed-loop feedback of telemetry and drip email that improves your customer’s experience.  So I’ll ask again:

Is your company set up to monitor your users as successfully as Tesla monitors its drivers?  Why not?  I’ve used a lot of software where it is pretty clear they’re not monitoring much at all.  I’ve even talked to some of them to encourage change, and they seem receptive.

If you have a story about what sort of work along these lines you’re doing, please share it in the comments below.  I’m very curious.  I think we have the potential to personalize the experience for our customers like never before.

Posted in business, cloud, customer service, software development, strategy, user interface | 7 Comments »

Big Data is a Small Market Compared to Suburban Data

Posted by Bob Warfield on February 2, 2013

Big Data is all the rage, and seems to be one of the prime targets for new entrepreneurial ventures since VC-dom started to move from Consumer Internet to Enterprise recently.  Yet, I remain skeptical about Big Data for a variety of reasons.  As I’ve noted before, it seems to be a premature optimization for most companies.  That post angered the Digerati who are quite taken with their NoSQL shiny objects, but there have been others since who reach much the same conclusion.  The truth is, Moore’s Law scales faster than most organizations can scale their creation of data.  Yes, there are some few out of millions of companies that are large enough to really need Big Data and yes, it is so fashionable right now that many who don’t need it will be talking about it and using it just so they can be part of the new new thing.  But they’re risking the problems many have had when they adopt the new new thing for fashion rather than because it solves real problems they have.

This post is not really about Big Data, other than to point out that I think it is a relatively small market in the end.  It’ll go the way of Object Oriented Databases by launching some helpful new ideas, the best of which will be adopted by the entrenched vendors before the OODB companies can reach interesting scales.  So it will be with Hadoop, NoSQL, and the rest of the Big Data Mafia.  For those who want to get a head start on the next wave, and on a wave that is destined to be much more horizontal, much larger, and of much greater appeal, I offer the notion of Suburban Data.

While I shudder at the thought of any new buzzwords, Suburban Data is what I’ve come up with when thinking about the problem of massively parallel architectures that are so loosely coupled (or perhaps not coupled at all) that they don’t need to deal with many of the hard consistency problems of Big Data.  They don’t care because what they are is architectures optimized to create a Suburb of very loosely coordinated and relatively small collections of data.  Think of Big Data’s problems as being those of the inner city where there is tremendous congestion, real estate is extremely expensive, and it makes sense to build up, not out.  Think Manhattan.  It’s very sexy and a wonderful place to visit, but a lot of us wouldn’t want to live there.  Suburban Data, on the other hand, is all about the suburbs.  Instead of building giant apartment buildings where everyone is in very close proximity, Suburban Data is about maximizing the potential of detached single family dwellings.  It’s decentralized and there is no need for excruciatingly difficult parallel algorithms to ration scarce services and enforce consistency across terabytes.

Let’s consider a few Real World application examples.

WordPress.com is a great place to start.  It consists of many instances of WordPress blogs.  Anyone who likes can get one for free.  I have several, including this Smoothspan Blog.  Most of the functionality offered by wp.com does not have to coordinate between individual blogs.  Rather, it’s all about administering a very large number of blogs that individually have very modest requirements on the power of the underlying architecture.  Yes, there are some features that are coordinated, but the vast majority of functionality, and the functionality I tend to use, is not.  Beyond the WordPress.com example, web site hosting services are another obvious case.  They just want to give out instances as cheaply as possible.  Every blog or website is its own single family home.

There are a lot of examples along these lines in the Internet world.  Any offering where the need to communicate and coordinate between different tenants is minimized is a good candidate.  Another huge area of opportunity for Suburban Data is SaaS companies of all kinds.  Unless a SaaS company is exclusively focused on extremely large customers, the requirements of an average SaaS instance in the multi-tenant architecture are modest.  What customers want is precisely the detached single family dwelling, at least that’s what they want from a User Experience perspective.  Given that SaaS is the new way of the world, and even a solo bootstrapper can create a successful SaaS offering, this is truly a huge market.  The potential here is staggering, because this is the commodity market.

Look at the major paradigm shifts that have come before and most have amounted to a very similar (metaphorically) transition.  We went from huge centralized mainframes to mini-computers.  We went from mini-computers to PC’s.  Many argue we’re in the midst of going from PC’s to Mobile.  Suburban Data is all about how to create architectures that are optimal for creating Suburbs of users.

What might such architectures look like?

First, I think it is safe to say that while existing technologies such as virtualization and the increasing number of server hardware architectures being optimized for data center use (Facebook and Google have proprietary hardware architectures for their servers) are a start, there is a lot more that’s possible and the job has hardly begun.  To be the next Oracle in the space needs a completely clean sheet design from top to bottom.  I’m not going to map the architecture out in great detail because it’s early days and frankly I don’t know all the details.  But, let’s Blue Sky a bit.

Imagine an architecture that puts at least 128 x86 compatible (we need a commodity instruction set for our Suburbs) cores along with all the RAM and Flash Disc storage they need onto the equivalent of a memory stick for today’s desktop PC’s.  Because power and cooling are two of the biggest challenges in modern data centers, the Core Stick will use the most miserly architectures possible–we want a lot of cores with reasonable but not extravagant clock speeds.  Think per-core power consumption suitable for Mobile Devices more than desktops.  For software, let’s imagine these cores run an OS Kernel that’s built around virtualization and the needs of Suburban Data from the ground up.  Further, there is a service layer running on top of the OS that’s also optimized for the Suburban Data world but has the basics all ready to go:  Apache Web Server and MySQL.  In short, you have 128 Amazon EC2 instances potent enough to run 90% of the web sites on the Internet.  Now let’s create backplanes that fit a typical 19″ rack set up with all the right UPS and DC power capabilities the big data centers already know how to do well.  The name of the game will be Core Density.  We get 128 on a memory stick, and let’s say 128 sticks in a 1U rack mount, so we can support 16K web instances in one of those rack mounts.
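The density arithmetic is worth making explicit. All the figures below are the hypothetical ones from the paragraph above, with a guessed mobile-class power budget thrown in to show why miserly cores matter:

```python
def suburb_density(cores_per_stick=128, sticks_per_1u=128, watts_per_core=0.5):
    """Back-of-the-envelope numbers for the Core Stick sketch, one web
    instance per core.  watts_per_core assumes a mobile-class part; a
    server-class core would be an order of magnitude hungrier."""
    instances = cores_per_stick * sticks_per_1u   # 128 * 128 = 16,384
    return {
        "instances_per_1u": instances,
        "kw_per_1u": instances * watts_per_core / 1000,
    }

print(suburb_density())
```

Even at half a watt per core, a single 1U chassis draws around 8 kW, which is why the whole design has to start from the power budget rather than treat it as an afterthought.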

There will be many valuable problems to solve with such architectures, and hence many opportunities for new players to make money.  Consider what has to be done to reinvent hierarchical storage management for such architectures.  We’ve got a Flash local disc with each core, but it is probably relatively small.  Hence we need access to storage on a hierarchical basis so we can consume as much as we want and it seamlessly works.  Or, consider communicating with and managing the cores.  The only connections to the Core Stick should be very high speed Ethernet and power.  Perhaps we’ll want some out of band control signals for security’s sake as well.  Want to talk to one of these little gems?  Just fire up the browser and connect to its IP address.  BTW, we probably want full software net fabric capabilities on the stick.

It’ll take quite a while to design, build, and mature such architectures.  That’s fine, it’ll give us several more Moore cycles in which to cement the inevitability of these architectures.

You see what I mean when I say this is a whole new ballgame and a much bigger market than Big Data?  It goes much deeper and will wind up being the fabric of the Internet and Cloud of tomorrow.

Posted in business, cloud, data center, enterprise software, multicore, platforms, saas, service | 2 Comments »

Dinosaur Bones Just Flew Over My House

Posted by Bob Warfield on September 21, 2012

A space shuttle on the back of the special 747 just flew over my house on its way to its final resting place in LA.  It was a proud but bittersweet sight because it was a taste of what had been in an era that’s come to a close for our manned space exploration program.  We got a call from a friend that it was coming, walked out onto our deck, and there it was within a couple of minutes. The trio of aircraft (there was a fighter escort) were moving along at a stately pace and at a fairly low altitude to give anyone who cared a chance to see the craft fly, one last time, albeit with a lot of help.

For more, see the original article over on my other blog, CNCCookbook.

Posted in cloud | Leave a Comment »

Big Data = BI + ADD

Posted by Bob Warfield on April 26, 2012

Big Data is Business Intelligence plus Attention Deficit Disorder?  That’s gotta be linkbait of an order I’ve not used since my NoSQL is a Premature Optimization post.  What’s up with that?

I just got done attending Jeff Kaplan’s excellent Cloud BI Summit at the Computer History Museum.  It was a very enjoyable event and it helped me align a lot of disparate threads that had been circulating in my noggin into some coherent insights.  It also reminded me what an excellent facilitator of panels Jeff can be.

One of those threads is that the problems Big Data is focused on are not necessarily all that hard.  Everyone wants to talk breathlessly about the incredible volumes of data that are flying around these days and how hard it is to deal with that data.  But I just don’t buy it.  We’ve had Big Data for a long time and we’ve been dealing with it for a long time.  Web Scale only affects a relatively few organizations at the very high end–probably not enough organizations to make it the interesting investment thesis that it currently is held out as.  At smaller scales, the volumes are just not all that crazy relative to the tools that are available.  Put another way, it’s ADD because we simply have that much more data and excuses to obsess over nuts and bolts technology instead of seeking the real actionable insights our organizations need.

Listen, I’ve been doing Big Data for a long time.  You want to talk lots of transactions?  I did a startup called iMiner / PriceRadar that had to process all of eBay’s transactions every day in just a few hours.  This was back in 1999.  There was no Amazon Cloud to leverage.  Nobody was using MySQL, and nobody had really even heard of NoSQL.  eBay itself was still crashing weekly because they hadn’t started using their relational databases in all the ad hoc NoSQL ways (no joins or transactions!) that people would eventually codify and start talking about.  I asked all of my high-powered database expert friends how to handle the data volumes and they said it would require Oracle and lots of expensive high powered Unix hardware.  We couldn’t afford that so I told them we were going to use SQL Server running on commodity Windows hardware.  They laughed and said that wasn’t even possible.  8 weeks later it was up and running and worked well. Today we’d use MySQL and commodity hardware at Amazon.  This was the whole point of my NoSQL is a Premature Optimization post.  You don’t need column stores and flash memory architectures to get this kind of work done for the majority of companies out there.  Who really has Facebook’s data volumes (and BTW, they used MySQL to handle those)?  Who has Google’s data volumes?  Relatively few organizations.

So then what’s the real problem?

The best part of the conference for me was a panel Ken Rudin participated in.  I’ve interviewed Ken on this blog before twice when he was running LucidEra.  He went on from there to run Analytics at Zynga, and now he is at Facebook running their Analytics.  The rest of the people on the panel were from companies building the tools needed to wrangle Big Data.  Ken pretty much cleared the decks when he said that was all fine and well, but the hard part wasn’t really managing the data.  The hard part isn’t the ETL, the data feeds, the database, nor the dashboards and report generators.  That’s all doable and easier than you’d think up to very large scales.  The hard part, according to Ken, is knowing what questions to ask.  He is so right about that.

There’s not that much new under the sun with Big Data.  When I worked at Callidus, terabyte data stores for our application were common.  We were handling huge transaction volumes around sales commissions for the world’s largest companies.  We got it done largely with off the shelf technologies for the Analytics and a little tuning.  If you don’t think managing the complex comp plans for 250,000 insurance agents for one of the largest insurance companies in the world is Big Data, you just don’t know much about the comp plan business.  For one customer we had to horizontally scale our architecture out to over 200 CPUs.

Jane Griffin from Deloitte amplified Ken’s thought in some nice ways.  She said what I had been thinking ever since Ken uttered those words during his session:  there is a growing shortage of the sort of person who knows how to ask the questions.  Once again, she was so right about that.

As a person who does know how to ask those questions and get answers, I have seen it over and over again.  I have often wondered in various organizations why I was doing that kind of work.  Wasn’t there someone besides the SVP of Products or Engineering that could formulate these questions and get them answered with Analytics?  Worse, why was it that when I presented the data and the answers there were so often blank faces?  Why didn’t they get it?  Didn’t they think analytically?

The simple fact is that we are moving from a world of intuition and gut feel to a world of data and analytical thinking.  Before we had the data, intuition was all we had to fall back on when making decisions.  It’s better than nothing, but it can result in spectacular failures.  I’m fond of telling the story of a certain expensive marketing program that went very wrong that was based on someone’s gut feel.  It involved spending a lot of money without producing much of a result.  When my friend Marc Randolph, someone who is very analytical in their thinking, was asked why it failed, he said, “I don’t know, but it was tragically knowable.”  He meant the idea should have been tested at much smaller volumes before all that wood was put behind an arrow that was doomed to miss.

In this world, if businesses want to succeed, they have to realize that data is plentiful.  First they have to collect it up.  That’s the easy part.  Then comes the hard part.  Someone in the organization must have both the skills and the empowerment to ask the right questions, get some answers backed up by hard analytics, and then get changes made to take advantage of this newfound knowledge.  If your competitors have digital strategies driven by hard analytics, and you’re flying by the seat of your pants, you may as well be piloting a biplane and barnstorming because they’re flying a sophisticated jet with radar and GPS navigation.  You don’t want to try to win that contest!

Thinking about these kinds of people that can perform that analytics-question-asking task, it’s clear they’re scarce as hen’s teeth in most corporations.  I am reminded of a similar essential skill set that is equally scarce:  great software developers.  The world went through an interesting evolution largely because of the shortage of great software developers.  At first, when there weren’t that many computers around, IT built all their own software.  Eventually, they couldn’t hire enough great developers, and so packaged software had a chance.  Then, there weren’t enough great developers there either, so the world of professional services was spawned.  Demand outstripped that supply and so we went global, with offshoring and outsourcing.  There is no monopoly on this talent in the US.  Now we have a sort of software development supply chain where software of varying degrees of sophistication can be created and the scarce supply of these developers has to be rationed at these different levels in that chain.

Expect to see the same thing going on with the Data Scientists, Quants, or whatever you want to call these people that know how to ask the Big Data Questions.  They’re going to be the Moneyballers in their organizations and they’ll be worth every expensive penny you wind up having to pay them.  Someday soon you’ll be turning to the Deloittes of the world to hire these people when you can no longer attract them.  Who knows, maybe we’ll see them popping up in India, China, or Vietnam not long after that?

BTW, it’s a great pity the space isn’t called “Big Questions” or maybe “Big Insights” instead of “Big Data.”  It would’ve been much more to the point, or at least more benefit oriented and less feature oriented.

Posted in business, cloud | 4 Comments »

 