William Vambenepe has a nice post that illustrates some of the challenges in the catalog side of cloud computing. First a quote, then my comments.
Of course, by the time my account usage page was updated (it took a few hours) I had found the price list which in retrospect wasn’t that hard to find (from Amazon, not IBM).
So maybe I am not the brightest droplet in the cloud, but for 20 bucks I consider that at least I bought the right to make a point: these prices should not be just on some web page. They should be accessible at the time of launch, in the console. And also in the EC2 API, so that the various EC2 tools can retrieve them. Whether it’s just for human display or to use as part of some automation logic, this should be available in an authoritative manner, without the need to scrape a page.
The other thing that bothers me is the need to decide upfront whether I want to launch a Tivoli instance to manage 50 virtual cores, 200 virtual cores or 600 virtual cores. That feels very inelastic for an EC2 deployment. I want to be charged for the actual number of virtual cores I am managing at any point in time. I realize the difficulty in metering this way (the need for Tivoli to report this to AWS, the issue of trust…) but hopefully it will eventually get there.
This anecdote shows the disconnect between the flat, static catalog, the order system, and the subscription/metering system; that disconnect will need to be addressed.
The other issue raised is that of units of measure. Units of measure vary today depending on where the customer sees value. This, too, needs to be managed by the cloud catalog/order/subscription manager trinity.
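To make that trinity concrete, here is a minimal, hypothetical sketch in Python. Every name in it (CatalogItem, Subscription, the SKUs and prices) is invented for illustration and does not correspond to any real provider's API. It shows prices living in an authoritative catalog rather than on a web page, an order creating a subscription against a catalog entry, and usage metered elastically in the catalog item's own unit of measure:

```python
from dataclasses import dataclass

# Illustrative sketch only: names, SKUs, and prices are made up.

@dataclass
class CatalogItem:
    sku: str
    unit: str              # unit of measure varies with where the customer sees value
    price_per_unit: float  # price is authoritative in the catalog, not scraped from a page

CATALOG = {
    "ec2.small": CatalogItem("ec2.small", "instance-hour", 0.10),
    "mgmt.tool": CatalogItem("mgmt.tool", "managed-vcore-hour", 0.02),
}

@dataclass
class Subscription:
    sku: str
    usage: float = 0.0  # metered in the catalog item's unit

    def meter(self, quantity: float) -> None:
        # Metering records actual consumption, not an upfront tier choice.
        self.usage += quantity

    def charge(self) -> float:
        item = CATALOG[self.sku]
        return round(self.usage * item.price_per_unit, 2)

# An order creates a subscription against a catalog entry...
sub = Subscription("mgmt.tool")
# ...and usage is metered elastically: 50 cores for 2 hours, then 200 for 1 hour,
# instead of committing to a 50/200/600-core tier up front.
sub.meter(50 * 2)
sub.meter(200 * 1)
print(sub.charge())  # 300 managed-vcore-hours at $0.02 -> 6.0
```

The point of the sketch is the wiring, not the numbers: because the subscription bills from the same catalog entry the customer ordered against, the price list, the order, and the meter cannot drift apart the way they did in the anecdote above.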
Now can you imagine trying to call your cloud provider to explain that you didn’t mean to launch 1000 servers for a week?