Much has been said about the benefits of “SaaS for IT service management (ITSM)” …
For many organizations, the key benefit of SaaS is its simple, subscription-based pricing model, which delivers a lower, more predictable level of expenditure as Opex rather than a Capex investment. This is highly suited to organizations wishing to direct limited Capex into business innovation projects rather than into IT. I deliberately haven’t stated that SaaS is cheaper, because “it depends” ... Many tools have a “breakeven point” in the three-to-four-year timeframe, after which SaaS becomes more expensive for customers than on-premises.
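That breakeven point is easy to sketch for yourself. The figures below are purely illustrative assumptions (not any vendor’s pricing): a hypothetical $100 per seat per month SaaS subscription for 50 seats, versus a $120,000 perpetual license carrying 20% annual maintenance. Under those assumptions, the crossover lands in exactly the three-to-four-year window described above:

```python
# Illustrative SaaS vs. on-premises breakeven sketch.
# All figures are hypothetical assumptions, not vendor pricing.

def cumulative_saas_cost(years, per_seat_monthly, seats):
    """Total subscription (Opex) spend after the given number of years."""
    return per_seat_monthly * seats * 12 * years

def cumulative_onprem_cost(years, license_fee, annual_maintenance_rate):
    """Up-front license (Capex) plus annual maintenance after the given years."""
    return license_fee * (1 + annual_maintenance_rate * years)

for year in range(1, 6):
    saas = cumulative_saas_cost(year, per_seat_monthly=100, seats=50)
    onprem = cumulative_onprem_cost(year, license_fee=120_000,
                                    annual_maintenance_rate=0.20)
    print(f"Year {year}: SaaS ${saas:,.0f} vs on-premises ${onprem:,.0f}")
```

With these assumed numbers, SaaS is still cheaper at year three ($180,000 vs. $192,000) but more expensive by year four ($240,000 vs. $216,000); your own license fees, seat counts, and maintenance rates will move that crossover.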
This simplicity of pricing can also be viewed from a value-for-money perspective: a per-seat subscription will usually cover access to capabilities across multiple ITIL (or ITSM) processes, rather than requiring organizations to buy multiple licenses across multiple ITSM products (or modules). This gives an organization the freedom to increase its ITSM maturity without extra cost (unless additional people need access to the solution).
It should come as no surprise that websites thrive on traffic, so driving traffic to your site is a strong motivation for any company looking to grow its web presence. Ironically, however, that same traffic can be a double-edged sword if your infrastructure is not properly prepared to handle the load. Strangely, popularity itself can become a potential cause of an outage.
Yesterday, the popular Internet forum and message board Reddit discovered this firsthand. In an interesting campaign move, President Barack Obama graced the site with his presence by hosting an “Ask Me Anything” (AMA) thread, a message thread in which commenters submit questions and the original poster responds. Word of this rare opportunity to send the President of the United States a direct message spread across social media like wildfire, leading to a massive spike in traffic that brought down Reddit mere minutes into the life of the thread. Current figures show that unique connections and pageviews both more than tripled compared to typical traffic. Eventually the site came back online and the AMA progressed as usual.
The Distributed Management Task Force (DMTF) is best known in the cloud standards world for its Open Virtualization Format (OVF) specification, which has been widely adopted by cloud vendors and is considered the first and only true standard in the IaaS space. Lately, though, the group’s focus has been on solving the interoperability challenges in the cloud space. In July 2010, after releasing a series of white papers, the DMTF Open Cloud Standards Incubator group transitioned into the Cloud Management Working Group (CMWG) and has been working on interoperability standards ever since. For the past year, the main focus has been the Cloud Infrastructure Management Interface (CIMI) specification for a self-service portal that would enable easy interoperability between solutions. And today the DMTF CMWG released CIMI v1.0.
I’ve been speaking to more and more clients lately who are not just saving money with cloud computing — they’re using the principles of the cloud to completely transform how they source, build, and deliver all IT services. Savvy I&O leaders should look beyond the per-hour savings promised by the cloud to the core tenets of cloud computing itself. How do the public clouds do it? Why can’t you?
Well, you can. You can transform your IT operating model from that of widget-provider to a true service-oriented business partner. Forrester writes extensively about how to make the IT to BT (business technology) transition. I recently spoke at length with the IT management team at Commonwealth Bank of Australia (CBA) about their multi-year IT transformation to what they call “everything-as-a-service.” I was put in touch with them by one of their primary suppliers, cloud service management and automation vendor ServiceMesh.
We’ll be publishing a complete case study soon, but I wanted to share some of the basics here because they outline a strategy anyone can achieve, regardless of your current level of cloud maturity. The bank started by establishing six core tenets to be enforced across all I&O services moving forward, whether hosted internally or externally. These guiding principles neatly summarize the core value dimensions of cloud computing itself:
Pay as you go. Business customers only pay for products and services actually used, on a metered, charge-back basis, under flexible service agreements, as opposed to fixed-term contracts.
With VMworld in full swing this week and Microsoft’s cloud-centered Windows Server 2012 launching soon after, your options for technology to build and deploy enterprise clouds are about to expand significantly. Meanwhile, Amazon continues to drop prices faster than your local Wal-Mart, introduces new cloud compute and storage services almost monthly, and has already gobbled up a trillion objects in S3. Is it time to start moving your workloads to the cloud?
Forrsights surveys show that companies are indeed moving to the cloud, primarily for speed and lower costs — but are the savings really there? The answer might not be obvious. Are you heavily virtualized already? Have you moved up the virtualization value chain beyond server consolidation to using virtual machines for better disaster recovery, less downtime, automated configuration management, and the like? Do you have a virtual-first policy and actively share resources across business units? If you run a mature virtual environment today, your internal infrastructure costs might already be competitive with the cloud.
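One rough way to test that last claim is to compute your fully loaded internal cost per VM-hour and set it beside a cloud on-demand rate. Every number below is an illustrative assumption (a notional $350,000 annual run cost for 400 VMs, and a notional $0.12/hour cloud instance), not a benchmark:

```python
# Back-of-the-envelope comparison of internal virtual infrastructure cost
# against a public cloud on-demand rate. All inputs are assumptions.

HOURS_PER_YEAR = 24 * 365  # 8,760

def internal_cost_per_vm_hour(annual_infra_cost, vm_count):
    """Fully loaded annual cost (hardware, power, licenses, ops labor)
    spread across every provisioned VM-hour in the year."""
    return annual_infra_cost / (vm_count * HOURS_PER_YEAR)

# Assumed: $350k/year run cost for a densely consolidated 400-VM estate.
internal = internal_cost_per_vm_hour(350_000, vm_count=400)
cloud_on_demand = 0.12  # assumed $/hour for a comparable cloud instance

print(f"Internal: ${internal:.3f}/VM-hour vs cloud: ${cloud_on_demand:.2f}/VM-hour")
```

Under these assumptions the internal estate comes out near $0.10 per VM-hour, below the assumed cloud rate; with lower consolidation ratios or heavier ops overhead, the comparison flips quickly.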
This week, Amazon announced a new storage offering within Amazon Web Services (AWS) called Glacier. Aptly named, it’s intended to be vast and slow moving, with a cheap price tag to match. At a fraction of the cost of Amazon’s online storage offerings, EBS and S3, Glacier will cost you $0.01 per GB per month, compared to around $0.05 to $0.13 per GB per month for the higher-performance offerings, depending on capacity stored. Restores from Glacier are costly by design; this is intended for data that you’re not likely to access frequently. Used for the right types of data, this is a low-cost way to park stale data for long periods of time.
Analyzing the cost implications: it would cost you all of $120 to store a TB of data for a year, provided you don’t have to access it during that time. Ten years would cost you $1,200, and 100 years would cost you $12,000. Sure, there would be upcharges if and when you access the data, but the value of being able to get back the data you need within a few hours, years after you archived it, is tremendous. The data reliability guarantee is 11 9’s, meaning that for each piece of data, Amazon guarantees it will be there 99.999999999% of the time, included in the base cost to archive it, which is about as close to certainty as you can get in any contract.
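The arithmetic behind those figures is simply rate × capacity × time. A minimal sketch at the quoted $0.01/GB-month launch rate (retrieval and request fees, which apply only when you pull data back, are ignored here):

```python
# Glacier archive cost at the quoted $0.01 per GB per month launch rate.
# Retrieval/request fees are excluded; they apply only on access.

GLACIER_RATE = 0.01  # USD per GB per month

def archive_cost(tb, years, rate_per_gb_month=GLACIER_RATE):
    """Total cost to keep `tb` terabytes archived for `years` years."""
    gb = tb * 1000  # decimal TB, matching the article's $120/TB-year figure
    return gb * rate_per_gb_month * 12 * years

print(f"1 TB for 1 year:    ${archive_cost(1, 1):,.0f}")    # $120
print(f"1 TB for 10 years:  ${archive_cost(1, 10):,.0f}")   # $1,200
print(f"1 TB for 100 years: ${archive_cost(1, 100):,.0f}")  # $12,000
```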
Unfortunately I don’t often hear “strategy” and “IT service management (ITSM)” in the same sentence, unless of course someone is maligning the ITIL 2011 Service Strategy book or if an organization is justifying a significant investment in a new ITSM tool (to me this is too often the breeding ground for failed aspirations). Alternatively we often talk about (and are consumed by) tactical ITSM issues and our tactical responses. So where and what is your ITSM strategy? And where is your ITSM strategic plan?
If you have answers to these questions, you probably don’t need to read this blog, so feel free to choose another. If you don’t, don’t you think you should? I’ve stolen some written words from my colleague Jean-Pierre Garbani to get you thinking.
What’s your strategy for ITSM strategy?
I’m not going to answer this – I just thought it a funny question. Better starter questions are probably: “What do I mean by strategy?” and “What is strategic planning?”
I can’t help but use the ever-useful Wikipedia for the first:
At our core we are “IT people” (hopefully you are shouting at your screen, “No, I'm a business person!” but please bear with me), so it is all too easy for us to look at the future of IT service delivery purely from a technology perspective; that is, to be absorbed by the opportunities and challenges such as bring-your-own-device (BYOD), mobility, social, shiny SaaS ITSM tools, and cloud per se.
For instance, my colleague Glenn O’Donnell can often be heard saying that “the future of service management is an automated one,” and, unless you have access to the report from which I lifted this quote (and much of this blog), it is too easy to forget about how the “yellow brick road” to the future affects our people. Glenn’s report covers this in some detail, and I have politely stolen some of it to include below.
Looking at the future from an employee perspective = fear
This week the California courts handed down a nice present for HP — a verdict confirming that Oracle was required to continue to deliver its software on HP’s Itanium-based Integrity servers. This was a major victory for HP, on the face of it giving them the prize they sought — continued availability of Oracle’s eponymous database on their high-end systems.
However, HP’s customers should not immediately assume that everything has returned to a “status quo ante.” Once Humpty Dumpty has fallen off the wall, it is very difficult to put the pieces together again. As I see it, there are still three major elephants in the room that HP users must acknowledge before they make any decisions:
Oracle will appeal, and there is no guarantee of the outcome. The verdict could be upheld or it could be reversed. If it is upheld, that represents a further delay in the start date from which Oracle will be measured for its compliance with the court-ordered development. Oracle will also continue to press its counterclaims against HP, but those do not directly relate to the continued development of Oracle software on Itanium.
Itanium is still nearing the end of its road map. A reasonable interpretation of the road map tea leaves that have been exposed puts the final Itanium release at about 2015 unless Intel decides to artificially split Kittson into two separate releases. Integrity customers must take this into account as they buy into the architecture in the last few years of Itanium’s life, although HP can be depended on to offer high-quality support for a decade after the last Itanium CPU rolls off Intel’s fab lines. HP has declared its intention to produce Integrity-level x86 systems, but OS support intentions are currently stated as Linux and Windows, not HP-UX.