The 2013 New Year has begun with one risk removed from the global tech market outlook: the US economy going over the fiscal cliff. On New Year's Day, the US House of Representatives followed the lead of the US Senate and passed a bill that extends existing tax rates for households with $450,000 or less in income, extends unemployment insurance benefits for 2 million Americans, renews tax credits for child care, college tuition, and renewable energy production, and delays the automatic spending cuts for two months. The bill also allowed Social Security payroll taxes to rise by 2 percentage points (raising the tax burden on poor and middle-class households) and neither increased the federal debt ceiling nor addressed entitlement spending. Still, the last-minute compromise means that the US tech market no longer has to worry, for now, about big tax increases and spending cuts pushing the US economy into recession.
Amazon Web Services (AWS) held its first global customer and partner conference, re:Invent, in late November in Las Vegas, attracting approximately 6,000 attendees. While aimed squarely at developers, AWS highlighted two key themes that will appeal directly to enterprise IT decision-makers:
Continued global expansion. AWS cites customers in 190 countries, but the company is clearly pushing for greater penetration into enterprise accounts via aggressive global expansion. AWS now has nine regions (each of which has at least one data center), including three in Asia Pacific: Tokyo, Singapore, and Sydney.
An expanded services footprint within customer accounts. The major announcement at re:Invent was a limited preview of a new data warehouse (DW) service called Amazon Redshift — a fully managed, cloud-based, petabyte-scale DW. As my colleague Stefan Ried tweeted during the event, with a limit of 1.6 petabytes, this is not just for testing and development — this is a serious production warehouse.
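Because Redshift speaks the PostgreSQL wire protocol, existing drivers and SQL tooling work against it largely unchanged, and the typical bulk-load path is a COPY from S3. The Python sketch below illustrates that pattern; every endpoint, table, bucket name, and credential in it is a hypothetical placeholder, not a real value.

```python
# Sketch: the shape of loading data into a Redshift-style cloud warehouse.
# Redshift is PostgreSQL-compatible, so a standard libpq-style connection
# string works (5439 is Redshift's default port). All names below are
# illustrative placeholders.

def build_dsn(host, port=5439, dbname="analytics", user="admin", password="secret"):
    """Assemble a libpq-style connection string for a warehouse endpoint."""
    return f"host={host} port={port} dbname={dbname} user={user} password={password}"

def build_copy_sql(table, s3_path, credentials):
    """COPY from S3 is the standard bulk-load path for Redshift."""
    return (
        f"COPY {table} FROM '{s3_path}' "
        f"CREDENTIALS '{credentials}' "
        "DELIMITER '|' GZIP;"
    )

# Hypothetical cluster, table, and bucket:
dsn = build_dsn("mycluster.example.us-east-1.redshift.amazonaws.com")
sql = build_copy_sql("clickstream", "s3://my-bucket/clicks/",
                     "aws_access_key_id=AKIA...;aws_secret_access_key=...")
```

A PostgreSQL driver such as psycopg2 could then execute `sql` over a connection opened with `dsn`; the point is that no warehouse-specific client library is required.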
Forrester cloud computing expert James Staten recently published 10 Cloud Predictions For 2013 with contributions from nine other analysts, including myself. The prediction near and dear to my heart is #10: "Developers will awaken to: development isn't all that different in the cloud." That's right, it ain't different. Not much, anyway. Sure, it can be single-click easy to provision infrastructure, spin up an application platform stack, and deploy your code. Cloud is great for developers. And Forrester's cloud developer survey shows that the majority of programming languages, frameworks, and development methodologies used for enterprise application development are also used in the cloud.
Forget Programming Language Charlatans
Forget the vendors and programming language charlatans who want you to think cloud development is different. You already have the skills and design sensibility to make it work. In some cases, you may have to learn some new APIs, just as you have had to for years. As James aptly points out in the post: "What's different isn't the coding but the services orientation and the need to configure the application to provide its own availability and performance. And, frankly this isn't all that new either. Developers had to worry about these aspects with websites since 2000." The best cloud vendors make your life easier, not different.
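As a tiny illustration of the point, here is a standard-library WSGI app in Python: the same callable runs on a laptop, a corporate server, or any cloud platform that speaks WSGI, with only the hosting changing. This is a generic sketch, not tied to any particular vendor or the survey above.

```python
# The same code, wherever it runs: a plain WSGI application needs no
# cloud-specific rewrite. Only the server that hosts it changes.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """Trivial WSGI application; identical whether hosted locally or in the cloud."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the same code, wherever it runs\n"]

def serve_locally(port=8000):
    """For local testing; a cloud platform's WSGI server imports `app` instead."""
    make_server("", port, app).serve_forever()
```

Deploying to a cloud platform means pointing its WSGI server at `app`; the application code itself is untouched, which is the prediction's whole argument.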
So what does VMware and EMC’s announcement of the new Pivotal Initiative mean for I&O leaders? Put simply, it means the leading virtualization vendor is staying focused on the data center — and that’s good news. As many wise men have said, the best strategy comes from knowing what NOT to do. In this case, that means NOT shifting focus too fast and too far afield to the cloud.
I think this is a great move: it protects VMware’s relationship with its core buyer, maintains focus on the data center, and lays the foundation for the vendor’s software-defined data center strategy. It also helps to end the cloud-washing that has confused customers for years. There’s a lot of work left to do to virtualize the entire data center stack, from compute to storage, network, and apps, and the easy apps, by now, have mostly been virtualized. The remaining workloads enterprises seek to virtualize are much harder: They don’t naturally benefit from consolidation savings, they are highly performance sensitive, and they are much more complex.
This week Wal-Mart announced that it would put significant weight behind the new Boxee TV box, a $99 set-top box that competes with the market-leading Apple TV and the runner-up Roku boxes. Wal-Mart also sells the Apple TV and Roku devices, so this might not seem like a big deal, but it is: Wal-Mart is going to promote Boxee TV with in-store displays and outbound marketing support. Why? Because in addition to the regular apps like Hulu, Netflix, and the rest, Boxee gives Wal-Mart customers three things they can't get from Apple or Roku:
Regular TV shows from local broadcasters. Boxee's new box has a digital tuner that lets you tune to digital signals from ABC, CBS, CW, Fox, NBC, PBS, and Univision through either an over-the-air antenna or via ClearQAM.
Unlimited DVR. Not only will Boxee let you watch these channels, it is offering unlimited cloud DVR for $9.99 a month (in only the top eight markets for now) to record any shows from those networks, without managing a hard drive or paying extra if you want to store hours and hours of video.
Multidevice viewing. This is the real coup for Boxee. Because its DVR is in the cloud, it can send your recorded content to any device you log in to -- whether it's in your home or in your hands while traveling for business.
Nathan Bedford Forrest, a Confederate general of despicable ideology and consummate tactics, spoke of “keepin up the skeer,” applying continued pressure to opponents to prevent them from regrouping and counterattacking. POWER7+, the most recent version of IBM’s POWER architecture, anticipated as a follow-up to the POWER7 for almost a year, was finally announced this week, and appears to be “keepin up the skeer” in terms of its competitive potential for IBM POWER-based systems. In short, it is a hot piece of technology that will keep existing IBM users happy and should help IBM maintain its impressive momentum in the Unix systems segment.
For the chip heads, the CPU is implemented in a 32 nm process, the same as Intel’s upcoming Poulson, and embodies some interesting evolutions in high-end chip design, including:
Use of DRAM instead of SRAM — IBM has pioneered the use of embedded DRAM (eDRAM) for on-chip L3 cache instead of the more standard and faster SRAM. In exchange for the loss of speed, eDRAM requires fewer transistors and less power, allowing IBM to pack a total of 80 MB (a lot) of shared L3 cache onto the chip, far more than any other product has ever sported.
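The density win is easy to see with back-of-envelope arithmetic: a classic SRAM cell spends six transistors per bit, while a 1T1C eDRAM cell spends one transistor plus a capacitor per bit. A rough Python sketch, ignoring tag arrays, ECC, and other overhead:

```python
# Back-of-envelope transistor budget for an 80 MB L3 cache,
# comparing 6T SRAM cells with 1T1C eDRAM cells. Ignores tags,
# ECC, and peripheral circuitry, so the numbers are indicative only.

def cache_transistors(megabytes, transistors_per_bit):
    """Rough transistor count for the data array of a cache."""
    bits = megabytes * 1024 * 1024 * 8
    return bits * transistors_per_bit

CACHE_MB = 80
sram = cache_transistors(CACHE_MB, 6)   # classic 6T SRAM cell
edram = cache_transistors(CACHE_MB, 1)  # 1T1C eDRAM cell (plus one capacitor per bit)

# eDRAM's data array needs roughly one sixth the transistors of SRAM
# for the same capacity, which is how 80 MB fits on the die.
ratio = sram / edram
```

The six-to-one ratio in the data array is the core of the tradeoff: slower cells, but far more capacity in the same transistor budget.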
[For some reason this has been unpublished since April — so here it is well after AMD announced its next spin of the SeaMicro product.]
At its recent financial analyst day, AMD indicated that it intends to differentiate itself by creating products with advantages in niche markets, servers among them, and to generally shake up the trench warfare that has kept it on the losing side of its lifelong battle with Intel (my interpretation, not AMD management’s words). Today, at least for the server side of the business, it made a move that can potentially offer it visibility and differentiation by acquiring innovative server startup SeaMicro.
SeaMicro has attracted our attention since its appearance (blog post 1, blog post 2) with its innovative architecture, which dramatically reduces power and improves density by sharing components like I/O adapters, disks, and even BIOS over a proprietary fabric. The irony here is that SeaMicro came to market tightly aligned with Intel, which at one point even introduced a special dual-core packaging of its Atom CPU so that SeaMicro could improve its density and power efficiency. Most recently, SeaMicro and Intel announced a new model featuring Xeon CPUs to address the more mainstream segments that were not part of SeaMicro’s original Atom-based offering.
On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending what has seemed like an interminable sequence of rumors, beta releases, and speculation about this successor to Windows Server 2008.
So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”
Make no mistake, this is a major restructuring of the OS and a major step-function improvement in capabilities, aligned with several strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high-level message is centered on the cloud, and on the Windows Server 2012 features that make it a productive platform upon which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.
What It Does
The reviewer’s guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features worth noting, so a real exploration of this OS is way beyond what I can do here. Nonetheless, we can look at several buckets of technology to get an understanding of the general capabilities. Also important to note: while Microsoft has positioned this as a very cloud-friendly OS, almost all of its cloud-related features are also very useful in an enterprise IT environment.
New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt the previous NTFS file system (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.
I’ve been speaking to more and more clients lately who are not just saving money with cloud computing — they’re using the principles of the cloud to completely transform how they source, build, and deliver all IT services. Savvy I&O leaders should look beyond the per-hour savings promised by the cloud to the core tenets of cloud computing itself. How do the public clouds do it? Why can’t you?
Well, you can. You can transform your IT operating model from that of widget-provider to a true service-oriented business partner. Forrester writes extensively about how to make the IT to BT (business technology) transition. I recently spoke at length with the IT management team at Commonwealth Bank of Australia (CBA) about their multi-year IT transformation to what they call “everything-as-a-service.” I was put in touch with them by one of their primary suppliers, cloud service management and automation vendor ServiceMesh.
We’ll be publishing a complete case study soon, but I wanted to share some of the basics here because they outline a strategy anyone can achieve, regardless of your current level of cloud maturity. The bank started by establishing six core tenets to be enforced across all I&O services moving forward, whether hosted internally or externally. These guiding principles neatly summarize the core value dimensions of cloud computing itself:
Pay as you go. Business customers only pay for products and services actually used, on a metered, charge-back basis, under flexible service agreements, as opposed to fixed-term contracts.
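The first tenet can be sketched in a few lines of code. Below is a minimal metered charge-back model in Python; the rate card, service names, and usage figures are all hypothetical illustrations, not CBA's actual rates.

```python
# Minimal sketch of metered, pay-as-you-go charge-back: each business
# unit is billed only for the units it actually consumed, priced off a
# rate card. All figures here are illustrative placeholders.

RATE_CARD = {
    "vm_small": 0.08,    # hypothetical $ per VM-hour
    "storage_gb": 0.001, # hypothetical $ per GB-hour
}

def charge_back(usage):
    """Total each business unit's bill from metered usage records."""
    bills = {}
    for business_unit, service, unit_hours in usage:
        bills[business_unit] = (
            bills.get(business_unit, 0.0) + RATE_CARD[service] * unit_hours
        )
    return bills

usage_log = [
    ("retail", "vm_small", 720),    # one VM running for a month
    ("retail", "storage_gb", 500),
    ("markets", "vm_small", 100),   # short-lived burst: pay for 100 hours only
]
bills = charge_back(usage_log)
```

The contrast with a fixed-term contract is the last record: a business unit that needs a VM for 100 hours pays for 100 hours, not for a year.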
With VMworld in full swing this week and Microsoft’s cloud-centered Windows Server 2012 launching soon after, your options for technology to build and deploy enterprise clouds are about to expand significantly. Meanwhile, Amazon continues to drop prices faster than your local Wal-Mart, introduces new cloud compute and storage services almost monthly, and has already gobbled up a trillion objects in S3. Is it time to start moving your workloads to the cloud?
Forrsights surveys show that companies are indeed moving to the cloud, primarily for speed and lower costs — but are the savings really there? The answer might not be obvious. Are you heavily virtualized already? Have you moved up the virtualization value chain beyond server consolidation to using virtual machines for better disaster recovery, less downtime, automated configuration management, and the like? Do you have a virtual-first policy and actively share resources across business units? If you run a mature virtual environment today, your internal infrastructure costs might already be competitive with the cloud.
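That comparison is easy to rough out. Below is a back-of-envelope Python sketch; every figure in it is a hypothetical placeholder to be replaced with your own amortized hardware, facilities, labor, and utilization numbers.

```python
# Back-of-envelope: amortized internal cost per VM-hour vs. an on-demand
# cloud rate. All numbers are hypothetical placeholders; the point is the
# shape of the comparison, driven heavily by utilization.

def internal_cost_per_vm_hour(monthly_cost, vms, utilization):
    """Amortized internal cost per useful VM-hour at a given average utilization."""
    hours = 730  # average hours in a month
    return monthly_cost / (vms * hours * utilization)

# Hypothetical mature virtual environment: $25,000/month all-in, 400 VMs.
at_high_util = internal_cost_per_vm_hour(25_000, 400, utilization=0.8)
at_low_util = internal_cost_per_vm_hour(25_000, 400, utilization=0.3)

cloud_rate = 0.12  # illustrative on-demand $ per VM-hour

# With these placeholder numbers, the well-utilized farm beats the cloud
# rate and the poorly utilized one does not, which is the article's point:
# maturity of your virtual environment decides where the savings are.
```

The utilization term is what the virtual-first policies and resource sharing described above move: the higher it goes, the cheaper each useful VM-hour becomes.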