Yesterday the Kenyan president broke ground on a new smart city development outside of Nairobi. The site of the new Konza Techno City is located in Eastern Kenya, 60 km from Nairobi on the Nairobi-Mombasa Road. It is 50 km from Jomo Kenyatta International Airport and 500 km from Mombasa and its ports. The greenfield site, purchased by the Ministry of Information and Communication and to be managed by the Konza Technopolis Development Authority, extends over 5,000 acres.
The primary goal of the new city is to develop the Kenyan Business Process Outsourcing and Information Technology Enabled Services (BPO/ITES) industry, with an estimated 200,000 new jobs to be created across the broad technology and related sectors over a 20-year period. Within that, the central objective is to create at least 82,000 jobs in the BPO sector, as this is a key area for Kenya's Vision 2030. The new city will also house a university, recreation and entertainment venues, a film and media center, and a financial district, as well as residential neighborhoods and the supporting infrastructure.
When I returned to Forrester in mid-2010, one of the first blog posts I wrote was about Oracle’s new roadmap for SPARC and Solaris, catalyzed by numerous client inquiries and other interactions in which Oracle’s real level of commitment to future SPARC hardware was the topic of discussion. In most cases I could describe the customer mood as skeptical at best, and panicked and committed to migration off of SPARC and Solaris at worst. Nonetheless, after some time spent with Oracle management, I expressed my improved confidence in the new hardware team that Oracle had assembled and their new roadmap for SPARC processors after the successive debacles of the UltraSPARC-5 and Rock processors under Sun’s stewardship.
Two and a half years later, it is obvious that Oracle has delivered on its commitments regarding SPARC and is continuing its investments in SPARC CPU and system design as well as its Solaris OS technology. The latest evolution of SPARC technology, the SPARC T5 and the soon-to-be-announced M5, continue the evolution and design practices set forth by Oracle’s Rick Hetherington in 2010 — incremental evolution of a common set of SPARC cores, differentiation by variation of core count, threads and cache as opposed to fundamental architecture, and a reliable multi-year performance progression of cores and system scalability.
This case study is from TJ Keitt's and my social business playbook report, “The Road To Social Business Starts With A Burning Platform.” A social business uses technology to work efficiently using a common collaboration platform -- without being constrained by server availability or storage capacity. Here’s the story.
If you've already consolidated dozens of email systems from every vendor and era onto a single managed instance of Exchange 2007, made the shift to support 70 or more state agencies by operating as an ISP, and crunched 20 SharePoint instances down to a single scalable data center, what else is there to do? After all, you've already achieved a high state of IT operational efficiency and process optimization.
If you are Ed Valencia, CTO and Deputy Commissioner, or Tarek Tomes, Assistant Commissioner for Customer and Service Management, of the State of Minnesota’s IT department (MN.IT), you step back and ask, “Has what we’ve done really helped the business communicate and collaborate efficiently and effectively?” They knew they could do more by moving their collaboration workloads into the cloud.
So they took a gamble that Microsoft's Office 365 Dedicated offering was ready for the State of Minnesota. Office 365 Dedicated has opened new doors for people throughout the State of Minnesota government. Agencies can collaborate with one another because the common collaboration platform integrates the disparate directories of the different government entities. For example, the Governor can send a message to every agency in the executive branch through this common platform.
Forrester cloud computing expert James Staten recently published 10 Cloud Predictions For 2013 with contributions from nine other analysts, including me. The prediction that is near and dear to my heart is #10: "Developers will awaken to: development isn't all that different in the cloud." That's right: it ain't that different. Not much, anyway. Sure, it can be single-click-easy to provision infrastructure, spin up an application platform stack, and deploy your code. Cloud is great for developers. And Forrester's cloud developer survey shows that the majority of programming languages, frameworks, and development methodologies used for enterprise application development are also used in the cloud.
Forget Programming Language Charlatans
Forget the vendors and programming language charlatans who want you to think that cloud development is different. You already have the skills and design sensibility to make it work. In some cases, you may have to learn some new APIs, just as you have had to for years. As James aptly points out in the post: "What's different isn't the coding but the services orientation and the need to configure the application to provide its own availability and performance. And, frankly this isn't all that new either. Developers had to worry about these aspects with websites since 2000." The best cloud vendors make your life easier, not different.
James Staten and I wrote this vision of the future of cloud computing. The full report is available to Forrester clients at this link. The research is part of Forrester’s playbook to advise CIOs on productive use of cloud computing and is relevant to application development and delivery leaders as well.
This research charts the shifts taking place in the market as indicated by the most advanced cloud developers and consumers. In the future, look for the popular software-as-a-service (SaaS) and infrastructure-as-a-service (IaaS) models to become much more flexible by allowing greater customization and integration. Look for more pragmatic cloud development platforms that cross the traditional cloud service boundaries of SaaS, platform-as-a-service (PaaS), and IaaS. And look for good private and public cloud options — and simpler ways of integrating private-public hybrids.
The key takeaways from this research are:
IaaS, PaaS, and SaaS boundaries will fall. In the future, no cloud will be an island. SaaS, PaaS, and IaaS will remain distinct but expand to anchor cloud platform ecosystems that weave together application, development platform, and infrastructure services. Business services built in these ecosystems will be easier to develop, better performing, more secure, and more cost-efficient.
This week, the New York Times ran a series of articles about data center power use (and abuse): “Power, Pollution and the Internet” (http://nyti.ms/Ojd9BV) and “Data Barns in a Farm Town, Gobbling Power and Flexing Muscle” (http://nyti.ms/RQDb0a). Among the claims made in the articles was that data centers were “only using 6 to 12%” of the energy powering their servers to deliver useful computation. Like a lot of media broadsides, the reality is more complex than the dramatic claims made in these articles. Technically they are correct in claiming that of the electricity going to a server, only a very small fraction is used to perform useful work, but this dramatic claim is not a fair representation of the overall efficiency picture. The Times analysis fails to take into consideration that not all of the power in the data center goes to servers, so the claim of 6% efficiency of the servers is not representative of the real operational efficiency of the complete data center.
On the other hand, while I think the Times chooses drama over even-keeled reporting, the actual picture for even a well-run data center is not as good as its proponents would claim. Consider:
A new data center with a PUE of 1.2 (very efficient), with 83% of the power going to IT workloads.
Then assume that 60% of the remaining power goes to servers (storage and network get the rest), for a net of almost 50% of the total power going into servers. If the servers are running at an average utilization of 10%, then only 10% of 50%, or 5%, of the power is actually going to real IT processing. Of course, the real "IT number" is servers plus storage plus network, so depending on how you account for them, the IT usage could be as high as 38% (.83*.4 + .05).
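The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The function and variable names here are my own for illustration; the inputs are the assumptions stated in the text (PUE of 1.2, 60% of IT power to servers, 10% average server utilization).

```python
def dc_efficiency(pue, server_share, utilization):
    """Return the fraction of total facility power that (a) reaches
    IT gear, (b) reaches servers, and (c) does useful computation."""
    it_fraction = 1.0 / pue                      # PUE 1.2 -> ~83% to IT
    server_fraction = it_fraction * server_share  # ~50% to servers
    useful_fraction = server_fraction * utilization  # ~5% "useful"
    return it_fraction, server_fraction, useful_fraction

it, servers, useful = dc_efficiency(pue=1.2, server_share=0.6, utilization=0.10)
print(f"IT load:        {it:.0%}")
print(f"Servers:        {servers:.0%}")
print(f"Useful compute: {useful:.0%}")

# Counting storage and network (the other 40% of IT power) as fully
# "useful" yields the more generous figure cited in the text:
generous = it * 0.4 + useful  # .83*.4 + .05, roughly 38%
print(f"Generous IT usage: {generous:.0%}")
```

The gap between the 5% and 38% figures is exactly the accounting choice the text describes: whether storage and network power count as useful work or as overhead.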
On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending what has seemed like an interminable sequence of rumors, Beta releases, and endless speculation about this successor to Windows Server 2008.
So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”
Make no mistake, this is a major restructuring of the OS and a major step forward in capabilities, aligned with several major strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high-level message is centered on the cloud, and on the Windows Server 2012 features that make it a productive platform upon which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.
What It Does
The reviewer's guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features that are worth noting, so a real exploration of the features of this OS is way beyond what I can do here. Nonetheless, we can look at several buckets of technology to get an understanding of the general capabilities. Also important to note is that while Microsoft has positioned this as a very cloud-friendly OS, almost all of these cloud-related features are also very useful in an enterprise IT environment.
New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt the previous NTFS file system (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.
The most notable news to come out of the VMworld conference last week was the coronation of Pat Gelsinger as the new CEO of VMware. His tenure officially started over the weekend, on September 1, to be exact.
For those who don’t know Pat’s career, he gained fame at Intel as the personification of the x86 processor family. It’s unfair to pick a single person as the father of the modern x86 architecture, but if you had to pick just one, it would probably be Pat. He then grew to become CTO and eventually ran the Digital Enterprise Group. This group accounted for 55% of Intel’s US$37.586B in revenue according to its 2008 annual report, the last full year of Pat’s tenure. EMC poached him from Intel in 2009, naming him president of the Information Infrastructure Products group. EMC’s performance since then has been very strong, with a 17.5% YoY revenue increase in its latest annual report. Pat’s group contributed 53.7% of that revenue. While he’s a geek at heart (his early work), he proved without a doubt that he also has the business execution chops (his later work). Both will serve him well at VMware, especially the latter.
In mid-July, my colleagues and I attended Orange’s annual analyst event in Paris. There were no major announcements, but we made several observations:
Orange is one of the few carriers with true delivery capabilities. Its global footprint is a real advantage vis-a-vis carrier competitors, in particular in Africa and Asia. At the recent event, Vale, the Brazilian metals and mining corporation, presented a customer case study in which Vale emphasized the importance of Orange’s global network infrastructure for its decision to go with Orange as UCC and network provider. Orange’s global reach positions it well to address the opportunity in emerging markets, both for Western MNCs going into these markets and also to address intra-regional business in Africa and Asia. Another customer case study with the Chinese online retailer 360buy, focusing on a contact center solution, demonstrated Orange’s ability to win against local competitors in Asia.
Cloud Services Offer New Opportunities For Big Data Solutions
What’s better than writing about one hot topic? Well, writing about two hot topics in one blog post — and here you go:
The State Of BI In The Cloud
Over the past few years, business intelligence (BI) was the overlooked stepchild of cloud solutions and market adoption. Sure, some BI software-as-a-service (SaaS) vendors have been pretty successful in this space, but it was success in a niche compared with the four main SaaS applications: customer relationship management (CRM), collaboration, human capital management (HCM), and eProcurement. While those four applications each reached cloud adoption of 25% or more in North America and Western Europe, BI led the field of second-tier SaaS solutions, used by 17% of all companies in our Forrester Software Survey, Q4 2011. Considering that the main challenges of cloud computing are data security and integration efforts (yes, the story of simply swiping your credit card to get a fully operational cloud solution in place is a fairy tale), 17% cloud adoption is actually not bad at all; BI is all about data integration, data analysis, and security. With BI there is of course the flexibility to choose which data a company considers to run in a cloud deployment and which data sources to integrate — a choice that is very limited when implementing, e.g., a CRM or eProcurement cloud solution.
“38% of all companies are planning a BI SaaS project before the end of 2013.”