Without a doubt, the tech industry’s new economics are creating major tumult in the marketplace. “Services,” not products, and “in the cloud,” not on the computer, are just two of the major trends forcing IT services providers to continually predict future market demand and adjust strategy accordingly. More than ever, it’s imperative to understand where firms will rely on third-party providers in the coming year . . . and also where they’ll increase spend.
As you may know, Forrester fields a 20-minute Web survey each year to commercial buyers of enterprise IT services as part of Forrester’s Forrsights for Business Technology (formerly named “Business Data Services”). This year, we’ll continue to collect responses from IT decision-makers at companies with 1,000 or more employees across the US, Canada, France, the UK, and Germany. As we’re designing the survey now, our commitment to strategists is that we’ll write the questions with your underlying need in mind: to predict and quantify tech industry growth and disruption.
Here are a few new questions you’ll be able to answer with our 2010 data insights:
- Which areas of innovation are turned into business- or IT-funded projects?
- How mature is vendor governance/oversight compared with three years ago?
- How are firms dealing with the rising influence of Digital Natives?
- What are the plans, strategies, and barriers for moving from a staff augmentation model to a fully managed services model?
- How will an uptick in selective sourcing strategies affect how you, as a service provider, tailor your go-to-market plans to current customer challenges?
And, of course, we’ll continue to ask traditional questions around services plans, budgets, and preferred vendors.
It’s probably fair to say that the computer community is obsessed with speed. After all, people buy computers to solve problems, and generally the faster the computer, the faster the problem gets solved. The earliest benchmark that I have seen was published in High-Speed Computing Devices (Engineering Research Associates, McGraw-Hill, 1950). It cites the Marchant desktop calculator as achieving a best-in-class result of 1,350 digits per minute for addition, and the threshold problems then were figuring out how to break down Newton-Raphson equation solvers for maximum computational efficiency. And so the race begins…
Not much has changed since 1950. While our appetites are now expressed in GFLOPS per CPU and TFLOPS per system, users continue to push for escalation of performance in numerically intensive problems. Just as we settled down to a relatively predictable performance model with standard CPUs and cores glued into servers and aggregated into distributed computing architectures of various flavors, along came the notion of attached processors. First appearing in the 1960s and 1970s as attached mainframe vector processors and attached floating-point array processors for minicomputers, attached processors have always had a devoted and vocal minority of support within the industry. My own brush with them was as a developer using a Floating Point Systems array processor attached to a 32-bit minicomputer to speed up a nuclear reactor core power monitoring application. When all was said and done, the 50X performance advantage of the FPS box had shrunk to about 3.5X for the total application. Not bad, but well short of expectations. Subsequent brushes with attempts to integrate DSPs with workstations left me a bit jaundiced about the future of attached processors as general-purpose accelerators.
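That shortfall is the classic Amdahl's Law effect: the unaccelerated portion of an application bounds the total speedup, no matter how fast the attached processor runs its share. A quick back-of-the-envelope sketch makes the point; note that the accelerable fraction below is back-solved from the 50X and 3.5X figures in the anecdote, not something reported in the original:

```python
def amdahl_speedup(p, s):
    """Overall speedup when fraction p of runtime is accelerated by factor s (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / s)

def accelerable_fraction(overall, s):
    """Back-solve the accelerated fraction p from an observed overall speedup."""
    return (1.0 - 1.0 / overall) / (1.0 - 1.0 / s)

p = accelerable_fraction(3.5, 50)
print(round(p, 2))                       # → 0.73: only ~73% of the app ran on the array processor
print(round(amdahl_speedup(p, 50), 1))   # → 3.5: recovers the observed total speedup
```

In other words, with roughly a quarter of the application stuck on the host CPU, even an infinitely fast attached processor could not have delivered much better than 4X overall.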
Forrester has received more than 1,000 inquiries on SaaS and cloud services so far in 2010. With SaaS gaining maturity and even becoming the more common way to deploy software in some categories, firms are increasingly opting for SaaS solutions in place of packaged apps.
With the growing uptake of SaaS, Forrester has seen a change in the nature of questions about SaaS. Firms are not only asking the basics around the whens and whys of SaaS; they are also asking more strategic questions around SaaS sourcing and vendor management, as well as how to set up the organizational structure and hire the right skills to succeed with SaaS deployments.
Stay tuned for the full analysis of Forrester's SaaS inquiry data for the first half of 2010, to be published shortly.
Also, for anyone interested in a more in-depth analysis of SaaS and cloud services trends and best practices, we are hosting our first full-day workshop on the topic in Forrester’s Cambridge, Mass., headquarters on September 16. For more details about this event, please click here.
Please share your thoughts and connect with me on Twitter @lizherbert.
As much fun as the juicy details of the Oracle-Google lawsuit are, the meaning of the suit for enterprise application development managers is, well, philosophical. Aside from sweating over the legal status of your Android phone (if you own one), the lawsuit won’t create drama for your shop. But the long-term implications are serious. Henceforth, Java will be a marching band rather than a jazz collective. Oracle’s action will reduce the independent innovation that has made Java what it is, causing developers to seek new ideas from sources outside of Java. Your Java strategy, as a result, will get more complicated.
A little background: Since the late ’90s, the primary source of Java innovation has been open source projects that either fix Java limitations or provide low-cost alternatives to vendor products. But Java’s position as a wellspring of innovation has been declining in recent years as many Web developers shifted their attention to dynamic languages, pure Web protocols, XML programming, and other new ideas. This trend has been particularly pronounced in the client tier for Web applications, where alternative rich Internet application technologies including Ajax frameworks like Dojo and container-based platforms like Adobe Flash/Flex have replaced client-side Java. Java virtual machines are a foundation of these efforts, but the enterprise and mobile Java platforms are not.
In choosing Java’s future course, Oracle had two philosophies to choose from.
The deadline to submit your entry into the Forrester Groundswell Awards is on August 27, just two weeks away. The submissions we received last year, which we wrote up in this Forrester report, provided invaluable assistance to Forrester clients seeking ways to optimize Groundswell-related investments.
We hope you’ll participate this year as well. Josh Bernoff, one of the authors of Groundswell, just posted his advice on how to create a great entry. I have reposted it below for our technology industry clients:
If you haven't entered yet but plan to, this advice is for you. (If you just want to see other people's entries, click on the items at the left of the Awards site.)
The PCI Security Standards Council released the summary of changes for the new version of PCI — 2.0. Merchants, you can quit holding your breath as this document is a yawner — as we’ve long suspected it would be. In fact, to call it 2.0 is a real stretch as it seems to be filled — as promised by earlier briefings with the PCI SSC — merely with additional guidance and clarifications. Jeff, over at the PCI Guru, has a great review of the summary doc so I won’t try to duplicate his detailed analysis. The most helpful part of the doc is an acknowledgement that more guidance on virtualization — the one function per server stuff — will finally be addressed.
Suffice it to say, it doesn’t look good for all those DLP vendors looking for Santa Compliance to leave them a little gift under the tree this year. I’ve been hearing hopeful rumors (that I assume start within the bowels of DLP vendor marketing departments) that PCI would require DLP in the next version. Looks like it’s going to be a three-year wait to see if Santa will finally stop by their house.
Remember that this is a summary of changes so there’s not that much meat yet. The actual standard will be pre-released early next month with the final standard coming out after the European Community Meeting in October.
I recently recorded a podcast with GlaxoSmithKline (GSK), the global pharmaceutical company, about its success story of implementing a PC power management initiative that is expected to cut energy costs by ~$1 million per year. While these savings alone should impress any IT executive – especially IT infrastructure and operations professionals who manage PCs – what I found so unique about the story came through my conversation with Matt Bartow, business analyst in GSK’s research and development IT organization, who led this initiative. In particular, GSK is a great example of how “empowering” staff to innovate can industrialize IT operations, leading to significant cost savings and green IT benefits.
GSK’s success with PC power management is an outcome of the inspired management style advocated in Forrester’s upcoming book, Empowered. By proactively calling on their employees to spur innovation, GSK tapped into one of their greatest inventive resources – staff, like Matt Bartow, who Forrester would consider a highly empowered and resourceful operative (HERO). But as Empowered explains, HEROes can’t succeed without support from management. By initiating the innovation challenge, GSK’s IT leadership not only identified HEROes in their organization but sourced innovative ideas at the same time. From there, the use of social media technology – in this case, using a wiki-type website with voting capabilities – made it simple for GSK staff to participate while giving them a “say” in the selection process.
So how exactly did PC power management become an IT priority at GSK?
I’ve been getting a number of inquiries recently regarding benchmarking potential savings from consolidating multiple physical servers onto a smaller number of servers using VMs, usually VMware. The variations in the complexity of the existing versus new infrastructures, operating environments, and applications under consideration make it impossible to come up with consistent rules of thumb, and in most cases, also make it very difficult to predict with any accuracy what the final outcome will be absent a very tedious modeling exercise.
However, the major variables that influence the puzzle remain relatively constant, giving us the ability to at least set out a framework to help analyze potential consolidation projects. This list usually includes:
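As a rough illustration of how such variables combine, here is a minimal, hypothetical savings model. The server counts, consolidation ratio, and per-server costs below are invented placeholders for planning purposes, not Forrester benchmarks, and a real engagement would add the workload-profiling and licensing nuances noted above:

```python
def consolidation_savings(physical_servers, vms_per_host, cost_per_server_yr,
                          hypervisor_license_yr, new_host_cost_yr):
    """Rough annual savings from consolidating legacy servers onto VM hosts.

    cost_per_server_yr bundles power, cooling, space, and admin per legacy box;
    new_host_cost_yr is the equivalent figure for each new virtualization host.
    """
    hosts_needed = -(-physical_servers // vms_per_host)  # ceiling division
    old_cost = physical_servers * cost_per_server_yr
    new_cost = hosts_needed * (new_host_cost_yr + hypervisor_license_yr)
    return old_cost - new_cost

# e.g. 100 legacy servers at $2,500/yr each, 12 VMs per host,
# $6,000/yr per new host plus $3,000/yr hypervisor licensing
print(consolidation_savings(100, 12, 2500, 3000, 6000))  # → 169000
```

Even a toy model like this makes the sensitivity obvious: the answer swings heavily on the achievable VM density and on how fully the legacy per-server costs are actually recovered.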
As green IT plans persist through 2010, I'm starting to receive questions from IT infrastructure and operations professionals — particularly data center managers — about the use of cleaner energy sources (e.g. wind, solar, fuel cells, hydro) to power their data center facilities. So when Google recently announced its purchase of 114 megawatts of wind power capacity for the next 20 years from a wind farm in Iowa, I got excited, hopeful of a credible example I could refer to.
But as it turns out, Google will not be using this wind energy to power its data centers . . . yet. Despite Google stating that the wind capacity is enough to power several data centers, its Senior Vice President of Operations, Urs Hoelzle, explains that, "We cannot use this energy directly, so we're reselling it back to the grid in the regional spot market." I confirmed this in electronic conversations with two other industry insiders, Martin LaMonica (CNET News) and Lora Kolodny (GreenTech), who also covered the announcement.
And it's unfortunate, since Google's $600 million data center in Council Bluffs, Iowa could likely benefit from the greener, and possibly cheaper, wind energy. But Iowa is a large state, and distribution of the wind energy is likely an issue: the Council Bluffs data center appears to be well over 100 miles from the wind farm, several counties away.
We’ve all heard software reps blame “revenue recognition” and “Sarbanes-Oxley” as an excuse for not giving an extra discount or contractual concession. IT sourcing professionals may now hear “GSA Rules” and the “False Claims Act” cited as similar justification: “We didn’t give that concession to the government, so we can’t give it to you.” Could that be the worrying unintended consequence of the Justice Department’s action against Oracle: http://searchoracle.techtarget.com/news/2240019712/US-government-sues-Oracle-for-tens-of-millions-of-dollars?
I can’t comment on the details of the Oracle case, but I’m sure it is complex and two-sided. For instance, I’ve helped clients negotiate reasonable compromises with Oracle to handle special circumstances that won’t apply to many other organizations. These may have involved an extra discretionary discount, if Oracle didn’t have a programmatic way to handle the exception. I wouldn’t expect to get the same concession or discount for another client to whom those special circumstances didn’t apply. For example, this report describes one issue that is particularly important to public sector agencies, but whose impact varies widely: Do Your Software Contracts Permit External Use?