This is a roll-up of all Forrester blogs written for Business Technology Professionals. Role-specific blogs are listed below. Visit Forrester.com to learn how we make Business Technology Professionals successful every day.
MyCustomer.com recently asked for my thoughts on CRM: why initial CRM projects failed, what has changed to make deployments successful now, and what the future holds for CRM. Here is the first part of my point of view, along with a link to a series of three published articles from MyCustomer.com.
Question: Nearly a decade ago, estimates suggested that a very large proportion of CRM projects were failing. What were the main problems undermining CRM projects in those days?
Answer: The main problems undermining CRM projects a decade ago were mismatched expectations with reality in three categories: technology, process and people.
The first CRM systems were not fully baked and had large feature holes that were not always communicated to the purchaser. The technology was not intuitive or easy to use. It was hard to implement, with a long time-to-value, and hard to become proficient in. It was even harder to change the business processes that had been implemented, even though such changes were necessary to stay in line with evolving business needs.
CRM systems were also difficult to integrate with a company’s IT ecosystem, which meant that many actions needed to be repeated in multiple systems. (For example, consider a CRM system that was not integrated with a company’s email system. This meant that a salesperson would have to cut and paste customer communications from email correspondence into the CRM system, which was labor intensive and often not done.)
On the heels of Forrester's GRC Market Overview last month, this week we published my Governance, Risk, And Compliance Predictions: 2011 And Beyond report. Based on our research with GRC vendors, buyers, and users, this paper highlights the aggressive regulatory environment and greater attention to risk management as drivers for change. Specifically, here is a brief summary of the top five trends we will see next year:
Increasing vendor competition will continue to bring more choices and more confusion. Strong market growth will encourage more technology and service vendors to get into the market, which means the fragmentation (which I've discussed previously) and confusion will continue.
Can you remember a year when your business both (1) grew in a healthy way and (2) changed more slowly than the year before? Besides a company’s early startup years, such would be the exception, not the rule. So, in 2011, your business is likely to continue accelerating its pace of change. A recent Forrester report, The Top 15 Technology Trends EA Should Watch: 2011 To 2013, named both business rules and SOA policy as items for your watch list — because both of them help accelerate business change.
Back in the mainframe days — and even into minicomputer, client/server, and Web applications — nearly all of the business logic for every application was tightly wrapped up in the application code. A few forward-thinking programmers might have built separate parameter files with a small bit of business-oriented application configuration, but that was about it. But, business changes too quickly to have all of the rules locked up in the code.
Some have argued that businesspeople ought to do their own programming, and many vendor tools through the years have tried creatively (though unsuccessfully) to make development simple enough for that. But, business is too complex for businesspeople to do all of their own programming.
Enter business rules, SOA policy, and other ways to pull certain bits of business logic out of the code in which it has long been buried. What makes these approaches valuable is that they are targeted, contained, and can have appropriate life cycles built around them, allowing businesspeople to change what they are qualified to change, authorized to change, and approved to change.
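The contrast can be sketched in a few lines. This is a minimal, hypothetical illustration (the discount rule, thresholds, and function names are all invented for the example): the same piece of business logic first buried in application code, then externalized as data that a qualified, authorized businessperson could change without a code release.

```python
# Buried in the code: changing the threshold or rate means a redeploy.
def discount_hardcoded(order_total: float) -> float:
    if order_total >= 1000:
        return order_total * 0.10
    return 0.0

# Externalized: the rule lives in a table the business can own, with its
# own review-and-approval life cycle around changes.
DISCOUNT_RULES = [
    # (minimum order total, discount rate)
    (5000, 0.15),
    (1000, 0.10),
]

def discount_from_rules(order_total: float, rules=DISCOUNT_RULES) -> float:
    # Evaluate the highest threshold first and apply the first match.
    for threshold, rate in sorted(rules, reverse=True):
        if order_total >= threshold:
            return order_total * rate
    return 0.0

print(discount_from_rules(1200))  # 120.0
print(discount_from_rules(6000))  # 900.0
```

The point is not the rule engine (a real one adds versioning, validation, and audit trails) but the separation: the code defines how rules are evaluated, while the rules themselves become changeable content.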
Most, if not all, technology improvements need what is commonly referred to as “complementary inputs” to yield their full potential. For example, Gutenberg's invention of movable type would not have been viable without progress in ink, paper, and printing press technology. IT innovations likewise depend on complements to take hold. The differences internal clouds introduce will affect applications, configuration, monitoring, and capacity management. External clouds will need attention to security and to performance issues related to network latency. The availability of financial data is another important cloud adoption criterion that must be addressed. Without progress in these complementary technologies, the benefits of cloud computing cannot be fully realized.
Internal cloud technology is going to offer embedded physical/virtual configuration management, VM provisioning, orchestration of resources, and most probably, basic monitoring or data collection in an automated environment, with a highly abstracted administration interface. This has the following impact:
More than ever, we need to know where things are. Discovery and tracking of assets and applications in real time is now critical: as configurations can be easily changed and applications easily moved, control of the data center requires complete visibility. Configuration management systems must adapt to this new environment.
Applications must be easily movable. To take advantage of the flexibility offered by orchestration, provisioning, and configuration automation, applications must be easy to load and configure. This assumes that, upstream of the application release, there is an automated process that will “containerize” the application, its dependencies, and its configuration elements. This will affect the application life cycle (see figure).
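The “containerize” step above can be sketched as a self-describing manifest that travels with the application. This is a minimal, hypothetical illustration (the application name, dependency identifiers, and config fields are all invented), not any particular vendor's packaging format:

```python
import json

def build_app_container(name, version, dependencies, config):
    """Bundle the application, its dependencies, and its configuration
    into one deployable description that orchestration tooling can use."""
    return {
        "application": {"name": name, "version": version},
        "dependencies": dependencies,   # e.g., runtime and middleware
        "configuration": config,        # environment-specific settings
    }

manifest = build_app_container(
    name="order-entry",
    version="2.4.1",
    dependencies=["java-runtime-6", "app-server-10.3"],
    config={"db_host": "db01.internal", "pool_size": 20},
)

# Because the manifest captures everything the application needs,
# moving it to another host is re-provisioning from the same description.
print(json.dumps(manifest, indent=2))
```

The design point is that the application stops being a hand-built installation and becomes reproducible data, which is what makes automated movement across the internal cloud practical.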
On Dec. 2, Oracle announced the next move in its program to integrate its hardware and software assets: the introduction of Oracle Private Cloud Architecture, an integrated infrastructure stack with Infiniband and/or 10G Ethernet fabric; integrated virtualization, management, and servers; and software content, both Oracle’s and customer-supplied. Oracle has rolled out the architecture as a general platform for a variety of cloud environments, along with three specific implementations (Exadata, Exalogic, and the new Sunrise Supercluster) as proof points for the architecture.
Exadata has been dealt with extensively in other venues, both inside Forrester and externally, and appears to deliver the goods for I&O groups who require efficient consolidation and maximum performance from an Oracle database environment.
Exalogic is a middleware-targeted companion to the Exadata hardware architecture (or another instantiation of Oracle’s private cloud architecture, depending on how you look at it), presenting an integrated infrastructure stack ready to run either Oracle or third-party apps, although Oracle is positioning it as a Java middleware platform. It consists of the following major components integrated into a single rack:
Oracle x86 or T3-based servers and storage.
Oracle quad data rate (QDR) Infiniband switches and the Oracle Solaris gateway, which makes the Infiniband network look like an extension of the enterprise 10G Ethernet environment.
Oracle Linux or Solaris.
Oracle Enterprise Manager Ops Center for management.
A few days ago I read an interesting article about how organizations need to adapt to virtualization to take full advantage of it.
If virtualization is, in fact, the first step toward the industrialization of IT, we should look at how the organization of industry evolved over time, from its beginnings to the mass-production era. I think IT will reach the mass-production stage within a few years. If IT replicates this evolution, it will go through these phases:
The craftsperson era. At the early stage of any industry, we find a solitary figure in a shop, soon complemented by similarly minded associates (this is me, 43 years ago). They create valuable and innovative products, but productivity is low and cost per unit of production is usually through the roof. This is where IT was at the end of the 1960s and the beginning of the 1970s. The organizational landscape was dominated by “gurus” who seemed to know everything and were loosely coupled within some kind of primitive structure.
The bureaucratic era. As IT was getting more complex, an organizational structure started to appear that tended to “rationalize” IT into a formal, hierarchical structure. In concept, it is very similar to what Max Weber described in 1910: a structure that emphasizes specialization and standardization in pursuit of a common goal. Tasks are split into small increments, mated to skills, and coordinated by a strong hierarchical protocol. The coordination within the organization is primarily achieved through bureaucratic controls. This is the “silo” concept.
I've written about the World Bank's Doing Business Index in several blogs and reports. One of my favorite graphics from my "Where In The World?" report on market opportunity assessment looks at the BRICs (Brazil, Russia, India, and China) - relative to a selection of other emerging markets - in terms of population and then compares their rankings across three economic and political indicators: Doing Business, Economic Freedom, and eReadiness. The point is that "bigger is not always better" in terms of a potential market to enter.
Saudi Arabia has used the World Bank's Doing Business Index as a critical measure of its 10 x 10 initiative - a program of reforms launched with the objective of being in the top 10 countries for doing business by 2010. They missed the mark in 2010. But with the new 2011 rankings, we can congratulate Saudi Arabia's reformers for making it to 11 x 11.
Similar to the past few years at this time of year, we are in the process of preparing a global banking platform deals report for 2010. As we have done since 2005 to help application delivery teams make informed decisions, we will analyze deals’ structure, determine countable new named deals, and look at extended business as well as key functional areas and hosted deals — all to identify the level of global and regional success as well as functional hot spots for a large number of banking platform vendors.
In the past, some vendors told us that they are not particularly fond of us counting new named deals while only mentioning extended business, renewed licenses, and the like. Why do we do this, and what is the background for this approach? First, extended business often reflects good existing relationships between vendors and banks as much as it reflects product capabilities. Second, we have asked for average deal sizes and license fees for years, but only a minority of vendors typically discloses this information. Thus, we do not have a broad basis for dollar or euro market shares, and I personally shy away from playing the banking platform revenue estimates game.
An Alternative Counting Model Could Be Implemented Easily . . .
Consequently, available data makes counting new named deals the only feasible way to represent an extending or shrinking footprint in the off-the-shelf banking platform market — and thus to also represent customer decisions in favor of one banking platform or the other. Some vendors suggested introducing weights for the size of the bank and the relevance of the seven world regions (for example, North America and Asia Pacific). We could easily do so, but there are problems with this approach:
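Mechanically, such a weighting model would indeed be easy to implement, which is what makes the suggestion tempting. A minimal sketch, with purely hypothetical weights and bank-size categories (none of these values come from our methodology):

```python
# Hypothetical weights: each new named deal counts more or less than 1
# depending on the size of the bank and the relevance of its region.
BANK_SIZE_WEIGHT = {"small": 0.5, "midsize": 1.0, "large": 2.0}
REGION_WEIGHT = {"North America": 1.5, "Asia Pacific": 1.2}

def weighted_deal_count(deals):
    """Each deal is a (bank_size, region) tuple; return the weighted total.
    Regions without an explicit weight default to 1.0."""
    return sum(
        BANK_SIZE_WEIGHT[size] * REGION_WEIGHT.get(region, 1.0)
        for size, region in deals
    )

deals_2010 = [
    ("large", "North America"),   # 2.0 * 1.5
    ("small", "Asia Pacific"),    # 0.5 * 1.2
    ("midsize", "Western Europe"),  # 1.0 * 1.0 (default region weight)
]
print(weighted_deal_count(deals_2010))  # = 2*1.5 + 0.5*1.2 + 1*1.0
```

The arithmetic is trivial; the hard part, as noted above, is that every weight embeds a judgment call, which is precisely where the problems with this approach begin.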
The General Services Administration made a bold decision to move its email and collaboration systems to the cloud. In the RFP issued last June, it was easy to see their goals in the statement of objectives:
This Statement of Objectives (SOO) describes the goals that GSA expects to achieve with regard to the
1. modernization of its e-mail system;
2. provision of an effective collaborative working environment;
3. reduction of the government’s in-house system maintenance burden by providing related business, technical, and management functions; and
4. application of appropriate security and privacy safeguards.
GSA announced yesterday that it chose Google Apps for email and collaboration and Unisys as the implementation partner.
So what does this mean?
What it means (WIM) #1: GSA employees will be using a next-generation information workplace. And that means mobile, device-agnostic, and location-agile. Gmail on an iPad? No problem. Email from a home computer? Yep. For GSA and for every other agency and most companies, it's important to give employees the tools to be productive and engage from every location on every device. "Work becomes a thing you do and not a place you go." [Thanks to Earl Newsome of Estee Lauder for that quote.]
“School Bond Measure Fails” seems to be a common headline these days. In fact, a quick Google search found that school bond measures and tax levies failed all over the US just this fall, notably in Santa Clara County, which had been characterized as “tax friendly.” However, despite the hardships of raising money for schools, per-pupil spending continues to increase – it rose steadily from just over $500/pupil in 1919-20 to $11,674/pupil in 2006-07, according to the National Center for Education Statistics.
Much of that expenditure has gone toward technology investments. The number of computers in public elementary and secondary schools has increased: in 2005, the average public school contained 154 instructional computers, compared with only 90 in 1998. More important, the percentage of instructional rooms with access to the Internet increased from 51 percent in 1998 to 94 percent in 2005.