Customer service leaders know that a good customer experience has a quantifiable impact on revenue, as measured by increased repurchase rates, more recommendations, and a decreased willingness to defect from a brand. They also understand conceptually that clean data is important, but many can’t connect master data management and data quality investments to direct improvements in customer service metrics. As a result, IT initiates data projects more than two-thirds of the time, while data projects that directly affect customer service processes rarely get funded.
Customer service leaders need to partner with data management pros — often working within IT — to reframe the conversation. Historically, IT organizations would attempt to drive technology investments with the ambiguous goal of “cleaning dirty customer data” within CRM, customer service, and other applications. Instead, this team must articulate the impact that poor-quality data has on critical business and customer-facing processes.
To do this, start by taking an inventory of the quality of data that is currently available:
Chart the processes that customer service agents follow. 80% of customer calls can typically be attributed to 20% of the issues handled.
Understand what customer, product, order, and past customer interaction data are needed to support these processes.
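To make the 80/20 point concrete, here is a minimal sketch of a simple Pareto cut: given a call log, it finds the smallest set of issue categories that accounts for a target share of call volume. The call categories and counts are hypothetical, purely for illustration.

```python
from collections import Counter

# Hypothetical call log: each entry is the issue category an agent recorded.
call_log = (
    ["billing_dispute"] * 40
    + ["password_reset"] * 25
    + ["shipping_delay"] * 15
    + ["address_change"] * 8
    + ["product_defect"] * 7
    + ["other"] * 5
)

def top_issues(calls, coverage=0.8):
    """Return the smallest set of issue categories (most frequent first)
    that covers at least `coverage` of total call volume."""
    counts = Counter(calls).most_common()
    total = len(calls)
    covered, selected = 0, []
    for issue, n in counts:
        selected.append((issue, n))
        covered += n
        if covered / total >= coverage:
            break
    return selected

print(top_issues(call_log))
# → [('billing_dispute', 40), ('password_reset', 25), ('shipping_delay', 15)]
```

An analysis like this tells the team which handful of processes drive most of the call volume, and therefore where data quality investments will be most visible in customer service metrics.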
North Plains, a legacy pure-play digital asset management (DAM) vendor based out of Toronto, Ontario, announced today that it has agreed to buy fellow pure-play DAM vendor Xinet. The DAM market is fragmented and, with a few exceptions (Adobe, Autonomy, EMC, and OpenText), is littered with smaller, proprietary players. We’ve long expected moves in this market, but most of the focus has been on the larger DAM players in the market or the larger content management or customer experience vendors that have no DAM solution (such as IBM).
With this acquisition of Xinet, North Plains becomes one of the few, if not the only, midmarket pure-play DAM players positioned between the big guns and the small pure-play vendors. What else does North Plains get out of the acquisition?
A platform solution aimed at creative professionals. Xinet has found success targeting creative professionals and supporting assets at the beginning of the content life cycle.
Increased regional reach. More than many other pure-play North American-based DAM vendors, Xinet targets European and Asian customers. North Plains gains a much more global customer base and will inherit channel partners across the globe. Watch for this to be just the first of many moves to make North Plains a global, pure-play DAM vendor.
A stronghold among advertising agencies. Xinet has penetrated the advertising vertical and counts many of these larger names among its clients. With the acquisition, North Plains gains a foothold into this coveted vertical.
In a recent Forrester/DRJ joint survey on business continuity (BC) preparedness, 37% of organizations that had invoked a BC plan in the last five years said that their plans had not adequately addressed communication. In my experience, many organizations:
Don’t appreciate the importance of effective communication. Many organizations focus the content of their BC plans and the goals of their BC exercises on the details of recovery procedures but don’t focus on how they will contact and coordinate response teams, employees, partners, first responders, and customers. If you can’t communicate, you can’t respond to anything.
Rely on manual procedures like call lists or email alone. By themselves, manual procedures are unreliable; they don’t scale for organizations with thousands of employees (or citizens), and they don’t provide any kind of reporting.
Underestimate the difficulty of communicating effectively under stress. During the incident is not the time to attempt to craft effective communication messages or look for a secondary mode of communication because your first mode of communication (land lines and email) is no longer available.
Nowadays, there are two topics that I’m very passionate about. The first is the fact that spring is finally here and it’s time to dust off my clubs to take in my first few rounds of golf. The second is the research I’ve been doing on the connection between big data and big process.
While most enterprise architects are familiar with the promise — and, unfortunately, the hype — of big data, very few are familiar with the newer concept of “big process.” Forrester first coined this term back in August of 2011 to describe the shift we see in organizations moving from siloed approaches to BPM and process improvement to more holistic approaches that stitch all the pieces together to drive business transformation.
Our working definition for big process is:
“Methods and techniques that provide a more holistic approach to process improvement and process transformation initiatives.”
In our Forrsights Business Decision-Makers Survey, Q4 2011, we asked business technology leaders to rate IT’s ability to establish an architecture that can accommodate changes to business strategy. While 45% of IT rated its ability positively, only 30% of business respondents did. Clearly, both think there is room for improvement, but business is more concerned about it.
So are we agile? Only 21% of enterprise architects in our September 2011 Global State Of Enterprise Architecture Online Survey reported being even modestly agile, so I think we all know the answer.
What do we do about it? Continue to focus on technology standardization and cost reduction? Give up on that and focus on tactical business needs? Gridlock in the middle because we can’t make the business case to invest in agility? This is the struggle EA organizations face today.
To act with agility, firms must create a foundation for it, and three barriers can get in the way:
Brittle processes and legacy systems. We all know this one: the current-state mess of processes that cannot adapt to change, and legacy systems where everything is connected to everything else, so even the smallest changes have broad impacts. Techniques to overcome this barrier include partitioning the problem into digestible pieces to show incremental progress and short-term payoff.
Engaging citizens in government isn’t a new concept. Referenda, ballot initiatives, and recall of elected leaders are common in the US and other democracies. Even the EU has recently sought to involve citizens through its European Citizens’ Initiative. This new program, however, has started in an era where new modes of constituent engagement are easier and cheaper. Obtaining the signatures required to place an initiative on a ballot or bring an issue to government leaders’ attention no longer requires endless hours in front of a shopping mall. New social media tools like Facebook, Twitter, or even more local sites like Everyblock in the US or Iniative.eu in Europe make it easier to reach out to citizens and for citizens to reach out to their governments.
And the pattern extends across types of government and geographies. Political activists in Nigeria are using social media to drive election reforms. Political unrest and even revolution throughout the Middle East garner support via social media sites. Recently, citizens in China used social media to block destruction of trees in Nanjing.
New tools specifically tailored to citizen engagement — such as citizen reporting platforms, open data infrastructure, and competition platforms — even further transform governance. These tools provide citizens with not only a voice but also a role in the governance process.
Airtel launched India’s first 4G LTE services in Kolkata yesterday. Airtel delivers the service using TDD technology, making it one of the few operators globally to launch a TD-LTE network. The majority of commercial LTE launches are still based on FDD technology, which raises the question: What impact will TDD have on the LTE landscape? Will TD-LTE get support from equipment manufacturers, or will it suffer a fate similar to that of WiMAX? What does it mean for operators? I believe that TDD will affect the entire mobile ecosystem. Here’s how:
Price parity between paired and unpaired spectra. Both paired and unpaired spectra will be viewed as media that deliver wireless service irrespective of the underlying technology; this will drive price parity between the spectra. The dichotomy between the FDD spectrum (used primarily for coverage) and the TDD spectrum (mainly for capacity) will disappear as technological advancements make it possible to achieve similar capacity and coverage on both spectra. Consequently, the “spectrum crunch” may diminish, as any spectrum will be satisfactory for the deployment of mobile broadband services.
Calculating the cost of a data breach should be a part of every organization’s information security risk management strategy. It’s not an easy task by any means, but making the effort upfront — as opposed to after a breach, when calculating cost is the last thing on the to-do list! — can help your organization assess risk and justify security investments. But where does one begin, and what should cost estimates consider? There are the usual suspects, or direct costs, relating to discovery, response, notification, and damage control, such as:
In-house time and labor (IT, legal, PR, incident response, call center, etc.)
New technologies or services implemented as a result of the breach to change or repair systems
External consultants or services for incident response
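As a rough illustration of how these direct-cost line items roll up, here is a minimal sketch in Python. Every figure and the contingency rate are illustrative assumptions for a hypothetical breach, not benchmarks.

```python
# Hypothetical direct-cost estimate for a single breach; all figures
# are illustrative assumptions, not industry benchmarks.
direct_costs = {
    "in_house_labor": 120_000,         # IT, legal, PR, incident response, call center
    "remediation_technology": 85_000,  # new tools/services to change or repair systems
    "external_consultants": 60_000,    # incident response and forensics services
    "customer_notification": 45_000,   # mailings, call center surge, credit monitoring
}

def total_direct_cost(costs, contingency=0.15):
    """Sum the line items and add a contingency buffer for direct
    costs that are hard to foresee before an actual incident."""
    subtotal = sum(costs.values())
    return subtotal * (1 + contingency)

print(f"${total_direct_cost(direct_costs):,.0f}")
```

Even a crude model like this gives security leaders a defensible starting number to weigh against the cost of preventive investments; indirect costs (churn, brand damage, regulatory fines) would sit on top of it.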
The term "individual contributor" covers a lot of ground -- from the brain surgeon to the shipping and receiving clerk at your local Wal-Mart. I'm not sure which of the two is a better fit for a virtual desktop, or which one has a Mac at home, but I do know that the individual contributors who spent their own money on technology last year to do their jobs shelled out $1,252.60 on hardware alone and another $556.90 on software. That's a heap o' cash.
When we asked them why they spent the money, 42% said it was something they use in their personal lives that they wanted to use for work. Another 27% said their own equipment is better than what their companies provide (presumably CT scanners, portable defibrillators, and Sony PSPs can be ruled out). How do their companies feel about them using their own devices and software? 48% said their firms would either not approve or would make them stop using it.
Of course we know the usual reasons why: security and company policy, and the "benefits" of centralized IT and shared services, among others. I don't know about you, but I always found "shared services" to be a bit of a sham. You know how it works: the VP with the biggest, highest-profile project gets all of the services, and the rest of the plebes get to "share" the table scraps. Want a copy of Microsoft Project or a new laptop for that customer service rep who starts next week? Sorry…Steve's program is using all of the Project licenses, and all we have left in the closet is Pentium II desktops…but they have ergonomic keyboards!
Ever since offshore outsourcing became popular, employment visas — specifically the L-1 and H-1B visas — have been a source of debate. Indian vendors have needed them to make their offshore model work. US technical employees have feared them as a threat to their livelihoods.
Well, here we are in 2012 and the debate is hotter than ever. The offshore vendors, attempting to accommodate tech-savvy clients’ agility and context requirements, require even more staff onsite in the US. Simultaneously, the US government, struggling to combat unemployment, shore up the dwindling middle class, and get through the 2012 election cycle, is cracking down on visa enforcement. For Forrester clients, this situation has become problematic as their vendors fail to land resources for mission-critical projects and the clients themselves are then compelled to use local contractors to fulfill their onsite needs (one reason staff augmentation vendors are seeing a big uptick in growth).