How do you schedule your service desk staff to ensure adequate coverage and achieve service-level targets? Does your service desk solution cover this?
Staffing service desk analysts effectively can be complicated. Leveraging historic volume levels across all of the communication channels is one way to plan ahead. Insight into planned projects from other groups — e.g., application upgrades or other scheduled releases — is equally important.
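One standard way to turn historic volume levels into a staffing plan is the Erlang C queueing model, which estimates how many agents are needed to hit a service-level target. The sketch below is a minimal illustration under assumed inputs (the call volumes, handle times, and the "80% answered within 20 seconds" target are hypothetical, not figures from any vendor's tool):

```python
from math import exp, factorial

def erlang_c(agents: int, traffic: float) -> float:
    """Probability that a contact has to wait (Erlang C), given offered
    traffic in Erlangs (arrival rate x average handle time)."""
    if agents <= traffic:
        return 1.0  # queue is unstable; effectively everyone waits
    top = (traffic ** agents / factorial(agents)) * (agents / (agents - traffic))
    bottom = sum(traffic ** k / factorial(k) for k in range(agents)) + top
    return top / bottom

def agents_needed(contacts_per_hour: float, aht_sec: float,
                  target_sl: float = 0.80, answer_within_sec: float = 20.0) -> int:
    """Smallest head count meeting e.g. '80% answered within 20 seconds'."""
    traffic = contacts_per_hour * aht_sec / 3600.0  # offered load in Erlangs
    n = int(traffic) + 1
    while True:
        p_wait = erlang_c(n, traffic)
        # Service level: share of contacts answered within the target time.
        service_level = 1 - p_wait * exp(-(n - traffic) * answer_within_sec / aht_sec)
        if service_level >= target_sl:
            return n
        n += 1

# Illustrative: 120 contacts/hour at a 5-minute average handle time.
print(agents_needed(120, 300))
```

Fed with historic per-channel, per-interval volumes, the same calculation yields an interval-by-interval schedule rather than a single head count.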
Service desk teams should automate the workforce management process as much as possible in order to meet customers’ expectations. Some service desk solutions already include workforce management among their capabilities. If this is a challenge for you today — make sure that you include this key requirement in your functionality assessment list. Use the ITSM Support Tools Product Comparison tool for your assessment.
In the past week, I was briefed by one vendor that has incorporated workforce management into its solution: helpLine 5.1 Workforce Management allows for optimized planning of the service desk team.
Hi, and thanks for stopping by. I joined Forrester just over a month ago and I plan to post here regularly with some thoughts on the ERP apps arena. I’m hoping this blog will serve as a place for us to exchange views, and I very much welcome your input.
As you know, Forrester is structured around roles, and I’m part of the analyst team serving the needs of business process professionals. My primary area of focus is enterprise resource planning software. I’m currently pulling together my research agenda for 2011, and I was wondering what top-of-mind issues you think I should be tackling.
At a high level, some of the areas I’m considering include:
SaaS ERP and PaaS.
ERP-flavored project management.
I’m also interested in hearing about midsize organizations and enterprises that have benefited from the successful deployment of one of the following:
A two-tier combination of one vendor’s SaaS ERP integrating with another vendor’s on-premise ERP.
Has your company published a supplier code of conduct? Most likely it has. Is it conducting supplier audits to ensure that the code of conduct is being followed? Maybe. Does it have a plan in place if an audit turns up something truly ugly? Doubtful. Would it publish those results if they were bad? Yeah . . . probably not.
Enter Apple, which recently released its Supplier Responsibility 2011 Progress Report, outlining the specific findings of its own supplier audits. The results?
“In 2010, our audits of 127 facilities revealed 37 core violations: 18 facilities where workers had paid excessive recruitment fees, which we consider to be involuntary labor; ten facilities where underage workers had been hired; two instances of worker endangerment; four facilities where records were falsified; one case of bribery; and one case of coaching workers on how to answer auditors’ questions.” (Source: Apple Supplier Responsibility, 2011 Progress Report)
I give Apple high praise for making this information public. Hopefully, it has a ripple effect in the industry and we’ll see more transparency. Public sentiment does not separate the company that assembles an iPad from the Apple brand. Even if you’ve outsourced the supply chain, there’s still a corporate responsibility to ensure that socially and environmentally sound business practices are taking place. And this goes for subcontractor relationships too — yes, in the eyes of the consumer, you are responsible for your supplier’s supplier’s actions. Apple gets this.
The tech industry has generally enjoyed a good reputation with the public and with politicians -- unlike those "bad guys" in banking, or health insurance, or oil and gas. However, analysis in my just-published report -- Caution: IT Investment May Be Hurting US Job Growth -- suggests that this good reputation could be dented by evidence that business investment in technology may be coming at the expense of hiring.
Some background: In preparing Forrester’s tech market forecasts, I spend a lot of time looking at economic indicators. Employment is not one I usually track, because I have found no causal connection between it and tech market growth. However, given all the press attention paid to an unemployment rate in excess of 9% and monthly employment increases measured in the tens of thousands instead of hundreds of thousands, it has been hard to ignore how remarkably feeble US job growth has been in this economic recovery.
Like many connected with IBM as an employee, a customer, or an analyst, I watched IBM's Watson beat two smart humans in three games of Jeopardy. However, I was able to do so under more privileged conditions than sitting on my couch. Along with my colleague John Rymer, I attended an IBM event in San Francisco, in which two of the IBM scientists who had developed Watson provided background on Watson prior to, during commercial breaks in, and after the broadcast of the third and final Jeopardy game. We learned a lot about the time, effort, and approaches that went into making Watson competitive in Jeopardy (including, in answer to John's question, that its code base was a combination of Java and C++). This background information made clear how impressive Watson is as a milestone in the development of artificial intelligence. But it also made clear how much work still needs to be done to take the Watson technology and deploy it against the IBM-identified business problems in healthcare, customer service and call centers, or security.
The IBM scientists showed a scattergram of the percentage of Jeopardy questions that winning human contestants got right vs. the percentage of questions that they answered, which showed that these winners generally got 80% or more of the answers right for 60% to 70% of the questions. They then showed line charts of how Watson did against the same variables over time, with Watson well below this zone at the beginning, but then month by month moving higher and higher, until by the time of the contest it was winning over two-thirds of the test contests against past Jeopardy winners. But what I noted was how long the training process took before Watson became competitive -- not to mention the amount of computing and human resources IBM put behind the project.
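The chart the IBM scientists described reflects a general confidence-thresholding tradeoff: the higher the confidence a system demands before attempting an answer, the fewer questions it attempts but the more of those attempts it gets right. A minimal sketch of that tradeoff (the data and function names here are illustrative assumptions, not IBM's code or results):

```python
def precision_coverage(scored_answers, threshold):
    """Given (confidence, was_correct) pairs, return (precision, coverage)
    when the system only attempts questions above the confidence threshold."""
    attempted = [correct for conf, correct in scored_answers if conf >= threshold]
    if not attempted:
        return 0.0, 0.0
    precision = sum(attempted) / len(attempted)      # share of attempts that were right
    coverage = len(attempted) / len(scored_answers)  # share of questions attempted
    return precision, coverage

# Illustrative run: raising the bar trades coverage for precision.
sample = [(0.95, True), (0.90, True), (0.85, True), (0.70, True),
          (0.65, False), (0.50, True), (0.40, False), (0.30, False)]
print(precision_coverage(sample, 0.8))  # (1.0, 0.375): fewer attempts, all correct
print(precision_coverage(sample, 0.3))  # (0.625, 1.0): attempts everything
```

Watson's training, in effect, pushed this curve month by month toward the high-precision, high-coverage zone that past Jeopardy champions occupy.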
With all the articles written about IPv4 addresses running out, Forrester’s phone lines are lit up like a Christmas tree. Clients are asking what they should do, whom they should engage, and when they should start embracing IPv6. As the old adage goes, it takes a village to raise a child, and Forrester is only one part of that village; therefore, I started to compile a list of vendors and tactical documentation links that would help customers transition to IPv6. As I combed through multiple sites, the knowledge and documentation chasm between vendors became apparent. If a vendor doesn’t understand your business goals or have the knowledge to solve your business issues, is it a good partner? Are acquisition and warranty costs the only, or even the largest, considerations in changing to a new vendor? I would say no.
Support documentation and access to knowledge are especially critical in networking design, deployment, maintenance, and upgrades. Some pundits have relegated networking to a commodity play, but networking is more than plumbing. It’s the fabric that supports a dynamic business, connecting users to services that are relevant to the moment, are aggregated at the point of use, and originate from multiple locations. The complexity has evolved from designing in a few links to managing hundreds of relationships (security, acceleration, prioritization, etc.) along the flow of apps and data through a network. Virtualization, convergence, consolidation, and the evolving data center networks are prime examples of today’s network complexity. In response to this complexity, architects and practitioners turn to books, training materials, blogs, and repositories so that they can:
Set up an infrastructure more quickly or with a minimal number of issues, since there is a design guide or blueprint.
Seriously. I recently spoke with a client who swears that software quality improved once they got rid of the QA team. Instead of making QA responsible for quality, they put the responsibility squarely on the backs of the developers producing the software. This seems to go against conventional wisdom about quality software and developers: Don't trust developers. Or, borrowing from Ronald Reagan, trust but verify.
This client is no slouch, either. The applications provide real-time market data for financial markets, and the client does more than 40 software releases per year. If the market data produced by an application were unavailable or inaccurate, then the financial market it serves would crumble. Availability and accuracy of information are absolute. This app can't go down, and it can't be wrong.
Why Does This Work?
The client said that this works because the developers know that they have 100% responsibility for the application. If it doesn't work, the developers can't say that "QA didn't catch the problem." There is no QA team to blame. The buck stops with the application development team. They better get it right, or heads will roll.
As the British author Samuel Johnson famously observed, the prospect of being hanged concentrates the mind wonderfully.
IBM's Watson (natural language processing, deduction, AI, inference, and statistical modeling, all served by a massively parallel array of POWER7 computers totaling 2,880 processor cores and 15TB of RAM) beat the greatest Jeopardy players in three rounds over the past three days — and the matches weren't even close. Watson has shocked us, and now it's time to think: What's in it for the security professional?
The connection is easy to see: both domains involve great complexity, large amounts of unstructured background information, and the need to make decisions in real time.
Forrester predicts that the same level of sophistication Watson displays will appear in pattern recognition for fraud management and data protection. If Watson can answer a Jeopardy riddle in real time, it will certainly be able to find patterns of data loss, cluster security incidents and events, and identify their root causes. Mitigating or removing those root causes will be easy compared to identifying them . . .
As we witness truly historic events in the Middle East brought about in part by citizens empowered by social networks, we are also seeing disturbing trends that may yet result in social networks becoming a force for evil.
I recently joined the Content and Collaboration team at Forrester, and I was happy to see Forrester data showing that 53% of organizations are looking to expand, upgrade, or implement their content management solutions. Over the last six weeks, I’ve taken many inquiries from organizations re-evaluating their ECM programs, driven by the desire both to add new functionality and to extend the reach of ECM to a broader audience. ECM is clearly alive and well.
But time and again I’ve seen this problem: Companies jump directly into the RFI/RFP process without fully developing their strategy and road map. Skipping this important step can result in poor ECM technology selection, lack of governance, and, ultimately, failure.
A good road map will address the three classical aspects of an enterprise application implementation: People, Process, and Technology. Outlining the tasks for each area is a good start down the path of success. Here are some sample points for starting your ECM project:
Define your ECM Strategy – Every organization defines ECM differently. When creating a strategy, focus on gaining an understanding of your goals and objectives for implementing an ECM solution. A good example of an ECM goal is to minimize the number of versions of the same document that exist in the organization. These goals and objectives will form the basis for the project’s critical success factors.