It provides a well-written, step-by-step guide to risk management processes that can be applied to whole organizations, or any part thereof. So far, it has received well-deserved praise for its surprising brevity and consolidated value. These are especially important characteristics for a document with as lofty a goal as standardizing what it calls “an integral part of all organizational processes.”
But if we expect the availability of ISO 31000 to have any sort of revolutionary or game-changing impact in the immediate future, we’re getting way ahead of ourselves.
I talk with many IT professionals who are dismayed at how little backup and recovery has changed in the last ten years. Most IT organizations still run traditional weekly full and daily incremental backups; they still struggle to meet backup windows, to improve recovery capabilities and backup and restore success rates, and to keep up with data growth. Sure, there have been some improvements: the shift to disk as the primary target for backup did improve backup and recovery performance, but it hasn't fundamentally changed backup operations or addressed the most basic backup challenges. Why hasn't disk dragged backup out of the dark ages? Because disk alone can't address some of the underlying causes. Unfortunately, many IT organizations:
In its complaint, the SEC alleges that, “Madoff and his lieutenant Frank DiPascali, Jr., routinely asked (Jerome) O'Hara and (George) Perez for their help in creating records that, among other things, combined actual positions and activity from... market-making and proprietary trading businesses with the fictional balances maintained in investor accounts.”
The SEC further alleges that O’Hara and Perez tried to cover their tracks by deleting hundreds of files, withdrew hundreds of thousands of dollars from their investments through the company, told Madoff they wanted to stop helping him, and then accepted larger salaries and substantial bonuses for their promise to keep quiet.
It will be interesting to watch this case unfold. I was hoping it would get into issues of whether the IT professionals were considered just uninvolved support staff or key participants in the scheme. Considering the evidence the SEC claims to have, I don't think we'll hear those arguments in this case, but keep an eye out for how the defense comes together. Fraud prevention is a growing area of concern for government, health care, insurance, financial services, and other industries... which means we could be seeing more cases questioning the responsibility of IT to identify and/or prevent such issues.
Each year for the past three years I've analyzed and written on the state of enterprise disaster recovery preparedness, and I've seen a definite improvement in overall DR preparedness over that time. Most enterprises do have some kind of recovery data center; enterprises often use an internal or colocated recovery data center to support advanced DR solutions such as replication and more "active-active" data center configurations; and the distance between data centers is increasing. As much as things have improved, there is still a lot of room for improvement, not just in advanced technology adoption but also in DR process management. I typically find that very few enterprises are both technically sophisticated and good at managing DR as an ongoing process.
When it comes to DR planning and process management, there are a number of standards, including the British Standard for IT Service Continuity Management (BS 25777), other national standards and even industry-specific standards. British Standards have a history of evolving into ISO standards, and there has already been widespread acceptance of BS 25777 as well as BS 25999 (the business continuity counterpart). No matter which standard you follow, I don't think you can go drastically wrong: DR planning best practices have been well defined for years and there is a lot of commonality among these standards. They will all recommend:
As GRC practices continue to gain traction, I’ve had a lot of great conversations lately with clients about the importance of peer interaction for professionals in governance, risk, and compliance roles. With his finger apparently on the pulse of all major technology trends, Forrester’s Josh Bernoff must see this as well. This week he announced the winners of the 2009 Forrester Groundswell Awards, with two top GRC vendors among the winners. (For those of you not familiar with Josh Bernoff or Groundswell, check out the book info here.)
Two years ago, Forrester and the Disaster Recovery Journal partnered to field surveys on a pair of pressing topics in Risk Management: Business Continuity (BC) and Disaster Recovery (DR). The surveys help highlight trends in the industry and provide organizations with some statistical data for peer comparison. The partnership has been a huge success. In 2007, we examined the state of disaster recovery preparedness; in 2008, we examined the state of business continuity preparedness; and this year, we examine the state of crisis communications and the interplay between enterprise risk management and business continuity.
We decided to focus on crisis communications because, as last year's study revealed, one of the lessons learned from organizations that had invoked a business continuity plan (BCP) was that they had greatly underestimated the importance and difficulty of communication and collaboration both within and outside the organization. In any crisis, whether a natural disaster, a power outage, a security incident or even a corporate scandal, crisis communication is critical to responding quickly, managing the response and returning to normal operations.
Organizations approach crisis communication differently. In some organizations, crisis communications is handled by a separate team that works together with BC/DR planning teams to embed communication strategies into BCPs/DRPs; in other companies, the BC/DR planning teams do their best to address crisis communication themselves.
Yesterday IBM announced the availability of its new IBM Information Archive Appliance, which replaces IBM's DR550. The new appliance offers significantly increased scale and performance because it's built on IBM's General Parallel File System (GPFS), more interfaces (NAS and an API to Tivoli Storage Manager), and the ability to accept information from multiple sources: IBM content management and archiving software and, eventually, third-party software. Tivoli Storage Manager (TSM) is embedded in the appliance to provide automated tiered disk and tape storage as well as block-level deduplication. TSM's block-level deduplication will reduce storage capacity requirements, and its disk and tape management capabilities will let IT continue to leverage tape for long-term data retention. All these appliance subcomponents are transparent to the administrator, who sees just one console for defining collections and the retention policies for those collections.
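If you're wondering what block-level deduplication buys you, the core idea is simple: split incoming data into blocks, hash each block, and store any given block only once. The sketch below is a generic, simplified illustration of that idea; the fixed 4 KB block size, SHA-256 hashing, and in-memory dictionary are my assumptions for illustration, not how TSM actually implements it:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size; real systems often use variable-size blocks


def dedup_store(data: bytes, store: dict) -> list:
    """Split data into fixed-size blocks, keeping only unique blocks in `store`.

    Returns a "recipe" (list of block hashes) needed to rebuild the data.
    `store` maps hash -> block bytes and is shared across everything stored.
    """
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # write the block only if it's new
        recipe.append(digest)
    return recipe


def rebuild(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its recipe."""
    return b"".join(store[h] for h in recipe)


if __name__ == "__main__":
    store = {}
    first = b"A" * 8192 + b"B" * 4096   # three blocks, only two distinct
    second = b"A" * 4096 + b"C" * 4096  # shares one block with `first`
    r1 = dedup_store(first, store)
    r2 = dedup_store(second, store)
    assert rebuild(r1, store) == first and rebuild(r2, store) == second
    print(len(store))  # 3 unique blocks stored instead of 5 written
```

The capacity savings come from exactly this effect: backup and archive data is highly repetitive across objects and over time, so the unique-block store grows far more slowly than the raw bytes ingested.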
Two weeks ago, I commented on the changing role of the risk management professional, and thought it would be worthwhile to spend a few moments discussing the auditor as well. In a contest of which job is likely to see more change in the next two years, I would expect a photo finish.
Even in the toughest times, winners will invariably emerge. With the way expectations are changing regarding corporate controls and disclosure, risk management professionals (whose lack of influence was seen as a substantial cause of our current state of affairs to begin with) will likely be among the first beneficiaries of our new outlook on business.
Forrester customer inquiries seem to have taken a step back when it comes to risk management. While there are still plenty of incoming technology and vendor selection questions, there has been a noticeable spike in calls about fundamental issues, such as how to build and organize risk management programs. Knowledge and experience in risk management basics are in high demand.
On a weekly basis, I get at least one inquiry request from either a vendor or an end-user company seeking industry averages for the cost of downtime. Vendors like to quote these statistics to grab your attention and to create a sense of urgency to buy their products or services. BC/DR planners and senior IT managers quote these statistics to create a sense of urgency with their own executives who are often loath to invest in BC/DR preparedness because they view it as a very expensive insurance policy.
BC/DR planners, senior IT managers and anyone else trying to build the business case for BC/DR should avoid the use of industry averages and other sensational statistics. While these statistics do grab attention, more often than not, they are misleading and inaccurate, and your executives will see through them. You'll hurt your business case in the end because you haven't done your homework and your execs will know it.
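Doing your homework means building the estimate from your own numbers rather than borrowing someone else's average. The sketch below is a deliberately simplified model with entirely hypothetical inputs; the function name and every parameter are illustrative, and a real business impact analysis would also account for contractual penalties, recovery labor, regulatory exposure and reputational damage:

```python
def downtime_cost_per_hour(
    annual_revenue: float,
    revenue_hours_per_year: float,
    pct_revenue_dependent: float,  # fraction of revenue halted by this outage
    affected_employees: int,
    loaded_hourly_wage: float,
    pct_productivity_lost: float,
) -> float:
    """Rough per-hour downtime cost built from your own figures.

    Combines lost revenue (revenue run-rate scaled by how much of the
    business the failed system actually carries) with lost productivity
    (idled staff at their loaded hourly cost).
    """
    lost_revenue = (annual_revenue / revenue_hours_per_year) * pct_revenue_dependent
    lost_productivity = affected_employees * loaded_hourly_wage * pct_productivity_lost
    return lost_revenue + lost_productivity


# All inputs below are made-up placeholders -- substitute your own data.
cost = downtime_cost_per_hour(
    annual_revenue=500_000_000,    # $500M/yr
    revenue_hours_per_year=8_760,  # 24x7 operation
    pct_revenue_dependent=0.40,    # 40% of revenue flows through the failed system
    affected_employees=300,
    loaded_hourly_wage=60.0,
    pct_productivity_lost=0.75,
)
print(f"${cost:,.0f} per hour")
```

An estimate like this, grounded in your own revenue and staffing data, will survive executive scrutiny in a way that a borrowed industry average never will.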
I saw a study recently that stated the cost of downtime for the insurance industry was $1,202,444 per hour. You might be tempted to grab this statistic and throw it into the next presentation to your C-level exec, but what is this statistic really telling you? Do the demographics of the companies in the study match yours? Do you trust the accuracy of the data? Consider the following:
What is the definition of "insurance industry" in this case? Is it companies that focus solely on insurance, or does it include companies that also provide financial advice and monetary instruments to their clients?