I had the chance to sit down with Credit Suisse’s CISO and Head of IT Risk, Daniel Barriuso, to ask him a few questions about his role at Credit Suisse and his approach to security. Daniel will be keynoting this week at Forrester’s Security Forum, which kicks off this Thursday, September 16th. Here’s a sample of our Q&A:
Why is a more holistic approach to IT security so important today?
[Barriuso]: Given the complex and fast-changing IT security landscape, a holistic approach is key to being able to effectively understand the end-to-end threat landscape and manage it proactively. This entails planning for both current and emerging threats, identifying future trends, and making conscious decisions on the security investments required.
What were some of the most important lessons that you learned over the last several years?
[Barriuso]: A key lesson that I have learned throughout my career is that governance is the foundation for a strong IT security organization. Often organizations focus on technology and technical controls as the main driver to secure data. Instead, a top-down approach is required, beginning with the policy, governance bodies, and risk management framework.
What advice would you give to other senior security leaders who want to move to this more holistic approach?
I just completed my second quarter as the Research Director of Forrester’s Security and Risk team. Since no one has removed me from my position, I assume I’m doing an OK job. Q2 was another highly productive quarter for the team. We published 20 reports, ran a security track at Forrester’s IT Forum in Las Vegas and Lisbon, and fielded more than 506 client inquiries.
In April, I discussed the need to focus on the maturity of the security organization itself. I remain convinced that this is the most important priority for security and risk professionals. If we don’t change, we’ll always find ourselves reacting to the next IT shift or business innovation, never predicting or preparing for it ahead of time. It reminds me of the Greek myth of Sisyphus. Sisyphus was a crafty king who earned the wrath of the gods. For punishment, the gods forced him to roll a huge boulder up a steep hill, only to watch it roll back down just before he reached the top — requiring him to begin again. Gods tend to be an unforgiving lot, so Sisyphus had to repeat this process for the rest of eternity.
If my protestations don’t convince you, perhaps some data will. The following are the top five Forrester reports read by security and risk professionals in Q2:
We recently embarked on a Forrester-wide research project to benchmark the use of social technologies across enterprise organizations. Why is this important? Well, as you may know, we cover social technologies from a wide range of perspectives — from roles in marketing to IT to technology professionals. We find that each of these roles differs in its general “social maturity” and that most companies are experiencing pockets of success, but few, if any, are successfully implementing social technologies across the board. In fact, full maturity in this space could take years, but there are clear differences in how some “ahead of the curve” companies are using social technologies for business results.
There are serious security and risk concerns with social technology, but there are also significant business and operational benefits. Security professionals have to determine how they can mitigate these risks to an acceptable level without significantly hampering the business. If you haven’t seen it, Chenxi Wang has written an excellent report on how effective management of social media can alleviate security risks. Check out To Facebook Or Not To Facebook.
There is also some discussion about how security professionals might use social technologies to their own benefit — particularly to leverage the knowledge of other security professionals to combat the growing sophistication of security attacks. If you haven’t seen it, check out John Kindervag’s report SOC 2.0: Virtualizing Security Operations.
This is my first post as the new Research Director for the Security and Risk team here at Forrester. During my first quarter as RD, I spent a lot of time listening to our clients and working with the analysts and researchers on my team to create a research agenda for the rest of the year that will help our clients tackle their toughest challenges. It was a busy Q1 for the team. We hosted our Security Forum in London, fielded more than 443 end client inquiries, completed more than 18 research reports, and delivered numerous custom consulting engagements.
In the first quarter of 2010, clients were still struggling with the security ramifications of increased outsourcing, cloud computing, consumer devices, and social networking. These trends have created a shift in data and device ownership that is usurping traditional IT control and eroding traditional security protections.
We’re still dealing with this shift in 2010 — there’s no easy fix. This year there is a realization that the only way the security organization can stay one step ahead of whatever business or technology shift happens next is to transform itself from a silo of technical expertise that is reactive and operationally focused to one that is focused on proactive information risk management. This requires a reexamination of the security program itself (strategy, policy, roles, skills, success metrics, etc.), its security processes, and its security architecture. In short, it means taking a step back and looking at the big picture before evaluating and deploying the next point protection product. Not surprisingly, our five most read docs from January 1, 2010, to today have less to do with specific security technologies:
I talk with many IT professionals who are dismayed at how little backup and recovery has changed in the last ten years. Most IT organizations still run traditional weekly fulls and daily incremental backups; they still struggle to meet backup windows, improve recovery capabilities, improve backup and restore success rates, and keep up with data growth. Sure, there have been some improvements — the shift to disk as the primary target for backup did improve backup and recovery performance, but it hasn't fundamentally changed backup operations or addressed the most basic backup challenges. Why hasn't disk dragged backup out of the dark ages? Well, disk alone can't address some of the underlying causes. Unfortunately, many IT organizations:
Each year for the past three years I've analyzed and written on the state of enterprise disaster recovery preparedness, and I've seen a definite improvement in overall DR preparedness during that time. Most enterprises do have some kind of recovery data center; enterprises often use an internal or colocated recovery data center to support advanced DR solutions such as replication and more "active-active" data center configurations; and finally, the distance between data centers is increasing. As much as things have improved, there is still a lot more room for improvement, not just in advanced technology adoption but also in DR process management. I typically find that very few enterprises are both technically sophisticated and good at managing DR as an ongoing process.
When it comes to DR planning and process management, there are a number of standards, including the British Standard for IT Service Continuity Management (BS 25777), other national standards, and even industry-specific standards. British Standards have a history of evolving into ISO standards, and there has already been widespread acceptance of BS 25777 as well as BS 25999 (the business continuity version). No matter which standard you follow, I don’t think you can go drastically wrong. DR planning best practices have been well defined for years, and there is a lot of commonality in these standards. They will all recommend:
Two years ago, Forrester and the Disaster Recovery Journal partnered to field surveys on a pair of pressing topics in risk management: business continuity (BC) and disaster recovery (DR). The surveys help highlight trends in the industry and provide organizations with some statistical data for peer comparison. The partnership has been a huge success. In 2007, we examined the state of disaster recovery preparedness; in 2008, we examined the state of business continuity preparedness; and this year, we examine the state of crisis communications and the interplay between enterprise risk management and business continuity.
We decided to focus on crisis communications because, as last year’s study revealed, one of the lessons learned from organizations that had invoked a business continuity plan (BCP) was that they had greatly underestimated the importance and difficulty of communication and collaboration both within and outside the organization. In any situation, whether it's a natural disaster, a power outage, a security incident, or even a corporate scandal, crisis communication is critical to responding quickly, managing the response, and returning to normal operations.
Organizations approach crisis communication differently. In some organizations, crisis communications is a separate team that works together with BC/DR planning teams to embed communication strategies into BCPs/DRPs; in other companies, the BC/DR planning teams do their best to address crisis communication themselves.
Yesterday IBM announced the availability of its new IBM Information Archive Appliance, which replaces IBM’s DR550. The new appliance has significantly increased scale and performance because it’s built on IBM’s General Parallel File System (GPFS), offers more interfaces (NAS and an API to Tivoli Storage Manager), and accepts information from multiple sources – IBM content management and archiving software and, eventually, third-party software. Tivoli Storage Manager (TSM) is embedded in the appliance to provide automated tiered disk and tape storage as well as block-level deduplication. TSM’s block-level deduplication will reduce storage capacity requirements, and its disk and tape management capabilities will let IT continue to leverage tape for long-term data retention. All these appliance subcomponents are transparent to the IT end user who manages the appliance – he or she just sees one console for defining collections and the retention policies for those collections.
On a weekly basis, I get at least one inquiry request from either a vendor or an end-user company seeking industry averages for the cost of downtime. Vendors like to quote these statistics to grab your attention and to create a sense of urgency to buy their products or services. BC/DR planners and senior IT managers quote these statistics to create a sense of urgency with their own executives who are often loath to invest in BC/DR preparedness because they view it as a very expensive insurance policy.
BC/DR planners, senior IT managers and anyone else trying to build the business case for BC/DR should avoid the use of industry averages and other sensational statistics. While these statistics do grab attention, more often than not, they are misleading and inaccurate, and your executives will see through them. You'll hurt your business case in the end because you haven't done your homework and your execs will know it.
I saw a study recently that stated the cost of downtime for the insurance industry was $1,202,444 per hour. You might be tempted to grab this statistic and throw it into the next presentation to your C-level exec but what is this statistic really telling you? Do the demographics of the companies in the study match yours? Do you trust the accuracy of the data? Consider the following:
What is the definition of insurance industry in this case? Is it companies that focus solely on insurance or does it include companies that also provide financial advice and monetary instruments to their clients?
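Rather than quoting an industry average, it's far more credible to estimate your own cost of downtime from figures your finance and business teams can stand behind. Here's a minimal sketch of one common approach (lost revenue plus lost productivity); every input figure below is a hypothetical placeholder, not data from any study:

```python
# A minimal sketch of estimating a company-specific hourly downtime cost
# instead of quoting an industry average. All figures are hypothetical
# placeholders; substitute numbers from your own finance and HR teams.

def hourly_downtime_cost(annual_revenue, revenue_hours_per_year,
                         affected_employees, avg_loaded_hourly_wage,
                         productivity_loss_pct):
    """Estimate the cost per hour of an outage for one business process."""
    # Lost revenue: assumes revenue accrues evenly over operating hours,
    # which is itself an assumption worth validating for your business.
    lost_revenue = annual_revenue / revenue_hours_per_year
    # Lost productivity: idle labor cost for staff who depend on the system.
    lost_productivity = (affected_employees * avg_loaded_hourly_wage
                         * productivity_loss_pct)
    return lost_revenue + lost_productivity

# Hypothetical mid-size insurer: $500M annual revenue earned over 2,000
# operating hours per year, 300 affected employees at $60/hour, 75% idled.
cost = hourly_downtime_cost(500_000_000, 2_000, 300, 60.0, 0.75)
print(f"Estimated cost of downtime: ${cost:,.0f} per hour")
```

A number built this way is smaller and less dramatic than $1.2 million per hour, but your executives can interrogate every input, which is exactly what makes the business case hold up.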
Storage-as-a-Service is relatively new. Today the main value proposition is as a cloud target for on-premise deployments of backup and archiving software. If you have a need to retain data for extended periods of time (1 year plus in most cases), tape is still the more cost-effective option given its low capital acquisition cost and removability. If you have long-term data retention needs and you want to eliminate tape, that's where a cloud storage target comes in. Electronically vault that data to a storage-as-a-service provider who can store that data at cents per GB. You just can't beat the economies of scale these providers are able to achieve.
If you're a small business and you don't have the staff to implement and manage a backup solution, or if you're an enterprise looking for a PC backup or remote office backup solution, I think it's worthwhile to compare the three-year total cost of ownership of an on-premise solution versus backup-as-a-service.
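The comparison above can be sketched in a few lines. This is a deliberately rough model, and every cost figure in it is a hypothetical placeholder; plug in actual quotes from your own vendors and your own labor rates:

```python
# A rough three-year TCO comparison: on-premise backup vs.
# backup-as-a-service. All figures are hypothetical placeholders.

def on_premise_tco(hw_capex, sw_license, annual_maintenance_pct,
                   admin_hours_per_month, admin_hourly_rate, years=3):
    """Capital cost plus maintenance contracts plus admin labor."""
    capex = hw_capex + sw_license
    maintenance = capex * annual_maintenance_pct * years
    labor = admin_hours_per_month * 12 * years * admin_hourly_rate
    return capex + maintenance + labor

def baas_tco(protected_gb, per_gb_per_month, years=3):
    """Subscription cost only; assumes flat capacity for simplicity."""
    return protected_gb * per_gb_per_month * 12 * years

# Hypothetical small business protecting 500 GB of data.
onprem = on_premise_tco(hw_capex=15_000, sw_license=5_000,
                        annual_maintenance_pct=0.18,
                        admin_hours_per_month=10, admin_hourly_rate=50)
cloud = baas_tco(protected_gb=500, per_gb_per_month=0.50)
print(f"On-premise 3-year TCO: ${onprem:,.0f}")
print(f"Backup-as-a-service 3-year TCO: ${cloud:,.0f}")
```

Note what the sketch leaves out on purpose: data growth, bandwidth costs, restore fees, and the risk cost of failed backups. Add those terms before you present the comparison, because that's where the two models diverge most.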