It’s common knowledge that the security landscape has shifted over the past few years and the once-strong perimeters that CISOs relied upon have become stretched, fragmented, and overrun by increasingly mature attackers. There are many reasons for this change — from the increasing value of intellectual property and ideas to the business’s desire for agility and flexibility — but it comes down to the fact that the technology controls that CISOs are so used to deploying simply can’t stay ahead of the threats.
Increasingly, Security & Risk (S&R) Professionals are being asked not only to protect the organization from hackers but also to protect their organization’s brand and competitive advantage whilst enabling efficient and agile business processes. In this environment, we need to realize that technology is just one piece of an increasingly complex puzzle, and it’s a puzzle we have to solve without ever saying “no.” As one security expert Forrester interviewed put it, the right question is “How do I make sure this boat doesn’t crash?”; it isn’t, “How do I make sure this boat doesn’t even reach the ocean?”
It’s essential that CISOs shift their focus beyond technology to the wider spectrum of responsibilities that comprise an effective security practice. By redefining the situation and evolving their role, S&R professionals can:
For many years, security professionals have lived by the three pillars of risk management – AVOID, TREAT, ACCEPT. These tenets have served the profession well, enabling CISOs to build appropriately secure networks at a tolerable level of cost. Unfortunately, as the litany of security breaches over the past 12 months makes clear, the landscape is changing. More than ever before, security is a ‘no-win’ game.
High-profile attackers, state-sponsored or otherwise, are one threat – but it goes deeper than this. The keys to the kingdom are no longer in the hands of the generals and policy makers; their decisions and discussions are enabled by email, IM, and IP telephony, all of which sit firmly in the domain of the IT department and the system admin – and stressed, poorly paid employees do not make ideal custodians of such critical information. As an example, Anonymous claims to have access to every classified government database in the US, but they didn’t hack them – disaffected system administrators and employees simply opened the doors for them, or sent them the access codes.
As the broadening gap between our ambitions for a secure enterprise and our ability to deliver on such a vision becomes self-evident, the time has come to pay equal attention to the poor cousin of risk management, “TRANSFER.” For many CISOs, risk transference is a topic that is largely theoretical as, even when a task is outsourced, the risk associated with a breach commonly remains with the data-owning organisation. Cyber insurance offers a different solution.
Last night I stumbled across a documentary on BBC2 (content only available to UK residents – sorry!) about the human brain. One section talked about how the brain perceives risk – obviously an interesting topic for security folk!
A test subject was placed into a brain scanner and asked to estimate the likelihood of 80 different negative events occurring to him in the future – from developing cancer, to his house being burgled, to breaking a leg etc. Once he had stated his opinion, the real likelihood was then displayed to him.
At the end of the 80 events, the process was reset and the subject was presented with the same events, asked once again to state his perceived likelihood – this time with some knowledge of the actual answers.
The results are surprising.
Where his initial response had been too pessimistic, the test subject adjusted his perception to align with the actual likelihood. However, where he had initially been too optimistic, his opinion remained largely unchanged by the facts! It was apparent that the brain proactively maintained a ‘rose-tinted’ view of the risks, accommodating a more optimistic view but shunning anything more negative.
The scientists argued that the brain did this for two main reasons:
1 – To minimise stress and anxiety, for the resultant health benefits; and
2 – Because an optimistic outlook helps drive success, support ambition and keep humanity striving for a better future.
The new revolution in apps and social media continues at a stunning rate. Nearly every day a colleague tells me of another app or site that is bubbling up and about to hit the big time. Many will not break through, but some will capture the imagination and become the next generation of YouTube and Facebook.
The behaviour of certain apps/sites, however, gives me some cause for concern. As a recent entrant to Pinterest, I was alarmed to note that the site takes a copy of the pinned image and serves that from its own servers. The burden of managing copyright issues seems to sit firmly with the users, most of whom never give such legislation a second thought. There is a method for removing content; unsurprisingly, however, it’s not half as simple as pinning new content. Pinterest’s terms and conditions are also interesting, giving it “irrevocable, perpetual, royalty-free” permission to “exploit” member content.
The Pinterest site is building its value on other people’s content — which is fine as long as those people have consented. I recently looked at some interesting infographics pinned on the site, all of which must have taken considerable resources to put together, yet I never once needed to visit the source site, a visit which might have triggered the advertising income vital to enabling them to continue their work. I wonder if they even realize their content is available in this way?
Last night I attended a vendor presentation about cloud-based risk and the threat from nation state attacks. Unfortunately, due to a busy schedule and a difficult journey, I arrived just as the final presentation moved to its Q&A stage. Listening to a Q&A session when I had no idea what the content of the presentation had been was actually quite an interesting experience, unfortunately not for all the best reasons. A section of the audience immediately dived into the detail and tried to find fault with the solutions that had evidently been outlined. They poked and prodded the presenter until she admitted that no solution was 100% and, yes, there were ways to mount a successful attack even with her recommendations in place. At that point, the questioners sat back in their seats, triumphant – they had won. There seemed little interest in continuing the conversation to figure out ways to minimize the remaining risk, and their body language suggested that they had mentally discounted everything that had been said.
I was a little disappointed by this. Some S&R pros seem to treat information security as an academic exercise, a challenge where the best argument wins and security is a mere footnote. These folk are often also the ones who overreact to very complex, and very unlikely, technical threat scenarios while overlooking behaviors and processes that may be fundamentally flawed. They appear unhappy with any security solution that isn’t perfect. I had hoped that we all recognized that good security is not about hitting a home run; it’s much more about applying the 80/20 rule over and over again, iteratively reducing the risk to the organization.
A few months ago I shared a flight with a very pleasant lady from a European regulatory body. After shoulder surfing her papers and seeing we were both interested in information security (ironic paradox acknowledged!) we had a long chat about how enterprises could stand a chance against the hacktivist and criminal hordes so intent on stealing their data.
My flight-buddy felt that the future lay in open and honest sharing between organisations – i.e., when one is hacked, it would immediately share details of both the breach and the method with its peers and the wider industry; this would allow the group to look for similar exploits and prepare to deflect similar attacks. Being somewhat cynical, and having worked in industry, I felt that such a concept was idealised and that organisations would refuse to share such information for fear of reputational or brand damage – she acknowledged that it was proving tougher than she had expected to get her organisations to join in with this voluntary disclosure!
Across the US and Europe we are seeing a move toward ‘mandatory’ breach disclosure; however, the two regimes have seemingly disparate intentions. US requirements focus on breaches that may impact an organisation’s financial condition or integrity, whilst EU breach notification is very focussed on cases where there may have been an exposure of personal data. Neither of these seems to be pushing us toward this nirvana of ‘collaborative protection’.
In the UK, I’m aware that certain organizations, within specific sectors, will share information within their small closed communities; unfortunately, this is not widespread and certainly does not reflect the concept of ‘open and honest’ sharing my flight-buddy would have envisaged.
Security threats develop and evolve with startling rapidity, with the attackers always seeking to stay one step ahead of the S&R professional. The agility of our aggressors is understandable; they do not have the same service-focused restrictions that most organizations have, and they seek to find and exploit individual weaknesses in the vast sea of interconnecting technology that is our computing infrastructure.
If we are to stand a chance of breaking even in this game, we have to learn our lessons and ensure that we don’t repeat the same mistakes over and over. Unfortunately, it is alarmingly common to see well-known vulnerabilities and weaknesses being baked right into new applications and systems – just as if the past five years had never happened!
A recent report released by Alex Hopkins of Context Information Security shines a light on the vulnerabilities they discovered while testing almost 600 pre-release web applications during 2011. The headlines for me were:
On average, the number of issues discovered per application is on the rise.
Two-thirds of web applications were affected by cross site scripting (XSS).
Nearly one in five web applications were vulnerable to SQL injection.
It makes depressing reading, but I’m interested in why this situation is occurring:
Are S&R professionals simply not educating and guiding application developers?
Are application developers ignoring the training and education? Are we teaching them the wrong things or do we struggle to explain the threats from XSS and SQL injection?
Are our internal testing regimes failing, allowing flawed code to reach release candidate stage?
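The XSS and SQL injection figures above are especially frustrating because the fixes are well understood. As a minimal sketch (the table and variable names here are illustrative, not taken from the Context report), here is the vulnerable pattern contrasted with the safe one, using only Python’s standard library:

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, bio TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hello')")

user_input = "alice' OR '1'='1"  # a classic SQL injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query,
# so the WHERE clause matches every row.
query = "SELECT name FROM users WHERE name = '%s'" % user_input
rows_vulnerable = conn.execute(query).fetchall()

# SAFE: a parameterized query treats the payload as a literal value,
# so it matches nothing.
rows_safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(rows_vulnerable)  # [('alice',)] -- the injection succeeded
print(rows_safe)        # [] -- the payload is just an odd username

# XSS follows the same principle: escape untrusted data before
# embedding it in HTML output.
comment = '<script>alert("xss")</script>'
print(html.escape(comment))
```

The point of the sketch is that neither defence requires exotic tooling – parameterized queries and output escaping ship with every mainstream platform, which makes their absence in two-thirds of tested applications all the more telling.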
I was reading an article recently which outlined the different agencies employed within the United Kingdom to protect against cyber-threats. Not including the armed forces, who would have specialist roles to play in any particular cyber-threat scenario, it transpires that there are 18(!) different players covering this space, each with overlapping strategies, policies and expenditure. The formal report, from the UK Government’s Intelligence & Security Committee, was wonderfully understated, speaking of "confusion and duplication of effort".
Such difficulties bring to mind the challenges we face in our global organizations, which are often made up of different corporate entities. Similar issues can afflict our security management functions – we overlap, overspend, and contradict one another – all to the detriment of the enterprise as a whole. Managing a global information security function in an optimal manner is no easy task; it takes careful planning, an understanding of essential roles & responsibilities, and the ability to manage some elements remotely.
I’ve recently published two papers relating to these very topics. If you are considering a reorganization, or just interested in what top performing security organizations look like right now, check out these links:
As much as the cloud computing model makes sense to me, my security sensibilities cry out about information risk every time I start to consider actual implementation for data of value across an enterprise.
A model which has always made sense has been to place only encrypted data in the cloud, holding the keys locally. This solution gives you control over data access, bypassing any Patriot Act concerns, but allows realization of the benefits of a shared, cloud infrastructure. It has always been recognized, however, that this solution has a number of drawbacks, such as:
The immense corporate sensitivity of the encryption keys utilised. These keys become essential to doing business. If they are corrupted, lost, or held hostage by hacktivists, for example, then the organization is dead in the water.
The difficulty of creating indexes, searching and applying transactions across encrypted data stores. If the concept is to keep the keys away from the cloud environment then actions such as indexing, searching or running database functions become very challenging.
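The “encrypt locally, store in the cloud” pattern can be sketched as follows. This is a toy illustration using only Python’s standard library – the HMAC-based keystream here stands in for a real cipher purely for demonstration; an actual deployment would use a vetted authenticated cipher (such as AES-GCM) from an audited cryptographic library:

```python
import hashlib
import hmac
import secrets

# Toy keystream cipher for illustration ONLY -- not production crypto.
# It XORs data with an HMAC-SHA256-derived keystream, so applying it
# twice with the same key and nonce recovers the plaintext.
def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# The key never leaves the organization; only ciphertext goes to the cloud.
local_key = secrets.token_bytes(32)   # held on-premises
nonce = secrets.token_bytes(16)       # stored alongside the ciphertext

record = b"customer: Jane Doe, account: 12345"
ciphertext = keystream_xor(local_key, nonce, record)  # upload this blob

# ...later, fetch the ciphertext back and decrypt locally:
plaintext = keystream_xor(local_key, nonce, ciphertext)
assert plaintext == record

# The second drawback above in action: the cloud provider sees only
# opaque bytes, so it cannot index, search, or run queries over the data.
```

The sketch also makes the first drawback tangible: lose `local_key` and the ciphertext sitting in the cloud is permanently unrecoverable.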
The USA PATRIOT Act (more commonly known as “the Patriot Act”) was signed into law by George W. Bush on October 26, 2001 as a response to the September 11 attacks. The title of the act (USA PATRIOT) is actually an acronym that stands for “Uniting (and) Strengthening America (by) Providing Appropriate Tools Required (to) Intercept (and) Obstruct Terrorism”. Many aspects of the Act were to expire in 2005; however, renewals and extensions mean that the Act is here for a while yet.
For Security & Risk Professionals, the Patriot Act comes up in conversation mostly with regard to data access. The Act suggests that the US government is able to gain access to data held on US soil, or even by a US firm outside US territory, without the data owner being notified; this is a significant concern when considering the adoption of cloud technology. EU-based organizations are concerned that utilizing cloud as part of their infrastructure will make their data accessible to the US government. In 2004, the Canadian government passed laws prohibiting the storage of citizens’ personal data outside their physical boundaries, and a recent news article suggested that one large UK defense contractor walked away from Microsoft’s Office 365 due to lack of assurances on data location.