The new revolution in apps and social media continues at a stunning rate. Nearly every day a colleague tells me of another app or site that is bubbling up and about to hit the big time. Many will not break through, but some will capture the imagination and become the next generation of YouTube and Facebook.
The behaviour of certain apps and sites, however, gives me some cause for concern. As a recent entrant to Pinterest, I was alarmed to note that the site takes a copy of each pinned image and serves it from its own servers. The burden of managing copyright issues seems to sit firmly with the users, most of whom never give such law a second thought. There is a method for removing content; unsurprisingly, however, it's not half as simple as pinning new content. Pinterest's terms and conditions are also interesting, giving it "irrevocable, perpetual, royalty-free" permission to "exploit" member content.
The Pinterest site is building its value on other people's content — which is fine as long as those people have consented. I recently looked at some interesting infographics pinned on the site, all of which must have taken considerable resources to put together, yet I never once needed to visit the source sites: visits that might have generated the advertising income vital to enabling their creators to continue their work. I wonder if they even realize their content is available in this way?
Last night I attended a vendor presentation about cloud-based risk and the threat from nation-state attacks. Unfortunately, due to a busy schedule and a difficult journey, I arrived just as the final presentation moved to its Q&A stage. Listening to a Q&A session with no idea what the presentation had covered was actually quite an interesting experience, though not entirely for the right reasons. A section of the audience immediately dived into the detail and tried to find fault with the solutions that had evidently been outlined. They poked and prodded the presenter until she admitted that no solution was 100% and, yes, there were ways to mount a successful attack even with her recommendations in place. At that point, the questioners sat back in their seats, triumphant: they had won. There seemed little interest in continuing the conversation to figure out ways to minimize the remaining risk, and their body language suggested that they had mentally discounted everything that had been said.
I was a little disappointed by this. Some S&R pros seem to treat information security as an academic exercise, a challenge where the best argument wins and security is a mere footnote. These folks are often also the ones who overreact to very complex, and very unlikely, technical threat scenarios while overlooking behaviors and processes that may be fundamentally flawed. They appear unhappy with any security solution that isn't perfect. I had hoped that we all recognized that good security is not about hitting a home run; it's much more about applying the 80/20 rule over and over again, iteratively reducing the risk to the organization.
A few months ago I shared a flight with a very pleasant lady from a European regulatory body. After shoulder surfing her papers and seeing we were both interested in information security (ironic paradox acknowledged!) we had a long chat about how enterprises could stand a chance against the hacktivist and criminal hordes so intent on stealing their data.
My flight-buddy felt that the future lay in open and honest sharing between organisations — i.e., when one is hacked, it would immediately share details of both the breach and the method with its peers and the wider industry; this would allow the group to look for similar exploits and prepare to deflect similar attacks. Being somewhat cynical, and having worked in industry, I felt that such a concept was idealised and that organisations would refuse to share such information for fear of reputational or brand damage. She acknowledged that it was proving tougher than she had expected to get organisations to join in with this voluntary disclosure!
Across the US and Europe we are seeing a move toward 'mandatory' breach disclosure; however, the two regimes have seemingly disparate intentions. US requirements focus on breaches that may impact an organisation's financial condition or integrity, whilst EU breach notification is focused squarely on cases where personal data may have been exposed. Neither of these seems to be pushing us toward this nirvana of 'collaborative protection'.
In the UK, I'm aware that certain organizations, within specific sectors, will share information within their small closed communities; unfortunately, this is not widespread and certainly does not reflect the 'open and honest' concept my flight-buddy would have envisaged.
Security threats develop and evolve with startling rapidity, with the attackers always seeking to stay one step ahead of the S&R professional. The agility of our aggressors is understandable; they do not have the same service-focused restrictions that most organizations have, and they seek to find and exploit individual weaknesses in the vast sea of interconnecting technology that is our computing infrastructure.
If we are to stand a chance of breaking even in this game, we have to learn our lessons and ensure that we don't repeat the same mistakes over and over. Unfortunately, it is alarmingly common to see well-known vulnerabilities and weaknesses being baked right into new applications and systems, just as if the past five years had never happened!
A recent report released by Alex Hopkins of Context Information Security shines a light on the vulnerabilities they discovered while testing almost 600 pre-release web applications during 2011. The headlines for me were:
On average, the number of issues discovered per application is on the rise.
Two-thirds of web applications were affected by cross site scripting (XSS).
Nearly one in five web applications were vulnerable to SQL injection.
It makes depressing reading, but I’m interested in why this situation is occurring:
Are S&R professionals simply not educating and guiding application developers?
Are application developers ignoring the training and education? Are we teaching them the wrong things or do we struggle to explain the threats from XSS and SQL injection?
Are our internal testing regimes failing, allowing flawed code to reach release candidate stage?
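Whatever the answer, the fix for the two headline flaws is well understood. A minimal sketch in Python (the table, column, and payload values are hypothetical examples, not from the Context report) contrasts the vulnerable string-concatenation pattern with a parameterised query, and shows the XSS analogue of encoding output:

```python
import sqlite3
import html

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: attacker-controlled input is concatenated into the SQL text,
# so the payload rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    "SELECT email FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterised query treats the input as data, never as SQL,
# so the payload matches no user and returns nothing.
safe = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()

# The XSS analogue: encode user-supplied input before echoing it into HTML,
# so any embedded script tags render as inert text.
escaped = html.escape("<script>alert(1)</script>")
```

Neither defence is exotic; both have been standard guidance for years, which is what makes the Context figures so dispiriting.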
I was reading an article recently which outlined the different agencies employed within the United Kingdom to protect against cyber-threats. Not including the armed forces, who would have specialist roles to play in any particular cyber-threat scenario, it transpires that there are 18(!) different players covering this space, each with overlapping strategies, policies and expenditure. The formal report, from the UK Government’s Intelligence & Security Committee, was wonderfully understated, speaking of "confusion and duplication of effort".
Such difficulties bring to mind the challenges we face in our global organizations, which are often made up of different corporate entities. Similar issues can afflict our security management functions: we overlap, overspend, and contradict one another, all to the detriment of the enterprise as a whole. Managing a global information security function in an optimal manner is no easy task; it takes careful planning, an understanding of essential roles & responsibilities, and the ability to manage some elements remotely.
I’ve recently published two papers relating to these very topics. If you are considering a reorganization, or just interested in what top performing security organizations look like right now, check out these links:
As much as the cloud computing model makes sense to me, my security sensibilities cry out about information risk every time I start to consider actual implementation for data of value across an enterprise.
A model that has always made sense to me is to place only encrypted data in the cloud, holding the keys locally. This approach gives you control over data access, sidestepping any Patriot Act concerns, while still allowing you to realize the benefits of a shared cloud infrastructure. It has always been recognized, however, that this solution has a number of drawbacks, such as:
The immense corporate sensitivity of the encryption keys utilised. These keys become essential to doing business; if they are corrupted, lost, or held hostage by hacktivists, for example, then the organization is dead in the water.
The difficulty of creating indexes, searching and applying transactions across encrypted data stores. If the concept is to keep the keys away from the cloud environment then actions such as indexing, searching or running database functions become very challenging.
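The basic flow of this keys-stay-local model can be sketched in a few lines of Python. This is a toy illustration only: the SHA-256 keystream below stands in for the authenticated encryption (e.g. AES-GCM via a vetted library) any real deployment would use, and the "cloud" is just a variable:

```python
import os
import hashlib


def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256-derived keystream.
    Encryption and decryption are the same operation."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        stream.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))


local_key = os.urandom(32)  # never leaves the local environment
nonce = os.urandom(16)
record = b"customer-id=1234; balance=500.00"

# Only ciphertext is handed to the cloud provider, so the provider
# (or anyone compelling the provider) sees opaque bytes...
ciphertext_in_cloud = xor_cipher(local_key, nonce, record)

# ...which is also exactly why server-side indexing, searching, and
# database functions over the stored data become so challenging.
recovered = xor_cipher(local_key, nonce, ciphertext_in_cloud)
```

The two drawbacks above fall straight out of the sketch: lose `local_key` and `recovered` is unobtainable; keep `local_key` away from the cloud and the provider can do nothing useful with `ciphertext_in_cloud` beyond storing it.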
The USA PATRIOT Act (more commonly known as “the Patriot Act”) was signed into law by George W. Bush on October 26, 2001 as a response to the September 11 attacks. The title of the act (USA PATRIOT) is actually an acronym that stands for “Uniting (and) Strengthening America (by) Providing Appropriate Tools Required (to) Intercept (and) Obstruct Terrorism”. Many aspects of the Act were to expire in 2005; however, renewals and extensions mean that the Act is here for a while yet.
For Security & Risk Professionals, the Patriot Act comes up in conversation mostly with regard to data access. The Act suggests that the US government is able to gain access to data held on US soil, or even by a US firm outside US territory, without the data owner being notified; this is of significant concern when it comes to considerations around the adoption of cloud technology. EU-based organizations are concerned that utilizing cloud as part of their infrastructure will make their data accessible to the US government. In 2004, the Canadian government passed laws prohibiting the storage of citizens’ personal data outside their physical boundaries, and a recent news article suggested that one large UK defense contractor walked away from Microsoft’s Office 365 due to lack of assurances on data location.
It’s interesting how many threads there are on the Internet that still debate the difference between these two words: “responsible” and “accountable.” Oddly enough, today I stumbled across two definitions, from seemingly respectable sources, that hold diametrically opposite views! To me, the answer is simple – you can delegate responsibility, but accountability remains fixed.
This is a key point in the extended enterprises in which we now function. Firms are now made up of a myriad of offshore and outsourced services, running on systems that are similarly fragmented and distributed across vendors. This complex tangle of people and data represents a huge challenge to the CISO, who remains accountable for the security, and often the compliance, of his employer yet is no longer responsible for their provision.
With a methodical and comprehensive process and a surfeit of resource (please stop laughing at the back!), the CISO does, however, have the ability to follow the data trails and manage risk down in this regard. Unfortunately, with the advent of cloud, things are taking a turn for the worse. Cloud vendors are reluctant to be scrutinized, and the security and compliance demands of the CISO can often go unanswered. If cloud really is to be a mainstay of computing in the future, something has to give – we need to find a balance where compliance and security assurance requirements are met without fatally undermining the cloud model. This is a key topic for 2012 and something we’ll be following with interest.
As security professionals, we remain accountable for resolving these issues, no matter how much responsibility has been pushed to third parties and cloud vendors. So, how do you minimize the workload involved in managing the third parties that make up your extended enterprise, and how do you gain assurance around cloud vendors?
The cyberinsurance market today represents only a tiny segment of the overall insurance industry, and a recent Forrester paper on the topic identified that only a very small percentage of organizations that have purchased business insurance have also purchased cyberinsurance. Many insurance companies, however, are now estimating a period of significant growth in this area, and recent conversations suggest that more companies are either interested in this coverage or have recently purchased such policies.
I'm interested to know where your organization sits on this topic. If you have a minute, please respond to our short poll on the topic.
You can find the poll in the right column of this page, below the “About the Analyst” or “About this Blog” section.
The importance of data security throughout the supply chain is something we have all considered, but Greg Schaffer, acting deputy undersecretary of the National Protection and Programs Directorate at the Department of Homeland Security, recently acknowledged finding instances where vulnerabilities and backdoors have been deliberately placed into hardware and software. This risk has been pondered before: in 1995 we watched Sandra Bullock star in "The Net," which addressed this very issue. However, the stark realism of Mr. Schaffer's admission means that it can no longer be categorized as "Hollywood hacking" or a future risk.
The potential impact of such backdoors is terrifying; it is easy to imagine crucial response systems being remotely disabled at critical moments for financial or political advantage.
If we are dedicated to the security of our data, we must consider how to transform our due diligence process for any new product or service. How much trust can we put in any technology solution where many of the components originate from lowest cost providers situated in territories recognized to have an interest in overseas corporate secrets? We stand a chance of finding a keylogger when it’s inserted as malware, but if it’s built into the chipset on your laptop, that’s an entirely different challenge… Do we, as a security community, react to this and change our behavior now? Or do we wait until the risk becomes more apparent and widely documented? Even then, how do we counter this threat without blowing our whole annual budget on penetration testing for every tiny component and sub-routine? Where is the pragmatic line here?