Security threats evolve with startling rapidity, with attackers always seeking to stay one step ahead of the S&R professional. The agility of our aggressors is understandable; they do not have the same service-focused restrictions that most organizations have, and they seek to find and exploit individual weaknesses in the vast sea of interconnecting technology that is our computing infrastructure.
If we are to stand a chance of breaking even in this game, we have to learn our lessons and ensure that we don’t repeat the same mistakes over and over. Unfortunately, it is alarmingly common to see well-known vulnerabilities and weaknesses being baked right into new applications and systems, just as if the past five years had never happened!
A recent report released by Alex Hopkins of Context Information Security shines a light on the vulnerabilities the firm discovered while testing almost 600 pre-release web applications during 2011. The headlines for me were:
On average, the number of issues discovered per application is on the rise.
Two-thirds of web applications were affected by cross-site scripting (XSS).
Nearly one in five web applications were vulnerable to SQL injection.
It makes depressing reading, but I’m interested in why this situation is occurring:
Are S&R professionals simply not educating and guiding application developers?
Are application developers ignoring the training and education? Are we teaching them the wrong things, or do we struggle to explain the threats from XSS and SQL injection? (Both are illustrated in the sketch after this list.)
Are our internal testing regimes failing, allowing flawed code to reach release candidate stage?
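To show the kind of lesson that evidently still isn’t landing, here is a minimal, hypothetical Python sketch (standard library only; the table, field names, and payloads are invented for illustration) contrasting an injectable query with a parameterized one, plus the output-escaping step that blunts XSS:

```python
import html
import sqlite3

# Hypothetical in-memory database, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# VULNERABLE: user input is spliced directly into the SQL string,
# so the payload rewrites the query and returns every row.
rows = conn.execute(
    "SELECT email FROM users WHERE name = '%s'" % user_input
).fetchall()
print("injectable query returned:", rows)

# SAFER: a parameterized query treats the input purely as data,
# so the payload matches nothing.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)

# XSS is the same lesson applied to output: escape untrusted data
# before writing it into an HTML page.
comment = '<script>alert("xss")</script>'
print("escaped for HTML:", html.escape(comment))
```

The fix in both cases is the same discipline: never let untrusted input cross a trust boundary as code, only as data.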
I was reading an article recently which outlined the different agencies employed within the United Kingdom to protect against cyber-threats. Not including the armed forces, who would have specialist roles to play in any particular cyber-threat scenario, it transpires that there are 18(!) different players covering this space, each with overlapping strategies, policies and expenditure. The formal report, from the UK Government’s Intelligence & Security Committee, was wonderfully understated, speaking of "confusion and duplication of effort".
Such difficulties bring to mind the challenges we face in our global organizations, which are often made up of different corporate entities. Similar issues can afflict our security management functions: we overlap, overspend, and contradict one another, all to the detriment of the enterprise as a whole. Managing a global information security function in an optimal manner is no easy task; it takes careful planning, an understanding of essential roles and responsibilities, and the ability to manage some elements remotely.
I’ve recently published two papers relating to these very topics. If you are considering a reorganization, or just interested in what top-performing security organizations look like right now, check out these links:
As much as the cloud computing model makes sense to me, my security sensibilities cry out about information risk every time I consider an actual implementation for data of value across an enterprise.
A model that has always made sense is to place only encrypted data in the cloud, holding the keys locally. This approach gives you control over data access, bypassing any Patriot Act concerns, while still allowing you to realize the benefits of a shared cloud infrastructure (a minimal sketch follows the list below). It has always been recognized, however, that this solution has a number of drawbacks, such as:
The immense corporate sensitivity of the encryption keys utilized. These keys become essential to doing business; if they are corrupted, lost, or held hostage by hacktivists, for example, then the organization is dead in the water.
The difficulty of indexing, searching, and applying transactions across encrypted data stores. If the whole point is to keep the keys away from the cloud environment, then operations such as indexing, searching, or running database functions against the data become very challenging.
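To make the pattern concrete, here is a minimal sketch of the encrypt-locally model. It assumes the open-source Python cryptography package (pip install cryptography) and stubs the cloud provider with a plain dictionary; the record names and data are invented for illustration:

```python
from cryptography.fernet import Fernet

# The key never leaves the local environment. Losing or corrupting it
# renders every record below unrecoverable, which is exactly the first
# drawback noted above: the keys become essential to doing business.
local_key = Fernet.generate_key()
cipher = Fernet(local_key)

cloud_store = {}  # stand-in for a cloud object store or database


def put(record_id: str, plaintext: bytes) -> None:
    """Encrypt locally, then hand only ciphertext to the cloud."""
    cloud_store[record_id] = cipher.encrypt(plaintext)


def get(record_id: str) -> bytes:
    """Fetch ciphertext from the cloud and decrypt locally."""
    return cipher.decrypt(cloud_store[record_id])


put("customer-42", b"Name: Alice; Card: 4111-1111-1111-1111")

# The provider (and anyone who compels it) sees only ciphertext...
print("cloud holds:", cloud_store["customer-42"][:24], b"...")

# ...while the local key holder can still recover the data.
print("key holder reads:", get("customer-42"))
```

The trade-off is deliberate: because the provider only ever holds ciphertext, a demand served on it yields nothing readable, but for exactly the same reason the provider can offer no server-side indexing or search, which is the second drawback listed above.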
The USA PATRIOT Act (more commonly known as “the Patriot Act”) was signed into law by George W. Bush on October 26, 2001 as a response to the September 11 attacks. The title of the act (USA PATRIOT) is actually an acronym that stands for “Uniting (and) Strengthening America (by) Providing Appropriate Tools Required (to) Intercept (and) Obstruct Terrorism”. Many aspects of the Act were to expire in 2005; however, renewals and extensions mean that the Act is here for a while yet.
For Security & Risk Professionals, the Patriot Act comes up in conversation mostly with regard to data access. The Act suggests that the US government is able to gain access to data held on US soil, or even by a US firm outside US territory, without the data owner being notified; this is of significant concern when it comes to considerations around the adoption of cloud technology. EU-based organizations are concerned that utilizing cloud as part of their infrastructure will make their data accessible to the US government. In 2004, the Canadian government passed laws prohibiting the storage of citizens’ personal data outside their physical boundaries, and a recent news article suggested that one large UK defense contractor walked away from Microsoft’s Office 365 due to lack of assurances on data location.
It’s interesting how many threads there are on the Internet that still debate the difference between these two words: “responsible” and “accountable.” Oddly enough, today I stumbled across two definitions, from seemingly respectable sources, that hold diametrically opposite views! To me, the answer is simple: you can delegate responsibility, but accountability remains fixed.
This is a key point in the extended enterprises in which we now function. Firms are now made up of a myriad of offshore and outsourced services, running on systems that are similarly fragmented and distributed across vendors. This complex tangle of people and data represents a huge challenge to the CISO, who remains accountable for the security, and often the compliance, of his employer yet is no longer responsible for their provision.
With a methodical and comprehensive process and a surfeit of resources (please stop laughing at the back!), the CISO does, however, have the ability to follow the data trails and manage down the risk. Unfortunately, with the advent of cloud, things are taking a turn for the worse. Cloud vendors are reluctant to be scrutinized, and the security and compliance demands of the CISO often go unanswered. If cloud really is to be a mainstay of computing in the future, something has to give: we need to find a balance where compliance and security assurance requirements are met without fatally undermining the cloud model. This is a key topic for 2012 and something we’ll be following with interest.
As security professionals, we remain accountable for resolving these issues, no matter how much responsibility has been pushed to third parties and cloud vendors. So, how do you minimize the workload involved in managing the third parties that make up your extended enterprise, and how do you gain assurance around cloud vendors?
The cyberinsurance market today represents only a tiny segment of the overall insurance industry, and a recent Forrester paper on the topic identified that only a very small percentage of organizations that have purchased business insurance have also purchased cyberinsurance. Many insurance companies, however, are now estimating a period of significant growth in this area, and recent conversations suggest that more companies are either interested in this coverage or have recently purchased such policies.
I'm interested to know where your organization sits on this topic. If you have a minute, please respond to our short poll on the topic.
The importance of data security throughout the supply chain is something we have all considered, but Greg Schaffer, acting deputy undersecretary of the National Protection and Programs Directorate at the Department of Homeland Security, recently acknowledged finding instances where vulnerabilities and backdoors have been deliberately placed into hardware and software. This is not a risk that hasn’t been pondered before; in 1995, we watched Sandra Bullock address this very issue in “The Net.” However, the startling realism of Mr. Schaffer’s admission means that it can no longer be categorized as “Hollywood hacking” or a future risk.
The potential impact of such backdoors is terrifying; it is easy to imagine crucial response systems being remotely disabled at critical points in the name of financial or political advantage.
If we are dedicated to the security of our data, we must consider how to transform our due diligence process for any new product or service. How much trust can we put in any technology solution whose components often originate from lowest-cost providers situated in territories recognized to have an interest in overseas corporate secrets? We stand a chance of finding a keylogger when it’s inserted as malware, but if it’s built into a laptop’s chipset, that’s an entirely different challenge… Do we, as a security community, react to this and change our behavior now? Or do we wait until the risk becomes more apparent and widely documented? Even then, how do we counter this threat without blowing our whole annual budget on penetration testing for every tiny component and subroutine? Where is the pragmatic line here?