On May 5, 2014, Target announced the resignation of its CEO, Gregg Steinhafel, in large part because of the massive and embarrassing customer data breach that occurred just before the 2013 U.S. holiday season kicked into high gear. After a security breach or incident, the CISO (or whoever is in charge of security) or the CIO, or both, are usually axed. Someone’s head has to roll. But the resignation of the CEO is unusual, and I believe this marks an important turning point in the visibility, prioritization, importance, and funding of information security. It’s an indication of just how much:
Security directly affects the top and bottom line. Early estimates of the cost of Target's 2013 holiday security breach indicate a potential customer churn of 1% to 5%, representing anywhere from $30 million to $150 million in lost net income. Target's stock fell 11% after it disclosed the breach in mid-December, but investors pushed shares up nearly 7% on the news of recovering sales. In February 2014, the company reported a 46% decline in profits due to the security breach.
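The churn math above is simple enough to sketch as a back-of-the-envelope model. The $3 billion net income base below is a hypothetical round figure, chosen only because it reproduces the $30 million to $150 million range cited above; the churn percentages come from the post:

```python
# Back-of-envelope model of breach-driven income loss from customer churn.
# All inputs are illustrative assumptions, not Target's actual financials.

def lost_net_income(annual_net_income, churn_low, churn_high):
    """Return the (low, high) range of net income lost to customer churn."""
    return (annual_net_income * churn_low, annual_net_income * churn_high)

# Hypothetical $3B net income base, 1%-5% churn range from the estimates above:
low, high = lost_net_income(3_000_000_000, 0.01, 0.05)
print(f"${low / 1e6:.0f}M to ${high / 1e6:.0f}M in lost net income")
```

The point of the exercise isn't precision; it's that even single-digit churn translates directly into nine-figure losses at retail scale.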
Poor security will tank your reputation. The last thing Target needed was to be a permanent fixture of the 24-hour news cycle during the holiday season. Sure, like other breached companies, Target’s reputation will likely bounce back, but it will take a lot of communication, investment, and other effort to regain customer trust. The company announced last week that it will spend $100 million to adopt chip-and-PIN technology.
It takes a lot more than a static analysis tool, a web scanning service, and a few paid hackers to make your mobile development lifecycle, team, and eventually, your applications secure. Finding flaws in an individual mobile application is easy (assuming you have the right technical skill set). What is a lot harder is actually stopping the creation of mobile application security flaws in the first place.
Achieving the lofty goal of a truly secure mobile application development program requires rethinking how we have traditionally secured our applications. Mobile development brings many changes to enterprise engineering teams, including new device sensors, privacy-impacting behaviors that cross the security chasm between consumer and enterprise isolation, and even faster release cycles on the order of days instead of months. Smaller teams with little to no security experience are cranking out mobile applications at a feverish pace. The result is an accumulation of security debt that will eventually be paid by the enterprises and consumers that use these applications.
In a research world where we collect data on security technology (and services!) adoption, security spending, workforce attitudes about security, and more, there’s one type of data that Forrester clients ask me about in inquiries that makes me pause: breach cost data. I pause not because we don’t have it, but because it’s pretty useless for what S&R pros want to use it for (usually to justify investment). Here’s why:
What we see, and what is publicly available data, is not a complete picture. In fact, it’s often a tiny sliver of the actual costs incurred, or an estimate of a part of the cost that an organization opts to reveal.
What an organization may know or estimate as the cost (assuming they have done a cost analysis, which is also rare), and do not have to share, is typically not shared. After all, they would like to put this behind them as quickly as possible, and not draw further unnecessary attention.
What an organization may believe is an accurate estimate of the cost can change over time as events related to the breach crop up. For example, in the case of the Sony PlayStation Network hack in April 2011, most costs were incurred in the weeks and months following the breach, but Sony was still being hit with fines related to the breach in 2013. In other breaches, legal actions and settlements can drag on over the course of many years.
Fifty organizations representing 95 countries were included in the data set. This included 1,367 confirmed data breaches. By comparison, last year’s report included 19 organizations and 621 confirmed data breaches.
In a significant change, Verizon expanded the analysis beyond breaches to include security incidents. As a result, this year’s dataset has 63,437 incidents. This is a great change: it recognizes that incidents are about more than just data exfiltration, and it also allows security incidents like DoS attacks to be included.
The structure of the report itself has also evolved; it is no longer organized by threat overview, actors, actions, and so on. One of the drivers for this format change was an astounding discovery: Verizon found that over the past 10 years, 92% of all incidents it analyzed could be described by just nine attack patterns. The 2014 report is structured around these nine attack patterns.
Everyone makes mistakes, but for social media teams, one wrong click can mean catastrophe. @USAirways experienced this yesterday when it responded to a customer complaint on Twitter with a pornographic image, an error that quickly escalated into every social media manager’s worst nightmare.
Not only is this one of the most obscene social media #fails to date, but the marketers operating the airline’s Twitter handle left the post online for close to an hour. In the age of social media, it might as well have remained up there for a decade. Regardless of how or why this happened, this event immediately paints a picture of incompetence at US Airways, as well as at the newly merged American Airlines brand.
It also indicates a lack of effective oversight and governance.
While details are still emerging, initial reports indicate that human error was the cause of the errant US Airways tweet, which likely means it was a copy-and-paste mistake, or the image was saved incorrectly and selected from the wrong stream. In any case, basic controls could have prevented this brand disaster:
US Airways could have built a process where all outgoing posts that contain an image must be reviewed by a secondary reviewer or manager;
It could have segregated its social content library so that posts flagged for spam don’t appear for outgoing posts;
It could have leveraged technology that previews the full post and image before publishing.
Next month Forrester will publish research focusing on the role the customer plays in security planning. Customer attitudes are changing, and companies need to recognize these changes or risk losing customers. These changes put enormous attention on the CISO and the security team. But CISOs should also see this as a big opportunity to move from the back office to the front office. Security incidents, managed well, can actually enhance customer perceptions of a company; managed poorly, they can be devastating. If customers lose trust in a company because of the way the business handles personal data and privacy, they will easily take their business elsewhere. Sales will fall, stock prices will follow, and the CISO will be held accountable. CISOs need to improve their security programs by focusing on the company’s true customers – the ones that create revenue – clarifying and speeding communications, and implementing customer-focused security controls. Look for it next month!
Some of the highlights in case you haven't read it yet:
Six months before the incident, Target invested $1.6 million in FireEye technology.
Target had a team of security specialists in Bangalore monitoring the environment.
On Saturday, November 30, FireEye identified and alerted on the exfiltration malware. By all accounts this wasn't sophisticated malware; the article states that even Symantec Endpoint Protection detected it.
In this age of the customer, there is nothing more important than the effective and safe operation of the global financial system. Trillions of dollars move around the world because of a well-oiled financial services system. Most consumers take our financial services system for granted. They get paid, have the money direct deposited into their accounts, pay bills, use their ATM cards to get cash, and put family valuables in the safe deposit box. The consumer’s assumption is that their cash, investments, and valuables are safe.
Symantec’s 2014 CyberWar Games set out to prove or disprove these assumptions. Symantec’s cyberwar event is the brainchild of Samir Kapuria, a Symantec vice president within the Information Security Group. Symantec structures the event as a series of playoff rounds. Teams form and compete, earning points for creating and discovering exploits. Out of this process, the ten best teams travel to Symantec’s Mountain View, California, headquarters to compete in the finals.
Last week I attended the RSA Conference (RSAC) Innovation Sandbox for the first time. Not only was I an attendee, but I was also fortunate enough to host a CTO panel during the event. For those who aren’t aware, the Innovation Sandbox is one of the more popular programs of the RSAC week. Its highlight is the competition for the coveted “Most Innovative Company at the RSA Conference” award. This is basically the information security version of ABC’s Shark Tank. If you want to learn about the up-and-coming vendors and technologies, this is one place to do it. To participate, companies had to meet the following criteria:
The product has been in the market for less than one year (launched after February 2013).
The company must be privately held, with less than $5M in revenue in 2013.
The product has the potential to make a significant impact on the information security space.
The product can be demonstrated live and on-site during Innovation Sandbox.
The company has a management team that has proven successful in the delivery of products to market.
January 28th was the anniversary of the Space Shuttle Challenger disaster. The Rogers Commission detailed the official account of the disaster, laying bare all of the failures that led to the loss of a shuttle and its crew. Officially known as the Report of the Presidential Commission on the Space Shuttle Challenger Accident, covering the tragedy of Mission 51-L, the report is five volumes long and covers every possible angle, from how NASA chose its vendors to the psychological traps that plagued the decision making leading up to that fateful morning. There are many lessons to be learned in those five volumes, and here I am going to share the ones that had the greatest impact on my approach to risk management. The first is the lesson of overconfidence.
In the late 1970s, NASA was assessing the likelihood and risk associated with the catastrophic loss of its new, reusable orbiter. NASA commissioned a study which, based on NASA’s prior launch history, estimated the chance of a catastrophic failure at approximately once every 24 launches. NASA, which was planning to use several shuttles carrying paying payloads to help fund the program, decided that the number was too conservative. It then asked the United States Air Force (USAF) to re-perform the study. The USAF concluded that the likelihood was once every 52 launches.
In the end, NASA believed that, because of the lessons learned since the moon missions and the advances in technology, the true likelihood of an event was 1 in 100,000 launches. Think about that: at that rate, it would be over 4,100 years before there would be a catastrophic event. In reality, Challenger flew 10 missions before its catastrophic event, and Columbia flew 28 missions before its catastrophic event during reentry, after the loss of heat tiles during takeoff. Over the life of a program that lasted 30 years, NASA lost two of its five shuttles.
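The gulf between those three estimates is easier to appreciate with a little arithmetic. A minimal sketch, assuming roughly 24 launches per year (the rate implied by the "over 4,100 years" figure; the per-launch odds come from the estimates above):

```python
# Illustrative arithmetic comparing the three shuttle-loss estimates cited
# above. The 24-launches-per-year rate is an assumption, chosen because it
# is the rate implied by the "over 4,100 years" figure.

LAUNCHES_PER_YEAR = 24

def years_until_failure(launches_per_failure, launches_per_year=LAUNCHES_PER_YEAR):
    """Expected years until one catastrophic loss at the given odds."""
    return launches_per_failure / launches_per_year

estimates = {
    "NASA-commissioned study": 24,    # 1 loss per 24 launches
    "USAF re-study": 52,              # 1 loss per 52 launches
    "NASA's stated belief": 100_000,  # 1 loss per 100,000 launches
}

for source, odds in estimates.items():
    print(f"{source}: 1 in {odds:,} launches "
          f"(~{years_until_failure(odds):,.0f} years)")
```

Laid side by side, the commissioned studies predicted a loss within a year or two of steady flying, while NASA's own belief pushed the expected loss out past four millennia; the actual record of two losses in 30 years sits far closer to the studies than to the belief.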