It takes a lot more than a static analysis tool, a web scanning service, and a few paid hackers to make your mobile development lifecycle, team, and, eventually, your applications secure. Finding flaws in an individual mobile application is easy (assuming you have the right technical skill set). What is far harder is preventing those security flaws from being created in the first place.
Achieving the lofty goal of a truly secure mobile application development program requires rethinking how we have traditionally secured our applications. Mobile development brings many changes to enterprise engineering teams, including new device sensors, privacy-impacting behaviors that cross the security chasm between consumer and enterprise isolation, and far faster release cycles, on the order of days instead of months. Smaller teams with little to no security experience are cranking out mobile applications at a feverish pace. The result is an accumulation of security debt that will eventually be paid by the enterprises and consumers that use these applications.
On April 8, 2014, Microsoft stopped technical support for Windows XP; XP customers will no longer receive security or technical updates, hotfixes, or free or paid assistance. Microsoft statistics show that around 25% of PCs in Asia Pacific still run XP. Asia Pacific enterprises haven’t migrated away from XP because:
Technology management departments haven’t communicated the need well enough and thus haven’t received the funding necessary to migrate to Windows 7 or 8.
Many firms rely on legacy applications that run on XP and are often incompatible with the latest versions of Windows. For example, an Australia-based oil and gas exploration firm faced application compatibility issues when migrating from XP to Windows 7.
Some enterprises underestimated the work required to migrate to a new OS and are still only partway through their migration projects.
In a research world where we collect data on security technology (and services!) adoption, security spending, workforce attitudes about security, and more, there’s one type of data that Forrester clients ask me about in inquiries that makes me pause: breach cost data. I pause not because we don’t have it, but because it’s pretty useless for what S&R pros want to use it for (usually to justify investment). Here’s why:
What we see, and what is publicly available data, is not a complete picture. In fact, it’s often a tiny sliver of the actual costs incurred, or an estimate of a part of the cost that an organization opts to reveal.
What an organization may know or estimate as the cost (assuming it has done a cost analysis, which is rare) does not have to be shared, and typically isn’t. After all, organizations would like to put a breach behind them as quickly as possible, not draw further unnecessary attention to it.
What an organization believes is a reasonable estimate of the cost can change over time as events related to the breach crop up. For example, in the case of the Sony PlayStation Network hack in April 2011, much of the cost was incurred in the weeks and months following the breach, but Sony was still being hit with fines related to it in 2013. In other breaches, legal actions and settlements can drag out over many years.
Fifty organizations representing 95 countries were included in the data set. This included 1,367 confirmed data breaches. By comparison, last year’s report included 19 organizations and 621 confirmed data breaches.
In a significant change, Verizon expanded the analysis beyond breaches to include security incidents. As a result, this year’s dataset has 63,437 incidents. This is a great change: it recognizes that incidents are about more than just data exfiltration, and it allows security incidents like DoS attacks to be included.
The structure of the report itself has also evolved; it is no longer organized by threat overview, actors, actions, and so on. One of the drivers for this format change was an astounding discovery: Verizon found that over the past 10 years, 92% of all incidents it analyzed could be described by just nine attack patterns. The 2014 report is structured around these nine attack patterns.
Everyone makes mistakes, but for social media teams, one wrong click can mean catastrophe. @USAirways experienced this yesterday when it responded to a customer complaint on Twitter with a pornographic image; the post quickly escalated into every social media manager’s worst nightmare.
Not only is this one of the most obscene social media #fails to date, but the marketers operating the airline’s Twitter handle left the post online for close to an hour. In the age of social media, it might as well have remained up for a decade. Regardless of how or why it happened, the event immediately paints a picture of incompetence at US Airways, as well as at the newly merged American Airlines brand.
It also indicates a lack of effective oversight and governance.
While details are still emerging, initial reports indicate that human error caused the errant US Airways tweet, which likely means it was a copy-and-paste mistake, or the image was saved incorrectly and selected from the wrong stream. In any case, basic controls could have prevented this brand disaster:
US Airways could have built a process in which every outgoing post containing an image must be reviewed by a second person or a manager;
It could have segregated its social content library so that items flagged as spam can’t be selected for outgoing posts;
It could have leveraged technology that previews the full post and image before publishing.