New social media scams and marketing #fails are common fodder for water cooler banter today – even a recent episode of HBO’s Veep ran a joke where the President blames a Chinese cyberattack for sending an ill-advised tweet.
Cybersecurity requires a specialized skillset and a lot of manual work. We depend on the knowledge of our security analysts to recognize and stop threats. To do their work, they need information. Some of that information can be found internally in device logs, network metadata or scan results. Analysts may also look outside the organization at threat intelligence feeds, security blogs, social media sites, threat reports and other resources for information.
This takes a lot of time.
Security analysts are expensive resources. In many organizations, they are overwhelmed with work. Alerts are triaged, so that only the most serious get worked. Many alerts don’t get worked at all. That means that some security incidents are never investigated, leaving gaps in threat detection.
This is not new information for security pros. They get reminded of this every time they read an industry news article, attend a security conference or listen to a vendor presentation. We know there are not enough trained security professionals available to fill the open positions.
Since the start of the Industrial Revolution, we have strived to find technical answers to our labor problems. Much manual labor was replaced with machines, making production faster and more efficient.
Advances in artificial intelligence and robotics are now making it possible for humans and machines to work side-by-side. This is happening now on factory floors all over the world. Now, it’s coming to a new production facility, the security operations center (SOC).
Today, IBM announced a new initiative to use their cognitive computing technology, Watson, for cybersecurity. Watson for Cyber Security promises to give security analysts a new resource for detecting, investigating and responding to security threats.
One of the S&R team’s newest additions, Principal Analyst Jeff Pollard comes to Forrester after many years at major security services firms. His research guides client initiatives related to managed security services, security outsourcing, and security economics, as well as integrating security services into operational workflows, incident response processes, threat intelligence applications, and business requirements. Jeff is already racking up briefings and client inquiries, so get on his schedule while you still can! (As a side note, while incident response is generally not funny, Jeff is. He would be at least a strong 3 seed in a hypothetical Forrester Analyst Laugh-Off tournament. Vegas has approved that seeding.)
Prior to joining Forrester, Jeff served as a global architect at Verizon, Dell SecureWorks, and Mandiant, working with the world's largest organizations in financial services, telecommunications, media, and defense. In those roles he helped clients fuse managed security and professional services engagements in security monitoring, security management, red teams, penetration testing, OSINT, forensics, and application security.
Whether it’s traditional service providers transitioning to offer cloud services, the emergence of containers within hyperconverged solutions, or the potential of Google succeeding in the market, the public cloud is set for a year of “hyper-growth”! That said, we have to sort through the FUD (fear, uncertainty and doubt), especially in security, to determine the appropriateness of public cloud for your organization.
Is the low-hanging cloud fruit eaten?
The rush to cloud to date has clearly been within “systems of innovation,” applications geared mostly to customer engagement (so-called “systems of engagement”). Enterprises leveraging public cloud are looking to get new, innovative applications and services to market rapidly. These applications have primarily been driving customer acquisition and then fostering customer loyalty. These initiatives represent just the tip of the iceberg; the real opportunity is in moving “systems of record,” or everyday work, to the public cloud.
Newly minted Vice President and Principal Analyst, Rick Holland, is one of the most senior analysts on our research team. But for those of you who haven’t had the opportunity to get to know him, Rick started his career as an intelligence analyst in the U.S. Army, and he went on to hold a variety of security engineer, administrator, and strategy positions outside of the military before arriving at Forrester. His research focuses on incident response, threat intelligence, vulnerability management, email and web content security, and virtualization security. Rick regularly speaks at security events including the RSA conference and SANS summits and is frequently quoted in the media. He also guest lectures at his alma mater, the University of Texas at Dallas.
Rick holds a B.S. in business administration with an MIS concentration (cum laude) from the University of Texas at Dallas. Rick is a Certified Information Systems Security Professional (CISSP), a Certified Information Systems Auditor (CISA), and a GIAC Certified Incident Handler (GCIH).
For years cybersecurity professionals have struggled to adequately track their detection and response capabilities. We use metrics like Mean Time to Detection, Mean Time to Containment and Mean Time to Recovery. I wanted to introduce an additional way to track your ability to detect and respond to "sophisticated" adversaries: Mean Time Before CEO Apologizes (MTBCA). Tripwire’s Tim Erlin had another amusing metric: Mean Time To Free Credit Monitoring (MTTFCM).
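As a minimal sketch, mean-time metrics like these are just averages of elapsed time between incident milestones. The record structure and field names below are illustrative assumptions, not from any particular tool:

```python
# Hypothetical sketch: computing mean-time metrics from incident records.
# The incident data and field names ("occurred", "detected", "contained")
# are made up for illustration.
from datetime import datetime
from statistics import mean

incidents = [
    {"occurred": datetime(2014, 1, 1), "detected": datetime(2014, 1, 31),
     "contained": datetime(2014, 2, 5)},
    {"occurred": datetime(2014, 3, 1), "detected": datetime(2014, 3, 11),
     "contained": datetime(2014, 3, 12)},
]

def mean_days(records, start_key, end_key):
    """Mean elapsed days between two milestones across all incidents."""
    return mean((r[end_key] - r[start_key]).days for r in records)

mttd = mean_days(incidents, "occurred", "detected")   # Mean Time to Detection
mttc = mean_days(incidents, "occurred", "contained")  # Mean Time to Containment
print(f"MTTD: {mttd} days, MTTC: {mttc} days")
```

MTBCA would work the same way, if you cared to log the timestamp of the CEO's press release alongside the rest of the incident timeline.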
Here are some examples (there are countless others) that illustrate the pain associated with MTBCA:
Your CEO doesn't want to have to deliver a somber apology to your customers, just like you don't want to have to inform senior management that a "sophisticated attack" was used to compromise your environment. Some of these attacks may well have been sophisticated, but I'm always skeptical. In many cases I think "sophisticated" is used to deflect responsibility. For more on that, check out "The Millennium Falcon And Breach Responsibility."
Critical infrastructure is frequently on my mind, especially the ICS/SCADA within the energy sector. I live in Texas; oil and natural gas are big here, y'all. I'm just a short distance away from multiple natural gas drilling sites, and I cannot help but think about the risks during the extraction and transport of this natural gas. North Texas has seen an attempt to bomb the natural gas infrastructure: in 2012, Anson Chi attempted to destroy an Atmos Energy pipeline in Plano, Texas. As a security and risk professional, I wonder about the potential cyber impacts an adversary with Chi's motivations could have.
Fifty organizations representing 95 countries were included in the data set. This included 1,367 confirmed data breaches. By comparison, last year’s report included 19 organizations and 621 confirmed data breaches.
In a significant change, Verizon expanded the analysis beyond breaches to include security incidents. As a result, this year’s dataset has 63,437 incidents. This is a great change: it recognizes that incidents are about more than just data exfiltration, and it allows security incidents like DoS attacks to be included.
The structure of the report itself has also evolved; it is no longer threat overview, actors, actions and so on. One of the drivers for this format change was an astounding discovery. Verizon found that over the past 10 years, 92% of all incidents they analyzed could be described by just nine attack patterns. The 2014 report is structured around these nine attack patterns.
Before joining Forrester, I ran my own consulting firm. No matter how ridiculous the problem or how complicated the solution, when a client would ask if I could help, I would say yes. Some people might say I was helpful, but I was in an overconfidence trap. There was always a voice in the back of my mind saying, “How hard could it be?” Think of the havoc that kind of trap can wreak on a risk management program. If any part of the risk program is qualitative, and you are an overconfident person, your risk assessments will be skewed. If you are in an overconfidence trap, force yourself to estimate the extremes and imagine the scenarios where those extremes can happen. This will help you recognize when you are being overconfident and allow you to find the happy medium.
Have you ever padded the budget of a project “just to be safe”? I hate to tell you this, but you are in the prudence trap. By padding the project budget, you are anticipating an unknown. Many other managers in your company may be using the same “strategy.” But the next time you do a project like this, you will pad the budget again, because the inherent uncertainty is still there. The easiest way to keep your risk management program out of the prudence trap is to never adjust your risk assessments to be “on the safe side.” There is nothing safe about using a psychological trap to predict risk.