We didn’t need to check the results of the MITRE ATT&CK Carbanak+FIN7 evaluation when they were released, since within minutes of the results going live, we already had an email from a vendor touting its MITRE ATT&CK prowess. This vendor stated it “dominated” the evaluation, except MITRE Engenuity doesn’t hand out rankings and awards (therefore, there is no #winning). Here’s what MITRE Engenuity says:

MITRE Engenuity does not assign scores, rankings, or ratings. The evaluation results are available to the public, so other organizations may provide their own analysis and interpretation — these are not endorsed or validated by MITRE Engenuity.

Other vendors claimed they had “eliminated the APT” or had “entered the ring again,” while a few others have been surprisingly quiet about the results. Vendor performance in the evaluation correlates rather strongly with the volume, language, and channels used to … enthusiastically mention results. And that’s a downside for end users trying to figure out what all this means.

All of this is to say: Don’t believe everything you read on the internet. Vendors that focus more of the conversation on beating their competitors than on enabling their customers do their end users a disservice.

Do The MITRE ATT&CK Evaluation Results Matter?

Yes.

… with caveats. You had to know that part was coming, right?

The problems with any evaluation like this are:

  • It’s not your environment.
    • The evaluation doesn’t absolve you of conducting your own testing and proofs of concept or value. High marks or “domination” of the results do not prove the tool will be effective given your infrastructure, your team, or your business goals.
  • It’s informative but not determinative.
    • Don’t buy anything based solely on this evaluation. See number one.
  • It’s focused on the TOOL.
    • It’s NOT focused on the experience. There are lots of great products poorly deployed, not deployed at all, misconfigured, or lacking the right visibility to be maximally effective.
    • It’s not inclusive of the vendor’s business. There are lots of great products out there that lack the documentation, customer service, funding, and market success to sustain them in the long run.
  • Timing.
    • Vendors know ahead of time which adversary they are going to be tested against. Given that the MITRE ATT&CK framework is publicly available, this gives vendors time to prepare before the evaluations based on the actions a specific threat actor is known to take. You would not have the benefit of this time to prepare the tool in your environment.
  • Perception and skepticism.
    • Vendors pay for these evaluations, and that makes everyone suspicious. It is pay to participate, not pay to play.

How To Use MITRE ATT&CK Evaluations Successfully

With all this said, the results do matter. Having an unbiased assessment of vendor capabilities gives security teams confidence in the tools they have deployed and the ones they are evaluating, or makes them reconsider those tools. It also gives security pros evidence to validate that the product they use is taking the right approach to solving the problem and showcases where gaps may exist. Over the last three years, we’ve seen some excellent uses of MITRE ATT&CK and some not-so-useful ones. Here’s our checklist for how to maximize the benefit you get from MITRE ATT&CK:

  1. This round of the results is very focused on financially motivated APTs, with Carbanak mainly targeting banks and FIN7 mainly targeting retail, restaurant, and hospitality. Ultimately, you want your solution to detect any and all threats, but this evaluation may be of particular interest to these sectors when it comes to the granularity of coverage and the timing of detection.
  2. At some point, users get involved, so the user experience matters. It doesn’t matter if a tool finds 100% of whatever is thrown at it; what matters is how your teams work with the tool. That has to remain a consideration. A tool your team is comfortable using every day will net you more gains than one that “found more” but that your analysts find hard to use.
  3. The results include an impressive number of screenshots of what the platforms actually looked like during this exercise, step by step. Drill down into individual results, and use these screenshots to get a better sense of how much information is presented to the analyst and how clearly it is presented.
  4. This work is not done in a vacuum. The MITRE ATT&CK team maintains a GitHub repo of adversary emulation plans for these and other previously tested APTs as a way to operationalize threat intelligence. Regardless of whether the products you use were tested, you can use these plans to test how your current capabilities would handle these threat actor scenarios. Direct your red team to this resource for more information.
  5. In an effort to make these evaluations more accessible, the MITRE ATT&CK team has created a resource that allows comparison of results between vendors, as well as a technique comparison tool. When it comes to actually comparing vendors, these two tools may be the most important resources available: They give you breakdowns of what each vendor detected and can be incredibly useful for teams that want to find which vendors prioritize visibility for specific techniques (see the short scripting sketch after this list for one way to slice that data).
  6. Our former colleague Josh Zelonis (@josh_zelonis) has also updated the code in his GitHub repository that provides a straightforward overview of the latest results; it now includes a downloadable spreadsheet, so you don’t have to run the code yourself.
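
If you pull the per-technique results into a flat file (from the comparison tools in point 5 or the spreadsheet in point 6), a little scripting lets you reframe the evaluation around your own threat model rather than a vendor’s marketing. The sketch below is a minimal example, assuming a CSV export with vendor, technique_id, and detection_category columns and a handful of placeholder technique IDs; none of those names come from the evaluation itself, so adjust them to whatever export you actually download.

    import csv
    from collections import defaultdict

    # ATT&CK techniques your organization prioritizes (placeholder IDs for illustration)
    PRIORITY_TECHNIQUES = {"T1059.001", "T1003.001", "T1055"}

    # vendor -> set of priority techniques where the vendor registered any detection
    coverage = defaultdict(set)

    # "carbanak_fin7_results.csv" and its column names are assumptions, not the
    # evaluation's published schema; rename them to match your export.
    with open("carbanak_fin7_results.csv", newline="") as f:
        for row in csv.DictReader(f):
            technique = row["technique_id"].strip()
            category = row["detection_category"].strip().lower()
            # Treat anything other than "none" as some level of visibility
            if technique in PRIORITY_TECHNIQUES and category != "none":
                coverage[row["vendor"]].add(technique)

    # Rank vendors by coverage of *your* priority techniques, not overall totals
    for vendor, hits in sorted(coverage.items(), key=lambda kv: -len(kv[1])):
        print(f"{vendor}: {len(hits)}/{len(PRIORITY_TECHNIQUES)} priority techniques "
              f"({', '.join(sorted(hits))})")

The point is not the script; it’s that the raw results let you weight the evaluation toward the techniques and sectors that matter to your business (see point 1) instead of accepting whatever headline number a vendor chooses to lead with.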

While vendor-sourced breakdowns may get a little … self-congratulatory, they aren’t entirely useless. When reviewing what vendors send you about the latest evaluation, look for breakdowns that aim to build trust with their customers by stating the data and the upsides of their solution cleanly and clearly, minimizing self-serving bluster.

Forrester clients can also see how many of these vendors fared in our most recent EDR market evaluation: “The Forrester Wave™: Enterprise Detection And Response, Q1 2020.”