Put yourself in the shoes of a personal bodyguard. Your job is to protect Whitney Houston (or whatever). You wake up in the morning, do your yoga routine (or whatever), put on your suit, ankle holster, side holster, etc., and go to work, standing by Ms. Houston's side, giving her personal protection. At what point during the day do you think to yourself, "Man, I wish someone would attack us right now"?
The answer is, of course, never. You never want to get attacked. Who wants to cop a bullet meant for someone else? Not you. If you do your job right, no one attacks Ms. Houston, so she is safe and comfortable. You provide a deterrent effect, and get to make out in the limo. Everyone is happy.
The analogy of a personal bodyguard clearly shows us what's fundamentally wrong with antivirus, including its modern incarnations, and not necessarily limited to endpoint security. I've heard the founder of a honeypot tech company admit to feeling excited when his products detect an attack. I've lamented about the weird misalignment of incentives inherent to cybersecurity before, but I've recently started thinking that the problem is closely linked specifically to the "detect-to-protect" approach.
I think, on some high level, most security buyers are vaguely aware of the imperfection of "detect-to-protect": the conversation tends to open with the need for a patient zero before signatures can be synthesized. There is a tendency to talk about things in terms of risk management, but I'm getting the sense that oftentimes the risk management meme is left at a high level, and when it boils down to actual decision making, the insidious roots of detect-to-protect can be hard to rip out.
We've had traditional antivirus for a long time. The endpoint component of AV is largely a commodity offering at this point – they all use the same OS APIs to intercept the same things to check against signatures – so the key competitive lever is signature quality, which leads to the obvious comparison metric: detection rates, or how many known bad things do you detect as bad, and how many known good things do you detect as good? There is a raging debate about how this kind of testing should be carried out, with a potential antitrust lawsuit in the works, but without looking too closely into the nature of the complaint, I think that's just focusing on some specific details about how detection-rate-based testing is carried out, without addressing the broader question of whether detection-rate-based testing is a good thing overall.
My concern is that an entire generation of infosec buyers may only know how to see the world in terms of detect-to-protect. Budgets are limited and choices have to be made: do you go with solution X or solution Y? You've been PoC-ing both, you can only buy or expand or renew one of them, so which do you get? If you have a particularly sophisticated buying operation, you may do it with a proper, thoughtful, defense-in-depth (DiD) risk management assessment. But the temptation to simply lean on numbers in lieu of critical thinking can be irresistible. This temptation is heightened by sensible but oft tritely-applied notions like "you can't improve what you can't measure." Measuring detection rates is trivial. Solution X detected N bad things. Solution Y detected M bad things. N > M, so let's go with Solution X. Your subconscious reminds you that if there's a breach, no one gets fired for choosing the solution that detected more during a PoC trial. It's a smart career move.
The fallacy of making decisions based on the availability of data is hardly unique to infosec; digital health and other areas are prone to this trap as well. I'm not sure what to do about it. Should buyers consciously try to expand their horizons to include approaches that are not centered on detect-to-protect? That's easier said than done.
The problem is aggravated when sellers of products that are not centered on the detect-to-protect philosophy start playing by the rules of detect-to-protect because they want to claw back some market share. The intent may be right – we can add a forensics/detection component to our solution, so why not provide more value for our users? – but the effect could be that novel solutions essentially endorse traditional measures of success, reinforcing a feedback loop that hurts them in the long run. It's understandable: creating a new category with new measures of success is hard, high-risk work. Your startup needs customers right now, to be able to live long enough to make an impact, and displacing existing products with your own offering is the surest way of doing that. Budgets are limited, remember?
I'm not sure there's an industry path out of this trap. But I think it would help if buyers and sellers of cybersecurity tried harder to keep the measurement fallacy front of mind.