The One That Might Get Us Cancelled (startup idea #6)

what's next – exploring potential startup ideas at the intersection of industrial control systems and cybersecurity - episode #6

The [redacted] idea.

“Secret Weapon” and “Long Bow” were sitting around the table in our borrowed conference room, in the attic of a three-hundred-year-old building in a colonial-era neighborhood in Northern Virginia. We had decided to get together for an in-person weekend to kick around more ideas, having walked through, and shot holes in, ideas #1-5. It was mid-June.

SW, a former trader, was still mulling over the “oracle” problem.

“Honestly I just want to trade on it,” he said offhandedly.

“What do you mean?” I asked.

A string of barely understandable finance-speak interlaced with security jargon fell out of his mouth. “Market-making,” “proof-of-compromise,” “swaps,” “clearinghouse,” and lots of concepts that I thought I understood (a dangerous place to be) swam together for a few minutes.

LB and I started riffing on it immediately. We started laughing: “can you imagine if…”

But alas, dear reader, that’s where this story ends.

Because that entire conversation isn’t appropriate for polite company here on the World Wide Web. And y’all have been so great to sign up for this Substack. And this is the last “failed startup idea” before the big reveal - coming in just over a week.

Why won’t I share more? Because we came up with an idea so crazy that the creative destruction it might cause would upset a whole lot of folks. Though it might also make us rich in the process.

In the end, we decided to pass… for now.

This post won’t be discussing “The One That Might Get Us Cancelled.” Instead, it will walk through some of the concepts we used to think through the problem of refactoring the world of vulnerability research and disclosure. Because other smart folks are working on the same problems, and having the foundational concepts and models floating around the aether is sometimes useful.

So here goes:

Vulnerability Markets: Carrots and Sticks

To the smart kids in the back: forgive me for what is likely to be a very, very poor description of a very, very complex problem. If you have a better way to explain it, please drop it in the comments or on Twitter.

Let’s start at square one: software doesn’t always work as intended.

Sure, it may look like it is working as intended. Your photos turn into cartoon versions of the original .jpg, the image gets uploaded, your friends send you ephemeral thumbs-ups or what-have-you. But deep in the dark crevices of that executable file lie some unintended consequences of a few too many late-night Red Bull sessions. And it might just turn out that, despite that fancy compliance certificate you got by copy-pasting some AI-generated “security” language into a typeform, your buttoned-up app has a weak point.

Those weak points - they’re often called “bugs” or “vulnerabilities” - can come in a rainbow of flavors. Maybe the photo doesn’t render when you tilt your phone 90 degrees. Maybe the app text doesn’t display well in “night mode.” Or maybe when someone inputs a very specific string of characters into your search bar, they can gain access to a large database of user information that’s supposed to be encrypted.
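
To make that last flavor concrete, here’s a minimal sketch of a classic SQL injection, the kind of “very specific string of characters” that turns a search bar into a data leak. This is illustrative toy code, not any particular app; the table and column names are invented.

```python
# Toy example: an in-memory SQLite database standing in for the
# "large database of user information" above. Names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

def search_vulnerable(term: str):
    # BAD: user input is pasted directly into the query string.
    query = f"SELECT username FROM users WHERE username = '{term}'"
    return conn.execute(query).fetchall()

def search_safe(term: str):
    # BETTER: a parameterized query, so the input is treated as data.
    return conn.execute(
        "SELECT username FROM users WHERE username = ?", (term,)
    ).fetchall()

# A very specific string of characters turns a search into a data dump:
payload = "' UNION SELECT ssn FROM users --"
print(search_vulnerable(payload))  # leaks SSNs: [('123-45-6789',)]
print(search_safe(payload))        # finds no such username: []
```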

A Sisyphean Task

Think about it this way: weaknesses in computer programs are like stored potential energy. Not a great analogy, but a useful one: codebases are built over years, often layers upon layers. And within most, if not all, of them lie errors. These errors, when examined, can sometimes lead down rabbit holes that allow talented programmers to craft mechanisms by which they can cause that program to do things it really shouldn’t be doing.

People smarter than me could likely craft some kind of mathematical model for how, as codebases increase in complexity, so does the likelihood of vulnerabilities. But until that moment, suffice it to say the law of large numbers applies. Software has weaknesses. Some of those get fixed (“patched”) and some don’t, and as new code gets added, new possibilities of exploitation emerge.
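
For a back-of-the-envelope version of that model (illustrative only; the flaw rate p is a placeholder, not a measured number): if each of n lines of code independently carries a tiny probability p of containing an exploitable flaw, the flaw count V is roughly Poisson, and the chance that at least one vulnerability exists approaches certainty as the codebase grows:

```latex
V \sim \mathrm{Poisson}(pn), \qquad
\Pr[V \ge 1] = 1 - e^{-pn} \;\longrightarrow\; 1
\quad \text{as } n \to \infty
```

Independence is obviously false in real code, but the qualitative point survives: a big enough codebase means a near-certain weakness.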

And so a game of computerized cat-and-mouse is afoot.

Welcome to the world of vulnerability research.

Computerized Casablanca

Those who can parse the details of these digital domains are, in a certain way, members of an underground 21st-century elite, populated with a very small global cast of characters worthy of the big-screen treatment: Experts. Extroverts. Lovers. Liars. Preppies. Ponderers. Addicts. And Adherents. Some have been at it for decades. Others, just a few short months. But all of them — like successful gold miners during the Rush — have a sort of unnatural ability to spot digital weakness.

The question is: what happens next?

Much like during the Gold Rush, researchers must find digital assayers, trading their findings for money (or other, perhaps more-salacious favors), then return to the wild to continue the hunt. One could imagine that these assayers are an interesting sort: criminal gangs, governments, corporations, researchers.

This microcosm is a kind of hyper-puzzle. Complex motivations, information asymmetry, sovereign power, underworld influence. Yet at the heart of it all, a kind of stored energy, the extracted gold from the digital mountainside.

If vulnerabilities are a kind of stored energy, the release of that energy can (if released to the right people) have a sort of negative entropic value. If disclosed to the manufacturer or a responsible party (and then patched), the software’s users are now exposed to marginally less risk. So is the software company. Thus disclosure “releases” this energy (creating value) and forces those who seek to do bad things with code to find new exploitation mechanisms. In the military, we would call this a “cost-imposition strategy.”

Keeping this tenuous metaphor alive, one could start asking: what currently limits the release of this negative entropy (or energy)? Wouldn’t we want to promote the release of this potential energy (since it makes our systems more secure)?

A partial answer is:

  • Threat of violence. Those who purchase and utilize vulnerabilities and vulnerability research tend to also have access to the means of production of organized violence. Which is to say: they are nation-states, criminal gangs, or similar. Rumor has it that researchers tend to tread lightly when dealing with them, or associated intermediaries.

  • Lack of talented individuals. The skills required to do this kind of research are rare and fractal. Those who are good are very good, and it takes a while to get there. There’s more to say here, but I’ll just leave it at that, for various reasons not worth going into here.

  • Current business practice. Centralized business models seem to be emergent today, evidenced by companies like NSO, Azimuth, and others essentially aggregating talent and selling the fruits of their work either partially or on top of a “full software stack.” One need only look at recent reporting to see that, whether out of utility or profit-maximization, even this industry is moving towards the “as-a-service” model.

But at a higher level, these are factors that simply describe some of the governing dynamics of the vulnerability research environment as a whole. The question is: what might a new environment look like?

Potential Energy: Thematics

What does it look like to forge a new system that lowers the friction, enabling more of this potential energy to “escape”? As we sat in the Attic and riffed on a potentially explosive solution, the underlying dynamic was: information asymmetry.

Today’s system has three archetypes: (1) researchers, (2) buyers, and (3) utilizers. And three dominant approaches:

  1. The quasi-intermediary. Interested entity (2) builds relationships with researcher (1) and utilizer (3), and essentially conducts both short- and long-term operational and financial arbitrage on a basket of work done by the researcher. This can take a few different forms, from stable W-2 employment (NSO et al) and contracts with customers, to vulnerability “brokers” who might be simply applying age-old rapport-building sales techniques. In this case, (1) and (3) have no ability to exchange information, which is held by (2).

  2. The platform. Many companies today seek to increase information flow between the (1) and (3) players by giving software firms access to vetted hackers who run against pre-release code in exchange for rewards, money, etc. (such as HackerOne or Synack). I can’t speak to the information and money flow here, but one imagines those would be limiting factors, resulting in low prices paid for low-value vulnerabilities, because they are coming out of what I am guessing are low-information environments (could be wrong).

  3. The bounty. Companies that don’t want to purchase research outright can offer a “bounty” for vulnerabilities voluntarily disclosed to them through some platform or mechanism. Here as well you have asymmetry, as the discovering party has no way to be sure that, once the vulnerability is disclosed, the company will pay. Furthermore, it is very hard to settle on a price in advance.

There are likely other models, but these three are the ones that come to mind at the moment. And they serve to illustrate just how rudimentary the current environment is, and how it likely contributes to the fact that these releases of potential energy (revelation of critical vulnerabilities) are sporadic.
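
To make that asymmetry concrete, here’s a toy sketch of the first model. It isn’t a real market model, and every name and number in it is invented; it just shows how the intermediary’s private view of both sides lets it capture the spread.

```python
# Toy sketch of the "quasi-intermediary" model: the broker (2) sees both
# the researcher's ask and the utilizer's willingness to pay; (1) and (3)
# never meet. All names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    name: str
    value_to_utilizer: float  # what the end buyer (3) would actually pay

def broker_model(vuln: Vulnerability, researcher_ask: float) -> dict:
    """Broker pays the researcher's ask (if profitable), keeps the spread."""
    if researcher_ask >= vuln.value_to_utilizer:
        return {"deal": False}
    return {
        "deal": True,
        "researcher_gets": researcher_ask,
        "broker_spread": vuln.value_to_utilizer - researcher_ask,
    }

def transparent_model(vuln: Vulnerability) -> dict:
    """Hypothetical alternative: if (1) could see (3)'s willingness to
    pay, competition would push the price toward full value."""
    return {"deal": True, "researcher_gets": vuln.value_to_utilizer}

v = Vulnerability("hypothetical-rce-001", value_to_utilizer=1_000_000)
# With no price signal, a researcher might ask far below true value:
print(broker_model(v, researcher_ask=150_000))
# -> {'deal': True, 'researcher_gets': 150000, 'broker_spread': 850000}
print(transparent_model(v))
# -> {'deal': True, 'researcher_gets': 1000000}
```

The gap between those two outputs is, roughly, the information rent the current system pays to intermediaries.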

Closing Thoughts

How to unlock this “potential energy”? It’s certainly tough, but as we discussed in our borrowed conference room, new technologies may offer a way forward. While we won’t disclose it here, I can say that our thinking had to do with distributed systems, proof-of-stake, a new concept we started batting around that I’ll coin “proof-of-compromise,” and other themes you can pick up on if you want to follow me on Twitter.

Additionally, some weeks after we walked away from this idea, I got a call from a very smart engineer (I won’t share further details, to preserve a bit of privacy) who was essentially trying to crack the same problem using slightly different structural approaches. All that to say: folks are actively thinking about this problem, and it may become solvable in the next 5-10 years.
