Questions about Cyber Conflict Remain Unresolved, Panelists Note
Who is an enemy combatant in the cyber realm, and what are the rules of engagement for cyber conflict? These are enduring questions that remain unresolved despite years of punditry about cyberwarfare.
There have been numerous discussions and papers written about cyberwarfare over the last decade, yet many of the key issues and principles related to it are still remarkably unresolved or untested: What does and doesn’t constitute digital warfare? What is the threshold for triggering NATO’s Article 5 collective-defense clause when it comes to cyberattacks? What are the rules of engagement in the digital realm? When does cyberespionage become a cyberattack?
These questions were largely theoretical until Russia invaded Ukraine in February and vigilantes began launching cyberattacks against Russia, adding a new question to the mix: What constitutes an enemy combatant in cyberspace when anyone, anywhere, can join a conflict on behalf of one of the parties?
Last week at the S4 ICS Security Conference in Miami, Maggie Morganti sat down with Danielle Jablanski and Joe Slowik to talk through some of these questions, particularly as they relate to the recently discovered Incontroller/Pipedream malware, which may have been created by Russia to target critical infrastructure in the West. The malware, created by a group that threat researchers at Dragos are calling Chernovite, was discovered before it was deployed against any targets. But it has the potential to cause physical disruption in critical infrastructure systems, which could run the gamut from simple manufacturing plants to gas pipelines or chemical plants.
I thought the conversation raised interesting points and am publishing an edited transcript here of parts of the discussion, since the video recording of the session likely won’t be available for weeks.
Morganti is a security research manager for Rockwell Automation and a former threat intelligence analyst for Mandiant. Slowik leads threat intelligence and detections at Gigamon and previously ran incident response at Los Alamos National Laboratory and worked for Dragos as a threat analyst. Danielle Jablanski is an OT cybersecurity strategist with Nozomi Networks and a non-resident fellow with the Atlantic Council’s Cyber Statecraft Initiative.
Morganti: When we look at the cyber domain, reconnaissance for destructive attacks can look very similar, if not identical, to…cyber espionage. So where in the attack chain do we start looking at applying international law to … cyber network attacks and cyber network operations?
Slowik: I would argue that right now we just really don't know, because these items overlap significantly… The types of access that would be necessary for [an] intelligence operation are really quite similar to those that would be necessary for … a kinetic-effects or disruptive-effects … operation.
So, for example, we could look at something like Nobelium — the SolarWinds-Microsoft event — which was very widespread [and] did not result in any known disruptive effects, but very easily could have transitioned into that, [versus] … the various wipers that have taken place in Ukraine.…
[I]n order to get in place to deliver those wipers, you would need to proceed through an intrusion that can just as easily be used to [siphon data] from the Ukrainian motor vehicle registration servers, as well as then wiping them afterwards. So that dual-use tension … represents very difficult items for us to parse, because at what point does that adversary switch over [to become an attacker as opposed to a spy]? You can make an argument … that this is where attribution does actually matter and that having some sense of who the adversary is and what that adversary’s mission or purpose might be can serve a vital function in trying to differentiate [between] “this is just an espionage group” versus “it’s a [disruptive/destructive] Sandworm team.”
Jablanski: So on the adversarial intent-distinction part, I think it gets into the next topic, which is what defines a combatant in cyberspace? And that distinction now has moved away from who you are and what you care about, to what you target and what the impact is of that target. And so I think that that actually, to Joe's point, makes attribution count even more as a political message and norm-setting capability to say, I don't care who you believe in and what your freedom flag is, but if you target these things, and do X, Y, and Z in terms of effects, that's going to be a problem for the world.
Slowik: I guess one final point on this, because this is a very important topic, when … we start moving away from just general IT operations and into industrial environments, there's also a very important distinction between … intentional effects and … when … adversaries screw up and cause a disruption inadvertently…. [A]n adversary that accesses a [programmable logic controller] to [steal information] … and knocks over the device resulting in a disruption — have they just inadvertently [moved] straight into being a combatant?
Morganti: So the next problem-set that we're going to look at is who is a combatant? [T]hese laws [of war] were written back when we were dealing with…very traditional kinetic warfare, and combatants wore uniforms and they moved in … formations. And now we are… in this sort of brave new day and age where … a combatant is behind the keyboard a world away, and they may be a direct, state-sponsored individual or they may just be sort of… state-tolerated, state-aligned. So … under international law … how wide do we spread the umbrella for who should be considered … a combatant party to the conflict?
Jablanski: [W]e obviously don't see armed conflict in the same sense in cyber operations and that's because the individuals involved in those operations are fluid and conditional. But the thing we forget is that they also have targets.… Typically, there are some low-hanging fruit that we like to talk about that are opportunistic attacks, but the most sophisticated and resourced ones focus on targets. And so I think … the targets become the main, important factor … what are they targeting and why? And what effect does that have on society I think is a whole other realm of truth that we have to unpack….
[W]e pretend that there aren’t actual rules of engagement [but] there are. There are red lines, so to speak, in those operational engagements…. [W]e shouldn't do X, Y, and Z because we know the potential implications, but we pretend that it's all sort of ad hoc and not all planned out…. So just because the rules of engagement for combatants in cyber aren't written down somewhere in Geneva doesn't mean that they don't exist…. I've actually seen offensive cyber actors in different countries say that that's a good thing, because as soon as you write down those red lines, you might actually create a false crisis by having them, you know, trip a wire that would potentially … necessitate a big conflict. And my favorite example… is … when the Conti group came out and said, “Our objectives are on behalf of a certain nation-state….” Everyone that had studied any of these topics said, “I don't think that's the flex you think it is.” And it took, I think, like a couple of hours for them to recant their statement and say, “Oh, no, no, no, no. We didn't mean that.”
Slowik: All excellent points, Danielle. But … taking into consideration the most recent item of interest — Incontroller/Pipedream [malware created by the] Chernovite group — and drawing a distinction that was made already this morning between [an] access-development team vs. an effects team … maybe this Chernovite entity really is the development team.… Well, if that's the case, is Chernovite … an enemy combatant … if all they did was develop some code that could do some very nasty things, but that was never actually deployed? And we can start taking this back a little bit further…for example, Barry who works for Raytheon is a shit-hot developer and put together some really interesting capability … and handed it over to the military, and then it's Capt. Smith that actually deployed it. Well, if it ends up knocking over a children's hospital in the process and some…non-combatants die, is that Capt. Smith's fault for deploying it, or is Barry, who put this capability together in the first place, also having some sort of claim over this? We can make the argument … you don't blame the weapons manufacturers when someone drops a bomb. But given … how frequently we … don’t make very useful distinctions between the developers, access-development teams, and the actual impacts teams, the way that blame … accretes to certain individuals becomes very interesting.
So I'm looking at [the threat actor group known as] Xenotime, for example, from a few years ago, and the Triton incident…. [T]he Russian research institute that was implicated as being involved [in developing it],… it's not necessarily proven that that was the organization that executed the operation. But rather through indictments and so forth, it was shown that they were certainly involved…. If, say… the Triton incident … had resulted in a loss of life … would [the Russian institute] as an organization be responsible in some way? Well, as far as the US Treasury is concerned, they thought yes [and sanctioned them]…. So identifying roles and responsibilities within the cyber ecosystem is [an] interesting topic that I don't think we've really explored that much in trying to figure out who's actually a combatant within these spaces.
Morganti: I think this is a really great segue to … arms control… How do we look at doing arms control for cybersecurity?… I know everyone is going to blow up the backchannel as I say this, but in a case like Stuxnet where you know there were very specific technical controls around what that could be used for and in what environments that could actually be viable,… under international law, do we feel that there are sort of deep obligations to put those sort of bounds around payloads so that malware itself … will adhere to distinction for the target environments? … Or do we think that … it's sort of at the mercy of whoever deploys it? And … do we think we're doing a good job in … making sure … that we're not inadvertently setting precedents for cyber that are going to translate, you know, back to Barry at Raytheon…?
Slowik: In thinking through this in a fairly objective manner, there's a lot of confusing bits about this…. I'm looking at enablement tools, things like …Cobalt Strike, which was used in the Triton event, for example, by the adversary in that case, in a very specific sort of fashion…. [I]t’s [made by] a commercial entity. And if they're enabling these sorts of items, like what sort of responsibility is yours to that, and is this something that deserves some degree of control or due care? In fairness, the organization that sells the product does do stringent vetting of customers. But that doesn't prevent source code and cracked versions from leaking online.…
There's also a very interesting distinction … for threat researchers and analysts [giving presentations] about these [malicious tools]…. [W]e’ve seen with other things,… these items tend to leak very widely and far, in some cases even with very limited distribution. And in those cases, does a Mandiant or Dragos … have some responsibility … if these sorts of items leak out…? I know there was some controversy over the Triton framework when bits and pieces of that slowly dribbled out into commercial malware repositories… is there a requirement or some sort of due diligence on the part of these sorts of providers in order to screen their content for things that could potentially be repackaged and reused for these sorts of events? I don't have answers to these sorts of things, because I'm not that cool. But these are definitely questions that I don't think get brought up often enough….
Jablanski: I think to go back to your question about like whether or not we can retrofit … proportionality based on impact, the answer will always be there are thresholds…. [I]n the cyber [realm] we can't regulate these capabilities, it has to be about impacts, and maybe sometimes intent…. [I]n nuclear — I'd never liken a cyber weapon or capability to a nuclear weapon — but what we do there is, we do fallout data analysis. So if you take out New York City with a nuclear weapon, there are all these really smart people that have looked at what would the impact be on financial markets, what would it be like for all these other things. And that impact analysis … for the different kinds of ICS attacks that we're starting to see … hasn't been done, nobody's done it….
Slowik: I think we have seen … some evidence of this sort of like battle-damage analysis.… I wouldn't say that doesn't happen in cyber, I would say it's just very unclear what that looks like and where the limits get drawn…. The open question is whether or not that sort of sense of due care … extends to other entities like [an] entity that would backdoor a piece of accounting software and then unleash a destructive worm that ends up wiping organizations as far afield as Denmark, the United Kingdom and the United States.… So [w]ho’s YOLOing it out there, and who was actually … exercising some degree of attempted restraint? But there's also the question, as well, of unintended consequences when it comes to these items. So … as Maggie referenced earlier, Stuxnet was very narrowly targeted from [an] effects capacity. But from an enabling perspective — given the zero days that were employed [to spread it] — it [spread] fairly far and widely…. [D]id someone get in trouble? Maybe Israel, or maybe the United States … because Stuxnet was just a little bit too aggressive, or more aggressive than it needed to be?
Jablanski: But, Joe, I would say my question wasn’t whether or not we've done the analysis, but that … the fallout and impact analysis has not given us the threshold for determining whether something is a weapon, an incident, something bigger than an attack, or cyber war.
Slowik: Another interesting take on this as well, just going to personal experience, [is] the revision of targeting allowances [in] Iraq, and especially Afghanistan, since we were there so long. The evolution in what was allowed versus what was just completely written off as unacceptable because of the potential for civilian casualties or unintended effects varied dramatically from the early 2000s until the late 2010s.… And that was based upon experience — very nasty, very unfortunate experiences. But there was some learning going on. They're like, holy crap, we can't do this anymore…. The reason I bring this up is that … incidents happened and we learned from them and then revised. We've seen from a critical infrastructure perspective … we just haven't had the events in question to begin shaping these things. Just as we say that safety rules are written in blood [when it comes to] the industrial space, when it comes to some of the things like the Geneva Conventions, the law of armed conflict, etc.… it's based upon people previously screwing up in really nasty ways. And — fortunately or unfortunately — we haven't had [something similar in cyberspace yet].
Jablanski: I argue that there are rules of engagement that we're not privy to…. [D]o you think that those rules [of] engagement exist? Do you think that there's, you know, a quantum person that has a higher up they have to go through clearances … for, you know, approval and authorization? Or do you think they're learning better … as they go?
Slowik: I think if nothing else, there's always the rule or law of self-interest. So if you look at someone like Conti [gang], when they [make a statement like] “Glory to Mother Russia” … CyberCom was just like, you know, rubbing its hands together. It's like, “That's right, asshole. We're coming after you now.”
But, you know, there certainly are rules of engagement out there — if nothing else, just do restraint, because someone doesn't want to get invaded or whatever.…