Opinion | To stop intelligence leaks, assume there will be bad actors

Reading about the massive dump of secrets in what has become known as the Discord leaks, people might reasonably ask: Why do these hemorrhages of classified information keep happening? How can a 21-year-old Air National Guard member have access to secrets for which he had no apparent need to know — and then, allegedly, toss them into a chatroom to impress his pals?

Most important, what can the Pentagon and the intelligence community do to maintain better control of classified information, without intruding on employees’ personal lives or blocking what the public truly should know?

I’ve been asking people in and out of government to untangle these questions. I hear growing support for the argument that classified systems should be built to assume the worst about their users rather than the best. As we’re learning, any systems administrator could turn out to be the next Edward Snowden. Intelligence agencies need a “zero-trust architecture” that automatically enforces need-to-know permissions. This approach would be more efficient, less bureaucratic and less invasive than more draconian alternatives.

Robert M. Gates, who spent his career overseeing classified information as CIA director and defense secretary, sees two essentials going forward: “We need technology solutions that enforce the need to know, and systematic and rigorous enforcement of rules that already exist.”

These security breaches have recurred with mind-boggling regularity, though the motivations have differed in each case: Army intelligence analyst Bradley (now Chelsea) Manning was convicted of turning over 1.6 gigabytes of data to WikiLeaks in 2010; former NSA contractor Snowden fled to Russia after scraping the code-breaking agency’s most secret programs and giving the files to the news media. Some of these leaks may have benefited the public, but the system was at the mercy of its users.

Lesser-known cases have been just as serious: CIA officer Joshua Schulte was convicted in 2022 of leaking the hacking tools of the agency’s Operations Support Branch in 2017, a trove published as “Vault 7”; the so-called Shadow Brokers, whose identities remain unknown, published highly sensitive NSA and CIA files in 2016 and 2017.

And now comes Airman 1st Class Jack Teixeira, who allegedly used his access as a systems administrator for intelligence programs at a base on Cape Cod, Mass., to disclose top-secret “code word” intelligence reports on everything from Chinese satellite capabilities to Ukrainian ammunition supplies. He allegedly accessed the documents, printed many of them and then posted photos of the reports to a Discord chat server.

Why, in all these cases, did the systems for handling classified data break down? Certainly, too much information is classified, but that’s not really the problem here, many experts told me. Instead, it’s that existing rules aren’t enforced evenly, and they depend too much on fallible human beings.

Reforms meant to solve one problem often create new ones. Manning had access to a vast array of classified information in part because one lesson of 9/11 was that the CIA and FBI couldn’t read each other’s intelligence and thus failed to “connect the dots” about the plot. Stovepipes were bad; transparency was good. So analysts were given much wider access to secrets — and more opportunities to misuse that information.

James R. Clapper Jr., the former director of national intelligence who led the Pentagon’s post-Manning investigation, noted that new rules were established. But enforcement was uneven across the Defense Department, and control eroded over time because the restrictions were seen as onerous and inefficient.

“There needs to be a comprehensive system for monitoring electronic behavior” at work by people with high security clearances, Clapper said. The intelligence community shouldn’t routinely monitor its employees’ private communications, but it must learn to recognize “red flags” that may emerge with people who have been trusted with security clearances.

The system has missed these warning signs. Manning was going through intense personal turmoil in Iraq as the leaks began; Snowden went from being an eager employee at the CIA to an angry dissenter as a contractor for the NSA. Schulte was seen by his CIA colleagues as “vexing and obstreperous,” yet he kept his access to secrets, according to a superb New Yorker profile by Patrick Radden Keefe.

The system needs accountability, but how should it be enforced? When Gates was defense secretary, he was famous for firing senior officials when there was a screw-up. But after the Manning leaks, Gates told me, Pentagon lawyers cautioned him that, before taking disciplinary steps, “you need to be mindful of the criminal proceedings that are underway.”

The Biden administration faces that same legal constraint in dealing with the Discord leaks. Senior intelligence officials are still struggling to understand just how the leaker got access to so much information. And they sensibly want to avoid overreacting.

The right answer is that accountability and controls should be engineered into the system. The tenets of such a zero-trust architecture are neatly summarized in a recent blog post from the defense software company Palantir: “Assume a hostile environment,” “presume breach,” “never trust, always verify,” “scrutinize explicitly,” and “every transaction should be logged for analysis and audit.”
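
To make those tenets concrete, here is a minimal sketch of what need-to-know enforcement and per-transaction audit logging might look like in code. It is a hypothetical illustration, not Palantir’s product or any real classified system; the names (User, Document, authorize, the “EAGLE” compartment) are invented for the example.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(format="%(asctime)s AUDIT %(message)s", level=logging.INFO)
audit = logging.getLogger("audit")

@dataclass(frozen=True)
class User:
    name: str
    clearance: int                          # e.g., 3 = top secret
    compartments: frozenset = frozenset()   # code-word programs this user may see

@dataclass(frozen=True)
class Document:
    doc_id: str
    classification: int
    compartment: str                        # hypothetical code word, e.g. "EAGLE"

def authorize(user: User, doc: Document) -> bool:
    """Never trust, always verify: every request is checked on its own,
    with nothing grandfathered in by job title or past access."""
    permitted = (
        user.clearance >= doc.classification
        and doc.compartment in user.compartments  # need-to-know, not just clearance
    )
    # Every transaction is logged for analysis and audit -- denials included,
    # since a pattern of denials is exactly the kind of red flag to surface.
    audit.info("user=%s doc=%s decision=%s",
               user.name, doc.doc_id, "GRANT" if permitted else "DENY")
    return permitted

# A systems administrator with a top-secret clearance but no need-to-know
# for this compartment is still denied -- and the attempt leaves a record.
admin = User("sysadmin", clearance=3, compartments=frozenset({"MAINT"}))
report = Document("intel-2023-041", classification=3, compartment="EAGLE")
assert authorize(admin, report) is False
```

The design point is that a high clearance by itself grants nothing: the administrator in the example is denied because the compartment check fails, and the denial itself lands in the audit trail, where it can be flagged.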

Smart young engineers don’t want Big Brother looking over their shoulders into their personal data. It’s simpler, fairer and less intrusive to have the rules and permissions built into the software used for handling classified information.


