
“At What Point Does Profit Trump Safety?” Ex-National Cyber Director Presses Software Regulation Amid High-Profile Hacks

By Renee Dudley

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.

In 2019, hackers launched one of the largest cybersecurity attacks in U.S. history, eventually infiltrating various government agencies, as well as scores of private sector companies. The White House later attributed the attack, known as the SolarWinds hack, to Russia’s Foreign Intelligence Service. But as U.S. officials scrambled to respond to this spying, they realized they were missing key information: critical log files, the digital records of activity on users’ computers.

The logging feature, which allows users to detect and investigate suspicious activity in their networks, is included in high-end Microsoft 365 plans but not in the basic version then used by some government agencies. Other agencies didn’t retain sufficient log data over a long enough time frame. Had logging been more widely deployed, it might have tipped off officials to the intrusion sooner and enabled them to better investigate after it had been discovered.

Against this backdrop, President Biden nominated Chris Inglis to become the country’s first National Cyber Director. Inglis, a former National Security Agency official who began his career as a computer scientist, would go on to oversee the development of the administration’s National Cybersecurity Strategy. And as he and his team at the White House drafted that document, he kept returning to the SolarWinds hack. Known as a supply chain attack, this far-reaching breach started with compromised software that was used by many high-profile customers. “Everyone along that supply chain assumed that security was built in at the factory and sustained along the supply chain,” Inglis said of the SolarWinds attack. “We now know that wasn’t the case.”

The issue emerged again this month when some victims of a cyberattack linked to China were unable to detect the intrusion because they held basic Microsoft licenses rather than the premium ones that include logging. Hackers had exploited a flaw in Microsoft’s cloud computing service to break into about two dozen organizations globally, including the U.S. State Department.

These types of incidents reflect a larger trend, Inglis said: Computer users find themselves bearing a disproportionately large share of the burden of defending against cyberattacks. In response, the new strategy proposes shifting more of that burden to software makers themselves. Indeed, following the most recent cyberattack by Chinese hackers, Biden administration officials called on Microsoft last week to make security features like logging standard for all users.

Microsoft said it is engaging with the administration on the issue. “We are evaluating feedback and are open to other models,” a company spokesman said in a statement.

Although the Biden strategy, which was announced in March, is not binding, it represents a significant change in the government’s approach. Among its proposals: advancing legislation that would hold tech firms liable for data losses and harm caused by insecure products. Inglis, who stepped down from his role as director earlier this year, recently spoke with ProPublica about the national strategy document and the administration’s push to make technology providers do more to protect users from cyberattacks. The conversation has been edited for length and clarity.

The Biden administration is talking about regulating cybersecurity. What would that look like in practice?

If you look at regulation of cyberspace at the moment, it’s mostly focused on operators. It’s not focused on those who build the cloud or major pieces of software. Governments need to consult with the private sector to understand what’s critical in those systems. We can use regulatory authorities that exist already, whether it’s the Department of Commerce, the FCC, the Treasury Department. When something is life- or safety-critical, you get to a place where you have to actually specify those things that you say are not discretionary. We did this with drugs and therapeutics. We did this with transportation systems. We need to do the same thing in cyberspace.

I’m reminded of a book I’m sure you’re familiar with, “The Cuckoo’s Egg,” Cliff Stoll’s story about the sprawling intrusion into U.S. government and military computer systems in the 1980s. Eventually, the trail led to West German hackers paid by the Soviet Union’s intelligence service, the KGB. These issues are not exactly new. Why has regulation never come up in this conversation before?

Well, I think it’s been brought up, but two things prevented it. First, we’ve thought about the idea that security is something that the technologists, the innovators, would actually take care of. They’ve always been of the mind that they would take care of it when they get around to it. But they’re always on to the next new innovation. So they never get around to it. We never double back to essentially build something in that wasn’t there at the start.

Second, we worried that too much regulation would actually suppress innovation and deny us the full benefit of technology. We still need to think about that. But it turns out that innovation is not a free lunch. I won’t cite any particular sources, but if you’re a good business person, you want to avoid any unnecessary cost. And so you’re always going to point out the downside of regulation.

You have alluded in this discussion to making products secure by design — the concept, which also is a focus of the national strategy document, that security should be built into digital products. What are some examples of this?

It’s pretty straightforward: Are the software or hardware systems meeting security expectations under reasonably foreseeable conditions? We’ve done that with automobiles. We have airbags, we have seat belts, we have anti-lock brakes. So what are the basic cybersecurity features that should be there at the get-go? Multifactor authentication or some reasonable equivalent to that. Some degree of segmentation so that if something gets into your system, it doesn’t rapidly race across. An easy way to patch vulnerabilities. The magic in the middle of that is that the vendor actually says, ‘I will take that responsibility.’ As opposed to saying, ‘Let the buyer beware. I’ll sell you the basic version. But if you want security features, then I’ll sell you a package on top of that.’ That’s nonsense.

That sounds like the whole Microsoft licensing debate in the wake of the SolarWinds attack, where the government lacked logging, a key security feature.

That’s right. Now, if you have an extraordinary security situation — you’re in the darknet, or you’re doing business in places where there’s very little jurisdictional authority exercised by the local police forces or the diplomatic cadre — then you ought to expect to pay more. But if you’re just an ordinary consumer, security ought to come along, built in.

I’m wondering how things are going to move ahead with this, given what seems to be the historic corporate outlook. When Microsoft President Brad Smith testified before Congress in early 2021, then-Rep. Jim Langevin of Rhode Island questioned him about charging extra for logging. Smith replied, “We are a for-profit company. Everything we do is designed to generate a return.”

So is Ford Motor Co. So is Tesla. It’s a pretty simple formulation, which is: At what point does profit trump safety? And the answer is, there is some reasonable alignment of the two. You can’t have all of one and none of the other. The businesses have to be able to sustain themselves; profit needs to be in the bargain. But they cannot deploy technologies that they know to be injurious to the welfare, health and safety of their customers. That is simply not the way this society works. I just think that companies that deploy products that have a detrimental effect on their customers either will find themselves [improving security] through self-enlightenment or market forces, or they should expect that they will be compelled to do that.

We should be pro-business. But business over the interest of the customers that it serves is essentially a graveyard spiral. It’s a race to the bottom. And so this is yet another moment where you have to align the interest of business with the interest of consumers that they will serve.
