The White House announced the National Cyber Strategy Implementation Plan last week, closely following the Storm-0558 Chinese forgery of Microsoft Azure Active Directory keys. This is the first National Cyber Strategy to come with an implementation plan, and it has been long awaited. The step is all the more notable because the top cybersecurity post in the White House remains in limbo. The plan focuses on how the Strategy's goals, like shifting the cybersecurity burden to bigger organizations and creating incentives for long-term investment, will be achieved.
As with many Federal Government documents prepared by consultants and workshopped into fine bureaucratic jargon, it can be difficult to tell the strategic objective apart from the implementation. For example, how is "Explore approaches to develop a... liability framework" any nearer to changing behavior than the stated goal of "Shift liability of insecure products and services"? Hosting a legal symposium may be a good idea, and boy does the executive branch like to convene the public and private sectors, but some of the implementation plan reads as if the first step is planning for implementation. Look, I'm being a bit unfair, but my time in government consulting trained me to treat plans, frameworks, and strategies as tells that nothing is really getting done. Momentum is a powerful force, and implementation is about doing, not planning. So here are a few moves the White House is actually calling for: a more active CISA, R&D grants, and new certifications. The rest may be stuck on planning the implementation for a while.
In AI risk, Alex Karp, CEO of Palantir, threw a match at the powder keg of AI opinions with his piece "Our Oppenheimer Moment: The Creation of A.I. Weapons." He's not the first or even the loudest to say that LLMs need regulating, but he is likely the only one to have written a PhD dissertation on aggression and culture (see the English translation from Google Translate below).
It's a long and complicated philosophical tract, which examines how rhetorical gestures and jargon are used to avoid making any substantial claims while appealing to the repressed mood of the audience. Karp identifies patterns of speech and jargon that allow violence and aggression to slip into culture and become the basis for an identity (anyone interested in a dissertation reading club, DM me on Twitter). It is with this analysis in mind that we should assess his essay on AI, and particularly the accompanying graph, which at first glance appears to be absolute nonsense (yields in kilotons vs. number of parameters!?):
This graph comes from Karp's opening, which acknowledges and legitimates the fears of AI pessimists. Halfway through the essay he abruptly does a 180 and calls efforts to stop or slow technological progress "misguided." The goal of the graph, in other words, is to drive clicks and generate criticism. It is rhetoric that taps the prevailing mood. Having whipped the reader into a state of alarm, he redirects that ire at Silicon Valley leadership, big-tech engineers, and hand-wringing censors. Besides my interest in abolishing the fear of progress and the cultural cul-de-sac we seem to be trapped in, there is one more interesting line before Karp comes clean and says Palantir is pro-AI, defense spending is good, and all of our competitors are bad (yawn).
The salient observation that Karp slips in between the two polemics is that we must "construct moats and guardrails around A.I. programs’ ability to autonomously integrate with other systems, such as electrical grids, defense and intelligence networks, and our air traffic control infrastructure." Taking internet connectivity as a precedent, we have not succeeded in an orderly roll-out of connected industrial technology. Operational technology has become more connected, more complex, and more prone to exploitation, without equivalent improvements in security, despite a large increase in security knowledge, spending, and tools.
Closing out this week's tech risk roundup is a fascinating piece about Kaspersky, the infamous Russian cybersecurity firm, collaborating with Apple to secure iOS. Kaspersky alerted Apple to yet another zero-day vulnerability (the third identified by Kaspersky and addressed by Apple), prompting a triad of patches this summer. Kaspersky's public announcements were preceded by claims that the Russian government was the target of an NSA operation.
Apple has long had a difficult relationship with the U.S. government and has taken to using that publicity to highlight its commitment to privacy. Now that's making for some strange collaborations. As benign risks get mitigated, things get weirder, and not just for the first parties involved. Something to keep in mind for planning, AI, and collaborations.
Speaking of strange collaborations, did you see Putin's Africa summit in Moscow? As we enter the dog days of summer, Russia is shifting its focus in Ukraine from damaging energy to agriculture. One downstream effect of targeting agriculture in Ukraine is that many African and Middle Eastern countries depend on that supply to stave off famine, so a bit of PR and goodwill are an attempted prophylactic.
Why is Russia targeting Ukrainian agriculture, and why now? Well, there are only two windows for large mechanized military campaigns in Eastern Europe, due to the mud season (rasputitsa) in the spring and fall. Those windows are summer and winter, when the earth dries out enough or freezes hard enough for wheels, continuous tracks, or sleds. Russia's strategy, following the failed blitz to Kyiv, has been to undermine Ukrainian infrastructure, depriving Ukraine of both food and export revenue in the summer and crucial energy for heating and manufacturing in the winter.
Pursuing this strategy may cause geopolitical instability in the Middle East and Africa, home to many of the largest importers of Russian wheat: Egypt ($2.44B), Turkey ($1.79B), Nigeria ($493M), Azerbaijan ($339M), and Saudi Arabia ($316M). Close allies like Syria, Iran, and Eritrea will no doubt get their imports, but countries with more tenuous relationships, like Israel, Turkey, and India, may find that revolutions in their neighborhood are more than they can bear.
From revolutions to rate hikes: Federal Reserve Chairman Jerome Powell is leaving the possibility of raising rates in September on the table. While the Fed is now forecasting the much sought-after soft landing for the economy, it's still too early for congratulations, particularly with the election year looming. After a brief respite in June, the Fed's announcement that it would raise rates 25 basis points is keeping investors on their toes. Markets will likely overreact to the September decision, pricing in despair if rates go up or irrational exuberance should rates stay flat. Still, the days of three percentage points of increases between June and November, as in 2022, appear well behind us. With the mundane risks under control, it's time to look out for the weird and esoteric.
One such esoteric risk is cyber-financial. The SEC called the long-awaited cyber disclosure meeting to discuss new, enhanced, and amended rules for public companies in the case of a cybersecurity incident. Byrne Hobart over at The Diff has a good take on the implications for assets and liabilities. He's betting that software companies selling a complex suite of tools can make money coming and going if they also sell security tools to complement the more prosaic software. There are many implications for investors, both those looking at security companies and those whose companies have cybersecurity risks (all companies).
There are also serious implications for those working in cybersecurity. CISOs, already beset by hackers, budgets, auditors, consultants, lawyers, and vendors, now have a new set of regulators to please. Industry compliance in the public sector, healthcare, energy, and financial services has long been a challenge for CISOs, but as Bloomberg financial columnist Matt Levine likes to say, "everything is securities fraud," and data breaches are part of everything, so:
You know my theory: Every bad thing that a public company does is also securities fraud. If there is a data breach or sexual harassment or animal mistreatment or pollution at a company, and the public finds out about it and the stock goes down, then someone will sue the company, arguing that it defrauded shareholders by leading them to believe that there wouldn’t be a data breach or sexual harassment or whatever. And this theory is weird, and people sometimes say “wait shouldn’t securities fraud be for, like, accounting misstatements, not sexual harassment?” But it is a little hard to articulate a limiting principle. If a company convinces investors that it is good, but it is in fact bad, then it has defrauded them, and badness is not limited to accounting.
In some sense, this ruling just affirms the status quo, since data breaches could already trigger lawsuits and CISOs were already being punished for covering them up. There are three reasons it's significant. First, it sets a four-day clock to disclose once a breach is deemed material. Second, it removes any doubt on the part of CEOs, board members, etc. that failing to disclose a breach is illegal, so CISOs, while under fire, have something concrete to point to if ordered to cover things up. Finally, and most importantly:
The new rules also add Regulation S-K Item 106, which will require registrants to describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats, as well as the material effects or reasonably likely material effects of risks from cybersecurity threats and previous cybersecurity incidents. Item 106 will also require registrants to describe the board of directors’ oversight of risks from cybersecurity threats and management’s role and expertise in assessing and managing material risks from cybersecurity threats. These disclosures will be required in a registrant's annual report on Form 10-K.
CISOs have been held to often unattainable standards given their resources and the threats they face. Now they can at least document that they attempted to take reasonable precautions before the fact. It won't be long until we see our first cyber whistleblower award. Just in the last year, we've had the lids blown off the LIBOR-fixing scandal, a pharma kickback scheme, and a telecom bribery operation.
In some ways, it's not surprising that the SEC is taking the reins on technology risk. After all, the financial services industry is usually the leader in tackling cyber risk. Over at the G-SIBs (Global Systemically Important Banks; why it's not Banks Important to the Global System, or BIGS, I'll never know), there's a bit of a row over who owns information technology risk: information technology or risk. It's a quirk of the financial industry that risk is even a stand-alone department. Partly due to the principal-agent problem, with many bank employees having perverse personal incentives to take risks, housing risk in a separate department may be a feature, not a bug. Still, half of the banks surveyed had an in-business-unit information security team.
Of the respondents, only half attempted to model cyber risk using a loss distribution, but all of them used their forecasting tools, from qualitative scenarios to regressions, for determining economic, though not regulatory, capital. In addition to risk measurement, there was divergence in team sizes, with some banks double-counting general IT staff as risk managers and others fielding dedicated teams of 10-50 people. It's an important question, and not just for cybersecurity, whether to break out risk management as its own discipline or treat it, as many argue, as a specialization of a broader field such as IT. Given the financial services industry's track record at avoiding crises, the approach is not very reassuring. But with other sectors like healthcare, energy, and government lagging, and the rest of the economy even further behind, there aren't many good models for what effective risk management looks like, and even if there were, I suppose we wouldn't hear much about them until the model breaks.
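For readers who haven't seen one, the loss-distribution approach the banks split over can be sketched as a simple frequency-severity Monte Carlo. The distributions and parameters below are illustrative assumptions on my part (a Poisson incident count and lognormal per-incident severity are common textbook choices), not any bank's actual model:

```python
import math
import random

def poisson(rng, lam):
    """Draw a Poisson variate via Knuth's method (fine for small lambda)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_annual_losses(freq_lambda=3.0, sev_mu=11.0, sev_sigma=1.5,
                           trials=100_000, seed=42):
    """Frequency-severity Monte Carlo for annual cyber loss.

    Illustrative assumptions:
      - incident count per year ~ Poisson(freq_lambda)
      - per-incident loss ~ Lognormal(sev_mu, sev_sigma), median ~ $60k
    Returns one simulated annual loss per trial.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        n = poisson(rng, freq_lambda)
        losses.append(sum(rng.lognormvariate(sev_mu, sev_sigma) for _ in range(n)))
    return losses

losses = sorted(simulate_annual_losses())
mean_loss = sum(losses) / len(losses)
p99_loss = losses[int(0.99 * len(losses))]  # tail percentile, the capital-style figure
print(f"Expected annual loss:  ${mean_loss:,.0f}")
print(f"99th percentile loss:  ${p99_loss:,.0f}")
```

The gap between the mean and the 99th percentile is the whole point of the exercise: economic capital is set against the tail, not the average, which is why a loss distribution tells you something a single-number forecast or qualitative scenario cannot.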