This blog, in February:
“we should totally expect to see legislation, quite soon, telling us that ‘nobody should be permitted more than [so many bogoMIPS, computrons, or teraflops] at home’”
https://alecmuffett.com/article/109130
Now comes Lawfare:
The importance of compute to AI capabilities and the feasibility of governing it make it a key intervention point for AI governance efforts. In particular, compute governance can support three kinds of AI governance goals: It can help increase visibility into AI development and deployment, allocate AI inputs toward more desirable purposes, and enforce rules around AI development and deployment.
Visibility is the ability to understand which actors use, develop, and deploy compute-intensive AI, and how they do so. The detectability of compute allows for better visibility in several ways. For example, cloud compute providers could be required to monitor large-scale compute usage. By applying processes such as know-your-customer requirements to the cloud computing industry, governments could better identify potentially problematic or sudden advances in AI capabilities. This would, in turn, allow for faster regulatory response.
https://www.lawfaremedia.org/article/to-govern-ai-we-must-govern-compute
…and
One enforcement mechanism discussed in our paper is physically limiting chip-to-chip networking to make it harder to train and deploy large AI systems
…and
The power to decide how large amounts of compute are used could be allocated via digital “votes” and “vetoes,” with the aim of ensuring that the most risky training runs and inference jobs are subject to increased scrutiny. While this may appear unnecessary relative to the current state of largely unregulated AI research, there is precedent in the case of other high-risk technologies: Nuclear weapons use similar mechanisms, called permissive action links (security systems that require multiple authorized individuals in order to unlock nuclear weapons for possible use).
In their defence, they do make a small nod to the possibility that such controls could have illiberal consequences, but they don’t appear to be very worried about that in proportion to the rest of the content; certainly they believe it is solvable with even more regulation.
So, back to ITAR and Cryptowars 1.0 thinking, then?
Previously: