I’ve recently been diving deep into two excellent blueprints, “A Narrow Path” and “The Compendium,” which articulate the risks of unfettered AGI development and propose sane, clear frameworks for mitigating those risks. What is worrisome is how many of the mitigation strategies for this potentially civilization-ending technology require immediate, large-scale diplomatic measures to implement.
Recently, Amazon, Palantir, and Anthropic announced that they are collaborating to integrate advanced artificial intelligence (AI) capabilities into US defense operations. And they are not unique. OpenAI is similarly supporting the US military, and Meta recently amended its terms of service to allow its open-source models to be used by the US military, in part because the Chinese military was already leveraging them.
In a world where the US, China, and the EU must cooperate immediately to agree on universal AI safety standards, this partnership and others like it raise significant concerns about our ability to implement safeguards on AI development. Is it already too late?
Even companies like Anthropic, founded with an explicit focus on AI safety, now find themselves aligning with defense contractors. I don’t see how we can establish a moratorium on AGI development, one that holds until we’ve figured out how to keep it from killing everyone, while AI is being rapidly integrated into military operations. And unlike with other technologies, we can’t wait until the superpowers already possess AGI to negotiate safeguards, since achieving AGI will likely confer an insurmountable military advantage, assuming it doesn’t result in the destruction of the entire planet first.
I don’t think this militarization of frontier AI models, or the US military’s pursuit of AGI, can be rolled back. Nor do I see how we get China to the table.
What, then, can the rest of the world do?