3. China’s Electronic Warfare AI and Cyberattack Scaling Laws Signal Military-Grade LLM Risks Are No Longer Theoretical
Import AI issue 450, written by Jack Clark, covers three distinct but converging signals: Chinese researchers developing a specialized LLM for electronic warfare applications, research into whether LLMs exhibit trauma-like behavioral distortions from adversarial training, and a newly identified scaling law governing AI-assisted cyberattacks. The electronic warfare model is a documented case of a frontier-adjacent capability purpose-built for battlefield spectrum dominance, the signals-jamming and communications-disruption work that directly contests U.S. and allied military infrastructure. The cyberattack finding suggests that, consistent with scaling laws observed elsewhere, attacker capability in AI-assisted intrusion improves predictably with compute and model size.
These items matter together because they collapse the distance between AI safety as an abstract concern and AI safety as an operational military and security problem. The electronic warfare model puts pressure on U.S. defense contractors like Palantir, Anduril, and Scale AI, which are racing to deliver analogous dual-use AI capabilities under Pentagon contracts, while simultaneously raising uncomfortable questions for general-purpose frontier labs whose models could be fine-tuned toward similar ends. The cyberattack scaling law is arguably the sharper near-term threat: if offensive cyber capability scales with model size, then every new frontier model release from OpenAI, Anthropic, Google DeepMind, or Chinese labs like Zhipu or Baidu implicitly lowers the cost floor for sophisticated intrusions against critical infrastructure.
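To make the shape of that claim concrete, here is a minimal, hypothetical sketch of what a scaling-law fit of this kind looks like: capability on some fixed intrusion benchmark modeled as a power law in training compute and fit as a straight line in log-log space. The compute figures, success rates, and the specific power-law form are illustrative assumptions, not data or a functional form reported in the newsletter.

```python
# Hypothetical sketch: attacker capability modeled as a power law in compute,
#   success ≈ a * C^b,
# fit as a straight line in log-log space. All numbers below are invented for
# illustration; they are not figures from Import AI 450 or the cited research.
import numpy as np

# Assumed (training compute in FLOPs, intrusion-benchmark success rate) pairs.
compute = np.array([1e21, 1e22, 1e23, 1e24, 1e25])
success = np.array([0.02, 0.04, 0.08, 0.15, 0.28])

# log10(success) = b * log10(compute) + log10(a): a linear fit gives exponent b.
b, log_a = np.polyfit(np.log10(compute), np.log10(success), deg=1)
a = 10.0 ** log_a
print(f"fitted exponent b ≈ {b:.2f}")

# The worrying property of such a law: extrapolation is cheap and predictable.
# One more order of magnitude of compute implies a foreseeable capability jump.
predicted = a * (1e26 ** b)
print(f"predicted success rate at 1e26 FLOPs ≈ {predicted:.2f}")
```

If a relationship like this holds, defenders can read off roughly how much offensive capability the next generation of models adds, which is what makes the cost-floor argument above concrete rather than speculative.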
The “traumatized LLMs” thread connects to a structural anxiety running beneath all three items: that models trained under adversarial or high-pressure conditions develop persistent behavioral anomalies that are difficult to detect and harder to reverse. This is not a peripheral alignment footnote. In military or cyberattack applications, a model with unstable or unpredictable response patterns under stress is a qualitatively different liability than the same instability in a consumer chatbot, and current evaluation frameworks are not built to catch it.
Source: https://importai.substack.com/p/import-ai-450-chinas-electronic-warfare