
Interruptive AI, Causing Adversarial Confusion: The New Geo-Political Battleground

AI is becoming an increasingly dominant factor in war-making, and is thereby becoming an increasingly attractive target for adversarial attack. Modern militaries quietly stand up an invasive AI command alongside their publicly acknowledged defensive AI corps. Invasive AI is practiced in two ways: (i) data feed contamination, and (ii) inferential malware. Battles on this ground have already been fought in the Russia-Ukraine war and in the 2025 US-Israel v Iran war -- by all sides. In future conflicts this battleground will be far more dominant.

Data feed contamination is straightforward. Any data your side puts up is instantly consumed by your adversary's AI in order to better assist your adversary. By posting data designed to steer that adversarial inference astray, one practices interruptive AI. There are two data feed contamination strategies: confidence busters and falsehood injection. The former is easier: flood the data realm with data that is inconsistent with the 'real' data. If done well, the adversary's AI assigns a high validity index to the confusing data and, in turn, issues its advice with a lower degree of confidence, making the AI less useful. Falsehood injection is more complicated, because raw data on any issue is normally surrounded by peripheral data that lends it credibility, and modern AI readers are adept at looking for such peripheral data. False data on an issue does not come with peripheral data naturally; it must be fabricated too. For example, a big troop movement is accompanied by a logistics reorganization: if data about a brigade moving along a given track is posted, but no associated logistics data appears, the report is spotted as a decoy.
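The two contamination strategies above can be sketched with a toy model. This is only an illustration under loose assumptions: the adversary's AI is reduced to a majority vote whose confidence is the share of agreeing reports, and the peripheral-data screen is a simple set comparison; all names and signal labels are hypothetical.

```python
from collections import Counter

def advise(reports):
    """Toy stand-in for an adversary's AI: majority vote over
    open-source reports; confidence = share of agreement."""
    route, n = Counter(reports).most_common(1)[0]
    return route, n / len(reports)

def peripheral_check(posted_signals, expected_signals):
    """Decoy screen: a real event should be surrounded by its
    expected peripheral signals (e.g. logistics traffic)."""
    missing = expected_signals - posted_signals
    return "credible" if not missing else "likely decoy"

# Confidence busting: inconsistent reports dilute agreement,
# so the advice is issued with lower confidence.
clean = ["route_A"] * 8
poisoned = clean + ["route_B"] * 4 + ["route_C"] * 4
_, conf_clean = advise(clean)        # 1.0
_, conf_poisoned = advise(poisoned)  # 0.5

# Falsehood injection without fabricated peripherals is spotted:
# a brigade report with no logistics trail fails the screen.
expected = {"fuel_reorder", "rail_booking", "field_hospital"}
verdict = peripheral_check({"fuel_reorder"}, expected)
```

The asymmetry in the paragraph shows up directly: diluting confidence needs only volume, while a believable falsehood must also fabricate every expected peripheral signal to pass the screen.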

Affecting the inferential pathway is more challenging. It is done by selling the adversary, through roundabout channels, inferential components that are contaminated. Much of the AI computational load is carried by dedicated hardware components supplied by NVIDIA and its competitors. It is virtually impossible to verify the logical wiring imprinted in such hardware, and it often takes only one local designer to install inferential hardware that stays stealthy throughout integrity checking. It is an unnerving thought that the AI you rely on may be subtly impacted by deep-seated AI malware. Militaries are feverishly developing detection countermeasures.
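One detection countermeasure of the kind alluded to above can be sketched as redundant execution: run the same inference on an independently sourced reference device and flag divergence. This is a minimal illustration, not any military's actual method; the trigger pattern, the parity "inference," and both accelerator functions are invented for the sketch.

```python
def reference_accel(x):
    """Trusted, independently sourced device: honest parity of the input."""
    return sum(x) % 2

def fielded_accel(x):
    """Fielded device with a hypothetical hardware trojan: it behaves
    honestly except on a rare trigger pattern, which is why ordinary
    integrity checks pass while the malware stays stealthy."""
    if x[:2] == [7, 7]:               # hidden trigger
        return 1 - (sum(x) % 2)       # subtly flipped output
    return sum(x) % 2

def cross_check(batch):
    """Redundant execution: return the inputs on which the two
    supply chains disagree -- candidates for a planted trojan."""
    return [x for x in batch if reference_accel(x) != fielded_accel(x)]

benign = [[1, 2, 3], [4, 5]]
flagged = cross_check(benign + [[7, 7, 1]])  # only the trigger diverges
```

The design point is that a stealthy trojan defeats inspection of the wiring but not comparison against a second supply chain, which is exactly why it is a countermeasure rather than a verification technique.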

For elaboration on these techniques and for a discussion on countermeasures, please inquire with Prof. Gideon Samid, Gideon@BitMint.com
