Embracing Generative AI: How U.S. Intelligence Agencies Are Using Advanced Technology to Combat Illicit Activities

Long before the generative AI boom swept the United States, a Silicon Valley firm was already ahead of the curve. The firm was contracted to collect and analyze non-classified data on illicit Chinese fentanyl trafficking, and its results were groundbreaking: its generative AI far outperformed human-only analysis, identifying twice as many companies and individuals engaged in illegal or suspicious activities related to the deadly opioid.

Excited U.S. intelligence officials publicly praised the results of the AI, which drew its connections from internet and dark-web data. The firm, known as Rhombus Power, shared its findings with Beijing authorities, urging them to crack down on the illicit activity. Notably, the operation, known as Sable Spear, used generative AI to provide U.S. agencies with evidence summaries for potential criminal cases three years before OpenAI released ChatGPT.

Brian Drake, the Defense Intelligence Agency’s then-director of AI and the project’s coordinator, emphasized that artificial intelligence saved countless work hours in analyzing the data. For a different U.S. government client, Rhombus Power later used generative AI to predict Russia’s full-scale invasion of Ukraine four months in advance. The firm also alerts government customers, whose names it does not disclose, to imminent North Korean missile launches and Chinese space operations.

U.S. intelligence agencies are now racing to embrace the AI revolution, believing they must keep pace with exponential data growth and advances in surveillance technology. Officials are mindful, however, that generative AI remains young and brittle: these prediction models, trained on vast datasets to generate text, images, video, and human-like conversation, are not tailor-made for the deceptive world of illicit activity.

CIA Director William Burns recently highlighted the need for sophisticated AI models that can digest massive amounts of open-source and clandestinely acquired information. The CIA’s inaugural chief technology officer, Nand Mulchandani, compared gen AI models to a “crazy drunk friend”: capable of great insight and creativity, but also prone to bias and deception.

Despite the security and privacy concerns surrounding generative AI, experimentation continues in secret across U.S. intelligence agencies. Thousands of analysts now use a CIA-developed gen AI called Osiris, which operates on unclassified and publicly available data. Mulchandani emphasized the importance of clearly marking the sources of that information.

While gen AI shows promise as a virtual assistant for sifting through vast amounts of data, officials maintain that it will never replace human analysts. The CIA is exploring various gen AI models without committing to any single one, as the technology continues to evolve rapidly. Mulchandani stressed that analysts must be able to verify information an AI provides with absolute certainty, especially when working on classified networks.

The use of generative AI in intelligence operations has shown great potential, but it also poses significant challenges. As U.S. agencies navigate this evolving landscape, balancing the power of AI against the accuracy and security of intelligence remains a top priority.

