Before the boom of generative AI in the United States, a Silicon Valley firm was already making waves in the intelligence community with its use of the technology to combat illicit Chinese fentanyl trafficking. The firm, Rhombus Power, was contracted to collect and analyze unclassified data on illegal activity, particularly in the deadly opioid trade.
The results of Rhombus Power’s operation, codenamed Sable Spear, were groundbreaking. The firm’s generative AI uncovered twice as many companies and individuals engaged in illegal or suspicious activity as human-only analysis had. That success caught the attention of U.S. intelligence officials, who were impressed by the AI’s ability to draw connections from internet and dark-web data.
One key aspect of the operation that has not been previously reported is the use of generative AI to provide U.S. intelligence agencies with evidence summaries for potential criminal cases. This technology, which Rhombus Power developed three years before the release of OpenAI’s ChatGPT, saved analysts countless work hours. Brian Drake, the Defense Intelligence Agency’s then-director of AI, praised the technology, stating, “You wouldn’t be able to do that without artificial intelligence.”
In addition to its success against fentanyl trafficking, Rhombus Power’s generative AI was used to predict Russia’s full-scale invasion of Ukraine four months in advance for a separate U.S. government client. The firm says it has also alerted government customers to imminent North Korean missile launches and Chinese space operations.
The use of generative AI in intelligence operations is becoming increasingly common as U.S. agencies seek to keep pace with exponential data growth and emerging surveillance technologies. Officials are aware of the challenges posed by this technology, however, including its immaturity and susceptibility to bias.
CIA Director William Burns recently emphasized the need for sophisticated AI models that can analyze vast amounts of data from both open-source and clandestine sources. The CIA’s chief technology officer, Nand Mulchandani, likened generative AI models to a “crazy drunk friend”: capable of great insight, but also prone to bias and fabrication.
Despite these challenges, thousands of analysts across U.S. intelligence agencies are now using a CIA-developed generative AI called Osiris. This technology, which runs on unclassified data, provides annotated summaries and allows analysts to delve deeper with queries. Mulchandani noted that the CIA is experimenting with various commercial AI models but has not committed to any one technology.
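The article does not describe how Osiris works internally, but the workflow it attributes to the tool, ingesting unclassified documents, returning annotated summaries, and fielding follow-up queries, maps onto a familiar chat-completion pattern. The Python sketch below illustrates only that generic pattern, assuming the public OpenAI SDK; the model choice, prompts, and `summarize`/`follow_up` helpers are hypothetical illustrations, not the CIA’s implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system prompt; not drawn from the article.
SYSTEM = (
    "You are an analyst assistant. Summarize the provided unclassified "
    "documents and annotate each claim with the document it came from."
)

def summarize(documents: list[str]) -> list[dict]:
    """Return a chat history seeded with an annotated summary of the documents."""
    corpus = "\n\n".join(f"[DOC {i + 1}]\n{doc}" for i, doc in enumerate(documents))
    history = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"Summarize these documents:\n\n{corpus}"},
    ]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    history.append({"role": "assistant", "content": reply.choices[0].message.content})
    return history

def follow_up(history: list[dict], question: str) -> str:
    """Let the analyst drill into the summary with a follow-up query."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

if __name__ == "__main__":
    docs = ["Open-source report text...", "Dark-web forum excerpt..."]
    history = summarize(docs)
    print(history[-1]["content"])  # annotated summary
    print(follow_up(history, "Which entities appear in more than one document?"))
```

Carrying the full message history forward is what lets a follow-up query reference entities from the earlier summary; a production system would presumably add retrieval over a far larger corpus rather than inlining documents into the prompt.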
While generative AI shows promise as a virtual assistant in intelligence operations, officials maintain that it will never fully replace human analysts. Its limitations, along with security and privacy concerns, underscore the need for careful, ethical deployment across the intelligence community.