The Department of Homeland Security has seen firsthand both the opportunities and the risks of artificial intelligence. It located a trafficking victim years after the crime by using an A.I. tool that generated an image of the child a decade older. But it has also been misled in investigations by deepfake images created with A.I.
Now, the department is leading the way as the first federal agency to adopt the technology, with a plan to integrate generative A.I. models across a wide range of divisions. Through partnerships with OpenAI, Anthropic, and Meta, it will launch pilot programs that use chatbots and other tools to help fight drug and human trafficking crimes, train immigration officials, and prepare for emergency management nationwide.
The rapid adoption of this still unproven technology is part of a broader effort to keep pace with the changes driven by generative A.I., which can create highly realistic images and videos and mimic human speech.
“One cannot ignore it,” said Alejandro Mayorkas, the secretary of homeland security. “And if one isn’t forward-leaning in recognizing and being prepared to address its potential for good and its potential for harm, it will be too late, and that’s why we’re moving quickly.”
The decision to incorporate generative A.I. agency-wide shows how new technologies like OpenAI’s ChatGPT are forcing even the most staid institutions to reconsider how they work. Still, government agencies like the D.H.S. are likely to face intense scrutiny over their use of the technology, which has been criticized as unreliable and, at times, discriminatory.
Agencies across the federal government have rushed to devise plans in response to President Biden’s executive order, which mandates safety standards for A.I. throughout the government. The D.H.S., which employs 260,000 people, is charged with safeguarding Americans inside the country’s borders, including combating human and drug trafficking, protecting critical infrastructure, responding to disasters, and patrolling the border.
As part of its plan, the agency intends to hire 50 A.I. experts to help defend the nation’s critical infrastructure against A.I.-generated threats and to combat the use of the technology for illicit ends such as child exploitation and the creation of biological weapons.
The agency will put $5 million toward pilot programs that use A.I. models like ChatGPT to aid investigations into child abuse materials, human trafficking, and drug trafficking. It will also work with companies to analyze text-based data for investigative purposes. In addition, chatbots will be used to train immigration officials and assist in disaster relief planning.
The department will report results from its pilot programs by the end of the year, said Eric Hysen, its chief information officer and head of A.I. The agency has chosen OpenAI, Anthropic, and Meta to experiment with a variety of tools and will use cloud providers Microsoft, Google, and Amazon in its pilots.
“We cannot do this alone,” Mr. Hysen said. “We need to collaborate with the private sector to establish responsible use of generative A.I.”