Not one but two artificial intelligence tools have been rolled out at the Food and Drug Administration. And, unsurprisingly, not one but two of them appear to suck.
FDA Commissioner Martin Makary has been hyped about jamming AI into his department. Regrettably, he’s significantly less hyped about things the FDA should be hyped about, like vaccines. But since Makary is functionally an anti-vaxxer who says that the Centers for Disease Control and Prevention’s vaccine advisory panel is a “kangaroo court” that “rubber stamps” all vaccines, perhaps it’s best if he stays fixated on AI.
The FDA announced its deployment of AI last month in a news release that reads more like a pitch deck.
“I was blown away by the success of our first AI-assisted scientific review pilot. We need to value our scientists’ time and reduce the amount of non-productive busywork that has historically consumed much of the review process,” Makary said in the release.
And Jin Liu, deputy director of the FDA’s Office of Drug Evaluation Sciences, echoed Makary’s sentiment.
“This is a game-changer technology that has enabled me to perform scientific review tasks in minutes that used to take three days,” Liu said.
Of course, the actual AI tools do not even remotely resemble this description.
One tool, CDRH-GPT, is designed to assist employees at the FDA’s Center for Devices and Radiological Health, which reviews the safety of medical devices. The tool is supposed to speed up reviews and approvals of such devices, but so far it isn’t connected to any other FDA internal computer systems, nor to external internet sources like medical journals. People familiar with the tool told NBC that it also has trouble with basic tasks like uploading documents or allowing users to submit questions.
The other tool is Elsa, which Makary announced on Monday. Elsa was already being used to “accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets.” Sounds impressive! But when staff tested the tool by asking it questions about publicly available information, Elsa’s responses were incorrect.
FDA leadership is no doubt counting on these AI tools working—or at least for all of us to pretend that they're working—because Health and Human Services Secretary Robert F. Kennedy Jr. purged many of the FDA’s top leaders and slashed the workforce by about 3,500 employees.
The FDA isn’t the first agency to try to make a big AI splash, though. That honor belongs to the General Services Administration, which rolled out GSAi (ugh) in March to great fanfare under the direction of former (or maybe current?) Department of Government Efficiency head Elon Musk. GSA employees were told that the options for its uses “are endless,” including tasks like drafting talking points or summarizing emails.
But one user told WIRED that “it’s about as good as an intern. Generic and guessable answers.”
Not exactly world-changing. Moreover, after the March launch, mention of GSAi simply vanished—no glowing press releases about how much time it’s saved, no mention of how it’s improving. Nothing.
It isn’t clear whether any of GSAi was built on Musk’s racist chatbot Grok, but it appears that Musk and DOGE have already let Grok loose in the Department of Homeland Security, and there are so very many problems with this.
First, there doesn’t appear to have been any official testing or review of Grok. Second, no one knows what data it’s being trained on, which means it could be ingesting sensitive or confidential government information. Third, it could theoretically give Musk access to all sorts of nonpublic data about rival corporations. Fourth, and this one hardly bears mentioning since conflicts of interest no longer matter: leveraging a government post to install your privately owned AI chatbot in federal departments is typically frowned upon.
It looks like HHS already had the bright idea to let AI draft some of the Make America Healthy Again report. The first release of the report featured health studies that don’t exist, studies attributed to authors who didn’t write them, and citations to the same study with different authors, all telltale signs of AI-generated text.
But rather than acknowledge that rolling out a health initiative with a bunch of made-up stuff is kind of not great for the American people, Calley Means, Kennedy’s MAHA point man, went on NewsNation to say that the incorrect citations were “a great disservice to President Trump and Bobby Kennedy.”
The Trump administration has already decided to regulate AI with the very lightest of touches, so it doesn’t look like it will be preventing the government from adopting half-baked AI tools. None of these tools has gone through the kind of lengthy testing that the federal government would usually require, and it’s not clear that they can deliver even a fraction of what’s promised.
Instead, they offer inaccuracies and fragmented information to workforces decimated by DOGE cuts. It’s in no way clear how relying on incomplete and inaccurate technology is more efficient than leveraging the thousands of federal employees who already have highly specialized knowledge.
But since the Trump administration fired all of those people, half-baked AI it is.