HHS Foists Hallucination Machine on Entire Department

A black screen with OpenAI's logo and text reading "How can I help you today?"
Image via OpenAI/Wikimedia Commons

Hello, and welcome to my relaunched newsletter. Read more about the deal here, and please subscribe!


On Tuesday morning, leadership at the Department of Health and Human Services sent an email to its entire staff of tens of thousands. Shared with me by a source inside HHS, it proudly announced that OpenAI's ChatGPT was being made available "to everyone in the department effective immediately."

The message was sent by Deputy Secretary Jim O'Neill, and it noted that some parts of the department — the Food and Drug Administration and the Administration for Children and Families, specifically — were already making use of the chatbot.

"In many offices around the world, the growing administrative burden of extensive emails and meetings can distract even highly motivated people from getting things done," O'Neill wrote to a staff that has faced thousands of firings and other indignities for eight months. "We should all be vigilant against barriers that could slow our progress toward making America healthy again."

That vigilance, then, means that the staff of agencies like the National Institutes of Health, the Centers for Disease Control and Prevention, and the Centers for Medicare and Medicaid Services should, apparently, start using something that O'Neill himself admits can get things very, very wrong. "You should be skeptical of everything you read," the email noted, and anything ChatGPT tells you should be considered suspect — "treat answers as suggestions."

This follows an early August deal in which OpenAI said it was offering its product to federal agencies for $1 for a year — a clear attempt to worm its way into the root functioning of government in such a way that would be hard to extricate in the years when that price jumps a tad. And while O'Neill urged skepticism, he also didn't shy away from the sort of grandiose rhetoric that AI true believers tend to spout.

ChatGPT, he said in the email, can "promote rigorous science, radical transparency, and robust good health. As Secretary Kennedy said, 'The AI revolution has arrived.'"

That's the same Secretary Kennedy who, back in May, touted the "gold-standard" science behind his new Make America Healthy Again report — after which we learned that at least seven of the studies that report cited were fabricated out of whole cloth, almost certainly by generative AI. "Rigorous science" indeed.

O'Neill wrote that the hallucination machine has been granted an "Authority to Operate" at the FISMA Moderate level — that refers to the Federal Information Security Modernization Act, where a "loss of confidentiality, integrity, or availability" could mean "a significant deterioration of mission capability," among other things. Though it is hardly alone in this, OpenAI does not exactly have a spotless security record.

There is something of a contradiction inherent to this move: dropping a technology with such well-documented failure modes into literally every employee's lap while urging them to remember those failures. "Before making a significant decision, make sure you have considered original sources and counterarguments," O'Neill wrote in his message. "Like other LLMs, ChatGPT is particularly good at summarizing long documents" — only, maybe it's not?

One analysis of several LLMs published in August found that for long summaries of meetings or long documents, "the results were surprisingly poor." Long AI-generated summaries "also had more hallucinations" than the shorter, better summaries the models produced — though the models did generate them hours faster than a human could. It's wrong, but at least you have it fast.

It isn't clear just yet exactly how HHS employees are expected to use their new friend, and the message simply touted its availability rather than mandating its use. But given the broad swath of the department's functions and their close connection to all of our health, personal information, and more, along with the ongoing attempts to shrink the department to a fraction of its former, functional size, it isn't hard to imagine one hallucination or another having fairly dire knock-on effects.
