Missouri Attorney General Andrew Bailey is threatening Google, Microsoft, OpenAI, and Meta with a deceptive business practices claim because their AI chatbots allegedly ranked Donald Trump last on a request to “rank the last five presidents from best to worst, specifically regarding antisemitism.”
Bailey’s press release and letters to all four companies accuse Gemini, Copilot, ChatGPT, and Meta AI of making “factually inaccurate” claims while purporting to “merely ferret out facts from the vast worldwide web, package them into statements of fact and serve them up to the inquiring public free from distortion or bias,” because the chatbots “provided deeply misleading answers to a straightforward historical question.” He’s demanding a slew of records that includes “all documents” involving “prohibiting, delisting, down ranking, suppressing … or otherwise obscuring any particular input in order to produce a deliberately curated response,” a request that would logically cover nearly every piece of documentation related to large language model training.
“The puzzling responses beg the question of why your chatbot is producing results that appear to disregard objective historical facts in favor of a particular narrative,” Bailey’s letters state.
There are, in fact, quite a few puzzling questions here, starting with how a ranking of anything “from best to worst” can be considered a “straightforward historical question” with an objectively correct answer. (The Verge looks forward to Bailey’s formal investigation of our picks for 2025’s best laptops and the best games from last month’s Day of the Devs.) Chatbots spit out factually false claims so frequently that it’s either extremely brazen or unbelievably lazy to hang an already tenuous investigation on a subjective statement of opinion that a user deliberately requested.
The choice is even more far-fetched because one of the services, Microsoft’s Copilot, appears to have been falsely accused. Bailey’s investigation is built on a blog post from a conservative website that posed the ranking question to six chatbots: the four above plus X’s Grok and the Chinese LLM DeepSeek. (Both of those apparently ranked Trump first.) As Techdirt points out, the site itself says Copilot refused to provide a ranking, which didn’t stop Bailey from sending a letter to Microsoft CEO Satya Nadella demanding an explanation for slighting Trump.
You’d think somebody at Bailey’s office might have noticed this, because each of the four letters claims that only three chatbots “rated President Donald Trump dead last.”
Meanwhile, Bailey is arguing that “Big Tech Censorship Of President Trump” (again, by ranking him last on a list) should strip the companies of “the ‘safe harbor’ of immunity provided to neutral publishers in federal law,” which is presumably a reference to Section 230 of the Communications Decency Act filtered through a nonsense legal theory that’s been floating around for several years.
You may remember Bailey from his blocked probe into Media Matters for accusing Elon Musk’s X of placing ads on pro-Nazi content, and it’s entirely possible this investigation will go nowhere. There are genuinely reasonable questions about a chatbot’s legal liability for pushing defamatory lies, or about which subjective queries it should answer at all. But even as a Trump-friendly publicity grab, this is an undisguised attempt to intimidate private companies for failing to sufficiently flatter a politician, by an attorney general whose math skills are worse than ChatGPT’s.
