What's Trending

Tracking trends critical to life sciences and technology companies. Subscribe to stay up to date.


Does Your AI Chatbot 'Hallucinate'? Beware.

The FTC Act grants the Federal Trade Commission (FTC) significant authority to create tailored rules and guidance protecting consumers in areas including advertising and privacy. When AI chatbots exhibit "hallucinations" by confidently providing inaccurate information, the FTC takes notice.

In the realm of commerce regulation, the FTC Act offers vast powers, and the FTC's ability to enforce its rules often determines the level of risk associated with a given practice. Companies using AI chatbots for customer service should be cautious: the FTC asserts that its authority extends to monitoring chatbot interactions to ensure they are not deceiving consumers. The FTC has already issued guidance (in the form of a list of "don'ts") for companies using AI chatbots. And requests for more formal guidance on the use of AI chatbots, such as the Electronic Frontier Foundation's, may pick up traction if companies do not heed these warnings.

Businesses employing AI technology should stay informed about the FTC's stance on chatbot use and take affirmative steps to ensure that their chatbots are not deployed deceptively or unfairly in ways that harm consumers; the consequences of such practices can be severe. By understanding the implications of the FTC Act and the agency's enforcement capabilities, companies can navigate the evolving landscape of AI regulation effectively.

The FTC Act gives the FTC the power to craft specific, fit-for-purpose rules and guidance that can protect Americans’ consumer, privacy, labor and other rights. Take the problem of AI “hallucinations,” which is the industry’s term for the seemingly irrepressible propensity of chatbots to answer questions with incorrect answers, delivered with the blithe confidence of a “bullshitter.”


regulatory, ai & machine learning, digital media & entertainment, software