The Federal Trade Commission (FTC) has taken legal action against five companies for deceptive practices related to their AI technologies, following through on its commitment to regulate unethical AI use.
These lawsuits come under the FTC’s “Operation AI Comply,” targeting firms that either exaggerated their AI’s capabilities or used it to break the law.
The cases in focus
Three of the companies—Ascend Ecom, Ecommerce Empire Builders, and FBA Machine—are fighting the allegations in court. They were accused of selling get-rich-quick schemes that claimed their AI-driven tools could build profitable online stores. Instead, the promised profits never materialized, and consumers lost millions.
Each company is now under a court order to cease operations while its case unfolds.
Misleading AI in action
FTC Chair Lina Khan emphasized that using AI for fraudulent purposes is illegal, stating, “The FTC’s enforcement actions make clear that there is no AI exemption from the law.”
The crackdown is part of the agency’s larger mission to protect consumers from deceptive practices and ensure ethical AI development.
Settled cases: DoNotPay and Biden robocalls
Two companies have already settled with the FTC. One of the more familiar names, DoNotPay, known for its “robot lawyer,” settled charges that it misrepresented its AI as a substitute for a human lawyer. Despite the hype, the service failed to match the work of real attorneys, leaving users with incomplete documents and unresolved issues.
The company agreed to pay $193,000 and to notify past users of the service’s limitations.
In a separate case, Steve Kramer, who created AI-generated robocalls impersonating President Joe Biden, was fined $6 million by the Federal Communications Commission (FCC). The calls violated the Telephone Consumer Protection Act, and further lawsuits against Kramer are ongoing.
Rytr’s AI review generator under scrutiny
The most controversial case involves Rytr, an AI company whose tool allowed users to generate fake online reviews. This practice violated the FTC’s rules on deceptive advertising, especially since many of the fake testimonials contained false details unrelated to the products or services in question.
Although the FTC’s decision was contentious, with some commissioners dissenting, Rytr has agreed to stop offering its AI review-generating services.
Critics of the case, including former FTC Chief Technologist Neil Chilson, argue that holding AI companies responsible for user-generated content sets a dangerous precedent. He expressed concerns that this decision could stifle innovation by penalizing developers for how users misuse their tools, even if the company itself didn’t cause harm.
What’s next?
The FTC’s actions mark a significant moment in the regulation of AI technologies. While some worry this could chill innovation, the agency is standing firm on enforcing consumer protection laws, making it clear that AI developers will be held accountable for how their technology is used.
As AI continues to evolve, this might be just the beginning of stricter oversight in the industry.
Featured image credit: Emre Çıtak/Ideogram AI