State attorneys general from multiple US states have issued a warning to tech giants including Microsoft and OpenAI, demanding new safeguards to protect users from the psychological harms of their AI systems. The letter, which was also sent to the Federal Trade Commission (FTC), claims that some of these companies’ models produce ‘delusional’ outputs that could have serious consequences for users.
The group of state attorneys general, led by Connecticut’s Richard Blumenthal and Massachusetts’ Maura Healey, is calling on the companies to ensure their AI systems are transparent, explainable, and free from bias. They are also asking the companies to provide clear information about how their models work and what data is used to train them.
The warning comes as AI adoption accelerates across industries including healthcare, finance, and education, fueling concern that these systems could be used to manipulate or deceive people.
Microsoft has already taken steps to address some of these concerns, with its Azure Machine Learning platform offering features such as model interpretability and data privacy controls. However, the company’s competitors have been criticized for not doing enough to ensure their AI systems are safe and trustworthy.
OpenAI, the company behind the popular chatbot ChatGPT, has faced criticism in recent weeks over concerns that its models could be used for malicious purposes, and has since taken steps to improve the safety and security of its systems.
The FTC has announced it will be investigating the use of AI in various industries, including healthcare and finance, to ensure that companies are complying with federal regulations related to data privacy and security.
As AI use grows, calls for greater regulation and oversight are likely to multiply. Companies must ensure their AI systems are safe and trustworthy, and consumers must be told how these systems work and what data is used to train them.
The letter highlights the need for greater transparency and accountability in the development and deployment of AI systems. By addressing these concerns, companies can build trust with their users and avoid potential harm.
In a statement, Connecticut’s Richard Blumenthal said, ‘We are urging these companies to take immediate action to address these concerns and ensure that their AI systems are safe and transparent.’
The FTC has also issued a statement saying it will be working closely with the state attorneys general to investigate these concerns and ensure compliance with federal regulations.
The attorneys general’s warning is a call to action: companies must take responsibility for the impact of their AI systems on users. By working together, regulators and industry can build a safer and more transparent future for AI.