AI News Roundup – DOGE testing chatbot for civil service, DeepMind unveils robotics-focused AI models, Spain proposes massive fines for violating AI disclosure rules, and more
- March 17, 2025
- Snippets
Practices & Technologies
Artificial Intelligence

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
-
- The Atlantic reports that Elon Musk’s Department of Government Efficiency is testing an AI chatbot at the General Services Administration. The GSA is a federal agency that supports other agencies within the federal bureaucracy, and is pursuing an “AI-first strategy” at the behest of DOGE and the Trump administration. DOGE, a controversial group aiming to downsize the federal government, has created a chatbot for use by GSA staffers dubbed “GSA Chat.” According to an internal email, the chatbot is intended to help employees “work more effectively and efficiently.” GSA Chat operates similarly to other common AI chatbots like ChatGPT, using a prompt-response model. DOGE intends to roll out the chatbot to other federal agencies under the name “AI.gov.” However, some GSA employees have been wary of the new chatbot, citing its tendency to “hallucinate” and raising questions about the legality of entering sensitive information into the AI system.
-
- The Financial Times reports on new AI models for robotics developed by Google’s DeepMind. The new models, Gemini Robotics and Gemini Robotics-ER, apply “reasoning” techniques to models designed to control robotic arms with three goals in mind: adjusting to new situations, responding quickly to instructions, and manipulating objects. According to the company, robotic arms using the new models can perform precise tasks, such as folding origami or packing items into plastic bags. DeepMind is partnering with Apptronik and other robotics companies, including Agile Robots and Boston Dynamics, to continue testing the models in real-world robotics scenarios.
-
- Reuters reports that Spain’s government has introduced new punitive measures against companies that fail to properly label AI-generated content. Spain’s Digital Transformation Minister Oscar Lopez told reporters that the new measures follow guidelines from the European Union’s AI Act, which took effect earlier this year. In particular, the rules are intended to combat the rise of “deepfakes” – AI-edited videos that are presented as real. If the measures are passed by Spain’s parliament, companies that do not label AI-generated content as such may be subject to fines of up to 35 million euros ($38.2 million) or 7% of their global annual turnover. AI regulation has been a controversial topic in recent years – Spain is among the first countries in the EU to implement AI regulations, though other countries in the bloc also have regulations under consideration.
-
- Bloomberg reports that OpenAI is petitioning the Trump administration to help protect AI companies from state-level AI regulations in the United States. In a response to a White House request for input on AI policy, the company argued that the many AI-related regulatory measures currently pending across the states could hinder progress in the technology and cause the U.S. to fall behind China in developing new AI models. OpenAI proposed that, in exchange for providing the federal government voluntary access to their AI models, AI companies should be shielded from state regulations. The Trump administration has generally taken a hands-off approach to AI regulation, and federal legislation on the topic has yet to gain any traction. In the same response, OpenAI also called for copyright reform and further government support of AI infrastructure (such as datacenters).
-
- The Associated Press reports that a group of publishers and authors in France is suing Meta over the use of their copyrighted works to train the company’s AI models. The National Publishing Union, the National Union of Authors and Composers, and the Societe des Gens de Lettres have all sued Meta in a Paris court. They accuse the company of “massive use of copyrighted works without authorization” to train its AI models, which are used in Facebook, Instagram, and WhatsApp. Under the European Union’s AI Act, companies must comply with copyright law and disclose the material used to train their AI models, though it remains to be seen whether Meta has complied with these directives.