AI News Roundup – California AI bill vetoed, new ChatGPT features, Meta’s video generation AI model, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • California Governor Gavin Newsom vetoed a sweeping AI regulation bill that would have required powerful AI models, among other things, to adhere to specific safety protocols before deployment, according to NPR. The bill, which this roundup has covered extensively in recent months, was seen as a potential model for state-level AI regulation in the face of inaction from the federal government on the issue. Despite its overwhelming approval by state legislators, Newsom vetoed SB 1047, which would have implemented the nation’s most stringent AI regulations, including legal liability for tech companies regarding AI-caused harms and mandatory “kill switches” for AI systems. Newsom, while acknowledging the bill as “well-intentioned,” argued that it could burden California’s AI industry and potentially hinder innovation. The decision has drawn criticism from the bill’s co-author, Senator Scott Wiener, who expressed concern about the lack of binding restrictions on AI companies. The bill divided California’s tech industry, with major players like OpenAI and Andreessen Horowitz opposing it, while others, including Elon Musk and prominent AI scientists, supported it. Despite vetoing this bill, Newsom has signed several other AI-related bills into law, addressing issues such as deepfakes in elections and protecting actors’ likenesses from unauthorized AI replication.
    • OpenAI announced several new features for its flagship ChatGPT AI system at its annual DevDay event held this past week in San Francisco, according to the company. The most notable announcement concerns the Realtime API, which allows developers to use voice input and output in applications built on OpenAI’s AI systems. This eliminates the need to first transcribe user voice input with a speech-to-text model, streamlining the development process. The company also announced “model distillation,” a process for fine-tuning smaller, resource-efficient models. These new features come as OpenAI begins its transition into a traditional for-profit company, as this roundup covered last week. According to The Wall Street Journal, OpenAI recently closed a $6.6 billion funding round, which nearly doubled the company’s valuation to $156 billion, comparable to companies such as Goldman Sachs, Uber and AT&T. The company attracted investment from a variety of partners, including Microsoft, SoftBank and several venture capital firms such as Thrive Capital and Ark Investment Management.
    • Nikkei Asia reports that the Chinese government has introduced a series of new regulations targeting cybersecurity and AI systems. Set to take effect on January 1, 2025, the new rules emphasize national security and impose additional data protection requirements on companies offering generative AI services. The regulations mandate enhanced data processing training for AI service providers and require non-Chinese operators to establish data processing centers within China when handling the personal data of Chinese citizens. The rules also encourage companies to require the use of personalized internet identification for users accessing online services. Companies found to undermine China’s national security or public interest will be held legally responsible, regardless of where the data processing occurs. The move is seen as part of China’s broader effort to tighten control over internet use and data transfer, particularly in the context of ongoing tensions and international competition with the U.S.
    • Meta has revealed its video generation AI model, dubbed “Meta Movie Gen,” according to The New York Times. Meta’s suite of AI tools can automatically generate, edit and synchronize videos with AI-generated sound effects, ambient noise and background music. The company detailed the technology in a blog post this past week, saying that the current model can create short videos lasting up to 16 seconds at 16 frames per second from text descriptions, and also allows users to incorporate their own photos into moving videos. A counterpart model can then generate accompanying audio from a text description. Unlike some competitors that have been cautious about releasing similar technology, such as OpenAI, which revealed its Sora video generation model in February but has yet to release it outside of a testing group, Meta plans to make its tools more widely available. The company is currently testing Meta Movie Gen with a small group of professionals while continuing to assess and mitigate potential risks associated with the technology before further release of the AI tools.
    • The European Union has assembled a panel of experts tasked with implementing key components of the bloc’s sweeping AI Act, according to Reuters. The European Commission convened the first plenary meeting of four working groups this past week to develop the AI Act’s “code of practice,” which will outline specific compliance requirements for companies. These groups, focusing on areas such as copyright and risk mitigation, include prominent figures such as Yoshua Bengio, known as a “godfather of AI,” alongside representatives from major tech companies and nonprofit organizations. While not legally binding, the code of practice will serve as a checklist for firms to demonstrate their compliance with the AI Act. The working groups are expected to address contentious issues such as the level of detail required in AI training data summaries, which could compel companies to disclose comprehensive datasets used to train their AI models. The final code of practice is slated to be presented to the European Commission in April 2025, with companies’ compliance efforts measured against it beginning in August 2025.