AI News Roundup – U.S. appeals court affirms that wholly AI-generated content is not eligible for copyright protection, California panel recommends further AI regulations, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • Reuters reports that a U.S. appeals court has ruled that a work of art created entirely by an AI system, without human input, is not copyrightable under U.S. law. DABUS, an AI system created by Stephen Thaler, allegedly generated “A Recent Entrance to Paradise,” a piece of visual art, in 2018. Thaler, who claims DABUS is “sentient,” applied to register a copyright for the work with the U.S. Copyright Office that same year; the application was ultimately rejected in 2022 on the grounds that creative works must be human-authored to be copyrightable. This past week, the U.S. Court of Appeals for the District of Columbia Circuit upheld the Copyright Office’s decision. An attorney for Thaler told CNBC that they plan to appeal the decision, up to the U.S. Supreme Court if necessary.
    • The MIT Technology Review reports on recent studies released by OpenAI in partnership with the MIT Media Lab on the effects of AI system use on users’ emotional health. The studies found that few ChatGPT users engage emotionally with the AI system, in contrast to the AI-powered “companion” apps that have proliferated in recent years. OpenAI and MIT analyzed over 40 million ChatGPT interactions related to emotional engagement, surveyed the more than 4,000 users who had those interactions, and recruited 1,000 users to answer questions about their ChatGPT use. The studies found that participants who “bonded” with ChatGPT were more likely to be lonely, and that participants who used ChatGPT’s voice mode with a voice that did not match their own gender were significantly lonelier than other participants. The studies are a first step toward understanding how AI use affects people and their interactions with other humans, a question that has received little formal study thus far.
    • CalMatters reports that a group of AI leaders working on behalf of California Governor Gavin Newsom has issued a set of recommendations urging the state to adopt AI transparency regulations. Last year, Newsom vetoed several bills aimed at regulating AI systems, legislation that drew both support and criticism from the state’s large tech companies, but he also formed the Joint California Policy Working Group on AI Frontier Models. The group’s report recommends that the state adopt rules protecting whistleblowers and possibly develop a system to inform government officials when AI systems have dangerous capabilities. Scott Wiener, a California state senator who sponsored a major AI regulation bill that Newsom vetoed, said the group’s recommendations would influence a revised version of his bill in future legislative sessions.
    • Semafor reports on Amazon’s push to develop AI-focused microprocessors and reduce its dependence on those made by Nvidia. Annapurna Labs, an Israeli startup that is now an Amazon subsidiary, has developed the Trainium 2 AI chip. While the chip is not as powerful as Nvidia’s most advanced offerings, Annapurna has focused its chips on power efficiency. Anthropic, maker of the Claude chatbot family, is the largest customer for Trainium 2 and has agreed to help Amazon and Annapurna test the next generation of the chips; an Anthropic engineer has already found a bug in Trainium 2’s compiler that affected its performance.
    • OpenAI has announced a new series of speech-to-text (STT) and text-to-speech (TTS) AI models. The new STT model, gpt-4o-transcribe, improves upon the company’s earlier Whisper STT model on several benchmarks, which the company attributes to improvements in reinforcement learning techniques. The new TTS model, gpt-4o-mini-tts, allows users to “instruct” the model to speak in a certain manner or mimic a style or accent. The company has provided a demo site to showcase the TTS model’s capabilities, including voices that mimic a typical New York City cab driver or a surfer. It remains to be seen whether the new models will be open-sourced, as Whisper was in 2022. A minimal sketch of how the new models might be called is shown below.
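
For readers who want to experiment with the new models, the sketch below shows how they might be invoked through OpenAI’s Python SDK. It assumes the openai package is installed and an API key is configured in the environment; the voice preset, instruction text, and file names are illustrative choices for this example, not details taken from OpenAI’s announcement.

```python
# Minimal sketch, assuming the openai Python SDK is installed and
# OPENAI_API_KEY is set in the environment. The voice, instructions,
# and file names below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# Text-to-speech: ask gpt-4o-mini-tts to read a line in a particular style.
speech = client.audio.speech.create(
    model="gpt-4o-mini-tts",
    voice="alloy",  # one of the SDK's built-in voice presets
    input="Welcome aboard. Where are we headed today?",
    instructions="Speak like a friendly New York City cab driver.",
)
with open("greeting.mp3", "wb") as f:
    f.write(speech.read())  # save the generated audio to disk

# Speech-to-text: transcribe the generated audio with gpt-4o-transcribe.
with open("greeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",
        file=audio_file,
    )
print(transcript.text)
```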