AI News Roundup – New regulations for investments in Chinese tech firms, AI “hallucinations,” major upgrade to ChatGPT, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • The U.S. Department of the Treasury has finalized new regulations limiting U.S. investment in Chinese development of frontier technologies such as semiconductors, quantum computing and artificial intelligence, according to Bloomberg. The rules, effective January 2, 2025, impose outright bans on certain investments in advanced Chinese tech firms and require notification to the Treasury Department for others, particularly those involving legacy semiconductor technologies that are often manufactured in China. Aimed at preventing American capital and expertise from enhancing China’s military and cyber capabilities, the regulations build upon previous export restrictions and implement President Biden’s August 2023 executive order on outbound investment in technologies relevant to national security. Among other things, the rules bar American individuals and companies from obtaining equity in certain military-affiliated AI companies in China, though publicly traded securities are generally exempt. The rules, which have undergone lengthy deliberation since last summer, have drawn strong opposition from China, whose Foreign Ministry stated that the country “strongly deplores and firmly rejects” the U.S. restrictions.
    • The Associated Press reports on new research finding that OpenAI’s Whisper speech-to-text transcription tool fabricates text, an instance of AI “hallucinations,” even in highly sensitive settings such as hospitals. According to interviews with software engineers, developers and researchers, Whisper frequently invents content that was never spoken, including racial commentary, violent rhetoric and non-existent medical treatments. The problem appears widespread: one University of Michigan researcher found hallucinations in 80% of the audio transcriptions examined, and another developer found them in nearly all of the 26,000 transcripts analyzed. These issues are particularly concerning because Whisper is being integrated into healthcare settings, despite OpenAI’s warnings against using the tool in “high-risk domains,” with over 30,000 clinicians and 40 health systems using Whisper-based tools to transcribe doctor-patient consultations. Former White House science advisor Alondra Nelson and other experts are calling for potential government regulation and urging OpenAI to address these flaws, especially given the serious implications for medical diagnoses and for accessibility services relied upon by high-risk patients.
    • Deadline reports on a recent agreement between SAG-AFTRA, the union representing screen actors and other media professionals, and the AI voice company Ethovox that allows the latter to build an AI voice model using the voices of the union’s members. The deal, announced this past week, requires both upfront session fees and ongoing revenue sharing for performers who participate in building the model, which will not be user-facing and will not produce identifiable individual voices. Ethovox, which presents itself as “the only voice AI company owned and managed by voice actors,” emphasized that participation in the model’s training will be voluntary and focused on ethical AI development that prioritizes artists’ interests. The agreement follows similar deals SAG-AFTRA struck with other AI companies earlier this year, including Narrativ for audio ads and Replica Studios for video games, and reflects the union’s broader strategy of establishing protective guardrails around AI use in entertainment, even as it continues to negotiate AI provisions for voice actors with major video game developers under its Interactive Media Agreement.
    • Chinese researchers have developed a military-focused AI model based on Meta’s open-source Llama 2 13B large language model, according to Reuters. The project, called ChatBIT, was developed at institutions linked to China’s People’s Liberation Army (PLA), including its Academy of Military Science, and is designed to gather and process intelligence for military decision-making. While the model was trained on a relatively small dataset of 100,000 military dialogue records, the researchers claim it performs at roughly 90% of ChatGPT-4’s capability on military applications. Meta has responded that any military use of Llama violates its terms of service, which explicitly prohibit military applications and activities subject to U.S. defense export controls. The development highlights ongoing debate over the risks of making AI models publicly available, especially as the U.S. has tightened restrictions on sensitive technologies in recent weeks.
    • OpenAI released a major upgrade to its ChatGPT chatbot this past week, enabling it to perform internet searches and revise its answers based on the information it finds, according to The Washington Post. The new feature, available to paying subscribers as of this past Thursday, uses Microsoft’s Bing search engine and includes content from publishing partners such as News Corp. and the Associated Press, transforming ChatGPT from a static, knowledge-based system into a dynamic search tool that competes directly with major search engines. The upgrade promises to improve accuracy and reduce “hallucinations” by giving the chatbot access to current information, though it raises concerns among publishers about their business models as AI tools increasingly summarize web content without requiring users to visit the original sources. OpenAI’s head of media partnerships, Varun Shetty, suggests users will still visit publisher websites to learn more. Regarding this week’s election, a time when disinformation is often at its peak, Shetty said the search tool will direct users to “the highest quality and authoritative [news] sources,” such as the Associated Press and Reuters.