AI News Roundup – China looking to set global AI standards, reconstruction of ancient writings, growth of AI summarization, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.


    • This past week, a U.S. federal judge in California advanced a copyright infringement lawsuit brought by artists against Stability AI, the developer of the popular Stable Diffusion image-generation AI model, according to The Hollywood Reporter. U.S. District Judge William Orrick allowed key copyright and trademark claims to move forward, marking a significant win for the artists. The lawsuit alleges that Stable Diffusion was built using billions of copyrighted images scraped from the internet without permission or compensation. The judge found that Stable Diffusion may have been “built to a significant extent on copyrighted works” and created with the intent to “facilitate” infringement. The eventual ruling in this case could have far-reaching implications for other AI companies that have incorporated the Stable Diffusion model, or other AI models trained on copyrighted information, into their products. The case will now proceed to the discovery phase, where the plaintiffs may uncover more information about how the AI firms harvested and used copyrighted materials to train their models. This lawsuit provides a counterpart to the ongoing copyright infringement case brought by The New York Times against OpenAI over OpenAI’s training of its AI models on the newspaper’s copyrighted content, a dispute for which this roundup (and other MBHB experts) has provided ongoing coverage.
    • Nikkei Asia reports on a recent series of decisions handed down by judges in China regarding generative AI and artists’ rights to their original work in the country. These rulings, which include recognizing an individual’s right to their voice in AI-generated content and establishing copyright protection for AI-created images, are seen as part of the Communist Party of China’s efforts to take the lead in setting global AI standards. The article highlights that China’s courts have been quick to establish precedents on various aspects of AI technology, in contrast to other countries where debates and legislation are still in early stages. These recent judgments are based on China’s new AI regulations, such as the Interim Measures for the Management of Generative Artificial Intelligence Services, implemented in August 2023 as the world’s first law to comprehensively regulate generative AI. These rulings reflect China’s broader strategy not only to develop AI technology but also to “integrate it practically into society,” as called for by the country’s 14th Five-Year Plan, which outlined China’s social and economic development guidelines for the years 2021-2025.
    • New AI technologies have been used to accelerate the reconstruction of the Epic of Gilgamesh, one of the world’s oldest extant literary texts, according to The New York Times. A project called Fragmentarium, led by Professor Enrique Jiménez at the Ludwig Maximilian University in Munich, Germany, employed machine learning to piece together digitized tablet fragments at a much faster rate than human scholars. Since 2018, the team has successfully matched over 1,500 tablet pieces, including 20 fragments from Gilgamesh that add detail to over 100 lines of the epic. These new discoveries provide insights into various episodes of the story. The AI-assisted research has significantly sped up the process of piecing together this ancient text, which has been ongoing since George Smith’s initial translation work at the British Museum in London in 1872. Despite nearly 150 years of effort, about 30 percent of Gilgamesh remains missing. Thus, scholars are optimistic that the new AI technology will continue to uncover more of Gilgamesh and other works of ancient Mesopotamian literature, potentially revealing further connections between ancient writings.
    • Reuters reports on a new complaint filed against the social media platform X (formerly Twitter), which alleges that the company violated European Union (EU) data privacy laws by training its AI system on users’ personal information. The complaint was lodged with nine EU authorities by the Austrian advocacy group NOYB, led by privacy activist Max Schrems. NOYB claims that X used personal data for AI training without obtaining proper user consent, violating the EU’s General Data Protection Regulation (GDPR). This action follows an effort by Ireland’s Data Protection Commission to restrict X’s data processing for AI development; in response, X agreed to temporarily halt AI training on EU users’ data collected before those users could opt out. However, NOYB contends that the Irish case “does not question the legality of the data processing itself,” and it has filed the additional complaints to encourage more stringent enforcement of the GDPR against X. The company did not immediately reply to a request for comment from Reuters.
    • John Herrman writes in New York Magazine regarding the rapid growth of AI summarization, a key use case for generative AI systems in recent years. Herrman notes that major tech companies like Alphabet’s Google, Microsoft, Apple and others are integrating summarization features across their products, from email and office-related documents to social media feeds and product reviews. While these tools can be useful for condensing large amounts of information, Herrman points out several challenges and limitations associated with the technology. AI summaries often contain errors or misinterpretations, or needlessly condense content that is already concise. In personal communications, such as text messages or emails, AI summaries can strip away context and nuance, potentially making them less useful or even misleading. Herrman argues that while AI summarization may be beneficial in certain contexts, such as parsing large datasets or catching up on low-stakes meetings, its widespread application to all forms of communication risks oversimplifying important interactions and treating everything as work to be condensed, rather than distinguishing between what should be summarized and what deserves full attention.