AI News Roundup – U.S. Copyright Office declares generative AI outputs uncopyrightable, U.K. move to open up works for AI training suffers setback, Vatican releases document on AI technologies, and more
- February 3, 2025
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
-
- The U.S. Copyright Office has declared that outputs produced purely by generative AI systems are not copyrightable, according to The Verge. In the second part of a wide-ranging report on copyright issues related to AI, the agency said that current copyright law should apply to AI technologies without the need for changes, and that even though AI prompts are written by humans, prompts alone do not constitute human authorship of the resulting output. Copyright protection may still be available where an AI system is used as an assistive tool or where “significant creative modification” is made to the AI-generated output. A U.S. appeals court recently heard arguments in a case challenging the agency’s rejection of a copyright registration for an AI-created work, which could lead the agency to modify its guidance in the future. The third and final part of the agency’s report is forthcoming and is expected to address the legal issues surrounding the training of generative AI systems on copyrighted works.
-
- The U.K. government’s proposal to allow AI companies to train their models on copyrighted works without the authors’ permission has suffered a setback in the House of Lords, according to The London Standard. A cross-party coalition of peers defied both the ruling Labour and opposition Conservative party lines to vote in favor of amendments to a data use regulation bill that would “explicitly subject AI companies to UK copyright law… reveal the names and owners of web crawlers that currently operate anonymously and allow copyright owners to know when, where and how their work is used.” The vote comes as the U.K. government contemplates exempting the process of “text and data mining” from copyright law altogether; a “consultation” on the issue is expected to end on February 25. Opposition to the proposed exemption has been widespread; Sir Paul McCartney told the BBC that such a move would lead to a world where “anyone who wants can just rip [artists’ works] off.”
-
- The Vatican has released a new document addressing the benefits and risks of AI technologies, according to The New York Times. The document, entitled Antiqua et nova (“Ancient and new”), was approved by Pope Francis and released by the Vatican’s Dicastery for the Doctrine of the Faith this past week. Among its many observations about the technology, the document warned against “the shadow of evil” present in some applications of AI and exhorted all peoples to ensure that “the ends and the means used in a given application of AI, as well as the overall vision it incorporates, must all be evaluated to ensure they respect human dignity and promote the common good.” The document also said that “those using AI should be careful not to become overly dependent on it for their decision-making,” and offered several further warnings against improper use of the technology in fields ranging from medicine and education to warfare.
-
- Microsoft is investigating whether DeepSeek improperly used OpenAI’s systems in the development of its AI models, according to Bloomberg. DeepSeek, which roiled markets over the past few weeks with the release of AI models that were competitive with industry leaders while being much cheaper to train, may have “exfiltrated” a large amount of data through OpenAI’s application programming interface (API), activity that may have violated OpenAI’s terms of service. David Sacks, the White House “AI czar,” said recently that DeepSeek may have trained its models on the outputs of OpenAI’s models using a process known as “distillation.” OpenAI, Microsoft and DeepSeek did not provide comment to Bloomberg on the allegations, but the controversy underscores the growing AI competition between the U.S. and China and the effect of restrictions on China’s access to high-performance AI semiconductor chips.
-
- The U.S. Authors Guild, a trade organization for writers, has introduced a certification system that allows authors to attest that their works were not AI-generated, according to The Bookseller. After obtaining the certification, authors will be able to place a “Human Authored” logo on their books, and their books or other works will be listed in a database that interested readers can access. Mary Rasenberger, CEO of the Authors Guild, said that the initiative “isn’t about rejecting technology — it’s about creating transparency, acknowledging the reader’s desire for human connection and celebrating the uniquely human elements of storytelling.” AI-generated works and the training of AI models on authors’ works have become flashpoints in the debate over the technology, as several large publishers, including HarperCollins, have signed contracts providing access to some of their books for AI training.