AI News Roundup – IP protection hearing, USPTO AI guidelines, a potential data center for AI computing, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • Last week, the US House Judiciary Subcommittee on Courts, Intellectual Property, and the Internet held a hearing entitled “Artificial Intelligence and Intellectual Property: Part III – IP Protection for AI-Assisted Inventions and Creative Works.” The goal of the hearing was to examine standards and policy considerations regarding whether IP protection should extend to inventions (patents) or creative works (copyrights) generated with the assistance of AI. Several stakeholders, including policy experts and law school professors, testified before the subcommittee. Claire Laporte, former Head of IP at Ginkgo Bioworks, urged the subcommittee to prioritize leading the world in AI-assisted biology, such as training AI models to greatly reduce the scale and time needed to solve biological engineering challenges. With regard to human authorship of AI-generated artwork and music, Sandra Aistars, Professor at Antonin Scalia Law School, testified that the proper inquiry is “whether the human author has used GAI [like an artist might utilize] any other tool or material to bring their creative vision to life or has the use of GAI served to substitute for the artist’s own creativity.”
    • The USPTO has released guidelines for practitioner use of AI tools in patent and trademark prosecution. Ultimately, the guidance reiterates that using AI tools is not unlike working with paralegals and junior attorneys: the practitioner remains responsible for the legal and technical accuracy of all papers submitted to the USPTO. In other words, practitioners need to carefully review AI-produced output rather than trust these tools to always do the right thing.
    • The prospect of AI companies running out of training data for large language models (LLMs) has been a concern for a while, and the supply of fresh data apparently has now hit a wall. In seeking new sources, companies have allegedly transcribed millions of hours of YouTube audio into text, considered buying large publishing companies outright, and looked to synthetic data for training. The first of these approaches involves questionable copyright practices, and the synthetic-data technique has not yet proven viable.
    • Microsoft and OpenAI are reportedly looking to build a $100 billion data center in the U.S. for AI computing, potentially powered by a nuclear reactor.
    • Chip giant NVIDIA has announced its new Blackwell B200 GPU platform, which packs 208 billion transistors and can scale to 20 petaflops of compute. It is expected to use dramatically less power per calculation than current-generation GPUs.