AI News Roundup – Director Vidal Memo, UK AI Safety Institute, ancient Roman scrolls deciphered and more
- February 12, 2024
- Snippets
Practices & Technologies
Artificial Intelligence
There’s a lot happening in the world of AI. To help you stay on top of the latest news, we have compiled a roundup of the developments we are following.
Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO, released a memo setting out the Office’s stance on filings drafted, at least in part, using artificial intelligence. In the memo, entitled “The Applicability of Existing Regulations as to Party and Practitioner Misconduct Related to the Use of Artificial Intelligence,” Vidal expressed confidence that the “staff in the [Patent Trial and Appeal Board and Trademark Trial and Appeal Board] will successfully apply their existing skills and relevant existing rules…” to the challenges introduced by AI-assisted or entirely AI-generated filings. Although AI offers benefits, such as expanded access and lower costs, Vidal counseled practitioners not to assume the accuracy of AI tools and warned of potential sanctions for submissions that misstate facts or law; such filings could be construed as papers presented for an improper purpose. Vidal also stated that the USPTO will shortly issue a notice providing specific guidance on the use of AI tools by parties and practitioners.
Large AI companies are urging the United Kingdom’s new AI safety regulator to speed up feedback on their latest large language models. Known as the AI Safety Institute, the body was established by the UK government in November 2023 to analyze, and facilitate information exchange about, the rapidly evolving field of artificial intelligence. However, several of the companies, including OpenAI, Microsoft, Google DeepMind, and Meta, have expressed frustration that the Institute’s review process has not kept pace with technological development. The body was established to minimize the risks of AI misuse, such as cybersecurity threats and the AI-assisted design of biological weapons, although the program relies on companies voluntarily submitting model internals for review. A similar body was set up in the U.S. in November as an initiative of the National Institute of Standards and Technology (NIST).
AI image-analysis models have been used to virtually unroll and read segments of the Herculaneum papyri, a set of scrolls retrieved from an ancient Roman villa in Herculaneum, a town destroyed by the eruption of Mount Vesuvius in 79 A.D. The scrolls have long been thought impossible to read because the extreme heat of the eruption charred the papyrus, rendering the text invisible to the naked eye; the charring also preserved the scrolls, albeit tightly rolled, and previous attempts to unroll them physically destroyed some of the documents. The AI model was trained to detect “crackle,” a pattern of cracks and lines left by dried ink on the page, and has revealed Greek letters inscribed on the scrolls even though the ink itself has long been invisible.
The Federal Communications Commission (FCC) has banned robocalls that use AI-generated voices, citing concerns that the technology could be used to mislead voters in upcoming elections and to commit extortion. The ban comes as technology regulators wrestle with how to address the spread of deepfakes online.