AI News Roundup – Guardrails for AI use across government operations, Perplexity lawsuit, thousands of creative professionals sign open letter, and more
- October 28, 2024
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
- The Biden Administration has released new guidelines for how federal agencies, especially those involved with national security, should make use of AI technologies, according to The New York Times. The guidelines, issued through a national security memorandum signed by President Biden this past week, establish “guardrails” for AI use across various government operations, from nuclear weapons decisions to asylum applications. The document explicitly prohibits AI systems from making nuclear launch decisions and requires human oversight for certain sensitive determinations, such as classifying individuals as terrorists or tracking individuals based on ethnicity or religion. The memorandum also addresses the protection of private-sector AI advances as national assets and creates new mechanisms for evaluating AI tools before their release. However, since many of the memorandum’s deadlines extend beyond President Biden’s current term, questions remain about whether the next administration will maintain, alter or abandon these guidelines, particularly if former President Donald Trump wins the upcoming election over his opponent, current Vice President Kamala Harris.
- CNBC reports that Anthropic, developer of the Claude AI chatbot, has released AI agents that can use computers to perform tasks as a human would, through common user interfaces. The new computer use capability, announced this past week, allows AI to interpret screen content, navigate websites, click buttons, enter text, and execute complex multi-step tasks through any software, with real-time internet browsing. Amazon, one of Anthropic’s most prominent business partners, had early access to the tool, which has also been tested by companies like Asana, Canva and Notion. Initially released in public beta for developers, the feature is planned for release to consumers and enterprise clients in early 2025. The company envisions future applications including booking flights, scheduling appointments, filling out forms, conducting research and filing expense reports. This development positions Anthropic in direct competition with other tech giants like OpenAI, Microsoft, Google and Meta in the rapidly evolving AI agent market, which is seen as a significant advancement beyond traditional chatbot capabilities.
- Dow Jones & Co., publisher of The Wall Street Journal, and the New York Post have sued Perplexity, an AI search-engine startup, for alleged copyright infringement, according to a report from Variety. The lawsuit, filed on October 21, 2024, claims that Perplexity engages in “massive” unauthorized copying of publishers’ copyrighted works through its AI-powered information discovery platform, which encourages users to “Skip the Links” to original publishers’ websites. News Corp CEO Robert Thomson accused Perplexity of “content kleptocracy” and stated that the company had ignored a July 2024 letter offering to discuss potential licensing arrangements. The lawsuit seeks statutory damages of up to $150,000 per infringement. The legal action follows a similar cease-and-desist notice sent to Perplexity by The New York Times and stands in contrast to News Corp’s recent licensing agreement with OpenAI, potentially worth over $250 million over five years. These legal actions come as Perplexity is reportedly seeking hundreds of millions of dollars in new funds from investors. According to The Wall Street Journal, the company seeks to double its valuation to over $8 billion, following its rival OpenAI, which, as reported in our roundup earlier this month, recently raised $6.6 billion in a funding round of its own.
- Bloomberg reports that Chinese technology conglomerate Huawei has made use of advanced AI processors manufactured by Taiwan Semiconductor Manufacturing Company (TSMC), possibly violating U.S. sanctions placed on the company. According to the research firm TechInsights, Huawei’s Ascend 910B AI accelerator chip was produced using TSMC’s 7-nanometer process, despite the Shenzhen-based company being barred from doing business with TSMC since August 2020 under U.S. sanctions. While TSMC maintains it has not supplied chips to Huawei since September 2020, and Huawei claims it has never officially launched the 910B chip, the processors were first spotted in Chinese server products in 2022 and have been used by companies like Iflytek and Baidu. The discovery raises questions about China’s domestic capability to produce advanced chips through companies like Semiconductor Manufacturing International Corporation (SMIC), with which Huawei has partnered in the past. The U.S. Commerce Department’s Bureau of Industry and Security is aware of the reports but would not confirm whether an investigation into a possible sanctions violation is ongoing.
- Thousands of artists, musicians, actors and other creative professionals have signed an open letter condemning the “unlicensed use of creative works for training generative AI,” according to The Washington Post. The 29-word letter, which garnered more than 10,500 signatures, including those of ABBA’s Björn Ulvaeus, Radiohead’s Thom Yorke, actors Julianne Moore and Kevin Bacon, and novelist Kazuo Ishiguro, was organized by Fairly Trained, a nonprofit led by former AI executive Ed Newton-Rex that certifies tech companies for fair data practices. The letter describes the unauthorized use of creative works for AI development as “a major, unjust threat” to creators’ livelihoods. This protest comes amid ongoing legal battles over AI companies’ data practices, with several lawsuits proceeding through U.S. courts, including cases against OpenAI, Stability AI and Perplexity. While tech companies argue their data scraping practices for AI training are protected as “fair use” under copyright law, regulators in the U.S. and the United Kingdom are currently debating whether to create specific copyright exemptions for AI projects or otherwise develop guidelines around the rapidly developing technology.