AI News Roundup – Potential cap on AI chip exports, New York Times’ cease and desist letter to Perplexity, high school student punished for use of AI, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • The U.S. Commerce Department is weighing a cap on exports of high-performance AI chips to certain countries, according to Bloomberg. The Biden administration is considering country-specific ceilings on export licenses for advanced AI chips from companies like Nvidia and AMD, with a focus on Persian Gulf nations that have both the desire and the financial means to develop AI data centers. The potential policy, still in early discussions, aims to address national security concerns surrounding the global development of AI capabilities, and would build upon recent regulations easing the licensing process for AI chip shipments to data centers in countries such as the UAE and Saudi Arabia. While deliberations are ongoing, the move reflects growing U.S. concern about the security implications of AI development worldwide and the strategic importance of American chip technology in shaping the global AI landscape, particularly with regard to China. Indeed, the head of Saudi Arabia’s most prominent technical institute, the King Abdullah University of Science and Technology (KAUST), told the Financial Times this past week that the country would avoid any collaboration with China that might endanger its access to U.S.-made AI chips.
    • The New York Times has sent a cease and desist letter to Perplexity, a generative-AI-powered internet search startup, threatening a lawsuit unless the company stops using the Times’ copyrighted content, according to The Wall Street Journal. The move is part of a broader pushback by publishers against AI companies using their content without permission. Perplexity, backed by Amazon founder Jeff Bezos, generates AI summaries of search results, which publishers fear may reduce traffic to their websites. The Times claims Perplexity’s use of its content violates copyright law and has asked for information on how the startup accesses its website despite measures intended to prevent scraping (a sketch of the standard crawler opt-out mechanism follows this list). Perplexity’s CEO, Aravind Srinivas, has expressed a willingness to work with publishers, though the letter claims that as of earlier this month Perplexity had not stopped using web crawlers to scrape the Times’ content. Should the Times sue Perplexity, the case would join the newspaper’s ongoing lawsuits against AI companies on similar grounds, most notably one against ChatGPT creator OpenAI.
    • A new study from Apple’s Machine Learning Research unit claims that large language models (LLMs) cannot perform genuine logical reasoning. The researchers analyzed the popular GSM8K dataset, a collection of several thousand grade school-level math word problems that is often used for benchmarking AI models. While many prominent AI models, including OpenAI’s reasoning-focused o1 model, have posted high scores on GSM8K in recent months, Apple’s researchers wanted to determine whether the models’ reasoning capabilities had genuinely improved or whether another phenomenon was responsible. In the course of their investigation, the researchers created a new benchmark, GSM-Symbolic, which “enables more controllable evaluations, providing key insights and more reliable metrics for measuring the reasoning capabilities of models.” The researchers found that LLM performance varies noticeably across different versions of the same question: mathematical accuracy declined when the numerical values in the word problems were altered, and performance “significantly deteriorates as the number of clauses in a question increases” (a toy illustration of this templated-variant approach follows this list). This led the researchers to a startling conclusion: current LLMs cannot perform genuine logical reasoning; rather, they “replicate reasoning steps from their training data.”
    • The UK government is considering a legal scheme that would allow AI companies to scrape content made by publishers and artists unless those groups explicitly opt out, according to the Financial Times. The proposed “opt-out” model, which the government plans to consult on in the coming weeks, has sparked controversy in the creative industries. While tech giants have argued for free access to online content for AI training purposes, often under a “fair use” framework, publishers and artists contend that this approach is unfair and impractical, potentially creating a significant administrative burden, especially for smaller companies. The creative sector instead advocates an “opt-in” system that would enable rights holders to negotiate licensing agreements and receive compensation for their intellectual property. The European Union, of which the UK is no longer a member, adopted a similar “opt-out” model in its sweeping AI Act, which came into effect this year and allows companies to scrape content unless the copyright owner has explicitly denied permission to do so.
    • The Boston Globe reports that the family of a high school student in Hingham, Massachusetts, has brought a federal lawsuit against school officials, alleging that the student was wrongfully punished for using AI to complete a history assignment. The student, a senior, and a classmate used AI for initial research on a project about NBA star Kareem Abdul-Jabbar’s civil rights activism, which resulted in accusations of cheating from the students’ teacher. The student received a ‘D’ grade, was barred from joining the National Honor Society, and faced detention, despite the lawsuit’s claim that AI use was not explicitly prohibited at the time, either by the teacher or by the student handbook. The family argues the punishment has negatively impacted the student’s GPA and college prospects, particularly at elite universities such as Stanford. School officials maintain that the discipline was appropriate and consistent with handbook policies. The case highlights the evolving debate surrounding AI use in academic settings; the school amended its handbook this past August to prohibit AI use. A federal judge is set to hear arguments this coming week on a preliminary injunction that would reverse the punitive measures, grant the student a ‘B’ grade in Social Studies, and mandate AI training for the school officials involved.
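
As context for the Perplexity dispute above: the anti-scraping measures publishers typically deploy include the Robots Exclusion Protocol, a plain-text robots.txt file that tells crawlers which paths they may fetch. The minimal sketch below, using Python’s standard-library urllib.robotparser, shows how a compliant crawler would check a site’s robots.txt before fetching a page. It illustrates the general mechanism only; it is not a description of Perplexity’s actual crawler, and the URL and user-agent string are assumptions for demonstration.

```python
# Minimal sketch: how a compliant crawler honors the Robots Exclusion
# Protocol (robots.txt) before fetching a page. Illustrative only; this
# is not Perplexity's code, and the URL and user agent are assumptions.
from urllib import robotparser
from urllib.parse import urlparse

def may_fetch(url: str, user_agent: str) -> bool:
    """Return True if the site's robots.txt permits user_agent to fetch url."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # download and parse the site's robots.txt
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    # Many publishers now disallow known AI crawlers by name; a crawler
    # that respects robots.txt would skip any disallowed page.
    print(may_fetch("https://www.example.com/some-article", "ExampleAIBot"))
```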
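
To make the Apple study’s methodology concrete, here is a toy sketch of the templated-variant idea behind GSM-Symbolic: one word problem is turned into many instances by swapping out names and numeric values, with the ground-truth answer computed programmatically for each instance. This is a hedged illustration under assumed problem wording, not the paper’s code or benchmark. A model that truly reasons should score the same on every variant; the paper reports that accuracy drops when only these surface details change.

```python
# Toy sketch of the GSM-Symbolic idea (not Apple's actual code or data):
# generate many variants of one word problem by varying names and numbers,
# computing the ground-truth answer symbolically for each instance.
import random

TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday, "
            "then gives away {c} of them. How many apples does {name} have left?")

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Instantiate the template with fresh values; return (question, answer)."""
    name = rng.choice(["Liam", "Sofia", "Noah", "Mia"])  # assumed example names
    a, b = rng.randint(5, 50), rng.randint(5, 50)
    c = rng.randint(1, a + b)   # keep the answer non-negative
    question = TEMPLATE.format(name=name, a=a, b=b, c=c)
    return question, a + b - c  # ground truth, computed from the template

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        q, answer = make_variant(rng)
        print(q, "->", answer)
```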