AI News Roundup – AI-powered cheating in higher education, brain stimulation treatments, foreign interference, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • AI industry leaders have begun to publicly comment on some of California’s proposed AI regulations, which this roundup has been covering on an ongoing basis. According to the Financial Times, OpenAI has publicly opposed S.B. 1047, which would require powerful AI models (such as OpenAI’s GPT-4o) to adhere to specific safety protocols before deployment. In a letter to the bill’s sponsor, Senator Scott Wiener, OpenAI’s Chief Strategy Officer Jason Kwon wrote that the bill “threatens California’s unique status as the global leader in AI” and would “slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere.” OpenAI also argued that any sweeping AI regulation should come from the federal government rather than from the states. In response, Senator Wiener agreed that the federal government should take the lead but said he was “skeptical” that Congress would act. Other AI companies, however, have welcomed the bill. Anthropic, developer of the Claude chatbot, said in a letter to California Governor Gavin Newsom that “our initial concerns about the bill potentially hindering innovation due to the rapidly evolving nature of the field have been greatly reduced in the amended version,” according to Reuters. The bill must pass by August 31 in order to be sent to Governor Newsom for his signature or veto.
    • Senate Majority Leader Chuck Schumer, a Democrat of New York, is “optimistic” about federal AI regulation passing through Congress this year, according to The Wall Street Journal. The Senate Commerce, Science, and Transportation Committee advanced several AI bills last month, including a bipartisan bill to authorize a U.S. AI Safety Institute within the National Institute of Standards and Technology. However, sweeping legislation has not been forthcoming at the federal level, especially compared to the European Union’s recent AI Act and state-level efforts in California. Moreover, given the election cycle taking up Congress’ time, it is unclear whether Schumer will be able to guide even the most bipartisan bills through Congress.
    • Ian Bogost at The Atlantic writes on the prevalence of AI-powered cheating in higher education, highlighting the ongoing challenges faced by colleges and universities as they enter the third year of widespread AI availability. Despite efforts to combat AI-generated essays and assignments, institutions still lack comprehensive plans to address the issue. The article discusses various attempts to detect AI-written content, including watermarking and other technological solutions, but notes their limitations and the ease with which students can evade such measures. Bogost explores how some educators are trying to incorporate AI into their teaching methods, while others advocate for fundamental changes in assignment design to make cheating less appealing. The piece also touches on the broader implications for academic integrity, workload pressures on faculty, and the potential for AI to both hinder and enhance the educational process. Ultimately, Bogost suggests that colleges must evolve their teaching approaches to effectively reckon with the realities of AI in education.
    • The New York Times reports on a new study in which researchers used AI to personalize deep brain stimulation treatments for Parkinson’s disease. Published in Nature Medicine, the study involved four patients, including a 48-year-old skateboarder named Shawn Connolly, whom The Times profiled. The researchers developed individualized algorithms to adjust the electrical stimulation based on each patient’s specific symptoms and brain activity patterns. With this adaptive deep brain stimulation approach, patients experienced their most bothersome symptoms for about half as much time as they did with conventional stimulation. The personalized treatment also improved quality of life for most participants without worsening other symptoms. While the study was small, it represents a significant step toward creating “brain pacemakers” that can dynamically respond to patients’ changing needs. Experts suggest that such personalized brain stimulation treatments could become available for various neurological and psychiatric disorders within the next five to 10 years.
    • OpenAI has shut down an Iranian influence campaign that extensively used its ChatGPT AI system, according to Bloomberg. The company reported removing a network of Iranian accounts that utilized ChatGPT to generate long-form articles and social media comments aimed at influencing the upcoming U.S. presidential election. The campaign created content mimicking both liberal and conservative viewpoints, including posts about former President Donald Trump and Vice President Kamala Harris. The operation also touched on topics such as the Israel-Gaza conflict and the Paris Olympics. However, OpenAI noted that the campaign did not appear to gain significant traction or engagement. The incident is part of a broader trend of foreign operatives experimenting with AI tools for influence campaigns, though many have struggled to achieve substantial impact. The disclosure also comes amid ongoing concerns about foreign interference in U.S. elections, with recent warnings from intelligence agencies about propaganda efforts by countries such as Iran, Russia, and China.