AI News Roundup – Pew poll reveals AI pessimism among American public, AI-assisted treatment found to reduce Parkinson’s symptoms, Anthropic researches how reasoning models do and do not “think,” and more
- April 8, 2025
- Snippets
Practices & Technologies
Artificial IntelligenceTo help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
-
- Pew Research this week released findings from a survey of Americans and AI experts regarding their views on AI technologies. The survey found a stark gap in optimism regarding AI between adults and the AI experts: 56% of experts said that they thought AI would have a positive impact on the U.S. over the next 20 years, while only 17% of U.S. adults said the same. An even larger gap exists between experts and the public regarding whether AI will personally benefit or harm them, with 76% of experts saying that it will benefit them while only 24% of the public agrees. Similar shares of both experts and the public want further government regulation of AI: 58% of the public and 56% of experts think that the government in the U.S. will not go far enough in regulating the use of AI. The public also expressed concerns with AI’s effect on the job market, with 64% indicating that they believe AI will lead to fewer jobs over the next 20 years, while only 39% of experts said the same. Overall, the results indicate that the American public is generally more pessimistic than optimistic regarding AI, especially in comparison to AI experts.
-
- The Washington Post reports on a new treatment for Parkinson’s disease that was found to reduce some symptoms of the condition. Deep brain stimulation (DBS), which involves sending electrical pulses into the brain by way of implanted electrodes, has been used since 1997 to treat Parkinson’s. However, a new technique has emerged with the help of AI — adaptive DBS, which modifies the timing and power of the electrical pulses by analyzing the electrical patterns of the neurons that make up a patient’s brain. Several early studies have shown that adaptive DBS treatment was associated with greater control of symptoms and fewer side effects in Parkinson’s patients in comparison to traditional DBS. The Post interviewed two patients who participated in a trial of adaptive DBS who have since had their worst motor symptoms (often tremors) “virtually eliminated,” and have been able to reduce the amount of medication they previously had to take as part of their treatment. All forms of DBS, like any surgery, has risks for complications, and is not applicable to every Parkinson’s patient, but presents a positive example of AI applied to medical treatment with a level of success beyond that of traditional treatment options.
-
- Anthropic released a new research paper investigating “reasoning” AI models — those models that show the process (often called a “chain-of-thought”) that led to the generation of their answer. Nearly every major AI developer has released reasoning-focused models, from OpenAI’s o1, DeepSeek’s R1, Anthropic’s own Claude 3.7 Sonnet, among others. However, Anthropic’s research showed that the chain-of-thought generated by such models is not always trustworthy; that is, the chain-of-thought is not always a true description of what the model was “thinking.” To analyze this phenomenon, Anthropic tested both Claude 3.7 Sonnet and DeepSeek R1, giving the model a “hint” towards answers to questions the models were evaluating, and reviewing whether the model admitted to using the hint in its chain-of-thought. For both models, the majority of responses generated did not mention the hint. Even after setting up a reinforcement-learning based reward system for both models, the model learned to use the hint (even if the hint was deliberately pointing to an incorrect answer) and would generate false reasoning justifying the wrong answer. In general, these results show the limitations of reasoning models, and prompts questions as to whether such models actually perform a thought-like process or simply mimic it. Anthropic concluded that the study indicated that “advanced reasoning models very often hide their true thought processes, and sometimes do so when their behaviors are explicitly misaligned,” and that there is “substantial work to be done” in addressing these issues in AI models.
-
- The Financial Times reports that a European group has developed an AI system to help predict wildfires. The European Centre for Medium-Range Weather Forecasts has developed the Probability of Fire (PoF) model to predict when and where wildfires may ignite using a variety of data, including weather conditions (such as heat and precipitation levels) as well as the amount of vegetation available in the region and human activity. In testing, the AI-based PoF model was able to accurately predict the spots where fires broke out during January’s deadly fires in Los Angeles, while more traditional wildfire prediction methods merely identified flammable areas. Wildfire prediction at this scale and at such detail traditionally requires vast amounts of data and computing power to process such data, and thus the use of AI-based models has the potential to allow smaller agencies with fewer resources to perform accurate predictions, which improves public safety. The frequency of wildfires has doubled in the past two decades, likely due to climate change, but the researchers are confident that the AI systems will help to improve the safety of people as they face the risk of wildfires.
-
- An AI model from Google’s DeepMind has shown success at playing the popular video game Minecraft, according to Nature. The system, dubbed Dreamer, was able to navigate Minecraft’s 3-D environment and collect diamonds, an important material in the game. DeepMind researchers said that Minecraft presents a unique challenge for developing AI models that can play games, as Minecraft randomly generates the world each time a new user plays the game. Thus, to achieve goals in the game, the model must learn how to adapt to different environments rather than following set paths. Using reinforcement learning and a reward system, it took Dreamer nine days to find a single diamond, something that expert human players can do in 20 to 30 minutes. Dreamer constructs a “world model” of its surroundings within the game which allows it to test out solutions without actually performing them in the game. The DeepMind researchers also said that this world model process could aid in the development of creating robots that interact with the physical world.