AI News Roundup – Newest OpenAI model, AI technologies for military applications, AI-generated sports recaps, and more
- September 16, 2024
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
- OpenAI has released its latest AI model, dubbed OpenAI o1, which the company says can perform some reasoning tasks (such as math and coding) at the same level as a human being, according to Bloomberg. The model, known internally as “Strawberry” during development, is designed to spend more time computing answers before responding to user queries, allowing it to solve multi-step problems. While it lacks some ChatGPT features, such as web browsing and file uploading, OpenAI considers it a significant advancement in AI capability. In a blog post announcing the new model, the company said that the OpenAI o1 models were trained “to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.” The company also stated that it believes OpenAI o1 will be useful for a variety of complex tasks, including annotating data for health science research, generating mathematical formulas for physics applications, and building multi-step workflows for software developers. A preview version of OpenAI o1 is now available to paid ChatGPT Plus and Team users so that the company can gather feedback for further improvements.
- The U.S. Department of Commerce is proposing to require developers of advanced AI models to produce detailed reports regarding their technologies’ safety and security, according to Reuters. The proposal, announced this past week by the Bureau of Industry and Security (BIS), would mandate that “frontier” AI model developers and cloud computing providers submit reports to the federal government about their development activities and cybersecurity measures. The required reporting would cover testing for dangerous capabilities, such as a model’s ability to assist in cyberattacks or to lower the barriers for non-experts to develop weapons of mass destruction. The BIS, which is part of the Commerce Department, stated that this information is crucial for ensuring these technologies meet stringent safety and reliability standards. The move comes as legislative action on AI in Congress has stalled, and as the Biden administration has continued to take steps to prevent China from using U.S. technology for AI development. U.S. Secretary of Commerce Gina Raimondo said in a press release that “this proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security.”
- Nikkei Asia reports on a recent summit in South Korea that released a non-binding “blueprint” to encourage the responsible use of AI technologies in military applications. The Responsible AI in the Military Domain (REAIM) summit, held in Seoul this past week, brought together representatives from 96 nations, including the United States and China. This year’s summit built upon last year’s event in Amsterdam, producing a more action-oriented document that addresses risk assessments, human control, and confidence-building measures. The blueprint also emphasizes preventing AI from being used to proliferate weapons of mass destruction and maintaining human involvement in decisions about deploying nuclear weapons. While the document is not legally binding, it aims to establish ongoing multi-stakeholder discussions on AI in military contexts, moving beyond broad principles toward concrete steps that may be the focus of next year’s summit, whose location and timing are still under discussion.
- The Associated Press reports that a group of AI industry leaders met with top U.S. officials at the White House this past week to discuss the future of the technology, including its energy usage and current regulatory efforts. The meeting also covered increasing public-private collaboration, meeting workforce needs, and streamlining permitting processes for the AI industry. Executives from major tech companies such as OpenAI, Nvidia, Microsoft, Alphabet, Meta, and Amazon’s AWS were in attendance, along with representatives from the utility companies Exelon and AES, who joined to discuss power grid requirements. Key administration officials, including White House Chief of Staff Jeff Zients and Commerce Secretary Gina Raimondo, participated in the discussions. Their presence underscored the administration’s recent moves to regulate AI technologies, including President Biden’s executive order from October 2023, which aimed to establish protections and address consumer rights issues in AI development.
- The MIT Technology Review reports on a new study published in the journal Science that claims AI chatbots can be effective in reducing people’s propensity to believe in conspiracy theories. The research, conducted by scientists from MIT’s Sloan School of Management and Cornell University, found that conversations about conspiracy theories with large language models like GPT-4 Turbo reduced participants’ belief in those theories by approximately 20%. The study involved 2,190 participants who discussed their chosen conspiracy theories with the AI, which was prompted to be persuasive. The chatbot’s effectiveness is attributed to its ability to draw on vast amounts of information and tailor factual counterarguments to each participant’s specific beliefs. Follow-up assessments after 10 days and again after two months showed that the change in beliefs persisted. The researchers suggest that this approach could be implemented on social media platforms or through targeted advertisements to combat misinformation. They also noted the AI’s high accuracy during the study: a professional fact-checker verified 99.2% of its claims as true.
- ESPN has begun publishing AI-generated recaps of several women’s sports on its website, sparking controversy in the process, according to The Verge. Beginning this past week, the company is using Microsoft’s AI technologies to write summaries of women’s soccer games, with plans to expand to lacrosse and other sports. While ESPN claims the initiative will “augment existing coverage” and free up human journalists for more in-depth reporting, critics argue that the AI-generated content lacks nuance and emotional context. For example, one recap failed to note that the match it covered was the last before the retirement of former U.S. women’s national team captain Alex Morgan. Some industry observers worry that the use of such AI recaps could lead to AI covering more sports in the future, potentially threatening journalists’ jobs. ESPN maintains that each AI-generated article is clearly labeled as such and reviewed by humans for accuracy, but the move has reignited debates about AI’s role in journalism and its potential impact on human writers.