AI News Roundup – AI-powered educational technology, emotional reliance on AI, AI robocallers, and more
- August 12, 2024
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
-
- California’s legislature returned to Sacramento this past week to finish its lawmaking work before the end of the session on August 31, according to The Washington Post. Among the bills under consideration are several groundbreaking measures to regulate AI technologies, some of which have been covered in a previous roundup. Key bills include S.B. 1047, which would require powerful AI models to adhere to specific safety protocols before deployment, and measures to combat social media “addiction” among children. The state is also weighing legislation that would force large tech platforms to pay news organizations for content usage and impose a tax on digital advertising to fund tax credits for news outlets. As the home of Silicon Valley, California’s tech policy decisions are expected to have significant national impact, with experts noting that whatever passes in this session will likely influence regulations across the country.
-
- Nikkei Asia reports on South Korea’s adoption of AI-powered educational technology. Korean tech giants such as LG and Samsung are competing for a share of the growing educational technology market. LG has installed “future classrooms” in 18 locations across South Korea, featuring AI-equipped robots and digital whiteboards, while Samsung has developed AI-powered displays for schools that can transcribe and summarize lessons. The South Korean government plans to introduce AI-powered digital textbooks for 5 million students from elementary to high school in March 2025, marking a world-first initiative. These textbooks will use AI to assess student proficiency and provide personalized content and lessons. As South Korea’s school-age population is expected to shrink, the companies are developing expertise in this field with an eye toward expanding into international markets such as Japan, India and Mongolia.
-
- A new report from OpenAI has found that users of its GPT-4o chatbot may form an emotional reliance on its voice mode, according to The New York Times. The report, which details the safety work involved in the development of GPT-4o, revealed that during early testing, users’ language when communicating with GPT-4o suggested the formation of emotional connections with the AI, such as “this is our last day together.” OpenAI acknowledged that while these instances appeared benign, they signaled a need for further investigation into the long-term effects. The company noted that the humanlike voice feature could reduce the need for human interaction, potentially benefiting those experiencing loneliness but also possibly affecting healthy relationships. OpenAI plans to conduct additional research on diverse user populations and independent studies to better understand and mitigate potential risks associated with emotional reliance on AI.
-
- The Verge reports on a new U.S. Federal Communications Commission (FCC) regulation that would require robocallers to disclose whether they are using AI. The proposed rules, announced this past week, would mandate that callers reveal their use of AI technology when seeking consent for future calls and messages, as well as during any AI-generated phone calls. This builds upon the FCC’s existing ban on AI-generated robocalls without prior consent. The agency defines an “AI-generated call” as one that uses computational technology or machine learning to produce voice or text content for communication. The proposal also includes an exemption for individuals with speech and hearing disabilities who use AI-generated voice software for outbound calls, provided there are no unsolicited advertisements and recipients are not charged. The FCC is seeking comments on potential abuses of this exemption and how to prevent them, especially as the rules are intended to address the heightened risk of fraud and scams associated with AI-generated calls.
-
- Microsoft and Palantir Technologies announced an agreement to combine their AI and cloud offerings in order to bid for U.S. government defense and intelligence work, according to Bloomberg. The partnership will integrate Palantir’s products with Microsoft’s Azure cloud services for government customers, including tools for confidential use. Notably, Palantir will adopt Microsoft’s Azure OpenAI service within these secure environments, allowing the use of OpenAI’s GPT-4 model for top-secret tasks. This collaboration aims to provide U.S. defense workers with AI-powered tools for logistics, contracting and action planning. The agreement expands Palantir’s access to large language models in classified settings, while Microsoft continues to leverage its partnership with OpenAI to boost demand for its Azure cloud services. Both companies plan to offer boot-camp sessions for defense and intelligence communities to test these new combined technologies, potentially revolutionizing how AI is used in sensitive government operations.