AI News Roundup – Nobel Prize winners, Chinese AI companies target the U.S. market, Amazon’s AI-powered shopping assistance, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • This year’s Nobel Prizes in Physics and Chemistry were each awarded for work studying or using AI technologies, according to The New York Times. John J. Hopfield and Geoffrey E. Hinton were awarded the Nobel Prize in Physics for their pioneering work, from the 1970s to the present, on artificial neural networks, which underlie most modern AI and machine learning systems. Their discoveries, according to the Nobel committee, “have showed a completely new way for us to use computers to aid and to guide us to tackle many of the challenges our society faces.” Demis Hassabis and John Jumper of Google’s DeepMind AI subsidiary, along with David Baker of the University of Washington, were awarded the Nobel Prize in Chemistry for work on AI-driven prediction of protein structures, which has numerous medical applications such as drug development. The work of Hassabis and Jumper, the primary developers behind AlphaFold, DeepMind’s AI-powered protein structure prediction system, has the potential to greatly accelerate the development of new medicines and other scientific research. Baker’s research involves creating new proteins whose structures can be predicted with computational tools such as AlphaFold and Rosetta, the latter of which Baker’s lab uses. According to the Nobel committee, this work “opened up a completely new world of protein structures that we had never seen before.”
    • The United Kingdom’s Ministry of Defence is deploying an AI model to assess the country’s military capabilities, according to POLITICO Europe. The model, developed by the U.S.-based software company Palantir, will be used to analyze submissions for a comprehensive Strategic Defence Review announced by Defence Secretary John Healey. This marks the first time the British government has used AI in producing a major review, with the aim of making the process more efficient. The AI will sift through thousands of submissions from various military branches, arms manufacturers and think tanks, identifying key themes and providing summaries for officials to examine. While some view this as an innovative step toward modernizing government processes, others in the defense industry have raised concerns about potential oversights, cybersecurity risks and the possibility of submissions being unfairly prioritized or overlooked. One defense industry insider was skeptical, telling POLITICO that “given the nature and importance of this topic, and the work that needs to be done, are they really confident they are going to pick up all the salient points?” The review is expected to be published by this coming March.
    • The Financial Times reports on Chinese AI companies that are attempting to break into the U.S. market to drive revenue growth amid a sluggish domestic industry. Companies like MiniMax, ByteDance and 01.ai are launching AI products overseas, particularly targeting the U.S., which offers a larger pool of high-spending users. MiniMax, for instance, has projected $70 million in sales this year, largely from its avatar chatbot app Talkie, which has gained popularity among U.S. teenagers. These companies are leveraging their competitive advantage in products like avatar chatbots, which require less computing power than the larger AI models favored by U.S.-based AI companies. To navigate potential regulatory challenges, Chinese AI firms are incorporating overseas in locations such as Singapore and Hong Kong, and operating their apps from servers located outside of China. This strategy aims to emulate ByteDance’s TikTok success while avoiding similar regulatory scrutiny from Washington. The push to expand internationally is driven by the need to generate revenue quickly, as fundraising in China has slowed and domestic consumers are less willing to pay for AI subscriptions.
    • WIRED reports on Amazon’s efforts to bring AI systems to its shopping experience, highlighting the company’s vision for AI-powered shopping agents. Amazon is developing large language models, like the one powering its Rufus chatbot, that are trained on vast amounts of retail data from its e-commerce platform. The company has introduced AI-generated shopping guides for hundreds of product categories and is exploring more advanced AI services. These include autonomous AI shopping agents that could proactively recommend products, add items to carts or even make purchases based on a customer’s habits and interests. Amazon executives, including Trishul Chilimbi and Rajiv Mehta, talked to WIRED about the potential for AI agents to handle complex shopping tasks, such as preparing for a camping trip or automatically buying the next book in a series a customer is reading. While these advanced features are still in development, Amazon sees them as part of its roadmap for enhancing the shopping experience through AI technology.
    • AI deepfakes have been used to circumvent cryptocurrency exchanges’ security measures, according to Axios. Research from Cato Networks’ threat intelligence lab reveals that fraudsters are employing AI to create fake identities, including passports and video verifications, to establish phony accounts on cryptocurrency exchanges. This technology allows money launderers to rapidly generate multiple accounts, making it easier to cash out illicit gains while evading detection. The AI-powered tools can create convincing fake documents and even produce deepfake videos that match the forged identities, fooling automated “proof of life” verification systems. Etay Maor from Cato warns that this development significantly scales up the operational capacity of fraudsters, making it more challenging for exchanges to detect and prevent fraudulent activities. The report suggests that crypto exchanges should introduce more randomness in their verification processes and potentially increase human oversight to combat this emerging threat.
    • OpenAI has cracked down on uses of its AI systems to create misinformation, according to the company. OpenAI reported disrupting five covert influence operations over the past three months that attempted to use the company’s AI tools, including ChatGPT, for deceptive activities. These operations, originating from Russia, China, Iran and Israel, sought to manipulate public opinion on various political issues. In response, the company terminated accounts linked to these operations and announced that it would implement further safety measures to prevent abuse. OpenAI noted that while these threat actors used AI to generate content in multiple languages, debug code and increase productivity, they did not achieve significant audience engagement.