AI News Roundup – Backlash from Silicon Valley tech companies, Anti-AI movement, Apple Intelligence, and more
- June 17, 2024
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
- California’s proposed AI regulations have caused backlash from Silicon Valley tech companies, according to the Financial Times. The proposed legislation, recently covered here in the AI News Roundup, would require companies to include a “kill switch” to shut down their models, guarantee that their models could not be used for hazardous purposes, and make routine assessments of bias in AI decision-making, among other requirements. Arun Rao, a lead product manager for generative AI at Meta, said that the proposed legislation was “unworkable” and would “end open source” in the state. An anonymous Silicon Valley venture capitalist told the Financial Times that they were already fielding queries from founders asking whether they would need to leave the state if the legislation passed. However, the bill’s chief sponsor, Democratic state Senator Scott Wiener, pushed back, saying that some responses were “not fully accurate” and that he planned to propose amendments to assuage concerns. One amendment would clarify that open-source model developers are not liable for violations by versions of their models that have been sufficiently modified by a third party. Another would limit the legislation to large models that “cost at least $100 million to train,” in an effort to protect AI startups. A representative of the Center for AI Safety, the non-profit behind the campaign, said that regulation of the rapidly advancing field was necessary because “there are these competitive pressures that are affecting these AI organizations that basically incentivize them to cut corners on safety.” The legislation is expected to receive a vote in California’s State Assembly in August 2024.
- Brian Merchant writes in The Atlantic about the anti-AI movement that has emerged in response to the ubiquity of AI technologies across common online platforms, from search engines to messaging and social media apps. Some companies have taken advantage of this backlash to market themselves as AI-free. The Unilever beauty brand Dove, for example, publicly committed to “never use AI to represent real women” as part of a campaign to phase out digital retouching in its advertising. Merchant likens the AI-free push to organic food labels: an effort by companies to signal that their products are human-made. Many writers, artists and other creators have joined the trend, adding disclaimers to their webpages and artwork stating that they do not use AI to make content, and many Instagram users have begun to adorn their posts with the hashtags “#NoAI” and “#NotAI.” With AI technologies still growing, especially at the large companies that dominate much of the Internet, it remains to be seen whether this small movement will gain broader traction. Merchant concludes that “if we want to preserve a human economy for creative goods and services, we’re going to have to fight for it too.”
- WIRED reports that LAION-5B, a text-image pair dataset used to train AI models, contains over 170 images and personal details of Brazilian children. The allegation, made in a new report from Human Rights Watch (HRW), has shone a spotlight on efforts to address the privacy and safety issues created when AI models are trained on personal and sensitive information. The photos “span the entirety of childhood,” from birth to birthday parties to class presentations at school; they were often posted on personal blogs or photo- and video-sharing sites, but otherwise appear impossible to find through normal Internet searches. According to HRW, the children whose photos are in LAION-5B “face further threats to their privacy due to flaws in the technology,” because models trained on the dataset can reproduce identical copies of the material they were trained on, in some cases including private medical records. The report also says that “malicious actors have used LAION-trained AI tools to generate explicit imagery of children using innocuous photos, as well as explicit imagery of child survivors whose images of sexual abuse were scraped into LAION-5B.” The report calls on Brazil’s government to strengthen its data protection laws and to ban the nonconsensual use of AI to generate explicit images of people. Hye Jung Han, a researcher at HRW, says that governments and regulators around the world must do more to protect children and their parents from this type of AI abuse: “children and their parents shouldn’t be made to shoulder responsibility for protecting kids against a technology that’s fundamentally impossible to protect against – it’s not their fault.”
- Apple unveiled AI features for its iPhone, iPad and Mac products at its 2024 Worldwide Developers Conference this past week, according to Axios. “Apple Intelligence,” as the suite is known, differentiates itself from other generative AI offerings by focusing on an individual customer’s information and use cases rather than on a broad variety of general information. Siri, Apple’s voice assistant, is set to receive a major overhaul as part of these efforts. For example, in response to the query “Open that podcast my wife sent a few days ago,” “Siri will be able to look for mail or other messages from the person it knows is the user’s wife, find the recent link to a podcast, and then open the file in the podcast app,” according to the report. Apple is also partnering with OpenAI, drawing on its ChatGPT model for queries that require more resources or search depth than a device can handle locally; in a privacy-focused move, Apple will require user permission each time a query is sent off the device. Beyond the ChatGPT partnership, Axios confirmed that “all the Apple Intelligence features — both those running on devices and those that are processed in Apple’s data centers — use the company’s own models.” Apple stated that Apple Intelligence features will be included, in beta, in the planned fall 2024 releases of iOS 18, iPadOS 18 and macOS Sequoia.
- “Slop” has emerged as the term of choice for critics of the AI equivalent of Internet “spam,” according to The New York Times. Examples of “slop” include a “low-price digital book that seems like the one you were looking for, but not quite,” and social media posts in your feed that “seem to come from nowhere.” The term, which “conjures images of heaps of unappetizing food being shoveled into troughs for livestock,” appears to have been first used in this context in 2022 as a reaction to AI image generators, but has recently gained traction in venues as varied as the anonymous message board 4chan, Y Combinator’s forum Hacker News and the comments of YouTube videos. Adam Aleksic, a linguist interviewed by The Times, said that “what we always see with any slang is that it starts in a niche community and then spreads from there.” Developer Simon Willison, whom some have identified as an early user of the term, said it was in use long before he found it and that “society needs concise ways to talk about modern AI — both the positives and the negatives,” concluding that “‘Ignore that email, it’s spam,’ and ‘Ignore that article, it’s slop,’ are both useful lessons.”