AI News Roundup – Midjourney and Stability AI feud, AI tools for teachers, Anthropic releases new LLM and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • Generative AI companies do not like it when their data gets scraped. Image generation firm Midjourney has banned employees of rival Stability AI from accessing its service after accounts linked to Stability AI allegedly brought down the service by requesting “all the prompt and image pairs.” Stability AI management has stated that it did not order the “botnet-like activity,” which originated from “paid accounts.”
    • While student use of large language models, such as ChatGPT, for homework assignments is generally frowned upon, such models are growing in popularity as grading tools for teachers. Of course, this raises the question of how long it will be before grades are assigned by AI for essays written with AI, and at what point humans will no longer be involved. Some educators have suggested that AI-based teaching tools may enable more individualized lesson plans, letting faster learners move ahead to more challenging content.
    • India’s Ministry of Electronics and Information Technology (MeitY) has issued an advisory stating that generative AI platforms must not “permit any bias or discrimination or threaten the integrity of the electoral process.” While not yet legally binding, the advisory is the first strong signal from a major world government that oversight of AI’s role in elections is imminent.
    • Anthropic has released Claude 3, the third generation of its “constitutional AI” model. The company has claimed that Claude is more trustworthy and reliable than the competing GPT and Gemini models, while outperforming those competitors on a number of benchmarks. Anthropic also recently released a prompt library to help users of its Claude LLM generate more relevant outputs.
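
For readers curious what using such a curated prompt looks like in practice, here is a minimal sketch of passing a prompt-library-style system prompt to Claude 3 via Anthropic’s Python SDK. The system prompt text and the summarization task are illustrative placeholders, not taken from Anthropic’s actual library.

```python
# Minimal sketch: calling Claude 3 through the Anthropic Python SDK with a
# prompt-library-style system prompt. The prompt wording below is a
# hypothetical example, not an entry from Anthropic's prompt library.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",  # one of the Claude 3 models
    max_tokens=512,
    system=(
        "You are a meticulous legal-news summarizer. "
        "Summarize the user's text in three neutral bullet points."
    ),
    messages=[
        {"role": "user", "content": "Summarize: Midjourney has banned ..."}
    ],
)

print(response.content[0].text)  # the model's reply as plain text
```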