Open-source AI models vulnerable to criminal misuse, researchers warn

Reuters
2026.01.29 11:00

Researchers warn that open-source large language models (LLMs) are vulnerable to criminal misuse, with thousands of servers running these models outside the security controls of major AI platforms. The study, conducted by SentinelOne and Censys, found that hackers can exploit such exposed deployments for spam, phishing, and disinformation campaigns. Many publicly accessible deployments of open-source LLMs, including Meta's Llama and Google's Gemma, lack the guardrails needed to refuse illicit requests. Experts emphasize that labs and developers share responsibility for mitigating foreseeable harms and ensuring appropriate safeguards are in place.