Nearly a Third of Ollama APIs Found Exposed Without Authentication in Global Scan
Scan of 2 million hosts finds 1 million exposed AI services; 31% of Ollama APIs lack authentication, allowing unrestricted access.
TL;DR
Researchers scanned more than 2 million hosts and found roughly one million exposed AI services; 31% of the 5,200+ Ollama APIs probed answered without authentication, some responding to arbitrary prompts without restriction.
Context
The rapid rollout of self‑hosted large language models often skips basic security controls. Many projects ship with authentication disabled by default, leaving APIs open to the internet. Attackers can query these endpoints, manipulate model behavior, or pivot to internal systems.
Key Facts
The survey covered just over two million hosts identified via certificate transparency logs, uncovering approximately one million exposed services. Of the 5,200+ Ollama servers probed, 31% responded to a simple “Hello” prompt without requiring credentials. One instance returned a response indicating unrestricted access: “Greetings, Master. Your command is my law…”. With no authentication in place, anyone could send arbitrary prompts to the model.
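To make the probing step concrete, the check described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual tooling: it assumes Ollama's standard HTTP API (its default port 11434 and the `GET /api/tags` endpoint, which lists installed models); the function names and classification labels are our own.

```python
# Hypothetical sketch of classifying an Ollama endpoint as open or protected.
# Assumes the standard Ollama API; not the scan's real implementation.
import json
import urllib.request

OLLAMA_PORT = 11434  # Ollama's default listen port


def build_probe(host: str) -> urllib.request.Request:
    """Build a GET request for /api/tags, which lists installed models."""
    return urllib.request.Request(f"http://{host}:{OLLAMA_PORT}/api/tags")


def classify(status: int, body: str) -> str:
    """Label a probe response.

    An instance that returns its model list with HTTP 200 and no
    credentials is effectively open to the internet; a 401/403 means
    some access control (e.g. a reverse proxy) is in front of it.
    """
    if status == 200 and "models" in json.loads(body or "{}"):
        return "open"
    if status in (401, 403):
        return "auth"
    return "unknown"
```

An "open" classification here corresponds to the 31% of servers in the scan that answered without credentials.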
What It Means
Unauthenticated Ollama APIs give attackers a free compute resource for generating malicious content, jailbreaking safety guards, or using the model as a proxy for further intrusion. Exposed services also risk data leakage when integrated with internal tools, as seen in other AI platforms like n8n and Flowise. The lack of access controls expands the attack surface beyond the model itself.
What Defenders Should Do
- Enforce authentication on all Ollama instances; enable the built‑in token mechanism or place APIs behind a reverse proxy with strict access policies.
- Apply network segmentation: restrict Ollama ports to trusted subnets and block inbound traffic from the internet unless required.
- Monitor for anomalous API calls using detection rules for MITRE ATT&CK T1190 (Exploit Public‑Facing Application) and T1078 (Valid Accounts).
- Regularly audit container images and deployment manifests for default‑config flags that disable auth.
- Subscribe to vendor security advisories and patch any disclosed CVEs promptly.
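The reverse-proxy recommendation above can be sketched as an nginx configuration. This is a minimal illustrative fragment, not a hardened production config: the hostname, certificate paths, and htpasswd file are placeholders, and it assumes Ollama itself is bound to loopback only (e.g. via `OLLAMA_HOST=127.0.0.1:11434`).

```nginx
# Hypothetical sketch: Ollama reachable only through nginx with basic auth.
# All names and paths are illustrative placeholders.
server {
    listen 443 ssl;
    server_name ollama.internal.example;

    ssl_certificate     /etc/nginx/tls/ollama.crt;
    ssl_certificate_key /etc/nginx/tls/ollama.key;

    # Demand credentials before any request reaches the model API
    auth_basic           "Ollama API";
    auth_basic_user_file /etc/nginx/ollama.htpasswd;

    location / {
        # Ollama listens only on loopback, never on a public interface
        proxy_pass http://127.0.0.1:11434;
        proxy_set_header Host $host;
    }
}
```

Combined with firewall rules that drop external traffic to port 11434, this prevents the unauthenticated direct access the scan found.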
Watch for upcoming advisories that address authentication gaps in popular AI frameworks and for threat actors leveraging exposed LLMs in credential‑stealing campaigns.