AI Infrastructure Scan Finds 31% of Ollama APIs Open and Over 90 Exposed Instances Across Key Sectors
Scan of over 5,200 Ollama servers shows 31% lack authentication, while more than 90 AI instances in government, finance and marketing are exposed, highlighting critical access‑control gaps.
TL;DR
A scan of more than 5,200 Ollama APIs found 31% responding without authentication, and over 90 AI‑related instances in government, marketing and finance were fully exposed. The results highlight a widespread lack of basic access controls in self‑hosted LLM deployments.
Context
Organizations are rushing to self-host large language models for speed and cost savings, but many projects ship with authentication disabled by default. The ClawdBot assistant, which averages 2.6 new CVEs each day, exemplifies how rapid AI adoption outpaces security hardening. Using certificate transparency logs, researchers enumerated more than two million hosts, roughly one million of them running exposed services, and then focused on Ollama endpoints.
Key Facts
- 31% of the 5,200+ Ollama servers queried answered a simple “Hello” prompt without demanding credentials.
- More than 90 exposed AI infrastructure instances were identified across the government, marketing and finance sectors.
- The ClawdBot project continues to generate roughly 2.6 CVEs per day, indicating high vulnerability churn in similar self-hosted tools.
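A probe of the kind the researchers describe can be sketched in a few lines. The endpoint and response check below are assumptions based on Ollama's public REST API: an unauthenticated server answers `GET /api/tags` with a JSON listing of its models, so a 200 response with that shape is a reasonable heuristic for an open instance. Only probe systems you are authorized to test.

```python
import json
import urllib.error
import urllib.request


def looks_like_open_ollama(body: str) -> bool:
    """Heuristic: an unauthenticated Ollama server answers /api/tags
    with a JSON object containing a 'models' list."""
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and isinstance(data.get("models"), list)


def probe(host: str, port: int = 11434, timeout: float = 5.0) -> bool:
    """Return True if host:port appears to expose an Ollama API
    without credentials. 11434 is Ollama's default port."""
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            return resp.status == 200 and looks_like_open_ollama(body)
    except (urllib.error.URLError, TimeoutError, OSError):
        # Connection refused, auth challenge failure, or timeout:
        # treat as not openly exposed.
        return False
```

A login page, proxy error, or auth challenge fails the `looks_like_open_ollama` check, which keeps false positives down compared with testing the status code alone.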
What It Means
Unauthenticated Ollama APIs allow anyone to query models, potentially extracting sensitive prompts, altering workflows, or abusing compute resources for malicious content generation. Exposed agent platforms such as n8n and Flowise further increase risk by leaking credential lists and enabling server-side code execution. In regulated sectors, these gaps could violate data-protection rules and facilitate lateral movement inside corporate networks.
What Defenders Should Do
- Enforce authentication on every Ollama instance. Ollama does not ship with built-in authentication, so bind it to localhost (`OLLAMA_HOST=127.0.0.1`) or place it behind a reverse proxy that enforces API keys or OAuth.
- Scan internal networks for exposed Ollama, n8n and Flowise services using tools like Shodan or custom scripts, and block public access unless it is strictly required.
- Apply the latest patches for ClawdBot and related AI frameworks, and monitor CVE feeds given the roughly 2.6-per-day disclosure rate.
- Deploy runtime detection for anomalous LLM calls (e.g., high-volume prompts, jailbreak attempts), using MITRE ATT&CK T1059 (Command and Scripting Interpreter) and T1078 (Valid Accounts) as reference techniques.
- Review API key storage; ensure keys are never hard-coded in client-side code or written to logs.
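The reverse-proxy recommendation above can be sketched as an nginx fragment. This is a minimal illustration, not a hardened deployment: the hostname, certificate paths, and htpasswd file are placeholders, and HTTP basic auth stands in for whatever credential scheme (OAuth, mTLS, API keys) the organization actually uses.

```nginx
# Sketch: require credentials in front of a local Ollama instance,
# since Ollama itself performs no authentication.
server {
    listen 443 ssl;
    server_name ollama.internal.example.com;        # placeholder hostname

    ssl_certificate     /etc/nginx/tls/ollama.crt;  # placeholder cert paths
    ssl_certificate_key /etc/nginx/tls/ollama.key;

    location / {
        auth_basic           "Ollama API";
        auth_basic_user_file /etc/nginx/ollama.htpasswd;  # created with htpasswd
        proxy_pass           http://127.0.0.1:11434;      # Ollama bound to localhost only
    }
}
```

Pairing this with `OLLAMA_HOST=127.0.0.1` ensures the API is reachable only through the authenticated proxy, not directly on port 11434.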
Watch for upcoming advisories from the CISA KEV catalog that may list specific Ollama CVEs, and prepare to tighten model‑access policies as regulators issue guidance on AI‑system security.