Medium severity · CVSS 5.4 · GHSA Advisory · Published May 15, 2026
CVE-2026-45365
Description
Open WebUI is a self-hosted artificial intelligence platform designed to operate entirely offline. Prior to version 0.8.11, the internal-only bypass_filter parameter was exposed on the /openai/chat/completions and /ollama/api/chat HTTP endpoints through FastAPI's query-string binding. Any authenticated user could append ?bypass_filter=true to a request, bypassing model access control checks and invoking admin-restricted models. This vulnerability is fixed in 0.8.11.
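The root cause is that FastAPI binds simple-typed keyword parameters in a route handler's signature from the query string, so a flag intended only for internal callers becomes attacker-controlled. A minimal stdlib-only sketch of this pattern (handler and helper names are hypothetical, not Open WebUI's actual code; the bool parsing approximates FastAPI's behavior):

```python
def parse_bool(value: str) -> bool:
    # Approximation of FastAPI's truthy parsing for boolean query parameters.
    return value.lower() in {"1", "true", "yes", "on"}

def vulnerable_handler(query_params: dict, *, bypass_filter: bool = False) -> str:
    # Emulates query-string binding: if the client sends ?bypass_filter=true,
    # the externally supplied value overrides the internal-only default.
    if "bypass_filter" in query_params:
        bypass_filter = parse_bool(query_params["bypass_filter"])
    if bypass_filter:
        return "access control skipped"
    return "access control enforced"
```

An authenticated attacker only needs to add the query parameter: `vulnerable_handler({"bypass_filter": "true"})` skips the access check, while a plain request (`vulnerable_handler({})`) enforces it.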
Affected products
Range: <= 0.8.10
Patches
3 files changed · +14 −4
backend/open_webui/routers/ollama.py (+5 −1, modified)

@@ -1285,7 +1285,6 @@ async def generate_chat_completion(
     form_data: dict,
     url_idx: Optional[int] = None,
     user=Depends(get_verified_user),
-    bypass_filter: Optional[bool] = False,
     bypass_system_prompt: bool = False,
 ):
     if not request.app.state.config.ENABLE_OLLAMA_API:
@@ -1295,6 +1294,11 @@ async def generate_chat_completion(
     # Database operations (get_model_by_id, AccessGrants.has_access) manage their own short-lived sessions.
     # This prevents holding a connection during the entire LLM call (30-60+ seconds),
     # which would exhaust the connection pool under concurrent load.
+
+    # bypass_filter is read from request.state to prevent external clients from
+    # setting it via query parameter (CVE fix). Only internal server-side callers
+    # (e.g. utils/chat.py) should set request.state.bypass_filter = True.
+    bypass_filter = getattr(request.state, "bypass_filter", False)
     if BYPASS_MODEL_ACCESS_CONTROL:
         bypass_filter = True
backend/open_webui/routers/openai.py (+5 −1, modified)

@@ -938,13 +938,17 @@ async def generate_chat_completion(
     request: Request,
     form_data: dict,
     user=Depends(get_verified_user),
-    bypass_filter: Optional[bool] = False,
     bypass_system_prompt: bool = False,
 ):
     # NOTE: We intentionally do NOT use Depends(get_session) here.
     # Database operations (get_model_by_id, AccessGrants.has_access) manage their own short-lived sessions.
     # This prevents holding a connection during the entire LLM call (30-60+ seconds),
     # which would exhaust the connection pool under concurrent load.
+
+    # bypass_filter is read from request.state to prevent external clients from
+    # setting it via query parameter (CVE fix). Only internal server-side callers
+    # (e.g. utils/chat.py) should set request.state.bypass_filter = True.
+    bypass_filter = getattr(request.state, "bypass_filter", False)
     if BYPASS_MODEL_ACCESS_CONTROL:
         bypass_filter = True
backend/open_webui/utils/chat.py (+4 −2, modified)

@@ -166,6 +166,10 @@
     if BYPASS_MODEL_ACCESS_CONTROL:
         bypass_filter = True

+    # Propagate bypass_filter via request.state so that downstream route
+    # handlers (openai/ollama) can read it without exposing it as a query param.
+    request.state.bypass_filter = bypass_filter
+
     if hasattr(request.state, "metadata"):
         if "metadata" not in form_data:
             form_data["metadata"] = request.state.metadata
@@ -269,7 +273,6 @@ async def stream_wrapper(stream):
             request=request,
             form_data=form_data,
             user=user,
-            bypass_filter=bypass_filter,
             bypass_system_prompt=bypass_system_prompt,
         )
     if form_data.get("stream"):
@@ -286,7 +289,6 @@ async def stream_wrapper(stream):
             request=request,
             form_data=form_data,
             user=user,
-            bypass_filter=bypass_filter,
             bypass_system_prompt=bypass_system_prompt,
         )
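The fix moves the flag out of the route signature entirely: only server-side code can write to request.state, so external requests can never set it, while internal callers (utils/chat.py) populate it before dispatching to the route handler. A stdlib-only sketch of this pattern, using SimpleNamespace as a stand-in for a Starlette request (the handler and caller names are illustrative, not Open WebUI's actual functions):

```python
from types import SimpleNamespace

def patched_handler(request) -> str:
    # bypass_filter is no longer a route parameter; it is read from
    # request.state, which only server-side code can populate.
    bypass_filter = getattr(request.state, "bypass_filter", False)
    return "access control skipped" if bypass_filter else "access control enforced"

def internal_caller(request, bypass_filter: bool) -> str:
    # Mirrors the utils/chat.py change: propagate the flag via request.state
    # before invoking the downstream handler.
    request.state.bypass_filter = bypass_filter
    return patched_handler(request)

# External requests never carry request.state.bypass_filter, so the
# getattr default of False applies and the access check always runs.
external_request = SimpleNamespace(state=SimpleNamespace())
```

With this shape, `patched_handler(external_request)` enforces the check regardless of any query string, while a trusted internal call path can still opt out deliberately.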