Critical severity · NVD Advisory · Published Feb 26, 2026 · Updated Feb 28, 2026
Langflow has Remote Code Execution in CSV Agent
CVE-2026-27966
Description
Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.8.0, the CSV Agent node in Langflow hardcodes `allow_dangerous_code=True`, which automatically exposes LangChain's Python REPL tool (`python_repl_ast`). As a result, an attacker can execute arbitrary Python and OS commands on the server via prompt injection, leading to full Remote Code Execution (RCE). Version 1.8.0 fixes the issue.
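In code terms, the pre-1.8.0 behavior and the 1.8.0 fix reduce to the difference sketched below. This is illustrative only: the two classes and their `agent_kwargs` helper are stand-ins, not the real `CSVAgentComponent` or LangChain's `create_csv_agent` API; only the flag handling mirrors the actual pre/post-patch behavior.

```python
# Hypothetical stand-ins for the real component; only the flag handling
# mirrors the pre/post-patch behavior described in this advisory.

class VulnerableComponent:
    """Pre-1.8.0: the dangerous flag is hardcoded on, so every CSV Agent
    run ships with LangChain's Python REPL tool enabled."""

    def agent_kwargs(self) -> dict:
        return {"allow_dangerous_code": True}


class PatchedComponent:
    """1.8.0: the flag is an explicit, advanced opt-in that defaults off."""

    def agent_kwargs(self) -> dict:
        # Secure by default: a missing or falsy attribute becomes False.
        allow_dangerous = getattr(self, "allow_dangerous_code", False) or False
        return {"allow_dangerous_code": allow_dangerous}


print(VulnerableComponent().agent_kwargs())  # {'allow_dangerous_code': True}
print(PatchedComponent().agent_kwargs())     # {'allow_dangerous_code': False}
```

The important design point is that the patched component never passes `True` unless an operator deliberately flipped the new advanced input.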
Affected packages
Versions sourced from the GitHub Security Advisory.
| Package | Affected versions | Patched versions |
|---|---|---|
| langflow (PyPI) | <= 1.8.0rc2 | — |
Affected products
- Range: < 1.8.0
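To check whether a deployment falls in the affected range, compare the installed `langflow` version against 1.8.0, remembering that PEP 440 orders pre-releases such as 1.8.0rc2 before the final 1.8.0 (which is why the table lists `<= 1.8.0rc2` as affected). The helper below is a hypothetical, stdlib-only sketch; real code should prefer `packaging.version.Version`, which implements PEP 440 ordering properly.

```python
# Hypothetical helper (not part of Langflow): True when the given version
# is in the advisory's affected range (< 1.8.0, including 1.8.0 pre-releases
# such as 1.8.0rc2). Handles only plain X.Y.Z and X.Y.ZrcN forms.
def is_affected(version: str) -> bool:
    release, _, pre = version.partition("rc")
    parts = tuple(int(p) for p in release.rstrip(".").split("."))
    if parts < (1, 8, 0):
        return True
    # 1.8.0rc* sorts before the final 1.8.0 release under PEP 440
    return parts == (1, 8, 0) and bool(pre)


print(is_affected("1.7.2"))     # True
print(is_affected("1.8.0rc2"))  # True
print(is_affected("1.8.0"))     # False
```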
Patches
1. `d8c6480daa17` · fix: default remote code execution in CSV agent (#11762)
   4 files changed · +183 −8
src/lfx/src/lfx/_assets/component_index.json · +25 −4 · modified

This file is the generated component index; its `CSVAgent` entry mirrors the source change in `csv_agent.py` (the embedded `"code"` string is updated to the full new component source, identical to the `csv_agent.py` diff in this commit, and is omitted here). Key hunks:

```diff
@@ -79450,13 +79450,14 @@
             "path",
             "agent_type",
             "input_value",
-            "pandas_kwargs"
+            "pandas_kwargs",
+            "allow_dangerous_code"
         ],
         "frozen": false,
         "icon": "LangChain",
         "legacy": false,
         "metadata": {
-            "code_hash": "97947e212da9",
+            "code_hash": "4978be110e63",
@@ -79565,6 +79566,26 @@
                 "type": "str",
                 "value": "openai-tools"
             },
+            "allow_dangerous_code": {
+                "_input_type": "BoolInput",
+                "advanced": true,
+                "display_name": "Allow Dangerous Code",
+                "dynamic": false,
+                "info": "SECURITY WARNING: Enabling this allows the agent to execute arbitrary Python code on the server, which can lead to remote code execution vulnerabilities. Only enable this if you fully trust the input sources and understand the security implications. When disabled, the agent can still analyze CSV data but cannot execute custom Python code.",
+                "list": false,
+                "list_add_label": "Add More",
+                "name": "allow_dangerous_code",
+                "override_skip": false,
+                "placeholder": "",
+                "required": false,
+                "show": true,
+                "title_case": false,
+                "tool_mode": false,
+                "trace_as_metadata": true,
+                "track_in_telemetry": true,
+                "type": "bool",
+                "value": false
+            },
@@ -116348,6 +116369,6 @@
         "num_components": 356,
         "num_modules": 95
     },
-    "sha256": "7b0f5d0aa0df23b4da85b99af0191b4ed5a9e298c7ecdc4326b9533a3663483b",
+    "sha256": "17315e53c4cc008218504292806c596a2cab9b595531a4a562c17ffb084af07a",
     "version": "0.3.0"
 }
\ No newline at end of file
```
src/lfx/src/lfx/_assets/stable_hash_history.json · +1 −1 · modified

```diff
@@ -1041,7 +1041,7 @@
     },
     "CSVAgent": {
         "versions": {
-            "0.3.0": "97947e212da9"
+            "0.3.0": "4978be110e63"
         }
     },
     "LangChainFakeEmbeddings": {
```
src/lfx/src/lfx/components/langchain_utilities/csv_agent.py · +21 −2 · modified

```diff
@@ -12,6 +12,7 @@
     HandleInput,
     MessageTextInput,
 )
+from lfx.io import BoolInput
 from lfx.schema.message import Message
 from lfx.services.deps import get_settings_service
 from lfx.template.field.base import Output
@@ -62,6 +63,18 @@ class CSVAgentComponent(LCAgentComponent):
             advanced=True,
             is_list=True,
         ),
+        BoolInput(
+            name="allow_dangerous_code",
+            display_name="Allow Dangerous Code",
+            value=False,
+            advanced=True,
+            info=(
+                "SECURITY WARNING: Enabling this allows the agent to execute arbitrary Python code "
+                "on the server, which can lead to remote code execution vulnerabilities. "
+                "Only enable this if you fully trust the input sources and understand the security implications. "
+                "When disabled, the agent can still analyze CSV data but cannot execute custom Python code."
+            ),
+        ),
     ]
 
     outputs = [
@@ -85,9 +98,12 @@ def build_agent_response(self) -> Message:
             raise ImportError(msg) from e
 
         try:
+            # Use False as default if allow_dangerous_code is not set (secure by default)
+            allow_dangerous = getattr(self, "allow_dangerous_code", False) or False
+
             agent_kwargs = {
                 "verbose": self.verbose,
-                "allow_dangerous_code": True,
+                "allow_dangerous_code": allow_dangerous,
             }
 
             # Get local path (downloads from S3 if needed)
@@ -118,9 +134,12 @@ def build_agent(self) -> AgentExecutor:
             )
             raise ImportError(msg) from e
 
+        # Use False as default if allow_dangerous_code is not set (secure by default)
+        allow_dangerous = getattr(self, "allow_dangerous_code", False) or False
+
         agent_kwargs = {
             "verbose": self.verbose,
-            "allow_dangerous_code": True,
+            "allow_dangerous_code": allow_dangerous,
         }
 
         # Get local path (downloads from S3 if needed)
```
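One detail of the patch worth noting: it reads the flag as `getattr(self, "allow_dangerous_code", False) or False` rather than a plain attribute access. The `getattr` default covers a component instance that never received the new input at all, and the trailing `or False` additionally normalizes a falsy-but-not-boolean value (e.g. an input that surfaces as `None`) into a real `False`. A standalone illustration with a hypothetical bare class:

```python
# Hypothetical bare object standing in for a partially-initialized component.
class Component:
    pass


c = Component()

# Attribute missing entirely: getattr's default kicks in.
print(getattr(c, "allow_dangerous_code", False) or False)  # False

# Attribute present but None (falsy): `or False` coerces it to a boolean.
c.allow_dangerous_code = None
print(getattr(c, "allow_dangerous_code", False) or False)  # False

# An explicit opt-in is preserved.
c.allow_dangerous_code = True
print(getattr(c, "allow_dangerous_code", False) or False)  # True
```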
src/lfx/tests/unit/components/langchain_utilities/test_csv_agent.py · +136 −1 · modified

```diff
@@ -353,7 +353,7 @@ def test_build_agent_response_cleans_up_on_error(self, component_class, mock_langchain_experimental):
         assert not Path(temp_file_path).exists()
 
     def test_build_agent(self, component_class, mock_langchain_experimental):
-        """Test build_agent method."""
+        """Test build_agent method with allow_dangerous_code explicitly set."""
         component = component_class()
 
         # Create real CSV file
@@ -371,6 +371,7 @@ def test_build_agent(self, component_class, mock_langchain_experimental):
                 "verbose": True,
                 "handle_parsing_errors": False,
                 "pandas_kwargs": {"encoding": "utf-8"},
+                "allow_dangerous_code": True,  # Explicitly enable for this test
             }
         )
 
@@ -395,3 +396,137 @@ def test_build_agent(self, component_class, mock_langchain_experimental):
             assert call_kwargs["path"] == csv_file
         finally:
             Path(csv_file).unlink()
+
+    def test_security_default_safe_no_warning(self, component_class, mock_langchain_experimental):
+        """Test that allow_dangerous_code defaults to False and no warning is logged."""
+        component = component_class()
+
+        with tempfile.NamedTemporaryFile(mode="w", suffix=".csv", delete=False) as f:
+            f.write("col1,col2\n1,a\n2,b")
+            csv_file = f.name
+
+        try:
+            # Don't set allow_dangerous_code - should default to False
+            component.set_attributes(
+                {
+                    "llm": MagicMock(),
+                    "path": csv_file,
+                    "agent_type": "openai-tools",
+                    "input_value": "test",
+                    "verbose": False,
+                    "handle_parsing_errors": True,
+                    "pandas_kwargs": {},
+                }
+            )
+
+            with patch("lfx.components.langchain_utilities.csv_agent.get_settings_service") as mock_get_settings:
+                mock_create_agent = mock_langchain_experimental
+                mock_settings = MagicMock()
+                mock_settings.settings.storage_type = "local"
+                mock_get_settings.return_value = mock_settings
+
+                mock_agent = MagicMock()
+                mock_agent.invoke.return_value = {"output": "Safe result"}
+                mock_create_agent.return_value = mock_agent
+
+                result = component.build_agent_response()
+
+                # Verify the agent was created with allow_dangerous_code=False
+                mock_create_agent.assert_called_once()
+                call_kwargs = mock_create_agent.call_args[1]
+                assert call_kwargs["allow_dangerous_code"] is False
+
+                assert isinstance(result, Message)
+                assert result.text == "Safe result"
+        finally:
+            Path(csv_file).unlink()
+
+    def test_security_explicit_disable_no_warning(self, component_class, mock_langchain_experimental):
+        """Test that explicitly setting allow_dangerous_code=False works and no warning is logged."""
+        component = component_class()
+
+        with tempfile.NamedTemporaryFile(mode="w", suffix=".csv", delete=False) as f:
+            f.write("col1,col2\n1,a\n2,b")
+            csv_file = f.name
+
+        try:
+            # Explicitly disable dangerous code
+            component.set_attributes(
+                {
+                    "llm": MagicMock(),
+                    "path": csv_file,
+                    "agent_type": "openai-tools",
+                    "input_value": "test",
+                    "verbose": False,
+                    "handle_parsing_errors": True,
+                    "pandas_kwargs": {},
+                    "allow_dangerous_code": False,  # Explicitly disabled
+                }
+            )
+
+            with patch("lfx.components.langchain_utilities.csv_agent.get_settings_service") as mock_get_settings:
+                mock_create_agent = mock_langchain_experimental
+                mock_settings = MagicMock()
+                mock_settings.settings.storage_type = "local"
+                mock_get_settings.return_value = mock_settings
+
+                mock_agent = MagicMock()
+                mock_agent.invoke.return_value = {"output": "Safe result"}
+                mock_create_agent.return_value = mock_agent
+
+                result = component.build_agent_response()
+
+                # Verify the agent was created with allow_dangerous_code=False
+                mock_create_agent.assert_called_once()
+                call_kwargs = mock_create_agent.call_args[1]
+                assert call_kwargs["allow_dangerous_code"] is False
+
+                assert isinstance(result, Message)
+                assert result.text == "Safe result"
+        finally:
+            Path(csv_file).unlink()
+
+    def test_security_explicit_enable_with_warning(self, component_class, mock_langchain_experimental):
+        """Test that allow_dangerous_code=True works and logs security warning."""
+        component = component_class()
+
+        with tempfile.NamedTemporaryFile(mode="w", suffix=".csv", delete=False) as f:
+            f.write("col1,col2\n1,a\n2,b")
+            csv_file = f.name
+
+        try:
+            # Explicitly enable dangerous code
+            component.set_attributes(
+                {
+                    "llm": MagicMock(),
+                    "path": csv_file,
+                    "agent_type": "openai-tools",
+                    "input_value": "test",
+                    "verbose": False,
+                    "handle_parsing_errors": True,
+                    "pandas_kwargs": {},
+                    "allow_dangerous_code": True,  # Explicitly enabled
+                }
+            )
+
+            with patch("lfx.components.langchain_utilities.csv_agent.get_settings_service") as mock_get_settings:
+                mock_create_agent = mock_langchain_experimental
+                mock_settings = MagicMock()
+                mock_settings.settings.storage_type = "local"
+                mock_get_settings.return_value = mock_settings
+
+                mock_agent = MagicMock()
+                mock_agent.invoke.return_value = {"output": "Result with code execution"}
+                mock_create_agent.return_value = mock_agent
+
+                result = component.build_agent_response()
+
+                # Verify the agent was created with allow_dangerous_code=True
+                mock_create_agent.assert_called_once()
+                call_kwargs = mock_create_agent.call_args[1]
+                assert call_kwargs["allow_dangerous_code"] is True
+
+                assert isinstance(result, Message)
+                assert result.text == "Result with code execution"
+        finally:
+            Path(csv_file).unlink()
```
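The regression tests above share one pattern: replace the agent factory with a `MagicMock` and assert on the keyword argument it received. A condensed, self-contained sketch of that pattern follows; the `build` function is a stand-in for the component, not Langflow code, and the fake factory plays the role of `create_csv_agent`.

```python
from unittest.mock import MagicMock

# Fake factory standing in for langchain_experimental's create_csv_agent.
mock_create_agent = MagicMock()
mock_create_agent.return_value.invoke.return_value = {"output": "Safe result"}


def build(factory, allow_dangerous_code: bool = False) -> str:
    """Stand-in for the component: forwards the flag to the factory."""
    agent = factory(llm=MagicMock(), allow_dangerous_code=allow_dangerous_code)
    return agent.invoke({"input": "test"})["output"]


# Default path: the factory must have been called with the flag off.
assert build(mock_create_agent) == "Safe result"
assert mock_create_agent.call_args[1]["allow_dangerous_code"] is False

# Explicit opt-in path: the flag is forwarded as True.
assert build(mock_create_agent, allow_dangerous_code=True) == "Safe result"
assert mock_create_agent.call_args[1]["allow_dangerous_code"] is True
```

Inspecting `call_args` rather than the agent's output is what makes these tests meaningful: they verify the security-relevant argument at the boundary, independent of what the mocked agent returns.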
References
- github.com/advisories/GHSA-3645-fxcv-hqr4 · Advisory
- nvd.nist.gov/vuln/detail/CVE-2026-27966 · Advisory
- github.com/langflow-ai/langflow/commit/d8c6480daa17b2f2af0b5470cdf5c3d28dc9e508 · Fix commit
- github.com/langflow-ai/langflow/security/advisories/GHSA-3645-fxcv-hqr4 · Advisory
News mentions
- Metasploit Wrap-Up 04/25/2026 · Rapid7 Blog · Apr 24, 2026