CVE-2026-32689
Description
Allocation of Resources Without Limits or Throttling vulnerability in phoenixframework phoenix allows a denial of service via the long-poll transport's NDJSON body handling.
In `Phoenix.Transports.LongPoll.publish/4`, when a POST request arrives with `Content-Type: application/x-ndjson`, the request body is split on newline characters using `String.split/2` with no limit on the number of resulting segments. An attacker can send a body consisting entirely of newline bytes, turning every byte into a list element: a 1 MB body produces roughly one million empty binaries, an 8 MB body roughly 8.4 million. `Enum.map/2` then walks that list, materializing a second list of the same size. The combined allocation exhausts BEAM memory and saturates the schedulers, crashing the node and terminating all active sessions.
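The amplification is easy to reproduce in miniature with any unlimited split. A minimal sketch (in JavaScript for illustration; the vulnerable call is Elixir's `String.split/2`, but the behavior of an uncapped split is the same):

```javascript
// Sketch of the amplification: an unlimited split of a newline-only body
// materializes one empty string per newline byte, plus one trailing segment.
function segmentCount(bodyBytes){
  const body = "\n".repeat(bodyBytes)  // attacker-controlled payload of pure newlines
  return body.split("\n").length       // eager split with no cap on segments
}

console.log(segmentCount(10))         // 11 segments from a 10-byte body
console.log(segmentCount(1_000_000))  // 1,000,001 segments from a 1 MB body
```

Each segment is a separate heap object, so the allocation cost scales linearly with attacker-supplied bytes at essentially zero cost to the attacker.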
A session token required to reach the vulnerable endpoint is freely obtainable by any client via an unauthenticated GET request to the same URL with a matching Origin header, making this attack effectively unauthenticated.
This issue affects phoenix: from 1.7.0 before 1.7.22, and from 1.8.0 before 1.8.6.
Affected packages
Versions sourced from the GitHub Security Advisory.
| Package | Affected versions | Patched versions |
|---|---|---|
| phoenix (Hex) | >= 1.7.0, < 1.7.22 | 1.7.22 |
| phoenix (Hex) | >= 1.8.0, < 1.8.6 | 1.8.6 |
Patches
1a67c61ff9ce: prevent unexpected memory usage on nd-json body splitting
4 files changed · +127 −23
assets/js/phoenix/constants.js · +1 −0 · modified

```diff
@@ -3,6 +3,7 @@ export const phxWindow = typeof window !== "undefined" ? window : null
 export const global = globalSelf || phxWindow || globalThis
 export const DEFAULT_VSN = "2.0.0"
 export const SOCKET_STATES = {connecting: 0, open: 1, closing: 2, closed: 3}
+export const MAX_LONGPOLL_BATCH_SIZE = 100;
 export const DEFAULT_TIMEOUT = 10000
 export const WS_CLOSE_NORMAL = 1000
 export const CHANNEL_STATES = {
```
assets/js/phoenix/longpoll.js · +11 −4 · modified

```diff
@@ -1,7 +1,8 @@
 import {
   SOCKET_STATES,
   TRANSPORTS,
-  AUTH_TOKEN_PREFIX
+  AUTH_TOKEN_PREFIX,
+  MAX_LONGPOLL_BATCH_SIZE
 } from "./constants"

 import Ajax from "./ajax"
@@ -149,16 +150,22 @@ export default class LongPoll {
     }
   }

-  batchSend(messages){
+  batchSend(messages, offset = 0){
     this.awaitingBatchAck = true
-    this.ajax("POST", {"Content-Type": "application/x-ndjson"}, messages.join("\n"), () => this.onerror("timeout"), resp => {
-      this.awaitingBatchAck = false
+    const next = offset + MAX_LONGPOLL_BATCH_SIZE
+    const batch = messages.slice(offset, next)
+    this.ajax("POST", {"Content-Type": "application/x-ndjson"}, batch.join("\n"), () => this.onerror("timeout"), resp => {
       if(!resp || resp.status !== 200){
+        this.awaitingBatchAck = false
         this.onerror(resp && resp.status)
         this.closeAndRetry(1011, "internal server error", false)
+      } else if(next < messages.length){
+        this.batchSend(messages, next)
       } else if(this.batchBuffer.length > 0){
         this.batchSend(this.batchBuffer)
         this.batchBuffer = []
+      } else {
+        this.awaitingBatchAck = false
       }
     })
   }
```
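The client-side half of the fix is plain chunking: no POST ever carries more than MAX_LONGPOLL_BATCH_SIZE NDJSON lines, and the next chunk goes out only after the previous one is acknowledged. The slicing arithmetic can be sketched standalone (`chunkNdjsonBodies` is an illustrative name, not part of the phoenix API):

```javascript
// Mirrors the constant added in the patch.
const MAX_LONGPOLL_BATCH_SIZE = 100

// Split a message list into NDJSON bodies of at most MAX_LONGPOLL_BATCH_SIZE
// lines each, using the same offset/next arithmetic as the patched batchSend().
function chunkNdjsonBodies(messages){
  const bodies = []
  for(let offset = 0; offset < messages.length; offset += MAX_LONGPOLL_BATCH_SIZE){
    bodies.push(messages.slice(offset, offset + MAX_LONGPOLL_BATCH_SIZE).join("\n"))
  }
  return bodies
}

// 150 queued messages become two POST bodies: 100 lines, then 50, in order.
const bodies = chunkNdjsonBodies(Array.from({length: 150}, (_, i) => `m${i}`))
console.log(bodies.length)                 // 2
console.log(bodies[0].split("\n").length)  // 100
console.log(bodies[1].split("\n").length)  // 50
```

The real implementation recurses from the ack callback rather than looping, which is what keeps at most one request in flight at a time.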
assets/test/longpoll_test.js · +95 −0 · modified

```diff
@@ -158,6 +158,101 @@ describe("LongPoll", () => {
       expect.any(Function)
     )
   })
+
+  it("coalesces rapid send() calls and buffers sends made during an in-flight batch", () => {
+    jest.useFakeTimers()
+    try {
+      const longpoll = new LongPoll("http://localhost/socket/longpoll", undefined)
+      longpoll.timeout = 1000
+      // suppress the initial poll() that the constructor schedules via setTimeout(0)
+      longpoll.poll = jest.fn()
+
+      const calls = []
+      Ajax.request.mockImplementation((method, url, headers, body, timeout, ontimeout, callback) => {
+        calls.push({method, body, callback})
+        return {abort: jest.fn()}
+      })
+
+      // Three sends in the same tick should collapse into one currentBatch
+      longpoll.send("a")
+      longpoll.send("b")
+      longpoll.send("c")
+
+      expect(calls).toHaveLength(0)
+      expect(longpoll.currentBatch).toEqual(["a", "b", "c"])
+
+      // Flush the setTimeout(0) — currentBatch becomes one POST
+      jest.runOnlyPendingTimers()
+
+      expect(calls).toHaveLength(1)
+      expect(calls[0].method).toBe("POST")
+      expect(calls[0].body).toBe("a\nb\nc")
+      expect(longpoll.currentBatch).toBeNull()
+      expect(longpoll.awaitingBatchAck).toBe(true)
+
+      // Sends during in-flight ack go to batchBuffer, not a new request
+      longpoll.send("d")
+      longpoll.send("e")
+      expect(calls).toHaveLength(1)
+      expect(longpoll.batchBuffer).toEqual(["d", "e"])
+
+      // Ack the first batch — the buffered sends should be flushed as the next POST
+      calls[0].callback({status: 200})
+
+      expect(calls).toHaveLength(2)
+      expect(calls[1].body).toBe("d\ne")
+      expect(longpoll.batchBuffer).toEqual([])
+      expect(longpoll.awaitingBatchAck).toBe(true)
+
+      // Ack the buffered batch — nothing left to send
+      calls[1].callback({status: 200})
+      expect(calls).toHaveLength(2)
+      expect(longpoll.awaitingBatchAck).toBe(false)
+    } finally {
+      jest.useRealTimers()
+    }
+  })
+
+  it("splits 150 rapid send() calls into two requests in order", () => {
+    jest.useFakeTimers()
+    try {
+      const longpoll = new LongPoll("http://localhost/socket/longpoll", undefined)
+      longpoll.timeout = 1000
+      longpoll.poll = jest.fn()
+
+      const calls = []
+      Ajax.request.mockImplementation((method, url, headers, body, timeout, ontimeout, callback) => {
+        calls.push({body, callback})
+        return {abort: jest.fn()}
+      })
+
+      for(let i = 0; i < 150; i++){ longpoll.send(`m${i}`) }
+
+      // Flush the setTimeout(0) so batchSend runs on the full 150-entry batch
+      jest.runOnlyPendingTimers()
+
+      expect(calls).toHaveLength(1)
+      const firstLines = calls[0].body.split("\n")
+      expect(firstLines).toHaveLength(100)
+      expect(firstLines[0]).toBe("m0")
+      expect(firstLines[99]).toBe("m99")
+
+      // Ack the first chunk — batchSend should recurse with the remaining 50
+      calls[0].callback({status: 200})
+
+      expect(calls).toHaveLength(2)
+      const secondLines = calls[1].body.split("\n")
+      expect(secondLines).toHaveLength(50)
+      expect(secondLines[0]).toBe("m100")
+      expect(secondLines[49]).toBe("m149")
+
+      calls[1].callback({status: 200})
+      expect(calls).toHaveLength(2)
+      expect(longpoll.awaitingBatchAck).toBe(false)
+    } finally {
+      jest.useRealTimers()
+    }
+  })
   })
 })
```
lib/phoenix/transports/long_poll.ex · +20 −19 · modified

```diff
@@ -2,8 +2,11 @@ defmodule Phoenix.Transports.LongPoll do
   @moduledoc false
   @behaviour Plug

-  # 10MB
+  # The maximum is 10MB but read_body will cap the whole request at ~8MB,
+  # so this acts as a secondary protection mechanism.
   @max_base64_size 10_000_000
+  # TODO: enforce batch size on the server in the next release
+  # @max_poll_batch_size 100
   @connect_info_opts [:check_csrf]

   import Plug.Conn
@@ -78,30 +81,28 @@ defmodule Phoenix.Transports.LongPoll do
   defp publish(conn, server_ref, endpoint, opts) do
     case read_body(conn, []) do
       {:ok, body, conn} ->
-        # we need to match on both v1 and v2 protocol, as well as wrap for backwards compat
-        batch =
+        # We need to match on both v1 and v2 protocol, as well as wrap for backwards compat
+        status =
           case get_req_header(conn, "content-type") do
             ["application/x-ndjson"] ->
               body
-              |> String.split(["\n", "\r\n"])
-              |> Enum.map(fn
-                "[" <> _ = txt -> {txt, :text}
-                base64 -> {safe_decode64!(base64), :binary}
+              |> String.splitter(["\n", "\r\n"])
+              # |> Stream.take(@max_poll_batch_size)
+              |> Enum.find(fn part ->
+                msg =
+                  case part do
+                    "[" <> _ = txt -> {txt, :text}
+                    base64 -> {safe_decode64!(base64), :binary}
+                  end
+
+                transport_dispatch(endpoint, server_ref, msg, opts)
               end)

             _ ->
-              [{body, :text}]
+              transport_dispatch(endpoint, server_ref, {body, :text}, opts)
           end

-        {conn, status} =
-          Enum.reduce_while(batch, {conn, nil}, fn msg, {conn, _status} ->
-            case transport_dispatch(endpoint, server_ref, msg, opts) do
-              :ok -> {:cont, {conn, :ok}}
-              :request_timeout = timeout -> {:halt, {conn, timeout}}
-            end
-          end)
-
-        conn |> put_status(status) |> status_json()
+        conn |> put_status(status || :ok) |> status_json()

       _ ->
         raise Plug.BadRequestError
@@ -121,8 +122,8 @@ defmodule Phoenix.Transports.LongPoll do
     broadcast_from!(endpoint, server_ref, {:dispatch, client_ref(server_ref), body, ref})

     receive do
-      {:ok, ^ref} -> :ok
-      {:error, ^ref} -> :ok
+      {:ok, ^ref} -> nil
+      {:error, ^ref} -> nil
     after
       opts[:window_ms] -> :request_timeout
     end
```
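On the server, the patch replaces `String.split/2` plus `Enum.map/2` (which materialize two full lists before any dispatch) with `String.splitter/2` plus `Enum.find/2`, which lazily yields one segment at a time and halts at the first non-ok dispatch result. The same eager-versus-lazy distinction can be sketched in JavaScript with a generator (`splitLazily` and `dispatchAll` are illustrative names, not phoenix APIs):

```javascript
// Lazy newline splitter: yields one segment at a time instead of
// materializing the whole list, analogous to Elixir's String.splitter/2.
function* splitLazily(body, sep = "\n"){
  let start = 0
  let i = body.indexOf(sep)
  while(i !== -1){
    yield body.slice(start, i)
    start = i + sep.length
    i = body.indexOf(sep, start)
  }
  yield body.slice(start)
}

// Dispatch segments one by one and stop at the first non-ok status,
// like the patched Enum.find over transport_dispatch results.
function dispatchAll(body, dispatch){
  for(const segment of splitLazily(body)){
    const status = dispatch(segment)   // null means "ok", keep going
    if(status !== null) return status  // e.g. "request_timeout" halts
  }
  return "ok"
}

let seen = 0
const status = dispatchAll("a\nb\nc", seg => { seen++; return seg === "b" ? "request_timeout" : null })
console.log(status, seen)  // "request_timeout" after touching only 2 segments
```

Because segments are produced on demand, a newline-flood body never becomes a full in-memory list, and processing stops as soon as a dispatch times out.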
912ea181fd24: prevent unexpected memory usage on nd-json body splitting
3 files changed · +32 −23
assets/js/phoenix/constants.js · +1 −0 · modified

```diff
@@ -3,6 +3,7 @@ export const phxWindow = typeof window !== "undefined" ? window : null
 export const global = globalSelf || phxWindow || global
 export const DEFAULT_VSN = "2.0.0"
 export const SOCKET_STATES = {connecting: 0, open: 1, closing: 2, closed: 3}
+export const MAX_LONGPOLL_BATCH_SIZE = 100;
 export const DEFAULT_TIMEOUT = 10000
 export const WS_CLOSE_NORMAL = 1000
 export const CHANNEL_STATES = {
```
assets/js/phoenix/longpoll.js · +11 −4 · modified

```diff
@@ -1,6 +1,7 @@
 import {
   SOCKET_STATES,
-  TRANSPORTS
+  TRANSPORTS,
+  MAX_LONGPOLL_BATCH_SIZE
 } from "./constants"

 import Ajax from "./ajax"
@@ -132,16 +133,22 @@ export default class LongPoll {
     }
   }

-  batchSend(messages){
+  batchSend(messages, offset = 0){
     this.awaitingBatchAck = true
-    this.ajax("POST", "application/x-ndjson", messages.join("\n"), () => this.onerror("timeout"), resp => {
-      this.awaitingBatchAck = false
+    const next = offset + MAX_LONGPOLL_BATCH_SIZE
+    const batch = messages.slice(offset, next)
+    this.ajax("POST", {"Content-Type": "application/x-ndjson"}, batch.join("\n"), () => this.onerror("timeout"), resp => {
       if(!resp || resp.status !== 200){
+        this.awaitingBatchAck = false
         this.onerror(resp && resp.status)
         this.closeAndRetry(1011, "internal server error", false)
+      } else if(next < messages.length){
+        this.batchSend(messages, next)
       } else if(this.batchBuffer.length > 0){
         this.batchSend(this.batchBuffer)
         this.batchBuffer = []
+      } else {
+        this.awaitingBatchAck = false
       }
     })
   }
```
lib/phoenix/transports/long_poll.ex · +20 −19 · modified

```diff
@@ -2,8 +2,11 @@ defmodule Phoenix.Transports.LongPoll do
   @moduledoc false
   @behaviour Plug

-  # 10MB
+  # The maximum is 10MB but read_body will cap the whole request at ~8MB,
+  # so this acts as a secondary protection mechanism.
   @max_base64_size 10_000_000
+  # TODO: enforce batch size on the server in the next release
+  # @max_poll_batch_size 100

   import Plug.Conn
   alias Phoenix.Socket.{V1, V2, Transport}
@@ -77,30 +80,28 @@ defmodule Phoenix.Transports.LongPoll do
   defp publish(conn, server_ref, endpoint, opts) do
     case read_body(conn, []) do
       {:ok, body, conn} ->
-        # we need to match on both v1 and v2 protocol, as well as wrap for backwards compat
-        batch =
+        # We need to match on both v1 and v2 protocol, as well as wrap for backwards compat
+        status =
          case get_req_header(conn, "content-type") do
            ["application/x-ndjson"] ->
              body
-              |> String.split(["\n", "\r\n"])
-              |> Enum.map(fn
-                "[" <> _ = txt -> {txt, :text}
-                base64 -> {safe_decode64!(base64), :binary}
+              |> String.splitter(["\n", "\r\n"])
+              # |> Stream.take(@max_poll_batch_size)
+              |> Enum.find(fn part ->
+                msg =
+                  case part do
+                    "[" <> _ = txt -> {txt, :text}
+                    base64 -> {safe_decode64!(base64), :binary}
+                  end
+
+                transport_dispatch(endpoint, server_ref, msg, opts)
              end)

            _ ->
-              [{body, :text}]
+              transport_dispatch(endpoint, server_ref, {body, :text}, opts)
          end

-        {conn, status} =
-          Enum.reduce_while(batch, {conn, nil}, fn msg, {conn, _status} ->
-            case transport_dispatch(endpoint, server_ref, msg, opts) do
-              :ok -> {:cont, {conn, :ok}}
-              :request_timeout = timeout -> {:halt, {conn, timeout}}
-            end
-          end)
-
-        conn |> put_status(status) |> status_json()
+        conn |> put_status(status || :ok) |> status_json()

       _ ->
         raise Plug.BadRequestError
@@ -120,8 +121,8 @@ defmodule Phoenix.Transports.LongPoll do
     broadcast_from!(endpoint, server_ref, {:dispatch, client_ref(server_ref), body, ref})

     receive do
-      {:ok, ^ref} -> :ok
-      {:error, ^ref} -> :ok
+      {:ok, ^ref} -> nil
+      {:error, ^ref} -> nil
     after
       opts[:window_ms] -> :request_timeout
     end
```
References
- github.com/advisories/GHSA-628h-q48j-jr6q (GHSA advisory)
- nvd.nist.gov/vuln/detail/CVE-2026-32689 (NVD)
- cna.erlef.org/cves/CVE-2026-32689.html (CNA record)
- github.com/phoenixframework/phoenix/commit/1a67c61ff9ce0a7711662ac7354861917a7c80f7
- github.com/phoenixframework/phoenix/commit/912ea181fd247c21dbcc49fb97d0053b947d81bf
- github.com/phoenixframework/phoenix/security/advisories/GHSA-628h-q48j-jr6q
- osv.dev/vulnerability/EEF-CVE-2026-32689