cbor2 has a Denial of Service via Uncontrolled Recursion in cbor2.loads
Description
cbor2 provides encoding and decoding for the Concise Binary Object Representation (CBOR) serialization format. Versions prior to 5.9.0 are vulnerable to a Denial of Service (DoS) attack caused by uncontrolled recursion when decoding deeply nested CBOR structures. The vulnerability affects both the pure Python implementation and the C extension _cbor2: the C extension relies on Python's internal recursion guard (`Py_EnterRecursiveCall`) rather than a data-driven depth limit, so it still raises RecursionError and crashes the worker process when the limit is hit. While the library handles moderate nesting levels, it lacks a hard depth limit. An attacker can supply a crafted CBOR payload containing approximately 100,000 nested arrays (each encoded as the single byte `0x81`). When cbor2.loads() attempts to parse this, it hits the Python interpreter's maximum recursion depth or exhausts the stack, causing the process to crash with a RecursionError. Because the library does not enforce its own limit, an external attacker can exhaust the host application's stack. In many web application servers (e.g., Gunicorn, Uvicorn) and task queues (e.g., Celery), an unhandled RecursionError terminates the worker process immediately. By sending a stream of these small (<100 KB) malicious payloads, an attacker can repeatedly crash worker processes, resulting in a complete Denial of Service for the application. Version 5.9.0 patches the issue.
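The payload shape described above is trivial to construct: each `0x81` byte opens a one-element CBOR array, so a run of them followed by any terminal value (here `0x00`, the unsigned integer 0) yields arbitrarily deep nesting. A minimal sketch of building such a payload (the depth of 100,000 matches the advisory's description; do not feed this to an unpatched decoder in production):

```python
# Sketch: construct the deeply nested CBOR payload from the advisory.
# 0x81 = CBOR major type 4 (array) with length 1; 0x00 = unsigned integer 0.
DEPTH = 100_000

payload = b"\x81" * DEPTH + b"\x00"

# The whole payload is tiny (well under 100 KB), yet a recursive decoder
# needs one stack frame per nesting level to parse it.
assert len(payload) == DEPTH + 1
```

Note the asymmetry that makes this attack cheap: one byte of input per level of recursion on the decoder's side.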
Affected packages
Versions sourced from the GitHub Security Advisory.
| Package | Affected versions | Patched versions |
|---|---|---|
| cbor2 (PyPI) | < 5.9.0 | 5.9.0 |
Patches

- `e61a5f365ba6` Set default read_size to 1 for backwards compatibility (#275)
6 files changed · +130 −30
cbor2/_decoder.py (modified, +31 −2)

```diff
@@ -71,6 +71,7 @@ def __init__(
         tag_hook: Callable[[CBORDecoder, CBORTag], Any] | None = None,
         object_hook: Callable[[CBORDecoder, dict[Any, Any]], Any] | None = None,
         str_errors: Literal["strict", "error", "replace"] = "strict",
+        read_size: int = 1,
     ):
         """
         :param fp:
@@ -89,6 +90,13 @@ def __init__(
         :param str_errors:
             determines how to handle unicode decoding errors (see the `Error Handlers`_
             section in the standard library documentation for details)
+        :param read_size:
+            the minimum number of bytes to read at a time.
+            Setting this to a higher value like 4096 improves performance,
+            but is likely to read past the end of the CBOR value, advancing the stream
+            position beyond the decoded data. This only matters if you need to reuse the
+            stream after decoding.
+            Ignored in the pure Python implementation, but included for API compatibility.

         .. _Error Handlers: https://docs.python.org/3/library/codecs.html#error-handlers
@@ -828,6 +836,7 @@ def loads(
     tag_hook: Callable[[CBORDecoder, CBORTag], Any] | None = None,
     object_hook: Callable[[CBORDecoder, dict[Any, Any]], Any] | None = None,
     str_errors: Literal["strict", "error", "replace"] = "strict",
+    read_size: int = 1,
 ) -> Any:
     """
     Deserialize an object from a bytestring.
@@ -846,6 +855,10 @@ def loads(
     :param str_errors:
         determines how to handle unicode decoding errors (see the `Error Handlers`_
         section in the standard library documentation for details)
+    :param read_size:
+        the minimum number of bytes to read at a time.
+        Setting this to a higher value like 4096 improves performance.
+        Ignored in the pure Python implementation, but included for API compatibility.

     :return: the deserialized object
@@ -854,7 +867,11 @@ def loads(
     """
     with BytesIO(s) as fp:
         return CBORDecoder(
-            fp, tag_hook=tag_hook, object_hook=object_hook, str_errors=str_errors
+            fp,
+            tag_hook=tag_hook,
+            object_hook=object_hook,
+            str_errors=str_errors,
+            read_size=read_size,
         ).decode()
@@ -863,6 +880,7 @@ def load(
     tag_hook: Callable[[CBORDecoder, CBORTag], Any] | None = None,
     object_hook: Callable[[CBORDecoder, dict[Any, Any]], Any] | None = None,
     str_errors: Literal["strict", "error", "replace"] = "strict",
+    read_size: int = 1,
 ) -> Any:
     """
     Deserialize an object from an open file.
@@ -881,12 +899,23 @@ def load(
     :param str_errors:
         determines how to handle unicode decoding errors (see the `Error Handlers`_
         section in the standard library documentation for details)
+    :param read_size:
+        the minimum number of bytes to read at a time.
+        Setting this to a higher value like 4096 improves performance,
+        but is likely to read past the end of the CBOR value, advancing the stream
+        position beyond the decoded data. This only matters if you need to reuse the
+        stream after decoding.
+        Ignored in the pure Python implementation, but included for API compatibility.

     :return: the deserialized object

     .. _Error Handlers: https://docs.python.org/3/library/codecs.html#error-handlers
     """
     return CBORDecoder(
-        fp, tag_hook=tag_hook, object_hook=object_hook, str_errors=str_errors
+        fp,
+        tag_hook=tag_hook,
+        object_hook=object_hook,
+        str_errors=str_errors,
+        read_size=read_size,
     ).decode()
```
docs/usage.rst (modified, +11 −0)

```diff
@@ -74,6 +74,17 @@ instead encodes a reference to the nth sufficiently long string already encoded.
 .. warning:: Support for string referencing is rare in other CBOR implementations, so think
    carefully whether you want to enable it.

+Performance tuning
+------------------
+
+By default, the decoder only reads the exact amount of bytes it needs. But this can negatively
+impact the performance due to the potentially large number of individual read operations.
+To make it faster, you can pass a different ``read_size`` parameter (say, 4096), to :func:`load`,
+:func:`loads` or :class:`CBORDecoder`.
+
+.. warning:: If the input stream contains data other than the CBOR stream, that data (or parts of)
+   may be lost.
+
 Tag support
 -----------
```
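The stream-position concern that motivated this documentation change can be illustrated without cbor2 at all: a reader that pulls fixed-size chunks overshoots the logical end of the value, while exact-size reads leave the stream positioned for any trailing data. A standard-library-only sketch (the 4096 chunk size mirrors the buffered default introduced in 5.8.0):

```python
from io import BytesIO

data = b"\x01" + b"trailing"   # a one-byte "value" followed by unrelated data

# Exact read (analogous to read_size=1): position stops right after the value,
# so the trailing bytes are still available to the caller.
exact = BytesIO(data)
exact.read(1)
assert exact.tell() == 1
assert exact.read() == b"trailing"

# Chunked read (analogous to read_size=4096): the oversized read swallows the
# trailing bytes, so the stream position no longer reflects what was decoded.
chunked = BytesIO(data)
chunked.read(4096)
assert chunked.tell() == len(data)
assert chunked.read() == b""
```

This is exactly why 5.9.0 reverts the default to `read_size=1`: buffering trades stream-position fidelity for speed, which is only safe when the caller does not reuse the stream afterwards.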
docs/versionhistory.rst (modified, +7 −0)

```diff
@@ -7,6 +7,13 @@ This library adheres to `Semantic Versioning 2.0 <http://semver.org/>`_.

 **UNRELEASED**

+- Changed the default ``read_size`` from 4096 to 1 for backwards compatibility.
+  The buffered reads introduced in 5.8.0 could cause issues when code needs to
+  access the stream position after decoding. Users can opt-in to faster decoding
+  by passing ``read_size=4096`` when they don't need to access the stream directly
+  after decoding. Added a direct read path for ``read_size=1`` to avoid buffer
+  management overhead.
+  (`#275 <https://github.com/agronholm/cbor2/pull/275>`_; PR by @andreer)
 - Fixed C encoder not respecting string referencing when encoding string-type
   datetimes (tag 0) (`#254 <https://github.com/agronholm/cbor2/issues/254>`_)
 - Fixed a missed check for an exception in the C implementation of ``CBOREncoder.encode_shared()``
```
source/decoder.c (modified, +53 −25)

```diff
@@ -47,6 +47,10 @@ static int _CBORDecoder_set_tag_hook(CBORDecoderObject *, PyObject *, void *);
 static int _CBORDecoder_set_object_hook(CBORDecoderObject *, PyObject *, void *);
 static int _CBORDecoder_set_str_errors(CBORDecoderObject *, PyObject *, void *);

+// Forward declarations for read dispatch functions
+static int fp_read_unbuffered(CBORDecoderObject *, char *, Py_ssize_t);
+static int fp_read_buffered(CBORDecoderObject *, char *, Py_ssize_t);
+
 static PyObject * decode(CBORDecoderObject *, DecodeOptions);
 static PyObject * decode_bytestring(CBORDecoderObject *, uint8_t);
 static PyObject * decode_string(CBORDecoderObject *, uint8_t);
@@ -156,6 +160,7 @@ CBORDecoder_new(PyTypeObject *type, PyObject *args, PyObject *kwargs)
         self->readahead_size = 0;
         self->read_pos = 0;
         self->read_len = 0;
+        self->fp_read = fp_read_unbuffered; // default, will be set properly in init
     }
     return (PyObject *) self;
 error:
@@ -165,7 +170,7 @@

 // CBORDecoder.__init__(self, fp=None, tag_hook=None, object_hook=None,
-//                      str_errors='strict', read_size=4096)
+//                      str_errors='strict', read_size=1)
 int
 CBORDecoder_init(CBORDecoderObject *self, PyObject *args, PyObject *kwargs)
 {
@@ -234,7 +239,8 @@ _CBORDecoder_set_fp_with_read_size(CBORDecoderObject *self, PyObject *value, Py_
         return -1;
     }

-    if (self->readahead == NULL || self->readahead_size != read_size) {
+    // Skip buffer allocation for read_size=1 (direct read path doesn't use buffer)
+    if (read_size > 1 && (self->readahead == NULL || self->readahead_size != read_size)) {
         new_buffer = (char *)PyMem_Malloc(read_size);
         if (!new_buffer) {
             Py_DECREF(read);
@@ -255,8 +261,15 @@
     if (new_buffer) {
         PyMem_Free(self->readahead);
         self->readahead = new_buffer;
-        self->readahead_size = read_size;
+    } else if (read_size == 1 && self->readahead != NULL) {
+        // Free existing buffer when switching to direct read path (read_size=1)
+        PyMem_Free(self->readahead);
+        self->readahead = NULL;
     }
+    self->readahead_size = read_size;
+
+    // Set read dispatch function - eliminates runtime check on every read
+    self->fp_read = (read_size == 1) ? fp_read_unbuffered : fp_read_buffered;

     return 0;
 }
@@ -448,9 +461,25 @@ fp_read_bytes(CBORDecoderObject *self, char *buf, Py_ssize_t size)
     return bytes_read;
 }

-// Read into caller's buffer using the readahead buffer
+// Unbuffered read - used when read_size=1 (backwards compatible mode)
+// This matches the 5.7.1 behavior with no runtime overhead
+static int
+fp_read_unbuffered(CBORDecoderObject *self, char *buf, Py_ssize_t size)
+{
+    Py_ssize_t bytes_read = fp_read_bytes(self, buf, size);
+    if (bytes_read == size)
+        return 0;
+    if (bytes_read >= 0)
+        PyErr_Format(
+            _CBOR2_CBORDecodeEOF,
+            "premature end of stream (expected to read %zd bytes, "
+            "got %zd instead)", size, bytes_read);
+    return -1;
+}
+
+// Buffered read - used when read_size > 1 for improved performance
 static int
-fp_read(CBORDecoderObject *self, char *buf, const Py_ssize_t size)
+fp_read_buffered(CBORDecoderObject *self, char *buf, Py_ssize_t size)
 {
     Py_ssize_t available, to_copy, remaining, total_copied;
@@ -508,7 +537,7 @@ fp_read_object(CBORDecoderObject *self, const Py_ssize_t size)
     if (!ret)
         return NULL;
-    if (fp_read(self, PyBytes_AS_STRING(ret), size) == -1) {
+    if (self->fp_read(self, PyBytes_AS_STRING(ret), size) == -1) {
         Py_DECREF(ret);
         return NULL;
     }
@@ -529,7 +558,7 @@ CBORDecoder_read(CBORDecoderObject *self, PyObject *length)
         return NULL;
     ret = PyBytes_FromStringAndSize(NULL, len);
     if (ret) {
-        if (fp_read(self, PyBytes_AS_STRING(ret), len) == -1) {
+        if (self->fp_read(self, PyBytes_AS_STRING(ret), len) == -1) {
             Py_DECREF(ret);
             ret = NULL;
         }
@@ -577,19 +606,19 @@ decode_length(CBORDecoderObject *self, uint8_t subtype,
     if (subtype < 24) {
         *length = subtype;
     } else if (subtype == 24) {
-        if (fp_read(self, value.u8.buf, sizeof(uint8_t)) == -1)
+        if (self->fp_read(self, value.u8.buf, sizeof(uint8_t)) == -1)
             return -1;
         *length = value.u8.value;
     } else if (subtype == 25) {
-        if (fp_read(self, value.u16.buf, sizeof(uint16_t)) == -1)
+        if (self->fp_read(self, value.u16.buf, sizeof(uint16_t)) == -1)
             return -1;
         *length = be16toh(value.u16.value);
     } else if (subtype == 26) {
-        if (fp_read(self, value.u32.buf, sizeof(uint32_t)) == -1)
+        if (self->fp_read(self, value.u32.buf, sizeof(uint32_t)) == -1)
             return -1;
         *length = be32toh(value.u32.value);
     } else {
-        if (fp_read(self, value.u64.buf, sizeof(uint64_t)) == -1)
+        if (self->fp_read(self, value.u64.buf, sizeof(uint64_t)) == -1)
             return -1;
         *length = be64toh(value.u64.value);
     }
@@ -753,7 +782,7 @@ decode_indefinite_bytestrings(CBORDecoderObject *self)
     list = PyList_New(0);
     if (list) {
         while (1) {
-            if (fp_read(self, &lead.byte, 1) == -1)
+            if (self->fp_read(self, &lead.byte, 1) == -1)
                 break;
             if (lead.major == 2 && lead.subtype != 31) {
                 ret = decode_bytestring(self, lead.subtype);
@@ -960,7 +989,7 @@ decode_indefinite_strings(CBORDecoderObject *self)
     list = PyList_New(0);
     if (list) {
         while (1) {
-            if (fp_read(self, &lead.byte, 1) == -1)
+            if (self->fp_read(self, &lead.byte, 1) == -1)
                 break;
             if (lead.major == 3 && lead.subtype != 31) {
                 ret = decode_string(self, lead.subtype);
@@ -2065,7 +2094,7 @@ CBORDecoder_decode_simple_value(CBORDecoderObject *self)
     PyObject *tag, *ret = NULL;
     uint8_t buf;

-    if (fp_read(self, (char*)&buf, sizeof(uint8_t)) == 0) {
+    if (self->fp_read(self, (char*)&buf, sizeof(uint8_t)) == 0) {
         tag = PyStructSequence_New(&CBORSimpleValueType);
         if (tag) {
             PyStructSequence_SET_ITEM(tag, 0, PyLong_FromLong(buf));
@@ -2091,7 +2120,7 @@ CBORDecoder_decode_float16(CBORDecoderObject *self)
         char buf[sizeof(uint16_t)];
     } u;

-    if (fp_read(self, u.buf, sizeof(uint16_t)) == 0)
+    if (self->fp_read(self, u.buf, sizeof(uint16_t)) == 0)
         ret = PyFloat_FromDouble(unpack_float16(u.i));
     set_shareable(self, ret);
     return ret;
@@ -2109,7 +2138,7 @@ CBORDecoder_decode_float32(CBORDecoderObject *self)
         char buf[sizeof(float)];
     } u;

-    if (fp_read(self, u.buf, sizeof(float)) == 0) {
+    if (self->fp_read(self, u.buf, sizeof(float)) == 0) {
         u.i = be32toh(u.i);
         ret = PyFloat_FromDouble(u.f);
     }
@@ -2129,7 +2158,7 @@ CBORDecoder_decode_float64(CBORDecoderObject *self)
         char buf[sizeof(double)];
     } u;

-    if (fp_read(self, u.buf, sizeof(double)) == 0) {
+    if (self->fp_read(self, u.buf, sizeof(double)) == 0) {
         u.i = be64toh(u.i);
         ret = PyFloat_FromDouble(u.f);
     }
@@ -2158,7 +2187,7 @@ decode(CBORDecoderObject *self, DecodeOptions options)

     if (Py_EnterRecursiveCall(" in CBORDecoder.decode"))
         return NULL;
-    if (fp_read(self, &lead.byte, 1) == 0) {
+    if (self->fp_read(self, &lead.byte, 1) == 0) {
         switch (lead.major) {
             case 0: ret = decode_uint(self, lead.subtype); break;
             case 1: ret = decode_negint(self, lead.subtype); break;
@@ -2414,13 +2443,12 @@ PyDoc_STRVAR(CBORDecoder__doc__,
 "    :class:`dict` object. The return value is substituted for the dict\n"
 "    in the deserialized output.\n"
 ":param read_size:\n"
-"    the size of the read buffer (default 4096). The decoder reads from\n"
-"    the stream in chunks of this size for performance. This means the\n"
-"    stream position may advance beyond the bytes actually decoded. For\n"
-"    large values (bytestrings, text strings), reads may be larger than\n"
-"    ``read_size``. Code that needs to read from the stream after\n"
-"    decoding should use :meth:`decode_from_bytes` instead, or set\n"
-"    ``read_size=1`` to disable buffering (at a performance cost).\n"
+"    the minimum number of bytes to read at a time.\n"
+"    Setting this to a higher value like 4096 improves performance,\n"
+"    but is likely to read past the end of the CBOR value, advancing the stream\n"
+"    position beyond the decoded data. This only matters if you need to reuse the\n"
+"    stream after decoding.\n"
+"    Ignored in the pure Python implementation, but included for API compatibility.\n"
 "\n"
 ".. _CBOR: https://cbor.io/\n"
 );
```
source/decoder.h (modified, +13 −3)

```diff
@@ -3,10 +3,17 @@
 #include <stdbool.h>
 #include <stdint.h>

-// Default readahead buffer size for streaming reads
-#define CBOR2_DEFAULT_READ_SIZE 4096
+// Default readahead buffer size for streaming reads.
+// Set to 1 for backwards compatibility (no buffering).
+#define CBOR2_DEFAULT_READ_SIZE 1

-typedef struct {
+// Forward declaration for function pointer typedef
+struct CBORDecoderObject_;
+
+// Function pointer type for read dispatch (eliminates runtime check)
+typedef int (*fp_read_fn)(struct CBORDecoderObject_ *, char *, Py_ssize_t);
+
+typedef struct CBORDecoderObject_ {
     PyObject_HEAD
     PyObject *read;            // cached read() method of fp
     PyObject *tag_hook;
@@ -23,6 +30,9 @@
     Py_ssize_t readahead_size; // size of allocated buffer
     Py_ssize_t read_pos;       // current position in buffer
     Py_ssize_t read_len;       // valid bytes in buffer
+
+    // Read dispatch - points to unbuffered or buffered implementation
+    fp_read_fn fp_read;
 } CBORDecoderObject;

 extern PyTypeObject CBORDecoderType;
```
tests/test_decoder.py (modified, +15 −0)

```diff
@@ -123,6 +123,21 @@ def test_load(impl):
         assert impl.load(fp=stream) == 1


+def test_stream_position_after_decode(impl):
+    """Test that stream position is exactly at end of decoded CBOR value."""
+    # CBOR: integer 1 (1 byte: 0x01) followed by extra data
+    cbor_data = b"\x01"
+    extra_data = b"extra"
+    with BytesIO(cbor_data + extra_data) as stream:
+        decoder = impl.CBORDecoder(stream)
+        result = decoder.decode()
+        assert result == 1
+        # Stream position should be exactly at end of CBOR data
+        assert stream.tell() == len(cbor_data)
+        # Should be able to read the extra data
+        assert stream.read() == extra_data
+
+
 @pytest.mark.parametrize(
     "payload, expected",
     [
```
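For applications that cannot upgrade immediately, one defensive option (not part of cbor2's API; the helpers below are hypothetical) is to reject suspiciously deep payloads before handing them to the decoder. For the `0x81`-chain payload described in the advisory, the nesting depth is simply the length of the leading run of `0x81` bytes:

```python
MAX_DEPTH = 1_000  # hypothetical, application-chosen cap

def leading_array_depth(payload: bytes) -> int:
    """Depth of the leading chain of one-element arrays (0x81 bytes)."""
    depth = 0
    for byte in payload:
        if byte != 0x81:
            break
        depth += 1
    return depth

def looks_hostile(payload: bytes) -> bool:
    # Reject before decoding if the nesting chain exceeds the cap.
    return leading_array_depth(payload) > MAX_DEPTH

assert not looks_hostile(b"\x81\x81\x00")
assert looks_hostile(b"\x81" * 100_000 + b"\x00")
```

Note that this pre-check only catches this one payload shape (attackers can nest maps, tags, or longer arrays instead); a robust limit has to live inside the decoder itself, which is what upgrading to 5.9.0 provides.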
Vulnerability mechanics
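The crash mechanism can be reproduced with a toy recursive decoder that handles only the two byte patterns the exploit uses (a sketch; cbor2's real decoders are far more general, but the one-stack-frame-per-nesting-level behavior is the same):

```python
import sys

def toy_decode(data: bytes, pos: int = 0):
    """Minimal recursive decoder: one-element arrays (0x81) and small uints."""
    b = data[pos]
    if b == 0x81:                  # array of length 1: recurse for its element
        item, pos = toy_decode(data, pos + 1)
        return [item], pos
    if b < 0x18:                   # immediate unsigned integer 0..23
        return b, pos + 1
    raise ValueError(f"unsupported initial byte: {b:#x}")

# Shallow input decodes fine...
value, _ = toy_decode(b"\x81\x81\x00")
assert value == [[0]]

# ...but nesting deeper than the interpreter's recursion limit crashes,
# mirroring what happens to cbor2.loads() on an unpatched version.
deep = b"\x81" * (sys.getrecursionlimit() * 2) + b"\x00"
try:
    toy_decode(deep)
except RecursionError:
    print("RecursionError, as expected")
```

In a standalone script this exception is survivable; in a worker process whose framework does not catch RecursionError at the request boundary, it kills the worker, which is what turns a single malformed packet into a denial of service.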
References
- github.com/advisories/GHSA-3c37-wwvx-h642 (GitHub Advisory)
- nvd.nist.gov/vuln/detail/CVE-2026-26209 (NVD)
- github.com/agronholm/cbor2/commit/e61a5f365ba610d5907a0ae1bc72769bba34294b (fix commit)
- github.com/agronholm/cbor2/pull/275 (pull request)
- github.com/agronholm/cbor2/releases/tag/5.9.0 (release)
- github.com/agronholm/cbor2/security/advisories/GHSA-3c37-wwvx-h642 (vendor advisory)