OpenBao vulnerable to denial of service via malicious JSON request processing
Description
OpenBao is an open source, identity-based secrets management system. In OpenBao versions prior to 2.4.1, decoded JSON objects may use significantly more memory than their serialized form. It is possible to craft a JSON payload that maximizes the ratio of deserialized to serialized memory usage, similar to a zip bomb, with amplification factors of roughly 35. Such a payload circumvents the `max_request_size` configuration parameter, which is intended to protect against denial-of-service attacks. Because the request body is parsed into a map very early in the request-handling chain, before authentication, an unauthenticated attacker can send a specially crafted JSON object and cause an out-of-memory crash. Additionally, for requests containing large numbers of strings, the audit subsystem can consume large amounts of CPU. The vulnerability is fixed in version 2.4.1.
Affected packages
Versions sourced from the GitHub Security Advisory.
| Package | Affected versions | Patched versions |
|---|---|---|
| github.com/openbao/openbao (Go) | < 2.4.1 | 2.4.1 |
Patches
d418f238bc99: reject JSON bodies which consume large amounts of RAM (#1756)
8 files changed · +177 −12
changelog/1756.txt (+3 −0, added)

````diff
@@ -0,0 +1,3 @@
+```release-note:security
+Limit the complexity of JSON in HTTP request bodies. HCSEC-2025-24 / CVE-2025-6203.
+```
````
command/server.go (+5 −0, modified)

```diff
@@ -851,6 +851,11 @@ func (c *ServerCommand) InitListeners(logger hclog.Logger, config *server.Config
 			Config: lnConfig,
 		})

+		if lnConfig.MaxRequestJsonComplexity == 0 {
+			lnConfig.MaxRequestJsonComplexity = vault.DefaultMaxJsonComplexity
+		}
+		props["max_request_json_complexity"] = fmt.Sprintf("%d", lnConfig.MaxRequestJsonComplexity)
+
 		// Store the listener props for output later
 		key := fmt.Sprintf("listener %d", i+1)
 		propsList := make([]string, 0, len(props))
```
http/handler.go (+75 −4, modified)

```diff
@@ -339,12 +339,18 @@ func handleAuditNonLogical(core *vault.Core, h http.Handler) http.Handler {
 // are performed.
 func wrapGenericHandler(core *vault.Core, h http.Handler, props *vault.HandlerProperties) http.Handler {
 	var maxRequestDuration time.Duration
+	var maxRequestJsonComplexity int64
 	if props.ListenerConfig != nil {
 		maxRequestDuration = props.ListenerConfig.MaxRequestDuration
+		maxRequestJsonComplexity = props.ListenerConfig.MaxRequestJsonComplexity
 	}
 	if maxRequestDuration == 0 {
 		maxRequestDuration = vault.DefaultMaxRequestDuration
 	}
+	if maxRequestJsonComplexity == 0 {
+		maxRequestJsonComplexity = vault.DefaultMaxJsonComplexity
+	}
+
 	// Swallow this error since we don't want to pollute the logs and we also don't want to
 	// return an HTTP error here. This information is best effort.
 	hostname, _ := os.Hostname()
@@ -386,6 +392,8 @@ func wrapGenericHandler(core *vault.Core, h http.Handler, props *vault.HandlerPr
 			nw.Header().Set(consts.NamespaceHeaderName, nsHeader)
 		}

+		ctx = addMaximumJsonTokensToContext(ctx, maxRequestJsonComplexity)
+
 		ctx = namespace.ContextWithNamespaceHeader(ctx, nsHeader)
 		r = r.WithContext(ctx)
@@ -730,11 +738,74 @@ func parseQuery(values url.Values) map[string]interface{} {
 	return nil
 }

+type ctxKeyMaxRequestJsonComplexity struct{}
+
+func maximumJsonTokensFromContext(ctx context.Context) int64 {
+	maxJsonTokens := ctx.Value(ctxKeyMaxRequestJsonComplexity{})
+	if maxJsonTokens == nil {
+		return 0
+	}
+	return maxJsonTokens.(int64)
+}
+
+func addMaximumJsonTokensToContext(ctx context.Context, limit int64) context.Context {
+	return context.WithValue(ctx, ctxKeyMaxRequestJsonComplexity{}, limit)
+}
+
 func parseJSONRequest(r *http.Request, w http.ResponseWriter, out interface{}) (io.ReadCloser, error) {
-	// Limit the maximum number of bytes to MaxRequestSize to protect
-	// against an indefinite amount of data being read.
-	reader := r.Body
-	err := jsonutil.DecodeJSONFromReader(reader, out)
+	ctx := r.Context()
+	maxJsonTokens := maximumJsonTokensFromContext(ctx)
+
+	if maxJsonTokens <= 0 {
+		err := jsonutil.DecodeJSONFromReader(r.Body, out)
+		if err != nil && err != io.EOF {
+			return nil, fmt.Errorf("failed to parse JSON input: %w", err)
+		}
+		return nil, err
+	}
+
+	reader, ok := r.Body.(io.ReadSeeker)
+	if !ok {
+		body, err := io.ReadAll(r.Body)
+		reader = bytes.NewReader(body)
+		if err != nil {
+			return nil, fmt.Errorf("failed to read JSON input: %w", err)
+		}
+	}
+
+	pos, err := reader.Seek(0, io.SeekCurrent)
+	if err != nil {
+		return nil, fmt.Errorf("failed to read JSON input: %w", err)
+	}
+
+	dec := json.NewDecoder(reader)
+
+	var tokenCount int64
+	for {
+		_, err := dec.Token()
+		if err != nil {
+			if errors.Is(err, io.EOF) {
+				break
+			}
+			return nil, err
+		}
+		tokenCount++
+		if tokenCount > maxJsonTokens {
+			return nil, errors.New("failed to parse JSON input: too many tokens")
+		}
+	}
+
+	_, err = reader.Seek(pos, io.SeekStart)
+	if err != nil {
+		return nil, fmt.Errorf("failed to read JSON input: %w", err)
+	}
+
+	err = ctx.Err()
+	if err != nil {
+		return nil, err
+	}
+
+	err = jsonutil.DecodeJSONFromReader(reader, out)
 	if err != nil && err != io.EOF {
 		return nil, fmt.Errorf("failed to parse JSON input: %w", err)
 	}
```
http/handler_test.go (+55 −0, modified)

```diff
@@ -1071,3 +1071,58 @@ func TestHandler_RestrictedEndpointCalls(t *testing.T) {
 		})
 	}
 }
+
+func TestParseJSONRequest(t *testing.T) {
+	t.Parallel()
+
+	const example = `{ "hello" : "world" }` // token count 4
+
+	tests := []struct {
+		name  string
+		limit []int64
+
+		expectError bool
+	}{
+		{
+			name:        "no limit",
+			limit:       nil,
+			expectError: false,
+		},
+		{
+			name:        "above limit",
+			limit:       []int64{3},
+			expectError: true,
+		},
+		{
+			name:        "below limit",
+			limit:       []int64{5},
+			expectError: false,
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			ctx := t.Context()
+			if len(tt.limit) > 0 {
+				require.Len(t, tt.limit, 1, "invalid test case")
+				ctx = addMaximumJsonTokensToContext(ctx, tt.limit[0])
+			}
+
+			req, err := http.NewRequestWithContext(ctx, "POST", "/v1/test", strings.NewReader(example))
+			require.NoError(t, err)
+
+			var res map[string]any
+			_, err = parseJSONRequest(req, nil, &res)
+
+			if tt.expectError {
+				require.ErrorContains(t, err, "too many tokens")
+				require.Len(t, res, 0)
+			} else {
+				require.NoError(t, err)
+				require.Equal(t, map[string]any{
+					"hello": "world",
+				}, res)
+			}
+		})
+	}
+}
```
internalshared/configutil/listener.go (+18 −8, modified)

```diff
@@ -49,14 +49,16 @@ type Listener struct {
 	PurposeRaw interface{} `hcl:"purpose"`
 	Role       string      `hcl:"role"`

-	Address                 string        `hcl:"address"`
-	ClusterAddress          string        `hcl:"cluster_address"`
-	MaxRequestSize          int64         `hcl:"-"`
-	MaxRequestSizeRaw       interface{}   `hcl:"max_request_size"`
-	MaxRequestDuration      time.Duration `hcl:"-"`
-	MaxRequestDurationRaw   interface{}   `hcl:"max_request_duration"`
-	RequireRequestHeader    bool          `hcl:"-"`
-	RequireRequestHeaderRaw interface{}   `hcl:"require_request_header"`
+	Address                     string        `hcl:"address"`
+	ClusterAddress              string        `hcl:"cluster_address"`
+	MaxRequestSize              int64         `hcl:"-"`
+	MaxRequestSizeRaw           interface{}   `hcl:"max_request_size"`
+	MaxRequestJsonComplexity    int64         `hcl:"-"`
+	MaxRequestJsonComplexityRaw interface{}   `hcl:"max_request_json_complexity"`
+	MaxRequestDuration          time.Duration `hcl:"-"`
+	MaxRequestDurationRaw       interface{}   `hcl:"max_request_duration"`
+	RequireRequestHeader        bool          `hcl:"-"`
+	RequireRequestHeaderRaw     interface{}   `hcl:"require_request_header"`

 	TLSDisable    bool        `hcl:"-"`
 	TLSDisableRaw interface{} `hcl:"tls_disable"`
@@ -255,6 +257,14 @@ func ParseListeners(result *SharedConfig, list *ast.ObjectList) error {

 				l.RequireRequestHeaderRaw = nil
 			}
+
+			if l.MaxRequestJsonComplexityRaw != nil {
+				if l.MaxRequestJsonComplexity, err = parseutil.ParseInt(l.MaxRequestJsonComplexityRaw); err != nil {
+					return multierror.Prefix(fmt.Errorf("error parsing max_request_json_complexity: %w", err), fmt.Sprintf("listeners.%d", i))
+				}
+
+				l.MaxRequestJsonComplexityRaw = nil
+			}
 		}

 		// TLS Parameters
```
vault/request_handling.go (+4 −0, modified)

```diff
@@ -57,6 +57,10 @@ var (
 	// to complete, unless overridden on a per-handler basis
 	DefaultMaxRequestDuration = 90 * time.Second

+	// DefaultMaxJsonComplexity is the number of JSON tokens allowed in a
+	// request body, unless overridden on a per-handler basis
+	DefaultMaxJsonComplexity = int64(10000)
+
 	ErrNoApplicablePolicies = errors.New("no applicable policies")
 	ErrPolicyNotExist       = errors.New("policy does not exist")
```
website/content/docs/configuration/listener/tcp.mdx (+4 −0, modified)

```diff
@@ -88,6 +88,10 @@ default value in the `"/sys/config/ui"` [API endpoint](/api-docs/system/config-u
   request duration allowed before OpenBao cancels the request. This overrides
   `default_max_request_duration` for this listener.

+- `max_request_json_complexity` `(int: 10000)` – Specifies the hard maximum
+  of JSON tokens allowed in the request body. Defaults to `10000` if not set or `0`.
+  A value lower than 0 disables this option altogether.
+
 - `proxy_protocol_behavior` `(string: "")` – When specified, enables a PROXY
   protocol version 1 behavior for the listener.
   Accepted Values:
```
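As a configuration sketch, the new option sits alongside the existing size limit in the listener stanza (the address and values below are illustrative, not defaults from the advisory):

```hcl
listener "tcp" {
  address          = "127.0.0.1:8200"
  max_request_size = 33554432

  # Reject request bodies containing more than 50000 JSON tokens;
  # a negative value disables the check entirely.
  max_request_json_complexity = 50000
}
```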
website/content/docs/internals/limits.mdx (+13 −0, modified)

```diff
@@ -242,6 +242,19 @@ operation on a remote service, the OpenBao client will see a failure.
 The environment variable
 [`VAULT_CLIENT_TIMEOUT`](/docs/commands#vault_client_timeout) sets a client-side
 maximum duration as well, which is 60 seconds by default.

+### Request JSON Complexity
+
+The maximum number of tokens in a request body JSON is limited by the
+`max_request_json_complexity` option in the [listener
+stanza](/docs/configuration/listener/tcp). It defaults to 10000. This limit is
+in place to prevent attackers from sending specially crafted JSON bodies that,
+once unmarshaled, consume significantly more memory than the original request
+body (circumventing the `max_request_size` limit). This limit should not affect
+valid requests, as even large requests use a small number of tokens (a large
+string still counts as a single token).
+
+The definition for "JSON complexity" might change in the future.
+
 ### Lease limits

 A systemwide [maximum TTL](/docs/configuration#max_lease_ttl), and a
```
References
- github.com/advisories/GHSA-g46h-2rq9-gw5m (advisory)
- nvd.nist.gov/vuln/detail/CVE-2025-59043 (advisory)
- nvd.nist.gov/vuln/detail/CVE-2025-6203 (advisory)
- discuss.hashicorp.com/t/hcsec-2025-24-vault-denial-of-service-though-complex-json-payloads/76393
- github.com/openbao/openbao/blob/788536bd3e10818a7b4fb00aac6affc23388e5a9/http/logical.go
- github.com/openbao/openbao/commit/d418f238bc99adc72c73109faf574cc2b672880c
- github.com/openbao/openbao/pull/1756
- github.com/openbao/openbao/security/advisories/GHSA-g46h-2rq9-gw5m (confirmed)