CVE-2014-0230
Description
Apache Tomcat 6.x before 6.0.44, 7.x before 7.0.55, and 8.x before 8.0.9 does not properly handle cases where an HTTP response occurs before finishing the reading of an entire request body, which allows remote attackers to cause a denial of service (thread consumption) via a series of aborted upload attempts.
Affected packages
Versions sourced from the GitHub Security Advisory.
| Package | Affected versions | Patched versions |
|---|---|---|
org.apache.tomcat:tomcatMaven | >= 6.0.0, < 6.0.44 | 6.0.44 |
org.apache.tomcat:tomcatMaven | >= 7.0.0, < 7.0.55 | 7.0.55 |
org.apache.tomcat:tomcatMaven | >= 8.0.0, < 8.0.9 | 8.0.9 |
Affected products
114cpe:2.3:a:apache:tomcat:6.0.13:*:*:*:*:*:*:*+ 110 more
- cpe:2.3:a:apache:tomcat:6.0.13:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.14:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.15:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.16:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.17:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.18:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.19:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.20:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.24:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.26:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.27:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.28:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.0:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.0:alpha:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.1:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.1:alpha:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.2:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.2:alpha:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.2:beta:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.3:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.4:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.4:alpha:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.5:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.6:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.6:alpha:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.7:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.7:alpha:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.7:beta:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.8:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.8:alpha:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.9:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.9:beta:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.10:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.11:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.12:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.32:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.33:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.35:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.36:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.37:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.39:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.41:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.43:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.0:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.0:beta:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.1:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.2:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.2:beta:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.3:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.4:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.4:beta:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.5:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.6:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.7:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.8:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.9:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.10:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.11:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.12:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.13:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.14:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.15:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.16:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.35:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.36:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.37:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.38:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.39:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.40:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.41:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.42:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.43:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.44:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.45:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.46:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.47:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.48:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.49:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.29:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.30:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:6.0.31:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.17:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.18:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.19:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.20:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.21:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.22:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.23:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.24:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.25:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.26:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.27:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.28:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.29:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.30:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.31:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.32:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.33:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.34:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.50:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.52:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.53:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:7.0.54:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:8.0.0:rc1:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:8.0.0:rc10:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:8.0.0:rc2:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:8.0.0:rc5:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:8.0.1:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:8.0.3:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:8.0.5:*:*:*:*:*:*:*
- cpe:2.3:a:apache:tomcat:8.0.8:*:*:*:*:*:*:*
cpe:2.3:a:oracle:virtualization:4.63:*:*:*:*:*:*:*+ 2 more
- cpe:2.3:a:oracle:virtualization:4.63:*:*:*:*:*:*:*
- cpe:2.3:a:oracle:virtualization:4.71:*:*:*:*:*:*:*
- cpe:2.3:a:oracle:virtualization:5.1:*:*:*:*:*:*:*
Patches
8812088583d0eDelay closing the connection until maxSwallowSize bytes have been read. This gives the client a chance to read the response. See http://httpd.apache.org/docs/2.0/misc/fin_wait_2.html#appendix
2 files changed · +16 −7
java/org/apache/coyote/http11/filters/IdentityInputFilter.java+11 −7 modified@@ -174,21 +174,25 @@ public void setRequest(Request request) { } - /** - * End the current request. - */ @Override - public long end() throws IOException { + public long end() throws IOException { - if (maxSwallowSize > -1 && remaining > maxSwallowSize) { - throw new IOException(sm.getString("inputFilter.maxSwallow")); - } + final boolean maxSwallowSizeExceeded = (maxSwallowSize > -1 && remaining > maxSwallowSize); + long swallowed = 0; // Consume extra bytes. while (remaining > 0) { + int nread = buffer.doRead(endChunk, null); if (nread > 0 ) { + swallowed += nread; remaining = remaining - nread; + if (maxSwallowSizeExceeded && swallowed > maxSwallowSize) { + // Note: We do not fail early so the client has a chance to + // read the response before the connection is closed. See: + // http://httpd.apache.org/docs/2.0/misc/fin_wait_2.html#appendix + throw new IOException(sm.getString("inputFilter.maxSwallow")); + } } else { // errors are handled higher up. remaining = 0; }
webapps/docs/changelog.xml+5 −0 modified@@ -95,6 +95,11 @@ leave references to the UpgradeProcessor associated with the connection in memory. (markt) </fix> + <fix> + When applying the <code>maxSwallowSize</code> limit to a connection read + that many bytes first before closing the connection to give the client a + chance to read the reponse. (markt) + </fix> </changelog> </subsection> <subsection name="Jasper">
fdd9f11dc24bAllow to configure maxSwallowSize attribute of an HTTP connector via JMX.
2 files changed · +8 −0
java/org/apache/catalina/connector/mbeans-descriptors.xml+4 −0 modified@@ -97,6 +97,10 @@ description="Maximum size of a POST which will be saved by the container during authentication" type="int"/> + <attribute name="maxSwallowSize" + description="The maximum number of request body bytes to be swallowed by Tomcat for an aborted upload" + type="int"/> + <!-- Common --> <attribute name="maxThreads" description="The maximum number of request processing threads to be created"
webapps/docs/changelog.xml+4 −0 modified@@ -204,6 +204,10 @@ Add a new limit, defaulting to 2MB, for the amount of data Tomcat will swallow for an aborted upload. (markt) </add> + <update> + Allow to configure <code>maxSwallowSize</code> attribute of an HTTP + connector via JMX. (kkolinko) + </update> </changelog> </subsection> <subsection name="Jasper">
fc049912464fAllow to configure maxSwallowSize attribute of an HTTP connector via JMX.
2 files changed · +8 −0
java/org/apache/catalina/connector/mbeans-descriptors.xml+4 −0 modified@@ -106,6 +106,10 @@ description="Maximum size of a POST which will be saved by the container during authentication" type="int"/> + <attribute name="maxSwallowSize" + description="The maximum number of request body bytes to be swallowed by Tomcat for an aborted upload" + type="int"/> + <!-- Common --> <attribute name="maxThreads" description="The maximum number of request processing threads to be created"
webapps/docs/changelog.xml+4 −0 modified@@ -145,6 +145,10 @@ <bug>56704</bug>: Add support for OpenSSL syntax for ciphers when using JSSE SSL connectors. Submitted by Emmanuel Hugonnet. (remm) </add> + <update> + Allow to configure <code>maxSwallowSize</code> attribute of an HTTP + connector via JMX. (kkolinko) + </update> </changelog> </subsection> <subsection name="Jasper">
e3146f4b03a2Fix compilation failure.
1 file changed · +2 −3
test/org/apache/catalina/core/TestSwallowAbortedUploads.java+2 −3 modified@@ -23,7 +23,6 @@ import java.io.PrintWriter; import java.io.Writer; import java.net.Socket; -import java.nio.charset.StandardCharsets; import java.util.Arrays; import java.util.Collection; @@ -450,7 +449,7 @@ public void doTestChunkedPUT(boolean limit) throws Exception { try { conn = new Socket("localhost", getPort()); Writer writer = new OutputStreamWriter( - conn.getOutputStream(), StandardCharsets.US_ASCII); + conn.getOutputStream(), "US-ASCII"); writer.write("PUT /does-not-exist HTTP/1.1\r\n"); writer.write("Host: any\r\n"); writer.write("Transfer-encoding: chunked\r\n"); @@ -470,7 +469,7 @@ public void doTestChunkedPUT(boolean limit) throws Exception { try { BufferedReader reader = new BufferedReader(new InputStreamReader( - conn.getInputStream(), StandardCharsets.US_ASCII)); + conn.getInputStream(), "US-ASCII")); responseLine = reader.readLine(); } catch (IOException e) {
b1c8477e3e3eAdd a new limit, defaulting to 2MB, for the amount of data Tomcat will swallow for an aborted upload.
14 files changed · +156 −20
java/org/apache/coyote/http11/AbstractHttp11Processor.java+4 −3 modified@@ -683,14 +683,15 @@ protected boolean statusDropsConnection(int status) { /** * Initialize standard input and output filters. */ - protected void initializeFilters(int maxTrailerSize, int maxExtensionSize) { + protected void initializeFilters(int maxTrailerSize, int maxExtensionSize, + int maxSwallowSize) { // Create and add the identity filters. - getInputBuffer().addFilter(new IdentityInputFilter()); + getInputBuffer().addFilter(new IdentityInputFilter(maxSwallowSize)); getOutputBuffer().addFilter(new IdentityOutputFilter()); // Create and add the chunked filters. getInputBuffer().addFilter( - new ChunkedInputFilter(maxTrailerSize, maxExtensionSize)); + new ChunkedInputFilter(maxTrailerSize, maxExtensionSize, maxSwallowSize)); getOutputBuffer().addFilter(new ChunkedOutputFilter()); // Create and add the void filters.
java/org/apache/coyote/http11/AbstractHttp11Protocol.java+10 −0 modified@@ -162,6 +162,16 @@ public void setMaxExtensionSize(int maxExtensionSize) { } + /** + * Maximum amount of request body to swallow. + */ + private int maxSwallowSize = 2 * 1024 * 1024; + public int getMaxSwallowSize() { return maxSwallowSize; } + public void setMaxSwallowSize(int maxSwallowSize) { + this.maxSwallowSize = maxSwallowSize; + } + + /** * This field indicates if the protocol is treated as if it is secure. This * normally means https is being used but can be used to fake https e.g
java/org/apache/coyote/http11/filters/ChunkedInputFilter.java+12 −3 modified@@ -138,6 +138,9 @@ public class ChunkedInputFilter implements InputFilter { private long extensionSize; + private final int maxSwallowSize; + + /** * Flag that indicates if an error has occurred. */ @@ -146,10 +149,11 @@ public class ChunkedInputFilter implements InputFilter { // ----------------------------------------------------------- Constructors - public ChunkedInputFilter(int maxTrailerSize, int maxExtensionSize) { + public ChunkedInputFilter(int maxTrailerSize, int maxExtensionSize, int maxSwallowSize) { this.trailingHeaders.setLimit(maxTrailerSize); this.maxExtensionSize = maxExtensionSize; this.maxTrailerSize = maxTrailerSize; + this.maxSwallowSize = maxSwallowSize; } @@ -235,9 +239,14 @@ public void setRequest(Request request) { */ @Override public long end() throws IOException { + long swallowed = 0; + int read = 0; // Consume extra bytes : parse the stream until the end chunk is found - while (doRead(readChunk, null) >= 0) { - // NOOP: Just consume the input + while ((read = doRead(readChunk, null)) >= 0) { + swallowed += read; + if (maxSwallowSize > -1 && swallowed > maxSwallowSize) { + throwIOException(sm.getString("inputFilter.maxSwallow")); + } } // Return the number of extra bytes which were consumed
java/org/apache/coyote/http11/filters/IdentityInputFilter.java+19 −3 modified@@ -24,6 +24,7 @@ import org.apache.coyote.Request; import org.apache.coyote.http11.InputFilter; import org.apache.tomcat.util.buf.ByteChunk; +import org.apache.tomcat.util.res.StringManager; /** * Identity input filter. @@ -32,6 +33,9 @@ */ public class IdentityInputFilter implements InputFilter { + private static final StringManager sm = StringManager.getManager( + IdentityInputFilter.class.getPackage().getName()); + // -------------------------------------------------------------- Constants @@ -76,8 +80,10 @@ public class IdentityInputFilter implements InputFilter { protected ByteChunk endChunk = new ByteChunk(); - // ------------------------------------------------------------- Properties + private final int maxSwallowSize; + + // ------------------------------------------------------------- Properties /** * Get content length. @@ -101,6 +107,13 @@ public long getRemaining() { } + // ------------------------------------------------------------ Constructor + + public IdentityInputFilter(int maxSwallowSize) { + this.maxSwallowSize = maxSwallowSize; + } + + // ---------------------------------------------------- InputBuffer Methods @@ -163,8 +176,11 @@ public void setRequest(Request request) { * End the current request. */ @Override - public long end() - throws IOException { + public long end() throws IOException { + + if (maxSwallowSize > -1 && remaining > maxSwallowSize) { + throw new IOException(sm.getString("inputFilter.maxSwallow")); + } // Consume extra bytes. while (remaining > 0) {
java/org/apache/coyote/http11/filters/LocalStrings.properties+3 −1 modified@@ -22,4 +22,6 @@ chunkedInputFilter.invalidCrlfNoCR=Invalid end of line sequence (No CR before LF chunkedInputFilter.invalidCrlfNoData=Invalid end of line sequence (no data available to read) chunkedInputFilter.invalidHeader=Invalid chunk header chunkedInputFilter.maxExtension=maxExtensionSize exceeded -chunkedInputFilter.maxTrailer=maxTrailerSize exceeded \ No newline at end of file +chunkedInputFilter.maxTrailer=maxTrailerSize exceeded + +inputFilter.maxSwallow=maxSwallowSize exceeded \ No newline at end of file
java/org/apache/coyote/http11/Http11AprProcessor.java+2 −2 modified@@ -59,7 +59,7 @@ protected Log getLog() { public Http11AprProcessor(int headerBufferSize, AprEndpoint endpoint, - int maxTrailerSize, int maxExtensionSize) { + int maxTrailerSize, int maxExtensionSize, int maxSwallowSize) { super(endpoint); @@ -69,7 +69,7 @@ public Http11AprProcessor(int headerBufferSize, AprEndpoint endpoint, outputBuffer = new InternalAprOutputBuffer(response, headerBufferSize); response.setOutputBuffer(outputBuffer); - initializeFilters(maxTrailerSize, maxExtensionSize); + initializeFilters(maxTrailerSize, maxExtensionSize, maxSwallowSize); }
java/org/apache/coyote/http11/Http11AprProtocol.java+2 −1 modified@@ -300,7 +300,8 @@ protected void longPoll(SocketWrapper<Long> socket, protected Http11AprProcessor createProcessor() { Http11AprProcessor processor = new Http11AprProcessor( proto.getMaxHttpHeaderSize(), (AprEndpoint)proto.endpoint, - proto.getMaxTrailerSize(), proto.getMaxExtensionSize()); + proto.getMaxTrailerSize(), proto.getMaxExtensionSize(), + proto.getMaxSwallowSize()); processor.setAdapter(proto.adapter); processor.setMaxKeepAliveRequests(proto.getMaxKeepAliveRequests()); processor.setKeepAliveTimeout(proto.getKeepAliveTimeout());
java/org/apache/coyote/http11/Http11NioProcessor.java+2 −2 modified@@ -64,7 +64,7 @@ protected Log getLog() { public Http11NioProcessor(int maxHttpHeaderSize, NioEndpoint endpoint, - int maxTrailerSize, int maxExtensionSize) { + int maxTrailerSize, int maxExtensionSize, int maxSwallowSize) { super(endpoint); @@ -74,7 +74,7 @@ public Http11NioProcessor(int maxHttpHeaderSize, NioEndpoint endpoint, outputBuffer = new InternalNioOutputBuffer(response, maxHttpHeaderSize); response.setOutputBuffer(outputBuffer); - initializeFilters(maxTrailerSize, maxExtensionSize); + initializeFilters(maxTrailerSize, maxExtensionSize, maxSwallowSize); }
java/org/apache/coyote/http11/Http11NioProtocol.java+2 −1 modified@@ -259,7 +259,8 @@ protected void longPoll(SocketWrapper<NioChannel> socket, public Http11NioProcessor createProcessor() { Http11NioProcessor processor = new Http11NioProcessor( proto.getMaxHttpHeaderSize(), (NioEndpoint)proto.endpoint, - proto.getMaxTrailerSize(), proto.getMaxExtensionSize()); + proto.getMaxTrailerSize(), proto.getMaxExtensionSize(), + proto.getMaxSwallowSize()); processor.setAdapter(proto.adapter); processor.setMaxKeepAliveRequests(proto.getMaxKeepAliveRequests()); processor.setKeepAliveTimeout(proto.getKeepAliveTimeout());
java/org/apache/coyote/http11/Http11Processor.java+2 −2 modified@@ -50,7 +50,7 @@ protected Log getLog() { public Http11Processor(int headerBufferSize, JIoEndpoint endpoint, - int maxTrailerSize, int maxExtensionSize) { + int maxTrailerSize, int maxExtensionSize, int maxSwallowSize) { super(endpoint); @@ -60,7 +60,7 @@ public Http11Processor(int headerBufferSize, JIoEndpoint endpoint, outputBuffer = new InternalOutputBuffer(response, headerBufferSize); response.setOutputBuffer(outputBuffer); - initializeFilters(maxTrailerSize, maxExtensionSize); + initializeFilters(maxTrailerSize, maxExtensionSize, maxSwallowSize); }
java/org/apache/coyote/http11/Http11Protocol.java+2 −1 modified@@ -164,7 +164,8 @@ protected void longPoll(SocketWrapper<Socket> socket, protected Http11Processor createProcessor() { Http11Processor processor = new Http11Processor( proto.getMaxHttpHeaderSize(), (JIoEndpoint)proto.endpoint, - proto.getMaxTrailerSize(),proto.getMaxExtensionSize()); + proto.getMaxTrailerSize(),proto.getMaxExtensionSize(), + proto.getMaxSwallowSize()); processor.setAdapter(proto.adapter); processor.setMaxKeepAliveRequests(proto.getMaxKeepAliveRequests()); processor.setKeepAliveTimeout(proto.getKeepAliveTimeout());
test/org/apache/catalina/core/TestSwallowAbortedUploads.java+82 −1 modified@@ -16,8 +16,14 @@ */ package org.apache.catalina.core; +import java.io.BufferedReader; import java.io.IOException; +import java.io.InputStreamReader; +import java.io.OutputStreamWriter; import java.io.PrintWriter; +import java.io.Writer; +import java.net.Socket; +import java.nio.charset.StandardCharsets; import java.util.Arrays; import java.util.Collection; @@ -32,6 +38,7 @@ import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; +import org.junit.Assert; import org.junit.Test; import org.apache.catalina.Context; @@ -113,7 +120,7 @@ public void testAbortedUploadLimitedSwallow() { Exception ex = doAbortedUploadTest(client, true, true); assertNull("Limited upload with swallow enabled generates client exception", ex); - assertTrue("Limited upload with swallow enabled returns error status code", + assertTrue("Limited upload with swallow enabled returns non-500 status code", client.isResponse500()); client.reset(); } @@ -410,4 +417,78 @@ public boolean isResponseBodyOK() { } } + + @Test + public void testChunkedPUTLimit() throws Exception { + doTestChunkedPUT(true); + } + + + @Test + public void testChunkedPUTNoLimit() throws Exception { + doTestChunkedPUT(false); + } + + + public void doTestChunkedPUT(boolean limit) throws Exception { + + Tomcat tomcat = getTomcatInstance(); + tomcat.addContext("", TEMP_DIR); + // No need for target to exist. + + if (!limit) { + tomcat.getConnector().setAttribute("maxSwallowSize", "-1"); + } + + tomcat.start(); + + Exception writeEx = null; + Exception readEx = null; + String responseLine = null; + Socket conn = null; + + try { + conn = new Socket("localhost", getPort()); + Writer writer = new OutputStreamWriter( + conn.getOutputStream(), StandardCharsets.US_ASCII); + writer.write("PUT /does-not-exist HTTP/1.1\r\n"); + writer.write("Host: any\r\n"); + writer.write("Transfer-encoding: chunked\r\n"); + writer.write("\r\n"); + + // Smarter than the typical client. Attempts to read the response + // even if the request is not fully written. + try { + // Write (or try to write) 16MB + for (int i = 0; i < 1024 * 1024; i++) { + writer.write("10\r\n"); + writer.write("0123456789ABCDEF\r\n"); + } + } catch (Exception e) { + writeEx = e; + } + + try { + BufferedReader reader = new BufferedReader(new InputStreamReader( + conn.getInputStream(), StandardCharsets.US_ASCII)); + + responseLine = reader.readLine(); + } catch (IOException e) { + readEx = e; + } + } finally { + if (conn != null) { + conn.close(); + } + } + + if (limit) { + Assert.assertNotNull(writeEx); + } else { + Assert.assertNull(writeEx); + Assert.assertNull(readEx); + Assert.assertNotNull(responseLine); + Assert.assertTrue(responseLine.contains("404")); + } + } }
webapps/docs/changelog.xml+4 −0 modified@@ -147,6 +147,10 @@ HTTP connector and ensure that access log entries generated by error conditions use the correct request start time. (markt) </fix> + <add> + Add a new limit, defaulting to 2MB, for the amount of data Tomcat will + swallow for an aborted upload. (markt) + </add> </changelog> </subsection> <subsection name="Jasper">
webapps/docs/config/http.xml+10 −0 modified@@ -431,6 +431,16 @@ If not specified, this attribute is set to 100.</p> </attribute> + <attribute name="maxSwallowSize" required="false"> + <p>The maximum number of request body bytes (excluding transfer encoding + overhead) that will be swallowed by Tomcat for an aborted upload. An + aborted upload is when Tomcat knows that the request body is going to be + ignored but the client still sends it. If Tomcat does not swallow the body + the client is unlikely to see the response. If not specified the default + of 2097152 (2 megabytes) will be used. A value of less than zero indicates + that no limit should be enforced.</p> + </attribute> + <attribute name="maxThreads" required="false"> <p>The maximum number of request processing threads to be created by this <strong>Connector</strong>, which therefore determines the
6b2cfacf749bGrr. Different behaviours on different OSes
1 file changed · +0 −1
test/org/apache/catalina/core/TestSwallowAbortedUploads.java+0 −1 modified@@ -468,7 +468,6 @@ public void doTestChunkedPUT(boolean limit) throws Exception { if (limit) { Assert.assertNotNull(writeEx); - Assert.assertNotNull(readEx); } else { Assert.assertNull(writeEx); Assert.assertNull(readEx);
c1357e649641Correct test. Exceeded the swallow limit aborts the connection.
1 file changed · +4 −3
test/org/apache/catalina/core/TestSwallowAbortedUploads.java+4 −3 modified@@ -468,11 +468,12 @@ public void doTestChunkedPUT(boolean limit) throws Exception { if (limit) { Assert.assertNotNull(writeEx); + Assert.assertNotNull(readEx); } else { Assert.assertNull(writeEx); + Assert.assertNull(readEx); + Assert.assertNotNull(responseLine); + Assert.assertTrue(responseLine.contains("404")); } - Assert.assertNull(readEx); - Assert.assertNotNull(responseLine); - Assert.assertTrue(responseLine.contains("404")); } }
e28dd578fad9Add a new limit, defaulting to 2MB, for the amount of data Tomcat will swallow for an aborted upload.
16 files changed · +152 −22
java/org/apache/coyote/http11/AbstractHttp11Processor.java+4 −3 modified@@ -647,14 +647,15 @@ protected boolean statusDropsConnection(int status) { /** * Initialize standard input and output filters. */ - protected void initializeFilters(int maxTrailerSize, int maxExtensionSize) { + protected void initializeFilters(int maxTrailerSize, int maxExtensionSize, + int maxSwallowSize) { // Create and add the identity filters. - getInputBuffer().addFilter(new IdentityInputFilter()); + getInputBuffer().addFilter(new IdentityInputFilter(maxSwallowSize)); getOutputBuffer().addFilter(new IdentityOutputFilter()); // Create and add the chunked filters. getInputBuffer().addFilter( - new ChunkedInputFilter(maxTrailerSize, maxExtensionSize)); + new ChunkedInputFilter(maxTrailerSize, maxExtensionSize, maxSwallowSize)); getOutputBuffer().addFilter(new ChunkedOutputFilter()); // Create and add the void filters.
java/org/apache/coyote/http11/AbstractHttp11Protocol.java+10 −0 modified@@ -154,6 +154,16 @@ public void setMaxExtensionSize(int maxExtensionSize) { } + /** + * Maximum amount of request body to swallow. + */ + private int maxSwallowSize = 2 * 1024 * 1024; + public int getMaxSwallowSize() { return maxSwallowSize; } + public void setMaxSwallowSize(int maxSwallowSize) { + this.maxSwallowSize = maxSwallowSize; + } + + /** * This field indicates if the protocol is treated as if it is secure. This * normally means https is being used but can be used to fake https e.g
java/org/apache/coyote/http11/filters/ChunkedInputFilter.java+12 −3 modified@@ -137,6 +137,9 @@ public class ChunkedInputFilter implements InputFilter { private long extensionSize; + private final int maxSwallowSize; + + /** * Flag that indicates if an error has occurred. */ @@ -145,10 +148,11 @@ public class ChunkedInputFilter implements InputFilter { // ----------------------------------------------------------- Constructors - public ChunkedInputFilter(int maxTrailerSize, int maxExtensionSize) { + public ChunkedInputFilter(int maxTrailerSize, int maxExtensionSize, int maxSwallowSize) { this.trailingHeaders.setLimit(maxTrailerSize); this.maxExtensionSize = maxExtensionSize; this.maxTrailerSize = maxTrailerSize; + this.maxSwallowSize = maxSwallowSize; } @@ -234,9 +238,14 @@ public void setRequest(Request request) { */ @Override public long end() throws IOException { + long swallowed = 0; + int read = 0; // Consume extra bytes : parse the stream until the end chunk is found - while (doRead(readChunk, null) >= 0) { - // NOOP: Just consume the input + while ((read = doRead(readChunk, null)) >= 0) { + swallowed += read; + if (maxSwallowSize > -1 && swallowed > maxSwallowSize) { + throwIOException(sm.getString("inputFilter.maxSwallow")); + } } // Return the number of extra bytes which were consumed
java/org/apache/coyote/http11/filters/IdentityInputFilter.java+17 −2 modified@@ -24,6 +24,7 @@ import org.apache.coyote.Request; import org.apache.coyote.http11.InputFilter; import org.apache.tomcat.util.buf.ByteChunk; +import org.apache.tomcat.util.res.StringManager; /** * Identity input filter. @@ -32,6 +33,9 @@ */ public class IdentityInputFilter implements InputFilter { + private static final StringManager sm = StringManager.getManager( + IdentityInputFilter.class.getPackage().getName()); + // -------------------------------------------------------------- Constants @@ -76,6 +80,14 @@ public class IdentityInputFilter implements InputFilter { protected final ByteChunk endChunk = new ByteChunk(); + private final int maxSwallowSize; + + + public IdentityInputFilter(int maxSwallowSize) { + this.maxSwallowSize = maxSwallowSize; + } + + // ---------------------------------------------------- InputBuffer Methods /** @@ -137,8 +149,11 @@ public void setRequest(Request request) { * End the current request. */ @Override - public long end() - throws IOException { + public long end() throws IOException { + + if (maxSwallowSize > -1 && remaining > maxSwallowSize) { + throw new IOException(sm.getString("inputFilter.maxSwallow")); + } // Consume extra bytes. while (remaining > 0) {
java/org/apache/coyote/http11/filters/LocalStrings.properties+3 −1 modified@@ -22,4 +22,6 @@ chunkedInputFilter.invalidCrlfNoCR=Invalid end of line sequence (No CR before LF chunkedInputFilter.invalidCrlfNoData=Invalid end of line sequence (no data available to read) chunkedInputFilter.invalidHeader=Invalid chunk header chunkedInputFilter.maxExtension=maxExtensionSize exceeded -chunkedInputFilter.maxTrailer=maxTrailerSize exceeded \ No newline at end of file +chunkedInputFilter.maxTrailer=maxTrailerSize exceeded + +inputFilter.maxSwallow=maxSwallowSize exceeded \ No newline at end of file
java/org/apache/coyote/http11/Http11AprProcessor.java+2 −2 modified@@ -59,7 +59,7 @@ protected Log getLog() { public Http11AprProcessor(int headerBufferSize, AprEndpoint endpoint, - int maxTrailerSize, int maxExtensionSize) { + int maxTrailerSize, int maxExtensionSize, int maxSwallowSize) { super(endpoint); @@ -69,7 +69,7 @@ public Http11AprProcessor(int headerBufferSize, AprEndpoint endpoint, outputBuffer = new InternalAprOutputBuffer(response, headerBufferSize); response.setOutputBuffer(outputBuffer); - initializeFilters(maxTrailerSize, maxExtensionSize); + initializeFilters(maxTrailerSize, maxExtensionSize, maxSwallowSize); }
java/org/apache/coyote/http11/Http11AprProtocol.java+2 −1 modified@@ -319,7 +319,8 @@ protected void longPoll(SocketWrapper<Long> socket, protected Http11AprProcessor createProcessor() { Http11AprProcessor processor = new Http11AprProcessor( proto.getMaxHttpHeaderSize(), (AprEndpoint)proto.endpoint, - proto.getMaxTrailerSize(), proto.getMaxExtensionSize()); + proto.getMaxTrailerSize(), proto.getMaxExtensionSize(), + proto.getMaxSwallowSize()); processor.setAdapter(proto.getAdapter()); processor.setMaxKeepAliveRequests(proto.getMaxKeepAliveRequests()); processor.setKeepAliveTimeout(proto.getKeepAliveTimeout());
java/org/apache/coyote/http11/Http11Nio2Processor.java+2 −2 modified@@ -60,7 +60,7 @@ protected Log getLog() { public Http11Nio2Processor(int maxHttpHeaderSize, Nio2Endpoint endpoint, - int maxTrailerSize, int maxExtensionSize) { + int maxTrailerSize, int maxExtensionSize, int maxSwallowSize) { super(endpoint); @@ -70,7 +70,7 @@ public Http11Nio2Processor(int maxHttpHeaderSize, Nio2Endpoint endpoint, outputBuffer = new InternalNio2OutputBuffer(response, maxHttpHeaderSize); response.setOutputBuffer(outputBuffer); - initializeFilters(maxTrailerSize, maxExtensionSize); + initializeFilters(maxTrailerSize, maxExtensionSize, maxSwallowSize); }
java/org/apache/coyote/http11/Http11Nio2Protocol.java+2 −1 modified@@ -248,7 +248,8 @@ protected void longPoll(SocketWrapper<Nio2Channel> socket, public Http11Nio2Processor createProcessor() { Http11Nio2Processor processor = new Http11Nio2Processor( proto.getMaxHttpHeaderSize(), (Nio2Endpoint) proto.endpoint, - proto.getMaxTrailerSize(), proto.getMaxExtensionSize()); + proto.getMaxTrailerSize(), proto.getMaxExtensionSize(), + proto.getMaxSwallowSize()); processor.setAdapter(proto.getAdapter()); processor.setMaxKeepAliveRequests(proto.getMaxKeepAliveRequests()); processor.setKeepAliveTimeout(proto.getKeepAliveTimeout());
java/org/apache/coyote/http11/Http11NioProcessor.java+2 −2 modified@@ -63,7 +63,7 @@ protected Log getLog() { public Http11NioProcessor(int maxHttpHeaderSize, NioEndpoint endpoint, - int maxTrailerSize, int maxExtensionSize) { + int maxTrailerSize, int maxExtensionSize, int maxSwallowSize) { super(endpoint); @@ -73,7 +73,7 @@ public Http11NioProcessor(int maxHttpHeaderSize, NioEndpoint endpoint, outputBuffer = new InternalNioOutputBuffer(response, maxHttpHeaderSize); response.setOutputBuffer(outputBuffer); - initializeFilters(maxTrailerSize, maxExtensionSize); + initializeFilters(maxTrailerSize, maxExtensionSize, maxSwallowSize); }
java/org/apache/coyote/http11/Http11NioProtocol.java+2 −1 modified@@ -280,7 +280,8 @@ protected void longPoll(SocketWrapper<NioChannel> socket, public Http11NioProcessor createProcessor() { Http11NioProcessor processor = new Http11NioProcessor( proto.getMaxHttpHeaderSize(), (NioEndpoint)proto.endpoint, - proto.getMaxTrailerSize(), proto.getMaxExtensionSize()); + proto.getMaxTrailerSize(), proto.getMaxExtensionSize(), + proto.getMaxSwallowSize()); processor.setAdapter(proto.getAdapter()); processor.setMaxKeepAliveRequests(proto.getMaxKeepAliveRequests()); processor.setKeepAliveTimeout(proto.getKeepAliveTimeout());
java/org/apache/coyote/http11/Http11Processor.java+2 −2 modified@@ -49,7 +49,7 @@ protected Log getLog() { public Http11Processor(int headerBufferSize, JIoEndpoint endpoint, - int maxTrailerSize, int maxExtensionSize) { + int maxTrailerSize, int maxExtensionSize, int maxSwallowSize) { super(endpoint); @@ -59,7 +59,7 @@ public Http11Processor(int headerBufferSize, JIoEndpoint endpoint, outputBuffer = new InternalOutputBuffer(response, headerBufferSize); response.setOutputBuffer(outputBuffer); - initializeFilters(maxTrailerSize, maxExtensionSize); + initializeFilters(maxTrailerSize, maxExtensionSize, maxSwallowSize); }
java/org/apache/coyote/http11/Http11Protocol.java+2 −1 modified@@ -186,7 +186,8 @@ protected void longPoll(SocketWrapper<Socket> socket, protected Http11Processor createProcessor() { Http11Processor processor = new Http11Processor( proto.getMaxHttpHeaderSize(), (JIoEndpoint)proto.endpoint, - proto.getMaxTrailerSize(),proto.getMaxExtensionSize()); + proto.getMaxTrailerSize(),proto.getMaxExtensionSize(), + proto.getMaxSwallowSize()); processor.setAdapter(proto.getAdapter()); processor.setMaxKeepAliveRequests(proto.getMaxKeepAliveRequests()); processor.setKeepAliveTimeout(proto.getKeepAliveTimeout());
test/org/apache/catalina/core/TestSwallowAbortedUploads.java+76 −1 modified@@ -16,8 +16,14 @@ */ package org.apache.catalina.core; +import java.io.BufferedReader; import java.io.IOException; +import java.io.InputStreamReader; +import java.io.OutputStreamWriter; import java.io.PrintWriter; +import java.io.Writer; +import java.net.Socket; +import java.nio.charset.StandardCharsets; import java.util.Arrays; import java.util.Collection; @@ -32,6 +38,7 @@ import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; +import org.junit.Assert; import org.junit.Test; import org.apache.catalina.Context; @@ -113,7 +120,7 @@ public void testAbortedUploadLimitedSwallow() { Exception ex = doAbortedUploadTest(client, true, true); assertNull("Limited upload with swallow enabled generates client exception", ex); - assertTrue("Limited upload with swallow enabled returns error status code", + assertTrue("Limited upload with swallow enabled returns non-500 status code", client.isResponse500()); client.reset(); } @@ -400,4 +407,72 @@ public boolean isResponseBodyOK() { } } + + @Test + public void testChunkedPUTLimit() throws Exception { + doTestChunkedPUT(true); + } + + + @Test + public void testChunkedPUTNoLimit() throws Exception { + doTestChunkedPUT(false); + } + + + public void doTestChunkedPUT(boolean limit) throws Exception { + + Tomcat tomcat = getTomcatInstance(); + tomcat.addContext("", TEMP_DIR); + // No need for target to exist. + + if (!limit) { + tomcat.getConnector().setAttribute("maxSwallowSize", "-1"); + } + + tomcat.start(); + + Exception writeEx = null; + Exception readEx = null; + String responseLine = null; + + try (Socket conn = new Socket("localhost", getPort())) { + Writer writer = new OutputStreamWriter( + conn.getOutputStream(), StandardCharsets.US_ASCII); + writer.write("PUT /does-not-exist HTTP/1.1\r\n"); + writer.write("Host: any\r\n"); + writer.write("Transfer-encoding: chunked\r\n"); + writer.write("\r\n"); + + // Smarter than the typical client. Attempts to read the response + // even if the request is not fully written. + try { + // Write (or try to write) 16MB + for (int i = 0; i < 1024 * 1024; i++) { + writer.write("10\r\n"); + writer.write("0123456789ABCDEF\r\n"); + } + } catch (Exception e) { + writeEx = e; + } + + try { + BufferedReader reader = new BufferedReader(new InputStreamReader( + conn.getInputStream(), StandardCharsets.US_ASCII)); + + responseLine = reader.readLine(); + } catch (IOException e) { + readEx = e; + } + } + + if (limit) { + Assert.assertNotNull(writeEx); + } else { + Assert.assertNull(writeEx); + } + Assert.assertNull(readEx); + Assert.assertNotNull(responseLine); + Assert.assertTrue(responseLine.contains("404")); + } }
webapps/docs/changelog.xml+4 −0 modified@@ -205,6 +205,10 @@ <fix> Improve configuration of cache sizes in the endpoint. (markt) </fix> + <add> + Add a new limit, defaulting to 2MB, for the amount of data Tomcat will + swallow for an aborted upload. (markt) + </add> </changelog> </subsection> <subsection name="Jasper">
webapps/docs/config/http.xml+10 −0 modified@@ -436,6 +436,16 @@ If not specified, this attribute is set to 100.</p> </attribute> + <attribute name="maxSwallowSize" required="false"> + <p>The maximum number of request body bytes (excluding transfer encoding + overhead) that will be swallowed by Tomcat for an aborted upload. An + aborted upload is when Tomcat knows that the request body is going to be + ignored but the client still sends it. If Tomcat does not swallow the body + the client is unlikely to see the response. If not specified the default + of 2097152 (2 megabytes) will be used. A value of less than zero indicates + that no limit should be enforced.</p> + </attribute> + <attribute name="maxThreads" required="false"> <p>The maximum number of request processing threads to be created by this <strong>Connector</strong>, which therefore determines the
Vulnerability mechanics
Generated by null/stub on May 9, 2026. Inputs: CWE entries + fix-commit diffs from this CVE's patches. Citations validated against bundle.
References
56- tomcat.apache.org/security-6.htmlnvdPatchVendor AdvisoryWEB
- tomcat.apache.org/security-7.htmlnvdPatchVendor AdvisoryWEB
- tomcat.apache.org/security-8.htmlnvdPatchVendor AdvisoryWEB
- mail-archives.apache.org/mod_mbox/tomcat-announce/201505.mbox/%3C554949D1.8030904%40apache.org%3EnvdVendor AdvisoryWEB
- github.com/advisories/GHSA-pxcx-cxq8-4mmwghsaADVISORY
- nvd.nist.gov/vuln/detail/CVE-2014-0230ghsaADVISORY
- marc.infonvdWEB
- marc.infonvdWEB
- openwall.com/lists/oss-security/2015/04/10/1nvdWEB
- rhn.redhat.com/errata/RHSA-2015-1622.htmlnvdWEB
- rhn.redhat.com/errata/RHSA-2016-0595.htmlnvdWEB
- rhn.redhat.com/errata/RHSA-2016-0596.htmlnvdWEB
- rhn.redhat.com/errata/RHSA-2016-0597.htmlnvdWEB
- rhn.redhat.com/errata/RHSA-2016-0598.htmlnvdWEB
- svn.apache.org/viewvcnvdWEB
- svn.apache.org/viewvcnvdWEB
- svn.apache.org/viewvcnvdWEB
- www.debian.org/security/2016/dsa-3447nvdWEB
- www.debian.org/security/2016/dsa-3530nvdWEB
- www.oracle.com/technetwork/security-advisory/cpujul2018-4258247.htmlnvdWEB
- www.oracle.com/technetwork/topics/security/bulletinoct2015-2511968.htmlnvdWEB
- www.oracle.com/technetwork/topics/security/cpujul2015-2367936.htmlnvdWEB
- www.ubuntu.com/usn/USN-2654-1nvdWEB
- www.ubuntu.com/usn/USN-2655-1nvdWEB
- access.redhat.com/errata/RHSA-2015:2659nvdWEB
- access.redhat.com/errata/RHSA-2015:2660nvdWEB
- github.com/apache/tomcat/commit/6b2cfacf749be186ea77249a979af1d4863e47baghsaWEB
- github.com/apache/tomcat/commit/812088583d0e60717a8fe9c6d14e12bcdc3e6c51ghsaWEB
- github.com/apache/tomcat/commit/b1c8477e3e3ee635d19cc4d5987c2b157431e0c1ghsaWEB
- github.com/apache/tomcat/commit/c1357e649641844109711d60cacb98e4b5fcd3cbghsaWEB
- github.com/apache/tomcat/commit/e28dd578fad90a6d5726ec34f3245c9f99d909a5ghsaWEB
- github.com/apache/tomcat/commit/e3146f4b03a2386c3e57597e86134d4ed5c31303ghsaWEB
- github.com/apache/tomcat/commit/fc049912464f0dcf9dede3761f38049369057e16ghsaWEB
- github.com/apache/tomcat/commit/fdd9f11dc24b95e5425076abb58e968336f320a2ghsaWEB
- h20564.www2.hpe.com/portal/site/hpsc/public/kb/docDisplaynvdWEB
- h20566.www2.hpe.com/portal/site/hpsc/public/kb/docDisplaynvdWEB
- issues.jboss.org/browse/JWS-219nvdWEB
- issues.jboss.org/browse/JWS-220nvdWEB
- lists.apache.org/thread.html/37220405a377c0182d2afdbc36461c4783b2930fbeae3a17f1333113@%3Cdev.tomcat.apache.org%3EghsaWEB
- lists.apache.org/thread.html/39ae1f0bd5867c15755a6f959b271ade1aea04ccdc3b2e639dcd903b@%3Cdev.tomcat.apache.org%3EghsaWEB
- lists.apache.org/thread.html/b84ad1258a89de5c9c853c7f2d3ad77e5b8b2930be9e132d5cef6b95@%3Cdev.tomcat.apache.org%3EghsaWEB
- lists.apache.org/thread.html/b8a1bf18155b552dcf9a928ba808cbadad84c236d85eab3033662cfb@%3Cdev.tomcat.apache.org%3EghsaWEB
- lists.apache.org/thread.html/r03c597a64de790ba42c167efacfa23300c3d6c9fe589ab87fe02859c@%3Cdev.tomcat.apache.org%3EghsaWEB
- lists.apache.org/thread.html/r587e50b86c1a96ee301f751d50294072d142fd6dc08a8987ae9f3a9b@%3Cdev.tomcat.apache.org%3EghsaWEB
- lists.apache.org/thread.html/r9136ff5b13e4f1941360b5a309efee2c114a14855578c3a2cbe5d19c@%3Cdev.tomcat.apache.org%3EghsaWEB
- rhn.redhat.com/errata/RHSA-2015-1621.htmlnvd
- rhn.redhat.com/errata/RHSA-2015-2661.htmlnvd
- rhn.redhat.com/errata/RHSA-2016-0599.htmlnvd
- www.securityfocus.com/bid/74475nvd
- lists.apache.org/thread.html/37220405a377c0182d2afdbc36461c4783b2930fbeae3a17f1333113%40%3Cdev.tomcat.apache.org%3Envd
- lists.apache.org/thread.html/39ae1f0bd5867c15755a6f959b271ade1aea04ccdc3b2e639dcd903b%40%3Cdev.tomcat.apache.org%3Envd
- lists.apache.org/thread.html/b84ad1258a89de5c9c853c7f2d3ad77e5b8b2930be9e132d5cef6b95%40%3Cdev.tomcat.apache.org%3Envd
- lists.apache.org/thread.html/b8a1bf18155b552dcf9a928ba808cbadad84c236d85eab3033662cfb%40%3Cdev.tomcat.apache.org%3Envd
- lists.apache.org/thread.html/r03c597a64de790ba42c167efacfa23300c3d6c9fe589ab87fe02859c%40%3Cdev.tomcat.apache.org%3Envd
- lists.apache.org/thread.html/r587e50b86c1a96ee301f751d50294072d142fd6dc08a8987ae9f3a9b%40%3Cdev.tomcat.apache.org%3Envd
- lists.apache.org/thread.html/r9136ff5b13e4f1941360b5a309efee2c114a14855578c3a2cbe5d19c%40%3Cdev.tomcat.apache.org%3Envd
News mentions
0No linked articles in our index yet.