Crashes due to overflow and `CHECK`-fail in ops with large tensor shapes
Description
TensorFlow is an open source platform for machine learning. In affected versions, TensorFlow allows a tensor to have a large number of dimensions, and each dimension can be as large as desired. However, the total number of elements in a tensor must fit within an `int64_t`. If an overflow occurs, `MultiplyWithoutOverflow` returns a negative result. In most of the TensorFlow codebase this then results in a `CHECK`-failure, which crashes the process. Newer constructs exist that return a `Status` instead of crashing the binary. This is similar to CVE-2021-29584. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit onto TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in the supported range.
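The failure mode the advisory describes can be sketched in plain Python. This is a hypothetical model of the behavior of TensorFlow's `MultiplyWithoutOverflow` helper (the real one lives in C++ in `tensorflow/core/util/overflow.h`), not the actual implementation: a checked multiply that returns a negative sentinel when the product cannot be represented as a non-negative `int64_t`.

```python
INT64_MAX = 2**63 - 1


def multiply_without_overflow(x: int, y: int) -> int:
    """Model of an overflow-checked int64 multiply.

    Returns x * y when the product fits in a non-negative int64_t,
    and -1 (a negative sentinel) when it would overflow.
    """
    if x < 0 or y < 0:
        return -1
    # Python ints are arbitrary precision, so we can compare directly
    # instead of detecting wraparound the way the C++ helper does.
    product = x * y
    return product if product <= INT64_MAX else -1


# A tensor of shape [5191549470, 5191549470] (the dimensions used in the
# pad_to_bounding_box repro below) has more elements than int64_t can hold:
print(multiply_without_overflow(5191549470, 5191549470))  # -1
print(multiply_without_overflow(3, 3))                    # 9
```

Code that consumes this negative result without checking it is what used to trip a `CHECK` and abort the process; the patches below route the error through a `Status` instead.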
Affected packages
Versions sourced from the GitHub Security Advisory.
| Package | Affected versions | Patched versions |
|---|---|---|
| tensorflow (PyPI) | >= 2.6.0, < 2.6.1 | 2.6.1 |
| tensorflow (PyPI) | >= 2.5.0, < 2.5.2 | 2.5.2 |
| tensorflow (PyPI) | < 2.4.4 | 2.4.4 |
| tensorflow-cpu (PyPI) | >= 2.6.0, < 2.6.1 | 2.6.1 |
| tensorflow-cpu (PyPI) | >= 2.5.0, < 2.5.2 | 2.5.2 |
| tensorflow-cpu (PyPI) | < 2.4.4 | 2.4.4 |
| tensorflow-gpu (PyPI) | >= 2.6.0, < 2.6.1 | 2.6.1 |
| tensorflow-gpu (PyPI) | >= 2.5.0, < 2.5.2 | 2.5.2 |
| tensorflow-gpu (PyPI) | < 2.4.4 | 2.4.4 |
Affected products
Range: >= 2.6.0, < 2.6.1
Patches
Commit a871989d7b6c: Merge pull request #51658 from yongtang:51618-tf.image.extract_glimpse
2 files changed · +16 −2
tensorflow/core/kernels/image/attention_ops.cc (+3 −2, modified)

```diff
@@ -87,9 +87,10 @@ class ExtractGlimpseOp : public OpKernel {
     const int64_t output_height = window_size.tensor<int, 1>()(0);
     const int64_t output_width = window_size.tensor<int, 1>()(1);
+
     TensorShape output_shape = input_shape;
-    output_shape.set_dim(1, output_height);
-    output_shape.set_dim(2, output_width);
+    OP_REQUIRES_OK(context, output_shape.SetDimWithStatus(1, output_height));
+    OP_REQUIRES_OK(context, output_shape.SetDimWithStatus(2, output_width));
 
     const Tensor& offsets = context->input(2);
     OP_REQUIRES(context, offsets.shape().dims() == 2,
```
tensorflow/python/kernel_tests/attention_ops_test.py (+13 −0, modified)

```diff
@@ -18,6 +18,7 @@
 from tensorflow.python.framework import constant_op
 from tensorflow.python.framework import dtypes
+from tensorflow.python.framework import errors
 from tensorflow.python.ops import array_ops
 from tensorflow.python.ops import gen_image_ops
 from tensorflow.python.ops import image_ops
@@ -297,6 +298,18 @@ def testGlimpseNonNormalizedNonCentered(self):
         np.asarray([[5, 6, 7], [10, 11, 12], [15, 16, 17]]),
         self.evaluate(result2)[0, :, :, 0])
 
+  def testGlimpseNegativeInput(self):
+    img = np.arange(9).reshape([1, 3, 3, 1])
+    with self.test_session():
+      with self.assertRaises((errors.InternalError, ValueError)):
+        result = image_ops.extract_glimpse_v2(
+            img,
+            size=[1023, -63],
+            offsets=[1023, 63],
+            centered=False,
+            normalized=False)
+        self.evaluate(result)
+
 
 if __name__ == '__main__':
   test.main()
```
Commit d81b1351da3e: Merge pull request #51717 from yongtang:46890-tf.image.pad_to_bounding_box
2 files changed · +17 −1
tensorflow/core/kernels/pad_op.cc (+2 −1, modified)

```diff
@@ -85,7 +85,8 @@ class PadOp : public OpKernel {
                   errors::InvalidArgument("Paddings must be non-negative: ",
                                           before_d, " ", after_d));
       const int64_t size_d = in0.dim_size(d);
-      output_shape.AddDim(before_d + size_d + after_d);
+      OP_REQUIRES_OK(
+          context, output_shape.AddDimWithStatus(before_d + size_d + after_d));
     }
 
     // If there is no padding to be done, forward the input to output.
```
tensorflow/python/ops/image_ops_test.py (+15 −0, modified)

```diff
@@ -2293,6 +2293,21 @@ def testNameScope(self):
       y = image_ops.pad_to_bounding_box(image, 0, 0, 55, 66)
       self.assertTrue(y.op.name.startswith("pad_to_bounding_box"))
 
+  def testInvalidInput(self):
+    # Test case for GitHub issue 46890.
+    if test_util.is_xla_enabled():
+      # TODO(b/200850176): test fails with XLA.
+      return
+    with self.session():
+      with self.assertRaises(errors_impl.InternalError):
+        v = image_ops.pad_to_bounding_box(
+            image=np.ones((1, 1, 1)),
+            target_height=5191549470,
+            target_width=5191549470,
+            offset_height=1,
+            offset_width=1)
+        self.evaluate(v)
+
 
 class SelectDistortedCropBoxTest(test_util.TensorFlowTestCase):
```
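To see why the `pad_to_bounding_box` repro values trigger the bug: each target dimension (5191549470) is individually a valid `int64_t`, but the resulting element count is not. A back-of-the-envelope check in plain Python (no TensorFlow required):

```python
INT64_MAX = 2**63 - 1

# Repro values from the regression test: a [5191549470, 5191549470, 1]
# output shape requested via pad_to_bounding_box.
target_height = 5191549470
target_width = 5191549470
channels = 1

# Each dimension on its own is a representable int64 value...
assert target_height <= INT64_MAX and target_width <= INT64_MAX

# ...but the total number of elements is not, which is exactly the
# condition AddDimWithStatus now reports as an error instead of letting
# an unchecked product feed a later CHECK.
num_elements = target_height * target_width * channels
print(num_elements > INT64_MAX)  # True
```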
Commit 7c1692bd417e: PR #51732: Fix crash of tf.image.crop_and_resize when input is large number
2 files changed · +24 −12
tensorflow/core/kernels/image/crop_and_resize_op.cc (+14 −12, modified)

```diff
@@ -170,14 +170,15 @@ class CropAndResizeOp : public AsyncOpKernel {
         context, crop_height > 0 && crop_width > 0,
         errors::InvalidArgument("crop dimensions must be positive"), done);
 
+    TensorShape shape;
+    OP_REQUIRES_OK_ASYNC(context, shape.AddDimWithStatus(num_boxes), done);
+    OP_REQUIRES_OK_ASYNC(context, shape.AddDimWithStatus(crop_height), done);
+    OP_REQUIRES_OK_ASYNC(context, shape.AddDimWithStatus(crop_width), done);
+    OP_REQUIRES_OK_ASYNC(context, shape.AddDimWithStatus(depth), done);
     // Allocate output tensor.
     Tensor* output = nullptr;
-    OP_REQUIRES_OK_ASYNC(
-        context,
-        context->allocate_output(
-            0, TensorShape({num_boxes, crop_height, crop_width, depth}),
-            &output),
-        done);
+    OP_REQUIRES_OK_ASYNC(context, context->allocate_output(0, shape, &output),
+                         done);
 
     auto compute_callback = [this, context, output]() {
       const Tensor& image = context->input(0);
@@ -417,14 +418,15 @@ class CropAndResizeGradImageOp : public AsyncOpKernel {
           done);
     }
 
+    TensorShape shape;
+    OP_REQUIRES_OK_ASYNC(context, shape.AddDimWithStatus(batch_size), done);
+    OP_REQUIRES_OK_ASYNC(context, shape.AddDimWithStatus(image_height), done);
+    OP_REQUIRES_OK_ASYNC(context, shape.AddDimWithStatus(image_width), done);
+    OP_REQUIRES_OK_ASYNC(context, shape.AddDimWithStatus(depth), done);
     // Allocate output tensor.
     Tensor* output = nullptr;
-    OP_REQUIRES_OK_ASYNC(
-        context,
-        context->allocate_output(
-            0, TensorShape({batch_size, image_height, image_width, depth}),
-            &output),
-        done);
+    OP_REQUIRES_OK_ASYNC(context, context->allocate_output(0, shape, &output),
+                         done);
 
     auto compute_callback = [this, context, output]() {
       const Tensor& grads = context->input(0);
```
tensorflow/python/ops/image_ops_test.py (+10 −0, modified)

```diff
@@ -6075,6 +6075,16 @@ def testImageCropAndResize(self):
           crop_size=[1, 1])
       self.evaluate(op)
 
+  def testImageCropAndResizeWithInvalidInput(self):
+    with self.session():
+      with self.assertRaises((errors.InternalError, ValueError)):
+        op = image_ops_impl.crop_and_resize_v2(
+            image=np.ones((1, 1, 1, 1)),
+            boxes=np.ones((11, 4)),
+            box_indices=np.ones((11)),
+            crop_size=[2065374891, 1145309325])
+        self.evaluate(op)
+
   @parameterized.named_parameters(
       ("_jpeg", "JPEG", "jpeg_merge_test1.jpg"),
       ("_png", "PNG", "lena_rgba.png"),
```
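The common thread across all three patches is building the output shape one dimension at a time through a Status-returning method (`AddDimWithStatus` / `SetDimWithStatus`) so that an overflowing element count becomes a reportable error rather than a `CHECK`-fail. A minimal Python sketch of that pattern (a hypothetical `SafeShape` class for illustration, not a TensorFlow API):

```python
INT64_MAX = 2**63 - 1


class InvalidShapeError(ValueError):
    """Raised instead of crashing when a shape would overflow int64."""


class SafeShape:
    """Hypothetical sketch: validate each dimension as it is added."""

    def __init__(self):
        self.dims = []
        self.num_elements = 1

    def add_dim_with_status(self, size: int) -> None:
        if size < 0:
            raise InvalidShapeError(f"negative dimension: {size}")
        new_count = self.num_elements * size
        if new_count > INT64_MAX:
            raise InvalidShapeError("element count overflows int64")
        self.dims.append(size)
        self.num_elements = new_count


# The crop_and_resize repro values from the test above: the running
# product 11 * 2065374891 * 1145309325 exceeds INT64_MAX, so shape
# construction fails cleanly instead of CHECK-failing later.
shape = SafeShape()
shape.add_dim_with_status(11)          # num_boxes
shape.add_dim_with_status(2065374891)  # crop_height
try:
    shape.add_dim_with_status(1145309325)  # crop_width: overflow detected
except InvalidShapeError as e:
    print("rejected:", e)
```

In the real kernels the error propagates through `OP_REQUIRES_OK` / `OP_REQUIRES_OK_ASYNC`, which aborts the op with a `Status` the Python caller sees as an exception, as the regression tests above assert.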
References
- github.com/advisories/GHSA-prcg-wp5q-rv7p (ADVISORY)
- nvd.nist.gov/vuln/detail/CVE-2021-41197 (ADVISORY)
- github.com/pypa/advisory-database/tree/main/vulns/tensorflow-cpu/PYSEC-2021-607.yaml (WEB)
- github.com/pypa/advisory-database/tree/main/vulns/tensorflow-gpu/PYSEC-2021-805.yaml (WEB)
- github.com/pypa/advisory-database/tree/main/vulns/tensorflow/PYSEC-2021-390.yaml (WEB)
- github.com/tensorflow/tensorflow/commit/7c1692bd417eb4f9b33ead749a41166d6080af85 (WEB)
- github.com/tensorflow/tensorflow/commit/a871989d7b6c18cdebf2fb4f0e5c5b62fbc19edf (WEB)
- github.com/tensorflow/tensorflow/commit/d81b1351da3e8c884ff836b64458d94e4a157c15 (WEB)
- github.com/tensorflow/tensorflow/issues/46890 (WEB)
- github.com/tensorflow/tensorflow/issues/51908 (WEB)
- github.com/tensorflow/tensorflow/security/advisories/GHSA-prcg-wp5q-rv7p (ADVISORY)
News mentions
No linked articles in our index yet.