VYPR
Low severity · NVD Advisory · Published May 14, 2021 · Updated Aug 3, 2024

Heap out of bounds in `QuantizedBatchNormWithGlobalNormalization`

CVE-2021-29547

Description

TensorFlow is an end-to-end open source platform for machine learning. An attacker can cause a segfault and denial of service by triggering an out-of-bounds read in `tf.raw_ops.QuantizedBatchNormWithGlobalNormalization`. The implementation (https://github.com/tensorflow/tensorflow/blob/55a97caa9e99c7f37a0bbbeb414dc55553d3ae7f/tensorflow/core/kernels/quantized_batch_norm_op.cc#L176-L189) assumes the inputs are not empty. If any of these inputs is empty, `.flat<T>()` is an empty buffer, so accessing the element at index 0 reads data outside of bounds. The fix will be included in TensorFlow 2.5.0. We will also cherry-pick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3, and TensorFlow 2.1.4, as these are also affected and still in the supported range.
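The failure mode can be modeled in plain Python (an illustrative sketch with hypothetical helper names, not TensorFlow code): reading element 0 of an empty flat buffer is an out-of-bounds access, which the fix prevents by first requiring exactly one element, mirroring the `OP_REQUIRES(..., NumElements() == 1, ...)` checks added in the patch.

```python
# Illustrative pure-Python model of the bug; function names are
# hypothetical, not TensorFlow APIs.

def read_scalar_unchecked(flat_buf):
    # Models `context->input(i).flat<float>()(0)` in the vulnerable
    # kernel: element 0 is read without checking the buffer size, so an
    # empty tensor turns this into an out-of-bounds read (an IndexError
    # here; undefined behavior / a segfault in the C++ kernel).
    return flat_buf[0]

def read_scalar_checked(flat_buf, name):
    # Models the patched path: require exactly one element before
    # touching element 0, like OP_REQUIRES(NumElements() == 1, ...).
    if len(flat_buf) != 1:
        raise ValueError(f"{name} must have 1 element")
    return flat_buf[0]

empty_input_min = []  # attacker-controlled empty `input_min` tensor
try:
    read_scalar_unchecked(empty_input_min)
except IndexError:
    print("unchecked: out-of-bounds read")

try:
    read_scalar_checked(empty_input_min, "input_min")
except ValueError as err:
    print(f"checked: rejected ({err})")

print(read_scalar_checked([0.0], "input_min"))  # a valid scalar passes
```

The same check is applied to all twelve min/max scalar inputs of the op in the patched kernel, since any one of them can be supplied empty.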

Affected packages

Versions sourced from the GitHub Security Advisory.

| Package | Ecosystem | Affected versions | Patched versions |
| --- | --- | --- | --- |
| tensorflow | PyPI | < 2.1.4 | 2.1.4 |
| tensorflow | PyPI | >= 2.2.0, < 2.2.3 | 2.2.3 |
| tensorflow | PyPI | >= 2.3.0, < 2.3.3 | 2.3.3 |
| tensorflow | PyPI | >= 2.4.0, < 2.4.2 | 2.4.2 |
| tensorflow-cpu | PyPI | < 2.1.4 | 2.1.4 |
| tensorflow-cpu | PyPI | >= 2.2.0, < 2.2.3 | 2.2.3 |
| tensorflow-cpu | PyPI | >= 2.3.0, < 2.3.3 | 2.3.3 |
| tensorflow-cpu | PyPI | >= 2.4.0, < 2.4.2 | 2.4.2 |
| tensorflow-gpu | PyPI | < 2.1.4 | 2.1.4 |
| tensorflow-gpu | PyPI | >= 2.2.0, < 2.2.3 | 2.2.3 |
| tensorflow-gpu | PyPI | >= 2.3.0, < 2.3.3 | 2.3.3 |
| tensorflow-gpu | PyPI | >= 2.4.0, < 2.4.2 | 2.4.2 |


Patches

d6ed5bcfe1dc

Add missing validation in `QuantizedBatchNormWithGlobalNormalization`

https://github.com/tensorflow/tensorflow · Mihai Maruseac · Apr 23, 2021 · via GHSA
1 file changed · +67 −10
  • tensorflow/core/kernels/quantized_batch_norm_op.cc (modified, +67 −10)
    @@ -173,20 +173,50 @@ class QuantizedBatchNormOp : public OpKernel {
     
       void Compute(OpKernelContext* context) override {
         const Tensor& input = context->input(0);
    -    const float input_min = context->input(1).flat<float>()(0);
    -    const float input_max = context->input(2).flat<float>()(0);
    +    const auto& input_min_tensor = context->input(1);
    +    OP_REQUIRES(context, input_min_tensor.NumElements() == 1,
    +                errors::InvalidArgument("input_min must have 1 element"));
    +    const float input_min = input_min_tensor.flat<float>()(0);
    +    const auto& input_max_tensor = context->input(2);
    +    OP_REQUIRES(context, input_max_tensor.NumElements() == 1,
    +                errors::InvalidArgument("input_max must have 1 element"));
    +    const float input_max = input_max_tensor.flat<float>()(0);
         const Tensor& mean = context->input(3);
    -    const float mean_min = context->input(4).flat<float>()(0);
    -    const float mean_max = context->input(5).flat<float>()(0);
    +    const auto& mean_min_tensor = context->input(4);
    +    OP_REQUIRES(context, mean_min_tensor.NumElements() == 1,
    +                errors::InvalidArgument("mean_min must have 1 element"));
    +    const float mean_min = mean_min_tensor.flat<float>()(0);
    +    const auto& mean_max_tensor = context->input(5);
    +    OP_REQUIRES(context, mean_max_tensor.NumElements() == 1,
    +                errors::InvalidArgument("mean_max must have 1 element"));
    +    const float mean_max = mean_max_tensor.flat<float>()(0);
         const Tensor& var = context->input(6);
    -    const float var_min = context->input(7).flat<float>()(0);
    -    const float var_max = context->input(8).flat<float>()(0);
    +    const auto& var_min_tensor = context->input(7);
    +    OP_REQUIRES(context, var_min_tensor.NumElements() == 1,
    +                errors::InvalidArgument("var_min must have 1 element"));
    +    const float var_min = var_min_tensor.flat<float>()(0);
    +    const auto& var_max_tensor = context->input(8);
    +    OP_REQUIRES(context, var_max_tensor.NumElements() == 1,
    +                errors::InvalidArgument("var_max must have 1 element"));
    +    const float var_max = var_max_tensor.flat<float>()(0);
         const Tensor& beta = context->input(9);
    -    const float beta_min = context->input(10).flat<float>()(0);
    -    const float beta_max = context->input(11).flat<float>()(0);
    +    const auto& beta_min_tensor = context->input(10);
    +    OP_REQUIRES(context, beta_min_tensor.NumElements() == 1,
    +                errors::InvalidArgument("beta_min must have 1 element"));
    +    const float beta_min = beta_min_tensor.flat<float>()(0);
    +    const auto& beta_max_tensor = context->input(11);
    +    OP_REQUIRES(context, beta_max_tensor.NumElements() == 1,
    +                errors::InvalidArgument("beta_max must have 1 element"));
    +    const float beta_max = beta_max_tensor.flat<float>()(0);
         const Tensor& gamma = context->input(12);
    -    const float gamma_min = context->input(13).flat<float>()(0);
    -    const float gamma_max = context->input(14).flat<float>()(0);
    +    const auto& gamma_min_tensor = context->input(13);
    +    OP_REQUIRES(context, gamma_min_tensor.NumElements() == 1,
    +                errors::InvalidArgument("gamma_min must have 1 element"));
    +    const float gamma_min = gamma_min_tensor.flat<float>()(0);
    +    const auto& gamma_max_tensor = context->input(14);
    +    OP_REQUIRES(context, gamma_max_tensor.NumElements() == 1,
    +                errors::InvalidArgument("gamma_max must have 1 element"));
    +    const float gamma_max = gamma_max_tensor.flat<float>()(0);
     
         OP_REQUIRES(context, input.dims() == 4,
                     errors::InvalidArgument("input must be 4-dimensional",
    @@ -203,6 +233,33 @@ class QuantizedBatchNormOp : public OpKernel {
         OP_REQUIRES(context, gamma.dims() == 1,
                     errors::InvalidArgument("gamma must be 1-dimensional",
                                             gamma.shape().DebugString()));
    +    OP_REQUIRES(context, mean.NumElements() > 1,
    +                errors::InvalidArgument("Must have at least a mean value",
    +                                        gamma.shape().DebugString()));
    +    OP_REQUIRES(context, mean.NumElements() > 1,
    +                errors::InvalidArgument("Must have at least a mean value"));
    +    const auto last_dim = input.shape().dims() - 1;
    +    OP_REQUIRES(context,
    +                mean.shape().dim_size(0) == input.shape().dim_size(last_dim),
    +                errors::InvalidArgument("Must provide as many means as the "
    +                                        "last dimension of the input tensor: ",
    +                                        mean.shape().DebugString(), " vs. ",
    +                                        input.shape().DebugString()));
    +    OP_REQUIRES(
    +        context, mean.shape().dim_size(0) == var.shape().dim_size(0),
    +        errors::InvalidArgument(
    +            "Mean and variance tensors must have the same shape: ",
    +            mean.shape().DebugString(), " vs. ", var.shape().DebugString()));
    +    OP_REQUIRES(
    +        context, mean.shape().dim_size(0) == beta.shape().dim_size(0),
    +        errors::InvalidArgument(
    +            "Mean and beta tensors must have the same shape: ",
    +            mean.shape().DebugString(), " vs. ", beta.shape().DebugString()));
    +    OP_REQUIRES(
    +        context, mean.shape().dim_size(0) == gamma.shape().dim_size(0),
    +        errors::InvalidArgument(
    +            "Mean and gamma tensors must have the same shape: ",
    +            mean.shape().DebugString(), " vs. ", gamma.shape().DebugString()));
     
         Tensor* output = nullptr;
         OP_REQUIRES_OK(context,
    

Vulnerability mechanics

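Beyond the scalar min/max checks, the second hunk of the patch ties the per-channel tensors together: `mean` must be non-empty and as long as the input's last (depth) dimension, and `var`, `beta`, and `gamma` must match `mean`. A rough pure-Python model of those shape checks (hypothetical function and argument names, not TensorFlow code):

```python
# Rough model of the shape validation added in the second hunk of the
# patch; the function and its argument names are hypothetical.

def validate_batch_norm_shapes(input_shape, mean, var, beta, gamma):
    # Mirrors the patch's `mean.NumElements() > 1` requirement: the
    # kernel later reads per-channel values from `mean`, so an empty
    # tensor must be rejected up front.
    if not len(mean) > 1:
        raise ValueError("Must have at least a mean value")
    # Per-channel tensors must be as long as the depth (last) dimension
    # of the 4-D input.
    depth = input_shape[-1]
    if len(mean) != depth:
        raise ValueError("Must provide as many means as the last "
                         "dimension of the input tensor")
    # var/beta/gamma must all have the same length as mean.
    for name, tensor in (("variance", var), ("beta", beta),
                         ("gamma", gamma)):
        if len(tensor) != len(mean):
            raise ValueError(
                f"Mean and {name} tensors must have the same shape")

# A consistent set of shapes (depth 2, all per-channel tensors of
# length 2) passes silently...
validate_batch_norm_shapes([1, 4, 4, 2], [0.0, 0.0], [1.0, 1.0],
                           [0.0, 0.0], [1.0, 1.0])

# ...while the empty tensors used in the attack are rejected:
try:
    validate_batch_norm_shapes([1, 4, 4, 2], [], [], [], [])
except ValueError as err:
    print(err)  # Must have at least a mean value
```

Without these checks, the per-channel loops in the kernel would index `mean`, `var`, `beta`, and `gamma` by the input's depth dimension, so a shorter (or empty) tensor again means reads past the end of the buffer.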
