VYPR
Moderate severity · NVD Advisory · Published May 20, 2022 · Updated Apr 22, 2025

Core dump when loading TFLite models with quantization in TensorFlow

CVE-2022-29212

Description

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, certain TFLite models created with the TFLite model converter would crash when loaded in the TFLite interpreter. The root cause is that during quantization the scale of values could be greater than 1, but the code always assumed sub-unit scaling. Because the code called QuantizeMultiplierSmallerThanOneExp unconditionally, the TFLITE_CHECK_LT assertion would trigger and abort the process. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.
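For context, TFLite-style helpers decompose a real-valued multiplier m into a 32-bit fixed-point value and a power-of-two exponent, m ≈ (quantized_multiplier / 2^31) · 2^left_shift. The following is a minimal stand-alone sketch of that decomposition (modeled on the helpers named in this advisory, not the actual TFLite source; the `*Sketch` names are hypothetical), showing why a quantization scale of 1 or more breaks the sub-unit-only variant:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Sketch of the fixed-point multiplier decomposition:
// m ≈ (quantized_multiplier / 2^31) * 2^left_shift, with the
// significand normalized into [2^30, 2^31).
void QuantizeMultiplierSketch(double m, int32_t* quantized_multiplier,
                              int* left_shift) {
  if (m == 0.0) {
    *quantized_multiplier = 0;
    *left_shift = 0;
    return;
  }
  // frexp: m = fraction * 2^left_shift, with 0.5 <= fraction < 1.
  const double fraction = std::frexp(m, left_shift);
  int64_t q = static_cast<int64_t>(std::round(fraction * (1LL << 31)));
  if (q == (1LL << 31)) {  // rounding overflowed the Q31 range
    q /= 2;
    ++(*left_shift);
  }
  *quantized_multiplier = static_cast<int32_t>(q);
}

// Sub-unit-only variant: aborts (like TFLITE_CHECK_LT) when m >= 1.
// This is the precondition that crashed the interpreter on models
// whose quantization scale exceeded 1.
void QuantizeMultiplierSmallerThanOneExpSketch(double m,
                                               int32_t* quantized_multiplier,
                                               int* left_shift) {
  assert(m < 1.0);  // the failing check
  QuantizeMultiplierSketch(m, quantized_multiplier, left_shift);
}
```

A tensor with scale 2.0 decomposes to quantized_multiplier = 2^30 with left_shift = 2; feeding such a scale to the sub-unit-only variant violates its precondition and aborts, which is the crash described above.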

Affected packages

Versions sourced from the GitHub Security Advisory.

| Package (PyPI) | Affected versions | Patched version |
| -------------- | ----------------- | --------------- |
| tensorflow     | < 2.6.4           | 2.6.4 |
| tensorflow     | >= 2.7.0, < 2.7.2 | 2.7.2 |
| tensorflow     | >= 2.8.0, < 2.8.1 | 2.8.1 |
| tensorflow-cpu | < 2.6.4           | 2.6.4 |
| tensorflow-cpu | >= 2.7.0, < 2.7.2 | 2.7.2 |
| tensorflow-cpu | >= 2.8.0, < 2.8.1 | 2.8.1 |
| tensorflow-gpu | < 2.6.4           | 2.6.4 |
| tensorflow-gpu | >= 2.7.0, < 2.7.2 | 2.7.2 |
| tensorflow-gpu | >= 2.8.0, < 2.8.1 | 2.8.1 |


Patches

a989426ee134

Improve to cover scale value greater than one

https://github.com/tensorflow/tensorflow · Songyi Han · Mar 7, 2022 · via GHSA
2 files changed · +34 −5
  • tensorflow/lite/kernels/comparisons.cc +14 −5 modified
    @@ -81,6 +81,17 @@ TfLiteStatus ComparisonPrepareStringAllowed(TfLiteContext* context,
       return ComparisonPrepareCommon(context, node, true);
     }
     
    +void QuantizeMultiplier(double double_multiplier, int32_t* quantized_multiplier,
    +                        int* left_shift) {
    +  if (double_multiplier < 1.0) {
    +    QuantizeMultiplierSmallerThanOneExp(double_multiplier, quantized_multiplier,
    +                                        left_shift);
    +  } else {
    +    QuantizeMultiplierGreaterThanOne(double_multiplier, quantized_multiplier,
    +                                     left_shift);
    +  }
    +}
    +
     template <typename input_dtype, reference_ops::ComparisonFn<int32> opname>
     void ComparisonQuantized(const TfLiteTensor* input1, const TfLiteTensor* input2,
                              TfLiteTensor* output, bool requires_broadcast) {
    @@ -90,13 +101,11 @@ void ComparisonQuantized(const TfLiteTensor* input1, const TfLiteTensor* input2,
         const int left_shift = 8;
     
         int32 input1_multiplier;
    -    int input1_shift;
    -    QuantizeMultiplierSmallerThanOneExp(input1->params.scale,
    -                                        &input1_multiplier, &input1_shift);
         int32 input2_multiplier;
    +    int input1_shift;
         int input2_shift;
    -    QuantizeMultiplierSmallerThanOneExp(input2->params.scale,
    -                                        &input2_multiplier, &input2_shift);
    +    QuantizeMultiplier(input1->params.scale, &input1_multiplier, &input1_shift);
    +    QuantizeMultiplier(input2->params.scale, &input2_multiplier, &input2_shift);
     
         ComparisonParams op_params;
         op_params.left_shift = left_shift;
    
  • tensorflow/lite/kernels/comparisons_test.cc +20 −0 modified
    @@ -653,6 +653,26 @@ TEST(ComparisonsTest, QuantizedInt8GreaterWithBroadcast) {
       }
     }
     
    +TEST(ComparisonsTest,
    +     QuantizedInt8GreaterWithBroadcastMultiplierGreaterThanOne) {
    +  const float kMin = -127.f;
    +  const float kMax = 127.f;
    +  std::vector<std::vector<int>> test_shapes = {
    +      {6}, {2, 3}, {2, 1, 3}, {1, 3, 1, 2}};
    +  for (int i = 0; i < test_shapes.size(); ++i) {
    +    ComparisonOpModel model({TensorType_INT8, test_shapes[i], kMin, kMax},
    +                            {TensorType_INT8, {}, kMin, kMax}, TensorType_INT8,
    +                            BuiltinOperator_GREATER);
    +    model.QuantizeAndPopulate<int8_t>(model.input1(),
    +                                      {572, -2, -71, 8, 11, 20});
    +    model.QuantizeAndPopulate<int8_t>(model.input2(), {8});
    +    model.Invoke();
    +    EXPECT_THAT(model.GetOutput(),
    +                ElementsAre(true, false, false, false, true, true))
    +        << "With shape number " << i;
    +  }
    +}
    +
     TEST(ComparisonsTest, QuantizedUInt8GreaterEqualWithBroadcast) {
       const float kMin = -1.f;
       const float kMax = 128.f;
    

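The guarded dispatch introduced by the patch can be exercised end to end with a small stand-alone sketch. The helper names below come from the diff, but the bodies are simplified models of their contracts, not the TFLite implementations:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Shared decomposition (sketch): m ≈ (q / 2^31) * 2^shift, assuming m > 0.
static void Decompose(double m, int32_t* q, int* shift) {
  const double fraction = std::frexp(m, shift);  // m = fraction * 2^shift
  int64_t q64 = static_cast<int64_t>(std::round(fraction * (1LL << 31)));
  if (q64 == (1LL << 31)) {  // rounding overflowed the Q31 range
    q64 /= 2;
    ++(*shift);
  }
  *q = static_cast<int32_t>(q64);
}

// Sub-unit path: precondition m < 1, resulting shift is <= 0.
static void QuantizeMultiplierSmallerThanOneExp(double m, int32_t* q,
                                                int* shift) {
  assert(m > 0.0 && m < 1.0);  // the check that aborted on scales >= 1
  Decompose(m, q, shift);
}

// Super-unit path: precondition m >= 1, resulting shift is >= 1.
static void QuantizeMultiplierGreaterThanOne(double m, int32_t* q,
                                             int* shift) {
  assert(m >= 1.0);
  Decompose(m, q, shift);
}

// The patched wrapper from comparisons.cc: pick the path by the size of
// the multiplier instead of assuming sub-unit scaling.
static void QuantizeMultiplier(double m, int32_t* q, int* shift) {
  if (m < 1.0) {
    QuantizeMultiplierSmallerThanOneExp(m, q, shift);
  } else {
    QuantizeMultiplierGreaterThanOne(m, q, shift);
  }
}
```

Calling QuantizeMultiplier with a scale of 2.5 now succeeds (q = 1342177280, shift = 2), where the unpatched code path would have hit the sub-unit precondition and aborted.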

