Use of uninitialized value in TensorFlow Lite
Description
TensorFlow is an end-to-end open source platform for machine learning. In affected versions, all TFLite operations that use quantization can be made to read uninitialized values. The issue stems from the fact that `quantization.params` is only valid when `quantization.type` is different from `kTfLiteNoQuantization`, but these checks are missing in large parts of the code. We have patched the issue in GitHub commits 537bc7c723439b9194a358f64d871dd326c18887, 4a91f2069f7145aab6ba2d8cfe41be8a110c18a5, and 8933b8a21280696ab119b63263babdb54c298538. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
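The unsafe pattern can be illustrated with a minimal sketch. The enum and struct names mirror the TFLite C API, but the definitions below are simplified stand-ins for illustration, not the real `tensorflow/lite` headers:

```cpp
#include <cassert>

// Simplified stand-ins for the TFLite quantization types (not the real headers).
enum TfLiteQuantizationType { kTfLiteNoQuantization, kTfLiteAffineQuantization };

struct TfLiteAffineQuantization {
  float scale;  // the real struct holds arrays of scales and zero points
};

struct TfLiteQuantization {
  TfLiteQuantizationType type = kTfLiteNoQuantization;
  void* params = nullptr;  // only meaningful when type != kTfLiteNoQuantization
};

// Vulnerable pattern: cast params without checking type first. When the tensor
// carries no quantization, params is null or uninitialized and this crashes
// or reads garbage.
float unsafe_scale(const TfLiteQuantization& q) {
  auto* affine = static_cast<TfLiteAffineQuantization*>(q.params);
  return affine->scale;
}

// Patched pattern: reject tensors whose quantization type carries no params,
// mirroring the TF_LITE_ENSURE guards added in the fix commits.
bool safe_scale(const TfLiteQuantization& q, float* out) {
  if (q.type == kTfLiteNoQuantization) return false;
  *out = static_cast<TfLiteAffineQuantization*>(q.params)->scale;
  return true;
}
```

An attacker-supplied model can set `type` to `kTfLiteNoQuantization` on a tensor a kernel expects to be quantized, so the check must happen before every cast of `quantization.params`.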
Affected packages
Versions sourced from the GitHub Security Advisory.
| Package | Affected versions | Patched versions |
|---|---|---|
| tensorflow (PyPI) | < 2.3.4 | 2.3.4 |
| tensorflow (PyPI) | >= 2.4.0, < 2.4.3 | 2.4.3 |
| tensorflow (PyPI) | >= 2.5.0, < 2.5.1 | 2.5.1 |
| tensorflow-cpu (PyPI) | < 2.3.4 | 2.3.4 |
| tensorflow-cpu (PyPI) | >= 2.4.0, < 2.4.3 | 2.4.3 |
| tensorflow-cpu (PyPI) | >= 2.5.0, < 2.5.1 | 2.5.1 |
| tensorflow-gpu (PyPI) | < 2.3.4 | 2.3.4 |
| tensorflow-gpu (PyPI) | >= 2.4.0, < 2.4.3 | 2.4.3 |
| tensorflow-gpu (PyPI) | >= 2.5.0, < 2.5.1 | 2.5.1 |
Affected products
- Range: >= 2.5.0, < 2.5.1
Patches
8933b8a21280 · Fix a null pointer exception caused by branching on uninitialized data.
1 file changed · +3 −0
tensorflow/lite/kernels/depthwise_conv.cc (+3 −0, modified)

```diff
@@ -176,6 +176,7 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
   if (data_type != kTfLiteFloat32) {
     TF_LITE_ENSURE_EQ(context, filter->quantization.type,
                       kTfLiteAffineQuantization);
+    TF_LITE_ENSURE(context, filter->quantization.type != kTfLiteNoQuantization);
     const auto* affine_quantization =
         reinterpret_cast<TfLiteAffineQuantization*>(
             filter->quantization.params);
@@ -195,6 +196,7 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
   }
   if (is_hybrid) {
+    TF_LITE_ENSURE(context, filter->quantization.type != kTfLiteNoQuantization);
     const auto* affine_quantization =
         reinterpret_cast<TfLiteAffineQuantization*>(
             filter->quantization.params);
@@ -495,6 +497,7 @@ TfLiteStatus EvalHybridPerChannel(TfLiteContext* context, TfLiteNode* node,
   op_params.weights_offset = 0;
   op_params.float_activation_min = output_activation_min;
   op_params.float_activation_max = output_activation_max;
+  TF_LITE_ENSURE(context, filter->quantization.type != kTfLiteNoQuantization);
   const auto* affine_quantization =
       reinterpret_cast<TfLiteAffineQuantization*>(filter->quantization.params);
   if (kernel_type == kReference) {
```
4a91f2069f71 · Fix a null pointer exception caused by branching on uninitialized data.
1 file changed · +9 −0
tensorflow/lite/kernels/unidirectional_sequence_lstm.cc (+9 −0, modified)

```diff
@@ -62,8 +62,12 @@ TfLiteStatus PopulateQuantizedLstmParams8x8_16(
       context, GetOutputSafe(context, node, lstm::full::kOutputTensor,
                              &output_tensor));
+  TF_LITE_ENSURE(context,
+                 cell_state->quantization.type != kTfLiteNoQuantization);
   auto* cell_state_params =
       static_cast<TfLiteAffineQuantization*>(cell_state->quantization.params);
+  TF_LITE_ENSURE(context,
+                 output_tensor->quantization.type != kTfLiteNoQuantization);
   auto* proj_params = static_cast<TfLiteAffineQuantization*>(
       output_tensor->quantization.params);
   if (cell_clip > 0.0) {
@@ -160,6 +164,8 @@ TfLiteStatus PopulateQuantizedLstmParams8x8_16(
     TfLiteTensor* intermediate;
     TF_LITE_ENSURE_OK(context,
                       GetIntermediatesSafe(context, node, i, &intermediate));
+    TF_LITE_ENSURE(context,
+                   intermediate->quantization.type != kTfLiteNoQuantization);
     auto* params = static_cast<TfLiteAffineQuantization*>(
         intermediate->quantization.params);
     intermediate_scale.push_back(params->scale->data[0]);
@@ -174,6 +180,7 @@
   // is ignored.
   TfLiteTensor* hidden;
   TF_LITE_ENSURE_OK(context, GetIntermediatesSafe(context, node, 4, &hidden));
+  TF_LITE_ENSURE(context, hidden->quantization.type != kTfLiteNoQuantization);
   auto* hidden_params =
       static_cast<TfLiteAffineQuantization*>(hidden->quantization.params);
   intermediate_scale.push_back(hidden_params->scale->data[0]);
@@ -760,6 +767,8 @@ TfLiteStatus PopulatePrecomputedZPTimesWeightsWithBias(TfLiteContext* context,
   const TfLiteTensor* intermediate =
       &context->tensors[node->intermediates->data[4]];
+  TF_LITE_ENSURE(context,
+                 intermediate->quantization.type != kTfLiteNoQuantization);
   const auto* params =
       static_cast<TfLiteAffineQuantization*>(intermediate->quantization.params);
   const int32_t hidden_zp = params->zero_point->data[0];
```
537bc7c72343 · Fix a null pointer exception caused by branching on uninitialized data.
1 file changed · +7 −0
tensorflow/lite/kernels/svdf.cc (+7 −0, modified)

```diff
@@ -256,14 +256,21 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
                                                      output_temp_size_array));
   // Calculate effective scales.
+  TF_LITE_ENSURE(context, input->quantization.type != kTfLiteNoQuantization);
   auto* input_params =
       reinterpret_cast<TfLiteAffineQuantization*>(input->quantization.params);
+  TF_LITE_ENSURE(context,
+                 weights_feature->quantization.type != kTfLiteNoQuantization);
   auto* weights_feature_params = reinterpret_cast<TfLiteAffineQuantization*>(
       weights_feature->quantization.params);
+  TF_LITE_ENSURE(context, state->quantization.type != kTfLiteNoQuantization);
   auto* state_params =
       reinterpret_cast<TfLiteAffineQuantization*>(state->quantization.params);
+  TF_LITE_ENSURE(context,
+                 weights_time->quantization.type != kTfLiteNoQuantization);
   auto* weight_time_params = reinterpret_cast<TfLiteAffineQuantization*>(
       weights_time->quantization.params);
+  TF_LITE_ENSURE(context, output->quantization.type != kTfLiteNoQuantization);
   auto* output_params = reinterpret_cast<TfLiteAffineQuantization*>(
       output->quantization.params);
   const double effective_scale_1 = input_params->scale->data[0] *
```
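All three patches use the same guard: `TF_LITE_ENSURE(context, ...)` reports an error through the context and returns `kTfLiteError` from the enclosing function, so the kernel's `Prepare` fails cleanly instead of dereferencing an invalid `params` pointer later. A reduced sketch of that control flow, where the macro, enums, and context struct are simplified stand-ins for the real `tensorflow/lite` definitions:

```cpp
#include <cassert>

enum TfLiteStatus { kTfLiteOk, kTfLiteError };
enum TfLiteQuantizationType { kTfLiteNoQuantization, kTfLiteAffineQuantization };

struct TfLiteContext {
  int reported_errors = 0;  // the real context routes this to an error reporter
};

// Reduced stand-in for TF_LITE_ENSURE: report the failure and bail out of the
// enclosing TfLiteStatus-returning function with kTfLiteError.
#define TF_LITE_ENSURE(context, cond)   \
  do {                                  \
    if (!(cond)) {                      \
      ++(context)->reported_errors;     \
      return kTfLiteError;              \
    }                                   \
  } while (0)

// Sketch of the patched Prepare: reject unquantized tensors up front so the
// later cast of quantization.params is never reached with invalid data.
TfLiteStatus Prepare(TfLiteContext* context, TfLiteQuantizationType type) {
  TF_LITE_ENSURE(context, type != kTfLiteNoQuantization);
  return kTfLiteOk;
}
```

Because the check runs in `Prepare` (and at the top of the affected eval paths), a malicious model is rejected at graph-preparation time rather than crashing mid-inference.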
References
- github.com/advisories/GHSA-4c4g-crqm-xrxw (ADVISORY)
- nvd.nist.gov/vuln/detail/CVE-2021-37682 (ADVISORY)
- github.com/pypa/advisory-database/tree/main/vulns/tensorflow-cpu/PYSEC-2021-595.yaml (WEB)
- github.com/pypa/advisory-database/tree/main/vulns/tensorflow-gpu/PYSEC-2021-793.yaml (WEB)
- github.com/pypa/advisory-database/tree/main/vulns/tensorflow/PYSEC-2021-304.yaml (WEB)
- github.com/tensorflow/tensorflow/blob/460e000de3a83278fb00b61a16d161b1964f15f4/tensorflow/lite/kernels/depthwise_conv.cc (WEB)
- github.com/tensorflow/tensorflow/commit/4a91f2069f7145aab6ba2d8cfe41be8a110c18a5 (WEB)
- github.com/tensorflow/tensorflow/commit/537bc7c723439b9194a358f64d871dd326c18887 (WEB)
- github.com/tensorflow/tensorflow/commit/8933b8a21280696ab119b63263babdb54c298538 (WEB)
- github.com/tensorflow/tensorflow/security/advisories/GHSA-4c4g-crqm-xrxw (ADVISORY)