VYPR
High severity · NVD Advisory · Published Sep 25, 2020 · Updated Aug 4, 2024

Null pointer dereference in tensorflow-lite

CVE-2020-15209

Description

In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, a crafted TFLite model can force a node to take as input a tensor backed by a nullptr buffer. This can be achieved by changing a buffer index in the flatbuffer serialization so that a read-only tensor is reinterpreted as a read-write one. The runtime assumes read-write buffers are written to before any read, so their data pointers are initialized to nullptr. Because nothing in the crafted model ever writes to the retargeted tensor, reading it dereferences a null pointer. The issue is patched in commit 0b5662bc, and the fix is included in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, and 2.3.1.
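The mechanics above can be sketched as a toy model. This is plain Python with illustrative names, not TFLite's actual API: in a real `.tflite` flatbuffer each tensor carries a `buffer` index into the model-level `buffers` array, and an empty buffer means "read-write, allocated at runtime", so the runtime starts that tensor's data pointer at null.

```python
def resolve_tensor_data(buffers, buffer_index):
    """Return the backing bytes for a tensor, or None for an empty
    (runtime-allocated, i.e. read-write) buffer."""
    data = buffers[buffer_index]
    # Pre-patch behaviour: a read-only tensor whose serialized buffer index
    # is rewritten to point at an empty buffer silently resolves to None,
    # and the kernel later dereferences it.
    return data if data else None

def invoke_with_check(buffers, buffer_index, tensor_bytes):
    """Sketch of the precondition the patch adds in Subgraph::Invoke."""
    data = resolve_tensor_data(buffers, buffer_index)
    if data is None and tensor_bytes > 0:
        # The patched runtime reports an error instead of segfaulting.
        raise ValueError("Input tensor lacks data")
    return data

buffers = [b"", b"\x01\x02\x03\x04"]  # buffer 0 is the empty runtime buffer

# Honest model: the tensor points at buffer 1 (read-only weight data).
assert invoke_with_check(buffers, 1, tensor_bytes=4) == b"\x01\x02\x03\x04"

# Crafted model: the same tensor's buffer index rewritten to 0.
try:
    invoke_with_check(buffers, 0, tensor_bytes=4)
except ValueError as e:
    print(e)  # error reported instead of a null pointer dereference
```

The `tensor_bytes > 0` condition mirrors the patch: a zero-byte tensor with no buffer is legitimate, so only non-empty tensors without backing data are rejected.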

Affected packages

Versions sourced from the GitHub Security Advisory.

Package         Source  Affected versions    Patched version
tensorflow      PyPI    < 1.15.4             1.15.4
tensorflow      PyPI    >= 2.0.0, < 2.0.3    2.0.3
tensorflow      PyPI    >= 2.1.0, < 2.1.2    2.1.2
tensorflow      PyPI    >= 2.2.0, < 2.2.1    2.2.1
tensorflow      PyPI    >= 2.3.0, < 2.3.1    2.3.1
tensorflow-cpu  PyPI    < 1.15.4             1.15.4
tensorflow-cpu  PyPI    >= 2.0.0, < 2.0.3    2.0.3
tensorflow-cpu  PyPI    >= 2.1.0, < 2.1.2    2.1.2
tensorflow-cpu  PyPI    >= 2.2.0, < 2.2.1    2.2.1
tensorflow-cpu  PyPI    >= 2.3.0, < 2.3.1    2.3.1
tensorflow-gpu  PyPI    < 1.15.4             1.15.4
tensorflow-gpu  PyPI    >= 2.0.0, < 2.0.3    2.0.3
tensorflow-gpu  PyPI    >= 2.1.0, < 2.1.2    2.1.2
tensorflow-gpu  PyPI    >= 2.2.0, < 2.2.1    2.2.1
tensorflow-gpu  PyPI    >= 2.3.0, < 2.3.1    2.3.1
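Given the ranges above, whether an installed version needs an upgrade can be checked with a few lines of stdlib Python. The range data is transcribed from the table; the helper names are illustrative, and the sketch assumes plain numeric version strings (pre-releases like `2.3.0rc0` are out of scope).

```python
AFFECTED_RANGES = [
    # (min inclusive or None, max exclusive, patched release)
    (None,    "1.15.4", "1.15.4"),
    ("2.0.0", "2.0.3",  "2.0.3"),
    ("2.1.0", "2.1.2",  "2.1.2"),
    ("2.2.0", "2.2.1",  "2.2.1"),
    ("2.3.0", "2.3.1",  "2.3.1"),
]

def _key(version):
    # Naive numeric comparison key; real tooling should use packaging.version.
    return tuple(int(part) for part in version.split("."))

def patched_version_for(version):
    """Return the patched release to upgrade to, or None if not affected."""
    v = _key(version)
    for lo, hi, patched in AFFECTED_RANGES:
        if (lo is None or v >= _key(lo)) and v < _key(hi):
            return patched
    return None

print(patched_version_for("2.2.0"))  # -> 2.2.1
print(patched_version_for("2.3.1"))  # -> None (already patched)
```

The same ranges apply to `tensorflow`, `tensorflow-cpu`, and `tensorflow-gpu`, so one table serves all three packages.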

Patches

Commit 0b5662bc2be1

[tflite] Ensure input tensors don't have `nullptr` buffers.

https://github.com/tensorflow/tensorflow · Mihai Maruseac · Sep 18, 2020 · via GHSA
4 files changed · +58 −18
  • tensorflow/lite/BUILD · +2 −0 · modified
    @@ -242,6 +242,7 @@ cc_library(
             ":arena_planner",
             ":external_cpu_backend_context",
             ":graph_info",
    +        ":kernel_api",
             ":memory_planner",
             ":minimal_logging",
             ":shared_library",
    @@ -469,6 +470,7 @@ cc_test(
             "testdata/add_shared_tensors.bin",
             "testdata/empty_model.bin",
             "testdata/multi_add_flex.bin",
    +        "testdata/segment_sum_invalid_buffer.bin",
             "testdata/sparse_tensor.bin",
             "testdata/test_min_runtime.bin",
             "testdata/test_model.bin",
    
  • tensorflow/lite/core/subgraph.cc · +14 −0 · modified
    @@ -19,6 +19,7 @@ limitations under the License.
     #include <cstdint>
     
     #include "tensorflow/lite/arena_planner.h"
    +#include "tensorflow/lite/builtin_ops.h"
     #include "tensorflow/lite/c/common.h"
     #include "tensorflow/lite/context_util.h"
     #include "tensorflow/lite/core/api/tensor_utils.h"
    @@ -1030,6 +1031,19 @@ TfLiteStatus Subgraph::Invoke() {
               tensor->data_is_stale) {
             TF_LITE_ENSURE_STATUS(EnsureTensorDataIsReadable(tensor_index));
           }
    +      if (tensor->data.raw == nullptr && tensor->bytes > 0) {
    +        if (registration.builtin_code == kTfLiteBuiltinReshape && i == 1) {
    +          // In general, having a tensor here with no buffer will be an error.
    +          // However, for the reshape operator, the second input tensor is only
    +          // used for the shape, not for the data. Thus, null buffer is ok.
    +          continue;
    +        } else {
    +          // In all other cases, we need to return an error as otherwise we will
    +          // trigger a null pointer dereference (likely).
    +          ReportError("Input tensor %d lacks data", tensor_index);
    +          return kTfLiteError;
    +        }
    +      }
         }
     
         if (check_cancelled_func_ != nullptr &&
    
  • tensorflow/lite/model_test.cc · +42 −18 · modified
    @@ -438,24 +438,48 @@ TEST(BasicFlatBufferModel, TestParseModelWithSparseTensor) {
     }
     
     // TODO(b/150072943): Add malformed model with sparse tensor tests.
    -TEST(BasicFlatBufferModel, TestHandleMalformedModel) {
    -  const auto model_paths = {
    -      // These models use the same tensor as both input and ouput of a node
    -      "tensorflow/lite/testdata/add_shared_tensors.bin",
    -  };
    -
    -  for (const auto& model_path : model_paths) {
    -    std::unique_ptr<tflite::FlatBufferModel> model =
    -        FlatBufferModel::BuildFromFile(model_path);
    -    ASSERT_NE(model, nullptr);
    -
    -    tflite::ops::builtin::BuiltinOpResolver resolver;
    -    InterpreterBuilder builder(*model, resolver);
    -    std::unique_ptr<Interpreter> interpreter;
    -    ASSERT_EQ(builder(&interpreter), kTfLiteOk);
    -    ASSERT_NE(interpreter, nullptr);
    -    ASSERT_NE(interpreter->AllocateTensors(), kTfLiteOk);
    -  }
    +
    +// The models here have at least a node that uses the same tensor as input and
    +// output. This causes segfaults when trying to eval the operator, hence we try
    +// to prevent this scenario. The earliest place we can check this is in
    +// `AllocateTensors`, hence the test checks that `interpreter->AllocateTensors`
    +// detects these bad models.
    +TEST(BasicFlatBufferModel, TestHandleMalformedModelReuseTensor) {
    +  const auto model_path =
    +      "tensorflow/lite/testdata/add_shared_tensors.bin";
    +
    +  std::unique_ptr<tflite::FlatBufferModel> model =
    +      FlatBufferModel::BuildFromFile(model_path);
    +  ASSERT_NE(model, nullptr);
    +
    +  tflite::ops::builtin::BuiltinOpResolver resolver;
    +  InterpreterBuilder builder(*model, resolver);
    +  std::unique_ptr<Interpreter> interpreter;
    +  ASSERT_EQ(builder(&interpreter), kTfLiteOk);
    +  ASSERT_NE(interpreter, nullptr);
    +  ASSERT_NE(interpreter->AllocateTensors(), kTfLiteOk);
    +}
    +
    +// The models here have a buffer index for a tensor pointing to a null buffer.
    +// This results in the tensor being interpreted as read-write, but the model
    +// assumes the tensor is read-only. As such, `interpreter->Invoke()` would
    +// segfault if no precondition check is added. The test checks that the
    +// precondition check exists.
    +TEST(BasicFlatBufferModel, TestHandleMalformedModelInvalidBuffer) {
    +  const auto model_path =
    +      "tensorflow/lite/testdata/segment_sum_invalid_buffer.bin";
    +
    +  std::unique_ptr<tflite::FlatBufferModel> model =
    +      FlatBufferModel::BuildFromFile(model_path);
    +  ASSERT_NE(model, nullptr);
    +
    +  tflite::ops::builtin::BuiltinOpResolver resolver;
    +  InterpreterBuilder builder(*model, resolver);
    +  std::unique_ptr<Interpreter> interpreter;
    +  ASSERT_EQ(builder(&interpreter), kTfLiteOk);
    +  ASSERT_NE(interpreter, nullptr);
    +  ASSERT_EQ(interpreter->AllocateTensors(), kTfLiteOk);
    +  ASSERT_NE(interpreter->Invoke(), kTfLiteOk);
     }
     
     // TODO(aselle): Add tests for serialization of builtin op data types.
    
  • tensorflow/lite/testdata/segment_sum_invalid_buffer.bin · +0 −0 · added

