VYPR
High severity · NVD Advisory · Published May 14, 2021 · Updated Aug 3, 2024

Stack overflow due to looping TFLite subgraph

CVE-2021-29591

Description

TensorFlow is an end-to-end open source platform for machine learning. TFLite graphs must not have loops between nodes. However, this condition was not checked, and an attacker could craft models that would result in an infinite loop during evaluation. In certain cases, the infinite loop would be replaced by a stack overflow due to too many recursive calls. For example, the While implementation (https://github.com/tensorflow/tensorflow/blob/106d8f4fb89335a2c52d7c895b7a7485465ca8d9/tensorflow/lite/kernels/while.cc) could be tricked into a scenario where the body and the condition subgraphs are the same. Evaluating one of the subgraphs means calling the Eval function for the other, which quickly exhausts all stack space. The fix will be included in TensorFlow 2.5.0. We will also cherry-pick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still within the supported range. Please consult our security guide (https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) for more information regarding the security model and how to contact us with issues and questions.

Affected packages

Versions sourced from the GitHub Security Advisory.

Package                 Affected versions      Patched versions
tensorflow (PyPI)       < 2.1.4                2.1.4
tensorflow (PyPI)       >= 2.2.0, < 2.2.3      2.2.3
tensorflow (PyPI)       >= 2.3.0, < 2.3.3      2.3.3
tensorflow (PyPI)       >= 2.4.0, < 2.4.2      2.4.2
tensorflow-cpu (PyPI)   < 2.1.4                2.1.4
tensorflow-cpu (PyPI)   >= 2.2.0, < 2.2.3      2.2.3
tensorflow-cpu (PyPI)   >= 2.3.0, < 2.3.3      2.3.3
tensorflow-cpu (PyPI)   >= 2.4.0, < 2.4.2      2.4.2
tensorflow-gpu (PyPI)   < 2.1.4                2.1.4
tensorflow-gpu (PyPI)   >= 2.2.0, < 2.2.3      2.2.3
tensorflow-gpu (PyPI)   >= 2.3.0, < 2.3.3      2.3.3
tensorflow-gpu (PyPI)   >= 2.4.0, < 2.4.2      2.4.2

Affected products: 1

Patches (2)
c6173f5fe66c

TFLite: Error out when the graph has a recursion.

https://github.com/tensorflow/tensorflow · Yu-Cheng Ling · May 3, 2021 · via GHSA
6 files changed · +70 −2
  • tensorflow/lite/BUILD (+1 −0, modified)
    @@ -643,6 +643,7 @@ cc_test(
             "testdata/test_min_runtime.bin",
             "testdata/test_model.bin",
             "testdata/test_model_broken.bin",
    +        "testdata/unsupported_recursion.bin",
             "testdata/while_op_with_forwarding_input.bin",
         ],
         tags = [
    
  • tensorflow/lite/core/subgraph.cc (+46 −0, modified)
    @@ -156,6 +156,42 @@ const char* GetTFLiteOpName(const TfLiteRegistration& op_reg) {
       return tflite::EnumNamesBuiltinOperator()[op_reg.builtin_code];
     }
     
    +// A utility class to detect if the subgraph is abused:
    +// 1. Detects if recursion exists in the graph (recursion is not currently
    +//    supported).
    +// 2. Detects if the interpreter / subgraph is used in multiple subgraphs.
    +//    Note: It's clearly documented that the interpreter / subgraph are not
    +//    thread-safe. This serves as a check with possible false negatives
    +//    unless we switch to atomic boolean flags.
    +class SubgraphGuard {
    + public:
    +  SubgraphGuard(TfLiteContext* context, bool* is_subgraph_in_use)
    +      : is_subgraph_in_use_(is_subgraph_in_use) {
    +    if (*is_subgraph_in_use_) {
    +      TF_LITE_KERNEL_LOG(
    +          context,
    +          "Subgraph is already in use. Using an interpreter or a subgraph in "
    +          "multiple threads is not supported. Recursion in the graph is not "
    +          "supported.");
    +      status_ = kTfLiteError;
    +    } else {
    +      *is_subgraph_in_use_ = true;
    +    }
    +  }
    +  ~SubgraphGuard() {
    +    // If the original status was OK, recover the boolean flag.
    +    if (status_ == kTfLiteOk) {
    +      *is_subgraph_in_use_ = false;
    +    }
    +  }
    +
    +  TfLiteStatus status() const { return status_; }
    +
    + private:
    +  TfLiteStatus status_ = kTfLiteOk;
    +  bool* is_subgraph_in_use_;
    +};
    +
     }  // namespace
     
     // A trivial implementation of GraphInfo around the Interpreter.
    @@ -655,6 +691,7 @@ TfLiteStatus Subgraph::BytesRequired(TfLiteType type, const int* dims,
     
     TfLiteStatus Subgraph::AllocateTensors() {
       TFLITE_SCOPED_TAGGED_DEFAULT_PROFILE(profiler_.get(), "AllocateTensors");
    +
       if (!consistent_) {
         ReportError("AllocateTensors() called on inconsistent model.");
         return kTfLiteError;
    @@ -678,6 +715,12 @@ TfLiteStatus Subgraph::AllocateTensors() {
         return kTfLiteOk;
       }
     
    +  // Note `AllocateTensors` sometimes calls itself recursively above
    +  // for delegates. Therefore only the logic below needs to be guarded
    +  // by `SubgraphGuard`.
    +  SubgraphGuard guard(&context_, &is_subgraph_in_use_);
    +  TF_LITE_ENSURE_OK(&context_, guard.status());
    +
       next_execution_plan_index_to_prepare_ = 0;
       next_execution_plan_index_to_plan_allocation_ = 0;
       next_original_execution_plan_index_to_prepare_ = 0;
    @@ -1014,6 +1057,9 @@ TfLiteStatus Subgraph::PrepareOpsAndTensors() {
     }
     
     TfLiteStatus Subgraph::Invoke() {
    +  SubgraphGuard guard(&context_, &is_subgraph_in_use_);
    +  TF_LITE_ENSURE_OK(&context_, guard.status());
    +
       if (!consistent_) {
         ReportError("Invoke called on model that is not consistent.");
         return kTfLiteError;
    
  • tensorflow/lite/core/subgraph.h (+4 −0, modified)
    @@ -759,6 +759,10 @@ class Subgraph {
       // Whether memory planner should be instantiated to retain intermediates for
       // debugging.
       bool preserve_all_tensors_ = false;
    +
    +  // Whether the subgraph is currently in use (e.g. running the `Invoke`
    +  // or `AllocateTensors` functions).
    +  bool is_subgraph_in_use_ = false;
     };
     
     }  // namespace tflite
    
  • tensorflow/lite/kernels/while.cc (+0 −2, modified)
    @@ -138,8 +138,6 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
       auto* subgraphs = this_subgraph->GetSubgraphs();
       TF_LITE_ENSURE(context, op_data->cond_subgraph_index < subgraphs->size());
       TF_LITE_ENSURE(context, op_data->body_subgraph_index < subgraphs->size());
    -  TF_LITE_ENSURE(context,
    -                 op_data->cond_subgraph_index != op_data->body_subgraph_index);
     
       Subgraph* cond_subgraph = (*subgraphs)[op_data->cond_subgraph_index].get();
       Subgraph* body_subgraph = (*subgraphs)[op_data->body_subgraph_index].get();
    
  • tensorflow/lite/model_test.cc (+19 −0, modified)
    @@ -600,6 +600,25 @@ TEST(BasicFlatBufferModel, TestHandleMalformedModelReuseTensor) {
       ASSERT_NE(interpreter->AllocateTensors(), kTfLiteOk);
     }
     
    +// Recursion and reentrant calls are not supported in TFLite.
    +// The test ensures it fails gracefully instead of crashing with
    +// a stack overflow.
    +TEST(BasicFlatBufferModel, TestUnsupportedRecursion) {
    +  const auto model_path =
    +      "tensorflow/lite/testdata/unsupported_recursion.bin";
    +
    +  std::unique_ptr<tflite::FlatBufferModel> model =
    +      FlatBufferModel::BuildFromFile(model_path);
    +  ASSERT_NE(model, nullptr);
    +
    +  tflite::ops::builtin::BuiltinOpResolver resolver;
    +  InterpreterBuilder builder(*model, resolver);
    +  std::unique_ptr<Interpreter> interpreter;
    +  ASSERT_EQ(builder(&interpreter), kTfLiteOk);
    +  ASSERT_NE(interpreter, nullptr);
    +  ASSERT_NE(interpreter->AllocateTensors(), kTfLiteOk);
    +}
    +
     // The models here have a buffer index for a tensor pointing to a null buffer.
     // This results in the tensor being interpreted as read-write, but the model
     // assumes the tensor is read-only. As such, `interpreter->Invoke()` would
    
  • tensorflow/lite/testdata/unsupported_recursion.bin (+0 −0, added)
9c1dc920d8ff

Prevent infinite loop/stack overflow in TFLite `while` op.

https://github.com/tensorflow/tensorflow · Mihai Maruseac · Apr 28, 2021 · via GHSA
1 file changed · +2 −0
  • tensorflow/lite/kernels/while.cc (+2 −0, modified)
    @@ -138,6 +138,8 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
       auto* subgraphs = this_subgraph->GetSubgraphs();
       TF_LITE_ENSURE(context, op_data->cond_subgraph_index < subgraphs->size());
       TF_LITE_ENSURE(context, op_data->body_subgraph_index < subgraphs->size());
    +  TF_LITE_ENSURE(context,
    +                 op_data->cond_subgraph_index != op_data->body_subgraph_index);
     
       Subgraph* cond_subgraph = (*subgraphs)[op_data->cond_subgraph_index].get();
       Subgraph* body_subgraph = (*subgraphs)[op_data->body_subgraph_index].get();
    


