Moderate severity · NVD Advisory · Published Oct 9, 2024 · Updated Oct 9, 2024

Wasmtime runtime crash when combining tail calls with trapping imports

CVE-2024-47763

Description

Wasmtime is an open source runtime for WebAssembly. Wasmtime's implementation of WebAssembly tail calls, combined with stack traces, can result in a runtime crash for certain WebAssembly modules. The crash may be undefined behavior if Wasmtime was compiled with Rust 1.80 or earlier; when Wasmtime is compiled with Rust 1.81 or later, it is a deterministic process abort.

WebAssembly tail calls are a proposal that relatively recently reached stage 4 in the standardization process. Wasmtime first enabled support for tail calls by default in Wasmtime 21.0.0, although that release contained a bug where they were only on by default for some configurations. In Wasmtime 22.0.0, tail calls were enabled by default for all configurations.

The specific crash happens when an exported function in a WebAssembly module (or component) performs a return_call (or return_call_indirect or return_call_ref) to an imported host function which captures a stack trace (for example, the host function raises a trap). The stack-walking code previously assumed there was always at least one WebAssembly frame on the stack, but with tail calls that is no longer true: with the tail-call proposal it is possible for an entry trampoline to appear to have directly called the exit trampoline. This situation triggers an internal assertion in the stack-walking code, which raises a Rust panic!(). When Wasmtime is compiled with Rust 1.80 or earlier, this means an extern "C" function in Rust is raising a panic!(), which is technically undefined behavior and typically manifests as a process abort when the unwinder fails to unwind Cranelift-generated frames. When Wasmtime is compiled with Rust 1.81 or later, the panic becomes a deterministic process abort.

Overall, this issue is a denial-of-service vector: a malicious WebAssembly module or component can cause the host to crash. There is no impact at this time other than availability of a service, as the result is always a crash and no more. This issue was discovered by routine fuzzing performed by the Wasmtime project via Google's OSS-Fuzz infrastructure. We have no evidence that it has ever been exploited by an attacker in the wild.

All versions of Wasmtime which have tail calls enabled by default have been patched:

* 21.0.x - patched in 21.0.2
* 22.0.x - patched in 22.0.1
* 23.0.x - patched in 23.0.3
* 24.0.x - patched in 24.0.1
* 25.0.x - patched in 25.0.2

Wasmtime versions from 12.0.x (the first release with experimental tail call support) to 20.0.x (the last release with tail calls off by default) support tail calls, but the support is disabled by default. These versions are not affected in their default configurations; however, users who explicitly enabled tail call support will need to either disable tail call support or upgrade to a patched version of Wasmtime. The main workaround for this issue is to disable support for tail calls in Wasmtime, for example with Config::wasm_tail_call(false). Users are otherwise encouraged to upgrade to patched versions.
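The triggering shape described above can be illustrated with a minimal module in the WebAssembly text format. This is a hedged sketch: the import names ("host" "trap") and the host-side behavior are hypothetical, and the only essential ingredients are an exported function that tail-calls an imported host function which captures a stack trace.

```wat
(module
  ;; Hypothetical host import that raises a trap, which causes the
  ;; host to capture a stack trace.
  (import "host" "trap" (func $trap))

  ;; Exported function that tail-calls the import. With tail calls,
  ;; the entry trampoline appears to have directly invoked the exit
  ;; trampoline, leaving no WebAssembly frame on the stack for the
  ;; stack walker to find.
  (func (export "boom")
    (return_call $trap))
)
```

On an unpatched Wasmtime with tail calls enabled, instantiating such a module and invoking the export would be expected to abort the host process when the imported function traps.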

Affected packages

Versions sourced from the GitHub Security Advisory.

Package                Affected versions      Patched versions
wasmtime (crates.io)   >= 12.0.0, < 21.0.2    21.0.2
wasmtime (crates.io)   >= 22.0.0, < 22.0.1    22.0.1
wasmtime (crates.io)   >= 23.0.0, < 23.0.3    23.0.3
wasmtime (crates.io)   >= 24.0.0, < 24.0.1    24.0.1
wasmtime (crates.io)   >= 25.0.0, < 25.0.2    25.0.2
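Downstream projects can make sure they pick up a patched release by tightening their dependency requirement. A hypothetical manifest fragment for a project on the 24.0.x series (the exact requirement string is illustrative; pick the range matching your release series):

```toml
[dependencies]
# Require at least the patched 24.0.1 release while staying on 24.0.x.
wasmtime = ">=24.0.1, <25"
```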


Patches

Commit 0ebe54d05f0e

Merge commit from fork

https://github.com/bytecodealliance/wasmtime · Nick Fitzgerald · Oct 9, 2024 · via GHSA
2 files changed · +417 −27
  • crates/wasmtime/src/runtime/type_registry.rs · +181 −27 · modified
    @@ -17,14 +17,15 @@ use core::{
         hash::{Hash, Hasher},
         ops::Range,
         sync::atomic::{
    -        AtomicUsize,
    -        Ordering::{AcqRel, Acquire},
    +        AtomicBool, AtomicUsize,
    +        Ordering::{AcqRel, Acquire, Release},
         },
     };
     use wasmtime_environ::{
    -    iter_entity_range, packed_option::PackedOption, EngineOrModuleTypeIndex, GcLayout,
    -    ModuleInternedTypeIndex, ModuleTypes, PrimaryMap, SecondaryMap, TypeTrace, VMSharedTypeIndex,
    -    WasmRecGroup, WasmSubType,
    +    iter_entity_range,
    +    packed_option::{PackedOption, ReservedValue},
    +    EngineOrModuleTypeIndex, GcLayout, ModuleInternedTypeIndex, ModuleTypes, PrimaryMap,
    +    SecondaryMap, TypeTrace, VMSharedTypeIndex, WasmRecGroup, WasmSubType,
     };
     use wasmtime_slab::{Id as SlabId, Slab};
     
    @@ -180,12 +181,15 @@ impl Drop for TypeCollection {
     
     #[inline]
     fn shared_type_index_to_slab_id(index: VMSharedTypeIndex) -> SlabId {
    +    assert!(!index.is_reserved_value());
         SlabId::from_raw(index.bits())
     }
     
     #[inline]
     fn slab_id_to_shared_type_index(id: SlabId) -> VMSharedTypeIndex {
    -    VMSharedTypeIndex::new(id.into_raw())
    +    let index = VMSharedTypeIndex::new(id.into_raw());
    +    assert!(!index.is_reserved_value());
    +    index
     }
     
     /// A Wasm type that has been registered in the engine's `TypeRegistry`.
    @@ -417,8 +421,25 @@ impl Debug for RecGroupEntry {
     struct RecGroupEntryInner {
         /// The Wasm rec group, canonicalized for hash consing.
         hash_consing_key: WasmRecGroup,
    +
    +    /// The shared type indices for each type in this rec group.
         shared_type_indices: Box<[VMSharedTypeIndex]>,
    +
    +    /// The number of times that this entry has been registered in the
    +    /// `TypeRegistryInner`.
    +    ///
    +    /// This is an atomic counter so that cloning a `RegisteredType`, and
    +    /// temporarily keeping a type registered, doesn't require locking the full
    +    /// registry.
         registrations: AtomicUsize,
    +
    +    /// Whether this entry has already been unregistered from the
    +    /// `TypeRegistryInner`.
    +    ///
    +    /// This flag exists to detect and avoid double-unregistration bugs that
    +    /// could otherwise occur in rare cases. See the comments in
    +    /// `TypeRegistryInner::unregister_type` for details.
    +    unregistered: AtomicBool,
     }
     
     impl PartialEq for RecGroupEntry {
    @@ -611,6 +632,7 @@ impl TypeRegistryInner {
     
             // If we've already registered this rec group before, reuse it.
             if let Some(entry) = self.hash_consing_map.get(&hash_consing_key) {
    +            assert_eq!(entry.0.unregistered.load(Acquire), false);
                 entry.incref(
                     "hash consed to already-registered type in `TypeRegistryInner::register_rec_group`",
                 );
    @@ -622,8 +644,9 @@ impl TypeRegistryInner {
             // while this rec group is still alive.
             hash_consing_key
                 .trace_engine_indices::<_, ()>(&mut |index| {
    -                let entry = &self.type_to_rec_group[index].as_ref().unwrap();
    -                entry.incref(
    +                let other_entry = &self.type_to_rec_group[index].as_ref().unwrap();
    +                assert_eq!(other_entry.0.unregistered.load(Acquire), false);
    +                other_entry.incref(
                         "new cross-group type reference to existing type in `register_rec_group`",
                     );
                     Ok(())
    @@ -645,17 +668,32 @@ impl TypeRegistryInner {
                             map[idx]
                         } else {
                             let rec_group_offset = idx.as_u32() - module_rec_group_start.as_u32();
    -                        VMSharedTypeIndex::from_u32(engine_rec_group_start + rec_group_offset)
    +                        let index =
    +                            VMSharedTypeIndex::from_u32(engine_rec_group_start + rec_group_offset);
    +                        assert!(!index.is_reserved_value());
    +                        index
                         }
                     });
                     self.insert_one_type_from_rec_group(gc_runtime, module_index, ty)
                 })
                 .collect();
     
    +        debug_assert_eq!(
    +            shared_type_indices.len(),
    +            shared_type_indices
    +                .iter()
    +                .copied()
    +                .inspect(|ty| assert!(!ty.is_reserved_value()))
    +                .collect::<crate::hash_set::HashSet<_>>()
    +                .len(),
    +            "should not have any duplicate type indices",
    +        );
    +
             let entry = RecGroupEntry(Arc::new(RecGroupEntryInner {
                 hash_consing_key,
                 shared_type_indices,
                 registrations: AtomicUsize::new(1),
    +            unregistered: AtomicBool::new(false),
             }));
             log::trace!("create new entry {entry:?} (registrations -> 1)");
     
    @@ -845,29 +883,133 @@ impl TypeRegistryInner {
         /// zero remaining registrations.
         fn unregister_entry(&mut self, entry: RecGroupEntry) {
             debug_assert!(self.drop_stack.is_empty());
    +
    +        // There are two races to guard against before we can unregister the
    +        // entry, even though it was on the drop stack:
    +        //
    +        // 1. Although an entry has to reach zero registrations before it is
    +        //    enqueued in the drop stack, we need to double check whether the
    +        //    entry is *still* at zero registrations. This is because someone
    +        //    else can resurrect the entry in between when the
    +        //    zero-registrations count was first observed and when we actually
    +        //    acquire the lock to unregister it. In this example, we have
    +        //    threads A and B, an existing rec group entry E, and a rec group
    +        //    entry E' that is a duplicate of E:
    +        //
    +        //    Thread A                        | Thread B
    +        //    --------------------------------+-----------------------------
    +        //    acquire(type registry lock)     |
    +        //                                    |
    +        //                                    | decref(E) --> 0
    +        //                                    |
    +        //                                    | block_on(type registry lock)
    +        //                                    |
    +        //    register(E') == incref(E) --> 1 |
    +        //                                    |
    +        //    release(type registry lock)     |
    +        //                                    |
    +        //                                    | acquire(type registry lock)
    +        //                                    |
    +        //                                    | unregister(E)         !!!!!!
    +        //
    +        //    If we aren't careful, we can unregister a type while it is still
    +        //    in use!
    +        //
    +        //    The fix in this case is that we skip unregistering the entry if
    +        //    its reference count is non-zero, since that means it was
    +        //    concurrently resurrected and is now in use again.
    +        //
    +        // 2. In a slightly more convoluted version of (1), where an entry is
    +        //    resurrected but then dropped *again*, someone might attempt to
    +        //    unregister an entry a second time:
    +        //
    +        //    Thread A                        | Thread B
    +        //    --------------------------------|-----------------------------
    +        //    acquire(type registry lock)     |
    +        //                                    |
    +        //                                    | decref(E) --> 0
    +        //                                    |
    +        //                                    | block_on(type registry lock)
    +        //                                    |
    +        //    register(E') == incref(E) --> 1 |
    +        //                                    |
    +        //    release(type registry lock)     |
    +        //                                    |
    +        //    decref(E) --> 0                 |
    +        //                                    |
    +        //    acquire(type registry lock)     |
    +        //                                    |
    +        //    unregister(E)                   |
    +        //                                    |
    +        //    release(type registry lock)     |
    +        //                                    |
    +        //                                    | acquire(type registry lock)
    +        //                                    |
    +        //                                    | unregister(E)         !!!!!!
    +        //
    +        //    If we aren't careful, we can unregister a type twice, which leads
    +        //    to panics and registry corruption!
    +        //
    +        //    To detect this scenario and avoid the double-unregistration bug,
    +        //    we maintain an `unregistered` flag on entries. We set this flag
    +        //    once an entry is unregistered and therefore, even if it is
    +        //    enqueued in the drop stack multiple times, we only actually
    +        //    unregister the entry the first time.
    +        //
    +        // A final note: we don't need to worry about any concurrent
    +        // modifications during the middle of this function's execution, only
    +        // between (a) when we first observed a zero-registrations count and
    +        // decided to unregister the type, and (b) when we acquired the type
    +        // registry's lock so that we could perform that unregistration. This is
    +        // because this method has exclusive access to `&mut self` -- that is,
    +        // we have a write lock on the whole type registry -- and therefore no
    +        // one else can create new references to this zero-registration entry
    +        // and bring it back to life (which would require finding it in
    +        // `self.hash_consing_map`, which no one else has access to, because we
    +        // now have an exclusive lock on `self`).
    +
    +        // Handle scenario (1) from above.
    +        let registrations = entry.0.registrations.load(Acquire);
    +        if registrations != 0 {
    +            log::trace!(
    +                "{entry:?} was concurrently resurrected and no longer has \
    +                 zero registrations (registrations -> {registrations})",
    +            );
    +            assert_eq!(entry.0.unregistered.load(Acquire), false);
    +            return;
    +        }
    +
    +        // Handle scenario (2) from above.
    +        if entry.0.unregistered.load(Acquire) {
    +            log::trace!(
    +                "{entry:?} was concurrently resurrected, dropped again, \
    +                 and already unregistered"
    +            );
    +            return;
    +        }
    +
    +        // Okay, we are really going to unregister this entry. Enqueue it on the
    +        // drop stack.
             self.drop_stack.push(entry);
     
    +        // Keep unregistering entries until the drop stack is empty. This is
    +        // logically a recursive process where if we unregister a type that was
    +        // the only thing keeping another type alive, we then recursively
    +        // unregister that other type as well. However, we use this explicit
    +        // drop stack to avoid recursion and the potential stack overflows that
    +        // recursion implies.
             while let Some(entry) = self.drop_stack.pop() {
                 log::trace!("Start unregistering {entry:?}");
     
    -            // We need to double check whether the entry is still at zero
    -            // registrations: Between the time that we observed a zero and
    -            // acquired the lock to call this function, another thread could
    -            // have registered the type and found the 0-registrations entry in
    -            // `self.map` and incremented its count.
    -            //
    -            // We don't need to worry about any concurrent increments during
    -            // this function's invocation after we check for zero because we
    -            // have exclusive access to `&mut self` and therefore no one can
    -            // create a new reference to this entry and bring it back to life.
    -            let registrations = entry.0.registrations.load(Acquire);
    -            if registrations != 0 {
    -                log::trace!(
    -                    "{entry:?} was concurrently resurrected and no longer has \
    -                     zero registrations (registrations -> {registrations})",
    -                );
    -                continue;
    -            }
    +            // All entries on the drop stack should *really* be ready for
    +            // unregistration, since no one can resurrect entries once we've
    +            // locked the registry.
    +            assert_eq!(entry.0.registrations.load(Acquire), 0);
    +            assert_eq!(entry.0.unregistered.load(Acquire), false);
    +
    +            // We are taking responsibility for unregistering this entry, so
    +            // prevent anyone else from attempting to do it again.
    +            entry.0.unregistered.store(true, Release);
     
                 // Decrement any other types that this type was shallowly
                 // (i.e. non-transitively) referencing and keeping alive. If this
    @@ -901,6 +1043,18 @@ impl TypeRegistryInner {
                 // map. Additionally, stop holding a strong reference from each
                 // function type in the rec group to that function type's trampoline
                 // type.
    +            debug_assert_eq!(
    +                entry.0.shared_type_indices.len(),
    +                entry
    +                    .0
    +                    .shared_type_indices
    +                    .iter()
    +                    .copied()
    +                    .inspect(|ty| assert!(!ty.is_reserved_value()))
    +                    .collect::<crate::hash_set::HashSet<_>>()
    +                    .len(),
    +                "should not have any duplicate type indices",
    +            );
                 for ty in entry.0.shared_type_indices.iter().copied() {
                     log::trace!("removing {ty:?} from registry");
     
    
  • tests/all/module.rs · +236 −0 · modified
    @@ -1,4 +1,8 @@
    +use anyhow::Context;
    +use std::sync::atomic::{AtomicBool, Ordering::Relaxed};
    +use std::sync::Arc;
     use wasmtime::*;
    +use wasmtime_test_macros::wasmtime_test;
     
     #[test]
     fn checks_incompatible_target() -> Result<()> {
    @@ -296,3 +300,235 @@ fn cross_engine_module_exports() -> Result<()> {
         assert!(instance.get_module_export(&mut store, &export).is_none());
         Ok(())
     }
    +
    +/// Smoke test for registering and unregistering modules (and their rec group
    +/// entries) concurrently.
    +#[wasmtime_test(wasm_features(gc, function_references))]
    +fn concurrent_type_registry_modifications(config: &mut Config) -> Result<()> {
    +    let _ = env_logger::try_init();
    +
    +    // The number of seconds to run the smoke test.
    +    const TEST_DURATION_SECONDS: u64 = 5;
    +
    +    // The number of worker threads to spawn for this smoke test.
    +    const NUM_WORKER_THREADS: usize = 32;
    +
    +    let engine = Engine::new(config)?;
    +
    +    /// Tests of various kinds of modifications to the type registry.
    +    enum Test {
    +        /// Creating a module (from its text format) should register new entries
    +        /// in the type registry.
    +        Module(&'static str),
    +        /// Creating an individual func type registers a singleton entry in the
    +        /// registry which is managed slightly differently from modules.
    +        Func(fn(&Engine) -> FuncType),
    +        /// Create a single struct type like a single function type.
    +        Struct(fn(&Engine) -> StructType),
    +        /// Create a single array type like a single function type.
    +        Array(fn(&Engine) -> ArrayType),
    +    }
    +    const TESTS: &'static [Test] = &[
    +        Test::Func(|engine| FuncType::new(engine, [], [])),
    +        Test::Func(|engine| FuncType::new(engine, [], [ValType::I32])),
    +        Test::Func(|engine| FuncType::new(engine, [ValType::I32], [])),
    +        Test::Struct(|engine| StructType::new(engine, []).unwrap()),
    +        Test::Array(|engine| {
    +            ArrayType::new(engine, FieldType::new(Mutability::Const, StorageType::I8))
    +        }),
    +        Test::Array(|engine| {
    +            ArrayType::new(engine, FieldType::new(Mutability::Var, StorageType::I8))
    +        }),
    +        Test::Module(
    +            r#"
    +                (module
    +                    ;; A handful of function types.
    +                    (type (func))
    +                    (type (func (param i32)))
    +                    (type (func (result i32)))
    +                    (type (func (param i32) (result i32)))
    +
    +                    ;; A handful of recursive types.
    +                    (rec)
    +                    (rec (type $s (struct (field (ref null $s)))))
    +                    (rec (type $a (struct (field (ref null $b))))
    +                         (type $b (struct (field (ref null $a)))))
    +                    (rec (type $c (struct (field (ref null $b))
    +                                          (field (ref null $d))))
    +                         (type $d (struct (field (ref null $a))
    +                                          (field (ref null $c)))))
    +
    +                    ;; Some GC types
    +                    (type (struct))
    +                    (type (array i8))
    +                    (type (array (mut i8)))
    +                )
    +            "#,
    +        ),
    +        Test::Module(
    +            r#"
    +                (module
    +                    ;; Just the function types.
    +                    (type (func))
    +                    (type (func (param i32)))
    +                    (type (func (result i32)))
    +                    (type (func (param i32) (result i32)))
    +                )
    +            "#,
    +        ),
    +        Test::Module(
    +            r#"
    +                (module
    +                    ;; Just the recursive types.
    +                    (rec)
    +                    (rec (type $s (struct (field (ref null $s)))))
    +                    (rec (type $a (struct (field (ref null $b))))
    +                         (type $b (struct (field (ref null $a)))))
    +                    (rec (type $c (struct (field (ref null $b))
    +                                          (field (ref null $d))))
    +                         (type $d (struct (field (ref null $a))
    +                                          (field (ref null $c)))))
    +                )
    +            "#,
    +        ),
    +        Test::Module(
    +            r#"
    +                (module
    +                    ;; One of each kind of type.
    +                    (type (func (param i32) (result i32)))
    +                    (rec (type $a (struct (field (ref null $b))))
    +                         (type $b (struct (field (ref null $a)))))
    +                )
    +            "#,
    +        ),
    +    ];
    +
    +    // Spawn the worker threads, each of them just registering and unregistering
    +    // modules (and their types) constantly for the duration of the smoke test.
    +    let handles = (0..NUM_WORKER_THREADS)
    +        .map(|_| {
    +            let engine = engine.clone();
    +            std::thread::spawn(move || -> Result<()> {
    +                let mut tests = TESTS.iter().cycle();
    +                let start = std::time::Instant::now();
    +                while start.elapsed().as_secs() < TEST_DURATION_SECONDS {
    +                    match tests.next() {
    +                        Some(Test::Module(wat)) => {
    +                            let _ = Module::new(&engine, wat)?;
    +                        }
    +                        Some(Test::Func(ctor)) => {
    +                            let _ = ctor(&engine);
    +                        }
    +                        Some(Test::Struct(ctor)) => {
    +                            let _ = ctor(&engine);
    +                        }
    +                        Some(Test::Array(ctor)) => {
    +                            let _ = ctor(&engine);
    +                        }
    +                        None => unreachable!(),
    +                    }
    +                }
    +
    +                Ok(())
    +            })
    +        })
    +        .collect::<Vec<_>>();
    +
    +    // Join all of the thread handles.
    +    for handle in handles {
    +        handle
    +            .join()
    +            .expect("should join thread handle")
    +            .context("error during thread execution")?;
    +    }
    +
    +    Ok(())
    +}
    +
    +#[wasmtime_test(wasm_features(function_references))]
    +fn concurrent_type_modifications_and_checks(config: &mut Config) -> Result<()> {
    +    const THREADS_CHECKING: usize = 4;
    +
    +    let _ = env_logger::try_init();
    +
    +    let engine = Engine::new(&config)?;
    +
    +    let mut threads = Vec::new();
    +    let keep_going = Arc::new(AtomicBool::new(true));
    +
    +    // Spawn a number of threads that are all working with a module and testing
    +    // various properties about type-checks in the module.
    +    for _ in 0..THREADS_CHECKING {
    +        threads.push(std::thread::spawn({
    +            let engine = engine.clone();
    +            let keep_going = keep_going.clone();
    +            move || -> Result<()> {
    +                while keep_going.load(Relaxed) {
    +                    let module = Module::new(
    +                        &engine,
    +                        r#"
    +                            (module
    +                                (func (export "f") (param funcref)
    +                                    i32.const 0
    +                                    local.get 0
    +                                    table.set
    +                                    i32.const 0
    +                                    call_indirect (result f64)
    +                                    drop
    +                                )
    +
    +                                (table 1 funcref)
    +                            )
    +                        "#,
    +                    )?;
    +                    let ty = FuncType::new(&engine, [], [ValType::I32]);
    +                    let mut store = Store::new(&engine, ());
    +                    let func = Func::new(&mut store, ty, |_, _, results| {
    +                        results[0] = Val::I32(0);
    +                        Ok(())
    +                    });
    +
    +                    let instance = Instance::new(&mut store, &module, &[])?;
    +                    assert!(instance.get_typed_func::<(), i32>(&mut store, "f").is_err());
    +                    assert!(instance.get_typed_func::<(), f64>(&mut store, "f").is_err());
    +                    let f = instance.get_typed_func::<Func, ()>(&mut store, "f")?;
    +                    let err = f.call(&mut store, func).unwrap_err();
    +                    assert_eq!(err.downcast::<Trap>()?, Trap::BadSignature);
    +                }
    +                Ok(())
    +            }
    +        }));
    +    }
    +
    +    // Spawn threads in the background creating/destroying `FuncType`s related
    +    // to the module above.
    +    threads.push(std::thread::spawn({
    +        let engine = engine.clone();
    +        let keep_going = keep_going.clone();
    +        move || -> Result<()> {
    +            while keep_going.load(Relaxed) {
    +                FuncType::new(&engine, [], [ValType::F64]);
    +            }
    +            Ok(())
    +        }
    +    }));
    +    threads.push(std::thread::spawn({
    +        let engine = engine.clone();
    +        let keep_going = keep_going.clone();
    +        move || -> Result<()> {
    +            while keep_going.load(Relaxed) {
    +                FuncType::new(&engine, [], [ValType::I32]);
    +            }
    +            Ok(())
    +        }
    +    }));
    +
    +    std::thread::sleep(std::time::Duration::new(2, 0));
    +    keep_going.store(false, Relaxed);
    +
    +    for thread in threads {
    +        thread.join().unwrap()?;
    +    }
    +
    +    Ok(())
    +}
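The two races documented in the patch's comments both reduce to cheap checks performed after the registry lock is held. The guard logic can be sketched in isolation with standard-library primitives; the types and names below are simplified stand-ins for the real `RecGroupEntryInner` fields and registry, not Wasmtime's actual API.

```rust
use std::sync::atomic::{
    AtomicBool, AtomicUsize,
    Ordering::{Acquire, Release},
};
use std::sync::Mutex;

// Simplified model of a rec-group entry: a registration count plus the
// `unregistered` flag added by the patch.
struct Entry {
    registrations: AtomicUsize,
    unregistered: AtomicBool,
}

// The mutex stands in for the registry's exclusive `&mut self` access.
struct Registry {
    lock: Mutex<()>,
}

// Attempt to unregister an entry that was previously observed at zero
// registrations. Returns true only if this call actually unregistered it.
fn try_unregister(registry: &Registry, entry: &Entry) -> bool {
    let _guard = registry.lock.lock().unwrap();

    // Scenario (1): the entry was resurrected between the zero-count
    // observation and acquiring the lock, so it is in use again.
    if entry.registrations.load(Acquire) != 0 {
        return false;
    }

    // Scenario (2): another drop-stack pass already unregistered it, so a
    // second unregistration must become a no-op.
    if entry.unregistered.load(Acquire) {
        return false;
    }

    // Take responsibility for the unregistration exactly once.
    entry.unregistered.store(true, Release);
    true
}

fn main() {
    let registry = Registry { lock: Mutex::new(()) };

    // A dead entry is unregistered once; the duplicate attempt is rejected.
    let dead = Entry {
        registrations: AtomicUsize::new(0),
        unregistered: AtomicBool::new(false),
    };
    assert!(try_unregister(&registry, &dead));
    assert!(!try_unregister(&registry, &dead));

    // A resurrected entry (non-zero registrations) is left alone.
    let live = Entry {
        registrations: AtomicUsize::new(1),
        unregistered: AtomicBool::new(false),
    };
    assert!(!try_unregister(&registry, &live));
}
```

This mirrors the structure of the patched `unregister_entry`: the count check handles resurrection, and the flag turns a double-unregistration into a logged no-op rather than registry corruption.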
    


