Wasmtime race condition could lead to WebAssembly control-flow integrity and type safety violations
Description
Wasmtime is an open source runtime for WebAssembly. Under certain concurrent event orderings, a `wasmtime::Engine`'s internal type registry was susceptible to double-unregistration bugs due to a race condition, leading to panics and potentially to type registry corruption. That registry corruption could, following an additional and particular sequence of concurrent events, lead to violations of WebAssembly's control-flow integrity (CFI) and type safety.

Users that do not use `wasmtime::Engine` across multiple threads are not affected. Users that only create new modules across threads over time are additionally not affected. Reproducing this bug requires creating and dropping multiple type instances (such as `wasmtime::FuncType` or `wasmtime::ArrayType`) concurrently on multiple threads, where all types are associated with the same `wasmtime::Engine`. Wasm guests cannot trigger this bug. See the "References" section below for a list of Wasmtime type-related APIs that are affected.

Wasmtime maintains an internal registry of types within a `wasmtime::Engine`, and an engine is shareable across threads. Types can be created and referenced through creation of a `wasmtime::Module`, creation of a `wasmtime::FuncType`, or a number of other APIs where the host creates a function (see "References" below). Each of these cases interacts with an engine to deduplicate type information and to manage the type indices used to implement type checks, for example in WebAssembly's `call_indirect` instruction. This bug is a race condition in this management where the internal type registry could be corrupted, triggering an assert or leaving the registry in an invalid state.

In Wasmtime's internal representation, individual types (e.g. one per host function) maintain a registration count of how many times each has been used. Types additionally have state within an engine, behind a read-write lock, such as lookup/deduplication information.
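The registration-count pattern described above can be sketched with a minimal, std-only model (hypothetical names; this is not Wasmtime's actual code). A deduplication map lives behind a lock, while each entry carries an atomic count so that cloning a handle does not need the lock; the crux is that the drop path checks the count *before* taking the lock:

```rust
use std::collections::HashMap;
use std::sync::atomic::{
    AtomicUsize,
    Ordering::{AcqRel, Acquire},
};
use std::sync::{Arc, Mutex};

// Simplified model of a hash-consed type entry with a registration count.
struct Entry {
    registrations: AtomicUsize,
}

// Simplified model of an engine-wide registry: a dedup map behind a lock.
struct Registry {
    dedup: Mutex<HashMap<String, Arc<Entry>>>,
}

impl Registry {
    fn register(&self, key: &str) -> Arc<Entry> {
        let mut map = self.dedup.lock().unwrap();
        if let Some(e) = map.get(key) {
            // Deduplicated: bump the count, possibly "resurrecting" an entry
            // that another thread has already observed at zero registrations.
            e.registrations.fetch_add(1, AcqRel);
            return Arc::clone(e);
        }
        let e = Arc::new(Entry {
            registrations: AtomicUsize::new(1),
        });
        map.insert(key.to_string(), Arc::clone(&e));
        e
    }

    fn unregister(&self, key: &str, e: &Arc<Entry>) {
        // Time of check: observe the count hitting zero *before* we hold the
        // lock.
        if e.registrations.fetch_sub(1, AcqRel) == 1 {
            // Time of use: another thread may have re-registered (and even
            // re-dropped) this entry in the window before we acquire the lock,
            // so re-check the count under the lock.
            let mut map = self.dedup.lock().unwrap();
            if e.registrations.load(Acquire) == 0 {
                map.remove(key);
            }
        }
    }
}
```

In this toy model a redundant `map.remove` is a harmless no-op, so the recheck under the lock looks sufficient; in Wasmtime's real registry, unregistration also frees slab-allocated type indices and other per-engine state, which is why the second unregistration described in the advisory panics or corrupts the registry.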
The race here is a time-of-check versus time-of-use (TOCTOU) bug where one thread atomically decrements a type entry's registration count, observes zero registrations, and then acquires a lock in order to unregister that entry. Between when this first thread observed the zero-registration count and when it acquires that lock, however, another thread can perform the following sequence of events: re-register another copy of the type, which deduplicates to that same entry, resurrecting it and incrementing its registration count; then drop the type and decrement its registration count; observe that the registration count is now zero; acquire the type registry lock; and finally unregister the type. When the original thread then acquires the lock and unregisters the entry, it is the second time this entry has been unregistered.

This bug was originally introduced in Wasmtime 19's development of the WebAssembly GC proposal. It affects users who are not using the GC proposal, however, and affects Wasmtime in its default configuration even when the GC proposal is disabled. All Wasmtime users on version 19.0.0 or later are affected by this issue.

We have released the following Wasmtime versions, all of which contain a fix for this bug:

* 21.0.2
* 22.0.1
* 23.0.3
* 24.0.1
* 25.0.2

If your application creates and drops Wasmtime types on multiple threads concurrently, there are no known workarounds. Users are encouraged to upgrade to a patched release.
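The shape of the fix can be sketched as follows. This is a simplified, hypothetical version of the check performed once the registry lock is held: the `registrations` and `unregistered` field names mirror the fix commit, but the free function and `Entry` struct here are illustrative, not Wasmtime's actual code.

```rust
use std::sync::atomic::{
    AtomicBool, AtomicUsize,
    Ordering::{AcqRel, Acquire},
};

// Simplified stand-in for the real `RecGroupEntryInner`.
struct Entry {
    registrations: AtomicUsize,
    unregistered: AtomicBool,
}

// Called with the registry lock held; returns true only for the one caller
// that should actually perform the unregistration.
fn should_unregister(entry: &Entry) -> bool {
    // Scenario (1): the entry was resurrected between the zero-count
    // observation and lock acquisition, so it is live again: skip it.
    if entry.registrations.load(Acquire) != 0 {
        return false;
    }
    // Scenario (2): a resurrect-then-drop cycle means a second thread also
    // observed zero registrations and queued an unregistration. The flag's
    // previous value tells us whether someone already claimed this entry, so
    // only the first caller to flip it proceeds.
    !entry.unregistered.swap(true, AcqRel)
}
```

The atomic `swap` is what closes the second race: even if the same entry reaches the unregistration path twice, exactly one caller observes the flag as `false` and performs the teardown.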
Affected packages
Versions sourced from the GitHub Security Advisory.
| Package | Affected versions | Patched versions |
|---|---|---|
| wasmtime (crates.io) | >= 19.0.0, < 21.0.2 | 21.0.2 |
| wasmtime (crates.io) | >= 22.0.0, < 22.0.1 | 22.0.1 |
| wasmtime (crates.io) | >= 23.0.0, < 23.0.3 | 23.0.3 |
| wasmtime (crates.io) | >= 24.0.0, < 24.0.1 | 24.0.1 |
| wasmtime (crates.io) | >= 25.0.0, < 25.0.2 | 25.0.2 |
Affected products
- Range: >= 19.0.0, < 21.0.2
Patches
Commit 0ebe54d05f0e: Merge commit from fork
2 files changed · +417 −27
crates/wasmtime/src/runtime/type_registry.rs (+181 −27, modified)

```diff
@@ -17,14 +17,15 @@ use core::{
     hash::{Hash, Hasher},
     ops::Range,
     sync::atomic::{
-        AtomicUsize,
-        Ordering::{AcqRel, Acquire},
+        AtomicBool, AtomicUsize,
+        Ordering::{AcqRel, Acquire, Release},
     },
 };
 use wasmtime_environ::{
-    iter_entity_range, packed_option::PackedOption, EngineOrModuleTypeIndex, GcLayout,
-    ModuleInternedTypeIndex, ModuleTypes, PrimaryMap, SecondaryMap, TypeTrace, VMSharedTypeIndex,
-    WasmRecGroup, WasmSubType,
+    iter_entity_range,
+    packed_option::{PackedOption, ReservedValue},
+    EngineOrModuleTypeIndex, GcLayout, ModuleInternedTypeIndex, ModuleTypes, PrimaryMap,
+    SecondaryMap, TypeTrace, VMSharedTypeIndex, WasmRecGroup, WasmSubType,
 };
 use wasmtime_slab::{Id as SlabId, Slab};
@@ -180,12 +181,15 @@ impl Drop for TypeCollection {
 #[inline]
 fn shared_type_index_to_slab_id(index: VMSharedTypeIndex) -> SlabId {
+    assert!(!index.is_reserved_value());
     SlabId::from_raw(index.bits())
 }

 #[inline]
 fn slab_id_to_shared_type_index(id: SlabId) -> VMSharedTypeIndex {
-    VMSharedTypeIndex::new(id.into_raw())
+    let index = VMSharedTypeIndex::new(id.into_raw());
+    assert!(!index.is_reserved_value());
+    index
 }

 /// A Wasm type that has been registered in the engine's `TypeRegistry`.
@@ -417,8 +421,25 @@ impl Debug for RecGroupEntry {
 struct RecGroupEntryInner {
     /// The Wasm rec group, canonicalized for hash consing.
     hash_consing_key: WasmRecGroup,
+
+    /// The shared type indices for each type in this rec group.
     shared_type_indices: Box<[VMSharedTypeIndex]>,
+
+    /// The number of times that this entry has been registered in the
+    /// `TypeRegistryInner`.
+    ///
+    /// This is an atomic counter so that cloning a `RegisteredType`, and
+    /// temporarily keeping a type registered, doesn't require locking the full
+    /// registry.
     registrations: AtomicUsize,
+
+    /// Whether this entry has already been unregistered from the
+    /// `TypeRegistryInner`.
+    ///
+    /// This flag exists to detect and avoid double-unregistration bugs that
+    /// could otherwise occur in rare cases. See the comments in
+    /// `TypeRegistryInner::unregister_type` for details.
+    unregistered: AtomicBool,
 }

 impl PartialEq for RecGroupEntry {
@@ -611,6 +632,7 @@ impl TypeRegistryInner {
         // If we've already registered this rec group before, reuse it.
         if let Some(entry) = self.hash_consing_map.get(&hash_consing_key) {
+            assert_eq!(entry.0.unregistered.load(Acquire), false);
             entry.incref(
                 "hash consed to already-registered type in `TypeRegistryInner::register_rec_group`",
             );
@@ -622,8 +644,9 @@ impl TypeRegistryInner {
         // while this rec group is still alive.
         hash_consing_key
             .trace_engine_indices::<_, ()>(&mut |index| {
-                let entry = &self.type_to_rec_group[index].as_ref().unwrap();
-                entry.incref(
+                let other_entry = &self.type_to_rec_group[index].as_ref().unwrap();
+                assert_eq!(other_entry.0.unregistered.load(Acquire), false);
+                other_entry.incref(
                     "new cross-group type reference to existing type in `register_rec_group`",
                 );
                 Ok(())
@@ -645,17 +668,32 @@ impl TypeRegistryInner {
                     map[idx]
                 } else {
                     let rec_group_offset = idx.as_u32() - module_rec_group_start.as_u32();
-                    VMSharedTypeIndex::from_u32(engine_rec_group_start + rec_group_offset)
+                    let index =
+                        VMSharedTypeIndex::from_u32(engine_rec_group_start + rec_group_offset);
+                    assert!(!index.is_reserved_value());
+                    index
                 }
             });
             self.insert_one_type_from_rec_group(gc_runtime, module_index, ty)
         })
         .collect();

+        debug_assert_eq!(
+            shared_type_indices.len(),
+            shared_type_indices
+                .iter()
+                .copied()
+                .inspect(|ty| assert!(!ty.is_reserved_value()))
+                .collect::<crate::hash_set::HashSet<_>>()
+                .len(),
+            "should not have any duplicate type indices",
+        );
+
         let entry = RecGroupEntry(Arc::new(RecGroupEntryInner {
             hash_consing_key,
             shared_type_indices,
             registrations: AtomicUsize::new(1),
+            unregistered: AtomicBool::new(false),
         }));
         log::trace!("create new entry {entry:?} (registrations -> 1)");
@@ -845,29 +883,133 @@ impl TypeRegistryInner {
     /// zero remaining registrations.
     fn unregister_entry(&mut self, entry: RecGroupEntry) {
         debug_assert!(self.drop_stack.is_empty());
+
+        // There are two races to guard against before we can unregister the
+        // entry, even though it was on the drop stack:
+        //
+        // 1. Although an entry has to reach zero registrations before it is
+        //    enqueued in the drop stack, we need to double check whether the
+        //    entry is *still* at zero registrations. This is because someone
+        //    else can resurrect the entry in between when the
+        //    zero-registrations count was first observed and when we actually
+        //    acquire the lock to unregister it. In this example, we have
+        //    threads A and B, an existing rec group entry E, and a rec group
+        //    entry E' that is a duplicate of E:
+        //
+        //    Thread A                        | Thread B
+        //    --------------------------------+-----------------------------
+        //    acquire(type registry lock)     |
+        //                                    |
+        //                                    | decref(E) --> 0
+        //                                    |
+        //                                    | block_on(type registry lock)
+        //                                    |
+        //    register(E') == incref(E) --> 1 |
+        //                                    |
+        //    release(type registry lock)     |
+        //                                    |
+        //                                    | acquire(type registry lock)
+        //                                    |
+        //                                    | unregister(E)         !!!!!!
+        //
+        //    If we aren't careful, we can unregister a type while it is still
+        //    in use!
+        //
+        //    The fix in this case is that we skip unregistering the entry if
+        //    its reference count is non-zero, since that means it was
+        //    concurrently resurrected and is now in use again.
+        //
+        // 2. In a slightly more convoluted version of (1), where an entry is
+        //    resurrected but then dropped *again*, someone might attempt to
+        //    unregister an entry a second time:
+        //
+        //    Thread A                        | Thread B
+        //    --------------------------------|-----------------------------
+        //    acquire(type registry lock)     |
+        //                                    |
+        //                                    | decref(E) --> 0
+        //                                    |
+        //                                    | block_on(type registry lock)
+        //                                    |
+        //    register(E') == incref(E) --> 1 |
+        //                                    |
+        //    release(type registry lock)     |
+        //                                    |
+        //    decref(E) --> 0                 |
+        //                                    |
+        //    acquire(type registry lock)     |
+        //                                    |
+        //    unregister(E)                   |
+        //                                    |
+        //    release(type registry lock)     |
+        //                                    |
+        //                                    | acquire(type registry lock)
+        //                                    |
+        //                                    | unregister(E)         !!!!!!
+        //
+        //    If we aren't careful, we can unregister a type twice, which leads
+        //    to panics and registry corruption!
+        //
+        //    To detect this scenario and avoid the double-unregistration bug,
+        //    we maintain an `unregistered` flag on entries. We set this flag
+        //    once an entry is unregistered and therefore, even if it is
+        //    enqueued in the drop stack multiple times, we only actually
+        //    unregister the entry the first time.
+        //
+        // A final note: we don't need to worry about any concurrent
+        // modifications during the middle of this function's execution, only
+        // between (a) when we first observed a zero-registrations count and
+        // decided to unregister the type, and (b) when we acquired the type
+        // registry's lock so that we could perform that unregistration. This is
+        // because this method has exclusive access to `&mut self` -- that is,
+        // we have a write lock on the whole type registry -- and therefore no
+        // one else can create new references to this zero-registration entry
+        // and bring it back to life (which would require finding it in
+        // `self.hash_consing_map`, which no one else has access to, because we
+        // now have an exclusive lock on `self`).

+        // Handle scenario (1) from above.
+        let registrations = entry.0.registrations.load(Acquire);
+        if registrations != 0 {
+            log::trace!(
+                "{entry:?} was concurrently resurrected and no longer has \
+                 zero registrations (registrations -> {registrations})",
+            );
+            assert_eq!(entry.0.unregistered.load(Acquire), false);
+            return;
+        }
+
+        // Handle scenario (2) from above.
+        if entry.0.unregistered.load(Acquire) {
+            log::trace!(
+                "{entry:?} was concurrently resurrected, dropped again, \
+                 and already unregistered"
+            );
+            return;
+        }
+
+        // Okay, we are really going to unregister this entry. Enqueue it on the
+        // drop stack.
         self.drop_stack.push(entry);

+        // Keep unregistering entries until the drop stack is empty. This is
+        // logically a recursive process where if we unregister a type that was
+        // the only thing keeping another type alive, we then recursively
+        // unregister that other type as well. However, we use this explicit
+        // drop stack to avoid recursion and the potential stack overflows that
+        // recursion implies.
         while let Some(entry) = self.drop_stack.pop() {
             log::trace!("Start unregistering {entry:?}");

-            // We need to double check whether the entry is still at zero
-            // registrations: Between the time that we observed a zero and
-            // acquired the lock to call this function, another thread could
-            // have registered the type and found the 0-registrations entry in
-            // `self.map` and incremented its count.
-            //
-            // We don't need to worry about any concurrent increments during
-            // this function's invocation after we check for zero because we
-            // have exclusive access to `&mut self` and therefore no one can
-            // create a new reference to this entry and bring it back to life.
-            let registrations = entry.0.registrations.load(Acquire);
-            if registrations != 0 {
-                log::trace!(
-                    "{entry:?} was concurrently resurrected and no longer has \
-                     zero registrations (registrations -> {registrations})",
-                );
-                continue;
-            }
+            // All entries on the drop stack should *really* be ready for
+            // unregistration, since no one can resurrect entries once we've
+            // locked the registry.
+            assert_eq!(entry.0.registrations.load(Acquire), 0);
+            assert_eq!(entry.0.unregistered.load(Acquire), false);
+
+            // We are taking responsibility for unregistering this entry, so
+            // prevent anyone else from attempting to do it again.
+            entry.0.unregistered.store(true, Release);

             // Decrement any other types that this type was shallowly
             // (i.e. non-transitively) referencing and keeping alive. If this
@@ -901,6 +1043,18 @@ impl TypeRegistryInner {
             // map. Additionally, stop holding a strong reference from each
             // function type in the rec group to that function type's trampoline
             // type.
+            debug_assert_eq!(
+                entry.0.shared_type_indices.len(),
+                entry
+                    .0
+                    .shared_type_indices
+                    .iter()
+                    .copied()
+                    .inspect(|ty| assert!(!ty.is_reserved_value()))
+                    .collect::<crate::hash_set::HashSet<_>>()
+                    .len(),
+                "should not have any duplicate type indices",
+            );
             for ty in entry.0.shared_type_indices.iter().copied() {
                 log::trace!("removing {ty:?} from registry");
```
tests/all/module.rs (+236 −0, modified)

```diff
@@ -1,4 +1,8 @@
+use anyhow::Context;
+use std::sync::atomic::{AtomicBool, Ordering::Relaxed};
+use std::sync::Arc;
 use wasmtime::*;
+use wasmtime_test_macros::wasmtime_test;

 #[test]
 fn checks_incompatible_target() -> Result<()> {
@@ -296,3 +300,235 @@ fn cross_engine_module_exports() -> Result<()> {
     assert!(instance.get_module_export(&mut store, &export).is_none());
     Ok(())
 }
+
+/// Smoke test for registering and unregistering modules (and their rec group
+/// entries) concurrently.
+#[wasmtime_test(wasm_features(gc, function_references))]
+fn concurrent_type_registry_modifications(config: &mut Config) -> Result<()> {
+    let _ = env_logger::try_init();
+
+    // The number of seconds to run the smoke test.
+    const TEST_DURATION_SECONDS: u64 = 5;
+
+    // The number of worker threads to spawn for this smoke test.
+    const NUM_WORKER_THREADS: usize = 32;
+
+    let engine = Engine::new(config)?;
+
+    /// Tests of various kinds of modifications to the type registry.
+    enum Test {
+        /// Creating a module (from its text format) should register new entries
+        /// in the type registry.
+        Module(&'static str),
+        /// Creating an individual func type registers a singleton entry in the
+        /// registry which is managed slightly differently from modules.
+        Func(fn(&Engine) -> FuncType),
+        /// Create a single struct type like a single function type.
+        Struct(fn(&Engine) -> StructType),
+        /// Create a single array type like a single function type.
+        Array(fn(&Engine) -> ArrayType),
+    }
+    const TESTS: &'static [Test] = &[
+        Test::Func(|engine| FuncType::new(engine, [], [])),
+        Test::Func(|engine| FuncType::new(engine, [], [ValType::I32])),
+        Test::Func(|engine| FuncType::new(engine, [ValType::I32], [])),
+        Test::Struct(|engine| StructType::new(engine, []).unwrap()),
+        Test::Array(|engine| {
+            ArrayType::new(engine, FieldType::new(Mutability::Const, StorageType::I8))
+        }),
+        Test::Array(|engine| {
+            ArrayType::new(engine, FieldType::new(Mutability::Var, StorageType::I8))
+        }),
+        Test::Module(
+            r#"
+                (module
+                    ;; A handful of function types.
+                    (type (func))
+                    (type (func (param i32)))
+                    (type (func (result i32)))
+                    (type (func (param i32) (result i32)))
+
+                    ;; A handful of recursive types.
+                    (rec)
+                    (rec (type $s (struct (field (ref null $s)))))
+                    (rec (type $a (struct (field (ref null $b))))
+                         (type $b (struct (field (ref null $a)))))
+                    (rec (type $c (struct (field (ref null $b))
+                                          (field (ref null $d))))
+                         (type $d (struct (field (ref null $a))
+                                          (field (ref null $c)))))
+
+                    ;; Some GC types
+                    (type (struct))
+                    (type (array i8))
+                    (type (array (mut i8)))
+                )
+            "#,
+        ),
+        Test::Module(
+            r#"
+                (module
+                    ;; Just the function types.
+                    (type (func))
+                    (type (func (param i32)))
+                    (type (func (result i32)))
+                    (type (func (param i32) (result i32)))
+                )
+            "#,
+        ),
+        Test::Module(
+            r#"
+                (module
+                    ;; Just the recursive types.
+                    (rec)
+                    (rec (type $s (struct (field (ref null $s)))))
+                    (rec (type $a (struct (field (ref null $b))))
+                         (type $b (struct (field (ref null $a)))))
+                    (rec (type $c (struct (field (ref null $b))
+                                          (field (ref null $d))))
+                         (type $d (struct (field (ref null $a))
+                                          (field (ref null $c)))))
+                )
+            "#,
+        ),
+        Test::Module(
+            r#"
+                (module
+                    ;; One of each kind of type.
+                    (type (func (param i32) (result i32)))
+                    (rec (type $a (struct (field (ref null $b))))
+                         (type $b (struct (field (ref null $a)))))
+                )
+            "#,
+        ),
+    ];
+
+    // Spawn the worker threads, each of them just registering and unregistering
+    // modules (and their types) constantly for the duration of the smoke test.
+    let handles = (0..NUM_WORKER_THREADS)
+        .map(|_| {
+            let engine = engine.clone();
+            std::thread::spawn(move || -> Result<()> {
+                let mut tests = TESTS.iter().cycle();
+                let start = std::time::Instant::now();
+                while start.elapsed().as_secs() < TEST_DURATION_SECONDS {
+                    match tests.next() {
+                        Some(Test::Module(wat)) => {
+                            let _ = Module::new(&engine, wat)?;
+                        }
+                        Some(Test::Func(ctor)) => {
+                            let _ = ctor(&engine);
+                        }
+                        Some(Test::Struct(ctor)) => {
+                            let _ = ctor(&engine);
+                        }
+                        Some(Test::Array(ctor)) => {
+                            let _ = ctor(&engine);
+                        }
+                        None => unreachable!(),
+                    }
+                }
+
+                Ok(())
+            })
+        })
+        .collect::<Vec<_>>();
+
+    // Join all of the thread handles.
+    for handle in handles {
+        handle
+            .join()
+            .expect("should join thread handle")
+            .context("error during thread execution")?;
+    }
+
+    Ok(())
+}
+
+#[wasmtime_test(wasm_features(function_references))]
+fn concurrent_type_modifications_and_checks(config: &mut Config) -> Result<()> {
+    const THREADS_CHECKING: usize = 4;
+
+    let _ = env_logger::try_init();
+
+    let engine = Engine::new(&config)?;
+
+    let mut threads = Vec::new();
+    let keep_going = Arc::new(AtomicBool::new(true));
+
+    // Spawn a number of threads that are all working with a module and testing
+    // various properties about type-checks in the module.
+    for _ in 0..THREADS_CHECKING {
+        threads.push(std::thread::spawn({
+            let engine = engine.clone();
+            let keep_going = keep_going.clone();
+            move || -> Result<()> {
+                while keep_going.load(Relaxed) {
+                    let module = Module::new(
+                        &engine,
+                        r#"
+                            (module
+                                (func (export "f") (param funcref)
+                                    i32.const 0
+                                    local.get 0
+                                    table.set
+                                    i32.const 0
+                                    call_indirect (result f64)
+                                    drop
+                                )
+
+                                (table 1 funcref)
+                            )
+                        "#,
+                    )?;
+                    let ty = FuncType::new(&engine, [], [ValType::I32]);
+                    let mut store = Store::new(&engine, ());
+                    let func = Func::new(&mut store, ty, |_, _, results| {
+                        results[0] = Val::I32(0);
+                        Ok(())
+                    });
+
+                    let instance = Instance::new(&mut store, &module, &[])?;
+                    assert!(instance.get_typed_func::<(), i32>(&mut store, "f").is_err());
+                    assert!(instance.get_typed_func::<(), f64>(&mut store, "f").is_err());
+                    let f = instance.get_typed_func::<Func, ()>(&mut store, "f")?;
+                    let err = f.call(&mut store, func).unwrap_err();
+                    assert_eq!(err.downcast::<Trap>()?, Trap::BadSignature);
+                }
+                Ok(())
+            }
+        }));
+    }
+
+    // Spawn threads in the background creating/destroying `FuncType`s related
+    // to the module above.
+    threads.push(std::thread::spawn({
+        let engine = engine.clone();
+        let keep_going = keep_going.clone();
+        move || -> Result<()> {
+            while keep_going.load(Relaxed) {
+                FuncType::new(&engine, [], [ValType::F64]);
+            }
+            Ok(())
+        }
+    }));
+    threads.push(std::thread::spawn({
+        let engine = engine.clone();
+        let keep_going = keep_going.clone();
+        move || -> Result<()> {
+            while keep_going.load(Relaxed) {
+                FuncType::new(&engine, [], [ValType::I32]);
+            }
+            Ok(())
+        }
+    }));
+
+    std::thread::sleep(std::time::Duration::new(2, 0));
+    keep_going.store(false, Relaxed);
+
+    for thread in threads {
+        thread.join().unwrap()?;
+    }
+
+    Ok(())
+}
```
References
- github.com/advisories/GHSA-7qmx-3fpx-r45m (advisory)
- nvd.nist.gov/vuln/detail/CVE-2024-47813 (advisory)
- github.com/bytecodealliance/wasmtime/commit/0ebe54d05f0e1f6c64b7c8bb48c9e9f6c95cacba (fix commit)
- github.com/bytecodealliance/wasmtime/pull/7969 (related pull request)
- github.com/bytecodealliance/wasmtime/security/advisories/GHSA-7qmx-3fpx-r45m (advisory)
- rustsec.org/advisories/RUSTSEC-2024-0439.html (advisory)
News mentions
No linked articles in our index yet.