VYPR
Medium severity · 5.3 · NVD Advisory · Published Mar 20, 2026 · Updated Apr 17, 2026

CVE-2026-32766

Description

astral-tokio-tar is a tar archive reading/writing library for async Rust. In versions 0.5.6 and earlier, malformed PAX extensions were silently skipped, rather than rejected, when parsing tar archives. This silent skipping could serve as a building block for a parser differential: for example, a malformed GNU “long link” extension that astral-tokio-tar drops could still be interpreted by a second, less strict parser, so the two parsers would disagree about the archive's contents. In practice, exploiting this behavior in astral-tokio-tar requires a second misbehaving tar parser, i.e. one that insufficiently validates malformed PAX extensions and interprets them rather than skipping or erroring on them. The advisory rates this as low severity because exploitation requires a separate vulnerability in an unrelated tar parser. This issue has been fixed in version 0.6.0.
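
To see why strictness matters here, consider the PAX extended-header record format: each record is `"<len> <key>=<value>\n"`, where `<len>` counts the entire record including the length digits, the separating space, and the trailing newline. The sketch below (not the crate's actual parser; `parse_pax_record` is a hypothetical helper) shows the strict validation that prevents a malformed record from being silently ignored:

```rust
// Minimal sketch of a strict PAX extended-header record reader.
// Returns (key, value, rest-of-buffer) or an error.
fn parse_pax_record(data: &[u8]) -> Result<(&[u8], &[u8], &[u8]), String> {
    // Split off the decimal length field.
    let space = data
        .iter()
        .position(|&b| b == b' ')
        .ok_or("missing space after length")?;
    let len: usize = std::str::from_utf8(&data[..space])
        .map_err(|_| "length is not UTF-8")?
        .parse()
        .map_err(|_| "length is not a number")?;
    // Strict validation: the declared length must fit the buffer and the
    // record must end with '\n'. A lenient parser that skips on failure
    // here is what enables the parser differential.
    if len > data.len() || len <= space + 1 || data[len - 1] != b'\n' {
        return Err("malformed pax extension".into());
    }
    let body = &data[space + 1..len - 1];
    let eq = body.iter().position(|&b| b == b'=').ok_or("missing '='")?;
    Ok((&body[..eq], &body[eq + 1..], &data[len..]))
}

fn main() {
    // Well-formed: "16 path=foo.txt\n" is exactly 16 bytes long.
    assert!(parse_pax_record(b"16 path=foo.txt\n").is_ok());
    // Malformed: claims 99 bytes but only 17 are present, like the
    // record used in the fix's regression test.
    assert!(parse_pax_record(b"99 path=test.txt\n").is_err());
    println!("strict parser rejects the malformed record");
}
```

A lenient parser that drops the second record on failure would simply see no `path` override, while a sloppier parser might still honor it: that disagreement is the differential.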

Affected packages

Versions sourced from the GitHub Security Advisory.

Package                        Affected versions   Patched versions
astral-tokio-tar (crates.io)   < 0.6.0             0.6.0

Patches

e5e0139cae45

Merge commit from fork

https://github.com/astral-sh/tokio-tar · William Woodruff · Mar 16, 2026 · via GHSA
4 files changed · +131 −45
  • src/archive.rs · +18 −2 · modified
    @@ -293,8 +293,24 @@ impl<R: Read + Unpin> Archive<R> {
             // child directories within those of more restrictive permissions. See [0] for details.
             //
             // [0]: <https://github.com/alexcrichton/tar-rs/issues/242>
    -        directories.sort_by(|a, b| b.path_bytes().cmp(&a.path_bytes()));
    -        for mut dir in directories {
    +
    +        // Validate paths and pair with entries, then sort
    +        let mut dirs_with_paths: Vec<(Entry<Archive<R>>, Vec<u8>)> = directories
    +            .into_iter()
    +            .map(|dir| {
    +                let path = dir
    +                    .path_bytes()
    +                    .map_err(|e| TarError::new("failed to read directory path from archive", e))?
    +                    .into_owned();
    +                Ok((dir, path))
    +            })
    +            .collect::<io::Result<_>>()?;
    +
    +        // Sort by path (reverse order for topological sorting)
    +        dirs_with_paths.sort_by(|a, b| b.1.cmp(&a.1));
    +
    +        // Unpack directories in sorted order
    +        for (mut dir, _path) in dirs_with_paths {
                 dir.unpack_in_raw(&dst, &mut targets).await?;
             }
     
    
  • src/entry.rs · +45 −34 · modified
    @@ -139,7 +139,9 @@ impl<R: Read + Unpin> Entry<R> {
         /// separators, and it will not always return the same value as
         /// `self.header().path_bytes()` as some archive formats have support for
         /// longer path names described in separate entries.
    -    pub fn path_bytes(&self) -> Cow<'_, [u8]> {
    +    ///
    +    /// This method may return an error if PAX extensions are malformed.
    +    pub fn path_bytes(&self) -> io::Result<Cow<'_, [u8]>> {
             self.fields.path_bytes()
         }
     
    @@ -165,7 +167,9 @@ impl<R: Read + Unpin> Entry<R> {
         /// Note that this will not always return the same value as
         /// `self.header().link_name_bytes()` as some archive formats have support for
         /// longer path names described in separate entries.
    -    pub fn link_name_bytes(&self) -> Option<Cow<'_, [u8]>> {
    +    ///
    +    /// This method may return an error if PAX extensions are malformed.
    +    pub fn link_name_bytes(&self) -> io::Result<Option<Cow<'_, [u8]>>> {
             self.fields.link_name_bytes()
         }
     
    @@ -388,65 +392,69 @@ impl<R: Read + Unpin> EntryFields<R> {
         }
     
         fn path(&self) -> io::Result<Cow<'_, Path>> {
    -        bytes2path(self.path_bytes())
    +        bytes2path(self.path_bytes()?)
         }
     
    -    fn path_bytes(&self) -> Cow<'_, [u8]> {
    +    fn path_bytes(&self) -> io::Result<Cow<'_, [u8]>> {
             match self.long_pathname {
                 Some(ref bytes) => {
                     if let Some(&0) = bytes.last() {
    -                    Cow::Borrowed(&bytes[..bytes.len() - 1])
    +                    Ok(Cow::Borrowed(&bytes[..bytes.len() - 1]))
                     } else {
    -                    Cow::Borrowed(bytes)
    +                    Ok(Cow::Borrowed(bytes))
                     }
                 }
                 None => {
                     if let Some(ref pax) = self.pax_extensions {
    -                    let pax = pax_extensions(pax)
    -                        .filter_map(|f| f.ok())
    -                        .find(|f| f.key_bytes() == b"path")
    -                        .map(|f| f.value_bytes());
    -                    if let Some(field) = pax {
    -                        return Cow::Borrowed(field);
    +                    // Check for malformed PAX extensions and return hard error
    +                    for ext in pax_extensions(pax) {
    +                        let ext = ext?; // Propagate error instead of silently dropping
    +                        if ext.key_bytes() == b"path" {
    +                            return Ok(Cow::Borrowed(ext.value_bytes()));
    +                        }
                         }
                     }
    -                self.header.path_bytes()
    +                Ok(self.header.path_bytes())
                 }
             }
         }
     
         /// Gets the path in a "lossy" way, used for error reporting ONLY.
         fn path_lossy(&self) -> String {
    -        String::from_utf8_lossy(&self.path_bytes()).to_string()
    +        // If path_bytes() fails, fall back to the header path for error reporting
    +        match self.path_bytes() {
    +            Ok(bytes) => String::from_utf8_lossy(&bytes).to_string(),
    +            Err(_) => String::from_utf8_lossy(&self.header.path_bytes()).to_string(),
    +        }
         }
     
         fn link_name(&self) -> io::Result<Option<Cow<'_, Path>>> {
    -        match self.link_name_bytes() {
    +        match self.link_name_bytes()? {
                 Some(bytes) => bytes2path(bytes).map(Some),
                 None => Ok(None),
             }
         }
     
    -    fn link_name_bytes(&self) -> Option<Cow<'_, [u8]>> {
    +    fn link_name_bytes(&self) -> io::Result<Option<Cow<'_, [u8]>>> {
             match self.long_linkname {
                 Some(ref bytes) => {
                     if let Some(&0) = bytes.last() {
    -                    Some(Cow::Borrowed(&bytes[..bytes.len() - 1]))
    +                    Ok(Some(Cow::Borrowed(&bytes[..bytes.len() - 1])))
                     } else {
    -                    Some(Cow::Borrowed(bytes))
    +                    Ok(Some(Cow::Borrowed(bytes)))
                     }
                 }
                 None => {
                     if let Some(ref pax) = self.pax_extensions {
    -                    let pax = pax_extensions(pax)
    -                        .filter_map(|f| f.ok())
    -                        .find(|f| f.key_bytes() == b"linkpath")
    -                        .map(|f| f.value_bytes());
    -                    if let Some(field) = pax {
    -                        return Some(Cow::Borrowed(field));
    +                    // Check for malformed PAX extensions and return hard error
    +                    for ext in pax_extensions(pax) {
    +                        let ext = ext?; // Propagate error instead of silently dropping
    +                        if ext.key_bytes() == b"linkpath" {
    +                            return Ok(Some(Cow::Borrowed(ext.value_bytes())));
    +                        }
                         }
                     }
    -                self.header.link_name_bytes()
    +                Ok(self.header.link_name_bytes())
                 }
             }
         }
    @@ -744,7 +752,7 @@ impl<R: Read + Unpin> EntryFields<R> {
             // Old BSD-tar compatibility.
             // Names that have a trailing slash should be treated as a directory.
             // Only applies to old headers.
    -        if self.header.as_ustar().is_none() && self.path_bytes().ends_with(b"/") {
    +        if self.header.as_ustar().is_none() && self.path_bytes()?.ends_with(b"/") {
                 self.unpack_dir(dst).await?;
                 if self.preserve_permissions {
                     if let Ok(mode) = self.header.mode() {
    @@ -908,14 +916,17 @@ impl<R: Read + Unpin> EntryFields<R> {
                     Ok(Some(e)) => e,
                     _ => return Ok(()),
                 };
    -            let exts = exts
    -                .filter_map(|e| e.ok())
    -                .filter_map(|e| {
    -                    let key = e.key_bytes();
    -                    let prefix = b"SCHILY.xattr.";
    -                    key.strip_prefix(prefix).map(|rest| (rest, e))
    -                })
    -                .map(|(key, e)| (OsStr::from_bytes(key), e.value_bytes()));
    +            // Process xattr extensions, propagating errors instead of silently dropping them
    +            let mut xattrs = Vec::new();
    +            for ext in exts {
    +                let ext = ext?; // Propagate error instead of silently dropping
    +                let key = ext.key_bytes();
    +                let prefix = b"SCHILY.xattr.";
    +                if let Some(rest) = key.strip_prefix(prefix) {
    +                    xattrs.push((OsStr::from_bytes(rest), ext.value_bytes()));
    +                }
    +            }
    +            let exts = xattrs.into_iter();
     
                 for (key, value) in exts {
                     xattr::set(dst, key, value).map_err(|e| {
    
  • tests/all.rs · +8 −8 · modified
    @@ -78,7 +78,7 @@ async fn simple_concat() {
     
             while let Some(entry) = entries.next().await {
                 let e = t!(entry);
    -            names.push(t!(::std::str::from_utf8(&e.path_bytes())).to_string());
    +            names.push(t!(::std::str::from_utf8(&t!(e.path_bytes()))).to_string());
             }
     
             names
    @@ -203,7 +203,7 @@ async fn large_filename() {
     
         // The long entry added with `append_file`
         let mut f = entries.next().await.unwrap().unwrap();
    -    assert_eq!(&*f.path_bytes(), too_long.as_bytes());
    +    assert_eq!(&*t!(f.path_bytes()), too_long.as_bytes());
         assert_eq!(f.header().size().unwrap(), 4);
         let mut s = String::new();
         t!(f.read_to_string(&mut s).await);
    @@ -212,7 +212,7 @@ async fn large_filename() {
         // The long entry added with `append_data`
         let mut f = entries.next().await.unwrap().unwrap();
         assert!(f.header().path_bytes().len() < too_long.len());
    -    assert_eq!(&*f.path_bytes(), too_long.as_bytes());
    +    assert_eq!(&*t!(f.path_bytes()), too_long.as_bytes());
         assert_eq!(f.header().size().unwrap(), 4);
         let mut s = String::new();
         t!(f.read_to_string(&mut s).await);
    @@ -244,7 +244,7 @@ async fn large_filename_with_dot_dot_at_100_byte_mark() {
         let mut entries = t!(ar.entries());
     
         let mut f = t!(entries.next().await.unwrap());
    -    assert_eq!(&*f.path_bytes(), long_name_with_dot_dot.as_bytes());
    +    assert_eq!(&*t!(f.path_bytes()), long_name_with_dot_dot.as_bytes());
         assert_eq!(f.header().size().unwrap(), 4);
         let mut s = String::new();
         t!(f.read_to_string(&mut s).await);
    @@ -1015,7 +1015,7 @@ async fn long_name_trailing_nul() {
         let mut a = Archive::new(&contents[..]);
     
         let e = t!(t!(a.entries()).next().await.unwrap());
    -    assert_eq!(&*e.path_bytes(), b"foo");
    +    assert_eq!(&*t!(e.path_bytes()), b"foo");
     }
     
     #[tokio::test]
    @@ -1040,7 +1040,7 @@ async fn long_linkname_trailing_nul() {
         let mut a = Archive::new(&contents[..]);
     
         let e = t!(t!(a.entries()).next().await.unwrap());
    -    assert_eq!(&*e.link_name_bytes().unwrap(), b"foo");
    +    assert_eq!(&*t!(e.link_name_bytes()).unwrap(), b"foo");
     }
     
     #[tokio::test]
    @@ -1228,11 +1228,11 @@ async fn path_separators() {
     
         let entry = t!(entries.next().await.unwrap());
         assert_eq!(t!(entry.path()), short_path);
    -    assert!(!entry.path_bytes().contains(&b'\\'));
    +    assert!(!t!(entry.path_bytes()).contains(&b'\\'));
     
         let entry = t!(entries.next().await.unwrap());
         assert_eq!(t!(entry.path()), long_path);
    -    assert!(!entry.path_bytes().contains(&b'\\'));
    +    assert!(!t!(entry.path_bytes()).contains(&b'\\'));
     
         assert!(entries.next().await.is_none());
     }
    
  • tests/entry.rs · +60 −1 · modified
    @@ -41,7 +41,7 @@ async fn absolute_symlink() {
         let mut ar = async_tar::Archive::new(&bytes[..]);
         let mut entries = t!(ar.entries());
         let entry = t!(entries.next().await.unwrap());
    -    assert_eq!(&*entry.link_name_bytes().unwrap(), b"/bar");
    +    assert_eq!(&*t!(entry.link_name_bytes()).unwrap(), b"/bar");
     }
     
     #[tokio::test]
    @@ -462,3 +462,62 @@ async fn accept_relative_link() {
         t!(td.path().join("foo/bar").symlink_metadata());
         t!(File::open(td.path().join("foo").join("bar")).await);
     }
    +
    +#[tokio::test]
    +async fn malformed_pax_path_extension() {
    +    // Create a tar archive with a malformed PAX extension for path
    +    let mut ar_bytes = Vec::new();
    +
    +    // First, create a PAX extension header with malformed content
    +    let mut pax_header = async_tar::Header::new_gnu();
    +    pax_header.set_entry_type(async_tar::EntryType::new(b'x')); // PAX local extensions
    +    t!(pax_header.set_path("PaxHeaders/file"));
    +
    +    // Create malformed PAX extension data - the length field doesn't match the actual content length
    +    // Format is: "<length> <key>=<value>\n" where length includes itself
    +    let malformed_pax = b"99 path=test.txt\n"; // Claims to be 99 bytes but is only 17 bytes
    +    pax_header.set_size(malformed_pax.len() as u64);
    +    pax_header.set_cksum();
    +
    +    // Manually write the archive
    +    ar_bytes.extend_from_slice(pax_header.as_bytes());
    +    ar_bytes.extend_from_slice(malformed_pax);
    +    // Pad to 512 byte boundary
    +    let padding = (512 - (malformed_pax.len() % 512)) % 512;
    +    ar_bytes.extend_from_slice(&vec![0u8; padding]);
    +
    +    // Now add the actual file entry
    +    let mut header = async_tar::Header::new_gnu();
    +    header.set_size(4);
    +    header.set_entry_type(async_tar::EntryType::Regular);
    +    t!(header.set_path("file"));
    +    header.set_cksum();
    +    ar_bytes.extend_from_slice(header.as_bytes());
    +    ar_bytes.extend_from_slice(b"test");
    +    // Pad to 512 byte boundary
    +    ar_bytes.extend_from_slice(&vec![0u8; 508]);
    +
    +    // Try to read the entries - malformed PAX data is now detected during iteration
    +    let mut ar = async_tar::Archive::new(&ar_bytes[..]);
    +    let mut entries = t!(ar.entries());
    +
    +    // This should return an error because the PAX extension is malformed
    +    let result = entries.next().await.unwrap();
    +    assert!(
    +        result.is_err(),
    +        "Expected error for malformed PAX extension"
    +    );
    +
    +    // Verify it's a PAX-related error
    +    match result {
    +        Err(e) => {
    +            let err_str = e.to_string();
    +            assert!(
    +                err_str.contains("malformed pax extension"),
    +                "Expected 'malformed pax extension' error, got: {}",
    +                err_str
    +            );
    +        }
    +        Ok(_) => panic!("Expected error but got Ok"),
    +    }
    +}
    
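
The common thread across the diff is replacing lenient `filter_map(|e| e.ok())` chains with loops that propagate the first parse error via `?`. The pattern in isolation (a sketch with a stand-in record type; `find_lenient` and `find_strict` are hypothetical names, not crate APIs):

```rust
// Before the fix: parse failures are silently discarded, so a malformed
// record simply disappears from this parser's view of the archive.
fn find_lenient<'a>(
    records: impl Iterator<Item = Result<(&'a str, &'a str), String>>,
    key: &str,
) -> Option<String> {
    records
        .filter_map(|r| r.ok())
        .find(|(k, _)| *k == key)
        .map(|(_, v)| v.to_string())
}

// After the fix: the first malformed record aborts the lookup, so no two
// parsers can quietly disagree about what the archive contains.
fn find_strict<'a>(
    records: impl Iterator<Item = Result<(&'a str, &'a str), String>>,
    key: &str,
) -> Result<Option<String>, String> {
    for r in records {
        let (k, v) = r?;
        if k == key {
            return Ok(Some(v.to_string()));
        }
    }
    Ok(None)
}

fn main() {
    let archive = vec![
        Err("malformed pax extension".to_string()),
        Ok(("path", "benign.txt")),
    ];
    // Lenient parsing still yields an answer...
    assert_eq!(
        find_lenient(archive.clone().into_iter(), "path").as_deref(),
        Some("benign.txt")
    );
    // ...while strict parsing surfaces the malformed record instead.
    assert!(find_strict(archive.into_iter(), "path").is_err());
    println!("lenient hides the error; strict reports it");
}
```

This is also why `path_bytes()` and `link_name_bytes()` change signature from `Cow<…>` to `io::Result<Cow<…>>` in the patch: once malformed extensions are hard errors, every caller must either propagate or explicitly handle them, as the updated tests do with `t!(…)`.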

