Next.js: Unbounded next/image disk cache growth can exhaust storage
Description
Next.js is a React framework for building full-stack web applications. Starting in version 10.0.0 and prior to versions 15.5.14 and 16.1.7, the default Next.js image optimization disk cache (`/_next/image`) had no configurable upper bound, allowing unbounded cache growth. An attacker could request many unique image-optimization variants and exhaust disk space, causing denial of service. This is fixed in versions 15.5.14 and 16.1.7 by adding an LRU-backed disk cache with `images.maximumDiskCacheSize`, which evicts the least-recently-used entries when the limit is exceeded. Setting `maximumDiskCacheSize: 0` disables disk caching. If upgrading is not immediately possible, periodically clean `.next/cache/images` and/or reduce variant cardinality (e.g., tighten the values for `images.localPatterns`, `images.remotePatterns`, and `images.qualities`).
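The remediation described above comes down to a few `next.config.js` settings. A minimal sketch, assuming the patched version is installed; the 500 MB cap, the hostname, and the quality list are illustrative values, not recommendations from the advisory:

```javascript
// next.config.js — illustrative values; adjust for your deployment.
/** @type {import('next').NextConfig} */
const nextConfig = {
  images: {
    // Upper bound for the optimized-image disk cache, in bytes
    // (available in 16.1.7 / 15.5.14 and later). Set to 0 to
    // disable disk caching entirely.
    maximumDiskCacheSize: 500_000_000, // 500 MB

    // Reducing variant cardinality shrinks the worst-case cache size:
    // fewer allowed qualities and tighter source patterns mean fewer
    // distinct cache entries an attacker can force.
    qualities: [75],
    remotePatterns: [{ protocol: 'https', hostname: 'images.example.com' }],
  },
}

module.exports = nextConfig
```

For deployments that cannot upgrade yet, the config-level cardinality limits still narrow the attack surface even though they do not bound the cache.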
Affected packages
Versions sourced from the GitHub Security Advisory.
| Package | Affected versions | Patched versions |
|---|---|---|
| next (npm) | >= 16.0.0-beta.0, < 16.1.7 | 16.1.7 |
| next (npm) | >= 10.0.0, < 15.5.14 | 15.5.14 |
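A quick way to check whether an installed version falls inside the affected ranges from the table above. This is a simplified comparison that ignores prerelease tags such as `-beta.0`, so it slightly over-approximates the 16.x range; use a real semver library for production checks:

```javascript
// Simplified semver comparison; prerelease tags (e.g. 16.0.0-beta.0) are ignored.
function parse(v) {
  return v.split('-')[0].split('.').map(Number)
}

function lt(a, b) {
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] < b[i]
  }
  return false
}

function gte(a, b) {
  return !lt(a, b)
}

// Affected per the advisory: >= 10.0.0 < 15.5.14, or >= 16.0.0(-beta.0) < 16.1.7
function isAffected(version) {
  const v = parse(version)
  return (
    (gte(v, [10, 0, 0]) && lt(v, [15, 5, 14])) ||
    (gte(v, [16, 0, 0]) && lt(v, [16, 1, 7]))
  )
}
```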
Patches
Commit 39eb8e0ac498 — feat(next/image): add lru disk cache and `images.maximumDiskCacheSize` (#89963)

12 files changed · +521 −43
`docs/01-app/03-api-reference/02-components/image.mdx` · +31 −0 · modified

````diff
@@ -837,6 +837,36 @@ module.exports = {
   }
 }
 ```
+
+#### `maximumDiskCacheSize`
+
+The default image optimization loader will write optimized images to disk so subsequent requests can be served faster from the disk cache.
+
+You can configure the maximum disk cache size in bytes, for example 500 MB:
+
+```js filename="next.config.js"
+module.exports = {
+  images: {
+    maximumDiskCacheSize: 500_000_000,
+  },
+}
+```
+
+You can also disable the disk cache entirely by setting the value to `0`.
+
+```js filename="next.config.js"
+module.exports = {
+  images: {
+    maximumDiskCacheSize: 0,
+  },
+}
+```
+
+If no value is configured, the default behavior is to check the current available disk space once during startup and use 50%.
+
+When the disk cache exceeds the configured size, the least recently used optimized images will be deleted until the cache is under the limit again.
+
+Alternatively, you can implement your own cache handler using [`cacheHandler`](/docs/app/api-reference/config/next-config-js/incrementalCacheHandlerPath) which will ignore the `maximumDiskCacheSize` configuration.
+
 #### `maximumResponseBody`
 
 The default image optimization loader will fetch source images up to 50 MB in size.
@@ -1363,6 +1393,7 @@ export default function Home() {
 
 | Version    | Changes                                     |
 | ---------- | ------------------------------------------- |
+| `v16.1.7`  | `maximumDiskCacheSize` configuration added. |
 | `v16.1.2`  | `maximumResponseBody` configuration added.  |
 | `v16.0.0`  | `qualities` default configuration changed to `[75]`, `preload` prop added, `priority` prop deprecated, `dangerouslyAllowLocalIP` config added, `maximumRedirects` config added. |
 | `v15.3.0`  | `remotePatterns` added support for array of `URL` objects. |
````
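The eviction behavior the documentation describes (delete least recently used entries until the cache is back under the limit) can be modeled with a plain `Map`, whose insertion order doubles as recency order. This is a simplified sketch, not the implementation from the patch:

```javascript
// Minimal size-bounded LRU model: a Map keeps insertion order, so the
// first key is always the least recently used one.
class DiskCacheModel {
  constructor(maxSize) {
    this.maxSize = maxSize
    this.entries = new Map() // key -> size in bytes
    this.totalSize = 0
    this.evicted = [] // keys deleted from "disk", in eviction order
  }

  get(key) {
    if (!this.entries.has(key)) return false
    // Promote: re-insert so the key moves to the "most recent" end.
    const size = this.entries.get(key)
    this.entries.delete(key)
    this.entries.set(key, size)
    return true
  }

  set(key, size) {
    if (size > this.maxSize) return false // a single item can never fit
    if (this.entries.has(key)) {
      this.totalSize -= this.entries.get(key)
      this.entries.delete(key)
    }
    this.entries.set(key, size)
    this.totalSize += size
    // Evict least recently used entries until under the limit.
    while (this.totalSize > this.maxSize) {
      const [lruKey, lruSize] = this.entries.entries().next().value
      this.entries.delete(lruKey)
      this.totalSize -= lruSize
      this.evicted.push(lruKey)
    }
    return true
  }
}
```

The boolean return mirrors the patched `LRUCache.set`, which rejects items larger than the configured maximum instead of caching them.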
`packages/next/errors.json` · +3 −1 · modified

```diff
@@ -1066,5 +1066,7 @@
   "1065": "createServerPathnameForMetadata should not be called in client contexts.",
   "1066": "createServerSearchParamsForServerPage should not be called in a client validation.",
   "1067": "The Next.js unhandled rejection filter is being installed more than once. This is a bug in Next.js.",
-  "1068": "Expected workStore to be initialized"
+  "1068": "Expected workStore to be initialized",
+  "1069": "Invariant: cache entry \"%s\" not found in dir \"%s\"",
+  "1070": "image of size %s could not be tracked by lru cache"
 }
```
`packages/next/src/server/config-schema.ts` · +1 −0 · modified

```diff
@@ -623,6 +623,7 @@ export const configSchema: zod.ZodType<NextConfig> = z.lazy(() =>
         .optional(),
       loader: z.enum(VALID_LOADERS).optional(),
       loaderFile: z.string().optional(),
+      maximumDiskCacheSize: z.number().int().min(0).optional(),
       maximumRedirects: z.number().int().min(0).max(20).optional(),
       maximumResponseBody: z
         .number()
```
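The zod rule above accepts either no value at all or a non-negative integer. The same constraint expressed in plain JavaScript, for illustration only:

```javascript
// Mirrors z.number().int().min(0).optional(): undefined is allowed,
// otherwise the value must be a non-negative integer.
function isValidMaximumDiskCacheSize(value) {
  if (typeof value === 'undefined') return true
  return typeof value === 'number' && Number.isInteger(value) && value >= 0
}
```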
`packages/next/src/server/image-optimizer.ts` · +111 −31 · modified

```diff
@@ -30,6 +30,7 @@
 import { getContentType, getExtension } from './serve-static'
 import * as Log from '../build/output/log'
 import isError from '../lib/is-error'
 import { isPrivateIp } from './is-private-ip'
+import { getOrInitDiskLRU } from './lib/disk-lru-cache.external'
 import { parseUrl } from '../lib/url'
 import type { CacheControl } from './lib/cache-control'
 import { InvariantError } from '../shared/lib/invariant-error'
@@ -61,6 +62,29 @@
 const BLUR_QUALITY = 70 // should match `next-image-loader`
 
 let _sharp: typeof import('sharp')
 
+async function initCacheEntries(
+  cacheDir: string
+): Promise<Array<{ key: string; size: number; expireAt: number }>> {
+  const cacheKeys = await promises.readdir(cacheDir).catch(() => [])
+  const entries: Array<{ key: string; size: number; expireAt: number }> = []
+
+  for (const cacheKey of cacheKeys) {
+    try {
+      const { expireAt, buffer } = await readFromCacheDir(cacheDir, cacheKey)
+      entries.push({
+        key: cacheKey,
+        size: buffer.byteLength,
+        expireAt,
+      })
+    } catch {
+      // Skip entries that can't be read from disk
+    }
+  }
+
+  // Sort oldest-first so we can replay them chronologically into LRU
+  return entries.sort((a, b) => a.expireAt - b.expireAt)
+}
+
 export function getSharp(concurrency: number | null | undefined) {
   if (_sharp) {
     return _sharp
@@ -139,14 +163,16 @@ export function getImageEtag(image: Buffer) {
 }
 
 async function writeToCacheDir(
-  dir: string,
+  cacheDir: string,
+  cacheKey: string,
   extension: string,
   maxAge: number,
   expireAt: number,
   buffer: Buffer,
   etag: string,
   upstreamEtag: string
 ) {
+  const dir = join(/* turbopackIgnore: true */ cacheDir, cacheKey)
   const filename = join(
     /* turbopackIgnore: true */ dir,
@@ -159,6 +185,37 @@
   await promises.writeFile(filename, buffer)
 }
 
+async function readFromCacheDir(cacheDir: string, cacheKey: string) {
+  const dir = join(/* turbopackIgnore: true */ cacheDir, cacheKey)
+  const files = await promises.readdir(dir)
+  const file = files[0]
+  if (!file) {
+    throw new Error(
+      `Invariant: cache entry "${cacheKey}" not found in dir "${cacheDir}"`
+    )
+  }
+  const [maxAgeSt, expireAtSt, etag, upstreamEtag, extension] = file.split(
+    '.',
+    5
+  )
+  const filePath = join(/* turbopackIgnore: true */ dir, file)
+  const buffer = await promises.readFile(/* turbopackIgnore: true */ filePath)
+  const expireAt = Number(expireAtSt)
+  const maxAge = Number(maxAgeSt)
+  return { maxAge, expireAt, etag, upstreamEtag, buffer, extension }
+}
+
+async function deleteFromCacheDir(cacheDir: string, cacheKey: string) {
+  return promises
+    .rm(join(/* turbopackIgnore: true */ cacheDir, cacheKey), {
+      recursive: true,
+      force: true,
+    })
+    .catch((err) => {
+      Log.error(`Failed to delete cache key ${cacheKey}`, err)
+    })
+}
+
 /**
  * Inspects the first few bytes of a buffer to determine if
  * it matches the "magic number" of known file signatures.
@@ -318,6 +375,8 @@
   private cacheDir: string
   private nextConfig: NextConfigRuntime
   private cacheHandler?: CacheHandler
+  private cacheDiskLRU?: ReturnType<typeof getOrInitDiskLRU>
+  private isDiskCacheEnabled?: boolean
 
   static validateParams(
     req: IncomingMessage,
@@ -507,6 +566,21 @@
     this.cacheDir = join(/* turbopackIgnore: true */ distDir, 'cache', 'images')
     this.nextConfig = nextConfig
     this.cacheHandler = cacheHandler
+
+    // Eagerly start LRU initialization for filesystem cache
+    if (
+      !cacheHandler &&
+      nextConfig.images.maximumDiskCacheSize !== 0 &&
+      nextConfig.experimental.isrFlushToDisk
+    ) {
+      this.isDiskCacheEnabled = true
+      this.cacheDiskLRU = getOrInitDiskLRU(
+        this.cacheDir,
+        nextConfig.images.maximumDiskCacheSize,
+        initCacheEntries,
+        deleteFromCacheDir
+      )
+    }
   }
 
   async get(cacheKey: string): Promise<IncrementalResponseCacheEntry | null> {
@@ -549,38 +623,34 @@
       return null
     }
 
+    // If the filesystem cache is disabled, return early
+    if (!this.isDiskCacheEnabled) {
+      return null
+    }
+
     // Fall back to filesystem cache
     try {
-      const cacheDir = join(/* turbopackIgnore: true */ this.cacheDir, cacheKey)
-      const files = await promises.readdir(cacheDir)
       const now = Date.now()
+      const { maxAge, expireAt, etag, upstreamEtag, buffer, extension } =
+        await readFromCacheDir(this.cacheDir, cacheKey)
 
-      for (const file of files) {
-        const [maxAgeSt, expireAtSt, etag, upstreamEtag, extension] =
-          file.split('.', 5)
-        const buffer = await promises.readFile(
-          /* turbopackIgnore: true */ join(
-            /* turbopackIgnore: true */ cacheDir,
-            file
-          )
-        )
-        const expireAt = Number(expireAtSt)
-        const maxAge = Number(maxAgeSt)
+      // Promote entry in LRU (mark as recently used)
+      const lru = await this.cacheDiskLRU
+      lru?.get(cacheKey)
 
-        return {
-          value: {
-            kind: CachedRouteKind.IMAGE,
-            etag,
-            buffer,
-            extension,
-            upstreamEtag,
-          },
-          revalidateAfter:
-            Math.max(maxAge, this.nextConfig.images.minimumCacheTTL) * 1000 +
-            Date.now(),
-          cacheControl: { revalidate: maxAge, expire: undefined },
-          isStale: now > expireAt,
-        }
+      return {
+        value: {
+          kind: CachedRouteKind.IMAGE,
+          etag,
+          buffer,
+          extension,
+          upstreamEtag,
+        },
+        revalidateAfter:
+          Math.max(maxAge, this.nextConfig.images.minimumCacheTTL) * 1000 +
+          Date.now(),
+        cacheControl: { revalidate: maxAge, expire: undefined },
+        isStale: now > expireAt,
       }
     } catch (_) {
       // failed to read from cache dir, treat as cache miss
@@ -630,18 +700,28 @@
       return
     }
 
-    // Fall back to filesystem cache
-    if (!this.nextConfig.experimental.isrFlushToDisk) {
+    // If the filesystem cache is disabled, return early
+    if (!this.isDiskCacheEnabled) {
       return
     }
 
+    // Fall back to filesystem cache
     const expireAt =
       Math.max(revalidate, this.nextConfig.images.minimumCacheTTL) * 1000 +
       Date.now()
 
     try {
+      const lru = await this.cacheDiskLRU
+      const success = lru?.set(cacheKey, value.buffer.byteLength)
+      if (success === false) {
+        throw new Error(
+          `image of size ${value.buffer.byteLength} could not be tracked by lru cache`
+        )
+      }
+
       await writeToCacheDir(
-        join(/* turbopackIgnore: true */ this.cacheDir, cacheKey),
+        this.cacheDir,
+        cacheKey,
         value.extension,
         revalidate,
         expireAt,
```
`packages/next/src/server/lib/disk-lru-cache.external.ts` · +60 −0 · added

```diff
@@ -0,0 +1,60 @@
+import { promises } from 'fs'
+import { LRUCache } from './lru-cache'
+
+/**
+ * Module-level LRU singleton for disk cache eviction.
+ * Initialized once on first `set()`, shared across all consumers.
+ * Once resolved, the promise stays resolved — subsequent calls just await the cached result.
+ */
+let _diskLRUPromise: Promise<LRUCache<number>> | null = null
+
+/**
+ * Initialize or return the module-level LRU for disk cache eviction.
+ * Concurrent calls are deduplicated via the shared promise.
+ *
+ * @param cacheDir - The directory where cached files are stored
+ * @param maxDiskSize - Maximum disk cache size in bytes
+ * @param readEntries - Callback to scan existing cache entries (format-agnostic)
+ */
+export async function getOrInitDiskLRU(
+  cacheDir: string,
+  maxDiskSize: number | undefined,
+  readEntries: (
+    cacheDir: string
+  ) => Promise<Array<{ key: string; size: number; expireAt: number }>>,
+  evictEntry: (cacheDir: string, cacheKey: string) => Promise<void>
+): Promise<LRUCache<number>> {
+  if (!_diskLRUPromise) {
+    _diskLRUPromise = (async () => {
+      let maxSize = maxDiskSize
+      if (typeof maxSize === 'undefined') {
+        // Ensure cacheDir exists before checking disk space
+        await promises.mkdir(cacheDir, { recursive: true })
+        // Since config was not provided, default to 50% of available disk space
+        const { bavail, bsize } = await promises.statfs(cacheDir)
+        maxSize = Math.floor((bavail * bsize) / 2)
+      }
+
+      const lru = new LRUCache<number>(
+        maxSize,
+        (size) => size,
+        (cacheKey) => evictEntry(cacheDir, cacheKey)
+      )
+
+      const entries = await readEntries(cacheDir)
+      for (const entry of entries) {
+        lru.set(entry.key, entry.size)
+      }
+
+      return lru
+    })()
+  }
+  return _diskLRUPromise
+}
+
+/**
+ * Reset the module-level LRU singleton. Exported for testing only.
+ */
+export function resetDiskLRU(): void {
+  _diskLRUPromise = null
+}
```
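The module above relies on a common pattern: memoize the initialization promise itself so that concurrent callers share a single initialization. Stripped to its essence (names here are illustrative, not from the patch):

```javascript
// Promise-memoized singleton: the first caller kicks off init, every
// other caller awaits the same promise, so init runs exactly once.
let _instancePromise = null

function getOrInit(init) {
  if (!_instancePromise) {
    _instancePromise = init()
  }
  return _instancePromise
}
```

Because the promise is cached rather than the resolved value, two callers racing before initialization finishes still end up awaiting the same in-flight work.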
`packages/next/src/server/lib/lru-cache.test.ts` · +25 −4 · modified

```diff
@@ -9,7 +9,7 @@ describe('LRUCache', () => {
   })
 
   it('should set and get values', () => {
-    cache.set('key1', 'value1')
+    expect(cache.set('key1', 'value1')).toBe(true)
     expect(cache.get('key1')).toBe('value1')
   })
 
@@ -105,11 +105,11 @@
     expect(cache.currentSize).toBe(8) // 5 + 2 + 1
   })
 
-  it('should handle items larger than max size', () => {
+  it('should prevent adding item larger than max size when lru is empty', () => {
     const consoleSpy = jest.spyOn(console, 'warn').mockImplementation()
     const cache = new LRUCache<string>(5, (value) => value.length)
 
-    cache.set('key1', 'toolarge') // size 8 > maxSize 5
+    expect(cache.set('key1', 'toolarge')).toBe(false) // size 8 > maxSize 5
 
     expect(cache.has('key1')).toBe(false)
     expect(cache.size).toBe(0)
@@ -120,6 +120,27 @@
     consoleSpy.mockRestore()
   })
 
+  it('should prevent adding item larger than max size when lru is not empty', () => {
+    const consoleSpy = jest.spyOn(console, 'warn').mockImplementation()
+    const cache = new LRUCache<string>(5, (value) => value.length)
+
+    expect(cache.set('key1', 'ab')).toBe(true) // size 2
+    expect(cache.set('key2', 'cd')).toBe(true) // size 2, total = 4
+
+    expect(cache.set('key3', 'toolarge')).toBe(false) // size 8 > maxSize 5, should be rejected
+
+    expect(cache.has('key1')).toBe(true)
+    expect(cache.has('key2')).toBe(true)
+    expect(cache.has('key3')).toBe(false)
+    expect(cache.size).toBe(2)
+    expect(cache.currentSize).toBe(4)
+    expect(consoleSpy).toHaveBeenCalledWith(
+      'Single item size exceeds maxSize'
+    )
+
+    consoleSpy.mockRestore()
+  })
+
   it('should update size when overwriting existing keys', () => {
     const cache = new LRUCache<string>(10, (value) => value.length)
 
@@ -184,7 +205,7 @@ describe('Edge Cases', () => {
     it('should handle zero max size', () => {
       const cache = new LRUCache<string>(0)
-      cache.set('key1', 'value1')
+      expect(cache.set('key1', 'value1')).toBe(false)
       expect(cache.has('key1')).toBe(false)
       expect(cache.size).toBe(0)
     })
```
`packages/next/src/server/lib/lru-cache.ts` · +4 −2 · modified

```diff
@@ -123,7 +123,7 @@
    * - O(1) for uniform item sizes
    * - O(k) where k is the number of items evicted (can be O(N) for variable sizes)
    */
-  public set(key: string, value: T): void {
+  public set(key: string, value: T): boolean {
     const size = this.calculateSize?.(value) ?? 1
     if (size <= 0) {
       throw new Error(
@@ -133,7 +133,7 @@
     }
     if (size > this.maxSize) {
       console.warn('Single item size exceeds maxSize')
-      return
+      return false
     }
 
     const existing = this.cache.get(key)
@@ -158,6 +158,8 @@
       this.totalSize -= tail.size
       this.onEvict?.(tail.key, tail.data)
     }
+
+    return true
   }
```
`packages/next/src/shared/lib/image-config.ts` · +4 −0 · modified

```diff
@@ -103,6 +103,9 @@ export type ImageConfigComplete = {
   /** @see [Acceptable formats](https://nextjs.org/docs/api-reference/next/image#acceptable-formats) */
   formats: ImageFormat[]
 
+  /** @see [Maximum Disk Cache Size (in bytes)](https://nextjs.org/docs/api-reference/next/image#maximumdiskcachesize) */
+  maximumDiskCacheSize: number | undefined
+
   /** @see [Maximum Redirects](https://nextjs.org/docs/api-reference/next/image#maximumredirects) */
   maximumRedirects: number
@@ -156,6 +159,7 @@ export const imageConfigDefault: ImageConfigComplete = {
   disableStaticImages: false,
   minimumCacheTTL: 14400, // 4 hours
   formats: ['image/webp'],
+  maximumDiskCacheSize: undefined, // auto-detect by default
   maximumRedirects: 3,
   maximumResponseBody: 50_000_000, // 50 MB
   dangerouslyAllowLocalIP: false,
```
`test/integration/image-optimizer/test/max-disk-size-cache-85kb.test.ts` · +13 −0 · added

```diff
@@ -0,0 +1,13 @@
+import { join } from 'path'
+import { setupTests } from './util'
+
+const appDir = join(__dirname, '../app')
+
+describe('with maximumDiskCacheSize 85KB config', () => {
+  setupTests({
+    appDir,
+    nextConfigImages: {
+      maximumDiskCacheSize: 85_000,
+    },
+  })
+})
```
`test/integration/image-optimizer/test/max-disk-size-cache-zero.test.ts` · +13 −0 · added

```diff
@@ -0,0 +1,13 @@
+import { join } from 'path'
+import { setupTests } from './util'
+
+const appDir = join(__dirname, '../app')
+
+describe('with maximumDiskCacheSize zero config', () => {
+  setupTests({
+    appDir,
+    nextConfigImages: {
+      maximumDiskCacheSize: 0,
+    },
+  })
+})
```
`test/integration/image-optimizer/test/util.ts` · +99 −5 · modified

```diff
@@ -13,6 +13,7 @@ import {
   launchApp,
   nextBuild,
   nextStart,
+  retry,
   waitFor,
 } from 'next-test-utils'
 import isAnimated from 'next/dist/compiled/is-animated'
@@ -122,6 +123,22 @@ export const cleanImagesDir = async (imagesDir) => {
   await fs.remove(imagesDir)
 }
 
+async function getDirSize(dir: string): Promise<number> {
+  let totalSize = 0
+  const entries = await fs.readdir(dir).catch(() => [] as string[])
+  for (const entry of entries) {
+    const entryPath = join(dir, entry)
+    const stat = await fs.stat(entryPath).catch(() => null)
+    if (!stat) continue
+    if (stat.isDirectory()) {
+      totalSize += await getDirSize(entryPath)
+    } else {
+      totalSize += stat.size
+    }
+  }
+  return totalSize
+}
+
 async function expectAvifSmallerThanWebp(
   w: number,
   q: number,
@@ -979,7 +996,10 @@ export function runTests(ctx: RunTestsCtx) {
   })
 
   it('should use cache and stale-while-revalidate when query is the same for external image', async () => {
-    if (ctx.nextConfigExperimental?.isrFlushToDisk === false) {
+    if (
+      ctx.nextConfigExperimental?.isrFlushToDisk === false ||
+      ctx.nextConfigImages?.maximumDiskCacheSize === 0
+    ) {
       return // this test is not applicable when we don't write the cache
     }
     await cleanImagesDir(imagesDir)
@@ -1201,7 +1221,10 @@
   it('should use cache and stale-while-revalidate when query is the same for internal image', async () => {
-    if (ctx.nextConfigExperimental?.isrFlushToDisk === false) {
+    if (
+      ctx.nextConfigExperimental?.isrFlushToDisk === false ||
+      ctx.nextConfigImages?.maximumDiskCacheSize === 0
+    ) {
       return // this test is not applicable when we don't write the cache
     }
     await cleanImagesDir(imagesDir)
@@ -1348,7 +1371,10 @@
   it('should use cached image file when parameters are the same for animated gif', async () => {
-    if (ctx.nextConfigExperimental?.isrFlushToDisk === false) {
+    if (
+      ctx.nextConfigExperimental?.isrFlushToDisk === false ||
+      ctx.nextConfigImages?.maximumDiskCacheSize === 0
+    ) {
       return // this test is not applicable when we don't write the cache
     }
     await cleanImagesDir(imagesDir)
@@ -1455,7 +1481,10 @@
       `${contentDispositionType}; filename="test.bmp"`
     )
 
-    if (ctx.nextConfigExperimental?.isrFlushToDisk === false) {
+    if (
+      ctx.nextConfigExperimental?.isrFlushToDisk === false ||
+      ctx.nextConfigImages?.maximumDiskCacheSize === 0
+    ) {
       expect(json1).toEqual({})
       expect(await fsToJson(ctx.imagesDir)).toEqual({})
     } else {
@@ -1583,7 +1612,10 @@
       await expectWidth(res3, ctx.w)
 
       const length =
-        ctx.nextConfigExperimental?.isrFlushToDisk === false ? 0 : 1
+        ctx.nextConfigExperimental?.isrFlushToDisk === false ||
+        ctx.nextConfigImages?.maximumDiskCacheSize === 0
+          ? 0
+          : 1
 
       await check(async () => {
         const json1 = await fsToJson(ctx.imagesDir)
@@ -1600,6 +1632,68 @@
       expect(xCache).toEqual(['MISS', 'MISS', 'MISS'])
     })
   }
+
+  if (typeof ctx.nextConfigImages?.maximumDiskCacheSize !== 'undefined') {
+    const { maximumDiskCacheSize } = ctx.nextConfigImages
+    it(`should handle maximumDiskCacheSize ${maximumDiskCacheSize}`, async () => {
+      const opts = { headers: { accept: 'image/webp' } }
+      const requests = [
+        { url: '/test.png', w: largeSize },
+        { url: '/test.jpg', w: largeSize },
+        { url: '/test.gif', w: largeSize },
+        { url: '/test.bmp', w: largeSize },
+        { url: '/test.webp', w: largeSize },
+        { url: '/test.avif', w: largeSize },
+        { url: '/test.tiff', w: largeSize },
+        { url: '/test.ico', w: largeSize },
+        { url: '/animated.gif', w: largeSize },
+        { url: '/animated.png', w: largeSize },
+        { url: '/animated2.png', w: largeSize },
+      ]
+      await cleanImagesDir(imagesDir)
+      const json1 = await fsToJson(ctx.imagesDir)
+      expect(Object.keys(json1).length).toEqual(0)
+      for (const { url, w } of requests) {
+        const query = { url, w, q: ctx.q }
+        const res = await fetchViaHTTP(ctx.appPort, '/_next/image', query, opts)
+        expect(res.status).toBe(200)
+        await res.buffer() // consume response body
+        await retry(async () => {
+          const size = await getDirSize(imagesDir)
+          expect(size).toBeLessThanOrEqual(maximumDiskCacheSize)
+        })
+      }
+
+      const json2 = await fsToJson(ctx.imagesDir)
+      const json2Length = Object.keys(json2).length
+      if (maximumDiskCacheSize === 0) {
+        expect(json2Length).toEqual(0)
+      } else {
+        expect(json2Length).toBeGreaterThan(0)
+      }
+
+      const res = await fetchViaHTTP(
+        ctx.appPort,
+        '/_next/image',
+        { url: '/mountains.jpg', w: ctx.w, q: ctx.q },
+        opts
+      )
+      expect(res.status).toBe(200)
+
+      await retry(async () => {
+        const json3 = await fsToJson(ctx.imagesDir)
+        const json3Length = Object.keys(json3).length
+        if (maximumDiskCacheSize === 0) {
+          expect(json3Length).toEqual(0)
+        } else {
+          expect(json3Length).toBeGreaterThan(0)
+          expect(json3).not.toStrictEqual(json2)
+        }
+        const size = await getDirSize(imagesDir)
+        expect(size).toBeLessThanOrEqual(maximumDiskCacheSize)
+      })
+    })
+  }
 }
 
 export const setupTests = (ctx: SetupTestsCtx) => {
```
`test/unit/image-optimizer/lru-disk-eviction.test.ts` · +157 −0 · added

```diff
@@ -0,0 +1,157 @@
+/* eslint-env jest */
+import { join } from 'path'
+import { promises } from 'fs'
+import { tmpdir } from 'os'
+import { setTimeout } from 'timers/promises'
+import {
+  getOrInitDiskLRU,
+  resetDiskLRU,
+} from 'next/dist/server/lib/disk-lru-cache.external'
+
+async function writeEntry(
+  cacheDir: string,
+  key: string,
+  sizeInBytes: number,
+  expireAt: number = Date.now() + 60_000
+) {
+  const dir = join(cacheDir, key)
+  const buffer = Buffer.alloc(sizeInBytes, 0x42) // Fill with dummy data
+  await promises.mkdir(dir, { recursive: true })
+  await promises.writeFile(join(dir, `${expireAt}.bin`), buffer)
+}
+
+async function readEntry(cacheDir: string, key: string) {
+  const dir = join(cacheDir, key)
+  const [file] = await promises.readdir(dir)
+  const buffer = await promises.readFile(join(dir, file))
+  const [expireAtStr] = file.split('.')
+  return { size: buffer.byteLength, expireAt: Number(expireAtStr) }
+}
+
+async function initEntries(
+  cacheDir: string
+): Promise<Array<{ key: string; size: number; expireAt: number }>> {
+  const keys = await promises.readdir(cacheDir).catch(() => [])
+  const entries: Array<{ key: string; size: number; expireAt: number }> = []
+
+  for (const key of keys) {
+    const { size, expireAt } = await readEntry(cacheDir, key)
+    entries.push({ key, size, expireAt })
+  }
+
+  // Sort oldest-first so we can replay them chronologically into LRU
+  return entries.sort((a, b) => a.expireAt - b.expireAt)
+}
+
+async function rmEntry(cacheDir: string, cacheKey: string): Promise<void> {
+  await promises.rm(join(cacheDir, cacheKey), { recursive: true, force: true })
+}
+
+describe('LRU disk eviction', () => {
+  let cacheDir: string
+
+  beforeEach(async () => {
+    cacheDir = await promises.mkdtemp(join(tmpdir(), 'next-lru-test-'))
+    resetDiskLRU()
+  })
+
+  afterEach(async () => {
+    resetDiskLRU()
+    await promises.rm(cacheDir, { recursive: true, force: true })
+  })
+
+  it('should evict oldest entries on initialization', async () => {
+    const expireAt = Date.now() + 60_000
+    // Write 4 entries of 400 bytes each (total 1600)
+    await writeEntry(cacheDir, 'entry-a', 400, expireAt + 1)
+    await writeEntry(cacheDir, 'entry-b', 400, expireAt + 2)
+    await writeEntry(cacheDir, 'entry-c', 400, expireAt + 3)
+    await writeEntry(cacheDir, 'entry-d', 400, expireAt + 4)
+
+    // Init LRU with 1500 byte limit (less than 1600 current total)
+    const lru = await getOrInitDiskLRU(cacheDir, 1500, initEntries, rmEntry)
+
+    // entry-a should have been evicted (oldest)
+    expect(lru.has('entry-a')).toBe(false)
+    expect(lru.has('entry-b')).toBe(true)
+    expect(lru.has('entry-c')).toBe(true)
+    expect(lru.has('entry-d')).toBe(true)
+
+    // Verify disk eviction (fire-and-forget, so wait a tick)
+    await setTimeout(100)
+    const contents = await promises.readdir(cacheDir)
+    expect(contents).toEqual(['entry-b', 'entry-c', 'entry-d'])
+  })
+
+  it('should evict old entries when new entries are set', async () => {
+    const lru = await getOrInitDiskLRU(cacheDir, 1000, initEntries, rmEntry)
+
+    // Add entries via LRU set (simulating what ImageOptimizerCache.set does)
+    await writeEntry(cacheDir, 'new-a', 400)
+    await writeEntry(cacheDir, 'new-b', 400)
+    lru.set('new-a', 400)
+    lru.set('new-b', 400)
+
+    // Both should exist
+    expect(lru.has('new-a')).toBe(true)
+    expect(lru.has('new-b')).toBe(true)
+
+    // Adding a third entry should evict the oldest (new-a)
+    await writeEntry(cacheDir, 'new-c', 400)
+    lru.set('new-c', 400)
+
+    expect(lru.has('new-a')).toBe(false)
+    expect(lru.has('new-b')).toBe(true)
+    expect(lru.has('new-c')).toBe(true)
+
+    // Verify disk eviction (fire-and-forget, wait a tick)
+    await setTimeout(100)
+    const contents = await promises.readdir(cacheDir)
+    expect(contents).toEqual(['new-b', 'new-c'])
+  })
+
+  it('should promote entries on get() to prevent eviction', async () => {
+    const lru = await getOrInitDiskLRU(cacheDir, 1000, initEntries, rmEntry)
+
+    await writeEntry(cacheDir, 'x', 400)
+    await writeEntry(cacheDir, 'y', 400)
+    lru.set('x', 400)
+    lru.set('y', 400)
+
+    // Access 'x' to promote it (mark as recently used)
+    lru.get('x')
+
+    // Add 'z' - should evict 'y' (least recently used) instead of 'x'
+    await writeEntry(cacheDir, 'z', 400)
+    lru.set('z', 400)
+
+    expect(lru.has('x')).toBe(true)
+    expect(lru.has('y')).toBe(false)
+    expect(lru.has('z')).toBe(true)
+  })
+
+  it('should return the same LRU instance on subsequent calls', async () => {
+    const lru1 = await getOrInitDiskLRU(cacheDir, 1000, initEntries, rmEntry)
+    const lru2 = await getOrInitDiskLRU(cacheDir, 1000, initEntries, rmEntry)
+    expect(lru1 === lru2).toBeTrue()
+  })
+
+  it('should deduplicate concurrent init calls', async () => {
+    const [lru1, lru2] = await Promise.all([
+      getOrInitDiskLRU(cacheDir, 1000, initEntries, rmEntry),
+      getOrInitDiskLRU(cacheDir, 1000, initEntries, rmEntry),
+    ])
+    expect(lru1 === lru2).toBeTrue()
+  })
+
+  it('should handle empty cache directory', async () => {
+    const lru = await getOrInitDiskLRU(cacheDir, 1000, initEntries, rmEntry)
+    expect(lru.size).toBe(0)
+  })
+
+  it('should handle non-existent cache directory', async () => {
+    const missing = join(cacheDir, 'this-does-not-exist')
+    const lru = await getOrInitDiskLRU(missing, 1000, initEntries, rmEntry)
+    expect(lru.size).toBe(0)
+  })
+})
```
Vulnerability mechanics
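In outline, per the advisory description: every distinct combination of source URL, width (`w`), and quality (`q`) accepted by `/_next/image` produces a separate optimized file on disk, and prior to 16.1.7 / 15.5.14 nothing bounded the cache's total size. A back-of-the-envelope sketch of how variant counts multiply; all numbers below are illustrative assumptions, not measurements:

```javascript
// Worst-case cache growth estimate. All inputs are hypothetical.
function estimateCacheBytes({ urls, widths, qualities, avgBytesPerVariant }) {
  const variants = urls * widths * qualities
  return { variants, bytes: variants * avgBytesPerVariant }
}

const est = estimateCacheBytes({
  urls: 10_000, // distinct source images an attacker can reference
  widths: 16, // entries in deviceSizes/imageSizes
  qualities: 20, // allowed quality values before tightening `images.qualities`
  avgBytesPerVariant: 100_000, // ~100 KB per optimized output
})
// est.variants === 3_200_000; est.bytes === 320_000_000_000 (~320 GB)
```

This is why the mitigations target both sides of the product: the LRU cap bounds total bytes, while tighter `localPatterns`/`remotePatterns`/`qualities` shrink the number of variants an attacker can force.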
References
- github.com/advisories/GHSA-3x4c-7xq6-9pq8 (GitHub Advisory)
- nvd.nist.gov/vuln/detail/CVE-2026-27980 (NVD)
- github.com/vercel/next.js/commit/39eb8e0ac498b48855a0430fbf4c22276a73b4bd (fix commit)
- github.com/vercel/next.js/releases/tag/v16.1.7 (release)
- github.com/vercel/next.js/security/advisories/GHSA-3x4c-7xq6-9pq8 (vendor advisory)
News mentions
- The Good, the Bad and the Ugly in Cybersecurity – Week 19 — SentinelOne Labs · May 8, 2026