
cache: avoid hardcoded zone alignment
Needs Review · Public

Authored by brooks on Fri, Dec 26, 7:35 PM.

Details

Reviewers
mjg
jhb
olce
Group Reviewers
cheri
Summary

Previously, this was underaligned on CHERI systems, where pointers are
larger than time_t.

Use the alignment of struct namecache_ts, which picks up time_t via struct
timespec and pointers via struct namecache. This arguably overaligns
cache_zone_small and cache_zone_large on i386 kernels, but I suspect the
actual microarchitectures most i386 binaries run on do better with
64-bit alignment.

Remove the comment about mips since it's long gone.

Effort: CHERI upstreaming
Sponsored by: Innovate UK
Fixes: cf8ac0de8150 ("cache: reduce zone alignment to 8 bytes")

Diff Detail

Repository
rG FreeBSD src repository
Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 69481
Build 66364: arc lint + arc unit

Event Timeline

This passes a universe build with:

_Static_assert(CACHE_ZONE_ALIGNMENT == UMA_ALIGNOF(time_t), "impossible things happened");
_Static_assert(CACHE_ZONE_ALIGNMENT >= UMA_ALIGNOF(void *), "impossible things happened");

inserted after the CACHE_ZONE_ALIGNMENT definition.
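
That is, the alignment is now derived from the struct rather than hardcoded, roughly (a sketch; see the diff for the exact form):

#define CACHE_ZONE_ALIGNMENT    UMA_ALIGNOF(struct namecache_ts)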

The original assessment of alignment was in error (UMA_ALIGNOF has always used alignof()).

sys/kern/vfs_cache.c
387

This comment is stale: since commit rG0bbae6f3648895c7f9211a94ddbfbdb25b30ecc0, namecache_ts embeds a struct namecache rather than mimicking its first N members. This is an existing bug of course, not one introduced by the alignment issue you are fixing.

407

I'd be tempted to say "Ensure all zones are sufficiently aligned to hold either a struct namecache or a struct namecache_ts object.", as I think that is clearer. Even better perhaps would be to just use the right type for each zone, but that breaks the static assertions.

I think you could also remove the need for CACHE_LARGE_PAD by using roundup2() as needed, e.g.:

#define CACHE_ZONE_LARGE_SIZE    roundup2(offsetof(struct namecache, nc_name) + NAME_MAX + 1, CACHE_ZONE_ALIGNMENT + 1)

(The "+ 1" is because UMA_ALIGNOF() yields an alignment mask, i.e. alignof() - 1, while roundup2() takes the alignment itself.)

This would avoid the need for a 3rd variant of that constant for CHERI? Possibly something similar could be done to derive CACHE_PATH_CUTOFF if you had something like:

#define CACHE_PATH_CUTOFF_MIN    40
#define CACHE_STRUCT_LEN(pathlen)   (offsetof(struct namecache, nc_name) + (pathlen) + 1)
#define CACHE_PATH_CUTOFF   (roundup2(CACHE_STRUCT_LEN(CACHE_PATH_CUTOFF_MIN), CACHE_ZONE_ALIGNMENT + 1) - CACHE_STRUCT_LEN(0))

Coupled with the suggestion above, this removes the need for any LP64 (or CHERI) #ifdefs at all. It also makes it easier to adjust the split between the small and large zones in the future based on path lengths, as the existing comment suggests. See the sketch below.
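
Concretely, the remaining size macros might then collapse to something like this sketch (the CACHE_ZONE_*_SIZE names match the existing code; the shape of the TS variants is assumed from it):

#define CACHE_ZONE_SMALL_SIZE       CACHE_STRUCT_LEN(CACHE_PATH_CUTOFF)
#define CACHE_ZONE_SMALL_TS_SIZE    (offsetof(struct namecache_ts, nc_nc) + CACHE_ZONE_SMALL_SIZE)
#define CACHE_ZONE_LARGE_TS_SIZE    (offsetof(struct namecache_ts, nc_nc) + CACHE_ZONE_LARGE_SIZE)

with CACHE_ZONE_LARGE_SIZE as above and no __LP64__ conditionals left.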

(I'd perhaps be tempted to also fix CACHE_ZONE_ALIGNMENT to just be alignof(struct namecache_ts) and only use UMA_ALIGNOF when creating the zones to avoid the need to add the "+ 1" in various places such as the static assertions and the suggestions above.)
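
A sketch of that last variant, assuming the uma_zcreate() calls in cache_init() are the only consumers that want UMA's mask form (the arguments shown are illustrative):

#define CACHE_ZONE_ALIGNMENT    alignof(struct namecache_ts)

	/* Zone creation passes the UMA-style mask; everything else uses the plain alignment. */
	cache_zone_small = uma_zcreate("S VFS Cache", CACHE_ZONE_SMALL_SIZE,
	    NULL, NULL, NULL, NULL, UMA_ALIGNOF(struct namecache_ts),
	    UMA_ZONE_ZINIT);

The static assertions and the roundup2() calls above would then use CACHE_ZONE_ALIGNMENT directly, with no "+ 1".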