
geom_part: Fix potential integer overflow when checking size of the table
ClosedPublic

Authored by zlei on Sep 26 2022, 9:49 AM.

Details

Summary

hdr_entries and hdr_entsz are both uint32_t, as defined in the UEFI spec. The current spec places no upper limit on the number of partition entries or on the size of a partition entry, so a malicious or corrupted GPT header read from an untrusted source may contain a huge entry count or entry size, and their 32-bit product can overflow.

PR: 266548
MFC after: 2 weeks
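
As a minimal illustration of the hazard described in the summary (not the actual geom_part code, just the arithmetic it has to guard against), consider two attacker-chosen header fields whose 32-bit product wraps while the widened product does not:

```c
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* Both fields are uint32_t per the UEFI spec; a corrupted header
	 * may carry values this large. */
	uint32_t hdr_entries = 0x01000000;	/* 16M entries */
	uint32_t hdr_entsz = 512;

	/* The 32-bit product wraps (0x01000000 * 0x200 == 0x200000000),
	 * so a size check done in 32 bits can be fooled. */
	uint32_t wrapped = hdr_entries * hdr_entsz;

	/* Widening one operand first keeps the full 64-bit product. */
	uint64_t table_size = (uint64_t)hdr_entries * hdr_entsz;

	printf("wrapped=%u real=%ju bytes\n", wrapped, (uintmax_t)table_size);
	return (0);
}
```

Casting one operand to a 64-bit type before the multiplication, as this diff does, is enough to make the subsequent size comparison meaningful.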

Diff Detail

Repository
rG FreeBSD src repository
Lint
Lint Not Applicable
Unit
Tests Not Applicable

Event Timeline

zlei requested review of this revision. Sep 26 2022, 9:49 AM
zlei created this revision.
zlei created this object with visibility "Custom Policy".

I'm not sure whether it is a security flaw or not, and thus set the visibility to only the author, @marcel, and secteam.

oshogbo added a subscriber: oshogbo.

I don't think we treat this as a security issue.

This revision is now accepted and ready to land. Sep 26 2022, 10:42 AM
zlei changed the visibility from "Custom Policy" to "Public (No Login Required)". Sep 26 2022, 11:44 AM

Shouldn't we have a specific limit on the number of GPT entries? This patch addresses the overflow but it's still possible to craft a GPT which triggers a very large memory allocation in the kernel, and there are other places in this file where we assume hdr->hdr_entries * hdr->hdr_entsz won't overflow.

There is similar code in stand/libsa/gpt.c. There we have a MAXTBLENTS limit of 128, which seems like a reasonable limit. (I previously thought that GPT was in fact limited to 128 partitions.)

Shouldn't we have a specific limit on the number of GPT entries? This patch addresses the overflow but it's still possible to craft a GPT which triggers a very large memory allocation in the kernel, and there are other places in this file where we assume hdr->hdr_entries * hdr->hdr_entsz won't overflow.

I've planned to resolve this in a separate diff.

There is similar code in stand/libsa/gpt.c. There we have a MAXTBLENTS limit of 128, which seems like a reasonable limit. (I previously thought that GPT was in fact limited to 128 partitions.)

Actually, @ae introduced a limit of 4K in 799eac8c3df597179bbb3b078362f3ff03993a1a, but that is for the write routine.
I'm planning to reuse this limit (see the sketch after this comment).

I previously thought that GPT was in fact limited to 128 partitions.

Ref: https://uefi.org/specs/UEFI/2.10/05_GUID_Partition_Table_Format.html#gpt-overview
"A minimum of 16,384 bytes of space must be reserved for the GPT Partition Entry Array."

It is not a maximum.
IIRC Linux has a limit of 255 partitions.
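
As a side note on reusing the write-path cap: the following is only a hedged sketch of what such a read-side check could look like. The function and constant names are invented here, and the 4096 value simply mirrors the limit mentioned above.

```c
#include <stdbool.h>
#include <stdint.h>

#define	GPT_MAX_ENTRIES	4096	/* hypothetical cap, mirrors the write path */
#define	GPT_MIN_ENTSZ	128	/* minimum entry size per the UEFI spec */

/* Hypothetical validation helper, not the actual geom_part code. */
static bool
gpt_table_size_ok(uint32_t entries, uint32_t entsz, uint64_t mediasize)
{
	/* Reject absurd counts before doing any arithmetic. */
	if (entries == 0 || entries > GPT_MAX_ENTRIES)
		return (false);
	/* The spec requires the entry size to be 128 * 2^n. */
	if (entsz < GPT_MIN_ENTSZ || (entsz & (entsz - 1)) != 0)
		return (false);
	/* The widened product cannot overflow uint64_t and is now bounded. */
	return ((uint64_t)entries * entsz <= mediasize);
}
```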

In D36709#833561, @zlei.huang_gmail.com wrote:

I previously thought that GPT was in fact limited to 128 partitions.

Ref: https://uefi.org/specs/UEFI/2.10/05_GUID_Partition_Table_Format.html#gpt-overview
"A minimum of 16,384 bytes of space must be reserved for the GPT Partition Entry Array."

It is not a maximum.
IIRC Linux has a limit of 255 partitions.

Windows supports a maximum of 128 partitions. De facto, that's the limit, even if there are ways to specify larger numbers of partitions. I believe that's why it was selected back in the day... though it may be because 128 partitions is 16k, which is the maximum buffer size in the BIOS loader due to space limitations.

https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/windows-and-gpt-faq?view=windows-11

255 is a weird limit since it wastes at least one entry. The real limit is 32MB in Linux. It limits the size to this, but
doesn't seem to limit the overflow potential except by casting to u64 before multiplication, which is exactly what this
patch is doing. They seem to only check on creation and reject partition schemes that exceed their 32MB limit. But
they also reject entries that aren't exactly the right size (we only reject too small).

I think a 4k limit is fine, and just as arbitrary as Linux's limits (which on machines with less address space will be smaller). Heck, given Windows limiting it to 128, that would be fine too, imho, but since we have been able to write tables with 4k entries, we should be able to read them. Of course, this weighing of options suggests a tunable/sysctl for the actual limit (see the sketch after this comment)...

So that's a long way of saying 'I think this change is fine' and 'I agree with Mark that more is also needed'

I also agree this needn't be a security issue... any exploit requires root or physical access to the machine...
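
If the tunable route were taken, a FreeBSD sysctl sketch might look like the following. The knob name kern.geom.part.gpt_max_entries and its default are invented for illustration and do not exist today.

```c
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

SYSCTL_DECL(_kern_geom_part);

/* Hypothetical knob; the default mirrors the existing 4K write-path limit. */
static u_int gpt_max_entries = 4096;
SYSCTL_UINT(_kern_geom_part, OID_AUTO, gpt_max_entries, CTLFLAG_RWTUN,
    &gpt_max_entries, 0,
    "Maximum number of GPT partition entries accepted when reading a table");
```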

The child revision D36717 looks good. Can we proceed?

I'd like to commit this if there is no objection.

@kp @melifaro Any ideas?

This still looks good to me; I think we're good to go with mentor's approval...

It looks like the same bug is repeated in the loader. In particular, hdr_entsz is not checked in stand/libsa/gpt.c and stand/common/part.c.

It looks like the same bug is repeated in the loader. In particular, hdr_entsz is not checked in stand/libsa/gpt.c and stand/common/part.c.

The loader has almost the same logic as geom_part.
If the bug is not fixed in the loader then users may boot from a corrupted disk, which is awful.

In D36709#858481, @zlei wrote:

It looks like the same bug is repeated in the loader. In particular, hdr_entsz is not checked in stand/libsa/gpt.c and stand/common/part.c.

The loader has almost the same logic as geom_part.
If the bug is not fixed in the loader then users may boot from a corrupted disk, which is awful.

Is it awful? If there's an overflow here, it's more likely the result would be a truncated answer, which typically means not being able to find the right thing, and we'd boot garbage. But more likely the sanity checks that are in place would catch it.

I'll work up a fix (if you don't beat me to it), but it looks like the construct is used in a lot of places, and I'm not sure that they would all be fixed with a simple cast, since at least some incarnations of crc32 that the result is passed to take a size_t, which is only 32 bits on i386...

In D36709#859036, @imp wrote:
In D36709#858481, @zlei wrote:

It looks like the same bug is repeated in the loader. In particular, hdr_entsz is not checked in stand/libsa/gpt.c and stand/common/part.c.

The loader has almost the same logic as geom_part.
If the bug is not fixed in the loader then users may boot from a corrupted disk, which is awful.

Is it awful? If there's an overflow here, it's more likely the result would be a truncated answer, which typically means not being able to find the right thing, and we'd boot garbage. But more likely the sanity checks that are in place would catch it.

I cannot say how awful it is.
I think we should not boot from a corrupted disk, even though during the boot process the disk is mounted read-only.
Users may choose to boot into single-user mode and try to 'fix' the corrupted disk, and then things will get even worse.
That is why I think the loader should be fixed as well.

I'll work up a fix (if you don't beat me to it), but it looks like the construct is used in a lot of places, and I'm not sure that they would all be fixed with a simple cast, since at least some incarnations of crc32 that the result is passed to take a size_t, which is only 32 bits on i386...

For GEOM on-disk schemes or network protocols I think it is clearly wrong to use machine-dependent data types.
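
To illustrate the size_t concern above (a sketch only, not loader code): even a correctly widened 64-bit length gets silently narrowed when it crosses an interface that takes a size_t on an ILP32 platform such as i386.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t len = (uint64_t)0x01000000 * 512;	/* 8 GiB, no 32-bit wrap */
	size_t narrowed = (size_t)len;	/* what a size_t parameter would receive */

	/* On i386 (32-bit size_t) narrowed is 0; on LP64 it matches len. */
	printf("len=%ju narrowed=%zu\n", (uintmax_t)len, narrowed);
	return (0);
}
```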