
geom_part: Check number of GPT entries and size of GPT entry
ClosedPublic

Authored by zlei on Sep 26 2022, 3:20 PM.
Details

Summary

The current specification places no upper limit on the number of partition entries or on the size of a partition entry. In 799eac8c3df597179bbb3b078362f3ff03993a1a, Andrey V. Elsukov limited the maximum number of GPT entries to 4k, but only in the write routine (gpart create). When attaching a disk whose number of GPT entries exceeds that limit, or whose partition entries are oversized, it is still possible to exhaust kernel memory.

  1. Reuse the limit of the maximum number of partition entries.
  2. Limit the maximum size of GPT entry to 1k.

In the current specification (2.10) the size of a GPT entry is 128 * 2^n for n >= 0, and the space beyond the first 128 bytes is reserved. 1k should be sufficient for the foreseeable future.

PR: 266548

Test Plan

Attach a specially crafted disk image made with gpart (using a kernel modified to allow creating a large number of GPT entries):

root@:~ # truncate -s 64M test.img
root@:~ # mdconfig test.img
md0
root@:~ # gpart create -s GPT -n 8192 md0
md0 created
root@:~ # mdconfig -du 0

Reattach the disk image.

root@:~ # mdconfig test.img
GEOM: md0: unsupported GPT detected.
GEOM: md0: number of GPT entries: 8192, entry size: 128B.
GEOM: md0: maximum supported number of GPT entries: 4096, entry size: 1024B.
GEOM: md0: GPT rejected.

Diff Detail

Repository
rG FreeBSD src repository

Event Timeline

sys/geom/part/g_part_gpt.c
153

Might this deserve a tunable?

sys/geom/part/g_part_gpt.c
526

If you made it a tunable, then you'd use that size here, but keep the gps_maxent as above.
If you don't make it a tunable, then maybe you should use g_part_gpt_scheme.gps_maxent here.

Even though I suggested a tunable, I'm now less sure that it is a good idea than when I suggested it.

974

Other than a better error message, what purpose does this code serve? The following catch-all would already capture this (g_free(NULL) is a nop).

Thanks for working on this.

sys/geom/part/g_part_gpt.c
526

I think this check should come before the multiplication. Then we don't have to worry about overflow.

I don't have a strong feeling about having a tunable. We can always add one after the fact.

974

I agree. It is probably worth reporting the problem with printf(), but 1) we should say _why_ the GPT is unsupported, 2) we should do it when reading the table, I think.

Reading the GPT table fully seems too heavy just to report "unsupported" accurately; we try our best here.
I'm still unsure, since it should be rare for both the primary and backup table headers to be corrupt at the same time yet have good CRCs and be "equal" to each other.

zlei marked an inline comment as done.Sep 27 2022, 3:24 PM
zlei marked an inline comment as done.Sep 27 2022, 3:42 PM
zlei added inline comments.
sys/geom/part/g_part_gpt.c
526

I think this check should come before the multiplication. Then we don't have to worry about overflow.

If the table size check fails, then the header is definitely invalid. We must not report an invalid header as unsupported.

To clarify: "unsupported" means the header looks valid but is not supported (due to implementation limits).

974

I agree. It is probably worth reporting the problem with printf(), but 1) we should say _why_ the GPT is unsupported,

I'm still unsure about that. Should we report the maximum supported entry count and entry size?

  1. we should do it when reading the table, I think.

Reading a large table is too heavy.

zlei marked an inline comment as done.Sep 28 2022, 4:36 AM
  1. we should do it when reading the table, I think.

Or maybe we read at most, say, 1M of the table and report "unsupported" when the header's table size exceeds that.
This would be precise and not that heavy.

I misread @markj 's comment:

  1. we should do it when reading the table, I think.

The GPT header itself is sane even when the entry count or entry size is not supported, so marking the GPT header as NOT_SUPPORTED does not make sense. Move the check to gpt_read_tbl().

Also add a detailed error message explaining why the table is not supported.

zlei edited the test plan for this revision. (Show Details)

Looks ok to me, thanks. Just some minor notes about the messages printed to the console.

sys/geom/part/g_part_gpt.c
977
979

These two messages would be clearer to me if you replaced "number" with "count". Or write "number of GPT entries" instead of "GPT entry number".

979
zlei edited the test plan for this revision. (Show Details)

Update as @markj suggested.

Oops, add the missing unit, B(ytes).

zlei marked 5 inline comments as done.Oct 6 2022, 4:44 AM
zlei added inline comments.
sys/geom/part/g_part_gpt.c
974

Other than a better error message, what purpose does this code serve? The following catch-all would already capture this (g_free(NULL) is a nop).

It deserves a good error message.
I've planned to emit some NULL checks below.

markj added inline comments.
sys/geom/part/g_part_gpt.c
977
979
This revision is now accepted and ready to land.Oct 6 2022, 1:37 PM
This revision now requires review to proceed.Oct 6 2022, 2:01 PM
zlei marked 2 inline comments as done.Oct 6 2022, 2:02 PM
zlei added inline comments.
sys/geom/part/g_part_gpt.c
977

My bad ...

This revision was not accepted when it landed; it landed in state Needs Review.Oct 18 2022, 3:19 PM
This revision was automatically updated to reflect the committed changes.
zlei marked an inline comment as done.Oct 19 2022, 1:06 AM

@imp @markj Thanks for the review!