
intrng: address post-commit review feedback
Abandoned · Public

Authored by kevans on Fri, Oct 25, 4:07 AM.

Details

Reviewers
jrtc27
mhorne
andrew
manu
Group Reviewers
arm64
Summary

Rename enum root_type to properly namespace it and just use the type
directly in intr_irq_handler().

Drop commentary on self-explanatory enum element + redundant
assignments.

Stop casting intr_root_type around; it is an int.
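
A rough sketch of the shape this describes, with hypothetical names and a hypothetical signature (the actual diff may differ):

/*
 * Hypothetical sketch only: the enum is namespaced as intr_root_type and
 * intr_irq_handler() takes it directly, so callers no longer cast to or
 * from int.
 */
struct trapframe;

enum intr_root_type {
	INTR_ROOT_IRQ = 0,
	/* further root types, possibly conditional on the kernel config */
	INTR_ROOT_COUNT
};

void intr_irq_handler(struct trapframe *tf, enum intr_root_type root);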

Diff Detail

Repository
rG FreeBSD src repository
Lint
Lint Skipped
Unit
Tests Skipped
Build Status
Buildable 60186
Build 57070: arc lint + arc unit

Event Timeline

imho the original commit is wrong and should be reverted. Enum is clearly not the right type of variable for a low-level interface, especially if it should be used in assembler. A direct consequence of this problem is that the patch uses two different types (enum and u_register_t) for the same variable.

imho the original commit is wrong and should be reverted. Enum is clearly not the right type of variable for a low-level interface, especially if it should be used in assembler. A direct consequence of this problem is that the patch uses two different types (enum and u_register_t) for the same variable.

tbf the assembler part of this is rather minimal, and this change was meant to clear up the u_register_t goofiness entirely. Did I miss a spot?

ohh, sorry, taking back this part. I noticed u_register_t in the original commit and missed the subsequent change.
But my complaint about enum being a low-level interface remains. If I'm not mistaken, the enum size is not fixed, which is not a good feature for the interface.
You cannot assume sizeof(MyEnum) == sizeof(int) in general. See:

6.7.2.2 Enumeration specifiers
[...]
Constraints
The expression that defines the value of an enumeration constant shall be an integer constant expression that has a value representable as an int.
[...]
Each enumerated type shall be compatible with char, a signed integer type, or an unsigned integer type. The choice of type is implementation-defined, but shall be capable of representing the values of all the members of the enumeration.

We can assume the size of an enum on arm and arm64, as the ABI requires it to be either an int or unsigned int (unless it needs to hold values too large to be expressed as either of these). This means we can assume the size of this enum is 32 bits on arm and arm64. I assume riscv has a similar requirement.

There is an option in the arm ABI for the size of an enum to be smaller than an int; I decided not to use this when porting FreeBSD to the EABI, as it would break too many things.

We can assume the size of an enum on arm and arm64, as the ABI requires it to be either an int or unsigned int (unless it needs to hold values too large to be expressed as either of these). This means we can assume the size of this enum is 32 bits on arm and arm64. I assume riscv has a similar requirement.

It might, but riscv's branch off to intr_irq_handler is in the C trap handling, so it can also benefit from using the correct type.

This code

#include <stdio.h>

enum enum8  {val8_1 = 0x01, val8_2 = 0x7F};
enum enum16 {val16_1 = 0x01, val16_2 = 0x7FFF};
enum enum32 {val32_1 = 0x01, val32_2 = 0x7FFFFFFF};
enum enum64 {val64_1 = 0x01, val64_2 = 0x7FFFFFFFFFFFFFFF};

enum enump8  {valp8_1 = 0x01, valp8_2 = 0x7F} __attribute__((packed));
enum enump16 {valp16_1 = 0x01, valp16_2 = 0x7FFF} __attribute__((packed));
enum enump32 {valp32_1 = 0x01, valp32_2 = 0x7FFFFFFF} __attribute__((packed));
enum enump64 {valp64_1 = 0x01, valp64_2 = 0x7FFFFFFFFFFFFFFF} __attribute__((packed));

int main(void)
{
  printf("size of enum8  is %zu\n", sizeof (enum enum8));
  printf("size of enum16 is %zu\n", sizeof (enum enum16));
  printf("size of enum32 is %zu\n", sizeof (enum enum32));
  printf("size of enum64 is %zu\n\n", sizeof (enum enum64));

  printf("size of enump8  is %zu\n", sizeof (enum enump8));
  printf("size of enump16 is %zu\n", sizeof (enum enump16));
  printf("size of enump32 is %zu\n", sizeof (enum enump32));
  printf("size of enump64 is %zu\n\n", sizeof (enum enump64));
}

Gives me these results:

root@tegra210:~/test/enum # cc -o enum enum.c
root@tegra210:~/test/enum # ./enum
size of enum8  is 4
size of enum16 is 4
size of enum32 is 4
size of enum64 is 8

size of enump8  is 1
size of enump16 is 2
size of enump32 is 4
size of enump64 is 8

and

root@tegra210:~/test/enum # cc -fshort-enums -o enum enum.c
root@tegra210:~/test/enum # ./enum
size of enum8  is 1
size of enum16 is 2
size of enum32 is 4
size of enum64 is 8

size of enump8  is 1
size of enump16 is 2
size of enump32 is 4
size of enump64 is 8

These results are consistent on all platforms I have (armv7, arm64, amd64).

And due to clause 6.7.2.2 (Enumeration specifiers) of the C standard, the compiler is free to choose whether the default is '-fshort-enums' or '-fno-short-enums'.

We don't build the kernel with -fshort-enums, and even if we did, the assembly for arm and arm64 would be identical to the current code, as the enum is passed in via a register to intr_irq_handler.

And we also don't compile the kernel with -fno-short-enums. So the actual behavior/variant depends only on the compiler (for which both variants are equally valid).
What happens when someone uses this enum as a member of a KABI-compliant structure?

I'm not sure why we're making so much fuss about this, but to me, changing from a fixed-size variable (uint32_t) to an enum just remains a step backwards.

So long as the assembly adheres to the ABI it works, and each assembly file is already correct for the corresponding ABI. The numbers are small positive integers, so whether they need sign-extending or zero-extending (or nothing at all) doesn’t matter.

And we also don't compile the kernel with -fno-short-enums. So the actual behavior/variant depends only on the compiler (for which both variants are equally valid).
What happens when someone uses this enum as a member of a KABI-compliant structure?

They get the same as any other enum in a structure? There are plenty already out there, and unless you start to reach 128 enum values there is no risk of the type changing by adding one.

And we also don't compile the kernel with -fno-short-enums. So the actual behavior/variant depends only on the compiler (for which both variants are equally valid).
What happens when someone uses this enum as a member of a KABI-compliant structure?

They get the same as any other enum in a structure? There are plenty already out there, and unless you start to reach 128 enum values there is no risk of the type changing by adding one.

???? The size of an enum (with fewer than 128 values) changes with the default compiler settings.

And I'm protesting because the original patch plus this one do virtually nothing else (besides some shuffling with constants), yet they turn the clearly more correct uint32_t into a less correct enum.
Is my opinion wrong, or where is the mistake?

And we also don't compile the kernel with -fno-short-enums. So the actual behavior/variant depends only on the compiler (for which both variants are equally valid).
What happens when someone uses this enum as a member of a KABI-compliant structure?

They get the same as any other enum in a structure? There are plenty already out there, and unless you start to reach 128 enum values there is no risk of the type changing by adding one.

???? The size of an enum (with fewer than 128 values) changes with the default compiler settings.

Well, yours is sparse. In the case of intr_root_type it's not; it's required to be contiguous from 0, so it would have to hit 128 distinct root types (SCHAR_MAX + 1) for the size to potentially change.

And I'm protesting because the original patch plus this one do virtually nothing else (besides some shuffling with constants), yet they turn the clearly more correct uint32_t into a less correct enum.
Is my opinion wrong, or where is the mistake?

In isolation it's pointless, yes, the defines worked just fine. The main point is that certain root types may be conditional on the kernel config, e.g. in the original patch author's case whether you are building a kernel for Xen or not. Making the defines be contiguous in that case quickly becomes a mess of ifdefs once you have more than one such configuration item, so you're a bit stuck unless you're happy to always allocate room for every possible root type even if your config doesn't need it. Which is maybe fine, but it's a choice to be made. Using an enum lets them be automatically numbered, and the count automatically counted, based on your config, solely from ifdef'ing each enum entry.

I do think there should have been a proper discussion about this before the original patch was landed, though, because the above choice was never discussed and decided upon. Personally I don't see the problem with just giving each a define and wasting a few array entries down in subr_intr.c.
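
As a minimal sketch of the auto-numbering point above (the entry names and config symbols here are hypothetical, not the actual intr_root_type definition):

/*
 * Each conditional root type is simply #ifdef'd; the values stay
 * contiguous from 0 and INTR_ROOT_COUNT always equals the number of
 * entries this particular kernel config enabled.
 */
enum intr_root_type {
	INTR_ROOT_IRQ = 0,
#ifdef SOME_XEN_OPTION		/* hypothetical config symbol */
	INTR_ROOT_XEN_UPCALL,
#endif
#ifdef SOME_FIQ_OPTION		/* hypothetical config symbol */
	INTR_ROOT_FIQ,
#endif
	INTR_ROOT_COUNT		/* usable as an array bound in subr_intr.c */
};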

And we also don't compile the kernel with -fno-short-enums. So the actual behavior/variant depends only on the compiler (for which both variants are equally valid).
What happens when someone uses this enum as a member of a KABI-compliant structure?

They get the same as any other enum in a structure? There are plenty already out there, and unless you start to reach 128 enum values there is no risk of the type changing by adding one.

???? The size of an enum (with fewer than 128 values) changes with the default compiler settings.

Well, yours is sparse. In the case of intr_root_type it's not; it's required to be contiguous from 0, so it would have to hit 128 distinct root types (SCHAR_MAX + 1) for the size to potentially change.

No, sizeof(enum) with < 128 values is 4 (with -fno-short-enums) or 1 (with -fshort-enums).

And I'm protesting because the original patch plus this one do virtually nothing else (besides some shuffling with constants), yet they turn the clearly more correct uint32_t into a less correct enum.
Is my opinion wrong, or where is the mistake?

In isolation it's pointless, yes, the defines worked just fine. The main point is that certain root types may be conditional on the kernel config, e.g. in the original patch author's case whether you are building a kernel for Xen or not. Making the defines be contiguous in that case quickly becomes a mess of ifdefs once you have more than one such configuration item, so you're a bit stuck unless you're happy to always allocate room for every possible root type even if your config doesn't need it. Which is maybe fine, but it's a choice to be made. Using an enum lets them be automatically numbered, and the count automatically counted, based on your config, solely from ifdef'ing each enum entry.

I do think there should have been a proper discussion about this before the original patch was landed, though, because the above choice was never discussed and decided upon. Personally I don't see the problem with just giving each a define and wasting a few array entries down in subr_intr.c.

No, no, no, I've read the original patch :-) Well, given the unfriendly size and tons of unrelated changes, probably not entirely.

XEN interrupts are more or less standard IPIs. It's hard to understand why we would change almost everything related to interrupts for a new IPI (single or range)...

I'm OK with just reverting the original patch and moving on with the work I really wanted to do. I've long since reached my tolerance for back-and-forth on the INTRNG changes I need to continue AS work, and I can really work with either state.

Yeah, I mean, I don't really care which way we go, so long as we commit fully to one. As it stands we're using enums in name only; we're not getting any of the benefits they bring, due to this halfway-house approach.

Wow, so friendly to revert something immediately after the person gets a rebase done, and without informing them about the discussion which was occurring. Also odd how there is suddenly all this discussion now, when there was no activity on the original source for more than a month. I tried to get the attention of everyone who is on here.

Might I also mention a very simple solution for when the exact size is vital:

INTR_ROOT_DO_NOT_USE = 0x80000000,
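
A minimal illustration of that sentinel trick (the surrounding names are hypothetical): with the large enumerator present, even -fshort-enums or a packed attribute has to pick a type at least 32 bits wide, so the enum's storage size no longer depends on the compiler default.

enum example_root_type {
	EXAMPLE_ROOT_IRQ = 0,
	/*
	 * Forces the underlying type to (at least) unsigned int.  An
	 * enumerator outside the range of int is strictly an extension
	 * before C23, but GCC and Clang both accept it.
	 */
	INTR_ROOT_DO_NOT_USE = 0x80000000
};
/* sizeof(enum example_root_type) == 4 with and without -fshort-enums. */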

The approach isn't particularly wonderful anyway. The real problem for the Xen interface is the existing bug in INTRNG, "QQQ: Only root PICs are aware of other CPUs ???" The answer to that is "no", yet this documented problem has been ignored for approaching a decade.

@mmel I also find it odd that you were the first to suggest an enum.

Wow, so friendly to revert something immediately after the person gets a rebase done, and without informing them about the discussion which was occurring.

Says the person sending messages laced with hostile sarcasm.

Also odd how there is suddenly all this discussion now, when there was no activity on the original source for more than a month. I tried to get the attention of everyone who is on here.

By now you should be able to figure out why: because you were not involved, so we could actually have a reasoned discussion without all your draining interjections.

@mmel I also find it odd that you were the first to suggest an enum.

And then walked it back in the prototype in https://reviews.freebsd.org/D40161#954119, presumably because of https://reviews.freebsd.org/D40161#954037. People can change their minds over time, whether due to their own thoughts or additional information from others; surely you don't believe that, just because mmel suggested an enum, he has to forever believe it's the right thing to do no matter what?

@mmel I also find it odd that you were the first to suggest an enum.

That's a misunderstanding. It seems I was too brief and probably inaccurate. I apologize for that.
I always meant to say that INTR_TYPE should not be a #define with bitfield-like values, but rather one with enum-like values.
Given my aversion to having enums in interfaces, it didn't occur to me at all that someone might take that as a recommendation to use enum. My bad, I'll try not to repeat it next time.

The approach isn't particularly wonderful anyway. The real problem for the Xen interface is the existing bug in INTRNG, "QQQ: Only root PICs are aware of other CPUs ???" The answer to that is "no", yet this documented problem has been ignored for approaching a decade.

Right, the answer is no. And it's not a problem for XEN, but for real hw, for example the Armada 7040. Its MOCHI/MCi bus and GIC extension allow generating interrupts along with CPU binding information.
To solve this problem, we need a sleepable context and trees of PICs for all roots. And yes, multiple roots make the situation slightly worse in this area.

But it's hard to see why XEN would steal and abuse other interfaces without creating its own...

For some reason you are equating PICs and devices. That's a mistake. In a finished, ideal state INTRNG should allow you to have multiple PICs within a single device; this is necessary for some PCIe controllers where the legacy and MSI(x) controllers are separate interrupt entities...

And about the INTR_TYPE allocation: I think dynamic allocation based on a kernel option is not a good idea, especially not for kernel modules. You can allocate the required number statically, without #ifdef. A few unnecessary bytes (if someone uses the INTR_TYPE count for array allocation) don't even deserve attention these days.
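
A sketch of the static alternative being described, with hypothetical names (not the actual intrng definitions): every possible root type gets a fixed slot whether or not the config uses it, so no #ifdef is needed and the count is a plain constant.

#define	INTR_ROOT_IRQ		0
#define	INTR_ROOT_FIQ		1	/* slot unused on configs without an FIQ root */
#define	INTR_ROOT_XEN_UPCALL	2	/* slot unused on non-Xen configs */
#define	INTR_ROOT_COUNT		3	/* fixed; wastes a few array entries at most */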

Says the person sending messages laced with hostile sarcasm.

You're bordering on harassment. I might have too many electrons, but continuous sniping won't assist with that.

@mmel I also find it odd that you were the first to suggest an enum.

That's a misunderstanding. It seems I was too brief and probably inaccurate. I apologize for that.
I always meant to say that INTR_TYPE should not be a #define with bitfield-like values, but rather one with enum-like values.
Given my aversion to having enums in interfaces, it didn't occur to me at all that someone might take that as a recommendation to use enum. My bad, I'll try not to repeat it next time.

The u_register_t type seems more correct for intr_irq_handler(), as all modern architectures use register passing. There is a chance that someone writing assembly might see uint32_t and only write the lower half of a 64-bit register, whereas a compiler might assume uint32_t means the upper half has been cleared. This, though, is a theoretical compiler for a theoretical architecture.

I guess I ended up squashing in that both @mhorne and I had been asking for INTR_ROOT_NUM to be replaced with INTR_ROOT_COUNT. I also strongly agreed that INTRNG shouldn't provide defaults for any of the values, as the choice of value should strictly be the architecture's. I thought @mhorne's approval was premised on the belief that INTR_ROOT_NUM could be addressed during commit.

The approach isn't particularly wonderful anyway. The real problem for the Xen interface is the existing bug in INTRNG, "QQQ: Only root PICs are aware of other CPUs ???" The answer to that is "no", yet this documented problem has been ignored for approaching a decade.

Right, the answer is no. And it's not a problem for XEN, but for real hw, for example the Armada 7040. Its MOCHI/MCi bus and GIC extension allow generating interrupts along with CPU binding information.
To solve this problem, we need a sleepable context and trees of PICs for all roots. And yes, multiple roots make the situation slightly worse in this area.

But it's hard to see why XEN would steal and abuse other interfaces without creating its own...

The issue is that pic_init_secondary() almost perfectly matches the need. There is some per-processor setup which must be done after each processor has been initialized, but before interrupts are enabled.

The abuse is that the Xen PIC isn't truly a root PIC. Upon consideration, a potential modification of D40474 would be, after all root PICs have been initialized, to loop through the list of PICs using a reserved high value.

The place where Xen needs an interface is the one between INTRNG interrupts and INTRNG PICs. With experimentation I found INTRNG fairly readily breaks into three pieces: a low-level piece close to intr_event, PIC handling/interface, and interrupt mapping.

For Xen, low/high/level/edge simply don't apply (I expect specialized HW buses might not allow access/modification), so I want to skip that. Xen's step akin to mapping does things not applicable to HW devices. Do you know of HW devices where, upon system suspend, most mappings are completely lost while a few need to be recreated with different parameters?
Additionally, since this is a specialized PIC with specialized devices, extra assumptions are made and steps are skipped. No struct resource * is ever created, so INTRNG's PIC interface doesn't match. Trying to match would involve deep changes spread over 30K lines.

Another thought: if the mapping portion were broken off, could the x86 MSI handling be modified to use it? (This could indicate how truly architecture-independent INTRNG is.) I do have the break-apart handy, showing roughly where the interfaces are; I'm pretty certain I don't have it quite right, though.

For some reason you are equating PICs and devices. That's a mistake. In a finished, ideal state INTRNG should allow you to have multiple PICs within a single device; this is necessary for some PCIe controllers where the legacy and MSI(x) controllers are separate interrupt entities...

Could you clarify, at each spot where "device" is used, whether it refers to a FreeBSD device_t or to a hunk of silicon? Might it in fact be better to require creating two separate FreeBSD device_ts for a PIC which implements both types of interrupt controller?

This well is thoroughly poisoned; time to stop drinking.