Basically I want to get rid of all conversions relying on the layout, and add macros for common conversions, e.g. from domain to dmar unit.
Aug 3 2020
I did not add uses to contrib sources, i.e. tests, bsnmpd, and heimdal.
In D25924#574449, @hselasky wrote:When you unload a kernel module, you know its text segment virtual address, where all code resides.
Why can't you iterate all pending callbacks and check if the function pointer is within the unloaded range?
Aug 2 2020
Is this the only example of such a situation, where the inode of the vnode changes without the vnode going through its lifecycle?
This looks half-done. You are mixing write(2) and stdio.
I tried to lump this in with v_hash, but unfortunately that one has many tentacles elsewhere and in particular is not constant for the lifetime of the vnode.
The change does not make sense. The address space of the process is destroyed in the context of exit1(); see the call to vmspace_exit(). Zombies only hold the struct proc itself, to record the pid and exit code. So your change is both a nop and racy, because p->p_pptr can become invalid at any moment.
Aug 1 2020
In D25916#574171, @cem wrote:In D25916#574168, @kib wrote:So this KPI is not safe to use from interrupt handlers? I think you need to mention it in the man page.
It is no worse from interrupt handlers than the existing random(9) KPI, which does not document that deficiency. But we could use spinlock_enter() in place of critical_enter() if that would solve the problem better.
So this KPI is not safe to use from interrupt handlers? I think you need to mention it in the man page.
Jul 31 2020
I am looking at all lines like this:
iodom = (struct iommu_domain *)domain;
and think we need macros like DMAR2IODOM etc.
Handle latest Mark notes, comments about them were pushed as followups.
In D24652#573697, @markj wrote:I am not sure how best to handle the interaction with mlock(). The populate handler does not bump the wire count in the physical page structures (as I noted, it does not handle VM_FAULT_WIRE at all). We could wire every single page in the range, but this will be rather slow for 1GB pages. If we do not bump the per-page wire count, then vm_object_unwire() needs to be updated. It might be reasonable to simply record the wiring in the vm_map_entry and avoid faults. Then, the first access to an mlock'ed large page mapping will trigger a soft fault, which I believe goes against the spirit of mlock(), but maybe it is acceptable.
Fix definition of AT_RBENEATH.
Jul 30 2020
In D25873#573638, @np wrote:The goal is to improve vxlan's performance. mbuf tags would add at least an alloc/free and extra access(es) to the tags (which are on a linked list) on the hot path. Why even consider any other location for the flags if there _are_ bits available in the proper place right now? We'll be out of them after this change; we aren't out already.
Where is the VNI currently stored when a packet is pushed up or down to the driver, for stateless offload?
Jul 29 2020
But then why do you need taskqueue.h and tree.h before iommu? They are included by the header ATM.
Jul 28 2020
So please plan for the following two next changes:
- Removal of headers from iommu.h
- Providing the vtable for implementation of map/unmap to be used from iommu_gas.
Jul 27 2020
I think ARM GIC writes are not ordered with regard to STLR. It is typical in the sense that, for instance, Intel LAPIC ICR writes in x2APIC mode are also not ordered with normal accesses. We issue MFENCE before the ICR write to avoid a similar issue there.
Jul 26 2020
In D24217#572014, @mjg wrote:Spinning (or adaptive spinning) is mandatory for the primitive to perform when contended. I presume going off CPU in this particular use case has to be supported, hence the new primitive, and there is no adaptive spinning since there is no information about who owns the lock and consequently no means to check whether they went off CPU.
I had a draft for a routine which supports adaptive spinning and easily fits in a byte (even less). The states are: free, locked, sleeping waiters, owner off cpu.
Key observation is that going off CPU while holding a lock is relatively rare.
The idea is to explicitly track all OBM locks as they are taken, in an array in struct thread or similar. Then the few places which can put the thread off CPU with the lock held check for it and walk the array to mark the locks appropriately. While this puts an explicit upper limit on how many locks can be held at any given time, I think the limitation is more than fine.
That said, I'll do some benchmarks later with stock head, head + vm obj rebase, head + obm, head + vm obj rebase + obm.
Add comments for pv fake page.
Handle first batch of comments from Mark.
But does this mean that vm_page_free_prep() needs a similar change? Or at least, should the assertion be made conditional on the page validity?
Jul 25 2020
Jul 24 2020
I assume the next patch will move intel_gas.c to dev/iommu?
Jul 23 2020
Is the usermode variable in mips/trap.c::trap() redundant?
When I wrote a response yesterday, my motivation was that it is important to provide as much information as we can for the trap signal cause. But, as an afterthought, I realized that siginfo_t is not useful for this purpose: it drops a lot of very important data from the faulting context. In other words, either code has access to the ucontext for the fault, or it does not matter much which data we lose. From this point of view, providing just the trap number (for some definition of it) is good enough.
In D25700#570955, @hselasky wrote:What do you think about using "int" instead of "int32"?
Since you are doing cleanups, perhaps change the return type to bool, there and for swap_reserve().
Sorry, I did not state that explicitly: the large swap should be a single volume. Blists are allocated per swap device.
In D25736#570896, @pho wrote:In D25736#570891, @kib wrote:In D25736#570868, @pho wrote:I completed a full stress2 test with D25736.74719.diff on r363390.
No problems seen.
How large was the configured swap size? And how much of it was used (approximately)?
$ ./swapused.sh
FreeBSD t2.osted.lan 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r363443M: Thu Jul 23 13:01:56 CEST 2020 pho@t2.osted.lan:/usr/src/sys/amd64/compile/PHO amd64
swapinfo -h
Device          Size     Used    Avail Capacity
/dev/da0p4       67G       0B      67G     0%
swap disk 1027604514 140509184 4 freebsd-swap (67G)
13:08:48  0%
13:10:02  1%
13:10:09  2%
:
13:18:33 32%
13:18:55 33%
I think this patch mostly needs Marius' test: sort /dev/zero.
This test is included in stress2 as sort.sh
In D25700#570888, @hselasky wrote:With all those corner cases and hard-to-correctly-interpret semantics, I do not see why it is useful to deviate from the Linux original.
Instead of casting, would a static inline function be better, and cleaner code-wise?
In D25736#570868, @pho wrote:I completed a full stress2 test with D25736.74719.diff on r363390.
No problems seen.
Jul 22 2020
In D25700#570754, @hselasky wrote:Konstantin, doesn't the use of subtraction "-" imply that the result is signed in C?
No, the result has the promoted type of the left and right operands. Basically the shorter type is promoted to the wider one. But there are additional rules when the operands have different signedness.
In D25700#570745, @hselasky wrote:Konstantin: Please explain why:
(int)((unsigned)(x) - (unsigned)(y))
is different from
(int)((x) - (y))
I don't get it.
Maybe just copy the esr value as-is. There is additional encoding of the trap cause in the ISS field.