Index: stable/11/lib/libc/sys/ptrace.2 =================================================================== --- stable/11/lib/libc/sys/ptrace.2 (revision 325830) +++ stable/11/lib/libc/sys/ptrace.2 (revision 325831) @@ -1,1086 +1,1108 @@ .\" $FreeBSD$ .\" $NetBSD: ptrace.2,v 1.2 1995/02/27 12:35:37 cgd Exp $ .\" .\" This file is in the public domain. -.Dd June 11, 2017 +.Dd September 14, 2017 .Dt PTRACE 2 .Os .Sh NAME .Nm ptrace .Nd process tracing and debugging .Sh LIBRARY .Lb libc .Sh SYNOPSIS .In sys/types.h .In sys/ptrace.h .Ft int .Fn ptrace "int request" "pid_t pid" "caddr_t addr" "int data" .Sh DESCRIPTION The .Fn ptrace system call provides tracing and debugging facilities. It allows one process (the .Em tracing process) to control another (the .Em traced process). The tracing process must first attach to the traced process, and then issue a series of .Fn ptrace system calls to control the execution of the process, as well as access process memory and register state. For the duration of the tracing session, the traced process will be .Dq re-parented , with its parent process ID (and resulting behavior) changed to the tracing process. It is permissible for a tracing process to attach to more than one other process at a time. When the tracing process has completed its work, it must detach the traced process; if a tracing process exits without first detaching all processes it has attached, those processes will be killed. .Pp Most of the time, the traced process runs normally, but when it receives a signal (see .Xr sigaction 2 ) , it stops. The tracing process is expected to notice this via .Xr wait 2 or the delivery of a .Dv SIGCHLD signal, examine the state of the stopped process, and cause it to terminate or continue as appropriate. The signal may be a normal process signal, generated as a result of traced process behavior, or use of the .Xr kill 2 system call; alternatively, it may be generated by the tracing facility as a result of attaching, stepping by the tracing process, or an event in the traced process. The tracing process may choose to intercept the signal, using it to observe process behavior (such as .Dv SIGTRAP ) , or forward the signal to the process if appropriate. The .Fn ptrace system call is the mechanism by which all this happens. .Pp A traced process may report additional signal stops corresponding to events in the traced process. These additional signal stops are reported as .Dv SIGTRAP or .Dv SIGSTOP signals. The tracing process can use the .Dv PT_LWPINFO request to determine which events are associated with a .Dv SIGTRAP or .Dv SIGSTOP signal. Note that multiple events may be associated with a single signal. For example, events indicated by the .Dv PL_FLAG_BORN , .Dv PL_FLAG_FORKED , and .Dv PL_FLAG_EXEC flags are also reported as a system call exit event .Pq Dv PL_FLAG_SCX . The signal stop for a new child process enabled via .Dv PTRACE_FORK will report a .Dv SIGSTOP signal. All other additional signal stops use .Dv SIGTRAP . .Pp Each traced process has a tracing event mask. An event in the traced process only reports a signal stop if the corresponding flag is set in the tracing event mask. The current set of tracing event flags include: .Bl -tag -width "Dv PTRACE_SYSCALL" .It Dv PTRACE_EXEC Report a stop for a successful invocation of .Xr execve 2 . This event is indicated by the .Dv PL_FLAG_EXEC flag in the .Va pl_flags member of .Vt "struct ptrace_lwpinfo" . .It Dv PTRACE_SCE Report a stop on each system call entry. 
This event is indicated by the .Dv PL_FLAG_SCE flag in the .Va pl_flags member of .Vt "struct ptrace_lwpinfo" . .It Dv PTRACE_SCX Report a stop on each system call exit. This event is indicated by the .Dv PL_FLAG_SCX flag in the .Va pl_flags member of .Vt "struct ptrace_lwpinfo" . .It Dv PTRACE_SYSCALL Report stops for both system call entry and exit. .It Dv PTRACE_FORK This event flag controls tracing for new child processes of a traced process. .Pp When this event flag is enabled, new child processes will enable tracing and stop before executing their first instruction. The new child process will include the .Dv PL_FLAG_CHILD flag in the .Va pl_flags member of .Vt "struct ptrace_lwpinfo" . The traced process will report a stop that includes the .Dv PL_FLAG_FORKED flag. The process ID of the new child process will also be present in the .Va pl_child_pid member of .Vt "struct ptrace_lwpinfo" . If the new child process was created via .Xr vfork 2 , the traced process's stop will also include the .Dv PL_FLAG_VFORKED flag. Note that new child processes will be attached with the default tracing event mask; they do not inherit the event mask of the traced process. .Pp When this event flag is not enabled, new child processes will execute without tracing enabled. .It Dv PTRACE_LWP This event flag controls tracing of LWP .Pq kernel thread creation and destruction. When this event is enabled, new LWPs will stop and report an event with .Dv PL_FLAG_BORN set before executing their first instruction, and exiting LWPs will stop and report an event with .Dv PL_FLAG_EXITED set before completing their termination. .Pp Note that new processes do not report an event for the creation of their initial thread, and exiting processes do not report an event for the termination of the last thread. .It Dv PTRACE_VFORK Report a stop event when a parent process resumes after a .Xr vfork 2 . .Pp When a thread in the traced process creates a new child process via .Xr vfork 2 , the stop that reports .Dv PL_FLAG_FORKED and .Dv PL_FLAG_SCX occurs just after the child process is created, but before the thread waits for the child process to stop sharing process memory. If a debugger is not tracing the new child process, it must ensure that no breakpoints are enabled in the shared process memory before detaching from the new child process. This means that no breakpoints are enabled in the parent process either. .Pp The .Dv PTRACE_VFORK flag enables a new stop that indicates when the new child process stops sharing the process memory of the parent process. A debugger can reinsert breakpoints in the parent process and resume it in response to this event. This event is indicated by setting the .Dv PL_FLAG_VFORK_DONE flag. .El .Pp The default tracing event mask when attaching to a process via .Dv PT_ATTACH , .Dv PT_TRACE_ME , or .Dv PTRACE_FORK includes only .Dv PTRACE_EXEC events. All other event flags are disabled. .Pp The .Fa request argument specifies what operation is being performed; the meaning of the rest of the arguments depends on the operation, but except for one special case noted below, all .Fn ptrace calls are made by the tracing process, and the .Fa pid argument specifies the process ID of the traced process or a corresponding thread ID. The .Fa request argument can be: .Bl -tag -width "Dv PT_GET_EVENT_MASK" .It Dv PT_TRACE_ME This request is the only one used by the traced process; it declares that the process expects to be traced by its parent. All the other arguments are ignored. 
(If the parent process does not expect to trace the child, it will probably be rather confused by the results; once the traced process stops, it cannot be made to continue except via .Fn ptrace . ) When a process has used this request and calls .Xr execve 2 or any of the routines built on it (such as .Xr execv 3 ) , it will stop before executing the first instruction of the new image. Also, any setuid or setgid bits on the executable being executed will be ignored. If the child was created by .Xr vfork 2 system call or .Xr rfork 2 call with the .Dv RFMEM flag specified, the debugging events are reported to the parent only after the .Xr execve 2 is executed. .It Dv PT_READ_I , Dv PT_READ_D These requests read a single .Vt int of data from the traced process's address space. Traditionally, .Fn ptrace has allowed for machines with distinct address spaces for instruction and data, which is why there are two requests: conceptually, .Dv PT_READ_I reads from the instruction space and .Dv PT_READ_D reads from the data space. In the current .Fx implementation, these two requests are completely identical. The .Fa addr argument specifies the address (in the traced process's virtual address space) at which the read is to be done. This address does not have to meet any alignment constraints. The value read is returned as the return value from .Fn ptrace . .It Dv PT_WRITE_I , Dv PT_WRITE_D These requests parallel .Dv PT_READ_I and .Dv PT_READ_D , except that they write rather than read. The .Fa data argument supplies the value to be written. .It Dv PT_IO This request allows reading and writing arbitrary amounts of data in the traced process's address space. The .Fa addr argument specifies a pointer to a .Vt "struct ptrace_io_desc" , which is defined as follows: .Bd -literal struct ptrace_io_desc { int piod_op; /* I/O operation */ void *piod_offs; /* child offset */ void *piod_addr; /* parent offset */ size_t piod_len; /* request length */ }; /* * Operations in piod_op. */ #define PIOD_READ_D 1 /* Read from D space */ #define PIOD_WRITE_D 2 /* Write to D space */ #define PIOD_READ_I 3 /* Read from I space */ #define PIOD_WRITE_I 4 /* Write to I space */ .Ed .Pp The .Fa data argument is ignored. The actual number of bytes read or written is stored in .Va piod_len upon return. .It Dv PT_CONTINUE The traced process continues execution. The .Fa addr argument is an address specifying the place where execution is to be resumed (a new value for the program counter), or .Po Vt caddr_t Pc Ns 1 to indicate that execution is to pick up where it left off. The .Fa data argument provides a signal number to be delivered to the traced process as it resumes execution, or 0 if no signal is to be sent. .It Dv PT_STEP The traced process is single stepped one instruction. The .Fa addr argument should be passed .Po Vt caddr_t Pc Ns 1 . The .Fa data argument provides a signal number to be delivered to the traced process as it resumes execution, or 0 if no signal is to be sent. .It Dv PT_KILL The traced process terminates, as if .Dv PT_CONTINUE had been used with .Dv SIGKILL given as the signal to be delivered. .It Dv PT_ATTACH This request allows a process to gain control of an otherwise unrelated process and begin tracing it. It does not need any cooperation from the to-be-traced process. In this case, .Fa pid specifies the process ID of the to-be-traced process, and the other two arguments are ignored. 
This request requires that the target process must have the same real UID as the tracing process, and that it must not be executing a setuid or setgid executable. (If the tracing process is running as root, these restrictions do not apply.) The tracing process will see the newly-traced process stop and may then control it as if it had been traced all along. .It Dv PT_DETACH This request is like PT_CONTINUE, except that it does not allow specifying an alternate place to continue execution, and after it succeeds, the traced process is no longer traced and continues execution normally. .It Dv PT_GETREGS This request reads the traced process's machine registers into the .Do .Vt "struct reg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . .It Dv PT_SETREGS This request is the converse of .Dv PT_GETREGS ; it loads the traced process's machine registers from the .Do .Vt "struct reg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . .It Dv PT_GETFPREGS This request reads the traced process's floating-point registers into the .Do .Vt "struct fpreg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . .It Dv PT_SETFPREGS This request is the converse of .Dv PT_GETFPREGS ; it loads the traced process's floating-point registers from the .Do .Vt "struct fpreg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . .It Dv PT_GETDBREGS This request reads the traced process's debug registers into the .Do .Vt "struct dbreg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . .It Dv PT_SETDBREGS This request is the converse of .Dv PT_GETDBREGS ; it loads the traced process's debug registers from the .Do .Vt "struct dbreg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . .It Dv PT_LWPINFO This request can be used to obtain information about the kernel thread, also known as light-weight process, that caused the traced process to stop. The .Fa addr argument specifies a pointer to a .Vt "struct ptrace_lwpinfo" , which is defined as follows: .Bd -literal struct ptrace_lwpinfo { lwpid_t pl_lwpid; int pl_event; int pl_flags; sigset_t pl_sigmask; sigset_t pl_siglist; siginfo_t pl_siginfo; char pl_tdname[MAXCOMLEN + 1]; pid_t pl_child_pid; u_int pl_syscall_code; u_int pl_syscall_narg; }; .Ed .Pp The .Fa data argument is to be set to the size of the structure known to the caller. This allows the structure to grow without affecting older programs. .Pp The fields in the .Vt "struct ptrace_lwpinfo" have the following meaning: .Bl -tag -width indent -compact .It Va pl_lwpid LWP id of the thread .It Va pl_event Event that caused the stop. Currently defined events are: .Bl -tag -width "Dv PL_EVENT_SIGNAL" -compact .It Dv PL_EVENT_NONE No reason given .It Dv PL_EVENT_SIGNAL Thread stopped due to the pending signal .El .It Va pl_flags Flags that specify additional details about observed stop. Currently defined flags are: .Bl -tag -width indent -compact .It Dv PL_FLAG_SCE The thread stopped due to system call entry, right after the kernel is entered. The debugger may examine syscall arguments that are stored in memory and registers according to the ABI of the current process, and modify them, if needed. .It Dv PL_FLAG_SCX The thread is stopped immediately before syscall is returning to the usermode. The debugger may examine system call return values in the ABI-defined registers and/or memory. 
.It Dv PL_FLAG_EXEC When .Dv PL_FLAG_SCX is set, this flag may be additionally specified to indicate that the program being executed by the debuggee process has been changed by the successful execution of a system call from the .Xr execve 2 family. .It Dv PL_FLAG_SI Indicates that the .Va pl_siginfo member of .Vt "struct ptrace_lwpinfo" contains valid information. .It Dv PL_FLAG_FORKED Indicates that the process is returning from a call to .Xr fork 2 that created a new child process. The process identifier of the new process is available in the .Va pl_child_pid member of .Vt "struct ptrace_lwpinfo" . .It Dv PL_FLAG_CHILD This flag is set for the first event reported from a new child process, which is automatically attached when .Dv PTRACE_FORK is enabled. .It Dv PL_FLAG_BORN This flag is set for the first event reported from a new LWP when .Dv PTRACE_LWP is enabled. It is reported along with .Dv PL_FLAG_SCX . .It Dv PL_FLAG_EXITED This flag is set for the last event reported by an exiting LWP when .Dv PTRACE_LWP is enabled. Note that this event is not reported when the last LWP in a process exits. The termination of the last thread is reported via a normal process exit event. .It Dv PL_FLAG_VFORKED Indicates that the thread is returning from a call to .Xr vfork 2 that created a new child process. This flag is set in addition to .Dv PL_FLAG_FORKED . .It Dv PL_FLAG_VFORK_DONE Indicates that the thread has resumed after a child process created via .Xr vfork 2 has stopped sharing its address space with the traced process. .El .It Va pl_sigmask The current signal mask of the LWP. .It Va pl_siglist The current set of pending signals for the LWP. Note that signals delivered to the process do not appear in an LWP's signal list until the thread is selected for delivery. .It Va pl_siginfo The siginfo that accompanies the pending signal. Only valid for a .Dv PL_EVENT_SIGNAL stop when .Dv PL_FLAG_SI is set in .Va pl_flags . .It Va pl_tdname The name of the thread. .It Va pl_child_pid The process identifier of the new child process. Only valid for a .Dv PL_EVENT_SIGNAL stop when .Dv PL_FLAG_FORKED is set in .Va pl_flags . .It Va pl_syscall_code The ABI-specific identifier of the current system call. Note that for indirect system calls this field reports the indirected system call. Only valid when .Dv PL_FLAG_SCE or .Dv PL_FLAG_SCX is set in .Va pl_flags . .It Va pl_syscall_narg The number of arguments passed to the current system call, not counting the system call identifier. Note that for indirect system calls this field reports the arguments passed to the indirected system call. Only valid when .Dv PL_FLAG_SCE or .Dv PL_FLAG_SCX is set in .Va pl_flags . .El .It Dv PT_GETNUMLWPS This request returns the number of kernel threads associated with the traced process. .It Dv PT_GETLWPLIST This request can be used to get the current thread list. A pointer to an array of type .Vt lwpid_t should be passed in .Fa addr , with the array size specified by .Fa data . The return value from .Fn ptrace is the count of array entries filled in. .It Dv PT_SETSTEP This request will turn on single stepping of the specified process. .It Dv PT_CLEARSTEP This request will turn off single stepping of the specified process. .It Dv PT_SUSPEND This request will suspend the specified thread. .It Dv PT_RESUME This request will resume the specified thread. .It Dv PT_TO_SCE This request will set the .Dv PTRACE_SCE event flag to trace all future system call entries and continue the process.
The .Fa addr and .Fa data arguments are used the same as for .Dv PT_CONTINUE. .It Dv PT_TO_SCX This request will set the .Dv PTRACE_SCX event flag to trace all future system call exits and continue the process. The .Fa addr and .Fa data arguments are used the same as for .Dv PT_CONTINUE. .It Dv PT_SYSCALL This request will set the .Dv PTRACE_SYSCALL event flag to trace all future system call entries and exits and continue the process. The .Fa addr and .Fa data arguments are used the same as for .Dv PT_CONTINUE. .It Dv PT_GET_SC_ARGS For the thread which is stopped in either .Dv PL_FLAG_SCE or .Dv PL_FLAG_SCX state, that is, on entry or exit to a syscall, this request fetches the syscall arguments. .Pp The arguments are copied out into the buffer pointed to by the .Fa addr pointer, sequentially. Each syscall argument is stored as the machine word. Kernel copies out as many arguments as the syscall accepts, see the .Va pl_syscall_narg member of the .Vt struct ptrace_lwpinfo , but not more than the .Fa data bytes in total are copied. .It Dv PT_FOLLOW_FORK This request controls tracing for new child processes of a traced process. If .Fa data is non-zero, .Dv PTRACE_FORK is set in the traced process's event tracing mask. If .Fa data is zero, .Dv PTRACE_FORK is cleared from the traced process's event tracing mask. .It Dv PT_LWP_EVENTS This request controls tracing of LWP creation and destruction. If .Fa data is non-zero, .Dv PTRACE_LWP is set in the traced process's event tracing mask. If .Fa data is zero, .Dv PTRACE_LWP is cleared from the traced process's event tracing mask. .It Dv PT_GET_EVENT_MASK This request reads the traced process's event tracing mask into the integer pointed to by .Fa addr . The size of the integer must be passed in .Fa data . .It Dv PT_SET_EVENT_MASK This request sets the traced process's event tracing mask from the integer pointed to by .Fa addr . The size of the integer must be passed in .Fa data . .It Dv PT_VM_TIMESTAMP This request returns the generation number or timestamp of the memory map of the traced process as the return value from .Fn ptrace . This provides a low-cost way for the tracing process to determine if the VM map changed since the last time this request was made. .It Dv PT_VM_ENTRY This request is used to iterate over the entries of the VM map of the traced process. The .Fa addr argument specifies a pointer to a .Vt "struct ptrace_vm_entry" , which is defined as follows: .Bd -literal struct ptrace_vm_entry { int pve_entry; int pve_timestamp; u_long pve_start; u_long pve_end; u_long pve_offset; u_int pve_prot; u_int pve_pathlen; long pve_fileid; uint32_t pve_fsid; char *pve_path; }; .Ed .Pp The first entry is returned by setting .Va pve_entry to zero. Subsequent entries are returned by leaving .Va pve_entry unmodified from the value returned by previous requests. The .Va pve_timestamp field can be used to detect changes to the VM map while iterating over the entries. The tracing process can then take appropriate action, such as restarting. By setting .Va pve_pathlen to a non-zero value on entry, the pathname of the backing object is returned in the buffer pointed to by .Va pve_path , provided the entry is backed by a vnode. The .Va pve_pathlen field is updated with the actual length of the pathname (including the terminating null character). The .Va pve_offset field is the offset within the backing object at which the range starts. The range is located in the VM space at .Va pve_start and extends up to .Va pve_end (inclusive). 
.Pp The .Fa data argument is ignored. .El +.Sh ARM MACHINE-SPECIFIC REQUESTS +.Bl -tag -width "Dv PT_SETVFPREGS" +.It Dv PT_GETVFPREGS +Return the thread's +.Dv VFP +machine state in the buffer pointed to by +.Fa addr . +.Pp +The +.Fa data +argument is ignored. +.It Dv PT_SETVFPREGS +Set the thread's +.Dv VFP +machine state from the buffer pointed to by +.Fa addr . +.Pp +The +.Fa data +argument is ignored. +.El +.Pp .Sh x86 MACHINE-SPECIFIC REQUESTS .Bl -tag -width "Dv PT_GETXSTATE_INFO" .It Dv PT_GETXMMREGS Copy the XMM FPU state into the buffer pointed to by the argument .Fa addr . The buffer has the same layout as the 32-bit save buffer for the machine instruction .Dv FXSAVE . .Pp This request is only valid for i386 programs, both on native 32-bit systems and on amd64 kernels. For 64-bit amd64 programs, the XMM state is reported as part of the FPU state returned by the .Dv PT_GETFPREGS request. .Pp The .Fa data argument is ignored. .It Dv PT_SETXMMREGS Load the XMM FPU state for the thread from the buffer pointed to by the argument .Fa addr . The buffer has the same layout as the 32-bit load buffer for the machine instruction .Dv FXRSTOR . .Pp As with .Dv PT_GETXMMREGS, this request is only valid for i386 programs. .Pp The .Fa data argument is ignored. .It Dv PT_GETXSTATE_INFO Report which XSAVE FPU extensions are supported by the CPU and allowed in userspace programs. The .Fa addr argument must point to a variable of type .Vt struct ptrace_xstate_info , which contains the information on the request return. .Vt struct ptrace_xstate_info is defined as follows: .Bd -literal struct ptrace_xstate_info { uint64_t xsave_mask; uint32_t xsave_len; }; .Ed The .Dv xsave_mask field is a bitmask of the currently enabled extensions. The meaning of the bits is defined in the Intel and AMD processor documentation. The .Dv xsave_len field reports the length of the XSAVE area for storing the hardware state for currently enabled extensions in the format defined by the x86 .Dv XSAVE machine instruction. .Pp The .Fa data argument value must be equal to the size of the .Vt struct ptrace_xstate_info . .It Dv PT_GETXSTATE Return the content of the XSAVE area for the thread. The .Fa addr argument points to the buffer where the content is copied, and the .Fa data argument specifies the size of the buffer. The kernel copies out as much content as allowed by the buffer size. The buffer layout is specified by the layout of the save area for the .Dv XSAVE machine instruction. .It Dv PT_SETXSTATE Load the XSAVE state for the thread from the buffer specified by the .Fa addr pointer. The buffer size is passed in the .Fa data argument. The buffer must be at least as large as the .Vt struct savefpu (defined in .Pa x86/fpu.h ) to allow the complete x87 FPU and XMM state load. It must not be larger than the XSAVE state length, as reported by the .Dv xsave_len field from the .Vt struct ptrace_xstate_info of the .Dv PT_GETXSTATE_INFO request. Layout of the buffer is identical to the layout of the load area for the .Dv XRSTOR machine instruction. .It Dv PT_GETFSBASE Return the value of the base used when doing segmented memory addressing using the %fs segment register. The .Fa addr argument points to an .Vt unsigned long variable where the base value is stored. .Pp The .Fa data argument is ignored. .It Dv PT_GETGSBASE Like the .Dv PT_GETFSBASE request, but returns the base for the %gs segment register. .It Dv PT_SETFSBASE Set the base for the %fs segment register to the value pointed to by the .Fa addr argument. 
.Fa addr must point to the .Vt unsigned long variable containing the new base. .Pp The .Fa data argument is ignored. .It Dv PT_SETGSBASE Like the .Dv PT_SETFSBASE request, but sets the base for the %gs segment register. .El .Sh PowerPC MACHINE-SPECIFIC REQUESTS .Bl -tag -width "Dv PT_SETVRREGS" .It Dv PT_GETVRREGS Return the thread's .Dv ALTIVEC machine state in the buffer pointed to by .Fa addr . .Pp The .Fa data argument is ignored. .It Dv PT_SETVRREGS Set the thread's .Dv ALTIVEC machine state from the buffer pointed to by .Fa addr . .Pp The .Fa data argument is ignored. .El .Pp Additionally, other machine-specific requests can exist. .Sh RETURN VALUES Most requests return 0 on success and \-1 on error. Some requests can cause .Fn ptrace to return \-1 as a non-error value, among them are .Dv PT_READ_I and .Dv PT_READ_D , which return the value read from the process memory on success. To disambiguate, .Va errno can be set to 0 before the call and checked afterwards. .Pp The current .Fn ptrace implementation always sets .Va errno to 0 before calling into the kernel, both for historic reasons and for consistency with other operating systems. It is recommended to assign zero to .Va errno explicitly for forward compatibility. .Sh ERRORS The .Fn ptrace system call may fail if: .Bl -tag -width Er .It Bq Er ESRCH .Bl -bullet -compact .It No process having the specified process ID exists. .El .It Bq Er EINVAL .Bl -bullet -compact .It A process attempted to use .Dv PT_ATTACH on itself. .It The .Fa request argument was not one of the legal requests. .It The signal number (in .Fa data ) to .Dv PT_CONTINUE was neither 0 nor a legal signal number. .It .Dv PT_GETREGS , .Dv PT_SETREGS , .Dv PT_GETFPREGS , .Dv PT_SETFPREGS , .Dv PT_GETDBREGS , or .Dv PT_SETDBREGS was attempted on a process with no valid register set. (This is normally true only of system processes.) .It .Dv PT_VM_ENTRY was given an invalid value for .Fa pve_entry . This can also be caused by changes to the VM map of the process. .It The size (in .Fa data ) provided to .Dv PT_LWPINFO was less than or equal to zero, or larger than the .Vt ptrace_lwpinfo structure known to the kernel. .It The size (in .Fa data ) provided to the x86-specific .Dv PT_GETXSTATE_INFO request was not equal to the size of the .Vt struct ptrace_xstate_info . .It The size (in .Fa data ) provided to the x86-specific .Dv PT_SETXSTATE request was less than the size of the x87 plus the XMM save area. .It The size (in .Fa data ) provided to the x86-specific .Dv PT_SETXSTATE request was larger than returned in the .Dv xsave_len member of the .Vt struct ptrace_xstate_info from the .Dv PT_GETXSTATE_INFO request. .It The base value, provided to the amd64-specific requests .Dv PT_SETFSBASE or .Dv PT_SETGSBASE , pointed outside of the valid user address space. This error will not occur in 32-bit programs. .El .It Bq Er EBUSY .Bl -bullet -compact .It .Dv PT_ATTACH was attempted on a process that was already being traced. .It A request attempted to manipulate a process that was being traced by some process other than the one making the request. .It A request (other than .Dv PT_ATTACH ) specified a process that was not stopped. .El .It Bq Er EPERM .Bl -bullet -compact .It A request (other than .Dv PT_ATTACH ) attempted to manipulate a process that was not being traced at all. .It An attempt was made to use .Dv PT_ATTACH on a process in violation of the requirements listed under .Dv PT_ATTACH above. 
.El .It Bq Er ENOENT .Bl -bullet -compact .It .Dv PT_VM_ENTRY previously returned the last entry of the memory map. No more entries exist. .El .It Bq Er ENAMETOOLONG .Bl -bullet -compact .It .Dv PT_VM_ENTRY cannot return the pathname of the backing object because the buffer is not big enough. .Fa pve_pathlen holds the minimum buffer size required on return. .El .El .Sh SEE ALSO .Xr execve 2 , .Xr sigaction 2 , .Xr wait 2 , .Xr execv 3 , .Xr i386_clr_watch 3 , .Xr i386_set_watch 3 .Sh HISTORY The .Fn ptrace function appeared in .At v7 . Index: stable/11/sys/arm/arm/machdep.c =================================================================== --- stable/11/sys/arm/arm/machdep.c (revision 325830) +++ stable/11/sys/arm/arm/machdep.c (revision 325831) @@ -1,1229 +1,1231 @@ /* $NetBSD: arm32_machdep.c,v 1.44 2004/03/24 15:34:47 atatat Exp $ */ /*- * Copyright (c) 2004 Olivier Houchard * Copyright (c) 1994-1998 Mark Brinicombe. * Copyright (c) 1994 Brini. * All rights reserved. * * This code is derived from software written for Brini by Mark Brinicombe * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by Mark Brinicombe * for the NetBSD Project. * 4. The name of the company nor the name of the author may be used to * endorse or promote products derived from this software without specific * prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * Machine dependent functions for kernel setup * * Created : 17/09/94 * Updated : 18/04/01 updated for new wscons */ #include "opt_compat.h" #include "opt_ddb.h" #include "opt_kstack_pages.h" #include "opt_platform.h" #include "opt_sched.h" #include "opt_timer.h" #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef FDT #include #include #endif #ifdef DEBUG #define debugf(fmt, args...) printf(fmt, ##args) #else #define debugf(fmt, args...) 
#endif #if defined(COMPAT_FREEBSD4) || defined(COMPAT_FREEBSD5) || \ defined(COMPAT_FREEBSD6) || defined(COMPAT_FREEBSD7) || \ defined(COMPAT_FREEBSD9) #error FreeBSD/arm doesn't provide compatibility with releases prior to 10 #endif struct pcpu __pcpu[MAXCPU]; struct pcpu *pcpup = &__pcpu[0]; static struct trapframe proc0_tf; uint32_t cpu_reset_address = 0; int cold = 1; vm_offset_t vector_page; int (*_arm_memcpy)(void *, void *, int, int) = NULL; int (*_arm_bzero)(void *, int, int) = NULL; int _min_memcpy_size = 0; int _min_bzero_size = 0; extern int *end; #ifdef FDT vm_paddr_t pmap_pa; #if __ARM_ARCH >= 6 vm_offset_t systempage; vm_offset_t irqstack; vm_offset_t undstack; vm_offset_t abtstack; #else /* * This is the number of L2 page tables required for covering max * (hypothetical) memsize of 4GB and all kernel mappings (vectors, msgbuf, * stacks etc.), uprounded to be divisible by 4. */ #define KERNEL_PT_MAX 78 static struct pv_addr kernel_pt_table[KERNEL_PT_MAX]; struct pv_addr systempage; static struct pv_addr msgbufpv; struct pv_addr irqstack; struct pv_addr undstack; struct pv_addr abtstack; static struct pv_addr kernelstack; #endif /* __ARM_ARCH >= 6 */ #endif /* FDT */ #ifdef MULTIDELAY static delay_func *delay_impl; static void *delay_arg; #endif struct kva_md_info kmi; /* * arm32_vector_init: * * Initialize the vector page, and select whether or not to * relocate the vectors. * * NOTE: We expect the vector page to be mapped at its expected * destination. */ extern unsigned int page0[], page0_data[]; void arm_vector_init(vm_offset_t va, int which) { unsigned int *vectors = (int *) va; unsigned int *vectors_data = vectors + (page0_data - page0); int vec; /* * Loop through the vectors we're taking over, and copy the * vector's insn and data word. */ for (vec = 0; vec < ARM_NVEC; vec++) { if ((which & (1 << vec)) == 0) { /* Don't want to take over this vector. */ continue; } vectors[vec] = page0[vec]; vectors_data[vec] = page0_data[vec]; } /* Now sync the vectors. */ icache_sync(va, (ARM_NVEC * 2) * sizeof(u_int)); vector_page = va; #if __ARM_ARCH < 6 if (va == ARM_VECTORS_HIGH) { /* * Enable high vectors in the system control reg (SCTLR). * * Assume the MD caller knows what it's doing here, and really * does want the vector page relocated. * * Note: This has to be done here (and not just in * cpu_setup()) because the vector page needs to be * accessible *before* cpu_startup() is called. * Think ddb(9) ... */ cpu_control(CPU_CONTROL_VECRELOC, CPU_CONTROL_VECRELOC); } #endif } static void cpu_startup(void *dummy) { struct pcb *pcb = thread0.td_pcb; const unsigned int mbyte = 1024 * 1024; #if __ARM_ARCH < 6 && !defined(ARM_CACHE_LOCK_ENABLE) vm_page_t m; #endif identify_arm_cpu(); vm_ksubmap_init(&kmi); /* * Display the RAM layout. 
*/ printf("real memory = %ju (%ju MB)\n", (uintmax_t)arm32_ptob(realmem), (uintmax_t)arm32_ptob(realmem) / mbyte); printf("avail memory = %ju (%ju MB)\n", (uintmax_t)arm32_ptob(vm_cnt.v_free_count), (uintmax_t)arm32_ptob(vm_cnt.v_free_count) / mbyte); if (bootverbose) { arm_physmem_print_tables(); devmap_print_table(); } bufinit(); vm_pager_bufferinit(); pcb->pcb_regs.sf_sp = (u_int)thread0.td_kstack + USPACE_SVC_STACK_TOP; pmap_set_pcb_pagedir(kernel_pmap, pcb); #if __ARM_ARCH < 6 vector_page_setprot(VM_PROT_READ); pmap_postinit(); #ifdef ARM_CACHE_LOCK_ENABLE pmap_kenter_user(ARM_TP_ADDRESS, ARM_TP_ADDRESS); arm_lock_cache_line(ARM_TP_ADDRESS); #else m = vm_page_alloc(NULL, 0, VM_ALLOC_NOOBJ | VM_ALLOC_ZERO); pmap_kenter_user(ARM_TP_ADDRESS, VM_PAGE_TO_PHYS(m)); #endif *(uint32_t *)ARM_RAS_START = 0; *(uint32_t *)ARM_RAS_END = 0xffffffff; #endif } SYSINIT(cpu, SI_SUB_CPU, SI_ORDER_FIRST, cpu_startup, NULL); /* * Flush the D-cache for non-DMA I/O so that the I-cache can * be made coherent later. */ void cpu_flush_dcache(void *ptr, size_t len) { dcache_wb_poc((vm_offset_t)ptr, (vm_paddr_t)vtophys(ptr), len); } /* Get current clock frequency for the given cpu id. */ int cpu_est_clockrate(int cpu_id, uint64_t *rate) { return (ENXIO); } void cpu_idle(int busy) { CTR2(KTR_SPARE2, "cpu_idle(%d) at %d", busy, curcpu); spinlock_enter(); #ifndef NO_EVENTTIMERS if (!busy) cpu_idleclock(); #endif if (!sched_runnable()) cpu_sleep(0); #ifndef NO_EVENTTIMERS if (!busy) cpu_activeclock(); #endif spinlock_exit(); CTR2(KTR_SPARE2, "cpu_idle(%d) at %d done", busy, curcpu); } int cpu_idle_wakeup(int cpu) { return (0); } /* * Most ARM platforms don't need to do anything special to init their clocks * (they get intialized during normal device attachment), and by not defining a * cpu_initclocks() function they get this generic one. Any platform that needs * to do something special can just provide their own implementation, which will * override this one due to the weak linkage. */ void arm_generic_initclocks(void) { #ifndef NO_EVENTTIMERS #ifdef SMP if (PCPU_GET(cpuid) == 0) cpu_initclocks_bsp(); else cpu_initclocks_ap(); #else cpu_initclocks_bsp(); #endif #endif } __weak_reference(arm_generic_initclocks, cpu_initclocks); #ifdef MULTIDELAY void arm_set_delay(delay_func *impl, void *arg) { KASSERT(impl != NULL, ("No DELAY implementation")); delay_impl = impl; delay_arg = arg; } void DELAY(int usec) { delay_impl(usec, delay_arg); } #endif void cpu_pcpu_init(struct pcpu *pcpu, int cpuid, size_t size) { } void spinlock_enter(void) { struct thread *td; register_t cspr; td = curthread; if (td->td_md.md_spinlock_count == 0) { cspr = disable_interrupts(PSR_I | PSR_F); td->td_md.md_spinlock_count = 1; td->td_md.md_saved_cspr = cspr; } else td->td_md.md_spinlock_count++; critical_enter(); } void spinlock_exit(void) { struct thread *td; register_t cspr; td = curthread; critical_exit(); cspr = td->td_md.md_saved_cspr; td->td_md.md_spinlock_count--; if (td->td_md.md_spinlock_count == 0) restore_interrupts(cspr); } /* * Clear registers on exec */ void exec_setregs(struct thread *td, struct image_params *imgp, u_long stack) { struct trapframe *tf = td->td_frame; memset(tf, 0, sizeof(*tf)); tf->tf_usr_sp = stack; tf->tf_usr_lr = imgp->entry_addr; tf->tf_svc_lr = 0x77777777; tf->tf_pc = imgp->entry_addr; tf->tf_spsr = PSR_USR32_MODE; } #ifdef VFP /* * Get machine VFP context. 
*/ -static void +void get_vfpcontext(struct thread *td, mcontext_vfp_t *vfp) { - struct pcb *curpcb; + struct pcb *pcb; - curpcb = curthread->td_pcb; - critical_enter(); - - vfp_store(&curpcb->pcb_vfpstate, false); - memcpy(vfp->mcv_reg, curpcb->pcb_vfpstate.reg, + pcb = td->td_pcb; + if (td == curthread) { + critical_enter(); + vfp_store(&pcb->pcb_vfpstate, false); + critical_exit(); + } else + MPASS(TD_IS_SUSPENDED(td)); + memcpy(vfp->mcv_reg, pcb->pcb_vfpstate.reg, sizeof(vfp->mcv_reg)); - vfp->mcv_fpscr = curpcb->pcb_vfpstate.fpscr; - - critical_exit(); + vfp->mcv_fpscr = pcb->pcb_vfpstate.fpscr; } /* * Set machine VFP context. */ -static void +void set_vfpcontext(struct thread *td, mcontext_vfp_t *vfp) { - struct pcb *curpcb; + struct pcb *pcb; - curpcb = curthread->td_pcb; - critical_enter(); - - vfp_discard(td); - memcpy(curpcb->pcb_vfpstate.reg, vfp->mcv_reg, - sizeof(curpcb->pcb_vfpstate.reg)); - curpcb->pcb_vfpstate.fpscr = vfp->mcv_fpscr; - - critical_exit(); + pcb = td->td_pcb; + if (td == curthread) { + critical_enter(); + vfp_discard(td); + critical_exit(); + } else + MPASS(TD_IS_SUSPENDED(td)); + memcpy(pcb->pcb_vfpstate.reg, vfp->mcv_reg, + sizeof(pcb->pcb_vfpstate.reg)); + pcb->pcb_vfpstate.fpscr = vfp->mcv_fpscr; } #endif int arm_get_vfpstate(struct thread *td, void *args) { int rv; struct arm_get_vfpstate_args ua; mcontext_vfp_t mcontext_vfp; rv = copyin(args, &ua, sizeof(ua)); if (rv != 0) return (rv); if (ua.mc_vfp_size != sizeof(mcontext_vfp_t)) return (EINVAL); #ifdef VFP get_vfpcontext(td, &mcontext_vfp); #else bzero(&mcontext_vfp, sizeof(mcontext_vfp)); #endif rv = copyout(&mcontext_vfp, ua.mc_vfp, sizeof(mcontext_vfp)); if (rv != 0) return (rv); return (0); } /* * Get machine context. */ int get_mcontext(struct thread *td, mcontext_t *mcp, int clear_ret) { struct trapframe *tf = td->td_frame; __greg_t *gr = mcp->__gregs; if (clear_ret & GET_MC_CLEAR_RET) { gr[_REG_R0] = 0; gr[_REG_CPSR] = tf->tf_spsr & ~PSR_C; } else { gr[_REG_R0] = tf->tf_r0; gr[_REG_CPSR] = tf->tf_spsr; } gr[_REG_R1] = tf->tf_r1; gr[_REG_R2] = tf->tf_r2; gr[_REG_R3] = tf->tf_r3; gr[_REG_R4] = tf->tf_r4; gr[_REG_R5] = tf->tf_r5; gr[_REG_R6] = tf->tf_r6; gr[_REG_R7] = tf->tf_r7; gr[_REG_R8] = tf->tf_r8; gr[_REG_R9] = tf->tf_r9; gr[_REG_R10] = tf->tf_r10; gr[_REG_R11] = tf->tf_r11; gr[_REG_R12] = tf->tf_r12; gr[_REG_SP] = tf->tf_usr_sp; gr[_REG_LR] = tf->tf_usr_lr; gr[_REG_PC] = tf->tf_pc; mcp->mc_vfp_size = 0; mcp->mc_vfp_ptr = NULL; memset(&mcp->mc_spare, 0, sizeof(mcp->mc_spare)); return (0); } /* * Set machine context. * * However, we don't set any but the user modifiable flags, and we won't * touch the cs selector. 
*/ int set_mcontext(struct thread *td, mcontext_t *mcp) { mcontext_vfp_t mc_vfp, *vfp; struct trapframe *tf = td->td_frame; const __greg_t *gr = mcp->__gregs; #ifdef WITNESS if (mcp->mc_vfp_size != 0 && mcp->mc_vfp_size != sizeof(mc_vfp)) { printf("%s: %s: Malformed mc_vfp_size: %d (0x%08X)\n", td->td_proc->p_comm, __func__, mcp->mc_vfp_size, mcp->mc_vfp_size); } else if (mcp->mc_vfp_size != 0 && mcp->mc_vfp_ptr == NULL) { printf("%s: %s: c_vfp_size != 0 but mc_vfp_ptr == NULL\n", td->td_proc->p_comm, __func__); } #endif if (mcp->mc_vfp_size == sizeof(mc_vfp) && mcp->mc_vfp_ptr != NULL) { if (copyin(mcp->mc_vfp_ptr, &mc_vfp, sizeof(mc_vfp)) != 0) return (EFAULT); vfp = &mc_vfp; } else { vfp = NULL; } tf->tf_r0 = gr[_REG_R0]; tf->tf_r1 = gr[_REG_R1]; tf->tf_r2 = gr[_REG_R2]; tf->tf_r3 = gr[_REG_R3]; tf->tf_r4 = gr[_REG_R4]; tf->tf_r5 = gr[_REG_R5]; tf->tf_r6 = gr[_REG_R6]; tf->tf_r7 = gr[_REG_R7]; tf->tf_r8 = gr[_REG_R8]; tf->tf_r9 = gr[_REG_R9]; tf->tf_r10 = gr[_REG_R10]; tf->tf_r11 = gr[_REG_R11]; tf->tf_r12 = gr[_REG_R12]; tf->tf_usr_sp = gr[_REG_SP]; tf->tf_usr_lr = gr[_REG_LR]; tf->tf_pc = gr[_REG_PC]; tf->tf_spsr = gr[_REG_CPSR]; #ifdef VFP if (vfp != NULL) set_vfpcontext(td, vfp); #endif return (0); } void sendsig(catcher, ksi, mask) sig_t catcher; ksiginfo_t *ksi; sigset_t *mask; { struct thread *td; struct proc *p; struct trapframe *tf; struct sigframe *fp, frame; struct sigacts *psp; struct sysentvec *sysent; int onstack; int sig; int code; td = curthread; p = td->td_proc; PROC_LOCK_ASSERT(p, MA_OWNED); sig = ksi->ksi_signo; code = ksi->ksi_code; psp = p->p_sigacts; mtx_assert(&psp->ps_mtx, MA_OWNED); tf = td->td_frame; onstack = sigonstack(tf->tf_usr_sp); CTR4(KTR_SIG, "sendsig: td=%p (%s) catcher=%p sig=%d", td, p->p_comm, catcher, sig); /* Allocate and validate space for the signal handler context. */ if ((td->td_pflags & TDP_ALTSTACK) != 0 && !(onstack) && SIGISMEMBER(psp->ps_sigonstack, sig)) { fp = (struct sigframe *)((uintptr_t)td->td_sigstk.ss_sp + td->td_sigstk.ss_size); #if defined(COMPAT_43) td->td_sigstk.ss_flags |= SS_ONSTACK; #endif } else fp = (struct sigframe *)td->td_frame->tf_usr_sp; /* make room on the stack */ fp--; /* make the stack aligned */ fp = (struct sigframe *)STACKALIGN(fp); /* Populate the siginfo frame. */ get_mcontext(td, &frame.sf_uc.uc_mcontext, 0); #ifdef VFP get_vfpcontext(td, &frame.sf_vfp); frame.sf_uc.uc_mcontext.mc_vfp_size = sizeof(fp->sf_vfp); frame.sf_uc.uc_mcontext.mc_vfp_ptr = &fp->sf_vfp; #else frame.sf_uc.uc_mcontext.mc_vfp_size = 0; frame.sf_uc.uc_mcontext.mc_vfp_ptr = NULL; #endif frame.sf_si = ksi->ksi_info; frame.sf_uc.uc_sigmask = *mask; frame.sf_uc.uc_stack.ss_flags = (td->td_pflags & TDP_ALTSTACK ) ? ((onstack) ? SS_ONSTACK : 0) : SS_DISABLE; frame.sf_uc.uc_stack = td->td_sigstk; mtx_unlock(&psp->ps_mtx); PROC_UNLOCK(td->td_proc); /* Copy the sigframe out to the user's stack. */ if (copyout(&frame, fp, sizeof(*fp)) != 0) { /* Process has trashed its stack. Kill it. */ CTR2(KTR_SIG, "sendsig: sigexit td=%p fp=%p", td, fp); PROC_LOCK(p); sigexit(td, SIGILL); } /* * Build context to run handler in. We invoke the handler * directly, only returning via the trampoline. Note the * trampoline version numbers are coordinated with machine- * dependent code in libc. 
*/ tf->tf_r0 = sig; tf->tf_r1 = (register_t)&fp->sf_si; tf->tf_r2 = (register_t)&fp->sf_uc; /* the trampoline uses r5 as the uc address */ tf->tf_r5 = (register_t)&fp->sf_uc; tf->tf_pc = (register_t)catcher; tf->tf_usr_sp = (register_t)fp; sysent = p->p_sysent; if (sysent->sv_sigcode_base != 0) tf->tf_usr_lr = (register_t)sysent->sv_sigcode_base; else tf->tf_usr_lr = (register_t)(sysent->sv_psstrings - *(sysent->sv_szsigcode)); /* Set the mode to enter in the signal handler */ #if __ARM_ARCH >= 7 if ((register_t)catcher & 1) tf->tf_spsr |= PSR_T; else tf->tf_spsr &= ~PSR_T; #endif CTR3(KTR_SIG, "sendsig: return td=%p pc=%#x sp=%#x", td, tf->tf_usr_lr, tf->tf_usr_sp); PROC_LOCK(p); mtx_lock(&psp->ps_mtx); } int sys_sigreturn(td, uap) struct thread *td; struct sigreturn_args /* { const struct __ucontext *sigcntxp; } */ *uap; { ucontext_t uc; int spsr; if (uap == NULL) return (EFAULT); if (copyin(uap->sigcntxp, &uc, sizeof(uc))) return (EFAULT); /* * Make sure the processor mode has not been tampered with and * interrupts have not been disabled. */ spsr = uc.uc_mcontext.__gregs[_REG_CPSR]; if ((spsr & PSR_MODE) != PSR_USR32_MODE || (spsr & (PSR_I | PSR_F)) != 0) return (EINVAL); /* Restore register context. */ set_mcontext(td, &uc.uc_mcontext); /* Restore signal mask. */ kern_sigprocmask(td, SIG_SETMASK, &uc.uc_sigmask, NULL, 0); return (EJUSTRETURN); } /* * Construct a PCB from a trapframe. This is called from kdb_trap() where * we want to start a backtrace from the function that caused us to enter * the debugger. We have the context in the trapframe, but base the trace * on the PCB. The PCB doesn't have to be perfect, as long as it contains * enough for a backtrace. */ void makectx(struct trapframe *tf, struct pcb *pcb) { pcb->pcb_regs.sf_r4 = tf->tf_r4; pcb->pcb_regs.sf_r5 = tf->tf_r5; pcb->pcb_regs.sf_r6 = tf->tf_r6; pcb->pcb_regs.sf_r7 = tf->tf_r7; pcb->pcb_regs.sf_r8 = tf->tf_r8; pcb->pcb_regs.sf_r9 = tf->tf_r9; pcb->pcb_regs.sf_r10 = tf->tf_r10; pcb->pcb_regs.sf_r11 = tf->tf_r11; pcb->pcb_regs.sf_r12 = tf->tf_r12; pcb->pcb_regs.sf_pc = tf->tf_pc; pcb->pcb_regs.sf_lr = tf->tf_usr_lr; pcb->pcb_regs.sf_sp = tf->tf_usr_sp; } void pcpu0_init(void) { #if __ARM_ARCH >= 6 set_curthread(&thread0); #endif pcpu_init(pcpup, 0, sizeof(struct pcpu)); PCPU_SET(curthread, &thread0); } /* * Initialize proc0 */ void init_proc0(vm_offset_t kstack) { proc_linkup0(&proc0, &thread0); thread0.td_kstack = kstack; thread0.td_pcb = (struct pcb *) (thread0.td_kstack + kstack_pages * PAGE_SIZE) - 1; thread0.td_pcb->pcb_flags = 0; thread0.td_pcb->pcb_vfpcpu = -1; thread0.td_pcb->pcb_vfpstate.fpscr = VFPSCR_DN; thread0.td_frame = &proc0_tf; pcpup->pc_curpcb = thread0.td_pcb; } #if __ARM_ARCH >= 6 void set_stackptrs(int cpu) { set_stackptr(PSR_IRQ32_MODE, irqstack + ((IRQ_STACK_SIZE * PAGE_SIZE) * (cpu + 1))); set_stackptr(PSR_ABT32_MODE, abtstack + ((ABT_STACK_SIZE * PAGE_SIZE) * (cpu + 1))); set_stackptr(PSR_UND32_MODE, undstack + ((UND_STACK_SIZE * PAGE_SIZE) * (cpu + 1))); } #else void set_stackptrs(int cpu) { set_stackptr(PSR_IRQ32_MODE, irqstack.pv_va + ((IRQ_STACK_SIZE * PAGE_SIZE) * (cpu + 1))); set_stackptr(PSR_ABT32_MODE, abtstack.pv_va + ((ABT_STACK_SIZE * PAGE_SIZE) * (cpu + 1))); set_stackptr(PSR_UND32_MODE, undstack.pv_va + ((UND_STACK_SIZE * PAGE_SIZE) * (cpu + 1))); } #endif #ifdef FDT #if __ARM_ARCH < 6 void * initarm(struct arm_boot_params *abp) { struct mem_region mem_regions[FDT_MEM_REGIONS]; struct pv_addr kernel_l1pt; struct pv_addr dpcpu; vm_offset_t dtbp, freemempos, l2_start, lastaddr; 
uint64_t memsize; uint32_t l2size; char *env; void *kmdp; u_int l1pagetable; int i, j, err_devmap, mem_regions_sz; lastaddr = parse_boot_param(abp); arm_physmem_kernaddr = abp->abp_physaddr; memsize = 0; cpuinfo_init(); set_cpufuncs(); /* * Find the dtb passed in by the boot loader. */ kmdp = preload_search_by_type("elf kernel"); if (kmdp != NULL) dtbp = MD_FETCH(kmdp, MODINFOMD_DTBP, vm_offset_t); else dtbp = (vm_offset_t)NULL; #if defined(FDT_DTB_STATIC) /* * In case the device tree blob was not retrieved (from metadata) try * to use the statically embedded one. */ if (dtbp == (vm_offset_t)NULL) dtbp = (vm_offset_t)&fdt_static_dtb; #endif if (OF_install(OFW_FDT, 0) == FALSE) panic("Cannot install FDT"); if (OF_init((void *)dtbp) != 0) panic("OF_init failed with the found device tree"); /* Grab physical memory regions information from device tree. */ if (fdt_get_mem_regions(mem_regions, &mem_regions_sz, &memsize) != 0) panic("Cannot get physical memory regions"); arm_physmem_hardware_regions(mem_regions, mem_regions_sz); /* Grab reserved memory regions information from device tree. */ if (fdt_get_reserved_regions(mem_regions, &mem_regions_sz) == 0) arm_physmem_exclude_regions(mem_regions, mem_regions_sz, EXFLAG_NODUMP | EXFLAG_NOALLOC); /* Platform-specific initialisation */ platform_probe_and_attach(); pcpu0_init(); /* Do basic tuning, hz etc */ init_param1(); /* Calculate number of L2 tables needed for mapping vm_page_array */ l2size = (memsize / PAGE_SIZE) * sizeof(struct vm_page); l2size = (l2size >> L1_S_SHIFT) + 1; /* * Add one table for end of kernel map, one for stacks, msgbuf and * L1 and L2 tables map and one for vectors map. */ l2size += 3; /* Make it divisible by 4 */ l2size = (l2size + 3) & ~3; freemempos = (lastaddr + PAGE_MASK) & ~PAGE_MASK; /* Define a macro to simplify memory allocation */ #define valloc_pages(var, np) \ alloc_pages((var).pv_va, (np)); \ (var).pv_pa = (var).pv_va + (abp->abp_physaddr - KERNVIRTADDR); #define alloc_pages(var, np) \ (var) = freemempos; \ freemempos += (np * PAGE_SIZE); \ memset((char *)(var), 0, ((np) * PAGE_SIZE)); while (((freemempos - L1_TABLE_SIZE) & (L1_TABLE_SIZE - 1)) != 0) freemempos += PAGE_SIZE; valloc_pages(kernel_l1pt, L1_TABLE_SIZE / PAGE_SIZE); for (i = 0, j = 0; i < l2size; ++i) { if (!(i % (PAGE_SIZE / L2_TABLE_SIZE_REAL))) { valloc_pages(kernel_pt_table[i], L2_TABLE_SIZE / PAGE_SIZE); j = i; } else { kernel_pt_table[i].pv_va = kernel_pt_table[j].pv_va + L2_TABLE_SIZE_REAL * (i - j); kernel_pt_table[i].pv_pa = kernel_pt_table[i].pv_va - KERNVIRTADDR + abp->abp_physaddr; } } /* * Allocate a page for the system page mapped to 0x00000000 * or 0xffff0000. This page will just contain the system vectors * and can be shared by all processes. */ valloc_pages(systempage, 1); /* Allocate dynamic per-cpu area. */ valloc_pages(dpcpu, DPCPU_SIZE / PAGE_SIZE); dpcpu_init((void *)dpcpu.pv_va, 0); /* Allocate stacks for all modes */ valloc_pages(irqstack, IRQ_STACK_SIZE * MAXCPU); valloc_pages(abtstack, ABT_STACK_SIZE * MAXCPU); valloc_pages(undstack, UND_STACK_SIZE * MAXCPU); valloc_pages(kernelstack, kstack_pages * MAXCPU); valloc_pages(msgbufpv, round_page(msgbufsize) / PAGE_SIZE); /* * Now we start construction of the L1 page table * We start by mapping the L2 page tables into the L1. 
* This means that we can replace L1 mappings later on if necessary */ l1pagetable = kernel_l1pt.pv_va; /* * Try to map as much as possible of kernel text and data using * 1MB section mapping and for the rest of initial kernel address * space use L2 coarse tables. * * Link L2 tables for mapping remainder of kernel (modulo 1MB) * and kernel structures */ l2_start = lastaddr & ~(L1_S_OFFSET); for (i = 0 ; i < l2size - 1; i++) pmap_link_l2pt(l1pagetable, l2_start + i * L1_S_SIZE, &kernel_pt_table[i]); pmap_curmaxkvaddr = l2_start + (l2size - 1) * L1_S_SIZE; /* Map kernel code and data */ pmap_map_chunk(l1pagetable, KERNVIRTADDR, abp->abp_physaddr, (((uint32_t)(lastaddr) - KERNVIRTADDR) + PAGE_MASK) & ~PAGE_MASK, VM_PROT_READ|VM_PROT_WRITE, PTE_CACHE); /* Map L1 directory and allocated L2 page tables */ pmap_map_chunk(l1pagetable, kernel_l1pt.pv_va, kernel_l1pt.pv_pa, L1_TABLE_SIZE, VM_PROT_READ|VM_PROT_WRITE, PTE_PAGETABLE); pmap_map_chunk(l1pagetable, kernel_pt_table[0].pv_va, kernel_pt_table[0].pv_pa, L2_TABLE_SIZE_REAL * l2size, VM_PROT_READ|VM_PROT_WRITE, PTE_PAGETABLE); /* Map allocated DPCPU, stacks and msgbuf */ pmap_map_chunk(l1pagetable, dpcpu.pv_va, dpcpu.pv_pa, freemempos - dpcpu.pv_va, VM_PROT_READ|VM_PROT_WRITE, PTE_CACHE); /* Link and map the vector page */ pmap_link_l2pt(l1pagetable, ARM_VECTORS_HIGH, &kernel_pt_table[l2size - 1]); pmap_map_entry(l1pagetable, ARM_VECTORS_HIGH, systempage.pv_pa, VM_PROT_READ|VM_PROT_WRITE|VM_PROT_EXECUTE, PTE_CACHE); /* Establish static device mappings. */ err_devmap = platform_devmap_init(); devmap_bootstrap(l1pagetable, NULL); vm_max_kernel_address = platform_lastaddr(); cpu_domains((DOMAIN_CLIENT << (PMAP_DOMAIN_KERNEL * 2)) | DOMAIN_CLIENT); pmap_pa = kernel_l1pt.pv_pa; cpu_setttb(kernel_l1pt.pv_pa); cpu_tlb_flushID(); cpu_domains(DOMAIN_CLIENT << (PMAP_DOMAIN_KERNEL * 2)); /* * Now that proper page tables are installed, call cpu_setup() to enable * instruction and data caches and other chip-specific features. */ cpu_setup(); /* * Only after the SOC registers block is mapped we can perform device * tree fixups, as they may attempt to read parameters from hardware. */ OF_interpret("perform-fixup", 0); platform_gpio_init(); cninit(); debugf("initarm: console initialized\n"); debugf(" arg1 kmdp = 0x%08x\n", (uint32_t)kmdp); debugf(" boothowto = 0x%08x\n", boothowto); debugf(" dtbp = 0x%08x\n", (uint32_t)dtbp); arm_print_kenv(); env = kern_getenv("kernelname"); if (env != NULL) { strlcpy(kernelname, env, sizeof(kernelname)); freeenv(env); } if (err_devmap != 0) printf("WARNING: could not fully configure devmap, error=%d\n", err_devmap); platform_late_init(); /* * Pages were allocated during the secondary bootstrap for the * stacks for different CPU modes. * We must now set the r13 registers in the different CPU modes to * point to these stacks. * Since the ARM stacks use STMFD etc. we must set r13 to the top end * of the stack memory. */ cpu_control(CPU_CONTROL_MMU_ENABLE, CPU_CONTROL_MMU_ENABLE); set_stackptrs(0); /* * We must now clean the cache again.... * Cleaning may be done by reading new data to displace any * dirty data in the cache. This will have happened in cpu_setttb() * but since we are boot strapping the addresses used for the read * may have just been remapped and thus the cache could be out * of sync. A re-clean after the switch will cure this. * After booting there are no gross relocations of the kernel thus * this problem will not occur after initarm(). 
*/ cpu_idcache_wbinv_all(); undefined_init(); init_proc0(kernelstack.pv_va); arm_vector_init(ARM_VECTORS_HIGH, ARM_VEC_ALL); pmap_bootstrap(freemempos, &kernel_l1pt); msgbufp = (void *)msgbufpv.pv_va; msgbufinit(msgbufp, msgbufsize); mutex_init(); /* * Exclude the kernel (and all the things we allocated which immediately * follow the kernel) from the VM allocation pool but not from crash * dumps. virtual_avail is a global variable which tracks the kva we've * "allocated" while setting up pmaps. * * Prepare the list of physical memory available to the vm subsystem. */ arm_physmem_exclude_region(abp->abp_physaddr, (virtual_avail - KERNVIRTADDR), EXFLAG_NOALLOC); arm_physmem_init_kernel_globals(); init_param2(physmem); dbg_monitor_init(); kdb_init(); return ((void *)(kernelstack.pv_va + USPACE_SVC_STACK_TOP - sizeof(struct pcb))); } #else /* __ARM_ARCH < 6 */ void * initarm(struct arm_boot_params *abp) { struct mem_region mem_regions[FDT_MEM_REGIONS]; vm_paddr_t lastaddr; vm_offset_t dtbp, kernelstack, dpcpu; char *env; void *kmdp; int err_devmap, mem_regions_sz; #ifdef EFI struct efi_map_header *efihdr; #endif /* get last allocated physical address */ arm_physmem_kernaddr = abp->abp_physaddr; lastaddr = parse_boot_param(abp) - KERNVIRTADDR + arm_physmem_kernaddr; set_cpufuncs(); cpuinfo_init(); /* * Find the dtb passed in by the boot loader. */ kmdp = preload_search_by_type("elf kernel"); dtbp = MD_FETCH(kmdp, MODINFOMD_DTBP, vm_offset_t); #if defined(FDT_DTB_STATIC) /* * In case the device tree blob was not retrieved (from metadata) try * to use the statically embedded one. */ if (dtbp == (vm_offset_t)NULL) dtbp = (vm_offset_t)&fdt_static_dtb; #endif if (OF_install(OFW_FDT, 0) == FALSE) panic("Cannot install FDT"); if (OF_init((void *)dtbp) != 0) panic("OF_init failed with the found device tree"); #if defined(LINUX_BOOT_ABI) arm_parse_fdt_bootargs(); #endif #ifdef EFI efihdr = (struct efi_map_header *)preload_search_info(kmdp, MODINFO_METADATA | MODINFOMD_EFI_MAP); if (efihdr != NULL) { arm_add_efi_map_entries(efihdr, mem_regions, &mem_regions_sz); } else #endif { /* Grab physical memory regions information from device tree. */ if (fdt_get_mem_regions(mem_regions, &mem_regions_sz,NULL) != 0) panic("Cannot get physical memory regions"); } arm_physmem_hardware_regions(mem_regions, mem_regions_sz); /* Grab reserved memory regions information from device tree. */ if (fdt_get_reserved_regions(mem_regions, &mem_regions_sz) == 0) arm_physmem_exclude_regions(mem_regions, mem_regions_sz, EXFLAG_NODUMP | EXFLAG_NOALLOC); /* * Set TEX remapping registers. * Setup kernel page tables and switch to kernel L1 page table. */ pmap_set_tex(); pmap_bootstrap_prepare(lastaddr); /* * Now that proper page tables are installed, call cpu_setup() to enable * instruction and data caches and other chip-specific features. */ cpu_setup(); /* Platform-specific initialisation */ platform_probe_and_attach(); pcpu0_init(); /* Do basic tuning, hz etc */ init_param1(); /* * Allocate a page for the system page mapped to 0xffff0000 * This page will just contain the system vectors and can be * shared by all processes. */ systempage = pmap_preboot_get_pages(1); /* Map the vector page. */ pmap_preboot_map_pages(systempage, ARM_VECTORS_HIGH, 1); if (virtual_end >= ARM_VECTORS_HIGH) virtual_end = ARM_VECTORS_HIGH - 1; /* Allocate dynamic per-cpu area. 
*/ dpcpu = pmap_preboot_get_vpages(DPCPU_SIZE / PAGE_SIZE); dpcpu_init((void *)dpcpu, 0); /* Allocate stacks for all modes */ irqstack = pmap_preboot_get_vpages(IRQ_STACK_SIZE * MAXCPU); abtstack = pmap_preboot_get_vpages(ABT_STACK_SIZE * MAXCPU); undstack = pmap_preboot_get_vpages(UND_STACK_SIZE * MAXCPU ); kernelstack = pmap_preboot_get_vpages(kstack_pages * MAXCPU); /* Allocate message buffer. */ msgbufp = (void *)pmap_preboot_get_vpages( round_page(msgbufsize) / PAGE_SIZE); /* * Pages were allocated during the secondary bootstrap for the * stacks for different CPU modes. * We must now set the r13 registers in the different CPU modes to * point to these stacks. * Since the ARM stacks use STMFD etc. we must set r13 to the top end * of the stack memory. */ set_stackptrs(0); mutex_init(); /* Establish static device mappings. */ err_devmap = platform_devmap_init(); devmap_bootstrap(0, NULL); vm_max_kernel_address = platform_lastaddr(); /* * Only after the SOC registers block is mapped we can perform device * tree fixups, as they may attempt to read parameters from hardware. */ OF_interpret("perform-fixup", 0); platform_gpio_init(); cninit(); debugf("initarm: console initialized\n"); debugf(" arg1 kmdp = 0x%08x\n", (uint32_t)kmdp); debugf(" boothowto = 0x%08x\n", boothowto); debugf(" dtbp = 0x%08x\n", (uint32_t)dtbp); debugf(" lastaddr1: 0x%08x\n", lastaddr); arm_print_kenv(); env = kern_getenv("kernelname"); if (env != NULL) strlcpy(kernelname, env, sizeof(kernelname)); if (err_devmap != 0) printf("WARNING: could not fully configure devmap, error=%d\n", err_devmap); platform_late_init(); /* * We must now clean the cache again.... * Cleaning may be done by reading new data to displace any * dirty data in the cache. This will have happened in cpu_setttb() * but since we are boot strapping the addresses used for the read * may have just been remapped and thus the cache could be out * of sync. A re-clean after the switch will cure this. * After booting there are no gross relocations of the kernel thus * this problem will not occur after initarm(). */ /* Set stack for exception handlers */ undefined_init(); init_proc0(kernelstack); arm_vector_init(ARM_VECTORS_HIGH, ARM_VEC_ALL); enable_interrupts(PSR_A); pmap_bootstrap(0); /* Exclude the kernel (and all the things we allocated which immediately * follow the kernel) from the VM allocation pool but not from crash * dumps. virtual_avail is a global variable which tracks the kva we've * "allocated" while setting up pmaps. * * Prepare the list of physical memory available to the vm subsystem. */ arm_physmem_exclude_region(abp->abp_physaddr, pmap_preboot_get_pages(0) - abp->abp_physaddr, EXFLAG_NOALLOC); arm_physmem_init_kernel_globals(); init_param2(physmem); /* Init message buffer. */ msgbufinit(msgbufp, msgbufsize); dbg_monitor_init(); kdb_init(); return ((void *)STACKALIGN(thread0.td_pcb)); } #endif /* __ARM_ARCH < 6 */ #endif /* FDT */ Index: stable/11/sys/arm/arm/ptrace_machdep.c =================================================================== --- stable/11/sys/arm/arm/ptrace_machdep.c (nonexistent) +++ stable/11/sys/arm/arm/ptrace_machdep.c (revision 325831) @@ -0,0 +1,63 @@ +/*- + * Copyright (c) 2017 John Baldwin + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. 
Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + * + */ + +#include <sys/cdefs.h> +__FBSDID("$FreeBSD$"); + +#include <sys/param.h> +#include <sys/proc.h> +#include <sys/ptrace.h> +#ifdef VFP +#include <machine/vfp.h> +#endif + +int +cpu_ptrace(struct thread *td, int req, void *addr, int data) +{ +#ifdef VFP + mcontext_vfp_t vfp; +#endif + int error; + + switch (req) { +#ifdef VFP + case PT_GETVFPREGS: + get_vfpcontext(td, &vfp); + error = copyout(&vfp, addr, sizeof(vfp)); + break; + case PT_SETVFPREGS: + error = copyin(addr, &vfp, sizeof(vfp)); + if (error == 0) + set_vfpcontext(td, &vfp); + break; +#endif + default: + error = EINVAL; + } + + return (error); +} Property changes on: stable/11/sys/arm/arm/ptrace_machdep.c ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: stable/11/sys/arm/include/ptrace.h =================================================================== --- stable/11/sys/arm/include/ptrace.h (revision 325830) +++ stable/11/sys/arm/include/ptrace.h (revision 325831) @@ -1,8 +1,23 @@ /* $NetBSD: ptrace.h,v 1.2 2001/02/23 21:23:52 reinoud Exp $ */ /* $FreeBSD$ */ #ifndef _MACHINE_PTRACE_H_ #define _MACHINE_PTRACE_H_ +#define __HAVE_PTRACE_MACHDEP + +/* + * Must match mcontext_vfp_t. Note that mcontext_vfp_t does not + * include explicit padding. + */ +struct vfpreg { + __uint64_t vfp_reg[32]; + __uint32_t vfp_scr; + __uint32_t vfp_pad0; +}; + +#define PT_GETVFPREGS (PT_FIRSTMACH + 0) +#define PT_SETVFPREGS (PT_FIRSTMACH + 1) + #endif /* !_MACHINE_PTRACE_H */ Index: stable/11/sys/arm/include/vfp.h =================================================================== --- stable/11/sys/arm/include/vfp.h (revision 325830) +++ stable/11/sys/arm/include/vfp.h (revision 325831) @@ -1,157 +1,159 @@ /* * Copyright (c) 2012 Mark Tinguely * * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution.
* * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * $FreeBSD$ */ #ifndef _MACHINE__VFP_H_ #define _MACHINE__VFP_H_ /* fpsid, fpscr, fpexc are defined in the newer gas */ #define VFPSID cr0 #define VFPSCR cr1 #define VMVFR1 cr6 #define VMVFR0 cr7 #define VFPEXC cr8 #define VFPINST cr9 /* vfp 1 and 2 except instruction */ #define VFPINST2 cr10 /* vfp 2? */ /* VFPSID */ #define VFPSID_IMPLEMENTOR_OFF 24 #define VFPSID_IMPLEMENTOR_MASK (0xff000000) #define VFPSID_HARDSOFT_IMP (0x00800000) #define VFPSID_SINGLE_PREC 20 /* version 1 and 2 */ #define VFPSID_SUBVERSION_OFF 16 #define VFPSID_SUBVERSION2_MASK (0x000f0000) /* version 1 and 2 */ #define VFPSID_SUBVERSION3_MASK (0x007f0000) /* version 3 */ #define VFP_ARCH1 0x0 #define VFP_ARCH2 0x1 #define VFP_ARCH3 0x2 #define VFPSID_PARTNUMBER_OFF 8 #define VFPSID_PARTNUMBER_MASK (0x0000ff00) #define VFPSID_VARIANT_OFF 4 #define VFPSID_VARIANT_MASK (0x000000f0) #define VFPSID_REVISION_MASK 0x0f /* VFPSCR */ #define VFPSCR_CC_N (0x80000000) /* comparison less than */ #define VFPSCR_CC_Z (0x40000000) /* comparison equal */ #define VFPSCR_CC_C (0x20000000) /* comparison = > unordered */ #define VFPSCR_CC_V (0x10000000) /* comparison unordered */ #define VFPSCR_QC (0x08000000) /* saturation cululative */ #define VFPSCR_DN (0x02000000) /* default NaN enable */ #define VFPSCR_FZ (0x01000000) /* flush to zero enabled */ #define VFPSCR_RMODE_OFF 22 /* rounding mode offset */ #define VFPSCR_RMODE_MASK (0x00c00000) /* rounding mode mask */ #define VFPSCR_RMODE_RN (0x00000000) /* round nearest */ #define VFPSCR_RMODE_RPI (0x00400000) /* round to plus infinity */ #define VFPSCR_RMODE_RNI (0x00800000) /* round to neg infinity */ #define VFPSCR_RMODE_RM (0x00c00000) /* round to zero */ #define VFPSCR_STRIDE_OFF 20 /* vector stride -1 */ #define VFPSCR_STRIDE_MASK (0x00300000) #define VFPSCR_LEN_OFF 16 /* vector length -1 */ #define VFPSCR_LEN_MASK (0x00070000) #define VFPSCR_IDE (0x00008000) /* input subnormal exc enable */ #define VFPSCR_IXE (0x00001000) /* inexact exception enable */ #define VFPSCR_UFE (0x00000800) /* underflow exception enable */ #define VFPSCR_OFE (0x00000400) /* overflow exception enable */ #define VFPSCR_DNZ (0x00000200) /* div by zero exception en */ #define VFPSCR_IOE (0x00000100) /* invalid op exec enable */ #define VFPSCR_IDC (0x00000080) /* input subnormal cumul */ #define VFPSCR_IXC (0x00000010) /* Inexact cumulative flag */ #define VFPSCR_UFC (0x00000008) /* underflow cumulative flag */ #define VFPSCR_OFC (0x00000004) /* overflow cumulative flag */ #define VFPSCR_DZC (0x00000002) /* division by zero flag */ #define VFPSCR_IOC (0x00000001) /* invalid operation cumul */ /* VFPEXC */ #define VFPEXC_EX (0x80000000) /* exception v1 v2 */ #define VFPEXC_EN (0x40000000) /* vfp enable */ #define VFPEXC_DEX (0x20000000) /* Synchronous exception 
*/ #define VFPEXC_FP2V (0x10000000) /* FPINST2 valid */ #define VFPEXC_INV (0x00000080) /* Input exception */ #define VFPEXC_UFC (0x00000008) /* Underflow exception */ #define VFPEXC_OFC (0x00000004) /* Overflow exception */ #define VFPEXC_IOC (0x00000001) /* Invlaid operation */ /* version 3 registers */ /* VMVFR0 */ #define VMVFR0_RM_OFF 28 #define VMVFR0_RM_MASK (0xf0000000) /* VFP rounding modes */ #define VMVFR0_SV_OFF 24 #define VMVFR0_SV_MASK (0x0f000000) /* VFP short vector supp */ #define VMVFR0_SR_OFF 20 #define VMVFR0_SR (0x00f00000) /* VFP hw sqrt supp */ #define VMVFR0_D_OFF 16 #define VMVFR0_D_MASK (0x000f0000) /* VFP divide supp */ #define VMVFR0_TE_OFF 12 #define VMVFR0_TE_MASK (0x0000f000) /* VFP trap exception supp */ #define VMVFR0_DP_OFF 8 #define VMVFR0_DP_MASK (0x00000f00) /* VFP double prec support */ #define VMVFR0_SP_OFF 4 #define VMVFR0_SP_MASK (0x000000f0) /* VFP single prec support */ #define VMVFR0_RB_MASK (0x0000000f) /* VFP 64 bit media support */ /* VMVFR1 */ #define VMVFR1_FMAC_OFF 28 #define VMVFR1_FMAC_MASK (0xf0000000) /* Neon FMAC support */ #define VMVFR1_VFP_HP_OFF 24 #define VMVFR1_VFP_HP_MASK (0x0f000000) /* VFP half prec support */ #define VMVFR1_HP_OFF 20 #define VMVFR1_HP_MASK (0x00f00000) /* Neon half prec support */ #define VMVFR1_SP_OFF 16 #define VMVFR1_SP_MASK (0x000f0000) /* Neon single prec support */ #define VMVFR1_I_OFF 12 #define VMVFR1_I_MASK (0x0000f000) /* Neon integer support */ #define VMVFR1_LS_OFF 8 #define VMVFR1_LS_MASK (0x00000f00) /* Neon ld/st instr support */ #define VMVFR1_DN_OFF 4 #define VMVFR1_DN_MASK (0x000000f0) /* Neon prop NaN support */ #define VMVFR1_FZ_MASK (0x0000000f) /* Neon denormal arith supp */ #define COPROC10 (0x3 << 20) #define COPROC11 (0x3 << 22) #ifndef LOCORE struct vfp_state { uint64_t reg[32]; uint32_t fpscr; uint32_t fpexec; uint32_t fpinst; uint32_t fpinst2; }; #ifdef _KERNEL +void get_vfpcontext(struct thread *, mcontext_vfp_t *); +void set_vfpcontext(struct thread *, mcontext_vfp_t *); void vfp_init(void); void vfp_store(struct vfp_state *, boolean_t); void vfp_discard(struct thread *); #endif /* _KERNEL */ #endif /* LOCORE */ #endif Index: stable/11/sys/conf/files.arm =================================================================== --- stable/11/sys/conf/files.arm (revision 325830) +++ stable/11/sys/conf/files.arm (revision 325831) @@ -1,161 +1,162 @@ # $FreeBSD$ cloudabi32_vdso.o optional compat_cloudabi32 \ dependency "$S/contrib/cloudabi/cloudabi_vdso_armv6.S" \ compile-with "${CC} -x assembler-with-cpp -shared -nostdinc -nostdlib -Wl,-T$S/compat/cloudabi/cloudabi_vdso.lds $S/contrib/cloudabi/cloudabi_vdso_armv6.S -o ${.TARGET}" \ no-obj no-implicit-rule \ clean "cloudabi32_vdso.o" # cloudabi32_vdso_blob.o optional compat_cloudabi32 \ dependency "cloudabi32_vdso.o" \ compile-with "${OBJCOPY} --input-target binary --output-target elf32-littlearm --binary-architecture arm cloudabi32_vdso.o ${.TARGET}" \ no-implicit-rule \ clean "cloudabi32_vdso_blob.o" # arm/arm/autoconf.c standard arm/arm/bcopy_page.S standard arm/arm/bcopyinout.S standard arm/arm/blockio.S standard arm/arm/bus_space_asm_generic.S standard arm/arm/bus_space_base.c optional fdt arm/arm/bus_space_generic.c standard arm/arm/busdma_machdep-v4.c optional !armv6 arm/arm/busdma_machdep-v6.c optional armv6 arm/arm/copystr.S standard arm/arm/cpufunc.c standard arm/arm/cpufunc_asm.S standard arm/arm/cpufunc_asm_arm9.S optional cpu_arm9 | cpu_arm9e arm/arm/cpufunc_asm_arm11.S optional cpu_arm1176 arm/arm/cpufunc_asm_arm11x6.S 
optional cpu_arm1176 arm/arm/cpufunc_asm_armv4.S optional cpu_arm9 | cpu_arm9e | cpu_fa526 | cpu_xscale_pxa2x0 | cpu_xscale_ixp425 | cpu_xscale_81342 arm/arm/cpufunc_asm_armv5_ec.S optional cpu_arm9e arm/arm/cpufunc_asm_armv6.S optional cpu_arm1176 arm/arm/cpufunc_asm_armv7.S optional cpu_cortexa | cpu_krait | cpu_mv_pj4b arm/arm/cpufunc_asm_fa526.S optional cpu_fa526 arm/arm/cpufunc_asm_pj4b.S optional cpu_mv_pj4b arm/arm/cpufunc_asm_sheeva.S optional cpu_arm9e arm/arm/cpufunc_asm_xscale.S optional cpu_xscale_pxa2x0 | cpu_xscale_ixp425 | cpu_xscale_81342 arm/arm/cpufunc_asm_xscale_c3.S optional cpu_xscale_81342 arm/arm/cpuinfo.c standard arm/arm/cpu_asm-v6.S optional armv6 arm/arm/db_disasm.c optional ddb arm/arm/db_interface.c optional ddb arm/arm/db_trace.c optional ddb arm/arm/debug_monitor.c optional ddb armv6 arm/arm/disassem.c optional ddb arm/arm/dump_machdep.c standard arm/arm/elf_machdep.c standard arm/arm/elf_note.S standard arm/arm/exception.S standard arm/arm/fiq.c standard arm/arm/fiq_subr.S standard arm/arm/fusu.S standard arm/arm/gdb_machdep.c optional gdb arm/arm/generic_timer.c optional generic_timer arm/arm/gic.c optional gic arm/arm/hdmi_if.m optional hdmi arm/arm/identcpu-v4.c optional !armv6 arm/arm/identcpu-v6.c optional armv6 arm/arm/in_cksum.c optional inet | inet6 arm/arm/in_cksum_arm.S optional inet | inet6 arm/arm/intr.c optional !intrng kern/subr_intr.c optional intrng arm/arm/locore.S standard no-obj arm/arm/machdep.c standard arm/arm/machdep_boot.c standard arm/arm/machdep_kdb.c standard arm/arm/machdep_intr.c standard arm/arm/machdep_ptrace.c standard arm/arm/mem.c optional mem arm/arm/minidump_machdep.c optional mem arm/arm/mp_machdep.c optional smp arm/arm/mpcore_timer.c optional mpcore_timer arm/arm/nexus.c standard arm/arm/ofw_machdep.c optional fdt arm/arm/physmem.c standard arm/arm/pl190.c optional pl190 arm/arm/pl310.c optional pl310 arm/arm/platform.c optional platform arm/arm/platform_if.m optional platform arm/arm/pmap-v4.c optional !armv6 arm/arm/pmap-v6.c optional armv6 arm/arm/pmu.c optional pmu | fdt hwpmc +arm/arm/ptrace_machdep.c standard arm/arm/sc_machdep.c optional sc arm/arm/setcpsr.S standard arm/arm/setstack.s standard arm/arm/stack_machdep.c optional ddb | stack arm/arm/stdatomic.c standard \ compile-with "${NORMAL_C:N-Wmissing-prototypes}" arm/arm/support.S standard arm/arm/swtch.S standard arm/arm/swtch-v4.S optional !armv6 arm/arm/swtch-v6.S optional armv6 arm/arm/sys_machdep.c standard arm/arm/syscall.c standard arm/arm/trap-v4.c optional !armv6 arm/arm/trap-v6.c optional armv6 arm/arm/uio_machdep.c standard arm/arm/undefined.c standard arm/arm/unwind.c optional ddb | kdtrace_hooks arm/arm/vm_machdep.c standard arm/arm/vfp.c standard arm/cloudabi32/cloudabi32_sysvec.c optional compat_cloudabi32 board_id.h standard \ dependency "$S/arm/conf/genboardid.awk $S/arm/conf/mach-types" \ compile-with "${AWK} -f $S/arm/conf/genboardid.awk $S/arm/conf/mach-types > board_id.h" \ no-obj no-implicit-rule before-depend \ clean "board_id.h" cddl/compat/opensolaris/kern/opensolaris_atomic.c optional zfs | dtrace compile-with "${CDDL_C}" cddl/dev/dtrace/arm/dtrace_asm.S optional dtrace compile-with "${DTRACE_S}" cddl/dev/dtrace/arm/dtrace_subr.c optional dtrace compile-with "${DTRACE_C}" cddl/dev/fbt/arm/fbt_isa.c optional dtrace_fbt | dtraceall compile-with "${FBT_C}" crypto/blowfish/bf_enc.c optional crypto | ipsec | ipsec_support crypto/des/des_enc.c optional crypto | ipsec | ipsec_support | netsmb dev/dwc/if_dwc.c optional dwc 
dev/dwc/if_dwc_if.m optional dwc dev/fb/fb.c optional sc dev/fdt/fdt_arm_platform.c optional platform fdt dev/hwpmc/hwpmc_arm.c optional hwpmc dev/hwpmc/hwpmc_armv7.c optional hwpmc armv6 dev/iicbus/twsi/twsi.c optional twsi dev/ofw/ofw_cpu.c optional fdt dev/ofw/ofwpci.c optional fdt pci dev/pci/pci_host_generic.c optional pci_host_generic pci fdt dev/psci/psci.c optional psci dev/psci/psci_arm.S optional psci dev/syscons/scgfbrndr.c optional sc dev/syscons/scterm-teken.c optional sc dev/syscons/scvtb.c optional sc dev/uart/uart_cpu_fdt.c optional uart fdt font.h optional sc \ compile-with "uudecode < /usr/share/syscons/fonts/${SC_DFLT_FONT}-8x16.fnt && file2c 'u_char dflt_font_16[16*256] = {' '};' < ${SC_DFLT_FONT}-8x16 > font.h && uudecode < /usr/share/syscons/fonts/${SC_DFLT_FONT}-8x14.fnt && file2c 'u_char dflt_font_14[14*256] = {' '};' < ${SC_DFLT_FONT}-8x14 >> font.h && uudecode < /usr/share/syscons/fonts/${SC_DFLT_FONT}-8x8.fnt && file2c 'u_char dflt_font_8[8*256] = {' '};' < ${SC_DFLT_FONT}-8x8 >> font.h" \ no-obj no-implicit-rule before-depend \ clean "font.h ${SC_DFLT_FONT}-8x14 ${SC_DFLT_FONT}-8x16 ${SC_DFLT_FONT}-8x8" kern/msi_if.m optional intrng kern/pic_if.m optional intrng kern/subr_busdma_bufalloc.c standard kern/subr_devmap.c standard kern/subr_sfbuf.c standard libkern/arm/aeabi_unwind.c standard libkern/arm/divsi3.S standard libkern/arm/ffs.S standard libkern/arm/ldivmod.S standard libkern/arm/ldivmod_helper.c standard libkern/arm/memclr.S standard libkern/arm/memcpy.S standard libkern/arm/memset.S standard libkern/arm/muldi3.c standard libkern/ashldi3.c standard libkern/ashrdi3.c standard libkern/divdi3.c standard libkern/ffsl.c standard libkern/ffsll.c standard libkern/fls.c standard libkern/flsl.c standard libkern/flsll.c standard libkern/lshrdi3.c standard libkern/moddi3.c standard libkern/qdivrem.c standard libkern/ucmpdi2.c standard libkern/udivdi3.c standard libkern/umoddi3.c standard Index: stable/11 =================================================================== --- stable/11 (revision 325830) +++ stable/11 (revision 325831) Property changes on: stable/11 ___________________________________________________________________ Modified: svn:mergeinfo ## -0,0 +0,1 ## Merged /head:r323581-323583
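The PT_GETVFPREGS and PT_SETVFPREGS requests added above are machine-dependent: because machine/ptrace.h now defines __HAVE_PTRACE_MACHDEP, the machine-independent ptrace(2) code hands requests it does not recognize to cpu_ptrace(), which copies the traced thread's VFP state to or from the struct vfpreg supplied by the tracing process. The following sketch is not part of the change; it is a minimal illustration of how a debugger might read that state, assuming a FreeBSD/arm kernel built with VFP support (the request fails with EINVAL otherwise) and a hypothetical target process ID taken from the command line.

/*
 * Minimal sketch (not part of this commit): dump the VFP state of a
 * traced ARM process using the new PT_GETVFPREGS request.  The target
 * PID is a hypothetical command-line argument.
 */
#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

#include <err.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
	struct vfpreg vfp;
	pid_t pid;
	int i, status;

	if (argc != 2)
		errx(1, "usage: vfpdump <pid>");
	pid = (pid_t)atoi(argv[1]);

	/* Attach; the target stops and the stop is reported via waitpid(). */
	if (ptrace(PT_ATTACH, pid, NULL, 0) == -1)
		err(1, "PT_ATTACH");
	if (waitpid(pid, &status, 0) == -1)
		err(1, "waitpid");

	/* Copy the stopped thread's VFP registers into the local struct vfpreg. */
	if (ptrace(PT_GETVFPREGS, pid, (caddr_t)&vfp, 0) == -1)
		err(1, "PT_GETVFPREGS");

	printf("fpscr = 0x%08x\n", vfp.vfp_scr);
	for (i = 0; i < 4; i++)
		printf("d%d = 0x%016jx\n", i, (uintmax_t)vfp.vfp_reg[i]);

	/* Resume the target at its current location and detach. */
	if (ptrace(PT_DETACH, pid, (caddr_t)1, 0) == -1)
		err(1, "PT_DETACH");
	return (0);
}

Writing works the same way in reverse: fill in a struct vfpreg and issue PT_SETVFPREGS while the target is stopped, which cpu_ptrace() applies to the thread via set_vfpcontext().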