Index: stable/10/lib/libc/sys/ptrace.2 =================================================================== --- stable/10/lib/libc/sys/ptrace.2 (revision 286310) +++ stable/10/lib/libc/sys/ptrace.2 (revision 286311) @@ -1,624 +1,811 @@ .\" $FreeBSD$ .\" $NetBSD: ptrace.2,v 1.2 1995/02/27 12:35:37 cgd Exp $ .\" .\" This file is in the public domain. -.Dd July 22, 2013 +.Dd July 3, 2015 .Dt PTRACE 2 .Os .Sh NAME .Nm ptrace .Nd process tracing and debugging .Sh LIBRARY .Lb libc .Sh SYNOPSIS .In sys/types.h .In sys/ptrace.h .Ft int .Fn ptrace "int request" "pid_t pid" "caddr_t addr" "int data" .Sh DESCRIPTION The .Fn ptrace system call provides tracing and debugging facilities. It allows one process (the .Em tracing process) to control another (the .Em traced process). The tracing process must first attach to the traced process, and then issue a series of .Fn ptrace system calls to control the execution of the process, as well as access process memory and register state. For the duration of the tracing session, the traced process will be .Dq re-parented , with its parent process ID (and resulting behavior) changed to the tracing process. It is permissible for a tracing process to attach to more than one other process at a time. When the tracing process has completed its work, it must detach the traced process; if a tracing process exits without first detaching all processes it has attached, those processes will be killed. .Pp Most of the time, the traced process runs normally, but when it receives a signal (see .Xr sigaction 2 ) , it stops. The tracing process is expected to notice this via .Xr wait 2 or the delivery of a .Dv SIGCHLD signal, examine the state of the stopped process, and cause it to terminate or continue as appropriate. The signal may be a normal process signal, generated as a result of traced process behavior, or use of the .Xr kill 2 system call; alternatively, it may be generated by the tracing facility as a result of attaching, system calls, or stepping by the tracing process. The tracing process may choose to intercept the signal, using it to observe process behavior (such as .Dv SIGTRAP ) , or forward the signal to the process if appropriate. The .Fn ptrace system call is the mechanism by which all this happens. .Pp The .Fa request argument specifies what operation is being performed; the meaning of the rest of the arguments depends on the operation, but except for one special case noted below, all .Fn ptrace calls are made by the tracing process, and the .Fa pid argument specifies the process ID of the traced process or a corresponding thread ID. The .Fa request argument can be: .Bl -tag -width 12n .It Dv PT_TRACE_ME This request is the only one used by the traced process; it declares that the process expects to be traced by its parent. All the other arguments are ignored. (If the parent process does not expect to trace the child, it will probably be rather confused by the results; once the traced process stops, it cannot be made to continue except via .Fn ptrace . ) When a process has used this request and calls .Xr execve 2 or any of the routines built on it (such as .Xr execv 3 ) , it will stop before executing the first instruction of the new image. Also, any setuid or setgid bits on the executable being executed will be ignored. If the child was created by .Xr vfork 2 system call or .Xr rfork(2) call with the .Dv RFMEM flag specified, the debugging events are reported to the parent only after the .Xr execve 2 is executed. 
.It Dv PT_READ_I , Dv PT_READ_D These requests read a single .Vt int of data from the traced process's address space. Traditionally, .Fn ptrace has allowed for machines with distinct address spaces for instruction and data, which is why there are two requests: conceptually, .Dv PT_READ_I reads from the instruction space and .Dv PT_READ_D reads from the data space. In the current .Fx implementation, these two requests are completely identical. The .Fa addr argument specifies the address (in the traced process's virtual address space) at which the read is to be done. This address does not have to meet any alignment constraints. The value read is returned as the return value from .Fn ptrace . .It Dv PT_WRITE_I , Dv PT_WRITE_D These requests parallel .Dv PT_READ_I and .Dv PT_READ_D , except that they write rather than read. The .Fa data argument supplies the value to be written. .It Dv PT_IO This request allows reading and writing arbitrary amounts of data in the traced process's address space. The .Fa addr argument specifies a pointer to a .Vt "struct ptrace_io_desc" , which is defined as follows: .Bd -literal struct ptrace_io_desc { int piod_op; /* I/O operation */ void *piod_offs; /* child offset */ void *piod_addr; /* parent offset */ size_t piod_len; /* request length */ }; /* * Operations in piod_op. */ #define PIOD_READ_D 1 /* Read from D space */ #define PIOD_WRITE_D 2 /* Write to D space */ #define PIOD_READ_I 3 /* Read from I space */ #define PIOD_WRITE_I 4 /* Write to I space */ .Ed .Pp The .Fa data argument is ignored. The actual number of bytes read or written is stored in .Va piod_len upon return. .It Dv PT_CONTINUE The traced process continues execution. The .Fa addr argument is an address specifying the place where execution is to be resumed (a new value for the program counter), or .Po Vt caddr_t Pc Ns 1 to indicate that execution is to pick up where it left off. The .Fa data argument provides a signal number to be delivered to the traced process as it resumes execution, or 0 if no signal is to be sent. .It Dv PT_STEP The traced process is single stepped one instruction. The .Fa addr argument should be passed .Po Vt caddr_t Pc Ns 1 . The .Fa data argument provides a signal number to be delivered to the traced process as it resumes execution, or 0 if no signal is to be sent. .It Dv PT_KILL The traced process terminates, as if .Dv PT_CONTINUE had been used with .Dv SIGKILL given as the signal to be delivered. .It Dv PT_ATTACH This request allows a process to gain control of an otherwise unrelated process and begin tracing it. It does not need any cooperation from the to-be-traced process. In this case, .Fa pid specifies the process ID of the to-be-traced process, and the other two arguments are ignored. This request requires that the target process must have the same real UID as the tracing process, and that it must not be executing a setuid or setgid executable. (If the tracing process is running as root, these restrictions do not apply.) The tracing process will see the newly-traced process stop and may then control it as if it had been traced all along. .It Dv PT_DETACH This request is like PT_CONTINUE, except that it does not allow specifying an alternate place to continue execution, and after it succeeds, the traced process is no longer traced and continues execution normally. .It Dv PT_GETREGS This request reads the traced process's machine registers into the .Do .Vt "struct reg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . 
.It Dv PT_SETREGS This request is the converse of .Dv PT_GETREGS ; it loads the traced process's machine registers from the .Do .Vt "struct reg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . .It Dv PT_GETFPREGS This request reads the traced process's floating-point registers into the .Do .Vt "struct fpreg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . .It Dv PT_SETFPREGS This request is the converse of .Dv PT_GETFPREGS ; it loads the traced process's floating-point registers from the .Do .Vt "struct fpreg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . .It Dv PT_GETDBREGS This request reads the traced process's debug registers into the .Do .Vt "struct dbreg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . .It Dv PT_SETDBREGS This request is the converse of .Dv PT_GETDBREGS ; it loads the traced process's debug registers from the .Do .Vt "struct dbreg" .Dc (defined in .In machine/reg.h ) pointed to by .Fa addr . .It Dv PT_LWPINFO This request can be used to obtain information about the kernel thread, also known as light-weight process, that caused the traced process to stop. The .Fa addr argument specifies a pointer to a .Vt "struct ptrace_lwpinfo" , which is defined as follows: .Bd -literal struct ptrace_lwpinfo { lwpid_t pl_lwpid; int pl_event; int pl_flags; sigset_t pl_sigmask; sigset_t pl_siglist; siginfo_t pl_siginfo; char pl_tdname[MAXCOMLEN + 1]; int pl_child_pid; }; .Ed .Pp The .Fa data argument is to be set to the size of the structure known to the caller. This allows the structure to grow without affecting older programs. .Pp The fields in the .Vt "struct ptrace_lwpinfo" have the following meaning: .Bl -tag -width indent -compact .It pl_lwpid LWP id of the thread .It pl_event Event that caused the stop. Currently defined events are .Bl -tag -width indent -compact .It PL_EVENT_NONE No reason given .It PL_EVENT_SIGNAL Thread stopped due to the pending signal .El .It pl_flags Flags that specify additional details about observed stop. Currently defined flags are: .Bl -tag -width indent -compact .It PL_FLAG_SCE The thread stopped due to system call entry, right after the kernel is entered. The debugger may examine syscall arguments that are stored in memory and registers according to the ABI of the current process, and modify them, if needed. .It PL_FLAG_SCX The thread is stopped immediately before syscall is returning to the usermode. The debugger may examine system call return values in the ABI-defined registers and/or memory. .It PL_FLAG_EXEC When .Dv PL_FLAG_SCX is set, this flag may be additionally specified to inform that the program being executed by debuggee process has been changed by successful execution of a system call from the .Fn execve 2 family. .It PL_FLAG_SI Indicates that .Va pl_siginfo member of .Vt "struct ptrace_lwpinfo" contains valid information. .It PL_FLAG_FORKED Indicates that the process is returning from a call to .Fn fork 2 that created a new child process. The process identifier of the new process is available in the .Va pl_child_pid member of .Vt "struct ptrace_lwpinfo" . .It PL_FLAG_CHILD The flag is set for first event reported from a new child, which is automatically attached due to .Dv PT_FOLLOW_FORK enabled. .El .It pl_sigmask The current signal mask of the LWP .It pl_siglist The current pending set of signals for the LWP. Note that signals that are delivered to the process would not appear on an LWP siglist until the thread is selected for delivery. 
.It pl_siginfo The siginfo that accompanies the signal pending. Only valid for .Dv PL_EVENT_SIGNAL stop when .Dv PL_FLAG_SI is set in .Va pl_flags . .It pl_tdname The name of the thread. .It pl_child_pid The process identifier of the new child process. Only valid for a .Dv PL_EVENT_SIGNAL stop when .Dv PL_FLAG_FORKED is set in .Va pl_flags . .El .It PT_GETNUMLWPS This request returns the number of kernel threads associated with the traced process. .It PT_GETLWPLIST This request can be used to get the current thread list. A pointer to an array of type .Vt lwpid_t should be passed in .Fa addr , with the array size specified by .Fa data . The return value from .Fn ptrace is the count of array entries filled in. .It PT_SETSTEP This request will turn on single stepping of the specified process. .It PT_CLEARSTEP This request will turn off single stepping of the specified process. .It PT_SUSPEND This request will suspend the specified thread. .It PT_RESUME This request will resume the specified thread. .It PT_TO_SCE This request will trace the specified process on each system call entry. .It PT_TO_SCX This request will trace the specified process on each system call exit. .It PT_SYSCALL This request will trace the specified process on each system call entry and exit. .It PT_FOLLOW_FORK This request controls tracing for new child processes of a traced process. If .Fa data is non-zero, then new child processes will enable tracing and stop before executing their first instruction. If .Fa data is zero, then new child processes will execute without tracing enabled. By default, tracing is not enabled for new child processes. Child processes do not inherit this property. The traced process will set the .Dv PL_FLAG_FORKED flag upon exit from a system call that creates a new process. .It PT_VM_TIMESTAMP This request returns the generation number or timestamp of the memory map of the traced process as the return value from .Fn ptrace . This provides a low-cost way for the tracing process to determine if the VM map changed since the last time this request was made. .It PT_VM_ENTRY This request is used to iterate over the entries of the VM map of the traced process. The .Fa addr argument specifies a pointer to a .Vt "struct ptrace_vm_entry" , which is defined as follows: .Bd -literal struct ptrace_vm_entry { int pve_entry; int pve_timestamp; u_long pve_start; u_long pve_end; u_long pve_offset; u_int pve_prot; u_int pve_pathlen; long pve_fileid; uint32_t pve_fsid; char *pve_path; }; .Ed .Pp The first entry is returned by setting .Va pve_entry to zero. Subsequent entries are returned by leaving .Va pve_entry unmodified from the value returned by previous requests. The .Va pve_timestamp field can be used to detect changes to the VM map while iterating over the entries. The tracing process can then take appropriate action, such as restarting. By setting .Va pve_pathlen to a non-zero value on entry, the pathname of the backing object is returned in the buffer pointed to by .Va pve_path , provided the entry is backed by a vnode. The .Va pve_pathlen field is updated with the actual length of the pathname (including the terminating null character). The .Va pve_offset field is the offset within the backing object at which the range starts. The range is located in the VM space at .Va pve_start and extends up to .Va pve_end (inclusive). .Pp The .Fa data argument is ignored. 
.El +.Sh x86 MACHINE-SPECIFIC REQUESTS +.Bl -tag -width "Dv PT_GETXSTATE_INFO" +.It Dv PT_GETXMMREGS +Copy the XMM FPU state into the buffer pointed to by the +argument +.Fa addr . +The buffer has the same layout as the 32-bit save buffer for the +machine instruction +.Dv FXSAVE . .Pp -Additionally, machine-specific requests can exist. +This request is only valid for i386 programs, both on native 32-bit +systems and on amd64 kernels. +For 64-bit amd64 programs, the XMM state is reported as part of +the FPU state returned by the +.Dv PT_GETFPREGS +request. +.Pp +The +.Fa data +argument is ignored. +.It Dv PT_SETXMMREGS +Load the XMM FPU state for the thread from the buffer pointed to +by the argument +.Fa addr . +The buffer has the same layout as the 32-bit load buffer for the +machine instruction +.Dv FXRSTOR . +.Pp +As with +.Dv PT_GETXMMREGS, +this request is only valid for i386 programs. +.Pp +The +.Fa data +argument is ignored. +.It Dv PT_GETXSTATE_INFO +Report which XSAVE FPU extensions are supported by the CPU +and allowed in userspace programs. +The +.Fa addr +argument must point to a variable of type +.Vt struct ptrace_xstate_info , +which contains the information on the request return. +.Vt struct ptrace_xstate_info +is defined as follows: +.Bd -literal +struct ptrace_xstate_info { + uint64_t xsave_mask; + uint32_t xsave_len; +}; +.Ed +The +.Dv xsave_mask +field is a bitmask of the currently enabled extensions. +The meaning of the bits is defined in the Intel and AMD +processor documentation. +The +.Dv xsave_len +field reports the length of the XSAVE area for storing the hardware +state for currently enabled extensions in the format defined by the x86 +.Dv XSAVE +machine instruction. +.Pp +The +.Fa data +argument value must be equal to the size of the +.Vt struct ptrace_xstate_info . +.It Dv PT_GETXSTATE +Return the content of the XSAVE area for the thread. +The +.Fa addr +argument points to the buffer where the content is copied, and the +.Fa data +argument specifies the size of the buffer. +The kernel copies out as much content as allowed by the buffer size. +The buffer layout is specified by the layout of the save area for the +.Dv XSAVE +machine instruction. +.It Dv PT_SETXSTATE +Load the XSAVE state for the thread from the buffer specified by the +.Fa addr +pointer. +The buffer size is passed in the +.Fa data +argument. +The buffer must be at least as large as the +.Vt struct savefpu +(defined in +.Pa x86/fpu.h ) +to allow the complete x87 FPU and XMM state load. +It must not be larger than the XSAVE state length, as reported by the +.Dv xsave_len +field from the +.Vt struct ptrace_xstate_info +of the +.Dv PT_GETXSTATE_INFO +request. +Layout of the buffer is identical to the layout of the load area for the +.Dv XRSTOR +machine instruction. +.It Dv PT_GETFSBASE +Return the value of the base used when doing segmented +memory addressing using the %fs segment register. +The +.Fa addr +argument points to an +.Vt unsigned long +variable where the base value is stored. +.Pp +The +.Fa data +argument is ignored. +.It Dv PT_GETGSBASE +Like the +.Dv PT_GETFSBASE +request, but returns the base for the %gs segment register. +.It Dv PT_SETFSBASE +Set the base for the %fs segment register to the value pointed to +by the +.Fa addr +argument. +.Fa addr +must point to the +.Vt unsigned long +variable containing the new base. +.Pp +The +.Fa data +argument is ignored. +.It Dv PT_SETGSBASE +Like the +.Dv PT_SETFSBASE +request, but sets the base for the %gs segment register. 
+.El +.Sh PowerPC MACHINE-SPECIFIC REQUESTS +.Bl -tag -width "Dv PT_SETVRREGS" +.It Dv PT_GETVRREGS +Return the thread's +.Dv ALTIVEC +machine state in the buffer pointed to by +.Fa addr . +.Pp +The +.Fa data +argument is ignored. +.It Dv PT_SETVRREGS +Set the thread's +.Dv ALTIVEC +machine state from the buffer pointed to by +.Fa addr . +.Pp +The +.Fa data +argument is ignored. +.El +.Pp +Additionally, other machine-specific requests can exist. .Sh RETURN VALUES Some requests can cause .Fn ptrace to return \-1 as a non-error value; to disambiguate, .Va errno can be set to 0 before the call and checked afterwards. .Sh ERRORS The .Fn ptrace system call may fail if: .Bl -tag -width Er .It Bq Er ESRCH .Bl -bullet -compact .It No process having the specified process ID exists. .El .It Bq Er EINVAL .Bl -bullet -compact .It A process attempted to use .Dv PT_ATTACH on itself. .It The .Fa request argument was not one of the legal requests. .It The signal number (in .Fa data ) to .Dv PT_CONTINUE was neither 0 nor a legal signal number. .It .Dv PT_GETREGS , .Dv PT_SETREGS , .Dv PT_GETFPREGS , .Dv PT_SETFPREGS , .Dv PT_GETDBREGS , or .Dv PT_SETDBREGS was attempted on a process with no valid register set. (This is normally true only of system processes.) .It .Dv PT_VM_ENTRY was given an invalid value for .Fa pve_entry . This can also be caused by changes to the VM map of the process. .It The size (in .Fa data ) provided to .Dv PT_LWPINFO was less than or equal to zero, or larger than the .Vt ptrace_lwpinfo structure known to the kernel. +.It +The size (in +.Fa data ) +provided to the x86-specific +.Dv PT_GETXSTATE_INFO +request was not equal to the size of the +.Vt struct ptrace_xstate_info . +.It +The size (in +.Fa data ) +provided to the x86-specific +.Dv PT_SETXSTATE +request was less than the size of the x87 plus the XMM save area. +.It +The size (in +.Fa data ) +provided to the x86-specific +.Dv PT_SETXSTATE +request was larger than returned in the +.Dv xsave_len +member of the +.Vt struct ptrace_xstate_info +from the +.Dv PT_GETXSTATE_INFO +request. +.It +The base value, provided to the amd64-specific requests +.Dv PT_SETFSBASE +or +.Dv PT_SETGSBASE , +pointed outside of the valid user address space. +This error will not occur in 32-bit programs. .El .It Bq Er EBUSY .Bl -bullet -compact .It .Dv PT_ATTACH was attempted on a process that was already being traced. .It A request attempted to manipulate a process that was being traced by some process other than the one making the request. .It A request (other than .Dv PT_ATTACH ) specified a process that was not stopped. .El .It Bq Er EPERM .Bl -bullet -compact .It A request (other than .Dv PT_ATTACH ) attempted to manipulate a process that was not being traced at all. .It An attempt was made to use .Dv PT_ATTACH on a process in violation of the requirements listed under .Dv PT_ATTACH above. .El .It Bq Er ENOENT .Bl -bullet -compact .It .Dv PT_VM_ENTRY previously returned the last entry of the memory map. No more entries exist. .El .It Bq Er ENAMETOOLONG .Bl -bullet -compact .It .Dv PT_VM_ENTRY cannot return the pathname of the backing object because the buffer is not big enough. .Fa pve_pathlen holds the minimum buffer size required on return. .El .El .Sh SEE ALSO .Xr execve 2 , .Xr sigaction 2 , .Xr wait 2 , .Xr execv 3 , .Xr i386_clr_watch 3 , .Xr i386_set_watch 3 .Sh HISTORY The .Fn ptrace function appeared in .At v7 . 
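As a reader's aid (not part of the merged change), the following minimal userland sketch exercises the generic requests documented above: attach to a running process, wait for the attach stop, copy bytes out of its address space with PT_IO, report the stop reason with PT_LWPINFO, and detach. The helper name peek_target and its arguments are made up for illustration, and error handling is abbreviated.

/*
 * Illustrative sketch only: attach, wait for the stop, read remote
 * memory with PT_IO, query the stop reason with PT_LWPINFO, detach.
 */
#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

#include <err.h>
#include <stdio.h>
#include <string.h>

static int
peek_target(pid_t pid, void *remote, void *local, size_t len)
{
	struct ptrace_io_desc piod;
	struct ptrace_lwpinfo pl;
	int status;

	if (ptrace(PT_ATTACH, pid, NULL, 0) == -1)
		return (-1);
	if (waitpid(pid, &status, 0) == -1)	/* attach delivers a stop */
		return (-1);

	/* Copy len bytes from the traced process into the local buffer. */
	piod.piod_op = PIOD_READ_D;
	piod.piod_offs = remote;	/* offset in the traced process */
	piod.piod_addr = local;		/* buffer in the tracing process */
	piod.piod_len = len;
	if (ptrace(PT_IO, pid, (caddr_t)&piod, 0) == -1)
		warn("PT_IO");
	else
		printf("copied %zu bytes\n", piod.piod_len);

	/* Ask which LWP stopped and why; data carries the struct size. */
	memset(&pl, 0, sizeof(pl));
	if (ptrace(PT_LWPINFO, pid, (caddr_t)&pl, sizeof(pl)) == 0)
		printf("stopped LWP %d, event %d\n", (int)pl.pl_lwpid,
		    pl.pl_event);

	/* Resume the target and drop the tracing relationship. */
	return (ptrace(PT_DETACH, pid, (caddr_t)1, 0));
}

PT_IO is preferred over repeated PT_READ_D calls because it transfers an arbitrary amount of data per request and reports the number of bytes actually copied in piod_len.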
Index: stable/10/sys/amd64/amd64/ptrace_machdep.c =================================================================== --- stable/10/sys/amd64/amd64/ptrace_machdep.c (revision 286310) +++ stable/10/sys/amd64/amd64/ptrace_machdep.c (revision 286311) @@ -1,185 +1,247 @@ /*- * Copyright (c) 2011 Konstantin Belousov * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include __FBSDID("$FreeBSD$"); #include "opt_compat.h" #include #include #include #include #include #include +#include +#include #include #include +#include +#include static int cpu_ptrace_xstate(struct thread *td, int req, void *addr, int data) { struct ptrace_xstate_info info; char *savefpu; int error; if (!use_xsave) return (EOPNOTSUPP); switch (req) { case PT_GETXSTATE_OLD: fpugetregs(td); savefpu = (char *)(get_pcb_user_save_td(td) + 1); error = copyout(savefpu, addr, cpu_max_ext_state_size - sizeof(struct savefpu)); break; case PT_SETXSTATE_OLD: if (data > cpu_max_ext_state_size - sizeof(struct savefpu)) { error = EINVAL; break; } savefpu = malloc(data, M_TEMP, M_WAITOK); error = copyin(addr, savefpu, data); if (error == 0) { fpugetregs(td); error = fpusetxstate(td, savefpu, data); } free(savefpu, M_TEMP); break; case PT_GETXSTATE_INFO: if (data != sizeof(info)) { error = EINVAL; break; } info.xsave_len = cpu_max_ext_state_size; info.xsave_mask = xsave_mask; error = copyout(&info, addr, data); break; case PT_GETXSTATE: fpugetregs(td); savefpu = (char *)(get_pcb_user_save_td(td)); error = copyout(savefpu, addr, cpu_max_ext_state_size); break; case PT_SETXSTATE: if (data < sizeof(struct savefpu) || data > cpu_max_ext_state_size) { error = EINVAL; break; } savefpu = malloc(data, M_TEMP, M_WAITOK); error = copyin(addr, savefpu, data); if (error == 0) error = fpusetregs(td, (struct savefpu *)savefpu, savefpu + sizeof(struct savefpu), data - sizeof(struct savefpu)); free(savefpu, M_TEMP); break; default: error = EINVAL; break; } return (error); } +static void +cpu_ptrace_setbase(struct thread *td, int req, register_t r) +{ + + if (req == PT_SETFSBASE) { + td->td_pcb->pcb_fsbase = r; + td->td_frame->tf_fs = _ufssel; + } else { + td->td_pcb->pcb_gsbase = r; + td->td_frame->tf_gs = _ugssel; + } + set_pcb_flags(td->td_pcb, PCB_FULL_IRET); +} + #ifdef COMPAT_FREEBSD32 #define PT_I386_GETXMMREGS (PT_FIRSTMACH + 0) #define 
PT_I386_SETXMMREGS (PT_FIRSTMACH + 1) static int cpu32_ptrace(struct thread *td, int req, void *addr, int data) { struct savefpu *fpstate; + uint32_t r; int error; switch (req) { case PT_I386_GETXMMREGS: fpugetregs(td); error = copyout(get_pcb_user_save_td(td), addr, sizeof(*fpstate)); break; case PT_I386_SETXMMREGS: fpugetregs(td); fpstate = get_pcb_user_save_td(td); error = copyin(addr, fpstate, sizeof(*fpstate)); fpstate->sv_env.en_mxcsr &= cpu_mxcsr_mask; break; case PT_GETXSTATE_OLD: case PT_SETXSTATE_OLD: case PT_GETXSTATE_INFO: case PT_GETXSTATE: case PT_SETXSTATE: error = cpu_ptrace_xstate(td, req, addr, data); break; + case PT_GETFSBASE: + case PT_GETGSBASE: + if (!SV_PROC_FLAG(td->td_proc, SV_ILP32)) { + error = EINVAL; + break; + } + r = req == PT_GETFSBASE ? td->td_pcb->pcb_fsbase : + td->td_pcb->pcb_gsbase; + error = copyout(&r, addr, sizeof(r)); + break; + + case PT_SETFSBASE: + case PT_SETGSBASE: + if (!SV_PROC_FLAG(td->td_proc, SV_ILP32)) { + error = EINVAL; + break; + } + error = copyin(addr, &r, sizeof(r)); + if (error != 0) + break; + cpu_ptrace_setbase(td, req, r); + break; + default: error = EINVAL; break; } return (error); } #endif int cpu_ptrace(struct thread *td, int req, void *addr, int data) { + register_t *r, rv; int error; #ifdef COMPAT_FREEBSD32 if (SV_CURPROC_FLAG(SV_ILP32)) return (cpu32_ptrace(td, req, addr, data)); #endif /* Support old values of PT_GETXSTATE_OLD and PT_SETXSTATE_OLD. */ if (req == PT_FIRSTMACH + 0) req = PT_GETXSTATE_OLD; if (req == PT_FIRSTMACH + 1) req = PT_SETXSTATE_OLD; switch (req) { case PT_GETXSTATE_OLD: case PT_SETXSTATE_OLD: case PT_GETXSTATE_INFO: case PT_GETXSTATE: case PT_SETXSTATE: error = cpu_ptrace_xstate(td, req, addr, data); + break; + + case PT_GETFSBASE: + case PT_GETGSBASE: + r = req == PT_GETFSBASE ? &td->td_pcb->pcb_fsbase : + &td->td_pcb->pcb_gsbase; + error = copyout(r, addr, sizeof(*r)); + break; + + case PT_SETFSBASE: + case PT_SETGSBASE: + error = copyin(addr, &rv, sizeof(rv)); + if (error != 0) + break; + if (rv >= td->td_proc->p_sysent->sv_maxuser) { + error = EINVAL; + break; + } + cpu_ptrace_setbase(td, req, rv); break; default: error = EINVAL; break; } return (error); } Index: stable/10/sys/i386/i386/ptrace_machdep.c =================================================================== --- stable/10/sys/i386/i386/ptrace_machdep.c (revision 286310) +++ stable/10/sys/i386/i386/ptrace_machdep.c (revision 286311) @@ -1,157 +1,206 @@ /*- * Copyright (c) 2005 Doug Rabson * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include __FBSDID("$FreeBSD$"); #include "opt_cpu.h" #include #include #include #include #include +#include #include #include #if !defined(CPU_DISABLE_SSE) && defined(I686_CPU) #define CPU_ENABLE_SSE #endif #ifdef CPU_ENABLE_SSE static int cpu_ptrace_xstate(struct thread *td, int req, void *addr, int data) { struct ptrace_xstate_info info; char *savefpu; int error; if (!use_xsave) return (EOPNOTSUPP); switch (req) { case PT_GETXSTATE_OLD: npxgetregs(td); savefpu = (char *)(get_pcb_user_save_td(td) + 1); error = copyout(savefpu, addr, cpu_max_ext_state_size - sizeof(union savefpu)); break; case PT_SETXSTATE_OLD: if (data > cpu_max_ext_state_size - sizeof(union savefpu)) { error = EINVAL; break; } savefpu = malloc(data, M_TEMP, M_WAITOK); error = copyin(addr, savefpu, data); if (error == 0) { npxgetregs(td); error = npxsetxstate(td, savefpu, data); } free(savefpu, M_TEMP); break; case PT_GETXSTATE_INFO: if (data != sizeof(info)) { error = EINVAL; break; } info.xsave_len = cpu_max_ext_state_size; info.xsave_mask = xsave_mask; error = copyout(&info, addr, data); break; case PT_GETXSTATE: npxgetregs(td); savefpu = (char *)(get_pcb_user_save_td(td)); error = copyout(savefpu, addr, cpu_max_ext_state_size); break; case PT_SETXSTATE: if (data < sizeof(union savefpu) || data > cpu_max_ext_state_size) { error = EINVAL; break; } savefpu = malloc(data, M_TEMP, M_WAITOK); error = copyin(addr, savefpu, data); if (error == 0) error = npxsetregs(td, (union savefpu *)savefpu, savefpu + sizeof(union savefpu), data - sizeof(union savefpu)); free(savefpu, M_TEMP); break; default: error = EINVAL; break; } return (error); } #endif -int -cpu_ptrace(struct thread *td, int req, void *addr, int data) +static int +cpu_ptrace_xmm(struct thread *td, int req, void *addr, int data) { #ifdef CPU_ENABLE_SSE struct savexmm *fpstate; int error; if (!cpu_fxsr) return (EINVAL); fpstate = &get_pcb_user_save_td(td)->sv_xmm; switch (req) { case PT_GETXMMREGS: npxgetregs(td); error = copyout(fpstate, addr, sizeof(*fpstate)); break; case PT_SETXMMREGS: npxgetregs(td); error = copyin(addr, fpstate, sizeof(*fpstate)); fpstate->sv_env.en_mxcsr &= cpu_mxcsr_mask; break; case PT_GETXSTATE_OLD: case PT_SETXSTATE_OLD: case PT_GETXSTATE_INFO: case PT_GETXSTATE: case PT_SETXSTATE: error = cpu_ptrace_xstate(td, req, addr, data); break; default: return (EINVAL); } return (error); #else return (EINVAL); #endif +} + +int +cpu_ptrace(struct thread *td, int req, void *addr, int data) +{ + struct segment_descriptor *sdp, sd; + register_t r; + int error; + + switch (req) { + case PT_GETXMMREGS: + case PT_SETXMMREGS: + case PT_GETXSTATE_OLD: + case PT_SETXSTATE_OLD: + case PT_GETXSTATE_INFO: + case PT_GETXSTATE: + case PT_SETXSTATE: + error = cpu_ptrace_xmm(td, req, addr, data); + break; + + case PT_GETFSBASE: + case PT_GETGSBASE: + sdp = req == PT_GETFSBASE ? 
&td->td_pcb->pcb_fsd : + &td->td_pcb->pcb_gsd; + r = sdp->sd_hibase << 24 | sdp->sd_lobase; + error = copyout(&r, addr, sizeof(r)); + break; + + case PT_SETFSBASE: + case PT_SETGSBASE: + error = copyin(addr, &r, sizeof(r)); + if (error != 0) + break; + fill_based_sd(&sd, r); + if (req == PT_SETFSBASE) { + td->td_pcb->pcb_fsd = sd; + td->td_frame->tf_fs = GSEL(GUFS_SEL, SEL_UPL); + } else { + td->td_pcb->pcb_gsd = sd; + td->td_pcb->pcb_gs = GSEL(GUGS_SEL, SEL_UPL); + } + break; + + default: + return (EINVAL); + } + + return (error); } Index: stable/10/sys/i386/i386/sys_machdep.c =================================================================== --- stable/10/sys/i386/i386/sys_machdep.c (revision 286310) +++ stable/10/sys/i386/i386/sys_machdep.c (revision 286311) @@ -1,896 +1,888 @@ /*- * Copyright (c) 1990 The Regents of the University of California. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 4. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * from: @(#)sys_machdep.c 5.5 (Berkeley) 1/19/91 */ #include __FBSDID("$FreeBSD$"); #include "opt_capsicum.h" #include "opt_kstack_pages.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef XEN #include void i386_reset_ldt(struct proc_ldt *pldt); void i386_reset_ldt(struct proc_ldt *pldt) { xen_set_ldt((vm_offset_t)pldt->ldt_base, pldt->ldt_len); } #else #define i386_reset_ldt(x) #endif #include /* for kernel_map */ #define MAX_LD 8192 #define LD_PER_PAGE 512 #define NEW_MAX_LD(num) ((num + LD_PER_PAGE) & ~(LD_PER_PAGE-1)) #define SIZE_FROM_LARGEST_LD(num) (NEW_MAX_LD(num) << 3) #define NULL_LDT_BASE ((caddr_t)NULL) #ifdef SMP static void set_user_ldt_rv(struct vmspace *vmsp); #endif static int i386_set_ldt_data(struct thread *, int start, int num, union descriptor *descs); static int i386_ldt_grow(struct thread *td, int len); +void +fill_based_sd(struct segment_descriptor *sdp, uint32_t base) +{ + + sdp->sd_lobase = base & 0xffffff; + sdp->sd_hibase = (base >> 24) & 0xff; +#ifdef XEN + /* need to do nosegneg like Linux */ + sdp->sd_lolimit = (HYPERVISOR_VIRT_START >> 12) & 0xffff; +#else + sdp->sd_lolimit = 0xffff; /* 4GB limit, wraps around */ +#endif + sdp->sd_hilimit = 0xf; + sdp->sd_type = SDT_MEMRWA; + sdp->sd_dpl = SEL_UPL; + sdp->sd_p = 1; + sdp->sd_xx = 0; + sdp->sd_def32 = 1; + sdp->sd_gran = 1; +} + #ifndef _SYS_SYSPROTO_H_ struct sysarch_args { int op; char *parms; }; #endif int sysarch(td, uap) struct thread *td; register struct sysarch_args *uap; { int error; union descriptor *lp; union { struct i386_ldt_args largs; struct i386_ioperm_args iargs; struct i386_get_xfpustate xfpu; } kargs; uint32_t base; struct segment_descriptor sd, *sdp; AUDIT_ARG_CMD(uap->op); #ifdef CAPABILITY_MODE /* * When adding new operations, add a new case statement here to * explicitly indicate whether or not the operation is safe to * perform in capability mode. 
*/ if (IN_CAPABILITY_MODE(td)) { switch (uap->op) { case I386_GET_LDT: case I386_SET_LDT: case I386_GET_IOPERM: case I386_GET_FSBASE: case I386_SET_FSBASE: case I386_GET_GSBASE: case I386_SET_GSBASE: case I386_GET_XFPUSTATE: break; case I386_SET_IOPERM: default: #ifdef KTRACE if (KTRPOINT(td, KTR_CAPFAIL)) ktrcapfail(CAPFAIL_SYSCALL, NULL, NULL); #endif return (ECAPMODE); } } #endif switch (uap->op) { case I386_GET_IOPERM: case I386_SET_IOPERM: if ((error = copyin(uap->parms, &kargs.iargs, sizeof(struct i386_ioperm_args))) != 0) return (error); break; case I386_GET_LDT: case I386_SET_LDT: if ((error = copyin(uap->parms, &kargs.largs, sizeof(struct i386_ldt_args))) != 0) return (error); if (kargs.largs.num > MAX_LD || kargs.largs.num <= 0) return (EINVAL); break; case I386_GET_XFPUSTATE: if ((error = copyin(uap->parms, &kargs.xfpu, sizeof(struct i386_get_xfpustate))) != 0) return (error); break; default: break; } switch(uap->op) { case I386_GET_LDT: error = i386_get_ldt(td, &kargs.largs); break; case I386_SET_LDT: if (kargs.largs.descs != NULL) { lp = (union descriptor *)malloc( kargs.largs.num * sizeof(union descriptor), M_TEMP, M_WAITOK); error = copyin(kargs.largs.descs, lp, kargs.largs.num * sizeof(union descriptor)); if (error == 0) error = i386_set_ldt(td, &kargs.largs, lp); free(lp, M_TEMP); } else { error = i386_set_ldt(td, &kargs.largs, NULL); } break; case I386_GET_IOPERM: error = i386_get_ioperm(td, &kargs.iargs); if (error == 0) error = copyout(&kargs.iargs, uap->parms, sizeof(struct i386_ioperm_args)); break; case I386_SET_IOPERM: error = i386_set_ioperm(td, &kargs.iargs); break; case I386_VM86: error = vm86_sysarch(td, uap->parms); break; case I386_GET_FSBASE: sdp = &td->td_pcb->pcb_fsd; base = sdp->sd_hibase << 24 | sdp->sd_lobase; error = copyout(&base, uap->parms, sizeof(base)); break; case I386_SET_FSBASE: error = copyin(uap->parms, &base, sizeof(base)); - if (!error) { + if (error == 0) { /* * Construct a descriptor and store it in the pcb for * the next context switch. Also store it in the gdt * so that the load of tf_fs into %fs will activate it * at return to userland. */ - sd.sd_lobase = base & 0xffffff; - sd.sd_hibase = (base >> 24) & 0xff; -#ifdef XEN - /* need to do nosegneg like Linux */ - sd.sd_lolimit = (HYPERVISOR_VIRT_START >> 12) & 0xffff; -#else - sd.sd_lolimit = 0xffff; /* 4GB limit, wraps around */ -#endif - sd.sd_hilimit = 0xf; - sd.sd_type = SDT_MEMRWA; - sd.sd_dpl = SEL_UPL; - sd.sd_p = 1; - sd.sd_xx = 0; - sd.sd_def32 = 1; - sd.sd_gran = 1; + fill_based_sd(&sd, base); critical_enter(); td->td_pcb->pcb_fsd = sd; #ifdef XEN HYPERVISOR_update_descriptor(vtomach(&PCPU_GET(fsgs_gdt)[0]), *(uint64_t *)&sd); #else PCPU_GET(fsgs_gdt)[0] = sd; #endif critical_exit(); td->td_frame->tf_fs = GSEL(GUFS_SEL, SEL_UPL); } break; case I386_GET_GSBASE: sdp = &td->td_pcb->pcb_gsd; base = sdp->sd_hibase << 24 | sdp->sd_lobase; error = copyout(&base, uap->parms, sizeof(base)); break; case I386_SET_GSBASE: error = copyin(uap->parms, &base, sizeof(base)); - if (!error) { + if (error == 0) { /* * Construct a descriptor and store it in the pcb for * the next context switch. Also store it in the gdt * because we have to do a load_gs() right now. 
*/ - sd.sd_lobase = base & 0xffffff; - sd.sd_hibase = (base >> 24) & 0xff; - -#ifdef XEN - /* need to do nosegneg like Linux */ - sd.sd_lolimit = (HYPERVISOR_VIRT_START >> 12) & 0xffff; -#else - sd.sd_lolimit = 0xffff; /* 4GB limit, wraps around */ -#endif - sd.sd_hilimit = 0xf; - sd.sd_type = SDT_MEMRWA; - sd.sd_dpl = SEL_UPL; - sd.sd_p = 1; - sd.sd_xx = 0; - sd.sd_def32 = 1; - sd.sd_gran = 1; + fill_based_sd(&sd, base); critical_enter(); td->td_pcb->pcb_gsd = sd; #ifdef XEN HYPERVISOR_update_descriptor(vtomach(&PCPU_GET(fsgs_gdt)[1]), *(uint64_t *)&sd); #else PCPU_GET(fsgs_gdt)[1] = sd; #endif critical_exit(); load_gs(GSEL(GUGS_SEL, SEL_UPL)); } break; case I386_GET_XFPUSTATE: if (kargs.xfpu.len > cpu_max_ext_state_size - sizeof(union savefpu)) return (EINVAL); npxgetregs(td); error = copyout((char *)(get_pcb_user_save_td(td) + 1), kargs.xfpu.addr, kargs.xfpu.len); break; default: error = EINVAL; break; } return (error); } int i386_extend_pcb(struct thread *td) { int i, offset; u_long *addr; struct pcb_ext *ext; struct soft_segment_descriptor ssd = { 0, /* segment base address (overwritten) */ ctob(IOPAGES + 1) - 1, /* length */ SDT_SYS386TSS, /* segment type */ 0, /* priority level */ 1, /* descriptor present */ 0, 0, 0, /* default 32 size */ 0 /* granularity */ }; ext = (struct pcb_ext *)kmem_malloc(kernel_arena, ctob(IOPAGES+1), M_WAITOK | M_ZERO); /* -16 is so we can convert a trapframe into vm86trapframe inplace */ ext->ext_tss.tss_esp0 = td->td_kstack + ctob(KSTACK_PAGES) - sizeof(struct pcb) - 16; ext->ext_tss.tss_ss0 = GSEL(GDATA_SEL, SEL_KPL); /* * The last byte of the i/o map must be followed by an 0xff byte. * We arbitrarily allocate 16 bytes here, to keep the starting * address on a doubleword boundary. */ offset = PAGE_SIZE - 16; ext->ext_tss.tss_ioopt = (offset - ((unsigned)&ext->ext_tss - (unsigned)ext)) << 16; ext->ext_iomap = (caddr_t)ext + offset; ext->ext_vm86.vm86_intmap = (caddr_t)ext + offset - 32; addr = (u_long *)ext->ext_vm86.vm86_intmap; for (i = 0; i < (ctob(IOPAGES) + 32 + 16) / sizeof(u_long); i++) *addr++ = ~0; ssd.ssd_base = (unsigned)&ext->ext_tss; ssd.ssd_limit -= ((unsigned)&ext->ext_tss - (unsigned)ext); ssdtosd(&ssd, &ext->ext_tssd); KASSERT(td == curthread, ("giving TSS to !curthread")); KASSERT(td->td_pcb->pcb_ext == 0, ("already have a TSS!")); /* Switch to the new TSS. */ critical_enter(); td->td_pcb->pcb_ext = ext; PCPU_SET(private_tss, 1); *PCPU_GET(tss_gdt) = ext->ext_tssd; ltr(GSEL(GPROC0_SEL, SEL_KPL)); critical_exit(); return 0; } int i386_set_ioperm(td, uap) struct thread *td; struct i386_ioperm_args *uap; { int i, error; char *iomap; if ((error = priv_check(td, PRIV_IO)) != 0) return (error); if ((error = securelevel_gt(td->td_ucred, 0)) != 0) return (error); /* * XXX * While this is restricted to root, we should probably figure out * whether any other driver is using this i/o address, as so not to * cause confusion. This probably requires a global 'usage registry'. 
*/ if (td->td_pcb->pcb_ext == 0) if ((error = i386_extend_pcb(td)) != 0) return (error); iomap = (char *)td->td_pcb->pcb_ext->ext_iomap; if (uap->start + uap->length > IOPAGES * PAGE_SIZE * NBBY) return (EINVAL); for (i = uap->start; i < uap->start + uap->length; i++) { if (uap->enable) iomap[i >> 3] &= ~(1 << (i & 7)); else iomap[i >> 3] |= (1 << (i & 7)); } return (error); } int i386_get_ioperm(td, uap) struct thread *td; struct i386_ioperm_args *uap; { int i, state; char *iomap; if (uap->start >= IOPAGES * PAGE_SIZE * NBBY) return (EINVAL); if (td->td_pcb->pcb_ext == 0) { uap->length = 0; goto done; } iomap = (char *)td->td_pcb->pcb_ext->ext_iomap; i = uap->start; state = (iomap[i >> 3] >> (i & 7)) & 1; uap->enable = !state; uap->length = 1; for (i = uap->start + 1; i < IOPAGES * PAGE_SIZE * NBBY; i++) { if (state != ((iomap[i >> 3] >> (i & 7)) & 1)) break; uap->length++; } done: return (0); } /* * Update the GDT entry pointing to the LDT to point to the LDT of the * current process. Manage dt_lock holding/unholding autonomously. */ void set_user_ldt(struct mdproc *mdp) { struct proc_ldt *pldt; int dtlocked; dtlocked = 0; if (!mtx_owned(&dt_lock)) { mtx_lock_spin(&dt_lock); dtlocked = 1; } pldt = mdp->md_ldt; #ifdef XEN i386_reset_ldt(pldt); PCPU_SET(currentldt, (int)pldt); #else #ifdef SMP gdt[PCPU_GET(cpuid) * NGDT + GUSERLDT_SEL].sd = pldt->ldt_sd; #else gdt[GUSERLDT_SEL].sd = pldt->ldt_sd; #endif lldt(GSEL(GUSERLDT_SEL, SEL_KPL)); PCPU_SET(currentldt, GSEL(GUSERLDT_SEL, SEL_KPL)); #endif /* XEN */ if (dtlocked) mtx_unlock_spin(&dt_lock); } #ifdef SMP static void set_user_ldt_rv(struct vmspace *vmsp) { struct thread *td; td = curthread; if (vmsp != td->td_proc->p_vmspace) return; set_user_ldt(&td->td_proc->p_md); } #endif #ifdef XEN /* * dt_lock must be held. Returns with dt_lock held. */ struct proc_ldt * user_ldt_alloc(struct mdproc *mdp, int len) { struct proc_ldt *pldt, *new_ldt; mtx_assert(&dt_lock, MA_OWNED); mtx_unlock_spin(&dt_lock); new_ldt = malloc(sizeof(struct proc_ldt), M_SUBPROC, M_WAITOK); new_ldt->ldt_len = len = NEW_MAX_LD(len); new_ldt->ldt_base = (caddr_t)kmem_malloc(kernel_arena, round_page(len * sizeof(union descriptor)), M_WAITOK); new_ldt->ldt_refcnt = 1; new_ldt->ldt_active = 0; mtx_lock_spin(&dt_lock); if ((pldt = mdp->md_ldt)) { if (len > pldt->ldt_len) len = pldt->ldt_len; bcopy(pldt->ldt_base, new_ldt->ldt_base, len * sizeof(union descriptor)); } else { bcopy(ldt, new_ldt->ldt_base, PAGE_SIZE); } mtx_unlock_spin(&dt_lock); /* XXX kill once pmap locking fixed. */ pmap_map_readonly(kernel_pmap, (vm_offset_t)new_ldt->ldt_base, new_ldt->ldt_len*sizeof(union descriptor)); mtx_lock_spin(&dt_lock); /* XXX kill once pmap locking fixed. */ return (new_ldt); } #else /* * dt_lock must be held. Returns with dt_lock held. 
*/ struct proc_ldt * user_ldt_alloc(struct mdproc *mdp, int len) { struct proc_ldt *pldt, *new_ldt; mtx_assert(&dt_lock, MA_OWNED); mtx_unlock_spin(&dt_lock); new_ldt = malloc(sizeof(struct proc_ldt), M_SUBPROC, M_WAITOK); new_ldt->ldt_len = len = NEW_MAX_LD(len); new_ldt->ldt_base = (caddr_t)kmem_malloc(kernel_arena, len * sizeof(union descriptor), M_WAITOK); new_ldt->ldt_refcnt = 1; new_ldt->ldt_active = 0; mtx_lock_spin(&dt_lock); gdt_segs[GUSERLDT_SEL].ssd_base = (unsigned)new_ldt->ldt_base; gdt_segs[GUSERLDT_SEL].ssd_limit = len * sizeof(union descriptor) - 1; ssdtosd(&gdt_segs[GUSERLDT_SEL], &new_ldt->ldt_sd); if ((pldt = mdp->md_ldt) != NULL) { if (len > pldt->ldt_len) len = pldt->ldt_len; bcopy(pldt->ldt_base, new_ldt->ldt_base, len * sizeof(union descriptor)); } else bcopy(ldt, new_ldt->ldt_base, sizeof(ldt)); return (new_ldt); } #endif /* !XEN */ /* * Must be called with dt_lock held. Returns with dt_lock unheld. */ void user_ldt_free(struct thread *td) { struct mdproc *mdp = &td->td_proc->p_md; struct proc_ldt *pldt; mtx_assert(&dt_lock, MA_OWNED); if ((pldt = mdp->md_ldt) == NULL) { mtx_unlock_spin(&dt_lock); return; } if (td == curthread) { #ifdef XEN i386_reset_ldt(&default_proc_ldt); PCPU_SET(currentldt, (int)&default_proc_ldt); #else lldt(_default_ldt); PCPU_SET(currentldt, _default_ldt); #endif } mdp->md_ldt = NULL; user_ldt_deref(pldt); } void user_ldt_deref(struct proc_ldt *pldt) { mtx_assert(&dt_lock, MA_OWNED); if (--pldt->ldt_refcnt == 0) { mtx_unlock_spin(&dt_lock); kmem_free(kernel_arena, (vm_offset_t)pldt->ldt_base, pldt->ldt_len * sizeof(union descriptor)); free(pldt, M_SUBPROC); } else mtx_unlock_spin(&dt_lock); } /* * Note for the authors of compat layers (linux, etc): copyout() in * the function below is not a problem since it presents data in * arch-specific format (i.e. i386-specific in this case), not in * the OS-specific one. */ int i386_get_ldt(td, uap) struct thread *td; struct i386_ldt_args *uap; { int error = 0; struct proc_ldt *pldt; int nldt, num; union descriptor *lp; #ifdef DEBUG printf("i386_get_ldt: start=%d num=%d descs=%p\n", uap->start, uap->num, (void *)uap->descs); #endif mtx_lock_spin(&dt_lock); if ((pldt = td->td_proc->p_md.md_ldt) != NULL) { nldt = pldt->ldt_len; lp = &((union descriptor *)(pldt->ldt_base))[uap->start]; mtx_unlock_spin(&dt_lock); num = min(uap->num, nldt); } else { mtx_unlock_spin(&dt_lock); nldt = sizeof(ldt)/sizeof(ldt[0]); num = min(uap->num, nldt); lp = &ldt[uap->start]; } if ((uap->start > (unsigned int)nldt) || ((unsigned int)num > (unsigned int)nldt) || ((unsigned int)(uap->start + num) > (unsigned int)nldt)) return(EINVAL); error = copyout(lp, uap->descs, num * sizeof(union descriptor)); if (!error) td->td_retval[0] = num; return(error); } int i386_set_ldt(td, uap, descs) struct thread *td; struct i386_ldt_args *uap; union descriptor *descs; { int error = 0, i; int largest_ld; struct mdproc *mdp = &td->td_proc->p_md; struct proc_ldt *pldt; union descriptor *dp; #ifdef DEBUG printf("i386_set_ldt: start=%d num=%d descs=%p\n", uap->start, uap->num, (void *)uap->descs); #endif if (descs == NULL) { /* Free descriptors */ if (uap->start == 0 && uap->num == 0) { /* * Treat this as a special case, so userland needn't * know magic number NLDT. 
*/ uap->start = NLDT; uap->num = MAX_LD - NLDT; } if (uap->num == 0) return (EINVAL); mtx_lock_spin(&dt_lock); if ((pldt = mdp->md_ldt) == NULL || uap->start >= pldt->ldt_len) { mtx_unlock_spin(&dt_lock); return (0); } largest_ld = uap->start + uap->num; if (largest_ld > pldt->ldt_len) largest_ld = pldt->ldt_len; i = largest_ld - uap->start; bzero(&((union descriptor *)(pldt->ldt_base))[uap->start], sizeof(union descriptor) * i); mtx_unlock_spin(&dt_lock); return (0); } if (!(uap->start == LDT_AUTO_ALLOC && uap->num == 1)) { /* verify range of descriptors to modify */ largest_ld = uap->start + uap->num; if (uap->start >= MAX_LD || largest_ld > MAX_LD) { return (EINVAL); } } /* Check descriptors for access violations */ for (i = 0; i < uap->num; i++) { dp = &descs[i]; switch (dp->sd.sd_type) { case SDT_SYSNULL: /* system null */ dp->sd.sd_p = 0; break; case SDT_SYS286TSS: /* system 286 TSS available */ case SDT_SYSLDT: /* system local descriptor table */ case SDT_SYS286BSY: /* system 286 TSS busy */ case SDT_SYSTASKGT: /* system task gate */ case SDT_SYS286IGT: /* system 286 interrupt gate */ case SDT_SYS286TGT: /* system 286 trap gate */ case SDT_SYSNULL2: /* undefined by Intel */ case SDT_SYS386TSS: /* system 386 TSS available */ case SDT_SYSNULL3: /* undefined by Intel */ case SDT_SYS386BSY: /* system 386 TSS busy */ case SDT_SYSNULL4: /* undefined by Intel */ case SDT_SYS386IGT: /* system 386 interrupt gate */ case SDT_SYS386TGT: /* system 386 trap gate */ case SDT_SYS286CGT: /* system 286 call gate */ case SDT_SYS386CGT: /* system 386 call gate */ /* I can't think of any reason to allow a user proc * to create a segment of these types. They are * for OS use only. */ return (EACCES); /*NOTREACHED*/ /* memory segment types */ case SDT_MEMEC: /* memory execute only conforming */ case SDT_MEMEAC: /* memory execute only accessed conforming */ case SDT_MEMERC: /* memory execute read conforming */ case SDT_MEMERAC: /* memory execute read accessed conforming */ /* Must be "present" if executable and conforming. */ if (dp->sd.sd_p == 0) return (EACCES); break; case SDT_MEMRO: /* memory read only */ case SDT_MEMROA: /* memory read only accessed */ case SDT_MEMRW: /* memory read write */ case SDT_MEMRWA: /* memory read write accessed */ case SDT_MEMROD: /* memory read only expand dwn limit */ case SDT_MEMRODA: /* memory read only expand dwn lim accessed */ case SDT_MEMRWD: /* memory read write expand dwn limit */ case SDT_MEMRWDA: /* memory read write expand dwn lim acessed */ case SDT_MEME: /* memory execute only */ case SDT_MEMEA: /* memory execute only accessed */ case SDT_MEMER: /* memory execute read */ case SDT_MEMERA: /* memory execute read accessed */ break; default: return(EINVAL); /*NOTREACHED*/ } /* Only user (ring-3) descriptors may be present. */ if ((dp->sd.sd_p != 0) && (dp->sd.sd_dpl != SEL_UPL)) return (EACCES); } if (uap->start == LDT_AUTO_ALLOC && uap->num == 1) { /* Allocate a free slot */ mtx_lock_spin(&dt_lock); if ((pldt = mdp->md_ldt) == NULL) { if ((error = i386_ldt_grow(td, NLDT + 1))) { mtx_unlock_spin(&dt_lock); return (error); } pldt = mdp->md_ldt; } again: /* * start scanning a bit up to leave room for NVidia and * Wine, which still user the "Blat" method of allocation. 
*/ dp = &((union descriptor *)(pldt->ldt_base))[NLDT]; for (i = NLDT; i < pldt->ldt_len; ++i) { if (dp->sd.sd_type == SDT_SYSNULL) break; dp++; } if (i >= pldt->ldt_len) { if ((error = i386_ldt_grow(td, pldt->ldt_len+1))) { mtx_unlock_spin(&dt_lock); return (error); } goto again; } uap->start = i; error = i386_set_ldt_data(td, i, 1, descs); mtx_unlock_spin(&dt_lock); } else { largest_ld = uap->start + uap->num; mtx_lock_spin(&dt_lock); if (!(error = i386_ldt_grow(td, largest_ld))) { error = i386_set_ldt_data(td, uap->start, uap->num, descs); } mtx_unlock_spin(&dt_lock); } if (error == 0) td->td_retval[0] = uap->start; return (error); } #ifdef XEN static int i386_set_ldt_data(struct thread *td, int start, int num, union descriptor *descs) { struct mdproc *mdp = &td->td_proc->p_md; struct proc_ldt *pldt = mdp->md_ldt; mtx_assert(&dt_lock, MA_OWNED); while (num) { xen_update_descriptor( &((union descriptor *)(pldt->ldt_base))[start], descs); num--; start++; descs++; } return (0); } #else static int i386_set_ldt_data(struct thread *td, int start, int num, union descriptor *descs) { struct mdproc *mdp = &td->td_proc->p_md; struct proc_ldt *pldt = mdp->md_ldt; mtx_assert(&dt_lock, MA_OWNED); /* Fill in range */ bcopy(descs, &((union descriptor *)(pldt->ldt_base))[start], num * sizeof(union descriptor)); return (0); } #endif /* !XEN */ static int i386_ldt_grow(struct thread *td, int len) { struct mdproc *mdp = &td->td_proc->p_md; struct proc_ldt *new_ldt, *pldt; caddr_t old_ldt_base = NULL_LDT_BASE; int old_ldt_len = 0; mtx_assert(&dt_lock, MA_OWNED); if (len > MAX_LD) return (ENOMEM); if (len < NLDT + 1) len = NLDT + 1; /* Allocate a user ldt. */ if ((pldt = mdp->md_ldt) == NULL || len > pldt->ldt_len) { new_ldt = user_ldt_alloc(mdp, len); if (new_ldt == NULL) return (ENOMEM); pldt = mdp->md_ldt; if (pldt != NULL) { if (new_ldt->ldt_len <= pldt->ldt_len) { /* * We just lost the race for allocation, so * free the new object and return. */ mtx_unlock_spin(&dt_lock); kmem_free(kernel_arena, (vm_offset_t)new_ldt->ldt_base, new_ldt->ldt_len * sizeof(union descriptor)); free(new_ldt, M_SUBPROC); mtx_lock_spin(&dt_lock); return (0); } /* * We have to substitute the current LDT entry for * curproc with the new one since its size grew. */ old_ldt_base = pldt->ldt_base; old_ldt_len = pldt->ldt_len; pldt->ldt_sd = new_ldt->ldt_sd; pldt->ldt_base = new_ldt->ldt_base; pldt->ldt_len = new_ldt->ldt_len; } else mdp->md_ldt = pldt = new_ldt; #ifdef SMP /* * Signal other cpus to reload ldt. We need to unlock dt_lock * here because other CPU will contest on it since their * curthreads won't hold the lock and will block when trying * to acquire it. */ mtx_unlock_spin(&dt_lock); smp_rendezvous(NULL, (void (*)(void *))set_user_ldt_rv, NULL, td->td_proc->p_vmspace); #else set_user_ldt(&td->td_proc->p_md); mtx_unlock_spin(&dt_lock); #endif if (old_ldt_base != NULL_LDT_BASE) { kmem_free(kernel_arena, (vm_offset_t)old_ldt_base, old_ldt_len * sizeof(union descriptor)); free(new_ldt, M_SUBPROC); } mtx_lock_spin(&dt_lock); } return (0); } Index: stable/10/sys/i386/include/md_var.h =================================================================== --- stable/10/sys/i386/include/md_var.h (revision 286310) +++ stable/10/sys/i386/include/md_var.h (revision 286311) @@ -1,133 +1,135 @@ /*- * Copyright (c) 1995 Bruce D. Evans. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #ifndef _MACHINE_MD_VAR_H_ #define _MACHINE_MD_VAR_H_ /* * Miscellaneous machine-dependent declarations. */ extern long Maxmem; extern u_int basemem; /* PA of original top of base memory */ extern int busdma_swi_pending; extern u_int cpu_exthigh; extern u_int cpu_feature; extern u_int cpu_feature2; extern u_int amd_feature; extern u_int amd_feature2; extern u_int amd_pminfo; extern u_int via_feature_rng; extern u_int via_feature_xcrypt; extern u_int cpu_clflush_line_size; extern u_int cpu_stdext_feature; extern u_int cpu_stdext_feature2; extern u_int cpu_fxsr; extern u_int cpu_high; extern u_int cpu_id; extern u_int cpu_max_ext_state_size; extern u_int cpu_mxcsr_mask; extern u_int cpu_procinfo; extern u_int cpu_procinfo2; extern char cpu_vendor[]; extern u_int cpu_vendor_id; extern u_int cpu_mon_mwait_flags; extern u_int cpu_mon_min_size; extern u_int cpu_mon_max_size; extern u_int cpu_maxphyaddr; extern u_int cyrix_did; #if defined(I586_CPU) && !defined(NO_F00F_HACK) extern int has_f00f_bug; #endif extern u_int hv_high; extern char hv_vendor[]; extern char kstack[]; extern char sigcode[]; extern int szsigcode; #ifdef COMPAT_FREEBSD4 extern int szfreebsd4_sigcode; #endif #ifdef COMPAT_43 extern int szosigcode; #endif extern uint32_t *vm_page_dump; extern int vm_page_dump_size; extern int workaround_erratum383; extern int _udatasel; extern int _ucodesel; extern int use_xsave; extern uint64_t xsave_mask; typedef void alias_for_inthand_t(u_int cs, u_int ef, u_int esp, u_int ss); struct pcb; union savefpu; struct thread; struct reg; struct fpreg; struct dbreg; struct dumperinfo; +struct segment_descriptor; void *alloc_fpusave(int flags); void bcopyb(const void *from, void *to, size_t len); void busdma_swi(void); void cpu_setregs(void); void cpu_switch_load_gs(void) __asm(__STRING(cpu_switch_load_gs)); void doreti_iret(void) __asm(__STRING(doreti_iret)); void doreti_iret_fault(void) __asm(__STRING(doreti_iret_fault)); void doreti_popl_ds(void) __asm(__STRING(doreti_popl_ds)); void doreti_popl_ds_fault(void) __asm(__STRING(doreti_popl_ds_fault)); void doreti_popl_es(void) __asm(__STRING(doreti_popl_es)); void doreti_popl_es_fault(void) __asm(__STRING(doreti_popl_es_fault)); void doreti_popl_fs(void) 
__asm(__STRING(doreti_popl_fs)); void doreti_popl_fs_fault(void) __asm(__STRING(doreti_popl_fs_fault)); void dump_add_page(vm_paddr_t); void dump_drop_page(vm_paddr_t); void finishidentcpu(void); void fillw(int /*u_short*/ pat, void *base, size_t cnt); +void fill_based_sd(struct segment_descriptor *sdp, uint32_t base); void initializecpu(void); void initializecpucache(void); void i686_pagezero(void *addr); void sse2_pagezero(void *addr); void init_AMD_Elan_sc520(void); int is_physical_memory(vm_paddr_t addr); int isa_nmi(int cd); vm_paddr_t kvtop(void *addr); void panicifcpuunsupported(void); void ppro_reenable_apic(void); void printcpuinfo(void); void setidt(int idx, alias_for_inthand_t *func, int typ, int dpl, int selec); int user_dbreg_trap(void); void minidumpsys(struct dumperinfo *); union savefpu *get_pcb_user_save_td(struct thread *td); union savefpu *get_pcb_user_save_pcb(struct pcb *pcb); struct pcb *get_pcb_td(struct thread *td); #endif /* !_MACHINE_MD_VAR_H_ */ Index: stable/10/sys/x86/include/ptrace.h =================================================================== --- stable/10/sys/x86/include/ptrace.h (revision 286310) +++ stable/10/sys/x86/include/ptrace.h (revision 286311) @@ -1,61 +1,65 @@ /*- * Copyright (c) 1992, 1993 * The Regents of the University of California. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 4. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * @(#)ptrace.h 8.1 (Berkeley) 6/11/93 * $FreeBSD$ */ #ifndef _MACHINE_PTRACE_H_ #define _MACHINE_PTRACE_H_ #define __HAVE_PTRACE_MACHDEP /* * On amd64 (PT_FIRSTMACH + 0) and (PT_FIRSTMACH + 1) are old values for * PT_GETXSTATE_OLD and PT_SETXSTATE_OLD. They should not be (re)used. 
#ifdef __i386__
#define	PT_GETXMMREGS	(PT_FIRSTMACH + 0)
#define	PT_SETXMMREGS	(PT_FIRSTMACH + 1)
#endif
#ifdef _KERNEL
#define	PT_GETXSTATE_OLD	(PT_FIRSTMACH + 2)
#define	PT_SETXSTATE_OLD	(PT_FIRSTMACH + 3)
#endif
#define	PT_GETXSTATE_INFO	(PT_FIRSTMACH + 4)
#define	PT_GETXSTATE	(PT_FIRSTMACH + 5)
#define	PT_SETXSTATE	(PT_FIRSTMACH + 6)
+#define	PT_GETFSBASE	(PT_FIRSTMACH + 7)
+#define	PT_SETFSBASE	(PT_FIRSTMACH + 8)
+#define	PT_GETGSBASE	(PT_FIRSTMACH + 9)
+#define	PT_SETGSBASE	(PT_FIRSTMACH + 10)
/* Argument structure for PT_GETXSTATE_INFO. */
struct ptrace_xstate_info {
	uint64_t	xsave_mask;
	uint32_t	xsave_len;
};
#endif
Index: stable/10
===================================================================
--- stable/10	(revision 286310)
+++ stable/10	(revision 286311)
Property changes on: stable/10
___________________________________________________________________
Modified: svn:mergeinfo
## -0,0 +0,1 ##
Merged /head:r284918-284919,284965,285011,285104
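For reference, and again not part of the merged change, here is a hedged sketch of how a debugger might use the new x86-specific requests defined above: PT_GETXSTATE_INFO to learn the XSAVE layout, PT_GETXSTATE to fetch the extended FPU state, and PT_GETFSBASE to read the %fs base. It assumes the target pid is already attached and stopped (for example by the earlier sketch) and is a 64-bit process; for a 32-bit tracee the base value is 32 bits wide, as in the i386 handler above. The helper name dump_x86_state is hypothetical.

/*
 * Illustrative sketch only: exercise the x86 machine-specific
 * requests.  Assumes "pid" is already attached and stopped.
 */
#include <sys/types.h>
#include <sys/ptrace.h>

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static int
dump_x86_state(pid_t pid)
{
	struct ptrace_xstate_info info;
	unsigned long fsbase;
	char *xsave;

	/* The data argument must equal the size of the info structure. */
	if (ptrace(PT_GETXSTATE_INFO, pid, (caddr_t)&info,
	    sizeof(info)) == -1)
		return (-1);
	printf("xsave_mask 0x%jx, xsave_len %u\n",
	    (uintmax_t)info.xsave_mask, info.xsave_len);

	/* Fetch the XSAVE area; the kernel copies at most data bytes. */
	xsave = malloc(info.xsave_len);
	if (xsave == NULL)
		return (-1);
	if (ptrace(PT_GETXSTATE, pid, (caddr_t)xsave,
	    info.xsave_len) == -1) {
		free(xsave);
		return (-1);
	}
	/* A real debugger would decode the XSAVE layout here. */
	free(xsave);

	/* addr points at an unsigned long that receives the %fs base. */
	if (ptrace(PT_GETFSBASE, pid, (caddr_t)&fsbase, 0) == -1)
		return (-1);
	printf("%%fs base 0x%lx\n", fsbase);
	return (0);
}

PT_SETXSTATE and PT_SETFSBASE accept the same layouts for loading state back, subject to the size and address limits spelled out in the ERRORS section of the man page.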