Index: head/share/man/man4/cpufreq.4 =================================================================== --- head/share/man/man4/cpufreq.4 (revision 357001) +++ head/share/man/man4/cpufreq.4 (revision 357002) @@ -1,308 +1,312 @@ .\" Copyright (c) 2005 Nate Lawson .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. 
.\"
.\" $FreeBSD$
.\"
-.Dd March 3, 2006
+.Dd January 22, 2020
.Dt CPUFREQ 4
.Os
.Sh NAME
.Nm cpufreq
.Nd CPU frequency control framework
.Sh SYNOPSIS
.Cd "device cpufreq"
.Pp
.In sys/cpu.h
.Ft int
.Fn cpufreq_levels "device_t dev" "struct cf_level *levels" "int *count"
.Ft int
.Fn cpufreq_set "device_t dev" "const struct cf_level *level" "int priority"
.Ft int
.Fn cpufreq_get "device_t dev" "struct cf_level *level"
.Ft int
.Fo cpufreq_drv_settings
.Fa "device_t dev"
.Fa "struct cf_setting *sets"
.Fa "int *count"
.Fc
.Ft int
.Fn cpufreq_drv_type "device_t dev" "int *type"
.Ft int
.Fn cpufreq_drv_set "device_t dev" "const struct cf_setting *set"
.Ft int
.Fn cpufreq_drv_get "device_t dev" "struct cf_setting *set"
.Sh DESCRIPTION
The
.Nm
driver provides a unified kernel and user interface to CPU frequency
control drivers.
It combines multiple drivers offering different settings into a single
interface of all possible levels.
Users can access this interface directly via
.Xr sysctl 8
or by indicating to
.Pa /etc/rc.d/power_profile
that it should switch settings when the AC line state changes via
.Xr rc.conf 5 .
.Sh SYSCTL VARIABLES
These settings may be overridden by kernel drivers requesting alternate
settings.
If this occurs, the original values will be restored once the condition
has passed (e.g., the system has cooled sufficiently).
If a sysctl cannot be set due to an override condition, it will return
.Er EPERM .
.Pp
The frequency cannot be changed if TSC is in use as the timecounter.
This is because the timecounter system needs to use a source that has a
constant rate.
The timecounter source can be changed with the
.Va kern.timecounter.hardware
sysctl.
Available choices are listed in the
.Va kern.timecounter.choice
sysctl entry.
.Bl -tag -width indent
.It Va dev.cpu.%d.freq
Current active CPU frequency in MHz.
+.It Va dev.cpu.%d.freq_driver
+The specific
+.Nm
+driver used by this CPU.
.It Va dev.cpu.%d.freq_levels
Currently available levels for the CPU (frequency/power usage).
Values are in units of MHz and milliwatts.
.It Va dev.DEVICE.%d.freq_settings
Currently available settings for the driver (frequency/power usage).
Values are in units of MHz and milliwatts.
This is helpful for understanding which settings are offered by which
driver for debugging purposes.
.It Va debug.cpufreq.lowest
Lowest CPU frequency in MHz to offer to users.
This setting is also accessible via a tunable with the same name.
This can be used to disable very low levels that may be unusable on some
systems.
.It Va debug.cpufreq.verbose
Print verbose messages.
This setting is also accessible via a tunable with the same name.
.El
.Sh SUPPORTED DRIVERS
The following device drivers offer absolute frequency control via the
.Nm
interface.
Usually, only one of these can be active at a time.
.Pp
.Bl -tag -compact -width ".Pa acpi_perf"
.It Pa acpi_perf
ACPI CPU performance states
.It Pa est
Intel Enhanced SpeedStep
.It Pa ichss
Intel SpeedStep for ICH
.It Pa powernow
AMD PowerNow!\& and Cool'n'Quiet for K7 and K8
.It Pa smist
Intel SMI-based SpeedStep for PIIX4
.El
.Pp
The following device drivers offer relative frequency control and have
an additive effect:
.Pp
.Bl -tag -compact -width ".Pa acpi_throttle"
.It Pa acpi_throttle
ACPI CPU throttling
.It Pa p4tcc
Pentium 4 Thermal Control Circuitry
.El
.Sh KERNEL INTERFACE
Kernel components can query and set CPU frequencies through the
.Nm
kernel interface.
This involves obtaining a
.Nm
device, calling
.Fn cpufreq_levels
to get the currently available frequency levels, checking the current
level with
.Fn cpufreq_get ,
and setting a new one from the list with
.Fn cpufreq_set .
Each level may actually reference more than one
.Nm
driver but kernel components do not need to be aware of this.
The
.Va total_set
element of
.Vt "struct cf_level"
provides a summary of the frequency and power for this level.
Unknown or irrelevant values are set to
.Dv CPUFREQ_VAL_UNKNOWN .
.Pp
The
.Fn cpufreq_levels
method takes a
.Nm
device and an empty array of
.Fa levels .
The
.Fa count
value should be set to the number of levels available and, after the
function completes, will be set to the actual number of levels returned.
If there are more levels than
.Fa count
will allow, it should return
.Er E2BIG .
.Pp
The
.Fn cpufreq_get
method takes a pointer to space to store a
.Fa level .
After successful completion, the output will be the current active level
and is equal to one of the levels returned by
.Fn cpufreq_levels .
.Pp
The
.Fn cpufreq_set
method takes a pointer to a
.Fa level
and attempts to activate it.
The
.Fa priority
(i.e.,
.Dv CPUFREQ_PRIO_KERN )
tells
.Nm
whether to override previous settings while activating this level.
If
.Fa priority
is higher than the current active level's priority, that level will be
saved and overridden with the new level.
If a level is already saved, the new level is set without overwriting
the older saved level.
If
.Fn cpufreq_set
is called with a
.Dv NULL
.Fa level ,
the saved level will be restored.
If there is no saved level,
.Fn cpufreq_set
will return
.Er ENXIO .
If
.Fa priority
is lower than the current active level's priority, this method returns
.Er EPERM .
.Sh DRIVER INTERFACE
Kernel drivers offering hardware-specific CPU frequency control export
their individual settings through the
.Nm
driver interface.
This involves implementing these methods:
.Fn cpufreq_drv_settings ,
.Fn cpufreq_drv_type ,
.Fn cpufreq_drv_set ,
and
.Fn cpufreq_drv_get .
Additionally, the driver must attach a device as a child of a CPU device
so that these methods can be called by the
.Nm
framework.
.Pp
The
.Fn cpufreq_drv_settings
method returns an array of currently available settings, each of type
.Vt "struct cf_setting" .
The driver should set unknown or irrelevant values to
.Dv CPUFREQ_VAL_UNKNOWN .
All the following elements for each setting should be returned:
.Bd -literal
struct cf_setting {
	int	freq;	/* CPU clock in MHz or 100ths of a percent. */
	int	volts;	/* Voltage in mV. */
	int	power;	/* Power consumed in mW. */
	int	lat;	/* Transition latency in us. */
	device_t dev;	/* Driver providing this setting. */
};
.Ed
.Pp
On entry to this method,
.Fa count
contains the number of settings that can be returned.
On successful completion, the driver sets it to the actual number of
settings returned.
If the driver offers more settings than
.Fa count
will allow, it should return
.Er E2BIG .
.Pp
The
.Fn cpufreq_drv_type
method indicates the type of settings it offers, either
.Dv CPUFREQ_TYPE_ABSOLUTE
or
.Dv CPUFREQ_TYPE_RELATIVE .
Additionally, the driver may set the
.Dv CPUFREQ_FLAG_INFO_ONLY
flag if the settings it provides are information for other drivers only
and cannot be passed to
.Fn cpufreq_drv_set
to activate them.
.Pp
The
.Fn cpufreq_drv_set
method takes a driver setting and makes it active.
If the setting is invalid or not currently available, it should return
.Er EINVAL .
.Pp
The
.Fn cpufreq_drv_get
method returns the currently-active driver setting.
The
.Vt "struct cf_setting"
returned must be valid for passing to
.Fn cpufreq_drv_set ,
including all elements being filled out correctly.
If the driver cannot infer the current setting (even by estimating it
with
.Fn cpu_est_clockrate )
then it should set all elements to
.Dv CPUFREQ_VAL_UNKNOWN .
.Sh SEE ALSO
.Xr acpi 4 ,
.Xr est 4 ,
.Xr timecounters 4 ,
.Xr powerd 8 ,
.Xr sysctl 8
.Sh AUTHORS
.An Nate Lawson
.An Bruno Ducrot
contributed the
.Pa powernow
driver.
.Sh BUGS
The following drivers have not yet been converted to the
.Nm
interface:
.Xr longrun 4 .
.Pp
Notification of CPU and bus frequency changes is not implemented yet.
.Pp
When multiple CPUs offer frequency control, they cannot be set to
different levels and must all offer the same frequency settings.
Index: head/share/man/man4/hwpstate_intel.4 =================================================================== --- head/share/man/man4/hwpstate_intel.4 (nonexistent) +++ head/share/man/man4/hwpstate_intel.4 (revision 357002) @@ -0,0 +1,89 @@ +.\" +.\" Copyright (c) 2019 Intel Corporation +.\" +.\" Redistribution and use in source and binary forms, with or without +.\" modification, are permitted provided that the following conditions +.\" are met: +.\" 1. Redistributions of source code must retain the above copyright +.\" notice, this list of conditions and the following disclaimer. +.\" 2. Redistributions in binary form must reproduce the above copyright +.\" notice, this list of conditions and the following disclaimer in the +.\" documentation and/or other materials provided with the distribution. +.\" +.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND +.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +.\" SUCH DAMAGE. 
+.\"
+.\" $FreeBSD$
+.\"
+.Dd January 22, 2020
+.Dt HWPSTATE_INTEL 4
+.Os
+.Sh NAME
+.Nm hwpstate_intel
+.Nd Intel Speed Shift Technology driver
+.Sh SYNOPSIS
+To compile this driver into your kernel,
+place the following line in your kernel
+configuration file:
+.Bd -ragged -offset indent
+.Cd "device cpufreq"
+.Ed
+.Sh DESCRIPTION
+The
+.Nm
+driver provides support for hardware-controlled performance states on
+Intel platforms, also known as Intel Speed Shift Technology.
+.Sh LOADER TUNABLES
+.Bl -tag -width indent
+.It Va hint.hwpstate_intel.0.disabled
+Can be used to disable
+.Nm ,
+allowing other compatible drivers, such as
+.Xr est 4 ,
+to manage performance states.
+.Pq default 0
+.El
+.Sh SYSCTL VARIABLES
+The following
+.Xr sysctl 8
+values are available:
+.Bl -tag -width indent
+.It Va dev.hwpstate_intel.%d.%desc
+Describes the attached driver, for example:
+.Dl dev.hwpstate_intel.0.%desc: Intel Speed Shift
+.It Va dev.hwpstate_intel.%d.%driver
+Driver in use, always
+.Nm ,
+for example:
+.Dl dev.hwpstate_intel.0.%driver: hwpstate_intel
+.It Va dev.hwpstate_intel.%d.%parent
+The CPU that is exposing these frequencies, for example:
+.Dl dev.hwpstate_intel.0.%parent: cpu0
+.It Va dev.hwpstate_intel.%d.epp
+Energy/Performance Preference.
+Valid values range from 0 to 100.
+Setting this field conveys a hint to the hardware regarding a preference
+towards performance (at value 0), energy efficiency (at value 100), or
+somewhere in between, for example:
+.Dl dev.hwpstate_intel.0.epp: 0
+.El
+.Sh COMPATIBILITY
+.Nm
+is only found on supported Intel CPUs.
+.Sh SEE ALSO
+.Xr cpufreq 4
+.Rs
+.%T "Intel 64 and IA-32 Architectures Software Developer Manuals"
+.%U "http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html"
+.Re
+.Sh AUTHORS
+This manual page was written by
+.An D Scott Phillips Aq Mt scottph@FreeBSD.org .
Property changes on: head/share/man/man4/hwpstate_intel.4 ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: head/sys/conf/files.x86 =================================================================== --- head/sys/conf/files.x86 (revision 357001) +++ head/sys/conf/files.x86 (revision 357002) @@ -1,339 +1,340 @@ # This file tells config what files go into building a kernel, # files marked standard are always included. # # $FreeBSD$ # # This file contains all the x86 devices and such that are # common between i386 and amd64, but aren't applicable to # any other architecture we support. # # The long compile-with and dependency lines are required because of # limitations in config: backslash-newline doesn't work in strings, and # dependency lines other than the first are silently ignored. 
cddl/dev/fbt/x86/fbt_isa.c optional dtrace_fbt | dtraceall compile-with "${FBT_C}" cddl/dev/dtrace/x86/dis_tables.c optional dtrace_fbt | dtraceall compile-with "${DTRACE_C}" cddl/dev/dtrace/x86/instr_size.c optional dtrace_fbt | dtraceall compile-with "${DTRACE_C}" compat/ndis/kern_ndis.c optional ndisapi pci compat/ndis/kern_windrv.c optional ndisapi pci compat/ndis/subr_hal.c optional ndisapi pci compat/ndis/subr_ndis.c optional ndisapi pci compat/ndis/subr_ntoskrnl.c optional ndisapi pci compat/ndis/subr_pe.c optional ndisapi pci compat/ndis/subr_usbd.c optional ndisapi pci crypto/aesni/aesni.c optional aesni aesni_ghash.o optional aesni \ dependency "$S/crypto/aesni/aesni_ghash.c" \ compile-with "${CC} -c ${CFLAGS:C/^-O2$/-O3/:N-nostdinc} ${WERROR} ${NO_WCAST_QUAL} ${PROF} -mmmx -msse -msse4 -maes -mpclmul ${.IMPSRC}" \ no-implicit-rule \ clean "aesni_ghash.o" aesni_ccm.o optional aesni \ dependency "$S/crypto/aesni/aesni_ccm.c" \ compile-with "${CC} -c ${CFLAGS:C/^-O2$/-O3/:N-nostdinc} ${WERROR} ${NO_WCAST_QUAL} ${PROF} -mmmx -msse -msse4 -maes -mpclmul ${.IMPSRC}" \ no-implicit-rule \ clean "aesni_ccm.o" aesni_wrap.o optional aesni \ dependency "$S/crypto/aesni/aesni_wrap.c" \ compile-with "${CC} -c ${CFLAGS:C/^-O2$/-O3/:N-nostdinc} ${WERROR} ${NO_WCAST_QUAL} ${PROF} -mmmx -msse -msse4 -maes ${.IMPSRC}" \ no-implicit-rule \ clean "aesni_wrap.o" intel_sha1.o optional aesni \ dependency "$S/crypto/aesni/intel_sha1.c" \ compile-with "${CC} -c ${CFLAGS:C/^-O2$/-O3/:N-nostdinc} ${WERROR} ${PROF} -mmmx -msse -msse4 -msha ${.IMPSRC}" \ no-implicit-rule \ clean "intel_sha1.o" intel_sha256.o optional aesni \ dependency "$S/crypto/aesni/intel_sha256.c" \ compile-with "${CC} -c ${CFLAGS:C/^-O2$/-O3/:N-nostdinc} ${WERROR} ${PROF} -mmmx -msse -msse4 -msha ${.IMPSRC}" \ no-implicit-rule \ clean "intel_sha256.o" crypto/via/padlock.c optional padlock crypto/via/padlock_cipher.c optional padlock crypto/via/padlock_hash.c optional padlock dev/acpica/acpi_hpet.c optional acpi 
dev/acpica/acpi_if.m standard dev/acpica/acpi_pci.c optional acpi pci dev/acpica/acpi_pci_link.c optional acpi pci dev/acpica/acpi_pcib.c optional acpi pci dev/acpica/acpi_pcib_acpi.c optional acpi pci dev/acpica/acpi_pcib_pci.c optional acpi pci dev/acpica/acpi_pxm.c optional acpi dev/acpica/acpi_timer.c optional acpi dev/amdsbwd/amdsbwd.c optional amdsbwd dev/amdsmn/amdsmn.c optional amdsmn | amdtemp dev/amdtemp/amdtemp.c optional amdtemp dev/arcmsr/arcmsr.c optional arcmsr pci dev/asmc/asmc.c optional asmc isa dev/atkbdc/atkbd.c optional atkbd atkbdc dev/atkbdc/atkbd_atkbdc.c optional atkbd atkbdc dev/atkbdc/atkbdc.c optional atkbdc dev/atkbdc/atkbdc_isa.c optional atkbdc isa dev/atkbdc/atkbdc_subr.c optional atkbdc dev/atkbdc/psm.c optional psm atkbdc dev/bxe/bxe.c optional bxe pci dev/bxe/bxe_stats.c optional bxe pci dev/bxe/bxe_debug.c optional bxe pci dev/bxe/ecore_sp.c optional bxe pci dev/bxe/bxe_elink.c optional bxe pci dev/bxe/57710_init_values.c optional bxe pci dev/bxe/57711_init_values.c optional bxe pci dev/bxe/57712_init_values.c optional bxe pci dev/coretemp/coretemp.c optional coretemp dev/cpuctl/cpuctl.c optional cpuctl dev/dpms/dpms.c optional dpms dev/fb/fb.c optional fb | vga dev/fb/s3_pci.c optional s3pci dev/fb/vesa.c optional vga vesa dev/fb/vga.c optional vga dev/fdc/fdc.c optional fdc dev/fdc/fdc_acpi.c optional fdc dev/fdc/fdc_isa.c optional fdc isa dev/fdc/fdc_pccard.c optional fdc pccard dev/gpio/bytgpio.c optional bytgpio dev/gpio/chvgpio.c optional chvgpio dev/hpt27xx/hpt27xx_os_bsd.c optional hpt27xx dev/hpt27xx/hpt27xx_osm_bsd.c optional hpt27xx dev/hpt27xx/hpt27xx_config.c optional hpt27xx hpt27xx_lib.o optional hpt27xx \ dependency "$S/dev/hpt27xx/$M-elf.hpt27xx_lib.o.uu" \ compile-with "uudecode < $S/dev/hpt27xx/$M-elf.hpt27xx_lib.o.uu" \ no-implicit-rule dev/hptmv/entry.c optional hptmv dev/hptmv/mv.c optional hptmv dev/hptmv/gui_lib.c optional hptmv dev/hptmv/hptproc.c optional hptmv dev/hptmv/ioctl.c optional hptmv 
hptmvraid.o optional hptmv \ dependency "$S/dev/hptmv/$M-elf.raid.o.uu" \ compile-with "uudecode < $S/dev/hptmv/$M-elf.raid.o.uu" \ no-implicit-rule dev/hptnr/hptnr_os_bsd.c optional hptnr dev/hptnr/hptnr_osm_bsd.c optional hptnr dev/hptnr/hptnr_config.c optional hptnr hptnr_lib.o optional hptnr \ dependency "$S/dev/hptnr/$M-elf.hptnr_lib.o.uu" \ compile-with "uudecode < $S/dev/hptnr/$M-elf.hptnr_lib.o.uu" \ no-implicit-rule dev/hptrr/hptrr_os_bsd.c optional hptrr dev/hptrr/hptrr_osm_bsd.c optional hptrr dev/hptrr/hptrr_config.c optional hptrr hptrr_lib.o optional hptrr \ dependency "$S/dev/hptrr/$M-elf.hptrr_lib.o.uu" \ compile-with "uudecode < $S/dev/hptrr/$M-elf.hptrr_lib.o.uu" \ no-implicit-rule dev/hwpmc/hwpmc_amd.c optional hwpmc dev/hwpmc/hwpmc_intel.c optional hwpmc dev/hwpmc/hwpmc_core.c optional hwpmc dev/hwpmc/hwpmc_uncore.c optional hwpmc dev/hwpmc/hwpmc_tsc.c optional hwpmc dev/hwpmc/hwpmc_x86.c optional hwpmc dev/hyperv/input/hv_kbd.c optional hyperv dev/hyperv/input/hv_kbdc.c optional hyperv dev/hyperv/pcib/vmbus_pcib.c optional hyperv pci dev/hyperv/netvsc/hn_nvs.c optional hyperv dev/hyperv/netvsc/hn_rndis.c optional hyperv dev/hyperv/netvsc/if_hn.c optional hyperv dev/hyperv/storvsc/hv_storvsc_drv_freebsd.c optional hyperv dev/hyperv/utilities/hv_kvp.c optional hyperv dev/hyperv/utilities/hv_snapshot.c optional hyperv dev/hyperv/utilities/vmbus_heartbeat.c optional hyperv dev/hyperv/utilities/vmbus_ic.c optional hyperv dev/hyperv/utilities/vmbus_shutdown.c optional hyperv dev/hyperv/utilities/vmbus_timesync.c optional hyperv dev/hyperv/vmbus/hyperv.c optional hyperv dev/hyperv/vmbus/hyperv_busdma.c optional hyperv dev/hyperv/vmbus/vmbus.c optional hyperv pci dev/hyperv/vmbus/vmbus_br.c optional hyperv dev/hyperv/vmbus/vmbus_chan.c optional hyperv dev/hyperv/vmbus/vmbus_et.c optional hyperv dev/hyperv/vmbus/vmbus_if.m optional hyperv dev/hyperv/vmbus/vmbus_res.c optional hyperv dev/hyperv/vmbus/vmbus_xact.c optional hyperv dev/ichwd/ichwd.c 
optional ichwd dev/if_ndis/if_ndis.c optional ndis dev/if_ndis/if_ndis_pccard.c optional ndis pccard dev/if_ndis/if_ndis_pci.c optional ndis cardbus | ndis pci dev/if_ndis/if_ndis_usb.c optional ndis usb dev/imcsmb/imcsmb.c optional imcsmb dev/imcsmb/imcsmb_pci.c optional imcsmb pci dev/intel/spi.c optional intelspi dev/io/iodev.c optional io dev/ipmi/ipmi.c optional ipmi dev/ipmi/ipmi_acpi.c optional ipmi acpi dev/ipmi/ipmi_isa.c optional ipmi isa dev/ipmi/ipmi_kcs.c optional ipmi dev/ipmi/ipmi_smic.c optional ipmi dev/ipmi/ipmi_smbus.c optional ipmi smbus dev/ipmi/ipmi_smbios.c optional ipmi dev/ipmi/ipmi_ssif.c optional ipmi smbus dev/ipmi/ipmi_pci.c optional ipmi pci dev/ipmi/ipmi_linux.c optional ipmi compat_linux32 dev/isci/isci.c optional isci dev/isci/isci_controller.c optional isci dev/isci/isci_domain.c optional isci dev/isci/isci_interrupt.c optional isci dev/isci/isci_io_request.c optional isci dev/isci/isci_logger.c optional isci dev/isci/isci_oem_parameters.c optional isci dev/isci/isci_remote_device.c optional isci dev/isci/isci_sysctl.c optional isci dev/isci/isci_task_request.c optional isci dev/isci/isci_timer.c optional isci dev/isci/scil/sati.c optional isci dev/isci/scil/sati_abort_task_set.c optional isci dev/isci/scil/sati_atapi.c optional isci dev/isci/scil/sati_device.c optional isci dev/isci/scil/sati_inquiry.c optional isci dev/isci/scil/sati_log_sense.c optional isci dev/isci/scil/sati_lun_reset.c optional isci dev/isci/scil/sati_mode_pages.c optional isci dev/isci/scil/sati_mode_select.c optional isci dev/isci/scil/sati_mode_sense.c optional isci dev/isci/scil/sati_mode_sense_10.c optional isci dev/isci/scil/sati_mode_sense_6.c optional isci dev/isci/scil/sati_move.c optional isci dev/isci/scil/sati_passthrough.c optional isci dev/isci/scil/sati_read.c optional isci dev/isci/scil/sati_read_buffer.c optional isci dev/isci/scil/sati_read_capacity.c optional isci dev/isci/scil/sati_reassign_blocks.c optional isci 
dev/isci/scil/sati_report_luns.c optional isci dev/isci/scil/sati_request_sense.c optional isci dev/isci/scil/sati_start_stop_unit.c optional isci dev/isci/scil/sati_synchronize_cache.c optional isci dev/isci/scil/sati_test_unit_ready.c optional isci dev/isci/scil/sati_unmap.c optional isci dev/isci/scil/sati_util.c optional isci dev/isci/scil/sati_verify.c optional isci dev/isci/scil/sati_write.c optional isci dev/isci/scil/sati_write_and_verify.c optional isci dev/isci/scil/sati_write_buffer.c optional isci dev/isci/scil/sati_write_long.c optional isci dev/isci/scil/sci_abstract_list.c optional isci dev/isci/scil/sci_base_controller.c optional isci dev/isci/scil/sci_base_domain.c optional isci dev/isci/scil/sci_base_iterator.c optional isci dev/isci/scil/sci_base_library.c optional isci dev/isci/scil/sci_base_logger.c optional isci dev/isci/scil/sci_base_memory_descriptor_list.c optional isci dev/isci/scil/sci_base_memory_descriptor_list_decorator.c optional isci dev/isci/scil/sci_base_object.c optional isci dev/isci/scil/sci_base_observer.c optional isci dev/isci/scil/sci_base_phy.c optional isci dev/isci/scil/sci_base_port.c optional isci dev/isci/scil/sci_base_remote_device.c optional isci dev/isci/scil/sci_base_request.c optional isci dev/isci/scil/sci_base_state_machine.c optional isci dev/isci/scil/sci_base_state_machine_logger.c optional isci dev/isci/scil/sci_base_state_machine_observer.c optional isci dev/isci/scil/sci_base_subject.c optional isci dev/isci/scil/sci_util.c optional isci dev/isci/scil/scic_sds_controller.c optional isci dev/isci/scil/scic_sds_library.c optional isci dev/isci/scil/scic_sds_pci.c optional isci dev/isci/scil/scic_sds_phy.c optional isci dev/isci/scil/scic_sds_port.c optional isci dev/isci/scil/scic_sds_port_configuration_agent.c optional isci dev/isci/scil/scic_sds_remote_device.c optional isci dev/isci/scil/scic_sds_remote_node_context.c optional isci dev/isci/scil/scic_sds_remote_node_table.c optional isci 
dev/isci/scil/scic_sds_request.c optional isci dev/isci/scil/scic_sds_sgpio.c optional isci dev/isci/scil/scic_sds_smp_remote_device.c optional isci dev/isci/scil/scic_sds_smp_request.c optional isci dev/isci/scil/scic_sds_ssp_request.c optional isci dev/isci/scil/scic_sds_stp_packet_request.c optional isci dev/isci/scil/scic_sds_stp_remote_device.c optional isci dev/isci/scil/scic_sds_stp_request.c optional isci dev/isci/scil/scic_sds_unsolicited_frame_control.c optional isci dev/isci/scil/scif_sas_controller.c optional isci dev/isci/scil/scif_sas_controller_state_handlers.c optional isci dev/isci/scil/scif_sas_controller_states.c optional isci dev/isci/scil/scif_sas_domain.c optional isci dev/isci/scil/scif_sas_domain_state_handlers.c optional isci dev/isci/scil/scif_sas_domain_states.c optional isci dev/isci/scil/scif_sas_high_priority_request_queue.c optional isci dev/isci/scil/scif_sas_internal_io_request.c optional isci dev/isci/scil/scif_sas_io_request.c optional isci dev/isci/scil/scif_sas_io_request_state_handlers.c optional isci dev/isci/scil/scif_sas_io_request_states.c optional isci dev/isci/scil/scif_sas_library.c optional isci dev/isci/scil/scif_sas_remote_device.c optional isci dev/isci/scil/scif_sas_remote_device_ready_substate_handlers.c optional isci dev/isci/scil/scif_sas_remote_device_ready_substates.c optional isci dev/isci/scil/scif_sas_remote_device_starting_substate_handlers.c optional isci dev/isci/scil/scif_sas_remote_device_starting_substates.c optional isci dev/isci/scil/scif_sas_remote_device_state_handlers.c optional isci dev/isci/scil/scif_sas_remote_device_states.c optional isci dev/isci/scil/scif_sas_request.c optional isci dev/isci/scil/scif_sas_smp_activity_clear_affiliation.c optional isci dev/isci/scil/scif_sas_smp_io_request.c optional isci dev/isci/scil/scif_sas_smp_phy.c optional isci dev/isci/scil/scif_sas_smp_remote_device.c optional isci dev/isci/scil/scif_sas_stp_io_request.c optional isci 
dev/isci/scil/scif_sas_stp_remote_device.c optional isci dev/isci/scil/scif_sas_stp_task_request.c optional isci dev/isci/scil/scif_sas_task_request.c optional isci dev/isci/scil/scif_sas_task_request_state_handlers.c optional isci dev/isci/scil/scif_sas_task_request_states.c optional isci dev/isci/scil/scif_sas_timer.c optional isci dev/itwd/itwd.c optional itwd libkern/x86/crc32_sse42.c standard # # x86 shared code between IA32 and AMD64 architectures # x86/acpica/OsdEnvironment.c optional acpi x86/acpica/acpi_apm.c optional acpi x86/acpica/acpi_wakeup.c optional acpi x86/acpica/srat.c optional acpi x86/bios/smbios.c optional smbios x86/bios/vpd.c optional vpd x86/cpufreq/est.c optional cpufreq -x86/cpufreq/hwpstate.c optional cpufreq +x86/cpufreq/hwpstate_amd.c optional cpufreq +x86/cpufreq/hwpstate_intel.c optional cpufreq x86/cpufreq/p4tcc.c optional cpufreq x86/cpufreq/powernow.c optional cpufreq x86/iommu/busdma_dmar.c optional acpi acpi_dmar pci x86/iommu/intel_ctx.c optional acpi acpi_dmar pci x86/iommu/intel_drv.c optional acpi acpi_dmar pci x86/iommu/intel_fault.c optional acpi acpi_dmar pci x86/iommu/intel_gas.c optional acpi acpi_dmar pci x86/iommu/intel_idpgtbl.c optional acpi acpi_dmar pci x86/iommu/intel_intrmap.c optional acpi acpi_dmar pci x86/iommu/intel_qi.c optional acpi acpi_dmar pci x86/iommu/intel_quirks.c optional acpi acpi_dmar pci x86/iommu/intel_utils.c optional acpi acpi_dmar pci x86/isa/atrtc.c standard x86/isa/clock.c standard x86/isa/isa.c optional isa x86/isa/isa_dma.c optional isa x86/isa/nmi.c standard x86/isa/orm.c optional isa x86/pci/pci_bus.c optional pci x86/pci/qpi.c optional pci x86/x86/autoconf.c standard x86/x86/bus_machdep.c standard x86/x86/busdma_bounce.c standard x86/x86/busdma_machdep.c standard x86/x86/cpu_machdep.c standard x86/x86/dump_machdep.c standard x86/x86/fdt_machdep.c optional fdt x86/x86/identcpu.c standard x86/x86/intr_machdep.c standard x86/x86/legacy.c standard x86/x86/mca.c standard x86/x86/x86_mem.c 
optional mem x86/x86/mp_x86.c optional smp x86/x86/mp_watchdog.c optional mp_watchdog smp x86/x86/nexus.c standard x86/x86/pvclock.c standard x86/x86/stack_machdep.c optional ddb | stack x86/x86/tsc.c standard x86/x86/ucode.c standard x86/x86/delay.c standard x86/xen/hvm.c optional xenhvm x86/xen/xen_intr.c optional xenhvm x86/xen/xen_apic.c optional xenhvm x86/xen/xenpv.c optional xenhvm x86/xen/xen_msi.c optional xenhvm x86/xen/xen_nexus.c optional xenhvm Index: head/sys/kern/kern_cpu.c =================================================================== --- head/sys/kern/kern_cpu.c (revision 357001) +++ head/sys/kern/kern_cpu.c (revision 357002) @@ -1,1088 +1,1147 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2004-2007 Nate Lawson (SDG) * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "cpufreq_if.h" /* * Common CPU frequency glue code. Drivers for specific hardware can * attach this interface to allow users to get/set the CPU frequency. */ /* * Number of levels we can handle. Levels are synthesized from settings * so for M settings and N drivers, there may be M*N levels. */ #define CF_MAX_LEVELS 256 struct cf_saved_freq { struct cf_level level; int priority; SLIST_ENTRY(cf_saved_freq) link; }; struct cpufreq_softc { struct sx lock; struct cf_level curr_level; int curr_priority; SLIST_HEAD(, cf_saved_freq) saved_freq; struct cf_level_lst all_levels; int all_count; int max_mhz; device_t dev; + device_t cf_drv_dev; struct sysctl_ctx_list sysctl_ctx; struct task startup_task; struct cf_level *levels_buf; }; struct cf_setting_array { struct cf_setting sets[MAX_SETTINGS]; int count; TAILQ_ENTRY(cf_setting_array) link; }; TAILQ_HEAD(cf_setting_lst, cf_setting_array); #define CF_MTX_INIT(x) sx_init((x), "cpufreq lock") #define CF_MTX_LOCK(x) sx_xlock((x)) #define CF_MTX_UNLOCK(x) sx_xunlock((x)) #define CF_MTX_ASSERT(x) sx_assert((x), SX_XLOCKED) #define CF_DEBUG(msg...) 
do { \ if (cf_verbose) \ printf("cpufreq: " msg); \ } while (0) static int cpufreq_attach(device_t dev); static void cpufreq_startup_task(void *ctx, int pending); static int cpufreq_detach(device_t dev); static int cf_set_method(device_t dev, const struct cf_level *level, int priority); static int cf_get_method(device_t dev, struct cf_level *level); static int cf_levels_method(device_t dev, struct cf_level *levels, int *count); static int cpufreq_insert_abs(struct cpufreq_softc *sc, struct cf_setting *sets, int count); static int cpufreq_expand_set(struct cpufreq_softc *sc, struct cf_setting_array *set_arr); static struct cf_level *cpufreq_dup_set(struct cpufreq_softc *sc, struct cf_level *dup, struct cf_setting *set); static int cpufreq_curr_sysctl(SYSCTL_HANDLER_ARGS); static int cpufreq_levels_sysctl(SYSCTL_HANDLER_ARGS); static int cpufreq_settings_sysctl(SYSCTL_HANDLER_ARGS); static device_method_t cpufreq_methods[] = { DEVMETHOD(device_probe, bus_generic_probe), DEVMETHOD(device_attach, cpufreq_attach), DEVMETHOD(device_detach, cpufreq_detach), DEVMETHOD(cpufreq_set, cf_set_method), DEVMETHOD(cpufreq_get, cf_get_method), DEVMETHOD(cpufreq_levels, cf_levels_method), {0, 0} }; static driver_t cpufreq_driver = { "cpufreq", cpufreq_methods, sizeof(struct cpufreq_softc) }; static devclass_t cpufreq_dc; DRIVER_MODULE(cpufreq, cpu, cpufreq_driver, cpufreq_dc, 0, 0); static int cf_lowest_freq; static int cf_verbose; static SYSCTL_NODE(_debug, OID_AUTO, cpufreq, CTLFLAG_RD, NULL, "cpufreq debugging"); SYSCTL_INT(_debug_cpufreq, OID_AUTO, lowest, CTLFLAG_RWTUN, &cf_lowest_freq, 1, "Don't provide levels below this frequency."); SYSCTL_INT(_debug_cpufreq, OID_AUTO, verbose, CTLFLAG_RWTUN, &cf_verbose, 1, "Print verbose debugging messages"); +/* + * This is called as the result of a hardware specific frequency control driver + * calling cpufreq_register. It provides a general interface for system wide + * frequency controls and operates on a per cpu basis. 
+ */ static int cpufreq_attach(device_t dev) { struct cpufreq_softc *sc; struct pcpu *pc; device_t parent; uint64_t rate; - int numdevs; CF_DEBUG("initializing %s\n", device_get_nameunit(dev)); sc = device_get_softc(dev); parent = device_get_parent(dev); sc->dev = dev; sysctl_ctx_init(&sc->sysctl_ctx); TAILQ_INIT(&sc->all_levels); CF_MTX_INIT(&sc->lock); sc->curr_level.total_set.freq = CPUFREQ_VAL_UNKNOWN; SLIST_INIT(&sc->saved_freq); /* Try to get nominal CPU freq to use it as maximum later if needed */ sc->max_mhz = cpu_get_nominal_mhz(dev); /* If that fails, try to measure the current rate */ if (sc->max_mhz <= 0) { + CF_DEBUG("Unable to obtain nominal frequency.\n"); pc = cpu_get_pcpu(dev); if (cpu_est_clockrate(pc->pc_cpuid, &rate) == 0) sc->max_mhz = rate / 1000000; else sc->max_mhz = CPUFREQ_VAL_UNKNOWN; } - /* - * Only initialize one set of sysctls for all CPUs. In the future, - * if multiple CPUs can have different settings, we can move these - * sysctls to be under every CPU instead of just the first one. - */ - numdevs = devclass_get_count(cpufreq_dc); - if (numdevs > 1) - return (0); - CF_DEBUG("initializing one-time data for %s\n", device_get_nameunit(dev)); sc->levels_buf = malloc(CF_MAX_LEVELS * sizeof(*sc->levels_buf), M_DEVBUF, M_WAITOK); SYSCTL_ADD_PROC(&sc->sysctl_ctx, SYSCTL_CHILDREN(device_get_sysctl_tree(parent)), OID_AUTO, "freq", CTLTYPE_INT | CTLFLAG_RW, sc, 0, cpufreq_curr_sysctl, "I", "Current CPU frequency"); SYSCTL_ADD_PROC(&sc->sysctl_ctx, SYSCTL_CHILDREN(device_get_sysctl_tree(parent)), OID_AUTO, "freq_levels", CTLTYPE_STRING | CTLFLAG_RD, sc, 0, cpufreq_levels_sysctl, "A", "CPU frequency levels"); /* * Queue a one-shot broadcast that levels have changed. * It will run once the system has completed booting. */ TASK_INIT(&sc->startup_task, 0, cpufreq_startup_task, dev); taskqueue_enqueue(taskqueue_thread, &sc->startup_task); return (0); } /* Handle any work to be done for all drivers that attached during boot. 
*/ static void cpufreq_startup_task(void *ctx, int pending) { cpufreq_settings_changed((device_t)ctx); } static int cpufreq_detach(device_t dev) { struct cpufreq_softc *sc; struct cf_saved_freq *saved_freq; - int numdevs; CF_DEBUG("shutdown %s\n", device_get_nameunit(dev)); sc = device_get_softc(dev); sysctl_ctx_free(&sc->sysctl_ctx); while ((saved_freq = SLIST_FIRST(&sc->saved_freq)) != NULL) { SLIST_REMOVE_HEAD(&sc->saved_freq, link); free(saved_freq, M_TEMP); } - /* Only clean up these resources when the last device is detaching. */ - numdevs = devclass_get_count(cpufreq_dc); - if (numdevs == 1) { - CF_DEBUG("final shutdown for %s\n", device_get_nameunit(dev)); - free(sc->levels_buf, M_DEVBUF); - } + free(sc->levels_buf, M_DEVBUF); return (0); } static int cf_set_method(device_t dev, const struct cf_level *level, int priority) { struct cpufreq_softc *sc; const struct cf_setting *set; struct cf_saved_freq *saved_freq, *curr_freq; struct pcpu *pc; int error, i; u_char pri; sc = device_get_softc(dev); error = 0; set = NULL; saved_freq = NULL; /* We are going to change levels so notify the pre-change handler. */ EVENTHANDLER_INVOKE(cpufreq_pre_change, level, &error); if (error != 0) { EVENTHANDLER_INVOKE(cpufreq_post_change, level, error); return (error); } CF_MTX_LOCK(&sc->lock); #ifdef SMP #ifdef EARLY_AP_STARTUP MPASS(mp_ncpus == 1 || smp_started); #else /* * If still booting and secondary CPUs not started yet, don't allow * changing the frequency until they're online. This is because we * can't switch to them using sched_bind() and thus we'd only be * switching the main CPU. XXXTODO: Need to think more about how to * handle having different CPUs at different frequencies. */ if (mp_ncpus > 1 && !smp_started) { device_printf(dev, "rejecting change, SMP not started yet\n"); error = ENXIO; goto out; } #endif #endif /* SMP */ /* * If the requested level has a lower priority, don't allow * the new level right now. 
*/ if (priority < sc->curr_priority) { CF_DEBUG("ignoring, curr prio %d less than %d\n", priority, sc->curr_priority); error = EPERM; goto out; } /* * If the caller didn't specify a level and one is saved, prepare to * restore the saved level. If none has been saved, return an error. */ if (level == NULL) { saved_freq = SLIST_FIRST(&sc->saved_freq); if (saved_freq == NULL) { CF_DEBUG("NULL level, no saved level\n"); error = ENXIO; goto out; } level = &saved_freq->level; priority = saved_freq->priority; CF_DEBUG("restoring saved level, freq %d prio %d\n", level->total_set.freq, priority); } /* Reject levels that are below our specified threshold. */ if (level->total_set.freq < cf_lowest_freq) { CF_DEBUG("rejecting freq %d, less than %d limit\n", level->total_set.freq, cf_lowest_freq); error = EINVAL; goto out; } /* If already at this level, just return. */ if (sc->curr_level.total_set.freq == level->total_set.freq) { CF_DEBUG("skipping freq %d, same as current level %d\n", level->total_set.freq, sc->curr_level.total_set.freq); goto skip; } /* First, set the absolute frequency via its driver. */ set = &level->abs_set; if (set->dev) { if (!device_is_attached(set->dev)) { error = ENXIO; goto out; } /* Bind to the target CPU before switching. */ pc = cpu_get_pcpu(set->dev); thread_lock(curthread); pri = curthread->td_priority; sched_prio(curthread, PRI_MIN); sched_bind(curthread, pc->pc_cpuid); thread_unlock(curthread); CF_DEBUG("setting abs freq %d on %s (cpu %d)\n", set->freq, device_get_nameunit(set->dev), PCPU_GET(cpuid)); error = CPUFREQ_DRV_SET(set->dev, set); thread_lock(curthread); sched_unbind(curthread); sched_prio(curthread, pri); thread_unlock(curthread); if (error) { goto out; } } /* Next, set any/all relative frequencies via their drivers. */ for (i = 0; i < level->rel_count; i++) { set = &level->rel_set[i]; if (!device_is_attached(set->dev)) { error = ENXIO; goto out; } /* Bind to the target CPU before switching. 
*/ pc = cpu_get_pcpu(set->dev); thread_lock(curthread); pri = curthread->td_priority; sched_prio(curthread, PRI_MIN); sched_bind(curthread, pc->pc_cpuid); thread_unlock(curthread); CF_DEBUG("setting rel freq %d on %s (cpu %d)\n", set->freq, device_get_nameunit(set->dev), PCPU_GET(cpuid)); error = CPUFREQ_DRV_SET(set->dev, set); thread_lock(curthread); sched_unbind(curthread); sched_prio(curthread, pri); thread_unlock(curthread); if (error) { /* XXX Back out any successful setting? */ goto out; } } skip: /* * Before recording the current level, check if we're going to a * higher priority. If so, save the previous level and priority. */ if (sc->curr_level.total_set.freq != CPUFREQ_VAL_UNKNOWN && priority > sc->curr_priority) { CF_DEBUG("saving level, freq %d prio %d\n", sc->curr_level.total_set.freq, sc->curr_priority); curr_freq = malloc(sizeof(*curr_freq), M_TEMP, M_NOWAIT); if (curr_freq == NULL) { error = ENOMEM; goto out; } curr_freq->level = sc->curr_level; curr_freq->priority = sc->curr_priority; SLIST_INSERT_HEAD(&sc->saved_freq, curr_freq, link); } sc->curr_level = *level; sc->curr_priority = priority; /* If we were restoring a saved state, reset it to "unused". */ if (saved_freq != NULL) { CF_DEBUG("resetting saved level\n"); sc->curr_level.total_set.freq = CPUFREQ_VAL_UNKNOWN; SLIST_REMOVE_HEAD(&sc->saved_freq, link); free(saved_freq, M_TEMP); } out: CF_MTX_UNLOCK(&sc->lock); /* * We changed levels (or attempted to) so notify the post-change * handler of new frequency or error. 
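 */

The priority save/restore behavior implemented above (a higher-priority request saves the current level onto sc->saved_freq; a NULL level restores it) can be modeled in a short userland sketch. This is an illustration only: `set_level`, the fixed-depth arrays, and the `freq < 0` convention for "restore" are invented stand-ins, and locking, driver calls, and `struct cf_level` are omitted.

```c
#include <assert.h>

/*
 * Userland model of cf_set_method()'s priority handling.  A request at
 * a higher priority saves the current level; freq < 0 stands in for
 * the NULL-level "restore previous" call.  Simplified sketch only.
 */
#define PRIO_KERN	1000
#define PRIO_USER	100

static int cur_freq = -1, cur_prio = 0;
static int saved_freq[8], saved_prio[8], saved_cnt;

static int
set_level(int freq, int prio)
{
	if (freq < 0) {				/* restore saved level */
		if (saved_cnt == 0)
			return (-1);		/* nothing saved: ENXIO */
		saved_cnt--;
		cur_freq = saved_freq[saved_cnt];
		cur_prio = saved_prio[saved_cnt];
		return (0);
	}
	if (prio < cur_prio)
		return (-1);			/* lower priority: EPERM */
	if (cur_freq >= 0 && prio > cur_prio) {
		/* Going to a higher priority: save the old level. */
		saved_freq[saved_cnt] = cur_freq;
		saved_prio[saved_cnt] = cur_prio;
		saved_cnt++;
	}
	cur_freq = freq;
	cur_prio = prio;
	return (0);
}
```

A thermal override at PRIO_KERN saves the user's level and rejects further PRIO_USER requests; a later restore brings the user's level back, mirroring the sc->saved_freq list.

/*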
 */
	EVENTHANDLER_INVOKE(cpufreq_post_change, level, error);

	if (error && set)
		device_printf(set->dev, "set freq failed, err %d\n", error);

	return (error);
}

static int
+cpufreq_get_frequency(device_t dev)
+{
+	struct cf_setting set;
+
+	if (CPUFREQ_DRV_GET(dev, &set) != 0)
+		return (-1);
+
+	return (set.freq);
+}
+
+/* Returns the index into *levels with the match, or -1 if none. */
+static int
+cpufreq_get_level(device_t dev, struct cf_level *levels, int count)
+{
+	int i, freq;
+
+	if ((freq = cpufreq_get_frequency(dev)) < 0)
+		return (-1);
+	for (i = 0; i < count; i++)
+		if (freq == levels[i].total_set.freq)
+			return (i);
+
+	return (-1);
+}
+
+/*
+ * Used by the cpufreq core.  Populates *level with the current frequency,
+ * normally taken from the cached value in sc->curr_level; if the lower-level
+ * driver has set the CPUFREQ_FLAG_UNCACHED flag, the frequency is instead
+ * obtained from the driver itself.
+ */
+static int
cf_get_method(device_t dev, struct cf_level *level)
{
	struct cpufreq_softc *sc;
	struct cf_level *levels;
-	struct cf_setting *curr_set, set;
+	struct cf_setting *curr_set;
	struct pcpu *pc;
-	device_t *devs;
-	int bdiff, count, diff, error, i, n, numdevs;
+	int bdiff, count, diff, error, i, type;
	uint64_t rate;

	sc = device_get_softc(dev);
	error = 0;
	levels = NULL;

-	/* If we already know the current frequency, we're done. */
+	/*
+	 * If we already know the current frequency, and the driver didn't ask
+	 * for uncached usage, we're done.
+	 */
	CF_MTX_LOCK(&sc->lock);
	curr_set = &sc->curr_level.total_set;
-	if (curr_set->freq != CPUFREQ_VAL_UNKNOWN) {
+	error = CPUFREQ_DRV_TYPE(sc->cf_drv_dev, &type);
+	if (error == 0 && (type & CPUFREQ_FLAG_UNCACHED)) {
+		struct cf_setting set;
+
+		/*
+		 * If the driver wants to always report back the real frequency,
+		 * first try the driver and if that fails, fall back to
+		 * estimating.
+ */ + if (CPUFREQ_DRV_GET(sc->cf_drv_dev, &set) != 0) + goto estimate; + sc->curr_level.total_set = set; + CF_DEBUG("get returning immediate freq %d\n", curr_set->freq); + goto out; + } else if (curr_set->freq != CPUFREQ_VAL_UNKNOWN) { CF_DEBUG("get returning known freq %d\n", curr_set->freq); + error = 0; goto out; } CF_MTX_UNLOCK(&sc->lock); /* * We need to figure out the current level. Loop through every * driver, getting the current setting. Then, attempt to get a best * match of settings against each level. */ count = CF_MAX_LEVELS; levels = malloc(count * sizeof(*levels), M_TEMP, M_NOWAIT); if (levels == NULL) return (ENOMEM); error = CPUFREQ_LEVELS(sc->dev, levels, &count); if (error) { if (error == E2BIG) printf("cpufreq: need to increase CF_MAX_LEVELS\n"); free(levels, M_TEMP); return (error); } - error = device_get_children(device_get_parent(dev), &devs, &numdevs); - if (error) { - free(levels, M_TEMP); - return (error); - } /* * Reacquire the lock and search for the given level. * * XXX Note: this is not quite right since we really need to go * through each level and compare both absolute and relative * settings for each driver in the system before making a match. * The estimation code below catches this case though. 
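 */

The estimation fallback used further below (cpu_est_clockrate() plus the nearest-match loop seeded with `bdiff = 1 << 30`) can be sketched in isolation. `nearest_level` is an invented name and plain ints stand in for `struct cf_level`:

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Sketch of cf_get_method()'s estimation fallback: choose the level
 * whose frequency is closest to the measured clock rate, in the same
 * way as the kernel's "bdiff" loop.
 */
static int
nearest_level(const int *freqs, int count, int rate)
{
	int best, bdiff, diff, i;

	best = -1;
	bdiff = 1 << 30;
	for (i = 0; i < count; i++) {
		diff = abs(freqs[i] - rate);
		if (diff < bdiff) {
			bdiff = diff;
			best = i;
		}
	}
	return (best);
}
```

A measured rate of 1950 MHz against levels 2400/1800/1200/800 picks the 1800 MHz level, which is why a slightly-off estimate still maps to a sane level.

/*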
*/ CF_MTX_LOCK(&sc->lock); - for (n = 0; n < numdevs && curr_set->freq == CPUFREQ_VAL_UNKNOWN; n++) { - if (!device_is_attached(devs[n])) - continue; - if (CPUFREQ_DRV_GET(devs[n], &set) != 0) - continue; - for (i = 0; i < count; i++) { - if (set.freq == levels[i].total_set.freq) { - sc->curr_level = levels[i]; - break; - } - } - } - free(devs, M_TEMP); + i = cpufreq_get_level(sc->cf_drv_dev, levels, count); + if (i >= 0) + sc->curr_level = levels[i]; + else + CF_DEBUG("Couldn't find supported level for %s\n", + device_get_nameunit(sc->cf_drv_dev)); + if (curr_set->freq != CPUFREQ_VAL_UNKNOWN) { CF_DEBUG("get matched freq %d from drivers\n", curr_set->freq); goto out; } +estimate: + CF_MTX_ASSERT(&sc->lock); + /* * We couldn't find an exact match, so attempt to estimate and then * match against a level. */ pc = cpu_get_pcpu(dev); if (pc == NULL) { error = ENXIO; goto out; } cpu_est_clockrate(pc->pc_cpuid, &rate); rate /= 1000000; bdiff = 1 << 30; for (i = 0; i < count; i++) { diff = abs(levels[i].total_set.freq - rate); if (diff < bdiff) { bdiff = diff; sc->curr_level = levels[i]; } } CF_DEBUG("get estimated freq %d\n", curr_set->freq); out: if (error == 0) *level = sc->curr_level; CF_MTX_UNLOCK(&sc->lock); if (levels) free(levels, M_TEMP); return (error); } +/* + * Either directly obtain settings from the cpufreq driver, or build a list of + * relative settings to be integrated later against an absolute max. + */ static int +cpufreq_add_levels(device_t cf_dev, struct cf_setting_lst *rel_sets) +{ + struct cf_setting_array *set_arr; + struct cf_setting *sets; + device_t dev; + struct cpufreq_softc *sc; + int type, set_count, error; + + sc = device_get_softc(cf_dev); + dev = sc->cf_drv_dev; + + /* Skip devices that aren't ready. */ + if (!device_is_attached(cf_dev)) + return (0); + + /* + * Get settings, skipping drivers that offer no settings or + * provide settings for informational purposes only. 
+ */ + error = CPUFREQ_DRV_TYPE(dev, &type); + if (error != 0 || (type & CPUFREQ_FLAG_INFO_ONLY)) { + if (error == 0) { + CF_DEBUG("skipping info-only driver %s\n", + device_get_nameunit(cf_dev)); + } + return (error); + } + + sets = malloc(MAX_SETTINGS * sizeof(*sets), M_TEMP, M_NOWAIT); + if (sets == NULL) + return (ENOMEM); + + set_count = MAX_SETTINGS; + error = CPUFREQ_DRV_SETTINGS(dev, sets, &set_count); + if (error != 0 || set_count == 0) + goto out; + + /* Add the settings to our absolute/relative lists. */ + switch (type & CPUFREQ_TYPE_MASK) { + case CPUFREQ_TYPE_ABSOLUTE: + error = cpufreq_insert_abs(sc, sets, set_count); + break; + case CPUFREQ_TYPE_RELATIVE: + CF_DEBUG("adding %d relative settings\n", set_count); + set_arr = malloc(sizeof(*set_arr), M_TEMP, M_NOWAIT); + if (set_arr == NULL) { + error = ENOMEM; + goto out; + } + bcopy(sets, set_arr->sets, set_count * sizeof(*sets)); + set_arr->count = set_count; + TAILQ_INSERT_TAIL(rel_sets, set_arr, link); + break; + default: + error = EINVAL; + } + +out: + free(sets, M_TEMP); + return (error); +} + +static int cf_levels_method(device_t dev, struct cf_level *levels, int *count) { struct cf_setting_array *set_arr; struct cf_setting_lst rel_sets; struct cpufreq_softc *sc; struct cf_level *lev; - struct cf_setting *sets; struct pcpu *pc; - device_t *devs; - int error, i, numdevs, set_count, type; + int error, i; uint64_t rate; if (levels == NULL || count == NULL) return (EINVAL); TAILQ_INIT(&rel_sets); sc = device_get_softc(dev); - error = device_get_children(device_get_parent(dev), &devs, &numdevs); - if (error) - return (error); - sets = malloc(MAX_SETTINGS * sizeof(*sets), M_TEMP, M_NOWAIT); - if (sets == NULL) { - free(devs, M_TEMP); - return (ENOMEM); - } - /* Get settings from all cpufreq drivers. */ CF_MTX_LOCK(&sc->lock); - for (i = 0; i < numdevs; i++) { - /* Skip devices that aren't ready. 
*/ - if (!device_is_attached(devs[i])) - continue; + error = cpufreq_add_levels(sc->dev, &rel_sets); + if (error) + goto out; - /* - * Get settings, skipping drivers that offer no settings or - * provide settings for informational purposes only. - */ - error = CPUFREQ_DRV_TYPE(devs[i], &type); - if (error || (type & CPUFREQ_FLAG_INFO_ONLY)) { - if (error == 0) { - CF_DEBUG("skipping info-only driver %s\n", - device_get_nameunit(devs[i])); - } - continue; - } - set_count = MAX_SETTINGS; - error = CPUFREQ_DRV_SETTINGS(devs[i], sets, &set_count); - if (error || set_count == 0) - continue; - - /* Add the settings to our absolute/relative lists. */ - switch (type & CPUFREQ_TYPE_MASK) { - case CPUFREQ_TYPE_ABSOLUTE: - error = cpufreq_insert_abs(sc, sets, set_count); - break; - case CPUFREQ_TYPE_RELATIVE: - CF_DEBUG("adding %d relative settings\n", set_count); - set_arr = malloc(sizeof(*set_arr), M_TEMP, M_NOWAIT); - if (set_arr == NULL) { - error = ENOMEM; - goto out; - } - bcopy(sets, set_arr->sets, set_count * sizeof(*sets)); - set_arr->count = set_count; - TAILQ_INSERT_TAIL(&rel_sets, set_arr, link); - break; - default: - error = EINVAL; - } - if (error) - goto out; - } - /* * If there are no absolute levels, create a fake one at 100%. We * then cache the clockrate for later use as our base frequency. */ if (TAILQ_EMPTY(&sc->all_levels)) { + struct cf_setting set; + + CF_DEBUG("No absolute levels returned by driver\n"); + if (sc->max_mhz == CPUFREQ_VAL_UNKNOWN) { sc->max_mhz = cpu_get_nominal_mhz(dev); /* * If the CPU can't report a rate for 100%, hope * the CPU is running at its nominal rate right now, * and use that instead. 
*/ if (sc->max_mhz <= 0) { pc = cpu_get_pcpu(dev); cpu_est_clockrate(pc->pc_cpuid, &rate); sc->max_mhz = rate / 1000000; } } - memset(&sets[0], CPUFREQ_VAL_UNKNOWN, sizeof(*sets)); - sets[0].freq = sc->max_mhz; - sets[0].dev = NULL; - error = cpufreq_insert_abs(sc, sets, 1); + memset(&set, CPUFREQ_VAL_UNKNOWN, sizeof(set)); + set.freq = sc->max_mhz; + set.dev = NULL; + error = cpufreq_insert_abs(sc, &set, 1); if (error) goto out; } /* Create a combined list of absolute + relative levels. */ TAILQ_FOREACH(set_arr, &rel_sets, link) cpufreq_expand_set(sc, set_arr); /* If the caller doesn't have enough space, return the actual count. */ if (sc->all_count > *count) { *count = sc->all_count; error = E2BIG; goto out; } /* Finally, output the list of levels. */ i = 0; TAILQ_FOREACH(lev, &sc->all_levels, link) { /* Skip levels that have a frequency that is too low. */ if (lev->total_set.freq < cf_lowest_freq) { sc->all_count--; continue; } levels[i] = *lev; i++; } *count = sc->all_count; error = 0; out: /* Clear all levels since we regenerate them each time. */ while ((lev = TAILQ_FIRST(&sc->all_levels)) != NULL) { TAILQ_REMOVE(&sc->all_levels, lev, link); free(lev, M_TEMP); } sc->all_count = 0; CF_MTX_UNLOCK(&sc->lock); while ((set_arr = TAILQ_FIRST(&rel_sets)) != NULL) { TAILQ_REMOVE(&rel_sets, set_arr, link); free(set_arr, M_TEMP); } - free(devs, M_TEMP); - free(sets, M_TEMP); return (error); } /* * Create levels for an array of absolute settings and insert them in * sorted order in the specified list. 
*/ static int cpufreq_insert_abs(struct cpufreq_softc *sc, struct cf_setting *sets, int count) { struct cf_level_lst *list; struct cf_level *level, *search; int i, inserted; CF_MTX_ASSERT(&sc->lock); list = &sc->all_levels; for (i = 0; i < count; i++) { level = malloc(sizeof(*level), M_TEMP, M_NOWAIT | M_ZERO); if (level == NULL) return (ENOMEM); level->abs_set = sets[i]; level->total_set = sets[i]; level->total_set.dev = NULL; sc->all_count++; inserted = 0; if (TAILQ_EMPTY(list)) { CF_DEBUG("adding abs setting %d at head\n", sets[i].freq); TAILQ_INSERT_HEAD(list, level, link); continue; } TAILQ_FOREACH_REVERSE(search, list, cf_level_lst, link) if (sets[i].freq <= search->total_set.freq) { CF_DEBUG("adding abs setting %d after %d\n", sets[i].freq, search->total_set.freq); TAILQ_INSERT_AFTER(list, search, level, link); inserted = 1; break; } if (inserted == 0) { TAILQ_FOREACH(search, list, link) if (sets[i].freq >= search->total_set.freq) { CF_DEBUG("adding abs setting %d before %d\n", sets[i].freq, search->total_set.freq); TAILQ_INSERT_BEFORE(search, level, link); break; } } } return (0); } /* * Expand a group of relative settings, creating derived levels from them. */ static int cpufreq_expand_set(struct cpufreq_softc *sc, struct cf_setting_array *set_arr) { struct cf_level *fill, *search; struct cf_setting *set; int i; CF_MTX_ASSERT(&sc->lock); /* * Walk the set of all existing levels in reverse. This is so we * create derived states from the lowest absolute settings first * and discard duplicates created from higher absolute settings. * For instance, a level of 50 Mhz derived from 100 Mhz + 50% is * preferable to 200 Mhz + 25% because absolute settings are more * efficient since they often change the voltage as well. */ TAILQ_FOREACH_REVERSE(search, &sc->all_levels, cf_level_lst, link) { /* Add each setting to the level, duplicating if necessary. 
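 */

Relative settings store their value in 100ths of a percent (10000 == 100%), so the scaling that cpufreq_dup_set() applies can be shown in a couple of lines. `derived_freq` is an invented helper name; the expression matches the kernel's `((uint64_t)freq * set->freq) / 10000`:

```c
#include <assert.h>
#include <stdint.h>

/*
 * The scaling cpufreq_dup_set() applies: a relative setting of 5000
 * (50%) halves the absolute frequency.  Relative values are in 100ths
 * of a percent, so 10000 means 100%.
 */
static int
derived_freq(int abs_freq, int rel)
{
	return ((int)(((uint64_t)abs_freq * rel) / 10000));
}
```

Note that 100 MHz at 50% and 200 MHz at 25% both yield 50 MHz; that is exactly why the walk above processes the lowest absolute levels first and discards the later, less efficient duplicate.

/*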
*/ for (i = 0; i < set_arr->count; i++) { set = &set_arr->sets[i]; /* * If this setting is less than 100%, split the level * into two and add this setting to the new level. */ fill = search; if (set->freq < 10000) { fill = cpufreq_dup_set(sc, search, set); /* * The new level was a duplicate of an existing * level or its absolute setting is too high * so we freed it. For example, we discard a * derived level of 1000 MHz/25% if a level * of 500 MHz/100% already exists. */ if (fill == NULL) break; } /* Add this setting to the existing or new level. */ KASSERT(fill->rel_count < MAX_SETTINGS, ("cpufreq: too many relative drivers (%d)", MAX_SETTINGS)); fill->rel_set[fill->rel_count] = *set; fill->rel_count++; CF_DEBUG( "expand set added rel setting %d%% to %d level\n", set->freq / 100, fill->total_set.freq); } } return (0); } static struct cf_level * cpufreq_dup_set(struct cpufreq_softc *sc, struct cf_level *dup, struct cf_setting *set) { struct cf_level_lst *list; struct cf_level *fill, *itr; struct cf_setting *fill_set, *itr_set; int i; CF_MTX_ASSERT(&sc->lock); /* * Create a new level, copy it from the old one, and update the * total frequency and power by the percentage specified in the * relative setting. */ fill = malloc(sizeof(*fill), M_TEMP, M_NOWAIT); if (fill == NULL) return (NULL); *fill = *dup; fill_set = &fill->total_set; fill_set->freq = ((uint64_t)fill_set->freq * set->freq) / 10000; if (fill_set->power != CPUFREQ_VAL_UNKNOWN) { fill_set->power = ((uint64_t)fill_set->power * set->freq) / 10000; } if (set->lat != CPUFREQ_VAL_UNKNOWN) { if (fill_set->lat != CPUFREQ_VAL_UNKNOWN) fill_set->lat += set->lat; else fill_set->lat = set->lat; } CF_DEBUG("dup set considering derived setting %d\n", fill_set->freq); /* * If we copied an old level that we already modified (say, at 100%), * we need to remove that setting before adding this one. Since we * process each setting array in order, we know any settings for this * driver will be found at the end. 
 */
	for (i = fill->rel_count; i != 0; i--) {
		if (fill->rel_set[i - 1].dev != set->dev)
			break;
		CF_DEBUG("removed last relative driver: %s\n",
		    device_get_nameunit(set->dev));
		fill->rel_count--;
	}

	/*
	 * Insert the new level in sorted order.  If it is a duplicate of an
	 * existing level (1) or has an absolute setting higher than the
	 * existing level (2), do not add it.  We can do this since any such
	 * level is guaranteed to use less power.  For example (1), a level
	 * with one absolute setting of 800 MHz uses less power than one
	 * composed of an absolute setting of 1600 MHz and a relative setting
	 * at 50%.  Also for example (2), a level of 800 MHz/75% is preferable
	 * to 1600 MHz/25% even though the latter has a lower total frequency.
	 */
	list = &sc->all_levels;
	KASSERT(!TAILQ_EMPTY(list), ("all levels list empty in dup set"));
	TAILQ_FOREACH_REVERSE(itr, list, cf_level_lst, link) {
		itr_set = &itr->total_set;
		if (CPUFREQ_CMP(fill_set->freq, itr_set->freq)) {
			CF_DEBUG("dup set rejecting %d (dupe)\n",
			    fill_set->freq);
			itr = NULL;
			break;
		} else if (fill_set->freq < itr_set->freq) {
			if (fill->abs_set.freq <= itr->abs_set.freq) {
				CF_DEBUG(
			"dup done, inserting new level %d after %d\n",
				    fill_set->freq, itr_set->freq);
				TAILQ_INSERT_AFTER(list, itr, fill, link);
				sc->all_count++;
			} else {
				CF_DEBUG("dup set rejecting %d (abs too big)\n",
				    fill_set->freq);
				itr = NULL;
			}
			break;
		}
	}

	/* We didn't find a good place for this new level so free it.
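 */

The two rejection rules described above can be condensed into a predicate. `reject_level` and its flattened arguments are invented for illustration; CPUFREQ_CMP is the tolerance comparison from sys/cpu.h, and the comparison is against one existing level (total and absolute frequency in MHz):

```c
#include <assert.h>
#include <stdlib.h>

/* Tolerance comparison, as defined in sys/cpu.h. */
#define CPUFREQ_CMP(x, y)	(abs((x) - (y)) < 25)

/*
 * Sketch of cpufreq_dup_set()'s rejection rules: (1) duplicate total
 * frequencies are dropped; (2) a lower total frequency is dropped when
 * it needs a higher absolute setting, since that uses more power.
 */
static int
reject_level(int new_total, int new_abs, int old_total, int old_abs)
{
	if (CPUFREQ_CMP(new_total, old_total))
		return (1);		/* rule 1: duplicate */
	if (new_total < old_total && new_abs > old_abs)
		return (1);		/* rule 2: absolute too high */
	return (0);
}
```

So 1600 MHz/50% (total 800) is rejected against an existing 800 MHz/100%, and 1600 MHz/25% (total 400) is rejected against 800 MHz/75% (total 600), matching the examples in the comment.

/*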
*/ if (itr == NULL) { CF_DEBUG("dup set freeing new level %d (not optimal)\n", fill_set->freq); free(fill, M_TEMP); fill = NULL; } return (fill); } static int cpufreq_curr_sysctl(SYSCTL_HANDLER_ARGS) { struct cpufreq_softc *sc; struct cf_level *levels; int best, count, diff, bdiff, devcount, error, freq, i, n; device_t *devs; devs = NULL; sc = oidp->oid_arg1; levels = sc->levels_buf; error = CPUFREQ_GET(sc->dev, &levels[0]); if (error) goto out; freq = levels[0].total_set.freq; error = sysctl_handle_int(oidp, &freq, 0, req); if (error != 0 || req->newptr == NULL) goto out; /* * While we only call cpufreq_get() on one device (assuming all * CPUs have equal levels), we call cpufreq_set() on all CPUs. * This is needed for some MP systems. */ error = devclass_get_devices(cpufreq_dc, &devs, &devcount); if (error) goto out; for (n = 0; n < devcount; n++) { count = CF_MAX_LEVELS; error = CPUFREQ_LEVELS(devs[n], levels, &count); if (error) { if (error == E2BIG) printf( "cpufreq: need to increase CF_MAX_LEVELS\n"); break; } best = 0; bdiff = 1 << 30; for (i = 0; i < count; i++) { diff = abs(levels[i].total_set.freq - freq); if (diff < bdiff) { bdiff = diff; best = i; } } error = CPUFREQ_SET(devs[n], &levels[best], CPUFREQ_PRIO_USER); } out: if (devs) free(devs, M_TEMP); return (error); } static int cpufreq_levels_sysctl(SYSCTL_HANDLER_ARGS) { struct cpufreq_softc *sc; struct cf_level *levels; struct cf_setting *set; struct sbuf sb; int count, error, i; sc = oidp->oid_arg1; sbuf_new(&sb, NULL, 128, SBUF_AUTOEXTEND); /* Get settings from the device and generate the output string. 
*/ count = CF_MAX_LEVELS; levels = sc->levels_buf; if (levels == NULL) { sbuf_delete(&sb); return (ENOMEM); } error = CPUFREQ_LEVELS(sc->dev, levels, &count); if (error) { if (error == E2BIG) printf("cpufreq: need to increase CF_MAX_LEVELS\n"); goto out; } if (count) { for (i = 0; i < count; i++) { set = &levels[i].total_set; sbuf_printf(&sb, "%d/%d ", set->freq, set->power); } } else sbuf_cpy(&sb, "0"); sbuf_trim(&sb); sbuf_finish(&sb); error = sysctl_handle_string(oidp, sbuf_data(&sb), sbuf_len(&sb), req); out: sbuf_delete(&sb); return (error); } static int cpufreq_settings_sysctl(SYSCTL_HANDLER_ARGS) { device_t dev; struct cf_setting *sets; struct sbuf sb; int error, i, set_count; dev = oidp->oid_arg1; sbuf_new(&sb, NULL, 128, SBUF_AUTOEXTEND); /* Get settings from the device and generate the output string. */ set_count = MAX_SETTINGS; sets = malloc(set_count * sizeof(*sets), M_TEMP, M_NOWAIT); if (sets == NULL) { sbuf_delete(&sb); return (ENOMEM); } error = CPUFREQ_DRV_SETTINGS(dev, sets, &set_count); if (error) goto out; if (set_count) { for (i = 0; i < set_count; i++) sbuf_printf(&sb, "%d/%d ", sets[i].freq, sets[i].power); } else sbuf_cpy(&sb, "0"); sbuf_trim(&sb); sbuf_finish(&sb); error = sysctl_handle_string(oidp, sbuf_data(&sb), sbuf_len(&sb), req); out: free(sets, M_TEMP); sbuf_delete(&sb); return (error); } +static void +cpufreq_add_freq_driver_sysctl(device_t cf_dev) +{ + struct cpufreq_softc *sc; + + sc = device_get_softc(cf_dev); + SYSCTL_ADD_CONST_STRING(&sc->sysctl_ctx, + SYSCTL_CHILDREN(device_get_sysctl_tree(cf_dev)), OID_AUTO, + "freq_driver", CTLFLAG_RD, device_get_nameunit(sc->cf_drv_dev), + "cpufreq driver used by this cpu"); +} + int cpufreq_register(device_t dev) { struct cpufreq_softc *sc; device_t cf_dev, cpu_dev; + int error; /* Add a sysctl to get each driver's settings separately. 
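 */

The string that cpufreq_levels_sysctl() builds with sbuf(9) above is a space-separated list of freq/power pairs (this is what `sysctl dev.cpu.0.freq_levels` prints). A userland sketch can reproduce the format with snprintf() into a fixed buffer; `format_levels` is an invented name, and -1 is CPUFREQ_VAL_UNKNOWN for an unknown power value:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch of the "freq/power freq/power ..." string exported via the
 * freq_levels sysctl; the trailing space is trimmed as sbuf_trim()
 * does.  Assumes the buffer is large enough for the whole list.
 */
static void
format_levels(const int (*lv)[2], int count, char *buf, size_t len)
{
	size_t off;
	int i;

	if (count == 0) {
		snprintf(buf, len, "0");
		return;
	}
	off = 0;
	for (i = 0; i < count; i++)
		off += snprintf(buf + off, len - off, "%d/%d ",
		    lv[i][0], lv[i][1]);
	if (off > 0 && off < len)
		buf[off - 1] = '\0';	/* trim trailing space */
}
```

/*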
*/ SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev), SYSCTL_CHILDREN(device_get_sysctl_tree(dev)), OID_AUTO, "freq_settings", CTLTYPE_STRING | CTLFLAG_RD, dev, 0, cpufreq_settings_sysctl, "A", "CPU frequency driver settings"); /* * Add only one cpufreq device to each CPU. Currently, all CPUs * must offer the same levels and be switched at the same time. */ cpu_dev = device_get_parent(dev); if ((cf_dev = device_find_child(cpu_dev, "cpufreq", -1))) { sc = device_get_softc(cf_dev); sc->max_mhz = CPUFREQ_VAL_UNKNOWN; + MPASS(sc->cf_drv_dev != NULL); return (0); } /* Add the child device and possibly sysctls. */ cf_dev = BUS_ADD_CHILD(cpu_dev, 0, "cpufreq", -1); if (cf_dev == NULL) return (ENOMEM); device_quiet(cf_dev); - return (device_probe_and_attach(cf_dev)); + error = device_probe_and_attach(cf_dev); + if (error) + return (error); + + sc = device_get_softc(cf_dev); + sc->cf_drv_dev = dev; + cpufreq_add_freq_driver_sysctl(cf_dev); + return (error); } int cpufreq_unregister(device_t dev) { - device_t cf_dev, *devs; - int cfcount, devcount, error, i, type; + device_t cf_dev; + struct cpufreq_softc *sc; /* * If this is the last cpufreq child device, remove the control * device as well. We identify cpufreq children by calling a method * they support. 
*/ - error = device_get_children(device_get_parent(dev), &devs, &devcount); - if (error) - return (error); cf_dev = device_find_child(device_get_parent(dev), "cpufreq", -1); if (cf_dev == NULL) { device_printf(dev, "warning: cpufreq_unregister called with no cpufreq device active\n"); - free(devs, M_TEMP); return (0); } - cfcount = 0; - for (i = 0; i < devcount; i++) { - if (!device_is_attached(devs[i])) - continue; - if (CPUFREQ_DRV_TYPE(devs[i], &type) == 0) - cfcount++; - } - if (cfcount <= 1) - device_delete_child(device_get_parent(cf_dev), cf_dev); - free(devs, M_TEMP); + sc = device_get_softc(cf_dev); + MPASS(sc->cf_drv_dev == dev); + device_delete_child(device_get_parent(cf_dev), cf_dev); return (0); } int cpufreq_settings_changed(device_t dev) { EVENTHANDLER_INVOKE(cpufreq_levels_changed, device_get_unit(device_get_parent(dev))); return (0); } Index: head/sys/modules/cpufreq/Makefile =================================================================== --- head/sys/modules/cpufreq/Makefile (revision 357001) +++ head/sys/modules/cpufreq/Makefile (revision 357002) @@ -1,26 +1,26 @@ # $FreeBSD$ .PATH: ${SRCTOP}/sys/dev/cpufreq \ ${SRCTOP}/sys/${MACHINE_CPUARCH}/cpufreq KMOD= cpufreq SRCS= ichss.c SRCS+= bus_if.h cpufreq_if.h device_if.h pci_if.h .if ${MACHINE} == "i386" || ${MACHINE} == "amd64" .PATH: ${SRCTOP}/sys/x86/cpufreq SRCS+= acpi_if.h opt_acpi.h -SRCS+= est.c hwpstate.c p4tcc.c powernow.c +SRCS+= est.c hwpstate_amd.c p4tcc.c powernow.c hwpstate_intel.c .endif .if ${MACHINE} == "i386" SRCS+= smist.c .endif .if ${MACHINE} == "powerpc" .PATH: ${SRCTOP}/sys/powerpc/cpufreq SRCS+= dfs.c .endif .include Index: head/sys/sys/cpu.h =================================================================== --- head/sys/sys/cpu.h (revision 357001) +++ head/sys/sys/cpu.h (revision 357002) @@ -1,191 +1,196 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2005-2007 Nate Lawson (SDG) * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * $FreeBSD$ */ #ifndef _SYS_CPU_H_ #define _SYS_CPU_H_ #include /* * CPU device support. 
*/ #define CPU_IVAR_PCPU 1 #define CPU_IVAR_NOMINAL_MHZ 2 #define CPU_IVAR_CPUID_SIZE 3 #define CPU_IVAR_CPUID 4 static __inline struct pcpu *cpu_get_pcpu(device_t dev) { uintptr_t v = 0; BUS_READ_IVAR(device_get_parent(dev), dev, CPU_IVAR_PCPU, &v); return ((struct pcpu *)v); } static __inline int32_t cpu_get_nominal_mhz(device_t dev) { uintptr_t v = 0; if (BUS_READ_IVAR(device_get_parent(dev), dev, CPU_IVAR_NOMINAL_MHZ, &v) != 0) return (-1); return ((int32_t)v); } static __inline const uint32_t *cpu_get_cpuid(device_t dev, size_t *count) { uintptr_t v = 0; if (BUS_READ_IVAR(device_get_parent(dev), dev, CPU_IVAR_CPUID_SIZE, &v) != 0) return (NULL); *count = (size_t)v; if (BUS_READ_IVAR(device_get_parent(dev), dev, CPU_IVAR_CPUID, &v) != 0) return (NULL); return ((const uint32_t *)v); } /* * CPU frequency control interface. */ /* Each driver's CPU frequency setting is exported in this format. */ struct cf_setting { int freq; /* CPU clock in Mhz or 100ths of a percent. */ int volts; /* Voltage in mV. */ int power; /* Power consumed in mW. */ int lat; /* Transition latency in us. */ device_t dev; /* Driver providing this setting. */ int spec[4];/* Driver-specific storage for non-standard info. */ }; /* Maximum number of settings a given driver can have. */ #define MAX_SETTINGS 256 /* A combination of settings is a level. */ struct cf_level { struct cf_setting total_set; struct cf_setting abs_set; struct cf_setting rel_set[MAX_SETTINGS]; int rel_count; TAILQ_ENTRY(cf_level) link; }; TAILQ_HEAD(cf_level_lst, cf_level); /* Drivers should set all unknown values to this. */ #define CPUFREQ_VAL_UNKNOWN (-1) /* * Every driver offers a type of CPU control. Absolute levels are mutually * exclusive while relative levels modify the current absolute level. There * may be multiple absolute and relative drivers available on a given * system. 
 *
 * For example, consider a system with two absolute drivers that provide
 * frequency settings of 100, 200 and 300, 400 and a relative driver that
 * provides settings of 50%, 100%.  The cpufreq core would export frequency
 * levels of 50, 100, 150, 200, 300, 400.
 *
 * The "info only" flag signifies that settings returned by
 * CPUFREQ_DRV_SETTINGS cannot be passed to the CPUFREQ_DRV_SET method and
 * are only informational.  This is for some drivers that can return
 * information about settings but rely on another machine-dependent driver
 * for actually performing the frequency transition (e.g., ACPI performance
 * states of type "functional fixed hardware.")
+ *
+ * The "uncached" flag tells CPUFREQ_DRV_GET to try obtaining the real
+ * instantaneous frequency from the underlying hardware regardless of cached
+ * state.  It is probably a bug to not combine this with "info only".
 */
#define	CPUFREQ_TYPE_MASK	0xffff
#define	CPUFREQ_TYPE_RELATIVE	(1<<0)
#define	CPUFREQ_TYPE_ABSOLUTE	(1<<1)
#define	CPUFREQ_FLAG_INFO_ONLY	(1<<16)
+#define	CPUFREQ_FLAG_UNCACHED	(1<<17)

/*
 * When setting a level, the caller indicates the priority of this request.
 * Priorities determine, among other things, whether a level can be
 * overridden by other callers.  For example, if the user sets a level but
 * the system thermal driver needs to override it for emergency cooling,
 * the driver would use a higher priority.  Once the event has passed, the
 * driver would call cpufreq to resume any previous level.
 */
#define	CPUFREQ_PRIO_HIGHEST	1000000
#define	CPUFREQ_PRIO_KERN	1000
#define	CPUFREQ_PRIO_USER	100
#define	CPUFREQ_PRIO_LOWEST	0

/*
 * Register and unregister a driver with the cpufreq core.  Once a driver
 * is registered, it must support calls to its CPUFREQ_GET, CPUFREQ_GET_LEVEL,
 * and CPUFREQ_SET methods.  It must also unregister before returning from
 * its DEVICE_DETACH method.
 */
int	cpufreq_register(device_t dev);
int	cpufreq_unregister(device_t dev);

/*
 * Notify the cpufreq core that the number of or values for settings have
 * changed.
 */
int	cpufreq_settings_changed(device_t dev);

/*
 * Eventhandlers that are called before and after a change in frequency.
 * The new level and the result of the change (0 is success) is passed in.
 * If the driver wishes to revoke the change from cpufreq_pre_change, it
 * stores a non-zero error code in the result parameter and the change will
 * not be made.  If the post-change eventhandler gets a non-zero result,
 * no change was made and the previous level remains in effect.  If a change
 * is revoked, the post-change eventhandler is still called with the error
 * value supplied by the revoking driver.  This gives listeners who cached
 * some data in preparation for a level change a chance to clean up.
 */
typedef void (*cpufreq_pre_notify_fn)(void *, const struct cf_level *, int *);
typedef void (*cpufreq_post_notify_fn)(void *, const struct cf_level *, int);
EVENTHANDLER_DECLARE(cpufreq_pre_change, cpufreq_pre_notify_fn);
EVENTHANDLER_DECLARE(cpufreq_post_change, cpufreq_post_notify_fn);

/*
 * Eventhandler called when the available list of levels changed.
 * The unit number of the device (i.e. "cpufreq0") whose levels changed
 * is provided so the listener can retrieve the new list of levels.
 */
typedef void (*cpufreq_levels_notify_fn)(void *, int);
EVENTHANDLER_DECLARE(cpufreq_levels_changed, cpufreq_levels_notify_fn);

/* Allow values to be +/- a bit since sometimes we have to estimate. */
#define	CPUFREQ_CMP(x, y)	(abs((x) - (y)) < 25)

/*
 * Machine-dependent functions.
 */

/* Estimate the current clock rate for the given CPU id.
*/ int cpu_est_clockrate(int cpu_id, uint64_t *rate); #endif /* !_SYS_CPU_H_ */ Index: head/sys/x86/cpufreq/hwpstate.c =================================================================== --- head/sys/x86/cpufreq/hwpstate.c (revision 357001) +++ head/sys/x86/cpufreq/hwpstate.c (nonexistent) @@ -1,543 +0,0 @@ -/*- - * SPDX-License-Identifier: BSD-2-Clause-FreeBSD - * - * Copyright (c) 2005 Nate Lawson - * Copyright (c) 2004 Colin Percival - * Copyright (c) 2004-2005 Bruno Durcot - * Copyright (c) 2004 FUKUDA Nobuhiko - * Copyright (c) 2009 Michael Reifenberger - * Copyright (c) 2009 Norikatsu Shigemura - * Copyright (c) 2008-2009 Gen Otsuji - * - * This code is depending on kern_cpu.c, est.c, powernow.c, p4tcc.c, smist.c - * in various parts. The authors of these files are Nate Lawson, - * Colin Percival, Bruno Durcot, and FUKUDA Nobuhiko. - * This code contains patches by Michael Reifenberger and Norikatsu Shigemura. - * Thank you. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted providing that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR``AS IS'' AND ANY EXPRESS OR - * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED - * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, - * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING - * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - * POSSIBILITY OF SUCH DAMAGE. - */ - -/* - * For more info: - * BIOS and Kernel Developer's Guide(BKDG) for AMD Family 10h Processors - * 31116 Rev 3.20 February 04, 2009 - * BIOS and Kernel Developer's Guide(BKDG) for AMD Family 11h Processors - * 41256 Rev 3.00 - July 07, 2008 - */ - -#include -__FBSDID("$FreeBSD$"); - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include - -#include - -#include - -#include "acpi_if.h" -#include "cpufreq_if.h" - -#define MSR_AMD_10H_11H_LIMIT 0xc0010061 -#define MSR_AMD_10H_11H_CONTROL 0xc0010062 -#define MSR_AMD_10H_11H_STATUS 0xc0010063 -#define MSR_AMD_10H_11H_CONFIG 0xc0010064 - -#define AMD_10H_11H_MAX_STATES 16 - -/* for MSR_AMD_10H_11H_LIMIT C001_0061 */ -#define AMD_10H_11H_GET_PSTATE_MAX_VAL(msr) (((msr) >> 4) & 0x7) -#define AMD_10H_11H_GET_PSTATE_LIMIT(msr) (((msr)) & 0x7) -/* for MSR_AMD_10H_11H_CONFIG 10h:C001_0064:68 / 11h:C001_0064:6B */ -#define AMD_10H_11H_CUR_VID(msr) (((msr) >> 9) & 0x7F) -#define AMD_10H_11H_CUR_DID(msr) (((msr) >> 6) & 0x07) -#define AMD_10H_11H_CUR_FID(msr) ((msr) & 0x3F) - -#define AMD_17H_CUR_VID(msr) (((msr) >> 14) & 0xFF) -#define AMD_17H_CUR_DID(msr) (((msr) >> 8) & 0x3F) -#define AMD_17H_CUR_FID(msr) ((msr) & 0xFF) - -#define HWPSTATE_DEBUG(dev, msg...) \ - do { \ - if (hwpstate_verbose) \ - device_printf(dev, msg); \ - } while (0) - -struct hwpstate_setting { - int freq; /* CPU clock in Mhz or 100ths of a percent. 
*/ - int volts; /* Voltage in mV. */ - int power; /* Power consumed in mW. */ - int lat; /* Transition latency in us. */ - int pstate_id; /* P-State id */ -}; - -struct hwpstate_softc { - device_t dev; - struct hwpstate_setting hwpstate_settings[AMD_10H_11H_MAX_STATES]; - int cfnum; -}; - -static void hwpstate_identify(driver_t *driver, device_t parent); -static int hwpstate_probe(device_t dev); -static int hwpstate_attach(device_t dev); -static int hwpstate_detach(device_t dev); -static int hwpstate_set(device_t dev, const struct cf_setting *cf); -static int hwpstate_get(device_t dev, struct cf_setting *cf); -static int hwpstate_settings(device_t dev, struct cf_setting *sets, int *count); -static int hwpstate_type(device_t dev, int *type); -static int hwpstate_shutdown(device_t dev); -static int hwpstate_features(driver_t *driver, u_int *features); -static int hwpstate_get_info_from_acpi_perf(device_t dev, device_t perf_dev); -static int hwpstate_get_info_from_msr(device_t dev); -static int hwpstate_goto_pstate(device_t dev, int pstate_id); - -static int hwpstate_verbose; -SYSCTL_INT(_debug, OID_AUTO, hwpstate_verbose, CTLFLAG_RWTUN, - &hwpstate_verbose, 0, "Debug hwpstate"); - -static int hwpstate_verify; -SYSCTL_INT(_debug, OID_AUTO, hwpstate_verify, CTLFLAG_RWTUN, - &hwpstate_verify, 0, "Verify P-state after setting"); - -static device_method_t hwpstate_methods[] = { - /* Device interface */ - DEVMETHOD(device_identify, hwpstate_identify), - DEVMETHOD(device_probe, hwpstate_probe), - DEVMETHOD(device_attach, hwpstate_attach), - DEVMETHOD(device_detach, hwpstate_detach), - DEVMETHOD(device_shutdown, hwpstate_shutdown), - - /* cpufreq interface */ - DEVMETHOD(cpufreq_drv_set, hwpstate_set), - DEVMETHOD(cpufreq_drv_get, hwpstate_get), - DEVMETHOD(cpufreq_drv_settings, hwpstate_settings), - DEVMETHOD(cpufreq_drv_type, hwpstate_type), - - /* ACPI interface */ - DEVMETHOD(acpi_get_features, hwpstate_features), - - {0, 0} -}; - -static devclass_t hwpstate_devclass; 
-static driver_t hwpstate_driver = { - "hwpstate", - hwpstate_methods, - sizeof(struct hwpstate_softc), -}; - -DRIVER_MODULE(hwpstate, cpu, hwpstate_driver, hwpstate_devclass, 0, 0); - -/* - * Go to Px-state on all cpus considering the limit. - */ -static int -hwpstate_goto_pstate(device_t dev, int id) -{ - sbintime_t sbt; - uint64_t msr; - int cpu, i, j, limit; - - /* get the current pstate limit */ - msr = rdmsr(MSR_AMD_10H_11H_LIMIT); - limit = AMD_10H_11H_GET_PSTATE_LIMIT(msr); - if (limit > id) - id = limit; - - cpu = curcpu; - HWPSTATE_DEBUG(dev, "setting P%d-state on cpu%d\n", id, cpu); - /* Go To Px-state */ - wrmsr(MSR_AMD_10H_11H_CONTROL, id); - - /* - * We are going to the same Px-state on all cpus. - * Probably should take _PSD into account. - */ - CPU_FOREACH(i) { - if (i == cpu) - continue; - - /* Bind to each cpu. */ - thread_lock(curthread); - sched_bind(curthread, i); - thread_unlock(curthread); - HWPSTATE_DEBUG(dev, "setting P%d-state on cpu%d\n", id, i); - /* Go To Px-state */ - wrmsr(MSR_AMD_10H_11H_CONTROL, id); - } - - /* - * Verify whether each core is in the requested P-state. - */ - if (hwpstate_verify) { - CPU_FOREACH(i) { - thread_lock(curthread); - sched_bind(curthread, i); - thread_unlock(curthread); - /* wait loop (100*100 usec is enough ?) */ - for (j = 0; j < 100; j++) { - /* get the result. 
not assure msr=id */ - msr = rdmsr(MSR_AMD_10H_11H_STATUS); - if (msr == id) - break; - sbt = SBT_1MS / 10; - tsleep_sbt(dev, PZERO, "pstate_goto", sbt, - sbt >> tc_precexp, 0); - } - HWPSTATE_DEBUG(dev, "result: P%d-state on cpu%d\n", - (int)msr, i); - if (msr != id) { - HWPSTATE_DEBUG(dev, - "error: loop is not enough.\n"); - return (ENXIO); - } - } - } - - return (0); -} - -static int -hwpstate_set(device_t dev, const struct cf_setting *cf) -{ - struct hwpstate_softc *sc; - struct hwpstate_setting *set; - int i; - - if (cf == NULL) - return (EINVAL); - sc = device_get_softc(dev); - set = sc->hwpstate_settings; - for (i = 0; i < sc->cfnum; i++) - if (CPUFREQ_CMP(cf->freq, set[i].freq)) - break; - if (i == sc->cfnum) - return (EINVAL); - - return (hwpstate_goto_pstate(dev, set[i].pstate_id)); -} - -static int -hwpstate_get(device_t dev, struct cf_setting *cf) -{ - struct hwpstate_softc *sc; - struct hwpstate_setting set; - uint64_t msr; - - sc = device_get_softc(dev); - if (cf == NULL) - return (EINVAL); - msr = rdmsr(MSR_AMD_10H_11H_STATUS); - if (msr >= sc->cfnum) - return (EINVAL); - set = sc->hwpstate_settings[msr]; - - cf->freq = set.freq; - cf->volts = set.volts; - cf->power = set.power; - cf->lat = set.lat; - cf->dev = dev; - return (0); -} - -static int -hwpstate_settings(device_t dev, struct cf_setting *sets, int *count) -{ - struct hwpstate_softc *sc; - struct hwpstate_setting set; - int i; - - if (sets == NULL || count == NULL) - return (EINVAL); - sc = device_get_softc(dev); - if (*count < sc->cfnum) - return (E2BIG); - for (i = 0; i < sc->cfnum; i++, sets++) { - set = sc->hwpstate_settings[i]; - sets->freq = set.freq; - sets->volts = set.volts; - sets->power = set.power; - sets->lat = set.lat; - sets->dev = dev; - } - *count = sc->cfnum; - - return (0); -} - -static int -hwpstate_type(device_t dev, int *type) -{ - - if (type == NULL) - return (EINVAL); - - *type = CPUFREQ_TYPE_ABSOLUTE; - return (0); -} - -static void -hwpstate_identify(driver_t 
*driver, device_t parent) -{ - - if (device_find_child(parent, "hwpstate", -1) != NULL) - return; - - if ((cpu_vendor_id != CPU_VENDOR_AMD || CPUID_TO_FAMILY(cpu_id) < 0x10) && - cpu_vendor_id != CPU_VENDOR_HYGON) - return; - - /* - * Check if hardware pstate enable bit is set. - */ - if ((amd_pminfo & AMDPM_HW_PSTATE) == 0) { - HWPSTATE_DEBUG(parent, "hwpstate enable bit is not set.\n"); - return; - } - - if (resource_disabled("hwpstate", 0)) - return; - - if (BUS_ADD_CHILD(parent, 10, "hwpstate", -1) == NULL) - device_printf(parent, "hwpstate: add child failed\n"); -} - -static int -hwpstate_probe(device_t dev) -{ - struct hwpstate_softc *sc; - device_t perf_dev; - uint64_t msr; - int error, type; - - /* - * Only hwpstate0. - * It goes well with acpi_throttle. - */ - if (device_get_unit(dev) != 0) - return (ENXIO); - - sc = device_get_softc(dev); - sc->dev = dev; - - /* - * Check if acpi_perf has INFO only flag. - */ - perf_dev = device_find_child(device_get_parent(dev), "acpi_perf", -1); - error = TRUE; - if (perf_dev && device_is_attached(perf_dev)) { - error = CPUFREQ_DRV_TYPE(perf_dev, &type); - if (error == 0) { - if ((type & CPUFREQ_FLAG_INFO_ONLY) == 0) { - /* - * If acpi_perf doesn't have INFO_ONLY flag, - * it will take care of pstate transitions. - */ - HWPSTATE_DEBUG(dev, "acpi_perf will take care of pstate transitions.\n"); - return (ENXIO); - } else { - /* - * If acpi_perf has INFO_ONLY flag, (_PCT has FFixedHW) - * we can get _PSS info from acpi_perf - * without going into ACPI. - */ - HWPSTATE_DEBUG(dev, "going to fetch info from acpi_perf\n"); - error = hwpstate_get_info_from_acpi_perf(dev, perf_dev); - } - } - } - - if (error == 0) { - /* - * Now we get _PSS info from acpi_perf without error. - * Let's check it. 
- */ - msr = rdmsr(MSR_AMD_10H_11H_LIMIT); - if (sc->cfnum != 1 + AMD_10H_11H_GET_PSTATE_MAX_VAL(msr)) { - HWPSTATE_DEBUG(dev, "MSR (%jd) and ACPI _PSS (%d)" - " count mismatch\n", (intmax_t)msr, sc->cfnum); - error = TRUE; - } - } - - /* - * If we cannot get info from acpi_perf, - * Let's get info from MSRs. - */ - if (error) - error = hwpstate_get_info_from_msr(dev); - if (error) - return (error); - - device_set_desc(dev, "Cool`n'Quiet 2.0"); - return (0); -} - -static int -hwpstate_attach(device_t dev) -{ - - return (cpufreq_register(dev)); -} - -static int -hwpstate_get_info_from_msr(device_t dev) -{ - struct hwpstate_softc *sc; - struct hwpstate_setting *hwpstate_set; - uint64_t msr; - int family, i, fid, did; - - family = CPUID_TO_FAMILY(cpu_id); - sc = device_get_softc(dev); - /* Get pstate count */ - msr = rdmsr(MSR_AMD_10H_11H_LIMIT); - sc->cfnum = 1 + AMD_10H_11H_GET_PSTATE_MAX_VAL(msr); - hwpstate_set = sc->hwpstate_settings; - for (i = 0; i < sc->cfnum; i++) { - msr = rdmsr(MSR_AMD_10H_11H_CONFIG + i); - if ((msr & ((uint64_t)1 << 63)) == 0) { - HWPSTATE_DEBUG(dev, "msr is not valid.\n"); - return (ENXIO); - } - did = AMD_10H_11H_CUR_DID(msr); - fid = AMD_10H_11H_CUR_FID(msr); - - /* Convert fid/did to frequency. */ - switch (family) { - case 0x11: - hwpstate_set[i].freq = (100 * (fid + 0x08)) >> did; - break; - case 0x10: - case 0x12: - case 0x15: - case 0x16: - hwpstate_set[i].freq = (100 * (fid + 0x10)) >> did; - break; - case 0x17: - case 0x18: - did = AMD_17H_CUR_DID(msr); - if (did == 0) { - HWPSTATE_DEBUG(dev, "unexpected did: 0\n"); - did = 1; - } - fid = AMD_17H_CUR_FID(msr); - hwpstate_set[i].freq = (200 * fid) / did; - break; - default: - HWPSTATE_DEBUG(dev, "get_info_from_msr: %s family" - " 0x%02x CPUs are not supported yet\n", - cpu_vendor_id == CPU_VENDOR_HYGON ? "Hygon" : "AMD", - family); - return (ENXIO); - } - hwpstate_set[i].pstate_id = i; - /* There was volts calculation, but deleted it. 
*/ - hwpstate_set[i].volts = CPUFREQ_VAL_UNKNOWN; - hwpstate_set[i].power = CPUFREQ_VAL_UNKNOWN; - hwpstate_set[i].lat = CPUFREQ_VAL_UNKNOWN; - } - return (0); -} - -static int -hwpstate_get_info_from_acpi_perf(device_t dev, device_t perf_dev) -{ - struct hwpstate_softc *sc; - struct cf_setting *perf_set; - struct hwpstate_setting *hwpstate_set; - int count, error, i; - - perf_set = malloc(MAX_SETTINGS * sizeof(*perf_set), M_TEMP, M_NOWAIT); - if (perf_set == NULL) { - HWPSTATE_DEBUG(dev, "nomem\n"); - return (ENOMEM); - } - /* - * Fetch settings from acpi_perf. - * Now it is attached, and has info only flag. - */ - count = MAX_SETTINGS; - error = CPUFREQ_DRV_SETTINGS(perf_dev, perf_set, &count); - if (error) { - HWPSTATE_DEBUG(dev, "error: CPUFREQ_DRV_SETTINGS.\n"); - goto out; - } - sc = device_get_softc(dev); - sc->cfnum = count; - hwpstate_set = sc->hwpstate_settings; - for (i = 0; i < count; i++) { - if (i == perf_set[i].spec[0]) { - hwpstate_set[i].pstate_id = i; - hwpstate_set[i].freq = perf_set[i].freq; - hwpstate_set[i].volts = perf_set[i].volts; - hwpstate_set[i].power = perf_set[i].power; - hwpstate_set[i].lat = perf_set[i].lat; - } else { - HWPSTATE_DEBUG(dev, "ACPI _PSS object mismatch.\n"); - error = ENXIO; - goto out; - } - } -out: - if (perf_set) - free(perf_set, M_TEMP); - return (error); -} - -static int -hwpstate_detach(device_t dev) -{ - - hwpstate_goto_pstate(dev, 0); - return (cpufreq_unregister(dev)); -} - -static int -hwpstate_shutdown(device_t dev) -{ - - /* hwpstate_goto_pstate(dev, 0); */ - return (0); -} - -static int -hwpstate_features(driver_t *driver, u_int *features) -{ - - /* Notify the ACPI CPU that we support direct access to MSRs */ - *features = ACPI_CAP_PERF_MSRS; - return (0); -} Property changes on: head/sys/x86/cpufreq/hwpstate.c ___________________________________________________________________ Deleted: svn:eol-style ## -1 +0,0 ## -native \ No newline at end of property Deleted: svn:keywords ## -1 +0,0 ## -FreeBSD=%H \ No 
newline at end of property Deleted: svn:mime-type ## -1 +0,0 ## -text/plain \ No newline at end of property Index: head/sys/x86/cpufreq/est.c =================================================================== --- head/sys/x86/cpufreq/est.c (revision 357001) +++ head/sys/x86/cpufreq/est.c (revision 357002) @@ -1,1357 +1,1369 @@ /*- * SPDX-License-Identifier: BSD-2-Clause-FreeBSD * * Copyright (c) 2004 Colin Percival * Copyright (c) 2005 Nate Lawson * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted providing that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. */ #include __FBSDID("$FreeBSD$"); #include #include #include #include #include #include #include #include #include "cpufreq_if.h" #include #include #include #include #include #include #include "acpi_if.h" +#include + /* Status/control registers (from the IA-32 System Programming Guide). 
*/ #define MSR_PERF_STATUS 0x198 #define MSR_PERF_CTL 0x199 /* Register and bit for enabling SpeedStep. */ #define MSR_MISC_ENABLE 0x1a0 #define MSR_SS_ENABLE (1<<16) /* Frequency and MSR control values. */ typedef struct { uint16_t freq; uint16_t volts; uint16_t id16; int power; } freq_info; /* Identifying characteristics of a processor and supported frequencies. */ typedef struct { const u_int vendor_id; uint32_t id32; freq_info *freqtab; size_t tablen; } cpu_info; struct est_softc { device_t dev; int acpi_settings; int msr_settings; freq_info *freq_list; size_t flist_len; }; /* Convert MHz and mV into IDs for passing to the MSR. */ #define ID16(MHz, mV, bus_clk) \ (((MHz / bus_clk) << 8) | ((mV ? mV - 700 : 0) >> 4)) #define ID32(MHz_hi, mV_hi, MHz_lo, mV_lo, bus_clk) \ ((ID16(MHz_lo, mV_lo, bus_clk) << 16) | (ID16(MHz_hi, mV_hi, bus_clk))) /* Format for storing IDs in our table. */ #define FREQ_INFO_PWR(MHz, mV, bus_clk, mW) \ { MHz, mV, ID16(MHz, mV, bus_clk), mW } #define FREQ_INFO(MHz, mV, bus_clk) \ FREQ_INFO_PWR(MHz, mV, bus_clk, CPUFREQ_VAL_UNKNOWN) #define INTEL(tab, zhi, vhi, zlo, vlo, bus_clk) \ { CPU_VENDOR_INTEL, ID32(zhi, vhi, zlo, vlo, bus_clk), tab, nitems(tab) } #define CENTAUR(tab, zhi, vhi, zlo, vlo, bus_clk) \ { CPU_VENDOR_CENTAUR, ID32(zhi, vhi, zlo, vlo, bus_clk), tab, nitems(tab) } static int msr_info_enabled = 0; TUNABLE_INT("hw.est.msr_info", &msr_info_enabled); static int strict = -1; TUNABLE_INT("hw.est.strict", &strict); /* Default bus clock value for Centrino processors. */ #define INTEL_BUS_CLK 100 /* XXX Update this if new CPUs have more settings. */ #define EST_MAX_SETTINGS 10 CTASSERT(EST_MAX_SETTINGS <= MAX_SETTINGS); /* Estimate in microseconds of latency for performing a transition. */ #define EST_TRANS_LAT 1000 /* * Frequency (MHz) and voltage (mV) settings. * * Dothan processors have multiple VID#s with different settings for * each VID#. 
Since we can't uniquely identify this info * without undisclosed methods from Intel, we can't support newer * processors with this table method. If ACPI Px states are supported, * we get info from them. * * Data from the "Intel Pentium M Processor Datasheet", * Order Number 252612-003, Table 5. */ static freq_info PM17_130[] = { /* 130nm 1.70GHz Pentium M */ FREQ_INFO(1700, 1484, INTEL_BUS_CLK), FREQ_INFO(1400, 1308, INTEL_BUS_CLK), FREQ_INFO(1200, 1228, INTEL_BUS_CLK), FREQ_INFO(1000, 1116, INTEL_BUS_CLK), FREQ_INFO( 800, 1004, INTEL_BUS_CLK), FREQ_INFO( 600, 956, INTEL_BUS_CLK), }; static freq_info PM16_130[] = { /* 130nm 1.60GHz Pentium M */ FREQ_INFO(1600, 1484, INTEL_BUS_CLK), FREQ_INFO(1400, 1420, INTEL_BUS_CLK), FREQ_INFO(1200, 1276, INTEL_BUS_CLK), FREQ_INFO(1000, 1164, INTEL_BUS_CLK), FREQ_INFO( 800, 1036, INTEL_BUS_CLK), FREQ_INFO( 600, 956, INTEL_BUS_CLK), }; static freq_info PM15_130[] = { /* 130nm 1.50GHz Pentium M */ FREQ_INFO(1500, 1484, INTEL_BUS_CLK), FREQ_INFO(1400, 1452, INTEL_BUS_CLK), FREQ_INFO(1200, 1356, INTEL_BUS_CLK), FREQ_INFO(1000, 1228, INTEL_BUS_CLK), FREQ_INFO( 800, 1116, INTEL_BUS_CLK), FREQ_INFO( 600, 956, INTEL_BUS_CLK), }; static freq_info PM14_130[] = { /* 130nm 1.40GHz Pentium M */ FREQ_INFO(1400, 1484, INTEL_BUS_CLK), FREQ_INFO(1200, 1436, INTEL_BUS_CLK), FREQ_INFO(1000, 1308, INTEL_BUS_CLK), FREQ_INFO( 800, 1180, INTEL_BUS_CLK), FREQ_INFO( 600, 956, INTEL_BUS_CLK), }; static freq_info PM13_130[] = { /* 130nm 1.30GHz Pentium M */ FREQ_INFO(1300, 1388, INTEL_BUS_CLK), FREQ_INFO(1200, 1356, INTEL_BUS_CLK), FREQ_INFO(1000, 1292, INTEL_BUS_CLK), FREQ_INFO( 800, 1260, INTEL_BUS_CLK), FREQ_INFO( 600, 956, INTEL_BUS_CLK), }; static freq_info PM13_LV_130[] = { /* 130nm 1.30GHz Low Voltage Pentium M */ FREQ_INFO(1300, 1180, INTEL_BUS_CLK), FREQ_INFO(1200, 1164, INTEL_BUS_CLK), FREQ_INFO(1100, 1100, INTEL_BUS_CLK), FREQ_INFO(1000, 1020, INTEL_BUS_CLK), FREQ_INFO( 900, 1004, INTEL_BUS_CLK), FREQ_INFO( 800, 988, INTEL_BUS_CLK), FREQ_INFO( 
600, 956, INTEL_BUS_CLK), }; static freq_info PM12_LV_130[] = { /* 130 nm 1.20GHz Low Voltage Pentium M */ FREQ_INFO(1200, 1180, INTEL_BUS_CLK), FREQ_INFO(1100, 1164, INTEL_BUS_CLK), FREQ_INFO(1000, 1100, INTEL_BUS_CLK), FREQ_INFO( 900, 1020, INTEL_BUS_CLK), FREQ_INFO( 800, 1004, INTEL_BUS_CLK), FREQ_INFO( 600, 956, INTEL_BUS_CLK), }; static freq_info PM11_LV_130[] = { /* 130 nm 1.10GHz Low Voltage Pentium M */ FREQ_INFO(1100, 1180, INTEL_BUS_CLK), FREQ_INFO(1000, 1164, INTEL_BUS_CLK), FREQ_INFO( 900, 1100, INTEL_BUS_CLK), FREQ_INFO( 800, 1020, INTEL_BUS_CLK), FREQ_INFO( 600, 956, INTEL_BUS_CLK), }; static freq_info PM11_ULV_130[] = { /* 130 nm 1.10GHz Ultra Low Voltage Pentium M */ FREQ_INFO(1100, 1004, INTEL_BUS_CLK), FREQ_INFO(1000, 988, INTEL_BUS_CLK), FREQ_INFO( 900, 972, INTEL_BUS_CLK), FREQ_INFO( 800, 956, INTEL_BUS_CLK), FREQ_INFO( 600, 844, INTEL_BUS_CLK), }; static freq_info PM10_ULV_130[] = { /* 130 nm 1.00GHz Ultra Low Voltage Pentium M */ FREQ_INFO(1000, 1004, INTEL_BUS_CLK), FREQ_INFO( 900, 988, INTEL_BUS_CLK), FREQ_INFO( 800, 972, INTEL_BUS_CLK), FREQ_INFO( 600, 844, INTEL_BUS_CLK), }; /* * Data from "Intel Pentium M Processor on 90nm Process with * 2-MB L2 Cache Datasheet", Order Number 302189-008, Table 5. 
*/ static freq_info PM_765A_90[] = { /* 90 nm 2.10GHz Pentium M, VID #A */ FREQ_INFO(2100, 1340, INTEL_BUS_CLK), FREQ_INFO(1800, 1276, INTEL_BUS_CLK), FREQ_INFO(1600, 1228, INTEL_BUS_CLK), FREQ_INFO(1400, 1180, INTEL_BUS_CLK), FREQ_INFO(1200, 1132, INTEL_BUS_CLK), FREQ_INFO(1000, 1084, INTEL_BUS_CLK), FREQ_INFO( 800, 1036, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_765B_90[] = { /* 90 nm 2.10GHz Pentium M, VID #B */ FREQ_INFO(2100, 1324, INTEL_BUS_CLK), FREQ_INFO(1800, 1260, INTEL_BUS_CLK), FREQ_INFO(1600, 1212, INTEL_BUS_CLK), FREQ_INFO(1400, 1180, INTEL_BUS_CLK), FREQ_INFO(1200, 1132, INTEL_BUS_CLK), FREQ_INFO(1000, 1084, INTEL_BUS_CLK), FREQ_INFO( 800, 1036, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_765C_90[] = { /* 90 nm 2.10GHz Pentium M, VID #C */ FREQ_INFO(2100, 1308, INTEL_BUS_CLK), FREQ_INFO(1800, 1244, INTEL_BUS_CLK), FREQ_INFO(1600, 1212, INTEL_BUS_CLK), FREQ_INFO(1400, 1164, INTEL_BUS_CLK), FREQ_INFO(1200, 1116, INTEL_BUS_CLK), FREQ_INFO(1000, 1084, INTEL_BUS_CLK), FREQ_INFO( 800, 1036, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_765E_90[] = { /* 90 nm 2.10GHz Pentium M, VID #E */ FREQ_INFO(2100, 1356, INTEL_BUS_CLK), FREQ_INFO(1800, 1292, INTEL_BUS_CLK), FREQ_INFO(1600, 1244, INTEL_BUS_CLK), FREQ_INFO(1400, 1196, INTEL_BUS_CLK), FREQ_INFO(1200, 1148, INTEL_BUS_CLK), FREQ_INFO(1000, 1100, INTEL_BUS_CLK), FREQ_INFO( 800, 1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_755A_90[] = { /* 90 nm 2.00GHz Pentium M, VID #A */ FREQ_INFO(2000, 1340, INTEL_BUS_CLK), FREQ_INFO(1800, 1292, INTEL_BUS_CLK), FREQ_INFO(1600, 1244, INTEL_BUS_CLK), FREQ_INFO(1400, 1196, INTEL_BUS_CLK), FREQ_INFO(1200, 1148, INTEL_BUS_CLK), FREQ_INFO(1000, 1100, INTEL_BUS_CLK), FREQ_INFO( 800, 1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_755B_90[] = { /* 90 nm 2.00GHz Pentium M, VID #B */ FREQ_INFO(2000, 1324, 
INTEL_BUS_CLK), FREQ_INFO(1800, 1276, INTEL_BUS_CLK), FREQ_INFO(1600, 1228, INTEL_BUS_CLK), FREQ_INFO(1400, 1180, INTEL_BUS_CLK), FREQ_INFO(1200, 1132, INTEL_BUS_CLK), FREQ_INFO(1000, 1084, INTEL_BUS_CLK), FREQ_INFO( 800, 1036, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_755C_90[] = { /* 90 nm 2.00GHz Pentium M, VID #C */ FREQ_INFO(2000, 1308, INTEL_BUS_CLK), FREQ_INFO(1800, 1276, INTEL_BUS_CLK), FREQ_INFO(1600, 1228, INTEL_BUS_CLK), FREQ_INFO(1400, 1180, INTEL_BUS_CLK), FREQ_INFO(1200, 1132, INTEL_BUS_CLK), FREQ_INFO(1000, 1084, INTEL_BUS_CLK), FREQ_INFO( 800, 1036, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_755D_90[] = { /* 90 nm 2.00GHz Pentium M, VID #D */ FREQ_INFO(2000, 1276, INTEL_BUS_CLK), FREQ_INFO(1800, 1244, INTEL_BUS_CLK), FREQ_INFO(1600, 1196, INTEL_BUS_CLK), FREQ_INFO(1400, 1164, INTEL_BUS_CLK), FREQ_INFO(1200, 1116, INTEL_BUS_CLK), FREQ_INFO(1000, 1084, INTEL_BUS_CLK), FREQ_INFO( 800, 1036, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_745A_90[] = { /* 90 nm 1.80GHz Pentium M, VID #A */ FREQ_INFO(1800, 1340, INTEL_BUS_CLK), FREQ_INFO(1600, 1292, INTEL_BUS_CLK), FREQ_INFO(1400, 1228, INTEL_BUS_CLK), FREQ_INFO(1200, 1164, INTEL_BUS_CLK), FREQ_INFO(1000, 1116, INTEL_BUS_CLK), FREQ_INFO( 800, 1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_745B_90[] = { /* 90 nm 1.80GHz Pentium M, VID #B */ FREQ_INFO(1800, 1324, INTEL_BUS_CLK), FREQ_INFO(1600, 1276, INTEL_BUS_CLK), FREQ_INFO(1400, 1212, INTEL_BUS_CLK), FREQ_INFO(1200, 1164, INTEL_BUS_CLK), FREQ_INFO(1000, 1116, INTEL_BUS_CLK), FREQ_INFO( 800, 1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_745C_90[] = { /* 90 nm 1.80GHz Pentium M, VID #C */ FREQ_INFO(1800, 1308, INTEL_BUS_CLK), FREQ_INFO(1600, 1260, INTEL_BUS_CLK), FREQ_INFO(1400, 1212, INTEL_BUS_CLK), FREQ_INFO(1200, 1148, INTEL_BUS_CLK), FREQ_INFO(1000, 1100, INTEL_BUS_CLK), FREQ_INFO( 800, 
1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_745D_90[] = { /* 90 nm 1.80GHz Pentium M, VID #D */ FREQ_INFO(1800, 1276, INTEL_BUS_CLK), FREQ_INFO(1600, 1228, INTEL_BUS_CLK), FREQ_INFO(1400, 1180, INTEL_BUS_CLK), FREQ_INFO(1200, 1132, INTEL_BUS_CLK), FREQ_INFO(1000, 1084, INTEL_BUS_CLK), FREQ_INFO( 800, 1036, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_735A_90[] = { /* 90 nm 1.70GHz Pentium M, VID #A */ FREQ_INFO(1700, 1340, INTEL_BUS_CLK), FREQ_INFO(1400, 1244, INTEL_BUS_CLK), FREQ_INFO(1200, 1180, INTEL_BUS_CLK), FREQ_INFO(1000, 1116, INTEL_BUS_CLK), FREQ_INFO( 800, 1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_735B_90[] = { /* 90 nm 1.70GHz Pentium M, VID #B */ FREQ_INFO(1700, 1324, INTEL_BUS_CLK), FREQ_INFO(1400, 1244, INTEL_BUS_CLK), FREQ_INFO(1200, 1180, INTEL_BUS_CLK), FREQ_INFO(1000, 1116, INTEL_BUS_CLK), FREQ_INFO( 800, 1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_735C_90[] = { /* 90 nm 1.70GHz Pentium M, VID #C */ FREQ_INFO(1700, 1308, INTEL_BUS_CLK), FREQ_INFO(1400, 1228, INTEL_BUS_CLK), FREQ_INFO(1200, 1164, INTEL_BUS_CLK), FREQ_INFO(1000, 1116, INTEL_BUS_CLK), FREQ_INFO( 800, 1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_735D_90[] = { /* 90 nm 1.70GHz Pentium M, VID #D */ FREQ_INFO(1700, 1276, INTEL_BUS_CLK), FREQ_INFO(1400, 1212, INTEL_BUS_CLK), FREQ_INFO(1200, 1148, INTEL_BUS_CLK), FREQ_INFO(1000, 1100, INTEL_BUS_CLK), FREQ_INFO( 800, 1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_725A_90[] = { /* 90 nm 1.60GHz Pentium M, VID #A */ FREQ_INFO(1600, 1340, INTEL_BUS_CLK), FREQ_INFO(1400, 1276, INTEL_BUS_CLK), FREQ_INFO(1200, 1212, INTEL_BUS_CLK), FREQ_INFO(1000, 1132, INTEL_BUS_CLK), FREQ_INFO( 800, 1068, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_725B_90[] = { /* 90 nm 1.60GHz Pentium M, VID #B */ FREQ_INFO(1600, 
1324, INTEL_BUS_CLK), FREQ_INFO(1400, 1260, INTEL_BUS_CLK), FREQ_INFO(1200, 1196, INTEL_BUS_CLK), FREQ_INFO(1000, 1132, INTEL_BUS_CLK), FREQ_INFO( 800, 1068, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_725C_90[] = { /* 90 nm 1.60GHz Pentium M, VID #C */ FREQ_INFO(1600, 1308, INTEL_BUS_CLK), FREQ_INFO(1400, 1244, INTEL_BUS_CLK), FREQ_INFO(1200, 1180, INTEL_BUS_CLK), FREQ_INFO(1000, 1116, INTEL_BUS_CLK), FREQ_INFO( 800, 1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_725D_90[] = { /* 90 nm 1.60GHz Pentium M, VID #D */ FREQ_INFO(1600, 1276, INTEL_BUS_CLK), FREQ_INFO(1400, 1228, INTEL_BUS_CLK), FREQ_INFO(1200, 1164, INTEL_BUS_CLK), FREQ_INFO(1000, 1116, INTEL_BUS_CLK), FREQ_INFO( 800, 1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_715A_90[] = { /* 90 nm 1.50GHz Pentium M, VID #A */ FREQ_INFO(1500, 1340, INTEL_BUS_CLK), FREQ_INFO(1200, 1228, INTEL_BUS_CLK), FREQ_INFO(1000, 1148, INTEL_BUS_CLK), FREQ_INFO( 800, 1068, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_715B_90[] = { /* 90 nm 1.50GHz Pentium M, VID #B */ FREQ_INFO(1500, 1324, INTEL_BUS_CLK), FREQ_INFO(1200, 1212, INTEL_BUS_CLK), FREQ_INFO(1000, 1148, INTEL_BUS_CLK), FREQ_INFO( 800, 1068, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_715C_90[] = { /* 90 nm 1.50GHz Pentium M, VID #C */ FREQ_INFO(1500, 1308, INTEL_BUS_CLK), FREQ_INFO(1200, 1212, INTEL_BUS_CLK), FREQ_INFO(1000, 1132, INTEL_BUS_CLK), FREQ_INFO( 800, 1068, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_715D_90[] = { /* 90 nm 1.50GHz Pentium M, VID #D */ FREQ_INFO(1500, 1276, INTEL_BUS_CLK), FREQ_INFO(1200, 1180, INTEL_BUS_CLK), FREQ_INFO(1000, 1116, INTEL_BUS_CLK), FREQ_INFO( 800, 1052, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_778_90[] = { /* 90 nm 1.60GHz Low Voltage Pentium M */ FREQ_INFO(1600, 1116, INTEL_BUS_CLK), 
FREQ_INFO(1500, 1116, INTEL_BUS_CLK), FREQ_INFO(1400, 1100, INTEL_BUS_CLK), FREQ_INFO(1300, 1084, INTEL_BUS_CLK), FREQ_INFO(1200, 1068, INTEL_BUS_CLK), FREQ_INFO(1100, 1052, INTEL_BUS_CLK), FREQ_INFO(1000, 1052, INTEL_BUS_CLK), FREQ_INFO( 900, 1036, INTEL_BUS_CLK), FREQ_INFO( 800, 1020, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_758_90[] = { /* 90 nm 1.50GHz Low Voltage Pentium M */ FREQ_INFO(1500, 1116, INTEL_BUS_CLK), FREQ_INFO(1400, 1116, INTEL_BUS_CLK), FREQ_INFO(1300, 1100, INTEL_BUS_CLK), FREQ_INFO(1200, 1084, INTEL_BUS_CLK), FREQ_INFO(1100, 1068, INTEL_BUS_CLK), FREQ_INFO(1000, 1052, INTEL_BUS_CLK), FREQ_INFO( 900, 1036, INTEL_BUS_CLK), FREQ_INFO( 800, 1020, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_738_90[] = { /* 90 nm 1.40GHz Low Voltage Pentium M */ FREQ_INFO(1400, 1116, INTEL_BUS_CLK), FREQ_INFO(1300, 1116, INTEL_BUS_CLK), FREQ_INFO(1200, 1100, INTEL_BUS_CLK), FREQ_INFO(1100, 1068, INTEL_BUS_CLK), FREQ_INFO(1000, 1052, INTEL_BUS_CLK), FREQ_INFO( 900, 1036, INTEL_BUS_CLK), FREQ_INFO( 800, 1020, INTEL_BUS_CLK), FREQ_INFO( 600, 988, INTEL_BUS_CLK), }; static freq_info PM_773G_90[] = { /* 90 nm 1.30GHz Ultra Low Voltage Pentium M, VID #G */ FREQ_INFO(1300, 956, INTEL_BUS_CLK), FREQ_INFO(1200, 940, INTEL_BUS_CLK), FREQ_INFO(1100, 924, INTEL_BUS_CLK), FREQ_INFO(1000, 908, INTEL_BUS_CLK), FREQ_INFO( 900, 876, INTEL_BUS_CLK), FREQ_INFO( 800, 860, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_773H_90[] = { /* 90 nm 1.30GHz Ultra Low Voltage Pentium M, VID #H */ FREQ_INFO(1300, 940, INTEL_BUS_CLK), FREQ_INFO(1200, 924, INTEL_BUS_CLK), FREQ_INFO(1100, 908, INTEL_BUS_CLK), FREQ_INFO(1000, 892, INTEL_BUS_CLK), FREQ_INFO( 900, 876, INTEL_BUS_CLK), FREQ_INFO( 800, 860, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_773I_90[] = { /* 90 nm 1.30GHz Ultra Low Voltage Pentium M, VID #I */ FREQ_INFO(1300, 924, INTEL_BUS_CLK), FREQ_INFO(1200, 908, 
INTEL_BUS_CLK), FREQ_INFO(1100, 892, INTEL_BUS_CLK), FREQ_INFO(1000, 876, INTEL_BUS_CLK), FREQ_INFO( 900, 860, INTEL_BUS_CLK), FREQ_INFO( 800, 844, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_773J_90[] = { /* 90 nm 1.30GHz Ultra Low Voltage Pentium M, VID #J */ FREQ_INFO(1300, 908, INTEL_BUS_CLK), FREQ_INFO(1200, 908, INTEL_BUS_CLK), FREQ_INFO(1100, 892, INTEL_BUS_CLK), FREQ_INFO(1000, 876, INTEL_BUS_CLK), FREQ_INFO( 900, 860, INTEL_BUS_CLK), FREQ_INFO( 800, 844, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_773K_90[] = { /* 90 nm 1.30GHz Ultra Low Voltage Pentium M, VID #K */ FREQ_INFO(1300, 892, INTEL_BUS_CLK), FREQ_INFO(1200, 892, INTEL_BUS_CLK), FREQ_INFO(1100, 876, INTEL_BUS_CLK), FREQ_INFO(1000, 860, INTEL_BUS_CLK), FREQ_INFO( 900, 860, INTEL_BUS_CLK), FREQ_INFO( 800, 844, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_773L_90[] = { /* 90 nm 1.30GHz Ultra Low Voltage Pentium M, VID #L */ FREQ_INFO(1300, 876, INTEL_BUS_CLK), FREQ_INFO(1200, 876, INTEL_BUS_CLK), FREQ_INFO(1100, 860, INTEL_BUS_CLK), FREQ_INFO(1000, 860, INTEL_BUS_CLK), FREQ_INFO( 900, 844, INTEL_BUS_CLK), FREQ_INFO( 800, 844, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_753G_90[] = { /* 90 nm 1.20GHz Ultra Low Voltage Pentium M, VID #G */ FREQ_INFO(1200, 956, INTEL_BUS_CLK), FREQ_INFO(1100, 940, INTEL_BUS_CLK), FREQ_INFO(1000, 908, INTEL_BUS_CLK), FREQ_INFO( 900, 892, INTEL_BUS_CLK), FREQ_INFO( 800, 860, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_753H_90[] = { /* 90 nm 1.20GHz Ultra Low Voltage Pentium M, VID #H */ FREQ_INFO(1200, 940, INTEL_BUS_CLK), FREQ_INFO(1100, 924, INTEL_BUS_CLK), FREQ_INFO(1000, 908, INTEL_BUS_CLK), FREQ_INFO( 900, 876, INTEL_BUS_CLK), FREQ_INFO( 800, 860, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_753I_90[] = { /* 90 nm 1.20GHz Ultra Low Voltage Pentium M, VID #I */ FREQ_INFO(1200, 
924, INTEL_BUS_CLK), FREQ_INFO(1100, 908, INTEL_BUS_CLK), FREQ_INFO(1000, 892, INTEL_BUS_CLK), FREQ_INFO( 900, 876, INTEL_BUS_CLK), FREQ_INFO( 800, 860, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_753J_90[] = { /* 90 nm 1.20GHz Ultra Low Voltage Pentium M, VID #J */ FREQ_INFO(1200, 908, INTEL_BUS_CLK), FREQ_INFO(1100, 892, INTEL_BUS_CLK), FREQ_INFO(1000, 876, INTEL_BUS_CLK), FREQ_INFO( 900, 860, INTEL_BUS_CLK), FREQ_INFO( 800, 844, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_753K_90[] = { /* 90 nm 1.20GHz Ultra Low Voltage Pentium M, VID #K */ FREQ_INFO(1200, 892, INTEL_BUS_CLK), FREQ_INFO(1100, 892, INTEL_BUS_CLK), FREQ_INFO(1000, 876, INTEL_BUS_CLK), FREQ_INFO( 900, 860, INTEL_BUS_CLK), FREQ_INFO( 800, 844, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_753L_90[] = { /* 90 nm 1.20GHz Ultra Low Voltage Pentium M, VID #L */ FREQ_INFO(1200, 876, INTEL_BUS_CLK), FREQ_INFO(1100, 876, INTEL_BUS_CLK), FREQ_INFO(1000, 860, INTEL_BUS_CLK), FREQ_INFO( 900, 844, INTEL_BUS_CLK), FREQ_INFO( 800, 844, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_733JG_90[] = { /* 90 nm 1.10GHz Ultra Low Voltage Pentium M, VID #G */ FREQ_INFO(1100, 956, INTEL_BUS_CLK), FREQ_INFO(1000, 940, INTEL_BUS_CLK), FREQ_INFO( 900, 908, INTEL_BUS_CLK), FREQ_INFO( 800, 876, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_733JH_90[] = { /* 90 nm 1.10GHz Ultra Low Voltage Pentium M, VID #H */ FREQ_INFO(1100, 940, INTEL_BUS_CLK), FREQ_INFO(1000, 924, INTEL_BUS_CLK), FREQ_INFO( 900, 892, INTEL_BUS_CLK), FREQ_INFO( 800, 876, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_733JI_90[] = { /* 90 nm 1.10GHz Ultra Low Voltage Pentium M, VID #I */ FREQ_INFO(1100, 924, INTEL_BUS_CLK), FREQ_INFO(1000, 908, INTEL_BUS_CLK), FREQ_INFO( 900, 892, INTEL_BUS_CLK), FREQ_INFO( 800, 860, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static 
freq_info PM_733JJ_90[] = { /* 90 nm 1.10GHz Ultra Low Voltage Pentium M, VID #J */ FREQ_INFO(1100, 908, INTEL_BUS_CLK), FREQ_INFO(1000, 892, INTEL_BUS_CLK), FREQ_INFO( 900, 876, INTEL_BUS_CLK), FREQ_INFO( 800, 860, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_733JK_90[] = { /* 90 nm 1.10GHz Ultra Low Voltage Pentium M, VID #K */ FREQ_INFO(1100, 892, INTEL_BUS_CLK), FREQ_INFO(1000, 876, INTEL_BUS_CLK), FREQ_INFO( 900, 860, INTEL_BUS_CLK), FREQ_INFO( 800, 844, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_733JL_90[] = { /* 90 nm 1.10GHz Ultra Low Voltage Pentium M, VID #L */ FREQ_INFO(1100, 876, INTEL_BUS_CLK), FREQ_INFO(1000, 876, INTEL_BUS_CLK), FREQ_INFO( 900, 860, INTEL_BUS_CLK), FREQ_INFO( 800, 844, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_733_90[] = { /* 90 nm 1.10GHz Ultra Low Voltage Pentium M */ FREQ_INFO(1100, 940, INTEL_BUS_CLK), FREQ_INFO(1000, 924, INTEL_BUS_CLK), FREQ_INFO( 900, 892, INTEL_BUS_CLK), FREQ_INFO( 800, 876, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; static freq_info PM_723_90[] = { /* 90 nm 1.00GHz Ultra Low Voltage Pentium M */ FREQ_INFO(1000, 940, INTEL_BUS_CLK), FREQ_INFO( 900, 908, INTEL_BUS_CLK), FREQ_INFO( 800, 876, INTEL_BUS_CLK), FREQ_INFO( 600, 812, INTEL_BUS_CLK), }; /* * VIA C7-M 533 MHz FSB, 400 MHz FSB, and ULV variants. * Data from the "VIA C7-M Processor BIOS Writer's Guide (v2.17)" datasheet.
*/ static freq_info C7M_795[] = { /* 2.00GHz Centaur C7-M 533 MHz FSB */ FREQ_INFO_PWR(2000, 1148, 133, 20000), FREQ_INFO_PWR(1867, 1132, 133, 18000), FREQ_INFO_PWR(1600, 1100, 133, 15000), FREQ_INFO_PWR(1467, 1052, 133, 13000), FREQ_INFO_PWR(1200, 1004, 133, 10000), FREQ_INFO_PWR( 800, 844, 133, 7000), FREQ_INFO_PWR( 667, 844, 133, 6000), FREQ_INFO_PWR( 533, 844, 133, 5000), }; static freq_info C7M_785[] = { /* 1.80GHz Centaur C7-M 533 MHz FSB */ FREQ_INFO_PWR(1867, 1148, 133, 18000), FREQ_INFO_PWR(1600, 1100, 133, 15000), FREQ_INFO_PWR(1467, 1052, 133, 13000), FREQ_INFO_PWR(1200, 1004, 133, 10000), FREQ_INFO_PWR( 800, 844, 133, 7000), FREQ_INFO_PWR( 667, 844, 133, 6000), FREQ_INFO_PWR( 533, 844, 133, 5000), }; static freq_info C7M_765[] = { /* 1.60GHz Centaur C7-M 533 MHz FSB */ FREQ_INFO_PWR(1600, 1084, 133, 15000), FREQ_INFO_PWR(1467, 1052, 133, 13000), FREQ_INFO_PWR(1200, 1004, 133, 10000), FREQ_INFO_PWR( 800, 844, 133, 7000), FREQ_INFO_PWR( 667, 844, 133, 6000), FREQ_INFO_PWR( 533, 844, 133, 5000), }; static freq_info C7M_794[] = { /* 2.00GHz Centaur C7-M 400 MHz FSB */ FREQ_INFO_PWR(2000, 1148, 100, 20000), FREQ_INFO_PWR(1800, 1132, 100, 18000), FREQ_INFO_PWR(1600, 1100, 100, 15000), FREQ_INFO_PWR(1400, 1052, 100, 13000), FREQ_INFO_PWR(1000, 1004, 100, 10000), FREQ_INFO_PWR( 800, 844, 100, 7000), FREQ_INFO_PWR( 600, 844, 100, 6000), FREQ_INFO_PWR( 400, 844, 100, 5000), }; static freq_info C7M_784[] = { /* 1.80GHz Centaur C7-M 400 MHz FSB */ FREQ_INFO_PWR(1800, 1148, 100, 18000), FREQ_INFO_PWR(1600, 1100, 100, 15000), FREQ_INFO_PWR(1400, 1052, 100, 13000), FREQ_INFO_PWR(1000, 1004, 100, 10000), FREQ_INFO_PWR( 800, 844, 100, 7000), FREQ_INFO_PWR( 600, 844, 100, 6000), FREQ_INFO_PWR( 400, 844, 100, 5000), }; static freq_info C7M_764[] = { /* 1.60GHz Centaur C7-M 400 MHz FSB */ FREQ_INFO_PWR(1600, 1084, 100, 15000), FREQ_INFO_PWR(1400, 1052, 100, 13000), FREQ_INFO_PWR(1000, 1004, 100, 10000), FREQ_INFO_PWR( 800, 844, 100, 7000), FREQ_INFO_PWR( 600, 844, 100,
6000), FREQ_INFO_PWR( 400, 844, 100, 5000), }; static freq_info C7M_754[] = { /* 1.50GHz Centaur C7-M 400 MHz FSB */ FREQ_INFO_PWR(1500, 1004, 100, 12000), FREQ_INFO_PWR(1400, 988, 100, 11000), FREQ_INFO_PWR(1000, 940, 100, 9000), FREQ_INFO_PWR( 800, 844, 100, 7000), FREQ_INFO_PWR( 600, 844, 100, 6000), FREQ_INFO_PWR( 400, 844, 100, 5000), }; static freq_info C7M_771[] = { /* 1.20GHz Centaur C7-M 400 MHz FSB */ FREQ_INFO_PWR(1200, 860, 100, 7000), FREQ_INFO_PWR(1000, 860, 100, 6000), FREQ_INFO_PWR( 800, 844, 100, 5500), FREQ_INFO_PWR( 600, 844, 100, 5000), FREQ_INFO_PWR( 400, 844, 100, 4000), }; static freq_info C7M_775_ULV[] = { /* 1.50GHz Centaur C7-M ULV */ FREQ_INFO_PWR(1500, 956, 100, 7500), FREQ_INFO_PWR(1400, 940, 100, 6000), FREQ_INFO_PWR(1000, 860, 100, 5000), FREQ_INFO_PWR( 800, 828, 100, 2800), FREQ_INFO_PWR( 600, 796, 100, 2500), FREQ_INFO_PWR( 400, 796, 100, 2000), }; static freq_info C7M_772_ULV[] = { /* 1.20GHz Centaur C7-M ULV */ FREQ_INFO_PWR(1200, 844, 100, 5000), FREQ_INFO_PWR(1000, 844, 100, 4000), FREQ_INFO_PWR( 800, 828, 100, 2800), FREQ_INFO_PWR( 600, 796, 100, 2500), FREQ_INFO_PWR( 400, 796, 100, 2000), }; static freq_info C7M_779_ULV[] = { /* 1.00GHz Centaur C7-M ULV */ FREQ_INFO_PWR(1000, 796, 100, 3500), FREQ_INFO_PWR( 800, 796, 100, 2800), FREQ_INFO_PWR( 600, 796, 100, 2500), FREQ_INFO_PWR( 400, 796, 100, 2000), }; static freq_info C7M_770_ULV[] = { /* 1.00GHz Centaur C7-M ULV */ FREQ_INFO_PWR(1000, 844, 100, 5000), FREQ_INFO_PWR( 800, 796, 100, 2800), FREQ_INFO_PWR( 600, 796, 100, 2500), FREQ_INFO_PWR( 400, 796, 100, 2000), }; static cpu_info ESTprocs[] = { INTEL(PM17_130, 1700, 1484, 600, 956, INTEL_BUS_CLK), INTEL(PM16_130, 1600, 1484, 600, 956, INTEL_BUS_CLK), INTEL(PM15_130, 1500, 1484, 600, 956, INTEL_BUS_CLK), INTEL(PM14_130, 1400, 1484, 600, 956, INTEL_BUS_CLK), INTEL(PM13_130, 1300, 1388, 600, 956, INTEL_BUS_CLK), INTEL(PM13_LV_130, 1300, 1180, 600, 956, INTEL_BUS_CLK), INTEL(PM12_LV_130, 1200, 1180, 600, 956, INTEL_BUS_CLK),
INTEL(PM11_LV_130, 1100, 1180, 600, 956, INTEL_BUS_CLK), INTEL(PM11_ULV_130, 1100, 1004, 600, 844, INTEL_BUS_CLK), INTEL(PM10_ULV_130, 1000, 1004, 600, 844, INTEL_BUS_CLK), INTEL(PM_765A_90, 2100, 1340, 600, 988, INTEL_BUS_CLK), INTEL(PM_765B_90, 2100, 1324, 600, 988, INTEL_BUS_CLK), INTEL(PM_765C_90, 2100, 1308, 600, 988, INTEL_BUS_CLK), INTEL(PM_765E_90, 2100, 1356, 600, 988, INTEL_BUS_CLK), INTEL(PM_755A_90, 2000, 1340, 600, 988, INTEL_BUS_CLK), INTEL(PM_755B_90, 2000, 1324, 600, 988, INTEL_BUS_CLK), INTEL(PM_755C_90, 2000, 1308, 600, 988, INTEL_BUS_CLK), INTEL(PM_755D_90, 2000, 1276, 600, 988, INTEL_BUS_CLK), INTEL(PM_745A_90, 1800, 1340, 600, 988, INTEL_BUS_CLK), INTEL(PM_745B_90, 1800, 1324, 600, 988, INTEL_BUS_CLK), INTEL(PM_745C_90, 1800, 1308, 600, 988, INTEL_BUS_CLK), INTEL(PM_745D_90, 1800, 1276, 600, 988, INTEL_BUS_CLK), INTEL(PM_735A_90, 1700, 1340, 600, 988, INTEL_BUS_CLK), INTEL(PM_735B_90, 1700, 1324, 600, 988, INTEL_BUS_CLK), INTEL(PM_735C_90, 1700, 1308, 600, 988, INTEL_BUS_CLK), INTEL(PM_735D_90, 1700, 1276, 600, 988, INTEL_BUS_CLK), INTEL(PM_725A_90, 1600, 1340, 600, 988, INTEL_BUS_CLK), INTEL(PM_725B_90, 1600, 1324, 600, 988, INTEL_BUS_CLK), INTEL(PM_725C_90, 1600, 1308, 600, 988, INTEL_BUS_CLK), INTEL(PM_725D_90, 1600, 1276, 600, 988, INTEL_BUS_CLK), INTEL(PM_715A_90, 1500, 1340, 600, 988, INTEL_BUS_CLK), INTEL(PM_715B_90, 1500, 1324, 600, 988, INTEL_BUS_CLK), INTEL(PM_715C_90, 1500, 1308, 600, 988, INTEL_BUS_CLK), INTEL(PM_715D_90, 1500, 1276, 600, 988, INTEL_BUS_CLK), INTEL(PM_778_90, 1600, 1116, 600, 988, INTEL_BUS_CLK), INTEL(PM_758_90, 1500, 1116, 600, 988, INTEL_BUS_CLK), INTEL(PM_738_90, 1400, 1116, 600, 988, INTEL_BUS_CLK), INTEL(PM_773G_90, 1300, 956, 600, 812, INTEL_BUS_CLK), INTEL(PM_773H_90, 1300, 940, 600, 812, INTEL_BUS_CLK), INTEL(PM_773I_90, 1300, 924, 600, 812, INTEL_BUS_CLK), INTEL(PM_773J_90, 1300, 908, 600, 812, INTEL_BUS_CLK), INTEL(PM_773K_90, 1300, 892, 600, 812, INTEL_BUS_CLK), INTEL(PM_773L_90, 1300, 876, 600, 812, 
INTEL_BUS_CLK), INTEL(PM_753G_90, 1200, 956, 600, 812, INTEL_BUS_CLK), INTEL(PM_753H_90, 1200, 940, 600, 812, INTEL_BUS_CLK), INTEL(PM_753I_90, 1200, 924, 600, 812, INTEL_BUS_CLK), INTEL(PM_753J_90, 1200, 908, 600, 812, INTEL_BUS_CLK), INTEL(PM_753K_90, 1200, 892, 600, 812, INTEL_BUS_CLK), INTEL(PM_753L_90, 1200, 876, 600, 812, INTEL_BUS_CLK), INTEL(PM_733JG_90, 1100, 956, 600, 812, INTEL_BUS_CLK), INTEL(PM_733JH_90, 1100, 940, 600, 812, INTEL_BUS_CLK), INTEL(PM_733JI_90, 1100, 924, 600, 812, INTEL_BUS_CLK), INTEL(PM_733JJ_90, 1100, 908, 600, 812, INTEL_BUS_CLK), INTEL(PM_733JK_90, 1100, 892, 600, 812, INTEL_BUS_CLK), INTEL(PM_733JL_90, 1100, 876, 600, 812, INTEL_BUS_CLK), INTEL(PM_733_90, 1100, 940, 600, 812, INTEL_BUS_CLK), INTEL(PM_723_90, 1000, 940, 600, 812, INTEL_BUS_CLK), CENTAUR(C7M_795, 2000, 1148, 533, 844, 133), CENTAUR(C7M_794, 2000, 1148, 400, 844, 100), CENTAUR(C7M_785, 1867, 1148, 533, 844, 133), CENTAUR(C7M_784, 1800, 1148, 400, 844, 100), CENTAUR(C7M_765, 1600, 1084, 533, 844, 133), CENTAUR(C7M_764, 1600, 1084, 400, 844, 100), CENTAUR(C7M_754, 1500, 1004, 400, 844, 100), CENTAUR(C7M_775_ULV, 1500, 956, 400, 796, 100), CENTAUR(C7M_771, 1200, 860, 400, 844, 100), CENTAUR(C7M_772_ULV, 1200, 844, 400, 796, 100), CENTAUR(C7M_779_ULV, 1000, 796, 400, 796, 100), CENTAUR(C7M_770_ULV, 1000, 844, 400, 796, 100), { 0, 0, NULL }, }; static void est_identify(driver_t *driver, device_t parent); static int est_features(driver_t *driver, u_int *features); static int est_probe(device_t parent); static int est_attach(device_t parent); static int est_detach(device_t parent); static int est_get_info(device_t dev); static int est_acpi_info(device_t dev, freq_info **freqs, size_t *freqslen); static int est_table_info(device_t dev, uint64_t msr, freq_info **freqs, size_t *freqslen); static int est_msr_info(device_t dev, uint64_t msr, freq_info **freqs, size_t *freqslen); static freq_info *est_get_current(freq_info *freq_list, size_t tablen); static int 
est_settings(device_t dev, struct cf_setting *sets, int *count); static int est_set(device_t dev, const struct cf_setting *set); static int est_get(device_t dev, struct cf_setting *set); static int est_type(device_t dev, int *type); static int est_set_id16(device_t dev, uint16_t id16, int need_check); static void est_get_id16(uint16_t *id16_p); static device_method_t est_methods[] = { /* Device interface */ DEVMETHOD(device_identify, est_identify), DEVMETHOD(device_probe, est_probe), DEVMETHOD(device_attach, est_attach), DEVMETHOD(device_detach, est_detach), /* cpufreq interface */ DEVMETHOD(cpufreq_drv_set, est_set), DEVMETHOD(cpufreq_drv_get, est_get), DEVMETHOD(cpufreq_drv_type, est_type), DEVMETHOD(cpufreq_drv_settings, est_settings), /* ACPI interface */ DEVMETHOD(acpi_get_features, est_features), {0, 0} }; static driver_t est_driver = { "est", est_methods, sizeof(struct est_softc), }; static devclass_t est_devclass; DRIVER_MODULE(est, cpu, est_driver, est_devclass, 0, 0); +MODULE_DEPEND(est, hwpstate_intel, 1, 1, 1); static int est_features(driver_t *driver, u_int *features) { /* * Notify the ACPI CPU that we support direct access to MSRs. * XXX C1 "I/O then Halt" seems necessary for some broken BIOS. */ *features = ACPI_CAP_PERF_MSRS | ACPI_CAP_C1_IO_HALT; return (0); } static void est_identify(driver_t *driver, device_t parent) { device_t child; + + /* + * Defer to hwpstate if it is present. This priority logic + * should be replaced with normal newbus probing in the + * future. + */ + intel_hwpstate_identify(NULL, parent); + if (device_find_child(parent, "hwpstate_intel", -1) != NULL) + return; /* Make sure we're not being doubly invoked. */ if (device_find_child(parent, "est", -1) != NULL) return; /* Check that CPUID is supported and the vendor is Intel.*/ if (cpu_high == 0 || (cpu_vendor_id != CPU_VENDOR_INTEL && cpu_vendor_id != CPU_VENDOR_CENTAUR)) return; /* * Check if the CPU supports EST. 
*/ if (!(cpu_feature2 & CPUID2_EST)) return; /* * We add a child for each CPU since settings must be performed * on each CPU in the SMP case. */ child = BUS_ADD_CHILD(parent, 10, "est", -1); if (child == NULL) device_printf(parent, "add est child failed\n"); } static int est_probe(device_t dev) { device_t perf_dev; uint64_t msr; int error, type; if (resource_disabled("est", 0)) return (ENXIO); /* * If the ACPI perf driver has attached and is not just offering * info, let it manage things. */ perf_dev = device_find_child(device_get_parent(dev), "acpi_perf", -1); if (perf_dev && device_is_attached(perf_dev)) { error = CPUFREQ_DRV_TYPE(perf_dev, &type); if (error == 0 && (type & CPUFREQ_FLAG_INFO_ONLY) == 0) return (ENXIO); } /* Attempt to enable SpeedStep if not currently enabled. */ msr = rdmsr(MSR_MISC_ENABLE); if ((msr & MSR_SS_ENABLE) == 0) { wrmsr(MSR_MISC_ENABLE, msr | MSR_SS_ENABLE); if (bootverbose) device_printf(dev, "enabling SpeedStep\n"); /* Check if the enable failed. */ msr = rdmsr(MSR_MISC_ENABLE); if ((msr & MSR_SS_ENABLE) == 0) { device_printf(dev, "failed to enable SpeedStep\n"); return (ENXIO); } } device_set_desc(dev, "Enhanced SpeedStep Frequency Control"); return (0); } static int est_attach(device_t dev) { struct est_softc *sc; sc = device_get_softc(dev); sc->dev = dev; /* On SMP systems we can't guarantee independent freq settings. */ if (strict == -1 && mp_ncpus > 1) strict = 0; /* Check CPU for supported settings. */ if (est_get_info(dev)) return (ENXIO); cpufreq_register(dev); return (0); } static int est_detach(device_t dev) { struct est_softc *sc; int error; error = cpufreq_unregister(dev); if (error) return (error); sc = device_get_softc(dev); if (sc->acpi_settings || sc->msr_settings) free(sc->freq_list, M_DEVBUF); return (0); } /* * Probe for supported CPU settings. First, check our static table of * settings. If no match, try using the ones offered by acpi_perf * (i.e., _PSS).
We use ACPI second because some systems (IBM R/T40 * series) export both legacy SMM IO-based access and direct MSR access * but the direct access specifies invalid values for _PSS. */ static int est_get_info(device_t dev) { struct est_softc *sc; uint64_t msr; int error; sc = device_get_softc(dev); msr = rdmsr(MSR_PERF_STATUS); error = est_table_info(dev, msr, &sc->freq_list, &sc->flist_len); if (error) error = est_acpi_info(dev, &sc->freq_list, &sc->flist_len); if (error) error = est_msr_info(dev, msr, &sc->freq_list, &sc->flist_len); if (error) { printf( "est: CPU supports Enhanced Speedstep, but is not recognized.\n" "est: cpu_vendor %s, msr %0jx\n", cpu_vendor, msr); return (ENXIO); } return (0); } static int est_acpi_info(device_t dev, freq_info **freqs, size_t *freqslen) { struct est_softc *sc; struct cf_setting *sets; freq_info *table; device_t perf_dev; int count, error, i, j; uint16_t saved_id16; perf_dev = device_find_child(device_get_parent(dev), "acpi_perf", -1); if (perf_dev == NULL || !device_is_attached(perf_dev)) return (ENXIO); /* Fetch settings from acpi_perf. */ sc = device_get_softc(dev); table = NULL; sets = malloc(MAX_SETTINGS * sizeof(*sets), M_TEMP, M_NOWAIT); if (sets == NULL) return (ENOMEM); count = MAX_SETTINGS; error = CPUFREQ_DRV_SETTINGS(perf_dev, sets, &count); if (error) goto out; /* Parse settings into our local table format. */ table = malloc(count * sizeof(*table), M_DEVBUF, M_NOWAIT); if (table == NULL) { error = ENOMEM; goto out; } est_get_id16(&saved_id16); for (i = 0, j = 0; i < count; i++) { /* * Confirm id16 value is correct. 
*/ if (sets[i].freq > 0) { error = est_set_id16(dev, sets[i].spec[0], strict); if (error != 0) { if (bootverbose) device_printf(dev, "Invalid freq %u, " "ignored.\n", sets[i].freq); continue; } table[j].freq = sets[i].freq; table[j].volts = sets[i].volts; table[j].id16 = sets[i].spec[0]; table[j].power = sets[i].power; ++j; } } /* restore saved setting */ est_set_id16(dev, saved_id16, 0); sc->acpi_settings = TRUE; *freqs = table; *freqslen = j; error = 0; out: if (sets) free(sets, M_TEMP); if (error && table) free(table, M_DEVBUF); return (error); } static int est_table_info(device_t dev, uint64_t msr, freq_info **freqs, size_t *freqslen) { cpu_info *p; uint32_t id; /* Find a table which matches (vendor, id32). */ id = msr >> 32; for (p = ESTprocs; p->id32 != 0; p++) { if (p->vendor_id == cpu_vendor_id && p->id32 == id) break; } if (p->id32 == 0) return (EOPNOTSUPP); /* Make sure the current setpoint is valid. */ if (est_get_current(p->freqtab, p->tablen) == NULL) { device_printf(dev, "current setting not found in table\n"); return (EOPNOTSUPP); } *freqs = p->freqtab; *freqslen = p->tablen; return (0); } static int bus_speed_ok(int bus) { switch (bus) { case 100: case 133: case 333: return (1); default: return (0); } } /* * Flesh out a simple rate table containing the high and low frequencies * based on the current clock speed and the upper 32 bits of the MSR. */ static int est_msr_info(device_t dev, uint64_t msr, freq_info **freqs, size_t *freqslen) { struct est_softc *sc; freq_info *fp; int bus, freq, volts; uint16_t id; if (!msr_info_enabled) return (EOPNOTSUPP); /* Figure out the bus clock. */ freq = atomic_load_acq_64(&tsc_freq) / 1000000; id = msr >> 32; bus = freq / (id >> 8); device_printf(dev, "Guessed bus clock (high) of %d MHz\n", bus); if (!bus_speed_ok(bus)) { /* We may be running on the low frequency. 
*/ id = msr >> 48; bus = freq / (id >> 8); device_printf(dev, "Guessed bus clock (low) of %d MHz\n", bus); if (!bus_speed_ok(bus)) return (EOPNOTSUPP); /* Calculate high frequency. */ id = msr >> 32; freq = ((id >> 8) & 0xff) * bus; } /* Fill out a new freq table containing just the high and low freqs. */ sc = device_get_softc(dev); fp = malloc(sizeof(freq_info) * 2, M_DEVBUF, M_WAITOK | M_ZERO); /* First, the high frequency. */ volts = id & 0xff; if (volts != 0) { volts <<= 4; volts += 700; } fp[0].freq = freq; fp[0].volts = volts; fp[0].id16 = id; fp[0].power = CPUFREQ_VAL_UNKNOWN; device_printf(dev, "Guessed high setting of %d MHz @ %d mV\n", freq, volts); /* Second, the low frequency. */ id = msr >> 48; freq = ((id >> 8) & 0xff) * bus; volts = id & 0xff; if (volts != 0) { volts <<= 4; volts += 700; } fp[1].freq = freq; fp[1].volts = volts; fp[1].id16 = id; fp[1].power = CPUFREQ_VAL_UNKNOWN; device_printf(dev, "Guessed low setting of %d MHz @ %d mV\n", freq, volts); /* Table is already terminated due to M_ZERO. */ sc->msr_settings = TRUE; *freqs = fp; *freqslen = 2; return (0); } static void est_get_id16(uint16_t *id16_p) { *id16_p = rdmsr(MSR_PERF_STATUS) & 0xffff; } static int est_set_id16(device_t dev, uint16_t id16, int need_check) { uint64_t msr; uint16_t new_id16; int ret = 0; /* Read the current register, mask out the old, set the new id. */ msr = rdmsr(MSR_PERF_CTL); msr = (msr & ~0xffff) | id16; wrmsr(MSR_PERF_CTL, msr); if (need_check) { /* Wait a short while and read the new status. */ DELAY(EST_TRANS_LAT); est_get_id16(&new_id16); if (new_id16 != id16) { if (bootverbose) device_printf(dev, "Invalid id16 (set, cur) " "= (%u, %u)\n", id16, new_id16); ret = ENXIO; } } return (ret); } static freq_info * est_get_current(freq_info *freq_list, size_t tablen) { freq_info *f; int i; uint16_t id16; /* * Try a few times to get a valid value.
Sometimes, if the CPU * is in the middle of an asynchronous transition (i.e., P4TCC), * we get a temporary invalid result. */ for (i = 0; i < 5; i++) { est_get_id16(&id16); for (f = freq_list; f < freq_list + tablen; f++) { if (f->id16 == id16) return (f); } DELAY(100); } return (NULL); } static int est_settings(device_t dev, struct cf_setting *sets, int *count) { struct est_softc *sc; freq_info *f; int i; sc = device_get_softc(dev); if (*count < EST_MAX_SETTINGS) return (E2BIG); i = 0; for (f = sc->freq_list; f < sc->freq_list + sc->flist_len; f++, i++) { sets[i].freq = f->freq; sets[i].volts = f->volts; sets[i].power = f->power; sets[i].lat = EST_TRANS_LAT; sets[i].dev = dev; } *count = i; return (0); } static int est_set(device_t dev, const struct cf_setting *set) { struct est_softc *sc; freq_info *f; /* Find the setting matching the requested one. */ sc = device_get_softc(dev); for (f = sc->freq_list; f < sc->freq_list + sc->flist_len; f++) { if (f->freq == set->freq) break; } if (f->freq == 0) return (EINVAL); /* Read the current register, mask out the old, set the new id. 
*/ est_set_id16(dev, f->id16, 0); return (0); } static int est_get(device_t dev, struct cf_setting *set) { struct est_softc *sc; freq_info *f; sc = device_get_softc(dev); f = est_get_current(sc->freq_list, sc->flist_len); if (f == NULL) return (ENXIO); set->freq = f->freq; set->volts = f->volts; set->power = f->power; set->lat = EST_TRANS_LAT; set->dev = dev; return (0); } static int est_type(device_t dev, int *type) { if (type == NULL) return (EINVAL); *type = CPUFREQ_TYPE_ABSOLUTE; return (0); } Index: head/sys/x86/cpufreq/hwpstate_amd.c =================================================================== --- head/sys/x86/cpufreq/hwpstate_amd.c (nonexistent) +++ head/sys/x86/cpufreq/hwpstate_amd.c (revision 357002) @@ -0,0 +1,543 @@ +/*- + * SPDX-License-Identifier: BSD-2-Clause-FreeBSD + * + * Copyright (c) 2005 Nate Lawson + * Copyright (c) 2004 Colin Percival + * Copyright (c) 2004-2005 Bruno Durcot + * Copyright (c) 2004 FUKUDA Nobuhiko + * Copyright (c) 2009 Michael Reifenberger + * Copyright (c) 2009 Norikatsu Shigemura + * Copyright (c) 2008-2009 Gen Otsuji + * + * This code is depending on kern_cpu.c, est.c, powernow.c, p4tcc.c, smist.c + * in various parts. The authors of these files are Nate Lawson, + * Colin Percival, Bruno Durcot, and FUKUDA Nobuhiko. + * This code contains patches by Michael Reifenberger and Norikatsu Shigemura. + * Thank you. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted providing that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR``AS IS'' AND ANY EXPRESS OR + * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGE. + */ + +/* + * For more info: + * BIOS and Kernel Developer's Guide(BKDG) for AMD Family 10h Processors + * 31116 Rev 3.20 February 04, 2009 + * BIOS and Kernel Developer's Guide(BKDG) for AMD Family 11h Processors + * 41256 Rev 3.00 - July 07, 2008 + */ + +#include +__FBSDID("$FreeBSD$"); + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include + +#include + +#include "acpi_if.h" +#include "cpufreq_if.h" + +#define MSR_AMD_10H_11H_LIMIT 0xc0010061 +#define MSR_AMD_10H_11H_CONTROL 0xc0010062 +#define MSR_AMD_10H_11H_STATUS 0xc0010063 +#define MSR_AMD_10H_11H_CONFIG 0xc0010064 + +#define AMD_10H_11H_MAX_STATES 16 + +/* for MSR_AMD_10H_11H_LIMIT C001_0061 */ +#define AMD_10H_11H_GET_PSTATE_MAX_VAL(msr) (((msr) >> 4) & 0x7) +#define AMD_10H_11H_GET_PSTATE_LIMIT(msr) (((msr)) & 0x7) +/* for MSR_AMD_10H_11H_CONFIG 10h:C001_0064:68 / 11h:C001_0064:6B */ +#define AMD_10H_11H_CUR_VID(msr) (((msr) >> 9) & 0x7F) +#define AMD_10H_11H_CUR_DID(msr) (((msr) >> 6) & 0x07) +#define AMD_10H_11H_CUR_FID(msr) ((msr) & 0x3F) + +#define AMD_17H_CUR_VID(msr) (((msr) >> 14) & 0xFF) +#define AMD_17H_CUR_DID(msr) (((msr) >> 8) & 0x3F) +#define AMD_17H_CUR_FID(msr) 
((msr) & 0xFF) + +#define HWPSTATE_DEBUG(dev, msg...) \ + do { \ + if (hwpstate_verbose) \ + device_printf(dev, msg); \ + } while (0) + +struct hwpstate_setting { + int freq; /* CPU clock in Mhz or 100ths of a percent. */ + int volts; /* Voltage in mV. */ + int power; /* Power consumed in mW. */ + int lat; /* Transition latency in us. */ + int pstate_id; /* P-State id */ +}; + +struct hwpstate_softc { + device_t dev; + struct hwpstate_setting hwpstate_settings[AMD_10H_11H_MAX_STATES]; + int cfnum; +}; + +static void hwpstate_identify(driver_t *driver, device_t parent); +static int hwpstate_probe(device_t dev); +static int hwpstate_attach(device_t dev); +static int hwpstate_detach(device_t dev); +static int hwpstate_set(device_t dev, const struct cf_setting *cf); +static int hwpstate_get(device_t dev, struct cf_setting *cf); +static int hwpstate_settings(device_t dev, struct cf_setting *sets, int *count); +static int hwpstate_type(device_t dev, int *type); +static int hwpstate_shutdown(device_t dev); +static int hwpstate_features(driver_t *driver, u_int *features); +static int hwpstate_get_info_from_acpi_perf(device_t dev, device_t perf_dev); +static int hwpstate_get_info_from_msr(device_t dev); +static int hwpstate_goto_pstate(device_t dev, int pstate_id); + +static int hwpstate_verbose; +SYSCTL_INT(_debug, OID_AUTO, hwpstate_verbose, CTLFLAG_RWTUN, + &hwpstate_verbose, 0, "Debug hwpstate"); + +static int hwpstate_verify; +SYSCTL_INT(_debug, OID_AUTO, hwpstate_verify, CTLFLAG_RWTUN, + &hwpstate_verify, 0, "Verify P-state after setting"); + +static device_method_t hwpstate_methods[] = { + /* Device interface */ + DEVMETHOD(device_identify, hwpstate_identify), + DEVMETHOD(device_probe, hwpstate_probe), + DEVMETHOD(device_attach, hwpstate_attach), + DEVMETHOD(device_detach, hwpstate_detach), + DEVMETHOD(device_shutdown, hwpstate_shutdown), + + /* cpufreq interface */ + DEVMETHOD(cpufreq_drv_set, hwpstate_set), + DEVMETHOD(cpufreq_drv_get, hwpstate_get), + 
DEVMETHOD(cpufreq_drv_settings, hwpstate_settings), + DEVMETHOD(cpufreq_drv_type, hwpstate_type), + + /* ACPI interface */ + DEVMETHOD(acpi_get_features, hwpstate_features), + + {0, 0} +}; + +static devclass_t hwpstate_devclass; +static driver_t hwpstate_driver = { + "hwpstate", + hwpstate_methods, + sizeof(struct hwpstate_softc), +}; + +DRIVER_MODULE(hwpstate, cpu, hwpstate_driver, hwpstate_devclass, 0, 0); + +/* + * Go to Px-state on all cpus considering the limit. + */ +static int +hwpstate_goto_pstate(device_t dev, int id) +{ + sbintime_t sbt; + uint64_t msr; + int cpu, i, j, limit; + + /* get the current pstate limit */ + msr = rdmsr(MSR_AMD_10H_11H_LIMIT); + limit = AMD_10H_11H_GET_PSTATE_LIMIT(msr); + if (limit > id) + id = limit; + + cpu = curcpu; + HWPSTATE_DEBUG(dev, "setting P%d-state on cpu%d\n", id, cpu); + /* Go To Px-state */ + wrmsr(MSR_AMD_10H_11H_CONTROL, id); + + /* + * We are going to the same Px-state on all cpus. + * Probably should take _PSD into account. + */ + CPU_FOREACH(i) { + if (i == cpu) + continue; + + /* Bind to each cpu. */ + thread_lock(curthread); + sched_bind(curthread, i); + thread_unlock(curthread); + HWPSTATE_DEBUG(dev, "setting P%d-state on cpu%d\n", id, i); + /* Go To Px-state */ + wrmsr(MSR_AMD_10H_11H_CONTROL, id); + } + + /* + * Verify whether each core is in the requested P-state. + */ + if (hwpstate_verify) { + CPU_FOREACH(i) { + thread_lock(curthread); + sched_bind(curthread, i); + thread_unlock(curthread); + /* wait loop (100*100 usec is enough ?) */ + for (j = 0; j < 100; j++) { + /* get the result. 
The status MSR may not equal id immediately. */ + msr = rdmsr(MSR_AMD_10H_11H_STATUS); + if (msr == id) + break; + sbt = SBT_1MS / 10; + tsleep_sbt(dev, PZERO, "pstate_goto", sbt, + sbt >> tc_precexp, 0); + } + HWPSTATE_DEBUG(dev, "result: P%d-state on cpu%d\n", + (int)msr, i); + if (msr != id) { + HWPSTATE_DEBUG(dev, + "error: P-state transition did not complete.\n"); + return (ENXIO); + } + } + } + + return (0); +} + +static int +hwpstate_set(device_t dev, const struct cf_setting *cf) +{ + struct hwpstate_softc *sc; + struct hwpstate_setting *set; + int i; + + if (cf == NULL) + return (EINVAL); + sc = device_get_softc(dev); + set = sc->hwpstate_settings; + for (i = 0; i < sc->cfnum; i++) + if (CPUFREQ_CMP(cf->freq, set[i].freq)) + break; + if (i == sc->cfnum) + return (EINVAL); + + return (hwpstate_goto_pstate(dev, set[i].pstate_id)); +} + +static int +hwpstate_get(device_t dev, struct cf_setting *cf) +{ + struct hwpstate_softc *sc; + struct hwpstate_setting set; + uint64_t msr; + + sc = device_get_softc(dev); + if (cf == NULL) + return (EINVAL); + msr = rdmsr(MSR_AMD_10H_11H_STATUS); + if (msr >= sc->cfnum) + return (EINVAL); + set = sc->hwpstate_settings[msr]; + + cf->freq = set.freq; + cf->volts = set.volts; + cf->power = set.power; + cf->lat = set.lat; + cf->dev = dev; + return (0); +} + +static int +hwpstate_settings(device_t dev, struct cf_setting *sets, int *count) +{ + struct hwpstate_softc *sc; + struct hwpstate_setting set; + int i; + + if (sets == NULL || count == NULL) + return (EINVAL); + sc = device_get_softc(dev); + if (*count < sc->cfnum) + return (E2BIG); + for (i = 0; i < sc->cfnum; i++, sets++) { + set = sc->hwpstate_settings[i]; + sets->freq = set.freq; + sets->volts = set.volts; + sets->power = set.power; + sets->lat = set.lat; + sets->dev = dev; + } + *count = sc->cfnum; + + return (0); +} + +static int +hwpstate_type(device_t dev, int *type) +{ + + if (type == NULL) + return (EINVAL); + + *type = CPUFREQ_TYPE_ABSOLUTE; + return (0); +} + +static void +hwpstate_identify(driver_t 
*driver, device_t parent) +{ + + if (device_find_child(parent, "hwpstate", -1) != NULL) + return; + + if ((cpu_vendor_id != CPU_VENDOR_AMD || CPUID_TO_FAMILY(cpu_id) < 0x10) && + cpu_vendor_id != CPU_VENDOR_HYGON) + return; + + /* + * Check if hardware pstate enable bit is set. + */ + if ((amd_pminfo & AMDPM_HW_PSTATE) == 0) { + HWPSTATE_DEBUG(parent, "hwpstate enable bit is not set.\n"); + return; + } + + if (resource_disabled("hwpstate", 0)) + return; + + if (BUS_ADD_CHILD(parent, 10, "hwpstate", -1) == NULL) + device_printf(parent, "hwpstate: add child failed\n"); +} + +static int +hwpstate_probe(device_t dev) +{ + struct hwpstate_softc *sc; + device_t perf_dev; + uint64_t msr; + int error, type; + + /* + * Attach only to hwpstate0; it coexists with acpi_throttle. + */ + if (device_get_unit(dev) != 0) + return (ENXIO); + + sc = device_get_softc(dev); + sc->dev = dev; + + /* + * Check whether acpi_perf has the INFO_ONLY flag. + */ + perf_dev = device_find_child(device_get_parent(dev), "acpi_perf", -1); + error = TRUE; + if (perf_dev && device_is_attached(perf_dev)) { + error = CPUFREQ_DRV_TYPE(perf_dev, &type); + if (error == 0) { + if ((type & CPUFREQ_FLAG_INFO_ONLY) == 0) { + /* + * If acpi_perf doesn't have the INFO_ONLY flag, + * it will take care of pstate transitions. + */ + HWPSTATE_DEBUG(dev, "acpi_perf will take care of pstate transitions.\n"); + return (ENXIO); + } else { + /* + * If acpi_perf has the INFO_ONLY flag (_PCT has FFixedHW), + * we can get _PSS info from acpi_perf + * without going into ACPI. + */ + HWPSTATE_DEBUG(dev, "going to fetch info from acpi_perf\n"); + error = hwpstate_get_info_from_acpi_perf(dev, perf_dev); + } + } + } + + if (error == 0) { + /* + * We fetched _PSS info from acpi_perf without error; + * cross-check the state count against the MSR. 
+ */ + msr = rdmsr(MSR_AMD_10H_11H_LIMIT); + if (sc->cfnum != 1 + AMD_10H_11H_GET_PSTATE_MAX_VAL(msr)) { + HWPSTATE_DEBUG(dev, "MSR (%jd) and ACPI _PSS (%d)" + " count mismatch\n", (intmax_t)msr, sc->cfnum); + error = TRUE; + } + } + + /* + * If we could not get info from acpi_perf, + * fall back to reading the MSRs. + */ + if (error) + error = hwpstate_get_info_from_msr(dev); + if (error) + return (error); + + device_set_desc(dev, "Cool`n'Quiet 2.0"); + return (0); +} + +static int +hwpstate_attach(device_t dev) +{ + + return (cpufreq_register(dev)); +} + +static int +hwpstate_get_info_from_msr(device_t dev) +{ + struct hwpstate_softc *sc; + struct hwpstate_setting *hwpstate_set; + uint64_t msr; + int family, i, fid, did; + + family = CPUID_TO_FAMILY(cpu_id); + sc = device_get_softc(dev); + /* Get pstate count */ + msr = rdmsr(MSR_AMD_10H_11H_LIMIT); + sc->cfnum = 1 + AMD_10H_11H_GET_PSTATE_MAX_VAL(msr); + hwpstate_set = sc->hwpstate_settings; + for (i = 0; i < sc->cfnum; i++) { + msr = rdmsr(MSR_AMD_10H_11H_CONFIG + i); + if ((msr & ((uint64_t)1 << 63)) == 0) { + HWPSTATE_DEBUG(dev, "msr is not valid.\n"); + return (ENXIO); + } + did = AMD_10H_11H_CUR_DID(msr); + fid = AMD_10H_11H_CUR_FID(msr); + + /* Convert fid/did to frequency. */ + switch (family) { + case 0x11: + hwpstate_set[i].freq = (100 * (fid + 0x08)) >> did; + break; + case 0x10: + case 0x12: + case 0x15: + case 0x16: + hwpstate_set[i].freq = (100 * (fid + 0x10)) >> did; + break; + case 0x17: + case 0x18: + did = AMD_17H_CUR_DID(msr); + if (did == 0) { + HWPSTATE_DEBUG(dev, "unexpected did: 0\n"); + did = 1; + } + fid = AMD_17H_CUR_FID(msr); + hwpstate_set[i].freq = (200 * fid) / did; + break; + default: + HWPSTATE_DEBUG(dev, "get_info_from_msr: %s family" + " 0x%02x CPUs are not supported yet\n", + cpu_vendor_id == CPU_VENDOR_HYGON ? "Hygon" : "AMD", + family); + return (ENXIO); + } + hwpstate_set[i].pstate_id = i; + /* The voltage calculation was removed; report unknown values. 
*/ + hwpstate_set[i].volts = CPUFREQ_VAL_UNKNOWN; + hwpstate_set[i].power = CPUFREQ_VAL_UNKNOWN; + hwpstate_set[i].lat = CPUFREQ_VAL_UNKNOWN; + } + return (0); +} + +static int +hwpstate_get_info_from_acpi_perf(device_t dev, device_t perf_dev) +{ + struct hwpstate_softc *sc; + struct cf_setting *perf_set; + struct hwpstate_setting *hwpstate_set; + int count, error, i; + + perf_set = malloc(MAX_SETTINGS * sizeof(*perf_set), M_TEMP, M_NOWAIT); + if (perf_set == NULL) { + HWPSTATE_DEBUG(dev, "nomem\n"); + return (ENOMEM); + } + /* + * Fetch settings from acpi_perf; + * it is attached and has the INFO_ONLY flag. + */ + count = MAX_SETTINGS; + error = CPUFREQ_DRV_SETTINGS(perf_dev, perf_set, &count); + if (error) { + HWPSTATE_DEBUG(dev, "error: CPUFREQ_DRV_SETTINGS failed.\n"); + goto out; + } + sc = device_get_softc(dev); + sc->cfnum = count; + hwpstate_set = sc->hwpstate_settings; + for (i = 0; i < count; i++) { + if (i == perf_set[i].spec[0]) { + hwpstate_set[i].pstate_id = i; + hwpstate_set[i].freq = perf_set[i].freq; + hwpstate_set[i].volts = perf_set[i].volts; + hwpstate_set[i].power = perf_set[i].power; + hwpstate_set[i].lat = perf_set[i].lat; + } else { + HWPSTATE_DEBUG(dev, "ACPI _PSS object mismatch.\n"); + error = ENXIO; + goto out; + } + } +out: + if (perf_set) + free(perf_set, M_TEMP); + return (error); +} + +static int +hwpstate_detach(device_t dev) +{ + + hwpstate_goto_pstate(dev, 0); + return (cpufreq_unregister(dev)); +} + +static int +hwpstate_shutdown(device_t dev) +{ + + /* hwpstate_goto_pstate(dev, 0); */ + return (0); +} + +static int +hwpstate_features(driver_t *driver, u_int *features) +{ + + /* Notify the ACPI CPU that we support direct access to MSRs */ + *features = ACPI_CAP_PERF_MSRS; + return (0); +} Property changes on: head/sys/x86/cpufreq/hwpstate_amd.c ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No 
newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: head/sys/x86/cpufreq/hwpstate_intel.c =================================================================== --- head/sys/x86/cpufreq/hwpstate_intel.c (nonexistent) +++ head/sys/x86/cpufreq/hwpstate_intel.c (revision 357002) @@ -0,0 +1,516 @@ +/*- + * SPDX-License-Identifier: BSD-2-Clause-FreeBSD + * + * Copyright (c) 2018 Intel Corporation + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted providing that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR``AS IS'' AND ANY EXPRESS OR + * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include +__FBSDID("$FreeBSD$"); + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include + +#include + +#include + +#include "acpi_if.h" +#include "cpufreq_if.h" + +extern uint64_t tsc_freq; + +static int intel_hwpstate_probe(device_t dev); +static int intel_hwpstate_attach(device_t dev); +static int intel_hwpstate_detach(device_t dev); +static int intel_hwpstate_suspend(device_t dev); +static int intel_hwpstate_resume(device_t dev); + +static int intel_hwpstate_get(device_t dev, struct cf_setting *cf); +static int intel_hwpstate_type(device_t dev, int *type); + +static device_method_t intel_hwpstate_methods[] = { + /* Device interface */ + DEVMETHOD(device_identify, intel_hwpstate_identify), + DEVMETHOD(device_probe, intel_hwpstate_probe), + DEVMETHOD(device_attach, intel_hwpstate_attach), + DEVMETHOD(device_detach, intel_hwpstate_detach), + DEVMETHOD(device_suspend, intel_hwpstate_suspend), + DEVMETHOD(device_resume, intel_hwpstate_resume), + + /* cpufreq interface */ + DEVMETHOD(cpufreq_drv_get, intel_hwpstate_get), + DEVMETHOD(cpufreq_drv_type, intel_hwpstate_type), + + DEVMETHOD_END +}; + +struct hwp_softc { + device_t dev; + bool hwp_notifications; + bool hwp_activity_window; + bool hwp_pref_ctrl; + bool hwp_pkg_ctrl; + + uint64_t req; /* Cached copy of last request */ + + uint8_t high; + uint8_t guaranteed; + uint8_t efficient; + uint8_t low; +}; + +static devclass_t hwpstate_intel_devclass; +static driver_t hwpstate_intel_driver = { + "hwpstate_intel", + intel_hwpstate_methods, + sizeof(struct hwp_softc), +}; + +DRIVER_MODULE(hwpstate_intel, cpu, hwpstate_intel_driver, + hwpstate_intel_devclass, NULL, NULL); + +static int +intel_hwp_dump_sysctl_handler(SYSCTL_HANDLER_ARGS) +{ + device_t dev; + struct pcpu *pc; + struct sbuf *sb; + struct hwp_softc *sc; + uint64_t data, data2; + int ret; + + sc = (struct hwp_softc *)arg1; + dev = 
sc->dev; + + pc = cpu_get_pcpu(dev); + if (pc == NULL) + return (ENXIO); + + sb = sbuf_new(NULL, NULL, 1024, SBUF_FIXEDLEN | SBUF_INCLUDENUL); + sbuf_putc(sb, '\n'); + thread_lock(curthread); + sched_bind(curthread, pc->pc_cpuid); + thread_unlock(curthread); + + rdmsr_safe(MSR_IA32_PM_ENABLE, &data); + sbuf_printf(sb, "CPU%d: HWP %sabled\n", pc->pc_cpuid, + ((data & 1) ? "En" : "Dis")); + + if (data == 0) { + ret = 0; + goto out; + } + + rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, &data); + sbuf_printf(sb, "\tHighest Performance: %03lu\n", data & 0xff); + sbuf_printf(sb, "\tGuaranteed Performance: %03lu\n", (data >> 8) & 0xff); + sbuf_printf(sb, "\tEfficient Performance: %03lu\n", (data >> 16) & 0xff); + sbuf_printf(sb, "\tLowest Performance: %03lu\n", (data >> 24) & 0xff); + + rdmsr_safe(MSR_IA32_HWP_REQUEST, &data); + if (sc->hwp_pkg_ctrl && (data & IA32_HWP_REQUEST_PACKAGE_CONTROL)) { + rdmsr_safe(MSR_IA32_HWP_REQUEST_PKG, &data2); + } + + sbuf_putc(sb, '\n'); + +#define pkg_print(x, name, offset) do { \ + if (!sc->hwp_pkg_ctrl || (data & x) != 0) \ + sbuf_printf(sb, "\t%s: %03lu\n", name, (data >> offset) & 0xff);\ + else \ + sbuf_printf(sb, "\t%s: %03lu\n", name, (data2 >> offset) & 0xff);\ +} while (0) + + pkg_print(IA32_HWP_REQUEST_EPP_VALID, + "Requested Efficiency Performance Preference", 24); + pkg_print(IA32_HWP_REQUEST_DESIRED_VALID, + "Requested Desired Performance", 16); + pkg_print(IA32_HWP_REQUEST_MAXIMUM_VALID, + "Requested Maximum Performance", 8); + pkg_print(IA32_HWP_REQUEST_MINIMUM_VALID, + "Requested Minimum Performance", 0); +#undef pkg_print + + sbuf_putc(sb, '\n'); + +out: + thread_lock(curthread); + sched_unbind(curthread); + thread_unlock(curthread); + + ret = sbuf_finish(sb); + if (ret == 0) + ret = SYSCTL_OUT(req, sbuf_data(sb), sbuf_len(sb)); + sbuf_delete(sb); + + return (ret); +} + +static inline int +percent_to_raw(int x) +{ + + MPASS(x <= 100 && x >= 0); + return (0xff * x / 100); +} + +/* + * Given x * 10 in [0, 1000], round to the 
integer nearest x. + * + * This allows round-tripping nice, human-readable numbers through this + * interface. Otherwise, user-provided percentages such as 25, 50, 75 get + * rounded down to 24, 49, and 74, which is a bit ugly. + */ +static inline int +round10(int xtimes10) +{ + return ((xtimes10 + 5) / 10); +} + +static inline int +raw_to_percent(int x) +{ + MPASS(x <= 0xff && x >= 0); + return (round10(x * 1000 / 0xff)); +} + +static int +sysctl_epp_select(SYSCTL_HANDLER_ARGS) +{ + device_t dev; + struct pcpu *pc; + uint64_t requested; + uint32_t val; + int ret; + + dev = oidp->oid_arg1; + pc = cpu_get_pcpu(dev); + if (pc == NULL) + return (ENXIO); + + thread_lock(curthread); + sched_bind(curthread, pc->pc_cpuid); + thread_unlock(curthread); + + rdmsr_safe(MSR_IA32_HWP_REQUEST, &requested); + val = (requested & IA32_HWP_REQUEST_ENERGY_PERFORMANCE_PREFERENCE) >> 24; + val = raw_to_percent(val); + + MPASS(val >= 0 && val <= 100); + + ret = sysctl_handle_int(oidp, &val, 0, req); + if (ret || req->newptr == NULL) + goto out; + + if (val > 100) { + ret = EINVAL; + goto out; + } + + val = percent_to_raw(val); + + requested &= ~IA32_HWP_REQUEST_ENERGY_PERFORMANCE_PREFERENCE; + requested |= val << 24; + + wrmsr_safe(MSR_IA32_HWP_REQUEST, requested); + +out: + thread_lock(curthread); + sched_unbind(curthread); + thread_unlock(curthread); + + return (ret); +} + +void +intel_hwpstate_identify(driver_t *driver, device_t parent) +{ + uint32_t regs[4]; + + if (device_find_child(parent, "hwpstate_intel", -1) != NULL) + return; + + if (cpu_vendor_id != CPU_VENDOR_INTEL) + return; + + if (resource_disabled("hwpstate_intel", 0)) + return; + + /* + * Intel SDM 14.4.1 (HWP Programming Interfaces): + * The CPUID instruction allows software to discover the presence of + * HWP support in an Intel processor. Specifically, executing the CPUID + * instruction with EAX=06H as input returns 5 flag bits covering + * the following aspects in bits 7 through 11 of CPUID.06H:EAX. 
+ */ + + if (cpu_high < 6) + return; + + /* + * Intel SDM 14.4.1 (HWP Programming Interfaces): + * Availability of HWP baseline resource and capability, + * CPUID.06H:EAX[bit 7]: If this bit is set, HWP provides several new + * architectural MSRs: IA32_PM_ENABLE, IA32_HWP_CAPABILITIES, + * IA32_HWP_REQUEST, IA32_HWP_STATUS. + */ + + do_cpuid(6, regs); + if ((regs[0] & CPUTPM1_HWP) == 0) + return; + + if (BUS_ADD_CHILD(parent, 10, "hwpstate_intel", -1) == NULL) + return; + + if (bootverbose) + device_printf(parent, "hwpstate registered\n"); +} + +static int +intel_hwpstate_probe(device_t dev) +{ + + device_set_desc(dev, "Intel Speed Shift"); + return (BUS_PROBE_NOWILDCARD); +} + +/* FIXME: Need to support PKG variant */ +static int +set_autonomous_hwp(struct hwp_softc *sc) +{ + struct pcpu *pc; + device_t dev; + uint64_t caps; + int ret; + + dev = sc->dev; + + pc = cpu_get_pcpu(dev); + if (pc == NULL) + return (ENXIO); + + thread_lock(curthread); + sched_bind(curthread, pc->pc_cpuid); + thread_unlock(curthread); + + /* XXX: Many MSRs aren't readable until feature is enabled */ + ret = wrmsr_safe(MSR_IA32_PM_ENABLE, 1); + if (ret) { + device_printf(dev, "Failed to enable HWP for cpu%d (%d)\n", + pc->pc_cpuid, ret); + goto out; + } + + ret = rdmsr_safe(MSR_IA32_HWP_REQUEST, &sc->req); + if (ret) + return (ret); + + ret = rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, &caps); + if (ret) + return (ret); + + sc->high = IA32_HWP_CAPABILITIES_HIGHEST_PERFORMANCE(caps); + sc->guaranteed = IA32_HWP_CAPABILITIES_GUARANTEED_PERFORMANCE(caps); + sc->efficient = IA32_HWP_CAPABILITIES_EFFICIENT_PERFORMANCE(caps); + sc->low = IA32_HWP_CAPABILITIES_LOWEST_PERFORMANCE(caps); + + /* hardware autonomous selection determines the performance target */ + sc->req &= ~IA32_HWP_DESIRED_PERFORMANCE; + + /* enable HW dynamic selection of window size */ + sc->req &= ~IA32_HWP_ACTIVITY_WINDOW; + + /* IA32_HWP_REQUEST.Minimum_Performance = IA32_HWP_CAPABILITIES.Lowest_Performance */ + sc->req &= 
~IA32_HWP_MINIMUM_PERFORMANCE; + sc->req |= sc->low; + + /* IA32_HWP_REQUEST.Maximum_Performance = IA32_HWP_CAPABILITIES.Highest_Performance. */ + sc->req &= ~IA32_HWP_REQUEST_MAXIMUM_PERFORMANCE; + sc->req |= sc->high << 8; + + ret = wrmsr_safe(MSR_IA32_HWP_REQUEST, sc->req); + if (ret) { + device_printf(dev, + "Failed to setup autonomous HWP for cpu%d (file a bug)\n", + pc->pc_cpuid); + } + +out: + thread_lock(curthread); + sched_unbind(curthread); + thread_unlock(curthread); + + return (ret); +} + +static int +intel_hwpstate_attach(device_t dev) +{ + struct hwp_softc *sc; + uint32_t regs[4]; + int ret; + + sc = device_get_softc(dev); + sc->dev = dev; + + do_cpuid(6, regs); + if (regs[0] & CPUTPM1_HWP_NOTIFICATION) + sc->hwp_notifications = true; + if (regs[0] & CPUTPM1_HWP_ACTIVITY_WINDOW) + sc->hwp_activity_window = true; + if (regs[0] & CPUTPM1_HWP_PERF_PREF) + sc->hwp_pref_ctrl = true; + if (regs[0] & CPUTPM1_HWP_PKG) + sc->hwp_pkg_ctrl = true; + + ret = set_autonomous_hwp(sc); + if (ret) + return (ret); + + SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev), + SYSCTL_STATIC_CHILDREN(_debug), OID_AUTO, device_get_nameunit(dev), + CTLTYPE_STRING | CTLFLAG_RD | CTLFLAG_SKIP, + sc, 0, intel_hwp_dump_sysctl_handler, "A", ""); + + SYSCTL_ADD_PROC(device_get_sysctl_ctx(dev), + SYSCTL_CHILDREN(device_get_sysctl_tree(dev)), OID_AUTO, + "epp", CTLTYPE_INT | CTLFLAG_RWTUN, dev, sizeof(dev), + sysctl_epp_select, "I", + "Efficiency/Performance Preference " + "(range from 0, most performant, through 100, most efficient)"); + + return (cpufreq_register(dev)); +} + +static int +intel_hwpstate_detach(device_t dev) +{ + + return (cpufreq_unregister(dev)); +} + +static int +intel_hwpstate_get(device_t dev, struct cf_setting *set) +{ + struct pcpu *pc; + uint64_t rate; + int ret; + + if (set == NULL) + return (EINVAL); + + pc = cpu_get_pcpu(dev); + if (pc == NULL) + return (ENXIO); + + memset(set, CPUFREQ_VAL_UNKNOWN, sizeof(*set)); + set->dev = dev; + + ret = 
cpu_est_clockrate(pc->pc_cpuid, &rate); + if (ret == 0) + set->freq = rate / 1000000; + + set->volts = CPUFREQ_VAL_UNKNOWN; + set->power = CPUFREQ_VAL_UNKNOWN; + set->lat = CPUFREQ_VAL_UNKNOWN; + + return (0); +} + +static int +intel_hwpstate_type(device_t dev, int *type) +{ + if (type == NULL) + return (EINVAL); + *type = CPUFREQ_TYPE_ABSOLUTE | CPUFREQ_FLAG_INFO_ONLY | CPUFREQ_FLAG_UNCACHED; + + return (0); +} + +static int +intel_hwpstate_suspend(device_t dev) +{ + return (0); +} + +/* + * Redo a subset of set_autonomous_hwp on resume; untested. Without this, + * testers observed that on resume MSR_IA32_HWP_REQUEST was bogus. + */ +static int +intel_hwpstate_resume(device_t dev) +{ + struct hwp_softc *sc; + struct pcpu *pc; + int ret; + + sc = device_get_softc(dev); + + pc = cpu_get_pcpu(dev); + if (pc == NULL) + return (ENXIO); + + thread_lock(curthread); + sched_bind(curthread, pc->pc_cpuid); + thread_unlock(curthread); + + ret = wrmsr_safe(MSR_IA32_PM_ENABLE, 1); + if (ret) { + device_printf(dev, + "Failed to enable HWP for cpu%d after suspend (%d)\n", + pc->pc_cpuid, ret); + goto out; + } + + ret = wrmsr_safe(MSR_IA32_HWP_REQUEST, sc->req); + if (ret) { + device_printf(dev, + "Failed to setup autonomous HWP for cpu%d after suspend\n", + pc->pc_cpuid); + } + +out: + thread_lock(curthread); + sched_unbind(curthread); + thread_unlock(curthread); + + return (ret); +} Property changes on: head/sys/x86/cpufreq/hwpstate_intel.c ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property Index: head/sys/x86/cpufreq/hwpstate_intel_internal.h =================================================================== --- head/sys/x86/cpufreq/hwpstate_intel_internal.h (nonexistent) +++ head/sys/x86/cpufreq/hwpstate_intel_internal.h (revision 
357002) @@ -0,0 +1,35 @@ +/*- + * SPDX-License-Identifier: BSD-2-Clause-FreeBSD + * + * Copyright (c) 2018 Intel Corporation + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted providing that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR``AS IS'' AND ANY EXPRESS OR + * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGE. + * + * $FreeBSD$ + */ + +#ifndef __X86_CPUFREQ_HWPSTATE_INTEL_INTERNAL_H +#define __X86_CPUFREQ_HWPSTATE_INTEL_INTERNAL_H + +void intel_hwpstate_identify(driver_t *driver, device_t parent); + +#endif Property changes on: head/sys/x86/cpufreq/hwpstate_intel_internal.h ___________________________________________________________________ Added: svn:eol-style ## -0,0 +1 ## +native \ No newline at end of property Added: svn:keywords ## -0,0 +1 ## +FreeBSD=%H \ No newline at end of property Added: svn:mime-type ## -0,0 +1 ## +text/plain \ No newline at end of property
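The `round10()` trick in the Intel driver's EPP sysctl (percent to/from the 8-bit EPP field) is easiest to see in isolation. A minimal userspace sketch of that conversion, with the same arithmetic as `percent_to_raw()`, `round10()`, and `raw_to_percent()` above (the truncating variant is added here only for contrast; it is not in the driver):

```c
#include <assert.h>

/* Scale a percentage in [0, 100] onto the 8-bit EPP field. */
int
percent_to_raw(int x)
{
	assert(x >= 0 && x <= 100);
	return (0xff * x / 100);
}

/* Given x * 10 in [0, 1000], round to the integer nearest x. */
int
round10(int xtimes10)
{
	return ((xtimes10 + 5) / 10);
}

/* Map the 8-bit field back to a percentage, rounding as the driver does. */
int
raw_to_percent(int x)
{
	assert(x >= 0 && x <= 0xff);
	return (round10(x * 1000 / 0xff));
}

/* Same mapping with plain truncating division, for comparison only. */
int
raw_to_percent_trunc(int x)
{
	return (x * 100 / 0xff);
}
```

With the rounding step, 25, 50, and 75 survive a round trip through the MSR field; with plain truncation they come back as 24, 49, and 74, which is exactly the ugliness the driver's comment describes.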
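The per-family fid/did decoding in `hwpstate_get_info_from_msr()` can likewise be checked outside the kernel. A sketch of the same switch, with the MSR field-extraction macros replaced by plain parameters (the function name `pstate_freq_mhz` is invented for this example; the constants are taken from the driver code above):

```c
#include <assert.h>

/*
 * Core frequency in MHz from a P-state's frequency id (fid) and
 * divisor id (did), mirroring the per-family switch in
 * hwpstate_get_info_from_msr().
 */
int
pstate_freq_mhz(int family, int fid, int did)
{
	switch (family) {
	case 0x11:
		return ((100 * (fid + 0x08)) >> did);
	case 0x10:
	case 0x12:
	case 0x15:
	case 0x16:
		return ((100 * (fid + 0x10)) >> did);
	case 0x17:
	case 0x18:
		/* Zen uses a plain ratio rather than a power-of-two divisor. */
		if (did == 0)	/* guard against divide-by-zero, as the driver does */
			did = 1;
		return ((200 * fid) / did);
	default:
		return (-1);	/* family not handled by the driver */
	}
}
```

For instance, a family 17h P-state with fid 0x78 and did 0x08 decodes to 200 * 120 / 8 = 3000 MHz.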
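`set_autonomous_hwp()` edits individual byte fields of the cached IA32_HWP_REQUEST value: minimum performance in bits 0-7, maximum in bits 8-15, desired in 16-23, EPP in 24-31. A hedged userspace sketch of that masking; the `HWP_REQ_*` macros below are local stand-ins for the driver's `IA32_HWP_*` constants from `machine/specialreg.h`, not the real names:

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins for the IA32_HWP_REQUEST field masks. */
#define	HWP_REQ_MINIMUM	0x00000000000000ffULL	/* bits 0-7 */
#define	HWP_REQ_MAXIMUM	0x000000000000ff00ULL	/* bits 8-15 */
#define	HWP_REQ_DESIRED	0x0000000000ff0000ULL	/* bits 16-23 */

/*
 * Rewrite a cached IA32_HWP_REQUEST value for autonomous operation:
 * desired = 0 (hardware picks the target), minimum = lowest capability,
 * maximum = highest capability.  Other fields (EPP, activity window)
 * are left untouched here.
 */
uint64_t
hwp_autonomous_req(uint64_t req, uint8_t lowest, uint8_t highest)
{
	req &= ~HWP_REQ_DESIRED;
	req &= ~HWP_REQ_MINIMUM;
	req |= lowest;
	req &= ~HWP_REQ_MAXIMUM;
	req |= (uint64_t)highest << 8;
	return (req);
}
```

Caching the rewritten request in the softc, as the driver does with `sc->req`, is what makes the resume path cheap: it only has to re-enable HWP and write the saved value back.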