- 01 March 2010, 4 commits
-
Committed by Sheng Yang

We don't support these instructions, but the guest can execute them even if the 'monitor' feature hasn't been exposed in CPUID. So trap and inject a #UD if the guest tries them.

Cc: stable@kernel.org
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity

In the past we've had single-bit errors in the other two cases; the printk() may confirm it for the third case (many->many).

Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Marcelo Tosatti

Windows 2003 uses a task switch to triple fault and reboot (the other exception being reserved pdptr bits).

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Eddie Dong

Move the double-fault generation logic out of the page-fault exception generating function to cover the more generic case.

Signed-off-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
- 27 February 2010, 2 commits
-
Committed by Russ Anderson

Enable NMI on all cpus in a UV system and add an NMI handler to dump_stack on each cpu.

By default on x86 all the cpus except the boot cpu have NMI masked off. This patch enables NMI on all cpus in a UV system and adds an NMI handler to dump_stack on each cpu. This way, if a system hangs, we can NMI the machine and get a backtrace from all the cpus.

Version 2: Use the x86_platform driver mechanism for NMI init, per Ingo's suggestion.
Version 3: Clean up Ingo's nits.

Signed-off-by: Russ Anderson <rja@sgi.com>
LKML-Reference: <20100226164912.GA24439@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

Avoid kernels exploding on AMD machines when they have any lock debugging bits enabled.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 26 February 2010, 17 commits
-
Committed by Peter Zijlstra

Split amd, p6 and intel into separate files so that we can easily deal with CONFIG_CPU_SUP_* things; this is needed to make things build now that perf_event.c relies on symbols from amd.c.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Robert Richter

While switching virtual counters there are accesses to perfctr MSRs. If the counter is not available, these accesses fail due to an invalid address. This patch fixes that.

Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Committed by Robert Richter

Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Committed by Robert Richter

Multiple virtual counters share one physical counter. Reserving the virtual counters fails due to duplicate allocation of the same physical counter, which is already reserved. Thus, virtual counter reservation can be removed altogether. This also simplifies the code.

Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Committed by Naga Chumbalkar

Currently, oprofile fails silently on platforms where a non-OS entity such as the system firmware "enables" and uses a performance counter. There is a warning in the code for this case; the warning indicates an already-running counter. If oprofile doesn't collect data, try using a different performance counter on your platform to monitor the desired event. Delete the counter from the desired event by editing the /usr/share/oprofile/<cpu_type>/<cpu>/events file. If the event cannot be monitored by any other counter, contact your hardware or BIOS vendor.

Cc: Shashi Belur <shashi-kiran.belur@hp.com>
Cc: Tony Jones <tonyj@suse.de>
Signed-off-by: Naga Chumbalkar <nagananda.chumbalkar@hp.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Committed by Robert Richter

This patch generates a warning if a counter is already active. Implemented for AMD and P6 models; P4 is not supported.

Cc: Naga Chumbalkar <nagananda.chumbalkar@hp.com>
Cc: Shashi Belur <shashi-kiran.belur@hp.com>
Cc: Tony Jones <tonyj@suse.de>
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Committed by Robert Richter

IBS selects an op (execution operation) for sampling by counting either cycles or dispatched ops. Better statistical samples can be produced by adding a software-generated random offset to the periodic op counter value with each sample.

This patch adds software randomization to the IBS periodic op counter. The lower 12 bits of the 20-bit counter are randomized; IbsOpCurCnt is initialized with a 12-bit random value. There is a workaround for hardware that cannot write to IbsOpCurCnt: the lower 8 bits of the 16-bit IbsOpMaxCnt [15:0] value are then randomized in the range of -128 to +127 by adding/subtracting an offset to the maximum count (IbsOpMaxCnt).

The linear feedback shift register (LFSR) algorithm is used for pseudo-random number generation, to keep the impact on the memory system low.

Signed-off-by: Robert Richter <robert.richter@amd.com>
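A minimal user-space sketch of the two randomization strategies described above; the constants and helper names are illustrative assumptions, not the kernel's identifiers:

```c
#include <stdint.h>
#include <stdio.h>

#define IBS_RANDOM_BITS 12
#define IBS_RANDOM_MASK ((1u << IBS_RANDOM_BITS) - 1)

/* Preferred scheme: seed the low 12 bits of the 20-bit IbsOpCurCnt
 * with a fresh pseudo-random value on every sample. */
static unsigned int ibs_cur_cnt_seed(unsigned int rand16)
{
    return rand16 & IBS_RANDOM_MASK;
}

/* Fallback when the hardware cannot write IbsOpCurCnt: nudge the
 * 16-bit IbsOpMaxCnt by a pseudo-random offset in [-128, +127]. */
static unsigned int ibs_max_cnt_randomize(unsigned int max_cnt,
                                          unsigned int rand16)
{
    int offset = (int)(rand16 & 0xff) - 128;   /* -128 .. +127 */
    return max_cnt + offset;
}

int main(void)
{
    unsigned int r = 0xbeef;   /* stand-in for an LFSR output */
    printf("IbsOpCurCnt seed:       %#x\n", ibs_cur_cnt_seed(r));
    printf("randomized IbsOpMaxCnt: %#x\n", ibs_max_cnt_randomize(0x1000, r));
    return 0;
}
```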
-
Committed by Suravee Suthikulpanit

This patch implements a linear feedback shift register (LFSR) for pseudo-random number generation for IBS. For IBS measurements it is important to minimize memory traffic in the interrupt handler, since every access pollutes the data caches. Computing a maximal-period LFSR needs just shifts and XORs. The LFSR method is good enough to randomize the ops at low overhead: 16 pseudo-random bits are enough for the implementation, and it doesn't matter that the pattern repeats with a fairly short cycle. It only needs to break up (hard) periodic sampling behavior. The logic was designed by Paul Drongowski.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
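A self-contained sketch of such a 16-bit maximal-period Fibonacci LFSR. The tap positions used here (16, 14, 13, 11, i.e. x^16 + x^14 + x^13 + x^11 + 1) are a standard maximal-period choice, shown for illustration, and not necessarily the exact polynomial in the driver:

```c
#include <stdint.h>
#include <stdio.h>

/* One LFSR step: the feedback bit is the XOR of taps 16, 14, 13 and 11,
 * i.e. state bits 0, 2, 3 and 5. Period is 2^16 - 1; the state must
 * never become zero. Only shifts and XORs -- no memory traffic. */
static uint16_t lfsr_random(uint16_t state)
{
    uint16_t bit = ((state >> 0) ^ (state >> 2) ^
                    (state >> 3) ^ (state >> 5)) & 1;
    return (uint16_t)((state >> 1) | (bit << 15));
}

int main(void)
{
    uint16_t s = 0xACE1u;   /* any nonzero seed works */
    for (int i = 0; i < 8; i++) {
        s = lfsr_random(s);
        printf("%#06x\n", s);
    }
    return 0;
}
```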
-
Committed by Robert Richter

This patch adds IBS feature detection using cpuid flags. An IBS capability mask is introduced to test for certain IBS features. The bit mask is the same as for the IBS cpuid feature flags (Fn8000_001B_EAX), but bit 0 is used to indicate the existence of IBS.

The patch also changes the handling of the IbsOpCntCtl bit (periodic op counter count control). The oprofilefs file for this feature (ibs_op/dispatched_ops) will only be exposed if the feature is available, and the default for the bit is set to counting clock cycles.

In general, userland can detect the availability of a feature by checking for the corresponding file in oprofilefs: if it exists, the feature also exists. This may lead to a dynamic file layout depending on the cpu type, which userland has to deal with. Current opcontrol is compatible.

Signed-off-by: Robert Richter <robert.richter@amd.com>
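A hedged user-space sketch of the detection step (GCC/Clang on x86): read CPUID leaf 0x8000001B and build a capability mask whose bit 0 means "IBS present". The macro names mirror the description above but are assumptions, not the kernel's:

```c
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang x86 intrinsic header */

/* Bits mirroring the Fn8000_001B EAX layout as described above; bit 0
 * is repurposed to mean "IBS is available". These names are
 * assumptions for this sketch, not the kernel's identifiers. */
#define IBS_CAPS_AVAIL  (1u << 0)
#define IBS_CAPS_OPCNT  (1u << 4)   /* periodic op counter count control */

static unsigned int get_ibs_caps(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* __get_cpuid() returns 0 if the extended leaf is unsupported. */
    if (!__get_cpuid(0x8000001b, &eax, &ebx, &ecx, &edx))
        return 0;
    if (!(eax & IBS_CAPS_AVAIL))
        return 0;                    /* no IBS feature flags at all */
    return eax;
}

int main(void)
{
    unsigned int caps = get_ibs_caps();
    printf("IBS caps: %#x, dispatched-ops control: %s\n",
           caps, (caps & IBS_CAPS_OPCNT) ? "yes" : "no");
    return 0;
}
```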
-
Committed by Robert Richter

Standard AMD systems have the same number of nodes as there are northbridge devices. However, there may be kernel configurations (especially 32-bit) or system setups where the node number is different or cannot be detected properly. Thus the check is not reliable and may fail even though the IBS setup was fine. For this reason it is better to remove the check.

Cc: stable <stable@kernel.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Committed by Robert Richter

OProfile support for IBS has been in the kernel for several versions now. The feature is stable and the code can be activated permanently. As a side effect, IBS now also works on nosmp configs.

Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Committed by Peter Zijlstra

We re-program the event control register every time we reset the count; this appears to be superfluous, hence remove it.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arjan van de Ven <arjan@linux.intel.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

Since the cpu argument to hw_perf_group_sched_in() is always smp_processor_id(), simplify the code a little by removing this argument and using the current cpu where needed.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1265890918.5396.3.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Stephane Eranian

This patch adds correct AMD northbridge event scheduling. NB events measure L3 cache and HyperTransport traffic. They are identified by an event code >= 0xe0, and they count events on the northbridge, which is shared by all cores on a package. NB events are counted on a shared set of counters: when an NB event is programmed in a counter, the data actually comes from a shared counter, so access to those counters needs to be synchronized. We implement the synchronization such that no two cores can be measuring NB events using the same counters; to that end we maintain a per-NB allocation table. The available slot is propagated using the event_constraint structure.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4b703957.0702d00a.6bf2.7b7d@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
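A minimal sketch of the per-NB allocation idea: each northbridge keeps a small ownership table and cores race with compare-and-swap, so no two cores program NB events into the same shared counter. Everything here (names, counter count, event ids) is illustrative:

```c
#include <stdatomic.h>
#include <stdio.h>

#define NB_COUNTERS 4   /* illustrative: shared NB counters per package */

/* One table per northbridge: slot k records which event id owns
 * physical counter k (0 = free). */
struct nb_alloc {
    _Atomic int owner[NB_COUNTERS];
};

static int nb_event_claim(struct nb_alloc *nb, int event_id)
{
    for (int k = 0; k < NB_COUNTERS; k++) {
        int expected = 0;
        /* CAS: only one core can flip a slot from free to owned. */
        if (atomic_compare_exchange_strong(&nb->owner[k], &expected,
                                           event_id))
            return k;        /* this core now owns counter k */
    }
    return -1;               /* all shared NB counters are busy */
}

static void nb_event_release(struct nb_alloc *nb, int k)
{
    atomic_store(&nb->owner[k], 0);
}

int main(void)
{
    struct nb_alloc nb = {0};
    int a = nb_event_claim(&nb, 1);
    int b = nb_event_claim(&nb, 2);
    printf("event 1 -> counter %d, event 2 -> counter %d\n", a, b);
    nb_event_release(&nb, a);
    nb_event_release(&nb, b);
    return 0;
}
```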
-
Committed by Stephane Eranian

In certain situations, the kernel may need to stop and start the same event rapidly. The current PMU callbacks do not distinguish between stop and release (i.e., stop + free the resource). Thus, a counter may be released and then immediately re-acquired. Event scheduling will again take place with no guarantee of assigning the same counter; on some processors, this may even lead to failure to assign the event back, due to competition between cores.

This patch adds a new pair of callbacks to stop and restart a counter without actually releasing the underlying counter resource. On stop, the counter is stopped and its value saved, and that's it. On start, the value is reloaded and the counter restarted (on x86, the actual restart is delayed until perf_enable()).

Signed-off-by: Stephane Eranian <eranian@google.com>
[ added fallback to ->enable/->disable for all other PMUs; fixed x86_pmu_start() to call x86_pmu.enable(); merged __x86_pmu_disable into x86_pmu_stop() ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4b703875.0a04d00a.7896.ffffb824@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
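A compilable sketch of the callback split being described: ->stop() saves the count but keeps the hardware slot reserved, so ->start() can resume on the very same counter, while ->disable() remains "stop + release". The structure and field names are assumptions for illustration, not the perf API:

```c
#include <stdio.h>

struct event {
    int  counter;       /* assigned hw counter; -1 = none (released) */
    long saved_count;   /* value captured by ->stop() */
    long hw_count;      /* stands in for the hardware counter register */
};

/* ->stop(): halt and save, but do NOT free the counter slot. */
static void pmu_stop(struct event *e)
{
    e->saved_count = e->hw_count;   /* e->counter stays reserved */
}

/* ->start(): reload and resume on the same counter -- no rescheduling,
 * so there is no race with other cores over counter assignment. */
static void pmu_start(struct event *e)
{
    e->hw_count = e->saved_count;
}

int main(void)
{
    struct event e = { .counter = 2, .hw_count = 1234 };
    pmu_stop(&e);
    pmu_start(&e);
    printf("counter %d resumed at %ld\n", e.counter, e.hw_count);
    return 0;
}
```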
-
Committed by Pekka Enberg

This patch changes the 32-bit version of kernel_physical_mapping_init() to return the last mapped address, like the 64-bit one, so that we can unify the call site in init_memory_mapping().

Cc: Yinghai Lu <yinghai@kernel.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
LKML-Reference: <alpine.DEB.2.00.1002241703570.1180@melkki.cs.helsinki.fi>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Thomas Gleixner

commit ff097ddd (x86/PCI: MMCONFIG: manage pci_mmcfg_region as a list, not a table) introduced a nasty memory corruption when pci_mmcfg_list is empty: pci_mmcfg_check_end_bus_number() dereferences pci_mmcfg_list.prev even when the list is empty, and the following write hits some variable near pci_mmcfg_list. Further down a similar problem exists, where cfg->list.next is dereferenced unconditionally and compared with some variable near pci_mmcfg_list. Add a check for the last element to the for_each_entry() loop and remove all the other crappy logic, which is just a leftover of the old array-based code that was replaced by the list conversion.

Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: stable@kernel.org
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
- 25 February 2010, 3 commits
-
Committed by Steven Rostedt

The code in stop_machine that modifies the kernel text has a bit of logic to handle the case of NMIs. stop_machine does not prevent NMIs from executing, and if an NMI were to trigger on another CPU while the modifying CPU is changing NMI text, a GPF could result. To prevent the GPF, the NMI calls ftrace_nmi_enter(), which may modify the code first; then any other NMIs will just change the text to the same content, which does no harm. The code that stop_machine calls must wait for NMIs to finish while it changes each location in the kernel; that code may also change the text to what the NMI changed it to. The key is that the text never changes content while another CPU is executing it.

To make the above work, the call to ftrace_nmi_enter() must also do an smp_mb() as well as an atomic_inc(). But for applications like perf that require a high number of NMIs for profiling, this can have a dramatic effect on the system: not only does it do a full memory barrier on both nmi_enter() and nmi_exit(), it also modifies a global variable with an atomic operation. This kills performance on large SMP machines.

Since the memory barriers are only needed when ftrace is in the process of modifying the text (which is seldom), this patch adds a "modifying_code" variable that gets set before stop_machine is executed and cleared afterwards. The NMIs check this variable and store it in a per-CPU "save_modifying_code" variable that they use to decide whether to do the memory barriers and atomic dec on NMI exit.

Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
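A sketch of the flag-plus-snapshot pattern the message describes, using C11 atomics in place of the kernel's smp_mb()/atomic_t and thread-local storage standing in for per-CPU data; the names match the description but the code is illustrative:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Writer raises modifying_code around the stop_machine section; each
 * NMI snapshots the flag on entry so entry and exit agree, and only
 * pays for the barrier + atomic when code is actually being patched. */
static atomic_bool modifying_code;
static _Thread_local bool save_modifying_code;  /* per-CPU in the kernel */
static atomic_int nmi_running;

void ftrace_nmi_enter(void)
{
    save_modifying_code = atomic_load_explicit(&modifying_code,
                                               memory_order_relaxed);
    if (save_modifying_code) {
        atomic_fetch_add(&nmi_running, 1);  /* seq-cst: full barrier */
        /* ...fix up any half-modified text before executing it... */
    }
}

void ftrace_nmi_exit(void)
{
    if (save_modifying_code)
        atomic_fetch_sub(&nmi_running, 1);  /* seq-cst: full barrier */
}

int main(void)
{
    atomic_store(&modifying_code, true);   /* writer: before stop_machine */
    ftrace_nmi_enter();                    /* NMI path */
    ftrace_nmi_exit();
    atomic_store(&modifying_code, false);  /* writer: after patching */
    return 0;
}
```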
-
Committed by Ian Campbell

Distros generally (I looked at Debian, RHEL5 and SLES11) seem to enable CONFIG_HIGHPTE for any x86 configuration which has highmem enabled. This means that the overhead applies even to machines which have a fairly modest amount of high memory and which therefore do not really benefit from allocating PTEs in high memory, but still pay the price of the additional mapping operations.

Running kernbench on a 4G box I found that with CONFIG_HIGHPTE=y but no actual highptes being allocated, there was a reduction in system time used from 59.737s to 55.9s.

With CONFIG_HIGHPTE=y and highmem PTEs being allocated:
  Average Optimal load -j 4 Run (std deviation):
  Elapsed Time      175.396  (0.238914)
  User Time         515.983  (5.85019)
  System Time        59.737  (1.26727)
  Percent CPU       263.8    (71.6796)
  Context Switches 39989.7   (4672.64)
  Sleeps           42617.7   (246.307)

With CONFIG_HIGHPTE=y but with no highmem PTEs being allocated:
  Average Optimal load -j 4 Run (std deviation):
  Elapsed Time      174.278  (0.831968)
  User Time         515.659  (6.07012)
  System Time        55.9    (1.07799)
  Percent CPU       263.8    (71.266)
  Context Switches 39929.6   (4485.13)
  Sleeps           42583.7   (373.039)

This patch allows the user to control the allocation of PTEs in highmem from the command line ("userpte=nohigh") but retains the status quo as the default.

It is possible that some simple heuristic could be developed which allows auto-tuning of this option; however, I don't have a sufficiently large machine available to me to perform any particularly meaningful experiments. We could probably handwave up an argument for a threshold at 16G of total RAM.

Assuming 768M of lowmem, we have 196608 potential lowmem PTE pages. Each page can map 2M of RAM in a PAE-enabled configuration, meaning a maximum of 384G of RAM could potentially be mapped using lowmem PTEs. Even allowing a generous factor of 10 to account for other required lowmem allocations, generous slop to account for page sharing (which reduces the total amount of RAM mappable by a given number of PT pages) and other inaccuracies in the estimations, it would seem that even a 32G machine would not have a particularly pressing need for highmem PTEs. I think 32G could be considered the upper bound of what might be sensible on a 32-bit machine (although I think in practice 64G is still supported). It seems questionable whether HIGHPTE is even a win for any amount of RAM you would sensibly run a 32-bit kernel on, rather than going 64-bit.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
LKML-Reference: <1266403090-20162-1-git-send-email-ian.campbell@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
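A standalone sketch of the boot-option logic described above: user PTE pages default to highmem allocation (the status quo) and "userpte=nohigh" clears that, keeping PTEs in lowmem. The flag values and names are illustrative, not the kernel's gfp mask:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative gfp-style flags; the real kernel mask differs. */
#define GFP_PTE          0x1u
#define GFP_PTE_HIGHMEM  0x2u

/* Status quo: PTE pages may come from highmem when HIGHPTE is enabled. */
static unsigned int userpte_gfp = GFP_PTE | GFP_PTE_HIGHMEM;

/* Parses the value of a "userpte=" boot option; "nohigh" keeps user
 * PTE pages in lowmem, anything else is rejected. */
static int setup_userpte(const char *arg)
{
    if (!arg)
        return -1;
    if (strcmp(arg, "nohigh") != 0)
        return -1;                      /* unknown value */
    userpte_gfp &= ~GFP_PTE_HIGHMEM;    /* allocate user PTEs in lowmem */
    return 0;
}

int main(void)
{
    setup_userpte("nohigh");
    printf("PTE allocation mask: %#x\n", userpte_gfp);
    return 0;
}
```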
-
Committed by Thadeu Lima de Souza Cascardo

This saves 64K bytes of memory when loading Linux if DMI is disabled, which is good for embedded systems.

Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
LKML-Reference: <1265758732-19320-1-git-send-email-cascardo@holoscopio.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 24 February 2010, 4 commits
-
Committed by Suresh Siddha

init_fpu() already ensures that used_math() is set for the stopped child. Remove the redundant set_stopped_child_used_math() in [x]fpregs_set().

Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <20100222225240.642169080@sbs-t61.sc.intel.com>
Acked-by: Roland McGrath <roland@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Suresh Siddha

48 bytes (bytes 464..511) of the xstateregs payload come from the kernel-defined structure (xstate_fx_sw_bytes); the rest comes from the xstate regs structure in the thread struct. Instead of having multiple user_regset_copyout()'s, simplify xstateregs_get() by first copying the SW bytes into the xstate regs structure in the thread structure and then using one user_regset_copyout() to copy out the xstateregs.

Requested-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <20100222225240.494688491@sbs-t61.sc.intel.com>
Acked-by: Roland McGrath <roland@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Oleg Nesterov <oleg@redhat.com>
-
Committed by Bjorn Helgaas

The main benefit of using ACPI host bridge window information is that we can do better resource allocation in systems with multiple host bridges, e.g., http://bugzilla.kernel.org/show_bug.cgi?id=14183

Sometimes we need _CRS information even if we only have one host bridge, e.g., https://bugs.launchpad.net/ubuntu/+source/linux/+bug/341681

Most of these systems are relatively new, so this patch turns on "pci=use_crs" only on machines with a BIOS date of 2008 or newer.

Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
Committed by Bjorn Helgaas

Previously we used a table of size PCI_BUS_NUM_RESOURCES (16) for resources forwarded to a bus by its upstream bridge. We've increased this size several times when the table overflowed, but there's no good limit on the number of resources, because host bridges and subtractive-decode bridges can forward any number of ranges to their secondary buses.

This patch reduces the table to only PCI_BRIDGE_RESOURCE_NUM (4) entries, which corresponds to the number of windows a PCI-to-PCI (3) or CardBus (4) bridge can positively decode. Any additional resources, e.g., PCI host bridge windows or subtractively-decoded regions, are kept in a list.

I'd prefer a single list rather than this split table/list approach, but that requires simultaneous changes to every architecture. This approach only requires immediate changes where we set up (a) host bridges with more than four windows and (b) subtractive-decode P2P bridges, and we can incrementally change other architectures to use the list.

Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
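A compilable sketch of the split data structure being described, with a hand-rolled list head standing in for the kernel's struct list_head; the container name is illustrative:

```c
#include <stdio.h>

#define PCI_BRIDGE_RESOURCE_NUM 4  /* windows a P2P/CardBus bridge decodes */

struct resource {
    unsigned long start, end;
};

struct list_head {
    struct list_head *next, *prev;
};

/* The split described above: a fixed table for the (at most four)
 * positively decoded bridge windows, plus an open-ended list for host
 * bridge windows and subtractively decoded ranges. */
struct pci_bus_resources {
    struct resource  res[PCI_BRIDGE_RESOURCE_NUM];
    struct list_head extra;   /* everything beyond the fixed four */
};

int main(void)
{
    struct pci_bus_resources bus = { .extra = { &bus.extra, &bus.extra } };
    printf("fixed windows: %d, extra list empty: %s\n",
           PCI_BRIDGE_RESOURCE_NUM,
           bus.extra.next == &bus.extra ? "yes" : "no");
    return 0;
}
```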
-
- 23 February 2010, 3 commits
-
Committed by Dominik Brodowski

Now that we return the new resource start position, there is no need to update "struct resource" inside the align function. Therefore, mark the struct resource as const.

Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
Committed by Dominik Brodowski

As suggested by Linus, align functions should return the start of a resource, not void. An update of "res->start" is no longer necessary.

Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
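A minimal sketch of the changed contract from these two commits: the align callback computes and returns the aligned start instead of writing res->start, which is what lets the resource argument become const. The alignment function itself is illustrative:

```c
#include <stdio.h>

typedef unsigned long resource_size_t;

struct resource {
    resource_size_t start, end;
};

/* After the change: compute and return the aligned start; the callback
 * no longer updates res->start itself, so res can be const. */
static resource_size_t simple_align(const struct resource *res,
                                    resource_size_t align)
{
    return (res->start + align - 1) & ~(align - 1);
}

int main(void)
{
    struct resource r = { .start = 0x1003, .end = 0x1fff };
    printf("aligned start: %#lx\n", simple_align(&r, 0x100));
    return 0;
}
```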
-
Committed by Seth Heasley

This patch adds the Intel Cougar Point (PCH) LPC and SMBus controller DeviceIDs.

Signed-off-by: Seth Heasley <seth.heasley@intel.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-
- 20 February 2010, 3 commits
-
Committed by H. Peter Anvin

The code for setting standard VGA modes probes for the current mode and skips the mode setting if the mode is 3 (color text 80x25) or 7 (mono text 80x25). Unfortunately, there are BIOSes, including the VMware BIOS, which report the previous mode if function 0F is queried while the screen is in a VESA mode, and of course nothing can help a mode poked directly into the hardware. As such, the safe option is to set the mode anyway and only query to see if we should be using mode 7 rather than mode 3. People who don't want any mode setting at all should probably use vga=0x0f04 (VIDEO_CURRENT_MODE); it's possible that should be the kernel default.

Reported-by: Rene Arends <R.R.Arends@hro.nl>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
LKML-Reference: <tip-*@git.kernel.org>
-
Committed by Frederic Weisbecker

When the user enables breakpoints through dr7, he can choose between "local" or "global" enable bits, but given how Linux is implemented, both have the same effect. That said, we don't keep track of how the user enabled the breakpoints, so when the user requests the dr7 value, we only translate the "enabled" status using the global enable bits. This means that if the user enabled a breakpoint using the local enable bit, reading back dr7 will set the global bit and clear the local one.

Apps like Wine expect a full dr7 POKEUSER/PEEKUSER match for emulated software that implements old reverse-engineering protection schemes. We fix that by keeping track of the whole dr7 value given by the user in the thread structure, dropping this bug. We'll think about something more proper later.

This fixes a 2.6.32 - 2.6.33-x ptrace regression.

Reported-and-tested-by: Michael Stefaniuc <mstefani@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: K.Prasad <prasad@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Maneesh Soni <maneesh@linux.vnet.ibm.com>
Cc: Alexandre Julliard <julliard@winehq.org>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Maciej Rutecki <maciej.rutecki@gmail.com>
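A toy sketch of the fix's shape: remember the exact dr7 value the tracer wrote in the thread structure and hand it back on read, rather than re-deriving dr7 from internal breakpoint state (which only knows "enabled" and so reports the global bits). The field and function names are assumptions for illustration:

```c
#include <stdio.h>

/* Keep the raw user-supplied dr7 (local vs. global enable bits and
 * all) so POKEUSER followed by PEEKUSER round-trips exactly. */
struct thread_struct {
    unsigned long ptrace_dr7;
};

static int ptrace_write_dr7(struct thread_struct *t, unsigned long data)
{
    /* ...install/remove breakpoints per the enable bits... */
    t->ptrace_dr7 = data;          /* remember the user's exact value */
    return 0;
}

static unsigned long ptrace_read_dr7(const struct thread_struct *t)
{
    return t->ptrace_dr7;          /* not re-derived from kernel state */
}

int main(void)
{
    struct thread_struct t = {0};
    ptrace_write_dr7(&t, 0x1);     /* local-enable bit for DR0 */
    printf("dr7 reads back as %#lx\n", ptrace_read_dr7(&t));
    return 0;
}
```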
-
Committed by Frederic Weisbecker

Before we had a generic breakpoint API, ptrace accepted breakpoints on NULL addresses on x86. The new API refuses them without giving strong reasons. We need to follow the previous behaviour, as some userspace apps like Wine need such NULL breakpoints to ensure old emulated software protections are still working.

This fixes a 2.6.32 - 2.6.33-x ptrace regression.

Reported-and-tested-by: Michael Stefaniuc <mstefani@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: K.Prasad <prasad@linux.vnet.ibm.com>
Acked-by: Roland McGrath <roland@redhat.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Maneesh Soni <maneesh@linux.vnet.ibm.com>
Cc: Alexandre Julliard <julliard@winehq.org>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Maciej Rutecki <maciej.rutecki@gmail.com>
-
- 19 February 2010, 3 commits
-
Committed by H. Peter Anvin

Inhibit output from the kernel decompressor if the video information is invalid. This was already the case for 32 bits; make 64 bits match.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
LKML-Reference: <tip-*@git.kernel.org>
-
Committed by Borislav Petkov

Final-stage linking can fail with

  arch/x86/built-in.o: In function `store_cache_disable':
  intel_cacheinfo.c:(.text+0xc509): undefined reference to `amd_get_nb_id'
  arch/x86/built-in.o: In function `show_cache_disable':
  intel_cacheinfo.c:(.text+0xc7d3): undefined reference to `amd_get_nb_id'

when CONFIG_CPU_SUP_AMD is not enabled, because the amd_get_nb_id helper is defined in AMD-specific code but also used in generic code (intel_cacheinfo.c). Reorganize the L3 cache index disable code under CONFIG_CPU_SUP_AMD, since it is AMD-only anyway.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <20100218184210.GF20473@aftab>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Borislav Petkov

The show/store_cache_disable routines depend unnecessarily on NUMA's cpu_to_node, and the disabling of cache indices broke when !CONFIG_NUMA. Remove that dependency by using a helper which is always correct. While at it, enable L3 Cache Index Disable on rev D1 Istanbuls, which sport the feature too.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <20100218184339.GG20473@aftab>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 18 February 2010, 1 commit
-
Committed by H. Peter Anvin

When we restore the screen content after a mode change, we return the cursor to its former position. However, we need to also update boot_params.screen_info accordingly, so that the decompression code knows where on the screen the cursor is. Just in case the video BIOS does something extra screwy, read the cursor position back from the BIOS instead of relying on it doing the right thing. While we're at it, make sure we cap the cursor position to the new screen coordinates.

Reported-by: Wim Osterholt <wim@djo.tudelft.nl>
Bugzilla-Reference: http://bugzilla.kernel.org/show_bug.cgi?id=15329
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-