- 14 Oct 2015, 32 commits
-
-
Committed by Christian Borntraeger
The program parameter can be used to mark hardware samples with some token. Previously, it was used to mark guest samples only. Improve the program parameter doubleword by combining two parts, the leftmost LPP part and the rightmost PID part. Set the PID part for processes by using the task PID. To distinguish host and guest samples for the kernel (PID part is zero), the guest must always set the program parameter to a non-zero value. Use the leftmost bit in the LPP part of the program parameter to be able to detect guest kernel samples. [brueckner@linux.vnet.ibm.com]: Split __LC_CURRENT and introduced __LC_LPP. Corrected __LC_CURRENT users and adjusted assembler parts. And updated the commit message accordingly. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
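As a rough C illustration of how a consumer of the program parameter could classify samples under the layout described above (the macro names and exact constants here are assumptions for illustration, not the kernel's definitions):

  #include <stdio.h>

  #define LPP_GUEST_FLAG 0x8000000000000000UL /* leftmost bit of the LPP part */
  #define LPP_PID_MASK   0x00000000ffffffffUL /* rightmost PID part */

  static void classify_sample(unsigned long pp)
  {
          unsigned int pid = pp & LPP_PID_MASK;

          if (pid)
                  printf("process sample, pid %u\n", pid);
          else if (pp & LPP_GUEST_FLAG)
                  printf("guest kernel sample\n");
          else
                  printf("host kernel sample\n");
  }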
-
Committed by Martin Schwidefsky
The use of OFFSET instead of DEFINE makes the definitions in asm-offsets.c more readable. While we are at it, sort the defines for struct _lowcore according to the field order and remove some unneeded defines. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
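For illustration, the two forms look roughly like this in asm-offsets.c; the OFFSET and DEFINE helpers come from include/linux/kbuild.h, while the lowcore field used here is just an example:

  /* spell out offsetof() by hand */
  DEFINE(__LC_EXT_PARAMS, offsetof(struct _lowcore, ext_params));

  /* equivalent, but easier to read: OFFSET(symbol, struct_name, member) */
  OFFSET(__LC_EXT_PARAMS, _lowcore, ext_params);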
-
Committed by Martin Schwidefsky
The sclp console and tty code currently uses several message text objects in a single message event to print several lines with one SCCB. This causes the output of these lines to be fused into a block which is noticeable when selecting text in the operating system message panel. Instead use several message events with a single message text object each to print every line on its own. This changes the SCCB layout from

  struct sccb_header
  struct evbuf_header
  struct mdb_header
  struct go
  struct mto
  ...
  struct mto

to

  struct sccb_header
  struct evbuf_header
  struct mdb_header
  struct go
  struct mto
  ...
  struct evbuf_header
  struct mdb_header
  struct go
  struct mto

Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Hendrik Brueckner
Various functions in entry.S perform test-under-mask instructions to test for particular bits in memory. Because test-under-mask uses a mask value of one byte, the mask value and the offset into the memory must be calculated manually. This easily introduces errors and is hard to review and read. Introduce the TSTMSK assembler macro to specify a mask constant and let the macro calculate the offset and the byte mask to generate a test-under-mask instruction. The benefit is that existing symbolic constants can now be used for tests. Also the macro checks for zero mask values and mask values that consist of multiple bytes. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
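A small C sketch of the computation the macro performs at assembly time, assuming a big-endian 8-byte field and a mask confined to a single byte (this only illustrates the idea, it is not the macro itself):

  #include <assert.h>
  #include <stddef.h>

  /* Derive the byte offset and one-byte mask for a test-under-mask. */
  static void tstmsk_params(unsigned long mask, size_t *byte_off,
                            unsigned char *byte_mask)
  {
          unsigned int shift;

          assert(mask != 0);                              /* reject zero masks */
          shift = (63 - __builtin_clzl(mask)) & ~7U;      /* byte that holds the mask */
          assert(((mask >> shift) << shift) == mask);     /* reject multi-byte masks */
          *byte_off = 7 - shift / 8;      /* big-endian: byte 0 is most significant */
          *byte_mask = mask >> shift;
  }

For example, a mask of 0x0000008000000000 yields byte offset 3 and byte mask 0x80.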
-
Committed by Hendrik Brueckner
Previously, the init task did not have an allocated FPU save area, so saving an FPU state was not possible. Now that the vector extension is always enabled, provide a static FPU save area to save FPU states of vector instructions that can be executed quite early. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Hendrik Brueckner
If the kernel detects that the s390 hardware supports the vector facility, it is enabled by default at an early stage. To force it off, use the novx kernel parameter. Note that there is a small time window where the vector facility is enabled before it is forced off. With the vector facility enabled by default, the FPU save and restore functions can be improved. They no longer need to manage expensive control register updates to enable or disable the vector enablement control for particular processes. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
The call to pgste_set_key in ptep_set_access_flags can be avoided if the old pte is found to be valid at the time the new access rights are set. The function that created the old, valid pte already completed the required storage key operation. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
The Principles of Operation states that reads are in order, writes are in order, writes can be reordered after reads, but no reads can be reordered after writes. The atomic and bitops variants for z196 use the interlocked-access facility instructions with a memory barrier before and after the instruction. Because of the memory ordering, the first barrier is unnecessary and can be removed. Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
To be able to analyse problems in regard to hypervisor overhead, add a tracepoint for diagnose calls. It reports the number of the diagnose issued, e.g.:

  sshd-1385   [002] ....    42.701431: diagnose: nr=0x9c
  <idle>-0    [001] ..s.    43.587528: diagnose: nr=0x9c

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Introduce /sys/kernel/debug/diag_stat with a statistic of how many diagnose calls have been done by each CPU in the system. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
The generic implementation for test_and_set_bit_lock in include/asm-generic uses the standard test_and_set_bit operation. On s390 this is done with either a 'csg' or a 'laog' instruction. For both versions the cache line is fetched exclusively, even if the bit is already set. The result is an increase in cache traffic; for a contended lock this is a bad idea. Provide an s390-specific test_and_set_bit_lock that first checks the bit with a simple load and only performs the atomic operation if the bit is not yet set. Acked-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
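A minimal sketch of the resulting idea (simplified, not the exact kernel code): read the bit with a plain load first so an already-set lock bit never forces an exclusive cache-line fetch, and only fall back to the atomic read-modify-write when the bit still appears clear.

  #include <linux/bitops.h>

  static inline int test_and_set_bit_lock_cheap(unsigned long nr,
                                                volatile unsigned long *ptr)
  {
          /* Cheap shared read: if the bit is already set, do not write. */
          if (test_bit(nr, ptr))
                  return 1;
          /* Bit appears clear: perform the interlocked update. */
          return test_and_set_bit(nr, ptr);
  }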
-
Committed by Martin Schwidefsky
Use bit 2**1 of the pte and bit 2**14 of the pmd for the soft dirty bit. The fault mechanism to do dirty tracking is already in place. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
There are primitives to create and query the software dirty bits in a pte or pmd. But the clearing of the software dirty bits is done in common code with x86 specific page table functions. Add the missing architecture primitives to clear the software dirty bits to allow the feature to be used on non-x86 systems, e.g. the s390 architecture. Acked-by: Cyrill Gorcunov <gorcunov@openvz.org> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
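A hedged sketch of what such arch primitives can look like; the bit names and values below are taken from the positions mentioned in the previous commit (pte bit 2**1, pmd bit 2**14), rely on the arch's pte_t/pmd_t accessors, and may not match the exact s390 headers:

  #define _PAGE_SOFT_DIRTY           0x002   /* pte bit 2**1 */
  #define _SEGMENT_ENTRY_SOFT_DIRTY  0x4000  /* pmd bit 2**14 */

  static inline pte_t pte_clear_soft_dirty(pte_t pte)
  {
          pte_val(pte) &= ~_PAGE_SOFT_DIRTY;
          return pte;
  }

  static inline pmd_t pmd_clear_soft_dirty(pmd_t pmd)
  {
          pmd_val(pmd) &= ~_SEGMENT_ENTRY_SOFT_DIRTY;
          return pmd;
  }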
-
Committed by Sebastian Ott
We often need to correlate an 8 bit path mask with the position in a channel path array. Introduce and use pathmask_to_pos for that task. Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
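A minimal sketch of such a helper, assuming the leftmost bit of the 8-bit path mask (0x80) corresponds to position 0 in the channel path array:

  #include <linux/types.h>
  #include <linux/bitops.h>

  /* Map a single-bit path mask (0x80 .. 0x01) to an array position (0 .. 7). */
  static inline u8 pathmask_to_pos(u8 mask)
  {
          return 8 - ffs(mask);
  }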
-
Committed by Sebastian Ott
During resume from hibernate we already reenable measurement block updates on a per device basis. In addition to that we also need to activate channel measurement globally using the set channel monitor instruction. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Sebastian Ott
Extended measurement blocks need to be 64 byte aligned. To achieve that, 128 bytes are allocated for each measurement block and an align callback returns a 64 byte aligned address inside this area. Replace this code with kmem_cache allocations. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
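Since kmem_cache_create() accepts an alignment argument, the over-allocation plus align callback can be replaced by something along these lines (the cache name and the struct cmbe type are illustrative assumptions, not necessarily the driver's exact identifiers):

  static struct kmem_cache *cmbe_cache;

  static int __init cmbe_cache_init(void)
  {
          /* Objects come back 64-byte aligned; no manual align callback needed. */
          cmbe_cache = kmem_cache_create("cmbe_cache", sizeof(struct cmbe),
                                         64, 0, NULL);
          return cmbe_cache ? 0 : -ENOMEM;
  }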
-
Committed by Sebastian Ott
The measurement block for the extended measurement data is not freed when switching off per-device measurement. Free the measurement block after the hardware has stopped accessing it. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Sebastian Ott
During allocation of extended measurement blocks we check if the device is already active for channel measurement and add the device to a list of devices with active channel measurement. The check is done under ccwlock protection and the list modification is guarded by a different lock. To guarantee that both states are in sync, make sure that both locks are held during the allocation process (like it's already done for the "normal" measurement block allocation). Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Sebastian Ott
Devices with active channel measurement are included in a list. When a device is removed without deactivating channel measurement first, the list_head is freed but still used. Fix this by making sure that channel measurement is deactivated during device deregistration. For devices that we deregister because they are no longer accessible, deactivating channel measurement will fail. In this case we can report success because the FW will no longer access the measurement block. In addition to these steps, keep an extra device reference while channel measurement is active. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Sebastian Ott
Hold the device_lock during [de]activation of the channel measurement block to synchronize concurrent usage of these functions. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Sebastian Ott
Ensure that we hold the ccwlock when accessing private data. Return errors that occur during measurement enabling to userspace. Apply some cleanups while at it. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Alexander Kuleshov
The <linux/memblock.h> header already provides the for_each_mem_range() macro that iterates through memblock areas from type_a that are not included in type_b. We can remove the custom for_each_dump_mem_range() macro and use for_each_mem_range() instead. Signed-off-by: Alexander Kuleshov <kuleshovmail@gmail.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Christian Borntraeger
The Principles of Operation states:

  The storage-operand fetch references of one instruction occur after those of all preceding instructions and before those of subsequent instructions, as observed by other CPUs and by channel programs.
  [...]
  The CPU may fetch the operands of instructions before the instructions are executed.
  [...]
  The CPU may delay placing results in storage.
  [...]
  the results of one instruction are placed in storage after the results of all preceding instructions have been placed in storage and before any results of the succeeding instructions are stored, as observed by other CPUs and by the channel subsystem.

which boils down to:
  - reads are in order
  - writes are in order
  - reads can happen earlier
  - writes can happen later

By definition (see memory-barriers.txt) read barriers order reads vs. reads and write barriers order writes against writes, but neither of these orders reads vs. writes. That means we can implement smp_wmb, smp_rmb, wmb and rmb as simple compiler barriers. To avoid reviewing all driver code for correct barrier usage we keep dma_[rw]mb as serialization for now.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
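Given these ordering rules, a sketch of the resulting barrier mapping (simplified; the real definitions live in arch/s390/include/asm/barrier.h and may differ in detail, e.g. newer machines can use the faster bcr 14,0 form):

  /* Stop compiler reordering only; the hardware already keeps
   * loads ordered against loads and stores against stores. */
  #define barrier()  asm volatile("" : : : "memory")

  #define rmb()      barrier()
  #define wmb()      barrier()
  #define smp_rmb()  rmb()
  #define smp_wmb()  wmb()

  /* Ordering reads against writes still needs real serialization. */
  #define mb()       do { asm volatile("bcr 15,0" : : : "memory"); } while (0)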
-
Committed by Christian Borntraeger
By definition smp_wmb only orders writes against writes. (Finish all previous writes, and do not start any future write). To protect the vdso init code against early reads on other CPUs, let's use a full smp_mb at the end of vdso init. As right now smp_wmb is implemented as full serialization, this needs no stable backport, but this change will be necessary if we reimplement smp_wmb. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Christian Borntraeger
_raw_write_lock_wait first sets the high order bit to indicate a pending writer and then waits for the reader count to drop to zero. smp_rmb by definition only orders reads against reads. Let's use a full smp_mb instead. As right now smp_rmb is implemented as full serialization, this needs no stable backport, but this patch will be necessary if we reimplement smp_rmb. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Christian Borntraeger
We were able to reduce the CPU overhead of big paging scenarios when announcing our paging disks as non-rotational. Almost all dasd devices are implemented in storage servers with cache, raid, striping and lots of magic. There is no point in optimizing the disk schedulers and swap code for single-platter, moving-arm rotational disks. Given the complexity of the setup and the fact that this change is mostly about disabling the additional overhead in the swap code, let's keep the other functionality unchanged and not disable this device as an entropy source, unlike other non-rotational devices. Suggested-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
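A sketch of the kind of queue setup change this implies; queue_flag_set_unlocked() was the request-queue flag helper of that era, and the exact call site in the dasd driver is an assumption here:

  static void dasd_mark_nonrot(struct request_queue *q)
  {
          /* Skip rotational-disk heuristics in the I/O scheduler and swap code. */
          queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
          /* QUEUE_FLAG_ADD_RANDOM is deliberately left alone, so the device
           * keeps contributing to the entropy pool. */
  }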
-
Committed by Ingo Tuchscherer
In the past only even modulus sizes were allowed for RSA keys in CRT format. This restriction was based on limited RSA key generation on older crypto adapters that provide only even modulus sizes. This restriction is no longer valid. Remove it so that crypto requests with an odd RSA modulus length in CRT format can be serviced. Signed-off-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Michael Holzheu
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Linus Torvalds (merge of git://linux-nfs.org/~bfields/linux)
Pull nfsd fixes from Bruce Fields:
 "Two nfsd fixes, one for an RDMA crash, one for a pnfs/block protocol bug"

* tag 'nfsd-4.3-2' of git://linux-nfs.org/~bfields/linux:
  svcrdma: Fix NFS server crash triggered by 1MB NFS WRITE
  nfsd/blocklayout: accept any minlength
-
Committed by Linus Torvalds (merge of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6)
Pull crypto fixes from Herbert Xu:
 "This fixes the following issues:
  - Fix AVX detection to prevent use of non-existent AESNI.
  - Some SPARC ciphers did not set their IV size which may lead to memory corruption"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: ahash - ensure statesize is non-zero
  crypto: camellia_aesni_avx - Fix CPU feature checks
  crypto: sparc - initialize blkcipher.ivsize
-
Committed by Linus Torvalds (merge of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu)
Pull IOMMU fixes from Joerg Roedel:
 "A few fixes piled up:
  - Fix for a suspend/resume issue where PCI probing code overwrote dev->irq for the MSI irq of the AMD IOMMU.
  - Fix for a kernel crash when a 32 bit PCI device was assigned to a KVM guest.
  - Fix for a possible memory leak in the VT-d driver
  - A couple of fixes for the ARM-SMMU driver"

* tag 'iommu-fixes-v4.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
  iommu/amd: Fix NULL pointer deref on device detach
  iommu/amd: Prevent binding other PCI drivers to IOMMU PCI devices
  iommu/vt-d: Fix memory leak in dmar_insert_one_dev_info()
  iommu/arm-smmu: Use correct address mask for CMD_TLBI_S2_IPA
  iommu/arm-smmu: Ensure IAS is set correctly for AArch32-capable SMMUs
  iommu/io-pgtable-arm: Don't use dma_to_phys()
-
Committed by Linus Torvalds (merge of git://people.freedesktop.org/~airlied/linux)
Pull drm fixes from Dave Airlie:
 "I got a bit behind last week, so here is a delayed fixes pull:
  - a bunch of radeon/amd gpu fixes
  - some nouveau regression fixes (ppc bios reading and runtime pm fix)
  - one drm core oops fix
  - two qxl locking fixes
  - one qxl regression fix"

* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
  drm/nouveau/bios: fix OF loading
  drm/nouveau/fbcon: take runpm reference when userspace has an open fd
  drm/nouveau/nouveau: Disable AGP for SiS 761
  drm/nouveau/display: allow up to 16k width/height for fermi+
  drm/nouveau/bios: translate devinit pri/sec i2c bus to internal identifiers
  drm: Fix locking for sysfs dpms file
  drm/amdgpu: fix memory leak in amdgpu_vm_update_page_directory
  drm/amdgpu: fix 32-bit compiler warning
  drm/qxl: avoid dependency lock
  drm/qxl: avoid buffer reservation in qxl_crtc_page_flip
  drm/qxl: fix framebuffer dirty rectangle tracking.
  drm/amdgpu: flag iceland as experimental
  drm/amdgpu: check before checking pci bridge registers
  drm/amdgpu: fix num_crtc on CZ
  drm/amdgpu: restore the fbdev mode in lastclose
  drm/radeon: restore the fbdev mode in lastclose
  drm/radeon: add quirk for ASUS R7 370
  drm/amdgpu: add pm sysfs files late
  drm/radeon: add pm sysfs files late
-
- 13 Oct 2015, 1 commit
-
-
Committed by Russell King
Unlike shash algorithms, ahash drivers must implement export and import, as their descriptors may contain hardware state and cannot be exported as is. Unfortunately some ahash drivers did not provide them and ended up causing crashes with algif_hash. This patch adds a check to prevent these drivers from registering ahash algorithms until they are fixed. Cc: stable@vger.kernel.org Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
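A sketch of such a registration-time check; the surrounding function mirrors what an ahash registration helper in crypto/ahash.c might do and is meant as an illustration, with the new condition being the statesize test:

  static int ahash_prepare_alg(struct ahash_alg *alg)
  {
          struct crypto_alg *base = &alg->halg.base;

          /* A zero statesize means export()/import() were never wired up,
           * so refuse to register the algorithm until the driver is fixed. */
          if (alg->halg.digestsize > PAGE_SIZE / 8 ||
              alg->halg.statesize > PAGE_SIZE / 8 ||
              alg->halg.statesize == 0)
                  return -EINVAL;

          base->cra_type = &crypto_ahash_type;
          base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
          base->cra_flags |= CRYPTO_ALG_TYPE_AHASH;
          return 0;
  }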
-
- 12 Oct 2015, 7 commits
-
-
Committed by Chuck Lever
Now that the NFS server advertises a maximum payload size of 1MB for RPC/RDMA again, it crashes in svc_process_common() when an NFS client sends a 1MB NFS WRITE on an NFS/RDMA mount. The server has set up a 259 element array of struct page pointers in rq_pages[] for each incoming request. The last element of the array is NULL. When an incoming request has been completely received, rdma_read_complete() attempts to set the starting page of the incoming page vector:

  rqstp->rq_arg.pages = &rqstp->rq_pages[head->hdr_count];

and the page to use for the reply:

  rqstp->rq_respages = &rqstp->rq_arg.pages[page_no];

But the value of page_no has already accounted for head->hdr_count. Thus rq_respages now points past the end of the incoming pages. For NFS WRITE operations smaller than the maximum, this is harmless. But when the NFS WRITE operation is as large as the server's max payload size, rq_respages now points at the last entry in rq_pages, which is NULL.

Fixes: cc9a903d ('svcrdma: Change maximum server payload . . .')
BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=270
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Reviewed-by: Sagi Grimberg <sagig@dev.mellanox.co.il> Reviewed-by: Steve Wise <swise@opengridcomputing.com> Reviewed-by: Shirley Ma <shirley.ma@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
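Based on the description above, the fix is to index rq_pages directly when locating the first reply page, so hdr_count is not counted twice; roughly (a sketch, not the verbatim patch):

  /* rq_arg.pages already starts hdr_count pages into rq_pages ... */
  rqstp->rq_arg.pages = &rqstp->rq_pages[head->hdr_count];

  /* ... so the reply pages must be located relative to rq_pages itself,
   * otherwise rq_respages can run past the end of the array. */
  rqstp->rq_respages = &rqstp->rq_pages[page_no];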
-
Committed by Dave Airlie (merge of git://anongit.freedesktop.org/git/nouveau/linux-2.6)
Nothing too crazy here, a couple of regression fixes + runpm/fbcon race fix.

* 'linux-4.3' of git://anongit.freedesktop.org/git/nouveau/linux-2.6:
  drm/nouveau/bios: fix OF loading
  drm/nouveau/fbcon: take runpm reference when userspace has an open fd
  drm/nouveau/nouveau: Disable AGP for SiS 761
  drm/nouveau/display: allow up to 16k width/height for fermi+
  drm/nouveau/bios: translate devinit pri/sec i2c bus to internal identifiers
-
Committed by Ilia Mirkin
Currently OF bios load fails for a few reasons:
  - checksum failure
  - bios size too small
  - no PCIR header
  - bios length not a multiple of 4

In this change, we resolve all of the above by ignoring any checksum failures (since OF VBIOS tends not to have a checksum), and faking the PCIR data when loading from OF.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
We need to do this in order to prevent accesses to the device while it's powered down. Userspace may have an mmap of the fb, and there's no good way (that I know of) to prevent it from touching the device otherwise. This fixes some nasty races between runpm and plymouth on some systems, which result in the GPU getting very upset and hanging the boot. Signed-off-by: Ben Skeggs <bskeggs@redhat.com> Cc: stable@vger.kernel.org
-
Committed by Ondrej Zary
The SiS 761 chipset does not support AGP cards but has AGP capability (for the onboard video). At least the PC Chips A31G board using this chipset has an AGP-like AGPro slot that's wired to the PCI bus. Enabling AGP will fail (GPU lockup and software fbcon, X11 hangs). Add support for matching just the host bridge in nvkm_device_agp_quirks and add an entry for SiS 761 with mode 0 (AGP disabled). Signed-off-by: Ondrej Zary <linux@rainbow-software.org> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
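As a sketch of what such a quirk-table entry might look like; the field layout of struct nvkm_device_agp_quirk is assumed from the description (host bridge vendor/device, chip vendor/device, AGP mode) and may not match the driver exactly:

  static const struct nvkm_device_agp_quirk
  nvkm_device_agp_quirks[] = {
          /* SiS 761 cannot drive AGP cards; match on the host bridge only
           * and force mode 0 (AGP disabled) for any board behind it. */
          { PCI_VENDOR_ID_SI, 0x0761, PCI_ANY_ID, PCI_ANY_ID, 0 },
          {},
  };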
-
Committed by Ilia Mirkin
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Ben Skeggs
fdo#92013. Regression from "i2c: transition pad/ports away from being based on nvkm_object". Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-