Commit 22cbbcef authored by Dave Airlie

Merge tag 'v3.19-rc6' into drm-fixes

Linux 3.19-rc6

Pull in rc6, as the amdkfd fixes are based on it, and I'd rather
do the merges separately.
What: /sys/class/leds/dell::kbd_backlight/als_setting
Date: December 2014
KernelVersion: 3.19
Contact: Gabriele Mazzotta <gabriele.mzt@gmail.com>,
Pali Rohár <pali.rohar@gmail.com>
Description:
This file controls the automatic keyboard
illumination mode on systems that have an ambient
light sensor. Write 1 to this file to enable the auto
mode, 0 to disable it.
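For example, to enable the auto mode run:
echo 1 > /sys/class/leds/dell::kbd_backlight/als_setting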
What: /sys/class/leds/dell::kbd_backlight/start_triggers
Date: December 2014
KernelVersion: 3.19
Contact: Gabriele Mazzotta <gabriele.mzt@gmail.com>,
Pali Rohár <pali.rohar@gmail.com>
Description:
This file controls the input triggers that turn the
keyboard backlight back on after it has been disabled
because of inactivity.
Read the file to see the triggers available. The ones
enabled are preceded by '+', those disabled by '-'.
To enable a trigger, write its name preceded by '+' to
this file. To disable a trigger, write its name preceded
by '-' instead.
For example, to enable the keyboard as a trigger, run:
echo +keyboard > /sys/class/leds/dell::kbd_backlight/start_triggers
To disable it:
echo -keyboard > /sys/class/leds/dell::kbd_backlight/start_triggers
Note that not all the available triggers can be configured.
What: /sys/class/leds/dell::kbd_backlight/stop_timeout
Date: December 2014
KernelVersion: 3.19
Contact: Gabriele Mazzotta <gabriele.mzt@gmail.com>,
Pali Rohár <pali.rohar@gmail.com>
Description:
This file specifies the interval after which the
keyboard illumination is disabled because of inactivity.
The timeouts are expressed in seconds, minutes, hours and
days, for which the symbols are 's', 'm', 'h' and 'd'
respectively.
To configure the timeout, write a value followed by any of
the above units to this file. If no unit is specified, the
value is interpreted as seconds.
For example, to set the timeout to 10 minutes, run:
echo 10m > /sys/class/leds/dell::kbd_backlight/stop_timeout
Note that when this file is read, the returned value might be
expressed in a different unit than the one used when the timeout
was set.
Also note that only some timeouts are supported and that
some systems might fall back to a specific timeout if an
invalid timeout is written to this file.
...@@ -23,7 +23,7 @@ Required nodes: ...@@ -23,7 +23,7 @@ Required nodes:
range of 0x200 bytes. range of 0x200 bytes.
- syscon: the root node of the Integrator platforms must have a - syscon: the root node of the Integrator platforms must have a
system controller node pointong to the control registers, system controller node pointing to the control registers,
with the compatible string with the compatible string
"arm,integrator-ap-syscon" "arm,integrator-ap-syscon"
"arm,integrator-cp-syscon" "arm,integrator-cp-syscon"
......
* QEMU Firmware Configuration bindings for ARM
QEMU's arm-softmmu and aarch64-softmmu emulation / virtualization targets
provide the following Firmware Configuration interface on the "virt" machine
type:
- A write-only, 16-bit wide selector (or control) register,
- a read-write, 64-bit wide data register.
QEMU exposes the control and data register to ARM guests as memory mapped
registers; their location is communicated to the guest's UEFI firmware in the
DTB that QEMU places at the bottom of the guest's DRAM.
The guest writes a selector value (a key) to the selector register, and then
can read the corresponding data (produced by QEMU) via the data register. If
the selected entry is writable, the guest can rewrite it through the data
register.
The selector register takes keys in big endian byte order.
The data register allows accesses with 8, 16, 32 and 64-bit width (only at
offset 0 of the register). Accesses larger than a byte are interpreted as
arrays, bundled together only for better performance. The bytes constituting
such a word, in increasing address order, correspond to the bytes that would
have been transferred by byte-wide accesses in chronological order.
The interface allows guest firmware to download various parameters and blobs
that affect how the firmware works and what tables it installs for the guest
OS. For example, boot order of devices, ACPI tables, SMBIOS tables, kernel and
initrd images for direct kernel booting, virtual machine UUID, SMP information,
virtual NUMA topology, and so on.
The authoritative registry of the valid selector values and their meanings is
the QEMU source code; the structure of the data blobs corresponding to the
individual key values is also defined in the QEMU source code.
The presence of the registers can be verified by selecting the "signature" blob
with key 0x0000, and reading four bytes from the data register. The returned
signature is "QEMU".
The outermost protocol (involving the write / read sequences of the control and
data registers) is expected to be versioned, and/or described by feature bits.
The interface revision / feature bitmap can be retrieved with key 0x0001. The
blob to be read from the data register has size 4, and it is to be interpreted
as a uint32_t value in little endian byte order. The current value
(corresponding to the above outer protocol) is zero.
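As an illustrative sketch (not part of the binding), the probe sequence
described above could look as follows in C. The mapped base pointer and
the helper name are assumptions made for the example:

#include <linux/io.h>
#include <linux/string.h>
#include <linux/types.h>

#define FW_CFG_SIGNATURE	0x0000	/* key of the "QEMU" signature blob */
#define FW_CFG_ID		0x0001	/* key of the revision / feature bitmap */
#define FW_CFG_DATA_OFF		0x0	/* bytes 0x0 to 0x7: data register */
#define FW_CFG_CTL_OFF		0x8	/* bytes 0x8 to 0x9: selector register */

/* 'base' is assumed to be an ioremap()'d pointer to the fw-cfg region. */
static bool fw_cfg_probe(void __iomem *base, u32 *revision)
{
	u8 b[4];
	int i;

	/* The selector register takes keys in big endian byte order. */
	iowrite16be(FW_CFG_SIGNATURE, base + FW_CFG_CTL_OFF);

	/* Successive byte-wide reads consume the selected blob in order. */
	for (i = 0; i < 4; i++)
		b[i] = ioread8(base + FW_CFG_DATA_OFF);

	if (memcmp(b, "QEMU", 4) != 0)
		return false;

	/* The revision blob is a uint32_t in little endian byte order. */
	iowrite16be(FW_CFG_ID, base + FW_CFG_CTL_OFF);
	for (i = 0; i < 4; i++)
		b[i] = ioread8(base + FW_CFG_DATA_OFF);
	*revision = (u32)b[0] | ((u32)b[1] << 8) |
		    ((u32)b[2] << 16) | ((u32)b[3] << 24);

	return true;
}

Byte-wide reads are used throughout so the sketch stays correct regardless
of host endianness.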
The guest kernel is not expected to use these registers (although it is
certainly allowed to); the device tree bindings are documented here because
this is where device tree bindings reside in general.
Required properties:
- compatible: "qemu,fw-cfg-mmio".
- reg: the MMIO region used by the device.
* Bytes 0x0 to 0x7 cover the data register.
* Bytes 0x8 to 0x9 cover the selector register.
* Further registers may be appended to the region in case of future interface
revisions / feature bits.
Example:
/ {
#size-cells = <0x2>;
#address-cells = <0x2>;
fw-cfg@9020000 {
compatible = "qemu,fw-cfg-mmio";
reg = <0x0 0x9020000 0x0 0xa>;
};
};
...@@ -19,7 +19,7 @@ type of the connections, they just map their existence. Specific properties ...@@ -19,7 +19,7 @@ type of the connections, they just map their existence. Specific properties
may be described by specialized bindings depending on the type of connection. may be described by specialized bindings depending on the type of connection.
To see how this binding applies to video pipelines, for example, see To see how this binding applies to video pipelines, for example, see
Documentation/device-tree/bindings/media/video-interfaces.txt. Documentation/devicetree/bindings/media/video-interfaces.txt.
Here the ports describe data interfaces, and the links between them are Here the ports describe data interfaces, and the links between them are
the connecting data buses. A single port with multiple connections can the connecting data buses. A single port with multiple connections can
correspond to multiple devices being connected to the same physical bus. correspond to multiple devices being connected to the same physical bus.
......
...@@ -9,7 +9,6 @@ ad Avionic Design GmbH ...@@ -9,7 +9,6 @@ ad Avionic Design GmbH
adapteva Adapteva, Inc. adapteva Adapteva, Inc.
adi Analog Devices, Inc. adi Analog Devices, Inc.
aeroflexgaisler Aeroflex Gaisler AB aeroflexgaisler Aeroflex Gaisler AB
ak Asahi Kasei Corp.
allwinner Allwinner Technology Co., Ltd. allwinner Allwinner Technology Co., Ltd.
altr Altera Corp. altr Altera Corp.
amcc Applied Micro Circuits Corporation (APM, formally AMCC) amcc Applied Micro Circuits Corporation (APM, formally AMCC)
...@@ -20,6 +19,7 @@ amstaos AMS-Taos Inc. ...@@ -20,6 +19,7 @@ amstaos AMS-Taos Inc.
apm Applied Micro Circuits Corporation (APM) apm Applied Micro Circuits Corporation (APM)
arm ARM Ltd. arm ARM Ltd.
armadeus ARMadeus Systems SARL armadeus ARMadeus Systems SARL
asahi-kasei Asahi Kasei Corp.
atmel Atmel Corporation atmel Atmel Corporation
auo AU Optronics Corporation auo AU Optronics Corporation
avago Avago Technologies avago Avago Technologies
...@@ -127,6 +127,7 @@ pixcir PIXCIR MICROELECTRONICS Co., Ltd ...@@ -127,6 +127,7 @@ pixcir PIXCIR MICROELECTRONICS Co., Ltd
powervr PowerVR (deprecated, use img) powervr PowerVR (deprecated, use img)
qca Qualcomm Atheros, Inc. qca Qualcomm Atheros, Inc.
qcom Qualcomm Technologies, Inc qcom Qualcomm Technologies, Inc
qemu QEMU, a generic and open source machine emulator and virtualizer
qnap QNAP Systems, Inc. qnap QNAP Systems, Inc.
radxa Radxa radxa Radxa
raidsonic RaidSonic Technology GmbH raidsonic RaidSonic Technology GmbH
...@@ -168,6 +169,7 @@ usi Universal Scientific Industrial Co., Ltd. ...@@ -168,6 +169,7 @@ usi Universal Scientific Industrial Co., Ltd.
v3 V3 Semiconductor v3 V3 Semiconductor
variscite Variscite Ltd. variscite Variscite Ltd.
via VIA Technologies, Inc. via VIA Technologies, Inc.
virtio Virtual I/O Device Specification, developed by the OASIS consortium
voipac Voipac Technologies s.r.o. voipac Voipac Technologies s.r.o.
winbond Winbond Electronics corp. winbond Winbond Electronics corp.
wlf Wolfson Microelectronics wlf Wolfson Microelectronics
......
...@@ -696,7 +696,7 @@ L: alsa-devel@alsa-project.org (moderated for non-subscribers) ...@@ -696,7 +696,7 @@ L: alsa-devel@alsa-project.org (moderated for non-subscribers)
W: http://blackfin.uclinux.org/ W: http://blackfin.uclinux.org/
S: Supported S: Supported
F: sound/soc/blackfin/* F: sound/soc/blackfin/*
ANALOG DEVICES INC IIO DRIVERS ANALOG DEVICES INC IIO DRIVERS
M: Lars-Peter Clausen <lars@metafoo.de> M: Lars-Peter Clausen <lars@metafoo.de>
M: Michael Hennerich <Michael.Hennerich@analog.com> M: Michael Hennerich <Michael.Hennerich@analog.com>
...@@ -4750,14 +4750,14 @@ S: Supported ...@@ -4750,14 +4750,14 @@ S: Supported
F: drivers/net/ethernet/ibm/ibmveth.* F: drivers/net/ethernet/ibm/ibmveth.*
IBM Power Virtual SCSI Device Drivers IBM Power Virtual SCSI Device Drivers
M: Nathan Fontenot <nfont@linux.vnet.ibm.com> M: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
L: linux-scsi@vger.kernel.org L: linux-scsi@vger.kernel.org
S: Supported S: Supported
F: drivers/scsi/ibmvscsi/ibmvscsi* F: drivers/scsi/ibmvscsi/ibmvscsi*
F: drivers/scsi/ibmvscsi/viosrp.h F: drivers/scsi/ibmvscsi/viosrp.h
IBM Power Virtual FC Device Drivers IBM Power Virtual FC Device Drivers
M: Brian King <brking@linux.vnet.ibm.com> M: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
L: linux-scsi@vger.kernel.org L: linux-scsi@vger.kernel.org
S: Supported S: Supported
F: drivers/scsi/ibmvscsi/ibmvfc* F: drivers/scsi/ibmvscsi/ibmvfc*
...@@ -4946,7 +4946,6 @@ K: \b(ABS|SYN)_MT_ ...@@ -4946,7 +4946,6 @@ K: \b(ABS|SYN)_MT_
INTEL C600 SERIES SAS CONTROLLER DRIVER INTEL C600 SERIES SAS CONTROLLER DRIVER
M: Intel SCU Linux support <intel-linux-scu@intel.com> M: Intel SCU Linux support <intel-linux-scu@intel.com>
M: Artur Paszkiewicz <artur.paszkiewicz@intel.com> M: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
M: Dave Jiang <dave.jiang@intel.com>
L: linux-scsi@vger.kernel.org L: linux-scsi@vger.kernel.org
T: git git://git.code.sf.net/p/intel-sas/isci T: git git://git.code.sf.net/p/intel-sas/isci
S: Supported S: Supported
...@@ -7024,14 +7023,12 @@ OPEN FIRMWARE AND FLATTENED DEVICE TREE ...@@ -7024,14 +7023,12 @@ OPEN FIRMWARE AND FLATTENED DEVICE TREE
M: Grant Likely <grant.likely@linaro.org> M: Grant Likely <grant.likely@linaro.org>
M: Rob Herring <robh+dt@kernel.org> M: Rob Herring <robh+dt@kernel.org>
L: devicetree@vger.kernel.org L: devicetree@vger.kernel.org
W: http://fdt.secretlab.ca W: http://www.devicetree.org/
T: git git://git.secretlab.ca/git/linux-2.6.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/glikely/linux.git
S: Maintained S: Maintained
F: drivers/of/ F: drivers/of/
F: include/linux/of*.h F: include/linux/of*.h
F: scripts/dtc/ F: scripts/dtc/
K: of_get_property
K: of_match_table
OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS
M: Rob Herring <robh+dt@kernel.org> M: Rob Herring <robh+dt@kernel.org>
...@@ -7276,7 +7273,7 @@ S: Maintained ...@@ -7276,7 +7273,7 @@ S: Maintained
F: drivers/pci/host/*layerscape* F: drivers/pci/host/*layerscape*
PCI DRIVER FOR IMX6 PCI DRIVER FOR IMX6
M: Richard Zhu <r65037@freescale.com> M: Richard Zhu <Richard.Zhu@freescale.com>
M: Lucas Stach <l.stach@pengutronix.de> M: Lucas Stach <l.stach@pengutronix.de>
L: linux-pci@vger.kernel.org L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
......
VERSION = 3 VERSION = 3
PATCHLEVEL = 19 PATCHLEVEL = 19
SUBLEVEL = 0 SUBLEVEL = 0
EXTRAVERSION = -rc5 EXTRAVERSION = -rc6
NAME = Diseased Newt NAME = Diseased Newt
# *DOCUMENTATION* # *DOCUMENTATION*
......
...@@ -285,8 +285,12 @@ pcibios_claim_one_bus(struct pci_bus *b) ...@@ -285,8 +285,12 @@ pcibios_claim_one_bus(struct pci_bus *b)
if (r->parent || !r->start || !r->flags) if (r->parent || !r->start || !r->flags)
continue; continue;
if (pci_has_flag(PCI_PROBE_ONLY) || if (pci_has_flag(PCI_PROBE_ONLY) ||
(r->flags & IORESOURCE_PCI_FIXED)) (r->flags & IORESOURCE_PCI_FIXED)) {
pci_claim_resource(dev, i); if (pci_claim_resource(dev, i) == 0)
continue;
pci_claim_bridge_resource(dev, i);
}
} }
} }
......
...@@ -1257,6 +1257,8 @@ ...@@ -1257,6 +1257,8 @@
tx-fifo-resize; tx-fifo-resize;
maximum-speed = "super-speed"; maximum-speed = "super-speed";
dr_mode = "otg"; dr_mode = "otg";
snps,dis_u3_susphy_quirk;
snps,dis_u2_susphy_quirk;
}; };
}; };
...@@ -1278,6 +1280,8 @@ ...@@ -1278,6 +1280,8 @@
tx-fifo-resize; tx-fifo-resize;
maximum-speed = "high-speed"; maximum-speed = "high-speed";
dr_mode = "otg"; dr_mode = "otg";
snps,dis_u3_susphy_quirk;
snps,dis_u2_susphy_quirk;
}; };
}; };
...@@ -1299,6 +1303,8 @@ ...@@ -1299,6 +1303,8 @@
tx-fifo-resize; tx-fifo-resize;
maximum-speed = "high-speed"; maximum-speed = "high-speed";
dr_mode = "otg"; dr_mode = "otg";
snps,dis_u3_susphy_quirk;
snps,dis_u2_susphy_quirk;
}; };
}; };
......
...@@ -369,7 +369,7 @@ ...@@ -369,7 +369,7 @@
compatible = "fsl,imx25-pwm", "fsl,imx27-pwm"; compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
#pwm-cells = <2>; #pwm-cells = <2>;
reg = <0x53fa0000 0x4000>; reg = <0x53fa0000 0x4000>;
clocks = <&clks 106>, <&clks 36>; clocks = <&clks 106>, <&clks 52>;
clock-names = "ipg", "per"; clock-names = "ipg", "per";
interrupts = <36>; interrupts = <36>;
}; };
...@@ -388,7 +388,7 @@ ...@@ -388,7 +388,7 @@
compatible = "fsl,imx25-pwm", "fsl,imx27-pwm"; compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
#pwm-cells = <2>; #pwm-cells = <2>;
reg = <0x53fa8000 0x4000>; reg = <0x53fa8000 0x4000>;
clocks = <&clks 107>, <&clks 36>; clocks = <&clks 107>, <&clks 52>;
clock-names = "ipg", "per"; clock-names = "ipg", "per";
interrupts = <41>; interrupts = <41>;
}; };
...@@ -429,7 +429,7 @@ ...@@ -429,7 +429,7 @@
pwm4: pwm@53fc8000 { pwm4: pwm@53fc8000 {
compatible = "fsl,imx25-pwm", "fsl,imx27-pwm"; compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
reg = <0x53fc8000 0x4000>; reg = <0x53fc8000 0x4000>;
clocks = <&clks 108>, <&clks 36>; clocks = <&clks 108>, <&clks 52>;
clock-names = "ipg", "per"; clock-names = "ipg", "per";
interrupts = <42>; interrupts = <42>;
}; };
...@@ -476,7 +476,7 @@ ...@@ -476,7 +476,7 @@
compatible = "fsl,imx25-pwm", "fsl,imx27-pwm"; compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
#pwm-cells = <2>; #pwm-cells = <2>;
reg = <0x53fe0000 0x4000>; reg = <0x53fe0000 0x4000>;
clocks = <&clks 105>, <&clks 36>; clocks = <&clks 105>, <&clks 52>;
clock-names = "ipg", "per"; clock-names = "ipg", "per";
interrupts = <26>; interrupts = <26>;
}; };
......
...@@ -406,7 +406,7 @@ ...@@ -406,7 +406,7 @@
clock-frequency = <400000>; clock-frequency = <400000>;
magnetometer@c { magnetometer@c {
compatible = "ak,ak8975"; compatible = "asahi-kasei,ak8975";
reg = <0xc>; reg = <0xc>;
interrupt-parent = <&gpio>; interrupt-parent = <&gpio>;
interrupts = <TEGRA_GPIO(N, 5) IRQ_TYPE_LEVEL_HIGH>; interrupts = <TEGRA_GPIO(N, 5) IRQ_TYPE_LEVEL_HIGH>;
......
...@@ -253,21 +253,22 @@ ...@@ -253,21 +253,22 @@
.endm .endm
.macro restore_user_regs, fast = 0, offset = 0 .macro restore_user_regs, fast = 0, offset = 0
ldr r1, [sp, #\offset + S_PSR] @ get calling cpsr mov r2, sp
ldr lr, [sp, #\offset + S_PC]! @ get pc ldr r1, [r2, #\offset + S_PSR] @ get calling cpsr
ldr lr, [r2, #\offset + S_PC]! @ get pc
msr spsr_cxsf, r1 @ save in spsr_svc msr spsr_cxsf, r1 @ save in spsr_svc
#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K) #if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K)
@ We must avoid clrex due to Cortex-A15 erratum #830321 @ We must avoid clrex due to Cortex-A15 erratum #830321
strex r1, r2, [sp] @ clear the exclusive monitor strex r1, r2, [r2] @ clear the exclusive monitor
#endif #endif
.if \fast .if \fast
ldmdb sp, {r1 - lr}^ @ get calling r1 - lr ldmdb r2, {r1 - lr}^ @ get calling r1 - lr
.else .else
ldmdb sp, {r0 - lr}^ @ get calling r0 - lr ldmdb r2, {r0 - lr}^ @ get calling r0 - lr
.endif .endif
mov r0, r0 @ ARMv5T and earlier require a nop mov r0, r0 @ ARMv5T and earlier require a nop
@ after ldm {}^ @ after ldm {}^
add sp, sp, #S_FRAME_SIZE - S_PC add sp, sp, #\offset + S_FRAME_SIZE
movs pc, lr @ return & move spsr_svc into cpsr movs pc, lr @ return & move spsr_svc into cpsr
.endm .endm
......
...@@ -116,8 +116,14 @@ int armpmu_event_set_period(struct perf_event *event) ...@@ -116,8 +116,14 @@ int armpmu_event_set_period(struct perf_event *event)
ret = 1; ret = 1;
} }
if (left > (s64)armpmu->max_period) /*
left = armpmu->max_period; * Limit the maximum period to prevent the counter value
* from overtaking the one we are about to program. In
* effect we are reducing max_period to account for
* interrupt latency (and we are being very conservative).
*/
if (left > (armpmu->max_period >> 1))
left = armpmu->max_period >> 1;
local64_set(&hwc->prev_count, (u64)-left); local64_set(&hwc->prev_count, (u64)-left);
......
...@@ -657,10 +657,13 @@ int __init arm_add_memory(u64 start, u64 size) ...@@ -657,10 +657,13 @@ int __init arm_add_memory(u64 start, u64 size)
/* /*
* Ensure that start/size are aligned to a page boundary. * Ensure that start/size are aligned to a page boundary.
* Size is appropriately rounded down, start is rounded up. * Size is rounded down, start is rounded up.
*/ */
size -= start & ~PAGE_MASK;
aligned_start = PAGE_ALIGN(start); aligned_start = PAGE_ALIGN(start);
if (aligned_start > start + size)
size = 0;
else
size -= aligned_start - start;
#ifndef CONFIG_ARCH_PHYS_ADDR_T_64BIT #ifndef CONFIG_ARCH_PHYS_ADDR_T_64BIT
if (aligned_start > ULONG_MAX) { if (aligned_start > ULONG_MAX) {
......
...@@ -246,9 +246,14 @@ static int coherency_type(void) ...@@ -246,9 +246,14 @@ static int coherency_type(void)
return type; return type;
} }
/*
* As a precaution, we currently completely disable hardware I/O
* coherency, until enough testing is done with automatic I/O
* synchronization barriers to validate that it is a proper solution.
*/
int coherency_available(void) int coherency_available(void)
{ {
return coherency_type() != COHERENCY_FABRIC_TYPE_NONE; return false;
} }
int __init coherency_init(void) int __init coherency_init(void)
......
...@@ -211,6 +211,7 @@ extern struct device *omap2_get_iva_device(void); ...@@ -211,6 +211,7 @@ extern struct device *omap2_get_iva_device(void);
extern struct device *omap2_get_l3_device(void); extern struct device *omap2_get_l3_device(void);
extern struct device *omap4_get_dsp_device(void); extern struct device *omap4_get_dsp_device(void);
unsigned int omap4_xlate_irq(unsigned int hwirq);
void omap_gic_of_init(void); void omap_gic_of_init(void);
#ifdef CONFIG_CACHE_L2X0 #ifdef CONFIG_CACHE_L2X0
......
...@@ -256,6 +256,38 @@ static int __init omap4_sar_ram_init(void) ...@@ -256,6 +256,38 @@ static int __init omap4_sar_ram_init(void)
} }
omap_early_initcall(omap4_sar_ram_init); omap_early_initcall(omap4_sar_ram_init);
static struct of_device_id gic_match[] = {
{ .compatible = "arm,cortex-a9-gic", },
{ .compatible = "arm,cortex-a15-gic", },
{ },
};
static struct device_node *gic_node;
unsigned int omap4_xlate_irq(unsigned int hwirq)
{
struct of_phandle_args irq_data;
unsigned int irq;
if (!gic_node)
gic_node = of_find_matching_node(NULL, gic_match);
if (WARN_ON(!gic_node))
return hwirq;
irq_data.np = gic_node;
irq_data.args_count = 3;
irq_data.args[0] = 0;
irq_data.args[1] = hwirq - OMAP44XX_IRQ_GIC_START;
irq_data.args[2] = IRQ_TYPE_LEVEL_HIGH;
irq = irq_create_of_mapping(&irq_data);
if (WARN_ON(!irq))
irq = hwirq;
return irq;
}
void __init omap_gic_of_init(void) void __init omap_gic_of_init(void)
{ {
struct device_node *np; struct device_node *np;
......
...@@ -3534,9 +3534,15 @@ int omap_hwmod_fill_resources(struct omap_hwmod *oh, struct resource *res) ...@@ -3534,9 +3534,15 @@ int omap_hwmod_fill_resources(struct omap_hwmod *oh, struct resource *res)
mpu_irqs_cnt = _count_mpu_irqs(oh); mpu_irqs_cnt = _count_mpu_irqs(oh);
for (i = 0; i < mpu_irqs_cnt; i++) { for (i = 0; i < mpu_irqs_cnt; i++) {
unsigned int irq;
if (oh->xlate_irq)
irq = oh->xlate_irq((oh->mpu_irqs + i)->irq);
else
irq = (oh->mpu_irqs + i)->irq;
(res + r)->name = (oh->mpu_irqs + i)->name; (res + r)->name = (oh->mpu_irqs + i)->name;
(res + r)->start = (oh->mpu_irqs + i)->irq; (res + r)->start = irq;
(res + r)->end = (oh->mpu_irqs + i)->irq; (res + r)->end = irq;
(res + r)->flags = IORESOURCE_IRQ; (res + r)->flags = IORESOURCE_IRQ;
r++; r++;
} }
......
...@@ -676,6 +676,7 @@ struct omap_hwmod { ...@@ -676,6 +676,7 @@ struct omap_hwmod {
spinlock_t _lock; spinlock_t _lock;
struct list_head node; struct list_head node;
struct omap_hwmod_ocp_if *_mpu_port; struct omap_hwmod_ocp_if *_mpu_port;
unsigned int (*xlate_irq)(unsigned int);
u16 flags; u16 flags;
u8 mpu_rt_idx; u8 mpu_rt_idx;
u8 response_lat; u8 response_lat;
......
...@@ -479,6 +479,7 @@ static struct omap_hwmod omap44xx_dma_system_hwmod = { ...@@ -479,6 +479,7 @@ static struct omap_hwmod omap44xx_dma_system_hwmod = {
.class = &omap44xx_dma_hwmod_class, .class = &omap44xx_dma_hwmod_class,
.clkdm_name = "l3_dma_clkdm", .clkdm_name = "l3_dma_clkdm",
.mpu_irqs = omap44xx_dma_system_irqs, .mpu_irqs = omap44xx_dma_system_irqs,
.xlate_irq = omap4_xlate_irq,
.main_clk = "l3_div_ck", .main_clk = "l3_div_ck",
.prcm = { .prcm = {
.omap4 = { .omap4 = {
...@@ -640,6 +641,7 @@ static struct omap_hwmod omap44xx_dss_dispc_hwmod = { ...@@ -640,6 +641,7 @@ static struct omap_hwmod omap44xx_dss_dispc_hwmod = {
.class = &omap44xx_dispc_hwmod_class, .class = &omap44xx_dispc_hwmod_class,
.clkdm_name = "l3_dss_clkdm", .clkdm_name = "l3_dss_clkdm",
.mpu_irqs = omap44xx_dss_dispc_irqs, .mpu_irqs = omap44xx_dss_dispc_irqs,
.xlate_irq = omap4_xlate_irq,
.sdma_reqs = omap44xx_dss_dispc_sdma_reqs, .sdma_reqs = omap44xx_dss_dispc_sdma_reqs,
.main_clk = "dss_dss_clk", .main_clk = "dss_dss_clk",
.prcm = { .prcm = {
...@@ -693,6 +695,7 @@ static struct omap_hwmod omap44xx_dss_dsi1_hwmod = { ...@@ -693,6 +695,7 @@ static struct omap_hwmod omap44xx_dss_dsi1_hwmod = {
.class = &omap44xx_dsi_hwmod_class, .class = &omap44xx_dsi_hwmod_class,
.clkdm_name = "l3_dss_clkdm", .clkdm_name = "l3_dss_clkdm",
.mpu_irqs = omap44xx_dss_dsi1_irqs, .mpu_irqs = omap44xx_dss_dsi1_irqs,
.xlate_irq = omap4_xlate_irq,
.sdma_reqs = omap44xx_dss_dsi1_sdma_reqs, .sdma_reqs = omap44xx_dss_dsi1_sdma_reqs,
.main_clk = "dss_dss_clk", .main_clk = "dss_dss_clk",
.prcm = { .prcm = {
...@@ -726,6 +729,7 @@ static struct omap_hwmod omap44xx_dss_dsi2_hwmod = { ...@@ -726,6 +729,7 @@ static struct omap_hwmod omap44xx_dss_dsi2_hwmod = {
.class = &omap44xx_dsi_hwmod_class, .class = &omap44xx_dsi_hwmod_class,
.clkdm_name = "l3_dss_clkdm", .clkdm_name = "l3_dss_clkdm",
.mpu_irqs = omap44xx_dss_dsi2_irqs, .mpu_irqs = omap44xx_dss_dsi2_irqs,
.xlate_irq = omap4_xlate_irq,
.sdma_reqs = omap44xx_dss_dsi2_sdma_reqs, .sdma_reqs = omap44xx_dss_dsi2_sdma_reqs,
.main_clk = "dss_dss_clk", .main_clk = "dss_dss_clk",
.prcm = { .prcm = {
...@@ -784,6 +788,7 @@ static struct omap_hwmod omap44xx_dss_hdmi_hwmod = { ...@@ -784,6 +788,7 @@ static struct omap_hwmod omap44xx_dss_hdmi_hwmod = {
*/ */
.flags = HWMOD_SWSUP_SIDLE, .flags = HWMOD_SWSUP_SIDLE,
.mpu_irqs = omap44xx_dss_hdmi_irqs, .mpu_irqs = omap44xx_dss_hdmi_irqs,
.xlate_irq = omap4_xlate_irq,
.sdma_reqs = omap44xx_dss_hdmi_sdma_reqs, .sdma_reqs = omap44xx_dss_hdmi_sdma_reqs,
.main_clk = "dss_48mhz_clk", .main_clk = "dss_48mhz_clk",
.prcm = { .prcm = {
......
...@@ -288,6 +288,7 @@ static struct omap_hwmod omap54xx_dma_system_hwmod = { ...@@ -288,6 +288,7 @@ static struct omap_hwmod omap54xx_dma_system_hwmod = {
.class = &omap54xx_dma_hwmod_class, .class = &omap54xx_dma_hwmod_class,
.clkdm_name = "dma_clkdm", .clkdm_name = "dma_clkdm",
.mpu_irqs = omap54xx_dma_system_irqs, .mpu_irqs = omap54xx_dma_system_irqs,
.xlate_irq = omap4_xlate_irq,
.main_clk = "l3_iclk_div", .main_clk = "l3_iclk_div",
.prcm = { .prcm = {
.omap4 = { .omap4 = {
......
...@@ -498,6 +498,7 @@ struct omap_prcm_irq_setup { ...@@ -498,6 +498,7 @@ struct omap_prcm_irq_setup {
u8 nr_irqs; u8 nr_irqs;
const struct omap_prcm_irq *irqs; const struct omap_prcm_irq *irqs;
int irq; int irq;
unsigned int (*xlate_irq)(unsigned int);
void (*read_pending_irqs)(unsigned long *events); void (*read_pending_irqs)(unsigned long *events);
void (*ocp_barrier)(void); void (*ocp_barrier)(void);
void (*save_and_clear_irqen)(u32 *saved_mask); void (*save_and_clear_irqen)(u32 *saved_mask);
......
...@@ -49,6 +49,7 @@ static struct omap_prcm_irq_setup omap4_prcm_irq_setup = { ...@@ -49,6 +49,7 @@ static struct omap_prcm_irq_setup omap4_prcm_irq_setup = {
.irqs = omap4_prcm_irqs, .irqs = omap4_prcm_irqs,
.nr_irqs = ARRAY_SIZE(omap4_prcm_irqs), .nr_irqs = ARRAY_SIZE(omap4_prcm_irqs),
.irq = 11 + OMAP44XX_IRQ_GIC_START, .irq = 11 + OMAP44XX_IRQ_GIC_START,
.xlate_irq = omap4_xlate_irq,
.read_pending_irqs = &omap44xx_prm_read_pending_irqs, .read_pending_irqs = &omap44xx_prm_read_pending_irqs,
.ocp_barrier = &omap44xx_prm_ocp_barrier, .ocp_barrier = &omap44xx_prm_ocp_barrier,
.save_and_clear_irqen = &omap44xx_prm_save_and_clear_irqen, .save_and_clear_irqen = &omap44xx_prm_save_and_clear_irqen,
...@@ -751,8 +752,10 @@ static int omap44xx_prm_late_init(void) ...@@ -751,8 +752,10 @@ static int omap44xx_prm_late_init(void)
} }
/* Once OMAP4 DT is filled as well */ /* Once OMAP4 DT is filled as well */
if (irq_num >= 0) if (irq_num >= 0) {
omap4_prcm_irq_setup.irq = irq_num; omap4_prcm_irq_setup.irq = irq_num;
omap4_prcm_irq_setup.xlate_irq = NULL;
}
} }
omap44xx_prm_enable_io_wakeup(); omap44xx_prm_enable_io_wakeup();
......
...@@ -187,6 +187,7 @@ int omap_prcm_event_to_irq(const char *name) ...@@ -187,6 +187,7 @@ int omap_prcm_event_to_irq(const char *name)
*/ */
void omap_prcm_irq_cleanup(void) void omap_prcm_irq_cleanup(void)
{ {
unsigned int irq;
int i; int i;
if (!prcm_irq_setup) { if (!prcm_irq_setup) {
...@@ -211,7 +212,11 @@ void omap_prcm_irq_cleanup(void) ...@@ -211,7 +212,11 @@ void omap_prcm_irq_cleanup(void)
kfree(prcm_irq_setup->priority_mask); kfree(prcm_irq_setup->priority_mask);
prcm_irq_setup->priority_mask = NULL; prcm_irq_setup->priority_mask = NULL;
irq_set_chained_handler(prcm_irq_setup->irq, NULL); if (prcm_irq_setup->xlate_irq)
irq = prcm_irq_setup->xlate_irq(prcm_irq_setup->irq);
else
irq = prcm_irq_setup->irq;
irq_set_chained_handler(irq, NULL);
if (prcm_irq_setup->base_irq > 0) if (prcm_irq_setup->base_irq > 0)
irq_free_descs(prcm_irq_setup->base_irq, irq_free_descs(prcm_irq_setup->base_irq,
...@@ -259,6 +264,7 @@ int omap_prcm_register_chain_handler(struct omap_prcm_irq_setup *irq_setup) ...@@ -259,6 +264,7 @@ int omap_prcm_register_chain_handler(struct omap_prcm_irq_setup *irq_setup)
int offset, i; int offset, i;
struct irq_chip_generic *gc; struct irq_chip_generic *gc;
struct irq_chip_type *ct; struct irq_chip_type *ct;
unsigned int irq;
if (!irq_setup) if (!irq_setup)
return -EINVAL; return -EINVAL;
...@@ -298,7 +304,11 @@ int omap_prcm_register_chain_handler(struct omap_prcm_irq_setup *irq_setup) ...@@ -298,7 +304,11 @@ int omap_prcm_register_chain_handler(struct omap_prcm_irq_setup *irq_setup)
1 << (offset & 0x1f); 1 << (offset & 0x1f);
} }
irq_set_chained_handler(irq_setup->irq, omap_prcm_irq_handler); if (irq_setup->xlate_irq)
irq = irq_setup->xlate_irq(irq_setup->irq);
else
irq = irq_setup->irq;
irq_set_chained_handler(irq, omap_prcm_irq_handler);
irq_setup->base_irq = irq_alloc_descs(-1, 0, irq_setup->nr_regs * 32, irq_setup->base_irq = irq_alloc_descs(-1, 0, irq_setup->nr_regs * 32,
0); 0);
......
...@@ -66,19 +66,24 @@ void __init omap_pmic_init(int bus, u32 clkrate, ...@@ -66,19 +66,24 @@ void __init omap_pmic_init(int bus, u32 clkrate,
omap_register_i2c_bus(bus, clkrate, &pmic_i2c_board_info, 1); omap_register_i2c_bus(bus, clkrate, &pmic_i2c_board_info, 1);
} }
#ifdef CONFIG_ARCH_OMAP4
void __init omap4_pmic_init(const char *pmic_type, void __init omap4_pmic_init(const char *pmic_type,
struct twl4030_platform_data *pmic_data, struct twl4030_platform_data *pmic_data,
struct i2c_board_info *devices, int nr_devices) struct i2c_board_info *devices, int nr_devices)
{ {
/* PMIC part*/ /* PMIC part*/
unsigned int irq;
omap_mux_init_signal("sys_nirq1", OMAP_PIN_INPUT_PULLUP | OMAP_PIN_OFF_WAKEUPENABLE); omap_mux_init_signal("sys_nirq1", OMAP_PIN_INPUT_PULLUP | OMAP_PIN_OFF_WAKEUPENABLE);
omap_mux_init_signal("fref_clk0_out.sys_drm_msecure", OMAP_PIN_OUTPUT); omap_mux_init_signal("fref_clk0_out.sys_drm_msecure", OMAP_PIN_OUTPUT);
omap_pmic_init(1, 400, pmic_type, 7 + OMAP44XX_IRQ_GIC_START, pmic_data); irq = omap4_xlate_irq(7 + OMAP44XX_IRQ_GIC_START);
omap_pmic_init(1, 400, pmic_type, irq, pmic_data);
/* Register additional devices on i2c1 bus if needed */ /* Register additional devices on i2c1 bus if needed */
if (devices) if (devices)
i2c_register_board_info(1, devices, nr_devices); i2c_register_board_info(1, devices, nr_devices);
} }
#endif
void __init omap_pmic_late_init(void) void __init omap_pmic_late_init(void)
{ {
......
...@@ -576,11 +576,18 @@ void __init r8a7778_init_irq_extpin(int irlm) ...@@ -576,11 +576,18 @@ void __init r8a7778_init_irq_extpin(int irlm)
void __init r8a7778_init_irq_dt(void) void __init r8a7778_init_irq_dt(void)
{ {
void __iomem *base = ioremap_nocache(0xfe700000, 0x00100000); void __iomem *base = ioremap_nocache(0xfe700000, 0x00100000);
#ifdef CONFIG_ARCH_SHMOBILE_LEGACY
void __iomem *gic_dist_base = ioremap_nocache(0xfe438000, 0x1000);
void __iomem *gic_cpu_base = ioremap_nocache(0xfe430000, 0x1000);
#endif
BUG_ON(!base); BUG_ON(!base);
#ifdef CONFIG_ARCH_SHMOBILE_LEGACY
gic_init(0, 29, gic_dist_base, gic_cpu_base);
#else
irqchip_init(); irqchip_init();
#endif
/* route all interrupts to ARM */ /* route all interrupts to ARM */
__raw_writel(0x73ffffff, base + INT2NTSR0); __raw_writel(0x73ffffff, base + INT2NTSR0);
__raw_writel(0xffffffff, base + INT2NTSR1); __raw_writel(0xffffffff, base + INT2NTSR1);
......
...@@ -720,10 +720,17 @@ static int r8a7779_set_wake(struct irq_data *data, unsigned int on) ...@@ -720,10 +720,17 @@ static int r8a7779_set_wake(struct irq_data *data, unsigned int on)
void __init r8a7779_init_irq_dt(void) void __init r8a7779_init_irq_dt(void)
{ {
#ifdef CONFIG_ARCH_SHMOBILE_LEGACY
void __iomem *gic_dist_base = ioremap_nocache(0xf0001000, 0x1000);
void __iomem *gic_cpu_base = ioremap_nocache(0xf0000100, 0x1000);
#endif
gic_arch_extn.irq_set_wake = r8a7779_set_wake; gic_arch_extn.irq_set_wake = r8a7779_set_wake;
#ifdef CONFIG_ARCH_SHMOBILE_LEGACY
gic_init(0, 29, gic_dist_base, gic_cpu_base);
#else
irqchip_init(); irqchip_init();
#endif
/* route all interrupts to ARM */ /* route all interrupts to ARM */
__raw_writel(0xffffffff, INT2NTSR0); __raw_writel(0xffffffff, INT2NTSR0);
__raw_writel(0x3fffffff, INT2NTSR1); __raw_writel(0x3fffffff, INT2NTSR1);
......
...@@ -85,6 +85,7 @@ vdso_install: ...@@ -85,6 +85,7 @@ vdso_install:
# We use MRPROPER_FILES and CLEAN_FILES now # We use MRPROPER_FILES and CLEAN_FILES now
archclean: archclean:
$(Q)$(MAKE) $(clean)=$(boot) $(Q)$(MAKE) $(clean)=$(boot)
$(Q)$(MAKE) $(clean)=$(boot)/dts
define archhelp define archhelp
echo '* Image.gz - Compressed kernel image (arch/$(ARCH)/boot/Image.gz)' echo '* Image.gz - Compressed kernel image (arch/$(ARCH)/boot/Image.gz)'
......
...@@ -3,6 +3,4 @@ dts-dirs += apm ...@@ -3,6 +3,4 @@ dts-dirs += apm
dts-dirs += arm dts-dirs += arm
dts-dirs += cavium dts-dirs += cavium
always := $(dtb-y)
subdir-y := $(dts-dirs) subdir-y := $(dts-dirs)
clean-files := *.dtb
...@@ -22,7 +22,7 @@ ...@@ -22,7 +22,7 @@
}; };
chosen { chosen {
stdout-path = &soc_uart0; stdout-path = "serial0:115200n8";
}; };
psci { psci {
......
...@@ -15,6 +15,7 @@ ...@@ -15,6 +15,7 @@
*/ */
#include <linux/debugfs.h> #include <linux/debugfs.h>
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/io.h>
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/seq_file.h> #include <linux/seq_file.h>
......
...@@ -19,12 +19,10 @@ ...@@ -19,12 +19,10 @@
#include <linux/moduleloader.h> #include <linux/moduleloader.h>
#include <linux/vmalloc.h> #include <linux/vmalloc.h>
void module_free(struct module *mod, void *module_region) void module_arch_freeing_init(struct module *mod)
{ {
vfree(mod->arch.syminfo); vfree(mod->arch.syminfo);
mod->arch.syminfo = NULL; mod->arch.syminfo = NULL;
vfree(module_region);
} }
static inline int check_rela(Elf32_Rela *rela, struct module *module, static inline int check_rela(Elf32_Rela *rela, struct module *module,
...@@ -291,12 +289,3 @@ int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab, ...@@ -291,12 +289,3 @@ int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab,
return ret; return ret;
} }
int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
struct module *module)
{
vfree(module->arch.syminfo);
module->arch.syminfo = NULL;
return 0;
}
...@@ -604,7 +604,7 @@ static ssize_t __sync_serial_read(struct file *file, ...@@ -604,7 +604,7 @@ static ssize_t __sync_serial_read(struct file *file,
struct timespec *ts) struct timespec *ts)
{ {
unsigned long flags; unsigned long flags;
int dev = MINOR(file->f_dentry->d_inode->i_rdev); int dev = MINOR(file_inode(file)->i_rdev);
int avail; int avail;
struct sync_port *port; struct sync_port *port;
unsigned char *start; unsigned char *start;
......
...@@ -36,7 +36,7 @@ void *module_alloc(unsigned long size) ...@@ -36,7 +36,7 @@ void *module_alloc(unsigned long size)
} }
/* Free memory returned from module_alloc */ /* Free memory returned from module_alloc */
void module_free(struct module *mod, void *module_region) void module_memfree(void *module_region)
{ {
kfree(module_region); kfree(module_region);
} }
......
...@@ -94,7 +94,7 @@ static void __init pcibios_allocate_bus_resources(struct list_head *bus_list) ...@@ -94,7 +94,7 @@ static void __init pcibios_allocate_bus_resources(struct list_head *bus_list)
r = &dev->resource[idx]; r = &dev->resource[idx];
if (!r->start) if (!r->start)
continue; continue;
pci_claim_resource(dev, idx); pci_claim_bridge_resource(dev, idx);
} }
} }
pcibios_allocate_bus_resources(&bus->children); pcibios_allocate_bus_resources(&bus->children);
......
...@@ -305,14 +305,12 @@ plt_target (struct plt_entry *plt) ...@@ -305,14 +305,12 @@ plt_target (struct plt_entry *plt)
#endif /* !USE_BRL */ #endif /* !USE_BRL */
void void
module_free (struct module *mod, void *module_region) module_arch_freeing_init (struct module *mod)
{ {
if (mod && mod->arch.init_unw_table && if (mod->arch.init_unw_table) {
module_region == mod->module_init) {
unw_remove_unwind_table(mod->arch.init_unw_table); unw_remove_unwind_table(mod->arch.init_unw_table);
mod->arch.init_unw_table = NULL; mod->arch.init_unw_table = NULL;
} }
vfree(module_region);
} }
/* Have we already seen one of these relocations? */ /* Have we already seen one of these relocations? */
......
...@@ -487,45 +487,39 @@ int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge) ...@@ -487,45 +487,39 @@ int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
return 0; return 0;
} }
static int is_valid_resource(struct pci_dev *dev, int idx) void pcibios_fixup_device_resources(struct pci_dev *dev)
{ {
unsigned int i, type_mask = IORESOURCE_IO | IORESOURCE_MEM; int idx;
struct resource *devr = &dev->resource[idx], *busr;
if (!dev->bus) if (!dev->bus)
return 0; return;
pci_bus_for_each_resource(dev->bus, busr, i) {
if (!busr || ((busr->flags ^ devr->flags) & type_mask))
continue;
if ((devr->start) && (devr->start >= busr->start) &&
(devr->end <= busr->end))
return 1;
}
return 0;
}
static void pcibios_fixup_resources(struct pci_dev *dev, int start, int limit) for (idx = 0; idx < PCI_BRIDGE_RESOURCES; idx++) {
{ struct resource *r = &dev->resource[idx];
int i;
for (i = start; i < limit; i++) { if (!r->flags || r->parent || !r->start)
if (!dev->resource[i].flags)
continue; continue;
if ((is_valid_resource(dev, i)))
pci_claim_resource(dev, i);
}
}
void pcibios_fixup_device_resources(struct pci_dev *dev) pci_claim_resource(dev, idx);
{ }
pcibios_fixup_resources(dev, 0, PCI_BRIDGE_RESOURCES);
} }
EXPORT_SYMBOL_GPL(pcibios_fixup_device_resources); EXPORT_SYMBOL_GPL(pcibios_fixup_device_resources);
static void pcibios_fixup_bridge_resources(struct pci_dev *dev) static void pcibios_fixup_bridge_resources(struct pci_dev *dev)
{ {
pcibios_fixup_resources(dev, PCI_BRIDGE_RESOURCES, PCI_NUM_RESOURCES); int idx;
if (!dev->bus)
return;
for (idx = PCI_BRIDGE_RESOURCES; idx < PCI_NUM_RESOURCES; idx++) {
struct resource *r = &dev->resource[idx];
if (!r->flags || r->parent || !r->start)
continue;
pci_claim_bridge_resource(dev, idx);
}
} }
/* /*
......
...@@ -1026,6 +1026,8 @@ static void pcibios_allocate_bus_resources(struct pci_bus *bus) ...@@ -1026,6 +1026,8 @@ static void pcibios_allocate_bus_resources(struct pci_bus *bus)
pr, (pr && pr->name) ? pr->name : "nil"); pr, (pr && pr->name) ? pr->name : "nil");
if (pr && !(pr->flags & IORESOURCE_UNSET)) { if (pr && !(pr->flags & IORESOURCE_UNSET)) {
struct pci_dev *dev = bus->self;
if (request_resource(pr, res) == 0) if (request_resource(pr, res) == 0)
continue; continue;
/* /*
...@@ -1035,6 +1037,12 @@ static void pcibios_allocate_bus_resources(struct pci_bus *bus) ...@@ -1035,6 +1037,12 @@ static void pcibios_allocate_bus_resources(struct pci_bus *bus)
*/ */
if (reparent_resources(pr, res) == 0) if (reparent_resources(pr, res) == 0)
continue; continue;
if (dev && i < PCI_BRIDGE_RESOURCE_NUM &&
pci_claim_bridge_resource(dev,
i + PCI_BRIDGE_RESOURCES) == 0)
continue;
} }
pr_warn("PCI: Cannot allocate resource region "); pr_warn("PCI: Cannot allocate resource region ");
pr_cont("%d of PCI bridge %d, will remap\n", i, bus->number); pr_cont("%d of PCI bridge %d, will remap\n", i, bus->number);
...@@ -1227,7 +1235,10 @@ void pcibios_claim_one_bus(struct pci_bus *bus) ...@@ -1227,7 +1235,10 @@ void pcibios_claim_one_bus(struct pci_bus *bus)
(unsigned long long)r->end, (unsigned long long)r->end,
(unsigned int)r->flags); (unsigned int)r->flags);
pci_claim_resource(dev, i); if (pci_claim_resource(dev, i) == 0)
continue;
pci_claim_bridge_resource(dev, i);
} }
} }
......
...@@ -1388,7 +1388,7 @@ void bpf_jit_compile(struct bpf_prog *fp) ...@@ -1388,7 +1388,7 @@ void bpf_jit_compile(struct bpf_prog *fp)
void bpf_jit_free(struct bpf_prog *fp) void bpf_jit_free(struct bpf_prog *fp)
{ {
if (fp->jited) if (fp->jited)
module_free(NULL, fp->bpf_func); module_memfree(fp->bpf_func);
bpf_prog_unlock_free(fp); bpf_prog_unlock_free(fp);
} }
...@@ -106,7 +106,7 @@ static void __init pcibios_allocate_bus_resources(struct list_head *bus_list) ...@@ -106,7 +106,7 @@ static void __init pcibios_allocate_bus_resources(struct list_head *bus_list)
if (!r->flags) if (!r->flags)
continue; continue;
if (!r->start || if (!r->start ||
pci_claim_resource(dev, idx) < 0) { pci_claim_bridge_resource(dev, idx) < 0) {
printk(KERN_ERR "PCI:" printk(KERN_ERR "PCI:"
" Cannot allocate resource" " Cannot allocate resource"
" region %d of bridge %s\n", " region %d of bridge %s\n",
......
...@@ -281,42 +281,37 @@ static int __init pci_check_direct(void) ...@@ -281,42 +281,37 @@ static int __init pci_check_direct(void)
return -ENODEV; return -ENODEV;
} }
static int is_valid_resource(struct pci_dev *dev, int idx) static void pcibios_fixup_device_resources(struct pci_dev *dev)
{ {
unsigned int i, type_mask = IORESOURCE_IO | IORESOURCE_MEM; int idx;
struct resource *devr = &dev->resource[idx], *busr;
if (dev->bus) {
pci_bus_for_each_resource(dev->bus, busr, i) {
if (!busr || (busr->flags ^ devr->flags) & type_mask)
continue;
if (devr->start &&
devr->start >= busr->start &&
devr->end <= busr->end)
return 1;
}
}
return 0; if (!dev->bus)
return;
for (idx = 0; idx < PCI_BRIDGE_RESOURCES; idx++) {
struct resource *r = &dev->resource[idx];
if (!r->flags || r->parent || !r->start)
continue;
pci_claim_resource(dev, idx);
}
} }
static void pcibios_fixup_device_resources(struct pci_dev *dev) static void pcibios_fixup_bridge_resources(struct pci_dev *dev)
{ {
int limit, i; int idx;
if (dev->bus->number != 0) if (!dev->bus)
return; return;
limit = (dev->hdr_type == PCI_HEADER_TYPE_NORMAL) ? for (idx = PCI_BRIDGE_RESOURCES; idx < PCI_NUM_RESOURCES; idx++) {
PCI_BRIDGE_RESOURCES : PCI_NUM_RESOURCES; struct resource *r = &dev->resource[idx];
for (i = 0; i < limit; i++) { if (!r->flags || r->parent || !r->start)
if (!dev->resource[i].flags)
continue; continue;
if (is_valid_resource(dev, i)) pci_claim_bridge_resource(dev, idx);
pci_claim_resource(dev, i);
} }
} }
...@@ -330,7 +325,7 @@ void pcibios_fixup_bus(struct pci_bus *bus) ...@@ -330,7 +325,7 @@ void pcibios_fixup_bus(struct pci_bus *bus)
if (bus->self) { if (bus->self) {
pci_read_bridge_bases(bus); pci_read_bridge_bases(bus);
pcibios_fixup_device_resources(bus->self); pcibios_fixup_bridge_resources(bus->self);
} }
list_for_each_entry(dev, &bus->devices, bus_list) list_for_each_entry(dev, &bus->devices, bus_list)
......
...@@ -36,7 +36,7 @@ void *module_alloc(unsigned long size) ...@@ -36,7 +36,7 @@ void *module_alloc(unsigned long size)
} }
/* Free memory returned from module_alloc */ /* Free memory returned from module_alloc */
void module_free(struct module *mod, void *module_region) void module_memfree(void *module_region)
{ {
kfree(module_region); kfree(module_region);
} }
......
...@@ -200,7 +200,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set, ...@@ -200,7 +200,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
/* Set up to return from userspace; jump to fixed address sigreturn /* Set up to return from userspace; jump to fixed address sigreturn
trampoline on kuser page. */ trampoline on kuser page. */
regs->ra = (unsigned long) (0x1040); regs->ra = (unsigned long) (0x1044);
/* Set up registers for signal handler */ /* Set up registers for signal handler */
regs->sp = (unsigned long) frame; regs->sp = (unsigned long) frame;
......
...@@ -298,14 +298,10 @@ static inline unsigned long count_stubs(const Elf_Rela *rela, unsigned long n) ...@@ -298,14 +298,10 @@ static inline unsigned long count_stubs(const Elf_Rela *rela, unsigned long n)
} }
#endif #endif
void module_arch_freeing_init(struct module *mod)
/* Free memory returned from module_alloc */
void module_free(struct module *mod, void *module_region)
{ {
kfree(mod->arch.section); kfree(mod->arch.section);
mod->arch.section = NULL; mod->arch.section = NULL;
vfree(module_region);
} }
/* Additional bytes needed in front of individual sections */ /* Additional bytes needed in front of individual sections */
......
...@@ -1184,6 +1184,8 @@ static void pcibios_allocate_bus_resources(struct pci_bus *bus) ...@@ -1184,6 +1184,8 @@ static void pcibios_allocate_bus_resources(struct pci_bus *bus)
pr, (pr && pr->name) ? pr->name : "nil"); pr, (pr && pr->name) ? pr->name : "nil");
if (pr && !(pr->flags & IORESOURCE_UNSET)) { if (pr && !(pr->flags & IORESOURCE_UNSET)) {
struct pci_dev *dev = bus->self;
if (request_resource(pr, res) == 0) if (request_resource(pr, res) == 0)
continue; continue;
/* /*
...@@ -1193,6 +1195,11 @@ static void pcibios_allocate_bus_resources(struct pci_bus *bus) ...@@ -1193,6 +1195,11 @@ static void pcibios_allocate_bus_resources(struct pci_bus *bus)
*/ */
if (reparent_resources(pr, res) == 0) if (reparent_resources(pr, res) == 0)
continue; continue;
if (dev && i < PCI_BRIDGE_RESOURCE_NUM &&
pci_claim_bridge_resource(dev,
i + PCI_BRIDGE_RESOURCES) == 0)
continue;
} }
pr_warning("PCI: Cannot allocate resource region " pr_warning("PCI: Cannot allocate resource region "
"%d of PCI bridge %d, will remap\n", i, bus->number); "%d of PCI bridge %d, will remap\n", i, bus->number);
...@@ -1401,7 +1408,10 @@ void pcibios_claim_one_bus(struct pci_bus *bus) ...@@ -1401,7 +1408,10 @@ void pcibios_claim_one_bus(struct pci_bus *bus)
(unsigned long long)r->end, (unsigned long long)r->end,
(unsigned int)r->flags); (unsigned int)r->flags);
pci_claim_resource(dev, i); if (pci_claim_resource(dev, i) == 0)
continue;
pci_claim_bridge_resource(dev, i);
} }
} }
......
...@@ -699,7 +699,7 @@ void bpf_jit_compile(struct bpf_prog *fp) ...@@ -699,7 +699,7 @@ void bpf_jit_compile(struct bpf_prog *fp)
void bpf_jit_free(struct bpf_prog *fp) void bpf_jit_free(struct bpf_prog *fp)
{ {
if (fp->jited) if (fp->jited)
module_free(NULL, fp->bpf_func); module_memfree(fp->bpf_func);
bpf_prog_unlock_free(fp); bpf_prog_unlock_free(fp);
} }
...@@ -55,14 +55,10 @@ void *module_alloc(unsigned long size) ...@@ -55,14 +55,10 @@ void *module_alloc(unsigned long size)
} }
#endif #endif
/* Free memory returned from module_alloc */ void module_arch_freeing_init(struct module *mod)
void module_free(struct module *mod, void *module_region)
{ {
if (mod) { vfree(mod->arch.syminfo);
vfree(mod->arch.syminfo); mod->arch.syminfo = NULL;
mod->arch.syminfo = NULL;
}
vfree(module_region);
} }
static void check_rela(Elf_Rela *rela, struct module *me) static void check_rela(Elf_Rela *rela, struct module *me)
......
...@@ -22,8 +22,8 @@ ...@@ -22,8 +22,8 @@
* skb_copy_bits takes 4 parameters: * skb_copy_bits takes 4 parameters:
* %r2 = skb pointer * %r2 = skb pointer
* %r3 = offset into skb data * %r3 = offset into skb data
* %r4 = length to copy * %r4 = pointer to temp buffer
* %r5 = pointer to temp buffer * %r5 = length to copy
*/ */
#define SKBDATA %r8 #define SKBDATA %r8
...@@ -44,8 +44,9 @@ ENTRY(sk_load_word) ...@@ -44,8 +44,9 @@ ENTRY(sk_load_word)
sk_load_word_slow: sk_load_word_slow:
lgr %r9,%r2 # save %r2 lgr %r9,%r2 # save %r2
lhi %r4,4 # 4 bytes lgr %r3,%r1 # offset
la %r5,160(%r15) # pointer to temp buffer la %r4,160(%r15) # pointer to temp buffer
lghi %r5,4 # 4 bytes
brasl %r14,skb_copy_bits # get data from skb brasl %r14,skb_copy_bits # get data from skb
l %r5,160(%r15) # load result from temp buffer l %r5,160(%r15) # load result from temp buffer
ltgr %r2,%r2 # set cc to (%r2 != 0) ltgr %r2,%r2 # set cc to (%r2 != 0)
...@@ -69,8 +70,9 @@ ENTRY(sk_load_half) ...@@ -69,8 +70,9 @@ ENTRY(sk_load_half)
sk_load_half_slow: sk_load_half_slow:
lgr %r9,%r2 # save %r2 lgr %r9,%r2 # save %r2
lhi %r4,2 # 2 bytes lgr %r3,%r1 # offset
la %r5,162(%r15) # pointer to temp buffer la %r4,162(%r15) # pointer to temp buffer
lghi %r5,2 # 2 bytes
brasl %r14,skb_copy_bits # get data from skb brasl %r14,skb_copy_bits # get data from skb
xc 160(2,%r15),160(%r15) xc 160(2,%r15),160(%r15)
l %r5,160(%r15) # load result from temp buffer l %r5,160(%r15) # load result from temp buffer
...@@ -95,8 +97,9 @@ ENTRY(sk_load_byte) ...@@ -95,8 +97,9 @@ ENTRY(sk_load_byte)
sk_load_byte_slow: sk_load_byte_slow:
lgr %r9,%r2 # save %r2 lgr %r9,%r2 # save %r2
lhi %r4,1 # 1 bytes lgr %r3,%r1 # offset
la %r5,163(%r15) # pointer to temp buffer la %r4,163(%r15) # pointer to temp buffer
lghi %r5,1 # 1 byte
brasl %r14,skb_copy_bits # get data from skb brasl %r14,skb_copy_bits # get data from skb
xc 160(3,%r15),160(%r15) xc 160(3,%r15),160(%r15)
l %r5,160(%r15) # load result from temp buffer l %r5,160(%r15) # load result from temp buffer
...@@ -104,11 +107,11 @@ sk_load_byte_slow: ...@@ -104,11 +107,11 @@ sk_load_byte_slow:
lgr %r2,%r9 # restore %r2 lgr %r2,%r9 # restore %r2
br %r8 br %r8
/* A = (*(u8 *)(skb->data+K) & 0xf) << 2 */ /* X = (*(u8 *)(skb->data+K) & 0xf) << 2 */
ENTRY(sk_load_byte_msh) ENTRY(sk_load_byte_msh)
llgfr %r1,%r3 # extend offset llgfr %r1,%r3 # extend offset
clr %r11,%r3 # hlen < offset ? clr %r11,%r3 # hlen < offset ?
jle sk_load_byte_slow jle sk_load_byte_msh_slow
lhi %r12,0 lhi %r12,0
ic %r12,0(%r1,%r10) # get byte from skb ic %r12,0(%r1,%r10) # get byte from skb
nill %r12,0x0f nill %r12,0x0f
...@@ -118,8 +121,9 @@ ENTRY(sk_load_byte_msh) ...@@ -118,8 +121,9 @@ ENTRY(sk_load_byte_msh)
sk_load_byte_msh_slow: sk_load_byte_msh_slow:
lgr %r9,%r2 # save %r2 lgr %r9,%r2 # save %r2
lhi %r4,2 # 2 bytes lgr %r3,%r1 # offset
la %r5,162(%r15) # pointer to temp buffer la %r4,163(%r15) # pointer to temp buffer
lghi %r5,1 # 1 byte
brasl %r14,skb_copy_bits # get data from skb brasl %r14,skb_copy_bits # get data from skb
xc 160(3,%r15),160(%r15) xc 160(3,%r15),160(%r15)
l %r12,160(%r15) # load result from temp buffer l %r12,160(%r15) # load result from temp buffer
......
...@@ -448,15 +448,12 @@ static int bpf_jit_insn(struct bpf_jit *jit, struct sock_filter *filter, ...@@ -448,15 +448,12 @@ static int bpf_jit_insn(struct bpf_jit *jit, struct sock_filter *filter,
mask = 0x800000; /* je */ mask = 0x800000; /* je */
kbranch: /* Emit compare if the branch targets are different */ kbranch: /* Emit compare if the branch targets are different */
if (filter->jt != filter->jf) { if (filter->jt != filter->jf) {
if (K <= 16383) if (test_facility(21))
/* chi %r5,<K> */
EMIT4_IMM(0xa75e0000, K);
else if (test_facility(21))
/* clfi %r5,<K> */ /* clfi %r5,<K> */
EMIT6_IMM(0xc25f0000, K); EMIT6_IMM(0xc25f0000, K);
else else
/* c %r5,<d(K)>(%r13) */ /* cl %r5,<d(K)>(%r13) */
EMIT4_DISP(0x5950d000, EMIT_CONST(K)); EMIT4_DISP(0x5550d000, EMIT_CONST(K));
} }
branch: if (filter->jt == filter->jf) { branch: if (filter->jt == filter->jf) {
if (filter->jt == 0) if (filter->jt == 0)
......
...@@ -639,7 +639,10 @@ static void pci_claim_bus_resources(struct pci_bus *bus) ...@@ -639,7 +639,10 @@ static void pci_claim_bus_resources(struct pci_bus *bus)
(unsigned long long)r->end, (unsigned long long)r->end,
(unsigned int)r->flags); (unsigned int)r->flags);
pci_claim_resource(dev, i); if (pci_claim_resource(dev, i) == 0)
continue;
pci_claim_bridge_resource(dev, i);
} }
} }
......
...@@ -776,7 +776,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf]; ...@@ -776,7 +776,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf];
if (unlikely(proglen + ilen > oldproglen)) { if (unlikely(proglen + ilen > oldproglen)) {
pr_err("bpb_jit_compile fatal error\n"); pr_err("bpb_jit_compile fatal error\n");
kfree(addrs); kfree(addrs);
module_free(NULL, image); module_memfree(image);
return; return;
} }
memcpy(image + proglen, temp, ilen); memcpy(image + proglen, temp, ilen);
...@@ -822,7 +822,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf]; ...@@ -822,7 +822,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf];
void bpf_jit_free(struct bpf_prog *fp) void bpf_jit_free(struct bpf_prog *fp)
{ {
if (fp->jited) if (fp->jited)
module_free(NULL, fp->bpf_func); module_memfree(fp->bpf_func);
bpf_prog_unlock_free(fp); bpf_prog_unlock_free(fp);
} }
...@@ -74,7 +74,7 @@ void *module_alloc(unsigned long size) ...@@ -74,7 +74,7 @@ void *module_alloc(unsigned long size)
/* Free memory returned from module_alloc */ /* Free memory returned from module_alloc */
void module_free(struct module *mod, void *module_region) void module_memfree(void *module_region)
{ {
vfree(module_region); vfree(module_region);
...@@ -83,7 +83,7 @@ void module_free(struct module *mod, void *module_region) ...@@ -83,7 +83,7 @@ void module_free(struct module *mod, void *module_region)
0, 0, 0, NULL, NULL, 0); 0, 0, 0, NULL, NULL, 0);
/* /*
* FIXME: If module_region == mod->module_init, trim exception * FIXME: Add module_arch_freeing_init to trim exception
* table entries. * table entries.
*/ */
} }
......
...@@ -857,7 +857,7 @@ source "kernel/Kconfig.preempt" ...@@ -857,7 +857,7 @@ source "kernel/Kconfig.preempt"
config X86_UP_APIC config X86_UP_APIC
bool "Local APIC support on uniprocessors" bool "Local APIC support on uniprocessors"
depends on X86_32 && !SMP && !X86_32_NON_STANDARD && !PCI_MSI depends on X86_32 && !SMP && !X86_32_NON_STANDARD
---help--- ---help---
A local APIC (Advanced Programmable Interrupt Controller) is an A local APIC (Advanced Programmable Interrupt Controller) is an
integrated interrupt controller in the CPU. If you have a single-CPU integrated interrupt controller in the CPU. If you have a single-CPU
...@@ -868,6 +868,10 @@ config X86_UP_APIC ...@@ -868,6 +868,10 @@ config X86_UP_APIC
performance counters), and the NMI watchdog which detects hard performance counters), and the NMI watchdog which detects hard
lockups. lockups.
config X86_UP_APIC_MSI
def_bool y
select X86_UP_APIC if X86_32 && !SMP && !X86_32_NON_STANDARD && PCI_MSI
config X86_UP_IOAPIC config X86_UP_IOAPIC
bool "IO-APIC support on uniprocessors" bool "IO-APIC support on uniprocessors"
depends on X86_UP_APIC depends on X86_UP_APIC
......
...@@ -373,6 +373,8 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap, ...@@ -373,6 +373,8 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
unsigned long output_len, unsigned long output_len,
unsigned long run_size) unsigned long run_size)
{ {
unsigned char *output_orig = output;
real_mode = rmode; real_mode = rmode;
sanitize_boot_params(real_mode); sanitize_boot_params(real_mode);
...@@ -421,7 +423,12 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap, ...@@ -421,7 +423,12 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
debug_putstr("\nDecompressing Linux... "); debug_putstr("\nDecompressing Linux... ");
decompress(input_data, input_len, NULL, NULL, output, NULL, error); decompress(input_data, input_len, NULL, NULL, output, NULL, error);
parse_elf(output); parse_elf(output);
handle_relocations(output, output_len); /*
* 32-bit always performs relocations. 64-bit relocations are only
* needed if kASLR has chosen a different load address.
*/
if (!IS_ENABLED(CONFIG_X86_64) || output != output_orig)
handle_relocations(output, output_len);
debug_putstr("done.\nBooting the kernel.\n"); debug_putstr("done.\nBooting the kernel.\n");
return output; return output;
} }
...@@ -50,6 +50,7 @@ void acpi_pic_sci_set_trigger(unsigned int, u16); ...@@ -50,6 +50,7 @@ void acpi_pic_sci_set_trigger(unsigned int, u16);
extern int (*__acpi_register_gsi)(struct device *dev, u32 gsi, extern int (*__acpi_register_gsi)(struct device *dev, u32 gsi,
int trigger, int polarity); int trigger, int polarity);
extern void (*__acpi_unregister_gsi)(u32 gsi);
static inline void disable_acpi(void) static inline void disable_acpi(void)
{ {
......
...@@ -251,7 +251,8 @@ static inline void native_load_tls(struct thread_struct *t, unsigned int cpu) ...@@ -251,7 +251,8 @@ static inline void native_load_tls(struct thread_struct *t, unsigned int cpu)
gdt[GDT_ENTRY_TLS_MIN + i] = t->tls_array[i]; gdt[GDT_ENTRY_TLS_MIN + i] = t->tls_array[i];
} }
#define _LDT_empty(info) \ /* This intentionally ignores lm, since 32-bit apps don't have that field. */
#define LDT_empty(info) \
((info)->base_addr == 0 && \ ((info)->base_addr == 0 && \
(info)->limit == 0 && \ (info)->limit == 0 && \
(info)->contents == 0 && \ (info)->contents == 0 && \
...@@ -261,11 +262,18 @@ static inline void native_load_tls(struct thread_struct *t, unsigned int cpu) ...@@ -261,11 +262,18 @@ static inline void native_load_tls(struct thread_struct *t, unsigned int cpu)
(info)->seg_not_present == 1 && \ (info)->seg_not_present == 1 && \
(info)->useable == 0) (info)->useable == 0)
#ifdef CONFIG_X86_64 /* Lots of programs expect an all-zero user_desc to mean "no segment at all". */
#define LDT_empty(info) (_LDT_empty(info) && ((info)->lm == 0)) static inline bool LDT_zero(const struct user_desc *info)
#else {
#define LDT_empty(info) (_LDT_empty(info)) return (info->base_addr == 0 &&
#endif info->limit == 0 &&
info->contents == 0 &&
info->read_exec_only == 0 &&
info->seg_32bit == 0 &&
info->limit_in_pages == 0 &&
info->seg_not_present == 0 &&
info->useable == 0);
}
static inline void clear_LDT(void) static inline void clear_LDT(void)
{ {
......
...
@@ -130,7 +130,25 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
 static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
 			      unsigned long start, unsigned long end)
 {
-	mpx_notify_unmap(mm, vma, start, end);
+	/*
+	 * mpx_notify_unmap() goes and reads a rarely-hot
+	 * cacheline in the mm_struct.  That can be expensive
+	 * enough to be seen in profiles.
+	 *
+	 * The mpx_notify_unmap() call and its contents have been
+	 * observed to affect munmap() performance on hardware
+	 * where MPX is not present.
+	 *
+	 * The unlikely() optimizes for the fast case: no MPX
+	 * in the CPU, or no MPX use in the process.  Even if
+	 * we get this wrong (in the unlikely event that MPX
+	 * is widely enabled on some system) the overhead of
+	 * MPX itself (reading bounds tables) is expected to
+	 * overwhelm the overhead of getting this unlikely()
+	 * consistently wrong.
+	 */
+	if (unlikely(cpu_feature_enabled(X86_FEATURE_MPX)))
+		mpx_notify_unmap(mm, vma, start, end);
 }

 #endif /* _ASM_X86_MMU_CONTEXT_H */
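The fix leans on unlikely() steering codegen toward the no-MPX fast path. unlikely() is a thin wrapper over __builtin_expect; a user-space approximation for experimenting with the same hint (assumes GCC/Clang; notify_unmap and mpx_in_use are made-up names):

#include <stdbool.h>

#define unlikely(x) __builtin_expect(!!(x), 0)	/* mirrors <linux/compiler.h> */

/* Hypothetical guard in the same shape as arch_unmap() above: the
 * cheap feature test keeps the common no-MPX case off the cold path. */
static inline void notify_unmap(bool mpx_in_use)
{
	if (unlikely(mpx_in_use)) {
		/* expensive bookkeeping would run here */
	}
}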
...
@@ -611,20 +611,20 @@ void __init acpi_pic_sci_set_trigger(unsigned int irq, u16 trigger)

 int acpi_gsi_to_irq(u32 gsi, unsigned int *irqp)
 {
-	int irq;
+	int rc, irq, trigger, polarity;

-	if (acpi_irq_model == ACPI_IRQ_MODEL_PIC) {
-		*irqp = gsi;
-	} else {
-		mutex_lock(&acpi_ioapic_lock);
-		irq = mp_map_gsi_to_irq(gsi,
-					IOAPIC_MAP_ALLOC | IOAPIC_MAP_CHECK);
-		mutex_unlock(&acpi_ioapic_lock);
-		if (irq < 0)
-			return -1;
-		*irqp = irq;
+	rc = acpi_get_override_irq(gsi, &trigger, &polarity);
+	if (rc == 0) {
+		trigger = trigger ? ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE;
+		polarity = polarity ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH;
+		irq = acpi_register_gsi(NULL, gsi, trigger, polarity);
+		if (irq >= 0) {
+			*irqp = irq;
+			return 0;
+		}
 	}
-	return 0;
+	return -1;
 }
 EXPORT_SYMBOL_GPL(acpi_gsi_to_irq);
...
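With this rewrite, acpi_gsi_to_irq() registers the GSI itself, honouring any ACPI interrupt source override. A hedged caller sketch (kernel context; my_irq_handler and the "mydev" name are made up, request_irq() is the standard kernel API):

unsigned int irq;
int err;

/* after this change a driver can go straight from GSI to request_irq() */
if (acpi_gsi_to_irq(gsi, &irq) == 0)
	err = request_irq(irq, my_irq_handler, IRQF_SHARED, "mydev", dev);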
...
@@ -107,6 +107,7 @@ static struct clocksource hyperv_cs = {
 	.rating		= 400, /* use this when running on Hyperv*/
 	.read		= read_hv_clock,
 	.mask		= CLOCKSOURCE_MASK(64),
+	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
 };

 static void __init ms_hyperv_init_platform(void)
...
@@ -674,7 +674,7 @@ static inline void *alloc_tramp(unsigned long size)
 }
 static inline void tramp_free(void *tramp)
 {
-	module_free(NULL, tramp);
+	module_memfree(tramp);
 }
 #else
 /* Trampolines can only be created if modules are supported */
...
...
@@ -127,7 +127,7 @@ int arch_show_interrupts(struct seq_file *p, int prec)
 	seq_puts(p, "  Machine check polls\n");
 #endif
 #if IS_ENABLED(CONFIG_HYPERV) || defined(CONFIG_XEN)
-	seq_printf(p, "%*s: ", prec, "THR");
+	seq_printf(p, "%*s: ", prec, "HYP");
 	for_each_online_cpu(j)
 		seq_printf(p, "%10u ", irq_stats(j)->irq_hv_callback_count);
 	seq_puts(p, "  Hypervisor callback interrupts\n");
...
...
@@ -29,7 +29,28 @@ static int get_free_idx(void)

 static bool tls_desc_okay(const struct user_desc *info)
 {
-	if (LDT_empty(info))
+	/*
+	 * For historical reasons (i.e. no one ever documented how any
+	 * of the segmentation APIs work), user programs can and do
+	 * assume that a struct user_desc that's all zeros except for
+	 * entry_number means "no segment at all".  This never actually
+	 * worked.  In fact, up to Linux 3.19, a struct user_desc like
+	 * this would create a 16-bit read-write segment with base and
+	 * limit both equal to zero.
+	 *
+	 * That was close enough to "no segment at all" until we
+	 * hardened this function to disallow 16-bit TLS segments.  Fix
+	 * it up by interpreting these zeroed segments the way that they
+	 * were almost certainly intended to be interpreted.
+	 *
+	 * The correct way to ask for "no segment at all" is to specify
+	 * a user_desc that satisfies LDT_empty.  To keep everything
+	 * working, we accept both.
+	 *
+	 * Note that there's a similar kludge in modify_ldt -- look at
+	 * the distinction between modes 1 and 0x11.
+	 */
+	if (LDT_empty(info) || LDT_zero(info))
 		return true;

 	/*
@@ -71,7 +92,7 @@ static void set_tls_desc(struct task_struct *p, int idx,
 	cpu = get_cpu();

 	while (n-- > 0) {
-		if (LDT_empty(info))
+		if (LDT_empty(info) || LDT_zero(info))
 			desc->a = desc->b = 0;
 		else
 			fill_ldt(desc, info);
...
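To make the two accepted "no segment" encodings concrete, here is an illustrative user-space mock (field layout trimmed from asm/ldt.h; the slot number 12 is arbitrary):

struct user_desc_mock {
	unsigned int entry_number;
	unsigned int base_addr;
	unsigned int limit;
	unsigned int seg_32bit:1, contents:2, read_exec_only:1,
		     limit_in_pages:1, seg_not_present:1, useable:1;
};

/* Canonical form: satisfies LDT_empty (read_exec_only and
 * seg_not_present set, everything else zero). */
static const struct user_desc_mock empty = {
	.entry_number = 12, .read_exec_only = 1, .seg_not_present = 1,
};

/* All-zero form many programs actually pass; now also accepted,
 * via LDT_zero(). */
static const struct user_desc_mock zeroed = { .entry_number = 12 };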
...
@@ -617,7 +617,7 @@ static unsigned long quick_pit_calibrate(void)
 			goto success;
 		}
 	}
-	pr_err("Fast TSC calibration failed\n");
+	pr_info("Fast TSC calibration failed\n");
 	return 0;

 success:
...
...
@@ -2348,7 +2348,7 @@ static int em_sysenter(struct x86_emulate_ctxt *ctxt)
 	 * Not recognized on AMD in compat mode (but is recognized in legacy
 	 * mode).
 	 */
-	if ((ctxt->mode == X86EMUL_MODE_PROT32) && (efer & EFER_LMA)
+	if ((ctxt->mode != X86EMUL_MODE_PROT64) && (efer & EFER_LMA)
 	    && !vendor_intel(ctxt))
 		return emulate_ud(ctxt);

@@ -2359,25 +2359,13 @@ static int em_sysenter(struct x86_emulate_ctxt *ctxt)
 	setup_syscalls_segments(ctxt, &cs, &ss);

 	ops->get_msr(ctxt, MSR_IA32_SYSENTER_CS, &msr_data);
-	switch (ctxt->mode) {
-	case X86EMUL_MODE_PROT32:
-		if ((msr_data & 0xfffc) == 0x0)
-			return emulate_gp(ctxt, 0);
-		break;
-	case X86EMUL_MODE_PROT64:
-		if (msr_data == 0x0)
-			return emulate_gp(ctxt, 0);
-		break;
-	default:
-		break;
-	}
+	if ((msr_data & 0xfffc) == 0x0)
+		return emulate_gp(ctxt, 0);

 	ctxt->eflags &= ~(EFLG_VM | EFLG_IF);
-	cs_sel = (u16)msr_data;
-	cs_sel &= ~SELECTOR_RPL_MASK;
+	cs_sel = (u16)msr_data & ~SELECTOR_RPL_MASK;
 	ss_sel = cs_sel + 8;
-	ss_sel &= ~SELECTOR_RPL_MASK;
-	if (ctxt->mode == X86EMUL_MODE_PROT64 || (efer & EFER_LMA)) {
+	if (efer & EFER_LMA) {
 		cs.d = 0;
 		cs.l = 1;
 	}
@@ -2386,10 +2374,11 @@ static int em_sysenter(struct x86_emulate_ctxt *ctxt)
 	ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);

 	ops->get_msr(ctxt, MSR_IA32_SYSENTER_EIP, &msr_data);
-	ctxt->_eip = msr_data;
+	ctxt->_eip = (efer & EFER_LMA) ? msr_data : (u32)msr_data;

 	ops->get_msr(ctxt, MSR_IA32_SYSENTER_ESP, &msr_data);
-	*reg_write(ctxt, VCPU_REGS_RSP) = msr_data;
+	*reg_write(ctxt, VCPU_REGS_RSP) = (efer & EFER_LMA) ? msr_data :
+							      (u32)msr_data;

 	return X86EMUL_CONTINUE;
 }
@@ -3791,8 +3780,8 @@ static const struct opcode group5[] = {
 };

 static const struct opcode group6[] = {
-	DI(Prot,			sldt),
-	DI(Prot,			str),
+	DI(Prot | DstMem,		sldt),
+	DI(Prot | DstMem,		str),
 	II(Prot | Priv | SrcMem16,	em_lldt, lldt),
 	II(Prot | Priv | SrcMem16,	em_ltr, ltr),
 	N, N, N, N,
...
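The behavioural core of the em_sysenter rewrite is that operand width now follows EFER.LMA rather than the emulator's mode. A standalone sketch of just that selection (EFER_LMA is architecturally bit 10; values here are illustrative):

#include <stdint.h>

#define EFER_LMA (1ULL << 10)

/* In long mode use the SYSENTER MSR at full width; otherwise
 * truncate to 32 bits, as the diff does for both _eip and RSP. */
static uint64_t sysenter_target(uint64_t efer, uint64_t msr_data)
{
	return (efer & EFER_LMA) ? msr_data : (uint32_t)msr_data;
}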
...
@@ -43,7 +43,7 @@ uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
 	[_PAGE_CACHE_MODE_WT]	= _PAGE_PCD,
 	[_PAGE_CACHE_MODE_WP]	= _PAGE_PCD,
 };
-EXPORT_SYMBOL_GPL(__cachemode2pte_tbl);
+EXPORT_SYMBOL(__cachemode2pte_tbl);

 uint8_t __pte2cachemode_tbl[8] = {
 	[__pte2cm_idx(0)] = _PAGE_CACHE_MODE_WB,
 	[__pte2cm_idx(_PAGE_PWT)] = _PAGE_CACHE_MODE_WC,
@@ -54,7 +54,7 @@ uint8_t __pte2cachemode_tbl[8] = {
 	[__pte2cm_idx(_PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC_MINUS,
 	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC,
 };
-EXPORT_SYMBOL_GPL(__pte2cachemode_tbl);
+EXPORT_SYMBOL(__pte2cachemode_tbl);

 static unsigned long __initdata pgt_buf_start;
 static unsigned long __initdata pgt_buf_end;
...
...
@@ -348,6 +348,12 @@ static __user void *task_get_bounds_dir(struct task_struct *tsk)
 	if (!cpu_feature_enabled(X86_FEATURE_MPX))
 		return MPX_INVALID_BOUNDS_DIR;

+	/*
+	 * 32-bit binaries on 64-bit kernels are currently
+	 * unsupported.
+	 */
+	if (IS_ENABLED(CONFIG_X86_64) && test_thread_flag(TIF_IA32))
+		return MPX_INVALID_BOUNDS_DIR;
 	/*
 	 * The bounds directory pointer is stored in a register
 	 * only accessible if we first do an xsave.
...
...
@@ -234,8 +234,13 @@ void pat_init(void)
 	      PAT(4, WB) | PAT(5, WC) | PAT(6, UC_MINUS) | PAT(7, UC);

 	/* Boot CPU check */
-	if (!boot_pat_state)
+	if (!boot_pat_state) {
 		rdmsrl(MSR_IA32_CR_PAT, boot_pat_state);
+		if (!boot_pat_state) {
+			pat_disable("PAT read returns always zero, disabled.");
+			return;
+		}
+	}

 	wrmsrl(MSR_IA32_CR_PAT, pat);
...
...
@@ -216,7 +216,7 @@ static void pcibios_allocate_bridge_resources(struct pci_dev *dev)
 			continue;
 		if (r->parent)	/* Already allocated */
 			continue;
-		if (!r->start || pci_claim_resource(dev, idx) < 0) {
+		if (!r->start || pci_claim_bridge_resource(dev, idx) < 0) {
 			/*
 			 * Something is wrong with the region.
 			 * Invalidate the resource to prevent
...
...
@@ -458,6 +458,7 @@ int __init pci_xen_hvm_init(void)
 	 * just how GSIs get registered.
 	 */
 	__acpi_register_gsi = acpi_register_gsi_xen_hvm;
+	__acpi_unregister_gsi = NULL;
 #endif

 #ifdef CONFIG_PCI_MSI
@@ -471,52 +472,6 @@ int __init pci_xen_hvm_init(void)
 }

 #ifdef CONFIG_XEN_DOM0
-static __init void xen_setup_acpi_sci(void)
-{
-	int rc;
-	int trigger, polarity;
-	int gsi = acpi_sci_override_gsi;
-	int irq = -1;
-	int gsi_override = -1;
-
-	if (!gsi)
-		return;
-
-	rc = acpi_get_override_irq(gsi, &trigger, &polarity);
-	if (rc) {
-		printk(KERN_WARNING "xen: acpi_get_override_irq failed for acpi"
-		       " sci, rc=%d\n", rc);
-		return;
-	}
-	trigger = trigger ? ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE;
-	polarity = polarity ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH;
-
-	printk(KERN_INFO "xen: sci override: global_irq=%d trigger=%d "
-	       "polarity=%d\n", gsi, trigger, polarity);
-
-	/* Before we bind the GSI to a Linux IRQ, check whether
-	 * we need to override it with bus_irq (IRQ) value. Usually for
-	 * IRQs below IRQ_LEGACY_IRQ this holds IRQ == GSI, as so:
-	 *  ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
-	 * but there are oddballs where the IRQ != GSI:
-	 *  ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 20 low level)
-	 * which ends up being: gsi_to_irq[9] == 20
-	 * (which is what acpi_gsi_to_irq ends up calling when starting the
-	 * the ACPI interpreter and keels over since IRQ 9 has not been
-	 * setup as we had setup IRQ 20 for it).
-	 */
-	if (acpi_gsi_to_irq(gsi, &irq) == 0) {
-		/* Use the provided value if it's valid. */
-		if (irq >= 0)
-			gsi_override = irq;
-	}
-
-	gsi = xen_register_gsi(gsi, gsi_override, trigger, polarity);
-	printk(KERN_INFO "xen: acpi sci %d\n", gsi);
-
-	return;
-}
-
 int __init pci_xen_initial_domain(void)
 {
 	int irq;
@@ -527,8 +482,8 @@ int __init pci_xen_initial_domain(void)
 	x86_msi.restore_msi_irqs = xen_initdom_restore_msi_irqs;
 	pci_msi_ignore_mask = 1;
 #endif
-	xen_setup_acpi_sci();
 	__acpi_register_gsi = acpi_register_gsi_xen;
+	__acpi_unregister_gsi = NULL;
 	/* Pre-allocate legacy irqs */
 	for (irq = 0; irq < nr_legacy_irqs(); irq++) {
 		int trigger, polarity;
...
...
@@ -15,6 +15,26 @@

 static void blk_mq_sysfs_release(struct kobject *kobj)
 {
+	struct request_queue *q;
+
+	q = container_of(kobj, struct request_queue, mq_kobj);
+	free_percpu(q->queue_ctx);
+}
+
+static void blk_mq_ctx_release(struct kobject *kobj)
+{
+	struct blk_mq_ctx *ctx;
+
+	ctx = container_of(kobj, struct blk_mq_ctx, kobj);
+	kobject_put(&ctx->queue->mq_kobj);
+}
+
+static void blk_mq_hctx_release(struct kobject *kobj)
+{
+	struct blk_mq_hw_ctx *hctx;
+
+	hctx = container_of(kobj, struct blk_mq_hw_ctx, kobj);
+	kfree(hctx);
 }

 struct blk_mq_ctx_sysfs_entry {
@@ -318,13 +338,13 @@ static struct kobj_type blk_mq_ktype = {
 static struct kobj_type blk_mq_ctx_ktype = {
 	.sysfs_ops	= &blk_mq_sysfs_ops,
 	.default_attrs	= default_ctx_attrs,
-	.release	= blk_mq_sysfs_release,
+	.release	= blk_mq_ctx_release,
 };

 static struct kobj_type blk_mq_hw_ktype = {
 	.sysfs_ops	= &blk_mq_hw_sysfs_ops,
 	.default_attrs	= default_hw_ctx_attrs,
-	.release	= blk_mq_sysfs_release,
+	.release	= blk_mq_hctx_release,
 };

 static void blk_mq_unregister_hctx(struct blk_mq_hw_ctx *hctx)
@@ -355,6 +375,7 @@ static int blk_mq_register_hctx(struct blk_mq_hw_ctx *hctx)
 		return ret;

 	hctx_for_each_ctx(hctx, ctx, i) {
+		kobject_get(&q->mq_kobj);
 		ret = kobject_add(&ctx->kobj, &hctx->kobj, "cpu%u", ctx->cpu);
 		if (ret)
 			break;
...
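The pattern behind this fix: each kobject type gets a release callback that frees exactly what that kobject's refcount guards, and registering a child pins its parent with kobject_get(). A generic sketch (my_ctx and parent_kobj are illustrative names, not the blk-mq source):

static void my_ctx_release(struct kobject *kobj)
{
	struct my_ctx *ctx = container_of(kobj, struct my_ctx, kobj);

	/* drop the parent reference taken with kobject_get() at add time */
	kobject_put(ctx->parent_kobj);
}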
...
@@ -1641,10 +1641,8 @@ static void blk_mq_free_hw_queues(struct request_queue *q,
 	struct blk_mq_hw_ctx *hctx;
 	unsigned int i;

-	queue_for_each_hw_ctx(q, hctx, i) {
+	queue_for_each_hw_ctx(q, hctx, i)
 		free_cpumask_var(hctx->cpumask);
-		kfree(hctx);
-	}
 }

 static int blk_mq_init_hctx(struct request_queue *q,
@@ -2002,11 +2000,9 @@ void blk_mq_free_queue(struct request_queue *q)

 	percpu_ref_exit(&q->mq_usage_counter);

-	free_percpu(q->queue_ctx);
 	kfree(q->queue_hw_ctx);
 	kfree(q->mq_map);

-	q->queue_ctx = NULL;
 	q->queue_hw_ctx = NULL;
 	q->mq_map = NULL;
...
...
@@ -512,7 +512,6 @@ void acpi_pci_irq_disable(struct pci_dev *dev)
 	dev_dbg(&dev->dev, "PCI INT %c disabled\n", pin_name(pin));
 	if (gsi >= 0) {
 		acpi_unregister_gsi(gsi);
-		dev->irq = 0;
 		dev->irq_managed = 0;
 	}
 }
...
@@ -106,7 +106,7 @@ struct nvme_queue {
 	dma_addr_t cq_dma_addr;
 	u32 __iomem *q_db;
 	u16 q_depth;
-	u16 cq_vector;
+	s16 cq_vector;
 	u16 sq_head;
 	u16 sq_tail;
 	u16 cq_head;
...
...
@@ -210,12 +210,25 @@ static void mvebu_mbus_disable_window(struct mvebu_mbus_state *mbus,
 }

 /* Checks whether the given window number is available */
+
+/* On Armada XP, 375 and 38x the MBus window 13 has the remap
+ * capability, like windows 0 to 7. However, the mvebu-mbus driver
+ * isn't currently taking into account this special case, which means
+ * that when window 13 is actually used, the remap registers are left
+ * to 0, making the device using this MBus window unavailable. The
+ * quick fix for stable is to not use window 13. A follow up patch
+ * will correctly handle this window.
+ */
 static int mvebu_mbus_window_is_free(struct mvebu_mbus_state *mbus,
 				     const int win)
 {
 	void __iomem *addr = mbus->mbuswins_base +
 		mbus->soc->win_cfg_offset(win);
 	u32 ctrl = readl(addr + WIN_CTRL_OFF);
+
+	if (win == 13)
+		return false;
+
 	return !(ctrl & WIN_CTRL_ENABLE);
 }
...
...
@@ -68,9 +68,8 @@ static void kona_timer_disable_and_clear(void __iomem *base)
 }

 static void
-kona_timer_get_counter(void *timer_base, uint32_t *msw, uint32_t *lsw)
+kona_timer_get_counter(void __iomem *timer_base, uint32_t *msw, uint32_t *lsw)
 {
-	void __iomem *base = IOMEM(timer_base);
 	int loop_limit = 4;

 	/*
@@ -86,9 +85,9 @@ kona_timer_get_counter(void *timer_base, uint32_t *msw, uint32_t *lsw)
 	 */

 	while (--loop_limit) {
-		*msw = readl(base + KONA_GPTIMER_STCHI_OFFSET);
-		*lsw = readl(base + KONA_GPTIMER_STCLO_OFFSET);
-		if (*msw == readl(base + KONA_GPTIMER_STCHI_OFFSET))
+		*msw = readl(timer_base + KONA_GPTIMER_STCHI_OFFSET);
+		*lsw = readl(timer_base + KONA_GPTIMER_STCLO_OFFSET);
+		if (*msw == readl(timer_base + KONA_GPTIMER_STCHI_OFFSET))
 			break;
 	}
 	if (!loop_limit) {
...
...
@@ -97,8 +97,8 @@ static void exynos4_mct_write(unsigned int value, unsigned long offset)
 	writel_relaxed(value, reg_base + offset);

 	if (likely(offset >= EXYNOS4_MCT_L_BASE(0))) {
-		stat_addr = (offset & ~EXYNOS4_MCT_L_MASK) + MCT_L_WSTAT_OFFSET;
-		switch (offset & EXYNOS4_MCT_L_MASK) {
+		stat_addr = (offset & EXYNOS4_MCT_L_MASK) + MCT_L_WSTAT_OFFSET;
+		switch (offset & ~EXYNOS4_MCT_L_MASK) {
 		case MCT_L_TCON_OFFSET:
 			mask = 1 << 3;		/* L_TCON write status */
 			break;
...
...
@@ -428,7 +428,7 @@ static void sh_tmu_register_clockevent(struct sh_tmu_channel *ch,
 	ced->features = CLOCK_EVT_FEAT_PERIODIC;
 	ced->features |= CLOCK_EVT_FEAT_ONESHOT;
 	ced->rating = 200;
-	ced->cpumask = cpumask_of(0);
+	ced->cpumask = cpu_possible_mask;
 	ced->set_next_event = sh_tmu_clock_event_next;
 	ced->set_mode = sh_tmu_clock_event_mode;
 	ced->suspend = sh_tmu_clock_event_suspend;
...
...
@@ -574,6 +574,16 @@ config SENSORS_IIO_HWMON
 	  for those channels specified in the map. This map can be provided
 	  either via platform data or the device tree bindings.

+config SENSORS_I5500
+	tristate "Intel 5500/5520/X58 temperature sensor"
+	depends on X86 && PCI
+	help
+	  If you say yes here you get support for the temperature
+	  sensor inside the Intel 5500, 5520 and X58 chipsets.
+
+	  This driver can also be built as a module. If so, the module
+	  will be called i5500_temp.
+
 config SENSORS_CORETEMP
 	tristate "Intel Core/Core2/Atom temperature sensor"
 	depends on X86
...
@@ -68,6 +68,7 @@ obj-$(CONFIG_SENSORS_GPIO_FAN)	+= gpio-fan.o
 obj-$(CONFIG_SENSORS_HIH6130)	+= hih6130.o
 obj-$(CONFIG_SENSORS_HTU21)	+= htu21.o
 obj-$(CONFIG_SENSORS_ULTRA45)	+= ultra45_env.o
+obj-$(CONFIG_SENSORS_I5500)	+= i5500_temp.o
 obj-$(CONFIG_SENSORS_I5K_AMB)	+= i5k_amb.o
 obj-$(CONFIG_SENSORS_IBMAEM)	+= ibmaem.o
 obj-$(CONFIG_SENSORS_IBMPEX)	+= ibmpex.o
...
/*
* i5500_temp - Driver for Intel 5500/5520/X58 chipset thermal sensor
*
* Copyright (C) 2012, 2014 Jean Delvare <jdelvare@suse.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/jiffies.h>
#include <linux/device.h>
#include <linux/pci.h>
#include <linux/hwmon.h>
#include <linux/hwmon-sysfs.h>
#include <linux/err.h>
#include <linux/mutex.h>
/* Register definitions from datasheet */
#define REG_TSTHRCATA 0xE2
#define REG_TSCTRL 0xE8
#define REG_TSTHRRPEX 0xEB
#define REG_TSTHRLO 0xEC
#define REG_TSTHRHI 0xEE
#define REG_CTHINT 0xF0
#define REG_TSFSC 0xF3
#define REG_CTSTS 0xF4
#define REG_TSTHRRQPI 0xF5
#define REG_CTCTRL 0xF7
#define REG_TSTIMER 0xF8
/*
* Sysfs stuff
*/
/* Sensor resolution : 0.5 degree C */
static ssize_t show_temp(struct device *dev,
struct device_attribute *devattr, char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev->parent);
long temp;
u16 tsthrhi;
s8 tsfsc;
pci_read_config_word(pdev, REG_TSTHRHI, &tsthrhi);
pci_read_config_byte(pdev, REG_TSFSC, &tsfsc);
temp = ((long)tsthrhi - tsfsc) * 500;
return sprintf(buf, "%ld\n", temp);
}
static ssize_t show_thresh(struct device *dev,
struct device_attribute *devattr, char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev->parent);
int reg = to_sensor_dev_attr(devattr)->index;
long temp;
u16 tsthr;
pci_read_config_word(pdev, reg, &tsthr);
temp = tsthr * 500;
return sprintf(buf, "%ld\n", temp);
}
static ssize_t show_alarm(struct device *dev,
struct device_attribute *devattr, char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev->parent);
int nr = to_sensor_dev_attr(devattr)->index;
u8 ctsts;
pci_read_config_byte(pdev, REG_CTSTS, &ctsts);
return sprintf(buf, "%u\n", (unsigned int)ctsts & (1 << nr));
}
static DEVICE_ATTR(temp1_input, S_IRUGO, show_temp, NULL);
static SENSOR_DEVICE_ATTR(temp1_crit, S_IRUGO, show_thresh, NULL, 0xE2);
static SENSOR_DEVICE_ATTR(temp1_max_hyst, S_IRUGO, show_thresh, NULL, 0xEC);
static SENSOR_DEVICE_ATTR(temp1_max, S_IRUGO, show_thresh, NULL, 0xEE);
static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO, show_alarm, NULL, 0);
static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, show_alarm, NULL, 1);
static struct attribute *i5500_temp_attrs[] = {
&dev_attr_temp1_input.attr,
&sensor_dev_attr_temp1_crit.dev_attr.attr,
&sensor_dev_attr_temp1_max_hyst.dev_attr.attr,
&sensor_dev_attr_temp1_max.dev_attr.attr,
&sensor_dev_attr_temp1_crit_alarm.dev_attr.attr,
&sensor_dev_attr_temp1_max_alarm.dev_attr.attr,
NULL
};
ATTRIBUTE_GROUPS(i5500_temp);
static const struct pci_device_id i5500_temp_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x3438) },
{ 0 },
};
MODULE_DEVICE_TABLE(pci, i5500_temp_ids);
static int i5500_temp_probe(struct pci_dev *pdev,
const struct pci_device_id *id)
{
int err;
struct device *hwmon_dev;
u32 tstimer;
s8 tsfsc;
err = pci_enable_device(pdev);
if (err) {
dev_err(&pdev->dev, "Failed to enable device\n");
return err;
}
pci_read_config_byte(pdev, REG_TSFSC, &tsfsc);
pci_read_config_dword(pdev, REG_TSTIMER, &tstimer);
if (tsfsc == 0x7F && tstimer == 0x07D30D40) {
dev_notice(&pdev->dev, "Sensor seems to be disabled\n");
return -ENODEV;
}
hwmon_dev = devm_hwmon_device_register_with_groups(&pdev->dev,
"intel5500", NULL,
i5500_temp_groups);
return PTR_ERR_OR_ZERO(hwmon_dev);
}
static struct pci_driver i5500_temp_driver = {
.name = "i5500_temp",
.id_table = i5500_temp_ids,
.probe = i5500_temp_probe,
};
module_pci_driver(i5500_temp_driver);
MODULE_AUTHOR("Jean Delvare <jdelvare@suse.de>");
MODULE_DESCRIPTION("Intel 5500/5520/X58 chipset thermal sensor driver");
MODULE_LICENSE("GPL");
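A stand-alone rework of show_temp()'s arithmetic, handy for sanity-checking readings (the register values below are invented for the example):

#include <stdio.h>

/* Sensor resolution is 0.5 degree C; hwmon reports millidegrees. */
static long i5500_mcelsius(unsigned short tsthrhi, signed char tsfsc)
{
	return ((long)tsthrhi - tsfsc) * 500;
}

int main(void)
{
	/* e.g. TSTHRHI = 156, TSFSC = 100 -> (156 - 100) * 500 = 28000 */
	printf("%ld\n", i5500_mcelsius(156, 100));	/* 28000 = 28.0 degC */
	return 0;
}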
...
@@ -28,7 +28,7 @@

 #define AT91_AIC_IRQ_MIN_PRIORITY	0
 #define AT91_AIC_IRQ_MAX_PRIORITY	7
-#define AT91_AIC_SRCTYPE		GENMASK(7, 6)
+#define AT91_AIC_SRCTYPE		GENMASK(6, 5)
 #define AT91_AIC_SRCTYPE_LOW		(0 << 5)
 #define AT91_AIC_SRCTYPE_FALLING	(1 << 5)
 #define AT91_AIC_SRCTYPE_HIGH		(2 << 5)
@@ -74,7 +74,7 @@ int aic_common_set_type(struct irq_data *d, unsigned type, unsigned *val)
 		return -EINVAL;
 	}

-	*val &= AT91_AIC_SRCTYPE;
+	*val &= ~AT91_AIC_SRCTYPE;
 	*val |= aic_type;

 	return 0;
...
...
@@ -1053,7 +1053,7 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id,
 	 * of two entries. No, the architecture doesn't let you
 	 * express an ITT with a single entry.
 	 */
-	nr_ites = max(2, roundup_pow_of_two(nvecs));
+	nr_ites = max(2UL, roundup_pow_of_two(nvecs));
 	sz = nr_ites * its->ite_size;
 	sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1;
 	itt = kmalloc(sz, GFP_KERNEL);
...
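Why 2UL rather than 2: the kernel's max() insists that both arguments have the same type, and roundup_pow_of_two() returns unsigned long, so a plain int constant trips the compile-time type check. In short:

/* types match: both operands are unsigned long */
nr_ites = max(2UL, roundup_pow_of_two(nvecs));

/* max(2, roundup_pow_of_two(nvecs)) would mix int with unsigned long
 * and fail max()'s type comparison at build time */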
...
@@ -381,7 +381,7 @@ hip04_of_init(struct device_node *node, struct device_node *parent)
 	 * It will be refined as each CPU probes its ID.
 	 */
 	for (i = 0; i < NR_HIP04_CPU_IF; i++)
-		hip04_cpu_map[i] = 0xff;
+		hip04_cpu_map[i] = 0xffff;

 	/*
 	 * Find out how many interrupts are supported.
...
...
@@ -137,9 +137,9 @@ static int __init mtk_sysirq_of_init(struct device_node *node,
 		return -ENOMEM;

 	chip_data->intpol_base = of_io_request_and_map(node, 0, "intpol");
-	if (!chip_data->intpol_base) {
+	if (IS_ERR(chip_data->intpol_base)) {
 		pr_err("mtk_sysirq: unable to map sysirq register\n");
-		ret = -ENOMEM;
+		ret = PTR_ERR(chip_data->intpol_base);
 		goto out_free;
 	}
...
...
@@ -263,7 +263,7 @@ static int __init omap_init_irq_of(struct device_node *node)
 	return ret;
 }

-static int __init omap_init_irq_legacy(u32 base)
+static int __init omap_init_irq_legacy(u32 base, struct device_node *node)
 {
 	int j, irq_base;

@@ -277,7 +277,7 @@ static int __init omap_init_irq_legacy(u32 base)
 		irq_base = 0;
 	}

-	domain = irq_domain_add_legacy(NULL, omap_nr_irqs, irq_base, 0,
+	domain = irq_domain_add_legacy(node, omap_nr_irqs, irq_base, 0,
 			&irq_domain_simple_ops, NULL);

 	omap_irq_soft_reset();
@@ -301,10 +301,26 @@ static int __init omap_init_irq(u32 base, struct device_node *node)
 {
 	int ret;

-	if (node)
+	/*
+	 * FIXME legacy OMAP DMA driver sitting under arch/arm/plat-omap/dma.c
+	 * depends is still not ready for linear IRQ domains; because of that
+	 * we need to temporarily "blacklist" OMAP2 and OMAP3 devices from using
+	 * linear IRQ Domain until that driver is finally fixed.
+	 */
+	if (of_device_is_compatible(node, "ti,omap2-intc") ||
+	    of_device_is_compatible(node, "ti,omap3-intc")) {
+		struct resource res;
+
+		if (of_address_to_resource(node, 0, &res))
+			return -ENOMEM;
+
+		base = res.start;
+		ret = omap_init_irq_legacy(base, node);
+	} else if (node) {
 		ret = omap_init_irq_of(node);
-	else
-		ret = omap_init_irq_legacy(base);
+	} else {
+		ret = omap_init_irq_legacy(base, NULL);
+	}

 	if (ret == 0)
 		omap_irq_enable_protection();
...
...
@@ -94,6 +94,9 @@ struct cache_disk_superblock {
 } __packed;

 struct dm_cache_metadata {
+	atomic_t ref_count;
+	struct list_head list;
+
 	struct block_device *bdev;
 	struct dm_block_manager *bm;
 	struct dm_space_map *metadata_sm;
@@ -669,10 +672,10 @@ static void unpack_value(__le64 value_le, dm_oblock_t *block, unsigned *flags)

 /*----------------------------------------------------------------*/

-struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev,
-						 sector_t data_block_size,
-						 bool may_format_device,
-						 size_t policy_hint_size)
+static struct dm_cache_metadata *metadata_open(struct block_device *bdev,
+					       sector_t data_block_size,
+					       bool may_format_device,
+					       size_t policy_hint_size)
 {
 	int r;
 	struct dm_cache_metadata *cmd;
@@ -683,6 +686,7 @@ struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev,
 		return NULL;
 	}

+	atomic_set(&cmd->ref_count, 1);
 	init_rwsem(&cmd->root_lock);
 	cmd->bdev = bdev;
 	cmd->data_block_size = data_block_size;
@@ -705,10 +709,95 @@ struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev,
 	return cmd;
 }

+/*
+ * We keep a little list of ref counted metadata objects to prevent two
+ * different target instances creating separate bufio instances.  This is
+ * an issue if a table is reloaded before the suspend.
+ */
+static DEFINE_MUTEX(table_lock);
+static LIST_HEAD(table);
+
+static struct dm_cache_metadata *lookup(struct block_device *bdev)
+{
+	struct dm_cache_metadata *cmd;
+
+	list_for_each_entry(cmd, &table, list)
+		if (cmd->bdev == bdev) {
+			atomic_inc(&cmd->ref_count);
+			return cmd;
+		}
+
+	return NULL;
+}
+
+static struct dm_cache_metadata *lookup_or_open(struct block_device *bdev,
+						sector_t data_block_size,
+						bool may_format_device,
+						size_t policy_hint_size)
+{
+	struct dm_cache_metadata *cmd, *cmd2;
+
+	mutex_lock(&table_lock);
+	cmd = lookup(bdev);
+	mutex_unlock(&table_lock);
+
+	if (cmd)
+		return cmd;
+
+	cmd = metadata_open(bdev, data_block_size, may_format_device, policy_hint_size);
+	if (cmd) {
+		mutex_lock(&table_lock);
+		cmd2 = lookup(bdev);
+		if (cmd2) {
+			mutex_unlock(&table_lock);
+			__destroy_persistent_data_objects(cmd);
+			kfree(cmd);
+			return cmd2;
+		}
+		list_add(&cmd->list, &table);
+		mutex_unlock(&table_lock);
+	}
+
+	return cmd;
+}
+
+static bool same_params(struct dm_cache_metadata *cmd, sector_t data_block_size)
+{
+	if (cmd->data_block_size != data_block_size) {
+		DMERR("data_block_size (%llu) different from that in metadata (%llu)\n",
+		      (unsigned long long) data_block_size,
+		      (unsigned long long) cmd->data_block_size);
+		return false;
+	}
+
+	return true;
+}
+
+struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev,
+						 sector_t data_block_size,
+						 bool may_format_device,
+						 size_t policy_hint_size)
+{
+	struct dm_cache_metadata *cmd = lookup_or_open(bdev, data_block_size,
+						       may_format_device, policy_hint_size);
+
+	if (cmd && !same_params(cmd, data_block_size)) {
+		dm_cache_metadata_close(cmd);
+		return NULL;
+	}
+
+	return cmd;
+}
+
 void dm_cache_metadata_close(struct dm_cache_metadata *cmd)
 {
-	__destroy_persistent_data_objects(cmd);
-	kfree(cmd);
+	if (atomic_dec_and_test(&cmd->ref_count)) {
+		mutex_lock(&table_lock);
+		list_del(&cmd->list);
+		mutex_unlock(&table_lock);
+
+		__destroy_persistent_data_objects(cmd);
+		kfree(cmd);
+	}
 }

 /*
...
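The scheme above is a classic race-safe get-or-create cache. A compressed sketch of its control flow (the names are illustrative, not the dm-cache source; the slow create runs unlocked because it may sleep):

mutex_lock(&table_lock);
obj = lookup(key);			/* bumps the refcount on a hit */
mutex_unlock(&table_lock);
if (!obj) {
	obj = slow_create(key);		/* may sleep, so done unlocked */
	mutex_lock(&table_lock);
	winner = lookup(key);		/* did another opener race us? */
	if (!winner)
		list_add(&obj->list, &table);
	mutex_unlock(&table_lock);
	if (winner) {			/* lost the race: discard ours */
		slow_destroy(obj);
		obj = winner;
	}
}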
...
@@ -221,7 +221,13 @@ struct cache {
 	struct list_head need_commit_migrations;
 	sector_t migration_threshold;
 	wait_queue_head_t migration_wait;
-	atomic_t nr_migrations;
+	atomic_t nr_allocated_migrations;
+
+	/*
+	 * The number of in flight migrations that are performing
+	 * background io. eg, promotion, writeback.
+	 */
+	atomic_t nr_io_migrations;

 	wait_queue_head_t quiescing_wait;
 	atomic_t quiescing;
@@ -258,7 +264,6 @@ struct cache {
 	struct dm_deferred_set *all_io_ds;

 	mempool_t *migration_pool;
-	struct dm_cache_migration *next_migration;

 	struct dm_cache_policy *policy;
 	unsigned policy_nr_args;
@@ -350,10 +355,31 @@ static void free_prison_cell(struct cache *cache, struct dm_bio_prison_cell *cell)
 	dm_bio_prison_free_cell(cache->prison, cell);
 }

+static struct dm_cache_migration *alloc_migration(struct cache *cache)
+{
+	struct dm_cache_migration *mg;
+
+	mg = mempool_alloc(cache->migration_pool, GFP_NOWAIT);
+	if (mg) {
+		mg->cache = cache;
+		atomic_inc(&mg->cache->nr_allocated_migrations);
+	}
+
+	return mg;
+}
+
+static void free_migration(struct dm_cache_migration *mg)
+{
+	if (atomic_dec_and_test(&mg->cache->nr_allocated_migrations))
+		wake_up(&mg->cache->migration_wait);
+
+	mempool_free(mg, mg->cache->migration_pool);
+}
+
 static int prealloc_data_structs(struct cache *cache, struct prealloc *p)
 {
 	if (!p->mg) {
-		p->mg = mempool_alloc(cache->migration_pool, GFP_NOWAIT);
+		p->mg = alloc_migration(cache);
 		if (!p->mg)
 			return -ENOMEM;
 	}
@@ -382,7 +408,7 @@ static void prealloc_free_structs(struct cache *cache, struct prealloc *p)
 		free_prison_cell(cache, p->cell1);

 	if (p->mg)
-		mempool_free(p->mg, cache->migration_pool);
+		free_migration(p->mg);
 }

 static struct dm_cache_migration *prealloc_get_migration(struct prealloc *p)
@@ -854,24 +880,14 @@ static void remap_to_origin_then_cache(struct cache *cache, struct bio *bio,
  * Migration covers moving data from the origin device to the cache, or
  * vice versa.
  *--------------------------------------------------------------*/
-static void free_migration(struct dm_cache_migration *mg)
-{
-	mempool_free(mg, mg->cache->migration_pool);
-}
-
-static void inc_nr_migrations(struct cache *cache)
+static void inc_io_migrations(struct cache *cache)
 {
-	atomic_inc(&cache->nr_migrations);
+	atomic_inc(&cache->nr_io_migrations);
 }

-static void dec_nr_migrations(struct cache *cache)
+static void dec_io_migrations(struct cache *cache)
 {
-	atomic_dec(&cache->nr_migrations);
-
-	/*
-	 * Wake the worker in case we're suspending the target.
-	 */
-	wake_up(&cache->migration_wait);
+	atomic_dec(&cache->nr_io_migrations);
 }

 static void __cell_defer(struct cache *cache, struct dm_bio_prison_cell *cell,
@@ -894,11 +910,10 @@ static void cell_defer(struct cache *cache, struct dm_bio_prison_cell *cell,
 	wake_worker(cache);
 }

-static void cleanup_migration(struct dm_cache_migration *mg)
+static void free_io_migration(struct dm_cache_migration *mg)
 {
-	struct cache *cache = mg->cache;
+	dec_io_migrations(mg->cache);
 	free_migration(mg);
-	dec_nr_migrations(cache);
 }

 static void migration_failure(struct dm_cache_migration *mg)
@@ -923,7 +938,7 @@ static void migration_failure(struct dm_cache_migration *mg)
 		cell_defer(cache, mg->new_ocell, true);
 	}

-	cleanup_migration(mg);
+	free_io_migration(mg);
 }

 static void migration_success_pre_commit(struct dm_cache_migration *mg)
@@ -934,7 +949,7 @@ static void migration_success_pre_commit(struct dm_cache_migration *mg)
 	if (mg->writeback) {
 		clear_dirty(cache, mg->old_oblock, mg->cblock);
 		cell_defer(cache, mg->old_ocell, false);
-		cleanup_migration(mg);
+		free_io_migration(mg);
 		return;

 	} else if (mg->demote) {
@@ -944,14 +959,14 @@ static void migration_success_pre_commit(struct dm_cache_migration *mg)
 					     mg->old_oblock);
 			if (mg->promote)
 				cell_defer(cache, mg->new_ocell, true);
-			cleanup_migration(mg);
+			free_io_migration(mg);
 			return;
 		}
 	} else {
 		if (dm_cache_insert_mapping(cache->cmd, mg->cblock, mg->new_oblock)) {
 			DMWARN_LIMIT("promotion failed; couldn't update on disk metadata");
 			policy_remove_mapping(cache->policy, mg->new_oblock);
-			cleanup_migration(mg);
+			free_io_migration(mg);
 			return;
 		}
 	}
@@ -984,7 +999,7 @@ static void migration_success_post_commit(struct dm_cache_migration *mg)
 		} else {
 			if (mg->invalidate)
 				policy_remove_mapping(cache->policy, mg->old_oblock);
-			cleanup_migration(mg);
+			free_io_migration(mg);
 		}

 	} else {
@@ -999,7 +1014,7 @@ static void migration_success_post_commit(struct dm_cache_migration *mg)
 			bio_endio(mg->new_ocell->holder, 0);
 			cell_defer(cache, mg->new_ocell, false);
 		}
-		cleanup_migration(mg);
+		free_io_migration(mg);
 	}
 }
@@ -1251,7 +1266,7 @@ static void promote(struct cache *cache, struct prealloc *structs,
 	mg->new_ocell = cell;
 	mg->start_jiffies = jiffies;

-	inc_nr_migrations(cache);
+	inc_io_migrations(cache);
 	quiesce_migration(mg);
 }
@@ -1275,7 +1290,7 @@ static void writeback(struct cache *cache, struct prealloc *structs,
 	mg->new_ocell = NULL;
 	mg->start_jiffies = jiffies;

-	inc_nr_migrations(cache);
+	inc_io_migrations(cache);
 	quiesce_migration(mg);
 }
@@ -1302,7 +1317,7 @@ static void demote_then_promote(struct cache *cache, struct prealloc *structs,
 	mg->new_ocell = new_ocell;
 	mg->start_jiffies = jiffies;

-	inc_nr_migrations(cache);
+	inc_io_migrations(cache);
 	quiesce_migration(mg);
 }
@@ -1330,7 +1345,7 @@ static void invalidate(struct cache *cache, struct prealloc *structs,
 	mg->new_ocell = NULL;
 	mg->start_jiffies = jiffies;

-	inc_nr_migrations(cache);
+	inc_io_migrations(cache);
 	quiesce_migration(mg);
 }
@@ -1412,7 +1427,7 @@ static void process_discard_bio(struct cache *cache, struct prealloc *structs,

 static bool spare_migration_bandwidth(struct cache *cache)
 {
-	sector_t current_volume = (atomic_read(&cache->nr_migrations) + 1) *
+	sector_t current_volume = (atomic_read(&cache->nr_io_migrations) + 1) *
 		cache->sectors_per_block;
 	return current_volume < cache->migration_threshold;
 }
@@ -1764,7 +1779,7 @@ static void stop_quiescing(struct cache *cache)

 static void wait_for_migrations(struct cache *cache)
 {
-	wait_event(cache->migration_wait, !atomic_read(&cache->nr_migrations));
+	wait_event(cache->migration_wait, !atomic_read(&cache->nr_allocated_migrations));
 }

 static void stop_worker(struct cache *cache)
@@ -1876,9 +1891,6 @@ static void destroy(struct cache *cache)
 {
 	unsigned i;

-	if (cache->next_migration)
-		mempool_free(cache->next_migration, cache->migration_pool);
-
 	if (cache->migration_pool)
 		mempool_destroy(cache->migration_pool);
@@ -2424,7 +2436,8 @@ static int cache_create(struct cache_args *ca, struct cache **result)
 	INIT_LIST_HEAD(&cache->quiesced_migrations);
 	INIT_LIST_HEAD(&cache->completed_migrations);
 	INIT_LIST_HEAD(&cache->need_commit_migrations);
-	atomic_set(&cache->nr_migrations, 0);
+	atomic_set(&cache->nr_allocated_migrations, 0);
+	atomic_set(&cache->nr_io_migrations, 0);
 	init_waitqueue_head(&cache->migration_wait);
 	init_waitqueue_head(&cache->quiescing_wait);
@@ -2487,8 +2500,6 @@ static int cache_create(struct cache_args *ca, struct cache **result)
 		goto bad;
 	}

-	cache->next_migration = NULL;
-
 	cache->need_tick_bio = true;
 	cache->sized = false;
 	cache->invalidate = false;
...
...
@@ -206,6 +206,9 @@ struct mapped_device {
 	/* zero-length flush that will be cloned and submitted to targets */
 	struct bio flush_bio;

+	/* the number of internal suspends */
+	unsigned internal_suspend_count;
+
 	struct dm_stats stats;
 };
@@ -2928,7 +2931,7 @@ static void __dm_internal_suspend(struct mapped_device *md, unsigned suspend_flags)
 {
 	struct dm_table *map = NULL;

-	if (dm_suspended_internally_md(md))
+	if (md->internal_suspend_count++)
 		return; /* nested internal suspend */

 	if (dm_suspended_md(md)) {
@@ -2953,7 +2956,9 @@ static void __dm_internal_suspend(struct mapped_device *md, unsigned suspend_flags)

 static void __dm_internal_resume(struct mapped_device *md)
 {
-	if (!dm_suspended_internally_md(md))
+	BUG_ON(!md->internal_suspend_count);
+
+	if (--md->internal_suspend_count)
 		return; /* resume from nested internal suspend */

 	if (dm_suspended_md(md))
...
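The counter turns internal suspend/resume into a nestable pair, much like a recursive lock: only the outermost calls do real work. A minimal sketch (md_mock and the do_actual_* helpers are illustrative names):

static void internal_suspend(struct md_mock *md)
{
	if (md->suspend_count++)
		return;			/* already suspended: just nest */
	do_actual_suspend(md);
}

static void internal_resume(struct md_mock *md)
{
	BUG_ON(!md->suspend_count);	/* unbalanced resume is a bug */
	if (--md->suspend_count)
		return;			/* still nested */
	do_actual_resume(md);
}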
...
@@ -614,7 +614,7 @@ struct cx23885_board cx23885_boards[] = {
 		.portb		= CX23885_MPEG_DVB,
 	},
 	[CX23885_BOARD_HAUPPAUGE_HVR4400] = {
-		.name		= "Hauppauge WinTV-HVR4400",
+		.name		= "Hauppauge WinTV-HVR4400/HVR5500",
 		.porta		= CX23885_ANALOG_VIDEO,
 		.portb		= CX23885_MPEG_DVB,
 		.portc		= CX23885_MPEG_DVB,
@@ -622,6 +622,10 @@ struct cx23885_board cx23885_boards[] = {
 		.tuner_addr	= 0x60, /* 0xc0 >> 1 */
 		.tuner_bus	= 1,
 	},
+	[CX23885_BOARD_HAUPPAUGE_STARBURST] = {
+		.name		= "Hauppauge WinTV Starburst",
+		.portb		= CX23885_MPEG_DVB,
+	},
 	[CX23885_BOARD_AVERMEDIA_HC81R] = {
 		.name		= "AVerTV Hybrid Express Slim HC81R",
 		.tuner_type	= TUNER_XC2028,
@@ -936,19 +940,19 @@ struct cx23885_subid cx23885_subids[] = {
 	}, {
 		.subvendor = 0x0070,
 		.subdevice = 0xc108,
-		.card      = CX23885_BOARD_HAUPPAUGE_HVR4400,
+		.card      = CX23885_BOARD_HAUPPAUGE_HVR4400, /* Hauppauge WinTV HVR-4400 (Model 121xxx, Hybrid DVB-T/S2, IR) */
 	}, {
 		.subvendor = 0x0070,
 		.subdevice = 0xc138,
-		.card      = CX23885_BOARD_HAUPPAUGE_HVR4400,
+		.card      = CX23885_BOARD_HAUPPAUGE_HVR4400, /* Hauppauge WinTV HVR-5500 (Model 121xxx, Hybrid DVB-T/C/S2, IR) */
 	}, {
 		.subvendor = 0x0070,
 		.subdevice = 0xc12a,
-		.card      = CX23885_BOARD_HAUPPAUGE_HVR4400,
+		.card      = CX23885_BOARD_HAUPPAUGE_STARBURST, /* Hauppauge WinTV Starburst (Model 121x00, DVB-S2, IR) */
 	}, {
 		.subvendor = 0x0070,
 		.subdevice = 0xc1f8,
-		.card      = CX23885_BOARD_HAUPPAUGE_HVR4400,
+		.card      = CX23885_BOARD_HAUPPAUGE_HVR4400, /* Hauppauge WinTV HVR-5500 (Model 121xxx, Hybrid DVB-T/C/S2, IR) */
 	}, {
 		.subvendor = 0x1461,
 		.subdevice = 0xd939,
@@ -1545,8 +1549,9 @@ void cx23885_gpio_setup(struct cx23885_dev *dev)
 		cx_write(GPIO_ISM, 0x00000000);/* INTERRUPTS active low*/
 		break;
 	case CX23885_BOARD_HAUPPAUGE_HVR4400:
+	case CX23885_BOARD_HAUPPAUGE_STARBURST:
 		/* GPIO-8 tda10071 demod reset */
-		/* GPIO-9 si2165 demod reset */
+		/* GPIO-9 si2165 demod reset (only HVR4400/HVR5500)*/

 		/* Put the parts into reset and back */
 		cx23885_gpio_enable(dev, GPIO_8 | GPIO_9, 1);
@@ -1872,6 +1877,7 @@ void cx23885_card_setup(struct cx23885_dev *dev)
 	case CX23885_BOARD_HAUPPAUGE_HVR1850:
 	case CX23885_BOARD_HAUPPAUGE_HVR1290:
 	case CX23885_BOARD_HAUPPAUGE_HVR4400:
+	case CX23885_BOARD_HAUPPAUGE_STARBURST:
 	case CX23885_BOARD_HAUPPAUGE_IMPACTVCBE:
 		if (dev->i2c_bus[0].i2c_rc == 0)
 			hauppauge_eeprom(dev, eeprom+0xc0);
@@ -1980,6 +1986,11 @@ void cx23885_card_setup(struct cx23885_dev *dev)
 		ts2->ts_clk_en_val = 0x1; /* Enable TS_CLK */
 		ts2->src_sel_val   = CX23885_SRC_SEL_PARALLEL_MPEG_VIDEO;
 		break;
+	case CX23885_BOARD_HAUPPAUGE_STARBURST:
+		ts1->gen_ctrl_val  = 0xc; /* Serial bus + punctured clock */
+		ts1->ts_clk_en_val = 0x1; /* Enable TS_CLK */
+		ts1->src_sel_val   = CX23885_SRC_SEL_PARALLEL_MPEG_VIDEO;
+		break;
 	case CX23885_BOARD_DVBSKY_T9580:
 	case CX23885_BOARD_DVBSKY_T982:
 		ts1->gen_ctrl_val  = 0x5; /* Parallel */
...
...
@@ -2049,11 +2049,11 @@ static void cx23885_finidev(struct pci_dev *pci_dev)

 	cx23885_shutdown(dev);

-	pci_disable_device(pci_dev);
-
 	/* unregister stuff */
 	free_irq(pci_dev->irq, dev);
+	pci_disable_device(pci_dev);

 	cx23885_dev_unregister(dev);
 	vb2_dma_sg_cleanup_ctx(dev->alloc_ctx);
 	v4l2_ctrl_handler_free(&dev->ctrl_handler);
...
...
@@ -1710,6 +1710,17 @@ static int dvb_register(struct cx23885_tsport *port)
 			break;
 		}
 		break;
+	case CX23885_BOARD_HAUPPAUGE_STARBURST:
+		i2c_bus = &dev->i2c_bus[0];
+		fe0->dvb.frontend = dvb_attach(tda10071_attach,
+						&hauppauge_tda10071_config,
+						&i2c_bus->i2c_adap);
+		if (fe0->dvb.frontend != NULL) {
+			dvb_attach(a8293_attach, fe0->dvb.frontend,
+				   &i2c_bus->i2c_adap,
+				   &hauppauge_a8293_config);
+		}
+		break;
 	case CX23885_BOARD_DVBSKY_T9580:
 	case CX23885_BOARD_DVBSKY_S950:
 		i2c_bus = &dev->i2c_bus[0];
...
...
@@ -99,6 +99,7 @@
 #define CX23885_BOARD_DVBSKY_S950              49
 #define CX23885_BOARD_DVBSKY_S952              50
 #define CX23885_BOARD_DVBSKY_T982              51
+#define CX23885_BOARD_HAUPPAUGE_STARBURST      52

 #define GPIO_0 0x00000001
 #define GPIO_1 0x00000002
...
...
@@ -602,10 +602,13 @@ isp_video_querycap(struct file *file, void *fh, struct v4l2_capability *cap)
 	strlcpy(cap->card, video->video.name, sizeof(cap->card));
 	strlcpy(cap->bus_info, "media", sizeof(cap->bus_info));

+	cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT
+		| V4L2_CAP_STREAMING | V4L2_CAP_DEVICE_CAPS;
+
 	if (video->type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
-		cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+		cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
 	else
-		cap->capabilities = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING;
+		cap->device_caps = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING;

 	return 0;
 }
...
...
@@ -760,8 +760,9 @@ static int isi_camera_querycap(struct soc_camera_host *ici,
 {
 	strcpy(cap->driver, "atmel-isi");
 	strcpy(cap->card, "Atmel Image Sensor Interface");
-	cap->capabilities = (V4L2_CAP_VIDEO_CAPTURE |
-			     V4L2_CAP_STREAMING);
+	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;

 	return 0;
 }
...
...
@@ -1256,7 +1256,8 @@ static int mx2_camera_querycap(struct soc_camera_host *ici,
 {
 	/* cap->name is set by the friendly caller:-> */
 	strlcpy(cap->card, MX2_CAM_DRIVER_DESCRIPTION, sizeof(cap->card));
-	cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;

 	return 0;
 }
...
...
@@ -967,7 +967,8 @@ static int mx3_camera_querycap(struct soc_camera_host *ici,
 {
 	/* cap->name is set by the firendly caller:-> */
 	strlcpy(cap->card, "i.MX3x Camera", sizeof(cap->card));
-	cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;

 	return 0;
 }
...
...
@@ -1427,7 +1427,8 @@ static int omap1_cam_querycap(struct soc_camera_host *ici,
 {
 	/* cap->name is set by the friendly caller:-> */
 	strlcpy(cap->card, "OMAP1 Camera", sizeof(cap->card));
-	cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;

 	return 0;
 }
...
...
@@ -1576,7 +1576,8 @@ static int pxa_camera_querycap(struct soc_camera_host *ici,
 {
 	/* cap->name is set by the firendly caller:-> */
 	strlcpy(cap->card, pxa_cam_driver_description, sizeof(cap->card));
-	cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;

 	return 0;
 }
...
...
@@ -1799,7 +1799,9 @@ static int rcar_vin_querycap(struct soc_camera_host *ici,
 			     struct v4l2_capability *cap)
 {
 	strlcpy(cap->card, "R_Car_VIN", sizeof(cap->card));
-	cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
+
 	return 0;
 }
...
...
@@ -1652,7 +1652,9 @@ static int sh_mobile_ceu_querycap(struct soc_camera_host *ici,
 				  struct v4l2_capability *cap)
 {
 	strlcpy(cap->card, "SuperH_Mobile_CEU", sizeof(cap->card));
-	cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
+
 	return 0;
 }
...
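All the soc_camera querycap hunks above apply the same v4l2-compliance rule, worth stating once: device_caps describes the individual device node, while capabilities covers the whole device and must include V4L2_CAP_DEVICE_CAPS whenever device_caps is set. The recurring idiom:

cap->device_caps   = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
cap->capabilities  = cap->device_caps | V4L2_CAP_DEVICE_CAPS;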
... (38 more file diffs in this merge are collapsed and not shown)