Commit 8eee93e2 authored by Linus Torvalds

Merge tag 'char-misc-4.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc updates from Greg KH:
 "Here is the big char/misc driver update for 4.6-rc1.

  The majority of the patches here are hwtracing and some new mic
  drivers, but there are a lot of other driver updates as well.  Full
  details in the shortlog.

  All have been in linux-next for a while with no reported issues"

* tag 'char-misc-4.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (238 commits)
  goldfish: Fix build error of missing ioremap on UM
  nvmem: mediatek: Fix later provider initialization
  nvmem: imx-ocotp: Fix return value of imx_ocotp_read
  nvmem: Fix dependencies for !HAS_IOMEM archs
  char: genrtc: replace blacklist with whitelist
  drivers/hwtracing: make coresight-etm-perf.c explicitly non-modular
  drivers: char: mem: fix IS_ERROR_VALUE usage
  char: xillybus: Fix internal data structure initialization
  pch_phub: return -ENODATA if ROM can't be mapped
  Drivers: hv: vmbus: Support kexec on ws2012 r2 and above
  Drivers: hv: vmbus: Support handling messages on multiple CPUs
  Drivers: hv: utils: Remove util transport handler from list if registration fails
  Drivers: hv: util: Pass the channel information during the init call
  Drivers: hv: vmbus: avoid unneeded compiler optimizations in vmbus_wait_for_unload()
  Drivers: hv: vmbus: remove code duplication in message handling
  Drivers: hv: vmbus: avoid wait_for_completion() on crash
  Drivers: hv: vmbus: don't lose HVMSG_TIMER_EXPIRED messages
  misc: at24: replace memory_accessor with nvmem_device_read
  eeprom: 93xx46: extend driver to plug into the NVMEM framework
  eeprom: at25: extend driver to plug into the NVMEM framework
  ...
......@@ -27,3 +27,17 @@ Description: The mapping of which primary/sub channels are bound to which
Virtual Processors.
Format: <channel's child_relid:the bound cpu's number>
Users: tools/hv/lsvmbus
What: /sys/bus/vmbus/devices/vmbus_*/device
Date: Dec. 2015
KernelVersion: 4.5
Contact: K. Y. Srinivasan <kys@microsoft.com>
Description: The 16 bit device ID of the device
Users: tools/hv/lsvmbus and user level RDMA libraries
What: /sys/bus/vmbus/devices/vmbus_*/vendor
Date: Dec. 2015
KernelVersion: 4.5
Contact: K. Y. Srinivasan <kys@microsoft.com>
Description: The 16 bit vendor ID of the device
Users: tools/hv/lsvmbus and user level RDMA libraries
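Both attributes are plain hex-formatted sysfs files (the show routines further down in
this series print "0x%x\n"), so they can be read with ordinary file I/O. A minimal user
space C sketch; the vmbus_14 instance name is a hypothetical example, real names come
from enumerating /sys/bus/vmbus/devices/:
#include <stdio.h>
int main(void)
{
	unsigned int id;
	/* hypothetical instance name; enumerate /sys/bus/vmbus/devices/ for real ones */
	FILE *f = fopen("/sys/bus/vmbus/devices/vmbus_14/device", "r");
	if (!f)
		return 1;
	if (fscanf(f, "%x", &id) == 1)
		printf("16 bit device ID: 0x%x\n", id);
	fclose(f);
	return 0;
}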
Android Goldfish QEMU Pipe
Android pipe virtual device generated by the Android emulator.
Required properties:
- compatible : should contain "google,android-pipe" to match emulator
- reg : <registers mapping>
- interrupts : <interrupt mapping>
Example:
android_pipe@a010000 {
compatible = "google,android-pipe";
reg = <0xff018000 0x2000>;
interrupts = <0x12>;
};
EEPROMs (SPI) compatible with Microchip Technology 93xx46 family.
Required properties:
- compatible : shall be one of:
"atmel,at93c46d"
"eeprom-93xx46"
- data-size : number of data bits per word (either 8 or 16)
Optional properties:
- read-only : parameter-less property which disables writes to the EEPROM
- select-gpios : if present, specifies the GPIO that will be asserted prior to
each access to the EEPROM (e.g. for SPI bus multiplexing)
Property rules described in Documentation/devicetree/bindings/spi/spi-bus.txt
apply. In particular, "reg" and "spi-max-frequency" properties must be given.
Example:
eeprom@0 {
compatible = "eeprom-93xx46";
reg = <0>;
spi-max-frequency = <1000000>;
spi-cs-high;
data-size = <8>;
select-gpios = <&gpio4 4 GPIO_ACTIVE_HIGH>;
};
* NXP LPC18xx EEPROM memory NVMEM driver
Required properties:
- compatible: Should be "nxp,lpc1857-eeprom"
- reg: Must contain an entry with the physical base address and length
for each entry in reg-names.
- reg-names: Must include the following entries.
- reg: EEPROM registers.
- mem: EEPROM address space.
- clocks: Must contain an entry for each entry in clock-names.
- clock-names: Must include the following entries.
- eeprom: EEPROM operating clock.
- resets: Should contain a reference to the reset controller asserting
the EEPROM in reset.
- interrupts: Should contain EEPROM interrupt.
Example:
eeprom: eeprom@4000e000 {
compatible = "nxp,lpc1857-eeprom";
reg = <0x4000e000 0x1000>,
<0x20040000 0x4000>;
reg-names = "reg", "mem";
clocks = <&ccu1 CLK_CPU_EEPROM>;
clock-names = "eeprom";
resets = <&rgu 27>;
interrupts = <4>;
};
= Mediatek MTK-EFUSE device tree bindings =
This binding is intended to represent MTK-EFUSE which is found in most Mediatek SOCs.
Required properties:
- compatible: should be "mediatek,mt8173-efuse" or "mediatek,efuse"
- reg: Should contain registers location and length
= Data cells =
Data cells are child nodes of MTK-EFUSE, with bindings as described in
bindings/nvmem/nvmem.txt
Example:
efuse: efuse@10206000 {
compatible = "mediatek,mt8173-efuse";
reg = <0 0x10206000 0 0x1000>;
#address-cells = <1>;
#size-cells = <1>;
/* Data cells */
thermal_calibration: calib@528 {
reg = <0x528 0xc>;
};
};
= Data consumers =
Are device nodes which consume nvmem data cells.
For example:
thermal {
...
nvmem-cells = <&thermal_calibration>;
nvmem-cell-names = "calibration";
};
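On the consumer side, the thermal driver above would read the referenced cell through
the kernel's nvmem consumer API. A minimal sketch, with the function name being
illustrative:
#include <linux/device.h>
#include <linux/err.h>
#include <linux/nvmem-consumer.h>
#include <linux/slab.h>
static int read_thermal_calibration(struct device *dev)
{
	struct nvmem_cell *cell;
	size_t len;
	u8 *buf;
	/* "calibration" matches the nvmem-cell-names entry above */
	cell = nvmem_cell_get(dev, "calibration");
	if (IS_ERR(cell))
		return PTR_ERR(cell);
	buf = nvmem_cell_read(cell, &len);
	nvmem_cell_put(cell);
	if (IS_ERR(buf))
		return PTR_ERR(buf);
	/* ... consume the len calibration bytes ... */
	kfree(buf);
	return 0;
}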
......@@ -12,10 +12,19 @@ for the X100 devices.
Since it is a PCIe card, it does not have the ability to host hardware
devices for networking, storage and console. We provide these devices
on X100 coprocessors thus enabling a self-bootable equivalent environment
for applications. A key benefit of our solution is that it leverages
the standard virtio framework for network, disk and console devices,
though in our case the virtio framework is used across a PCIe bus.
on X100 coprocessors thus enabling a self-bootable equivalent
environment for applications. A key benefit of our solution is that it
leverages the standard virtio framework for network, disk and console
devices, though in our case the virtio framework is used across a PCIe
bus. A Virtio Over PCIe (VOP) driver allows creating user space
backends or devices on the host which are used to probe virtio drivers
for these devices on the MIC card. The existing VRINGH infrastructure
in the kernel is used to access virtio rings from the host. The card
VOP driver allows card virtio drivers to communicate with their user
space backends on the host via a device page. Ring 3 apps on the host
can add, remove and configure virtio devices. A thin MIC specific
virtio_config_ops is implemented which is borrowed heavily from
previous similar implementations in lguest and s390.
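As a rough sketch of that ring 3 flow, a user space backend (such as the mpssd daemon
updated later in this series) opens the per-card VOP device node and hands the driver a
virtio device descriptor through an ioctl. The MIC_VIRTIO_ADD_DEVICE request and
struct mic_device_desc come from the mic uapi headers and are assumed here rather than
shown in this diff:
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/mic_ioctl.h>	/* MIC_VIRTIO_ADD_DEVICE, struct mic_device_desc (assumed) */
static int add_virtio_device(int mic_id, struct mic_device_desc *dd)
{
	char path[64];
	int fd;
	snprintf(path, sizeof(path), "/dev/vop_virtio%d", mic_id);
	fd = open(path, O_RDWR);
	if (fd < 0)
		return -1;
	/* register the descriptor with the host-side VOP driver */
	if (ioctl(fd, MIC_VIRTIO_ADD_DEVICE, dd) < 0)
		return -1;
	return fd;
}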
MIC PCIe card has a dma controller with 8 channels. These channels are
shared between the host s/w and the card s/w. 0 to 3 are used by host
......@@ -38,7 +47,6 @@ single threaded performance for the host compared to MIC, the ability of
the host to initiate DMA's to/from the card using the MIC DMA engine and
the fact that the virtio block storage backend can only be on the host.
|
+----------+ | +----------+
| Card OS | | | Host OS |
+----------+ | +----------+
......@@ -47,27 +55,25 @@ the fact that the virtio block storage backend can only be on the host.
| Virtio| |Virtio | |Virtio| | |Virtio | |Virtio | |Virtio |
| Net | |Console | |Block | | |Net | |Console | |Block |
| Driver| |Driver | |Driver| | |backend | |backend | |backend |
+-------+ +--------+ +------+ | +---------+ +--------+ +--------+
+---+---+ +---+----+ +--+---+ | +---------+ +----+---+ +--------+
| | | | | | |
| | | |User | | |
| | | |------|------------|---------|-------
+-------------------+ |Kernel +--------------------------+
| | | Virtio over PCIe IOCTLs |
| | +--------------------------+
+-----------+ | | | +-----------+
| MIC DMA | | +------+ | +------+ +------+ | | MIC DMA |
| Driver | | | SCIF | | | SCIF | | COSM | | | Driver |
+-----------+ | +------+ | +------+ +--+---+ | +-----------+
| | | | | | | |
+---------------+ | +------+ | +--+---+ +--+---+ | +----------------+
|MIC virtual Bus| | |SCIF | | |SCIF | | COSM | | |MIC virtual Bus |
+---------------+ | |HW Bus| | |HW Bus| | Bus | | +----------------+
| | +------+ | +--+---+ +------+ | |
| | | | | | | |
| +-----------+---+ | | | +---------------+ |
| |Intel MIC | | | | |Intel MIC | |
+---|Card Driver | | | | |Host Driver | |
+------------+--------+ | +----+---------------+-----+
| | | |------|------------|--+------|-------
+---------+---------+ |Kernel |
| | |
+---------+ +---+----+ +------+ | +------+ +------+ +--+---+ +-------+
|MIC DMA | | VOP | | SCIF | | | SCIF | | COSM | | VOP | |MIC DMA|
+---+-----+ +---+----+ +--+---+ | +--+---+ +--+---+ +------+ +----+--+
| | | | | | |
+---+-----+ +---+----+ +--+---+ | +--+---+ +--+---+ +------+ +----+--+
|MIC | | VOP | |SCIF | | |SCIF | | COSM | | VOP | | MIC |
|HW Bus | | HW Bus| |HW Bus| | |HW Bus| | Bus | |HW Bus| |HW Bus |
+---------+ +--------+ +--+---+ | +--+---+ +------+ +------+ +-------+
| | | | | | |
| +-----------+--+ | | | +---------------+ |
| |Intel MIC | | | | |Intel MIC | |
| |Card Driver | | | | |Host Driver | |
+---+--------------+------+ | +----+---------------+-----+
| | |
+-------------------------------------------------------------+
| |
......
......@@ -35,7 +35,7 @@
exec=/usr/sbin/mpssd
sysfs="/sys/class/mic"
mic_modules="mic_host mic_x100_dma scif"
mic_modules="mic_host mic_x100_dma scif vop"
start()
{
......
......@@ -926,7 +926,7 @@ add_virtio_device(struct mic_info *mic, struct mic_device_desc *dd)
char path[PATH_MAX];
int fd, err;
snprintf(path, PATH_MAX, "/dev/mic%d", mic->id);
snprintf(path, PATH_MAX, "/dev/vop_virtio%d", mic->id);
fd = open(path, O_RDWR);
if (fd < 0) {
mpsslog("Could not open %s %s\n", path, strerror(errno));
......
......@@ -231,15 +231,15 @@ IT knows when a platform crashes even when there is a hard failure on the host.
The Intel AMT Watchdog is composed of two parts:
1) Firmware feature - receives the heartbeats
and sends an event when the heartbeats stop.
2) Intel MEI driver - connects to the watchdog feature, configures the
watchdog and sends the heartbeats.
2) Intel MEI iAMT watchdog driver - connects to the watchdog feature,
configures the watchdog and sends the heartbeats.
The Intel MEI driver uses the kernel watchdog API to configure the Intel AMT
Watchdog and to send heartbeats to it. The default timeout of the
The Intel iAMT watchdog MEI driver uses the kernel watchdog API to configure
the Intel AMT Watchdog and to send heartbeats to it. The default timeout of the
watchdog is 120 seconds.
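Because the driver registers with the watchdog core, user space drives it through the
standard /dev/watchdog interface. A minimal sketch of setting the timeout and feeding
heartbeats:
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/watchdog.h>
int main(void)
{
	int timeout = 120;	/* the driver's default */
	int fd = open("/dev/watchdog", O_WRONLY);
	if (fd < 0)
		return 1;
	ioctl(fd, WDIOC_SETTIMEOUT, &timeout);
	for (;;) {
		ioctl(fd, WDIOC_KEEPALIVE, 0);	/* one heartbeat */
		sleep(timeout / 2);
	}
}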
If the Intel AMT Watchdog feature does not exist (i.e. the connection failed),
the Intel MEI driver will disable the sending of heartbeats.
If the Intel AMT is not enabled in the firmware then the watchdog client won't enumerate
on the me client bus and watchdog devices won't be exposed.
Supported Chipsets
......
......@@ -5765,6 +5765,7 @@ S: Supported
F: include/uapi/linux/mei.h
F: include/linux/mei_cl_bus.h
F: drivers/misc/mei/*
F: drivers/watchdog/mei_wdt.c
F: Documentation/misc-devices/mei/*
INTEL MIC DRIVERS (mic)
......@@ -6598,6 +6599,11 @@ F: samples/livepatch/
L: live-patching@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching.git
LINUX KERNEL DUMP TEST MODULE (LKDTM)
M: Kees Cook <keescook@chromium.org>
S: Maintained
F: drivers/misc/lkdtm.c
LLC (802.2)
M: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
S: Maintained
......
......@@ -562,8 +562,7 @@
extcon_usb2: tps659038_usb {
compatible = "ti,palmas-usb-vid";
ti,enable-vbus-detection;
ti,enable-id-detection;
id-gpios = <&gpio7 24 GPIO_ACTIVE_HIGH>;
vbus-gpio = <&gpio4 21 GPIO_ACTIVE_HIGH>;
};
};
......
......@@ -115,13 +115,14 @@ static void mityomapl138_cpufreq_init(const char *partnum)
static void mityomapl138_cpufreq_init(const char *partnum) { }
#endif
static void read_factory_config(struct memory_accessor *a, void *context)
static void read_factory_config(struct nvmem_device *nvmem, void *context)
{
int ret;
const char *partnum = NULL;
struct davinci_soc_info *soc_info = &davinci_soc_info;
ret = a->read(a, (char *)&factory_config, 0, sizeof(factory_config));
ret = nvmem_device_read(nvmem, 0, sizeof(factory_config),
&factory_config);
if (ret != sizeof(struct factory_config)) {
pr_warn("Read Factory Config Failed: %d\n", ret);
goto bad_config;
......
......@@ -28,13 +28,13 @@ EXPORT_SYMBOL(davinci_soc_info);
void __iomem *davinci_intc_base;
int davinci_intc_type;
void davinci_get_mac_addr(struct memory_accessor *mem_acc, void *context)
void davinci_get_mac_addr(struct nvmem_device *nvmem, void *context)
{
char *mac_addr = davinci_soc_info.emac_pdata->mac_addr;
off_t offset = (off_t)context;
/* Read MAC addr from EEPROM */
if (mem_acc->read(mem_acc, mac_addr, offset, ETH_ALEN) == ETH_ALEN)
if (nvmem_device_read(nvmem, offset, ETH_ALEN, mac_addr) == ETH_ALEN)
pr_info("Read MAC addr from EEPROM: %pM\n", mac_addr);
}
......
......@@ -1321,6 +1321,7 @@ static void binder_transaction(struct binder_proc *proc,
struct binder_transaction *t;
struct binder_work *tcomplete;
binder_size_t *offp, *off_end;
binder_size_t off_min;
struct binder_proc *target_proc;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
......@@ -1522,18 +1523,24 @@ static void binder_transaction(struct binder_proc *proc,
goto err_bad_offset;
}
off_end = (void *)offp + tr->offsets_size;
off_min = 0;
for (; offp < off_end; offp++) {
struct flat_binder_object *fp;
if (*offp > t->buffer->data_size - sizeof(*fp) ||
*offp < off_min ||
t->buffer->data_size < sizeof(*fp) ||
!IS_ALIGNED(*offp, sizeof(u32))) {
binder_user_error("%d:%d got transaction with invalid offset, %lld\n",
proc->pid, thread->pid, (u64)*offp);
binder_user_error("%d:%d got transaction with invalid offset, %lld (min %lld, max %lld)\n",
proc->pid, thread->pid, (u64)*offp,
(u64)off_min,
(u64)(t->buffer->data_size -
sizeof(*fp)));
return_error = BR_FAILED_REPLY;
goto err_bad_offset;
}
fp = (struct flat_binder_object *)(t->buffer->data + *offp);
off_min = *offp + sizeof(struct flat_binder_object);
switch (fp->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
......@@ -3593,13 +3600,24 @@ static int binder_transactions_show(struct seq_file *m, void *unused)
static int binder_proc_show(struct seq_file *m, void *unused)
{
struct binder_proc *itr;
struct binder_proc *proc = m->private;
int do_lock = !binder_debug_no_lock;
bool valid_proc = false;
if (do_lock)
binder_lock(__func__);
seq_puts(m, "binder proc state:\n");
print_binder_proc(m, proc, 1);
hlist_for_each_entry(itr, &binder_procs, proc_node) {
if (itr == proc) {
valid_proc = true;
break;
}
}
if (valid_proc) {
seq_puts(m, "binder proc state:\n");
print_binder_proc(m, proc, 1);
}
if (do_lock)
binder_unlock(__func__);
return 0;
......
......@@ -258,7 +258,7 @@ static void __fw_free_buf(struct kref *ref)
vunmap(buf->data);
for (i = 0; i < buf->nr_pages; i++)
__free_page(buf->pages[i]);
kfree(buf->pages);
vfree(buf->pages);
} else
#endif
vfree(buf->data);
......@@ -635,7 +635,7 @@ static ssize_t firmware_loading_store(struct device *dev,
if (!test_bit(FW_STATUS_DONE, &fw_buf->status)) {
for (i = 0; i < fw_buf->nr_pages; i++)
__free_page(fw_buf->pages[i]);
kfree(fw_buf->pages);
vfree(fw_buf->pages);
fw_buf->pages = NULL;
fw_buf->page_array_size = 0;
fw_buf->nr_pages = 0;
......@@ -746,8 +746,7 @@ static int fw_realloc_buffer(struct firmware_priv *fw_priv, int min_size)
buf->page_array_size * 2);
struct page **new_pages;
new_pages = kmalloc(new_array_size * sizeof(void *),
GFP_KERNEL);
new_pages = vmalloc(new_array_size * sizeof(void *));
if (!new_pages) {
fw_load_abort(fw_priv);
return -ENOMEM;
......@@ -756,7 +755,7 @@ static int fw_realloc_buffer(struct firmware_priv *fw_priv, int min_size)
buf->page_array_size * sizeof(void *));
memset(&new_pages[buf->page_array_size], 0, sizeof(void *) *
(new_array_size - buf->page_array_size));
kfree(buf->pages);
vfree(buf->pages);
buf->pages = new_pages;
buf->page_array_size = new_array_size;
}
......
......@@ -328,7 +328,8 @@ config JS_RTC
config GEN_RTC
tristate "Generic /dev/rtc emulation"
depends on RTC!=y && !IA64 && !ARM && !M32R && !MIPS && !SPARC && !FRV && !S390 && !SUPERH && !AVR32 && !BLACKFIN && !UML
depends on RTC!=y
depends on ALPHA || M68K || MN10300 || PARISC || PPC || X86
---help---
If you say Y here and create a character special file /dev/rtc with
major number 10 and minor number 135 using mknod ("man mknod"), you
......
......@@ -695,7 +695,7 @@ static loff_t memory_lseek(struct file *file, loff_t offset, int orig)
offset += file->f_pos;
case SEEK_SET:
/* to avoid userland mistaking f_pos=-9 as -EBADF=-9 */
if (IS_ERR_VALUE((unsigned long long)offset)) {
if ((unsigned long long)offset >= -MAX_ERRNO) {
ret = -EOVERFLOW;
break;
}
......
......@@ -496,12 +496,12 @@ static void pc_set_checksum(void)
#ifdef CONFIG_PROC_FS
static char *floppy_types[] = {
static const char * const floppy_types[] = {
"none", "5.25'' 360k", "5.25'' 1.2M", "3.5'' 720k", "3.5'' 1.44M",
"3.5'' 2.88M", "3.5'' 2.88M"
};
static char *gfx_types[] = {
static const char * const gfx_types[] = {
"EGA, VGA, ... (with BIOS)",
"CGA (40 cols)",
"CGA (80 cols)",
......@@ -602,7 +602,7 @@ static void atari_set_checksum(void)
static struct {
unsigned char val;
char *name;
const char *name;
} boot_prefs[] = {
{ 0x80, "TOS" },
{ 0x40, "ASV" },
......@@ -611,7 +611,7 @@ static struct {
{ 0x00, "unspecified" }
};
static char *languages[] = {
static const char * const languages[] = {
"English (US)",
"German",
"French",
......@@ -623,7 +623,7 @@ static char *languages[] = {
"Swiss (German)"
};
static char *dateformat[] = {
static const char * const dateformat[] = {
"MM%cDD%cYY",
"DD%cMM%cYY",
"YY%cMM%cDD",
......@@ -634,7 +634,7 @@ static char *dateformat[] = {
"7 (undefined)"
};
static char *colors[] = {
static const char * const colors[] = {
"2", "4", "16", "256", "65536", "??", "??", "??"
};
......
......@@ -129,10 +129,9 @@ static void button_consume_callbacks (int bpcount)
static void button_sequence_finished (unsigned long parameters)
{
#ifdef CONFIG_NWBUTTON_REBOOT /* Reboot using button is enabled */
if (button_press_count == reboot_count)
if (IS_ENABLED(CONFIG_NWBUTTON_REBOOT) &&
button_press_count == reboot_count)
kill_cad_pid(SIGINT, 1); /* Ask init to reboot us */
#endif /* CONFIG_NWBUTTON_REBOOT */
button_consume_callbacks (button_press_count);
bcount = sprintf (button_output_buffer, "%d\n", button_press_count);
button_press_count = 0; /* Reset the button press counter */
......
(This diff has been collapsed.)
......@@ -334,10 +334,8 @@ static int __init raw_init(void)
cdev_init(&raw_cdev, &raw_fops);
ret = cdev_add(&raw_cdev, dev, max_raw_minors);
if (ret) {
if (ret)
goto error_region;
}
raw_class = class_create(THIS_MODULE, "raw");
if (IS_ERR(raw_class)) {
printk(KERN_ERR "Error creating raw class.\n");
......
......@@ -509,7 +509,7 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
channel->log2_element_size = ((format > 2) ?
2 : format);
bytebufsize = channel->rd_buf_size = bufsize *
bytebufsize = bufsize *
(1 << channel->log2_element_size);
buffers = devm_kcalloc(dev, bufnum,
......@@ -523,6 +523,7 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
if (!is_writebuf) {
channel->num_rd_buffers = bufnum;
channel->rd_buf_size = bytebufsize;
channel->rd_allow_partial = allowpartial;
channel->rd_synchronous = synchronous;
channel->rd_exclusive_open = exclusive_open;
......@@ -533,6 +534,7 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
bufnum, bytebufsize);
} else if (channelnum > 0) {
channel->num_wr_buffers = bufnum;
channel->wr_buf_size = bytebufsize;
channel->seekable = seekable;
channel->wr_supports_nonempty = supports_nonempty;
......
......@@ -185,7 +185,7 @@ static void arizona_extcon_hp_clamp(struct arizona_extcon_info *info,
break;
};
mutex_lock(&arizona->dapm->card->dapm_mutex);
snd_soc_dapm_mutex_lock(arizona->dapm);
arizona->hpdet_clamp = clamp;
......@@ -227,7 +227,7 @@ static void arizona_extcon_hp_clamp(struct arizona_extcon_info *info,
ret);
}
mutex_unlock(&arizona->dapm->card->dapm_mutex);
snd_soc_dapm_mutex_unlock(arizona->dapm);
}
static void arizona_extcon_set_mode(struct arizona_extcon_info *info, int mode)
......
......@@ -126,7 +126,7 @@ static int gpio_extcon_probe(struct platform_device *pdev)
INIT_DELAYED_WORK(&data->work, gpio_extcon_work);
/*
* Request the interrput of gpio to detect whether external connector
* Request the interrupt of gpio to detect whether external connector
* is attached or detached.
*/
ret = devm_request_any_context_irq(&pdev->dev, data->irq,
......
......@@ -150,6 +150,7 @@ enum max14577_muic_acc_type {
static const unsigned int max14577_extcon_cable[] = {
EXTCON_USB,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP,
EXTCON_CHG_USB_FAST,
EXTCON_CHG_USB_SLOW,
......@@ -454,6 +455,8 @@ static int max14577_muic_chg_handler(struct max14577_muic_info *info)
return ret;
extcon_set_cable_state_(info->edev, EXTCON_USB, attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
break;
case MAX14577_CHARGER_TYPE_DEDICATED_CHG:
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_DCP,
......
......@@ -204,6 +204,7 @@ enum max77693_muic_acc_type {
static const unsigned int max77693_extcon_cable[] = {
EXTCON_USB,
EXTCON_USB_HOST,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP,
EXTCON_CHG_USB_FAST,
EXTCON_CHG_USB_SLOW,
......@@ -512,8 +513,11 @@ static int max77693_muic_dock_handler(struct max77693_muic_info *info,
break;
case MAX77693_MUIC_ADC_AV_CABLE_NOLOAD: /* Dock-Audio */
dock_id = EXTCON_DOCK;
if (!attached)
if (!attached) {
extcon_set_cable_state_(info->edev, EXTCON_USB, false);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
false);
}
break;
default:
dev_err(info->dev, "failed to detect %s dock device\n",
......@@ -601,6 +605,8 @@ static int max77693_muic_adc_ground_handler(struct max77693_muic_info *info)
if (ret < 0)
return ret;
extcon_set_cable_state_(info->edev, EXTCON_USB, attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
break;
case MAX77693_MUIC_GND_MHL:
case MAX77693_MUIC_GND_MHL_VB:
......@@ -830,6 +836,8 @@ static int max77693_muic_chg_handler(struct max77693_muic_info *info)
*/
extcon_set_cable_state_(info->edev, EXTCON_USB,
attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
if (!cable_attached)
extcon_set_cable_state_(info->edev, EXTCON_DOCK,
......@@ -899,6 +907,8 @@ static int max77693_muic_chg_handler(struct max77693_muic_info *info)
extcon_set_cable_state_(info->edev, EXTCON_USB,
attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
break;
case MAX77693_CHARGER_TYPE_DEDICATED_CHG:
/* Only TA cable */
......
......@@ -122,6 +122,7 @@ enum max77843_muic_charger_type {
static const unsigned int max77843_extcon_cable[] = {
EXTCON_USB,
EXTCON_USB_HOST,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP,
EXTCON_CHG_USB_CDP,
EXTCON_CHG_USB_FAST,
......@@ -486,6 +487,8 @@ static int max77843_muic_chg_handler(struct max77843_muic_info *info)
return ret;
extcon_set_cable_state_(info->edev, EXTCON_USB, attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
break;
case MAX77843_MUIC_CHG_DOWNSTREAM:
ret = max77843_muic_set_path(info,
......@@ -803,7 +806,7 @@ static int max77843_muic_probe(struct platform_device *pdev)
/* Clear IRQ bits before request IRQs */
ret = regmap_bulk_read(max77843->regmap_muic,
MAX77843_MUIC_REG_INT1, info->status,
MAX77843_MUIC_IRQ_NUM);
MAX77843_MUIC_STATUS_NUM);
if (ret) {
dev_err(&pdev->dev, "Failed to Clear IRQ bits\n");
goto err_muic_irq;
......
......@@ -148,6 +148,7 @@ struct max8997_muic_info {
static const unsigned int max8997_extcon_cable[] = {
EXTCON_USB,
EXTCON_USB_HOST,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP,
EXTCON_CHG_USB_FAST,
EXTCON_CHG_USB_SLOW,
......@@ -334,6 +335,8 @@ static int max8997_muic_handle_usb(struct max8997_muic_info *info,
break;
case MAX8997_USB_DEVICE:
extcon_set_cable_state_(info->edev, EXTCON_USB, attached);
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
break;
default:
dev_err(info->dev, "failed to detect %s usb cable\n",
......
......@@ -216,11 +216,23 @@ static int palmas_usb_probe(struct platform_device *pdev)
return PTR_ERR(palmas_usb->id_gpiod);
}
palmas_usb->vbus_gpiod = devm_gpiod_get_optional(&pdev->dev, "vbus",
GPIOD_IN);
if (IS_ERR(palmas_usb->vbus_gpiod)) {
dev_err(&pdev->dev, "failed to get vbus gpio\n");
return PTR_ERR(palmas_usb->vbus_gpiod);
}
if (palmas_usb->enable_id_detection && palmas_usb->id_gpiod) {
palmas_usb->enable_id_detection = false;
palmas_usb->enable_gpio_id_detection = true;
}
if (palmas_usb->enable_vbus_detection && palmas_usb->vbus_gpiod) {
palmas_usb->enable_vbus_detection = false;
palmas_usb->enable_gpio_vbus_detection = true;
}
if (palmas_usb->enable_gpio_id_detection) {
u32 debounce;
......@@ -266,7 +278,7 @@ static int palmas_usb_probe(struct platform_device *pdev)
palmas_usb->id_irq,
NULL, palmas_id_irq_handler,
IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING |
IRQF_ONESHOT | IRQF_EARLY_RESUME,
IRQF_ONESHOT,
"palmas_usb_id", palmas_usb);
if (status < 0) {
dev_err(&pdev->dev, "can't get IRQ %d, err %d\n",
......@@ -304,13 +316,47 @@ static int palmas_usb_probe(struct platform_device *pdev)
palmas_usb->vbus_irq, NULL,
palmas_vbus_irq_handler,
IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING |
IRQF_ONESHOT | IRQF_EARLY_RESUME,
IRQF_ONESHOT,
"palmas_usb_vbus", palmas_usb);
if (status < 0) {
dev_err(&pdev->dev, "can't get IRQ %d, err %d\n",
palmas_usb->vbus_irq, status);
return status;
}
} else if (palmas_usb->enable_gpio_vbus_detection) {
/* remux GPIO_1 as VBUSDET */
status = palmas_update_bits(palmas,
PALMAS_PU_PD_OD_BASE,
PALMAS_PRIMARY_SECONDARY_PAD1,
PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_1_MASK,
(1 << PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_1_SHIFT));
if (status < 0) {
dev_err(&pdev->dev, "can't remux GPIO1\n");
return status;
}
palmas_usb->vbus_otg_irq = regmap_irq_get_virq(palmas->irq_data,
PALMAS_VBUS_OTG_IRQ);
palmas_usb->gpio_vbus_irq = gpiod_to_irq(palmas_usb->vbus_gpiod);
if (palmas_usb->gpio_vbus_irq < 0) {
dev_err(&pdev->dev, "failed to get vbus irq\n");
return palmas_usb->gpio_vbus_irq;
}
status = devm_request_threaded_irq(&pdev->dev,
palmas_usb->gpio_vbus_irq,
NULL,
palmas_vbus_irq_handler,
IRQF_TRIGGER_FALLING |
IRQF_TRIGGER_RISING |
IRQF_ONESHOT |
IRQF_EARLY_RESUME,
"palmas_usb_vbus",
palmas_usb);
if (status < 0) {
dev_err(&pdev->dev,
"failed to request handler for vbus irq\n");
return status;
}
}
palmas_enable_irq(palmas_usb);
......@@ -337,6 +383,8 @@ static int palmas_usb_suspend(struct device *dev)
if (device_may_wakeup(dev)) {
if (palmas_usb->enable_vbus_detection)
enable_irq_wake(palmas_usb->vbus_irq);
if (palmas_usb->enable_gpio_vbus_detection)
enable_irq_wake(palmas_usb->gpio_vbus_irq);
if (palmas_usb->enable_id_detection)
enable_irq_wake(palmas_usb->id_irq);
if (palmas_usb->enable_gpio_id_detection)
......@@ -352,6 +400,8 @@ static int palmas_usb_resume(struct device *dev)
if (device_may_wakeup(dev)) {
if (palmas_usb->enable_vbus_detection)
disable_irq_wake(palmas_usb->vbus_irq);
if (palmas_usb->enable_gpio_vbus_detection)
disable_irq_wake(palmas_usb->gpio_vbus_irq);
if (palmas_usb->enable_id_detection)
disable_irq_wake(palmas_usb->id_irq);
if (palmas_usb->enable_gpio_id_detection)
......
......@@ -93,6 +93,7 @@ static struct reg_data rt8973a_reg_data[] = {
static const unsigned int rt8973a_extcon_cable[] = {
EXTCON_USB,
EXTCON_USB_HOST,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP,
EXTCON_JIG,
EXTCON_NONE,
......@@ -398,6 +399,9 @@ static int rt8973a_muic_cable_handler(struct rt8973a_muic_info *info,
/* Change the state of external accessory */
extcon_set_cable_state_(info->edev, id, attached);
if (id == EXTCON_USB)
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
return 0;
}
......@@ -663,7 +667,7 @@ MODULE_DEVICE_TABLE(of, rt8973a_dt_match);
#ifdef CONFIG_PM_SLEEP
static int rt8973a_muic_suspend(struct device *dev)
{
struct i2c_client *i2c = container_of(dev, struct i2c_client, dev);
struct i2c_client *i2c = to_i2c_client(dev);
struct rt8973a_muic_info *info = i2c_get_clientdata(i2c);
enable_irq_wake(info->irq);
......@@ -673,7 +677,7 @@ static int rt8973a_muic_suspend(struct device *dev)
static int rt8973a_muic_resume(struct device *dev)
{
struct i2c_client *i2c = container_of(dev, struct i2c_client, dev);
struct i2c_client *i2c = to_i2c_client(dev);
struct rt8973a_muic_info *info = i2c_get_clientdata(i2c);
disable_irq_wake(info->irq);
......
......@@ -95,6 +95,7 @@ static struct reg_data sm5502_reg_data[] = {
static const unsigned int sm5502_extcon_cable[] = {
EXTCON_USB,
EXTCON_USB_HOST,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_DCP,
EXTCON_NONE,
};
......@@ -411,6 +412,9 @@ static int sm5502_muic_cable_handler(struct sm5502_muic_info *info,
/* Change the state of external accessory */
extcon_set_cable_state_(info->edev, id, attached);
if (id == EXTCON_USB)
extcon_set_cable_state_(info->edev, EXTCON_CHG_USB_SDP,
attached);
return 0;
}
......@@ -655,7 +659,7 @@ MODULE_DEVICE_TABLE(of, sm5502_dt_match);
#ifdef CONFIG_PM_SLEEP
static int sm5502_muic_suspend(struct device *dev)
{
struct i2c_client *i2c = container_of(dev, struct i2c_client, dev);
struct i2c_client *i2c = to_i2c_client(dev);
struct sm5502_muic_info *info = i2c_get_clientdata(i2c);
enable_irq_wake(info->irq);
......@@ -665,7 +669,7 @@ static int sm5502_muic_suspend(struct device *dev)
static int sm5502_muic_resume(struct device *dev)
{
struct i2c_client *i2c = container_of(dev, struct i2c_client, dev);
struct i2c_client *i2c = to_i2c_client(dev);
struct sm5502_muic_info *info = i2c_get_clientdata(i2c);
disable_irq_wake(info->irq);
......
......@@ -219,6 +219,21 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
}
EXPORT_SYMBOL_GPL(vmbus_open);
/* Used for Hyper-V Socket: a guest client's connect() to the host */
int vmbus_send_tl_connect_request(const uuid_le *shv_guest_servie_id,
const uuid_le *shv_host_servie_id)
{
struct vmbus_channel_tl_connect_request conn_msg;
memset(&conn_msg, 0, sizeof(conn_msg));
conn_msg.header.msgtype = CHANNELMSG_TL_CONNECT_REQUEST;
conn_msg.guest_endpoint_id = *shv_guest_servie_id;
conn_msg.host_service_id = *shv_host_servie_id;
return vmbus_post_msg(&conn_msg, sizeof(conn_msg));
}
EXPORT_SYMBOL_GPL(vmbus_send_tl_connect_request);
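A hypothetical in-kernel caller would use the new export as below; the GUID values are
placeholders, a real hv_sock implementation supplies its own service IDs:
#include <linux/uuid.h>
#include <linux/hyperv.h>
/* placeholder service GUIDs, not real hv_sock IDs */
static const uuid_le guest_srv_id = UUID_LE(0x00000000, 0x0000, 0x0000,
		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00);
static const uuid_le host_srv_id = UUID_LE(0x00000001, 0x0000, 0x0000,
		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00);
static int hvsock_connect_sketch(void)
{
	/* posts a CHANNELMSG_TL_CONNECT_REQUEST message to the host */
	return vmbus_send_tl_connect_request(&guest_srv_id, &host_srv_id);
}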
/*
* create_gpadl_header - Creates a gpadl for the specified buffer
*/
......@@ -624,6 +639,7 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
u64 aligned_data = 0;
int ret;
bool signal = false;
bool lock = channel->acquire_ring_lock;
int num_vecs = ((bufferlen != 0) ? 3 : 1);
......@@ -643,7 +659,7 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, num_vecs,
&signal);
&signal, lock);
/*
* Signalling the host is conditional on many factors:
......@@ -659,6 +675,9 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
* If we cannot write to the ring-buffer; signal the host
* even if we may not have written anything. This is a rare
* enough condition that it should not matter.
* NOTE: in this case, the hvsock channel is an exception, because
* it looks like the host side's hvsock implementation has a throttling
* mechanism which can hurt the performance otherwise.
*/
if (channel->signal_policy)
......@@ -666,7 +685,8 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
else
kick_q = true;
if (((ret == 0) && kick_q && signal) || (ret))
if (((ret == 0) && kick_q && signal) ||
(ret && !is_hvsock_channel(channel)))
vmbus_setevent(channel);
return ret;
......@@ -719,6 +739,7 @@ int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel,
struct kvec bufferlist[3];
u64 aligned_data = 0;
bool signal = false;
bool lock = channel->acquire_ring_lock;
if (pagecount > MAX_PAGE_BUFFER_COUNT)
return -EINVAL;
......@@ -755,7 +776,8 @@ int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel,
bufferlist[2].iov_base = &aligned_data;
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
&signal, lock);
/*
* Signalling the host is conditional on many factors:
......@@ -818,6 +840,7 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
struct kvec bufferlist[3];
u64 aligned_data = 0;
bool signal = false;
bool lock = channel->acquire_ring_lock;
packetlen = desc_size + bufferlen;
packetlen_aligned = ALIGN(packetlen, sizeof(u64));
......@@ -837,7 +860,8 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
bufferlist[2].iov_base = &aligned_data;
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
&signal, lock);
if (ret == 0 && signal)
vmbus_setevent(channel);
......@@ -862,6 +886,7 @@ int vmbus_sendpacket_multipagebuffer(struct vmbus_channel *channel,
struct kvec bufferlist[3];
u64 aligned_data = 0;
bool signal = false;
bool lock = channel->acquire_ring_lock;
u32 pfncount = NUM_PAGES_SPANNED(multi_pagebuffer->offset,
multi_pagebuffer->len);
......@@ -900,7 +925,8 @@ int vmbus_sendpacket_multipagebuffer(struct vmbus_channel *channel,
bufferlist[2].iov_base = &aligned_data;
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
&signal, lock);
if (ret == 0 && signal)
vmbus_setevent(channel);
......
......@@ -28,12 +28,127 @@
#include <linux/list.h>
#include <linux/module.h>
#include <linux/completion.h>
#include <linux/delay.h>
#include <linux/hyperv.h>
#include "hyperv_vmbus.h"
static void init_vp_index(struct vmbus_channel *channel,
const uuid_le *type_guid);
static void init_vp_index(struct vmbus_channel *channel, u16 dev_type);
static const struct vmbus_device vmbus_devs[] = {
/* IDE */
{ .dev_type = HV_IDE,
HV_IDE_GUID,
.perf_device = true,
},
/* SCSI */
{ .dev_type = HV_SCSI,
HV_SCSI_GUID,
.perf_device = true,
},
/* Fibre Channel */
{ .dev_type = HV_FC,
HV_SYNTHFC_GUID,
.perf_device = true,
},
/* Synthetic NIC */
{ .dev_type = HV_NIC,
HV_NIC_GUID,
.perf_device = true,
},
/* Network Direct */
{ .dev_type = HV_ND,
HV_ND_GUID,
.perf_device = true,
},
/* PCIE */
{ .dev_type = HV_PCIE,
HV_PCIE_GUID,
.perf_device = true,
},
/* Synthetic Frame Buffer */
{ .dev_type = HV_FB,
HV_SYNTHVID_GUID,
.perf_device = false,
},
/* Synthetic Keyboard */
{ .dev_type = HV_KBD,
HV_KBD_GUID,
.perf_device = false,
},
/* Synthetic MOUSE */
{ .dev_type = HV_MOUSE,
HV_MOUSE_GUID,
.perf_device = false,
},
/* KVP */
{ .dev_type = HV_KVP,
HV_KVP_GUID,
.perf_device = false,
},
/* Time Synch */
{ .dev_type = HV_TS,
HV_TS_GUID,
.perf_device = false,
},
/* Heartbeat */
{ .dev_type = HV_HB,
HV_HEART_BEAT_GUID,
.perf_device = false,
},
/* Shutdown */
{ .dev_type = HV_SHUTDOWN,
HV_SHUTDOWN_GUID,
.perf_device = false,
},
/* File copy */
{ .dev_type = HV_FCOPY,
HV_FCOPY_GUID,
.perf_device = false,
},
/* Backup */
{ .dev_type = HV_BACKUP,
HV_VSS_GUID,
.perf_device = false,
},
/* Dynamic Memory */
{ .dev_type = HV_DM,
HV_DM_GUID,
.perf_device = false,
},
/* Unknown GUID */
{ .dev_type = HV_UNKOWN,
.perf_device = false,
},
};
static u16 hv_get_dev_type(const uuid_le *guid)
{
u16 i;
for (i = HV_IDE; i < HV_UNKOWN; i++) {
if (!uuid_le_cmp(*guid, vmbus_devs[i].guid))
return i;
}
pr_info("Unknown GUID: %pUl\n", guid);
return i;
}
/**
* vmbus_prep_negotiate_resp() - Create default response for Hyper-V Negotiate message
......@@ -144,6 +259,7 @@ static struct vmbus_channel *alloc_channel(void)
return NULL;
channel->id = atomic_inc_return(&chan_num);
channel->acquire_ring_lock = true;
spin_lock_init(&channel->inbound_lock);
spin_lock_init(&channel->lock);
......@@ -195,6 +311,7 @@ void hv_process_channel_removal(struct vmbus_channel *channel, u32 relid)
vmbus_release_relid(relid);
BUG_ON(!channel->rescind);
BUG_ON(!mutex_is_locked(&vmbus_connection.channel_mutex));
if (channel->target_cpu != get_cpu()) {
put_cpu();
......@@ -206,9 +323,7 @@ void hv_process_channel_removal(struct vmbus_channel *channel, u32 relid)
}
if (channel->primary_channel == NULL) {
mutex_lock(&vmbus_connection.channel_mutex);
list_del(&channel->listentry);
mutex_unlock(&vmbus_connection.channel_mutex);
primary_channel = channel;
} else {
......@@ -251,6 +366,8 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
struct vmbus_channel *channel;
bool fnew = true;
unsigned long flags;
u16 dev_type;
int ret;
/* Make sure this is a new offer */
mutex_lock(&vmbus_connection.channel_mutex);
......@@ -288,7 +405,9 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
goto err_free_chan;
}
init_vp_index(newchannel, &newchannel->offermsg.offer.if_type);
dev_type = hv_get_dev_type(&newchannel->offermsg.offer.if_type);
init_vp_index(newchannel, dev_type);
if (newchannel->target_cpu != get_cpu()) {
put_cpu();
......@@ -325,12 +444,17 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
if (!newchannel->device_obj)
goto err_deq_chan;
newchannel->device_obj->device_id = dev_type;
/*
* Add the new device to the bus. This will kick off device-driver
* binding which eventually invokes the device driver's AddDevice()
* method.
*/
if (vmbus_device_register(newchannel->device_obj) != 0) {
mutex_lock(&vmbus_connection.channel_mutex);
ret = vmbus_device_register(newchannel->device_obj);
mutex_unlock(&vmbus_connection.channel_mutex);
if (ret != 0) {
pr_err("unable to add child device object (relid %d)\n",
newchannel->offermsg.child_relid);
kfree(newchannel->device_obj);
......@@ -358,37 +482,6 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
free_channel(newchannel);
}
enum {
IDE = 0,
SCSI,
FC,
NIC,
ND_NIC,
PCIE,
MAX_PERF_CHN,
};
/*
* This is an array of device_ids (device types) that are performance critical.
* We attempt to distribute the interrupt load for these devices across
* all available CPUs.
*/
static const struct hv_vmbus_device_id hp_devs[] = {
/* IDE */
{ HV_IDE_GUID, },
/* Storage - SCSI */
{ HV_SCSI_GUID, },
/* Storage - FC */
{ HV_SYNTHFC_GUID, },
/* Network */
{ HV_NIC_GUID, },
/* NetworkDirect Guest RDMA */
{ HV_ND_GUID, },
/* PCI Express Pass Through */
{ HV_PCIE_GUID, },
};
/*
* We use this state to statically distribute the channel interrupt load.
*/
......@@ -405,22 +498,15 @@ static int next_numa_node_id;
* For pre-win8 hosts or non-performance critical channels we assign the
* first CPU in the first NUMA node.
*/
static void init_vp_index(struct vmbus_channel *channel, const uuid_le *type_guid)
static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
{
u32 cur_cpu;
int i;
bool perf_chn = false;
bool perf_chn = vmbus_devs[dev_type].perf_device;
struct vmbus_channel *primary = channel->primary_channel;
int next_node;
struct cpumask available_mask;
struct cpumask *alloced_mask;
for (i = IDE; i < MAX_PERF_CHN; i++) {
if (!uuid_le_cmp(*type_guid, hp_devs[i].guid)) {
perf_chn = true;
break;
}
}
if ((vmbus_proto_version == VERSION_WS2008) ||
(vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) {
/*
......@@ -469,6 +555,17 @@ static void init_vp_index(struct vmbus_channel *channel, const uuid_le *type_gui
cpumask_of_node(primary->numa_node));
cur_cpu = -1;
/*
* Normally Hyper-V host doesn't create more subchannels than there
* are VCPUs on the node but it is possible when not all present VCPUs
* on the node are initialized by guest. Clear the alloced_cpus_in_node
* to start over.
*/
if (cpumask_equal(&primary->alloced_cpus_in_node,
cpumask_of_node(primary->numa_node)))
cpumask_clear(&primary->alloced_cpus_in_node);
while (true) {
cur_cpu = cpumask_next(cur_cpu, &available_mask);
if (cur_cpu >= nr_cpu_ids) {
......@@ -498,6 +595,32 @@ static void init_vp_index(struct vmbus_channel *channel, const uuid_le *type_gui
channel->target_vp = hv_context.vp_index[cur_cpu];
}
static void vmbus_wait_for_unload(void)
{
int cpu = smp_processor_id();
void *page_addr = hv_context.synic_message_page[cpu];
struct hv_message *msg = (struct hv_message *)page_addr +
VMBUS_MESSAGE_SINT;
struct vmbus_channel_message_header *hdr;
bool unloaded = false;
while (1) {
if (READ_ONCE(msg->header.message_type) == HVMSG_NONE) {
mdelay(10);
continue;
}
hdr = (struct vmbus_channel_message_header *)msg->u.payload;
if (hdr->msgtype == CHANNELMSG_UNLOAD_RESPONSE)
unloaded = true;
vmbus_signal_eom(msg);
if (unloaded)
break;
}
}
/*
* vmbus_unload_response - Handler for the unload response.
*/
......@@ -510,7 +633,7 @@ static void vmbus_unload_response(struct vmbus_channel_message_header *hdr)
complete(&vmbus_connection.unload_event);
}
void vmbus_initiate_unload(void)
void vmbus_initiate_unload(bool crash)
{
struct vmbus_channel_message_header hdr;
......@@ -523,7 +646,14 @@ void vmbus_initiate_unload(void)
hdr.msgtype = CHANNELMSG_UNLOAD;
vmbus_post_msg(&hdr, sizeof(struct vmbus_channel_message_header));
wait_for_completion(&vmbus_connection.unload_event);
/*
* vmbus_initiate_unload() is also called on crash and the crash can be
* happening in an interrupt context, where scheduling is impossible.
*/
if (!crash)
wait_for_completion(&vmbus_connection.unload_event);
else
vmbus_wait_for_unload();
}
/*
......@@ -592,6 +722,8 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
struct device *dev;
rescind = (struct vmbus_channel_rescind_offer *)hdr;
mutex_lock(&vmbus_connection.channel_mutex);
channel = relid2channel(rescind->child_relid);
if (channel == NULL) {
......@@ -600,7 +732,7 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
* vmbus_process_offer(), we have already invoked
* vmbus_release_relid() on error.
*/
return;
goto out;
}
spin_lock_irqsave(&channel->lock, flags);
......@@ -608,6 +740,10 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
spin_unlock_irqrestore(&channel->lock, flags);
if (channel->device_obj) {
if (channel->chn_rescind_callback) {
channel->chn_rescind_callback(channel);
goto out;
}
/*
* We will have to unregister this device from the
* driver core.
......@@ -621,7 +757,24 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
hv_process_channel_removal(channel,
channel->offermsg.child_relid);
}
out:
mutex_unlock(&vmbus_connection.channel_mutex);
}
void vmbus_hvsock_device_unregister(struct vmbus_channel *channel)
{
mutex_lock(&vmbus_connection.channel_mutex);
BUG_ON(!is_hvsock_channel(channel));
channel->rescind = true;
vmbus_device_unregister(channel->device_obj);
mutex_unlock(&vmbus_connection.channel_mutex);
}
EXPORT_SYMBOL_GPL(vmbus_hvsock_device_unregister);
/*
* vmbus_onoffers_delivered -
......@@ -825,6 +978,10 @@ struct vmbus_channel_message_table_entry
{CHANNELMSG_VERSION_RESPONSE, 1, vmbus_onversion_response},
{CHANNELMSG_UNLOAD, 0, NULL},
{CHANNELMSG_UNLOAD_RESPONSE, 1, vmbus_unload_response},
{CHANNELMSG_18, 0, NULL},
{CHANNELMSG_19, 0, NULL},
{CHANNELMSG_20, 0, NULL},
{CHANNELMSG_TL_CONNECT_REQUEST, 0, NULL},
};
/*
......@@ -973,3 +1130,10 @@ bool vmbus_are_subchannels_present(struct vmbus_channel *primary)
return ret;
}
EXPORT_SYMBOL_GPL(vmbus_are_subchannels_present);
void vmbus_set_chn_rescind_callback(struct vmbus_channel *channel,
void (*chn_rescind_cb)(struct vmbus_channel *))
{
channel->chn_rescind_callback = chn_rescind_cb;
}
EXPORT_SYMBOL_GPL(vmbus_set_chn_rescind_callback);
......@@ -88,8 +88,16 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
* This has been the behavior pre-win8. This is not
* perf issue and having all channel messages delivered on CPU 0
* would be ok.
* For post win8 hosts, we support receiving channel messages on
* all the CPUs. This is needed for kexec to work correctly where
* the CPU attempting to connect may not be CPU 0.
*/
msg->target_vcpu = 0;
if (version >= VERSION_WIN8_1) {
msg->target_vcpu = hv_context.vp_index[get_cpu()];
put_cpu();
} else {
msg->target_vcpu = 0;
}
/*
* Add to list before we send the request since we may
......@@ -236,7 +244,7 @@ void vmbus_disconnect(void)
/*
* First send the unload request to the host.
*/
vmbus_initiate_unload();
vmbus_initiate_unload(false);
if (vmbus_connection.work_queue) {
drain_workqueue(vmbus_connection.work_queue);
......@@ -288,7 +296,8 @@ struct vmbus_channel *relid2channel(u32 relid)
struct list_head *cur, *tmp;
struct vmbus_channel *cur_sc;
mutex_lock(&vmbus_connection.channel_mutex);
BUG_ON(!mutex_is_locked(&vmbus_connection.channel_mutex));
list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
if (channel->offermsg.child_relid == relid) {
found_channel = channel;
......@@ -307,7 +316,6 @@ struct vmbus_channel *relid2channel(u32 relid)
}
}
}
mutex_unlock(&vmbus_connection.channel_mutex);
return found_channel;
}
......@@ -474,7 +482,7 @@ int vmbus_post_msg(void *buffer, size_t buflen)
/*
* vmbus_set_event - Send an event notification to the parent
*/
int vmbus_set_event(struct vmbus_channel *channel)
void vmbus_set_event(struct vmbus_channel *channel)
{
u32 child_relid = channel->offermsg.child_relid;
......@@ -485,5 +493,5 @@ int vmbus_set_event(struct vmbus_channel *channel)
(child_relid >> 5));
}
return hv_signal_event(channel->sig_event);
hv_do_hypercall(HVCALL_SIGNAL_EVENT, channel->sig_event, NULL);
}
......@@ -204,6 +204,8 @@ int hv_init(void)
sizeof(int) * NR_CPUS);
memset(hv_context.event_dpc, 0,
sizeof(void *) * NR_CPUS);
memset(hv_context.msg_dpc, 0,
sizeof(void *) * NR_CPUS);
memset(hv_context.clk_evt, 0,
sizeof(void *) * NR_CPUS);
......@@ -295,8 +297,14 @@ void hv_cleanup(void)
* Cleanup the TSC page based CS.
*/
if (ms_hyperv.features & HV_X64_MSR_REFERENCE_TSC_AVAILABLE) {
clocksource_change_rating(&hyperv_cs_tsc, 10);
clocksource_unregister(&hyperv_cs_tsc);
/*
* Crash can happen in an interrupt context and unregistering
* a clocksource is impossible and redundant in this case.
*/
if (!oops_in_progress) {
clocksource_change_rating(&hyperv_cs_tsc, 10);
clocksource_unregister(&hyperv_cs_tsc);
}
hypercall_msr.as_uint64 = 0;
wrmsrl(HV_X64_MSR_REFERENCE_TSC, hypercall_msr.as_uint64);
......@@ -337,22 +345,6 @@ int hv_post_message(union hv_connection_id connection_id,
return status & 0xFFFF;
}
/*
* hv_signal_event -
* Signal an event on the specified connection using the hypervisor event IPC.
*
* This involves a hypercall.
*/
int hv_signal_event(void *con_id)
{
u64 status;
status = hv_do_hypercall(HVCALL_SIGNAL_EVENT, con_id, NULL);
return status & 0xFFFF;
}
static int hv_ce_set_next_event(unsigned long delta,
struct clock_event_device *evt)
{
......@@ -425,6 +417,13 @@ int hv_synic_alloc(void)
}
tasklet_init(hv_context.event_dpc[cpu], vmbus_on_event, cpu);
hv_context.msg_dpc[cpu] = kmalloc(size, GFP_ATOMIC);
if (hv_context.msg_dpc[cpu] == NULL) {
pr_err("Unable to allocate event dpc\n");
goto err;
}
tasklet_init(hv_context.msg_dpc[cpu], vmbus_on_msg_dpc, cpu);
hv_context.clk_evt[cpu] = kzalloc(ced_size, GFP_ATOMIC);
if (hv_context.clk_evt[cpu] == NULL) {
pr_err("Unable to allocate clock event device\n");
......@@ -466,6 +465,7 @@ int hv_synic_alloc(void)
static void hv_synic_free_cpu(int cpu)
{
kfree(hv_context.event_dpc[cpu]);
kfree(hv_context.msg_dpc[cpu]);
kfree(hv_context.clk_evt[cpu]);
if (hv_context.synic_event_page[cpu])
free_page((unsigned long)hv_context.synic_event_page[cpu]);
......
......@@ -251,7 +251,6 @@ void hv_fcopy_onchannelcallback(void *context)
*/
fcopy_transaction.recv_len = recvlen;
fcopy_transaction.recv_channel = channel;
fcopy_transaction.recv_req_id = requestid;
fcopy_transaction.fcopy_msg = fcopy_msg;
......@@ -317,6 +316,7 @@ static void fcopy_on_reset(void)
int hv_fcopy_init(struct hv_util_service *srv)
{
recv_buffer = srv->recv_buffer;
fcopy_transaction.recv_channel = srv->channel;
/*
* When this driver loads, the user level daemon that
......
......@@ -639,7 +639,6 @@ void hv_kvp_onchannelcallback(void *context)
*/
kvp_transaction.recv_len = recvlen;
kvp_transaction.recv_channel = channel;
kvp_transaction.recv_req_id = requestid;
kvp_transaction.kvp_msg = kvp_msg;
......@@ -688,6 +687,7 @@ int
hv_kvp_init(struct hv_util_service *srv)
{
recv_buffer = srv->recv_buffer;
kvp_transaction.recv_channel = srv->channel;
/*
* When this driver loads, the user level daemon that
......
......@@ -263,7 +263,6 @@ void hv_vss_onchannelcallback(void *context)
*/
vss_transaction.recv_len = recvlen;
vss_transaction.recv_channel = channel;
vss_transaction.recv_req_id = requestid;
vss_transaction.msg = (struct hv_vss_msg *)vss_msg;
......@@ -337,6 +336,7 @@ hv_vss_init(struct hv_util_service *srv)
return -ENOTSUPP;
}
recv_buffer = srv->recv_buffer;
vss_transaction.recv_channel = srv->channel;
/*
* When this driver loads, the user level daemon that
......
......@@ -322,6 +322,7 @@ static int util_probe(struct hv_device *dev,
srv->recv_buffer = kmalloc(PAGE_SIZE * 4, GFP_KERNEL);
if (!srv->recv_buffer)
return -ENOMEM;
srv->channel = dev->channel;
if (srv->util_init) {
ret = srv->util_init(srv);
if (ret) {
......
......@@ -310,6 +310,9 @@ struct hvutil_transport *hvutil_transport_init(const char *name,
return hvt;
err_free_hvt:
spin_lock(&hvt_list_lock);
list_del(&hvt->list);
spin_unlock(&hvt_list_lock);
kfree(hvt);
return NULL;
}
......
......@@ -443,10 +443,11 @@ struct hv_context {
u32 vp_index[NR_CPUS];
/*
* Starting with win8, we can take channel interrupts on any CPU;
* we will manage the tasklet that handles events on a per CPU
* we will manage the tasklet that handles events messages on a per CPU
* basis.
*/
struct tasklet_struct *event_dpc[NR_CPUS];
struct tasklet_struct *msg_dpc[NR_CPUS];
/*
* To optimize the mapping of relid to channel, maintain
* per-cpu list of the channels based on their CPU affinity.
......@@ -495,8 +496,6 @@ extern int hv_post_message(union hv_connection_id connection_id,
enum hv_message_type message_type,
void *payload, size_t payload_size);
extern int hv_signal_event(void *con_id);
extern int hv_synic_alloc(void);
extern void hv_synic_free(void);
......@@ -525,7 +524,7 @@ void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info);
int hv_ringbuffer_write(struct hv_ring_buffer_info *ring_info,
struct kvec *kv_list,
u32 kv_count, bool *signal);
u32 kv_count, bool *signal, bool lock);
int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
void *buffer, u32 buflen, u32 *buffer_actual_len,
......@@ -620,6 +619,30 @@ struct vmbus_channel_message_table_entry {
extern struct vmbus_channel_message_table_entry
channel_message_table[CHANNELMSG_COUNT];
/* Free the message slot and signal end-of-message if required */
static inline void vmbus_signal_eom(struct hv_message *msg)
{
msg->header.message_type = HVMSG_NONE;
/*
* Make sure the write to MessageType (ie set to
* HVMSG_NONE) happens before we read the
* MessagePending and EOMing. Otherwise, the EOMing
* will not deliver any more messages since there is
* no empty slot
*/
mb();
if (msg->header.message_flags.msg_pending) {
/*
* This will cause message queue rescan to
* possibly deliver another msg from the
* hypervisor
*/
wrmsrl(HV_X64_MSR_EOM, 0);
}
}
/* General vmbus interface */
struct hv_device *vmbus_device_create(const uuid_le *type,
......@@ -644,9 +667,10 @@ void vmbus_disconnect(void);
int vmbus_post_msg(void *buffer, size_t buflen);
int vmbus_set_event(struct vmbus_channel *channel);
void vmbus_set_event(struct vmbus_channel *channel);
void vmbus_on_event(unsigned long data);
void vmbus_on_msg_dpc(unsigned long data);
int hv_kvp_init(struct hv_util_service *);
void hv_kvp_deinit(void);
......@@ -659,7 +683,7 @@ void hv_vss_onchannelcallback(void *);
int hv_fcopy_init(struct hv_util_service *);
void hv_fcopy_deinit(void);
void hv_fcopy_onchannelcallback(void *);
void vmbus_initiate_unload(void);
void vmbus_initiate_unload(bool crash);
static inline void hv_poll_channel(struct vmbus_channel *channel,
void (*cb)(void *))
......
......@@ -314,7 +314,7 @@ void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info)
/* Write to the ring buffer. */
int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
struct kvec *kv_list, u32 kv_count, bool *signal)
struct kvec *kv_list, u32 kv_count, bool *signal, bool lock)
{
int i = 0;
u32 bytes_avail_towrite;
......@@ -324,14 +324,15 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
u32 next_write_location;
u32 old_write;
u64 prev_indices = 0;
unsigned long flags;
unsigned long flags = 0;
for (i = 0; i < kv_count; i++)
totalbytes_towrite += kv_list[i].iov_len;
totalbytes_towrite += sizeof(u64);
spin_lock_irqsave(&outring_info->ring_lock, flags);
if (lock)
spin_lock_irqsave(&outring_info->ring_lock, flags);
hv_get_ringbuffer_availbytes(outring_info,
&bytes_avail_toread,
......@@ -343,7 +344,8 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
* is empty since the read index == write index.
*/
if (bytes_avail_towrite <= totalbytes_towrite) {
spin_unlock_irqrestore(&outring_info->ring_lock, flags);
if (lock)
spin_unlock_irqrestore(&outring_info->ring_lock, flags);
return -EAGAIN;
}
......@@ -374,7 +376,8 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
hv_set_next_write_location(outring_info, next_write_location);
spin_unlock_irqrestore(&outring_info->ring_lock, flags);
if (lock)
spin_unlock_irqrestore(&outring_info->ring_lock, flags);
*signal = hv_need_to_signal(old_write, outring_info);
return 0;
......@@ -388,7 +391,6 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
u32 bytes_avail_toread;
u32 next_read_location = 0;
u64 prev_indices = 0;
unsigned long flags;
struct vmpacket_descriptor desc;
u32 offset;
u32 packetlen;
......@@ -397,7 +399,6 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
if (buflen <= 0)
return -EINVAL;
spin_lock_irqsave(&inring_info->ring_lock, flags);
*buffer_actual_len = 0;
*requestid = 0;
......@@ -412,7 +413,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
* No error is set when there is even no header, drivers are
* supposed to analyze buffer_actual_len.
*/
goto out_unlock;
return ret;
}
next_read_location = hv_get_next_read_location(inring_info);
......@@ -425,15 +426,11 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
*buffer_actual_len = packetlen;
*requestid = desc.trans_id;
if (bytes_avail_toread < packetlen + offset) {
ret = -EAGAIN;
goto out_unlock;
}
if (bytes_avail_toread < packetlen + offset)
return -EAGAIN;
if (packetlen > buflen) {
ret = -ENOBUFS;
goto out_unlock;
}
if (packetlen > buflen)
return -ENOBUFS;
next_read_location =
hv_get_next_readlocation_withoffset(inring_info, offset);
......@@ -460,7 +457,5 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
*signal = hv_need_to_signal_on_read(bytes_avail_towrite, inring_info);
out_unlock:
spin_unlock_irqrestore(&inring_info->ring_lock, flags);
return ret;
}
......@@ -45,7 +45,6 @@
static struct acpi_device *hv_acpi_dev;
static struct tasklet_struct msg_dpc;
static struct completion probe_event;
......@@ -477,6 +476,24 @@ static ssize_t channel_vp_mapping_show(struct device *dev,
}
static DEVICE_ATTR_RO(channel_vp_mapping);
static ssize_t vendor_show(struct device *dev,
struct device_attribute *dev_attr,
char *buf)
{
struct hv_device *hv_dev = device_to_hv_device(dev);
return sprintf(buf, "0x%x\n", hv_dev->vendor_id);
}
static DEVICE_ATTR_RO(vendor);
static ssize_t device_show(struct device *dev,
struct device_attribute *dev_attr,
char *buf)
{
struct hv_device *hv_dev = device_to_hv_device(dev);
return sprintf(buf, "0x%x\n", hv_dev->device_id);
}
static DEVICE_ATTR_RO(device);
/* Set up per device attributes in /sys/bus/vmbus/devices/<bus device> */
static struct attribute *vmbus_attrs[] = {
&dev_attr_id.attr,
......@@ -502,6 +519,8 @@ static struct attribute *vmbus_attrs[] = {
&dev_attr_in_read_bytes_avail.attr,
&dev_attr_in_write_bytes_avail.attr,
&dev_attr_channel_vp_mapping.attr,
&dev_attr_vendor.attr,
&dev_attr_device.attr,
NULL,
};
ATTRIBUTE_GROUPS(vmbus);
......@@ -562,6 +581,10 @@ static int vmbus_match(struct device *device, struct device_driver *driver)
struct hv_driver *drv = drv_to_hv_drv(driver);
struct hv_device *hv_dev = device_to_hv_device(device);
/* The hv_sock driver handles all hv_sock offers. */
if (is_hvsock_channel(hv_dev->channel))
return drv->hvsock;
if (hv_vmbus_get_id(drv->id_table, &hv_dev->dev_type))
return 1;
......@@ -685,28 +708,10 @@ static void hv_process_timer_expiration(struct hv_message *msg, int cpu)
if (dev->event_handler)
dev->event_handler(dev);
msg->header.message_type = HVMSG_NONE;
/*
* Make sure the write to MessageType (ie set to
* HVMSG_NONE) happens before we read the
* MessagePending and EOMing. Otherwise, the EOMing
* will not deliver any more messages since there is
* no empty slot
*/
mb();
if (msg->header.message_flags.msg_pending) {
/*
* This will cause message queue rescan to
* possibly deliver another msg from the
* hypervisor
*/
wrmsrl(HV_X64_MSR_EOM, 0);
}
vmbus_signal_eom(msg);
}
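The open-coded EOM sequence (set HVMSG_NONE, barrier, conditionally write HV_X64_MSR_EOM) that used to be duplicated here and in vmbus_on_msg_dpc() is now the single helper vmbus_signal_eom(). Reconstructed from the removed lines, the helper plausibly reads as follows; the exact kernel definition may differ in placement and detail:

static inline void vmbus_signal_eom(struct hv_message *msg)
{
	msg->header.message_type = HVMSG_NONE;

	/*
	 * Make sure the write to MessageType (i.e. set to
	 * HVMSG_NONE) happens before we read MessagePending and
	 * EOM. Otherwise, the EOM will not deliver any more
	 * messages since there is no empty slot.
	 */
	mb();

	if (msg->header.message_flags.msg_pending) {
		/*
		 * This will cause message queue rescan to
		 * possibly deliver another msg from the
		 * hypervisor.
		 */
		wrmsrl(HV_X64_MSR_EOM, 0);
	}
}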
static void vmbus_on_msg_dpc(unsigned long data)
void vmbus_on_msg_dpc(unsigned long data)
{
int cpu = smp_processor_id();
void *page_addr = hv_context.synic_message_page[cpu];
......@@ -716,52 +721,32 @@ static void vmbus_on_msg_dpc(unsigned long data)
struct vmbus_channel_message_table_entry *entry;
struct onmessage_work_context *ctx;
while (1) {
if (msg->header.message_type == HVMSG_NONE)
/* no msg */
break;
if (msg->header.message_type == HVMSG_NONE)
/* no msg */
return;
hdr = (struct vmbus_channel_message_header *)msg->u.payload;
hdr = (struct vmbus_channel_message_header *)msg->u.payload;
if (hdr->msgtype >= CHANNELMSG_COUNT) {
WARN_ONCE(1, "unknown msgtype=%d\n", hdr->msgtype);
goto msg_handled;
}
if (hdr->msgtype >= CHANNELMSG_COUNT) {
WARN_ONCE(1, "unknown msgtype=%d\n", hdr->msgtype);
goto msg_handled;
}
entry = &channel_message_table[hdr->msgtype];
if (entry->handler_type == VMHT_BLOCKING) {
ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
if (ctx == NULL)
continue;
entry = &channel_message_table[hdr->msgtype];
if (entry->handler_type == VMHT_BLOCKING) {
ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
if (ctx == NULL)
return;
INIT_WORK(&ctx->work, vmbus_onmessage_work);
memcpy(&ctx->msg, msg, sizeof(*msg));
INIT_WORK(&ctx->work, vmbus_onmessage_work);
memcpy(&ctx->msg, msg, sizeof(*msg));
queue_work(vmbus_connection.work_queue, &ctx->work);
} else
entry->message_handler(hdr);
queue_work(vmbus_connection.work_queue, &ctx->work);
} else
entry->message_handler(hdr);
msg_handled:
msg->header.message_type = HVMSG_NONE;
/*
* Make sure the write to MessageType (ie set to
* HVMSG_NONE) happens before we read the
* MessagePending and EOMing. Otherwise, the EOMing
* will not deliver any more messages since there is
* no empty slot
*/
mb();
if (msg->header.message_flags.msg_pending) {
/*
* This will cause message queue rescan to
* possibly deliver another msg from the
* hypervisor
*/
wrmsrl(HV_X64_MSR_EOM, 0);
}
}
vmbus_signal_eom(msg);
}
static void vmbus_isr(void)
......@@ -814,7 +799,7 @@ static void vmbus_isr(void)
if (msg->header.message_type == HVMSG_TIMER_EXPIRED)
hv_process_timer_expiration(msg, cpu);
else
tasklet_schedule(&msg_dpc);
tasklet_schedule(hv_context.msg_dpc[cpu]);
}
}
......@@ -838,8 +823,6 @@ static int vmbus_bus_init(void)
return ret;
}
tasklet_init(&msg_dpc, vmbus_on_msg_dpc, 0);
ret = bus_register(&hv_bus);
if (ret)
goto err_cleanup;
......@@ -957,6 +940,7 @@ struct hv_device *vmbus_device_create(const uuid_le *type,
memcpy(&child_device_obj->dev_type, type, sizeof(uuid_le));
memcpy(&child_device_obj->dev_instance, instance,
sizeof(uuid_le));
child_device_obj->vendor_id = 0x1414; /* MSFT vendor ID */
return child_device_obj;
......@@ -1268,7 +1252,7 @@ static void hv_kexec_handler(void)
int cpu;
hv_synic_clockevents_cleanup();
vmbus_initiate_unload();
vmbus_initiate_unload(false);
for_each_online_cpu(cpu)
smp_call_function_single(cpu, hv_synic_cleanup, NULL, 1);
hv_cleanup();
......@@ -1276,7 +1260,7 @@ static void hv_kexec_handler(void)
static void hv_crash_handler(struct pt_regs *regs)
{
vmbus_initiate_unload();
vmbus_initiate_unload(true);
/*
* In crash handler we can't schedule synic cleanup for all CPUs,
* doing the cleanup for current CPU only. This should be sufficient
......@@ -1334,7 +1318,8 @@ static void __exit vmbus_exit(void)
hv_synic_clockevents_cleanup();
vmbus_disconnect();
hv_remove_vmbus_irq();
tasklet_kill(&msg_dpc);
for_each_online_cpu(cpu)
tasklet_kill(hv_context.msg_dpc[cpu]);
vmbus_free_channels();
if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
unregister_die_notifier(&hyperv_die_block);
......
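The single global msg_dpc tasklet is gone; vmbus_exit() now kills one tasklet per online CPU via hv_context.msg_dpc[cpu]. The allocation side is not part of this hunk; a sketch of what the per-CPU setup presumably looks like (hv_alloc_msg_tasklets is a hypothetical name, and the real code lives in hv.c):

static int hv_alloc_msg_tasklets(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		hv_context.msg_dpc[cpu] =
			kmalloc(sizeof(struct tasklet_struct), GFP_KERNEL);
		if (!hv_context.msg_dpc[cpu])
			return -ENOMEM;
		/* every CPU runs the same handler, vmbus_on_msg_dpc() */
		tasklet_init(hv_context.msg_dpc[cpu], vmbus_on_msg_dpc, cpu);
	}
	return 0;
}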
......@@ -4,6 +4,7 @@
menuconfig CORESIGHT
bool "CoreSight Tracing Support"
select ARM_AMBA
select PERF_EVENTS
help
This framework provides a kernel interface for the CoreSight debug
and trace drivers to register themselves with. It's intended to build
......
......@@ -8,6 +8,8 @@ obj-$(CONFIG_CORESIGHT_SINK_TPIU) += coresight-tpiu.o
obj-$(CONFIG_CORESIGHT_SINK_ETBV10) += coresight-etb10.o
obj-$(CONFIG_CORESIGHT_LINKS_AND_SINKS) += coresight-funnel.o \
coresight-replicator.o
obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o
obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o \
coresight-etm3x-sysfs.o \
coresight-etm-perf.o
obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o
obj-$(CONFIG_CORESIGHT_QCOM_REPLICATOR) += coresight-replicator-qcom.o
/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
*
* Description: CoreSight Embedded Trace Buffer driver
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
......@@ -10,8 +12,8 @@
* GNU General Public License for more details.
*/
#include <asm/local.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/device.h>
......@@ -27,6 +29,11 @@
#include <linux/coresight.h>
#include <linux/amba/bus.h>
#include <linux/clk.h>
#include <linux/circ_buf.h>
#include <linux/mm.h>
#include <linux/perf_event.h>
#include <asm/local.h>
#include "coresight-priv.h"
......@@ -63,6 +70,26 @@
#define ETB_FFSR_BIT 1
#define ETB_FRAME_SIZE_WORDS 4
/**
* struct cs_buffers - keep track of a recording session's specifics
* @cur: index of the current buffer
* @nr_pages: max number of pages granted to us
* @offset: offset within the current buffer
* @data_size: how much we collected in this run
* @lost: non-zero if we had a HW buffer wrap around
* @snapshot: is this run in snapshot mode
* @data_pages: a handle to the ring buffer pages
*/
struct cs_buffers {
unsigned int cur;
unsigned int nr_pages;
unsigned long offset;
local_t data_size;
local_t lost;
bool snapshot;
void **data_pages;
};
/**
* struct etb_drvdata - specifics associated to an ETB component
* @base: memory mapped base address for this component.
......@@ -71,10 +98,10 @@
* @csdev: component vitals needed by the framework.
* @miscdev: specifics to handle "/dev/xyz.etb" entry.
* @spinlock: only one at a time pls.
* @in_use: synchronise user space access to etb buffer.
* @reading: synchronise user space access to etb buffer.
* @mode: how this ETB is being used: perf, sysFS or disabled.
* @buf: area of memory where ETB buffer content gets sent.
* @buffer_depth: size of @buf.
* @enable: this ETB is being used.
* @trigger_cntr: amount of words to store after a trigger.
*/
struct etb_drvdata {
......@@ -84,10 +111,10 @@ struct etb_drvdata {
struct coresight_device *csdev;
struct miscdevice miscdev;
spinlock_t spinlock;
atomic_t in_use;
local_t reading;
local_t mode;
u8 *buf;
u32 buffer_depth;
bool enable;
u32 trigger_cntr;
};
......@@ -132,18 +159,31 @@ static void etb_enable_hw(struct etb_drvdata *drvdata)
CS_LOCK(drvdata->base);
}
static int etb_enable(struct coresight_device *csdev)
static int etb_enable(struct coresight_device *csdev, u32 mode)
{
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
u32 val;
unsigned long flags;
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
pm_runtime_get_sync(drvdata->dev);
val = local_cmpxchg(&drvdata->mode,
CS_MODE_DISABLED, mode);
/*
* When accessing from Perf, a HW buffer can be handled
* by a single trace entity. In sysFS mode many tracers
* can be logging to the same HW buffer.
*/
if (val == CS_MODE_PERF)
return -EBUSY;
/* Nothing to do, the tracer is already enabled. */
if (val == CS_MODE_SYSFS)
goto out;
spin_lock_irqsave(&drvdata->spinlock, flags);
etb_enable_hw(drvdata);
drvdata->enable = true;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
out:
dev_info(drvdata->dev, "ETB enabled\n");
return 0;
}
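etb_enable() arbitrates between perf and sysFS users with a single compare-and-swap on ->mode: only a transition from CS_MODE_DISABLED claims the device, a second perf user gets -EBUSY, and concurrent sysFS users share the hardware. A compact user-space model of that gate, using C11 atomics in place of local_t (the enum values here are illustrative, not the kernel's):

#include <stdatomic.h>
#include <stdio.h>

enum { CS_MODE_DISABLED, CS_MODE_SYSFS, CS_MODE_PERF };

static atomic_int mode = CS_MODE_DISABLED;

static int model_enable(int new_mode)
{
	int old = CS_MODE_DISABLED;

	/* mirrors: val = local_cmpxchg(&drvdata->mode, DISABLED, mode) */
	atomic_compare_exchange_strong(&mode, &old, new_mode);

	if (old == CS_MODE_PERF)
		return -16;	/* -EBUSY: a perf session owns the buffer */
	if (old == CS_MODE_SYSFS)
		return 0;	/* already enabled, nothing to do */
	return 0;		/* we claimed it: program the HW */
}

int main(void)
{
	printf("first (perf):   %d\n", model_enable(CS_MODE_PERF));  /* 0 */
	printf("second (sysfs): %d\n", model_enable(CS_MODE_SYSFS)); /* -16 */
	return 0;
}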
......@@ -244,17 +284,225 @@ static void etb_disable(struct coresight_device *csdev)
spin_lock_irqsave(&drvdata->spinlock, flags);
etb_disable_hw(drvdata);
etb_dump_hw(drvdata);
drvdata->enable = false;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
pm_runtime_put(drvdata->dev);
local_set(&drvdata->mode, CS_MODE_DISABLED);
dev_info(drvdata->dev, "ETB disabled\n");
}
static void *etb_alloc_buffer(struct coresight_device *csdev, int cpu,
void **pages, int nr_pages, bool overwrite)
{
int node;
struct cs_buffers *buf;
if (cpu == -1)
cpu = smp_processor_id();
node = cpu_to_node(cpu);
buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node);
if (!buf)
return NULL;
buf->snapshot = overwrite;
buf->nr_pages = nr_pages;
buf->data_pages = pages;
return buf;
}
static void etb_free_buffer(void *config)
{
struct cs_buffers *buf = config;
kfree(buf);
}
static int etb_set_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle,
void *sink_config)
{
int ret = 0;
unsigned long head;
struct cs_buffers *buf = sink_config;
/* wrap head around to the amount of space we have */
head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1);
/* find the page to write to */
buf->cur = head / PAGE_SIZE;
/* and offset within that page */
buf->offset = head % PAGE_SIZE;
local_set(&buf->data_size, 0);
return ret;
}
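etb_set_buffer() turns the free-running AUX head into a page index plus an offset; the masking only works because nr_pages is a power of two, which the perf AUX ring buffer guarantees. A stand-alone check of the arithmetic, assuming 4K pages:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

int main(void)
{
	unsigned long nr_pages = 8;	/* 32K of AUX space */
	unsigned long head = 0x9400;	/* head has run past 32K */

	/* wrap head around to the amount of space we have */
	head &= (nr_pages << PAGE_SHIFT) - 1;	/* 0x1400 */
	printf("cur=%lu offset=%lu\n",
	       head / PAGE_SIZE, head % PAGE_SIZE);	/* cur=1 offset=1024 */
	return 0;
}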
static unsigned long etb_reset_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle,
void *sink_config, bool *lost)
{
unsigned long size = 0;
struct cs_buffers *buf = sink_config;
if (buf) {
/*
* In snapshot mode ->data_size holds the new address of the
* ring buffer's head. The size itself is the whole address
* range since we want the latest information.
*/
if (buf->snapshot)
handle->head = local_xchg(&buf->data_size,
buf->nr_pages << PAGE_SHIFT);
/*
* Tell the tracer PMU how much we got in this run and if
* something went wrong along the way. Nobody else can use
* this cs_buffers instance until we are done. As such
* resetting parameters here and squaring off with the ring
* buffer API in the tracer PMU is fine.
*/
*lost = !!local_xchg(&buf->lost, 0);
size = local_xchg(&buf->data_size, 0);
}
return size;
}
static void etb_update_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle,
void *sink_config)
{
int i, cur;
u8 *buf_ptr;
u32 read_ptr, write_ptr, capacity;
u32 status, read_data, to_read;
unsigned long offset;
struct cs_buffers *buf = sink_config;
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
if (!buf)
return;
capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS;
CS_UNLOCK(drvdata->base);
etb_disable_hw(drvdata);
/* unit is in words, not bytes */
read_ptr = readl_relaxed(drvdata->base + ETB_RAM_READ_POINTER);
write_ptr = readl_relaxed(drvdata->base + ETB_RAM_WRITE_POINTER);
/*
* Entries should be aligned to the frame size. If they are not,
* go back to the last alignment point to give decoding tools a
* chance to fix things.
*/
if (write_ptr % ETB_FRAME_SIZE_WORDS) {
dev_err(drvdata->dev,
"write_ptr: %lu not aligned to formatter frame size\n",
(unsigned long)write_ptr);
write_ptr &= ~(ETB_FRAME_SIZE_WORDS - 1);
local_inc(&buf->lost);
}
/*
* Get a hold of the status register and see if a wrap around
* has occurred. If so adjust things accordingly. Otherwise
* start at the beginning and go until the write pointer has
* been reached.
*/
status = readl_relaxed(drvdata->base + ETB_STATUS_REG);
if (status & ETB_STATUS_RAM_FULL) {
local_inc(&buf->lost);
to_read = capacity;
read_ptr = write_ptr;
} else {
to_read = CIRC_CNT(write_ptr, read_ptr, drvdata->buffer_depth);
to_read *= ETB_FRAME_SIZE_WORDS;
}
/*
* Make sure we don't overwrite data that hasn't been consumed yet.
* It is entirely possible that the HW buffer has more data than the
* ring buffer can currently handle. If so adjust the start address
* to take only the last traces.
*
* In snapshot mode we are looking to get the latest traces only and as
* such, we don't care about not overwriting data that hasn't been
* processed by user space.
*/
if (!buf->snapshot && to_read > handle->size) {
u32 mask = ~(ETB_FRAME_SIZE_WORDS - 1);
/* The new read pointer must be frame size aligned */
to_read = handle->size & mask;
/*
* Move the RAM read pointer up, keeping in mind that
* everything is in frame size units.
*/
read_ptr = (write_ptr + drvdata->buffer_depth) -
to_read / ETB_FRAME_SIZE_WORDS;
/* Wrap around if need be */
if (read_ptr > (drvdata->buffer_depth - 1))
	read_ptr -= drvdata->buffer_depth;
/* let the decoder know we've skipped ahead */
local_inc(&buf->lost);
}
/* finally tell HW where we want to start reading from */
writel_relaxed(read_ptr, drvdata->base + ETB_RAM_READ_POINTER);
cur = buf->cur;
offset = buf->offset;
for (i = 0; i < to_read; i += 4) {
buf_ptr = buf->data_pages[cur] + offset;
read_data = readl_relaxed(drvdata->base +
ETB_RAM_READ_DATA_REG);
*buf_ptr++ = read_data >> 0;
*buf_ptr++ = read_data >> 8;
*buf_ptr++ = read_data >> 16;
*buf_ptr++ = read_data >> 24;
offset += 4;
if (offset >= PAGE_SIZE) {
offset = 0;
cur++;
/* wrap around at the end of the buffer */
cur &= buf->nr_pages - 1;
}
}
/* reset ETB buffer for next run */
writel_relaxed(0x0, drvdata->base + ETB_RAM_READ_POINTER);
writel_relaxed(0x0, drvdata->base + ETB_RAM_WRITE_POINTER);
/*
* In snapshot mode all we have to do is communicate to
* perf_aux_output_end() the address of the current head. In full
* trace mode the same function expects a size to move rb->aux_head
* forward.
*/
if (buf->snapshot)
local_set(&buf->data_size, (cur * PAGE_SIZE) + offset);
else
local_add(to_read, &buf->data_size);
etb_enable_hw(drvdata);
CS_LOCK(drvdata->base);
}
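When the ETB has not wrapped, the pending amount comes from CIRC_CNT(), which computes the head/tail distance modulo a power-of-two size. A quick stand-alone check of that arithmetic, with the macro copied from include/linux/circ_buf.h:

#include <stdio.h>

#define CIRC_CNT(head, tail, size) (((head) - (tail)) & ((size) - 1))

int main(void)
{
	unsigned int depth = 2048;			/* words */
	unsigned int read_ptr = 2000, write_ptr = 100;	/* writer wrapped */

	/* (100 - 2000) mod 2048 = 148 words still to be drained */
	printf("pending: %u words\n",
	       CIRC_CNT(write_ptr, read_ptr, depth));
	return 0;
}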
static const struct coresight_ops_sink etb_sink_ops = {
.enable = etb_enable,
.disable = etb_disable,
.alloc_buffer = etb_alloc_buffer,
.free_buffer = etb_free_buffer,
.set_buffer = etb_set_buffer,
.reset_buffer = etb_reset_buffer,
.update_buffer = etb_update_buffer,
};
static const struct coresight_ops etb_cs_ops = {
......@@ -266,7 +514,7 @@ static void etb_dump(struct etb_drvdata *drvdata)
unsigned long flags;
spin_lock_irqsave(&drvdata->spinlock, flags);
if (drvdata->enable) {
if (local_read(&drvdata->mode) == CS_MODE_SYSFS) {
etb_disable_hw(drvdata);
etb_dump_hw(drvdata);
etb_enable_hw(drvdata);
......@@ -281,7 +529,7 @@ static int etb_open(struct inode *inode, struct file *file)
struct etb_drvdata *drvdata = container_of(file->private_data,
struct etb_drvdata, miscdev);
if (atomic_cmpxchg(&drvdata->in_use, 0, 1))
if (local_cmpxchg(&drvdata->reading, 0, 1))
return -EBUSY;
dev_dbg(drvdata->dev, "%s: successfully opened\n", __func__);
......@@ -317,7 +565,7 @@ static int etb_release(struct inode *inode, struct file *file)
{
struct etb_drvdata *drvdata = container_of(file->private_data,
struct etb_drvdata, miscdev);
atomic_set(&drvdata->in_use, 0);
local_set(&drvdata->reading, 0);
dev_dbg(drvdata->dev, "%s: released\n", __func__);
return 0;
......@@ -489,15 +737,6 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
return ret;
}
static int etb_remove(struct amba_device *adev)
{
struct etb_drvdata *drvdata = amba_get_drvdata(adev);
misc_deregister(&drvdata->miscdev);
coresight_unregister(drvdata->csdev);
return 0;
}
#ifdef CONFIG_PM
static int etb_runtime_suspend(struct device *dev)
{
......@@ -537,14 +776,10 @@ static struct amba_driver etb_driver = {
.name = "coresight-etb10",
.owner = THIS_MODULE,
.pm = &etb_dev_pm_ops,
.suppress_bind_attrs = true,
},
.probe = etb_probe,
.remove = etb_remove,
.id_table = etb_ids,
};
module_amba_driver(etb_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("CoreSight Embedded Trace Buffer driver");
builtin_amba_driver(etb_driver);
/*
* Copyright(C) 2015 Linaro Limited. All rights reserved.
* Author: Mathieu Poirier <mathieu.poirier@linaro.org>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published by
* the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/coresight.h>
#include <linux/coresight-pmu.h>
#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/perf_event.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/workqueue.h>
#include "coresight-priv.h"
static struct pmu etm_pmu;
static bool etm_perf_up;
/**
* struct etm_event_data - Coresight specifics associated to an event
* @work: Handle to free allocated memory outside IRQ context.
* @mask: Hold the CPU(s) this event was set for.
* @snk_config: The sink configuration.
* @path: An array of paths, one slot per CPU.
*/
struct etm_event_data {
struct work_struct work;
cpumask_t mask;
void *snk_config;
struct list_head **path;
};
static DEFINE_PER_CPU(struct perf_output_handle, ctx_handle);
static DEFINE_PER_CPU(struct coresight_device *, csdev_src);
/* ETMv3.5/PTM's ETMCR is 'config' */
PMU_FORMAT_ATTR(cycacc, "config:" __stringify(ETM_OPT_CYCACC));
PMU_FORMAT_ATTR(timestamp, "config:" __stringify(ETM_OPT_TS));
static struct attribute *etm_config_formats_attr[] = {
&format_attr_cycacc.attr,
&format_attr_timestamp.attr,
NULL,
};
static struct attribute_group etm_pmu_format_group = {
.name = "format",
.attrs = etm_config_formats_attr,
};
static const struct attribute_group *etm_pmu_attr_groups[] = {
&etm_pmu_format_group,
NULL,
};
static void etm_event_read(struct perf_event *event) {}
static int etm_event_init(struct perf_event *event)
{
if (event->attr.type != etm_pmu.type)
return -ENOENT;
return 0;
}
static void free_event_data(struct work_struct *work)
{
int cpu;
cpumask_t *mask;
struct etm_event_data *event_data;
struct coresight_device *sink;
event_data = container_of(work, struct etm_event_data, work);
mask = &event_data->mask;
/*
* First deal with the sink configuration. See comment in
* etm_setup_aux() about why we take the first available path.
*/
if (event_data->snk_config) {
cpu = cpumask_first(mask);
sink = coresight_get_sink(event_data->path[cpu]);
if (sink_ops(sink)->free_buffer)
sink_ops(sink)->free_buffer(event_data->snk_config);
}
for_each_cpu(cpu, mask) {
if (event_data->path[cpu])
coresight_release_path(event_data->path[cpu]);
}
kfree(event_data->path);
kfree(event_data);
}
static void *alloc_event_data(int cpu)
{
int size;
cpumask_t *mask;
struct etm_event_data *event_data;
/* First get memory for the session's data */
event_data = kzalloc(sizeof(struct etm_event_data), GFP_KERNEL);
if (!event_data)
return NULL;
/* Make sure nothing disappears under us */
get_online_cpus();
size = num_online_cpus();
mask = &event_data->mask;
if (cpu != -1)
cpumask_set_cpu(cpu, mask);
else
cpumask_copy(mask, cpu_online_mask);
put_online_cpus();
/*
* Each CPU has a single path between source and destination. As such
* allocate an array using CPU numbers as indexes. That way a path
* for any CPU can easily be accessed at any given time. We proceed
* the same way for sessions involving a single CPU. The cost of
* unused memory when dealing with single CPU trace scenarios is small
* compared to the cost of searching through an optimized array.
*/
event_data->path = kcalloc(size,
sizeof(struct list_head *), GFP_KERNEL);
if (!event_data->path) {
kfree(event_data);
return NULL;
}
return event_data;
}
static void etm_free_aux(void *data)
{
struct etm_event_data *event_data = data;
schedule_work(&event_data->work);
}
static void *etm_setup_aux(int event_cpu, void **pages,
int nr_pages, bool overwrite)
{
int cpu;
cpumask_t *mask;
struct coresight_device *sink;
struct etm_event_data *event_data = NULL;
event_data = alloc_event_data(event_cpu);
if (!event_data)
return NULL;
INIT_WORK(&event_data->work, free_event_data);
mask = &event_data->mask;
/* Setup the path for each CPU in a trace session */
for_each_cpu(cpu, mask) {
struct coresight_device *csdev;
csdev = per_cpu(csdev_src, cpu);
if (!csdev)
goto err;
/*
* Building a path doesn't enable it, it simply builds a
* list of devices from source to sink that can be
* referenced later when the path is actually needed.
*/
event_data->path[cpu] = coresight_build_path(csdev);
if (!event_data->path[cpu])
goto err;
}
/*
* In theory nothing prevents tracers in a trace session from being
* associated with different sinks, nor having a sink per tracer. But
* until we have HW with this kind of topology and a way to convey
* sink assignment from the perf cmd line we need to assume tracers
* in a trace session are using the same sink. Therefore pick the sink
* found at the end of the first available path.
*/
cpu = cpumask_first(mask);
/* Grab the sink at the end of the path */
sink = coresight_get_sink(event_data->path[cpu]);
if (!sink)
goto err;
if (!sink_ops(sink)->alloc_buffer)
goto err;
/* Get the AUX specific data from the sink buffer */
event_data->snk_config =
sink_ops(sink)->alloc_buffer(sink, cpu, pages,
nr_pages, overwrite);
if (!event_data->snk_config)
goto err;
out:
return event_data;
err:
etm_free_aux(event_data);
event_data = NULL;
goto out;
}
static void etm_event_start(struct perf_event *event, int flags)
{
int cpu = smp_processor_id();
struct etm_event_data *event_data;
struct perf_output_handle *handle = this_cpu_ptr(&ctx_handle);
struct coresight_device *sink, *csdev = per_cpu(csdev_src, cpu);
if (!csdev)
goto fail;
/*
* Deal with the ring buffer API and get a handle on the
* session's information.
*/
event_data = perf_aux_output_begin(handle, event);
if (!event_data)
goto fail;
/* We need a sink, no need to continue without one */
sink = coresight_get_sink(event_data->path[cpu]);
if (WARN_ON_ONCE(!sink || !sink_ops(sink)->set_buffer))
goto fail_end_stop;
/* Configure the sink */
if (sink_ops(sink)->set_buffer(sink, handle,
event_data->snk_config))
goto fail_end_stop;
/* Nothing will happen without a path */
if (coresight_enable_path(event_data->path[cpu], CS_MODE_PERF))
goto fail_end_stop;
/* Tell the perf core the event is alive */
event->hw.state = 0;
/* Finally enable the tracer */
if (source_ops(csdev)->enable(csdev, &event->attr, CS_MODE_PERF))
goto fail_end_stop;
out:
return;
fail_end_stop:
perf_aux_output_end(handle, 0, true);
fail:
event->hw.state = PERF_HES_STOPPED;
goto out;
}
static void etm_event_stop(struct perf_event *event, int mode)
{
bool lost;
int cpu = smp_processor_id();
unsigned long size;
struct coresight_device *sink, *csdev = per_cpu(csdev_src, cpu);
struct perf_output_handle *handle = this_cpu_ptr(&ctx_handle);
struct etm_event_data *event_data = perf_get_aux(handle);
if (event->hw.state == PERF_HES_STOPPED)
return;
if (!csdev)
return;
sink = coresight_get_sink(event_data->path[cpu]);
if (!sink)
return;
/* stop tracer */
source_ops(csdev)->disable(csdev);
/* tell the core */
event->hw.state = PERF_HES_STOPPED;
if (mode & PERF_EF_UPDATE) {
if (WARN_ON_ONCE(handle->event != event))
return;
/* update trace information */
if (!sink_ops(sink)->update_buffer)
return;
sink_ops(sink)->update_buffer(sink, handle,
event_data->snk_config);
if (!sink_ops(sink)->reset_buffer)
return;
size = sink_ops(sink)->reset_buffer(sink, handle,
event_data->snk_config,
&lost);
perf_aux_output_end(handle, size, lost);
}
/* Disabling the path makes its elements available to other sessions */
coresight_disable_path(event_data->path[cpu]);
}
static int etm_event_add(struct perf_event *event, int mode)
{
int ret = 0;
struct hw_perf_event *hwc = &event->hw;
if (mode & PERF_EF_START) {
etm_event_start(event, 0);
if (hwc->state & PERF_HES_STOPPED)
ret = -EINVAL;
} else {
hwc->state = PERF_HES_STOPPED;
}
return ret;
}
static void etm_event_del(struct perf_event *event, int mode)
{
etm_event_stop(event, PERF_EF_UPDATE);
}
int etm_perf_symlink(struct coresight_device *csdev, bool link)
{
char entry[sizeof("cpu9999999")];
int ret = 0, cpu = source_ops(csdev)->cpu_id(csdev);
struct device *pmu_dev = etm_pmu.dev;
struct device *cs_dev = &csdev->dev;
sprintf(entry, "cpu%d", cpu);
if (!etm_perf_up)
return -EPROBE_DEFER;
if (link) {
ret = sysfs_create_link(&pmu_dev->kobj, &cs_dev->kobj, entry);
if (ret)
return ret;
per_cpu(csdev_src, cpu) = csdev;
} else {
sysfs_remove_link(&pmu_dev->kobj, entry);
per_cpu(csdev_src, cpu) = NULL;
}
return 0;
}
static int __init etm_perf_init(void)
{
int ret;
etm_pmu.capabilities = PERF_PMU_CAP_EXCLUSIVE;
etm_pmu.attr_groups = etm_pmu_attr_groups;
etm_pmu.task_ctx_nr = perf_sw_context;
etm_pmu.read = etm_event_read;
etm_pmu.event_init = etm_event_init;
etm_pmu.setup_aux = etm_setup_aux;
etm_pmu.free_aux = etm_free_aux;
etm_pmu.start = etm_event_start;
etm_pmu.stop = etm_event_stop;
etm_pmu.add = etm_event_add;
etm_pmu.del = etm_event_del;
ret = perf_pmu_register(&etm_pmu, CORESIGHT_ETM_PMU_NAME, -1);
if (ret == 0)
etm_perf_up = true;
return ret;
}
device_initcall(etm_perf_init);
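After perf_pmu_register() succeeds, the PMU is visible under /sys/bus/event_source/devices/ and its dynamically assigned type number is what user space feeds to perf_event_open(). A small sketch that resolves it, assuming CORESIGHT_ETM_PMU_NAME expands to "cs_etm":

#include <stdio.h>

int main(void)
{
	int type;
	FILE *f = fopen("/sys/bus/event_source/devices/cs_etm/type", "r");

	if (!f) {
		perror("cs_etm PMU not registered");
		return 1;
	}
	if (fscanf(f, "%d", &type) != 1)
		type = -1;
	fclose(f);
	printf("cs_etm PMU type: %d\n", type);	/* for perf_event_open() */
	return 0;
}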
/*
* Copyright(C) 2015 Linaro Limited. All rights reserved.
* Author: Mathieu Poirier <mathieu.poirier@linaro.org>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published by
* the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef _CORESIGHT_ETM_PERF_H
#define _CORESIGHT_ETM_PERF_H
struct coresight_device;
#ifdef CONFIG_CORESIGHT
int etm_perf_symlink(struct coresight_device *csdev, bool link);
#else
static inline int etm_perf_symlink(struct coresight_device *csdev, bool link)
{ return -EINVAL; }
#endif /* CONFIG_CORESIGHT */
#endif
......@@ -13,6 +13,7 @@
#ifndef _CORESIGHT_CORESIGHT_ETM_H
#define _CORESIGHT_CORESIGHT_ETM_H
#include <asm/local.h>
#include <linux/spinlock.h>
#include "coresight-priv.h"
......@@ -109,7 +110,10 @@
#define ETM_MODE_STALL BIT(2)
#define ETM_MODE_TIMESTAMP BIT(3)
#define ETM_MODE_CTXID BIT(4)
#define ETM_MODE_ALL 0x1f
#define ETM_MODE_ALL (ETM_MODE_EXCLUDE | ETM_MODE_CYCACC | \
ETM_MODE_STALL | ETM_MODE_TIMESTAMP | \
ETM_MODE_CTXID | ETM_MODE_EXCL_KERN | \
ETM_MODE_EXCL_USER)
#define ETM_SQR_MASK 0x3
#define ETM_TRACEID_MASK 0x3f
......@@ -136,35 +140,16 @@
#define ETM_DEFAULT_EVENT_VAL (ETM_HARD_WIRE_RES_A | \
ETM_ADD_COMP_0 | \
ETM_EVENT_NOT_A)
/**
* struct etm_drvdata - specifics associated to an ETM component
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
* @atclk: optional clock for the core parts of the ETM.
* @csdev: component vitals needed by the framework.
* @spinlock: only one at a time pls.
* @cpu: the cpu this component is affined to.
* @port_size: port size as reported by ETMCR bit 4-6 and 21.
* @arch: ETM/PTM version number.
* @use_cp14: true if management registers need to be accessed via CP14.
* @enable: is this ETM/PTM currently tracing.
* @sticky_enable: true if ETM base configuration has been done.
* @boot_enable: true if we should start tracing at boot time.
* @os_unlock: true if access to management registers is allowed.
* @nr_addr_cmp: Number of pairs of address comparators as found in ETMCCR.
* @nr_cntr: Number of counters as found in ETMCCR bit 13-15.
* @nr_ext_inp: Number of external input as found in ETMCCR bit 17-19.
* @nr_ext_out: Number of external output as found in ETMCCR bit 20-22.
* @nr_ctxid_cmp: Number of contextID comparators as found in ETMCCR bit 24-25.
* @etmccr: value of register ETMCCR.
* @etmccer: value of register ETMCCER.
* @traceid: value of the current ID for this component.
* struct etm_config - configuration information related to an ETM
* @mode: controls various modes supported by this ETM/PTM.
* @ctrl: used in conjunction with @mode.
* @trigger_event: setting for register ETMTRIGGER.
* @startstop_ctrl: setting for register ETMTSSCR.
* @enable_event: setting for register ETMTEEVR.
* @enable_ctrl1: setting for register ETMTECR1.
* @enable_ctrl2: setting for register ETMTECR2.
* @fifofull_level: setting for register ETMFFLR.
* @addr_idx: index for the address comparator selection.
* @addr_val: value for address comparator register.
......@@ -189,36 +174,16 @@
* @ctxid_mask: mask applicable to all the context IDs.
* @sync_freq: Synchronisation frequency.
* @timestamp_event: Defines an event that requests the insertion
of a timestamp into the trace stream.
* of a timestamp into the trace stream.
*/
struct etm_drvdata {
void __iomem *base;
struct device *dev;
struct clk *atclk;
struct coresight_device *csdev;
spinlock_t spinlock;
int cpu;
int port_size;
u8 arch;
bool use_cp14;
bool enable;
bool sticky_enable;
bool boot_enable;
bool os_unlock;
u8 nr_addr_cmp;
u8 nr_cntr;
u8 nr_ext_inp;
u8 nr_ext_out;
u8 nr_ctxid_cmp;
u32 etmccr;
u32 etmccer;
u32 traceid;
struct etm_config {
u32 mode;
u32 ctrl;
u32 trigger_event;
u32 startstop_ctrl;
u32 enable_event;
u32 enable_ctrl1;
u32 enable_ctrl2;
u32 fifofull_level;
u8 addr_idx;
u32 addr_val[ETM_MAX_ADDR_CMP];
......@@ -244,6 +209,56 @@ struct etm_drvdata {
u32 timestamp_event;
};
/**
* struct etm_drvdata - specifics associated to an ETM component
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
* @atclk: optional clock for the core parts of the ETM.
* @csdev: component vitals needed by the framework.
* @spinlock: only one at a time pls.
* @cpu: the cpu this component is affined to.
* @port_size: port size as reported by ETMCR bit 4-6 and 21.
* @arch: ETM/PTM version number.
* @use_cp14: true if management registers need to be accessed via CP14.
* @mode: this tracer's mode, i.e. sysFS, perf or disabled.
* @sticky_enable: true if ETM base configuration has been done.
* @boot_enable: true if we should start tracing at boot time.
* @os_unlock: true if access to management registers is allowed.
* @nr_addr_cmp: Number of pairs of address comparators as found in ETMCCR.
* @nr_cntr: Number of counters as found in ETMCCR bit 13-15.
* @nr_ext_inp: Number of external input as found in ETMCCR bit 17-19.
* @nr_ext_out: Number of external output as found in ETMCCR bit 20-22.
* @nr_ctxid_cmp: Number of contextID comparators as found in ETMCCR bit 24-25.
* @etmccr: value of register ETMCCR.
* @etmccer: value of register ETMCCER.
* @traceid: value of the current ID for this component.
* @config: structure holding configuration parameters.
*/
struct etm_drvdata {
void __iomem *base;
struct device *dev;
struct clk *atclk;
struct coresight_device *csdev;
spinlock_t spinlock;
int cpu;
int port_size;
u8 arch;
bool use_cp14;
local_t mode;
bool sticky_enable;
bool boot_enable;
bool os_unlock;
u8 nr_addr_cmp;
u8 nr_cntr;
u8 nr_ext_inp;
u8 nr_ext_out;
u8 nr_ctxid_cmp;
u32 etmccr;
u32 etmccer;
u32 traceid;
struct etm_config config;
};
enum etm_addr_type {
ETM_ADDR_TYPE_NONE,
ETM_ADDR_TYPE_SINGLE,
......@@ -251,4 +266,39 @@ enum etm_addr_type {
ETM_ADDR_TYPE_START,
ETM_ADDR_TYPE_STOP,
};
static inline void etm_writel(struct etm_drvdata *drvdata,
u32 val, u32 off)
{
if (drvdata->use_cp14) {
if (etm_writel_cp14(off, val)) {
dev_err(drvdata->dev,
"invalid CP14 access to ETM reg: %#x", off);
}
} else {
writel_relaxed(val, drvdata->base + off);
}
}
static inline unsigned int etm_readl(struct etm_drvdata *drvdata, u32 off)
{
u32 val;
if (drvdata->use_cp14) {
if (etm_readl_cp14(off, &val)) {
dev_err(drvdata->dev,
"invalid CP14 access to ETM reg: %#x", off);
}
} else {
val = readl_relaxed(drvdata->base + off);
}
return val;
}
extern const struct attribute_group *coresight_etm_groups[];
int etm_get_trace_id(struct etm_drvdata *drvdata);
void etm_set_default(struct etm_config *config);
void etm_config_trace_mode(struct etm_config *config);
struct etm_config *get_etm_config(struct etm_drvdata *drvdata);
#endif
(This diff has been collapsed.)
/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
*
* Description: CoreSight Funnel driver
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
......@@ -11,7 +13,6 @@
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/device.h>
......@@ -69,7 +70,6 @@ static int funnel_enable(struct coresight_device *csdev, int inport,
{
struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
pm_runtime_get_sync(drvdata->dev);
funnel_enable_hw(drvdata, inport);
dev_info(drvdata->dev, "FUNNEL inport %d enabled\n", inport);
......@@ -95,7 +95,6 @@ static void funnel_disable(struct coresight_device *csdev, int inport,
struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
funnel_disable_hw(drvdata, inport);
pm_runtime_put(drvdata->dev);
dev_info(drvdata->dev, "FUNNEL inport %d disabled\n", inport);
}
......@@ -226,14 +225,6 @@ static int funnel_probe(struct amba_device *adev, const struct amba_id *id)
return 0;
}
static int funnel_remove(struct amba_device *adev)
{
struct funnel_drvdata *drvdata = amba_get_drvdata(adev);
coresight_unregister(drvdata->csdev);
return 0;
}
#ifdef CONFIG_PM
static int funnel_runtime_suspend(struct device *dev)
{
......@@ -273,13 +264,9 @@ static struct amba_driver funnel_driver = {
.name = "coresight-funnel",
.owner = THIS_MODULE,
.pm = &funnel_dev_pm_ops,
.suppress_bind_attrs = true,
},
.probe = funnel_probe,
.remove = funnel_remove,
.id_table = funnel_ids,
};
module_amba_driver(funnel_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("CoreSight Funnel driver");
builtin_amba_driver(funnel_driver);
......@@ -10,7 +10,6 @@
* GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/types.h>
#include <linux/err.h>
#include <linux/slab.h>
......@@ -86,7 +85,7 @@ static int of_coresight_alloc_memory(struct device *dev,
return -ENOMEM;
/* Children connected to this component via @outports */
pdata->child_names = devm_kzalloc(dev, pdata->nr_outport *
pdata->child_names = devm_kzalloc(dev, pdata->nr_outport *
sizeof(*pdata->child_names),
GFP_KERNEL);
if (!pdata->child_names)
......
(The remaining file diffs in this commit are collapsed.)