Commit bafb0762 authored by Linus Torvalds

Merge tag 'char-misc-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big char/misc driver update for 4.14-rc1.

  Lots of different stuff in here, it's been an active development cycle
  for some reason. Highlights are:

   - updated binder driver; this brings binder up to date with what
     shipped in the Android O release, plus some more changes that
     have happened since then and are in the Android development trees.

   - coresight updates and fixes

   - mux driver file renames to be a bit "nicer"

   - intel_th driver updates

   - normal set of hyper-v updates and changes

   - small fpga subsystem and driver updates

   - lots of const code changes all over the driver trees

   - extcon driver updates

   - fmc driver subsystem updates

   - w1 subsystem minor reworks and new features and drivers added

   - spmi driver updates

  Plus a smattering of other minor driver updates and fixes.

  All of these have been in linux-next with no reported issues for a
  while"

* tag 'char-misc-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (244 commits)
  ANDROID: binder: don't queue async transactions to thread.
  ANDROID: binder: don't enqueue death notifications to thread todo.
  ANDROID: binder: Don't BUG_ON(!spin_is_locked()).
  ANDROID: binder: Add BINDER_GET_NODE_DEBUG_INFO ioctl
  ANDROID: binder: push new transactions to waiting threads.
  ANDROID: binder: remove proc waitqueue
  android: binder: Add page usage in binder stats
  android: binder: fixup crash introduced by moving buffer hdr
  drivers: w1: add hwmon temp support for w1_therm
  drivers: w1: refactor w1_slave_show to make the temp reading functionality separate
  drivers: w1: add hwmon support structures
  eeprom: idt_89hpesx: Support both ACPI and OF probing
  mcb: Fix an error handling path in 'chameleon_parse_cells()'
  MCB: add support for SC31 to mcb-lpc
  mux: make device_type const
  char: virtio: constify attribute_group structures.
  Documentation/ABI: document the nvmem sysfs files
  lkdtm: fix spelling mistake: "incremeted" -> "incremented"
  perf: cs-etm: Fix ETMv4 CONFIGR entry in perf.data file
  nvmem: include linux/err.h from header
  ...
What: /sys/bus/nvmem/devices/.../nvmem
Date: July 2015
KernelVersion: 4.2
Contact: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Description:
This file allows the user to read/write the raw NVMEM contents.
Write permission on this file depends on the nvmem
provider configuration.
ex:
hexdump /sys/bus/nvmem/devices/qfprom0/nvmem
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
00000a0 db10 2240 0000 e000 0c00 0c00 0000 0c00
0000000 0000 0000 0000 0000 0000 0000 0000 0000
...
*
0001000
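Writes use the same file when the provider allows it. A minimal, hypothetical
sketch (the device name, offset and data file are made up; many providers,
e.g. fuse-based ones, are read-only or one-time programmable):
dd if=data.bin of=/sys/bus/nvmem/devices/<device>/nvmem bs=1 seek=16 conv=notrunc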
......@@ -45,6 +45,8 @@ Contact: thunderbolt-software@lists.01.org
Description: When a device supports Thunderbolt secure connect it will
have this attribute. Writing a 32-byte hex string changes the
authorization to use the secure connection method instead.
Writing an empty string clears the key and the regular connection
method can be used again.
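For example, a hypothetical session ($DEV stands for a device such as 0-1,
and this assumes the attribute described here is the per-device "key" file):
# store a new 32-byte (64 hex character) key
openssl rand -hex 32 > /sys/bus/thunderbolt/devices/$DEV/key
# clear it again so the regular connection method is used
echo "" > /sys/bus/thunderbolt/devices/$DEV/key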
What: /sys/bus/thunderbolt/devices/.../device
Date: Sep 2017
......
What: /sys/bus/pci/drivers/altera-cvp/chkcfg
Date: May 2017
Kernel Version: 4.13
Contact: Anatolij Gustschin <agust@denx.de>
Description:
Contains either 1 or 0 and controls whether configuration
error checking in the altera-cvp driver is turned on or
off.
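ex (a hypothetical session, assuming the altera-cvp driver is bound):
echo 1 > /sys/bus/pci/drivers/altera-cvp/chkcfg
cat /sys/bus/pci/drivers/altera-cvp/chkcfg
1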
......@@ -3081,3 +3081,8 @@
1 = /dev/osd1 Second OSD Device
...
255 = /dev/osd255 256th OSD Device
384-511 char RESERVED FOR DYNAMIC ASSIGNMENT
Character devices that request a dynamic allocation of major
number will take numbers starting from 511 and downward,
once the 234-254 range is full.
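Assigned majors, including dynamically allocated ones, can be listed at
run time from the "Character devices" section of /proc/devices, e.g.:
cat /proc/devices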
......@@ -34,8 +34,8 @@ its hardware characteristics.
- Embedded Trace Macrocell (version 4.x):
"arm,coresight-etm4x", "arm,primecell";
- Qualcomm Configurable Replicator (version 1.x):
"qcom,coresight-replicator1x", "arm,primecell";
- Coresight programmable Replicator :
"arm,coresight-dynamic-replicator", "arm,primecell";
- System Trace Macrocell:
"arm,coresight-stm", "arm,primecell"; [1]
......
ChromeOS EC USB Type-C cable and accessories detection
On ChromeOS systems with USB Type C ports, the ChromeOS Embedded Controller is
able to detect the state of external accessories such as display adapters
or USB devices when said accessories are attached or detached.
The node for this device must be under a cros-ec node like google,cros-ec-spi
or google,cros-ec-i2c.
Required properties:
- compatible: Should be "google,extcon-usbc-cros-ec".
- google,usb-port-id: Specifies the USB port ID to use.
Example:
cros-ec@0 {
compatible = "google,cros-ec-i2c";
...
extcon {
compatible = "google,extcon-usbc-cros-ec";
google,usb-port-id = <0>;
};
}
Altera Passive Serial SPI FPGA Manager
Altera FPGAs support a method of loading the bitstream over what is
referred to as "passive serial".
The passive serial link is not technically SPI, and might require extra
circuits in order to play nicely with other SPI slaves on the same bus.
See https://www.altera.com/literature/hb/cyc/cyc_c51013.pdf
Required properties:
- compatible: Must be one of the following:
"altr,fpga-passive-serial",
"altr,fpga-arria10-passive-serial"
- reg: SPI chip select of the FPGA
- nconfig-gpios: config pin (referred to as nCONFIG in the manual)
- nstat-gpios: status pin (referred to as nSTATUS in the manual)
Optional properties:
- confd-gpios: confd pin (referred to as CONF_DONE in the manual)
Example:
fpga: fpga@0 {
compatible = "altr,fpga-passive-serial";
spi-max-frequency = <20000000>;
reg = <0>;
nconfig-gpios = <&gpio4 9 GPIO_ACTIVE_LOW>;
nstat-gpios = <&gpio4 11 GPIO_ACTIVE_LOW>;
confd-gpios = <&gpio4 12 GPIO_ACTIVE_LOW>;
};
Xilinx LogiCORE Partial Reconfig Decoupler Softcore
The Xilinx LogiCORE Partial Reconfig Decoupler manages one or more
decouplers / fpga bridges.
The controller can decouple/disable the bridges which prevents signal
changes from passing through the bridge. The controller can also
couple / enable the bridges which allows traffic to pass through the
bridge normally.
The driver supports only MMIO handling. A PR region can have multiple
PR Decouplers, which can be handled independently or chained via
decouple/decouple_status signals.
Required properties:
- compatible : Should contain "xlnx,pr-decoupler-1.00" followed by
"xlnx,pr-decoupler"
- regs : base address and size for decoupler module
- clocks : input clock to IP
- clock-names : should contain "aclk"
Optional properties:
- bridge-enable : 0 if driver should disable bridge at startup
1 if driver should enable bridge at startup
Default is to leave bridge in current state.
See Documentation/devicetree/bindings/fpga/fpga-region.txt for generic bindings.
Example:
fpga-bridge@100000450 {
compatible = "xlnx,pr-decoupler-1.00",
"xlnx-pr-decoupler";
regs = <0x10000045 0x10>;
clocks = <&clkc 15>;
clock-names = "aclk";
bridge-enable = <0>;
};
......@@ -175,6 +175,7 @@ kosagi Sutajio Ko-Usagi PTE Ltd.
kyo Kyocera Corporation
lacie LaCie
lantiq Lantiq Semiconductor
lattice Lattice Semiconductor
lego LEGO Systems A/S
lenovo Lenovo Group Ltd.
lg LG Corporation
......
......@@ -281,6 +281,8 @@
capabilities of the underlying ICAP hardware
differ between different families. May be
'virtex2p', 'virtex4', or 'virtex5'.
- compatible : should contain "xlnx,xps-hwicap-1.00.a" or
"xlnx,opb-hwicap-1.00.b".
vi) Xilinx Uart 16550
......
......@@ -83,7 +83,7 @@ by writing the name of the desired stm device there, for example:
$ echo dummy_stm.0 > /sys/class/stm_source/console/stm_source_link
For examples on how to use stm_source interface in the kernel, refer
to stm_console or stm_heartbeat drivers.
to stm_console, stm_heartbeat or stm_ftrace drivers.
Each stm_source device will need to assume a master and a range of
channels, depending on how many channels it requires. These are
......@@ -107,5 +107,16 @@ console in the STP stream, create a "console" policy entry (see the
beginning of this text on how to do that). When initialized, it will
consume one channel.
stm_ftrace
==========
This is another "stm_source" device. Once stm_ftrace has been
linked with an stm device and the "function" tracer is enabled, the
function address and parent function address that the Ftrace subsystem
would store into its ring buffer are exported via the stm device at
the same time.
Currently only the Ftrace "function" tracer is supported.
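A minimal sketch of wiring it up, following the console example above
(the stm device name is the same placeholder used there, and the
stm_source is assumed to be registered as "ftrace"):
$ echo function > /sys/kernel/debug/tracing/current_tracer
$ echo dummy_stm.0 > /sys/class/stm_source/ftrace/stm_source_link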
[1] https://software.intel.com/sites/default/files/managed/d3/3c/intel-th-developer-manual.pdf
[2] http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0444b/index.html
......@@ -5362,10 +5362,11 @@ K: fmc_d.*register
FPGA MANAGER FRAMEWORK
M: Alan Tull <atull@kernel.org>
R: Moritz Fischer <moritz.fischer@ettus.com>
R: Moritz Fischer <mdf@kernel.org>
L: linux-fpga@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/atull/linux-fpga.git
Q: http://patchwork.kernel.org/project/linux-fpga/list/
F: Documentation/fpga/
F: Documentation/devicetree/bindings/fpga/
F: drivers/fpga/
......@@ -9484,6 +9485,7 @@ M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
S: Maintained
F: drivers/nvmem/
F: Documentation/devicetree/bindings/nvmem/
F: Documentation/ABI/stable/sysfs-bus-nvmem
F: include/linux/nvmem-consumer.h
F: include/linux/nvmem-provider.h
......
......@@ -94,6 +94,15 @@
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_ecspi1 &pinctrl_ecspi1cs>;
status = "okay";
fpga: fpga@0 {
compatible = "altr,fpga-passive-serial";
spi-max-frequency = <20000000>;
reg = <0>;
pinctrl-0 = <&pinctrl_fpgaspi>;
nconfig-gpios = <&gpio4 9 GPIO_ACTIVE_LOW>;
nstat-gpios = <&gpio4 11 GPIO_ACTIVE_LOW>;
};
};
&ecspi3 {
......@@ -319,6 +328,13 @@
>;
};
pinctrl_fpgaspi: fpgaspigrp {
fsl,pins = <
MX6QDL_PAD_KEY_ROW1__GPIO4_IO09 0x1b0b0
MX6QDL_PAD_KEY_ROW2__GPIO4_IO11 0x1b0b0
>;
};
pinctrl_gpminand: gpminandgrp {
fsl,pins = <
MX6QDL_PAD_NANDF_CLE__NAND_CLE 0xb0b1
......
......@@ -28,6 +28,8 @@ struct ms_hyperv_info {
u32 features;
u32 misc_features;
u32 hints;
u32 max_vp_index;
u32 max_lp_index;
};
extern struct ms_hyperv_info ms_hyperv;
......
......@@ -179,9 +179,15 @@ static void __init ms_hyperv_init_platform(void)
ms_hyperv.misc_features = cpuid_edx(HYPERV_CPUID_FEATURES);
ms_hyperv.hints = cpuid_eax(HYPERV_CPUID_ENLIGHTMENT_INFO);
pr_info("HyperV: features 0x%x, hints 0x%x\n",
pr_info("Hyper-V: features 0x%x, hints 0x%x\n",
ms_hyperv.features, ms_hyperv.hints);
ms_hyperv.max_vp_index = cpuid_eax(HVCPUID_IMPLEMENTATION_LIMITS);
ms_hyperv.max_lp_index = cpuid_ebx(HVCPUID_IMPLEMENTATION_LIMITS);
pr_debug("Hyper-V: max %u virtual processors, %u logical processors\n",
ms_hyperv.max_vp_index, ms_hyperv.max_lp_index);
/*
* Extract host information.
*/
......@@ -214,7 +220,7 @@ static void __init ms_hyperv_init_platform(void)
rdmsrl(HV_X64_MSR_APIC_FREQUENCY, hv_lapic_frequency);
hv_lapic_frequency = div_u64(hv_lapic_frequency, HZ);
lapic_timer_frequency = hv_lapic_frequency;
pr_info("HyperV: LAPIC Timer Frequency: %#x\n",
pr_info("Hyper-V: LAPIC Timer Frequency: %#x\n",
lapic_timer_frequency);
}
......@@ -248,7 +254,7 @@ static void __init ms_hyperv_init_platform(void)
}
const __refconst struct hypervisor_x86 x86_hyper_ms_hyperv = {
.name = "Microsoft HyperV",
.name = "Microsoft Hyper-V",
.detect = ms_hyperv_platform,
.init_platform = ms_hyperv_init_platform,
};
......
......@@ -242,6 +242,7 @@ EXPORT_SYMBOL_GPL(disk_map_sector_rcu);
* Can be deleted altogether. Later.
*
*/
#define BLKDEV_MAJOR_HASH_SIZE 255
static struct blk_major_name {
struct blk_major_name *next;
int major;
......@@ -259,12 +260,11 @@ void blkdev_show(struct seq_file *seqf, off_t offset)
{
struct blk_major_name *dp;
if (offset < BLKDEV_MAJOR_HASH_SIZE) {
mutex_lock(&block_class_lock);
for (dp = major_names[offset]; dp; dp = dp->next)
mutex_lock(&block_class_lock);
for (dp = major_names[major_to_index(offset)]; dp; dp = dp->next)
if (dp->major == offset)
seq_printf(seqf, "%3d %s\n", dp->major, dp->name);
mutex_unlock(&block_class_lock);
}
mutex_unlock(&block_class_lock);
}
#endif /* CONFIG_PROC_FS */
......@@ -309,6 +309,14 @@ int register_blkdev(unsigned int major, const char *name)
ret = major;
}
if (major >= BLKDEV_MAJOR_MAX) {
pr_err("register_blkdev: major requested (%d) is greater than the maximum (%d) for %s\n",
major, BLKDEV_MAJOR_MAX, name);
ret = -EINVAL;
goto out;
}
p = kmalloc(sizeof(struct blk_major_name), GFP_KERNEL);
if (p == NULL) {
ret = -ENOMEM;
......
......@@ -22,7 +22,7 @@ config ANDROID_BINDER_IPC
config ANDROID_BINDER_DEVICES
string "Android Binder devices"
depends on ANDROID_BINDER_IPC
default "binder,hwbinder"
default "binder,hwbinder,vndbinder"
---help---
Default value for the binder.devices parameter.
......@@ -32,7 +32,7 @@ config ANDROID_BINDER_DEVICES
therefore logically separated from the other devices.
config ANDROID_BINDER_IPC_32BIT
bool
bool "Use old (Android 4.4 and earlier) 32-bit binder API"
depends on !64BIT && ANDROID_BINDER_IPC
default y
---help---
......@@ -44,6 +44,16 @@ config ANDROID_BINDER_IPC_32BIT
Note that enabling this will break newer Android user-space.
config ANDROID_BINDER_IPC_SELFTEST
bool "Android Binder IPC Driver Selftest"
depends on ANDROID_BINDER_IPC
---help---
This feature allows binder selftest to run.
Binder selftest checks the allocation and free of binder buffers
exhaustively with combinations of various buffer sizes and
alignments.
endif # if ANDROID
endmenu
ccflags-y += -I$(src) # needed for trace events
obj-$(CONFIG_ANDROID_BINDER_IPC) += binder.o
obj-$(CONFIG_ANDROID_BINDER_IPC) += binder.o binder_alloc.o
obj-$(CONFIG_ANDROID_BINDER_IPC_SELFTEST) += binder_alloc_selftest.o
This diff has been collapsed.
This diff has been collapsed.
/*
* Copyright (C) 2017 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _LINUX_BINDER_ALLOC_H
#define _LINUX_BINDER_ALLOC_H
#include <linux/rbtree.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/rtmutex.h>
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <linux/list_lru.h>
extern struct list_lru binder_alloc_lru;
struct binder_transaction;
/**
* struct binder_buffer - buffer used for binder transactions
* @entry: entry in alloc->buffers list
* @rb_node: node for allocated_buffers/free_buffers rb trees
* @free: true if buffer is free
* @allow_user_free: true if the buffer can be freed by user space
* @async_transaction: true if the buffer is used for an async transaction
* @debug_id: unique ID for debugging
* @transaction: pointer to the transaction the buffer is in use for
* @target_node: binder_node that is the target of the transaction
* @data_size: size of the transaction data
* @offsets_size: size of the array of object offsets
* @extra_buffers_size: size of any extra buffers (e.g. scatter-gather data)
* @data: pointer to the start of the buffer data
*
* Bookkeeping structure for binder transaction buffers
*/
struct binder_buffer {
struct list_head entry; /* free and allocated entries by address */
struct rb_node rb_node; /* free entry by size or allocated entry */
/* by address */
unsigned free:1;
unsigned allow_user_free:1;
unsigned async_transaction:1;
unsigned free_in_progress:1;
unsigned debug_id:28;
struct binder_transaction *transaction;
struct binder_node *target_node;
size_t data_size;
size_t offsets_size;
size_t extra_buffers_size;
void *data;
};
/**
* struct binder_lru_page - page object used for binder shrinker
* @page_ptr: pointer to physical page in mmap'd space
* @lru: entry in binder_alloc_lru
* @alloc: binder_alloc for a proc
*/
struct binder_lru_page {
struct list_head lru;
struct page *page_ptr;
struct binder_alloc *alloc;
};
/**
* struct binder_alloc - per-binder proc state for binder allocator
* @vma: vm_area_struct passed to mmap_handler
* (invariant after mmap)
* @tsk: tid for task that called init for this proc
* (invariant after init)
* @vma_vm_mm: copy of vma->vm_mm (invariant after mmap)
* @buffer: base of per-proc address space mapped via mmap
* @user_buffer_offset: offset between user and kernel VAs for buffer
* @buffers: list of all buffers for this proc
* @free_buffers: rb tree of buffers available for allocation
* sorted by size
* @allocated_buffers: rb tree of allocated buffers sorted by address
* @free_async_space: VA space available for async buffers. This is
* initialized at mmap time to 1/2 the full VA space
* @pages: array of binder_lru_page
* @buffer_size: size of address space specified via mmap
* @pid: pid for associated binder_proc (invariant after init)
*
* Bookkeeping structure for per-proc address space management for binder
* buffers. It is normally initialized during binder_init() and binder_mmap()
* calls. The address space is used for both user-visible buffers and for
* struct binder_buffer objects used to track the user buffers
*/
struct binder_alloc {
struct mutex mutex;
struct task_struct *tsk;
struct vm_area_struct *vma;
struct mm_struct *vma_vm_mm;
void *buffer;
ptrdiff_t user_buffer_offset;
struct list_head buffers;
struct rb_root free_buffers;
struct rb_root allocated_buffers;
size_t free_async_space;
struct binder_lru_page *pages;
size_t buffer_size;
uint32_t buffer_free;
int pid;
};
#ifdef CONFIG_ANDROID_BINDER_IPC_SELFTEST
void binder_selftest_alloc(struct binder_alloc *alloc);
#else
static inline void binder_selftest_alloc(struct binder_alloc *alloc) {}
#endif
enum lru_status binder_alloc_free_page(struct list_head *item,
struct list_lru_one *lru,
spinlock_t *lock, void *cb_arg);
extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
size_t data_size,
size_t offsets_size,
size_t extra_buffers_size,
int is_async);
extern void binder_alloc_init(struct binder_alloc *alloc);
void binder_alloc_shrinker_init(void);
extern void binder_alloc_vma_close(struct binder_alloc *alloc);
extern struct binder_buffer *
binder_alloc_prepare_to_free(struct binder_alloc *alloc,
uintptr_t user_ptr);
extern void binder_alloc_free_buf(struct binder_alloc *alloc,
struct binder_buffer *buffer);
extern int binder_alloc_mmap_handler(struct binder_alloc *alloc,
struct vm_area_struct *vma);
extern void binder_alloc_deferred_release(struct binder_alloc *alloc);
extern int binder_alloc_get_allocated_count(struct binder_alloc *alloc);
extern void binder_alloc_print_allocated(struct seq_file *m,
struct binder_alloc *alloc);
void binder_alloc_print_pages(struct seq_file *m,
struct binder_alloc *alloc);
/**
* binder_alloc_get_free_async_space() - get free space available for async
* @alloc: binder_alloc for this proc
*
* Return: the bytes remaining in the address-space for async transactions
*/
static inline size_t
binder_alloc_get_free_async_space(struct binder_alloc *alloc)
{
size_t free_async_space;
mutex_lock(&alloc->mutex);
free_async_space = alloc->free_async_space;
mutex_unlock(&alloc->mutex);
return free_async_space;
}
/**
* binder_alloc_get_user_buffer_offset() - get offset between kernel/user addrs
* @alloc: binder_alloc for this proc
*
* Return: the offset between kernel and user-space addresses to use for
* virtual address conversion
*/
static inline ptrdiff_t
binder_alloc_get_user_buffer_offset(struct binder_alloc *alloc)
{
/*
* user_buffer_offset is constant if vma is set and
* undefined if vma is not set. It is possible to
* get here with !alloc->vma if the target process
* is dying while a transaction is being initiated.
* Returning the old value is ok in this case and
* the transaction will fail.
*/
return alloc->user_buffer_offset;
}
#endif /* _LINUX_BINDER_ALLOC_H */
/* binder_alloc_selftest.c
*
* Android IPC Subsystem
*
* Copyright (C) 2017 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/mm_types.h>
#include <linux/err.h>
#include "binder_alloc.h"
#define BUFFER_NUM 5
#define BUFFER_MIN_SIZE (PAGE_SIZE / 8)
static bool binder_selftest_run = true;
static int binder_selftest_failures;
static DEFINE_MUTEX(binder_selftest_lock);
/**
* enum buf_end_align_type - Page alignment of a buffer
* end with regard to the end of the previous buffer.
*
* In the pictures below, buf2 refers to the buffer we
* are aligning. buf1 refers to previous buffer by addr.
* Symbol [ means the start of a buffer, ] means the end
* of a buffer, and | means page boundaries.
*/
enum buf_end_align_type {
/**
* @SAME_PAGE_UNALIGNED: The end of this buffer is on
* the same page as the end of the previous buffer and
* is not page aligned. Examples:
* buf1 ][ buf2 ][ ...
* buf1 ]|[ buf2 ][ ...
*/
SAME_PAGE_UNALIGNED = 0,
/**
* @SAME_PAGE_ALIGNED: When the end of the previous buffer
* is not page aligned, the end of this buffer is on the
* same page as the end of the previous buffer and is page
* aligned. When the previous buffer is page aligned, the
* end of this buffer is aligned to the next page boundary.
* Examples:
* buf1 ][ buf2 ]| ...
* buf1 ]|[ buf2 ]| ...
*/
SAME_PAGE_ALIGNED,
/**
* @NEXT_PAGE_UNALIGNED: The end of this buffer is on
* the page next to the end of the previous buffer and
* is not page aligned. Examples:
* buf1 ][ buf2 | buf2 ][ ...
* buf1 ]|[ buf2 | buf2 ][ ...
*/
NEXT_PAGE_UNALIGNED,
/**
* @NEXT_PAGE_ALIGNED: The end of this buffer is on
* the page next to the end of the previous buffer and
* is page aligned. Examples:
* buf1 ][ buf2 | buf2 ]| ...
* buf1 ]|[ buf2 | buf2 ]| ...
*/
NEXT_PAGE_ALIGNED,
/**
* @NEXT_NEXT_UNALIGNED: The end of this buffer is on
* the page that follows the page after the end of the
* previous buffer and is not page aligned. Examples:
* buf1 ][ buf2 | buf2 | buf2 ][ ...
* buf1 ]|[ buf2 | buf2 | buf2 ][ ...
*/
NEXT_NEXT_UNALIGNED,
LOOP_END,
};
static void pr_err_size_seq(size_t *sizes, int *seq)
{
int i;
pr_err("alloc sizes: ");
for (i = 0; i < BUFFER_NUM; i++)
pr_cont("[%zu]", sizes[i]);
pr_cont("\n");
pr_err("free seq: ");
for (i = 0; i < BUFFER_NUM; i++)
pr_cont("[%d]", seq[i]);
pr_cont("\n");
}
static bool check_buffer_pages_allocated(struct binder_alloc *alloc,
struct binder_buffer *buffer,
size_t size)
{
void *page_addr, *end;
int page_index;
end = (void *)PAGE_ALIGN((uintptr_t)buffer->data + size);
page_addr = buffer->data;
for (; page_addr < end; page_addr += PAGE_SIZE) {
page_index = (page_addr - alloc->buffer) / PAGE_SIZE;
if (!alloc->pages[page_index].page_ptr ||
!list_empty(&alloc->pages[page_index].lru)) {
pr_err("expect alloc but is %s at page index %d\n",
alloc->pages[page_index].page_ptr ?
"lru" : "free", page_index);
return false;
}
}
return true;
}
static void binder_selftest_alloc_buf(struct binder_alloc *alloc,
struct binder_buffer *buffers[],
size_t *sizes, int *seq)
{
int i;
for (i = 0; i < BUFFER_NUM; i++) {
buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0);
if (IS_ERR(buffers[i]) ||
!check_buffer_pages_allocated(alloc, buffers[i],
sizes[i])) {
pr_err_size_seq(sizes, seq);
binder_selftest_failures++;
}
}
}
static void binder_selftest_free_buf(struct binder_alloc *alloc,
struct binder_buffer *buffers[],
size_t *sizes, int *seq, size_t end)
{
int i;
for (i = 0; i < BUFFER_NUM; i++)
binder_alloc_free_buf(alloc, buffers[seq[i]]);
for (i = 0; i < end / PAGE_SIZE; i++) {
/**
* Error message on a free page can be false positive
* if binder shrinker ran during binder_alloc_free_buf
* calls above.
*/
if (list_empty(&alloc->pages[i].lru)) {
pr_err_size_seq(sizes, seq);
pr_err("expect lru but is %s at page index %d\n",
alloc->pages[i].page_ptr ? "alloc" : "free", i);
binder_selftest_failures++;
}
}
}
static void binder_selftest_free_page(struct binder_alloc *alloc)
{
int i;
unsigned long count;
while ((count = list_lru_count(&binder_alloc_lru))) {
list_lru_walk(&binder_alloc_lru, binder_alloc_free_page,
NULL, count);
}
for (i = 0; i < (alloc->buffer_size / PAGE_SIZE); i++) {
if (alloc->pages[i].page_ptr) {
pr_err("expect free but is %s at page index %d\n",
list_empty(&alloc->pages[i].lru) ?
"alloc" : "lru", i);
binder_selftest_failures++;
}
}
}
static void binder_selftest_alloc_free(struct binder_alloc *alloc,
size_t *sizes, int *seq, size_t end)
{
struct binder_buffer *buffers[BUFFER_NUM];
binder_selftest_alloc_buf(alloc, buffers, sizes, seq);
binder_selftest_free_buf(alloc, buffers, sizes, seq, end);
/* Allocate from lru. */
binder_selftest_alloc_buf(alloc, buffers, sizes, seq);
if (list_lru_count(&binder_alloc_lru))
pr_err("lru list should be empty but is not\n");
binder_selftest_free_buf(alloc, buffers, sizes, seq, end);
binder_selftest_free_page(alloc);
}
static bool is_dup(int *seq, int index, int val)
{
int i;
for (i = 0; i < index; i++) {
if (seq[i] == val)
return true;
}
return false;
}
/* Generate BUFFER_NUM factorial free orders. */
static void binder_selftest_free_seq(struct binder_alloc *alloc,
size_t *sizes, int *seq,
int index, size_t end)
{
int i;
if (index == BUFFER_NUM) {
binder_selftest_alloc_free(alloc, sizes, seq, end);
return;
}
for (i = 0; i < BUFFER_NUM; i++) {
if (is_dup(seq, index, i))
continue;
seq[index] = i;
binder_selftest_free_seq(alloc, sizes, seq, index + 1, end);
}
}
static void binder_selftest_alloc_size(struct binder_alloc *alloc,
size_t *end_offset)
{
int i;
int seq[BUFFER_NUM] = {0};
size_t front_sizes[BUFFER_NUM];
size_t back_sizes[BUFFER_NUM];
size_t last_offset, offset = 0;
for (i = 0; i < BUFFER_NUM; i++) {
last_offset = offset;
offset = end_offset[i];
front_sizes[i] = offset - last_offset;
back_sizes[BUFFER_NUM - i - 1] = front_sizes[i];
}
/*
* Buffers share the first or last few pages.
* Only BUFFER_NUM - 1 buffer sizes are adjustable since
* we need one giant buffer before getting to the last page.
*/
back_sizes[0] += alloc->buffer_size - end_offset[BUFFER_NUM - 1];
binder_selftest_free_seq(alloc, front_sizes, seq, 0,
end_offset[BUFFER_NUM - 1]);
binder_selftest_free_seq(alloc, back_sizes, seq, 0, alloc->buffer_size);
}
static void binder_selftest_alloc_offset(struct binder_alloc *alloc,
size_t *end_offset, int index)
{
int align;
size_t end, prev;
if (index == BUFFER_NUM) {
binder_selftest_alloc_size(alloc, end_offset);
return;
}
prev = index == 0 ? 0 : end_offset[index - 1];
end = prev;
BUILD_BUG_ON(BUFFER_MIN_SIZE * BUFFER_NUM >= PAGE_SIZE);
for (align = SAME_PAGE_UNALIGNED; align < LOOP_END; align++) {
if (align % 2)
end = ALIGN(end, PAGE_SIZE);
else
end += BUFFER_MIN_SIZE;
end_offset[index] = end;
binder_selftest_alloc_offset(alloc, end_offset, index + 1);
}
}
/**
* binder_selftest_alloc() - Test alloc and free of buffer pages.
* @alloc: Pointer to alloc struct.
*
* Allocate BUFFER_NUM buffers to cover all page alignment cases,
* then free them in all orders possible. Check that pages are
* correctly allocated, put onto lru when buffers are freed, and
* are freed when binder_alloc_free_page is called.
*/
void binder_selftest_alloc(struct binder_alloc *alloc)
{
size_t end_offset[BUFFER_NUM];
if (!binder_selftest_run)
return;
mutex_lock(&binder_selftest_lock);
if (!binder_selftest_run || !alloc->vma)
goto done;
pr_info("STARTED\n");
binder_selftest_alloc_offset(alloc, end_offset, 0);
binder_selftest_run = false;
if (binder_selftest_failures > 0)
pr_info("%d tests FAILED\n", binder_selftest_failures);
else
pr_info("PASSED\n");
done:
mutex_unlock(&binder_selftest_lock);
}
......@@ -23,7 +23,8 @@
struct binder_buffer;
struct binder_node;
struct binder_proc;
struct binder_ref;
struct binder_alloc;
struct binder_ref_data;
struct binder_thread;
struct binder_transaction;
......@@ -146,8 +147,8 @@ TRACE_EVENT(binder_transaction_received,
TRACE_EVENT(binder_transaction_node_to_ref,
TP_PROTO(struct binder_transaction *t, struct binder_node *node,
struct binder_ref *ref),
TP_ARGS(t, node, ref),
struct binder_ref_data *rdata),
TP_ARGS(t, node, rdata),
TP_STRUCT__entry(
__field(int, debug_id)
......@@ -160,8 +161,8 @@ TRACE_EVENT(binder_transaction_node_to_ref,
__entry->debug_id = t->debug_id;
__entry->node_debug_id = node->debug_id;
__entry->node_ptr = node->ptr;
__entry->ref_debug_id = ref->debug_id;
__entry->ref_desc = ref->desc;
__entry->ref_debug_id = rdata->debug_id;
__entry->ref_desc = rdata->desc;
),
TP_printk("transaction=%d node=%d src_ptr=0x%016llx ==> dest_ref=%d dest_desc=%d",
__entry->debug_id, __entry->node_debug_id,
......@@ -170,8 +171,9 @@ TRACE_EVENT(binder_transaction_node_to_ref,
);
TRACE_EVENT(binder_transaction_ref_to_node,
TP_PROTO(struct binder_transaction *t, struct binder_ref *ref),
TP_ARGS(t, ref),
TP_PROTO(struct binder_transaction *t, struct binder_node *node,
struct binder_ref_data *rdata),
TP_ARGS(t, node, rdata),
TP_STRUCT__entry(
__field(int, debug_id)
......@@ -182,10 +184,10 @@ TRACE_EVENT(binder_transaction_ref_to_node,
),
TP_fast_assign(
__entry->debug_id = t->debug_id;
__entry->ref_debug_id = ref->debug_id;
__entry->ref_desc = ref->desc;
__entry->node_debug_id = ref->node->debug_id;
__entry->node_ptr = ref->node->ptr;
__entry->ref_debug_id = rdata->debug_id;
__entry->ref_desc = rdata->desc;
__entry->node_debug_id = node->debug_id;
__entry->node_ptr = node->ptr;
),
TP_printk("transaction=%d node=%d src_ref=%d src_desc=%d ==> dest_ptr=0x%016llx",
__entry->debug_id, __entry->node_debug_id,
......@@ -194,9 +196,10 @@ TRACE_EVENT(binder_transaction_ref_to_node,
);
TRACE_EVENT(binder_transaction_ref_to_ref,
TP_PROTO(struct binder_transaction *t, struct binder_ref *src_ref,
struct binder_ref *dest_ref),
TP_ARGS(t, src_ref, dest_ref),
TP_PROTO(struct binder_transaction *t, struct binder_node *node,
struct binder_ref_data *src_ref,
struct binder_ref_data *dest_ref),
TP_ARGS(t, node, src_ref, dest_ref),
TP_STRUCT__entry(
__field(int, debug_id)
......@@ -208,7 +211,7 @@ TRACE_EVENT(binder_transaction_ref_to_ref,
),
TP_fast_assign(
__entry->debug_id = t->debug_id;
__entry->node_debug_id = src_ref->node->debug_id;
__entry->node_debug_id = node->debug_id;
__entry->src_ref_debug_id = src_ref->debug_id;
__entry->src_ref_desc = src_ref->desc;
__entry->dest_ref_debug_id = dest_ref->debug_id;
......@@ -268,9 +271,9 @@ DEFINE_EVENT(binder_buffer_class, binder_transaction_failed_buffer_release,
TP_ARGS(buffer));
TRACE_EVENT(binder_update_page_range,
TP_PROTO(struct binder_proc *proc, bool allocate,
TP_PROTO(struct binder_alloc *alloc, bool allocate,
void *start, void *end),
TP_ARGS(proc, allocate, start, end),
TP_ARGS(alloc, allocate, start, end),
TP_STRUCT__entry(
__field(int, proc)
__field(bool, allocate)
......@@ -278,9 +281,9 @@ TRACE_EVENT(binder_update_page_range,
__field(size_t, size)
),
TP_fast_assign(
__entry->proc = proc->pid;
__entry->proc = alloc->pid;
__entry->allocate = allocate;
__entry->offset = start - proc->buffer;
__entry->offset = start - alloc->buffer;
__entry->size = end - start;
),
TP_printk("proc=%d allocate=%d offset=%zu size=%zu",
......@@ -288,6 +291,61 @@ TRACE_EVENT(binder_update_page_range,
__entry->offset, __entry->size)
);
DECLARE_EVENT_CLASS(binder_lru_page_class,
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
TP_ARGS(alloc, page_index),
TP_STRUCT__entry(
__field(int, proc)
__field(size_t, page_index)
),
TP_fast_assign(
__entry->proc = alloc->pid;
__entry->page_index = page_index;
),
TP_printk("proc=%d page_index=%zu",
__entry->proc, __entry->page_index)
);
DEFINE_EVENT(binder_lru_page_class, binder_alloc_lru_start,
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
TP_ARGS(alloc, page_index));
DEFINE_EVENT(binder_lru_page_class, binder_alloc_lru_end,
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
TP_ARGS(alloc, page_index));
DEFINE_EVENT(binder_lru_page_class, binder_free_lru_start,
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
TP_ARGS(alloc, page_index));
DEFINE_EVENT(binder_lru_page_class, binder_free_lru_end,
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
TP_ARGS(alloc, page_index));
DEFINE_EVENT(binder_lru_page_class, binder_alloc_page_start,
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
TP_ARGS(alloc, page_index));
DEFINE_EVENT(binder_lru_page_class, binder_alloc_page_end,
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
TP_ARGS(alloc, page_index));
DEFINE_EVENT(binder_lru_page_class, binder_unmap_user_start,
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
TP_ARGS(alloc, page_index));
DEFINE_EVENT(binder_lru_page_class, binder_unmap_user_end,
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
TP_ARGS(alloc, page_index));
DEFINE_EVENT(binder_lru_page_class, binder_unmap_kernel_start,
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
TP_ARGS(alloc, page_index));
DEFINE_EVENT(binder_lru_page_class, binder_unmap_kernel_end,
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
TP_ARGS(alloc, page_index));
TRACE_EVENT(binder_command,
TP_PROTO(uint32_t cmd),
TP_ARGS(cmd),
......
......@@ -877,21 +877,21 @@ static void lcd_clear_fast_tilcd(struct charlcd *charlcd)
spin_unlock_irq(&pprt_lock);
}
static struct charlcd_ops charlcd_serial_ops = {
static const struct charlcd_ops charlcd_serial_ops = {
.write_cmd = lcd_write_cmd_s,
.write_data = lcd_write_data_s,
.clear_fast = lcd_clear_fast_s,
.backlight = lcd_backlight,
};
static struct charlcd_ops charlcd_parallel_ops = {
static const struct charlcd_ops charlcd_parallel_ops = {
.write_cmd = lcd_write_cmd_p8,
.write_data = lcd_write_data_p8,
.clear_fast = lcd_clear_fast_p8,
.backlight = lcd_backlight,
};
static struct charlcd_ops charlcd_tilcd_ops = {
static const struct charlcd_ops charlcd_tilcd_ops = {
.write_cmd = lcd_write_cmd_tilcd,
.write_data = lcd_write_data_tilcd,
.clear_fast = lcd_clear_fast_tilcd,
......
......@@ -67,7 +67,7 @@ static char *applicom_pci_devnames[] = {
"PCI2000PFB"
};
static struct pci_device_id applicom_pci_tbl[] = {
static const struct pci_device_id applicom_pci_tbl[] = {
{ PCI_VDEVICE(APPLICOM, PCI_DEVICE_ID_APPLICOM_PCIGENERIC) },
{ PCI_VDEVICE(APPLICOM, PCI_DEVICE_ID_APPLICOM_PCI2000IBS_CAN) },
{ PCI_VDEVICE(APPLICOM, PCI_DEVICE_ID_APPLICOM_PCI2000PFB) },
......
......@@ -128,10 +128,11 @@ int smapi_query_DSP_cfg(SMAPI_DSP_SETTINGS * pSettings)
{
int bRC = -EIO;
unsigned short usAX, usBX, usCX, usDX, usDI, usSI;
unsigned short ausDspBases[] = { 0x0030, 0x4E30, 0x8E30, 0xCE30, 0x0130, 0x0350, 0x0070, 0x0DB0 };
unsigned short ausUartBases[] = { 0x03F8, 0x02F8, 0x03E8, 0x02E8 };
unsigned short numDspBases = 8;
unsigned short numUartBases = 4;
static const unsigned short ausDspBases[] = {
0x0030, 0x4E30, 0x8E30, 0xCE30,
0x0130, 0x0350, 0x0070, 0x0DB0 };
static const unsigned short ausUartBases[] = {
0x03F8, 0x02F8, 0x03E8, 0x02E8 };
PRINTK_1(TRACE_SMAPI, "smapi::smapi_query_DSP_cfg entry\n");
......@@ -148,7 +149,7 @@ int smapi_query_DSP_cfg(SMAPI_DSP_SETTINGS * pSettings)
pSettings->bDSPEnabled = ((usCX & 0x0001) != 0);
pSettings->usDspIRQ = usSI & 0x00FF;
pSettings->usDspDMA = (usSI & 0xFF00) >> 8;
if ((usDI & 0x00FF) < numDspBases) {
if ((usDI & 0x00FF) < ARRAY_SIZE(ausDspBases)) {
pSettings->usDspBaseIO = ausDspBases[usDI & 0x00FF];
} else {
pSettings->usDspBaseIO = 0;
......@@ -176,7 +177,7 @@ int smapi_query_DSP_cfg(SMAPI_DSP_SETTINGS * pSettings)
pSettings->bModemEnabled = ((usCX & 0x0001) != 0);
pSettings->usUartIRQ = usSI & 0x000F;
if (((usSI & 0xFF00) >> 8) < numUartBases) {
if (((usSI & 0xFF00) >> 8) < ARRAY_SIZE(ausUartBases)) {
pSettings->usUartBaseIO = ausUartBases[(usSI & 0xFF00) >> 8];
} else {
pSettings->usUartBaseIO = 0;
......@@ -205,15 +206,16 @@ int smapi_set_DSP_cfg(void)
int bRC = -EIO;
int i;
unsigned short usAX, usBX, usCX, usDX, usDI, usSI;
unsigned short ausDspBases[] = { 0x0030, 0x4E30, 0x8E30, 0xCE30, 0x0130, 0x0350, 0x0070, 0x0DB0 };
unsigned short ausUartBases[] = { 0x03F8, 0x02F8, 0x03E8, 0x02E8 };
unsigned short ausDspIrqs[] = { 5, 7, 10, 11, 15 };
unsigned short ausUartIrqs[] = { 3, 4 };
unsigned short numDspBases = 8;
unsigned short numUartBases = 4;
unsigned short numDspIrqs = 5;
unsigned short numUartIrqs = 2;
static const unsigned short ausDspBases[] = {
0x0030, 0x4E30, 0x8E30, 0xCE30,
0x0130, 0x0350, 0x0070, 0x0DB0 };
static const unsigned short ausUartBases[] = {
0x03F8, 0x02F8, 0x03E8, 0x02E8 };
static const unsigned short ausDspIrqs[] = {
5, 7, 10, 11, 15 };
static const unsigned short ausUartIrqs[] = {
3, 4 };
unsigned short dspio_index = 0, uartio_index = 0;
PRINTK_5(TRACE_SMAPI,
......@@ -221,11 +223,11 @@ int smapi_set_DSP_cfg(void)
mwave_3780i_irq, mwave_3780i_io, mwave_uart_irq, mwave_uart_io);
if (mwave_3780i_io) {
for (i = 0; i < numDspBases; i++) {
for (i = 0; i < ARRAY_SIZE(ausDspBases); i++) {
if (mwave_3780i_io == ausDspBases[i])
break;
}
if (i == numDspBases) {
if (i == ARRAY_SIZE(ausDspBases)) {
PRINTK_ERROR(KERN_ERR_MWAVE "smapi::smapi_set_DSP_cfg: Error: Invalid mwave_3780i_io address %x. Aborting.\n", mwave_3780i_io);
return bRC;
}
......@@ -233,22 +235,22 @@ int smapi_set_DSP_cfg(void)
}
if (mwave_3780i_irq) {
for (i = 0; i < numDspIrqs; i++) {
for (i = 0; i < ARRAY_SIZE(ausDspIrqs); i++) {
if (mwave_3780i_irq == ausDspIrqs[i])
break;
}
if (i == numDspIrqs) {
if (i == ARRAY_SIZE(ausDspIrqs)) {
PRINTK_ERROR(KERN_ERR_MWAVE "smapi::smapi_set_DSP_cfg: Error: Invalid mwave_3780i_irq %x. Aborting.\n", mwave_3780i_irq);
return bRC;
}
}
if (mwave_uart_io) {
for (i = 0; i < numUartBases; i++) {
for (i = 0; i < ARRAY_SIZE(ausUartBases); i++) {
if (mwave_uart_io == ausUartBases[i])
break;
}
if (i == numUartBases) {
if (i == ARRAY_SIZE(ausUartBases)) {
PRINTK_ERROR(KERN_ERR_MWAVE "smapi::smapi_set_DSP_cfg: Error: Invalid mwave_uart_io address %x. Aborting.\n", mwave_uart_io);
return bRC;
}
......@@ -257,11 +259,11 @@ int smapi_set_DSP_cfg(void)
if (mwave_uart_irq) {
for (i = 0; i < numUartIrqs; i++) {
for (i = 0; i < ARRAY_SIZE(ausUartIrqs); i++) {
if (mwave_uart_irq == ausUartIrqs[i])
break;
}
if (i == numUartIrqs) {
if (i == ARRAY_SIZE(ausUartIrqs)) {
PRINTK_ERROR(KERN_ERR_MWAVE "smapi::smapi_set_DSP_cfg: Error: Invalid mwave_uart_irq %x. Aborting.\n", mwave_uart_irq);
return bRC;
}
......
......@@ -101,9 +101,6 @@ static DEFINE_IDA(ida_index);
#define PP_BUFFER_SIZE 1024
#define PARDEVICE_MAX 8
/* ROUND_UP macro from fs/select.c */
#define ROUND_UP(x,y) (((x)+(y)-1)/(y))
static DEFINE_MUTEX(pp_do_mutex);
/* define fixed sized ioctl cmd for y2038 migration */
......
......@@ -766,7 +766,7 @@ static struct attribute *tlclk_sysfs_entries[] = {
NULL
};
static struct attribute_group tlclk_attribute_group = {
static const struct attribute_group tlclk_attribute_group = {
.name = NULL, /* put in device directory */
.attrs = tlclk_sysfs_entries,
};
......
......@@ -1308,7 +1308,7 @@ static struct attribute *port_sysfs_entries[] = {
NULL
};
static struct attribute_group port_attribute_group = {
static const struct attribute_group port_attribute_group = {
.name = NULL, /* put in device directory */
.attrs = port_sysfs_entries,
};
......
......@@ -86,8 +86,7 @@
#include <linux/cdev.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <asm/io.h>
#include <linux/io.h>
#include <linux/uaccess.h>
#ifdef CONFIG_OF
......@@ -222,6 +221,8 @@ static const struct config_registers v6_config_registers = {
* hwicap_command_desync - Send a DESYNC command to the ICAP port.
* @drvdata: a pointer to the drvdata.
*
* Returns: '0' on success and failure value on error
*
* This command desynchronizes the ICAP After this command, a
* bitstream containing a NULL packet, followed by a SYNCH packet is
* required before the ICAP will recognize commands.
......@@ -251,10 +252,12 @@ static int hwicap_command_desync(struct hwicap_drvdata *drvdata)
* hwicap_get_configuration_register - Query a configuration register.
* @drvdata: a pointer to the drvdata.
* @reg: a constant which represents the configuration
* register value to be returned.
* Examples: XHI_IDCODE, XHI_FLR.
* register value to be returned.
* Examples: XHI_IDCODE, XHI_FLR.
* @reg_data: returns the value of the register.
*
* Returns: '0' on success and failure value on error
*
* Sends a query packet to the ICAP and then receives the response.
* The icap is left in Synched state.
*/
......@@ -320,7 +323,8 @@ static int hwicap_initialize_hwicap(struct hwicap_drvdata *drvdata)
dev_dbg(drvdata->dev, "initializing\n");
/* Abort any current transaction, to make sure we have the
* ICAP in a good state. */
* ICAP in a good state.
*/
dev_dbg(drvdata->dev, "Reset...\n");
drvdata->config->reset(drvdata);
......@@ -632,7 +636,6 @@ static int hwicap_setup(struct device *dev, int id,
drvdata = kzalloc(sizeof(struct hwicap_drvdata), GFP_KERNEL);
if (!drvdata) {
dev_err(dev, "Couldn't allocate device private record\n");
retval = -ENOMEM;
goto failed0;
}
......@@ -759,20 +762,20 @@ static int hwicap_of_probe(struct platform_device *op,
id = of_get_property(op->dev.of_node, "port-number", NULL);
/* It's most likely that we're using V4, if the family is not
specified */
* specified
*/
regs = &v4_config_registers;
family = of_get_property(op->dev.of_node, "xlnx,family", NULL);
if (family) {
if (!strcmp(family, "virtex2p")) {
if (!strcmp(family, "virtex2p"))
regs = &v2_config_registers;
} else if (!strcmp(family, "virtex4")) {
else if (!strcmp(family, "virtex4"))
regs = &v4_config_registers;
} else if (!strcmp(family, "virtex5")) {
else if (!strcmp(family, "virtex5"))
regs = &v5_config_registers;
} else if (!strcmp(family, "virtex6")) {
else if (!strcmp(family, "virtex6"))
regs = &v6_config_registers;
}
}
return hwicap_setup(&op->dev, id ? *id : -1, &res, config,
regs);
......@@ -802,20 +805,20 @@ static int hwicap_drv_probe(struct platform_device *pdev)
return -ENODEV;
/* It's most likely that we're using V4, if the family is not
specified */
* specified
*/
regs = &v4_config_registers;
family = pdev->dev.platform_data;
if (family) {
if (!strcmp(family, "virtex2p")) {
if (!strcmp(family, "virtex2p"))
regs = &v2_config_registers;
} else if (!strcmp(family, "virtex4")) {
else if (!strcmp(family, "virtex4"))
regs = &v4_config_registers;
} else if (!strcmp(family, "virtex5")) {
else if (!strcmp(family, "virtex5"))
regs = &v5_config_registers;
} else if (!strcmp(family, "virtex6")) {
else if (!strcmp(family, "virtex6"))
regs = &v6_config_registers;
}
}
return hwicap_setup(&pdev->dev, pdev->id, res,
......
......@@ -62,11 +62,13 @@ struct hwicap_drvdata {
struct hwicap_driver_config {
/* Read configuration data given by size into the data buffer.
Return 0 if successful. */
* Return 0 if successful.
*/
int (*get_configuration)(struct hwicap_drvdata *drvdata, u32 *data,
u32 size);
/* Write configuration data given by size from the data buffer.
Return 0 if successful. */
* Return 0 if successful.
*/
int (*set_configuration)(struct hwicap_drvdata *drvdata, u32 *data,
u32 size);
/* Get the status register, bit pattern given by:
......@@ -193,11 +195,12 @@ struct config_registers {
* hwicap_type_1_read - Generates a Type 1 read packet header.
* @reg: is the address of the register to be read back.
*
* Return:
* Generates a Type 1 read packet header, which is used to indirectly
* read registers in the configuration logic. This packet must then
* be sent through the icap device, and a return packet received with
* the information.
**/
*/
static inline u32 hwicap_type_1_read(u32 reg)
{
return (XHI_TYPE_1 << XHI_TYPE_SHIFT) |
......@@ -208,7 +211,9 @@ static inline u32 hwicap_type_1_read(u32 reg)
/**
* hwicap_type_1_write - Generates a Type 1 write packet header
* @reg: is the address of the register to be read back.
**/
*
* Return: Type 1 write packet header
*/
static inline u32 hwicap_type_1_write(u32 reg)
{
return (XHI_TYPE_1 << XHI_TYPE_SHIFT) |
......
......@@ -150,4 +150,11 @@ config EXTCON_USB_GPIO
Say Y here to enable GPIO based USB cable detection extcon support.
Used typically if GPIO is used for USB ID pin detection.
config EXTCON_USBC_CROS_EC
tristate "ChromeOS Embedded Controller EXTCON support"
depends on MFD_CROS_EC
help
Say Y here to enable USB Type C cable detection extcon support when
using Chrome OS EC based USB Type-C ports.
endif
......@@ -20,3 +20,4 @@ obj-$(CONFIG_EXTCON_QCOM_SPMI_MISC) += extcon-qcom-spmi-misc.o
obj-$(CONFIG_EXTCON_RT8973A) += extcon-rt8973a.o
obj-$(CONFIG_EXTCON_SM5502) += extcon-sm5502.o
obj-$(CONFIG_EXTCON_USB_GPIO) += extcon-usb-gpio.o
obj-$(CONFIG_EXTCON_USBC_CROS_EC) += extcon-usbc-cros-ec.o
/*
* drivers/extcon/devres.c - EXTCON device's resource management
* drivers/extcon/devres.c - EXTCON device's resource management
*
* Copyright (C) 2016 Samsung Electronics
* Author: Chanwoo Choi <cw00.choi@samsung.com>
......@@ -59,10 +59,9 @@ static void devm_extcon_dev_notifier_all_unreg(struct device *dev, void *res)
/**
* devm_extcon_dev_allocate - Allocate managed extcon device
* @dev: device owning the extcon device being created
* @supported_cable: Array of supported extcon ending with EXTCON_NONE.
* If supported_cable is NULL, cable name related APIs
* are disabled.
* @dev: the device owning the extcon device being created
* @supported_cable: the array of the supported external connectors
* ending with EXTCON_NONE.
*
* This function manages automatically the memory of extcon device using device
* resource management and simplify the control of freeing the memory of extcon
......@@ -97,8 +96,8 @@ EXPORT_SYMBOL_GPL(devm_extcon_dev_allocate);
/**
* devm_extcon_dev_free() - Resource-managed extcon_dev_unregister()
* @dev: device the extcon belongs to
* @edev: the extcon device to unregister
* @dev: the device owning the extcon device being created
* @edev: the extcon device to be freed
*
* Free the memory that is allocated with devm_extcon_dev_allocate()
* function.
......@@ -112,10 +111,9 @@ EXPORT_SYMBOL_GPL(devm_extcon_dev_free);
/**
* devm_extcon_dev_register() - Resource-managed extcon_dev_register()
* @dev: device to allocate extcon device
* @edev: the new extcon device to register
* @dev: the device owning the extcon device being created
* @edev: the extcon device to be registered
*
* Managed extcon_dev_register() function. If extcon device is attached with
* this function, that extcon device is automatically unregistered on driver
* detach. Internally this function calls extcon_dev_register() function.
* To get more information, refer that function.
......@@ -149,8 +147,8 @@ EXPORT_SYMBOL_GPL(devm_extcon_dev_register);
/**
* devm_extcon_dev_unregister() - Resource-managed extcon_dev_unregister()
* @dev: device the extcon belongs to
* @edev: the extcon device to unregister
* @dev: the device owning the extcon device being created
* @edev: the extcon device to be unregistered
*
* Unregister extcon device that is registered with devm_extcon_dev_register()
* function.
......@@ -164,10 +162,10 @@ EXPORT_SYMBOL_GPL(devm_extcon_dev_unregister);
/**
* devm_extcon_register_notifier() - Resource-managed extcon_register_notifier()
* @dev: device to allocate extcon device
* @edev: the extcon device that has the external connecotr.
* @id: the unique id of each external connector in extcon enumeration.
* @nb: a notifier block to be registered.
* @dev: the device owning the extcon device being created
* @edev: the extcon device
* @id: the unique id among the extcon enumeration
* @nb: a notifier block to be registered
*
* This function manages automatically the notifier of extcon device using
* device resource management and simplify the control of unregistering
......@@ -208,10 +206,10 @@ EXPORT_SYMBOL(devm_extcon_register_notifier);
/**
* devm_extcon_unregister_notifier()
- Resource-managed extcon_unregister_notifier()
* @dev: device to allocate extcon device
* @edev: the extcon device that has the external connecotr.
* @id: the unique id of each external connector in extcon enumeration.
* @nb: a notifier block to be registered.
* @dev: the device owning the extcon device being created
* @edev: the extcon device
* @id: the unique id among the extcon enumeration
* @nb: a notifier block to be registered
*/
void devm_extcon_unregister_notifier(struct device *dev,
struct extcon_dev *edev, unsigned int id,
......@@ -225,9 +223,9 @@ EXPORT_SYMBOL(devm_extcon_unregister_notifier);
/**
* devm_extcon_register_notifier_all()
* - Resource-managed extcon_register_notifier_all()
* @dev: device to allocate extcon device
* @edev: the extcon device that has the external connecotr.
* @nb: a notifier block to be registered.
* @dev: the device owning the extcon device being created
* @edev: the extcon device
* @nb: a notifier block to be registered
*
* This function manages automatically the notifier of extcon device using
* device resource management and simplify the control of unregistering
......@@ -263,9 +261,9 @@ EXPORT_SYMBOL(devm_extcon_register_notifier_all);
/**
* devm_extcon_unregister_notifier_all()
* - Resource-managed extcon_unregister_notifier_all()
* @dev: device to allocate extcon device
* @edev: the extcon device that has the external connecotr.
* @nb: a notifier block to be registered.
* @dev: the device owning the extcon device being created
* @edev: the extcon device
* @nb: a notifier block to be registered
*/
void devm_extcon_unregister_notifier_all(struct device *dev,
struct extcon_dev *edev,
......
......@@ -171,7 +171,7 @@ static int int3496_remove(struct platform_device *pdev)
return 0;
}
static struct acpi_device_id int3496_acpi_match[] = {
static const struct acpi_device_id int3496_acpi_match[] = {
{ "INT3496" },
{ }
};
......
......@@ -811,9 +811,8 @@ static int max77693_muic_chg_handler(struct max77693_muic_info *info)
*/
extcon_set_state_sync(info->edev, EXTCON_CHG_USB_DCP,
attached);
if (!cable_attached)
extcon_set_state_sync(info->edev,
EXTCON_DISP_MHL, cable_attached);
extcon_set_state_sync(info->edev, EXTCON_DISP_MHL,
cable_attached);
break;
}
......
/**
* drivers/extcon/extcon-usbc-cros-ec - ChromeOS Embedded Controller extcon
*
* Copyright (C) 2017 Google, Inc
* Author: Benson Leung <bleung@chromium.org>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/extcon.h>
#include <linux/kernel.h>
#include <linux/mfd/cros_ec.h>
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/sched.h>
struct cros_ec_extcon_info {
struct device *dev;
struct extcon_dev *edev;
int port_id;
struct cros_ec_device *ec;
struct notifier_block notifier;
bool dp; /* DisplayPort enabled */
bool mux; /* SuperSpeed (usb3) enabled */
unsigned int power_type;
};
static const unsigned int usb_type_c_cable[] = {
EXTCON_DISP_DP,
EXTCON_NONE,
};
/**
* cros_ec_pd_command() - Send a command to the EC.
* @info: pointer to struct cros_ec_extcon_info
* @command: EC command
* @version: EC command version
* @outdata: EC command output data
* @outsize: Size of outdata
* @indata: EC command input data
* @insize: Size of indata
*
* Return: 0 on success, <0 on failure.
*/
static int cros_ec_pd_command(struct cros_ec_extcon_info *info,
unsigned int command,
unsigned int version,
void *outdata,
unsigned int outsize,
void *indata,
unsigned int insize)
{
struct cros_ec_command *msg;
int ret;
msg = kzalloc(sizeof(*msg) + max(outsize, insize), GFP_KERNEL);
if (!msg)
return -ENOMEM;
msg->version = version;
msg->command = command;
msg->outsize = outsize;
msg->insize = insize;
if (outsize)
memcpy(msg->data, outdata, outsize);
ret = cros_ec_cmd_xfer_status(info->ec, msg);
if (ret >= 0 && insize)
memcpy(indata, msg->data, insize);
kfree(msg);
return ret;
}
/**
* cros_ec_usb_get_power_type() - Get power type info about PD device attached
* to given port.
* @info: pointer to struct cros_ec_extcon_info
*
* Return: power type on success, <0 on failure.
*/
static int cros_ec_usb_get_power_type(struct cros_ec_extcon_info *info)
{
struct ec_params_usb_pd_power_info req;
struct ec_response_usb_pd_power_info resp;
int ret;
req.port = info->port_id;
ret = cros_ec_pd_command(info, EC_CMD_USB_PD_POWER_INFO, 0,
&req, sizeof(req), &resp, sizeof(resp));
if (ret < 0)
return ret;
return resp.type;
}
/**
* cros_ec_usb_get_pd_mux_state() - Get PD mux state for given port.
* @info: pointer to struct cros_ec_extcon_info
*
* Return: PD mux state on success, <0 on failure.
*/
static int cros_ec_usb_get_pd_mux_state(struct cros_ec_extcon_info *info)
{
struct ec_params_usb_pd_mux_info req;
struct ec_response_usb_pd_mux_info resp;
int ret;
req.port = info->port_id;
ret = cros_ec_pd_command(info, EC_CMD_USB_PD_MUX_INFO, 0,
&req, sizeof(req),
&resp, sizeof(resp));
if (ret < 0)
return ret;
return resp.flags;
}
/**
* cros_ec_usb_get_role() - Get role info about possible PD device attached to a
* given port.
* @info: pointer to struct cros_ec_extcon_info
* @polarity: pointer to cable polarity (return value)
*
* Return: role info on success, -ENOTCONN if no cable is connected, <0 on
* failure.
*/
static int cros_ec_usb_get_role(struct cros_ec_extcon_info *info,
bool *polarity)
{
struct ec_params_usb_pd_control pd_control;
struct ec_response_usb_pd_control_v1 resp;
int ret;
pd_control.port = info->port_id;
pd_control.role = USB_PD_CTRL_ROLE_NO_CHANGE;
pd_control.mux = USB_PD_CTRL_MUX_NO_CHANGE;
ret = cros_ec_pd_command(info, EC_CMD_USB_PD_CONTROL, 1,
&pd_control, sizeof(pd_control),
&resp, sizeof(resp));
if (ret < 0)
return ret;
if (!(resp.enabled & PD_CTRL_RESP_ENABLED_CONNECTED))
return -ENOTCONN;
*polarity = resp.polarity;
return resp.role;
}
/**
* cros_ec_pd_get_num_ports() - Get number of EC charge ports.
* @info: pointer to struct cros_ec_extcon_info
*
* Return: number of ports on success, <0 on failure.
*/
static int cros_ec_pd_get_num_ports(struct cros_ec_extcon_info *info)
{
struct ec_response_usb_pd_ports resp;
int ret;
ret = cros_ec_pd_command(info, EC_CMD_USB_PD_PORTS,
0, NULL, 0, &resp, sizeof(resp));
if (ret < 0)
return ret;
return resp.num_ports;
}
static int extcon_cros_ec_detect_cable(struct cros_ec_extcon_info *info,
bool force)
{
struct device *dev = info->dev;
int role, power_type;
bool polarity = false;
bool dp = false;
bool mux = false;
bool hpd = false;
power_type = cros_ec_usb_get_power_type(info);
if (power_type < 0) {
dev_err(dev, "failed getting power type err = %d\n",
power_type);
return power_type;
}
role = cros_ec_usb_get_role(info, &polarity);
if (role < 0) {
if (role != -ENOTCONN) {
dev_err(dev, "failed getting role err = %d\n", role);
return role;
}
} else {
int pd_mux_state;
pd_mux_state = cros_ec_usb_get_pd_mux_state(info);
if (pd_mux_state < 0)
pd_mux_state = USB_PD_MUX_USB_ENABLED;
dp = pd_mux_state & USB_PD_MUX_DP_ENABLED;
mux = pd_mux_state & USB_PD_MUX_USB_ENABLED;
hpd = pd_mux_state & USB_PD_MUX_HPD_IRQ;
}
if (force || info->dp != dp || info->mux != mux ||
info->power_type != power_type) {
info->dp = dp;
info->mux = mux;
info->power_type = power_type;
extcon_set_state(info->edev, EXTCON_DISP_DP, dp);
extcon_set_property(info->edev, EXTCON_DISP_DP,
EXTCON_PROP_USB_TYPEC_POLARITY,
(union extcon_property_value)(int)polarity);
extcon_set_property(info->edev, EXTCON_DISP_DP,
EXTCON_PROP_USB_SS,
(union extcon_property_value)(int)mux);
extcon_set_property(info->edev, EXTCON_DISP_DP,
EXTCON_PROP_DISP_HPD,
(union extcon_property_value)(int)hpd);
extcon_sync(info->edev, EXTCON_DISP_DP);
} else if (hpd) {
extcon_set_property(info->edev, EXTCON_DISP_DP,
EXTCON_PROP_DISP_HPD,
(union extcon_property_value)(int)hpd);
extcon_sync(info->edev, EXTCON_DISP_DP);
}
return 0;
}
static int extcon_cros_ec_event(struct notifier_block *nb,
unsigned long queued_during_suspend,
void *_notify)
{
struct cros_ec_extcon_info *info;
struct cros_ec_device *ec;
u32 host_event;
info = container_of(nb, struct cros_ec_extcon_info, notifier);
ec = info->ec;
host_event = cros_ec_get_host_event(ec);
if (host_event & (EC_HOST_EVENT_MASK(EC_HOST_EVENT_PD_MCU) |
EC_HOST_EVENT_MASK(EC_HOST_EVENT_USB_MUX))) {
extcon_cros_ec_detect_cable(info, false);
return NOTIFY_OK;
}
return NOTIFY_DONE;
}
static int extcon_cros_ec_probe(struct platform_device *pdev)
{
struct cros_ec_extcon_info *info;
struct cros_ec_device *ec = dev_get_drvdata(pdev->dev.parent);
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
int numports, ret;
info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
info->dev = dev;
info->ec = ec;
if (np) {
u32 port;
ret = of_property_read_u32(np, "google,usb-port-id", &port);
if (ret < 0) {
dev_err(dev, "Missing google,usb-port-id property\n");
return ret;
}
info->port_id = port;
} else {
info->port_id = pdev->id;
}
numports = cros_ec_pd_get_num_ports(info);
if (numports < 0) {
dev_err(dev, "failed getting number of ports! ret = %d\n",
numports);
return numports;
}
if (info->port_id >= numports) {
dev_err(dev, "This system only supports %d ports\n", numports);
return -ENODEV;
}
info->edev = devm_extcon_dev_allocate(dev, usb_type_c_cable);
if (IS_ERR(info->edev)) {
dev_err(dev, "failed to allocate extcon device\n");
return -ENOMEM;
}
ret = devm_extcon_dev_register(dev, info->edev);
if (ret < 0) {
dev_err(dev, "failed to register extcon device\n");
return ret;
}
extcon_set_property_capability(info->edev, EXTCON_DISP_DP,
EXTCON_PROP_USB_TYPEC_POLARITY);
extcon_set_property_capability(info->edev, EXTCON_DISP_DP,
EXTCON_PROP_USB_SS);
extcon_set_property_capability(info->edev, EXTCON_DISP_DP,
EXTCON_PROP_DISP_HPD);
platform_set_drvdata(pdev, info);
/* Get PD events from the EC */
info->notifier.notifier_call = extcon_cros_ec_event;
ret = blocking_notifier_chain_register(&info->ec->event_notifier,
&info->notifier);
if (ret < 0) {
dev_err(dev, "failed to register notifier\n");
return ret;
}
/* Perform initial detection */
ret = extcon_cros_ec_detect_cable(info, true);
if (ret < 0) {
dev_err(dev, "failed to detect initial cable state\n");
goto unregister_notifier;
}
return 0;
unregister_notifier:
blocking_notifier_chain_unregister(&info->ec->event_notifier,
&info->notifier);
return ret;
}
static int extcon_cros_ec_remove(struct platform_device *pdev)
{
struct cros_ec_extcon_info *info = platform_get_drvdata(pdev);
blocking_notifier_chain_unregister(&info->ec->event_notifier,
&info->notifier);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int extcon_cros_ec_suspend(struct device *dev)
{
return 0;
}
static int extcon_cros_ec_resume(struct device *dev)
{
int ret;
struct cros_ec_extcon_info *info = dev_get_drvdata(dev);
ret = extcon_cros_ec_detect_cable(info, true);
if (ret < 0)
dev_err(dev, "failed to detect cable state on resume\n");
return 0;
}
static const struct dev_pm_ops extcon_cros_ec_dev_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(extcon_cros_ec_suspend, extcon_cros_ec_resume)
};
#define DEV_PM_OPS (&extcon_cros_ec_dev_pm_ops)
#else
#define DEV_PM_OPS NULL
#endif /* CONFIG_PM_SLEEP */
#ifdef CONFIG_OF
static const struct of_device_id extcon_cros_ec_of_match[] = {
{ .compatible = "google,extcon-usbc-cros-ec" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, extcon_cros_ec_of_match);
#endif /* CONFIG_OF */
static struct platform_driver extcon_cros_ec_driver = {
.driver = {
.name = "extcon-usbc-cros-ec",
.of_match_table = of_match_ptr(extcon_cros_ec_of_match),
.pm = DEV_PM_OPS,
},
.remove = extcon_cros_ec_remove,
.probe = extcon_cros_ec_probe,
};
module_platform_driver(extcon_cros_ec_driver);
MODULE_DESCRIPTION("ChromeOS Embedded Controller extcon driver");
MODULE_AUTHOR("Benson Leung <bleung@chromium.org>");
MODULE_LICENSE("GPL");
(This diff has been collapsed.)
......@@ -202,7 +202,7 @@ static int vpd_section_init(const char *name, struct vpd_section *sec,
sec->raw_name = kasprintf(GFP_KERNEL, "%s_raw", name);
if (!sec->raw_name) {
err = -ENOMEM;
goto err_iounmap;
goto err_memunmap;
}
sysfs_bin_attr_init(&sec->bin_attr);
......@@ -233,8 +233,8 @@ static int vpd_section_init(const char *name, struct vpd_section *sec,
sysfs_remove_bin_file(vpd_kobj, &sec->bin_attr);
err_free_raw_name:
kfree(sec->raw_name);
err_iounmap:
iounmap(sec->baseaddr);
err_memunmap:
memunmap(sec->baseaddr);
return err;
}
......@@ -245,7 +245,7 @@ static int vpd_section_destroy(struct vpd_section *sec)
kobject_put(sec->kobj);
sysfs_remove_bin_file(vpd_kobj, &sec->bin_attr);
kfree(sec->raw_name);
iounmap(sec->baseaddr);
memunmap(sec->baseaddr);
}
return 0;
......@@ -262,7 +262,7 @@ static int vpd_sections_init(phys_addr_t physaddr)
return -ENOMEM;
memcpy_fromio(&header, temp, sizeof(struct vpd_cbmem));
iounmap(temp);
memunmap(temp);
if (header.magic != VPD_CBMEM_MAGIC)
return -ENODEV;
......
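For context only, not part of the commit: the hunks above switch the error and teardown paths to memunmap(), the counterpart of memremap(), which returns a plain kernel pointer for RAM-backed ranges rather than the __iomem cookie that ioremap()/iounmap() deal in. A minimal sketch of that pairing; the function name and the phys/len arguments are placeholders.

#include <linux/io.h>

/* Sketch only: vpd_map_example() and its arguments are hypothetical. */
static int vpd_map_example(phys_addr_t phys, size_t len)
{
	void *base;

	/* Cacheable mapping of ordinary memory, e.g. a coreboot table. */
	base = memremap(phys, len, MEMREMAP_WB);
	if (!base)
		return -ENOMEM;

	/* ... parse the structure through 'base' ... */

	memunmap(base);	/* pairs with memremap(), not with iounmap() */
	return 0;
}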
......@@ -6,6 +6,7 @@ fmc-y += fmc-match.o
fmc-y += fmc-sdb.o
fmc-y += fru-parse.o
fmc-y += fmc-dump.o
fmc-y += fmc-debug.o
obj-$(CONFIG_FMC_FAKEDEV) += fmc-fakedev.o
obj-$(CONFIG_FMC_TRIVIAL) += fmc-trivial.o
......
......@@ -129,8 +129,7 @@ static int fc_probe(struct fmc_device *fmc)
struct fc_instance *fc;
if (fmc->op->validate)
index = fmc->op->validate(fmc, &fc_drv);
index = fmc_validate(fmc, &fc_drv);
if (index < 0)
return -EINVAL; /* not our device: invalid */
......
......@@ -13,6 +13,9 @@
#include <linux/init.h>
#include <linux/device.h>
#include <linux/fmc.h>
#include <linux/fmc-sdb.h>
#include "fmc-private.h"
static int fmc_check_version(unsigned long version, const char *name)
{
......@@ -118,6 +121,61 @@ static struct bin_attribute fmc_eeprom_attr = {
.write = fmc_write_eeprom,
};
int fmc_irq_request(struct fmc_device *fmc, irq_handler_t h,
char *name, int flags)
{
if (fmc->op->irq_request)
return fmc->op->irq_request(fmc, h, name, flags);
return -EPERM;
}
EXPORT_SYMBOL(fmc_irq_request);
void fmc_irq_free(struct fmc_device *fmc)
{
if (fmc->op->irq_free)
fmc->op->irq_free(fmc);
}
EXPORT_SYMBOL(fmc_irq_free);
void fmc_irq_ack(struct fmc_device *fmc)
{
if (likely(fmc->op->irq_ack))
fmc->op->irq_ack(fmc);
}
EXPORT_SYMBOL(fmc_irq_ack);
int fmc_validate(struct fmc_device *fmc, struct fmc_driver *drv)
{
if (fmc->op->validate)
return fmc->op->validate(fmc, drv);
return -EPERM;
}
EXPORT_SYMBOL(fmc_validate);
int fmc_gpio_config(struct fmc_device *fmc, struct fmc_gpio *gpio, int ngpio)
{
if (fmc->op->gpio_config)
return fmc->op->gpio_config(fmc, gpio, ngpio);
return -EPERM;
}
EXPORT_SYMBOL(fmc_gpio_config);
int fmc_read_ee(struct fmc_device *fmc, int pos, void *d, int l)
{
if (fmc->op->read_ee)
return fmc->op->read_ee(fmc, pos, d, l);
return -EPERM;
}
EXPORT_SYMBOL(fmc_read_ee);
int fmc_write_ee(struct fmc_device *fmc, int pos, const void *d, int l)
{
if (fmc->op->write_ee)
return fmc->op->write_ee(fmc, pos, d, l);
return -EPERM;
}
EXPORT_SYMBOL(fmc_write_ee);
/*
* Functions for client modules follow
*/
......@@ -141,7 +199,8 @@ EXPORT_SYMBOL(fmc_driver_unregister);
* When a device set is registered, all eeproms must be read
* and all FRUs must be parsed
*/
int fmc_device_register_n(struct fmc_device **devs, int n)
int fmc_device_register_n_gw(struct fmc_device **devs, int n,
struct fmc_gateware *gw)
{
struct fmc_device *fmc, **devarray;
uint32_t device_id;
......@@ -221,6 +280,21 @@ int fmc_device_register_n(struct fmc_device **devs, int n)
else
dev_set_name(&fmc->dev, "%s-%04x", fmc->mezzanine_name,
device_id);
if (gw) {
/*
* The carrier already knows the bitstream to load
* for this set of FMC mezzanines.
*/
ret = fmc->op->reprogram_raw(fmc, NULL,
gw->bitstream, gw->len);
if (ret) {
dev_warn(fmc->hwdev,
"Invalid gateware for FMC mezzanine\n");
goto out;
}
}
ret = device_add(&fmc->dev);
if (ret < 0) {
dev_err(fmc->hwdev, "Slot %i: Failed in registering "
......@@ -234,18 +308,16 @@ int fmc_device_register_n(struct fmc_device **devs, int n)
}
/* This device went well, give information to the user */
fmc_dump_eeprom(fmc);
fmc_dump_sdb(fmc);
fmc_debug_init(fmc);
}
return 0;
out1:
device_del(&fmc->dev);
out:
fmc_free_id_info(fmc);
put_device(&fmc->dev);
kfree(devarray);
for (i--; i >= 0; i--) {
fmc_debug_exit(devs[i]);
sysfs_remove_bin_file(&devs[i]->dev.kobj, &fmc_eeprom_attr);
device_del(&devs[i]->dev);
fmc_free_id_info(devs[i]);
......@@ -254,8 +326,20 @@ int fmc_device_register_n(struct fmc_device **devs, int n)
return ret;
}
EXPORT_SYMBOL(fmc_device_register_n_gw);
int fmc_device_register_n(struct fmc_device **devs, int n)
{
return fmc_device_register_n_gw(devs, n, NULL);
}
EXPORT_SYMBOL(fmc_device_register_n);
int fmc_device_register_gw(struct fmc_device *fmc, struct fmc_gateware *gw)
{
return fmc_device_register_n_gw(&fmc, 1, gw);
}
EXPORT_SYMBOL(fmc_device_register_gw);
int fmc_device_register(struct fmc_device *fmc)
{
return fmc_device_register_n(&fmc, 1);
......@@ -273,6 +357,7 @@ void fmc_device_unregister_n(struct fmc_device **devs, int n)
kfree(devs[0]->devarray);
for (i = 0; i < n; i++) {
fmc_debug_exit(devs[i]);
sysfs_remove_bin_file(&devs[i]->dev.kobj, &fmc_eeprom_attr);
device_del(&devs[i]->dev);
fmc_free_id_info(devs[i]);
......
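For illustration only, not part of the commit: client drivers are expected to call the wrappers added above (fmc_validate(), fmc_irq_request(), fmc_irq_ack(), ...) instead of dereferencing fmc->op directly, so the missing-operation check and the -EPERM fallback live in the core. A minimal probe sketch under that assumption; my_fmc_drv, my_handler and my_probe are hypothetical names (the fmc-trivial changes later in this merge show the same pattern on a real driver).

#include <linux/fmc.h>
#include <linux/interrupt.h>

static struct fmc_driver my_fmc_drv;	/* hypothetical client driver */

static irqreturn_t my_handler(int irq, void *dev_id)
{
	struct fmc_device *fmc = dev_id;

	fmc_irq_ack(fmc);		/* no-op if the carrier lacks irq_ack */
	return IRQ_HANDLED;
}

static int my_probe(struct fmc_device *fmc)
{
	int index, ret;

	/* The core returns -EPERM if the carrier has no validate op. */
	index = fmc_validate(fmc, &my_fmc_drv);
	if (index < 0)
		return -EINVAL;		/* not our device */

	ret = fmc_irq_request(fmc, my_handler, "my-fmc-client", IRQF_SHARED);
	if (ret == -EPERM)
		ret = 0;		/* carrier without IRQ support: tolerate */
	return ret;
}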
/*
* Copyright (C) 2015 CERN (www.cern.ch)
* Author: Federico Vaga <federico.vaga@cern.ch>
*
* Released according to the GNU GPL, version 2 or any later version.
*/
#include <linux/module.h>
#include <linux/device.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <asm/byteorder.h>
#include <linux/fmc.h>
#include <linux/sdb.h>
#include <linux/fmc-sdb.h>
#define FMC_DBG_SDB_DUMP "dump_sdb"
static char *__strip_trailing_space(char *buf, char *str, int len)
{
int i = len - 1;
memcpy(buf, str, len);
buf[len] = '\0';
while (i >= 0 && buf[i] == ' ')
buf[i--] = '\0';
return buf;
}
#define __sdb_string(buf, field) ({ \
BUILD_BUG_ON(sizeof(buf) < sizeof(field)); \
__strip_trailing_space(buf, (void *)(field), sizeof(field)); \
})
/**
* We do not check seq_printf() errors because we want to see things in any case
*/
static void fmc_sdb_dump_recursive(struct fmc_device *fmc, struct seq_file *s,
const struct sdb_array *arr)
{
unsigned long base = arr->baseaddr;
int i, j, n = arr->len, level = arr->level;
char tmp[64];
for (i = 0; i < n; i++) {
union sdb_record *r;
struct sdb_product *p;
struct sdb_component *c;
r = &arr->record[i];
c = &r->dev.sdb_component;
p = &c->product;
for (j = 0; j < level; j++)
seq_printf(s, " ");
switch (r->empty.record_type) {
case sdb_type_interconnect:
seq_printf(s, "%08llx:%08x %.19s\n",
__be64_to_cpu(p->vendor_id),
__be32_to_cpu(p->device_id),
p->name);
break;
case sdb_type_device:
seq_printf(s, "%08llx:%08x %.19s (%08llx-%08llx)\n",
__be64_to_cpu(p->vendor_id),
__be32_to_cpu(p->device_id),
p->name,
__be64_to_cpu(c->addr_first) + base,
__be64_to_cpu(c->addr_last) + base);
break;
case sdb_type_bridge:
seq_printf(s, "%08llx:%08x %.19s (bridge: %08llx)\n",
__be64_to_cpu(p->vendor_id),
__be32_to_cpu(p->device_id),
p->name,
__be64_to_cpu(c->addr_first) + base);
if (IS_ERR(arr->subtree[i])) {
seq_printf(s, "SDB: (bridge error %li)\n",
PTR_ERR(arr->subtree[i]));
break;
}
fmc_sdb_dump_recursive(fmc, s, arr->subtree[i]);
break;
case sdb_type_integration:
seq_printf(s, "integration\n");
break;
case sdb_type_repo_url:
seq_printf(s, "Synthesis repository: %s\n",
__sdb_string(tmp, r->repo_url.repo_url));
break;
case sdb_type_synthesis:
seq_printf(s, "Bitstream '%s' ",
__sdb_string(tmp, r->synthesis.syn_name));
seq_printf(s, "synthesized %08x by %s ",
__be32_to_cpu(r->synthesis.date),
__sdb_string(tmp, r->synthesis.user_name));
seq_printf(s, "(%s version %x), ",
__sdb_string(tmp, r->synthesis.tool_name),
__be32_to_cpu(r->synthesis.tool_version));
seq_printf(s, "commit %pm\n",
r->synthesis.commit_id);
break;
case sdb_type_empty:
seq_printf(s, "empty\n");
break;
default:
seq_printf(s, "UNKNOWN TYPE 0x%02x\n",
r->empty.record_type);
break;
}
}
}
static int fmc_sdb_dump(struct seq_file *s, void *offset)
{
struct fmc_device *fmc = s->private;
if (!fmc->sdb) {
seq_printf(s, "no SDB information\n");
return 0;
}
seq_printf(s, "FMC: %s (%s), slot %i, device %s\n", dev_name(fmc->hwdev),
fmc->carrier_name, fmc->slot_id, dev_name(&fmc->dev));
/* Dump SDB information */
fmc_sdb_dump_recursive(fmc, s, fmc->sdb);
return 0;
}
static int fmc_sdb_dump_open(struct inode *inode, struct file *file)
{
struct fmc_device *fmc = inode->i_private;
return single_open(file, fmc_sdb_dump, fmc);
}
const struct file_operations fmc_dbgfs_sdb_dump = {
.owner = THIS_MODULE,
.open = fmc_sdb_dump_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
int fmc_debug_init(struct fmc_device *fmc)
{
fmc->dbg_dir = debugfs_create_dir(dev_name(&fmc->dev), NULL);
if (IS_ERR_OR_NULL(fmc->dbg_dir)) {
pr_err("FMC: Cannot create debugfs\n");
return PTR_ERR(fmc->dbg_dir);
}
fmc->dbg_sdb_dump = debugfs_create_file(FMC_DBG_SDB_DUMP, 0444,
fmc->dbg_dir, fmc,
&fmc_dbgfs_sdb_dump);
if (IS_ERR_OR_NULL(fmc->dbg_sdb_dump))
pr_err("FMC: Cannot create debugfs file %s\n",
FMC_DBG_SDB_DUMP);
return 0;
}
void fmc_debug_exit(struct fmc_device *fmc)
{
if (fmc->dbg_dir)
debugfs_remove_recursive(fmc->dbg_dir);
}
......@@ -15,8 +15,6 @@
static int fmc_must_dump_eeprom;
module_param_named(dump_eeprom, fmc_must_dump_eeprom, int, 0644);
static int fmc_must_dump_sdb;
module_param_named(dump_sdb, fmc_must_dump_sdb, int, 0644);
#define LINELEN 16
......@@ -59,42 +57,3 @@ void fmc_dump_eeprom(const struct fmc_device *fmc)
for (i = 0; i < fmc->eeprom_len; i += LINELEN, line += LINELEN)
prev = dump_line(i, line, prev);
}
void fmc_dump_sdb(const struct fmc_device *fmc)
{
const uint8_t *line, *prev;
int i, len;
if (!fmc->sdb)
return;
if (!fmc_must_dump_sdb)
return;
/* If the argument is non-zero, do a simple dump (== show) */
if (fmc_must_dump_sdb > 0)
fmc_show_sdb_tree(fmc);
if (fmc_must_dump_sdb == 1)
return;
/* If bigger than 1, dump it seriously, to help debugging */
/*
* Here we should really use libsdbfs (which is designed to
* work in kernel space as well), but it doesn't support
* directories yet, and it requires better integration (it
* should be used instead of fmc-specific code).
*
* So, lazily, just dump the top-level array
*/
pr_info("FMC: %s (%s), slot %i, device %s\n", dev_name(fmc->hwdev),
fmc->carrier_name, fmc->slot_id, dev_name(&fmc->dev));
pr_info("FMC: poor dump of sdb first level:\n");
len = fmc->sdb->len * sizeof(union sdb_record);
line = (void *)fmc->sdb->record;
prev = NULL;
for (i = 0; i < len; i += LINELEN, line += LINELEN)
prev = dump_line(i, line, prev);
return;
}
......@@ -63,7 +63,7 @@ int fmc_fill_id_info(struct fmc_device *fmc)
if (!fmc->eeprom)
return -ENOMEM;
allocated = 1;
ret = fmc->op->read_ee(fmc, 0, fmc->eeprom, fmc->eeprom_len);
ret = fmc_read_ee(fmc, 0, fmc->eeprom, fmc->eeprom_len);
if (ret < 0)
goto out;
}
......
/*
* Copyright (C) 2015 CERN (www.cern.ch)
* Author: Federico Vaga <federico.vaga@cern.ch>
*
* Released according to the GNU GPL, version 2 or any later version.
*/
extern int fmc_debug_init(struct fmc_device *fmc);
extern void fmc_debug_exit(struct fmc_device *fmc);
......@@ -127,12 +127,12 @@ int fmc_free_sdb_tree(struct fmc_device *fmc)
EXPORT_SYMBOL(fmc_free_sdb_tree);
/* This helper calls reprogram and initializes the SDB as well */
int fmc_reprogram(struct fmc_device *fmc, struct fmc_driver *d, char *gw,
int sdb_entry)
int fmc_reprogram_raw(struct fmc_device *fmc, struct fmc_driver *d,
void *gw, unsigned long len, int sdb_entry)
{
int ret;
ret = fmc->op->reprogram(fmc, d, gw);
ret = fmc->op->reprogram_raw(fmc, d, gw, len);
if (ret < 0)
return ret;
if (sdb_entry < 0)
......@@ -145,108 +145,39 @@ int fmc_reprogram(struct fmc_device *fmc, struct fmc_driver *d, char *gw,
sdb_entry);
return -ENODEV;
}
fmc_dump_sdb(fmc);
return 0;
}
EXPORT_SYMBOL(fmc_reprogram);
static char *__strip_trailing_space(char *buf, char *str, int len)
{
int i = len - 1;
memcpy(buf, str, len);
while(i >= 0 && buf[i] == ' ')
buf[i--] = '\0';
return buf;
return 0;
}
EXPORT_SYMBOL(fmc_reprogram_raw);
#define __sdb_string(buf, field) ({ \
BUILD_BUG_ON(sizeof(buf) < sizeof(field)); \
__strip_trailing_space(buf, (void *)(field), sizeof(field)); \
})
static void __fmc_show_sdb_tree(const struct fmc_device *fmc,
const struct sdb_array *arr)
/* This helper calls reprogram and initializes the SDB as well */
int fmc_reprogram(struct fmc_device *fmc, struct fmc_driver *d, char *gw,
int sdb_entry)
{
unsigned long base = arr->baseaddr;
int i, j, n = arr->len, level = arr->level;
char buf[64];
for (i = 0; i < n; i++) {
union sdb_record *r;
struct sdb_product *p;
struct sdb_component *c;
r = &arr->record[i];
c = &r->dev.sdb_component;
p = &c->product;
int ret;
dev_info(&fmc->dev, "SDB: ");
ret = fmc->op->reprogram(fmc, d, gw);
if (ret < 0)
return ret;
if (sdb_entry < 0)
return ret;
for (j = 0; j < level; j++)
printk(KERN_CONT " ");
switch (r->empty.record_type) {
case sdb_type_interconnect:
printk(KERN_CONT "%08llx:%08x %.19s\n",
__be64_to_cpu(p->vendor_id),
__be32_to_cpu(p->device_id),
p->name);
break;
case sdb_type_device:
printk(KERN_CONT "%08llx:%08x %.19s (%08llx-%08llx)\n",
__be64_to_cpu(p->vendor_id),
__be32_to_cpu(p->device_id),
p->name,
__be64_to_cpu(c->addr_first) + base,
__be64_to_cpu(c->addr_last) + base);
break;
case sdb_type_bridge:
printk(KERN_CONT "%08llx:%08x %.19s (bridge: %08llx)\n",
__be64_to_cpu(p->vendor_id),
__be32_to_cpu(p->device_id),
p->name,
__be64_to_cpu(c->addr_first) + base);
if (IS_ERR(arr->subtree[i])) {
dev_info(&fmc->dev, "SDB: (bridge error %li)\n",
PTR_ERR(arr->subtree[i]));
break;
}
__fmc_show_sdb_tree(fmc, arr->subtree[i]);
break;
case sdb_type_integration:
printk(KERN_CONT "integration\n");
break;
case sdb_type_repo_url:
printk(KERN_CONT "Synthesis repository: %s\n",
__sdb_string(buf, r->repo_url.repo_url));
break;
case sdb_type_synthesis:
printk(KERN_CONT "Bitstream '%s' ",
__sdb_string(buf, r->synthesis.syn_name));
printk(KERN_CONT "synthesized %08x by %s ",
__be32_to_cpu(r->synthesis.date),
__sdb_string(buf, r->synthesis.user_name));
printk(KERN_CONT "(%s version %x), ",
__sdb_string(buf, r->synthesis.tool_name),
__be32_to_cpu(r->synthesis.tool_version));
printk(KERN_CONT "commit %pm\n",
r->synthesis.commit_id);
break;
case sdb_type_empty:
printk(KERN_CONT "empty\n");
break;
default:
printk(KERN_CONT "UNKNOWN TYPE 0x%02x\n",
r->empty.record_type);
break;
}
/* We are required to find SDB at a given offset */
ret = fmc_scan_sdb_tree(fmc, sdb_entry);
if (ret < 0) {
dev_err(&fmc->dev, "Can't find SDB at address 0x%x\n",
sdb_entry);
return -ENODEV;
}
return 0;
}
EXPORT_SYMBOL(fmc_reprogram);
void fmc_show_sdb_tree(const struct fmc_device *fmc)
{
if (!fmc->sdb)
return;
__fmc_show_sdb_tree(fmc, fmc->sdb);
pr_err("%s: not supported anymore, use debugfs to dump SDB\n",
__func__);
}
EXPORT_SYMBOL(fmc_show_sdb_tree);
......
......@@ -24,7 +24,7 @@ static irqreturn_t t_handler(int irq, void *dev_id)
{
struct fmc_device *fmc = dev_id;
fmc->op->irq_ack(fmc);
fmc_irq_ack(fmc);
dev_info(&fmc->dev, "received irq %i\n", irq);
return IRQ_HANDLED;
}
......@@ -46,25 +46,21 @@ static int t_probe(struct fmc_device *fmc)
int ret;
int index = 0;
if (fmc->op->validate)
index = fmc->op->validate(fmc, &t_drv);
index = fmc_validate(fmc, &t_drv);
if (index < 0)
return -EINVAL; /* not our device: invalid */
ret = fmc->op->irq_request(fmc, t_handler, "fmc-trivial", IRQF_SHARED);
ret = fmc_irq_request(fmc, t_handler, "fmc-trivial", IRQF_SHARED);
if (ret < 0)
return ret;
/* ignore error code of call below, we really don't care */
fmc->op->gpio_config(fmc, t_gpio, ARRAY_SIZE(t_gpio));
fmc_gpio_config(fmc, t_gpio, ARRAY_SIZE(t_gpio));
/* Reprogram, if asked to. ESRCH == no filename specified */
ret = -ESRCH;
if (fmc->op->reprogram)
ret = fmc->op->reprogram(fmc, &t_drv, "");
if (ret == -ESRCH)
ret = fmc_reprogram(fmc, &t_drv, "", 0);
if (ret == -EPERM) /* programming not supported */
ret = 0;
if (ret < 0)
fmc->op->irq_free(fmc);
fmc_irq_free(fmc);
/* FIXME: reprogram LM32 too */
return ret;
......@@ -72,7 +68,7 @@ static int t_probe(struct fmc_device *fmc)
static int t_remove(struct fmc_device *fmc)
{
fmc->op->irq_free(fmc);
fmc_irq_free(fmc);
return 0;
}
......
......@@ -50,7 +50,7 @@ static int fwe_run_tlv(struct fmc_device *fmc, const struct firmware *fw,
if (write) {
dev_info(&fmc->dev, "write %i bytes at 0x%04x\n",
thislen, thisaddr);
err = fmc->op->write_ee(fmc, thisaddr, p + 5, thislen);
err = fmc_write_ee(fmc, thisaddr, p + 5, thislen);
}
if (err < 0) {
dev_err(&fmc->dev, "write failure @0x%04x\n",
......@@ -70,7 +70,7 @@ static int fwe_run_bin(struct fmc_device *fmc, const struct firmware *fw)
int ret;
dev_info(&fmc->dev, "programming %zi bytes\n", fw->size);
ret = fmc->op->write_ee(fmc, 0, (void *)fw->data, fw->size);
ret = fmc_write_ee(fmc, 0, (void *)fw->data, fw->size);
if (ret < 0) {
dev_info(&fmc->dev, "write_eeprom: error %i\n", ret);
return ret;
......@@ -115,8 +115,8 @@ static int fwe_probe(struct fmc_device *fmc)
KBUILD_MODNAME);
return -ENODEV;
}
if (fmc->op->validate)
index = fmc->op->validate(fmc, &fwe_drv);
index = fmc_validate(fmc, &fwe_drv);
if (index < 0) {
pr_err("%s: refusing device \"%s\"\n", KBUILD_MODNAME,
dev_name(dev));
......
......@@ -31,12 +31,11 @@ static char *__fru_alloc_get_tl(struct fru_common_header *header, int nr)
{
struct fru_type_length *tl;
char *res;
int len;
tl = __fru_get_board_tl(header, nr);
if (!tl)
return NULL;
len = fru_strlen(tl);
res = fru_alloc(fru_strlen(tl) + 1);
if (!res)
return NULL;
......
......@@ -2,9 +2,7 @@
# FPGA framework configuration
#
menu "FPGA Configuration Support"
config FPGA
menuconfig FPGA
tristate "FPGA Configuration Framework"
help
Say Y here if you want support for configuring FPGAs from the
......@@ -26,6 +24,20 @@ config FPGA_MGR_ICE40_SPI
help
FPGA manager driver support for Lattice iCE40 FPGAs over SPI.
config FPGA_MGR_ALTERA_CVP
tristate "Altera Arria-V/Cyclone-V/Stratix-V CvP FPGA Manager"
depends on PCI
help
FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V
and Arria 10 Altera FPGAs using the CvP interface over PCIe.
config FPGA_MGR_ALTERA_PS_SPI
tristate "Altera FPGA Passive Serial over SPI"
depends on SPI
help
FPGA manager driver support for Altera Arria/Cyclone/Stratix
using the passive serial interface over SPI.
config FPGA_MGR_SOCFPGA
tristate "Altera SOCFPGA FPGA Manager"
depends on ARCH_SOCFPGA || COMPILE_TEST
......@@ -106,5 +118,3 @@ config XILINX_PR_DECOUPLER
being reprogrammed during partial reconfig.
endif # FPGA
endmenu
......@@ -6,6 +6,8 @@
obj-$(CONFIG_FPGA) += fpga-mgr.o
# FPGA Manager Drivers
obj-$(CONFIG_FPGA_MGR_ALTERA_CVP) += altera-cvp.o
obj-$(CONFIG_FPGA_MGR_ALTERA_PS_SPI) += altera-ps-spi.o
obj-$(CONFIG_FPGA_MGR_ICE40_SPI) += ice40-spi.o
obj-$(CONFIG_FPGA_MGR_SOCFPGA) += socfpga.o
obj-$(CONFIG_FPGA_MGR_SOCFPGA_A10) += socfpga-a10.o
......
/*
* FPGA Manager Driver for Altera Arria/Cyclone/Stratix CvP
*
* Copyright (C) 2017 DENX Software Engineering
*
* Anatolij Gustschin <agust@denx.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* Manage Altera FPGA firmware using PCIe CvP.
* Firmware must be in binary "rbf" format.
*/
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/fpga/fpga-mgr.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/sizes.h>
#define CVP_BAR 0 /* BAR used for data transfer in memory mode */
#define CVP_DUMMY_WR 244 /* dummy writes to clear CvP state machine */
#define TIMEOUT_US 2000 /* CVP STATUS timeout for USERMODE polling */
/* Vendor Specific Extended Capability Registers */
#define VSE_PCIE_EXT_CAP_ID 0x200
#define VSE_PCIE_EXT_CAP_ID_VAL 0x000b /* 16bit */
#define VSE_CVP_STATUS 0x21c /* 32bit */
#define VSE_CVP_STATUS_CFG_RDY BIT(18) /* CVP_CONFIG_READY */
#define VSE_CVP_STATUS_CFG_ERR BIT(19) /* CVP_CONFIG_ERROR */
#define VSE_CVP_STATUS_CVP_EN BIT(20) /* ctrl block is enabling CVP */
#define VSE_CVP_STATUS_USERMODE BIT(21) /* USERMODE */
#define VSE_CVP_STATUS_CFG_DONE BIT(23) /* CVP_CONFIG_DONE */
#define VSE_CVP_STATUS_PLD_CLK_IN_USE BIT(24) /* PLD_CLK_IN_USE */
#define VSE_CVP_MODE_CTRL 0x220 /* 32bit */
#define VSE_CVP_MODE_CTRL_CVP_MODE BIT(0) /* CVP (1) or normal mode (0) */
#define VSE_CVP_MODE_CTRL_HIP_CLK_SEL BIT(1) /* PMA (1) or fabric clock (0) */
#define VSE_CVP_MODE_CTRL_NUMCLKS_OFF 8 /* NUMCLKS bits offset */
#define VSE_CVP_MODE_CTRL_NUMCLKS_MASK GENMASK(15, 8)
#define VSE_CVP_DATA 0x228 /* 32bit */
#define VSE_CVP_PROG_CTRL 0x22c /* 32bit */
#define VSE_CVP_PROG_CTRL_CONFIG BIT(0)
#define VSE_CVP_PROG_CTRL_START_XFER BIT(1)
#define VSE_UNCOR_ERR_STATUS 0x234 /* 32bit */
#define VSE_UNCOR_ERR_CVP_CFG_ERR BIT(5) /* CVP_CONFIG_ERROR_LATCHED */
#define DRV_NAME "altera-cvp"
#define ALTERA_CVP_MGR_NAME "Altera CvP FPGA Manager"
/* Optional CvP config error status check for debugging */
static bool altera_cvp_chkcfg;
struct altera_cvp_conf {
struct fpga_manager *mgr;
struct pci_dev *pci_dev;
void __iomem *map;
void (*write_data)(struct altera_cvp_conf *, u32);
char mgr_name[64];
u8 numclks;
};
static enum fpga_mgr_states altera_cvp_state(struct fpga_manager *mgr)
{
struct altera_cvp_conf *conf = mgr->priv;
u32 status;
pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &status);
if (status & VSE_CVP_STATUS_CFG_DONE)
return FPGA_MGR_STATE_OPERATING;
if (status & VSE_CVP_STATUS_CVP_EN)
return FPGA_MGR_STATE_POWER_UP;
return FPGA_MGR_STATE_UNKNOWN;
}
static void altera_cvp_write_data_iomem(struct altera_cvp_conf *conf, u32 val)
{
writel(val, conf->map);
}
static void altera_cvp_write_data_config(struct altera_cvp_conf *conf, u32 val)
{
pci_write_config_dword(conf->pci_dev, VSE_CVP_DATA, val);
}
/* switches between CvP clock and internal clock */
static void altera_cvp_dummy_write(struct altera_cvp_conf *conf)
{
unsigned int i;
u32 val;
/* set 1 CVP clock cycle for every CVP Data Register Write */
pci_read_config_dword(conf->pci_dev, VSE_CVP_MODE_CTRL, &val);
val &= ~VSE_CVP_MODE_CTRL_NUMCLKS_MASK;
val |= 1 << VSE_CVP_MODE_CTRL_NUMCLKS_OFF;
pci_write_config_dword(conf->pci_dev, VSE_CVP_MODE_CTRL, val);
for (i = 0; i < CVP_DUMMY_WR; i++)
conf->write_data(conf, 0); /* dummy data, could be any value */
}
static int altera_cvp_wait_status(struct altera_cvp_conf *conf, u32 status_mask,
u32 status_val, int timeout_us)
{
unsigned int retries;
u32 val;
retries = timeout_us / 10;
if (timeout_us % 10)
retries++;
do {
pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &val);
if ((val & status_mask) == status_val)
return 0;
/* use small usleep value to re-check and break early */
usleep_range(10, 11);
} while (--retries);
return -ETIMEDOUT;
}
static int altera_cvp_teardown(struct fpga_manager *mgr,
struct fpga_image_info *info)
{
struct altera_cvp_conf *conf = mgr->priv;
struct pci_dev *pdev = conf->pci_dev;
int ret;
u32 val;
/* STEP 12 - reset START_XFER bit */
pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
val &= ~VSE_CVP_PROG_CTRL_START_XFER;
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
/* STEP 13 - reset CVP_CONFIG bit */
val &= ~VSE_CVP_PROG_CTRL_CONFIG;
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
/*
* STEP 14
* - set CVP_NUMCLKS to 1 and then issue CVP_DUMMY_WR dummy
* writes to the HIP
*/
altera_cvp_dummy_write(conf); /* from CVP clock to internal clock */
/* STEP 15 - poll CVP_CONFIG_READY bit for 0 with 10us timeout */
ret = altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY, 0, 10);
if (ret)
dev_err(&mgr->dev, "CFG_RDY == 0 timeout\n");
return ret;
}
static int altera_cvp_write_init(struct fpga_manager *mgr,
struct fpga_image_info *info,
const char *buf, size_t count)
{
struct altera_cvp_conf *conf = mgr->priv;
struct pci_dev *pdev = conf->pci_dev;
u32 iflags, val;
int ret;
iflags = info ? info->flags : 0;
if (iflags & FPGA_MGR_PARTIAL_RECONFIG) {
dev_err(&mgr->dev, "Partial reconfiguration not supported.\n");
return -EINVAL;
}
/* Determine allowed clock to data ratio */
if (iflags & FPGA_MGR_COMPRESSED_BITSTREAM)
conf->numclks = 8; /* ratio for all compressed images */
else if (iflags & FPGA_MGR_ENCRYPTED_BITSTREAM)
conf->numclks = 4; /* for uncompressed and encrypted images */
else
conf->numclks = 1; /* for uncompressed and unencrypted images */
/* STEP 1 - read CVP status and check CVP_EN flag */
pci_read_config_dword(pdev, VSE_CVP_STATUS, &val);
if (!(val & VSE_CVP_STATUS_CVP_EN)) {
dev_err(&mgr->dev, "CVP mode off: 0x%04x\n", val);
return -ENODEV;
}
if (val & VSE_CVP_STATUS_CFG_RDY) {
dev_warn(&mgr->dev, "CvP already started, teardown first\n");
ret = altera_cvp_teardown(mgr, info);
if (ret)
return ret;
}
/*
* STEP 2
* - set HIP_CLK_SEL and CVP_MODE (must be set in the order mentioned)
*/
/* switch from fabric to PMA clock */
pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
val |= VSE_CVP_MODE_CTRL_HIP_CLK_SEL;
pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
/* set CVP mode */
pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
val |= VSE_CVP_MODE_CTRL_CVP_MODE;
pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
/*
* STEP 3
* - set CVP_NUMCLKS to 1 and issue CVP_DUMMY_WR dummy writes to the HIP
*/
altera_cvp_dummy_write(conf);
/* STEP 4 - set CVP_CONFIG bit */
pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
/* request control block to begin transfer using CVP */
val |= VSE_CVP_PROG_CTRL_CONFIG;
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
/* STEP 5 - poll CVP_CONFIG READY for 1 with 10us timeout */
ret = altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY,
VSE_CVP_STATUS_CFG_RDY, 10);
if (ret) {
dev_warn(&mgr->dev, "CFG_RDY == 1 timeout\n");
return ret;
}
/*
* STEP 6
* - set CVP_NUMCLKS to 1 and issue CVP_DUMMY_WR dummy writes to the HIP
*/
altera_cvp_dummy_write(conf);
/* STEP 7 - set START_XFER */
pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
val |= VSE_CVP_PROG_CTRL_START_XFER;
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
/* STEP 8 - start transfer (set CVP_NUMCLKS for bitstream) */
pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
val &= ~VSE_CVP_MODE_CTRL_NUMCLKS_MASK;
val |= conf->numclks << VSE_CVP_MODE_CTRL_NUMCLKS_OFF;
pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
return 0;
}
static inline int altera_cvp_chk_error(struct fpga_manager *mgr, size_t bytes)
{
struct altera_cvp_conf *conf = mgr->priv;
u32 val;
/* STEP 10 (optional) - check CVP_CONFIG_ERROR flag */
pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &val);
if (val & VSE_CVP_STATUS_CFG_ERR) {
dev_err(&mgr->dev, "CVP_CONFIG_ERROR after %zu bytes!\n",
bytes);
return -EPROTO;
}
return 0;
}
static int altera_cvp_write(struct fpga_manager *mgr, const char *buf,
size_t count)
{
struct altera_cvp_conf *conf = mgr->priv;
const u32 *data;
size_t done, remaining;
int status = 0;
u32 mask;
/* STEP 9 - write 32-bit data from RBF file to CVP data register */
data = (u32 *)buf;
remaining = count;
done = 0;
while (remaining >= 4) {
conf->write_data(conf, *data++);
done += 4;
remaining -= 4;
/*
* STEP 10 (optional) and STEP 11
* - check error flag
* - loop until data transfer completed
* Config images can be huge (more than 40 MiB), so
* only check after a new 4k data block has been written.
* This reduces the number of checks and speeds up the
* configuration process.
*/
if (altera_cvp_chkcfg && !(done % SZ_4K)) {
status = altera_cvp_chk_error(mgr, done);
if (status < 0)
return status;
}
}
/* write up to 3 trailing bytes, if any */
mask = BIT(remaining * 8) - 1;
if (mask)
conf->write_data(conf, *data & mask);
if (altera_cvp_chkcfg)
status = altera_cvp_chk_error(mgr, count);
return status;
}
static int altera_cvp_write_complete(struct fpga_manager *mgr,
struct fpga_image_info *info)
{
struct altera_cvp_conf *conf = mgr->priv;
struct pci_dev *pdev = conf->pci_dev;
int ret;
u32 mask;
u32 val;
ret = altera_cvp_teardown(mgr, info);
if (ret)
return ret;
/* STEP 16 - check CVP_CONFIG_ERROR_LATCHED bit */
pci_read_config_dword(pdev, VSE_UNCOR_ERR_STATUS, &val);
if (val & VSE_UNCOR_ERR_CVP_CFG_ERR) {
dev_err(&mgr->dev, "detected CVP_CONFIG_ERROR_LATCHED!\n");
return -EPROTO;
}
/* STEP 17 - reset CVP_MODE and HIP_CLK_SEL bit */
pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
val &= ~VSE_CVP_MODE_CTRL_HIP_CLK_SEL;
val &= ~VSE_CVP_MODE_CTRL_CVP_MODE;
pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
/* STEP 18 - poll PLD_CLK_IN_USE and USER_MODE bits */
mask = VSE_CVP_STATUS_PLD_CLK_IN_USE | VSE_CVP_STATUS_USERMODE;
ret = altera_cvp_wait_status(conf, mask, mask, TIMEOUT_US);
if (ret)
dev_err(&mgr->dev, "PLD_CLK_IN_USE|USERMODE timeout\n");
return ret;
}
static const struct fpga_manager_ops altera_cvp_ops = {
.state = altera_cvp_state,
.write_init = altera_cvp_write_init,
.write = altera_cvp_write,
.write_complete = altera_cvp_write_complete,
};
static ssize_t show_chkcfg(struct device_driver *dev, char *buf)
{
return snprintf(buf, 3, "%d\n", altera_cvp_chkcfg);
}
static ssize_t store_chkcfg(struct device_driver *drv, const char *buf,
size_t count)
{
int ret;
ret = kstrtobool(buf, &altera_cvp_chkcfg);
if (ret)
return ret;
return count;
}
static DRIVER_ATTR(chkcfg, 0600, show_chkcfg, store_chkcfg);
static int altera_cvp_probe(struct pci_dev *pdev,
const struct pci_device_id *dev_id);
static void altera_cvp_remove(struct pci_dev *pdev);
#define PCI_VENDOR_ID_ALTERA 0x1172
static struct pci_device_id altera_cvp_id_tbl[] = {
{ PCI_VDEVICE(ALTERA, PCI_ANY_ID) },
{ }
};
MODULE_DEVICE_TABLE(pci, altera_cvp_id_tbl);
static struct pci_driver altera_cvp_driver = {
.name = DRV_NAME,
.id_table = altera_cvp_id_tbl,
.probe = altera_cvp_probe,
.remove = altera_cvp_remove,
};
static int altera_cvp_probe(struct pci_dev *pdev,
const struct pci_device_id *dev_id)
{
struct altera_cvp_conf *conf;
u16 cmd, val;
int ret;
/*
* First check if this is the expected FPGA device. PCI config
* space access works without enabling the PCI device, memory
* space access is enabled further down.
*/
pci_read_config_word(pdev, VSE_PCIE_EXT_CAP_ID, &val);
if (val != VSE_PCIE_EXT_CAP_ID_VAL) {
dev_err(&pdev->dev, "Wrong EXT_CAP_ID value 0x%x\n", val);
return -ENODEV;
}
conf = devm_kzalloc(&pdev->dev, sizeof(*conf), GFP_KERNEL);
if (!conf)
return -ENOMEM;
/*
* Enable memory BAR access. We cannot use pci_enable_device() here
* because it will make the driver unusable with FPGA devices that
* have additional big IOMEM resources (e.g. 4GiB BARs) on 32-bit
* platforms. Such BARs will not have an assigned address range and
* pci_enable_device() will fail, complaining about an unclaimed BAR,
* even if the concerned BAR is not needed for FPGA configuration
* at all. Thus, enable the device via PCI config space command.
*/
pci_read_config_word(pdev, PCI_COMMAND, &cmd);
if (!(cmd & PCI_COMMAND_MEMORY)) {
cmd |= PCI_COMMAND_MEMORY;
pci_write_config_word(pdev, PCI_COMMAND, cmd);
}
ret = pci_request_region(pdev, CVP_BAR, "CVP");
if (ret) {
dev_err(&pdev->dev, "Requesting CVP BAR region failed\n");
goto err_disable;
}
conf->pci_dev = pdev;
conf->write_data = altera_cvp_write_data_iomem;
conf->map = pci_iomap(pdev, CVP_BAR, 0);
if (!conf->map) {
dev_warn(&pdev->dev, "Mapping CVP BAR failed\n");
conf->write_data = altera_cvp_write_data_config;
}
snprintf(conf->mgr_name, sizeof(conf->mgr_name), "%s @%s",
ALTERA_CVP_MGR_NAME, pci_name(pdev));
ret = fpga_mgr_register(&pdev->dev, conf->mgr_name,
&altera_cvp_ops, conf);
if (ret)
goto err_unmap;
ret = driver_create_file(&altera_cvp_driver.driver,
&driver_attr_chkcfg);
if (ret) {
dev_err(&pdev->dev, "Can't create sysfs chkcfg file\n");
fpga_mgr_unregister(&pdev->dev);
goto err_unmap;
}
return 0;
err_unmap:
pci_iounmap(pdev, conf->map);
pci_release_region(pdev, CVP_BAR);
err_disable:
cmd &= ~PCI_COMMAND_MEMORY;
pci_write_config_word(pdev, PCI_COMMAND, cmd);
return ret;
}
static void altera_cvp_remove(struct pci_dev *pdev)
{
struct fpga_manager *mgr = pci_get_drvdata(pdev);
struct altera_cvp_conf *conf = mgr->priv;
u16 cmd;
driver_remove_file(&altera_cvp_driver.driver, &driver_attr_chkcfg);
fpga_mgr_unregister(&pdev->dev);
pci_iounmap(pdev, conf->map);
pci_release_region(pdev, CVP_BAR);
pci_read_config_word(pdev, PCI_COMMAND, &cmd);
cmd &= ~PCI_COMMAND_MEMORY;
pci_write_config_word(pdev, PCI_COMMAND, cmd);
}
module_pci_driver(altera_cvp_driver);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Anatolij Gustschin <agust@denx.de>");
MODULE_DESCRIPTION("Module to load Altera FPGA over CvP");
......@@ -66,7 +66,7 @@ static int alt_hps2fpga_enable_show(struct fpga_bridge *bridge)
/* The L3 REMAP register is write only, so keep a cached value. */
static unsigned int l3_remap_shadow;
static spinlock_t l3_remap_lock;
static DEFINE_SPINLOCK(l3_remap_lock);
static int _alt_hps2fpga_enable_set(struct altera_hps2fpga_data *priv,
bool enable)
......@@ -143,9 +143,15 @@ static int alt_fpga_bridge_probe(struct platform_device *pdev)
int ret;
of_id = of_match_device(altera_fpga_of_match, dev);
if (!of_id) {
dev_err(dev, "failed to match device\n");
return -ENODEV;
}
priv = (struct altera_hps2fpga_data *)of_id->data;
priv->bridge_reset = of_reset_control_get_by_index(dev->of_node, 0);
priv->bridge_reset = of_reset_control_get_exclusive_by_index(dev->of_node,
0);
if (IS_ERR(priv->bridge_reset)) {
dev_err(dev, "Could not get %s reset control\n", priv->name);
return PTR_ERR(priv->bridge_reset);
......@@ -171,8 +177,6 @@ static int alt_fpga_bridge_probe(struct platform_device *pdev)
return -EBUSY;
}
spin_lock_init(&l3_remap_lock);
if (!of_property_read_u32(dev->of_node, "bridge-enable", &enable)) {
if (enable > 1) {
dev_warn(dev, "invalid bridge-enable %u > 1\n", enable);
......
/*
* Altera Passive Serial SPI Driver
*
* Copyright (c) 2017 United Western Technologies, Corporation
*
* Joshua Clayton <stillcompiling@gmail.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* Manage Altera FPGA firmware that is loaded over SPI using the passive
* serial configuration method.
* Firmware must be in binary "rbf" format.
* Works on Arria 10, Cyclone V and Stratix V. Should work on Cyclone series.
* May work on other Altera FPGAs.
*/
#include <linux/bitrev.h>
#include <linux/delay.h>
#include <linux/fpga/fpga-mgr.h>
#include <linux/gpio/consumer.h>
#include <linux/module.h>
#include <linux/of_gpio.h>
#include <linux/of_device.h>
#include <linux/spi/spi.h>
#include <linux/sizes.h>
enum altera_ps_devtype {
CYCLONE5,
ARRIA10,
};
struct altera_ps_data {
enum altera_ps_devtype devtype;
int status_wait_min_us;
int status_wait_max_us;
int t_cfg_us;
int t_st2ck_us;
};
struct altera_ps_conf {
struct gpio_desc *config;
struct gpio_desc *confd;
struct gpio_desc *status;
struct spi_device *spi;
const struct altera_ps_data *data;
u32 info_flags;
char mgr_name[64];
};
/* | Arria 10 | Cyclone5 | Stratix5 |
* t_CF2ST0 | [; 600] | [; 600] | [; 600] |ns
* t_CFG | [2;] | [2;] | [2;] |µs
* t_STATUS | [268; 3000] | [268; 1506] | [268; 1506] |µs
* t_CF2ST1 | [; 3000] | [; 1506] | [; 1506] |µs
* t_CF2CK | [3010;] | [1506;] | [1506;] |µs
* t_ST2CK | [10;] | [2;] | [2;] |µs
* t_CD2UM | [175; 830] | [175; 437] | [175; 437] |µs
*/
static struct altera_ps_data c5_data = {
/* these values for Cyclone5 are compatible with Stratix5 */
.devtype = CYCLONE5,
.status_wait_min_us = 268,
.status_wait_max_us = 1506,
.t_cfg_us = 2,
.t_st2ck_us = 2,
};
static struct altera_ps_data a10_data = {
.devtype = ARRIA10,
.status_wait_min_us = 268, /* min(t_STATUS) */
.status_wait_max_us = 3000, /* max(t_CF2ST1) */
.t_cfg_us = 2, /* max { min(t_CFG), max(tCF2ST0) } */
.t_st2ck_us = 10, /* min(t_ST2CK) */
};
static const struct of_device_id of_ef_match[] = {
{ .compatible = "altr,fpga-passive-serial", .data = &c5_data },
{ .compatible = "altr,fpga-arria10-passive-serial", .data = &a10_data },
{}
};
MODULE_DEVICE_TABLE(of, of_ef_match);
static enum fpga_mgr_states altera_ps_state(struct fpga_manager *mgr)
{
struct altera_ps_conf *conf = mgr->priv;
if (gpiod_get_value_cansleep(conf->status))
return FPGA_MGR_STATE_RESET;
return FPGA_MGR_STATE_UNKNOWN;
}
static inline void altera_ps_delay(int delay_us)
{
if (delay_us > 10)
usleep_range(delay_us, delay_us + 5);
else
udelay(delay_us);
}
static int altera_ps_write_init(struct fpga_manager *mgr,
struct fpga_image_info *info,
const char *buf, size_t count)
{
struct altera_ps_conf *conf = mgr->priv;
int min, max, waits;
int i;
conf->info_flags = info->flags;
if (info->flags & FPGA_MGR_PARTIAL_RECONFIG) {
dev_err(&mgr->dev, "Partial reconfiguration not supported.\n");
return -EINVAL;
}
gpiod_set_value_cansleep(conf->config, 1);
/* wait min reset pulse time */
altera_ps_delay(conf->data->t_cfg_us);
if (!gpiod_get_value_cansleep(conf->status)) {
dev_err(&mgr->dev, "Status pin failed to show a reset\n");
return -EIO;
}
gpiod_set_value_cansleep(conf->config, 0);
min = conf->data->status_wait_min_us;
max = conf->data->status_wait_max_us;
waits = max / min;
if (max % min)
waits++;
/* wait for max { max(t_STATUS), max(t_CF2ST1) } */
for (i = 0; i < waits; i++) {
usleep_range(min, min + 10);
if (!gpiod_get_value_cansleep(conf->status)) {
/* wait for min(t_ST2CK)*/
altera_ps_delay(conf->data->t_st2ck_us);
return 0;
}
}
dev_err(&mgr->dev, "Status pin not ready.\n");
return -EIO;
}
static void rev_buf(char *buf, size_t len)
{
u32 *fw32 = (u32 *)buf;
size_t extra_bytes = (len & 0x03);
const u32 *fw_end = (u32 *)(buf + len - extra_bytes);
/* set buffer to lsb first */
while (fw32 < fw_end) {
*fw32 = bitrev8x4(*fw32);
fw32++;
}
if (extra_bytes) {
buf = (char *)fw_end;
while (extra_bytes) {
*buf = bitrev8(*buf);
buf++;
extra_bytes--;
}
}
}
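For illustration only, not part of the commit: rev_buf() mirrors the bits inside each byte ("set buffer to lsb first"), and bitrev8x4() does that for all four bytes of a 32-bit word without swapping the bytes themselves. A small sanity-check sketch with an arbitrary value; bitrev_example() is a hypothetical name.

#include <linux/bitrev.h>
#include <linux/bug.h>
#include <linux/types.h>

/* Sketch: each byte is mirrored in place, e.g. bitrev8(0x01) == 0x80. */
static void bitrev_example(void)
{
	u32 in  = 0x01804020;		/* bytes 0x01 0x80 0x40 0x20 */
	u32 out = bitrev8x4(in);	/* bytes 0x80 0x01 0x02 0x04 */

	WARN_ON(out != 0x80010204);
}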
static int altera_ps_write(struct fpga_manager *mgr, const char *buf,
size_t count)
{
struct altera_ps_conf *conf = mgr->priv;
const char *fw_data = buf;
const char *fw_data_end = fw_data + count;
while (fw_data < fw_data_end) {
int ret;
size_t stride = min_t(size_t, fw_data_end - fw_data, SZ_4K);
if (!(conf->info_flags & FPGA_MGR_BITSTREAM_LSB_FIRST))
rev_buf((char *)fw_data, stride);
ret = spi_write(conf->spi, fw_data, stride);
if (ret) {
dev_err(&mgr->dev, "spi error in firmware write: %d\n",
ret);
return ret;
}
fw_data += stride;
}
return 0;
}
static int altera_ps_write_complete(struct fpga_manager *mgr,
struct fpga_image_info *info)
{
struct altera_ps_conf *conf = mgr->priv;
const char dummy[] = {0};
int ret;
if (gpiod_get_value_cansleep(conf->status)) {
dev_err(&mgr->dev, "Error during configuration.\n");
return -EIO;
}
if (!IS_ERR(conf->confd)) {
if (!gpiod_get_raw_value_cansleep(conf->confd)) {
dev_err(&mgr->dev, "CONF_DONE is inactive!\n");
return -EIO;
}
}
/*
* After CONF_DONE goes high, send two additional falling edges on DCLK
* to begin initialization and enter user mode
*/
ret = spi_write(conf->spi, dummy, 1);
if (ret) {
dev_err(&mgr->dev, "spi error during end sequence: %d\n", ret);
return ret;
}
return 0;
}
static const struct fpga_manager_ops altera_ps_ops = {
.state = altera_ps_state,
.write_init = altera_ps_write_init,
.write = altera_ps_write,
.write_complete = altera_ps_write_complete,
};
static int altera_ps_probe(struct spi_device *spi)
{
struct altera_ps_conf *conf;
const struct of_device_id *of_id;
conf = devm_kzalloc(&spi->dev, sizeof(*conf), GFP_KERNEL);
if (!conf)
return -ENOMEM;
of_id = of_match_device(of_ef_match, &spi->dev);
if (!of_id)
return -ENODEV;
conf->data = of_id->data;
conf->spi = spi;
conf->config = devm_gpiod_get(&spi->dev, "nconfig", GPIOD_OUT_HIGH);
if (IS_ERR(conf->config)) {
dev_err(&spi->dev, "Failed to get config gpio: %ld\n",
PTR_ERR(conf->config));
return PTR_ERR(conf->config);
}
conf->status = devm_gpiod_get(&spi->dev, "nstat", GPIOD_IN);
if (IS_ERR(conf->status)) {
dev_err(&spi->dev, "Failed to get status gpio: %ld\n",
PTR_ERR(conf->status));
return PTR_ERR(conf->status);
}
conf->confd = devm_gpiod_get(&spi->dev, "confd", GPIOD_IN);
if (IS_ERR(conf->confd)) {
dev_warn(&spi->dev, "Not using confd gpio: %ld\n",
PTR_ERR(conf->confd));
}
/* Register manager with unique name */
snprintf(conf->mgr_name, sizeof(conf->mgr_name), "%s %s",
dev_driver_string(&spi->dev), dev_name(&spi->dev));
return fpga_mgr_register(&spi->dev, conf->mgr_name,
&altera_ps_ops, conf);
}
static int altera_ps_remove(struct spi_device *spi)
{
fpga_mgr_unregister(&spi->dev);
return 0;
}
static const struct spi_device_id altera_ps_spi_ids[] = {
{"cyclone-ps-spi", 0},
{}
};
MODULE_DEVICE_TABLE(spi, altera_ps_spi_ids);
static struct spi_driver altera_ps_driver = {
.driver = {
.name = "altera-ps-spi",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(of_ef_match),
},
.id_table = altera_ps_spi_ids,
.probe = altera_ps_probe,
.remove = altera_ps_remove,
};
module_spi_driver(altera_ps_driver);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Joshua Clayton <stillcompiling@gmail.com>");
MODULE_DESCRIPTION("Module to load Altera FPGA firmware over SPI");
......@@ -319,8 +319,8 @@ static int child_regions_with_firmware(struct device_node *overlay)
of_node_put(child_region);
if (ret)
pr_err("firmware-name not allowed in child FPGA region: %s",
child_region->full_name);
pr_err("firmware-name not allowed in child FPGA region: %pOF",
child_region);
return ret;
}
......
......@@ -475,7 +475,7 @@ static ssize_t fsi_slave_sysfs_raw_write(struct file *file,
return count;
}
static struct bin_attribute fsi_slave_raw_attr = {
static const struct bin_attribute fsi_slave_raw_attr = {
.attr = {
.name = "raw",
.mode = 0600,
......@@ -499,7 +499,7 @@ static ssize_t fsi_slave_sysfs_term_write(struct file *file,
return count;
}
static struct bin_attribute fsi_slave_term_attr = {
static const struct bin_attribute fsi_slave_term_attr = {
.attr = {
.name = "term",
.mode = 0200,
......
......@@ -57,12 +57,6 @@ static int put_scom(struct scom_device *scom_dev, uint64_t value,
int rc;
uint32_t data;
data = cpu_to_be32(SCOM_RESET_CMD);
rc = fsi_device_write(scom_dev->fsi_dev, SCOM_RESET_REG, &data,
sizeof(uint32_t));
if (rc)
return rc;
data = cpu_to_be32((value >> 32) & 0xffffffff);
rc = fsi_device_write(scom_dev->fsi_dev, SCOM_DATA0_REG, &data,
sizeof(uint32_t));
......@@ -186,6 +180,7 @@ static const struct file_operations scom_fops = {
static int scom_probe(struct device *dev)
{
uint32_t data;
struct fsi_device *fsi_dev = to_fsi_dev(dev);
struct scom_device *scom;
......@@ -202,6 +197,9 @@ static int scom_probe(struct device *dev)
scom->mdev.parent = dev;
list_add(&scom->link, &scom_devices);
data = cpu_to_be32(SCOM_RESET_CMD);
fsi_device_write(fsi_dev, SCOM_RESET_REG, &data, sizeof(uint32_t));
return misc_register(&scom->mdev);
}
......
......@@ -177,6 +177,11 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
&vmbus_connection.chn_msg_list);
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
if (newchannel->rescind) {
err = -ENODEV;
goto error_free_gpadl;
}
ret = vmbus_post_msg(open_msg,
sizeof(struct vmbus_channel_open_channel), true);
......@@ -421,6 +426,11 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
if (channel->rescind) {
ret = -ENODEV;
goto cleanup;
}
ret = vmbus_post_msg(gpadlmsg, msginfo->msgsize -
sizeof(*msginfo), true);
if (ret != 0)
......@@ -494,6 +504,10 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
list_add_tail(&info->msglistentry,
&vmbus_connection.chn_msg_list);
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
if (channel->rescind)
goto post_msg_err;
ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_gpadl_teardown),
true);
......
(Two collapsed diffs omitted here.)
......@@ -304,7 +304,7 @@ static int process_ob_ipinfo(void *in_msg, void *out_msg, int op)
strlen((char *)in->body.kvp_ip_val.adapter_id),
UTF16_HOST_ENDIAN,
(wchar_t *)out->kvp_ip_val.adapter_id,
MAX_IP_ADDR_SIZE);
MAX_ADAPTER_ID_SIZE);
if (len < 0)
return len;
......
(This diff has been collapsed.)
......@@ -940,6 +940,9 @@ static void vmbus_chan_sched(struct hv_per_cpu_context *hv_cpu)
if (channel->offermsg.child_relid != relid)
continue;
if (channel->rescind)
continue;
switch (channel->callback_mode) {
case HV_CALL_ISR:
vmbus_channel_isr(channel);
......
(The remaining diffs in this merge have been collapsed and are not shown.)