Commit 12e55508 authored by Linus Torvalds

Merge branch 'staging-next' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging

* 'staging-next' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging: (466 commits)
  net/hyperv: Add support for jumbo frame up to 64KB
  net/hyperv: Add NETVSP protocol version negotiation
  net/hyperv: Remove unnecessary kmap_atomic in netvsc driver
  staging/rtl8192e: Register against lib80211
  staging/rtl8192e: Convert to lib80211_crypt_info
  staging/rtl8192e: Convert to lib80211_crypt_data and lib80211_crypt_ops
  staging/rtl8192e: Add lib80211.h to rtllib.h
  staging/mei: add watchdog device registration wrappers
  drm/omap: GEM, deal with cache
  staging: vt6656: int.c, int.h: Change return of function to void
  staging: usbip: removed unused definitions from header
  staging: usbip: removed dead code from receive function
  staging:iio: Drop {mark,unmark}_in_use callbacks
  staging:iio: Drop buffer mark_param_change callback
  staging:iio: Drop the unused buffer enable() and is_enabled() callbacks
  staging:iio: Drop buffer busy flag
  staging:iio: Make sure a device is only opened once at a time
  staging:iio: Disallow modifying buffer size when buffer is enabled
  staging:iio: Disallow changing scan elements in all buffered modes
  staging:iio: Use iio_buffer_enabled instead of open coding it
  ...

Fix up conflict in drivers/staging/iio/adc/ad799x_core.c (removal of
module_init due to using module_i2c_driver() helper, next to removal of
MODULE_ALIAS due to using MODULE_DEVICE_TABLE instead).
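For reference, the pattern both sides of that conflict converge on looks roughly like the sketch below; the driver and device names are made up for illustration and are not the actual ad799x code.

#include <linux/module.h>
#include <linux/i2c.h>

static int example_probe(struct i2c_client *client,
			 const struct i2c_device_id *id)
{
	return 0;
}

static int example_remove(struct i2c_client *client)
{
	return 0;
}

static const struct i2c_device_id example_ids[] = {
	{ "example-adc", 0 },
	{ }
};
/* Emits the modalias entries, making a hand-written MODULE_ALIAS() redundant. */
MODULE_DEVICE_TABLE(i2c, example_ids);

static struct i2c_driver example_driver = {
	.driver = {
		.name = "example-adc",
	},
	.probe = example_probe,
	.remove = example_remove,
	.id_table = example_ids,
};
/* Expands to the module_init()/module_exit() boilerplate that was removed. */
module_i2c_driver(example_driver);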
NVIDIA compliant embedded controller
Required properties:
- compatible : should be "nvidia,nvec".
- reg : the iomem of the i2c slave controller
- interrupts : the interrupt line of the i2c slave controller
- clock-frequency : the frequency of the i2c bus
- gpios : the gpio used for ec request
- slave-addr: the i2c address of the slave controller
......@@ -184,11 +184,6 @@ S: Maintained
F: Documentation/filesystems/9p.txt
F: fs/9p/
A2232 SERIAL BOARD DRIVER
L: linux-m68k@lists.linux-m68k.org
S: Orphan
F: drivers/staging/generic_serial/ser_a2232*
AACRAID SCSI RAID DRIVER
M: Adaptec OEM Raid Solutions <aacraid@adaptec.com>
L: linux-scsi@vger.kernel.org
......@@ -1587,7 +1582,7 @@ M: Franky (Zhenhui) Lin <frankyl@broadcom.com>
M: Kan Yan <kanyan@broadcom.com>
L: linux-wireless@vger.kernel.org
S: Supported
F: drivers/staging/brcm80211/
F: drivers/net/wireless/brcm80211/
BROADCOM BNX2FC 10 GIGABIT FCOE DRIVER
M: Bhanu Prakash Gollapudi <bprakash@broadcom.com>
......@@ -1891,12 +1886,6 @@ L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/compal-laptop.c
COMPUTONE INTELLIPORT MULTIPORT CARD
W: http://www.wittsend.com/computone.html
S: Orphan
F: Documentation/serial/computone.txt
F: drivers/staging/tty/ip2/
CONEXANT ACCESSRUNNER USB DRIVER
M: Simon Arlott <cxacru@fire.lp0.eu>
L: accessrunner-general@lists.sourceforge.net
......@@ -2200,15 +2189,6 @@ F: drivers/md/dm*
F: include/linux/device-mapper.h
F: include/linux/dm-*.h
DIGI INTL. EPCA DRIVER
M: "Digi International, Inc" <Eng.Linux@digi.com>
L: Eng.Linux@digi.com
W: http://www.digi.com
S: Orphan
F: Documentation/serial/digiepca.txt
F: drivers/staging/tty/epca*
F: drivers/staging/tty/digi*
DIOLAN U2C-12 I2C DRIVER
M: Guenter Roeck <guenter.roeck@ericsson.com>
L: linux-i2c@vger.kernel.org
......@@ -5555,11 +5535,6 @@ M: Maxim Levitsky <maximlevitsky@gmail.com>
S: Maintained
F: drivers/memstick/host/r592.*
RISCOM8 DRIVER
S: Orphan
F: Documentation/serial/riscom8.txt
F: drivers/staging/tty/riscom8*
ROCKETPORT DRIVER
P: Comtrol Corp.
W: http://www.comtrol.com
......@@ -6222,11 +6197,6 @@ F: arch/arm/mach-spear3xx/spear3*0_evb.c
F: arch/arm/mach-spear6xx/spear600.c
F: arch/arm/mach-spear6xx/spear600_evb.c
SPECIALIX IO8+ MULTIPORT SERIAL CARD DRIVER
S: Orphan
F: Documentation/serial/specialix.txt
F: drivers/staging/tty/specialix*
SPI SUBSYSTEM
M: Grant Likely <grant.likely@secretlab.ca>
L: spi-devel-general@lists.sourceforge.net
......@@ -6304,11 +6274,6 @@ M: Manu Abraham <abraham.manu@gmail.com>
S: Odd Fixes
F: drivers/staging/crystalhd/
STAGING - CYPRESS WESTBRIDGE SUPPORT
M: David Cross <david.cross@cypress.com>
S: Odd Fixes
F: drivers/staging/westbridge/
STAGING - ECHO CANCELLER
M: Steve Underwood <steveu@coppice.org>
M: David Rowe <david@rowetel.com>
......@@ -6414,7 +6379,7 @@ S: Odd Fixes
F: drivers/staging/winbond/
STAGING - XGI Z7,Z9,Z11 PCI DISPLAY DRIVER
M: Arnaud Patard <apatard@mandriva.com>
M: Arnaud Patard <arnaud.patard@rtp-net.org>
S: Odd Fixes
F: drivers/staging/xgifb/
......
......@@ -342,4 +342,6 @@ config VMXNET3
To compile this driver as a module, choose M here: the
module will be called vmxnet3.
source "drivers/net/hyperv/Kconfig"
endif # NETDEVICES
......@@ -68,3 +68,5 @@ obj-$(CONFIG_USB_USBNET) += usb/
obj-$(CONFIG_USB_ZD1201) += usb/
obj-$(CONFIG_USB_IPHETH) += usb/
obj-$(CONFIG_USB_CDC_PHONET) += usb/
obj-$(CONFIG_HYPERV_NET) += hyperv/
config HYPERV_NET
tristate "Microsoft Hyper-V virtual network driver"
depends on HYPERV
help
Select this option to enable the Hyper-V virtual network driver.
obj-$(CONFIG_HYPERV_NET) += hv_netvsc.o
hv_netvsc-y := netvsc_drv.o netvsc.o rndis_filter.o
......@@ -39,9 +39,6 @@ struct xferpage_packet {
u32 count;
};
/* The number of pages which are enough to cover jumbo frame buffer. */
#define NETVSC_PACKET_MAXPAGE 4
/*
* Represent netvsc packet which contains 1 RNDIS and 1 ethernet frame
* within the RNDIS
......@@ -77,8 +74,9 @@ struct hv_netvsc_packet {
u32 total_data_buflen;
/* Points to the send/receive buffer where the ethernet frame is */
void *data;
u32 page_buf_cnt;
struct hv_page_buffer page_buf[NETVSC_PACKET_MAXPAGE];
struct hv_page_buffer page_buf[0];
};
struct netvsc_device_info {
......@@ -87,6 +85,27 @@ struct netvsc_device_info {
int ring_size;
};
enum rndis_device_state {
RNDIS_DEV_UNINITIALIZED = 0,
RNDIS_DEV_INITIALIZING,
RNDIS_DEV_INITIALIZED,
RNDIS_DEV_DATAINITIALIZED,
};
struct rndis_device {
struct netvsc_device *net_dev;
enum rndis_device_state state;
bool link_state;
atomic_t new_req_id;
spinlock_t request_lock;
struct list_head req_list;
unsigned char hw_mac_adr[ETH_ALEN];
};
/* Interface */
int netvsc_device_add(struct hv_device *device, void *additional_info);
int netvsc_device_remove(struct hv_device *device);
......@@ -109,11 +128,13 @@ int rndis_filter_receive(struct hv_device *dev,
int rndis_filter_send(struct hv_device *dev,
struct hv_netvsc_packet *pkt);
int rndis_filter_set_packet_filter(struct rndis_device *dev, u32 new_filter);
#define NVSP_INVALID_PROTOCOL_VERSION ((u32)0xFFFFFFFF)
#define NVSP_PROTOCOL_VERSION_1 2
#define NVSP_MIN_PROTOCOL_VERSION NVSP_PROTOCOL_VERSION_1
#define NVSP_MAX_PROTOCOL_VERSION NVSP_PROTOCOL_VERSION_1
#define NVSP_PROTOCOL_VERSION_2 0x30002
enum {
NVSP_MSG_TYPE_NONE = 0,
......@@ -138,11 +159,36 @@ enum {
NVSP_MSG1_TYPE_SEND_RNDIS_PKT,
NVSP_MSG1_TYPE_SEND_RNDIS_PKT_COMPLETE,
/*
* This should be set to the number of messages for the version with
* the maximum number of messages.
*/
NVSP_NUM_MSG_PER_VERSION = 9,
/* Version 2 messages */
NVSP_MSG2_TYPE_SEND_CHIMNEY_DELEGATED_BUF,
NVSP_MSG2_TYPE_SEND_CHIMNEY_DELEGATED_BUF_COMP,
NVSP_MSG2_TYPE_REVOKE_CHIMNEY_DELEGATED_BUF,
NVSP_MSG2_TYPE_RESUME_CHIMNEY_RX_INDICATION,
NVSP_MSG2_TYPE_TERMINATE_CHIMNEY,
NVSP_MSG2_TYPE_TERMINATE_CHIMNEY_COMP,
NVSP_MSG2_TYPE_INDICATE_CHIMNEY_EVENT,
NVSP_MSG2_TYPE_SEND_CHIMNEY_PKT,
NVSP_MSG2_TYPE_SEND_CHIMNEY_PKT_COMP,
NVSP_MSG2_TYPE_POST_CHIMNEY_RECV_REQ,
NVSP_MSG2_TYPE_POST_CHIMNEY_RECV_REQ_COMP,
NVSP_MSG2_TYPE_ALLOC_RXBUF,
NVSP_MSG2_TYPE_ALLOC_RXBUF_COMP,
NVSP_MSG2_TYPE_FREE_RXBUF,
NVSP_MSG2_TYPE_SEND_VMQ_RNDIS_PKT,
NVSP_MSG2_TYPE_SEND_VMQ_RNDIS_PKT_COMP,
NVSP_MSG2_TYPE_SEND_NDIS_CONFIG,
NVSP_MSG2_TYPE_ALLOC_CHIMNEY_HANDLE,
NVSP_MSG2_TYPE_ALLOC_CHIMNEY_HANDLE_COMP,
};
enum {
......@@ -153,6 +199,7 @@ enum {
NVSP_STAT_PROTOCOL_TOO_OLD,
NVSP_STAT_INVALID_RNDIS_PKT,
NVSP_STAT_BUSY,
NVSP_STAT_PROTOCOL_UNSUPPORTED,
NVSP_STAT_MAX,
};
......@@ -337,9 +384,69 @@ union nvsp_1_message_uber {
send_rndis_pkt_complete;
} __packed;
/*
* Network VSP protocol version 2 messages:
*/
struct nvsp_2_vsc_capability {
union {
u64 data;
struct {
u64 vmq:1;
u64 chimney:1;
u64 sriov:1;
u64 ieee8021q:1;
u64 correlation_id:1;
};
};
} __packed;
struct nvsp_2_send_ndis_config {
u32 mtu;
u32 reserved;
struct nvsp_2_vsc_capability capability;
} __packed;
/* Allocate receive buffer */
struct nvsp_2_alloc_rxbuf {
/* Allocation ID to match the allocation request and response */
u32 alloc_id;
/* Length of the VM shared memory receive buffer that needs to
* be allocated
*/
u32 len;
} __packed;
/* Allocate receive buffer complete */
struct nvsp_2_alloc_rxbuf_comp {
/* The NDIS_STATUS code for buffer allocation */
u32 status;
u32 alloc_id;
/* GPADL handle for the allocated receive buffer */
u32 gpadl_handle;
/* Receive buffer ID */
u64 recv_buf_id;
} __packed;
struct nvsp_2_free_rxbuf {
u64 recv_buf_id;
} __packed;
union nvsp_2_message_uber {
struct nvsp_2_send_ndis_config send_ndis_config;
struct nvsp_2_alloc_rxbuf alloc_rxbuf;
struct nvsp_2_alloc_rxbuf_comp alloc_rxbuf_comp;
struct nvsp_2_free_rxbuf free_rxbuf;
} __packed;
union nvsp_all_messages {
union nvsp_message_init_uber init_msg;
union nvsp_1_message_uber v1_msg;
union nvsp_2_message_uber v2_msg;
} __packed;
/* ALL Messages */
......@@ -349,12 +456,9 @@ struct nvsp_message {
} __packed;
#define NETVSC_MTU 65536
/* #define NVSC_MIN_PROTOCOL_VERSION 1 */
/* #define NVSC_MAX_PROTOCOL_VERSION 1 */
#define NETVSC_RECEIVE_BUFFER_SIZE (1024*1024) /* 1MB */
#define NETVSC_RECEIVE_BUFFER_SIZE (1024*1024*2) /* 2MB */
#define NETVSC_RECEIVE_BUFFER_ID 0xcafe
......@@ -369,7 +473,10 @@ struct nvsp_message {
struct netvsc_device {
struct hv_device *dev;
u32 nvsp_version;
atomic_t num_outstanding_sends;
bool start_remove;
bool destroy;
/*
* List of free preallocated hv_netvsc_packet to represent receive
......
......@@ -28,6 +28,7 @@
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/netdevice.h>
#include <linux/if_ether.h>
#include "hyperv_net.h"
......@@ -41,7 +42,7 @@ static struct netvsc_device *alloc_net_device(struct hv_device *device)
if (!net_device)
return NULL;
net_device->start_remove = false;
net_device->destroy = false;
net_device->dev = device;
net_device->ndev = ndev;
......@@ -230,19 +231,16 @@ static int netvsc_init_recv_buf(struct hv_device *device)
net_device->recv_section_cnt = init_packet->msg.
v1_msg.send_recv_buf_complete.num_sections;
net_device->recv_section = kmalloc(net_device->recv_section_cnt
* sizeof(struct nvsp_1_receive_buffer_section), GFP_KERNEL);
net_device->recv_section = kmemdup(
init_packet->msg.v1_msg.send_recv_buf_complete.sections,
net_device->recv_section_cnt *
sizeof(struct nvsp_1_receive_buffer_section),
GFP_KERNEL);
if (net_device->recv_section == NULL) {
ret = -EINVAL;
goto cleanup;
}
memcpy(net_device->recv_section,
init_packet->msg.v1_msg.
send_recv_buf_complete.sections,
net_device->recv_section_cnt *
sizeof(struct nvsp_1_receive_buffer_section));
/*
* For 1st release, there should only be 1 section that represents the
* entire receive buffer
......@@ -263,27 +261,18 @@ static int netvsc_init_recv_buf(struct hv_device *device)
}
static int netvsc_connect_vsp(struct hv_device *device)
/* Negotiate NVSP protocol version */
static int negotiate_nvsp_ver(struct hv_device *device,
struct netvsc_device *net_device,
struct nvsp_message *init_packet,
u32 nvsp_ver)
{
int ret, t;
struct netvsc_device *net_device;
struct nvsp_message *init_packet;
int ndis_version;
struct net_device *ndev;
net_device = get_outbound_net_device(device);
if (!net_device)
return -ENODEV;
ndev = net_device->ndev;
init_packet = &net_device->channel_init_pkt;
memset(init_packet, 0, sizeof(struct nvsp_message));
init_packet->hdr.msg_type = NVSP_MSG_TYPE_INIT;
init_packet->msg.init_msg.init.min_protocol_ver =
NVSP_MIN_PROTOCOL_VERSION;
init_packet->msg.init_msg.init.max_protocol_ver =
NVSP_MAX_PROTOCOL_VERSION;
init_packet->msg.init_msg.init.min_protocol_ver = nvsp_ver;
init_packet->msg.init_msg.init.max_protocol_ver = nvsp_ver;
/* Send the init request */
ret = vmbus_sendpacket(device->channel, init_packet,
......@@ -293,26 +282,62 @@ static int netvsc_connect_vsp(struct hv_device *device)
VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
if (ret != 0)
goto cleanup;
return ret;
t = wait_for_completion_timeout(&net_device->channel_init_wait, 5*HZ);
if (t == 0) {
ret = -ETIMEDOUT;
goto cleanup;
}
if (t == 0)
return -ETIMEDOUT;
if (init_packet->msg.init_msg.init_complete.status !=
NVSP_STAT_SUCCESS) {
ret = -EINVAL;
goto cleanup;
}
NVSP_STAT_SUCCESS)
return -EINVAL;
if (init_packet->msg.init_msg.init_complete.
negotiated_protocol_ver != NVSP_PROTOCOL_VERSION_1) {
if (nvsp_ver != NVSP_PROTOCOL_VERSION_2)
return 0;
/* NVSPv2 only: Send NDIS config */
memset(init_packet, 0, sizeof(struct nvsp_message));
init_packet->hdr.msg_type = NVSP_MSG2_TYPE_SEND_NDIS_CONFIG;
init_packet->msg.v2_msg.send_ndis_config.mtu = net_device->ndev->mtu;
ret = vmbus_sendpacket(device->channel, init_packet,
sizeof(struct nvsp_message),
(unsigned long)init_packet,
VM_PKT_DATA_INBAND, 0);
return ret;
}
static int netvsc_connect_vsp(struct hv_device *device)
{
int ret;
struct netvsc_device *net_device;
struct nvsp_message *init_packet;
int ndis_version;
struct net_device *ndev;
net_device = get_outbound_net_device(device);
if (!net_device)
return -ENODEV;
ndev = net_device->ndev;
init_packet = &net_device->channel_init_pkt;
/* Negotiate the latest NVSP protocol supported */
if (negotiate_nvsp_ver(device, net_device, init_packet,
NVSP_PROTOCOL_VERSION_2) == 0) {
net_device->nvsp_version = NVSP_PROTOCOL_VERSION_2;
} else if (negotiate_nvsp_ver(device, net_device, init_packet,
NVSP_PROTOCOL_VERSION_1) == 0) {
net_device->nvsp_version = NVSP_PROTOCOL_VERSION_1;
} else {
ret = -EPROTO;
goto cleanup;
}
pr_debug("Negotiated NVSP version:%x\n", net_device->nvsp_version);
/* Send the ndis version */
memset(init_packet, 0, sizeof(struct nvsp_message));
......@@ -438,6 +463,9 @@ static void netvsc_send_completion(struct hv_device *device,
nvsc_packet->completion.send.send_completion_ctx);
atomic_dec(&net_device->num_outstanding_sends);
if (netif_queue_stopped(ndev) && !net_device->start_remove)
netif_wake_queue(ndev);
} else {
netdev_err(ndev, "Unknown send completion packet type- "
"%d received!!\n", nvsp_packet->hdr.msg_type);
......@@ -488,11 +516,16 @@ int netvsc_send(struct hv_device *device,
}
if (ret != 0)
if (ret == 0) {
atomic_inc(&net_device->num_outstanding_sends);
} else if (ret == -EAGAIN) {
netif_stop_queue(ndev);
if (atomic_read(&net_device->num_outstanding_sends) < 1)
netif_wake_queue(ndev);
} else {
netdev_err(ndev, "Unable to send packet %p ret %d\n",
packet, ret);
else
atomic_inc(&net_device->num_outstanding_sends);
}
return ret;
}
......@@ -598,12 +631,10 @@ static void netvsc_receive(struct hv_device *device,
struct vmtransfer_page_packet_header *vmxferpage_packet;
struct nvsp_message *nvsp_packet;
struct hv_netvsc_packet *netvsc_packet = NULL;
unsigned long start;
unsigned long end, end_virtual;
/* struct netvsc_driver *netvscDriver; */
struct xferpage_packet *xferpage_packet = NULL;
int i, j;
int count = 0, bytes_remain = 0;
int i;
int count = 0;
unsigned long flags;
struct net_device *ndev;
......@@ -712,53 +743,10 @@ static void netvsc_receive(struct hv_device *device,
netvsc_packet->completion.recv.recv_completion_tid =
vmxferpage_packet->d.trans_id;
netvsc_packet->data = (void *)((unsigned long)net_device->
recv_buf + vmxferpage_packet->ranges[i].byte_offset);
netvsc_packet->total_data_buflen =
vmxferpage_packet->ranges[i].byte_count;
netvsc_packet->page_buf_cnt = 1;
netvsc_packet->page_buf[0].len =
vmxferpage_packet->ranges[i].byte_count;
start = virt_to_phys((void *)((unsigned long)net_device->
recv_buf + vmxferpage_packet->ranges[i].byte_offset));
netvsc_packet->page_buf[0].pfn = start >> PAGE_SHIFT;
end_virtual = (unsigned long)net_device->recv_buf
+ vmxferpage_packet->ranges[i].byte_offset
+ vmxferpage_packet->ranges[i].byte_count - 1;
end = virt_to_phys((void *)end_virtual);
/* Calculate the page relative offset */
netvsc_packet->page_buf[0].offset =
vmxferpage_packet->ranges[i].byte_offset &
(PAGE_SIZE - 1);
if ((end >> PAGE_SHIFT) != (start >> PAGE_SHIFT)) {
/* Handle frame across multiple pages: */
netvsc_packet->page_buf[0].len =
(netvsc_packet->page_buf[0].pfn <<
PAGE_SHIFT)
+ PAGE_SIZE - start;
bytes_remain = netvsc_packet->total_data_buflen -
netvsc_packet->page_buf[0].len;
for (j = 1; j < NETVSC_PACKET_MAXPAGE; j++) {
netvsc_packet->page_buf[j].offset = 0;
if (bytes_remain <= PAGE_SIZE) {
netvsc_packet->page_buf[j].len =
bytes_remain;
bytes_remain = 0;
} else {
netvsc_packet->page_buf[j].len =
PAGE_SIZE;
bytes_remain -= PAGE_SIZE;
}
netvsc_packet->page_buf[j].pfn =
virt_to_phys((void *)(end_virtual -
bytes_remain)) >> PAGE_SHIFT;
netvsc_packet->page_buf_cnt++;
if (bytes_remain == 0)
break;
}
}
/* Pass it to the upper layer */
rndis_filter_receive(device, netvsc_packet);
......
......@@ -43,24 +43,59 @@
struct net_device_context {
/* point back to our device context */
struct hv_device *device_ctx;
atomic_t avail;
struct delayed_work dwork;
};
#define PACKET_PAGES_LOWATER 8
/* Need this many pages to handle worst case fragmented packet */
#define PACKET_PAGES_HIWATER (MAX_SKB_FRAGS + 2)
static int ring_size = 128;
module_param(ring_size, int, S_IRUGO);
MODULE_PARM_DESC(ring_size, "Ring buffer size (# of pages)");
/* no-op so the netdev core doesn't return -EINVAL when modifying the
* multicast address list in SIOCADDMULTI. hv is set up to get all multicast
* when it calls RndisFilterOnOpen() */
struct set_multicast_work {
struct work_struct work;
struct net_device *net;
};
static void do_set_multicast(struct work_struct *w)
{
struct set_multicast_work *swk =
container_of(w, struct set_multicast_work, work);
struct net_device *net = swk->net;
struct net_device_context *ndevctx = netdev_priv(net);
struct netvsc_device *nvdev;
struct rndis_device *rdev;
nvdev = hv_get_drvdata(ndevctx->device_ctx);
if (nvdev == NULL)
return;
rdev = nvdev->extension;
if (rdev == NULL)
return;
if (net->flags & IFF_PROMISC)
rndis_filter_set_packet_filter(rdev,
NDIS_PACKET_TYPE_PROMISCUOUS);
else
rndis_filter_set_packet_filter(rdev,
NDIS_PACKET_TYPE_BROADCAST |
NDIS_PACKET_TYPE_ALL_MULTICAST |
NDIS_PACKET_TYPE_DIRECTED);
kfree(w);
}
static void netvsc_set_multicast_list(struct net_device *net)
{
struct set_multicast_work *swk =
kmalloc(sizeof(struct set_multicast_work), GFP_ATOMIC);
if (swk == NULL)
return;
swk->net = net;
INIT_WORK(&swk->work, do_set_multicast);
schedule_work(&swk->work);
}
static int netvsc_open(struct net_device *net)
......@@ -104,18 +139,8 @@ static void netvsc_xmit_completion(void *context)
kfree(packet);
if (skb) {
struct net_device *net = skb->dev;
struct net_device_context *net_device_ctx = netdev_priv(net);
unsigned int num_pages = skb_shinfo(skb)->nr_frags + 2;
if (skb)
dev_kfree_skb_any(skb);
atomic_add(num_pages, &net_device_ctx->avail);
if (atomic_read(&net_device_ctx->avail) >=
PACKET_PAGES_HIWATER)
netif_wake_queue(net);
}
}
static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *net)
......@@ -123,12 +148,12 @@ static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *net)
struct net_device_context *net_device_ctx = netdev_priv(net);
struct hv_netvsc_packet *packet;
int ret;
unsigned int i, num_pages;
unsigned int i, num_pages, npg_data;
/* Add 1 for skb->data and additional one for RNDIS */
num_pages = skb_shinfo(skb)->nr_frags + 1 + 1;
if (num_pages > atomic_read(&net_device_ctx->avail))
return NETDEV_TX_BUSY;
/* Add multipage for skb->data and additional one for RNDIS */
npg_data = (((unsigned long)skb->data + skb_headlen(skb) - 1)
>> PAGE_SHIFT) - ((unsigned long)skb->data >> PAGE_SHIFT) + 1;
num_pages = skb_shinfo(skb)->nr_frags + npg_data + 1;
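/* Editor's note (assuming 4 KB pages): if skb->data starts at offset 0xff0
 * within a page and skb_headlen() is 64, the linear data ends in the next
 * page, so npg_data = 2 and two page-buffer slots are used for it. */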
/* Allocate a netvsc packet based on # of frags. */
packet = kzalloc(sizeof(struct hv_netvsc_packet) +
......@@ -151,21 +176,36 @@ static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *net)
packet->page_buf_cnt = num_pages;
/* Initialize it from the skb */
packet->total_data_buflen = skb->len;
packet->total_data_buflen = skb->len;
/* Start filling in the page buffers starting after RNDIS buffer. */
packet->page_buf[1].pfn = virt_to_phys(skb->data) >> PAGE_SHIFT;
packet->page_buf[1].offset
= (unsigned long)skb->data & (PAGE_SIZE - 1);
packet->page_buf[1].len = skb_headlen(skb);
if (npg_data == 1)
packet->page_buf[1].len = skb_headlen(skb);
else
packet->page_buf[1].len = PAGE_SIZE
- packet->page_buf[1].offset;
for (i = 2; i <= npg_data; i++) {
packet->page_buf[i].pfn = virt_to_phys(skb->data
+ PAGE_SIZE * (i-1)) >> PAGE_SHIFT;
packet->page_buf[i].offset = 0;
packet->page_buf[i].len = PAGE_SIZE;
}
if (npg_data > 1)
packet->page_buf[npg_data].len = (((unsigned long)skb->data
+ skb_headlen(skb) - 1) & (PAGE_SIZE - 1)) + 1;
/* Additional fragments are after SKB data */
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
const skb_frag_t *f = &skb_shinfo(skb)->frags[i];
packet->page_buf[i+2].pfn = page_to_pfn(skb_frag_page(f));
packet->page_buf[i+2].offset = f->page_offset;
packet->page_buf[i+2].len = skb_frag_size(f);
packet->page_buf[i+npg_data+1].pfn =
page_to_pfn(skb_frag_page(f));
packet->page_buf[i+npg_data+1].offset = f->page_offset;
packet->page_buf[i+npg_data+1].len = skb_frag_size(f);
}
/* Set the completion routine */
......@@ -178,10 +218,6 @@ static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *net)
if (ret == 0) {
net->stats.tx_bytes += skb->len;
net->stats.tx_packets++;
atomic_sub(num_pages, &net_device_ctx->avail);
if (atomic_read(&net_device_ctx->avail) < PACKET_PAGES_LOWATER)
netif_stop_queue(net);
} else {
/* we are shutting down or bus overloaded, just drop packet */
net->stats.tx_dropped++;
......@@ -232,9 +268,6 @@ int netvsc_recv_callback(struct hv_device *device_obj,
{
struct net_device *net = dev_get_drvdata(&device_obj->device);
struct sk_buff *skb;
void *data;
int i;
unsigned long flags;
struct netvsc_device *net_device;
net_device = hv_get_drvdata(device_obj);
......@@ -253,27 +286,12 @@ int netvsc_recv_callback(struct hv_device *device_obj,
return 0;
}
/* for kmap_atomic */
local_irq_save(flags);
/*
* Copy to skb. This copy is needed here since the memory pointed by
* hv_netvsc_packet cannot be deallocated
*/
for (i = 0; i < packet->page_buf_cnt; i++) {
data = kmap_atomic(pfn_to_page(packet->page_buf[i].pfn),
KM_IRQ1);
data = (void *)(unsigned long)data +
packet->page_buf[i].offset;
memcpy(skb_put(skb, packet->page_buf[i].len), data,
packet->page_buf[i].len);
kunmap_atomic((void *)((unsigned long)data -
packet->page_buf[i].offset), KM_IRQ1);
}
local_irq_restore(flags);
memcpy(skb_put(skb, packet->total_data_buflen), packet->data,
packet->total_data_buflen);
skb->protocol = eth_type_trans(skb, net);
skb->ip_summed = CHECKSUM_NONE;
......@@ -299,6 +317,39 @@ static void netvsc_get_drvinfo(struct net_device *net,
strcpy(info->fw_version, "N/A");
}
static int netvsc_change_mtu(struct net_device *ndev, int mtu)
{
struct net_device_context *ndevctx = netdev_priv(ndev);
struct hv_device *hdev = ndevctx->device_ctx;
struct netvsc_device *nvdev = hv_get_drvdata(hdev);
struct netvsc_device_info device_info;
int limit = ETH_DATA_LEN;
if (nvdev == NULL || nvdev->destroy)
return -ENODEV;
if (nvdev->nvsp_version == NVSP_PROTOCOL_VERSION_2)
limit = NETVSC_MTU;
if (mtu < 68 || mtu > limit)
return -EINVAL;
nvdev->start_remove = true;
cancel_delayed_work_sync(&ndevctx->dwork);
netif_stop_queue(ndev);
rndis_filter_device_remove(hdev);
ndev->mtu = mtu;
ndevctx->device_ctx = hdev;
hv_set_drvdata(hdev, ndev);
device_info.ring_size = ring_size;
rndis_filter_device_add(hdev, &device_info);
netif_wake_queue(ndev);
return 0;
}
static const struct ethtool_ops ethtool_ops = {
.get_drvinfo = netvsc_get_drvinfo,
.get_link = ethtool_op_get_link,
......@@ -309,7 +360,7 @@ static const struct net_device_ops device_ops = {
.ndo_stop = netvsc_close,
.ndo_start_xmit = netvsc_start_xmit,
.ndo_set_rx_mode = netvsc_set_multicast_list,
.ndo_change_mtu = eth_change_mtu,
.ndo_change_mtu = netvsc_change_mtu,
.ndo_validate_addr = eth_validate_addr,
.ndo_set_mac_address = eth_mac_addr,
};
......@@ -351,7 +402,6 @@ static int netvsc_probe(struct hv_device *dev,
net_device_ctx = netdev_priv(net);
net_device_ctx->device_ctx = dev;
atomic_set(&net_device_ctx->avail, ring_size);
hv_set_drvdata(dev, net);
INIT_DELAYED_WORK(&net_device_ctx->dwork, netvsc_send_garp);
......@@ -403,6 +453,8 @@ static int netvsc_remove(struct hv_device *dev)
return 0;
}
net_device->start_remove = true;
ndev_ctx = netdev_priv(net);
cancel_delayed_work_sync(&ndev_ctx->dwork);
......
......@@ -30,26 +30,6 @@
#include "hyperv_net.h"
enum rndis_device_state {
RNDIS_DEV_UNINITIALIZED = 0,
RNDIS_DEV_INITIALIZING,
RNDIS_DEV_INITIALIZED,
RNDIS_DEV_DATAINITIALIZED,
};
struct rndis_device {
struct netvsc_device *net_dev;
enum rndis_device_state state;
bool link_state;
atomic_t new_req_id;
spinlock_t request_lock;
struct list_head req_list;
unsigned char hw_mac_adr[ETH_ALEN];
};
struct rndis_request {
struct list_head list_ent;
struct completion wait_event;
......@@ -329,7 +309,6 @@ static void rndis_filter_receive_data(struct rndis_device *dev,
{
struct rndis_packet *rndis_pkt;
u32 data_offset;
int i;
rndis_pkt = &msg->msg.pkt;
......@@ -342,17 +321,7 @@ static void rndis_filter_receive_data(struct rndis_device *dev,
data_offset = RNDIS_HEADER_SIZE + rndis_pkt->data_offset;
pkt->total_data_buflen -= data_offset;
pkt->page_buf[0].offset += data_offset;
pkt->page_buf[0].len -= data_offset;
/* Drop the 0th page, if rndis data go beyond page boundary */
if (pkt->page_buf[0].offset >= PAGE_SIZE) {
pkt->page_buf[1].offset = pkt->page_buf[0].offset - PAGE_SIZE;
pkt->page_buf[1].len -= pkt->page_buf[1].offset;
pkt->page_buf_cnt--;
for (i = 0; i < pkt->page_buf_cnt; i++)
pkt->page_buf[i] = pkt->page_buf[i+1];
}
pkt->data = (void *)((unsigned long)pkt->data + data_offset);
pkt->is_data_pkt = true;
......@@ -387,11 +356,7 @@ int rndis_filter_receive(struct hv_device *dev,
return -ENODEV;
}
rndis_hdr = (struct rndis_message *)kmap_atomic(
pfn_to_page(pkt->page_buf[0].pfn), KM_IRQ0);
rndis_hdr = (void *)((unsigned long)rndis_hdr +
pkt->page_buf[0].offset);
rndis_hdr = pkt->data;
/* Make sure we got a valid rndis message */
if ((rndis_hdr->ndis_msg_type != REMOTE_NDIS_PACKET_MSG) &&
......@@ -407,8 +372,6 @@ int rndis_filter_receive(struct hv_device *dev,
sizeof(struct rndis_message) :
rndis_hdr->msg_len);
kunmap_atomic(rndis_hdr - pkt->page_buf[0].offset, KM_IRQ0);
dump_rndis_message(dev, &rndis_msg);
switch (rndis_msg.ndis_msg_type) {
......@@ -522,8 +485,7 @@ static int rndis_filter_query_device_link_status(struct rndis_device *dev)
return ret;
}
static int rndis_filter_set_packet_filter(struct rndis_device *dev,
u32 new_filter)
int rndis_filter_set_packet_filter(struct rndis_device *dev, u32 new_filter)
{
struct rndis_request *request;
struct rndis_set_request *set;
......
......@@ -72,8 +72,6 @@ source "drivers/staging/octeon/Kconfig"
source "drivers/staging/serqt_usb2/Kconfig"
source "drivers/staging/spectra/Kconfig"
source "drivers/staging/quatech_usb2/Kconfig"
source "drivers/staging/vt6655/Kconfig"
......@@ -116,8 +114,6 @@ source "drivers/staging/bcm/Kconfig"
source "drivers/staging/ft1000/Kconfig"
source "drivers/staging/intel_sst/Kconfig"
source "drivers/staging/speakup/Kconfig"
source "drivers/staging/cptm1217/Kconfig"
......@@ -132,4 +128,8 @@ source "drivers/staging/nvec/Kconfig"
source "drivers/staging/media/Kconfig"
source "drivers/staging/omapdrm/Kconfig"
source "drivers/staging/android/Kconfig"
endif # STAGING
......@@ -21,7 +21,6 @@ obj-$(CONFIG_RTL8192E) += rtl8192e/
obj-$(CONFIG_R8712U) += rtl8712/
obj-$(CONFIG_RTS_PSTOR) += rts_pstor/
obj-$(CONFIG_RTS5139) += rts5139/
obj-$(CONFIG_SPECTRA) += spectra/
obj-$(CONFIG_TRANZPORT) += frontier/
obj-$(CONFIG_POHMELFS) += pohmelfs/
obj-$(CONFIG_IDE_PHISON) += phison/
......@@ -50,10 +49,11 @@ obj-$(CONFIG_SBE_2T3E3) += sbe-2t3e3/
obj-$(CONFIG_USB_ENESTORAGE) += keucr/
obj-$(CONFIG_BCM_WIMAX) += bcm/
obj-$(CONFIG_FT1000) += ft1000/
obj-$(CONFIG_SND_INTEL_SST) += intel_sst/
obj-$(CONFIG_SPEAKUP) += speakup/
obj-$(CONFIG_TOUCHSCREEN_CLEARPAD_TM1217) += cptm1217/
obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4) += ste_rmi4/
obj-$(CONFIG_DRM_PSB) += gma500/
obj-$(CONFIG_INTEL_MEI) += mei/
obj-$(CONFIG_MFD_NVEC) += nvec/
obj-$(CONFIG_DRM_OMAP) += omapdrm/
obj-$(CONFIG_ANDROID) += android/
menu "Android"
config ANDROID
bool "Android Drivers"
default n
---help---
Enable support for various drivers needed on the Android platform
if ANDROID
config ANDROID_BINDER_IPC
bool "Android Binder IPC Driver"
default n
config ASHMEM
bool "Enable the Anonymous Shared Memory Subsystem"
default n
depends on SHMEM || TINY_SHMEM
help
The ashmem subsystem is a new shared memory allocator, similar to
POSIX SHM but with different behavior and sporting a simpler
file-based API.
config ANDROID_LOGGER
tristate "Android log driver"
default n
config ANDROID_RAM_CONSOLE
bool "Android RAM buffer console"
default n
config ANDROID_RAM_CONSOLE_ENABLE_VERBOSE
bool "Enable verbose console messages on Android RAM console"
default y
depends on ANDROID_RAM_CONSOLE
menuconfig ANDROID_RAM_CONSOLE_ERROR_CORRECTION
bool "Android RAM Console Enable error correction"
default n
depends on ANDROID_RAM_CONSOLE
depends on !ANDROID_RAM_CONSOLE_EARLY_INIT
select REED_SOLOMON
select REED_SOLOMON_ENC8
select REED_SOLOMON_DEC8
if ANDROID_RAM_CONSOLE_ERROR_CORRECTION
config ANDROID_RAM_CONSOLE_ERROR_CORRECTION_DATA_SIZE
int "Android RAM Console Data data size"
default 128
help
Must be a power of 2.
config ANDROID_RAM_CONSOLE_ERROR_CORRECTION_ECC_SIZE
int "Android RAM Console ECC size"
default 16
config ANDROID_RAM_CONSOLE_ERROR_CORRECTION_SYMBOL_SIZE
int "Android RAM Console Symbol size"
default 8
config ANDROID_RAM_CONSOLE_ERROR_CORRECTION_POLYNOMIAL
hex "Android RAM Console Polynomial"
default 0x19 if (ANDROID_RAM_CONSOLE_ERROR_CORRECTION_SYMBOL_SIZE = 4)
default 0x29 if (ANDROID_RAM_CONSOLE_ERROR_CORRECTION_SYMBOL_SIZE = 5)
default 0x61 if (ANDROID_RAM_CONSOLE_ERROR_CORRECTION_SYMBOL_SIZE = 6)
default 0x89 if (ANDROID_RAM_CONSOLE_ERROR_CORRECTION_SYMBOL_SIZE = 7)
default 0x11d if (ANDROID_RAM_CONSOLE_ERROR_CORRECTION_SYMBOL_SIZE = 8)
endif # ANDROID_RAM_CONSOLE_ERROR_CORRECTION
config ANDROID_RAM_CONSOLE_EARLY_INIT
bool "Start Android RAM console early"
default n
depends on ANDROID_RAM_CONSOLE
config ANDROID_RAM_CONSOLE_EARLY_ADDR
hex "Android RAM console virtual address"
default 0
depends on ANDROID_RAM_CONSOLE_EARLY_INIT
config ANDROID_RAM_CONSOLE_EARLY_SIZE
hex "Android RAM console buffer size"
default 0
depends on ANDROID_RAM_CONSOLE_EARLY_INIT
config ANDROID_TIMED_OUTPUT
bool "Timed output class driver"
default y
config ANDROID_TIMED_GPIO
tristate "Android timed gpio driver"
depends on GENERIC_GPIO && ANDROID_TIMED_OUTPUT
default n
config ANDROID_LOW_MEMORY_KILLER
bool "Android Low Memory Killer"
default n
---help---
Register processes to be killed when memory is low
config ANDROID_PMEM
bool "Android pmem allocator"
depends on ARM
source "drivers/staging/android/switch/Kconfig"
endif # if ANDROID
endmenu
obj-$(CONFIG_ANDROID_BINDER_IPC) += binder.o
obj-$(CONFIG_ASHMEM) += ashmem.o
obj-$(CONFIG_ANDROID_LOGGER) += logger.o
obj-$(CONFIG_ANDROID_RAM_CONSOLE) += ram_console.o
obj-$(CONFIG_ANDROID_TIMED_OUTPUT) += timed_output.o
obj-$(CONFIG_ANDROID_TIMED_GPIO) += timed_gpio.o
obj-$(CONFIG_ANDROID_LOW_MEMORY_KILLER) += lowmemorykiller.o
obj-$(CONFIG_ANDROID_PMEM) += pmem.o
obj-$(CONFIG_ANDROID_SWITCH) += switch/
TODO:
- checkpatch.pl cleanups
- sparse fixes
- rename files to be not so "generic"
- make sure things build as modules properly
- add proper arch dependencies as needed
- audit userspace interfaces to make sure they are sane
Please send patches to Greg Kroah-Hartman <greg@kroah.com> and Cc:
Brian Swetland <swetland@google.com>
/* include/linux/android_pmem.h
*
* Copyright (C) 2007 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _ANDROID_PMEM_H_
#define _ANDROID_PMEM_H_
#define PMEM_IOCTL_MAGIC 'p'
#define PMEM_GET_PHYS _IOW(PMEM_IOCTL_MAGIC, 1, unsigned int)
#define PMEM_MAP _IOW(PMEM_IOCTL_MAGIC, 2, unsigned int)
#define PMEM_GET_SIZE _IOW(PMEM_IOCTL_MAGIC, 3, unsigned int)
#define PMEM_UNMAP _IOW(PMEM_IOCTL_MAGIC, 4, unsigned int)
/* This ioctl will allocate pmem space, backing the file, it will fail
* if the file already has an allocation, pass it the len as the argument
* to the ioctl */
#define PMEM_ALLOCATE _IOW(PMEM_IOCTL_MAGIC, 5, unsigned int)
/* This will connect a one pmem file to another, pass the file that is already
* backed in memory as the argument to the ioctl
*/
#define PMEM_CONNECT _IOW(PMEM_IOCTL_MAGIC, 6, unsigned int)
/* Returns the total size of the pmem region it is sent to as a pmem_region
* struct (with offset set to 0).
*/
#define PMEM_GET_TOTAL_SIZE _IOW(PMEM_IOCTL_MAGIC, 7, unsigned int)
#define PMEM_CACHE_FLUSH _IOW(PMEM_IOCTL_MAGIC, 8, unsigned int)
struct android_pmem_platform_data
{
const char* name;
/* starting physical address of memory region */
unsigned long start;
/* size of memory region */
unsigned long size;
/* set to indicate the region should not be managed with an allocator */
unsigned no_allocator;
/* set to indicate maps of this region should be cached, if a mix of
* cached and uncached is desired, set this and open the device with
* O_SYNC to get an uncached region */
unsigned cached;
/* The MSM7k has bits to enable a write buffer in the bus controller*/
unsigned buffered;
};
struct pmem_region {
unsigned long offset;
unsigned long len;
};
#ifdef CONFIG_ANDROID_PMEM
int is_pmem_file(struct file *file);
int get_pmem_file(int fd, unsigned long *start, unsigned long *vstart,
unsigned long *end, struct file **filp);
int get_pmem_user_addr(struct file *file, unsigned long *start,
unsigned long *end);
void put_pmem_file(struct file* file);
void flush_pmem_file(struct file *file, unsigned long start, unsigned long len);
int pmem_setup(struct android_pmem_platform_data *pdata,
long (*ioctl)(struct file *, unsigned int, unsigned long),
int (*release)(struct inode *, struct file *));
int pmem_remap(struct pmem_region *region, struct file *file,
unsigned operation);
#else
static inline int is_pmem_file(struct file *file) { return 0; }
static inline int get_pmem_file(int fd, unsigned long *start,
unsigned long *vstart, unsigned long *end,
struct file **filp) { return -ENOSYS; }
static inline int get_pmem_user_addr(struct file *file, unsigned long *start,
unsigned long *end) { return -ENOSYS; }
static inline void put_pmem_file(struct file* file) { return; }
static inline void flush_pmem_file(struct file *file, unsigned long start,
unsigned long len) { return; }
static inline int pmem_setup(struct android_pmem_platform_data *pdata,
long (*ioctl)(struct file *, unsigned int, unsigned long),
int (*release)(struct inode *, struct file *)) { return -ENOSYS; }
static inline int pmem_remap(struct pmem_region *region, struct file *file,
unsigned operation) { return -ENOSYS; }
#endif
#endif /* _ANDROID_PMEM_H_ */
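A hedged sketch of how userspace typically drives these pmem ioctls; the "/dev/pmem" node name, the include path, and the overall flow are assumptions for illustration and are not defined by the header above.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/android_pmem.h>	/* path per the header comment above */

int example_pmem_map(void **out, unsigned long *out_len)
{
	struct pmem_region region = { 0, 0 };
	void *buf;
	int fd = open("/dev/pmem", O_RDWR);	/* node name is an assumption */

	if (fd < 0)
		return -1;

	/* Ask how large the region behind this fd is (offset comes back as 0). */
	if (ioctl(fd, PMEM_GET_TOTAL_SIZE, &region) < 0) {
		close(fd);
		return -1;
	}

	buf = mmap(NULL, region.len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED) {
		close(fd);
		return -1;
	}

	*out = buf;
	*out_len = region.len;
	return fd;	/* caller keeps the fd open for later PMEM_* ioctls */
}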
/* mm/ashmem.c
**
** Anonymous Shared Memory Subsystem, ashmem
**
** Copyright (C) 2008 Google, Inc.
**
** Robert Love <rlove@google.com>
**
** This software is licensed under the terms of the GNU General Public
** License version 2, as published by the Free Software Foundation, and
** may be copied, distributed, and modified under those terms.
**
** This program is distributed in the hope that it will be useful,
** but WITHOUT ANY WARRANTY; without even the implied warranty of
** MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
** GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/security.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/uaccess.h>
#include <linux/personality.h>
#include <linux/bitops.h>
#include <linux/mutex.h>
#include <linux/shmem_fs.h>
#include "ashmem.h"
#define ASHMEM_NAME_PREFIX "dev/ashmem/"
#define ASHMEM_NAME_PREFIX_LEN (sizeof(ASHMEM_NAME_PREFIX) - 1)
#define ASHMEM_FULL_NAME_LEN (ASHMEM_NAME_LEN + ASHMEM_NAME_PREFIX_LEN)
/*
* ashmem_area - anonymous shared memory area
* Lifecycle: From our parent file's open() until its release()
* Locking: Protected by `ashmem_mutex'
* Big Note: Mappings do NOT pin this structure; it dies on close()
*/
struct ashmem_area {
char name[ASHMEM_FULL_NAME_LEN]; /* optional name in /proc/pid/maps */
struct list_head unpinned_list; /* list of all ashmem areas */
struct file *file; /* the shmem-based backing file */
size_t size; /* size of the mapping, in bytes */
unsigned long prot_mask; /* allowed prot bits, as vm_flags */
};
/*
* ashmem_range - represents an interval of unpinned (evictable) pages
* Lifecycle: From unpin to pin
* Locking: Protected by `ashmem_mutex'
*/
struct ashmem_range {
struct list_head lru; /* entry in LRU list */
struct list_head unpinned; /* entry in its area's unpinned list */
struct ashmem_area *asma; /* associated area */
size_t pgstart; /* starting page, inclusive */
size_t pgend; /* ending page, inclusive */
unsigned int purged; /* ASHMEM_NOT or ASHMEM_WAS_PURGED */
};
/* LRU list of unpinned pages, protected by ashmem_mutex */
static LIST_HEAD(ashmem_lru_list);
/* Count of pages on our LRU list, protected by ashmem_mutex */
static unsigned long lru_count;
/*
* ashmem_mutex - protects the list of and each individual ashmem_area
*
* Lock Ordering: ashmem_mutex -> i_mutex -> i_alloc_sem
*/
static DEFINE_MUTEX(ashmem_mutex);
static struct kmem_cache *ashmem_area_cachep __read_mostly;
static struct kmem_cache *ashmem_range_cachep __read_mostly;
#define range_size(range) \
((range)->pgend - (range)->pgstart + 1)
#define range_on_lru(range) \
((range)->purged == ASHMEM_NOT_PURGED)
#define page_range_subsumes_range(range, start, end) \
(((range)->pgstart >= (start)) && ((range)->pgend <= (end)))
#define page_range_subsumed_by_range(range, start, end) \
(((range)->pgstart <= (start)) && ((range)->pgend >= (end)))
#define page_in_range(range, page) \
(((range)->pgstart <= (page)) && ((range)->pgend >= (page)))
#define page_range_in_range(range, start, end) \
(page_in_range(range, start) || page_in_range(range, end) || \
page_range_subsumes_range(range, start, end))
#define range_before_page(range, page) \
((range)->pgend < (page))
#define PROT_MASK (PROT_EXEC | PROT_READ | PROT_WRITE)
static inline void lru_add(struct ashmem_range *range)
{
list_add_tail(&range->lru, &ashmem_lru_list);
lru_count += range_size(range);
}
static inline void lru_del(struct ashmem_range *range)
{
list_del(&range->lru);
lru_count -= range_size(range);
}
/*
* range_alloc - allocate and initialize a new ashmem_range structure
*
* 'asma' - associated ashmem_area
* 'prev_range' - the previous ashmem_range in the sorted asma->unpinned list
* 'purged' - initial purge value (ASHMEM_NOT_PURGED or ASHMEM_WAS_PURGED)
* 'start' - starting page, inclusive
* 'end' - ending page, inclusive
*
* Caller must hold ashmem_mutex.
*/
static int range_alloc(struct ashmem_area *asma,
struct ashmem_range *prev_range, unsigned int purged,
size_t start, size_t end)
{
struct ashmem_range *range;
range = kmem_cache_zalloc(ashmem_range_cachep, GFP_KERNEL);
if (unlikely(!range))
return -ENOMEM;
range->asma = asma;
range->pgstart = start;
range->pgend = end;
range->purged = purged;
list_add_tail(&range->unpinned, &prev_range->unpinned);
if (range_on_lru(range))
lru_add(range);
return 0;
}
static void range_del(struct ashmem_range *range)
{
list_del(&range->unpinned);
if (range_on_lru(range))
lru_del(range);
kmem_cache_free(ashmem_range_cachep, range);
}
/*
* range_shrink - shrinks a range
*
* Caller must hold ashmem_mutex.
*/
static inline void range_shrink(struct ashmem_range *range,
size_t start, size_t end)
{
size_t pre = range_size(range);
range->pgstart = start;
range->pgend = end;
if (range_on_lru(range))
lru_count -= pre - range_size(range);
}
static int ashmem_open(struct inode *inode, struct file *file)
{
struct ashmem_area *asma;
int ret;
ret = generic_file_open(inode, file);
if (unlikely(ret))
return ret;
asma = kmem_cache_zalloc(ashmem_area_cachep, GFP_KERNEL);
if (unlikely(!asma))
return -ENOMEM;
INIT_LIST_HEAD(&asma->unpinned_list);
memcpy(asma->name, ASHMEM_NAME_PREFIX, ASHMEM_NAME_PREFIX_LEN);
asma->prot_mask = PROT_MASK;
file->private_data = asma;
return 0;
}
static int ashmem_release(struct inode *ignored, struct file *file)
{
struct ashmem_area *asma = file->private_data;
struct ashmem_range *range, *next;
mutex_lock(&ashmem_mutex);
list_for_each_entry_safe(range, next, &asma->unpinned_list, unpinned)
range_del(range);
mutex_unlock(&ashmem_mutex);
if (asma->file)
fput(asma->file);
kmem_cache_free(ashmem_area_cachep, asma);
return 0;
}
static ssize_t ashmem_read(struct file *file, char __user *buf,
size_t len, loff_t *pos)
{
struct ashmem_area *asma = file->private_data;
int ret = 0;
mutex_lock(&ashmem_mutex);
/* If size is not set, or set to 0, always return EOF. */
if (asma->size == 0)
goto out;
if (!asma->file) {
ret = -EBADF;
goto out;
}
ret = asma->file->f_op->read(asma->file, buf, len, pos);
if (ret < 0)
goto out;
/** Update backing file pos, since f_ops->read() doesn't */
asma->file->f_pos = *pos;
out:
mutex_unlock(&ashmem_mutex);
return ret;
}
static loff_t ashmem_llseek(struct file *file, loff_t offset, int origin)
{
struct ashmem_area *asma = file->private_data;
int ret;
mutex_lock(&ashmem_mutex);
if (asma->size == 0) {
ret = -EINVAL;
goto out;
}
if (!asma->file) {
ret = -EBADF;
goto out;
}
ret = asma->file->f_op->llseek(asma->file, offset, origin);
if (ret < 0)
goto out;
/** Copy f_pos from backing file, since f_ops->llseek() sets it */
file->f_pos = asma->file->f_pos;
out:
mutex_unlock(&ashmem_mutex);
return ret;
}
static inline unsigned long calc_vm_may_flags(unsigned long prot)
{
return _calc_vm_trans(prot, PROT_READ, VM_MAYREAD) |
_calc_vm_trans(prot, PROT_WRITE, VM_MAYWRITE) |
_calc_vm_trans(prot, PROT_EXEC, VM_MAYEXEC);
}
static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
{
struct ashmem_area *asma = file->private_data;
int ret = 0;
mutex_lock(&ashmem_mutex);
/* user needs to SET_SIZE before mapping */
if (unlikely(!asma->size)) {
ret = -EINVAL;
goto out;
}
/* requested protection bits must match our allowed protection mask */
if (unlikely((vma->vm_flags & ~calc_vm_prot_bits(asma->prot_mask)) &
calc_vm_prot_bits(PROT_MASK))) {
ret = -EPERM;
goto out;
}
vma->vm_flags &= ~calc_vm_may_flags(~asma->prot_mask);
if (!asma->file) {
char *name = ASHMEM_NAME_DEF;
struct file *vmfile;
if (asma->name[ASHMEM_NAME_PREFIX_LEN] != '\0')
name = asma->name;
/* ... and allocate the backing shmem file */
vmfile = shmem_file_setup(name, asma->size, vma->vm_flags);
if (unlikely(IS_ERR(vmfile))) {
ret = PTR_ERR(vmfile);
goto out;
}
asma->file = vmfile;
}
get_file(asma->file);
/*
* XXX - Reworked to use shmem_zero_setup() instead of
* shmem_set_file while we're in staging. -jstultz
*/
if (vma->vm_flags & VM_SHARED) {
ret = shmem_zero_setup(vma);
if (ret) {
fput(asma->file);
goto out;
}
}
if (vma->vm_file)
fput(vma->vm_file);
vma->vm_file = asma->file;
vma->vm_flags |= VM_CAN_NONLINEAR;
out:
mutex_unlock(&ashmem_mutex);
return ret;
}
/*
* ashmem_shrink - our cache shrinker, called from mm/vmscan.c :: shrink_slab
*
* 'nr_to_scan' is the number of objects (pages) to prune, or 0 to query how
* many objects (pages) we have in total.
*
* 'gfp_mask' is the mask of the allocation that got us into this mess.
*
* Return value is the number of objects (pages) remaining, or -1 if we cannot
* proceed without risk of deadlock (due to gfp_mask).
*
* We approximate LRU via least-recently-unpinned, jettisoning unpinned partial
* chunks of ashmem regions LRU-wise one-at-a-time until we hit 'nr_to_scan'
* pages freed.
*/
static int ashmem_shrink(struct shrinker *s, struct shrink_control *sc)
{
struct ashmem_range *range, *next;
/* We might recurse into filesystem code, so bail out if necessary */
if (sc->nr_to_scan && !(sc->gfp_mask & __GFP_FS))
return -1;
if (!sc->nr_to_scan)
return lru_count;
mutex_lock(&ashmem_mutex);
list_for_each_entry_safe(range, next, &ashmem_lru_list, lru) {
struct inode *inode = range->asma->file->f_dentry->d_inode;
loff_t start = range->pgstart * PAGE_SIZE;
loff_t end = (range->pgend + 1) * PAGE_SIZE - 1;
vmtruncate_range(inode, start, end);
range->purged = ASHMEM_WAS_PURGED;
lru_del(range);
sc->nr_to_scan -= range_size(range);
if (sc->nr_to_scan <= 0)
break;
}
mutex_unlock(&ashmem_mutex);
return lru_count;
}
static struct shrinker ashmem_shrinker = {
.shrink = ashmem_shrink,
.seeks = DEFAULT_SEEKS * 4,
};
static int set_prot_mask(struct ashmem_area *asma, unsigned long prot)
{
int ret = 0;
mutex_lock(&ashmem_mutex);
/* the user can only remove, not add, protection bits */
if (unlikely((asma->prot_mask & prot) != prot)) {
ret = -EINVAL;
goto out;
}
/* does the application expect PROT_READ to imply PROT_EXEC? */
if ((prot & PROT_READ) && (current->personality & READ_IMPLIES_EXEC))
prot |= PROT_EXEC;
asma->prot_mask = prot;
out:
mutex_unlock(&ashmem_mutex);
return ret;
}
static int set_name(struct ashmem_area *asma, void __user *name)
{
int ret = 0;
mutex_lock(&ashmem_mutex);
/* cannot change an existing mapping's name */
if (unlikely(asma->file)) {
ret = -EINVAL;
goto out;
}
if (unlikely(copy_from_user(asma->name + ASHMEM_NAME_PREFIX_LEN,
name, ASHMEM_NAME_LEN)))
ret = -EFAULT;
asma->name[ASHMEM_FULL_NAME_LEN-1] = '\0';
out:
mutex_unlock(&ashmem_mutex);
return ret;
}
static int get_name(struct ashmem_area *asma, void __user *name)
{
int ret = 0;
mutex_lock(&ashmem_mutex);
if (asma->name[ASHMEM_NAME_PREFIX_LEN] != '\0') {
size_t len;
/*
* Copying only `len', instead of ASHMEM_NAME_LEN, bytes
* prevents us from revealing one user's stack to another.
*/
len = strlen(asma->name + ASHMEM_NAME_PREFIX_LEN) + 1;
if (unlikely(copy_to_user(name,
asma->name + ASHMEM_NAME_PREFIX_LEN, len)))
ret = -EFAULT;
} else {
if (unlikely(copy_to_user(name, ASHMEM_NAME_DEF,
sizeof(ASHMEM_NAME_DEF))))
ret = -EFAULT;
}
mutex_unlock(&ashmem_mutex);
return ret;
}
/*
* ashmem_pin - pin the given ashmem region, returning whether it was
* previously purged (ASHMEM_WAS_PURGED) or not (ASHMEM_NOT_PURGED).
*
* Caller must hold ashmem_mutex.
*/
static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
{
struct ashmem_range *range, *next;
int ret = ASHMEM_NOT_PURGED;
list_for_each_entry_safe(range, next, &asma->unpinned_list, unpinned) {
/* moved past last applicable page; we can short circuit */
if (range_before_page(range, pgstart))
break;
/*
* The user can ask us to pin pages that span multiple ranges,
* or to pin pages that aren't even unpinned, so this is messy.
*
* Four cases:
* 1. The requested range subsumes an existing range, so we
* just remove the entire matching range.
* 2. The requested range overlaps the start of an existing
* range, so we just update that range.
* 3. The requested range overlaps the end of an existing
* range, so we just update that range.
* 4. The requested range punches a hole in an existing range,
* so we have to update one side of the range and then
* create a new range for the other side.
*/
if (page_range_in_range(range, pgstart, pgend)) {
ret |= range->purged;
/* Case #1: Easy. Just nuke the whole thing. */
if (page_range_subsumes_range(range, pgstart, pgend)) {
range_del(range);
continue;
}
/* Case #2: We overlap from the start, so adjust it */
if (range->pgstart >= pgstart) {
range_shrink(range, pgend + 1, range->pgend);
continue;
}
/* Case #3: We overlap from the rear, so adjust it */
if (range->pgend <= pgend) {
range_shrink(range, range->pgstart, pgstart-1);
continue;
}
/*
* Case #4: We eat a chunk out of the middle. A bit
* more complicated, we allocate a new range for the
* second half and adjust the first chunk's endpoint.
*/
range_alloc(asma, range, range->purged,
pgend + 1, range->pgend);
range_shrink(range, range->pgstart, pgstart - 1);
break;
}
}
return ret;
}
/*
* ashmem_unpin - unpin the given range of pages. Returns zero on success.
*
* Caller must hold ashmem_mutex.
*/
static int ashmem_unpin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
{
struct ashmem_range *range, *next;
unsigned int purged = ASHMEM_NOT_PURGED;
restart:
list_for_each_entry_safe(range, next, &asma->unpinned_list, unpinned) {
/* short circuit: this is our insertion point */
if (range_before_page(range, pgstart))
break;
/*
* The user can ask us to unpin pages that are already entirely
* or partially pinned. We handle those two cases here.
*/
if (page_range_subsumed_by_range(range, pgstart, pgend))
return 0;
if (page_range_in_range(range, pgstart, pgend)) {
pgstart = min_t(size_t, range->pgstart, pgstart),
pgend = max_t(size_t, range->pgend, pgend);
purged |= range->purged;
range_del(range);
goto restart;
}
}
return range_alloc(asma, range, purged, pgstart, pgend);
}
/*
* ashmem_get_pin_status - Returns ASHMEM_IS_UNPINNED if _any_ pages in the
* given interval are unpinned and ASHMEM_IS_PINNED otherwise.
*
* Caller must hold ashmem_mutex.
*/
static int ashmem_get_pin_status(struct ashmem_area *asma, size_t pgstart,
size_t pgend)
{
struct ashmem_range *range;
int ret = ASHMEM_IS_PINNED;
list_for_each_entry(range, &asma->unpinned_list, unpinned) {
if (range_before_page(range, pgstart))
break;
if (page_range_in_range(range, pgstart, pgend)) {
ret = ASHMEM_IS_UNPINNED;
break;
}
}
return ret;
}
static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
void __user *p)
{
struct ashmem_pin pin;
size_t pgstart, pgend;
int ret = -EINVAL;
if (unlikely(!asma->file))
return -EINVAL;
if (unlikely(copy_from_user(&pin, p, sizeof(pin))))
return -EFAULT;
/* per custom, you can pass zero for len to mean "everything onward" */
if (!pin.len)
pin.len = PAGE_ALIGN(asma->size) - pin.offset;
if (unlikely((pin.offset | pin.len) & ~PAGE_MASK))
return -EINVAL;
if (unlikely(((__u32) -1) - pin.offset < pin.len))
return -EINVAL;
if (unlikely(PAGE_ALIGN(asma->size) < pin.offset + pin.len))
return -EINVAL;
pgstart = pin.offset / PAGE_SIZE;
pgend = pgstart + (pin.len / PAGE_SIZE) - 1;
mutex_lock(&ashmem_mutex);
switch (cmd) {
case ASHMEM_PIN:
ret = ashmem_pin(asma, pgstart, pgend);
break;
case ASHMEM_UNPIN:
ret = ashmem_unpin(asma, pgstart, pgend);
break;
case ASHMEM_GET_PIN_STATUS:
ret = ashmem_get_pin_status(asma, pgstart, pgend);
break;
}
mutex_unlock(&ashmem_mutex);
return ret;
}
static long ashmem_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
struct ashmem_area *asma = file->private_data;
long ret = -ENOTTY;
switch (cmd) {
case ASHMEM_SET_NAME:
ret = set_name(asma, (void __user *) arg);
break;
case ASHMEM_GET_NAME:
ret = get_name(asma, (void __user *) arg);
break;
case ASHMEM_SET_SIZE:
ret = -EINVAL;
if (!asma->file) {
ret = 0;
asma->size = (size_t) arg;
}
break;
case ASHMEM_GET_SIZE:
ret = asma->size;
break;
case ASHMEM_SET_PROT_MASK:
ret = set_prot_mask(asma, arg);
break;
case ASHMEM_GET_PROT_MASK:
ret = asma->prot_mask;
break;
case ASHMEM_PIN:
case ASHMEM_UNPIN:
case ASHMEM_GET_PIN_STATUS:
ret = ashmem_pin_unpin(asma, cmd, (void __user *) arg);
break;
case ASHMEM_PURGE_ALL_CACHES:
ret = -EPERM;
if (capable(CAP_SYS_ADMIN)) {
struct shrink_control sc = {
.gfp_mask = GFP_KERNEL,
.nr_to_scan = 0,
};
ret = ashmem_shrink(&ashmem_shrinker, &sc);
sc.nr_to_scan = ret;
ashmem_shrink(&ashmem_shrinker, &sc);
}
break;
}
return ret;
}
static struct file_operations ashmem_fops = {
.owner = THIS_MODULE,
.open = ashmem_open,
.release = ashmem_release,
.read = ashmem_read,
.llseek = ashmem_llseek,
.mmap = ashmem_mmap,
.unlocked_ioctl = ashmem_ioctl,
.compat_ioctl = ashmem_ioctl,
};
static struct miscdevice ashmem_misc = {
.minor = MISC_DYNAMIC_MINOR,
.name = "ashmem",
.fops = &ashmem_fops,
};
static int __init ashmem_init(void)
{
int ret;
ashmem_area_cachep = kmem_cache_create("ashmem_area_cache",
sizeof(struct ashmem_area),
0, 0, NULL);
if (unlikely(!ashmem_area_cachep)) {
printk(KERN_ERR "ashmem: failed to create slab cache\n");
return -ENOMEM;
}
ashmem_range_cachep = kmem_cache_create("ashmem_range_cache",
sizeof(struct ashmem_range),
0, 0, NULL);
if (unlikely(!ashmem_range_cachep)) {
printk(KERN_ERR "ashmem: failed to create slab cache\n");
return -ENOMEM;
}
ret = misc_register(&ashmem_misc);
if (unlikely(ret)) {
printk(KERN_ERR "ashmem: failed to register misc device!\n");
return ret;
}
register_shrinker(&ashmem_shrinker);
printk(KERN_INFO "ashmem: initialized\n");
return 0;
}
static void __exit ashmem_exit(void)
{
int ret;
unregister_shrinker(&ashmem_shrinker);
ret = misc_deregister(&ashmem_misc);
if (unlikely(ret))
printk(KERN_ERR "ashmem: failed to unregister misc device!\n");
kmem_cache_destroy(ashmem_range_cachep);
kmem_cache_destroy(ashmem_area_cachep);
printk(KERN_INFO "ashmem: unloaded\n");
}
module_init(ashmem_init);
module_exit(ashmem_exit);
MODULE_LICENSE("GPL");
/*
* include/linux/ashmem.h
*
* Copyright 2008 Google Inc.
* Author: Robert Love
*
* This file is dual licensed. It may be redistributed and/or modified
* under the terms of the Apache 2.0 License OR version 2 of the GNU
* General Public License.
*/
#ifndef _LINUX_ASHMEM_H
#define _LINUX_ASHMEM_H
#include <linux/limits.h>
#include <linux/ioctl.h>
#define ASHMEM_NAME_LEN 256
#define ASHMEM_NAME_DEF "dev/ashmem"
/* Return values from ASHMEM_PIN: Was the mapping purged while unpinned? */
#define ASHMEM_NOT_PURGED 0
#define ASHMEM_WAS_PURGED 1
/* Return values from ASHMEM_GET_PIN_STATUS: Is the mapping pinned? */
#define ASHMEM_IS_UNPINNED 0
#define ASHMEM_IS_PINNED 1
struct ashmem_pin {
__u32 offset; /* offset into region, in bytes, page-aligned */
__u32 len; /* length forward from offset, in bytes, page-aligned */
};
#define __ASHMEMIOC 0x77
#define ASHMEM_SET_NAME _IOW(__ASHMEMIOC, 1, char[ASHMEM_NAME_LEN])
#define ASHMEM_GET_NAME _IOR(__ASHMEMIOC, 2, char[ASHMEM_NAME_LEN])
#define ASHMEM_SET_SIZE _IOW(__ASHMEMIOC, 3, size_t)
#define ASHMEM_GET_SIZE _IO(__ASHMEMIOC, 4)
#define ASHMEM_SET_PROT_MASK _IOW(__ASHMEMIOC, 5, unsigned long)
#define ASHMEM_GET_PROT_MASK _IO(__ASHMEMIOC, 6)
#define ASHMEM_PIN _IOW(__ASHMEMIOC, 7, struct ashmem_pin)
#define ASHMEM_UNPIN _IOW(__ASHMEMIOC, 8, struct ashmem_pin)
#define ASHMEM_GET_PIN_STATUS _IO(__ASHMEMIOC, 9)
#define ASHMEM_PURGE_ALL_CACHES _IO(__ASHMEMIOC, 10)
#endif /* _LINUX_ASHMEM_H */
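A hedged userspace sketch of the pin/unpin flow these ioctls support, based on the driver code above; the "/dev/ashmem" node name and the surrounding flow are assumptions for illustration.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/ashmem.h>	/* the header above */

int example_ashmem(size_t size)
{
	char name[ASHMEM_NAME_LEN] = "example-region";
	struct ashmem_pin pin = { .offset = 0, .len = 0 };	/* len 0 == "everything onward" */
	int fd = open("/dev/ashmem", O_RDWR);	/* node name is an assumption */
	void *addr;

	if (fd < 0)
		return -1;

	ioctl(fd, ASHMEM_SET_NAME, name);
	ioctl(fd, ASHMEM_SET_SIZE, size);	/* must happen before mmap() */

	addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		close(fd);
		return -1;
	}

	/* ... use the memory ... */

	ioctl(fd, ASHMEM_UNPIN, &pin);	/* kernel may purge it under memory pressure */
	if (ioctl(fd, ASHMEM_PIN, &pin) == ASHMEM_WAS_PURGED) {
		/* contents were reclaimed; caller must regenerate them */
	}

	munmap(addr, size);
	close(fd);
	return 0;
}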
(This diff has been collapsed.)
/*
* Copyright (C) 2008 Google, Inc.
*
* Based on, but no longer compatible with, the original
* OpenBinder.org binder driver interface, which is:
*
* Copyright (c) 2005 Palmsource, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _LINUX_BINDER_H
#define _LINUX_BINDER_H
#include <linux/ioctl.h>
#define B_PACK_CHARS(c1, c2, c3, c4) \
((((c1)<<24)) | (((c2)<<16)) | (((c3)<<8)) | (c4))
#define B_TYPE_LARGE 0x85
enum {
BINDER_TYPE_BINDER = B_PACK_CHARS('s', 'b', '*', B_TYPE_LARGE),
BINDER_TYPE_WEAK_BINDER = B_PACK_CHARS('w', 'b', '*', B_TYPE_LARGE),
BINDER_TYPE_HANDLE = B_PACK_CHARS('s', 'h', '*', B_TYPE_LARGE),
BINDER_TYPE_WEAK_HANDLE = B_PACK_CHARS('w', 'h', '*', B_TYPE_LARGE),
BINDER_TYPE_FD = B_PACK_CHARS('f', 'd', '*', B_TYPE_LARGE),
};
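/* Editor's note: B_PACK_CHARS() packs four characters into one word, e.g.
 * BINDER_TYPE_BINDER == ('s' << 24) | ('b' << 16) | ('*' << 8) | 0x85
 *                    == 0x73622a85. */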
enum {
FLAT_BINDER_FLAG_PRIORITY_MASK = 0xff,
FLAT_BINDER_FLAG_ACCEPTS_FDS = 0x100,
};
/*
* This is the flattened representation of a Binder object for transfer
* between processes. The 'offsets' supplied as part of a binder transaction
* contains offsets into the data where these structures occur. The Binder
* driver takes care of re-writing the structure type and data as it moves
* between processes.
*/
struct flat_binder_object {
/* 8 bytes for large_flat_header. */
unsigned long type;
unsigned long flags;
/* 8 bytes of data. */
union {
void *binder; /* local object */
signed long handle; /* remote object */
};
/* extra data associated with local object */
void *cookie;
};
/*
* On 64-bit platforms where user code may run in 32-bits the driver must
* translate the buffer (and local binder) addresses appropriately.
*/
struct binder_write_read {
signed long write_size; /* bytes to write */
signed long write_consumed; /* bytes consumed by driver */
unsigned long write_buffer;
signed long read_size; /* bytes to read */
signed long read_consumed; /* bytes consumed by driver */
unsigned long read_buffer;
};
/* Use with BINDER_VERSION, driver fills in fields. */
struct binder_version {
/* driver protocol version -- increment with incompatible change */
signed long protocol_version;
};
/* This is the current protocol version. */
#define BINDER_CURRENT_PROTOCOL_VERSION 7
#define BINDER_WRITE_READ _IOWR('b', 1, struct binder_write_read)
#define BINDER_SET_IDLE_TIMEOUT _IOW('b', 3, int64_t)
#define BINDER_SET_MAX_THREADS _IOW('b', 5, size_t)
#define BINDER_SET_IDLE_PRIORITY _IOW('b', 6, int)
#define BINDER_SET_CONTEXT_MGR _IOW('b', 7, int)
#define BINDER_THREAD_EXIT _IOW('b', 8, int)
#define BINDER_VERSION _IOWR('b', 9, struct binder_version)
/*
* NOTE: Two special error codes you should check for when calling
* in to the driver are:
*
 * EINTR -- The operation has been interrupted. This should be
* handled by retrying the ioctl() until a different error code
* is returned.
*
* ECONNREFUSED -- The driver is no longer accepting operations
* from your process. That is, the process is being destroyed.
* You should handle this by exiting from your process. Note
* that once this error code is returned, all further calls to
* the driver from any thread will return this same code.
*/
enum transaction_flags {
TF_ONE_WAY = 0x01, /* this is a one-way call: async, no return */
TF_ROOT_OBJECT = 0x04, /* contents are the component's root object */
TF_STATUS_CODE = 0x08, /* contents are a 32-bit status code */
TF_ACCEPT_FDS = 0x10, /* allow replies with file descriptors */
};
struct binder_transaction_data {
/* The first two are only used for bcTRANSACTION and brTRANSACTION,
* identifying the target and contents of the transaction.
*/
union {
size_t handle; /* target descriptor of command transaction */
void *ptr; /* target descriptor of return transaction */
} target;
void *cookie; /* target object cookie */
unsigned int code; /* transaction command */
/* General information about the transaction. */
unsigned int flags;
pid_t sender_pid;
uid_t sender_euid;
size_t data_size; /* number of bytes of data */
size_t offsets_size; /* number of bytes of offsets */
/* If this transaction is inline, the data immediately
* follows here; otherwise, it ends with a pointer to
* the data buffer.
*/
union {
struct {
/* transaction data */
const void *buffer;
/* offsets from buffer to flat_binder_object structs */
const void *offsets;
} ptr;
uint8_t buf[8];
} data;
};
struct binder_ptr_cookie {
void *ptr;
void *cookie;
};
struct binder_pri_desc {
int priority;
int desc;
};
struct binder_pri_ptr_cookie {
int priority;
void *ptr;
void *cookie;
};
enum BinderDriverReturnProtocol {
BR_ERROR = _IOR('r', 0, int),
/*
* int: error code
*/
BR_OK = _IO('r', 1),
/* No parameters! */
BR_TRANSACTION = _IOR('r', 2, struct binder_transaction_data),
BR_REPLY = _IOR('r', 3, struct binder_transaction_data),
/*
* binder_transaction_data: the received command.
*/
BR_ACQUIRE_RESULT = _IOR('r', 4, int),
/*
* not currently supported
* int: 0 if the last bcATTEMPT_ACQUIRE was not successful.
* Else the remote object has acquired a primary reference.
*/
BR_DEAD_REPLY = _IO('r', 5),
/*
* The target of the last transaction (either a bcTRANSACTION or
* a bcATTEMPT_ACQUIRE) is no longer with us. No parameters.
*/
BR_TRANSACTION_COMPLETE = _IO('r', 6),
/*
* No parameters... always refers to the last transaction requested
* (including replies). Note that this will be sent even for
* asynchronous transactions.
*/
BR_INCREFS = _IOR('r', 7, struct binder_ptr_cookie),
BR_ACQUIRE = _IOR('r', 8, struct binder_ptr_cookie),
BR_RELEASE = _IOR('r', 9, struct binder_ptr_cookie),
BR_DECREFS = _IOR('r', 10, struct binder_ptr_cookie),
/*
* void *: ptr to binder
* void *: cookie for binder
*/
BR_ATTEMPT_ACQUIRE = _IOR('r', 11, struct binder_pri_ptr_cookie),
/*
* not currently supported
* int: priority
* void *: ptr to binder
* void *: cookie for binder
*/
BR_NOOP = _IO('r', 12),
/*
* No parameters. Do nothing and examine the next command. It exists
* primarily so that we can replace it with a BR_SPAWN_LOOPER command.
*/
BR_SPAWN_LOOPER = _IO('r', 13),
/*
* No parameters. The driver has determined that a process has no
 * threads waiting to service incoming transactions. When a process
* receives this command, it must spawn a new service thread and
* register it via bcENTER_LOOPER.
*/
BR_FINISHED = _IO('r', 14),
/*
* not currently supported
* stop threadpool thread
*/
BR_DEAD_BINDER = _IOR('r', 15, void *),
/*
* void *: cookie
*/
BR_CLEAR_DEATH_NOTIFICATION_DONE = _IOR('r', 16, void *),
/*
* void *: cookie
*/
BR_FAILED_REPLY = _IO('r', 17),
/*
 * The last transaction (either a bcTRANSACTION or
* a bcATTEMPT_ACQUIRE) failed (e.g. out of memory). No parameters.
*/
};
enum BinderDriverCommandProtocol {
BC_TRANSACTION = _IOW('c', 0, struct binder_transaction_data),
BC_REPLY = _IOW('c', 1, struct binder_transaction_data),
/*
* binder_transaction_data: the sent command.
*/
BC_ACQUIRE_RESULT = _IOW('c', 2, int),
/*
* not currently supported
* int: 0 if the last BR_ATTEMPT_ACQUIRE was not successful.
* Else you have acquired a primary reference on the object.
*/
BC_FREE_BUFFER = _IOW('c', 3, int),
/*
* void *: ptr to transaction data received on a read
*/
BC_INCREFS = _IOW('c', 4, int),
BC_ACQUIRE = _IOW('c', 5, int),
BC_RELEASE = _IOW('c', 6, int),
BC_DECREFS = _IOW('c', 7, int),
/*
* int: descriptor
*/
BC_INCREFS_DONE = _IOW('c', 8, struct binder_ptr_cookie),
BC_ACQUIRE_DONE = _IOW('c', 9, struct binder_ptr_cookie),
/*
* void *: ptr to binder
* void *: cookie for binder
*/
BC_ATTEMPT_ACQUIRE = _IOW('c', 10, struct binder_pri_desc),
/*
* not currently supported
* int: priority
* int: descriptor
*/
BC_REGISTER_LOOPER = _IO('c', 11),
/*
* No parameters.
* Register a spawned looper thread with the device.
*/
BC_ENTER_LOOPER = _IO('c', 12),
BC_EXIT_LOOPER = _IO('c', 13),
/*
* No parameters.
* These two commands are sent as an application-level thread
* enters and exits the binder loop, respectively. They are
* used so the binder can have an accurate count of the number
* of looping threads it has available.
*/
BC_REQUEST_DEATH_NOTIFICATION = _IOW('c', 14, struct binder_ptr_cookie),
/*
* void *: ptr to binder
* void *: cookie
*/
BC_CLEAR_DEATH_NOTIFICATION = _IOW('c', 15, struct binder_ptr_cookie),
/*
* void *: ptr to binder
* void *: cookie
*/
BC_DEAD_BINDER_DONE = _IOW('c', 16, void *),
/*
* void *: cookie
*/
};
#endif /* _LINUX_BINDER_H */
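The EINTR/ECONNREFUSED note above amounts to a small retry wrapper around every call into the driver. A minimal user-space sketch, assuming the header above is on the include path and the caller has already opened the binder device (device node and setup omitted):

#include <errno.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/binder.h>

/* Reject drivers that speak a different protocol version. */
static int binder_check_version(int fd)
{
	struct binder_version vers;

	if (ioctl(fd, BINDER_VERSION, &vers) < 0)
		return -1;
	return vers.protocol_version == BINDER_CURRENT_PROTOCOL_VERSION ? 0 : -1;
}

/* Issue a BINDER_WRITE_READ, retrying on EINTR as the note above advises. */
static int binder_write_read(int fd, struct binder_write_read *bwr)
{
	int ret;

	do {
		ret = ioctl(fd, BINDER_WRITE_READ, bwr);
	} while (ret < 0 && errno == EINTR);

	if (ret < 0 && errno == ECONNREFUSED)
		exit(EXIT_FAILURE);	/* driver has shut this process out */

	return ret;
}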
/*
* drivers/misc/logger.c
*
* A Logging Subsystem
*
* Copyright (C) 2007-2008 Google, Inc.
*
* Robert Love <rlove@google.com>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/sched.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/uaccess.h>
#include <linux/poll.h>
#include <linux/slab.h>
#include <linux/time.h>
#include "logger.h"
#include <asm/ioctls.h>
/*
* struct logger_log - represents a specific log, such as 'main' or 'radio'
*
* This structure lives from module insertion until module removal, so it does
* not need additional reference counting. The structure is protected by the
* mutex 'mutex'.
*/
struct logger_log {
unsigned char *buffer;/* the ring buffer itself */
struct miscdevice misc; /* misc device representing the log */
wait_queue_head_t wq; /* wait queue for readers */
struct list_head readers; /* this log's readers */
struct mutex mutex; /* mutex protecting buffer */
size_t w_off; /* current write head offset */
size_t head; /* new readers start here */
size_t size; /* size of the log */
};
/*
* struct logger_reader - a logging device open for reading
*
* This object lives from open to release, so we don't need additional
* reference counting. The structure is protected by log->mutex.
*/
struct logger_reader {
struct logger_log *log; /* associated log */
struct list_head list; /* entry in logger_log's list */
size_t r_off; /* current read head offset */
};
/* logger_offset - returns index 'n' into the log via (optimized) modulus */
#define logger_offset(n) ((n) & (log->size - 1))
/*
* file_get_log - Given a file structure, return the associated log
*
* This isn't aesthetic. We have several goals:
*
* 1) Need to quickly obtain the associated log during an I/O operation
* 2) Readers need to maintain state (logger_reader)
* 3) Writers need to be very fast (open() should be a near no-op)
*
* In the reader case, we can trivially go file->logger_reader->logger_log.
* For a writer, we don't want to maintain a logger_reader, so we just go
* file->logger_log. Thus what file->private_data points at depends on whether
* or not the file was opened for reading. This function hides that dirtiness.
*/
static inline struct logger_log *file_get_log(struct file *file)
{
if (file->f_mode & FMODE_READ) {
struct logger_reader *reader = file->private_data;
return reader->log;
} else
return file->private_data;
}
/*
 * get_entry_len - Grabs the total length (header plus payload) of the entry
 * starting at 'off'.
*
* Caller needs to hold log->mutex.
*/
static __u32 get_entry_len(struct logger_log *log, size_t off)
{
__u16 val;
switch (log->size - off) {
case 1:
memcpy(&val, log->buffer + off, 1);
memcpy(((char *) &val) + 1, log->buffer, 1);
break;
default:
memcpy(&val, log->buffer + off, 2);
}
return sizeof(struct logger_entry) + val;
}
/*
* do_read_log_to_user - reads exactly 'count' bytes from 'log' into the
* user-space buffer 'buf'. Returns 'count' on success.
*
* Caller must hold log->mutex.
*/
static ssize_t do_read_log_to_user(struct logger_log *log,
struct logger_reader *reader,
char __user *buf,
size_t count)
{
size_t len;
/*
* We read from the log in two disjoint operations. First, we read from
* the current read head offset up to 'count' bytes or to the end of
* the log, whichever comes first.
*/
len = min(count, log->size - reader->r_off);
if (copy_to_user(buf, log->buffer + reader->r_off, len))
return -EFAULT;
/*
* Second, we read any remaining bytes, starting back at the head of
* the log.
*/
if (count != len)
if (copy_to_user(buf + len, log->buffer, count - len))
return -EFAULT;
reader->r_off = logger_offset(reader->r_off + count);
return count;
}
/*
* logger_read - our log's read() method
*
* Behavior:
*
* - O_NONBLOCK works
* - If there are no log entries to read, blocks until log is written to
* - Atomically reads exactly one log entry
*
* Optimal read size is LOGGER_ENTRY_MAX_LEN. Will set errno to EINVAL if read
* buffer is insufficient to hold next entry.
*/
static ssize_t logger_read(struct file *file, char __user *buf,
size_t count, loff_t *pos)
{
struct logger_reader *reader = file->private_data;
struct logger_log *log = reader->log;
ssize_t ret;
DEFINE_WAIT(wait);
start:
while (1) {
prepare_to_wait(&log->wq, &wait, TASK_INTERRUPTIBLE);
mutex_lock(&log->mutex);
ret = (log->w_off == reader->r_off);
mutex_unlock(&log->mutex);
if (!ret)
break;
if (file->f_flags & O_NONBLOCK) {
ret = -EAGAIN;
break;
}
if (signal_pending(current)) {
ret = -EINTR;
break;
}
schedule();
}
finish_wait(&log->wq, &wait);
if (ret)
return ret;
mutex_lock(&log->mutex);
/* is there still something to read or did we race? */
if (unlikely(log->w_off == reader->r_off)) {
mutex_unlock(&log->mutex);
goto start;
}
/* get the size of the next entry */
ret = get_entry_len(log, reader->r_off);
if (count < ret) {
ret = -EINVAL;
goto out;
}
/* get exactly one entry from the log */
ret = do_read_log_to_user(log, reader, buf, ret);
out:
mutex_unlock(&log->mutex);
return ret;
}
/*
* get_next_entry - return the offset of the first valid entry at least 'len'
* bytes after 'off'.
*
* Caller must hold log->mutex.
*/
static size_t get_next_entry(struct logger_log *log, size_t off, size_t len)
{
size_t count = 0;
do {
size_t nr = get_entry_len(log, off);
off = logger_offset(off + nr);
count += nr;
} while (count < len);
return off;
}
/*
* clock_interval - is a < c < b in mod-space? Put another way, does the line
* from a to b cross c?
*/
static inline int clock_interval(size_t a, size_t b, size_t c)
{
if (b < a) {
if (a < c || b >= c)
return 1;
} else {
if (a < c && b >= c)
return 1;
}
return 0;
}
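/*
 * Worked example: in a 16-byte log, a write advancing w_off from a = 12
 * to b = 4 wraps past the end of the buffer. clock_interval(12, 4, 14)
 * returns 1 (offset 14 was overwritten), while clock_interval(12, 4, 8)
 * returns 0 (offset 8 was not touched), which is exactly the test that
 * fix_up_readers() below relies on.
 */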
/*
* fix_up_readers - walk the list of all readers and "fix up" any who were
* lapped by the writer; also do the same for the default "start head".
* We do this by "pulling forward" the readers and start head to the first
* entry after the new write head.
*
* The caller needs to hold log->mutex.
*/
static void fix_up_readers(struct logger_log *log, size_t len)
{
size_t old = log->w_off;
size_t new = logger_offset(old + len);
struct logger_reader *reader;
if (clock_interval(old, new, log->head))
log->head = get_next_entry(log, log->head, len);
list_for_each_entry(reader, &log->readers, list)
if (clock_interval(old, new, reader->r_off))
reader->r_off = get_next_entry(log, reader->r_off, len);
}
/*
 * do_write_log - writes 'count' bytes from 'buf' to 'log'
*
* The caller needs to hold log->mutex.
*/
static void do_write_log(struct logger_log *log, const void *buf, size_t count)
{
size_t len;
len = min(count, log->size - log->w_off);
memcpy(log->buffer + log->w_off, buf, len);
if (count != len)
memcpy(log->buffer, buf + len, count - len);
log->w_off = logger_offset(log->w_off + count);
}
/*
 * do_write_log_from_user - writes 'count' bytes from the user-space buffer
 * 'buf' to the log 'log'
*
* The caller needs to hold log->mutex.
*
* Returns 'count' on success, negative error code on failure.
*/
static ssize_t do_write_log_from_user(struct logger_log *log,
const void __user *buf, size_t count)
{
size_t len;
len = min(count, log->size - log->w_off);
if (len && copy_from_user(log->buffer + log->w_off, buf, len))
return -EFAULT;
if (count != len)
if (copy_from_user(log->buffer, buf + len, count - len))
return -EFAULT;
log->w_off = logger_offset(log->w_off + count);
return count;
}
/*
* logger_aio_write - our write method, implementing support for write(),
* writev(), and aio_write(). Writes are our fast path, and we try to optimize
* them above all else.
*/
ssize_t logger_aio_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t ppos)
{
struct logger_log *log = file_get_log(iocb->ki_filp);
size_t orig = log->w_off;
struct logger_entry header;
struct timespec now;
ssize_t ret = 0;
now = current_kernel_time();
header.pid = current->tgid;
header.tid = current->pid;
header.sec = now.tv_sec;
header.nsec = now.tv_nsec;
header.len = min_t(size_t, iocb->ki_left, LOGGER_ENTRY_MAX_PAYLOAD);
/* null writes succeed, return zero */
if (unlikely(!header.len))
return 0;
mutex_lock(&log->mutex);
/*
* Fix up any readers, pulling them forward to the first readable
* entry after (what will be) the new write offset. We do this now
* because if we partially fail, we can end up with clobbered log
* entries that encroach on readable buffer.
*/
fix_up_readers(log, sizeof(struct logger_entry) + header.len);
do_write_log(log, &header, sizeof(struct logger_entry));
while (nr_segs-- > 0) {
size_t len;
ssize_t nr;
/* figure out how much of this vector we can keep */
len = min_t(size_t, iov->iov_len, header.len - ret);
/* write out this segment's payload */
nr = do_write_log_from_user(log, iov->iov_base, len);
if (unlikely(nr < 0)) {
log->w_off = orig;
mutex_unlock(&log->mutex);
return nr;
}
iov++;
ret += nr;
}
mutex_unlock(&log->mutex);
/* wake up any blocked readers */
wake_up_interruptible(&log->wq);
return ret;
}
static struct logger_log *get_log_from_minor(int);
/*
* logger_open - the log's open() file operation
*
* Note how near a no-op this is in the write-only case. Keep it that way!
*/
static int logger_open(struct inode *inode, struct file *file)
{
struct logger_log *log;
int ret;
ret = nonseekable_open(inode, file);
if (ret)
return ret;
log = get_log_from_minor(MINOR(inode->i_rdev));
if (!log)
return -ENODEV;
if (file->f_mode & FMODE_READ) {
struct logger_reader *reader;
reader = kmalloc(sizeof(struct logger_reader), GFP_KERNEL);
if (!reader)
return -ENOMEM;
reader->log = log;
INIT_LIST_HEAD(&reader->list);
mutex_lock(&log->mutex);
reader->r_off = log->head;
list_add_tail(&reader->list, &log->readers);
mutex_unlock(&log->mutex);
file->private_data = reader;
} else
file->private_data = log;
return 0;
}
/*
* logger_release - the log's release file operation
*
* Note this is a total no-op in the write-only case. Keep it that way!
*/
static int logger_release(struct inode *ignored, struct file *file)
{
if (file->f_mode & FMODE_READ) {
struct logger_reader *reader = file->private_data;
list_del(&reader->list);
kfree(reader);
}
return 0;
}
/*
* logger_poll - the log's poll file operation, for poll/select/epoll
*
* Note we always return POLLOUT, because you can always write() to the log.
* Note also that, strictly speaking, a return value of POLLIN does not
* guarantee that the log is readable without blocking, as there is a small
* chance that the writer can lap the reader in the interim between poll()
* returning and the read() request.
*/
static unsigned int logger_poll(struct file *file, poll_table *wait)
{
struct logger_reader *reader;
struct logger_log *log;
unsigned int ret = POLLOUT | POLLWRNORM;
if (!(file->f_mode & FMODE_READ))
return ret;
reader = file->private_data;
log = reader->log;
poll_wait(file, &log->wq, wait);
mutex_lock(&log->mutex);
if (log->w_off != reader->r_off)
ret |= POLLIN | POLLRDNORM;
mutex_unlock(&log->mutex);
return ret;
}
static long logger_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
struct logger_log *log = file_get_log(file);
struct logger_reader *reader;
long ret = -ENOTTY;
mutex_lock(&log->mutex);
switch (cmd) {
case LOGGER_GET_LOG_BUF_SIZE:
ret = log->size;
break;
case LOGGER_GET_LOG_LEN:
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
break;
}
reader = file->private_data;
if (log->w_off >= reader->r_off)
ret = log->w_off - reader->r_off;
else
ret = (log->size - reader->r_off) + log->w_off;
break;
case LOGGER_GET_NEXT_ENTRY_LEN:
if (!(file->f_mode & FMODE_READ)) {
ret = -EBADF;
break;
}
reader = file->private_data;
if (log->w_off != reader->r_off)
ret = get_entry_len(log, reader->r_off);
else
ret = 0;
break;
case LOGGER_FLUSH_LOG:
if (!(file->f_mode & FMODE_WRITE)) {
ret = -EBADF;
break;
}
list_for_each_entry(reader, &log->readers, list)
reader->r_off = log->w_off;
log->head = log->w_off;
ret = 0;
break;
}
mutex_unlock(&log->mutex);
return ret;
}
static const struct file_operations logger_fops = {
.owner = THIS_MODULE,
.read = logger_read,
.aio_write = logger_aio_write,
.poll = logger_poll,
.unlocked_ioctl = logger_ioctl,
.compat_ioctl = logger_ioctl,
.open = logger_open,
.release = logger_release,
};
/*
* Defines a log structure with name 'NAME' and a size of 'SIZE' bytes, which
* must be a power of two, greater than LOGGER_ENTRY_MAX_LEN, and less than
* LONG_MAX minus LOGGER_ENTRY_MAX_LEN.
*/
#define DEFINE_LOGGER_DEVICE(VAR, NAME, SIZE) \
static unsigned char _buf_ ## VAR[SIZE]; \
static struct logger_log VAR = { \
.buffer = _buf_ ## VAR, \
.misc = { \
.minor = MISC_DYNAMIC_MINOR, \
.name = NAME, \
.fops = &logger_fops, \
.parent = NULL, \
}, \
.wq = __WAIT_QUEUE_HEAD_INITIALIZER(VAR .wq), \
.readers = LIST_HEAD_INIT(VAR .readers), \
.mutex = __MUTEX_INITIALIZER(VAR .mutex), \
.w_off = 0, \
.head = 0, \
.size = SIZE, \
};
DEFINE_LOGGER_DEVICE(log_main, LOGGER_LOG_MAIN, 256*1024)
DEFINE_LOGGER_DEVICE(log_events, LOGGER_LOG_EVENTS, 256*1024)
DEFINE_LOGGER_DEVICE(log_radio, LOGGER_LOG_RADIO, 256*1024)
DEFINE_LOGGER_DEVICE(log_system, LOGGER_LOG_SYSTEM, 256*1024)
static struct logger_log *get_log_from_minor(int minor)
{
if (log_main.misc.minor == minor)
return &log_main;
if (log_events.misc.minor == minor)
return &log_events;
if (log_radio.misc.minor == minor)
return &log_radio;
if (log_system.misc.minor == minor)
return &log_system;
return NULL;
}
static int __init init_log(struct logger_log *log)
{
int ret;
ret = misc_register(&log->misc);
if (unlikely(ret)) {
printk(KERN_ERR "logger: failed to register misc "
"device for log '%s'!\n", log->misc.name);
return ret;
}
printk(KERN_INFO "logger: created %luK log '%s'\n",
(unsigned long) log->size >> 10, log->misc.name);
return 0;
}
static int __init logger_init(void)
{
int ret;
ret = init_log(&log_main);
if (unlikely(ret))
goto out;
ret = init_log(&log_events);
if (unlikely(ret))
goto out;
ret = init_log(&log_radio);
if (unlikely(ret))
goto out;
ret = init_log(&log_system);
if (unlikely(ret))
goto out;
out:
return ret;
}
device_initcall(logger_init);
/* include/linux/logger.h
*
* Copyright (C) 2007-2008 Google, Inc.
* Author: Robert Love <rlove@android.com>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _LINUX_LOGGER_H
#define _LINUX_LOGGER_H
#include <linux/types.h>
#include <linux/ioctl.h>
struct logger_entry {
__u16 len; /* length of the payload */
__u16 __pad; /* no matter what, we get 2 bytes of padding */
__s32 pid; /* generating process's pid */
__s32 tid; /* generating process's tid */
__s32 sec; /* seconds since Epoch */
__s32 nsec; /* nanoseconds */
char msg[0]; /* the entry's payload */
};
#define LOGGER_LOG_RADIO "log_radio" /* radio-related messages */
#define LOGGER_LOG_EVENTS "log_events" /* system/hardware events */
#define LOGGER_LOG_SYSTEM "log_system" /* system/framework messages */
#define LOGGER_LOG_MAIN "log_main" /* everything else */
#define LOGGER_ENTRY_MAX_LEN (4*1024)
#define LOGGER_ENTRY_MAX_PAYLOAD \
(LOGGER_ENTRY_MAX_LEN - sizeof(struct logger_entry))
#define __LOGGERIO 0xAE
#define LOGGER_GET_LOG_BUF_SIZE _IO(__LOGGERIO, 1) /* size of log */
#define LOGGER_GET_LOG_LEN _IO(__LOGGERIO, 2) /* used log len */
#define LOGGER_GET_NEXT_ENTRY_LEN _IO(__LOGGERIO, 3) /* next entry len */
#define LOGGER_FLUSH_LOG _IO(__LOGGERIO, 4) /* flush log */
#endif /* _LINUX_LOGGER_H */
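Tying the driver and header together, a reader simply issues fixed-size read() calls and receives one logger_entry per call. A minimal user-space sketch; the /dev/log_main node name is an assumption based on the misc device name registered above:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/logger.h>

int main(void)
{
	unsigned char buf[LOGGER_ENTRY_MAX_LEN];	/* the optimal read size */
	struct logger_entry *entry = (struct logger_entry *)buf;
	int fd = open("/dev/log_main", O_RDONLY);	/* node name assumed */

	if (fd < 0)
		return 1;

	/* Each successful read() returns exactly one entry: header + payload. */
	while (read(fd, buf, sizeof(buf)) > 0)
		printf("pid %d tid %d: %.*s\n",
		       entry->pid, entry->tid, entry->len, entry->msg);

	close(fd);
	return 0;
}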
/* drivers/misc/lowmemorykiller.c
*
* The lowmemorykiller driver lets user-space specify a set of memory thresholds
* where processes with a range of oom_adj values will get killed. Specify the
* minimum oom_adj values in /sys/module/lowmemorykiller/parameters/adj and the
* number of free pages in /sys/module/lowmemorykiller/parameters/minfree. Both
* files take a comma separated list of numbers in ascending order.
*
* For example, write "0,8" to /sys/module/lowmemorykiller/parameters/adj and
* "1024,4096" to /sys/module/lowmemorykiller/parameters/minfree to kill
* processes with a oom_adj value of 8 or higher when the free memory drops
* below 4096 pages and kill processes with a oom_adj value of 0 or higher
* when the free memory drops below 1024 pages.
*
* The driver considers memory used for caches to be free, but if a large
* percentage of the cached memory is locked this can be very inaccurate
* and processes may not get killed until the normal oom killer is triggered.
*
* Copyright (C) 2007-2008 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/oom.h>
#include <linux/sched.h>
#include <linux/profile.h>
#include <linux/notifier.h>
static uint32_t lowmem_debug_level = 2;
static int lowmem_adj[6] = {
0,
1,
6,
12,
};
static int lowmem_adj_size = 4;
static size_t lowmem_minfree[6] = {
3 * 512, /* 6MB */
2 * 1024, /* 8MB */
4 * 1024, /* 16MB */
16 * 1024, /* 64MB */
};
static int lowmem_minfree_size = 4;
static struct task_struct *lowmem_deathpending;
#define lowmem_print(level, x...) \
do { \
if (lowmem_debug_level >= (level)) \
printk(x); \
} while (0)
static int
task_notify_func(struct notifier_block *self, unsigned long val, void *data);
static struct notifier_block task_nb = {
.notifier_call = task_notify_func,
};
static int
task_notify_func(struct notifier_block *self, unsigned long val, void *data)
{
struct task_struct *task = data;
if (task == lowmem_deathpending) {
lowmem_deathpending = NULL;
task_handoff_unregister(&task_nb);
}
return NOTIFY_OK;
}
static int lowmem_shrink(struct shrinker *s, struct shrink_control *sc)
{
struct task_struct *p;
struct task_struct *selected = NULL;
int rem = 0;
int tasksize;
int i;
int min_adj = OOM_ADJUST_MAX + 1;
int selected_tasksize = 0;
int selected_oom_adj;
int array_size = ARRAY_SIZE(lowmem_adj);
int other_free = global_page_state(NR_FREE_PAGES);
int other_file = global_page_state(NR_FILE_PAGES) -
global_page_state(NR_SHMEM);
/*
* If we already have a death outstanding, then
* bail out right away; indicating to vmscan
* that we have nothing further to offer on
* this pass.
*
* Note: Currently you need CONFIG_PROFILING
* for this to work correctly.
*/
if (lowmem_deathpending)
return 0;
if (lowmem_adj_size < array_size)
array_size = lowmem_adj_size;
if (lowmem_minfree_size < array_size)
array_size = lowmem_minfree_size;
for (i = 0; i < array_size; i++) {
if (other_free < lowmem_minfree[i] &&
other_file < lowmem_minfree[i]) {
min_adj = lowmem_adj[i];
break;
}
}
if (sc->nr_to_scan > 0)
lowmem_print(3, "lowmem_shrink %lu, %x, ofree %d %d, ma %d\n",
sc->nr_to_scan, sc->gfp_mask, other_free,
other_file, min_adj);
rem = global_page_state(NR_ACTIVE_ANON) +
global_page_state(NR_ACTIVE_FILE) +
global_page_state(NR_INACTIVE_ANON) +
global_page_state(NR_INACTIVE_FILE);
if (sc->nr_to_scan <= 0 || min_adj == OOM_ADJUST_MAX + 1) {
lowmem_print(5, "lowmem_shrink %lu, %x, return %d\n",
sc->nr_to_scan, sc->gfp_mask, rem);
return rem;
}
selected_oom_adj = min_adj;
read_lock(&tasklist_lock);
for_each_process(p) {
struct mm_struct *mm;
struct signal_struct *sig;
int oom_adj;
task_lock(p);
mm = p->mm;
sig = p->signal;
if (!mm || !sig) {
task_unlock(p);
continue;
}
oom_adj = sig->oom_adj;
if (oom_adj < min_adj) {
task_unlock(p);
continue;
}
tasksize = get_mm_rss(mm);
task_unlock(p);
if (tasksize <= 0)
continue;
if (selected) {
if (oom_adj < selected_oom_adj)
continue;
if (oom_adj == selected_oom_adj &&
tasksize <= selected_tasksize)
continue;
}
selected = p;
selected_tasksize = tasksize;
selected_oom_adj = oom_adj;
lowmem_print(2, "select %d (%s), adj %d, size %d, to kill\n",
p->pid, p->comm, oom_adj, tasksize);
}
if (selected) {
lowmem_print(1, "send sigkill to %d (%s), adj %d, size %d\n",
selected->pid, selected->comm,
selected_oom_adj, selected_tasksize);
/*
* If CONFIG_PROFILING is off, then task_handoff_register()
* is a nop. In that case we don't want to stall the killer
* by setting lowmem_deathpending.
*/
#ifdef CONFIG_PROFILING
lowmem_deathpending = selected;
task_handoff_register(&task_nb);
#endif
force_sig(SIGKILL, selected);
rem -= selected_tasksize;
}
lowmem_print(4, "lowmem_shrink %lu, %x, return %d\n",
sc->nr_to_scan, sc->gfp_mask, rem);
read_unlock(&tasklist_lock);
return rem;
}
static struct shrinker lowmem_shrinker = {
.shrink = lowmem_shrink,
.seeks = DEFAULT_SEEKS * 16
};
static int __init lowmem_init(void)
{
register_shrinker(&lowmem_shrinker);
return 0;
}
static void __exit lowmem_exit(void)
{
unregister_shrinker(&lowmem_shrinker);
}
module_param_named(cost, lowmem_shrinker.seeks, int, S_IRUGO | S_IWUSR);
module_param_array_named(adj, lowmem_adj, int, &lowmem_adj_size,
S_IRUGO | S_IWUSR);
module_param_array_named(minfree, lowmem_minfree, uint, &lowmem_minfree_size,
S_IRUGO | S_IWUSR);
module_param_named(debug_level, lowmem_debug_level, uint, S_IRUGO | S_IWUSR);
module_init(lowmem_init);
module_exit(lowmem_exit);
MODULE_LICENSE("GPL");
/* drivers/android/ram_console.c
*
* Copyright (C) 2007-2008 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/console.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/proc_fs.h>
#include <linux/string.h>
#include <linux/uaccess.h>
#include <linux/io.h>
#include "ram_console.h"
#ifdef CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION
#include <linux/rslib.h>
#endif
struct ram_console_buffer {
uint32_t sig;
uint32_t start;
uint32_t size;
uint8_t data[0];
};
#define RAM_CONSOLE_SIG (0x43474244) /* DBGC */
#ifdef CONFIG_ANDROID_RAM_CONSOLE_EARLY_INIT
static char __initdata
ram_console_old_log_init_buffer[CONFIG_ANDROID_RAM_CONSOLE_EARLY_SIZE];
#endif
static char *ram_console_old_log;
static size_t ram_console_old_log_size;
static struct ram_console_buffer *ram_console_buffer;
static size_t ram_console_buffer_size;
#ifdef CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION
static char *ram_console_par_buffer;
static struct rs_control *ram_console_rs_decoder;
static int ram_console_corrected_bytes;
static int ram_console_bad_blocks;
#define ECC_BLOCK_SIZE CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION_DATA_SIZE
#define ECC_SIZE CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION_ECC_SIZE
#define ECC_SYMSIZE CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION_SYMBOL_SIZE
#define ECC_POLY CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION_POLYNOMIAL
#endif
#ifdef CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION
static void ram_console_encode_rs8(uint8_t *data, size_t len, uint8_t *ecc)
{
int i;
uint16_t par[ECC_SIZE];
/* Initialize the parity buffer */
memset(par, 0, sizeof(par));
encode_rs8(ram_console_rs_decoder, data, len, par, 0);
for (i = 0; i < ECC_SIZE; i++)
ecc[i] = par[i];
}
static int ram_console_decode_rs8(void *data, size_t len, uint8_t *ecc)
{
int i;
uint16_t par[ECC_SIZE];
for (i = 0; i < ECC_SIZE; i++)
par[i] = ecc[i];
return decode_rs8(ram_console_rs_decoder, data, par, len,
NULL, 0, NULL, 0, NULL);
}
#endif
static void ram_console_update(const char *s, unsigned int count)
{
struct ram_console_buffer *buffer = ram_console_buffer;
#ifdef CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION
uint8_t *buffer_end = buffer->data + ram_console_buffer_size;
uint8_t *block;
uint8_t *par;
int size = ECC_BLOCK_SIZE;
#endif
memcpy(buffer->data + buffer->start, s, count);
#ifdef CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION
block = buffer->data + (buffer->start & ~(ECC_BLOCK_SIZE - 1));
par = ram_console_par_buffer +
(buffer->start / ECC_BLOCK_SIZE) * ECC_SIZE;
do {
if (block + ECC_BLOCK_SIZE > buffer_end)
size = buffer_end - block;
ram_console_encode_rs8(block, size, par);
block += ECC_BLOCK_SIZE;
par += ECC_SIZE;
} while (block < buffer->data + buffer->start + count);
#endif
}
static void ram_console_update_header(void)
{
#ifdef CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION
struct ram_console_buffer *buffer = ram_console_buffer;
uint8_t *par;
par = ram_console_par_buffer +
DIV_ROUND_UP(ram_console_buffer_size, ECC_BLOCK_SIZE) * ECC_SIZE;
ram_console_encode_rs8((uint8_t *)buffer, sizeof(*buffer), par);
#endif
}
static void
ram_console_write(struct console *console, const char *s, unsigned int count)
{
int rem;
struct ram_console_buffer *buffer = ram_console_buffer;
if (count > ram_console_buffer_size) {
s += count - ram_console_buffer_size;
count = ram_console_buffer_size;
}
rem = ram_console_buffer_size - buffer->start;
if (rem < count) {
ram_console_update(s, rem);
s += rem;
count -= rem;
buffer->start = 0;
buffer->size = ram_console_buffer_size;
}
ram_console_update(s, count);
buffer->start += count;
if (buffer->size < ram_console_buffer_size)
buffer->size += count;
ram_console_update_header();
}
static struct console ram_console = {
.name = "ram",
.write = ram_console_write,
.flags = CON_PRINTBUFFER | CON_ENABLED,
.index = -1,
};
void ram_console_enable_console(int enabled)
{
if (enabled)
ram_console.flags |= CON_ENABLED;
else
ram_console.flags &= ~CON_ENABLED;
}
static void __init
ram_console_save_old(struct ram_console_buffer *buffer, const char *bootinfo,
char *dest)
{
size_t old_log_size = buffer->size;
size_t bootinfo_size = 0;
size_t total_size = old_log_size;
char *ptr;
const char *bootinfo_label = "Boot info:\n";
#ifdef CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION
uint8_t *block;
uint8_t *par;
char strbuf[80];
int strbuf_len = 0;
block = buffer->data;
par = ram_console_par_buffer;
while (block < buffer->data + buffer->size) {
int numerr;
int size = ECC_BLOCK_SIZE;
if (block + size > buffer->data + ram_console_buffer_size)
size = buffer->data + ram_console_buffer_size - block;
numerr = ram_console_decode_rs8(block, size, par);
if (numerr > 0) {
#if 0
printk(KERN_INFO "ram_console: error in block %p, %d\n",
block, numerr);
#endif
ram_console_corrected_bytes += numerr;
} else if (numerr < 0) {
#if 0
printk(KERN_INFO "ram_console: uncorrectable error in "
"block %p\n", block);
#endif
ram_console_bad_blocks++;
}
block += ECC_BLOCK_SIZE;
par += ECC_SIZE;
}
if (ram_console_corrected_bytes || ram_console_bad_blocks)
strbuf_len = snprintf(strbuf, sizeof(strbuf),
"\n%d Corrected bytes, %d unrecoverable blocks\n",
ram_console_corrected_bytes, ram_console_bad_blocks);
else
strbuf_len = snprintf(strbuf, sizeof(strbuf),
"\nNo errors detected\n");
if (strbuf_len >= sizeof(strbuf))
strbuf_len = sizeof(strbuf) - 1;
total_size += strbuf_len;
#endif
if (bootinfo)
bootinfo_size = strlen(bootinfo) + strlen(bootinfo_label);
total_size += bootinfo_size;
if (dest == NULL) {
dest = kmalloc(total_size, GFP_KERNEL);
if (dest == NULL) {
printk(KERN_ERR
"ram_console: failed to allocate buffer\n");
return;
}
}
ram_console_old_log = dest;
ram_console_old_log_size = total_size;
memcpy(ram_console_old_log,
&buffer->data[buffer->start], buffer->size - buffer->start);
memcpy(ram_console_old_log + buffer->size - buffer->start,
&buffer->data[0], buffer->start);
ptr = ram_console_old_log + old_log_size;
#ifdef CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION
memcpy(ptr, strbuf, strbuf_len);
ptr += strbuf_len;
#endif
if (bootinfo) {
memcpy(ptr, bootinfo_label, strlen(bootinfo_label));
ptr += strlen(bootinfo_label);
memcpy(ptr, bootinfo, bootinfo_size);
ptr += bootinfo_size;
}
}
static int __init ram_console_init(struct ram_console_buffer *buffer,
size_t buffer_size, const char *bootinfo,
char *old_buf)
{
#ifdef CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION
int numerr;
uint8_t *par;
#endif
ram_console_buffer = buffer;
ram_console_buffer_size =
buffer_size - sizeof(struct ram_console_buffer);
if (ram_console_buffer_size > buffer_size) {
pr_err("ram_console: buffer %p, invalid size %zu, "
"datasize %zu\n", buffer, buffer_size,
ram_console_buffer_size);
return 0;
}
#ifdef CONFIG_ANDROID_RAM_CONSOLE_ERROR_CORRECTION
ram_console_buffer_size -= (DIV_ROUND_UP(ram_console_buffer_size,
ECC_BLOCK_SIZE) + 1) * ECC_SIZE;
if (ram_console_buffer_size > buffer_size) {
pr_err("ram_console: buffer %p, invalid size %zu, "
"non-ecc datasize %zu\n",
buffer, buffer_size, ram_console_buffer_size);
return 0;
}
ram_console_par_buffer = buffer->data + ram_console_buffer_size;
/* first consecutive root is 0
* primitive element to generate roots = 1
*/
ram_console_rs_decoder = init_rs(ECC_SYMSIZE, ECC_POLY, 0, 1, ECC_SIZE);
if (ram_console_rs_decoder == NULL) {
printk(KERN_INFO "ram_console: init_rs failed\n");
return 0;
}
ram_console_corrected_bytes = 0;
ram_console_bad_blocks = 0;
par = ram_console_par_buffer +
DIV_ROUND_UP(ram_console_buffer_size, ECC_BLOCK_SIZE) * ECC_SIZE;
numerr = ram_console_decode_rs8(buffer, sizeof(*buffer), par);
if (numerr > 0) {
printk(KERN_INFO "ram_console: error in header, %d\n", numerr);
ram_console_corrected_bytes += numerr;
} else if (numerr < 0) {
printk(KERN_INFO
"ram_console: uncorrectable error in header\n");
ram_console_bad_blocks++;
}
#endif
if (buffer->sig == RAM_CONSOLE_SIG) {
if (buffer->size > ram_console_buffer_size
|| buffer->start > buffer->size)
printk(KERN_INFO "ram_console: found existing invalid "
"buffer, size %d, start %d\n",
buffer->size, buffer->start);
else {
printk(KERN_INFO "ram_console: found existing buffer, "
"size %d, start %d\n",
buffer->size, buffer->start);
ram_console_save_old(buffer, bootinfo, old_buf);
}
} else {
printk(KERN_INFO "ram_console: no valid data in buffer "
"(sig = 0x%08x)\n", buffer->sig);
}
buffer->sig = RAM_CONSOLE_SIG;
buffer->start = 0;
buffer->size = 0;
register_console(&ram_console);
#ifdef CONFIG_ANDROID_RAM_CONSOLE_ENABLE_VERBOSE
console_verbose();
#endif
return 0;
}
#ifdef CONFIG_ANDROID_RAM_CONSOLE_EARLY_INIT
static int __init ram_console_early_init(void)
{
return ram_console_init((struct ram_console_buffer *)
CONFIG_ANDROID_RAM_CONSOLE_EARLY_ADDR,
CONFIG_ANDROID_RAM_CONSOLE_EARLY_SIZE,
NULL,
ram_console_old_log_init_buffer);
}
#else
static int ram_console_driver_probe(struct platform_device *pdev)
{
struct resource *res = pdev->resource;
size_t start;
size_t buffer_size;
void *buffer;
const char *bootinfo = NULL;
struct ram_console_platform_data *pdata = pdev->dev.platform_data;
if (res == NULL || pdev->num_resources != 1 ||
!(res->flags & IORESOURCE_MEM)) {
printk(KERN_ERR "ram_console: invalid resource, %p %d flags "
"%lx\n", res, pdev->num_resources, res ? res->flags : 0);
return -ENXIO;
}
buffer_size = res->end - res->start + 1;
start = res->start;
printk(KERN_INFO "ram_console: got buffer at %zx, size %zx\n",
start, buffer_size);
buffer = ioremap(res->start, buffer_size);
if (buffer == NULL) {
printk(KERN_ERR "ram_console: failed to map memory\n");
return -ENOMEM;
}
if (pdata)
bootinfo = pdata->bootinfo;
return ram_console_init(buffer, buffer_size, bootinfo, NULL/* allocate */);
}
static struct platform_driver ram_console_driver = {
.probe = ram_console_driver_probe,
.driver = {
.name = "ram_console",
},
};
static int __init ram_console_module_init(void)
{
int err;
err = platform_driver_register(&ram_console_driver);
return err;
}
#endif
static ssize_t ram_console_read_old(struct file *file, char __user *buf,
size_t len, loff_t *offset)
{
loff_t pos = *offset;
ssize_t count;
if (pos >= ram_console_old_log_size)
return 0;
count = min(len, (size_t)(ram_console_old_log_size - pos));
if (copy_to_user(buf, ram_console_old_log + pos, count))
return -EFAULT;
*offset += count;
return count;
}
static const struct file_operations ram_console_file_ops = {
.owner = THIS_MODULE,
.read = ram_console_read_old,
};
static int __init ram_console_late_init(void)
{
struct proc_dir_entry *entry;
if (ram_console_old_log == NULL)
return 0;
#ifdef CONFIG_ANDROID_RAM_CONSOLE_EARLY_INIT
ram_console_old_log = kmalloc(ram_console_old_log_size, GFP_KERNEL);
if (ram_console_old_log == NULL) {
printk(KERN_ERR
"ram_console: failed to allocate buffer for old log\n");
ram_console_old_log_size = 0;
return 0;
}
memcpy(ram_console_old_log,
ram_console_old_log_init_buffer, ram_console_old_log_size);
#endif
entry = create_proc_entry("last_kmsg", S_IFREG | S_IRUGO, NULL);
if (!entry) {
printk(KERN_ERR "ram_console: failed to create proc entry\n");
kfree(ram_console_old_log);
ram_console_old_log = NULL;
return 0;
}
entry->proc_fops = &ram_console_file_ops;
entry->size = ram_console_old_log_size;
return 0;
}
#ifdef CONFIG_ANDROID_RAM_CONSOLE_EARLY_INIT
console_initcall(ram_console_early_init);
#else
postcore_initcall(ram_console_module_init);
#endif
late_initcall(ram_console_late_init);
/*
* Copyright (C) 2010 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _INCLUDE_LINUX_PLATFORM_DATA_RAM_CONSOLE_H_
#define _INCLUDE_LINUX_PLATFORM_DATA_RAM_CONSOLE_H_
struct ram_console_platform_data {
const char *bootinfo;
};
#endif /* _INCLUDE_LINUX_PLATFORM_DATA_RAM_CONSOLE_H_ */
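For reference, ram_console_driver_probe() above expects a platform device named "ram_console" with exactly one IORESOURCE_MEM resource and, optionally, this platform data. A board-file sketch follows; the physical address, size, bootinfo string, and the header's install path are placeholders, not values from this commit:

#include <linux/ioport.h>
#include <linux/platform_device.h>
#include <linux/platform_data/ram_console.h>	/* install path assumed */

/* Placeholder region: pick RAM that survives a warm reboot on the board. */
static struct resource example_ram_console_resource = {
	.start	= 0x20000000,
	.end	= 0x20000000 + 0x20000 - 1,	/* 128 KiB */
	.flags	= IORESOURCE_MEM,
};

static struct ram_console_platform_data example_ram_console_pdata = {
	.bootinfo = "example board, rev A",
};

static struct platform_device example_ram_console_device = {
	.name		= "ram_console",
	.id		= -1,
	.num_resources	= 1,
	.resource	= &example_ram_console_resource,
	.dev		= {
		.platform_data	= &example_ram_console_pdata,
	},
};

/* A board init function would then call:
 *	platform_device_register(&example_ram_console_device);
 */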
menuconfig ANDROID_SWITCH
tristate "Android Switch class support"
help
Say Y here to enable Android switch class support. This allows
monitoring switches by userspace via sysfs and uevent.
config ANDROID_SWITCH_GPIO
tristate "Android GPIO Switch support"
depends on GENERIC_GPIO && ANDROID_SWITCH
help
Say Y here to enable GPIO based switch support.
# Android Switch Class Driver
obj-$(CONFIG_ANDROID_SWITCH) += switch_class.o
obj-$(CONFIG_ANDROID_SWITCH_GPIO) += switch_gpio.o