Commit c9f19dff authored by Peter Maydell

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* switch to C11 atomics (Alex)
* Coverity fixes for IPMI (Corey), i386 (Paolo), qemu-char (Paolo)
* at long last, fail on wrong .pc files if -m32 is in use (Daniel)
* qemu-char regression fix (Daniel)
* SAS1068 device (Paolo)
* memory region docs improvements (Peter)
* target-i386 cleanups (Richard)
* qemu-nbd docs improvements (Sitsofe)
* thread-safe memory hotplug (Stefan)

# gpg: Signature made Tue 09 Feb 2016 16:09:30 GMT using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"

* remotes/bonzini/tags/for-upstream: (33 commits)
  qemu-char, io: fix ordering of arguments for UDP socket creation
  MAINTAINERS: add all-match entry for qemu-devel@
  get_maintainer.pl: fall back to git if only lists are found
  target-i386: fix PSE36 mode
  docs/memory.txt: Improve list of different memory regions
  ipmi_bmc_sim: Add break to correct watchdog NMI check
  ipmi_bmc_sim: Fix off by one in check.
  ipmi: do not take/drop iothread lock
  target-i386: Deconstruct the cpu_T array
  target-i386: Tidy gen_add_A0_im
  target-i386: Rewrite leave
  target-i386: Rewrite gen_enter inline
  target-i386: Use gen_lea_v_seg in pusha/popa
  target-i386: Access segs via TCG registers
  target-i386: Use gen_lea_v_seg in stack subroutines
  target-i386: Use gen_lea_v_seg in gen_lea_modrm
  target-i386: Introduce mo_stacksize
  target-i386: Create gen_lea_v_seg
  char: fix repeated registration of tcp chardev I/O handlers
  kvm-all: trace: strerror fixup
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
@@ -52,6 +52,11 @@ General Project Administration
------------------------------
M: Peter Maydell <peter.maydell@linaro.org>
All patches CC here
L: qemu-devel@nongnu.org
F: *
F: */
Responsible Disclosure, Reporting Security Issues
------------------------------
W: http://wiki.qemu.org/SecurityProcess
......
@@ -3063,6 +3063,30 @@ for i in $glib_modules; do
fi
done
# Sanity check that the current size_t matches the
# size that glib thinks it should be. This catches
# problems on multi-arch where people try to build
# 32-bit QEMU while pointing at 64-bit glib headers
cat > $TMPC <<EOF
#include <glib.h>
#include <unistd.h>
#define QEMU_BUILD_BUG_ON(x) \
typedef char qemu_build_bug_on[(x)?-1:1] __attribute__((unused));
int main(void) {
QEMU_BUILD_BUG_ON(sizeof(size_t) != GLIB_SIZEOF_SIZE_T);
return 0;
}
EOF
if ! compile_prog "-Werror $CFLAGS" "$LIBS" ; then
error_exit "sizeof(size_t) doesn't match GLIB_SIZEOF_SIZE_T."\
"You probably need to set PKG_CONFIG_LIBDIR"\
"to point to the right pkg-config files for your"\
"build target"
fi
# g_test_trap_subprocess added in 2.38. Used by some tests.
glib_subprocess=yes
if ! $pkg_config --atleast-version=2.38 glib-2.0; then
......
@@ -15,6 +15,7 @@ CONFIG_ES1370=y
CONFIG_LSI_SCSI_PCI=y
CONFIG_VMW_PVSCSI_SCSI_PCI=y
CONFIG_MEGASAS_SCSI_PCI=y
CONFIG_MPTSAS_SCSI_PCI=y
CONFIG_RTL8139_PCI=y
CONFIG_E1000_PCI=y
CONFIG_VMXNET3_PCI=y
......
@@ -26,14 +26,28 @@ These represent memory as seen from the CPU or a device's viewpoint.
Types of regions
----------------
There are four types of memory regions (all represented by a single C type
There are multiple types of memory regions (all represented by a single C type
MemoryRegion):
- RAM: a RAM region is simply a range of host memory that can be made available
to the guest.
You typically initialize these with memory_region_init_ram(). Some special
purposes require the variants memory_region_init_resizeable_ram(),
memory_region_init_ram_from_file(), or memory_region_init_ram_ptr().
- MMIO: a range of guest memory that is implemented by host callbacks;
each read or write causes a callback to be called on the host.
You initialize these with memory_region_init_io(), passing it a MemoryRegionOps
structure describing the callbacks.
- ROM: a ROM memory region works like RAM for reads (directly accessing
a region of host memory), but like MMIO for writes (invoking a callback).
You initialize these with memory_region_init_rom_device().
- IOMMU region: an IOMMU region translates addresses of accesses made to it
and forwards them to some other target memory region. As the name suggests,
these are only needed for modelling an IOMMU, not for simple devices.
You initialize these with memory_region_init_iommu().
- container: a container simply includes other memory regions, each at
a different offset. Containers are useful for grouping several regions
@@ -45,12 +59,22 @@ MemoryRegion):
can overlay a subregion of RAM with MMIO or ROM, or a PCI controller
that does not prevent card from claiming overlapping BARs.
You initialize a pure container with memory_region_init().
- alias: a subsection of another region. Aliases allow a region to be
split apart into discontiguous regions. Examples of uses are memory banks
used when the guest address space is smaller than the amount of RAM
addressed, or a memory controller that splits main memory to expose a "PCI
hole". Aliases may point to any type of region, including other aliases,
but an alias may not point back to itself, directly or indirectly.
You initialize these with memory_region_init_alias().
- reservation region: a reservation region is primarily for debugging.
It claims I/O space that is not supposed to be handled by QEMU itself.
The typical use is to track parts of the address space which will be
handled by the host kernel when KVM is enabled.
You initialize these with memory_region_init_reservation(), or by
passing a NULL callback parameter to memory_region_init_io().
It is valid to add subregions to a region which is not a pure container
(that is, to an MMIO, RAM or ROM region). This means that the region
......
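To make the initialization calls listed above concrete, here is a minimal, hypothetical sketch of a device that builds a container holding a RAM subregion and a callback-backed MMIO register bank. The device name, sizes, offsets and the single ctrl register are invented for illustration; memory_region_init(), memory_region_init_ram(), memory_region_init_io() and memory_region_add_subregion() are the APIs the document describes.

#include "qemu/osdep.h"
#include "exec/memory.h"
#include "qapi/error.h"

/* Hypothetical device state; names and sizes are illustrative only. */
typedef struct MyDevRegions {
    MemoryRegion container;   /* pure container */
    MemoryRegion ram;         /* guest-visible RAM */
    MemoryRegion mmio;        /* callback-backed register bank */
    uint32_t ctrl;
} MyDevRegions;

static uint64_t mydev_mmio_read(void *opaque, hwaddr addr, unsigned size)
{
    MyDevRegions *s = opaque;
    return addr == 0 ? s->ctrl : 0;     /* single register at offset 0 */
}

static void mydev_mmio_write(void *opaque, hwaddr addr, uint64_t val,
                             unsigned size)
{
    MyDevRegions *s = opaque;
    if (addr == 0) {
        s->ctrl = val;
    }
}

static const MemoryRegionOps mydev_mmio_ops = {
    .read = mydev_mmio_read,
    .write = mydev_mmio_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
};

static void mydev_init_regions(MyDevRegions *s, Object *owner)
{
    /* 64KB container holding the two subregions below. */
    memory_region_init(&s->container, owner, "mydev", 0x10000);

    /* 32KB of RAM mapped at offset 0 of the container. */
    memory_region_init_ram(&s->ram, owner, "mydev.ram", 0x8000, &error_abort);
    memory_region_add_subregion(&s->container, 0x0000, &s->ram);

    /* 4KB MMIO register bank at offset 0x8000. */
    memory_region_init_io(&s->mmio, owner, &mydev_mmio_ops, s,
                          "mydev.mmio", 0x1000);
    memory_region_add_subregion(&s->container, 0x8000, &s->mmio);
}

A real device would embed these regions in its device state and map the container into the system address space or expose it as a PCI BAR.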
@@ -980,8 +980,9 @@ bool cpu_physical_memory_test_and_clear_dirty(ram_addr_t start,
ram_addr_t length,
unsigned client)
{
DirtyMemoryBlocks *blocks;
unsigned long end, page;
bool dirty;
bool dirty = false;
if (length == 0) {
return false;
@@ -989,8 +990,22 @@ bool cpu_physical_memory_test_and_clear_dirty(ram_addr_t start,
end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
page = start >> TARGET_PAGE_BITS;
dirty = bitmap_test_and_clear_atomic(ram_list.dirty_memory[client],
page, end - page);
rcu_read_lock();
blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
while (page < end) {
unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
unsigned long num = MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - offset);
dirty |= bitmap_test_and_clear_atomic(blocks->blocks[idx],
offset, num);
page += num;
}
rcu_read_unlock();
if (dirty && tcg_enabled()) {
tlb_reset_dirty_range_all(start, length);
@@ -1504,6 +1519,47 @@ int qemu_ram_resize(ram_addr_t base, ram_addr_t newsize, Error **errp)
return 0;
}
/* Called with ram_list.mutex held */
static void dirty_memory_extend(ram_addr_t old_ram_size,
ram_addr_t new_ram_size)
{
ram_addr_t old_num_blocks = DIV_ROUND_UP(old_ram_size,
DIRTY_MEMORY_BLOCK_SIZE);
ram_addr_t new_num_blocks = DIV_ROUND_UP(new_ram_size,
DIRTY_MEMORY_BLOCK_SIZE);
int i;
/* Only need to extend if block count increased */
if (new_num_blocks <= old_num_blocks) {
return;
}
for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
DirtyMemoryBlocks *old_blocks;
DirtyMemoryBlocks *new_blocks;
int j;
old_blocks = atomic_rcu_read(&ram_list.dirty_memory[i]);
new_blocks = g_malloc(sizeof(*new_blocks) +
sizeof(new_blocks->blocks[0]) * new_num_blocks);
if (old_num_blocks) {
memcpy(new_blocks->blocks, old_blocks->blocks,
old_num_blocks * sizeof(old_blocks->blocks[0]));
}
for (j = old_num_blocks; j < new_num_blocks; j++) {
new_blocks->blocks[j] = bitmap_new(DIRTY_MEMORY_BLOCK_SIZE);
}
atomic_rcu_set(&ram_list.dirty_memory[i], new_blocks);
if (old_blocks) {
g_free_rcu(old_blocks, rcu);
}
}
}
static ram_addr_t ram_block_add(RAMBlock *new_block, Error **errp)
{
RAMBlock *block;
@@ -1543,6 +1599,7 @@ static ram_addr_t ram_block_add(RAMBlock *new_block, Error **errp)
(new_block->offset + new_block->max_length) >> TARGET_PAGE_BITS);
if (new_ram_size > old_ram_size) {
migration_bitmap_extend(old_ram_size, new_ram_size);
dirty_memory_extend(old_ram_size, new_ram_size);
}
/* Keep the list sorted from biggest to smallest block. Unlike QTAILQ,
* QLIST (which has an RCU-friendly variant) does not have insertion at
@@ -1568,18 +1625,6 @@ static ram_addr_t ram_block_add(RAMBlock *new_block, Error **errp)
ram_list.version++;
qemu_mutex_unlock_ramlist();
new_ram_size = last_ram_offset() >> TARGET_PAGE_BITS;
if (new_ram_size > old_ram_size) {
int i;
/* ram_list.dirty_memory[] is protected by the iothread lock. */
for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
ram_list.dirty_memory[i] =
bitmap_zero_extend(ram_list.dirty_memory[i],
old_ram_size, new_ram_size);
}
}
cpu_physical_memory_set_dirty_range(new_block->offset,
new_block->used_length,
DIRTY_CLIENTS_ALL);
......
@@ -51,9 +51,7 @@ static int ipmi_do_hw_op(IPMIInterface *s, enum ipmi_op op, int checkonly)
if (checkonly) {
return 0;
}
qemu_mutex_lock_iothread();
qmp_inject_nmi(NULL);
qemu_mutex_unlock_iothread();
return 0;
case IPMI_POWERCYCLE_CHASSIS:
......
@@ -559,7 +559,7 @@ static void ipmi_init_sensors_from_sdrs(IPMIBmcSim *s)
static int ipmi_register_netfn(IPMIBmcSim *s, unsigned int netfn,
const IPMINetfn *netfnd)
{
if ((netfn & 1) || (netfn > MAX_NETFNS) || (s->netfns[netfn / 2])) {
if ((netfn & 1) || (netfn >= MAX_NETFNS) || (s->netfns[netfn / 2])) {
return -1;
}
s->netfns[netfn / 2] = netfnd;
@@ -1135,6 +1135,8 @@ static void set_watchdog_timer(IPMIBmcSim *ibs,
rsp[2] = IPMI_CC_INVALID_DATA_FIELD;
return;
}
break;
default:
/* We don't support PRE_SMI */
rsp[2] = IPMI_CC_INVALID_DATA_FIELD;
......
common-obj-y += scsi-disk.o
common-obj-y += scsi-generic.o scsi-bus.o
common-obj-$(CONFIG_LSI_SCSI_PCI) += lsi53c895a.o
common-obj-$(CONFIG_MPTSAS_SCSI_PCI) += mptsas.o mptconfig.o mptendian.o
common-obj-$(CONFIG_MEGASAS_SCSI_PCI) += megasas.o
common-obj-$(CONFIG_VMW_PVSCSI_SCSI_PCI) += vmw_pvscsi.o
common-obj-$(CONFIG_ESP) += esp.o
......
This diff has been collapsed.
This diff has been collapsed.
/*
* QEMU LSI SAS1068 Host Bus Adapter emulation
* Endianness conversion for MPI data structures
*
* Copyright (c) 2016 Red Hat, Inc.
*
* Authors: Paolo Bonzini <pbonzini@redhat.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "hw/pci/pci.h"
#include "sysemu/dma.h"
#include "sysemu/block-backend.h"
#include "hw/pci/msi.h"
#include "qemu/iov.h"
#include "hw/scsi/scsi.h"
#include "block/scsi.h"
#include "trace.h"
#include "mptsas.h"
#include "mpi.h"
static void mptsas_fix_sgentry_endianness(MPISGEntry *sge)
{
le32_to_cpus(&sge->FlagsLength);
if (sge->FlagsLength & MPI_SGE_FLAGS_64_BIT_ADDRESSING) {
le64_to_cpus(&sge->u.Address64);
} else {
le32_to_cpus(&sge->u.Address32);
}
}
static void mptsas_fix_sgentry_endianness_reply(MPISGEntry *sge)
{
if (sge->FlagsLength & MPI_SGE_FLAGS_64_BIT_ADDRESSING) {
cpu_to_le64s(&sge->u.Address64);
} else {
cpu_to_le32s(&sge->u.Address32);
}
cpu_to_le32s(&sge->FlagsLength);
}
void mptsas_fix_scsi_io_endianness(MPIMsgSCSIIORequest *req)
{
le32_to_cpus(&req->MsgContext);
le32_to_cpus(&req->Control);
le32_to_cpus(&req->DataLength);
le32_to_cpus(&req->SenseBufferLowAddr);
}
void mptsas_fix_scsi_io_reply_endianness(MPIMsgSCSIIOReply *reply)
{
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
cpu_to_le32s(&reply->TransferCount);
cpu_to_le32s(&reply->SenseCount);
cpu_to_le32s(&reply->ResponseInfo);
cpu_to_le16s(&reply->TaskTag);
}
void mptsas_fix_scsi_task_mgmt_endianness(MPIMsgSCSITaskMgmt *req)
{
le32_to_cpus(&req->MsgContext);
le32_to_cpus(&req->TaskMsgContext);
}
void mptsas_fix_scsi_task_mgmt_reply_endianness(MPIMsgSCSITaskMgmtReply *reply)
{
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
cpu_to_le32s(&reply->TerminationCount);
}
void mptsas_fix_ioc_init_endianness(MPIMsgIOCInit *req)
{
le32_to_cpus(&req->MsgContext);
le16_to_cpus(&req->ReplyFrameSize);
le32_to_cpus(&req->HostMfaHighAddr);
le32_to_cpus(&req->SenseBufferHighAddr);
le32_to_cpus(&req->ReplyFifoHostSignalingAddr);
mptsas_fix_sgentry_endianness(&req->HostPageBufferSGE);
le16_to_cpus(&req->MsgVersion);
le16_to_cpus(&req->HeaderVersion);
}
void mptsas_fix_ioc_init_reply_endianness(MPIMsgIOCInitReply *reply)
{
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
}
void mptsas_fix_ioc_facts_endianness(MPIMsgIOCFacts *req)
{
le32_to_cpus(&req->MsgContext);
}
void mptsas_fix_ioc_facts_reply_endianness(MPIMsgIOCFactsReply *reply)
{
cpu_to_le16s(&reply->MsgVersion);
cpu_to_le16s(&reply->HeaderVersion);
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCExceptions);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
cpu_to_le16s(&reply->ReplyQueueDepth);
cpu_to_le16s(&reply->RequestFrameSize);
cpu_to_le16s(&reply->ProductID);
cpu_to_le32s(&reply->CurrentHostMfaHighAddr);
cpu_to_le16s(&reply->GlobalCredits);
cpu_to_le32s(&reply->CurrentSenseBufferHighAddr);
cpu_to_le16s(&reply->CurReplyFrameSize);
cpu_to_le32s(&reply->FWImageSize);
cpu_to_le32s(&reply->IOCCapabilities);
cpu_to_le16s(&reply->HighPriorityQueueDepth);
mptsas_fix_sgentry_endianness_reply(&reply->HostPageBufferSGE);
cpu_to_le32s(&reply->ReplyFifoHostSignalingAddr);
}
void mptsas_fix_config_endianness(MPIMsgConfig *req)
{
le16_to_cpus(&req->ExtPageLength);
le32_to_cpus(&req->MsgContext);
le32_to_cpus(&req->PageAddress);
mptsas_fix_sgentry_endianness(&req->PageBufferSGE);
}
void mptsas_fix_config_reply_endianness(MPIMsgConfigReply *reply)
{
cpu_to_le16s(&reply->ExtPageLength);
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
}
void mptsas_fix_port_facts_endianness(MPIMsgPortFacts *req)
{
le32_to_cpus(&req->MsgContext);
}
void mptsas_fix_port_facts_reply_endianness(MPIMsgPortFactsReply *reply)
{
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
cpu_to_le16s(&reply->MaxDevices);
cpu_to_le16s(&reply->PortSCSIID);
cpu_to_le16s(&reply->ProtocolFlags);
cpu_to_le16s(&reply->MaxPostedCmdBuffers);
cpu_to_le16s(&reply->MaxPersistentIDs);
cpu_to_le16s(&reply->MaxLanBuckets);
}
void mptsas_fix_port_enable_endianness(MPIMsgPortEnable *req)
{
le32_to_cpus(&req->MsgContext);
}
void mptsas_fix_port_enable_reply_endianness(MPIMsgPortEnableReply *reply)
{
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
}
void mptsas_fix_event_notification_endianness(MPIMsgEventNotify *req)
{
le32_to_cpus(&req->MsgContext);
}
void mptsas_fix_event_notification_reply_endianness(MPIMsgEventNotifyReply *reply)
{
int length = reply->EventDataLength;
int i;
cpu_to_le16s(&reply->EventDataLength);
cpu_to_le32s(&reply->MsgContext);
cpu_to_le16s(&reply->IOCStatus);
cpu_to_le32s(&reply->IOCLogInfo);
cpu_to_le32s(&reply->Event);
cpu_to_le32s(&reply->EventContext);
/* Really depends on the event kind. This will do for now. */
for (i = 0; i < length; i++) {
cpu_to_le32s(&reply->Data[i]);
}
}
This diff has been collapsed.
#ifndef MPTSAS_H
#define MPTSAS_H
#include "mpi.h"
#define MPTSAS_NUM_PORTS 8
#define MPTSAS_MAX_FRAMES 2048 /* Firmware limit at 65535 */
#define MPTSAS_REQUEST_QUEUE_DEPTH 128
#define MPTSAS_REPLY_QUEUE_DEPTH 128
#define MPTSAS_MAXIMUM_CHAIN_DEPTH 0x22
typedef struct MPTSASState MPTSASState;
typedef struct MPTSASRequest MPTSASRequest;
enum {
DOORBELL_NONE,
DOORBELL_WRITE,
DOORBELL_READ
};
struct MPTSASState {
PCIDevice dev;
MemoryRegion mmio_io;
MemoryRegion port_io;
MemoryRegion diag_io;
QEMUBH *request_bh;
uint32_t msi_available;
uint64_t sas_addr;
bool msi_in_use;
/* Doorbell register */
uint32_t state;
uint8_t who_init;
uint8_t doorbell_state;
/* Buffer for requests that are sent through the doorbell register. */
uint32_t doorbell_msg[256];
int doorbell_idx;
int doorbell_cnt;
uint16_t doorbell_reply[256];
int doorbell_reply_idx;
int doorbell_reply_size;
/* Other registers */
uint8_t diagnostic_idx;
uint32_t diagnostic;
uint32_t intr_mask;
uint32_t intr_status;
/* Request queues */
uint32_t request_post[MPTSAS_REQUEST_QUEUE_DEPTH + 1];
uint16_t request_post_head;
uint16_t request_post_tail;
uint32_t reply_post[MPTSAS_REPLY_QUEUE_DEPTH + 1];
uint16_t reply_post_head;
uint16_t reply_post_tail;
uint32_t reply_free[MPTSAS_REPLY_QUEUE_DEPTH + 1];
uint16_t reply_free_head;
uint16_t reply_free_tail;
/* IOC Facts */
hwaddr host_mfa_high_addr;
hwaddr sense_buffer_high_addr;
uint16_t max_devices;
uint16_t max_buses;
uint16_t reply_frame_size;
SCSIBus bus;
QTAILQ_HEAD(, MPTSASRequest) pending;
};
void mptsas_fix_scsi_io_endianness(MPIMsgSCSIIORequest *req);
void mptsas_fix_scsi_io_reply_endianness(MPIMsgSCSIIOReply *reply);
void mptsas_fix_scsi_task_mgmt_endianness(MPIMsgSCSITaskMgmt *req);
void mptsas_fix_scsi_task_mgmt_reply_endianness(MPIMsgSCSITaskMgmtReply *reply);
void mptsas_fix_ioc_init_endianness(MPIMsgIOCInit *req);
void mptsas_fix_ioc_init_reply_endianness(MPIMsgIOCInitReply *reply);
void mptsas_fix_ioc_facts_endianness(MPIMsgIOCFacts *req);
void mptsas_fix_ioc_facts_reply_endianness(MPIMsgIOCFactsReply *reply);
void mptsas_fix_config_endianness(MPIMsgConfig *req);
void mptsas_fix_config_reply_endianness(MPIMsgConfigReply *reply);
void mptsas_fix_port_facts_endianness(MPIMsgPortFacts *req);
void mptsas_fix_port_facts_reply_endianness(MPIMsgPortFactsReply *reply);
void mptsas_fix_port_enable_endianness(MPIMsgPortEnable *req);
void mptsas_fix_port_enable_reply_endianness(MPIMsgPortEnableReply *reply);
void mptsas_fix_event_notification_endianness(MPIMsgEventNotify *req);
void mptsas_fix_event_notification_reply_endianness(MPIMsgEventNotifyReply *reply);
void mptsas_reply(MPTSASState *s, MPIDefaultReply *reply);
void mptsas_process_config(MPTSASState *s, MPIMsgConfig *req);
#endif /* MPTSAS_H */
@@ -77,8 +77,6 @@ struct SCSIDiskState
bool media_changed;
bool media_event;
bool eject_request;
uint64_t wwn;
uint64_t port_wwn;
uint16_t port_index;
uint64_t max_unmap_size;
uint64_t max_io_size;
@@ -633,21 +631,21 @@ static int scsi_disk_emulate_inquiry(SCSIRequest *req, uint8_t *outbuf)
memcpy(outbuf+buflen, str, id_len);
buflen += id_len;
if (s->wwn) {
if (s->qdev.wwn) {
outbuf[buflen++] = 0x1; // Binary
outbuf[buflen++] = 0x3; // NAA
outbuf[buflen++] = 0; // reserved
outbuf[buflen++] = 8;
stq_be_p(&outbuf[buflen], s->wwn);
stq_be_p(&outbuf[buflen], s->qdev.wwn);
buflen += 8;
}
if (s->port_wwn) {
if (s->qdev.port_wwn) {
outbuf[buflen++] = 0x61; // SAS / Binary
outbuf[buflen++] = 0x93; // PIV / Target port / NAA
outbuf[buflen++] = 0; // reserved
outbuf[buflen++] = 8;
stq_be_p(&outbuf[buflen], s->port_wwn);
stq_be_p(&outbuf[buflen], s->qdev.port_wwn);
buflen += 8;
}
@@ -2575,6 +2573,7 @@ static void scsi_block_realize(SCSIDevice *dev, Error **errp)
s->features |= (1 << SCSI_DISK_F_NO_REMOVABLE_DEVOPS);
scsi_realize(&s->qdev, errp);
scsi_generic_read_device_identification(&s->qdev);
}
static bool scsi_block_is_passthrough(SCSIDiskState *s, uint8_t *buf)
@@ -2668,8 +2667,8 @@ static Property scsi_hd_properties[] = {
SCSI_DISK_F_REMOVABLE, false),
DEFINE_PROP_BIT("dpofua", SCSIDiskState, features,
SCSI_DISK_F_DPOFUA, false),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, wwn, 0),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, qdev.wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, port_wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, qdev.port_wwn, 0),
DEFINE_PROP_UINT16("port_index", SCSIDiskState, port_index, 0),
DEFINE_PROP_UINT64("max_unmap_size", SCSIDiskState, max_unmap_size,
DEFAULT_MAX_UNMAP_SIZE),
@@ -2718,8 +2717,8 @@ static const TypeInfo scsi_hd_info = {
static Property scsi_cd_properties[] = {
DEFINE_SCSI_DISK_PROPERTIES(),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, wwn, 0),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, qdev.wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, port_wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, qdev.port_wwn, 0),
DEFINE_PROP_UINT16("port_index", SCSIDiskState, port_index, 0),
DEFINE_PROP_UINT64("max_io_size", SCSIDiskState, max_io_size,
DEFAULT_MAX_IO_SIZE),
@@ -2783,8 +2782,8 @@ static Property scsi_disk_properties[] = {
SCSI_DISK_F_REMOVABLE, false),
DEFINE_PROP_BIT("dpofua", SCSIDiskState, features,
SCSI_DISK_F_DPOFUA, false),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, wwn, 0),
DEFINE_PROP_UINT64("wwn", SCSIDiskState, qdev.wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, port_wwn, 0),
DEFINE_PROP_UINT64("port_wwn", SCSIDiskState, qdev.port_wwn, 0),
DEFINE_PROP_UINT16("port_index", SCSIDiskState, port_index, 0),
DEFINE_PROP_UINT64("max_unmap_size", SCSIDiskState, max_unmap_size,
DEFAULT_MAX_UNMAP_SIZE),
......
@@ -355,6 +355,96 @@ static int32_t scsi_send_command(SCSIRequest *req, uint8_t *cmd)
}
}
static int read_naa_id(const uint8_t *p, uint64_t *p_wwn)
{
int i;
if ((p[1] & 0xF) == 3) {
/* NAA designator type */
if (p[3] != 8) {
return -EINVAL;
}
*p_wwn = ldq_be_p(p + 4);
return 0;
}
if ((p[1] & 0xF) == 8) {
/* SCSI name string designator type */
if (p[3] < 20 || memcmp(&p[4], "naa.", 4)) {
return -EINVAL;
}
if (p[3] > 20 && p[24] != ',') {
return -EINVAL;
}
*p_wwn = 0;
for (i = 8; i < 24; i++) {
char c = toupper(p[i]);
c -= (c >= '0' && c <= '9' ? '0' : 'A' - 10);
*p_wwn = (*p_wwn << 4) | c;
}
return 0;
}
return -EINVAL;
}
void scsi_generic_read_device_identification(SCSIDevice *s)
{
uint8_t cmd[6];
uint8_t buf[250];
uint8_t sensebuf[8];
sg_io_hdr_t io_header;
int ret;
int i, len;
memset(cmd, 0, sizeof(cmd));
memset(buf, 0, sizeof(buf));
cmd[0] = INQUIRY;
cmd[1] = 1;
cmd[2] = 0x83;
cmd[4] = sizeof(buf);
memset(&io_header, 0, sizeof(io_header));
io_header.interface_id = 'S';
io_header.dxfer_direction = SG_DXFER_FROM_DEV;
io_header.dxfer_len = sizeof(buf);
io_header.dxferp = buf;
io_header.cmdp = cmd;
io_header.cmd_len = sizeof(cmd);
io_header.mx_sb_len = sizeof(sensebuf);
io_header.sbp = sensebuf;
io_header.timeout = 6000; /* XXX */
ret = blk_ioctl(s->conf.blk, SG_IO, &io_header);
if (ret < 0 || io_header.driver_status || io_header.host_status) {
return;
}
len = MIN((buf[2] << 8) | buf[3], sizeof(buf) - 4);
for (i = 0; i + 3 <= len; ) {
const uint8_t *p = &buf[i + 4];
uint64_t wwn;
if (i + (p[3] + 4) > len) {
break;
}
if ((p[1] & 0x10) == 0) {
/* Associated with the logical unit */
if (read_naa_id(p, &wwn) == 0) {
s->wwn = wwn;
}
} else if ((p[1] & 0x10) == 0x10) {
/* Associated with the target port */
if (read_naa_id(p, &wwn) == 0) {
s->port_wwn = wwn;
}
}
i += p[3] + 4;
}
}
static int get_stream_blocksize(BlockBackend *blk)
{
uint8_t cmd[6];
@@ -458,6 +548,8 @@ static void scsi_generic_realize(SCSIDevice *s, Error **errp)
}
DPRINTF("block size %d\n", s->blocksize);
scsi_generic_read_device_identification(s);
}
const SCSIReqOps scsi_generic_req_ops = {
......
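As a standalone illustration of what read_naa_id() above extracts, the following sketch builds one hypothetical designation descriptor from an INQUIRY VPD page 0x83 response and decodes the 8-byte binary NAA identifier the same way the code does (designator type 3, length 8, big-endian value at offset 4). load64_be() is a simplified stand-in for QEMU's ldq_be_p(); the bytes are invented for the example.

#include <stdint.h>
#include <stdio.h>

/* Big-endian 64-bit load, a stand-in for QEMU's ldq_be_p(). */
static uint64_t load64_be(const uint8_t *p)
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++) {
        v = (v << 8) | p[i];
    }
    return v;
}

int main(void)
{
    /* One designation descriptor from a VPD page 0x83 response:
     * byte 0: protocol id / code set, byte 1: PIV, association and
     * designator type, byte 3: designator length, bytes 4..11: NAA id. */
    uint8_t desc[12] = { 0x01, 0x03, 0x00, 0x08,
                         0x50, 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd };

    /* Same test as read_naa_id(): NAA designator, 8 bytes long. */
    if ((desc[1] & 0xf) == 3 && desc[3] == 8) {
        printf("wwn = 0x%016llx\n",
               (unsigned long long)load64_be(&desc[4]));
    }
    return 0;
}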
@@ -49,13 +49,43 @@ static inline void *ramblock_ptr(RAMBlock *block, ram_addr_t offset)
return (char *)block->host + offset;
}
/* The dirty memory bitmap is split into fixed-size blocks to allow growth
* under RCU. The bitmap for a block can be accessed as follows:
*
* rcu_read_lock();
*
* DirtyMemoryBlocks *blocks =
* atomic_rcu_read(&ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]);
*
* ram_addr_t idx = (addr >> TARGET_PAGE_BITS) / DIRTY_MEMORY_BLOCK_SIZE;
* unsigned long *block = blocks->blocks[idx];
* ...access block bitmap...
*
* rcu_read_unlock();
*
* Remember to check for the end of the block when accessing a range of
* addresses. Move on to the next block if you reach the end.
*
* Organization into blocks allows dirty memory to grow (but not shrink) under
* RCU. When adding new RAMBlocks requires the dirty memory to grow, a new
* DirtyMemoryBlocks array is allocated with pointers to existing blocks kept
* the same. Other threads can safely access existing blocks while dirty
* memory is being grown. When no threads are using the old DirtyMemoryBlocks
* anymore it is freed by RCU (but the underlying blocks stay because they are
* pointed to from the new DirtyMemoryBlocks).
*/
#define DIRTY_MEMORY_BLOCK_SIZE ((ram_addr_t)256 * 1024 * 8)
typedef struct {
struct rcu_head rcu;
unsigned long *blocks[];
} DirtyMemoryBlocks;
typedef struct RAMList {
QemuMutex mutex;
/* Protected by the iothread lock. */
unsigned long *dirty_memory[DIRTY_MEMORY_NUM];
RAMBlock *mru_block;
/* RCU-enabled, writes protected by the ramlist lock. */
QLIST_HEAD(, RAMBlock) blocks;
DirtyMemoryBlocks *dirty_memory[DIRTY_MEMORY_NUM];
uint32_t version;
} RAMList;
extern RAMList ram_list;
@@ -89,30 +119,70 @@ static inline bool cpu_physical_memory_get_dirty(ram_addr_t start,
ram_addr_t length,
unsigned client)
{
unsigned long end, page, next;
DirtyMemoryBlocks *blocks;
unsigned long end, page;
bool dirty = false;
assert(client < DIRTY_MEMORY_NUM);
end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
page = start >> TARGET_PAGE_BITS;
next = find_next_bit(ram_list.dirty_memory[client], end, page);
return next < end;
rcu_read_lock();
blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
while (page < end) {
unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
unsigned long num = MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - offset);
if (find_next_bit(blocks->blocks[idx], offset, num) < num) {
dirty = true;
break;
}
page += num;
}
rcu_read_unlock();
return dirty;
}
static inline bool cpu_physical_memory_all_dirty(ram_addr_t start,
ram_addr_t length,
unsigned client)
{
unsigned long end, page, next;
DirtyMemoryBlocks *blocks;
unsigned long end, page;
bool dirty = true;
assert(client < DIRTY_MEMORY_NUM);
end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
page = start >> TARGET_PAGE_BITS;
next = find_next_zero_bit(ram_list.dirty_memory[client], end, page);
return next >= end;
rcu_read_lock();
blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
while (page < end) {
unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
unsigned long num = MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - offset);
if (find_next_zero_bit(blocks->blocks[idx], offset, num) < num) {
dirty = false;
break;
}
page += num;
}
rcu_read_unlock();
return dirty;
}
static inline bool cpu_physical_memory_get_dirty_flag(ram_addr_t addr,
@@ -154,28 +224,68 @@ static inline uint8_t cpu_physical_memory_range_includes_clean(ram_addr_t start,
static inline void cpu_physical_memory_set_dirty_flag(ram_addr_t addr,
unsigned client)
{
unsigned long page, idx, offset;
DirtyMemoryBlocks *blocks;
assert(client < DIRTY_MEMORY_NUM);
set_bit_atomic(addr >> TARGET_PAGE_BITS, ram_list.dirty_memory[client]);
page = addr >> TARGET_PAGE_BITS;
idx = page / DIRTY_MEMORY_BLOCK_SIZE;
offset = page % DIRTY_MEMORY_BLOCK_SIZE;
rcu_read_lock();
blocks = atomic_rcu_read(&ram_list.dirty_memory[client]);
set_bit_atomic(offset, blocks->blocks[idx]);
rcu_read_unlock();
}
static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
ram_addr_t length,
uint8_t mask)
{
DirtyMemoryBlocks *blocks[DIRTY_MEMORY_NUM];
unsigned long end, page;
unsigned long **d = ram_list.dirty_memory;
int i;
if (!mask && !xen_enabled()) {
return;
}
end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
page = start >> TARGET_PAGE_BITS;
if (likely(mask & (1 << DIRTY_MEMORY_MIGRATION))) {
bitmap_set_atomic(d[DIRTY_MEMORY_MIGRATION], page, end - page);
}
if (unlikely(mask & (1 << DIRTY_MEMORY_VGA))) {
bitmap_set_atomic(d[DIRTY_MEMORY_VGA], page, end - page);
}
if (unlikely(mask & (1 << DIRTY_MEMORY_CODE))) {
bitmap_set_atomic(d[DIRTY_MEMORY_CODE], page, end - page);
rcu_read_lock();
for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
blocks[i] = atomic_rcu_read(&ram_list.dirty_memory[i]);
}
while (page < end) {
unsigned long idx = page / DIRTY_MEMORY_BLOCK_SIZE;
unsigned long offset = page % DIRTY_MEMORY_BLOCK_SIZE;
unsigned long num = MIN(end - page, DIRTY_MEMORY_BLOCK_SIZE - offset);
if (likely(mask & (1 << DIRTY_MEMORY_MIGRATION))) {
bitmap_set_atomic(blocks[DIRTY_MEMORY_MIGRATION]->blocks[idx],
offset, num);
}
if (unlikely(mask & (1 << DIRTY_MEMORY_VGA))) {
bitmap_set_atomic(blocks[DIRTY_MEMORY_VGA]->blocks[idx],
offset, num);
}
if (unlikely(mask & (1 << DIRTY_MEMORY_CODE))) {
bitmap_set_atomic(blocks[DIRTY_MEMORY_CODE]->blocks[idx],
offset, num);
}
page += num;
}
rcu_read_unlock();
xen_modified_memory(start, length);
}
@@ -195,21 +305,41 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
/* start address is aligned at the start of a word? */
if ((((page * BITS_PER_LONG) << TARGET_PAGE_BITS) == start) &&
(hpratio == 1)) {
unsigned long **blocks[DIRTY_MEMORY_NUM];
unsigned long idx;
unsigned long offset;
long k;
long nr = BITS_TO_LONGS(pages);
idx = (start >> TARGET_PAGE_BITS) / DIRTY_MEMORY_BLOCK_SIZE;
offset = BIT_WORD((start >> TARGET_PAGE_BITS) %
DIRTY_MEMORY_BLOCK_SIZE);
rcu_read_lock();
for (i = 0; i < DIRTY_MEMORY_NUM; i++) {
blocks[i] = atomic_rcu_read(&ram_list.dirty_memory[i])->blocks;
}
for (k = 0; k < nr; k++) {
if (bitmap[k]) {
unsigned long temp = leul_to_cpu(bitmap[k]);
unsigned long **d = ram_list.dirty_memory;
atomic_or(&d[DIRTY_MEMORY_MIGRATION][page + k], temp);
atomic_or(&blocks[DIRTY_MEMORY_MIGRATION][idx][offset], temp);
atomic_or(&d[DIRTY_MEMORY_VGA][page + k], temp);
atomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp);
if (tcg_enabled()) {
atomic_or(&d[DIRTY_MEMORY_CODE][page + k], temp);
atomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset], temp);
}
}
if (++offset >= BITS_TO_LONGS(DIRTY_MEMORY_BLOCK_SIZE)) {
offset = 0;
idx++;
}
}
rcu_read_unlock();
xen_modified_memory(start, pages << TARGET_PAGE_BITS);
} else {
uint8_t clients = tcg_enabled() ? DIRTY_CLIENTS_ALL : DIRTY_CLIENTS_NOCODE;
@@ -261,18 +391,33 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(unsigned long *dest,
if (((page * BITS_PER_LONG) << TARGET_PAGE_BITS) == start) {
int k;
int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
unsigned long *src = ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION];
unsigned long * const *src;
unsigned long idx = (page * BITS_PER_LONG) / DIRTY_MEMORY_BLOCK_SIZE;
unsigned long offset = BIT_WORD((page * BITS_PER_LONG) %
DIRTY_MEMORY_BLOCK_SIZE);
rcu_read_lock();
src = atomic_rcu_read(
&ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION])->blocks;
for (k = page; k < page + nr; k++) {
if (src[k]) {
if (src[idx][offset]) {
unsigned long bits = atomic_xchg(&src[k], 0);
unsigned long bits = atomic_xchg(&src[idx][offset], 0);
unsigned long new_dirty;
new_dirty = ~dest[k];
dest[k] |= bits;
new_dirty &= bits;
num_dirty += ctpopl(new_dirty);
}
if (++offset >= BITS_TO_LONGS(DIRTY_MEMORY_BLOCK_SIZE)) {
offset = 0;
idx++;
}
}
rcu_read_unlock();
} else {
for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
if (cpu_physical_memory_test_and_clear_dirty(
......
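The helpers above all share the same chunking step: a page range is walked block by block so that no bitmap access crosses a DIRTY_MEMORY_BLOCK_SIZE boundary. The following standalone sketch shows just that arithmetic; BLOCK_SIZE is shrunk from the real value (256 * 1024 * 8 pages) so the output stays readable.

#include <stdio.h>

/* Stand-in for DIRTY_MEMORY_BLOCK_SIZE, in pages per block. */
#define BLOCK_SIZE 16UL

/* Walk the page range [page, end) in per-block chunks, exactly as the
 * helpers above do before touching blocks->blocks[idx]. */
static void walk_range(unsigned long page, unsigned long end)
{
    while (page < end) {
        unsigned long idx    = page / BLOCK_SIZE;   /* which bitmap block */
        unsigned long offset = page % BLOCK_SIZE;   /* bit offset inside it */
        unsigned long rest   = BLOCK_SIZE - offset;
        unsigned long num    = end - page < rest ? end - page : rest;

        printf("block %lu: bits [%lu, %lu)\n", idx, offset, offset + num);
        page += num;   /* never crosses a block boundary in one step */
    }
}

int main(void)
{
    walk_range(5, 40);   /* spans three blocks: 5..16, 16..32, 32..40 */
    return 0;
}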
@@ -64,6 +64,7 @@
#define PCI_VENDOR_ID_LSI_LOGIC 0x1000
#define PCI_DEVICE_ID_LSI_53C810 0x0001
#define PCI_DEVICE_ID_LSI_53C895A 0x0012
#define PCI_DEVICE_ID_LSI_SAS1068 0x0054
#define PCI_DEVICE_ID_LSI_SAS1078 0x0060
#define PCI_DEVICE_ID_LSI_SAS0079 0x0079
......
@@ -108,6 +108,8 @@ struct SCSIDevice
int blocksize;
int type;
uint64_t max_lba;
uint64_t wwn;
uint64_t port_wwn;
};
extern const VMStateDescription vmstate_scsi_device;
@@ -271,6 +273,7 @@ void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense);
void scsi_device_set_ua(SCSIDevice *sdev, SCSISense sense);
void scsi_device_report_change(SCSIDevice *dev, SCSISense sense);
void scsi_device_unit_attention_reported(SCSIDevice *dev);
void scsi_generic_read_device_identification(SCSIDevice *dev);
int scsi_device_get_sense(SCSIDevice *dev, uint8_t *buf, int len, bool fixed);
SCSIDevice *scsi_device_find(SCSIBus *bus, int channel, int target, int lun);
......
@@ -8,6 +8,8 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
* See docs/atomics.txt for discussion about the guarantees each
* atomic primitive is meant to provide.
*/
#ifndef __QEMU_ATOMIC_H
@@ -15,12 +17,130 @@
#include "qemu/compiler.h"
/* For C11 atomic ops */
/* Compiler barrier */
#define barrier() ({ asm volatile("" ::: "memory"); (void)0; })
#ifndef __ATOMIC_RELAXED
#ifdef __ATOMIC_RELAXED
/* For C11 atomic ops */
/* Manual memory barriers
*
* __atomic_thread_fence does not include a compiler barrier; instead,
* the barrier is part of __atomic_load/__atomic_store's "volatile-like"
* semantics. If smp_wmb() is a no-op, absence of the barrier means that
* the compiler is free to reorder stores on each side of the barrier.
* Add one here, and similarly in smp_rmb() and smp_read_barrier_depends().
*/
#define smp_mb() ({ barrier(); __atomic_thread_fence(__ATOMIC_SEQ_CST); barrier(); })
#define smp_wmb() ({ barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); barrier(); })
#define smp_rmb() ({ barrier(); __atomic_thread_fence(__ATOMIC_ACQUIRE); barrier(); })
#define smp_read_barrier_depends() ({ barrier(); __atomic_thread_fence(__ATOMIC_CONSUME); barrier(); })
/* Weak atomic operations prevent the compiler moving other
* loads/stores past the atomic operation load/store. However there is
* no explicit memory barrier for the processor.
*/
#define atomic_read(ptr) \
({ \
typeof(*ptr) _val; \
__atomic_load(ptr, &_val, __ATOMIC_RELAXED); \
_val; \
})
#define atomic_set(ptr, i) do { \
typeof(*ptr) _val = (i); \
__atomic_store(ptr, &_val, __ATOMIC_RELAXED); \
} while(0)
/* Atomic RCU operations imply weak memory barriers */
#define atomic_rcu_read(ptr) \
({ \
typeof(*ptr) _val; \
__atomic_load(ptr, &_val, __ATOMIC_CONSUME); \
_val; \
})
#define atomic_rcu_set(ptr, i) do { \
typeof(*ptr) _val = (i); \
__atomic_store(ptr, &_val, __ATOMIC_RELEASE); \
} while(0)
/* atomic_mb_read/set semantics map Java volatile variables. They are
* less expensive on some platforms (notably POWER & ARMv7) than fully
* sequentially consistent operations.
*
* As long as they are used as paired operations they are safe to
* use. See docs/atomic.txt for more discussion.
*/
#if defined(_ARCH_PPC)
#define atomic_mb_read(ptr) \
({ \
typeof(*ptr) _val; \
__atomic_load(ptr, &_val, __ATOMIC_RELAXED); \
smp_rmb(); \
_val; \
})
#define atomic_mb_set(ptr, i) do { \
typeof(*ptr) _val = (i); \
smp_wmb(); \
__atomic_store(ptr, &_val, __ATOMIC_RELAXED); \
smp_mb(); \
} while(0)
#else
#define atomic_mb_read(ptr) \
({ \
typeof(*ptr) _val; \
__atomic_load(ptr, &_val, __ATOMIC_SEQ_CST); \
_val; \
})
#define atomic_mb_set(ptr, i) do { \
typeof(*ptr) _val = (i); \
__atomic_store(ptr, &_val, __ATOMIC_SEQ_CST); \
} while(0)
#endif
/* All the remaining operations are fully sequentially consistent */
#define atomic_xchg(ptr, i) ({ \
typeof(*ptr) _new = (i), _old; \
__atomic_exchange(ptr, &_new, &_old, __ATOMIC_SEQ_CST); \
_old; \
})
/* Returns the eventual value, failed or not */
#define atomic_cmpxchg(ptr, old, new) \
({ \
typeof(*ptr) _old = (old), _new = (new); \
__atomic_compare_exchange(ptr, &_old, &_new, false, \
__ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); \
_old; \
})
/* Provide shorter names for GCC atomic builtins, return old value */
#define atomic_fetch_inc(ptr) __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST)
#define atomic_fetch_dec(ptr) __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ_CST)
#define atomic_fetch_add(ptr, n) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST)
#define atomic_fetch_sub(ptr, n) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_CST)
#define atomic_fetch_and(ptr, n) __atomic_fetch_and(ptr, n, __ATOMIC_SEQ_CST)
#define atomic_fetch_or(ptr, n) __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_CST)
/* And even shorter names that return void. */
#define atomic_inc(ptr) ((void) __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST))
#define atomic_dec(ptr) ((void) __atomic_fetch_sub(ptr, 1, __ATOMIC_SEQ_CST))
#define atomic_add(ptr, n) ((void) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST))
#define atomic_sub(ptr, n) ((void) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_CST))
#define atomic_and(ptr, n) ((void) __atomic_fetch_and(ptr, n, __ATOMIC_SEQ_CST))
#define atomic_or(ptr, n) ((void) __atomic_fetch_or(ptr, n, __ATOMIC_SEQ_CST))
#else /* __ATOMIC_RELAXED */
/*
* We use GCC builtin if it's available, as that can use mfence on
@@ -85,8 +205,6 @@
#endif /* _ARCH_PPC */
#endif /* C11 atomics */
/*
* For (host) platforms we don't have explicit barrier definitions
* for, we use the gcc __sync_synchronize() primitive to generate a
@@ -98,42 +216,22 @@
#endif
#ifndef smp_wmb
#ifdef __ATOMIC_RELEASE
/* __atomic_thread_fence does not include a compiler barrier; instead,
* the barrier is part of __atomic_load/__atomic_store's "volatile-like"
* semantics. If smp_wmb() is a no-op, absence of the barrier means that
* the compiler is free to reorder stores on each side of the barrier.
* Add one here, and similarly in smp_rmb() and smp_read_barrier_depends().
*/
#define smp_wmb() ({ barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); barrier(); })
#else
#define smp_wmb() __sync_synchronize()
#endif
#endif
#ifndef smp_rmb
#ifdef __ATOMIC_ACQUIRE
#define smp_rmb() ({ barrier(); __atomic_thread_fence(__ATOMIC_ACQUIRE); barrier(); })
#else
#define smp_rmb() __sync_synchronize()
#endif
#endif
#ifndef smp_read_barrier_depends
#ifdef __ATOMIC_CONSUME
#define smp_read_barrier_depends() ({ barrier(); __atomic_thread_fence(__ATOMIC_CONSUME); barrier(); })
#else
#define smp_read_barrier_depends() barrier()
#endif
#endif
#ifndef atomic_read
/* These will only be atomic if the processor does the fetch or store
* in a single issue memory operation
*/
#define atomic_read(ptr) (*(__typeof__(*ptr) volatile*) (ptr))
#endif
#ifndef atomic_set
#define atomic_set(ptr, i) ((*(__typeof__(*ptr) volatile*) (ptr)) = (i))
#endif
/**
* atomic_rcu_read - reads a RCU-protected pointer to a local variable
@@ -146,30 +244,18 @@
* Inserts memory barriers on architectures that require them (currently only
* Alpha) and documents which pointers are protected by RCU.
*
* Unless the __ATOMIC_CONSUME memory order is available, atomic_rcu_read also
* includes a compiler barrier to ensure that value-speculative optimizations
* (e.g. VSS: Value Speculation Scheduling) does not perform the data read
* before the pointer read by speculating the value of the pointer. On new
* enough compilers, atomic_load takes care of such concern about
* dependency-breaking optimizations.
* atomic_rcu_read also includes a compiler barrier to ensure that
* value-speculative optimizations (e.g. VSS: Value Speculation
* Scheduling) does not perform the data read before the pointer read
* by speculating the value of the pointer.
*
* Should match atomic_rcu_set(), atomic_xchg(), atomic_cmpxchg().
*/
#ifndef atomic_rcu_read
#ifdef __ATOMIC_CONSUME
#define atomic_rcu_read(ptr) ({ \
typeof(*ptr) _val; \
__atomic_load(ptr, &_val, __ATOMIC_CONSUME); \
_val; \
})
#else
#define atomic_rcu_read(ptr) ({ \
typeof(*ptr) _val = atomic_read(ptr); \
smp_read_barrier_depends(); \
_val; \
})
#endif
#endif
/**
* atomic_rcu_set - assigns (publicizes) a pointer to a new data structure
@@ -182,19 +268,10 @@
*
* Should match atomic_rcu_read().
*/
#ifndef atomic_rcu_set
#ifdef __ATOMIC_RELEASE
#define atomic_rcu_set(ptr, i) do { \
typeof(*ptr) _val = (i); \
__atomic_store(ptr, &_val, __ATOMIC_RELEASE); \
} while(0)
#else
#define atomic_rcu_set(ptr, i) do { \
smp_wmb(); \
atomic_set(ptr, i); \
} while (0)
#endif
#endif
/* These have the same semantics as Java volatile variables.
* See http://gee.cs.oswego.edu/dl/jmm/cookbook.html:
@@ -218,13 +295,11 @@
* (see docs/atomics.txt), and I'm not sure that __ATOMIC_ACQ_REL is enough.
* Just always use the barriers manually by the rules above.
*/
#ifndef atomic_mb_read
#define atomic_mb_read(ptr) ({ \
typeof(*ptr) _val = atomic_read(ptr); \
smp_rmb(); \
_val; \
})
#endif
#ifndef atomic_mb_set
#define atomic_mb_set(ptr, i) do { \
@@ -237,12 +312,6 @@
#ifndef atomic_xchg
#if defined(__clang__)
#define atomic_xchg(ptr, i) __sync_swap(ptr, i)
#elif defined(__ATOMIC_SEQ_CST)
#define atomic_xchg(ptr, i) ({ \
typeof(*ptr) _new = (i), _old; \
__atomic_exchange(ptr, &_new, &_old, __ATOMIC_SEQ_CST); \
_old; \
})
#else
/* __sync_lock_test_and_set() is documented to be an acquire barrier only. */
#define atomic_xchg(ptr, i) (smp_mb(), __sync_lock_test_and_set(ptr, i))
@@ -266,4 +335,5 @@
#define atomic_and(ptr, n) ((void) __sync_fetch_and_and(ptr, n))
#define atomic_or(ptr, n) ((void) __sync_fetch_and_or(ptr, n))
#endif
#endif /* __ATOMIC_RELAXED */
#endif /* __QEMU_ATOMIC_H */
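Outside of QEMU, the pairing that atomic_rcu_set()/atomic_rcu_read() provide can be sketched directly with the GCC/Clang __atomic builtins the new code path wraps: a release store publishes a fully-initialized object, a consume load reads it. The Config type and functions below are made up for illustration, and reclamation of the old object (which QEMU defers with RCU, e.g. g_free_rcu() in dirty_memory_extend() above) is omitted.

#include <stdio.h>
#include <stdlib.h>

typedef struct Config {
    int threshold;
} Config;

static Config *current_cfg;   /* RCU-style published pointer */

/* Writer: fill in the new object, then publish it with release
 * semantics, mirroring atomic_rcu_set() above. */
static void publish(int threshold)
{
    Config *c = malloc(sizeof(*c));
    c->threshold = threshold;
    __atomic_store_n(&current_cfg, c, __ATOMIC_RELEASE);
}

/* Reader: consume-load the pointer, then dereference, mirroring
 * atomic_rcu_read(). */
static int read_threshold(void)
{
    Config *c = __atomic_load_n(&current_cfg, __ATOMIC_CONSUME);
    return c ? c->threshold : -1;
}

int main(void)
{
    publish(42);
    printf("%d\n", read_threshold());
    return 0;
}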
@@ -258,7 +258,7 @@ int qio_channel_socket_dgram_sync(QIOChannelSocket *ioc,
int fd;
trace_qio_channel_socket_dgram_sync(ioc, localAddr, remoteAddr);
fd = socket_dgram(localAddr, remoteAddr, errp);
fd = socket_dgram(remoteAddr, localAddr, errp);
if (fd < 0) {
trace_qio_channel_socket_dgram_fail(ioc);
return -1;
......
@@ -2361,7 +2361,7 @@ int kvm_set_one_reg(CPUState *cs, uint64_t id, void *source)
reg.addr = (uintptr_t) source;
r = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
if (r) {
trace_kvm_failed_reg_set(id, strerror(r));
trace_kvm_failed_reg_set(id, strerror(-r));
}
return r;
}
@@ -2375,7 +2375,7 @@ int kvm_get_one_reg(CPUState *cs, uint64_t id, void *target)
reg.addr = (uintptr_t) target;
r = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
if (r) {
trace_kvm_failed_reg_get(id, strerror(r));
trace_kvm_failed_reg_get(id, strerror(-r));
}
return r;
}
......
@@ -609,7 +609,6 @@ static void migration_bitmap_sync_init(void)
iterations_prev = 0;
}
/* Called with iothread lock held, to protect ram_list.dirty_memory[] */
static void migration_bitmap_sync(void)
{
RAMBlock *block;
@@ -1921,8 +1920,6 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
acct_clear();
}
/* iothread lock needed for ram_list.dirty_memory[] */
qemu_mutex_lock_iothread();
qemu_mutex_lock_ramlist();
rcu_read_lock();
bytes_transferred = 0;
@@ -1947,7 +1944,6 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
memory_global_dirty_log_start();
migration_bitmap_sync();
qemu_mutex_unlock_ramlist();
qemu_mutex_unlock_iothread();
qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE);
......
@@ -417,12 +417,12 @@ static coroutine_fn int nbd_negotiate(NBDClientNewData *data)
memcpy(buf, "NBDMAGIC", 8);
if (client->exp) {
assert ((client->exp->nbdflags & ~65535) == 0);
cpu_to_be64w((uint64_t*)(buf + 8), NBD_CLIENT_MAGIC);
stq_be_p(buf + 8, NBD_CLIENT_MAGIC);
cpu_to_be64w((uint64_t*)(buf + 16), client->exp->size);
stq_be_p(buf + 16, client->exp->size);
cpu_to_be16w((uint16_t*)(buf + 26), client->exp->nbdflags | myflags);
stw_be_p(buf + 26, client->exp->nbdflags | myflags);
} else {
cpu_to_be64w((uint64_t*)(buf + 8), NBD_OPTS_MAGIC);
stq_be_p(buf + 8, NBD_OPTS_MAGIC);
cpu_to_be16w((uint16_t *)(buf + 16), NBD_FLAG_FIXED_NEWSTYLE);
stw_be_p(buf + 16, NBD_FLAG_FIXED_NEWSTYLE);
}
if (client->exp) {
@@ -442,8 +442,8 @@ static coroutine_fn int nbd_negotiate(NBDClientNewData *data)
}
assert ((client->exp->nbdflags & ~65535) == 0);
cpu_to_be64w((uint64_t*)(buf + 18), client->exp->size);
stq_be_p(buf + 18, client->exp->size);
cpu_to_be16w((uint16_t*)(buf + 26), client->exp->nbdflags | myflags);
stw_be_p(buf + 26, client->exp->nbdflags | myflags);
if (nbd_negotiate_write(csock, buf + 18,
sizeof(buf) - 18) != sizeof(buf) - 18) {
LOG("write failed");
@@ -528,9 +528,9 @@ static ssize_t nbd_send_reply(int csock, struct nbd_reply *reply)
[ 4 .. 7] error (0 == no error)
[ 7 .. 15] handle
*/
cpu_to_be32w((uint32_t*)buf, NBD_REPLY_MAGIC);
stl_be_p(buf, NBD_REPLY_MAGIC);
cpu_to_be32w((uint32_t*)(buf + 4), reply->error);
stl_be_p(buf + 4, reply->error);
cpu_to_be64w((uint64_t*)(buf + 8), reply->handle);
stq_be_p(buf + 8, reply->handle);
TRACE("Sending response to client");
......
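The conversions above replace pointer-cast stores such as cpu_to_be64w((uint64_t *)(buf + 8), v) with the st*_be_p() helpers, which write big-endian values byte by byte and therefore do not care whether buf + 8, buf + 26, etc. are aligned. A rough, self-contained sketch of what such a helper does (the real helpers live in QEMU's bswap headers and are more general; the magic value below is an arbitrary example):

#include <stdint.h>
#include <string.h>

/* Illustrative 64-bit big-endian store: most significant byte first,
 * copied with memcpy so the destination may be unaligned. */
static inline void example_stq_be_p(void *ptr, uint64_t v)
{
    uint8_t b[8];
    for (int i = 0; i < 8; i++) {
        b[i] = (uint8_t)(v >> (56 - 8 * i));
    }
    memcpy(ptr, b, sizeof(b));
}

int main(void)
{
    uint8_t buf[32];
    example_stq_be_p(buf + 8, 0x0123456789abcdefULL);  /* unaligned target is fine */
    return 0;
}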
@@ -1171,6 +1171,7 @@ typedef struct {
     int connected;
     guint timer_tag;
     guint open_tag;
+    int slave_fd;
 } PtyCharDriver;
 
 static void pty_chr_update_read_handler_locked(CharDriverState *chr);
@@ -1347,6 +1348,7 @@ static void pty_chr_close(struct CharDriverState *chr)
     qemu_mutex_lock(&chr->chr_write_lock);
     pty_chr_state(chr, 0);
+    close(s->slave_fd);
     object_unref(OBJECT(s->ioc));
     if (s->timer_tag) {
         g_source_remove(s->timer_tag);
@@ -1374,7 +1376,6 @@ static CharDriverState *qemu_chr_open_pty(const char *id,
         return NULL;
     }
 
-    close(slave_fd);
     qemu_set_nonblock(master_fd);
 
     chr = qemu_chr_alloc(common, errp);
@@ -1399,6 +1400,7 @@ static CharDriverState *qemu_chr_open_pty(const char *id,
     chr->explicit_be_open = true;
 
     s->ioc = QIO_CHANNEL(qio_channel_file_new_fd(master_fd));
+    s->slave_fd = slave_fd;
     s->timer_tag = 0;
 
     return chr;
@@ -2856,6 +2858,10 @@ static void tcp_chr_update_read_handler(CharDriverState *chr)
 {
     TCPCharDriver *s = chr->opaque;
 
+    if (!s->connected) {
+        return;
+    }
+
     remove_fd_in_watch(chr);
     if (s->ioc) {
         chr->fd_in_tag = io_add_watch_poll(s->ioc,
@@ -4380,7 +4386,7 @@ static CharDriverState *qmp_chardev_open_udp(const char *id,
     QIOChannelSocket *sioc = qio_channel_socket_new();
 
     if (qio_channel_socket_dgram_sync(sioc,
-                                      udp->remote, udp->local,
+                                      udp->local, udp->remote,
                                       errp) < 0) {
         object_unref(OBJECT(sioc));
         return NULL;
......
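The last hunk above swaps the two address arguments: as the fixed call shows, qio_channel_socket_dgram_sync() takes the local (bind) address before the remote (peer) address, and the UDP chardev had been passing them the other way round. Because both parameters have the same type, the mistake compiles cleanly. The same pitfall, shown with plain BSD sockets so the example is runnable standalone (addresses and ports are arbitrary examples, not values from the QEMU code):

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in local = { 0 };
    struct sockaddr_in remote = { 0 };

    local.sin_family = AF_INET;
    local.sin_port = htons(5555);               /* where we receive */
    local.sin_addr.s_addr = htonl(INADDR_ANY);

    remote.sin_family = AF_INET;
    remote.sin_port = htons(7777);              /* where we send */
    inet_pton(AF_INET, "192.0.2.1", &remote.sin_addr);

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    /* Correct order: bind() the local address, connect() to the remote one.
     * Swapping the two structs would still compile but bind to the peer. */
    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind");
    }
    if (connect(fd, (struct sockaddr *)&remote, sizeof(remote)) < 0) {
        perror("connect");
    }
    close(fd);
    return 0;
}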
This diff has been collapsed.
@@ -636,7 +636,7 @@ sub get_maintainers {
     if ($email) {
         if (! $interactive) {
-            $email_git_fallback = 0 if @email_to > 0 || @list_to > 0 || $email_git || $email_git_blame;
+            $email_git_fallback = 0 if @email_to > 0 || $email_git || $email_git_blame;
             if ($email_git_fallback) {
                 print STDERR "get_maintainer.pl: No maintainers found, printing recent contributors.\n";
                 print STDERR "get_maintainer.pl: Do not blindly cc: them on patches! Use common sense.\n";
......
@@ -22,6 +22,7 @@ import resource
 import struct
 import re
 from collections import defaultdict
+from time import sleep
 
 VMX_EXIT_REASONS = {
     'EXCEPTION_NMI': 0,
@@ -778,7 +779,7 @@ def get_providers(options):
     return providers
 
-def check_access():
+def check_access(options):
     if not os.path.exists('/sys/kernel/debug'):
         sys.stderr.write('Please enable CONFIG_DEBUG_FS in your kernel.')
         sys.exit(1)
@@ -790,14 +791,24 @@ def check_access():
                          "Also ensure, that the kvm modules are loaded.\n")
         sys.exit(1)
 
-    if not os.path.exists(PATH_DEBUGFS_TRACING):
-        sys.stderr.write("Please make {0} readable by the current user.\n"
-                         .format(PATH_DEBUGFS_TRACING))
-        sys.exit(1)
+    if not os.path.exists(PATH_DEBUGFS_TRACING) and (options.tracepoints
+                                                     or not options.debugfs):
+        sys.stderr.write("Please enable CONFIG_TRACING in your kernel "
+                         "when using the option -t (default).\n"
+                         "If it is enabled, make {0} readable by the "
+                         "current user.\n")
+        if options.tracepoints:
+            sys.exit(1)
+
+        sys.stderr.write("Falling back to debugfs statistics!\n")
+        options.debugfs = True
+        sleep(5)
+
+    return options
 
 def main():
-    check_access()
     options = get_options()
+    options = check_access(options)
     providers = get_providers(options)
     stats = Stats(providers, fields=options.fields)
......
@@ -861,7 +861,7 @@ int x86_cpu_handle_mmu_fault(CPUState *cs, vaddr addr,
             /* Bits 20-13 provide bits 39-32 of the address, bit 21 is reserved.
              * Leave bits 20-13 in place for setting accessed/dirty bits below.
              */
-            pte = pde | ((pde & 0x1fe000) << (32 - 13));
+            pte = pde | ((pde & 0x1fe000LL) << (32 - 13));
             rsvd_mask = 0x200000;
             goto do_check_protect_pse36;
         }
@@ -1056,7 +1056,7 @@ hwaddr x86_cpu_get_phys_page_debug(CPUState *cs, vaddr addr)
         if (!(pde & PG_PRESENT_MASK))
             return -1;
         if ((pde & PG_PSE_MASK) && (env->cr[4] & CR4_PSE_MASK)) {
-            pte = pde | ((pde & 0x1fe000) << (32 - 13));
+            pte = pde | ((pde & 0x1fe000LL) << (32 - 13));
            page_size = 4096 * 1024;
         } else {
             /* page directory entry */
......
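Both PSE36 hunks above only add an LL suffix to the 0x1fe000 mask. Assuming pde is held in a 32-bit variable on this non-PAE path (which is what the fix implies), the AND and the shift were previously evaluated in 32 bits, so the page-frame bits destined for physical-address bits 39-32 wrapped away before the result was widened into pte. A small standalone demonstration of that truncation:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t pde = 0x001fe063;   /* example PDE with address bits 20-13 set */

    /* 32-bit evaluation: (pde & 0x1fe000) is a uint32_t, and shifting it
     * left by 19 wraps modulo 2^32, dropping the would-be bits 39-32. */
    uint64_t truncated = pde | ((pde & 0x1fe000) << (32 - 13));

    /* 64-bit evaluation: the LL constant widens the AND result, so the
     * shifted bits survive and land in bits 39-32 of the address. */
    uint64_t correct = pde | ((pde & 0x1fe000LL) << (32 - 13));

    printf("32-bit math: %#llx\n", (unsigned long long)truncated);
    printf("64-bit math: %#llx\n", (unsigned long long)correct);
    return 0;
}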
@@ -44,10 +44,6 @@ DEF_HELPER_FLAGS_3(set_dr, TCG_CALL_NO_WG, void, env, int, tl)
 DEF_HELPER_FLAGS_2(get_dr, TCG_CALL_NO_WG, tl, env, int)
 DEF_HELPER_2(invlpg, void, env, tl)
 
-DEF_HELPER_4(enter_level, void, env, int, int, tl)
-#ifdef TARGET_X86_64
-DEF_HELPER_4(enter64_level, void, env, int, int, tl)
-#endif
 DEF_HELPER_1(sysenter, void, env)
 DEF_HELPER_2(sysexit, void, env, int)
 #ifdef TARGET_X86_64
......
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.