Commit bfd4bda0 authored by David Woodhouse

Merge with master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6.git

......@@ -4,6 +4,16 @@ The EtherDrive (R) HOWTO for users of 2.6 kernels is found at ...
It has many tips and hints!
The aoetools are userland programs that are designed to work with this
driver. The aoetools are on sourceforge.
http://aoetools.sourceforge.net/
The scripts in this Documentation/aoe directory are intended to
document the use of the driver and are not necessary if you install
the aoetools.
CREATING DEVICE NODES
Users of udev should find the block device nodes created
......@@ -35,14 +45,15 @@ USING DEVICE NODES
"echo eth2 eth4 > /dev/etherd/interfaces" tells the aoe driver to
limit ATA over Ethernet traffic to eth2 and eth4. AoE traffic from
untrusted networks should be ignored as a matter of security.
untrusted networks should be ignored as a matter of security. See
also the aoe_iflist driver option described below.
"echo > /dev/etherd/discover" tells the driver to find out what AoE
devices are available.
These character devices may disappear and be replaced by sysfs
counterparts, so distribution maintainers are encouraged to create
scripts that use these devices.
counterparts. Using the commands in aoetools insulates users from
these implementation details.
The block devices are named like this:
......@@ -66,7 +77,8 @@ USING SYSFS
through which we are communicating with the remote AoE device.
There is a script in this directory that formats this information
in a convenient way.
in a convenient way. Users with aoetools can use the aoe-stat
command.
root@makki root# sh Documentation/aoe/status.sh
e10.0 eth3 up
......@@ -89,3 +101,23 @@ USING SYSFS
e4.7 eth1 up
e4.8 eth1 up
e4.9 eth1 up
Use /sys/module/aoe/parameters/aoe_iflist (or better, the driver
option discussed below) instead of /dev/etherd/interfaces to limit
AoE traffic to the network interfaces in the given
whitespace-separated list. Unlike the old character device, the
sysfs entry can be read from as well as written to.
It's helpful to trigger discovery after setting the list of allowed
interfaces. The aoetools package provides an aoe-discover script
for this purpose. You can also directly use the
/dev/etherd/discover special file described above.
DRIVER OPTIONS
There is a boot option for the built-in aoe driver and a
corresponding module parameter, aoe_iflist. Without this option,
all network interfaces may be used for ATA over Ethernet. Here is a
usage example for the module parameter.
modprobe aoe_iflist="eth1 eth3"
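Purely for illustration (this is not the aoe driver source): a minimal C sketch of how a whitespace-separated interface whitelist like aoe_iflist is typically wired up as a kernel module parameter, which also makes it show up under /sys/module/<module>/parameters/. The names example_iflist, IFLIST_MAXLEN and example_iface_allowed() are made up for this sketch.

#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/string.h>

#define IFLIST_MAXLEN 1024			/* buffer size is an assumption */

static char example_iflist[IFLIST_MAXLEN];	/* e.g. "eth1 eth3" */

/* 0600 makes the value readable and writable through sysfs */
module_param_string(example_iflist, example_iflist, IFLIST_MAXLEN, 0600);
MODULE_PARM_DESC(example_iflist, "space-separated list of allowed interfaces");

/* Nonzero if @name appears in the whitelist; an empty list allows everything. */
static int example_iface_allowed(const char *name)
{
	const char *p = example_iflist;
	size_t len = strlen(name);

	if (*p == '\0')
		return 1;
	while ((p = strstr(p, name)) != NULL) {
		if ((p == example_iflist || p[-1] == ' ') &&
		    (p[len] == ' ' || p[len] == '\0'))
			return 1;
		p += len;
	}
	return 0;
}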
......@@ -14,10 +14,6 @@ test ! -d "$sysd/block" && {
echo "$me Error: sysfs is not mounted" 1>&2
exit 1
}
test -z "`lsmod | grep '^aoe'`" && {
echo "$me Error: aoe module is not loaded" 1>&2
exit 1
}
for d in `ls -d $sysd/block/etherd* 2>/dev/null | grep -v p` end; do
# maybe ls comes up empty, so we use "end"
......
......@@ -279,6 +279,7 @@ pci_for_each_dev_reverse() Superseded by pci_find_device_reverse()
pci_for_each_bus() Superseded by pci_find_next_bus()
pci_find_device() Superseded by pci_get_device()
pci_find_subsys() Superseded by pci_get_subsys()
pci_find_slot() Superseded by pci_get_slot()
pcibios_find_class() Superseded by pci_get_class()
pci_find_class() Superseded by pci_get_class()
pci_(read|write)_*_nodev() Superseded by pci_bus_(read|write)_*()
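For background on the table above (not part of this patch): a minimal sketch of the pci_find_device() to pci_get_device() conversion, whose main difference is reference counting. The function name example_walk_devices() is made up.

#include <linux/pci.h>

static void example_walk_devices(unsigned int vendor, unsigned int device)
{
	struct pci_dev *dev = NULL;

	/*
	 * pci_get_device() returns the next matching device with its
	 * reference count raised; passing the previous result back in as
	 * "from" releases that earlier reference.  pci_dev_put() is only
	 * needed if a device pointer is kept or the loop exits early.
	 */
	while ((dev = pci_get_device(vendor, device, dev)) != NULL) {
		/* ... inspect dev here ... */
	}
}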
......@@ -165,40 +165,9 @@ Description:
These functions are intended for use by individual drivers, and are defined in
struct pci_driver:
int (*save_state) (struct pci_dev *dev, u32 state);
int (*suspend) (struct pci_dev *dev, u32 state);
int (*suspend) (struct pci_dev *dev, pm_message_t state);
int (*resume) (struct pci_dev *dev);
int (*enable_wake) (struct pci_dev *dev, u32 state, int enable);
save_state
----------
Usage:
if (dev->driver && dev->driver->save_state)
dev->driver->save_state(dev,state);
The driver should use this callback to save device state. It should take into
account the current state of the device and the requested state in order to
avoid any unnecessary operations.
For example, a video card that supports all 4 states (D0-D3), all controller
context is preserved when entering D1, but the screen is placed into a low power
state (blanked).
The driver can also interpret this function as a notification that it may be
entering a sleep state in the near future. If it knows that the device cannot
enter the requested state, either because of lack of support for it, or because
the device is in the middle of some critical operation, then it should fail.
This function should not be used to set any state in the device or the driver
because the device may not actually enter the sleep state (e.g. another driver
later causes a global state transition to fail).
Note that in intermediate low power states, a device's I/O and memory spaces may
be disabled and may not be available in subsequent transitions to lower power
states.
int (*enable_wake) (struct pci_dev *dev, pci_power_t state, int enable);
suspend
......
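To make the prototype change above concrete, here is a hedged sketch (not from this patch) of a driver filling in the pm_message_t-based suspend/resume callbacks. It assumes the usual PCI core helpers pci_save_state(), pci_choose_state(), pci_set_power_state(), pci_restore_state() and pci_enable_device(); everything named example_* is a placeholder, and exact helper signatures varied slightly around this kernel version.

#include <linux/pci.h>
#include <linux/pm.h>

static int example_suspend(struct pci_dev *pdev, pm_message_t state)
{
	/* Save config space while the device is still in D0. */
	pci_save_state(pdev);
	pci_disable_device(pdev);
	/* Map the pm_message_t to a PCI power state and enter it. */
	pci_set_power_state(pdev, pci_choose_state(pdev, state));
	return 0;
}

static int example_resume(struct pci_dev *pdev)
{
	pci_set_power_state(pdev, PCI_D0);
	pci_restore_state(pdev);
	return pci_enable_device(pdev);
}

static struct pci_driver example_driver = {
	.name    = "example",
	/* .id_table, .probe and .remove omitted */
	.suspend = example_suspend,
	.resume  = example_resume,
};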
......@@ -280,6 +280,10 @@ config ISA
(MCA) or VESA. ISA is an older system, now being displaced by PCI;
newer boards don't support it. If you have ISA, say Y, otherwise N.
config ISA_DMA_API
bool
default y
config PCI
bool
depends on !ALPHA_JENSEN
......
......@@ -266,6 +266,10 @@ config ISA_DMA
depends on FOOTBRIDGE_HOST || ARCH_SHARK
default y
config ISA_DMA_API
bool
default y
config PCI
bool "PCI support" if ARCH_INTEGRATOR_AP
default y if ARCH_SHARK || FOOTBRIDGE_HOST || ARCH_IOP3XX || ARCH_IXP4XX || ARCH_IXP2000
......
......@@ -18,48 +18,30 @@
* Please select one of the following when turning on debugging.
*/
#ifdef DEBUG
#if defined(CONFIG_DEBUG_DC21285_PORT)
.macro loadsp, rb
mov \rb, #0x42000000
.endm
.macro writeb, rb
str \rb, [r3, #0x160]
.endm
#elif defined(CONFIG_DEBUG_ICEDCC)
#include <asm/arch/debug-macro.S>
#if defined(CONFIG_DEBUG_ICEDCC)
.macro loadsp, rb
.endm
.macro writeb, rb
mcr p14, 0, \rb, c0, c1, 0
.endm
#elif defined(CONFIG_FOOTBRIDGE)
.macro loadsp, rb
mov \rb, #0x7c000000
.macro writeb, ch, rb
mcr p14, 0, \ch, c0, c1, 0
.endm
.macro writeb, rb
strb \rb, [r3, #0x3f8]
#else
.macro writeb, ch, rb
senduart \ch, \rb
.endm
#elif defined(CONFIG_ARCH_RPC)
#if defined(CONFIG_FOOTBRIDGE) || \
defined(CONFIG_ARCH_RPC) || \
defined(CONFIG_ARCH_INTEGRATOR) || \
defined(CONFIG_ARCH_PXA) || \
defined(CONFIG_ARCH_IXP4XX) || \
defined(CONFIG_ARCH_IXP2000) || \
defined(CONFIG_ARCH_LH7A40X) || \
defined(CONFIG_ARCH_OMAP)
.macro loadsp, rb
mov \rb, #0x03000000
orr \rb, \rb, #0x00010000
.endm
.macro writeb, rb
strb \rb, [r3, #0x3f8 << 2]
.endm
#elif defined(CONFIG_ARCH_INTEGRATOR)
.macro loadsp, rb
mov \rb, #0x16000000
.endm
.macro writeb, rb
strb \rb, [r3, #0]
.endm
#elif defined(CONFIG_ARCH_PXA) /* Xscale-type */
.macro loadsp, rb
mov \rb, #0x40000000
orr \rb, \rb, #0x00100000
.endm
.macro writeb, rb
strb \rb, [r3, #0]
addruart \rb
.endm
#elif defined(CONFIG_ARCH_SA1100)
.macro loadsp, rb
......@@ -70,64 +52,21 @@
add \rb, \rb, #0x00010000 @ Ser1
# endif
.endm
.macro writeb, rb
str \rb, [r3, #0x14] @ UTDR
.endm
#elif defined(CONFIG_ARCH_IXP4XX)
.macro loadsp, rb
mov \rb, #0xc8000000
.endm
.macro writeb, rb
str \rb, [r3, #0]
#elif defined(CONFIG_ARCH_IXP2000)
.macro loadsp, rb
mov \rb, #0xc0000000
orr \rb, \rb, #0x00030000
.endm
.macro writeb, rb
str \rb, [r3, #0]
.endm
#elif defined(CONFIG_ARCH_LH7A40X)
.macro loadsp, rb
ldr \rb, =0x80000700 @ UART2 UARTBASE
.endm
.macro writeb, rb
strb \rb, [r3, #0]
.endm
#elif defined(CONFIG_ARCH_OMAP)
.macro loadsp, rb
mov \rb, #0xff000000 @ physical base address
add \rb, \rb, #0x00fb0000
#if defined(CONFIG_OMAP_LL_DEBUG_UART2) || defined(CONFIG_OMAP_LL_DEBUG_UART3)
add \rb, \rb, #0x00000800
#endif
#ifdef CONFIG_OMAP_LL_DEBUG_UART3
add \rb, \rb, #0x00009000
#endif
.endm
.macro writeb, rb
strb \rb, [r3]
.endm
#elif defined(CONFIG_ARCH_IOP331)
.macro loadsp, rb
mov \rb, #0xff000000
orr \rb, \rb, #0x00ff0000
orr \rb, \rb, #0x0000f700 @ location of the UART
.endm
.macro writeb, rb
str \rb, [r3, #0]
.endm
#elif defined(CONFIG_ARCH_S3C2410)
.macro loadsp, rb
.macro loadsp, rb
mov \rb, #0x50000000
add \rb, \rb, #0x4000 * CONFIG_S3C2410_LOWLEVEL_UART_PORT
.endm
.macro writeb, rb
strb \rb, [r3, #0x20]
.endm
#else
#error no serial architecture defined
#endif
#endif
#endif
.macro kputc,val
......@@ -734,7 +673,7 @@ puts: loadsp r3
1: ldrb r2, [r0], #1
teq r2, #0
moveq pc, lr
2: writeb r2
2: writeb r2, r3
mov r1, #0x00020000
3: subs r1, r1, #1
bne 3b
......
......@@ -26,6 +26,7 @@
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <asm/arch/imxfb.h>
#include <asm/hardware.h>
#include <asm/mach/map.h>
......@@ -228,6 +229,14 @@ static struct platform_device imx_uart2_device = {
.resource = imx_uart2_resources,
};
static struct imxfb_mach_info imx_fb_info;
void __init set_imx_fb_info(struct imxfb_mach_info *hard_imx_fb_info)
{
memcpy(&imx_fb_info,hard_imx_fb_info,sizeof(struct imxfb_mach_info));
}
EXPORT_SYMBOL(set_imx_fb_info);
static struct resource imxfb_resources[] = {
[0] = {
.start = 0x00205000,
......@@ -241,9 +250,16 @@ static struct resource imxfb_resources[] = {
},
};
static u64 fb_dma_mask = ~(u64)0;
static struct platform_device imxfb_device = {
.name = "imx-fb",
.id = 0,
.dev = {
.platform_data = &imx_fb_info,
.dma_mask = &fb_dma_mask,
.coherent_dma_mask = 0xffffffff,
},
.num_resources = ARRAY_SIZE(imxfb_resources),
.resource = imxfb_resources,
};
......
......@@ -216,7 +216,9 @@ integrator_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
write_seqlock(&xtime_lock);
// ...clear the interrupt
/*
* clear the interrupt
*/
timer1->TimerClear = 1;
timer_tick(regs);
......@@ -264,7 +266,7 @@ void __init integrator_time_init(unsigned long reload, unsigned int ctrl)
timer1->TimerValue = timer_reload;
timer1->TimerControl = timer_ctrl;
/*
/*
* Make irqs happen for the system timer
*/
setup_irq(IRQ_TIMERINT1, &integrator_timer_irq);
......
......@@ -37,7 +37,7 @@ static void integrator_leds_event(led_event_t ledevt)
unsigned long flags;
const unsigned int dbg_base = IO_ADDRESS(INTEGRATOR_DBG_BASE);
unsigned int update_alpha_leds;
// yup, change the LEDs
local_irq_save(flags);
update_alpha_leds = 0;
......
......@@ -501,15 +501,6 @@ pci_set_dma_mask(struct pci_dev *dev, u64 mask)
return -EIO;
}
int
pci_dac_set_dma_mask(struct pci_dev *dev, u64 mask)
{
if (mask >= SZ_64M - 1 )
return 0;
return -EIO;
}
int
pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask)
{
......@@ -520,7 +511,6 @@ pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask)
}
EXPORT_SYMBOL(pci_set_dma_mask);
EXPORT_SYMBOL(pci_dac_set_dma_mask);
EXPORT_SYMBOL(pci_set_consistent_dma_mask);
EXPORT_SYMBOL(ixp4xx_pci_read);
EXPORT_SYMBOL(ixp4xx_pci_write);
......
......@@ -413,6 +413,7 @@ config CPU_BPREDICT_DISABLE
config HAS_TLS_REG
bool
depends on CPU_32v6 && !CPU_32v5 && !CPU_32v4 && !CPU_32v3
default y
help
This selects support for the CP15 thread register.
It is defined to be available on ARMv6 or later. However
......
......@@ -89,6 +89,10 @@ config PAGESIZE_16
machine with 4MB of memory.
endmenu
config ISA_DMA_API
bool
default y
menu "General setup"
# Compressed boot loader in ROM. Yes, we really want to ask about
......
......@@ -1173,6 +1173,10 @@ source "drivers/pci/pcie/Kconfig"
source "drivers/pci/Kconfig"
config ISA_DMA_API
bool
default y
config ISA
bool "ISA support"
depends on !(X86_VOYAGER || X86_VISWS)
......
......@@ -217,6 +217,16 @@ config IA64_SGI_SN_SIM
If you are compiling a kernel that will run under SGI's IA-64
simulator (Medusa) then say Y, otherwise say N.
config IA64_SGI_SN_XP
tristate "Support communication between SGI SSIs"
depends on MSPEC
help
An SGI machine can be divided into multiple Single System
Images which act independently of each other and have
hardware based memory protection from the others. Enabling
this feature will allow for direct communication between SSIs
based on a network adapter and DMA messaging.
config FORCE_MAX_ZONEORDER
int
default "18"
......@@ -261,6 +271,15 @@ config HOTPLUG_CPU
can be controlled through /sys/devices/system/cpu/cpu#.
Say N if you want to disable CPU hotplug.
config SCHED_SMT
bool "SMT scheduler support"
depends on SMP
default off
help
Improves the CPU scheduler's decision making when dealing with
Intel IA64 chips with MultiThreading at a cost of slightly increased
overhead in some places. If unsure say N here.
config PREEMPT
bool "Preemptible Kernel"
help
......
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.11-rc2
# Sat Jan 22 11:17:02 2005
# Linux kernel version: 2.6.12-rc3
# Tue May 3 15:55:04 2005
#
#
......@@ -10,6 +10,7 @@
CONFIG_EXPERIMENTAL=y
CONFIG_CLEAN_COMPILE=y
CONFIG_LOCK_KERNEL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
#
# General setup
......@@ -21,24 +22,27 @@ CONFIG_POSIX_MQUEUE=y
# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_SYSCTL=y
# CONFIG_AUDIT is not set
CONFIG_LOG_BUF_SHIFT=20
CONFIG_HOTPLUG=y
CONFIG_KOBJECT_UEVENT=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_CPUSETS is not set
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_KALLSYMS_EXTRA_PASS is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SHMEM=y
CONFIG_CC_ALIGN_FUNCTIONS=0
CONFIG_CC_ALIGN_LABELS=0
CONFIG_CC_ALIGN_LOOPS=0
CONFIG_CC_ALIGN_JUMPS=0
# CONFIG_TINY_SHMEM is not set
CONFIG_BASE_SMALL=0
#
# Loadable module support
......@@ -85,6 +89,7 @@ CONFIG_FORCE_MAX_ZONEORDER=18
CONFIG_SMP=y
CONFIG_NR_CPUS=4
CONFIG_HOTPLUG_CPU=y
# CONFIG_SCHED_SMT is not set
# CONFIG_PREEMPT is not set
CONFIG_HAVE_DEC_LOCK=y
CONFIG_IA32_SUPPORT=y
......@@ -135,6 +140,7 @@ CONFIG_PCI_DOMAINS=y
# CONFIG_PCI_MSI is not set
CONFIG_PCI_LEGACY_PROC=y
CONFIG_PCI_NAMES=y
# CONFIG_PCI_DEBUG is not set
#
# PCI Hotplug Support
......@@ -151,10 +157,6 @@ CONFIG_HOTPLUG_PCI_ACPI=m
#
# CONFIG_PCCARD is not set
#
# PC-card bridges
#
#
# Device Drivers
#
......@@ -195,9 +197,10 @@ CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_NBD=m
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_UB is not set
CONFIG_BLK_DEV_RAM=m
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_CDROM_PKTCDVD is not set
......@@ -313,7 +316,6 @@ CONFIG_SCSI_FC_ATTRS=y
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_EATA is not set
# CONFIG_SCSI_EATA_PIO is not set
# CONFIG_SCSI_FUTURE_DOMAIN is not set
# CONFIG_SCSI_GDTH is not set
# CONFIG_SCSI_IPS is not set
......@@ -325,7 +327,6 @@ CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
# CONFIG_SCSI_SYM53C8XX_IOMAPPED is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_ISP is not set
CONFIG_SCSI_QLOGIC_FC=y
# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set
CONFIG_SCSI_QLOGIC_1280=y
......@@ -336,6 +337,7 @@ CONFIG_SCSI_QLA22XX=m
CONFIG_SCSI_QLA2300=m
CONFIG_SCSI_QLA2322=m
# CONFIG_SCSI_QLA6312 is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_DEBUG is not set
......@@ -358,6 +360,7 @@ CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_MIRROR=m
CONFIG_DM_ZERO=m
# CONFIG_DM_MULTIPATH is not set
#
# Fusion MPT device support
......@@ -386,7 +389,6 @@ CONFIG_NET=y
#
CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set
CONFIG_NETLINK_DEV=y
CONFIG_UNIX=y
# CONFIG_NET_KEY is not set
CONFIG_INET=y
......@@ -446,7 +448,6 @@ CONFIG_DUMMY=m
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
# CONFIG_TUN is not set
# CONFIG_ETHERTAP is not set
#
# ARCnet devices
......@@ -484,7 +485,6 @@ CONFIG_NET_PCI=y
# CONFIG_DGRS is not set
CONFIG_EEPRO100=m
CONFIG_E100=m
# CONFIG_E100_NAPI is not set
# CONFIG_FEALNX is not set
# CONFIG_NATSEMI is not set
# CONFIG_NE2K_PCI is not set
......@@ -565,25 +565,6 @@ CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_EVDEV is not set
# CONFIG_INPUT_EVBUG is not set
#
# Input I/O drivers
#
CONFIG_GAMEPORT=m
CONFIG_SOUND_GAMEPORT=m
# CONFIG_GAMEPORT_NS558 is not set
# CONFIG_GAMEPORT_L4 is not set
# CONFIG_GAMEPORT_EMU10K1 is not set
# CONFIG_GAMEPORT_VORTEX is not set
# CONFIG_GAMEPORT_FM801 is not set
# CONFIG_GAMEPORT_CS461X is not set
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
#
# Input Device Drivers
#
......@@ -601,6 +582,24 @@ CONFIG_MOUSE_PS2=y
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set
#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
CONFIG_GAMEPORT=m
# CONFIG_GAMEPORT_NS558 is not set
# CONFIG_GAMEPORT_L4 is not set
# CONFIG_GAMEPORT_EMU10K1 is not set
# CONFIG_GAMEPORT_VORTEX is not set
# CONFIG_GAMEPORT_FM801 is not set
# CONFIG_GAMEPORT_CS461X is not set
CONFIG_SOUND_GAMEPORT=m
#
# Character devices
#
......@@ -615,6 +614,8 @@ CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_SYNCLINK is not set
# CONFIG_SYNCLINKMP is not set
# CONFIG_N_HDLC is not set
# CONFIG_SPECIALIX is not set
# CONFIG_SX is not set
# CONFIG_STALDRV is not set
#
......@@ -635,6 +636,7 @@ CONFIG_SERIAL_8250_SHARE_IRQ=y
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_JSM is not set
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
......@@ -670,6 +672,12 @@ CONFIG_HPET=y
# CONFIG_HPET_RTC_IRQ is not set
CONFIG_HPET_MMAP=y
CONFIG_MAX_RAW_DEVS=256
# CONFIG_HANGCHECK_TIMER is not set
#
# TPM devices
#
# CONFIG_TCG_TPM is not set
#
# I2C support
......@@ -705,7 +713,6 @@ CONFIG_MAX_RAW_DEVS=256
#
CONFIG_VGA_CONSOLE=y
CONFIG_DUMMY_CONSOLE=y
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
......@@ -715,6 +722,8 @@ CONFIG_DUMMY_CONSOLE=y
#
# USB support
#
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB_ARCH_HAS_OHCI=y
CONFIG_USB=y
# CONFIG_USB_DEBUG is not set
......@@ -726,8 +735,6 @@ CONFIG_USB_DEVICEFS=y
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_SUSPEND is not set
# CONFIG_USB_OTG is not set
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB_ARCH_HAS_OHCI=y
#
# USB Host Controller Drivers
......@@ -736,6 +743,8 @@ CONFIG_USB_EHCI_HCD=m
# CONFIG_USB_EHCI_SPLIT_ISO is not set
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
CONFIG_USB_OHCI_HCD=m
# CONFIG_USB_OHCI_BIG_ENDIAN is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
......@@ -751,12 +760,11 @@ CONFIG_USB_UHCI_HCD=y
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_RW_DETECT is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_DPCM is not set
# CONFIG_USB_STORAGE_HP8200e is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
......@@ -800,6 +808,7 @@ CONFIG_USB_HIDINPUT=y
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
# CONFIG_USB_USBNET is not set
# CONFIG_USB_MON is not set
#
# USB port drivers
......@@ -824,6 +833,7 @@ CONFIG_USB_HIDINPUT=y
# CONFIG_USB_PHIDGETKIT is not set
# CONFIG_USB_PHIDGETSERVO is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_TEST is not set
#
......@@ -867,7 +877,12 @@ CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
# CONFIG_JFS_FS is not set
CONFIG_FS_POSIX_ACL=y
#
# XFS support
#
CONFIG_XFS_FS=y
CONFIG_XFS_EXPORT=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_QUOTA is not set
# CONFIG_XFS_SECURITY is not set
......@@ -945,7 +960,7 @@ CONFIG_NFSD_V4=y
CONFIG_NFSD_TCP=y
CONFIG_LOCKD=m
CONFIG_LOCKD_V4=y
CONFIG_EXPORTFS=m
CONFIG_EXPORTFS=y
CONFIG_SUNRPC=m
CONFIG_SUNRPC_GSS=m
CONFIG_RPCSEC_GSS_KRB5=m
......@@ -1042,8 +1057,10 @@ CONFIG_GENERIC_IRQ_PROBE=y
#
# Kernel hacking
#
# CONFIG_PRINTK_TIME is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_LOG_BUF_SHIFT=20
# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
......@@ -1077,6 +1094,7 @@ CONFIG_CRYPTO_MD5=m
# CONFIG_CRYPTO_SHA256 is not set
# CONFIG_CRYPTO_SHA512 is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_TGR192 is not set
CONFIG_CRYPTO_DES=m
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_TWOFISH is not set
......
......@@ -1944,43 +1944,17 @@ sba_connect_bus(struct pci_bus *bus)
static void __init
sba_map_ioc_to_node(struct ioc *ioc, acpi_handle handle)
{
struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
union acpi_object *obj;
acpi_handle phandle;
unsigned int node;
int pxm;
ioc->node = MAX_NUMNODES;
/*
* Check for a _PXM on this node first. We don't typically see
* one here, so we'll end up getting it from the parent.
*/
if (ACPI_FAILURE(acpi_evaluate_object(handle, "_PXM", NULL, &buffer))) {
if (ACPI_FAILURE(acpi_get_parent(handle, &phandle)))
return;
/* Reset the acpi buffer */
buffer.length = ACPI_ALLOCATE_BUFFER;
buffer.pointer = NULL;
if (ACPI_FAILURE(acpi_evaluate_object(phandle, "_PXM", NULL,
&buffer)))
return;
}
pxm = acpi_get_pxm(handle);
if (!buffer.length || !buffer.pointer)
if (pxm < 0)
return;
obj = buffer.pointer;
if (obj->type != ACPI_TYPE_INTEGER ||
obj->integer.value >= MAX_PXM_DOMAINS) {
acpi_os_free(buffer.pointer);
return;
}
node = pxm_to_nid_map[obj->integer.value];
acpi_os_free(buffer.pointer);
node = pxm_to_nid_map[pxm];
if (node >= MAX_NUMNODES || !node_online(node))
return;
......
......@@ -779,7 +779,7 @@ acpi_map_iosapic (acpi_handle handle, u32 depth, void *context, void **ret)
union acpi_object *obj;
struct acpi_table_iosapic *iosapic;
unsigned int gsi_base;
int node;
int pxm, node;
/* Only care about objects w/ a method that returns the MADT */
if (ACPI_FAILURE(acpi_evaluate_object(handle, "_MAT", NULL, &buffer)))
......@@ -805,29 +805,16 @@ acpi_map_iosapic (acpi_handle handle, u32 depth, void *context, void **ret)
gsi_base = iosapic->global_irq_base;
acpi_os_free(buffer.pointer);
buffer.length = ACPI_ALLOCATE_BUFFER;
buffer.pointer = NULL;
/*
* OK, it's an IOSAPIC MADT entry, look for a _PXM method to tell
* OK, it's an IOSAPIC MADT entry, look for a _PXM value to tell
* us which node to associate this with.
*/
if (ACPI_FAILURE(acpi_evaluate_object(handle, "_PXM", NULL, &buffer)))
return AE_OK;
if (!buffer.length || !buffer.pointer)
return AE_OK;
obj = buffer.pointer;
if (obj->type != ACPI_TYPE_INTEGER ||
obj->integer.value >= MAX_PXM_DOMAINS) {
acpi_os_free(buffer.pointer);
pxm = acpi_get_pxm(handle);
if (pxm < 0)
return AE_OK;
}
node = pxm_to_nid_map[obj->integer.value];
acpi_os_free(buffer.pointer);
node = pxm_to_nid_map[pxm];
if (node >= MAX_NUMNODES || !node_online(node) ||
cpus_empty(node_to_cpumask(node)))
......
......@@ -782,7 +782,7 @@ GLOBAL_ENTRY(ia64_ret_from_ia32_execve)
st8.spill [r2]=r8 // store return value in slot for r8 and set unat bit
.mem.offset 8,0
st8.spill [r3]=r0 // clear error indication in slot for r10 and set unat bit
END(ia64_ret_from_ia32_execve_syscall)
END(ia64_ret_from_ia32_execve)
// fall through
#endif /* CONFIG_IA32_SUPPORT */
GLOBAL_ENTRY(ia64_leave_kernel)
......
......@@ -132,8 +132,7 @@ mca_handler_bh(unsigned long paddr)
spin_unlock(&mca_bh_lock);
/* This process is about to be killed itself */
force_sig(SIGKILL, current);
schedule();
do_exit(SIGKILL);
}
/**
......@@ -439,6 +438,7 @@ recover_from_read_error(slidx_table_t *slidx, peidx_table_t *peidx, pal_bus_chec
psr2 = (struct ia64_psr *)&pmsa->pmsa_ipsr;
psr2->cpl = 0;
psr2->ri = 0;
psr2->i = 0;
return 1;
}
......
......@@ -10,6 +10,7 @@
#include <asm/asmmacro.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
GLOBAL_ENTRY(mca_handler_bhhook)
invala // clear RSE ?
......@@ -20,12 +21,21 @@ GLOBAL_ENTRY(mca_handler_bhhook)
;;
alloc r16=ar.pfs,0,2,1,0 // make a new frame
;;
mov ar.rsc=0
;;
mov r13=IA64_KR(CURRENT) // current task pointer
;;
adds r12=IA64_TASK_THREAD_KSP_OFFSET,r13
mov r2=r13
;;
addl r22=IA64_RBS_OFFSET,r2
;;
mov ar.bspstore=r22
;;
ld8 r12=[r12] // stack pointer
addl sp=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r2
;;
adds r2=IA64_TASK_THREAD_ON_USTACK_OFFSET,r13
;;
st1 [r2]=r0 // clear current->thread.on_ustack flag
mov loc0=r16
movl loc1=mca_handler_bh // recovery C function
;;
......@@ -34,7 +44,9 @@ GLOBAL_ENTRY(mca_handler_bhhook)
;;
mov loc1=rp
;;
br.call.sptk.many rp=b6 // not return ...
ssm psr.i
;;
br.call.sptk.many rp=b6 // does not return ...
;;
mov ar.pfs=loc0
mov rp=loc1
......
......@@ -1265,6 +1265,8 @@ pfm_unregister_buffer_fmt(pfm_uuid_t uuid)
}
EXPORT_SYMBOL(pfm_unregister_buffer_fmt);
extern void update_pal_halt_status(int);
static int
pfm_reserve_session(struct task_struct *task, int is_syswide, unsigned int cpu)
{
......@@ -1311,6 +1313,11 @@ pfm_reserve_session(struct task_struct *task, int is_syswide, unsigned int cpu)
is_syswide,
cpu));
/*
* disable default_idle() to go to PAL_HALT
*/
update_pal_halt_status(0);
UNLOCK_PFS(flags);
return 0;
......@@ -1366,6 +1373,12 @@ pfm_unreserve_session(pfm_context_t *ctx, int is_syswide, unsigned int cpu)
is_syswide,
cpu));
/*
* if possible, enable default_idle() to go into PAL_HALT
*/
if (pfm_sessions.pfs_task_sessions == 0 && pfm_sessions.pfs_sys_sessions == 0)
update_pal_halt_status(1);
UNLOCK_PFS(flags);
return 0;
......@@ -4202,7 +4215,7 @@ pfm_context_load(pfm_context_t *ctx, void *arg, int count, struct pt_regs *regs)
DPRINT(("cannot load to [%d], invalid ctx_state=%d\n",
req->load_pid,
ctx->ctx_state));
return -EINVAL;
return -EBUSY;
}
DPRINT(("load_pid [%d] using_dbreg=%d\n", req->load_pid, ctx->ctx_fl_using_dbreg));
......@@ -4704,16 +4717,26 @@ pfm_check_task_state(pfm_context_t *ctx, int cmd, unsigned long flags)
if (task == current || ctx->ctx_fl_system) return 0;
/*
* if context is UNLOADED we are safe to go
*/
if (state == PFM_CTX_UNLOADED) return 0;
/*
* no command can operate on a zombie context
* we are monitoring another thread
*/
if (state == PFM_CTX_ZOMBIE) {
DPRINT(("cmd %d state zombie cannot operate on context\n", cmd));
return -EINVAL;
switch(state) {
case PFM_CTX_UNLOADED:
/*
* if context is UNLOADED we are safe to go
*/
return 0;
case PFM_CTX_ZOMBIE:
/*
* no command can operate on a zombie context
*/
DPRINT(("cmd %d state zombie cannot operate on context\n", cmd));
return -EINVAL;
case PFM_CTX_MASKED:
/*
* PMU state has been saved to software even though
* the thread may still be running.
*/
if (cmd != PFM_UNLOAD_CONTEXT) return 0;
}
/*
......
......@@ -50,7 +50,7 @@
#include "sigframe.h"
void (*ia64_mark_idle)(int);
static cpumask_t cpu_idle_map;
static DEFINE_PER_CPU(unsigned int, cpu_idle_state);
unsigned long boot_option_idle_override = 0;
EXPORT_SYMBOL(boot_option_idle_override);
......@@ -173,7 +173,9 @@ do_notify_resume_user (sigset_t *oldset, struct sigscratch *scr, long in_syscall
ia64_do_signal(oldset, scr, in_syscall);
}
static int pal_halt = 1;
static int pal_halt = 1;
static int can_do_pal_halt = 1;
static int __init nohalt_setup(char * str)
{
pal_halt = 0;
......@@ -181,16 +183,20 @@ static int __init nohalt_setup(char * str)
}
__setup("nohalt", nohalt_setup);
void
update_pal_halt_status(int status)
{
can_do_pal_halt = pal_halt && status;
}
/*
* We use this if we don't have any better idle routine..
*/
void
default_idle (void)
{
unsigned long pmu_active = ia64_getreg(_IA64_REG_PSR) & (IA64_PSR_PP | IA64_PSR_UP);
while (!need_resched())
if (pal_halt && !pmu_active)
if (can_do_pal_halt)
safe_halt();
else
cpu_relax();
......@@ -223,20 +229,31 @@ static inline void play_dead(void)
}
#endif /* CONFIG_HOTPLUG_CPU */
void cpu_idle_wait(void)
{
int cpu;
cpumask_t map;
unsigned int cpu, this_cpu = get_cpu();
cpumask_t map;
for_each_online_cpu(cpu)
cpu_set(cpu, cpu_idle_map);
set_cpus_allowed(current, cpumask_of_cpu(this_cpu));
put_cpu();
wmb();
do {
ssleep(1);
cpus_and(map, cpu_idle_map, cpu_online_map);
} while (!cpus_empty(map));
cpus_clear(map);
for_each_online_cpu(cpu) {
per_cpu(cpu_idle_state, cpu) = 1;
cpu_set(cpu, map);
}
__get_cpu_var(cpu_idle_state) = 0;
wmb();
do {
ssleep(1);
for_each_online_cpu(cpu) {
if (cpu_isset(cpu, map) && !per_cpu(cpu_idle_state, cpu))
cpu_clear(cpu, map);
}
cpus_and(map, map, cpu_online_map);
} while (!cpus_empty(map));
}
EXPORT_SYMBOL_GPL(cpu_idle_wait);
......@@ -244,7 +261,6 @@ void __attribute__((noreturn))
cpu_idle (void)
{
void (*mark_idle)(int) = ia64_mark_idle;
int cpu = smp_processor_id();
/* endless idle loop with no priority at all */
while (1) {
......@@ -255,12 +271,13 @@ cpu_idle (void)
while (!need_resched()) {
void (*idle)(void);
if (__get_cpu_var(cpu_idle_state))
__get_cpu_var(cpu_idle_state) = 0;
rmb();
if (mark_idle)
(*mark_idle)(1);
if (cpu_isset(cpu, cpu_idle_map))
cpu_clear(cpu, cpu_idle_map);
rmb();
idle = pm_idle;
if (!idle)
idle = default_idle;
......
......@@ -224,7 +224,7 @@ ia64_rt_sigreturn (struct sigscratch *scr)
* could be corrupted.
*/
retval = (long) &ia64_leave_kernel;
if (test_thread_flag(TIF_SYSCALL_TRACE)
if (test_thread_flag(TIF_SYSCALL_TRACE)
|| test_thread_flag(TIF_SYSCALL_AUDIT))
/*
* strace expects to be notified after sigreturn returns even though the
......
/*
* Cache flushing routines.
*
* Copyright (C) 1999-2001 Hewlett-Packard Co
* Copyright (C) 1999-2001 David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999-2001, 2005 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/asmmacro.h>
#include <asm/page.h>
......@@ -26,7 +26,7 @@ GLOBAL_ENTRY(flush_icache_range)
mov ar.lc=r8
;;
.Loop: fc in0 // issuable on M0 only
.Loop: fc.i in0 // issuable on M2 only
add in0=32,in0
br.cloop.sptk.few .Loop
;;
......
......@@ -75,6 +75,7 @@ GLOBAL_ENTRY(memcpy)
mov f6=f0
br.cond.sptk .common_code
;;
END(memcpy)
GLOBAL_ENTRY(__copy_user)
.prologue
// check dest alignment
......@@ -524,7 +525,6 @@ EK(.ex_handler, (p17) st8 [dst1]=r39,8); \
#undef B
#undef C
#undef D
END(memcpy)
/*
* Due to lack of local tag support in gcc 2.x assembler, it is not clear which
......
......@@ -57,10 +57,10 @@ GLOBAL_ENTRY(memset)
{ .mmi
.prologue
alloc tmp = ar.pfs, 3, 0, 0, 0
.body
lfetch.nt1 [dest] //
.save ar.lc, save_lc
mov.i save_lc = ar.lc
.body
} { .mmi
mov ret0 = dest // return value
cmp.ne p_nz, p_zr = value, r0 // use stf.spill if value is zero
......
......@@ -4,10 +4,15 @@
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1999,2001-2003 Silicon Graphics, Inc. All Rights Reserved.
# Copyright (C) 1999,2001-2005 Silicon Graphics, Inc. All Rights Reserved.
#
obj-y += setup.o bte.o bte_error.o irq.o mca.o idle.o \
huberror.o io_init.o iomv.o klconflib.o sn2/
obj-$(CONFIG_IA64_GENERIC) += machvec.o
obj-$(CONFIG_SGI_TIOCX) += tiocx.o
obj-$(CONFIG_IA64_SGI_SN_XP) += xp.o
xp-y := xp_main.o xp_nofault.o
obj-$(CONFIG_IA64_SGI_SN_XP) += xpc.o
xpc-y := xpc_main.o xpc_channel.o xpc_partition.o
obj-$(CONFIG_IA64_SGI_SN_XP) += xpnet.o
......@@ -174,6 +174,12 @@ static void sn_fixup_ionodes(void)
if (status)
continue;
/* Attach the error interrupt handlers */
if (nasid & 1)
ice_error_init(hubdev);
else
hub_error_init(hubdev);
for (widget = 0; widget <= HUB_WIDGET_ID_MAX; widget++)
hubdev->hdi_xwidget_info[widget].xwi_hubinfo = hubdev;
......@@ -211,10 +217,6 @@ static void sn_fixup_ionodes(void)
sn_flush_device_list;
}
if (!(i & 1))
hub_error_init(hubdev);
else
ice_error_init(hubdev);
}
}
......
......@@ -37,6 +37,11 @@ static u64 *sn_oemdata_size, sn_oemdata_bufsize;
* This function is the callback routine that SAL calls to log error
* info for platform errors. buf is appended to sn_oemdata, resizing as
* required.
* Note: this is a SAL to OS callback, running under the same rules as the SAL
* code. SAL calls are run with preempt disabled so this routine must not
* sleep. vmalloc can sleep so print_hook cannot resize the output buffer
* itself, instead it must set the required size and return to let the caller
* resize the buffer then redrive the SAL call.
*/
static int print_hook(const char *fmt, ...)
{
......@@ -47,18 +52,8 @@ static int print_hook(const char *fmt, ...)
vsnprintf(buf, sizeof(buf), fmt, args);
va_end(args);
len = strlen(buf);
while (*sn_oemdata_size + len + 1 > sn_oemdata_bufsize) {
u8 *newbuf = vmalloc(sn_oemdata_bufsize += 1000);
if (!newbuf) {
printk(KERN_ERR "%s: unable to extend sn_oemdata\n",
__FUNCTION__);
return 0;
}
memcpy(newbuf, *sn_oemdata, *sn_oemdata_size);
vfree(*sn_oemdata);
*sn_oemdata = newbuf;
}
memcpy(*sn_oemdata + *sn_oemdata_size, buf, len + 1);
if (*sn_oemdata_size + len <= sn_oemdata_bufsize)
memcpy(*sn_oemdata + *sn_oemdata_size, buf, len);
*sn_oemdata_size += len;
return 0;
}
......@@ -98,7 +93,20 @@ sn_platform_plat_specific_err_print(const u8 * sect_header, u8 ** oemdata,
sn_oemdata = oemdata;
sn_oemdata_size = oemdata_size;
sn_oemdata_bufsize = 0;
ia64_sn_plat_specific_err_print(print_hook, (char *)sect_header);
*sn_oemdata_size = PAGE_SIZE; /* first guess at how much data will be generated */
while (*sn_oemdata_size > sn_oemdata_bufsize) {
u8 *newbuf = vmalloc(*sn_oemdata_size);
if (!newbuf) {
printk(KERN_ERR "%s: unable to extend sn_oemdata\n",
__FUNCTION__);
return 1;
}
vfree(*sn_oemdata);
*sn_oemdata = newbuf;
sn_oemdata_bufsize = *sn_oemdata_size;
*sn_oemdata_size = 0;
ia64_sn_plat_specific_err_print(print_hook, (char *)sect_header);
}
up(&sn_oemdata_mutex);
return 0;
}
......
......@@ -3,7 +3,7 @@
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 1999,2001-2004 Silicon Graphics, Inc. All rights reserved.
* Copyright (C) 1999,2001-2005 Silicon Graphics, Inc. All rights reserved.
*/
#include <linux/config.h>
......@@ -73,6 +73,12 @@ EXPORT_SYMBOL(sn_rtc_cycles_per_second);
DEFINE_PER_CPU(struct sn_hub_info_s, __sn_hub_info);
EXPORT_PER_CPU_SYMBOL(__sn_hub_info);
DEFINE_PER_CPU(short, __sn_cnodeid_to_nasid[MAX_NUMNODES]);
EXPORT_PER_CPU_SYMBOL(__sn_cnodeid_to_nasid);
DEFINE_PER_CPU(struct nodepda_s *, __sn_nodepda);
EXPORT_PER_CPU_SYMBOL(__sn_nodepda);
partid_t sn_partid = -1;
EXPORT_SYMBOL(sn_partid);
char sn_system_serial_number_string[128];
......@@ -373,11 +379,11 @@ static void __init sn_init_pdas(char **cmdline_p)
{
cnodeid_t cnode;
memset(pda->cnodeid_to_nasid_table, -1,
sizeof(pda->cnodeid_to_nasid_table));
memset(sn_cnodeid_to_nasid, -1,
sizeof(__ia64_per_cpu_var(__sn_cnodeid_to_nasid)));
for_each_online_node(cnode)
pda->cnodeid_to_nasid_table[cnode] =
pxm_to_nasid(nid_to_pxm_map[cnode]);
sn_cnodeid_to_nasid[cnode] =
pxm_to_nasid(nid_to_pxm_map[cnode]);
numionodes = num_online_nodes();
scan_for_ionodes();
......@@ -477,7 +483,8 @@ void __init sn_cpu_init(void)
cnode = nasid_to_cnodeid(nasid);
pda->p_nodepda = nodepdaindr[cnode];
sn_nodepda = nodepdaindr[cnode];
pda->led_address =
(typeof(pda->led_address)) (LED0 + (slice << LED_CPU_SHIFT));
pda->led_state = LED_ALWAYS_SET;
......@@ -486,15 +493,18 @@ void __init sn_cpu_init(void)
pda->idle_flag = 0;
if (cpuid != 0) {
memcpy(pda->cnodeid_to_nasid_table,
pdacpu(0)->cnodeid_to_nasid_table,
sizeof(pda->cnodeid_to_nasid_table));
/* copy cpu 0's sn_cnodeid_to_nasid table to this cpu's */
memcpy(sn_cnodeid_to_nasid,
(&per_cpu(__sn_cnodeid_to_nasid, 0)),
sizeof(__ia64_per_cpu_var(__sn_cnodeid_to_nasid)));
}
/*
* Check for WARs.
* Only needs to be done once, on BSP.
* Has to be done after loop above, because it uses pda.cnodeid_to_nasid_table[i].
* Has to be done after loop above, because it uses this cpu's
* sn_cnodeid_to_nasid table which was just initialized if this
* isn't cpu 0.
* Has to be done before assignment below.
*/
if (!wars_have_been_checked) {
......@@ -580,8 +590,7 @@ static void __init scan_for_ionodes(void)
brd = find_lboard_any(brd, KLTYPE_SNIA);
while (brd) {
pda->cnodeid_to_nasid_table[numionodes] =
brd->brd_nasid;
sn_cnodeid_to_nasid[numionodes] = brd->brd_nasid;
physical_node_map[brd->brd_nasid] = numionodes;
root_lboard[numionodes] = brd;
numionodes++;
......@@ -602,8 +611,7 @@ static void __init scan_for_ionodes(void)
root_lboard[nasid_to_cnodeid(nasid)],
KLTYPE_TIO);
while (brd) {
pda->cnodeid_to_nasid_table[numionodes] =
brd->brd_nasid;
sn_cnodeid_to_nasid[numionodes] = brd->brd_nasid;
physical_node_map[brd->brd_nasid] = numionodes;
root_lboard[numionodes] = brd;
numionodes++;
......@@ -614,7 +622,6 @@ static void __init scan_for_ionodes(void)
brd = find_lboard_any(brd, KLTYPE_TIO);
}
}
}
int
......@@ -623,7 +630,8 @@ nasid_slice_to_cpuid(int nasid, int slice)
long cpu;
for (cpu=0; cpu < NR_CPUS; cpu++)
if (nodepda->phys_cpuid[cpu].nasid == nasid && nodepda->phys_cpuid[cpu].slice == slice)
if (cpuid_to_nasid(cpu) == nasid &&
cpuid_to_slice(cpu) == slice)
return cpu;
return -1;
......
......@@ -21,6 +21,8 @@
#include <asm/sn/types.h>
#include <asm/sn/shubio.h>
#include <asm/sn/tiocx.h>
#include <asm/sn/l1.h>
#include <asm/sn/module.h>
#include "tio.h"
#include "xtalk/xwidgetdev.h"
#include "xtalk/hubdev.h"
......@@ -308,14 +310,12 @@ void tiocx_irq_free(struct sn_irq_info *sn_irq_info)
}
}
uint64_t
tiocx_dma_addr(uint64_t addr)
uint64_t tiocx_dma_addr(uint64_t addr)
{
return PHYS_TO_TIODMA(addr);
}
uint64_t
tiocx_swin_base(int nasid)
uint64_t tiocx_swin_base(int nasid)
{
return TIO_SWIN_BASE(nasid, TIOCX_CORELET);
}
......@@ -330,19 +330,6 @@ EXPORT_SYMBOL(tiocx_bus_type);
EXPORT_SYMBOL(tiocx_dma_addr);
EXPORT_SYMBOL(tiocx_swin_base);
static uint64_t tiocx_get_hubdev_info(u64 handle, u64 address)
{
struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
ia64_sal_oemcall_nolock(&ret_stuff,
SN_SAL_IOIF_GET_HUBDEV_INFO,
handle, address, 0, 0, 0, 0, 0);
return ret_stuff.v0;
}
static void tio_conveyor_set(nasid_t nasid, int enable_flag)
{
uint64_t ice_frz;
......@@ -379,7 +366,29 @@ static void tio_corelet_reset(nasid_t nasid, int corelet)
udelay(2000);
}
static int fpga_attached(nasid_t nasid)
static int tiocx_btchar_get(int nasid)
{
moduleid_t module_id;
geoid_t geoid;
int cnodeid;
cnodeid = nasid_to_cnodeid(nasid);
geoid = cnodeid_get_geoid(cnodeid);
module_id = geo_module(geoid);
return MODULE_GET_BTCHAR(module_id);
}
static int is_fpga_brick(int nasid)
{
switch (tiocx_btchar_get(nasid)) {
case L1_BRICKTYPE_SA:
case L1_BRICKTYPE_ATHENA:
return 1;
}
return 0;
}
static int bitstream_loaded(nasid_t nasid)
{
uint64_t cx_credits;
......@@ -396,7 +405,7 @@ static int tiocx_reload(struct cx_dev *cx_dev)
int mfg_num = CX_DEV_NONE;
nasid_t nasid = cx_dev->cx_id.nasid;
if (fpga_attached(nasid)) {
if (bitstream_loaded(nasid)) {
uint64_t cx_id;
cx_id =
......@@ -427,9 +436,10 @@ static ssize_t show_cxdev_control(struct device *dev, char *buf)
{
struct cx_dev *cx_dev = to_cx_dev(dev);
return sprintf(buf, "0x%x 0x%x 0x%x\n",
return sprintf(buf, "0x%x 0x%x 0x%x %d\n",
cx_dev->cx_id.nasid,
cx_dev->cx_id.part_num, cx_dev->cx_id.mfg_num);
cx_dev->cx_id.part_num, cx_dev->cx_id.mfg_num,
tiocx_btchar_get(cx_dev->cx_id.nasid));
}
static ssize_t store_cxdev_control(struct device *dev, const char *buf,
......@@ -475,20 +485,14 @@ static int __init tiocx_init(void)
if ((nasid = cnodeid_to_nasid(cnodeid)) < 0)
break; /* No more nasids .. bail out of loop */
if (nasid & 0x1) { /* TIO's are always odd */
if ((nasid & 0x1) && is_fpga_brick(nasid)) {
struct hubdev_info *hubdev;
uint64_t status;
struct xwidget_info *widgetp;
DBG("Found TIO at nasid 0x%x\n", nasid);
hubdev =
(struct hubdev_info *)(NODEPDA(cnodeid)->pdinfo);
status =
tiocx_get_hubdev_info(nasid,
(uint64_t) __pa(hubdev));
if (status)
continue;
widgetp = &hubdev->hdi_xwidget_info[TIOCX_CORELET];
......
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (c) 2004-2005 Silicon Graphics, Inc. All Rights Reserved.
*/
/*
* Cross Partition (XP) base.
*
* XP provides a base from which its users can interact
* with XPC, yet not be dependent on XPC.
*
*/
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <asm/sn/intr.h>
#include <asm/sn/sn_sal.h>
#include <asm/sn/xp.h>
/*
* Target of nofault PIO read.
*/
u64 xp_nofault_PIOR_target;
/*
* xpc_registrations[] keeps track of xpc_connect()'s done by the kernel-level
* users of XPC.
*/
struct xpc_registration xpc_registrations[XPC_NCHANNELS];
/*
* Initialize the XPC interface to indicate that XPC isn't loaded.
*/
static enum xpc_retval xpc_notloaded(void) { return xpcNotLoaded; }
struct xpc_interface xpc_interface = {
(void (*)(int)) xpc_notloaded,
(void (*)(int)) xpc_notloaded,
(enum xpc_retval (*)(partid_t, int, u32, void **)) xpc_notloaded,
(enum xpc_retval (*)(partid_t, int, void *)) xpc_notloaded,
(enum xpc_retval (*)(partid_t, int, void *, xpc_notify_func, void *))
xpc_notloaded,
(void (*)(partid_t, int, void *)) xpc_notloaded,
(enum xpc_retval (*)(partid_t, void *)) xpc_notloaded
};
/*
* XPC calls this when it (the XPC module) has been loaded.
*/
void
xpc_set_interface(void (*connect)(int),
void (*disconnect)(int),
enum xpc_retval (*allocate)(partid_t, int, u32, void **),
enum xpc_retval (*send)(partid_t, int, void *),
enum xpc_retval (*send_notify)(partid_t, int, void *,
xpc_notify_func, void *),
void (*received)(partid_t, int, void *),
enum xpc_retval (*partid_to_nasids)(partid_t, void *))
{
xpc_interface.connect = connect;
xpc_interface.disconnect = disconnect;
xpc_interface.allocate = allocate;
xpc_interface.send = send;
xpc_interface.send_notify = send_notify;
xpc_interface.received = received;
xpc_interface.partid_to_nasids = partid_to_nasids;
}
/*
* XPC calls this when it (the XPC module) is being unloaded.
*/
void
xpc_clear_interface(void)
{
xpc_interface.connect = (void (*)(int)) xpc_notloaded;
xpc_interface.disconnect = (void (*)(int)) xpc_notloaded;
xpc_interface.allocate = (enum xpc_retval (*)(partid_t, int, u32,
void **)) xpc_notloaded;
xpc_interface.send = (enum xpc_retval (*)(partid_t, int, void *))
xpc_notloaded;
xpc_interface.send_notify = (enum xpc_retval (*)(partid_t, int, void *,
xpc_notify_func, void *)) xpc_notloaded;
xpc_interface.received = (void (*)(partid_t, int, void *))
xpc_notloaded;
xpc_interface.partid_to_nasids = (enum xpc_retval (*)(partid_t, void *))
xpc_notloaded;
}
/*
* Register for automatic establishment of a channel connection whenever
* a partition comes up.
*
* Arguments:
*
* ch_number - channel # to register for connection.
* func - function to call for asynchronous notification of channel
* state changes (i.e., connection, disconnection, error) and
* the arrival of incoming messages.
* key - pointer to optional user-defined value that gets passed back
* to the user on any callouts made to func.
* payload_size - size in bytes of the XPC message's payload area which
* contains a user-defined message. The user should make
* this large enough to hold their largest message.
* nentries - max #of XPC message entries a message queue can contain.
* The actual number, which is determined when a connection
* is established and may be less than requested, will be
* passed to the user via the xpcConnected callout.
* assigned_limit - max number of kthreads allowed to be processing
* messages (per connection) at any given instant.
* idle_limit - max number of kthreads allowed to be idle at any given
* instant.
*/
enum xpc_retval
xpc_connect(int ch_number, xpc_channel_func func, void *key, u16 payload_size,
u16 nentries, u32 assigned_limit, u32 idle_limit)
{
struct xpc_registration *registration;
DBUG_ON(ch_number < 0 || ch_number >= XPC_NCHANNELS);
DBUG_ON(payload_size == 0 || nentries == 0);
DBUG_ON(func == NULL);
DBUG_ON(assigned_limit == 0 || idle_limit > assigned_limit);
registration = &xpc_registrations[ch_number];
if (down_interruptible(&registration->sema) != 0) {
return xpcInterrupted;
}
/* if XPC_CHANNEL_REGISTERED(ch_number) */
if (registration->func != NULL) {
up(&registration->sema);
return xpcAlreadyRegistered;
}
/* register the channel for connection */
registration->msg_size = XPC_MSG_SIZE(payload_size);
registration->nentries = nentries;
registration->assigned_limit = assigned_limit;
registration->idle_limit = idle_limit;
registration->key = key;
registration->func = func;
up(&registration->sema);
xpc_interface.connect(ch_number);
return xpcSuccess;
}
/*
* Remove the registration for automatic connection of the specified channel
* when a partition comes up.
*
* Before returning, xpc_disconnect() will wait until all connections on the
* specified channel have been closed/torn down. So the caller can be assured
* that they will not be receiving any more callouts from XPC to their
* function registered via xpc_connect().
*
* Arguments:
*
* ch_number - channel # to unregister.
*/
void
xpc_disconnect(int ch_number)
{
struct xpc_registration *registration;
DBUG_ON(ch_number < 0 || ch_number >= XPC_NCHANNELS);
registration = &xpc_registrations[ch_number];
/*
* We've decided not to make this a down_interruptible(), since we
* figured XPC's users will just turn around and call xpc_disconnect()
* again anyways, so we might as well wait, if need be.
*/
down(&registration->sema);
/* if !XPC_CHANNEL_REGISTERED(ch_number) */
if (registration->func == NULL) {
up(&registration->sema);
return;
}
/* remove the connection registration for the specified channel */
registration->func = NULL;
registration->key = NULL;
registration->nentries = 0;
registration->msg_size = 0;
registration->assigned_limit = 0;
registration->idle_limit = 0;
xpc_interface.disconnect(ch_number);
up(&registration->sema);
return;
}
int __init
xp_init(void)
{
int ret, ch_number;
u64 func_addr = *(u64 *) xp_nofault_PIOR;
u64 err_func_addr = *(u64 *) xp_error_PIOR;
if (!ia64_platform_is("sn2")) {
return -ENODEV;
}
/*
* Register a nofault code region which performs a cross-partition
* PIO read. If the PIO read times out, the MCA handler will consume
* the error and return to a kernel-provided instruction to indicate
* an error. This PIO read exists because it is guaranteed to timeout
* if the destination is down (AMO operations do not timeout on at
* least some CPUs on Shubs <= v1.2, which unfortunately we have to
* work around).
*/
if ((ret = sn_register_nofault_code(func_addr, err_func_addr,
err_func_addr, 1, 1)) != 0) {
printk(KERN_ERR "XP: can't register nofault code, error=%d\n",
ret);
}
/*
* Setup the nofault PIO read target. (There is no special reason why
* SH_IPI_ACCESS was selected.)
*/
if (is_shub2()) {
xp_nofault_PIOR_target = SH2_IPI_ACCESS0;
} else {
xp_nofault_PIOR_target = SH1_IPI_ACCESS;
}
/* initialize the connection registration semaphores */
for (ch_number = 0; ch_number < XPC_NCHANNELS; ch_number++) {
sema_init(&xpc_registrations[ch_number].sema, 1); /* mutex */
}
return 0;
}
module_init(xp_init);
void __exit
xp_exit(void)
{
u64 func_addr = *(u64 *) xp_nofault_PIOR;
u64 err_func_addr = *(u64 *) xp_error_PIOR;
/* unregister the PIO read nofault code region */
(void) sn_register_nofault_code(func_addr, err_func_addr,
err_func_addr, 1, 0);
}
module_exit(xp_exit);
MODULE_AUTHOR("Silicon Graphics, Inc.");
MODULE_DESCRIPTION("Cross Partition (XP) base");
MODULE_LICENSE("GPL");
EXPORT_SYMBOL(xp_nofault_PIOR);
EXPORT_SYMBOL(xp_nofault_PIOR_target);
EXPORT_SYMBOL(xpc_registrations);
EXPORT_SYMBOL(xpc_interface);
EXPORT_SYMBOL(xpc_clear_interface);
EXPORT_SYMBOL(xpc_set_interface);
EXPORT_SYMBOL(xpc_connect);
EXPORT_SYMBOL(xpc_disconnect);
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (c) 2004-2005 Silicon Graphics, Inc. All Rights Reserved.
*/
/*
* The xp_nofault_PIOR function takes a pointer to a remote PIO register
* and attempts to load and consume a value from it. This function
* will be registered as a nofault code block. In the event that the
* PIO read fails, the MCA handler will force the error to look
* corrected and vector to the xp_error_PIOR which will return an error.
*
* extern int xp_nofault_PIOR(void *remote_register);
*/
.global xp_nofault_PIOR
xp_nofault_PIOR:
mov r8=r0 // Stage a success return value
ld8.acq r9=[r32];; // PIO Read the specified register
adds r9=1,r9 // Add to force a consume
br.ret.sptk.many b0;; // Return success
.global xp_error_PIOR
xp_error_PIOR:
mov r8=1 // Return value of 1
br.ret.sptk.many b0;; // Return failure
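For illustration only (not part of the patch): a C sketch of how a caller might use the nofault PIO read declared above, relying only on the documented return values (0 on success, 1 returned from xp_error_PIOR). The function name and the register address parameter are hypothetical.

extern int xp_nofault_PIOR(void *remote_register);

/*
 * Probe a remote address through the nofault region registered by xp_init().
 * A return of 0 means the PIO read completed; a return of 1 means the MCA
 * handler vectored to xp_error_PIOR, i.e. the target did not respond (for
 * example because the remote partition is down).
 */
static int example_remote_responds(void *remote_reg /* hypothetical address */)
{
	return xp_nofault_PIOR(remote_reg) == 0;
}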
(Diffs for several more files are collapsed here.)
......@@ -301,7 +301,7 @@ void sn_dma_flush(uint64_t addr)
spin_lock_irqsave(&((struct sn_flush_device_list *)p)->
sfdl_flush_lock, flags);
p->sfdl_flush_value = 0;
*p->sfdl_flush_addr = 0;
/* force an interrupt. */
*(volatile uint32_t *)(p->sfdl_force_int_addr) = 1;
......
......@@ -431,7 +431,7 @@ tioca_dma_mapped(struct pci_dev *pdev, uint64_t paddr, size_t req_size)
ca_dmamap->cad_dma_addr = bus_addr;
ca_dmamap->cad_gart_size = entries;
ca_dmamap->cad_gart_entry = entry;
list_add(&ca_dmamap->cad_list, &tioca_kern->ca_list);
list_add(&ca_dmamap->cad_list, &tioca_kern->ca_dmamaps);
if (xio_addr % ps) {
tioca_kern->ca_pcigart[entry] = tioca_paddr_to_gart(xio_addr);
......
......@@ -534,6 +534,11 @@ endchoice
endmenu
config ISA_DMA_API
bool
depends on !M5272
default y
menu "Bus options (PCI, PCMCIA, EISA, MCA, ISA)"
config PCI
......
......@@ -1656,3 +1656,7 @@ config GENERIC_HARDIRQS
config GENERIC_IRQ_PROBE
bool
default y
config ISA_DMA_API
bool
default y
......@@ -45,6 +45,10 @@ config GENERIC_IRQ_PROBE
config PM
bool
config ISA_DMA_API
bool
default y
source "init/Kconfig"
......
......@@ -1079,6 +1079,10 @@ source kernel/power/Kconfig
endmenu
config ISA_DMA_API
bool
default y
menu "Bus options"
config ISA
......
......@@ -293,6 +293,9 @@ config SECCOMP
endmenu
config ISA_DMA_API
bool
default y
menu "General setup"
......
......@@ -56,12 +56,19 @@ LDFLAGS_vmlinux := -Bstatic -e $(KERNELLOAD) -Ttext $(KERNELLOAD)
CFLAGS += -msoft-float -pipe -mminimal-toc -mtraceback=none \
-mcall-aixdesc
GCC_VERSION := $(call cc-version)
GCC_BROKEN_VEC := $(shell if [ $(GCC_VERSION) -lt 0400 ] ; then echo "y"; fi ;)
ifeq ($(CONFIG_POWER4_ONLY),y)
ifeq ($(CONFIG_ALTIVEC),y)
ifeq ($(GCC_BROKEN_VEC),y)
CFLAGS += $(call cc-option,-mcpu=970)
else
CFLAGS += $(call cc-option,-mcpu=power4)
endif
else
CFLAGS += $(call cc-option,-mcpu=power4)
endif
else
CFLAGS += $(call cc-option,-mtune=power4)
endif
......
......@@ -693,6 +693,10 @@ config RTC_9701JE
endmenu
config ISA_DMA_API
bool
depends on MPC1211
default y
menu "Bus options (PCI, PCMCIA, EISA, MCA, ISA)"
......
......@@ -47,9 +47,9 @@ prom_sortmemlist(struct linux_mlist_v0 *thislist)
char *tmpaddr;
char *lowest;
for(i=0; thislist[i].theres_more != 0; i++) {
for(i=0; thislist[i].theres_more; i++) {
lowest = thislist[i].start_adr;
for(mitr = i+1; thislist[mitr-1].theres_more != 0; mitr++)
for(mitr = i+1; thislist[mitr-1].theres_more; mitr++)
if(thislist[mitr].start_adr < lowest) {
lowest = thislist[mitr].start_adr;
swapi = mitr;
......@@ -85,7 +85,7 @@ void __init prom_meminit(void)
prom_phys_total[iter].num_bytes = mptr->num_bytes;
prom_phys_total[iter].theres_more = &prom_phys_total[iter+1];
}
prom_phys_total[iter-1].theres_more = 0x0;
prom_phys_total[iter-1].theres_more = NULL;
/* Second, the total prom taken descriptors. */
for(mptr = (*(romvec->pv_v0mem.v0_prommap)), iter=0;
mptr; mptr=mptr->theres_more, iter++) {
......@@ -93,7 +93,7 @@ void __init prom_meminit(void)
prom_prom_taken[iter].num_bytes = mptr->num_bytes;
prom_prom_taken[iter].theres_more = &prom_prom_taken[iter+1];
}
prom_prom_taken[iter-1].theres_more = 0x0;
prom_prom_taken[iter-1].theres_more = NULL;
/* Last, the available physical descriptors. */
for(mptr = (*(romvec->pv_v0mem.v0_available)), iter=0;
mptr; mptr=mptr->theres_more, iter++) {
......@@ -101,7 +101,7 @@ void __init prom_meminit(void)
prom_phys_avail[iter].num_bytes = mptr->num_bytes;
prom_phys_avail[iter].theres_more = &prom_phys_avail[iter+1];
}
prom_phys_avail[iter-1].theres_more = 0x0;
prom_phys_avail[iter-1].theres_more = NULL;
/* Sort all the lists. */
prom_sortmemlist(prom_phys_total);
prom_sortmemlist(prom_prom_taken);
......@@ -124,7 +124,7 @@ void __init prom_meminit(void)
prom_phys_avail[iter].theres_more =
&prom_phys_avail[iter+1];
}
prom_phys_avail[iter-1].theres_more = 0x0;
prom_phys_avail[iter-1].theres_more = NULL;
num_regs = prom_getproperty(node, "reg",
(char *) prom_reg_memlist,
......@@ -138,7 +138,7 @@ void __init prom_meminit(void)
prom_phys_total[iter].theres_more =
&prom_phys_total[iter+1];
}
prom_phys_total[iter-1].theres_more = 0x0;
prom_phys_total[iter-1].theres_more = NULL;
node = prom_getchild(prom_root_node);
node = prom_searchsiblings(node, "virtual-memory");
......@@ -158,7 +158,7 @@ void __init prom_meminit(void)
prom_prom_taken[iter].theres_more =
&prom_prom_taken[iter+1];
}
prom_prom_taken[iter-1].theres_more = 0x0;
prom_prom_taken[iter-1].theres_more = NULL;
prom_sortmemlist(prom_prom_taken);
......@@ -182,15 +182,15 @@ void __init prom_meminit(void)
case PROM_SUN4:
#ifdef CONFIG_SUN4
/* how simple :) */
prom_phys_total[0].start_adr = 0x0;
prom_phys_total[0].start_adr = NULL;
prom_phys_total[0].num_bytes = *(sun4_romvec->memorysize);
prom_phys_total[0].theres_more = 0x0;
prom_prom_taken[0].start_adr = 0x0;
prom_phys_total[0].theres_more = NULL;
prom_prom_taken[0].start_adr = NULL;
prom_prom_taken[0].num_bytes = 0x0;
prom_prom_taken[0].theres_more = 0x0;
prom_phys_avail[0].start_adr = 0x0;
prom_prom_taken[0].theres_more = NULL;
prom_phys_avail[0].start_adr = NULL;
prom_phys_avail[0].num_bytes = *(sun4_romvec->memoryavail);
prom_phys_avail[0].theres_more = 0x0;
prom_phys_avail[0].theres_more = NULL;
#endif
break;
......
......@@ -151,7 +151,7 @@ struct linux_romvec * __init sun4_prom_init(void)
* have more time, we can teach the penguin to say "By your
* command" or "Activating turbo boost, Michael". :-)
*/
sun4_romvec->setLEDs(0x0);
sun4_romvec->setLEDs(NULL);
printk("PROMLIB: Old Sun4 boot PROM monitor %s, romvec version %d\n",
sun4_romvec->monid,
......
......@@ -756,7 +756,7 @@ void handler_irq(int irq, struct pt_regs *regs)
clear_softint(clr_mask);
}
#else
int should_forward = 1;
int should_forward = 0;
clear_softint(1 << irq);
#endif
......@@ -1007,10 +1007,10 @@ static int retarget_one_irq(struct irqaction *p, int goal_cpu)
}
upa_writel(tid | IMAP_VALID, imap);
while (!cpu_online(goal_cpu)) {
do {
if (++goal_cpu >= NR_CPUS)
goal_cpu = 0;
}
} while (!cpu_online(goal_cpu));
return goal_cpu;
}
......
......@@ -379,6 +379,11 @@ config GENERIC_IRQ_PROBE
bool
default y
# we have no ISA slots, but we do have ISA-style DMA.
config ISA_DMA_API
bool
default y
menu "Power management options"
source kernel/power/Kconfig
......
......@@ -405,9 +405,8 @@ void device_release_driver(struct device * dev)
static void driver_detach(struct device_driver * drv)
{
struct list_head * entry, * next;
list_for_each_safe(entry, next, &drv->devices) {
struct device * dev = container_of(entry, struct device, driver_list);
while (!list_empty(&drv->devices)) {
struct device * dev = container_of(drv->devices.next, struct device, driver_list);
device_release_driver(dev);
}
}
......
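A note on the driver_detach() change above (sketch, not from the patch): list_for_each_safe() only protects against the current entry being unlinked, but device_release_driver() can unbind other devices on the same list, so the loop is restructured to repeatedly take the head element until the list drains. A generic version of that pattern, with made-up example_* names:

#include <linux/list.h>

struct example_item {
	struct list_head node;
	/* ... payload ... */
};

/*
 * Drain a list whose per-item handler may unlink entries other than the one
 * currently being visited.  list_for_each_safe() only caches the *next*
 * pointer, so it is not enough here; instead, always take the first element
 * until the list is empty (the handler is expected to unlink at least the
 * item it was given).
 */
static void example_drain(struct list_head *head,
			  void (*handler)(struct example_item *))
{
	while (!list_empty(head)) {
		struct example_item *it =
			list_entry(head->next, struct example_item, node);

		handler(it);
	}
}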
(Diffs for the remaining files in this commit are collapsed.)