Commit 23930fa1 authored by Jeff Garzik

Merge branch 'master' into upstream

Too many changes to display. To preserve performance only 1000 of 1000+ files are shown.
......@@ -2384,6 +2384,13 @@ N: Thomas Molina
E: tmolina@cablespeed.com
D: bug fixes, documentation, minor hackery
N: Paul Moore
E: paul.moore@hp.com
D: NetLabel author
S: Hewlett-Packard
S: 110 Spit Brook Road
S: Nashua, NH 03062
N: James Morris
E: jmorris@namei.org
W: http://namei.org/
......
......@@ -184,6 +184,8 @@ mtrr.txt
- how to use PPro Memory Type Range Registers to increase performance.
nbd.txt
- info on a TCP implementation of a network block device.
netlabel/
- directory with information on the NetLabel subsystem.
networking/
- directory with info on various aspects of networking with Linux.
nfsroot.txt
......
......@@ -19,15 +19,14 @@ At the lowest level are algorithms, which register dynamically with the
API.
'Transforms' are user-instantiated objects, which maintain state, handle all
of the implementation logic (e.g. manipulating page vectors), provide an
abstraction to the underlying algorithms, and handle common logical
operations (e.g. cipher modes, HMAC for digests). However, at the user
of the implementation logic (e.g. manipulating page vectors) and provide an
abstraction to the underlying algorithms. However, at the user
level they are very simple.
Conceptually, the API layering looks like this:
[transform api] (user interface)
[transform ops] (per-type logic glue e.g. cipher.c, digest.c)
[transform ops] (per-type logic glue e.g. cipher.c, compress.c)
[algorithm api] (for registering algorithms)
The idea is to make the user interface and algorithm registration API
......@@ -44,22 +43,27 @@ under development.
Here's an example of how to use the API:
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
struct scatterlist sg[2];
char result[128];
struct crypto_tfm *tfm;
struct crypto_hash *tfm;
struct hash_desc desc;
tfm = crypto_alloc_tfm("md5", 0);
if (tfm == NULL)
tfm = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(tfm))
fail();
/* ... set up the scatterlists ... */
desc.tfm = tfm;
desc.flags = 0;
crypto_digest_init(tfm);
crypto_digest_update(tfm, &sg, 2);
crypto_digest_final(tfm, result);
if (crypto_hash_digest(&desc, &sg, 2, result))
fail();
crypto_free_tfm(tfm);
crypto_free_hash(tfm);
Many real examples are available in the regression test module (tcrypt.c).
......@@ -126,7 +130,7 @@ might already be working on.
BUGS
Send bug reports to:
James Morris <jmorris@redhat.com>
Herbert Xu <herbert@gondor.apana.org.au>
Cc: David S. Miller <davem@redhat.com>
......@@ -134,13 +138,14 @@ FURTHER INFORMATION
For further patches and various updates, including the current TODO
list, see:
http://samba.org/~jamesm/crypto/
http://gondor.apana.org.au/~herbert/crypto/
AUTHORS
James Morris
David S. Miller
Herbert Xu
CREDITS
......@@ -238,8 +243,11 @@ Anubis algorithm contributors:
Tiger algorithm contributors:
Aaron Grothe
VIA PadLock contributors:
Michal Ludvig
Generic scatterwalk code by Adam J. Richter <adam@yggdrasil.com>
Please send any credits updates or corrections to:
James Morris <jmorris@redhat.com>
Herbert Xu <herbert@gondor.apana.org.au>
......@@ -1189,8 +1189,6 @@ running once the system is up.
Mechanism 2.
nommconf [IA-32,X86_64] Disable use of MMCONFIG for PCI
Configuration
mmconf [IA-32,X86_64] Force MMCONFIG. This is useful
to override the builtin blacklist.
nomsi [MSI] If the PCI_MSI kernel config parameter is
enabled, this kernel boot option can be used to
disable the use of MSI interrupts system-wide.
......
00-INDEX
- this file.
cipso_ipv4.txt
- documentation on the IPv4 CIPSO protocol engine.
draft-ietf-cipso-ipsecurity-01.txt
- IETF draft of the CIPSO protocol, dated 16 July 1992.
introduction.txt
- NetLabel introduction, READ THIS FIRST.
lsm_interface.txt
- documentation on the NetLabel kernel security module API.
NetLabel CIPSO/IPv4 Protocol Engine
==============================================================================
Paul Moore, paul.moore@hp.com
May 17, 2006
* Overview
The NetLabel CIPSO/IPv4 protocol engine is based on the IETF Commercial IP
Security Option (CIPSO) draft from July 16, 1992. A copy of this draft can be
found in this directory; consult '00-INDEX' for the filename. While the IETF
draft never became an RFC, it has become a de facto standard for
labeled networking and is used in many trusted operating systems.
* Outbound Packet Processing
The CIPSO/IPv4 protocol engine applies the CIPSO IP option to packets by
adding the CIPSO label to the socket. This causes all packets leaving the
system through the socket to have the CIPSO IP option applied. The socket's
CIPSO label can be changed at any point in time; however, it is recommended
that it be set upon the socket's creation. The LSM can set the socket's CIPSO
label by using the NetLabel security module API; if the NetLabel "domain" is
configured to use CIPSO for packet labeling then a CIPSO IP option will be
generated and attached to the socket.
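As a rough illustration of the LSM side of this (not taken from this commit), the sketch below shows a socket being labeled through the NetLabel security module API. The function name netlbl_socket_setattr() and the my_lsm_sid_to_secattr() helper are assumptions for illustration only and should be checked against 'include/net/netlabel.h'.
/* Hedged sketch: an LSM attaches a label to a socket via NetLabel.
 * netlbl_socket_setattr() and my_lsm_sid_to_secattr() are assumed
 * names -- verify against include/net/netlabel.h. */
#include <net/netlabel.h>

static int my_lsm_sid_to_secattr(u32 sid, struct netlbl_lsm_secattr *secattr);

static int my_lsm_label_socket(struct socket *sock, u32 sid)
{
        struct netlbl_lsm_secattr secattr;
        int rc;

        /* LSM-specific: fill the NetLabel security attributes for this
         * security identifier (domain, level, categories, ...) */
        rc = my_lsm_sid_to_secattr(sid, &secattr);
        if (rc != 0)
                return rc;

        /* If the NetLabel "domain" is configured to use CIPSO, this
         * results in a CIPSO IP option being attached to the socket. */
        return netlbl_socket_setattr(sock, &secattr);
}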
* Inbound Packet Processing
The CIPSO/IPv4 protocol engine validates every CIPSO IP option it finds at the
IP layer without any special handling required by the LSM. However, in order
to decode and translate the CIPSO label on the packet the LSM must use the
NetLabel security module API to extract the security attributes of the packet.
This is typically done at the socket layer using the 'socket_sock_rcv_skb()'
LSM hook.
* Label Translation
The CIPSO/IPv4 protocol engine contains a mechanism to translate CIPSO security
attributes such as sensitivity level and category to values which are
appropriate for the host. These mappings are defined as part of a CIPSO
Domain Of Interpretation (DOI) definition and are configured through the
NetLabel user space communication layer. Each DOI definition can have a
different security attribute mapping table.
* Label Translation Cache
The NetLabel system provides a framework for caching security attribute
mappings from the network labels to the corresponding LSM identifiers. The
CIPSO/IPv4 protocol engine supports this caching mechanism.
NetLabel Introduction
==============================================================================
Paul Moore, paul.moore@hp.com
August 2, 2006
* Overview
NetLabel is a mechanism which can be used by kernel security modules to attach
security attributes to outgoing network packets generated from user space
applications and read security attributes from incoming network packets. It
is composed of three main components: the protocol engines, the communication
layer, and the kernel security module API.
* Protocol Engines
The protocol engines are responsible for both applying and retrieving the
network packet's security attributes. If any translation between the network
security attributes and those on the host is required, the protocol
engine handles that as well. Other kernel subsystems should
refrain from calling the protocol engines directly; instead they should use
the NetLabel kernel security module API described below.
Detailed information about each NetLabel protocol engine can be found in this
directory, consult '00-INDEX' for filenames.
* Communication Layer
The communication layer exists to allow NetLabel configuration and monitoring
from user space. The NetLabel communication layer uses a message based
protocol built on top of the Generic NETLINK transport mechanism. The exact
formatting of these NetLabel messages as well as the Generic NETLINK family
names can be found in the 'net/netlabel/' directory as comments in the
header files as well as in 'include/net/netlabel.h'.
* Security Module API
The purpose of the NetLabel security module API is to provide a protocol
independent interface to the underlying NetLabel protocol engines. In addition
to protocol independence, the security module API is designed to be completely
LSM independent which should allow multiple LSMs to leverage the same code
base.
Detailed information about the NetLabel security module API can be found in the
'include/net/netlabel.h' header file as well as the 'lsm_interface.txt' file
found in this directory.
NetLabel Linux Security Module Interface
==============================================================================
Paul Moore, paul.moore@hp.com
May 17, 2006
* Overview
NetLabel is a mechanism which can set and retrieve security attributes from
network packets. It is intended to be used by LSM developers who want to make
use of a common code base for several different packet labeling protocols.
The NetLabel security module API is defined in 'include/net/netlabel.h' but a
brief overview is given below.
* NetLabel Security Attributes
Since NetLabel supports multiple different packet labeling protocols and LSMs
it uses the concept of security attributes to refer to the packet's security
labels. The NetLabel security attributes are defined by the
'netlbl_lsm_secattr' structure in the NetLabel header file. Internally the
NetLabel subsystem converts the security attributes to and from the correct
low-level packet label depending on the NetLabel build time and run time
configuration. It is up to the LSM developer to translate the NetLabel
security attributes into whatever security identifiers are in use for their
particular LSM.
* NetLabel LSM Protocol Operations
These are the functions which allow the LSM developer to manipulate the labels
on outgoing packets as well as read the labels on incoming packets. Functions
exist to operate on both sockets and sk_buffs directly. These high
level functions are translated into low level protocol operations based on how
the administrator has configured the NetLabel subsystem.
* NetLabel Label Mapping Cache Operations
Depending on the exact configuration, translation between the network packet
label and the internal LSM security identifier can be time consuming. The
NetLabel label mapping cache is a caching mechanism which can be used to
sidestep much of this overhead once a mapping has been established. Once the
LSM has received a packet, used NetLabel to decode its security attributes,
and translated the security attributes into an LSM internal identifier, the LSM
can use the NetLabel caching functions to associate the LSM internal
identifier with the network packet's label. This means that in the future,
when an incoming packet matches a cached value, not only are the internal
NetLabel translation mechanisms bypassed but the LSM translation mechanisms are
bypassed as well, which should result in a significant reduction in overhead.
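To make the sequence above concrete, here is a hedged sketch of the decode, translate, and cache steps as an LSM might perform them. The names netlbl_skbuff_getattr() and netlbl_cache_add(), as well as the LSM-side helpers, are assumptions recalled from the early NetLabel API and should be verified against 'include/net/netlabel.h'.
/* Hedged sketch of the decode -> translate -> cache flow.  Function
 * names and signatures are assumptions; check include/net/netlabel.h. */
#include <net/netlabel.h>

static u32 my_lsm_secattr_to_sid(const struct netlbl_lsm_secattr *secattr);

static int my_lsm_handle_inbound(struct sk_buff *skb, u32 *sid)
{
        struct netlbl_lsm_secattr secattr;
        int rc;

        /* decode the packet's on-the-wire label */
        rc = netlbl_skbuff_getattr(skb, &secattr);
        if (rc != 0)
                return rc;

        /* LSM-specific translation into an internal identifier */
        *sid = my_lsm_secattr_to_sid(&secattr);

        /* remember the mapping so future packets with the same label
         * bypass both the NetLabel and the LSM translation steps */
        netlbl_cache_add(skb, &secattr);

        return 0;
}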
......@@ -375,6 +375,41 @@ tcp_slow_start_after_idle - BOOLEAN
be timed out after an idle period.
Default: 1
CIPSOv4 Variables:
cipso_cache_enable - BOOLEAN
If set, enable additions to and lookups from the CIPSO label mapping
cache. If unset, additions are ignored and lookups always result in a
miss. However, regardless of the setting, the cache is still
invalidated when required, which means you can safely toggle this on and
off and the cache will always be "safe".
Default: 1
cipso_cache_bucket_size - INTEGER
The CIPSO label cache consists of a fixed size hash table with each
hash bucket containing a number of cache entries. This variable limits
the number of entries in each hash bucket; the larger the value the
more CIPSO label mappings that can be cached. When the number of
entries in a given hash bucket reaches this limit, adding new entries
causes the oldest entry in the bucket to be removed to make room.
Default: 10
cipso_rbm_optfmt - BOOLEAN
Enable the "Optimized Tag 1 Format" as defined in section 3.4.2.6 of
the CIPSO draft specification (see Documentation/netlabel for details).
This means that, when set, the CIPSO tag will be padded with empty
categories in order to make the packet data 32-bit aligned.
Default: 0
cipso_rbm_structvalid - BOOLEAN
If set, do a very strict check of the CIPSO option when
ip_options_compile() is called. If unset, relax the checks done during
ip_options_compile(). Either way is "safe", as errors are caught elsewhere
in the CIPSO processing code. Setting this to 0 (False) should
result in less work (i.e. it should be faster) but could cause problems
with other implementations that require strict checking.
Default: 0
IP Variables:
ip_local_port_range - 2 INTEGERS
......@@ -730,6 +765,9 @@ conf/all/forwarding - BOOLEAN
This is referred to as global forwarding.
proxy_ndp - BOOLEAN
Do proxy ndp.
conf/interface/*:
Change special settings per interface.
......
flowi structure:
The secid member in the flow structure is used in LSMs (e.g. SELinux) to indicate
the label of the flow. This label of the flow is currently used in selecting
matching labeled xfrm(s).
If this is an outbound flow, the label is derived from the socket, if any, or
the incoming packet this flow is being generated as a response to (e.g. tcp
resets, timewait ack, etc.). It is also conceivable that the label could be
derived from other sources such as process context, device, etc., in special
cases, as may be appropriate.
If this is an inbound flow, the label is derived from the IPSec security
associations, if any, used by the packet.
**************************************************************************
** History
**
** REV# DATE NAME DESCRIPTION
** 1.00.00.00 3/31/2004 Erich Chen First release
** 1.10.00.04 7/28/2004 Erich Chen modify for ioctl
** 1.10.00.06 8/28/2004 Erich Chen modify for 2.6.x
** 1.10.00.08 9/28/2004 Erich Chen modify for x86_64
** 1.10.00.10 10/10/2004 Erich Chen bug fix for SMP & ioctl
** 1.20.00.00 11/29/2004 Erich Chen bug fix with arcmsr_bus_reset when PHY error
** 1.20.00.02 12/09/2004 Erich Chen bug fix with over 2T bytes RAID Volume
** 1.20.00.04 1/09/2005 Erich Chen fits for Debian linux kernel version 2.2.xx
** 1.20.00.05 2/20/2005 Erich Chen cleanly as look like a Linux driver at 2.6.x
** thanks for peoples kindness comment
** Kornel Wieliczek
** Christoph Hellwig
** Adrian Bunk
** Andrew Morton
** Christoph Hellwig
** James Bottomley
** Arjan van de Ven
** 1.20.00.06 3/12/2005 Erich Chen fix with arcmsr_pci_unmap_dma "unsigned long" cast,
** modify PCCB POOL allocated by "dma_alloc_coherent"
** (Kornel Wieliczek's comment)
** 1.20.00.07 3/23/2005 Erich Chen bug fix with arcmsr_scsi_host_template_init
** occur segmentation fault,
** if RAID adapter does not on PCI slot
** and modprobe/rmmod this driver twice.
** bug fix enormous stack usage (Adrian Bunk's comment)
** 1.20.00.08 6/23/2005 Erich Chen bug fix with abort command,
** in case of heavy loading when sata cable
** working on low quality connection
** 1.20.00.09 9/12/2005 Erich Chen bug fix with abort command handling, firmware version check
** and firmware update notify for hardware bug fix
** 1.20.00.10 9/23/2005 Erich Chen enhance sysfs function for change driver's max tag Q number.
** add DMA_64BIT_MASK for backward compatible with all 2.6.x
** add some useful message for abort command
** add ioctl code 'ARCMSR_IOCTL_FLUSH_ADAPTER_CACHE'
** customer can send this command for sync raid volume data
** 1.20.00.11 9/29/2005 Erich Chen by comment of Arjan van de Ven fix incorrect msleep redefine
** cast off sizeof(dma_addr_t) condition for 64bit pci_set_dma_mask
** 1.20.00.12 9/30/2005 Erich Chen bug fix with 64bit platform's ccbs using if over 4G system memory
** change 64bit pci_set_consistent_dma_mask into 32bit
** correct adapter count if adapter initialization fails.
** miss edit at arcmsr_build_ccb....
** psge += sizeof(struct _SG64ENTRY *) =>
** psge += sizeof(struct _SG64ENTRY)
** 64 bits sg entry would be incorrectly calculated
** thanks Kornel Wieliczek give me kindly notify
** and detail description
** 1.20.00.13 11/15/2005 Erich Chen scheduling pending ccb with FIFO
** change the architecture of arcmsr command queue list
** for linux standard list
** enable usage of pci message signal interrupt
** follow Randy.Danlup kindness suggestion cleanup this code
**************************************************************************
\ No newline at end of file
......@@ -11,38 +11,43 @@ the original).
Supported Cards/Chipsets
-------------------------
PCI ID (pci.ids) OEM Product
9005:0285:9005:028a Adaptec 2020ZCR (Skyhawk)
9005:0285:9005:028e Adaptec 2020SA (Skyhawk)
9005:0285:9005:028b Adaptec 2025ZCR (Terminator)
9005:0285:9005:028f Adaptec 2025SA (Terminator)
9005:0285:9005:0286 Adaptec 2120S (Crusader)
9005:0286:9005:028d Adaptec 2130S (Lancer)
9005:0283:9005:0283 Adaptec Catapult (3210S with arc firmware)
9005:0284:9005:0284 Adaptec Tomcat (3410S with arc firmware)
9005:0285:9005:0285 Adaptec 2200S (Vulcan)
9005:0285:9005:0286 Adaptec 2120S (Crusader)
9005:0285:9005:0287 Adaptec 2200S (Vulcan-2m)
9005:0285:9005:0288 Adaptec 3230S (Harrier)
9005:0285:9005:0289 Adaptec 3240S (Tornado)
9005:0285:9005:028a Adaptec 2020ZCR (Skyhawk)
9005:0285:9005:028b Adaptec 2025ZCR (Terminator)
9005:0286:9005:028c Adaptec 2230S (Lancer)
9005:0286:9005:028c Adaptec 2230SLP (Lancer)
9005:0285:9005:0296 Adaptec 2240S (SabreExpress)
9005:0286:9005:028d Adaptec 2130S (Lancer)
9005:0285:9005:028e Adaptec 2020SA (Skyhawk)
9005:0285:9005:028f Adaptec 2025SA (Terminator)
9005:0285:9005:0290 Adaptec 2410SA (Jaguar)
9005:0285:9005:0293 Adaptec 21610SA (Corsair-16)
9005:0285:103c:3227 Adaptec 2610SA (Bearcat HP release)
9005:0285:9005:0293 Adaptec 21610SA (Corsair-16)
9005:0285:9005:0296 Adaptec 2240S (SabreExpress)
9005:0285:9005:0292 Adaptec 2810SA (Corsair-8)
9005:0285:9005:0294 Adaptec Prowler
9005:0286:9005:029d Adaptec 2420SA (Intruder HP release)
9005:0286:9005:029c Adaptec 2620SA (Intruder)
9005:0286:9005:029b Adaptec 2820SA (Intruder)
9005:0286:9005:02a7 Adaptec 2830SA (Skyray)
9005:0286:9005:02a8 Adaptec 2430SA (Skyray)
9005:0285:9005:0288 Adaptec 3230S (Harrier)
9005:0285:9005:0289 Adaptec 3240S (Tornado)
9005:0285:9005:0298 Adaptec 4000SAS (BlackBird)
9005:0285:9005:0297 Adaptec 4005SAS (AvonPark)
9005:0285:9005:0298 Adaptec 4000SAS (BlackBird)
9005:0285:9005:0299 Adaptec 4800SAS (Marauder-X)
9005:0285:9005:029a Adaptec 4805SAS (Marauder-E)
9005:0286:9005:029b Adaptec 2820SA (Intruder)
9005:0286:9005:029c Adaptec 2620SA (Intruder)
9005:0286:9005:029d Adaptec 2420SA (Intruder HP release)
9005:0286:9005:02a2 Adaptec 3800SAS (Hurricane44)
9005:0286:9005:02a7 Adaptec 3805SAS (Hurricane80)
9005:0286:9005:02a8 Adaptec 3400SAS (Hurricane40)
9005:0286:9005:02ac Adaptec 1800SAS (Typhoon44)
9005:0286:9005:02b3 Adaptec 2400SAS (Hurricane40lm)
9005:0285:9005:02b5 Adaptec ASR5800 (Voodoo44)
9005:0285:9005:02b6 Adaptec ASR5805 (Voodoo80)
9005:0285:9005:02b7 Adaptec ASR5808 (Voodoo08)
1011:0046:9005:0364 Adaptec 5400S (Mustang)
1011:0046:9005:0365 Adaptec 5400S (Mustang)
9005:0283:9005:0283 Adaptec Catapult (3210S with arc firmware)
9005:0284:9005:0284 Adaptec Tomcat (3410S with arc firmware)
9005:0287:9005:0800 Adaptec Themisto (Jupiter)
9005:0200:9005:0200 Adaptec Themisto (Jupiter)
9005:0286:9005:0800 Adaptec Callisto (Jupiter)
......@@ -64,18 +69,20 @@ Supported Cards/Chipsets
9005:0285:9005:0290 IBM ServeRAID 7t (Jaguar)
9005:0285:1014:02F2 IBM ServeRAID 8i (AvonPark)
9005:0285:1014:0312 IBM ServeRAID 8i (AvonParkLite)
9005:0286:1014:9580 IBM ServeRAID 8k/8k-l8 (Aurora)
9005:0286:1014:9540 IBM ServeRAID 8k/8k-l4 (AuroraLite)
9005:0286:9005:029f ICP ICP9014R0 (Lancer)
9005:0286:1014:9580 IBM ServeRAID 8k/8k-l8 (Aurora)
9005:0286:1014:034d IBM ServeRAID 8s (Hurricane)
9005:0286:9005:029e ICP ICP9024R0 (Lancer)
9005:0286:9005:029f ICP ICP9014R0 (Lancer)
9005:0286:9005:02a0 ICP ICP9047MA (Lancer)
9005:0286:9005:02a1 ICP ICP9087MA (Lancer)
9005:0286:9005:02a3 ICP ICP5445AU (Hurricane44)
9005:0286:9005:02a4 ICP ICP9085LI (Marauder-X)
9005:0286:9005:02a5 ICP ICP5085BR (Marauder-E)
9005:0286:9005:02a3 ICP ICP5445AU (Hurricane44)
9005:0286:9005:02a6 ICP ICP9067MA (Intruder-6)
9005:0286:9005:02a9 ICP ICP5087AU (Skyray)
9005:0286:9005:02aa ICP ICP5047AU (Skyray)
9005:0286:9005:02a9 ICP ICP5085AU (Hurricane80)
9005:0286:9005:02aa ICP ICP5045AU (Hurricane40)
9005:0286:9005:02b4 ICP ICP5045AL (Hurricane40lm)
People
-------------------------
......
This diff has been collapsed.
SAS Layer
---------
The SAS Layer is a management infrastructure which manages
SAS LLDDs. It sits between SCSI Core and SAS LLDDs. The
layout is as follows: while SCSI Core is concerned with
SAM/SPC issues, and a SAS LLDD+sequencer is concerned with
phy/OOB/link management, the SAS layer is concerned with:
* SAS Phy/Port/HA event management (LLDD generates,
SAS Layer processes),
* SAS Port management (creation/destruction),
* SAS Domain discovery and revalidation,
* SAS Domain device management,
* SCSI Host registration/unregistration,
* Device registration with SCSI Core (SAS) or libata
(SATA), and
* Expander management and exporting expander control
to user space.
A SAS LLDD is a PCI device driver. It is concerned with
phy/OOB management and vendor-specific tasks, and generates
events to the SAS layer.
The SAS Layer does most SAS tasks as outlined in the SAS 1.1
spec.
The sas_ha_struct describes the SAS LLDD to the SAS layer.
Most of it is used by the SAS Layer but a few fields need to
be initialized by the LLDDs.
After initializing your hardware, from the probe() function
you call sas_register_ha(). It will register your LLDD with
the SCSI subsystem, creating a SCSI host, and will
register your SAS driver with the sysfs SAS tree it creates.
It will then return. Then you enable your phys to actually
start OOB (at which point your driver will start calling the
notify_* event callbacks).
Structure descriptions:
struct sas_phy --------------------
Normally this is statically embedded in your driver's
phy structure:
struct my_phy {
blah;
struct sas_phy sas_phy;
bleh;
};
And then all the phys are an array of my_phy in your HA
struct (shown below).
Then as you go along and initialize your phys you also
initialize the sas_phy struct, along with your own
phy structure.
In general, the phys are managed by the LLDD and the ports
are managed by the SAS layer. So the phys are initialized
and updated by the LLDD and the ports are initialized and
updated by the SAS layer.
There is a scheme where the LLDD can read/write certain fields
while the SAS layer can only read them, and vice versa.
The idea is to avoid unnecessary locking.
enabled -- must be set (0/1)
id -- must be set [0,MAX_PHYS)
class, proto, type, role, oob_mode, linkrate -- must be set
oob_mode -- you set this when OOB has finished and then notify
the SAS Layer.
sas_addr -- this normally points to an array holding the sas
address of the phy, possibly somewhere in your my_phy
struct.
attached_sas_addr -- set this when you (LLDD) receive an
IDENTIFY frame or a FIS frame, _before_ notifying the SAS
layer. The idea is that sometimes the LLDD may want to fake
or provide a different SAS address on that phy/port and this
allows it to do this. At best you should copy the sas
address from the IDENTIFY frame or maybe generate a SAS
address for SATA directly attached devices. The Discover
process may later change this.
frame_rcvd -- this is where you copy the IDENTIFY/FIS frame
when you get it; you lock, copy, set frame_rcvd_size and
unlock the lock, and then call the event. It is a pointer
since there's no way to know your hw frame size _exactly_,
so you define the actual array in your phy struct and let
this pointer point to it. You copy the frame from your
DMAable memory to that area holding the lock.
sas_prim -- this is where primitives go when they're
received. See sas.h. Grab the lock, set the primitive,
release the lock, notify.
port -- this points to the sas_port if the phy belongs
to a port -- the LLDD only reads this. It points to the
sas_port this phy is part of. Set by the SAS Layer.
ha -- may be set; the SAS layer sets it anyway.
lldd_phy -- you should set this to point to your phy so you
can find your way around faster when the SAS layer calls one
of your callbacks and passes you a phy. If the sas_phy is
embedded you can also use container_of -- whatever you
prefer.
struct sas_port --------------------
The LLDD doesn't set any fields of this struct -- it only
reads them. They should be self explanatory.
phy_mask is 32 bits; this should be enough for now, as I
haven't heard of an HA having more than 8 phys.
lldd_port -- I haven't found a use for it -- maybe other
LLDDs that wish to have an internal port representation can make
use of this.
struct sas_ha_struct --------------------
It normally is statically declared in your own LLDD
structure describing your adapter:
struct my_sas_ha {
blah;
struct sas_ha_struct sas_ha;
struct my_phy phys[MAX_PHYS];
struct sas_port sas_ports[MAX_PHYS]; /* (1) */
bleh;
};
(1) If your LLDD doesn't have its own port representation.
What needs to be initialized (sample function given below).
pcidev
sas_addr -- since the SAS layer doesn't want to mess with
memory allocation, etc, this points to statically
allocated array somewhere (say in your host adapter
structure) and holds the SAS address of the host
adapter as given by you or the manufacturer, etc.
sas_port
sas_phy -- an array of pointers to structures. (see
note above on sas_addr).
These must be set. See more notes below.
num_phys -- the number of phys present in the sas_phy array,
and the number of ports present in the sas_port
array. There can be at most num_phys ports (one per
phy), so we drop num_ports and only use
num_phys.
The event interface:
/* LLDD calls these to notify the class of an event. */
void (*notify_ha_event)(struct sas_ha_struct *, enum ha_event);
void (*notify_port_event)(struct sas_phy *, enum port_event);
void (*notify_phy_event)(struct sas_phy *, enum phy_event);
When sas_register_ha() returns, those are set and can be
called by the LLDD to notify the SAS layer of such events.
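For example, after receiving an IDENTIFY frame an LLDD might report it roughly as sketched below. Only notify_port_event(), the sas_phy fields and PORTE_BYTES_DMAED come from the interface described in this document; my_phy, my_sas_ha and their frame buffer and lock members are hypothetical driver-side structures, so treat this as a sketch rather than a reference implementation.
/* Hedged sketch: LLDD reports a received IDENTIFY frame to the SAS layer.
 * my_phy/my_sas_ha are the hypothetical driver structures shown earlier,
 * assumed here to carry their own frame buffer and lock. */
#include <linux/spinlock.h>
#include <linux/string.h>
#include <scsi/libsas.h>

static void my_identify_rcvd(struct my_sas_ha *my_ha, struct my_phy *phy,
                             const u8 *frame, size_t len)
{
        unsigned long flags;

        spin_lock_irqsave(&phy->frame_lock, flags);   /* driver's own lock */
        memcpy(phy->frame_buf, frame, len);           /* frame_rcvd points here */
        phy->sas_phy.frame_rcvd_size = len;
        spin_unlock_irqrestore(&phy->frame_lock, flags);

        /* Set by the SAS layer once sas_register_ha() has returned. */
        my_ha->sas_ha.notify_port_event(&phy->sas_phy, PORTE_BYTES_DMAED);
}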
The port notification:
/* The class calls these to notify the LLDD of an event. */
void (*lldd_port_formed)(struct sas_phy *);
void (*lldd_port_deformed)(struct sas_phy *);
If the LLDD wants notification when a port has been formed
or deformed it sets those to a function satisfying the type.
A SAS LLDD should also implement at least one of the Task
Management Functions (TMFs) described in SAM:
/* Task Management Functions. Must be called from process context. */
int (*lldd_abort_task)(struct sas_task *);
int (*lldd_abort_task_set)(struct domain_device *, u8 *lun);
int (*lldd_clear_aca)(struct domain_device *, u8 *lun);
int (*lldd_clear_task_set)(struct domain_device *, u8 *lun);
int (*lldd_I_T_nexus_reset)(struct domain_device *);
int (*lldd_lu_reset)(struct domain_device *, u8 *lun);
int (*lldd_query_task)(struct sas_task *);
For more information please read SAM from T10.org.
Port and Adapter management:
/* Port and Adapter management */
int (*lldd_clear_nexus_port)(struct sas_port *);
int (*lldd_clear_nexus_ha)(struct sas_ha_struct *);
A SAS LLDD should implement at least one of those.
Phy management:
/* Phy management */
int (*lldd_control_phy)(struct sas_phy *, enum phy_func);
lldd_ha -- set this to point to your HA struct. You can also
use container_of if you embedded it as shown above.
A sample initialization and registration function
can look like this (called last thing from probe())
*but* before you enable the phys to do OOB:
static int register_sas_ha(struct my_sas_ha *my_ha)
{
int i;
static struct sas_phy *sas_phys[MAX_PHYS];
static struct sas_port *sas_ports[MAX_PHYS];
my_ha->sas_ha.sas_addr = &my_ha->sas_addr[0];
for (i = 0; i < MAX_PHYS; i++) {
sas_phys[i] = &my_ha->phys[i].sas_phy;
sas_ports[i] = &my_ha->sas_ports[i];
}
my_ha->sas_ha.sas_phy = sas_phys;
my_ha->sas_ha.sas_port = sas_ports;
my_ha->sas_ha.num_phys = MAX_PHYS;
my_ha->sas_ha.lldd_port_formed = my_port_formed;
my_ha->sas_ha.lldd_dev_found = my_dev_found;
my_ha->sas_ha.lldd_dev_gone = my_dev_gone;
my_ha->sas_ha.lldd_max_execute_num = lldd_max_execute_num; (1)
my_ha->sas_ha.lldd_queue_size = ha_can_queue;
my_ha->sas_ha.lldd_execute_task = my_execute_task;
my_ha->sas_ha.lldd_abort_task = my_abort_task;
my_ha->sas_ha.lldd_abort_task_set = my_abort_task_set;
my_ha->sas_ha.lldd_clear_aca = my_clear_aca;
my_ha->sas_ha.lldd_clear_task_set = my_clear_task_set;
my_ha->sas_ha.lldd_I_T_nexus_reset= NULL; (2)
my_ha->sas_ha.lldd_lu_reset = my_lu_reset;
my_ha->sas_ha.lldd_query_task = my_query_task;
my_ha->sas_ha.lldd_clear_nexus_port = my_clear_nexus_port;
my_ha->sas_ha.lldd_clear_nexus_ha = my_clear_nexus_ha;
my_ha->sas_ha.lldd_control_phy = my_control_phy;
return sas_register_ha(&my_ha->sas_ha);
}
(1) This is normally an LLDD parameter, something along the
lines of a task collector. What it tells the SAS Layer is
whether the SAS layer should run in Direct Mode (default:
value 0 or 1) or Task Collector Mode (value greater than 1).
In Direct Mode, the SAS Layer calls Execute Task as soon as
it has a command to send to the SDS, _and_ this is a single
command, i.e. not linked.
Some hardware (e.g. aic94xx) has the capability to DMA more
than one task at a time (interrupt) from host memory. Task
Collector Mode is an optional feature for HAs which support
this in their hardware. (Again, it is completely optional
even if your hardware supports it.)
In Task Collector Mode, the SAS Layer would do _natural_
coalescing of tasks and at the appropriate moment it would
call your driver to DMA more than one task in a single HA
interrupt. A DBMS may want to use this by setting
lldd_max_execute_num to something greater than 1 via
insmod/modprobe.
(2) SAS 1.1 does not define I_T Nexus Reset TMF.
Events
------
Events are _the only way_ a SAS LLDD notifies the SAS layer
of anything. There is no other method or way for an LLDD to tell
the SAS layer of anything happening internally or in the SAS
domain.
Phy events:
PHYE_LOSS_OF_SIGNAL, (C)
PHYE_OOB_DONE,
PHYE_OOB_ERROR, (C)
PHYE_SPINUP_HOLD.
Port events, passed on a _phy_:
PORTE_BYTES_DMAED, (M)
PORTE_BROADCAST_RCVD, (E)
PORTE_LINK_RESET_ERR, (C)
PORTE_TIMER_EVENT, (C)
PORTE_HARD_RESET.
Host Adapter event:
HAE_RESET
A SAS LLDD should be able to generate
- at least one event from group C (choice),
- events marked M (mandatory) are mandatory (only one),
- events marked E (expander) if it wants the SAS layer
to handle domain revalidation (only one such).
- Unmarked events are optional.
Meaning:
HAE_RESET -- when your HA has experienced an internal error and was reset.
PORTE_BYTES_DMAED -- on receiving an IDENTIFY/FIS frame
PORTE_BROADCAST_RCVD -- on receiving a primitive
PORTE_LINK_RESET_ERR -- timer expired, loss of signal, loss
of DWS, etc. (*)
PORTE_TIMER_EVENT -- DWS reset timeout timer expired (*)
PORTE_HARD_RESET -- Hard Reset primitive received.
PHYE_LOSS_OF_SIGNAL -- the device is gone (*)
PHYE_OOB_DONE -- OOB went fine and oob_mode is valid
PHYE_OOB_ERROR -- Error while doing OOB, the device probably
got disconnected. (*)
PHYE_SPINUP_HOLD -- SATA is present, COMWAKE not sent.
(*) should set/clear the appropriate fields in the phy,
or alternatively call the inlined sas_phy_disconnected()
which is just a helper, from their tasklet.
The Execute Command SCSI RPC:
int (*lldd_execute_task)(struct sas_task *, int num,
unsigned long gfp_flags);
Used to queue a task to the SAS LLDD. @task is the task(s) to
be executed. @num is the number of tasks being
queued in this function call (they are linked together via
task::list). @gfp_flags is the gfp mask defining the
context of the caller.
This function should implement the Execute Command SCSI RPC,
or if you're sending a SCSI Task as linked commands, you
should also use this function.
That is, when lldd_execute_task() is called, the command(s)
go out on the transport *immediately*. There is *no*
queuing of any sort and at any level in a SAS LLDD.
The use of task::list is two-fold, one for linked commands,
the other discussed below.
It is possible to queue up more than one task at a time, by
initializing the list element of struct sas_task, and
passing the number of tasks enlisted in this manner in num.
Returns: -SAS_QUEUE_FULL, -ENOMEM, nothing was queued;
0, the task(s) were queued.
If you want to pass num > 1, then either
A) you're the only caller of this function and keep track
of what you've queued to the LLDD, or
B) you know what you're doing and have a strategy of
retrying.
As opposed to queuing one task at a time (function call),
batch queuing of tasks, by having num > 1, greatly
simplifies LLDD code, sequencer code, and _hardware design_,
and has some performance advantages in certain situations
(DBMS).
The LLDD advertises if it can take more than one command at
a time at lldd_execute_task(), by setting the
lldd_max_execute_num parameter (controlled by "collector"
module parameter in aic94xx SAS LLDD).
You should leave this to the default 1, unless you know what
you're doing.
This is a function of the LLDD, to which the SAS layer can
cater.
int lldd_queue_size
The host adapter's queue size. This is the maximum
number of commands the lldd can have pending to domain
devices on behalf of all upper layers submitting through
lldd_execute_task().
You really want to set this to something (much) larger than
1.
This _really_ has absolutely nothing to do with queuing.
There is no queuing in SAS LLDDs.
struct sas_task {
dev -- the device this task is destined to
list -- must be initialized (INIT_LIST_HEAD)
task_proto -- _one_ of enum sas_proto
scatter -- pointer to scatter gather list array
num_scatter -- number of elements in scatter
total_xfer_len -- total number of bytes expected to be transferred
data_dir -- PCI_DMA_...
task_done -- callback when the task has finished execution
};
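Tying the batching rules and the struct sas_task fields above together, here is a hedged sketch of how an LLDD's lldd_execute_task() might walk a batch of linked tasks. my_hw_free_slots() and my_hw_queue_one() are hypothetical driver internals; only the signature, the task::list linkage and the return values come from this document.
/* Hedged sketch of batch submission in lldd_execute_task().
 * my_hw_free_slots()/my_hw_queue_one() are hypothetical driver helpers. */
#include <linux/list.h>
#include <scsi/libsas.h>

static int my_hw_free_slots(void);
static void my_hw_queue_one(struct sas_task *t, unsigned long gfp_flags);

static int my_execute_task(struct sas_task *task, int num,
                           unsigned long gfp_flags)
{
        struct sas_task *t = task;
        int i;

        /* -SAS_QUEUE_FULL means nothing was queued, so check the whole
         * batch up front rather than failing half way through. */
        if (my_hw_free_slots() < num)
                return -SAS_QUEUE_FULL;

        for (i = 0; i < num; i++) {
                /* t->dev gives the destination domain device,
                 * t->scatter/t->num_scatter describe the data */
                my_hw_queue_one(t, gfp_flags);
                if (i + 1 < num)
                        t = list_entry(t->list.next, struct sas_task, list);
        }
        return 0;               /* all num task(s) were queued */
}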
When an external entity, entity other than the LLDD or the
SAS Layer, wants to work with a struct domain_device, it
_must_ call kobject_get() when getting a handle on the
device and kobject_put() when it is done with the device.
This does two things:
A) implements proper kfree() for the device;
B) increments/decrements the kref for all players:
domain_device
all domain_device's ... (if past an expander)
port
host adapter
pci device
and up the ladder, etc.
DISCOVERY
---------
The sysfs tree has the following purposes:
a) It shows you the physical layout of the SAS domain at
the current time, i.e. how the domain looks in the
physical world right now.
b) Shows some device parameters _at_discovery_time_.
This is a link to the tree(1) program, very useful in
viewing the SAS domain:
ftp://mama.indstate.edu/linux/tree/
I expect user space applications to actually create a
graphical interface of this.
That is, the sysfs domain tree doesn't show or keep state if
you e.g., change the meaning of the READY LED MEANING
setting, but it does show you the current connection status
of the domain device.
Keeping internal device state changes is responsibility of
upper layers (Command set drivers) and user space.
When a device or devices are unplugged from the domain, this
is reflected in the sysfs tree immediately, and the device(s)
are removed from the system.
The structure domain_device describes any device in the SAS
domain. It is completely managed by the SAS layer. A task
points to a domain device; this is how the SAS LLDD knows
where to send the task(s). A SAS LLDD only reads the
contents of the domain_device structure, but it never creates
or destroys one.
Expander management from User Space
-----------------------------------
In each expander directory in sysfs, there is a file called
"smp_portal". It is a binary sysfs attribute file, which
implements an SMP portal (Note: this is *NOT* an SMP port),
to which user space applications can send SMP requests and
receive SMP responses.
Functionality is deceptively simple:
1. Build the SMP frame you want to send. The format and layout
   is described in the SAS spec. Leave the CRC field equal to 0.
2. open(2) the expander's SMP portal sysfs file in RW mode.
3. write(2) the frame you built in step 1.
4. read(2) the amount of data you expect to receive for the frame you
   built. If you receive a different amount of data than you expected,
   then there was some kind of error.
5. close(2) the file when you are done.
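A minimal user-space sketch of these steps, sending a REPORT GENERAL request through an expander's smp_portal file, is shown below. The exact frame bytes and the response handling are assumptions based on the SAS spec and are illustrative only; expander_conf.c remains the authoritative example.
/* Hedged sketch: talk to an expander through its sysfs smp_portal file.
 * Frame bytes and response handling are assumptions; see expander_conf.c
 * for the real thing. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        /* SMP REQUEST (0x40), function REPORT GENERAL (0x00), CRC left 0 */
        unsigned char req[8] = { 0x40, 0x00 };
        unsigned char resp[1024];
        ssize_t n;
        int fd;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <path-to-smp_portal>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDWR);                      /* step 2 */
        if (fd < 0)
                return 1;
        if (write(fd, req, sizeof(req)) != sizeof(req))  /* step 3 */
                return 1;
        n = read(fd, resp, sizeof(resp));                /* step 4 */
        if (n <= 0)
                return 1;
        printf("received %zd bytes of SMP response\n", n);
        close(fd);                                       /* step 5 */
        return 0;
}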
All this process is shown in detail in the function do_smp_func()
and its callers, in the file "expander_conf.c".
The kernel functionality is implemented in the file
"sas_expander.c".
The program "expander_conf.c" implements this. It takes one
argument, the sysfs file name of the SMP portal to the
expander, and gives expander information, including routing
tables.
The SMP portal gives you complete control of the expander,
so please be careful.
......@@ -758,6 +758,7 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
position_fix - Fix DMA pointer (0 = auto, 1 = none, 2 = POSBUF, 3 = FIFO size)
single_cmd - Use single immediate commands to communicate with
codecs (for debugging only)
disable_msi - Disable Message Signaled Interrupt (MSI)
This module supports one card and autoprobe.
......@@ -778,11 +779,16 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
6stack-digout 6-jack with a SPDIF out
w810 3-jack
z71v 3-jack (HP shared SPDIF)
asus 3-jack
asus 3-jack (ASUS Mobo)
asus-w1v ASUS W1V
asus-dig ASUS with SPDIF out
asus-dig2 ASUS with SPDIF out (using GPIO2)
uniwill 3-jack
F1734 2-jack
lg LG laptop (m1 express dual)
lg-lw LG LW20 laptop
lg-lw LG LW20/LW25 laptop
tcl TCL S700
clevo Clevo laptops (m520G, m665n)
test for testing/debugging purpose, almost all controls can be
adjusted. Appearing only when compiled with
$CONFIG_SND_DEBUG=y
......@@ -790,6 +796,7 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
ALC260
hp HP machines
hp-3013 HP machines (3013-variant)
fujitsu Fujitsu S7020
acer Acer TravelMate
basic fixed pin assignment (old default model)
......@@ -797,24 +804,32 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
ALC262
fujitsu Fujitsu Laptop
hp-bpc HP xw4400/6400/8400/9400 laptops
benq Benq ED8
basic fixed pin assignment w/o SPDIF
auto auto-config reading BIOS (default)
ALC882/885
3stack-dig 3-jack with SPDIF I/O
6stack-dig 6-jack digital with SPDIF I/O
arima Arima W820Di1
auto auto-config reading BIOS (default)
ALC883/888
3stack-dig 3-jack with SPDIF I/O
6stack-dig 6-jack digital with SPDIF I/O
6stack-dig-demo 6-stack digital for Intel demo board
3stack-6ch 3-jack 6-channel
3stack-6ch-dig 3-jack 6-channel with SPDIF I/O
6stack-dig-demo 6-jack digital for Intel demo board
acer Acer laptops (Travelmate 3012WTMi, Aspire 5600, etc)
auto auto-config reading BIOS (default)
ALC861/660
3stack 3-jack
3stack-dig 3-jack with SPDIF I/O
6stack-dig 6-jack with SPDIF I/O
3stack-660 3-jack (for ALC660)
uniwill-m31 Uniwill M31 laptop
auto auto-config reading BIOS (default)
CMI9880
......@@ -843,10 +858,21 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
3stack-dig ditto with SPDIF
laptop 3-jack with hp-jack automute
laptop-dig ditto with SPDIF
auto auto-confgi reading BIOS (default)
auto auto-config reading BIOS (default)
STAC9200/9205/9220/9221/9254
ref Reference board
3stack D945 3stack
5stack D945 5stack + SPDIF
STAC7661(?)
STAC9227/9228/9229/927x
ref Reference board
3stack D965 3stack
5stack D965 5stack + SPDIF
STAC9872
vaio Setup for VAIO FE550G/SZ110
vaio-ar Setup for VAIO AR
If the default configuration doesn't work and one of the above
matches with your device, report it together with the PCI
......@@ -1213,6 +1239,14 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
Module supports only 1 card. This module has no enable option.
Module snd-mts64
----------------
Module for Ego Systems (ESI) Miditerminal 4140
This module supports multiple devices.
Requires parport (CONFIG_PARPORT).
Module snd-nm256
----------------
......
......@@ -1054,9 +1054,8 @@
<para>
For a device which allows hotplugging, you can use
<function>snd_card_free_in_thread</function>. This one will
postpone the destruction and wait in a kernel-thread until all
devices are closed.
<function>snd_card_free_when_closed</function>. This one will
postpone the destruction until all devices are closed.
</para>
</section>
......
......@@ -298,6 +298,14 @@ L: info-linux@geode.amd.com
W: http://www.amd.com/us-en/ConnectivitySolutions/TechnicalResources/0,,50_2334_2452_11363,00.html
S: Supported
AMSO1100 RNIC DRIVER
P: Tom Tucker
M: tom@opengridcomputing.com
P: Steve Wise
M: swise@opengridcomputing.com
L: openib-general@openib.org
S: Maintained
AOA (Apple Onboard Audio) ALSA DRIVER
P: Johannes Berg
M: johannes@sipsolutions.net
......@@ -991,6 +999,14 @@ EFS FILESYSTEM
W: http://aeschi.ch.eu.org/efs/
S: Orphan
EHCA (IBM GX bus InfiniBand adapter) DRIVER:
P: Hoang-Nam Nguyen
M: hnguyen@de.ibm.com
P: Christoph Raisch
M: raisch@de.ibm.com
L: openib-general@openib.org
S: Supported
EMU10K1 SOUND DRIVER
P: James Courtier-Dutton
M: James@superbug.demon.co.uk
......@@ -1783,6 +1799,13 @@ W: http://www.penguinppc.org/
L: linuxppc-embedded@ozlabs.org
S: Maintained
LINUX FOR POWERPC PA SEMI PWRFICIENT
P: Olof Johansson
M: olof@lixom.net
W: http://www.pasemi.com/
L: linuxppc-dev@ozlabs.org
S: Supported
LLC (802.2)
P: Arnaldo Carvalho de Melo
M: acme@conectiva.com.br
......@@ -2445,6 +2468,8 @@ S: Maintained
S390
P: Martin Schwidefsky
M: schwidefsky@de.ibm.com
P: Heiko Carstens
M: heiko.carstens@de.ibm.com
M: linux390@de.ibm.com
L: linux-390@vm.marist.edu
W: http://www.ibm.com/developerworks/linux/linux390/
......@@ -2459,8 +2484,8 @@ W: http://www.ibm.com/developerworks/linux/linux390/
S: Supported
S390 ZFCP DRIVER
P: Andreas Herrmann
M: aherrman@de.ibm.com
P: Swen Schillig
M: swen@vnet.ibm.com
M: linux390@de.ibm.com
L: linux-390@vm.marist.edu
W: http://www.ibm.com/developerworks/linux/linux390/
......
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 18
EXTRAVERSION = -rc7
NAME=Crazed Snow-Weasel
EXTRAVERSION =
NAME=Avast! A bilge rat!
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"
......@@ -1082,6 +1082,7 @@ help:
@echo 'Static analysers'
@echo ' checkstack - Generate a list of stack hogs'
@echo ' namespacecheck - Name space analysis on compiled kernel'
@echo ' headers_check - Sanity check on exported headers'
@echo ''
@echo 'Kernel packaging:'
@$(MAKE) $(build)=$(package-dir) help
......
......@@ -108,11 +108,8 @@ Image: vmlinux
bootstrap:
$(Q)$(MAKEBOOT) bootstrap
archmrproper:
$(Q)$(MAKE) $(build)=arch/frv/boot mrproper
archclean:
$(Q)$(MAKE) $(build)=arch/frv/boot clean
$(Q)$(MAKE) $(clean)=arch/frv/boot
archdep: scripts/mkdep symlinks
$(Q)$(MAKE) $(build)=arch/frv/boot dep
......@@ -8,6 +8,8 @@
# Copyright (C) 1995-2000 Russell King
#
targets := Image zImage bootpImage
SYSTEM =$(TOPDIR)/$(LINUX)
ZTEXTADDR = 0x02080000
......@@ -66,7 +68,6 @@ zinstall: $(CONFIGURE) zImage
# miscellany
#
mrproper clean:
$(RM) Image zImage bootpImage
# @$(MAKE) -C compressed clean
# @$(MAKE) -C bootp clean
......
......@@ -5,5 +5,8 @@
#
obj-$(CONFIG_CRYPTO_AES_586) += aes-i586.o
obj-$(CONFIG_CRYPTO_TWOFISH_586) += twofish-i586.o
aes-i586-y := aes-i586-asm.o aes.o
twofish-i586-y := twofish-i586-asm.o twofish.o
......@@ -379,12 +379,13 @@ static void gen_tabs(void)
}
static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len, u32 *flags)
unsigned int key_len)
{
int i;
u32 ss[8];
struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
const __le32 *key = (const __le32 *)in_key;
u32 *flags = &tfm->crt_flags;
/* encryption schedule */
......
/***************************************************************************
* Copyright (C) 2006 by Joachim Fritschi, <jfritschi@freenet.de> *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
* This program is distributed in the hope that it will be useful, *
* but WITHOUT ANY WARRANTY; without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
* GNU General Public License for more details. *
* *
* You should have received a copy of the GNU General Public License *
* along with this program; if not, write to the *
* Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
***************************************************************************/
.file "twofish-i586-asm.S"
.text
#include <asm/asm-offsets.h>
/* return address at 0 */
#define in_blk 12 /* input byte array address parameter*/
#define out_blk 8 /* output byte array address parameter*/
#define tfm 4 /* Twofish context structure */
#define a_offset 0
#define b_offset 4
#define c_offset 8
#define d_offset 12
/* Structure of the crypto context struct*/
#define s0 0 /* S0 Array 256 Words each */
#define s1 1024 /* S1 Array */
#define s2 2048 /* S2 Array */
#define s3 3072 /* S3 Array */
#define w 4096 /* 8 whitening keys (word) */
#define k 4128 /* key 1-32 ( word ) */
/* define a few register aliases to allow macro substitution */
#define R0D %eax
#define R0B %al
#define R0H %ah
#define R1D %ebx
#define R1B %bl
#define R1H %bh
#define R2D %ecx
#define R2B %cl
#define R2H %ch
#define R3D %edx
#define R3B %dl
#define R3H %dh
/* performs input whitening */
#define input_whitening(src,context,offset)\
xor w+offset(context), src;
/* performs output whitening */
#define output_whitening(src,context,offset)\
xor w+16+offset(context), src;
/*
* a input register containing a (rotated 16)
* b input register containing b
* c input register containing c
* d input register containing d (already rol $1)
* operations on a and b are interleaved to increase performance
*/
#define encrypt_round(a,b,c,d,round)\
push d ## D;\
movzx b ## B, %edi;\
mov s1(%ebp,%edi,4),d ## D;\
movzx a ## B, %edi;\
mov s2(%ebp,%edi,4),%esi;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor s2(%ebp,%edi,4),d ## D;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s3(%ebp,%edi,4),%esi;\
movzx b ## B, %edi;\
xor s3(%ebp,%edi,4),d ## D;\
movzx a ## B, %edi;\
xor (%ebp,%edi,4), %esi;\
movzx b ## H, %edi;\
ror $15, b ## D;\
xor (%ebp,%edi,4), d ## D;\
movzx a ## H, %edi;\
xor s1(%ebp,%edi,4),%esi;\
pop %edi;\
add d ## D, %esi;\
add %esi, d ## D;\
add k+round(%ebp), %esi;\
xor %esi, c ## D;\
rol $15, c ## D;\
add k+4+round(%ebp),d ## D;\
xor %edi, d ## D;
/*
* a input register containing a (rotated 16)
* b input register containing b
* c input register containing c
* d input register containing d (already rol $1)
* operations on a and b are interleaved to increase performance
* last round has different rotations for the output preparation
*/
#define encrypt_last_round(a,b,c,d,round)\
push d ## D;\
movzx b ## B, %edi;\
mov s1(%ebp,%edi,4),d ## D;\
movzx a ## B, %edi;\
mov s2(%ebp,%edi,4),%esi;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor s2(%ebp,%edi,4),d ## D;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s3(%ebp,%edi,4),%esi;\
movzx b ## B, %edi;\
xor s3(%ebp,%edi,4),d ## D;\
movzx a ## B, %edi;\
xor (%ebp,%edi,4), %esi;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor (%ebp,%edi,4), d ## D;\
movzx a ## H, %edi;\
xor s1(%ebp,%edi,4),%esi;\
pop %edi;\
add d ## D, %esi;\
add %esi, d ## D;\
add k+round(%ebp), %esi;\
xor %esi, c ## D;\
ror $1, c ## D;\
add k+4+round(%ebp),d ## D;\
xor %edi, d ## D;
/*
* a input register containing a
* b input register containing b (rotated 16)
* c input register containing c
* d input register containing d (already rol $1)
* operations on a and b are interleaved to increase performance
*/
#define decrypt_round(a,b,c,d,round)\
push c ## D;\
movzx a ## B, %edi;\
mov (%ebp,%edi,4), c ## D;\
movzx b ## B, %edi;\
mov s3(%ebp,%edi,4),%esi;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s1(%ebp,%edi,4),c ## D;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor (%ebp,%edi,4), %esi;\
movzx a ## B, %edi;\
xor s2(%ebp,%edi,4),c ## D;\
movzx b ## B, %edi;\
xor s1(%ebp,%edi,4),%esi;\
movzx a ## H, %edi;\
ror $15, a ## D;\
xor s3(%ebp,%edi,4),c ## D;\
movzx b ## H, %edi;\
xor s2(%ebp,%edi,4),%esi;\
pop %edi;\
add %esi, c ## D;\
add c ## D, %esi;\
add k+round(%ebp), c ## D;\
xor %edi, c ## D;\
add k+4+round(%ebp),%esi;\
xor %esi, d ## D;\
rol $15, d ## D;
/*
* a input register containing a
* b input register containing b (rotated 16)
* c input register containing c
* d input register containing d (already rol $1)
* operations on a and b are interleaved to increase performance
* last round has different rotations for the output preparation
*/
#define decrypt_last_round(a,b,c,d,round)\
push c ## D;\
movzx a ## B, %edi;\
mov (%ebp,%edi,4), c ## D;\
movzx b ## B, %edi;\
mov s3(%ebp,%edi,4),%esi;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s1(%ebp,%edi,4),c ## D;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor (%ebp,%edi,4), %esi;\
movzx a ## B, %edi;\
xor s2(%ebp,%edi,4),c ## D;\
movzx b ## B, %edi;\
xor s1(%ebp,%edi,4),%esi;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s3(%ebp,%edi,4),c ## D;\
movzx b ## H, %edi;\
xor s2(%ebp,%edi,4),%esi;\
pop %edi;\
add %esi, c ## D;\
add c ## D, %esi;\
add k+round(%ebp), c ## D;\
xor %edi, c ## D;\
add k+4+round(%ebp),%esi;\
xor %esi, d ## D;\
ror $1, d ## D;
.align 4
.global twofish_enc_blk
.global twofish_dec_blk
twofish_enc_blk:
push %ebp /* save registers according to calling convention*/
push %ebx
push %esi
push %edi
mov tfm + 16(%esp), %ebp /* abuse the base pointer: set new base pointer to the crypto tfm */
add $crypto_tfm_ctx_offset, %ebp /* ctx address */
mov in_blk+16(%esp),%edi /* input address in edi */
mov (%edi), %eax
mov b_offset(%edi), %ebx
mov c_offset(%edi), %ecx
mov d_offset(%edi), %edx
input_whitening(%eax,%ebp,a_offset)
ror $16, %eax
input_whitening(%ebx,%ebp,b_offset)
input_whitening(%ecx,%ebp,c_offset)
input_whitening(%edx,%ebp,d_offset)
rol $1, %edx
encrypt_round(R0,R1,R2,R3,0);
encrypt_round(R2,R3,R0,R1,8);
encrypt_round(R0,R1,R2,R3,2*8);
encrypt_round(R2,R3,R0,R1,3*8);
encrypt_round(R0,R1,R2,R3,4*8);
encrypt_round(R2,R3,R0,R1,5*8);
encrypt_round(R0,R1,R2,R3,6*8);
encrypt_round(R2,R3,R0,R1,7*8);
encrypt_round(R0,R1,R2,R3,8*8);
encrypt_round(R2,R3,R0,R1,9*8);
encrypt_round(R0,R1,R2,R3,10*8);
encrypt_round(R2,R3,R0,R1,11*8);
encrypt_round(R0,R1,R2,R3,12*8);
encrypt_round(R2,R3,R0,R1,13*8);
encrypt_round(R0,R1,R2,R3,14*8);
encrypt_last_round(R2,R3,R0,R1,15*8);
output_whitening(%eax,%ebp,c_offset)
output_whitening(%ebx,%ebp,d_offset)
output_whitening(%ecx,%ebp,a_offset)
output_whitening(%edx,%ebp,b_offset)
mov out_blk+16(%esp),%edi;
mov %eax, c_offset(%edi)
mov %ebx, d_offset(%edi)
mov %ecx, (%edi)
mov %edx, b_offset(%edi)
pop %edi
pop %esi
pop %ebx
pop %ebp
mov $1, %eax
ret
twofish_dec_blk:
push %ebp /* save registers according to calling convention*/
push %ebx
push %esi
push %edi
mov tfm + 16(%esp), %ebp /* abuse the base pointer: set new base pointer to the crypto tfm */
add $crypto_tfm_ctx_offset, %ebp /* ctx address */
mov in_blk+16(%esp),%edi /* input address in edi */
mov (%edi), %eax
mov b_offset(%edi), %ebx
mov c_offset(%edi), %ecx
mov d_offset(%edi), %edx
output_whitening(%eax,%ebp,a_offset)
output_whitening(%ebx,%ebp,b_offset)
ror $16, %ebx
output_whitening(%ecx,%ebp,c_offset)
output_whitening(%edx,%ebp,d_offset)
rol $1, %ecx
decrypt_round(R0,R1,R2,R3,15*8);
decrypt_round(R2,R3,R0,R1,14*8);
decrypt_round(R0,R1,R2,R3,13*8);
decrypt_round(R2,R3,R0,R1,12*8);
decrypt_round(R0,R1,R2,R3,11*8);
decrypt_round(R2,R3,R0,R1,10*8);
decrypt_round(R0,R1,R2,R3,9*8);
decrypt_round(R2,R3,R0,R1,8*8);
decrypt_round(R0,R1,R2,R3,7*8);
decrypt_round(R2,R3,R0,R1,6*8);
decrypt_round(R0,R1,R2,R3,5*8);
decrypt_round(R2,R3,R0,R1,4*8);
decrypt_round(R0,R1,R2,R3,3*8);
decrypt_round(R2,R3,R0,R1,2*8);
decrypt_round(R0,R1,R2,R3,1*8);
decrypt_last_round(R2,R3,R0,R1,0);
input_whitening(%eax,%ebp,c_offset)
input_whitening(%ebx,%ebp,d_offset)
input_whitening(%ecx,%ebp,a_offset)
input_whitening(%edx,%ebp,b_offset)
mov out_blk+16(%esp),%edi;
mov %eax, c_offset(%edi)
mov %ebx, d_offset(%edi)
mov %ecx, (%edi)
mov %edx, b_offset(%edi)
pop %edi
pop %esi
pop %ebx
pop %ebp
mov $1, %eax
ret
/*
* Glue Code for optimized 586 assembler version of TWOFISH
*
* Originally Twofish for GPG
* By Matthew Skala <mskala@ansuz.sooke.bc.ca>, July 26, 1998
* 256-bit key length added March 20, 1999
* Some modifications to reduce the text size by Werner Koch, April, 1998
* Ported to the kerneli patch by Marc Mutz <Marc@Mutz.com>
* Ported to CryptoAPI by Colin Slater <hoho@tacomeat.net>
*
* The original author has disclaimed all copyright interest in this
* code and thus put it in the public domain. The subsequent authors
* have put this under the GNU General Public License.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
* USA
*
* This code is a "clean room" implementation, written from the paper
* _Twofish: A 128-Bit Block Cipher_ by Bruce Schneier, John Kelsey,
* Doug Whiting, David Wagner, Chris Hall, and Niels Ferguson, available
* through http://www.counterpane.com/twofish.html
*
* For background information on multiplication in finite fields, used for
* the matrix operations in the key schedule, see the book _Contemporary
* Abstract Algebra_ by Joseph A. Gallian, especially chapter 22 in the
* Third Edition.
*/
#include <crypto/twofish.h>
#include <linux/crypto.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/types.h>
asmlinkage void twofish_enc_blk(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
asmlinkage void twofish_dec_blk(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
static void twofish_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
twofish_enc_blk(tfm, dst, src);
}
static void twofish_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
twofish_dec_blk(tfm, dst, src);
}
static struct crypto_alg alg = {
.cra_name = "twofish",
.cra_driver_name = "twofish-i586",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = TF_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct twofish_ctx),
.cra_alignmask = 3,
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(alg.cra_list),
.cra_u = {
.cipher = {
.cia_min_keysize = TF_MIN_KEY_SIZE,
.cia_max_keysize = TF_MAX_KEY_SIZE,
.cia_setkey = twofish_setkey,
.cia_encrypt = twofish_encrypt,
.cia_decrypt = twofish_decrypt
}
}
};
static int __init init(void)
{
return crypto_register_alg(&alg);
}
static void __exit fini(void)
{
crypto_unregister_alg(&alg);
}
module_init(init);
module_exit(fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION ("Twofish Cipher Algorithm, i586 asm optimized");
MODULE_ALIAS("twofish");
......@@ -32,6 +32,7 @@
#include <linux/seq_file.h>
#include <linux/compiler.h>
#include <linux/sched.h> /* current */
#include <linux/dmi.h>
#include <asm/io.h>
#include <asm/delay.h>
#include <asm/uaccess.h>
......@@ -387,6 +388,33 @@ static int acpi_cpufreq_early_init_acpi(void)
return acpi_processor_preregister_performance(acpi_perf_data);
}
/*
* Some BIOSes do SW_ANY coordination internally, either set it up in hw
* or do it in BIOS firmware and won't inform the OS about it. If not
* detected, this has a side effect of making the CPU run at a different speed
* than the OS intended it to run at. Detect it and handle it cleanly.
*/
static int bios_with_sw_any_bug;
static int __init sw_any_bug_found(struct dmi_system_id *d)
{
bios_with_sw_any_bug = 1;
return 0;
}
static struct dmi_system_id __initdata sw_any_bug_dmi_table[] = {
{
.callback = sw_any_bug_found,
.ident = "Supermicro Server X6DLP",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Supermicro"),
DMI_MATCH(DMI_BIOS_VERSION, "080010"),
DMI_MATCH(DMI_PRODUCT_NAME, "X6DLP"),
},
},
{ }
};
static int
acpi_cpufreq_cpu_init (
struct cpufreq_policy *policy)
......@@ -422,8 +450,17 @@ acpi_cpufreq_cpu_init (
* coordination is required.
*/
if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL ||
policy->shared_type == CPUFREQ_SHARED_TYPE_ANY)
policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
policy->cpus = perf->shared_cpu_map;
}
#ifdef CONFIG_SMP
dmi_check_system(sw_any_bug_dmi_table);
if (bios_with_sw_any_bug && cpus_weight(policy->cpus) == 1) {
policy->shared_type = CPUFREQ_SHARED_TYPE_ALL;
policy->cpus = cpu_core_map[cpu];
}
#endif
if (cpu_has(c, X86_FEATURE_CONSTANT_TSC)) {
acpi_cpufreq_driver.flags |= CPUFREQ_CONST_LOOPS;
......
......@@ -27,6 +27,7 @@
#include <linux/moduleparam.h>
#include <linux/init.h>
#include <linux/cpufreq.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/string.h>
......@@ -52,18 +53,26 @@
#define CPU_NEHEMIAH 5
static int cpu_model;
static unsigned int numscales=16, numvscales;
static unsigned int numscales=16;
static unsigned int fsb;
static int minvid, maxvid;
static struct mV_pos *vrm_mV_table;
static unsigned char *mV_vrm_table;
struct f_msr {
unsigned char vrm;
};
static struct f_msr f_msr_table[32];
static unsigned int highest_speed, lowest_speed; /* kHz */
static unsigned int minmult, maxmult;
static int can_scale_voltage;
static int vrmrev;
static struct acpi_processor *pr = NULL;
static struct acpi_processor_cx *cx = NULL;
static int port22_en;
/* Module parameters */
static int dont_scale_voltage;
static int scale_voltage;
static int ignore_latency;
#define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, "longhaul", msg)
......@@ -71,7 +80,6 @@ static int dont_scale_voltage;
/* Clock ratios multiplied by 10 */
static int clock_ratio[32];
static int eblcr_table[32];
static int voltage_table[32];
static unsigned int highest_speed, lowest_speed; /* kHz */
static int longhaul_version;
static struct cpufreq_frequency_table *longhaul_table;
......@@ -124,10 +132,9 @@ static int longhaul_get_cpu_mult(void)
/* For processor with BCR2 MSR */
static void do_longhaul1(int cx_address, unsigned int clock_ratio_index)
static void do_longhaul1(unsigned int clock_ratio_index)
{
union msr_bcr2 bcr2;
u32 t;
rdmsrl(MSR_VIA_BCR2, bcr2.val);
/* Enable software clock multiplier */
......@@ -136,13 +143,11 @@ static void do_longhaul1(int cx_address, unsigned int clock_ratio_index)
/* Sync to timer tick */
safe_halt();
ACPI_FLUSH_CPU_CACHE();
/* Change frequency on next halt or sleep */
wrmsrl(MSR_VIA_BCR2, bcr2.val);
/* Invoke C3 */
inb(cx_address);
/* Dummy op - must do something useless after P_LVL3 read */
t = inl(acpi_fadt.xpm_tmr_blk.address);
/* Invoke transition */
ACPI_FLUSH_CPU_CACHE();
halt();
/* Disable software clock multiplier */
local_irq_disable();
......@@ -164,11 +169,16 @@ static void do_powersaver(int cx_address, unsigned int clock_ratio_index)
longhaul.bits.SoftBusRatio4 = (clock_ratio_index & 0x10) >> 4;
longhaul.bits.EnableSoftBusRatio = 1;
if (can_scale_voltage) {
longhaul.bits.SoftVID = f_msr_table[clock_ratio_index].vrm;
longhaul.bits.EnableSoftVID = 1;
}
/* Sync to timer tick */
safe_halt();
ACPI_FLUSH_CPU_CACHE();
/* Change frequency on next halt or sleep */
wrmsrl(MSR_VIA_LONGHAUL, longhaul.val);
ACPI_FLUSH_CPU_CACHE();
/* Invoke C3 */
inb(cx_address);
/* Dummy op - must do something useless after P_LVL3 read */
......@@ -227,10 +237,13 @@ static void longhaul_setstate(unsigned int clock_ratio_index)
outb(0xFF,0xA1); /* Overkill */
outb(0xFE,0x21); /* TMR0 only */
/* Disable bus master arbitration */
if (pr->flags.bm_check) {
if (pr->flags.bm_control) {
/* Disable bus master arbitration */
acpi_set_register(ACPI_BITREG_ARB_DISABLE, 1,
ACPI_MTX_DO_NOT_LOCK);
} else if (port22_en) {
/* Disable AGP and PCI arbiters */
outb(3, 0x22);
}
switch (longhaul_version) {
......@@ -244,7 +257,7 @@ static void longhaul_setstate(unsigned int clock_ratio_index)
*/
case TYPE_LONGHAUL_V1:
case TYPE_LONGHAUL_V2:
do_longhaul1(cx->address, clock_ratio_index);
do_longhaul1(clock_ratio_index);
break;
/*
......@@ -259,14 +272,20 @@ static void longhaul_setstate(unsigned int clock_ratio_index)
* to work in practice.
*/
case TYPE_POWERSAVER:
/* Don't allow wakeup */
acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 0,
ACPI_MTX_DO_NOT_LOCK);
do_powersaver(cx->address, clock_ratio_index);
break;
}
/* Enable bus master arbitration */
if (pr->flags.bm_check) {
if (pr->flags.bm_control) {
/* Enable bus master arbitration */
acpi_set_register(ACPI_BITREG_ARB_DISABLE, 0,
ACPI_MTX_DO_NOT_LOCK);
} else if (port22_en) {
/* Enable arbiters */
outb(0, 0x22);
}
outb(pic2_mask,0xA1); /* restore mask */
......@@ -446,53 +465,57 @@ static int __init longhaul_get_ranges(void)
static void __init longhaul_setup_voltagescaling(void)
{
union msr_longhaul longhaul;
struct mV_pos minvid, maxvid;
unsigned int j, speed, pos, kHz_step, numvscales;
rdmsrl (MSR_VIA_LONGHAUL, longhaul.val);
if (!(longhaul.bits.RevisionID & 1))
rdmsrl(MSR_VIA_LONGHAUL, longhaul.val);
if (!(longhaul.bits.RevisionID & 1)) {
printk(KERN_INFO PFX "Voltage scaling not supported by CPU.\n");
return;
}
if (!longhaul.bits.VRMRev) {
printk (KERN_INFO PFX "VRM 8.5\n");
vrm_mV_table = &vrm85_mV[0];
mV_vrm_table = &mV_vrm85[0];
} else {
printk (KERN_INFO PFX "Mobile VRM\n");
vrm_mV_table = &mobilevrm_mV[0];
mV_vrm_table = &mV_mobilevrm[0];
}
minvid = longhaul.bits.MinimumVID;
maxvid = longhaul.bits.MaximumVID;
vrmrev = longhaul.bits.VRMRev;
minvid = vrm_mV_table[longhaul.bits.MinimumVID];
maxvid = vrm_mV_table[longhaul.bits.MaximumVID];
numvscales = maxvid.pos - minvid.pos + 1;
kHz_step = (highest_speed - lowest_speed) / numvscales;
if (minvid == 0 || maxvid == 0) {
if (minvid.mV == 0 || maxvid.mV == 0 || minvid.mV > maxvid.mV) {
printk (KERN_INFO PFX "Bogus values Min:%d.%03d Max:%d.%03d. "
"Voltage scaling disabled.\n",
minvid/1000, minvid%1000, maxvid/1000, maxvid%1000);
minvid.mV/1000, minvid.mV%1000, maxvid.mV/1000, maxvid.mV%1000);
return;
}
if (minvid == maxvid) {
if (minvid.mV == maxvid.mV) {
printk (KERN_INFO PFX "Claims to support voltage scaling but min & max are "
"both %d.%03d. Voltage scaling disabled\n",
maxvid/1000, maxvid%1000);
maxvid.mV/1000, maxvid.mV%1000);
return;
}
if (vrmrev==0) {
dprintk ("VRM 8.5\n");
memcpy (voltage_table, vrm85scales, sizeof(voltage_table));
numvscales = (voltage_table[maxvid]-voltage_table[minvid])/25;
} else {
dprintk ("Mobile VRM\n");
memcpy (voltage_table, mobilevrmscales, sizeof(voltage_table));
numvscales = (voltage_table[maxvid]-voltage_table[minvid])/5;
printk(KERN_INFO PFX "Max VID=%d.%03d Min VID=%d.%03d, %d possible voltage scales\n",
maxvid.mV/1000, maxvid.mV%1000,
minvid.mV/1000, minvid.mV%1000,
numvscales);
j = 0;
while (longhaul_table[j].frequency != CPUFREQ_TABLE_END) {
speed = longhaul_table[j].frequency;
pos = (speed - lowest_speed) / kHz_step + minvid.pos;
f_msr_table[longhaul_table[j].index].vrm = mV_vrm_table[pos];
j++;
}
/* Current voltage isn't readable at first, so we need to
set it to a known value. The spec says to use maxvid */
longhaul.bits.RevisionKey = longhaul.bits.RevisionID; /* FIXME: This is bad. */
longhaul.bits.EnableSoftVID = 1;
longhaul.bits.SoftVID = maxvid;
wrmsrl (MSR_VIA_LONGHAUL, longhaul.val);
minvid = voltage_table[minvid];
maxvid = voltage_table[maxvid];
dprintk ("Min VID=%d.%03d Max VID=%d.%03d, %d possible voltage scales\n",
maxvid/1000, maxvid%1000, minvid/1000, minvid%1000, numvscales);
can_scale_voltage = 1;
}
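As a worked example of the interpolation above (all numbers assumed purely for illustration): with lowest_speed = 600000 kHz, highest_speed = 1000000 kHz and numvscales = 16, kHz_step = (1000000 - 600000) / 16 = 25000. A longhaul_table[] entry at 900000 kHz then gets pos = (900000 - 600000) / 25000 + minvid.pos = 12 + minvid.pos, and its f_msr_table[].vrm entry becomes mV_vrm_table[pos]; in other words, the available VID positions between minvid and maxvid are spread linearly across the supported frequency range.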
......@@ -540,21 +563,33 @@ static acpi_status longhaul_walk_callback(acpi_handle obj_handle,
return 1;
}
/* VIA doesn't support the PM2 register, but has something similar */
static int enable_arbiter_disable(void)
{
struct pci_dev *dev;
u8 pci_cmd;
/* Find PLE133 host bridge */
dev = pci_find_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8601_0, NULL);
if (dev != NULL) {
/* Enable access to port 0x22 */
pci_read_config_byte(dev, 0x78, &pci_cmd);
if ( !(pci_cmd & 1<<7) ) {
pci_cmd |= 1<<7;
pci_write_config_byte(dev, 0x78, pci_cmd);
}
return 1;
}
return 0;
}
static int __init longhaul_cpu_init(struct cpufreq_policy *policy)
{
struct cpuinfo_x86 *c = cpu_data;
char *cpuname=NULL;
int ret;
/* Check ACPI support for C3 state */
acpi_walk_namespace(ACPI_TYPE_PROCESSOR, ACPI_ROOT_OBJECT, ACPI_UINT32_MAX,
&longhaul_walk_callback, NULL, (void *)&pr);
if (pr == NULL) goto err_acpi;
cx = &pr->power.states[ACPI_STATE_C3];
if (cx->address == 0 || cx->latency > 1000) goto err_acpi;
/* Now check what we have on this motherboard */
/* Check what we have on this motherboard */
switch (c->x86_model) {
case 6:
cpu_model = CPU_SAMUEL;
......@@ -636,12 +671,36 @@ static int __init longhaul_cpu_init(struct cpufreq_policy *policy)
break;
};
/* Find ACPI data for processor */
acpi_walk_namespace(ACPI_TYPE_PROCESSOR, ACPI_ROOT_OBJECT, ACPI_UINT32_MAX,
&longhaul_walk_callback, NULL, (void *)&pr);
if (pr == NULL)
goto err_acpi;
if (longhaul_version == TYPE_POWERSAVER) {
/* Check ACPI support for C3 state */
cx = &pr->power.states[ACPI_STATE_C3];
if (cx->address == 0 ||
(cx->latency > 1000 && ignore_latency == 0) )
goto err_acpi;
} else {
/* Check ACPI support for bus master arbiter disable */
if (!pr->flags.bm_control) {
if (!enable_arbiter_disable()) {
printk(KERN_ERR PFX "No ACPI support. No VT8601 host bridge. Aborting.\n");
return -ENODEV;
} else
port22_en = 1;
}
}
ret = longhaul_get_ranges();
if (ret != 0)
return ret;
if ((longhaul_version==TYPE_LONGHAUL_V2 || longhaul_version==TYPE_POWERSAVER) &&
(dont_scale_voltage==0))
(scale_voltage != 0))
longhaul_setup_voltagescaling();
policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
......@@ -729,8 +788,10 @@ static void __exit longhaul_exit(void)
kfree(longhaul_table);
}
module_param (dont_scale_voltage, int, 0644);
MODULE_PARM_DESC(dont_scale_voltage, "Don't scale voltage of processor");
module_param (scale_voltage, int, 0644);
MODULE_PARM_DESC(scale_voltage, "Scale voltage of processor");
module_param(ignore_latency, int, 0644);
MODULE_PARM_DESC(ignore_latency, "Skip ACPI C3 latency test");
MODULE_AUTHOR ("Dave Jones <davej@codemonkey.org.uk>");
MODULE_DESCRIPTION ("Longhaul driver for VIA Cyrix processors.");
......@@ -738,4 +799,3 @@ MODULE_LICENSE ("GPL");
late_initcall(longhaul_init);
module_exit(longhaul_exit);
......@@ -450,17 +450,45 @@ static int __initdata nehemiah_c_eblcr[32] = {
* Voltage scales. Div/Mod by 1000 to get actual voltage.
* Which scale to use depends on the VRM type in use.
*/
static int __initdata vrm85scales[32] = {
1250, 1200, 1150, 1100, 1050, 1800, 1750, 1700,
1650, 1600, 1550, 1500, 1450, 1400, 1350, 1300,
1275, 1225, 1175, 1125, 1075, 1825, 1775, 1725,
1675, 1625, 1575, 1525, 1475, 1425, 1375, 1325,
struct mV_pos {
unsigned short mV;
unsigned short pos;
};
static struct mV_pos __initdata vrm85_mV[32] = {
{1250, 8}, {1200, 6}, {1150, 4}, {1100, 2},
{1050, 0}, {1800, 30}, {1750, 28}, {1700, 26},
{1650, 24}, {1600, 22}, {1550, 20}, {1500, 18},
{1450, 16}, {1400, 14}, {1350, 12}, {1300, 10},
{1275, 9}, {1225, 7}, {1175, 5}, {1125, 3},
{1075, 1}, {1825, 31}, {1775, 29}, {1725, 27},
{1675, 25}, {1625, 23}, {1575, 21}, {1525, 19},
{1475, 17}, {1425, 15}, {1375, 13}, {1325, 11}
};
static unsigned char __initdata mV_vrm85[32] = {
0x04, 0x14, 0x03, 0x13, 0x02, 0x12, 0x01, 0x11,
0x00, 0x10, 0x0f, 0x1f, 0x0e, 0x1e, 0x0d, 0x1d,
0x0c, 0x1c, 0x0b, 0x1b, 0x0a, 0x1a, 0x09, 0x19,
0x08, 0x18, 0x07, 0x17, 0x06, 0x16, 0x05, 0x15
};
static struct mV_pos __initdata mobilevrm_mV[32] = {
{1750, 31}, {1700, 30}, {1650, 29}, {1600, 28},
{1550, 27}, {1500, 26}, {1450, 25}, {1400, 24},
{1350, 23}, {1300, 22}, {1250, 21}, {1200, 20},
{1150, 19}, {1100, 18}, {1050, 17}, {1000, 16},
{975, 15}, {950, 14}, {925, 13}, {900, 12},
{875, 11}, {850, 10}, {825, 9}, {800, 8},
{775, 7}, {750, 6}, {725, 5}, {700, 4},
{675, 3}, {650, 2}, {625, 1}, {600, 0}
};
static int __initdata mobilevrmscales[32] = {
2000, 1950, 1900, 1850, 1800, 1750, 1700, 1650,
1600, 1550, 1500, 1450, 1500, 1350, 1300, -1,
1275, 1250, 1225, 1200, 1175, 1150, 1125, 1100,
1075, 1050, 1025, 1000, 975, 950, 925, -1,
static unsigned char __initdata mV_mobilevrm[32] = {
0x1f, 0x1e, 0x1d, 0x1c, 0x1b, 0x1a, 0x19, 0x18,
0x17, 0x16, 0x15, 0x14, 0x13, 0x12, 0x11, 0x10,
0x0f, 0x0e, 0x0d, 0x0c, 0x0b, 0x0a, 0x09, 0x08,
0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01, 0x00
};
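These table pairs are inverses of each other: vrm85_mV[]/mobilevrm_mV[] map a 5-bit VID value to its voltage and to its rank (pos) on the monotonic voltage ladder, while mV_vrm85[]/mV_mobilevrm[] map a rank back to the VID value to program. A tiny self-check of that invariant for the VRM 8.5 pair might look like the sketch below (illustrative only, not part of the driver):

/* Illustrative only: rank -> VID -> rank must round-trip for VRM 8.5. */
static int __init check_vrm85_tables(void)
{
	unsigned int pos;

	for (pos = 0; pos < 32; pos++)
		if (vrm85_mV[mV_vrm85[pos]].pos != pos)
			return -EINVAL;
	return 0;
}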
......@@ -23,6 +23,7 @@
#ifdef CONFIG_X86_SPEEDSTEP_CENTRINO_ACPI
#include <linux/acpi.h>
#include <linux/dmi.h>
#include <acpi/processor.h>
#endif
......@@ -377,6 +378,35 @@ static int centrino_cpu_early_init_acpi(void)
return 0;
}
/*
* Some BIOSes do SW_ANY coordination internally, either setting it up in
* hardware or handling it in BIOS firmware, without informing the OS.
* If this goes undetected, the side effect is that the CPU runs at a
* different speed than the OS intended. Detect it and handle it cleanly.
*/
static int bios_with_sw_any_bug;
static int __init sw_any_bug_found(struct dmi_system_id *d)
{
bios_with_sw_any_bug = 1;
return 0;
}
static struct dmi_system_id sw_any_bug_dmi_table[] = {
{
.callback = sw_any_bug_found,
.ident = "Supermicro Server X6DLP",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Supermicro"),
DMI_MATCH(DMI_BIOS_VERSION, "080010"),
DMI_MATCH(DMI_PRODUCT_NAME, "X6DLP"),
},
},
{ }
};
/*
* centrino_cpu_init_acpi - register with ACPI P-States library
*
......@@ -398,14 +428,24 @@ static int centrino_cpu_init_acpi(struct cpufreq_policy *policy)
dprintk(PFX "obtaining ACPI data failed\n");
return -EIO;
}
policy->shared_type = p->shared_type;
/*
* Will let policy->cpus know about dependency only when software
* coordination is required.
*/
if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL ||
policy->shared_type == CPUFREQ_SHARED_TYPE_ANY)
policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
policy->cpus = p->shared_cpu_map;
}
#ifdef CONFIG_SMP
dmi_check_system(sw_any_bug_dmi_table);
if (bios_with_sw_any_bug && cpus_weight(policy->cpus) == 1) {
policy->shared_type = CPUFREQ_SHARED_TYPE_ALL;
policy->cpus = cpu_core_map[cpu];
}
#endif
/* verify the acpi_data */
if (p->state_count <= 1) {
......
......@@ -956,6 +956,38 @@ efi_memory_present_wrapper(unsigned long start, unsigned long end, void *arg)
return 0;
}
/*
* This function checks whether the entire range <start,end> is mapped with
* the given type.
*
* Note: this function only works correctly if the e820 table is sorted and
* non-overlapping, which is the case.
*/
int __init
e820_all_mapped(unsigned long s, unsigned long e, unsigned type)
{
u64 start = s;
u64 end = e;
int i;
for (i = 0; i < e820.nr_map; i++) {
struct e820entry *ei = &e820.map[i];
if (type && ei->type != type)
continue;
/* does the region (at least partly) overlap the current range? */
if (ei->addr >= end || ei->addr + ei->size <= start)
continue;
/* if the region covers the beginning of <start,end>, move start to
* the end of the region, since the range is covered up to that point
*/
if (ei->addr <= start)
start = ei->addr + ei->size;
/* if start is now at or beyond end, we're done, full
* coverage */
if (start >= end)
return 1; /* we're done */
}
return 0;
}
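As a usage sketch (the addresses here are assumed purely for illustration), a caller that only wants to trust a firmware-described region when the BIOS actually reserved it can write:

	/* Illustrative only: accept the region only if it is fully E820-reserved. */
	if (!e820_all_mapped(0xe0000000UL, 0xe0000000UL + 0x100000, E820_RESERVED))
		printk(KERN_WARNING "region at 0xe0000000 is not fully reserved\n");

which is exactly how the mmconfig change later in this merge gates MMCONFIG on the MCFG aperture being E820-reserved.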
/*
* Find the highest page frame number we have available
*/
......
......@@ -237,11 +237,6 @@ char * __devinit pcibios_setup(char *str)
pci_probe &= ~PCI_PROBE_MMCONF;
return NULL;
}
/* override DMI blacklist */
else if (!strcmp(str, "mmconf")) {
pci_probe |= PCI_PROBE_MMCONF_FORCE;
return NULL;
}
#endif
else if (!strcmp(str, "noacpi")) {
acpi_noirq_set();
......
......@@ -12,7 +12,6 @@
#include <linux/pci.h>
#include <linux/init.h>
#include <linux/acpi.h>
#include <linux/dmi.h>
#include <asm/e820.h>
#include "pci.h"
......@@ -188,31 +187,9 @@ static __init void unreachable_devices(void)
}
}
static int disable_mcfg(struct dmi_system_id *d)
{
printk("PCI: %s detected. Disabling MCFG.\n", d->ident);
pci_probe &= ~PCI_PROBE_MMCONF;
return 0;
}
static struct dmi_system_id __initdata dmi_bad_mcfg[] = {
/* Has broken MCFG table that makes the system hang when used */
{
.callback = disable_mcfg,
.ident = "Intel D3C5105 SDV",
.matches = {
DMI_MATCH(DMI_BIOS_VENDOR, "Intel"),
DMI_MATCH(DMI_BOARD_NAME, "D26928"),
},
},
{}
};
void __init pci_mmcfg_init(void)
{
dmi_check_system(dmi_bad_mcfg);
if ((pci_probe & (PCI_PROBE_MMCONF_FORCE|PCI_PROBE_MMCONF)) == 0)
if ((pci_probe & PCI_PROBE_MMCONF) == 0)
return;
acpi_table_parse(ACPI_MCFG, acpi_parse_mcfg);
......@@ -221,6 +198,15 @@ void __init pci_mmcfg_init(void)
(pci_mmcfg_config[0].base_address == 0))
return;
if (!e820_all_mapped(pci_mmcfg_config[0].base_address,
pci_mmcfg_config[0].base_address + MMCONFIG_APER_MIN,
E820_RESERVED)) {
printk(KERN_ERR "PCI: BIOS Bug: MCFG area at %x is not E820-reserved\n",
pci_mmcfg_config[0].base_address);
printk(KERN_ERR "PCI: Not using MMCONFIG.\n");
return;
}
printk(KERN_INFO "PCI: Using MMCONFIG\n");
raw_pci_ops = &pci_mmcfg;
pci_probe = (pci_probe & ~PCI_PROBE_MASK) | PCI_PROBE_MMCONF;
......
......@@ -16,8 +16,7 @@
#define PCI_PROBE_CONF1 0x0002
#define PCI_PROBE_CONF2 0x0004
#define PCI_PROBE_MMCONF 0x0008
#define PCI_PROBE_MMCONF_FORCE 0x0010
#define PCI_PROBE_MASK 0x00ff
#define PCI_PROBE_MASK 0x000f
#define PCI_NO_SORT 0x0100
#define PCI_BIOS_SORT 0x0200
......
......@@ -417,6 +417,17 @@ config PPC_MAPLE
This option enables support for the Maple 970FX Evaluation Board.
For more information, refer to <http://www.970eval.com>
config PPC_PASEMI
depends on PPC_MULTIPLATFORM && PPC64
bool "PA Semi SoC-based platforms"
default n
select MPIC
select PPC_UDBG_16550
select GENERIC_TBSYNC
help
This option enables support for PA Semi's PWRficient line
of SoC processors, including the PA6T-1682M.
config PPC_CELL
bool
default n
......@@ -436,7 +447,8 @@ config PPC_IBM_CELL_BLADE
select UDBG_RTAS_CONSOLE
config UDBG_RTAS_CONSOLE
bool
bool "RTAS based debug console"
depends on PPC_RTAS
default n
config XICS
......
......@@ -18,6 +18,20 @@ config DEBUG_STACK_USAGE
This option will slow down process creation somewhat.
config HCALL_STATS
bool "Hypervisor call instrumentation"
depends on PPC_PSERIES && DEBUG_FS
help
Adds code to keep track of the number of hypervisor calls made and
the amount of time spent in hypervisor calls. Wall time spent in
each call is always calculated, and, if available, CPU cycles spent
are also calculated. A directory named hcall_inst is added at the
root of the debugfs filesystem. Within the hcall_inst directory
are files that contain CPU-specific call statistics.
This option will add a small amount of overhead to all hypervisor
calls.
config DEBUGGER
bool "Enable debugger hooks"
depends on DEBUG_KERNEL
......@@ -74,6 +88,8 @@ config XMON
very early during boot. 'xmon=on' will just enable the xmon
debugger hooks. 'xmon=off' will disable the debugger hooks
if CONFIG_XMON_DEFAULT is set.
xmon will print a backtrace on the very first invocation.
'xmon=nobt' will disable this autobacktrace.
config XMON_DEFAULT
bool "Enable xmon by default"
......
......@@ -36,11 +36,16 @@ zliblinuxheader := zlib.h zconf.h zutil.h
$(addprefix $(obj)/,$(zlib) main.o): $(addprefix $(obj)/,$(zliblinuxheader)) $(addprefix $(obj)/,$(zlibheader))
#$(addprefix $(obj)/,main.o): $(addprefix $(obj)/,zlib.h)
src-boot := crt0.S string.S prom.c stdio.c main.c div64.S
src-boot-$(CONFIG_PPC_MULTIPLATFORM) := of.c
src-boot := crt0.S string.S stdio.c main.c div64.S $(src-boot-y)
src-boot += $(zlib)
src-boot := $(addprefix $(obj)/, $(src-boot))
obj-boot := $(addsuffix .o, $(basename $(src-boot)))
ifeq ($(call cc-option-yn, -fstack-protector),y)
BOOTCFLAGS += -fno-stack-protector
endif
BOOTCFLAGS += -I$(obj) -I$(srctree)/$(obj)
quiet_cmd_copy_zlib = COPY $@
......
......@@ -214,10 +214,10 @@
b800 0 0 4 700 15 8
/* IDSEL 0x18 */
b000 0 0 1 700 15 8
b000 0 0 2 700 16 8
b000 0 0 3 700 17 8
b000 0 0 4 700 14 8>;
c000 0 0 1 700 15 8
c000 0 0 2 700 16 8
c000 0 0 3 700 17 8
c000 0 0 4 700 14 8>;
interrupt-parent = <700>;
interrupts = <42 8>;
bus-range = <0 0>;
......@@ -274,10 +274,10 @@
b800 0 0 4 700 15 8
/* IDSEL 0x18 */
b000 0 0 1 700 15 8
b000 0 0 2 700 16 8
b000 0 0 3 700 17 8
b000 0 0 4 700 14 8>;
c000 0 0 1 700 15 8
c000 0 0 2 700 16 8
c000 0 0 3 700 17 8
c000 0 0 4 700 14 8>;
interrupt-parent = <700>;
interrupts = <42 8>;
bus-range = <0 0>;
......
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef FLATDEVTREE_H
#define FLATDEVTREE_H
#include "types.h"
/* Definitions used by the flattened device tree */
#define OF_DT_HEADER 0xd00dfeed /* marker */
#define OF_DT_BEGIN_NODE 0x1 /* Start of node, full name */
#define OF_DT_END_NODE 0x2 /* End node */
#define OF_DT_PROP 0x3 /* Property: name off, size, content */
#define OF_DT_NOP 0x4 /* nop */
#define OF_DT_END 0x9
#define OF_DT_VERSION 0x10
struct boot_param_header {
u32 magic; /* magic word OF_DT_HEADER */
u32 totalsize; /* total size of DT block */
u32 off_dt_struct; /* offset to structure */
u32 off_dt_strings; /* offset to strings */
u32 off_mem_rsvmap; /* offset to memory reserve map */
u32 version; /* format version */
u32 last_comp_version; /* last compatible version */
/* version 2 fields below */
u32 boot_cpuid_phys; /* Physical CPU id we're booting on */
/* version 3 fields below */
u32 dt_strings_size; /* size of the DT strings block */
};
#endif /* FLATDEVTREE_H */
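As an illustration of how a consumer of this header might sanity-check an attached blob (a sketch under stated assumptions, not code from this patch; treating OF_DT_VERSION as the parser's own supported version is the assumption here):

static int blob_usable(const struct boot_param_header *bph)
{
	if (bph->magic != OF_DT_HEADER)
		return 0;
	/* Usable if the producer claims compatibility back to our version. */
	return bph->last_comp_version <= OF_DT_VERSION;
}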
......@@ -14,17 +14,12 @@
#include "page.h"
#include "string.h"
#include "stdio.h"
#include "prom.h"
#include "zlib.h"
#include "ops.h"
#include "flatdevtree.h"
extern void flush_cache(void *, unsigned long);
/* Value picked to match that used by yaboot */
#define PROG_START 0x01400000 /* only used on 64-bit systems */
#define RAM_END (512<<20) /* Fixme: use OF */
#define ONE_MB 0x100000
extern char _start[];
extern char __bss_start[];
extern char _end[];
......@@ -33,14 +28,6 @@ extern char _vmlinux_end[];
extern char _initrd_start[];
extern char _initrd_end[];
/* A buffer that may be edited by tools operating on a zImage binary so as to
* edit the command line passed to vmlinux (by setting /chosen/bootargs).
* The buffer is put in its own section so that tools can locate it more easily.
*/
static char builtin_cmdline[512]
__attribute__((section("__builtin_cmdline")));
struct addr_range {
unsigned long addr;
unsigned long size;
......@@ -51,21 +38,16 @@ static struct addr_range vmlinuz;
static struct addr_range initrd;
static unsigned long elfoffset;
static int is_64bit;
static char scratch[46912]; /* scratch space for gunzip, from zlib_inflate_workspacesize() */
/* scratch space for gunzip; 46912 is from zlib_inflate_workspacesize() */
static char scratch[46912];
static char elfheader[256];
typedef void (*kernel_entry_t)( unsigned long,
unsigned long,
void *,
void *);
typedef void (*kernel_entry_t)(unsigned long, unsigned long, void *);
#undef DEBUG
static unsigned long claim_base;
#define HEAD_CRC 2
#define EXTRA_FIELD 4
#define ORIG_NAME 8
......@@ -123,24 +105,6 @@ static void gunzip(void *dst, int dstlen, unsigned char *src, int *lenp)
zlib_inflateEnd(&s);
}
static unsigned long try_claim(unsigned long size)
{
unsigned long addr = 0;
for(; claim_base < RAM_END; claim_base += ONE_MB) {
#ifdef DEBUG
printf(" trying: 0x%08lx\n\r", claim_base);
#endif
addr = (unsigned long)claim(claim_base, size, 0);
if ((void *)addr != (void *)-1)
break;
}
if (addr == 0)
return 0;
claim_base = PAGE_ALIGN(claim_base + size);
return addr;
}
static int is_elf64(void *hdr)
{
Elf64_Ehdr *elf64 = hdr;
......@@ -169,16 +133,7 @@ static int is_elf64(void *hdr)
vmlinux.size = (unsigned long)elf64ph->p_filesz + elfoffset;
vmlinux.memsize = (unsigned long)elf64ph->p_memsz + elfoffset;
#if defined(PROG_START)
/*
* Maintain a "magic" minimum address. This keeps some older
* firmware platforms running.
*/
if (claim_base < PROG_START)
claim_base = PROG_START;
#endif
is_64bit = 1;
return 1;
}
......@@ -212,47 +167,9 @@ static int is_elf32(void *hdr)
return 1;
}
void export_cmdline(void* chosen_handle)
{
int len;
char cmdline[2] = { 0, 0 };
if (builtin_cmdline[0] == 0)
return;
len = getprop(chosen_handle, "bootargs", cmdline, sizeof(cmdline));
if (len > 0 && cmdline[0] != 0)
return;
setprop(chosen_handle, "bootargs", builtin_cmdline,
strlen(builtin_cmdline) + 1);
}
void start(unsigned long a1, unsigned long a2, void *promptr, void *sp)
static void prep_kernel(unsigned long *a1, unsigned long *a2)
{
int len;
kernel_entry_t kernel_entry;
memset(__bss_start, 0, _end - __bss_start);
prom = (int (*)(void *)) promptr;
chosen_handle = finddevice("/chosen");
if (chosen_handle == (void *) -1)
exit();
if (getprop(chosen_handle, "stdout", &stdout, sizeof(stdout)) != 4)
exit();
printf("\n\rzImage starting: loaded at 0x%p (sp: 0x%p)\n\r", _start, sp);
/*
* The first available claim_base must be above the end of the
* the loaded kernel wrapper file (_start to _end includes the
* initrd image if it is present) and rounded up to a nice
* 1 MB boundary for good measure.
*/
claim_base = _ALIGN_UP((unsigned long)_end, ONE_MB);
vmlinuz.addr = (unsigned long)_vmlinux_start;
vmlinuz.size = (unsigned long)(_vmlinux_end - _vmlinux_start);
......@@ -263,43 +180,51 @@ void start(unsigned long a1, unsigned long a2, void *promptr, void *sp)
gunzip(elfheader, sizeof(elfheader),
(unsigned char *)vmlinuz.addr, &len);
} else
memcpy(elfheader, (const void *)vmlinuz.addr, sizeof(elfheader));
memcpy(elfheader, (const void *)vmlinuz.addr,
sizeof(elfheader));
if (!is_elf64(elfheader) && !is_elf32(elfheader)) {
printf("Error: not a valid PPC32 or PPC64 ELF file!\n\r");
exit();
}
if (platform_ops.image_hdr)
platform_ops.image_hdr(elfheader);
/* We need to claim the memsize plus the file offset since gzip
/* We need to alloc the memsize plus the file offset since gzip
* will expand the header (file offset), then the kernel, then
* possible rubbish we don't care about. But the kernel bss must
* be allocated as well (it will be zeroed by the kernel itself)
*/
printf("Allocating 0x%lx bytes for kernel ...\n\r", vmlinux.memsize);
vmlinux.addr = try_claim(vmlinux.memsize);
vmlinux.addr = (unsigned long)malloc(vmlinux.memsize);
if (vmlinux.addr == 0) {
printf("Can't allocate memory for kernel image !\n\r");
exit();
}
/*
* Now we try to claim memory for the initrd (and copy it there)
* Now we try to alloc memory for the initrd (and copy it there)
*/
initrd.size = (unsigned long)(_initrd_end - _initrd_start);
initrd.memsize = initrd.size;
if ( initrd.size > 0 ) {
printf("Allocating 0x%lx bytes for initrd ...\n\r", initrd.size);
initrd.addr = try_claim(initrd.size);
printf("Allocating 0x%lx bytes for initrd ...\n\r",
initrd.size);
initrd.addr = (unsigned long)malloc((u32)initrd.size);
if (initrd.addr == 0) {
printf("Can't allocate memory for initial ramdisk !\n\r");
printf("Can't allocate memory for initial "
"ramdisk !\n\r");
exit();
}
a1 = initrd.addr;
a2 = initrd.size;
printf("initial ramdisk moving 0x%lx <- 0x%lx (0x%lx bytes)\n\r",
initrd.addr, (unsigned long)_initrd_start, initrd.size);
memmove((void *)initrd.addr, (void *)_initrd_start, initrd.size);
printf("initrd head: 0x%lx\n\r", *((unsigned long *)initrd.addr));
*a1 = initrd.addr;
*a2 = initrd.size;
printf("initial ramdisk moving 0x%lx <- 0x%lx "
"(0x%lx bytes)\n\r", initrd.addr,
(unsigned long)_initrd_start, initrd.size);
memmove((void *)initrd.addr, (void *)_initrd_start,
initrd.size);
printf("initrd head: 0x%lx\n\r",
*((unsigned long *)initrd.addr));
}
/* Eventually gunzip the kernel */
......@@ -311,11 +236,10 @@ void start(unsigned long a1, unsigned long a2, void *promptr, void *sp)
(unsigned char *)vmlinuz.addr, &len);
printf("done 0x%lx bytes\n\r", len);
} else {
memmove((void *)vmlinux.addr,(void *)vmlinuz.addr,vmlinuz.size);
memmove((void *)vmlinux.addr,(void *)vmlinuz.addr,
vmlinuz.size);
}
export_cmdline(chosen_handle);
/* Skip over the ELF header */
#ifdef DEBUG
printf("... skipping 0x%lx bytes of ELF header\n\r",
......@@ -324,23 +248,107 @@ void start(unsigned long a1, unsigned long a2, void *promptr, void *sp)
vmlinux.addr += elfoffset;
flush_cache((void *)vmlinux.addr, vmlinux.size);
}
kernel_entry = (kernel_entry_t)vmlinux.addr;
#ifdef DEBUG
printf( "kernel:\n\r"
" entry addr = 0x%lx\n\r"
" a1 = 0x%lx,\n\r"
" a2 = 0x%lx,\n\r"
" prom = 0x%lx,\n\r"
" bi_recs = 0x%lx,\n\r",
(unsigned long)kernel_entry, a1, a2,
(unsigned long)prom, NULL);
#endif
void __attribute__ ((weak)) ft_init(void *dt_blob)
{
}
kernel_entry(a1, a2, prom, NULL);
/* A buffer that may be edited by tools operating on a zImage binary so as to
* edit the command line passed to vmlinux (by setting /chosen/bootargs).
* The buffer is put in its own section so that tools can locate it more easily.
*/
static char builtin_cmdline[COMMAND_LINE_SIZE]
__attribute__((__section__("__builtin_cmdline")));
printf("Error: Linux kernel returned to zImage bootloader!\n\r");
static void get_cmdline(char *buf, int size)
{
void *devp;
int len = strlen(builtin_cmdline);
exit();
buf[0] = '\0';
if (len > 0) { /* builtin_cmdline overrides dt's /chosen/bootargs */
len = min(len, size-1);
strncpy(buf, builtin_cmdline, len);
buf[len] = '\0';
}
else if ((devp = finddevice("/chosen")))
getprop(devp, "bootargs", buf, size);
}
static void set_cmdline(char *buf)
{
void *devp;
if ((devp = finddevice("/chosen")))
setprop(devp, "bootargs", buf, strlen(buf) + 1);
}
/* Section where ft can be tacked on after zImage is built */
union blobspace {
struct boot_param_header hdr;
char space[8*1024];
} dt_blob __attribute__((__section__("__builtin_ft")));
struct platform_ops platform_ops;
struct dt_ops dt_ops;
struct console_ops console_ops;
void start(unsigned long a1, unsigned long a2, void *promptr, void *sp)
{
int have_dt = 0;
kernel_entry_t kentry;
char cmdline[COMMAND_LINE_SIZE];
memset(__bss_start, 0, _end - __bss_start);
memset(&platform_ops, 0, sizeof(platform_ops));
memset(&dt_ops, 0, sizeof(dt_ops));
memset(&console_ops, 0, sizeof(console_ops));
/* Override the dt_ops and device tree if there was a flat device
* tree attached to the zImage.
*/
if (dt_blob.hdr.magic == OF_DT_HEADER) {
have_dt = 1;
ft_init(&dt_blob);
}
if (platform_init(promptr))
exit();
if (console_ops.open && (console_ops.open() < 0))
exit();
if (platform_ops.fixups)
platform_ops.fixups();
printf("\n\rzImage starting: loaded at 0x%p (sp: 0x%p)\n\r",
_start, sp);
prep_kernel(&a1, &a2);
/* If the cmdline came from the zImage wrapper, or if we can edit the one
* in the dt, print it out and edit it, if possible.
*/
if ((strlen(builtin_cmdline) > 0) || console_ops.edit_cmdline) {
get_cmdline(cmdline, COMMAND_LINE_SIZE);
printf("\n\rLinux/PowerPC load: %s", cmdline);
if (console_ops.edit_cmdline)
console_ops.edit_cmdline(cmdline, COMMAND_LINE_SIZE);
printf("\n\r");
set_cmdline(cmdline);
}
if (console_ops.close)
console_ops.close();
kentry = (kernel_entry_t) vmlinux.addr;
if (have_dt)
kentry(dt_ops.ft_addr(), 0, NULL);
else
/* XXX initrd addr/size should be passed in properties */
kentry(a1, a2, promptr);
/* console closed so printf below may not work */
printf("Error: Linux kernel returned to zImage boot wrapper!\n\r");
exit();
}
......@@ -8,15 +8,29 @@
*/
#include <stdarg.h>
#include <stddef.h>
#include "types.h"
#include "elf.h"
#include "string.h"
#include "stdio.h"
#include "prom.h"
#include "page.h"
#include "ops.h"
int (*prom)(void *);
phandle chosen_handle;
ihandle stdout;
typedef void *ihandle;
typedef void *phandle;
int call_prom(const char *service, int nargs, int nret, ...)
extern char _end[];
/* Value picked to match that used by yaboot */
#define PROG_START 0x01400000 /* only used on 64-bit systems */
#define RAM_END (512<<20) /* Fixme: use OF */
#define ONE_MB 0x100000
int (*prom) (void *);
static unsigned long claim_base;
static int call_prom(const char *service, int nargs, int nret, ...)
{
int i;
struct prom_args {
......@@ -45,7 +59,7 @@ int call_prom(const char *service, int nargs, int nret, ...)
return (nret > 0)? args.args[nargs]: 0;
}
int call_prom_ret(const char *service, int nargs, int nret,
static int call_prom_ret(const char *service, int nargs, int nret,
unsigned int *rets, ...)
{
int i;
......@@ -79,11 +93,6 @@ int call_prom_ret(const char *service, int nargs, int nret,
return (nret > 0)? args.args[nargs]: 0;
}
int write(void *handle, void *ptr, int nb)
{
return call_prom("write", 3, 1, handle, ptr, nb);
}
/*
* Older versions of OF require that when claiming a specific range of addresses,
* we claim the physical space in the /memory node and the virtual
......@@ -142,7 +151,7 @@ static int check_of_version(void)
return 1;
}
void *claim(unsigned long virt, unsigned long size, unsigned long align)
static void *claim(unsigned long virt, unsigned long size, unsigned long align)
{
int ret;
unsigned int result;
......@@ -151,7 +160,7 @@ void *claim(unsigned long virt, unsigned long size, unsigned long align)
need_map = check_of_version();
if (align || !need_map)
return (void *) call_prom("claim", 3, 1, virt, size, align);
ret = call_prom_ret("call-method", 5, 2, &result, "claim", memory,
align, size, virt);
if (ret != 0 || result == -1)
......@@ -163,3 +172,112 @@ void *claim(unsigned long virt, unsigned long size, unsigned long align)
0x12, size, virt, virt);
return (void *) virt;
}
static void *of_try_claim(u32 size)
{
unsigned long addr = 0;
static u8 first_time = 1;
if (first_time) {
claim_base = _ALIGN_UP((unsigned long)_end, ONE_MB);
first_time = 0;
}
for(; claim_base < RAM_END; claim_base += ONE_MB) {
#ifdef DEBUG
printf(" trying: 0x%08lx\n\r", claim_base);
#endif
addr = (unsigned long)claim(claim_base, size, 0);
if ((void *)addr != (void *)-1)
break;
}
if (addr == 0)
return NULL;
claim_base = PAGE_ALIGN(claim_base + size);
return (void *)addr;
}
static void of_image_hdr(const void *hdr)
{
const Elf64_Ehdr *elf64 = hdr;
if (elf64->e_ident[EI_CLASS] == ELFCLASS64) {
/*
* Maintain a "magic" minimum address. This keeps some older
* firmware platforms running.
*/
if (claim_base < PROG_START)
claim_base = PROG_START;
}
}
static void of_exit(void)
{
call_prom("exit", 0, 0);
}
/*
* OF device tree routines
*/
static void *of_finddevice(const char *name)
{
return (phandle) call_prom("finddevice", 1, 1, name);
}
static int of_getprop(const void *phandle, const char *name, void *buf,
const int buflen)
{
return call_prom("getprop", 4, 1, phandle, name, buf, buflen);
}
static int of_setprop(const void *phandle, const char *name, const void *buf,
const int buflen)
{
return call_prom("setprop", 4, 1, phandle, name, buf, buflen);
}
/*
* OF console routines
*/
static void *of_stdout_handle;
static int of_console_open(void)
{
void *devp;
if (((devp = finddevice("/chosen")) != NULL)
&& (getprop(devp, "stdout", &of_stdout_handle,
sizeof(of_stdout_handle))
== sizeof(of_stdout_handle)))
return 0;
return -1;
}
static void of_console_write(char *buf, int len)
{
call_prom("write", 3, 1, of_stdout_handle, buf, len);
}
int platform_init(void *promptr)
{
platform_ops.fixups = NULL;
platform_ops.image_hdr = of_image_hdr;
platform_ops.malloc = of_try_claim;
platform_ops.free = NULL;
platform_ops.exit = of_exit;
dt_ops.finddevice = of_finddevice;
dt_ops.getprop = of_getprop;
dt_ops.setprop = of_setprop;
dt_ops.translate_addr = NULL;
console_ops.open = of_console_open;
console_ops.write = of_console_write;
console_ops.edit_cmdline = NULL;
console_ops.close = NULL;
console_ops.data = NULL;
prom = (int (*)(void *))promptr;
return 0;
}
/*
* Global definition of all the bootwrapper operations.
*
* Author: Mark A. Greer <mgreer@mvista.com>
*
* 2006 (c) MontaVista Software, Inc. This file is licensed under
* the terms of the GNU General Public License version 2. This program
* is licensed "as is" without any warranty of any kind, whether express
* or implied.
*/
#ifndef _PPC_BOOT_OPS_H_
#define _PPC_BOOT_OPS_H_
#include "types.h"
#define COMMAND_LINE_SIZE 512
#define MAX_PATH_LEN 256
#define MAX_PROP_LEN 256 /* What should this be? */
/* Platform specific operations */
struct platform_ops {
void (*fixups)(void);
void (*image_hdr)(const void *);
void * (*malloc)(u32 size);
void (*free)(void *ptr, u32 size);
void (*exit)(void);
};
extern struct platform_ops platform_ops;
/* Device Tree operations */
struct dt_ops {
void * (*finddevice)(const char *name);
int (*getprop)(const void *node, const char *name, void *buf,
const int buflen);
int (*setprop)(const void *node, const char *name,
const void *buf, const int buflen);
u64 (*translate_addr)(const char *path, const u32 *in_addr,
const u32 addr_len);
unsigned long (*ft_addr)(void);
};
extern struct dt_ops dt_ops;
/* Console operations */
struct console_ops {
int (*open)(void);
void (*write)(char *buf, int len);
void (*edit_cmdline)(char *buf, int len);
void (*close)(void);
void *data;
};
extern struct console_ops console_ops;
/* Serial console operations */
struct serial_console_data {
int (*open)(void);
void (*putc)(unsigned char c);
unsigned char (*getc)(void);
u8 (*tstc)(void);
void (*close)(void);
};
extern int platform_init(void *promptr);
extern void simple_alloc_init(void);
extern void ft_init(void *dt_blob);
extern int serial_console_init(void);
static inline void *finddevice(const char *name)
{
return (dt_ops.finddevice) ? dt_ops.finddevice(name) : NULL;
}
static inline int getprop(void *devp, const char *name, void *buf, int buflen)
{
return (dt_ops.getprop) ? dt_ops.getprop(devp, name, buf, buflen) : -1;
}
static inline int setprop(void *devp, const char *name, void *buf, int buflen)
{
return (dt_ops.setprop) ? dt_ops.setprop(devp, name, buf, buflen) : -1;
}
static inline void *malloc(u32 size)
{
return (platform_ops.malloc) ? platform_ops.malloc(size) : NULL;
}
static inline void free(void *ptr, u32 size)
{
if (platform_ops.free)
platform_ops.free(ptr, size);
}
static inline void exit(void)
{
if (platform_ops.exit)
platform_ops.exit();
for(;;);
}
#endif /* _PPC_BOOT_OPS_H_ */
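To show how a board port plugs into these hooks, here is a minimal, hypothetical platform_init() for a board whose flat device tree is already attached to the zImage (so main() calls ft_init() itself) and whose console is a memory-mapped UART. The static heap, its size, and board_console_write() are assumptions for illustration only, not part of this patch.

/* Hypothetical board glue; every symbol below the includes is a sketch. */
#include "ops.h"
#include "types.h"

static char heap[4 << 20];		/* assumed scratch heap */
static u32 heap_used;

static void *heap_malloc(u32 size)
{
	void *p;

	if (heap_used + size > sizeof(heap))
		return NULL;
	p = heap + heap_used;
	heap_used += size;
	return p;
}

static void board_console_write(char *buf, int len)
{
	/* assumed: poll a board-specific memory-mapped UART */
}

static void board_exit(void)
{
	for (;;)
		;
}

int platform_init(void *promptr)
{
	platform_ops.malloc = heap_malloc;		/* used by prep_kernel() */
	platform_ops.exit = board_exit;
	console_ops.write = board_console_write;	/* used by printf() */
	return 0;
}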
#ifndef _PPC_BOOT_PROM_H_
#define _PPC_BOOT_PROM_H_
typedef void *phandle;
typedef void *ihandle;
extern int (*prom) (void *);
extern phandle chosen_handle;
extern ihandle stdout;
int call_prom(const char *service, int nargs, int nret, ...);
int call_prom_ret(const char *service, int nargs, int nret,
unsigned int *rets, ...);
extern int write(void *handle, void *ptr, int nb);
extern void *claim(unsigned long virt, unsigned long size, unsigned long aln);
static inline void exit(void)
{
call_prom("exit", 0, 0);
}
static inline phandle finddevice(const char *name)
{
return (phandle) call_prom("finddevice", 1, 1, name);
}
static inline int getprop(void *phandle, const char *name,
void *buf, int buflen)
{
return call_prom("getprop", 4, 1, phandle, name, buf, buflen);
}
static inline int setprop(void *phandle, const char *name,
void *buf, int buflen)
{
return call_prom("setprop", 4, 1, phandle, name, buf, buflen);
}
#endif /* _PPC_BOOT_PROM_H_ */
......@@ -10,7 +10,7 @@
#include <stddef.h>
#include "string.h"
#include "stdio.h"
#include "prom.h"
#include "ops.h"
size_t strnlen(const char * s, size_t count)
{
......@@ -320,6 +320,6 @@ printf(const char *fmt, ...)
va_start(args, fmt);
n = vsprintf(sprint_buf, fmt, args);
va_end(args);
write(stdout, sprint_buf, n);
console_ops.write(sprint_buf, n);
return n;
}
#ifndef _PPC_BOOT_STDIO_H_
#define _PPC_BOOT_STDIO_H_
#include <stdarg.h>
#define ENOMEM 12 /* Out of Memory */
#define EINVAL 22 /* Invalid argument */
#define ENOSPC 28 /* No space left on device */
extern int printf(const char *fmt, ...);
#define fprintf(fmt, args...) printf(args)
extern int sprintf(char *buf, const char *fmt, ...);
extern int vsprintf(char *buf, const char *fmt, va_list args);
......
#ifndef _TYPES_H_
#define _TYPES_H_
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
typedef unsigned char u8;
typedef unsigned short u16;
typedef unsigned int u32;
typedef unsigned long long u64;
#define min(x,y) ({ \
typeof(x) _x = (x); \
typeof(y) _y = (y); \
(void) (&_x == &_y); \
_x < _y ? _x : _y; })
#define max(x,y) ({ \
typeof(x) _x = (x); \
typeof(y) _y = (y); \
(void) (&_x == &_y); \
_x > _y ? _x : _y; })
#endif /* _TYPES_H_ */
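The (void) (&_x == &_y) line in min()/max() exists only to make the compiler complain when the two arguments have different types: comparing pointers to distinct types draws a warning, so the mismatch shows up at build time instead of as a silent conversion. A small illustration (values are arbitrary):

	u32 a = 3;
	u64 b = 5;

	u32 ok  = min(a, (u32)b);	/* fine: both sides are u32 */
	u32 bad = min(a, b);		/* gcc warns: comparison of distinct
					 * pointer types (&a vs &b) in min() */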
......@@ -496,7 +496,7 @@ CONFIG_E1000=y
# CONFIG_SKY2 is not set
# CONFIG_SK98LIN is not set
# CONFIG_VIA_VELOCITY is not set
# CONFIG_TIGON3 is not set
CONFIG_TIGON3=y
# CONFIG_BNX2 is not set
# CONFIG_MV643XX_ETH is not set
......
......@@ -16,7 +16,7 @@ obj-y := semaphore.o cputable.o ptrace.o syscalls.o \
obj-y += vdso32/
obj-$(CONFIG_PPC64) += setup_64.o binfmt_elf32.o sys_ppc32.o \
signal_64.o ptrace32.o \
paca.o cpu_setup_power4.o \
paca.o cpu_setup_ppc970.o \
firmware.o sysfs.o
obj-$(CONFIG_PPC64) += vdso64/
obj-$(CONFIG_ALTIVEC) += vecemu.o vector.o
......@@ -51,7 +51,7 @@ extra-$(CONFIG_8xx) := head_8xx.o
extra-y += vmlinux.lds
obj-y += time.o prom.o traps.o setup-common.o \
udbg.o misc.o
udbg.o misc.o io.o
obj-$(CONFIG_PPC32) += entry_32.o setup_32.o misc_32.o
obj-$(CONFIG_PPC64) += misc_64.o dma_64.o iommu.o
obj-$(CONFIG_PPC_MULTIPLATFORM) += prom_init.o
......
......@@ -40,9 +40,10 @@
#ifdef CONFIG_PPC64
#include <asm/paca.h>
#include <asm/lppaca.h>
#include <asm/iseries/hv_lp_event.h>
#include <asm/cache.h>
#include <asm/compat.h>
#include <asm/mmu.h>
#include <asm/hvcall.h>
#endif
#define DEFINE(sym, val) \
......@@ -136,11 +137,18 @@ int main(void)
DEFINE(PACA_STARTPURR, offsetof(struct paca_struct, startpurr));
DEFINE(PACA_USER_TIME, offsetof(struct paca_struct, user_time));
DEFINE(PACA_SYSTEM_TIME, offsetof(struct paca_struct, system_time));
DEFINE(PACA_SLBSHADOWPTR, offsetof(struct paca_struct, slb_shadow_ptr));
DEFINE(PACA_DATA_OFFSET, offsetof(struct paca_struct, data_offset));
DEFINE(SLBSHADOW_STACKVSID,
offsetof(struct slb_shadow, save_area[SLB_NUM_BOLTED - 1].vsid));
DEFINE(SLBSHADOW_STACKESID,
offsetof(struct slb_shadow, save_area[SLB_NUM_BOLTED - 1].esid));
DEFINE(LPPACASRR0, offsetof(struct lppaca, saved_srr0));
DEFINE(LPPACASRR1, offsetof(struct lppaca, saved_srr1));
DEFINE(LPPACAANYINT, offsetof(struct lppaca, int_dword.any_int));
DEFINE(LPPACADECRINT, offsetof(struct lppaca, int_dword.fields.decr_int));
DEFINE(SLBSHADOW_SAVEAREA, offsetof(struct slb_shadow, save_area));
#endif /* CONFIG_PPC64 */
/* RTAS */
......@@ -159,6 +167,12 @@ int main(void)
/* Create extra stack space for SRR0 and SRR1 when calling prom/rtas. */
DEFINE(PROM_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
DEFINE(RTAS_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
/* hcall statistics */
DEFINE(HCALL_STAT_SIZE, sizeof(struct hcall_stats));
DEFINE(HCALL_STAT_CALLS, offsetof(struct hcall_stats, num_calls));
DEFINE(HCALL_STAT_TB, offsetof(struct hcall_stats, tb_total));
DEFINE(HCALL_STAT_PURR, offsetof(struct hcall_stats, purr_total));
#endif /* CONFIG_PPC64 */
DEFINE(GPR0, STACK_FRAME_OVERHEAD+offsetof(struct pt_regs, gpr[0]));
DEFINE(GPR1, STACK_FRAME_OVERHEAD+offsetof(struct pt_regs, gpr[1]));
......@@ -240,6 +254,7 @@ int main(void)
DEFINE(CPU_SPEC_PVR_VALUE, offsetof(struct cpu_spec, pvr_value));
DEFINE(CPU_SPEC_FEATURES, offsetof(struct cpu_spec, cpu_features));
DEFINE(CPU_SPEC_SETUP, offsetof(struct cpu_spec, cpu_setup));
DEFINE(CPU_SPEC_RESTORE, offsetof(struct cpu_spec, cpu_restore));
#ifndef CONFIG_PPC64
DEFINE(pbe_address, offsetof(struct pbe, address));
......
......@@ -158,35 +158,35 @@ int btext_initialize(struct device_node *np)
{
unsigned int width, height, depth, pitch;
unsigned long address = 0;
u32 *prop;
const u32 *prop;
prop = (u32 *)get_property(np, "linux,bootx-width", NULL);
prop = get_property(np, "linux,bootx-width", NULL);
if (prop == NULL)
prop = (u32 *)get_property(np, "width", NULL);
prop = get_property(np, "width", NULL);
if (prop == NULL)
return -EINVAL;
width = *prop;
prop = (u32 *)get_property(np, "linux,bootx-height", NULL);
prop = get_property(np, "linux,bootx-height", NULL);
if (prop == NULL)
prop = (u32 *)get_property(np, "height", NULL);
prop = get_property(np, "height", NULL);
if (prop == NULL)
return -EINVAL;
height = *prop;
prop = (u32 *)get_property(np, "linux,bootx-depth", NULL);
prop = get_property(np, "linux,bootx-depth", NULL);
if (prop == NULL)
prop = (u32 *)get_property(np, "depth", NULL);
prop = get_property(np, "depth", NULL);
if (prop == NULL)
return -EINVAL;
depth = *prop;
pitch = width * ((depth + 7) / 8);
prop = (u32 *)get_property(np, "linux,bootx-linebytes", NULL);
prop = get_property(np, "linux,bootx-linebytes", NULL);
if (prop == NULL)
prop = (u32 *)get_property(np, "linebytes", NULL);
prop = get_property(np, "linebytes", NULL);
if (prop)
pitch = *prop;
if (pitch == 1)
pitch = 0x1000;
prop = (u32 *)get_property(np, "address", NULL);
prop = get_property(np, "address", NULL);
if (prop)
address = *prop;
......@@ -214,11 +214,11 @@ int btext_initialize(struct device_node *np)
int __init btext_find_display(int allow_nonstdout)
{
char *name;
const char *name;
struct device_node *np = NULL;
int rc = -ENODEV;
name = (char *)get_property(of_chosen, "linux,stdout-path", NULL);
name = get_property(of_chosen, "linux,stdout-path", NULL);
if (name != NULL) {
np = of_find_node_by_path(name);
if (np != NULL) {
......
......@@ -16,27 +16,12 @@
#include <asm/asm-offsets.h>
#include <asm/cache.h>
_GLOBAL(__970_cpu_preinit)
/*
* Do nothing if not running in HV mode
*/
_GLOBAL(__cpu_preinit_ppc970)
/* Do nothing if not running in HV mode */
mfmsr r0
rldicl. r0,r0,4,63
beqlr
/*
* Deal only with PPC970 and PPC970FX.
*/
mfspr r0,SPRN_PVR
srwi r0,r0,16
cmpwi r0,0x39
beq 1f
cmpwi r0,0x3c
beq 1f
cmpwi r0,0x44
bnelr
1:
/* Make sure HID4:rm_ci is off before MMU is turned off, that large
* pages are enabled with HID4:61 and clear HID5:DCBZ_size and
* HID5:DCBZ32_ill
......@@ -72,23 +57,6 @@ _GLOBAL(__970_cpu_preinit)
isync
blr
_GLOBAL(__setup_cpu_ppc970)
mfspr r0,SPRN_HID0
li r11,5 /* clear DOZE and SLEEP */
rldimi r0,r11,52,8 /* set NAP and DPM */
li r11,0
rldimi r0,r11,32,31 /* clear EN_ATTN */
mtspr SPRN_HID0,r0
mfspr r0,SPRN_HID0
mfspr r0,SPRN_HID0
mfspr r0,SPRN_HID0
mfspr r0,SPRN_HID0
mfspr r0,SPRN_HID0
mfspr r0,SPRN_HID0
sync
isync
blr
/* Definitions for the table used to save CPU states */
#define CS_HID0 0
#define CS_HID1 8
......@@ -103,33 +71,30 @@ cpu_state_storage:
.balign L1_CACHE_BYTES,0
.text
/* Called in normal context to backup CPU 0 state. This
* does not include cache settings. This function is also
* called for machine sleep. This does not include the MMU
* setup, BATs, etc... but rather the "special" registers
* like HID0, HID1, HID4, etc...
*/
_GLOBAL(__save_cpu_setup)
/* Some CR fields are volatile, we back it up all */
mfcr r7
/* Get storage ptr */
LOAD_REG_IMMEDIATE(r5,cpu_state_storage)
/* We only deal with 970 for now */
mfspr r0,SPRN_PVR
srwi r0,r0,16
cmpwi r0,0x39
beq 1f
cmpwi r0,0x3c
beq 1f
cmpwi r0,0x44
bne 2f
1: /* skip if not running in HV mode */
_GLOBAL(__setup_cpu_ppc970)
/* Do nothing if not running in HV mode */
mfmsr r0
rldicl. r0,r0,4,63
beq 2f
beqlr
mfspr r0,SPRN_HID0
li r11,5 /* clear DOZE and SLEEP */
rldimi r0,r11,52,8 /* set NAP and DPM */
li r11,0
rldimi r0,r11,32,31 /* clear EN_ATTN */
mtspr SPRN_HID0,r0
mfspr r0,SPRN_HID0
mfspr r0,SPRN_HID0
mfspr r0,SPRN_HID0
mfspr r0,SPRN_HID0
mfspr r0,SPRN_HID0
mfspr r0,SPRN_HID0
sync
isync
/* Save away cpu state */
LOAD_REG_IMMEDIATE(r5,cpu_state_storage)
/* Save HID0,1,4 and 5 */
mfspr r3,SPRN_HID0
......@@ -141,35 +106,19 @@ _GLOBAL(__save_cpu_setup)
mfspr r3,SPRN_HID5
std r3,CS_HID5(r5)
2:
mtcr r7
blr
/* Called with no MMU context (typically MSR:IR/DR off) to
* restore CPU state as backed up by the previous
* function. This does not include cache setting
*/
_GLOBAL(__restore_cpu_setup)
/* Get storage ptr (FIXME when using anton reloc as we
* are running with translation disabled here
*/
LOAD_REG_IMMEDIATE(r5,cpu_state_storage)
/* We only deal with 970 for now */
mfspr r0,SPRN_PVR
srwi r0,r0,16
cmpwi r0,0x39
beq 1f
cmpwi r0,0x3c
beq 1f
cmpwi r0,0x44
bnelr
1: /* skip if not running in HV mode */
_GLOBAL(__restore_cpu_ppc970)
/* Do nothing if not running in HV mode */
mfmsr r0
rldicl. r0,r0,4,63
beqlr
LOAD_REG_IMMEDIATE(r5,cpu_state_storage)
/* Before accessing memory, we make sure rm_ci is clear */
li r0,0
mfspr r3,SPRN_HID4
......
......@@ -39,7 +39,10 @@ extern void __setup_cpu_7400(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_7410(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_745x(unsigned long offset, struct cpu_spec* spec);
#endif /* CONFIG_PPC32 */
#ifdef CONFIG_PPC64
extern void __setup_cpu_ppc970(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_ppc970(void);
#endif /* CONFIG_PPC64 */
/* This table only contains "desktop" CPUs, it needs to be filled with embedded
* ones as well...
......@@ -55,6 +58,9 @@ extern void __setup_cpu_ppc970(unsigned long offset, struct cpu_spec* spec);
#define COMMON_USER_POWER6 (COMMON_USER_PPC64 | PPC_FEATURE_ARCH_2_05 |\
PPC_FEATURE_SMT | PPC_FEATURE_ICACHE_SNOOP | \
PPC_FEATURE_TRUE_LE)
#define COMMON_USER_PA6T (COMMON_USER_PPC64 | PPC_FEATURE_PA6T |\
PPC_FEATURE_TRUE_LE | \
PPC_FEATURE_HAS_ALTIVEC_COMP)
#define COMMON_USER_BOOKE (PPC_FEATURE_32 | PPC_FEATURE_HAS_MMU | \
PPC_FEATURE_BOOKE)
......@@ -184,6 +190,7 @@ struct cpu_spec cpu_specs[] = {
.dcache_bsize = 128,
.num_pmcs = 8,
.cpu_setup = __setup_cpu_ppc970,
.cpu_restore = __restore_cpu_ppc970,
.oprofile_cpu_type = "ppc64/970",
.oprofile_type = PPC_OPROFILE_POWER4,
.platform = "ppc970",
......@@ -199,6 +206,7 @@ struct cpu_spec cpu_specs[] = {
.dcache_bsize = 128,
.num_pmcs = 8,
.cpu_setup = __setup_cpu_ppc970,
.cpu_restore = __restore_cpu_ppc970,
.oprofile_cpu_type = "ppc64/970",
.oprofile_type = PPC_OPROFILE_POWER4,
.platform = "ppc970",
......@@ -214,6 +222,7 @@ struct cpu_spec cpu_specs[] = {
.dcache_bsize = 128,
.num_pmcs = 8,
.cpu_setup = __setup_cpu_ppc970,
.cpu_restore = __restore_cpu_ppc970,
.oprofile_cpu_type = "ppc64/970",
.oprofile_type = PPC_OPROFILE_POWER4,
.platform = "ppc970",
......@@ -280,6 +289,17 @@ struct cpu_spec cpu_specs[] = {
.dcache_bsize = 128,
.platform = "ppc-cell-be",
},
{ /* PA Semi PA6T */
.pvr_mask = 0x7fff0000,
.pvr_value = 0x00900000,
.cpu_name = "PA6T",
.cpu_features = CPU_FTRS_PA6T,
.cpu_user_features = COMMON_USER_PA6T,
.icache_bsize = 64,
.dcache_bsize = 64,
.num_pmcs = 6,
.platform = "pa6t",
},
{ /* default match */
.pvr_mask = 0x00000000,
.pvr_value = 0x00000000,
......@@ -929,6 +949,7 @@ struct cpu_spec cpu_specs[] = {
PPC_FEATURE_HAS_MMU | PPC_FEATURE_HAS_4xxMAC,
.icache_bsize = 32,
.dcache_bsize = 32,
.platform = "ppc405",
},
{ /* 405EP */
.pvr_mask = 0xffff0000,
......
......@@ -80,7 +80,7 @@ static int __init parse_savemaxmem(char *p)
}
__setup("savemaxmem=", parse_savemaxmem);
/*
/**
* copy_oldmem_page - copy one page from "oldmem"
* @pfn: page frame number to be copied
* @buf: target memory address for the copy; this can be in kernel address
......
......@@ -35,10 +35,9 @@ int dma_supported(struct device *dev, u64 mask)
{
struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
if (dma_ops)
return dma_ops->dma_supported(dev, mask);
BUG();
return 0;
BUG_ON(!dma_ops);
return dma_ops->dma_supported(dev, mask);
}
EXPORT_SYMBOL(dma_supported);
......@@ -66,10 +65,9 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
{
struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
if (dma_ops)
return dma_ops->alloc_coherent(dev, size, dma_handle, flag);
BUG();
return NULL;
BUG_ON(!dma_ops);
return dma_ops->alloc_coherent(dev, size, dma_handle, flag);
}
EXPORT_SYMBOL(dma_alloc_coherent);
......@@ -78,10 +76,9 @@ void dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
{
struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
if (dma_ops)
dma_ops->free_coherent(dev, size, cpu_addr, dma_handle);
else
BUG();
BUG_ON(!dma_ops);
dma_ops->free_coherent(dev, size, cpu_addr, dma_handle);
}
EXPORT_SYMBOL(dma_free_coherent);
......@@ -90,10 +87,9 @@ dma_addr_t dma_map_single(struct device *dev, void *cpu_addr, size_t size,
{
struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
if (dma_ops)
return dma_ops->map_single(dev, cpu_addr, size, direction);
BUG();
return (dma_addr_t)0;
BUG_ON(!dma_ops);
return dma_ops->map_single(dev, cpu_addr, size, direction);
}
EXPORT_SYMBOL(dma_map_single);
......@@ -102,10 +98,9 @@ void dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
{
struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
if (dma_ops)
dma_ops->unmap_single(dev, dma_addr, size, direction);
else
BUG();
BUG_ON(!dma_ops);
dma_ops->unmap_single(dev, dma_addr, size, direction);
}
EXPORT_SYMBOL(dma_unmap_single);
......@@ -115,11 +110,10 @@ dma_addr_t dma_map_page(struct device *dev, struct page *page,
{
struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
if (dma_ops)
return dma_ops->map_single(dev,
(page_address(page) + offset), size, direction);
BUG();
return (dma_addr_t)0;
BUG_ON(!dma_ops);
return dma_ops->map_single(dev, page_address(page) + offset, size,
direction);
}
EXPORT_SYMBOL(dma_map_page);
......@@ -128,10 +122,9 @@ void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
{
struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
if (dma_ops)
dma_ops->unmap_single(dev, dma_address, size, direction);
else
BUG();
BUG_ON(!dma_ops);
dma_ops->unmap_single(dev, dma_address, size, direction);
}
EXPORT_SYMBOL(dma_unmap_page);
......@@ -140,10 +133,9 @@ int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
{
struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
if (dma_ops)
return dma_ops->map_sg(dev, sg, nents, direction);
BUG();
return 0;
BUG_ON(!dma_ops);
return dma_ops->map_sg(dev, sg, nents, direction);
}
EXPORT_SYMBOL(dma_map_sg);
......@@ -152,9 +144,8 @@ void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
{
struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
if (dma_ops)
dma_ops->unmap_sg(dev, sg, nhwentries, direction);
else
BUG();
BUG_ON(!dma_ops);
dma_ops->unmap_sg(dev, sg, nhwentries, direction);
}
EXPORT_SYMBOL(dma_unmap_sg);
......@@ -375,6 +375,14 @@ BEGIN_FTR_SECTION
ld r7,KSP_VSID(r4) /* Get new stack's VSID */
oris r0,r6,(SLB_ESID_V)@h
ori r0,r0,(SLB_NUM_BOLTED-1)@l
/* Update the last bolted SLB */
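/* Clear the shadow ESID before updating the VSID so the hypervisor
 * never sees a valid entry paired with a stale VSID. */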
ld r9,PACA_SLBSHADOWPTR(r13)
li r12,0
std r12,SLBSHADOW_STACKESID(r9) /* Clear ESID */
std r7,SLBSHADOW_STACKVSID(r9) /* Save VSID */
std r0,SLBSHADOW_STACKESID(r9) /* Save ESID */
slbie r6
slbie r6 /* Workaround POWER5 < DD2.1 issue */
slbmte r7,r0
......
This diff is collapsed.
......@@ -167,7 +167,7 @@ static DEVICE_ATTR(name, S_IRUSR | S_IRGRP | S_IROTH, ibmebusdev_show_name,
NULL);
static struct ibmebus_dev* __devinit ibmebus_register_device_common(
struct ibmebus_dev *dev, char *name)
struct ibmebus_dev *dev, const char *name)
{
int err = 0;
......@@ -194,10 +194,10 @@ static struct ibmebus_dev* __devinit ibmebus_register_device_node(
struct device_node *dn)
{
struct ibmebus_dev *dev;
char *loc_code;
const char *loc_code;
int length;
loc_code = (char *)get_property(dn, "ibm,loc-code", NULL);
loc_code = get_property(dn, "ibm,loc-code", NULL);
if (!loc_code) {
printk(KERN_WARNING "%s: node %s missing 'ibm,loc-code'\n",
__FUNCTION__, dn->name ? dn->name : "<unknown>");
......
This diff is collapsed.
......@@ -52,6 +52,7 @@
#include <linux/radix-tree.h>
#include <linux/mutex.h>
#include <linux/bootmem.h>
#include <linux/pci.h>
#include <asm/uaccess.h>
#include <asm/system.h>
......@@ -875,12 +876,14 @@ int pci_enable_msi(struct pci_dev * pdev)
else
return -1;
}
EXPORT_SYMBOL(pci_enable_msi);
void pci_disable_msi(struct pci_dev * pdev)
{
if (ppc_md.disable_msi)
ppc_md.disable_msi(pdev);
}
EXPORT_SYMBOL(pci_disable_msi);
void pci_scan_msi_device(struct pci_dev *dev) {}
int pci_enable_msix(struct pci_dev* dev, struct msix_entry *entries, int nvec) {return -1;}
......@@ -888,6 +891,8 @@ void pci_disable_msix(struct pci_dev *dev) {}
void msi_remove_pci_irq_vectors(struct pci_dev *dev) {}
void disable_msi_mode(struct pci_dev *dev, int pos, int type) {}
void pci_no_msi(void) {}
EXPORT_SYMBOL(pci_enable_msix);
EXPORT_SYMBOL(pci_disable_msix);
#endif
......
This diff has been collapsed.