Commit c32da023 authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (56 commits)
  doc: fix typo in comment explaining rb_tree usage
  Remove fs/ntfs/ChangeLog
  doc: fix console doc typo
  doc: cpuset: Update the cpuset flag file
  Fix of spelling in arch/sparc/kernel/leon_kernel.c no longer needed
  Remove drivers/parport/ChangeLog
  Remove drivers/char/ChangeLog
  doc: typo - Table 1-2 should refer to "status", not "statm"
  tree-wide: fix typos "ass?o[sc]iac?te" -> "associate" in comments
  No need to patch AMD-provided drivers/gpu/drm/radeon/atombios.h
  devres/irq: Fix devm_irq_match comment
  Remove reference to kthread_create_on_cpu
  tree-wide: Assorted spelling fixes
  tree-wide: fix 'lenght' typo in comments and code
  drm/kms: fix spelling in error message
  doc: capitalization and other minor fixes in pnp doc
  devres: typo fix s/dev/devm/
  Remove redundant trailing semicolons from macros
  fix typo "definetly" -> "definitely" in comment
  tree-wide: s/widht/width/g typo in comments
  ...

Fix trivial conflict in Documentation/laptops/00-INDEX
......@@ -488,7 +488,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip)
The ECC bytes must be placed immidiately after the data
bytes in order to make the syndrome generator work. This
is contrary to the usual layout used by software ECC. The
seperation of data and out of band area is not longer
separation of data and out of band area is not longer
possible. The nand driver code handles this layout and
the remaining free bytes in the oob area are managed by
the autoplacement code. Provide a matching oob-layout
......@@ -560,7 +560,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip)
bad blocks. They have factory marked good blocks. The marker pattern
is erased when the block is erased to be reused. So in case of
powerloss before writing the pattern back to the chip this block
would be lost and added to the bad blocks. Therefor we scan the
would be lost and added to the bad blocks. Therefore we scan the
chip(s) when we detect them the first time for good blocks and
store this information in a bad block table before erasing any
of the blocks.
......@@ -1094,7 +1094,7 @@ in this page</entry>
manufacturers specifications. This applies similar to the spare area.
</para>
<para>
Therefor NAND aware filesystems must either write in page size chunks
Therefore NAND aware filesystems must either write in page size chunks
or hold a writebuffer to collect smaller writes until they sum up to
pagesize. Available NAND aware filesystems: JFFS2, YAFFS.
</para>
......
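As a minimal sketch of the write-buffer approach described in the hunk above (the 2 KiB page size and the nand_page_write() flush hook are hypothetical placeholders, not an MTD API):

    /* Collect sub-page writes until a full NAND page can be programmed at once. */
    #include <stddef.h>
    #include <string.h>

    #define NAND_PAGE_SIZE 2048                  /* assumed page size */

    struct pagebuf {
        unsigned char data[NAND_PAGE_SIZE];
        size_t used;
        void (*nand_page_write)(const unsigned char *page);  /* hypothetical flush hook */
    };

    static void pagebuf_append(struct pagebuf *pb, const unsigned char *buf, size_t len)
    {
        while (len) {
            size_t room = NAND_PAGE_SIZE - pb->used;
            size_t n = len < room ? len : room;

            memcpy(pb->data + pb->used, buf, n);
            pb->used += n;
            buf += n;
            len -= n;

            if (pb->used == NAND_PAGE_SIZE) {    /* a full page has been collected */
                pb->nand_page_write(pb->data);   /* program exactly one page */
                pb->used = 0;
            }
        }
    }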
......@@ -1170,7 +1170,7 @@ frames per second. If less than this number of frames is to be
captured or output, applications can request frame skipping or
duplicating on the driver side. This is especially useful when using
the &func-read; or &func-write;, which are not augmented by timestamps
or sequence counters, and to avoid unneccessary data copying.</para>
or sequence counters, and to avoid unnecessary data copying.</para>
<para>Finally these ioctls can be used to determine the number of
buffers used internally by a driver in read/write mode. For
......
......@@ -55,7 +55,7 @@ captured or output, applications can request frame skipping or
duplicating on the driver side. This is especially useful when using
the <function>read()</function> or <function>write()</function>, which
are not augmented by timestamps or sequence counters, and to avoid
unneccessary data copying.</para>
unnecessary data copying.</para>
<para>Further these ioctls can be used to determine the number of
buffers used internally by a driver in read/write mode. For
......
......@@ -14,8 +14,8 @@ Introduction
how the clocks are arranged. The first implementation used as single
PLL to feed the ARM, memory and peripherals via a series of dividers
and muxes and this is the implementation that is documented here. A
newer version where there is a seperate PLL and clock divider for the
ARM core is available as a seperate driver.
newer version where there is a separate PLL and clock divider for the
ARM core is available as a separate driver.
Layout
......
......@@ -168,20 +168,20 @@ Each cpuset is represented by a directory in the cgroup file system
containing (on top of the standard cgroup files) the following
files describing that cpuset:
- cpus: list of CPUs in that cpuset
- mems: list of Memory Nodes in that cpuset
- memory_migrate flag: if set, move pages to cpusets nodes
- cpu_exclusive flag: is cpu placement exclusive?
- mem_exclusive flag: is memory placement exclusive?
- mem_hardwall flag: is memory allocation hardwalled
- memory_pressure: measure of how much paging pressure in cpuset
- memory_spread_page flag: if set, spread page cache evenly on allowed nodes
- memory_spread_slab flag: if set, spread slab cache evenly on allowed nodes
- sched_load_balance flag: if set, load balance within CPUs on that cpuset
- sched_relax_domain_level: the searching range when migrating tasks
- cpuset.cpus: list of CPUs in that cpuset
- cpuset.mems: list of Memory Nodes in that cpuset
- cpuset.memory_migrate flag: if set, move pages to cpusets nodes
- cpuset.cpu_exclusive flag: is cpu placement exclusive?
- cpuset.mem_exclusive flag: is memory placement exclusive?
- cpuset.mem_hardwall flag: is memory allocation hardwalled
- cpuset.memory_pressure: measure of how much paging pressure in cpuset
- cpuset.memory_spread_page flag: if set, spread page cache evenly on allowed nodes
- cpuset.memory_spread_slab flag: if set, spread slab cache evenly on allowed nodes
- cpuset.sched_load_balance flag: if set, load balance within CPUs on that cpuset
- cpuset.sched_relax_domain_level: the searching range when migrating tasks
In addition, the root cpuset only has the following file:
- memory_pressure_enabled flag: compute memory_pressure?
- cpuset.memory_pressure_enabled flag: compute memory_pressure?
New cpusets are created using the mkdir system call or shell
command. The properties of a cpuset, such as its flags, allowed
......@@ -229,7 +229,7 @@ If a cpuset is cpu or mem exclusive, no other cpuset, other than
a direct ancestor or descendant, may share any of the same CPUs or
Memory Nodes.
A cpuset that is mem_exclusive *or* mem_hardwall is "hardwalled",
A cpuset that is cpuset.mem_exclusive *or* cpuset.mem_hardwall is "hardwalled",
i.e. it restricts kernel allocations for page, buffer and other data
commonly shared by the kernel across multiple users. All cpusets,
whether hardwalled or not, restrict allocations of memory for user
......@@ -304,15 +304,15 @@ times 1000.
---------------------------
There are two boolean flag files per cpuset that control where the
kernel allocates pages for the file system buffers and related in
kernel data structures. They are called 'memory_spread_page' and
'memory_spread_slab'.
kernel data structures. They are called 'cpuset.memory_spread_page' and
'cpuset.memory_spread_slab'.
If the per-cpuset boolean flag file 'memory_spread_page' is set, then
If the per-cpuset boolean flag file 'cpuset.memory_spread_page' is set, then
the kernel will spread the file system buffers (page cache) evenly
over all the nodes that the faulting task is allowed to use, instead
of preferring to put those pages on the node where the task is running.
If the per-cpuset boolean flag file 'memory_spread_slab' is set,
If the per-cpuset boolean flag file 'cpuset.memory_spread_slab' is set,
then the kernel will spread some file system related slab caches,
such as for inodes and dentries evenly over all the nodes that the
faulting task is allowed to use, instead of preferring to put those
......@@ -337,21 +337,21 @@ their containing tasks memory spread settings. If memory spreading
is turned off, then the currently specified NUMA mempolicy once again
applies to memory page allocations.
Both 'memory_spread_page' and 'memory_spread_slab' are boolean flag
Both 'cpuset.memory_spread_page' and 'cpuset.memory_spread_slab' are boolean flag
files. By default they contain "0", meaning that the feature is off
for that cpuset. If a "1" is written to that file, then that turns
the named feature on.
The implementation is simple.
Setting the flag 'memory_spread_page' turns on a per-process flag
Setting the flag 'cpuset.memory_spread_page' turns on a per-process flag
PF_SPREAD_PAGE for each task that is in that cpuset or subsequently
joins that cpuset. The page allocation calls for the page cache
is modified to perform an inline check for this PF_SPREAD_PAGE task
flag, and if set, a call to a new routine cpuset_mem_spread_node()
returns the node to prefer for the allocation.
Similarly, setting 'memory_spread_slab' turns on the flag
Similarly, setting 'cpuset.memory_spread_slab' turns on the flag
PF_SPREAD_SLAB, and appropriately marked slab caches will allocate
pages from the node returned by cpuset_mem_spread_node().
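Schematically, the allocation-time check described above amounts to something like the following; the types, the flag value and the node rotor here are simplified stand-ins for illustration, not the kernel's actual implementation:

    #define PF_SPREAD_PAGE 0x01000000             /* per-task flag (illustrative value) */

    struct task { unsigned int flags; };

    static const int allowed_nodes[] = { 0, 1 };  /* toy stand-in for the cpuset's mems */
    static unsigned int spread_rotor;

    static int cpuset_mem_spread_node(void)       /* next allowed node, round-robin */
    {
        spread_rotor = (spread_rotor + 1) % (sizeof(allowed_nodes) / sizeof(allowed_nodes[0]));
        return allowed_nodes[spread_rotor];
    }

    static int pagecache_node(const struct task *tsk, int local_node)
    {
        if (tsk->flags & PF_SPREAD_PAGE)          /* cpuset.memory_spread_page in effect */
            return cpuset_mem_spread_node();      /* spread evenly over allowed nodes */
        return local_node;                        /* default: prefer the local node */
    }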
......@@ -404,24 +404,24 @@ the following two situations:
system overhead on those CPUs, including avoiding task load
balancing if that is not needed.
When the per-cpuset flag "sched_load_balance" is enabled (the default
setting), it requests that all the CPUs in that cpusets allowed 'cpus'
When the per-cpuset flag "cpuset.sched_load_balance" is enabled (the default
setting), it requests that all the CPUs in that cpusets allowed 'cpuset.cpus'
be contained in a single sched domain, ensuring that load balancing
can move a task (not otherwised pinned, as by sched_setaffinity)
from any CPU in that cpuset to any other.
When the per-cpuset flag "sched_load_balance" is disabled, then the
When the per-cpuset flag "cpuset.sched_load_balance" is disabled, then the
scheduler will avoid load balancing across the CPUs in that cpuset,
--except-- in so far as is necessary because some overlapping cpuset
has "sched_load_balance" enabled.
So, for example, if the top cpuset has the flag "sched_load_balance"
So, for example, if the top cpuset has the flag "cpuset.sched_load_balance"
enabled, then the scheduler will have one sched domain covering all
CPUs, and the setting of the "sched_load_balance" flag in any other
CPUs, and the setting of the "cpuset.sched_load_balance" flag in any other
cpusets won't matter, as we're already fully load balancing.
Therefore in the above two situations, the top cpuset flag
"sched_load_balance" should be disabled, and only some of the smaller,
"cpuset.sched_load_balance" should be disabled, and only some of the smaller,
child cpusets have this flag enabled.
When doing this, you don't usually want to leave any unpinned tasks in
......@@ -433,7 +433,7 @@ scheduler might not consider the possibility of load balancing that
task to that underused CPU.
Of course, tasks pinned to a particular CPU can be left in a cpuset
that disables "sched_load_balance" as those tasks aren't going anywhere
that disables "cpuset.sched_load_balance" as those tasks aren't going anywhere
else anyway.
There is an impedance mismatch here, between cpusets and sched domains.
......@@ -443,19 +443,19 @@ overlap and each CPU is in at most one sched domain.
It is necessary for sched domains to be flat because load balancing
across partially overlapping sets of CPUs would risk unstable dynamics
that would be beyond our understanding. So if each of two partially
overlapping cpusets enables the flag 'sched_load_balance', then we
overlapping cpusets enables the flag 'cpuset.sched_load_balance', then we
form a single sched domain that is a superset of both. We won't move
a task to a CPU outside it cpuset, but the scheduler load balancing
code might waste some compute cycles considering that possibility.
This mismatch is why there is not a simple one-to-one relation
between which cpusets have the flag "sched_load_balance" enabled,
between which cpusets have the flag "cpuset.sched_load_balance" enabled,
and the sched domain configuration. If a cpuset enables the flag, it
will get balancing across all its CPUs, but if it disables the flag,
it will only be assured of no load balancing if no other overlapping
cpuset enables the flag.
If two cpusets have partially overlapping 'cpus' allowed, and only
If two cpusets have partially overlapping 'cpuset.cpus' allowed, and only
one of them has this flag enabled, then the other may find its
tasks only partially load balanced, just on the overlapping CPUs.
This is just the general case of the top_cpuset example given a few
......@@ -468,23 +468,23 @@ load balancing to the other CPUs.
1.7.1 sched_load_balance implementation details.
------------------------------------------------
The per-cpuset flag 'sched_load_balance' defaults to enabled (contrary
The per-cpuset flag 'cpuset.sched_load_balance' defaults to enabled (contrary
to most cpuset flags.) When enabled for a cpuset, the kernel will
ensure that it can load balance across all the CPUs in that cpuset
(makes sure that all the CPUs in the cpus_allowed of that cpuset are
in the same sched domain.)
If two overlapping cpusets both have 'sched_load_balance' enabled,
If two overlapping cpusets both have 'cpuset.sched_load_balance' enabled,
then they will be (must be) both in the same sched domain.
If, as is the default, the top cpuset has 'sched_load_balance' enabled,
If, as is the default, the top cpuset has 'cpuset.sched_load_balance' enabled,
then by the above that means there is a single sched domain covering
the whole system, regardless of any other cpuset settings.
The kernel commits to user space that it will avoid load balancing
where it can. It will pick as fine a granularity partition of sched
domains as it can while still providing load balancing for any set
of CPUs allowed to a cpuset having 'sched_load_balance' enabled.
of CPUs allowed to a cpuset having 'cpuset.sched_load_balance' enabled.
The internal kernel cpuset to scheduler interface passes from the
cpuset code to the scheduler code a partition of the load balanced
......@@ -495,9 +495,9 @@ all the CPUs that must be load balanced.
The cpuset code builds a new such partition and passes it to the
scheduler sched domain setup code, to have the sched domains rebuilt
as necessary, whenever:
- the 'sched_load_balance' flag of a cpuset with non-empty CPUs changes,
- the 'cpuset.sched_load_balance' flag of a cpuset with non-empty CPUs changes,
- or CPUs come or go from a cpuset with this flag enabled,
- or 'sched_relax_domain_level' value of a cpuset with non-empty CPUs
- or 'cpuset.sched_relax_domain_level' value of a cpuset with non-empty CPUs
and with this flag enabled changes,
- or a cpuset with non-empty CPUs and with this flag enabled is removed,
- or a cpu is offlined/onlined.
......@@ -542,7 +542,7 @@ As the result, task B on CPU X need to wait task A or wait load balance
on the next tick. For some applications in special situation, waiting
1 tick may be too long.
The 'sched_relax_domain_level' file allows you to request changing
The 'cpuset.sched_relax_domain_level' file allows you to request changing
this searching range as you like. This file takes int value which
indicates size of searching range in levels ideally as follows,
otherwise initial value -1 that indicates the cpuset has no request.
......@@ -559,8 +559,8 @@ The system default is architecture dependent. The system default
can be changed using the relax_domain_level= boot parameter.
This file is per-cpuset and affect the sched domain where the cpuset
belongs to. Therefore if the flag 'sched_load_balance' of a cpuset
is disabled, then 'sched_relax_domain_level' have no effect since
belongs to. Therefore if the flag 'cpuset.sched_load_balance' of a cpuset
is disabled, then 'cpuset.sched_relax_domain_level' have no effect since
there is no sched domain belonging the cpuset.
If multiple cpusets are overlapping and hence they form a single sched
......@@ -607,9 +607,9 @@ from one cpuset to another, then the kernel will adjust the tasks
memory placement, as above, the next time that the kernel attempts
to allocate a page of memory for that task.
If a cpuset has its 'cpus' modified, then each task in that cpuset
If a cpuset has its 'cpuset.cpus' modified, then each task in that cpuset
will have its allowed CPU placement changed immediately. Similarly,
if a tasks pid is written to another cpusets 'tasks' file, then its
if a tasks pid is written to another cpusets 'cpuset.tasks' file, then its
allowed CPU placement is changed immediately. If such a task had been
bound to some subset of its cpuset using the sched_setaffinity() call,
the task will be allowed to run on any CPU allowed in its new cpuset,
......@@ -622,8 +622,8 @@ and the processor placement is updated immediately.
Normally, once a page is allocated (given a physical page
of main memory) then that page stays on whatever node it
was allocated, so long as it remains allocated, even if the
cpusets memory placement policy 'mems' subsequently changes.
If the cpuset flag file 'memory_migrate' is set true, then when
cpusets memory placement policy 'cpuset.mems' subsequently changes.
If the cpuset flag file 'cpuset.memory_migrate' is set true, then when
tasks are attached to that cpuset, any pages that task had
allocated to it on nodes in its previous cpuset are migrated
to the tasks new cpuset. The relative placement of the page within
......@@ -631,12 +631,12 @@ the cpuset is preserved during these migration operations if possible.
For example if the page was on the second valid node of the prior cpuset
then the page will be placed on the second valid node of the new cpuset.
Also if 'memory_migrate' is set true, then if that cpusets
'mems' file is modified, pages allocated to tasks in that
cpuset, that were on nodes in the previous setting of 'mems',
Also if 'cpuset.memory_migrate' is set true, then if that cpusets
'cpuset.mems' file is modified, pages allocated to tasks in that
cpuset, that were on nodes in the previous setting of 'cpuset.mems',
will be moved to nodes in the new setting of 'mems.'
Pages that were not in the tasks prior cpuset, or in the cpusets
prior 'mems' setting, will not be moved.
prior 'cpuset.mems' setting, will not be moved.
There is an exception to the above. If hotplug functionality is used
to remove all the CPUs that are currently assigned to a cpuset,
......@@ -678,8 +678,8 @@ and then start a subshell 'sh' in that cpuset:
cd /dev/cpuset
mkdir Charlie
cd Charlie
/bin/echo 2-3 > cpus
/bin/echo 1 > mems
/bin/echo 2-3 > cpuset.cpus
/bin/echo 1 > cpuset.mems
/bin/echo $$ > tasks
sh
# The subshell 'sh' is now running in cpuset Charlie
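The same sequence can be driven from C; a minimal sketch, assuming the cpuset filesystem is mounted at /dev/cpuset as above and with error handling kept to a bare minimum:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void write_str(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return; }
        fputs(val, f);                 /* cpuset control files take plain text */
        fclose(f);
    }

    int main(void)
    {
        mkdir("/dev/cpuset/Charlie", 0755);                   /* create the cpuset */
        write_str("/dev/cpuset/Charlie/cpuset.cpus", "2-3");  /* CPUs 2 and 3 */
        write_str("/dev/cpuset/Charlie/cpuset.mems", "1");    /* memory node 1 */

        char pid[32];
        snprintf(pid, sizeof(pid), "%d", (int)getpid());
        write_str("/dev/cpuset/Charlie/tasks", pid);          /* attach this process */
        return 0;
    }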
......@@ -725,10 +725,13 @@ Now you want to do something with this cpuset.
In this directory you can find several files:
# ls
cpu_exclusive memory_migrate mems tasks
cpus memory_pressure notify_on_release
mem_exclusive memory_spread_page sched_load_balance
mem_hardwall memory_spread_slab sched_relax_domain_level
cpuset.cpu_exclusive cpuset.memory_spread_slab
cpuset.cpus cpuset.mems
cpuset.mem_exclusive cpuset.sched_load_balance
cpuset.mem_hardwall cpuset.sched_relax_domain_level
cpuset.memory_migrate notify_on_release
cpuset.memory_pressure tasks
cpuset.memory_spread_page
Reading them will give you information about the state of this cpuset:
the CPUs and Memory Nodes it can use, the processes that are using
......@@ -736,13 +739,13 @@ it, its properties. By writing to these files you can manipulate
the cpuset.
Set some flags:
# /bin/echo 1 > cpu_exclusive
# /bin/echo 1 > cpuset.cpu_exclusive
Add some cpus:
# /bin/echo 0-7 > cpus
# /bin/echo 0-7 > cpuset.cpus
Add some mems:
# /bin/echo 0-7 > mems
# /bin/echo 0-7 > cpuset.mems
Now attach your shell to this cpuset:
# /bin/echo $$ > tasks
......@@ -774,28 +777,28 @@ echo "/sbin/cpuset_release_agent" > /dev/cpuset/release_agent
This is the syntax to use when writing in the cpus or mems files
in cpuset directories:
# /bin/echo 1-4 > cpus -> set cpus list to cpus 1,2,3,4
# /bin/echo 1,2,3,4 > cpus -> set cpus list to cpus 1,2,3,4
# /bin/echo 1-4 > cpuset.cpus -> set cpus list to cpus 1,2,3,4
# /bin/echo 1,2,3,4 > cpuset.cpus -> set cpus list to cpus 1,2,3,4
To add a CPU to a cpuset, write the new list of CPUs including the
CPU to be added. To add 6 to the above cpuset:
# /bin/echo 1-4,6 > cpus -> set cpus list to cpus 1,2,3,4,6
# /bin/echo 1-4,6 > cpuset.cpus -> set cpus list to cpus 1,2,3,4,6
Similarly to remove a CPU from a cpuset, write the new list of CPUs
without the CPU to be removed.
To remove all the CPUs:
# /bin/echo "" > cpus -> clear cpus list
# /bin/echo "" > cpuset.cpus -> clear cpus list
2.3 Setting flags
-----------------
The syntax is very simple:
# /bin/echo 1 > cpu_exclusive -> set flag 'cpu_exclusive'
# /bin/echo 0 > cpu_exclusive -> unset flag 'cpu_exclusive'
# /bin/echo 1 > cpuset.cpu_exclusive -> set flag 'cpuset.cpu_exclusive'
# /bin/echo 0 > cpuset.cpu_exclusive -> unset flag 'cpuset.cpu_exclusive'
2.4 Attaching processes
-----------------------
......
......@@ -74,7 +74,7 @@ driver takes over the consoles vacated by the driver. Binding, on the other
hand, will bind the driver to the consoles that are currently occupied by a
system driver.
NOTE1: Binding and binding must be selected in Kconfig. It's under:
NOTE1: Binding and unbinding must be selected in Kconfig. It's under:
Device Drivers -> Character devices -> Support for binding and unbinding
console drivers
......
......@@ -192,7 +192,7 @@ command line. This will execute all matching early_param() callbacks.
User specified early platform devices will be registered at this point.
For the early serial console case the user can specify port on the
kernel command line as "earlyprintk=serial.0" where "earlyprintk" is
the class string, "serial" is the name of the platfrom driver and
the class string, "serial" is the name of the platform driver and
0 is the platform device id. If the id is -1 then the dot and the
id can be omitted.
......
......@@ -171,7 +171,7 @@ device.
virtual_root.force_probe :
Force the probing code to probe EISA slots even when it cannot find an
EISA compliant mainboard (nothing appears on slot 0). Defaultd to 0
EISA compliant mainboard (nothing appears on slot 0). Defaults to 0
(don't force), and set to 1 (force probing) when either
CONFIG_ALPHA_JENSEN or CONFIG_EISA_VLB_PRIMING are set.
......
......@@ -195,7 +195,7 @@ asynchronous manner and the vaule may not be very precise. To see a precise
snapshot of a moment, you can see /proc/<pid>/smaps file and scan page table.
It's slow but very precise.
Table 1-2: Contents of the statm files (as of 2.6.30-rc7)
Table 1-2: Contents of the status files (as of 2.6.30-rc7)
..............................................................................
Field Content
Name filename of the executable
......
......@@ -30,7 +30,7 @@ Supported chips:
bank1_types=1,1,0,0,0,0,0,2,0,0,0,0,2,0,0,1
You may also need to specify the fan_sensors option for these boards
fan_sensors=5
2) There is a seperate abituguru3 driver for these motherboards,
2) There is a separate abituguru3 driver for these motherboards,
the abituguru (without the 3 !) driver will not work on these
motherboards (and visa versa)!
......
......@@ -75,7 +75,7 @@ and the number of steps or will clamp at the maximum and zero depending on
the configuration.
Because GPIO to IRQ mapping is platform specific, this information must
be given in seperately to the driver. See the example below.
be given in separately to the driver. See the example below.
---------<snip>---------
......
......@@ -2,6 +2,10 @@
- This file
acer-wmi.txt
- information on the Acer Laptop WMI Extras driver.
asus-laptop.txt
- information on the Asus Laptop Extras driver.
disk-shock-protection.txt
- information on hard disk shock protection.
dslm.c
- Simple Disk Sleep Monitor program
laptop-mode.txt
......
......@@ -68,7 +68,7 @@ Compaq adapters (not tested):
=======================
From v2.01 on, the driver is integrated in the linux kernel sources.
Therefor, the installation is the same as for any other adapter
Therefore, the installation is the same as for any other adapter
supported by the kernel.
Refer to the manual of your distribution about the installation
of network adapters.
......
......@@ -370,7 +370,7 @@ int main(int argc, char **argv)
}
sock = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
if (socket < 0)
if (sock < 0)
bail("socket");
memset(&device, 0, sizeof(device));
......
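A stand-alone version of the corrected check, for illustration; the bail() helper here is a local stand-in for the one used in the document's example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static void bail(const char *msg) { perror(msg); exit(1); }

    int main(void)
    {
        int sock = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
        if (sock < 0)            /* test the returned descriptor, not 'socket' itself */
            bail("socket");
        /* ... use sock ... */
        return 0;
    }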
......@@ -57,7 +57,7 @@ PC standard floppy disk controller
# cat resources
DISABLED
- Notice the string "DISABLED". THis means the device is not active.
- Notice the string "DISABLED". This means the device is not active.
3.) check the device's possible configurations (optional)
# cat options
......@@ -139,7 +139,7 @@ Plug and Play but it is planned to be in the near future.
Requirements for a Linux PnP protocol:
1.) the protocol must use EISA IDs
2.) the protocol must inform the PnP Layer of a devices current configuration
2.) the protocol must inform the PnP Layer of a device's current configuration
- the ability to set resources is optional but preferred.
The following are PnP protocol related functions:
......@@ -158,7 +158,7 @@ pnp_remove_device
- automatically will free mem used by the device and related structures
pnp_add_id
- adds a EISA ID to the list of supported IDs for the specified device
- adds an EISA ID to the list of supported IDs for the specified device
For more information consult the source of a protocol such as
/drivers/pnp/pnpbios/core.c.
......@@ -167,7 +167,7 @@ For more information consult the source of a protocol such as
Linux Plug and Play Drivers
---------------------------
This section contains information for linux PnP driver developers.
This section contains information for Linux PnP driver developers.
The New Way
...........
......@@ -235,11 +235,10 @@ static int __init serial8250_pnp_init(void)
The Old Way
...........
a series of compatibility functions have been created to make it easy to convert
A series of compatibility functions have been created to make it easy to convert
ISAPNP drivers. They should serve as a temporary solution only.
they are as follows:
They are as follows:
struct pnp_card *pnp_find_card(unsigned short vendor,
unsigned short device,
......
......@@ -256,7 +256,7 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
to suspend the device again in future
int pm_runtime_resume(struct device *dev);
- execute the subsystem-leve resume callback for the device; returns 0 on
- execute the subsystem-level resume callback for the device; returns 0 on
success, 1 if the device's run-time PM status was already 'active' or
error code on failure, where -EAGAIN means it may be safe to attempt to
resume the device again in future, but 'power.runtime_error' should be
......
......@@ -102,7 +102,7 @@ args: unsigned long
see also: include/linux/kvm.h
This ioctl stores the state of the cpu at the guest real address given as
argument, unless one of the following values defined in include/linux/kvm.h
is given as arguement:
is given as argument:
KVM_S390_STORE_STATUS_NOADDR - the CPU stores its status to the save area in
absolute lowcore as defined by the principles of operation
KVM_S390_STORE_STATUS_PREFIXED - the CPU stores its status to the save area in
......
......@@ -989,8 +989,8 @@ Changes from 20040709 to 20040716
* Remove redundant port_cmp != 2 check in if
(!port_cmp) { .... if (port_cmp != 2).... }
* Clock changes: removed struct clk_data and timerList.
* Clock changes: seperate nodev_tmo and els_retry_delay into 2
seperate timers and convert to 1 argument changed
* Clock changes: separate nodev_tmo and els_retry_delay into 2
separate timers and convert to 1 argument changed
LPFC_NODE_FARP_PEND_t to struct lpfc_node_farp_pend convert
ipfarp_tmo to 1 argument convert target struct tmofunc and
rtplunfunc to 1 argument * cr_count, cr_delay and
......@@ -1514,7 +1514,7 @@ Changes from 20040402 to 20040409
* Remove unused elxclock declaration in elx_sli.h.
* Since everywhere IOCB_ENTRY is used, the return value is cast,
move the cast into the macro.
* Split ioctls out into seperate files
* Split ioctls out into separate files
Changes from 20040326 to 20040402
......@@ -1534,7 +1534,7 @@ Changes from 20040326 to 20040402
* Unused variable cleanup
* Use Linux list macros for DMABUF_t
* Break up ioctls into 3 sections, dfc, util, hbaapi
rearranged code so this could be easily seperated into a
rearranged code so this could be easily separated into a
differnet module later All 3 are currently turned on by
defines in lpfc_ioctl.c LPFC_DFC_IOCTL, LPFC_UTIL_IOCTL,
LPFC_HBAAPI_IOCTL
......@@ -1551,7 +1551,7 @@ Changes from 20040326 to 20040402
started by lpfc_online(). lpfc_offline() only stopped
els_timeout routine. It now stops all timeout routines
associated with that hba.
* Replace seperate next and prev pointers in struct
* Replace separate next and prev pointers in struct
lpfc_bindlist with list_head type. In elxHBA_t, replace
fc_nlpbind_start and _end with fc_nlpbind_list and use
list_head macros to access it.
......
......@@ -1588,7 +1588,7 @@ module author does not need to worry about it.
When tracing is enabled, kstop_machine is called to prevent
races with the CPUS executing code being modified (which can
cause the CPU to do undesireable things), and the nops are
cause the CPU to do undesirable things), and the nops are
patched back to calls. But this time, they do not call mcount
(which is just a function stub). They now call into the ftrace
infrastructure.
......
......@@ -49,7 +49,7 @@ _start: add lr, pc, #-0x8 @ lr = current load addr
/*
* find the end of the tag list, and then add an INITRD tag on the end.
* If there is already an INITRD tag, then we ignore it; the last INITRD
* tag takes precidence.
* tag takes precedence.
*/
taglist: ldr r10, [r9, #0] @ tag length
teq r10, #0 @ last tag (zero length)?
......
......@@ -32,7 +32,7 @@ static DEFINE_MUTEX(clocks_mutex);
* If an entry has a device ID, it must match
* If an entry has a connection ID, it must match
* Then we take the most specific entry - with the following
* order of precidence: dev+con > dev only > con only.
* order of precedence: dev+con > dev only > con only.
*/
static struct clk *clk_find(const char *dev_id, const char *con_id)
{
......
......@@ -77,7 +77,7 @@
#define AT91_MCI_BLKR 0x18 /* Block Register */
#define AT91_MCI_BLKR_BCNT(n) ((0xffff & (n)) << 0) /* Block count */
#define AT91_MCI_BLKR_BLKLEN(n) ((0xffff & (n)) << 16) /* Block lenght */
#define AT91_MCI_BLKR_BLKLEN(n) ((0xffff & (n)) << 16) /* Block length */
#define AT91_MCI_RSPR(n) (0x20 + ((n) * 4)) /* Response Registers 0-3 */
#define AT91_MCR_RDR 0x30 /* Receive Data Register */
......
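For illustration, the two fields above pack into a single block register value like this (the macro definitions are repeated so the snippet stands alone; the 8 x 512-byte transfer is just an example):

    #define AT91_MCI_BLKR_BCNT(n)   ((0xffff & (n)) << 0)    /* block count  */
    #define AT91_MCI_BLKR_BLKLEN(n) ((0xffff & (n)) << 16)   /* block length */

    /* e.g. 8 blocks of 512 bytes each */
    unsigned int blkr = AT91_MCI_BLKR_BCNT(8) | AT91_MCI_BLKR_BLKLEN(512);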
/*
* DaVinci I2C controller platfrom_device info
* DaVinci I2C controller platform_device info
*
* Author: Vladimir Barinov, MontaVista Software, Inc. <source@mvista.com>
*
......
......@@ -28,7 +28,7 @@
*
* Micro9-High has up to 64MB of 32-bit flash on CS1
* Micro9-Mid has up to 64MB of either 32-bit or 16-bit flash on CS1
* Micro9-Lite uses a seperate MTD map driver for flash support
* Micro9-Lite uses a separate MTD map driver for flash support
* Micro9-Slim has up to 64MB of either 32-bit or 16-bit flash on CS1
*************************************************************************/
static struct physmap_flash_data micro9_flash_data;
......
......@@ -38,7 +38,7 @@
#define SRC_CR_INIT_MASK 0x00007fff
#define SRC_CR_INIT_VAL 0x2aaa8000
/* These adresses span 16MB, so use three individual pages */
/* These addresses span 16MB, so use three individual pages */
static struct resource nhk8815_nand_resources[] = {
{
.name = "nand_addr",
......
File mode changed from 100755 to 100644
File mode changed from 100755 to 100644
......@@ -3,7 +3,7 @@
* Copyright (c) 2006 Simtec Electronics
* Ben Dooks <ben@simtec.co.uk>
*
* S3C2410 - SPI Controller platfrom_device info
* S3C2410 - SPI Controller platform_device info
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
......
......@@ -28,8 +28,8 @@
char *s5pc100_hsmmc_clksrcs[4] = {
[0] = "hsmmc",
[1] = "hsmmc",
/* [2] = "mmc_bus", not yet succesfuuly used yet */
/* [3] = "48m", - note not succesfully used yet */
/* [2] = "mmc_bus", not yet successfully used yet */
/* [3] = "48m", - note not successfully used yet */
};
......
......@@ -358,7 +358,7 @@ static struct resource ave_resources[] = {
/*
* The AVE3e requires two regions of 256MB that it considers
* "invisible". The hardware will not be able to access these
* adresses, so they should never point to system RAM.
* addresses, so they should never point to system RAM.
*/
{
.name = "AVE3e Reserved 0",
......@@ -1596,7 +1596,7 @@ static void __init u300_init_check_chip(void)
/*
* Some devices and their resources require reserved physical memory from
* the end of the available RAM. This function traverses the list of devices
* and assigns actual adresses to these.
* and assigns actual addresses to these.
*/
static void __init u300_assign_physmem(void)
{
......
......@@ -11,7 +11,7 @@
#include <mach/hardware.h>
.macro addruart, rx, tmp
/* If we move the adress using MMU, use this. */
/* If we move the address using MMU, use this. */
mrc p15, 0, \rx, c1, c0
tst \rx, #1 @ MMU enabled?
ldreq \rx, = U300_SLOW_PER_PHYS_BASE @ MMU off, physical address
......
......@@ -135,7 +135,7 @@ struct s3c_cpufreq_config {
* @locktime_m: The lock-time in uS for the MPLL.
* @locktime_u: The lock-time in uS for the UPLL.
* @locttime_bits: The number of bits each LOCKTIME field.
* @need_pll: Set if this driver needs to change the PLL values to acheive
* @need_pll: Set if this driver needs to change the PLL values to achieve
* any frequency changes. This is really only need by devices like the
* S3C2410 where there is no or limited divider between the PLL and the
* ARMCLK.
......
......@@ -78,7 +78,7 @@ extern int s3c_gpio_setcfg_s3c24xx_a(struct s3c_gpio_chip *chip,
* others = Special functions (dependant on bank)
*
* Note, since the code to deal with the case where there are two control
* registers instead of one, we do not have a seperate set of functions for
* registers instead of one, we do not have a separate set of functions for
* each case.
*/
extern int s3c_gpio_setcfg_s3c64xx_4bit(struct s3c_gpio_chip *chip,
......
......@@ -3,7 +3,7 @@
* Copyright (c) 2004 Simtec Electronics
* Ben Dooks <ben@simtec.co.uk>
*
* S3C2410 - NAND device controller platfrom_device info
* S3C2410 - NAND device controller platform_device info
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
......
......@@ -12,7 +12,7 @@
* published by the Free Software Foundation.
*/
/* Note, this is a seperate header file as some of the clock framework
/* Note, this is a separate header file as some of the clock framework
* needs to touch this if the clk_48m is used as the USB OHCI or other
* peripheral source.
*/
......
/*
* BF5XX - NAND flash controller platfrom_device info
* BF5XX - NAND flash controller platform_device info
*
* Copyright 2007-2008 Analog Devices, Inc.
*
......@@ -8,7 +8,7 @@
/* struct bf5xx_nand_platform
*
* define a interface between platfrom board specific code and
* define a interface between platform board specific code and
* bf54x NFC driver.
*
* nr_partitions = number of partitions pointed to be partitoons (or zero)
......
......@@ -77,7 +77,7 @@ __wsum csum_partial(const void *p, int len, __wsum __sum)
sum += *buff++;
if (endMarker > buff)
sum += *(const u8 *)buff; /* add extra byte seperately */
sum += *(const u8 *)buff; /* add extra byte separately */
BITOFF;
return (__force __wsum)sum;
......
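The "extra byte" case noted in that comment is the usual odd-length tail of a ones' complement checksum; a generic, non-architecture-specific sketch of the idea (not the csum_partial() from the diff):

    #include <stddef.h>
    #include <stdint.h>

    static uint16_t ones_complement_sum(const uint8_t *buf, size_t len)
    {
        uint32_t sum = 0;

        while (len > 1) {                          /* sum 16-bit words */
            sum += ((uint32_t)buf[0] << 8) | buf[1];
            buf += 2;
            len -= 2;
        }
        if (len)                                   /* add the extra byte separately */
            sum += (uint32_t)buf[0] << 8;

        while (sum >> 16)                          /* fold carries back into 16 bits */
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)sum;
    }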
......@@ -189,7 +189,7 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
spin_unlock(&mmu_context_lock);
/*
* Remember the pgd for the fault handlers. Keep a seperate
* Remember the pgd for the fault handlers. Keep a separate
* copy of it because current and active_mm might be invalid
* at points where * there's still a need to derefer the pgd.
*/
......
......@@ -25,7 +25,7 @@
* memory location directly.
*/
/* ++roman: The assignments to temp. vars avoid that gcc sometimes generates
* two accesses to memory, which may be undesireable for some devices.
* two accesses to memory, which may be undesirable for some devices.
*/
/*
......
......@@ -241,7 +241,7 @@ static void __cpuinit sn_check_for_wars(void)
* Note: This stuff is duped here because Altix requires the PCDP to
* locate a usable VGA device due to lack of proper ACPI support. Structures
* could be used from drivers/firmware/pcdp.h, but it was decided that moving
* this file to a more public location just for Altix use was undesireable.
* this file to a more public location just for Altix use was undesirable.
*/
struct hcdp_uart_desc {
......
......@@ -121,7 +121,7 @@ KEYBOARD_STATE kb_state;
* bytes have been lost and in which state of the packet structure we are now.
* This usually causes keyboards bytes to be interpreted as mouse movements
* and vice versa, which is very annoying. It seems better to throw away some
* bytes (that are usually mouse bytes) than to misinterpret them. Therefor I
* bytes (that are usually mouse bytes) than to misinterpret them. Therefore I
* introduced the RESYNC state for IKBD data. In this state, the bytes up to
* one that really looks like a key event (0x04..0xf2) or the start of a mouse
* packet (0xf8..0xfb) are thrown away, but at most 2 bytes. This at least
......
......@@ -173,7 +173,7 @@ struct mdi_cfginfo {
int mdi_ncluts; /* Number of implemented CLUTs in this MDI */
int mdi_type; /* FBTYPE name */
int mdi_height; /* height */
int mdi_width; /* widht */
int mdi_width; /* width */
int mdi_size; /* available ram */
int mdi_mode; /* 8bpp, 16bpp or 32bpp */
int mdi_pixfreq; /* pixel clock (from PROM) */
......
......@@ -16,7 +16,7 @@
* memory location directly.
*/
/* ++roman: The assignments to temp. vars avoid that gcc sometimes generates
* two accesses to memory, which may be undesireable for some devices.
* two accesses to memory, which may be undesirable for some devices.
*/
/*
......
......@@ -490,7 +490,7 @@
compatible = "cfi-flash";
/*
* The Intel P30 chip has 2 non-identical chips on
* one die, so we need to define 2 seperate regions
* one die, so we need to define 2 separate regions
* that are scanned by physmap_of independantly.
*/
reg = <0 0x00000000 0x02000000
......
......@@ -469,7 +469,7 @@ static int __init serial_dev_init(void)
return -ENODEV;
/*
* Before we register the platfrom serial devices, we need
* Before we register the platform serial devices, we need
* to fixup their interrupts and their IO ports.
*/
DBG("Fixing serial ports interrupts and IO ports ...\n");
......
......@@ -47,7 +47,7 @@
#include "mmu_decl.h"
#if defined(CONFIG_KERNEL_START_BOOL) || defined(CONFIG_LOWMEM_SIZE_BOOL)
/* The ammount of lowmem must be within 0xF0000000 - KERNELBASE. */
/* The amount of lowmem must be within 0xF0000000 - KERNELBASE. */
#if (CONFIG_LOWMEM_SIZE > (0xF0000000 - PAGE_OFFSET))
#error "You must adjust CONFIG_LOWMEM_SIZE or CONFIG_START_KERNEL"
#endif
......
......@@ -79,7 +79,7 @@ int s390_sha_final(struct shash_desc *desc, u8 *out)
memset(ctx->buf + index, 0x00, end - index - 8);
/*
* Append message length. Well, SHA-512 wants a 128 bit lenght value,
* Append message length. Well, SHA-512 wants a 128 bit length value,
* nevertheless we use u64, should be enough for now...
*/
bits = ctx->count * 8;
......
......@@ -20,7 +20,7 @@
/**
* struct ccw1 - channel command word
* @cmd_code: command code
* @flags: flags, like IDA adressing, etc.
* @flags: flags, like IDA addressing, etc.
* @count: byte count
* @cda: data address
*
......
......@@ -235,7 +235,7 @@ _sclp_print:
lh %r9,0(%r8) # update sccb length
ar %r9,%r6
sth %r9,0(%r8)
ar %r7,%r6 # update current mto adress
ar %r7,%r6 # update current mto address
ltr %r0,%r0 # more characters?
jnz .LinitmtoS4
l %r2,.LwritedataS4-.LbaseS4(%r13)# write data
......
......@@ -404,7 +404,7 @@ EXPORT_SYMBOL_GPL(clk_round_rate);
* If an entry has a device ID, it must match
* If an entry has a connection ID, it must match
* Then we take the most specific entry - with the following
* order of precidence: dev+con > dev only > con only.
* order of precedence: dev+con > dev only > con only.
*/
static struct clk *clk_find(const char *dev_id, const char *con_id)
{
......
......@@ -173,7 +173,7 @@ struct mdi_cfginfo {
int mdi_ncluts; /* Number of implemented CLUTs in this MDI */
int mdi_type; /* FBTYPE name */
int mdi_height; /* height */
int mdi_width; /* widht */
int mdi_width; /* width */
int mdi_size; /* available ram */
int mdi_mode; /* 8bpp, 16bpp or 32bpp */
int mdi_pixfreq; /* pixel clock (from PROM) */
......
......@@ -1353,7 +1353,7 @@ static void perf_callchain_user_32(struct pt_regs *regs,
}
/* Like powerpc we can't get PMU interrupts within the PMU handler,
* so no need for seperate NMI and IRQ chains as on x86.
* so no need for separate NMI and IRQ chains as on x86.
*/
static DEFINE_PER_CPU(struct perf_callchain_entry, callchain);
......
......@@ -22,7 +22,7 @@
#include <asm/asm-offsets.h>
/* return adress at 0 */
/* return address at 0 */
#define in_blk 12 /* input byte array address parameter*/
#define out_blk 8 /* output byte array address parameter*/
......@@ -230,8 +230,8 @@ twofish_enc_blk:
push %edi
mov tfm + 16(%esp), %ebp /* abuse the base pointer: set new base bointer to the crypto tfm */
add $crypto_tfm_ctx_offset, %ebp /* ctx adress */
mov in_blk+16(%esp),%edi /* input adress in edi */
add $crypto_tfm_ctx_offset, %ebp /* ctx address */
mov in_blk+16(%esp),%edi /* input address in edi */
mov (%edi), %eax
mov b_offset(%edi), %ebx
......@@ -286,8 +286,8 @@ twofish_dec_blk:
mov tfm + 16(%esp), %ebp /* abuse the base pointer: set new base bointer to the crypto tfm */
add $crypto_tfm_ctx_offset, %ebp /* ctx adress */
mov in_blk+16(%esp),%edi /* input adress in edi */
add $crypto_tfm_ctx_offset, %ebp /* ctx address */
mov in_blk+16(%esp),%edi /* input address in edi */
mov (%edi), %eax
mov b_offset(%edi), %ebx
......
......@@ -221,11 +221,11 @@
twofish_enc_blk:
pushq R1
/* %rdi contains the crypto tfm adress */
/* %rsi contains the output adress */
/* %rdx contains the input adress */
add $crypto_tfm_ctx_offset, %rdi /* set ctx adress */
/* ctx adress is moved to free one non-rex register
/* %rdi contains the crypto tfm address */
/* %rsi contains the output address */
/* %rdx contains the input address */
add $crypto_tfm_ctx_offset, %rdi /* set ctx address */
/* ctx address is moved to free one non-rex register
as target for the 8bit high operations */
mov %rdi, %r11
......@@ -274,11 +274,11 @@ twofish_enc_blk:
twofish_dec_blk:
pushq R1
/* %rdi contains the crypto tfm adress */
/* %rsi contains the output adress */
/* %rdx contains the input adress */
add $crypto_tfm_ctx_offset, %rdi /* set ctx adress */
/* ctx adress is moved to free one non-rex register
/* %rdi contains the crypto tfm address */
/* %rsi contains the output address */
/* %rdx contains the input address */
add $crypto_tfm_ctx_offset, %rdi /* set ctx address */
/* ctx address is moved to free one non-rex register
as target for the 8bit high operations */
mov %rdi, %r11
......
......@@ -223,7 +223,7 @@ struct apic apic_flat = {
};
/*
* Physflat mode is used when there are more than 8 CPUs on a AMD system.
* Physflat mode is used when there are more than 8 CPUs on a system.
* We cannot use logical delivery in this case because the mask
* overflows, so use physical mode.
*/
......
......@@ -27,7 +27,7 @@
#define GET_CR2_INTO_RCX movq %cr2, %rcx
#endif
/* we are not able to switch in one step to the final KERNEL ADRESS SPACE
/* we are not able to switch in one step to the final KERNEL ADDRESS SPACE
* because we need identity-mapped pages.
*
*/
......
......@@ -1309,7 +1309,7 @@ static void calgary_init_bitmap_from_tce_table(struct iommu_table *tbl)
/*
* get_tce_space_from_tar():
* Function for kdump case. Get the tce tables from first kernel
* by reading the contents of the base adress register of calgary iommu
* by reading the contents of the base address register of calgary iommu
*/
static void __init get_tce_space_from_tar(void)
{
......
......@@ -38,7 +38,7 @@ int iommu_detected __read_mostly = 0;
* This variable becomes 1 if iommu=pt is passed on the kernel command line.
* If this variable is 1, IOMMU implementations do no DMA translation for
* devices and allow every device to access to whole physical memory. This is
* useful if a user want to use an IOMMU only for KVM device assignment to
* useful if a user wants to use an IOMMU only for KVM device assignment to
* guests and not for driver dma translation.
*/
int iommu_pass_through __read_mostly;
......
......@@ -581,7 +581,7 @@ ptrace_modify_breakpoint(struct perf_event *bp, int len, int type,
struct perf_event_attr attr;
/*
* We shoud have at least an inactive breakpoint at this
* We should have at least an inactive breakpoint at this
* slot. It means the user is writing dr7 without having
* written the address register first
*/
......
......@@ -50,7 +50,7 @@ u64 native_sched_clock(void)
* unstable. We do this because unlike Time Of Day,
* the scheduler clock tolerates small errors and it's
* very important for it to be as fast as the platform
* can achive it. )
* can achieve it. )
*/
if (unlikely(tsc_disabled)) {
/* No locking but a rare wrong value is not a big deal: */
......
......@@ -167,7 +167,7 @@ static int vmi_timer_next_event(unsigned long delta,
{
/* Unfortunately, set_next_event interface only passes relative
* expiry, but we want absolute expiry. It'd be better if were
* were passed an aboslute expiry, since a bunch of time may
* were passed an absolute expiry, since a bunch of time may
* have been stolen between the time the delta is computed and
* when we set the alarm below. */
cycle_t now = vmi_timer_ops.get_cycle_counter(vmi_counter(VMI_ONESHOT));
......
......@@ -361,7 +361,7 @@ static void xen_cpu_die(unsigned int cpu)
alternatives_smp_switch(0);
}
static void __cpuinit xen_play_dead(void) /* used only with CPU_HOTPLUG */
static void __cpuinit xen_play_dead(void) /* used only with HOTPLUG_CPU */
{
play_dead_common();
HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
......
......@@ -104,7 +104,7 @@
* excsave has been restored, and
* stack pointer (a1) has been set.
*
* Note: _user_exception might be at an odd adress. Don't use call0..call12
* Note: _user_exception might be at an odd address. Don't use call0..call12
*/
ENTRY(user_exception)
......@@ -244,7 +244,7 @@ _user_exception:
* excsave has been restored, and
* stack pointer (a1) has been set.
*
* Note: _kernel_exception might be at an odd adress. Don't use call0..call12
* Note: _kernel_exception might be at an odd address. Don't use call0..call12
*/
ENTRY(kernel_exception)
......
......@@ -260,7 +260,7 @@ bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm,
return ERR_PTR(ret);
/*
* map scatter-gather elements seperately and string them to request
* map scatter-gather elements separately and string them to request
*/
rq = blk_get_request(q, rw, GFP_KERNEL);
if (!rq)
......
......@@ -826,8 +826,8 @@ config CRYPTO_ANSI_CPRNG
help
This option enables the generic pseudo random number generator
for cryptographic modules. Uses the Algorithm specified in
ANSI X9.31 A.2.4. Not this option must be enabled if CRYPTO_FIPS
is selected
ANSI X9.31 A.2.4. Note that this option must be enabled if
CRYPTO_FIPS is selected
source "drivers/crypto/Kconfig"
......
......@@ -605,7 +605,7 @@ register_hotplug_dock_device(acpi_handle handle, struct acpi_dock_ops *ops,
list_for_each_entry(dock_station, &dock_stations, sibling) {
/*
* An ATA bay can be in a dock and itself can be ejected
* seperately, so there are two 'dock stations' which need the
* separately, so there are two 'dock stations' which need the
* ops
*/
dd = find_dock_dependent_device(dock_station, handle);
......
......@@ -435,7 +435,7 @@ acpi_system_write_wakeup_device(struct file *file,
found_dev->wakeup.gpe_device)) {
printk(KERN_WARNING
"ACPI: '%s' and '%s' have the same GPE, "
"can't disable/enable one seperately\n",
"can't disable/enable one separately\n",
dev->pnp.bus_id, found_dev->pnp.bus_id);
dev->wakeup.state.enabled =
found_dev->wakeup.state.enabled;
......
......@@ -2232,7 +2232,7 @@ int ata_dev_read_id(struct ata_device *dev, unsigned int *p_class,
* Some drives were very specific about that exact sequence.
*
* Note that ATA4 says lba is mandatory so the second check
* shoud never trigger.
* should never trigger.
*/
if (ata_id_major_version(id) < 4 || !ata_id_has_lba(id)) {
err_mask = ata_dev_init_params(dev, id[3], id[6]);
......
......@@ -2287,7 +2287,7 @@ EXPORT_SYMBOL_GPL(ata_sff_postreset);
* @qc: command
*
* Drain the FIFO and device of any stuck data following a command
* failing to complete. In some cases this is neccessary before a
* failing to complete. In some cases this is necessary before a
* reset will recover the device.
*
*/
......
......@@ -161,7 +161,7 @@ static void pacpi_set_dmamode(struct ata_port *ap, struct ata_device *adev)
*
* Called when the libata layer is about to issue a command. We wrap
* this interface so that we can load the correct ATA timings if
* neccessary.
* necessary.
*/
static unsigned int pacpi_qc_issue(struct ata_queued_cmd *qc)
......
......@@ -180,7 +180,7 @@ static void hpt3x3_init_chipset(struct pci_dev *dev)
* @id: Entry in match table
*
* Perform basic initialisation. We set the device up so we access all
* ports via BAR4. This is neccessary to work around errata.
* ports via BAR4. This is necessary to work around errata.
*/
static int hpt3x3_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
......
......@@ -131,7 +131,7 @@ static unsigned int ata_data_xfer_8bit(struct ata_device *dev,
* @qc: command
*
* Drain the FIFO and device of any stuck data following a command
* failing to complete. In some cases this is neccessary before a
* failing to complete. In some cases this is necessary before a
* reset will recover the device.
*
*/
......
......@@ -95,7 +95,7 @@ extern char usermode_helper[];
/* All EEs on the free list should have ID_VACANT (== 0)
* freshly allocated EEs get !ID_VACANT (== 1)
* so if it says "cannot dereference null pointer at adress 0x00000001",
* so if it says "cannot dereference null pointer at address 0x00000001",
* it is most likely one of these :( */
#define ID_IN_SYNC (4711ULL)
......@@ -1171,7 +1171,7 @@ extern int drbd_bitmap_io(struct drbd_conf *mdev, int (*io_fn)(struct drbd_conf
/* Meta data layout
We reserve a 128MB Block (4k aligned)
* either at the end of the backing device
* or on a seperate meta data device. */
* or on a separate meta data device. */
#define MD_RESERVED_SECT (128LU << 11) /* 128 MB, unit sectors */
/* The following numbers are sectors */
......
......@@ -57,7 +57,7 @@
*
* It may me handed over to the local disk subsystem.
* It may be completed by the local disk subsystem,
* either sucessfully or with io-error.
* either successfully or with io-error.
* In case it is a READ request, and it failed locally,
* it may be retried remotely.
*
......
......@@ -298,7 +298,7 @@ static void intel_agp_insert_sg_entries(struct agp_memory *mem,
j++;
}
} else {
/* sg may merge pages, but we have to seperate
/* sg may merge pages, but we have to separate
* per-page addr for GTT */
unsigned int len, m;
......
......@@ -14,7 +14,7 @@
/* et passe en argument a acinit, mais est scrute sur le bus pour s'adapter */
/* au nombre de cartes presentes sur le bus. IOCL code 6 affichait V2.4.3 */
/* F.LAFORSE 28/11/95 creation de fichiers acXX.o avec les differentes */
/* adresses de base des cartes, IOCTL 6 plus complet */
/* addresses de base des cartes, IOCTL 6 plus complet */
/* J.PAGET le 19/08/96 copie de la version V2.6 en V2.8.0 sans modification */
/* de code autre que le texte V2.6.1 en V2.8.0 */
/*****************************************************************************/
......
......@@ -353,7 +353,7 @@ static void hvc_close_event(struct HvLpEvent *event)
if (!hvlpevent_is_int(event)) {
printk(KERN_WARNING
"hvc: got unexpected close acknowlegement\n");
"hvc: got unexpected close acknowledgement\n");
return;
}
......
......@@ -71,7 +71,7 @@ MODULE_VERSION(DRV_MODULE_VERSION);
* x22 + x21 + x17 + x15 + x13 + x12 + x11 + x7 + x5 + x + 1
*
* The RNG_CTL_VCO value of each noise cell must be programmed
* seperately. This is why 4 control register values must be provided
* separately. This is why 4 control register values must be provided
* to the hypervisor. During a write, the hypervisor writes them all,
* one at a time, to the actual RNG_CTL register. The first three
* values are used to setup the desired RNG_CTL_VCO for each entropy
......
......@@ -559,7 +559,7 @@ Loadware may be sent to the board in two ways:
2) It may be hard-coded into your source by including a .h file (typically
supplied by Computone), which declares a data array and initializes every
element. This acheives the same result as if an entire loadware file had
element. This achieves the same result as if an entire loadware file had
been read into the array.
This requires more data space in your program, but access to the file system
......
......@@ -220,7 +220,7 @@ static void pty_set_termios(struct tty_struct *tty,
* @tty: tty being resized
* @ws: window size being set.
*
* Update the termios variables and send the neccessary signals to
* Update the termios variables and send the necessary signals to
* peform a terminal resize correctly
*/
......
......@@ -1191,7 +1191,7 @@ const struct file_operations urandom_fops = {
void generate_random_uuid(unsigned char uuid_out[16])
{
get_random_bytes(uuid_out, 16);
/* Set UUID version to 4 --- truely random generation */
/* Set UUID version to 4 --- truly random generation */
uuid_out[6] = (uuid_out[6] & 0x0F) | 0x40;
/* Set the UUID variant to DCE */
uuid_out[8] = (uuid_out[8] & 0x3F) | 0x80;
......
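A self-contained illustration of the two bit assignments above: fill the buffer with random bytes, then force the version and variant fields. Here rand() merely stands in for the kernel's get_random_bytes() and is not a suitable entropy source in practice:

    #include <stdint.h>
    #include <stdlib.h>

    static void toy_random_uuid(uint8_t uuid[16])
    {
        for (int i = 0; i < 16; i++)
            uuid[i] = (uint8_t)rand();            /* placeholder entropy source */

        uuid[6] = (uuid[6] & 0x0F) | 0x40;        /* version 4: randomly generated */
        uuid[8] = (uuid[8] & 0x3F) | 0x80;        /* variant: DCE / RFC 4122 */
    }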
......@@ -1989,7 +1989,7 @@ void mvme167_serial_console_setup(int cflag)
/*
* Attempt to set up all channels to something reasonable, and
* bang out a INIT_CHAN command. We should then be able to limit
* the ammount of fiddling we have to do in normal running.
* the amount of fiddling we have to do in normal running.
*/
for (ch = 3; ch >= 0; ch--) {
......
......@@ -2028,7 +2028,7 @@ static int tiocgwinsz(struct tty_struct *tty, struct winsize __user *arg)
* @rows: rows (character)
* @cols: cols (character)
*
* Update the termios variables and send the neccessary signals to
* Update the termios variables and send the necessary signals to
* peform a terminal resize correctly
*/
......
......@@ -821,7 +821,7 @@ static inline int resize_screen(struct vc_data *vc, int width, int height,
*
* Resize a virtual console, clipping according to the actual constraints.
* If the caller passes a tty structure then update the termios winsize
* information and perform any neccessary signal handling.
* information and perform any necessary signal handling.
*
* Caller must hold the console semaphore. Takes the termios mutex and
* ctrl_lock of the tty IFF a tty is passed.
......@@ -2119,8 +2119,6 @@ static int do_con_write(struct tty_struct *tty, const unsigned char *buf, int co
uint8_t inverse;
uint8_t width;
u16 himask, charmask;
const unsigned char *orig_buf = NULL;
int orig_count;
if (in_interrupt())
return count;
......@@ -2142,8 +2140,6 @@ static int do_con_write(struct tty_struct *tty, const unsigned char *buf, int co
release_console_sem();
return 0;
}
orig_buf = buf;
orig_count = count;
himask = vc->vc_hi_font_mask;
charmask = himask ? 0x1ff : 0xff;
......
......@@ -321,7 +321,7 @@ static atomic_t hifn_dev_number;
#define HIFN_PUBOPLEN_MOD_M 0x0000007f /* modulus length mask */
#define HIFN_PUBOPLEN_MOD_S 0 /* modulus length shift */
#define HIFN_PUBOPLEN_EXP_M 0x0003ff80 /* exponent length mask */
#define HIFN_PUBOPLEN_EXP_S 7 /* exponent lenght shift */
#define HIFN_PUBOPLEN_EXP_S 7 /* exponent length shift */
#define HIFN_PUBOPLEN_RED_M 0x003c0000 /* reducend length mask */
#define HIFN_PUBOPLEN_RED_S 18 /* reducend length shift */
......
......@@ -30,7 +30,7 @@ struct device;
* @pool: pool handle
* @dev: dma device
* @lli_nbr: number of lli:s in the pool
* @algin: adress alignemtn of lli:s
* @algin: address alignemtn of lli:s
* returns 0 on success otherwise none zero
*/
int coh901318_pool_create(struct coh901318_pool *pool,
......
......@@ -3545,7 +3545,7 @@ int nouveau_bios_parse_lvds_table(struct drm_device *dev, int pxclk, bool *dl, b
* at which modes should be set up in the dual link style.
*
* Following the header, the BMP (ver 0xa) table has several records,
* indexed by a seperate xlat table, indexed in turn by the fp strap in
* indexed by a separate xlat table, indexed in turn by the fp strap in
* EXTDEV_BOOT. Each record had a config byte, followed by 6 script
* numbers for use by INIT_SUB which controlled panel init and power,
* and finally a dword of ms to sleep between power off and on
......
......@@ -553,7 +553,7 @@ struct drm_nouveau_private {
uint32_t ramro_offset;
uint32_t ramro_size;
/* base physical adresses */
/* base physical addresses */
uint64_t fb_phys;
uint64_t fb_available_size;
uint64_t fb_mappable_pages;
......
......@@ -1093,7 +1093,7 @@ static void radeon_cp_dispatch_clear(struct drm_device * dev,
/* judging by the first tile offset needed, could possibly
directly address/clear 4x4 tiles instead of 8x2 * 4x4
macro tiles, though would still need clear mask for
right/bottom if truely 4x4 granularity is desired ? */
right/bottom if truly 4x4 granularity is desired ? */
OUT_RING(tileoffset * 16);
/* the number of tiles to clear */
OUT_RING(nrtilesx + 1);
......
......@@ -150,7 +150,7 @@ irqreturn_t via_driver_irq_handler(DRM_IRQ_ARGS)
cur_irq++;
}
/* Acknowlege interrupts */
/* Acknowledge interrupts */
VIA_WRITE(VIA_REG_INTERRUPT, status);
......@@ -165,7 +165,7 @@ static __inline__ void viadrv_acknowledge_irqs(drm_via_private_t * dev_priv)
u32 status;
if (dev_priv) {
/* Acknowlege interrupts */
/* Acknowledge interrupts */
status = VIA_READ(VIA_REG_INTERRUPT);
VIA_WRITE(VIA_REG_INTERRUPT, status |
dev_priv->irq_pending_mask);
......
......@@ -176,7 +176,7 @@ static int pcf_init_8584 (struct i2c_algo_pcf_data *adap)
*/
if (((temp = get_pcf(adap, 1)) & 0x7f) != (0)) {
DEB2(printk(KERN_ERR "i2c-algo-pcf.o: PCF detection failed -- can't select S0 (0x%02x).\n", temp));
return -ENXIO; /* definetly not PCF8584 */
return -ENXIO; /* definitely not PCF8584 */
}
/* load own address in S0, effective address is (own << 1) */
......
......@@ -12,7 +12,7 @@
*
* History:
* Apr 2002: Initial version [CS]
* Jun 2002: Properly seperated algo/adap [FB]
* Jun 2002: Properly separated algo/adap [FB]
* Jan 2003: Fixed several bugs concerning interrupt handling [Kai-Uwe Bloem]
* Jan 2003: added limited signal handling [Kai-Uwe Bloem]
* Sep 2004: Major rework to ensure efficient bus handling [RMK]
......
......@@ -1452,7 +1452,7 @@ static int __devinit add_card(struct pci_dev *dev,
PRINT(KERN_ERR, lynx->id, "unable to read bus info block from i2c");
} else {
PRINT(KERN_INFO, lynx->id, "got bus info block from serial eeprom");
/* FIXME: probably we shoud rewrite the max_rec, max_ROM(1394a),
/* FIXME: probably we should rewrite the max_rec, max_ROM(1394a),
* generation(1394a) and link_spd(1394a) field and recalculate
* the CRC */
......
......@@ -46,7 +46,7 @@
#include "ehca_tools.h"
/* virtual scatter gather entry to specify remote adresses with length */
/* virtual scatter gather entry to specify remote addresses with length */
struct ehca_vsgentry {
u64 vaddr;
u32 lkey;
......@@ -148,7 +148,7 @@ struct ehca_wqe {
u32 immediate_data;
union {
struct {
u64 remote_virtual_adress;
u64 remote_virtual_address;
u32 rkey;
u32 reserved;
u64 atomic_1st_op_dma_len;
......
......@@ -269,7 +269,7 @@ static inline int ehca_write_swqe(struct ehca_qp *qp,
/* no break is intentional here */
case IB_QPT_RC:
/* TODO: atomic not implemented */
wqe_p->u.nud.remote_virtual_adress =
wqe_p->u.nud.remote_virtual_address =
send_wr->wr.rdma.remote_addr;
wqe_p->u.nud.rkey = send_wr->wr.rdma.rkey;
......
......@@ -127,7 +127,7 @@ struct yld_ctl_packet {
* yld_status struct.
*/
/* LCD, each segment must be driven seperately.
/* LCD, each segment must be driven separately.
*
* Layout:
*
......
......@@ -430,7 +430,7 @@ static bool i8042_filter(unsigned char data, unsigned char str,
}
if (i8042_platform_filter && i8042_platform_filter(data, str, serio)) {
dbg("Filtered out by platfrom filter\n");
dbg("Filtered out by platform filter\n");
return true;
}
......
......@@ -362,7 +362,7 @@ static const int macroKeyEvents[] = {
};
/***********************************************************************
* Map values to strings and back. Every map shoudl have the following
* Map values to strings and back. Every map should have the following
* as its last element: { NULL, AIPTEK_INVALID_VALUE }.
*/
#define AIPTEK_INVALID_VALUE -1
......
......@@ -1712,13 +1712,13 @@ mISDNisar_init(struct isar_hw *isar, void *hw)
}
EXPORT_SYMBOL(mISDNisar_init);
static int isar_mod_init(void)
static int __init isar_mod_init(void)
{
pr_notice("mISDN: ISAR driver Rev. %s\n", ISAR_REV);
return 0;
}
static void isar_mod_cleanup(void)
static void __exit isar_mod_cleanup(void)
{
pr_notice("mISDN: ISAR module unloaded\n");
}
......
File mode changed from 100755 to 100644
File mode changed from 100755 to 100644