Commit edf7b938 authored by Maciej W. Rozycki, committed by Ralf Baechle

MIPS: Random whitespace clean-ups

Another whitespace clean-up, this removes tabs from between sentences in
some comments.
Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6103/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Parent dc73e4c1
@@ -58,7 +58,7 @@
 /*
  * Memory segments (64bit kernel mode addresses)
- * The compatibility segments use the full 64-bit sign extended value.	Note
+ * The compatibility segments use the full 64-bit sign extended value.  Note
  * the R8000 doesn't have them so don't reference these in generic MIPS code.
  */
 #define XKUSEG			_CONST64_(0x0000000000000000)
@@ -131,7 +131,7 @@
 /*
  * The ultimate limited of the 64-bit MIPS architecture: 2 bits for selecting
- * the region, 3 bits for the CCA mode.	This leaves 59 bits of which the
+ * the region, 3 bits for the CCA mode.  This leaves 59 bits of which the
  * R8000 implements most with its 48-bit physical address space.
  */
 #define TO_PHYS_MASK	_CONST64_(0x07ffffffffffffff)	/* 2^^59 - 1 */
...
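Aside (not part of the commit): the segment constants above are used by masking away the region and CCA bits to recover a physical address. A minimal user-space sketch of that masking follows; the _CONST64_() stand-in and the cached-XKPHYS base value are local assumptions for illustration, not the kernel's definitions.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define _CONST64_(x)	x##ULL				/* stand-in for the kernel macro */
#define TO_PHYS_MASK	_CONST64_(0x07ffffffffffffff)	/* 2^^59 - 1 */
#define XKPHYS_CACHED	_CONST64_(0x9800000000000000)	/* assumed base: region 2, CCA 3 */

int main(void)
{
	uint64_t vaddr = XKPHYS_CACHED | 0x1000;	/* an example mapped address */
	uint64_t paddr = vaddr & TO_PHYS_MASK;		/* strip region/CCA bits */

	printf("virt 0x%016" PRIx64 " -> phys 0x%016" PRIx64 "\n", vaddr, paddr);
	return 0;
}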
 /*
- * Atomic operations that C can't guarantee us.	Useful for
+ * Atomic operations that C can't guarantee us.  Useful for
  * resource counting etc..
  *
  * But use these as seldom as possible since they are much more slower
...
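Aside (not part of the commit): the "resource counting" that the comment above refers to is the reference-count pattern. A small sketch using C11 atomics in place of the kernel's atomic_t API:

#include <stdatomic.h>
#include <stdio.h>

static atomic_int refcount = 1;		/* object starts with one owner */

static void get_ref(void)
{
	atomic_fetch_add(&refcount, 1);
}

static void put_ref(void)
{
	/* atomic_fetch_sub() returns the old value; 1 means we were last */
	if (atomic_fetch_sub(&refcount, 1) == 1)
		printf("last reference dropped; object may be freed\n");
}

int main(void)
{
	get_ref();	/* a second user takes a reference */
	put_ref();	/* ... and drops it */
	put_ref();	/* original owner drops the last reference */
	return 0;
}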
@@ -18,7 +18,7 @@
  * over this barrier.  All reads preceding this primitive are guaranteed
  * to access memory (but not necessarily other CPUs' caches) before any
  * reads following this primitive that depend on the data return by
- * any of the preceding reads.	This primitive is much lighter weight than
+ * any of the preceding reads.  This primitive is much lighter weight than
  * rmb() on most CPUs, and is never heavier weight than is
  * rmb().
  *
@@ -43,7 +43,7 @@
  * </programlisting>
  *
  * because the read of "*q" depends on the read of "p" and these
- * two reads are separated by a read_barrier_depends().	However,
+ * two reads are separated by a read_barrier_depends().  However,
  * the following code, with the same initial values for "a" and "b":
  *
  * <programlisting>
@@ -57,7 +57,7 @@
  * </programlisting>
  *
  * does not enforce ordering, since there is no data dependency between
- * the read of "a" and the read of "b".	Therefore, on some CPUs, such
+ * the read of "a" and the read of "b".  Therefore, on some CPUs, such
  * as Alpha, "y" could be set to 3 and "x" to 0.  Use rmb()
  * in cases like this where there are no data dependencies.
  */
...
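Aside (not part of the commit): the ordering that the read_barrier_depends() comment above describes can be sketched in portable C11, with memory_order_consume standing in for the dependent-read barrier and memory_order_release standing in for the writer-side wmb(). The writer/reader split below is illustrative, not the kernel's code.

#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static int a;
static _Atomic(int *) p;	/* starts out NULL */

static int writer(void *arg)
{
	(void)arg;
	a = 1;
	/* release: the store to 'a' is visible before 'p' is published */
	atomic_store_explicit(&p, &a, memory_order_release);
	return 0;
}

static int reader(void *arg)
{
	int *q;

	(void)arg;
	/* consume: the dereference below is ordered after this load */
	while (!(q = atomic_load_explicit(&p, memory_order_consume)))
		;
	printf("*q = %d\n", *q);	/* always prints 1 */
	return 0;
}

int main(void)
{
	thrd_t w, r;

	thrd_create(&w, writer, NULL);
	thrd_create(&r, reader, NULL);
	thrd_join(w, NULL);
	thrd_join(r, NULL);
	return 0;
}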