1. 23 Nov 2022, 1 commit
  2. 11 Oct 2022, 1 commit
  3. 04 Oct 2022, 1 commit
    • J
      mm: memcontrol: deprecate swapaccounting=0 mode · b25806dc
      Committed by Johannes Weiner
      The swapaccounting= commandline option already does very little today.  To
      close a trivial containment failure case, the swap ownership tracking part
      of the swap controller has recently become mandatory (see commit
      2d1c4980 ("mm: memcontrol: make swap tracking an integral part of
      memory control") for details), which makes up the majority of the work
      during swapout, swapin, and the swap slot map.
      
      The only thing left under this flag is the page_counter operations and the
      visibility of the swap control files in the first place, which are rather
      meager savings.  There also aren't many scenarios, if any, where
      controlling the memory of a cgroup while allowing it unlimited access to a
      global swap space is a workable resource isolation strategy.
      
      On the other hand, there have been several bugs and confusion around the
      many possible swap controller states (cgroup1 vs cgroup2 behavior, memory
      accounting without swap accounting, memcg runtime disabled).
      
      This puts the maintenance overhead of retaining the toggle above its
      practical benefits.  Deprecate it.
      
      Link: https://lkml.kernel.org/r/20220926135704.400818-3-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Suggested-by: Shakeel Butt <shakeelb@google.com>
      Reviewed-by: Shakeel Butt <shakeelb@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Roman Gushchin <roman.gushchin@linux.dev>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      b25806dc
  4. 01 Oct 2022, 1 commit
  5. 26 Sep 2022, 1 commit
  6. 16 Sep 2022, 1 commit
  7. 12 Sep 2022, 1 commit
    • L
      page_ext: introduce boot parameter 'early_page_ext' · c4f20f14
      Committed by Li Zhe
      In commit 2f1ee091 ("Revert "mm: use early_pfn_to_nid in
      page_ext_init""), we call page_ext_init() after page_alloc_init_late() to
      avoid a panic problem.  As a result, the current kernel cannot track
      early page allocations even if the page structures have been initialized
      early.
      
      This patch introduces a new boot parameter, 'early_page_ext', to
      resolve this problem.  When it is passed to the kernel, page_ext_init()
      is moved up, and the feature 'deferred initialization of struct pages'
      is disabled so that the page allocator is initialized early, preventing
      the panic problem above.  This makes it possible to catch early page
      allocations, which is especially useful when the amount of free memory
      differs right after booting different kernels.
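
      A presence-only boot flag like 'early_page_ext' boils down to scanning
      the command line for a bare token. A standalone sketch (the function and
      variable names are hypothetical; the real kernel uses the early_param()
      machinery instead):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch: a presence-only boot flag such as
 * 'early_page_ext' just flips a global when the bare token appears on
 * the kernel command line. The real kernel uses early_param(). */
static bool early_page_ext_enabled = false;

/* Return true if the space-separated command line contains 'flag' as
 * a whole token (not as a prefix of a longer word). */
static bool cmdline_has_flag(const char *cmdline, const char *flag)
{
    size_t flen = strlen(flag);
    const char *p = cmdline;

    while ((p = strstr(p, flag)) != NULL) {
        bool starts = (p == cmdline) || (p[-1] == ' ');
        bool ends = (p[flen] == '\0') || (p[flen] == ' ');
        if (starts && ends)
            return true;
        p += flen;
    }
    return false;
}

static void parse_boot_flags(const char *cmdline)
{
    if (cmdline_has_flag(cmdline, "early_page_ext"))
        early_page_ext_enabled = true;
}
```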
      
      [akpm@linux-foundation.org: fix section issue by removing __meminitdata]
      Link: https://lkml.kernel.org/r/20220825102714.669-1-lizhe.67@bytedance.com
      Signed-off-by: Li Zhe <lizhe.67@bytedance.com>
      Suggested-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Jason A. Donenfeld <Jason@zx2c4.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
      Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      c4f20f14
  8. 10 Sep 2022, 1 commit
  9. 07 Sep 2022, 1 commit
  10. 05 Sep 2022, 1 commit
    • N
      powerpc/pseries: Implement CONFIG_PARAVIRT_TIME_ACCOUNTING · 0e8a6313
      Committed by Nicholas Piggin
      CONFIG_VIRT_CPU_ACCOUNTING_GEN under pseries does not provide stolen
      time accounting unless CONFIG_PARAVIRT_TIME_ACCOUNTING is enabled.
      Implement this using the VPA accumulated wait counters.
      
      Note this will not work on current KVM hosts because KVM does not
      implement the VPA dispatch counters (yet). It could be implemented
      with the dispatch trace log as it is for VIRT_CPU_ACCOUNTING_NATIVE,
      but that is not necessary for the more limited accounting provided
      by PARAVIRT_TIME_ACCOUNTING, and it is more expensive, complex, and
      has downsides like potential log wrap.
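
      The accounting scheme described above amounts to charging the delta of
      a hypervisor-maintained accumulated wait counter as steal time. A
      minimal standalone sketch (the struct and function names are
      hypothetical, not the kernel's actual VPA interfaces):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of PARAVIRT_TIME_ACCOUNTING-style steal time:
 * the hypervisor keeps a monotonically increasing per-vCPU counter of
 * time spent waiting to be dispatched (as the VPA does on pseries),
 * and the guest charges the delta since the last snapshot as steal
 * time. Names are illustrative, not the kernel's. */
struct vcpu_steal {
    uint64_t last_accum; /* previous snapshot of the wait counter */
    uint64_t steal_ns;   /* total steal time charged so far */
};

/* Called periodically (e.g. from the tick): returns the newly charged
 * steal time and folds it into the running total. */
static uint64_t account_steal(struct vcpu_steal *vs, uint64_t accum_now)
{
    uint64_t delta = accum_now - vs->last_accum;

    vs->last_accum = accum_now;
    vs->steal_ns += delta;
    return delta;
}
```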
      
      From Shrikanth:
      
        [...] it was tested on a Power10 [PowerVM] shared LPAR setup. The
        system has two LPARs; we will call the first one LPAR1 and the second
        one LPAR2. The test was carried out with SMT=1; a similar observation
        was seen with SMT=8 as well.

        The LPAR config header from each LPAR is below. LPAR1 is twice as big
        as LPAR2. Since both share the same underlying hardware, work
        stealing happens when both LPARs contend for the same resources.
      
        LPAR1:
        type=Shared mode=Uncapped smt=Off lcpu=40 cpus=40 ent=20.00
        LPAR2:
        type=Shared mode=Uncapped smt=Off lcpu=20 cpus=40 ent=10.00
      
        mpstat was used to check utilization, with stress-ng as the
        workload. A few cases were tested. When both LPARs are idle there is
        no steal time. When LPAR1 runs at 100%, consuming all of the physical
        resources, steal time starts to be accounted. With LPAR1 at 100% and
        LPAR2 also running, steal time increases. This is as expected. When
        the LPAR2 load is increased further, steal time increases further.
      
        Case 1: 0% LPAR1; 0% LPAR2
         %usr  %nice   %sys %iowait  %irq  %soft %steal %guest %gnice  %idle
         0.00   0.00   0.05   0.00   0.00   0.00   0.00   0.00   0.00  99.95
      
        Case 2: 100% LPAR1; 0% LPAR2
         %usr  %nice   %sys %iowait  %irq  %soft %steal %guest %gnice  %idle
        97.68   0.00   0.00   0.00   0.00   0.00   2.32   0.00   0.00   0.00
      
        Case 3: 100% LPAR1; 50% LPAR2
         %usr  %nice   %sys %iowait  %irq  %soft %steal %guest %gnice  %idle
        86.34   0.00   0.10   0.00   0.00   0.03  13.54   0.00   0.00   0.00
      
        Case 4: 100% LPAR1; 100% LPAR2
         %usr  %nice   %sys %iowait  %irq  %soft %steal %guest %gnice  %idle
        78.54   0.00   0.07   0.00   0.00   0.02  21.36   0.00   0.00   0.00
      
        Case 5: 50% LPAR1; 100% LPAR2
         %usr  %nice   %sys %iowait  %irq  %soft %steal %guest %gnice  %idle
        49.37   0.00   0.00   0.00   0.00   0.00   1.17   0.00   0.00  49.47
      
        The patch accounts for the steal time, and the basic tests hold
        good.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Tested-by: Shrikanth Hegde <sshegde@linux.ibm.com>
      [mpe: Add SPDX tag to new paravirt_api_clock.h]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20220902085316.2071519-3-npiggin@gmail.com
      0e8a6313
  11. 01 Sep 2022, 1 commit
  12. 23 Aug 2022, 1 commit
    • M
      arm64: fix rodata=full · 2e8cff0a
      Committed by Mark Rutland
      On arm64, "rodata=full" has been supported (but not documented) since
      commit:
      
        c55191e9 ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
      
      As it's necessary to determine the rodata configuration early during
      boot, arm64 has an early_param() handler for this, whereas init/main.c
      has a __setup() handler which is run later.
      
      Unfortunately, this split meant that since commit:
      
        f9a40b08 ("init/main.c: return 1 from handled __setup() functions")
      
      ... passing "rodata=full" would result in a spurious warning from the
      __setup() handler (though RO permissions would be configured
      appropriately).
      
      Further, "rodata=full" has been broken since commit:
      
        0d6ea3ac ("lib/kstrtox.c: add "false"/"true" support to kstrtobool()")
      
      ... which caused strtobool() to parse "full" as false (in addition to
      many other values not documented for the "rodata=" kernel parameter).
      
      This patch fixes this breakage by:
      
      * Moving the core parameter parser to an __early_param(), such that it
        is available early.
      
      * Adding an (optional) arch hook which arm64 can use to parse "full".
      
      * Updating the documentation to mention that "full" is valid for arm64.
      
      * Having the core parameter parser handle "on" and "off" explicitly,
        such that any undocumented values (e.g. typos such as "ful") are
        reported as errors rather than being silently accepted.
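
      The parsing scheme in the list above can be sketched in standalone C
      (illustrative only; the function names and the exact hook signature are
      assumptions, not the kernel's real symbols):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Sketch of the stricter "rodata=" parser: the core accepts only
 * "on"/"off", and an optional arch hook may accept extra values such
 * as "full" on arm64. Names are illustrative. */
static bool rodata_enabled;
static bool rodata_full;

/* Hypothetical arch hook: arm64 would accept "full" here. */
static bool arch_parse_rodata(const char *arg)
{
    if (strcmp(arg, "full") == 0) {
        rodata_enabled = true;
        rodata_full = true;
        return true;
    }
    return false;
}

/* Core parser: returns 0 on success and -1 on unknown values, so that
 * typos like "ful" are reported instead of silently accepted. */
static int parse_rodata(const char *arg)
{
    if (strcmp(arg, "on") == 0) {
        rodata_enabled = true;
        return 0;
    }
    if (strcmp(arg, "off") == 0) {
        rodata_enabled = false;
        return 0;
    }
    return arch_parse_rodata(arg) ? 0 : -1;
}
```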
      
      Note that __setup() and early_param() have opposite conventions for
      their return values, where __setup() uses 1 to indicate a parameter was
      handled and early_param() uses 0 to indicate a parameter was handled.
      
      Fixes: f9a40b08 ("init/main.c: return 1 from handled __setup() functions")
      Fixes: 0d6ea3ac ("lib/kstrtox.c: add "false"/"true" support to kstrtobool()")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Jagdish Gediya <jvgediya@linux.ibm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/20220817154022.3974645-1-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      2e8cff0a
  13. 22 Aug 2022, 1 commit
  14. 09 Aug 2022, 2 commits
  15. 30 Jul 2022, 1 commit
  16. 22 Jul 2022, 1 commit
  17. 20 Jul 2022, 2 commits
    • J
      rcu/nocb: Add an option to offload all CPUs on boot · b37a667c
      Committed by Joel Fernandes
      Systems built with CONFIG_RCU_NOCB_CPU=y but booted without either
      the rcu_nocbs= or rcu_nohz_full= kernel-boot parameters will not have
      callback offloading on any of the CPUs, nor can any of the CPUs be
      switched to enable callback offloading at runtime.  Although this is
      intentional, it would be nice to have a way to offload all the CPUs
      without having to make random bootloaders specify either the rcu_nocbs=
      or the rcu_nohz_full= kernel-boot parameters.
      
      This commit therefore provides a new CONFIG_RCU_NOCB_CPU_DEFAULT_ALL
      Kconfig option that switches the default so as to offload callback
      processing on all of the CPUs.  This default can still be overridden
      using the rcu_nocbs= and rcu_nohz_full= kernel-boot parameters.
      Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
      Reviewed-by: Uladzislau Rezki <urezki@gmail.com>
      (In v4.1, fixed issues with CONFIG maze reported by kernel test robot.)
      Reported-by: kernel test robot <lkp@intel.com>
      Signed-off-by: Joel Fernandes <joel@joelfernandes.org>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
      Reviewed-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
      b37a667c
    • N
      srcu: Make expedited RCU grace periods block even less frequently · 4f2bfd94
      Committed by Neeraj Upadhyay
      The purpose of commit 282d8998 ("srcu: Prevent expedited GPs
      and blocking readers from consuming CPU") was to prevent a long
      series of never-blocking expedited SRCU grace periods from blocking
      kernel-live-patching (KLP) progress.  Although it was successful, it also
      resulted in excessive boot times on certain embedded workloads running
      under qemu with the "-bios QEMU_EFI.fd" command line.  Here "excessive"
      means increasing the boot time up into the three-to-four minute range.
      This increase in boot time was due to the more than 6000 back-to-back
      invocations of synchronize_rcu_expedited() within the KVM host OS, which
      in turn resulted from qemu's emulation of a long series of MMIO accesses.
      
      Commit 640a7d37c3f4 ("srcu: Block less aggressively for expedited grace
      periods") did not significantly help this particular use case.
      
      Zhangfei Gao and Shameerali Kolothum Thodi did experiments varying the
      value of SRCU_MAX_NODELAY_PHASE with HZ=250 and with various values
      of non-sleeping per phase counts on a system with preemption enabled,
      and observed the following boot times:
      
       +------------------------+---------------+
       | SRCU_MAX_NODELAY_PHASE | Boot time (s) |
       +------------------------+---------------+
       | 100                    | 30.053        |
       | 150                    | 25.151        |
       | 200                    | 20.704        |
       | 250                    | 15.748        |
       | 500                    | 11.401        |
       | 1000                   | 11.443        |
       | 10000                  | 11.258        |
       | 1000000                | 11.154        |
       +------------------------+---------------+
      
      Analysis of the experimental results shows additional improvement with
      CPU-bound delays approaching one jiffy in duration. This improvement was
      also seen when the number of per-phase iterations was scaled to one
      jiffy.
      
      This commit therefore scales per-grace-period phase number of non-sleeping
      polls so that non-sleeping polls extend for about one jiffy. In addition,
      the delay-calculation call to srcu_get_delay() in srcu_gp_end() is
      replaced with a simple check for an expedited grace period.  This change
      schedules callback invocation immediately after expedited grace periods
      complete, which results in greatly improved boot times.  Testing done
      by Marc and Zhangfei confirms that this change recovers most of the
      performance degradation in boottime; for CONFIG_HZ_250 configuration,
      specifically, boot times improve from 3m50s to 41s on Marc's setup;
      and from 2m40s to ~9.7s on Zhangfei's setup.
      
      In addition to the changes to the default per-phase delays, this
      change adds three new kernel parameters: srcutree.srcu_max_nodelay,
      srcutree.srcu_max_nodelay_phase, and srcutree.srcu_retry_check_delay.
      These allow users to configure the SRCU grace-period scanning delays in
      order to react more quickly to additional use cases.
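
      The "scale non-sleeping polls to about one jiffy" idea reduces to a
      small calculation. A hedged sketch (the constants and the function name
      are assumptions; the kernel derives its defaults from HZ and the
      srcutree.* parameters):

```c
#include <assert.h>

/* Illustrative only: pick the number of non-sleeping polls per
 * grace-period phase so that polling spans roughly one jiffy. With
 * HZ=250 a jiffy is 4,000,000 ns; if one retry check costs ~1,000 ns,
 * about 4,000 polls fill the jiffy. Constants are hypothetical. */
static unsigned long nodelay_phase_polls(unsigned long jiffy_ns,
                                         unsigned long per_poll_ns)
{
    unsigned long polls = jiffy_ns / per_poll_ns;

    return polls ? polls : 1; /* always poll at least once per phase */
}
```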
      
      Fixes: 640a7d37c3f4 ("srcu: Block less aggressively for expedited grace periods")
      Fixes: 282d8998 ("srcu: Prevent expedited GPs and blocking readers from consuming CPU")
      Reported-by: Zhangfei Gao <zhangfei.gao@linaro.org>
      Reported-by: yueluck <yueluck@163.com>
      Signed-off-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
      Tested-by: Marc Zyngier <maz@kernel.org>
      Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
      Link: https://lore.kernel.org/all/20615615-0013-5adc-584f-2b1d5c03ebfc@linaro.org/
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
      4f2bfd94
  18. 18 Jul 2022, 2 commits
    • J
      x86/rdrand: Remove "nordrand" flag in favor of "random.trust_cpu" · 049f9ae9
      Committed by Jason A. Donenfeld
      The decision of whether or not to trust RDRAND is controlled by the
      "random.trust_cpu" boot time parameter or the CONFIG_RANDOM_TRUST_CPU
      compile time default. The "nordrand" flag was added during the early
      days of RDRAND, when there were worries that merely using its values
      could compromise the RNG. However, these days, RDRAND values are not
      used directly but always go through the RNG's hash function, making
      "nordrand" no longer useful.
      
      Rather, the correct switch is "random.trust_cpu", which not only handles
      the relevant trust issue directly, but also is general to multiple CPU
      types, not just x86.
      
      However, x86 RDRAND does have a history of being occasionally
      problematic. Prior, when the kernel would notice something strange, it'd
      warn in dmesg and suggest enabling "nordrand". We can improve on that by
      making the test a little bit better and then taking the step of
      automatically disabling RDRAND if we detect it's problematic.
      
      Also disable RDSEED if the RDRAND test fails.
      
      Cc: x86@kernel.org
      Cc: Theodore Ts'o <tytso@mit.edu>
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Suggested-by: Borislav Petkov <bp@suse.de>
      Acked-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      049f9ae9
    • D
      init: add "hostname" kernel parameter · 5a704629
      Committed by Dan Moulding
      The gethostname system call returns the hostname for the current machine. 
      However, the kernel has no mechanism to initially set the current
      machine's name in such a way as to guarantee that the first userspace
      process to call gethostname will receive a meaningful result.  It relies
      on some unspecified userspace process to first call sethostname before
      gethostname can produce a meaningful name.
      
      Traditionally the machine's hostname is set from userspace by the init
      system.  The init system, in turn, often relies on a configuration file
      (say, /etc/hostname) to provide the value that it will supply in the call
      to sethostname.  Consequently, the file system containing /etc/hostname
      usually must be available before the hostname will be set.  There may,
      however, be earlier userspace processes that could call gethostname before
      the file system containing /etc/hostname is mounted.  Such a process will
      get some other, likely meaningless, name from gethostname (such as
      "(none)", "localhost", or "darkstar").
      
      A real-world example where this can happen, and lead to undesirable
      results, is with mdadm.  When assembling arrays, mdadm distinguishes
      between "local" arrays and "foreign" arrays.  A local array is one that
      properly belongs to the current machine, and a foreign array is one that
      is (possibly temporarily) attached to the current machine, but properly
      belongs to some other machine.  To determine if an array is local or
      foreign, mdadm may compare the "homehost" recorded on the array with the
      current hostname.  If mdadm is run before the root file system is mounted,
      perhaps because the root file system itself resides on an md-raid array,
      then /etc/hostname isn't yet available and the init system will not yet
      have called sethostname, causing mdadm to incorrectly conclude that all of
      the local arrays are foreign.
      
      Solving this problem *could* be delegated to the init system.  It could be
      left up to the init system (including any init system that starts within
      an initramfs, if one is in use) to ensure that sethostname is called
      before any other userspace process could possibly call gethostname. 
      However, it may not always be obvious which processes could call
      gethostname (for example, udev itself might not call gethostname, but it
      could via udev rules invoke processes that do).  Additionally, the init
      system has to ensure that the hostname configuration value is stored in
      some place where it will be readily accessible during early boot. 
      Unfortunately, every init system will attempt to (or has already attempted
      to) solve this problem in a different, possibly incorrect, way.  This
      makes getting consistently working configurations harder for users.
      
      I believe it is better for the kernel to provide the means by which the
      hostname may be set early, rather than making this a problem for the init
      system to solve.  The option to set the hostname during early startup, via
      a kernel parameter, provides a simple, reliable way to solve this problem.
      It also could make system configuration easier for some embedded systems.
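
      From userspace, the guarantee this parameter provides is simply that
      even the very first gethostname() call returns a meaningful name. A
      small sketch of such a check (the wrapper name is ours, not from the
      patch):

```c
#include <assert.h>
#include <stddef.h>
#include <unistd.h>

/* Sketch: with "hostname=myhost" on the kernel command line, even the
 * earliest userspace process should see a meaningful name from
 * gethostname() instead of a placeholder like "(none)". */
static int read_hostname(char *buf, size_t len)
{
    if (gethostname(buf, len) != 0)
        return -1;
    buf[len - 1] = '\0'; /* ensure NUL termination on truncation */
    return 0;
}
```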
      
      [dmoulding@me.com: v2]
        Link: https://lkml.kernel.org/r/20220506060310.7495-2-dmoulding@me.com
      Link: https://lkml.kernel.org/r/20220505180651.22849-2-dmoulding@me.com
      Signed-off-by: Dan Moulding <dmoulding@me.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      5a704629
  19. 13 Jul 2022, 1 commit
    • T
      swiotlb: split up the global swiotlb lock · 20347fca
      Committed by Tianyu Lan
      Traditionally swiotlb was not performance critical because it was only
      used for slow devices. But in some setups, like TDX/SEV confidential
      guests, all IO has to go through swiotlb. Currently swiotlb only has a
      single lock. Under high IO load with multiple CPUs this can lead to
      significant lock contention on the swiotlb lock.
      
      This patch splits the swiotlb bounce buffer pool into individual areas
      which have their own lock. Each CPU tries to allocate in its own area
      first. Only if that fails does it search other areas. On freeing the
      allocation is freed into the area where the memory was originally
      allocated from.
      
      The number of areas can be set via the swiotlb kernel parameter and
      defaults to the number of possible CPUs. If the possible CPU count is
      not a power of 2, the number of areas is rounded up to the next power
      of 2.

      This idea is from Andi Kleen's patch
      (https://github.com/intel/tdx/commit/4529b5784c141782c72ec9bd9a92df2b68cb7d45).
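
      The round-up step corresponds to the kernel's roundup_pow_of_two(); a
      standalone sketch of the same computation:

```c
#include <assert.h>
#include <stdint.h>

/* Round n up to the next power of 2 (n >= 1), as described for the
 * swiotlb area count when the possible CPU count is not a power of 2.
 * Equivalent in spirit to the kernel's roundup_pow_of_two(). */
static uint64_t next_pow2(uint64_t n)
{
    uint64_t p = 1;

    while (p < n)
        p <<= 1;
    return p;
}
```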
      Based-on-idea-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      20347fca
  20. 12 Jul 2022, 1 commit
  21. 08 Jul 2022, 1 commit
  22. 07 Jul 2022, 1 commit
  23. 04 Jul 2022, 1 commit
  24. 01 Jul 2022, 2 commits
  25. 29 Jun 2022, 2 commits
  26. 28 Jun 2022, 2 commits
  27. 27 Jun 2022, 4 commits
  28. 24 Jun 2022, 1 commit
    • D
      KVM: x86/mmu: Extend Eager Page Splitting to nested MMUs · ada51a9d
      Committed by David Matlack
      Add support for Eager Page Splitting pages that are mapped by nested
      MMUs. Walk through the rmap first splitting all 1GiB pages to 2MiB
      pages, and then splitting all 2MiB pages to 4KiB pages.
      
      Note, Eager Page Splitting is limited to nested MMUs as a policy rather
      than due to any technical reason (the sp->role.guest_mode check could
      just be deleted and Eager Page Splitting would work correctly for all
      shadow MMU pages). There is really no reason to support Eager Page
      Splitting for tdp_mmu=N, since such support will eventually be phased
      out, and there is no current use case supporting Eager Page Splitting on
      hosts where TDP is either disabled or unavailable in hardware.
      Furthermore, future improvements to nested MMU scalability may diverge
      the code from the legacy shadow paging implementation. These
      improvements will be simpler to make if Eager Page Splitting does not
      have to worry about legacy shadow paging.
      
      Splitting huge pages mapped by nested MMUs requires dealing with some
      extra complexity beyond that of the TDP MMU:
      
      (1) The shadow MMU has a limit on the number of shadow pages that are
          allowed to be allocated. So, as a policy, Eager Page Splitting
          refuses to split if there are KVM_MIN_FREE_MMU_PAGES or fewer
          pages available.
      
      (2) Splitting a huge page may end up re-using existing lower level
          shadow page tables. This is unlike the TDP MMU, which always
          allocates new shadow page tables when splitting.
      
      (3) When installing the lower level SPTEs, they must be added to the
          rmap which may require allocating additional pte_list_desc structs.
      
      Case (2) is especially interesting since it may require a TLB flush,
      unlike the TDP MMU which can fully split huge pages without any TLB
      flushes. Specifically, an existing lower level page table may point to
      even lower level page tables that are not fully populated, effectively
      unmapping a portion of the huge page, which requires a flush.  As of
      this commit, a flush is always done after dropping the huge page
      and before installing the lower level page table.
      
      This TLB flush could instead be delayed until the MMU lock is about to be
      dropped, which would batch flushes for multiple splits.  However these
      flushes should be rare in practice (a huge page must be aliased in
      multiple SPTEs and have been split for NX Huge Pages in only some of
      them). Flushing immediately is simpler to plumb and also reduces the
      chances of tripping over a CPU bug (e.g. see iTLB multihit).
      
      [ This commit is based off of the original implementation of Eager Page
        Splitting from Peter in Google's kernel from 2016. ]
      Suggested-by: Peter Feiner <pfeiner@google.com>
      Signed-off-by: David Matlack <dmatlack@google.com>
      Message-Id: <20220516232138.1783324-23-dmatlack@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ada51a9d
  29. 22 Jun 2022, 2 commits
  30. 14 Jun 2022, 1 commit
    • R
      docs: selinux: add '=' signs to kernel boot options · 8d6d51ed
      Committed by Randy Dunlap
      Provide the full kernel boot option strings (with the ending '='
      sign).  The options won't work without it, and that is how other boot
      options are listed.
      
      If used without an '=' sign (as listed here), they cause an "Unknown
      kernel command line parameters" warning and are added to init's
      argument strings, polluting them.
      
        Unknown kernel command line parameters "enforcing checkreqprot
          BOOT_IMAGE=/boot/bzImage-517rc6", will be passed to user space.
      
       Run /sbin/init as init process
         with arguments:
           /sbin/init
           enforcing
           checkreqprot
         with environment:
           HOME=/
           TERM=linux
           BOOT_IMAGE=/boot/bzImage-517rc6
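
      The pass-through behavior shown in the log follows from how registered
      boot options are matched: a registered "enforcing=" only matches tokens
      that actually contain the '=' sign, so a bare "enforcing" is
      unrecognized and forwarded to init. A simplified sketch (illustrative
      only, not the kernel's actual parser):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Simplified sketch: a registered boot option such as "enforcing="
 * matches only tokens of the form "enforcing=<value>". A bare
 * "enforcing" matches nothing, so the kernel warns and forwards it to
 * init's argv. */
static bool matches_registered_option(const char *token, const char *opt)
{
    size_t olen = strlen(opt); /* opt includes the trailing '=' */

    return strncmp(token, opt, olen) == 0;
}

/* Returns true if the token would be passed through to init. */
static bool passed_to_init(const char *token)
{
    static const char *registered[] = { "enforcing=", "checkreqprot=" };

    for (size_t i = 0; i < sizeof(registered) / sizeof(registered[0]); i++)
        if (matches_registered_option(token, registered[i]))
            return false;
    return true;
}
```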
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Stephen Smalley <stephen.smalley.work@gmail.com>
      Cc: Eric Paris <eparis@parisplace.org>
      Cc: selinux@vger.kernel.org
      Cc: Jonathan Corbet <corbet@lwn.net>
      [PM: removed bogus 'Fixes' line]
      Signed-off-by: Paul Moore <paul@paul-moore.com>
      8d6d51ed