1. 11 Apr 2020, 1 commit
  2. 07 Apr 2020, 1 commit
  3. 06 Apr 2020, 1 commit
  4. 04 Apr 2020, 1 commit
  5. 03 Apr 2020, 3 commits
    • ipmi: kcs: aspeed: Implement v2 bindings · 09f5f680
      By Andrew Jeffery
      The v2 bindings allow us to extract the resources from the devicetree.
      The table in the driver is retained to derive the channel index, which
      removes the need for the kcs_chan property from the v1 bindings. The v2
      bindings allow us to reduce the number of warnings generated by the
      existing devicetree nodes.
      Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
      Reviewed-by: Joel Stanley <joel@jms.id.au>
      Reviewed-by: Haiyue Wang <haiyue.wang@linux.intel.com>
      Message-Id: <01ef3787e9ddaa9d87cfd55a2ac793053b5a69de.1576462051.git-series.andrew@aj.id.au>
      Signed-off-by: Corey Minyard <cminyard@mvista.com>
    • ipmi: kcs: Finish configuring ASPEED KCS device before enable · af6432c7
      By Andrew Jeffery
      The interrupts were configured after the channel was enabled. Configure
      them beforehand so they will work.
      Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
      Reviewed-by: Joel Stanley <joel@jms.id.au>
      Reviewed-by: Haiyue Wang <haiyue.wang@linux.intel.com>
      Message-Id: <c0aba2c9dfe2d0525e9cefd37995983ead0ec242.1576462051.git-series.andrew@aj.id.au>
      Signed-off-by: Corey Minyard <cminyard@mvista.com>
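      The ordering bug fixed here (enabling the channel before its interrupts were configured, so early interrupts were lost) can be pictured with a minimal userspace sketch. All struct and function names below are invented for illustration; they are not the aspeed-kcs driver's actual API.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Hypothetical device state; the real state lives in the KCS driver. */
      struct fake_kcs {
      	bool irq_configured;
      	bool enabled;
      	bool irq_seen;
      };

      /* Step 1 of the fixed ordering: configure interrupts first... */
      static void fake_configure_irq(struct fake_kcs *kcs)
      {
      	kcs->irq_configured = true;
      }

      /* ...step 2: only then enable the channel. */
      static void fake_enable(struct fake_kcs *kcs)
      {
      	kcs->enabled = true;
      }

      /* An interrupt is only handled if configuration preceded enable. */
      static void fake_fire_irq(struct fake_kcs *kcs)
      {
      	if (kcs->enabled && kcs->irq_configured)
      		kcs->irq_seen = true;
      }

      int main(void)
      {
      	struct fake_kcs kcs = { 0 };

      	fake_configure_irq(&kcs);	/* fixed order: configure... */
      	fake_enable(&kcs);		/* ...before enable */
      	fake_fire_irq(&kcs);
      	assert(kcs.irq_seen);
      	return 0;
      }
      ```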
    • ipmi: fix hung processes in __get_guid() · 32830a05
      By Wen Yang
      The wait_event() function is used to detect command completion.
      When send_guid_cmd() returns an error, smi_send() has not been
      called to send data. Therefore, wait_event() should not be used
      on the error path, otherwise it will cause the following warning:
      
      [ 1361.588808] systemd-udevd   D    0  1501   1436 0x00000004
      [ 1361.588813]  ffff883f4b1298c0 0000000000000000 ffff883f4b188000 ffff887f7e3d9f40
      [ 1361.677952]  ffff887f64bd4280 ffffc90037297a68 ffffffff8173ca3b ffffc90000000010
      [ 1361.767077]  00ffc90037297ad0 ffff887f7e3d9f40 0000000000000286 ffff883f4b188000
      [ 1361.856199] Call Trace:
      [ 1361.885578]  [<ffffffff8173ca3b>] ? __schedule+0x23b/0x780
      [ 1361.951406]  [<ffffffff8173cfb6>] schedule+0x36/0x80
      [ 1362.010979]  [<ffffffffa071f178>] get_guid+0x118/0x150 [ipmi_msghandler]
      [ 1362.091281]  [<ffffffff810d5350>] ? prepare_to_wait_event+0x100/0x100
      [ 1362.168533]  [<ffffffffa071f755>] ipmi_register_smi+0x405/0x940 [ipmi_msghandler]
      [ 1362.258337]  [<ffffffffa0230ae9>] try_smi_init+0x529/0x950 [ipmi_si]
      [ 1362.334521]  [<ffffffffa022f350>] ? std_irq_setup+0xd0/0xd0 [ipmi_si]
      [ 1362.411701]  [<ffffffffa0232bd2>] init_ipmi_si+0x492/0x9e0 [ipmi_si]
      [ 1362.487917]  [<ffffffffa0232740>] ? ipmi_pci_probe+0x280/0x280 [ipmi_si]
      [ 1362.568219]  [<ffffffff810021a0>] do_one_initcall+0x50/0x180
      [ 1362.636109]  [<ffffffff812231b2>] ? kmem_cache_alloc_trace+0x142/0x190
      [ 1362.714330]  [<ffffffff811b2ae1>] do_init_module+0x5f/0x200
      [ 1362.781208]  [<ffffffff81123ca8>] load_module+0x1898/0x1de0
      [ 1362.848069]  [<ffffffff811202e0>] ? __symbol_put+0x60/0x60
      [ 1362.913886]  [<ffffffff8130696b>] ? security_kernel_post_read_file+0x6b/0x80
      [ 1362.998514]  [<ffffffff81124465>] SYSC_finit_module+0xe5/0x120
      [ 1363.068463]  [<ffffffff81124465>] ? SYSC_finit_module+0xe5/0x120
      [ 1363.140513]  [<ffffffff811244be>] SyS_finit_module+0xe/0x10
      [ 1363.207364]  [<ffffffff81003c04>] do_syscall_64+0x74/0x180
      
      Fixes: 50c812b2 ("[PATCH] ipmi: add full sysfs support")
      Signed-off-by: Wen Yang <wenyang@linux.alibaba.com>
      Cc: Corey Minyard <minyard@acm.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: openipmi-developer@lists.sourceforge.net
      Cc: linux-kernel@vger.kernel.org
      Cc: stable@vger.kernel.org # 2.6.17-
      Message-Id: <20200403090408.58745-1-wenyang@linux.alibaba.com>
      Signed-off-by: Corey Minyard <cminyard@mvista.com>
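      The shape of the fix (return early when the send itself failed, rather than waiting for a completion that will never arrive) can be sketched in plain C. The names below are illustrative stand-ins, not the ipmi_msghandler internals.

      ```c
      #include <assert.h>

      /* Simulated send: returns 0 on success, negative errno on failure. */
      static int fake_send_guid_cmd(int should_fail)
      {
      	return should_fail ? -1 : 0;
      }

      /*
       * Before the fix, the caller fell through to the completion wait even
       * when the send failed, and hung forever. After the fix it bails out.
       */
      static int fake_get_guid(int should_fail)
      {
      	int rv = fake_send_guid_cmd(should_fail);

      	if (rv)
      		return rv;	/* do not wait: nothing was sent */

      	/* wait_event(...) for the command completion would go here */
      	return 0;
      }

      int main(void)
      {
      	assert(fake_get_guid(1) < 0);	/* error path returns, no hang */
      	assert(fake_get_guid(0) == 0);
      	return 0;
      }
      ```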
  6. 25 Mar 2020, 1 commit
  7. 19 Mar 2020, 4 commits
  8. 18 Mar 2020, 2 commits
  9. 17 Mar 2020, 1 commit
  10. 16 Mar 2020, 2 commits
  11. 13 Mar 2020, 9 commits
  12. 12 Mar 2020, 7 commits
  13. 28 Feb 2020, 6 commits
    • random: fix data races at timer_rand_state · e00d996a
      By Qian Cai
      Fields in "struct timer_rand_state" could be accessed concurrently.
      Lockless plain reads and writes result in data races. Fix them by adding
      pairs of READ|WRITE_ONCE(). The data races were reported by KCSAN,
      
       BUG: KCSAN: data-race in add_timer_randomness / add_timer_randomness
      
       write to 0xffff9f320a0a01d0 of 8 bytes by interrupt on cpu 22:
        add_timer_randomness+0x100/0x190
        add_timer_randomness at drivers/char/random.c:1152
        add_disk_randomness+0x85/0x280
        scsi_end_request+0x43a/0x4a0
        scsi_io_completion+0xb7/0x7e0
        scsi_finish_command+0x1ed/0x2a0
        scsi_softirq_done+0x1c9/0x1d0
        blk_done_softirq+0x181/0x1d0
        __do_softirq+0xd9/0x57c
        irq_exit+0xa2/0xc0
        do_IRQ+0x8b/0x190
        ret_from_intr+0x0/0x42
        cpuidle_enter_state+0x15e/0x980
        cpuidle_enter+0x69/0xc0
        call_cpuidle+0x23/0x40
        do_idle+0x248/0x280
        cpu_startup_entry+0x1d/0x1f
        start_secondary+0x1b2/0x230
        secondary_startup_64+0xb6/0xc0
      
       no locks held by swapper/22/0.
       irq event stamp: 32871382
       _raw_spin_unlock_irqrestore+0x53/0x60
       _raw_spin_lock_irqsave+0x21/0x60
       _local_bh_enable+0x21/0x30
       irq_exit+0xa2/0xc0
      
       read to 0xffff9f320a0a01d0 of 8 bytes by interrupt on cpu 2:
        add_timer_randomness+0xe8/0x190
        add_disk_randomness+0x85/0x280
        scsi_end_request+0x43a/0x4a0
        scsi_io_completion+0xb7/0x7e0
        scsi_finish_command+0x1ed/0x2a0
        scsi_softirq_done+0x1c9/0x1d0
        blk_done_softirq+0x181/0x1d0
        __do_softirq+0xd9/0x57c
        irq_exit+0xa2/0xc0
        do_IRQ+0x8b/0x190
        ret_from_intr+0x0/0x42
        cpuidle_enter_state+0x15e/0x980
        cpuidle_enter+0x69/0xc0
        call_cpuidle+0x23/0x40
        do_idle+0x248/0x280
        cpu_startup_entry+0x1d/0x1f
        start_secondary+0x1b2/0x230
        secondary_startup_64+0xb6/0xc0
      
       no locks held by swapper/2/0.
       irq event stamp: 37846304
       _raw_spin_unlock_irqrestore+0x53/0x60
       _raw_spin_lock_irqsave+0x21/0x60
       _local_bh_enable+0x21/0x30
       irq_exit+0xa2/0xc0
      
       Reported by Kernel Concurrency Sanitizer on:
       Hardware name: HP ProLiant BL660c Gen9, BIOS I38 10/17/2018
      
      Link: https://lore.kernel.org/r/1582648024-13111-1-git-send-email-cai@lca.pw
      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
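      The fix pattern, replacing plain loads and stores of the shared fields with READ_ONCE()/WRITE_ONCE(), can be sketched with the usual volatile-cast definitions (in the spirit of the kernel's compiler.h). The struct and field names below are illustrative stand-ins, not the random.c code.

      ```c
      #include <assert.h>

      /* Userspace stand-ins for the kernel's marked-access macros. */
      #define READ_ONCE(x)		(*(const volatile typeof(x) *)&(x))
      #define WRITE_ONCE(x, v)	(*(volatile typeof(x) *)&(x) = (v))

      struct fake_timer_rand_state {
      	unsigned long last_time;	/* touched from concurrent interrupts */
      };

      static unsigned long
      fake_add_timer_randomness(struct fake_timer_rand_state *s, unsigned long now)
      {
      	/* Marked accesses: the compiler may not tear, fuse, or refetch them. */
      	unsigned long prev = READ_ONCE(s->last_time);

      	WRITE_ONCE(s->last_time, now);
      	return now - prev;
      }

      int main(void)
      {
      	struct fake_timer_rand_state s = { .last_time = 100 };

      	assert(fake_add_timer_randomness(&s, 150) == 50);
      	assert(s.last_time == 150);
      	return 0;
      }
      ```

      READ_ONCE/WRITE_ONCE do not add ordering or locking; they only make each access a single, non-optimizable memory operation, which is exactly what KCSAN asked for here.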
    • random: always use batched entropy for get_random_u{32,64} · 69efea71
      By Jason A. Donenfeld
      It turns out that RDRAND is pretty slow. Comparing these two
      constructions:
      
        for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret))
          arch_get_random_long(&ret);
      
      and
      
        long buf[CHACHA_BLOCK_SIZE / sizeof(long)];
        extract_crng((u8 *)buf);
      
      it amortizes out to 352 cycles per long for the top one and 107 cycles
      per long for the bottom one, on Coffee Lake Refresh, Intel Core i9-9880H.
      
      And importantly, the top one has the drawback of not benefiting from the
      real rng, whereas the bottom one has all the nice benefits of using our
      own chacha rng. As get_random_u{32,64} gets used in more places (perhaps
      beyond what it was originally intended for when it was introduced as
      get_random_{int,long} back in the md5 monstrosity era), it seems like it
      might be a good thing to strengthen its posture a tiny bit. Doing this
      should only be stronger and not any weaker because that pool is already
      initialized with a bunch of rdrand data (when available). This way, we
      get the benefits of the hardware rng as well as our own rng.
      
      Another benefit of this is that we no longer hit pitfalls of the recent
      stream of AMD bugs in RDRAND. One often used code pattern for various
      things is:
      
        do {
        	val = get_random_u32();
        } while (hash_table_contains_key(val));
      
      That recent AMD bug rendered that pattern useless, whereas we're really
      very certain that chacha20 output will give pretty distributed numbers,
      no matter what.
      
      So, this simplification seems better both from a security perspective
      and from a performance perspective.
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Link: https://lore.kernel.org/r/20200221201037.30231-1-Jason@zx2c4.com
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
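      The batching the patch switches to can be sketched as: fill a block-sized buffer from the (fast) stream cipher once, then serve 4-byte requests out of it until it is exhausted, so one cipher call is amortized over many callers. The refill function below is a stub, not the real extract_crng(), and all names are illustrative.

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      #define FAKE_BLOCK_SIZE 64	/* plays the role of CHACHA_BLOCK_SIZE */

      static uint8_t batch[FAKE_BLOCK_SIZE];
      static size_t batch_pos = FAKE_BLOCK_SIZE;	/* force refill on first use */
      static unsigned int refills;

      /* Stub for extract_crng(): refill the whole batch in one call. */
      static void fake_extract_crng(uint8_t *buf)
      {
      	memset(buf, 0xab, FAKE_BLOCK_SIZE);	/* not random, just a marker */
      	refills++;
      }

      /* Amortized: one "cipher" call serves FAKE_BLOCK_SIZE/4 requests. */
      static uint32_t fake_get_random_u32(void)
      {
      	uint32_t ret;

      	if (batch_pos + sizeof(ret) > FAKE_BLOCK_SIZE) {
      		fake_extract_crng(batch);
      		batch_pos = 0;
      	}
      	memcpy(&ret, batch + batch_pos, sizeof(ret));
      	batch_pos += sizeof(ret);
      	return ret;
      }

      int main(void)
      {
      	for (int i = 0; i < 16; i++)
      		fake_get_random_u32();
      	assert(refills == 1);	/* 16 x 4 bytes = one 64-byte refill */
      	fake_get_random_u32();
      	assert(refills == 2);	/* 17th request exhausts the batch */
      	return 0;
      }
      ```

      The real code additionally locks the per-CPU batch and invalidates it on reseed; this sketch only shows the amortization that makes the batched path ~3x cheaper per long than RDRAND in the numbers quoted above.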
    • random: Make RANDOM_TRUST_CPU depend on ARCH_RANDOM · 23ae0c17
      By Richard Henderson
      Listing the set of host architectures does not scale.
      Depend instead on the existence of the architecture rng.
      
      This will allow RANDOM_TRUST_CPU to be selected on arm64. Today
      ARCH_RANDOM is only selected by x86, s390, and powerpc, so this does not
      adversely affect other architectures.
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/20200210130015.17664-5-mark.rutland@arm.com
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
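      The change amounts to a Kconfig dependency swap along these lines (a sketch of the idea, not the verbatim hunk): instead of enumerating architectures, the option is gated on the generic ARCH_RANDOM symbol that each architecture selects when it has an RNG.

      ```kconfig
      config RANDOM_TRUST_CPU
      	bool "Trust the CPU manufacturer to initialize Linux's CRNG"
      	depends on ARCH_RANDOM
      	default n
      ```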
    • random: add arch_get_random_*long_early() · 253d3194
      By Mark Rutland
      Some architectures (e.g. arm64) can have heterogeneous CPUs, and the
      boot CPU may be able to provide entropy while secondary CPUs cannot. On
      such systems, arch_get_random_long() and arch_get_random_seed_long()
      will fail unless support for RNG instructions has been detected on all
      CPUs. This prevents the boot CPU from being able to provide
      (potentially) trusted entropy when seeding the primary CRNG.
      
      To make it possible to seed the primary CRNG from the boot CPU without
      adversely affecting the runtime versions of arch_get_random_long() and
      arch_get_random_seed_long(), this patch adds new early versions of the
      functions used when initializing the primary CRNG.
      
      Default implementations are provided atop of the existing
      arch_get_random_long() and arch_get_random_seed_long() so that only
      architectures with such constraints need to provide the new helpers.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Link: https://lore.kernel.org/r/20200210130015.17664-3-mark.rutland@arm.com
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
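      The default wiring described above, early helpers that simply fall back to the runtime functions unless an architecture overrides them, can be sketched as follows. These are userspace stand-ins with invented `fake_` names, not the actual header.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Stand-in for the runtime helper; the real one may need all CPUs up. */
      static bool fake_arch_get_random_long(unsigned long *v)
      {
      	*v = 42;	/* pretend the hardware produced entropy */
      	return true;
      }

      /*
       * Default early variant: architectures without boot-CPU-only
       * constraints just reuse the runtime helper. An architecture like
       * arm64 can define its own version (guarded by the #ifndef) that
       * queries the boot CPU's RNG before secondaries come up.
       */
      #ifndef fake_arch_get_random_long_early
      static bool fake_arch_get_random_long_early(unsigned long *v)
      {
      	return fake_arch_get_random_long(v);
      }
      #endif

      int main(void)
      {
      	unsigned long v = 0;

      	assert(fake_arch_get_random_long_early(&v));
      	assert(v == 42);
      	return 0;
      }
      ```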
    • random: split primary/secondary crng init paths · 5cbe0f13
      By Mark Rutland
      Currently crng_initialize() is used for both the primary CRNG and
      secondary CRNGs. While we wish to share common logic, we need to do a
      number of additional things for the primary CRNG, and this would be
      easier to deal with were these handled in separate functions.
      
      This patch splits crng_initialize() into crng_initialize_primary() and
      crng_initialize_secondary(), with common logic factored out into a
      crng_init_try_arch() helper.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Link: https://lore.kernel.org/r/20200210130015.17664-2-mark.rutland@arm.com
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
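      The refactor can be pictured as a shared try-arch helper with two thin wrappers, only one of which does the primary-only extra work. The function names mirror the commit message; the bodies are illustrative stubs, not the random.c implementation.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      struct fake_crng_state {
      	bool arch_seeded;
      	bool marked_ready;
      };

      /* Common logic factored out of both init paths. */
      static bool fake_crng_init_try_arch(struct fake_crng_state *crng)
      {
      	crng->arch_seeded = true;	/* stub: mix in arch entropy */
      	return crng->arch_seeded;
      }

      static void fake_crng_initialize_secondary(struct fake_crng_state *crng)
      {
      	fake_crng_init_try_arch(crng);
      }

      /* The primary path layers extra work on top of the common helper. */
      static void fake_crng_initialize_primary(struct fake_crng_state *crng)
      {
      	if (fake_crng_init_try_arch(crng))
      		crng->marked_ready = true;	/* primary-only step */
      }

      int main(void)
      {
      	struct fake_crng_state p = { 0 }, s = { 0 };

      	fake_crng_initialize_primary(&p);
      	fake_crng_initialize_secondary(&s);
      	assert(p.marked_ready && !s.marked_ready);
      	return 0;
      }
      ```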
    • hwrng: omap3-rom - Include linux/io.h for virt_to_phys · ba02b352
      By Herbert Xu
      This patch adds linux/io.h to the header list to ensure that we
      get virt_to_phys on all architectures.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  14. 24 Feb 2020, 1 commit
    • pcmcia: Distribute switch variables for initialization · 78c24422
      By Kees Cook
      Variables declared in a switch statement before any case statements
      cannot be automatically initialized with compiler instrumentation (as
      they are not part of any execution flow). With GCC's proposed automatic
      stack variable initialization feature, this triggers a warning (and they
      don't get initialized). Clang's automatic stack variable initialization
      (via CONFIG_INIT_STACK_ALL=y) doesn't throw a warning, but it also
      doesn't initialize such variables[1]. Note that these warnings (or silent
      skipping) happen before the dead-store elimination optimization phase,
      so even when the automatic initializations are later elided in favor of
      direct initializations, the warnings remain.
      
      To avoid these problems, move such variables into the "case" where
      they're used or lift them up into the main function body.
      
      drivers/char/pcmcia/cm4000_cs.c: In function ‘monitor_card’:
      drivers/char/pcmcia/cm4000_cs.c:734:17: warning: statement will never be executed [-Wswitch-unreachable]
        734 |   unsigned char flags0;
            |                 ^~~~~~
      
      [1] https://bugs.llvm.org/show_bug.cgi?id=44916
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Link: https://lore.kernel.org/r/20200220062308.69032-1-keescook@chromium.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
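      The pattern being fixed: a declaration sitting between `switch (...)` and the first `case` label is never on any execution path, so instrumented auto-initialization cannot reach it. Moving the declaration into the case's own block (or up into the function body) restores a normal flow. A minimal illustration, not the cm4000_cs.c code:

      ```c
      #include <assert.h>

      static int classify(int event)
      {
      	/*
      	 * Before the fix, a declaration like
      	 *     switch (event) {
      	 *         unsigned char flags0;   <- never "executed"
      	 *     case 1: ...
      	 * triggered -Wswitch-unreachable and was skipped by
      	 * auto-initialization. Declaring inside the case block instead:
      	 */
      	switch (event) {
      	case 1: {
      		unsigned char flags0 = 0x10;	/* on the execution path */
      		return flags0;
      	}
      	default:
      		return 0;
      	}
      }

      int main(void)
      {
      	assert(classify(1) == 0x10);
      	assert(classify(2) == 0);
      	return 0;
      }
      ```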