1. 04 August 2020, 1 commit
• random32: move the pseudo-random 32-bit definitions to prandom.h · c0842fbc
  Committed by Linus Torvalds
      The addition of percpu.h to the list of includes in random.h revealed
      some circular dependencies on arm64 and possibly other platforms.  This
      include was added solely for the pseudo-random definitions, which have
      nothing to do with the rest of the definitions in this file but are
      still there for legacy reasons.
      
      This patch moves the pseudo-random parts to linux/prandom.h and the
      percpu.h include with it, which is now guarded by _LINUX_PRANDOM_H and
      protected against recursive inclusion.
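      A minimal sketch of the shape this gives the new header; the guard name
      comes from the text above, while the specific declarations shown are
      illustrative assumptions rather than the exact file contents:
      
          /* include/linux/prandom.h (illustrative sketch) */
          #ifndef _LINUX_PRANDOM_H
          #define _LINUX_PRANDOM_H
      
          #include <linux/types.h>
          #include <linux/percpu.h>   /* moved here along with the prandom bits */
      
          struct rnd_state {
                  __u32 s1, s2, s3, s4;
          };
      
          u32 prandom_u32(void);
          void prandom_bytes(void *buf, size_t nbytes);
      
          #endif /* _LINUX_PRANDOM_H */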
      
      A further cleanup step would be to remove this from <linux/random.h>
      entirely, and make people who use the prandom infrastructure include
      just the new header file.  That's a bit of a churn patch, but grepping
      for "prandom_" and "next_pseudo_random32" "struct rnd_state" should
      catch most users.
      
      But it turns out that that nice cleanup step is fairly painful, because
      a _lot_ of code currently seems to depend on the implicit include of
      <linux/random.h>, which can currently come in a lot of ways, including
      such fairly core headers as <linux/net.h>.
      
      So the "nice cleanup" part may or may never happen.
      
      Fixes: 1c9df907 ("random: fix circular include dependency on arm64 after addition of percpu.h")
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Acked-by: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 03 August 2020, 1 commit
  3. 02 August 2020, 1 commit
  4. 01 August 2020, 1 commit
  5. 31 July 2020, 7 commits
  6. 30 July 2020, 4 commits
• PM / devfreq: Add support delayed timer for polling mode · 4dc3bab8
  Committed by Chanwoo Choi
      Until now, devfreq drivers using a polling-mode governor such as
      simple_ondemand have used only a deferrable timer in order to reduce
      redundant power consumption: the deferrable timer avoids waking the CPU
      from idle just to poll the status of a non-CPU device.
      
      However, this is a problem for non-CPU devices such as a DMC (memory
      controller) performing DMA operations. Some non-CPU devices need to be
      monitored continuously, regardless of the CPU state, in order to decide
      their proper next state.
      
      So add support for a delayed timer in polling mode to allow this kind of
      continuous monitoring. The devfreq driver and the user can select either
      the deferrable or the delayed timer.
      
      For example, change the timer type of the DMC device on an
      Exynos5422-based Odroid-XU3 as follows:
      
      - To use the deferrable timer:
      echo deferrable > /sys/class/devfreq/10c20000.memory-controller/timer
      
      - To use the delayed timer:
      echo delayed > /sys/class/devfreq/10c20000.memory-controller/timer
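      
      As a driver-side complement to the sysfs example above, a minimal sketch
      of selecting the delayed timer from a devfreq driver, assuming this patch
      adds a DEVFREQ_TIMER_DELAYED value and a timer field to struct
      devfreq_dev_profile (the dmc_* names here are hypothetical stubs):
      
          #include <linux/devfreq.h>
      
          /* Hypothetical stub: apply *freq to the memory controller. */
          static int dmc_target(struct device *dev, unsigned long *freq, u32 flags)
          {
                  return 0;
          }
      
          /* Hypothetical stub: report busy/total load counters. */
          static int dmc_get_dev_status(struct device *dev,
                                        struct devfreq_dev_status *stat)
          {
                  return 0;
          }
      
          static struct devfreq_dev_profile dmc_profile = {
                  .polling_ms     = 100,                   /* keep polling every 100 ms */
                  .timer          = DEVFREQ_TIMER_DELAYED, /* fire even while the CPU idles */
                  .target         = dmc_target,
                  .get_dev_status = dmc_get_dev_status,
          };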
      Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
      Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
      Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
• random32: remove net_rand_state from the latent entropy gcc plugin · 83bdc727
  Committed by Linus Torvalds
      It turns out that the plugin right now ends up being really unhappy
      about the change from 'static' to 'extern' storage that happened in
      commit f227e3ec ("random32: update the net random state on interrupt
      and activity").
      
      This is probably a trivial fix for the latent_entropy plugin, but for
      now, just remove net_rand_state from the list of things the plugin
      worries about.
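      
      A sketch of the kind of change this implies; the exact file and
      declaration are assumptions, the point being simply that the now-extern
      per-CPU variable loses its __latent_entropy marking:
      
          /* lib/random32.c (illustrative) */
          DEFINE_PER_CPU(struct rnd_state, net_rand_state);  /* was: ...) __latent_entropy; */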
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Emese Revfy <re.emese@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• random32: update the net random state on interrupt and activity · f227e3ec
  Committed by Willy Tarreau
      This modifies the first 32 bits out of the 128 bits of a random CPU's
      net_rand_state on interrupt or CPU activity to complicate remote
      observations that could lead to guessing the network RNG's internal
      state.
      
      Note that depending on some network devices' interrupt rate moderation
      or binding, this re-seeding might happen on every packet or even almost
      never.
      
      In addition, with NOHZ some CPUs might not even get timer interrupts,
      leaving their local state rarely updated, while they are running
      networked processes making use of the random state.  For this reason, we
      also perform this update in update_process_times() in order to at least
      update the state when there is user or system activity, since it's the
      only case we care about.
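      
      A hedged sketch of the idea rather than the exact patch (the field
      touched and the value mixed in are assumptions for illustration): fold a
      cheap, frequently changing value into one word of the per-CPU
      net_rand_state from a hot path.
      
          #include <linux/bitops.h>
          #include <linux/jiffies.h>
          #include <linux/percpu.h>
          #include <linux/prandom.h>
      
          /* Called from a hot path such as the tick: touch only 32 of the
           * 128 bits of state, so the cost stays negligible. */
          static inline void stir_net_rand_state(void)
          {
                  this_cpu_add(net_rand_state.s1, rol32((u32)jiffies, 24));
          }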
      Reported-by: Amit Klein <aksecurity@gmail.com>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• cpuidle: change enter_s2idle() prototype · efe97112
  Committed by Neal Liu
      Control Flow Integrity (CFI) is a security mechanism that disallows
      changes to the original control flow graph of a compiled binary, making
      control-flow hijacking attacks significantly harder to perform.
      
      init_state_node() assigns the same callback to two function pointers
      that are declared with different prototypes:
      
      static int init_state_node(struct cpuidle_state *idle_state,
                                 const struct of_device_id *matches,
                                 struct device_node *state_node)
      {
              ...
              idle_state->enter = match_id->data;
              ...
              idle_state->enter_s2idle = match_id->data;
      }
      
      Function declarations:
      
      struct cpuidle_state {
              ...
              int (*enter)(struct cpuidle_device *dev,
                           struct cpuidle_driver *drv,
                           int index);
      
              void (*enter_s2idle)(struct cpuidle_device *dev,
                                   struct cpuidle_driver *drv,
                                   int index);
      };
      
      In this case, calling through either enter() or enter_s2idle() would
      trip the CFI check, because both pointers reference the same callee
      while having different prototypes.
      
      Align enter_s2idle() with the prototype of enter(), since enter() needs
      its return value for some use cases; the return value of enter_s2idle()
      is simply not used for now.
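      
      For reference, a short sketch of the aligned declaration after the
      change (surrounding fields elided):
      
      struct cpuidle_state {
              ...
              /* Now returns int, matching enter(), so both pointers can
               * share the same callee under CFI; callers ignore the value. */
              int (*enter_s2idle)(struct cpuidle_device *dev,
                                  struct cpuidle_driver *drv,
                                  int index);
      };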
      Signed-off-by: Neal Liu <neal.liu@mediatek.com>
      Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  7. 29 July 2020, 14 commits
  8. 28 July 2020, 10 commits
  9. 27 July 2020, 1 commit
• genirq/affinity: Make affinity setting if activated opt-in · f0c7baca
  Committed by Thomas Gleixner
      John reported that on a RK3288 system the perf per CPU interrupts are all
      affine to CPU0 and provided the analysis:
      
       "It looks like what happens is that because the interrupts are not per-CPU
        in the hardware, armpmu_request_irq() calls irq_force_affinity() while
        the interrupt is deactivated and then request_irq() with IRQF_PERCPU |
        IRQF_NOBALANCING.  
      
        Now when irq_startup() runs with IRQ_STARTUP_NORMAL, it calls
        irq_setup_affinity() which returns early because IRQF_PERCPU and
        IRQF_NOBALANCING are set, leaving the interrupt on its original CPU."
      
      This was broken by the recent commit which blocked interrupt affinity
      setting in hardware before activation of the interrupt. While this works
      in general, it does not work for this particular case. Since, contrary
      to the initial analysis, not all interrupt chip drivers implement an
      activate callback, the safe cure is to make the deferred interrupt
      affinity setting at activation time opt-in.
      
      Implement the necessary core logic and make the two irqchip implementations
      for which this is required opt-in. In hindsight this would have been the
      right thing to do, but ...
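      
      A hedged sketch of the core check this implies; the helper and flag
      names are assumptions for illustration and may not match the patch
      exactly:
      
          /* In the affinity-setting path: defer the hardware write to
           * activation time only when the interrupt is not yet activated AND
           * its chip/domain explicitly opted in. Everything else behaves as
           * it did before the regression. */
          static bool affinity_deferred_to_activation(struct irq_data *data)
          {
                  if (irqd_is_activated(data))
                          return false;   /* already active: program it now */
                  if (!irqd_affinity_on_activate(data))
                          return false;   /* no opt-in: program it now */
                  return true;            /* store the mask, apply at activation */
          }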
      
      Fixes: baedb87d ("genirq/affinity: Handle affinity setting on inactive interrupts correctly")
      Reported-by: John Keeping <john@metanate.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Marc Zyngier <maz@kernel.org>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/87blk4tzgm.fsf@nanos.tec.linutronix.de