1. 03 Jan 2020 (5 commits)
  2. 26 Dec 2019 (2 commits)
    • ata: libahci_platform: Export again ahci_platform_<en/dis>able_phys() · 84b032db
      Committed by Florian Fainelli
      This reverts commit 6bb86fef
      ("libahci_platform: Staticize ahci_platform_<en/dis>able_phys()"), since we are
      going to need ahci_platform_{enable,disable}_phys() in a subsequent
      commit for ahci_brcm.c in order to properly control the PHY
      initialization order.
      
      Also make sure the function prototypes are declared in
      include/linux/ahci_platform.h as a result.
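
      As a rough sketch, the prototypes exported again via include/linux/ahci_platform.h
      would look like this (signatures assumed from the libahci_platform code rather than
      quoted from the patch):

        struct ahci_host_priv;

        int ahci_platform_enable_phys(struct ahci_host_priv *hpriv);
        void ahci_platform_disable_phys(struct ahci_host_priv *hpriv);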
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Hans de Goede <hdegoede@redhat.com>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • libata: Fix retrieving of active qcs · 8385d756
      Committed by Sascha Hauer
      ata_qc_complete_multiple() is called with a mask of the still active
      tags.
      
      mv_sata doesn't have this information directly and instead calculates
      the still active tags from the started tags (ap->qc_active) and the
      finished tags as (ap->qc_active ^ done_mask).
      
      Since 28361c40 the hw_tag and tag are no longer the same and the
      equation is no longer valid. In ata_exec_internal_sg() ap->qc_active is
      initialized as 1ULL << ATA_TAG_INTERNAL, but in hardware tag 0 is
      started and this will be in done_mask on completion. ap->qc_active ^
      done_mask becomes 0x100000000 ^ 0x1 = 0x100000001 and thus tag 0 used as
      the internal tag will never be reported as completed.
      
      This is fixed by introducing ata_qc_get_active() which returns the
      active hardware tags and calling it where appropriate.
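
      A minimal sketch of the idea behind ata_qc_get_active(), based on the description
      above (not a verbatim copy of the patch): map the software-only ATA_TAG_INTERNAL
      bit back to hardware tag 0 before drivers compare the mask against done_mask.

        u64 ata_qc_get_active(struct ata_port *ap)
        {
                u64 qc_active = ap->qc_active;

                /* The internal command is issued on hardware tag 0 */
                if (qc_active & (1ULL << ATA_TAG_INTERNAL)) {
                        qc_active |= (1 << 0);
                        qc_active &= ~(1ULL << ATA_TAG_INTERNAL);
                }

                return qc_active;
        }

      Callers such as mv_sata would then pass ata_qc_get_active(ap) ^ done_mask to
      ata_qc_complete_multiple() instead of ap->qc_active ^ done_mask.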
      
      This is tested on mv_sata, but sata_fsl and sata_nv suffer from the same
      problem. There is another case in sata_nv that most likely needs fixing
      as well, but this looks a little different, so I wasn't confident enough
      to change that.
      
      Fixes: 28361c40 ("libata: add extra internal command")
      Cc: stable@vger.kernel.org
      Tested-by: Pali Rohár <pali.rohar@gmail.com>
      Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
      
      Add missing export of ata_qc_get_active(), as per Pali.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  3. 21 Dec 2019 (2 commits)
  4. 20 Dec 2019 (1 commit)
  5. 18 Dec 2019 (4 commits)
  6. 17 Dec 2019 (2 commits)
  7. 16 Dec 2019 (1 commit)
  8. 14 Dec 2019 (1 commit)
  9. 13 Dec 2019 (4 commits)
    • fs: remove ksys_dup() · 8243186f
      Committed by Dominik Brodowski
      ksys_dup() is used in only one place in the kernel, namely to duplicate
      fd 0 of /dev/console to stdout and stderr. The same functionality can be
      achieved by using functions already available within the kernel namespace.
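
      A hedged sketch of what the replacement might look like, assuming filp_open()
      and f_dupfd() are the in-kernel helpers used (the actual patch may differ in
      detail):

        /* Open /dev/console once and duplicate it to fds 0, 1 and 2. */
        struct file *file = filp_open("/dev/console", O_RDWR, 0);
        unsigned int i;

        if (!IS_ERR(file)) {
                for (i = 0; i < 3; i++)
                        if (f_dupfd(i, file, 0) != i)
                                break;  /* warn and bail out in real code */
        }
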
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
    • init: unify opening /dev/console as stdin/stdout/stderr · b49a733d
      Committed by Dominik Brodowski
      Merge the two instances where /dev/console is opened as
      stdin/stdout/stderr.
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
    • cpufreq: Avoid leaving stale IRQ work items during CPU offline · 85572c2c
      Committed by Rafael J. Wysocki
      The scheduler code calling cpufreq_update_util() may run during CPU
      offline on the target CPU after the IRQ work lists have been flushed
      for it, so the target CPU should be prevented from running code that
      may queue up an IRQ work item on it at that point.
      
      Unfortunately, that may not be the case if dvfs_possible_from_any_cpu
      is set for at least one cpufreq policy in the system, because that
      allows the CPU going offline to run the utilization update callback
      of the cpufreq governor on behalf of another (online) CPU in some
      cases.
      
      If that happens, the cpufreq governor callback may queue up an IRQ
      work on the CPU running it, which is going offline, and the IRQ work
      may not be flushed after that point.  Moreover, that IRQ work cannot
      be flushed until the "offlining" CPU goes back online, so if any
      other CPU calls irq_work_sync() to wait for the completion of that
      IRQ work, it will have to wait until the "offlining" CPU is back
      online, and that may never happen.  In particular, a system-wide
      deadlock may occur during CPU online as a result of that.
      
      The failing scenario is as follows.  CPU0 is the boot CPU, so it
      creates a cpufreq policy and becomes the "leader" of it
      (policy->cpu).  It cannot go offline, because it is the boot CPU.
      Next, other CPUs join the cpufreq policy as they go online and they
      leave it when they go offline.  The last CPU to go offline, say CPU3,
      may queue up an IRQ work while running the governor callback on
      behalf of CPU0 after leaving the cpufreq policy because of the
      dvfs_possible_from_any_cpu effect described above.  Then, CPU0 is
      the only online CPU in the system and the stale IRQ work is still
      queued on CPU3.  When, say, CPU1 goes back online, it will run
      irq_work_sync() to wait for that IRQ work to complete and so it
      will wait for CPU3 to go back online (which may never happen even
      in principle), but (worse yet) CPU0 is waiting for CPU1 at that
      point too and a system-wide deadlock occurs.
      
      To address this problem, notice that CPUs which cannot run cpufreq
      utilization update code for themselves (for example, because they
      have left the cpufreq policies that they belonged to) should also
      be prevented from running that code on behalf of the other CPUs that
      belong to a cpufreq policy with dvfs_possible_from_any_cpu set. In
      that case, the cpufreq_update_util_data pointer must be non-NULL
      both for the CPU running the code and for the CPU which is the
      target of the cpufreq utilization update in progress.
      
      Accordingly, change cpufreq_this_cpu_can_update() into a regular
      function in kernel/sched/cpufreq.c (instead of a static inline in a
      header file) and make it check the cpufreq_update_util_data pointer
      of the local CPU if dvfs_possible_from_any_cpu is set for the target
      cpufreq policy.
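
      A sketch of the resulting check, assuming the usual cpufreq/schedutil naming
      (details may differ from the actual patch):

        /* kernel/sched/cpufreq.c */
        bool cpufreq_this_cpu_can_update(struct cpufreq_policy *policy)
        {
                return cpumask_test_cpu(smp_processor_id(), policy->cpus) ||
                        (policy->dvfs_possible_from_any_cpu &&
                         rcu_dereference_sched(*this_cpu_ptr(&cpufreq_update_util_data)));
        }

      A non-NULL cpufreq_update_util_data entry for the local CPU means it still has
      an update_util handler installed, i.e. it is not in the middle of going offline.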
      
      Also update the schedutil governor to do the
      cpufreq_this_cpu_can_update() check in the non-fast-switch
      case too to avoid the stale IRQ work issues.
      
      Fixes: 99d14d0e ("cpufreq: Process remote callbacks from any CPU if the platform permits")
      Link: https://lore.kernel.org/linux-pm/20191121093557.bycvdo4xyinbc5cb@vireshk-i7/
      Reported-by: Anson Huang <anson.huang@nxp.com>
      Tested-by: Anson Huang <anson.huang@nxp.com>
      Cc: 4.14+ <stable@vger.kernel.org> # 4.14+
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Tested-by: Peng Fan <peng.fan@nxp.com> (i.MX8QXP-MEK)
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • blk-cgroup: remove blkcg_drain_queue · 5addeae1
      Committed by Guoqing Jiang
      Since blk_drain_queue() has already been removed, this function
      is not needed anymore.
      Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  10. 12 Dec 2019 (3 commits)
    • init: use do_mount() instead of ksys_mount() · cccaa5e3
      Committed by Dominik Brodowski
      In prepare_namespace(), do_mount() can be used instead of ksys_mount(),
      as the first and third arguments are const strings in the kernel, the
      second and fourth arguments are passed through anyway, and the fifth
      argument is NULL.
      
      In do_mount_root(), ksys_mount() is called with the first and third
      arguments already being kernelspace strings, which do not need to be
      copied over from userspace to kernelspace (again). The second and
      fourth arguments are passed through to do_mount() anyway. The fifth
      argument, while already residing in kernelspace, needs to be put into
      a page of its own. Then, do_mount() can be used instead of
      ksys_mount().
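
      A hedged sketch of the adjusted do_mount_root(), following the description above
      (simplified; the real function continues with further mount handling afterwards):

        static int __init do_mount_root(const char *name, const char *fs,
                                        const int flags, const void *data)
        {
                struct page *p = NULL;
                char *data_page = NULL;
                int ret;

                if (data) {
                        /* do_mount() wants a full page as its fifth argument */
                        p = alloc_page(GFP_KERNEL);
                        if (!p)
                                return -ENOMEM;
                        data_page = page_address(p);
                        /* zero-padded; do_mount() ensures termination */
                        strncpy(data_page, data, PAGE_SIZE);
                }

                ret = do_mount(name, "/root", fs, flags, data_page);

                if (p)
                        put_page(p);
                return ret;
        }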
      
      Once this is done, there are no in-kernel users of ksys_mount() left,
      which can therefore be removed.
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
    • devtmpfs: use do_mount() instead of ksys_mount() · 5e787dbf
      Committed by Dominik Brodowski
      In devtmpfs, do_mount() can be called directly instead of the complex
      wrapping done by ksys_mount(), as sketched after the list below:
      - the first and third arguments are const strings in the kernel,
        and do not need to be copied over from userspace;
      - the fifth argument is NULL, and therefore no page needs to be
        copied over from userspace;
      - the second and fourth arguments are passed through anyway.
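
      A hedged before/after sketch of such a call site (the mount point and flags
      shown here are illustrative, not taken from the driver code):

        /* before: kernel strings take a needless trip through the syscall wrapper */
        err = ksys_mount("devtmpfs", "/dev", "devtmpfs", MS_SILENT, NULL);

        /* after: call the VFS entry point directly */
        err = do_mount("devtmpfs", "/dev", "devtmpfs", MS_SILENT, NULL);
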
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
    • bpf: Make BPF trampoline use register_ftrace_direct() API · b91e014f
      Committed by Alexei Starovoitov
      Make BPF trampoline attach its generated assembly code to kernel functions via
      the register_ftrace_direct() API. This helps ftrace-based tracers co-exist with
      a BPF trampoline on the same kernel function. It also switches the attaching
      logic from arch-specific text_poke to generic ftrace, which is available on many
      architectures. text_poke is still necessary for bpf-to-bpf attach and for the
      bpf_tail_call optimization.
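
      A rough sketch of how attaching and detaching a trampoline maps onto the ftrace
      direct-call API (error handling and the surrounding bpf_trampoline bookkeeping
      omitted; 'ip', 'old_addr' and 'new_addr' are illustrative variable names):

        /* ip: fentry call site of the traced kernel function
         * new_addr: address of the generated trampoline */
        int ret;

        ret = register_ftrace_direct(ip, new_addr);           /* attach */

        ret = modify_ftrace_direct(ip, old_addr, new_addr);   /* re-point */

        ret = unregister_ftrace_direct(ip, old_addr);         /* detach */
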
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20191209000114.1876138-3-ast@kernel.org
  11. 11 Dec 2019 (4 commits)
  12. 10 Dec 2019 (2 commits)
  13. 09 Dec 2019 (2 commits)
  14. 08 Dec 2019 (3 commits)
    • efi: Fix efi_loaded_image_t::unload type · 9fa76ca7
      Committed by Arvind Sankar
      The ::unload field is a function pointer, so it should be u32 for 32-bit
      and u64 for 64-bit. Add a prototype for it in the native efi_loaded_image_t
      type. Also change the types of parent_handle and device_handle from void *
      to efi_handle_t for documentation purposes.
      
      The unload method is not used, so no functional change.
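
      A sketch of the relevant part of the native definition after the change (field
      list abbreviated; the exact layout follows the UEFI spec):

        typedef struct {
                u32                  revision;
                efi_handle_t         parent_handle;   /* was void * */
                efi_system_table_t   *system_table;
                efi_handle_t         device_handle;   /* was void * */
                /* ... */
                efi_status_t         (*unload)(efi_handle_t image_handle);
        } efi_loaded_image_t;

      The 32-bit variant keeps a plain 'u32 unload;' member, since a function pointer
      is only 32 bits wide there.
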
      Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Bhupesh Sharma <bhsharma@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Link: https://lkml.kernel.org/r/20191206165542.31469-6-ardb@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • pipe: remove 'waiting_writers' merging logic · a28c8b9d
      Committed by Linus Torvalds
      This code is ancient, and goes back to when we only had a single page
      for the pipe buffers.  The exact history is hidden in the mists of time
      (ie "before git", and in fact predates the BK repository too).
      
      At that long-ago point in time, it actually helped to try to merge big
      back-and-forth pipe reads and writes, and not limit pipe reads to the
      single pipe buffer in length just because that was all we had at a time.
      
      However, since then we've expanded the pipe buffers to multiple pages,
      and this logic really doesn't seem to make sense.  And a lot of it is
      somewhat questionable (ie "hmm, the user asked for a non-blocking read,
      but we see that there's a writer pending, so let's wait anyway to get
      the extra data that the writer will have").
      
      But more importantly, it makes the "go to sleep" logic much less
      obvious, and considering the wakeup issues we've had, I want fewer
      of those kinds of things.
      
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • inet: protect against too small mtu values. · 501a90c9
      Committed by Eric Dumazet
      syzbot was once again able to crash a host by setting a very small mtu
      on the loopback device.
      
      Let's make inetdev_valid_mtu() available in include/net/ip.h,
      and use it in ip_setup_cork(), so that we protect both ip_append_page()
      and __ip_append_data().

      Also add a READ_ONCE() when the device mtu is read.

      Pair this lockless read with one WRITE_ONCE() in __dev_set_mtu(),
      even if other code paths might write over this field.
      
      Add a big comment in include/linux/netdevice.h about dev->mtu
      needing READ_ONCE()/WRITE_ONCE() annotations.
      
      Hopefully we will add the missing ones in followup patches.
      
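      A sketch of the helper and its use in ip_setup_cork(), paraphrased from the
      description above rather than quoted from the patch:

        /* include/net/ip.h */
        static inline bool inetdev_valid_mtu(unsigned int mtu)
        {
                return likely(mtu >= IPV4_MIN_MTU);
        }

        /* net/ipv4/ip_output.c, ip_setup_cork() */
        cork->fragsize = ip_sk_use_pmtu(sk) ?
                         dst_mtu(&rt->dst) : READ_ONCE(rt->dst.dev->mtu);
        if (!inetdev_valid_mtu(cork->fragsize))
                return -ENETUNREACH;
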
      [1]
      
      refcount_t: saturated; leaking memory.
      WARNING: CPU: 0 PID: 9464 at lib/refcount.c:22 refcount_warn_saturate+0x138/0x1f0 lib/refcount.c:22
      Kernel panic - not syncing: panic_on_warn set ...
      CPU: 0 PID: 9464 Comm: syz-executor850 Not tainted 5.4.0-syzkaller #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      Call Trace:
       __dump_stack lib/dump_stack.c:77 [inline]
       dump_stack+0x197/0x210 lib/dump_stack.c:118
       panic+0x2e3/0x75c kernel/panic.c:221
       __warn.cold+0x2f/0x3e kernel/panic.c:582
       report_bug+0x289/0x300 lib/bug.c:195
       fixup_bug arch/x86/kernel/traps.c:174 [inline]
       fixup_bug arch/x86/kernel/traps.c:169 [inline]
       do_error_trap+0x11b/0x200 arch/x86/kernel/traps.c:267
       do_invalid_op+0x37/0x50 arch/x86/kernel/traps.c:286
       invalid_op+0x23/0x30 arch/x86/entry/entry_64.S:1027
      RIP: 0010:refcount_warn_saturate+0x138/0x1f0 lib/refcount.c:22
      Code: 06 31 ff 89 de e8 c8 f5 e6 fd 84 db 0f 85 6f ff ff ff e8 7b f4 e6 fd 48 c7 c7 e0 71 4f 88 c6 05 56 a6 a4 06 01 e8 c7 a8 b7 fd <0f> 0b e9 50 ff ff ff e8 5c f4 e6 fd 0f b6 1d 3d a6 a4 06 31 ff 89
      RSP: 0018:ffff88809689f550 EFLAGS: 00010286
      RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
      RDX: 0000000000000000 RSI: ffffffff815e4336 RDI: ffffed1012d13e9c
      RBP: ffff88809689f560 R08: ffff88809c50a3c0 R09: fffffbfff15d31b1
      R10: fffffbfff15d31b0 R11: ffffffff8ae98d87 R12: 0000000000000001
      R13: 0000000000040100 R14: ffff888099041104 R15: ffff888218d96e40
       refcount_add include/linux/refcount.h:193 [inline]
       skb_set_owner_w+0x2b6/0x410 net/core/sock.c:1999
       sock_wmalloc+0xf1/0x120 net/core/sock.c:2096
       ip_append_page+0x7ef/0x1190 net/ipv4/ip_output.c:1383
       udp_sendpage+0x1c7/0x480 net/ipv4/udp.c:1276
       inet_sendpage+0xdb/0x150 net/ipv4/af_inet.c:821
       kernel_sendpage+0x92/0xf0 net/socket.c:3794
       sock_sendpage+0x8b/0xc0 net/socket.c:936
       pipe_to_sendpage+0x2da/0x3c0 fs/splice.c:458
       splice_from_pipe_feed fs/splice.c:512 [inline]
       __splice_from_pipe+0x3ee/0x7c0 fs/splice.c:636
       splice_from_pipe+0x108/0x170 fs/splice.c:671
       generic_splice_sendpage+0x3c/0x50 fs/splice.c:842
       do_splice_from fs/splice.c:861 [inline]
       direct_splice_actor+0x123/0x190 fs/splice.c:1035
       splice_direct_to_actor+0x3b4/0xa30 fs/splice.c:990
       do_splice_direct+0x1da/0x2a0 fs/splice.c:1078
       do_sendfile+0x597/0xd00 fs/read_write.c:1464
       __do_sys_sendfile64 fs/read_write.c:1525 [inline]
       __se_sys_sendfile64 fs/read_write.c:1511 [inline]
       __x64_sys_sendfile64+0x1dd/0x220 fs/read_write.c:1511
       do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe
      RIP: 0033:0x441409
      Code: e8 ac e8 ff ff 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 eb 08 fc ff c3 66 2e 0f 1f 84 00 00 00 00
      RSP: 002b:00007fffb64c4f78 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
      RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 0000000000441409
      RDX: 0000000000000000 RSI: 0000000000000006 RDI: 0000000000000005
      RBP: 0000000000073b8a R08: 0000000000000010 R09: 0000000000000010
      R10: 0000000000010001 R11: 0000000000000246 R12: 0000000000402180
      R13: 0000000000402210 R14: 0000000000000000 R15: 0000000000000000
      Kernel Offset: disabled
      Rebooting in 86400 seconds..
      
      Fixes: 1470ddf7 ("inet: Remove explicit write references to sk/inet in ip_append_data")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 07 Dec 2019 (1 commit)
    • tcp: fix rejected syncookies due to stale timestamps · 04d26e7b
      Committed by Guillaume Nault
      If no synflood happens for a long enough period of time, then the
      synflood timestamp isn't refreshed and jiffies can advance so much
      that time_after32() can't accurately compare them any more.
      
      Therefore, we can end up in a situation where time_after32(now,
      last_overflow + HZ) returns false, just because these two values are
      too far apart. In that case, the synflood timestamp isn't updated as
      it should be, which can trick tcp_synq_no_recent_overflow() into
      rejecting valid syncookies.
      
      For example, let's consider the following scenario on a system
      with HZ=1000:
      
        * The synflood timestamp is 0, either because that's the timestamp
          of the last synflood or, more commonly, because we're working with
          a freshly created socket.
      
        * We receive a new SYN, which triggers synflood protection. Let's say
          that this happens when jiffies == 2147484649 (that is,
          'synflood timestamp' + HZ + 2^31 + 1).
      
        * Then tcp_synq_overflow() doesn't update the synflood timestamp,
          because time_after32(2147484649, 1000) returns false.
          With:
            - 2147484649: the value of jiffies, aka. 'now'.
            - 1000: the value of 'last_overflow' + HZ.
      
        * A bit later, we receive the ACK completing the 3WHS. But
          cookie_v[46]_check() rejects it because tcp_synq_no_recent_overflow()
          says that we're not under synflood. That's because
          time_after32(2147484649, 120000) returns false.
          With:
            - 2147484649: the value of jiffies, aka. 'now'.
            - 120000: the value of 'last_overflow' + TCP_SYNCOOKIE_VALID.
      
          Of course, in reality jiffies would have increased a bit, but this
          condition will last for the next 119 seconds, which is long enough
          to accommodate the growth of jiffies.
      
      Fix this by updating the overflow timestamp whenever jiffies isn't
      within the [last_overflow, last_overflow + HZ] range. That shouldn't
      have any performance impact since the update still happens at most once
      per second.
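
      As a sketch, the update in tcp_synq_overflow() becomes a range check instead of
      a plain time_after32() comparison (the time_between32() helper name is assumed
      here; the real patch may spell it differently):

        /* helper: is t within [l, h], with 32-bit wrap-around handled? */
        #define time_between32(t, l, h) ((u32)(h) - (u32)(l) >= (u32)(t) - (u32)(l))

        /* include/net/tcp.h, tcp_synq_overflow() */
        unsigned int last_overflow = tcp_sk(sk)->rx_opt.ts_recent_stamp;
        unsigned int now = jiffies;

        /* refresh unless 'now' is still inside [last_overflow, last_overflow + HZ] */
        if (!time_between32(now, last_overflow, last_overflow + HZ))
                tcp_sk(sk)->rx_opt.ts_recent_stamp = now;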
      
      Now we're guaranteed to have fresh timestamps while under synflood, so
      tcp_synq_no_recent_overflow() can safely use it with time_after32() in
      such situations.
      
      Stale timestamps can still make tcp_synq_no_recent_overflow() return
      the wrong verdict when not under synflood. This will be handled in the
      next patch.
      
      For 64 bits architectures, the problem was introduced with the
      conversion of ->tw_ts_recent_stamp to 32 bits integer by commit
      cca9bab1 ("tcp: use monotonic timestamps for PAWS").
      The problem has always been there on 32 bits architectures.
      
      Fixes: cca9bab1 ("tcp: use monotonic timestamps for PAWS")
      Fixes: 1da177e4 ("Linux-2.6.12-rc2")
      Signed-off-by: Guillaume Nault <gnault@redhat.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 06 Dec 2019 (1 commit)
  17. 05 Dec 2019 (2 commits)