1. 29 Nov 2018 (2 commits)
  2. 26 Nov 2018 (1 commit)
    • drm: rcar-du: Fix DU3 start/stop on M3-N · 0bc3544a
      Committed by Laurent Pinchart
      Group start/stop is controlled by the DRES and DEN bits of DSYSR0 for
      the first group and DSYSR2 for the second group. On most DU instances,
      this maps to the first CRTC of the group. On M3-N, however, DU2 doesn't
      exist, but DSYSR2 does. There is no CRTC object there that maps to the
      correct DSYSR register.
      
      Commit 9144adc5 ("drm: rcar-du: Cache DSYSR value to ensure known
      initial value") switched group start/stop from using group read/write
      access to DSYSR to a CRTC-based API to cache the DSYSR value. While
      doing so, it introduced a regression on M3-N by accessing DSYSR3 instead
      of DSYSR2 to start/stop the second group.
      
      To fix this, access the DSYSR register directly through group read/write
      if the SoC is missing the first DU channel of the group. Keep using the
      rcar_du_crtc_dsysr_clr_set() function otherwise, to retain the DSYSR
      caching feature. A simplified sketch of this logic follows this entry.
      
      Fixes: 9144adc5 ("drm: rcar-du: Cache DSYSR value to ensure known initial value")
      Reported-by: Hoan Nguyen An <na-hoan@jinso.co.jp>
      Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Acked-by: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>
      Tested-by: Simon Horman <horms+renesas@verge.net.au>
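      A simplified user-space sketch (C) of the control flow described above.
      Register access is stubbed, and the bit positions, struct layout and
      helper names are placeholders rather than the actual rcar-du code:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define DSYSR_DRES (1u << 9) /* placeholder bit positions */
      #define DSYSR_DEN  (1u << 8)

      struct du_group {
          int index;                  /* group 0 -> DSYSR0, group 1 -> DSYSR2 */
          bool first_channel_missing; /* M3-N: DU2 absent, DSYSR2 present */
          uint32_t dsysr;             /* cached DSYSR value */
      };

      /* Stand-ins for the group read/write register accessors. */
      static uint32_t group_read(struct du_group *g) { return g->dsysr; }
      static void group_write(struct du_group *g, uint32_t v)
      {
          g->dsysr = v;
          printf("DSYSR%d <- 0x%08x\n", g->index * 2, v);
      }

      /* Stand-in for rcar_du_crtc_dsysr_clr_set(): updates via the cache. */
      static void crtc_dsysr_clr_set(struct du_group *g, uint32_t clr,
                                     uint32_t set)
      {
          group_write(g, (g->dsysr & ~clr) | set);
      }

      static void group_start_stop(struct du_group *g, bool start)
      {
          uint32_t clr = start ? DSYSR_DRES : DSYSR_DEN;
          uint32_t set = start ? DSYSR_DEN : DSYSR_DRES;

          if (g->first_channel_missing)
              /* No CRTC maps to this DSYSR: access the register directly. */
              group_write(g, (group_read(g) & ~clr) | set);
          else
              /* Normal case: go through the CRTC so the cache stays valid. */
              crtc_dsysr_clr_set(g, clr, set);
      }

      int main(void)
      {
          struct du_group g1 = { .index = 1, .first_channel_missing = true };

          group_start_stop(&g1, true); /* starts the second group via DSYSR2 */
          return 0;
      }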
  3. 24 Nov 2018 (7 commits)
    • net: gemini: Fix copy/paste error · 07093b76
      Committed by Andreas Fiedler
      The TX stats should be updated under the tx_stats_syncp; there seems to
      be a copy/paste error in the driver that passed the wrong syncp. A toy
      model of the bug class follows this entry.
      Signed-off-by: Andreas Fiedler <andreas.fiedler@gmx.net>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
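      A toy model (C, user space) of the bug class, not the gemini driver
      itself: per-direction stats must be bracketed by their own syncp, and
      the names below are simplified stand-ins for the u64_stats helpers:

      #include <stdint.h>
      #include <stdio.h>

      struct stats_sync { unsigned int seq; }; /* stand-in for u64_stats_sync */

      static void sync_begin(struct stats_sync *s) { s->seq++; }
      static void sync_end(struct stats_sync *s)   { s->seq++; }

      struct port_stats {
          struct stats_sync rx_stats_syncp;
          struct stats_sync tx_stats_syncp;
          uint64_t tx_packets, tx_bytes;
      };

      static void account_tx(struct port_stats *p, uint64_t len)
      {
          sync_begin(&p->tx_stats_syncp); /* the bug passed the wrong syncp */
          p->tx_packets++;
          p->tx_bytes += len;
          sync_end(&p->tx_stats_syncp);
      }

      int main(void)
      {
          struct port_stats s = { 0 };

          account_tx(&s, 64);
          printf("tx_packets=%llu seq=%u\n",
                 (unsigned long long)s.tx_packets, s.tx_stats_syncp.seq);
          return 0;
      }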
    • net: phy: mscc: fix deadlock in vsc85xx_default_config · 3fa528b7
      Committed by Quentin Schulz
      The vsc85xx_default_config function, called from the vsc85xx_config_init
      function used by the VSC8530, VSC8531, VSC8540 and VSC8541 PHYs,
      mistakenly calls phy_read and phy_write in between phy_select_page and
      phy_restore_page.
      
      phy_select_page and phy_restore_page take and release the MDIO bus lock,
      but phy_write and phy_read also take and release that same lock to write
      or read a PHY register, so calling them with the lock already held
      deadlocks.
      
      Let's fix this deadlock by using phy_modify_paged, which correctly
      handles a read followed by a write in a non-standard page. A toy model
      of the deadlock follows this entry.
      
      Fixes: 6a0bfbbe ("net: phy: mscc: migrate to phy_select/restore_page functions")
      Signed-off-by: Quentin Schulz <quentin.schulz@bootlin.com>
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
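      A toy model (C, user space) of the deadlock and the fix. The MDIO bus
      lock is reduced to a non-recursive flag, and the phylib helper names
      are mirrored with simplified signatures; the bodies are stubs:

      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      static bool mdio_locked; /* the MDIO bus lock, reduced to a flag */

      static void mdio_lock(void)   { assert(!mdio_locked); mdio_locked = true; }
      static void mdio_unlock(void) { mdio_locked = false; }

      /* phy_read() (and likewise phy_write()) takes the MDIO lock itself,
       * so calling it while the lock is held deadlocks (trips the assert). */
      static uint16_t phy_read(int reg)
      {
          (void)reg;
          mdio_lock();
          mdio_unlock();
          return 0;
      }

      /* phy_select_page() takes the lock and phy_restore_page() releases
       * it, holding it across the whole paged sequence. */
      static void phy_select_page(int page) { (void)page; mdio_lock(); }
      static void phy_restore_page(void)    { mdio_unlock(); }

      /* Lock-already-held accessors used inside a paged sequence. */
      static uint16_t __phy_read(int reg)
      {
          (void)reg;
          assert(mdio_locked);
          return 0;
      }

      static void __phy_write(int reg, uint16_t val)
      {
          (void)reg; (void)val;
          assert(mdio_locked);
      }

      /* phy_modify_paged()-style helper: one lock, read-modify-write inside. */
      static void phy_modify_paged(int page, int reg, uint16_t mask,
                                   uint16_t set)
      {
          phy_select_page(page);
          __phy_write(reg, (uint16_t)((__phy_read(reg) & ~mask) | set));
          phy_restore_page();
      }

      int main(void)
      {
          phy_modify_paged(2, 20, 0x00ff, 0x0011); /* safe: lock taken once */
          /* phy_select_page(2); phy_read(20); would self-deadlock here */
          printf("%s\n", phy_read(20) == 0 ? "ok" : "?");
          return 0;
      }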
    • net: thunderx: set tso_hdrs pointer to NULL in nicvf_free_snd_queue · ef2a7cf1
      Committed by Lorenzo Bianconi
      Reset the snd_queue tso_hdrs pointer to NULL in the nicvf_free_snd_queue
      routine, since it is used to check whether the TSO DMA descriptor queue
      has been previously allocated. The issue can be triggered with the
      following reproducer:
      
      $ ip link set dev enP2p1s0v0 xdpdrv obj xdp_dummy.o
      $ ip link set dev enP2p1s0v0 xdpdrv off
      
      [  341.467649] WARNING: CPU: 74 PID: 2158 at mm/vmalloc.c:1511 __vunmap+0x98/0xe0
      [  341.515010] Hardware name: GIGABYTE H270-T70/MT70-HD0, BIOS T49 02/02/2018
      [  341.521874] pstate: 60400005 (nZCv daif +PAN -UAO)
      [  341.526654] pc : __vunmap+0x98/0xe0
      [  341.530132] lr : __vunmap+0x98/0xe0
      [  341.533609] sp : ffff00001c5db860
      [  341.536913] x29: ffff00001c5db860 x28: 0000000000020000
      [  341.542214] x27: ffff810feb5090b0 x26: ffff000017e57000
      [  341.547515] x25: 0000000000000000 x24: 00000000fbd00000
      [  341.552816] x23: 0000000000000000 x22: ffff810feb5090b0
      [  341.558117] x21: 0000000000000000 x20: 0000000000000000
      [  341.563418] x19: ffff000017e57000 x18: 0000000000000000
      [  341.568719] x17: 0000000000000000 x16: 0000000000000000
      [  341.574020] x15: 0000000000000010 x14: ffffffffffffffff
      [  341.579321] x13: ffff00008985eb27 x12: ffff00000985eb2f
      [  341.584622] x11: ffff0000096b3000 x10: ffff00001c5db510
      [  341.589923] x9 : 00000000ffffffd0 x8 : ffff0000086868e8
      [  341.595224] x7 : 3430303030303030 x6 : 00000000000006ef
      [  341.600525] x5 : 00000000003fffff x4 : 0000000000000000
      [  341.605825] x3 : 0000000000000000 x2 : ffffffffffffffff
      [  341.611126] x1 : ffff0000096b3728 x0 : 0000000000000038
      [  341.616428] Call trace:
      [  341.618866]  __vunmap+0x98/0xe0
      [  341.621997]  vunmap+0x3c/0x50
      [  341.624961]  arch_dma_free+0x68/0xa0
      [  341.628534]  dma_direct_free+0x50/0x80
      [  341.632285]  nicvf_free_resources+0x160/0x2d8 [nicvf]
      [  341.637327]  nicvf_config_data_transfer+0x174/0x5e8 [nicvf]
      [  341.642890]  nicvf_stop+0x298/0x340 [nicvf]
      [  341.647066]  __dev_close_many+0x9c/0x108
      [  341.650977]  dev_close_many+0xa4/0x158
      [  341.654720]  rollback_registered_many+0x140/0x530
      [  341.659414]  rollback_registered+0x54/0x80
      [  341.663499]  unregister_netdevice_queue+0x9c/0xe8
      [  341.668192]  unregister_netdev+0x28/0x38
      [  341.672106]  nicvf_remove+0xa4/0xa8 [nicvf]
      [  341.676280]  nicvf_shutdown+0x20/0x30 [nicvf]
      [  341.680630]  pci_device_shutdown+0x44/0x88
      [  341.684720]  device_shutdown+0x144/0x250
      [  341.688640]  kernel_restart_prepare+0x44/0x50
      [  341.692986]  kernel_restart+0x20/0x68
      [  341.696638]  __se_sys_reboot+0x210/0x238
      [  341.700550]  __arm64_sys_reboot+0x24/0x30
      [  341.704555]  el0_svc_handler+0x94/0x110
      [  341.708382]  el0_svc+0x8/0xc
      [  341.711252] ---[ end trace 3f4019c8439959c9 ]---
      [  341.715874] page:ffff7e0003ef4000 count:0 mapcount:0 mapping:0000000000000000 index:0x4
      [  341.723872] flags: 0x1fffe000000000()
      [  341.727527] raw: 001fffe000000000 ffff7e0003f1a008 ffff7e0003ef4048 0000000000000000
      [  341.735263] raw: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
      [  341.742994] page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
      
      where xdp_dummy.c is a simple bpf program that forwards the incoming
      frames to the network stack (available here:
      https://github.com/altoor/xdp_walkthrough_examples/blob/master/sample_1/xdp_dummy.c).
      A minimal sketch of the fix pattern follows this entry.
      
      Fixes: 05c773f5 ("net: thunderx: Add basic XDP support")
      Fixes: 4863dea3 ("net: Adding support for Cavium ThunderX network controller")
      Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
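      A minimal sketch (C, user space) of the fix pattern, with plain
      malloc/free standing in for the DMA allocator: the pointer doubles as
      the "already allocated" flag, so it must be reset once freed:

      #include <stdio.h>
      #include <stdlib.h>

      struct snd_queue {
          void *tso_hdrs; /* non-NULL means the TSO header area exists */
      };

      static void free_snd_queue(struct snd_queue *sq)
      {
          if (!sq->tso_hdrs)
              return;          /* never allocated, or already freed */
          free(sq->tso_hdrs);  /* dma_free_coherent() in the real driver */
          sq->tso_hdrs = NULL; /* the reset the commit adds */
      }

      int main(void)
      {
          struct snd_queue sq = { .tso_hdrs = malloc(256) };

          free_snd_queue(&sq); /* e.g. XDP teardown */
          free_snd_queue(&sq); /* e.g. device stop: now safe, no double free */
          printf("tso_hdrs=%p\n", sq.tso_hdrs);
          return 0;
      }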
    • net: amd: add missing of_node_put() · c44c749d
      Committed by Yangtao Li
      of_find_node_by_path() acquires a reference to the node it returns, and
      that reference needs to be dropped by the caller. This driver doesn't do
      that, so fix it. A hedged example of the pattern follows this entry.
      Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
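      A hedged kernel-style example (C) of the rule the commit enforces; the
      function name and device-tree path below are made up for illustration,
      not taken from the amd driver:

      #include <linux/errno.h>
      #include <linux/of.h>

      static int example_read_config(void)
      {
          /* of_find_node_by_path() takes a reference on the node. */
          struct device_node *np = of_find_node_by_path("/example-node");

          if (!np)
              return -ENODEV;

          /* ... read properties from np ... */

          of_node_put(np); /* drop the reference: the missing release */
          return 0;
      }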
    • team: no need to do team_notify_peers or team_mcast_rejoin when disabling port · 5ed9dc99
      Committed by Hangbin Liu
      team_notify_peers() sends ARP and NA messages to notify peers, and
      team_mcast_rejoin() sends multicast join-group messages. We should do
      this when a port is enabled or changed to a new port, but it makes no
      sense when a port is disabled.

      On the other hand, with mcast_rejoin_count set to 2, a failover makes
      team_port_disable() increase mcast_rejoin.count_pending to 2, and
      team_port_enable() then increases it to 4. We would end up sending 4
      mcast rejoin messages, which confuses users. The same applies to
      notify_peers.count.
      
      Fix it by deleting team_notify_peers() and team_mcast_rejoin() in
      team_port_disable(). A toy model of the counting follows this entry.
      Reported-by: Liang Li <liali@redhat.com>
      Fixes: fc423ff0 ("team: add peer notification")
      Fixes: 492b200e ("team: add support for sending multicast rejoins")
      Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
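      A toy arithmetic model (C, user space) of the double-queue behaviour
      described above; the struct fields and helpers are simplified stand-ins
      for the team driver's:

      #include <stdbool.h>
      #include <stdio.h>

      struct team {
          int rejoin_count;  /* configured mcast_rejoin_count, e.g. 2 */
          int count_pending; /* rejoin messages still queued to be sent */
      };

      static void mcast_rejoin(struct team *t)
      {
          t->count_pending += t->rejoin_count;
      }

      static void port_disable(struct team *t, bool fixed)
      {
          if (!fixed)
              mcast_rejoin(t); /* the call the fix deletes */
      }

      static void port_enable(struct team *t)
      {
          mcast_rejoin(t); /* notifying peers on enable is still wanted */
      }

      int main(void)
      {
          struct team old_code = { 2, 0 }, new_code = { 2, 0 };

          /* A failover disables the old port and enables the new one. */
          port_disable(&old_code, false); port_enable(&old_code);
          port_disable(&new_code, true);  port_enable(&new_code);
          printf("pending rejoins: before fix %d, after fix %d\n",
                 old_code.count_pending, new_code.count_pending); /* 4 vs 2 */
          return 0;
      }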
    • virtio-net: fail XDP set if guest csum is negotiated · 18ba58e1
      Committed by Jason Wang
      We don't support partially csumed packets, since their metadata will be
      lost or become incorrect during XDP processing. So fail the XDP set if
      the guest_csum feature is negotiated. A sketch of the guard follows this
      entry.
      
      Fixes: f600b690 ("virtio_net: Add XDP support")
      Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Pavel Popa <pashinho1990@gmail.com>
      Cc: David Ahern <dsahern@gmail.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
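      A sketch (C, user space) of the guard being added, with feature
      negotiation reduced to a boolean; the struct and function names are
      illustrative, not the driver's actual code:

      #include <errno.h>
      #include <stdbool.h>
      #include <stdio.h>

      struct virtnet_info {
          bool guest_csum; /* was VIRTIO_NET_F_GUEST_CSUM negotiated? */
      };

      static int xdp_set(struct virtnet_info *vi, void *prog)
      {
          /* Partial-csum metadata can't survive XDP processing: refuse. */
          if (prog && vi->guest_csum) {
              fprintf(stderr, "can't set XDP while guest csum is on\n");
              return -EOPNOTSUPP;
          }
          /* ... attach the program ... */
          return 0;
      }

      int main(void)
      {
          struct virtnet_info vi = { .guest_csum = true };
          int dummy_prog;

          printf("xdp_set -> %d\n", xdp_set(&vi, &dummy_prog)); /* rejected */
          return 0;
      }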
    • virtio-net: disable guest csum during XDP set · e59ff2c4
      Committed by Jason Wang
      We don't disable VIRTIO_NET_F_GUEST_CSUM if XDP is set. This means we
      can receive partially csumed packets with their metadata kept in the
      vnet_hdr. This has several side effects:

      - The metadata could be invalidated by header adjustment, so it might
        not be correct after XDP processing.
      - There's no way to pass such metadata through XDP_REDIRECT to another
        driver.
      - XDP does not support checksum offload right now.

      So simply disable guest csum when possible in the case of XDP. A sketch
      of the approach follows this entry.
      
      Fixes: 3f93522f ("virtio-net: switch off offloads on demand if possible on XDP set")
      Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Pavel Popa <pashinho1990@gmail.com>
      Cc: David Ahern <dsahern@gmail.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
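      A sketch (C, user space) of the offload-clearing approach under stated
      assumptions: the bit layout below is invented for illustration (it is
      not the virtio spec's) and the control-queue command is stubbed:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define GUEST_TSO4 (1u << 0) /* illustrative bits, not the spec's */
      #define GUEST_TSO6 (1u << 1)
      #define GUEST_ECN  (1u << 2)
      #define GUEST_UFO  (1u << 3)
      #define GUEST_CSUM (1u << 4)

      /* Stand-in for the control-queue "set guest offloads" command. */
      static void set_guest_offloads(uint64_t offloads)
      {
          printf("guest offloads <- 0x%llx\n", (unsigned long long)offloads);
      }

      static void xdp_set_offloads(bool attaching, uint64_t negotiated)
      {
          if (attaching)
              /* Before the fix only TSO/ECN/UFO were cleared; clearing
               * GUEST_CSUM too keeps partial csums out of the XDP path. */
              set_guest_offloads(negotiated &
                  ~(uint64_t)(GUEST_TSO4 | GUEST_TSO6 | GUEST_ECN |
                              GUEST_UFO | GUEST_CSUM));
          else
              set_guest_offloads(negotiated); /* restore on detach */
      }

      int main(void)
      {
          uint64_t negotiated = GUEST_TSO4 | GUEST_CSUM;

          xdp_set_offloads(true, negotiated);  /* prints 0x0 */
          xdp_set_offloads(false, negotiated); /* prints 0x11 */
          return 0;
      }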
  4. 23 Nov 2018 (4 commits)
  5. 22 Nov 2018 (9 commits)
  6. 21 Nov 2018 (8 commits)
  7. 20 Nov 2018 (9 commits)