1. 13 Oct 2020, 4 commits
    • net: fec: Fix phy_device lookup for phy_reset_after_clk_enable() · 64a632da
      By Marek Vasut
      phy_reset_after_clk_enable() is always called with ndev->phydev,
      however that pointer may be NULL even though the PHY device instance
      already exists and is sufficient to perform the PHY reset.
      
      This condition happens in fec_open(), where the clock must be enabled
      first, then the PHY must be reset, and then the PHY IDs can be read
      out of the PHY.
      
      If the PHY is not yet bound to the MAC, but an OF PHY node and a
      matching PHY device instance already exist, use the OF PHY node to
      obtain the PHY device instance, and then use that PHY device instance
      when triggering the PHY reset.
      
      Fixes: 1b0a83ac ("net: fec: add phy_reset_after_clk_enable() support")
      Signed-off-by: Marek Vasut <marex@denx.de>
      Cc: Christoph Niedermaier <cniedermaier@dh-electronics.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: NXP Linux Team <linux-imx@nxp.com>
      Cc: Richard Leitner <richard.leitner@skidata.com>
      Cc: Shawn Guo <shawnguo@kernel.org>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • mlx4: handle non-napi callers to napi_poll · b2b8a927
      By Jonathan Lemon
      netcons calls napi_poll with a budget of 0 to transmit packets.
      Handle this by:
       - skipping RX processing
       - not recycling TX packets to the RX cache
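The budget-0 contract can be sketched as a minimal userspace model (all names here are hypothetical stand-ins for the driver's ring structures; the real code operates on mlx4 completion queues):

```c
#include <assert.h>

/* Hypothetical model: netpoll calls the poll function with budget == 0,
 * which means "clean TX completions only, touch no RX work". */
struct model_ring { int tx_pending; int rx_pending; };

static int model_napi_poll(struct model_ring *r, int budget)
{
    int done = 0;

    r->tx_pending = 0;            /* TX completion processing always runs */

    if (budget == 0)              /* netpoll path: skip RX entirely */
        return 0;

    while (r->rx_pending > 0 && done < budget) {
        r->rx_pending--;          /* normal NAPI RX processing */
        done++;
    }
    return done;
}
```

With budget 0 the TX ring is drained but RX is untouched and 0 is returned, which is what allows netpoll to reuse the poll callback safely.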
      Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com>
      Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: korina: fix kfree of rx/tx descriptor array · 3af5f0f5
      By Valentin Vidic
      kmalloc() returns KSEG0 addresses, so convert back from KSEG1
      before calling kfree(). Also make sure the descriptor array is freed
      when the driver is unloaded from the kernel.
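The KSEG0/KSEG1 relationship behind this fix can be modeled in userspace (the macros below are hypothetical reimplementations of the MIPS32 address-window macros; on MIPS, KSEG0 at 0x80000000 is the cached window and KSEG1 at 0xA0000000 the uncached one, both aliasing the same physical memory):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the MIPS32 segment macros: kmalloc() hands out
 * KSEG0 (cached) addresses; a buffer remapped into KSEG1 (uncached) for
 * DMA must be converted back to KSEG0 before being passed to kfree(). */
#define MODEL_KSEG0ADDR(a) ((((uint32_t)(a)) & 0x1fffffffu) | 0x80000000u)
#define MODEL_KSEG1ADDR(a) ((((uint32_t)(a)) & 0x1fffffffu) | 0xa0000000u)
```

Freeing the KSEG1 alias directly would hand the allocator a pointer it never returned, which is the bug this commit fixes.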
      
      Fixes: ef11291b ("Add support the Korina (IDT RC32434) Ethernet MAC")
      Signed-off-by: Valentin Vidic <vvidic@valentin-vidic.from.hr>
      Acked-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: dsa: microchip: fix race condition · 8098bd69
      By Christian Eggers
      Between queuing the delayed work and finishing the setup of the dsa
      ports, the process may sleep in request_module() (via
      phy_device_create()) and the queued work may be executed prior to the
      switch net devices being registered. In ksz_mib_read_work(), a NULL
      dereference will happen within netif_carrier_ok(dp->slave).
      
      Not queuing the delayed work in ksz_init_mib_timer() makes things even
      worse because the work will now be queued for immediate execution
      (instead of 2000 ms) in ksz_mac_link_down() via
      dsa_port_link_register_of().
      
      Call tree:
      ksz9477_i2c_probe()
      \--ksz9477_switch_register()
         \--ksz_switch_register()
            +--dsa_register_switch()
            |  \--dsa_switch_probe()
            |     \--dsa_tree_setup()
            |        \--dsa_tree_setup_switches()
            |           +--dsa_switch_setup()
            |           |  +--ksz9477_setup()
            |           |  |  \--ksz_init_mib_timer()
            |           |  |     |--/* Start the timer 2 seconds later. */
            |           |  |     \--schedule_delayed_work(&dev->mib_read, msecs_to_jiffies(2000));
            |           |  \--__mdiobus_register()
            |           |     \--mdiobus_scan()
            |           |        \--get_phy_device()
            |           |           +--get_phy_id()
            |           |           \--phy_device_create()
            |           |              |--/* sleeping, ksz_mib_read_work() can be called meanwhile */
            |           |              \--request_module()
            |           |
            |           \--dsa_port_setup()
            |              +--/* Called for non-CPU ports */
            |              +--dsa_slave_create()
            |              |  +--/* Too late, ksz_mib_read_work() may be called beforehand */
            |              |  \--port->slave = ...
            |             ...
             |              +--/* Called for CPU port */
            |              \--dsa_port_link_register_of()
            |                 \--ksz_mac_link_down()
            |                    +--/* mib_read must be initialized here */
            |                    +--/* work is already scheduled, so it will be executed after 2000 ms */
            |                    \--schedule_delayed_work(&dev->mib_read, 0);
            \-- /* here port->slave is setup properly, scheduling the delayed work should be safe */
      
      Solution:
      1. Do not queue (only initialize) delayed work in ksz_init_mib_timer().
      2. Only queue delayed work in ksz_mac_link_down() if init is completed.
      3. Queue work once in ksz_switch_register(), after dsa_register_switch()
      has completed.
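The three-step solution can be sketched as a small userspace model (all names hypothetical, standing in for the kernel's INIT_DELAYED_WORK()/schedule_delayed_work() ordering in the real driver):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the ordering fix: the MIB work may only be
 * queued once switch registration (and thus port->slave setup) is done. */
struct model_sw { bool work_initialized; bool registered; bool work_queued; };

static void model_init_mib_timer(struct model_sw *sw)
{
    sw->work_initialized = true;  /* fix 1: initialize only, never queue */
}

static void model_mac_link_down(struct model_sw *sw)
{
    if (sw->registered)           /* fix 2: queue only after init completed */
        sw->work_queued = true;
}

static void model_switch_register(struct model_sw *sw)
{
    model_init_mib_timer(sw);     /* runs inside dsa_register_switch() */
    model_mac_link_down(sw);      /* may fire before registration finishes */
    sw->registered = true;
    sw->work_queued = true;       /* fix 3: queue once, after registration */
}
```

An early link-down event now becomes a no-op instead of racing the port setup.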
      
      Fixes: 7c6ff470 ("net: dsa: microchip: add MIB counter reading support")
      Signed-off-by: Christian Eggers <ceggers@arri.de>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  2. 10 Oct 2020, 2 commits
  3. 09 Oct 2020, 6 commits
  4. 07 Oct 2020, 3 commits
  5. 06 Oct 2020, 4 commits
  6. 05 Oct 2020, 3 commits
    • platform/x86: thinkpad_acpi: re-initialize ACPI buffer size when reuse · 720ef73d
      By Aaron Ma
      Evaluating ACPI _BCL can fail, in which case the ACPI buffer size is
      set to 0. When this buffer is reused, AE_BUFFER_OVERFLOW is triggered.
      
      Re-initializing the buffer size makes the ACPI evaluation succeed.
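The failure mode can be modeled in userspace (hypothetical names; in ACPICA the real sentinel is ACPI_ALLOCATE_BUFFER stored in the buffer's length field, which a failed evaluation overwrites):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model: a length equal to the allocate-sentinel asks ACPICA
 * to allocate the result buffer; a failed evaluation clamps length to 0,
 * so reusing the buffer without resetting it overflows. */
#define MODEL_ALLOCATE_BUFFER ((size_t)-1)

struct model_acpi_buffer { size_t length; void *pointer; };

static int model_evaluate(struct model_acpi_buffer *buf, int fail)
{
    if (fail) {
        buf->length = 0;          /* size left clamped after the failure */
        return -1;
    }
    /* 0 means buffer overflow on reuse; sentinel means success */
    return (buf->length == MODEL_ALLOCATE_BUFFER) ? 0 : -2;
}
```

Resetting the length before each reuse, as the patch does, restores the allocate-on-demand behavior.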
      
      Fixes: 46445b6b ("thinkpad-acpi: fix handle locate for video and query of _BCL")
      Signed-off-by: Aaron Ma <aaron.ma@canonical.com>
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    • net: mvneta: fix double free of txq->buf · f4544e53
      By Tom Rix
      clang static analysis reports this problem:
      
      drivers/net/ethernet/marvell/mvneta.c:3465:2: warning:
        Attempt to free released memory
              kfree(txq->buf);
              ^~~~~~~~~~~~~~~
      
      When mvneta_txq_sw_init() fails to alloc txq->tso_hdrs,
      it frees txq->buf without poisoning the pointer.  The error is caught
      in the mvneta_setup_txqs() caller, which handles the error
      by cleaning up all of the txqs with a call to
      mvneta_txq_sw_deinit(), which also frees txq->buf.
      
      Since mvneta_txq_sw_deinit() is a general cleaner, all of the
      partial cleaning in mvneta_txq_sw_init()'s error handling
      is not needed.
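The double-free pattern and its two standard remedies can be sketched in userspace (hypothetical names; the real code manipulates mvneta's DMA-coherent buffers):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical model of the fix: on a partial allocation failure, free
 * nothing locally and let the caller's generic cleanup release everything
 * exactly once; the cleanup NULLs pointers so repeated calls are safe. */
struct model_txq { int *buf; int *tso_hdrs; };

static int model_txq_sw_init(struct model_txq *txq, int fail_tso)
{
    txq->buf = malloc(sizeof(int));
    if (!txq->buf)
        return -1;
    txq->tso_hdrs = fail_tso ? NULL : malloc(sizeof(int));
    if (!txq->tso_hdrs)
        return -1;       /* no local free: caller's deinit cleans up */
    return 0;
}

static void model_txq_sw_deinit(struct model_txq *txq)
{
    free(txq->buf);
    txq->buf = NULL;     /* poison: safe against a second deinit */
    free(txq->tso_hdrs);
    txq->tso_hdrs = NULL;
}
```

Either removing the partial local free (as the patch does) or poisoning after free breaks the double-free; doing both is belt-and-braces.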
      
      Fixes: 2adb719d ("net: mvneta: Implement software TSO")
      Signed-off-by: Tom Rix <trix@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: team: fix memory leak in __team_options_register · 9a9e7749
      By Anant Thazhemadam
      The variable "i" isn't re-initialized correctly after the first loop
      under the label inst_rollback gets executed.
      
      The value of "i" is assigned to be option_count - 1, and the ensuing
      loop (under alloc_rollback) begins by decrementing i.
      Thus, the value of i when that loop begins execution is
      i = option_count - 2.
      
      Thus, when kfree(dst_opts[i]) is called in the second loop in this
      order (i.e., inst_rollback followed by alloc_rollback),
      dst_opts[option_count - 2] is the first element freed, and
      dst_opts[option_count - 1] never gets freed, causing a memory
      leak.
      
      This memory leak can be fixed by assigning i = option_count (instead of
      option_count - 1).
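The off-by-one can be reduced to a few lines (hypothetical model; the real loops free team option instances):

```c
#include <assert.h>

/* Hypothetical model of the index fix: the alloc_rollback loop
 * pre-decrements i before each kfree(), so i must start at option_count
 * (not option_count - 1) for every element to be visited. */
static int model_rollback(int option_count)
{
    int freed = 0;
    int i = option_count;    /* the fix: was option_count - 1, leaking one */

    while (--i >= 0)         /* mirrors: for (i--; i >= 0; i--) kfree(...) */
        freed++;
    return freed;
}
```

Starting from option_count - 1 would free only option_count - 1 elements, leaking the last-allocated one, exactly as the syzbot report showed.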
      
      Fixes: 80f7c668 ("team: add support for per-port options")
      Reported-by: syzbot+69b804437cfec30deac3@syzkaller.appspotmail.com
      Tested-by: syzbot+69b804437cfec30deac3@syzkaller.appspotmail.com
      Signed-off-by: Anant Thazhemadam <anant.thazhemadam@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 04 Oct 2020, 8 commits
  8. 03 Oct 2020, 10 commits
    • scsi: libiscsi: use sendpage_ok() in iscsi_tcp_segment_map() · 6aa25c73
      By Coly Li
      In the iscsi driver, iscsi_tcp_segment_map() uses the following code to
      check whether or not the page should be handled by sendpage:
          if (!recv && page_count(sg_page(sg)) >= 1 && !PageSlab(sg_page(sg)))
      
      The "page_count(sg_page(sg)) >= 1 && !PageSlab(sg_page(sg))" part is to
      make sure the page can be sent to the network layer's zero copy path.
      This part is exactly what sendpage_ok() does.
      
      This patch uses sendpage_ok() in iscsi_tcp_segment_map() to replace
      the original open coded checks.
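The predicate being consolidated here can be modeled in userspace (hypothetical struct; the kernel's sendpage_ok() in include/linux/net.h operates on struct page):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical userspace model of sendpage_ok(): a page may go through
 * the sendpage zero-copy path only if it is not a slab page and its
 * refcount is at least 1 (sendpage takes a page reference, which pages
 * with a zero refcount cannot safely provide). */
struct model_page { bool slab; int count; };

static bool model_sendpage_ok(const struct model_page *p)
{
    return !p->slab && p->count >= 1;
}
```

Callers branch on this: sendpage_ok pages go to kernel_sendpage(), the rest fall back to the copying sock_no_sendpage() path, which is the pattern the drbd and nvme-tcp commits below apply as well.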
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Lee Duncan <lduncan@suse.com>
      Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
      Cc: Vasily Averin <vvs@virtuozzo.com>
      Cc: Cong Wang <amwang@redhat.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Chris Leech <cleech@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Hannes Reinecke <hare@suse.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • drbd: code cleanup by using sendpage_ok() to check page for kernel_sendpage() · fb25ebe1
      By Coly Li
      In _drbd_send_page() a page is checked by the following code before
      being sent by kernel_sendpage():
              (page_count(page) < 1) || PageSlab(page)
      If the check is true, this page won't be sent by kernel_sendpage() and
      is handled by sock_no_sendpage() instead.
      
      This kind of check is exactly what macro sendpage_ok() does, which is
      introduced into include/linux/net.h to solve a similar send page issue
      in nvme-tcp code.
      
      This patch uses macro sendpage_ok() to replace the open coded checks to
      page type and refcount in _drbd_send_page(), as a code cleanup.
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: Philipp Reisner <philipp.reisner@linbit.com>
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • nvme-tcp: check page by sendpage_ok() before calling kernel_sendpage() · 7d4194ab
      By Coly Li
      Currently nvme_tcp_try_send_data() doesn't use kernel_sendpage() to
      send slab pages. But pages allocated by __get_free_pages() without
      __GFP_COMP, which also have a refcount of 0, are still sent by
      kernel_sendpage() to the remote end, which is problematic.
      
      The newly introduced helper sendpage_ok() checks both the PageSlab
      flag and the page_count counter, and returns true if the page is OK
      to be sent by kernel_sendpage().
      
      This patch fixes the page checking issue of nvme_tcp_try_send_data()
      with sendpage_ok(). If sendpage_ok() returns true, send the page by
      kernel_sendpage(), otherwise use sock_no_sendpage() to handle it.
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Hannes Reinecke <hare@suse.de>
      Cc: Jan Kara <jack@suse.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Mikhail Skorzhinskii <mskorzhinskiy@solarflare.com>
      Cc: Philipp Reisner <philipp.reisner@linbit.com>
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: Vlastimil Babka <vbabka@suse.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: usb: pegasus: Proper error handling when setting pegasus' MAC address · f30e25a9
      By Petko Manolov
      v2:
      
      If reading the MAC address from eeprom fails, don't throw an error; use
      a randomly generated MAC instead.  Either way the adapter will soldier
      on and the return type of set_ethernet_addr() can be reverted to void.
      
      v1:
      
      Fix a bug in set_ethernet_addr() which does not take into account
      possible errors (or partial reads) returned by its helpers.  This can
      potentially lead to writing random data into the device's MAC address
      registers.
      Signed-off-by: Petko Manolov <petko.manolov@konsulko.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/mlx5e: Fix race condition on nhe->n pointer in neigh update · 1253935a
      By Vlad Buslov
      The current neigh update event handler implementation takes a reference
      to the neighbour structure, assigns it to nhe->n, tries to schedule a
      workqueue task and releases the reference if the task was already
      enqueued. This can overwrite an existing nhe->n pointer with another
      neighbour instance, which causes a double release of the instance (once
      in the neigh update handler that failed to enqueue to the workqueue and
      another one in the neigh update workqueue task that processes the
      updated nhe->n pointer instead of the original one):
      
      [ 3376.512806] ------------[ cut here ]------------
      [ 3376.513534] refcount_t: underflow; use-after-free.
      [ 3376.521213] Modules linked in: act_skbedit act_mirred act_tunnel_key vxlan ip6_udp_tunnel udp_tunnel nfnetlink act_gact cls_flower sch_ingress openvswitch nsh nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 mlx5_ib mlx5_core mlxfw pci_hyperv_intf ptp pps_core nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd
       grace fscache ib_isert iscsi_target_mod ib_srpt target_core_mod ib_srp rpcrdma rdma_ucm ib_umad ib_ipoib ib_iser rdma_cm ib_cm iw_cm rfkill ib_uverbs ib_core sunrpc kvm_intel kvm iTCO_wdt iTCO_vendor_support virtio_net irqbypass net_failover crc32_pclmul lpc_ich i2c_i801 failover pcspkr i2c_smbus mfd_core ghash_clmulni_intel sch_fq_codel drm i2c
      _core ip_tables crc32c_intel serio_raw [last unloaded: mlxfw]
      [ 3376.529468] CPU: 8 PID: 22756 Comm: kworker/u20:5 Not tainted 5.9.0-rc5+ #6
      [ 3376.530399] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
      [ 3376.531975] Workqueue: mlx5e mlx5e_rep_neigh_update [mlx5_core]
      [ 3376.532820] RIP: 0010:refcount_warn_saturate+0xd8/0xe0
      [ 3376.533589] Code: ff 48 c7 c7 e0 b8 27 82 c6 05 0b b6 09 01 01 e8 94 93 c1 ff 0f 0b c3 48 c7 c7 88 b8 27 82 c6 05 f7 b5 09 01 01 e8 7e 93 c1 ff <0f> 0b c3 0f 1f 44 00 00 8b 07 3d 00 00 00 c0 74 12 83 f8 01 74 13
      [ 3376.536017] RSP: 0018:ffffc90002a97e30 EFLAGS: 00010286
      [ 3376.536793] RAX: 0000000000000000 RBX: ffff8882de30d648 RCX: 0000000000000000
      [ 3376.537718] RDX: ffff8882f5c28f20 RSI: ffff8882f5c18e40 RDI: ffff8882f5c18e40
      [ 3376.538654] RBP: ffff8882cdf56c00 R08: 000000000000c580 R09: 0000000000001a4d
      [ 3376.539582] R10: 0000000000000731 R11: ffffc90002a97ccd R12: 0000000000000000
      [ 3376.540519] R13: ffff8882de30d600 R14: ffff8882de30d640 R15: ffff88821e000900
      [ 3376.541444] FS:  0000000000000000(0000) GS:ffff8882f5c00000(0000) knlGS:0000000000000000
      [ 3376.542732] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 3376.543545] CR2: 0000556e5504b248 CR3: 00000002c6f10005 CR4: 0000000000770ee0
      [ 3376.544483] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [ 3376.545419] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [ 3376.546344] PKRU: 55555554
      [ 3376.546911] Call Trace:
      [ 3376.547479]  mlx5e_rep_neigh_update.cold+0x33/0xe2 [mlx5_core]
      [ 3376.548299]  process_one_work+0x1d8/0x390
      [ 3376.548977]  worker_thread+0x4d/0x3e0
      [ 3376.549631]  ? rescuer_thread+0x3e0/0x3e0
      [ 3376.550295]  kthread+0x118/0x130
      [ 3376.550914]  ? kthread_create_worker_on_cpu+0x70/0x70
      [ 3376.551675]  ret_from_fork+0x1f/0x30
      [ 3376.552312] ---[ end trace d84e8f46d2a77eec ]---
      
      Fix the bug by moving the work_struct to a dedicated
      dynamically-allocated structure. This enables every event handler to
      work on its own private neighbour pointer and removes the need for
      handling the case when the task is already enqueued.
      
      Fixes: 232c0013 ("net/mlx5e: Add support to neighbour update flow")
      Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
      Reviewed-by: Roi Dayan <roid@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
    • net/mlx5e: Fix VLAN create flow · d4a16052
      By Aya Levin
      When the interface is attached while in promiscuous mode and with VLAN
      filtering turned off, neither configuration is respected and VLAN
      filtering is performed.
      There are 2 flows which add the any-vid rules during interface attach:
      VLAN creation table and set rx mode. Each relies on the other to
      add the any-vid rules, so eventually neither of them does.
      
      Fix this by adding any-vid rules on VLAN creation regardless of
      promiscuous mode.
      
      Fixes: 9df30601 ("net/mlx5e: Restore vlan filter after seamless reset")
      Signed-off-by: Aya Levin <ayal@nvidia.com>
      Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
    • net/mlx5e: Fix VLAN cleanup flow · 8c7353b6
      By Aya Levin
      Prior to this patch, unloading an interface in promiscuous mode with
      the RX VLAN filtering feature turned off resulted in a warning. This is
      due to a wrong condition in the VLAN rules cleanup flow, which left the
      any-vid rules in the VLAN steering table. These rules prevented
      destroying the flow group and the flow table.
      
      The any-vid rules are removed in 2 flows, but neither of them removes
      them in case both promiscuous mode is set and VLAN filtering is off.
      Fix the issue by changing the condition of the VLAN table cleanup flow
      to also clean up in case of promiscuous mode.
      
      mlx5_core 0000:00:08.0: mlx5_destroy_flow_group:2123:(pid 28729): Flow group 20 wasn't destroyed, refcount > 1
      mlx5_core 0000:00:08.0: mlx5_destroy_flow_group:2123:(pid 28729): Flow group 19 wasn't destroyed, refcount > 1
      mlx5_core 0000:00:08.0: mlx5_destroy_flow_table:2112:(pid 28729): Flow table 262149 wasn't destroyed, refcount > 1
      ...
      ...
      ------------[ cut here ]------------
      FW pages counter is 11560 after reclaiming all pages
      WARNING: CPU: 1 PID: 28729 at
      drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c:660
      mlx5_reclaim_startup_pages+0x178/0x230 [mlx5_core]
      Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS
      rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
      Call Trace:
        mlx5_function_teardown+0x2f/0x90 [mlx5_core]
        mlx5_unload_one+0x71/0x110 [mlx5_core]
        remove_one+0x44/0x80 [mlx5_core]
        pci_device_remove+0x3e/0xc0
        device_release_driver_internal+0xfb/0x1c0
        device_release_driver+0x12/0x20
        pci_stop_bus_device+0x68/0x90
        pci_stop_and_remove_bus_device+0x12/0x20
        hv_eject_device_work+0x6f/0x170 [pci_hyperv]
        ? __schedule+0x349/0x790
        process_one_work+0x206/0x400
        worker_thread+0x34/0x3f0
        ? process_one_work+0x400/0x400
        kthread+0x126/0x140
        ? kthread_park+0x90/0x90
        ret_from_fork+0x22/0x30
         ---[ end trace 6283bde8d26170dc ]---
      
      Fixes: 9df30601 ("net/mlx5e: Restore vlan filter after seamless reset")
      Signed-off-by: Aya Levin <ayal@nvidia.com>
      Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
    • net/mlx5e: Fix return status when setting unsupported FEC mode · 2608a2f8
      By Aya Levin
      Verify the configured FEC mode is supported by at least a single link
      mode before applying the command. Otherwise fail the command and return
      "Operation not supported".
      Prior to this patch, the command was successful, yet it falsely set all
      link modes to FEC auto mode, as if FEC mode had been configured to
      auto. Auto mode is the default configuration if a link mode doesn't
      support the configured FEC mode.
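The validation step can be modeled as a simple capability scan (hypothetical names and bitmask layout; the real code checks per-link-mode FEC capability fields in firmware registers):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the added check: a requested FEC mode is applied
 * only if at least one link mode advertises support for it; otherwise
 * the command fails with -EOPNOTSUPP instead of silently falling back
 * to auto mode on every link mode. */
#define MODEL_EOPNOTSUPP 95

static int model_set_fec(uint32_t fec_bit, const uint32_t *link_caps, int n)
{
    int i;

    for (i = 0; i < n; i++)
        if (link_caps[i] & fec_bit)
            return 0;             /* supported by at least one link mode */
    return -MODEL_EOPNOTSUPP;     /* "Operation not supported" */
}
```

Failing up front, rather than after a partial apply, is what keeps the reported status consistent with what the hardware will actually do.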
      
      Fixes: b5ede32d ("net/mlx5e: Add support for FEC modes based on 50G per lane links")
      Signed-off-by: Aya Levin <ayal@mellanox.com>
      Reviewed-by: Eran Ben Elisha <eranbe@nvidia.com>
      Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
    • net/mlx5e: Fix driver's declaration to support GRE offload · 3d093bc2
      By Aya Levin
      Declare GRE offload support with respect to the inner protocol. Add a
      list of supported inner protocols on which the driver can offload
      checksum and GSO. For other protocols, inform the stack to do the needed
      operations. There is no noticeable impact on GRE performance.
      
      Fixes: 27299841 ("net/mlx5e: Support TSO and TX checksum offloads for GRE tunnels")
      Signed-off-by: Aya Levin <ayal@mellanox.com>
      Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
      Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
    • net/mlx5e: CT, Fix coverity issue · 2b021989
      By Maor Dickman
      The cited commit introduced the following coverity issue at function
      mlx5_tc_ct_rule_to_tuple_nat:
      - Memory - corruptions (OVERRUN)
        Overrunning array "tuple->ip.src_v6.in6_u.u6_addr32" of 4 4-byte
        elements at element index 7 (byte offset 31) using index
        "ip6_offset" (which evaluates to 7).
      
      In case of IPv6 destination address rewrite, ip6_offset values are
      between 4 and 7, which will cause a memory overrun from array
      "tuple->ip.src_v6.in6_u.u6_addr32" into array
      "tuple->ip.dst_v6.in6_u.u6_addr32".
      
      Fixed by writing the value directly to array
      "tuple->ip.dst_v6.in6_u.u6_addr32" in case ip6_offset values are
      between 4 and 7.
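The overrun and its fix can be reduced to an indexing sketch (hypothetical struct; the real tuple holds in6_addr unions where src_v6 and dst_v6 happen to be adjacent, which is why the overrun "worked" before):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the fix: rewrite offsets 0-3 index the source
 * address words and 4-7 the destination address words, instead of
 * indexing past the end of the 4-element source array. */
struct model_tuple { uint32_t src_v6[4]; uint32_t dst_v6[4]; };

static void model_rewrite_word(struct model_tuple *t, int ip6_offset,
                               uint32_t val)
{
    if (ip6_offset < 4)
        t->src_v6[ip6_offset] = val;       /* source address word */
    else
        t->dst_v6[ip6_offset - 4] = val;   /* fix: was src_v6[ip6_offset] */
}
```

Writing through the proper array removes the out-of-bounds access Coverity flagged, without changing the bytes that end up in the tuple.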
      
      Fixes: bc562be9 ("net/mlx5e: CT: Save ct entries tuples in hashtables")
      Signed-off-by: Maor Dickman <maord@nvidia.com>
      Reviewed-by: Roi Dayan <roid@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>