- 11 November 2017, 15 commits
-
-
Committed by Heinz Mauelshagen
Requesting a sync on an active raid device via a table reload (see the 'sync' parameter in Documentation/device-mapper/dm-raid.txt) skips the super_load() call that defines the superblock size (rdev->sb_size) -- resulting in an oops if/when super_sync()->memset() is called. Fix by moving the initialization of the superblock start and size out of super_load() to the caller (analyse_superblocks).
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
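A minimal sketch of the fix's shape (not the literal dm-raid code; superblock_size() is a stand-in for the real computation from the metadata device's logical block size):

    /* In analyse_superblocks(): set the geometry for every rdev,
     * whether or not super_load() actually runs, so that
     * super_sync()'s memset(sb, 0, rdev->sb_size) is always safe. */
    rdev_for_each(rdev, &rs->md) {
        rdev->sb_start = 0;
        rdev->sb_size = superblock_size(rdev);  /* stand-in name */

        r = super_load(rdev, freshest);  /* may be skipped on reload */
    }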
-
Committed by Mikulas Patocka
When slub_debug is enabled, kmalloc returns unaligned memory. XFS uses this unaligned memory for its buffers (if an unaligned buffer crosses a page, XFS frees it and allocates a full page instead -- see the function xfs_buf_allocate_memory). dm-integrity checks whether bv_offset is aligned on the page size, and this check fails with slub_debug and XFS. Fix this bug by removing the bv_offset check, leaving only the check on bv_len.
Fixes: 7eada909 ("dm: add integrity target")
Cc: stable@vger.kernel.org # v4.12+
Reported-by: Bruno Prémont <bonbons@sysophe.eu>
Reviewed-by: Milan Broz <gmazyland@gmail.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
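A hedged sketch of the relaxed check (illustrative names; sector_mask stands in for the driver's sector-size mask). Only the length must be sector aligned -- the offset may land anywhere inside the page, as it does for XFS buffers under slub_debug:

    /* Before: rejected bvecs whose offset wasn't aligned. */
    if (unlikely((bv.bv_offset | bv.bv_len) & sector_mask))
            return -EIO;

    /* After: only the length matters for overrun safety. */
    if (unlikely(bv.bv_len & sector_mask))
            return -EIO;

The next entry applies the same reasoning to dm-crypt.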
-
Committed by Mikulas Patocka
When slub_debug is enabled, kmalloc returns unaligned memory. XFS uses this unaligned memory for its buffers (if an unaligned buffer crosses a page, XFS frees it and allocates a full page instead -- see the function xfs_buf_allocate_memory). dm-crypt checks whether bv_offset is aligned on the page size, and these checks fail with slub_debug and XFS. Fix this bug by removing the bv_offset checks. Switch to checking whether bv_len is aligned instead of bv_offset (this check should be sufficient to prevent overruns if a bio with too small a bv_len is received).
Fixes: 8f0009a2 ("dm crypt: optionally support larger encryption sector size")
Cc: stable@vger.kernel.org # v4.12+
Reported-by: Bruno Prémont <bonbons@sysophe.eu>
Tested-by: Bruno Prémont <bonbons@sysophe.eu>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Milan Broz <gmazyland@gmail.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Committed by Mike Snitzer
Make dm_get_md() and dm_get_from_kobject() use similar code.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Committed by Hou Tao
The following BUG_ON was hit when testing repeated creation and removal of DM devices:

    kernel BUG at drivers/md/dm.c:2919!
    CPU: 7 PID: 750 Comm: systemd-udevd Not tainted 4.1.44
    Call Trace:
     [<ffffffff81649e8b>] dm_get_from_kobject+0x34/0x3a
     [<ffffffff81650ef1>] dm_attr_show+0x2b/0x5e
     [<ffffffff817b46d1>] ? mutex_lock+0x26/0x44
     [<ffffffff811df7f5>] sysfs_kf_seq_show+0x83/0xcf
     [<ffffffff811de257>] kernfs_seq_show+0x23/0x25
     [<ffffffff81199118>] seq_read+0x16f/0x325
     [<ffffffff811de994>] kernfs_fop_read+0x3a/0x13f
     [<ffffffff8117b625>] __vfs_read+0x26/0x9d
     [<ffffffff8130eb59>] ? security_file_permission+0x3c/0x44
     [<ffffffff8117bdb8>] ? rw_verify_area+0x83/0xd9
     [<ffffffff8117be9d>] vfs_read+0x8f/0xcf
     [<ffffffff81193e34>] ? __fdget_pos+0x12/0x41
     [<ffffffff8117c686>] SyS_read+0x4b/0x76
     [<ffffffff817b606e>] system_call_fastpath+0x12/0x71

The bug is easily triggered if an extra delay (e.g. 10ms) is added between the test of DMF_FREEING & DMF_DELETING and dm_get() in dm_get_from_kobject(). To fix it, ensure that the test of DMF_FREEING & DMF_DELETING and dm_get() are done atomically, so _minor_lock is used. The other callers of dm_get() have been checked and are OK: some invoke dm_get() under _minor_lock, some invoke it under _hash_lock, and dm_start_request() invokes it after increasing md->open_count.
Cc: stable@vger.kernel.org
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
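A condensed sketch of the fixed function (close to dm_get_from_kobject() in drivers/md/dm.c, details trimmed): the flag test and the reference bump now sit in the same _minor_lock critical section, closing the window in which the device could be freed between them.

    struct mapped_device *dm_get_from_kobject(struct kobject *kobj)
    {
        struct mapped_device *md;

        md = container_of(kobj, struct mapped_device, kobj_holder.kobj);

        spin_lock(&_minor_lock);
        if (test_bit(DMF_FREEING, &md->flags) ||
            test_bit(DMF_DELETING, &md->flags)) {
            md = NULL;
            goto out;
        }
        dm_get(md);    /* safe: flags cannot change while locked */
    out:
        spin_unlock(&_minor_lock);
        return md;
    }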
-
Committed by Mikulas Patocka
The structure srcu_struct can be very big; its size is proportional to the value of CONFIG_NR_CPUS. The Fedora kernel sets CONFIG_NR_CPUS to 8192, so the field io_barrier in struct mapped_device occupies 84kB in the debugging kernel and 50kB in the non-debugging kernel. The large size may cause kzalloc_node to fail. To avoid the allocation failure, use kvzalloc_node, which falls back to vmalloc when a large contiguous chunk of memory is not available. This patch also moves the field io_barrier to the last position in struct mapped_device -- on many processor architectures, short memory offsets result in smaller code than long memory offsets; on x86-64 this reduces code size by 320 bytes. Note to stable kernel maintainers: kernels 4.11 and older don't have kvzalloc_node; use vzalloc_node instead.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
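A sketch of the allocation change (kernel context assumed; the real site is device allocation in drivers/md/dm.c). kvzalloc_node() tries a contiguous kmalloc first and transparently falls back to vmalloc, so the matching free must become kvfree():

    #include <linux/mm.h>   /* kvzalloc_node(), kvfree() */

    static struct mapped_device *alloc_dev_sketch(int numa_node_id)
    {
        struct mapped_device *md;

        /* A plain kzalloc_node() of ~50-84kB can fail under memory
         * fragmentation; the kv variant cannot (short of real OOM). */
        md = kvzalloc_node(sizeof(*md), GFP_KERNEL, numa_node_id);
        if (!md)
            return NULL;
        return md;
    }

    static void free_dev_sketch(struct mapped_device *md)
    {
        kvfree(md);    /* not kfree(): md may be vmalloc'ed */
    }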
-
Committed by Damien Le Moal
The SCSI layer allows ZBC drives to have a smaller last "runt" zone. For such a device, specifying the entire capacity for a dm-zoned target table entry fails, because the specified capacity is not aligned on the zone size indicated in the device's request queue structure. Fix this problem by ignoring the last runt zone in the entry length when setting up the dm-zoned target (ctr method) and when iterating the target's table entries (iterate_devices method). This allows dm-zoned users to easily set up a target using either the entire device capacity (as mandated by dm-zoned) or the aligned capacity excluding the last runt zone. While at it, replace direct references to the device queue's chunk_sectors limit with calls to the accessor blk_queue_zone_sectors().
Reported-by: Peter Desnoyers <pjd@ccs.neu.edu>
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
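A hedged sketch of the relaxed capacity check (condensed; zone sizes are a power of two, so masking rounds down to whole zones):

    struct request_queue *q = bdev_get_queue(bdev);
    sector_t zone_sectors = blk_queue_zone_sectors(q);
    sector_t capacity = i_size_read(bdev->bd_inode) >> SECTOR_SHIFT;
    /* Usable length with a possible trailing runt zone dropped. */
    sector_t aligned_capacity = capacity & ~(zone_sectors - 1);

    /* Accept the full capacity or the runt-less aligned capacity. */
    if (ti->len != capacity && ti->len != aligned_capacity) {
        ti->error = "Invalid device capacity";
        return -EINVAL;
    }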
-
Committed by Jérémy Lefaure
Using the ARRAY_SIZE macro improves the readability of the code. Found with Coccinelle using the following semantic patch:

    @r depends on (org || report)@
    type T;
    T[] E;
    position p;
    @@
    (
      (sizeof(E)@p / sizeof(*E))
    |
      (sizeof(E)@p / sizeof(E[...]))
    |
      (sizeof(E)@p / sizeof(T))
    )

Signed-off-by: Jérémy Lefaure <jeremy.lefaure@lse.epita.fr>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
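A minimal illustration of the transformation this patch applies (generic example, not a specific dm site):

    #include <linux/kernel.h>   /* ARRAY_SIZE() */

    static const unsigned int sizes[] = { 512, 1024, 2048, 4096 };

    static unsigned int lookup(unsigned int i)
    {
        /* Before: i < sizeof(sizes) / sizeof(sizes[0]) */
        if (i < ARRAY_SIZE(sizes))    /* intent is now explicit */
            return sizes[i];
        return 0;
    }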
-
Committed by Ross Zwisler
Now that we have the ability to log filesystem writes using a flat buffer, add support for DAX. The motivation for this support is the need for an xfstest that can exercise the new MAP_SYNC DAX flag. By logging filesystem activity with dm-log-writes we can show that MAP_SYNC page faults write out their metadata as they happen, instead of requiring an explicit msync/fsync. Unfortunately we can't easily track data that has been written via mmap() now that the dax_flush() abstraction was removed by commit c3ca015f ("dax: remove the pmem_dax_ops->flush abstraction"). Otherwise we could just treat each flush as a big write and store the data being synced to media. It may be worthwhile to add the dax_flush() entry point back, just as a notifier, so we can do this logging.
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Committed by Ross Zwisler
Currently dm-log-writes supports writing filesystem data via BIOs, and writing internal metadata from a flat buffer via write_metadata(). For DAX writes, though, we won't have a BIO, but will instead have an iterator that we'll want to use to fill a flat data buffer. So, create write_inline_data(), which allows us to write filesystem data using a flat buffer as a source, and wire it up in log_one_block().
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Committed by Mike Snitzer
There is only one per_bio_data size now that the writethrough-specific data has been removed from the per_bio_data structure.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Committed by Mike Snitzer
Now that the writethrough code is much simpler, there is no need to track so much state or cascade bio submission (as was done, via writethrough_endio(), to issue origin then cache IO in series). The now-obsolete writethrough list and workqueue are removed as well.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Committed by Mike Snitzer
Discontinue issuing writethrough write IO in series to the origin and then the cache. Use bio_clone_fast() to create a new origin clone bio that will be mapped to the origin device, and then bio_chain() it to the bio that gets remapped to the cache device. The origin clone bio does _not_ have a copy of the per_bio_data -- as such, check_if_tick_bio_needed() will not be called. The cache bio (the parent bio) will not complete until the origin bio has completed -- this fulfills bio_clone_fast()'s requirements as well as the requirement that the original IO not complete until the write has reached both the origin and cache devices.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
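A condensed sketch of the parallel-submission pattern (close to the remap helper in drivers/md/dm-cache-target.c, with setup details trimmed):

    static void remap_to_origin_and_cache(struct cache *cache, struct bio *bio,
                                          dm_oblock_t oblock, dm_cblock_t cblock)
    {
        struct bio *origin_bio = bio_clone_fast(bio, GFP_NOIO, cache->bs);

        BUG_ON(!origin_bio);

        /* bio (the parent) cannot complete before origin_bio does. */
        bio_chain(origin_bio, bio);

        remap_to_origin(cache, origin_bio);    /* no per_bio_data copy */
        submit_bio(origin_bio);

        remap_to_cache(cache, bio, cblock);    /* proceeds in parallel */
    }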
-
Committed by Mike Snitzer
No functional changes; just a bit cleaner than passing the cache_features structure around.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Committed by Joe Thornber
When a DM cache in writeback mode moves data between the slow and fast device, it can often avoid a copy if the triggering bio either:

i) covers the whole block (no point copying if we're about to overwrite it)
ii) is a promotion whose origin block is currently discarded

Prior to this fix there was a race with case (ii). The discard status was checked with a shared lock held (rather than an exclusive one), so another bio could run in parallel and write data to the origin, removing the discard state. After the promotion, that parallel write would have been lost. With this fix, the discard status is re-checked once the exclusive lock has been acquired; if the block is no longer discarded, it falls back to the slower full-copy path.
Fixes: b29d4986 ("dm cache: significant rework to leverage dm-bio-prison-v2")
Cc: stable@vger.kernel.org # v4.12+
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
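An illustrative userspace model of the recheck pattern (stand-in names; the real code uses dm-bio-prison-v2 cell locks, not a pthread rwlock):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
    static bool origin_discarded;   /* stands in for the discard bit */

    static void promote_without_copy(void) { /* install mapping only */ }
    static void full_copy(void)             { /* copy origin to cache */ }

    static void promote(void)
    {
        bool skip_copy;

        pthread_rwlock_rdlock(&lock);    /* shared: only a hint */
        skip_copy = origin_discarded;
        pthread_rwlock_unlock(&lock);

        pthread_rwlock_wrlock(&lock);    /* exclusive: authoritative */
        if (skip_copy && !origin_discarded)
            skip_copy = false;           /* a racing write cleared it */
        if (skip_copy)
            promote_without_copy();
        else
            full_copy();                 /* slower, but never loses data */
        pthread_rwlock_unlock(&lock);
    }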
-
- 25 October 2017, 3 commits
-
-
Committed by Elena Reshetova
atomic_t variables are currently used to implement reference counters with the following properties:
- the counter is initialized to 1 using atomic_set()
- a resource is freed when the counter reaches zero
- once the counter reaches zero, further increments aren't allowed
- the counter schema uses basic atomic operations (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to the newly provided refcount_t type and API, which prevents accidental counter overflows and underflows. This is important since overflows and underflows can lead to use-after-free situations and be exploitable. The variable dm_cache_metadata.ref_count is used as a pure reference counter. Convert it to refcount_t and fix up the operations.
Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Committed by Elena Reshetova
atomic_t variables are currently used to implement reference counters with the following properties:
- the counter is initialized to 1 using atomic_set()
- a resource is freed when the counter reaches zero
- once the counter reaches zero, further increments aren't allowed
- the counter schema uses basic atomic operations (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to the newly provided refcount_t type and API, which prevents accidental counter overflows and underflows. This is important since overflows and underflows can lead to use-after-free situations and be exploitable. The variable table_device.count is used as a pure reference counter. Convert it to refcount_t and fix up the operations.
Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Committed by Elena Reshetova
atomic_t variables are currently used to implement reference counters with the following properties:
- the counter is initialized to 1 using atomic_set()
- a resource is freed when the counter reaches zero
- once the counter reaches zero, further increments aren't allowed
- the counter schema uses basic atomic operations (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to the newly provided refcount_t type and API, which prevents accidental counter overflows and underflows. This is important since overflows and underflows can lead to use-after-free situations and be exploitable. The variable dm_dev_internal.count is used as a pure reference counter. Convert it to refcount_t and fix up the operations.
Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
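The three commits above apply the same mechanical conversion; a sketch using the table_device case (free_table_device() is a stand-in for the resource's release path):

    #include <linux/refcount.h>

    struct table_device {
        refcount_t count;               /* was: atomic_t count */
    };

    static void free_table_device(struct table_device *td);  /* stand-in */

    static void td_init(struct table_device *td)
    {
        refcount_set(&td->count, 1);    /* was: atomic_set() */
    }

    static void td_get(struct table_device *td)
    {
        refcount_inc(&td->count);       /* was: atomic_inc(); refcount_t
                                         * additionally traps increments
                                         * from zero and overflows */
    }

    static void td_put(struct table_device *td)
    {
        if (refcount_dec_and_test(&td->count))  /* was: atomic_dec_and_test() */
            free_table_device(td);
    }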
-
- 22 October 2017, 7 commits
-
-
Committed by Bernd Edlinger
This is a possible reason for various hard-to-reproduce problems on my ARMv7-SMP test system. The symptoms are imprecise external aborts on recent kernels, and various kinds of network stalls and unexpected page allocation failures on older kernels. My testing indicates that the trouble started between v4.5 and v4.6 and persists up to v4.14. Using dirty_tx before acquiring the spin lock is clearly wrong and was first introduced with v4.6.
Fixes: e3ad57c9 ("stmmac: review RX/TX ring management")
Signed-off-by: Bernd Edlinger <bernd.edlinger@hotmail.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Geert Uytterhoeven
If an Ethernet PHY is initialized before the interrupt controller it is connected to, a message like the following is printed:

    irq: no irq domain found for /interrupt-controller@e61c0000 !

However, the actual error is ignored, leading to a non-functional (POLL) PHY interrupt later:

    Micrel KSZ8041RNLI ee700000.ethernet-ffffffff:01: attached PHY driver [Micrel KSZ8041RNLI] (mii_bus:phy_addr=ee700000.ethernet-ffffffff:01, irq=POLL)

Depending on whether the PHY driver falls back to polling, Ethernet may or may not work. To fix this (see the sketch after this entry):
1. Switch of_mdiobus_register_phy() from irq_of_parse_and_map() to of_irq_get(). Unlike the former, the latter returns -EPROBE_DEFER if the interrupt controller is not yet available, so this condition can be detected. Other errors are handled the same as before, i.e. the passed mdio->irq[addr] is used as the interrupt.
2. Propagate and handle errors from of_mdiobus_register_phy() and of_mdiobus_register_device().
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
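A condensed sketch of change (1), close to of_mdiobus_register_phy() in drivers/of/of_mdio.c:

    rc = of_irq_get(child, 0);
    if (rc == -EPROBE_DEFER)    /* irq controller not probed yet */
        return rc;              /* propagate so probing is retried */
    if (rc > 0) {
        phy->irq = rc;
        mdio->irq[addr] = rc;
    } else {
        /* other errors: fall back to the passed mdio->irq[addr] */
        phy->irq = mdio->irq[addr];
    }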
-
Committed by Jose Abreu
Prevent an infinite loop by correctly setting the loop condition to break when i == 10.
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Joao Pinto <jpinto@synopsys.com>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jose Abreu
When using GMAC4, the valid timestamp is in the CTX next descriptor, but we are passing the previous descriptor to the get_rx_timestamp_status() callback. Fix this and, while at it, rework the function logic a little.
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Joao Pinto <jpinto@synopsys.com>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jose Abreu
When RX HW timestamping is enabled and a frame is discarded, we are not freeing the skb but instead only setting the ring entry to NULL. Add a call to dev_kfree_skb_any() so that the skb is correctly freed.
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Joao Pinto <jpinto@synopsys.com>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
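A hedged sketch of the discard path (ring and array names approximate the stmmac RX handler):

    if (unlikely(status == discard_frame)) {
        priv->dev->stats.rx_errors++;
        if (priv->hwts_rx_en && !priv->extend_desc) {
            /* Was only NULLed before, leaking the skb. */
            dev_kfree_skb_any(rx_q->rx_skbuff[entry]);
            rx_q->rx_skbuff[entry] = NULL;
        }
    }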
-
Committed by Petr Machata
Spectrum tunnels do not default to a TTL of "inherit" like the Linux ones do. Configure TIGCR on router init so that the TTL of tunnel packets is copied from the overlay packets.
Fixes: ee954d1a ("mlxsw: spectrum_router: Support GRE tunnels")
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Petr Machata
The TIGCR register is used for setting up the IPinIP tunnel configuration.
Fixes: ee954d1a ("mlxsw: spectrum_router: Support GRE tunnels")
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 21 October 2017, 10 commits
-
-
Committed by Igor Russkikh
The default Tx rates cause very long ISR delays on Tx: 0xff is a 510us delay, giving only ~2000 interrupts per second for Tx ring cleanup. With these settings the UDP Tx rate never exceeded ~800Mbps on a single stream. Changing the minimum delay to 0xF improves this markedly, to ~6Gbps. TCP stream performance is almost unaffected by this change, since LSO optimizations play the important role there. CPU load is affected insignificantly.
Signed-off-by: Pavel Belous <pavel.belous@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Igor Russkikh
The Aquantia NIC allows both TX and RX interrupt throttle rate (ITR) management, but this was used in a very limited way via predefined values. This patch allows ITR default values to be set via module command line arguments and via standard ethtool coalescing settings.
Signed-off-by: Pavel Belous <pavel.belous@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Igor Russkikh
That may lead to an mmio resource leak.
Signed-off-by: Pavel Belous <pavel.belous@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Igor Russkikh
There is not much practical use in having more MSIX vectors than the number of CPUs, so cap the count first with a preconfigured limit, then with the number of CPUs online.
Signed-off-by: Pavel Belous <pavel.belous@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
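A sketch of the clamping; the preconfigured-limit macro name is an assumption, not confirmed from the driver source:

    #include <linux/cpumask.h>   /* num_online_cpus() */
    #include <linux/kernel.h>    /* min_t() */

    /* Cap by the driver's configured limit first, then by CPUs. */
    nvec = min_t(unsigned int, nvec, AQ_CFG_PCI_FUNC_MSIX_IRQS);
    nvec = min_t(unsigned int, nvec, num_online_cpus());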
-
Committed by Igor Russkikh
When doing ifconfig down/up, the driver reported carrier_off neither in nic_stop nor in nic_start. That caused the link to be visible as "up" for a couple of seconds immediately after "ifconfig up".
Signed-off-by: Pavel Belous <pavel.belous@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Igor Russkikh
Queue stat strings are cleaned up, duplicate stat name strings are removed, and a queue-restarts counter is added.
Signed-off-by: Pavel Belous <pavel.belous@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Igor Russkikh
The chip's internal statistics are never reset until a hardware reboot, which is quite inconvenient for ethtool statistics usage. This patch implements incremental statistics updates inside the service callback. Upon NIC initialization, a first request is made to fetch the initial stat data, and the currently collected stat data is cleared. The internal statistics mailbox readout is also improved to save space and increase readability.
Signed-off-by: Pavel Belous <pavel.belous@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Sherry Yang
Don't access next->data in the kernel debug message when the next buffer is null.
Acked-by: Arve Hjønnevåg <arve@android.com>
Signed-off-by: Sherry Yang <sherryy@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Sherry Yang
Use the binder_alloc struct's mm_struct rather than taking a reference on the mm struct through get_task_mm, to avoid a potential deadlock between the lru lock, task lock, and dentry lock, since a thread can hold the task lock and the dentry lock while trying to acquire the lru lock.
Acked-by: Arve Hjønnevåg <arve@android.com>
Signed-off-by: Sherry Yang <sherryy@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Stefano Brivio
On big-endian machines, the functions converting between tunnel ID and VNI use the three LSBs of the tunnel ID storage to map the VNI. The comparison function eq_tun_id_and_vni(), on the other hand, attempted to map the VNI from the three MSBs. Fix it by using the same check implemented on LE, which maps the VNI from the three LSBs of the tunnel ID.
Fixes: 2e0b26e1 ("geneve: Optimize geneve device lookup.")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: Jakub Sitnicki <jkbs@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
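A sketch of the corrected comparison (close to eq_tun_id_and_vni() in drivers/net/geneve.c): on both endiannesses the VNI now maps to bytes 5-7 of the 8-byte tunnel ID -- its three least significant bytes on big endian.

    static bool eq_tun_id_and_vni(u8 *tun_id, u8 *vni)
    {
    #ifdef __BIG_ENDIAN
        /* was: matched against the tun_id MSBs (bytes 0..2) */
        return (vni[0] == tun_id[5]) &&
               (vni[1] == tun_id[6]) &&
               (vni[2] == tun_id[7]);
    #else
        return !memcmp(vni, &tun_id[5], 3);
    #endif
    }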
-
- 20 October 2017, 5 commits
-
-
Committed by Dexuan Cui
Without the patch, vmbus_hvsock_device_unregister() can destroy the device prematurely when close() is called, causing a NULL dereference or potential data loss (the last portion of the data stream may be dropped prematurely).
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by David Kozub
The interrupt handler mfgpt_tick() is not robust against spurious interrupts that arrive before the clock event device is registered and fully initialized. The reason is that the safeguard against spurious interrupts only checks for the clockevents shutdown state and lacks a check for the detached state. If an interrupt hits while the device is in the detached state, it passes the safeguard and dereferences the event handler callback, which is NULL. Add the missing state check.
Fixes: 8f9327cb ("clockevents/drivers/cs5535: Migrate to new 'set-state' interface")
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Kozub <zub@linux.fjfi.cvut.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20171020093103.3317F6004D@linux.fjfi.cvut.cz
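A condensed sketch of the hardened guard in the handler (the real code also disables the timer and acks the hardware event):

    static irqreturn_t mfgpt_tick(int irq, void *dev_id)
    {
        /* A spurious interrupt can arrive before registration
         * completes: bail out in both inactive states. */
        if (clockevent_state_detached(&cs5535_clockevent) ||
            clockevent_state_shutdown(&cs5535_clockevent))
            return IRQ_HANDLED;

        cs5535_clockevent.event_handler(&cs5535_clockevent);
        return IRQ_HANDLED;
    }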
-
Committed by Martijn Coenen
We're not guaranteed that subsequent calls to poll() will have a poll_table_struct parameter with _qproc set. When _qproc is not set, poll_wait() is a no-op, and we won't be woken up correctly.
Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Stefan Roese
The patch edf10919 [dmaengine: altera: fix spinlock usage] missed changing 2 occurrences of spin_unlock_bh() to spin_unlock_irqrestore(). This patch fixes that by moving to the IRQ-safe call in the error paths as well.
Fixes: edf10919 (dmaengine: altera: fix spinlock usage)
Signed-off-by: Stefan Roese <sr@denx.de>
Reviewed-by: Sylvain Lesne <lesne@alse-fr.com>
[add fixes tag and fix typo in log]
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
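A sketch of the pairing rule the fix restores (lock and helper names are illustrative, not the driver's actual identifiers):

    unsigned long flags;

    spin_lock_irqsave(&dev->lock, flags);
    if (setup_descriptors(dev) < 0) {
        /* was: spin_unlock_bh(&dev->lock) -- mismatched with the
         * irqsave lock above and unsafe from hard-irq context */
        spin_unlock_irqrestore(&dev->lock, flags);
        return -ENOMEM;
    }
    spin_unlock_irqrestore(&dev->lock, flags);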
-
Committed by Dmitry Torokhov
Before trying to use the CDC union descriptor, validate that it is sane by checking that intf->altsetting->extra is big enough and that the descriptor's bLength is neither too big nor too small.
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
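A hedged sketch of the checks (illustrative; find_cdc_union() stands in for the driver's descriptor lookup):

    u8 *buf = intf->altsetting->extra;
    int buflen = intf->altsetting->extralen;
    struct usb_cdc_union_desc *union_desc;

    if (buflen < (int)sizeof(*union_desc))
        return NULL;                    /* extra data too small */

    union_desc = find_cdc_union(buf, buflen);
    if (!union_desc)
        return NULL;

    /* bLength must cover the struct and stay inside the buffer. */
    if (union_desc->bLength < sizeof(*union_desc) ||
        union_desc->bLength > buflen - ((u8 *)union_desc - buf))
        return NULL;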
-