- 16 Aug 2017, 7 commits
-
-
Authored by Julian Wiedmann
Replace some open-coded parts with their proper API calls. Also remove two skb_[re]set_mac_header() calls in the L2 xmit paths that are clearly no longer required, since at least commit 6d1ccff6 ("net: reset mac header in dev_start_xmit()"). Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Julian Wiedmann
For some xmit paths we pass down a data offset to qeth_fill_buffer(), to indicate that the first k bytes of the skb should be skipped when mapping it into buffer elements. Commit acd9776b ("s390/qeth: no ETH header for outbound AF_IUCV") recently switched the offset for the IUCV-over-HiperSockets path from 0 to ETH_HLEN, so the offset is now fixed per device type (OSA: 0, IQD: > 0) for all xmit paths. OSA would previously pass down -1 from do_send_packet(), to distinguish between 1) OSA and 2) IQD with offset 0. That's no longer needed now, so have it pass 0, make the offset unsigned and clean up how we apply the offset in __qeth_fill_buffer(). No change of behaviour for any of our current xmit paths. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Julian Wiedmann
1. For adjusting the buffer's next_element_to_fill in __fill_buffer(), just pass the full qeth_qdio_out_buffer struct.
2. When adding a header element, be consistent about passing a hint ('is_first_elem') to __fill_buffer().
No functional change. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Julian Wiedmann
Improve readability of the code that determines a buffer element's fragment type, and reduce the number of cases from 5 down to 3. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Julian Wiedmann
qeth_l3_setadapter_parms() queries the device for supported adapterparms, even though they have already been queried as part of the device's high-level setup. Remove that extra call. The only call chain for qeth_l3_setadapter_parms() is
  __qeth_l3_set_online()
    qeth_core_hardsetup_card()
      qeth_query_setadapterparms()
    qeth_l3_setadapter_parms()
      qeth_query_setadapterparms()
and we only reach qeth_l3_setadapter_parms() if the first adapterparms query succeeds. Hence removing the second query results in no loss of functionality. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Julian Wiedmann
qeth_l2_request_initial_mac() queries the device for its supported adapterparms, even though they have already been queried as part of the device's high-level setup. Remove that extra call. The only call chain for qeth_l2_request_initial_mac() is
  __qeth_l2_set_online()
    qeth_core_hardsetup_card()
      qeth_query_setadapterparms()
    qeth_l2_setup_netdev()
      qeth_l2_request_initial_mac()
        qeth_query_setadapterparms()
and we only reach qeth_l2_request_initial_mac() if the first adapterparms query succeeds. Hence removing the second query results in no loss of functionality. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Julian Wiedmann
After transmitting an skb via send_packet[_fast](), the statistics code accesses the skb once more to account for transmitted page frags. This has a (theoretical?) race against the TX completion: if the TX completion is processed and frees the skb before hard_start_xmit() gets to the statistics part, we access random memory. Fix this by caching the number of page frags before the skb is transmitted. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
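A minimal sketch of the pattern described above (not the qeth code itself): read skb_shinfo(skb)->nr_frags before the skb is handed to the hardware, because the TX completion path may free the skb before the statistics code runs. The driver structure and the fragment counter are illustrative assumptions.

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static unsigned long example_tx_frags;	/* illustrative per-device counter */

static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
{
	unsigned int nr_frags = skb_shinfo(skb)->nr_frags; /* cached before xmit */

	/*
	 * Hand the skb to the hardware queue here. From this point on the TX
	 * completion may free the skb, so it must not be dereferenced again.
	 */
	dev_kfree_skb_any(skb);		/* stands in for the actual transmit */

	dev->stats.tx_packets++;
	example_tx_frags += nr_frags;	/* safe: uses the cached value */
	return NETDEV_TX_OK;
}
```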
-
- 08 Aug 2017, 1 commit
-
-
Authored by Julian Wiedmann
On L3, the qeth_hdr struct needs to be filled with the next-hop IP address. The current code accesses rtable->rt_gateway without checking that rtable is a valid address. The accidental access to a lowcore area results in a random next-hop address in the qeth_hdr. rtable (or more precisely, skb_dst(skb)) can be NULL in rare cases (for instance together with AF_PACKET sockets). This patch adds the missing NULL-ptr checks. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Fixes: 87e7597b ("qeth: Move away from using neighbour entries in qeth_l3_fill_header()") Signed-off-by: David S. Miller <davem@davemloft.net>
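A minimal sketch of the guard described above (not the actual qeth helper): skb_dst()/skb_rtable() may return NULL, e.g. for AF_PACKET sockets, so it must be checked before rt_gateway is read. The function name and the fallback parameter are illustrative.

```c
#include <net/route.h>

static __be32 example_next_hop_v4(struct sk_buff *skb, __be32 hdr_daddr)
{
	struct rtable *rt = skb_rtable(skb);	/* NULL if no dst is attached */

	if (rt && rt->rt_gateway)
		return rt->rt_gateway;

	/* fall back to the destination address taken from the packet itself */
	return hdr_daddr;
}
```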
-
- 13 Jul 2017, 3 commits
-
-
Authored by Dong Jia Shi
When a channel path is identified as the reporting source code (RSC) of a CRW and the channel subsystem reports 'initialized' (CRW_ERC_INIT) as the error recovery code (ERC), this indicates a "path has come" event. Let's handle this case in chp_process_crw(). Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Joe Perches
Make the code like the rest of the kernel. Link: http://lkml.kernel.org/r/3f980cd89084ae09716353aba3171e4b3815e690.1499284835.git.joe@perches.com Signed-off-by: Joe Perches <joe@perches.com> Cc: Julian Wiedmann <jwi@linux.vnet.ibm.com> Cc: Ursula Braun <ubraun@linux.vnet.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Authored by Michal Hocko
__GFP_REPEAT was designed to allow retry-but-eventually-fail semantics in the page allocator. This has been true, but only for allocation requests larger than PAGE_ALLOC_COSTLY_ORDER; it has always been ignored for smaller sizes. This is a bit unfortunate because there is no way to express the same semantic for those requests, and they are considered too important to fail, so they might end up looping in the page allocator forever, similarly to GFP_NOFAIL requests.

Now that the whole tree has been cleaned up and accidental or misled usage of the __GFP_REPEAT flag has been removed for !costly requests, we can give the original flag a better name and, more importantly, a more useful semantic. Let's rename it to __GFP_RETRY_MAYFAIL, which tells the user that the allocator will try really hard but there is no promise of success. This works independent of the order and overrides the default allocator behavior. Page allocator users have several levels of guarantee vs. cost options (take GFP_KERNEL as an example):
- GFP_KERNEL & ~__GFP_RECLAIM - optimistic allocation without _any_ attempt to free memory at all. The most lightweight mode, which doesn't even kick the background reclaim. Should be used carefully because it might deplete the memory and the next user might hit the more aggressive reclaim.
- GFP_KERNEL & ~__GFP_DIRECT_RECLAIM (or GFP_NOWAIT) - optimistic allocation without any attempt to free memory from the current context, but it can wake kswapd to reclaim memory if the zone is below the low watermark. Can be used from either atomic contexts or when the request is a performance optimization and there is another fallback for a slow path.
- (GFP_KERNEL|__GFP_HIGH) & ~__GFP_DIRECT_RECLAIM (aka GFP_ATOMIC) - non-sleeping allocation with an expensive fallback so it can access some portion of memory reserves. Usually used from interrupt/bh context with an expensive slow path fallback.
- GFP_KERNEL - both background and direct reclaim are allowed and the _default_ page allocator behavior is used. That means that !costly allocation requests are basically nofail, but there is no guarantee of that behavior, so failures have to be checked properly by callers (e.g. an OOM killer victim is currently allowed to fail).
- GFP_KERNEL | __GFP_NORETRY - overrides the default allocator behavior, and all allocation requests fail early rather than cause disruptive reclaim (one round of reclaim in this implementation). The OOM killer is not invoked.
- GFP_KERNEL | __GFP_RETRY_MAYFAIL - overrides the default allocator behavior, and all allocation requests try really hard. The request fails if the reclaim cannot make any progress. The OOM killer won't be triggered.
- GFP_KERNEL | __GFP_NOFAIL - overrides the default allocator behavior, and all allocation requests will loop endlessly until they succeed. This might be really dangerous, especially for larger orders.

Existing users of __GFP_REPEAT are changed to __GFP_RETRY_MAYFAIL because they already had this semantic. No new users are added. __alloc_pages_slowpath is changed to bail out for __GFP_RETRY_MAYFAIL if there is no progress and we have already passed the OOM point. This means that all the reclaim opportunities have been exhausted except the most disruptive one (the OOM killer), and a user-defined fallback behavior is more sensible than retrying forever in the page allocator.

[akpm@linux-foundation.org: fix arch/sparc/kernel/mdesc.c] [mhocko@suse.com: semantic fix] Link: http://lkml.kernel.org/r/20170626123847.GM11534@dhcp22.suse.cz [mhocko@kernel.org: address other thing spotted by Vlastimil] Link: http://lkml.kernel.org/r/20170626124233.GN11534@dhcp22.suse.cz Link: http://lkml.kernel.org/r/20170623085345.11304-3-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Alex Belits <alex.belits@cavium.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Christoph Hellwig <hch@infradead.org> Cc: Darrick J. Wong <darrick.wong@oracle.com> Cc: David Daney <david.daney@cavium.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Mel Gorman <mgorman@suse.de> Cc: NeilBrown <neilb@suse.com> Cc: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
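A hedged usage sketch of the renamed flag: try hard to satisfy a large allocation, but accept failure instead of invoking the OOM killer or looping forever. The function name and the fallback strategy are illustrative, not from the patch.

```c
#include <linux/slab.h>
#include <linux/printk.h>

static void *alloc_big_buffer(size_t size)
{
	void *buf;

	/* retries hard, but never triggers the OOM killer and may return NULL */
	buf = kmalloc(size, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
	if (!buf)
		pr_warn("big buffer allocation failed, caller should fall back\n");

	return buf;	/* caller must handle NULL */
}
```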
-
- 05 Jul 2017, 7 commits
-
-
Authored by Sebastian Ott
Fix this set-but-not-used warning: drivers/s390/cio/vfio_ccw_drv.c: In function 'vfio_ccw_sch_io_todo': drivers/s390/cio/vfio_ccw_drv.c:72:21: warning: variable 'sch' set but not used [-Wunused-but-set-variable] struct subchannel *sch; ^ Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com> Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Sebastian Ott
Fix these set-but-not-used warnings: drivers/s390/block/dasd.c:3933:6: warning: variable 'rc' set but not used [-Wunused-but-set-variable] drivers/s390/block/dasd_alias.c:757:6: warning: variable 'rc' set but not used [-Wunused-but-set-variable] In addition, remove a test of whether an unsigned value is < 0: drivers/s390/block/dasd_devmap.c:153:11: warning: comparison of unsigned expression < 0 is always false [-Wtype-limits] Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
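A small illustration (not the dasd code) of the -Wtype-limits warning mentioned above: comparing an unsigned value with < 0 is always false, so such a check is dead code and can simply be removed. The parsing function is an assumption for the sketch.

```c
#include <linux/kernel.h>
#include <linux/errno.h>

static int example_parse_index(const char *buf, unsigned int *index)
{
	if (kstrtouint(buf, 10, index))
		return -EINVAL;

	/* if (*index < 0) return -EINVAL;  -- always false for unsigned, dropped */

	return 0;
}
```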
-
Authored by Harald Freudenberger
On some debug feature invocations the newline was missing. Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Jan Höppner
The Prefix CCW is not mandatory and raw I/O can also be issued without it. Check whether the Prefix CCW is supported and, if not, use the combination of Define Extent and Locate Record Extended instead. While at it, sort the variable declarations, replace the gotos with early exits, and remove an irrelevant error check at the end. Also, remove the XRC check as it is not relevant for raw I/O. Reviewed-by: Stefan Haberland <sth@linux.vnet.ibm.com> Signed-off-by: Jan Höppner <hoeppner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Jan Höppner
Rename dasd_raw_build_cp() to dasd_eckd_build_cp_raw() to fit the naming scheme. Reviewed-by: Stefan Haberland <sth@linux.vnet.ibm.com> Signed-off-by: Jan Höppner <hoeppner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Jan Höppner
We already have define_extent(), which prepares the necessary data for the Define Extent CCW. The exact same thing is done in prefix_LRE(). Remove the duplicate code and move commands that were only used in combination with the Prefix command to define_extent(). One of these commands needs the blocksize to be specified, so add a blksize parameter to define_extent() to account for that. In addition, the check_XRC() function can be made more generic. Do this and remove the Prefix-specific check_XRC_on_prefix() function. Furthermore, prefix_LRE() uses fill_LRE_data() to prepare Locate Record Extended data. Rename that function to fit the scheme better and make it usable outside of the Prefix context by adding the corresponding CCW command. Reviewed-by: Stefan Haberland <sth@linux.vnet.ibm.com> Signed-off-by: Jan Höppner <hoeppner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Stephen Rothwell
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 03 Jul 2017, 1 commit
-
-
Authored by David S. Miller
Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 Jul 2017, 1 commit
-
-
Authored by Elena Reshetova
The refcount_t type and its corresponding API should be used instead of atomic_t when the variable is used as a reference counter. This helps avoid accidental refcounter overflows that might lead to use-after-free situations. Signed-off-by: Elena Reshetova <elena.reshetova@intel.com> Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: David Windsor <dwindsor@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
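A generic sketch of the conversion pattern described above (the struct and function names are illustrative, not the code touched by the patch): replace an atomic_t used as a reference counter with refcount_t, which saturates instead of overflowing.

```c
#include <linux/refcount.h>
#include <linux/slab.h>

struct example_obj {
	refcount_t refcnt;			/* was: atomic_t refcnt; */
};

static struct example_obj *example_obj_new(void)
{
	struct example_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (obj)
		refcount_set(&obj->refcnt, 1);	/* was: atomic_set(&obj->refcnt, 1); */
	return obj;
}

static void example_obj_get(struct example_obj *obj)
{
	refcount_inc(&obj->refcnt);		/* was: atomic_inc(); warns on overflow */
}

static void example_obj_put(struct example_obj *obj)
{
	if (refcount_dec_and_test(&obj->refcnt))	/* was: atomic_dec_and_test() */
		kfree(obj);
}
```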
-
- 28 Jun 2017, 2 commits
-
-
Authored by Jan Höppner
If a device is offline it can still be set to read-only via its bus id through sysfs. In that case only the read-only feature flag for the ccw_device is set. If the device is online, the corresponding block device needs to be set to read-only as well (via set_disk_ro()). The check whether there is a device to do so, however, happens after the feature flag was set. This leads to an unnecessary "no such device" error in the offline case. This bug was introduced by commit 7571cb1c8e3cc ("s390/dasd: Make use of dasd_set_feature() more often"). Fix this by simply returning count if no device is available. Fixes: 7571cb1c8e3cc ("s390/dasd: Make use of dasd_set_feature() more often") Reviewed-by: Stefan Haberland <sth@linux.vnet.ibm.com> Signed-off-by: Jan Höppner <hoeppner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Dan Williams
Require all dax drivers to register a ->copy_from_iter() operation, so that it is clear which dax_operations are optional and which must be implemented for filesystem-dax to operate. Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 21 Jun 2017, 3 commits
-
-
Authored by Julian Wiedmann
When an s390 guest runs on a z/VM host that's part of an SSI cluster, it can be migrated to a different host. In this case, the MAC address it originally obtained on the old host may be re-assigned to a new guest, which would result in address conflicts between the two guests. When running as a z/VM guest, use the diag26c MAC Service to obtain a hypervisor-managed MAC address. The MAC Service is SSI-aware and won't re-assign the address after the guest is migrated to a new host. This patch adds support for the z/VM MAC Service on L2 devices. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Julian Wiedmann
There are two spots in qeth_send_packet() where we don't accurately account for transmitted packing buffers in qeth's performance statistics: 1) when flushing the current buffer due to insufficient size and the next buffer is not EMPTY, we need to account for that flushed buffer; 2) when synchronizing with the TX completion code, we reset flush_count and thus forget to account for any previously flushed buffers. Reported-by: Nils Hoppmann <niho@de.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Authored by Kittipon Meesompop
Add IPA return codes for Bridgeport (HiperSockets and OSA) according to the system level design. Signed-off-by: Kittipon Meesompop <kmeesomp@linux.vnet.ibm.com> Reviewed-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 Jun 2017, 1 commit
-
-
Authored by NeilBrown
blk_queue_split() is always called with the last arg being q->bio_split, where 'q' is the first arg. Also, blk_queue_split() sometimes uses the passed-in 'bs' and sometimes uses q->bio_split. This is inconsistent and unnecessary. Remove the last arg and always use q->bio_split inside blk_queue_split(). Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ming Lei <ming.lei@redhat.com> Credit-to: Javier González <jg@lightnvm.io> (Noticed that lightnvm was missed) Reviewed-by: Javier González <javier@cnexlabs.com> Tested-by: Javier González <javier@cnexlabs.com> Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
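A hedged sketch of the call-site change for a bio-based driver's make_request function: after this patch blk_queue_split() takes no bio_set argument and always uses q->bio_split internally. The driver function and the placeholder completion are assumptions for illustration.

```c
#include <linux/blkdev.h>

static blk_qc_t example_make_request(struct request_queue *q, struct bio *bio)
{
	/* was: blk_queue_split(q, &bio, q->bio_split); */
	blk_queue_split(q, &bio);

	/* ... submit the (possibly split) bio to the hardware here ... */
	bio_endio(bio);		/* placeholder completion for the sketch */
	return BLK_QC_T_NONE;
}
```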
-
- 16 Jun 2017, 2 commits
-
-
Authored by Johannes Berg
It seems like a historic accident that these return unsigned char *, and in many places that means casts are required, more often than not. Make these functions return void * and remove all the casts across the tree, adding a (u8 *) cast only where the unsigned char pointer was used directly, all done with the following spatch: @@ expression SKB, LEN; typedef u8; identifier fn = { skb_push, __skb_push, skb_push_rcsum }; @@ - *(fn(SKB, LEN)) + *(u8 *)fn(SKB, LEN) @@ expression E, SKB, LEN; identifier fn = { skb_push, __skb_push, skb_push_rcsum }; type T; @@ - E = ((T *)(fn(SKB, LEN))) + E = fn(SKB, LEN) @@ expression SKB, LEN; identifier fn = { skb_push, __skb_push, skb_push_rcsum }; @@ - fn(SKB, LEN)[0] + *(u8 *)fn(SKB, LEN) Note that the last part there converts from push(...)[0] to the more idiomatic *(u8 *)push(...). Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
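A before/after sketch of what the tree-wide change looks like at a typical call site: with skb_push() returning void *, no cast is needed when assigning to a typed header pointer. The header struct and function are illustrative; the caller is assumed to have reserved enough headroom.

```c
#include <linux/skbuff.h>

struct example_hdr {
	__be16 proto;
	__u8 flags;
	__u8 pad;
};

static void example_push_header(struct sk_buff *skb, __be16 proto)
{
	/* was: struct example_hdr *hdr = (struct example_hdr *)skb_push(skb, sizeof(*hdr)); */
	struct example_hdr *hdr = skb_push(skb, sizeof(*hdr));

	hdr->proto = proto;
	hdr->flags = 0;
	hdr->pad = 0;
}
```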
-
Authored by Johannes Berg
A common pattern with skb_put() is to just want to memcpy() some data into the new space; introduce skb_put_data() for this. An spatch similar to the one for skb_put_zero() converts many of the places using it: @@ identifier p, p2; expression len, skb, data; type t, t2; @@ ( -p = skb_put(skb, len); +p = skb_put_data(skb, data, len); | -p = (t)skb_put(skb, len); +p = skb_put_data(skb, data, len); ) ( p2 = (t2)p; -memcpy(p2, data, len); | -memcpy(p, data, len); ) @@ type t, t2; identifier p, p2; expression skb, data; @@ t *p; ... ( -p = skb_put(skb, sizeof(t)); +p = skb_put_data(skb, data, sizeof(t)); | -p = (t *)skb_put(skb, sizeof(t)); +p = skb_put_data(skb, data, sizeof(t)); ) ( p2 = (t2)p; -memcpy(p2, data, sizeof(*p)); | -memcpy(p, data, sizeof(*p)); ) @@ expression skb, len, data; @@ -memcpy(skb_put(skb, len), data, len); +skb_put_data(skb, data, len); (again, manually post-processed to retain some comments) Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
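A before/after sketch of the new helper at a typical call site: the memcpy-into-skb_put pattern collapses into one skb_put_data() call. The wrapper function and its tailroom check are illustrative.

```c
#include <linux/skbuff.h>

static int example_append_payload(struct sk_buff *skb, const void *data,
				  unsigned int len)
{
	if (skb_tailroom(skb) < len)
		return -EMSGSIZE;

	/* was: memcpy(skb_put(skb, len), data, len); */
	skb_put_data(skb, data, len);
	return 0;
}
```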
-
- 12 Jun 2017, 12 commits
-
-
Authored by Sebastian Ott
The sysfs attributes implemented by the vfio_ccw driver are also implemented by the io_subchannel driver. Move these into a device_type which is set by the css bus. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Stefan Haberland
The safe offline processing may hang forever because it waits for I/O which cannot be started, since the offline flag prevents new I/O from being started. Allow I/O to be started during safe offline processing because in this special case we take care that the queues are empty before throwing away the device. Signed-off-by: Stefan Haberland <sth@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Stefan Haberland
The safe offline processing needs, just like the normal offline processing, to be locked against multiple parallel executions. But it should be able to be overtaken by a normal offline processing, to make sure that the device does not wait forever for outstanding I/O if the user wants to. Unfortunately, parallel processing of safe offline and normal offline might lead to a race where both threads report successful execution to the CIO layer, which in turn tries to deregister the kobject of the device twice. This leads to a "refcount_t: underflow; use-after-free." error, and the device cannot be set online again afterwards without a reboot. Correct the locking of the safe offline processing by doing the following:
- Use the cdev lock to secure all set and test operations on the device flags.
- Two safe offline processes are locked against each other using the DASD_FLAG_SAFE_OFFLINE and DASD_FLAG_SAFE_OFFLINE_RUNNING device flags. The differentiation between offline triggered and offline running is needed since the normal offline attribute is owned by CIO and we have to pass over control in between.
- The dasd_generic_set_offline process handles the offline processing. It is locked against parallel execution using DASD_FLAG_OFFLINE.
- Only a running safe offline should be able to be overtaken by a single normal offline. This is ensured by clearing the DASD_FLAG_SAFE_OFFLINE_RUNNING flag when a normal offline overtakes, so this can only happen once.
- The safe offline just aborts in this case, doing nothing, and the normal offline processing finishes as usual.
Signed-off-by: Stefan Haberland <sth@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Jan Höppner
We have two flags, DASD_FLAG_DEVICE_RO and DASD_FEATURE_READONLY, that tell us whether a device is read-only. DASD_FLAG_DEVICE_RO is set when a device is attached as read-only to z/VM, and DASD_FEATURE_READONLY is set when either the corresponding kernel parameter is configured or the read-only state is changed via sysfs. This is valuable information in either case. However, only the feature flag is currently checked when we display the current state. Fix this by checking both flags. Reviewed-by: Stefan Haberland <sth@linux.vnet.ibm.com> Signed-off-by: Jan Höppner <hoeppner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Harald Freudenberger
Added some dbf debug messages on failure of the most important ioctl calls. These messages are only enabled with dbf level 6 (debug) and so do not affect the normal operating mode, which uses level 3 (errors and higher). Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Harald Freudenberger
When an out-of-range domain parameter was given, the init function returned -EINVAL and the driver was not operational. As the driver is statically built into the kernel and is able to work with multiple domains anyway, the init function should continue. Now the user has a chance to write a new default domain value via the sysfs attribute file. Also added two new dbf debug messages related to the domain value handling. Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Harald Freudenberger
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Heiko Carstens
The zcrypt code contains a couple of functions which receive a "big_endian" argument. All callers naturally pass "1" for big endian, since s390 is big endian. Therefore get rid of this argument and also get rid of the cpu_to_le()/cpu_to_be() calls. This way we get rid of a couple of sparse warnings: drivers/s390/crypto/zcrypt_cca_key.h:255:34: warning: incorrect type in assignment (different base types) expected unsigned short [unsigned] ulen got restricted __be16 [usertype] <noident> Cc: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Heiko Carstens
Add missing __user annotations to get rid of a couple of sparse warnings. All callers actually pass kernel pointers instead of user space pointers; however, the pointers are used within KERNEL_DS, so everything is fine. Corresponding sparse warning: drivers/s390/crypto/pkey_api.c:181:41: warning: incorrect type in assignment (different address spaces) expected char [noderef] <asn:1>*request_control_blk_addr got void *<noident> Cc: Harald Freudenberger <freude@linux.vnet.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Jan Höppner
Dynamic stack allocations are considered bad. Get rid of this one occurrence and use kstrdup() instead. Also, set the return codes so that we have only one exit point where we can call kfree(). Signed-off-by: Jan Höppner <hoeppner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
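A minimal sketch of the pattern described above (the parsing function is illustrative, not the dasd code): duplicate the string on the heap with kstrdup() instead of a variable-length array on the stack, and funnel all paths through a single exit so kfree() is called exactly once.

```c
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/errno.h>

static int example_parse(const char *buf)
{
	char *str;
	int rc = 0;

	str = kstrdup(buf, GFP_KERNEL);	/* was: char str[strlen(buf) + 1]; on the stack */
	if (!str)
		return -ENOMEM;

	if (!strchr(str, '='))		/* illustrative validation step */
		rc = -EINVAL;

	/* ... further parsing that may also set rc ... */

	kfree(str);			/* single exit point frees the copy */
	return rc;
}
```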
-
Authored by Sebastian Ott
Exploit multiple hardware contexts (queues) that can process requests in parallel. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Authored by Sebastian Ott
Drop the tasklet that was used to complete requests in favor of block layer helpers that finish the I/O on the CPU that initiated it. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-