- 23 Aug 2009, 10 commits
-
-
Committed by Gustavo F. Padovan
When L2CAP loses an I-frame we send an SREJ frame to the transmitter side requesting the lost packet. This patch implements all Recv I-frame events in the SREJ_SENT state table except the ones that deal with SendRej (the REJ exception at the receiver side is not yet implemented). Signed-off-by: Gustavo F. Padovan <gustavo@las.ic.unicamp.br> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
Implement the CRC16 check for L2CAP packets. FCS is used by Streaming Mode and Enhanced Retransmission Mode and is an extra check of the packet content. Using CRC16 is the default; L2CAP skips the FCS only when both sides send a "No FCS" request. Initially based on a patch from Nathan Holstein <nathan@lampreynetworks.com> Signed-off-by: Gustavo F. Padovan <gustavo@las.ic.unicamp.br> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
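As a rough illustration of the FCS described above, here is a small self-contained C sketch of a CRC-16 with the reflected polynomial 0xA001 and initial value 0, the parameters the kernel's crc16() helper uses; the sample PDU bytes are made up for illustration and this is not the l2cap_core.c code.

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Bit-by-bit CRC-16, reflected polynomial 0xA001 (x^16 + x^15 + x^2 + 1). */
static uint16_t fcs_crc16(uint16_t crc, const uint8_t *buf, size_t len)
{
        size_t i;
        int bit;

        for (i = 0; i < len; i++) {
                crc ^= buf[i];
                for (bit = 0; bit < 8; bit++)
                        crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
        }
        return crc;
}

int main(void)
{
        /* Hypothetical PDU bytes: length, CID and a control field. */
        uint8_t pdu[] = { 0x0e, 0x00, 0x40, 0x00, 0x02, 0x00 };
        uint16_t fcs = fcs_crc16(0, pdu, sizeof(pdu));

        /* The FCS would be appended little-endian at the end of the PDU. */
        printf("FCS: 0x%04x\n", fcs);
        return 0;
}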
-
Committed by Gustavo F. Padovan
L2CAP uses retransmission and monitor timers to query the other side about unacked I-frames. After sending each I-frame we (re)start the retransmission timer. If it expires, we start a monitor timer that sends an S-frame with the P bit set and waits for an S-frame with the F bit set. If the monitor timer expires, we try again, up to a maximum of L2CAP_DEFAULT_MAX_TX attempts. Signed-off-by: Gustavo F. Padovan <gustavo@las.ic.unicamp.br> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
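A minimal, self-contained sketch of that retry flow (hypothetical helpers, not the kernel timer code): a retransmission timeout triggers a poll, and each monitor timeout re-polls until L2CAP_DEFAULT_MAX_TX is reached.

#include <stdio.h>

#define L2CAP_DEFAULT_MAX_TX 3

struct chan {
        int retry_count;
};

static void send_poll_sframe(struct chan *c)
{
        /* Would transmit an RR S-frame with the P (poll) bit set and
         * (re)start the monitor timer. */
        printf("poll %d/%d\n", c->retry_count, L2CAP_DEFAULT_MAX_TX);
}

/* Retransmission timer expired: no ack arrived in time, ask the peer. */
static void retrans_timeout(struct chan *c)
{
        c->retry_count = 1;
        send_poll_sframe(c);
}

/* Monitor timer expired without an F-bit reply: poll again or give up. */
static void monitor_timeout(struct chan *c)
{
        if (c->retry_count >= L2CAP_DEFAULT_MAX_TX) {
                printf("giving up, disconnecting channel\n");
                return;
        }
        c->retry_count++;
        send_poll_sframe(c);
}

int main(void)
{
        struct chan c = { .retry_count = 0 };

        retrans_timeout(&c);    /* the retransmission timer fires */
        monitor_timeout(&c);    /* the monitor timer fires repeatedly... */
        monitor_timeout(&c);
        monitor_timeout(&c);    /* ...until MAX_TX is reached */
        return 0;
}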
-
Committed by Gustavo F. Padovan
When receiving an I-frame with an unexpected txSeq, the receiver side starts the recovery procedure by sending a REJ S-frame to the transmitter side so the transmitter can re-send the lost I-frame. This patch only adds basic support for retransmission; it doesn't mean that ERTM now has full support for packet retransmission. Signed-off-by: Gustavo F. Padovan <gustavo@las.ic.unicamp.br> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
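A compact sketch of the receive-side check described above; the receiver struct and send_rej() helper are hypothetical stand-ins, not the kernel code.

#include <stdio.h>

struct receiver {
        unsigned int expected_tx_seq;   /* ERTM sequence numbers are modulo 64 */
};

static void send_rej(unsigned int reqseq)
{
        /* Would transmit a REJ S-frame asking the peer to retransmit
         * starting from reqseq. */
        printf("send REJ, ReqSeq=%u\n", reqseq);
}

static void recv_iframe(struct receiver *rx, unsigned int tx_seq)
{
        if (tx_seq != rx->expected_tx_seq) {
                /* Unexpected TxSeq: enter recovery so the transmitter
                 * re-sends the lost I-frame. */
                send_rej(rx->expected_tx_seq);
                return;
        }
        rx->expected_tx_seq = (rx->expected_tx_seq + 1) % 64;
        printf("accepted I-frame TxSeq=%u\n", tx_seq);
}

int main(void)
{
        struct receiver rx = { .expected_tx_seq = 0 };

        recv_iframe(&rx, 0);    /* in order */
        recv_iframe(&rx, 2);    /* frame 1 was lost -> REJ with ReqSeq=1 */
        return 0;
}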
-
Committed by Gustavo F. Padovan
ERTM should use Segmentation and Reassembly to break an SDU down into many PDUs when sending data to the other side. On the sending path we queue all 'segments' until segmentation is finished and only then add them to the queue for sending. On the receiving path we create a new SKB with the reassembled SDU. Initially based on a patch from Nathan Holstein <nathan@lampreynetworks.com> Signed-off-by: Gustavo F. Padovan <gustavo@las.ic.unicamp.br> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
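A self-contained sketch of the segmentation step: chop one SDU into MPS-sized PDUs and tag each with a SAR marker (unsegmented / start / continuation / end). The constants, MPS value and output format are illustrative, not the kernel structs.

#include <stdio.h>
#include <stddef.h>

enum sar { SAR_UNSEGMENTED, SAR_START, SAR_CONTINUE, SAR_END };

static void emit_pdu(enum sar sar, size_t off, size_t len)
{
        static const char *names[] = { "unsegmented", "start", "continue", "end" };

        /* In the kernel each PDU would be an skb queued until the whole
         * SDU is segmented, then moved onto the channel's send queue. */
        printf("PDU sar=%-11s off=%zu len=%zu\n", names[sar], off, len);
}

static void segment_sdu(size_t sdu_len, size_t mps)
{
        size_t off = 0;

        if (sdu_len <= mps) {
                emit_pdu(SAR_UNSEGMENTED, 0, sdu_len);
                return;
        }
        while (off < sdu_len) {
                size_t chunk = sdu_len - off < mps ? sdu_len - off : mps;
                enum sar sar = off == 0 ? SAR_START :
                               off + chunk == sdu_len ? SAR_END : SAR_CONTINUE;
                emit_pdu(sar, off, chunk);
                off += chunk;
        }
}

int main(void)
{
        segment_sdu(1500, 672);   /* e.g. a 1500-byte SDU with a 672-byte MPS */
        return 0;
}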
-
Committed by Gustavo F. Padovan
This patch adds support for ERTM transfers, without retransmission, with a txWindow of up to 63 and with acknowledgement of received packets. Packets are now queued before calling l2cap_do_send(), so they may not be sent at the time we call l2cap_sock_sendmsg(); they will be sent asynchronously by later calls to l2cap_ertm_send(). In addition, if an error occurs when calling l2cap_do_send() we disconnect the channel. Initially based on a patch from Nathan Holstein <nathan@lampreynetworks.com> Signed-off-by: Gustavo F. Padovan <gustavo@las.ic.unicamp.br> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
Add support to config_req and config_rsp for configuring ERTM and Streaming mode. If the remote device specifies ERTM or Streaming mode, the same mode is proposed; otherwise ERTM or Basic mode is used. In the case of a state 2 device, the remote device should propose the same mode; if it does not, the channel gets disconnected. Signed-off-by: Gustavo F. Padovan <gustavo@las.ic.unicamp.br> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Marcel Holtmann
To enable Enhanced Retransmission mode it needs to be set via a socket option. A different mode can be set on a socket, but on listen() and connect() the mode is checked, and ERTM is only allowed if it is enabled via the module parameter. Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
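A userspace sketch of requesting ERTM via the socket option, assuming the struct l2cap_options layout and the SOL_L2CAP / L2CAP_OPTIONS / L2CAP_MODE_ERTM definitions from the BlueZ <bluetooth/l2cap.h> headers of that era; whether ERTM is actually accepted still depends on the module parameter check described above. The socket itself would be created with socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP) before calling this.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/l2cap.h>

/* Switch an L2CAP socket to ERTM before listen()/connect(). */
int request_ertm(int sk)
{
        struct l2cap_options opts;
        socklen_t optlen = sizeof(opts);

        memset(&opts, 0, sizeof(opts));
        if (getsockopt(sk, SOL_L2CAP, L2CAP_OPTIONS, &opts, &optlen) < 0) {
                perror("getsockopt");
                return -1;
        }

        opts.mode = L2CAP_MODE_ERTM;    /* keep the MTUs and flush timeout as-is */

        if (setsockopt(sk, SOL_L2CAP, L2CAP_OPTIONS, &opts, sizeof(opts)) < 0) {
                perror("setsockopt");
                return -1;
        }
        return 0;
}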
-
Committed by Thomas Gleixner
hdev->req_lock is used as a mutex, so make it a mutex. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Marcel Holtmann
The device model itself has no real usable reference counting at the moment and this causes problems if parents are deleted before their children. The device model handles the memory details of this correctly, but the uevent order is not consistent. This causes various problems for systems like HAL or even X. So until device_put() does a proper cleanup, the device for a Bluetooth connection will be protected with extra reference counting to ensure the correct order of uevents when connections are terminated. This is not an automatic feature: higher Bluetooth layers like HIDP or BNEP should grab this new reference to ensure that their uevents are sent before the ones from the parent device. Based on a report by Brian Rogers <brian@xyzw.org> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
- 20 Aug 2009, 3 commits
-
-
Committed by Johannes Berg
My patch "cfg80211: fix deadlock" broke the code it was supposed to fix, the scan request checking. But it's not trivial to put it back the way it was, since the original patch had a deadlock. Now do it in a completely new way: queue the check off to a work struct, where we can freely lock. That has some more complications, like needing to wait for it to be done before the wiphy/rdev can be destroyed, so some code is required to handle that. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Johannes Berg
All but two drivers have now stopped using the two deprecated members radio_enabled and beacon_int, so it's about time to remove them for good. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Acked-by: Kalle Valo <kalle.valo@iki.fi> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Johannes Berg
Over time, a whole bunch of drivers have come up with their own schemes to delay the configure_filter operation to a workqueue. To simplify things, allow configure_filter to sleep and add a new prepare_multicast callback, implemented by drivers that need the multicast address list. This new callback must be atomic, but most drivers either don't care or just calculate a hash, which can be done atomically and then uploaded to the hardware non-atomically. A cursory look suggests that at76c50x-usb, ar9170, mwl8k (which is actually very broken now), rt2x00, wl1251, wl1271 and zd1211 should make use of this new capability. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
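A driver-agnostic sketch of that split; the mydrv_*() names and the toy hash are made up and the real mac80211 callback signatures differ. The atomic half only folds addresses into a hash value, the sleeping half pushes it to the hardware.

#include <linux/types.h>

struct mydrv_priv {
        u64 mc_hash;    /* multicast hash bitmap mirrored to the hardware */
};

/* Atomic half: may run where sleeping is forbidden, so only compute a value. */
static u64 mydrv_prepare_multicast(const u8 (*addrs)[6], int count)
{
        u64 hash = 0;
        int i;

        for (i = 0; i < count; i++)
                hash |= 1ULL << (addrs[i][5] & 0x3f);   /* toy hash function */
        return hash;
}

/* Sleeping half: free to do register writes, USB transfers or firmware commands. */
static void mydrv_configure_filter(struct mydrv_priv *priv, u64 multicast)
{
        priv->mc_hash = multicast;
        /* mydrv_write_mc_hash_to_hw(priv) would go here and may block */
}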
-
- 19 Aug 2009, 1 commit
-
-
Committed by Ben Hutchings
I misunderstood the meaning of __bitwise. In practice it makes sparse warn about every use of mmds, which is certainly not what we want. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 Aug 2009, 1 commit
-
-
Committed by Yi Zou
Add NETIF_F_FCOE_MTU to indicate that the NIC can support a secondary MTU for converged traffic of LAN and Fiber Channel over Ethernet (FCoE). The MTU for FCoE is 2158 = 14 (FCoE header) + 24 (FC header) + 2112 (FC max payload) + 4 (FC CRC) + 4 (FCoE trailer). Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
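The arithmetic, spelled out as a tiny self-contained program; the enum names are illustrative, not the kernel's FCoE constants.

#include <stdio.h>

enum {
        FCOE_HDR   = 14,        /* FCoE header */
        FC_HDR     = 24,        /* FC header */
        FC_PAYLOAD = 2112,      /* FC max payload */
        FC_CRC     = 4,         /* FC CRC */
        FCOE_TRL   = 4,         /* FCoE trailer */
        FCOE_MTU   = FCOE_HDR + FC_HDR + FC_PAYLOAD + FC_CRC + FCOE_TRL,
};

int main(void)
{
        printf("FCoE MTU = %d\n", FCOE_MTU);    /* prints 2158 */
        return 0;
}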
-
- 14 Aug 2009, 11 commits
-
-
Committed by Johannes Berg
Sometimes drivers might have a good reason to override the PS default, like iwlwifi right now, where it affects RX performance significantly at this point. This allows them to override the default, if desired, in a way that still lets users change it according to their own trade-off choices rather than the driver's, as would happen if the driver simply disabled PS completely. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Jussi Kivilinna
Add rx queue pausing to usbnet. This is needed by rndis_wlan so that it can control the rx queue and prevent received packets from being forwarded before rndis_wlan receives and handles the 'media connect' indication. Without this, establishing WPA connections is hard and fails often. [v2] - removed unneeded use of skb_clone Cc: David Brownell <dbrownell@users.sourceforge.net> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Gábor Stefanik
Also add a "SPEX32" macro for extracting 32-bit SPROM variables. Signed-off-by: Gábor Stefanik <netrolller.3d@gmail.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Pat Erley
This removes the max_bandwidth attribute. It is only ever written to, and is duplicated by max_bandwidth_khz in the regulatory code. Signed-off-by: Pat Erley <pat-lkml@erley.org> Acked-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Johannes Berg
The memory layout for scan requests was rather wrong: we put the scan SSIDs before the channels, which could lead to the channel pointers being unaligned in memory. It turns out that using a pointer to the channel array isn't necessary anyway, since we can embed a zero-length array into the struct. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
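A standalone illustration of the layout change (not the real cfg80211_scan_request definition): with a trailing flexible array there is a single allocation and no separately stored channel pointer to misalign.

#include <stdio.h>
#include <stdlib.h>

struct channel { int hw_value; };

struct scan_request_like {
        int n_ssids;
        int n_channels;
        struct channel *channels[];     /* flexible array member at the end */
};

int main(void)
{
        int n = 4, i;
        struct scan_request_like *req;

        /* One allocation covers the struct and the channel pointer array. */
        req = calloc(1, sizeof(*req) + n * sizeof(req->channels[0]));
        if (!req)
                return 1;
        req->n_channels = n;
        for (i = 0; i < n; i++)
                req->channels[i] = NULL;        /* would point at wiphy channels */

        printf("one allocation, %d channel slots\n", req->n_channels);
        free(req);
        return 0;
}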
-
Committed by Johannes Berg
If we have a lot of frames to transmit at once, for instance with fragmentation, it can be an optimisation to only tell the DMA engine about them on the last fragment/frame to avoid banging the IO too much. This patch allows implementing such an optimisation by telling the driver when more frames can be expected. Currently this is used by mac80211 only on fragmented frames, but it could also be used in the future on other frames when the queue was full and there are multiple frames pending. Note that drivers need to be careful when using this flag: they need to kick their DMA engines not just when this flag is clear, but also when the queue gets full, so that progress can be made. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Johannes Berg
This documents what is required to implement the TX powersave filter properly with respect to handling hardware queues. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Johannes Berg
Add some more documentation, including an example, so that it's clearer what should be done for TX retries. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Johannes Berg
In order for userspace to be able to figure out whether it obtained a consistent snapshot of data when using netlink dumps, we need a generation number in each dump message that indicates whether the list has changed or not -- its value is arbitrary. This patch adds such a number to all dumps; this needs some mac80211 involvement to keep track of a generation number when adding/removing mesh paths or stations. The wiphy and netdev lists can be fully handled within cfg80211, of course, but generation numbers need to be stored there as well. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
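A toy model of the generation-number idea with made-up names (the real attribute lives in nl80211 dump messages): every list mutation bumps the counter, and each dumped entry carries the current value so userspace can detect a changed list mid-dump.

#include <stdio.h>

struct station_list {
        int generation;         /* arbitrary value, only changes matter */
        int n_stations;
};

static void station_add(struct station_list *l)
{
        l->n_stations++;
        l->generation++;        /* removal would bump it the same way */
}

static void dump_one(const struct station_list *l, int idx)
{
        /* In the kernel this would be a netlink attribute put into every
         * message of the dump; userspace compares the values it sees. */
        printf("station %d (generation %d)\n", idx, l->generation);
}

int main(void)
{
        struct station_list l = { 0, 0 };
        int i;

        station_add(&l);
        station_add(&l);
        for (i = 0; i < l.n_stations; i++)
                dump_one(&l, i);
        return 0;
}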
-
Committed by Johannes Berg
With the move of everything related to the SME from mac80211 to cfg80211, we lost the ability to send reassociation frames. This adds them back, but only for wireless extensions. With the userspace SME, it shall control assoc vs. reassoc (it already can do so with the nl80211 interface). I haven't touched the connect() implementation, so it is not possible to reassociate with the nl80211 connect primitive. I think that should be done with the NL80211_CMD_ROAM command, but we'll have to see how that can be handled in the future, especially with fullmac chips. This patch addresses only the immediate regression we had in mac80211, which previously sent reassoc. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Neil Horman
skb allocation / consumption tracer - add a consumption tracepoint. This patch adds a tracepoint to skb_copy_datagram_iovec, which is called each time a userspace process copies a frame from a socket receive queue to a user space buffer. It allows us to hook in and examine each sk_buff that the system receives on a per-socket basis, and can be used to compile a list of which skbs were received by which processes. Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
 include/trace/events/skb.h |   20 ++++++++++++++++++++
 net/core/datagram.c        |    3 +++
 2 files changed, 23 insertions(+)
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 Aug 2009, 7 commits
-
-
Committed by françois romieu
Synced with Realtek's 6.011.00 r8169 driver. Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Edward Hsu <edward_hsu@realtek.com.tw> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jaswinder Singh Rajput
Fix the following 'make includecheck' warning: include/linux/icmpv6.h: linux/skbuff.h is included more than once. Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dmitry Baryshkov
Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dmitry Baryshkov
Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dmitry Baryshkov
Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dmitry Baryshkov
Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dmitry Baryshkov
IEEE802154_SIOC_ADD_SLAVE was used to allocate 802.15.4 interfaces on top of the radio. It's not used anymore; drop it. Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 Aug 2009, 2 commits
-
-
Committed by Frederic Weisbecker
This patch implements the kernel-side support for ftrace event record sampling. A new counter sampling attribute is added: PERF_SAMPLE_TP_RECORD, which requests ftrace event record sampling. In this case, if a PERF_TYPE_TRACEPOINT counter is active and a tracepoint fires, we emit the tracepoint binary record to the perfcounter event buffer as a sample. Result, after setting the PERF_SAMPLE_TP_RECORD attribute from perf record:
 perf record -f -F 1 -a -e workqueue:workqueue_execution
 perf report -D
 0x21e18 [0x48]: event: 9
 .
 . ... raw event: size 72 bytes
 .  0000:  09 00 00 00 01 00 48 00 d0 c7 00 81 ff ff ff ff  ......H........
 .  0010:  0a 00 00 00 0a 00 00 00 21 00 00 00 00 00 00 00  ........!......
 .  0020:  2b 00 01 02 0a 00 00 00 0a 00 00 00 65 76 65 6e  +...........eve
 .  0030:  74 73 2f 31 00 00 00 00 00 00 00 00 0a 00 00 00  ts/1...........
 .  0040:  e0 b1 31 81 ff ff ff ff                          .......
 .
 0x21e18 [0x48]: PERF_EVENT_SAMPLE (IP, 1): 10: 0xffffffff8100c7d0 period: 33
The raw ftrace binary record starts at offset 0020. Translation:
 struct trace_entry {
        type          = 0x2b = 43;
        flags         = 1;
        preempt_count = 2;
        pid           = 0xa = 10;
        tgid          = 0xa = 10;
 }
 thread_comm = "events/1"
 thread_pid  = 0xa = 10;
 func        = 0xffffffff8131b1e0 = flush_to_ldisc()
What will come next?
- Userspace support ('perf trace'), 'flight data recorder' mode for perf trace, etc.
- The unconditional copy from the profiling callback brings some cost even if no such sampling is wanted; this needs to be fixed in the future. For that we need instant access to the perf counter attribute. This is a matter of a flag to add in struct ftrace_event.
- Take care of event recursivity! Don't ever try to record a lock event, for example; it seems some locking is used in the profiling fast path and leads to tracing recursivity. That will be fixed using raw spinlocks or recursivity protection.
- [...]
- Profit! :-)
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Li Zefan <lizf@cn.fujitsu.com> Cc: Tom Zanussi <tzanussi@gmail.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@gmx.de> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Pekka Enberg <penberg@cs.helsinki.fi> Cc: Gabriel Munteanu <eduard.munteanu@linux360.ro> Cc: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra
Adds a possible second part to the assign argument of TP_EVENT():
 TP_perf_assign(
        __perf_count(foo);
        __perf_addr(bar);
 )
Which, when specified, makes the swcounter increment by @foo instead of the usual 1, and report @bar for PERF_SAMPLE_ADDR (the data address associated with the event) when this triggers a counter overflow. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jason Baron <jbaron@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 Aug 2009, 4 commits
-
-
Committed by Phillip Lougher
Fix and improve the comments in decompress/generic.h that describe the decompressor API. Also remove an unused definition, and rename INBUF_LEN in lib/decompress_inflate.c to conform to the bzip2/lzma naming. Signed-off-by: Phillip Lougher <phillip@lougher.demon.co.uk> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by KAMEZAWA Hiroyuki
At first, init_task's mems_allowed is initialized as:
 init_task->mems_allowed == node_state[N_POSSIBLE]
and cpuset's top_cpuset mask is initialized as:
 top_cpuset->mems_allowed = node_state[N_HIGH_MEMORY]
Before 2.6.29, a policy's mems_allowed was initialized as follows:
 1. update task->mems_allowed from its cpuset->mems_allowed.
 2. policy->mems_allowed = nodes_and(task->mems_allowed, user's mask)
Because the task's mems_allowed was updated with reference to top_cpuset's, cpuset's mems_allowed was always aware of N_HIGH_MEMORY. In 2.6.30, after commit 58568d2a ("cpuset,mm: update tasks' mems_allowed in time"), a policy's mems_allowed is initialized as:
 1. policy->mems_allowed = nodes_and(task->mems_allowed, user's mask)
Here, if the task is in top_cpuset, task->mems_allowed is not updated from init's one. Assume the user executes a command such as:
 # numactl --interleave=all ...
Then:
 policy->mems_allowed = nodes_and(N_POSSIBLE, ALL_SET_MASK)
so the policy's mems_allowed can include a possible node which has no pgdat. MPOL's INTERLEAVE just scans the nodemask of task->mems_allowed and accesses NODE_DATA(nid)->zonelist directly, even if NODE_DATA(nid) == NULL. So what we need is to make policy->mems_allowed aware of N_HIGH_MEMORY. This patch does that. But to do so, an extra nodemask would be on the stack; because cpumask has the CPUMASK_ALLOC() interface, I added an equivalent for nodes. This patch restores the old behavior, but I feel this fix itself is just a Band-Aid. To do a fundamental fix we have to take care of memory hotplug, and that takes time (task->mems_allowed should be N_HIGH_MEMORY, I think). mpol_set_nodemask() should be aware of N_HIGH_MEMORY and the policy's nodemask should include only online nodes. In the old behavior, this was guaranteed by frequent references to cpuset code. Now most of them are removed and mempolicy has to check it by itself. To do the check, a few nodemask_t are used for calculating the nodemask, but the size of nodemask_t can be big and it's not good to allocate them on the stack. cpumask_t has CPUMASK_ALLOC/FREE as easy code for getting a scratch area; NODEMASK_ALLOC/FREE should be there too. [akpm@linux-foundation.org: cleanups & tweaks] Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Miao Xie <miaox@cn.fujitsu.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Paul Menage <menage@google.com> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Cc: Yasunori Goto <y-goto@jp.fujitsu.com> Cc: Pekka Enberg <penberg@cs.helsinki.fi> Cc: David Rientjes <rientjes@google.com> Cc: Lee Schermerhorn <lee.schermerhorn@hp.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Christoph Hellwig
When we want to tear down an inode that lost the add-to-cache race in XFS, we must not call into ->destroy_inode, because that would delete the inode that won the race from the inode cache radix tree. This patch provides the __destroy_inode helper needed to fix this; the actual fix will be in the next patch. As XFS was the only reason destroy_inode was exported, we shift the export to the new __destroy_inode. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
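A sketch of how a filesystem error path would use the new helper (the myfs_*() names are hypothetical; this is not the actual XFS change): undo only the generic VFS initialization for an inode that was never added to the private cache, then free the container directly.

#include <linux/fs.h>
#include <linux/slab.h>

/* Created with kmem_cache_create() at module init in a real filesystem. */
static struct kmem_cache *myfs_inode_cachep;

struct myfs_inode {
        struct inode vfs_inode;
        /* fs-private state that may be indexed in a private radix tree */
};

static void myfs_iget_lost_race(struct myfs_inode *mi)
{
        /* This inode lost the insert race and was never added to the
         * private cache: undo only the generic VFS part, then free the
         * container ourselves instead of calling ->destroy_inode(). */
        __destroy_inode(&mi->vfs_inode);
        kmem_cache_free(myfs_inode_cachep, mi);
}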
-
Committed by Christoph Hellwig
Currently inode_init_always calls into ->destroy_inode if the additional initialization fails. That's not only counter-intuitive, because inode_init_always did not allocate the inode structure, but in the case of XFS it's actively harmful, as ->destroy_inode might delete the inode from a radix tree it has never been added to. This in turn might end up deleting the inode for the same inum that has been instantiated by another process and cause lots of subtle problems. Also, in the case of re-initializing a reclaimable inode in XFS, it would free an inode we still want to keep alive. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
- 07 Aug 2009, 1 commit
-
-
Committed by Krishna Kumar
dev_queue_xmit enqueues an skb and calls qdisc_run, which dequeues the skb and xmits it. In most cases, the skb that is enqueued is the same one that is dequeued (unless the queue gets stopped or multiple cpus write to the same queue and end up in a race with qdisc_run). For default qdiscs, we can remove the redundant enqueue/dequeue and simply xmit the skb, since the default qdisc is work-conserving. The patch uses a new flag - TCQ_F_CAN_BYPASS - to identify the default fast queue. The controversial part of the patch is incrementing qlen when an skb is requeued - this is to avoid checks like the second line below:
+       } else if ((q->flags & TCQ_F_CAN_BYPASS) && !qdisc_qlen(q) &&
>>                 !q->gso_skb &&
+                  !test_and_set_bit(__QDISC_STATE_RUNNING, &q->state)) {
Results of 2 hours of testing for multiple netperf sessions (1, 2, 4, 8, 12 sessions on a 4-cpu system-X). The BW numbers are aggregate Mb/s across iterations, tested with this version on system-X boxes with Chelsio 10gbps cards:
 ----------------------------------
 Size |  ORG BW          NEW BW  |
 ----------------------------------
 128K |  156964          159381  |
 256K |  158650          162042  |
 ----------------------------------
Changes from ver1:
 1. Move the sch_direct_xmit declaration from sch_generic.h to pkt_sched.h.
 2. Update qdisc basic statistics for the direct xmit path.
 3. Set qlen to zero in qdisc_reset.
 4. Change some function names to more meaningful ones.
Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-