Commit 0fc31966 authored by David S. Miller

Merge branch 'for-davem' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next

John W. Linville says:

====================
Please pull this batch of wireless updates intended for 3.15!

For the mac80211 bits, Johannes says:

"This has a whole bunch of bugfixes for things that went into -next
previously as well as some other bugfixes I didn't want to rush into
3.14 at this point. The rest of it is some cleanups and a few small
features, the biggest of which is probably Janusz's regulatory DFS CAC
time code."

For the Bluetooth bits, Gustavo says:

"One more pull request to 3.15. This is mostly and bug fix pull request, it
contains several fixes and clean up all over the tree, plus some small new
features."

For the NFC bits, Samuel says:

"This is the NFC pull request for 3.15. With this one we have:

- Support for ISO 15693 a.k.a. NFC vicinity a.k.a. Type 5 tags. ISO
  15693 are long range (1 - 2 meters) vicinity tags/cards. The kernel
  now supports those through the NFC netlink and digital APIs.

- Support for TI's trf7970a chipset. This chipset relies on the NFC
  digital layer and the driver currently supports type 2, 4A and 5 tags.

- Support for NXP's pn544 secure firmware download. The pn544 C3 chipset
  relies on a different firmware download protocol than the C2 one. We
  now support both and use the right one depending on the version we
  detect at runtime.

- Support for 4A tags from the NFC digital layer.

- A bunch of cleanups and minor fixes from Axel Lin and Thierry Escande."
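
On the pn544 point above, picking the download protocol comes down to a dispatch on the chip variant read at probe time. A hypothetical sketch of that selection follows; every identifier below is invented for illustration and does not match the driver's actual symbols.

#include <linux/errno.h>
#include <linux/firmware.h>

enum example_pn544_variant { EXAMPLE_PN544_C2, EXAMPLE_PN544_C3 };

struct example_pn544_dev {
	enum example_pn544_variant variant;	/* read from the chip at probe */
};

static int example_dnld_c2(struct example_pn544_dev *dev,
			   const struct firmware *fw)
{
	return 0;	/* legacy C2 download protocol would go here */
}

static int example_dnld_c3(struct example_pn544_dev *dev,
			   const struct firmware *fw)
{
	return 0;	/* secure C3 download protocol would go here */
}

/* Pick the firmware download protocol matching the detected variant. */
static int example_fw_download(struct example_pn544_dev *dev,
			       const struct firmware *fw)
{
	switch (dev->variant) {
	case EXAMPLE_PN544_C2:
		return example_dnld_c2(dev, fw);
	case EXAMPLE_PN544_C3:
		return example_dnld_c3(dev, fw);
	}
	return -EOPNOTSUPP;
}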

For the iwlwifi bits, Emmanuel says:

"We were sending a host command while the mutex wasn't held. This
led to hard-to-catch races."
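
A minimal sketch of the pattern being restored here, assuming the usual convention of guarding host commands with a per-device mutex; the types and command ID below are simplified stand-ins, not the driver's real definitions.

#include <linux/types.h>
#include <linux/mutex.h>
#include <linux/lockdep.h>

struct example_host_cmd {
	u32 id;
	const void *data;
	u16 len;
};

struct example_mvm {
	struct mutex mutex;	/* serializes host commands and FW state */
};

static int example_send_cmd(struct example_mvm *mvm,
			    struct example_host_cmd *cmd)
{
	/* Commands touch shared firmware state, so the caller must hold
	 * the mutex; the race fixed here came from skipping this. */
	lockdep_assert_held(&mvm->mutex);
	/* ... hand the command to the transport layer ... */
	return 0;
}

static int example_do_op(struct example_mvm *mvm)
{
	struct example_host_cmd cmd = { .id = 0x01 };	/* placeholder id */
	int ret;

	mutex_lock(&mvm->mutex);	/* take the mutex before sending */
	ret = example_send_cmd(mvm, &cmd);
	mutex_unlock(&mvm->mutex);

	return ret;
}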

And...

"I have a fix for a "merge damage" which is not really a merge
damage: it enables scheduled scan which has been disabled in
wireless.git. Since you merged wireless.git into wireless-next.git,
this can now be fixed in wireless-next.git.

Besides this, Alex made a workaround for a hardware bug. This fix
allows us to consume less power in S3. Arik and Eliad continue to
work on D0i3 which is a run-time power saving feature. Eliad also
contributes a few bits to the rate scaling logic to which Eyal adds his
own contribution. Avri dives deep into the power code - newer firmware
will allow us to enable power save in newer scenarios. Johannes made a few
clean-ups. I have the regular amount of BT Coex boring stuff. I disable
uAPSD since we identified firmware bugs that cause packet loss. One
thing that does stand out is the udev event that we now send when the
FW asserts. I hope it will allow us to debug the FW more easily."
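
The firmware-assert notification mentioned above amounts to emitting a uevent with some context attached so a udev rule can react. A rough sketch using the generic kobject_uevent_env() interface follows; the event strings are assumptions, not the driver's actual payload.

#include <linux/device.h>
#include <linux/kernel.h>

/* Illustration only: tell user space the firmware asserted so a udev
 * rule can trigger log collection. Strings are placeholders. */
static void example_notify_fw_assert(struct device *dev, u32 assert_id)
{
	char id_str[32];
	char *envp[] = { "DRIVER_EVENT=FW_ASSERT", id_str, NULL };

	snprintf(id_str, sizeof(id_str), "ASSERT_ID=0x%08x", assert_id);
	kobject_uevent_env(&dev->kobj, KOBJ_CHANGE, envp);
}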

Also included is one last iwlwifi pull for a build breakage fix...

For the Atheros bits, Kalle says:

"Michal now did some optimisations and was able to improve throughput by
100 Mbps on our MIPS based AP135 platform. Chun-Yeow added some
workarounds to be able to better use ad-hoc mode. Ben improved log
messages and added support for MSDU chaining. And, as usual, also some
smaller fixes."

Beyond that...

Andrea Merello continues his rtl8180 refactoring, in preparation for
a long-awaited rtl8187 driver.  We get a new driver (rsi) for the
RS9113 chip, from Fariya Fatima.  And, of course, we get the usual
round of updates for ath9k, brcmfmac, mwifiex, wil6210, etc. as well.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
* Texas Instruments TRF7970A RFID/NFC/15693 Transceiver
Required properties:
- compatible: Should be "ti,trf7970a".
- spi-max-frequency: Maximum SPI frequency (<= 2000000).
- interrupt-parent: phandle of parent interrupt handler.
- interrupts: A single interrupt specifier.
- ti,enable-gpios: Two GPIO entries used for 'EN' and 'EN2' pins on the
TRF7970A.
- vin-supply: Regulator for supply voltage to VIN pin
Optional SoC Specific Properties:
- pinctrl-names: Contains only one value - "default".
- pinctrl-0: Specifies the pin control groups used for this controller.
Example (for ARM-based BeagleBone with TRF7970A on SPI1):
&spi1 {
status = "okay";
nfc@0 {
compatible = "ti,trf7970a";
reg = <0>;
pinctrl-names = "default";
pinctrl-0 = <&trf7970a_default>;
spi-max-frequency = <2000000>;
interrupt-parent = <&gpio2>;
interrupts = <14 0>;
ti,enable-gpios = <&gpio2 2 GPIO_ACTIVE_LOW>,
<&gpio2 5 GPIO_ACTIVE_LOW>;
vin-supply = <&ldo3_reg>;
status = "okay";
};
};
......@@ -6117,6 +6117,7 @@ F: include/net/nfc/
F: include/uapi/linux/nfc.h
F: drivers/nfc/
F: include/linux/platform_data/pn544.h
F: Documentation/devicetree/bindings/net/nfc/
NFS, SUNRPC, AND LOCKD CLIENTS
M: Trond Myklebust <trond.myklebust@primarydata.com>
......
......@@ -89,6 +89,7 @@ static const struct usb_device_id ath3k_table[] = {
{ USB_DEVICE(0x0b05, 0x17d0) },
{ USB_DEVICE(0x0CF3, 0x0036) },
{ USB_DEVICE(0x0CF3, 0x3004) },
{ USB_DEVICE(0x0CF3, 0x3005) },
{ USB_DEVICE(0x0CF3, 0x3008) },
{ USB_DEVICE(0x0CF3, 0x311D) },
{ USB_DEVICE(0x0CF3, 0x311E) },
......@@ -137,6 +138,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
{ USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0CF3, 0x0036), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3005), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311E), .driver_info = BTUSB_ATH3012 },
......@@ -180,10 +182,9 @@ static int ath3k_load_firmware(struct usb_device *udev,
}
memcpy(send_buf, firmware->data, 20);
if ((err = usb_control_msg(udev, pipe,
USB_REQ_DFU_DNLOAD,
USB_TYPE_VENDOR, 0, 0,
send_buf, 20, USB_CTRL_SET_TIMEOUT)) < 0) {
err = usb_control_msg(udev, pipe, USB_REQ_DFU_DNLOAD, USB_TYPE_VENDOR,
0, 0, send_buf, 20, USB_CTRL_SET_TIMEOUT);
if (err < 0) {
BT_ERR("Can't change to loading configuration err");
goto error;
}
......@@ -366,7 +367,7 @@ static int ath3k_load_patch(struct usb_device *udev)
}
snprintf(filename, ATH3K_NAME_LEN, "ar3k/AthrBT_0x%08x.dfu",
fw_version.rom_version);
le32_to_cpu(fw_version.rom_version));
ret = request_firmware(&firmware, filename, &udev->dev);
if (ret < 0) {
......@@ -428,7 +429,7 @@ static int ath3k_load_syscfg(struct usb_device *udev)
}
snprintf(filename, ATH3K_NAME_LEN, "ar3k/ramps_0x%08x_%d%s",
fw_version.rom_version, clk_value, ".dfu");
le32_to_cpu(fw_version.rom_version), clk_value, ".dfu");
ret = request_firmware(&firmware, filename, &udev->dev);
if (ret < 0) {
......
......@@ -131,8 +131,11 @@ static int bfusb_send_bulk(struct bfusb_data *data, struct sk_buff *skb)
BT_DBG("bfusb %p skb %p len %d", data, skb, skb->len);
if (!urb && !(urb = usb_alloc_urb(0, GFP_ATOMIC)))
return -ENOMEM;
if (!urb) {
urb = usb_alloc_urb(0, GFP_ATOMIC);
if (!urb)
return -ENOMEM;
}
pipe = usb_sndbulkpipe(data->udev, data->bulk_out_ep);
......@@ -218,8 +221,11 @@ static int bfusb_rx_submit(struct bfusb_data *data, struct urb *urb)
BT_DBG("bfusb %p urb %p", data, urb);
if (!urb && !(urb = usb_alloc_urb(0, GFP_ATOMIC)))
return -ENOMEM;
if (!urb) {
urb = usb_alloc_urb(0, GFP_ATOMIC);
if (!urb)
return -ENOMEM;
}
skb = bt_skb_alloc(size, GFP_ATOMIC);
if (!skb) {
......
......@@ -257,7 +257,8 @@ static void bluecard_write_wakeup(bluecard_info_t *info)
ready_bit = XMIT_BUF_ONE_READY;
}
if (!(skb = skb_dequeue(&(info->txq))))
skb = skb_dequeue(&(info->txq));
if (!skb)
break;
if (bt_cb(skb)->pkt_type & 0x80) {
......@@ -391,7 +392,8 @@ static void bluecard_receive(bluecard_info_t *info, unsigned int offset)
if (info->rx_skb == NULL) {
info->rx_state = RECV_WAIT_PACKET_TYPE;
info->rx_count = 0;
if (!(info->rx_skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC))) {
info->rx_skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC);
if (!info->rx_skb) {
BT_ERR("Can't allocate mem for new packet");
return;
}
......@@ -566,7 +568,8 @@ static int bluecard_hci_set_baud_rate(struct hci_dev *hdev, int baud)
/* Ericsson baud rate command */
unsigned char cmd[] = { HCI_COMMAND_PKT, 0x09, 0xfc, 0x01, 0x03 };
if (!(skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC))) {
skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC);
if (!skb) {
BT_ERR("Can't allocate mem for new packet");
return -1;
}
......
......@@ -193,8 +193,8 @@ static void bt3c_write_wakeup(bt3c_info_t *info)
if (!pcmcia_dev_present(info->p_dev))
break;
if (!(skb = skb_dequeue(&(info->txq)))) {
skb = skb_dequeue(&(info->txq));
if (!skb) {
clear_bit(XMIT_SENDING, &(info->tx_state));
break;
}
......@@ -238,7 +238,8 @@ static void bt3c_receive(bt3c_info_t *info)
if (info->rx_skb == NULL) {
info->rx_state = RECV_WAIT_PACKET_TYPE;
info->rx_count = 0;
if (!(info->rx_skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC))) {
info->rx_skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC);
if (!info->rx_skb) {
BT_ERR("Can't allocate mem for new packet");
return;
}
......
......@@ -149,7 +149,8 @@ static void btuart_write_wakeup(btuart_info_t *info)
if (!pcmcia_dev_present(info->p_dev))
return;
if (!(skb = skb_dequeue(&(info->txq))))
skb = skb_dequeue(&(info->txq));
if (!skb)
break;
/* Send frame */
......@@ -190,7 +191,8 @@ static void btuart_receive(btuart_info_t *info)
if (info->rx_skb == NULL) {
info->rx_state = RECV_WAIT_PACKET_TYPE;
info->rx_count = 0;
if (!(info->rx_skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC))) {
info->rx_skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC);
if (!info->rx_skb) {
BT_ERR("Can't allocate mem for new packet");
return;
}
......
......@@ -159,6 +159,7 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3005), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311e), .driver_info = BTUSB_ATH3012 },
......
......@@ -153,7 +153,8 @@ static void dtl1_write_wakeup(dtl1_info_t *info)
if (!pcmcia_dev_present(info->p_dev))
return;
if (!(skb = skb_dequeue(&(info->txq))))
skb = skb_dequeue(&(info->txq));
if (!skb)
break;
/* Send frame */
......@@ -215,13 +216,15 @@ static void dtl1_receive(dtl1_info_t *info)
info->hdev->stat.byte_rx++;
/* Allocate packet */
if (info->rx_skb == NULL)
if (!(info->rx_skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC))) {
if (info->rx_skb == NULL) {
info->rx_skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC);
if (!info->rx_skb) {
BT_ERR("Can't allocate mem for new packet");
info->rx_state = RECV_WAIT_NSH;
info->rx_count = NSHL;
return;
}
}
*skb_put(info->rx_skb, 1) = inb(iobase + UART_RX);
nsh = (nsh_t *)info->rx_skb->data;
......
......@@ -291,7 +291,8 @@ static struct sk_buff *bcsp_dequeue(struct hci_uart *hu)
/* First of all, check for unreliable messages in the queue,
since they have priority */
if ((skb = skb_dequeue(&bcsp->unrel)) != NULL) {
skb = skb_dequeue(&bcsp->unrel);
if (skb != NULL) {
struct sk_buff *nskb = bcsp_prepare_pkt(bcsp, skb->data, skb->len, bt_cb(skb)->pkt_type);
if (nskb) {
kfree_skb(skb);
......@@ -308,16 +309,20 @@ static struct sk_buff *bcsp_dequeue(struct hci_uart *hu)
spin_lock_irqsave_nested(&bcsp->unack.lock, flags, SINGLE_DEPTH_NESTING);
if (bcsp->unack.qlen < BCSP_TXWINSIZE && (skb = skb_dequeue(&bcsp->rel)) != NULL) {
struct sk_buff *nskb = bcsp_prepare_pkt(bcsp, skb->data, skb->len, bt_cb(skb)->pkt_type);
if (nskb) {
__skb_queue_tail(&bcsp->unack, skb);
mod_timer(&bcsp->tbcsp, jiffies + HZ / 4);
spin_unlock_irqrestore(&bcsp->unack.lock, flags);
return nskb;
} else {
skb_queue_head(&bcsp->rel, skb);
BT_ERR("Could not dequeue pkt because alloc_skb failed");
if (bcsp->unack.qlen < BCSP_TXWINSIZE) {
skb = skb_dequeue(&bcsp->rel);
if (skb != NULL) {
struct sk_buff *nskb = bcsp_prepare_pkt(bcsp, skb->data, skb->len,
bt_cb(skb)->pkt_type);
if (nskb) {
__skb_queue_tail(&bcsp->unack, skb);
mod_timer(&bcsp->tbcsp, jiffies + HZ / 4);
spin_unlock_irqrestore(&bcsp->unack.lock, flags);
return nskb;
} else {
skb_queue_head(&bcsp->rel, skb);
BT_ERR("Could not dequeue pkt because alloc_skb failed");
}
}
}
......@@ -715,6 +720,9 @@ static int bcsp_open(struct hci_uart *hu)
static int bcsp_close(struct hci_uart *hu)
{
struct bcsp_struct *bcsp = hu->priv;
del_timer_sync(&bcsp->tbcsp);
hu->priv = NULL;
BT_DBG("hu %p", hu);
......@@ -722,7 +730,6 @@ static int bcsp_close(struct hci_uart *hu)
skb_queue_purge(&bcsp->unack);
skb_queue_purge(&bcsp->rel);
skb_queue_purge(&bcsp->unrel);
del_timer(&bcsp->tbcsp);
kfree(bcsp);
return 0;
......
......@@ -206,12 +206,12 @@ static int h5_close(struct hci_uart *hu)
{
struct h5 *h5 = hu->priv;
del_timer_sync(&h5->timer);
skb_queue_purge(&h5->unack);
skb_queue_purge(&h5->rel);
skb_queue_purge(&h5->unrel);
del_timer(&h5->timer);
kfree(h5);
return 0;
......@@ -673,7 +673,8 @@ static struct sk_buff *h5_dequeue(struct hci_uart *hu)
return h5_prepare_pkt(hu, HCI_3WIRE_LINK_PKT, wakeup_req, 2);
}
if ((skb = skb_dequeue(&h5->unrel)) != NULL) {
skb = skb_dequeue(&h5->unrel);
if (skb != NULL) {
nskb = h5_prepare_pkt(hu, bt_cb(skb)->pkt_type,
skb->data, skb->len);
if (nskb) {
......@@ -690,7 +691,8 @@ static struct sk_buff *h5_dequeue(struct hci_uart *hu)
if (h5->unack.qlen >= h5->tx_win)
goto unlock;
if ((skb = skb_dequeue(&h5->rel)) != NULL) {
skb = skb_dequeue(&h5->rel);
if (skb != NULL) {
nskb = h5_prepare_pkt(hu, bt_cb(skb)->pkt_type,
skb->data, skb->len);
if (nskb) {
......
......@@ -271,7 +271,8 @@ static int hci_uart_tty_open(struct tty_struct *tty)
if (tty->ops->write == NULL)
return -EOPNOTSUPP;
if (!(hu = kzalloc(sizeof(struct hci_uart), GFP_KERNEL))) {
hu = kzalloc(sizeof(struct hci_uart), GFP_KERNEL);
if (!hu) {
BT_ERR("Can't allocate control structure");
return -ENFILE;
}
......@@ -569,7 +570,8 @@ static int __init hci_uart_init(void)
hci_uart_ldisc.write_wakeup = hci_uart_tty_wakeup;
hci_uart_ldisc.owner = THIS_MODULE;
if ((err = tty_register_ldisc(N_HCI, &hci_uart_ldisc))) {
err = tty_register_ldisc(N_HCI, &hci_uart_ldisc);
if (err) {
BT_ERR("HCI line discipline registration failed. (%d)", err);
return err;
}
......@@ -614,7 +616,8 @@ static void __exit hci_uart_exit(void)
#endif
/* Release tty registration of line discipline */
if ((err = tty_unregister_ldisc(N_HCI)))
err = tty_unregister_ldisc(N_HCI);
if (err)
BT_ERR("Can't unregister HCI line discipline (%d)", err);
}
......
......@@ -116,7 +116,7 @@ config AT76C50X_USB
config AIRO_CS
tristate "Cisco/Aironet 34X/35X/4500/4800 PCMCIA cards"
depends on PCMCIA && (BROKEN || !M32R)
depends on CFG80211 && PCMCIA && (BROKEN || !M32R)
select WIRELESS_EXT
select WEXT_SPY
select WEXT_PRIV
......@@ -281,5 +281,6 @@ source "drivers/net/wireless/ti/Kconfig"
source "drivers/net/wireless/zd1211rw/Kconfig"
source "drivers/net/wireless/mwifiex/Kconfig"
source "drivers/net/wireless/cw1200/Kconfig"
source "drivers/net/wireless/rsi/Kconfig"
endif # WLAN
......@@ -59,3 +59,4 @@ obj-$(CONFIG_BRCMFMAC) += brcm80211/
obj-$(CONFIG_BRCMSMAC) += brcm80211/
obj-$(CONFIG_CW1200) += cw1200/
obj-$(CONFIG_RSI_91X) += rsi/
......@@ -56,6 +56,15 @@ enum ath_device_state {
ATH_HW_INITIALIZED,
};
enum ath_op_flags {
ATH_OP_INVALID,
ATH_OP_BEACONS,
ATH_OP_ANI_RUN,
ATH_OP_PRIM_STA_VIF,
ATH_OP_HW_RESET,
ATH_OP_SCANNING,
};
enum ath_bus_type {
ATH_PCI,
ATH_AHB,
......@@ -130,6 +139,7 @@ struct ath_common {
struct ieee80211_hw *hw;
int debug_mask;
enum ath_device_state state;
unsigned long op_flags;
struct ath_ani ani;
......
......@@ -266,12 +266,12 @@ static inline void ath10k_ce_engine_int_status_clear(struct ath10k *ar,
* ath10k_ce_sendlist_send.
* The caller takes responsibility for any needed locking.
*/
static int ath10k_ce_send_nolock(struct ath10k_ce_pipe *ce_state,
void *per_transfer_context,
u32 buffer,
unsigned int nbytes,
unsigned int transfer_id,
unsigned int flags)
int ath10k_ce_send_nolock(struct ath10k_ce_pipe *ce_state,
void *per_transfer_context,
u32 buffer,
unsigned int nbytes,
unsigned int transfer_id,
unsigned int flags)
{
struct ath10k *ar = ce_state->ar;
struct ath10k_ce_ring *src_ring = ce_state->src_ring;
......@@ -1067,9 +1067,9 @@ struct ath10k_ce_pipe *ath10k_ce_init(struct ath10k *ar,
*
* For the lack of a better place do the check here.
*/
BUILD_BUG_ON(TARGET_NUM_MSDU_DESC >
BUILD_BUG_ON(2*TARGET_NUM_MSDU_DESC >
(CE_HTT_H2T_MSG_SRC_NENTRIES - 1));
BUILD_BUG_ON(TARGET_10X_NUM_MSDU_DESC >
BUILD_BUG_ON(2*TARGET_10X_NUM_MSDU_DESC >
(CE_HTT_H2T_MSG_SRC_NENTRIES - 1));
ret = ath10k_pci_wake(ar);
......
......@@ -23,7 +23,7 @@
/* Maximum number of Copy Engine's supported */
#define CE_COUNT_MAX 8
#define CE_HTT_H2T_MSG_SRC_NENTRIES 2048
#define CE_HTT_H2T_MSG_SRC_NENTRIES 4096
/* Descriptor rings must be aligned to this boundary */
#define CE_DESC_RING_ALIGN 8
......@@ -152,6 +152,13 @@ int ath10k_ce_send(struct ath10k_ce_pipe *ce_state,
unsigned int transfer_id,
unsigned int flags);
int ath10k_ce_send_nolock(struct ath10k_ce_pipe *ce_state,
void *per_transfer_context,
u32 buffer,
unsigned int nbytes,
unsigned int transfer_id,
unsigned int flags);
void ath10k_ce_send_cb_register(struct ath10k_ce_pipe *ce_state,
void (*send_cb)(struct ath10k_ce_pipe *),
int disable_interrupts);
......
......@@ -62,16 +62,13 @@ struct ath10k;
struct ath10k_skb_cb {
dma_addr_t paddr;
bool is_mapped;
bool is_aborted;
u8 vdev_id;
struct {
u8 tid;
bool is_offchan;
u8 frag_len;
u8 pad_len;
struct ath10k_htt_txbuf *txbuf;
u32 txbuf_paddr;
} __packed htt;
struct {
......@@ -87,32 +84,6 @@ static inline struct ath10k_skb_cb *ATH10K_SKB_CB(struct sk_buff *skb)
return (struct ath10k_skb_cb *)&IEEE80211_SKB_CB(skb)->driver_data;
}
static inline int ath10k_skb_map(struct device *dev, struct sk_buff *skb)
{
if (ATH10K_SKB_CB(skb)->is_mapped)
return -EINVAL;
ATH10K_SKB_CB(skb)->paddr = dma_map_single(dev, skb->data, skb->len,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, ATH10K_SKB_CB(skb)->paddr)))
return -EIO;
ATH10K_SKB_CB(skb)->is_mapped = true;
return 0;
}
static inline int ath10k_skb_unmap(struct device *dev, struct sk_buff *skb)
{
if (!ATH10K_SKB_CB(skb)->is_mapped)
return -EINVAL;
dma_unmap_single(dev, ATH10K_SKB_CB(skb)->paddr, skb->len,
DMA_TO_DEVICE);
ATH10K_SKB_CB(skb)->is_mapped = false;
return 0;
}
static inline u32 host_interest_item_address(u32 item_offset)
{
return QCA988X_HOST_INTEREST_ADDRESS + item_offset;
......@@ -288,6 +259,7 @@ struct ath10k_vif {
u8 fixed_rate;
u8 fixed_nss;
u8 force_sgi;
};
struct ath10k_vif_iter {
......
......@@ -21,6 +21,14 @@
#include <linux/kernel.h>
#include "core.h"
struct ath10k_hif_sg_item {
u16 transfer_id;
void *transfer_context; /* NULL = tx completion callback not called */
void *vaddr; /* for debugging mostly */
u32 paddr;
u16 len;
};
struct ath10k_hif_cb {
int (*tx_completion)(struct ath10k *ar,
struct sk_buff *wbuf,
......@@ -31,11 +39,9 @@ struct ath10k_hif_cb {
};
struct ath10k_hif_ops {
/* Send the head of a buffer to HIF for transmission to the target. */
int (*send_head)(struct ath10k *ar, u8 pipe_id,
unsigned int transfer_id,
unsigned int nbytes,
struct sk_buff *buf);
/* send a scatter-gather list to the target */
int (*tx_sg)(struct ath10k *ar, u8 pipe_id,
struct ath10k_hif_sg_item *items, int n_items);
/*
* API to handle HIF-specific BMI message exchanges, this API is
......@@ -86,12 +92,11 @@ struct ath10k_hif_ops {
};
static inline int ath10k_hif_send_head(struct ath10k *ar, u8 pipe_id,
unsigned int transfer_id,
unsigned int nbytes,
struct sk_buff *buf)
static inline int ath10k_hif_tx_sg(struct ath10k *ar, u8 pipe_id,
struct ath10k_hif_sg_item *items,
int n_items)
{
return ar->hif.ops->send_head(ar, pipe_id, transfer_id, nbytes, buf);
return ar->hif.ops->tx_sg(ar, pipe_id, items, n_items);
}
static inline int ath10k_hif_exchange_bmi_msg(struct ath10k *ar,
......
......@@ -63,7 +63,9 @@ static struct sk_buff *ath10k_htc_build_tx_ctrl_skb(void *ar)
static inline void ath10k_htc_restore_tx_skb(struct ath10k_htc *htc,
struct sk_buff *skb)
{
ath10k_skb_unmap(htc->ar->dev, skb);
struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(skb);
dma_unmap_single(htc->ar->dev, skb_cb->paddr, skb->len, DMA_TO_DEVICE);
skb_pull(skb, sizeof(struct ath10k_htc_hdr));
}
......@@ -122,6 +124,9 @@ int ath10k_htc_send(struct ath10k_htc *htc,
struct sk_buff *skb)
{
struct ath10k_htc_ep *ep = &htc->endpoint[eid];
struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(skb);
struct ath10k_hif_sg_item sg_item;
struct device *dev = htc->ar->dev;
int credits = 0;
int ret;
......@@ -157,19 +162,25 @@ int ath10k_htc_send(struct ath10k_htc *htc,
ath10k_htc_prepare_tx_skb(ep, skb);
ret = ath10k_skb_map(htc->ar->dev, skb);
skb_cb->paddr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
ret = dma_mapping_error(dev, skb_cb->paddr);
if (ret)
goto err_credits;
ret = ath10k_hif_send_head(htc->ar, ep->ul_pipe_id, ep->eid,
skb->len, skb);
sg_item.transfer_id = ep->eid;
sg_item.transfer_context = skb;
sg_item.vaddr = skb->data;
sg_item.paddr = skb_cb->paddr;
sg_item.len = skb->len;
ret = ath10k_hif_tx_sg(htc->ar, ep->ul_pipe_id, &sg_item, 1);
if (ret)
goto err_unmap;
return 0;
err_unmap:
ath10k_skb_unmap(htc->ar->dev, skb);
dma_unmap_single(dev, skb_cb->paddr, skb->len, DMA_TO_DEVICE);
err_credits:
if (ep->tx_credit_flow_enabled) {
spin_lock_bh(&htc->tx_lock);
......@@ -191,10 +202,8 @@ static int ath10k_htc_tx_completion_handler(struct ath10k *ar,
struct ath10k_htc *htc = &ar->htc;
struct ath10k_htc_ep *ep = &htc->endpoint[eid];
if (!skb) {
ath10k_warn("invalid sk_buff completion - NULL pointer. firmware crashed?\n");
if (WARN_ON_ONCE(!skb))
return 0;
}
ath10k_htc_notify_tx_completion(ep, skb);
/* the skb now belongs to the completion handler */
......
......@@ -20,6 +20,7 @@
#include <linux/bug.h>
#include <linux/interrupt.h>
#include <linux/dmapool.h>
#include "htc.h"
#include "rx_desc.h"
......@@ -1181,11 +1182,20 @@ struct htt_rx_info {
u32 info1;
u32 info2;
} rate;
u32 tsf;
bool fcs_err;
bool amsdu_more;
bool mic_err;
};
struct ath10k_htt_txbuf {
struct htt_data_tx_desc_frag frags[2];
struct ath10k_htc_hdr htc_hdr;
struct htt_cmd_hdr cmd_hdr;
struct htt_data_tx_desc cmd_tx;
} __packed;
struct ath10k_htt {
struct ath10k *ar;
enum ath10k_htc_ep_id eid;
......@@ -1267,11 +1277,18 @@ struct ath10k_htt {
struct sk_buff **pending_tx;
unsigned long *used_msdu_ids; /* bitmap */
wait_queue_head_t empty_tx_wq;
struct dma_pool *tx_pool;
/* set if host-fw communication goes haywire
* used to avoid further failures */
bool rx_confused;
struct tasklet_struct rx_replenish_task;
/* This is used to group tx/rx completions separately and process them
* in batches to reduce cache stalls */
struct tasklet_struct txrx_compl_task;
struct sk_buff_head tx_compl_q;
struct sk_buff_head rx_compl_q;
};
#define RX_HTT_HDR_STATUS_LEN 64
......@@ -1343,4 +1360,5 @@ int ath10k_htt_tx_alloc_msdu_id(struct ath10k_htt *htt);
void ath10k_htt_tx_free_msdu_id(struct ath10k_htt *htt, u16 msdu_id);
int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *);
int ath10k_htt_tx(struct ath10k_htt *htt, struct sk_buff *);
#endif
......@@ -43,7 +43,7 @@
static int ath10k_htt_rx_get_csum_state(struct sk_buff *skb);
static void ath10k_htt_txrx_compl_task(unsigned long ptr);
static int ath10k_htt_rx_ring_size(struct ath10k_htt *htt)
{
......@@ -225,18 +225,16 @@ static void ath10k_htt_rx_ring_refill_retry(unsigned long arg)
ath10k_htt_rx_msdu_buff_replenish(htt);
}
static unsigned ath10k_htt_rx_ring_elems(struct ath10k_htt *htt)
{
return (__le32_to_cpu(*htt->rx_ring.alloc_idx.vaddr) -
htt->rx_ring.sw_rd_idx.msdu_payld) & htt->rx_ring.size_mask;
}
void ath10k_htt_rx_detach(struct ath10k_htt *htt)
{
int sw_rd_idx = htt->rx_ring.sw_rd_idx.msdu_payld;
del_timer_sync(&htt->rx_ring.refill_retry_timer);
tasklet_kill(&htt->rx_replenish_task);
tasklet_kill(&htt->txrx_compl_task);
skb_queue_purge(&htt->tx_compl_q);
skb_queue_purge(&htt->rx_compl_q);
while (sw_rd_idx != __le32_to_cpu(*(htt->rx_ring.alloc_idx.vaddr))) {
struct sk_buff *skb =
......@@ -270,10 +268,12 @@ static inline struct sk_buff *ath10k_htt_rx_netbuf_pop(struct ath10k_htt *htt)
int idx;
struct sk_buff *msdu;
spin_lock_bh(&htt->rx_ring.lock);
lockdep_assert_held(&htt->rx_ring.lock);
if (ath10k_htt_rx_ring_elems(htt) == 0)
ath10k_warn("htt rx ring is empty!\n");
if (htt->rx_ring.fill_cnt == 0) {
ath10k_warn("tried to pop sk_buff from an empty rx ring\n");
return NULL;
}
idx = htt->rx_ring.sw_rd_idx.msdu_payld;
msdu = htt->rx_ring.netbufs_ring[idx];
......@@ -283,7 +283,6 @@ static inline struct sk_buff *ath10k_htt_rx_netbuf_pop(struct ath10k_htt *htt)
htt->rx_ring.sw_rd_idx.msdu_payld = idx;
htt->rx_ring.fill_cnt--;
spin_unlock_bh(&htt->rx_ring.lock);
return msdu;
}
......@@ -307,8 +306,7 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt,
struct sk_buff *msdu;
struct htt_rx_desc *rx_desc;
if (ath10k_htt_rx_ring_elems(htt) == 0)
ath10k_warn("htt rx ring is empty!\n");
lockdep_assert_held(&htt->rx_ring.lock);
if (htt->rx_confused) {
ath10k_warn("htt is confused. refusing rx\n");
......@@ -400,6 +398,7 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt,
msdu_len = MS(__le32_to_cpu(rx_desc->msdu_start.info0),
RX_MSDU_START_INFO0_MSDU_LENGTH);
msdu_chained = rx_desc->frag_info.ring2_more_count;
msdu_chaining = msdu_chained;
if (msdu_len_invalid)
msdu_len = 0;
......@@ -427,7 +426,6 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt,
msdu->next = next;
msdu = next;
msdu_chaining = 1;
}
last_msdu = __le32_to_cpu(rx_desc->msdu_end.info0) &
......@@ -529,6 +527,12 @@ int ath10k_htt_rx_attach(struct ath10k_htt *htt)
tasklet_init(&htt->rx_replenish_task, ath10k_htt_rx_replenish_task,
(unsigned long)htt);
skb_queue_head_init(&htt->tx_compl_q);
skb_queue_head_init(&htt->rx_compl_q);
tasklet_init(&htt->txrx_compl_task, ath10k_htt_txrx_compl_task,
(unsigned long)htt);
ath10k_dbg(ATH10K_DBG_BOOT, "htt rx ring size %d fill_level %d\n",
htt->rx_ring.size, htt->rx_ring.fill_level);
return 0;
......@@ -632,6 +636,12 @@ struct amsdu_subframe_hdr {
__be16 len;
} __packed;
static int ath10k_htt_rx_nwifi_hdrlen(struct ieee80211_hdr *hdr)
{
/* nwifi header is padded to 4 bytes. this fixes 4addr rx */
return round_up(ieee80211_hdrlen(hdr->frame_control), 4);
}
static void ath10k_htt_rx_amsdu(struct ath10k_htt *htt,
struct htt_rx_info *info)
{
......@@ -681,7 +691,7 @@ static void ath10k_htt_rx_amsdu(struct ath10k_htt *htt,
case RX_MSDU_DECAP_NATIVE_WIFI:
/* pull decapped header and copy DA */
hdr = (struct ieee80211_hdr *)skb->data;
hdr_len = ieee80211_hdrlen(hdr->frame_control);
hdr_len = ath10k_htt_rx_nwifi_hdrlen(hdr);
memcpy(addr, ieee80211_get_DA(hdr), ETH_ALEN);
skb_pull(skb, hdr_len);
......@@ -768,7 +778,7 @@ static void ath10k_htt_rx_msdu(struct ath10k_htt *htt, struct htt_rx_info *info)
case RX_MSDU_DECAP_NATIVE_WIFI:
/* Pull decapped header */
hdr = (struct ieee80211_hdr *)skb->data;
hdr_len = ieee80211_hdrlen(hdr->frame_control);
hdr_len = ath10k_htt_rx_nwifi_hdrlen(hdr);
skb_pull(skb, hdr_len);
/* Push original header */
......@@ -846,6 +856,20 @@ static bool ath10k_htt_rx_has_mic_err(struct sk_buff *skb)
return false;
}
static bool ath10k_htt_rx_is_mgmt(struct sk_buff *skb)
{
struct htt_rx_desc *rxd;
u32 flags;
rxd = (void *)skb->data - sizeof(*rxd);
flags = __le32_to_cpu(rxd->attention.flags);
if (flags & RX_ATTENTION_FLAGS_MGMT_TYPE)
return true;
return false;
}
static int ath10k_htt_rx_get_csum_state(struct sk_buff *skb)
{
struct htt_rx_desc *rxd;
......@@ -877,6 +901,57 @@ static int ath10k_htt_rx_get_csum_state(struct sk_buff *skb)
return CHECKSUM_UNNECESSARY;
}
static int ath10k_unchain_msdu(struct sk_buff *msdu_head)
{
struct sk_buff *next = msdu_head->next;
struct sk_buff *to_free = next;
int space;
int total_len = 0;
/* TODO: Might could optimize this by using
* skb_try_coalesce or similar method to
* decrease copying, or maybe get mac80211 to
* provide a way to just receive a list of
* skb?
*/
msdu_head->next = NULL;
/* Allocate total length all at once. */
while (next) {
total_len += next->len;
next = next->next;
}
space = total_len - skb_tailroom(msdu_head);
if ((space > 0) &&
(pskb_expand_head(msdu_head, 0, space, GFP_ATOMIC) < 0)) {
/* TODO: bump some rx-oom error stat */
/* put it back together so we can free the
* whole list at once.
*/
msdu_head->next = to_free;
return -1;
}
/* Walk list again, copying contents into
* msdu_head
*/
next = to_free;
while (next) {
skb_copy_from_linear_data(next, skb_put(msdu_head, next->len),
next->len);
next = next->next;
}
/* If here, we have consolidated skb. Free the
* fragments and pass the main skb on up the
* stack.
*/
ath10k_htt_rx_free_msdu_chain(to_free);
return 0;
}
static void ath10k_htt_rx_handler(struct ath10k_htt *htt,
struct htt_rx_indication *rx)
{
......@@ -888,6 +963,8 @@ static void ath10k_htt_rx_handler(struct ath10k_htt *htt,
u8 *fw_desc;
int i, j;
lockdep_assert_held(&htt->rx_ring.lock);
memset(&info, 0, sizeof(info));
fw_desc_len = __le16_to_cpu(rx->prefix.fw_rx_desc_bytes);
......@@ -940,7 +1017,8 @@ static void ath10k_htt_rx_handler(struct ath10k_htt *htt,
status = info.status;
/* Skip mgmt frames while we handle this in WMI */
if (status == HTT_RX_IND_MPDU_STATUS_MGMT_CTRL) {
if (status == HTT_RX_IND_MPDU_STATUS_MGMT_CTRL ||
ath10k_htt_rx_is_mgmt(msdu_head)) {
ath10k_dbg(ATH10K_DBG_HTT, "htt rx mgmt ctrl\n");
ath10k_htt_rx_free_msdu_chain(msdu_head);
continue;
......@@ -964,10 +1042,8 @@ static void ath10k_htt_rx_handler(struct ath10k_htt *htt,
continue;
}
/* FIXME: we do not support chaining yet.
* this needs investigation */
if (msdu_chaining) {
ath10k_warn("htt rx msdu_chaining is true\n");
if (msdu_chaining &&
(ath10k_unchain_msdu(msdu_head) < 0)) {
ath10k_htt_rx_free_msdu_chain(msdu_head);
continue;
}
......@@ -990,6 +1066,7 @@ static void ath10k_htt_rx_handler(struct ath10k_htt *htt,
info.rate.info0 = rx->ppdu.info0;
info.rate.info1 = __le32_to_cpu(rx->ppdu.info1);
info.rate.info2 = __le32_to_cpu(rx->ppdu.info2);
info.tsf = __le32_to_cpu(rx->ppdu.tsf);
hdr = ath10k_htt_rx_skb_get_hdr(msdu_head);
......@@ -1023,8 +1100,11 @@ static void ath10k_htt_rx_frag_handler(struct ath10k_htt *htt,
msdu_head = NULL;
msdu_tail = NULL;
spin_lock_bh(&htt->rx_ring.lock);
msdu_chaining = ath10k_htt_rx_amsdu_pop(htt, &fw_desc, &fw_desc_len,
&msdu_head, &msdu_tail);
spin_unlock_bh(&htt->rx_ring.lock);
ath10k_dbg(ATH10K_DBG_HTT_DUMP, "htt rx frag ahead\n");
......@@ -1116,6 +1196,45 @@ static void ath10k_htt_rx_frag_handler(struct ath10k_htt *htt,
}
}
static void ath10k_htt_rx_frm_tx_compl(struct ath10k *ar,
struct sk_buff *skb)
{
struct ath10k_htt *htt = &ar->htt;
struct htt_resp *resp = (struct htt_resp *)skb->data;
struct htt_tx_done tx_done = {};
int status = MS(resp->data_tx_completion.flags, HTT_DATA_TX_STATUS);
__le16 msdu_id;
int i;
lockdep_assert_held(&htt->tx_lock);
switch (status) {
case HTT_DATA_TX_STATUS_NO_ACK:
tx_done.no_ack = true;
break;
case HTT_DATA_TX_STATUS_OK:
break;
case HTT_DATA_TX_STATUS_DISCARD:
case HTT_DATA_TX_STATUS_POSTPONE:
case HTT_DATA_TX_STATUS_DOWNLOAD_FAIL:
tx_done.discard = true;
break;
default:
ath10k_warn("unhandled tx completion status %d\n", status);
tx_done.discard = true;
break;
}
ath10k_dbg(ATH10K_DBG_HTT, "htt tx completion num_msdus %d\n",
resp->data_tx_completion.num_msdus);
for (i = 0; i < resp->data_tx_completion.num_msdus; i++) {
msdu_id = resp->data_tx_completion.msdus[i];
tx_done.msdu_id = __le16_to_cpu(msdu_id);
ath10k_txrx_tx_unref(htt, &tx_done);
}
}
void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
{
struct ath10k_htt *htt = &ar->htt;
......@@ -1134,10 +1253,12 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
complete(&htt->target_version_received);
break;
}
case HTT_T2H_MSG_TYPE_RX_IND: {
ath10k_htt_rx_handler(htt, &resp->rx_ind);
break;
}
case HTT_T2H_MSG_TYPE_RX_IND:
spin_lock_bh(&htt->rx_ring.lock);
__skb_queue_tail(&htt->rx_compl_q, skb);
spin_unlock_bh(&htt->rx_ring.lock);
tasklet_schedule(&htt->txrx_compl_task);
return;
case HTT_T2H_MSG_TYPE_PEER_MAP: {
struct htt_peer_map_event ev = {
.vdev_id = resp->peer_map.vdev_id,
......@@ -1172,44 +1293,17 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
break;
}
spin_lock_bh(&htt->tx_lock);
ath10k_txrx_tx_unref(htt, &tx_done);
spin_unlock_bh(&htt->tx_lock);
break;
}
case HTT_T2H_MSG_TYPE_TX_COMPL_IND: {
struct htt_tx_done tx_done = {};
int status = MS(resp->data_tx_completion.flags,
HTT_DATA_TX_STATUS);
__le16 msdu_id;
int i;
switch (status) {
case HTT_DATA_TX_STATUS_NO_ACK:
tx_done.no_ack = true;
break;
case HTT_DATA_TX_STATUS_OK:
break;
case HTT_DATA_TX_STATUS_DISCARD:
case HTT_DATA_TX_STATUS_POSTPONE:
case HTT_DATA_TX_STATUS_DOWNLOAD_FAIL:
tx_done.discard = true;
break;
default:
ath10k_warn("unhandled tx completion status %d\n",
status);
tx_done.discard = true;
break;
}
ath10k_dbg(ATH10K_DBG_HTT, "htt tx completion num_msdus %d\n",
resp->data_tx_completion.num_msdus);
for (i = 0; i < resp->data_tx_completion.num_msdus; i++) {
msdu_id = resp->data_tx_completion.msdus[i];
tx_done.msdu_id = __le16_to_cpu(msdu_id);
ath10k_txrx_tx_unref(htt, &tx_done);
}
break;
}
case HTT_T2H_MSG_TYPE_TX_COMPL_IND:
spin_lock_bh(&htt->tx_lock);
__skb_queue_tail(&htt->tx_compl_q, skb);
spin_unlock_bh(&htt->tx_lock);
tasklet_schedule(&htt->txrx_compl_task);
return;
case HTT_T2H_MSG_TYPE_SEC_IND: {
struct ath10k *ar = htt->ar;
struct htt_security_indication *ev = &resp->security_indication;
......@@ -1249,3 +1343,25 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
/* Free the indication buffer */
dev_kfree_skb_any(skb);
}
static void ath10k_htt_txrx_compl_task(unsigned long ptr)
{
struct ath10k_htt *htt = (struct ath10k_htt *)ptr;
struct htt_resp *resp;
struct sk_buff *skb;
spin_lock_bh(&htt->tx_lock);
while ((skb = __skb_dequeue(&htt->tx_compl_q))) {
ath10k_htt_rx_frm_tx_compl(htt->ar, skb);
dev_kfree_skb_any(skb);
}
spin_unlock_bh(&htt->tx_lock);
spin_lock_bh(&htt->rx_ring.lock);
while ((skb = __skb_dequeue(&htt->rx_compl_q))) {
resp = (struct htt_resp *)skb->data;
ath10k_htt_rx_handler(htt, &resp->rx_ind);
dev_kfree_skb_any(skb);
}
spin_unlock_bh(&htt->rx_ring.lock);
}
......@@ -109,6 +109,14 @@ int ath10k_htt_tx_attach(struct ath10k_htt *htt)
return -ENOMEM;
}
htt->tx_pool = dma_pool_create("ath10k htt tx pool", htt->ar->dev,
sizeof(struct ath10k_htt_txbuf), 4, 0);
if (!htt->tx_pool) {
kfree(htt->used_msdu_ids);
kfree(htt->pending_tx);
return -ENOMEM;
}
return 0;
}
......@@ -117,9 +125,7 @@ static void ath10k_htt_tx_cleanup_pending(struct ath10k_htt *htt)
struct htt_tx_done tx_done = {0};
int msdu_id;
/* No locks needed. Called after communication with the device has
* been stopped. */
spin_lock_bh(&htt->tx_lock);
for (msdu_id = 0; msdu_id < htt->max_num_pending_tx; msdu_id++) {
if (!test_bit(msdu_id, htt->used_msdu_ids))
continue;
......@@ -132,6 +138,7 @@ static void ath10k_htt_tx_cleanup_pending(struct ath10k_htt *htt)
ath10k_txrx_tx_unref(htt, &tx_done);
}
spin_unlock_bh(&htt->tx_lock);
}
void ath10k_htt_tx_detach(struct ath10k_htt *htt)
......@@ -139,6 +146,7 @@ void ath10k_htt_tx_detach(struct ath10k_htt *htt)
ath10k_htt_tx_cleanup_pending(htt);
kfree(htt->pending_tx);
kfree(htt->used_msdu_ids);
dma_pool_destroy(htt->tx_pool);
return;
}
......@@ -334,7 +342,9 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
goto err_free_msdu_id;
}
res = ath10k_skb_map(dev, msdu);
skb_cb->paddr = dma_map_single(dev, msdu->data, msdu->len,
DMA_TO_DEVICE);
res = dma_mapping_error(dev, skb_cb->paddr);
if (res)
goto err_free_txdesc;
......@@ -348,8 +358,7 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
memcpy(cmd->mgmt_tx.hdr, msdu->data,
min_t(int, msdu->len, HTT_MGMT_FRM_HDR_DOWNLOAD_LEN));
skb_cb->htt.frag_len = 0;
skb_cb->htt.pad_len = 0;
skb_cb->htt.txbuf = NULL;
res = ath10k_htc_send(&htt->ar->htc, htt->eid, txdesc);
if (res)
......@@ -358,7 +367,7 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
return 0;
err_unmap_msdu:
ath10k_skb_unmap(dev, msdu);
dma_unmap_single(dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
err_free_txdesc:
dev_kfree_skb_any(txdesc);
err_free_msdu_id:
......@@ -375,19 +384,19 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
int ath10k_htt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
{
struct device *dev = htt->ar->dev;
struct htt_cmd *cmd;
struct htt_data_tx_desc_frag *tx_frags;
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)msdu->data;
struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(msdu);
struct sk_buff *txdesc = NULL;
bool use_frags;
u8 vdev_id = ATH10K_SKB_CB(msdu)->vdev_id;
u8 tid;
int prefetch_len, desc_len;
int msdu_id = -1;
struct ath10k_hif_sg_item sg_items[2];
struct htt_data_tx_desc_frag *frags;
u8 vdev_id = skb_cb->vdev_id;
u8 tid = skb_cb->htt.tid;
int prefetch_len;
int res;
u8 flags0;
u16 flags1;
u8 flags0 = 0;
u16 msdu_id, flags1 = 0;
dma_addr_t paddr;
u32 frags_paddr;
bool use_frags;
res = ath10k_htt_tx_inc_pending(htt);
if (res)
......@@ -406,114 +415,120 @@ int ath10k_htt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
prefetch_len = min(htt->prefetch_len, msdu->len);
prefetch_len = roundup(prefetch_len, 4);
desc_len = sizeof(cmd->hdr) + sizeof(cmd->data_tx) + prefetch_len;
txdesc = ath10k_htc_alloc_skb(desc_len);
if (!txdesc) {
res = -ENOMEM;
goto err_free_msdu_id;
}
/* Since HTT 3.0 there is no separate mgmt tx command. However in case
* of mgmt tx using TX_FRM there is not tx fragment list. Instead of tx
* fragment list host driver specifies directly frame pointer. */
use_frags = htt->target_version_major < 3 ||
!ieee80211_is_mgmt(hdr->frame_control);
if (!IS_ALIGNED((unsigned long)txdesc->data, 4)) {
ath10k_warn("htt alignment check failed. dropping packet.\n");
res = -EIO;
goto err_free_txdesc;
}
skb_cb->htt.txbuf = dma_pool_alloc(htt->tx_pool, GFP_ATOMIC,
&paddr);
if (!skb_cb->htt.txbuf)
goto err_free_msdu_id;
skb_cb->htt.txbuf_paddr = paddr;
if (use_frags) {
skb_cb->htt.frag_len = sizeof(*tx_frags) * 2;
skb_cb->htt.pad_len = (unsigned long)msdu->data -
round_down((unsigned long)msdu->data, 4);
skb_cb->paddr = dma_map_single(dev, msdu->data, msdu->len,
DMA_TO_DEVICE);
res = dma_mapping_error(dev, skb_cb->paddr);
if (res)
goto err_free_txbuf;
skb_push(msdu, skb_cb->htt.frag_len + skb_cb->htt.pad_len);
} else {
skb_cb->htt.frag_len = 0;
skb_cb->htt.pad_len = 0;
}
if (likely(use_frags)) {
frags = skb_cb->htt.txbuf->frags;
res = ath10k_skb_map(dev, msdu);
if (res)
goto err_pull_txfrag;
if (use_frags) {
dma_sync_single_for_cpu(dev, skb_cb->paddr, msdu->len,
DMA_TO_DEVICE);
/* tx fragment list must be terminated with zero-entry */
tx_frags = (struct htt_data_tx_desc_frag *)msdu->data;
tx_frags[0].paddr = __cpu_to_le32(skb_cb->paddr +
skb_cb->htt.frag_len +
skb_cb->htt.pad_len);
tx_frags[0].len = __cpu_to_le32(msdu->len -
skb_cb->htt.frag_len -
skb_cb->htt.pad_len);
tx_frags[1].paddr = __cpu_to_le32(0);
tx_frags[1].len = __cpu_to_le32(0);
dma_sync_single_for_device(dev, skb_cb->paddr, msdu->len,
DMA_TO_DEVICE);
}
frags[0].paddr = __cpu_to_le32(skb_cb->paddr);
frags[0].len = __cpu_to_le32(msdu->len);
frags[1].paddr = 0;
frags[1].len = 0;
ath10k_dbg(ATH10K_DBG_HTT, "tx-msdu 0x%llx\n",
(unsigned long long) ATH10K_SKB_CB(msdu)->paddr);
ath10k_dbg_dump(ATH10K_DBG_HTT_DUMP, NULL, "tx-msdu: ",
msdu->data, msdu->len);
flags0 |= SM(ATH10K_HW_TXRX_NATIVE_WIFI,
HTT_DATA_TX_DESC_FLAGS0_PKT_TYPE);
skb_put(txdesc, desc_len);
cmd = (struct htt_cmd *)txdesc->data;
frags_paddr = skb_cb->htt.txbuf_paddr;
} else {
flags0 |= SM(ATH10K_HW_TXRX_MGMT,
HTT_DATA_TX_DESC_FLAGS0_PKT_TYPE);
tid = ATH10K_SKB_CB(msdu)->htt.tid;
frags_paddr = skb_cb->paddr;
}
ath10k_dbg(ATH10K_DBG_HTT, "htt data tx using tid %hhu\n", tid);
/* Normally all commands go through HTC which manages tx credits for
* each endpoint and notifies when tx is completed.
*
* HTT endpoint is creditless so there's no need to care about HTC
* flags. In that case it is trivial to fill the HTC header here.
*
* MSDU transmission is considered completed upon HTT event. This
* implies no relevant resources can be freed until after the event is
* received. That's why HTC tx completion handler itself is ignored by
* setting NULL to transfer_context for all sg items.
*
* There is simply no point in pushing HTT TX_FRM through HTC tx path
* as it's a waste of resources. By bypassing HTC it is possible to
* avoid extra memory allocations, compress data structures and thus
* improve performance. */
skb_cb->htt.txbuf->htc_hdr.eid = htt->eid;
skb_cb->htt.txbuf->htc_hdr.len = __cpu_to_le16(
sizeof(skb_cb->htt.txbuf->cmd_hdr) +
sizeof(skb_cb->htt.txbuf->cmd_tx) +
prefetch_len);
skb_cb->htt.txbuf->htc_hdr.flags = 0;
flags0 = 0;
if (!ieee80211_has_protected(hdr->frame_control))
flags0 |= HTT_DATA_TX_DESC_FLAGS0_NO_ENCRYPT;
flags0 |= HTT_DATA_TX_DESC_FLAGS0_MAC_HDR_PRESENT;
if (use_frags)
flags0 |= SM(ATH10K_HW_TXRX_NATIVE_WIFI,
HTT_DATA_TX_DESC_FLAGS0_PKT_TYPE);
else
flags0 |= SM(ATH10K_HW_TXRX_MGMT,
HTT_DATA_TX_DESC_FLAGS0_PKT_TYPE);
flags0 |= HTT_DATA_TX_DESC_FLAGS0_MAC_HDR_PRESENT;
flags1 = 0;
flags1 |= SM((u16)vdev_id, HTT_DATA_TX_DESC_FLAGS1_VDEV_ID);
flags1 |= SM((u16)tid, HTT_DATA_TX_DESC_FLAGS1_EXT_TID);
flags1 |= HTT_DATA_TX_DESC_FLAGS1_CKSUM_L3_OFFLOAD;
flags1 |= HTT_DATA_TX_DESC_FLAGS1_CKSUM_L4_OFFLOAD;
cmd->hdr.msg_type = HTT_H2T_MSG_TYPE_TX_FRM;
cmd->data_tx.flags0 = flags0;
cmd->data_tx.flags1 = __cpu_to_le16(flags1);
cmd->data_tx.len = __cpu_to_le16(msdu->len -
skb_cb->htt.frag_len -
skb_cb->htt.pad_len);
cmd->data_tx.id = __cpu_to_le16(msdu_id);
cmd->data_tx.frags_paddr = __cpu_to_le32(skb_cb->paddr);
cmd->data_tx.peerid = __cpu_to_le32(HTT_INVALID_PEERID);
memcpy(cmd->data_tx.prefetch, hdr, prefetch_len);
skb_cb->htt.txbuf->cmd_hdr.msg_type = HTT_H2T_MSG_TYPE_TX_FRM;
skb_cb->htt.txbuf->cmd_tx.flags0 = flags0;
skb_cb->htt.txbuf->cmd_tx.flags1 = __cpu_to_le16(flags1);
skb_cb->htt.txbuf->cmd_tx.len = __cpu_to_le16(msdu->len);
skb_cb->htt.txbuf->cmd_tx.id = __cpu_to_le16(msdu_id);
skb_cb->htt.txbuf->cmd_tx.frags_paddr = __cpu_to_le32(frags_paddr);
skb_cb->htt.txbuf->cmd_tx.peerid = __cpu_to_le32(HTT_INVALID_PEERID);
ath10k_dbg(ATH10K_DBG_HTT,
"htt tx flags0 %hhu flags1 %hu len %d id %hu frags_paddr %08x, msdu_paddr %08x vdev %hhu tid %hhu\n",
flags0, flags1, msdu->len, msdu_id, frags_paddr,
(u32)skb_cb->paddr, vdev_id, tid);
ath10k_dbg_dump(ATH10K_DBG_HTT_DUMP, NULL, "htt tx msdu: ",
msdu->data, msdu->len);
res = ath10k_htc_send(&htt->ar->htc, htt->eid, txdesc);
sg_items[0].transfer_id = 0;
sg_items[0].transfer_context = NULL;
sg_items[0].vaddr = &skb_cb->htt.txbuf->htc_hdr;
sg_items[0].paddr = skb_cb->htt.txbuf_paddr +
sizeof(skb_cb->htt.txbuf->frags);
sg_items[0].len = sizeof(skb_cb->htt.txbuf->htc_hdr) +
sizeof(skb_cb->htt.txbuf->cmd_hdr) +
sizeof(skb_cb->htt.txbuf->cmd_tx);
sg_items[1].transfer_id = 0;
sg_items[1].transfer_context = NULL;
sg_items[1].vaddr = msdu->data;
sg_items[1].paddr = skb_cb->paddr;
sg_items[1].len = prefetch_len;
res = ath10k_hif_tx_sg(htt->ar,
htt->ar->htc.endpoint[htt->eid].ul_pipe_id,
sg_items, ARRAY_SIZE(sg_items));
if (res)
goto err_unmap_msdu;
return 0;
err_unmap_msdu:
ath10k_skb_unmap(dev, msdu);
err_pull_txfrag:
skb_pull(msdu, skb_cb->htt.frag_len + skb_cb->htt.pad_len);
err_free_txdesc:
dev_kfree_skb_any(txdesc);
dma_unmap_single(dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
err_free_txbuf:
dma_pool_free(htt->tx_pool,
skb_cb->htt.txbuf,
skb_cb->htt.txbuf_paddr);
err_free_msdu_id:
spin_lock_bh(&htt->tx_lock);
htt->pending_tx[msdu_id] = NULL;
......
......@@ -58,12 +58,10 @@ static DEFINE_PCI_DEVICE_TABLE(ath10k_pci_id_table) = {
static int ath10k_pci_diag_read_access(struct ath10k *ar, u32 address,
u32 *data);
static void ath10k_pci_process_ce(struct ath10k *ar);
static int ath10k_pci_post_rx(struct ath10k *ar);
static int ath10k_pci_post_rx_pipe(struct ath10k_pci_pipe *pipe_info,
int num);
static void ath10k_pci_rx_pipe_cleanup(struct ath10k_pci_pipe *pipe_info);
static void ath10k_pci_stop_ce(struct ath10k *ar);
static int ath10k_pci_cold_reset(struct ath10k *ar);
static int ath10k_pci_warm_reset(struct ath10k *ar);
static int ath10k_pci_wait_for_target_init(struct ath10k *ar);
......@@ -74,7 +72,6 @@ static void ath10k_pci_free_irq(struct ath10k *ar);
static int ath10k_pci_bmi_wait(struct ath10k_ce_pipe *tx_pipe,
struct ath10k_ce_pipe *rx_pipe,
struct bmi_xfer *xfer);
static void ath10k_pci_cleanup_ce(struct ath10k *ar);
static const struct ce_attr host_ce_config_wlan[] = {
/* CE0: host->target HTC control and raw streams */
......@@ -679,34 +676,12 @@ void ath10k_do_pci_sleep(struct ath10k *ar)
}
}
/*
* FIXME: Handle OOM properly.
*/
static inline
struct ath10k_pci_compl *get_free_compl(struct ath10k_pci_pipe *pipe_info)
{
struct ath10k_pci_compl *compl = NULL;
spin_lock_bh(&pipe_info->pipe_lock);
if (list_empty(&pipe_info->compl_free)) {
ath10k_warn("Completion buffers are full\n");
goto exit;
}
compl = list_first_entry(&pipe_info->compl_free,
struct ath10k_pci_compl, list);
list_del(&compl->list);
exit:
spin_unlock_bh(&pipe_info->pipe_lock);
return compl;
}
/* Called by lower (CE) layer when a send to Target completes. */
static void ath10k_pci_ce_send_done(struct ath10k_ce_pipe *ce_state)
{
struct ath10k *ar = ce_state->ar;
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
struct ath10k_pci_pipe *pipe_info = &ar_pci->pipe_info[ce_state->id];
struct ath10k_pci_compl *compl;
struct ath10k_hif_cb *cb = &ar_pci->msg_callbacks_current;
void *transfer_context;
u32 ce_data;
unsigned int nbytes;
......@@ -715,27 +690,12 @@ static void ath10k_pci_ce_send_done(struct ath10k_ce_pipe *ce_state)
while (ath10k_ce_completed_send_next(ce_state, &transfer_context,
&ce_data, &nbytes,
&transfer_id) == 0) {
compl = get_free_compl(pipe_info);
if (!compl)
break;
compl->state = ATH10K_PCI_COMPL_SEND;
compl->ce_state = ce_state;
compl->pipe_info = pipe_info;
compl->skb = transfer_context;
compl->nbytes = nbytes;
compl->transfer_id = transfer_id;
compl->flags = 0;
/* no need to call tx completion for NULL pointers */
if (transfer_context == NULL)
continue;
/*
* Add the completion to the processing queue.
*/
spin_lock_bh(&ar_pci->compl_lock);
list_add_tail(&compl->list, &ar_pci->compl_process);
spin_unlock_bh(&ar_pci->compl_lock);
cb->tx_completion(ar, transfer_context, transfer_id);
}
ath10k_pci_process_ce(ar);
}
/* Called by lower (CE) layer when data is received from the Target. */
......@@ -744,77 +704,100 @@ static void ath10k_pci_ce_recv_data(struct ath10k_ce_pipe *ce_state)
struct ath10k *ar = ce_state->ar;
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
struct ath10k_pci_pipe *pipe_info = &ar_pci->pipe_info[ce_state->id];
struct ath10k_pci_compl *compl;
struct ath10k_hif_cb *cb = &ar_pci->msg_callbacks_current;
struct sk_buff *skb;
void *transfer_context;
u32 ce_data;
unsigned int nbytes;
unsigned int nbytes, max_nbytes;
unsigned int transfer_id;
unsigned int flags;
int err;
while (ath10k_ce_completed_recv_next(ce_state, &transfer_context,
&ce_data, &nbytes, &transfer_id,
&flags) == 0) {
compl = get_free_compl(pipe_info);
if (!compl)
break;
compl->state = ATH10K_PCI_COMPL_RECV;
compl->ce_state = ce_state;
compl->pipe_info = pipe_info;
compl->skb = transfer_context;
compl->nbytes = nbytes;
compl->transfer_id = transfer_id;
compl->flags = flags;
err = ath10k_pci_post_rx_pipe(pipe_info, 1);
if (unlikely(err)) {
/* FIXME: retry */
ath10k_warn("failed to replenish CE rx ring %d: %d\n",
pipe_info->pipe_num, err);
}
skb = transfer_context;
max_nbytes = skb->len + skb_tailroom(skb);
dma_unmap_single(ar->dev, ATH10K_SKB_CB(skb)->paddr,
skb->len + skb_tailroom(skb),
DMA_FROM_DEVICE);
/*
* Add the completion to the processing queue.
*/
spin_lock_bh(&ar_pci->compl_lock);
list_add_tail(&compl->list, &ar_pci->compl_process);
spin_unlock_bh(&ar_pci->compl_lock);
}
max_nbytes, DMA_FROM_DEVICE);
ath10k_pci_process_ce(ar);
if (unlikely(max_nbytes < nbytes)) {
ath10k_warn("rxed more than expected (nbytes %d, max %d)",
nbytes, max_nbytes);
dev_kfree_skb_any(skb);
continue;
}
skb_put(skb, nbytes);
cb->rx_completion(ar, skb, pipe_info->pipe_num);
}
}
/* Send the first nbytes bytes of the buffer */
static int ath10k_pci_hif_send_head(struct ath10k *ar, u8 pipe_id,
unsigned int transfer_id,
unsigned int bytes, struct sk_buff *nbuf)
static int ath10k_pci_hif_tx_sg(struct ath10k *ar, u8 pipe_id,
struct ath10k_hif_sg_item *items, int n_items)
{
struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(nbuf);
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
struct ath10k_pci_pipe *pipe_info = &(ar_pci->pipe_info[pipe_id]);
struct ath10k_ce_pipe *ce_hdl = pipe_info->ce_hdl;
unsigned int len;
u32 flags = 0;
int ret;
struct ath10k_pci_pipe *pci_pipe = &ar_pci->pipe_info[pipe_id];
struct ath10k_ce_pipe *ce_pipe = pci_pipe->ce_hdl;
struct ath10k_ce_ring *src_ring = ce_pipe->src_ring;
unsigned int nentries_mask = src_ring->nentries_mask;
unsigned int sw_index = src_ring->sw_index;
unsigned int write_index = src_ring->write_index;
int err, i;
len = min(bytes, nbuf->len);
bytes -= len;
spin_lock_bh(&ar_pci->ce_lock);
if (len & 3)
ath10k_warn("skb not aligned to 4-byte boundary (%d)\n", len);
if (unlikely(CE_RING_DELTA(nentries_mask,
write_index, sw_index - 1) < n_items)) {
err = -ENOBUFS;
goto unlock;
}
ath10k_dbg(ATH10K_DBG_PCI,
"pci send data vaddr %p paddr 0x%llx len %d as %d bytes\n",
nbuf->data, (unsigned long long) skb_cb->paddr,
nbuf->len, len);
ath10k_dbg_dump(ATH10K_DBG_PCI_DUMP, NULL,
"ath10k tx: data: ",
nbuf->data, nbuf->len);
ret = ath10k_ce_send(ce_hdl, nbuf, skb_cb->paddr, len, transfer_id,
flags);
if (ret)
ath10k_warn("failed to send sk_buff to CE: %p\n", nbuf);
for (i = 0; i < n_items - 1; i++) {
ath10k_dbg(ATH10K_DBG_PCI,
"pci tx item %d paddr 0x%08x len %d n_items %d\n",
i, items[i].paddr, items[i].len, n_items);
ath10k_dbg_dump(ATH10K_DBG_PCI_DUMP, NULL, "item data: ",
items[i].vaddr, items[i].len);
return ret;
err = ath10k_ce_send_nolock(ce_pipe,
items[i].transfer_context,
items[i].paddr,
items[i].len,
items[i].transfer_id,
CE_SEND_FLAG_GATHER);
if (err)
goto unlock;
}
/* `i` is equal to `n_items -1` after for() */
ath10k_dbg(ATH10K_DBG_PCI,
"pci tx item %d paddr 0x%08x len %d n_items %d\n",
i, items[i].paddr, items[i].len, n_items);
ath10k_dbg_dump(ATH10K_DBG_PCI_DUMP, NULL, "item data: ",
items[i].vaddr, items[i].len);
err = ath10k_ce_send_nolock(ce_pipe,
items[i].transfer_context,
items[i].paddr,
items[i].len,
items[i].transfer_id,
0);
if (err)
goto unlock;
err = 0;
unlock:
spin_unlock_bh(&ar_pci->ce_lock);
return err;
}
static u16 ath10k_pci_hif_get_free_queue_number(struct ath10k *ar, u8 pipe)
......@@ -903,52 +886,6 @@ static void ath10k_pci_hif_set_callbacks(struct ath10k *ar,
sizeof(ar_pci->msg_callbacks_current));
}
static int ath10k_pci_alloc_compl(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
const struct ce_attr *attr;
struct ath10k_pci_pipe *pipe_info;
struct ath10k_pci_compl *compl;
int i, pipe_num, completions;
spin_lock_init(&ar_pci->compl_lock);
INIT_LIST_HEAD(&ar_pci->compl_process);
for (pipe_num = 0; pipe_num < CE_COUNT; pipe_num++) {
pipe_info = &ar_pci->pipe_info[pipe_num];
spin_lock_init(&pipe_info->pipe_lock);
INIT_LIST_HEAD(&pipe_info->compl_free);
/* Handle Diagnostic CE specially */
if (pipe_info->ce_hdl == ar_pci->ce_diag)
continue;
attr = &host_ce_config_wlan[pipe_num];
completions = 0;
if (attr->src_nentries)
completions += attr->src_nentries;
if (attr->dest_nentries)
completions += attr->dest_nentries;
for (i = 0; i < completions; i++) {
compl = kmalloc(sizeof(*compl), GFP_KERNEL);
if (!compl) {
ath10k_warn("No memory for completion state\n");
ath10k_pci_cleanup_ce(ar);
return -ENOMEM;
}
compl->state = ATH10K_PCI_COMPL_FREE;
list_add_tail(&compl->list, &pipe_info->compl_free);
}
}
return 0;
}
static int ath10k_pci_setup_ce_irq(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
......@@ -993,147 +930,6 @@ static void ath10k_pci_kill_tasklet(struct ath10k *ar)
tasklet_kill(&ar_pci->pipe_info[i].intr);
}
static void ath10k_pci_stop_ce(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
struct ath10k_pci_compl *compl;
struct sk_buff *skb;
/* Mark pending completions as aborted, so that upper layers free up
* their associated resources */
spin_lock_bh(&ar_pci->compl_lock);
list_for_each_entry(compl, &ar_pci->compl_process, list) {
skb = compl->skb;
ATH10K_SKB_CB(skb)->is_aborted = true;
}
spin_unlock_bh(&ar_pci->compl_lock);
}
static void ath10k_pci_cleanup_ce(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
struct ath10k_pci_compl *compl, *tmp;
struct ath10k_pci_pipe *pipe_info;
struct sk_buff *netbuf;
int pipe_num;
/* Free pending completions. */
spin_lock_bh(&ar_pci->compl_lock);
if (!list_empty(&ar_pci->compl_process))
ath10k_warn("pending completions still present! possible memory leaks.\n");
list_for_each_entry_safe(compl, tmp, &ar_pci->compl_process, list) {
list_del(&compl->list);
netbuf = compl->skb;
dev_kfree_skb_any(netbuf);
kfree(compl);
}
spin_unlock_bh(&ar_pci->compl_lock);
/* Free unused completions for each pipe. */
for (pipe_num = 0; pipe_num < CE_COUNT; pipe_num++) {
pipe_info = &ar_pci->pipe_info[pipe_num];
spin_lock_bh(&pipe_info->pipe_lock);
list_for_each_entry_safe(compl, tmp,
&pipe_info->compl_free, list) {
list_del(&compl->list);
kfree(compl);
}
spin_unlock_bh(&pipe_info->pipe_lock);
}
}
static void ath10k_pci_process_ce(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ar->hif.priv;
struct ath10k_hif_cb *cb = &ar_pci->msg_callbacks_current;
struct ath10k_pci_compl *compl;
struct sk_buff *skb;
unsigned int nbytes;
int ret, send_done = 0;
/* Upper layers aren't ready to handle tx/rx completions in parallel so
* we must serialize all completion processing. */
spin_lock_bh(&ar_pci->compl_lock);
if (ar_pci->compl_processing) {
spin_unlock_bh(&ar_pci->compl_lock);
return;
}
ar_pci->compl_processing = true;
spin_unlock_bh(&ar_pci->compl_lock);
for (;;) {
spin_lock_bh(&ar_pci->compl_lock);
if (list_empty(&ar_pci->compl_process)) {
spin_unlock_bh(&ar_pci->compl_lock);
break;
}
compl = list_first_entry(&ar_pci->compl_process,
struct ath10k_pci_compl, list);
list_del(&compl->list);
spin_unlock_bh(&ar_pci->compl_lock);
switch (compl->state) {
case ATH10K_PCI_COMPL_SEND:
cb->tx_completion(ar,
compl->skb,
compl->transfer_id);
send_done = 1;
break;
case ATH10K_PCI_COMPL_RECV:
ret = ath10k_pci_post_rx_pipe(compl->pipe_info, 1);
if (ret) {
ath10k_warn("failed to post RX buffer for pipe %d: %d\n",
compl->pipe_info->pipe_num, ret);
break;
}
skb = compl->skb;
nbytes = compl->nbytes;
ath10k_dbg(ATH10K_DBG_PCI,
"ath10k_pci_ce_recv_data netbuf=%p nbytes=%d\n",
skb, nbytes);
ath10k_dbg_dump(ATH10K_DBG_PCI_DUMP, NULL,
"ath10k rx: ", skb->data, nbytes);
if (skb->len + skb_tailroom(skb) >= nbytes) {
skb_trim(skb, 0);
skb_put(skb, nbytes);
cb->rx_completion(ar, skb,
compl->pipe_info->pipe_num);
} else {
ath10k_warn("rxed more than expected (nbytes %d, max %d)",
nbytes,
skb->len + skb_tailroom(skb));
}
break;
case ATH10K_PCI_COMPL_FREE:
ath10k_warn("free completion cannot be processed\n");
break;
default:
ath10k_warn("invalid completion state (%d)\n",
compl->state);
break;
}
compl->state = ATH10K_PCI_COMPL_FREE;
/*
* Add completion back to the pipe's free list.
*/
spin_lock_bh(&compl->pipe_info->pipe_lock);
list_add_tail(&compl->list, &compl->pipe_info->compl_free);
spin_unlock_bh(&compl->pipe_info->pipe_lock);
}
spin_lock_bh(&ar_pci->compl_lock);
ar_pci->compl_processing = false;
spin_unlock_bh(&ar_pci->compl_lock);
}
/* TODO - temporary mapping while we have too few CE's */
static int ath10k_pci_hif_map_service_to_pipe(struct ath10k *ar,
u16 service_id, u8 *ul_pipe,
......@@ -1305,17 +1101,11 @@ static int ath10k_pci_hif_start(struct ath10k *ar)
ath10k_pci_free_early_irq(ar);
ath10k_pci_kill_tasklet(ar);
ret = ath10k_pci_alloc_compl(ar);
if (ret) {
ath10k_warn("failed to allocate CE completions: %d\n", ret);
goto err_early_irq;
}
ret = ath10k_pci_request_irq(ar);
if (ret) {
ath10k_warn("failed to post RX buffers for all pipes: %d\n",
ret);
goto err_free_compl;
goto err_early_irq;
}
ret = ath10k_pci_setup_ce_irq(ar);
......@@ -1339,10 +1129,6 @@ static int ath10k_pci_hif_start(struct ath10k *ar)
ath10k_ce_disable_interrupts(ar);
ath10k_pci_free_irq(ar);
ath10k_pci_kill_tasklet(ar);
ath10k_pci_stop_ce(ar);
ath10k_pci_process_ce(ar);
err_free_compl:
ath10k_pci_cleanup_ce(ar);
err_early_irq:
/* Though there should be no interrupts (device was reset)
* power_down() expects the early IRQ to be installed as per the
......@@ -1413,18 +1199,10 @@ static void ath10k_pci_tx_pipe_cleanup(struct ath10k_pci_pipe *pipe_info)
while (ath10k_ce_cancel_send_next(ce_hdl, (void **)&netbuf,
&ce_data, &nbytes, &id) == 0) {
/*
* Indicate the completion to higer layer to free
* the buffer
*/
if (!netbuf) {
ath10k_warn("invalid sk_buff on CE %d - NULL pointer. firmware crashed?\n",
ce_hdl->id);
/* no need to call tx completion for NULL pointers */
if (!netbuf)
continue;
}
ATH10K_SKB_CB(netbuf)->is_aborted = true;
ar_pci->msg_callbacks_current.tx_completion(ar,
netbuf,
id);
......@@ -1482,7 +1260,6 @@ static void ath10k_pci_hif_stop(struct ath10k *ar)
ath10k_pci_free_irq(ar);
ath10k_pci_kill_tasklet(ar);
ath10k_pci_stop_ce(ar);
ret = ath10k_pci_request_early_irq(ar);
if (ret)
......@@ -1492,8 +1269,6 @@ static void ath10k_pci_hif_stop(struct ath10k *ar)
* not DMA nor interrupt. We process the leftovers and then free
* everything else up. */
ath10k_pci_process_ce(ar);
ath10k_pci_cleanup_ce(ar);
ath10k_pci_buffer_cleanup(ar);
/* Make the sure the device won't access any structures on the host by
......@@ -2269,7 +2044,7 @@ static int ath10k_pci_hif_resume(struct ath10k *ar)
#endif
static const struct ath10k_hif_ops ath10k_pci_hif_ops = {
.send_head = ath10k_pci_hif_send_head,
.tx_sg = ath10k_pci_hif_tx_sg,
.exchange_bmi_msg = ath10k_pci_hif_exchange_bmi_msg,
.start = ath10k_pci_hif_start,
.stop = ath10k_pci_hif_stop,
......
......@@ -43,23 +43,6 @@ struct bmi_xfer {
u32 resp_len;
};
enum ath10k_pci_compl_state {
ATH10K_PCI_COMPL_FREE = 0,
ATH10K_PCI_COMPL_SEND,
ATH10K_PCI_COMPL_RECV,
};
struct ath10k_pci_compl {
struct list_head list;
enum ath10k_pci_compl_state state;
struct ath10k_ce_pipe *ce_state;
struct ath10k_pci_pipe *pipe_info;
struct sk_buff *skb;
unsigned int nbytes;
unsigned int transfer_id;
unsigned int flags;
};
/*
* PCI-specific Target state
*
......@@ -175,9 +158,6 @@ struct ath10k_pci_pipe {
/* protects compl_free and num_send_allowed */
spinlock_t pipe_lock;
/* List of free CE completion slots */
struct list_head compl_free;
struct ath10k_pci *ar_pci;
struct tasklet_struct intr;
};
......@@ -205,14 +185,6 @@ struct ath10k_pci {
atomic_t keep_awake_count;
bool verified_awake;
/* List of CE completions to be processed */
struct list_head compl_process;
/* protects compl_processing and compl_process */
spinlock_t compl_lock;
bool compl_processing;
struct ath10k_pci_pipe pipe_info[CE_COUNT_MAX];
struct ath10k_hif_cb msg_callbacks_current;
......
......@@ -51,7 +51,8 @@ void ath10k_txrx_tx_unref(struct ath10k_htt *htt,
struct ieee80211_tx_info *info;
struct ath10k_skb_cb *skb_cb;
struct sk_buff *msdu;
int ret;
lockdep_assert_held(&htt->tx_lock);
ath10k_dbg(ATH10K_DBG_HTT, "htt tx completion msdu_id %u discard %d no_ack %d\n",
tx_done->msdu_id, !!tx_done->discard, !!tx_done->no_ack);
......@@ -65,12 +66,12 @@ void ath10k_txrx_tx_unref(struct ath10k_htt *htt,
msdu = htt->pending_tx[tx_done->msdu_id];
skb_cb = ATH10K_SKB_CB(msdu);
ret = ath10k_skb_unmap(dev, msdu);
if (ret)
ath10k_warn("data skb unmap failed (%d)\n", ret);
dma_unmap_single(dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
if (skb_cb->htt.frag_len)
skb_pull(msdu, skb_cb->htt.frag_len + skb_cb->htt.pad_len);
if (skb_cb->htt.txbuf)
dma_pool_free(htt->tx_pool,
skb_cb->htt.txbuf,
skb_cb->htt.txbuf_paddr);
ath10k_report_offchan_tx(htt->ar, msdu);
......@@ -92,13 +93,11 @@ void ath10k_txrx_tx_unref(struct ath10k_htt *htt,
/* we do not own the msdu anymore */
exit:
spin_lock_bh(&htt->tx_lock);
htt->pending_tx[tx_done->msdu_id] = NULL;
ath10k_htt_tx_free_msdu_id(htt, tx_done->msdu_id);
__ath10k_htt_tx_dec_pending(htt);
if (htt->num_pending_tx == 0)
wake_up(&htt->empty_tx_wq);
spin_unlock_bh(&htt->tx_lock);
}
static const u8 rx_legacy_rate_idx[] = {
......@@ -258,6 +257,12 @@ void ath10k_process_rx(struct ath10k *ar, struct htt_rx_info *info)
status->band = ch->band;
status->freq = ch->center_freq;
if (info->rate.info0 & HTT_RX_INDICATION_INFO0_END_VALID) {
/* TSF available only in 32-bit */
status->mactime = info->tsf & 0xffffffff;
status->flag |= RX_FLAG_MACTIME_END;
}
ath10k_dbg(ATH10K_DBG_DATA,
"rx skb %p len %u %s%s%s%s%s %srate_idx %u vht_nss %u freq %u band %u flag 0x%x fcs-err %i\n",
info->skb,
......@@ -378,7 +383,8 @@ void ath10k_peer_unmap_event(struct ath10k_htt *htt,
spin_lock_bh(&ar->data_lock);
peer = ath10k_peer_find_by_id(ar, ev->peer_id);
if (!peer) {
ath10k_warn("unknown peer id %d\n", ev->peer_id);
ath10k_warn("peer-unmap-event: unknown peer id %d\n",
ev->peer_id);
goto exit;
}
......
......@@ -1360,7 +1360,7 @@ static void ath10k_wmi_event_host_swba(struct ath10k *ar, struct sk_buff *skb)
struct wmi_bcn_info *bcn_info;
struct ath10k_vif *arvif;
struct sk_buff *bcn;
int vdev_id = 0;
int ret, vdev_id = 0;
ath10k_dbg(ATH10K_DBG_MGMT, "WMI_HOST_SWBA_EVENTID\n");
......@@ -1435,16 +1435,27 @@ static void ath10k_wmi_event_host_swba(struct ath10k *ar, struct sk_buff *skb)
ath10k_warn("SWBA overrun on vdev %d\n",
arvif->vdev_id);
ath10k_skb_unmap(ar->dev, arvif->beacon);
dma_unmap_single(arvif->ar->dev,
ATH10K_SKB_CB(arvif->beacon)->paddr,
arvif->beacon->len, DMA_TO_DEVICE);
dev_kfree_skb_any(arvif->beacon);
}
ath10k_skb_map(ar->dev, bcn);
ATH10K_SKB_CB(bcn)->paddr = dma_map_single(arvif->ar->dev,
bcn->data, bcn->len,
DMA_TO_DEVICE);
ret = dma_mapping_error(arvif->ar->dev,
ATH10K_SKB_CB(bcn)->paddr);
if (ret) {
ath10k_warn("failed to map beacon: %d\n", ret);
goto skip;
}
arvif->beacon = bcn;
arvif->beacon_sent = false;
ath10k_wmi_tx_beacon_nowait(arvif);
skip:
spin_unlock_bh(&ar->data_lock);
}
}
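The open-coded replacement for ath10k_skb_map()/ath10k_skb_unmap() above follows the usual streaming-DMA discipline: map the beacon with dma_map_single(), check dma_mapping_error(), and bail out (the goto skip) instead of handing an unmapped buffer to the hardware. A minimal kernel-style sketch of that pattern, with a hypothetical helper name that is not part of the driver:

#include <linux/dma-mapping.h>
#include <linux/skbuff.h>

/* Hypothetical helper; illustrates the map-then-check pattern used above. */
static int example_map_beacon(struct device *dev, struct sk_buff *bcn,
			      dma_addr_t *paddr)
{
	*paddr = dma_map_single(dev, bcn->data, bcn->len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, *paddr))
		return -EIO;	/* caller must not pass the skb to hardware */

	return 0;
}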
......@@ -3382,7 +3393,6 @@ int ath10k_wmi_scan_chan_list(struct ath10k *ar,
ci->max_power = ch->max_power;
ci->reg_power = ch->max_reg_power;
ci->antenna_max = ch->max_antenna_gain;
ci->antenna_max = 0;
/* mode & flags share storage */
ci->mode = ch->mode;
......
......@@ -751,6 +751,9 @@ ath5k_txbuf_setup(struct ath5k_hw *ah, struct ath5k_buf *bf,
bf->skbaddr = dma_map_single(ah->dev, skb->data, skb->len,
DMA_TO_DEVICE);
if (dma_mapping_error(ah->dev, bf->skbaddr))
return -ENOSPC;
ieee80211_get_tx_rates(info->control.vif, (control) ? control->sta : NULL, skb, bf->rates,
ARRAY_SIZE(bf->rates));
......
......@@ -52,7 +52,8 @@ obj-$(CONFIG_ATH9K_HW) += ath9k_hw.o
obj-$(CONFIG_ATH9K_COMMON) += ath9k_common.o
ath9k_common-y:= common.o \
common-init.o
common-init.o \
common-beacon.o
ath9k_htc-y += htc_hst.o \
hif_usb.o \
......
......@@ -39,6 +39,10 @@ static const struct platform_device_id ath9k_platform_id_table[] = {
.name = "qca955x_wmac",
.driver_data = AR9300_DEVID_QCA955X,
},
{
.name = "qca953x_wmac",
.driver_data = AR9300_DEVID_AR953X,
},
{},
};
......@@ -82,6 +86,7 @@ static int ath_ahb_probe(struct platform_device *pdev)
int irq;
int ret = 0;
struct ath_hw *ah;
struct ath_common *common;
char hw_name[64];
if (!dev_get_platdata(&pdev->dev)) {
......@@ -124,9 +129,6 @@ static int ath_ahb_probe(struct platform_device *pdev)
sc->mem = mem;
sc->irq = irq;
/* Will be cleared in ath9k_start() */
set_bit(SC_OP_INVALID, &sc->sc_flags);
ret = request_irq(irq, ath_isr, IRQF_SHARED, "ath9k", sc);
if (ret) {
dev_err(&pdev->dev, "request_irq failed\n");
......@@ -144,6 +146,9 @@ static int ath_ahb_probe(struct platform_device *pdev)
wiphy_info(hw->wiphy, "%s mem=0x%lx, irq=%d\n",
hw_name, (unsigned long)mem, irq);
common = ath9k_hw_common(sc->sc_ah);
/* Will be cleared in ath9k_start() */
set_bit(ATH_OP_INVALID, &common->op_flags);
return 0;
err_irq:
......
......@@ -318,17 +318,6 @@ void ath9k_ani_reset(struct ath_hw *ah, bool is_scanning)
BUG_ON(aniState == NULL);
ah->stats.ast_ani_reset++;
/* only allow a subset of functions in AP mode */
if (ah->opmode == NL80211_IFTYPE_AP) {
if (IS_CHAN_2GHZ(chan)) {
ah->ani_function = (ATH9K_ANI_SPUR_IMMUNITY_LEVEL |
ATH9K_ANI_FIRSTEP_LEVEL);
if (AR_SREV_9300_20_OR_LATER(ah))
ah->ani_function |= ATH9K_ANI_MRC_CCK;
} else
ah->ani_function = 0;
}
ofdm_nil = max_t(int, ATH9K_ANI_OFDM_DEF_LEVEL,
aniState->ofdmNoiseImmunityLevel);
cck_nil = max_t(int, ATH9K_ANI_CCK_DEF_LEVEL,
......
......@@ -26,10 +26,6 @@ static const int firstep_table[] =
/* level: 0 1 2 3 4 5 6 7 8 */
{ -4, -2, 0, 2, 4, 6, 8, 10, 12 }; /* lvl 0-8, default 2 */
static const int cycpwrThr1_table[] =
/* level: 0 1 2 3 4 5 6 7 8 */
{ -6, -4, -2, 0, 2, 4, 6, 8 }; /* lvl 0-7, default 3 */
/*
* register values to turn OFDM weak signal detection OFF
*/
......@@ -921,7 +917,7 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah,
struct ath_common *common = ath9k_hw_common(ah);
struct ath9k_channel *chan = ah->curchan;
struct ar5416AniState *aniState = &ah->ani;
s32 value, value2;
s32 value;
switch (cmd & ah->ani_function) {
case ATH9K_ANI_OFDM_WEAK_SIGNAL_DETECTION:{
......@@ -1008,42 +1004,11 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah,
case ATH9K_ANI_FIRSTEP_LEVEL:{
u32 level = param;
if (level >= ARRAY_SIZE(firstep_table)) {
ath_dbg(common, ANI,
"ATH9K_ANI_FIRSTEP_LEVEL: level out of range (%u > %zu)\n",
level, ARRAY_SIZE(firstep_table));
return false;
}
/*
* make register setting relative to default
* from INI file & cap value
*/
value = firstep_table[level] -
firstep_table[ATH9K_ANI_FIRSTEP_LVL] +
aniState->iniDef.firstep;
if (value < ATH9K_SIG_FIRSTEP_SETTING_MIN)
value = ATH9K_SIG_FIRSTEP_SETTING_MIN;
if (value > ATH9K_SIG_FIRSTEP_SETTING_MAX)
value = ATH9K_SIG_FIRSTEP_SETTING_MAX;
value = level * 2;
REG_RMW_FIELD(ah, AR_PHY_FIND_SIG,
AR_PHY_FIND_SIG_FIRSTEP,
value);
/*
* we need to set first step low register too
* make register setting relative to default
* from INI file & cap value
*/
value2 = firstep_table[level] -
firstep_table[ATH9K_ANI_FIRSTEP_LVL] +
aniState->iniDef.firstepLow;
if (value2 < ATH9K_SIG_FIRSTEP_SETTING_MIN)
value2 = ATH9K_SIG_FIRSTEP_SETTING_MIN;
if (value2 > ATH9K_SIG_FIRSTEP_SETTING_MAX)
value2 = ATH9K_SIG_FIRSTEP_SETTING_MAX;
AR_PHY_FIND_SIG_FIRSTEP, value);
REG_RMW_FIELD(ah, AR_PHY_FIND_SIG_LOW,
AR_PHY_FIND_SIG_FIRSTEP_LOW, value2);
AR_PHY_FIND_SIG_FIRSTEP_LOW, value);
if (level != aniState->firstepLevel) {
ath_dbg(common, ANI,
......@@ -1060,7 +1025,7 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah,
aniState->firstepLevel,
level,
ATH9K_ANI_FIRSTEP_LVL,
value2,
value,
aniState->iniDef.firstepLow);
if (level > aniState->firstepLevel)
ah->stats.ast_ani_stepup++;
......@@ -1073,41 +1038,13 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah,
case ATH9K_ANI_SPUR_IMMUNITY_LEVEL:{
u32 level = param;
if (level >= ARRAY_SIZE(cycpwrThr1_table)) {
ath_dbg(common, ANI,
"ATH9K_ANI_SPUR_IMMUNITY_LEVEL: level out of range (%u > %zu)\n",
level, ARRAY_SIZE(cycpwrThr1_table));
return false;
}
/*
* make register setting relative to default
* from INI file & cap value
*/
value = cycpwrThr1_table[level] -
cycpwrThr1_table[ATH9K_ANI_SPUR_IMMUNE_LVL] +
aniState->iniDef.cycpwrThr1;
if (value < ATH9K_SIG_SPUR_IMM_SETTING_MIN)
value = ATH9K_SIG_SPUR_IMM_SETTING_MIN;
if (value > ATH9K_SIG_SPUR_IMM_SETTING_MAX)
value = ATH9K_SIG_SPUR_IMM_SETTING_MAX;
value = (level + 1) * 2;
REG_RMW_FIELD(ah, AR_PHY_TIMING5,
AR_PHY_TIMING5_CYCPWR_THR1,
value);
AR_PHY_TIMING5_CYCPWR_THR1, value);
/*
* set AR_PHY_EXT_CCA for extension channel
* make register setting relative to default
* from INI file & cap value
*/
value2 = cycpwrThr1_table[level] -
cycpwrThr1_table[ATH9K_ANI_SPUR_IMMUNE_LVL] +
aniState->iniDef.cycpwrThr1Ext;
if (value2 < ATH9K_SIG_SPUR_IMM_SETTING_MIN)
value2 = ATH9K_SIG_SPUR_IMM_SETTING_MIN;
if (value2 > ATH9K_SIG_SPUR_IMM_SETTING_MAX)
value2 = ATH9K_SIG_SPUR_IMM_SETTING_MAX;
REG_RMW_FIELD(ah, AR_PHY_EXT_CCA,
AR_PHY_EXT_TIMING5_CYCPWR_THR1, value2);
if (IS_CHAN_HT40(ah->curchan))
REG_RMW_FIELD(ah, AR_PHY_EXT_CCA,
AR_PHY_EXT_TIMING5_CYCPWR_THR1, value);
if (level != aniState->spurImmunityLevel) {
ath_dbg(common, ANI,
......@@ -1124,7 +1061,7 @@ static bool ar5008_hw_ani_control_new(struct ath_hw *ah,
aniState->spurImmunityLevel,
level,
ATH9K_ANI_SPUR_IMMUNE_LVL,
value2,
value,
aniState->iniDef.cycpwrThr1Ext);
if (level > aniState->spurImmunityLevel)
ah->stats.ast_ani_spurup++;
......
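The removed blocks above computed the FIRSTEP and spur-immunity register values relative to the INI defaults and clamped them to the ATH9K_SIG_* bounds; the new code derives them directly from the ANI level. A tiny illustration of the resulting mapping, assumed from the hunk above rather than taken from the driver:

#include <stdio.h>

int main(void)
{
	unsigned int level;

	/* firstep_table has 9 levels (0..8), default level 2 */
	for (level = 0; level <= 8; level++)
		printf("firstep level %u -> register value %u\n",
		       level, level * 2);

	/* spur immunity levels 0..7 */
	for (level = 0; level <= 7; level++)
		printf("spur level %u -> cycpwr_thr1 %u\n",
		       level, (level + 1) * 2);

	return 0;
}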
......@@ -23,8 +23,8 @@
#define COMP_HDR_LEN 4
#define COMP_CKSUM_LEN 2
#define LE16(x) __constant_cpu_to_le16(x)
#define LE32(x) __constant_cpu_to_le32(x)
#define LE16(x) cpu_to_le16(x)
#define LE32(x) cpu_to_le32(x)
/* Local defines to distinguish between extension and control CTL's */
#define EXT_ADDITIVE (0x8000)
......@@ -4792,43 +4792,54 @@ static void ar9003_hw_power_control_override(struct ath_hw *ah,
tempslope:
if (AR_SREV_9550(ah) || AR_SREV_9531(ah)) {
u8 txmask = (eep->baseEepHeader.txrxMask & 0xf0) >> 4;
/*
* AR955x has tempSlope register for each chain.
* Check whether temp_compensation feature is enabled or not.
*/
if (eep->baseEepHeader.featureEnable & 0x1) {
if (frequency < 4000) {
REG_RMW_FIELD(ah, AR_PHY_TPC_19,
AR_PHY_TPC_19_ALPHA_THERM,
eep->base_ext2.tempSlopeLow);
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B1,
AR_PHY_TPC_19_ALPHA_THERM,
temp_slope);
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B2,
AR_PHY_TPC_19_ALPHA_THERM,
eep->base_ext2.tempSlopeHigh);
if (txmask & BIT(0))
REG_RMW_FIELD(ah, AR_PHY_TPC_19,
AR_PHY_TPC_19_ALPHA_THERM,
eep->base_ext2.tempSlopeLow);
if (txmask & BIT(1))
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B1,
AR_PHY_TPC_19_ALPHA_THERM,
temp_slope);
if (txmask & BIT(2))
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B2,
AR_PHY_TPC_19_ALPHA_THERM,
eep->base_ext2.tempSlopeHigh);
} else {
REG_RMW_FIELD(ah, AR_PHY_TPC_19,
AR_PHY_TPC_19_ALPHA_THERM,
temp_slope);
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B1,
AR_PHY_TPC_19_ALPHA_THERM,
temp_slope1);
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B2,
AR_PHY_TPC_19_ALPHA_THERM,
temp_slope2);
if (txmask & BIT(0))
REG_RMW_FIELD(ah, AR_PHY_TPC_19,
AR_PHY_TPC_19_ALPHA_THERM,
temp_slope);
if (txmask & BIT(1))
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B1,
AR_PHY_TPC_19_ALPHA_THERM,
temp_slope1);
if (txmask & BIT(2))
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B2,
AR_PHY_TPC_19_ALPHA_THERM,
temp_slope2);
}
} else {
/*
* If temp compensation is not enabled,
* set all registers to 0.
*/
REG_RMW_FIELD(ah, AR_PHY_TPC_19,
AR_PHY_TPC_19_ALPHA_THERM, 0);
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B1,
AR_PHY_TPC_19_ALPHA_THERM, 0);
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B2,
AR_PHY_TPC_19_ALPHA_THERM, 0);
if (txmask & BIT(0))
REG_RMW_FIELD(ah, AR_PHY_TPC_19,
AR_PHY_TPC_19_ALPHA_THERM, 0);
if (txmask & BIT(1))
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B1,
AR_PHY_TPC_19_ALPHA_THERM, 0);
if (txmask & BIT(2))
REG_RMW_FIELD(ah, AR_PHY_TPC_19_B2,
AR_PHY_TPC_19_ALPHA_THERM, 0);
}
} else {
REG_RMW_FIELD(ah, AR_PHY_TPC_19,
......
......@@ -403,20 +403,10 @@ void ath9k_calculate_iter_data(struct ieee80211_hw *hw,
#define ATH_BCBUF 8
#define ATH_DEFAULT_BINTVAL 100 /* TU */
#define ATH_DEFAULT_BMISS_LIMIT 10
#define IEEE80211_MS_TO_TU(x) (((x) * 1000) / 1024)
#define TSF_TO_TU(_h,_l) \
((((u32)(_h)) << 22) | (((u32)(_l)) >> 10))
struct ath_beacon_config {
int beacon_interval;
u16 dtim_period;
u16 bmiss_timeout;
u8 dtim_count;
bool enable_beacon;
bool ibss_creator;
};
struct ath_beacon {
enum {
OK, /* no change needed */
......@@ -426,11 +416,9 @@ struct ath_beacon {
u32 beaconq;
u32 bmisscnt;
u32 bc_tstamp;
struct ieee80211_vif *bslot[ATH_BCBUF];
int slottime;
int slotupdate;
struct ath9k_tx_queue_info beacon_qi;
struct ath_descdma bdma;
struct ath_txq *cabq;
struct list_head bbuf;
......@@ -697,15 +685,6 @@ void ath_ant_comb_scan(struct ath_softc *sc, struct ath_rx_status *rs);
#define ATH_TXPOWER_MAX 100 /* .5 dBm units */
#define MAX_GTT_CNT 5
enum sc_op_flags {
SC_OP_INVALID,
SC_OP_BEACONS,
SC_OP_ANI_RUN,
SC_OP_PRIM_STA_VIF,
SC_OP_HW_RESET,
SC_OP_SCANNING,
};
/* Powersave flags */
#define PS_WAIT_FOR_BEACON BIT(0)
#define PS_WAIT_FOR_CAB BIT(1)
......@@ -735,7 +714,6 @@ struct ath_softc {
struct completion paprd_complete;
wait_queue_head_t tx_wait;
unsigned long sc_flags;
unsigned long driver_data;
u8 gtt_cnt;
......
......@@ -328,7 +328,7 @@ void ath9k_beacon_tasklet(unsigned long data)
bool edma = !!(ah->caps.hw_caps & ATH9K_HW_CAP_EDMA);
int slot;
if (test_bit(SC_OP_HW_RESET, &sc->sc_flags)) {
if (test_bit(ATH_OP_HW_RESET, &common->op_flags)) {
ath_dbg(common, RESET,
"reset work is pending, skip beaconing now\n");
return;
......@@ -447,33 +447,6 @@ static void ath9k_beacon_init(struct ath_softc *sc, u32 nexttbtt,
ath9k_hw_enable_interrupts(ah);
}
/* Calculate the modulo of a 64 bit TSF snapshot with a TU divisor */
static u32 ath9k_mod_tsf64_tu(u64 tsf, u32 div_tu)
{
u32 tsf_mod, tsf_hi, tsf_lo, mod_hi, mod_lo;
tsf_mod = tsf & (BIT(10) - 1);
tsf_hi = tsf >> 32;
tsf_lo = ((u32) tsf) >> 10;
mod_hi = tsf_hi % div_tu;
mod_lo = ((mod_hi << 22) + tsf_lo) % div_tu;
return (mod_lo << 10) | tsf_mod;
}
static u32 ath9k_get_next_tbtt(struct ath_softc *sc, u64 tsf,
unsigned int interval)
{
struct ath_hw *ah = sc->sc_ah;
unsigned int offset;
tsf += TU_TO_USEC(FUDGE + ah->config.sw_beacon_response_time);
offset = ath9k_mod_tsf64_tu(tsf, interval);
return (u32) tsf + TU_TO_USEC(interval) - offset;
}
/*
* For multi-bss ap support beacons are either staggered evenly over N slots or
* burst together. For the former arrange for the SWBA to be delivered for each
......@@ -483,109 +456,18 @@ static void ath9k_beacon_config_ap(struct ath_softc *sc,
struct ath_beacon_config *conf)
{
struct ath_hw *ah = sc->sc_ah;
struct ath_common *common = ath9k_hw_common(ah);
u32 nexttbtt, intval;
/* NB: the beacon interval is kept internally in TU's */
intval = TU_TO_USEC(conf->beacon_interval);
intval /= ATH_BCBUF;
nexttbtt = ath9k_get_next_tbtt(sc, ath9k_hw_gettsf64(ah),
conf->beacon_interval);
if (conf->enable_beacon)
ah->imask |= ATH9K_INT_SWBA;
else
ah->imask &= ~ATH9K_INT_SWBA;
ath_dbg(common, BEACON,
"AP (%s) nexttbtt: %u intval: %u conf_intval: %u\n",
(conf->enable_beacon) ? "Enable" : "Disable",
nexttbtt, intval, conf->beacon_interval);
ath9k_beacon_init(sc, nexttbtt, intval, false);
ath9k_cmn_beacon_config_ap(ah, conf, ATH_BCBUF);
ath9k_beacon_init(sc, conf->nexttbtt, conf->intval, false);
}
/*
* This sets up the beacon timers according to the timestamp of the last
* received beacon and the current TSF, configures PCF and DTIM
* handling, programs the sleep registers so the hardware will wakeup in
* time to receive beacons, and configures the beacon miss handling so
* we'll receive a BMISS interrupt when we stop seeing beacons from the AP
* we've associated with.
*/
static void ath9k_beacon_config_sta(struct ath_softc *sc,
static void ath9k_beacon_config_sta(struct ath_hw *ah,
struct ath_beacon_config *conf)
{
struct ath_hw *ah = sc->sc_ah;
struct ath_common *common = ath9k_hw_common(ah);
struct ath9k_beacon_state bs;
int dtim_intval;
u32 nexttbtt = 0, intval;
u64 tsf;
/* No need to configure beacon if we are not associated */
if (!test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) {
ath_dbg(common, BEACON,
"STA is not yet associated..skipping beacon config\n");
if (ath9k_cmn_beacon_config_sta(ah, conf, &bs) == -EPERM)
return;
}
memset(&bs, 0, sizeof(bs));
intval = conf->beacon_interval;
/*
* Setup dtim parameters according to
* last beacon we received (which may be none).
*/
dtim_intval = intval * conf->dtim_period;
/*
* Pull nexttbtt forward to reflect the current
* TSF and calculate dtim state for the result.
*/
tsf = ath9k_hw_gettsf64(ah);
nexttbtt = ath9k_get_next_tbtt(sc, tsf, intval);
bs.bs_intval = TU_TO_USEC(intval);
bs.bs_dtimperiod = conf->dtim_period * bs.bs_intval;
bs.bs_nexttbtt = nexttbtt;
bs.bs_nextdtim = nexttbtt;
if (conf->dtim_period > 1)
bs.bs_nextdtim = ath9k_get_next_tbtt(sc, tsf, dtim_intval);
/*
* Calculate the number of consecutive beacons to miss* before taking
* a BMISS interrupt. The configuration is specified in TU so we only
* need calculate based on the beacon interval. Note that we clamp the
* result to at most 15 beacons.
*/
bs.bs_bmissthreshold = DIV_ROUND_UP(conf->bmiss_timeout, intval);
if (bs.bs_bmissthreshold > 15)
bs.bs_bmissthreshold = 15;
else if (bs.bs_bmissthreshold <= 0)
bs.bs_bmissthreshold = 1;
/*
* Calculate sleep duration. The configuration is given in ms.
* We ensure a multiple of the beacon period is used. Also, if the sleep
* duration is greater than the DTIM period then it makes sense
* to make it a multiple of that.
*
* XXX fixed at 100ms
*/
bs.bs_sleepduration = TU_TO_USEC(roundup(IEEE80211_MS_TO_TU(100),
intval));
if (bs.bs_sleepduration > bs.bs_dtimperiod)
bs.bs_sleepduration = bs.bs_dtimperiod;
/* TSF out of range threshold fixed at 1 second */
bs.bs_tsfoor_threshold = ATH9K_TSFOOR_THRESHOLD;
ath_dbg(common, BEACON, "bmiss: %u sleep: %u\n",
bs.bs_bmissthreshold, bs.bs_sleepduration);
/* Set the computed STA beacon timers */
ath9k_hw_disable_interrupts(ah);
ath9k_hw_set_sta_beacon_timers(ah, &bs);
......@@ -600,36 +482,19 @@ static void ath9k_beacon_config_adhoc(struct ath_softc *sc,
{
struct ath_hw *ah = sc->sc_ah;
struct ath_common *common = ath9k_hw_common(ah);
u32 intval, nexttbtt;
ath9k_reset_beacon_status(sc);
intval = TU_TO_USEC(conf->beacon_interval);
if (conf->ibss_creator)
nexttbtt = intval;
else
nexttbtt = ath9k_get_next_tbtt(sc, ath9k_hw_gettsf64(ah),
conf->beacon_interval);
if (conf->enable_beacon)
ah->imask |= ATH9K_INT_SWBA;
else
ah->imask &= ~ATH9K_INT_SWBA;
ath_dbg(common, BEACON,
"IBSS (%s) nexttbtt: %u intval: %u conf_intval: %u\n",
(conf->enable_beacon) ? "Enable" : "Disable",
nexttbtt, intval, conf->beacon_interval);
ath9k_cmn_beacon_config_adhoc(ah, conf);
ath9k_beacon_init(sc, nexttbtt, intval, conf->ibss_creator);
ath9k_beacon_init(sc, conf->nexttbtt, conf->intval, conf->ibss_creator);
/*
* Set the global 'beacon has been configured' flag for the
* joiner case in IBSS mode.
*/
if (!conf->ibss_creator && conf->enable_beacon)
set_bit(SC_OP_BEACONS, &sc->sc_flags);
set_bit(ATH_OP_BEACONS, &common->op_flags);
}
static bool ath9k_allow_beacon_config(struct ath_softc *sc,
......@@ -649,7 +514,7 @@ static bool ath9k_allow_beacon_config(struct ath_softc *sc,
if (sc->sc_ah->opmode == NL80211_IFTYPE_STATION) {
if ((vif->type == NL80211_IFTYPE_STATION) &&
test_bit(SC_OP_BEACONS, &sc->sc_flags) &&
test_bit(ATH_OP_BEACONS, &common->op_flags) &&
!avp->primary_sta_vif) {
ath_dbg(common, CONFIG,
"Beacon already configured for a station interface\n");
......@@ -700,6 +565,8 @@ void ath9k_beacon_config(struct ath_softc *sc, struct ieee80211_vif *vif,
{
struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
struct ath_beacon_config *cur_conf = &sc->cur_beacon_conf;
struct ath_hw *ah = sc->sc_ah;
struct ath_common *common = ath9k_hw_common(ah);
unsigned long flags;
bool skip_beacon = false;
......@@ -712,7 +579,7 @@ void ath9k_beacon_config(struct ath_softc *sc, struct ieee80211_vif *vif,
if (sc->sc_ah->opmode == NL80211_IFTYPE_STATION) {
ath9k_cache_beacon_config(sc, bss_conf);
ath9k_set_beacon(sc);
set_bit(SC_OP_BEACONS, &sc->sc_flags);
set_bit(ATH_OP_BEACONS, &common->op_flags);
return;
}
......@@ -751,13 +618,13 @@ void ath9k_beacon_config(struct ath_softc *sc, struct ieee80211_vif *vif,
}
/*
* Do not set the SC_OP_BEACONS flag for IBSS joiner mode
* Do not set the ATH_OP_BEACONS flag for IBSS joiner mode
* here, it is done in ath9k_beacon_config_adhoc().
*/
if (cur_conf->enable_beacon && !skip_beacon)
set_bit(SC_OP_BEACONS, &sc->sc_flags);
set_bit(ATH_OP_BEACONS, &common->op_flags);
else
clear_bit(SC_OP_BEACONS, &sc->sc_flags);
clear_bit(ATH_OP_BEACONS, &common->op_flags);
}
}
......@@ -775,7 +642,7 @@ void ath9k_set_beacon(struct ath_softc *sc)
ath9k_beacon_config_adhoc(sc, cur_conf);
break;
case NL80211_IFTYPE_STATION:
ath9k_beacon_config_sta(sc, cur_conf);
ath9k_beacon_config_sta(sc->sc_ah, cur_conf);
break;
default:
ath_dbg(common, CONFIG, "Unsupported beaconing mode\n");
......
/*
* Copyright (c) 2008-2011 Atheros Communications Inc.
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include "common.h"
#define FUDGE 2
/* Calculate the modulo of a 64 bit TSF snapshot with a TU divisor */
static u32 ath9k_mod_tsf64_tu(u64 tsf, u32 div_tu)
{
u32 tsf_mod, tsf_hi, tsf_lo, mod_hi, mod_lo;
tsf_mod = tsf & (BIT(10) - 1);
tsf_hi = tsf >> 32;
tsf_lo = ((u32) tsf) >> 10;
mod_hi = tsf_hi % div_tu;
mod_lo = ((mod_hi << 22) + tsf_lo) % div_tu;
return (mod_lo << 10) | tsf_mod;
}
static u32 ath9k_get_next_tbtt(struct ath_hw *ah, u64 tsf,
unsigned int interval)
{
unsigned int offset;
tsf += TU_TO_USEC(FUDGE + ah->config.sw_beacon_response_time);
offset = ath9k_mod_tsf64_tu(tsf, interval);
return (u32) tsf + TU_TO_USEC(interval) - offset;
}
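These two helpers carry the arithmetic the rest of the beacon code relies on. ath9k_mod_tsf64_tu() computes tsf mod (div_tu TU) without a 64-bit division: the sub-TU microseconds are kept aside, the remaining 54-bit TU count is split as hi * 2^22 + lo, and the modulo is taken piecewise using (a * 2^22 + b) mod m == (((a mod m) << 22) + b) mod m. ath9k_get_next_tbtt() then returns tsf + interval - (tsf mod interval), the next beacon target time pulled forward past the current TSF. A small userspace check of the modulo identity (illustration only, not driver code):

#include <assert.h>
#include <stdint.h>

/* Userspace restatement of ath9k_mod_tsf64_tu() for checking the math;
 * a 64-bit intermediate is used here purely for safety. */
static uint32_t mod_tsf64_tu(uint64_t tsf, uint32_t div_tu)
{
	uint32_t tsf_mod = tsf & ((1u << 10) - 1);	/* sub-TU microseconds */
	uint32_t tsf_hi = tsf >> 32;
	uint32_t tsf_lo = (uint32_t)tsf >> 10;
	uint32_t mod_hi = tsf_hi % div_tu;
	uint32_t mod_lo = (((uint64_t)mod_hi << 22) + tsf_lo) % div_tu;

	return (mod_lo << 10) | tsf_mod;
}

int main(void)
{
	uint64_t tsf = 0x0123456789abcdefULL;		/* arbitrary TSF, in us */
	uint32_t intval = 100;				/* beacon interval, in TU */

	/* direct computation: tsf mod (intval TU), 1 TU = 1024 us */
	assert(mod_tsf64_tu(tsf, intval) == tsf % ((uint64_t)intval << 10));
	return 0;
}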
/*
* This sets up the beacon timers according to the timestamp of the last
* received beacon and the current TSF, configures PCF and DTIM
* handling, programs the sleep registers so the hardware will wakeup in
* time to receive beacons, and configures the beacon miss handling so
* we'll receive a BMISS interrupt when we stop seeing beacons from the AP
* we've associated with.
*/
int ath9k_cmn_beacon_config_sta(struct ath_hw *ah,
struct ath_beacon_config *conf,
struct ath9k_beacon_state *bs)
{
struct ath_common *common = ath9k_hw_common(ah);
int dtim_intval;
u64 tsf;
/* No need to configure beacon if we are not associated */
if (!test_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags)) {
ath_dbg(common, BEACON,
"STA is not yet associated..skipping beacon config\n");
return -EPERM;
}
memset(bs, 0, sizeof(*bs));
conf->intval = conf->beacon_interval;
/*
* Setup dtim parameters according to
* last beacon we received (which may be none).
*/
dtim_intval = conf->intval * conf->dtim_period;
/*
* Pull nexttbtt forward to reflect the current
* TSF and calculate dtim state for the result.
*/
tsf = ath9k_hw_gettsf64(ah);
conf->nexttbtt = ath9k_get_next_tbtt(ah, tsf, conf->intval);
bs->bs_intval = TU_TO_USEC(conf->intval);
bs->bs_dtimperiod = conf->dtim_period * bs->bs_intval;
bs->bs_nexttbtt = conf->nexttbtt;
bs->bs_nextdtim = conf->nexttbtt;
if (conf->dtim_period > 1)
bs->bs_nextdtim = ath9k_get_next_tbtt(ah, tsf, dtim_intval);
/*
* Calculate the number of consecutive beacons to miss* before taking
* a BMISS interrupt. The configuration is specified in TU so we only
* need calculate based on the beacon interval. Note that we clamp the
* result to at most 15 beacons.
*/
bs->bs_bmissthreshold = DIV_ROUND_UP(conf->bmiss_timeout, conf->intval);
if (bs->bs_bmissthreshold > 15)
bs->bs_bmissthreshold = 15;
else if (bs->bs_bmissthreshold <= 0)
bs->bs_bmissthreshold = 1;
/*
* Calculate sleep duration. The configuration is given in ms.
* We ensure a multiple of the beacon period is used. Also, if the sleep
* duration is greater than the DTIM period then it makes sense
* to make it a multiple of that.
*
* XXX fixed at 100ms
*/
bs->bs_sleepduration = TU_TO_USEC(roundup(IEEE80211_MS_TO_TU(100),
conf->intval));
if (bs->bs_sleepduration > bs->bs_dtimperiod)
bs->bs_sleepduration = bs->bs_dtimperiod;
/* TSF out of range threshold fixed at 1 second */
bs->bs_tsfoor_threshold = ATH9K_TSFOOR_THRESHOLD;
ath_dbg(common, BEACON, "bmiss: %u sleep: %u\n",
bs->bs_bmissthreshold, bs->bs_sleepduration);
return 0;
}
EXPORT_SYMBOL(ath9k_cmn_beacon_config_sta);
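The clamping above is easy to sanity-check by hand: the BMISS threshold is DIV_ROUND_UP(bmiss_timeout, intval) bounded to the range 1..15, and the sleep duration is a multiple of the beacon interval capped at the DTIM period. A worked example with illustrative numbers (100 TU beacon interval and the default miss limit of 10 beacon intervals):

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int intval = 100;			/* beacon interval, TU */
	unsigned int bmiss_timeout = 10 * intval;	/* 1000 TU */
	unsigned int thr = DIV_ROUND_UP(bmiss_timeout, intval);

	if (thr > 15)
		thr = 15;
	else if (thr == 0)
		thr = 1;

	printf("BMISS raised after %u missed beacons\n", thr);	/* prints 10 */
	return 0;
}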
void ath9k_cmn_beacon_config_adhoc(struct ath_hw *ah,
struct ath_beacon_config *conf)
{
struct ath_common *common = ath9k_hw_common(ah);
conf->intval = TU_TO_USEC(conf->beacon_interval);
if (conf->ibss_creator)
conf->nexttbtt = conf->intval;
else
conf->nexttbtt = ath9k_get_next_tbtt(ah, ath9k_hw_gettsf64(ah),
conf->beacon_interval);
if (conf->enable_beacon)
ah->imask |= ATH9K_INT_SWBA;
else
ah->imask &= ~ATH9K_INT_SWBA;
ath_dbg(common, BEACON,
"IBSS (%s) nexttbtt: %u intval: %u conf_intval: %u\n",
(conf->enable_beacon) ? "Enable" : "Disable",
conf->nexttbtt, conf->intval, conf->beacon_interval);
}
EXPORT_SYMBOL(ath9k_cmn_beacon_config_adhoc);
/*
* For multi-bss ap support beacons are either staggered evenly over N slots or
* burst together. For the former arrange for the SWBA to be delivered for each
* slot. Slots that are not occupied will generate nothing.
*/
void ath9k_cmn_beacon_config_ap(struct ath_hw *ah,
struct ath_beacon_config *conf,
unsigned int bc_buf)
{
struct ath_common *common = ath9k_hw_common(ah);
/* NB: the beacon interval is kept internally in TU's */
conf->intval = TU_TO_USEC(conf->beacon_interval);
conf->intval /= bc_buf;
conf->nexttbtt = ath9k_get_next_tbtt(ah, ath9k_hw_gettsf64(ah),
conf->beacon_interval);
if (conf->enable_beacon)
ah->imask |= ATH9K_INT_SWBA;
else
ah->imask &= ~ATH9K_INT_SWBA;
ath_dbg(common, BEACON,
"AP (%s) nexttbtt: %u intval: %u conf_intval: %u\n",
(conf->enable_beacon) ? "Enable" : "Disable",
conf->nexttbtt, conf->intval, conf->beacon_interval);
}
EXPORT_SYMBOL(ath9k_cmn_beacon_config_ap);
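In the AP case the configured interval is spread over the available beacon slots, so the SWBA interrupt gives each slot its own transmit opportunity. A quick worked example with illustrative values (ATH_BCBUF is 8 in the ath9k diff above):

#include <stdio.h>

#define TU_TO_USEC(x)	((x) << 10)	/* 1 TU = 1024 microseconds */
#define ATH_BCBUF	8		/* beacon slots, as in ath9k */

int main(void)
{
	unsigned int beacon_interval = 100;	/* TU, a typical AP setting */
	unsigned int intval = TU_TO_USEC(beacon_interval) / ATH_BCBUF;

	/* 102400 / 8 = 12800 us: one SWBA per slot every 12.8 ms */
	printf("per-slot interval: %u us\n", intval);
	return 0;
}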
/*
* Copyright (c) 2009-2011 Atheros Communications Inc.
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
struct ath_beacon_config;
int ath9k_cmn_beacon_config_sta(struct ath_hw *ah,
struct ath_beacon_config *conf,
struct ath9k_beacon_state *bs);
void ath9k_cmn_beacon_config_adhoc(struct ath_hw *ah,
struct ath_beacon_config *conf);
void ath9k_cmn_beacon_config_ap(struct ath_hw *ah,
struct ath_beacon_config *conf,
unsigned int bc_buf);
......@@ -22,6 +22,7 @@
#include "hw-ops.h"
#include "common-init.h"
#include "common-beacon.h"
/* Common header for Atheros 802.11n base driver cores */
......@@ -44,6 +45,19 @@
#define ATH_EP_RND(x, mul) \
(((x) + ((mul)/2)) / (mul))
#define IEEE80211_MS_TO_TU(x) (((x) * 1000) / 1024)
struct ath_beacon_config {
int beacon_interval;
u16 dtim_period;
u16 bmiss_timeout;
u8 dtim_count;
bool enable_beacon;
bool ibss_creator;
u32 nexttbtt;
u32 intval;
};
bool ath9k_cmn_rx_accept(struct ath_common *common,
struct ieee80211_hdr *hdr,
struct ieee80211_rx_status *rxs,
......
......@@ -139,43 +139,41 @@ static ssize_t read_file_ani(struct file *file, char __user *user_buf,
const unsigned int size = 1024;
ssize_t retval = 0;
char *buf;
int i;
struct {
const char *name;
unsigned int val;
} ani_info[] = {
{ "ANI RESET", ah->stats.ast_ani_reset },
{ "OFDM LEVEL", ah->ani.ofdmNoiseImmunityLevel },
{ "CCK LEVEL", ah->ani.cckNoiseImmunityLevel },
{ "SPUR UP", ah->stats.ast_ani_spurup },
{ "SPUR DOWN", ah->stats.ast_ani_spurup },
{ "OFDM WS-DET ON", ah->stats.ast_ani_ofdmon },
{ "OFDM WS-DET OFF", ah->stats.ast_ani_ofdmoff },
{ "MRC-CCK ON", ah->stats.ast_ani_ccklow },
{ "MRC-CCK OFF", ah->stats.ast_ani_cckhigh },
{ "FIR-STEP UP", ah->stats.ast_ani_stepup },
{ "FIR-STEP DOWN", ah->stats.ast_ani_stepdown },
{ "INV LISTENTIME", ah->stats.ast_ani_lneg_or_lzero },
{ "OFDM ERRORS", ah->stats.ast_ani_ofdmerrs },
{ "CCK ERRORS", ah->stats.ast_ani_cckerrs },
};
buf = kzalloc(size, GFP_KERNEL);
if (buf == NULL)
return -ENOMEM;
if (common->disable_ani) {
len += scnprintf(buf + len, size - len, "%s: %s\n",
"ANI", "DISABLED");
len += scnprintf(buf + len, size - len, "%15s: %s\n", "ANI",
common->disable_ani ? "DISABLED" : "ENABLED");
if (common->disable_ani)
goto exit;
}
len += scnprintf(buf + len, size - len, "%15s: %s\n",
"ANI", "ENABLED");
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"ANI RESET", ah->stats.ast_ani_reset);
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"SPUR UP", ah->stats.ast_ani_spurup);
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"SPUR DOWN", ah->stats.ast_ani_spurup);
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"OFDM WS-DET ON", ah->stats.ast_ani_ofdmon);
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"OFDM WS-DET OFF", ah->stats.ast_ani_ofdmoff);
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"MRC-CCK ON", ah->stats.ast_ani_ccklow);
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"MRC-CCK OFF", ah->stats.ast_ani_cckhigh);
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"FIR-STEP UP", ah->stats.ast_ani_stepup);
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"FIR-STEP DOWN", ah->stats.ast_ani_stepdown);
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"INV LISTENTIME", ah->stats.ast_ani_lneg_or_lzero);
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"OFDM ERRORS", ah->stats.ast_ani_ofdmerrs);
len += scnprintf(buf + len, size - len, "%15s: %u\n",
"CCK ERRORS", ah->stats.ast_ani_cckerrs);
for (i = 0; i < ARRAY_SIZE(ani_info); i++)
len += scnprintf(buf + len, size - len, "%15s: %u\n",
ani_info[i].name, ani_info[i].val);
exit:
if (len > size)
len = size;
......@@ -210,7 +208,7 @@ static ssize_t write_file_ani(struct file *file,
common->disable_ani = !ani;
if (common->disable_ani) {
clear_bit(SC_OP_ANI_RUN, &sc->sc_flags);
clear_bit(ATH_OP_ANI_RUN, &common->op_flags);
ath_stop_ani(sc);
} else {
ath_check_ani(sc);
......
......@@ -39,7 +39,6 @@
#define ATH_RESTART_CALINTERVAL 1200000 /* 20 minutes */
#define ATH_DEFAULT_BMISS_LIMIT 10
#define IEEE80211_MS_TO_TU(x) (((x) * 1000) / 1024)
#define TSF_TO_TU(_h, _l) \
((((u32)(_h)) << 22) | (((u32)(_l)) >> 10))
......@@ -406,12 +405,18 @@ static inline void ath9k_htc_err_stat_rx(struct ath9k_htc_priv *priv,
#define DEFAULT_SWBA_RESPONSE 40 /* in TUs */
#define MIN_SWBA_RESPONSE 10 /* in TUs */
struct htc_beacon_config {
struct htc_beacon {
enum {
OK, /* no change needed */
UPDATE, /* update pending */
COMMIT /* beacon sent, commit change */
} updateslot; /* slot time update fsm */
struct ieee80211_vif *bslot[ATH9K_HTC_MAX_BCN_VIF];
u16 beacon_interval;
u16 dtim_period;
u16 bmiss_timeout;
u32 bmiss_cnt;
u32 bmisscnt;
u32 beaconq;
int slottime;
int slotupdate;
};
struct ath_btcoex {
......@@ -439,12 +444,8 @@ static inline void ath9k_htc_stop_btcoex(struct ath9k_htc_priv *priv)
}
#endif /* CONFIG_ATH9K_BTCOEX_SUPPORT */
#define OP_INVALID BIT(0)
#define OP_SCANNING BIT(1)
#define OP_ENABLE_BEACON BIT(2)
#define OP_BT_PRIORITY_DETECTED BIT(3)
#define OP_BT_SCAN BIT(4)
#define OP_ANI_RUNNING BIT(5)
#define OP_TSF_RESET BIT(6)
struct ath9k_htc_priv {
......@@ -489,7 +490,8 @@ struct ath9k_htc_priv {
struct ath9k_hw_cal_data caldata;
spinlock_t beacon_lock;
struct htc_beacon_config cur_beacon_conf;
struct ath_beacon_config cur_beacon_conf;
struct htc_beacon beacon;
struct ath9k_htc_rx rx;
struct ath9k_htc_tx tx;
......@@ -514,7 +516,6 @@ struct ath9k_htc_priv {
struct work_struct led_work;
#endif
int beaconq;
int cabq;
int hwq_map[IEEE80211_NUM_ACS];
......
......@@ -26,7 +26,7 @@ void ath9k_htc_beaconq_config(struct ath9k_htc_priv *priv)
memset(&qi, 0, sizeof(struct ath9k_tx_queue_info));
memset(&qi_be, 0, sizeof(struct ath9k_tx_queue_info));
ath9k_hw_get_txq_props(ah, priv->beaconq, &qi);
ath9k_hw_get_txq_props(ah, priv->beacon.beaconq, &qi);
if (priv->ah->opmode == NL80211_IFTYPE_AP ||
priv->ah->opmode == NL80211_IFTYPE_MESH_POINT) {
......@@ -54,212 +54,78 @@ void ath9k_htc_beaconq_config(struct ath9k_htc_priv *priv)
}
if (!ath9k_hw_set_txq_props(ah, priv->beaconq, &qi)) {
if (!ath9k_hw_set_txq_props(ah, priv->beacon.beaconq, &qi)) {
ath_err(ath9k_hw_common(ah),
"Unable to update beacon queue %u!\n", priv->beaconq);
"Unable to update beacon queue %u!\n", priv->beacon.beaconq);
} else {
ath9k_hw_resettxqueue(ah, priv->beaconq);
ath9k_hw_resettxqueue(ah, priv->beacon.beaconq);
}
}
static void ath9k_htc_beacon_config_sta(struct ath9k_htc_priv *priv,
struct htc_beacon_config *bss_conf)
/*
* Both nexttbtt and intval have to be in usecs.
*/
static void ath9k_htc_beacon_init(struct ath9k_htc_priv *priv,
struct ath_beacon_config *conf,
bool reset_tsf)
{
struct ath_common *common = ath9k_hw_common(priv->ah);
struct ath9k_beacon_state bs;
enum ath9k_int imask = 0;
int dtimperiod, dtimcount;
int bmiss_timeout;
u32 nexttbtt = 0, intval, tsftu;
__be32 htc_imask = 0;
u64 tsf;
int num_beacons, offset, dtim_dec_count;
struct ath_hw *ah = priv->ah;
int ret __attribute__ ((unused));
__be32 htc_imask = 0;
u8 cmd_rsp;
memset(&bs, 0, sizeof(bs));
intval = bss_conf->beacon_interval;
bmiss_timeout = (ATH_DEFAULT_BMISS_LIMIT * bss_conf->beacon_interval);
/*
* Setup dtim parameters according to
* last beacon we received (which may be none).
*/
dtimperiod = bss_conf->dtim_period;
if (dtimperiod <= 0) /* NB: 0 if not known */
dtimperiod = 1;
dtimcount = 1;
if (dtimcount >= dtimperiod) /* NB: sanity check */
dtimcount = 0;
/*
* Pull nexttbtt forward to reflect the current
* TSF and calculate dtim state for the result.
*/
tsf = ath9k_hw_gettsf64(priv->ah);
tsftu = TSF_TO_TU(tsf>>32, tsf) + FUDGE;
num_beacons = tsftu / intval + 1;
offset = tsftu % intval;
nexttbtt = tsftu - offset;
if (offset)
nexttbtt += intval;
/* DTIM Beacon every dtimperiod Beacon */
dtim_dec_count = num_beacons % dtimperiod;
dtimcount -= dtim_dec_count;
if (dtimcount < 0)
dtimcount += dtimperiod;
bs.bs_intval = TU_TO_USEC(intval);
bs.bs_nexttbtt = TU_TO_USEC(nexttbtt);
bs.bs_dtimperiod = dtimperiod * bs.bs_intval;
bs.bs_nextdtim = bs.bs_nexttbtt + dtimcount * bs.bs_intval;
/*
* Calculate the number of consecutive beacons to miss* before taking
* a BMISS interrupt. The configuration is specified in TU so we only
* need calculate based on the beacon interval. Note that we clamp the
* result to at most 15 beacons.
*/
bs.bs_bmissthreshold = DIV_ROUND_UP(bmiss_timeout, intval);
if (bs.bs_bmissthreshold > 15)
bs.bs_bmissthreshold = 15;
else if (bs.bs_bmissthreshold <= 0)
bs.bs_bmissthreshold = 1;
/*
* Calculate sleep duration. The configuration is given in ms.
* We ensure a multiple of the beacon period is used. Also, if the sleep
* duration is greater than the DTIM period then it makes sense
* to make it a multiple of that.
*
* XXX fixed at 100ms
*/
bs.bs_sleepduration = TU_TO_USEC(roundup(IEEE80211_MS_TO_TU(100),
intval));
if (bs.bs_sleepduration > bs.bs_dtimperiod)
bs.bs_sleepduration = bs.bs_dtimperiod;
/* TSF out of range threshold fixed at 1 second */
bs.bs_tsfoor_threshold = ATH9K_TSFOOR_THRESHOLD;
ath_dbg(common, CONFIG, "intval: %u tsf: %llu tsftu: %u\n",
intval, tsf, tsftu);
ath_dbg(common, CONFIG, "bmiss: %u sleep: %u\n",
bs.bs_bmissthreshold, bs.bs_sleepduration);
/* Set the computed STA beacon timers */
if (conf->intval >= TU_TO_USEC(DEFAULT_SWBA_RESPONSE))
ah->config.sw_beacon_response_time = DEFAULT_SWBA_RESPONSE;
else
ah->config.sw_beacon_response_time = MIN_SWBA_RESPONSE;
WMI_CMD(WMI_DISABLE_INTR_CMDID);
ath9k_hw_set_sta_beacon_timers(priv->ah, &bs);
imask |= ATH9K_INT_BMISS;
htc_imask = cpu_to_be32(imask);
if (reset_tsf)
ath9k_hw_reset_tsf(ah);
ath9k_htc_beaconq_config(priv);
ath9k_hw_beaconinit(ah, conf->nexttbtt, conf->intval);
priv->beacon.bmisscnt = 0;
htc_imask = cpu_to_be32(ah->imask);
WMI_CMD_BUF(WMI_ENABLE_INTR_CMDID, &htc_imask);
}
static void ath9k_htc_beacon_config_ap(struct ath9k_htc_priv *priv,
struct htc_beacon_config *bss_conf)
static void ath9k_htc_beacon_config_sta(struct ath9k_htc_priv *priv,
struct ath_beacon_config *bss_conf)
{
struct ath_common *common = ath9k_hw_common(priv->ah);
struct ath9k_beacon_state bs;
enum ath9k_int imask = 0;
u32 nexttbtt, intval, tsftu;
__be32 htc_imask = 0;
int ret __attribute__ ((unused));
u8 cmd_rsp;
u64 tsf;
intval = bss_conf->beacon_interval;
intval /= ATH9K_HTC_MAX_BCN_VIF;
nexttbtt = intval;
/*
* To reduce beacon misses under heavy TX load,
* set the beacon response time to a larger value.
*/
if (intval > DEFAULT_SWBA_RESPONSE)
priv->ah->config.sw_beacon_response_time = DEFAULT_SWBA_RESPONSE;
else
priv->ah->config.sw_beacon_response_time = MIN_SWBA_RESPONSE;
if (test_bit(OP_TSF_RESET, &priv->op_flags)) {
ath9k_hw_reset_tsf(priv->ah);
clear_bit(OP_TSF_RESET, &priv->op_flags);
} else {
/*
* Pull nexttbtt forward to reflect the current TSF.
*/
tsf = ath9k_hw_gettsf64(priv->ah);
tsftu = TSF_TO_TU(tsf >> 32, tsf) + FUDGE;
do {
nexttbtt += intval;
} while (nexttbtt < tsftu);
}
if (test_bit(OP_ENABLE_BEACON, &priv->op_flags))
imask |= ATH9K_INT_SWBA;
ath_dbg(common, CONFIG,
"AP Beacon config, intval: %d, nexttbtt: %u, resp_time: %d imask: 0x%x\n",
bss_conf->beacon_interval, nexttbtt,
priv->ah->config.sw_beacon_response_time, imask);
ath9k_htc_beaconq_config(priv);
if (ath9k_cmn_beacon_config_sta(priv->ah, bss_conf, &bs) == -EPERM)
return;
WMI_CMD(WMI_DISABLE_INTR_CMDID);
ath9k_hw_beaconinit(priv->ah, TU_TO_USEC(nexttbtt), TU_TO_USEC(intval));
priv->cur_beacon_conf.bmiss_cnt = 0;
ath9k_hw_set_sta_beacon_timers(priv->ah, &bs);
imask |= ATH9K_INT_BMISS;
htc_imask = cpu_to_be32(imask);
WMI_CMD_BUF(WMI_ENABLE_INTR_CMDID, &htc_imask);
}
static void ath9k_htc_beacon_config_adhoc(struct ath9k_htc_priv *priv,
struct htc_beacon_config *bss_conf)
static void ath9k_htc_beacon_config_ap(struct ath9k_htc_priv *priv,
struct ath_beacon_config *conf)
{
struct ath_common *common = ath9k_hw_common(priv->ah);
enum ath9k_int imask = 0;
u32 nexttbtt, intval, tsftu;
__be32 htc_imask = 0;
int ret __attribute__ ((unused));
u8 cmd_rsp;
u64 tsf;
intval = bss_conf->beacon_interval;
nexttbtt = intval;
/*
* Pull nexttbtt forward to reflect the current TSF.
*/
tsf = ath9k_hw_gettsf64(priv->ah);
tsftu = TSF_TO_TU(tsf >> 32, tsf) + FUDGE;
do {
nexttbtt += intval;
} while (nexttbtt < tsftu);
/*
* Only one IBSS interface is allowed.
*/
if (intval > DEFAULT_SWBA_RESPONSE)
priv->ah->config.sw_beacon_response_time = DEFAULT_SWBA_RESPONSE;
else
priv->ah->config.sw_beacon_response_time = MIN_SWBA_RESPONSE;
struct ath_hw *ah = priv->ah;
ah->imask = 0;
if (test_bit(OP_ENABLE_BEACON, &priv->op_flags))
imask |= ATH9K_INT_SWBA;
ath9k_cmn_beacon_config_ap(ah, conf, ATH9K_HTC_MAX_BCN_VIF);
ath9k_htc_beacon_init(priv, conf, false);
}
ath_dbg(common, CONFIG,
"IBSS Beacon config, intval: %d, nexttbtt: %u, resp_time: %d, imask: 0x%x\n",
bss_conf->beacon_interval, nexttbtt,
priv->ah->config.sw_beacon_response_time, imask);
static void ath9k_htc_beacon_config_adhoc(struct ath9k_htc_priv *priv,
struct ath_beacon_config *conf)
{
struct ath_hw *ah = priv->ah;
ah->imask = 0;
WMI_CMD(WMI_DISABLE_INTR_CMDID);
ath9k_hw_beaconinit(priv->ah, TU_TO_USEC(nexttbtt), TU_TO_USEC(intval));
priv->cur_beacon_conf.bmiss_cnt = 0;
htc_imask = cpu_to_be32(imask);
WMI_CMD_BUF(WMI_ENABLE_INTR_CMDID, &htc_imask);
ath9k_cmn_beacon_config_adhoc(ah, conf);
ath9k_htc_beacon_init(priv, conf, conf->ibss_creator);
}
void ath9k_htc_beaconep(void *drv_priv, struct sk_buff *skb,
......@@ -279,7 +145,7 @@ static void ath9k_htc_send_buffered(struct ath9k_htc_priv *priv,
spin_lock_bh(&priv->beacon_lock);
vif = priv->cur_beacon_conf.bslot[slot];
vif = priv->beacon.bslot[slot];
skb = ieee80211_get_buffered_bc(priv->hw, vif);
......@@ -340,10 +206,10 @@ static void ath9k_htc_send_beacon(struct ath9k_htc_priv *priv,
spin_lock_bh(&priv->beacon_lock);
vif = priv->cur_beacon_conf.bslot[slot];
vif = priv->beacon.bslot[slot];
avp = (struct ath9k_htc_vif *)vif->drv_priv;
if (unlikely(test_bit(OP_SCANNING, &priv->op_flags))) {
if (unlikely(test_bit(ATH_OP_SCANNING, &common->op_flags))) {
spin_unlock_bh(&priv->beacon_lock);
return;
}
......@@ -423,8 +289,8 @@ void ath9k_htc_swba(struct ath9k_htc_priv *priv,
int slot;
if (swba->beacon_pending != 0) {
priv->cur_beacon_conf.bmiss_cnt++;
if (priv->cur_beacon_conf.bmiss_cnt > BSTUCK_THRESHOLD) {
priv->beacon.bmisscnt++;
if (priv->beacon.bmisscnt > BSTUCK_THRESHOLD) {
ath_dbg(common, BSTUCK, "Beacon stuck, HW reset\n");
ieee80211_queue_work(priv->hw,
&priv->fatal_work);
......@@ -432,16 +298,16 @@ void ath9k_htc_swba(struct ath9k_htc_priv *priv,
return;
}
if (priv->cur_beacon_conf.bmiss_cnt) {
if (priv->beacon.bmisscnt) {
ath_dbg(common, BSTUCK,
"Resuming beacon xmit after %u misses\n",
priv->cur_beacon_conf.bmiss_cnt);
priv->cur_beacon_conf.bmiss_cnt = 0;
priv->beacon.bmisscnt);
priv->beacon.bmisscnt = 0;
}
slot = ath9k_htc_choose_bslot(priv, swba);
spin_lock_bh(&priv->beacon_lock);
if (priv->cur_beacon_conf.bslot[slot] == NULL) {
if (priv->beacon.bslot[slot] == NULL) {
spin_unlock_bh(&priv->beacon_lock);
return;
}
......@@ -460,13 +326,13 @@ void ath9k_htc_assign_bslot(struct ath9k_htc_priv *priv,
spin_lock_bh(&priv->beacon_lock);
for (i = 0; i < ATH9K_HTC_MAX_BCN_VIF; i++) {
if (priv->cur_beacon_conf.bslot[i] == NULL) {
if (priv->beacon.bslot[i] == NULL) {
avp->bslot = i;
break;
}
}
priv->cur_beacon_conf.bslot[avp->bslot] = vif;
priv->beacon.bslot[avp->bslot] = vif;
spin_unlock_bh(&priv->beacon_lock);
ath_dbg(common, CONFIG, "Added interface at beacon slot: %d\n",
......@@ -480,7 +346,7 @@ void ath9k_htc_remove_bslot(struct ath9k_htc_priv *priv,
struct ath9k_htc_vif *avp = (struct ath9k_htc_vif *)vif->drv_priv;
spin_lock_bh(&priv->beacon_lock);
priv->cur_beacon_conf.bslot[avp->bslot] = NULL;
priv->beacon.bslot[avp->bslot] = NULL;
spin_unlock_bh(&priv->beacon_lock);
ath_dbg(common, CONFIG, "Removed interface at beacon slot: %d\n",
......@@ -496,7 +362,7 @@ void ath9k_htc_set_tsfadjust(struct ath9k_htc_priv *priv,
{
struct ath_common *common = ath9k_hw_common(priv->ah);
struct ath9k_htc_vif *avp = (struct ath9k_htc_vif *)vif->drv_priv;
struct htc_beacon_config *cur_conf = &priv->cur_beacon_conf;
struct ath_beacon_config *cur_conf = &priv->cur_beacon_conf;
u64 tsfadjust;
if (avp->bslot == 0)
......@@ -528,7 +394,7 @@ static bool ath9k_htc_check_beacon_config(struct ath9k_htc_priv *priv,
struct ieee80211_vif *vif)
{
struct ath_common *common = ath9k_hw_common(priv->ah);
struct htc_beacon_config *cur_conf = &priv->cur_beacon_conf;
struct ath_beacon_config *cur_conf = &priv->cur_beacon_conf;
struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
bool beacon_configured;
......@@ -583,7 +449,7 @@ void ath9k_htc_beacon_config(struct ath9k_htc_priv *priv,
struct ieee80211_vif *vif)
{
struct ath_common *common = ath9k_hw_common(priv->ah);
struct htc_beacon_config *cur_conf = &priv->cur_beacon_conf;
struct ath_beacon_config *cur_conf = &priv->cur_beacon_conf;
struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
struct ath9k_htc_vif *avp = (struct ath9k_htc_vif *) vif->drv_priv;
......@@ -619,7 +485,7 @@ void ath9k_htc_beacon_config(struct ath9k_htc_priv *priv,
void ath9k_htc_beacon_reconfig(struct ath9k_htc_priv *priv)
{
struct ath_common *common = ath9k_hw_common(priv->ah);
struct htc_beacon_config *cur_conf = &priv->cur_beacon_conf;
struct ath_beacon_config *cur_conf = &priv->cur_beacon_conf;
switch (priv->ah->opmode) {
case NL80211_IFTYPE_STATION:
......
......@@ -405,8 +405,8 @@ static int ath9k_init_queues(struct ath9k_htc_priv *priv)
for (i = 0; i < ARRAY_SIZE(priv->hwq_map); i++)
priv->hwq_map[i] = -1;
priv->beaconq = ath9k_hw_beaconq_setup(priv->ah);
if (priv->beaconq == -1) {
priv->beacon.beaconq = ath9k_hw_beaconq_setup(priv->ah);
if (priv->beacon.beaconq == -1) {
ath_err(common, "Unable to setup BEACON xmit queue\n");
goto err;
}
......@@ -459,8 +459,6 @@ static int ath9k_init_priv(struct ath9k_htc_priv *priv,
struct ath_common *common;
int i, ret = 0, csz = 0;
set_bit(OP_INVALID, &priv->op_flags);
ah = kzalloc(sizeof(struct ath_hw), GFP_KERNEL);
if (!ah)
return -ENOMEM;
......@@ -485,6 +483,7 @@ static int ath9k_init_priv(struct ath9k_htc_priv *priv,
common->priv = priv;
common->debug_mask = ath9k_debug;
common->btcoex_enabled = ath9k_htc_btcoex_enable == 1;
set_bit(ATH_OP_INVALID, &common->op_flags);
spin_lock_init(&priv->beacon_lock);
spin_lock_init(&priv->tx.tx_lock);
......@@ -520,7 +519,8 @@ static int ath9k_init_priv(struct ath9k_htc_priv *priv,
goto err_queues;
for (i = 0; i < ATH9K_HTC_MAX_BCN_VIF; i++)
priv->cur_beacon_conf.bslot[i] = NULL;
priv->beacon.bslot[i] = NULL;
priv->beacon.slottime = ATH9K_SLOT_TIME_9;
ath9k_cmn_init_channels_rates(common);
ath9k_cmn_init_crypto(ah);
......
......@@ -250,7 +250,7 @@ static int ath9k_htc_set_channel(struct ath9k_htc_priv *priv,
u8 cmd_rsp;
int ret;
if (test_bit(OP_INVALID, &priv->op_flags))
if (test_bit(ATH_OP_INVALID, &common->op_flags))
return -EIO;
fastcc = !!(hw->conf.flags & IEEE80211_CONF_OFFCHANNEL);
......@@ -304,7 +304,7 @@ static int ath9k_htc_set_channel(struct ath9k_htc_priv *priv,
htc_start(priv->htc);
if (!test_bit(OP_SCANNING, &priv->op_flags) &&
if (!test_bit(ATH_OP_SCANNING, &common->op_flags) &&
!(hw->conf.flags & IEEE80211_CONF_OFFCHANNEL))
ath9k_htc_vif_reconfig(priv);
......@@ -748,7 +748,7 @@ void ath9k_htc_start_ani(struct ath9k_htc_priv *priv)
common->ani.shortcal_timer = timestamp;
common->ani.checkani_timer = timestamp;
set_bit(OP_ANI_RUNNING, &priv->op_flags);
set_bit(ATH_OP_ANI_RUN, &common->op_flags);
ieee80211_queue_delayed_work(common->hw, &priv->ani_work,
msecs_to_jiffies(ATH_ANI_POLLINTERVAL));
......@@ -756,8 +756,9 @@ void ath9k_htc_start_ani(struct ath9k_htc_priv *priv)
void ath9k_htc_stop_ani(struct ath9k_htc_priv *priv)
{
struct ath_common *common = ath9k_hw_common(priv->ah);
cancel_delayed_work_sync(&priv->ani_work);
clear_bit(OP_ANI_RUNNING, &priv->op_flags);
clear_bit(ATH_OP_ANI_RUN, &common->op_flags);
}
void ath9k_htc_ani_work(struct work_struct *work)
......@@ -942,7 +943,7 @@ static int ath9k_htc_start(struct ieee80211_hw *hw)
ath_dbg(common, CONFIG,
"Failed to update capability in target\n");
clear_bit(OP_INVALID, &priv->op_flags);
clear_bit(ATH_OP_INVALID, &common->op_flags);
htc_start(priv->htc);
spin_lock_bh(&priv->tx.tx_lock);
......@@ -971,7 +972,7 @@ static void ath9k_htc_stop(struct ieee80211_hw *hw)
mutex_lock(&priv->mutex);
if (test_bit(OP_INVALID, &priv->op_flags)) {
if (test_bit(ATH_OP_INVALID, &common->op_flags)) {
ath_dbg(common, ANY, "Device not present\n");
mutex_unlock(&priv->mutex);
return;
......@@ -1013,7 +1014,7 @@ static void ath9k_htc_stop(struct ieee80211_hw *hw)
ath9k_htc_ps_restore(priv);
ath9k_htc_setpower(priv, ATH9K_PM_FULL_SLEEP);
set_bit(OP_INVALID, &priv->op_flags);
set_bit(ATH_OP_INVALID, &common->op_flags);
ath_dbg(common, CONFIG, "Driver halt\n");
mutex_unlock(&priv->mutex);
......@@ -1087,7 +1088,7 @@ static int ath9k_htc_add_interface(struct ieee80211_hw *hw,
ath9k_htc_set_opmode(priv);
if ((priv->ah->opmode == NL80211_IFTYPE_AP) &&
!test_bit(OP_ANI_RUNNING, &priv->op_flags)) {
!test_bit(ATH_OP_ANI_RUN, &common->op_flags)) {
ath9k_hw_set_tsfadjust(priv->ah, true);
ath9k_htc_start_ani(priv);
}
......@@ -1245,13 +1246,14 @@ static void ath9k_htc_configure_filter(struct ieee80211_hw *hw,
u64 multicast)
{
struct ath9k_htc_priv *priv = hw->priv;
struct ath_common *common = ath9k_hw_common(priv->ah);
u32 rfilt;
mutex_lock(&priv->mutex);
changed_flags &= SUPPORTED_FILTERS;
*total_flags &= SUPPORTED_FILTERS;
if (test_bit(OP_INVALID, &priv->op_flags)) {
if (test_bit(ATH_OP_INVALID, &common->op_flags)) {
ath_dbg(ath9k_hw_common(priv->ah), ANY,
"Unable to configure filter on invalid state\n");
mutex_unlock(&priv->mutex);
......@@ -1476,6 +1478,7 @@ static void ath9k_htc_bss_iter(void *data, u8 *mac, struct ieee80211_vif *vif)
common->curaid = bss_conf->aid;
common->last_rssi = ATH_RSSI_DUMMY_MARKER;
memcpy(common->curbssid, bss_conf->bssid, ETH_ALEN);
set_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags);
}
}
......@@ -1497,6 +1500,7 @@ static void ath9k_htc_bss_info_changed(struct ieee80211_hw *hw,
struct ath9k_htc_priv *priv = hw->priv;
struct ath_hw *ah = priv->ah;
struct ath_common *common = ath9k_hw_common(ah);
int slottime;
mutex_lock(&priv->mutex);
ath9k_htc_ps_wakeup(priv);
......@@ -1508,6 +1512,9 @@ static void ath9k_htc_bss_info_changed(struct ieee80211_hw *hw,
bss_conf->assoc ?
priv->num_sta_assoc_vif++ : priv->num_sta_assoc_vif--;
if (!bss_conf->assoc)
clear_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags);
if (priv->ah->opmode == NL80211_IFTYPE_STATION) {
ath9k_htc_choose_set_bssid(priv);
if (bss_conf->assoc && (priv->num_sta_assoc_vif == 1))
......@@ -1529,7 +1536,7 @@ static void ath9k_htc_bss_info_changed(struct ieee80211_hw *hw,
ath_dbg(common, CONFIG, "Beacon enabled for BSS: %pM\n",
bss_conf->bssid);
ath9k_htc_set_tsfadjust(priv, vif);
set_bit(OP_ENABLE_BEACON, &priv->op_flags);
priv->cur_beacon_conf.enable_beacon = 1;
ath9k_htc_beacon_config(priv, vif);
}
......@@ -1543,7 +1550,7 @@ static void ath9k_htc_bss_info_changed(struct ieee80211_hw *hw,
ath_dbg(common, CONFIG,
"Beacon disabled for BSS: %pM\n",
bss_conf->bssid);
clear_bit(OP_ENABLE_BEACON, &priv->op_flags);
priv->cur_beacon_conf.enable_beacon = 0;
ath9k_htc_beacon_config(priv, vif);
}
}
......@@ -1569,11 +1576,21 @@ static void ath9k_htc_bss_info_changed(struct ieee80211_hw *hw,
if (changed & BSS_CHANGED_ERP_SLOT) {
if (bss_conf->use_short_slot)
ah->slottime = 9;
slottime = 9;
else
ah->slottime = 20;
ath9k_hw_init_global_settings(ah);
slottime = 20;
if (vif->type == NL80211_IFTYPE_AP) {
/*
* Defer update, so that connected stations can adjust
* their settings at the same time.
* See beacon.c for more details
*/
priv->beacon.slottime = slottime;
priv->beacon.updateslot = UPDATE;
} else {
ah->slottime = slottime;
ath9k_hw_init_global_settings(ah);
}
}
if (changed & BSS_CHANGED_HT)
......@@ -1670,10 +1687,11 @@ static int ath9k_htc_ampdu_action(struct ieee80211_hw *hw,
static void ath9k_htc_sw_scan_start(struct ieee80211_hw *hw)
{
struct ath9k_htc_priv *priv = hw->priv;
struct ath_common *common = ath9k_hw_common(priv->ah);
mutex_lock(&priv->mutex);
spin_lock_bh(&priv->beacon_lock);
set_bit(OP_SCANNING, &priv->op_flags);
set_bit(ATH_OP_SCANNING, &common->op_flags);
spin_unlock_bh(&priv->beacon_lock);
cancel_work_sync(&priv->ps_work);
ath9k_htc_stop_ani(priv);
......@@ -1683,10 +1701,11 @@ static void ath9k_htc_sw_scan_start(struct ieee80211_hw *hw)
static void ath9k_htc_sw_scan_complete(struct ieee80211_hw *hw)
{
struct ath9k_htc_priv *priv = hw->priv;
struct ath_common *common = ath9k_hw_common(priv->ah);
mutex_lock(&priv->mutex);
spin_lock_bh(&priv->beacon_lock);
clear_bit(OP_SCANNING, &priv->op_flags);
clear_bit(ATH_OP_SCANNING, &common->op_flags);
spin_unlock_bh(&priv->beacon_lock);
ath9k_htc_ps_wakeup(priv);
ath9k_htc_vif_reconfig(priv);
......
......@@ -924,9 +924,10 @@ static void ath9k_htc_opmode_init(struct ath9k_htc_priv *priv)
void ath9k_host_rx_init(struct ath9k_htc_priv *priv)
{
struct ath_common *common = ath9k_hw_common(priv->ah);
ath9k_hw_rxena(priv->ah);
ath9k_htc_opmode_init(priv);
ath9k_hw_startpcureceive(priv->ah, test_bit(OP_SCANNING, &priv->op_flags));
ath9k_hw_startpcureceive(priv->ah, test_bit(ATH_OP_SCANNING, &common->op_flags));
}
static inline void convert_htc_flag(struct ath_rx_status *rx_stats,
......
......@@ -882,7 +882,7 @@ static void ath9k_hw_init_interrupt_masks(struct ath_hw *ah,
AR_IMR_RXORN |
AR_IMR_BCNMISC;
if (AR_SREV_9340(ah) || AR_SREV_9550(ah))
if (AR_SREV_9340(ah) || AR_SREV_9550(ah) || AR_SREV_9531(ah))
sync_default &= ~AR_INTR_SYNC_HOST1_FATAL;
if (AR_SREV_9300_20_OR_LATER(ah)) {
......@@ -3048,6 +3048,7 @@ static struct {
{ AR_SREV_VERSION_9462, "9462" },
{ AR_SREV_VERSION_9550, "9550" },
{ AR_SREV_VERSION_9565, "9565" },
{ AR_SREV_VERSION_9531, "9531" },
};
/* For devices with external radios */
......
......@@ -115,13 +115,14 @@ void ath_hw_pll_work(struct work_struct *work)
u32 pll_sqsum;
struct ath_softc *sc = container_of(work, struct ath_softc,
hw_pll_work.work);
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
/*
* ensure that the PLL WAR is executed only
* after the STA is associated (or) if the
* beaconing had started in interfaces that
* uses beacons.
*/
if (!test_bit(SC_OP_BEACONS, &sc->sc_flags))
if (!test_bit(ATH_OP_BEACONS, &common->op_flags))
return;
if (sc->tx99_state)
......@@ -414,7 +415,7 @@ void ath_start_ani(struct ath_softc *sc)
unsigned long timestamp = jiffies_to_msecs(jiffies);
if (common->disable_ani ||
!test_bit(SC_OP_ANI_RUN, &sc->sc_flags) ||
!test_bit(ATH_OP_ANI_RUN, &common->op_flags) ||
(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL))
return;
......@@ -438,6 +439,7 @@ void ath_stop_ani(struct ath_softc *sc)
void ath_check_ani(struct ath_softc *sc)
{
struct ath_hw *ah = sc->sc_ah;
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
struct ath_beacon_config *cur_conf = &sc->cur_beacon_conf;
/*
......@@ -453,23 +455,23 @@ void ath_check_ani(struct ath_softc *sc)
* Disable ANI only when there are no
* associated stations.
*/
if (!test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags))
if (!test_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags))
goto stop_ani;
}
} else if (ah->opmode == NL80211_IFTYPE_STATION) {
if (!test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags))
if (!test_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags))
goto stop_ani;
}
if (!test_bit(SC_OP_ANI_RUN, &sc->sc_flags)) {
set_bit(SC_OP_ANI_RUN, &sc->sc_flags);
if (!test_bit(ATH_OP_ANI_RUN, &common->op_flags)) {
set_bit(ATH_OP_ANI_RUN, &common->op_flags);
ath_start_ani(sc);
}
return;
stop_ani:
clear_bit(SC_OP_ANI_RUN, &sc->sc_flags);
clear_bit(ATH_OP_ANI_RUN, &common->op_flags);
ath_stop_ani(sc);
}
......
......@@ -827,7 +827,7 @@ void ath9k_hw_enable_interrupts(struct ath_hw *ah)
return;
}
if (AR_SREV_9340(ah) || AR_SREV_9550(ah))
if (AR_SREV_9340(ah) || AR_SREV_9550(ah) || AR_SREV_9531(ah))
sync_default &= ~AR_INTR_SYNC_HOST1_FATAL;
async_mask = AR_INTR_MAC_IRQ;
......
......@@ -229,16 +229,16 @@ static bool ath_complete_reset(struct ath_softc *sc, bool start)
ath9k_cmn_update_txpow(ah, sc->curtxpow,
sc->config.txpowlimit, &sc->curtxpow);
clear_bit(SC_OP_HW_RESET, &sc->sc_flags);
clear_bit(ATH_OP_HW_RESET, &common->op_flags);
ath9k_hw_set_interrupts(ah);
ath9k_hw_enable_interrupts(ah);
if (!(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL) && start) {
if (!test_bit(SC_OP_BEACONS, &sc->sc_flags))
if (!test_bit(ATH_OP_BEACONS, &common->op_flags))
goto work;
if (ah->opmode == NL80211_IFTYPE_STATION &&
test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) {
test_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags)) {
spin_lock_irqsave(&sc->sc_pm_lock, flags);
sc->ps_flags |= PS_BEACON_SYNC | PS_WAIT_FOR_BEACON;
spin_unlock_irqrestore(&sc->sc_pm_lock, flags);
......@@ -336,7 +336,7 @@ static int ath_set_channel(struct ath_softc *sc, struct cfg80211_chan_def *chand
int old_pos = -1;
int r;
if (test_bit(SC_OP_INVALID, &sc->sc_flags))
if (test_bit(ATH_OP_INVALID, &common->op_flags))
return -EIO;
offchannel = !!(hw->conf.flags & IEEE80211_CONF_OFFCHANNEL);
......@@ -402,7 +402,7 @@ static int ath_set_channel(struct ath_softc *sc, struct cfg80211_chan_def *chand
chan->center_freq);
} else {
/* perform spectral scan if requested. */
if (test_bit(SC_OP_SCANNING, &sc->sc_flags) &&
if (test_bit(ATH_OP_SCANNING, &common->op_flags) &&
sc->spectral_mode == SPECTRAL_CHANSCAN)
ath9k_spectral_scan_trigger(hw);
}
......@@ -566,6 +566,7 @@ irqreturn_t ath_isr(int irq, void *dev)
struct ath_softc *sc = dev;
struct ath_hw *ah = sc->sc_ah;
struct ath_common *common = ath9k_hw_common(ah);
enum ath9k_int status;
u32 sync_cause = 0;
bool sched = false;
......@@ -575,7 +576,7 @@ irqreturn_t ath_isr(int irq, void *dev)
* touch anything. Note this can happen early
* on if the IRQ is shared.
*/
if (test_bit(SC_OP_INVALID, &sc->sc_flags))
if (test_bit(ATH_OP_INVALID, &common->op_flags))
return IRQ_NONE;
/* shared irq, not for us */
......@@ -583,7 +584,7 @@ irqreturn_t ath_isr(int irq, void *dev)
if (!ath9k_hw_intrpend(ah))
return IRQ_NONE;
if (test_bit(SC_OP_HW_RESET, &sc->sc_flags)) {
if (test_bit(ATH_OP_HW_RESET, &common->op_flags)) {
ath9k_hw_kill_interrupts(ah);
return IRQ_HANDLED;
}
......@@ -684,10 +685,11 @@ int ath_reset(struct ath_softc *sc)
void ath9k_queue_reset(struct ath_softc *sc, enum ath_reset_type type)
{
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
#ifdef CONFIG_ATH9K_DEBUGFS
RESET_STAT_INC(sc, type);
#endif
set_bit(SC_OP_HW_RESET, &sc->sc_flags);
set_bit(ATH_OP_HW_RESET, &common->op_flags);
ieee80211_queue_work(sc->hw, &sc->hw_reset_work);
}
......@@ -768,7 +770,7 @@ static int ath9k_start(struct ieee80211_hw *hw)
ath_mci_enable(sc);
clear_bit(SC_OP_INVALID, &sc->sc_flags);
clear_bit(ATH_OP_INVALID, &common->op_flags);
sc->sc_ah->is_monitoring = false;
if (!ath_complete_reset(sc, false))
......@@ -885,7 +887,7 @@ static void ath9k_stop(struct ieee80211_hw *hw)
ath_cancel_work(sc);
if (test_bit(SC_OP_INVALID, &sc->sc_flags)) {
if (test_bit(ATH_OP_INVALID, &common->op_flags)) {
ath_dbg(common, ANY, "Device not present\n");
mutex_unlock(&sc->mutex);
return;
......@@ -940,7 +942,7 @@ static void ath9k_stop(struct ieee80211_hw *hw)
ath9k_ps_restore(sc);
set_bit(SC_OP_INVALID, &sc->sc_flags);
set_bit(ATH_OP_INVALID, &common->op_flags);
sc->ps_idle = prev_idle;
mutex_unlock(&sc->mutex);
......@@ -1081,7 +1083,7 @@ static void ath9k_calculate_summary_state(struct ieee80211_hw *hw,
*/
if (ah->opmode == NL80211_IFTYPE_STATION &&
old_opmode == NL80211_IFTYPE_AP &&
test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) {
test_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags)) {
ieee80211_iterate_active_interfaces_atomic(
sc->hw, IEEE80211_IFACE_ITER_RESUME_ALL,
ath9k_sta_vif_iter, sc);
......@@ -1590,7 +1592,7 @@ static void ath9k_set_assoc_state(struct ath_softc *sc,
struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
unsigned long flags;
set_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags);
set_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags);
avp->primary_sta_vif = true;
/*
......@@ -1625,8 +1627,9 @@ static void ath9k_bss_assoc_iter(void *data, u8 *mac, struct ieee80211_vif *vif)
{
struct ath_softc *sc = data;
struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
if (test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags))
if (test_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags))
return;
if (bss_conf->assoc)
......@@ -1657,18 +1660,18 @@ static void ath9k_bss_info_changed(struct ieee80211_hw *hw,
bss_conf->bssid, bss_conf->assoc);
if (avp->primary_sta_vif && !bss_conf->assoc) {
clear_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags);
clear_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags);
avp->primary_sta_vif = false;
if (ah->opmode == NL80211_IFTYPE_STATION)
clear_bit(SC_OP_BEACONS, &sc->sc_flags);
clear_bit(ATH_OP_BEACONS, &common->op_flags);
}
ieee80211_iterate_active_interfaces_atomic(
sc->hw, IEEE80211_IFACE_ITER_RESUME_ALL,
ath9k_bss_assoc_iter, sc);
if (!test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags) &&
if (!test_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags) &&
ah->opmode == NL80211_IFTYPE_STATION) {
memset(common->curbssid, 0, ETH_ALEN);
common->curaid = 0;
......@@ -1897,7 +1900,7 @@ static void ath9k_flush(struct ieee80211_hw *hw, u32 queues, bool drop)
return;
}
if (test_bit(SC_OP_INVALID, &sc->sc_flags)) {
if (test_bit(ATH_OP_INVALID, &common->op_flags)) {
ath_dbg(common, ANY, "Device not present\n");
mutex_unlock(&sc->mutex);
return;
......@@ -2070,13 +2073,15 @@ static int ath9k_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, u32 *rx_ant)
static void ath9k_sw_scan_start(struct ieee80211_hw *hw)
{
struct ath_softc *sc = hw->priv;
set_bit(SC_OP_SCANNING, &sc->sc_flags);
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
set_bit(ATH_OP_SCANNING, &common->op_flags);
}
static void ath9k_sw_scan_complete(struct ieee80211_hw *hw)
{
struct ath_softc *sc = hw->priv;
clear_bit(SC_OP_SCANNING, &sc->sc_flags);
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
clear_bit(ATH_OP_SCANNING, &common->op_flags);
}
static void ath9k_channel_switch_beacon(struct ieee80211_hw *hw,
......
......@@ -555,7 +555,7 @@ void ath_mci_intr(struct ath_softc *sc)
mci_int_rxmsg &= ~AR_MCI_INTERRUPT_RX_MSG_GPM;
while (more_data == MCI_GPM_MORE) {
if (test_bit(SC_OP_HW_RESET, &sc->sc_flags))
if (test_bit(ATH_OP_HW_RESET, &common->op_flags))
return;
pgpm = mci->gpm_buf.bf_addr;
......
......@@ -784,6 +784,7 @@ static int ath_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct ath_softc *sc;
struct ieee80211_hw *hw;
struct ath_common *common;
u8 csz;
u32 val;
int ret = 0;
......@@ -858,9 +859,6 @@ static int ath_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
sc->mem = pcim_iomap_table(pdev)[0];
sc->driver_data = id->driver_data;
/* Will be cleared in ath9k_start() */
set_bit(SC_OP_INVALID, &sc->sc_flags);
ret = request_irq(pdev->irq, ath_isr, IRQF_SHARED, "ath9k", sc);
if (ret) {
dev_err(&pdev->dev, "request_irq failed\n");
......@@ -879,6 +877,10 @@ static int ath_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
wiphy_info(hw->wiphy, "%s mem=0x%lx, irq=%d\n",
hw_name, (unsigned long)sc->mem, pdev->irq);
/* Will be cleared in ath9k_start() */
common = ath9k_hw_common(sc->sc_ah);
set_bit(ATH_OP_INVALID, &common->op_flags);
return 0;
err_init:
......
......@@ -108,7 +108,7 @@ static int ath9k_tx99_init(struct ath_softc *sc)
struct ath_tx_control txctl;
int r;
if (test_bit(SC_OP_INVALID, &sc->sc_flags)) {
if (test_bit(ATH_OP_INVALID, &common->op_flags)) {
ath_err(common,
"driver is in invalid state unable to use TX99");
return -EINVAL;
......
......@@ -198,7 +198,7 @@ int ath9k_suspend(struct ieee80211_hw *hw,
ath_cancel_work(sc);
ath_stop_ani(sc);
if (test_bit(SC_OP_INVALID, &sc->sc_flags)) {
if (test_bit(ATH_OP_INVALID, &common->op_flags)) {
ath_dbg(common, ANY, "Device not present\n");
ret = -EINVAL;
goto fail_wow;
......@@ -224,7 +224,7 @@ int ath9k_suspend(struct ieee80211_hw *hw,
* STA.
*/
if (!test_bit(SC_OP_PRIM_STA_VIF, &sc->sc_flags)) {
if (!test_bit(ATH_OP_PRIM_STA_VIF, &common->op_flags)) {
ath_dbg(common, WOW, "None of the STA vifs are associated\n");
ret = 1;
goto fail_wow;
......
......@@ -1699,7 +1699,7 @@ int ath_cabq_update(struct ath_softc *sc)
ath9k_hw_get_txq_props(sc->sc_ah, qnum, &qi);
qi.tqi_readyTime = (cur_conf->beacon_interval *
qi.tqi_readyTime = (TU_TO_USEC(cur_conf->beacon_interval) *
ATH_CABQ_READY_TIME) / 100;
ath_txq_update(sc, qnum, &qi);
......@@ -1769,7 +1769,7 @@ bool ath_drain_all_txq(struct ath_softc *sc)
int i;
u32 npend = 0;
if (test_bit(SC_OP_INVALID, &sc->sc_flags))
if (test_bit(ATH_OP_INVALID, &common->op_flags))
return true;
ath9k_hw_abort_tx_dma(ah);
......@@ -1817,11 +1817,12 @@ void ath_tx_cleanupq(struct ath_softc *sc, struct ath_txq *txq)
*/
void ath_txq_schedule(struct ath_softc *sc, struct ath_txq *txq)
{
struct ath_common *common = ath9k_hw_common(sc->sc_ah);
struct ath_atx_ac *ac, *last_ac;
struct ath_atx_tid *tid, *last_tid;
bool sent = false;
if (test_bit(SC_OP_HW_RESET, &sc->sc_flags) ||
if (test_bit(ATH_OP_HW_RESET, &common->op_flags) ||
list_empty(&txq->axq_acq))
return;
......@@ -2471,7 +2472,7 @@ static void ath_tx_processq(struct ath_softc *sc, struct ath_txq *txq)
ath_txq_lock(sc, txq);
for (;;) {
if (test_bit(SC_OP_HW_RESET, &sc->sc_flags))
if (test_bit(ATH_OP_HW_RESET, &common->op_flags))
break;
if (list_empty(&txq->axq_q)) {
......@@ -2554,7 +2555,7 @@ void ath_tx_edma_tasklet(struct ath_softc *sc)
int status;
for (;;) {
if (test_bit(SC_OP_HW_RESET, &sc->sc_flags))
if (test_bit(ATH_OP_HW_RESET, &common->op_flags))
break;
status = ath9k_hw_txprocdesc(ah, NULL, (void *)&ts);
......
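Note: the ath9k hunks above all follow one pattern — per-driver SC_OP_* bits kept in sc->sc_flags are replaced by shared ATH_OP_* bits kept in ath_common::op_flags, so ath9k and ath9k_htc consult the same state. Below is a minimal userspace sketch of that pattern only; the enum, structure name and helpers are illustrative stand-ins (the real driver uses struct ath_common and the kernel's atomic set_bit()/clear_bit()/test_bit()), not the driver's definitions.

#include <stdio.h>

/* Illustrative stand-ins for the shared ATH_OP_* flag bits. */
enum {
	ATH_OP_INVALID,
	ATH_OP_BEACONS,
	ATH_OP_SCANNING,
	ATH_OP_HW_RESET,
};

struct ath_common_sketch {
	unsigned long op_flags;		/* shared between both drivers */
};

/* Non-atomic equivalents of the kernel's set_bit()/clear_bit()/test_bit(). */
static void op_set(struct ath_common_sketch *c, int bit)
{
	c->op_flags |= (1UL << bit);
}

static void op_clear(struct ath_common_sketch *c, int bit)
{
	c->op_flags &= ~(1UL << bit);
}

static int op_test(const struct ath_common_sketch *c, int bit)
{
	return !!(c->op_flags & (1UL << bit));
}

int main(void)
{
	struct ath_common_sketch common = { .op_flags = 0 };

	op_set(&common, ATH_OP_SCANNING);	/* cf. ath9k_sw_scan_start() */
	printf("scanning=%d\n", op_test(&common, ATH_OP_SCANNING));
	op_clear(&common, ATH_OP_SCANNING);	/* cf. ath9k_sw_scan_complete() */
	printf("scanning=%d\n", op_test(&common, ATH_OP_SCANNING));
	return 0;
}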
......@@ -179,7 +179,7 @@ static int wil_cfg80211_get_station(struct wiphy *wiphy,
int cid = wil_find_cid(wil, mac);
wil_info(wil, "%s(%pM) CID %d\n", __func__, mac, cid);
wil_dbg_misc(wil, "%s(%pM) CID %d\n", __func__, mac, cid);
if (cid < 0)
return cid;
......@@ -218,7 +218,7 @@ static int wil_cfg80211_dump_station(struct wiphy *wiphy,
return -ENOENT;
memcpy(mac, wil->sta[cid].addr, ETH_ALEN);
wil_info(wil, "%s(%pM) CID %d\n", __func__, mac, cid);
wil_dbg_misc(wil, "%s(%pM) CID %d\n", __func__, mac, cid);
rc = wil_cid_fill_sinfo(wil, cid, sinfo);
......@@ -265,6 +265,7 @@ static int wil_cfg80211_scan(struct wiphy *wiphy,
u16 chnl[4];
} __packed cmd;
uint i, n;
int rc;
if (wil->scan_request) {
wil_err(wil, "Already scanning\n");
......@@ -282,7 +283,7 @@ static int wil_cfg80211_scan(struct wiphy *wiphy,
/* FW don't support scan after connection attempt */
if (test_bit(wil_status_dontscan, &wil->status)) {
wil_err(wil, "Scan after connect attempt not supported\n");
wil_err(wil, "Can't scan now\n");
return -EBUSY;
}
......@@ -305,8 +306,13 @@ static int wil_cfg80211_scan(struct wiphy *wiphy,
request->channels[i]->center_freq);
}
return wmi_send(wil, WMI_START_SCAN_CMDID, &cmd, sizeof(cmd.cmd) +
rc = wmi_send(wil, WMI_START_SCAN_CMDID, &cmd, sizeof(cmd.cmd) +
cmd.cmd.num_channels * sizeof(cmd.cmd.channel_list[0]));
if (rc)
wil->scan_request = NULL;
return rc;
}
static int wil_cfg80211_connect(struct wiphy *wiphy,
......@@ -321,6 +327,10 @@ static int wil_cfg80211_connect(struct wiphy *wiphy,
int ch;
int rc = 0;
if (test_bit(wil_status_fwconnecting, &wil->status) ||
test_bit(wil_status_fwconnected, &wil->status))
return -EALREADY;
bss = cfg80211_get_bss(wiphy, sme->channel, sme->bssid,
sme->ssid, sme->ssid_len,
WLAN_CAPABILITY_ESS, WLAN_CAPABILITY_ESS);
......@@ -402,10 +412,7 @@ static int wil_cfg80211_connect(struct wiphy *wiphy,
memcpy(conn.bssid, bss->bssid, ETH_ALEN);
memcpy(conn.dst_mac, bss->bssid, ETH_ALEN);
/*
* FW don't support scan after connection attempt
*/
set_bit(wil_status_dontscan, &wil->status);
set_bit(wil_status_fwconnecting, &wil->status);
rc = wmi_send(wil, WMI_CONNECT_CMDID, &conn, sizeof(conn));
......@@ -414,7 +421,6 @@ static int wil_cfg80211_connect(struct wiphy *wiphy,
mod_timer(&wil->connect_timer,
jiffies + msecs_to_jiffies(2000));
} else {
clear_bit(wil_status_dontscan, &wil->status);
clear_bit(wil_status_fwconnecting, &wil->status);
}
......@@ -603,18 +609,20 @@ static int wil_cfg80211_start_ap(struct wiphy *wiphy,
if (wil_fix_bcon(wil, bcon))
wil_dbg_misc(wil, "Fixed bcon\n");
mutex_lock(&wil->mutex);
rc = wil_reset(wil);
if (rc)
return rc;
goto out;
/* Rx VRING. */
rc = wil_rx_init(wil);
if (rc)
return rc;
goto out;
rc = wmi_set_ssid(wil, info->ssid_len, info->ssid);
if (rc)
return rc;
goto out;
/* MAC address - pre-requisite for other commands */
wmi_set_mac_address(wil, ndev->dev_addr);
......@@ -638,11 +646,13 @@ static int wil_cfg80211_start_ap(struct wiphy *wiphy,
rc = wmi_pcp_start(wil, info->beacon_interval, wmi_nettype,
channel->hw_value);
if (rc)
return rc;
goto out;
netif_carrier_on(ndev);
out:
mutex_unlock(&wil->mutex);
return rc;
}
......@@ -652,8 +662,11 @@ static int wil_cfg80211_stop_ap(struct wiphy *wiphy,
int rc = 0;
struct wil6210_priv *wil = wiphy_to_wil(wiphy);
mutex_lock(&wil->mutex);
rc = wmi_pcp_stop(wil);
mutex_unlock(&wil->mutex);
return rc;
}
......@@ -661,7 +674,11 @@ static int wil_cfg80211_del_station(struct wiphy *wiphy,
struct net_device *dev, u8 *mac)
{
struct wil6210_priv *wil = wiphy_to_wil(wiphy);
mutex_lock(&wil->mutex);
wil6210_disconnect(wil, mac);
mutex_unlock(&wil->mutex);
return 0;
}
......
......@@ -398,6 +398,44 @@ static const struct file_operations fops_reset = {
.open = simple_open,
};
static void wil_seq_hexdump(struct seq_file *s, void *p, int len,
const char *prefix)
{
char printbuf[16 * 3 + 2];
int i = 0;
while (i < len) {
int l = min(len - i, 16);
hex_dump_to_buffer(p + i, l, 16, 1, printbuf,
sizeof(printbuf), false);
seq_printf(s, "%s%s\n", prefix, printbuf);
i += l;
}
}
static void wil_seq_print_skb(struct seq_file *s, struct sk_buff *skb)
{
int i = 0;
int len = skb_headlen(skb);
void *p = skb->data;
int nr_frags = skb_shinfo(skb)->nr_frags;
seq_printf(s, " len = %d\n", len);
wil_seq_hexdump(s, p, len, " : ");
if (nr_frags) {
seq_printf(s, " nr_frags = %d\n", nr_frags);
for (i = 0; i < nr_frags; i++) {
const struct skb_frag_struct *frag =
&skb_shinfo(skb)->frags[i];
len = skb_frag_size(frag);
p = skb_frag_address_safe(frag);
seq_printf(s, " [%2d] : len = %d\n", i, len);
wil_seq_hexdump(s, p, len, " : ");
}
}
}
/*---------Tx/Rx descriptor------------*/
static int wil_txdesc_debugfs_show(struct seq_file *s, void *data)
{
......@@ -438,26 +476,9 @@ static int wil_txdesc_debugfs_show(struct seq_file *s, void *data)
seq_printf(s, " SKB = %p\n", skb);
if (skb) {
char printbuf[16 * 3 + 2];
int i = 0;
int len = le16_to_cpu(d->dma.length);
void *p = skb->data;
if (len != skb_headlen(skb)) {
seq_printf(s, "!!! len: desc = %d skb = %d\n",
len, skb_headlen(skb));
len = min_t(int, len, skb_headlen(skb));
}
seq_printf(s, " len = %d\n", len);
while (i < len) {
int l = min(len - i, 16);
hex_dump_to_buffer(p + i, l, 16, 1, printbuf,
sizeof(printbuf), false);
seq_printf(s, " : %s\n", printbuf);
i += l;
}
skb_get(skb);
wil_seq_print_skb(s, skb);
kfree_skb(skb);
}
seq_printf(s, "}\n");
} else {
......@@ -631,7 +652,8 @@ static int wil_sta_debugfs_show(struct seq_file *s, void *data)
status = "connected";
break;
}
seq_printf(s, "[%d] %pM %s\n", i, p->addr, status);
seq_printf(s, "[%d] %pM %s%s\n", i, p->addr, status,
(p->data_port_open ? " data_port_open" : ""));
if (p->status == wil_sta_connected) {
for (tid = 0; tid < WIL_STA_TID_NUM; tid++) {
......
......@@ -195,8 +195,12 @@ static irqreturn_t wil6210_irq_rx(int irq, void *cookie)
if (isr & BIT_DMA_EP_RX_ICR_RX_DONE) {
wil_dbg_irq(wil, "RX done\n");
isr &= ~BIT_DMA_EP_RX_ICR_RX_DONE;
wil_dbg_txrx(wil, "NAPI schedule\n");
napi_schedule(&wil->napi_rx);
if (test_bit(wil_status_reset_done, &wil->status)) {
wil_dbg_txrx(wil, "NAPI(Rx) schedule\n");
napi_schedule(&wil->napi_rx);
} else {
wil_err(wil, "Got Rx interrupt while in reset\n");
}
}
if (isr)
......@@ -226,10 +230,15 @@ static irqreturn_t wil6210_irq_tx(int irq, void *cookie)
if (isr & BIT_DMA_EP_TX_ICR_TX_DONE) {
wil_dbg_irq(wil, "TX done\n");
napi_schedule(&wil->napi_tx);
isr &= ~BIT_DMA_EP_TX_ICR_TX_DONE;
/* clear also all VRING interrupts */
isr &= ~(BIT(25) - 1UL);
if (test_bit(wil_status_reset_done, &wil->status)) {
wil_dbg_txrx(wil, "NAPI(Tx) schedule\n");
napi_schedule(&wil->napi_tx);
} else {
wil_err(wil, "Got Tx interrupt while in reset\n");
}
}
if (isr)
......@@ -319,6 +328,7 @@ static irqreturn_t wil6210_irq_misc_thread(int irq, void *cookie)
if (isr & ISR_MISC_FW_ERROR) {
wil_notify_fw_error(wil);
isr &= ~ISR_MISC_FW_ERROR;
wil_fw_error_recovery(wil);
}
if (isr & ISR_MISC_MBOX_EVT) {
......@@ -493,6 +503,23 @@ static int wil6210_request_3msi(struct wil6210_priv *wil, int irq)
return rc;
}
/* can't use wil_ioread32_and_clear because ICC value is not set yet */
static inline void wil_clear32(void __iomem *addr)
{
u32 x = ioread32(addr);
iowrite32(x, addr);
}
void wil6210_clear_irq(struct wil6210_priv *wil)
{
wil_clear32(wil->csr + HOSTADDR(RGF_DMA_EP_RX_ICR) +
offsetof(struct RGF_ICR, ICR));
wil_clear32(wil->csr + HOSTADDR(RGF_DMA_EP_TX_ICR) +
offsetof(struct RGF_ICR, ICR));
wil_clear32(wil->csr + HOSTADDR(RGF_DMA_EP_MISC_ICR) +
offsetof(struct RGF_ICR, ICR));
}
int wil6210_init_irq(struct wil6210_priv *wil, int irq)
{
......
......@@ -127,8 +127,9 @@ void *wil_if_alloc(struct device *dev, void __iomem *csr)
ndev->netdev_ops = &wil_netdev_ops;
ndev->ieee80211_ptr = wdev;
ndev->hw_features = NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
ndev->features |= NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
ndev->hw_features = NETIF_F_HW_CSUM | NETIF_F_RXCSUM |
NETIF_F_SG | NETIF_F_GRO;
ndev->features |= ndev->hw_features;
SET_NETDEV_DEV(ndev, wiphy_dev(wdev->wiphy));
wdev->netdev = ndev;
......
......@@ -68,10 +68,14 @@ static int wil_if_pcie_enable(struct wil6210_priv *wil)
goto stop_master;
/* need reset here to obtain MAC */
mutex_lock(&wil->mutex);
rc = wil_reset(wil);
mutex_unlock(&wil->mutex);
if (rc)
goto release_irq;
wil_info(wil, "HW version: 0x%08x\n", wil->hw_version);
return 0;
release_irq:
......@@ -149,6 +153,7 @@ static int wil_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id)
pci_set_drvdata(pdev, wil);
wil->pdev = pdev;
wil6210_clear_irq(wil);
/* FW should raise IRQ when ready */
rc = wil_if_pcie_enable(wil);
if (rc) {
......
......@@ -74,23 +74,21 @@ struct RGF_ICR {
} __packed;
/* registers - FW addresses */
#define RGF_USER_USER_SCRATCH_PAD (0x8802bc)
#define RGF_USER_USER_ICR (0x880b4c) /* struct RGF_ICR */
#define BIT_USER_USER_ICR_SW_INT_2 BIT(18)
#define RGF_USER_CLKS_CTL_SW_RST_MASK_0 (0x880b14)
#define RGF_USER_MAC_CPU_0 (0x8801fc)
#define RGF_USER_HW_MACHINE_STATE (0x8801dc)
#define HW_MACHINE_BOOT_DONE (0x3fffffd)
#define RGF_USER_USER_CPU_0 (0x8801e0)
#define RGF_USER_MAC_CPU_0 (0x8801fc)
#define RGF_USER_USER_SCRATCH_PAD (0x8802bc)
#define RGF_USER_FW_REV_ID (0x880a8c) /* chip revision */
#define RGF_USER_CLKS_CTL_0 (0x880abc)
#define BIT_USER_CLKS_RST_PWGD BIT(11) /* reset on "power good" */
#define RGF_USER_CLKS_CTL_SW_RST_VEC_0 (0x880b04)
#define RGF_USER_CLKS_CTL_SW_RST_VEC_1 (0x880b08)
#define RGF_USER_CLKS_CTL_SW_RST_VEC_2 (0x880b0c)
#define RGF_USER_CLKS_CTL_SW_RST_VEC_3 (0x880b10)
#define RGF_DMA_PSEUDO_CAUSE (0x881c68)
#define RGF_DMA_PSEUDO_CAUSE_MASK_SW (0x881c6c)
#define RGF_DMA_PSEUDO_CAUSE_MASK_FW (0x881c70)
#define BIT_DMA_PSEUDO_CAUSE_RX BIT(0)
#define BIT_DMA_PSEUDO_CAUSE_TX BIT(1)
#define BIT_DMA_PSEUDO_CAUSE_MISC BIT(2)
#define RGF_USER_CLKS_CTL_SW_RST_MASK_0 (0x880b14)
#define RGF_USER_USER_ICR (0x880b4c) /* struct RGF_ICR */
#define BIT_USER_USER_ICR_SW_INT_2 BIT(18)
#define RGF_DMA_EP_TX_ICR (0x881bb4) /* struct RGF_ICR */
#define BIT_DMA_EP_TX_ICR_TX_DONE BIT(0)
......@@ -105,13 +103,22 @@ struct RGF_ICR {
/* Interrupt moderation control */
#define RGF_DMA_ITR_CNT_TRSH (0x881c5c)
#define RGF_DMA_ITR_CNT_DATA (0x881c60)
#define RGF_DMA_ITR_CNT_CRL (0x881C64)
#define RGF_DMA_ITR_CNT_CRL (0x881c64)
#define BIT_DMA_ITR_CNT_CRL_EN BIT(0)
#define BIT_DMA_ITR_CNT_CRL_EXT_TICK BIT(1)
#define BIT_DMA_ITR_CNT_CRL_FOREVER BIT(2)
#define BIT_DMA_ITR_CNT_CRL_CLR BIT(3)
#define BIT_DMA_ITR_CNT_CRL_REACH_TRSH BIT(4)
#define RGF_DMA_PSEUDO_CAUSE (0x881c68)
#define RGF_DMA_PSEUDO_CAUSE_MASK_SW (0x881c6c)
#define RGF_DMA_PSEUDO_CAUSE_MASK_FW (0x881c70)
#define BIT_DMA_PSEUDO_CAUSE_RX BIT(0)
#define BIT_DMA_PSEUDO_CAUSE_TX BIT(1)
#define BIT_DMA_PSEUDO_CAUSE_MISC BIT(2)
#define RGF_PCIE_LOS_COUNTER_CTL (0x882dc4)
/* popular locations */
#define HOST_MBOX HOSTADDR(RGF_USER_USER_SCRATCH_PAD)
#define HOST_SW_INT (HOSTADDR(RGF_USER_USER_ICR) + \
......@@ -125,6 +132,31 @@ struct RGF_ICR {
/* Hardware definitions end */
/**
* mk_cidxtid - construct @cidxtid field
* @cid: CID value
* @tid: TID value
*
* @cidxtid field encoded as bits 0..3 - CID; 4..7 - TID
*/
static inline u8 mk_cidxtid(u8 cid, u8 tid)
{
return ((tid & 0xf) << 4) | (cid & 0xf);
}
/**
* parse_cidxtid - parse @cidxtid field
* @cid: store CID value here
* @tid: store TID value here
*
* @cidxtid field encoded as bits 0..3 - CID; 4..7 - TID
*/
static inline void parse_cidxtid(u8 cidxtid, u8 *cid, u8 *tid)
{
*cid = cidxtid & 0xf;
*tid = (cidxtid >> 4) & 0xf;
}
struct wil6210_mbox_ring {
u32 base;
u16 entry_size; /* max. size of mbox entry, incl. all headers */
......@@ -184,12 +216,19 @@ struct pending_wmi_event {
} __packed event;
};
enum { /* for wil_ctx.mapped_as */
wil_mapped_as_none = 0,
wil_mapped_as_single = 1,
wil_mapped_as_page = 2,
};
/**
* struct wil_ctx - software context for Vring descriptor
*/
struct wil_ctx {
struct sk_buff *skb;
u8 mapped_as_page:1;
u8 nr_frags;
u8 mapped_as;
};
union vring_desc;
......@@ -204,6 +243,14 @@ struct vring {
struct wil_ctx *ctx; /* ctx[size] - software context */
};
/**
* Additional data for Tx Vring
*/
struct vring_tx_data {
int enabled;
};
enum { /* for wil6210_priv.status */
wil_status_fwready = 0,
wil_status_fwconnecting,
......@@ -211,6 +258,7 @@ enum { /* for wil6210_priv.status */
wil_status_dontscan,
wil_status_reset_done,
wil_status_irqen, /* FIXME: interrupts enabled - for debug */
wil_status_napi_en, /* NAPI enabled protected by wil->mutex */
};
struct pci_dev;
......@@ -296,6 +344,7 @@ struct wil_sta_info {
u8 addr[ETH_ALEN];
enum wil_sta_status status;
struct wil_net_stats stats;
bool data_port_open; /* can send any data, not only EAPOL */
/* Rx BACK */
struct wil_tid_ampdu_rx *tid_rx[WIL_STA_TID_NUM];
unsigned long tid_rx_timer_expired[BITS_TO_LONGS(WIL_STA_TID_NUM)];
......@@ -309,6 +358,7 @@ struct wil6210_priv {
void __iomem *csr;
ulong status;
u32 fw_version;
u32 hw_version;
u8 n_mids; /* number of additional MIDs as reported by FW */
/* profile */
u32 monitor_flags;
......@@ -329,6 +379,7 @@ struct wil6210_priv {
struct workqueue_struct *wmi_wq_conn; /* for connect worker */
struct work_struct connect_worker;
struct work_struct disconnect_worker;
struct work_struct fw_error_worker; /* for FW error recovery */
struct timer_list connect_timer;
int pending_connect_cid;
struct list_head pending_wmi_ev;
......@@ -343,6 +394,7 @@ struct wil6210_priv {
/* DMA related */
struct vring vring_rx;
struct vring vring_tx[WIL6210_MAX_TX_RINGS];
struct vring_tx_data vring_tx_data[WIL6210_MAX_TX_RINGS];
u8 vring2cid_tid[WIL6210_MAX_TX_RINGS][2]; /* [0] - CID, [1] - TID */
struct wil_sta_info sta[WIL6210_MAX_CID];
/* scan */
......@@ -406,6 +458,7 @@ void wil_if_remove(struct wil6210_priv *wil);
int wil_priv_init(struct wil6210_priv *wil);
void wil_priv_deinit(struct wil6210_priv *wil);
int wil_reset(struct wil6210_priv *wil);
void wil_fw_error_recovery(struct wil6210_priv *wil);
void wil_link_on(struct wil6210_priv *wil);
void wil_link_off(struct wil6210_priv *wil);
int wil_up(struct wil6210_priv *wil);
......@@ -439,6 +492,7 @@ int wmi_rxon(struct wil6210_priv *wil, bool on);
int wmi_get_temperature(struct wil6210_priv *wil, u32 *t_m, u32 *t_r);
int wmi_disconnect_sta(struct wil6210_priv *wil, const u8 *mac, u16 reason);
void wil6210_clear_irq(struct wil6210_priv *wil);
int wil6210_init_irq(struct wil6210_priv *wil, int irq);
void wil6210_fini_irq(struct wil6210_priv *wil, int irq);
void wil6210_disable_irq(struct wil6210_priv *wil);
......
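Note: the wil6210.h hunk above adds mk_cidxtid()/parse_cidxtid() to pack a connection ID and TID into one byte (CID in bits 0..3, TID in bits 4..7). A standalone round-trip check of those helpers is sketched below, with the kernel's u8 swapped for uint8_t so it builds outside the tree.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Same packing as the helpers added to wil6210.h above. */
static inline uint8_t mk_cidxtid(uint8_t cid, uint8_t tid)
{
	return ((tid & 0xf) << 4) | (cid & 0xf);
}

static inline void parse_cidxtid(uint8_t cidxtid, uint8_t *cid, uint8_t *tid)
{
	*cid = cidxtid & 0xf;
	*tid = (cidxtid >> 4) & 0xf;
}

int main(void)
{
	uint8_t cid, tid;

	parse_cidxtid(mk_cidxtid(5, 3), &cid, &tid);
	assert(cid == 5 && tid == 3);
	printf("cidxtid 0x%02x -> cid %u tid %u\n",
	       (unsigned)mk_cidxtid(5, 3), cid, tid);
	return 0;
}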
......@@ -504,6 +504,7 @@ static void brcmf_chip_get_raminfo(struct brcmf_chip_priv *ci)
ci->pub.ramsize = 0x3c000;
break;
case BCM4339_CHIP_ID:
case BCM4354_CHIP_ID:
ci->pub.ramsize = 0xc0000;
ci->pub.rambase = 0x180000;
break;
......@@ -1006,6 +1007,10 @@ bool brcmf_chip_sr_capable(struct brcmf_chip *pub)
chip = container_of(pub, struct brcmf_chip_priv, pub);
switch (pub->chip) {
case BCM4354_CHIP_ID:
/* explicitly check SR engine enable bit */
pmu_cc3_mask = BIT(2);
/* fall-through */
case BCM43241_CHIP_ID:
case BCM4335_CHIP_ID:
case BCM4339_CHIP_ID:
......
(The remaining 94 file diffs are collapsed and not shown.)