Commit 0bae60fc authored by Linus Torvalds

Merge tag 'mmc-v4.16' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc

Pull MMC updates from Ulf Hansson:
 "There are two major achievements for MMC in this release, which
  deserves to be specially highlighted.

  First, we have converted the MMC block device from the legacy blk
  interface to the modern blkmq interface. Not only do we get all the
  nice effects of using blkmq, but it also means that fresh new code
  replaces old rusty code. Great news for everybody who cares about
  MMC/SD!

  It should also be noted that converting to blkmq has not been trivial,
  mostly because we have been carrying too many MMC-specific
  optimizations in the I/O request path, rather than striving to move
  them into the generic blk layer. Hopefully we won't make that mistake
  ever again.

  Special thanks to Adrian Hunter (Intel) and to Linus Walleij (Linaro),
  who both have been working on this for quite some time!

  Second, on top of the blkmq deployment, we have enabled full support
  for the eMMC command queueing feature, introduced in the eMMC v5.1
  spec. This also includes an implementation of a host driver library
  supporting the corresponding CQHCI hardware. Ideally, controllers that
  support CQHCI should only need some minor adaptations to make this
  work.
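
  To give a feel for those adaptations, here is a hedged sketch of the
  glue a host driver provides, written against the cqhci.h API that is
  part of this diff (the my_* names are invented for illustration):

      #include <linux/platform_device.h>
      #include <linux/mmc/host.h>
      #include "cqhci.h"

      static void my_cqe_enable(struct mmc_host *mmc)
      {
              /* program the controller for CQE transfers */
      }

      static void my_cqe_disable(struct mmc_host *mmc, bool recovery)
      {
              /* undo my_cqe_enable(); reset the controller on recovery */
      }

      static const struct cqhci_host_ops my_cqhci_ops = {
              .enable  = my_cqe_enable,
              .disable = my_cqe_disable,
      };

      static int my_add_cqe(struct platform_device *pdev,
                            struct mmc_host *mmc, bool dma64)
      {
              struct cqhci_host *cq_host = cqhci_pltfm_init(pdev);

              if (IS_ERR(cq_host))
                      return PTR_ERR(cq_host);

              cq_host->ops = &my_cqhci_ops;
              mmc->caps2 |= MMC_CAP2_CQE | MMC_CAP2_CQE_DCMD;

              /* the driver's interrupt handler forwards CQE interrupts
                 to cqhci_irq(mmc, intmask, cmd_error, data_error) */
              return cqhci_init(cq_host, mmc, dma64);
      }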

  So far, the sdhci-pci driver for the Intel GLKs and the
  sdhci-of-arasan driver used on the Rockchip RK3399 have enabled
  support for eMMC command queueing.

  Also worth highlighting: implementing the eMMC command queueing
  support has been a collaborative effort, with several people from
  Codeaurora, Rockchip, Intel and Linaro involved. The work has been
  driven by Adrian Hunter (Intel).

  Somewhat in the shadow of the above, here are the rest of the highlights:

  MMC core:
   - Don't remove non-removable cards during system suspend
   - Add a slot-gpio helper to check capability of GPIO WP detection

  MMC host:
   - sdhci: Cleanups and improvements of some wakeup related code
   - sdhci-pci-arasan: New variant to support Arasan PCI HW with integrated phy
   - sdhci-acpi: Avoid broken UHS transfer modes on Intel CHT
   - sdhci-acpi: Add support for ACPI HID of AMD Controller with HS400
   - sdhci_f_sdh30: Add ACPI support
   - sdhci-esdhc-imx: Enable/disable clock at runtime suspend/resume
   - sdhci-of-esdhc: A few minor fixes
   - mmci: Add support for new STM32 variant
   - renesas_sdhi: Enable R-Car D3 (r8a77995) support
   - tmio/renesas_sdhi: Re-structuring, cleanups and modernizations"

* tag 'mmc-v4.16' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (96 commits)
  mmc: mmci: fix error return code in mmci_probe()
  mmc: davinci: suppress error message on EPROBE_DEFER
  mmc: davinci: dont' use module_platform_driver_probe()
  mmc: tmio: hide unused tmio_mmc_clk_disable/tmio_mmc_clk_enable functions
  mmc: mmci: Add STM32 variant
  mmc: mmci: Add support for setting pad type via pinctrl
  mmc: mmci: Don't pretend all variants to have OPENDRAIN bit
  mmc: mmci: Don't pretend all variants to have MCI_STARBITERR flag
  mmc: mmci: Don't pretend all variants to have MMCIMASK1 register
  mmc: tmio: refactor .get_ro hook
  mmc: slot-gpio: add a helper to check capability of GPIO WP detection
  mmc: tmio: remove dma_ops from tmio_mmc_host_probe() argument
  mmc: tmio: move {tmio_}mmc_of_parse() to tmio_mmc_host_alloc()
  mmc: tmio: move clk_enable/disable out of tmio_mmc_host_probe()
  mmc: tmio: ioremap memory resource in tmio_mmc_host_alloc()
  mmc: sh_mmcif: remove redundant initialization of 'opc'
  mmc: sdhci: Rework sdhci_enable_irq_wakeups()
  mmc: sdhci: Handle failure of enable_irq_wake()
  mmc: sdhci: Stop exporting sdhci_enable_irq_wakeups()
  mmc: sdhci-pci: Use device wakeup capability to determine MMC_PM_WAKE_SDIO_IRQ capability
  ...
@@ -12,6 +12,8 @@ Required properties:
"mediatek,mt8173-mmc": for mmc host ip compatible with mt8173
"mediatek,mt2701-mmc": for mmc host ip compatible with mt2701
"mediatek,mt2712-mmc": for mmc host ip compatible with mt2712
"mediatek,mt7623-mmc", "mediatek,mt2701-mmc": for MT7623 SoC
- reg: physical base address of the controller and length
- interrupts: Should contain MSDC interrupt number
- clocks: Should contain phandle for the clock feeding the MMC controller
......
@@ -26,6 +26,7 @@ Required properties:
"renesas,sdhi-r8a7794" - SDHI IP on R8A7794 SoC
"renesas,sdhi-r8a7795" - SDHI IP on R8A7795 SoC
"renesas,sdhi-r8a7796" - SDHI IP on R8A7796 SoC
"renesas,sdhi-r8a77995" - SDHI IP on R8A77995 SoC
"renesas,sdhi-shmobile" - a generic sh-mobile SDHI controller
"renesas,rcar-gen1-sdhi" - a generic R-Car Gen1 SDHI controller
"renesas,rcar-gen2-sdhi" - a generic R-Car Gen2 or RZ/G1
......
(This diff has been collapsed.)
@@ -5,6 +5,16 @@
struct mmc_queue;
struct request;
void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
void mmc_blk_cqe_recovery(struct mmc_queue *mq);
enum mmc_issued;
enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req);
void mmc_blk_mq_complete(struct request *req);
void mmc_blk_mq_recovery(struct mmc_queue *mq);
struct work_struct;
void mmc_blk_mq_complete_work(struct work_struct *work);
#endif
@@ -351,8 +351,6 @@ int mmc_add_card(struct mmc_card *card)
#ifdef CONFIG_DEBUG_FS
mmc_add_card_debugfs(card);
#endif
mmc_init_context_info(card->host);
card->dev.of_node = mmc_of_find_child_device(card->host, 0);
device_enable_async_suspend(&card->dev);
......
@@ -341,6 +341,8 @@ int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
{
int err;
init_completion(&mrq->cmd_completion);
mmc_retune_hold(host);
if (mmc_card_removed(host->card))
@@ -361,20 +363,6 @@ int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
}
EXPORT_SYMBOL(mmc_start_request);
/*
* mmc_wait_data_done() - done callback for data request
* @mrq: done data request
*
* Wakes up mmc context, passed as a callback to host controller driver
*/
static void mmc_wait_data_done(struct mmc_request *mrq)
{
struct mmc_context_info *context_info = &mrq->host->context_info;
context_info->is_done_rcv = true;
wake_up_interruptible(&context_info->wait);
}
static void mmc_wait_done(struct mmc_request *mrq)
{
complete(&mrq->completion);
@@ -392,37 +380,6 @@ static inline void mmc_wait_ongoing_tfr_cmd(struct mmc_host *host)
wait_for_completion(&ongoing_mrq->cmd_completion);
}
/*
*__mmc_start_data_req() - starts data request
* @host: MMC host to start the request
* @mrq: data request to start
*
* Sets the done callback to be called when request is completed by the card.
* Starts data mmc request execution
* If an ongoing transfer is already in progress, wait for the command line
* to become available before sending another command.
*/
static int __mmc_start_data_req(struct mmc_host *host, struct mmc_request *mrq)
{
int err;
mmc_wait_ongoing_tfr_cmd(host);
mrq->done = mmc_wait_data_done;
mrq->host = host;
init_completion(&mrq->cmd_completion);
err = mmc_start_request(host, mrq);
if (err) {
mrq->cmd->error = err;
mmc_complete_cmd(mrq);
mmc_wait_data_done(mrq);
}
return err;
}
static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
{
int err;
@@ -432,8 +389,6 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
init_completion(&mrq->completion);
mrq->done = mmc_wait_done;
init_completion(&mrq->cmd_completion);
err = mmc_start_request(host, mrq);
if (err) {
mrq->cmd->error = err;
@@ -650,163 +605,10 @@ EXPORT_SYMBOL(mmc_cqe_recovery);
*/
bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq)
{
if (host->areq)
return host->context_info.is_done_rcv;
else
return completion_done(&mrq->completion);
return completion_done(&mrq->completion);
}
EXPORT_SYMBOL(mmc_is_req_done);
/**
* mmc_pre_req - Prepare for a new request
* @host: MMC host to prepare command
* @mrq: MMC request to prepare for
*
* mmc_pre_req() is called in prior to mmc_start_req() to let
* host prepare for the new request. Preparation of a request may be
* performed while another request is running on the host.
*/
static void mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq)
{
if (host->ops->pre_req)
host->ops->pre_req(host, mrq);
}
/**
* mmc_post_req - Post process a completed request
* @host: MMC host to post process command
* @mrq: MMC request to post process for
* @err: Error, if non zero, clean up any resources made in pre_req
*
* Let the host post process a completed request. Post processing of
* a request may be performed while another reuqest is running.
*/
static void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
int err)
{
if (host->ops->post_req)
host->ops->post_req(host, mrq, err);
}
/**
* mmc_finalize_areq() - finalize an asynchronous request
* @host: MMC host to finalize any ongoing request on
*
* Returns the status of the ongoing asynchronous request, but
* MMC_BLK_SUCCESS if no request was going on.
*/
static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host)
{
struct mmc_context_info *context_info = &host->context_info;
enum mmc_blk_status status;
if (!host->areq)
return MMC_BLK_SUCCESS;
while (1) {
wait_event_interruptible(context_info->wait,
(context_info->is_done_rcv ||
context_info->is_new_req));
if (context_info->is_done_rcv) {
struct mmc_command *cmd;
context_info->is_done_rcv = false;
cmd = host->areq->mrq->cmd;
if (!cmd->error || !cmd->retries ||
mmc_card_removed(host->card)) {
status = host->areq->err_check(host->card,
host->areq);
break; /* return status */
} else {
mmc_retune_recheck(host);
pr_info("%s: req failed (CMD%u): %d, retrying...\n",
mmc_hostname(host),
cmd->opcode, cmd->error);
cmd->retries--;
cmd->error = 0;
__mmc_start_request(host, host->areq->mrq);
continue; /* wait for done/new event again */
}
}
return MMC_BLK_NEW_REQUEST;
}
mmc_retune_release(host);
/*
* Check BKOPS urgency for each R1 response
*/
if (host->card && mmc_card_mmc(host->card) &&
((mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1) ||
(mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1B)) &&
(host->areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) {
mmc_start_bkops(host->card, true);
}
return status;
}
/**
* mmc_start_areq - start an asynchronous request
* @host: MMC host to start command
* @areq: asynchronous request to start
* @ret_stat: out parameter for status
*
* Start a new MMC custom command request for a host.
* If there is on ongoing async request wait for completion
* of that request and start the new one and return.
* Does not wait for the new request to complete.
*
* Returns the completed request, NULL in case of none completed.
* Wait for the an ongoing request (previoulsy started) to complete and
* return the completed request. If there is no ongoing request, NULL
* is returned without waiting. NULL is not an error condition.
*/
struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
struct mmc_async_req *areq,
enum mmc_blk_status *ret_stat)
{
enum mmc_blk_status status;
int start_err = 0;
struct mmc_async_req *previous = host->areq;
/* Prepare a new request */
if (areq)
mmc_pre_req(host, areq->mrq);
/* Finalize previous request */
status = mmc_finalize_areq(host);
if (ret_stat)
*ret_stat = status;
/* The previous request is still going on... */
if (status == MMC_BLK_NEW_REQUEST)
return NULL;
/* Fine so far, start the new request! */
if (status == MMC_BLK_SUCCESS && areq)
start_err = __mmc_start_data_req(host, areq->mrq);
/* Postprocess the old request at this point */
if (host->areq)
mmc_post_req(host, host->areq->mrq, 0);
/* Cancel a prepared request if it was not started. */
if ((status != MMC_BLK_SUCCESS || start_err) && areq)
mmc_post_req(host, areq->mrq, -EINVAL);
if (status != MMC_BLK_SUCCESS)
host->areq = NULL;
else
host->areq = areq;
return previous;
}
EXPORT_SYMBOL(mmc_start_areq);
/**
* mmc_wait_for_req - start a request and wait for completion
* @host: MMC host to start command
@@ -2959,6 +2761,14 @@ static int mmc_pm_notify(struct notifier_block *notify_block,
if (!err)
break;
if (!mmc_card_is_removable(host)) {
dev_warn(mmc_dev(host),
"pre_suspend failed for non-removable host: "
"%d\n", err);
/* Avoid removing non-removable hosts */
break;
}
/* Calling bus_ops->remove() with a claimed host can deadlock */
host->bus_ops->remove(host);
mmc_claim_host(host);
@@ -2994,22 +2804,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host)
}
#endif
/**
* mmc_init_context_info() - init synchronization context
* @host: mmc host
*
* Init struct context_info needed to implement asynchronous
* request mechanism, used by mmc core, host driver and mmc requests
* supplier.
*/
void mmc_init_context_info(struct mmc_host *host)
{
host->context_info.is_new_req = false;
host->context_info.is_done_rcv = false;
host->context_info.is_waiting_last_req = false;
init_waitqueue_head(&host->context_info.wait);
}
static int __init mmc_init(void)
{
int ret;
......
@@ -62,12 +62,10 @@ void mmc_set_initial_state(struct mmc_host *host);
static inline void mmc_delay(unsigned int ms)
{
if (ms < 1000 / HZ) {
cond_resched();
mdelay(ms);
} else {
if (ms <= 20)
usleep_range(ms * 1000, ms * 1250);
else
msleep(ms);
}
}
void mmc_rescan(struct work_struct *work);
@@ -91,8 +89,6 @@ void mmc_remove_host_debugfs(struct mmc_host *host);
void mmc_add_card_debugfs(struct mmc_card *card);
void mmc_remove_card_debugfs(struct mmc_card *card);
void mmc_init_context_info(struct mmc_host *host);
int mmc_execute_tuning(struct mmc_card *card);
int mmc_hs200_to_hs400(struct mmc_card *card);
int mmc_hs400_to_hs200(struct mmc_card *card);
@@ -110,12 +106,6 @@ bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq);
int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq);
struct mmc_async_req;
struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
struct mmc_async_req *areq,
enum mmc_blk_status *ret_stat);
int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr,
unsigned int arg);
int mmc_can_erase(struct mmc_card *card);
@@ -152,4 +142,35 @@ int mmc_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq);
void mmc_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq);
int mmc_cqe_recovery(struct mmc_host *host);
/**
* mmc_pre_req - Prepare for a new request
* @host: MMC host to prepare command
* @mrq: MMC request to prepare for
*
* mmc_pre_req() is called in prior to mmc_start_req() to let
* host prepare for the new request. Preparation of a request may be
* performed while another request is running on the host.
*/
static inline void mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq)
{
if (host->ops->pre_req)
host->ops->pre_req(host, mrq);
}
/**
* mmc_post_req - Post process a completed request
* @host: MMC host to post process command
* @mrq: MMC request to post process for
* @err: Error, if non zero, clean up any resources made in pre_req
*
* Let the host post process a completed request. Post processing of
* a request may be performed while another request is running.
*/
static inline void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
int err)
{
if (host->ops->post_req)
host->ops->post_req(host, mrq, err);
}
#endif
@@ -41,6 +41,11 @@ static inline int mmc_host_cmd23(struct mmc_host *host)
return host->caps & MMC_CAP_CMD23;
}
static inline bool mmc_host_done_complete(struct mmc_host *host)
{
return host->caps & MMC_CAP_DONE_COMPLETE;
}
static inline int mmc_boot_partition_access(struct mmc_host *host)
{
return !(host->caps2 & MMC_CAP2_BOOTPART_NOACC);
@@ -74,6 +79,5 @@ static inline bool mmc_card_hs400es(struct mmc_card *card)
return card->host->ios.enhanced_strobe;
}
#endif
@@ -101,7 +101,7 @@ struct mmc_test_transfer_result {
struct list_head link;
unsigned int count;
unsigned int sectors;
struct timespec ts;
struct timespec64 ts;
unsigned int rate;
unsigned int iops;
};
@@ -171,11 +171,6 @@ struct mmc_test_multiple_rw {
enum mmc_test_prep_media prepare;
};
struct mmc_test_async_req {
struct mmc_async_req areq;
struct mmc_test_card *test;
};
/*******************************************************************/
/* General helper functions */
/*******************************************************************/
@@ -515,14 +510,11 @@ static int mmc_test_map_sg_max_scatter(struct mmc_test_mem *mem,
/*
* Calculate transfer rate in bytes per second.
*/
static unsigned int mmc_test_rate(uint64_t bytes, struct timespec *ts)
static unsigned int mmc_test_rate(uint64_t bytes, struct timespec64 *ts)
{
uint64_t ns;
ns = ts->tv_sec;
ns *= 1000000000;
ns += ts->tv_nsec;
ns = timespec64_to_ns(ts);
bytes *= 1000000000;
while (ns > UINT_MAX) {
@@ -542,7 +534,7 @@ static unsigned int mmc_test_rate(uint64_t bytes, struct timespec *ts)
* Save transfer results for future usage
*/
static void mmc_test_save_transfer_result(struct mmc_test_card *test,
unsigned int count, unsigned int sectors, struct timespec ts,
unsigned int count, unsigned int sectors, struct timespec64 ts,
unsigned int rate, unsigned int iops)
{
struct mmc_test_transfer_result *tr;
@@ -567,21 +559,21 @@ static void mmc_test_save_transfer_result(struct mmc_test_card *test,
* Print the transfer rate.
*/
static void mmc_test_print_rate(struct mmc_test_card *test, uint64_t bytes,
struct timespec *ts1, struct timespec *ts2)
struct timespec64 *ts1, struct timespec64 *ts2)
{
unsigned int rate, iops, sectors = bytes >> 9;
struct timespec ts;
struct timespec64 ts;
ts = timespec_sub(*ts2, *ts1);
ts = timespec64_sub(*ts2, *ts1);
rate = mmc_test_rate(bytes, &ts);
iops = mmc_test_rate(100, &ts); /* I/O ops per sec x 100 */
pr_info("%s: Transfer of %u sectors (%u%s KiB) took %lu.%09lu "
pr_info("%s: Transfer of %u sectors (%u%s KiB) took %llu.%09u "
"seconds (%u kB/s, %u KiB/s, %u.%02u IOPS)\n",
mmc_hostname(test->card->host), sectors, sectors >> 1,
(sectors & 1 ? ".5" : ""), (unsigned long)ts.tv_sec,
(unsigned long)ts.tv_nsec, rate / 1000, rate / 1024,
(sectors & 1 ? ".5" : ""), (u64)ts.tv_sec,
(u32)ts.tv_nsec, rate / 1000, rate / 1024,
iops / 100, iops % 100);
mmc_test_save_transfer_result(test, 1, sectors, ts, rate, iops);
@@ -591,24 +583,24 @@ static void mmc_test_print_rate(struct mmc_test_card *test, uint64_t bytes,
* Print the average transfer rate.
*/
static void mmc_test_print_avg_rate(struct mmc_test_card *test, uint64_t bytes,
unsigned int count, struct timespec *ts1,
struct timespec *ts2)
unsigned int count, struct timespec64 *ts1,
struct timespec64 *ts2)
{
unsigned int rate, iops, sectors = bytes >> 9;
uint64_t tot = bytes * count;
struct timespec ts;
struct timespec64 ts;
ts = timespec_sub(*ts2, *ts1);
ts = timespec64_sub(*ts2, *ts1);
rate = mmc_test_rate(tot, &ts);
iops = mmc_test_rate(count * 100, &ts); /* I/O ops per sec x 100 */
pr_info("%s: Transfer of %u x %u sectors (%u x %u%s KiB) took "
"%lu.%09lu seconds (%u kB/s, %u KiB/s, "
"%llu.%09u seconds (%u kB/s, %u KiB/s, "
"%u.%02u IOPS, sg_len %d)\n",
mmc_hostname(test->card->host), count, sectors, count,
sectors >> 1, (sectors & 1 ? ".5" : ""),
(unsigned long)ts.tv_sec, (unsigned long)ts.tv_nsec,
(u64)ts.tv_sec, (u32)ts.tv_nsec,
rate / 1000, rate / 1024, iops / 100, iops % 100,
test->area.sg_len);
@@ -741,30 +733,6 @@ static int mmc_test_check_result(struct mmc_test_card *test,
return ret;
}
static enum mmc_blk_status mmc_test_check_result_async(struct mmc_card *card,
struct mmc_async_req *areq)
{
struct mmc_test_async_req *test_async =
container_of(areq, struct mmc_test_async_req, areq);
int ret;
mmc_test_wait_busy(test_async->test);
/*
* FIXME: this would earlier just casts a regular error code,
* either of the kernel type -ERRORCODE or the local test framework
* RESULT_* errorcode, into an enum mmc_blk_status and return as
* result check. Instead, convert it to some reasonable type by just
* returning either MMC_BLK_SUCCESS or MMC_BLK_CMD_ERR.
* If possible, a reasonable error code should be returned.
*/
ret = mmc_test_check_result(test_async->test, areq->mrq);
if (ret)
return MMC_BLK_CMD_ERR;
return MMC_BLK_SUCCESS;
}
/*
* Checks that a "short transfer" behaved as expected
*/
@@ -831,6 +799,45 @@ static struct mmc_test_req *mmc_test_req_alloc(void)
return rq;
}
static void mmc_test_wait_done(struct mmc_request *mrq)
{
complete(&mrq->completion);
}
static int mmc_test_start_areq(struct mmc_test_card *test,
struct mmc_request *mrq,
struct mmc_request *prev_mrq)
{
struct mmc_host *host = test->card->host;
int err = 0;
if (mrq) {
init_completion(&mrq->completion);
mrq->done = mmc_test_wait_done;
mmc_pre_req(host, mrq);
}
if (prev_mrq) {
wait_for_completion(&prev_mrq->completion);
err = mmc_test_wait_busy(test);
if (!err)
err = mmc_test_check_result(test, prev_mrq);
}
if (!err && mrq) {
err = mmc_start_request(host, mrq);
if (err)
mmc_retune_release(host);
}
if (prev_mrq)
mmc_post_req(host, prev_mrq, 0);
if (err && mrq)
mmc_post_req(host, mrq, err);
return err;
}
static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
struct scatterlist *sg, unsigned sg_len,
@@ -838,17 +845,10 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
unsigned blksz, int write, int count)
{
struct mmc_test_req *rq1, *rq2;
struct mmc_test_async_req test_areq[2];
struct mmc_async_req *done_areq;
struct mmc_async_req *cur_areq = &test_areq[0].areq;
struct mmc_async_req *other_areq = &test_areq[1].areq;
enum mmc_blk_status status;
struct mmc_request *mrq, *prev_mrq;
int i;
int ret = RESULT_OK;
test_areq[0].test = test;
test_areq[1].test = test;
rq1 = mmc_test_req_alloc();
rq2 = mmc_test_req_alloc();
if (!rq1 || !rq2) {
@@ -856,33 +856,25 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
goto err;
}
cur_areq->mrq = &rq1->mrq;
cur_areq->err_check = mmc_test_check_result_async;
other_areq->mrq = &rq2->mrq;
other_areq->err_check = mmc_test_check_result_async;
mrq = &rq1->mrq;
prev_mrq = NULL;
for (i = 0; i < count; i++) {
mmc_test_prepare_mrq(test, cur_areq->mrq, sg, sg_len, dev_addr,
blocks, blksz, write);
done_areq = mmc_start_areq(test->card->host, cur_areq, &status);
if (status != MMC_BLK_SUCCESS || (!done_areq && i > 0)) {
ret = RESULT_FAIL;
mmc_test_req_reset(container_of(mrq, struct mmc_test_req, mrq));
mmc_test_prepare_mrq(test, mrq, sg, sg_len, dev_addr, blocks,
blksz, write);
ret = mmc_test_start_areq(test, mrq, prev_mrq);
if (ret)
goto err;
}
if (done_areq)
mmc_test_req_reset(container_of(done_areq->mrq,
struct mmc_test_req, mrq));
if (!prev_mrq)
prev_mrq = &rq2->mrq;
swap(cur_areq, other_areq);
swap(mrq, prev_mrq);
dev_addr += blocks;
}
done_areq = mmc_start_areq(test->card->host, NULL, &status);
if (status != MMC_BLK_SUCCESS)
ret = RESULT_FAIL;
ret = mmc_test_start_areq(test, NULL, prev_mrq);
err:
kfree(rq1);
kfree(rq2);
@@ -1449,7 +1441,7 @@ static int mmc_test_area_io_seq(struct mmc_test_card *test, unsigned long sz,
int max_scatter, int timed, int count,
bool nonblock, int min_sg_len)
{
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret = 0;
int i;
struct mmc_test_area *t = &test->area;
@@ -1475,7 +1467,7 @@ static int mmc_test_area_io_seq(struct mmc_test_card *test, unsigned long sz,
return ret;
if (timed)
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
if (nonblock)
ret = mmc_test_nonblock_transfer(test, t->sg, t->sg_len,
dev_addr, t->blocks, 512, write, count);
@@ -1489,7 +1481,7 @@ static int mmc_test_area_io_seq(struct mmc_test_card *test, unsigned long sz,
return ret;
if (timed)
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
if (timed)
mmc_test_print_avg_rate(test, sz, count, &ts1, &ts2);
@@ -1747,7 +1739,7 @@ static int mmc_test_profile_trim_perf(struct mmc_test_card *test)
struct mmc_test_area *t = &test->area;
unsigned long sz;
unsigned int dev_addr;
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret;
if (!mmc_can_trim(test->card))
@@ -1758,19 +1750,19 @@ static int mmc_test_profile_trim_perf(struct mmc_test_card *test)
for (sz = 512; sz < t->max_sz; sz <<= 1) {
dev_addr = t->dev_addr + (sz >> 9);
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
ret = mmc_erase(test->card, dev_addr, sz >> 9, MMC_TRIM_ARG);
if (ret)
return ret;
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_rate(test, sz, &ts1, &ts2);
}
dev_addr = t->dev_addr;
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
ret = mmc_erase(test->card, dev_addr, sz >> 9, MMC_TRIM_ARG);
if (ret)
return ret;
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_rate(test, sz, &ts1, &ts2);
return 0;
}
@@ -1779,19 +1771,19 @@ static int mmc_test_seq_read_perf(struct mmc_test_card *test, unsigned long sz)
{
struct mmc_test_area *t = &test->area;
unsigned int dev_addr, i, cnt;
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret;
cnt = t->max_sz / sz;
dev_addr = t->dev_addr;
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
for (i = 0; i < cnt; i++) {
ret = mmc_test_area_io(test, sz, dev_addr, 0, 0, 0);
if (ret)
return ret;
dev_addr += (sz >> 9);
}
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_avg_rate(test, sz, cnt, &ts1, &ts2);
return 0;
}
@@ -1818,7 +1810,7 @@ static int mmc_test_seq_write_perf(struct mmc_test_card *test, unsigned long sz)
{
struct mmc_test_area *t = &test->area;
unsigned int dev_addr, i, cnt;
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret;
ret = mmc_test_area_erase(test);
@@ -1826,14 +1818,14 @@ static int mmc_test_seq_write_perf(struct mmc_test_card *test, unsigned long sz)
return ret;
cnt = t->max_sz / sz;
dev_addr = t->dev_addr;
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
for (i = 0; i < cnt; i++) {
ret = mmc_test_area_io(test, sz, dev_addr, 1, 0, 0);
if (ret)
return ret;
dev_addr += (sz >> 9);
}
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_avg_rate(test, sz, cnt, &ts1, &ts2);
return 0;
}
@@ -1864,7 +1856,7 @@ static int mmc_test_profile_seq_trim_perf(struct mmc_test_card *test)
struct mmc_test_area *t = &test->area;
unsigned long sz;
unsigned int dev_addr, i, cnt;
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret;
if (!mmc_can_trim(test->card))
@@ -1882,7 +1874,7 @@ static int mmc_test_profile_seq_trim_perf(struct mmc_test_card *test)
return ret;
cnt = t->max_sz / sz;
dev_addr = t->dev_addr;
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
for (i = 0; i < cnt; i++) {
ret = mmc_erase(test->card, dev_addr, sz >> 9,
MMC_TRIM_ARG);
......@@ -1890,7 +1882,7 @@ static int mmc_test_profile_seq_trim_perf(struct mmc_test_card *test)
return ret;
dev_addr += (sz >> 9);
}
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_avg_rate(test, sz, cnt, &ts1, &ts2);
}
return 0;
@@ -1912,7 +1904,7 @@ static int mmc_test_rnd_perf(struct mmc_test_card *test, int write, int print,
{
unsigned int dev_addr, cnt, rnd_addr, range1, range2, last_ea = 0, ea;
unsigned int ssz;
struct timespec ts1, ts2, ts;
struct timespec64 ts1, ts2, ts;
int ret;
ssz = sz >> 9;
@@ -1921,10 +1913,10 @@ static int mmc_test_rnd_perf(struct mmc_test_card *test, int write, int print,
range1 = rnd_addr / test->card->pref_erase;
range2 = range1 / ssz;
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
for (cnt = 0; cnt < UINT_MAX; cnt++) {
getnstimeofday(&ts2);
ts = timespec_sub(ts2, ts1);
ktime_get_ts64(&ts2);
ts = timespec64_sub(ts2, ts1);
if (ts.tv_sec >= 10)
break;
ea = mmc_test_rnd_num(range1);
@@ -1998,7 +1990,7 @@ static int mmc_test_seq_perf(struct mmc_test_card *test, int write,
{
struct mmc_test_area *t = &test->area;
unsigned int dev_addr, i, cnt, sz, ssz;
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret;
sz = t->max_tfr;
@@ -2025,7 +2017,7 @@ static int mmc_test_seq_perf(struct mmc_test_card *test, int write,
cnt = tot_sz / sz;
dev_addr &= 0xffff0000; /* Round to 64MiB boundary */
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
for (i = 0; i < cnt; i++) {
ret = mmc_test_area_io(test, sz, dev_addr, write,
max_scatter, 0);
@@ -2033,7 +2025,7 @@ static int mmc_test_seq_perf(struct mmc_test_card *test, int write,
return ret;
dev_addr += ssz;
}
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_avg_rate(test, sz, cnt, &ts1, &ts2);
@@ -2328,10 +2320,17 @@ static int mmc_test_reset(struct mmc_test_card *test)
int err;
err = mmc_hw_reset(host);
if (!err)
if (!err) {
/*
* Reset will re-enable the card's command queue, but tests
* expect it to be disabled.
*/
if (card->ext_csd.cmdq_en)
mmc_cmdq_disable(card);
return RESULT_OK;
else if (err == -EOPNOTSUPP)
} else if (err == -EOPNOTSUPP) {
return RESULT_UNSUP_HOST;
}
return RESULT_FAIL;
}
@@ -2356,11 +2355,9 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test,
struct mmc_test_req *rq = mmc_test_req_alloc();
struct mmc_host *host = test->card->host;
struct mmc_test_area *t = &test->area;
struct mmc_test_async_req test_areq = { .test = test };
struct mmc_request *mrq;
unsigned long timeout;
bool expired = false;
enum mmc_blk_status blkstat = MMC_BLK_SUCCESS;
int ret = 0, cmd_ret;
u32 status = 0;
int count = 0;
@@ -2373,9 +2370,6 @@
mrq->sbc = &rq->sbc;
mrq->cap_cmd_during_tfr = true;
test_areq.areq.mrq = mrq;
test_areq.areq.err_check = mmc_test_check_result_async;
mmc_test_prepare_mrq(test, mrq, t->sg, t->sg_len, dev_addr, t->blocks,
512, write);
@@ -2388,11 +2382,9 @@
/* Start ongoing data request */
if (use_areq) {
mmc_start_areq(host, &test_areq.areq, &blkstat);
if (blkstat != MMC_BLK_SUCCESS) {
ret = RESULT_FAIL;
ret = mmc_test_start_areq(test, mrq, NULL);
if (ret)
goto out_free;
}
} else {
mmc_wait_for_req(host, mrq);
}
@@ -2426,9 +2418,7 @@
/* Wait for data request to complete */
if (use_areq) {
mmc_start_areq(host, NULL, &blkstat);
if (blkstat != MMC_BLK_SUCCESS)
ret = RESULT_FAIL;
ret = mmc_test_start_areq(test, NULL, mrq);
} else {
mmc_wait_for_req_done(test->card->host, mrq);
}
@@ -3066,10 +3056,9 @@ static int mtf_test_show(struct seq_file *sf, void *data)
seq_printf(sf, "Test %d: %d\n", gr->testcase + 1, gr->result);
list_for_each_entry(tr, &gr->tr_lst, link) {
seq_printf(sf, "%u %d %lu.%09lu %u %u.%02u\n",
seq_printf(sf, "%u %d %llu.%09u %u %u.%02u\n",
tr->count, tr->sectors,
(unsigned long)tr->ts.tv_sec,
(unsigned long)tr->ts.tv_nsec,
(u64)tr->ts.tv_sec, (u32)tr->ts.tv_nsec,
tr->rate, tr->iops / 100, tr->iops % 100);
}
}
......
@@ -22,100 +22,147 @@
#include "block.h"
#include "core.h"
#include "card.h"
#include "host.h"
/*
* Prepare a MMC request. This just filters out odd stuff.
*/
static int mmc_prep_request(struct request_queue *q, struct request *req)
static inline bool mmc_cqe_dcmd_busy(struct mmc_queue *mq)
{
struct mmc_queue *mq = q->queuedata;
/* Allow only 1 DCMD at a time */
return mq->in_flight[MMC_ISSUE_DCMD];
}
if (mq && mmc_card_removed(mq->card))
return BLKPREP_KILL;
void mmc_cqe_check_busy(struct mmc_queue *mq)
{
if ((mq->cqe_busy & MMC_CQE_DCMD_BUSY) && !mmc_cqe_dcmd_busy(mq))
mq->cqe_busy &= ~MMC_CQE_DCMD_BUSY;
req->rq_flags |= RQF_DONTPREP;
mq->cqe_busy &= ~MMC_CQE_QUEUE_FULL;
}
return BLKPREP_OK;
static inline bool mmc_cqe_can_dcmd(struct mmc_host *host)
{
return host->caps2 & MMC_CAP2_CQE_DCMD;
}
static int mmc_queue_thread(void *d)
static enum mmc_issue_type mmc_cqe_issue_type(struct mmc_host *host,
struct request *req)
{
struct mmc_queue *mq = d;
struct request_queue *q = mq->queue;
struct mmc_context_info *cntx = &mq->card->host->context_info;
switch (req_op(req)) {
case REQ_OP_DRV_IN:
case REQ_OP_DRV_OUT:
case REQ_OP_DISCARD:
case REQ_OP_SECURE_ERASE:
return MMC_ISSUE_SYNC;
case REQ_OP_FLUSH:
return mmc_cqe_can_dcmd(host) ? MMC_ISSUE_DCMD : MMC_ISSUE_SYNC;
default:
return MMC_ISSUE_ASYNC;
}
}
current->flags |= PF_MEMALLOC;
enum mmc_issue_type mmc_issue_type(struct mmc_queue *mq, struct request *req)
{
struct mmc_host *host = mq->card->host;
down(&mq->thread_sem);
do {
struct request *req;
if (mq->use_cqe)
return mmc_cqe_issue_type(host, req);
spin_lock_irq(q->queue_lock);
set_current_state(TASK_INTERRUPTIBLE);
req = blk_fetch_request(q);
mq->asleep = false;
cntx->is_waiting_last_req = false;
cntx->is_new_req = false;
if (!req) {
/*
* Dispatch queue is empty so set flags for
* mmc_request_fn() to wake us up.
*/
if (mq->qcnt)
cntx->is_waiting_last_req = true;
else
mq->asleep = true;
}
spin_unlock_irq(q->queue_lock);
if (req_op(req) == REQ_OP_READ || req_op(req) == REQ_OP_WRITE)
return MMC_ISSUE_ASYNC;
if (req || mq->qcnt) {
set_current_state(TASK_RUNNING);
mmc_blk_issue_rq(mq, req);
cond_resched();
} else {
if (kthread_should_stop()) {
set_current_state(TASK_RUNNING);
break;
}
up(&mq->thread_sem);
schedule();
down(&mq->thread_sem);
}
} while (1);
up(&mq->thread_sem);
return MMC_ISSUE_SYNC;
}
return 0;
static void __mmc_cqe_recovery_notifier(struct mmc_queue *mq)
{
if (!mq->recovery_needed) {
mq->recovery_needed = true;
schedule_work(&mq->recovery_work);
}
}
/*
* Generic MMC request handler. This is called for any queue on a
* particular host. When the host is not busy, we look for a request
* on any queue on this host, and attempt to issue it. This may
* not be the queue we were asked to process.
*/
static void mmc_request_fn(struct request_queue *q)
void mmc_cqe_recovery_notifier(struct mmc_request *mrq)
{
struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
brq.mrq);
struct request *req = mmc_queue_req_to_req(mqrq);
struct request_queue *q = req->q;
struct mmc_queue *mq = q->queuedata;
struct request *req;
struct mmc_context_info *cntx;
unsigned long flags;
spin_lock_irqsave(q->queue_lock, flags);
__mmc_cqe_recovery_notifier(mq);
spin_unlock_irqrestore(q->queue_lock, flags);
}
if (!mq) {
while ((req = blk_fetch_request(q)) != NULL) {
req->rq_flags |= RQF_QUIET;
__blk_end_request_all(req, BLK_STS_IOERR);
static enum blk_eh_timer_return mmc_cqe_timed_out(struct request *req)
{
struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
struct mmc_request *mrq = &mqrq->brq.mrq;
struct mmc_queue *mq = req->q->queuedata;
struct mmc_host *host = mq->card->host;
enum mmc_issue_type issue_type = mmc_issue_type(mq, req);
bool recovery_needed = false;
switch (issue_type) {
case MMC_ISSUE_ASYNC:
case MMC_ISSUE_DCMD:
if (host->cqe_ops->cqe_timeout(host, mrq, &recovery_needed)) {
if (recovery_needed)
__mmc_cqe_recovery_notifier(mq);
return BLK_EH_RESET_TIMER;
}
return;
/* No timeout */
return BLK_EH_HANDLED;
default:
/* Timeout is handled by mmc core */
return BLK_EH_RESET_TIMER;
}
}
static enum blk_eh_timer_return mmc_mq_timed_out(struct request *req,
bool reserved)
{
struct request_queue *q = req->q;
struct mmc_queue *mq = q->queuedata;
unsigned long flags;
int ret;
cntx = &mq->card->host->context_info;
spin_lock_irqsave(q->queue_lock, flags);
if (cntx->is_waiting_last_req) {
cntx->is_new_req = true;
wake_up_interruptible(&cntx->wait);
}
if (mq->recovery_needed || !mq->use_cqe)
ret = BLK_EH_RESET_TIMER;
else
ret = mmc_cqe_timed_out(req);
if (mq->asleep)
wake_up_process(mq->thread);
spin_unlock_irqrestore(q->queue_lock, flags);
return ret;
}
static void mmc_mq_recovery_handler(struct work_struct *work)
{
struct mmc_queue *mq = container_of(work, struct mmc_queue,
recovery_work);
struct request_queue *q = mq->queue;
mmc_get_card(mq->card, &mq->ctx);
mq->in_recovery = true;
if (mq->use_cqe)
mmc_blk_cqe_recovery(mq);
else
mmc_blk_mq_recovery(mq);
mq->in_recovery = false;
spin_lock_irq(q->queue_lock);
mq->recovery_needed = false;
spin_unlock_irq(q->queue_lock);
mmc_put_card(mq->card, &mq->ctx);
blk_mq_run_hw_queues(q, true);
}
static struct scatterlist *mmc_alloc_sg(int sg_len, gfp_t gfp)
@@ -154,11 +201,10 @@ static void mmc_queue_setup_discard(struct request_queue *q,
* @req: the request
* @gfp: memory allocation policy
*/
static int mmc_init_request(struct request_queue *q, struct request *req,
gfp_t gfp)
static int __mmc_init_request(struct mmc_queue *mq, struct request *req,
gfp_t gfp)
{
struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
struct mmc_queue *mq = q->queuedata;
struct mmc_card *card = mq->card;
struct mmc_host *host = card->host;
@@ -177,6 +223,131 @@ static void mmc_exit_request(struct request_queue *q, struct request *req)
mq_rq->sg = NULL;
}
static int mmc_mq_init_request(struct blk_mq_tag_set *set, struct request *req,
unsigned int hctx_idx, unsigned int numa_node)
{
return __mmc_init_request(set->driver_data, req, GFP_KERNEL);
}
static void mmc_mq_exit_request(struct blk_mq_tag_set *set, struct request *req,
unsigned int hctx_idx)
{
struct mmc_queue *mq = set->driver_data;
mmc_exit_request(mq->queue, req);
}
/*
* We use BLK_MQ_F_BLOCKING and have only 1 hardware queue, which means requests
* will not be dispatched in parallel.
*/
static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
const struct blk_mq_queue_data *bd)
{
struct request *req = bd->rq;
struct request_queue *q = req->q;
struct mmc_queue *mq = q->queuedata;
struct mmc_card *card = mq->card;
struct mmc_host *host = card->host;
enum mmc_issue_type issue_type;
enum mmc_issued issued;
bool get_card, cqe_retune_ok;
int ret;
if (mmc_card_removed(mq->card)) {
req->rq_flags |= RQF_QUIET;
return BLK_STS_IOERR;
}
issue_type = mmc_issue_type(mq, req);
spin_lock_irq(q->queue_lock);
if (mq->recovery_needed) {
spin_unlock_irq(q->queue_lock);
return BLK_STS_RESOURCE;
}
switch (issue_type) {
case MMC_ISSUE_DCMD:
if (mmc_cqe_dcmd_busy(mq)) {
mq->cqe_busy |= MMC_CQE_DCMD_BUSY;
spin_unlock_irq(q->queue_lock);
return BLK_STS_RESOURCE;
}
break;
case MMC_ISSUE_ASYNC:
break;
default:
/*
* Timeouts are handled by mmc core, and we don't have a host
* API to abort requests, so we can't handle the timeout anyway.
* However, when the timeout happens, blk_mq_complete_request()
* no longer works (to stop the request disappearing under us).
* To avoid racing with that, set a large timeout.
*/
req->timeout = 600 * HZ;
break;
}
mq->in_flight[issue_type] += 1;
get_card = (mmc_tot_in_flight(mq) == 1);
cqe_retune_ok = (mmc_cqe_qcnt(mq) == 1);
spin_unlock_irq(q->queue_lock);
if (!(req->rq_flags & RQF_DONTPREP)) {
req_to_mmc_queue_req(req)->retries = 0;
req->rq_flags |= RQF_DONTPREP;
}
if (get_card)
mmc_get_card(card, &mq->ctx);
if (mq->use_cqe) {
host->retune_now = host->need_retune && cqe_retune_ok &&
!host->hold_retune;
}
blk_mq_start_request(req);
issued = mmc_blk_mq_issue_rq(mq, req);
switch (issued) {
case MMC_REQ_BUSY:
ret = BLK_STS_RESOURCE;
break;
case MMC_REQ_FAILED_TO_START:
ret = BLK_STS_IOERR;
break;
default:
ret = BLK_STS_OK;
break;
}
if (issued != MMC_REQ_STARTED) {
bool put_card = false;
spin_lock_irq(q->queue_lock);
mq->in_flight[issue_type] -= 1;
if (mmc_tot_in_flight(mq) == 0)
put_card = true;
spin_unlock_irq(q->queue_lock);
if (put_card)
mmc_put_card(card, &mq->ctx);
}
return ret;
}
static const struct blk_mq_ops mmc_mq_ops = {
.queue_rq = mmc_mq_queue_rq,
.init_request = mmc_mq_init_request,
.exit_request = mmc_mq_exit_request,
.complete = mmc_blk_mq_complete,
.timeout = mmc_mq_timed_out,
};
static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
{
struct mmc_host *host = card->host;
@@ -196,124 +367,139 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
blk_queue_max_segments(mq->queue, host->max_segs);
blk_queue_max_segment_size(mq->queue, host->max_seg_size);
/* Initialize thread_sem even if it is not used */
sema_init(&mq->thread_sem, 1);
INIT_WORK(&mq->recovery_work, mmc_mq_recovery_handler);
INIT_WORK(&mq->complete_work, mmc_blk_mq_complete_work);
mutex_init(&mq->complete_lock);
init_waitqueue_head(&mq->wait);
}
/**
* mmc_init_queue - initialise a queue structure.
* @mq: mmc queue
* @card: mmc card to attach this queue
* @lock: queue lock
* @subname: partition subname
*
* Initialise a MMC card request queue.
*/
int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
spinlock_t *lock, const char *subname)
static int mmc_mq_init_queue(struct mmc_queue *mq, int q_depth,
const struct blk_mq_ops *mq_ops, spinlock_t *lock)
{
struct mmc_host *host = card->host;
int ret = -ENOMEM;
mq->card = card;
mq->queue = blk_alloc_queue(GFP_KERNEL);
if (!mq->queue)
return -ENOMEM;
mq->queue->queue_lock = lock;
mq->queue->request_fn = mmc_request_fn;
mq->queue->init_rq_fn = mmc_init_request;
mq->queue->exit_rq_fn = mmc_exit_request;
mq->queue->cmd_size = sizeof(struct mmc_queue_req);
mq->queue->queuedata = mq;
mq->qcnt = 0;
ret = blk_init_allocated_queue(mq->queue);
if (ret) {
blk_cleanup_queue(mq->queue);
int ret;
memset(&mq->tag_set, 0, sizeof(mq->tag_set));
mq->tag_set.ops = mq_ops;
mq->tag_set.queue_depth = q_depth;
mq->tag_set.numa_node = NUMA_NO_NODE;
mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE |
BLK_MQ_F_BLOCKING;
mq->tag_set.nr_hw_queues = 1;
mq->tag_set.cmd_size = sizeof(struct mmc_queue_req);
mq->tag_set.driver_data = mq;
ret = blk_mq_alloc_tag_set(&mq->tag_set);
if (ret)
return ret;
}
blk_queue_prep_rq(mq->queue, mmc_prep_request);
mmc_setup_queue(mq, card);
mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
host->index, subname ? subname : "");
if (IS_ERR(mq->thread)) {
ret = PTR_ERR(mq->thread);
goto cleanup_queue;
mq->queue = blk_mq_init_queue(&mq->tag_set);
if (IS_ERR(mq->queue)) {
ret = PTR_ERR(mq->queue);
goto free_tag_set;
}
mq->queue->queue_lock = lock;
mq->queue->queuedata = mq;
return 0;
cleanup_queue:
blk_cleanup_queue(mq->queue);
free_tag_set:
blk_mq_free_tag_set(&mq->tag_set);
return ret;
}
void mmc_cleanup_queue(struct mmc_queue *mq)
{
struct request_queue *q = mq->queue;
unsigned long flags;
/* Set queue depth to get a reasonable value for q->nr_requests */
#define MMC_QUEUE_DEPTH 64
/* Make sure the queue isn't suspended, as that will deadlock */
mmc_queue_resume(mq);
static int mmc_mq_init(struct mmc_queue *mq, struct mmc_card *card,
spinlock_t *lock)
{
struct mmc_host *host = card->host;
int q_depth;
int ret;
/*
* The queue depth for CQE must match the hardware because the request
* tag is used to index the hardware queue.
*/
if (mq->use_cqe)
q_depth = min_t(int, card->ext_csd.cmdq_depth, host->cqe_qdepth);
else
q_depth = MMC_QUEUE_DEPTH;
ret = mmc_mq_init_queue(mq, q_depth, &mmc_mq_ops, lock);
if (ret)
return ret;
/* Then terminate our worker thread */
kthread_stop(mq->thread);
blk_queue_rq_timeout(mq->queue, 60 * HZ);
/* Empty the queue */
spin_lock_irqsave(q->queue_lock, flags);
q->queuedata = NULL;
blk_start_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
mmc_setup_queue(mq, card);
mq->card = NULL;
return 0;
}
EXPORT_SYMBOL(mmc_cleanup_queue);
/**
* mmc_queue_suspend - suspend a MMC request queue
* @mq: MMC queue to suspend
* mmc_init_queue - initialise a queue structure.
* @mq: mmc queue
* @card: mmc card to attach this queue
* @lock: queue lock
* @subname: partition subname
*
* Stop the block request queue, and wait for our thread to
* complete any outstanding requests. This ensures that we
* won't suspend while a request is being processed.
* Initialise a MMC card request queue.
*/
void mmc_queue_suspend(struct mmc_queue *mq)
int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
spinlock_t *lock, const char *subname)
{
struct request_queue *q = mq->queue;
unsigned long flags;
struct mmc_host *host = card->host;
if (!mq->suspended) {
mq->suspended |= true;
mq->card = card;
spin_lock_irqsave(q->queue_lock, flags);
blk_stop_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
mq->use_cqe = host->cqe_enabled;
down(&mq->thread_sem);
}
return mmc_mq_init(mq, card, lock);
}
void mmc_queue_suspend(struct mmc_queue *mq)
{
blk_mq_quiesce_queue(mq->queue);
/*
* The host remains claimed while there are outstanding requests, so
* simply claiming and releasing here ensures there are none.
*/
mmc_claim_host(mq->card->host);
mmc_release_host(mq->card->host);
}
/**
* mmc_queue_resume - resume a previously suspended MMC request queue
* @mq: MMC queue to resume
*/
void mmc_queue_resume(struct mmc_queue *mq)
{
blk_mq_unquiesce_queue(mq->queue);
}
void mmc_cleanup_queue(struct mmc_queue *mq)
{
struct request_queue *q = mq->queue;
unsigned long flags;
if (mq->suspended) {
mq->suspended = false;
/*
* The legacy code handled the possibility of being suspended,
* so do that here too.
*/
if (blk_queue_quiesced(q))
blk_mq_unquiesce_queue(q);
up(&mq->thread_sem);
blk_cleanup_queue(q);
spin_lock_irqsave(q->queue_lock, flags);
blk_start_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
}
/*
* A request can be completed before the next request, potentially
* leaving a complete_work with nothing to do. Such a work item might
* still be queued at this point. Flush it.
*/
flush_work(&mq->complete_work);
mq->card = NULL;
}
/*
......
@@ -8,6 +8,20 @@
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>
enum mmc_issued {
MMC_REQ_STARTED,
MMC_REQ_BUSY,
MMC_REQ_FAILED_TO_START,
MMC_REQ_FINISHED,
};
enum mmc_issue_type {
MMC_ISSUE_SYNC,
MMC_ISSUE_DCMD,
MMC_ISSUE_ASYNC,
MMC_ISSUE_MAX,
};
static inline struct mmc_queue_req *req_to_mmc_queue_req(struct request *rq)
{
return blk_mq_rq_to_pdu(rq);
@@ -20,7 +34,6 @@ static inline struct request *mmc_queue_req_to_req(struct mmc_queue_req *mqr)
return blk_mq_rq_from_pdu(mqr);
}
struct task_struct;
struct mmc_blk_data;
struct mmc_blk_ioc_data;
@@ -30,7 +43,6 @@ struct mmc_blk_request {
struct mmc_command cmd;
struct mmc_command stop;
struct mmc_data data;
int retune_retry_done;
};
/**
@@ -52,28 +64,34 @@ enum mmc_drv_op {
struct mmc_queue_req {
struct mmc_blk_request brq;
struct scatterlist *sg;
struct mmc_async_req areq;
enum mmc_drv_op drv_op;
int drv_op_result;
void *drv_op_data;
unsigned int ioc_count;
int retries;
};
struct mmc_queue {
struct mmc_card *card;
struct task_struct *thread;
struct semaphore thread_sem;
bool suspended;
bool asleep;
struct mmc_ctx ctx;
struct blk_mq_tag_set tag_set;
struct mmc_blk_data *blkdata;
struct request_queue *queue;
/*
* FIXME: this counter is not a very reliable way of keeping
* track of how many requests that are ongoing. Switch to just
* letting the block core keep track of requests and per-request
* associated mmc_queue_req data.
*/
int qcnt;
int in_flight[MMC_ISSUE_MAX];
unsigned int cqe_busy;
#define MMC_CQE_DCMD_BUSY BIT(0)
#define MMC_CQE_QUEUE_FULL BIT(1)
bool use_cqe;
bool recovery_needed;
bool in_recovery;
bool rw_wait;
bool waiting;
struct work_struct recovery_work;
wait_queue_head_t wait;
struct request *recovery_req;
struct request *complete_req;
struct mutex complete_lock;
struct work_struct complete_work;
};
extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
@@ -84,4 +102,22 @@ extern void mmc_queue_resume(struct mmc_queue *);
extern unsigned int mmc_queue_map_sg(struct mmc_queue *,
struct mmc_queue_req *);
void mmc_cqe_check_busy(struct mmc_queue *mq);
void mmc_cqe_recovery_notifier(struct mmc_request *mrq);
enum mmc_issue_type mmc_issue_type(struct mmc_queue *mq, struct request *req);
static inline int mmc_tot_in_flight(struct mmc_queue *mq)
{
return mq->in_flight[MMC_ISSUE_SYNC] +
mq->in_flight[MMC_ISSUE_DCMD] +
mq->in_flight[MMC_ISSUE_ASYNC];
}
static inline int mmc_cqe_qcnt(struct mmc_queue *mq)
{
return mq->in_flight[MMC_ISSUE_DCMD] +
mq->in_flight[MMC_ISSUE_ASYNC];
}
#endif
@@ -121,20 +121,18 @@ EXPORT_SYMBOL(mmc_gpio_request_ro);
void mmc_gpiod_request_cd_irq(struct mmc_host *host)
{
struct mmc_gpio *ctx = host->slot.handler_priv;
int ret, irq;
int irq = -EINVAL;
int ret;
if (host->slot.cd_irq >= 0 || !ctx || !ctx->cd_gpio)
return;
irq = gpiod_to_irq(ctx->cd_gpio);
/*
* Even if gpiod_to_irq() returns a valid IRQ number, the platform might
* still prefer to poll, e.g., because that IRQ number is already used
* by another unit and cannot be shared.
* Do not use IRQ if the platform prefers to poll, e.g., because that
* IRQ number is already used by another unit and cannot be shared.
*/
if (irq >= 0 && host->caps & MMC_CAP_NEEDS_POLL)
irq = -EINVAL;
if (!(host->caps & MMC_CAP_NEEDS_POLL))
irq = gpiod_to_irq(ctx->cd_gpio);
if (irq >= 0) {
if (!ctx->cd_gpio_isr)
@@ -307,3 +305,11 @@ int mmc_gpiod_request_ro(struct mmc_host *host, const char *con_id,
return 0;
}
EXPORT_SYMBOL(mmc_gpiod_request_ro);
bool mmc_can_gpio_ro(struct mmc_host *host)
{
struct mmc_gpio *ctx = host->slot.handler_priv;
return ctx->ro_gpio ? true : false;
}
EXPORT_SYMBOL(mmc_can_gpio_ro);
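
/*
 * Illustrative use of the new helper (hypothetical foo_mmc driver, not part
 * of this series): a host with its own write-protect sensing can prefer the
 * generic GPIO handling whenever a WP GPIO was actually provided:
 *
 *	static int foo_mmc_get_ro(struct mmc_host *mmc)
 *	{
 *		if (mmc_can_gpio_ro(mmc))
 *			return mmc_gpio_get_ro(mmc);
 *		return foo_mmc_read_wp_bit(mmc);
 *	}
 */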
@@ -81,6 +81,7 @@ config MMC_SDHCI_BIG_ENDIAN_32BIT_BYTE_SWAPPER
config MMC_SDHCI_PCI
tristate "SDHCI support on PCI bus"
depends on MMC_SDHCI && PCI
select MMC_CQHCI
help
This selects the PCI Secure Digital Host Controller Interface.
Most controllers found today are PCI devices.
@@ -132,6 +133,7 @@ config MMC_SDHCI_OF_ARASAN
depends on MMC_SDHCI_PLTFM
depends on OF
depends on COMMON_CLK
select MMC_CQHCI
help
This selects the Arasan Secure Digital Host Controller Interface
(SDHCI). This hardware is found e.g. in Xilinx' Zynq SoC.
@@ -320,7 +322,7 @@ config MMC_SDHCI_BCM_KONA
config MMC_SDHCI_F_SDH30
tristate "SDHCI support for Fujitsu Semiconductor F_SDH30"
depends on MMC_SDHCI_PLTFM
depends on OF
depends on OF || ACPI
help
This selects the Secure Digital Host Controller Interface (SDHCI)
Needed by some Fujitsu SoC for MMC / SD / SDIO support.
@@ -595,11 +597,8 @@ config MMC_TMIO
config MMC_SDHI
tristate "Renesas SDHI SD/SDIO controller support"
depends on SUPERH || ARM || ARM64
depends on SUPERH || ARCH_RENESAS || COMPILE_TEST
select MMC_TMIO_CORE
select MMC_SDHI_SYS_DMAC if (SUPERH || ARM)
select MMC_SDHI_INTERNAL_DMAC if ARM64
help
This provides support for the SDHI SD/SDIO controller found in
Renesas SuperH, ARM and ARM64 based SoCs
@@ -607,6 +606,7 @@ config MMC_SDHI
config MMC_SDHI_SYS_DMAC
tristate "DMA for SDHI SD/SDIO controllers using SYS-DMAC"
depends on MMC_SDHI
default MMC_SDHI if (SUPERH || ARM)
help
This provides DMA support for SDHI SD/SDIO controllers
using SYS-DMAC via DMA Engine. This supports the controllers
@@ -616,6 +616,7 @@ config MMC_SDHI_INTERNAL_DMAC
tristate "DMA for SDHI SD/SDIO controllers using on-chip bus mastering"
depends on ARM64 || COMPILE_TEST
depends on MMC_SDHI
default MMC_SDHI if ARM64
help
This provides DMA support for SDHI SD/SDIO controllers
using on-chip bus mastering. This supports the controllers
@@ -857,6 +858,19 @@ config MMC_SUNXI
This selects support for the SD/MMC Host Controller on
Allwinner sunxi SoCs.
config MMC_CQHCI
tristate "Command Queue Host Controller Interface support"
depends on HAS_DMA
help
This selects the Command Queue Host Controller Interface (CQHCI)
support present in host controllers of Qualcomm Technologies, Inc
amongst others.
This controller supports eMMC devices with command queue support.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config MMC_TOSHIBA_PCI
tristate "Toshiba Type A SD/MMC Card Interface Driver"
depends on PCI
......
@@ -11,7 +11,7 @@ obj-$(CONFIG_MMC_MXC) += mxcmmc.o
obj-$(CONFIG_MMC_MXS) += mxs-mmc.o
obj-$(CONFIG_MMC_SDHCI) += sdhci.o
obj-$(CONFIG_MMC_SDHCI_PCI) += sdhci-pci.o
sdhci-pci-y += sdhci-pci-core.o sdhci-pci-o2micro.o
sdhci-pci-y += sdhci-pci-core.o sdhci-pci-o2micro.o sdhci-pci-arasan.o
obj-$(subst m,y,$(CONFIG_MMC_SDHCI_PCI)) += sdhci-pci-data.o
obj-$(CONFIG_MMC_SDHCI_ACPI) += sdhci-acpi.o
obj-$(CONFIG_MMC_SDHCI_PXAV3) += sdhci-pxav3.o
@@ -39,12 +39,8 @@ obj-$(CONFIG_MMC_SDRICOH_CS) += sdricoh_cs.o
obj-$(CONFIG_MMC_TMIO) += tmio_mmc.o
obj-$(CONFIG_MMC_TMIO_CORE) += tmio_mmc_core.o
obj-$(CONFIG_MMC_SDHI) += renesas_sdhi_core.o
ifeq ($(subst m,y,$(CONFIG_MMC_SDHI_SYS_DMAC)),y)
obj-$(CONFIG_MMC_SDHI) += renesas_sdhi_sys_dmac.o
endif
ifeq ($(subst m,y,$(CONFIG_MMC_SDHI_INTERNAL_DMAC)),y)
obj-$(CONFIG_MMC_SDHI) += renesas_sdhi_internal_dmac.o
endif
obj-$(CONFIG_MMC_SDHI_SYS_DMAC) += renesas_sdhi_sys_dmac.o
obj-$(CONFIG_MMC_SDHI_INTERNAL_DMAC) += renesas_sdhi_internal_dmac.o
obj-$(CONFIG_MMC_CB710) += cb710-mmc.o
obj-$(CONFIG_MMC_VIA_SDMMC) += via-sdmmc.o
obj-$(CONFIG_SDH_BFIN) += bfin_sdh.o
@@ -92,6 +88,7 @@ obj-$(CONFIG_MMC_SDHCI_ST) += sdhci-st.o
obj-$(CONFIG_MMC_SDHCI_MICROCHIP_PIC32) += sdhci-pic32.o
obj-$(CONFIG_MMC_SDHCI_BRCMSTB) += sdhci-brcmstb.o
obj-$(CONFIG_MMC_SDHCI_OMAP) += sdhci-omap.o
obj-$(CONFIG_MMC_CQHCI) += cqhci.o
ifeq ($(CONFIG_CB710_DEBUG),y)
CFLAGS-cb710-mmc += -DDEBUG
......
@@ -42,13 +42,11 @@
#include <linux/spinlock.h>
#include <linux/timer.h>
#include <linux/clk.h>
#include <linux/scatterlist.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/types.h>
#include <asm/io.h>
#include <linux/uaccess.h>
#define DRIVER_NAME "goldfish_mmc"
......
(This diff has been collapsed.)
/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef LINUX_MMC_CQHCI_H
#define LINUX_MMC_CQHCI_H
#include <linux/compiler.h>
#include <linux/bitops.h>
#include <linux/spinlock_types.h>
#include <linux/types.h>
#include <linux/completion.h>
#include <linux/wait.h>
#include <linux/irqreturn.h>
#include <asm/io.h>
/* registers */
/* version */
#define CQHCI_VER 0x00
#define CQHCI_VER_MAJOR(x) (((x) & GENMASK(11, 8)) >> 8)
#define CQHCI_VER_MINOR1(x) (((x) & GENMASK(7, 4)) >> 4)
#define CQHCI_VER_MINOR2(x) ((x) & GENMASK(3, 0))
/* capabilities */
#define CQHCI_CAP 0x04
/* configuration */
#define CQHCI_CFG 0x08
#define CQHCI_DCMD 0x00001000
#define CQHCI_TASK_DESC_SZ 0x00000100
#define CQHCI_ENABLE 0x00000001
/* control */
#define CQHCI_CTL 0x0C
#define CQHCI_CLEAR_ALL_TASKS 0x00000100
#define CQHCI_HALT 0x00000001
/* interrupt status */
#define CQHCI_IS 0x10
#define CQHCI_IS_HAC BIT(0)
#define CQHCI_IS_TCC BIT(1)
#define CQHCI_IS_RED BIT(2)
#define CQHCI_IS_TCL BIT(3)
#define CQHCI_IS_MASK (CQHCI_IS_TCC | CQHCI_IS_RED)
/* interrupt status enable */
#define CQHCI_ISTE 0x14
/* interrupt signal enable */
#define CQHCI_ISGE 0x18
/* interrupt coalescing */
#define CQHCI_IC 0x1C
#define CQHCI_IC_ENABLE BIT(31)
#define CQHCI_IC_RESET BIT(16)
#define CQHCI_IC_ICCTHWEN BIT(15)
#define CQHCI_IC_ICCTH(x) (((x) & 0x1F) << 8)
#define CQHCI_IC_ICTOVALWEN BIT(7)
#define CQHCI_IC_ICTOVAL(x) ((x) & 0x7F)
/* task list base address */
#define CQHCI_TDLBA 0x20
/* task list base address upper */
#define CQHCI_TDLBAU 0x24
/* door-bell */
#define CQHCI_TDBR 0x28
/* task completion notification */
#define CQHCI_TCN 0x2C
/* device queue status */
#define CQHCI_DQS 0x30
/* device pending tasks */
#define CQHCI_DPT 0x34
/* task clear */
#define CQHCI_TCLR 0x38
/* send status config 1 */
#define CQHCI_SSC1 0x40
/* send status config 2 */
#define CQHCI_SSC2 0x44
/* response for dcmd */
#define CQHCI_CRDCT 0x48
/* response mode error mask */
#define CQHCI_RMEM 0x50
/* task error info */
#define CQHCI_TERRI 0x54
#define CQHCI_TERRI_C_INDEX(x) ((x) & GENMASK(5, 0))
#define CQHCI_TERRI_C_TASK(x) (((x) & GENMASK(12, 8)) >> 8)
#define CQHCI_TERRI_C_VALID(x) ((x) & BIT(15))
#define CQHCI_TERRI_D_INDEX(x) (((x) & GENMASK(21, 16)) >> 16)
#define CQHCI_TERRI_D_TASK(x) (((x) & GENMASK(28, 24)) >> 24)
#define CQHCI_TERRI_D_VALID(x) ((x) & BIT(31))
/* command response index */
#define CQHCI_CRI 0x58
/* command response argument */
#define CQHCI_CRA 0x5C
#define CQHCI_INT_ALL 0xF
#define CQHCI_IC_DEFAULT_ICCTH 31
#define CQHCI_IC_DEFAULT_ICTOVAL 1
/* attribute fields */
#define CQHCI_VALID(x) (((x) & 1) << 0)
#define CQHCI_END(x) (((x) & 1) << 1)
#define CQHCI_INT(x) (((x) & 1) << 2)
#define CQHCI_ACT(x) (((x) & 0x7) << 3)
/* data command task descriptor fields */
#define CQHCI_FORCED_PROG(x) (((x) & 1) << 6)
#define CQHCI_CONTEXT(x) (((x) & 0xF) << 7)
#define CQHCI_DATA_TAG(x) (((x) & 1) << 11)
#define CQHCI_DATA_DIR(x) (((x) & 1) << 12)
#define CQHCI_PRIORITY(x) (((x) & 1) << 13)
#define CQHCI_QBAR(x) (((x) & 1) << 14)
#define CQHCI_REL_WRITE(x) (((x) & 1) << 15)
#define CQHCI_BLK_COUNT(x) (((x) & 0xFFFF) << 16)
#define CQHCI_BLK_ADDR(x) (((x) & 0xFFFFFFFF) << 32)
/* direct command task descriptor fields */
#define CQHCI_CMD_INDEX(x) (((x) & 0x3F) << 16)
#define CQHCI_CMD_TIMING(x) (((x) & 1) << 22)
#define CQHCI_RESP_TYPE(x) (((x) & 0x3) << 23)
/* transfer descriptor fields */
#define CQHCI_DAT_LENGTH(x) (((x) & 0xFFFF) << 16)
#define CQHCI_DAT_ADDR_LO(x) (((x) & 0xFFFFFFFF) << 32)
#define CQHCI_DAT_ADDR_HI(x) (((x) & 0xFFFFFFFF) << 0)
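
/*
 * Informative example: the CQHCI driver composes a 64-bit task descriptor
 * from the attribute and data command task fields above. For a read
 * (data direction 1) of nr_blocks starting at block_addr, the descriptor
 * looks roughly like:
 *
 *	desc = CQHCI_VALID(1) | CQHCI_END(1) | CQHCI_INT(1) |
 *	       CQHCI_ACT(0x5) | CQHCI_DATA_DIR(1) |
 *	       CQHCI_BLK_COUNT(nr_blocks) |
 *	       CQHCI_BLK_ADDR((u64)block_addr);
 */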
struct cqhci_host_ops;
struct mmc_host;
struct cqhci_slot;
struct cqhci_host {
	const struct cqhci_host_ops *ops;
	void __iomem *mmio;
	struct mmc_host *mmc;

	spinlock_t lock;

	/* relative card address of device */
	unsigned int rca;

	/* 64 bit DMA */
	bool dma64;
	int num_slots;
	int qcnt;

	u32 dcmd_slot;
	u32 caps;
#define CQHCI_TASK_DESC_SZ_128		0x1

	u32 quirks;
#define CQHCI_QUIRK_SHORT_TXFR_DESC_SZ	0x1

	bool enabled;
	bool halted;
	bool init_done;
	bool activated;
	bool waiting_for_idle;
	bool recovery_halt;

	size_t desc_size;
	size_t data_size;

	u8 *desc_base;

	/* total descriptor size */
	u8 slot_sz;

	/* 64/128 bit depends on CQHCI_CFG */
	u8 task_desc_len;

	/* 64 bit on 32-bit arch, 128 bit on 64-bit */
	u8 link_desc_len;

	u8 *trans_desc_base;
	/* same length as transfer descriptor */
	u8 trans_desc_len;

	dma_addr_t desc_dma_base;
	dma_addr_t trans_desc_dma_base;

	struct completion halt_comp;
	wait_queue_head_t wait_queue;
	struct cqhci_slot *slot;
};
struct cqhci_host_ops {
	void (*dumpregs)(struct mmc_host *mmc);
	void (*write_l)(struct cqhci_host *host, u32 val, int reg);
	u32 (*read_l)(struct cqhci_host *host, int reg);
	void (*enable)(struct mmc_host *mmc);
	void (*disable)(struct mmc_host *mmc, bool recovery);
};
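/*
 * Usage sketch (illustrative, all foo_* names are hypothetical): a glue
 * driver supplies enable/disable for the mode switches around CQE
 * activation and, optionally, dumpregs for diagnostics; read_l/write_l are
 * only needed when plain MMIO accesses will not do.
 */
static void foo_cqhci_enable(struct mmc_host *mmc)
{
	/* e.g. reroute DMA/IRQ handling to the CQE engine */
}

static void foo_cqhci_disable(struct mmc_host *mmc, bool recovery)
{
	/* undo the above; 'recovery' marks an error-recovery teardown */
}

static const struct cqhci_host_ops foo_cqhci_ops = {
	.enable = foo_cqhci_enable,
	.disable = foo_cqhci_disable,
};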
static inline void cqhci_writel(struct cqhci_host *host, u32 val, int reg)
{
	if (unlikely(host->ops->write_l))
		host->ops->write_l(host, val, reg);
	else
		writel_relaxed(val, host->mmio + reg);
}

static inline u32 cqhci_readl(struct cqhci_host *host, int reg)
{
	if (unlikely(host->ops->read_l))
		return host->ops->read_l(host, reg);
	else
		return readl_relaxed(host->mmio + reg);
}
struct platform_device;
irqreturn_t cqhci_irq(struct mmc_host *mmc, u32 intmask, int cmd_error,
		      int data_error);
int cqhci_init(struct cqhci_host *cq_host, struct mmc_host *mmc, bool dma64);
struct cqhci_host *cqhci_pltfm_init(struct platform_device *pdev);
int cqhci_suspend(struct mmc_host *mmc);
int cqhci_resume(struct mmc_host *mmc);
#endif
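Putting the exported API together: a platform host driver typically calls cqhci_pltfm_init() to map the CQHCI register region from the platform device, installs its ops, and hands the result to cqhci_init() along with the mmc host and its DMA width; the driver's own interrupt handler then forwards CQE interrupts to cqhci_irq(). A minimal probe-side sketch, assuming the hypothetical foo_cqhci_ops above and locally determined mmc/dma64 values:

	struct cqhci_host *cq_host = cqhci_pltfm_init(pdev);
	int ret;

	if (IS_ERR(cq_host))
		return PTR_ERR(cq_host);

	cq_host->ops = &foo_cqhci_ops;
	ret = cqhci_init(cq_host, mmc, dma64);
	if (ret)
		return ret;

	/* later, from the host driver's interrupt handler: */
	/* return cqhci_irq(mmc, intmask, cmd_error, data_error); */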
@@ -174,7 +174,7 @@ module_param(poll_loopcount, uint, S_IRUGO);
 MODULE_PARM_DESC(poll_loopcount,
		 "Maximum polling loop count. Default = 32");
 
-static unsigned __initdata use_dma = 1;
+static unsigned use_dma = 1;
 module_param(use_dma, uint, 0);
 MODULE_PARM_DESC(use_dma, "Whether to use DMA or not. Default = 1");
@@ -496,8 +496,7 @@ static int mmc_davinci_start_dma_transfer(struct mmc_davinci_host *host,
 	return ret;
 }
 
-static void __init_or_module
-davinci_release_dma_channels(struct mmc_davinci_host *host)
+static void davinci_release_dma_channels(struct mmc_davinci_host *host)
 {
 	if (!host->use_dma)
 		return;
@@ -506,7 +505,7 @@ davinci_release_dma_channels(struct mmc_davinci_host *host)
 	dma_release_channel(host->dma_rx);
 }
 
-static int __init davinci_acquire_dma_channels(struct mmc_davinci_host *host)
+static int davinci_acquire_dma_channels(struct mmc_davinci_host *host)
 {
 	host->dma_tx = dma_request_chan(mmc_dev(host->mmc), "tx");
 	if (IS_ERR(host->dma_tx)) {
@@ -1201,7 +1200,7 @@ static int mmc_davinci_parse_pdata(struct mmc_host *mmc)
 	return 0;
 }
 
-static int __init davinci_mmcsd_probe(struct platform_device *pdev)
+static int davinci_mmcsd_probe(struct platform_device *pdev)
 {
 	const struct of_device_id *match;
 	struct mmc_davinci_host *host = NULL;
@@ -1254,8 +1253,9 @@ static int __init davinci_mmcsd_probe(struct platform_device *pdev)
 		pdev->id_entry = match->data;
 		ret = mmc_of_parse(mmc);
 		if (ret) {
-			dev_err(&pdev->dev,
-				"could not parse of data: %d\n", ret);
+			if (ret != -EPROBE_DEFER)
+				dev_err(&pdev->dev,
+					"could not parse of data: %d\n", ret);
 			goto parse_fail;
 		}
 	} else {
@@ -1414,11 +1414,12 @@ static struct platform_driver davinci_mmcsd_driver = {
 		.pm = davinci_mmcsd_pm_ops,
 		.of_match_table = davinci_mmc_dt_ids,
 	},
+	.probe = davinci_mmcsd_probe,
 	.remove = __exit_p(davinci_mmcsd_remove),
 	.id_table = davinci_mmc_devtype,
 };
 
-module_platform_driver_probe(davinci_mmcsd_driver, davinci_mmcsd_probe);
+module_platform_driver(davinci_mmcsd_driver);
MODULE_AUTHOR("Texas Instruments India");
MODULE_LICENSE("GPL");
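The registration swap above is what forces the earlier __init/__initdata removals in this file: module_platform_driver_probe() binds exactly once at registration, which lets the probe path live in discarded init memory but makes -EPROBE_DEFER fatal, since the core can never call the probe again. The regular module_platform_driver() keeps the probe callback around for deferred re-probing, so nothing it reaches may be __init. The resulting pattern, as a minimal sketch with illustrative names:

static int foo_probe(struct platform_device *pdev)
{
	/* may return -EPROBE_DEFER; the driver core will retry later */
	return 0;
}

static struct platform_driver foo_driver = {
	.driver = {
		.name = "foo",
	},
	.probe = foo_probe,
};
module_platform_driver(foo_driver);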
@@ -1208,7 +1208,7 @@ static int meson_mmc_probe(struct platform_device *pdev)
 	}
 
 	irq = platform_get_irq(pdev, 0);
-	if (!irq) {
+	if (irq <= 0) {
 		dev_err(&pdev->dev, "failed to get interrupt resource.\n");
 		ret = -EINVAL;
 		goto free_host;
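The original test only caught irq == 0, but platform_get_irq() returns a negative errno on failure, so a lookup error would previously have slipped through to the request_irq stage. Checking irq <= 0 also rejects zero, which this driver treats as invalid. A sketch of the common pattern, here propagating the errno instead of flattening it to -EINVAL (which also preserves -EPROBE_DEFER):

	irq = platform_get_irq(pdev, 0);
	if (irq <= 0)
		return irq ? irq : -EINVAL;	/* keep the original errno if there is one */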
@@ -82,6 +82,10 @@ static unsigned int fmax = 515633;
  * @qcom_fifo: enables qcom specific fifo pio read logic.
  * @qcom_dml: enables qcom specific dma glue for dma transfers.
  * @reversed_irq_handling: handle data irq before cmd irq.
+ * @mmcimask1: true if the variant has an MMCIMASK1 register.
+ * @start_err: bitmask identifying the STARTBITERR bit inside the MMCISTATUS
+ *	       register.
+ * @opendrain: bitmask identifying the OPENDRAIN bit inside the MMCIPOWER register.
  */
 struct variant_data {
 	unsigned int clkreg;
@@ -111,6 +115,9 @@ struct variant_data {
 	bool qcom_fifo;
 	bool qcom_dml;
 	bool reversed_irq_handling;
+	bool mmcimask1;
+	u32 start_err;
+	u32 opendrain;
 };
 
 static struct variant_data variant_arm = {
@@ -120,6 +127,9 @@ static struct variant_data variant_arm = {
 	.pwrreg_powerup = MCI_PWR_UP,
 	.f_max = 100000000,
 	.reversed_irq_handling = true,
+	.mmcimask1 = true,
+	.start_err = MCI_STARTBITERR,
+	.opendrain = MCI_ROD,
 };
 
 static struct variant_data variant_arm_extended_fifo = {
@@ -128,6 +138,9 @@ static struct variant_data variant_arm_extended_fifo = {
 	.datalength_bits = 16,
 	.pwrreg_powerup = MCI_PWR_UP,
 	.f_max = 100000000,
+	.mmcimask1 = true,
+	.start_err = MCI_STARTBITERR,
+	.opendrain = MCI_ROD,
 };
 
 static struct variant_data variant_arm_extended_fifo_hwfc = {
@@ -137,6 +150,9 @@ static struct variant_data variant_arm_extended_fifo_hwfc = {
 	.datalength_bits = 16,
 	.pwrreg_powerup = MCI_PWR_UP,
 	.f_max = 100000000,
+	.mmcimask1 = true,
+	.start_err = MCI_STARTBITERR,
+	.opendrain = MCI_ROD,
 };
 
 static struct variant_data variant_u300 = {
@@ -152,6 +168,9 @@ static struct variant_data variant_u300 = {
 	.signal_direction = true,
 	.pwrreg_clkgate = true,
 	.pwrreg_nopower = true,
+	.mmcimask1 = true,
+	.start_err = MCI_STARTBITERR,
+	.opendrain = MCI_OD,
 };
 
 static struct variant_data variant_nomadik = {
@@ -168,6 +187,9 @@ static struct variant_data variant_nomadik = {
 	.signal_direction = true,
 	.pwrreg_clkgate = true,
 	.pwrreg_nopower = true,
+	.mmcimask1 = true,
+	.start_err = MCI_STARTBITERR,
+	.opendrain = MCI_OD,
 };
 
 static struct variant_data variant_ux500 = {
@@ -190,6 +212,9 @@ static struct variant_data variant_ux500 = {
 	.busy_detect_flag = MCI_ST_CARDBUSY,
 	.busy_detect_mask = MCI_ST_BUSYENDMASK,
 	.pwrreg_nopower = true,
+	.mmcimask1 = true,
+	.start_err = MCI_STARTBITERR,
+	.opendrain = MCI_OD,
 };
 
 static struct variant_data variant_ux500v2 = {
@@ -214,6 +239,26 @@ static struct variant_data variant_ux500v2 = {
 	.busy_detect_flag = MCI_ST_CARDBUSY,
 	.busy_detect_mask = MCI_ST_BUSYENDMASK,
 	.pwrreg_nopower = true,
+	.mmcimask1 = true,
+	.start_err = MCI_STARTBITERR,
+	.opendrain = MCI_OD,
 };
 
+static struct variant_data variant_stm32 = {
+	.fifosize = 32 * 4,
+	.fifohalfsize = 8 * 4,
+	.clkreg = MCI_CLK_ENABLE,
+	.clkreg_enable = MCI_ST_UX500_HWFCEN,
+	.clkreg_8bit_bus_enable = MCI_ST_8BIT_BUS,
+	.clkreg_neg_edge_enable = MCI_ST_UX500_NEG_EDGE,
+	.datalength_bits = 24,
+	.datactrl_mask_sdio = MCI_DPSM_ST_SDIOEN,
+	.st_sdio = true,
+	.st_clkdiv = true,
+	.pwrreg_powerup = MCI_PWR_ON,
+	.f_max = 48000000,
+	.pwrreg_clkgate = true,
+	.pwrreg_nopower = true,
+};
+
 static struct variant_data variant_qcom = {
@@ -232,6 +277,9 @@ static struct variant_data variant_qcom = {
 	.explicit_mclk_control = true,
 	.qcom_fifo = true,
 	.qcom_dml = true,
+	.mmcimask1 = true,
+	.start_err = MCI_STARTBITERR,
+	.opendrain = MCI_ROD,
 };
 
 /* Busy detection for the ST Micro variant */
@@ -396,6 +444,7 @@ mmci_request_end(struct mmci_host *host, struct mmc_request *mrq)
 static void mmci_set_mask1(struct mmci_host *host, unsigned int mask)
 {
 	void __iomem *base = host->base;
+	struct variant_data *variant = host->variant;
 
 	if (host->singleirq) {
 		unsigned int mask0 = readl(base + MMCIMASK0);
@@ -406,7 +455,10 @@ static void mmci_set_mask1(struct mmci_host *host, unsigned int mask)
 		writel(mask0, base + MMCIMASK0);
 	}
 
-	writel(mask, base + MMCIMASK1);
+	if (variant->mmcimask1)
+		writel(mask, base + MMCIMASK1);
+
+	host->mask1_reg = mask;
 }
 
 static void mmci_stop_data(struct mmci_host *host)
@@ -921,8 +973,9 @@ mmci_data_irq(struct mmci_host *host, struct mmc_data *data,
 		return;
 
 	/* First check for errors */
-	if (status & (MCI_DATACRCFAIL|MCI_DATATIMEOUT|MCI_STARTBITERR|
-		      MCI_TXUNDERRUN|MCI_RXOVERRUN)) {
+	if (status & (MCI_DATACRCFAIL | MCI_DATATIMEOUT |
+		      host->variant->start_err |
+		      MCI_TXUNDERRUN | MCI_RXOVERRUN)) {
 		u32 remain, success;
 
 		/* Terminate the DMA transfer */
@@ -1286,7 +1339,7 @@ static irqreturn_t mmci_irq(int irq, void *dev_id)
 		status = readl(host->base + MMCISTATUS);
 
 		if (host->singleirq) {
-			if (status & readl(host->base + MMCIMASK1))
+			if (status & host->mask1_reg)
 				mmci_pio_irq(irq, dev_id);
 
 			status &= ~MCI_IRQ1MASK;
@@ -1429,16 +1482,18 @@ static void mmci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
 				~MCI_ST_DATA2DIREN);
 	}
 
-	if (ios->bus_mode == MMC_BUSMODE_OPENDRAIN) {
-		if (host->hw_designer != AMBA_VENDOR_ST)
-			pwr |= MCI_ROD;
-		else {
-			/*
-			 * The ST Micro variant use the ROD bit for something
-			 * else and only has OD (Open Drain).
-			 */
-			pwr |= MCI_OD;
-		}
+	if (variant->opendrain) {
+		if (ios->bus_mode == MMC_BUSMODE_OPENDRAIN)
+			pwr |= variant->opendrain;
+	} else {
+		/*
+		 * If the variant cannot configure the pads on its own, we
+		 * expect pinctrl to be able to do that for us.
+		 */
+		if (ios->bus_mode == MMC_BUSMODE_OPENDRAIN)
+			pinctrl_select_state(host->pinctrl, host->pins_opendrain);
+		else
+			pinctrl_select_state(host->pinctrl, host->pins_default);
 	}
 
 	/*
@@ -1583,6 +1638,35 @@ static int mmci_probe(struct amba_device *dev,
 	host = mmc_priv(mmc);
 	host->mmc = mmc;
 
+	/*
+	 * Some variants (e.g. STM32) don't have an opendrain bit; the pads
+	 * can nevertheless be configured accordingly via pinctrl.
+	 */
+	if (!variant->opendrain) {
+		host->pinctrl = devm_pinctrl_get(&dev->dev);
+		if (IS_ERR(host->pinctrl)) {
+			dev_err(&dev->dev, "failed to get pinctrl");
+			ret = PTR_ERR(host->pinctrl);
+			goto host_free;
+		}
+
+		host->pins_default = pinctrl_lookup_state(host->pinctrl,
+							  PINCTRL_STATE_DEFAULT);
+		if (IS_ERR(host->pins_default)) {
+			dev_err(mmc_dev(mmc), "Can't select default pins\n");
+			ret = PTR_ERR(host->pins_default);
+			goto host_free;
+		}
+
+		host->pins_opendrain = pinctrl_lookup_state(host->pinctrl,
+							    MMCI_PINCTRL_STATE_OPENDRAIN);
+		if (IS_ERR(host->pins_opendrain)) {
+			dev_err(mmc_dev(mmc), "Can't select opendrain pins\n");
+			ret = PTR_ERR(host->pins_opendrain);
+			goto host_free;
+		}
+	}
+
 	host->hw_designer = amba_manf(dev);
 	host->hw_revision = amba_rev(dev);
 	dev_dbg(mmc_dev(mmc), "designer ID = 0x%02x\n", host->hw_designer);
@@ -1729,7 +1813,10 @@ static int mmci_probe(struct amba_device *dev,
 	spin_lock_init(&host->lock);
 
 	writel(0, host->base + MMCIMASK0);
-	writel(0, host->base + MMCIMASK1);
+
+	if (variant->mmcimask1)
+		writel(0, host->base + MMCIMASK1);
+
 	writel(0xfff, host->base + MMCICLEAR);
 
 	/*
@@ -1809,6 +1896,7 @@ static int mmci_remove(struct amba_device *dev)
 
 	if (mmc) {
 		struct mmci_host *host = mmc_priv(mmc);
+		struct variant_data *variant = host->variant;
 
 		/*
 		 * Undo pm_runtime_put() in probe.  We use the _sync
@@ -1819,7 +1907,9 @@ static int mmci_remove(struct amba_device *dev)
 		mmc_remove_host(mmc);
 
 		writel(0, host->base + MMCIMASK0);
-		writel(0, host->base + MMCIMASK1);
+
+		if (variant->mmcimask1)
+			writel(0, host->base + MMCIMASK1);
 
 		writel(0, host->base + MMCICOMMAND);
 		writel(0, host->base + MMCIDATACTRL);
@@ -1951,6 +2041,11 @@ static const struct amba_id mmci_ids[] = {
 		.mask = 0xf0ffffff,
 		.data = &variant_ux500v2,
 	},
+	{
+		.id = 0x00880180,
+		.mask = 0x00ffffff,
+		.data = &variant_stm32,
+	},
 	/* Qualcomm variants */
 	{
 		.id = 0x00051180,