From 682696605c7093d2800c498c04166831e5aedf87 Mon Sep 17 00:00:00 2001
From: Ulf Hansson
Date: Thu, 13 Apr 2017 16:48:11 +0200
Subject: mmc: sdio: Add API to manage SDIO IRQs from a workqueue

For hosts not supporting MMC_CAP2_SDIO_IRQ_NOTHREAD but MMC_CAP_SDIO_IRQ,
the SDIO IRQs are processed from a dedicated kernel thread. For these
cases, the host calls mmc_signal_sdio_irq() from its ISR to signal a new
SDIO IRQ.

Signaling an SDIO IRQ causes the host's ->enable_sdio_irq() callback to be
invoked, which temporarily disables the IRQs before the kernel thread is
woken up to process them. When processing of the IRQs is completed, the
kernel thread re-enables them, again by invoking the host's
->enable_sdio_irq().

The observation from this is that the execution path is unnecessarily
complex, as the host driver already knows that it needs to temporarily
disable the IRQs before signaling a new one. Moreover, replacing the
kernel thread with a work/workqueue would not only greatly simplify the
code, but also make it more robust.

To address the above problems, let's continue to build upon the support
for MMC_CAP2_SDIO_IRQ_NOTHREAD, as it already allows SDIO IRQs to be
processed without using the clumsy kernel thread and without the
ping-pong calls of the host's ->enable_sdio_irq() callback for each
processed IRQ.

Therefore, let's add a new API, sdio_signal_irq(), which enables hosts to
signal/process SDIO IRQs by using a work/workqueue rather than the kernel
thread. Also add a new host callback, ->ack_sdio_irq(), which the work
invokes when the SDIO IRQs have been processed. This informs the host
about when it shall re-enable the SDIO IRQs. Potentially, we could re-use
the existing ->enable_sdio_irq() callback instead of adding a new one;
however, it has turned out to be more convenient for hosts to get this
information via a separate callback.

Hosts that want to use this new method to signal/process SDIO IRQs must
enable MMC_CAP2_SDIO_IRQ_NOTHREAD and implement the ->ack_sdio_irq()
callback.

Signed-off-by: Ulf Hansson
Tested-by: Douglas Anderson
Reviewed-by: Douglas Anderson
---
 include/linux/mmc/host.h | 3 +++
 1 file changed, 3 insertions(+)

(limited to 'include')

diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 21385ac0c9b1..f186b26c05a4 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -130,6 +130,7 @@ struct mmc_host_ops {
 	int	(*get_cd)(struct mmc_host *host);
 
 	void	(*enable_sdio_irq)(struct mmc_host *host, int enable);
+	void	(*ack_sdio_irq)(struct mmc_host *host);
 
 	/* optional callback for HC quirks */
 	void	(*init_card)(struct mmc_host *host, struct mmc_card *card);
@@ -358,6 +359,7 @@ struct mmc_host {
 
 	unsigned int		sdio_irqs;
 	struct task_struct	*sdio_irq_thread;
+	struct delayed_work	sdio_irq_work;
 	bool			sdio_irq_pending;
 	atomic_t		sdio_irq_thread_abort;
 
@@ -428,6 +430,7 @@ static inline void mmc_signal_sdio_irq(struct mmc_host *host)
 }
 
 void sdio_run_irqs(struct mmc_host *host);
+void sdio_signal_irq(struct mmc_host *host);
 
 #ifdef CONFIG_REGULATOR
 int mmc_regulator_get_ocrmask(struct regulator *supply);
-- cgit v1.2.3

From c3dccb74be28a345a2ebcc224e41b774529b8b8f Mon Sep 17 00:00:00 2001
From: Linus Walleij
Date: Thu, 18 May 2017 11:29:31 +0200
Subject: mmc: core: Delete bounce buffer Kconfig option

This option is activated by all multiplatform configs and the like, so we
almost always have it turned on, and the memory it saves is negligible,
even more so moving forward.
The actual bounce buffer only gets allocated when used; the only thing
the ifdefs save is a little bit of code.

It is highly improper to have this as a Kconfig option that gets turned
on at build time; make this a pure runtime thing and let the host decide
whether we use bounce buffers. We add a new host capability flag,
MMC_CAP_NO_BOUNCE_BUFF, so that a host driver can disable bounce
buffering for its host (the earlier "disable_bounce" host property was
replaced by this capability).

Notice that mmc_queue_calc_bouncesz() already disables the bounce buffers
if host->max_segs != 1, so any arch that has a maximum number of segments
higher than 1 will have bounce buffers disabled.

The option CONFIG_MMC_BLOCK_BOUNCE is default y, so the majority of
platforms in the kernel already have it on, and it then gets turned off
at runtime since most of these have host->max_segs > 1. The few
exceptions that have host->max_segs == 1 and still turn off the bounce
buffering are those that disable it in their defconfig.

Those are the following:

arch/arm/configs/colibri_pxa300_defconfig
arch/arm/configs/zeus_defconfig
- Uses MMC_PXA, drivers/mmc/host/pxamci.c
- Sets host->max_segs = NR_SG, which is 1
- This needs its bounce buffer deactivated, so we set
  MMC_CAP_NO_BOUNCE_BUFF in the host driver

arch/arm/configs/davinci_all_defconfig
- Uses MMC_DAVINCI, drivers/mmc/host/davinci_mmc.c
- This driver sets host->max_segs to MAX_NR_SG, which is 16
- That means this driver already disables bounce buffers
- No special action needed for this platform

arch/arm/configs/lpc32xx_defconfig
arch/arm/configs/nhk8815_defconfig
arch/arm/configs/u300_defconfig
- Uses MMC_ARMMMCI, drivers/mmc/host/mmci.[c|h]
- This driver by default sets host->max_segs to NR_SG, which is 128,
  unless a DMA engine is used, and in that case the number of segments
  is also > 1
- That means this driver already disables bounce buffers
- No special action needed for these platforms

arch/arm/configs/sama5_defconfig
- Uses MMC_SDHCI, MMC_SDHCI_PLTFM, MMC_SDHCI_OF_AT91, MMC_ATMELMCI
- Uses drivers/mmc/host/sdhci.c
- Normally sets host->max_segs to SDHCI_MAX_SEGS, which is 128, and thus
  disables bounce buffers
- Sets host->max_segs to 1 if SDHCI_USE_SDMA is set
- SDHCI_USE_SDMA is only set by SDHCI on PCI adapters
- That means that for this platform bounce buffers are already disabled
  at runtime
- No special action needed for this platform

arch/blackfin/configs/CM-BF533_defconfig
arch/blackfin/configs/CM-BF537E_defconfig
- Uses MMC_SPI (a simple MMC card connected on SPI pins)
- Uses drivers/mmc/host/mmc_spi.c
- Sets host->max_segs to MMC_SPI_BLOCKSATONCE, which is 128
- That means this platform already disables bounce buffers at runtime
- No special action needed for these platforms

arch/mips/configs/cavium_octeon_defconfig
- Uses MMC_CAVIUM_OCTEON, drivers/mmc/host/cavium.c
- Sets host->max_segs to 16 or 1
- Setting MMC_CAP_NO_BOUNCE_BUFF to be sure for the max_segs == 1 case

arch/mips/configs/qi_lb60_defconfig
- Uses MMC_JZ4740, drivers/mmc/host/jz4740_mmc.c
- This sets host->max_segs to 128, so bounce buffers are already disabled
  at runtime
- No action needed for this platform

It would be interesting to come up with a list of the platforms that
actually end up using bounce buffers. I have not been able to infer such
a list, but it occurs when host->max_segs == 1 and the bounce buffering
is not explicitly disabled.
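For illustration, a minimal sketch of how a single-segment host driver
opts out under the new runtime scheme (the foo_* names are hypothetical;
only the MMC_CAP_NO_BOUNCE_BUFF flag and the max_segs logic come from
this patch):

static int foo_mmc_probe(struct platform_device *pdev)
{
	struct mmc_host *mmc;
	int ret;

	mmc = mmc_alloc_host(0, &pdev->dev);
	if (!mmc)
		return -ENOMEM;

	/* This controller can only do one segment per request ... */
	mmc->max_segs = 1;
	/* ... but we still do not want the core to allocate bounce buffers */
	mmc->caps |= MMC_CAP_NO_BOUNCE_BUFF;

	ret = mmc_add_host(mmc);
	if (ret)
		mmc_free_host(mmc);
	return ret;
}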
Signed-off-by: Linus Walleij Signed-off-by: Ulf Hansson --- drivers/mmc/core/Kconfig | 18 ------------------ drivers/mmc/core/queue.c | 15 +-------------- drivers/mmc/host/cavium.c | 4 +++- drivers/mmc/host/pxamci.c | 6 +++++- include/linux/mmc/host.h | 1 + 5 files changed, 10 insertions(+), 34 deletions(-) (limited to 'include') diff --git a/drivers/mmc/core/Kconfig b/drivers/mmc/core/Kconfig index fc1ecdaaa9ca..42e89060cd41 100644 --- a/drivers/mmc/core/Kconfig +++ b/drivers/mmc/core/Kconfig @@ -61,24 +61,6 @@ config MMC_BLOCK_MINORS If unsure, say 8 here. -config MMC_BLOCK_BOUNCE - bool "Use bounce buffer for simple hosts" - depends on MMC_BLOCK - default y - help - SD/MMC is a high latency protocol where it is crucial to - send large requests in order to get high performance. Many - controllers, however, are restricted to continuous memory - (i.e. they can't do scatter-gather), something the kernel - rarely can provide. - - Say Y here to help these restricted hosts by bouncing - requests back and forth from a large buffer. You will get - a big performance gain at the cost of up to 64 KiB of - physical memory. - - If unsure, say Y here. - config SDIO_UART tristate "SDIO UART/GPS class support" depends on TTY diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 5c37b6be3e7b..70ba7f94c706 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -219,7 +219,6 @@ static struct mmc_queue_req *mmc_queue_alloc_mqrqs(int qdepth) return mqrq; } -#ifdef CONFIG_MMC_BLOCK_BOUNCE static int mmc_queue_alloc_bounce_bufs(struct mmc_queue_req *mqrq, int qdepth, unsigned int bouncesz) { @@ -258,7 +257,7 @@ static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host) { unsigned int bouncesz = MMC_QUEUE_BOUNCESZ; - if (host->max_segs != 1) + if (host->max_segs != 1 || (host->caps & MMC_CAP_NO_BOUNCE_BUFF)) return 0; if (bouncesz > host->max_req_size) @@ -273,18 +272,6 @@ static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host) return bouncesz; } -#else -static inline bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq, - int qdepth, unsigned int bouncesz) -{ - return false; -} - -static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host) -{ - return 0; -} -#endif static int mmc_queue_alloc_sgs(struct mmc_queue_req *mqrq, int qdepth, int max_segs) diff --git a/drivers/mmc/host/cavium.c b/drivers/mmc/host/cavium.c index b8aaf0fdb77c..3686d77c717b 100644 --- a/drivers/mmc/host/cavium.c +++ b/drivers/mmc/host/cavium.c @@ -1035,10 +1035,12 @@ int cvm_mmc_of_slot_probe(struct device *dev, struct cvm_mmc_host *host) * We only have a 3.3v supply, we cannot support any * of the UHS modes. We do support the high speed DDR * modes up to 52MHz. + * + * Disable bounce buffers for max_segs = 1 */ mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED | MMC_CAP_ERASE | MMC_CAP_CMD23 | MMC_CAP_POWER_OFF_CARD | - MMC_CAP_3_3V_DDR; + MMC_CAP_3_3V_DDR | MMC_CAP_NO_BOUNCE_BUFF; if (host->use_sg) mmc->max_segs = 16; diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c index c763b404510f..59ab194cb009 100644 --- a/drivers/mmc/host/pxamci.c +++ b/drivers/mmc/host/pxamci.c @@ -702,7 +702,11 @@ static int pxamci_probe(struct platform_device *pdev) pxamci_init_ocr(host); - mmc->caps = 0; + /* + * This architecture used to disable bounce buffers through its + * defconfig, now it is done at runtime as a host property. 
+	 */
+	mmc->caps = MMC_CAP_NO_BOUNCE_BUFF;
 	host->cmdat = 0;
 	if (!cpu_is_pxa25x()) {
 		mmc->caps |= MMC_CAP_4_BIT_DATA | MMC_CAP_SDIO_IRQ;
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index f186b26c05a4..9209f95a5106 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -271,6 +271,7 @@ struct mmc_host {
 #define MMC_CAP_UHS_SDR50	(1 << 18)	/* Host supports UHS SDR50 mode */
 #define MMC_CAP_UHS_SDR104	(1 << 19)	/* Host supports UHS SDR104 mode */
 #define MMC_CAP_UHS_DDR50	(1 << 20)	/* Host supports UHS DDR50 mode */
+#define MMC_CAP_NO_BOUNCE_BUFF	(1 << 21)	/* Disable bounce buffers on host */
 #define MMC_CAP_DRIVER_TYPE_A	(1 << 23)	/* Host supports Driver Type A */
 #define MMC_CAP_DRIVER_TYPE_C	(1 << 24)	/* Host supports Driver Type C */
 #define MMC_CAP_DRIVER_TYPE_D	(1 << 25)	/* Host supports Driver Type D */
-- cgit v1.2.3

From 304419d8a7e9204c5d19b704467b814df8c8f5b1 Mon Sep 17 00:00:00 2001
From: Linus Walleij
Date: Thu, 18 May 2017 11:29:32 +0200
Subject: mmc: core: Allocate per-request data using the block layer core

The mmc_queue_req is a per-request state container the MMC core uses
to carry bounce buffers, pointers to asynchronous requests and so on.
It is currently allocated as a static array of objects; as a request
comes in, an mmc_queue_req is assigned to it and used during the
lifetime of the request.

This is backwards compared to how other block layer drivers work: they
usually let the block core provide a per-request struct that gets
allocated right behind the struct request, and which can be obtained
using the blk_mq_rq_to_pdu() helper. (The _mq_ infix in this function
name is misleading: it is used by both the old and the MQ block layer.)

The per-request struct gets allocated to the size stored in the queue
variable .cmd_size, initialized using the .init_rq_fn() and cleaned up
using the .exit_rq_fn().

We make the MMC core rely on this block layer mechanism to allocate the
per-request mmc_queue_req state container. Doing this makes a lot of
complicated queue handling go away. We only need to keep the .qcnt
counter that keeps count of how many requests are currently being
processed by the MMC layer. The MQ block layer will replace this too
once we transition to it.

Doing this refactoring is necessary to move the ioctl() operations into
custom block layer requests tagged with REQ_OP_DRV_[IN|OUT] instead of
the custom code using the BigMMCHostLock that we have today: those
require that per-request data be obtainable easily from a request after
creating a custom request with e.g.:

struct request *rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
struct mmc_queue_req *mq_rq = req_to_mq_rq(rq);

And this is not possible with the current construction, as the request
is not immediately assigned the per-request state container, but
instead it gets assigned when the request finally enters the MMC queue,
which is way too late for custom requests.
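As a generic illustration of the mechanism described above (this is not
code from the patch; the foo_* names are made up), the block core
co-allocates .cmd_size bytes behind each struct request and hands that
area back via blk_mq_rq_to_pdu():

struct foo_rq_data {			/* hypothetical per-request state */
	int retries;
};

static int foo_init_request(struct request_queue *q, struct request *rq,
			    gfp_t gfp)
{
	struct foo_rq_data *data = blk_mq_rq_to_pdu(rq);

	data->retries = 0;		/* runs once, when the request is allocated */
	return 0;
}

/* At queue setup time, before blk_init_allocated_queue():
 *	q->cmd_size = sizeof(struct foo_rq_data);
 *	q->init_rq_fn = foo_init_request;
 */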
Signed-off-by: Linus Walleij [Ulf: Folded in the fix to drop a call to blk_cleanup_queue()] Signed-off-by: Ulf Hansson Tested-by: Heiner Kallweit --- drivers/mmc/core/block.c | 38 ++------ drivers/mmc/core/queue.c | 220 ++++++++++++----------------------------------- drivers/mmc/core/queue.h | 22 ++--- include/linux/mmc/card.h | 2 - 4 files changed, 78 insertions(+), 204 deletions(-) (limited to 'include') diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 8273b078686d..5f29b5625216 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -129,13 +129,6 @@ static inline int mmc_blk_part_switch(struct mmc_card *card, struct mmc_blk_data *md); static int get_card_status(struct mmc_card *card, u32 *status, int retries); -static void mmc_blk_requeue(struct request_queue *q, struct request *req) -{ - spin_lock_irq(q->queue_lock); - blk_requeue_request(q, req); - spin_unlock_irq(q->queue_lock); -} - static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk) { struct mmc_blk_data *md; @@ -1642,7 +1635,7 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card, if (mmc_card_removed(card)) req->rq_flags |= RQF_QUIET; while (blk_end_request(req, -EIO, blk_rq_cur_bytes(req))); - mmc_queue_req_free(mq, mqrq); + mq->qcnt--; } /** @@ -1662,7 +1655,7 @@ static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req, if (mmc_card_removed(mq->card)) { req->rq_flags |= RQF_QUIET; blk_end_request_all(req, -EIO); - mmc_queue_req_free(mq, mqrq); + mq->qcnt--; /* FIXME: just set to 0? */ return; } /* Else proceed and try to restart the current async request */ @@ -1685,12 +1678,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) bool req_pending = true; if (new_req) { - mqrq_cur = mmc_queue_req_find(mq, new_req); - if (!mqrq_cur) { - WARN_ON(1); - mmc_blk_requeue(mq->queue, new_req); - new_req = NULL; - } + mqrq_cur = req_to_mmc_queue_req(new_req); + mq->qcnt++; } if (!mq->qcnt) @@ -1764,12 +1753,12 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) if (req_pending) mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); else - mmc_queue_req_free(mq, mq_rq); + mq->qcnt--; mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); return; } if (!req_pending) { - mmc_queue_req_free(mq, mq_rq); + mq->qcnt--; mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); return; } @@ -1814,7 +1803,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) req_pending = blk_end_request(old_req, -EIO, brq->data.blksz); if (!req_pending) { - mmc_queue_req_free(mq, mq_rq); + mq->qcnt--; mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); return; } @@ -1844,7 +1833,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) } } while (req_pending); - mmc_queue_req_free(mq, mq_rq); + mq->qcnt--; } void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) @@ -2166,7 +2155,6 @@ static int mmc_blk_probe(struct mmc_card *card) { struct mmc_blk_data *md, *part_md; char cap_str[10]; - int ret; /* * Check that the card supports the command class(es) we need. 
@@ -2176,15 +2164,9 @@ static int mmc_blk_probe(struct mmc_card *card) mmc_fixup_device(card, mmc_blk_fixups); - ret = mmc_queue_alloc_shared_queue(card); - if (ret) - return ret; - md = mmc_blk_alloc(card); - if (IS_ERR(md)) { - mmc_queue_free_shared_queue(card); + if (IS_ERR(md)) return PTR_ERR(md); - } string_get_size((u64)get_capacity(md->disk), 512, STRING_UNITS_2, cap_str, sizeof(cap_str)); @@ -2222,7 +2204,6 @@ static int mmc_blk_probe(struct mmc_card *card) out: mmc_blk_remove_parts(card, md); mmc_blk_remove_req(md); - mmc_queue_free_shared_queue(card); return 0; } @@ -2240,7 +2221,6 @@ static void mmc_blk_remove(struct mmc_card *card) pm_runtime_put_noidle(&card->dev); mmc_blk_remove_req(md); dev_set_drvdata(&card->dev, NULL); - mmc_queue_free_shared_queue(card); } static int _mmc_blk_suspend(struct mmc_card *card) diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 70ba7f94c706..d6c7b4cde4db 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -40,35 +40,6 @@ static int mmc_prep_request(struct request_queue *q, struct request *req) return BLKPREP_OK; } -struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq, - struct request *req) -{ - struct mmc_queue_req *mqrq; - int i = ffz(mq->qslots); - - if (i >= mq->qdepth) - return NULL; - - mqrq = &mq->mqrq[i]; - WARN_ON(mqrq->req || mq->qcnt >= mq->qdepth || - test_bit(mqrq->task_id, &mq->qslots)); - mqrq->req = req; - mq->qcnt += 1; - __set_bit(mqrq->task_id, &mq->qslots); - - return mqrq; -} - -void mmc_queue_req_free(struct mmc_queue *mq, - struct mmc_queue_req *mqrq) -{ - WARN_ON(!mqrq->req || mq->qcnt < 1 || - !test_bit(mqrq->task_id, &mq->qslots)); - mqrq->req = NULL; - mq->qcnt -= 1; - __clear_bit(mqrq->task_id, &mq->qslots); -} - static int mmc_queue_thread(void *d) { struct mmc_queue *mq = d; @@ -149,11 +120,11 @@ static void mmc_request_fn(struct request_queue *q) wake_up_process(mq->thread); } -static struct scatterlist *mmc_alloc_sg(int sg_len) +static struct scatterlist *mmc_alloc_sg(int sg_len, gfp_t gfp) { struct scatterlist *sg; - sg = kmalloc_array(sg_len, sizeof(*sg), GFP_KERNEL); + sg = kmalloc_array(sg_len, sizeof(*sg), gfp); if (sg) sg_init_table(sg, sg_len); @@ -179,80 +150,6 @@ static void mmc_queue_setup_discard(struct request_queue *q, queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q); } -static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq) -{ - kfree(mqrq->bounce_sg); - mqrq->bounce_sg = NULL; - - kfree(mqrq->sg); - mqrq->sg = NULL; - - kfree(mqrq->bounce_buf); - mqrq->bounce_buf = NULL; -} - -static void mmc_queue_reqs_free_bufs(struct mmc_queue_req *mqrq, int qdepth) -{ - int i; - - for (i = 0; i < qdepth; i++) - mmc_queue_req_free_bufs(&mqrq[i]); -} - -static void mmc_queue_free_mqrqs(struct mmc_queue_req *mqrq, int qdepth) -{ - mmc_queue_reqs_free_bufs(mqrq, qdepth); - kfree(mqrq); -} - -static struct mmc_queue_req *mmc_queue_alloc_mqrqs(int qdepth) -{ - struct mmc_queue_req *mqrq; - int i; - - mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL); - if (mqrq) { - for (i = 0; i < qdepth; i++) - mqrq[i].task_id = i; - } - - return mqrq; -} - -static int mmc_queue_alloc_bounce_bufs(struct mmc_queue_req *mqrq, int qdepth, - unsigned int bouncesz) -{ - int i; - - for (i = 0; i < qdepth; i++) { - mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL); - if (!mqrq[i].bounce_buf) - return -ENOMEM; - - mqrq[i].sg = mmc_alloc_sg(1); - if (!mqrq[i].sg) - return -ENOMEM; - - mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512); - if (!mqrq[i].bounce_sg) - return -ENOMEM; - } - 
- return 0; -} - -static bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq, int qdepth, - unsigned int bouncesz) -{ - int ret; - - ret = mmc_queue_alloc_bounce_bufs(mqrq, qdepth, bouncesz); - if (ret) - mmc_queue_reqs_free_bufs(mqrq, qdepth); - - return !ret; -} - static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host) { unsigned int bouncesz = MMC_QUEUE_BOUNCESZ; @@ -273,71 +170,61 @@ static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host) return bouncesz; } -static int mmc_queue_alloc_sgs(struct mmc_queue_req *mqrq, int qdepth, - int max_segs) +/** + * mmc_init_request() - initialize the MMC-specific per-request data + * @q: the request queue + * @req: the request + * @gfp: memory allocation policy + */ +static int mmc_init_request(struct request_queue *q, struct request *req, + gfp_t gfp) { - int i; + struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req); + struct mmc_queue *mq = q->queuedata; + struct mmc_card *card = mq->card; + struct mmc_host *host = card->host; - for (i = 0; i < qdepth; i++) { - mqrq[i].sg = mmc_alloc_sg(max_segs); - if (!mqrq[i].sg) + mq_rq->req = req; + + if (card->bouncesz) { + mq_rq->bounce_buf = kmalloc(card->bouncesz, gfp); + if (!mq_rq->bounce_buf) + return -ENOMEM; + if (card->bouncesz > 512) { + mq_rq->sg = mmc_alloc_sg(1, gfp); + if (!mq_rq->sg) + return -ENOMEM; + mq_rq->bounce_sg = mmc_alloc_sg(card->bouncesz / 512, + gfp); + if (!mq_rq->bounce_sg) + return -ENOMEM; + } + } else { + mq_rq->bounce_buf = NULL; + mq_rq->bounce_sg = NULL; + mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp); + if (!mq_rq->sg) return -ENOMEM; } return 0; } -void mmc_queue_free_shared_queue(struct mmc_card *card) -{ - if (card->mqrq) { - mmc_queue_free_mqrqs(card->mqrq, card->qdepth); - card->mqrq = NULL; - } -} - -static int __mmc_queue_alloc_shared_queue(struct mmc_card *card, int qdepth) +static void mmc_exit_request(struct request_queue *q, struct request *req) { - struct mmc_host *host = card->host; - struct mmc_queue_req *mqrq; - unsigned int bouncesz; - int ret = 0; - - if (card->mqrq) - return -EINVAL; + struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req); - mqrq = mmc_queue_alloc_mqrqs(qdepth); - if (!mqrq) - return -ENOMEM; - - card->mqrq = mqrq; - card->qdepth = qdepth; + /* It is OK to kfree(NULL) so this will be smooth */ + kfree(mq_rq->bounce_sg); + mq_rq->bounce_sg = NULL; - bouncesz = mmc_queue_calc_bouncesz(host); + kfree(mq_rq->bounce_buf); + mq_rq->bounce_buf = NULL; - if (bouncesz && !mmc_queue_alloc_bounce(mqrq, qdepth, bouncesz)) { - bouncesz = 0; - pr_warn("%s: unable to allocate bounce buffers\n", - mmc_card_name(card)); - } + kfree(mq_rq->sg); + mq_rq->sg = NULL; - card->bouncesz = bouncesz; - - if (!bouncesz) { - ret = mmc_queue_alloc_sgs(mqrq, qdepth, host->max_segs); - if (ret) - goto out_err; - } - - return ret; - -out_err: - mmc_queue_free_shared_queue(card); - return ret; -} - -int mmc_queue_alloc_shared_queue(struct mmc_card *card) -{ - return __mmc_queue_alloc_shared_queue(card, 2); + mq_rq->req = NULL; } /** @@ -360,13 +247,21 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT; mq->card = card; - mq->queue = blk_init_queue(mmc_request_fn, lock); + mq->queue = blk_alloc_queue(GFP_KERNEL); if (!mq->queue) return -ENOMEM; - - mq->mqrq = card->mqrq; - mq->qdepth = card->qdepth; + mq->queue->queue_lock = lock; + mq->queue->request_fn = mmc_request_fn; + mq->queue->init_rq_fn = mmc_init_request; + mq->queue->exit_rq_fn = mmc_exit_request; + 
mq->queue->cmd_size = sizeof(struct mmc_queue_req); mq->queue->queuedata = mq; + mq->qcnt = 0; + ret = blk_init_allocated_queue(mq->queue); + if (ret) { + blk_cleanup_queue(mq->queue); + return ret; + } blk_queue_prep_rq(mq->queue, mmc_prep_request); queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue); @@ -374,6 +269,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, if (mmc_can_erase(card)) mmc_queue_setup_discard(mq->queue, card); + card->bouncesz = mmc_queue_calc_bouncesz(host); if (card->bouncesz) { blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY); blk_queue_max_hw_sectors(mq->queue, card->bouncesz / 512); @@ -400,7 +296,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, return 0; cleanup_queue: - mq->mqrq = NULL; blk_cleanup_queue(mq->queue); return ret; } @@ -422,7 +317,6 @@ void mmc_cleanup_queue(struct mmc_queue *mq) blk_start_queue(q); spin_unlock_irqrestore(q->queue_lock, flags); - mq->mqrq = NULL; mq->card = NULL; } EXPORT_SYMBOL(mmc_cleanup_queue); diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h index 871796c3f406..dae31bc0c2d3 100644 --- a/drivers/mmc/core/queue.h +++ b/drivers/mmc/core/queue.h @@ -3,9 +3,15 @@ #include #include +#include #include #include +static inline struct mmc_queue_req *req_to_mmc_queue_req(struct request *rq) +{ + return blk_mq_rq_to_pdu(rq); +} + static inline bool mmc_req_is_special(struct request *req) { return req && @@ -34,7 +40,6 @@ struct mmc_queue_req { struct scatterlist *bounce_sg; unsigned int bounce_sg_len; struct mmc_async_req areq; - int task_id; }; struct mmc_queue { @@ -45,14 +50,15 @@ struct mmc_queue { bool asleep; struct mmc_blk_data *blkdata; struct request_queue *queue; - struct mmc_queue_req *mqrq; - int qdepth; + /* + * FIXME: this counter is not a very reliable way of keeping + * track of how many requests that are ongoing. Switch to just + * letting the block core keep track of requests and per-request + * associated mmc_queue_req data. + */ int qcnt; - unsigned long qslots; }; -extern int mmc_queue_alloc_shared_queue(struct mmc_card *card); -extern void mmc_queue_free_shared_queue(struct mmc_card *card); extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *, const char *); extern void mmc_cleanup_queue(struct mmc_queue *); @@ -66,8 +72,4 @@ extern void mmc_queue_bounce_post(struct mmc_queue_req *); extern int mmc_access_rpmb(struct mmc_queue *); -extern struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *, - struct request *); -extern void mmc_queue_req_free(struct mmc_queue *, struct mmc_queue_req *); - #endif diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h index aad015e0152b..46c73e97e61f 100644 --- a/include/linux/mmc/card.h +++ b/include/linux/mmc/card.h @@ -305,9 +305,7 @@ struct mmc_card { struct mmc_part part[MMC_NUM_PHY_PARTITION]; /* physical partitions */ unsigned int nr_parts; - struct mmc_queue_req *mqrq; /* Shared queue structure */ unsigned int bouncesz; /* Bounce buffer size */ - int qdepth; /* Shared queue depth */ }; static inline bool mmc_large_sector(struct mmc_card *card) -- cgit v1.2.3 From d63c2bf49c0de83e88153da3af9970f68c633257 Mon Sep 17 00:00:00 2001 From: Wolfram Sang Date: Sun, 28 May 2017 11:30:47 +0200 Subject: mmc: use proper name for the R-Car SoC It is 'R-Car', not 'RCar'. No code or binding changes, only descriptive text. 
Signed-off-by: Wolfram Sang
Acked-by: Geert Uytterhoeven
Signed-off-by: Ulf Hansson
---
 drivers/mmc/host/renesas_sdhi_core.c | 2 +-
 include/linux/mfd/tmio.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

(limited to 'include')

diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
index fa6c188e0327..82150a966391 100644
--- a/drivers/mmc/host/renesas_sdhi_core.c
+++ b/drivers/mmc/host/renesas_sdhi_core.c
@@ -130,7 +130,7 @@ static unsigned int renesas_sdhi_clk_update(struct tmio_mmc_host *host,
 	unsigned int freq, diff, best_freq = 0, diff_min = ~0;
 	int i, ret;
 
-	/* tested only on RCar Gen2+ currently; may work for others */
+	/* tested only on R-Car Gen2+ currently; may work for others */
 	if (!(host->pdata->flags & TMIO_MMC_MIN_RCAR2))
 		return clk_get_rate(priv->clk);
 
diff --git a/include/linux/mfd/tmio.h b/include/linux/mfd/tmio.h
index a1520d88ebf3..c83c16b931a8 100644
--- a/include/linux/mfd/tmio.h
+++ b/include/linux/mfd/tmio.h
@@ -66,7 +66,7 @@
  */
 #define TMIO_MMC_SDIO_IRQ		(1 << 2)
 
-/* Some features are only available or tested on RCar Gen2 or later */
+/* Some features are only available or tested on R-Car Gen2 or later */
 #define TMIO_MMC_MIN_RCAR2		(1 << 3)
 
 /*
-- cgit v1.2.3

From d2a47176a877b1eccd3086a4c8d790d644d594cb Mon Sep 17 00:00:00 2001
From: Ulf Hansson
Date: Thu, 8 Jun 2017 15:23:08 +0200
Subject: mmc: core: Remove MMC_CAP2_HC_ERASE_SZ

The MMC_CAP2_HC_ERASE_SZ is used by only a few mmc host drivers. Its
intent is to enable eMMC's high-capacity erase size, so as to improve
the behaviour of the erase operations.

We should strive to avoid software configuration options that aren't
necessary, but instead deploy common behaviours. For these reasons,
let's remove the capability bit for MMC_CAP2_HC_ERASE_SZ and make it the
default behaviour.

Note that this change doesn't affect eMMCs supporting trim/discard,
because these commands operate on sectors and take precedence over erase
commands.

Signed-off-by: Ulf Hansson
Acked-by: Adrian Hunter
Reviewed-by: Linus Walleij
Reviewed-by: Shawn Lin
Tested-by: Shawn Lin
---
 drivers/mmc/core/mmc.c | 8 ++------
 drivers/mmc/host/sdhci-acpi.c | 1 -
 drivers/mmc/host/sdhci-brcmstb.c | 3 ---
 drivers/mmc/host/sdhci-pci-core.c | 4 +---
 include/linux/mmc/host.h | 1 -
 5 files changed, 3 insertions(+), 14 deletions(-)

(limited to 'include')

diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index e504b66bd41c..4ffea14b7eb6 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -1651,12 +1651,8 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
 		mmc_set_erase_size(card);
 	}
 
-	/*
-	 * If enhanced_area_en is TRUE, host needs to enable ERASE_GRP_DEF
-	 * bit. This bit will be lost every time after a reset or power off.
-	 */
-	if (card->ext_csd.partition_setting_completed ||
-	    (card->ext_csd.rev >= 3 && (host->caps2 & MMC_CAP2_HC_ERASE_SZ))) {
+	/* Enable ERASE_GRP_DEF. This bit is lost after a reset or power off.
*/ + if (card->ext_csd.rev >= 3) { err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_ERASE_GROUP_DEF, 1, card->ext_csd.generic_cmd6_time); diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c index 89d9a8c014f5..cf66a3db71b8 100644 --- a/drivers/mmc/host/sdhci-acpi.c +++ b/drivers/mmc/host/sdhci-acpi.c @@ -274,7 +274,6 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_emmc = { .caps = MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE | MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR | MMC_CAP_CMD_DURING_TFR | MMC_CAP_WAIT_WHILE_BUSY, - .caps2 = MMC_CAP2_HC_ERASE_SZ, .flags = SDHCI_ACPI_RUNTIME_PM, .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | diff --git a/drivers/mmc/host/sdhci-brcmstb.c b/drivers/mmc/host/sdhci-brcmstb.c index 242c5dc7a81e..e2f638338e8f 100644 --- a/drivers/mmc/host/sdhci-brcmstb.c +++ b/drivers/mmc/host/sdhci-brcmstb.c @@ -89,9 +89,6 @@ static int sdhci_brcmstb_probe(struct platform_device *pdev) goto err_clk; } - /* Enable MMC_CAP2_HC_ERASE_SZ for better max discard calculations */ - host->mmc->caps2 |= MMC_CAP2_HC_ERASE_SZ; - sdhci_get_of_property(pdev); mmc_of_parse(host->mmc); diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c index 8fa84a013be4..227a5cb4b3bf 100644 --- a/drivers/mmc/host/sdhci-pci-core.c +++ b/drivers/mmc/host/sdhci-pci-core.c @@ -347,8 +347,7 @@ static inline void sdhci_pci_remove_own_cd(struct sdhci_pci_slot *slot) static int mfd_emmc_probe_slot(struct sdhci_pci_slot *slot) { slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE; - slot->host->mmc->caps2 |= MMC_CAP2_BOOTPART_NOACC | - MMC_CAP2_HC_ERASE_SZ; + slot->host->mmc->caps2 |= MMC_CAP2_BOOTPART_NOACC; return 0; } @@ -587,7 +586,6 @@ static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot) MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR | MMC_CAP_CMD_DURING_TFR | MMC_CAP_WAIT_WHILE_BUSY; - slot->host->mmc->caps2 |= MMC_CAP2_HC_ERASE_SZ; slot->hw_reset = sdhci_pci_int_hw_reset; if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BSW_EMMC) slot->host->timeout_clk = 1000; /* 1000 kHz i.e. 1 MHz */ diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index 9209f95a5106..c81380a2181f 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -287,7 +287,6 @@ struct mmc_host { #define MMC_CAP2_HS200_1_2V_SDR (1 << 6) /* can support */ #define MMC_CAP2_HS200 (MMC_CAP2_HS200_1_8V_SDR | \ MMC_CAP2_HS200_1_2V_SDR) -#define MMC_CAP2_HC_ERASE_SZ (1 << 9) /* High-capacity erase size */ #define MMC_CAP2_CD_ACTIVE_HIGH (1 << 10) /* Card-detect signal active high */ #define MMC_CAP2_RO_ACTIVE_HIGH (1 << 11) /* Write-protect signal active high */ #define MMC_CAP2_PACKED_RD (1 << 12) /* Allow packed read */ -- cgit v1.2.3 From 03dbaa04a2e5bac0ae907a9ed31472bc4bb56fd3 Mon Sep 17 00:00:00 2001 From: Adrian Hunter Date: Tue, 13 Jun 2017 15:07:51 +0300 Subject: mmc: slot-gpio: Add support to enable irq wake on cd_irq Add host capability MMC_CAP_CD_WAKE to enable irq wake on the card detect irq. 
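A minimal usage sketch (not part of this patch): a host driver that wants
its card-detect GPIO to act as a wakeup source only needs to set the new
capability before the CD IRQ is requested; the slot-gpio change below
then arms the IRQ for wakeup via enable_irq_wake().

	/* in the host driver's probe path, before mmc_add_host() */
	mmc->caps |= MMC_CAP_CD_WAKE;
	ret = mmc_gpiod_request_cd(mmc, "cd", 0, false, 0, NULL);
	if (ret)
		return ret;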
Signed-off-by: Adrian Hunter Signed-off-by: Ulf Hansson --- drivers/mmc/core/core.c | 5 ++++- drivers/mmc/core/slot-gpio.c | 2 ++ include/linux/mmc/host.h | 2 ++ 3 files changed, 8 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index d40697fae911..26431267a3e2 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -2652,8 +2652,11 @@ void mmc_stop_host(struct mmc_host *host) host->removed = 1; spin_unlock_irqrestore(&host->lock, flags); #endif - if (host->slot.cd_irq >= 0) + if (host->slot.cd_irq >= 0) { + if (host->slot.cd_wake_enabled) + disable_irq_wake(host->slot.cd_irq); disable_irq(host->slot.cd_irq); + } host->rescan_disable = 1; cancel_delayed_work_sync(&host->detect); diff --git a/drivers/mmc/core/slot-gpio.c b/drivers/mmc/core/slot-gpio.c index a8450a8701e4..863f1dbbfc1b 100644 --- a/drivers/mmc/core/slot-gpio.c +++ b/drivers/mmc/core/slot-gpio.c @@ -151,6 +151,8 @@ void mmc_gpiod_request_cd_irq(struct mmc_host *host) if (irq < 0) host->caps |= MMC_CAP_NEEDS_POLL; + else if ((host->caps & MMC_CAP_CD_WAKE) && !enable_irq_wake(irq)) + host->slot.cd_wake_enabled = true; } EXPORT_SYMBOL(mmc_gpiod_request_cd_irq); diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index c81380a2181f..ebd1cebbef0c 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -185,6 +185,7 @@ struct mmc_async_req { */ struct mmc_slot { int cd_irq; + bool cd_wake_enabled; void *handler_priv; }; @@ -275,6 +276,7 @@ struct mmc_host { #define MMC_CAP_DRIVER_TYPE_A (1 << 23) /* Host supports Driver Type A */ #define MMC_CAP_DRIVER_TYPE_C (1 << 24) /* Host supports Driver Type C */ #define MMC_CAP_DRIVER_TYPE_D (1 << 25) /* Host supports Driver Type D */ +#define MMC_CAP_CD_WAKE (1 << 28) /* Enable card detect wake */ #define MMC_CAP_CMD_DURING_TFR (1 << 29) /* Commands during data transfer */ #define MMC_CAP_CMD23 (1 << 30) /* CMD23 supported. */ #define MMC_CAP_HW_RESET (1 << 31) /* Hardware reset */ -- cgit v1.2.3 From f2218db81548544bf7349911546a94bfaabbd697 Mon Sep 17 00:00:00 2001 From: Simon Horman Date: Fri, 16 Jun 2017 18:11:03 +0200 Subject: mmc: tmio: improve checkpatch cleanness Trivial updates to improve checkpatch cleanness. 
Signed-off-by: Simon Horman Reviewed-by: Wolfram Sang Tested-by: Wolfram Sang Signed-off-by: Ulf Hansson --- drivers/mmc/host/renesas_sdhi.h | 2 +- drivers/mmc/host/tmio_mmc.c | 8 ++--- drivers/mmc/host/tmio_mmc.h | 17 ++++++----- drivers/mmc/host/tmio_mmc_core.c | 66 +++++++++++++++++++++------------------- include/linux/mfd/tmio.h | 33 ++++++++++---------- 5 files changed, 66 insertions(+), 60 deletions(-) (limited to 'include') diff --git a/drivers/mmc/host/renesas_sdhi.h b/drivers/mmc/host/renesas_sdhi.h index eb3ea15ff92d..6b4a79508c6b 100644 --- a/drivers/mmc/host/renesas_sdhi.h +++ b/drivers/mmc/host/renesas_sdhi.h @@ -27,7 +27,7 @@ struct renesas_sdhi_of_data { unsigned long capabilities2; enum dma_slave_buswidth dma_buswidth; dma_addr_t dma_rx_offset; - unsigned bus_shift; + unsigned int bus_shift; int scc_offset; struct renesas_sdhi_scc *taps; int taps_num; diff --git a/drivers/mmc/host/tmio_mmc.c b/drivers/mmc/host/tmio_mmc.c index 61cf36fb270b..64b7e9f18361 100644 --- a/drivers/mmc/host/tmio_mmc.c +++ b/drivers/mmc/host/tmio_mmc.c @@ -104,8 +104,8 @@ static int tmio_mmc_probe(struct platform_device *pdev) goto host_free; ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_irq, - IRQF_TRIGGER_FALLING, - dev_name(&pdev->dev), host); + IRQF_TRIGGER_FALLING, + dev_name(&pdev->dev), host); if (ret) goto host_remove; @@ -132,6 +132,7 @@ static int tmio_mmc_remove(struct platform_device *pdev) if (mmc) { struct tmio_mmc_host *host = mmc_priv(mmc); + tmio_mmc_host_remove(host); if (cell->disable) cell->disable(pdev); @@ -145,8 +146,7 @@ static int tmio_mmc_remove(struct platform_device *pdev) static const struct dev_pm_ops tmio_mmc_dev_pm_ops = { SET_SYSTEM_SLEEP_PM_OPS(tmio_mmc_suspend, tmio_mmc_resume) SET_RUNTIME_PM_OPS(tmio_mmc_host_runtime_suspend, - tmio_mmc_host_runtime_resume, - NULL) + tmio_mmc_host_runtime_resume, NULL) }; static struct platform_driver tmio_mmc_driver = { diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h index 768c8abaedda..6ad6704175dc 100644 --- a/drivers/mmc/host/tmio_mmc.h +++ b/drivers/mmc/host/tmio_mmc.h @@ -239,24 +239,26 @@ static inline u16 sd_ctrl_read16(struct tmio_mmc_host *host, int addr) } static inline void sd_ctrl_read16_rep(struct tmio_mmc_host *host, int addr, - u16 *buf, int count) + u16 *buf, int count) { readsw(host->ctl + (addr << host->bus_shift), buf, count); } -static inline u32 sd_ctrl_read16_and_16_as_32(struct tmio_mmc_host *host, int addr) +static inline u32 sd_ctrl_read16_and_16_as_32(struct tmio_mmc_host *host, + int addr) { return readw(host->ctl + (addr << host->bus_shift)) | readw(host->ctl + ((addr + 2) << host->bus_shift)) << 16; } static inline void sd_ctrl_read32_rep(struct tmio_mmc_host *host, int addr, - u32 *buf, int count) + u32 *buf, int count) { readsl(host->ctl + (addr << host->bus_shift), buf, count); } -static inline void sd_ctrl_write16(struct tmio_mmc_host *host, int addr, u16 val) +static inline void sd_ctrl_write16(struct tmio_mmc_host *host, int addr, + u16 val) { /* If there is a hook and it returns non-zero then there * is an error and the write should be skipped @@ -267,19 +269,20 @@ static inline void sd_ctrl_write16(struct tmio_mmc_host *host, int addr, u16 val } static inline void sd_ctrl_write16_rep(struct tmio_mmc_host *host, int addr, - u16 *buf, int count) + u16 *buf, int count) { writesw(host->ctl + (addr << host->bus_shift), buf, count); } -static inline void sd_ctrl_write32_as_16_and_16(struct tmio_mmc_host *host, int addr, u32 val) +static inline void 
sd_ctrl_write32_as_16_and_16(struct tmio_mmc_host *host, + int addr, u32 val) { writew(val & 0xffff, host->ctl + (addr << host->bus_shift)); writew(val >> 16, host->ctl + ((addr + 2) << host->bus_shift)); } static inline void sd_ctrl_write32_rep(struct tmio_mmc_host *host, int addr, - const u32 *buf, int count) + const u32 *buf, int count) { writesl(host->ctl + (addr << host->bus_shift), buf, count); } diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c index fbe38464e7d7..82b80d42f7ae 100644 --- a/drivers/mmc/host/tmio_mmc_core.c +++ b/drivers/mmc/host/tmio_mmc_core.c @@ -127,16 +127,17 @@ static int tmio_mmc_next_sg(struct tmio_mmc_host *host) #define STATUS_TO_TEXT(a, status, i) \ do { \ - if (status & TMIO_STAT_##a) { \ - if (i++) \ - printk(" | "); \ - printk(#a); \ + if ((status) & TMIO_STAT_##a) { \ + if ((i)++) \ + printk(KERN_DEBUG " | "); \ + printk(KERN_DEBUG #a); \ } \ } while (0) static void pr_debug_status(u32 status) { int i = 0; + pr_debug("status: %08x = ", status); STATUS_TO_TEXT(CARD_REMOVE, status, i); STATUS_TO_TEXT(CARD_INSERT, status, i); @@ -177,8 +178,7 @@ static void tmio_mmc_enable_sdio_irq(struct mmc_host *mmc, int enable) pm_runtime_get_sync(mmc_dev(mmc)); host->sdio_irq_enabled = true; - host->sdio_irq_mask = TMIO_SDIO_MASK_ALL & - ~TMIO_SDIO_STAT_IOIRQ; + host->sdio_irq_mask = TMIO_SDIO_MASK_ALL & ~TMIO_SDIO_STAT_IOIRQ; /* Clear obsolete interrupts before enabling */ sdio_status = sd_ctrl_read16(host, CTL_SDIO_STATUS) & ~TMIO_SDIO_MASK_ALL; @@ -222,7 +222,7 @@ static void tmio_mmc_clk_stop(struct tmio_mmc_host *host) } static void tmio_mmc_set_clock(struct tmio_mmc_host *host, - unsigned int new_clock) + unsigned int new_clock) { u32 clk = 0, clock; @@ -289,16 +289,16 @@ static void tmio_mmc_reset_work(struct work_struct *work) * cancel_delayed_work(), it can happen, that a .set_ios() call preempts * us, so, have to check for IS_ERR(host->mrq) */ - if (IS_ERR_OR_NULL(mrq) - || time_is_after_jiffies(host->last_req_ts + - msecs_to_jiffies(CMDREQ_TIMEOUT))) { + if (IS_ERR_OR_NULL(mrq) || + time_is_after_jiffies(host->last_req_ts + + msecs_to_jiffies(CMDREQ_TIMEOUT))) { spin_unlock_irqrestore(&host->lock, flags); return; } dev_warn(&host->pdev->dev, - "timeout waiting for hardware interrupt (CMD%u)\n", - mrq->cmd->opcode); + "timeout waiting for hardware interrupt (CMD%u)\n", + mrq->cmd->opcode); if (host->data) host->data->error = -ETIMEDOUT; @@ -336,7 +336,8 @@ static void tmio_mmc_reset_work(struct work_struct *work) #define SECURITY_CMD 0x4000 #define NO_CMD12_ISSUE 0x4000 /* TMIO_MMC_HAVE_CMD12_CTRL */ -static int tmio_mmc_start_command(struct tmio_mmc_host *host, struct mmc_command *cmd) +static int tmio_mmc_start_command(struct tmio_mmc_host *host, + struct mmc_command *cmd) { struct mmc_data *data = host->data; int c = cmd->opcode; @@ -375,8 +376,8 @@ static int tmio_mmc_start_command(struct tmio_mmc_host *host, struct mmc_command c |= TRANSFER_MULTI; /* - * Disable auto CMD12 at IO_RW_EXTENDED and SET_BLOCK_COUNT - * when doing multiple block transfer + * Disable auto CMD12 at IO_RW_EXTENDED and + * SET_BLOCK_COUNT when doing multiple block transfer */ if ((host->pdata->flags & TMIO_MMC_HAVE_CMD12_CTRL) && (cmd->opcode == SD_IO_RW_EXTENDED || host->mrq->sbc)) @@ -501,8 +502,6 @@ static void tmio_mmc_pio_irq(struct tmio_mmc_host *host) if (host->sg_off == host->sg_ptr->length) tmio_mmc_next_sg(host); - - return; } static void tmio_mmc_check_bounce_buffer(struct tmio_mmc_host *host) @@ -510,6 +509,7 @@ static void 
tmio_mmc_check_bounce_buffer(struct tmio_mmc_host *host) if (host->sg_ptr == &host->bounce_sg) { unsigned long flags; void *sg_vaddr = tmio_mmc_kmap_atomic(host->sg_orig, &flags); + memcpy(sg_vaddr, host->bounce_buf, host->bounce_sg.length); tmio_mmc_kunmap_atomic(host->sg_orig, &flags, sg_vaddr); } @@ -574,6 +574,7 @@ EXPORT_SYMBOL_GPL(tmio_mmc_do_data_irq); static void tmio_mmc_data_irq(struct tmio_mmc_host *host, unsigned int stat) { struct mmc_data *data; + spin_lock(&host->lock); data = host->data; @@ -618,8 +619,7 @@ out: spin_unlock(&host->lock); } -static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host, - unsigned int stat) +static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host, unsigned int stat) { struct mmc_command *cmd = host->cmd; int i, addr; @@ -680,7 +680,7 @@ out: } static bool __tmio_mmc_card_detect_irq(struct tmio_mmc_host *host, - int ireg, int status) + int ireg, int status) { struct mmc_host *mmc = host->mmc; @@ -698,14 +698,13 @@ static bool __tmio_mmc_card_detect_irq(struct tmio_mmc_host *host, return false; } -static bool __tmio_mmc_sdcard_irq(struct tmio_mmc_host *host, - int ireg, int status) +static bool __tmio_mmc_sdcard_irq(struct tmio_mmc_host *host, int ireg, + int status) { /* Command completion */ if (ireg & (TMIO_STAT_CMDRESPEND | TMIO_STAT_CMDTIMEOUT)) { - tmio_mmc_ack_mmc_irqs(host, - TMIO_STAT_CMDRESPEND | - TMIO_STAT_CMDTIMEOUT); + tmio_mmc_ack_mmc_irqs(host, TMIO_STAT_CMDRESPEND | + TMIO_STAT_CMDTIMEOUT); tmio_mmc_cmd_irq(host, status); return true; } @@ -776,7 +775,7 @@ irqreturn_t tmio_mmc_irq(int irq, void *devid) EXPORT_SYMBOL_GPL(tmio_mmc_irq); static int tmio_mmc_start_data(struct tmio_mmc_host *host, - struct mmc_data *data) + struct mmc_data *data) { struct tmio_mmc_data *pdata = host->pdata; @@ -831,7 +830,7 @@ static int tmio_mmc_execute_tuning(struct mmc_host *mmc, u32 opcode) if (host->tap_num * 2 >= sizeof(host->taps) * BITS_PER_BYTE) { dev_warn_once(&host->pdev->dev, - "Too many taps, skipping tuning. Please consider updating size of taps field of tmio_mmc_host\n"); + "Too many taps, skipping tuning. 
Please consider updating size of taps field of tmio_mmc_host\n"); goto out; } @@ -862,7 +861,8 @@ out: return ret; } -static void tmio_process_mrq(struct tmio_mmc_host *host, struct mmc_request *mrq) +static void tmio_process_mrq(struct tmio_mmc_host *host, + struct mmc_request *mrq) { struct mmc_command *cmd; int ret; @@ -1030,7 +1030,7 @@ static void tmio_mmc_power_off(struct tmio_mmc_host *host) } static void tmio_mmc_set_bus_width(struct tmio_mmc_host *host, - unsigned char bus_width) + unsigned char bus_width) { u16 reg = sd_ctrl_read16(host, CTL_SD_MEM_CARD_OPT) & ~(CARD_OPT_WIDTH | CARD_OPT_WIDTH8); @@ -1070,7 +1070,8 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) dev_dbg(dev, "%s.%d: CMD%u active since %lu, now %lu!\n", current->comm, task_pid_nr(current), - host->mrq->cmd->opcode, host->last_req_ts, jiffies); + host->mrq->cmd->opcode, host->last_req_ts, + jiffies); } spin_unlock_irqrestore(&host->lock, flags); @@ -1117,6 +1118,7 @@ static int tmio_mmc_get_ro(struct mmc_host *mmc) struct tmio_mmc_host *host = mmc_priv(mmc); struct tmio_mmc_data *pdata = host->pdata; int ret = mmc_gpio_get_ro(mmc); + if (ret >= 0) return ret; @@ -1173,6 +1175,7 @@ static void tmio_mmc_of_parse(struct platform_device *pdev, struct tmio_mmc_data *pdata) { const struct device_node *np = pdev->dev.of_node; + if (!np) return; @@ -1243,7 +1246,8 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host, return -ENOMEM; tmio_mmc_ops.card_busy = _host->card_busy; - tmio_mmc_ops.start_signal_voltage_switch = _host->start_signal_voltage_switch; + tmio_mmc_ops.start_signal_voltage_switch = + _host->start_signal_voltage_switch; mmc->ops = &tmio_mmc_ops; mmc->caps |= MMC_CAP_4_BIT_DATA | pdata->capabilities; diff --git a/include/linux/mfd/tmio.h b/include/linux/mfd/tmio.h index c83c16b931a8..26e8f8c0a6db 100644 --- a/include/linux/mfd/tmio.h +++ b/include/linux/mfd/tmio.h @@ -13,15 +13,15 @@ #define tmio_ioread16(addr) readw(addr) #define tmio_ioread16_rep(r, b, l) readsw(r, b, l) #define tmio_ioread32(addr) \ - (((u32) readw((addr))) | (((u32) readw((addr) + 2)) << 16)) + (((u32)readw((addr))) | (((u32)readw((addr) + 2)) << 16)) #define tmio_iowrite8(val, addr) writeb((val), (addr)) #define tmio_iowrite16(val, addr) writew((val), (addr)) #define tmio_iowrite16_rep(r, b, l) writesw(r, b, l) #define tmio_iowrite32(val, addr) \ do { \ - writew((val), (addr)); \ - writew((val) >> 16, (addr) + 2); \ + writew((val), (addr)); \ + writew((val) >> 16, (addr) + 2); \ } while (0) #define CNF_CMD 0x04 @@ -55,57 +55,57 @@ } while (0) /* tmio MMC platform flags */ -#define TMIO_MMC_WRPROTECT_DISABLE (1 << 0) +#define TMIO_MMC_WRPROTECT_DISABLE BIT(0) /* * Some controllers can support a 2-byte block size when the bus width * is configured in 4-bit mode. */ -#define TMIO_MMC_BLKSZ_2BYTES (1 << 1) +#define TMIO_MMC_BLKSZ_2BYTES BIT(1) /* * Some controllers can support SDIO IRQ signalling. */ -#define TMIO_MMC_SDIO_IRQ (1 << 2) +#define TMIO_MMC_SDIO_IRQ BIT(2) /* Some features are only available or tested on R-Car Gen2 or later */ -#define TMIO_MMC_MIN_RCAR2 (1 << 3) +#define TMIO_MMC_MIN_RCAR2 BIT(3) /* * Some controllers require waiting for the SD bus to become * idle before writing to some registers. */ -#define TMIO_MMC_HAS_IDLE_WAIT (1 << 4) +#define TMIO_MMC_HAS_IDLE_WAIT BIT(4) /* * A GPIO is used for card hotplug detection. 
We need an extra flag for this, * because 0 is a valid GPIO number too, and requiring users to specify * cd_gpio < 0 to disable GPIO hotplug would break backwards compatibility. */ -#define TMIO_MMC_USE_GPIO_CD (1 << 5) +#define TMIO_MMC_USE_GPIO_CD BIT(5) /* * Some controllers doesn't have over 0x100 register. * it is used to checking accessibility of * CTL_SD_CARD_CLK_CTL / CTL_CLK_AND_WAIT_CTL */ -#define TMIO_MMC_HAVE_HIGH_REG (1 << 6) +#define TMIO_MMC_HAVE_HIGH_REG BIT(6) /* * Some controllers have CMD12 automatically * issue/non-issue register */ -#define TMIO_MMC_HAVE_CMD12_CTRL (1 << 7) +#define TMIO_MMC_HAVE_CMD12_CTRL BIT(7) /* Controller has some SDIO status bits which must be 1 */ -#define TMIO_MMC_SDIO_STATUS_SETBITS (1 << 8) +#define TMIO_MMC_SDIO_STATUS_SETBITS BIT(8) /* * Some controllers have a 32-bit wide data port register */ -#define TMIO_MMC_32BIT_DATA_PORT (1 << 9) +#define TMIO_MMC_32BIT_DATA_PORT BIT(9) /* * Some controllers allows to set SDx actual clock */ -#define TMIO_MMC_CLK_ACTUAL (1 << 10) +#define TMIO_MMC_CLK_ACTUAL BIT(10) int tmio_core_mmc_enable(void __iomem *cnf, int shift, unsigned long base); int tmio_core_mmc_resume(void __iomem *cnf, int shift, unsigned long base); @@ -146,9 +146,9 @@ struct tmio_nand_data { struct tmio_fb_data { int (*lcd_set_power)(struct platform_device *fb_dev, - bool on); + bool on); int (*lcd_mode)(struct platform_device *fb_dev, - const struct fb_videomode *mode); + const struct fb_videomode *mode); int num_modes; struct fb_videomode *modes; @@ -157,5 +157,4 @@ struct tmio_fb_data { int width; }; - #endif -- cgit v1.2.3