author     Sascha Hauer <s.hauer@pengutronix.de>       2019-05-21 09:06:43 +0200
committer  Miquel Raynal <miquel.raynal@bootlin.com>   2019-06-27 20:05:30 +0200
commit     ef347c0cfd619a9251e5a2f9ff72e33650a9bccb (patch)
tree       56fc6f1c9fd9763d53c57dd97f64ab295fc54c69 /drivers/dma
parent     ceeeb99cd821a2f7493e1e0e1eca5afc7a205213 (diff)
mtd: rawnand: gpmi: Implement exec_op
The gpmi driver's performance suffers from NAND operations being split into multiple small DMA transfers. This was forced by the NAND layer in the past, but with exec_op we can now use the controller as intended.

With this patch, gpmi_nfc_exec_op becomes the main entry point for NAND operations. All instructions are collected and chained as separate DMA transfers; in the end, the whole chain is fired and waited on until it finishes. gpmi_nfc_exec_op only performs the hardware operations; bad block marker swapping and buffer scrambling are done by the callers.

It is worth noting that the nand_*_op functions always take the buffer lengths for the data that the NAND chip actually transfers. When doing BCH, we have to calculate the net data size from the raw data size in some places.

This patch has been tested with 2048/64 and 2048/128 byte NAND on i.MX6q. mtd_oobtest, mtd_subpagetest and mtd_speedtest run without errors. The nandbiterrs, nandpagetest and nandsubpagetest userspace tests from mtd-utils run without errors, and UBIFS can successfully be mounted.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
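For readers unfamiliar with the exec_op pattern the message describes, a condensed sketch of such an entry point follows. This is NOT the code from this commit: the gpmi_chain_*() and gpmi_fire_chain_and_wait() helpers are hypothetical stand-ins for the driver's DMA descriptor builders; only the struct nand_operation walk reflects the real rawnand API.

#include <linux/mtd/rawnand.h>

struct gpmi_nand_data;	/* driver-private; defined in the driver's own header */

/* Hypothetical helpers standing in for the driver's DMA chain builders. */
void gpmi_chain_command(struct gpmi_nand_data *this, u8 opcode);
void gpmi_chain_address(struct gpmi_nand_data *this, const u8 *addrs,
			unsigned int naddrs);
void gpmi_chain_data_read(struct gpmi_nand_data *this, void *buf,
			  unsigned int len);
void gpmi_chain_data_write(struct gpmi_nand_data *this, const void *buf,
			   unsigned int len);
void gpmi_chain_wait_ready(struct gpmi_nand_data *this);
int gpmi_fire_chain_and_wait(struct gpmi_nand_data *this);

static int gpmi_nfc_exec_op(struct nand_chip *chip,
			    const struct nand_operation *op,
			    bool check_only)
{
	struct gpmi_nand_data *this = nand_get_controller_data(chip);
	const struct nand_op_instr *instr;
	unsigned int i;

	if (check_only)
		return 0;

	/* Collect every instruction as one link in a single DMA chain. */
	for (i = 0; i < op->ninstrs; i++) {
		instr = &op->instrs[i];

		switch (instr->type) {
		case NAND_OP_CMD_INSTR:
			gpmi_chain_command(this, instr->ctx.cmd.opcode);
			break;
		case NAND_OP_ADDR_INSTR:
			gpmi_chain_address(this, instr->ctx.addr.addrs,
					   instr->ctx.addr.naddrs);
			break;
		case NAND_OP_DATA_IN_INSTR:
			gpmi_chain_data_read(this, instr->ctx.data.buf.in,
					     instr->ctx.data.len);
			break;
		case NAND_OP_DATA_OUT_INSTR:
			gpmi_chain_data_write(this, instr->ctx.data.buf.out,
					      instr->ctx.data.len);
			break;
		case NAND_OP_WAITRDY_INSTR:
			gpmi_chain_wait_ready(this);
			break;
		}
	}

	/* Fire the whole chain once and wait for it to complete. */
	return gpmi_fire_chain_and_wait(this);
}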
Diffstat (limited to 'drivers/dma')
-rw-r--r--drivers/dma/mxs-dma.c3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/drivers/dma/mxs-dma.c b/drivers/dma/mxs-dma.c
index c622bee7eb12..20a9cb7cb6d3 100644
--- a/drivers/dma/mxs-dma.c
+++ b/drivers/dma/mxs-dma.c
@@ -78,6 +78,7 @@
#define BM_CCW_COMMAND (3 << 0)
#define CCW_CHAIN (1 << 2)
#define CCW_IRQ (1 << 3)
+#define CCW_WAIT4RDY (1 << 5)
#define CCW_DEC_SEM (1 << 6)
#define CCW_WAIT4END (1 << 7)
#define CCW_HALT_ON_TERM (1 << 8)
@@ -547,6 +548,8 @@ static struct dma_async_tx_descriptor *mxs_dma_prep_slave_sg(
ccw->bits |= CCW_TERM_FLUSH;
ccw->bits |= BF_CCW(sg_len, PIO_NUM);
ccw->bits |= BF_CCW(MXS_DMA_CMD_NO_XFER, COMMAND);
+ if (flags & MXS_DMA_CTRL_WAIT4RDY)
+ ccw->bits |= CCW_WAIT4RDY;
} else {
for_each_sg(sgl, sg, sg_len, i) {
if (sg_dma_len(sg) > MAX_XFER_BYTES) {
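The new CCW_WAIT4RDY bit lets a client chain a "wait for NAND ready" link into a descriptor by passing MXS_DMA_CTRL_WAIT4RDY in the prep flags, which the PIO branch above translates into the CCW bit. Below is a hedged usage sketch, not code from this commit: it assumes MXS_DMA_CTRL_WAIT4RDY is exported to client drivers through a shared header (the include path is an assumption), and relies on the mxs-dma convention that a DMA_TRANS_NONE descriptor carries raw PIO register words in place of a scatterlist.

#include <linux/dmaengine.h>
#include <linux/dma/mxs-dma.h>	/* assumed location of MXS_DMA_CTRL_WAIT4RDY */

/*
 * Queue a PIO descriptor that tells the controller to wait for the
 * NAND ready line before proceeding. mxs-dma convention: with
 * DMA_TRANS_NONE, the "scatterlist" pointer actually carries raw PIO
 * register words and sg_len is the number of those words.
 */
static struct dma_async_tx_descriptor *
chain_wait_ready(struct dma_chan *chan, u32 *pio, unsigned int npio)
{
	return dmaengine_prep_slave_sg(chan,
				       (struct scatterlist *)pio, npio,
				       DMA_TRANS_NONE,
				       MXS_DMA_CTRL_WAIT4RDY);
}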