|
This patch makes BUG_ON() usage correct in drivers/dma/ppc4xx/adma.c
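For context, BUG_ON(cond) panics when cond evaluates true, so its argument must express the invalid state, not the invariant being relied on. A minimal sketch of correct usage (illustrative only, not the actual hunk; the function and parameter names are made up):

        #include <linux/bug.h>

        static void sanity_check_slots(int slot_cnt, int slots_per_op)
        {
                /* Wrong: this would fire on the condition we expect to hold. */
                /* BUG_ON(slot_cnt > 0 && slots_per_op > 0); */

                /* Right: pass the condition that must never be true. */
                BUG_ON(slot_cnt <= 0 || slots_per_op <= 0);
        }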
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Anatolij Gustschin <agust@denx.de>
Cc: Sean MacLennan <smaclennan@pikatech.com>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Coly Li <bosong.ly@taobao.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This patch makes BUG_ON() usage correct in drivers/dma/mv_xor.c
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Coly Li <bosong.ly@taobao.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This patch makes BUG_ON() usage correct in drivers/dma/iop-adma.c.
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Coly Li <bosong.ly@taobao.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
dwc_scan_descriptors() scans all descriptors on the active_list when a transfer
is not yet complete. It compares the hardware LLP value against
first_desc->lli.llp and then against all children on its tx_list, but it never
compares first_desc's own address, i.e. first_desc->txd.phys, which is what we
initially programmed into the controller register. This can make the driver
stop and complete a transfer that was never actually started, and thus fail.
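A sketch of the comparison being described, using dw_dmac-style names (struct dw_desc, txd.phys, lli.llp); treat it as illustrative rather than the literal patch:

        /* Is the chain starting at 'first' still being worked on, given the
         * current contents of the channel's LLP register? */
        static bool dwc_chain_in_progress(struct dw_desc *first, dma_addr_t llp)
        {
                struct dw_desc *child;

                if (first->txd.phys == llp)     /* the check this patch adds */
                        return true;
                if (first->lli.llp == llp)
                        return true;
                list_for_each_entry(child, &first->tx_list, desc_node)
                        if (child->lli.llp == llp)
                                return true;
                return false;
        }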
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Commit 4aa5f366431fe (avr32: at32ap700x: specify DMA src and dst
masters) specified the masters for the ac97c playback device
but incorrectly set them in the capture slave information rather
than playback.
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Reported-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
[rebased on dmaengine for 2.6.39 (d42efe6b)]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
When we try to test all channels present on our controller together, some
channels of lower priority may be very slow compared to others. If the number of
transfers is unlimited, some channels may time out and not finish within 3
seconds. Thus, for such regression testing we may need a higher timeout value.
This patch adds support for passing the timeout value via a module parameter.
The default is 3000 msec; a negative value means the maximum timeout possible.
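A minimal sketch of such a module parameter (names and the wait helper are chosen for illustration; the actual dmatest code may differ slightly):

        #include <linux/module.h>
        #include <linux/jiffies.h>
        #include <linux/sched.h>

        static int timeout = 3000;
        module_param(timeout, int, 0644);
        MODULE_PARM_DESC(timeout,
                "Transfer timeout in msec (default: 3000), -1 for max timeout");

        /* Used later when waiting for a transfer to complete: */
        static unsigned long test_timeout_jiffies(void)
        {
                return timeout < 0 ? MAX_SCHEDULE_TIMEOUT
                                   : msecs_to_jiffies(timeout);
        }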
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
As a side effect this makes IMX_DMA selectable on i.MX21 again, because
the symbol ARCH_MX21 doesn't exist (MACH_MX21 would have been more correct).
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
The original dma_halt() function set the CA (channel abort) bit on both
the 83xx and 85xx controllers. This is incorrect on the 83xx, where this
bit means TEM (transfer error mask) instead. The 83xx doesn't support
channel abort, so we only do this operation on 85xx.
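A sketch of the intended behaviour; the feature and bit names (FSL_DMA_IP_MASK, FSL_DMA_IP_85XX, FSL_DMA_MR_CA) are quoted from fsldma as I recall them, and the mode-register accessors here are hypothetical:

        static void dma_abort_if_supported(struct fsldma_chan *chan)
        {
                u32 mode = fsl_chan_read_mr(chan);      /* hypothetical accessor */

                /* Only the 85xx controller has a Channel Abort bit; on 83xx
                 * the same bit position is TEM (transfer error mask). */
                if ((chan->feature & FSL_DMA_IP_MASK) == FSL_DMA_IP_85XX)
                        mode |= FSL_DMA_MR_CA;

                fsl_chan_write_mr(chan, mode);          /* hypothetical accessor */
        }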
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
This merges the fsl_chan_ld_cleanup() function into the dma_do_tasklet()
function to reduce locking overhead. In the best case, we will be able
to keep the DMA controller busy while we are freeing used descriptors.
In all cases, the spinlock is acquired two fewer times per transaction than
before.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
Prior to this patch, dma_run_dependencies() was called while holding
desc_lock. That function can call tx_submit() for
other descriptors, which may try to re-grab the lock. Avoid this by
moving the descriptors to be cleaned up to a temporary list, and
dropping the lock before cleanup.
At the same time, add support for automatic unmapping of src and dst
buffers, as offered by the DMAEngine API.
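The locking pattern being described, as a generic sketch with the standard list helpers (not the literal fsldma code):

        #include <linux/list.h>
        #include <linux/spinlock.h>

        static void cleanup_completed(spinlock_t *lock, struct list_head *done_list,
                                      void (*cleanup_one)(struct list_head *node))
        {
                LIST_HEAD(tmp);
                struct list_head *node, *next;
                unsigned long flags;

                /* Detach the completed descriptors while holding the lock... */
                spin_lock_irqsave(lock, flags);
                list_splice_tail_init(done_list, &tmp);
                spin_unlock_irqrestore(lock, flags);

                /* ...then run callbacks/dependencies without the lock held,
                 * so tx_submit() on dependent descriptors can take it again. */
                list_for_each_safe(node, next, &tmp) {
                        list_del(node);
                        cleanup_one(node);
                }
        }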
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
Enabling poisoning in the dmapool API quickly showed that the DMA
controller was fetching descriptors that should not have been in use.
This has caused intermittent controller lockups during testing.
I have been unable to figure out the exact set of conditions which cause
this to happen. However, I believe it is related to the driver using the
hardware registers to track whether the controller is busy or not. The
code can incorrectly decide that the hardware is idle due to lag between
register writes and the hardware actually becoming busy.
To fix this, the driver has been reworked to explicitly track the state
of the hardware, rather than try to guess what it is doing based on the
register values.
This has passed dmatest with 10 threads per channel, 100000 iterations
per thread several times without error. Previously, this would fail
within a few seconds.
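In outline, the rework keeps an explicit software notion of busy/idle that changes only under the channel lock; a generic sketch (the real fsldma structures and field names are not reproduced here):

        struct chan_state {             /* illustrative, not the fsldma struct */
                spinlock_t lock;
                bool idle;
        };

        static void chan_start(struct chan_state *c)
        {
                /* caller holds c->lock */
                c->idle = false;        /* mark busy before poking the hardware */
                /* ... program descriptor address, set the start bit ... */
        }

        static void chan_completion_irq(struct chan_state *c)
        {
                /* caller holds c->lock; only the completion interrupt may
                 * declare the channel idle again */
                c->idle = true;
        }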
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
This fixes some minor violations of the coding style. It also changes
the style of the device_prep_dma_*() function definitions so they are
identical.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
This adds better tracking to link descriptor allocations, callbacks, and
frees. This makes it much easier to track errors with link descriptors.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
This makes debugging the driver much easier when multiple channels are
running concurrently. In addition, you can see how much descriptor
memory each channel has allocated via the dmapool API in sysfs.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
This is a purely cosmetic cleanup. It is nice to have related functions
right next to each other in the code.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
The dmatest code relies on the DMAEngine API to automatically call
dma_unmap_single() on src buffers. The flags it passes are incorrect; fix them.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
Merge branch 'for_dan' of git://git.infradead.org/users/vkoul/slave-dma into dmaengine
* 'for_dan' of git://git.infradead.org/users/vkoul/slave-dma:
drivers, pch_dma: Fix warning when CONFIG_PM=n.
dmaengine/dw_dmac fix: use readl & writel instead of __raw_readl & __raw_writel
avr32: at32ap700x: Specify DMA Flow Controller, Src and Dst msize
dw_dmac: Setting Default Burst length for transfers as 16.
dw_dmac: Allow src/dst msize & flow controller to be configured at runtime
dw_dmac: Changing type of src_master and dest_master to u8.
dw_dmac: Pass Channel Priority from platform_data
dw_dmac: Pass Channel Allocation Order from platform_data
dw_dmac: Mark all tx_descriptors with DMA_CTRL_ACK after xfer finish
dw_dmac: Change value of DWC_MAX_COUNT to 4095.
dw_dmac: Adding support for 64 bit access width for memcpy xfers
dw_dmac: Calling dwc_scan_descriptors from dwc_tx_status() after taking lock
dw_dmac: Move single descriptor from dwc->queue to dwc->active_list in dwc_complete_all
dw_dmac: Replace module_init() with subsys_initcall()
dw_dmac: Remove compilation dependency from AVR32 and put on HAVE_CLK
dmaengine: mxs-dma: add dma support for i.MX23/28
pch_dma: set the number of array correctly
pch_dma: fix kernel error issue
|
|
|
|
When CONFIG_PM=n, we get the following warning:
drivers/dma/pch_dma.c:741: warning: ‘pch_dma_suspend’ defined but not used
drivers/dma/pch_dma.c:755: warning: ‘pch_dma_resume’ defined but not used
To fix it, wrap the pch_dma_{suspend,resume} and
pch_dma_{save,restore}_regs functions in #ifdef CONFIG_PM.
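In outline (a sketch, with function bodies omitted and signatures abbreviated from the driver):

        #ifdef CONFIG_PM
        /* pch_dma_save_regs() and pch_dma_restore_regs() live in here too. */

        static int pch_dma_suspend(struct pci_dev *pdev, pm_message_t state)
        {
                /* ... save registers, put the device into a low-power state ... */
                return 0;
        }

        static int pch_dma_resume(struct pci_dev *pdev)
        {
                /* ... power up, restore registers ... */
                return 0;
        }
        #endif /* CONFIG_PM */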
Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
On ARMv7 cores, device memory mapped as Normal Non-cacheable may not guarantee
ordered access, causing failures in device drivers that do not use the mandatory
memory barriers. The readl & writel variants contain the necessary memory
barriers.
See commit 79f64dbf68c8a9779a7e9a25e0a9f0217a25b57a ("ARM: 6273/1: Add barriers
to the I/O accessors if ARM_DMA_MEM_BUFFERABLE") for more information.
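The change amounts to the following kind of substitution (sketch; REG_OFFSET is a placeholder):

        void __iomem *regs;     /* channel register base */
        u32 val;

        /* Before: no barriers, so ordering is not guaranteed on ARMv7
         * when the mapping is Normal Non-cacheable. */
        val = __raw_readl(regs + REG_OFFSET);
        __raw_writel(val, regs + REG_OFFSET);

        /* After: readl()/writel() include the required barriers. */
        val = readl(regs + REG_OFFSET);
        writel(val, regs + REG_OFFSET);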
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Now that the dw_dmac DMA driver supports a configurable flow controller and
source/destination burst size (msize), we need to specify which ones to use.
Previously the msize (burst size) was hardcoded to 1 and the flow controller was
the DMA for both M2P & P2M transfers.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This patch sets the default burst length for all transfers to 16. This improves
performance when the user doesn't supply any chan->private data.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The msize (burst size) is peripheral-dependent for prep_slave_sg and cyclic_prep
transfers, and platform-dependent for memcpy transfers, so the msize
configuration must come from platform data.
Also, some peripherals (e.g. JPEG) need to be the flow controller for DMA
transfers, so for slave_sg & cyclic_prep transfers this information must come
from platform data as well.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
src_master & dest_master don't require a u32, as their values fit in a u8. Their
descriptions are also missing from the kernel-doc comment. This patch fixes both
issues.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
In the Synopsys DesignWare controller, channel priority is programmable. This
patch adds support for passing the channel priority through platform data. By
default, ascending channel priority is used, i.e. channel 0 gets the highest
priority and channel 7 the lowest.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
On the SPEAr platform, channels 4-7 have a larger FIFO depth, so we want to grab
the better channels first. This patch introduces the concept of channel
allocation order in dw_dmac. If the user passes nothing or 0, normal (ascending)
channel allocation is used; otherwise channels are allocated in descending order.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
dwc_desc_get() checks all descriptors for DMA_CTRL_ACK before allocating them
for transfers, but descriptors are not marked with DMA_CTRL_ACK after a transfer
finishes. Thus a descriptor, once used, is never usable again. This patch marks
descriptors with DMA_CTRL_ACK after the DMA transfer finishes.
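A sketch of the marking; async_tx_ack() from linux/dmaengine.h simply sets DMA_CTRL_ACK on the descriptor (whether the driver uses the helper or sets the flag directly is not shown here):

        #include <linux/dmaengine.h>

        /* In the descriptor-completion path, once the transfer has finished: */
        static void mark_desc_reusable(struct dma_async_tx_descriptor *txd)
        {
                async_tx_ack(txd);      /* txd->flags |= DMA_CTRL_ACK */
        }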
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Every descriptor can transfer a maximum count of 4095 (the count field is 12
bits wide in the control register, so 2^12 - 1 = 4095), so DWC_MAX_COUNT must be
4095 instead of 2048.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The lock must be taken before calling dwc_scan_descriptors(), as it may
access/modify shared data and queues. dwc_tx_status() wasn't taking the lock
before calling this routine. This patch adds code that takes the lock around the
call.
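In outline (a sketch, assuming the bottom-half lock variant the driver used elsewhere at the time):

        /* dwc_tx_status(), after the first cookie check fails: */
        spin_lock_bh(&dwc->lock);
        dwc_scan_descriptors(to_dw_dma(chan->device), dwc);
        spin_unlock_bh(&dwc->lock);
        /* ...then re-check the cookie state. */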
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
dwc_complete_all() and other routines were moving all descriptors from
dwc->queue to dwc->active_list, when only one needed to be moved. Also,
dwc_dostart() is now called once the list has been fixed up.
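The change boils down to moving one element instead of splicing the whole queue; a sketch with the standard list helpers (the driver's own helper names may differ):

        if (!list_empty(&dwc->queue)) {
                /* was effectively:
                 * list_splice_init(&dwc->queue, &dwc->active_list); */
                list_move(dwc->queue.next, &dwc->active_list);
                dwc_dostart(dwc, list_first_entry(&dwc->active_list,
                                                  struct dw_desc, desc_node));
        }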
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
In some cases users of dw_dmac are initialized before dw_dmac itself, and if
they try to use it they simply fail. It is therefore better to register the
driver's init() routine with subsys_initcall() instead of module_init(), so that
the DMA driver is available as early as possible.
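In outline (a sketch; the actual init routine name and registration call in dw_dmac may differ):

        static int __init dw_init(void)
        {
                return platform_driver_probe(&dw_driver, dw_probe);
        }
        subsys_initcall(dw_init);       /* was: module_init(dw_init); */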
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This driver will now be used on at least two platforms, AVR32 & ARM, and it has
no actual hardware dependency on either, so this dependency can be removed
altogether.
Also, the dw_dmac driver uses the clk framework and must have a compile-time
dependency on HAVE_CLK.
Signed-off-by: Viresh Kumar <viresh.kumar@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This patch adds dma support for Freescale MXS-based SoC i.MX23/28,
including apbh-dma and apbx-dma.
* apbh-dma and apbx-dma are supported in the driver as two mxs-dma
instances.
* apbh-dma is different between mx23 and mx28, hardware version
register is used to differentiate.
* mxs-dma supports pio function besides data transfer. The driver
uses dma_data_direction DMA_NONE to identify the pio mode, and
steals sgl and sg_len to get pio words and numbers from clients.
* mxs dmaengine has some very specific features, like the sense function
and the special NAND support (nand_lock, nand_wait4ready). These
are too specific to be implemented in the generic dmaengine driver.
* The driver refers to imx-sdma and only a single descriptor is
statically assigned to each channel.
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Set the array size correctly.
Signed-off-by: Tomoya MORINAGA <tomoya-linux@dsn.okisemi.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Fix the following kernel error:
------------[ cut here ]------------
WARNING: at kernel/softirq.c:159 _local_bh_enable_ip.clone.5+0x35/0x71()
Hardware name: To be filled by O.E.M.
Modules linked in: pch_uart pch_dma fuse mga drm cpufreq_ondemand acpi_cpufreq mperf ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables ipv6 uinput snd_hda_codec_realtek snd_hda_intel snd_hda_codec matroxfb_base snd_hwdep 8250_pnp snd_seq snd_seq_device matroxfb_DAC1064 snd_pcm joydev 8250 matroxfb_accel snd_timer matroxfb_Ti3026 ppdev pegasus parport_pc snd parport matroxfb_g450 g450_pll serial_core video output matroxfb_misc soundcore snd_page_alloc serio_raw pcspkr ext4 jbd2 crc16 sdhci_pci sdhci mmc_core floppy [last unloaded: scsi_wait_scan]
Pid: 0, comm: swapper Not tainted 2.6.37.upstream_check+ #8
Call Trace:
[<c0433add>] warn_slowpath_common+0x65/0x7a
[<c043825b>] ? _local_bh_enable_ip.clone.5+0x35/0x71
[<c0433b01>] warn_slowpath_null+0xf/0x13
[<c043825b>] _local_bh_enable_ip.clone.5+0x35/0x71
[<c043829f>] local_bh_enable_ip+0x8/0xa
[<c06ec471>] _raw_spin_unlock_bh+0x10/0x12
[<f82b57dd>] pd_prep_slave_sg+0xba/0x200 [pch_dma]
[<f82f7b7a>] pch_uart_interrupt+0x44d/0x6aa [pch_uart]
[<c046fa97>] handle_IRQ_event+0x1d/0x9e
[<c047146f>] handle_fasteoi_irq+0x90/0xc7
[<c04713df>] ? handle_fasteoi_irq+0x0/0xc7
<IRQ> [<c04045af>] ? do_IRQ+0x3e/0x89
[<c04035a9>] ? common_interrupt+0x29/0x30
[<c04400d8>] ? sys_getpriority+0x12d/0x1a2
[<c058bb2b>] ? arch_local_irq_enable+0x5/0xb
[<c058c740>] ? acpi_idle_enter_bm+0x22a/0x261
[<c0648b11>] ? cpuidle_idle_call+0x70/0xa1
[<c0401f44>] ? cpu_idle+0x49/0x6a
[<c06d9fc4>] ? rest_init+0x58/0x5a
[<c089e762>] ? start_kernel+0x2d0/0x2d5
[<c089e0ce>] ? i386_start_kernel+0xce/0xd5
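The trace shows local_bh_enable() being called from hard-IRQ context (pd_prep_slave_sg() invoked from pch_uart_interrupt()), which is what the warning at kernel/softirq.c:159 complains about. A sketch of the usual remedy, replacing the _bh lock variants on that path (the lock name pd_chan->lock is assumed from the driver; this is not the literal hunk):

        /* In pd_prep_slave_sg() and its helpers: */
        unsigned long flags;

        /* Before: spin_lock_bh()/spin_unlock_bh(), which re-enables
         * softirqs and is not allowed in hard-IRQ context. */

        /* After: a variant that is safe from any context. */
        spin_lock_irqsave(&pd_chan->lock, flags);
        /* ... build the descriptor list ... */
        spin_unlock_irqrestore(&pd_chan->lock, flags);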
Signed-off-by: Tomoya MORINAGA <tomoya-linux@dsn.okisemi.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
|
|
|
|
Slave-dma has become the predominant usage model for dmaengine and needs
special attention. Memory-to-memory dma usage cases will continue to be
maintained by Dan.
Cc: Alan Cox <alan@linux.intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
|
|
Currently, when two or more buffers are queued by the camera driver and double
buffering is therefore enabled in the idmac, we lose one frame coming from CSI:
reporting of the arrival of the first frame is deferred by the DMAIC_7_EOF
interrupt handler and arrival of the last frame is not reported at all. So when
requesting N frames from the image sensor we actually receive N - 1 frames in
user space.
The reason for this behaviour is that the DMAIC_7_EOF interrupt handler
misleadingly assumes that the CUR_BUF flag points to the buffer used by the
IDMAC. Actually this is not the case, since the CUR_BUF flag is flipped by the
FSU when the FSU sends the <TASK>_NEW_FRM_RDY signal as new frame data is
delivered by the CSI. When sending this signal, the FSU updates the DMA_CUR_BUF
and DMA_BUFx_RDY flags: DMA_CUR_BUF is flipped and DMA_BUFx_RDY is cleared,
indicating that the frame data is being written by the IDMAC to the pointed
buffer. DMA_BUFx_RDY is supposed to be set back to the ready state by the MCU
once it has handled the received data. The DMAIC_7_CUR_BUF flag won't be flipped
here by the IPU, so waiting for this event in the EOF interrupt handler is
wrong.
Actually there is no spurious interrupt as described in the comments; this is
the valid DMAIC_7_EOF interrupt indicating reception of the frame from CSI.
The patch removes the code that waits for the DMAIC_7_CUR_BUF flag to flip in
the DMAIC_7_EOF interrupt handler. As the comment in the current code notes,
this waiting doesn't help anyway. As a result of this removal, reporting of the
first arrived frame is no longer deferred until the next frame arrives, and the
driver's software flag 'ichan->active_buffer' stays in sync with the
DMAIC_7_CUR_BUF flag, so reception of all requested frames works.
This has been verified on hardware that triggers the image sensor via a
programmable state machine, which makes it possible to obtain an exact number of
frames. On this hardware we do not tolerate losing frames.
This patch also removes resetting the DMA_BUFx_RDY flags of
all channels in ipu_disable_channel() since transfers on other
DMA channels might be triggered by other running tasks and the
buffers should always be ready for data sending or reception.
Signed-off-by: Anatolij Gustschin <agust@denx.de>
Reviewed-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Tested-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
|
|
|
|
As per the reference manual, bit "L" should be set while bit "C"
should be cleared for the last buffer descriptor in the non-cyclic
chain, so that sdma can stop trying to find the next BD and end
the transfer.
For sdma_prep_slave_sg(), this means BD_LAST needs to be set and BD_CONT
cleared for the last BD.
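A sketch of the last-descriptor handling inside the sg loop of sdma_prep_slave_sg() (flag names as used by imx-sdma; the surrounding code is illustrative):

        param = BD_DONE | BD_EXTD | BD_CONT;

        if (i + 1 == sg_len) {          /* last buffer descriptor */
                param |= BD_INTR;       /* raise an interrupt on completion */
                param |= BD_LAST;       /* tell SDMA to stop here... */
                param &= ~BD_CONT;      /* ...and not chain to another BD */
        }

        bd->mode.status = param;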
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
|
|
sdma_handle_channel_loop() is the handler for cyclic transfers. The successful
completion of one period does not mean the whole transfer has succeeded, so
DMA_IN_PROGRESS, not DMA_SUCCESS, is the status to report.
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
|
|
The sdmac->status field was designed to reflect the status of the transfer, so
simply return it in sdma_tx_status(). A dma client can then call
dma_async_is_tx_complete() to learn the status of the transfer.
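In outline (a sketch; cookie/residue bookkeeping omitted):

        static enum dma_status sdma_tx_status(struct dma_chan *chan,
                                              dma_cookie_t cookie,
                                              struct dma_tx_state *txstate)
        {
                struct sdma_channel *sdmac = to_sdma_chan(chan);

                /* report what the channel itself knows rather than deriving
                 * a status from cookies alone */
                return sdmac->status;
        }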
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
|
|
sdma_prep_dma_cyclic() sets sdmac->status to DMA_ERROR in its err_out path, and
sdma_prep_slave_sg() needs to do the same. Otherwise sdmac->status stays at
DMA_IN_PROGRESS, which makes the function return immediately the next time it
is called.
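In outline (a sketch of the error path):

        err_out:
                sdmac->status = DMA_ERROR;      /* mirror sdma_prep_dma_cyclic() */
                return NULL;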
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
|
|
This is a leftover from the time when the driver did not have the
sdma_prep_dma_cyclic callback and implemented sound DMA as a looped sg chain.
It can be removed now.
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
|
|
The capabilities are device-specific fields, not channel-specific fields.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
|
|
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
|