837 files changed, 36554 insertions, 10630 deletions
diff --git a/Documentation/SubmittingPatches b/Documentation/SubmittingPatches index 2a8e89e13e45..7e9abb8a276b 100644 --- a/Documentation/SubmittingPatches +++ b/Documentation/SubmittingPatches @@ -132,6 +132,20 @@ Example: platform_set_drvdata(), but left the variable "dev" unused, delete it. +If your patch fixes a bug in a specific commit, e.g. you found an issue using +git-bisect, please use the 'Fixes:' tag with the first 12 characters of the +SHA-1 ID, and the one line summary. +Example: + + Fixes: e21d2170f366 ("video: remove unnecessary platform_set_drvdata()") + +The following git-config settings can be used to add a pretty format for +outputting the above style in the git log or git show commands + + [core] + abbrev = 12 + [pretty] + fixes = Fixes: %h (\"%s\") 3) Separate your changes. @@ -443,7 +457,7 @@ person it names. This tag documents that potentially interested parties have been included in the discussion -14) Using Reported-by:, Tested-by:, Reviewed-by: and Suggested-by: +14) Using Reported-by:, Tested-by:, Reviewed-by:, Suggested-by: and Fixes: If this patch fixes a problem reported by somebody else, consider adding a Reported-by: tag to credit the reporter for their contribution. Please @@ -498,6 +512,12 @@ idea was not posted in a public forum. That said, if we diligently credit our idea reporters, they will, hopefully, be inspired to help us again in the future. +A Fixes: tag indicates that the patch fixes an issue in a previous commit. It +is used to make it easy to determine where a bug originated, which can help +review a bug fix. This tag also assists the stable kernel team in determining +which stable kernel versions should receive your fix. This is the preferred +method for indicating a bug fixed by the patch. See #2 above for more details. + 15) The canonical patch format diff --git a/Documentation/arm/00-INDEX b/Documentation/arm/00-INDEX index a94090cc785d..3b08bc2b04cf 100644 --- a/Documentation/arm/00-INDEX +++ b/Documentation/arm/00-INDEX @@ -46,5 +46,7 @@ swp_emulation - SWP/SWPB emulation handler/logging description tcm.txt - ARM Tightly Coupled Memory +uefi.txt + - [U]EFI configuration and runtime services documentation vlocks.txt - Voting locks, low-level mechanism relying on memory system atomic writes. diff --git a/Documentation/arm/memory.txt b/Documentation/arm/memory.txt index 4bfb9ffbdbc1..38dc06d0a791 100644 --- a/Documentation/arm/memory.txt +++ b/Documentation/arm/memory.txt @@ -41,16 +41,9 @@ fffe8000 fffeffff DTCM mapping area for platforms with fffe0000 fffe7fff ITCM mapping area for platforms with ITCM mounted inside the CPU. -fff00000 fffdffff Fixmap mapping region. Addresses provided +ffc00000 ffdfffff Fixmap mapping region. Addresses provided by fix_to_virt() will be located here. -ffc00000 ffefffff DMA memory mapping region. Memory returned - by the dma_alloc_xxx functions will be - dynamically mapped here. - -ff000000 ffbfffff Reserved for future expansion of DMA - mapping region. - fee00000 feffffff Mapping of PCI I/O space. This is a static mapping within the vmalloc space. diff --git a/Documentation/arm/uefi.txt b/Documentation/arm/uefi.txt new file mode 100644 index 000000000000..d60030a1b909 --- /dev/null +++ b/Documentation/arm/uefi.txt @@ -0,0 +1,64 @@ +UEFI, the Unified Extensible Firmware Interface, is a specification +governing the behaviours of compatible firmware interfaces. It is +maintained by the UEFI Forum - http://www.uefi.org/. 
+ +UEFI is an evolution of its predecessor 'EFI', so the terms EFI and +UEFI are used somewhat interchangeably in this document and associated +source code. As a rule, anything new uses 'UEFI', whereas 'EFI' refers +to legacy code or specifications. + +UEFI support in Linux +===================== +Booting on a platform with firmware compliant with the UEFI specification +makes it possible for the kernel to support additional features: +- UEFI Runtime Services +- Retrieving various configuration information through the standardised + interface of UEFI configuration tables. (ACPI, SMBIOS, ...) + +For actually enabling [U]EFI support, enable: +- CONFIG_EFI=y +- CONFIG_EFI_VARS=y or m + +The implementation depends on receiving information about the UEFI environment +in a Flattened Device Tree (FDT) - so is only available with CONFIG_OF. + +UEFI stub +========= +The "stub" is a feature that extends the Image/zImage into a valid UEFI +PE/COFF executable, including a loader application that makes it possible to +load the kernel directly from the UEFI shell, boot menu, or one of the +lightweight bootloaders like Gummiboot or rEFInd. + +The kernel image built with stub support remains a valid kernel image for +booting in non-UEFI environments. + +UEFI kernel support on ARM +========================== +UEFI kernel support on the ARM architectures (arm and arm64) is only available +when boot is performed through the stub. + +When booting in UEFI mode, the stub deletes any memory nodes from a provided DT. +Instead, the kernel reads the UEFI memory map. + +The stub populates the FDT /chosen node with (and the kernel scans for) the +following parameters: +________________________________________________________________________________ +Name | Size | Description +================================================================================ +linux,uefi-system-table | 64-bit | Physical address of the UEFI System Table. +-------------------------------------------------------------------------------- +linux,uefi-mmap-start | 64-bit | Physical address of the UEFI memory map, + | | populated by the UEFI GetMemoryMap() call. +-------------------------------------------------------------------------------- +linux,uefi-mmap-size | 32-bit | Size in bytes of the UEFI memory map + | | pointed to in previous entry. +-------------------------------------------------------------------------------- +linux,uefi-mmap-desc-size | 32-bit | Size in bytes of each entry in the UEFI + | | memory map. +-------------------------------------------------------------------------------- +linux,uefi-mmap-desc-ver | 32-bit | Version of the mmap descriptor format. +-------------------------------------------------------------------------------- +linux,uefi-stub-kern-ver | string | Copy of linux_banner from build. +-------------------------------------------------------------------------------- + +For verbose debug messages, specify 'uefi_debug' on the kernel command line. diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt index beb754e87c65..37fc4f632176 100644 --- a/Documentation/arm64/booting.txt +++ b/Documentation/arm64/booting.txt @@ -85,6 +85,10 @@ The decompressed kernel image contains a 64-byte header as follows: Header notes: - code0/code1 are responsible for branching to stext. +- when booting through EFI, code0/code1 are initially skipped. + res5 is an offset to the PE header and the PE header has the EFI + entry point (efi_stub_entry). 
When the stub has done its work, it + jumps to code0 to resume the normal boot process. The image must be placed at the specified offset (currently 0x80000) from the start of the system RAM and called there. The start of the diff --git a/Documentation/cgroups/memory.txt b/Documentation/cgroups/memory.txt index 4937e6fff9b4..b3429aec444c 100644 --- a/Documentation/cgroups/memory.txt +++ b/Documentation/cgroups/memory.txt @@ -540,14 +540,13 @@ Note: 5.3 swappiness -Similar to /proc/sys/vm/swappiness, but only affecting reclaim that is -triggered by this cgroup's hard limit. The tunable in the root cgroup -corresponds to the global swappiness setting. - -Please note that unlike the global swappiness, memcg knob set to 0 -really prevents from any swapping even if there is a swap storage -available. This might lead to memcg OOM killer if there are no file -pages to reclaim. +Overrides /proc/sys/vm/swappiness for the particular group. The tunable +in the root cgroup corresponds to the global swappiness setting. + +Please note that unlike during the global reclaim, limit reclaim +enforces that 0 swappiness really prevents from any swapping even if +there is a swap storage available. This might lead to memcg OOM killer +if there are no file pages to reclaim. 5.4 failcnt diff --git a/Documentation/clk.txt b/Documentation/clk.txt index c9c399af7c08..1fee72f4d331 100644 --- a/Documentation/clk.txt +++ b/Documentation/clk.txt @@ -68,21 +68,27 @@ the operations defined in clk.h: int (*is_enabled)(struct clk_hw *hw); unsigned long (*recalc_rate)(struct clk_hw *hw, unsigned long parent_rate); - long (*round_rate)(struct clk_hw *hw, unsigned long, - unsigned long *); + long (*round_rate)(struct clk_hw *hw, + unsigned long rate, + unsigned long *parent_rate); long (*determine_rate)(struct clk_hw *hw, unsigned long rate, unsigned long *best_parent_rate, struct clk **best_parent_clk); int (*set_parent)(struct clk_hw *hw, u8 index); u8 (*get_parent)(struct clk_hw *hw); - int (*set_rate)(struct clk_hw *hw, unsigned long); + int (*set_rate)(struct clk_hw *hw, + unsigned long rate, + unsigned long parent_rate); int (*set_rate_and_parent)(struct clk_hw *hw, unsigned long rate, - unsigned long parent_rate, u8 index); + unsigned long parent_rate, + u8 index); unsigned long (*recalc_accuracy)(struct clk_hw *hw, - unsigned long parent_accuracy); + unsigned long parent_accuracy); void (*init)(struct clk_hw *hw); + int (*debug_init)(struct clk_hw *hw, + struct dentry *dentry); }; Part 3 - hardware clk implementations diff --git a/Documentation/devicetree/bindings/arm/pmu.txt b/Documentation/devicetree/bindings/arm/pmu.txt index fe5cef8976cb..75ef91d08f3b 100644 --- a/Documentation/devicetree/bindings/arm/pmu.txt +++ b/Documentation/devicetree/bindings/arm/pmu.txt @@ -8,6 +8,7 @@ Required properties: - compatible : should be one of "arm,armv8-pmuv3" + "arm,cortex-a17-pmu" "arm,cortex-a15-pmu" "arm,cortex-a12-pmu" "arm,cortex-a9-pmu" diff --git a/Documentation/devicetree/bindings/clock/bcm-kona-clock.txt b/Documentation/devicetree/bindings/clock/bcm-kona-clock.txt index 56d1f4961075..5286e260fcae 100644 --- a/Documentation/devicetree/bindings/clock/bcm-kona-clock.txt +++ b/Documentation/devicetree/bindings/clock/bcm-kona-clock.txt @@ -10,12 +10,12 @@ This binding uses the common clock binding: Required properties: - compatible - Shall have one of the following values: - - "brcm,bcm11351-root-ccu" - - "brcm,bcm11351-aon-ccu" - - "brcm,bcm11351-hub-ccu" - - "brcm,bcm11351-master-ccu" - - "brcm,bcm11351-slave-ccu" + Shall 
have a value of the form "brcm,<model>-<which>-ccu", + where <model> is a Broadcom SoC model number and <which> is + the name of a defined CCU. For example: + "brcm,bcm11351-root-ccu" + The compatible strings used for each supported SoC family + are defined below. - reg Shall define the base and range of the address space containing clock control registers @@ -26,12 +26,48 @@ Required properties: Shall be an ordered list of strings defining the names of the clocks provided by the CCU. +Device tree example: + + slave_ccu: slave_ccu { + compatible = "brcm,bcm11351-slave-ccu"; + reg = <0x3e011000 0x0f00>; + #clock-cells = <1>; + clock-output-names = "uartb", + "uartb2", + "uartb3", + "uartb4"; + }; + + ref_crystal_clk: ref_crystal { + #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <26000000>; + }; + + uart@3e002000 { + compatible = "brcm,bcm11351-dw-apb-uart", "snps,dw-apb-uart"; + status = "disabled"; + reg = <0x3e002000 0x1000>; + clocks = <&slave_ccu BCM281XX_SLAVE_CCU_UARTB3>; + interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>; + reg-shift = <2>; + reg-io-width = <4>; + }; + +BCM281XX family +--------------- +CCU compatible string values for SoCs in the BCM281XX family are: + "brcm,bcm11351-root-ccu" + "brcm,bcm11351-aon-ccu" + "brcm,bcm11351-hub-ccu" + "brcm,bcm11351-master-ccu" + "brcm,bcm11351-slave-ccu" -BCM281XX family SoCs use Kona CCUs. The following table defines -the set of CCUs and clock specifiers for BCM281XX clocks. When -a clock consumer references a clocks, its symbolic specifier -(rather than its numeric index value) should be used. These -specifiers are defined in "include/dt-bindings/clock/bcm281xx.h". +The following table defines the set of CCUs and clock specifiers for +BCM281XX family clocks. When a clock consumer references a clocks, +its symbolic specifier (rather than its numeric index value) should +be used. These specifiers are defined in: + "include/dt-bindings/clock/bcm281xx.h" CCU Clock Type Index Specifier --- ----- ---- ----- --------- @@ -64,30 +100,40 @@ specifiers are defined in "include/dt-bindings/clock/bcm281xx.h". slave pwm peri 9 BCM281XX_SLAVE_CCU_PWM -Device tree example: +BCM21664 family +--------------- +CCU compatible string values for SoCs in the BCM21664 family are: + "brcm,bcm21664-root-ccu" + "brcm,bcm21664-aon-ccu" + "brcm,bcm21664-master-ccu" + "brcm,bcm21664-slave-ccu" - slave_ccu: slave_ccu { - compatible = "brcm,bcm11351-slave-ccu"; - reg = <0x3e011000 0x0f00>; - #clock-cells = <1>; - clock-output-names = "uartb", - "uartb2", - "uartb3", - "uartb4"; - }; +The following table defines the set of CCUs and clock specifiers for +BCM21664 family clocks. When a clock consumer references a clocks, +its symbolic specifier (rather than its numeric index value) should +be used. 
These specifiers are defined in: + "include/dt-bindings/clock/bcm21664.h" - ref_crystal_clk: ref_crystal { - #clock-cells = <0>; - compatible = "fixed-clock"; - clock-frequency = <26000000>; - }; + CCU Clock Type Index Specifier + --- ----- ---- ----- --------- + root frac_1m peri 0 BCM21664_ROOT_CCU_FRAC_1M - uart@3e002000 { - compatible = "brcm,bcm11351-dw-apb-uart", "snps,dw-apb-uart"; - status = "disabled"; - reg = <0x3e002000 0x1000>; - clocks = <&slave_ccu BCM281XX_SLAVE_CCU_UARTB3>; - interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>; - reg-shift = <2>; - reg-io-width = <4>; - }; + aon hub_timer peri 0 BCM21664_AON_CCU_HUB_TIMER + + master sdio1 peri 0 BCM21664_MASTER_CCU_SDIO1 + master sdio2 peri 1 BCM21664_MASTER_CCU_SDIO2 + master sdio3 peri 2 BCM21664_MASTER_CCU_SDIO3 + master sdio4 peri 3 BCM21664_MASTER_CCU_SDIO4 + master sdio1_sleep peri 4 BCM21664_MASTER_CCU_SDIO1_SLEEP + master sdio2_sleep peri 5 BCM21664_MASTER_CCU_SDIO2_SLEEP + master sdio3_sleep peri 6 BCM21664_MASTER_CCU_SDIO3_SLEEP + master sdio4_sleep peri 7 BCM21664_MASTER_CCU_SDIO4_SLEEP + + slave uartb peri 0 BCM21664_SLAVE_CCU_UARTB + slave uartb2 peri 1 BCM21664_SLAVE_CCU_UARTB2 + slave uartb3 peri 2 BCM21664_SLAVE_CCU_UARTB3 + slave uartb4 peri 3 BCM21664_SLAVE_CCU_UARTB4 + slave bsc1 peri 4 BCM21664_SLAVE_CCU_BSC1 + slave bsc2 peri 5 BCM21664_SLAVE_CCU_BSC2 + slave bsc3 peri 6 BCM21664_SLAVE_CCU_BSC3 + slave bsc4 peri 7 BCM21664_SLAVE_CCU_BSC4 diff --git a/Documentation/devicetree/bindings/clock/clock-bindings.txt b/Documentation/devicetree/bindings/clock/clock-bindings.txt index 700e7aac3717..f15787817d6b 100644 --- a/Documentation/devicetree/bindings/clock/clock-bindings.txt +++ b/Documentation/devicetree/bindings/clock/clock-bindings.txt @@ -44,10 +44,9 @@ For example: clocks by index. The names should reflect the clock output signal names for the device. -clock-indices: If the identifyng number for the clocks in the node - is not linear from zero, then the this mapping allows - the mapping of identifiers into the clock-output-names - array. +clock-indices: If the identifying number for the clocks in the node + is not linear from zero, then this allows the mapping of + identifiers into the clock-output-names array. For example, if we have two clocks <&oscillator 1> and <&oscillator 3>: @@ -58,7 +57,7 @@ For example, if we have two clocks <&oscillator 1> and <&oscillator 3>: clock-output-names = "clka", "clkb"; } - This ensures we do not have any empty nodes in clock-output-names + This ensures we do not have any empty strings in clock-output-names ==Clock consumers== diff --git a/Documentation/devicetree/bindings/clock/fixed-clock.txt b/Documentation/devicetree/bindings/clock/fixed-clock.txt index 48ea0ad8ad46..0641a663ad69 100644 --- a/Documentation/devicetree/bindings/clock/fixed-clock.txt +++ b/Documentation/devicetree/bindings/clock/fixed-clock.txt @@ -12,7 +12,6 @@ Required properties: Optional properties: - clock-accuracy : accuracy of clock in ppb (parts per billion). Should be a single cell. -- gpios : From common gpio binding; gpio connection to clock enable pin. - clock-output-names : From common clock binding. 
Example: diff --git a/Documentation/devicetree/bindings/clock/hix5hd2-clock.txt b/Documentation/devicetree/bindings/clock/hix5hd2-clock.txt new file mode 100644 index 000000000000..7894a64887cb --- /dev/null +++ b/Documentation/devicetree/bindings/clock/hix5hd2-clock.txt @@ -0,0 +1,31 @@ +* Hisilicon Hix5hd2 Clock Controller + +The hix5hd2 clock controller generates and supplies clock to various +controllers within the hix5hd2 SoC. + +Required Properties: + +- compatible: should be "hisilicon,hix5hd2-clock" +- reg: Address and length of the register set +- #clock-cells: Should be <1> + +Each clock is assigned an identifier and client nodes use this identifier +to specify the clock which they consume. + +All these identifier could be found in <dt-bindings/clock/hix5hd2-clock.h>. + +Examples: + clock: clock@f8a22000 { + compatible = "hisilicon,hix5hd2-clock"; + reg = <0xf8a22000 0x1000>; + #clock-cells = <1>; + }; + + uart0: uart@f8b00000 { + compatible = "arm,pl011", "arm,primecell"; + reg = <0xf8b00000 0x1000>; + interrupts = <0 49 4>; + clocks = <&clock HIX5HD2_FIXED_83M>; + clock-names = "apb_pclk"; + status = "disabled"; + }; diff --git a/Documentation/devicetree/bindings/clock/lsi,axm5516-clks.txt b/Documentation/devicetree/bindings/clock/lsi,axm5516-clks.txt new file mode 100644 index 000000000000..3ce97cfe999b --- /dev/null +++ b/Documentation/devicetree/bindings/clock/lsi,axm5516-clks.txt @@ -0,0 +1,29 @@ +AXM5516 clock driver bindings +----------------------------- + +Required properties : +- compatible : shall contain "lsi,axm5516-clks" +- reg : shall contain base register location and length +- #clock-cells : shall contain 1 + +The consumer specifies the desired clock by having the clock ID in its "clocks" +phandle cell. See <dt-bindings/clock/lsi,axxia-clock.h> for the list of +supported clock IDs. 
+ +Example: + + clks: clock-controller@2010020000 { + compatible = "lsi,axm5516-clks"; + #clock-cells = <1>; + reg = <0x20 0x10020000 0 0x20000>; + }; + + serial0: uart@2010080000 { + compatible = "arm,pl011", "arm,primecell"; + reg = <0x20 0x10080000 0 0x1000>; + interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>; + clocks = <&clks AXXIA_CLK_PER>; + clock-names = "apb_pclk"; + }; + }; + diff --git a/Documentation/devicetree/bindings/clock/mvebu-core-clock.txt b/Documentation/devicetree/bindings/clock/mvebu-core-clock.txt index 307a503c5db8..dc5ea5b22da9 100644 --- a/Documentation/devicetree/bindings/clock/mvebu-core-clock.txt +++ b/Documentation/devicetree/bindings/clock/mvebu-core-clock.txt @@ -29,6 +29,11 @@ The following is a list of provided IDs and clock names on Kirkwood and Dove: 2 = l2clk (L2 Cache clock derived from CPU0 clock) 3 = ddrclk (DDR controller clock derived from CPU0 clock) +The following is a list of provided IDs and clock names on Orion5x: + 0 = tclk (Internal Bus clock) + 1 = cpuclk (CPU0 clock) + 2 = ddrclk (DDR controller clock derived from CPU0 clock) + Required properties: - compatible : shall be one of the following: "marvell,armada-370-core-clock" - For Armada 370 SoC core clocks @@ -38,6 +43,9 @@ Required properties: "marvell,dove-core-clock" - for Dove SoC core clocks "marvell,kirkwood-core-clock" - for Kirkwood SoC (except mv88f6180) "marvell,mv88f6180-core-clock" - for Kirkwood MV88f6180 SoC + "marvell,mv88f5182-core-clock" - for Orion MV88F5182 SoC + "marvell,mv88f5281-core-clock" - for Orion MV88F5281 SoC + "marvell,mv88f6183-core-clock" - for Orion MV88F6183 SoC - reg : shall be the register address of the Sample-At-Reset (SAR) register - #clock-cells : from common clock binding; shall be set to 1 diff --git a/Documentation/devicetree/bindings/clock/qcom,gcc.txt b/Documentation/devicetree/bindings/clock/qcom,gcc.txt index 767401f42871..9cfcb4f2bc97 100644 --- a/Documentation/devicetree/bindings/clock/qcom,gcc.txt +++ b/Documentation/devicetree/bindings/clock/qcom,gcc.txt @@ -4,9 +4,12 @@ Qualcomm Global Clock & Reset Controller Binding Required properties : - compatible : shall contain only one of the following: + "qcom,gcc-apq8064" "qcom,gcc-msm8660" "qcom,gcc-msm8960" "qcom,gcc-msm8974" + "qcom,gcc-msm8974pro" + "qcom,gcc-msm8974pro-ac" - reg : shall contain base register location and length - #clock-cells : shall contain 1 diff --git a/Documentation/devicetree/bindings/clock/renesas,cpg-mstp-clocks.txt b/Documentation/devicetree/bindings/clock/renesas,cpg-mstp-clocks.txt index 6c3c0847e4fd..8a92b5fb3540 100644 --- a/Documentation/devicetree/bindings/clock/renesas,cpg-mstp-clocks.txt +++ b/Documentation/devicetree/bindings/clock/renesas,cpg-mstp-clocks.txt @@ -11,6 +11,7 @@ Required Properties: - compatible: Must be one of the following - "renesas,r7s72100-mstp-clocks" for R7S72100 (RZ) MSTP gate clocks + - "renesas,r8a7779-mstp-clocks" for R8A7779 (R-Car H1) MSTP gate clocks - "renesas,r8a7790-mstp-clocks" for R8A7790 (R-Car H2) MSTP gate clocks - "renesas,r8a7791-mstp-clocks" for R8A7791 (R-Car M2) MSTP gate clocks - "renesas,cpg-mstp-clock" for generic MSTP gate clocks diff --git a/Documentation/devicetree/bindings/clock/renesas,r8a7740-cpg-clocks.txt b/Documentation/devicetree/bindings/clock/renesas,r8a7740-cpg-clocks.txt new file mode 100644 index 000000000000..2c03302f86ed --- /dev/null +++ b/Documentation/devicetree/bindings/clock/renesas,r8a7740-cpg-clocks.txt @@ -0,0 +1,41 @@ +These bindings should be considered EXPERIMENTAL for now. 
+ +* Renesas R8A7740 Clock Pulse Generator (CPG) + +The CPG generates core clocks for the R8A7740 SoC. It includes three PLLs +and several fixed ratio and variable ratio dividers. + +Required Properties: + + - compatible: Must be "renesas,r8a7740-cpg-clocks" + + - reg: Base address and length of the memory resource used by the CPG + + - clocks: Reference to the three parent clocks + - #clock-cells: Must be 1 + - clock-output-names: The names of the clocks. Supported clocks are + "system", "pllc0", "pllc1", "pllc2", "r", "usb24s", "i", "zg", "b", + "m1", "hp", "hpp", "usbp", "s", "zb", "m3", and "cp". + + - renesas,mode: board-specific settings of the MD_CK* bits + + +Example +------- + +cpg_clocks: cpg_clocks@e6150000 { + compatible = "renesas,r8a7740-cpg-clocks"; + reg = <0xe6150000 0x10000>; + clocks = <&extal1_clk>, <&extal2_clk>, <&extalr_clk>; + #clock-cells = <1>; + clock-output-names = "system", "pllc0", "pllc1", + "pllc2", "r", + "usb24s", + "i", "zg", "b", "m1", "hp", + "hpp", "usbp", "s", "zb", "m3", + "cp"; +}; + +&cpg_clocks { + renesas,mode = <0x05>; +}; diff --git a/Documentation/devicetree/bindings/clock/renesas,r8a7779-cpg-clocks.txt b/Documentation/devicetree/bindings/clock/renesas,r8a7779-cpg-clocks.txt new file mode 100644 index 000000000000..ed3c8cb12f4e --- /dev/null +++ b/Documentation/devicetree/bindings/clock/renesas,r8a7779-cpg-clocks.txt @@ -0,0 +1,27 @@ +* Renesas R8A7779 Clock Pulse Generator (CPG) + +The CPG generates core clocks for the R8A7779. It includes one PLL and +several fixed ratio dividers + +Required Properties: + + - compatible: Must be "renesas,r8a7779-cpg-clocks" + - reg: Base address and length of the memory resource used by the CPG + + - clocks: Reference to the parent clock + - #clock-cells: Must be 1 + - clock-output-names: The names of the clocks. Supported clocks are "plla", + "z", "zs", "s", "s1", "p", "b", "out". + + +Example +------- + + cpg_clocks: cpg_clocks@ffc80000 { + compatible = "renesas,r8a7779-cpg-clocks"; + reg = <0 0xffc80000 0 0x30>; + clocks = <&extal_clk>; + #clock-cells = <1>; + clock-output-names = "plla", "z", "zs", "s", "s1", "p", + "b", "out"; + }; diff --git a/Documentation/devicetree/bindings/crypto/samsung-sss.txt b/Documentation/devicetree/bindings/crypto/samsung-sss.txt new file mode 100644 index 000000000000..a6dafa83c6df --- /dev/null +++ b/Documentation/devicetree/bindings/crypto/samsung-sss.txt @@ -0,0 +1,34 @@ +Samsung SoC SSS (Security SubSystem) module + +The SSS module in S5PV210 SoC supports the following: +-- Feeder (FeedCtrl) +-- Advanced Encryption Standard (AES) +-- Data Encryption Standard (DES)/3DES +-- Public Key Accelerator (PKA) +-- SHA-1/SHA-256/MD5/HMAC (SHA-1/SHA-256/MD5)/PRNG +-- PRNG: Pseudo Random Number Generator + +The SSS module in Exynos4 (Exynos4210) and +Exynos5 (Exynos5420 and Exynos5250) SoCs +supports the following also: +-- ARCFOUR (ARC4) +-- True Random Number Generator (TRNG) +-- Secure Key Manager + +Required properties: + +- compatible : Should contain entries for this and backward compatible + SSS versions: + - "samsung,s5pv210-secss" for S5PV210 SoC. + - "samsung,exynos4210-secss" for Exynos4210, Exynos4212, Exynos4412, Exynos5250, + Exynos5260 and Exynos5420 SoCs. +- reg : Offset and length of the register set for the module +- interrupts : interrupt specifiers of SSS module interrupts, should contain + following entries: + - first : feed control interrupt (required for all variants), + - second : hash interrupt (required only for samsung,s5pv210-secss). 
+ +- clocks : list of clock phandle and specifier pairs for all clocks listed in + clock-names property. +- clock-names : list of device clock input names; should contain one entry + "secss". diff --git a/Documentation/devicetree/bindings/i2c/i2c-arb-gpio-challenge.txt b/Documentation/devicetree/bindings/i2c/i2c-arb-gpio-challenge.txt index 1ac8ea8ade1d..bfeabb843941 100644 --- a/Documentation/devicetree/bindings/i2c/i2c-arb-gpio-challenge.txt +++ b/Documentation/devicetree/bindings/i2c/i2c-arb-gpio-challenge.txt @@ -8,6 +8,12 @@ the standard I2C multi-master rules. Using GPIOs is generally useful in the case where there is a device on the bus that has errata and/or bugs that makes standard multimaster mode not feasible. +Note that this scheme works well enough but has some downsides: +* It is nonstandard (not using standard I2C multimaster) +* Having two masters on a bus in general makes it relatively hard to debug + problems (hard to tell if i2c issues were caused by one master, another, or + some device on the bus). + Algorithm: diff --git a/Documentation/devicetree/bindings/i2c/i2c-cros-ec-tunnel.txt b/Documentation/devicetree/bindings/i2c/i2c-cros-ec-tunnel.txt new file mode 100644 index 000000000000..898f030eba62 --- /dev/null +++ b/Documentation/devicetree/bindings/i2c/i2c-cros-ec-tunnel.txt @@ -0,0 +1,39 @@ +I2C bus that tunnels through the ChromeOS EC (cros-ec) +====================================================== +On some ChromeOS board designs we've got a connection to the EC (embedded +controller) but no direct connection to some devices on the other side of +the EC (like a battery and PMIC). To get access to those devices we need +to tunnel our i2c commands through the EC. + +The node for this device should be under a cros-ec node like google,cros-ec-spi +or google,cros-ec-i2c. + + +Required properties: +- compatible: google,cros-ec-i2c-tunnel +- google,remote-bus: The EC bus we'd like to talk to. + +Optional child nodes: +- One node per I2C device connected to the tunnelled I2C bus. + + +Example: + cros-ec@0 { + compatible = "google,cros-ec-spi"; + + ... + + i2c-tunnel { + compatible = "google,cros-ec-i2c-tunnel"; + #address-cells = <1>; + #size-cells = <0>; + + google,remote-bus = <0>; + + battery: sbs-battery@b { + compatible = "sbs,sbs-battery"; + reg = <0xb>; + sbs,poll-retry-count = <1>; + }; + }; + } diff --git a/Documentation/devicetree/bindings/i2c/i2c-exynos5.txt b/Documentation/devicetree/bindings/i2c/i2c-exynos5.txt index 056732cfdcee..d4745e31f5c6 100644 --- a/Documentation/devicetree/bindings/i2c/i2c-exynos5.txt +++ b/Documentation/devicetree/bindings/i2c/i2c-exynos5.txt @@ -5,7 +5,14 @@ at various speeds ranging from 100khz to 3.4Mhz. Required properties: - compatible: value should be. - -> "samsung,exynos5-hsi2c", for i2c compatible with exynos5 hsi2c. + -> "samsung,exynos5-hsi2c", (DEPRECATED) + for i2c compatible with HSI2C available + on Exynos5250 and Exynos5420 SoCs. + -> "samsung,exynos5250-hsi2c", for i2c compatible with HSI2C available + on Exynos5250 and Exynos5420 SoCs. + -> "samsung,exynos5260-hsi2c", for i2c compatible with HSI2C available + on Exynos5260 SoCs. + - reg: physical base address of the controller and length of memory mapped region. - interrupts: interrupt number to the cpu. 
@@ -26,7 +33,7 @@ Optional properties: Example: hsi2c@12ca0000 { - compatible = "samsung,exynos5-hsi2c"; + compatible = "samsung,exynos5250-hsi2c"; reg = <0x12ca0000 0x100>; interrupts = <56>; clock-frequency = <100000>; diff --git a/Documentation/devicetree/bindings/i2c/i2c-mv64xxx.txt b/Documentation/devicetree/bindings/i2c/i2c-mv64xxx.txt index befd4fb4764f..5c30026921ae 100644 --- a/Documentation/devicetree/bindings/i2c/i2c-mv64xxx.txt +++ b/Documentation/devicetree/bindings/i2c/i2c-mv64xxx.txt @@ -5,7 +5,7 @@ Required properties : - reg : Offset and length of the register set for the device - compatible : Should be either: - - "allwinner,sun4i-i2c" + - "allwinner,sun4i-a10-i2c" - "allwinner,sun6i-a31-i2c" - "marvell,mv64xxx-i2c" - "marvell,mv78230-i2c" diff --git a/Documentation/devicetree/bindings/i2c/i2c-rcar.txt b/Documentation/devicetree/bindings/i2c/i2c-rcar.txt index dd8b2dd1edeb..16b3e07aa98f 100644 --- a/Documentation/devicetree/bindings/i2c/i2c-rcar.txt +++ b/Documentation/devicetree/bindings/i2c/i2c-rcar.txt @@ -7,6 +7,9 @@ Required properties: "renesas,i2c-r8a7779" "renesas,i2c-r8a7790" "renesas,i2c-r8a7791" + "renesas,i2c-r8a7792" + "renesas,i2c-r8a7793" + "renesas,i2c-r8a7794" - reg: physical base address of the controller and length of memory mapped region. - interrupts: interrupt specifier. diff --git a/Documentation/devicetree/bindings/i2c/i2c-sh_mobile.txt b/Documentation/devicetree/bindings/i2c/i2c-sh_mobile.txt new file mode 100644 index 000000000000..d2153ce36fa8 --- /dev/null +++ b/Documentation/devicetree/bindings/i2c/i2c-sh_mobile.txt @@ -0,0 +1,26 @@ +Device tree configuration for Renesas IIC (sh_mobile) driver + +Required properties: +- compatible : "renesas,iic-<soctype>". "renesas,rmobile-iic" as fallback +- reg : address start and address range size of device +- interrupts : interrupt of device +- clocks : clock for device +- #address-cells : should be <1> +- #size-cells : should be <0> + +Optional properties: +- clock-frequency : frequency of bus clock in Hz. Default 100kHz if unset. + +Pinctrl properties might be needed, too. See there. + +Example: + + iic0: i2c@e6500000 { + compatible = "renesas,iic-r8a7790", "renesas,rmobile-iic"; + reg = <0 0xe6500000 0 0x425>; + interrupts = <0 174 IRQ_TYPE_LEVEL_HIGH>; + clocks = <&mstp3_clks R8A7790_CLK_IIC0>; + clock-frequency = <400000>; + #address-cells = <1>; + #size-cells = <0>; + }; diff --git a/Documentation/devicetree/bindings/iommu/samsung,sysmmu.txt b/Documentation/devicetree/bindings/iommu/samsung,sysmmu.txt new file mode 100644 index 000000000000..6fa4c737af23 --- /dev/null +++ b/Documentation/devicetree/bindings/iommu/samsung,sysmmu.txt @@ -0,0 +1,70 @@ +Samsung Exynos IOMMU H/W, System MMU (System Memory Management Unit) + +Samsung's Exynos architecture contains System MMUs that enables scattered +physical memory chunks visible as a contiguous region to DMA-capable peripheral +devices like MFC, FIMC, FIMD, GScaler, FIMC-IS and so forth. + +System MMU is an IOMMU and supports identical translation table format to +ARMv7 translation tables with minimum set of page properties including access +permissions, shareability and security protection. In addition, System MMU has +another capabilities like L2 TLB or block-fetch buffers to minimize translation +latency. + +System MMUs are in many to one relation with peripheral devices, i.e. single +peripheral device might have multiple System MMUs (usually one for each bus +master), but one System MMU can handle transactions from only one peripheral +device. 
The relation between a System MMU and the peripheral device needs to be +defined in device node of the peripheral device. + +MFC in all Exynos SoCs and FIMD, M2M Scalers and G2D in Exynos5420 has 2 System +MMUs. +* MFC has one System MMU on its left and right bus. +* FIMD in Exynos5420 has one System MMU for window 0 and 4, the other system MMU + for window 1, 2 and 3. +* M2M Scalers and G2D in Exynos5420 has one System MMU on the read channel and + the other System MMU on the write channel. +The drivers must consider how to handle those System MMUs. One of the idea is +to implement child devices or sub-devices which are the client devices of the +System MMU. + +Note: +The current DT binding for the Exynos System MMU is incomplete. +The following properties can be removed or changed, if found incompatible with +the "Generic IOMMU Binding" support for attaching devices to the IOMMU. + +Required properties: +- compatible: Should be "samsung,exynos-sysmmu" +- reg: A tuple of base address and size of System MMU registers. +- interrupt-parent: The phandle of the interrupt controller of System MMU +- interrupts: An interrupt specifier for interrupt signal of System MMU, + according to the format defined by a particular interrupt + controller. +- clock-names: Should be "sysmmu" if the System MMU is needed to gate its clock. + Optional "master" if the clock to the System MMU is gated by + another gate clock other than "sysmmu". + Exynos4 SoCs, there needs no "master" clock. + Exynos5 SoCs, some System MMUs must have "master" clocks. +- clocks: Required if the System MMU is needed to gate its clock. +- samsung,power-domain: Required if the System MMU is needed to gate its power. + Please refer to the following document: + Documentation/devicetree/bindings/arm/exynos/power_domain.txt + +Examples: + gsc_0: gsc@13e00000 { + compatible = "samsung,exynos5-gsc"; + reg = <0x13e00000 0x1000>; + interrupts = <0 85 0>; + samsung,power-domain = <&pd_gsc>; + clocks = <&clock CLK_GSCL0>; + clock-names = "gscl"; + }; + + sysmmu_gsc0: sysmmu@13E80000 { + compatible = "samsung,exynos-sysmmu"; + reg = <0x13E80000 0x1000>; + interrupt-parent = <&combiner>; + interrupts = <2 0>; + clock-names = "sysmmu", "master"; + clocks = <&clock CLK_SMMU_GSCL0>, <&clock CLK_GSCL0>; + samsung,power-domain = <&pd_gsc>; + }; diff --git a/Documentation/devicetree/bindings/media/i2c/adv7604.txt b/Documentation/devicetree/bindings/media/i2c/adv7604.txt new file mode 100644 index 000000000000..c27cede3bd68 --- /dev/null +++ b/Documentation/devicetree/bindings/media/i2c/adv7604.txt @@ -0,0 +1,70 @@ +* Analog Devices ADV7604/11 video decoder with HDMI receiver + +The ADV7604 and ADV7611 are multiformat video decoders with an integrated HDMI +receiver. The ADV7604 has four multiplexed HDMI inputs and one analog input, +and the ADV7611 has one HDMI input and no analog input. + +These device tree bindings support the ADV7611 only at the moment. + +Required Properties: + + - compatible: Must contain one of the following + - "adi,adv7611" for the ADV7611 + + - reg: I2C slave address + + - hpd-gpios: References to the GPIOs that control the HDMI hot-plug + detection pins, one per HDMI input. The active flag indicates the GPIO + level that enables hot-plug detection. + +The device node must contain one 'port' child node per device input and output +port, in accordance with the video interface bindings defined in +Documentation/devicetree/bindings/media/video-interfaces.txt. The port nodes +are numbered as follows. 
+ + Port ADV7611 +------------------------------------------------------------ + HDMI 0 + Digital output 1 + +The digital output port node must contain at least one endpoint. + +Optional Properties: + + - reset-gpios: Reference to the GPIO connected to the device's reset pin. + +Optional Endpoint Properties: + + The following three properties are defined in video-interfaces.txt and are + valid for source endpoints only. + + - hsync-active: Horizontal synchronization polarity. Defaults to active low. + - vsync-active: Vertical synchronization polarity. Defaults to active low. + - pclk-sample: Pixel clock polarity. Defaults to output on the falling edge. + + If none of hsync-active, vsync-active and pclk-sample is specified the + endpoint will use embedded BT.656 synchronization. + + +Example: + + hdmi_receiver@4c { + compatible = "adi,adv7611"; + reg = <0x4c>; + + reset-gpios = <&ioexp 0 GPIO_ACTIVE_LOW>; + hpd-gpios = <&ioexp 2 GPIO_ACTIVE_HIGH>; + + #address-cells = <1>; + #size-cells = <0>; + + port@0 { + reg = <0>; + }; + port@1 { + reg = <1>; + hdmi_in: endpoint { + remote-endpoint = <&ccdc_in>; + }; + }; + }; diff --git a/Documentation/devicetree/bindings/media/renesas,vsp1.txt b/Documentation/devicetree/bindings/media/renesas,vsp1.txt new file mode 100644 index 000000000000..87fe08abf36d --- /dev/null +++ b/Documentation/devicetree/bindings/media/renesas,vsp1.txt @@ -0,0 +1,43 @@ +* Renesas VSP1 Video Processing Engine + +The VSP1 is a video processing engine that supports up-/down-scaling, alpha +blending, color space conversion and various other image processing features. +It can be found in the Renesas R-Car second generation SoCs. + +Required properties: + + - compatible: Must contain "renesas,vsp1" + + - reg: Base address and length of the registers block for the VSP1. + - interrupts: VSP1 interrupt specifier. + - clocks: A phandle + clock-specifier pair for the VSP1 functional clock. + + - renesas,#rpf: Number of Read Pixel Formatter (RPF) modules in the VSP1. + - renesas,#uds: Number of Up Down Scaler (UDS) modules in the VSP1. + - renesas,#wpf: Number of Write Pixel Formatter (WPF) modules in the VSP1. + + +Optional properties: + + - renesas,has-lif: Boolean, indicates that the LCD Interface (LIF) module is + available. + - renesas,has-lut: Boolean, indicates that the Look Up Table (LUT) module is + available. + - renesas,has-sru: Boolean, indicates that the Super Resolution Unit (SRU) + module is available. + + +Example: R8A7790 (R-Car H2) VSP1-S node + + vsp1@fe928000 { + compatible = "renesas,vsp1"; + reg = <0 0xfe928000 0 0x8000>; + interrupts = <0 267 IRQ_TYPE_LEVEL_HIGH>; + clocks = <&mstp1_clks R8A7790_CLK_VSP1_S>; + + renesas,has-lut; + renesas,has-sru; + renesas,#rpf = <5>; + renesas,#uds = <3>; + renesas,#wpf = <4>; + }; diff --git a/Documentation/devicetree/bindings/mfd/sun6i-prcm.txt b/Documentation/devicetree/bindings/mfd/sun6i-prcm.txt new file mode 100644 index 000000000000..1f5a31fef907 --- /dev/null +++ b/Documentation/devicetree/bindings/mfd/sun6i-prcm.txt @@ -0,0 +1,59 @@ +* Allwinner PRCM (Power/Reset/Clock Management) Multi-Functional Device + +PRCM is an MFD device exposing several Power Management related devices +(like clks and reset controllers). 
+ +Required properties: + - compatible: "allwinner,sun6i-a31-prcm" + - reg: The PRCM registers range + +The prcm node may contain several subdevices definitions: + - see Documentation/devicetree/clk/sunxi.txt for clock devices + - see Documentation/devicetree/reset/allwinner,sunxi-clock-reset.txt for reset + controller devices + + +Example: + + prcm: prcm@01f01400 { + compatible = "allwinner,sun6i-a31-prcm"; + reg = <0x01f01400 0x200>; + + /* Put subdevices here */ + ar100: ar100_clk { + compatible = "allwinner,sun6i-a31-ar100-clk"; + #clock-cells = <0>; + clocks = <&osc32k>, <&osc24M>, <&pll6>, <&pll6>; + }; + + ahb0: ahb0_clk { + compatible = "fixed-factor-clock"; + #clock-cells = <0>; + clock-div = <1>; + clock-mult = <1>; + clocks = <&ar100_div>; + clock-output-names = "ahb0"; + }; + + apb0: apb0_clk { + compatible = "allwinner,sun6i-a31-apb0-clk"; + #clock-cells = <0>; + clocks = <&ahb0>; + clock-output-names = "apb0"; + }; + + apb0_gates: apb0_gates_clk { + compatible = "allwinner,sun6i-a31-apb0-gates-clk"; + #clock-cells = <1>; + clocks = <&apb0>; + clock-output-names = "apb0_pio", "apb0_ir", + "apb0_timer01", "apb0_p2wi", + "apb0_uart", "apb0_1wire", + "apb0_i2c"; + }; + + apb0_rst: apb0_rst { + compatible = "allwinner,sun6i-a31-clock-reset"; + #reset-cells = <1>; + }; + }; diff --git a/Documentation/devicetree/bindings/mfd/ti-keystone-devctrl.txt b/Documentation/devicetree/bindings/mfd/ti-keystone-devctrl.txt new file mode 100644 index 000000000000..20963c76b4bc --- /dev/null +++ b/Documentation/devicetree/bindings/mfd/ti-keystone-devctrl.txt @@ -0,0 +1,19 @@ +* Device tree bindings for Texas Instruments keystone device state control + +The Keystone II devices have a set of registers that are used to control +the status of its peripherals. This node is intended to allow access to +this functionality. + +Required properties: + +- compatible: "ti,keystone-devctrl", "syscon" + +- reg: contains offset/length value for device state control + registers space. + +Example: + +devctrl: device-state-control@0x02620000 { + compatible = "ti,keystone-devctrl", "syscon"; + reg = <0x02620000 0x1000>; +}; diff --git a/Documentation/devicetree/bindings/mfd/twl6040.txt b/Documentation/devicetree/bindings/mfd/twl6040.txt index 0f5dd709d752..a41157b5d930 100644 --- a/Documentation/devicetree/bindings/mfd/twl6040.txt +++ b/Documentation/devicetree/bindings/mfd/twl6040.txt @@ -19,6 +19,8 @@ Required properties: Optional properties, nodes: - enable-active-high: To power on the twl6040 during boot. +- clocks: phandle to the clk32k clock provider +- clock-names: Must be "clk32k" Vibra functionality Required properties: diff --git a/Documentation/devicetree/bindings/mmc/sunxi-mmc.txt b/Documentation/devicetree/bindings/mmc/sunxi-mmc.txt new file mode 100644 index 000000000000..91b3a3467150 --- /dev/null +++ b/Documentation/devicetree/bindings/mmc/sunxi-mmc.txt @@ -0,0 +1,43 @@ +* Allwinner sunxi MMC controller + +The highspeed MMC host controller on Allwinner SoCs provides an interface +for MMC, SD and SDIO types of memory cards. + +Supported maximum speeds are the ones of the eMMC standard 4.5 as well +as the speed of SD standard 3.0. 
+Absolute maximum transfer rate is 200MB/s + +Required properties: + - compatible : "allwinner,sun4i-a10-mmc" or "allwinner,sun5i-a13-mmc" + - reg : mmc controller base registers + - clocks : a list with 2 phandle + clock specifier pairs + - clock-names : must contain "ahb" and "mmc" + - interrupts : mmc controller interrupt + +Optional properties: + - resets : phandle + reset specifier pair + - reset-names : must contain "ahb" + - for cd, bus-width and additional generic mmc parameters + please refer to mmc.txt within this directory + +Examples: + - Within .dtsi: + mmc0: mmc@01c0f000 { + compatible = "allwinner,sun5i-a13-mmc"; + reg = <0x01c0f000 0x1000>; + clocks = <&ahb_gates 8>, <&mmc0_clk>; + clock-names = "ahb", "mod"; + interrupts = <0 32 4>; + status = "disabled"; + }; + + - Within dts: + mmc0: mmc@01c0f000 { + pinctrl-names = "default", "default"; + pinctrl-0 = <&mmc0_pins_a>; + pinctrl-1 = <&mmc0_cd_pin_reference_design>; + bus-width = <4>; + cd-gpios = <&pio 7 1 0>; /* PH1 */ + cd-inverted; + status = "okay"; + }; diff --git a/Documentation/devicetree/bindings/rtc/haoyu,hym8563.txt b/Documentation/devicetree/bindings/rtc/haoyu,hym8563.txt index 31406fd4a43e..5c199ee044cb 100644 --- a/Documentation/devicetree/bindings/rtc/haoyu,hym8563.txt +++ b/Documentation/devicetree/bindings/rtc/haoyu,hym8563.txt @@ -9,6 +9,9 @@ Required properties: - interrupts: rtc alarm/event interrupt - #clock-cells: the value should be 0 +Optional properties: +- clock-output-names: From common clock binding + Example: hym8563: hym8563@51 { diff --git a/Documentation/devicetree/bindings/rtc/xgene-rtc.txt b/Documentation/devicetree/bindings/rtc/xgene-rtc.txt new file mode 100644 index 000000000000..fd195c358446 --- /dev/null +++ b/Documentation/devicetree/bindings/rtc/xgene-rtc.txt @@ -0,0 +1,28 @@ +* APM X-Gene Real Time Clock + +RTC controller for the APM X-Gene Real Time Clock + +Required properties: +- compatible : Should be "apm,xgene-rtc" +- reg: physical base address of the controller and length of memory mapped + region. +- interrupts: IRQ line for the RTC. +- #clock-cells: Should be 1. +- clocks: Reference to the clock entry. + +Example: + +rtcclk: rtcclk { + compatible = "fixed-clock"; + #clock-cells = <1>; + clock-frequency = <100000000>; + clock-output-names = "rtcclk"; +}; + +rtc: rtc@10510000 { + compatible = "apm,xgene-rtc"; + reg = <0x0 0x10510000 0x0 0x400>; + interrupts = <0x0 0x46 0x4>; + #clock-cells = <1>; + clocks = <&rtcclk 0>; +}; diff --git a/Documentation/efi-stub.txt b/Documentation/efi-stub.txt index c628788d5b47..7747024d3bb7 100644 --- a/Documentation/efi-stub.txt +++ b/Documentation/efi-stub.txt @@ -1,13 +1,21 @@ The EFI Boot Stub --------------------------- -On the x86 platform, a bzImage can masquerade as a PE/COFF image, -thereby convincing EFI firmware loaders to load it as an EFI -executable. The code that modifies the bzImage header, along with the -EFI-specific entry point that the firmware loader jumps to are -collectively known as the "EFI boot stub", and live in +On the x86 and ARM platforms, a kernel zImage/bzImage can masquerade +as a PE/COFF image, thereby convincing EFI firmware loaders to load +it as an EFI executable. The code that modifies the bzImage header, +along with the EFI-specific entry point that the firmware loader +jumps to are collectively known as the "EFI boot stub", and live in arch/x86/boot/header.S and arch/x86/boot/compressed/eboot.c, -respectively. +respectively. 
For ARM the EFI stub is implemented in +arch/arm/boot/compressed/efi-header.S and +arch/arm/boot/compressed/efi-stub.c. EFI stub code that is shared +between architectures is in drivers/firmware/efi/efi-stub-helper.c. + +For arm64, there is no compressed kernel support, so the Image itself +masquerades as a PE/COFF image and the EFI stub is linked into the +kernel. The arm64 EFI stub lives in arch/arm64/kernel/efi-entry.S +and arch/arm64/kernel/efi-stub.c. By using the EFI boot stub it's possible to boot a Linux kernel without the use of a conventional EFI boot loader, such as grub or @@ -23,7 +31,10 @@ The bzImage located in arch/x86/boot/bzImage must be copied to the EFI System Partition (ESP) and renamed with the extension ".efi". Without the extension the EFI firmware loader will refuse to execute it. It's not possible to execute bzImage.efi from the usual Linux file systems -because EFI firmware doesn't have support for them. +because EFI firmware doesn't have support for them. For ARM the +arch/arm/boot/zImage should be copied to the system partition, and it +may not need to be renamed. Similarly for arm64, arch/arm64/boot/Image +should be copied but not necessarily renamed. **** Passing kernel parameters from the EFI shell @@ -63,3 +74,11 @@ Notice how bzImage.efi can be specified with a relative path. That's because the image we're executing is interpreted by the EFI shell, which understands relative paths, whereas the rest of the command line is passed to bzImage.efi. + + +**** The "dtb=" option + +For the ARM and arm64 architectures, we also need to be able to provide a +device tree to the kernel. This is done with the "dtb=" command line option, +and is processed in the same manner as the "initrd=" option that is +described above. diff --git a/Documentation/filesystems/seq_file.txt b/Documentation/filesystems/seq_file.txt index a1e2e0dda907..1fe0ccb1af55 100644 --- a/Documentation/filesystems/seq_file.txt +++ b/Documentation/filesystems/seq_file.txt @@ -54,6 +54,15 @@ how the mechanism works without getting lost in other details. (Those wanting to see the full source for this module can find it at http://lwn.net/Articles/22359/). +Deprecated create_proc_entry + +Note that the above article uses create_proc_entry which was removed in +kernel 3.10. Current versions require the following update + +- entry = create_proc_entry("sequence", 0, NULL); +- if (entry) +- entry->proc_fops = &ct_file_ops; ++ entry = proc_create("sequence", 0, NULL, &ct_file_ops); The iterator interface diff --git a/Documentation/filesystems/vfat.txt b/Documentation/filesystems/vfat.txt index 4a93e98b290a..ce1126aceed8 100644 --- a/Documentation/filesystems/vfat.txt +++ b/Documentation/filesystems/vfat.txt @@ -172,6 +172,11 @@ nfs=stale_rw|nostale_ro To maintain backward compatibility, '-o nfs' is also accepted, defaulting to stale_rw +dos1xfloppy -- If set, use a fallback default BIOS Parameter Block + configuration, determined by backing device size. These static + parameters match defaults assumed by DOS 1.x for 160 kiB, + 180 kiB, 320 kiB, and 360 kiB floppies and floppy images. + <bool>: 0,1,yes,no,true,false diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt index 9973a7e2e0ac..b9f67781c577 100644 --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -2361,6 +2361,14 @@ bytes respectively. Such letter suffixes can also be entirely omitted. 
timeout < 0: reboot immediately Format: <timeout> + crash_kexec_post_notifiers + Run kdump after running panic-notifiers and dumping + kmsg. This only for the users who doubt kdump always + succeeds in any situation. + Note that this also increases risks of kdump failure, + because some panic notifiers can make the crashed + kernel more unstable. + parkbd.port= [HW] Parallel port number the keyboard adapter is connected to, default is 0. Format: <parport#> diff --git a/Documentation/kmemleak.txt b/Documentation/kmemleak.txt index a7563ec4ea7b..b772418bf064 100644 --- a/Documentation/kmemleak.txt +++ b/Documentation/kmemleak.txt @@ -142,6 +142,7 @@ kmemleak_alloc_percpu - notify of a percpu memory block allocation kmemleak_free - notify of a memory block freeing kmemleak_free_part - notify of a partial memory block freeing kmemleak_free_percpu - notify of a percpu memory block freeing +kmemleak_update_trace - update object allocation stack trace kmemleak_not_leak - mark an object as not a leak kmemleak_ignore - do not scan or report an object as leak kmemleak_scan_area - add scan areas inside a memory block diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt index 46412bded104..f1dc4a215593 100644 --- a/Documentation/memory-barriers.txt +++ b/Documentation/memory-barriers.txt @@ -115,8 +115,8 @@ For example, consider the following sequence of events: CPU 1 CPU 2 =============== =============== { A == 1; B == 2 } - A = 3; x = A; - B = 4; y = B; + A = 3; x = B; + B = 4; y = A; The set of accesses as seen by the memory system in the middle can be arranged in 24 different combinations: diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt index 9886c3d57fc2..708bb7f1b7e0 100644 --- a/Documentation/sysctl/kernel.txt +++ b/Documentation/sysctl/kernel.txt @@ -77,6 +77,7 @@ show up in /proc/sys/kernel: - shmmni - stop-a [ SPARC only ] - sysrq ==> Documentation/sysrq.txt +- sysctl_writes_strict - tainted - threads-max - unknown_nmi_panic @@ -762,6 +763,26 @@ without users and with a dead originative process will be destroyed. ============================================================== +sysctl_writes_strict: + +Control how file position affects the behavior of updating sysctl values +via the /proc/sys interface: + + -1 - Legacy per-write sysctl value handling, with no printk warnings. + Each write syscall must fully contain the sysctl value to be + written, and multiple writes on the same sysctl file descriptor + will rewrite the sysctl value, regardless of file position. + 0 - (default) Same behavior as above, but warn about processes that + perform writes to a sysctl file descriptor when the file position + is not 0. + 1 - Respect file position when writing sysctl strings. Multiple writes + will append to the sysctl value buffer. Anything past the max length + of the sysctl value buffer will be ignored. Writes to numeric sysctl + entries must always be at file position 0 and the value must be + fully contained in the buffer sent in the write syscall. + +============================================================== + tainted: Non-zero if the kernel has been tainted. 
Numeric values, which diff --git a/Documentation/vm/remap_file_pages.txt b/Documentation/vm/remap_file_pages.txt new file mode 100644 index 000000000000..560e4363a55d --- /dev/null +++ b/Documentation/vm/remap_file_pages.txt @@ -0,0 +1,28 @@ +The remap_file_pages() system call is used to create a nonlinear mapping, +that is, a mapping in which the pages of the file are mapped into a +nonsequential order in memory. The advantage of using remap_file_pages() +over using repeated calls to mmap(2) is that the former approach does not +require the kernel to create additional VMA (Virtual Memory Area) data +structures. + +Supporting of nonlinear mapping requires significant amount of non-trivial +code in kernel virtual memory subsystem including hot paths. Also to get +nonlinear mapping work kernel need a way to distinguish normal page table +entries from entries with file offset (pte_file). Kernel reserves flag in +PTE for this purpose. PTE flags are scarce resource especially on some CPU +architectures. It would be nice to free up the flag for other usage. + +Fortunately, there are not many users of remap_file_pages() in the wild. +It's only known that one enterprise RDBMS implementation uses the syscall +on 32-bit systems to map files bigger than can linearly fit into 32-bit +virtual address space. This use-case is not critical anymore since 64-bit +systems are widely available. + +The plan is to deprecate the syscall and replace it with an emulation. +The emulation will create new VMAs instead of nonlinear mappings. It's +going to work slower for rare users of remap_file_pages() but ABI is +preserved. + +One side effect of emulation (apart from performance) is that user can hit +vm.max_map_count limit more easily due to additional VMAs. See comment for +DEFAULT_MAX_MAP_COUNT for more details on the limit. 
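To make the emulation described in remap_file_pages.txt concrete, here is a minimal userspace sketch (not part of the patch set above) of how a nonlinear remap_file_pages() layout can be reproduced with plain mmap(MAP_FIXED) calls, at the cost of one extra VMA per out-of-order chunk; the file name "datafile" and its size are assumptions for illustration only.

	/*
	 * Minimal sketch of replacing a nonlinear remap_file_pages() mapping
	 * with ordinary mmap() calls.  The in-kernel emulation described in
	 * the text above does essentially this: each out-of-order chunk of
	 * the file ends up in its own VMA.
	 */
	#include <stdio.h>
	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		int fd = open("datafile", O_RDWR);	/* hypothetical file, >= 2 pages */
		if (fd < 0)
			return 1;

		/* Map a contiguous two-page window of the file, starting at offset 0. */
		char *win = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, 0);
		if (win == MAP_FAILED)
			return 1;

		/*
		 * Old approach (still one VMA):
		 *   remap_file_pages(win, page, 0, 1, 0);
		 * would make the first window page show file page 1.
		 *
		 * Equivalent without the syscall: map file offset 1*page at the
		 * same address with MAP_FIXED.  This splits the region into two
		 * VMAs, which is exactly the cost the text above describes.
		 */
		if (mmap(win, page, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_FIXED, fd, 1 * page) == MAP_FAILED)
			return 1;

		printf("first byte of window now backed by file offset %ld\n", page);
		munmap(win, 2 * page);
		close(fd);
		return 0;
	}

Each MAP_FIXED remap over part of an existing mapping splits it, which is why the emulated path can push a process toward the vm.max_map_count limit mentioned above.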
diff --git a/MAINTAINERS b/MAINTAINERS index 7d101d5ba953..1b22565c59ac 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -210,6 +210,13 @@ S: Supported F: Documentation/scsi/aacraid.txt F: drivers/scsi/aacraid/ +ABI/API +L: linux-api@vger.kernel.org +F: Documentation/ABI/ +F: include/linux/syscalls.h +F: include/uapi/ +F: kernel/sys_ni.c + ABIT UGURU 1,2 HARDWARE MONITOR DRIVER M: Hans de Goede <hdegoede@redhat.com> L: lm-sensors@lm-sensors.org @@ -647,7 +654,7 @@ F: sound/soc/codecs/ssm* F: sound/soc/codecs/sigmadsp.* ANALOG DEVICES INC ASOC DRIVERS -L: adi-buildroot-devel@lists.sourceforge.net +L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) L: alsa-devel@alsa-project.org (moderated for non-subscribers) W: http://blackfin.uclinux.org/ S: Supported @@ -808,6 +815,11 @@ F: arch/arm/boot/dts/at91*.dtsi F: arch/arm/boot/dts/sama*.dts F: arch/arm/boot/dts/sama*.dtsi +ARM/ATMEL AT91 Clock Support +M: Boris Brezillon <boris.brezillon@free-electrons.com> +S: Maintained +F: drivers/clk/at91 + ARM/CALXEDA HIGHBANK ARCHITECTURE M: Rob Herring <robh@kernel.org> L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) @@ -1758,54 +1770,54 @@ F: include/uapi/linux/bfs_fs.h BLACKFIN ARCHITECTURE M: Steven Miao <realmz6@gmail.com> -L: adi-buildroot-devel@lists.sourceforge.net +L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) T: git git://git.code.sf.net/p/adi-linux/code W: http://blackfin.uclinux.org S: Supported F: arch/blackfin/ BLACKFIN EMAC DRIVER -L: adi-buildroot-devel@lists.sourceforge.net +L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) W: http://blackfin.uclinux.org S: Supported F: drivers/net/ethernet/adi/ BLACKFIN RTC DRIVER -L: adi-buildroot-devel@lists.sourceforge.net +L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) W: http://blackfin.uclinux.org S: Supported F: drivers/rtc/rtc-bfin.c BLACKFIN SDH DRIVER M: Sonic Zhang <sonic.zhang@analog.com> -L: adi-buildroot-devel@lists.sourceforge.net +L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) W: http://blackfin.uclinux.org S: Supported F: drivers/mmc/host/bfin_sdh.c BLACKFIN SERIAL DRIVER M: Sonic Zhang <sonic.zhang@analog.com> -L: adi-buildroot-devel@lists.sourceforge.net +L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) W: http://blackfin.uclinux.org S: Supported F: drivers/tty/serial/bfin_uart.c BLACKFIN WATCHDOG DRIVER -L: adi-buildroot-devel@lists.sourceforge.net +L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) W: http://blackfin.uclinux.org S: Supported F: drivers/watchdog/bfin_wdt.c BLACKFIN I2C TWI DRIVER M: Sonic Zhang <sonic.zhang@analog.com> -L: adi-buildroot-devel@lists.sourceforge.net +L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) W: http://blackfin.uclinux.org/ S: Supported F: drivers/i2c/busses/i2c-bfin-twi.c BLACKFIN MEDIA DRIVER M: Scott Jiang <scott.jiang.linux@gmail.com> -L: adi-buildroot-devel@lists.sourceforge.net +L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) W: http://blackfin.uclinux.org/ S: Supported F: drivers/media/platform/blackfin/ @@ -3148,10 +3160,9 @@ S: Maintained F: drivers/scsi/eata_pio.* EBTABLES -M: Bart De Schuymer <bart.de.schuymer@pandora.be> L: netfilter-devel@vger.kernel.org W: http://ebtables.sourceforge.net/ -S: Maintained +S: Orphan F: include/linux/netfilter_bridge/ebt_*.h F: include/uapi/linux/netfilter_bridge/ebt_*.h F: 
net/bridge/netfilter/ebt*.c @@ -1,7 +1,7 @@ VERSION = 3 PATCHLEVEL = 15 SUBLEVEL = 0 -EXTRAVERSION = -rc8 +EXTRAVERSION = NAME = Shuffling Zombie Juror # *DOCUMENTATION* diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index ad89a033f17f..87b63fde06d7 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -165,12 +165,9 @@ config TRACE_IRQFLAGS_SUPPORT bool default y -config RWSEM_GENERIC_SPINLOCK - bool - default y - config RWSEM_XCHGADD_ALGORITHM bool + default y config ARCH_HAS_ILOG2_U32 bool @@ -1089,11 +1086,6 @@ source "arch/arm/firmware/Kconfig" source arch/arm/mm/Kconfig -config ARM_NR_BANKS - int - default 16 if ARCH_EP93XX - default 8 - config IWMMXT bool "Enable iWMMXt support" depends on CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK || CPU_PJ4 || CPU_PJ4B @@ -1214,19 +1206,6 @@ config ARM_ERRATA_742231 register of the Cortex-A9 which reduces the linefill issuing capabilities of the processor. -config PL310_ERRATA_588369 - bool "PL310 errata: Clean & Invalidate maintenance operations do not invalidate clean lines" - depends on CACHE_L2X0 - help - The PL310 L2 cache controller implements three types of Clean & - Invalidate maintenance operations: by Physical Address - (offset 0x7F0), by Index/Way (0x7F8) and by Way (0x7FC). - They are architecturally defined to behave as the execution of a - clean operation followed immediately by an invalidate operation, - both performing to the same memory location. This functionality - is not correctly implemented in PL310 as clean lines are not - invalidated as a result of these operations. - config ARM_ERRATA_643719 bool "ARM errata: LoUIS bit field in CLIDR register is incorrect" depends on CPU_V7 && SMP @@ -1249,17 +1228,6 @@ config ARM_ERRATA_720789 tables. The workaround changes the TLB flushing routines to invalidate entries regardless of the ASID. -config PL310_ERRATA_727915 - bool "PL310 errata: Background Clean & Invalidate by Way operation can cause data corruption" - depends on CACHE_L2X0 - help - PL310 implements the Clean & Invalidate by Way L2 cache maintenance - operation (offset 0x7FC). This operation runs in background so that - PL310 can handle normal accesses while it is in progress. Under very - rare circumstances, due to this erratum, write data can be lost when - PL310 treats a cacheable write transaction during a Clean & - Invalidate by Way operation. - config ARM_ERRATA_743622 bool "ARM errata: Faulty hazard checking in the Store Buffer may lead to data corruption" depends on CPU_V7 @@ -1285,21 +1253,6 @@ config ARM_ERRATA_751472 operation is received by a CPU before the ICIALLUIS has completed, potentially leading to corrupted entries in the cache or TLB. -config PL310_ERRATA_753970 - bool "PL310 errata: cache sync operation may be faulty" - depends on CACHE_PL310 - help - This option enables the workaround for the 753970 PL310 (r3p0) erratum. - - Under some condition the effect of cache sync operation on - the store buffer still remains when the operation completes. - This means that the store buffer is always asked to drain and - this prevents it from merging any further writes. The workaround - is to replace the normal offset of cache sync operation (0x730) - by another offset targeting an unmapped PL310 register 0x740. - This has the same effect as the cache sync operation: store buffer - drain and waiting for all buffers empty. 
- config ARM_ERRATA_754322 bool "ARM errata: possible faulty MMU translations following an ASID switch" depends on CPU_V7 @@ -1348,18 +1301,6 @@ config ARM_ERRATA_764369 relevant cache maintenance functions and sets a specific bit in the diagnostic control register of the SCU. -config PL310_ERRATA_769419 - bool "PL310 errata: no automatic Store Buffer drain" - depends on CACHE_L2X0 - help - On revisions of the PL310 prior to r3p2, the Store Buffer does - not automatically drain. This can cause normal, non-cacheable - writes to be retained when the memory system is idle, leading - to suboptimal I/O performance for drivers using coherent DMA. - This option adds a write barrier to the cpu_idle loop so that, - on systems with an outer cache, the store buffer is drained - explicitly. - config ARM_ERRATA_775420 bool "ARM errata: A data cache maintenance operation which aborts, might lead to deadlock" depends on CPU_V7 @@ -2279,6 +2220,11 @@ config ARCH_SUSPEND_POSSIBLE config ARM_CPU_SUSPEND def_bool PM_SLEEP +config ARCH_HIBERNATION_POSSIBLE + bool + depends on MMU + default y if ARCH_SUSPEND_POSSIBLE + endmenu source "net/Kconfig" diff --git a/arch/arm/boot/compressed/atags_to_fdt.c b/arch/arm/boot/compressed/atags_to_fdt.c index d1153c8a765a..9448aa0c6686 100644 --- a/arch/arm/boot/compressed/atags_to_fdt.c +++ b/arch/arm/boot/compressed/atags_to_fdt.c @@ -7,6 +7,8 @@ #define do_extend_cmdline 0 #endif +#define NR_BANKS 16 + static int node_offset(void *fdt, const char *node_path) { int offset = fdt_path_offset(fdt, node_path); diff --git a/arch/arm/boot/dts/bcm21664.dtsi b/arch/arm/boot/dts/bcm21664.dtsi index 08a44d41b672..8b366822bb43 100644 --- a/arch/arm/boot/dts/bcm21664.dtsi +++ b/arch/arm/boot/dts/bcm21664.dtsi @@ -14,6 +14,8 @@ #include <dt-bindings/interrupt-controller/arm-gic.h> #include <dt-bindings/interrupt-controller/irq.h> +#include "dt-bindings/clock/bcm21664.h" + #include "skeleton.dtsi" / { @@ -43,7 +45,7 @@ compatible = "brcm,bcm21664-dw-apb-uart", "snps,dw-apb-uart"; status = "disabled"; reg = <0x3e000000 0x118>; - clocks = <&uartb_clk>; + clocks = <&slave_ccu BCM21664_SLAVE_CCU_UARTB>; interrupts = <GIC_SPI 67 IRQ_TYPE_LEVEL_HIGH>; reg-shift = <2>; reg-io-width = <4>; @@ -53,7 +55,7 @@ compatible = "brcm,bcm21664-dw-apb-uart", "snps,dw-apb-uart"; status = "disabled"; reg = <0x3e001000 0x118>; - clocks = <&uartb2_clk>; + clocks = <&slave_ccu BCM21664_SLAVE_CCU_UARTB2>; interrupts = <GIC_SPI 66 IRQ_TYPE_LEVEL_HIGH>; reg-shift = <2>; reg-io-width = <4>; @@ -63,7 +65,7 @@ compatible = "brcm,bcm21664-dw-apb-uart", "snps,dw-apb-uart"; status = "disabled"; reg = <0x3e002000 0x118>; - clocks = <&uartb3_clk>; + clocks = <&slave_ccu BCM21664_SLAVE_CCU_UARTB3>; interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>; reg-shift = <2>; reg-io-width = <4>; @@ -85,7 +87,7 @@ compatible = "brcm,kona-timer"; reg = <0x35006000 0x1c>; interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>; - clocks = <&hub_timer_clk>; + clocks = <&aon_ccu BCM21664_AON_CCU_HUB_TIMER>; }; gpio: gpio@35003000 { @@ -106,7 +108,7 @@ compatible = "brcm,kona-sdhci"; reg = <0x3f180000 0x801c>; interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>; - clocks = <&sdio1_clk>; + clocks = <&master_ccu BCM21664_MASTER_CCU_SDIO1>; status = "disabled"; }; @@ -114,7 +116,7 @@ compatible = "brcm,kona-sdhci"; reg = <0x3f190000 0x801c>; interrupts = <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>; - clocks = <&sdio2_clk>; + clocks = <&master_ccu BCM21664_MASTER_CCU_SDIO2>; status = "disabled"; }; @@ -122,7 +124,7 @@ compatible = "brcm,kona-sdhci"; reg = <0x3f1a0000 
0x801c>; interrupts = <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>; - clocks = <&sdio3_clk>; + clocks = <&master_ccu BCM21664_MASTER_CCU_SDIO3>; status = "disabled"; }; @@ -130,7 +132,7 @@ compatible = "brcm,kona-sdhci"; reg = <0x3f1b0000 0x801c>; interrupts = <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>; - clocks = <&sdio4_clk>; + clocks = <&master_ccu BCM21664_MASTER_CCU_SDIO4>; status = "disabled"; }; @@ -140,7 +142,7 @@ interrupts = <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>; #address-cells = <1>; #size-cells = <0>; - clocks = <&bsc1_clk>; + clocks = <&slave_ccu BCM21664_SLAVE_CCU_BSC1>; status = "disabled"; }; @@ -150,7 +152,7 @@ interrupts = <GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>; #address-cells = <1>; #size-cells = <0>; - clocks = <&bsc2_clk>; + clocks = <&slave_ccu BCM21664_SLAVE_CCU_BSC2>; status = "disabled"; }; @@ -160,7 +162,7 @@ interrupts = <GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>; #address-cells = <1>; #size-cells = <0>; - clocks = <&bsc3_clk>; + clocks = <&slave_ccu BCM21664_SLAVE_CCU_BSC3>; status = "disabled"; }; @@ -170,105 +172,149 @@ interrupts = <GIC_SPI 170 IRQ_TYPE_LEVEL_HIGH>; #address-cells = <1>; #size-cells = <0>; - clocks = <&bsc4_clk>; + clocks = <&slave_ccu BCM21664_SLAVE_CCU_BSC4>; status = "disabled"; }; clocks { - bsc1_clk: bsc1 { - compatible = "fixed-clock"; - clock-frequency = <13000000>; - #clock-cells = <0>; - }; + #address-cells = <1>; + #size-cells = <1>; + ranges; - bsc2_clk: bsc2 { - compatible = "fixed-clock"; - clock-frequency = <13000000>; + /* + * Fixed clocks are defined before CCUs whose + * clocks may depend on them. + */ + + ref_32k_clk: ref_32k { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <32768>; }; - bsc3_clk: bsc3 { - compatible = "fixed-clock"; - clock-frequency = <13000000>; + bbl_32k_clk: bbl_32k { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <32768>; }; - bsc4_clk: bsc4 { + ref_13m_clk: ref_13m { + #clock-cells = <0>; compatible = "fixed-clock"; clock-frequency = <13000000>; - #clock-cells = <0>; }; - pmu_bsc_clk: pmu_bsc { + var_13m_clk: var_13m { + #clock-cells = <0>; compatible = "fixed-clock"; clock-frequency = <13000000>; - #clock-cells = <0>; }; - hub_timer_clk: hub_timer { - compatible = "fixed-clock"; - clock-frequency = <32768>; + dft_19_5m_clk: dft_19_5m { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <19500000>; }; - pwm_clk: pwm { + ref_crystal_clk: ref_crystal { + #clock-cells = <0>; compatible = "fixed-clock"; clock-frequency = <26000000>; - #clock-cells = <0>; }; - sdio1_clk: sdio1 { - compatible = "fixed-clock"; - clock-frequency = <48000000>; + ref_52m_clk: ref_52m { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <52000000>; }; - sdio2_clk: sdio2 { - compatible = "fixed-clock"; - clock-frequency = <48000000>; + var_52m_clk: var_52m { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <52000000>; }; - sdio3_clk: sdio3 { - compatible = "fixed-clock"; - clock-frequency = <48000000>; + usb_otg_ahb_clk: usb_otg_ahb { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <52000000>; }; - sdio4_clk: sdio4 { - compatible = "fixed-clock"; - clock-frequency = <48000000>; + ref_96m_clk: ref_96m { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <96000000>; }; - tmon_1m_clk: tmon_1m { - compatible = "fixed-clock"; - clock-frequency = <1000000>; + var_96m_clk: var_96m { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <96000000>; }; - uartb_clk: uartb { - compatible = "fixed-clock"; - 
clock-frequency = <13000000>; + ref_104m_clk: ref_104m { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <104000000>; }; - uartb2_clk: uartb2 { - compatible = "fixed-clock"; - clock-frequency = <13000000>; + var_104m_clk: var_104m { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <104000000>; }; - uartb3_clk: uartb3 { - compatible = "fixed-clock"; - clock-frequency = <13000000>; + ref_156m_clk: ref_156m { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <156000000>; }; - usb_otg_ahb_clk: usb_otg_ahb { - compatible = "fixed-clock"; - clock-frequency = <52000000>; + var_156m_clk: var_156m { #clock-cells = <0>; + compatible = "fixed-clock"; + clock-frequency = <156000000>; + }; + + root_ccu: root_ccu { + compatible = BCM21664_DT_ROOT_CCU_COMPAT; + reg = <0x35001000 0x0f00>; + #clock-cells = <1>; + clock-output-names = "frac_1m"; + }; + + aon_ccu: aon_ccu { + compatible = BCM21664_DT_AON_CCU_COMPAT; + reg = <0x35002000 0x0f00>; + #clock-cells = <1>; + clock-output-names = "hub_timer"; + }; + + master_ccu: master_ccu { + compatible = BCM21664_DT_MASTER_CCU_COMPAT; + reg = <0x3f001000 0x0f00>; + #clock-cells = <1>; + clock-output-names = "sdio1", + "sdio2", + "sdio3", + "sdio4", + "sdio1_sleep", + "sdio2_sleep", + "sdio3_sleep", + "sdio4_sleep"; + }; + + slave_ccu: slave_ccu { + compatible = BCM21664_DT_SLAVE_CCU_COMPAT; + reg = <0x3e011000 0x0f00>; + #clock-cells = <1>; + clock-output-names = "uartb", + "uartb2", + "uartb3", + "bsc1", + "bsc2", + "bsc3", + "bsc4"; }; }; diff --git a/arch/arm/boot/dts/marco.dtsi b/arch/arm/boot/dts/marco.dtsi index 0c9647d28765..fb354225740a 100644 --- a/arch/arm/boot/dts/marco.dtsi +++ b/arch/arm/boot/dts/marco.dtsi @@ -36,7 +36,7 @@ ranges = <0x40000000 0x40000000 0xa0000000>; l2-cache-controller@c0030000 { - compatible = "sirf,marco-pl310-cache", "arm,pl310-cache"; + compatible = "arm,pl310-cache"; reg = <0xc0030000 0x1000>; interrupts = <0 59 0>; arm,tag-latency = <1 1 1>; diff --git a/arch/arm/boot/dts/prima2.dtsi b/arch/arm/boot/dts/prima2.dtsi index 3df7ba860282..963b7e54ab15 100644 --- a/arch/arm/boot/dts/prima2.dtsi +++ b/arch/arm/boot/dts/prima2.dtsi @@ -48,7 +48,7 @@ ranges = <0x40000000 0x40000000 0x80000000>; l2-cache-controller@80040000 { - compatible = "arm,pl310-cache", "sirf,prima2-pl310-cache"; + compatible = "arm,pl310-cache"; reg = <0x80040000 0x1000>; interrupts = <59>; arm,tag-latency = <1 1 1>; diff --git a/arch/arm/boot/dts/sun4i-a10.dtsi b/arch/arm/boot/dts/sun4i-a10.dtsi index c7b794ed337d..d96e179490ce 100644 --- a/arch/arm/boot/dts/sun4i-a10.dtsi +++ b/arch/arm/boot/dts/sun4i-a10.dtsi @@ -713,7 +713,7 @@ }; i2c0: i2c@01c2ac00 { - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun4i-a10-i2c"; reg = <0x01c2ac00 0x400>; interrupts = <7>; clocks = <&apb1_gates 0>; @@ -724,7 +724,7 @@ }; i2c1: i2c@01c2b000 { - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun4i-a10-i2c"; reg = <0x01c2b000 0x400>; interrupts = <8>; clocks = <&apb1_gates 1>; @@ -735,7 +735,7 @@ }; i2c2: i2c@01c2b400 { - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun4i-a10-i2c"; reg = <0x01c2b400 0x400>; interrupts = <9>; clocks = <&apb1_gates 2>; diff --git a/arch/arm/boot/dts/sun5i-a10s.dtsi b/arch/arm/boot/dts/sun5i-a10s.dtsi index aa1dd59dc34a..b64f705d9008 100644 --- a/arch/arm/boot/dts/sun5i-a10s.dtsi +++ b/arch/arm/boot/dts/sun5i-a10s.dtsi @@ -560,7 +560,7 @@ i2c0: i2c@01c2ac00 { #address-cells = <1>; #size-cells = <0>; - compatible = 
"allwinner,sun4i-i2c"; + compatible = "allwinner,sun5i-a10s-i2c", "allwinner,sun4i-a10-i2c"; reg = <0x01c2ac00 0x400>; interrupts = <7>; clocks = <&apb1_gates 0>; @@ -571,7 +571,7 @@ i2c1: i2c@01c2b000 { #address-cells = <1>; #size-cells = <0>; - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun5i-a10s-i2c", "allwinner,sun4i-a10-i2c"; reg = <0x01c2b000 0x400>; interrupts = <8>; clocks = <&apb1_gates 1>; @@ -582,7 +582,7 @@ i2c2: i2c@01c2b400 { #address-cells = <1>; #size-cells = <0>; - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun5i-a10s-i2c", "allwinner,sun4i-a10-i2c"; reg = <0x01c2b400 0x400>; interrupts = <9>; clocks = <&apb1_gates 2>; diff --git a/arch/arm/boot/dts/sun5i-a13.dtsi b/arch/arm/boot/dts/sun5i-a13.dtsi index c9fdb7b3ecd9..3b2a94c40f6e 100644 --- a/arch/arm/boot/dts/sun5i-a13.dtsi +++ b/arch/arm/boot/dts/sun5i-a13.dtsi @@ -486,7 +486,7 @@ }; i2c0: i2c@01c2ac00 { - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun5i-a13-i2c", "allwinner,sun4i-a10-i2c"; reg = <0x01c2ac00 0x400>; interrupts = <7>; clocks = <&apb1_gates 0>; @@ -497,7 +497,7 @@ }; i2c1: i2c@01c2b000 { - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun5i-a13-i2c", "allwinner,sun4i-a10-i2c"; reg = <0x01c2b000 0x400>; interrupts = <8>; clocks = <&apb1_gates 1>; @@ -508,7 +508,7 @@ }; i2c2: i2c@01c2b400 { - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun5i-a13-i2c", "allwinner,sun4i-a10-i2c"; reg = <0x01c2b400 0x400>; interrupts = <9>; clocks = <&apb1_gates 2>; diff --git a/arch/arm/boot/dts/sun7i-a20.dtsi b/arch/arm/boot/dts/sun7i-a20.dtsi index 385933bac114..01e94664232a 100644 --- a/arch/arm/boot/dts/sun7i-a20.dtsi +++ b/arch/arm/boot/dts/sun7i-a20.dtsi @@ -863,7 +863,7 @@ }; i2c0: i2c@01c2ac00 { - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun7i-a20-i2c", "allwinner,sun4i-a10-i2c"; reg = <0x01c2ac00 0x400>; interrupts = <0 7 4>; clocks = <&apb1_gates 0>; @@ -874,7 +874,7 @@ }; i2c1: i2c@01c2b000 { - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun7i-a20-i2c", "allwinner,sun4i-a10-i2c"; reg = <0x01c2b000 0x400>; interrupts = <0 8 4>; clocks = <&apb1_gates 1>; @@ -885,7 +885,7 @@ }; i2c2: i2c@01c2b400 { - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun7i-a20-i2c", "allwinner,sun4i-a10-i2c"; reg = <0x01c2b400 0x400>; interrupts = <0 9 4>; clocks = <&apb1_gates 2>; @@ -896,7 +896,7 @@ }; i2c3: i2c@01c2b800 { - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun7i-a20-i2c", "allwinner,sun4i-a10-i2c"; reg = <0x01c2b800 0x400>; interrupts = <0 88 4>; clocks = <&apb1_gates 3>; @@ -907,7 +907,7 @@ }; i2c4: i2c@01c2c000 { - compatible = "allwinner,sun4i-i2c"; + compatible = "allwinner,sun7i-a20-i2c", "allwinner,sun4i-a10-i2c"; reg = <0x01c2c000 0x400>; interrupts = <0 89 4>; clocks = <&apb1_gates 15>; diff --git a/arch/arm/common/mcpm_entry.c b/arch/arm/common/mcpm_entry.c index 86fd60fefbc9..f91136ab447e 100644 --- a/arch/arm/common/mcpm_entry.c +++ b/arch/arm/common/mcpm_entry.c @@ -106,14 +106,14 @@ void mcpm_cpu_power_down(void) BUG(); } -int mcpm_cpu_power_down_finish(unsigned int cpu, unsigned int cluster) +int mcpm_wait_for_cpu_powerdown(unsigned int cpu, unsigned int cluster) { int ret; - if (WARN_ON_ONCE(!platform_ops || !platform_ops->power_down_finish)) + if (WARN_ON_ONCE(!platform_ops || !platform_ops->wait_for_powerdown)) return -EUNATCH; - ret = platform_ops->power_down_finish(cpu, cluster); + ret = platform_ops->wait_for_powerdown(cpu, cluster); if (ret) 
pr_warn("%s: cpu %u, cluster %u failed to power down (%d)\n", __func__, cpu, cluster, ret); diff --git a/arch/arm/common/mcpm_platsmp.c b/arch/arm/common/mcpm_platsmp.c index 177251a4dd9a..92e54d7c6f46 100644 --- a/arch/arm/common/mcpm_platsmp.c +++ b/arch/arm/common/mcpm_platsmp.c @@ -62,7 +62,7 @@ static int mcpm_cpu_kill(unsigned int cpu) cpu_to_pcpu(cpu, &pcpu, &pcluster); - return !mcpm_cpu_power_down_finish(pcpu, pcluster); + return !mcpm_wait_for_cpu_powerdown(pcpu, pcluster); } static int mcpm_cpu_disable(unsigned int cpu) diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild index 23e728ecf8ab..f5a357601983 100644 --- a/arch/arm/include/asm/Kbuild +++ b/arch/arm/include/asm/Kbuild @@ -21,6 +21,7 @@ generic-y += parport.h generic-y += poll.h generic-y += preempt.h generic-y += resource.h +generic-y += rwsem.h generic-y += sections.h generic-y += segment.h generic-y += sembuf.h diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index b974184f9941..57f0584e8d97 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -312,7 +312,7 @@ * you cannot return to the original mode. */ .macro safe_svcmode_maskall reg:req -#if __LINUX_ARM_ARCH__ >= 6 +#if __LINUX_ARM_ARCH__ >= 6 && !defined(CONFIG_CPU_V7M) mrs \reg , cpsr eor \reg, \reg, #HYP_MODE tst \reg, #MODE_MASK diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h index 8b8b61685a34..fd43f7f55b70 100644 --- a/arch/arm/include/asm/cacheflush.h +++ b/arch/arm/include/asm/cacheflush.h @@ -212,7 +212,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *, static inline void __flush_icache_all(void) { __flush_icache_preferred(); - dsb(); + dsb(ishst); } /* @@ -487,4 +487,6 @@ int set_memory_rw(unsigned long addr, int numpages); int set_memory_x(unsigned long addr, int numpages); int set_memory_nx(unsigned long addr, int numpages); +void flush_uprobe_xol_access(struct page *page, unsigned long uaddr, + void *kaddr, unsigned long len); #endif diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h index 6493802f880a..c3f11524f10c 100644 --- a/arch/arm/include/asm/cp15.h +++ b/arch/arm/include/asm/cp15.h @@ -42,24 +42,23 @@ #ifndef __ASSEMBLY__ #if __LINUX_ARM_ARCH__ >= 4 -#define vectors_high() (cr_alignment & CR_V) +#define vectors_high() (get_cr() & CR_V) #else #define vectors_high() (0) #endif #ifdef CONFIG_CPU_CP15 -extern unsigned long cr_no_alignment; /* defined in entry-armv.S */ extern unsigned long cr_alignment; /* defined in entry-armv.S */ -static inline unsigned int get_cr(void) +static inline unsigned long get_cr(void) { - unsigned int val; + unsigned long val; asm("mrc p15, 0, %0, c1, c0, 0 @ get CR" : "=r" (val) : : "cc"); return val; } -static inline void set_cr(unsigned int val) +static inline void set_cr(unsigned long val) { asm volatile("mcr p15, 0, %0, c1, c0, 0 @ set CR" : : "r" (val) : "cc"); @@ -80,10 +79,6 @@ static inline void set_auxcr(unsigned int val) isb(); } -#ifndef CONFIG_SMP -extern void adjust_cr(unsigned long mask, unsigned long set); -#endif - #define CPACC_FULL(n) (3 << (n * 2)) #define CPACC_SVC(n) (1 << (n * 2)) #define CPACC_DISABLE(n) (0 << (n * 2)) @@ -106,13 +101,17 @@ static inline void set_copro_access(unsigned int val) #else /* ifdef CONFIG_CPU_CP15 */ /* - * cr_alignment and cr_no_alignment are tightly coupled to cp15 (at least in the - * minds of the developers). 
Yielding 0 for machines without a cp15 (and making - * it read-only) is fine for most cases and saves quite some #ifdeffery. + * cr_alignment is tightly coupled to cp15 (at least in the minds of the + * developers). Yielding 0 for machines without a cp15 (and making it + * read-only) is fine for most cases and saves quite some #ifdeffery. */ -#define cr_no_alignment UL(0) #define cr_alignment UL(0) +static inline unsigned long get_cr(void) +{ + return 0; +} + #endif /* ifdef CONFIG_CPU_CP15 / else */ #endif /* ifndef __ASSEMBLY__ */ diff --git a/arch/arm/include/asm/cputype.h b/arch/arm/include/asm/cputype.h index 4764344367d4..8c2b7321a478 100644 --- a/arch/arm/include/asm/cputype.h +++ b/arch/arm/include/asm/cputype.h @@ -72,6 +72,7 @@ #define ARM_CPU_PART_CORTEX_A15 0xC0F0 #define ARM_CPU_PART_CORTEX_A7 0xC070 #define ARM_CPU_PART_CORTEX_A12 0xC0D0 +#define ARM_CPU_PART_CORTEX_A17 0xC0E0 #define ARM_CPU_XSCALE_ARCH_MASK 0xe000 #define ARM_CPU_XSCALE_ARCH_V1 0x2000 diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h index e701a4d9aa59..c45b61a4b4a5 100644 --- a/arch/arm/include/asm/dma-mapping.h +++ b/arch/arm/include/asm/dma-mapping.h @@ -58,21 +58,37 @@ static inline int dma_set_mask(struct device *dev, u64 mask) #ifndef __arch_pfn_to_dma static inline dma_addr_t pfn_to_dma(struct device *dev, unsigned long pfn) { + if (dev) + pfn -= dev->dma_pfn_offset; return (dma_addr_t)__pfn_to_bus(pfn); } static inline unsigned long dma_to_pfn(struct device *dev, dma_addr_t addr) { - return __bus_to_pfn(addr); + unsigned long pfn = __bus_to_pfn(addr); + + if (dev) + pfn += dev->dma_pfn_offset; + + return pfn; } static inline void *dma_to_virt(struct device *dev, dma_addr_t addr) { + if (dev) { + unsigned long pfn = dma_to_pfn(dev, addr); + + return phys_to_virt(__pfn_to_phys(pfn)); + } + return (void *)__bus_to_virt((unsigned long)addr); } static inline dma_addr_t virt_to_dma(struct device *dev, void *addr) { + if (dev) + return pfn_to_dma(dev, virt_to_pfn(addr)); + return (dma_addr_t)__virt_to_bus((unsigned long)(addr)); } @@ -105,6 +121,13 @@ static inline unsigned long dma_max_pfn(struct device *dev) } #define dma_max_pfn(dev) dma_max_pfn(dev) +static inline int set_arch_dma_coherent_ops(struct device *dev) +{ + set_dma_ops(dev, &arm_coherent_dma_ops); + return 0; +} +#define set_arch_dma_coherent_ops(dev) set_arch_dma_coherent_ops(dev) + static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr) { unsigned int offset = paddr & ~PAGE_MASK; diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h index bbae919bceb4..74124b0d0d79 100644 --- a/arch/arm/include/asm/fixmap.h +++ b/arch/arm/include/asm/fixmap.h @@ -1,24 +1,11 @@ #ifndef _ASM_FIXMAP_H #define _ASM_FIXMAP_H -/* - * Nothing too fancy for now. - * - * On ARM we already have well known fixed virtual addresses imposed by - * the architecture such as the vector page which is located at 0xffff0000, - * therefore a second level page table is already allocated covering - * 0xfff00000 upwards. - * - * The cache flushing code in proc-xscale.S uses the virtual area between - * 0xfffe0000 and 0xfffeffff. 
- */ - -#define FIXADDR_START 0xfff00000UL -#define FIXADDR_TOP 0xfffe0000UL +#define FIXADDR_START 0xffc00000UL +#define FIXADDR_TOP 0xffe00000UL #define FIXADDR_SIZE (FIXADDR_TOP - FIXADDR_START) -#define FIX_KMAP_BEGIN 0 -#define FIX_KMAP_END (FIXADDR_SIZE >> PAGE_SHIFT) +#define FIX_KMAP_NR_PTES (FIXADDR_SIZE >> PAGE_SHIFT) #define __fix_to_virt(x) (FIXADDR_START + ((x) << PAGE_SHIFT)) #define __virt_to_fix(x) (((x) - FIXADDR_START) >> PAGE_SHIFT) @@ -27,7 +14,7 @@ extern void __this_fixmap_does_not_exist(void); static inline unsigned long fix_to_virt(const unsigned int idx) { - if (idx >= FIX_KMAP_END) + if (idx >= FIX_KMAP_NR_PTES) __this_fixmap_does_not_exist(); return __fix_to_virt(idx); } diff --git a/arch/arm/include/asm/ftrace.h b/arch/arm/include/asm/ftrace.h index f89515adac60..eb577f4f5f70 100644 --- a/arch/arm/include/asm/ftrace.h +++ b/arch/arm/include/asm/ftrace.h @@ -52,15 +52,7 @@ extern inline void *return_address(unsigned int level) #endif -#define HAVE_ARCH_CALLER_ADDR - -#define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0)) -#define CALLER_ADDR1 ((unsigned long)return_address(1)) -#define CALLER_ADDR2 ((unsigned long)return_address(2)) -#define CALLER_ADDR3 ((unsigned long)return_address(3)) -#define CALLER_ADDR4 ((unsigned long)return_address(4)) -#define CALLER_ADDR5 ((unsigned long)return_address(5)) -#define CALLER_ADDR6 ((unsigned long)return_address(6)) +#define ftrace_return_addr(n) return_address(n) #endif /* ifndef __ASSEMBLY__ */ diff --git a/arch/arm/include/asm/glue-df.h b/arch/arm/include/asm/glue-df.h index 6b70f1b46a6e..04e18b656659 100644 --- a/arch/arm/include/asm/glue-df.h +++ b/arch/arm/include/asm/glue-df.h @@ -31,14 +31,6 @@ #undef CPU_DABORT_HANDLER #undef MULTI_DABORT -#if defined(CONFIG_CPU_ARM710) -# ifdef CPU_DABORT_HANDLER -# define MULTI_DABORT 1 -# else -# define CPU_DABORT_HANDLER cpu_arm7_data_abort -# endif -#endif - #ifdef CONFIG_CPU_ABRT_EV4 # ifdef CPU_DABORT_HANDLER # define MULTI_DABORT 1 diff --git a/arch/arm/include/asm/hardware/cache-l2x0.h b/arch/arm/include/asm/hardware/cache-l2x0.h index 6795ff743b3d..3a5ec1c25659 100644 --- a/arch/arm/include/asm/hardware/cache-l2x0.h +++ b/arch/arm/include/asm/hardware/cache-l2x0.h @@ -26,8 +26,8 @@ #define L2X0_CACHE_TYPE 0x004 #define L2X0_CTRL 0x100 #define L2X0_AUX_CTRL 0x104 -#define L2X0_TAG_LATENCY_CTRL 0x108 -#define L2X0_DATA_LATENCY_CTRL 0x10C +#define L310_TAG_LATENCY_CTRL 0x108 +#define L310_DATA_LATENCY_CTRL 0x10C #define L2X0_EVENT_CNT_CTRL 0x200 #define L2X0_EVENT_CNT1_CFG 0x204 #define L2X0_EVENT_CNT0_CFG 0x208 @@ -54,53 +54,93 @@ #define L2X0_LOCKDOWN_WAY_D_BASE 0x900 #define L2X0_LOCKDOWN_WAY_I_BASE 0x904 #define L2X0_LOCKDOWN_STRIDE 0x08 -#define L2X0_ADDR_FILTER_START 0xC00 -#define L2X0_ADDR_FILTER_END 0xC04 +#define L310_ADDR_FILTER_START 0xC00 +#define L310_ADDR_FILTER_END 0xC04 #define L2X0_TEST_OPERATION 0xF00 #define L2X0_LINE_DATA 0xF10 #define L2X0_LINE_TAG 0xF30 #define L2X0_DEBUG_CTRL 0xF40 -#define L2X0_PREFETCH_CTRL 0xF60 -#define L2X0_POWER_CTRL 0xF80 -#define L2X0_DYNAMIC_CLK_GATING_EN (1 << 1) -#define L2X0_STNDBY_MODE_EN (1 << 0) +#define L310_PREFETCH_CTRL 0xF60 +#define L310_POWER_CTRL 0xF80 +#define L310_DYNAMIC_CLK_GATING_EN (1 << 1) +#define L310_STNDBY_MODE_EN (1 << 0) /* Registers shifts and masks */ #define L2X0_CACHE_ID_PART_MASK (0xf << 6) #define L2X0_CACHE_ID_PART_L210 (1 << 6) +#define L2X0_CACHE_ID_PART_L220 (2 << 6) #define L2X0_CACHE_ID_PART_L310 (3 << 6) #define L2X0_CACHE_ID_RTL_MASK 0x3f -#define L2X0_CACHE_ID_RTL_R0P0 0x0 
-#define L2X0_CACHE_ID_RTL_R1P0 0x2 -#define L2X0_CACHE_ID_RTL_R2P0 0x4 -#define L2X0_CACHE_ID_RTL_R3P0 0x5 -#define L2X0_CACHE_ID_RTL_R3P1 0x6 -#define L2X0_CACHE_ID_RTL_R3P2 0x8 +#define L210_CACHE_ID_RTL_R0P2_02 0x00 +#define L210_CACHE_ID_RTL_R0P1 0x01 +#define L210_CACHE_ID_RTL_R0P2_01 0x02 +#define L210_CACHE_ID_RTL_R0P3 0x03 +#define L210_CACHE_ID_RTL_R0P4 0x0b +#define L210_CACHE_ID_RTL_R0P5 0x0f +#define L220_CACHE_ID_RTL_R1P7_01REL0 0x06 +#define L310_CACHE_ID_RTL_R0P0 0x00 +#define L310_CACHE_ID_RTL_R1P0 0x02 +#define L310_CACHE_ID_RTL_R2P0 0x04 +#define L310_CACHE_ID_RTL_R3P0 0x05 +#define L310_CACHE_ID_RTL_R3P1 0x06 +#define L310_CACHE_ID_RTL_R3P1_50REL0 0x07 +#define L310_CACHE_ID_RTL_R3P2 0x08 +#define L310_CACHE_ID_RTL_R3P3 0x09 -#define L2X0_AUX_CTRL_MASK 0xc0000fff +/* L2C auxiliary control register - bits common to L2C-210/220/310 */ +#define L2C_AUX_CTRL_WAY_SIZE_SHIFT 17 +#define L2C_AUX_CTRL_WAY_SIZE_MASK (7 << 17) +#define L2C_AUX_CTRL_WAY_SIZE(n) ((n) << 17) +#define L2C_AUX_CTRL_EVTMON_ENABLE BIT(20) +#define L2C_AUX_CTRL_PARITY_ENABLE BIT(21) +#define L2C_AUX_CTRL_SHARED_OVERRIDE BIT(22) +/* L2C-210/220 common bits */ #define L2X0_AUX_CTRL_DATA_RD_LATENCY_SHIFT 0 -#define L2X0_AUX_CTRL_DATA_RD_LATENCY_MASK 0x7 +#define L2X0_AUX_CTRL_DATA_RD_LATENCY_MASK (7 << 0) #define L2X0_AUX_CTRL_DATA_WR_LATENCY_SHIFT 3 -#define L2X0_AUX_CTRL_DATA_WR_LATENCY_MASK (0x7 << 3) +#define L2X0_AUX_CTRL_DATA_WR_LATENCY_MASK (7 << 3) #define L2X0_AUX_CTRL_TAG_LATENCY_SHIFT 6 -#define L2X0_AUX_CTRL_TAG_LATENCY_MASK (0x7 << 6) +#define L2X0_AUX_CTRL_TAG_LATENCY_MASK (7 << 6) #define L2X0_AUX_CTRL_DIRTY_LATENCY_SHIFT 9 -#define L2X0_AUX_CTRL_DIRTY_LATENCY_MASK (0x7 << 9) -#define L2X0_AUX_CTRL_ASSOCIATIVITY_SHIFT 16 -#define L2X0_AUX_CTRL_WAY_SIZE_SHIFT 17 -#define L2X0_AUX_CTRL_WAY_SIZE_MASK (0x7 << 17) -#define L2X0_AUX_CTRL_SHARE_OVERRIDE_SHIFT 22 -#define L2X0_AUX_CTRL_NS_LOCKDOWN_SHIFT 26 -#define L2X0_AUX_CTRL_NS_INT_CTRL_SHIFT 27 -#define L2X0_AUX_CTRL_DATA_PREFETCH_SHIFT 28 -#define L2X0_AUX_CTRL_INSTR_PREFETCH_SHIFT 29 -#define L2X0_AUX_CTRL_EARLY_BRESP_SHIFT 30 +#define L2X0_AUX_CTRL_DIRTY_LATENCY_MASK (7 << 9) +#define L2X0_AUX_CTRL_ASSOC_SHIFT 13 +#define L2X0_AUX_CTRL_ASSOC_MASK (15 << 13) +/* L2C-210 specific bits */ +#define L210_AUX_CTRL_WRAP_DISABLE BIT(12) +#define L210_AUX_CTRL_WA_OVERRIDE BIT(23) +#define L210_AUX_CTRL_EXCLUSIVE_ABORT BIT(24) +/* L2C-220 specific bits */ +#define L220_AUX_CTRL_EXCLUSIVE_CACHE BIT(12) +#define L220_AUX_CTRL_FWA_SHIFT 23 +#define L220_AUX_CTRL_FWA_MASK (3 << 23) +#define L220_AUX_CTRL_NS_LOCKDOWN BIT(26) +#define L220_AUX_CTRL_NS_INT_CTRL BIT(27) +/* L2C-310 specific bits */ +#define L310_AUX_CTRL_FULL_LINE_ZERO BIT(0) /* R2P0+ */ +#define L310_AUX_CTRL_HIGHPRIO_SO_DEV BIT(10) /* R2P0+ */ +#define L310_AUX_CTRL_STORE_LIMITATION BIT(11) /* R2P0+ */ +#define L310_AUX_CTRL_EXCLUSIVE_CACHE BIT(12) +#define L310_AUX_CTRL_ASSOCIATIVITY_16 BIT(16) +#define L310_AUX_CTRL_CACHE_REPLACE_RR BIT(25) /* R2P0+ */ +#define L310_AUX_CTRL_NS_LOCKDOWN BIT(26) +#define L310_AUX_CTRL_NS_INT_CTRL BIT(27) +#define L310_AUX_CTRL_DATA_PREFETCH BIT(28) +#define L310_AUX_CTRL_INSTR_PREFETCH BIT(29) +#define L310_AUX_CTRL_EARLY_BRESP BIT(30) /* R2P0+ */ -#define L2X0_LATENCY_CTRL_SETUP_SHIFT 0 -#define L2X0_LATENCY_CTRL_RD_SHIFT 4 -#define L2X0_LATENCY_CTRL_WR_SHIFT 8 +#define L310_LATENCY_CTRL_SETUP(n) ((n) << 0) +#define L310_LATENCY_CTRL_RD(n) ((n) << 4) +#define L310_LATENCY_CTRL_WR(n) ((n) << 8) -#define L2X0_ADDR_FILTER_EN 1 +#define L310_ADDR_FILTER_EN 1 + 
+#define L310_PREFETCH_CTRL_OFFSET_MASK 0x1f +#define L310_PREFETCH_CTRL_DBL_LINEFILL_INCR BIT(23) +#define L310_PREFETCH_CTRL_PREFETCH_DROP BIT(24) +#define L310_PREFETCH_CTRL_DBL_LINEFILL_WRAP BIT(27) +#define L310_PREFETCH_CTRL_DATA_PREFETCH BIT(28) +#define L310_PREFETCH_CTRL_INSTR_PREFETCH BIT(29) +#define L310_PREFETCH_CTRL_DBL_LINEFILL BIT(30) #define L2X0_CTRL_EN 1 diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h index 91b99abe7a95..535579511ed0 100644 --- a/arch/arm/include/asm/highmem.h +++ b/arch/arm/include/asm/highmem.h @@ -18,6 +18,7 @@ } while (0) extern pte_t *pkmap_page_table; +extern pte_t *fixmap_page_table; extern void *kmap_high(struct page *page); extern void kunmap_high(struct page *page); diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h index 8aa4cca74501..3d23418cbddd 100644 --- a/arch/arm/include/asm/io.h +++ b/arch/arm/include/asm/io.h @@ -179,6 +179,12 @@ static inline void __iomem *__typesafe_io(unsigned long addr) /* PCI fixed i/o mapping */ #define PCI_IO_VIRT_BASE 0xfee00000 +#if defined(CONFIG_PCI) +void pci_ioremap_set_mem_type(int mem_type); +#else +static inline void pci_ioremap_set_mem_type(int mem_type) {} +#endif + extern int pci_ioremap_io(unsigned int offset, phys_addr_t phys_addr); /* diff --git a/arch/arm/include/asm/mach/arch.h b/arch/arm/include/asm/mach/arch.h index 17a3fa2979e8..060a75e99263 100644 --- a/arch/arm/include/asm/mach/arch.h +++ b/arch/arm/include/asm/mach/arch.h @@ -14,7 +14,6 @@ #include <linux/reboot.h> struct tag; -struct meminfo; struct pt_regs; struct smp_operations; #ifdef CONFIG_SMP @@ -45,10 +44,12 @@ struct machine_desc { unsigned char reserve_lp1 :1; /* never has lp1 */ unsigned char reserve_lp2 :1; /* never has lp2 */ enum reboot_mode reboot_mode; /* default restart mode */ + unsigned l2c_aux_val; /* L2 cache aux value */ + unsigned l2c_aux_mask; /* L2 cache aux mask */ + void (*l2c_write_sec)(unsigned long, unsigned); struct smp_operations *smp; /* SMP operations */ bool (*smp_init)(void); - void (*fixup)(struct tag *, char **, - struct meminfo *); + void (*fixup)(struct tag *, char **); void (*init_meminfo)(void); void (*reserve)(void);/* reserve mem blocks */ void (*map_io)(void);/* IO mapping function */ diff --git a/arch/arm/include/asm/mcpm.h b/arch/arm/include/asm/mcpm.h index a5ff410dcdb6..d9702eb0b02b 100644 --- a/arch/arm/include/asm/mcpm.h +++ b/arch/arm/include/asm/mcpm.h @@ -98,14 +98,14 @@ int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster); * previously in which case the caller should take appropriate action. * * On success, the CPU is not guaranteed to be truly halted until - * mcpm_cpu_power_down_finish() subsequently returns non-zero for the + * mcpm_wait_for_cpu_powerdown() subsequently returns non-zero for the * specified cpu. Until then, other CPUs should make sure they do not * trash memory the target CPU might be executing/accessing. 
*/ void mcpm_cpu_power_down(void); /** - * mcpm_cpu_power_down_finish - wait for a specified CPU to halt, and + * mcpm_wait_for_cpu_powerdown - wait for a specified CPU to halt, and * make sure it is powered off * * @cpu: CPU number within given cluster @@ -127,7 +127,7 @@ void mcpm_cpu_power_down(void); * - zero if the CPU is in a safely parked state * - nonzero otherwise (e.g., timeout) */ -int mcpm_cpu_power_down_finish(unsigned int cpu, unsigned int cluster); +int mcpm_wait_for_cpu_powerdown(unsigned int cpu, unsigned int cluster); /** * mcpm_cpu_suspend - bring the calling CPU in a suspended state @@ -171,7 +171,7 @@ int mcpm_cpu_powered_up(void); struct mcpm_platform_ops { int (*power_up)(unsigned int cpu, unsigned int cluster); void (*power_down)(void); - int (*power_down_finish)(unsigned int cpu, unsigned int cluster); + int (*wait_for_powerdown)(unsigned int cpu, unsigned int cluster); void (*suspend)(u64); void (*powered_up)(void); }; diff --git a/arch/arm/include/asm/memblock.h b/arch/arm/include/asm/memblock.h index c2f5102ae659..bf47a6c110a2 100644 --- a/arch/arm/include/asm/memblock.h +++ b/arch/arm/include/asm/memblock.h @@ -1,10 +1,9 @@ #ifndef _ASM_ARM_MEMBLOCK_H #define _ASM_ARM_MEMBLOCK_H -struct meminfo; struct machine_desc; -void arm_memblock_init(struct meminfo *, const struct machine_desc *); +void arm_memblock_init(const struct machine_desc *); phys_addr_t arm_memblock_steal(phys_addr_t size, phys_addr_t align); #endif diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h index 02fa2558f662..2b751464d6ff 100644 --- a/arch/arm/include/asm/memory.h +++ b/arch/arm/include/asm/memory.h @@ -83,8 +83,6 @@ */ #define IOREMAP_MAX_ORDER 24 -#define CONSISTENT_END (0xffe00000UL) - #else /* CONFIG_MMU */ /* diff --git a/arch/arm/include/asm/outercache.h b/arch/arm/include/asm/outercache.h index f94784f0e3a6..891a56b35bcf 100644 --- a/arch/arm/include/asm/outercache.h +++ b/arch/arm/include/asm/outercache.h @@ -28,53 +28,84 @@ struct outer_cache_fns { void (*clean_range)(unsigned long, unsigned long); void (*flush_range)(unsigned long, unsigned long); void (*flush_all)(void); - void (*inv_all)(void); void (*disable)(void); #ifdef CONFIG_OUTER_CACHE_SYNC void (*sync)(void); #endif - void (*set_debug)(unsigned long); void (*resume)(void); + + /* This is an ARM L2C thing */ + void (*write_sec)(unsigned long, unsigned); }; extern struct outer_cache_fns outer_cache; #ifdef CONFIG_OUTER_CACHE - +/** + * outer_inv_range - invalidate range of outer cache lines + * @start: starting physical address, inclusive + * @end: end physical address, exclusive + */ static inline void outer_inv_range(phys_addr_t start, phys_addr_t end) { if (outer_cache.inv_range) outer_cache.inv_range(start, end); } + +/** + * outer_clean_range - clean dirty outer cache lines + * @start: starting physical address, inclusive + * @end: end physical address, exclusive + */ static inline void outer_clean_range(phys_addr_t start, phys_addr_t end) { if (outer_cache.clean_range) outer_cache.clean_range(start, end); } + +/** + * outer_flush_range - clean and invalidate outer cache lines + * @start: starting physical address, inclusive + * @end: end physical address, exclusive + */ static inline void outer_flush_range(phys_addr_t start, phys_addr_t end) { if (outer_cache.flush_range) outer_cache.flush_range(start, end); } +/** + * outer_flush_all - clean and invalidate all cache lines in the outer cache + * + * Note: depending on implementation, this may not be atomic - it must + * only be called with 
interrupts disabled and no other active outer + * cache masters. + * + * It is intended that this function is only used by implementations + * needing to override the outer_cache.disable() method due to security. + * (Some implementations perform this as a clean followed by an invalidate.) + */ static inline void outer_flush_all(void) { if (outer_cache.flush_all) outer_cache.flush_all(); } -static inline void outer_inv_all(void) -{ - if (outer_cache.inv_all) - outer_cache.inv_all(); -} - -static inline void outer_disable(void) -{ - if (outer_cache.disable) - outer_cache.disable(); -} +/** + * outer_disable - clean, invalidate and disable the outer cache + * + * Disable the outer cache, ensuring that any data contained in the outer + * cache is pushed out to lower levels of system memory. The note and + * conditions above concerning outer_flush_all() applies here. + */ +extern void outer_disable(void); +/** + * outer_resume - restore the cache configuration and re-enable outer cache + * + * Restore any configuration that the cache had when previously enabled, + * and re-enable the outer cache. + */ static inline void outer_resume(void) { if (outer_cache.resume) @@ -90,13 +121,18 @@ static inline void outer_clean_range(phys_addr_t start, phys_addr_t end) static inline void outer_flush_range(phys_addr_t start, phys_addr_t end) { } static inline void outer_flush_all(void) { } -static inline void outer_inv_all(void) { } static inline void outer_disable(void) { } static inline void outer_resume(void) { } #endif #ifdef CONFIG_OUTER_CACHE_SYNC +/** + * outer_sync - perform a sync point for outer cache + * + * Ensure that all outer cache operations are complete and any store + * buffers are drained. + */ static inline void outer_sync(void) { if (outer_cache.sync) diff --git a/arch/arm/include/asm/setup.h b/arch/arm/include/asm/setup.h index 8d6a089dfb76..e0adb9f1bf94 100644 --- a/arch/arm/include/asm/setup.h +++ b/arch/arm/include/asm/setup.h @@ -21,34 +21,6 @@ #define __tagtable(tag, fn) \ static const struct tagtable __tagtable_##fn __tag = { tag, fn } -/* - * Memory map description - */ -#define NR_BANKS CONFIG_ARM_NR_BANKS - -struct membank { - phys_addr_t start; - phys_addr_t size; - unsigned int highmem; -}; - -struct meminfo { - int nr_banks; - struct membank bank[NR_BANKS]; -}; - -extern struct meminfo meminfo; - -#define for_each_bank(iter,mi) \ - for (iter = 0; iter < (mi)->nr_banks; iter++) - -#define bank_pfn_start(bank) __phys_to_pfn((bank)->start) -#define bank_pfn_end(bank) __phys_to_pfn((bank)->start + (bank)->size) -#define bank_pfn_size(bank) ((bank)->size >> PAGE_SHIFT) -#define bank_phys_start(bank) (bank)->start -#define bank_phys_end(bank) ((bank)->start + (bank)->size) -#define bank_phys_size(bank) (bank)->size - extern int arm_add_memory(u64 start, u64 size); extern void early_print(const char *str, ...); extern void dump_machine_table(void); diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile index 040619c32d68..38ddd9f83d0e 100644 --- a/arch/arm/kernel/Makefile +++ b/arch/arm/kernel/Makefile @@ -39,6 +39,7 @@ obj-$(CONFIG_ARTHUR) += arthur.o obj-$(CONFIG_ISA_DMA) += dma-isa.o obj-$(CONFIG_PCI) += bios32.o isa.o obj-$(CONFIG_ARM_CPU_SUSPEND) += sleep.o suspend.o +obj-$(CONFIG_HIBERNATION) += hibernate.o obj-$(CONFIG_SMP) += smp.o ifdef CONFIG_MMU obj-$(CONFIG_SMP) += smp_tlb.o diff --git a/arch/arm/kernel/atags_parse.c b/arch/arm/kernel/atags_parse.c index 8c14de8180c0..7807ef58a2ab 100644 --- a/arch/arm/kernel/atags_parse.c +++ b/arch/arm/kernel/atags_parse.c 
@@ -22,6 +22,7 @@ #include <linux/fs.h> #include <linux/root_dev.h> #include <linux/screen_info.h> +#include <linux/memblock.h> #include <asm/setup.h> #include <asm/system_info.h> @@ -222,10 +223,10 @@ setup_machine_tags(phys_addr_t __atags_pointer, unsigned int machine_nr) } if (mdesc->fixup) - mdesc->fixup(tags, &from, &meminfo); + mdesc->fixup(tags, &from); if (tags->hdr.tag == ATAG_CORE) { - if (meminfo.nr_banks != 0) + if (memblock_phys_mem_size()) squash_mem_tags(tags); save_atags(tags); parse_tags(tags); diff --git a/arch/arm/kernel/devtree.c b/arch/arm/kernel/devtree.c index ea9ce92a4b52..e94a157ddff1 100644 --- a/arch/arm/kernel/devtree.c +++ b/arch/arm/kernel/devtree.c @@ -27,10 +27,6 @@ #include <asm/mach/arch.h> #include <asm/mach-types.h> -void __init early_init_dt_add_memory_arch(u64 base, u64 size) -{ - arm_add_memory(base, size); -} #ifdef CONFIG_SMP extern struct of_cpu_method __cpu_method_of_table[]; diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index 1879e8dd2acc..52a949a8077d 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -344,7 +344,7 @@ ENDPROC(__pabt_svc) @ @ Enable the alignment trap while in kernel mode @ - alignment_trap r0 + alignment_trap r0, .LCcralign @ @ Clear FP to mark the first stack frame @@ -413,6 +413,11 @@ __und_usr: @ adr r9, BSYM(ret_from_exception) + @ IRQs must be enabled before attempting to read the instruction from + @ user space since that could cause a page/translation fault if the + @ page table was modified by another CPU. + enable_irq + tst r3, #PSR_T_BIT @ Thumb mode? bne __und_usr_thumb sub r4, r2, #4 @ ARM instr at LR - 4 @@ -484,7 +489,8 @@ ENDPROC(__und_usr) */ .pushsection .fixup, "ax" .align 2 -4: mov pc, r9 +4: str r4, [sp, #S_PC] @ retry current instruction + mov pc, r9 .popsection .pushsection __ex_table,"a" .long 1b, 4b @@ -517,7 +523,7 @@ ENDPROC(__und_usr) * r9 = normal "successful" return address * r10 = this threads thread_info structure * lr = unrecognised instruction return address - * IRQs disabled, FIQs enabled. + * IRQs enabled, FIQs enabled. 
*/ @ @ Fall-through from Thumb-2 __und_usr @@ -624,7 +630,6 @@ call_fpe: #endif do_fpe: - enable_irq ldr r4, .LCfp add r10, r10, #TI_FPSTATE @ r10 = workspace ldr pc, [r4] @ Call FP module USR entry point @@ -652,8 +657,7 @@ __und_usr_fault_32: b 1f __und_usr_fault_16: mov r1, #2 -1: enable_irq - mov r0, sp +1: mov r0, sp adr lr, BSYM(ret_from_exception) b __und_fault ENDPROC(__und_usr_fault_32) @@ -1143,11 +1147,8 @@ __vectors_start: .data .globl cr_alignment - .globl cr_no_alignment cr_alignment: .space 4 -cr_no_alignment: - .space 4 #ifdef CONFIG_MULTI_IRQ_HANDLER .globl handle_arch_irq diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S index a2dcafdf1bc8..7139d4a7dea7 100644 --- a/arch/arm/kernel/entry-common.S +++ b/arch/arm/kernel/entry-common.S @@ -365,13 +365,7 @@ ENTRY(vector_swi) str r0, [sp, #S_OLD_R0] @ Save OLD_R0 #endif zero_fp - -#ifdef CONFIG_ALIGNMENT_TRAP - ldr ip, __cr_alignment - ldr ip, [ip] - mcr p15, 0, ip, c1, c0 @ update control register -#endif - + alignment_trap ip, __cr_alignment enable_irq ct_user_exit get_thread_info tsk diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S index efb208de75ec..5d702f8900b1 100644 --- a/arch/arm/kernel/entry-header.S +++ b/arch/arm/kernel/entry-header.S @@ -37,9 +37,9 @@ #endif .endm - .macro alignment_trap, rtemp + .macro alignment_trap, rtemp, label #ifdef CONFIG_ALIGNMENT_TRAP - ldr \rtemp, .LCcralign + ldr \rtemp, \label ldr \rtemp, [\rtemp] mcr p15, 0, \rtemp, c1, c0 #endif diff --git a/arch/arm/kernel/ftrace.c b/arch/arm/kernel/ftrace.c index c108ddcb9ba4..af9a8a927a4e 100644 --- a/arch/arm/kernel/ftrace.c +++ b/arch/arm/kernel/ftrace.c @@ -14,6 +14,7 @@ #include <linux/ftrace.h> #include <linux/uaccess.h> +#include <linux/module.h> #include <asm/cacheflush.h> #include <asm/opcodes.h> @@ -63,6 +64,18 @@ static unsigned long adjust_address(struct dyn_ftrace *rec, unsigned long addr) } #endif +int ftrace_arch_code_modify_prepare(void) +{ + set_all_modules_text_rw(); + return 0; +} + +int ftrace_arch_code_modify_post_process(void) +{ + set_all_modules_text_ro(); + return 0; +} + static unsigned long ftrace_call_replace(unsigned long pc, unsigned long addr) { return arm_gen_branch_link(pc, addr); diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S index c96ecacb2021..572a38335c96 100644 --- a/arch/arm/kernel/head-common.S +++ b/arch/arm/kernel/head-common.S @@ -99,8 +99,7 @@ __mmap_switched: str r1, [r5] @ Save machine type str r2, [r6] @ Save atags pointer cmp r7, #0 - bicne r4, r0, #CR_A @ Clear 'A' bit - stmneia r7, {r0, r4} @ Save control register values + strne r0, [r7] @ Save control register values b start_kernel ENDPROC(__mmap_switched) diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S index 591d6e4a6492..2c35f0ff2fdc 100644 --- a/arch/arm/kernel/head.S +++ b/arch/arm/kernel/head.S @@ -475,7 +475,7 @@ ENDPROC(__turn_mmu_on) #ifdef CONFIG_SMP_ON_UP - __INIT + __HEAD __fixup_smp: and r3, r9, #0x000f0000 @ architecture version teq r3, #0x000f0000 @ CPU ID supported? diff --git a/arch/arm/kernel/hibernate.c b/arch/arm/kernel/hibernate.c new file mode 100644 index 000000000000..bb8b79648643 --- /dev/null +++ b/arch/arm/kernel/hibernate.c @@ -0,0 +1,107 @@ +/* + * Hibernation support specific for ARM + * + * Derived from work on ARM hibernation support by: + * + * Ubuntu project, hibernation support for mach-dove + * Copyright (C) 2010 Nokia Corporation (Hiroshi Doyu) + * Copyright (C) 2010 Texas Instruments, Inc. (Teerth Reddy et al.) 
+ * https://lkml.org/lkml/2010/6/18/4 + * https://lists.linux-foundation.org/pipermail/linux-pm/2010-June/027422.html + * https://patchwork.kernel.org/patch/96442/ + * + * Copyright (C) 2006 Rafael J. Wysocki <rjw@sisk.pl> + * + * License terms: GNU General Public License (GPL) version 2 + */ + +#include <linux/mm.h> +#include <linux/suspend.h> +#include <asm/system_misc.h> +#include <asm/idmap.h> +#include <asm/suspend.h> +#include <asm/memory.h> + +extern const void __nosave_begin, __nosave_end; + +int pfn_is_nosave(unsigned long pfn) +{ + unsigned long nosave_begin_pfn = virt_to_pfn(&__nosave_begin); + unsigned long nosave_end_pfn = virt_to_pfn(&__nosave_end - 1); + + return (pfn >= nosave_begin_pfn) && (pfn <= nosave_end_pfn); +} + +void notrace save_processor_state(void) +{ + WARN_ON(num_online_cpus() != 1); + local_fiq_disable(); +} + +void notrace restore_processor_state(void) +{ + local_fiq_enable(); +} + +/* + * Snapshot kernel memory and reset the system. + * + * swsusp_save() is executed in the suspend finisher so that the CPU + * context pointer and memory are part of the saved image, which is + * required by the resume kernel image to restart execution from + * swsusp_arch_suspend(). + * + * soft_restart is not technically needed, but is used to get success + * returned from cpu_suspend. + * + * When soft reboot completes, the hibernation snapshot is written out. + */ +static int notrace arch_save_image(unsigned long unused) +{ + int ret; + + ret = swsusp_save(); + if (ret == 0) + soft_restart(virt_to_phys(cpu_resume)); + return ret; +} + +/* + * Save the current CPU state before suspend / poweroff. + */ +int notrace swsusp_arch_suspend(void) +{ + return cpu_suspend(0, arch_save_image); +} + +/* + * Restore page contents for physical pages that were in use during loading + * hibernation image. Switch to idmap_pgd so the physical page tables + * are overwritten with the same contents. + */ +static void notrace arch_restore_image(void *unused) +{ + struct pbe *pbe; + + cpu_switch_mm(idmap_pgd, &init_mm); + for (pbe = restore_pblist; pbe; pbe = pbe->next) + copy_page(pbe->orig_address, pbe->address); + + soft_restart(virt_to_phys(cpu_resume)); +} + +static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata; + +/* + * Resume from the hibernation image. + * Due to the kernel heap / data restore, stack contents change underneath + * and that would make function calls impossible; switch to a temporary + * stack within the nosave region to avoid that problem. 
+ */ +int swsusp_arch_resume(void) +{ + extern void call_with_stack(void (*fn)(void *), void *arg, void *sp); + call_with_stack(arch_restore_image, 0, + resume_stack + ARRAY_SIZE(resume_stack)); + return 0; +} diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c index 9723d17b8f38..2c4257604513 100644 --- a/arch/arm/kernel/irq.c +++ b/arch/arm/kernel/irq.c @@ -37,6 +37,7 @@ #include <linux/proc_fs.h> #include <linux/export.h> +#include <asm/hardware/cache-l2x0.h> #include <asm/exception.h> #include <asm/mach/arch.h> #include <asm/mach/irq.h> @@ -115,10 +116,21 @@ EXPORT_SYMBOL_GPL(set_irq_flags); void __init init_IRQ(void) { + int ret; + if (IS_ENABLED(CONFIG_OF) && !machine_desc->init_irq) irqchip_init(); else machine_desc->init_irq(); + + if (IS_ENABLED(CONFIG_OF) && IS_ENABLED(CONFIG_CACHE_L2X0) && + (machine_desc->l2c_aux_mask || machine_desc->l2c_aux_val)) { + outer_cache.write_sec = machine_desc->l2c_write_sec; + ret = l2x0_of_init(machine_desc->l2c_aux_val, + machine_desc->l2c_aux_mask); + if (ret) + pr_err("L2C: failed to init: %d\n", ret); + } } #ifdef CONFIG_MULTI_IRQ_HANDLER diff --git a/arch/arm/kernel/isa.c b/arch/arm/kernel/isa.c index 346485910732..9d1cf7156895 100644 --- a/arch/arm/kernel/isa.c +++ b/arch/arm/kernel/isa.c @@ -20,7 +20,7 @@ static unsigned int isa_membase, isa_portbase, isa_portshift; -static ctl_table ctl_isa_vars[4] = { +static struct ctl_table ctl_isa_vars[4] = { { .procname = "membase", .data = &isa_membase, @@ -44,7 +44,7 @@ static ctl_table ctl_isa_vars[4] = { static struct ctl_table_header *isa_sysctl_header; -static ctl_table ctl_isa[2] = { +static struct ctl_table ctl_isa[2] = { { .procname = "isa", .mode = 0555, @@ -52,7 +52,7 @@ static ctl_table ctl_isa[2] = { }, {} }; -static ctl_table ctl_bus[2] = { +static struct ctl_table ctl_bus[2] = { { .procname = "bus", .mode = 0555, diff --git a/arch/arm/kernel/iwmmxt.S b/arch/arm/kernel/iwmmxt.S index 2452dd1bef53..a5599cfc43cb 100644 --- a/arch/arm/kernel/iwmmxt.S +++ b/arch/arm/kernel/iwmmxt.S @@ -18,6 +18,7 @@ #include <asm/ptrace.h> #include <asm/thread_info.h> #include <asm/asm-offsets.h> +#include <asm/assembler.h> #if defined(CONFIG_CPU_PJ4) || defined(CONFIG_CPU_PJ4B) #define PJ4(code...) code @@ -65,17 +66,18 @@ * r9 = ret_from_exception * lr = undefined instr exit * - * called from prefetch exception handler with interrupts disabled + * called from prefetch exception handler with interrupts enabled */ ENTRY(iwmmxt_task_enable) + inc_preempt_count r10, r3 XSC(mrc p15, 0, r2, c15, c1, 0) PJ4(mrc p15, 0, r2, c1, c0, 2) @ CP0 and CP1 accessible? XSC(tst r2, #0x3) PJ4(tst r2, #0xf) - movne pc, lr @ if so no business here + bne 4f @ if so no business here @ enable access to CP0 and CP1 XSC(orr r2, r2, #0x3) XSC(mcr p15, 0, r2, c15, c1, 0) @@ -136,7 +138,7 @@ concan_dump: wstrd wR15, [r1, #MMX_WR15] 2: teq r0, #0 @ anything to load? - moveq pc, lr + beq 3f concan_load: @@ -169,8 +171,14 @@ concan_load: @ clear CUP/MUP (only if r1 != 0) teq r1, #0 mov r2, #0 - moveq pc, lr + beq 3f tmcr wCon, r2 + +3: +#ifdef CONFIG_PREEMPT_COUNT + get_thread_info r10 +#endif +4: dec_preempt_count r10, r3 mov pc, lr /* diff --git a/arch/arm/kernel/perf_event_cpu.c b/arch/arm/kernel/perf_event_cpu.c index 51798d7854ac..a71ae1523620 100644 --- a/arch/arm/kernel/perf_event_cpu.c +++ b/arch/arm/kernel/perf_event_cpu.c @@ -221,6 +221,7 @@ static struct notifier_block cpu_pmu_hotplug_notifier = { * PMU platform driver and devicetree bindings. 
*/ static struct of_device_id cpu_pmu_of_device_ids[] = { + {.compatible = "arm,cortex-a17-pmu", .data = armv7_a17_pmu_init}, {.compatible = "arm,cortex-a15-pmu", .data = armv7_a15_pmu_init}, {.compatible = "arm,cortex-a12-pmu", .data = armv7_a12_pmu_init}, {.compatible = "arm,cortex-a9-pmu", .data = armv7_a9_pmu_init}, diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c index f4ef3981ed02..2037f7205987 100644 --- a/arch/arm/kernel/perf_event_v7.c +++ b/arch/arm/kernel/perf_event_v7.c @@ -1599,6 +1599,13 @@ static int armv7_a12_pmu_init(struct arm_pmu *cpu_pmu) return 0; } +static int armv7_a17_pmu_init(struct arm_pmu *cpu_pmu) +{ + armv7_a12_pmu_init(cpu_pmu); + cpu_pmu->name = "ARMv7 Cortex-A17"; + return 0; +} + /* * Krait Performance Monitor Region Event Selection Register (PMRESRn) * @@ -2021,6 +2028,11 @@ static inline int armv7_a12_pmu_init(struct arm_pmu *cpu_pmu) return -ENODEV; } +static inline int armv7_a17_pmu_init(struct arm_pmu *cpu_pmu) +{ + return -ENODEV; +} + static inline int krait_pmu_init(struct arm_pmu *cpu_pmu) { return -ENODEV; diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c index 50e198c1e9c8..8a16ee5d8a95 100644 --- a/arch/arm/kernel/setup.c +++ b/arch/arm/kernel/setup.c @@ -72,6 +72,7 @@ static int __init fpe_setup(char *line) __setup("fpe=", fpe_setup); #endif +extern void init_default_cache_policy(unsigned long); extern void paging_init(const struct machine_desc *desc); extern void early_paging_init(const struct machine_desc *, struct proc_info_list *); @@ -590,7 +591,7 @@ static void __init setup_processor(void) pr_info("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n", cpu_name, read_cpuid_id(), read_cpuid_id() & 15, - proc_arch[cpu_architecture()], cr_alignment); + proc_arch[cpu_architecture()], get_cr()); snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c", list->arch_name, ENDIANNESS); @@ -603,7 +604,9 @@ static void __init setup_processor(void) #ifndef CONFIG_ARM_THUMB elf_hwcap &= ~(HWCAP_THUMB | HWCAP_IDIVT); #endif - +#ifdef CONFIG_MMU + init_default_cache_policy(list->__cpu_mm_mmu_flags); +#endif erratum_a15_798181_init(); feat_v6_fixup(); @@ -628,15 +631,8 @@ void __init dump_machine_table(void) int __init arm_add_memory(u64 start, u64 size) { - struct membank *bank = &meminfo.bank[meminfo.nr_banks]; u64 aligned_start; - if (meminfo.nr_banks >= NR_BANKS) { - pr_crit("NR_BANKS too low, ignoring memory at 0x%08llx\n", - (long long)start); - return -EINVAL; - } - /* * Ensure that start/size are aligned to a page boundary. * Size is appropriately rounded down, start is rounded up. @@ -677,17 +673,17 @@ int __init arm_add_memory(u64 start, u64 size) aligned_start = PHYS_OFFSET; } - bank->start = aligned_start; - bank->size = size & ~(phys_addr_t)(PAGE_SIZE - 1); + start = aligned_start; + size = size & ~(phys_addr_t)(PAGE_SIZE - 1); /* * Check whether this memory region has non-zero size or * invalid node number. */ - if (bank->size == 0) + if (size == 0) return -EINVAL; - meminfo.nr_banks++; + memblock_add(start, size); return 0; } @@ -695,6 +691,7 @@ int __init arm_add_memory(u64 start, u64 size) * Pick out the memory size. 
We look for mem=size@start, * where start and size are "size[KkMm]" */ + static int __init early_mem(char *p) { static int usermem __initdata = 0; @@ -709,7 +706,8 @@ static int __init early_mem(char *p) */ if (usermem == 0) { usermem = 1; - meminfo.nr_banks = 0; + memblock_remove(memblock_start_of_DRAM(), + memblock_end_of_DRAM() - memblock_start_of_DRAM()); } start = PHYS_OFFSET; @@ -854,13 +852,6 @@ static void __init reserve_crashkernel(void) static inline void reserve_crashkernel(void) {} #endif /* CONFIG_KEXEC */ -static int __init meminfo_cmp(const void *_a, const void *_b) -{ - const struct membank *a = _a, *b = _b; - long cmp = bank_pfn_start(a) - bank_pfn_start(b); - return cmp < 0 ? -1 : cmp > 0 ? 1 : 0; -} - void __init hyp_mode_check(void) { #ifdef CONFIG_ARM_VIRT_EXT @@ -903,12 +894,10 @@ void __init setup_arch(char **cmdline_p) parse_early_param(); - sort(&meminfo.bank, meminfo.nr_banks, sizeof(meminfo.bank[0]), meminfo_cmp, NULL); - early_paging_init(mdesc, lookup_processor_type(read_cpuid_id())); setup_dma_zone(mdesc); sanity_check_meminfo(); - arm_memblock_init(&meminfo, mdesc); + arm_memblock_init(mdesc); paging_init(mdesc); request_standard_resources(mdesc); diff --git a/arch/arm/kernel/sleep.S b/arch/arm/kernel/sleep.S index b907d9b790ab..1b880db2a033 100644 --- a/arch/arm/kernel/sleep.S +++ b/arch/arm/kernel/sleep.S @@ -127,6 +127,10 @@ ENDPROC(cpu_resume_after_mmu) .align ENTRY(cpu_resume) ARM_BE8(setend be) @ ensure we are in BE mode +#ifdef CONFIG_ARM_VIRT_EXT + bl __hyp_stub_install_secondary +#endif + safe_svcmode_maskall r1 mov r1, #0 ALT_SMP(mrc p15, 0, r0, c0, c0, 5) ALT_UP_B(1f) @@ -144,7 +148,6 @@ ARM_BE8(setend be) @ ensure we are in BE mode ldr r0, [r0, #SLEEP_SAVE_SP_PHYS] ldr r0, [r0, r1, lsl #2] - setmode PSR_I_BIT | PSR_F_BIT | SVC_MODE, r1 @ set SVC, irqs off @ load phys pgd, stack, resume fn ARM( ldmia r0!, {r1, sp, pc} ) THUMB( ldmia r0!, {r1, r2, r3} ) diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c index af4e8c8a5422..f065eb05d254 100644 --- a/arch/arm/kernel/stacktrace.c +++ b/arch/arm/kernel/stacktrace.c @@ -3,6 +3,7 @@ #include <linux/stacktrace.h> #include <asm/stacktrace.h> +#include <asm/traps.h> #if defined(CONFIG_FRAME_POINTER) && !defined(CONFIG_ARM_UNWIND) /* @@ -61,6 +62,7 @@ EXPORT_SYMBOL(walk_stackframe); #ifdef CONFIG_STACKTRACE struct stack_trace_data { struct stack_trace *trace; + unsigned long last_pc; unsigned int no_sched_functions; unsigned int skip; }; @@ -69,6 +71,7 @@ static int save_trace(struct stackframe *frame, void *d) { struct stack_trace_data *data = d; struct stack_trace *trace = data->trace; + struct pt_regs *regs; unsigned long addr = frame->pc; if (data->no_sched_functions && in_sched_functions(addr)) @@ -80,16 +83,39 @@ static int save_trace(struct stackframe *frame, void *d) trace->entries[trace->nr_entries++] = addr; + if (trace->nr_entries >= trace->max_entries) + return 1; + + /* + * in_exception_text() is designed to test if the PC is one of + * the functions which has an exception stack above it, but + * unfortunately what is in frame->pc is the return LR value, + * not the saved PC value. So, we need to track the previous + * frame PC value when doing this. 
+ */ + addr = data->last_pc; + data->last_pc = frame->pc; + if (!in_exception_text(addr)) + return 0; + + regs = (struct pt_regs *)frame->sp; + + trace->entries[trace->nr_entries++] = regs->ARM_pc; + return trace->nr_entries >= trace->max_entries; } -void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace) +/* This must be noinline to so that our skip calculation works correctly */ +static noinline void __save_stack_trace(struct task_struct *tsk, + struct stack_trace *trace, unsigned int nosched) { struct stack_trace_data data; struct stackframe frame; data.trace = trace; + data.last_pc = ULONG_MAX; data.skip = trace->skip; + data.no_sched_functions = nosched; if (tsk != current) { #ifdef CONFIG_SMP @@ -102,7 +128,6 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace) trace->entries[trace->nr_entries++] = ULONG_MAX; return; #else - data.no_sched_functions = 1; frame.fp = thread_saved_fp(tsk); frame.sp = thread_saved_sp(tsk); frame.lr = 0; /* recovered from the stack */ @@ -111,11 +136,12 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace) } else { register unsigned long current_sp asm ("sp"); - data.no_sched_functions = 0; + /* We don't want this function nor the caller */ + data.skip += 2; frame.fp = (unsigned long)__builtin_frame_address(0); frame.sp = current_sp; frame.lr = (unsigned long)__builtin_return_address(0); - frame.pc = (unsigned long)save_stack_trace_tsk; + frame.pc = (unsigned long)__save_stack_trace; } walk_stackframe(&frame, save_trace, &data); @@ -123,9 +149,33 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace) trace->entries[trace->nr_entries++] = ULONG_MAX; } +void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace) +{ + struct stack_trace_data data; + struct stackframe frame; + + data.trace = trace; + data.skip = trace->skip; + data.no_sched_functions = 0; + + frame.fp = regs->ARM_fp; + frame.sp = regs->ARM_sp; + frame.lr = regs->ARM_lr; + frame.pc = regs->ARM_pc; + + walk_stackframe(&frame, save_trace, &data); + if (trace->nr_entries < trace->max_entries) + trace->entries[trace->nr_entries++] = ULONG_MAX; +} + +void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace) +{ + __save_stack_trace(tsk, trace, 1); +} + void save_stack_trace(struct stack_trace *trace) { - save_stack_trace_tsk(current, trace); + __save_stack_trace(current, trace, 0); } EXPORT_SYMBOL_GPL(save_stack_trace); #endif diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c index 71e1fec6d31a..3997c411c140 100644 --- a/arch/arm/kernel/topology.c +++ b/arch/arm/kernel/topology.c @@ -91,13 +91,13 @@ static void __init parse_dt_topology(void) { const struct cpu_efficiency *cpu_eff; struct device_node *cn = NULL; - unsigned long min_capacity = (unsigned long)(-1); + unsigned long min_capacity = ULONG_MAX; unsigned long max_capacity = 0; unsigned long capacity = 0; - int alloc_size, cpu = 0; + int cpu = 0; - alloc_size = nr_cpu_ids * sizeof(*__cpu_capacity); - __cpu_capacity = kzalloc(alloc_size, GFP_NOWAIT); + __cpu_capacity = kcalloc(nr_cpu_ids, sizeof(*__cpu_capacity), + GFP_NOWAIT); for_each_possible_cpu(cpu) { const u32 *rate; diff --git a/arch/arm/kernel/uprobes.c b/arch/arm/kernel/uprobes.c index f9bacee973bf..56adf9c1fde0 100644 --- a/arch/arm/kernel/uprobes.c +++ b/arch/arm/kernel/uprobes.c @@ -113,6 +113,26 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, return 0; } +void arch_uprobe_copy_ixol(struct 
page *page, unsigned long vaddr, + void *src, unsigned long len) +{ + void *xol_page_kaddr = kmap_atomic(page); + void *dst = xol_page_kaddr + (vaddr & ~PAGE_MASK); + + preempt_disable(); + + /* Initialize the slot */ + memcpy(dst, src, len); + + /* flush caches (dcache/icache) */ + flush_uprobe_xol_access(page, vaddr, dst, len); + + preempt_enable(); + + kunmap_atomic(xol_page_kaddr); +} + + int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs) { struct uprobe_task *utask = current->utask; diff --git a/arch/arm/mach-at91/sysirq_mask.c b/arch/arm/mach-at91/sysirq_mask.c index 2ba694f9626b..f8bc3511a8c8 100644 --- a/arch/arm/mach-at91/sysirq_mask.c +++ b/arch/arm/mach-at91/sysirq_mask.c @@ -25,24 +25,28 @@ #include "generic.h" -#define AT91_RTC_IDR 0x24 /* Interrupt Disable Register */ -#define AT91_RTC_IMR 0x28 /* Interrupt Mask Register */ +#define AT91_RTC_IDR 0x24 /* Interrupt Disable Register */ +#define AT91_RTC_IMR 0x28 /* Interrupt Mask Register */ +#define AT91_RTC_IRQ_MASK 0x1f /* Available IRQs mask */ void __init at91_sysirq_mask_rtc(u32 rtc_base) { void __iomem *base; - u32 mask; base = ioremap(rtc_base, 64); if (!base) return; - mask = readl_relaxed(base + AT91_RTC_IMR); - if (mask) { - pr_info("AT91: Disabling rtc irq\n"); - writel_relaxed(mask, base + AT91_RTC_IDR); - (void)readl_relaxed(base + AT91_RTC_IMR); /* flush */ - } + /* + * sam9x5 SoCs have the following errata: + * "RTC: Interrupt Mask Register cannot be used + * Interrupt Mask Register read always returns 0." + * + * Hence we're not relying on IMR values to disable + * interrupts. + */ + writel_relaxed(AT91_RTC_IRQ_MASK, base + AT91_RTC_IDR); + (void)readl_relaxed(base + AT91_RTC_IMR); /* flush */ iounmap(base); } diff --git a/arch/arm/mach-bcm/bcm_5301x.c b/arch/arm/mach-bcm/bcm_5301x.c index edff69761e04..e9bcbdbce555 100644 --- a/arch/arm/mach-bcm/bcm_5301x.c +++ b/arch/arm/mach-bcm/bcm_5301x.c @@ -43,19 +43,14 @@ static void __init bcm5301x_init_early(void) "imprecise external abort"); } -static void __init bcm5301x_dt_init(void) -{ - l2x0_of_init(0, ~0UL); - of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); -} - static const char __initconst *bcm5301x_dt_compat[] = { "brcm,bcm4708", NULL, }; DT_MACHINE_START(BCM5301X, "BCM5301X") + .l2c_aux_val = 0, + .l2c_aux_mask = ~0, .init_early = bcm5301x_init_early, - .init_machine = bcm5301x_dt_init, .dt_compat = bcm5301x_dt_compat, MACHINE_END diff --git a/arch/arm/mach-berlin/berlin.c b/arch/arm/mach-berlin/berlin.c index 025bcb5473eb..ac181c6797ee 100644 --- a/arch/arm/mach-berlin/berlin.c +++ b/arch/arm/mach-berlin/berlin.c @@ -18,16 +18,6 @@ #include <asm/hardware/cache-l2x0.h> #include <asm/mach/arch.h> -static void __init berlin_init_machine(void) -{ - /* - * with DT probing for L2CCs, berlin_init_machine can be removed. - * Note: 88DE3005 (Armada 1500-mini) uses pl310 l2cc - */ - l2x0_of_init(0x70c00000, 0xfeffffff); - of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); -} - static const char * const berlin_dt_compat[] = { "marvell,berlin", NULL, @@ -35,5 +25,10 @@ static const char * const berlin_dt_compat[] = { DT_MACHINE_START(BERLIN_DT, "Marvell Berlin") .dt_compat = berlin_dt_compat, - .init_machine = berlin_init_machine, + /* + * with DT probing for L2CCs, berlin_init_machine can be removed. 
+ * Note: 88DE3005 (Armada 1500-mini) uses pl310 l2cc + */ + .l2c_aux_val = 0x30c00000, + .l2c_aux_mask = 0xfeffffff, MACHINE_END diff --git a/arch/arm/mach-clps711x/board-clep7312.c b/arch/arm/mach-clps711x/board-clep7312.c index 221b9de32dd6..94a7add88a3f 100644 --- a/arch/arm/mach-clps711x/board-clep7312.c +++ b/arch/arm/mach-clps711x/board-clep7312.c @@ -18,6 +18,7 @@ #include <linux/init.h> #include <linux/types.h> #include <linux/string.h> +#include <linux/memblock.h> #include <asm/setup.h> #include <asm/mach-types.h> @@ -26,11 +27,9 @@ #include "common.h" static void __init -fixup_clep7312(struct tag *tags, char **cmdline, struct meminfo *mi) +fixup_clep7312(struct tag *tags, char **cmdline) { - mi->nr_banks=1; - mi->bank[0].start = 0xc0000000; - mi->bank[0].size = 0x01000000; + memblock_add(0xc0000000, 0x01000000); } MACHINE_START(CLEP7212, "Cirrus Logic 7212/7312") diff --git a/arch/arm/mach-clps711x/board-edb7211.c b/arch/arm/mach-clps711x/board-edb7211.c index 077609841f14..f9828f89972a 100644 --- a/arch/arm/mach-clps711x/board-edb7211.c +++ b/arch/arm/mach-clps711x/board-edb7211.c @@ -16,6 +16,7 @@ #include <linux/interrupt.h> #include <linux/backlight.h> #include <linux/platform_device.h> +#include <linux/memblock.h> #include <linux/mtd/physmap.h> #include <linux/mtd/partitions.h> @@ -133,7 +134,7 @@ static void __init edb7211_reserve(void) } static void __init -fixup_edb7211(struct tag *tags, char **cmdline, struct meminfo *mi) +fixup_edb7211(struct tag *tags, char **cmdline) { /* * Bank start addresses are not present in the information @@ -143,11 +144,8 @@ fixup_edb7211(struct tag *tags, char **cmdline, struct meminfo *mi) * Banks sizes _are_ present in the param block, but we're * not using that information yet. */ - mi->bank[0].start = 0xc0000000; - mi->bank[0].size = SZ_8M; - mi->bank[1].start = 0xc1000000; - mi->bank[1].size = SZ_8M; - mi->nr_banks = 2; + memblock_add(0xc0000000, SZ_8M); + memblock_add(0xc1000000, SZ_8M); } static void __init edb7211_init(void) diff --git a/arch/arm/mach-clps711x/board-p720t.c b/arch/arm/mach-clps711x/board-p720t.c index 67b733744ed7..0cf0e51e6546 100644 --- a/arch/arm/mach-clps711x/board-p720t.c +++ b/arch/arm/mach-clps711x/board-p720t.c @@ -295,7 +295,7 @@ static struct generic_bl_info p720t_lcd_backlight_pdata = { }; static void __init -fixup_p720t(struct tag *tag, char **cmdline, struct meminfo *mi) +fixup_p720t(struct tag *tag, char **cmdline) { /* * Our bootloader doesn't setup any tags (yet). 
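The hunks in this part of the series repeat two mechanical conversions. First, board ->fixup() callbacks no longer fill in the legacy 'struct meminfo' bank array; the 'struct meminfo *' parameter is dropped from the callback signature and RAM is registered directly with memblock_add(). Second, per-machine l2x0_of_init()/l2x0_init() calls move into the .l2c_aux_val and .l2c_aux_mask fields of the machine descriptor so the generic cache-l2x0 code performs the initialisation. The sketch below only illustrates the shape of both patterns; the board name, compatible string, addresses and sizes are hypothetical and not taken from this diff.

/*
 * Illustrative sketch only: the names and memory layout are made up;
 * only the structure mirrors the conversions in the surrounding hunks.
 */
#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/sizes.h>

#include <asm/setup.h>
#include <asm/mach/arch.h>

/*
 * (1) ATAG-booted board: the fixup callback loses its 'struct meminfo *'
 * argument and registers memory straight with memblock. It is still wired
 * up through the .fixup field of the machine descriptor as before.
 */
static void __init fixup_example_board(struct tag *tags, char **cmdline)
{
	memblock_add(0xc0000000, SZ_32M);	/* hypothetical RAM bank */
}

/*
 * (2) DT machine: instead of calling l2x0_of_init() from .init_machine or
 * .init_irq, describe the PL310 auxiliary control value/mask and let the
 * core cache-l2x0 code bring the L2 up.
 */
static const char *const example_dt_compat[] __initconst = {
	"vendor,example-soc",			/* hypothetical compatible */
	NULL,
};

DT_MACHINE_START(EXAMPLE_DT, "Example SoC (Device Tree)")
	.l2c_aux_val	= 0,
	.l2c_aux_mask	= ~0,
	.dt_compat	= example_dt_compat,
MACHINE_END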
diff --git a/arch/arm/mach-cns3xxx/core.c b/arch/arm/mach-cns3xxx/core.c index 2ae28a69e3e5..f85449a6accd 100644 --- a/arch/arm/mach-cns3xxx/core.c +++ b/arch/arm/mach-cns3xxx/core.c @@ -272,9 +272,9 @@ void __init cns3xxx_l2x0_init(void) * * 1 cycle of latency for setup, read and write accesses */ - val = readl(base + L2X0_TAG_LATENCY_CTRL); + val = readl(base + L310_TAG_LATENCY_CTRL); val &= 0xfffff888; - writel(val, base + L2X0_TAG_LATENCY_CTRL); + writel(val, base + L310_TAG_LATENCY_CTRL); /* * Data RAM Control register @@ -285,12 +285,12 @@ void __init cns3xxx_l2x0_init(void) * * 1 cycle of latency for setup, read and write accesses */ - val = readl(base + L2X0_DATA_LATENCY_CTRL); + val = readl(base + L310_DATA_LATENCY_CTRL); val &= 0xfffff888; - writel(val, base + L2X0_DATA_LATENCY_CTRL); + writel(val, base + L310_DATA_LATENCY_CTRL); /* 32 KiB, 8-way, parity disable */ - l2x0_init(base, 0x00540000, 0xfe000fff); + l2x0_init(base, 0x00500000, 0xfe0f0fff); } #endif /* CONFIG_CACHE_L2X0 */ diff --git a/arch/arm/mach-ep93xx/crunch-bits.S b/arch/arm/mach-ep93xx/crunch-bits.S index 0ec9bb48fab9..e96923a3017b 100644 --- a/arch/arm/mach-ep93xx/crunch-bits.S +++ b/arch/arm/mach-ep93xx/crunch-bits.S @@ -16,6 +16,7 @@ #include <asm/ptrace.h> #include <asm/thread_info.h> #include <asm/asm-offsets.h> +#include <asm/assembler.h> #include <mach/ep93xx-regs.h> /* @@ -62,14 +63,16 @@ * r9 = ret_from_exception * lr = undefined instr exit * - * called from prefetch exception handler with interrupts disabled + * called from prefetch exception handler with interrupts enabled */ ENTRY(crunch_task_enable) + inc_preempt_count r10, r3 + ldr r8, =(EP93XX_APB_VIRT_BASE + 0x00130000) @ syscon addr ldr r1, [r8, #0x80] tst r1, #0x00800000 @ access to crunch enabled? - movne pc, lr @ if so no business here + bne 2f @ if so no business here mov r3, #0xaa @ unlock syscon swlock str r3, [r8, #0xc0] orr r1, r1, #0x00800000 @ enable access to crunch @@ -142,7 +145,7 @@ crunch_save: teq r0, #0 @ anything to load? 
cfldr64eq mvdx0, [r1, #CRUNCH_MVDX0] @ mvdx0 was clobbered - moveq pc, lr + beq 1f crunch_load: cfldr64 mvdx0, [r0, #CRUNCH_DSPSC] @ load status word @@ -190,6 +193,11 @@ crunch_load: cfldr64 mvdx14, [r0, #CRUNCH_MVDX14] cfldr64 mvdx15, [r0, #CRUNCH_MVDX15] +1: +#ifdef CONFIG_PREEMPT_COUNT + get_thread_info r10 +#endif +2: dec_preempt_count r10, r3 mov pc, lr /* diff --git a/arch/arm/mach-exynos/common.h b/arch/arm/mach-exynos/common.h index 80b90e346ca0..16617bdb37a9 100644 --- a/arch/arm/mach-exynos/common.h +++ b/arch/arm/mach-exynos/common.h @@ -153,7 +153,6 @@ enum sys_powerdown { NUM_SYS_POWERDOWN, }; -extern unsigned long l2x0_regs_phys; struct exynos_pmu_conf { void __iomem *reg; unsigned int val[NUM_SYS_POWERDOWN]; diff --git a/arch/arm/mach-exynos/exynos.c b/arch/arm/mach-exynos/exynos.c index a56ce45a3f90..90aab4d75d08 100644 --- a/arch/arm/mach-exynos/exynos.c +++ b/arch/arm/mach-exynos/exynos.c @@ -30,9 +30,6 @@ #include "mfc.h" #include "regs-pmu.h" -#define L2_AUX_VAL 0x7C470001 -#define L2_AUX_MASK 0xC200ffff - static struct map_desc exynos4_iodesc[] __initdata = { { .virtual = (unsigned long)S3C_VA_SYS, @@ -246,25 +243,6 @@ void __init exynos_init_io(void) exynos_map_io(); } -static int __init exynos4_l2x0_cache_init(void) -{ - int ret; - - if (!soc_is_exynos4()) - return 0; - - ret = l2x0_of_init(L2_AUX_VAL, L2_AUX_MASK); - if (ret) - return ret; - - if (IS_ENABLED(CONFIG_S5P_SLEEP)) { - l2x0_regs_phys = virt_to_phys(&l2x0_saved_regs); - clean_dcache_area(&l2x0_regs_phys, sizeof(unsigned long)); - } - return 0; -} -early_initcall(exynos4_l2x0_cache_init); - static void __init exynos_dt_machine_init(void) { struct device_node *i2c_np; @@ -333,6 +311,8 @@ static void __init exynos_reserve(void) DT_MACHINE_START(EXYNOS_DT, "SAMSUNG EXYNOS (Flattened Device Tree)") /* Maintainer: Thomas Abraham <thomas.abraham@linaro.org> */ /* Maintainer: Kukjin Kim <kgene.kim@samsung.com> */ + .l2c_aux_val = 0x3c400001, + .l2c_aux_mask = 0xc20fffff, .smp = smp_ops(exynos_smp_ops), .map_io = exynos_init_io, .init_early = exynos_firmware_init, diff --git a/arch/arm/mach-exynos/sleep.S b/arch/arm/mach-exynos/sleep.S index a2613e944e10..108a45f4bb62 100644 --- a/arch/arm/mach-exynos/sleep.S +++ b/arch/arm/mach-exynos/sleep.S @@ -16,8 +16,6 @@ */ #include <linux/linkage.h> -#include <asm/asm-offsets.h> -#include <asm/hardware/cache-l2x0.h> #define CPU_MASK 0xff0ffff0 #define CPU_CORTEX_A9 0x410fc090 @@ -53,33 +51,7 @@ ENTRY(exynos_cpu_resume) and r0, r0, r1 ldr r1, =CPU_CORTEX_A9 cmp r0, r1 - bne skip_l2_resume - adr r0, l2x0_regs_phys - ldr r0, [r0] - cmp r0, #0 - beq skip_l2_resume - ldr r1, [r0, #L2X0_R_PHY_BASE] - ldr r2, [r1, #L2X0_CTRL] - tst r2, #0x1 - bne skip_l2_resume - ldr r2, [r0, #L2X0_R_AUX_CTRL] - str r2, [r1, #L2X0_AUX_CTRL] - ldr r2, [r0, #L2X0_R_TAG_LATENCY] - str r2, [r1, #L2X0_TAG_LATENCY_CTRL] - ldr r2, [r0, #L2X0_R_DATA_LATENCY] - str r2, [r1, #L2X0_DATA_LATENCY_CTRL] - ldr r2, [r0, #L2X0_R_PREFETCH_CTRL] - str r2, [r1, #L2X0_PREFETCH_CTRL] - ldr r2, [r0, #L2X0_R_PWR_CTRL] - str r2, [r1, #L2X0_POWER_CTRL] - mov r2, #1 - str r2, [r1, #L2X0_CTRL] -skip_l2_resume: + bleq l2c310_early_resume #endif b cpu_resume ENDPROC(exynos_cpu_resume) -#ifdef CONFIG_CACHE_L2X0 - .globl l2x0_regs_phys -l2x0_regs_phys: - .long 0 -#endif diff --git a/arch/arm/mach-footbridge/cats-hw.c b/arch/arm/mach-footbridge/cats-hw.c index da0415094856..8f05489671b7 100644 --- a/arch/arm/mach-footbridge/cats-hw.c +++ b/arch/arm/mach-footbridge/cats-hw.c @@ -76,7 +76,7 @@ __initcall(cats_hw_init); * hard 
reboots fail on early boards. */ static void __init -fixup_cats(struct tag *tags, char **cmdline, struct meminfo *mi) +fixup_cats(struct tag *tags, char **cmdline) { #if defined(CONFIG_VGA_CONSOLE) || defined(CONFIG_DUMMY_CONSOLE) screen_info.orig_video_lines = 25; diff --git a/arch/arm/mach-footbridge/netwinder-hw.c b/arch/arm/mach-footbridge/netwinder-hw.c index eb1fa5c84723..cdee08c6d239 100644 --- a/arch/arm/mach-footbridge/netwinder-hw.c +++ b/arch/arm/mach-footbridge/netwinder-hw.c @@ -620,7 +620,7 @@ __initcall(nw_hw_init); * the parameter page. */ static void __init -fixup_netwinder(struct tag *tags, char **cmdline, struct meminfo *mi) +fixup_netwinder(struct tag *tags, char **cmdline) { #ifdef CONFIG_ISAPNP extern int isapnp_disable; diff --git a/arch/arm/mach-highbank/highbank.c b/arch/arm/mach-highbank/highbank.c index c7de89b263dd..8c35ae4ff176 100644 --- a/arch/arm/mach-highbank/highbank.c +++ b/arch/arm/mach-highbank/highbank.c @@ -51,11 +51,13 @@ static void __init highbank_scu_map_io(void) } -static void highbank_l2x0_disable(void) +static void highbank_l2c310_write_sec(unsigned long val, unsigned reg) { - outer_flush_all(); - /* Disable PL310 L2 Cache controller */ - highbank_smc1(0x102, 0x0); + if (reg == L2X0_CTRL) + highbank_smc1(0x102, val); + else + WARN_ONCE(1, "Highbank L2C310: ignoring write to reg 0x%x\n", + reg); } static void __init highbank_init_irq(void) @@ -64,14 +66,6 @@ static void __init highbank_init_irq(void) if (of_find_compatible_node(NULL, NULL, "arm,cortex-a9")) highbank_scu_map_io(); - - /* Enable PL310 L2 Cache controller */ - if (IS_ENABLED(CONFIG_CACHE_L2X0) && - of_find_compatible_node(NULL, NULL, "arm,pl310-cache")) { - highbank_smc1(0x102, 0x1); - l2x0_of_init(0, ~0UL); - outer_cache.disable = highbank_l2x0_disable; - } } static void highbank_power_off(void) @@ -185,6 +179,9 @@ DT_MACHINE_START(HIGHBANK, "Highbank") #if defined(CONFIG_ZONE_DMA) && defined(CONFIG_ARM_LPAE) .dma_zone_size = (4ULL * SZ_1G), #endif + .l2c_aux_val = 0, + .l2c_aux_mask = ~0, + .l2c_write_sec = highbank_l2c310_write_sec, .init_irq = highbank_init_irq, .init_machine = highbank_init, .dt_compat = highbank_match, diff --git a/arch/arm/mach-imx/mach-vf610.c b/arch/arm/mach-imx/mach-vf610.c index 2d8aef5a6efa..c44602758120 100644 --- a/arch/arm/mach-imx/mach-vf610.c +++ b/arch/arm/mach-imx/mach-vf610.c @@ -20,19 +20,14 @@ static void __init vf610_init_machine(void) of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); } -static void __init vf610_init_irq(void) -{ - l2x0_of_init(0, ~0UL); - irqchip_init(); -} - static const char *vf610_dt_compat[] __initconst = { "fsl,vf610", NULL, }; DT_MACHINE_START(VYBRID_VF610, "Freescale Vybrid VF610 (Device Tree)") - .init_irq = vf610_init_irq, + .l2c_aux_val = 0, + .l2c_aux_mask = ~0, .init_machine = vf610_init_machine, .dt_compat = vf610_dt_compat, .restart = mxc_restart, diff --git a/arch/arm/mach-imx/suspend-imx6.S b/arch/arm/mach-imx/suspend-imx6.S index 20048ff05739..fe123b079c05 100644 --- a/arch/arm/mach-imx/suspend-imx6.S +++ b/arch/arm/mach-imx/suspend-imx6.S @@ -334,28 +334,10 @@ ENDPROC(imx6_suspend) * turned into relative ones. 
*/ -#ifdef CONFIG_CACHE_L2X0 - .macro pl310_resume - adr r0, l2x0_saved_regs_offset - ldr r2, [r0] - add r2, r2, r0 - ldr r0, [r2, #L2X0_R_PHY_BASE] @ get physical base of l2x0 - ldr r1, [r2, #L2X0_R_AUX_CTRL] @ get aux_ctrl value - str r1, [r0, #L2X0_AUX_CTRL] @ restore aux_ctrl - mov r1, #0x1 - str r1, [r0, #L2X0_CTRL] @ re-enable L2 - .endm - -l2x0_saved_regs_offset: - .word l2x0_saved_regs - . - -#else - .macro pl310_resume - .endm -#endif - ENTRY(v7_cpu_resume) bl v7_invalidate_l1 - pl310_resume +#ifdef CONFIG_CACHE_L2X0 + bl l2c310_early_resume +#endif b cpu_resume ENDPROC(v7_cpu_resume) diff --git a/arch/arm/mach-imx/system.c b/arch/arm/mach-imx/system.c index 5e3027d3692f..3b0733edb68c 100644 --- a/arch/arm/mach-imx/system.c +++ b/arch/arm/mach-imx/system.c @@ -124,7 +124,7 @@ void __init imx_init_l2cache(void) } /* Configure the L2 PREFETCH and POWER registers */ - val = readl_relaxed(l2x0_base + L2X0_PREFETCH_CTRL); + val = readl_relaxed(l2x0_base + L310_PREFETCH_CTRL); val |= 0x70800000; /* * The L2 cache controller(PL310) version on the i.MX6D/Q is r3p1-50rel0 @@ -137,14 +137,12 @@ void __init imx_init_l2cache(void) */ if (cpu_is_imx6q()) val &= ~(1 << 30 | 1 << 23); - writel_relaxed(val, l2x0_base + L2X0_PREFETCH_CTRL); - val = L2X0_DYNAMIC_CLK_GATING_EN | L2X0_STNDBY_MODE_EN; - writel_relaxed(val, l2x0_base + L2X0_POWER_CTRL); + writel_relaxed(val, l2x0_base + L310_PREFETCH_CTRL); iounmap(l2x0_base); of_node_put(np); out: - l2x0_of_init(0, ~0UL); + l2x0_of_init(0, ~0); } #endif diff --git a/arch/arm/mach-msm/board-halibut.c b/arch/arm/mach-msm/board-halibut.c index a77529887cbc..61bfe584a9d7 100644 --- a/arch/arm/mach-msm/board-halibut.c +++ b/arch/arm/mach-msm/board-halibut.c @@ -83,11 +83,6 @@ static void __init halibut_init(void) platform_add_devices(devices, ARRAY_SIZE(devices)); } -static void __init halibut_fixup(struct tag *tags, char **cmdline, - struct meminfo *mi) -{ -} - static void __init halibut_map_io(void) { msm_map_common_io(); @@ -100,7 +95,6 @@ static void __init halibut_init_late(void) MACHINE_START(HALIBUT, "Halibut Board (QCT SURF7200A)") .atag_offset = 0x100, - .fixup = halibut_fixup, .map_io = halibut_map_io, .init_early = halibut_init_early, .init_irq = halibut_init_irq, diff --git a/arch/arm/mach-msm/board-mahimahi.c b/arch/arm/mach-msm/board-mahimahi.c index 7d9981cb400e..873c3ca3cd7e 100644 --- a/arch/arm/mach-msm/board-mahimahi.c +++ b/arch/arm/mach-msm/board-mahimahi.c @@ -22,6 +22,7 @@ #include <linux/io.h> #include <linux/kernel.h> #include <linux/platform_device.h> +#include <linux/memblock.h> #include <asm/mach-types.h> #include <asm/mach/arch.h> @@ -52,16 +53,10 @@ static void __init mahimahi_init(void) platform_add_devices(devices, ARRAY_SIZE(devices)); } -static void __init mahimahi_fixup(struct tag *tags, char **cmdline, - struct meminfo *mi) +static void __init mahimahi_fixup(struct tag *tags, char **cmdline) { - mi->nr_banks = 2; - mi->bank[0].start = PHYS_OFFSET; - mi->bank[0].node = PHYS_TO_NID(PHYS_OFFSET); - mi->bank[0].size = (219*1024*1024); - mi->bank[1].start = MSM_HIGHMEM_BASE; - mi->bank[1].node = PHYS_TO_NID(MSM_HIGHMEM_BASE); - mi->bank[1].size = MSM_HIGHMEM_SIZE; + memblock_add(PHYS_OFFSET, 219*SZ_1M); + memblock_add(MSM_HIGHMEM_BASE, MSM_HIGHMEM_SIZE); } static void __init mahimahi_map_io(void) diff --git a/arch/arm/mach-msm/board-msm7x30.c b/arch/arm/mach-msm/board-msm7x30.c index 0c4c200e1221..245884319d2e 100644 --- a/arch/arm/mach-msm/board-msm7x30.c +++ b/arch/arm/mach-msm/board-msm7x30.c @@ -40,8 +40,7 @@ #include 
"proc_comm.h" #include "common.h" -static void __init msm7x30_fixup(struct tag *tag, char **cmdline, - struct meminfo *mi) +static void __init msm7x30_fixup(struct tag *tag, char **cmdline) { for (; tag->hdr.size; tag = tag_next(tag)) if (tag->hdr.tag == ATAG_MEM && tag->u.mem.start == 0x200000) { diff --git a/arch/arm/mach-msm/board-sapphire.c b/arch/arm/mach-msm/board-sapphire.c index 327605174d63..e50967926dcd 100644 --- a/arch/arm/mach-msm/board-sapphire.c +++ b/arch/arm/mach-msm/board-sapphire.c @@ -35,6 +35,7 @@ #include <linux/mtd/nand.h> #include <linux/mtd/partitions.h> +#include <linux/memblock.h> #include "gpio_chip.h" #include "board-sapphire.h" @@ -74,22 +75,18 @@ static struct map_desc sapphire_io_desc[] __initdata = { } }; -static void __init sapphire_fixup(struct tag *tags, char **cmdline, - struct meminfo *mi) +static void __init sapphire_fixup(struct tag *tags, char **cmdline) { int smi_sz = parse_tag_smi((const struct tag *)tags); - mi->nr_banks = 1; - mi->bank[0].start = PHYS_OFFSET; - mi->bank[0].node = PHYS_TO_NID(PHYS_OFFSET); if (smi_sz == 32) { - mi->bank[0].size = (84*1024*1024); + memblock_add(PHYS_OFFSET, 84*SZ_1M); } else if (smi_sz == 64) { - mi->bank[0].size = (101*1024*1024); + memblock_add(PHYS_OFFSET, 101*SZ_1M); } else { + memblock_add(PHYS_OFFSET, 101*SZ_1M); /* Give a default value when not get smi size */ smi_sz = 64; - mi->bank[0].size = (101*1024*1024); } } diff --git a/arch/arm/mach-msm/board-trout.c b/arch/arm/mach-msm/board-trout.c index 5edfbd904d06..f72b07de2152 100644 --- a/arch/arm/mach-msm/board-trout.c +++ b/arch/arm/mach-msm/board-trout.c @@ -19,6 +19,7 @@ #include <linux/init.h> #include <linux/platform_device.h> #include <linux/clkdev.h> +#include <linux/memblock.h> #include <asm/system_info.h> #include <asm/mach-types.h> @@ -55,12 +56,9 @@ static void __init trout_init_irq(void) msm_init_irq(); } -static void __init trout_fixup(struct tag *tags, char **cmdline, - struct meminfo *mi) +static void __init trout_fixup(struct tag *tags, char **cmdline) { - mi->nr_banks = 1; - mi->bank[0].start = PHYS_OFFSET; - mi->bank[0].size = (101*1024*1024); + memblock_add(PHYS_OFFSET, 101*SZ_1M); } static void __init trout_init(void) diff --git a/arch/arm/mach-mvebu/board-v7.c b/arch/arm/mach-mvebu/board-v7.c index 01cfce6ac20b..8bb742fdf5ca 100644 --- a/arch/arm/mach-mvebu/board-v7.c +++ b/arch/arm/mach-mvebu/board-v7.c @@ -78,7 +78,6 @@ static void __init mvebu_timer_and_clk_init(void) mvebu_scu_enable(); coherency_init(); BUG_ON(mvebu_mbus_dt_init(coherency_available())); - l2x0_of_init(0, ~0UL); if (of_machine_is_compatible("marvell,armada375")) hook_fault_code(16 + 6, armada_375_external_abort_wa, SIGBUS, 0, @@ -182,6 +181,8 @@ static const char * const armada_370_xp_dt_compat[] = { }; DT_MACHINE_START(ARMADA_370_XP_DT, "Marvell Armada 370/XP (Device Tree)") + .l2c_aux_val = 0, + .l2c_aux_mask = ~0, .smp = smp_ops(armada_xp_smp_ops), .init_machine = mvebu_dt_init, .init_time = mvebu_timer_and_clk_init, @@ -195,6 +196,8 @@ static const char * const armada_375_dt_compat[] = { }; DT_MACHINE_START(ARMADA_375_DT, "Marvell Armada 375 (Device Tree)") + .l2c_aux_val = 0, + .l2c_aux_mask = ~0, .init_time = mvebu_timer_and_clk_init, .init_machine = mvebu_dt_init, .restart = mvebu_restart, @@ -208,6 +211,8 @@ static const char * const armada_38x_dt_compat[] = { }; DT_MACHINE_START(ARMADA_38X_DT, "Marvell Armada 380/385 (Device Tree)") + .l2c_aux_val = 0, + .l2c_aux_mask = ~0, .init_time = mvebu_timer_and_clk_init, .restart = mvebu_restart, .dt_compat = 
armada_38x_dt_compat, diff --git a/arch/arm/mach-nomadik/cpu-8815.c b/arch/arm/mach-nomadik/cpu-8815.c index 4a1065e41e9c..9116ca476d7c 100644 --- a/arch/arm/mach-nomadik/cpu-8815.c +++ b/arch/arm/mach-nomadik/cpu-8815.c @@ -143,23 +143,16 @@ static int __init cpu8815_mmcsd_init(void) } device_initcall(cpu8815_mmcsd_init); -static void __init cpu8815_init_of(void) -{ -#ifdef CONFIG_CACHE_L2X0 - /* At full speed latency must be >=2, so 0x249 in low bits */ - l2x0_of_init(0x00730249, 0xfe000fff); -#endif - of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); -} - static const char * cpu8815_board_compat[] = { "calaosystems,usb-s8815", NULL, }; DT_MACHINE_START(NOMADIK_DT, "Nomadik STn8815") + /* At full speed latency must be >=2, so 0x249 in low bits */ + .l2c_aux_val = 0x00700249, + .l2c_aux_mask = 0xfe0fefff, .map_io = cpu8815_map_io, - .init_machine = cpu8815_init_of, .restart = cpu8815_restart, .dt_compat = cpu8815_board_compat, MACHINE_END diff --git a/arch/arm/mach-omap2/Kconfig b/arch/arm/mach-omap2/Kconfig index cb31d4390d52..0ba482638ebf 100644 --- a/arch/arm/mach-omap2/Kconfig +++ b/arch/arm/mach-omap2/Kconfig @@ -65,6 +65,7 @@ config SOC_AM43XX select ARCH_HAS_OPP select ARM_GIC select MACH_OMAP_GENERIC + select MIGHT_HAVE_CACHE_L2X0 config SOC_DRA7XX bool "TI DRA7XX" diff --git a/arch/arm/mach-omap2/common.h b/arch/arm/mach-omap2/common.h index d88aff7baff8..ff029737c8f0 100644 --- a/arch/arm/mach-omap2/common.h +++ b/arch/arm/mach-omap2/common.h @@ -91,6 +91,7 @@ extern void omap3_sync32k_timer_init(void); extern void omap3_secure_sync32k_timer_init(void); extern void omap3_gptimer_timer_init(void); extern void omap4_local_timer_init(void); +int omap_l2_cache_init(void); extern void omap5_realtime_timer_init(void); void omap2420_init_early(void); diff --git a/arch/arm/mach-omap2/io.c b/arch/arm/mach-omap2/io.c index 4ec3b4a93843..8f559450c876 100644 --- a/arch/arm/mach-omap2/io.c +++ b/arch/arm/mach-omap2/io.c @@ -609,6 +609,7 @@ void __init am43xx_init_early(void) am43xx_clockdomains_init(); am43xx_hwmod_init(); omap_hwmod_init_postsetup(); + omap_l2_cache_init(); omap_clk_soc_init = am43xx_dt_clk_init; } @@ -640,6 +641,7 @@ void __init omap4430_init_early(void) omap44xx_clockdomains_init(); omap44xx_hwmod_init(); omap_hwmod_init_postsetup(); + omap_l2_cache_init(); omap_clk_soc_init = omap4xxx_dt_clk_init; } diff --git a/arch/arm/mach-omap2/omap-mpuss-lowpower.c b/arch/arm/mach-omap2/omap-mpuss-lowpower.c index eb76e47091ad..4001325f90fb 100644 --- a/arch/arm/mach-omap2/omap-mpuss-lowpower.c +++ b/arch/arm/mach-omap2/omap-mpuss-lowpower.c @@ -187,19 +187,15 @@ static void l2x0_pwrst_prepare(unsigned int cpu_id, unsigned int save_state) * in every restore MPUSS OFF path. 
*/ #ifdef CONFIG_CACHE_L2X0 -static void save_l2x0_context(void) +static void __init save_l2x0_context(void) { - u32 val; - void __iomem *l2x0_base = omap4_get_l2cache_base(); - if (l2x0_base) { - val = readl_relaxed(l2x0_base + L2X0_AUX_CTRL); - writel_relaxed(val, sar_base + L2X0_AUXCTRL_OFFSET); - val = readl_relaxed(l2x0_base + L2X0_PREFETCH_CTRL); - writel_relaxed(val, sar_base + L2X0_PREFETCH_CTRL_OFFSET); - } + writel_relaxed(l2x0_saved_regs.aux_ctrl, + sar_base + L2X0_AUXCTRL_OFFSET); + writel_relaxed(l2x0_saved_regs.prefetch_ctrl, + sar_base + L2X0_PREFETCH_CTRL_OFFSET); } #else -static void save_l2x0_context(void) +static void __init save_l2x0_context(void) {} #endif diff --git a/arch/arm/mach-omap2/omap4-common.c b/arch/arm/mach-omap2/omap4-common.c index 99b0154493a4..326cd982a3cb 100644 --- a/arch/arm/mach-omap2/omap4-common.c +++ b/arch/arm/mach-omap2/omap4-common.c @@ -167,75 +167,57 @@ void __iomem *omap4_get_l2cache_base(void) return l2cache_base; } -static void omap4_l2x0_disable(void) +static void omap4_l2c310_write_sec(unsigned long val, unsigned reg) { - outer_flush_all(); - /* Disable PL310 L2 Cache controller */ - omap_smc1(0x102, 0x0); -} + unsigned smc_op; -static void omap4_l2x0_set_debug(unsigned long val) -{ - /* Program PL310 L2 Cache controller debug register */ - omap_smc1(0x100, val); + switch (reg) { + case L2X0_CTRL: + smc_op = OMAP4_MON_L2X0_CTRL_INDEX; + break; + + case L2X0_AUX_CTRL: + smc_op = OMAP4_MON_L2X0_AUXCTRL_INDEX; + break; + + case L2X0_DEBUG_CTRL: + smc_op = OMAP4_MON_L2X0_DBG_CTRL_INDEX; + break; + + case L310_PREFETCH_CTRL: + smc_op = OMAP4_MON_L2X0_PREFETCH_INDEX; + break; + + default: + WARN_ONCE(1, "OMAP L2C310: ignoring write to reg 0x%x\n", reg); + return; + } + + omap_smc1(smc_op, val); } -static int __init omap_l2_cache_init(void) +int __init omap_l2_cache_init(void) { - u32 aux_ctrl = 0; - - /* - * To avoid code running on other OMAPs in - * multi-omap builds - */ - if (!cpu_is_omap44xx()) - return -ENODEV; + u32 aux_ctrl; /* Static mapping, never released */ l2cache_base = ioremap(OMAP44XX_L2CACHE_BASE, SZ_4K); if (WARN_ON(!l2cache_base)) return -ENOMEM; - /* - * 16-way associativity, parity disabled - * Way size - 32KB (es1.0) - * Way size - 64KB (es2.0 +) - */ - aux_ctrl = ((1 << L2X0_AUX_CTRL_ASSOCIATIVITY_SHIFT) | - (0x1 << 25) | - (0x1 << L2X0_AUX_CTRL_NS_LOCKDOWN_SHIFT) | - (0x1 << L2X0_AUX_CTRL_NS_INT_CTRL_SHIFT)); - - if (omap_rev() == OMAP4430_REV_ES1_0) { - aux_ctrl |= 0x2 << L2X0_AUX_CTRL_WAY_SIZE_SHIFT; - } else { - aux_ctrl |= ((0x3 << L2X0_AUX_CTRL_WAY_SIZE_SHIFT) | - (1 << L2X0_AUX_CTRL_SHARE_OVERRIDE_SHIFT) | - (1 << L2X0_AUX_CTRL_DATA_PREFETCH_SHIFT) | - (1 << L2X0_AUX_CTRL_INSTR_PREFETCH_SHIFT) | - (1 << L2X0_AUX_CTRL_EARLY_BRESP_SHIFT)); - } - if (omap_rev() != OMAP4430_REV_ES1_0) - omap_smc1(0x109, aux_ctrl); - - /* Enable PL310 L2 Cache controller */ - omap_smc1(0x102, 0x1); + /* 16-way associativity, parity disabled, way size - 64KB (es2.0 +) */ + aux_ctrl = L2C_AUX_CTRL_SHARED_OVERRIDE | + L310_AUX_CTRL_DATA_PREFETCH | + L310_AUX_CTRL_INSTR_PREFETCH; + outer_cache.write_sec = omap4_l2c310_write_sec; if (of_have_populated_dt()) - l2x0_of_init(aux_ctrl, L2X0_AUX_CTRL_MASK); + l2x0_of_init(aux_ctrl, 0xcf9fffff); else - l2x0_init(l2cache_base, aux_ctrl, L2X0_AUX_CTRL_MASK); - - /* - * Override default outer_cache.disable with a OMAP4 - * specific one - */ - outer_cache.disable = omap4_l2x0_disable; - outer_cache.set_debug = omap4_l2x0_set_debug; + l2x0_init(l2cache_base, aux_ctrl, 0xcf9fffff); return 0; } 
-omap_early_initcall(omap_l2_cache_init); #endif void __iomem *omap4_get_sar_ram_base(void) diff --git a/arch/arm/mach-orion5x/common.c b/arch/arm/mach-orion5x/common.c index 3f1de1111e0f..6bbb7b55c6d1 100644 --- a/arch/arm/mach-orion5x/common.c +++ b/arch/arm/mach-orion5x/common.c @@ -365,8 +365,7 @@ void orion5x_restart(enum reboot_mode mode, const char *cmd) * Many orion-based systems have buggy bootloader implementations. * This is a common fixup for bogus memory tags. */ -void __init tag_fixup_mem32(struct tag *t, char **from, - struct meminfo *meminfo) +void __init tag_fixup_mem32(struct tag *t, char **from) { for (; t->hdr.size; t = tag_next(t)) if (t->hdr.tag == ATAG_MEM && diff --git a/arch/arm/mach-orion5x/common.h b/arch/arm/mach-orion5x/common.h index 26d6f34b6027..cd0389c6e822 100644 --- a/arch/arm/mach-orion5x/common.h +++ b/arch/arm/mach-orion5x/common.h @@ -64,9 +64,8 @@ int orion5x_pci_sys_setup(int nr, struct pci_sys_data *sys); struct pci_bus *orion5x_pci_sys_scan_bus(int nr, struct pci_sys_data *sys); int orion5x_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin); -struct meminfo; struct tag; -extern void __init tag_fixup_mem32(struct tag *, char **, struct meminfo *); +extern void __init tag_fixup_mem32(struct tag *, char **); #ifdef CONFIG_MACH_MSS2_DT extern void mss2_init(void); diff --git a/arch/arm/mach-prima2/Makefile b/arch/arm/mach-prima2/Makefile index 7a6b4a323125..8846e7d87ea5 100644 --- a/arch/arm/mach-prima2/Makefile +++ b/arch/arm/mach-prima2/Makefile @@ -2,7 +2,6 @@ obj-y += rstc.o obj-y += common.o obj-y += rtciobrg.o obj-$(CONFIG_DEBUG_LL) += lluart.o -obj-$(CONFIG_CACHE_L2X0) += l2x0.o obj-$(CONFIG_SUSPEND) += pm.o sleep.o obj-$(CONFIG_SMP) += platsmp.o headsmp.o obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o diff --git a/arch/arm/mach-prima2/common.c b/arch/arm/mach-prima2/common.c index 47c7819edb9b..a860ea27e8ae 100644 --- a/arch/arm/mach-prima2/common.c +++ b/arch/arm/mach-prima2/common.c @@ -34,6 +34,8 @@ static const char *atlas6_dt_match[] __initconst = { DT_MACHINE_START(ATLAS6_DT, "Generic ATLAS6 (Flattened Device Tree)") /* Maintainer: Barry Song <baohua.song@csr.com> */ + .l2c_aux_val = 0, + .l2c_aux_mask = ~0, .map_io = sirfsoc_map_io, .init_late = sirfsoc_init_late, .dt_compat = atlas6_dt_match, @@ -48,6 +50,8 @@ static const char *prima2_dt_match[] __initconst = { DT_MACHINE_START(PRIMA2_DT, "Generic PRIMA2 (Flattened Device Tree)") /* Maintainer: Barry Song <baohua.song@csr.com> */ + .l2c_aux_val = 0, + .l2c_aux_mask = ~0, .map_io = sirfsoc_map_io, .dma_zone_size = SZ_256M, .init_late = sirfsoc_init_late, @@ -63,6 +67,8 @@ static const char *marco_dt_match[] __initconst = { DT_MACHINE_START(MARCO_DT, "Generic MARCO (Flattened Device Tree)") /* Maintainer: Barry Song <baohua.song@csr.com> */ + .l2c_aux_val = 0, + .l2c_aux_mask = ~0, .smp = smp_ops(sirfsoc_smp_ops), .map_io = sirfsoc_map_io, .init_late = sirfsoc_init_late, diff --git a/arch/arm/mach-prima2/l2x0.c b/arch/arm/mach-prima2/l2x0.c deleted file mode 100644 index c7102539c0b0..000000000000 --- a/arch/arm/mach-prima2/l2x0.c +++ /dev/null @@ -1,49 +0,0 @@ -/* - * l2 cache initialization for CSR SiRFprimaII - * - * Copyright (c) 2011 Cambridge Silicon Radio Limited, a CSR plc group company. - * - * Licensed under GPLv2 or later. 
- */ - -#include <linux/init.h> -#include <linux/kernel.h> -#include <linux/of.h> -#include <asm/hardware/cache-l2x0.h> - -struct l2x0_aux { - u32 val; - u32 mask; -}; - -static const struct l2x0_aux prima2_l2x0_aux __initconst = { - .val = 2 << L2X0_AUX_CTRL_WAY_SIZE_SHIFT, - .mask = 0, -}; - -static const struct l2x0_aux marco_l2x0_aux __initconst = { - .val = (2 << L2X0_AUX_CTRL_WAY_SIZE_SHIFT) | - (1 << L2X0_AUX_CTRL_ASSOCIATIVITY_SHIFT), - .mask = L2X0_AUX_CTRL_MASK, -}; - -static const struct of_device_id sirf_l2x0_ids[] __initconst = { - { .compatible = "sirf,prima2-pl310-cache", .data = &prima2_l2x0_aux, }, - { .compatible = "sirf,marco-pl310-cache", .data = &marco_l2x0_aux, }, - {}, -}; - -static int __init sirfsoc_l2x0_init(void) -{ - struct device_node *np; - const struct l2x0_aux *aux; - - np = of_find_matching_node(NULL, sirf_l2x0_ids); - if (np) { - aux = of_match_node(sirf_l2x0_ids, np)->data; - return l2x0_of_init(aux->val, aux->mask); - } - - return 0; -} -early_initcall(sirfsoc_l2x0_init); diff --git a/arch/arm/mach-prima2/pm.c b/arch/arm/mach-prima2/pm.c index c4525a88e5da..96e9bc102117 100644 --- a/arch/arm/mach-prima2/pm.c +++ b/arch/arm/mach-prima2/pm.c @@ -71,7 +71,6 @@ static int sirfsoc_pm_enter(suspend_state_t state) case PM_SUSPEND_MEM: sirfsoc_pre_suspend_power_off(); - outer_flush_all(); outer_disable(); /* go zzz */ cpu_suspend(0, sirfsoc_finish_suspend); diff --git a/arch/arm/mach-pxa/cm-x300.c b/arch/arm/mach-pxa/cm-x300.c index 584439bfa59f..4d3588d26c2a 100644 --- a/arch/arm/mach-pxa/cm-x300.c +++ b/arch/arm/mach-pxa/cm-x300.c @@ -837,8 +837,7 @@ static void __init cm_x300_init(void) cm_x300_init_bl(); } -static void __init cm_x300_fixup(struct tag *tags, char **cmdline, - struct meminfo *mi) +static void __init cm_x300_fixup(struct tag *tags, char **cmdline) { /* Make sure that mi->bank[0].start = PHYS_ADDR */ for (; tags->hdr.size; tags = tag_next(tags)) diff --git a/arch/arm/mach-pxa/corgi.c b/arch/arm/mach-pxa/corgi.c index 57d60542f982..91dd1c7cdbcd 100644 --- a/arch/arm/mach-pxa/corgi.c +++ b/arch/arm/mach-pxa/corgi.c @@ -34,6 +34,7 @@ #include <linux/input/matrix_keypad.h> #include <linux/gpio_keys.h> #include <linux/module.h> +#include <linux/memblock.h> #include <video/w100fb.h> #include <asm/setup.h> @@ -753,16 +754,13 @@ static void __init corgi_init(void) platform_add_devices(devices, ARRAY_SIZE(devices)); } -static void __init fixup_corgi(struct tag *tags, char **cmdline, - struct meminfo *mi) +static void __init fixup_corgi(struct tag *tags, char **cmdline) { sharpsl_save_param(); - mi->nr_banks=1; - mi->bank[0].start = 0xa0000000; if (machine_is_corgi()) - mi->bank[0].size = (32*1024*1024); + memblock_add(0xa0000000, SZ_32M); else - mi->bank[0].size = (64*1024*1024); + memblock_add(0xa0000000, SZ_64M); } #ifdef CONFIG_MACH_CORGI diff --git a/arch/arm/mach-pxa/eseries.c b/arch/arm/mach-pxa/eseries.c index 8280ebcaab9f..cfb864173ce3 100644 --- a/arch/arm/mach-pxa/eseries.c +++ b/arch/arm/mach-pxa/eseries.c @@ -21,6 +21,7 @@ #include <linux/mtd/nand.h> #include <linux/mtd/partitions.h> #include <linux/usb/gpio_vbus.h> +#include <linux/memblock.h> #include <video/w100fb.h> @@ -41,14 +42,12 @@ #include "clock.h" /* Only e800 has 128MB RAM */ -void __init eseries_fixup(struct tag *tags, char **cmdline, struct meminfo *mi) +void __init eseries_fixup(struct tag *tags, char **cmdline) { - mi->nr_banks=1; - mi->bank[0].start = 0xa0000000; if (machine_is_e800()) - mi->bank[0].size = (128*1024*1024); + memblock_add(0xa0000000, SZ_128M); else - 
mi->bank[0].size = (64*1024*1024); + memblock_add(0xa0000000, SZ_64M); } struct gpio_vbus_mach_info e7xx_udc_info = { diff --git a/arch/arm/mach-pxa/poodle.c b/arch/arm/mach-pxa/poodle.c index aedf053a1de5..131991629116 100644 --- a/arch/arm/mach-pxa/poodle.c +++ b/arch/arm/mach-pxa/poodle.c @@ -29,6 +29,7 @@ #include <linux/spi/ads7846.h> #include <linux/spi/pxa2xx_spi.h> #include <linux/mtd/sharpsl.h> +#include <linux/memblock.h> #include <mach/hardware.h> #include <asm/mach-types.h> @@ -456,13 +457,10 @@ static void __init poodle_init(void) poodle_init_spi(); } -static void __init fixup_poodle(struct tag *tags, char **cmdline, - struct meminfo *mi) +static void __init fixup_poodle(struct tag *tags, char **cmdline) { sharpsl_save_param(); - mi->nr_banks=1; - mi->bank[0].start = 0xa0000000; - mi->bank[0].size = (32*1024*1024); + memblock_add(0xa0000000, SZ_32M); } MACHINE_START(POODLE, "SHARP Poodle") diff --git a/arch/arm/mach-pxa/spitz.c b/arch/arm/mach-pxa/spitz.c index 0b11c1af51c4..840c3a48e720 100644 --- a/arch/arm/mach-pxa/spitz.c +++ b/arch/arm/mach-pxa/spitz.c @@ -32,6 +32,7 @@ #include <linux/io.h> #include <linux/module.h> #include <linux/reboot.h> +#include <linux/memblock.h> #include <asm/setup.h> #include <asm/mach-types.h> @@ -971,13 +972,10 @@ static void __init spitz_init(void) spitz_i2c_init(); } -static void __init spitz_fixup(struct tag *tags, char **cmdline, - struct meminfo *mi) +static void __init spitz_fixup(struct tag *tags, char **cmdline) { sharpsl_save_param(); - mi->nr_banks = 1; - mi->bank[0].start = 0xa0000000; - mi->bank[0].size = (64*1024*1024); + memblock_add(0xa0000000, SZ_64M); } #ifdef CONFIG_MACH_SPITZ diff --git a/arch/arm/mach-pxa/tosa.c b/arch/arm/mach-pxa/tosa.c index ef5557b807ed..c158a6e3e0aa 100644 --- a/arch/arm/mach-pxa/tosa.c +++ b/arch/arm/mach-pxa/tosa.c @@ -37,6 +37,7 @@ #include <linux/i2c/pxa-i2c.h> #include <linux/usb/gpio_vbus.h> #include <linux/reboot.h> +#include <linux/memblock.h> #include <asm/setup.h> #include <asm/mach-types.h> @@ -960,13 +961,10 @@ static void __init tosa_init(void) platform_add_devices(devices, ARRAY_SIZE(devices)); } -static void __init fixup_tosa(struct tag *tags, char **cmdline, - struct meminfo *mi) +static void __init fixup_tosa(struct tag *tags, char **cmdline) { sharpsl_save_param(); - mi->nr_banks=1; - mi->bank[0].start = 0xa0000000; - mi->bank[0].size = (64*1024*1024); + memblock_add(0xa0000000, SZ_64M); } MACHINE_START(TOSA, "SHARP Tosa") diff --git a/arch/arm/mach-realview/core.c b/arch/arm/mach-realview/core.c index 960b8dd78c44..8c1b39a0caa0 100644 --- a/arch/arm/mach-realview/core.c +++ b/arch/arm/mach-realview/core.c @@ -31,6 +31,7 @@ #include <linux/amba/mmci.h> #include <linux/gfp.h> #include <linux/mtd/physmap.h> +#include <linux/memblock.h> #include <mach/hardware.h> #include <asm/irq.h> @@ -385,19 +386,15 @@ void __init realview_timer_init(unsigned int timer_irq) /* * Setup the memory banks. */ -void realview_fixup(struct tag *tags, char **from, struct meminfo *meminfo) +void realview_fixup(struct tag *tags, char **from) { /* * Most RealView platforms have 512MB contiguous RAM at 0x70000000. * Half of this is mirrored at 0. 
*/ #ifdef CONFIG_REALVIEW_HIGH_PHYS_OFFSET - meminfo->bank[0].start = 0x70000000; - meminfo->bank[0].size = SZ_512M; - meminfo->nr_banks = 1; + memblock_add(0x70000000, SZ_512M); #else - meminfo->bank[0].start = 0; - meminfo->bank[0].size = SZ_256M; - meminfo->nr_banks = 1; + memblock_add(0, SZ_256M); #endif } diff --git a/arch/arm/mach-realview/core.h b/arch/arm/mach-realview/core.h index 13dc830ef469..868ece221978 100644 --- a/arch/arm/mach-realview/core.h +++ b/arch/arm/mach-realview/core.h @@ -52,8 +52,7 @@ extern int realview_flash_register(struct resource *res, u32 num); extern int realview_eth_register(const char *name, struct resource *res); extern int realview_usb_register(struct resource *res); extern void realview_init_early(void); -extern void realview_fixup(struct tag *tags, char **from, - struct meminfo *meminfo); +extern void realview_fixup(struct tag *tags, char **from); extern struct smp_operations realview_smp_ops; extern void realview_cpu_die(unsigned int cpu); diff --git a/arch/arm/mach-realview/realview_eb.c b/arch/arm/mach-realview/realview_eb.c index 6bb070e80128..739d4f113097 100644 --- a/arch/arm/mach-realview/realview_eb.c +++ b/arch/arm/mach-realview/realview_eb.c @@ -442,8 +442,13 @@ static void __init realview_eb_init(void) realview_eb11mp_fixup(); #ifdef CONFIG_CACHE_L2X0 - /* 1MB (128KB/way), 8-way associativity, evmon/parity/share enabled - * Bits: .... ...0 0111 1001 0000 .... .... .... */ + /* + * The PL220 needs to be manually configured as the hardware + * doesn't report the correct sizes. + * 1MB (128KB/way), 8-way associativity, event monitor and + * parity enabled, ignore share bit, no force write allocate + * Bits: .... ...0 0111 1001 0000 .... .... .... + */ l2x0_init(__io_address(REALVIEW_EB11MP_L220_BASE), 0x00790000, 0xfe000fff); #endif platform_device_register(&pmu_device); diff --git a/arch/arm/mach-realview/realview_pb1176.c b/arch/arm/mach-realview/realview_pb1176.c index 173f2c15de49..b0e0dcaed944 100644 --- a/arch/arm/mach-realview/realview_pb1176.c +++ b/arch/arm/mach-realview/realview_pb1176.c @@ -32,6 +32,7 @@ #include <linux/irqchip/arm-gic.h> #include <linux/platform_data/clk-realview.h> #include <linux/reboot.h> +#include <linux/memblock.h> #include <mach/hardware.h> #include <asm/irq.h> @@ -339,15 +340,12 @@ static void realview_pb1176_restart(enum reboot_mode mode, const char *cmd) dsb(); } -static void realview_pb1176_fixup(struct tag *tags, char **from, - struct meminfo *meminfo) +static void realview_pb1176_fixup(struct tag *tags, char **from) { /* * RealView PB1176 only has 128MB of RAM mapped at 0. */ - meminfo->bank[0].start = 0; - meminfo->bank[0].size = SZ_128M; - meminfo->nr_banks = 1; + memblock_add(0, SZ_128M); } static void __init realview_pb1176_init(void) @@ -355,7 +353,13 @@ static void __init realview_pb1176_init(void) int i; #ifdef CONFIG_CACHE_L2X0 - /* 128Kb (16Kb/way) 8-way associativity. evmon/parity/share enabled. */ + /* + * The PL220 needs to be manually configured as the hardware + * doesn't report the correct sizes. + * 128kB (16kB/way), 8-way associativity, event monitor and + * parity enabled, ignore share bit, no force write allocate + * Bits: .... ...0 0111 0011 0000 .... .... .... 
+ */ l2x0_init(__io_address(REALVIEW_PB1176_L220_BASE), 0x00730000, 0xfe000fff); #endif diff --git a/arch/arm/mach-realview/realview_pb11mp.c b/arch/arm/mach-realview/realview_pb11mp.c index bde7e6b1fd44..47bf55fdbf27 100644 --- a/arch/arm/mach-realview/realview_pb11mp.c +++ b/arch/arm/mach-realview/realview_pb11mp.c @@ -337,8 +337,13 @@ static void __init realview_pb11mp_init(void) int i; #ifdef CONFIG_CACHE_L2X0 - /* 1MB (128KB/way), 8-way associativity, evmon/parity/share enabled - * Bits: .... ...0 0111 1001 0000 .... .... .... */ + /* + * The PL220 needs to be manually configured as the hardware + * doesn't report the correct sizes. + * 1MB (128KB/way), 8-way associativity, event monitor and + * parity enabled, ignore share bit, no force write allocate + * Bits: .... ...0 0111 1001 0000 .... .... .... + */ l2x0_init(__io_address(REALVIEW_TC11MP_L220_BASE), 0x00790000, 0xfe000fff); #endif diff --git a/arch/arm/mach-realview/realview_pbx.c b/arch/arm/mach-realview/realview_pbx.c index 72c96caebefa..d89eb4023467 100644 --- a/arch/arm/mach-realview/realview_pbx.c +++ b/arch/arm/mach-realview/realview_pbx.c @@ -29,6 +29,7 @@ #include <linux/irqchip/arm-gic.h> #include <linux/platform_data/clk-realview.h> #include <linux/reboot.h> +#include <linux/memblock.h> #include <asm/irq.h> #include <asm/mach-types.h> @@ -325,23 +326,19 @@ static void __init realview_pbx_timer_init(void) realview_pbx_twd_init(); } -static void realview_pbx_fixup(struct tag *tags, char **from, - struct meminfo *meminfo) +static void realview_pbx_fixup(struct tag *tags, char **from) { #ifdef CONFIG_SPARSEMEM /* * Memory configuration with SPARSEMEM enabled on RealView PBX (see * asm/mach/memory.h for more information). */ - meminfo->bank[0].start = 0; - meminfo->bank[0].size = SZ_256M; - meminfo->bank[1].start = 0x20000000; - meminfo->bank[1].size = SZ_512M; - meminfo->bank[2].start = 0x80000000; - meminfo->bank[2].size = SZ_256M; - meminfo->nr_banks = 3; + + memblock_add(0, SZ_256M); + memblock_add(0x20000000, SZ_512M); + memblock_add(0x80000000, SZ_256M); #else - realview_fixup(tags, from, meminfo); + realview_fixup(tags, from); #endif } @@ -370,8 +367,8 @@ static void __init realview_pbx_init(void) __io_address(REALVIEW_PBX_TILE_L220_BASE); /* set RAM latencies to 1 cycle for eASIC */ - writel(0, l2x0_base + L2X0_TAG_LATENCY_CTRL); - writel(0, l2x0_base + L2X0_DATA_LATENCY_CTRL); + writel(0, l2x0_base + L310_TAG_LATENCY_CTRL); + writel(0, l2x0_base + L310_DATA_LATENCY_CTRL); /* 16KB way size, 8-way associativity, parity disabled * Bits: .. 0 0 0 0 1 00 1 0 1 001 0 000 0 .... .... .... 
*/ diff --git a/arch/arm/mach-rockchip/rockchip.c b/arch/arm/mach-rockchip/rockchip.c index 4499b0a31a27..968cc348e624 100644 --- a/arch/arm/mach-rockchip/rockchip.c +++ b/arch/arm/mach-rockchip/rockchip.c @@ -24,12 +24,6 @@ #include <asm/hardware/cache-l2x0.h> #include "core.h" -static void __init rockchip_dt_init(void) -{ - l2x0_of_init(0, ~0UL); - of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); -} - static const char * const rockchip_board_dt_compat[] = { "rockchip,rk2928", "rockchip,rk3066a", @@ -39,6 +33,7 @@ static const char * const rockchip_board_dt_compat[] = { }; DT_MACHINE_START(ROCKCHIP_DT, "Rockchip Cortex-A9 (Device Tree)") - .init_machine = rockchip_dt_init, + .l2c_aux_val = 0, + .l2c_aux_mask = ~0, .dt_compat = rockchip_board_dt_compat, MACHINE_END diff --git a/arch/arm/mach-s3c24xx/mach-smdk2413.c b/arch/arm/mach-s3c24xx/mach-smdk2413.c index a38f8a049e22..fb3b80e44595 100644 --- a/arch/arm/mach-s3c24xx/mach-smdk2413.c +++ b/arch/arm/mach-s3c24xx/mach-smdk2413.c @@ -22,6 +22,7 @@ #include <linux/serial_s3c.h> #include <linux/platform_device.h> #include <linux/io.h> +#include <linux/memblock.h> #include <asm/mach/arch.h> #include <asm/mach/map.h> @@ -93,13 +94,10 @@ static struct platform_device *smdk2413_devices[] __initdata = { &s3c2412_device_dma, }; -static void __init smdk2413_fixup(struct tag *tags, char **cmdline, - struct meminfo *mi) +static void __init smdk2413_fixup(struct tag *tags, char **cmdline) { if (tags != phys_to_virt(S3C2410_SDRAM_PA + 0x100)) { - mi->nr_banks=1; - mi->bank[0].start = 0x30000000; - mi->bank[0].size = SZ_64M; + memblock_add(0x30000000, SZ_64M); } } diff --git a/arch/arm/mach-s3c24xx/mach-vstms.c b/arch/arm/mach-s3c24xx/mach-vstms.c index 6b706c915387..9104c2be36c9 100644 --- a/arch/arm/mach-s3c24xx/mach-vstms.c +++ b/arch/arm/mach-s3c24xx/mach-vstms.c @@ -23,6 +23,7 @@ #include <linux/mtd/nand.h> #include <linux/mtd/nand_ecc.h> #include <linux/mtd/partitions.h> +#include <linux/memblock.h> #include <asm/mach/arch.h> #include <asm/mach/map.h> @@ -129,13 +130,10 @@ static struct platform_device *vstms_devices[] __initdata = { &s3c2412_device_dma, }; -static void __init vstms_fixup(struct tag *tags, char **cmdline, - struct meminfo *mi) +static void __init vstms_fixup(struct tag *tags, char **cmdline) { if (tags != phys_to_virt(S3C2410_SDRAM_PA + 0x100)) { - mi->nr_banks=1; - mi->bank[0].start = 0x30000000; - mi->bank[0].size = SZ_64M; + memblock_add(0x30000000, SZ_64M); } } diff --git a/arch/arm/mach-sa1100/assabet.c b/arch/arm/mach-sa1100/assabet.c index 8443a27bca2f..7dd894ece9ae 100644 --- a/arch/arm/mach-sa1100/assabet.c +++ b/arch/arm/mach-sa1100/assabet.c @@ -531,7 +531,7 @@ static void __init get_assabet_scr(void) } static void __init -fixup_assabet(struct tag *tags, char **cmdline, struct meminfo *mi) +fixup_assabet(struct tag *tags, char **cmdline) { /* This must be done before any call to machine_has_neponset() */ map_sa1100_gpio_regs(); diff --git a/arch/arm/mach-shmobile/board-armadillo800eva-reference.c b/arch/arm/mach-shmobile/board-armadillo800eva-reference.c index 57d246eb8813..f660fbb96e0b 100644 --- a/arch/arm/mach-shmobile/board-armadillo800eva-reference.c +++ b/arch/arm/mach-shmobile/board-armadillo800eva-reference.c @@ -164,8 +164,8 @@ static void __init eva_init(void) r8a7740_meram_workaround(); #ifdef CONFIG_CACHE_L2X0 - /* Early BRESP enable, Shared attribute override enable, 32K*8way */ - l2x0_init(IOMEM(0xf0002000), 0x40440000, 0x82000fff); + /* Shared attribute override enable, 32K*8way */ + 
l2x0_init(IOMEM(0xf0002000), 0x00400000, 0xc20f0fff); #endif r8a7740_add_standard_devices_dt(); diff --git a/arch/arm/mach-shmobile/board-armadillo800eva.c b/arch/arm/mach-shmobile/board-armadillo800eva.c index bc2cf7a89534..01f81100c330 100644 --- a/arch/arm/mach-shmobile/board-armadillo800eva.c +++ b/arch/arm/mach-shmobile/board-armadillo800eva.c @@ -1271,8 +1271,8 @@ static void __init eva_init(void) #ifdef CONFIG_CACHE_L2X0 - /* Early BRESP enable, Shared attribute override enable, 32K*8way */ - l2x0_init(IOMEM(0xf0002000), 0x40440000, 0x82000fff); + /* Shared attribute override enable, 32K*8way */ + l2x0_init(IOMEM(0xf0002000), 0x00400000, 0xc20f0fff); #endif i2c_register_board_info(0, i2c0_devices, ARRAY_SIZE(i2c0_devices)); diff --git a/arch/arm/mach-shmobile/board-kzm9g-reference.c b/arch/arm/mach-shmobile/board-kzm9g-reference.c index 598e32488410..a735a1d80c28 100644 --- a/arch/arm/mach-shmobile/board-kzm9g-reference.c +++ b/arch/arm/mach-shmobile/board-kzm9g-reference.c @@ -36,8 +36,8 @@ static void __init kzm_init(void) sh73a0_add_standard_devices_dt(); #ifdef CONFIG_CACHE_L2X0 - /* Early BRESP enable, Shared attribute override enable, 64K*8way */ - l2x0_init(IOMEM(0xf0100000), 0x40460000, 0x82000fff); + /* Shared attribute override enable, 64K*8way */ + l2x0_init(IOMEM(0xf0100000), 0x00400000, 0xc20f0fff); #endif } diff --git a/arch/arm/mach-shmobile/board-kzm9g.c b/arch/arm/mach-shmobile/board-kzm9g.c index 03dc3ac84502..f94ec8ca42c1 100644 --- a/arch/arm/mach-shmobile/board-kzm9g.c +++ b/arch/arm/mach-shmobile/board-kzm9g.c @@ -876,8 +876,8 @@ static void __init kzm_init(void) gpio_request_one(223, GPIOF_IN, NULL); /* IRQ8 */ #ifdef CONFIG_CACHE_L2X0 - /* Early BRESP enable, Shared attribute override enable, 64K*8way */ - l2x0_init(IOMEM(0xf0100000), 0x40460000, 0x82000fff); + /* Shared attribute override enable, 64K*8way */ + l2x0_init(IOMEM(0xf0100000), 0x00400000, 0xc20f0fff); #endif i2c_register_board_info(0, i2c0_devices, ARRAY_SIZE(i2c0_devices)); diff --git a/arch/arm/mach-shmobile/setup-r8a7778.c b/arch/arm/mach-shmobile/setup-r8a7778.c index 8c02e24f2483..d311ef903b39 100644 --- a/arch/arm/mach-shmobile/setup-r8a7778.c +++ b/arch/arm/mach-shmobile/setup-r8a7778.c @@ -285,10 +285,10 @@ void __init r8a7778_add_dt_devices(void) void __iomem *base = ioremap_nocache(0xf0100000, 0x1000); if (base) { /* - * Early BRESP enable, Shared attribute override enable, 64K*16way + * Shared attribute override enable, 64K*16way * don't call iounmap(base) */ - l2x0_init(base, 0x40470000, 0x82000fff); + l2x0_init(base, 0x00400000, 0xc20f0fff); } #endif diff --git a/arch/arm/mach-shmobile/setup-r8a7779.c b/arch/arm/mach-shmobile/setup-r8a7779.c index d197b5adc886..aba4ed652d54 100644 --- a/arch/arm/mach-shmobile/setup-r8a7779.c +++ b/arch/arm/mach-shmobile/setup-r8a7779.c @@ -660,8 +660,8 @@ static struct platform_device *r8a7779_standard_devices[] __initdata = { void __init r8a7779_add_standard_devices(void) { #ifdef CONFIG_CACHE_L2X0 - /* Early BRESP enable, Shared attribute override enable, 64K*16way */ - l2x0_init(IOMEM(0xf0100000), 0x40470000, 0x82000fff); + /* Shared attribute override enable, 64K*16way */ + l2x0_init(IOMEM(0xf0100000), 0x00400000, 0xc20f0fff); #endif r8a7779_pm_init(); diff --git a/arch/arm/mach-socfpga/socfpga.c b/arch/arm/mach-socfpga/socfpga.c index d86231e11b34..adbf38314ca8 100644 --- a/arch/arm/mach-socfpga/socfpga.c +++ b/arch/arm/mach-socfpga/socfpga.c @@ -98,22 +98,17 @@ static void socfpga_cyclone5_restart(enum reboot_mode mode, const char *cmd) 
writel(temp, rst_manager_base_addr + SOCFPGA_RSTMGR_CTRL); } -static void __init socfpga_cyclone5_init(void) -{ - l2x0_of_init(0, ~0UL); - of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); -} - static const char *altera_dt_match[] = { "altr,socfpga", NULL }; DT_MACHINE_START(SOCFPGA, "Altera SOCFPGA") + .l2c_aux_val = 0, + .l2c_aux_mask = ~0, .smp = smp_ops(socfpga_smp_ops), .map_io = socfpga_map_io, .init_irq = socfpga_init_irq, - .init_machine = socfpga_cyclone5_init, .restart = socfpga_cyclone5_restart, .dt_compat = altera_dt_match, MACHINE_END diff --git a/arch/arm/mach-spear/platsmp.c b/arch/arm/mach-spear/platsmp.c index c19751fff2c6..fd4297713d67 100644 --- a/arch/arm/mach-spear/platsmp.c +++ b/arch/arm/mach-spear/platsmp.c @@ -20,6 +20,18 @@ #include <mach/spear.h> #include "generic.h" +/* + * Write pen_release in a way that is guaranteed to be visible to all + * observers, irrespective of whether they're taking part in coherency + * or not. This is necessary for the hotplug code to work reliably. + */ +static void write_pen_release(int val) +{ + pen_release = val; + smp_wmb(); + sync_cache_w(&pen_release); +} + static DEFINE_SPINLOCK(boot_lock); static void __iomem *scu_base = IOMEM(VA_SCU_BASE); @@ -30,8 +42,7 @@ static void spear13xx_secondary_init(unsigned int cpu) * let the primary processor know we're out of the * pen, then head off into the C entry point */ - pen_release = -1; - smp_wmb(); + write_pen_release(-1); /* * Synchronise with the boot thread. @@ -58,9 +69,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle) * Note that "pen_release" is the hardware CPU ID, whereas * "cpu" is Linux's internal ID. */ - pen_release = cpu; - flush_cache_all(); - outer_flush_all(); + write_pen_release(cpu); timeout = jiffies + (1 * HZ); while (time_before(jiffies, timeout)) { diff --git a/arch/arm/mach-spear/spear13xx.c b/arch/arm/mach-spear/spear13xx.c index 7aa6e8cf830f..c9897ea38980 100644 --- a/arch/arm/mach-spear/spear13xx.c +++ b/arch/arm/mach-spear/spear13xx.c @@ -38,15 +38,15 @@ void __init spear13xx_l2x0_init(void) if (!IS_ENABLED(CONFIG_CACHE_L2X0)) return; - writel_relaxed(0x06, VA_L2CC_BASE + L2X0_PREFETCH_CTRL); + writel_relaxed(0x06, VA_L2CC_BASE + L310_PREFETCH_CTRL); /* * Program following latencies in order to make * SPEAr1340 work at 600 MHz */ - writel_relaxed(0x221, VA_L2CC_BASE + L2X0_TAG_LATENCY_CTRL); - writel_relaxed(0x441, VA_L2CC_BASE + L2X0_DATA_LATENCY_CTRL); - l2x0_init(VA_L2CC_BASE, 0x70A60001, 0xfe00ffff); + writel_relaxed(0x221, VA_L2CC_BASE + L310_TAG_LATENCY_CTRL); + writel_relaxed(0x441, VA_L2CC_BASE + L310_DATA_LATENCY_CTRL); + l2x0_init(VA_L2CC_BASE, 0x30a00001, 0xfe0fffff); } /* diff --git a/arch/arm/mach-sti/board-dt.c b/arch/arm/mach-sti/board-dt.c index df731f2322fa..3cf6ef8d4317 100644 --- a/arch/arm/mach-sti/board-dt.c +++ b/arch/arm/mach-sti/board-dt.c @@ -14,25 +14,6 @@ #include "smp.h" -void __init stih41x_l2x0_init(void) -{ - u32 way_size = 0x4; - u32 aux_ctrl; - /* may be this can be encoded in macros like BIT*() */ - aux_ctrl = (0x1 << L2X0_AUX_CTRL_SHARE_OVERRIDE_SHIFT) | - (0x1 << L2X0_AUX_CTRL_DATA_PREFETCH_SHIFT) | - (0x1 << L2X0_AUX_CTRL_INSTR_PREFETCH_SHIFT) | - (way_size << L2X0_AUX_CTRL_WAY_SIZE_SHIFT); - - l2x0_of_init(aux_ctrl, L2X0_AUX_CTRL_MASK); -} - -static void __init stih41x_machine_init(void) -{ - stih41x_l2x0_init(); - of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); -} - static const char *stih41x_dt_match[] __initdata = { "st,stih415", "st,stih416", 
@@ -41,7 +22,11 @@ static const char *stih41x_dt_match[] __initdata = { }; DT_MACHINE_START(STM, "STiH415/416 SoC with Flattened Device Tree") - .init_machine = stih41x_machine_init, - .smp = smp_ops(sti_smp_ops), .dt_compat = stih41x_dt_match, + .l2c_aux_val = L2C_AUX_CTRL_SHARED_OVERRIDE | + L310_AUX_CTRL_DATA_PREFETCH | + L310_AUX_CTRL_INSTR_PREFETCH | + L2C_AUX_CTRL_WAY_SIZE(4), + .l2c_aux_mask = 0xc0000fff, + .smp = smp_ops(sti_smp_ops), MACHINE_END diff --git a/arch/arm/mach-tegra/pm.h b/arch/arm/mach-tegra/pm.h index 6e92a7c2ecbd..f4a89698e5b0 100644 --- a/arch/arm/mach-tegra/pm.h +++ b/arch/arm/mach-tegra/pm.h @@ -35,8 +35,6 @@ void tegra20_sleep_core_init(void); void tegra30_lp1_iram_hook(void); void tegra30_sleep_core_init(void); -extern unsigned long l2x0_saved_regs_addr; - void tegra_clear_cpu_in_lp2(void); bool tegra_set_cpu_in_lp2(void); diff --git a/arch/arm/mach-tegra/reset-handler.S b/arch/arm/mach-tegra/reset-handler.S index 8c1ba4fea384..578d4d1ad648 100644 --- a/arch/arm/mach-tegra/reset-handler.S +++ b/arch/arm/mach-tegra/reset-handler.S @@ -19,7 +19,6 @@ #include <asm/cache.h> #include <asm/asm-offsets.h> -#include <asm/hardware/cache-l2x0.h> #include "flowctrl.h" #include "fuse.h" @@ -78,8 +77,10 @@ ENTRY(tegra_resume) str r1, [r0] #endif +#ifdef CONFIG_CACHE_L2X0 /* L2 cache resume & re-enable */ - l2_cache_resume r0, r1, r2, l2x0_saved_regs_addr + bl l2c310_early_resume +#endif end_ca9_scu_l2_resume: mov32 r9, 0xc0f cmp r8, r9 @@ -89,12 +90,6 @@ end_ca9_scu_l2_resume: ENDPROC(tegra_resume) #endif -#ifdef CONFIG_CACHE_L2X0 - .globl l2x0_saved_regs_addr -l2x0_saved_regs_addr: - .long 0 -#endif - .align L1_CACHE_SHIFT ENTRY(__tegra_cpu_reset_handler_start) diff --git a/arch/arm/mach-tegra/sleep.h b/arch/arm/mach-tegra/sleep.h index a4edbb3abd3d..339fe42cd6fb 100644 --- a/arch/arm/mach-tegra/sleep.h +++ b/arch/arm/mach-tegra/sleep.h @@ -120,37 +120,6 @@ mov \tmp1, \tmp1, lsr #8 .endm -/* Macro to resume & re-enable L2 cache */ -#ifndef L2X0_CTRL_EN -#define L2X0_CTRL_EN 1 -#endif - -#ifdef CONFIG_CACHE_L2X0 -.macro l2_cache_resume, tmp1, tmp2, tmp3, phys_l2x0_saved_regs - W(adr) \tmp1, \phys_l2x0_saved_regs - ldr \tmp1, [\tmp1] - ldr \tmp2, [\tmp1, #L2X0_R_PHY_BASE] - ldr \tmp3, [\tmp2, #L2X0_CTRL] - tst \tmp3, #L2X0_CTRL_EN - bne exit_l2_resume - ldr \tmp3, [\tmp1, #L2X0_R_TAG_LATENCY] - str \tmp3, [\tmp2, #L2X0_TAG_LATENCY_CTRL] - ldr \tmp3, [\tmp1, #L2X0_R_DATA_LATENCY] - str \tmp3, [\tmp2, #L2X0_DATA_LATENCY_CTRL] - ldr \tmp3, [\tmp1, #L2X0_R_PREFETCH_CTRL] - str \tmp3, [\tmp2, #L2X0_PREFETCH_CTRL] - ldr \tmp3, [\tmp1, #L2X0_R_PWR_CTRL] - str \tmp3, [\tmp2, #L2X0_POWER_CTRL] - ldr \tmp3, [\tmp1, #L2X0_R_AUX_CTRL] - str \tmp3, [\tmp2, #L2X0_AUX_CTRL] - mov \tmp3, #L2X0_CTRL_EN - str \tmp3, [\tmp2, #L2X0_CTRL] -exit_l2_resume: -.endm -#else /* CONFIG_CACHE_L2X0 */ -.macro l2_cache_resume, tmp1, tmp2, tmp3, phys_l2x0_saved_regs -.endm -#endif /* CONFIG_CACHE_L2X0 */ #else void tegra_pen_lock(void); void tegra_pen_unlock(void); diff --git a/arch/arm/mach-tegra/tegra.c b/arch/arm/mach-tegra/tegra.c index 6191603379e1..15ac9fcc96b1 100644 --- a/arch/arm/mach-tegra/tegra.c +++ b/arch/arm/mach-tegra/tegra.c @@ -70,40 +70,12 @@ u32 tegra_uart_config[3] = { 0, }; -static void __init tegra_init_cache(void) -{ -#ifdef CONFIG_CACHE_L2X0 - static const struct of_device_id pl310_ids[] __initconst = { - { .compatible = "arm,pl310-cache", }, - {} - }; - - struct device_node *np; - int ret; - void __iomem *p = IO_ADDRESS(TEGRA_ARM_PERIF_BASE) + 0x3000; - u32 aux_ctrl, cache_type; - 
- np = of_find_matching_node(NULL, pl310_ids); - if (!np) - return; - - cache_type = readl(p + L2X0_CACHE_TYPE); - aux_ctrl = (cache_type & 0x700) << (17-8); - aux_ctrl |= 0x7C400001; - - ret = l2x0_of_init(aux_ctrl, 0x8200c3fe); - if (!ret) - l2x0_saved_regs_addr = virt_to_phys(&l2x0_saved_regs); -#endif -} - static void __init tegra_init_early(void) { of_register_trusted_foundations(); tegra_apb_io_init(); tegra_init_fuse(); tegra_cpu_reset_handler_init(); - tegra_init_cache(); tegra_powergate_init(); tegra_hotplug_init(); } @@ -191,8 +163,10 @@ static const char * const tegra_dt_board_compat[] = { }; DT_MACHINE_START(TEGRA_DT, "NVIDIA Tegra SoC (Flattened Device Tree)") - .map_io = tegra_map_common_io, + .l2c_aux_val = 0x3c400001, + .l2c_aux_mask = 0xc20fc3fe, .smp = smp_ops(tegra_smp_ops), + .map_io = tegra_map_common_io, .init_early = tegra_init_early, .init_irq = tegra_dt_init_irq, .init_machine = tegra_dt_init, diff --git a/arch/arm/mach-ux500/cache-l2x0.c b/arch/arm/mach-ux500/cache-l2x0.c index 264f894c0e3d..842ebedbdd1c 100644 --- a/arch/arm/mach-ux500/cache-l2x0.c +++ b/arch/arm/mach-ux500/cache-l2x0.c @@ -35,10 +35,16 @@ static int __init ux500_l2x0_unlock(void) return 0; } -static int __init ux500_l2x0_init(void) +static void ux500_l2c310_write_sec(unsigned long val, unsigned reg) { - u32 aux_val = 0x3e000000; + /* + * We can't write to secure registers as we are in non-secure + * mode, until we have some SMI service available. + */ +} +static int __init ux500_l2x0_init(void) +{ if (cpu_is_u8500_family() || cpu_is_ux540_family()) l2x0_base = __io_address(U8500_L2CC_BASE); else @@ -48,28 +54,12 @@ static int __init ux500_l2x0_init(void) /* Unlock before init */ ux500_l2x0_unlock(); - /* DBx540's L2 has 128KB way size */ - if (cpu_is_ux540_family()) - /* 128KB way size */ - aux_val |= (0x4 << L2X0_AUX_CTRL_WAY_SIZE_SHIFT); - else - /* 64KB way size */ - aux_val |= (0x3 << L2X0_AUX_CTRL_WAY_SIZE_SHIFT); + outer_cache.write_sec = ux500_l2c310_write_sec; - /* 64KB way size, 8 way associativity, force WA */ if (of_have_populated_dt()) - l2x0_of_init(aux_val, 0xc0000fff); + l2x0_of_init(0, ~0); else - l2x0_init(l2x0_base, aux_val, 0xc0000fff); - - /* - * We can't disable l2 as we are in non secure mode, currently - * this seems be called only during kexec path. So let's - * override outer.disable with nasty assignment until we have - * some SMI service available. - */ - outer_cache.disable = NULL; - outer_cache.set_debug = NULL; + l2x0_init(l2x0_base, 0, ~0); return 0; } diff --git a/arch/arm/mach-vexpress/ct-ca9x4.c b/arch/arm/mach-vexpress/ct-ca9x4.c index 494d70bfddad..86150d7a2e7d 100644 --- a/arch/arm/mach-vexpress/ct-ca9x4.c +++ b/arch/arm/mach-vexpress/ct-ca9x4.c @@ -45,6 +45,23 @@ static void __init ct_ca9x4_map_io(void) iotable_init(ct_ca9x4_io_desc, ARRAY_SIZE(ct_ca9x4_io_desc)); } +static void __init ca9x4_l2_init(void) +{ +#ifdef CONFIG_CACHE_L2X0 + void __iomem *l2x0_base = ioremap(CT_CA9X4_L2CC, SZ_4K); + + if (l2x0_base) { + /* set RAM latencies to 1 cycle for this core tile. 
*/ + writel(0, l2x0_base + L310_TAG_LATENCY_CTRL); + writel(0, l2x0_base + L310_DATA_LATENCY_CTRL); + + l2x0_init(l2x0_base, 0x00400000, 0xfe0fffff); + } else { + pr_err("L2C: unable to map L2 cache controller\n"); + } +#endif +} + #ifdef CONFIG_HAVE_ARM_TWD static DEFINE_TWD_LOCAL_TIMER(twd_local_timer, A9_MPCORE_TWD, IRQ_LOCALTIMER); @@ -63,6 +80,7 @@ static void __init ct_ca9x4_init_irq(void) gic_init(0, 29, ioremap(A9_MPCORE_GIC_DIST, SZ_4K), ioremap(A9_MPCORE_GIC_CPU, SZ_256)); ca9x4_twd_init(); + ca9x4_l2_init(); } static int ct_ca9x4_clcd_setup(struct clcd_fb *fb) @@ -146,16 +164,6 @@ static void __init ct_ca9x4_init(void) { int i; -#ifdef CONFIG_CACHE_L2X0 - void __iomem *l2x0_base = ioremap(CT_CA9X4_L2CC, SZ_4K); - - /* set RAM latencies to 1 cycle for this core tile. */ - writel(0, l2x0_base + L2X0_TAG_LATENCY_CTRL); - writel(0, l2x0_base + L2X0_DATA_LATENCY_CTRL); - - l2x0_init(l2x0_base, 0x00400000, 0xfe0fffff); -#endif - for (i = 0; i < ARRAY_SIZE(ct_ca9x4_amba_devs); i++) amba_device_register(ct_ca9x4_amba_devs[i], &iomem_resource); diff --git a/arch/arm/mach-vexpress/tc2_pm.c b/arch/arm/mach-vexpress/tc2_pm.c index 29e7785a54bc..b743a0ae02ce 100644 --- a/arch/arm/mach-vexpress/tc2_pm.c +++ b/arch/arm/mach-vexpress/tc2_pm.c @@ -209,7 +209,7 @@ static int tc2_core_in_reset(unsigned int cpu, unsigned int cluster) #define POLL_MSEC 10 #define TIMEOUT_MSEC 1000 -static int tc2_pm_power_down_finish(unsigned int cpu, unsigned int cluster) +static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster) { unsigned tries; @@ -290,7 +290,7 @@ static void tc2_pm_powered_up(void) static const struct mcpm_platform_ops tc2_pm_power_ops = { .power_up = tc2_pm_power_up, .power_down = tc2_pm_power_down, - .power_down_finish = tc2_pm_power_down_finish, + .wait_for_powerdown = tc2_pm_wait_for_powerdown, .suspend = tc2_pm_suspend, .powered_up = tc2_pm_powered_up, }; diff --git a/arch/arm/mach-vexpress/v2m.c b/arch/arm/mach-vexpress/v2m.c index 38f4f6f37770..6ff681a24ba7 100644 --- a/arch/arm/mach-vexpress/v2m.c +++ b/arch/arm/mach-vexpress/v2m.c @@ -372,7 +372,6 @@ MACHINE_END static void __init v2m_dt_init(void) { - l2x0_of_init(0x00400000, 0xfe0fffff); of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); } @@ -383,6 +382,8 @@ static const char * const v2m_dt_match[] __initconst = { DT_MACHINE_START(VEXPRESS_DT, "ARM-Versatile Express") .dt_compat = v2m_dt_match, + .l2c_aux_val = 0x00400000, + .l2c_aux_mask = 0xfe0fffff, .smp = smp_ops(vexpress_smp_dt_ops), .smp_init = smp_init_ops(vexpress_smp_init_ops), .init_machine = v2m_dt_init, diff --git a/arch/arm/mach-zynq/common.c b/arch/arm/mach-zynq/common.c index edbd9d83f407..31a6fa40ba37 100644 --- a/arch/arm/mach-zynq/common.c +++ b/arch/arm/mach-zynq/common.c @@ -109,11 +109,6 @@ static void __init zynq_init_machine(void) struct soc_device *soc_dev; struct device *parent = NULL; - /* - * 64KB way size, 8-way associativity, parity disabled - */ - l2x0_of_init(0x02060000, 0xF0F0FFFF); - soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL); if (!soc_dev_attr) goto out; @@ -202,6 +197,9 @@ static const char * const zynq_dt_match[] = { }; DT_MACHINE_START(XILINX_EP107, "Xilinx Zynq Platform") + /* 64KB way size, 8-way associativity, parity disabled */ + .l2c_aux_val = 0x02000000, + .l2c_aux_mask = 0xf0ffffff, .smp = smp_ops(zynq_smp_ops), .map_io = zynq_map_io, .init_irq = zynq_irq_init, diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig index 5bf7c3c3b301..eda0dd0ab97b 100644 --- a/arch/arm/mm/Kconfig +++ 
b/arch/arm/mm/Kconfig @@ -897,6 +897,57 @@ config CACHE_PL310 This option enables optimisations for the PL310 cache controller. +config PL310_ERRATA_588369 + bool "PL310 errata: Clean & Invalidate maintenance operations do not invalidate clean lines" + depends on CACHE_L2X0 + help + The PL310 L2 cache controller implements three types of Clean & + Invalidate maintenance operations: by Physical Address + (offset 0x7F0), by Index/Way (0x7F8) and by Way (0x7FC). + They are architecturally defined to behave as the execution of a + clean operation followed immediately by an invalidate operation, + both performed on the same memory location. This functionality + is not correctly implemented in PL310, as clean lines are not + invalidated as a result of these operations. + +config PL310_ERRATA_727915 + bool "PL310 errata: Background Clean & Invalidate by Way operation can cause data corruption" + depends on CACHE_L2X0 + help + PL310 implements the Clean & Invalidate by Way L2 cache maintenance + operation (offset 0x7FC). This operation runs in the background so that + PL310 can handle normal accesses while it is in progress. Under very + rare circumstances, due to this erratum, write data can be lost when + the PL310 handles a cacheable write transaction during a Clean & + Invalidate by Way operation. + +config PL310_ERRATA_753970 + bool "PL310 errata: cache sync operation may be faulty" + depends on CACHE_PL310 + help + This option enables the workaround for the 753970 PL310 (r3p0) erratum. + + Under some conditions, the effect of the cache sync operation on + the store buffer still remains when the operation completes. + This means that the store buffer is always asked to drain, which + prevents it from merging any further writes. The workaround + is to replace the normal offset of the cache sync operation (0x730) + with another offset targeting an unmapped PL310 register, 0x740. + This has the same effect as the cache sync operation: the store + buffer is drained and we wait for all buffers to empty. + +config PL310_ERRATA_769419 + bool "PL310 errata: no automatic Store Buffer drain" + depends on CACHE_L2X0 + help + On revisions of the PL310 prior to r3p2, the Store Buffer does + not automatically drain. This can cause normal, non-cacheable + writes to be retained when the memory system is idle, leading + to suboptimal I/O performance for drivers using coherent DMA. + This option adds a write barrier to the cpu_idle loop so that, + on systems with an outer cache, the store buffer is drained + explicitly.
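[Editor's aside] To make the 753970 entry above more concrete: the workaround amounts to keeping the sync offset in a variable and, on affected r3p0 parts, pointing it at the unmapped 0x740 register instead of the architected Cache Sync register at 0x730. The fragment below is an illustrative sketch of that idea using register names from cache-l2x0.h; it mirrors the sync_reg_offset handling in the reworked cache-l2x0.c later in this diff, but is not the in-tree code, and the helper names are hypothetical.

/*
 * Illustrative sketch of the PL310 erratum 753970 workaround described
 * above: on r3p0 parts, drain the store buffer through the unmapped
 * dummy register (0x740) rather than the Cache Sync register (0x730).
 */
#include <linux/kernel.h>
#include <linux/io.h>
#include <asm/hardware/cache-l2x0.h>

static unsigned long example_sync_offset = L2X0_CACHE_SYNC;	/* 0x730 */

static void __init example_753970_fixup(void __iomem *base)
{
	u32 rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_RTL_MASK;

	if (IS_ENABLED(CONFIG_PL310_ERRATA_753970) &&
	    rev == L310_CACHE_ID_RTL_R3P0)
		example_sync_offset = L2X0_DUMMY_REG;	/* 0x740 */
}

static void example_cache_sync(void __iomem *base)
{
	/* A write to either offset drains the store buffer */
	writel_relaxed(0, base + example_sync_offset);
}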
+ config CACHE_TAUROS2 bool "Enable the Tauros2 L2 cache controller" depends on (ARCH_DOVE || ARCH_MMP || CPU_PJ4) diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile index 7f39ce2f841f..91da64de440f 100644 --- a/arch/arm/mm/Makefile +++ b/arch/arm/mm/Makefile @@ -95,7 +95,8 @@ obj-$(CONFIG_CPU_V7M) += proc-v7m.o AFLAGS_proc-v6.o :=-Wa,-march=armv6 AFLAGS_proc-v7.o :=-Wa,-march=armv7-a +obj-$(CONFIG_OUTER_CACHE) += l2c-common.o obj-$(CONFIG_CACHE_FEROCEON_L2) += cache-feroceon-l2.o -obj-$(CONFIG_CACHE_L2X0) += cache-l2x0.o +obj-$(CONFIG_CACHE_L2X0) += cache-l2x0.o l2c-l2x0-resume.o obj-$(CONFIG_CACHE_XSC3L2) += cache-xsc3l2.o obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c index 924036473b16..b8cb1a2688a0 100644 --- a/arch/arm/mm/alignment.c +++ b/arch/arm/mm/alignment.c @@ -28,6 +28,7 @@ #include <asm/opcodes.h> #include "fault.h" +#include "mm.h" /* * 32-bit misaligned trap handler (c) 1998 San Mehat (CCC) -July 1998 @@ -81,6 +82,7 @@ static unsigned long ai_word; static unsigned long ai_dword; static unsigned long ai_multi; static int ai_usermode; +static unsigned long cr_no_alignment; core_param(alignment, ai_usermode, int, 0600); @@ -91,7 +93,7 @@ core_param(alignment, ai_usermode, int, 0600); /* Return true if and only if the ARMv6 unaligned access model is in use. */ static bool cpu_is_v6_unaligned(void) { - return cpu_architecture() >= CPU_ARCH_ARMv6 && (cr_alignment & CR_U); + return cpu_architecture() >= CPU_ARCH_ARMv6 && get_cr() & CR_U; } static int safe_usermode(int new_usermode, bool warn) @@ -949,6 +951,13 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs) return 0; } +static int __init noalign_setup(char *__unused) +{ + set_cr(__clear_cr(CR_A)); + return 1; +} +__setup("noalign", noalign_setup); + /* * This needs to be done after sysctl_init, otherwise sys/ will be * overwritten. 
Actually, this shouldn't be in sys/ at all since @@ -966,14 +975,12 @@ static int __init alignment_init(void) return -ENOMEM; #endif -#ifdef CONFIG_CPU_CP15 if (cpu_is_v6_unaligned()) { - cr_alignment &= ~CR_A; - cr_no_alignment &= ~CR_A; - set_cr(cr_alignment); + set_cr(__clear_cr(CR_A)); ai_usermode = safe_usermode(ai_usermode, false); } -#endif + + cr_no_alignment = get_cr() & ~CR_A; hook_fault_code(FAULT_CODE_ALIGNMENT, do_alignment, SIGBUS, BUS_ADRALN, "alignment exception"); diff --git a/arch/arm/mm/cache-feroceon-l2.c b/arch/arm/mm/cache-feroceon-l2.c index dc814a548056..e028a7f2ebcc 100644 --- a/arch/arm/mm/cache-feroceon-l2.c +++ b/arch/arm/mm/cache-feroceon-l2.c @@ -350,7 +350,6 @@ void __init feroceon_l2_init(int __l2_wt_override) outer_cache.inv_range = feroceon_l2_inv_range; outer_cache.clean_range = feroceon_l2_clean_range; outer_cache.flush_range = feroceon_l2_flush_range; - outer_cache.inv_all = l2_inv_all; enable_l2(); diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c index 7abde2ce8973..efc5cabf70e0 100644 --- a/arch/arm/mm/cache-l2x0.c +++ b/arch/arm/mm/cache-l2x0.c @@ -16,18 +16,33 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ +#include <linux/cpu.h> #include <linux/err.h> #include <linux/init.h> +#include <linux/smp.h> #include <linux/spinlock.h> #include <linux/io.h> #include <linux/of.h> #include <linux/of_address.h> #include <asm/cacheflush.h> +#include <asm/cp15.h> +#include <asm/cputype.h> #include <asm/hardware/cache-l2x0.h> #include "cache-tauros3.h" #include "cache-aurora-l2.h" +struct l2c_init_data { + const char *type; + unsigned way_size_0; + unsigned num_lock; + void (*of_parse)(const struct device_node *, u32 *, u32 *); + void (*enable)(void __iomem *, u32, unsigned); + void (*fixup)(void __iomem *, u32, struct outer_cache_fns *); + void (*save)(void __iomem *); + struct outer_cache_fns outer_cache; +}; + #define CACHE_LINE_SIZE 32 static void __iomem *l2x0_base; @@ -36,96 +51,116 @@ static u32 l2x0_way_mask; /* Bitmask of active ways */ static u32 l2x0_size; static unsigned long sync_reg_offset = L2X0_CACHE_SYNC; -/* Aurora don't have the cache ID register available, so we have to - * pass it though the device tree */ -static u32 cache_id_part_number_from_dt; - struct l2x0_regs l2x0_saved_regs; -struct l2x0_of_data { - void (*setup)(const struct device_node *, u32 *, u32 *); - void (*save)(void); - struct outer_cache_fns outer_cache; -}; - -static bool of_init = false; - -static inline void cache_wait_way(void __iomem *reg, unsigned long mask) +/* + * Common code for all cache controllers. + */ +static inline void l2c_wait_mask(void __iomem *reg, unsigned long mask) { /* wait for cache operation by line or way to complete */ while (readl_relaxed(reg) & mask) cpu_relax(); } -#ifdef CONFIG_CACHE_PL310 -static inline void cache_wait(void __iomem *reg, unsigned long mask) +/* + * By default, we write directly to secure registers. Platforms must + * override this if they are running non-secure. 
+ */ +static void l2c_write_sec(unsigned long val, void __iomem *base, unsigned reg) { - /* cache operations by line are atomic on PL310 */ + if (val == readl_relaxed(base + reg)) + return; + if (outer_cache.write_sec) + outer_cache.write_sec(val, reg); + else + writel_relaxed(val, base + reg); } -#else -#define cache_wait cache_wait_way -#endif -static inline void cache_sync(void) +/* + * This should only be called when we have a requirement that the + * register be written due to a work-around, as platforms running + * in non-secure mode may not be able to access this register. + */ +static inline void l2c_set_debug(void __iomem *base, unsigned long val) { - void __iomem *base = l2x0_base; - - writel_relaxed(0, base + sync_reg_offset); - cache_wait(base + L2X0_CACHE_SYNC, 1); + l2c_write_sec(val, base, L2X0_DEBUG_CTRL); } -static inline void l2x0_clean_line(unsigned long addr) +static void __l2c_op_way(void __iomem *reg) { - void __iomem *base = l2x0_base; - cache_wait(base + L2X0_CLEAN_LINE_PA, 1); - writel_relaxed(addr, base + L2X0_CLEAN_LINE_PA); + writel_relaxed(l2x0_way_mask, reg); + l2c_wait_mask(reg, l2x0_way_mask); } -static inline void l2x0_inv_line(unsigned long addr) +static inline void l2c_unlock(void __iomem *base, unsigned num) { - void __iomem *base = l2x0_base; - cache_wait(base + L2X0_INV_LINE_PA, 1); - writel_relaxed(addr, base + L2X0_INV_LINE_PA); + unsigned i; + + for (i = 0; i < num; i++) { + writel_relaxed(0, base + L2X0_LOCKDOWN_WAY_D_BASE + + i * L2X0_LOCKDOWN_STRIDE); + writel_relaxed(0, base + L2X0_LOCKDOWN_WAY_I_BASE + + i * L2X0_LOCKDOWN_STRIDE); + } } -#if defined(CONFIG_PL310_ERRATA_588369) || defined(CONFIG_PL310_ERRATA_727915) -static inline void debug_writel(unsigned long val) +/* + * Enable the L2 cache controller. This function must only be + * called when the cache controller is known to be disabled. 
+ */ +static void l2c_enable(void __iomem *base, u32 aux, unsigned num_lock) { - if (outer_cache.set_debug) - outer_cache.set_debug(val); + unsigned long flags; + + l2c_write_sec(aux, base, L2X0_AUX_CTRL); + + l2c_unlock(base, num_lock); + + local_irq_save(flags); + __l2c_op_way(base + L2X0_INV_WAY); + writel_relaxed(0, base + sync_reg_offset); + l2c_wait_mask(base + sync_reg_offset, 1); + local_irq_restore(flags); + + l2c_write_sec(L2X0_CTRL_EN, base, L2X0_CTRL); } -static void pl310_set_debug(unsigned long val) +static void l2c_disable(void) { - writel_relaxed(val, l2x0_base + L2X0_DEBUG_CTRL); + void __iomem *base = l2x0_base; + + outer_cache.flush_all(); + l2c_write_sec(0, base, L2X0_CTRL); + dsb(st); } -#else -/* Optimised out for non-errata case */ -static inline void debug_writel(unsigned long val) + +#ifdef CONFIG_CACHE_PL310 +static inline void cache_wait(void __iomem *reg, unsigned long mask) { + /* cache operations by line are atomic on PL310 */ } - -#define pl310_set_debug NULL +#else +#define cache_wait l2c_wait_mask #endif -#ifdef CONFIG_PL310_ERRATA_588369 -static inline void l2x0_flush_line(unsigned long addr) +static inline void cache_sync(void) { void __iomem *base = l2x0_base; - /* Clean by PA followed by Invalidate by PA */ - cache_wait(base + L2X0_CLEAN_LINE_PA, 1); - writel_relaxed(addr, base + L2X0_CLEAN_LINE_PA); - cache_wait(base + L2X0_INV_LINE_PA, 1); - writel_relaxed(addr, base + L2X0_INV_LINE_PA); + writel_relaxed(0, base + sync_reg_offset); + cache_wait(base + L2X0_CACHE_SYNC, 1); } -#else -static inline void l2x0_flush_line(unsigned long addr) +#if defined(CONFIG_PL310_ERRATA_588369) || defined(CONFIG_PL310_ERRATA_727915) +static inline void debug_writel(unsigned long val) +{ + l2c_set_debug(l2x0_base, val); +} +#else +/* Optimised out for non-errata case */ +static inline void debug_writel(unsigned long val) { - void __iomem *base = l2x0_base; - cache_wait(base + L2X0_CLEAN_INV_LINE_PA, 1); - writel_relaxed(addr, base + L2X0_CLEAN_INV_LINE_PA); } #endif @@ -141,8 +176,7 @@ static void l2x0_cache_sync(void) static void __l2x0_flush_all(void) { debug_writel(0x03); - writel_relaxed(l2x0_way_mask, l2x0_base + L2X0_CLEAN_INV_WAY); - cache_wait_way(l2x0_base + L2X0_CLEAN_INV_WAY, l2x0_way_mask); + __l2c_op_way(l2x0_base + L2X0_CLEAN_INV_WAY); cache_sync(); debug_writel(0x00); } @@ -157,275 +191,883 @@ static void l2x0_flush_all(void) raw_spin_unlock_irqrestore(&l2x0_lock, flags); } -static void l2x0_clean_all(void) +static void l2x0_disable(void) { unsigned long flags; - /* clean all ways */ raw_spin_lock_irqsave(&l2x0_lock, flags); - writel_relaxed(l2x0_way_mask, l2x0_base + L2X0_CLEAN_WAY); - cache_wait_way(l2x0_base + L2X0_CLEAN_WAY, l2x0_way_mask); - cache_sync(); + __l2x0_flush_all(); + l2c_write_sec(0, l2x0_base, L2X0_CTRL); + dsb(st); raw_spin_unlock_irqrestore(&l2x0_lock, flags); } -static void l2x0_inv_all(void) +static void l2c_save(void __iomem *base) { - unsigned long flags; + l2x0_saved_regs.aux_ctrl = readl_relaxed(l2x0_base + L2X0_AUX_CTRL); +} - /* invalidate all ways */ - raw_spin_lock_irqsave(&l2x0_lock, flags); - /* Invalidating when L2 is enabled is a nono */ - BUG_ON(readl(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN); - writel_relaxed(l2x0_way_mask, l2x0_base + L2X0_INV_WAY); - cache_wait_way(l2x0_base + L2X0_INV_WAY, l2x0_way_mask); - cache_sync(); - raw_spin_unlock_irqrestore(&l2x0_lock, flags); +/* + * L2C-210 specific code. 
+ * + * The L2C-2x0 PA, set/way and sync operations are atomic, but we must + * ensure that no background operation is running. The way operations + * are all background tasks. + * + * While a background operation is in progress, any new operation is + * ignored (unspecified whether this causes an error.) Thankfully, not + * used on SMP. + * + * Never has a different sync register other than L2X0_CACHE_SYNC, but + * we use sync_reg_offset here so we can share some of this with L2C-310. + */ +static void __l2c210_cache_sync(void __iomem *base) +{ + writel_relaxed(0, base + sync_reg_offset); } -static void l2x0_inv_range(unsigned long start, unsigned long end) +static void __l2c210_op_pa_range(void __iomem *reg, unsigned long start, + unsigned long end) +{ + while (start < end) { + writel_relaxed(start, reg); + start += CACHE_LINE_SIZE; + } +} + +static void l2c210_inv_range(unsigned long start, unsigned long end) { void __iomem *base = l2x0_base; - unsigned long flags; - raw_spin_lock_irqsave(&l2x0_lock, flags); if (start & (CACHE_LINE_SIZE - 1)) { start &= ~(CACHE_LINE_SIZE - 1); - debug_writel(0x03); - l2x0_flush_line(start); - debug_writel(0x00); + writel_relaxed(start, base + L2X0_CLEAN_INV_LINE_PA); start += CACHE_LINE_SIZE; } if (end & (CACHE_LINE_SIZE - 1)) { end &= ~(CACHE_LINE_SIZE - 1); - debug_writel(0x03); - l2x0_flush_line(end); - debug_writel(0x00); + writel_relaxed(end, base + L2X0_CLEAN_INV_LINE_PA); } + __l2c210_op_pa_range(base + L2X0_INV_LINE_PA, start, end); + __l2c210_cache_sync(base); +} + +static void l2c210_clean_range(unsigned long start, unsigned long end) +{ + void __iomem *base = l2x0_base; + + start &= ~(CACHE_LINE_SIZE - 1); + __l2c210_op_pa_range(base + L2X0_CLEAN_LINE_PA, start, end); + __l2c210_cache_sync(base); +} + +static void l2c210_flush_range(unsigned long start, unsigned long end) +{ + void __iomem *base = l2x0_base; + + start &= ~(CACHE_LINE_SIZE - 1); + __l2c210_op_pa_range(base + L2X0_CLEAN_INV_LINE_PA, start, end); + __l2c210_cache_sync(base); +} + +static void l2c210_flush_all(void) +{ + void __iomem *base = l2x0_base; + + BUG_ON(!irqs_disabled()); + + __l2c_op_way(base + L2X0_CLEAN_INV_WAY); + __l2c210_cache_sync(base); +} + +static void l2c210_sync(void) +{ + __l2c210_cache_sync(l2x0_base); +} + +static void l2c210_resume(void) +{ + void __iomem *base = l2x0_base; + + if (!(readl_relaxed(base + L2X0_CTRL) & L2X0_CTRL_EN)) + l2c_enable(base, l2x0_saved_regs.aux_ctrl, 1); +} + +static const struct l2c_init_data l2c210_data __initconst = { + .type = "L2C-210", + .way_size_0 = SZ_8K, + .num_lock = 1, + .enable = l2c_enable, + .save = l2c_save, + .outer_cache = { + .inv_range = l2c210_inv_range, + .clean_range = l2c210_clean_range, + .flush_range = l2c210_flush_range, + .flush_all = l2c210_flush_all, + .disable = l2c_disable, + .sync = l2c210_sync, + .resume = l2c210_resume, + }, +}; + +/* + * L2C-220 specific code. + * + * All operations are background operations: they have to be waited for. + * Conflicting requests generate a slave error (which will cause an + * imprecise abort.) Never uses sync_reg_offset, so we hard-code the + * sync register here. + * + * However, we can re-use the l2c210_resume call. 
+ */ +static inline void __l2c220_cache_sync(void __iomem *base) +{ + writel_relaxed(0, base + L2X0_CACHE_SYNC); + l2c_wait_mask(base + L2X0_CACHE_SYNC, 1); +} + +static void l2c220_op_way(void __iomem *base, unsigned reg) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&l2x0_lock, flags); + __l2c_op_way(base + reg); + __l2c220_cache_sync(base); + raw_spin_unlock_irqrestore(&l2x0_lock, flags); +} + +static unsigned long l2c220_op_pa_range(void __iomem *reg, unsigned long start, + unsigned long end, unsigned long flags) +{ + raw_spinlock_t *lock = &l2x0_lock; + while (start < end) { unsigned long blk_end = start + min(end - start, 4096UL); while (start < blk_end) { - l2x0_inv_line(start); + l2c_wait_mask(reg, 1); + writel_relaxed(start, reg); start += CACHE_LINE_SIZE; } if (blk_end < end) { - raw_spin_unlock_irqrestore(&l2x0_lock, flags); - raw_spin_lock_irqsave(&l2x0_lock, flags); + raw_spin_unlock_irqrestore(lock, flags); + raw_spin_lock_irqsave(lock, flags); } } - cache_wait(base + L2X0_INV_LINE_PA, 1); - cache_sync(); - raw_spin_unlock_irqrestore(&l2x0_lock, flags); + + return flags; } -static void l2x0_clean_range(unsigned long start, unsigned long end) +static void l2c220_inv_range(unsigned long start, unsigned long end) { void __iomem *base = l2x0_base; unsigned long flags; - if ((end - start) >= l2x0_size) { - l2x0_clean_all(); - return; - } - raw_spin_lock_irqsave(&l2x0_lock, flags); - start &= ~(CACHE_LINE_SIZE - 1); - while (start < end) { - unsigned long blk_end = start + min(end - start, 4096UL); - - while (start < blk_end) { - l2x0_clean_line(start); + if ((start | end) & (CACHE_LINE_SIZE - 1)) { + if (start & (CACHE_LINE_SIZE - 1)) { + start &= ~(CACHE_LINE_SIZE - 1); + writel_relaxed(start, base + L2X0_CLEAN_INV_LINE_PA); start += CACHE_LINE_SIZE; } - if (blk_end < end) { - raw_spin_unlock_irqrestore(&l2x0_lock, flags); - raw_spin_lock_irqsave(&l2x0_lock, flags); + if (end & (CACHE_LINE_SIZE - 1)) { + end &= ~(CACHE_LINE_SIZE - 1); + l2c_wait_mask(base + L2X0_CLEAN_INV_LINE_PA, 1); + writel_relaxed(end, base + L2X0_CLEAN_INV_LINE_PA); } } - cache_wait(base + L2X0_CLEAN_LINE_PA, 1); - cache_sync(); + + flags = l2c220_op_pa_range(base + L2X0_INV_LINE_PA, + start, end, flags); + l2c_wait_mask(base + L2X0_INV_LINE_PA, 1); + __l2c220_cache_sync(base); raw_spin_unlock_irqrestore(&l2x0_lock, flags); } -static void l2x0_flush_range(unsigned long start, unsigned long end) +static void l2c220_clean_range(unsigned long start, unsigned long end) { void __iomem *base = l2x0_base; unsigned long flags; + start &= ~(CACHE_LINE_SIZE - 1); if ((end - start) >= l2x0_size) { - l2x0_flush_all(); + l2c220_op_way(base, L2X0_CLEAN_WAY); return; } raw_spin_lock_irqsave(&l2x0_lock, flags); + flags = l2c220_op_pa_range(base + L2X0_CLEAN_LINE_PA, + start, end, flags); + l2c_wait_mask(base + L2X0_CLEAN_INV_LINE_PA, 1); + __l2c220_cache_sync(base); + raw_spin_unlock_irqrestore(&l2x0_lock, flags); +} + +static void l2c220_flush_range(unsigned long start, unsigned long end) +{ + void __iomem *base = l2x0_base; + unsigned long flags; + start &= ~(CACHE_LINE_SIZE - 1); + if ((end - start) >= l2x0_size) { + l2c220_op_way(base, L2X0_CLEAN_INV_WAY); + return; + } + + raw_spin_lock_irqsave(&l2x0_lock, flags); + flags = l2c220_op_pa_range(base + L2X0_CLEAN_INV_LINE_PA, + start, end, flags); + l2c_wait_mask(base + L2X0_CLEAN_INV_LINE_PA, 1); + __l2c220_cache_sync(base); + raw_spin_unlock_irqrestore(&l2x0_lock, flags); +} + +static void l2c220_flush_all(void) +{ + l2c220_op_way(l2x0_base, L2X0_CLEAN_INV_WAY); 
+} + +static void l2c220_sync(void) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&l2x0_lock, flags); + __l2c220_cache_sync(l2x0_base); + raw_spin_unlock_irqrestore(&l2x0_lock, flags); +} + +static void l2c220_enable(void __iomem *base, u32 aux, unsigned num_lock) +{ + /* + * Always enable non-secure access to the lockdown registers - + * we write to them as part of the L2C enable sequence so they + * need to be accessible. + */ + aux |= L220_AUX_CTRL_NS_LOCKDOWN; + + l2c_enable(base, aux, num_lock); +} + +static const struct l2c_init_data l2c220_data = { + .type = "L2C-220", + .way_size_0 = SZ_8K, + .num_lock = 1, + .enable = l2c220_enable, + .save = l2c_save, + .outer_cache = { + .inv_range = l2c220_inv_range, + .clean_range = l2c220_clean_range, + .flush_range = l2c220_flush_range, + .flush_all = l2c220_flush_all, + .disable = l2c_disable, + .sync = l2c220_sync, + .resume = l2c210_resume, + }, +}; + +/* + * L2C-310 specific code. + * + * Very similar to L2C-210, the PA, set/way and sync operations are atomic, + * and the way operations are all background tasks. However, issuing an + * operation while a background operation is in progress results in a + * SLVERR response. We can reuse: + * + * __l2c210_cache_sync (using sync_reg_offset) + * l2c210_sync + * l2c210_inv_range (if 588369 is not applicable) + * l2c210_clean_range + * l2c210_flush_range (if 588369 is not applicable) + * l2c210_flush_all (if 727915 is not applicable) + * + * Errata: + * 588369: PL310 R0P0->R1P0, fixed R2P0. + * Affects: all clean+invalidate operations + * clean and invalidate skips the invalidate step, so we need to issue + * separate operations. We also require the above debug workaround + * enclosing this code fragment on affected parts. On unaffected parts, + * we must not use this workaround without the debug register writes + * to avoid exposing a problem similar to 727915. + * + * 727915: PL310 R2P0->R3P0, fixed R3P1. + * Affects: clean+invalidate by way + * clean and invalidate by way runs in the background, and a store can + * hit the line between the clean operation and invalidate operation, + * resulting in the store being lost. + * + * 752271: PL310 R3P0->R3P1-50REL0, fixed R3P2. + * Affects: 8x64-bit (double fill) line fetches + * double fill line fetches can fail to cause dirty data to be evicted + * from the cache before the new data overwrites the second line. + * + * 753970: PL310 R3P0, fixed R3P1. + * Affects: sync + * prevents merging writes after the sync operation, until another L2C + * operation is performed (or a number of other conditions.) + * + * 769419: PL310 R0P0->R3P1, fixed R3P2. + * Affects: store buffer + * store buffer is not automatically drained. 
+ */ +static void l2c310_inv_range_erratum(unsigned long start, unsigned long end) +{ + void __iomem *base = l2x0_base; + + if ((start | end) & (CACHE_LINE_SIZE - 1)) { + unsigned long flags; + + /* Erratum 588369 for both clean+invalidate operations */ + raw_spin_lock_irqsave(&l2x0_lock, flags); + l2c_set_debug(base, 0x03); + + if (start & (CACHE_LINE_SIZE - 1)) { + start &= ~(CACHE_LINE_SIZE - 1); + writel_relaxed(start, base + L2X0_CLEAN_LINE_PA); + writel_relaxed(start, base + L2X0_INV_LINE_PA); + start += CACHE_LINE_SIZE; + } + + if (end & (CACHE_LINE_SIZE - 1)) { + end &= ~(CACHE_LINE_SIZE - 1); + writel_relaxed(end, base + L2X0_CLEAN_LINE_PA); + writel_relaxed(end, base + L2X0_INV_LINE_PA); + } + + l2c_set_debug(base, 0x00); + raw_spin_unlock_irqrestore(&l2x0_lock, flags); + } + + __l2c210_op_pa_range(base + L2X0_INV_LINE_PA, start, end); + __l2c210_cache_sync(base); +} + +static void l2c310_flush_range_erratum(unsigned long start, unsigned long end) +{ + raw_spinlock_t *lock = &l2x0_lock; + unsigned long flags; + void __iomem *base = l2x0_base; + + raw_spin_lock_irqsave(lock, flags); while (start < end) { unsigned long blk_end = start + min(end - start, 4096UL); - debug_writel(0x03); + l2c_set_debug(base, 0x03); while (start < blk_end) { - l2x0_flush_line(start); + writel_relaxed(start, base + L2X0_CLEAN_LINE_PA); + writel_relaxed(start, base + L2X0_INV_LINE_PA); start += CACHE_LINE_SIZE; } - debug_writel(0x00); + l2c_set_debug(base, 0x00); if (blk_end < end) { - raw_spin_unlock_irqrestore(&l2x0_lock, flags); - raw_spin_lock_irqsave(&l2x0_lock, flags); + raw_spin_unlock_irqrestore(lock, flags); + raw_spin_lock_irqsave(lock, flags); } } - cache_wait(base + L2X0_CLEAN_INV_LINE_PA, 1); - cache_sync(); - raw_spin_unlock_irqrestore(&l2x0_lock, flags); + raw_spin_unlock_irqrestore(lock, flags); + __l2c210_cache_sync(base); } -static void l2x0_disable(void) +static void l2c310_flush_all_erratum(void) { + void __iomem *base = l2x0_base; unsigned long flags; raw_spin_lock_irqsave(&l2x0_lock, flags); - __l2x0_flush_all(); - writel_relaxed(0, l2x0_base + L2X0_CTRL); - dsb(st); + l2c_set_debug(base, 0x03); + __l2c_op_way(base + L2X0_CLEAN_INV_WAY); + l2c_set_debug(base, 0x00); + __l2c210_cache_sync(base); raw_spin_unlock_irqrestore(&l2x0_lock, flags); } -static void l2x0_unlock(u32 cache_id) +static void __init l2c310_save(void __iomem *base) { - int lockregs; - int i; + unsigned revision; - switch (cache_id & L2X0_CACHE_ID_PART_MASK) { - case L2X0_CACHE_ID_PART_L310: - lockregs = 8; - break; - case AURORA_CACHE_ID: - lockregs = 4; + l2c_save(base); + + l2x0_saved_regs.tag_latency = readl_relaxed(base + + L310_TAG_LATENCY_CTRL); + l2x0_saved_regs.data_latency = readl_relaxed(base + + L310_DATA_LATENCY_CTRL); + l2x0_saved_regs.filter_end = readl_relaxed(base + + L310_ADDR_FILTER_END); + l2x0_saved_regs.filter_start = readl_relaxed(base + + L310_ADDR_FILTER_START); + + revision = readl_relaxed(base + L2X0_CACHE_ID) & + L2X0_CACHE_ID_RTL_MASK; + + /* From r2p0, there is Prefetch offset/control register */ + if (revision >= L310_CACHE_ID_RTL_R2P0) + l2x0_saved_regs.prefetch_ctrl = readl_relaxed(base + + L310_PREFETCH_CTRL); + + /* From r3p0, there is Power control register */ + if (revision >= L310_CACHE_ID_RTL_R3P0) + l2x0_saved_regs.pwr_ctrl = readl_relaxed(base + + L310_POWER_CTRL); +} + +static void l2c310_resume(void) +{ + void __iomem *base = l2x0_base; + + if (!(readl_relaxed(base + L2X0_CTRL) & L2X0_CTRL_EN)) { + unsigned revision; + + /* restore pl310 setup */ + 
writel_relaxed(l2x0_saved_regs.tag_latency, + base + L310_TAG_LATENCY_CTRL); + writel_relaxed(l2x0_saved_regs.data_latency, + base + L310_DATA_LATENCY_CTRL); + writel_relaxed(l2x0_saved_regs.filter_end, + base + L310_ADDR_FILTER_END); + writel_relaxed(l2x0_saved_regs.filter_start, + base + L310_ADDR_FILTER_START); + + revision = readl_relaxed(base + L2X0_CACHE_ID) & + L2X0_CACHE_ID_RTL_MASK; + + if (revision >= L310_CACHE_ID_RTL_R2P0) + l2c_write_sec(l2x0_saved_regs.prefetch_ctrl, base, + L310_PREFETCH_CTRL); + if (revision >= L310_CACHE_ID_RTL_R3P0) + l2c_write_sec(l2x0_saved_regs.pwr_ctrl, base, + L310_POWER_CTRL); + + l2c_enable(base, l2x0_saved_regs.aux_ctrl, 8); + + /* Re-enable full-line-of-zeros for Cortex-A9 */ + if (l2x0_saved_regs.aux_ctrl & L310_AUX_CTRL_FULL_LINE_ZERO) + set_auxcr(get_auxcr() | BIT(3) | BIT(2) | BIT(1)); + } +} + +static int l2c310_cpu_enable_flz(struct notifier_block *nb, unsigned long act, void *data) +{ + switch (act & ~CPU_TASKS_FROZEN) { + case CPU_STARTING: + set_auxcr(get_auxcr() | BIT(3) | BIT(2) | BIT(1)); break; - default: - /* L210 and unknown types */ - lockregs = 1; + case CPU_DYING: + set_auxcr(get_auxcr() & ~(BIT(3) | BIT(2) | BIT(1))); break; } + return NOTIFY_OK; +} - for (i = 0; i < lockregs; i++) { - writel_relaxed(0x0, l2x0_base + L2X0_LOCKDOWN_WAY_D_BASE + - i * L2X0_LOCKDOWN_STRIDE); - writel_relaxed(0x0, l2x0_base + L2X0_LOCKDOWN_WAY_I_BASE + - i * L2X0_LOCKDOWN_STRIDE); +static void __init l2c310_enable(void __iomem *base, u32 aux, unsigned num_lock) +{ + unsigned rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_PART_MASK; + bool cortex_a9 = read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9; + + if (rev >= L310_CACHE_ID_RTL_R2P0) { + if (cortex_a9) { + aux |= L310_AUX_CTRL_EARLY_BRESP; + pr_info("L2C-310 enabling early BRESP for Cortex-A9\n"); + } else if (aux & L310_AUX_CTRL_EARLY_BRESP) { + pr_warn("L2C-310 early BRESP only supported with Cortex-A9\n"); + aux &= ~L310_AUX_CTRL_EARLY_BRESP; + } + } + + if (cortex_a9) { + u32 aux_cur = readl_relaxed(base + L2X0_AUX_CTRL); + u32 acr = get_auxcr(); + + pr_debug("Cortex-A9 ACR=0x%08x\n", acr); + + if (acr & BIT(3) && !(aux_cur & L310_AUX_CTRL_FULL_LINE_ZERO)) + pr_err("L2C-310: full line of zeros enabled in Cortex-A9 but not L2C-310 - invalid\n"); + + if (aux & L310_AUX_CTRL_FULL_LINE_ZERO && !(acr & BIT(3))) + pr_err("L2C-310: enabling full line of zeros but not enabled in Cortex-A9\n"); + + if (!(aux & L310_AUX_CTRL_FULL_LINE_ZERO) && !outer_cache.write_sec) { + aux |= L310_AUX_CTRL_FULL_LINE_ZERO; + pr_info("L2C-310 full line of zeros enabled for Cortex-A9\n"); + } + } else if (aux & (L310_AUX_CTRL_FULL_LINE_ZERO | L310_AUX_CTRL_EARLY_BRESP)) { + pr_err("L2C-310: disabling Cortex-A9 specific feature bits\n"); + aux &= ~(L310_AUX_CTRL_FULL_LINE_ZERO | L310_AUX_CTRL_EARLY_BRESP); + } + + if (aux & (L310_AUX_CTRL_DATA_PREFETCH | L310_AUX_CTRL_INSTR_PREFETCH)) { + u32 prefetch = readl_relaxed(base + L310_PREFETCH_CTRL); + + pr_info("L2C-310 %s%s prefetch enabled, offset %u lines\n", + aux & L310_AUX_CTRL_INSTR_PREFETCH ? "I" : "", + aux & L310_AUX_CTRL_DATA_PREFETCH ? 
"D" : "", + 1 + (prefetch & L310_PREFETCH_CTRL_OFFSET_MASK)); + } + + /* r3p0 or later has power control register */ + if (rev >= L310_CACHE_ID_RTL_R3P0) { + u32 power_ctrl; + + l2c_write_sec(L310_DYNAMIC_CLK_GATING_EN | L310_STNDBY_MODE_EN, + base, L310_POWER_CTRL); + power_ctrl = readl_relaxed(base + L310_POWER_CTRL); + pr_info("L2C-310 dynamic clock gating %sabled, standby mode %sabled\n", + power_ctrl & L310_DYNAMIC_CLK_GATING_EN ? "en" : "dis", + power_ctrl & L310_STNDBY_MODE_EN ? "en" : "dis"); + } + + /* + * Always enable non-secure access to the lockdown registers - + * we write to them as part of the L2C enable sequence so they + * need to be accessible. + */ + aux |= L310_AUX_CTRL_NS_LOCKDOWN; + + l2c_enable(base, aux, num_lock); + + if (aux & L310_AUX_CTRL_FULL_LINE_ZERO) { + set_auxcr(get_auxcr() | BIT(3) | BIT(2) | BIT(1)); + cpu_notifier(l2c310_cpu_enable_flz, 0); } } -void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask) +static void __init l2c310_fixup(void __iomem *base, u32 cache_id, + struct outer_cache_fns *fns) { - u32 aux; - u32 cache_id; - u32 way_size = 0; - int ways; - int way_size_shift = L2X0_WAY_SIZE_SHIFT; - const char *type; + unsigned revision = cache_id & L2X0_CACHE_ID_RTL_MASK; + const char *errata[8]; + unsigned n = 0; + + if (IS_ENABLED(CONFIG_PL310_ERRATA_588369) && + revision < L310_CACHE_ID_RTL_R2P0 && + /* For bcm compatibility */ + fns->inv_range == l2c210_inv_range) { + fns->inv_range = l2c310_inv_range_erratum; + fns->flush_range = l2c310_flush_range_erratum; + errata[n++] = "588369"; + } - l2x0_base = base; - if (cache_id_part_number_from_dt) - cache_id = cache_id_part_number_from_dt; - else - cache_id = readl_relaxed(l2x0_base + L2X0_CACHE_ID); - aux = readl_relaxed(l2x0_base + L2X0_AUX_CTRL); + if (IS_ENABLED(CONFIG_PL310_ERRATA_727915) && + revision >= L310_CACHE_ID_RTL_R2P0 && + revision < L310_CACHE_ID_RTL_R3P1) { + fns->flush_all = l2c310_flush_all_erratum; + errata[n++] = "727915"; + } + + if (revision >= L310_CACHE_ID_RTL_R3P0 && + revision < L310_CACHE_ID_RTL_R3P2) { + u32 val = readl_relaxed(base + L310_PREFETCH_CTRL); + /* I don't think bit23 is required here... but iMX6 does so */ + if (val & (BIT(30) | BIT(23))) { + val &= ~(BIT(30) | BIT(23)); + l2c_write_sec(val, base, L310_PREFETCH_CTRL); + errata[n++] = "752271"; + } + } + + if (IS_ENABLED(CONFIG_PL310_ERRATA_753970) && + revision == L310_CACHE_ID_RTL_R3P0) { + sync_reg_offset = L2X0_DUMMY_REG; + errata[n++] = "753970"; + } + + if (IS_ENABLED(CONFIG_PL310_ERRATA_769419)) + errata[n++] = "769419"; + + if (n) { + unsigned i; + pr_info("L2C-310 errat%s", n > 1 ? "a" : "um"); + for (i = 0; i < n; i++) + pr_cont(" %s", errata[i]); + pr_cont(" enabled\n"); + } +} + +static void l2c310_disable(void) +{ + /* + * If full-line-of-zeros is enabled, we must first disable it in the + * Cortex-A9 auxiliary control register before disabling the L2 cache. 
+ */ + if (l2x0_saved_regs.aux_ctrl & L310_AUX_CTRL_FULL_LINE_ZERO) + set_auxcr(get_auxcr() & ~(BIT(3) | BIT(2) | BIT(1))); + + l2c_disable(); +} + +static const struct l2c_init_data l2c310_init_fns __initconst = { + .type = "L2C-310", + .way_size_0 = SZ_8K, + .num_lock = 8, + .enable = l2c310_enable, + .fixup = l2c310_fixup, + .save = l2c310_save, + .outer_cache = { + .inv_range = l2c210_inv_range, + .clean_range = l2c210_clean_range, + .flush_range = l2c210_flush_range, + .flush_all = l2c210_flush_all, + .disable = l2c310_disable, + .sync = l2c210_sync, + .resume = l2c310_resume, + }, +}; + +static void __init __l2c_init(const struct l2c_init_data *data, + u32 aux_val, u32 aux_mask, u32 cache_id) +{ + struct outer_cache_fns fns; + unsigned way_size_bits, ways; + u32 aux, old_aux; + + /* + * Sanity check the aux values. aux_mask is the bits we preserve + * from reading the hardware register, and aux_val is the bits we + * set. + */ + if (aux_val & aux_mask) + pr_alert("L2C: platform provided aux values permit register corruption.\n"); + + old_aux = aux = readl_relaxed(l2x0_base + L2X0_AUX_CTRL); aux &= aux_mask; aux |= aux_val; + if (old_aux != aux) + pr_warn("L2C: DT/platform modifies aux control register: 0x%08x -> 0x%08x\n", + old_aux, aux); + /* Determine the number of ways */ switch (cache_id & L2X0_CACHE_ID_PART_MASK) { case L2X0_CACHE_ID_PART_L310: + if ((aux_val | ~aux_mask) & (L2C_AUX_CTRL_WAY_SIZE_MASK | L310_AUX_CTRL_ASSOCIATIVITY_16)) + pr_warn("L2C: DT/platform tries to modify or specify cache size\n"); if (aux & (1 << 16)) ways = 16; else ways = 8; - type = "L310"; -#ifdef CONFIG_PL310_ERRATA_753970 - /* Unmapped register. */ - sync_reg_offset = L2X0_DUMMY_REG; -#endif - if ((cache_id & L2X0_CACHE_ID_RTL_MASK) <= L2X0_CACHE_ID_RTL_R3P0) - outer_cache.set_debug = pl310_set_debug; break; + case L2X0_CACHE_ID_PART_L210: + case L2X0_CACHE_ID_PART_L220: ways = (aux >> 13) & 0xf; - type = "L210"; break; case AURORA_CACHE_ID: - sync_reg_offset = AURORA_SYNC_REG; ways = (aux >> 13) & 0xf; ways = 2 << ((ways + 1) >> 2); - way_size_shift = AURORA_WAY_SIZE_SHIFT; - type = "Aurora"; break; + default: /* Assume unknown chips have 8 ways */ ways = 8; - type = "L2x0 series"; break; } l2x0_way_mask = (1 << ways) - 1; /* - * L2 cache Size = Way size * Number of ways + * way_size_0 is the size that a way_size value of zero would be + * given the calculation: way_size = way_size_0 << way_size_bits. + * So, if way_size_bits=0 is reserved, but way_size_bits=1 is 16k, + * then way_size_0 would be 8k. + * + * L2 cache size = number of ways * way size. */ - way_size = (aux & L2X0_AUX_CTRL_WAY_SIZE_MASK) >> 17; - way_size = 1 << (way_size + way_size_shift); + way_size_bits = (aux & L2C_AUX_CTRL_WAY_SIZE_MASK) >> + L2C_AUX_CTRL_WAY_SIZE_SHIFT; + l2x0_size = ways * (data->way_size_0 << way_size_bits); - l2x0_size = ways * way_size * SZ_1K; + fns = data->outer_cache; + fns.write_sec = outer_cache.write_sec; + if (data->fixup) + data->fixup(l2x0_base, cache_id, &fns); /* - * Check if l2x0 controller is already enabled. - * If you are booting from non-secure mode - * accessing the below registers will fault. + * Check if l2x0 controller is already enabled. If we are booting + * in non-secure mode accessing the below registers will fault. 
*/ - if (!(readl_relaxed(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN)) { - /* Make sure that I&D is not locked down when starting */ - l2x0_unlock(cache_id); + if (!(readl_relaxed(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN)) + data->enable(l2x0_base, aux, data->num_lock); - /* l2x0 controller is disabled */ - writel_relaxed(aux, l2x0_base + L2X0_AUX_CTRL); + outer_cache = fns; - l2x0_inv_all(); - - /* enable L2X0 */ - writel_relaxed(L2X0_CTRL_EN, l2x0_base + L2X0_CTRL); - } + /* + * It is strange to save the register state before initialisation, + * but hey, this is what the DT implementations decided to do. + */ + if (data->save) + data->save(l2x0_base); /* Re-read it in case some bits are reserved. */ aux = readl_relaxed(l2x0_base + L2X0_AUX_CTRL); - /* Save the value for resuming. */ - l2x0_saved_regs.aux_ctrl = aux; + pr_info("%s cache controller enabled, %d ways, %d kB\n", + data->type, ways, l2x0_size >> 10); + pr_info("%s: CACHE_ID 0x%08x, AUX_CTRL 0x%08x\n", + data->type, cache_id, aux); +} + +void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask) +{ + const struct l2c_init_data *data; + u32 cache_id; + + l2x0_base = base; + + cache_id = readl_relaxed(base + L2X0_CACHE_ID); + + switch (cache_id & L2X0_CACHE_ID_PART_MASK) { + default: + case L2X0_CACHE_ID_PART_L210: + data = &l2c210_data; + break; - if (!of_init) { - outer_cache.inv_range = l2x0_inv_range; - outer_cache.clean_range = l2x0_clean_range; - outer_cache.flush_range = l2x0_flush_range; - outer_cache.sync = l2x0_cache_sync; - outer_cache.flush_all = l2x0_flush_all; - outer_cache.inv_all = l2x0_inv_all; - outer_cache.disable = l2x0_disable; + case L2X0_CACHE_ID_PART_L220: + data = &l2c220_data; + break; + + case L2X0_CACHE_ID_PART_L310: + data = &l2c310_init_fns; + break; } - pr_info("%s cache controller enabled\n", type); - pr_info("l2x0: %d ways, CACHE_ID 0x%08x, AUX_CTRL 0x%08x, Cache size: %d kB\n", - ways, cache_id, aux, l2x0_size >> 10); + __l2c_init(data, aux_val, aux_mask, cache_id); } #ifdef CONFIG_OF static int l2_wt_override; +/* Aurora don't have the cache ID register available, so we have to + * pass it though the device tree */ +static u32 cache_id_part_number_from_dt; + +static void __init l2x0_of_parse(const struct device_node *np, + u32 *aux_val, u32 *aux_mask) +{ + u32 data[2] = { 0, 0 }; + u32 tag = 0; + u32 dirty = 0; + u32 val = 0, mask = 0; + + of_property_read_u32(np, "arm,tag-latency", &tag); + if (tag) { + mask |= L2X0_AUX_CTRL_TAG_LATENCY_MASK; + val |= (tag - 1) << L2X0_AUX_CTRL_TAG_LATENCY_SHIFT; + } + + of_property_read_u32_array(np, "arm,data-latency", + data, ARRAY_SIZE(data)); + if (data[0] && data[1]) { + mask |= L2X0_AUX_CTRL_DATA_RD_LATENCY_MASK | + L2X0_AUX_CTRL_DATA_WR_LATENCY_MASK; + val |= ((data[0] - 1) << L2X0_AUX_CTRL_DATA_RD_LATENCY_SHIFT) | + ((data[1] - 1) << L2X0_AUX_CTRL_DATA_WR_LATENCY_SHIFT); + } + + of_property_read_u32(np, "arm,dirty-latency", &dirty); + if (dirty) { + mask |= L2X0_AUX_CTRL_DIRTY_LATENCY_MASK; + val |= (dirty - 1) << L2X0_AUX_CTRL_DIRTY_LATENCY_SHIFT; + } + + *aux_val &= ~mask; + *aux_val |= val; + *aux_mask &= ~mask; +} + +static const struct l2c_init_data of_l2c210_data __initconst = { + .type = "L2C-210", + .way_size_0 = SZ_8K, + .num_lock = 1, + .of_parse = l2x0_of_parse, + .enable = l2c_enable, + .save = l2c_save, + .outer_cache = { + .inv_range = l2c210_inv_range, + .clean_range = l2c210_clean_range, + .flush_range = l2c210_flush_range, + .flush_all = l2c210_flush_all, + .disable = l2c_disable, + .sync = l2c210_sync, + .resume = l2c210_resume, + }, 
+}; + +static const struct l2c_init_data of_l2c220_data __initconst = { + .type = "L2C-220", + .way_size_0 = SZ_8K, + .num_lock = 1, + .of_parse = l2x0_of_parse, + .enable = l2c220_enable, + .save = l2c_save, + .outer_cache = { + .inv_range = l2c220_inv_range, + .clean_range = l2c220_clean_range, + .flush_range = l2c220_flush_range, + .flush_all = l2c220_flush_all, + .disable = l2c_disable, + .sync = l2c220_sync, + .resume = l2c210_resume, + }, +}; + +static void __init l2c310_of_parse(const struct device_node *np, + u32 *aux_val, u32 *aux_mask) +{ + u32 data[3] = { 0, 0, 0 }; + u32 tag[3] = { 0, 0, 0 }; + u32 filter[2] = { 0, 0 }; + + of_property_read_u32_array(np, "arm,tag-latency", tag, ARRAY_SIZE(tag)); + if (tag[0] && tag[1] && tag[2]) + writel_relaxed( + L310_LATENCY_CTRL_RD(tag[0] - 1) | + L310_LATENCY_CTRL_WR(tag[1] - 1) | + L310_LATENCY_CTRL_SETUP(tag[2] - 1), + l2x0_base + L310_TAG_LATENCY_CTRL); + + of_property_read_u32_array(np, "arm,data-latency", + data, ARRAY_SIZE(data)); + if (data[0] && data[1] && data[2]) + writel_relaxed( + L310_LATENCY_CTRL_RD(data[0] - 1) | + L310_LATENCY_CTRL_WR(data[1] - 1) | + L310_LATENCY_CTRL_SETUP(data[2] - 1), + l2x0_base + L310_DATA_LATENCY_CTRL); + + of_property_read_u32_array(np, "arm,filter-ranges", + filter, ARRAY_SIZE(filter)); + if (filter[1]) { + writel_relaxed(ALIGN(filter[0] + filter[1], SZ_1M), + l2x0_base + L310_ADDR_FILTER_END); + writel_relaxed((filter[0] & ~(SZ_1M - 1)) | L310_ADDR_FILTER_EN, + l2x0_base + L310_ADDR_FILTER_START); + } +} + +static const struct l2c_init_data of_l2c310_data __initconst = { + .type = "L2C-310", + .way_size_0 = SZ_8K, + .num_lock = 8, + .of_parse = l2c310_of_parse, + .enable = l2c310_enable, + .fixup = l2c310_fixup, + .save = l2c310_save, + .outer_cache = { + .inv_range = l2c210_inv_range, + .clean_range = l2c210_clean_range, + .flush_range = l2c210_flush_range, + .flush_all = l2c210_flush_all, + .disable = l2c310_disable, + .sync = l2c210_sync, + .resume = l2c310_resume, + }, +}; + /* * Note that the end addresses passed to Linux primitives are * noninclusive, while the hardware cache range operations use @@ -524,6 +1166,100 @@ static void aurora_flush_range(unsigned long start, unsigned long end) } } +static void aurora_save(void __iomem *base) +{ + l2x0_saved_regs.ctrl = readl_relaxed(base + L2X0_CTRL); + l2x0_saved_regs.aux_ctrl = readl_relaxed(base + L2X0_AUX_CTRL); +} + +static void aurora_resume(void) +{ + void __iomem *base = l2x0_base; + + if (!(readl(base + L2X0_CTRL) & L2X0_CTRL_EN)) { + writel_relaxed(l2x0_saved_regs.aux_ctrl, base + L2X0_AUX_CTRL); + writel_relaxed(l2x0_saved_regs.ctrl, base + L2X0_CTRL); + } +} + +/* + * For Aurora cache in no outer mode, enable via the CP15 coprocessor + * broadcasting of cache commands to L2. 
+ */ +static void __init aurora_enable_no_outer(void __iomem *base, u32 aux, + unsigned num_lock) +{ + u32 u; + + asm volatile("mrc p15, 1, %0, c15, c2, 0" : "=r" (u)); + u |= AURORA_CTRL_FW; /* Set the FW bit */ + asm volatile("mcr p15, 1, %0, c15, c2, 0" : : "r" (u)); + + isb(); + + l2c_enable(base, aux, num_lock); +} + +static void __init aurora_fixup(void __iomem *base, u32 cache_id, + struct outer_cache_fns *fns) +{ + sync_reg_offset = AURORA_SYNC_REG; +} + +static void __init aurora_of_parse(const struct device_node *np, + u32 *aux_val, u32 *aux_mask) +{ + u32 val = AURORA_ACR_REPLACEMENT_TYPE_SEMIPLRU; + u32 mask = AURORA_ACR_REPLACEMENT_MASK; + + of_property_read_u32(np, "cache-id-part", + &cache_id_part_number_from_dt); + + /* Determine and save the write policy */ + l2_wt_override = of_property_read_bool(np, "wt-override"); + + if (l2_wt_override) { + val |= AURORA_ACR_FORCE_WRITE_THRO_POLICY; + mask |= AURORA_ACR_FORCE_WRITE_POLICY_MASK; + } + + *aux_val &= ~mask; + *aux_val |= val; + *aux_mask &= ~mask; +} + +static const struct l2c_init_data of_aurora_with_outer_data __initconst = { + .type = "Aurora", + .way_size_0 = SZ_4K, + .num_lock = 4, + .of_parse = aurora_of_parse, + .enable = l2c_enable, + .fixup = aurora_fixup, + .save = aurora_save, + .outer_cache = { + .inv_range = aurora_inv_range, + .clean_range = aurora_clean_range, + .flush_range = aurora_flush_range, + .flush_all = l2x0_flush_all, + .disable = l2x0_disable, + .sync = l2x0_cache_sync, + .resume = aurora_resume, + }, +}; + +static const struct l2c_init_data of_aurora_no_outer_data __initconst = { + .type = "Aurora", + .way_size_0 = SZ_4K, + .num_lock = 4, + .of_parse = aurora_of_parse, + .enable = aurora_enable_no_outer, + .fixup = aurora_fixup, + .save = aurora_save, + .outer_cache = { + .resume = aurora_resume, + }, +}; + /* * For certain Broadcom SoCs, depending on the address range, different offsets * need to be added to the address before passing it to L2 for @@ -588,16 +1324,16 @@ static void bcm_inv_range(unsigned long start, unsigned long end) /* normal case, no cross section between start and end */ if (likely(bcm_addr_is_sys_emi(end) || !bcm_addr_is_sys_emi(start))) { - l2x0_inv_range(new_start, new_end); + l2c210_inv_range(new_start, new_end); return; } /* They cross sections, so it can only be a cross from section * 2 to section 3 */ - l2x0_inv_range(new_start, + l2c210_inv_range(new_start, bcm_l2_phys_addr(BCM_VC_EMI_SEC3_START_ADDR-1)); - l2x0_inv_range(bcm_l2_phys_addr(BCM_VC_EMI_SEC3_START_ADDR), + l2c210_inv_range(bcm_l2_phys_addr(BCM_VC_EMI_SEC3_START_ADDR), new_end); } @@ -610,26 +1346,21 @@ static void bcm_clean_range(unsigned long start, unsigned long end) if (unlikely(end <= start)) return; - if ((end - start) >= l2x0_size) { - l2x0_clean_all(); - return; - } - new_start = bcm_l2_phys_addr(start); new_end = bcm_l2_phys_addr(end); /* normal case, no cross section between start and end */ if (likely(bcm_addr_is_sys_emi(end) || !bcm_addr_is_sys_emi(start))) { - l2x0_clean_range(new_start, new_end); + l2c210_clean_range(new_start, new_end); return; } /* They cross sections, so it can only be a cross from section * 2 to section 3 */ - l2x0_clean_range(new_start, + l2c210_clean_range(new_start, bcm_l2_phys_addr(BCM_VC_EMI_SEC3_START_ADDR-1)); - l2x0_clean_range(bcm_l2_phys_addr(BCM_VC_EMI_SEC3_START_ADDR), + l2c210_clean_range(bcm_l2_phys_addr(BCM_VC_EMI_SEC3_START_ADDR), new_end); } @@ -643,7 +1374,7 @@ static void bcm_flush_range(unsigned long start, unsigned long end) return; if ((end - start) 
>= l2x0_size) { - l2x0_flush_all(); + outer_cache.flush_all(); return; } @@ -652,283 +1383,67 @@ static void bcm_flush_range(unsigned long start, unsigned long end) /* normal case, no cross section between start and end */ if (likely(bcm_addr_is_sys_emi(end) || !bcm_addr_is_sys_emi(start))) { - l2x0_flush_range(new_start, new_end); + l2c210_flush_range(new_start, new_end); return; } /* They cross sections, so it can only be a cross from section * 2 to section 3 */ - l2x0_flush_range(new_start, + l2c210_flush_range(new_start, bcm_l2_phys_addr(BCM_VC_EMI_SEC3_START_ADDR-1)); - l2x0_flush_range(bcm_l2_phys_addr(BCM_VC_EMI_SEC3_START_ADDR), + l2c210_flush_range(bcm_l2_phys_addr(BCM_VC_EMI_SEC3_START_ADDR), new_end); } -static void __init l2x0_of_setup(const struct device_node *np, - u32 *aux_val, u32 *aux_mask) -{ - u32 data[2] = { 0, 0 }; - u32 tag = 0; - u32 dirty = 0; - u32 val = 0, mask = 0; - - of_property_read_u32(np, "arm,tag-latency", &tag); - if (tag) { - mask |= L2X0_AUX_CTRL_TAG_LATENCY_MASK; - val |= (tag - 1) << L2X0_AUX_CTRL_TAG_LATENCY_SHIFT; - } - - of_property_read_u32_array(np, "arm,data-latency", - data, ARRAY_SIZE(data)); - if (data[0] && data[1]) { - mask |= L2X0_AUX_CTRL_DATA_RD_LATENCY_MASK | - L2X0_AUX_CTRL_DATA_WR_LATENCY_MASK; - val |= ((data[0] - 1) << L2X0_AUX_CTRL_DATA_RD_LATENCY_SHIFT) | - ((data[1] - 1) << L2X0_AUX_CTRL_DATA_WR_LATENCY_SHIFT); - } - - of_property_read_u32(np, "arm,dirty-latency", &dirty); - if (dirty) { - mask |= L2X0_AUX_CTRL_DIRTY_LATENCY_MASK; - val |= (dirty - 1) << L2X0_AUX_CTRL_DIRTY_LATENCY_SHIFT; - } - - *aux_val &= ~mask; - *aux_val |= val; - *aux_mask &= ~mask; -} - -static void __init pl310_of_setup(const struct device_node *np, - u32 *aux_val, u32 *aux_mask) -{ - u32 data[3] = { 0, 0, 0 }; - u32 tag[3] = { 0, 0, 0 }; - u32 filter[2] = { 0, 0 }; - - of_property_read_u32_array(np, "arm,tag-latency", tag, ARRAY_SIZE(tag)); - if (tag[0] && tag[1] && tag[2]) - writel_relaxed( - ((tag[0] - 1) << L2X0_LATENCY_CTRL_RD_SHIFT) | - ((tag[1] - 1) << L2X0_LATENCY_CTRL_WR_SHIFT) | - ((tag[2] - 1) << L2X0_LATENCY_CTRL_SETUP_SHIFT), - l2x0_base + L2X0_TAG_LATENCY_CTRL); - - of_property_read_u32_array(np, "arm,data-latency", - data, ARRAY_SIZE(data)); - if (data[0] && data[1] && data[2]) - writel_relaxed( - ((data[0] - 1) << L2X0_LATENCY_CTRL_RD_SHIFT) | - ((data[1] - 1) << L2X0_LATENCY_CTRL_WR_SHIFT) | - ((data[2] - 1) << L2X0_LATENCY_CTRL_SETUP_SHIFT), - l2x0_base + L2X0_DATA_LATENCY_CTRL); - - of_property_read_u32_array(np, "arm,filter-ranges", - filter, ARRAY_SIZE(filter)); - if (filter[1]) { - writel_relaxed(ALIGN(filter[0] + filter[1], SZ_1M), - l2x0_base + L2X0_ADDR_FILTER_END); - writel_relaxed((filter[0] & ~(SZ_1M - 1)) | L2X0_ADDR_FILTER_EN, - l2x0_base + L2X0_ADDR_FILTER_START); - } -} - -static void __init pl310_save(void) -{ - u32 l2x0_revision = readl_relaxed(l2x0_base + L2X0_CACHE_ID) & - L2X0_CACHE_ID_RTL_MASK; - - l2x0_saved_regs.tag_latency = readl_relaxed(l2x0_base + - L2X0_TAG_LATENCY_CTRL); - l2x0_saved_regs.data_latency = readl_relaxed(l2x0_base + - L2X0_DATA_LATENCY_CTRL); - l2x0_saved_regs.filter_end = readl_relaxed(l2x0_base + - L2X0_ADDR_FILTER_END); - l2x0_saved_regs.filter_start = readl_relaxed(l2x0_base + - L2X0_ADDR_FILTER_START); - - if (l2x0_revision >= L2X0_CACHE_ID_RTL_R2P0) { - /* - * From r2p0, there is Prefetch offset/control register - */ - l2x0_saved_regs.prefetch_ctrl = readl_relaxed(l2x0_base + - L2X0_PREFETCH_CTRL); - /* - * From r3p0, there is Power control register - */ - if (l2x0_revision >= 
L2X0_CACHE_ID_RTL_R3P0) - l2x0_saved_regs.pwr_ctrl = readl_relaxed(l2x0_base + - L2X0_POWER_CTRL); - } -} +/* Broadcom L2C-310 start from ARMs R3P2 or later, and require no fixups */ +static const struct l2c_init_data of_bcm_l2x0_data __initconst = { + .type = "BCM-L2C-310", + .way_size_0 = SZ_8K, + .num_lock = 8, + .of_parse = l2c310_of_parse, + .enable = l2c310_enable, + .save = l2c310_save, + .outer_cache = { + .inv_range = bcm_inv_range, + .clean_range = bcm_clean_range, + .flush_range = bcm_flush_range, + .flush_all = l2c210_flush_all, + .disable = l2c310_disable, + .sync = l2c210_sync, + .resume = l2c310_resume, + }, +}; -static void aurora_save(void) +static void __init tauros3_save(void __iomem *base) { - l2x0_saved_regs.ctrl = readl_relaxed(l2x0_base + L2X0_CTRL); - l2x0_saved_regs.aux_ctrl = readl_relaxed(l2x0_base + L2X0_AUX_CTRL); -} + l2c_save(base); -static void __init tauros3_save(void) -{ l2x0_saved_regs.aux2_ctrl = - readl_relaxed(l2x0_base + TAUROS3_AUX2_CTRL); + readl_relaxed(base + TAUROS3_AUX2_CTRL); l2x0_saved_regs.prefetch_ctrl = - readl_relaxed(l2x0_base + L2X0_PREFETCH_CTRL); -} - -static void l2x0_resume(void) -{ - if (!(readl_relaxed(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN)) { - /* restore aux ctrl and enable l2 */ - l2x0_unlock(readl_relaxed(l2x0_base + L2X0_CACHE_ID)); - - writel_relaxed(l2x0_saved_regs.aux_ctrl, l2x0_base + - L2X0_AUX_CTRL); - - l2x0_inv_all(); - - writel_relaxed(L2X0_CTRL_EN, l2x0_base + L2X0_CTRL); - } -} - -static void pl310_resume(void) -{ - u32 l2x0_revision; - - if (!(readl_relaxed(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN)) { - /* restore pl310 setup */ - writel_relaxed(l2x0_saved_regs.tag_latency, - l2x0_base + L2X0_TAG_LATENCY_CTRL); - writel_relaxed(l2x0_saved_regs.data_latency, - l2x0_base + L2X0_DATA_LATENCY_CTRL); - writel_relaxed(l2x0_saved_regs.filter_end, - l2x0_base + L2X0_ADDR_FILTER_END); - writel_relaxed(l2x0_saved_regs.filter_start, - l2x0_base + L2X0_ADDR_FILTER_START); - - l2x0_revision = readl_relaxed(l2x0_base + L2X0_CACHE_ID) & - L2X0_CACHE_ID_RTL_MASK; - - if (l2x0_revision >= L2X0_CACHE_ID_RTL_R2P0) { - writel_relaxed(l2x0_saved_regs.prefetch_ctrl, - l2x0_base + L2X0_PREFETCH_CTRL); - if (l2x0_revision >= L2X0_CACHE_ID_RTL_R3P0) - writel_relaxed(l2x0_saved_regs.pwr_ctrl, - l2x0_base + L2X0_POWER_CTRL); - } - } - - l2x0_resume(); -} - -static void aurora_resume(void) -{ - if (!(readl(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN)) { - writel_relaxed(l2x0_saved_regs.aux_ctrl, - l2x0_base + L2X0_AUX_CTRL); - writel_relaxed(l2x0_saved_regs.ctrl, l2x0_base + L2X0_CTRL); - } + readl_relaxed(base + L310_PREFETCH_CTRL); } static void tauros3_resume(void) { - if (!(readl_relaxed(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN)) { + void __iomem *base = l2x0_base; + + if (!(readl_relaxed(base + L2X0_CTRL) & L2X0_CTRL_EN)) { writel_relaxed(l2x0_saved_regs.aux2_ctrl, - l2x0_base + TAUROS3_AUX2_CTRL); + base + TAUROS3_AUX2_CTRL); writel_relaxed(l2x0_saved_regs.prefetch_ctrl, - l2x0_base + L2X0_PREFETCH_CTRL); - } + base + L310_PREFETCH_CTRL); - l2x0_resume(); -} - -static void __init aurora_broadcast_l2_commands(void) -{ - __u32 u; - /* Enable Broadcasting of cache commands to L2*/ - __asm__ __volatile__("mrc p15, 1, %0, c15, c2, 0" : "=r"(u)); - u |= AURORA_CTRL_FW; /* Set the FW bit */ - __asm__ __volatile__("mcr p15, 1, %0, c15, c2, 0\n" : : "r"(u)); - isb(); -} - -static void __init aurora_of_setup(const struct device_node *np, - u32 *aux_val, u32 *aux_mask) -{ - u32 val = AURORA_ACR_REPLACEMENT_TYPE_SEMIPLRU; - u32 mask = 
AURORA_ACR_REPLACEMENT_MASK; - - of_property_read_u32(np, "cache-id-part", - &cache_id_part_number_from_dt); - - /* Determine and save the write policy */ - l2_wt_override = of_property_read_bool(np, "wt-override"); - - if (l2_wt_override) { - val |= AURORA_ACR_FORCE_WRITE_THRO_POLICY; - mask |= AURORA_ACR_FORCE_WRITE_POLICY_MASK; + l2c_enable(base, l2x0_saved_regs.aux_ctrl, 8); } - - *aux_val &= ~mask; - *aux_val |= val; - *aux_mask &= ~mask; } -static const struct l2x0_of_data pl310_data = { - .setup = pl310_of_setup, - .save = pl310_save, - .outer_cache = { - .resume = pl310_resume, - .inv_range = l2x0_inv_range, - .clean_range = l2x0_clean_range, - .flush_range = l2x0_flush_range, - .sync = l2x0_cache_sync, - .flush_all = l2x0_flush_all, - .inv_all = l2x0_inv_all, - .disable = l2x0_disable, - }, -}; - -static const struct l2x0_of_data l2x0_data = { - .setup = l2x0_of_setup, - .save = NULL, - .outer_cache = { - .resume = l2x0_resume, - .inv_range = l2x0_inv_range, - .clean_range = l2x0_clean_range, - .flush_range = l2x0_flush_range, - .sync = l2x0_cache_sync, - .flush_all = l2x0_flush_all, - .inv_all = l2x0_inv_all, - .disable = l2x0_disable, - }, -}; - -static const struct l2x0_of_data aurora_with_outer_data = { - .setup = aurora_of_setup, - .save = aurora_save, - .outer_cache = { - .resume = aurora_resume, - .inv_range = aurora_inv_range, - .clean_range = aurora_clean_range, - .flush_range = aurora_flush_range, - .sync = l2x0_cache_sync, - .flush_all = l2x0_flush_all, - .inv_all = l2x0_inv_all, - .disable = l2x0_disable, - }, -}; - -static const struct l2x0_of_data aurora_no_outer_data = { - .setup = aurora_of_setup, - .save = aurora_save, - .outer_cache = { - .resume = aurora_resume, - }, -}; - -static const struct l2x0_of_data tauros3_data = { - .setup = NULL, +static const struct l2c_init_data of_tauros3_data __initconst = { + .type = "Tauros3", + .way_size_0 = SZ_8K, + .num_lock = 8, + .enable = l2c_enable, .save = tauros3_save, /* Tauros3 broadcasts L1 cache operations to L2 */ .outer_cache = { @@ -936,43 +1451,26 @@ static const struct l2x0_of_data tauros3_data = { }, }; -static const struct l2x0_of_data bcm_l2x0_data = { - .setup = pl310_of_setup, - .save = pl310_save, - .outer_cache = { - .resume = pl310_resume, - .inv_range = bcm_inv_range, - .clean_range = bcm_clean_range, - .flush_range = bcm_flush_range, - .sync = l2x0_cache_sync, - .flush_all = l2x0_flush_all, - .inv_all = l2x0_inv_all, - .disable = l2x0_disable, - }, -}; - +#define L2C_ID(name, fns) { .compatible = name, .data = (void *)&fns } static const struct of_device_id l2x0_ids[] __initconst = { - { .compatible = "arm,l210-cache", .data = (void *)&l2x0_data }, - { .compatible = "arm,l220-cache", .data = (void *)&l2x0_data }, - { .compatible = "arm,pl310-cache", .data = (void *)&pl310_data }, - { .compatible = "bcm,bcm11351-a2-pl310-cache", /* deprecated name */ - .data = (void *)&bcm_l2x0_data}, - { .compatible = "brcm,bcm11351-a2-pl310-cache", - .data = (void *)&bcm_l2x0_data}, - { .compatible = "marvell,aurora-outer-cache", - .data = (void *)&aurora_with_outer_data}, - { .compatible = "marvell,aurora-system-cache", - .data = (void *)&aurora_no_outer_data}, - { .compatible = "marvell,tauros3-cache", - .data = (void *)&tauros3_data }, + L2C_ID("arm,l210-cache", of_l2c210_data), + L2C_ID("arm,l220-cache", of_l2c220_data), + L2C_ID("arm,pl310-cache", of_l2c310_data), + L2C_ID("brcm,bcm11351-a2-pl310-cache", of_bcm_l2x0_data), + L2C_ID("marvell,aurora-outer-cache", of_aurora_with_outer_data), + 
L2C_ID("marvell,aurora-system-cache", of_aurora_no_outer_data), + L2C_ID("marvell,tauros3-cache", of_tauros3_data), + /* Deprecated IDs */ + L2C_ID("bcm,bcm11351-a2-pl310-cache", of_bcm_l2x0_data), {} }; int __init l2x0_of_init(u32 aux_val, u32 aux_mask) { + const struct l2c_init_data *data; struct device_node *np; - const struct l2x0_of_data *data; struct resource res; + u32 cache_id, old_aux; np = of_find_matching_node(NULL, l2x0_ids); if (!np) @@ -989,23 +1487,29 @@ int __init l2x0_of_init(u32 aux_val, u32 aux_mask) data = of_match_node(l2x0_ids, np)->data; - /* L2 configuration can only be changed if the cache is disabled */ - if (!(readl_relaxed(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN)) { - if (data->setup) - data->setup(np, &aux_val, &aux_mask); - - /* For aurora cache in no outer mode select the - * correct mode using the coprocessor*/ - if (data == &aurora_no_outer_data) - aurora_broadcast_l2_commands(); + old_aux = readl_relaxed(l2x0_base + L2X0_AUX_CTRL); + if (old_aux != ((old_aux & aux_mask) | aux_val)) { + pr_warn("L2C: platform modifies aux control register: 0x%08x -> 0x%08x\n", + old_aux, (old_aux & aux_mask) | aux_val); + } else if (aux_mask != ~0U && aux_val != 0) { + pr_alert("L2C: platform provided aux values match the hardware, so have no effect. Please remove them.\n"); } - if (data->save) - data->save(); + /* All L2 caches are unified, so this property should be specified */ + if (!of_property_read_bool(np, "cache-unified")) + pr_err("L2C: device tree omits to specify unified cache\n"); + + /* L2 configuration can only be changed if the cache is disabled */ + if (!(readl_relaxed(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN)) + if (data->of_parse) + data->of_parse(np, &aux_val, &aux_mask); + + if (cache_id_part_number_from_dt) + cache_id = cache_id_part_number_from_dt; + else + cache_id = readl_relaxed(l2x0_base + L2X0_CACHE_ID); - of_init = true; - memcpy(&outer_cache, &data->outer_cache, sizeof(outer_cache)); - l2x0_init(l2x0_base, aux_val, aux_mask); + __l2c_init(data, aux_val, aux_mask, cache_id); return 0; } diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S index 778bcf88ee79..615c99e38ba1 100644 --- a/arch/arm/mm/cache-v7.S +++ b/arch/arm/mm/cache-v7.S @@ -59,7 +59,7 @@ ENTRY(v7_invalidate_l1) bgt 2b cmp r2, #0 bgt 1b - dsb + dsb st isb mov pc, lr ENDPROC(v7_invalidate_l1) @@ -166,7 +166,7 @@ skip: finished: mov r10, #0 @ swith back to cache level 0 mcr p15, 2, r10, c0, c0, 0 @ select current cache level in cssr - dsb + dsb st isb mov pc, lr ENDPROC(v7_flush_dcache_all) @@ -335,7 +335,7 @@ ENTRY(v7_flush_kern_dcache_area) add r0, r0, r2 cmp r0, r1 blo 1b - dsb + dsb st mov pc, lr ENDPROC(v7_flush_kern_dcache_area) @@ -368,7 +368,7 @@ v7_dma_inv_range: add r0, r0, r2 cmp r0, r1 blo 1b - dsb + dsb st mov pc, lr ENDPROC(v7_dma_inv_range) @@ -390,7 +390,7 @@ v7_dma_clean_range: add r0, r0, r2 cmp r0, r1 blo 1b - dsb + dsb st mov pc, lr ENDPROC(v7_dma_clean_range) @@ -412,7 +412,7 @@ ENTRY(v7_dma_flush_range) add r0, r0, r2 cmp r0, r1 blo 1b - dsb + dsb st mov pc, lr ENDPROC(v7_dma_flush_range) diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 5bef858568e6..4c88935654ca 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -885,7 +885,7 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset, static void __dma_page_cpu_to_dev(struct page *page, unsigned long off, size_t size, enum dma_data_direction dir) { - unsigned long paddr; + phys_addr_t paddr; dma_cache_maint_page(page, off, size, dir, 
dmac_map_area); @@ -901,14 +901,15 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off, static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, size_t size, enum dma_data_direction dir) { - unsigned long paddr = page_to_phys(page) + off; + phys_addr_t paddr = page_to_phys(page) + off; /* FIXME: non-speculating: not required */ - /* don't bother invalidating if DMA to device */ - if (dir != DMA_TO_DEVICE) + /* in any case, don't bother invalidating if DMA to device */ + if (dir != DMA_TO_DEVICE) { outer_inv_range(paddr, paddr + size); - dma_cache_maint_page(page, off, size, dir, dmac_unmap_area); + dma_cache_maint_page(page, off, size, dir, dmac_unmap_area); + } /* * Mark the D-cache clean for these pages to avoid extra flushing. diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c index 3387e60e4ea3..43d54f5b26b9 100644 --- a/arch/arm/mm/flush.c +++ b/arch/arm/mm/flush.c @@ -104,17 +104,20 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsig #define flush_icache_alias(pfn,vaddr,len) do { } while (0) #endif +#define FLAG_PA_IS_EXEC 1 +#define FLAG_PA_CORE_IN_MM 2 + static void flush_ptrace_access_other(void *args) { __flush_icache_all(); } -static -void flush_ptrace_access(struct vm_area_struct *vma, struct page *page, - unsigned long uaddr, void *kaddr, unsigned long len) +static inline +void __flush_ptrace_access(struct page *page, unsigned long uaddr, void *kaddr, + unsigned long len, unsigned int flags) { if (cache_is_vivt()) { - if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) { + if (flags & FLAG_PA_CORE_IN_MM) { unsigned long addr = (unsigned long)kaddr; __cpuc_coherent_kern_range(addr, addr + len); } @@ -128,7 +131,7 @@ void flush_ptrace_access(struct vm_area_struct *vma, struct page *page, } /* VIPT non-aliasing D-cache */ - if (vma->vm_flags & VM_EXEC) { + if (flags & FLAG_PA_IS_EXEC) { unsigned long addr = (unsigned long)kaddr; if (icache_is_vipt_aliasing()) flush_icache_alias(page_to_pfn(page), uaddr, len); @@ -140,6 +143,26 @@ void flush_ptrace_access(struct vm_area_struct *vma, struct page *page, } } +static +void flush_ptrace_access(struct vm_area_struct *vma, struct page *page, + unsigned long uaddr, void *kaddr, unsigned long len) +{ + unsigned int flags = 0; + if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) + flags |= FLAG_PA_CORE_IN_MM; + if (vma->vm_flags & VM_EXEC) + flags |= FLAG_PA_IS_EXEC; + __flush_ptrace_access(page, uaddr, kaddr, len, flags); +} + +void flush_uprobe_xol_access(struct page *page, unsigned long uaddr, + void *kaddr, unsigned long len) +{ + unsigned int flags = FLAG_PA_CORE_IN_MM|FLAG_PA_IS_EXEC; + + __flush_ptrace_access(page, uaddr, kaddr, len, flags); +} + /* * Copy user data from/to a page which is mapped into a different * processes address space. 
Really, we want to allow our "user diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c index 21b9e1bf9b77..45aeaaca9052 100644 --- a/arch/arm/mm/highmem.c +++ b/arch/arm/mm/highmem.c @@ -18,6 +18,21 @@ #include <asm/tlbflush.h> #include "mm.h" +pte_t *fixmap_page_table; + +static inline void set_fixmap_pte(int idx, pte_t pte) +{ + unsigned long vaddr = __fix_to_virt(idx); + set_pte_ext(fixmap_page_table + idx, pte, 0); + local_flush_tlb_kernel_page(vaddr); +} + +static inline pte_t get_fixmap_pte(unsigned long vaddr) +{ + unsigned long idx = __virt_to_fix(vaddr); + return *(fixmap_page_table + idx); +} + void *kmap(struct page *page) { might_sleep(); @@ -63,20 +78,20 @@ void *kmap_atomic(struct page *page) type = kmap_atomic_idx_push(); idx = type + KM_TYPE_NR * smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); + vaddr = __fix_to_virt(idx); #ifdef CONFIG_DEBUG_HIGHMEM /* * With debugging enabled, kunmap_atomic forces that entry to 0. * Make sure it was indeed properly unmapped. */ - BUG_ON(!pte_none(get_top_pte(vaddr))); + BUG_ON(!pte_none(*(fixmap_page_table + idx))); #endif /* * When debugging is off, kunmap_atomic leaves the previous mapping * in place, so the contained TLB flush ensures the TLB is updated * with the new mapping. */ - set_top_pte(vaddr, mk_pte(page, kmap_prot)); + set_fixmap_pte(idx, mk_pte(page, kmap_prot)); return (void *)vaddr; } @@ -94,8 +109,8 @@ void __kunmap_atomic(void *kvaddr) if (cache_is_vivt()) __cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE); #ifdef CONFIG_DEBUG_HIGHMEM - BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx)); - set_top_pte(vaddr, __pte(0)); + BUG_ON(vaddr != __fix_to_virt(idx)); + set_fixmap_pte(idx, __pte(0)); #else (void) idx; /* to kill a warning */ #endif @@ -117,11 +132,11 @@ void *kmap_atomic_pfn(unsigned long pfn) type = kmap_atomic_idx_push(); idx = type + KM_TYPE_NR * smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); + vaddr = __fix_to_virt(idx); #ifdef CONFIG_DEBUG_HIGHMEM - BUG_ON(!pte_none(get_top_pte(vaddr))); + BUG_ON(!pte_none(*(fixmap_page_table + idx))); #endif - set_top_pte(vaddr, pfn_pte(pfn, kmap_prot)); + set_fixmap_pte(idx, pfn_pte(pfn, kmap_prot)); return (void *)vaddr; } @@ -133,5 +148,5 @@ struct page *kmap_atomic_to_page(const void *ptr) if (vaddr < FIXADDR_START) return virt_to_page(ptr); - return pte_page(get_top_pte(vaddr)); + return pte_page(get_fixmap_pte(vaddr)); } diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c index 928d596d9ab4..659c75d808dc 100644 --- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -23,6 +23,7 @@ #include <linux/dma-contiguous.h> #include <linux/sizes.h> +#include <asm/cp15.h> #include <asm/mach-types.h> #include <asm/memblock.h> #include <asm/prom.h> @@ -36,6 +37,14 @@ #include "mm.h" +#ifdef CONFIG_CPU_CP15_MMU +unsigned long __init __clear_cr(unsigned long mask) +{ + cr_alignment = cr_alignment & ~mask; + return cr_alignment; +} +#endif + static phys_addr_t phys_initrd_start __initdata = 0; static unsigned long phys_initrd_size __initdata = 0; @@ -81,24 +90,21 @@ __tagtable(ATAG_INITRD2, parse_tag_initrd2); * initialization functions, as well as show_mem() for the skipping * of holes in the memory map. It is populated by arm_add_memory(). 
*/ -struct meminfo meminfo; - void show_mem(unsigned int filter) { int free = 0, total = 0, reserved = 0; - int shared = 0, cached = 0, slab = 0, i; - struct meminfo * mi = &meminfo; + int shared = 0, cached = 0, slab = 0; + struct memblock_region *reg; printk("Mem-info:\n"); show_free_areas(filter); - for_each_bank (i, mi) { - struct membank *bank = &mi->bank[i]; + for_each_memblock (memory, reg) { unsigned int pfn1, pfn2; struct page *page, *end; - pfn1 = bank_pfn_start(bank); - pfn2 = bank_pfn_end(bank); + pfn1 = memblock_region_memory_base_pfn(reg); + pfn2 = memblock_region_memory_end_pfn(reg); page = pfn_to_page(pfn1); end = pfn_to_page(pfn2 - 1) + 1; @@ -115,8 +121,9 @@ void show_mem(unsigned int filter) free++; else shared += page_count(page) - 1; - page++; - } while (page < end); + pfn1++; + page = pfn_to_page(pfn1); + } while (pfn1 < pfn2); } printk("%d pages of RAM\n", total); @@ -130,16 +137,9 @@ void show_mem(unsigned int filter) static void __init find_limits(unsigned long *min, unsigned long *max_low, unsigned long *max_high) { - struct meminfo *mi = &meminfo; - int i; - - /* This assumes the meminfo array is properly sorted */ - *min = bank_pfn_start(&mi->bank[0]); - for_each_bank (i, mi) - if (mi->bank[i].highmem) - break; - *max_low = bank_pfn_end(&mi->bank[i - 1]); - *max_high = bank_pfn_end(&mi->bank[mi->nr_banks - 1]); + *max_low = PFN_DOWN(memblock_get_current_limit()); + *min = PFN_UP(memblock_start_of_DRAM()); + *max_high = PFN_DOWN(memblock_end_of_DRAM()); } #ifdef CONFIG_ZONE_DMA @@ -274,14 +274,8 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align) return phys; } -void __init arm_memblock_init(struct meminfo *mi, - const struct machine_desc *mdesc) +void __init arm_memblock_init(const struct machine_desc *mdesc) { - int i; - - for (i = 0; i < mi->nr_banks; i++) - memblock_add(mi->bank[i].start, mi->bank[i].size); - /* Register the kernel text, kernel data and initrd with memblock. */ #ifdef CONFIG_XIP_KERNEL memblock_reserve(__pa(_sdata), _end - _sdata); @@ -412,54 +406,53 @@ free_memmap(unsigned long start_pfn, unsigned long end_pfn) /* * The mem_map array can get very big. Free the unused area of the memory map. */ -static void __init free_unused_memmap(struct meminfo *mi) +static void __init free_unused_memmap(void) { - unsigned long bank_start, prev_bank_end = 0; - unsigned int i; + unsigned long start, prev_end = 0; + struct memblock_region *reg; /* * This relies on each bank being in address order. * The banks are sorted previously in bootmem_init(). */ - for_each_bank(i, mi) { - struct membank *bank = &mi->bank[i]; - - bank_start = bank_pfn_start(bank); + for_each_memblock(memory, reg) { + start = memblock_region_memory_base_pfn(reg); #ifdef CONFIG_SPARSEMEM /* * Take care not to free memmap entries that don't exist * due to SPARSEMEM sections which aren't present. */ - bank_start = min(bank_start, - ALIGN(prev_bank_end, PAGES_PER_SECTION)); + start = min(start, + ALIGN(prev_end, PAGES_PER_SECTION)); #else /* * Align down here since the VM subsystem insists that the * memmap entries are valid from the bank start aligned to * MAX_ORDER_NR_PAGES. */ - bank_start = round_down(bank_start, MAX_ORDER_NR_PAGES); + start = round_down(start, MAX_ORDER_NR_PAGES); #endif /* * If we had a previous bank, and there is a space * between the current bank and the previous, free it. 
*/ - if (prev_bank_end && prev_bank_end < bank_start) - free_memmap(prev_bank_end, bank_start); + if (prev_end && prev_end < start) + free_memmap(prev_end, start); /* * Align up here since the VM subsystem insists that the * memmap entries are valid from the bank end aligned to * MAX_ORDER_NR_PAGES. */ - prev_bank_end = ALIGN(bank_pfn_end(bank), MAX_ORDER_NR_PAGES); + prev_end = ALIGN(memblock_region_memory_end_pfn(reg), + MAX_ORDER_NR_PAGES); } #ifdef CONFIG_SPARSEMEM - if (!IS_ALIGNED(prev_bank_end, PAGES_PER_SECTION)) - free_memmap(prev_bank_end, - ALIGN(prev_bank_end, PAGES_PER_SECTION)); + if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION)) + free_memmap(prev_end, + ALIGN(prev_end, PAGES_PER_SECTION)); #endif } @@ -535,7 +528,7 @@ void __init mem_init(void) set_max_mapnr(pfn_to_page(max_pfn) - mem_map); /* this will put all unused low memory onto the freelists */ - free_unused_memmap(&meminfo); + free_unused_memmap(); free_all_bootmem(); #ifdef CONFIG_SA1111 diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c index f9c32ba73544..d1e5ad7ab3bc 100644 --- a/arch/arm/mm/ioremap.c +++ b/arch/arm/mm/ioremap.c @@ -438,6 +438,13 @@ void __arm_iounmap(volatile void __iomem *io_addr) EXPORT_SYMBOL(__arm_iounmap); #ifdef CONFIG_PCI +static int pci_ioremap_mem_type = MT_DEVICE; + +void pci_ioremap_set_mem_type(int mem_type) +{ + pci_ioremap_mem_type = mem_type; +} + int pci_ioremap_io(unsigned int offset, phys_addr_t phys_addr) { BUG_ON(offset + SZ_64K > IO_SPACE_LIMIT); @@ -445,7 +452,7 @@ int pci_ioremap_io(unsigned int offset, phys_addr_t phys_addr) return ioremap_page_range(PCI_IO_VIRT_BASE + offset, PCI_IO_VIRT_BASE + offset + SZ_64K, phys_addr, - __pgprot(get_mem_type(MT_DEVICE)->prot_pte)); + __pgprot(get_mem_type(pci_ioremap_mem_type)->prot_pte)); } EXPORT_SYMBOL_GPL(pci_ioremap_io); #endif diff --git a/arch/arm/mm/l2c-common.c b/arch/arm/mm/l2c-common.c new file mode 100644 index 000000000000..10a3cf28c362 --- /dev/null +++ b/arch/arm/mm/l2c-common.c @@ -0,0 +1,20 @@ +/* + * Copyright (C) 2010 ARM Ltd. + * Written by Catalin Marinas <catalin.marinas@arm.com> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ +#include <linux/bug.h> +#include <linux/smp.h> +#include <asm/outercache.h> + +void outer_disable(void) +{ + WARN_ON(!irqs_disabled()); + WARN_ON(num_online_cpus() > 1); + + if (outer_cache.disable) + outer_cache.disable(); +} diff --git a/arch/arm/mm/l2c-l2x0-resume.S b/arch/arm/mm/l2c-l2x0-resume.S new file mode 100644 index 000000000000..99b05f21a59a --- /dev/null +++ b/arch/arm/mm/l2c-l2x0-resume.S @@ -0,0 +1,58 @@ +/* + * L2C-310 early resume code. This can be used by platforms to restore + * the settings of their L2 cache controller before restoring the + * processor state. + * + * This code can only be used to if you are running in the secure world. 
+ */ +#include <linux/linkage.h> +#include <asm/hardware/cache-l2x0.h> + + .text + +ENTRY(l2c310_early_resume) + adr r0, 1f + ldr r2, [r0] + add r0, r2, r0 + + ldmia r0, {r1, r2, r3, r4, r5, r6, r7, r8} + @ r1 = phys address of L2C-310 controller + @ r2 = aux_ctrl + @ r3 = tag_latency + @ r4 = data_latency + @ r5 = filter_start + @ r6 = filter_end + @ r7 = prefetch_ctrl + @ r8 = pwr_ctrl + + @ Check that the address has been initialised + teq r1, #0 + moveq pc, lr + + @ The prefetch and power control registers are revision dependent + @ and can be written whether or not the L2 cache is enabled + ldr r0, [r1, #L2X0_CACHE_ID] + and r0, r0, #L2X0_CACHE_ID_RTL_MASK + cmp r0, #L310_CACHE_ID_RTL_R2P0 + strcs r7, [r1, #L310_PREFETCH_CTRL] + cmp r0, #L310_CACHE_ID_RTL_R3P0 + strcs r8, [r1, #L310_POWER_CTRL] + + @ Don't setup the L2 cache if it is already enabled + ldr r0, [r1, #L2X0_CTRL] + tst r0, #L2X0_CTRL_EN + movne pc, lr + + str r3, [r1, #L310_TAG_LATENCY_CTRL] + str r4, [r1, #L310_DATA_LATENCY_CTRL] + str r6, [r1, #L310_ADDR_FILTER_END] + str r5, [r1, #L310_ADDR_FILTER_START] + + str r2, [r1, #L2X0_AUX_CTRL] + mov r9, #L2X0_CTRL_EN + str r9, [r1, #L2X0_CTRL] + mov pc, lr +ENDPROC(l2c310_early_resume) + + .align +1: .long l2x0_saved_regs - . diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h index 7ea641b7aa7d..ce727d47275c 100644 --- a/arch/arm/mm/mm.h +++ b/arch/arm/mm/mm.h @@ -2,6 +2,8 @@ #include <linux/list.h> #include <linux/vmalloc.h> +#include <asm/pgtable.h> + /* the upper-most page table pointer */ extern pmd_t *top_pmd; @@ -93,3 +95,5 @@ extern phys_addr_t arm_lowmem_limit; void __init bootmem_init(void); void arm_mm_memblock_reserve(void); void dma_contiguous_remap(void); + +unsigned long __clear_cr(unsigned long mask); diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index b68c6b22e1c8..ab14b79b03f0 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -35,6 +35,7 @@ #include <asm/mach/arch.h> #include <asm/mach/map.h> #include <asm/mach/pci.h> +#include <asm/fixmap.h> #include "mm.h" #include "tcm.h" @@ -117,28 +118,54 @@ static struct cachepolicy cache_policies[] __initdata = { }; #ifdef CONFIG_CPU_CP15 +static unsigned long initial_pmd_value __initdata = 0; + /* - * These are useful for identifying cache coherency - * problems by allowing the cache or the cache and - * writebuffer to be turned off. (Note: the write - * buffer should not be on and the cache off). + * Initialise the cache_policy variable with the initial state specified + * via the "pmd" value. This is used to ensure that on ARMv6 and later, + * the C code sets the page tables up with the same policy as the head + * assembly code, which avoids an illegal state where the TLBs can get + * confused. See comments in early_cachepolicy() for more information. */ -static int __init early_cachepolicy(char *p) +void __init init_default_cache_policy(unsigned long pmd) { int i; + initial_pmd_value = pmd; + + pmd &= PMD_SECT_TEX(1) | PMD_SECT_BUFFERABLE | PMD_SECT_CACHEABLE; + + for (i = 0; i < ARRAY_SIZE(cache_policies); i++) + if (cache_policies[i].pmd == pmd) { + cachepolicy = i; + break; + } + + if (i == ARRAY_SIZE(cache_policies)) + pr_err("ERROR: could not find cache policy\n"); +} + +/* + * These are useful for identifying cache coherency problems by allowing + * the cache or the cache and writebuffer to be turned off. (Note: the + * write buffer should not be on and the cache off). 
+ */ +static int __init early_cachepolicy(char *p) +{ + int i, selected = -1; + for (i = 0; i < ARRAY_SIZE(cache_policies); i++) { int len = strlen(cache_policies[i].policy); if (memcmp(p, cache_policies[i].policy, len) == 0) { - cachepolicy = i; - cr_alignment &= ~cache_policies[i].cr_mask; - cr_no_alignment &= ~cache_policies[i].cr_mask; + selected = i; break; } } - if (i == ARRAY_SIZE(cache_policies)) - printk(KERN_ERR "ERROR: unknown or unsupported cache policy\n"); + + if (selected == -1) + pr_err("ERROR: unknown or unsupported cache policy\n"); + /* * This restriction is partly to do with the way we boot; it is * unpredictable to have memory mapped using two different sets of @@ -146,12 +173,18 @@ static int __init early_cachepolicy(char *p) * change these attributes once the initial assembly has setup the * page tables. */ - if (cpu_architecture() >= CPU_ARCH_ARMv6) { - printk(KERN_WARNING "Only cachepolicy=writeback supported on ARMv6 and later\n"); - cachepolicy = CPOLICY_WRITEBACK; + if (cpu_architecture() >= CPU_ARCH_ARMv6 && selected != cachepolicy) { + pr_warn("Only cachepolicy=%s supported on ARMv6 and later\n", + cache_policies[cachepolicy].policy); + return 0; + } + + if (selected != cachepolicy) { + unsigned long cr = __clear_cr(cache_policies[selected].cr_mask); + cachepolicy = selected; + flush_cache_all(); + set_cr(cr); } - flush_cache_all(); - set_cr(cr_alignment); return 0; } early_param("cachepolicy", early_cachepolicy); @@ -186,35 +219,6 @@ static int __init early_ecc(char *p) early_param("ecc", early_ecc); #endif -static int __init noalign_setup(char *__unused) -{ - cr_alignment &= ~CR_A; - cr_no_alignment &= ~CR_A; - set_cr(cr_alignment); - return 1; -} -__setup("noalign", noalign_setup); - -#ifndef CONFIG_SMP -void adjust_cr(unsigned long mask, unsigned long set) -{ - unsigned long flags; - - mask &= ~CR_A; - - set &= mask; - - local_irq_save(flags); - - cr_no_alignment = (cr_no_alignment & ~mask) | set; - cr_alignment = (cr_alignment & ~mask) | set; - - set_cr((get_cr() & ~mask) | set); - - local_irq_restore(flags); -} -#endif - #else /* ifdef CONFIG_CPU_CP15 */ static int __init early_cachepolicy(char *p) @@ -414,8 +418,17 @@ static void __init build_mem_type_table(void) cachepolicy = CPOLICY_WRITEBACK; ecc_mask = 0; } - if (is_smp()) - cachepolicy = CPOLICY_WRITEALLOC; + + if (is_smp()) { + if (cachepolicy != CPOLICY_WRITEALLOC) { + pr_warn("Forcing write-allocate cache policy for SMP\n"); + cachepolicy = CPOLICY_WRITEALLOC; + } + if (!(initial_pmd_value & PMD_SECT_S)) { + pr_warn("Forcing shared mappings for SMP\n"); + initial_pmd_value |= PMD_SECT_S; + } + } /* * Strip out features not present on earlier architectures. @@ -539,11 +552,12 @@ static void __init build_mem_type_table(void) mem_types[MT_CACHECLEAN].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE; #endif - if (is_smp()) { - /* - * Mark memory with the "shared" attribute - * for SMP systems - */ + /* + * If the initial page tables were created with the S bit + * set, then we need to do the same here for the same + * reasons given in early_cachepolicy(). 
+ */ + if (initial_pmd_value & PMD_SECT_S) { user_pgprot |= L_PTE_SHARED; kern_pgprot |= L_PTE_SHARED; vecs_pgprot |= L_PTE_SHARED; @@ -1061,74 +1075,47 @@ phys_addr_t arm_lowmem_limit __initdata = 0; void __init sanity_check_meminfo(void) { phys_addr_t memblock_limit = 0; - int i, j, highmem = 0; + int highmem = 0; phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1; + struct memblock_region *reg; - for (i = 0, j = 0; i < meminfo.nr_banks; i++) { - struct membank *bank = &meminfo.bank[j]; - phys_addr_t size_limit; - - *bank = meminfo.bank[i]; - size_limit = bank->size; + for_each_memblock(memory, reg) { + phys_addr_t block_start = reg->base; + phys_addr_t block_end = reg->base + reg->size; + phys_addr_t size_limit = reg->size; - if (bank->start >= vmalloc_limit) + if (reg->base >= vmalloc_limit) highmem = 1; else - size_limit = vmalloc_limit - bank->start; + size_limit = vmalloc_limit - reg->base; - bank->highmem = highmem; -#ifdef CONFIG_HIGHMEM - /* - * Split those memory banks which are partially overlapping - * the vmalloc area greatly simplifying things later. - */ - if (!highmem && bank->size > size_limit) { - if (meminfo.nr_banks >= NR_BANKS) { - printk(KERN_CRIT "NR_BANKS too low, " - "ignoring high memory\n"); - } else { - memmove(bank + 1, bank, - (meminfo.nr_banks - i) * sizeof(*bank)); - meminfo.nr_banks++; - i++; - bank[1].size -= size_limit; - bank[1].start = vmalloc_limit; - bank[1].highmem = highmem = 1; - j++; + if (!IS_ENABLED(CONFIG_HIGHMEM) || cache_is_vipt_aliasing()) { + + if (highmem) { + pr_notice("Ignoring RAM at %pa-%pa (!CONFIG_HIGHMEM)\n", + &block_start, &block_end); + memblock_remove(reg->base, reg->size); + continue; } - bank->size = size_limit; - } -#else - /* - * Highmem banks not allowed with !CONFIG_HIGHMEM. - */ - if (highmem) { - printk(KERN_NOTICE "Ignoring RAM at %.8llx-%.8llx " - "(!CONFIG_HIGHMEM).\n", - (unsigned long long)bank->start, - (unsigned long long)bank->start + bank->size - 1); - continue; - } - /* - * Check whether this memory bank would partially overlap - * the vmalloc area. - */ - if (bank->size > size_limit) { - printk(KERN_NOTICE "Truncating RAM at %.8llx-%.8llx " - "to -%.8llx (vmalloc region overlap).\n", - (unsigned long long)bank->start, - (unsigned long long)bank->start + bank->size - 1, - (unsigned long long)bank->start + size_limit - 1); - bank->size = size_limit; + if (reg->size > size_limit) { + phys_addr_t overlap_size = reg->size - size_limit; + + pr_notice("Truncating RAM at %pa-%pa to -%pa", + &block_start, &block_end, &vmalloc_limit); + memblock_remove(vmalloc_limit, overlap_size); + block_end = vmalloc_limit; + } } -#endif - if (!bank->highmem) { - phys_addr_t bank_end = bank->start + bank->size; - if (bank_end > arm_lowmem_limit) - arm_lowmem_limit = bank_end; + if (!highmem) { + if (block_end > arm_lowmem_limit) { + if (reg->size > size_limit) + arm_lowmem_limit = vmalloc_limit; + else + arm_lowmem_limit = block_end; + } /* * Find the first non-section-aligned page, and point @@ -1144,35 +1131,15 @@ void __init sanity_check_meminfo(void) * occurs before any free memory is mapped. 
*/ if (!memblock_limit) { - if (!IS_ALIGNED(bank->start, SECTION_SIZE)) - memblock_limit = bank->start; - else if (!IS_ALIGNED(bank_end, SECTION_SIZE)) - memblock_limit = bank_end; + if (!IS_ALIGNED(block_start, SECTION_SIZE)) + memblock_limit = block_start; + else if (!IS_ALIGNED(block_end, SECTION_SIZE)) + memblock_limit = arm_lowmem_limit; } - } - j++; - } -#ifdef CONFIG_HIGHMEM - if (highmem) { - const char *reason = NULL; - if (cache_is_vipt_aliasing()) { - /* - * Interactions between kmap and other mappings - * make highmem support with aliasing VIPT caches - * rather difficult. - */ - reason = "with VIPT aliasing cache"; - } - if (reason) { - printk(KERN_CRIT "HIGHMEM is not supported %s, ignoring high memory\n", - reason); - while (j > 0 && meminfo.bank[j - 1].highmem) - j--; } } -#endif - meminfo.nr_banks = j; + high_memory = __va(arm_lowmem_limit - 1) + 1; /* @@ -1359,6 +1326,9 @@ static void __init kmap_init(void) #ifdef CONFIG_HIGHMEM pkmap_page_table = early_pte_alloc(pmd_off_k(PKMAP_BASE), PKMAP_BASE, _PAGE_KERNEL_TABLE); + + fixmap_page_table = early_pte_alloc(pmd_off_k(FIXADDR_START), + FIXADDR_START, _PAGE_KERNEL_TABLE); #endif } @@ -1461,7 +1431,7 @@ void __init early_paging_init(const struct machine_desc *mdesc, * just complicate the code. */ flush_cache_louis(); - dsb(); + dsb(ishst); isb(); /* remap level 1 table */ diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c index 55764a7ef1f0..da1874f9f8cf 100644 --- a/arch/arm/mm/nommu.c +++ b/arch/arm/mm/nommu.c @@ -88,30 +88,35 @@ static unsigned long irbar_read(void) void __init sanity_check_meminfo_mpu(void) { int i; - struct membank *bank = meminfo.bank; phys_addr_t phys_offset = PHYS_OFFSET; phys_addr_t aligned_region_size, specified_mem_size, rounded_mem_size; - - /* Initially only use memory continuous from PHYS_OFFSET */ - if (bank_phys_start(&bank[0]) != phys_offset) - panic("First memory bank must be contiguous from PHYS_OFFSET"); - - /* Banks have already been sorted by start address */ - for (i = 1; i < meminfo.nr_banks; i++) { - if (bank[i].start <= bank_phys_end(&bank[0]) && - bank_phys_end(&bank[i]) > bank_phys_end(&bank[0])) { - bank[0].size = bank_phys_end(&bank[i]) - bank[0].start; + struct memblock_region *reg; + bool first = true; + phys_addr_t mem_start; + phys_addr_t mem_end; + + for_each_memblock(memory, reg) { + if (first) { + /* + * Initially only use memory continuous from + * PHYS_OFFSET */ + if (reg->base != phys_offset) + panic("First memory bank must be contiguous from PHYS_OFFSET"); + + mem_start = reg->base; + mem_end = reg->base + reg->size; + specified_mem_size = reg->size; + first = false; } else { - pr_notice("Ignoring RAM after 0x%.8lx. 
" - "First non-contiguous (ignored) bank start: 0x%.8lx\n", - (unsigned long)bank_phys_end(&bank[0]), - (unsigned long)bank_phys_start(&bank[i])); - break; + /* + * memblock auto merges contiguous blocks, remove + * all blocks afterwards + */ + pr_notice("Ignoring RAM after %pa, memory at %pa ignored\n", + &mem_start, ®->base); + memblock_remove(reg->base, reg->size); } } - /* All contiguous banks are now merged in to the first bank */ - meminfo.nr_banks = 1; - specified_mem_size = bank[0].size; /* * MPU has curious alignment requirements: Size must be power of 2, and @@ -128,23 +133,24 @@ void __init sanity_check_meminfo_mpu(void) */ aligned_region_size = (phys_offset - 1) ^ (phys_offset); /* Find the max power-of-two sized region that fits inside our bank */ - rounded_mem_size = (1 << __fls(bank[0].size)) - 1; + rounded_mem_size = (1 << __fls(specified_mem_size)) - 1; /* The actual region size is the smaller of the two */ aligned_region_size = aligned_region_size < rounded_mem_size ? aligned_region_size + 1 : rounded_mem_size + 1; - if (aligned_region_size != specified_mem_size) - pr_warn("Truncating memory from 0x%.8lx to 0x%.8lx (MPU region constraints)", - (unsigned long)specified_mem_size, - (unsigned long)aligned_region_size); + if (aligned_region_size != specified_mem_size) { + pr_warn("Truncating memory from %pa to %pa (MPU region constraints)", + &specified_mem_size, &aligned_region_size); + memblock_remove(mem_start + aligned_region_size, + specified_mem_size - aligned_round_size); + + mem_end = mem_start + aligned_region_size; + } - meminfo.bank[0].size = aligned_region_size; - pr_debug("MPU Region from 0x%.8lx size 0x%.8lx (end 0x%.8lx))\n", - (unsigned long)phys_offset, - (unsigned long)aligned_region_size, - (unsigned long)bank_phys_end(&bank[0])); + pr_debug("MPU Region from %pa size %pa (end %pa))\n", + &phys_offset, &aligned_region_size, &mem_end); } @@ -292,7 +298,7 @@ void __init sanity_check_meminfo(void) { phys_addr_t end; sanity_check_meminfo_mpu(); - end = bank_phys_end(&meminfo.bank[meminfo.nr_banks - 1]); + end = memblock_end_of_DRAM(); high_memory = __va(end - 1) + 1; } diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S index 01a719e18bb0..22e3ad63500c 100644 --- a/arch/arm/mm/proc-v7-3level.S +++ b/arch/arm/mm/proc-v7-3level.S @@ -64,6 +64,14 @@ ENTRY(cpu_v7_switch_mm) mov pc, lr ENDPROC(cpu_v7_switch_mm) +#ifdef __ARMEB__ +#define rl r3 +#define rh r2 +#else +#define rl r2 +#define rh r3 +#endif + /* * cpu_v7_set_pte_ext(ptep, pte) * @@ -73,13 +81,13 @@ ENDPROC(cpu_v7_switch_mm) */ ENTRY(cpu_v7_set_pte_ext) #ifdef CONFIG_MMU - tst r2, #L_PTE_VALID + tst rl, #L_PTE_VALID beq 1f - tst r3, #1 << (57 - 32) @ L_PTE_NONE - bicne r2, #L_PTE_VALID + tst rh, #1 << (57 - 32) @ L_PTE_NONE + bicne rl, #L_PTE_VALID bne 1f - tst r3, #1 << (55 - 32) @ L_PTE_DIRTY - orreq r2, #L_PTE_RDONLY + tst rh, #1 << (55 - 32) @ L_PTE_DIRTY + orreq rl, #L_PTE_RDONLY 1: strd r2, r3, [r0] ALT_SMP(W(nop)) ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S index 195731d3813b..3db2c2f04a30 100644 --- a/arch/arm/mm/proc-v7.S +++ b/arch/arm/mm/proc-v7.S @@ -169,9 +169,31 @@ ENDPROC(cpu_pj4b_do_idle) globl_equ cpu_pj4b_do_idle, cpu_v7_do_idle #endif globl_equ cpu_pj4b_dcache_clean_area, cpu_v7_dcache_clean_area - globl_equ cpu_pj4b_do_suspend, cpu_v7_do_suspend - globl_equ cpu_pj4b_do_resume, cpu_v7_do_resume - globl_equ cpu_pj4b_suspend_size, cpu_v7_suspend_size +#ifdef CONFIG_ARM_CPU_SUSPEND 
+ENTRY(cpu_pj4b_do_suspend) + stmfd sp!, {r6 - r10} + mrc p15, 1, r6, c15, c1, 0 @ save CP15 - extra features + mrc p15, 1, r7, c15, c2, 0 @ save CP15 - Aux Func Modes Ctrl 0 + mrc p15, 1, r8, c15, c1, 2 @ save CP15 - Aux Debug Modes Ctrl 2 + mrc p15, 1, r9, c15, c1, 1 @ save CP15 - Aux Debug Modes Ctrl 1 + mrc p15, 0, r10, c9, c14, 0 @ save CP15 - PMC + stmia r0!, {r6 - r10} + ldmfd sp!, {r6 - r10} + b cpu_v7_do_suspend +ENDPROC(cpu_pj4b_do_suspend) + +ENTRY(cpu_pj4b_do_resume) + ldmia r0!, {r6 - r10} + mcr p15, 1, r6, c15, c1, 0 @ save CP15 - extra features + mcr p15, 1, r7, c15, c2, 0 @ save CP15 - Aux Func Modes Ctrl 0 + mcr p15, 1, r8, c15, c1, 2 @ save CP15 - Aux Debug Modes Ctrl 2 + mcr p15, 1, r9, c15, c1, 1 @ save CP15 - Aux Debug Modes Ctrl 1 + mcr p15, 0, r10, c9, c14, 0 @ save CP15 - PMC + b cpu_v7_do_resume +ENDPROC(cpu_pj4b_do_resume) +#endif +.globl cpu_pj4b_suspend_size +.equ cpu_pj4b_suspend_size, 4 * 14 #endif @@ -194,6 +216,7 @@ __v7_cr7mp_setup: __v7_ca7mp_setup: __v7_ca12mp_setup: __v7_ca15mp_setup: +__v7_ca17mp_setup: mov r10, #0 1: #ifdef CONFIG_SMP @@ -505,6 +528,16 @@ __v7_ca15mp_proc_info: .size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info /* + * ARM Ltd. Cortex A17 processor. + */ + .type __v7_ca17mp_proc_info, #object +__v7_ca17mp_proc_info: + .long 0x410fc0e0 + .long 0xff0ffff0 + __v7_proc __v7_ca17mp_setup + .size __v7_ca17mp_proc_info, . - __v7_ca17mp_proc_info + + /* * Qualcomm Inc. Krait processors. */ .type __krait_proc_info, #object diff --git a/arch/arm/plat-samsung/s5p-sleep.S b/arch/arm/plat-samsung/s5p-sleep.S index c5001659bdf8..25c68ceb9e2b 100644 --- a/arch/arm/plat-samsung/s5p-sleep.S +++ b/arch/arm/plat-samsung/s5p-sleep.S @@ -22,7 +22,6 @@ */ #include <linux/linkage.h> -#include <asm/asm-offsets.h> .data .align diff --git a/arch/arm/vfp/entry.S b/arch/arm/vfp/entry.S index f0759e70fb86..fe6ca574d093 100644 --- a/arch/arm/vfp/entry.S +++ b/arch/arm/vfp/entry.S @@ -22,11 +22,10 @@ @ r9 = normal "successful" return address @ r10 = this threads thread_info structure @ lr = unrecognised instruction return address -@ IRQs disabled. +@ IRQs enabled. @ ENTRY(do_vfp) inc_preempt_count r10, r4 - enable_irq ldr r4, .LCvfp ldr r11, [r10, #TI_CPU] @ CPU number add r10, r10, #TI_VFPSTATE @ r10 = workspace diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index f5f63b715d91..7295419165e1 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -30,12 +30,17 @@ config ARM64 select HAVE_ARCH_JUMP_LABEL select HAVE_ARCH_KGDB select HAVE_ARCH_TRACEHOOK + select HAVE_C_RECORDMCOUNT select HAVE_DEBUG_BUGVERBOSE select HAVE_DEBUG_KMEMLEAK select HAVE_DMA_API_DEBUG select HAVE_DMA_ATTRS select HAVE_DMA_CONTIGUOUS + select HAVE_DYNAMIC_FTRACE select HAVE_EFFICIENT_UNALIGNED_ACCESS + select HAVE_FTRACE_MCOUNT_RECORD + select HAVE_FUNCTION_TRACER + select HAVE_FUNCTION_GRAPH_TRACER select HAVE_GENERIC_DMA_COHERENT select HAVE_HW_BREAKPOINT if PERF_EVENTS select HAVE_MEMBLOCK @@ -43,6 +48,7 @@ config ARM64 select HAVE_PERF_EVENTS select HAVE_PERF_REGS select HAVE_PERF_USER_STACK_DUMP + select HAVE_SYSCALL_TRACEPOINTS select IRQ_DOMAIN select MODULES_USE_ELF_RELA select NO_BOOTMEM @@ -245,6 +251,9 @@ config ARCH_WANT_HUGE_PMD_SHARE config HAVE_ARCH_TRANSPARENT_HUGEPAGE def_bool y +config ARCH_HAS_CACHE_LINE_SIZE + def_bool y + source "mm/Kconfig" config XEN_DOM0 @@ -283,6 +292,20 @@ config CMDLINE_FORCE This is useful if you cannot or don't want to change the command-line options your boot loader passes to the kernel. 
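Aside (illustrative, not part of the patch): the __v7_ca17mp_proc_info entry added in proc-v7.S above pairs an MIDR value (0x410fc0e0) with a mask (0xff0ffff0). The minimal sketch below shows the masked-compare rule such an entry implies; the function name and the sample r1p1 MIDR are assumptions, and the comparison convention is the usual (midr & mask) == value processor lookup.

/*
 * Illustrative sketch of how a proc_info value/mask pair such as the
 * Cortex-A17 entry (0x410fc0e0 / 0xff0ffff0) selects a CPU: the mask
 * hides the variant and revision fields, so any A17 revision matches.
 */
#include <stdint.h>
#include <stdio.h>

static int proc_info_matches(uint32_t midr, uint32_t value, uint32_t mask)
{
	return (midr & mask) == value;
}

int main(void)
{
	uint32_t midr = 0x411fc0e1;	/* assumed example: a Cortex-A17 r1p1 MIDR */

	printf("matches=%d\n", proc_info_matches(midr, 0x410fc0e0, 0xff0ffff0));
	return 0;
}

With the sample MIDR the masked value comes out as 0x410fc0e0, so the entry matches regardless of variant or revision.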
+config EFI + bool "UEFI runtime support" + depends on OF && !CPU_BIG_ENDIAN + select LIBFDT + select UCS2_STRING + select EFI_PARAMS_FROM_FDT + default y + help + This option provides support for runtime services provided + by UEFI firmware (such as non-volatile variables, realtime + clock, and platform reset). A UEFI stub is also provided to + allow the kernel to be booted as an EFI application. This + is only useful on systems that have UEFI firmware. + endmenu menu "Userspace binary formats" @@ -334,6 +357,8 @@ source "net/Kconfig" source "drivers/Kconfig" +source "drivers/firmware/Kconfig" + source "fs/Kconfig" source "arch/arm64/kvm/Kconfig" @@ -343,5 +368,8 @@ source "arch/arm64/Kconfig.debug" source "security/Kconfig" source "crypto/Kconfig" +if CRYPTO +source "arch/arm64/crypto/Kconfig" +endif source "lib/Kconfig" diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 2fceb71ac3b7..8185a913c5ed 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -45,6 +45,7 @@ export TEXT_OFFSET GZFLAGS core-y += arch/arm64/kernel/ arch/arm64/mm/ core-$(CONFIG_KVM) += arch/arm64/kvm/ core-$(CONFIG_XEN) += arch/arm64/xen/ +core-$(CONFIG_CRYPTO) += arch/arm64/crypto/ libs-y := arch/arm64/lib/ $(libs-y) libs-y += $(LIBGCC) diff --git a/arch/arm64/boot/dts/apm-storm.dtsi b/arch/arm64/boot/dts/apm-storm.dtsi index f8c40a66e65d..c5f0a47a1375 100644 --- a/arch/arm64/boot/dts/apm-storm.dtsi +++ b/arch/arm64/boot/dts/apm-storm.dtsi @@ -257,6 +257,19 @@ enable-offset = <0x0>; enable-mask = <0x39>; }; + + rtcclk: rtcclk@17000000 { + compatible = "apm,xgene-device-clock"; + #clock-cells = <1>; + clocks = <&socplldiv2 0>; + reg = <0x0 0x17000000 0x0 0x2000>; + reg-names = "csr-reg"; + csr-offset = <0xc>; + csr-mask = <0x2>; + enable-offset = <0x10>; + enable-mask = <0x2>; + clock-output-names = "rtcclk"; + }; }; serial0: serial@1c020000 { @@ -342,5 +355,13 @@ phys = <&phy3 0>; phy-names = "sata-phy"; }; + + rtc: rtc@10510000 { + compatible = "apm,xgene-rtc"; + reg = <0x0 0x10510000 0x0 0x400>; + interrupts = <0x0 0x46 0x4>; + #clock-cells = <1>; + clocks = <&rtcclk 0>; + }; }; }; diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig index 7959dd0ca5d5..157e1d8d9a47 100644 --- a/arch/arm64/configs/defconfig +++ b/arch/arm64/configs/defconfig @@ -1,11 +1,11 @@ # CONFIG_LOCALVERSION_AUTO is not set -# CONFIG_SWAP is not set CONFIG_SYSVIPC=y CONFIG_POSIX_MQUEUE=y +CONFIG_AUDIT=y +CONFIG_NO_HZ_IDLE=y +CONFIG_HIGH_RES_TIMERS=y CONFIG_BSD_PROCESS_ACCT=y CONFIG_BSD_PROCESS_ACCT_V3=y -CONFIG_NO_HZ=y -CONFIG_HIGH_RES_TIMERS=y CONFIG_IKCONFIG=y CONFIG_IKCONFIG_PROC=y CONFIG_LOG_BUF_SHIFT=14 @@ -27,6 +27,7 @@ CONFIG_ARCH_VEXPRESS=y CONFIG_ARCH_XGENE=y CONFIG_SMP=y CONFIG_PREEMPT=y +CONFIG_TRANSPARENT_HUGEPAGE=y CONFIG_CMA=y CONFIG_CMDLINE="console=ttyAMA0" # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set @@ -44,7 +45,7 @@ CONFIG_IP_PNP_BOOTP=y CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" CONFIG_DEVTMPFS=y CONFIG_DMA_CMA=y -CONFIG_SCSI=y +CONFIG_VIRTIO_BLK=y # CONFIG_SCSI_PROC_FS is not set CONFIG_BLK_DEV_SD=y # CONFIG_SCSI_LOWLEVEL is not set @@ -56,20 +57,18 @@ CONFIG_SMC91X=y CONFIG_SMSC911X=y # CONFIG_WLAN is not set CONFIG_INPUT_EVDEV=y -# CONFIG_SERIO_I8042 is not set # CONFIG_SERIO_SERPORT is not set CONFIG_LEGACY_PTY_COUNT=16 CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y -CONFIG_SERIAL_OF_PLATFORM=y CONFIG_SERIAL_AMBA_PL011=y CONFIG_SERIAL_AMBA_PL011_CONSOLE=y +CONFIG_SERIAL_OF_PLATFORM=y # CONFIG_HW_RANDOM is not set # CONFIG_HWMON is not set CONFIG_REGULATOR=y 
CONFIG_REGULATOR_FIXED_VOLTAGE=y CONFIG_FB=y -# CONFIG_VGA_CONSOLE is not set CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_LOGO=y # CONFIG_LOGO_LINUX_MONO is not set @@ -79,27 +78,38 @@ CONFIG_USB_ISP1760_HCD=y CONFIG_USB_STORAGE=y CONFIG_MMC=y CONFIG_MMC_ARMMMCI=y +CONFIG_VIRTIO_MMIO=y # CONFIG_IOMMU_SUPPORT is not set CONFIG_EXT2_FS=y CONFIG_EXT3_FS=y -CONFIG_EXT4_FS=y # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set # CONFIG_EXT3_FS_XATTR is not set +CONFIG_EXT4_FS=y CONFIG_FUSE_FS=y CONFIG_CUSE=y CONFIG_VFAT_FS=y CONFIG_TMPFS=y +CONFIG_HUGETLBFS=y # CONFIG_MISC_FILESYSTEMS is not set CONFIG_NFS_FS=y CONFIG_ROOT_NFS=y CONFIG_NLS_CODEPAGE_437=y CONFIG_NLS_ISO8859_1=y -CONFIG_MAGIC_SYSRQ=y +CONFIG_VIRTUALIZATION=y +CONFIG_KVM=y +CONFIG_DEBUG_INFO=y CONFIG_DEBUG_FS=y +CONFIG_MAGIC_SYSRQ=y CONFIG_DEBUG_KERNEL=y +CONFIG_LOCKUP_DETECTOR=y # CONFIG_SCHED_DEBUG is not set -CONFIG_DEBUG_INFO=y # CONFIG_FTRACE is not set -CONFIG_ATOMIC64_SELFTEST=y -CONFIG_VIRTIO_MMIO=y -CONFIG_VIRTIO_BLK=y +CONFIG_CRYPTO_ANSI_CPRNG=y +CONFIG_ARM64_CRYPTO=y +CONFIG_CRYPTO_SHA1_ARM64_CE=y +CONFIG_CRYPTO_SHA2_ARM64_CE=y +CONFIG_CRYPTO_GHASH_ARM64_CE=y +CONFIG_CRYPTO_AES_ARM64_CE=y +CONFIG_CRYPTO_AES_ARM64_CE_CCM=y +CONFIG_CRYPTO_AES_ARM64_CE_BLK=y +CONFIG_CRYPTO_AES_ARM64_NEON_BLK=y diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig new file mode 100644 index 000000000000..5562652c5316 --- /dev/null +++ b/arch/arm64/crypto/Kconfig @@ -0,0 +1,53 @@ + +menuconfig ARM64_CRYPTO + bool "ARM64 Accelerated Cryptographic Algorithms" + depends on ARM64 + help + Say Y here to choose from a selection of cryptographic algorithms + implemented using ARM64 specific CPU features or instructions. + +if ARM64_CRYPTO + +config CRYPTO_SHA1_ARM64_CE + tristate "SHA-1 digest algorithm (ARMv8 Crypto Extensions)" + depends on ARM64 && KERNEL_MODE_NEON + select CRYPTO_HASH + +config CRYPTO_SHA2_ARM64_CE + tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)" + depends on ARM64 && KERNEL_MODE_NEON + select CRYPTO_HASH + +config CRYPTO_GHASH_ARM64_CE + tristate "GHASH (for GCM chaining mode) using ARMv8 Crypto Extensions" + depends on ARM64 && KERNEL_MODE_NEON + select CRYPTO_HASH + +config CRYPTO_AES_ARM64_CE + tristate "AES core cipher using ARMv8 Crypto Extensions" + depends on ARM64 && KERNEL_MODE_NEON + select CRYPTO_ALGAPI + select CRYPTO_AES + +config CRYPTO_AES_ARM64_CE_CCM + tristate "AES in CCM mode using ARMv8 Crypto Extensions" + depends on ARM64 && KERNEL_MODE_NEON + select CRYPTO_ALGAPI + select CRYPTO_AES + select CRYPTO_AEAD + +config CRYPTO_AES_ARM64_CE_BLK + tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions" + depends on ARM64 && KERNEL_MODE_NEON + select CRYPTO_BLKCIPHER + select CRYPTO_AES + select CRYPTO_ABLK_HELPER + +config CRYPTO_AES_ARM64_NEON_BLK + tristate "AES in ECB/CBC/CTR/XTS modes using NEON instructions" + depends on ARM64 && KERNEL_MODE_NEON + select CRYPTO_BLKCIPHER + select CRYPTO_AES + select CRYPTO_ABLK_HELPER + +endif diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile new file mode 100644 index 000000000000..2070a56ecc46 --- /dev/null +++ b/arch/arm64/crypto/Makefile @@ -0,0 +1,38 @@ +# +# linux/arch/arm64/crypto/Makefile +# +# Copyright (C) 2014 Linaro Ltd <ard.biesheuvel@linaro.org> +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License version 2 as +# published by the Free Software Foundation. 
+# + +obj-$(CONFIG_CRYPTO_SHA1_ARM64_CE) += sha1-ce.o +sha1-ce-y := sha1-ce-glue.o sha1-ce-core.o + +obj-$(CONFIG_CRYPTO_SHA2_ARM64_CE) += sha2-ce.o +sha2-ce-y := sha2-ce-glue.o sha2-ce-core.o + +obj-$(CONFIG_CRYPTO_GHASH_ARM64_CE) += ghash-ce.o +ghash-ce-y := ghash-ce-glue.o ghash-ce-core.o + +obj-$(CONFIG_CRYPTO_AES_ARM64_CE) += aes-ce-cipher.o +CFLAGS_aes-ce-cipher.o += -march=armv8-a+crypto + +obj-$(CONFIG_CRYPTO_AES_ARM64_CE_CCM) += aes-ce-ccm.o +aes-ce-ccm-y := aes-ce-ccm-glue.o aes-ce-ccm-core.o + +obj-$(CONFIG_CRYPTO_AES_ARM64_CE_BLK) += aes-ce-blk.o +aes-ce-blk-y := aes-glue-ce.o aes-ce.o + +obj-$(CONFIG_CRYPTO_AES_ARM64_NEON_BLK) += aes-neon-blk.o +aes-neon-blk-y := aes-glue-neon.o aes-neon.o + +AFLAGS_aes-ce.o := -DINTERLEAVE=2 -DINTERLEAVE_INLINE +AFLAGS_aes-neon.o := -DINTERLEAVE=4 + +CFLAGS_aes-glue-ce.o := -DUSE_V8_CRYPTO_EXTENSIONS + +$(obj)/aes-glue-%.o: $(src)/aes-glue.c FORCE + $(call if_changed_dep,cc_o_c) diff --git a/arch/arm64/crypto/aes-ce-ccm-core.S b/arch/arm64/crypto/aes-ce-ccm-core.S new file mode 100644 index 000000000000..432e4841cd81 --- /dev/null +++ b/arch/arm64/crypto/aes-ce-ccm-core.S @@ -0,0 +1,222 @@ +/* + * aesce-ccm-core.S - AES-CCM transform for ARMv8 with Crypto Extensions + * + * Copyright (C) 2013 - 2014 Linaro Ltd <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <linux/linkage.h> + + .text + .arch armv8-a+crypto + + /* + * void ce_aes_ccm_auth_data(u8 mac[], u8 const in[], u32 abytes, + * u32 *macp, u8 const rk[], u32 rounds); + */ +ENTRY(ce_aes_ccm_auth_data) + ldr w8, [x3] /* leftover from prev round? */ + ld1 {v0.2d}, [x0] /* load mac */ + cbz w8, 1f + sub w8, w8, #16 + eor v1.16b, v1.16b, v1.16b +0: ldrb w7, [x1], #1 /* get 1 byte of input */ + subs w2, w2, #1 + add w8, w8, #1 + ins v1.b[0], w7 + ext v1.16b, v1.16b, v1.16b, #1 /* rotate in the input bytes */ + beq 8f /* out of input? */ + cbnz w8, 0b + eor v0.16b, v0.16b, v1.16b +1: ld1 {v3.2d}, [x4] /* load first round key */ + prfm pldl1strm, [x1] + cmp w5, #12 /* which key size? */ + add x6, x4, #16 + sub w7, w5, #2 /* modified # of rounds */ + bmi 2f + bne 5f + mov v5.16b, v3.16b + b 4f +2: mov v4.16b, v3.16b + ld1 {v5.2d}, [x6], #16 /* load 2nd round key */ +3: aese v0.16b, v4.16b + aesmc v0.16b, v0.16b +4: ld1 {v3.2d}, [x6], #16 /* load next round key */ + aese v0.16b, v5.16b + aesmc v0.16b, v0.16b +5: ld1 {v4.2d}, [x6], #16 /* load next round key */ + subs w7, w7, #3 + aese v0.16b, v3.16b + aesmc v0.16b, v0.16b + ld1 {v5.2d}, [x6], #16 /* load next round key */ + bpl 3b + aese v0.16b, v4.16b + subs w2, w2, #16 /* last data? 
*/ + eor v0.16b, v0.16b, v5.16b /* final round */ + bmi 6f + ld1 {v1.16b}, [x1], #16 /* load next input block */ + eor v0.16b, v0.16b, v1.16b /* xor with mac */ + bne 1b +6: st1 {v0.2d}, [x0] /* store mac */ + beq 10f + adds w2, w2, #16 + beq 10f + mov w8, w2 +7: ldrb w7, [x1], #1 + umov w6, v0.b[0] + eor w6, w6, w7 + strb w6, [x0], #1 + subs w2, w2, #1 + beq 10f + ext v0.16b, v0.16b, v0.16b, #1 /* rotate out the mac bytes */ + b 7b +8: mov w7, w8 + add w8, w8, #16 +9: ext v1.16b, v1.16b, v1.16b, #1 + adds w7, w7, #1 + bne 9b + eor v0.16b, v0.16b, v1.16b + st1 {v0.2d}, [x0] +10: str w8, [x3] + ret +ENDPROC(ce_aes_ccm_auth_data) + + /* + * void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u8 const rk[], + * u32 rounds); + */ +ENTRY(ce_aes_ccm_final) + ld1 {v3.2d}, [x2], #16 /* load first round key */ + ld1 {v0.2d}, [x0] /* load mac */ + cmp w3, #12 /* which key size? */ + sub w3, w3, #2 /* modified # of rounds */ + ld1 {v1.2d}, [x1] /* load 1st ctriv */ + bmi 0f + bne 3f + mov v5.16b, v3.16b + b 2f +0: mov v4.16b, v3.16b +1: ld1 {v5.2d}, [x2], #16 /* load next round key */ + aese v0.16b, v4.16b + aese v1.16b, v4.16b + aesmc v0.16b, v0.16b + aesmc v1.16b, v1.16b +2: ld1 {v3.2d}, [x2], #16 /* load next round key */ + aese v0.16b, v5.16b + aese v1.16b, v5.16b + aesmc v0.16b, v0.16b + aesmc v1.16b, v1.16b +3: ld1 {v4.2d}, [x2], #16 /* load next round key */ + subs w3, w3, #3 + aese v0.16b, v3.16b + aese v1.16b, v3.16b + aesmc v0.16b, v0.16b + aesmc v1.16b, v1.16b + bpl 1b + aese v0.16b, v4.16b + aese v1.16b, v4.16b + /* final round key cancels out */ + eor v0.16b, v0.16b, v1.16b /* en-/decrypt the mac */ + st1 {v0.2d}, [x0] /* store result */ + ret +ENDPROC(ce_aes_ccm_final) + + .macro aes_ccm_do_crypt,enc + ldr x8, [x6, #8] /* load lower ctr */ + ld1 {v0.2d}, [x5] /* load mac */ + rev x8, x8 /* keep swabbed ctr in reg */ +0: /* outer loop */ + ld1 {v1.1d}, [x6] /* load upper ctr */ + prfm pldl1strm, [x1] + add x8, x8, #1 + rev x9, x8 + cmp w4, #12 /* which key size? */ + sub w7, w4, #2 /* get modified # of rounds */ + ins v1.d[1], x9 /* no carry in lower ctr */ + ld1 {v3.2d}, [x3] /* load first round key */ + add x10, x3, #16 + bmi 1f + bne 4f + mov v5.16b, v3.16b + b 3f +1: mov v4.16b, v3.16b + ld1 {v5.2d}, [x10], #16 /* load 2nd round key */ +2: /* inner loop: 3 rounds, 2x interleaved */ + aese v0.16b, v4.16b + aese v1.16b, v4.16b + aesmc v0.16b, v0.16b + aesmc v1.16b, v1.16b +3: ld1 {v3.2d}, [x10], #16 /* load next round key */ + aese v0.16b, v5.16b + aese v1.16b, v5.16b + aesmc v0.16b, v0.16b + aesmc v1.16b, v1.16b +4: ld1 {v4.2d}, [x10], #16 /* load next round key */ + subs w7, w7, #3 + aese v0.16b, v3.16b + aese v1.16b, v3.16b + aesmc v0.16b, v0.16b + aesmc v1.16b, v1.16b + ld1 {v5.2d}, [x10], #16 /* load next round key */ + bpl 2b + aese v0.16b, v4.16b + aese v1.16b, v4.16b + subs w2, w2, #16 + bmi 6f /* partial block? 
*/ + ld1 {v2.16b}, [x1], #16 /* load next input block */ + .if \enc == 1 + eor v2.16b, v2.16b, v5.16b /* final round enc+mac */ + eor v1.16b, v1.16b, v2.16b /* xor with crypted ctr */ + .else + eor v2.16b, v2.16b, v1.16b /* xor with crypted ctr */ + eor v1.16b, v2.16b, v5.16b /* final round enc */ + .endif + eor v0.16b, v0.16b, v2.16b /* xor mac with pt ^ rk[last] */ + st1 {v1.16b}, [x0], #16 /* write output block */ + bne 0b + rev x8, x8 + st1 {v0.2d}, [x5] /* store mac */ + str x8, [x6, #8] /* store lsb end of ctr (BE) */ +5: ret + +6: eor v0.16b, v0.16b, v5.16b /* final round mac */ + eor v1.16b, v1.16b, v5.16b /* final round enc */ + st1 {v0.2d}, [x5] /* store mac */ + add w2, w2, #16 /* process partial tail block */ +7: ldrb w9, [x1], #1 /* get 1 byte of input */ + umov w6, v1.b[0] /* get top crypted ctr byte */ + umov w7, v0.b[0] /* get top mac byte */ + .if \enc == 1 + eor w7, w7, w9 + eor w9, w9, w6 + .else + eor w9, w9, w6 + eor w7, w7, w9 + .endif + strb w9, [x0], #1 /* store out byte */ + strb w7, [x5], #1 /* store mac byte */ + subs w2, w2, #1 + beq 5b + ext v0.16b, v0.16b, v0.16b, #1 /* shift out mac byte */ + ext v1.16b, v1.16b, v1.16b, #1 /* shift out ctr byte */ + b 7b + .endm + + /* + * void ce_aes_ccm_encrypt(u8 out[], u8 const in[], u32 cbytes, + * u8 const rk[], u32 rounds, u8 mac[], + * u8 ctr[]); + * void ce_aes_ccm_decrypt(u8 out[], u8 const in[], u32 cbytes, + * u8 const rk[], u32 rounds, u8 mac[], + * u8 ctr[]); + */ +ENTRY(ce_aes_ccm_encrypt) + aes_ccm_do_crypt 1 +ENDPROC(ce_aes_ccm_encrypt) + +ENTRY(ce_aes_ccm_decrypt) + aes_ccm_do_crypt 0 +ENDPROC(ce_aes_ccm_decrypt) diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c new file mode 100644 index 000000000000..9e6cdde9b43d --- /dev/null +++ b/arch/arm64/crypto/aes-ce-ccm-glue.c @@ -0,0 +1,297 @@ +/* + * aes-ccm-glue.c - AES-CCM transform for ARMv8 with Crypto Extensions + * + * Copyright (C) 2013 - 2014 Linaro Ltd <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include <asm/neon.h> +#include <asm/unaligned.h> +#include <crypto/aes.h> +#include <crypto/algapi.h> +#include <crypto/scatterwalk.h> +#include <linux/crypto.h> +#include <linux/module.h> + +static int num_rounds(struct crypto_aes_ctx *ctx) +{ + /* + * # of rounds specified by AES: + * 128 bit key 10 rounds + * 192 bit key 12 rounds + * 256 bit key 14 rounds + * => n byte key => 6 + (n/4) rounds + */ + return 6 + ctx->key_length / 4; +} + +asmlinkage void ce_aes_ccm_auth_data(u8 mac[], u8 const in[], u32 abytes, + u32 *macp, u32 const rk[], u32 rounds); + +asmlinkage void ce_aes_ccm_encrypt(u8 out[], u8 const in[], u32 cbytes, + u32 const rk[], u32 rounds, u8 mac[], + u8 ctr[]); + +asmlinkage void ce_aes_ccm_decrypt(u8 out[], u8 const in[], u32 cbytes, + u32 const rk[], u32 rounds, u8 mac[], + u8 ctr[]); + +asmlinkage void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u32 const rk[], + u32 rounds); + +static int ccm_setkey(struct crypto_aead *tfm, const u8 *in_key, + unsigned int key_len) +{ + struct crypto_aes_ctx *ctx = crypto_aead_ctx(tfm); + int ret; + + ret = crypto_aes_expand_key(ctx, in_key, key_len); + if (!ret) + return 0; + + tfm->base.crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; + return -EINVAL; +} + +static int ccm_setauthsize(struct crypto_aead *tfm, unsigned int authsize) +{ + if ((authsize & 1) || authsize < 4) + return -EINVAL; + return 0; +} + +static int ccm_init_mac(struct aead_request *req, u8 maciv[], u32 msglen) +{ + struct crypto_aead *aead = crypto_aead_reqtfm(req); + __be32 *n = (__be32 *)&maciv[AES_BLOCK_SIZE - 8]; + u32 l = req->iv[0] + 1; + + /* verify that CCM dimension 'L' is set correctly in the IV */ + if (l < 2 || l > 8) + return -EINVAL; + + /* verify that msglen can in fact be represented in L bytes */ + if (l < 4 && msglen >> (8 * l)) + return -EOVERFLOW; + + /* + * Even if the CCM spec allows L values of up to 8, the Linux cryptoapi + * uses a u32 type to represent msglen so the top 4 bytes are always 0. 
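The num_rounds() helper above restates the FIPS-197 round counts (10, 12 and 14 rounds for 128-, 192- and 256-bit keys) as 6 + key_length/4. A tiny standalone check of that relationship, in plain userspace C (the function name exists only for this illustration):

        #include <assert.h>
        #include <stdio.h>

        /* AES round count as a function of key length in bytes: 6 + keylen/4 */
        static int aes_rounds(int key_bytes)
        {
                return 6 + key_bytes / 4;
        }

        int main(void)
        {
                assert(aes_rounds(16) == 10);   /* AES-128 */
                assert(aes_rounds(24) == 12);   /* AES-192 */
                assert(aes_rounds(32) == 14);   /* AES-256 */
                printf("round counts match FIPS-197\n");
                return 0;
        }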
+ */ + n[0] = 0; + n[1] = cpu_to_be32(msglen); + + memcpy(maciv, req->iv, AES_BLOCK_SIZE - l); + + /* + * Meaning of byte 0 according to CCM spec (RFC 3610/NIST 800-38C) + * - bits 0..2 : max # of bytes required to represent msglen, minus 1 + * (already set by caller) + * - bits 3..5 : size of auth tag (1 => 4 bytes, 2 => 6 bytes, etc) + * - bit 6 : indicates presence of authenticate-only data + */ + maciv[0] |= (crypto_aead_authsize(aead) - 2) << 2; + if (req->assoclen) + maciv[0] |= 0x40; + + memset(&req->iv[AES_BLOCK_SIZE - l], 0, l); + return 0; +} + +static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[]) +{ + struct crypto_aead *aead = crypto_aead_reqtfm(req); + struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead); + struct __packed { __be16 l; __be32 h; u16 len; } ltag; + struct scatter_walk walk; + u32 len = req->assoclen; + u32 macp = 0; + + /* prepend the AAD with a length tag */ + if (len < 0xff00) { + ltag.l = cpu_to_be16(len); + ltag.len = 2; + } else { + ltag.l = cpu_to_be16(0xfffe); + put_unaligned_be32(len, <ag.h); + ltag.len = 6; + } + + ce_aes_ccm_auth_data(mac, (u8 *)<ag, ltag.len, &macp, ctx->key_enc, + num_rounds(ctx)); + scatterwalk_start(&walk, req->assoc); + + do { + u32 n = scatterwalk_clamp(&walk, len); + u8 *p; + + if (!n) { + scatterwalk_start(&walk, sg_next(walk.sg)); + n = scatterwalk_clamp(&walk, len); + } + p = scatterwalk_map(&walk); + ce_aes_ccm_auth_data(mac, p, n, &macp, ctx->key_enc, + num_rounds(ctx)); + len -= n; + + scatterwalk_unmap(p); + scatterwalk_advance(&walk, n); + scatterwalk_done(&walk, 0, len); + } while (len); +} + +static int ccm_encrypt(struct aead_request *req) +{ + struct crypto_aead *aead = crypto_aead_reqtfm(req); + struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead); + struct blkcipher_desc desc = { .info = req->iv }; + struct blkcipher_walk walk; + u8 __aligned(8) mac[AES_BLOCK_SIZE]; + u8 buf[AES_BLOCK_SIZE]; + u32 len = req->cryptlen; + int err; + + err = ccm_init_mac(req, mac, len); + if (err) + return err; + + kernel_neon_begin_partial(6); + + if (req->assoclen) + ccm_calculate_auth_mac(req, mac); + + /* preserve the original iv for the final round */ + memcpy(buf, req->iv, AES_BLOCK_SIZE); + + blkcipher_walk_init(&walk, req->dst, req->src, len); + err = blkcipher_aead_walk_virt_block(&desc, &walk, aead, + AES_BLOCK_SIZE); + + while (walk.nbytes) { + u32 tail = walk.nbytes % AES_BLOCK_SIZE; + + if (walk.nbytes == len) + tail = 0; + + ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr, + walk.nbytes - tail, ctx->key_enc, + num_rounds(ctx), mac, walk.iv); + + len -= walk.nbytes - tail; + err = blkcipher_walk_done(&desc, &walk, tail); + } + if (!err) + ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx)); + + kernel_neon_end(); + + if (err) + return err; + + /* copy authtag to end of dst */ + scatterwalk_map_and_copy(mac, req->dst, req->cryptlen, + crypto_aead_authsize(aead), 1); + + return 0; +} + +static int ccm_decrypt(struct aead_request *req) +{ + struct crypto_aead *aead = crypto_aead_reqtfm(req); + struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead); + unsigned int authsize = crypto_aead_authsize(aead); + struct blkcipher_desc desc = { .info = req->iv }; + struct blkcipher_walk walk; + u8 __aligned(8) mac[AES_BLOCK_SIZE]; + u8 buf[AES_BLOCK_SIZE]; + u32 len = req->cryptlen - authsize; + int err; + + err = ccm_init_mac(req, mac, len); + if (err) + return err; + + kernel_neon_begin_partial(6); + + if (req->assoclen) + ccm_calculate_auth_mac(req, mac); + + /* preserve the original iv for the final round 
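ccm_init_mac() and ccm_calculate_auth_mac() above implement the message formatting from RFC 3610 / NIST SP 800-38C: the first CBC-MAC block B_0 carries a flags byte, the nonce and the message length, and any associated data is prefixed with a 2- or 6-byte length tag. A minimal userspace sketch of just that formatting, using stdint types instead of the kernel's u8/u32 (the function names are illustrative, not part of the driver):

        #include <stdint.h>
        #include <string.h>

        /*
         * Build the CCM B_0 block: flags byte, nonce, then the message length
         * in the last l bytes, big endian.  l is the size of the length field
         * (2 <= l <= 8) and the nonce occupies the remaining 15 - l bytes.
         */
        static void ccm_format_b0(uint8_t b0[16], const uint8_t *nonce, int l,
                                  int authsize, int have_aad, uint32_t msglen)
        {
                int i;

                /*
                 * Bits 0..2: l - 1, bits 3..5: (authsize - 2) / 2, bit 6: AAD
                 * present.  Since authsize is even, ((authsize - 2) / 2) << 3
                 * equals the (authsize - 2) << 2 used by the glue code.
                 */
                b0[0] = (uint8_t)((l - 1) | (((authsize - 2) / 2) << 3) |
                                  (have_aad ? 0x40 : 0));
                memcpy(&b0[1], nonce, 15 - l);

                memset(&b0[16 - l], 0, l);              /* length field */
                for (i = 0; i < 4 && i < l; i++)
                        b0[15 - i] = (uint8_t)(msglen >> (8 * i));
        }

        /* Length tag prepended to the associated data (assoclen < 2^32). */
        static int ccm_format_aad_len(uint8_t out[6], uint32_t assoclen)
        {
                if (assoclen < 0xff00) {        /* short form: be16 length */
                        out[0] = (uint8_t)(assoclen >> 8);
                        out[1] = (uint8_t)assoclen;
                        return 2;
                }
                out[0] = 0xff;                  /* long form: 0xfffe || be32 */
                out[1] = 0xfe;
                out[2] = (uint8_t)(assoclen >> 24);
                out[3] = (uint8_t)(assoclen >> 16);
                out[4] = (uint8_t)(assoclen >> 8);
                out[5] = (uint8_t)assoclen;
                return 6;
        }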
*/ + memcpy(buf, req->iv, AES_BLOCK_SIZE); + + blkcipher_walk_init(&walk, req->dst, req->src, len); + err = blkcipher_aead_walk_virt_block(&desc, &walk, aead, + AES_BLOCK_SIZE); + + while (walk.nbytes) { + u32 tail = walk.nbytes % AES_BLOCK_SIZE; + + if (walk.nbytes == len) + tail = 0; + + ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr, + walk.nbytes - tail, ctx->key_enc, + num_rounds(ctx), mac, walk.iv); + + len -= walk.nbytes - tail; + err = blkcipher_walk_done(&desc, &walk, tail); + } + if (!err) + ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx)); + + kernel_neon_end(); + + if (err) + return err; + + /* compare calculated auth tag with the stored one */ + scatterwalk_map_and_copy(buf, req->src, req->cryptlen - authsize, + authsize, 0); + + if (memcmp(mac, buf, authsize)) + return -EBADMSG; + return 0; +} + +static struct crypto_alg ccm_aes_alg = { + .cra_name = "ccm(aes)", + .cra_driver_name = "ccm-aes-ce", + .cra_priority = 300, + .cra_flags = CRYPTO_ALG_TYPE_AEAD, + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_alignmask = 7, + .cra_type = &crypto_aead_type, + .cra_module = THIS_MODULE, + .cra_aead = { + .ivsize = AES_BLOCK_SIZE, + .maxauthsize = AES_BLOCK_SIZE, + .setkey = ccm_setkey, + .setauthsize = ccm_setauthsize, + .encrypt = ccm_encrypt, + .decrypt = ccm_decrypt, + } +}; + +static int __init aes_mod_init(void) +{ + if (!(elf_hwcap & HWCAP_AES)) + return -ENODEV; + return crypto_register_alg(&ccm_aes_alg); +} + +static void __exit aes_mod_exit(void) +{ + crypto_unregister_alg(&ccm_aes_alg); +} + +module_init(aes_mod_init); +module_exit(aes_mod_exit); + +MODULE_DESCRIPTION("Synchronous AES in CCM mode using ARMv8 Crypto Extensions"); +MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>"); +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS("ccm(aes)"); diff --git a/arch/arm64/crypto/aes-ce-cipher.c b/arch/arm64/crypto/aes-ce-cipher.c new file mode 100644 index 000000000000..2075e1acae6b --- /dev/null +++ b/arch/arm64/crypto/aes-ce-cipher.c @@ -0,0 +1,155 @@ +/* + * aes-ce-cipher.c - core AES cipher using ARMv8 Crypto Extensions + * + * Copyright (C) 2013 - 2014 Linaro Ltd <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include <asm/neon.h> +#include <crypto/aes.h> +#include <linux/cpufeature.h> +#include <linux/crypto.h> +#include <linux/module.h> + +MODULE_DESCRIPTION("Synchronous AES cipher using ARMv8 Crypto Extensions"); +MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>"); +MODULE_LICENSE("GPL v2"); + +struct aes_block { + u8 b[AES_BLOCK_SIZE]; +}; + +static int num_rounds(struct crypto_aes_ctx *ctx) +{ + /* + * # of rounds specified by AES: + * 128 bit key 10 rounds + * 192 bit key 12 rounds + * 256 bit key 14 rounds + * => n byte key => 6 + (n/4) rounds + */ + return 6 + ctx->key_length / 4; +} + +static void aes_cipher_encrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[]) +{ + struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); + struct aes_block *out = (struct aes_block *)dst; + struct aes_block const *in = (struct aes_block *)src; + void *dummy0; + int dummy1; + + kernel_neon_begin_partial(4); + + __asm__(" ld1 {v0.16b}, %[in] ;" + " ld1 {v1.2d}, [%[key]], #16 ;" + " cmp %w[rounds], #10 ;" + " bmi 0f ;" + " bne 3f ;" + " mov v3.16b, v1.16b ;" + " b 2f ;" + "0: mov v2.16b, v1.16b ;" + " ld1 {v3.2d}, [%[key]], #16 ;" + "1: aese v0.16b, v2.16b ;" + " aesmc v0.16b, v0.16b ;" + "2: ld1 {v1.2d}, [%[key]], #16 ;" + " aese v0.16b, v3.16b ;" + " aesmc v0.16b, v0.16b ;" + "3: ld1 {v2.2d}, [%[key]], #16 ;" + " subs %w[rounds], %w[rounds], #3 ;" + " aese v0.16b, v1.16b ;" + " aesmc v0.16b, v0.16b ;" + " ld1 {v3.2d}, [%[key]], #16 ;" + " bpl 1b ;" + " aese v0.16b, v2.16b ;" + " eor v0.16b, v0.16b, v3.16b ;" + " st1 {v0.16b}, %[out] ;" + + : [out] "=Q"(*out), + [key] "=r"(dummy0), + [rounds] "=r"(dummy1) + : [in] "Q"(*in), + "1"(ctx->key_enc), + "2"(num_rounds(ctx) - 2) + : "cc"); + + kernel_neon_end(); +} + +static void aes_cipher_decrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[]) +{ + struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); + struct aes_block *out = (struct aes_block *)dst; + struct aes_block const *in = (struct aes_block *)src; + void *dummy0; + int dummy1; + + kernel_neon_begin_partial(4); + + __asm__(" ld1 {v0.16b}, %[in] ;" + " ld1 {v1.2d}, [%[key]], #16 ;" + " cmp %w[rounds], #10 ;" + " bmi 0f ;" + " bne 3f ;" + " mov v3.16b, v1.16b ;" + " b 2f ;" + "0: mov v2.16b, v1.16b ;" + " ld1 {v3.2d}, [%[key]], #16 ;" + "1: aesd v0.16b, v2.16b ;" + " aesimc v0.16b, v0.16b ;" + "2: ld1 {v1.2d}, [%[key]], #16 ;" + " aesd v0.16b, v3.16b ;" + " aesimc v0.16b, v0.16b ;" + "3: ld1 {v2.2d}, [%[key]], #16 ;" + " subs %w[rounds], %w[rounds], #3 ;" + " aesd v0.16b, v1.16b ;" + " aesimc v0.16b, v0.16b ;" + " ld1 {v3.2d}, [%[key]], #16 ;" + " bpl 1b ;" + " aesd v0.16b, v2.16b ;" + " eor v0.16b, v0.16b, v3.16b ;" + " st1 {v0.16b}, %[out] ;" + + : [out] "=Q"(*out), + [key] "=r"(dummy0), + [rounds] "=r"(dummy1) + : [in] "Q"(*in), + "1"(ctx->key_dec), + "2"(num_rounds(ctx) - 2) + : "cc"); + + kernel_neon_end(); +} + +static struct crypto_alg aes_alg = { + .cra_name = "aes", + .cra_driver_name = "aes-ce", + .cra_priority = 300, + .cra_flags = CRYPTO_ALG_TYPE_CIPHER, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_module = THIS_MODULE, + .cra_cipher = { + .cia_min_keysize = AES_MIN_KEY_SIZE, + .cia_max_keysize = AES_MAX_KEY_SIZE, + .cia_setkey = crypto_aes_set_key, + .cia_encrypt = aes_cipher_encrypt, + .cia_decrypt = aes_cipher_decrypt + } +}; + +static int __init aes_mod_init(void) +{ + return crypto_register_alg(&aes_alg); +} + +static void __exit aes_mod_exit(void) +{ + crypto_unregister_alg(&aes_alg); +} + +module_cpu_feature_match(AES, 
aes_mod_init); +module_exit(aes_mod_exit); diff --git a/arch/arm64/crypto/aes-ce.S b/arch/arm64/crypto/aes-ce.S new file mode 100644 index 000000000000..685a18f731eb --- /dev/null +++ b/arch/arm64/crypto/aes-ce.S @@ -0,0 +1,133 @@ +/* + * linux/arch/arm64/crypto/aes-ce.S - AES cipher for ARMv8 with + * Crypto Extensions + * + * Copyright (C) 2013 Linaro Ltd <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <linux/linkage.h> + +#define AES_ENTRY(func) ENTRY(ce_ ## func) +#define AES_ENDPROC(func) ENDPROC(ce_ ## func) + + .arch armv8-a+crypto + + /* preload all round keys */ + .macro load_round_keys, rounds, rk + cmp \rounds, #12 + blo 2222f /* 128 bits */ + beq 1111f /* 192 bits */ + ld1 {v17.16b-v18.16b}, [\rk], #32 +1111: ld1 {v19.16b-v20.16b}, [\rk], #32 +2222: ld1 {v21.16b-v24.16b}, [\rk], #64 + ld1 {v25.16b-v28.16b}, [\rk], #64 + ld1 {v29.16b-v31.16b}, [\rk] + .endm + + /* prepare for encryption with key in rk[] */ + .macro enc_prepare, rounds, rk, ignore + load_round_keys \rounds, \rk + .endm + + /* prepare for encryption (again) but with new key in rk[] */ + .macro enc_switch_key, rounds, rk, ignore + load_round_keys \rounds, \rk + .endm + + /* prepare for decryption with key in rk[] */ + .macro dec_prepare, rounds, rk, ignore + load_round_keys \rounds, \rk + .endm + + .macro do_enc_Nx, de, mc, k, i0, i1, i2, i3 + aes\de \i0\().16b, \k\().16b + .ifnb \i1 + aes\de \i1\().16b, \k\().16b + .ifnb \i3 + aes\de \i2\().16b, \k\().16b + aes\de \i3\().16b, \k\().16b + .endif + .endif + aes\mc \i0\().16b, \i0\().16b + .ifnb \i1 + aes\mc \i1\().16b, \i1\().16b + .ifnb \i3 + aes\mc \i2\().16b, \i2\().16b + aes\mc \i3\().16b, \i3\().16b + .endif + .endif + .endm + + /* up to 4 interleaved encryption rounds with the same round key */ + .macro round_Nx, enc, k, i0, i1, i2, i3 + .ifc \enc, e + do_enc_Nx e, mc, \k, \i0, \i1, \i2, \i3 + .else + do_enc_Nx d, imc, \k, \i0, \i1, \i2, \i3 + .endif + .endm + + /* up to 4 interleaved final rounds */ + .macro fin_round_Nx, de, k, k2, i0, i1, i2, i3 + aes\de \i0\().16b, \k\().16b + .ifnb \i1 + aes\de \i1\().16b, \k\().16b + .ifnb \i3 + aes\de \i2\().16b, \k\().16b + aes\de \i3\().16b, \k\().16b + .endif + .endif + eor \i0\().16b, \i0\().16b, \k2\().16b + .ifnb \i1 + eor \i1\().16b, \i1\().16b, \k2\().16b + .ifnb \i3 + eor \i2\().16b, \i2\().16b, \k2\().16b + eor \i3\().16b, \i3\().16b, \k2\().16b + .endif + .endif + .endm + + /* up to 4 interleaved blocks */ + .macro do_block_Nx, enc, rounds, i0, i1, i2, i3 + cmp \rounds, #12 + blo 2222f /* 128 bits */ + beq 1111f /* 192 bits */ + round_Nx \enc, v17, \i0, \i1, \i2, \i3 + round_Nx \enc, v18, \i0, \i1, \i2, \i3 +1111: round_Nx \enc, v19, \i0, \i1, \i2, \i3 + round_Nx \enc, v20, \i0, \i1, \i2, \i3 +2222: .irp key, v21, v22, v23, v24, v25, v26, v27, v28, v29 + round_Nx \enc, \key, \i0, \i1, \i2, \i3 + .endr + fin_round_Nx \enc, v30, v31, \i0, \i1, \i2, \i3 + .endm + + .macro encrypt_block, in, rounds, t0, t1, t2 + do_block_Nx e, \rounds, \in + .endm + + .macro encrypt_block2x, i0, i1, rounds, t0, t1, t2 + do_block_Nx e, \rounds, \i0, \i1 + .endm + + .macro encrypt_block4x, i0, i1, i2, i3, rounds, t0, t1, t2 + do_block_Nx e, \rounds, \i0, \i1, \i2, \i3 + .endm + + .macro decrypt_block, in, rounds, t0, t1, t2 + do_block_Nx d, \rounds, \in + .endm + + .macro decrypt_block2x, i0, i1, rounds, t0, t1, t2 + do_block_Nx d, \rounds, 
\i0, \i1 + .endm + + .macro decrypt_block4x, i0, i1, i2, i3, rounds, t0, t1, t2 + do_block_Nx d, \rounds, \i0, \i1, \i2, \i3 + .endm + +#include "aes-modes.S" diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c new file mode 100644 index 000000000000..60f2f4c12256 --- /dev/null +++ b/arch/arm64/crypto/aes-glue.c @@ -0,0 +1,446 @@ +/* + * linux/arch/arm64/crypto/aes-glue.c - wrapper code for ARMv8 AES + * + * Copyright (C) 2013 Linaro Ltd <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <asm/neon.h> +#include <asm/hwcap.h> +#include <crypto/aes.h> +#include <crypto/ablk_helper.h> +#include <crypto/algapi.h> +#include <linux/module.h> +#include <linux/cpufeature.h> + +#ifdef USE_V8_CRYPTO_EXTENSIONS +#define MODE "ce" +#define PRIO 300 +#define aes_ecb_encrypt ce_aes_ecb_encrypt +#define aes_ecb_decrypt ce_aes_ecb_decrypt +#define aes_cbc_encrypt ce_aes_cbc_encrypt +#define aes_cbc_decrypt ce_aes_cbc_decrypt +#define aes_ctr_encrypt ce_aes_ctr_encrypt +#define aes_xts_encrypt ce_aes_xts_encrypt +#define aes_xts_decrypt ce_aes_xts_decrypt +MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions"); +#else +#define MODE "neon" +#define PRIO 200 +#define aes_ecb_encrypt neon_aes_ecb_encrypt +#define aes_ecb_decrypt neon_aes_ecb_decrypt +#define aes_cbc_encrypt neon_aes_cbc_encrypt +#define aes_cbc_decrypt neon_aes_cbc_decrypt +#define aes_ctr_encrypt neon_aes_ctr_encrypt +#define aes_xts_encrypt neon_aes_xts_encrypt +#define aes_xts_decrypt neon_aes_xts_decrypt +MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 NEON"); +MODULE_ALIAS("ecb(aes)"); +MODULE_ALIAS("cbc(aes)"); +MODULE_ALIAS("ctr(aes)"); +MODULE_ALIAS("xts(aes)"); +#endif + +MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>"); +MODULE_LICENSE("GPL v2"); + +/* defined in aes-modes.S */ +asmlinkage void aes_ecb_encrypt(u8 out[], u8 const in[], u8 const rk[], + int rounds, int blocks, int first); +asmlinkage void aes_ecb_decrypt(u8 out[], u8 const in[], u8 const rk[], + int rounds, int blocks, int first); + +asmlinkage void aes_cbc_encrypt(u8 out[], u8 const in[], u8 const rk[], + int rounds, int blocks, u8 iv[], int first); +asmlinkage void aes_cbc_decrypt(u8 out[], u8 const in[], u8 const rk[], + int rounds, int blocks, u8 iv[], int first); + +asmlinkage void aes_ctr_encrypt(u8 out[], u8 const in[], u8 const rk[], + int rounds, int blocks, u8 ctr[], int first); + +asmlinkage void aes_xts_encrypt(u8 out[], u8 const in[], u8 const rk1[], + int rounds, int blocks, u8 const rk2[], u8 iv[], + int first); +asmlinkage void aes_xts_decrypt(u8 out[], u8 const in[], u8 const rk1[], + int rounds, int blocks, u8 const rk2[], u8 iv[], + int first); + +struct crypto_aes_xts_ctx { + struct crypto_aes_ctx key1; + struct crypto_aes_ctx __aligned(8) key2; +}; + +static int xts_set_key(struct crypto_tfm *tfm, const u8 *in_key, + unsigned int key_len) +{ + struct crypto_aes_xts_ctx *ctx = crypto_tfm_ctx(tfm); + int ret; + + ret = crypto_aes_expand_key(&ctx->key1, in_key, key_len / 2); + if (!ret) + ret = crypto_aes_expand_key(&ctx->key2, &in_key[key_len / 2], + key_len / 2); + if (!ret) + return 0; + + tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; + return -EINVAL; +} + +static int ecb_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst, + struct scatterlist *src, unsigned int nbytes) +{ + struct crypto_aes_ctx *ctx 
= crypto_blkcipher_ctx(desc->tfm); + int err, first, rounds = 6 + ctx->key_length / 4; + struct blkcipher_walk walk; + unsigned int blocks; + + desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; + blkcipher_walk_init(&walk, dst, src, nbytes); + err = blkcipher_walk_virt(desc, &walk); + + kernel_neon_begin(); + for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) { + aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr, + (u8 *)ctx->key_enc, rounds, blocks, first); + err = blkcipher_walk_done(desc, &walk, 0); + } + kernel_neon_end(); + return err; +} + +static int ecb_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst, + struct scatterlist *src, unsigned int nbytes) +{ + struct crypto_aes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm); + int err, first, rounds = 6 + ctx->key_length / 4; + struct blkcipher_walk walk; + unsigned int blocks; + + desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; + blkcipher_walk_init(&walk, dst, src, nbytes); + err = blkcipher_walk_virt(desc, &walk); + + kernel_neon_begin(); + for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) { + aes_ecb_decrypt(walk.dst.virt.addr, walk.src.virt.addr, + (u8 *)ctx->key_dec, rounds, blocks, first); + err = blkcipher_walk_done(desc, &walk, 0); + } + kernel_neon_end(); + return err; +} + +static int cbc_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst, + struct scatterlist *src, unsigned int nbytes) +{ + struct crypto_aes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm); + int err, first, rounds = 6 + ctx->key_length / 4; + struct blkcipher_walk walk; + unsigned int blocks; + + desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; + blkcipher_walk_init(&walk, dst, src, nbytes); + err = blkcipher_walk_virt(desc, &walk); + + kernel_neon_begin(); + for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) { + aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr, + (u8 *)ctx->key_enc, rounds, blocks, walk.iv, + first); + err = blkcipher_walk_done(desc, &walk, 0); + } + kernel_neon_end(); + return err; +} + +static int cbc_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst, + struct scatterlist *src, unsigned int nbytes) +{ + struct crypto_aes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm); + int err, first, rounds = 6 + ctx->key_length / 4; + struct blkcipher_walk walk; + unsigned int blocks; + + desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; + blkcipher_walk_init(&walk, dst, src, nbytes); + err = blkcipher_walk_virt(desc, &walk); + + kernel_neon_begin(); + for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) { + aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr, + (u8 *)ctx->key_dec, rounds, blocks, walk.iv, + first); + err = blkcipher_walk_done(desc, &walk, 0); + } + kernel_neon_end(); + return err; +} + +static int ctr_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst, + struct scatterlist *src, unsigned int nbytes) +{ + struct crypto_aes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm); + int err, first, rounds = 6 + ctx->key_length / 4; + struct blkcipher_walk walk; + int blocks; + + desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; + blkcipher_walk_init(&walk, dst, src, nbytes); + err = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE); + + first = 1; + kernel_neon_begin(); + while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) { + aes_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr, + (u8 *)ctx->key_enc, rounds, blocks, walk.iv, + first); + first = 0; + nbytes -= blocks * AES_BLOCK_SIZE; + if (nbytes && nbytes == walk.nbytes % AES_BLOCK_SIZE) + break; + err = 
blkcipher_walk_done(desc, &walk, + walk.nbytes % AES_BLOCK_SIZE); + } + if (nbytes) { + u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE; + u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE; + u8 __aligned(8) tail[AES_BLOCK_SIZE]; + + /* + * Minimum alignment is 8 bytes, so if nbytes is <= 8, we need + * to tell aes_ctr_encrypt() to only read half a block. + */ + blocks = (nbytes <= 8) ? -1 : 1; + + aes_ctr_encrypt(tail, tsrc, (u8 *)ctx->key_enc, rounds, + blocks, walk.iv, first); + memcpy(tdst, tail, nbytes); + err = blkcipher_walk_done(desc, &walk, 0); + } + kernel_neon_end(); + + return err; +} + +static int xts_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst, + struct scatterlist *src, unsigned int nbytes) +{ + struct crypto_aes_xts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm); + int err, first, rounds = 6 + ctx->key1.key_length / 4; + struct blkcipher_walk walk; + unsigned int blocks; + + desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; + blkcipher_walk_init(&walk, dst, src, nbytes); + err = blkcipher_walk_virt(desc, &walk); + + kernel_neon_begin(); + for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) { + aes_xts_encrypt(walk.dst.virt.addr, walk.src.virt.addr, + (u8 *)ctx->key1.key_enc, rounds, blocks, + (u8 *)ctx->key2.key_enc, walk.iv, first); + err = blkcipher_walk_done(desc, &walk, 0); + } + kernel_neon_end(); + + return err; +} + +static int xts_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst, + struct scatterlist *src, unsigned int nbytes) +{ + struct crypto_aes_xts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm); + int err, first, rounds = 6 + ctx->key1.key_length / 4; + struct blkcipher_walk walk; + unsigned int blocks; + + desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; + blkcipher_walk_init(&walk, dst, src, nbytes); + err = blkcipher_walk_virt(desc, &walk); + + kernel_neon_begin(); + for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) { + aes_xts_decrypt(walk.dst.virt.addr, walk.src.virt.addr, + (u8 *)ctx->key1.key_dec, rounds, blocks, + (u8 *)ctx->key2.key_enc, walk.iv, first); + err = blkcipher_walk_done(desc, &walk, 0); + } + kernel_neon_end(); + + return err; +} + +static struct crypto_alg aes_algs[] = { { + .cra_name = "__ecb-aes-" MODE, + .cra_driver_name = "__driver-ecb-aes-" MODE, + .cra_priority = 0, + .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_alignmask = 7, + .cra_type = &crypto_blkcipher_type, + .cra_module = THIS_MODULE, + .cra_blkcipher = { + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = crypto_aes_set_key, + .encrypt = ecb_encrypt, + .decrypt = ecb_decrypt, + }, +}, { + .cra_name = "__cbc-aes-" MODE, + .cra_driver_name = "__driver-cbc-aes-" MODE, + .cra_priority = 0, + .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_alignmask = 7, + .cra_type = &crypto_blkcipher_type, + .cra_module = THIS_MODULE, + .cra_blkcipher = { + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = crypto_aes_set_key, + .encrypt = cbc_encrypt, + .decrypt = cbc_decrypt, + }, +}, { + .cra_name = "__ctr-aes-" MODE, + .cra_driver_name = "__driver-ctr-aes-" MODE, + .cra_priority = 0, + .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_alignmask = 7, + .cra_type = &crypto_blkcipher_type, + 
.cra_module = THIS_MODULE, + .cra_blkcipher = { + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = crypto_aes_set_key, + .encrypt = ctr_encrypt, + .decrypt = ctr_encrypt, + }, +}, { + .cra_name = "__xts-aes-" MODE, + .cra_driver_name = "__driver-xts-aes-" MODE, + .cra_priority = 0, + .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct crypto_aes_xts_ctx), + .cra_alignmask = 7, + .cra_type = &crypto_blkcipher_type, + .cra_module = THIS_MODULE, + .cra_blkcipher = { + .min_keysize = 2 * AES_MIN_KEY_SIZE, + .max_keysize = 2 * AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = xts_set_key, + .encrypt = xts_encrypt, + .decrypt = xts_decrypt, + }, +}, { + .cra_name = "ecb(aes)", + .cra_driver_name = "ecb-aes-" MODE, + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct async_helper_ctx), + .cra_alignmask = 7, + .cra_type = &crypto_ablkcipher_type, + .cra_module = THIS_MODULE, + .cra_init = ablk_init, + .cra_exit = ablk_exit, + .cra_ablkcipher = { + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = ablk_set_key, + .encrypt = ablk_encrypt, + .decrypt = ablk_decrypt, + } +}, { + .cra_name = "cbc(aes)", + .cra_driver_name = "cbc-aes-" MODE, + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct async_helper_ctx), + .cra_alignmask = 7, + .cra_type = &crypto_ablkcipher_type, + .cra_module = THIS_MODULE, + .cra_init = ablk_init, + .cra_exit = ablk_exit, + .cra_ablkcipher = { + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = ablk_set_key, + .encrypt = ablk_encrypt, + .decrypt = ablk_decrypt, + } +}, { + .cra_name = "ctr(aes)", + .cra_driver_name = "ctr-aes-" MODE, + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC, + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct async_helper_ctx), + .cra_alignmask = 7, + .cra_type = &crypto_ablkcipher_type, + .cra_module = THIS_MODULE, + .cra_init = ablk_init, + .cra_exit = ablk_exit, + .cra_ablkcipher = { + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = ablk_set_key, + .encrypt = ablk_encrypt, + .decrypt = ablk_decrypt, + } +}, { + .cra_name = "xts(aes)", + .cra_driver_name = "xts-aes-" MODE, + .cra_priority = PRIO, + .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct async_helper_ctx), + .cra_alignmask = 7, + .cra_type = &crypto_ablkcipher_type, + .cra_module = THIS_MODULE, + .cra_init = ablk_init, + .cra_exit = ablk_exit, + .cra_ablkcipher = { + .min_keysize = 2 * AES_MIN_KEY_SIZE, + .max_keysize = 2 * AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = ablk_set_key, + .encrypt = ablk_encrypt, + .decrypt = ablk_decrypt, + } +} }; + +static int __init aes_init(void) +{ + return crypto_register_algs(aes_algs, ARRAY_SIZE(aes_algs)); +} + +static void __exit aes_exit(void) +{ + crypto_unregister_algs(aes_algs, ARRAY_SIZE(aes_algs)); +} + +#ifdef USE_V8_CRYPTO_EXTENSIONS +module_cpu_feature_match(AES, aes_init); +#else +module_init(aes_init); +#endif +module_exit(aes_exit); diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-modes.S new file mode 100644 index 
000000000000..f6e372c528eb --- /dev/null +++ b/arch/arm64/crypto/aes-modes.S @@ -0,0 +1,532 @@ +/* + * linux/arch/arm64/crypto/aes-modes.S - chaining mode wrappers for AES + * + * Copyright (C) 2013 Linaro Ltd <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +/* included by aes-ce.S and aes-neon.S */ + + .text + .align 4 + +/* + * There are several ways to instantiate this code: + * - no interleave, all inline + * - 2-way interleave, 2x calls out of line (-DINTERLEAVE=2) + * - 2-way interleave, all inline (-DINTERLEAVE=2 -DINTERLEAVE_INLINE) + * - 4-way interleave, 4x calls out of line (-DINTERLEAVE=4) + * - 4-way interleave, all inline (-DINTERLEAVE=4 -DINTERLEAVE_INLINE) + * + * Macros imported by this code: + * - enc_prepare - setup NEON registers for encryption + * - dec_prepare - setup NEON registers for decryption + * - enc_switch_key - change to new key after having prepared for encryption + * - encrypt_block - encrypt a single block + * - decrypt block - decrypt a single block + * - encrypt_block2x - encrypt 2 blocks in parallel (if INTERLEAVE == 2) + * - decrypt_block2x - decrypt 2 blocks in parallel (if INTERLEAVE == 2) + * - encrypt_block4x - encrypt 4 blocks in parallel (if INTERLEAVE == 4) + * - decrypt_block4x - decrypt 4 blocks in parallel (if INTERLEAVE == 4) + */ + +#if defined(INTERLEAVE) && !defined(INTERLEAVE_INLINE) +#define FRAME_PUSH stp x29, x30, [sp,#-16]! ; mov x29, sp +#define FRAME_POP ldp x29, x30, [sp],#16 + +#if INTERLEAVE == 2 + +aes_encrypt_block2x: + encrypt_block2x v0, v1, w3, x2, x6, w7 + ret +ENDPROC(aes_encrypt_block2x) + +aes_decrypt_block2x: + decrypt_block2x v0, v1, w3, x2, x6, w7 + ret +ENDPROC(aes_decrypt_block2x) + +#elif INTERLEAVE == 4 + +aes_encrypt_block4x: + encrypt_block4x v0, v1, v2, v3, w3, x2, x6, w7 + ret +ENDPROC(aes_encrypt_block4x) + +aes_decrypt_block4x: + decrypt_block4x v0, v1, v2, v3, w3, x2, x6, w7 + ret +ENDPROC(aes_decrypt_block4x) + +#else +#error INTERLEAVE should equal 2 or 4 +#endif + + .macro do_encrypt_block2x + bl aes_encrypt_block2x + .endm + + .macro do_decrypt_block2x + bl aes_decrypt_block2x + .endm + + .macro do_encrypt_block4x + bl aes_encrypt_block4x + .endm + + .macro do_decrypt_block4x + bl aes_decrypt_block4x + .endm + +#else +#define FRAME_PUSH +#define FRAME_POP + + .macro do_encrypt_block2x + encrypt_block2x v0, v1, w3, x2, x6, w7 + .endm + + .macro do_decrypt_block2x + decrypt_block2x v0, v1, w3, x2, x6, w7 + .endm + + .macro do_encrypt_block4x + encrypt_block4x v0, v1, v2, v3, w3, x2, x6, w7 + .endm + + .macro do_decrypt_block4x + decrypt_block4x v0, v1, v2, v3, w3, x2, x6, w7 + .endm + +#endif + + /* + * aes_ecb_encrypt(u8 out[], u8 const in[], u8 const rk[], int rounds, + * int blocks, int first) + * aes_ecb_decrypt(u8 out[], u8 const in[], u8 const rk[], int rounds, + * int blocks, int first) + */ + +AES_ENTRY(aes_ecb_encrypt) + FRAME_PUSH + cbz w5, .LecbencloopNx + + enc_prepare w3, x2, x5 + +.LecbencloopNx: +#if INTERLEAVE >= 2 + subs w4, w4, #INTERLEAVE + bmi .Lecbenc1x +#if INTERLEAVE == 2 + ld1 {v0.16b-v1.16b}, [x1], #32 /* get 2 pt blocks */ + do_encrypt_block2x + st1 {v0.16b-v1.16b}, [x0], #32 +#else + ld1 {v0.16b-v3.16b}, [x1], #64 /* get 4 pt blocks */ + do_encrypt_block4x + st1 {v0.16b-v3.16b}, [x0], #64 +#endif + b .LecbencloopNx +.Lecbenc1x: + adds w4, w4, #INTERLEAVE + beq .Lecbencout +#endif +.Lecbencloop: + ld1 
{v0.16b}, [x1], #16 /* get next pt block */ + encrypt_block v0, w3, x2, x5, w6 + st1 {v0.16b}, [x0], #16 + subs w4, w4, #1 + bne .Lecbencloop +.Lecbencout: + FRAME_POP + ret +AES_ENDPROC(aes_ecb_encrypt) + + +AES_ENTRY(aes_ecb_decrypt) + FRAME_PUSH + cbz w5, .LecbdecloopNx + + dec_prepare w3, x2, x5 + +.LecbdecloopNx: +#if INTERLEAVE >= 2 + subs w4, w4, #INTERLEAVE + bmi .Lecbdec1x +#if INTERLEAVE == 2 + ld1 {v0.16b-v1.16b}, [x1], #32 /* get 2 ct blocks */ + do_decrypt_block2x + st1 {v0.16b-v1.16b}, [x0], #32 +#else + ld1 {v0.16b-v3.16b}, [x1], #64 /* get 4 ct blocks */ + do_decrypt_block4x + st1 {v0.16b-v3.16b}, [x0], #64 +#endif + b .LecbdecloopNx +.Lecbdec1x: + adds w4, w4, #INTERLEAVE + beq .Lecbdecout +#endif +.Lecbdecloop: + ld1 {v0.16b}, [x1], #16 /* get next ct block */ + decrypt_block v0, w3, x2, x5, w6 + st1 {v0.16b}, [x0], #16 + subs w4, w4, #1 + bne .Lecbdecloop +.Lecbdecout: + FRAME_POP + ret +AES_ENDPROC(aes_ecb_decrypt) + + + /* + * aes_cbc_encrypt(u8 out[], u8 const in[], u8 const rk[], int rounds, + * int blocks, u8 iv[], int first) + * aes_cbc_decrypt(u8 out[], u8 const in[], u8 const rk[], int rounds, + * int blocks, u8 iv[], int first) + */ + +AES_ENTRY(aes_cbc_encrypt) + cbz w6, .Lcbcencloop + + ld1 {v0.16b}, [x5] /* get iv */ + enc_prepare w3, x2, x5 + +.Lcbcencloop: + ld1 {v1.16b}, [x1], #16 /* get next pt block */ + eor v0.16b, v0.16b, v1.16b /* ..and xor with iv */ + encrypt_block v0, w3, x2, x5, w6 + st1 {v0.16b}, [x0], #16 + subs w4, w4, #1 + bne .Lcbcencloop + ret +AES_ENDPROC(aes_cbc_encrypt) + + +AES_ENTRY(aes_cbc_decrypt) + FRAME_PUSH + cbz w6, .LcbcdecloopNx + + ld1 {v7.16b}, [x5] /* get iv */ + dec_prepare w3, x2, x5 + +.LcbcdecloopNx: +#if INTERLEAVE >= 2 + subs w4, w4, #INTERLEAVE + bmi .Lcbcdec1x +#if INTERLEAVE == 2 + ld1 {v0.16b-v1.16b}, [x1], #32 /* get 2 ct blocks */ + mov v2.16b, v0.16b + mov v3.16b, v1.16b + do_decrypt_block2x + eor v0.16b, v0.16b, v7.16b + eor v1.16b, v1.16b, v2.16b + mov v7.16b, v3.16b + st1 {v0.16b-v1.16b}, [x0], #32 +#else + ld1 {v0.16b-v3.16b}, [x1], #64 /* get 4 ct blocks */ + mov v4.16b, v0.16b + mov v5.16b, v1.16b + mov v6.16b, v2.16b + do_decrypt_block4x + sub x1, x1, #16 + eor v0.16b, v0.16b, v7.16b + eor v1.16b, v1.16b, v4.16b + ld1 {v7.16b}, [x1], #16 /* reload 1 ct block */ + eor v2.16b, v2.16b, v5.16b + eor v3.16b, v3.16b, v6.16b + st1 {v0.16b-v3.16b}, [x0], #64 +#endif + b .LcbcdecloopNx +.Lcbcdec1x: + adds w4, w4, #INTERLEAVE + beq .Lcbcdecout +#endif +.Lcbcdecloop: + ld1 {v1.16b}, [x1], #16 /* get next ct block */ + mov v0.16b, v1.16b /* ...and copy to v0 */ + decrypt_block v0, w3, x2, x5, w6 + eor v0.16b, v0.16b, v7.16b /* xor with iv => pt */ + mov v7.16b, v1.16b /* ct is next iv */ + st1 {v0.16b}, [x0], #16 + subs w4, w4, #1 + bne .Lcbcdecloop +.Lcbcdecout: + FRAME_POP + ret +AES_ENDPROC(aes_cbc_decrypt) + + + /* + * aes_ctr_encrypt(u8 out[], u8 const in[], u8 const rk[], int rounds, + * int blocks, u8 ctr[], int first) + */ + +AES_ENTRY(aes_ctr_encrypt) + FRAME_PUSH + cbnz w6, .Lctrfirst /* 1st time around? */ + umov x5, v4.d[1] /* keep swabbed ctr in reg */ + rev x5, x5 +#if INTERLEAVE >= 2 + cmn w5, w4 /* 32 bit overflow? */ + bcs .Lctrinc + add x5, x5, #1 /* increment BE ctr */ + b .LctrincNx +#else + b .Lctrinc +#endif +.Lctrfirst: + enc_prepare w3, x2, x6 + ld1 {v4.16b}, [x5] + umov x5, v4.d[1] /* keep swabbed ctr in reg */ + rev x5, x5 +#if INTERLEAVE >= 2 + cmn w5, w4 /* 32 bit overflow? 
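In the CTR path that starts above, the low 64 bits of the big-endian counter are kept byte-swapped in a general-purpose register so they can be bumped with ordinary integer adds; only when that half wraps does the code (a few lines further on) reload the upper half and propagate the carry. A plain C model of the equivalent 128-bit big-endian increment (userspace sketch with stdint types; the function name is not from the driver):

        #include <stdint.h>

        /*
         * Increment a 16-byte big-endian CTR block.  Mirrors the assembly's
         * strategy: work on a byte-swapped low half, and touch the upper half
         * only when the low half overflows.
         */
        static void ctr_increment_be128(uint8_t ctr[16])
        {
                uint64_t hi = 0, lo = 0;
                int i;

                for (i = 0; i < 8; i++)                 /* load big-endian halves */
                        hi = (hi << 8) | ctr[i];
                for (i = 8; i < 16; i++)
                        lo = (lo << 8) | ctr[i];

                if (++lo == 0)                          /* carry into the upper half */
                        hi++;

                for (i = 7; i >= 0; i--, hi >>= 8)      /* store back, big endian */
                        ctr[i] = (uint8_t)hi;
                for (i = 15; i >= 8; i--, lo >>= 8)
                        ctr[i] = (uint8_t)lo;
        }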
*/ + bcs .Lctrloop +.LctrloopNx: + subs w4, w4, #INTERLEAVE + bmi .Lctr1x +#if INTERLEAVE == 2 + mov v0.8b, v4.8b + mov v1.8b, v4.8b + rev x7, x5 + add x5, x5, #1 + ins v0.d[1], x7 + rev x7, x5 + add x5, x5, #1 + ins v1.d[1], x7 + ld1 {v2.16b-v3.16b}, [x1], #32 /* get 2 input blocks */ + do_encrypt_block2x + eor v0.16b, v0.16b, v2.16b + eor v1.16b, v1.16b, v3.16b + st1 {v0.16b-v1.16b}, [x0], #32 +#else + ldr q8, =0x30000000200000001 /* addends 1,2,3[,0] */ + dup v7.4s, w5 + mov v0.16b, v4.16b + add v7.4s, v7.4s, v8.4s + mov v1.16b, v4.16b + rev32 v8.16b, v7.16b + mov v2.16b, v4.16b + mov v3.16b, v4.16b + mov v1.s[3], v8.s[0] + mov v2.s[3], v8.s[1] + mov v3.s[3], v8.s[2] + ld1 {v5.16b-v7.16b}, [x1], #48 /* get 3 input blocks */ + do_encrypt_block4x + eor v0.16b, v5.16b, v0.16b + ld1 {v5.16b}, [x1], #16 /* get 1 input block */ + eor v1.16b, v6.16b, v1.16b + eor v2.16b, v7.16b, v2.16b + eor v3.16b, v5.16b, v3.16b + st1 {v0.16b-v3.16b}, [x0], #64 + add x5, x5, #INTERLEAVE +#endif + cbz w4, .LctroutNx +.LctrincNx: + rev x7, x5 + ins v4.d[1], x7 + b .LctrloopNx +.LctroutNx: + sub x5, x5, #1 + rev x7, x5 + ins v4.d[1], x7 + b .Lctrout +.Lctr1x: + adds w4, w4, #INTERLEAVE + beq .Lctrout +#endif +.Lctrloop: + mov v0.16b, v4.16b + encrypt_block v0, w3, x2, x6, w7 + subs w4, w4, #1 + bmi .Lctrhalfblock /* blocks < 0 means 1/2 block */ + ld1 {v3.16b}, [x1], #16 + eor v3.16b, v0.16b, v3.16b + st1 {v3.16b}, [x0], #16 + beq .Lctrout +.Lctrinc: + adds x5, x5, #1 /* increment BE ctr */ + rev x7, x5 + ins v4.d[1], x7 + bcc .Lctrloop /* no overflow? */ + umov x7, v4.d[0] /* load upper word of ctr */ + rev x7, x7 /* ... to handle the carry */ + add x7, x7, #1 + rev x7, x7 + ins v4.d[0], x7 + b .Lctrloop +.Lctrhalfblock: + ld1 {v3.8b}, [x1] + eor v3.8b, v0.8b, v3.8b + st1 {v3.8b}, [x0] +.Lctrout: + FRAME_POP + ret +AES_ENDPROC(aes_ctr_encrypt) + .ltorg + + + /* + * aes_xts_decrypt(u8 out[], u8 const in[], u8 const rk1[], int rounds, + * int blocks, u8 const rk2[], u8 iv[], int first) + * aes_xts_decrypt(u8 out[], u8 const in[], u8 const rk1[], int rounds, + * int blocks, u8 const rk2[], u8 iv[], int first) + */ + + .macro next_tweak, out, in, const, tmp + sshr \tmp\().2d, \in\().2d, #63 + and \tmp\().16b, \tmp\().16b, \const\().16b + add \out\().2d, \in\().2d, \in\().2d + ext \tmp\().16b, \tmp\().16b, \tmp\().16b, #8 + eor \out\().16b, \out\().16b, \tmp\().16b + .endm + +.Lxts_mul_x: + .word 1, 0, 0x87, 0 + +AES_ENTRY(aes_xts_encrypt) + FRAME_PUSH + cbz w7, .LxtsencloopNx + + ld1 {v4.16b}, [x6] + enc_prepare w3, x5, x6 + encrypt_block v4, w3, x5, x6, w7 /* first tweak */ + enc_switch_key w3, x2, x6 + ldr q7, .Lxts_mul_x + b .LxtsencNx + +.LxtsencloopNx: + ldr q7, .Lxts_mul_x + next_tweak v4, v4, v7, v8 +.LxtsencNx: +#if INTERLEAVE >= 2 + subs w4, w4, #INTERLEAVE + bmi .Lxtsenc1x +#if INTERLEAVE == 2 + ld1 {v0.16b-v1.16b}, [x1], #32 /* get 2 pt blocks */ + next_tweak v5, v4, v7, v8 + eor v0.16b, v0.16b, v4.16b + eor v1.16b, v1.16b, v5.16b + do_encrypt_block2x + eor v0.16b, v0.16b, v4.16b + eor v1.16b, v1.16b, v5.16b + st1 {v0.16b-v1.16b}, [x0], #32 + cbz w4, .LxtsencoutNx + next_tweak v4, v5, v7, v8 + b .LxtsencNx +.LxtsencoutNx: + mov v4.16b, v5.16b + b .Lxtsencout +#else + ld1 {v0.16b-v3.16b}, [x1], #64 /* get 4 pt blocks */ + next_tweak v5, v4, v7, v8 + eor v0.16b, v0.16b, v4.16b + next_tweak v6, v5, v7, v8 + eor v1.16b, v1.16b, v5.16b + eor v2.16b, v2.16b, v6.16b + next_tweak v7, v6, v7, v8 + eor v3.16b, v3.16b, v7.16b + do_encrypt_block4x + eor v3.16b, v3.16b, v7.16b + eor v0.16b, v0.16b, v4.16b + eor 
v1.16b, v1.16b, v5.16b + eor v2.16b, v2.16b, v6.16b + st1 {v0.16b-v3.16b}, [x0], #64 + mov v4.16b, v7.16b + cbz w4, .Lxtsencout + b .LxtsencloopNx +#endif +.Lxtsenc1x: + adds w4, w4, #INTERLEAVE + beq .Lxtsencout +#endif +.Lxtsencloop: + ld1 {v1.16b}, [x1], #16 + eor v0.16b, v1.16b, v4.16b + encrypt_block v0, w3, x2, x6, w7 + eor v0.16b, v0.16b, v4.16b + st1 {v0.16b}, [x0], #16 + subs w4, w4, #1 + beq .Lxtsencout + next_tweak v4, v4, v7, v8 + b .Lxtsencloop +.Lxtsencout: + FRAME_POP + ret +AES_ENDPROC(aes_xts_encrypt) + + +AES_ENTRY(aes_xts_decrypt) + FRAME_PUSH + cbz w7, .LxtsdecloopNx + + ld1 {v4.16b}, [x6] + enc_prepare w3, x5, x6 + encrypt_block v4, w3, x5, x6, w7 /* first tweak */ + dec_prepare w3, x2, x6 + ldr q7, .Lxts_mul_x + b .LxtsdecNx + +.LxtsdecloopNx: + ldr q7, .Lxts_mul_x + next_tweak v4, v4, v7, v8 +.LxtsdecNx: +#if INTERLEAVE >= 2 + subs w4, w4, #INTERLEAVE + bmi .Lxtsdec1x +#if INTERLEAVE == 2 + ld1 {v0.16b-v1.16b}, [x1], #32 /* get 2 ct blocks */ + next_tweak v5, v4, v7, v8 + eor v0.16b, v0.16b, v4.16b + eor v1.16b, v1.16b, v5.16b + do_decrypt_block2x + eor v0.16b, v0.16b, v4.16b + eor v1.16b, v1.16b, v5.16b + st1 {v0.16b-v1.16b}, [x0], #32 + cbz w4, .LxtsdecoutNx + next_tweak v4, v5, v7, v8 + b .LxtsdecNx +.LxtsdecoutNx: + mov v4.16b, v5.16b + b .Lxtsdecout +#else + ld1 {v0.16b-v3.16b}, [x1], #64 /* get 4 ct blocks */ + next_tweak v5, v4, v7, v8 + eor v0.16b, v0.16b, v4.16b + next_tweak v6, v5, v7, v8 + eor v1.16b, v1.16b, v5.16b + eor v2.16b, v2.16b, v6.16b + next_tweak v7, v6, v7, v8 + eor v3.16b, v3.16b, v7.16b + do_decrypt_block4x + eor v3.16b, v3.16b, v7.16b + eor v0.16b, v0.16b, v4.16b + eor v1.16b, v1.16b, v5.16b + eor v2.16b, v2.16b, v6.16b + st1 {v0.16b-v3.16b}, [x0], #64 + mov v4.16b, v7.16b + cbz w4, .Lxtsdecout + b .LxtsdecloopNx +#endif +.Lxtsdec1x: + adds w4, w4, #INTERLEAVE + beq .Lxtsdecout +#endif +.Lxtsdecloop: + ld1 {v1.16b}, [x1], #16 + eor v0.16b, v1.16b, v4.16b + decrypt_block v0, w3, x2, x6, w7 + eor v0.16b, v0.16b, v4.16b + st1 {v0.16b}, [x0], #16 + subs w4, w4, #1 + beq .Lxtsdecout + next_tweak v4, v4, v7, v8 + b .Lxtsdecloop +.Lxtsdecout: + FRAME_POP + ret +AES_ENDPROC(aes_xts_decrypt) diff --git a/arch/arm64/crypto/aes-neon.S b/arch/arm64/crypto/aes-neon.S new file mode 100644 index 000000000000..b93170e1cc93 --- /dev/null +++ b/arch/arm64/crypto/aes-neon.S @@ -0,0 +1,382 @@ +/* + * linux/arch/arm64/crypto/aes-neon.S - AES cipher for ARMv8 NEON + * + * Copyright (C) 2013 Linaro Ltd <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
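The next_tweak macro used by the XTS routines above multiplies the 128-bit tweak by x in GF(2^128), folding the reduction polynomial back in via the 0x87 constant loaded from .Lxts_mul_x. The same update in scalar C, with the tweak held as two little-endian 64-bit halves (userspace sketch, stdint types; the function name is illustrative):

        #include <stdint.h>

        /*
         * Multiply the XTS tweak by x in GF(2^128).  'lo' holds tweak bytes
         * 0-7 and 'hi' bytes 8-15, both little endian; when the top bit shifts
         * out, the reduction polynomial comes back in as 0x87.
         */
        static void xts_next_tweak(uint64_t *lo, uint64_t *hi)
        {
                uint64_t fold  = (*hi >> 63) ? 0x87 : 0;  /* reduction term */
                uint64_t carry = *lo >> 63;               /* bit moving into hi */

                *hi = (*hi << 1) | carry;
                *lo = (*lo << 1) ^ fold;
        }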
+ */ + +#include <linux/linkage.h> + +#define AES_ENTRY(func) ENTRY(neon_ ## func) +#define AES_ENDPROC(func) ENDPROC(neon_ ## func) + + /* multiply by polynomial 'x' in GF(2^8) */ + .macro mul_by_x, out, in, temp, const + sshr \temp, \in, #7 + add \out, \in, \in + and \temp, \temp, \const + eor \out, \out, \temp + .endm + + /* preload the entire Sbox */ + .macro prepare, sbox, shiftrows, temp + adr \temp, \sbox + movi v12.16b, #0x40 + ldr q13, \shiftrows + movi v14.16b, #0x1b + ld1 {v16.16b-v19.16b}, [\temp], #64 + ld1 {v20.16b-v23.16b}, [\temp], #64 + ld1 {v24.16b-v27.16b}, [\temp], #64 + ld1 {v28.16b-v31.16b}, [\temp] + .endm + + /* do preload for encryption */ + .macro enc_prepare, ignore0, ignore1, temp + prepare .LForward_Sbox, .LForward_ShiftRows, \temp + .endm + + .macro enc_switch_key, ignore0, ignore1, temp + /* do nothing */ + .endm + + /* do preload for decryption */ + .macro dec_prepare, ignore0, ignore1, temp + prepare .LReverse_Sbox, .LReverse_ShiftRows, \temp + .endm + + /* apply SubBytes transformation using the the preloaded Sbox */ + .macro sub_bytes, in + sub v9.16b, \in\().16b, v12.16b + tbl \in\().16b, {v16.16b-v19.16b}, \in\().16b + sub v10.16b, v9.16b, v12.16b + tbx \in\().16b, {v20.16b-v23.16b}, v9.16b + sub v11.16b, v10.16b, v12.16b + tbx \in\().16b, {v24.16b-v27.16b}, v10.16b + tbx \in\().16b, {v28.16b-v31.16b}, v11.16b + .endm + + /* apply MixColumns transformation */ + .macro mix_columns, in + mul_by_x v10.16b, \in\().16b, v9.16b, v14.16b + rev32 v8.8h, \in\().8h + eor \in\().16b, v10.16b, \in\().16b + shl v9.4s, v8.4s, #24 + shl v11.4s, \in\().4s, #24 + sri v9.4s, v8.4s, #8 + sri v11.4s, \in\().4s, #8 + eor v9.16b, v9.16b, v8.16b + eor v10.16b, v10.16b, v9.16b + eor \in\().16b, v10.16b, v11.16b + .endm + + /* Inverse MixColumns: pre-multiply by { 5, 0, 4, 0 } */ + .macro inv_mix_columns, in + mul_by_x v11.16b, \in\().16b, v10.16b, v14.16b + mul_by_x v11.16b, v11.16b, v10.16b, v14.16b + eor \in\().16b, \in\().16b, v11.16b + rev32 v11.8h, v11.8h + eor \in\().16b, \in\().16b, v11.16b + mix_columns \in + .endm + + .macro do_block, enc, in, rounds, rk, rkp, i + ld1 {v15.16b}, [\rk] + add \rkp, \rk, #16 + mov \i, \rounds +1111: eor \in\().16b, \in\().16b, v15.16b /* ^round key */ + tbl \in\().16b, {\in\().16b}, v13.16b /* ShiftRows */ + sub_bytes \in + ld1 {v15.16b}, [\rkp], #16 + subs \i, \i, #1 + beq 2222f + .if \enc == 1 + mix_columns \in + .else + inv_mix_columns \in + .endif + b 1111b +2222: eor \in\().16b, \in\().16b, v15.16b /* ^round key */ + .endm + + .macro encrypt_block, in, rounds, rk, rkp, i + do_block 1, \in, \rounds, \rk, \rkp, \i + .endm + + .macro decrypt_block, in, rounds, rk, rkp, i + do_block 0, \in, \rounds, \rk, \rkp, \i + .endm + + /* + * Interleaved versions: functionally equivalent to the + * ones above, but applied to 2 or 4 AES states in parallel. 
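The mul_by_x macro at the top of aes-neon.S is the textbook AES "xtime" operation: each byte is doubled in GF(2^8) and reduced by 0x1b whenever the top bit falls out, and mix_columns / inv_mix_columns are then built from it. For reference, the byte-at-a-time formulation from FIPS-197 (this is the function the vector code computes, not a transcription of its rev32/shl/sri sequence):

        #include <stdint.h>

        /* Multiply one byte by x in GF(2^8), reducing by x^8 + x^4 + x^3 + x + 1. */
        static uint8_t xtime(uint8_t b)
        {
                return (uint8_t)((b << 1) ^ ((b & 0x80) ? 0x1b : 0));
        }

        /* MixColumns applied to a single 4-byte column, as in FIPS-197. */
        static void mix_column(uint8_t c[4])
        {
                uint8_t a0 = c[0], a1 = c[1], a2 = c[2], a3 = c[3];
                uint8_t all = a0 ^ a1 ^ a2 ^ a3;

                c[0] = a0 ^ all ^ xtime(a0 ^ a1);
                c[1] = a1 ^ all ^ xtime(a1 ^ a2);
                c[2] = a2 ^ all ^ xtime(a2 ^ a3);
                c[3] = a3 ^ all ^ xtime(a3 ^ a0);
        }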
+ */ + + .macro sub_bytes_2x, in0, in1 + sub v8.16b, \in0\().16b, v12.16b + sub v9.16b, \in1\().16b, v12.16b + tbl \in0\().16b, {v16.16b-v19.16b}, \in0\().16b + tbl \in1\().16b, {v16.16b-v19.16b}, \in1\().16b + sub v10.16b, v8.16b, v12.16b + sub v11.16b, v9.16b, v12.16b + tbx \in0\().16b, {v20.16b-v23.16b}, v8.16b + tbx \in1\().16b, {v20.16b-v23.16b}, v9.16b + sub v8.16b, v10.16b, v12.16b + sub v9.16b, v11.16b, v12.16b + tbx \in0\().16b, {v24.16b-v27.16b}, v10.16b + tbx \in1\().16b, {v24.16b-v27.16b}, v11.16b + tbx \in0\().16b, {v28.16b-v31.16b}, v8.16b + tbx \in1\().16b, {v28.16b-v31.16b}, v9.16b + .endm + + .macro sub_bytes_4x, in0, in1, in2, in3 + sub v8.16b, \in0\().16b, v12.16b + tbl \in0\().16b, {v16.16b-v19.16b}, \in0\().16b + sub v9.16b, \in1\().16b, v12.16b + tbl \in1\().16b, {v16.16b-v19.16b}, \in1\().16b + sub v10.16b, \in2\().16b, v12.16b + tbl \in2\().16b, {v16.16b-v19.16b}, \in2\().16b + sub v11.16b, \in3\().16b, v12.16b + tbl \in3\().16b, {v16.16b-v19.16b}, \in3\().16b + tbx \in0\().16b, {v20.16b-v23.16b}, v8.16b + tbx \in1\().16b, {v20.16b-v23.16b}, v9.16b + sub v8.16b, v8.16b, v12.16b + tbx \in2\().16b, {v20.16b-v23.16b}, v10.16b + sub v9.16b, v9.16b, v12.16b + tbx \in3\().16b, {v20.16b-v23.16b}, v11.16b + sub v10.16b, v10.16b, v12.16b + tbx \in0\().16b, {v24.16b-v27.16b}, v8.16b + sub v11.16b, v11.16b, v12.16b + tbx \in1\().16b, {v24.16b-v27.16b}, v9.16b + sub v8.16b, v8.16b, v12.16b + tbx \in2\().16b, {v24.16b-v27.16b}, v10.16b + sub v9.16b, v9.16b, v12.16b + tbx \in3\().16b, {v24.16b-v27.16b}, v11.16b + sub v10.16b, v10.16b, v12.16b + tbx \in0\().16b, {v28.16b-v31.16b}, v8.16b + sub v11.16b, v11.16b, v12.16b + tbx \in1\().16b, {v28.16b-v31.16b}, v9.16b + tbx \in2\().16b, {v28.16b-v31.16b}, v10.16b + tbx \in3\().16b, {v28.16b-v31.16b}, v11.16b + .endm + + .macro mul_by_x_2x, out0, out1, in0, in1, tmp0, tmp1, const + sshr \tmp0\().16b, \in0\().16b, #7 + add \out0\().16b, \in0\().16b, \in0\().16b + sshr \tmp1\().16b, \in1\().16b, #7 + and \tmp0\().16b, \tmp0\().16b, \const\().16b + add \out1\().16b, \in1\().16b, \in1\().16b + and \tmp1\().16b, \tmp1\().16b, \const\().16b + eor \out0\().16b, \out0\().16b, \tmp0\().16b + eor \out1\().16b, \out1\().16b, \tmp1\().16b + .endm + + .macro mix_columns_2x, in0, in1 + mul_by_x_2x v8, v9, \in0, \in1, v10, v11, v14 + rev32 v10.8h, \in0\().8h + rev32 v11.8h, \in1\().8h + eor \in0\().16b, v8.16b, \in0\().16b + eor \in1\().16b, v9.16b, \in1\().16b + shl v12.4s, v10.4s, #24 + shl v13.4s, v11.4s, #24 + eor v8.16b, v8.16b, v10.16b + sri v12.4s, v10.4s, #8 + shl v10.4s, \in0\().4s, #24 + eor v9.16b, v9.16b, v11.16b + sri v13.4s, v11.4s, #8 + shl v11.4s, \in1\().4s, #24 + sri v10.4s, \in0\().4s, #8 + eor \in0\().16b, v8.16b, v12.16b + sri v11.4s, \in1\().4s, #8 + eor \in1\().16b, v9.16b, v13.16b + eor \in0\().16b, v10.16b, \in0\().16b + eor \in1\().16b, v11.16b, \in1\().16b + .endm + + .macro inv_mix_cols_2x, in0, in1 + mul_by_x_2x v8, v9, \in0, \in1, v10, v11, v14 + mul_by_x_2x v8, v9, v8, v9, v10, v11, v14 + eor \in0\().16b, \in0\().16b, v8.16b + eor \in1\().16b, \in1\().16b, v9.16b + rev32 v8.8h, v8.8h + rev32 v9.8h, v9.8h + eor \in0\().16b, \in0\().16b, v8.16b + eor \in1\().16b, \in1\().16b, v9.16b + mix_columns_2x \in0, \in1 + .endm + + .macro inv_mix_cols_4x, in0, in1, in2, in3 + mul_by_x_2x v8, v9, \in0, \in1, v10, v11, v14 + mul_by_x_2x v10, v11, \in2, \in3, v12, v13, v14 + mul_by_x_2x v8, v9, v8, v9, v12, v13, v14 + mul_by_x_2x v10, v11, v10, v11, v12, v13, v14 + eor \in0\().16b, \in0\().16b, v8.16b + eor \in1\().16b, \in1\().16b, 
v9.16b + eor \in2\().16b, \in2\().16b, v10.16b + eor \in3\().16b, \in3\().16b, v11.16b + rev32 v8.8h, v8.8h + rev32 v9.8h, v9.8h + rev32 v10.8h, v10.8h + rev32 v11.8h, v11.8h + eor \in0\().16b, \in0\().16b, v8.16b + eor \in1\().16b, \in1\().16b, v9.16b + eor \in2\().16b, \in2\().16b, v10.16b + eor \in3\().16b, \in3\().16b, v11.16b + mix_columns_2x \in0, \in1 + mix_columns_2x \in2, \in3 + .endm + + .macro do_block_2x, enc, in0, in1 rounds, rk, rkp, i + ld1 {v15.16b}, [\rk] + add \rkp, \rk, #16 + mov \i, \rounds +1111: eor \in0\().16b, \in0\().16b, v15.16b /* ^round key */ + eor \in1\().16b, \in1\().16b, v15.16b /* ^round key */ + sub_bytes_2x \in0, \in1 + tbl \in0\().16b, {\in0\().16b}, v13.16b /* ShiftRows */ + tbl \in1\().16b, {\in1\().16b}, v13.16b /* ShiftRows */ + ld1 {v15.16b}, [\rkp], #16 + subs \i, \i, #1 + beq 2222f + .if \enc == 1 + mix_columns_2x \in0, \in1 + ldr q13, .LForward_ShiftRows + .else + inv_mix_cols_2x \in0, \in1 + ldr q13, .LReverse_ShiftRows + .endif + movi v12.16b, #0x40 + b 1111b +2222: eor \in0\().16b, \in0\().16b, v15.16b /* ^round key */ + eor \in1\().16b, \in1\().16b, v15.16b /* ^round key */ + .endm + + .macro do_block_4x, enc, in0, in1, in2, in3, rounds, rk, rkp, i + ld1 {v15.16b}, [\rk] + add \rkp, \rk, #16 + mov \i, \rounds +1111: eor \in0\().16b, \in0\().16b, v15.16b /* ^round key */ + eor \in1\().16b, \in1\().16b, v15.16b /* ^round key */ + eor \in2\().16b, \in2\().16b, v15.16b /* ^round key */ + eor \in3\().16b, \in3\().16b, v15.16b /* ^round key */ + sub_bytes_4x \in0, \in1, \in2, \in3 + tbl \in0\().16b, {\in0\().16b}, v13.16b /* ShiftRows */ + tbl \in1\().16b, {\in1\().16b}, v13.16b /* ShiftRows */ + tbl \in2\().16b, {\in2\().16b}, v13.16b /* ShiftRows */ + tbl \in3\().16b, {\in3\().16b}, v13.16b /* ShiftRows */ + ld1 {v15.16b}, [\rkp], #16 + subs \i, \i, #1 + beq 2222f + .if \enc == 1 + mix_columns_2x \in0, \in1 + mix_columns_2x \in2, \in3 + ldr q13, .LForward_ShiftRows + .else + inv_mix_cols_4x \in0, \in1, \in2, \in3 + ldr q13, .LReverse_ShiftRows + .endif + movi v12.16b, #0x40 + b 1111b +2222: eor \in0\().16b, \in0\().16b, v15.16b /* ^round key */ + eor \in1\().16b, \in1\().16b, v15.16b /* ^round key */ + eor \in2\().16b, \in2\().16b, v15.16b /* ^round key */ + eor \in3\().16b, \in3\().16b, v15.16b /* ^round key */ + .endm + + .macro encrypt_block2x, in0, in1, rounds, rk, rkp, i + do_block_2x 1, \in0, \in1, \rounds, \rk, \rkp, \i + .endm + + .macro decrypt_block2x, in0, in1, rounds, rk, rkp, i + do_block_2x 0, \in0, \in1, \rounds, \rk, \rkp, \i + .endm + + .macro encrypt_block4x, in0, in1, in2, in3, rounds, rk, rkp, i + do_block_4x 1, \in0, \in1, \in2, \in3, \rounds, \rk, \rkp, \i + .endm + + .macro decrypt_block4x, in0, in1, in2, in3, rounds, rk, rkp, i + do_block_4x 0, \in0, \in1, \in2, \in3, \rounds, \rk, \rkp, \i + .endm + +#include "aes-modes.S" + + .text + .align 4 +.LForward_ShiftRows: + .byte 0x0, 0x5, 0xa, 0xf, 0x4, 0x9, 0xe, 0x3 + .byte 0x8, 0xd, 0x2, 0x7, 0xc, 0x1, 0x6, 0xb + +.LReverse_ShiftRows: + .byte 0x0, 0xd, 0xa, 0x7, 0x4, 0x1, 0xe, 0xb + .byte 0x8, 0x5, 0x2, 0xf, 0xc, 0x9, 0x6, 0x3 + +.LForward_Sbox: + .byte 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5 + .byte 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76 + .byte 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0 + .byte 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0 + .byte 0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc + .byte 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15 + .byte 0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a + .byte 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 
0x75 + .byte 0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0 + .byte 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84 + .byte 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b + .byte 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf + .byte 0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85 + .byte 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8 + .byte 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5 + .byte 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2 + .byte 0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17 + .byte 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73 + .byte 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88 + .byte 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb + .byte 0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c + .byte 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79 + .byte 0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9 + .byte 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08 + .byte 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6 + .byte 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a + .byte 0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e + .byte 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e + .byte 0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94 + .byte 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf + .byte 0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68 + .byte 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16 + +.LReverse_Sbox: + .byte 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38 + .byte 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb + .byte 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87 + .byte 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb + .byte 0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d + .byte 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e + .byte 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2 + .byte 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25 + .byte 0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16 + .byte 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92 + .byte 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda + .byte 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84 + .byte 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a + .byte 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06 + .byte 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02 + .byte 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b + .byte 0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea + .byte 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73 + .byte 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85 + .byte 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e + .byte 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89 + .byte 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b + .byte 0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20 + .byte 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4 + .byte 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31 + .byte 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f + .byte 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d + .byte 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef + .byte 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0 + .byte 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61 + .byte 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26 + .byte 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S new file mode 100644 index 000000000000..b9e6eaf41c9b --- /dev/null +++ b/arch/arm64/crypto/ghash-ce-core.S @@ -0,0 +1,95 @@ +/* + * Accelerated GHASH implementation with ARMv8 PMULL instructions. + * + * Copyright (C) 2014 Linaro Ltd. <ard.biesheuvel@linaro.org> + * + * Based on arch/x86/crypto/ghash-pmullni-intel_asm.S + * + * Copyright (c) 2009 Intel Corp. 
+ * Author: Huang Ying <ying.huang@intel.com> + * Vinodh Gopal + * Erdinc Ozturk + * Deniz Karakoyunlu + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published + * by the Free Software Foundation. + */ + +#include <linux/linkage.h> +#include <asm/assembler.h> + + DATA .req v0 + SHASH .req v1 + IN1 .req v2 + T1 .req v2 + T2 .req v3 + T3 .req v4 + VZR .req v5 + + .text + .arch armv8-a+crypto + + /* + * void pmull_ghash_update(int blocks, u64 dg[], const char *src, + * struct ghash_key const *k, const char *head) + */ +ENTRY(pmull_ghash_update) + ld1 {DATA.16b}, [x1] + ld1 {SHASH.16b}, [x3] + eor VZR.16b, VZR.16b, VZR.16b + + /* do the head block first, if supplied */ + cbz x4, 0f + ld1 {IN1.2d}, [x4] + b 1f + +0: ld1 {IN1.2d}, [x2], #16 + sub w0, w0, #1 +1: ext IN1.16b, IN1.16b, IN1.16b, #8 +CPU_LE( rev64 IN1.16b, IN1.16b ) + eor DATA.16b, DATA.16b, IN1.16b + + /* multiply DATA by SHASH in GF(2^128) */ + ext T2.16b, DATA.16b, DATA.16b, #8 + ext T3.16b, SHASH.16b, SHASH.16b, #8 + eor T2.16b, T2.16b, DATA.16b + eor T3.16b, T3.16b, SHASH.16b + + pmull2 T1.1q, SHASH.2d, DATA.2d // a1 * b1 + pmull DATA.1q, SHASH.1d, DATA.1d // a0 * b0 + pmull T2.1q, T2.1d, T3.1d // (a1 + a0)(b1 + b0) + eor T2.16b, T2.16b, T1.16b // (a0 * b1) + (a1 * b0) + eor T2.16b, T2.16b, DATA.16b + + ext T3.16b, VZR.16b, T2.16b, #8 + ext T2.16b, T2.16b, VZR.16b, #8 + eor DATA.16b, DATA.16b, T3.16b + eor T1.16b, T1.16b, T2.16b // <T1:DATA> is result of + // carry-less multiplication + + /* first phase of the reduction */ + shl T3.2d, DATA.2d, #1 + eor T3.16b, T3.16b, DATA.16b + shl T3.2d, T3.2d, #5 + eor T3.16b, T3.16b, DATA.16b + shl T3.2d, T3.2d, #57 + ext T2.16b, VZR.16b, T3.16b, #8 + ext T3.16b, T3.16b, VZR.16b, #8 + eor DATA.16b, DATA.16b, T2.16b + eor T1.16b, T1.16b, T3.16b + + /* second phase of the reduction */ + ushr T2.2d, DATA.2d, #5 + eor T2.16b, T2.16b, DATA.16b + ushr T2.2d, T2.2d, #1 + eor T2.16b, T2.16b, DATA.16b + ushr T2.2d, T2.2d, #1 + eor T1.16b, T1.16b, T2.16b + eor DATA.16b, DATA.16b, T1.16b + + cbnz w0, 0b + + st1 {DATA.16b}, [x1] + ret +ENDPROC(pmull_ghash_update) diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c new file mode 100644 index 000000000000..b92baf3f68c7 --- /dev/null +++ b/arch/arm64/crypto/ghash-ce-glue.c @@ -0,0 +1,155 @@ +/* + * Accelerated GHASH implementation with ARMv8 PMULL instructions. + * + * Copyright (C) 2014 Linaro Ltd. <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published + * by the Free Software Foundation. 
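pmull_ghash_update above evaluates a GF(2^128) multiplication with PMULL, splitting it Karatsuba-style and reducing in two phases. For reference, the bit-serial definition of the same multiplication from NIST SP 800-38D, in GHASH bit ordering with the reflected reduction constant 0xe1 (userspace C; this shows what is computed, not how the vector code computes it):

        #include <stdint.h>
        #include <string.h>

        /*
         * Bit-serial GHASH multiplication in GF(2^128): z = x * y, all
         * operands 16 bytes in GHASH bit order, field polynomial
         * x^128 + x^7 + x^2 + x + 1.
         */
        static void gf128_mul(uint8_t z[16], const uint8_t x[16], const uint8_t y[16])
        {
                uint8_t v[16];
                int i, j;

                memcpy(v, y, 16);
                memset(z, 0, 16);

                for (i = 0; i < 128; i++) {
                        if (x[i / 8] & (0x80 >> (i % 8)))       /* bit i of x set? */
                                for (j = 0; j < 16; j++)
                                        z[j] ^= v[j];

                        /* v *= x: shift right one bit, fold in 0xe1 on carry out */
                        {
                                int carry = v[15] & 1;

                                for (j = 15; j > 0; j--)
                                        v[j] = (uint8_t)((v[j] >> 1) | (v[j - 1] << 7));
                                v[0] >>= 1;
                                if (carry)
                                        v[0] ^= 0xe1;
                        }
                }
        }

The ghash_setkey() routine further down stores the key premultiplied by x in this same field, which is what its "multiplication by 'x' in GF(2^128)" comment refers to.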
+ */ + +#include <asm/neon.h> +#include <asm/unaligned.h> +#include <crypto/internal/hash.h> +#include <linux/cpufeature.h> +#include <linux/crypto.h> +#include <linux/module.h> + +MODULE_DESCRIPTION("GHASH secure hash using ARMv8 Crypto Extensions"); +MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>"); +MODULE_LICENSE("GPL v2"); + +#define GHASH_BLOCK_SIZE 16 +#define GHASH_DIGEST_SIZE 16 + +struct ghash_key { + u64 a; + u64 b; +}; + +struct ghash_desc_ctx { + u64 digest[GHASH_DIGEST_SIZE/sizeof(u64)]; + u8 buf[GHASH_BLOCK_SIZE]; + u32 count; +}; + +asmlinkage void pmull_ghash_update(int blocks, u64 dg[], const char *src, + struct ghash_key const *k, const char *head); + +static int ghash_init(struct shash_desc *desc) +{ + struct ghash_desc_ctx *ctx = shash_desc_ctx(desc); + + *ctx = (struct ghash_desc_ctx){}; + return 0; +} + +static int ghash_update(struct shash_desc *desc, const u8 *src, + unsigned int len) +{ + struct ghash_desc_ctx *ctx = shash_desc_ctx(desc); + unsigned int partial = ctx->count % GHASH_BLOCK_SIZE; + + ctx->count += len; + + if ((partial + len) >= GHASH_BLOCK_SIZE) { + struct ghash_key *key = crypto_shash_ctx(desc->tfm); + int blocks; + + if (partial) { + int p = GHASH_BLOCK_SIZE - partial; + + memcpy(ctx->buf + partial, src, p); + src += p; + len -= p; + } + + blocks = len / GHASH_BLOCK_SIZE; + len %= GHASH_BLOCK_SIZE; + + kernel_neon_begin_partial(6); + pmull_ghash_update(blocks, ctx->digest, src, key, + partial ? ctx->buf : NULL); + kernel_neon_end(); + src += blocks * GHASH_BLOCK_SIZE; + } + if (len) + memcpy(ctx->buf + partial, src, len); + return 0; +} + +static int ghash_final(struct shash_desc *desc, u8 *dst) +{ + struct ghash_desc_ctx *ctx = shash_desc_ctx(desc); + unsigned int partial = ctx->count % GHASH_BLOCK_SIZE; + + if (partial) { + struct ghash_key *key = crypto_shash_ctx(desc->tfm); + + memset(ctx->buf + partial, 0, GHASH_BLOCK_SIZE - partial); + + kernel_neon_begin_partial(6); + pmull_ghash_update(1, ctx->digest, ctx->buf, key, NULL); + kernel_neon_end(); + } + put_unaligned_be64(ctx->digest[1], dst); + put_unaligned_be64(ctx->digest[0], dst + 8); + + *ctx = (struct ghash_desc_ctx){}; + return 0; +} + +static int ghash_setkey(struct crypto_shash *tfm, + const u8 *inkey, unsigned int keylen) +{ + struct ghash_key *key = crypto_shash_ctx(tfm); + u64 a, b; + + if (keylen != GHASH_BLOCK_SIZE) { + crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + + /* perform multiplication by 'x' in GF(2^128) */ + b = get_unaligned_be64(inkey); + a = get_unaligned_be64(inkey + 8); + + key->a = (a << 1) | (b >> 63); + key->b = (b << 1) | (a >> 63); + + if (b >> 63) + key->b ^= 0xc200000000000000UL; + + return 0; +} + +static struct shash_alg ghash_alg = { + .digestsize = GHASH_DIGEST_SIZE, + .init = ghash_init, + .update = ghash_update, + .final = ghash_final, + .setkey = ghash_setkey, + .descsize = sizeof(struct ghash_desc_ctx), + .base = { + .cra_name = "ghash", + .cra_driver_name = "ghash-ce", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_TYPE_SHASH, + .cra_blocksize = GHASH_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ghash_key), + .cra_module = THIS_MODULE, + }, +}; + +static int __init ghash_ce_mod_init(void) +{ + return crypto_register_shash(&ghash_alg); +} + +static void __exit ghash_ce_mod_exit(void) +{ + crypto_unregister_shash(&ghash_alg); +} + +module_cpu_feature_match(PMULL, ghash_ce_mod_init); +module_exit(ghash_ce_mod_exit); diff --git a/arch/arm64/crypto/sha1-ce-core.S b/arch/arm64/crypto/sha1-ce-core.S new file 
mode 100644 index 000000000000..09d57d98609c --- /dev/null +++ b/arch/arm64/crypto/sha1-ce-core.S @@ -0,0 +1,153 @@ +/* + * sha1-ce-core.S - SHA-1 secure hash using ARMv8 Crypto Extensions + * + * Copyright (C) 2014 Linaro Ltd <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <linux/linkage.h> +#include <asm/assembler.h> + + .text + .arch armv8-a+crypto + + k0 .req v0 + k1 .req v1 + k2 .req v2 + k3 .req v3 + + t0 .req v4 + t1 .req v5 + + dga .req q6 + dgav .req v6 + dgb .req s7 + dgbv .req v7 + + dg0q .req q12 + dg0s .req s12 + dg0v .req v12 + dg1s .req s13 + dg1v .req v13 + dg2s .req s14 + + .macro add_only, op, ev, rc, s0, dg1 + .ifc \ev, ev + add t1.4s, v\s0\().4s, \rc\().4s + sha1h dg2s, dg0s + .ifnb \dg1 + sha1\op dg0q, \dg1, t0.4s + .else + sha1\op dg0q, dg1s, t0.4s + .endif + .else + .ifnb \s0 + add t0.4s, v\s0\().4s, \rc\().4s + .endif + sha1h dg1s, dg0s + sha1\op dg0q, dg2s, t1.4s + .endif + .endm + + .macro add_update, op, ev, rc, s0, s1, s2, s3, dg1 + sha1su0 v\s0\().4s, v\s1\().4s, v\s2\().4s + add_only \op, \ev, \rc, \s1, \dg1 + sha1su1 v\s0\().4s, v\s3\().4s + .endm + + /* + * The SHA1 round constants + */ + .align 4 +.Lsha1_rcon: + .word 0x5a827999, 0x6ed9eba1, 0x8f1bbcdc, 0xca62c1d6 + + /* + * void sha1_ce_transform(int blocks, u8 const *src, u32 *state, + * u8 *head, long bytes) + */ +ENTRY(sha1_ce_transform) + /* load round constants */ + adr x6, .Lsha1_rcon + ld1r {k0.4s}, [x6], #4 + ld1r {k1.4s}, [x6], #4 + ld1r {k2.4s}, [x6], #4 + ld1r {k3.4s}, [x6] + + /* load state */ + ldr dga, [x2] + ldr dgb, [x2, #16] + + /* load partial state (if supplied) */ + cbz x3, 0f + ld1 {v8.4s-v11.4s}, [x3] + b 1f + + /* load input */ +0: ld1 {v8.4s-v11.4s}, [x1], #64 + sub w0, w0, #1 + +1: +CPU_LE( rev32 v8.16b, v8.16b ) +CPU_LE( rev32 v9.16b, v9.16b ) +CPU_LE( rev32 v10.16b, v10.16b ) +CPU_LE( rev32 v11.16b, v11.16b ) + +2: add t0.4s, v8.4s, k0.4s + mov dg0v.16b, dgav.16b + + add_update c, ev, k0, 8, 9, 10, 11, dgb + add_update c, od, k0, 9, 10, 11, 8 + add_update c, ev, k0, 10, 11, 8, 9 + add_update c, od, k0, 11, 8, 9, 10 + add_update c, ev, k1, 8, 9, 10, 11 + + add_update p, od, k1, 9, 10, 11, 8 + add_update p, ev, k1, 10, 11, 8, 9 + add_update p, od, k1, 11, 8, 9, 10 + add_update p, ev, k1, 8, 9, 10, 11 + add_update p, od, k2, 9, 10, 11, 8 + + add_update m, ev, k2, 10, 11, 8, 9 + add_update m, od, k2, 11, 8, 9, 10 + add_update m, ev, k2, 8, 9, 10, 11 + add_update m, od, k2, 9, 10, 11, 8 + add_update m, ev, k3, 10, 11, 8, 9 + + add_update p, od, k3, 11, 8, 9, 10 + add_only p, ev, k3, 9 + add_only p, od, k3, 10 + add_only p, ev, k3, 11 + add_only p, od + + /* update state */ + add dgbv.2s, dgbv.2s, dg1v.2s + add dgav.4s, dgav.4s, dg0v.4s + + cbnz w0, 0b + + /* + * Final block: add padding and total bit count. + * Skip if we have no total byte count in x4. In that case, the input + * size was not a round multiple of the block size, and the padding is + * handled by the C code. 
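+ * Otherwise the padding block is built directly in v8-v11 below: a 0x80 byte + * followed by zeroes, with the 64-bit big-endian bit count (x4 * 8) in the + * last two words, as the SHA-1 padding rules require; the ror of x4 by 29 + * leaves the two 32-bit halves of the count in big-endian word order.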
+ */ + cbz x4, 3f + movi v9.2d, #0 + mov x8, #0x80000000 + movi v10.2d, #0 + ror x7, x4, #29 // ror(lsl(x4, 3), 32) + fmov d8, x8 + mov x4, #0 + mov v11.d[0], xzr + mov v11.d[1], x7 + b 2b + + /* store new state */ +3: str dga, [x2] + str dgb, [x2, #16] + ret +ENDPROC(sha1_ce_transform) diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c new file mode 100644 index 000000000000..6fe83f37a750 --- /dev/null +++ b/arch/arm64/crypto/sha1-ce-glue.c @@ -0,0 +1,174 @@ +/* + * sha1-ce-glue.c - SHA-1 secure hash using ARMv8 Crypto Extensions + * + * Copyright (C) 2014 Linaro Ltd <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <asm/neon.h> +#include <asm/unaligned.h> +#include <crypto/internal/hash.h> +#include <crypto/sha.h> +#include <linux/cpufeature.h> +#include <linux/crypto.h> +#include <linux/module.h> + +MODULE_DESCRIPTION("SHA1 secure hash using ARMv8 Crypto Extensions"); +MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>"); +MODULE_LICENSE("GPL v2"); + +asmlinkage void sha1_ce_transform(int blocks, u8 const *src, u32 *state, + u8 *head, long bytes); + +static int sha1_init(struct shash_desc *desc) +{ + struct sha1_state *sctx = shash_desc_ctx(desc); + + *sctx = (struct sha1_state){ + .state = { SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4 }, + }; + return 0; +} + +static int sha1_update(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + struct sha1_state *sctx = shash_desc_ctx(desc); + unsigned int partial = sctx->count % SHA1_BLOCK_SIZE; + + sctx->count += len; + + if ((partial + len) >= SHA1_BLOCK_SIZE) { + int blocks; + + if (partial) { + int p = SHA1_BLOCK_SIZE - partial; + + memcpy(sctx->buffer + partial, data, p); + data += p; + len -= p; + } + + blocks = len / SHA1_BLOCK_SIZE; + len %= SHA1_BLOCK_SIZE; + + kernel_neon_begin_partial(16); + sha1_ce_transform(blocks, data, sctx->state, + partial ? sctx->buffer : NULL, 0); + kernel_neon_end(); + + data += blocks * SHA1_BLOCK_SIZE; + partial = 0; + } + if (len) + memcpy(sctx->buffer + partial, data, len); + return 0; +} + +static int sha1_final(struct shash_desc *desc, u8 *out) +{ + static const u8 padding[SHA1_BLOCK_SIZE] = { 0x80, }; + + struct sha1_state *sctx = shash_desc_ctx(desc); + __be64 bits = cpu_to_be64(sctx->count << 3); + __be32 *dst = (__be32 *)out; + int i; + + u32 padlen = SHA1_BLOCK_SIZE + - ((sctx->count + sizeof(bits)) % SHA1_BLOCK_SIZE); + + sha1_update(desc, padding, padlen); + sha1_update(desc, (const u8 *)&bits, sizeof(bits)); + + for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++) + put_unaligned_be32(sctx->state[i], dst++); + + *sctx = (struct sha1_state){}; + return 0; +} + +static int sha1_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) +{ + struct sha1_state *sctx = shash_desc_ctx(desc); + __be32 *dst = (__be32 *)out; + int blocks; + int i; + + if (sctx->count || !len || (len % SHA1_BLOCK_SIZE)) { + sha1_update(desc, data, len); + return sha1_final(desc, out); + } + + /* + * Use a fast path if the input is a multiple of 64 bytes. 
In + * this case, there is no need to copy data around, and we can + * perform the entire digest calculation in a single invocation + * of sha1_ce_transform() + */ + blocks = len / SHA1_BLOCK_SIZE; + + kernel_neon_begin_partial(16); + sha1_ce_transform(blocks, data, sctx->state, NULL, len); + kernel_neon_end(); + + for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++) + put_unaligned_be32(sctx->state[i], dst++); + + *sctx = (struct sha1_state){}; + return 0; +} + +static int sha1_export(struct shash_desc *desc, void *out) +{ + struct sha1_state *sctx = shash_desc_ctx(desc); + struct sha1_state *dst = out; + + *dst = *sctx; + return 0; +} + +static int sha1_import(struct shash_desc *desc, const void *in) +{ + struct sha1_state *sctx = shash_desc_ctx(desc); + struct sha1_state const *src = in; + + *sctx = *src; + return 0; +} + +static struct shash_alg alg = { + .init = sha1_init, + .update = sha1_update, + .final = sha1_final, + .finup = sha1_finup, + .export = sha1_export, + .import = sha1_import, + .descsize = sizeof(struct sha1_state), + .digestsize = SHA1_DIGEST_SIZE, + .statesize = sizeof(struct sha1_state), + .base = { + .cra_name = "sha1", + .cra_driver_name = "sha1-ce", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_TYPE_SHASH, + .cra_blocksize = SHA1_BLOCK_SIZE, + .cra_module = THIS_MODULE, + } +}; + +static int __init sha1_ce_mod_init(void) +{ + return crypto_register_shash(&alg); +} + +static void __exit sha1_ce_mod_fini(void) +{ + crypto_unregister_shash(&alg); +} + +module_cpu_feature_match(SHA1, sha1_ce_mod_init); +module_exit(sha1_ce_mod_fini); diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S new file mode 100644 index 000000000000..7f29fc031ea8 --- /dev/null +++ b/arch/arm64/crypto/sha2-ce-core.S @@ -0,0 +1,156 @@ +/* + * sha2-ce-core.S - core SHA-224/SHA-256 transform using v8 Crypto Extensions + * + * Copyright (C) 2014 Linaro Ltd <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include <linux/linkage.h> +#include <asm/assembler.h> + + .text + .arch armv8-a+crypto + + dga .req q20 + dgav .req v20 + dgb .req q21 + dgbv .req v21 + + t0 .req v22 + t1 .req v23 + + dg0q .req q24 + dg0v .req v24 + dg1q .req q25 + dg1v .req v25 + dg2q .req q26 + dg2v .req v26 + + .macro add_only, ev, rc, s0 + mov dg2v.16b, dg0v.16b + .ifeq \ev + add t1.4s, v\s0\().4s, \rc\().4s + sha256h dg0q, dg1q, t0.4s + sha256h2 dg1q, dg2q, t0.4s + .else + .ifnb \s0 + add t0.4s, v\s0\().4s, \rc\().4s + .endif + sha256h dg0q, dg1q, t1.4s + sha256h2 dg1q, dg2q, t1.4s + .endif + .endm + + .macro add_update, ev, rc, s0, s1, s2, s3 + sha256su0 v\s0\().4s, v\s1\().4s + add_only \ev, \rc, \s1 + sha256su1 v\s0\().4s, v\s2\().4s, v\s3\().4s + .endm + + /* + * The SHA-256 round constants + */ + .align 4 +.Lsha2_rcon: + .word 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5 + .word 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5 + .word 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3 + .word 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174 + .word 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc + .word 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da + .word 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7 + .word 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967 + .word 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13 + .word 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85 + .word 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3 + .word 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070 + .word 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5 + .word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3 + .word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208 + .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 + + /* + * void sha2_ce_transform(int blocks, u8 const *src, u32 *state, + * u8 *head, long bytes) + */ +ENTRY(sha2_ce_transform) + /* load round constants */ + adr x8, .Lsha2_rcon + ld1 { v0.4s- v3.4s}, [x8], #64 + ld1 { v4.4s- v7.4s}, [x8], #64 + ld1 { v8.4s-v11.4s}, [x8], #64 + ld1 {v12.4s-v15.4s}, [x8] + + /* load state */ + ldp dga, dgb, [x2] + + /* load partial input (if supplied) */ + cbz x3, 0f + ld1 {v16.4s-v19.4s}, [x3] + b 1f + + /* load input */ +0: ld1 {v16.4s-v19.4s}, [x1], #64 + sub w0, w0, #1 + +1: +CPU_LE( rev32 v16.16b, v16.16b ) +CPU_LE( rev32 v17.16b, v17.16b ) +CPU_LE( rev32 v18.16b, v18.16b ) +CPU_LE( rev32 v19.16b, v19.16b ) + +2: add t0.4s, v16.4s, v0.4s + mov dg0v.16b, dgav.16b + mov dg1v.16b, dgbv.16b + + add_update 0, v1, 16, 17, 18, 19 + add_update 1, v2, 17, 18, 19, 16 + add_update 0, v3, 18, 19, 16, 17 + add_update 1, v4, 19, 16, 17, 18 + + add_update 0, v5, 16, 17, 18, 19 + add_update 1, v6, 17, 18, 19, 16 + add_update 0, v7, 18, 19, 16, 17 + add_update 1, v8, 19, 16, 17, 18 + + add_update 0, v9, 16, 17, 18, 19 + add_update 1, v10, 17, 18, 19, 16 + add_update 0, v11, 18, 19, 16, 17 + add_update 1, v12, 19, 16, 17, 18 + + add_only 0, v13, 17 + add_only 1, v14, 18 + add_only 0, v15, 19 + add_only 1 + + /* update state */ + add dgav.4s, dgav.4s, dg0v.4s + add dgbv.4s, dgbv.4s, dg1v.4s + + /* handled all input blocks? */ + cbnz w0, 0b + + /* + * Final block: add padding and total bit count. + * Skip if we have no total byte count in x4. In that case, the input + * size was not a round multiple of the block size, and the padding is + * handled by the C code. 
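+ * Otherwise the same scheme as the SHA-1 core is used: the 0x80 pad byte, + * the zeroes and the big-endian bit count are assembled in v16-v19 and fed + * back through the round loop at label 2.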
+ */ + cbz x4, 3f + movi v17.2d, #0 + mov x8, #0x80000000 + movi v18.2d, #0 + ror x7, x4, #29 // ror(lsl(x4, 3), 32) + fmov d16, x8 + mov x4, #0 + mov v19.d[0], xzr + mov v19.d[1], x7 + b 2b + + /* store new state */ +3: stp dga, dgb, [x2] + ret +ENDPROC(sha2_ce_transform) diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c new file mode 100644 index 000000000000..c294e67d3925 --- /dev/null +++ b/arch/arm64/crypto/sha2-ce-glue.c @@ -0,0 +1,255 @@ +/* + * sha2-ce-glue.c - SHA-224/SHA-256 using ARMv8 Crypto Extensions + * + * Copyright (C) 2014 Linaro Ltd <ard.biesheuvel@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <asm/neon.h> +#include <asm/unaligned.h> +#include <crypto/internal/hash.h> +#include <crypto/sha.h> +#include <linux/cpufeature.h> +#include <linux/crypto.h> +#include <linux/module.h> + +MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash using ARMv8 Crypto Extensions"); +MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>"); +MODULE_LICENSE("GPL v2"); + +asmlinkage int sha2_ce_transform(int blocks, u8 const *src, u32 *state, + u8 *head, long bytes); + +static int sha224_init(struct shash_desc *desc) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + + *sctx = (struct sha256_state){ + .state = { + SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3, + SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7, + } + }; + return 0; +} + +static int sha256_init(struct shash_desc *desc) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + + *sctx = (struct sha256_state){ + .state = { + SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3, + SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7, + } + }; + return 0; +} + +static int sha2_update(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + unsigned int partial = sctx->count % SHA256_BLOCK_SIZE; + + sctx->count += len; + + if ((partial + len) >= SHA256_BLOCK_SIZE) { + int blocks; + + if (partial) { + int p = SHA256_BLOCK_SIZE - partial; + + memcpy(sctx->buf + partial, data, p); + data += p; + len -= p; + } + + blocks = len / SHA256_BLOCK_SIZE; + len %= SHA256_BLOCK_SIZE; + + kernel_neon_begin_partial(28); + sha2_ce_transform(blocks, data, sctx->state, + partial ? 
sctx->buf : NULL, 0); + kernel_neon_end(); + + data += blocks * SHA256_BLOCK_SIZE; + partial = 0; + } + if (len) + memcpy(sctx->buf + partial, data, len); + return 0; +} + +static void sha2_final(struct shash_desc *desc) +{ + static const u8 padding[SHA256_BLOCK_SIZE] = { 0x80, }; + + struct sha256_state *sctx = shash_desc_ctx(desc); + __be64 bits = cpu_to_be64(sctx->count << 3); + u32 padlen = SHA256_BLOCK_SIZE + - ((sctx->count + sizeof(bits)) % SHA256_BLOCK_SIZE); + + sha2_update(desc, padding, padlen); + sha2_update(desc, (const u8 *)&bits, sizeof(bits)); +} + +static int sha224_final(struct shash_desc *desc, u8 *out) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + __be32 *dst = (__be32 *)out; + int i; + + sha2_final(desc); + + for (i = 0; i < SHA224_DIGEST_SIZE / sizeof(__be32); i++) + put_unaligned_be32(sctx->state[i], dst++); + + *sctx = (struct sha256_state){}; + return 0; +} + +static int sha256_final(struct shash_desc *desc, u8 *out) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + __be32 *dst = (__be32 *)out; + int i; + + sha2_final(desc); + + for (i = 0; i < SHA256_DIGEST_SIZE / sizeof(__be32); i++) + put_unaligned_be32(sctx->state[i], dst++); + + *sctx = (struct sha256_state){}; + return 0; +} + +static void sha2_finup(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + int blocks; + + if (sctx->count || !len || (len % SHA256_BLOCK_SIZE)) { + sha2_update(desc, data, len); + sha2_final(desc); + return; + } + + /* + * Use a fast path if the input is a multiple of 64 bytes. In + * this case, there is no need to copy data around, and we can + * perform the entire digest calculation in a single invocation + * of sha2_ce_transform() + */ + blocks = len / SHA256_BLOCK_SIZE; + + kernel_neon_begin_partial(28); + sha2_ce_transform(blocks, data, sctx->state, NULL, len); + kernel_neon_end(); + data += blocks * SHA256_BLOCK_SIZE; +} + +static int sha224_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + __be32 *dst = (__be32 *)out; + int i; + + sha2_finup(desc, data, len); + + for (i = 0; i < SHA224_DIGEST_SIZE / sizeof(__be32); i++) + put_unaligned_be32(sctx->state[i], dst++); + + *sctx = (struct sha256_state){}; + return 0; +} + +static int sha256_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + __be32 *dst = (__be32 *)out; + int i; + + sha2_finup(desc, data, len); + + for (i = 0; i < SHA256_DIGEST_SIZE / sizeof(__be32); i++) + put_unaligned_be32(sctx->state[i], dst++); + + *sctx = (struct sha256_state){}; + return 0; +} + +static int sha2_export(struct shash_desc *desc, void *out) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + struct sha256_state *dst = out; + + *dst = *sctx; + return 0; +} + +static int sha2_import(struct shash_desc *desc, const void *in) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + struct sha256_state const *src = in; + + *sctx = *src; + return 0; +} + +static struct shash_alg algs[] = { { + .init = sha224_init, + .update = sha2_update, + .final = sha224_final, + .finup = sha224_finup, + .export = sha2_export, + .import = sha2_import, + .descsize = sizeof(struct sha256_state), + .digestsize = SHA224_DIGEST_SIZE, + .statesize = sizeof(struct sha256_state), + .base = { + .cra_name = "sha224", + .cra_driver_name = "sha224-ce", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_TYPE_SHASH, + .cra_blocksize = 
SHA256_BLOCK_SIZE, + .cra_module = THIS_MODULE, + } +}, { + .init = sha256_init, + .update = sha2_update, + .final = sha256_final, + .finup = sha256_finup, + .export = sha2_export, + .import = sha2_import, + .descsize = sizeof(struct sha256_state), + .digestsize = SHA256_DIGEST_SIZE, + .statesize = sizeof(struct sha256_state), + .base = { + .cra_name = "sha256", + .cra_driver_name = "sha256-ce", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_TYPE_SHASH, + .cra_blocksize = SHA256_BLOCK_SIZE, + .cra_module = THIS_MODULE, + } +} }; + +static int __init sha2_ce_mod_init(void) +{ + return crypto_register_shashes(algs, ARRAY_SIZE(algs)); +} + +static void __exit sha2_ce_mod_fini(void) +{ + crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); +} + +module_cpu_feature_match(SHA2, sha2_ce_mod_init); +module_exit(sha2_ce_mod_fini); diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild index 83f71b3004a8..42c7eecd2bb6 100644 --- a/arch/arm64/include/asm/Kbuild +++ b/arch/arm64/include/asm/Kbuild @@ -40,6 +40,7 @@ generic-y += segment.h generic-y += sembuf.h generic-y += serial.h generic-y += shmbuf.h +generic-y += simd.h generic-y += sizes.h generic-y += socket.h generic-y += sockios.h diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index fd3e3924041b..5901480bfdca 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -21,6 +21,7 @@ #endif #include <asm/ptrace.h> +#include <asm/thread_info.h> /* * Stack pushing/popping (register pairs only). Equivalent to store decrement @@ -68,23 +69,31 @@ msr daifclr, #8 .endm - .macro disable_step, tmp + .macro disable_step_tsk, flgs, tmp + tbz \flgs, #TIF_SINGLESTEP, 9990f mrs \tmp, mdscr_el1 bic \tmp, \tmp, #1 msr mdscr_el1, \tmp + isb // Synchronise with enable_dbg +9990: .endm - .macro enable_step, tmp + .macro enable_step_tsk, flgs, tmp + tbz \flgs, #TIF_SINGLESTEP, 9990f + disable_dbg mrs \tmp, mdscr_el1 orr \tmp, \tmp, #1 msr mdscr_el1, \tmp +9990: .endm - .macro enable_dbg_if_not_stepping, tmp - mrs \tmp, mdscr_el1 - tbnz \tmp, #0, 9990f - enable_dbg -9990: +/* + * Enable both debug exceptions and interrupts. This is likely to be + * faster than two daifclr operations, since writes to this register + * are self-synchronising. 
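+ * (D is DAIF bit 3 and I is DAIF bit 1, hence the immediate value 8 | 2.)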
+ */ + .macro enable_dbg_and_irq + msr daifclr, #(8 | 2) .endm /* diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index 57e8cb49824c..65f1569ac96e 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -157,7 +157,7 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u) */ #define ATOMIC64_INIT(i) { (i) } -#define atomic64_read(v) (*(volatile long long *)&(v)->counter) +#define atomic64_read(v) (*(volatile long *)&(v)->counter) #define atomic64_set(v,i) (((v)->counter) = (i)) static inline void atomic64_add(u64 i, atomic64_t *v) diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h index 48b9e704af7c..6389d60574d9 100644 --- a/arch/arm64/include/asm/barrier.h +++ b/arch/arm64/include/asm/barrier.h @@ -25,12 +25,12 @@ #define wfi() asm volatile("wfi" : : : "memory") #define isb() asm volatile("isb" : : : "memory") -#define dmb(opt) asm volatile("dmb sy" : : : "memory") -#define dsb(opt) asm volatile("dsb sy" : : : "memory") +#define dmb(opt) asm volatile("dmb " #opt : : : "memory") +#define dsb(opt) asm volatile("dsb " #opt : : : "memory") -#define mb() dsb() -#define rmb() asm volatile("dsb ld" : : : "memory") -#define wmb() asm volatile("dsb st" : : : "memory") +#define mb() dsb(sy) +#define rmb() dsb(ld) +#define wmb() dsb(st) #ifndef CONFIG_SMP #define smp_mb() barrier() @@ -40,7 +40,7 @@ #define smp_store_release(p, v) \ do { \ compiletime_assert_atomic_type(*p); \ - smp_mb(); \ + barrier(); \ ACCESS_ONCE(*p) = (v); \ } while (0) @@ -48,15 +48,15 @@ do { \ ({ \ typeof(*p) ___p1 = ACCESS_ONCE(*p); \ compiletime_assert_atomic_type(*p); \ - smp_mb(); \ + barrier(); \ ___p1; \ }) #else -#define smp_mb() asm volatile("dmb ish" : : : "memory") -#define smp_rmb() asm volatile("dmb ishld" : : : "memory") -#define smp_wmb() asm volatile("dmb ishst" : : : "memory") +#define smp_mb() dmb(ish) +#define smp_rmb() dmb(ishld) +#define smp_wmb() dmb(ishst) #define smp_store_release(p, v) \ do { \ diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h index 390308a67f0d..88cc05b5f3ac 100644 --- a/arch/arm64/include/asm/cache.h +++ b/arch/arm64/include/asm/cache.h @@ -16,6 +16,8 @@ #ifndef __ASM_CACHE_H #define __ASM_CACHE_H +#include <asm/cachetype.h> + #define L1_CACHE_SHIFT 6 #define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT) @@ -27,6 +29,15 @@ * the CPU. */ #define ARCH_DMA_MINALIGN L1_CACHE_BYTES -#define ARCH_SLAB_MINALIGN 8 + +#ifndef __ASSEMBLY__ + +static inline int cache_line_size(void) +{ + u32 cwg = cache_type_cwg(); + return cwg ? 4 << cwg : L1_CACHE_BYTES; +} + +#endif /* __ASSEMBLY__ */ #endif diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h index 4c60e64a801c..a5176cf32dad 100644 --- a/arch/arm64/include/asm/cacheflush.h +++ b/arch/arm64/include/asm/cacheflush.h @@ -123,7 +123,7 @@ extern void flush_dcache_page(struct page *); static inline void __flush_icache_all(void) { asm("ic ialluis"); - dsb(); + dsb(ish); } #define flush_dcache_mmap_lock(mapping) \ @@ -150,7 +150,7 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end) * set_pte_at() called from vmap_pte_range() does not * have a DSB after cleaning the cache line. 
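+ * A dsb(ish) is sufficient for this: the new entries only need to be + * visible to the table walkers of the CPUs in the inner shareable domain.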
*/ - dsb(); + dsb(ish); } static inline void flush_cache_vunmap(unsigned long start, unsigned long end) diff --git a/arch/arm64/include/asm/cachetype.h b/arch/arm64/include/asm/cachetype.h index 85f5f511352a..4b23e758d5e0 100644 --- a/arch/arm64/include/asm/cachetype.h +++ b/arch/arm64/include/asm/cachetype.h @@ -20,12 +20,16 @@ #define CTR_L1IP_SHIFT 14 #define CTR_L1IP_MASK 3 +#define CTR_CWG_SHIFT 24 +#define CTR_CWG_MASK 15 #define ICACHE_POLICY_RESERVED 0 #define ICACHE_POLICY_AIVIVT 1 #define ICACHE_POLICY_VIPT 2 #define ICACHE_POLICY_PIPT 3 +#ifndef __ASSEMBLY__ + static inline u32 icache_policy(void) { return (read_cpuid_cachetype() >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK; @@ -45,4 +49,11 @@ static inline int icache_is_aivivt(void) return icache_policy() == ICACHE_POLICY_AIVIVT; } +static inline u32 cache_type_cwg(void) +{ + return (read_cpuid_cachetype() >> CTR_CWG_SHIFT) & CTR_CWG_MASK; +} + +#endif /* __ASSEMBLY__ */ + #endif /* __ASM_CACHETYPE_H */ diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h index 57c0fa7bf711..ddb9d7830558 100644 --- a/arch/arm64/include/asm/cmpxchg.h +++ b/arch/arm64/include/asm/cmpxchg.h @@ -72,7 +72,12 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size } #define xchg(ptr,x) \ - ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr)))) +({ \ + __typeof__(*(ptr)) __ret; \ + __ret = (__typeof__(*(ptr))) \ + __xchg((unsigned long)(x), (ptr), sizeof(*(ptr))); \ + __ret; \ +}) static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size) diff --git a/arch/arm64/include/asm/compat.h b/arch/arm64/include/asm/compat.h index e71f81fe127a..253e33bc94fb 100644 --- a/arch/arm64/include/asm/compat.h +++ b/arch/arm64/include/asm/compat.h @@ -305,11 +305,6 @@ static inline int is_compat_thread(struct thread_info *thread) #else /* !CONFIG_COMPAT */ -static inline int is_compat_task(void) -{ - return 0; -} - static inline int is_compat_thread(struct thread_info *thread) { return 0; diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h new file mode 100644 index 000000000000..5a46c4e7f539 --- /dev/null +++ b/arch/arm64/include/asm/efi.h @@ -0,0 +1,14 @@ +#ifndef _ASM_EFI_H +#define _ASM_EFI_H + +#include <asm/io.h> + +#ifdef CONFIG_EFI +extern void efi_init(void); +extern void efi_idmap_init(void); +#else +#define efi_init() +#define efi_idmap_init() +#endif + +#endif /* _ASM_EFI_H */ diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h index c4a7f940b387..72674f4c3871 100644 --- a/arch/arm64/include/asm/esr.h +++ b/arch/arm64/include/asm/esr.h @@ -18,9 +18,11 @@ #ifndef __ASM_ESR_H #define __ASM_ESR_H -#define ESR_EL1_EC_SHIFT (26) -#define ESR_EL1_IL (1U << 25) +#define ESR_EL1_WRITE (1 << 6) +#define ESR_EL1_CM (1 << 8) +#define ESR_EL1_IL (1 << 25) +#define ESR_EL1_EC_SHIFT (26) #define ESR_EL1_EC_UNKNOWN (0x00) #define ESR_EL1_EC_WFI (0x01) #define ESR_EL1_EC_CP15_32 (0x03) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index c43b4ac13008..50f559f574fe 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -37,8 +37,21 @@ struct fpsimd_state { u32 fpcr; }; }; + /* the id of the last cpu to have restored this state */ + unsigned int cpu; }; +/* + * Struct for stacking the bottom 'n' FP/SIMD registers. 
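+ * Used with fpsimd_save_partial_state()/fpsimd_load_partial_state() (see + * below) so that kernel-mode NEON code which only needs a few registers does + * not have to preserve the whole register file; the registers are spilled + * and reloaded in pairs.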
+ */ +struct fpsimd_partial_state { + u32 fpsr; + u32 fpcr; + u32 num_regs; + __uint128_t vregs[32]; +}; + + #if defined(__KERNEL__) && defined(CONFIG_COMPAT) /* Masks for extracting the FPSR and FPCR from the FPSCR */ #define VFP_FPSCR_STAT_MASK 0xf800009f @@ -58,6 +71,16 @@ extern void fpsimd_load_state(struct fpsimd_state *state); extern void fpsimd_thread_switch(struct task_struct *next); extern void fpsimd_flush_thread(void); +extern void fpsimd_preserve_current_state(void); +extern void fpsimd_restore_current_state(void); +extern void fpsimd_update_current_state(struct fpsimd_state *state); + +extern void fpsimd_flush_task_state(struct task_struct *target); + +extern void fpsimd_save_partial_state(struct fpsimd_partial_state *state, + u32 num_regs); +extern void fpsimd_load_partial_state(struct fpsimd_partial_state *state); + #endif #endif diff --git a/arch/arm64/include/asm/fpsimdmacros.h b/arch/arm64/include/asm/fpsimdmacros.h index bbec599c96bd..768414d55e64 100644 --- a/arch/arm64/include/asm/fpsimdmacros.h +++ b/arch/arm64/include/asm/fpsimdmacros.h @@ -62,3 +62,38 @@ ldr w\tmpnr, [\state, #16 * 2 + 4] msr fpcr, x\tmpnr .endm + +.altmacro +.macro fpsimd_save_partial state, numnr, tmpnr1, tmpnr2 + mrs x\tmpnr1, fpsr + str w\numnr, [\state, #8] + mrs x\tmpnr2, fpcr + stp w\tmpnr1, w\tmpnr2, [\state] + adr x\tmpnr1, 0f + add \state, \state, x\numnr, lsl #4 + sub x\tmpnr1, x\tmpnr1, x\numnr, lsl #1 + br x\tmpnr1 + .irp qa, 30, 28, 26, 24, 22, 20, 18, 16, 14, 12, 10, 8, 6, 4, 2, 0 + .irp qb, %(qa + 1) + stp q\qa, q\qb, [\state, # -16 * \qa - 16] + .endr + .endr +0: +.endm + +.macro fpsimd_restore_partial state, tmpnr1, tmpnr2 + ldp w\tmpnr1, w\tmpnr2, [\state] + msr fpsr, x\tmpnr1 + msr fpcr, x\tmpnr2 + adr x\tmpnr1, 0f + ldr w\tmpnr2, [\state, #8] + add \state, \state, x\tmpnr2, lsl #4 + sub x\tmpnr1, x\tmpnr1, x\tmpnr2, lsl #1 + br x\tmpnr1 + .irp qa, 30, 28, 26, 24, 22, 20, 18, 16, 14, 12, 10, 8, 6, 4, 2, 0 + .irp qb, %(qa + 1) + ldp q\qa, q\qb, [\state, # -16 * \qa - 16] + .endr + .endr +0: +.endm diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h new file mode 100644 index 000000000000..c5534facf941 --- /dev/null +++ b/arch/arm64/include/asm/ftrace.h @@ -0,0 +1,59 @@ +/* + * arch/arm64/include/asm/ftrace.h + * + * Copyright (C) 2013 Linaro Limited + * Author: AKASHI Takahiro <takahiro.akashi@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ +#ifndef __ASM_FTRACE_H +#define __ASM_FTRACE_H + +#include <asm/insn.h> + +#define MCOUNT_ADDR ((unsigned long)_mcount) +#define MCOUNT_INSN_SIZE AARCH64_INSN_SIZE + +#ifndef __ASSEMBLY__ +#include <linux/compat.h> + +extern void _mcount(unsigned long); +extern void *return_address(unsigned int); + +struct dyn_arch_ftrace { + /* No extra data needed for arm64 */ +}; + +extern unsigned long ftrace_graph_call; + +static inline unsigned long ftrace_call_adjust(unsigned long addr) +{ + /* + * addr is the address of the mcount call instruction. + * recordmcount does the necessary offset calculation. + */ + return addr; +} + +#define ftrace_return_address(n) return_address(n) + +/* + * Because AArch32 mode does not share the same syscall table with AArch64, + * tracing compat syscalls may result in reporting bogus syscalls or even + * hang-up, so just do not trace them. 
+ * See kernel/trace/trace_syscalls.c + * + * x86 code says: + * If the user realy wants these, then they should use the + * raw syscall tracepoints with filtering. + */ +#define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS +static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs) +{ + return is_compat_task(); +} +#endif /* ifndef __ASSEMBLY__ */ + +#endif /* __ASM_FTRACE_H */ diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h index ae4801d77514..0be67821f9ce 100644 --- a/arch/arm64/include/asm/hardirq.h +++ b/arch/arm64/include/asm/hardirq.h @@ -20,7 +20,7 @@ #include <linux/threads.h> #include <asm/irq.h> -#define NR_IPI 5 +#define NR_IPI 6 typedef struct { unsigned int __softirq_pending; diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h index c44ad39ed310..dc1f73b13e74 100644 --- a/arch/arm64/include/asm/insn.h +++ b/arch/arm64/include/asm/insn.h @@ -21,6 +21,7 @@ /* A64 instructions are always 32 bits. */ #define AARCH64_INSN_SIZE 4 +#ifndef __ASSEMBLY__ /* * ARM Architecture Reference Manual for ARMv8 Profile-A, Issue A.a * Section C3.1 "A64 instruction index by encoding": @@ -104,5 +105,6 @@ bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn); int aarch64_insn_patch_text_nosync(void *addr, u32 insn); int aarch64_insn_patch_text_sync(void *addrs[], u32 insns[], int cnt); int aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt); +#endif /* __ASSEMBLY__ */ #endif /* __ASM_INSN_H */ diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h index a1bef78f0303..e0ecdcf6632d 100644 --- a/arch/arm64/include/asm/io.h +++ b/arch/arm64/include/asm/io.h @@ -230,19 +230,11 @@ extern void __iomem *__ioremap(phys_addr_t phys_addr, size_t size, pgprot_t prot extern void __iounmap(volatile void __iomem *addr); extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size); -#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_DIRTY) -#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_DEVICE_nGnRE)) -#define PROT_NORMAL_NC (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL_NC)) -#define PROT_NORMAL (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL)) - #define ioremap(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE)) #define ioremap_nocache(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE)) #define ioremap_wc(addr, size) __ioremap((addr), (size), __pgprot(PROT_NORMAL_NC)) #define iounmap __iounmap -#define PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF) -#define PROT_SECT_DEVICE_nGnRE (PROT_SECT_DEFAULT | PTE_PXN | PTE_UXN | PMD_ATTRINDX(MT_DEVICE_nGnRE)) - #define ARCH_HAS_IOREMAP_WC #include <asm-generic/iomap.h> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h index aff0292c8f4d..c2f006c48bdb 100644 --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -31,5 +31,7 @@ extern void paging_init(void); extern void setup_mm_for_reboot(void); extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt); extern void init_mem_pgprot(void); +/* create an identity mapping for memory (or io if map_io is true) */ +extern void create_id_mapping(phys_addr_t addr, phys_addr_t size, int map_io); #endif diff --git a/arch/arm64/include/asm/neon.h b/arch/arm64/include/asm/neon.h index b0cc58a97780..13ce4cc18e26 100644 --- a/arch/arm64/include/asm/neon.h +++ b/arch/arm64/include/asm/neon.h @@ -8,7 +8,11 @@ * published by the Free Software Foundation. 
*/ +#include <linux/types.h> + #define cpu_has_neon() (1) -void kernel_neon_begin(void); +#define kernel_neon_begin() kernel_neon_begin_partial(32) + +void kernel_neon_begin_partial(u32 num_regs); void kernel_neon_end(void); diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index 5fc8a66c3924..955e8c5f0afb 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -29,6 +29,8 @@ */ #define PUD_TABLE_BIT (_AT(pgdval_t, 1) << 1) +#define PUD_TYPE_MASK (_AT(pgdval_t, 3) << 0) +#define PUD_TYPE_SECT (_AT(pgdval_t, 1) << 0) /* * Level 2 descriptor (PMD). diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index e2f96748859b..598cc384fc1c 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -52,66 +52,59 @@ extern void __pgd_error(const char *file, int line, unsigned long val); #endif #define pgd_ERROR(pgd) __pgd_error(__FILE__, __LINE__, pgd_val(pgd)) -/* - * The pgprot_* and protection_map entries will be fixed up at runtime to - * include the cachable and bufferable bits based on memory policy, as well as - * any architecture dependent bits like global/ASID and SMP shared mapping - * bits. - */ -#define _PAGE_DEFAULT PTE_TYPE_PAGE | PTE_AF +#ifdef CONFIG_SMP +#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED) +#define PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S) +#else +#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF) +#define PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF) +#endif -extern pgprot_t pgprot_default; +#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_DEVICE_nGnRE)) +#define PROT_NORMAL_NC (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_NORMAL_NC)) +#define PROT_NORMAL (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_NORMAL)) -#define __pgprot_modify(prot,mask,bits) \ - __pgprot((pgprot_val(prot) & ~(mask)) | (bits)) +#define PROT_SECT_DEVICE_nGnRE (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_DEVICE_nGnRE)) +#define PROT_SECT_NORMAL (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL)) +#define PROT_SECT_NORMAL_EXEC (PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL)) -#define _MOD_PROT(p, b) __pgprot_modify(p, 0, b) +#define _PAGE_DEFAULT (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL)) -#define PAGE_NONE __pgprot_modify(pgprot_default, PTE_TYPE_MASK, PTE_PROT_NONE | PTE_PXN | PTE_UXN) -#define PAGE_SHARED _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE) -#define PAGE_SHARED_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN | PTE_WRITE) -#define PAGE_COPY _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN | PTE_UXN) -#define PAGE_COPY_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN) -#define PAGE_READONLY _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN | PTE_UXN) -#define PAGE_READONLY_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN) -#define PAGE_KERNEL _MOD_PROT(pgprot_default, PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE) -#define PAGE_KERNEL_EXEC _MOD_PROT(pgprot_default, PTE_UXN | PTE_DIRTY | PTE_WRITE) +#define PAGE_KERNEL __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE) +#define PAGE_KERNEL_EXEC __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE) -#define PAGE_HYP _MOD_PROT(pgprot_default, PTE_HYP) +#define PAGE_HYP __pgprot(_PAGE_DEFAULT | PTE_HYP) #define PAGE_HYP_DEVICE __pgprot(PROT_DEVICE_nGnRE | PTE_HYP) -#define PAGE_S2 
__pgprot_modify(pgprot_default, PTE_S2_MEMATTR_MASK, PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY) +#define PAGE_S2 __pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY) #define PAGE_S2_DEVICE __pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDWR | PTE_UXN) -#define __PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_TYPE_MASK) | PTE_PROT_NONE | PTE_PXN | PTE_UXN) -#define __PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE) -#define __PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_WRITE) -#define __PAGE_COPY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN) -#define __PAGE_COPY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN) -#define __PAGE_READONLY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN) -#define __PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN) - -#endif /* __ASSEMBLY__ */ - -#define __P000 __PAGE_NONE -#define __P001 __PAGE_READONLY -#define __P010 __PAGE_COPY -#define __P011 __PAGE_COPY -#define __P100 __PAGE_READONLY_EXEC -#define __P101 __PAGE_READONLY_EXEC -#define __P110 __PAGE_COPY_EXEC -#define __P111 __PAGE_COPY_EXEC - -#define __S000 __PAGE_NONE -#define __S001 __PAGE_READONLY -#define __S010 __PAGE_SHARED -#define __S011 __PAGE_SHARED -#define __S100 __PAGE_READONLY_EXEC -#define __S101 __PAGE_READONLY_EXEC -#define __S110 __PAGE_SHARED_EXEC -#define __S111 __PAGE_SHARED_EXEC +#define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_TYPE_MASK) | PTE_PROT_NONE | PTE_PXN | PTE_UXN) +#define PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE) +#define PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_WRITE) +#define PAGE_COPY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN) +#define PAGE_COPY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN) +#define PAGE_READONLY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN) +#define PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN) + +#define __P000 PAGE_NONE +#define __P001 PAGE_READONLY +#define __P010 PAGE_COPY +#define __P011 PAGE_COPY +#define __P100 PAGE_READONLY_EXEC +#define __P101 PAGE_READONLY_EXEC +#define __P110 PAGE_COPY_EXEC +#define __P111 PAGE_COPY_EXEC + +#define __S000 PAGE_NONE +#define __S001 PAGE_READONLY +#define __S010 PAGE_SHARED +#define __S011 PAGE_SHARED +#define __S100 PAGE_READONLY_EXEC +#define __S101 PAGE_READONLY_EXEC +#define __S110 PAGE_SHARED_EXEC +#define __S111 PAGE_SHARED_EXEC -#ifndef __ASSEMBLY__ /* * ZERO_PAGE is a global shared page that is always zero: used * for zero-mapped memory areas etc.. @@ -265,6 +258,7 @@ static inline pmd_t pte_pmd(pte_t pte) #define mk_pmd(page,prot) pfn_pmd(page_to_pfn(page),prot) #define pmd_page(pmd) pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK)) +#define pud_pfn(pud) (((pud_val(pud) & PUD_MASK) & PHYS_MASK) >> PAGE_SHIFT) #define set_pmd_at(mm, addr, pmdp, pmd) set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd)) @@ -273,6 +267,9 @@ static inline int has_transparent_hugepage(void) return 1; } +#define __pgprot_modify(prot,mask,bits) \ + __pgprot((pgprot_val(prot) & ~(mask)) | (bits)) + /* * Mark the prot value as uncacheable and unbufferable. 
*/ @@ -295,11 +292,17 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn, #define pmd_sect(pmd) ((pmd_val(pmd) & PMD_TYPE_MASK) == \ PMD_TYPE_SECT) +#ifdef ARM64_64K_PAGES +#define pud_sect(pud) (0) +#else +#define pud_sect(pud) ((pud_val(pud) & PUD_TYPE_MASK) == \ + PUD_TYPE_SECT) +#endif static inline void set_pmd(pmd_t *pmdp, pmd_t pmd) { *pmdp = pmd; - dsb(); + dsb(ishst); } static inline void pmd_clear(pmd_t *pmdp) @@ -329,7 +332,7 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd) static inline void set_pud(pud_t *pudp, pud_t pud) { *pudp = pud; - dsb(); + dsb(ishst); } static inline void pud_clear(pud_t *pudp) diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 45b20cd6cbca..34de2a8f7d93 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -79,6 +79,7 @@ struct thread_struct { unsigned long tp_value; struct fpsimd_state fpsimd_state; unsigned long fault_address; /* fault info */ + unsigned long fault_code; /* ESR_EL1 value */ struct debug_info debug; /* debugging */ }; diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h index c7ba261dd4b3..a429b5940be2 100644 --- a/arch/arm64/include/asm/ptrace.h +++ b/arch/arm64/include/asm/ptrace.h @@ -135,6 +135,11 @@ struct pt_regs { #define user_stack_pointer(regs) \ (!compat_user_mode(regs)) ? ((regs)->sp) : ((regs)->compat_sp) +static inline unsigned long regs_return_value(struct pt_regs *regs) +{ + return regs->regs[0]; +} + /* * Are the current registers suitable for user mode? (used to maintain * security in signal handlers) diff --git a/arch/arm64/include/asm/sigcontext.h b/arch/arm64/include/asm/sigcontext.h deleted file mode 100644 index dca1094acc74..000000000000 --- a/arch/arm64/include/asm/sigcontext.h +++ /dev/null @@ -1,31 +0,0 @@ -/* - * Copyright (C) 2012 ARM Ltd. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program. If not, see <http://www.gnu.org/licenses/>. - */ -#ifndef __ASM_SIGCONTEXT_H -#define __ASM_SIGCONTEXT_H - -#include <uapi/asm/sigcontext.h> - -/* - * Auxiliary context saved in the sigcontext.__reserved array. Not exported to - * user space as it will change with the addition of new context. User space - * should check the magic/size information. 
- */ -struct aux_context { - struct fpsimd_context fpsimd; - /* additional context to be added before "end" */ - struct _aarch64_ctx end; -}; -#endif diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h index 3ee8b303d9a9..64d2d4884a9d 100644 --- a/arch/arm64/include/asm/string.h +++ b/arch/arm64/include/asm/string.h @@ -22,6 +22,18 @@ extern char *strrchr(const char *, int c); #define __HAVE_ARCH_STRCHR extern char *strchr(const char *, int c); +#define __HAVE_ARCH_STRCMP +extern int strcmp(const char *, const char *); + +#define __HAVE_ARCH_STRNCMP +extern int strncmp(const char *, const char *, __kernel_size_t); + +#define __HAVE_ARCH_STRLEN +extern __kernel_size_t strlen(const char *); + +#define __HAVE_ARCH_STRNLEN +extern __kernel_size_t strnlen(const char *, __kernel_size_t); + #define __HAVE_ARCH_MEMCPY extern void *memcpy(void *, const void *, __kernel_size_t); @@ -34,4 +46,7 @@ extern void *memchr(const void *, int, __kernel_size_t); #define __HAVE_ARCH_MEMSET extern void *memset(void *, int, __kernel_size_t); +#define __HAVE_ARCH_MEMCMP +extern int memcmp(const void *, const void *, size_t); + #endif diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h index 70ba9d4ee978..383771eb0b87 100644 --- a/arch/arm64/include/asm/syscall.h +++ b/arch/arm64/include/asm/syscall.h @@ -18,6 +18,7 @@ #include <linux/err.h> +extern const void *sys_call_table[]; static inline int syscall_get_nr(struct task_struct *task, struct pt_regs *regs) diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h index 7b8e3a2a00fb..e40b6d06d515 100644 --- a/arch/arm64/include/asm/thread_info.h +++ b/arch/arm64/include/asm/thread_info.h @@ -91,6 +91,9 @@ static inline struct thread_info *current_thread_info(void) /* * thread information flags: * TIF_SYSCALL_TRACE - syscall trace active + * TIF_SYSCALL_TRACEPOINT - syscall tracepoint for ftrace + * TIF_SYSCALL_AUDIT - syscall auditing + * TIF_SECOMP - syscall secure computing * TIF_SIGPENDING - signal pending * TIF_NEED_RESCHED - rescheduling necessary * TIF_NOTIFY_RESUME - callback before returning to user @@ -99,7 +102,11 @@ static inline struct thread_info *current_thread_info(void) #define TIF_SIGPENDING 0 #define TIF_NEED_RESCHED 1 #define TIF_NOTIFY_RESUME 2 /* callback before returning to user */ +#define TIF_FOREIGN_FPSTATE 3 /* CPU's FP state is not current's */ #define TIF_SYSCALL_TRACE 8 +#define TIF_SYSCALL_AUDIT 9 +#define TIF_SYSCALL_TRACEPOINT 10 +#define TIF_SECCOMP 11 #define TIF_MEMDIE 18 /* is terminating due to OOM killer */ #define TIF_FREEZE 19 #define TIF_RESTORE_SIGMASK 20 @@ -110,10 +117,18 @@ static inline struct thread_info *current_thread_info(void) #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED) #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) +#define _TIF_FOREIGN_FPSTATE (1 << TIF_FOREIGN_FPSTATE) +#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) +#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT) +#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT) +#define _TIF_SECCOMP (1 << TIF_SECCOMP) #define _TIF_32BIT (1 << TIF_32BIT) #define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \ - _TIF_NOTIFY_RESUME) + _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE) + +#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \ + _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP) #endif /* __KERNEL__ */ #endif /* __ASM_THREAD_INFO_H */ diff --git a/arch/arm64/include/asm/tlbflush.h 
b/arch/arm64/include/asm/tlbflush.h index 8b482035cfc2..b9349c4513ea 100644 --- a/arch/arm64/include/asm/tlbflush.h +++ b/arch/arm64/include/asm/tlbflush.h @@ -72,9 +72,9 @@ extern struct cpu_tlb_fns cpu_tlb; */ static inline void flush_tlb_all(void) { - dsb(); + dsb(ishst); asm("tlbi vmalle1is"); - dsb(); + dsb(ish); isb(); } @@ -82,9 +82,9 @@ static inline void flush_tlb_mm(struct mm_struct *mm) { unsigned long asid = (unsigned long)ASID(mm) << 48; - dsb(); + dsb(ishst); asm("tlbi aside1is, %0" : : "r" (asid)); - dsb(); + dsb(ish); } static inline void flush_tlb_page(struct vm_area_struct *vma, @@ -93,16 +93,36 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr = uaddr >> 12 | ((unsigned long)ASID(vma->vm_mm) << 48); - dsb(); + dsb(ishst); asm("tlbi vae1is, %0" : : "r" (addr)); - dsb(); + dsb(ish); } -/* - * Convert calls to our calling convention. - */ -#define flush_tlb_range(vma,start,end) __cpu_flush_user_tlb_range(start,end,vma) -#define flush_tlb_kernel_range(s,e) __cpu_flush_kern_tlb_range(s,e) +static inline void flush_tlb_range(struct vm_area_struct *vma, + unsigned long start, unsigned long end) +{ + unsigned long asid = (unsigned long)ASID(vma->vm_mm) << 48; + unsigned long addr; + start = asid | (start >> 12); + end = asid | (end >> 12); + + dsb(ishst); + for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) + asm("tlbi vae1is, %0" : : "r"(addr)); + dsb(ish); +} + +static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end) +{ + unsigned long addr; + start >>= 12; + end >>= 12; + + dsb(ishst); + for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) + asm("tlbi vaae1is, %0" : : "r"(addr)); + dsb(ish); +} /* * On AArch64, the cache coherency is handled via the set_pte_at() function. @@ -114,7 +134,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, * set_pte() does not have a DSB, so make sure that the page table * write is visible. 
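+ * A dsb(ishst) is enough here: only the page table store needs to be + * ordered, and only against observers in the inner shareable domain.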
*/ - dsb(); + dsb(ishst); } #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h index 0172e6d76bf3..7ebcd31ce51c 100644 --- a/arch/arm64/include/asm/topology.h +++ b/arch/arm64/include/asm/topology.h @@ -20,9 +20,6 @@ extern struct cpu_topology cpu_topology[NR_CPUS]; #define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling) #define topology_thread_cpumask(cpu) (&cpu_topology[cpu].thread_sibling) -#define mc_capable() (cpu_topology[0].cluster_id != -1) -#define smt_capable() (cpu_topology[0].thread_id != -1) - void init_cpu_topology(void); void store_cpu_topology(unsigned int cpuid); const struct cpumask *cpu_coregroup_mask(int cpu); diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h index a4654c656a1e..e5f47df00c24 100644 --- a/arch/arm64/include/asm/unistd.h +++ b/arch/arm64/include/asm/unistd.h @@ -29,3 +29,5 @@ #endif #define __ARCH_WANT_SYS_CLONE #include <uapi/asm/unistd.h> + +#define NR_syscalls (__NR_syscalls) diff --git a/arch/arm64/include/uapi/asm/sigcontext.h b/arch/arm64/include/uapi/asm/sigcontext.h index 690ad51cc901..b72cf405b3fe 100644 --- a/arch/arm64/include/uapi/asm/sigcontext.h +++ b/arch/arm64/include/uapi/asm/sigcontext.h @@ -53,5 +53,12 @@ struct fpsimd_context { __uint128_t vregs[32]; }; +/* ESR_EL1 context */ +#define ESR_MAGIC 0x45535201 + +struct esr_context { + struct _aarch64_ctx head; + u64 esr; +}; #endif /* _UAPI__ASM_SIGCONTEXT_H */ diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 7a6fce5167e9..cdaedad3afe5 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -4,15 +4,22 @@ CPPFLAGS_vmlinux.lds := -DTEXT_OFFSET=$(TEXT_OFFSET) AFLAGS_head.o := -DTEXT_OFFSET=$(TEXT_OFFSET) +CFLAGS_efi-stub.o := -DTEXT_OFFSET=$(TEXT_OFFSET) \ + -I$(src)/../../../scripts/dtc/libfdt + +CFLAGS_REMOVE_ftrace.o = -pg +CFLAGS_REMOVE_insn.o = -pg +CFLAGS_REMOVE_return_address.o = -pg # Object file lists. 
arm64-obj-y := cputable.o debug-monitors.o entry.o irq.o fpsimd.o \ entry-fpsimd.o process.o ptrace.o setup.o signal.o \ sys.o stacktrace.o time.o traps.o io.o vdso.o \ - hyp-stub.o psci.o cpu_ops.o insn.o + hyp-stub.o psci.o cpu_ops.o insn.o return_address.o arm64-obj-$(CONFIG_COMPAT) += sys32.o kuser32.o signal32.o \ sys_compat.o +arm64-obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o arm64-obj-$(CONFIG_MODULES) += arm64ksyms.o module.o arm64-obj-$(CONFIG_SMP) += smp.o smp_spin_table.o topology.o arm64-obj-$(CONFIG_PERF_EVENTS) += perf_regs.o @@ -21,6 +28,7 @@ arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o arm64-obj-$(CONFIG_ARM64_CPU_SUSPEND) += sleep.o suspend.o arm64-obj-$(CONFIG_JUMP_LABEL) += jump_label.o arm64-obj-$(CONFIG_KGDB) += kgdb.o +arm64-obj-$(CONFIG_EFI) += efi.o efi-stub.o efi-entry.o obj-y += $(arm64-obj-y) vdso/ obj-m += $(arm64-obj-m) diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c index 338b568cd8ae..a85843ddbde8 100644 --- a/arch/arm64/kernel/arm64ksyms.c +++ b/arch/arm64/kernel/arm64ksyms.c @@ -44,10 +44,15 @@ EXPORT_SYMBOL(memstart_addr); /* string / mem functions */ EXPORT_SYMBOL(strchr); EXPORT_SYMBOL(strrchr); +EXPORT_SYMBOL(strcmp); +EXPORT_SYMBOL(strncmp); +EXPORT_SYMBOL(strlen); +EXPORT_SYMBOL(strnlen); EXPORT_SYMBOL(memset); EXPORT_SYMBOL(memcpy); EXPORT_SYMBOL(memmove); EXPORT_SYMBOL(memchr); +EXPORT_SYMBOL(memcmp); /* atomic bitops */ EXPORT_SYMBOL(set_bit); @@ -56,3 +61,7 @@ EXPORT_SYMBOL(clear_bit); EXPORT_SYMBOL(test_and_clear_bit); EXPORT_SYMBOL(change_bit); EXPORT_SYMBOL(test_and_change_bit); + +#ifdef CONFIG_FUNCTION_TRACER +EXPORT_SYMBOL(_mcount); +#endif diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S new file mode 100644 index 000000000000..66716c9b9e5f --- /dev/null +++ b/arch/arm64/kernel/efi-entry.S @@ -0,0 +1,109 @@ +/* + * EFI entry point. + * + * Copyright (C) 2013, 2014 Red Hat, Inc. + * Author: Mark Salter <msalter@redhat.com> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ +#include <linux/linkage.h> +#include <linux/init.h> + +#include <asm/assembler.h> + +#define EFI_LOAD_ERROR 0x8000000000000001 + + __INIT + + /* + * We arrive here from the EFI boot manager with: + * + * * CPU in little-endian mode + * * MMU on with identity-mapped RAM + * * Icache and Dcache on + * + * We will most likely be running from some place other than where + * we want to be. The kernel image wants to be placed at TEXT_OFFSET + * from start of RAM. + */ +ENTRY(efi_stub_entry) + /* + * Create a stack frame to save FP/LR with extra space + * for image_addr variable passed to efi_entry(). + */ + stp x29, x30, [sp, #-32]! + + /* + * Call efi_entry to do the real work. + * x0 and x1 are already set up by firmware. Current runtime + * address of image is calculated and passed via *image_addr. + * + * unsigned long efi_entry(void *handle, + * efi_system_table_t *sys_table, + * unsigned long *image_addr) ; + */ + adrp x8, _text + add x8, x8, #:lo12:_text + add x2, sp, 16 + str x8, [x2] + bl efi_entry + cmn x0, #1 + b.eq efi_load_fail + + /* + * efi_entry() will have relocated the kernel image if necessary + * and we return here with device tree address in x0 and the kernel + * entry point stored at *image_addr. Save those values in registers + * which are callee preserved. 
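+ * x19-x28 are callee saved in the AArch64 procedure call standard, so x20 + * and x21 survive the call to __flush_dcache_area below.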
+ */ + mov x20, x0 // DTB address + ldr x0, [sp, #16] // relocated _text address + mov x21, x0 + + /* + * Flush dcache covering current runtime addresses + * of kernel text/data. Then flush all of icache. + */ + adrp x1, _text + add x1, x1, #:lo12:_text + adrp x2, _edata + add x2, x2, #:lo12:_edata + sub x1, x2, x1 + + bl __flush_dcache_area + ic ialluis + + /* Turn off Dcache and MMU */ + mrs x0, CurrentEL + cmp x0, #PSR_MODE_EL2t + ccmp x0, #PSR_MODE_EL2h, #0x4, ne + b.ne 1f + mrs x0, sctlr_el2 + bic x0, x0, #1 << 0 // clear SCTLR.M + bic x0, x0, #1 << 2 // clear SCTLR.C + msr sctlr_el2, x0 + isb + b 2f +1: + mrs x0, sctlr_el1 + bic x0, x0, #1 << 0 // clear SCTLR.M + bic x0, x0, #1 << 2 // clear SCTLR.C + msr sctlr_el1, x0 + isb +2: + /* Jump to kernel entry point */ + mov x0, x20 + mov x1, xzr + mov x2, xzr + mov x3, xzr + br x21 + +efi_load_fail: + mov x0, #EFI_LOAD_ERROR + ldp x29, x30, [sp], #32 + ret + +ENDPROC(efi_stub_entry) diff --git a/arch/arm64/kernel/efi-stub.c b/arch/arm64/kernel/efi-stub.c new file mode 100644 index 000000000000..60e98a639ac5 --- /dev/null +++ b/arch/arm64/kernel/efi-stub.c @@ -0,0 +1,81 @@ +/* + * Copyright (C) 2013, 2014 Linaro Ltd; <roy.franz@linaro.org> + * + * This file implements the EFI boot stub for the arm64 kernel. + * Adapted from ARM version by Mark Salter <msalter@redhat.com> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ +#include <linux/efi.h> +#include <linux/libfdt.h> +#include <asm/sections.h> +#include <generated/compile.h> +#include <generated/utsrelease.h> + +/* + * AArch64 requires the DTB to be 8-byte aligned in the first 512MiB from + * start of kernel and may not cross a 2MiB boundary. We set alignment to + * 2MiB so we know it won't cross a 2MiB boundary. + */ +#define EFI_FDT_ALIGN SZ_2M /* used by allocate_new_fdt_and_exit_boot() */ +#define MAX_FDT_OFFSET SZ_512M + +#define efi_call_early(f, ...) sys_table_arg->boottime->f(__VA_ARGS__) + +static void efi_char16_printk(efi_system_table_t *sys_table_arg, + efi_char16_t *str); + +static efi_status_t efi_open_volume(efi_system_table_t *sys_table, + void *__image, void **__fh); +static efi_status_t efi_file_close(void *handle); + +static efi_status_t +efi_file_read(void *handle, unsigned long *size, void *addr); + +static efi_status_t +efi_file_size(efi_system_table_t *sys_table, void *__fh, + efi_char16_t *filename_16, void **handle, u64 *file_sz); + +/* Include shared EFI stub code */ +#include "../../../drivers/firmware/efi/efi-stub-helper.c" +#include "../../../drivers/firmware/efi/fdt.c" +#include "../../../drivers/firmware/efi/arm-stub.c" + + +static efi_status_t handle_kernel_image(efi_system_table_t *sys_table, + unsigned long *image_addr, + unsigned long *image_size, + unsigned long *reserve_addr, + unsigned long *reserve_size, + unsigned long dram_base, + efi_loaded_image_t *image) +{ + efi_status_t status; + unsigned long kernel_size, kernel_memsize = 0; + + /* Relocate the image, if required. 
*/ + kernel_size = _edata - _text; + if (*image_addr != (dram_base + TEXT_OFFSET)) { + kernel_memsize = kernel_size + (_end - _edata); + status = efi_relocate_kernel(sys_table, image_addr, + kernel_size, kernel_memsize, + dram_base + TEXT_OFFSET, + PAGE_SIZE); + if (status != EFI_SUCCESS) { + pr_efi_err(sys_table, "Failed to relocate kernel\n"); + return status; + } + if (*image_addr != (dram_base + TEXT_OFFSET)) { + pr_efi_err(sys_table, "Failed to alloc kernel memory\n"); + efi_free(sys_table, kernel_memsize, *image_addr); + return EFI_ERROR; + } + *image_size = kernel_memsize; + } + + + return EFI_SUCCESS; +} diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c new file mode 100644 index 000000000000..14db1f6e8d7f --- /dev/null +++ b/arch/arm64/kernel/efi.c @@ -0,0 +1,469 @@ +/* + * Extensible Firmware Interface + * + * Based on Extensible Firmware Interface Specification version 2.4 + * + * Copyright (C) 2013, 2014 Linaro Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include <linux/efi.h> +#include <linux/export.h> +#include <linux/memblock.h> +#include <linux/bootmem.h> +#include <linux/of.h> +#include <linux/of_fdt.h> +#include <linux/sched.h> +#include <linux/slab.h> + +#include <asm/cacheflush.h> +#include <asm/efi.h> +#include <asm/tlbflush.h> +#include <asm/mmu_context.h> + +struct efi_memory_map memmap; + +static efi_runtime_services_t *runtime; + +static u64 efi_system_table; + +static int uefi_debug __initdata; +static int __init uefi_debug_setup(char *str) +{ + uefi_debug = 1; + + return 0; +} +early_param("uefi_debug", uefi_debug_setup); + +static int __init is_normal_ram(efi_memory_desc_t *md) +{ + if (md->attribute & EFI_MEMORY_WB) + return 1; + return 0; +} + +static void __init efi_setup_idmap(void) +{ + struct memblock_region *r; + efi_memory_desc_t *md; + u64 paddr, npages, size; + + for_each_memblock(memory, r) + create_id_mapping(r->base, r->size, 0); + + /* map runtime io spaces */ + for_each_efi_memory_desc(&memmap, md) { + if (!(md->attribute & EFI_MEMORY_RUNTIME) || is_normal_ram(md)) + continue; + paddr = md->phys_addr; + npages = md->num_pages; + memrange_efi_to_native(&paddr, &npages); + size = npages << PAGE_SHIFT; + create_id_mapping(paddr, size, 1); + } +} + +static int __init uefi_init(void) +{ + efi_char16_t *c16; + char vendor[100] = "unknown"; + int i, retval; + + efi.systab = early_memremap(efi_system_table, + sizeof(efi_system_table_t)); + if (efi.systab == NULL) { + pr_warn("Unable to map EFI system table.\n"); + return -ENOMEM; + } + + set_bit(EFI_BOOT, &efi.flags); + set_bit(EFI_64BIT, &efi.flags); + + /* + * Verify the EFI Table + */ + if (efi.systab->hdr.signature != EFI_SYSTEM_TABLE_SIGNATURE) { + pr_err("System table signature incorrect\n"); + return -EINVAL; + } + if ((efi.systab->hdr.revision >> 16) < 2) + pr_warn("Warning: EFI system table version %d.%02d, expected 2.00 or greater\n", + efi.systab->hdr.revision >> 16, + efi.systab->hdr.revision & 0xffff); + + /* Show what we know for posterity */ + c16 = early_memremap(efi.systab->fw_vendor, + sizeof(vendor)); + if (c16) { + for (i = 0; i < (int) sizeof(vendor) - 1 && *c16; ++i) + vendor[i] = c16[i]; + vendor[i] = '\0'; + } + + pr_info("EFI v%u.%.02u by %s\n", + efi.systab->hdr.revision >> 16, + efi.systab->hdr.revision & 0xffff, vendor); + + retval = efi_config_init(NULL); + if (retval == 0) + 
set_bit(EFI_CONFIG_TABLES, &efi.flags); + + early_memunmap(c16, sizeof(vendor)); + early_memunmap(efi.systab, sizeof(efi_system_table_t)); + + return retval; +} + +static __initdata char memory_type_name[][32] = { + {"Reserved"}, + {"Loader Code"}, + {"Loader Data"}, + {"Boot Code"}, + {"Boot Data"}, + {"Runtime Code"}, + {"Runtime Data"}, + {"Conventional Memory"}, + {"Unusable Memory"}, + {"ACPI Reclaim Memory"}, + {"ACPI Memory NVS"}, + {"Memory Mapped I/O"}, + {"MMIO Port Space"}, + {"PAL Code"}, +}; + +/* + * Return true for RAM regions we want to permanently reserve. + */ +static __init int is_reserve_region(efi_memory_desc_t *md) +{ + if (!is_normal_ram(md)) + return 0; + + if (md->attribute & EFI_MEMORY_RUNTIME) + return 1; + + if (md->type == EFI_ACPI_RECLAIM_MEMORY || + md->type == EFI_RESERVED_TYPE) + return 1; + + return 0; +} + +static __init void reserve_regions(void) +{ + efi_memory_desc_t *md; + u64 paddr, npages, size; + + if (uefi_debug) + pr_info("Processing EFI memory map:\n"); + + for_each_efi_memory_desc(&memmap, md) { + paddr = md->phys_addr; + npages = md->num_pages; + + if (uefi_debug) + pr_info(" 0x%012llx-0x%012llx [%s]", + paddr, paddr + (npages << EFI_PAGE_SHIFT) - 1, + memory_type_name[md->type]); + + memrange_efi_to_native(&paddr, &npages); + size = npages << PAGE_SHIFT; + + if (is_normal_ram(md)) + early_init_dt_add_memory_arch(paddr, size); + + if (is_reserve_region(md) || + md->type == EFI_BOOT_SERVICES_CODE || + md->type == EFI_BOOT_SERVICES_DATA) { + memblock_reserve(paddr, size); + if (uefi_debug) + pr_cont("*"); + } + + if (uefi_debug) + pr_cont("\n"); + } +} + + +static u64 __init free_one_region(u64 start, u64 end) +{ + u64 size = end - start; + + if (uefi_debug) + pr_info(" EFI freeing: 0x%012llx-0x%012llx\n", start, end - 1); + + free_bootmem_late(start, size); + return size; +} + +static u64 __init free_region(u64 start, u64 end) +{ + u64 map_start, map_end, total = 0; + + if (end <= start) + return total; + + map_start = (u64)memmap.phys_map; + map_end = PAGE_ALIGN(map_start + (memmap.map_end - memmap.map)); + map_start &= PAGE_MASK; + + if (start < map_end && end > map_start) { + /* region overlaps UEFI memmap */ + if (start < map_start) + total += free_one_region(start, map_start); + + if (map_end < end) + total += free_one_region(map_end, end); + } else + total += free_one_region(start, end); + + return total; +} + +static void __init free_boot_services(void) +{ + u64 total_freed = 0; + u64 keep_end, free_start, free_end; + efi_memory_desc_t *md; + + /* + * If kernel uses larger pages than UEFI, we have to be careful + * not to inadvertantly free memory we want to keep if there is + * overlap at the kernel page size alignment. We do not want to + * free is_reserve_region() memory nor the UEFI memmap itself. + * + * The memory map is sorted, so we keep track of the end of + * any previous region we want to keep, remember any region + * we want to free and defer freeing it until we encounter + * the next region we want to keep. This way, before freeing + * it, we can clip it as needed to avoid freeing memory we + * want to keep for UEFI. + */ + + keep_end = 0; + free_start = 0; + + for_each_efi_memory_desc(&memmap, md) { + u64 paddr, npages, size; + + if (is_reserve_region(md)) { + /* + * We don't want to free any memory from this region. 
+ */ + if (free_start) { + /* adjust free_end then free region */ + if (free_end > md->phys_addr) + free_end -= PAGE_SIZE; + total_freed += free_region(free_start, free_end); + free_start = 0; + } + keep_end = md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT); + continue; + } + + if (md->type != EFI_BOOT_SERVICES_CODE && + md->type != EFI_BOOT_SERVICES_DATA) { + /* no need to free this region */ + continue; + } + + /* + * We want to free memory from this region. + */ + paddr = md->phys_addr; + npages = md->num_pages; + memrange_efi_to_native(&paddr, &npages); + size = npages << PAGE_SHIFT; + + if (free_start) { + if (paddr <= free_end) + free_end = paddr + size; + else { + total_freed += free_region(free_start, free_end); + free_start = paddr; + free_end = paddr + size; + } + } else { + free_start = paddr; + free_end = paddr + size; + } + if (free_start < keep_end) { + free_start += PAGE_SIZE; + if (free_start >= free_end) + free_start = 0; + } + } + if (free_start) + total_freed += free_region(free_start, free_end); + + if (total_freed) + pr_info("Freed 0x%llx bytes of EFI boot services memory", + total_freed); +} + +void __init efi_init(void) +{ + struct efi_fdt_params params; + + /* Grab UEFI information placed in FDT by stub */ + if (!efi_get_fdt_params(¶ms, uefi_debug)) + return; + + efi_system_table = params.system_table; + + memblock_reserve(params.mmap & PAGE_MASK, + PAGE_ALIGN(params.mmap_size + (params.mmap & ~PAGE_MASK))); + memmap.phys_map = (void *)params.mmap; + memmap.map = early_memremap(params.mmap, params.mmap_size); + memmap.map_end = memmap.map + params.mmap_size; + memmap.desc_size = params.desc_size; + memmap.desc_version = params.desc_ver; + + if (uefi_init() < 0) + return; + + reserve_regions(); +} + +void __init efi_idmap_init(void) +{ + if (!efi_enabled(EFI_BOOT)) + return; + + /* boot time idmap_pg_dir is incomplete, so fill in missing parts */ + efi_setup_idmap(); +} + +static int __init remap_region(efi_memory_desc_t *md, void **new) +{ + u64 paddr, vaddr, npages, size; + + paddr = md->phys_addr; + npages = md->num_pages; + memrange_efi_to_native(&paddr, &npages); + size = npages << PAGE_SHIFT; + + if (is_normal_ram(md)) + vaddr = (__force u64)ioremap_cache(paddr, size); + else + vaddr = (__force u64)ioremap(paddr, size); + + if (!vaddr) { + pr_err("Unable to remap 0x%llx pages @ %p\n", + npages, (void *)paddr); + return 0; + } + + /* adjust for any rounding when EFI and system pagesize differs */ + md->virt_addr = vaddr + (md->phys_addr - paddr); + + if (uefi_debug) + pr_info(" EFI remap 0x%012llx => %p\n", + md->phys_addr, (void *)md->virt_addr); + + memcpy(*new, md, memmap.desc_size); + *new += memmap.desc_size; + + return 1; +} + +/* + * Switch UEFI from an identity map to a kernel virtual map + */ +static int __init arm64_enter_virtual_mode(void) +{ + efi_memory_desc_t *md; + phys_addr_t virtmap_phys; + void *virtmap, *virt_md; + efi_status_t status; + u64 mapsize; + int count = 0; + unsigned long flags; + + if (!efi_enabled(EFI_BOOT)) { + pr_info("EFI services will not be available.\n"); + return -1; + } + + pr_info("Remapping and enabling EFI services.\n"); + + /* replace early memmap mapping with permanent mapping */ + mapsize = memmap.map_end - memmap.map; + early_memunmap(memmap.map, mapsize); + memmap.map = (__force void *)ioremap_cache((phys_addr_t)memmap.phys_map, + mapsize); + memmap.map_end = memmap.map + mapsize; + + efi.memmap = &memmap; + + /* Map the runtime regions */ + virtmap = kmalloc(mapsize, GFP_KERNEL); + if (!virtmap) { + 
pr_err("Failed to allocate EFI virtual memmap\n"); + return -1; + } + virtmap_phys = virt_to_phys(virtmap); + virt_md = virtmap; + + for_each_efi_memory_desc(&memmap, md) { + if (!(md->attribute & EFI_MEMORY_RUNTIME)) + continue; + if (remap_region(md, &virt_md)) + ++count; + } + + efi.systab = (__force void *)efi_lookup_mapped_addr(efi_system_table); + if (efi.systab) + set_bit(EFI_SYSTEM_TABLES, &efi.flags); + + local_irq_save(flags); + cpu_switch_mm(idmap_pg_dir, &init_mm); + + /* Call SetVirtualAddressMap with the physical address of the map */ + runtime = efi.systab->runtime; + efi.set_virtual_address_map = runtime->set_virtual_address_map; + + status = efi.set_virtual_address_map(count * memmap.desc_size, + memmap.desc_size, + memmap.desc_version, + (efi_memory_desc_t *)virtmap_phys); + cpu_set_reserved_ttbr0(); + flush_tlb_all(); + local_irq_restore(flags); + + kfree(virtmap); + + free_boot_services(); + + if (status != EFI_SUCCESS) { + pr_err("Failed to set EFI virtual address map! [%lx]\n", + status); + return -1; + } + + /* Set up runtime services function pointers */ + runtime = efi.systab->runtime; + efi.get_time = runtime->get_time; + efi.set_time = runtime->set_time; + efi.get_wakeup_time = runtime->get_wakeup_time; + efi.set_wakeup_time = runtime->set_wakeup_time; + efi.get_variable = runtime->get_variable; + efi.get_next_variable = runtime->get_next_variable; + efi.set_variable = runtime->set_variable; + efi.query_variable_info = runtime->query_variable_info; + efi.update_capsule = runtime->update_capsule; + efi.query_capsule_caps = runtime->query_capsule_caps; + efi.get_next_high_mono_count = runtime->get_next_high_mono_count; + efi.reset_system = runtime->reset_system; + + set_bit(EFI_RUNTIME_SERVICES, &efi.flags); + + return 0; +} +early_initcall(arm64_enter_virtual_mode); diff --git a/arch/arm64/kernel/entry-fpsimd.S b/arch/arm64/kernel/entry-fpsimd.S index 6a27cd6dbfa6..d358ccacfc00 100644 --- a/arch/arm64/kernel/entry-fpsimd.S +++ b/arch/arm64/kernel/entry-fpsimd.S @@ -41,3 +41,27 @@ ENTRY(fpsimd_load_state) fpsimd_restore x0, 8 ret ENDPROC(fpsimd_load_state) + +#ifdef CONFIG_KERNEL_MODE_NEON + +/* + * Save the bottom n FP registers. + * + * x0 - pointer to struct fpsimd_partial_state + */ +ENTRY(fpsimd_save_partial_state) + fpsimd_save_partial x0, 1, 8, 9 + ret +ENDPROC(fpsimd_load_partial_state) + +/* + * Load the bottom n FP registers. + * + * x0 - pointer to struct fpsimd_partial_state + */ +ENTRY(fpsimd_load_partial_state) + fpsimd_restore_partial x0, 8, 9 + ret +ENDPROC(fpsimd_load_partial_state) + +#endif diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S new file mode 100644 index 000000000000..b051871f2965 --- /dev/null +++ b/arch/arm64/kernel/entry-ftrace.S @@ -0,0 +1,218 @@ +/* + * arch/arm64/kernel/entry-ftrace.S + * + * Copyright (C) 2013 Linaro Limited + * Author: AKASHI Takahiro <takahiro.akashi@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <linux/linkage.h> +#include <asm/ftrace.h> +#include <asm/insn.h> + +/* + * Gcc with -pg will put the following code in the beginning of each function: + * mov x0, x30 + * bl _mcount + * [function's body ...] + * "bl _mcount" may be replaced to "bl ftrace_caller" or NOP if dynamic + * ftrace is enabled. 
+ * + * Please note that x0 as an argument will not be used here because we can + * get lr(x30) of instrumented function at any time by winding up call stack + * as long as the kernel is compiled without -fomit-frame-pointer. + * (or CONFIG_FRAME_POINTER, this is forced on arm64) + * + * stack layout after mcount_enter in _mcount(): + * + * current sp/fp => 0:+-----+ + * in _mcount() | x29 | -> instrumented function's fp + * +-----+ + * | x30 | -> _mcount()'s lr (= instrumented function's pc) + * old sp => +16:+-----+ + * when instrumented | | + * function calls | ... | + * _mcount() | | + * | | + * instrumented => +xx:+-----+ + * function's fp | x29 | -> parent's fp + * +-----+ + * | x30 | -> instrumented function's lr (= parent's pc) + * +-----+ + * | ... | + */ + + .macro mcount_enter + stp x29, x30, [sp, #-16]! + mov x29, sp + .endm + + .macro mcount_exit + ldp x29, x30, [sp], #16 + ret + .endm + + .macro mcount_adjust_addr rd, rn + sub \rd, \rn, #AARCH64_INSN_SIZE + .endm + + /* for instrumented function's parent */ + .macro mcount_get_parent_fp reg + ldr \reg, [x29] + ldr \reg, [\reg] + .endm + + /* for instrumented function */ + .macro mcount_get_pc0 reg + mcount_adjust_addr \reg, x30 + .endm + + .macro mcount_get_pc reg + ldr \reg, [x29, #8] + mcount_adjust_addr \reg, \reg + .endm + + .macro mcount_get_lr reg + ldr \reg, [x29] + ldr \reg, [\reg, #8] + mcount_adjust_addr \reg, \reg + .endm + + .macro mcount_get_lr_addr reg + ldr \reg, [x29] + add \reg, \reg, #8 + .endm + +#ifndef CONFIG_DYNAMIC_FTRACE +/* + * void _mcount(unsigned long return_address) + * @return_address: return address to instrumented function + * + * This function makes calls, if enabled, to: + * - tracer function to probe instrumented function's entry, + * - ftrace_graph_caller to set up an exit hook + */ +ENTRY(_mcount) +#ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST + ldr x0, =ftrace_trace_stop + ldr x0, [x0] // if ftrace_trace_stop + ret // return; +#endif + mcount_enter + + ldr x0, =ftrace_trace_function + ldr x2, [x0] + adr x0, ftrace_stub + cmp x0, x2 // if (ftrace_trace_function + b.eq skip_ftrace_call // != ftrace_stub) { + + mcount_get_pc x0 // function's pc + mcount_get_lr x1 // function's lr (= parent's pc) + blr x2 // (*ftrace_trace_function)(pc, lr); + +#ifndef CONFIG_FUNCTION_GRAPH_TRACER +skip_ftrace_call: // return; + mcount_exit // } +#else + mcount_exit // return; + // } +skip_ftrace_call: + ldr x1, =ftrace_graph_return + ldr x2, [x1] // if ((ftrace_graph_return + cmp x0, x2 // != ftrace_stub) + b.ne ftrace_graph_caller + + ldr x1, =ftrace_graph_entry // || (ftrace_graph_entry + ldr x2, [x1] // != ftrace_graph_entry_stub)) + ldr x0, =ftrace_graph_entry_stub + cmp x0, x2 + b.ne ftrace_graph_caller // ftrace_graph_caller(); + + mcount_exit +#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ +ENDPROC(_mcount) + +#else /* CONFIG_DYNAMIC_FTRACE */ +/* + * _mcount() is used to build the kernel with -pg option, but all the branch + * instructions to _mcount() are replaced to NOP initially at kernel start up, + * and later on, NOP to branch to ftrace_caller() when enabled or branch to + * NOP when disabled per-function base. 
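The stack-layout comment above boils down to the standard AArch64 frame record: x29 points at a pair of {caller's x29, saved x30}, which is exactly what mcount_get_parent_fp/mcount_get_lr pick apart and what the new return_address() helper walks further down. A small user-space model of that walk, illustrative only and assuming frame pointers are kept (the kernel side relies on CONFIG_FRAME_POINTER being forced on arm64):

  #include <stdio.h>

  /* An AArch64 frame record, as in the layout comment above. */
  struct frame_record {
          struct frame_record *fp;        /* caller's x29 */
          unsigned long lr;               /* saved x30, i.e. return address */
  };

  /* Return address 'level' frames up; level 0 is the immediate caller. */
  static unsigned long return_address_at(struct frame_record *fp,
                                         unsigned int level)
  {
          while (level-- && fp)
                  fp = fp->fp;
          return fp ? fp->lr : 0;
  }

  int main(void)
  {
          struct frame_record *fp = __builtin_frame_address(0);

          /* With frame pointers enabled this prints main()'s caller. */
          printf("caller: %p\n", (void *)return_address_at(fp, 0));
          return 0;
  }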
+ */ +ENTRY(_mcount) + ret +ENDPROC(_mcount) + +/* + * void ftrace_caller(unsigned long return_address) + * @return_address: return address to instrumented function + * + * This function is a counterpart of _mcount() in 'static' ftrace, and + * makes calls to: + * - tracer function to probe instrumented function's entry, + * - ftrace_graph_caller to set up an exit hook + */ +ENTRY(ftrace_caller) + mcount_enter + + mcount_get_pc0 x0 // function's pc + mcount_get_lr x1 // function's lr + + .global ftrace_call +ftrace_call: // tracer(pc, lr); + nop // This will be replaced with "bl xxx" + // where xxx can be any kind of tracer. + +#ifdef CONFIG_FUNCTION_GRAPH_TRACER + .global ftrace_graph_call +ftrace_graph_call: // ftrace_graph_caller(); + nop // If enabled, this will be replaced + // "b ftrace_graph_caller" +#endif + + mcount_exit +ENDPROC(ftrace_caller) +#endif /* CONFIG_DYNAMIC_FTRACE */ + +ENTRY(ftrace_stub) + ret +ENDPROC(ftrace_stub) + +#ifdef CONFIG_FUNCTION_GRAPH_TRACER +/* + * void ftrace_graph_caller(void) + * + * Called from _mcount() or ftrace_caller() when function_graph tracer is + * selected. + * This function w/ prepare_ftrace_return() fakes link register's value on + * the call stack in order to intercept instrumented function's return path + * and run return_to_handler() later on its exit. + */ +ENTRY(ftrace_graph_caller) + mcount_get_lr_addr x0 // pointer to function's saved lr + mcount_get_pc x1 // function's pc + mcount_get_parent_fp x2 // parent's fp + bl prepare_ftrace_return // prepare_ftrace_return(&lr, pc, fp) + + mcount_exit +ENDPROC(ftrace_graph_caller) + +/* + * void return_to_handler(void) + * + * Run ftrace_return_to_handler() before going back to parent. + * @fp is checked against the value passed by ftrace_graph_caller() + * only when CONFIG_FUNCTION_GRAPH_FP_TEST is enabled. + */ +ENTRY(return_to_handler) + str x0, [sp, #-16]! + mov x0, x29 // parent's fp + bl ftrace_return_to_handler// addr = ftrace_return_to_hander(fp); + mov x30, x0 // restore the original return address + ldr x0, [sp], #16 + ret +END(return_to_handler) +#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 39ac630d83de..bf017f4ffb4f 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -60,6 +60,9 @@ push x0, x1 .if \el == 0 mrs x21, sp_el0 + get_thread_info tsk // Ensure MDSCR_EL1.SS is clear, + ldr x19, [tsk, #TI_FLAGS] // since we can unmask debug + disable_step_tsk x19, x20 // exceptions when scheduling. 
.else add x21, sp, #S_FRAME_SIZE .endif @@ -259,7 +262,7 @@ el1_da: * Data abort handling */ mrs x0, far_el1 - enable_dbg_if_not_stepping x2 + enable_dbg // re-enable interrupts if they were enabled in the aborted context tbnz x23, #7, 1f // PSR_I_BIT enable_irq @@ -275,6 +278,7 @@ el1_sp_pc: * Stack or PC alignment exception handling */ mrs x0, far_el1 + enable_dbg mov x1, x25 mov x2, sp b do_sp_pc_abort @@ -282,6 +286,7 @@ el1_undef: /* * Undefined instruction */ + enable_dbg mov x0, sp b do_undefinstr el1_dbg: @@ -294,10 +299,11 @@ el1_dbg: mrs x0, far_el1 mov x2, sp // struct pt_regs bl do_debug_exception - + enable_dbg kernel_exit 1 el1_inv: // TODO: add support for undefined instructions in kernel mode + enable_dbg mov x0, sp mov x1, #BAD_SYNC mrs x2, esr_el1 @@ -307,7 +313,7 @@ ENDPROC(el1_sync) .align 6 el1_irq: kernel_entry 1 - enable_dbg_if_not_stepping x0 + enable_dbg #ifdef CONFIG_TRACE_IRQFLAGS bl trace_hardirqs_off #endif @@ -332,8 +338,7 @@ ENDPROC(el1_irq) #ifdef CONFIG_PREEMPT el1_preempt: mov x24, lr -1: enable_dbg - bl preempt_schedule_irq // irq en/disable is done inside +1: bl preempt_schedule_irq // irq en/disable is done inside ldr x0, [tsk, #TI_FLAGS] // get new tasks TI_FLAGS tbnz x0, #TIF_NEED_RESCHED, 1b // needs rescheduling? ret x24 @@ -349,7 +354,7 @@ el0_sync: lsr x24, x25, #ESR_EL1_EC_SHIFT // exception class cmp x24, #ESR_EL1_EC_SVC64 // SVC in 64-bit state b.eq el0_svc - adr lr, ret_from_exception + adr lr, ret_to_user cmp x24, #ESR_EL1_EC_DABT_EL0 // data abort in EL0 b.eq el0_da cmp x24, #ESR_EL1_EC_IABT_EL0 // instruction abort in EL0 @@ -378,7 +383,7 @@ el0_sync_compat: lsr x24, x25, #ESR_EL1_EC_SHIFT // exception class cmp x24, #ESR_EL1_EC_SVC32 // SVC in 32-bit state b.eq el0_svc_compat - adr lr, ret_from_exception + adr lr, ret_to_user cmp x24, #ESR_EL1_EC_DABT_EL0 // data abort in EL0 b.eq el0_da cmp x24, #ESR_EL1_EC_IABT_EL0 // instruction abort in EL0 @@ -423,11 +428,8 @@ el0_da: */ mrs x0, far_el1 bic x0, x0, #(0xff << 56) - disable_step x1 - isb - enable_dbg // enable interrupts before calling the main handler - enable_irq + enable_dbg_and_irq mov x1, x25 mov x2, sp b do_mem_abort @@ -436,11 +438,8 @@ el0_ia: * Instruction abort handling */ mrs x0, far_el1 - disable_step x1 - isb - enable_dbg // enable interrupts before calling the main handler - enable_irq + enable_dbg_and_irq orr x1, x25, #1 << 24 // use reserved ISS bit for instruction aborts mov x2, sp b do_mem_abort @@ -448,6 +447,7 @@ el0_fpsimd_acc: /* * Floating Point or Advanced SIMD access */ + enable_dbg mov x0, x25 mov x1, sp b do_fpsimd_acc @@ -455,6 +455,7 @@ el0_fpsimd_exc: /* * Floating Point or Advanced SIMD exception */ + enable_dbg mov x0, x25 mov x1, sp b do_fpsimd_exc @@ -463,11 +464,8 @@ el0_sp_pc: * Stack or PC alignment exception handling */ mrs x0, far_el1 - disable_step x1 - isb - enable_dbg // enable interrupts before calling the main handler - enable_irq + enable_dbg_and_irq mov x1, x25 mov x2, sp b do_sp_pc_abort @@ -475,9 +473,9 @@ el0_undef: /* * Undefined instruction */ - mov x0, sp // enable interrupts before calling the main handler - enable_irq + enable_dbg_and_irq + mov x0, sp b do_undefinstr el0_dbg: /* @@ -485,11 +483,13 @@ el0_dbg: */ tbnz x24, #0, el0_inv // EL0 only mrs x0, far_el1 - disable_step x1 mov x1, x25 mov x2, sp - b do_debug_exception + bl do_debug_exception + enable_dbg + b ret_to_user el0_inv: + enable_dbg mov x0, sp mov x1, #BAD_SYNC mrs x2, esr_el1 @@ -500,15 +500,12 @@ ENDPROC(el0_sync) el0_irq: kernel_entry 0 el0_irq_naked: - disable_step 
x1 - isb enable_dbg #ifdef CONFIG_TRACE_IRQFLAGS bl trace_hardirqs_off #endif irq_handler - get_thread_info tsk #ifdef CONFIG_TRACE_IRQFLAGS bl trace_hardirqs_on @@ -517,14 +514,6 @@ el0_irq_naked: ENDPROC(el0_irq) /* - * This is the return code to user mode for abort handlers - */ -ret_from_exception: - get_thread_info tsk - b ret_to_user -ENDPROC(ret_from_exception) - -/* * Register switch for AArch64. The callee-saved registers need to be saved * and restored. On entry: * x0 = previous task_struct (must be preserved across the switch) @@ -563,10 +552,7 @@ ret_fast_syscall: ldr x1, [tsk, #TI_FLAGS] and x2, x1, #_TIF_WORK_MASK cbnz x2, fast_work_pending - tbz x1, #TIF_SINGLESTEP, fast_exit - disable_dbg - enable_step x2 -fast_exit: + enable_step_tsk x1, x2 kernel_exit 0, ret = 1 /* @@ -576,7 +562,7 @@ fast_work_pending: str x0, [sp, #S_X0] // returned x0 work_pending: tbnz x1, #TIF_NEED_RESCHED, work_resched - /* TIF_SIGPENDING or TIF_NOTIFY_RESUME case */ + /* TIF_SIGPENDING, TIF_NOTIFY_RESUME or TIF_FOREIGN_FPSTATE case */ ldr x2, [sp, #S_PSTATE] mov x0, sp // 'regs' tst x2, #PSR_MODE_MASK // user mode regs? @@ -585,7 +571,6 @@ work_pending: bl do_notify_resume b ret_to_user work_resched: - enable_dbg bl schedule /* @@ -596,9 +581,7 @@ ret_to_user: ldr x1, [tsk, #TI_FLAGS] and x2, x1, #_TIF_WORK_MASK cbnz x2, work_pending - tbz x1, #TIF_SINGLESTEP, no_work_pending - disable_dbg - enable_step x2 + enable_step_tsk x1, x2 no_work_pending: kernel_exit 0, ret = 0 ENDPROC(ret_to_user) @@ -625,14 +608,11 @@ el0_svc: mov sc_nr, #__NR_syscalls el0_svc_naked: // compat entry point stp x0, scno, [sp, #S_ORIG_X0] // save the original x0 and syscall number - disable_step x16 - isb - enable_dbg - enable_irq + enable_dbg_and_irq - get_thread_info tsk - ldr x16, [tsk, #TI_FLAGS] // check for syscall tracing - tbnz x16, #TIF_SYSCALL_TRACE, __sys_trace // are we tracing syscalls? + ldr x16, [tsk, #TI_FLAGS] // check for syscall hooks + tst x16, #_TIF_SYSCALL_WORK + b.ne __sys_trace adr lr, ret_fast_syscall // return address cmp scno, sc_nr // check upper syscall limit b.hs ni_sys @@ -648,9 +628,8 @@ ENDPROC(el0_svc) * switches, and waiting for our parent to respond. */ __sys_trace: - mov x1, sp - mov w0, #0 // trace entry - bl syscall_trace + mov x0, sp + bl syscall_trace_enter adr lr, __sys_trace_return // return address uxtw scno, w0 // syscall number (possibly new) mov x1, sp // pointer to regs @@ -665,9 +644,8 @@ __sys_trace: __sys_trace_return: str x0, [sp] // save returned x0 - mov x1, sp - mov w0, #1 // trace exit - bl syscall_trace + mov x0, sp + bl syscall_trace_exit b ret_to_user /* diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 4aef42a04bdc..ad8aebb1cdef 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -35,6 +35,60 @@ #define FPEXC_IDF (1 << 7) /* + * In order to reduce the number of times the FPSIMD state is needlessly saved + * and restored, we need to keep track of two things: + * (a) for each task, we need to remember which CPU was the last one to have + * the task's FPSIMD state loaded into its FPSIMD registers; + * (b) for each CPU, we need to remember which task's userland FPSIMD state has + * been loaded into its FPSIMD registers most recently, or whether it has + * been used to perform kernel mode NEON in the meantime. + * + * For (a), we add a 'cpu' field to struct fpsimd_state, which gets updated to + * the id of the current CPU everytime the state is loaded onto a CPU. 
For (b), + * we add the per-cpu variable 'fpsimd_last_state' (below), which contains the + * address of the userland FPSIMD state of the task that was loaded onto the CPU + * the most recently, or NULL if kernel mode NEON has been performed after that. + * + * With this in place, we no longer have to restore the next FPSIMD state right + * when switching between tasks. Instead, we can defer this check to userland + * resume, at which time we verify whether the CPU's fpsimd_last_state and the + * task's fpsimd_state.cpu are still mutually in sync. If this is the case, we + * can omit the FPSIMD restore. + * + * As an optimization, we use the thread_info flag TIF_FOREIGN_FPSTATE to + * indicate whether or not the userland FPSIMD state of the current task is + * present in the registers. The flag is set unless the FPSIMD registers of this + * CPU currently contain the most recent userland FPSIMD state of the current + * task. + * + * For a certain task, the sequence may look something like this: + * - the task gets scheduled in; if both the task's fpsimd_state.cpu field + * contains the id of the current CPU, and the CPU's fpsimd_last_state per-cpu + * variable points to the task's fpsimd_state, the TIF_FOREIGN_FPSTATE flag is + * cleared, otherwise it is set; + * + * - the task returns to userland; if TIF_FOREIGN_FPSTATE is set, the task's + * userland FPSIMD state is copied from memory to the registers, the task's + * fpsimd_state.cpu field is set to the id of the current CPU, the current + * CPU's fpsimd_last_state pointer is set to this task's fpsimd_state and the + * TIF_FOREIGN_FPSTATE flag is cleared; + * + * - the task executes an ordinary syscall; upon return to userland, the + * TIF_FOREIGN_FPSTATE flag will still be cleared, so no FPSIMD state is + * restored; + * + * - the task executes a syscall which executes some NEON instructions; this is + * preceded by a call to kernel_neon_begin(), which copies the task's FPSIMD + * register contents to memory, clears the fpsimd_last_state per-cpu variable + * and sets the TIF_FOREIGN_FPSTATE flag; + * + * - the task gets preempted after kernel_neon_end() is called; as we have not + * returned from the 2nd syscall yet, TIF_FOREIGN_FPSTATE is still set so + * whatever is in the FPSIMD registers is not saved to memory, but discarded. + */ +static DEFINE_PER_CPU(struct fpsimd_state *, fpsimd_last_state); + +/* * Trapped FP/ASIMD access. */ void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs) @@ -72,43 +126,137 @@ void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs) void fpsimd_thread_switch(struct task_struct *next) { - /* check if not kernel threads */ - if (current->mm) + /* + * Save the current FPSIMD state to memory, but only if whatever is in + * the registers is in fact the most recent userland FPSIMD state of + * 'current'. + */ + if (current->mm && !test_thread_flag(TIF_FOREIGN_FPSTATE)) fpsimd_save_state(¤t->thread.fpsimd_state); - if (next->mm) - fpsimd_load_state(&next->thread.fpsimd_state); + + if (next->mm) { + /* + * If we are switching to a task whose most recent userland + * FPSIMD state is already in the registers of *this* cpu, + * we can skip loading the state from memory. Otherwise, set + * the TIF_FOREIGN_FPSTATE flag so the state will be loaded + * upon the next return to userland. 
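Further down in this fpsimd.c hunk, kernel_neon_begin() is reworked into kernel_neon_begin_partial(num_regs), backed by the fpsimd_save_partial_state/fpsimd_load_partial_state helpers added to entry-fpsimd.S earlier, so NEON becomes usable from (soft)irq context provided the caller only touches the bottom num_regs registers. A hypothetical caller might look like the sketch below; the assembly helper name is invented for illustration and the declarations are assumed to live in <asm/neon.h>:

  #include <asm/neon.h>
  #include <linux/types.h>

  /*
   * Hypothetical assembly helper that only ever uses v0-v3; a real user
   * must keep its NEON code within the registers it declared to
   * kernel_neon_begin_partial().
   */
  void xor_blocks_neon(u8 *dst, const u8 *src, size_t len);

  static void xor_blocks(u8 *dst, const u8 *src, size_t len)
  {
          /*
           * Only the bottom four FP/SIMD registers need preserving, so
           * the (soft)irq path can use the small per-cpu partial-state
           * buffers instead of a full 32-register save.
           */
          kernel_neon_begin_partial(4);
          xor_blocks_neon(dst, src, len);
          kernel_neon_end();
  }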
+ */ + struct fpsimd_state *st = &next->thread.fpsimd_state; + + if (__this_cpu_read(fpsimd_last_state) == st + && st->cpu == smp_processor_id()) + clear_ti_thread_flag(task_thread_info(next), + TIF_FOREIGN_FPSTATE); + else + set_ti_thread_flag(task_thread_info(next), + TIF_FOREIGN_FPSTATE); + } } void fpsimd_flush_thread(void) { - preempt_disable(); memset(¤t->thread.fpsimd_state, 0, sizeof(struct fpsimd_state)); - fpsimd_load_state(¤t->thread.fpsimd_state); + set_thread_flag(TIF_FOREIGN_FPSTATE); +} + +/* + * Save the userland FPSIMD state of 'current' to memory, but only if the state + * currently held in the registers does in fact belong to 'current' + */ +void fpsimd_preserve_current_state(void) +{ + preempt_disable(); + if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) + fpsimd_save_state(¤t->thread.fpsimd_state); + preempt_enable(); +} + +/* + * Load the userland FPSIMD state of 'current' from memory, but only if the + * FPSIMD state already held in the registers is /not/ the most recent FPSIMD + * state of 'current' + */ +void fpsimd_restore_current_state(void) +{ + preempt_disable(); + if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) { + struct fpsimd_state *st = ¤t->thread.fpsimd_state; + + fpsimd_load_state(st); + this_cpu_write(fpsimd_last_state, st); + st->cpu = smp_processor_id(); + } + preempt_enable(); +} + +/* + * Load an updated userland FPSIMD state for 'current' from memory and set the + * flag that indicates that the FPSIMD register contents are the most recent + * FPSIMD state of 'current' + */ +void fpsimd_update_current_state(struct fpsimd_state *state) +{ + preempt_disable(); + fpsimd_load_state(state); + if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) { + struct fpsimd_state *st = ¤t->thread.fpsimd_state; + + this_cpu_write(fpsimd_last_state, st); + st->cpu = smp_processor_id(); + } preempt_enable(); } +/* + * Invalidate live CPU copies of task t's FPSIMD state + */ +void fpsimd_flush_task_state(struct task_struct *t) +{ + t->thread.fpsimd_state.cpu = NR_CPUS; +} + #ifdef CONFIG_KERNEL_MODE_NEON +static DEFINE_PER_CPU(struct fpsimd_partial_state, hardirq_fpsimdstate); +static DEFINE_PER_CPU(struct fpsimd_partial_state, softirq_fpsimdstate); + /* * Kernel-side NEON support functions */ -void kernel_neon_begin(void) +void kernel_neon_begin_partial(u32 num_regs) { - /* Avoid using the NEON in interrupt context */ - BUG_ON(in_interrupt()); - preempt_disable(); + if (in_interrupt()) { + struct fpsimd_partial_state *s = this_cpu_ptr( + in_irq() ? &hardirq_fpsimdstate : &softirq_fpsimdstate); - if (current->mm) - fpsimd_save_state(¤t->thread.fpsimd_state); + BUG_ON(num_regs > 32); + fpsimd_save_partial_state(s, roundup(num_regs, 2)); + } else { + /* + * Save the userland FPSIMD state if we have one and if we + * haven't done so already. Clear fpsimd_last_state to indicate + * that there is no longer userland FPSIMD state in the + * registers. + */ + preempt_disable(); + if (current->mm && + !test_and_set_thread_flag(TIF_FOREIGN_FPSTATE)) + fpsimd_save_state(¤t->thread.fpsimd_state); + this_cpu_write(fpsimd_last_state, NULL); + } } -EXPORT_SYMBOL(kernel_neon_begin); +EXPORT_SYMBOL(kernel_neon_begin_partial); void kernel_neon_end(void) { - if (current->mm) - fpsimd_load_state(¤t->thread.fpsimd_state); - - preempt_enable(); + if (in_interrupt()) { + struct fpsimd_partial_state *s = this_cpu_ptr( + in_irq() ? 
&hardirq_fpsimdstate : &softirq_fpsimdstate); + fpsimd_load_partial_state(s); + } else { + preempt_enable(); + } } EXPORT_SYMBOL(kernel_neon_end); @@ -120,12 +268,12 @@ static int fpsimd_cpu_pm_notifier(struct notifier_block *self, { switch (cmd) { case CPU_PM_ENTER: - if (current->mm) + if (current->mm && !test_thread_flag(TIF_FOREIGN_FPSTATE)) fpsimd_save_state(¤t->thread.fpsimd_state); break; case CPU_PM_EXIT: if (current->mm) - fpsimd_load_state(¤t->thread.fpsimd_state); + set_thread_flag(TIF_FOREIGN_FPSTATE); break; case CPU_PM_ENTER_FAILED: default: diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c new file mode 100644 index 000000000000..7924d73b6476 --- /dev/null +++ b/arch/arm64/kernel/ftrace.c @@ -0,0 +1,176 @@ +/* + * arch/arm64/kernel/ftrace.c + * + * Copyright (C) 2013 Linaro Limited + * Author: AKASHI Takahiro <takahiro.akashi@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <linux/ftrace.h> +#include <linux/swab.h> +#include <linux/uaccess.h> + +#include <asm/cacheflush.h> +#include <asm/ftrace.h> +#include <asm/insn.h> + +#ifdef CONFIG_DYNAMIC_FTRACE +/* + * Replace a single instruction, which may be a branch or NOP. + * If @validate == true, a replaced instruction is checked against 'old'. + */ +static int ftrace_modify_code(unsigned long pc, u32 old, u32 new, + bool validate) +{ + u32 replaced; + + /* + * Note: + * Due to modules and __init, code can disappear and change, + * we need to protect against faulting as well as code changing. + * We do this by aarch64_insn_*() which use the probe_kernel_*(). + * + * No lock is held here because all the modifications are run + * through stop_machine(). + */ + if (validate) { + if (aarch64_insn_read((void *)pc, &replaced)) + return -EFAULT; + + if (replaced != old) + return -EINVAL; + } + if (aarch64_insn_patch_text_nosync((void *)pc, new)) + return -EPERM; + + return 0; +} + +/* + * Replace tracer function in ftrace_caller() + */ +int ftrace_update_ftrace_func(ftrace_func_t func) +{ + unsigned long pc; + u32 new; + + pc = (unsigned long)&ftrace_call; + new = aarch64_insn_gen_branch_imm(pc, (unsigned long)func, true); + + return ftrace_modify_code(pc, 0, new, false); +} + +/* + * Turn on the call to ftrace_caller() in instrumented function + */ +int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) +{ + unsigned long pc = rec->ip; + u32 old, new; + + old = aarch64_insn_gen_nop(); + new = aarch64_insn_gen_branch_imm(pc, addr, true); + + return ftrace_modify_code(pc, old, new, true); +} + +/* + * Turn off the call to ftrace_caller() in instrumented function + */ +int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, + unsigned long addr) +{ + unsigned long pc = rec->ip; + u32 old, new; + + old = aarch64_insn_gen_branch_imm(pc, addr, true); + new = aarch64_insn_gen_nop(); + + return ftrace_modify_code(pc, old, new, true); +} + +int __init ftrace_dyn_arch_init(void) +{ + return 0; +} +#endif /* CONFIG_DYNAMIC_FTRACE */ + +#ifdef CONFIG_FUNCTION_GRAPH_TRACER +/* + * function_graph tracer expects ftrace_return_to_handler() to be called + * on the way back to parent. For this purpose, this function is called + * in _mcount() or ftrace_caller() to replace return address (*parent) on + * the call stack to return_to_handler. + * + * Note that @frame_pointer is used only for sanity check later. 
+ */ +void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr, + unsigned long frame_pointer) +{ + unsigned long return_hooker = (unsigned long)&return_to_handler; + unsigned long old; + struct ftrace_graph_ent trace; + int err; + + if (unlikely(atomic_read(¤t->tracing_graph_pause))) + return; + + /* + * Note: + * No protection against faulting at *parent, which may be seen + * on other archs. It's unlikely on AArch64. + */ + old = *parent; + *parent = return_hooker; + + trace.func = self_addr; + trace.depth = current->curr_ret_stack + 1; + + /* Only trace if the calling function expects to */ + if (!ftrace_graph_entry(&trace)) { + *parent = old; + return; + } + + err = ftrace_push_return_trace(old, self_addr, &trace.depth, + frame_pointer); + if (err == -EBUSY) { + *parent = old; + return; + } +} + +#ifdef CONFIG_DYNAMIC_FTRACE +/* + * Turn on/off the call to ftrace_graph_caller() in ftrace_caller() + * depending on @enable. + */ +static int ftrace_modify_graph_caller(bool enable) +{ + unsigned long pc = (unsigned long)&ftrace_graph_call; + u32 branch, nop; + + branch = aarch64_insn_gen_branch_imm(pc, + (unsigned long)ftrace_graph_caller, false); + nop = aarch64_insn_gen_nop(); + + if (enable) + return ftrace_modify_code(pc, nop, branch, true); + else + return ftrace_modify_code(pc, branch, nop, true); +} + +int ftrace_enable_ftrace_graph_caller(void) +{ + return ftrace_modify_graph_caller(true); +} + +int ftrace_disable_ftrace_graph_caller(void) +{ + return ftrace_modify_graph_caller(false); +} +#endif /* CONFIG_DYNAMIC_FTRACE */ +#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 0fd565000772..a96d3a6a63f6 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -108,8 +108,18 @@ /* * DO NOT MODIFY. Image header expected by Linux boot-loaders. */ +#ifdef CONFIG_EFI +efi_head: + /* + * This add instruction has no meaningful effect except that + * its opcode forms the magic "MZ" signature required by UEFI. + */ + add x13, x18, #0x16 + b stext +#else b stext // branch to kernel start, magic .long 0 // reserved +#endif .quad TEXT_OFFSET // Image load offset from start of RAM .quad 0 // reserved .quad 0 // reserved @@ -120,7 +130,109 @@ .byte 0x52 .byte 0x4d .byte 0x64 +#ifdef CONFIG_EFI + .long pe_header - efi_head // Offset to the PE header. +#else .word 0 // reserved +#endif + +#ifdef CONFIG_EFI + .align 3 +pe_header: + .ascii "PE" + .short 0 +coff_header: + .short 0xaa64 // AArch64 + .short 2 // nr_sections + .long 0 // TimeDateStamp + .long 0 // PointerToSymbolTable + .long 1 // NumberOfSymbols + .short section_table - optional_header // SizeOfOptionalHeader + .short 0x206 // Characteristics. 
+ // IMAGE_FILE_DEBUG_STRIPPED | + // IMAGE_FILE_EXECUTABLE_IMAGE | + // IMAGE_FILE_LINE_NUMS_STRIPPED +optional_header: + .short 0x20b // PE32+ format + .byte 0x02 // MajorLinkerVersion + .byte 0x14 // MinorLinkerVersion + .long _edata - stext // SizeOfCode + .long 0 // SizeOfInitializedData + .long 0 // SizeOfUninitializedData + .long efi_stub_entry - efi_head // AddressOfEntryPoint + .long stext - efi_head // BaseOfCode + +extra_header_fields: + .quad 0 // ImageBase + .long 0x20 // SectionAlignment + .long 0x8 // FileAlignment + .short 0 // MajorOperatingSystemVersion + .short 0 // MinorOperatingSystemVersion + .short 0 // MajorImageVersion + .short 0 // MinorImageVersion + .short 0 // MajorSubsystemVersion + .short 0 // MinorSubsystemVersion + .long 0 // Win32VersionValue + + .long _edata - efi_head // SizeOfImage + + // Everything before the kernel image is considered part of the header + .long stext - efi_head // SizeOfHeaders + .long 0 // CheckSum + .short 0xa // Subsystem (EFI application) + .short 0 // DllCharacteristics + .quad 0 // SizeOfStackReserve + .quad 0 // SizeOfStackCommit + .quad 0 // SizeOfHeapReserve + .quad 0 // SizeOfHeapCommit + .long 0 // LoaderFlags + .long 0x6 // NumberOfRvaAndSizes + + .quad 0 // ExportTable + .quad 0 // ImportTable + .quad 0 // ResourceTable + .quad 0 // ExceptionTable + .quad 0 // CertificationTable + .quad 0 // BaseRelocationTable + + // Section table +section_table: + + /* + * The EFI application loader requires a relocation section + * because EFI applications must be relocatable. This is a + * dummy section as far as we are concerned. + */ + .ascii ".reloc" + .byte 0 + .byte 0 // end of 0 padding of section name + .long 0 + .long 0 + .long 0 // SizeOfRawData + .long 0 // PointerToRawData + .long 0 // PointerToRelocations + .long 0 // PointerToLineNumbers + .short 0 // NumberOfRelocations + .short 0 // NumberOfLineNumbers + .long 0x42100040 // Characteristics (section flags) + + + .ascii ".text" + .byte 0 + .byte 0 + .byte 0 // end of 0 padding of section name + .long _edata - stext // VirtualSize + .long stext - efi_head // VirtualAddress + .long _edata - stext // SizeOfRawData + .long stext - efi_head // PointerToRawData + + .long 0 // PointerToRelocations (0 for executables) + .long 0 // PointerToLineNumbers (0 for executables) + .short 0 // NumberOfRelocations (0 for executables) + .short 0 // NumberOfLineNumbers (0 for executables) + .long 0xe0500020 // Characteristics (section flags) + .align 5 +#endif ENTRY(stext) mov x21, x0 // x21=FDT @@ -230,11 +342,9 @@ ENTRY(set_cpu_boot_mode_flag) cmp w20, #BOOT_CPU_MODE_EL2 b.ne 1f add x1, x1, #4 -1: dc cvac, x1 // Clean potentially dirty cache line - dsb sy - str w20, [x1] // This CPU has booted in EL1 - dc civac, x1 // Clean&invalidate potentially stale cache line - dsb sy +1: str w20, [x1] // This CPU has booted in EL1 + dmb sy + dc ivac, x1 // Invalidate potentially stale cache line ret ENDPROC(set_cpu_boot_mode_flag) diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c index bee789757806..df1cf15377b4 100644 --- a/arch/arm64/kernel/hw_breakpoint.c +++ b/arch/arm64/kernel/hw_breakpoint.c @@ -20,6 +20,7 @@ #define pr_fmt(fmt) "hw-breakpoint: " fmt +#include <linux/compat.h> #include <linux/cpu_pm.h> #include <linux/errno.h> #include <linux/hw_breakpoint.h> @@ -27,7 +28,6 @@ #include <linux/ptrace.h> #include <linux/smp.h> -#include <asm/compat.h> #include <asm/current.h> #include <asm/debug-monitors.h> #include <asm/hw_breakpoint.h> diff --git 
a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 6391485f342d..43b7c34f92cb 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -20,6 +20,7 @@ #include <stdarg.h> +#include <linux/compat.h> #include <linux/export.h> #include <linux/sched.h> #include <linux/kernel.h> @@ -113,32 +114,62 @@ void arch_cpu_idle_dead(void) } #endif +/* + * Called by kexec, immediately prior to machine_kexec(). + * + * This must completely disable all secondary CPUs; simply causing those CPUs + * to execute e.g. a RAM-based pin loop is not sufficient. This allows the + * kexec'd kernel to use any and all RAM as it sees fit, without having to + * avoid any code or data used by any SW CPU pin loop. The CPU hotplug + * functionality embodied in disable_nonboot_cpus() to achieve this. + */ void machine_shutdown(void) { -#ifdef CONFIG_SMP - smp_send_stop(); -#endif + disable_nonboot_cpus(); } +/* + * Halting simply requires that the secondary CPUs stop performing any + * activity (executing tasks, handling interrupts). smp_send_stop() + * achieves this. + */ void machine_halt(void) { - machine_shutdown(); + local_irq_disable(); + smp_send_stop(); while (1); } +/* + * Power-off simply requires that the secondary CPUs stop performing any + * activity (executing tasks, handling interrupts). smp_send_stop() + * achieves this. When the system power is turned off, it will take all CPUs + * with it. + */ void machine_power_off(void) { - machine_shutdown(); + local_irq_disable(); + smp_send_stop(); if (pm_power_off) pm_power_off(); } +/* + * Restart requires that the secondary CPUs stop performing any activity + * while the primary CPU resets the system. Systems with a single CPU can + * use soft_restart() as their machine descriptor's .restart hook, since that + * will cause the only available CPU to reset. Systems with multiple CPUs must + * provide a HW restart implementation, to ensure that all CPUs reset at once. + * This is required so that any code running after reset on the primary CPU + * doesn't have to co-ordinate with other CPUs to ensure they aren't still + * executing pre-reset code, and using RAM that the primary CPU's code wishes + * to use. Implementing such co-ordination would be essentially impossible. + */ void machine_restart(char *cmd) { - machine_shutdown(); - /* Disable interrupts first */ local_irq_disable(); + smp_send_stop(); /* Now call the architecture specific reboot code. */ if (arm_pm_restart) @@ -205,7 +236,7 @@ void release_thread(struct task_struct *dead_task) int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) { - fpsimd_save_state(¤t->thread.fpsimd_state); + fpsimd_preserve_current_state(); *dst = *src; return 0; } @@ -300,7 +331,7 @@ struct task_struct *__switch_to(struct task_struct *prev, * Complete any pending TLB or cache maintenance on this CPU in case * the thread migrates to a different CPU. */ - dsb(); + dsb(ish); /* the actual thread switch */ last = cpu_switch_to(prev, next); diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index 6a8928bba03c..3e926b9c0641 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -19,6 +19,7 @@ * along with this program. If not, see <http://www.gnu.org/licenses/>. 
*/ +#include <linux/compat.h> #include <linux/kernel.h> #include <linux/sched.h> #include <linux/mm.h> @@ -41,6 +42,9 @@ #include <asm/traps.h> #include <asm/system_misc.h> +#define CREATE_TRACE_POINTS +#include <trace/events/syscalls.h> + /* * TODO: does not yet catch signals sent when the child dies. * in exit.c or in signal.c. @@ -517,6 +521,7 @@ static int fpr_set(struct task_struct *target, const struct user_regset *regset, return ret; target->thread.fpsimd_state.user_fpsimd = newstate; + fpsimd_flush_task_state(target); return ret; } @@ -764,6 +769,7 @@ static int compat_vfp_set(struct task_struct *target, uregs->fpcr = fpscr & VFP_FPSCR_CTRL_MASK; } + fpsimd_flush_task_state(target); return ret; } @@ -1058,35 +1064,49 @@ long arch_ptrace(struct task_struct *child, long request, return ptrace_request(child, request, addr, data); } -asmlinkage int syscall_trace(int dir, struct pt_regs *regs) +enum ptrace_syscall_dir { + PTRACE_SYSCALL_ENTER = 0, + PTRACE_SYSCALL_EXIT, +}; + +static void tracehook_report_syscall(struct pt_regs *regs, + enum ptrace_syscall_dir dir) { + int regno; unsigned long saved_reg; - if (!test_thread_flag(TIF_SYSCALL_TRACE)) - return regs->syscallno; - - if (is_compat_task()) { - /* AArch32 uses ip (r12) for scratch */ - saved_reg = regs->regs[12]; - regs->regs[12] = dir; - } else { - /* - * Save X7. X7 is used to denote syscall entry/exit: - * X7 = 0 -> entry, = 1 -> exit - */ - saved_reg = regs->regs[7]; - regs->regs[7] = dir; - } + /* + * A scratch register (ip(r12) on AArch32, x7 on AArch64) is + * used to denote syscall entry/exit: + */ + regno = (is_compat_task() ? 12 : 7); + saved_reg = regs->regs[regno]; + regs->regs[regno] = dir; - if (dir) + if (dir == PTRACE_SYSCALL_EXIT) tracehook_report_syscall_exit(regs, 0); else if (tracehook_report_syscall_entry(regs)) regs->syscallno = ~0UL; - if (is_compat_task()) - regs->regs[12] = saved_reg; - else - regs->regs[7] = saved_reg; + regs->regs[regno] = saved_reg; +} + +asmlinkage int syscall_trace_enter(struct pt_regs *regs) +{ + if (test_thread_flag(TIF_SYSCALL_TRACE)) + tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER); + + if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) + trace_sys_enter(regs, regs->syscallno); return regs->syscallno; } + +asmlinkage void syscall_trace_exit(struct pt_regs *regs) +{ + if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) + trace_sys_exit(regs, regs_return_value(regs)); + + if (test_thread_flag(TIF_SYSCALL_TRACE)) + tracehook_report_syscall(regs, PTRACE_SYSCALL_EXIT); +} diff --git a/arch/arm64/kernel/return_address.c b/arch/arm64/kernel/return_address.c new file mode 100644 index 000000000000..89102a6ffad5 --- /dev/null +++ b/arch/arm64/kernel/return_address.c @@ -0,0 +1,55 @@ +/* + * arch/arm64/kernel/return_address.c + * + * Copyright (C) 2013 Linaro Limited + * Author: AKASHI Takahiro <takahiro.akashi@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
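The ptrace.c rework above keeps the debugger-visible convention that, while the tracee is stopped at a syscall trap, x7 (r12 for compat tasks) reads 0 at syscall entry and 1 at syscall exit, and it additionally hooks up the raw sys_enter/sys_exit tracepoints via TIF_SYSCALL_TRACEPOINT. Purely to illustrate the register convention, a minimal tracer could look like this (error handling and signal-stop handling omitted):

  #include <linux/elf.h>          /* NT_PRSTATUS */
  #include <signal.h>
  #include <stdio.h>
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/uio.h>
  #include <sys/wait.h>
  #include <unistd.h>
  #include <asm/ptrace.h>         /* struct user_pt_regs */

  int main(void)
  {
          pid_t pid = fork();

          if (pid == 0) {
                  ptrace(PTRACE_TRACEME, 0, NULL, NULL);
                  raise(SIGSTOP);
                  write(STDOUT_FILENO, "hi\n", 3);
                  _exit(0);
          }

          waitpid(pid, NULL, 0);                  /* initial SIGSTOP */
          for (;;) {
                  struct user_pt_regs regs;
                  struct iovec iov = { &regs, sizeof(regs) };
                  int status;

                  ptrace(PTRACE_SYSCALL, pid, NULL, NULL);
                  waitpid(pid, &status, 0);
                  if (WIFEXITED(status))
                          break;

                  ptrace(PTRACE_GETREGSET, pid, NT_PRSTATUS, &iov);
                  printf("syscall %llu, %s\n",
                         (unsigned long long)regs.regs[8],    /* w8: nr  */
                         regs.regs[7] ? "exit" : "entry");    /* x7: dir */
          }
          return 0;
  }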
+ */ + +#include <linux/export.h> +#include <linux/ftrace.h> + +#include <asm/stacktrace.h> + +struct return_address_data { + unsigned int level; + void *addr; +}; + +static int save_return_addr(struct stackframe *frame, void *d) +{ + struct return_address_data *data = d; + + if (!data->level) { + data->addr = (void *)frame->pc; + return 1; + } else { + --data->level; + return 0; + } +} + +void *return_address(unsigned int level) +{ + struct return_address_data data; + struct stackframe frame; + register unsigned long current_sp asm ("sp"); + + data.level = level + 2; + data.addr = NULL; + + frame.fp = (unsigned long)__builtin_frame_address(0); + frame.sp = current_sp; + frame.pc = (unsigned long)return_address; /* dummy */ + + walk_stackframe(&frame, save_return_addr, &data); + + if (!data.level) + return data.addr; + else + return NULL; +} +EXPORT_SYMBOL_GPL(return_address); diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c index 7ec784653b29..46d1125571f6 100644 --- a/arch/arm64/kernel/setup.c +++ b/arch/arm64/kernel/setup.c @@ -25,6 +25,7 @@ #include <linux/utsname.h> #include <linux/initrd.h> #include <linux/console.h> +#include <linux/cache.h> #include <linux/bootmem.h> #include <linux/seq_file.h> #include <linux/screen_info.h> @@ -41,6 +42,7 @@ #include <linux/memblock.h> #include <linux/of_fdt.h> #include <linux/of_platform.h> +#include <linux/efi.h> #include <asm/fixmap.h> #include <asm/cputype.h> @@ -55,6 +57,7 @@ #include <asm/traps.h> #include <asm/memblock.h> #include <asm/psci.h> +#include <asm/efi.h> unsigned int processor_id; EXPORT_SYMBOL(processor_id); @@ -198,6 +201,8 @@ static void __init setup_processor(void) { struct cpu_info *cpu_info; u64 features, block; + u32 cwg; + int cls; cpu_info = lookup_processor_type(read_cpuid_id()); if (!cpu_info) { @@ -215,6 +220,18 @@ static void __init setup_processor(void) elf_hwcap = 0; /* + * Check for sane CTR_EL0.CWG value. + */ + cwg = cache_type_cwg(); + cls = cache_line_size(); + if (!cwg) + pr_warn("No Cache Writeback Granule information, assuming cache line size %d\n", + cls); + if (L1_CACHE_BYTES < cls) + pr_warn("L1_CACHE_BYTES smaller than the Cache Writeback Granule (%d < %d)\n", + L1_CACHE_BYTES, cls); + + /* * ID_AA64ISAR0_EL1 contains 4-bit wide signed feature blocks. * The blocks we test below represent incremental functionality * for non-negative values. Negative values are reserved. @@ -361,16 +378,18 @@ void __init setup_arch(char **cmdline_p) *cmdline_p = boot_command_line; - init_mem_pgprot(); early_ioremap_init(); parse_early_param(); + efi_init(); arm64_memblock_init(); paging_init(); request_standard_resources(); + efi_idmap_init(); + unflatten_device_tree(); psci_init(); diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index 890a591f75dd..6357b9c6c90e 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -17,6 +17,7 @@ * along with this program. If not, see <http://www.gnu.org/licenses/>. 
*/ +#include <linux/compat.h> #include <linux/errno.h> #include <linux/signal.h> #include <linux/personality.h> @@ -25,7 +26,6 @@ #include <linux/tracehook.h> #include <linux/ratelimit.h> -#include <asm/compat.h> #include <asm/debug-monitors.h> #include <asm/elf.h> #include <asm/cacheflush.h> @@ -51,7 +51,7 @@ static int preserve_fpsimd_context(struct fpsimd_context __user *ctx) int err; /* dump the hardware registers to the fpsimd_state structure */ - fpsimd_save_state(fpsimd); + fpsimd_preserve_current_state(); /* copy the FP and status/control registers */ err = __copy_to_user(ctx->vregs, fpsimd->vregs, sizeof(fpsimd->vregs)); @@ -86,11 +86,8 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx) __get_user_error(fpsimd.fpcr, &ctx->fpcr, err); /* load the hardware registers from the fpsimd_state structure */ - if (!err) { - preempt_disable(); - fpsimd_load_state(&fpsimd); - preempt_enable(); - } + if (!err) + fpsimd_update_current_state(&fpsimd); return err ? -EFAULT : 0; } @@ -100,8 +97,7 @@ static int restore_sigframe(struct pt_regs *regs, { sigset_t set; int i, err; - struct aux_context __user *aux = - (struct aux_context __user *)sf->uc.uc_mcontext.__reserved; + void *aux = sf->uc.uc_mcontext.__reserved; err = __copy_from_user(&set, &sf->uc.uc_sigmask, sizeof(set)); if (err == 0) @@ -121,8 +117,11 @@ static int restore_sigframe(struct pt_regs *regs, err |= !valid_user_regs(®s->user_regs); - if (err == 0) - err |= restore_fpsimd_context(&aux->fpsimd); + if (err == 0) { + struct fpsimd_context *fpsimd_ctx = + container_of(aux, struct fpsimd_context, head); + err |= restore_fpsimd_context(fpsimd_ctx); + } return err; } @@ -167,8 +166,8 @@ static int setup_sigframe(struct rt_sigframe __user *sf, struct pt_regs *regs, sigset_t *set) { int i, err = 0; - struct aux_context __user *aux = - (struct aux_context __user *)sf->uc.uc_mcontext.__reserved; + void *aux = sf->uc.uc_mcontext.__reserved; + struct _aarch64_ctx *end; /* set up the stack frame for unwinding */ __put_user_error(regs->regs[29], &sf->fp, err); @@ -185,12 +184,27 @@ static int setup_sigframe(struct rt_sigframe __user *sf, err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(*set)); - if (err == 0) - err |= preserve_fpsimd_context(&aux->fpsimd); + if (err == 0) { + struct fpsimd_context *fpsimd_ctx = + container_of(aux, struct fpsimd_context, head); + err |= preserve_fpsimd_context(fpsimd_ctx); + aux += sizeof(*fpsimd_ctx); + } + + /* fault information, if valid */ + if (current->thread.fault_code) { + struct esr_context *esr_ctx = + container_of(aux, struct esr_context, head); + __put_user_error(ESR_MAGIC, &esr_ctx->head.magic, err); + __put_user_error(sizeof(*esr_ctx), &esr_ctx->head.size, err); + __put_user_error(current->thread.fault_code, &esr_ctx->esr, err); + aux += sizeof(*esr_ctx); + } /* set the "end" magic */ - __put_user_error(0, &aux->end.magic, err); - __put_user_error(0, &aux->end.size, err); + end = aux; + __put_user_error(0, &end->magic, err); + __put_user_error(0, &end->size, err); return err; } @@ -416,4 +430,8 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, clear_thread_flag(TIF_NOTIFY_RESUME); tracehook_notify_resume(regs); } + + if (thread_flags & _TIF_FOREIGN_FPSTATE) + fpsimd_restore_current_state(); + } diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c index b3fc9f5ec6d3..3491c638f172 100644 --- a/arch/arm64/kernel/signal32.c +++ b/arch/arm64/kernel/signal32.c @@ -23,6 +23,7 @@ #include <linux/syscalls.h> #include <linux/ratelimit.h> +#include 
<asm/esr.h> #include <asm/fpsimd.h> #include <asm/signal32.h> #include <asm/uaccess.h> @@ -81,6 +82,8 @@ struct compat_vfp_sigframe { #define VFP_MAGIC 0x56465001 #define VFP_STORAGE_SIZE sizeof(struct compat_vfp_sigframe) +#define FSR_WRITE_SHIFT (11) + struct compat_aux_sigframe { struct compat_vfp_sigframe vfp; @@ -219,7 +222,7 @@ static int compat_preserve_vfp_context(struct compat_vfp_sigframe __user *frame) * Note that this also saves V16-31, which aren't visible * in AArch32. */ - fpsimd_save_state(fpsimd); + fpsimd_preserve_current_state(); /* Place structure header on the stack */ __put_user_error(magic, &frame->magic, err); @@ -282,11 +285,8 @@ static int compat_restore_vfp_context(struct compat_vfp_sigframe __user *frame) * We don't need to touch the exception register, so * reload the hardware state. */ - if (!err) { - preempt_disable(); - fpsimd_load_state(&fpsimd); - preempt_enable(); - } + if (!err) + fpsimd_update_current_state(&fpsimd); return err ? -EFAULT : 0; } @@ -500,7 +500,9 @@ static int compat_setup_sigframe(struct compat_sigframe __user *sf, __put_user_error(regs->pstate, &sf->uc.uc_mcontext.arm_cpsr, err); __put_user_error((compat_ulong_t)0, &sf->uc.uc_mcontext.trap_no, err); - __put_user_error((compat_ulong_t)0, &sf->uc.uc_mcontext.error_code, err); + /* set the compat FSR WnR */ + __put_user_error(!!(current->thread.fault_code & ESR_EL1_WRITE) << + FSR_WRITE_SHIFT, &sf->uc.uc_mcontext.error_code, err); __put_user_error(current->thread.fault_address, &sf->uc.uc_mcontext.fault_address, err); __put_user_error(set->sig[0], &sf->uc.uc_mcontext.oldmask, err); diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index c3cb160edc69..40f38f46c8e0 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -35,6 +35,7 @@ #include <linux/clockchips.h> #include <linux/completion.h> #include <linux/of.h> +#include <linux/irq_work.h> #include <asm/atomic.h> #include <asm/cacheflush.h> @@ -62,6 +63,7 @@ enum ipi_msg_type { IPI_CALL_FUNC_SINGLE, IPI_CPU_STOP, IPI_TIMER, + IPI_IRQ_WORK, }; /* @@ -477,6 +479,14 @@ void arch_send_call_function_single_ipi(int cpu) smp_cross_call(cpumask_of(cpu), IPI_CALL_FUNC_SINGLE); } +#ifdef CONFIG_IRQ_WORK +void arch_irq_work_raise(void) +{ + if (smp_cross_call) + smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK); +} +#endif + static const char *ipi_types[NR_IPI] = { #define S(x,s) [x - IPI_RESCHEDULE] = s S(IPI_RESCHEDULE, "Rescheduling interrupts"), @@ -484,6 +494,7 @@ static const char *ipi_types[NR_IPI] = { S(IPI_CALL_FUNC_SINGLE, "Single function call interrupts"), S(IPI_CPU_STOP, "CPU stop interrupts"), S(IPI_TIMER, "Timer broadcast interrupts"), + S(IPI_IRQ_WORK, "IRQ work interrupts"), }; void show_ipi_list(struct seq_file *p, int prec) @@ -576,6 +587,14 @@ void handle_IPI(int ipinr, struct pt_regs *regs) break; #endif +#ifdef CONFIG_IRQ_WORK + case IPI_IRQ_WORK: + irq_enter(); + irq_work_run(); + irq_exit(); + break; +#endif + default: pr_crit("CPU%u: Unknown IPI message 0x%x\n", cpu, ipinr); break; diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c index 7a530d2cc807..0347d38eea29 100644 --- a/arch/arm64/kernel/smp_spin_table.c +++ b/arch/arm64/kernel/smp_spin_table.c @@ -30,7 +30,6 @@ extern void secondary_holding_pen(void); volatile unsigned long secondary_holding_pen_release = INVALID_HWID; static phys_addr_t cpu_release_addr[NR_CPUS]; -static DEFINE_RAW_SPINLOCK(boot_lock); /* * Write secondary_holding_pen_release in a way that is guaranteed to be @@ -94,14 +93,6 @@ 
static int smp_spin_table_cpu_prepare(unsigned int cpu) static int smp_spin_table_cpu_boot(unsigned int cpu) { - unsigned long timeout; - - /* - * Set synchronisation state between this boot processor - * and the secondary one - */ - raw_spin_lock(&boot_lock); - /* * Update the pen release flag. */ @@ -112,34 +103,7 @@ static int smp_spin_table_cpu_boot(unsigned int cpu) */ sev(); - timeout = jiffies + (1 * HZ); - while (time_before(jiffies, timeout)) { - if (secondary_holding_pen_release == INVALID_HWID) - break; - udelay(10); - } - - /* - * Now the secondary core is starting up let it run its - * calibrations, then wait for it to finish - */ - raw_spin_unlock(&boot_lock); - - return secondary_holding_pen_release != INVALID_HWID ? -ENOSYS : 0; -} - -static void smp_spin_table_cpu_postboot(void) -{ - /* - * Let the primary processor know we're out of the pen. - */ - write_pen_release(INVALID_HWID); - - /* - * Synchronise with the boot thread. - */ - raw_spin_lock(&boot_lock); - raw_spin_unlock(&boot_lock); + return 0; } const struct cpu_operations smp_spin_table_ops = { @@ -147,5 +111,4 @@ const struct cpu_operations smp_spin_table_ops = { .cpu_init = smp_spin_table_cpu_init, .cpu_prepare = smp_spin_table_cpu_prepare, .cpu_boot = smp_spin_table_cpu_boot, - .cpu_postboot = smp_spin_table_cpu_postboot, }; diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c index 38f0558f0c0a..55437ba1f5a4 100644 --- a/arch/arm64/kernel/stacktrace.c +++ b/arch/arm64/kernel/stacktrace.c @@ -35,7 +35,7 @@ * ldp x29, x30, [sp] * add sp, sp, #0x10 */ -int unwind_frame(struct stackframe *frame) +int notrace unwind_frame(struct stackframe *frame) { unsigned long high, low; unsigned long fp = frame->fp; diff --git a/arch/arm64/kernel/time.c b/arch/arm64/kernel/time.c index 6815987b50f8..1a7125c3099b 100644 --- a/arch/arm64/kernel/time.c +++ b/arch/arm64/kernel/time.c @@ -18,6 +18,7 @@ * along with this program. If not, see <http://www.gnu.org/licenses/>. 
*/ +#include <linux/clockchips.h> #include <linux/export.h> #include <linux/kernel.h> #include <linux/interrupt.h> @@ -69,6 +70,8 @@ void __init time_init(void) of_clk_init(NULL); clocksource_of_init(); + tick_setup_hrtimer_broadcast(); + arch_timer_rate = arch_timer_get_rate(); if (!arch_timer_rate) panic("Unable to initialise architected timer.\n"); diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c index 3e06b0be4ec8..43514f905916 100644 --- a/arch/arm64/kernel/topology.c +++ b/arch/arm64/kernel/topology.c @@ -17,10 +17,192 @@ #include <linux/percpu.h> #include <linux/node.h> #include <linux/nodemask.h> +#include <linux/of.h> #include <linux/sched.h> #include <asm/topology.h> +static int __init get_cpu_for_node(struct device_node *node) +{ + struct device_node *cpu_node; + int cpu; + + cpu_node = of_parse_phandle(node, "cpu", 0); + if (!cpu_node) + return -1; + + for_each_possible_cpu(cpu) { + if (of_get_cpu_node(cpu, NULL) == cpu_node) { + of_node_put(cpu_node); + return cpu; + } + } + + pr_crit("Unable to find CPU node for %s\n", cpu_node->full_name); + + of_node_put(cpu_node); + return -1; +} + +static int __init parse_core(struct device_node *core, int cluster_id, + int core_id) +{ + char name[10]; + bool leaf = true; + int i = 0; + int cpu; + struct device_node *t; + + do { + snprintf(name, sizeof(name), "thread%d", i); + t = of_get_child_by_name(core, name); + if (t) { + leaf = false; + cpu = get_cpu_for_node(t); + if (cpu >= 0) { + cpu_topology[cpu].cluster_id = cluster_id; + cpu_topology[cpu].core_id = core_id; + cpu_topology[cpu].thread_id = i; + } else { + pr_err("%s: Can't get CPU for thread\n", + t->full_name); + of_node_put(t); + return -EINVAL; + } + of_node_put(t); + } + i++; + } while (t); + + cpu = get_cpu_for_node(core); + if (cpu >= 0) { + if (!leaf) { + pr_err("%s: Core has both threads and CPU\n", + core->full_name); + return -EINVAL; + } + + cpu_topology[cpu].cluster_id = cluster_id; + cpu_topology[cpu].core_id = core_id; + } else if (leaf) { + pr_err("%s: Can't get CPU for leaf core\n", core->full_name); + return -EINVAL; + } + + return 0; +} + +static int __init parse_cluster(struct device_node *cluster, int depth) +{ + char name[10]; + bool leaf = true; + bool has_cores = false; + struct device_node *c; + static int cluster_id __initdata; + int core_id = 0; + int i, ret; + + /* + * First check for child clusters; we currently ignore any + * information about the nesting of clusters and present the + * scheduler with a flat list of them. 
+ */ + i = 0; + do { + snprintf(name, sizeof(name), "cluster%d", i); + c = of_get_child_by_name(cluster, name); + if (c) { + leaf = false; + ret = parse_cluster(c, depth + 1); + of_node_put(c); + if (ret != 0) + return ret; + } + i++; + } while (c); + + /* Now check for cores */ + i = 0; + do { + snprintf(name, sizeof(name), "core%d", i); + c = of_get_child_by_name(cluster, name); + if (c) { + has_cores = true; + + if (depth == 0) { + pr_err("%s: cpu-map children should be clusters\n", + c->full_name); + of_node_put(c); + return -EINVAL; + } + + if (leaf) { + ret = parse_core(c, cluster_id, core_id++); + } else { + pr_err("%s: Non-leaf cluster with core %s\n", + cluster->full_name, name); + ret = -EINVAL; + } + + of_node_put(c); + if (ret != 0) + return ret; + } + i++; + } while (c); + + if (leaf && !has_cores) + pr_warn("%s: empty cluster\n", cluster->full_name); + + if (leaf) + cluster_id++; + + return 0; +} + +static int __init parse_dt_topology(void) +{ + struct device_node *cn, *map; + int ret = 0; + int cpu; + + cn = of_find_node_by_path("/cpus"); + if (!cn) { + pr_err("No CPU information found in DT\n"); + return 0; + } + + /* + * When topology is provided cpu-map is essentially a root + * cluster with restricted subnodes. + */ + map = of_get_child_by_name(cn, "cpu-map"); + if (!map) + goto out; + + ret = parse_cluster(map, 0); + if (ret != 0) + goto out_map; + + /* + * Check that all cores are in the topology; the SMP code will + * only mark cores described in the DT as possible. + */ + for_each_possible_cpu(cpu) { + if (cpu_topology[cpu].cluster_id == -1) { + pr_err("CPU%d: No topology information specified\n", + cpu); + ret = -EINVAL; + } + } + +out_map: + of_node_put(map); +out: + of_node_put(cn); + return ret; +} + /* * cpu topology table */ @@ -39,13 +221,9 @@ static void update_siblings_masks(unsigned int cpuid) if (cpuid_topo->cluster_id == -1) { /* - * DT does not contain topology information for this cpu - * reset it to default behaviour + * DT does not contain topology information for this cpu. */ pr_debug("CPU%u: No topology information configured\n", cpuid); - cpuid_topo->core_id = 0; - cpumask_set_cpu(cpuid, &cpuid_topo->core_sibling); - cpumask_set_cpu(cpuid, &cpuid_topo->thread_sibling); return; } @@ -74,22 +252,32 @@ void store_cpu_topology(unsigned int cpuid) update_siblings_masks(cpuid); } -/* - * init_cpu_topology is called at boot when only one cpu is running - * which prevent simultaneous write access to cpu_topology array - */ -void __init init_cpu_topology(void) +static void __init reset_cpu_topology(void) { unsigned int cpu; - /* init core mask and power*/ for_each_possible_cpu(cpu) { struct cpu_topology *cpu_topo = &cpu_topology[cpu]; cpu_topo->thread_id = -1; - cpu_topo->core_id = -1; + cpu_topo->core_id = 0; cpu_topo->cluster_id = -1; + cpumask_clear(&cpu_topo->core_sibling); + cpumask_set_cpu(cpu, &cpu_topo->core_sibling); cpumask_clear(&cpu_topo->thread_sibling); + cpumask_set_cpu(cpu, &cpu_topo->thread_sibling); } } + +void __init init_cpu_topology(void) +{ + reset_cpu_topology(); + + /* + * Discard anything that was parsed if we hit an error so we + * don't use partial information. 
+ */ + if (parse_dt_topology()) + reset_cpu_topology(); +} diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c index 7ffadddb645d..c43cfa9b8304 100644 --- a/arch/arm64/kernel/traps.c +++ b/arch/arm64/kernel/traps.c @@ -251,10 +251,13 @@ void die(const char *str, struct pt_regs *regs, int err) void arm64_notify_die(const char *str, struct pt_regs *regs, struct siginfo *info, int err) { - if (user_mode(regs)) + if (user_mode(regs)) { + current->thread.fault_address = 0; + current->thread.fault_code = err; force_sig_info(info->si_signo, info, current); - else + } else { die(str, regs, err); + } } asmlinkage void __exception do_undefinstr(struct pt_regs *regs) diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S index 4ba7a55b49c7..f1e6d5c032e1 100644 --- a/arch/arm64/kernel/vmlinux.lds.S +++ b/arch/arm64/kernel/vmlinux.lds.S @@ -13,7 +13,7 @@ #define ARM_EXIT_DISCARD(x) x OUTPUT_ARCH(aarch64) -ENTRY(stext) +ENTRY(_text) jiffies = jiffies_64; diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S index 2c56012cb2d2..b0d1512acf08 100644 --- a/arch/arm64/kvm/hyp.S +++ b/arch/arm64/kvm/hyp.S @@ -630,9 +630,15 @@ ENTRY(__kvm_tlb_flush_vmid_ipa) * whole of Stage-1. Weep... */ tlbi ipas2e1is, x1 - dsb sy + /* + * We have to ensure completion of the invalidation at Stage-2, + * since a table walk on another CPU could refill a TLB with a + * complete (S1 + S2) walk based on the old Stage-2 mapping if + * the Stage-1 invalidation happened first. + */ + dsb ish tlbi vmalle1is - dsb sy + dsb ish isb msr vttbr_el2, xzr @@ -643,7 +649,7 @@ ENTRY(__kvm_flush_vm_context) dsb ishst tlbi alle1is ic ialluis - dsb sy + dsb ish ret ENDPROC(__kvm_flush_vm_context) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 03244582bc55..c59a1bdab5eb 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -71,13 +71,13 @@ static u32 get_ccsidr(u32 csselr) static void do_dc_cisw(u32 val) { asm volatile("dc cisw, %x0" : : "r" (val)); - dsb(); + dsb(ish); } static void do_dc_csw(u32 val) { asm volatile("dc csw, %x0" : : "r" (val)); - dsb(); + dsb(ish); } /* See note at ARM ARM B1.14.4 */ diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 328ce1a99daa..d98d3e39879e 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -1,4 +1,5 @@ lib-y := bitops.o clear_user.o delay.o copy_from_user.o \ copy_to_user.o copy_in_user.o copy_page.o \ clear_page.o memchr.o memcpy.o memmove.o memset.o \ + memcmp.o strcmp.o strncmp.o strlen.o strnlen.o \ strchr.o strrchr.o diff --git a/arch/arm64/lib/memcmp.S b/arch/arm64/lib/memcmp.S new file mode 100644 index 000000000000..6ea0776ba6de --- /dev/null +++ b/arch/arm64/lib/memcmp.S @@ -0,0 +1,258 @@ +/* + * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. + * + * This code is based on glibc cortex strings work originally authored by Linaro + * and re-licensed under GPLv2 for the Linux kernel. The original code can + * be found @ + * + * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ + * files/head:/src/aarch64/ + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + */ + +#include <linux/linkage.h> +#include <asm/assembler.h> + +/* +* compare memory areas(when two memory areas' offset are different, +* alignment handled by the hardware) +* +* Parameters: +* x0 - const memory area 1 pointer +* x1 - const memory area 2 pointer +* x2 - the maximal compare byte length +* Returns: +* x0 - a compare result, maybe less than, equal to, or greater than ZERO +*/ + +/* Parameters and result. */ +src1 .req x0 +src2 .req x1 +limit .req x2 +result .req x0 + +/* Internal variables. */ +data1 .req x3 +data1w .req w3 +data2 .req x4 +data2w .req w4 +has_nul .req x5 +diff .req x6 +endloop .req x7 +tmp1 .req x8 +tmp2 .req x9 +tmp3 .req x10 +pos .req x11 +limit_wd .req x12 +mask .req x13 + +ENTRY(memcmp) + cbz limit, .Lret0 + eor tmp1, src1, src2 + tst tmp1, #7 + b.ne .Lmisaligned8 + ands tmp1, src1, #7 + b.ne .Lmutual_align + sub limit_wd, limit, #1 /* limit != 0, so no underflow. */ + lsr limit_wd, limit_wd, #3 /* Convert to Dwords. */ + /* + * The input source addresses are at alignment boundary. + * Directly compare eight bytes each time. + */ +.Lloop_aligned: + ldr data1, [src1], #8 + ldr data2, [src2], #8 +.Lstart_realigned: + subs limit_wd, limit_wd, #1 + eor diff, data1, data2 /* Non-zero if differences found. */ + csinv endloop, diff, xzr, cs /* Last Dword or differences. */ + cbz endloop, .Lloop_aligned + + /* Not reached the limit, must have found a diff. */ + tbz limit_wd, #63, .Lnot_limit + + /* Limit % 8 == 0 => the diff is in the last 8 bytes. */ + ands limit, limit, #7 + b.eq .Lnot_limit + /* + * The remained bytes less than 8. It is needed to extract valid data + * from last eight bytes of the intended memory range. + */ + lsl limit, limit, #3 /* bytes-> bits. */ + mov mask, #~0 +CPU_BE( lsr mask, mask, limit ) +CPU_LE( lsl mask, mask, limit ) + bic data1, data1, mask + bic data2, data2, mask + + orr diff, diff, mask + b .Lnot_limit + +.Lmutual_align: + /* + * Sources are mutually aligned, but are not currently at an + * alignment boundary. Round down the addresses and then mask off + * the bytes that precede the start point. + */ + bic src1, src1, #7 + bic src2, src2, #7 + ldr data1, [src1], #8 + ldr data2, [src2], #8 + /* + * We can not add limit with alignment offset(tmp1) here. Since the + * addition probably make the limit overflown. + */ + sub limit_wd, limit, #1/*limit != 0, so no underflow.*/ + and tmp3, limit_wd, #7 + lsr limit_wd, limit_wd, #3 + add tmp3, tmp3, tmp1 + add limit_wd, limit_wd, tmp3, lsr #3 + add limit, limit, tmp1/* Adjust the limit for the extra. */ + + lsl tmp1, tmp1, #3/* Bytes beyond alignment -> bits.*/ + neg tmp1, tmp1/* Bits to alignment -64. */ + mov tmp2, #~0 + /*mask off the non-intended bytes before the start address.*/ +CPU_BE( lsl tmp2, tmp2, tmp1 )/*Big-endian.Early bytes are at MSB*/ + /* Little-endian. Early bytes are at LSB. 
*/ +CPU_LE( lsr tmp2, tmp2, tmp1 ) + + orr data1, data1, tmp2 + orr data2, data2, tmp2 + b .Lstart_realigned + + /*src1 and src2 have different alignment offset.*/ +.Lmisaligned8: + cmp limit, #8 + b.lo .Ltiny8proc /*limit < 8: compare byte by byte*/ + + and tmp1, src1, #7 + neg tmp1, tmp1 + add tmp1, tmp1, #8/*valid length in the first 8 bytes of src1*/ + and tmp2, src2, #7 + neg tmp2, tmp2 + add tmp2, tmp2, #8/*valid length in the first 8 bytes of src2*/ + subs tmp3, tmp1, tmp2 + csel pos, tmp1, tmp2, hi /*Choose the maximum.*/ + + sub limit, limit, pos + /*compare the proceeding bytes in the first 8 byte segment.*/ +.Ltinycmp: + ldrb data1w, [src1], #1 + ldrb data2w, [src2], #1 + subs pos, pos, #1 + ccmp data1w, data2w, #0, ne /* NZCV = 0b0000. */ + b.eq .Ltinycmp + cbnz pos, 1f /*diff occurred before the last byte.*/ + cmp data1w, data2w + b.eq .Lstart_align +1: + sub result, data1, data2 + ret + +.Lstart_align: + lsr limit_wd, limit, #3 + cbz limit_wd, .Lremain8 + + ands xzr, src1, #7 + b.eq .Lrecal_offset + /*process more leading bytes to make src1 aligned...*/ + add src1, src1, tmp3 /*backwards src1 to alignment boundary*/ + add src2, src2, tmp3 + sub limit, limit, tmp3 + lsr limit_wd, limit, #3 + cbz limit_wd, .Lremain8 + /*load 8 bytes from aligned SRC1..*/ + ldr data1, [src1], #8 + ldr data2, [src2], #8 + + subs limit_wd, limit_wd, #1 + eor diff, data1, data2 /*Non-zero if differences found.*/ + csinv endloop, diff, xzr, ne + cbnz endloop, .Lunequal_proc + /*How far is the current SRC2 from the alignment boundary...*/ + and tmp3, tmp3, #7 + +.Lrecal_offset:/*src1 is aligned now..*/ + neg pos, tmp3 +.Lloopcmp_proc: + /* + * Divide the eight bytes into two parts. First,backwards the src2 + * to an alignment boundary,load eight bytes and compare from + * the SRC2 alignment boundary. If all 8 bytes are equal,then start + * the second part's comparison. Otherwise finish the comparison. + * This special handle can garantee all the accesses are in the + * thread/task space in avoid to overrange access. + */ + ldr data1, [src1,pos] + ldr data2, [src2,pos] + eor diff, data1, data2 /* Non-zero if differences found. */ + cbnz diff, .Lnot_limit + + /*The second part process*/ + ldr data1, [src1], #8 + ldr data2, [src2], #8 + eor diff, data1, data2 /* Non-zero if differences found. */ + subs limit_wd, limit_wd, #1 + csinv endloop, diff, xzr, ne/*if limit_wd is 0,will finish the cmp*/ + cbz endloop, .Lloopcmp_proc +.Lunequal_proc: + cbz diff, .Lremain8 + +/*There is differnence occured in the latest comparison.*/ +.Lnot_limit: +/* +* For little endian,reverse the low significant equal bits into MSB,then +* following CLZ can find how many equal bits exist. +*/ +CPU_LE( rev diff, diff ) +CPU_LE( rev data1, data1 ) +CPU_LE( rev data2, data2 ) + + /* + * The MS-non-zero bit of DIFF marks either the first bit + * that is different, or the end of the significant data. + * Shifting left now will bring the critical information into the + * top bits. + */ + clz pos, diff + lsl data1, data1, pos + lsl data2, data2, pos + /* + * We need to zero-extend (char is unsigned) the value and then + * perform a signed subtraction. + */ + lsr data1, data1, #56 + sub result, data1, data2, lsr #56 + ret + +.Lremain8: + /* Limit % 8 == 0 =>. all data are equal.*/ + ands limit, limit, #7 + b.eq .Lret0 + +.Ltiny8proc: + ldrb data1w, [src1], #1 + ldrb data2w, [src2], #1 + subs limit, limit, #1 + + ccmp data1w, data2w, #0, ne /* NZCV = 0b0000. 
*/ + b.eq .Ltiny8proc + sub result, data1, data2 + ret +.Lret0: + mov result, #0 + ret +ENDPROC(memcmp) diff --git a/arch/arm64/lib/memcpy.S b/arch/arm64/lib/memcpy.S index 27b5003609b6..8a9a96d3ddae 100644 --- a/arch/arm64/lib/memcpy.S +++ b/arch/arm64/lib/memcpy.S @@ -1,5 +1,13 @@ /* * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. + * + * This code is based on glibc cortex strings work originally authored by Linaro + * and re-licensed under GPLv2 for the Linux kernel. The original code can + * be found @ + * + * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ + * files/head:/src/aarch64/ * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -16,6 +24,7 @@ #include <linux/linkage.h> #include <asm/assembler.h> +#include <asm/cache.h> /* * Copy a buffer from src to dest (alignment handled by the hardware) @@ -27,27 +36,166 @@ * Returns: * x0 - dest */ +dstin .req x0 +src .req x1 +count .req x2 +tmp1 .req x3 +tmp1w .req w3 +tmp2 .req x4 +tmp2w .req w4 +tmp3 .req x5 +tmp3w .req w5 +dst .req x6 + +A_l .req x7 +A_h .req x8 +B_l .req x9 +B_h .req x10 +C_l .req x11 +C_h .req x12 +D_l .req x13 +D_h .req x14 + ENTRY(memcpy) - mov x4, x0 - subs x2, x2, #8 - b.mi 2f -1: ldr x3, [x1], #8 - subs x2, x2, #8 - str x3, [x4], #8 - b.pl 1b -2: adds x2, x2, #4 - b.mi 3f - ldr w3, [x1], #4 - sub x2, x2, #4 - str w3, [x4], #4 -3: adds x2, x2, #2 - b.mi 4f - ldrh w3, [x1], #2 - sub x2, x2, #2 - strh w3, [x4], #2 -4: adds x2, x2, #1 - b.mi 5f - ldrb w3, [x1] - strb w3, [x4] -5: ret + mov dst, dstin + cmp count, #16 + /*When memory length is less than 16, the accessed are not aligned.*/ + b.lo .Ltiny15 + + neg tmp2, src + ands tmp2, tmp2, #15/* Bytes to reach alignment. */ + b.eq .LSrcAligned + sub count, count, tmp2 + /* + * Copy the leading memory data from src to dst in an increasing + * address order.By this way,the risk of overwritting the source + * memory data is eliminated when the distance between src and + * dst is less than 16. The memory accesses here are alignment. + */ + tbz tmp2, #0, 1f + ldrb tmp1w, [src], #1 + strb tmp1w, [dst], #1 +1: + tbz tmp2, #1, 2f + ldrh tmp1w, [src], #2 + strh tmp1w, [dst], #2 +2: + tbz tmp2, #2, 3f + ldr tmp1w, [src], #4 + str tmp1w, [dst], #4 +3: + tbz tmp2, #3, .LSrcAligned + ldr tmp1, [src],#8 + str tmp1, [dst],#8 + +.LSrcAligned: + cmp count, #64 + b.ge .Lcpy_over64 + /* + * Deal with small copies quickly by dropping straight into the + * exit block. + */ +.Ltail63: + /* + * Copy up to 48 bytes of data. At this point we only need the + * bottom 6 bits of count to be accurate. + */ + ands tmp1, count, #0x30 + b.eq .Ltiny15 + cmp tmp1w, #0x20 + b.eq 1f + b.lt 2f + ldp A_l, A_h, [src], #16 + stp A_l, A_h, [dst], #16 +1: + ldp A_l, A_h, [src], #16 + stp A_l, A_h, [dst], #16 +2: + ldp A_l, A_h, [src], #16 + stp A_l, A_h, [dst], #16 +.Ltiny15: + /* + * Prefer to break one ldp/stp into several load/store to access + * memory in an increasing address order,rather than to load/store 16 + * bytes from (src-16) to (dst-16) and to backward the src to aligned + * address,which way is used in original cortex memcpy. If keeping + * the original memcpy process here, memmove need to satisfy the + * precondition that src address is at least 16 bytes bigger than dst + * address,otherwise some source data will be overwritten when memove + * call memcpy directly. 
To make memmove simpler and decouple the + * memcpy's dependency on memmove, withdrew the original process. + */ + tbz count, #3, 1f + ldr tmp1, [src], #8 + str tmp1, [dst], #8 +1: + tbz count, #2, 2f + ldr tmp1w, [src], #4 + str tmp1w, [dst], #4 +2: + tbz count, #1, 3f + ldrh tmp1w, [src], #2 + strh tmp1w, [dst], #2 +3: + tbz count, #0, .Lexitfunc + ldrb tmp1w, [src] + strb tmp1w, [dst] + +.Lexitfunc: + ret + +.Lcpy_over64: + subs count, count, #128 + b.ge .Lcpy_body_large + /* + * Less than 128 bytes to copy, so handle 64 here and then jump + * to the tail. + */ + ldp A_l, A_h, [src],#16 + stp A_l, A_h, [dst],#16 + ldp B_l, B_h, [src],#16 + ldp C_l, C_h, [src],#16 + stp B_l, B_h, [dst],#16 + stp C_l, C_h, [dst],#16 + ldp D_l, D_h, [src],#16 + stp D_l, D_h, [dst],#16 + + tst count, #0x3f + b.ne .Ltail63 + ret + + /* + * Critical loop. Start at a new cache line boundary. Assuming + * 64 bytes per line this ensures the entire loop is in one line. + */ + .p2align L1_CACHE_SHIFT +.Lcpy_body_large: + /* pre-get 64 bytes data. */ + ldp A_l, A_h, [src],#16 + ldp B_l, B_h, [src],#16 + ldp C_l, C_h, [src],#16 + ldp D_l, D_h, [src],#16 +1: + /* + * interlace the load of next 64 bytes data block with store of the last + * loaded 64 bytes data. + */ + stp A_l, A_h, [dst],#16 + ldp A_l, A_h, [src],#16 + stp B_l, B_h, [dst],#16 + ldp B_l, B_h, [src],#16 + stp C_l, C_h, [dst],#16 + ldp C_l, C_h, [src],#16 + stp D_l, D_h, [dst],#16 + ldp D_l, D_h, [src],#16 + subs count, count, #64 + b.ge 1b + stp A_l, A_h, [dst],#16 + stp B_l, B_h, [dst],#16 + stp C_l, C_h, [dst],#16 + stp D_l, D_h, [dst],#16 + + tst count, #0x3f + b.ne .Ltail63 + ret ENDPROC(memcpy) diff --git a/arch/arm64/lib/memmove.S b/arch/arm64/lib/memmove.S index b79fdfa42d39..57b19ea2dad4 100644 --- a/arch/arm64/lib/memmove.S +++ b/arch/arm64/lib/memmove.S @@ -1,5 +1,13 @@ /* * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. + * + * This code is based on glibc cortex strings work originally authored by Linaro + * and re-licensed under GPLv2 for the Linux kernel. The original code can + * be found @ + * + * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ + * files/head:/src/aarch64/ * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -16,6 +24,7 @@ #include <linux/linkage.h> #include <asm/assembler.h> +#include <asm/cache.h> /* * Move a buffer from src to test (alignment handled by the hardware). @@ -28,30 +37,161 @@ * Returns: * x0 - dest */ +dstin .req x0 +src .req x1 +count .req x2 +tmp1 .req x3 +tmp1w .req w3 +tmp2 .req x4 +tmp2w .req w4 +tmp3 .req x5 +tmp3w .req w5 +dst .req x6 + +A_l .req x7 +A_h .req x8 +B_l .req x9 +B_h .req x10 +C_l .req x11 +C_h .req x12 +D_l .req x13 +D_h .req x14 + ENTRY(memmove) - cmp x0, x1 - b.ls memcpy - add x4, x0, x2 - add x1, x1, x2 - subs x2, x2, #8 - b.mi 2f -1: ldr x3, [x1, #-8]! - subs x2, x2, #8 - str x3, [x4, #-8]! - b.pl 1b -2: adds x2, x2, #4 - b.mi 3f - ldr w3, [x1, #-4]! - sub x2, x2, #4 - str w3, [x4, #-4]! -3: adds x2, x2, #2 - b.mi 4f - ldrh w3, [x1, #-2]! - sub x2, x2, #2 - strh w3, [x4, #-2]! -4: adds x2, x2, #1 - b.mi 5f - ldrb w3, [x1, #-1] - strb w3, [x4, #-1] -5: ret + cmp dstin, src + b.lo memcpy + add tmp1, src, count + cmp dstin, tmp1 + b.hs memcpy /* No overlap. */ + + add dst, dstin, count + add src, src, count + cmp count, #16 + b.lo .Ltail15 /*probably non-alignment accesses.*/ + + ands tmp2, src, #15 /* Bytes to reach alignment. 
*/ + b.eq .LSrcAligned + sub count, count, tmp2 + /* + * process the aligned offset length to make the src aligned firstly. + * those extra instructions' cost is acceptable. It also make the + * coming accesses are based on aligned address. + */ + tbz tmp2, #0, 1f + ldrb tmp1w, [src, #-1]! + strb tmp1w, [dst, #-1]! +1: + tbz tmp2, #1, 2f + ldrh tmp1w, [src, #-2]! + strh tmp1w, [dst, #-2]! +2: + tbz tmp2, #2, 3f + ldr tmp1w, [src, #-4]! + str tmp1w, [dst, #-4]! +3: + tbz tmp2, #3, .LSrcAligned + ldr tmp1, [src, #-8]! + str tmp1, [dst, #-8]! + +.LSrcAligned: + cmp count, #64 + b.ge .Lcpy_over64 + + /* + * Deal with small copies quickly by dropping straight into the + * exit block. + */ +.Ltail63: + /* + * Copy up to 48 bytes of data. At this point we only need the + * bottom 6 bits of count to be accurate. + */ + ands tmp1, count, #0x30 + b.eq .Ltail15 + cmp tmp1w, #0x20 + b.eq 1f + b.lt 2f + ldp A_l, A_h, [src, #-16]! + stp A_l, A_h, [dst, #-16]! +1: + ldp A_l, A_h, [src, #-16]! + stp A_l, A_h, [dst, #-16]! +2: + ldp A_l, A_h, [src, #-16]! + stp A_l, A_h, [dst, #-16]! + +.Ltail15: + tbz count, #3, 1f + ldr tmp1, [src, #-8]! + str tmp1, [dst, #-8]! +1: + tbz count, #2, 2f + ldr tmp1w, [src, #-4]! + str tmp1w, [dst, #-4]! +2: + tbz count, #1, 3f + ldrh tmp1w, [src, #-2]! + strh tmp1w, [dst, #-2]! +3: + tbz count, #0, .Lexitfunc + ldrb tmp1w, [src, #-1] + strb tmp1w, [dst, #-1] + +.Lexitfunc: + ret + +.Lcpy_over64: + subs count, count, #128 + b.ge .Lcpy_body_large + /* + * Less than 128 bytes to copy, so handle 64 bytes here and then jump + * to the tail. + */ + ldp A_l, A_h, [src, #-16] + stp A_l, A_h, [dst, #-16] + ldp B_l, B_h, [src, #-32] + ldp C_l, C_h, [src, #-48] + stp B_l, B_h, [dst, #-32] + stp C_l, C_h, [dst, #-48] + ldp D_l, D_h, [src, #-64]! + stp D_l, D_h, [dst, #-64]! + + tst count, #0x3f + b.ne .Ltail63 + ret + + /* + * Critical loop. Start at a new cache line boundary. Assuming + * 64 bytes per line this ensures the entire loop is in one line. + */ + .p2align L1_CACHE_SHIFT +.Lcpy_body_large: + /* pre-load 64 bytes data. */ + ldp A_l, A_h, [src, #-16] + ldp B_l, B_h, [src, #-32] + ldp C_l, C_h, [src, #-48] + ldp D_l, D_h, [src, #-64]! +1: + /* + * interlace the load of next 64 bytes data block with store of the last + * loaded 64 bytes data. + */ + stp A_l, A_h, [dst, #-16] + ldp A_l, A_h, [src, #-16] + stp B_l, B_h, [dst, #-32] + ldp B_l, B_h, [src, #-32] + stp C_l, C_h, [dst, #-48] + ldp C_l, C_h, [src, #-48] + stp D_l, D_h, [dst, #-64]! + ldp D_l, D_h, [src, #-64]! + subs count, count, #64 + b.ge 1b + stp A_l, A_h, [dst, #-16] + stp B_l, B_h, [dst, #-32] + stp C_l, C_h, [dst, #-48] + stp D_l, D_h, [dst, #-64]! + + tst count, #0x3f + b.ne .Ltail63 + ret ENDPROC(memmove) diff --git a/arch/arm64/lib/memset.S b/arch/arm64/lib/memset.S index 87e4a68fbbbc..7c72dfd36b63 100644 --- a/arch/arm64/lib/memset.S +++ b/arch/arm64/lib/memset.S @@ -1,5 +1,13 @@ /* * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. + * + * This code is based on glibc cortex strings work originally authored by Linaro + * and re-licensed under GPLv2 for the Linux kernel. 
The original code can + * be found @ + * + * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ + * files/head:/src/aarch64/ * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -16,6 +24,7 @@ #include <linux/linkage.h> #include <asm/assembler.h> +#include <asm/cache.h> /* * Fill in the buffer with character c (alignment handled by the hardware) @@ -27,27 +36,181 @@ * Returns: * x0 - buf */ + +dstin .req x0 +val .req w1 +count .req x2 +tmp1 .req x3 +tmp1w .req w3 +tmp2 .req x4 +tmp2w .req w4 +zva_len_x .req x5 +zva_len .req w5 +zva_bits_x .req x6 + +A_l .req x7 +A_lw .req w7 +dst .req x8 +tmp3w .req w9 +tmp3 .req x9 + ENTRY(memset) - mov x4, x0 - and w1, w1, #0xff - orr w1, w1, w1, lsl #8 - orr w1, w1, w1, lsl #16 - orr x1, x1, x1, lsl #32 - subs x2, x2, #8 - b.mi 2f -1: str x1, [x4], #8 - subs x2, x2, #8 - b.pl 1b -2: adds x2, x2, #4 - b.mi 3f - sub x2, x2, #4 - str w1, [x4], #4 -3: adds x2, x2, #2 - b.mi 4f - sub x2, x2, #2 - strh w1, [x4], #2 -4: adds x2, x2, #1 - b.mi 5f - strb w1, [x4] -5: ret + mov dst, dstin /* Preserve return value. */ + and A_lw, val, #255 + orr A_lw, A_lw, A_lw, lsl #8 + orr A_lw, A_lw, A_lw, lsl #16 + orr A_l, A_l, A_l, lsl #32 + + cmp count, #15 + b.hi .Lover16_proc + /*All store maybe are non-aligned..*/ + tbz count, #3, 1f + str A_l, [dst], #8 +1: + tbz count, #2, 2f + str A_lw, [dst], #4 +2: + tbz count, #1, 3f + strh A_lw, [dst], #2 +3: + tbz count, #0, 4f + strb A_lw, [dst] +4: + ret + +.Lover16_proc: + /*Whether the start address is aligned with 16.*/ + neg tmp2, dst + ands tmp2, tmp2, #15 + b.eq .Laligned +/* +* The count is not less than 16, we can use stp to store the start 16 bytes, +* then adjust the dst aligned with 16.This process will make the current +* memory address at alignment boundary. +*/ + stp A_l, A_l, [dst] /*non-aligned store..*/ + /*make the dst aligned..*/ + sub count, count, tmp2 + add dst, dst, tmp2 + +.Laligned: + cbz A_l, .Lzero_mem + +.Ltail_maybe_long: + cmp count, #64 + b.ge .Lnot_short +.Ltail63: + ands tmp1, count, #0x30 + b.eq 3f + cmp tmp1w, #0x20 + b.eq 1f + b.lt 2f + stp A_l, A_l, [dst], #16 +1: + stp A_l, A_l, [dst], #16 +2: + stp A_l, A_l, [dst], #16 +/* +* The last store length is less than 16,use stp to write last 16 bytes. +* It will lead some bytes written twice and the access is non-aligned. +*/ +3: + ands count, count, #15 + cbz count, 4f + add dst, dst, count + stp A_l, A_l, [dst, #-16] /* Repeat some/all of last store. */ +4: + ret + + /* + * Critical loop. Start at a new cache line boundary. Assuming + * 64 bytes per line, this ensures the entire loop is in one line. + */ + .p2align L1_CACHE_SHIFT +.Lnot_short: + sub dst, dst, #16/* Pre-bias. */ + sub count, count, #64 +1: + stp A_l, A_l, [dst, #16] + stp A_l, A_l, [dst, #32] + stp A_l, A_l, [dst, #48] + stp A_l, A_l, [dst, #64]! + subs count, count, #64 + b.ge 1b + tst count, #0x3f + add dst, dst, #16 + b.ne .Ltail63 +.Lexitfunc: + ret + + /* + * For zeroing memory, check to see if we can use the ZVA feature to + * zero entire 'cache' lines. + */ +.Lzero_mem: + cmp count, #63 + b.le .Ltail63 + /* + * For zeroing small amounts of memory, it's not worth setting up + * the line-clear code. + */ + cmp count, #128 + b.lt .Lnot_short /*count is at least 128 bytes*/ + + mrs tmp1, dczid_el0 + tbnz tmp1, #4, .Lnot_short + mov tmp3w, #4 + and zva_len, tmp1w, #15 /* Safety: other bits reserved. 
*/ + lsl zva_len, tmp3w, zva_len + + ands tmp3w, zva_len, #63 + /* + * ensure the zva_len is not less than 64. + * It is not meaningful to use ZVA if the block size is less than 64. + */ + b.ne .Lnot_short +.Lzero_by_line: + /* + * Compute how far we need to go to become suitably aligned. We're + * already at quad-word alignment. + */ + cmp count, zva_len_x + b.lt .Lnot_short /* Not enough to reach alignment. */ + sub zva_bits_x, zva_len_x, #1 + neg tmp2, dst + ands tmp2, tmp2, zva_bits_x + b.eq 2f /* Already aligned. */ + /* Not aligned, check that there's enough to copy after alignment.*/ + sub tmp1, count, tmp2 + /* + * grantee the remain length to be ZVA is bigger than 64, + * avoid to make the 2f's process over mem range.*/ + cmp tmp1, #64 + ccmp tmp1, zva_len_x, #8, ge /* NZCV=0b1000 */ + b.lt .Lnot_short + /* + * We know that there's at least 64 bytes to zero and that it's safe + * to overrun by 64 bytes. + */ + mov count, tmp1 +1: + stp A_l, A_l, [dst] + stp A_l, A_l, [dst, #16] + stp A_l, A_l, [dst, #32] + subs tmp2, tmp2, #64 + stp A_l, A_l, [dst, #48] + add dst, dst, #64 + b.ge 1b + /* We've overrun a bit, so adjust dst downwards.*/ + add dst, dst, tmp2 +2: + sub count, count, zva_len_x +3: + dc zva, dst + add dst, dst, zva_len_x + subs count, count, zva_len_x + b.ge 3b + ands count, count, zva_bits_x + b.ne .Ltail_maybe_long + ret ENDPROC(memset) diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S new file mode 100644 index 000000000000..42f828b06c59 --- /dev/null +++ b/arch/arm64/lib/strcmp.S @@ -0,0 +1,234 @@ +/* + * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. + * + * This code is based on glibc cortex strings work originally authored by Linaro + * and re-licensed under GPLv2 for the Linux kernel. The original code can + * be found @ + * + * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ + * files/head:/src/aarch64/ + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + */ + +#include <linux/linkage.h> +#include <asm/assembler.h> + +/* + * compare two strings + * + * Parameters: + * x0 - const string 1 pointer + * x1 - const string 2 pointer + * Returns: + * x0 - an integer less than, equal to, or greater than zero + * if s1 is found, respectively, to be less than, to match, + * or be greater than s2. + */ + +#define REP8_01 0x0101010101010101 +#define REP8_7f 0x7f7f7f7f7f7f7f7f +#define REP8_80 0x8080808080808080 + +/* Parameters and result. */ +src1 .req x0 +src2 .req x1 +result .req x0 + +/* Internal variables. 
*/ +data1 .req x2 +data1w .req w2 +data2 .req x3 +data2w .req w3 +has_nul .req x4 +diff .req x5 +syndrome .req x6 +tmp1 .req x7 +tmp2 .req x8 +tmp3 .req x9 +zeroones .req x10 +pos .req x11 + +ENTRY(strcmp) + eor tmp1, src1, src2 + mov zeroones, #REP8_01 + tst tmp1, #7 + b.ne .Lmisaligned8 + ands tmp1, src1, #7 + b.ne .Lmutual_align + + /* + * NUL detection works on the principle that (X - 1) & (~X) & 0x80 + * (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and + * can be done in parallel across the entire word. + */ +.Lloop_aligned: + ldr data1, [src1], #8 + ldr data2, [src2], #8 +.Lstart_realigned: + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + eor diff, data1, data2 /* Non-zero if differences found. */ + bic has_nul, tmp1, tmp2 /* Non-zero if NUL terminator. */ + orr syndrome, diff, has_nul + cbz syndrome, .Lloop_aligned + b .Lcal_cmpresult + +.Lmutual_align: + /* + * Sources are mutually aligned, but are not currently at an + * alignment boundary. Round down the addresses and then mask off + * the bytes that preceed the start point. + */ + bic src1, src1, #7 + bic src2, src2, #7 + lsl tmp1, tmp1, #3 /* Bytes beyond alignment -> bits. */ + ldr data1, [src1], #8 + neg tmp1, tmp1 /* Bits to alignment -64. */ + ldr data2, [src2], #8 + mov tmp2, #~0 + /* Big-endian. Early bytes are at MSB. */ +CPU_BE( lsl tmp2, tmp2, tmp1 ) /* Shift (tmp1 & 63). */ + /* Little-endian. Early bytes are at LSB. */ +CPU_LE( lsr tmp2, tmp2, tmp1 ) /* Shift (tmp1 & 63). */ + + orr data1, data1, tmp2 + orr data2, data2, tmp2 + b .Lstart_realigned + +.Lmisaligned8: + /* + * Get the align offset length to compare per byte first. + * After this process, one string's address will be aligned. + */ + and tmp1, src1, #7 + neg tmp1, tmp1 + add tmp1, tmp1, #8 + and tmp2, src2, #7 + neg tmp2, tmp2 + add tmp2, tmp2, #8 + subs tmp3, tmp1, tmp2 + csel pos, tmp1, tmp2, hi /*Choose the maximum. */ +.Ltinycmp: + ldrb data1w, [src1], #1 + ldrb data2w, [src2], #1 + subs pos, pos, #1 + ccmp data1w, #1, #0, ne /* NZCV = 0b0000. */ + ccmp data1w, data2w, #0, cs /* NZCV = 0b0000. */ + b.eq .Ltinycmp + cbnz pos, 1f /*find the null or unequal...*/ + cmp data1w, #1 + ccmp data1w, data2w, #0, cs + b.eq .Lstart_align /*the last bytes are equal....*/ +1: + sub result, data1, data2 + ret + +.Lstart_align: + ands xzr, src1, #7 + b.eq .Lrecal_offset + /*process more leading bytes to make str1 aligned...*/ + add src1, src1, tmp3 + add src2, src2, tmp3 + /*load 8 bytes from aligned str1 and non-aligned str2..*/ + ldr data1, [src1], #8 + ldr data2, [src2], #8 + + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + bic has_nul, tmp1, tmp2 + eor diff, data1, data2 /* Non-zero if differences found. */ + orr syndrome, diff, has_nul + cbnz syndrome, .Lcal_cmpresult + /*How far is the current str2 from the alignment boundary...*/ + and tmp3, tmp3, #7 +.Lrecal_offset: + neg pos, tmp3 +.Lloopcmp_proc: + /* + * Divide the eight bytes into two parts. First,backwards the src2 + * to an alignment boundary,load eight bytes from the SRC2 alignment + * boundary,then compare with the relative bytes from SRC1. + * If all 8 bytes are equal,then start the second part's comparison. + * Otherwise finish the comparison. + * This special handle can garantee all the accesses are in the + * thread/task space in avoid to overrange access. + */ + ldr data1, [src1,pos] + ldr data2, [src2,pos] + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + bic has_nul, tmp1, tmp2 + eor diff, data1, data2 /* Non-zero if differences found. 
*/ + orr syndrome, diff, has_nul + cbnz syndrome, .Lcal_cmpresult + + /*The second part process*/ + ldr data1, [src1], #8 + ldr data2, [src2], #8 + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + bic has_nul, tmp1, tmp2 + eor diff, data1, data2 /* Non-zero if differences found. */ + orr syndrome, diff, has_nul + cbz syndrome, .Lloopcmp_proc + +.Lcal_cmpresult: + /* + * reversed the byte-order as big-endian,then CLZ can find the most + * significant zero bits. + */ +CPU_LE( rev syndrome, syndrome ) +CPU_LE( rev data1, data1 ) +CPU_LE( rev data2, data2 ) + + /* + * For big-endian we cannot use the trick with the syndrome value + * as carry-propagation can corrupt the upper bits if the trailing + * bytes in the string contain 0x01. + * However, if there is no NUL byte in the dword, we can generate + * the result directly. We ca not just subtract the bytes as the + * MSB might be significant. + */ +CPU_BE( cbnz has_nul, 1f ) +CPU_BE( cmp data1, data2 ) +CPU_BE( cset result, ne ) +CPU_BE( cneg result, result, lo ) +CPU_BE( ret ) +CPU_BE( 1: ) + /*Re-compute the NUL-byte detection, using a byte-reversed value. */ +CPU_BE( rev tmp3, data1 ) +CPU_BE( sub tmp1, tmp3, zeroones ) +CPU_BE( orr tmp2, tmp3, #REP8_7f ) +CPU_BE( bic has_nul, tmp1, tmp2 ) +CPU_BE( rev has_nul, has_nul ) +CPU_BE( orr syndrome, diff, has_nul ) + + clz pos, syndrome + /* + * The MS-non-zero bit of the syndrome marks either the first bit + * that is different, or the top bit of the first zero byte. + * Shifting left now will bring the critical information into the + * top bits. + */ + lsl data1, data1, pos + lsl data2, data2, pos + /* + * But we need to zero-extend (char is unsigned) the value and then + * perform a signed 32-bit subtraction. + */ + lsr data1, data1, #56 + sub result, data1, data2, lsr #56 + ret +ENDPROC(strcmp) diff --git a/arch/arm64/lib/strlen.S b/arch/arm64/lib/strlen.S new file mode 100644 index 000000000000..987b68b9ce44 --- /dev/null +++ b/arch/arm64/lib/strlen.S @@ -0,0 +1,126 @@ +/* + * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. + * + * This code is based on glibc cortex strings work originally authored by Linaro + * and re-licensed under GPLv2 for the Linux kernel. The original code can + * be found @ + * + * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ + * files/head:/src/aarch64/ + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + */ + +#include <linux/linkage.h> +#include <asm/assembler.h> + +/* + * calculate the length of a string + * + * Parameters: + * x0 - const string pointer + * Returns: + * x0 - the return length of specific string + */ + +/* Arguments and results. */ +srcin .req x0 +len .req x0 + +/* Locals and temporaries. 
*/ +src .req x1 +data1 .req x2 +data2 .req x3 +data2a .req x4 +has_nul1 .req x5 +has_nul2 .req x6 +tmp1 .req x7 +tmp2 .req x8 +tmp3 .req x9 +tmp4 .req x10 +zeroones .req x11 +pos .req x12 + +#define REP8_01 0x0101010101010101 +#define REP8_7f 0x7f7f7f7f7f7f7f7f +#define REP8_80 0x8080808080808080 + +ENTRY(strlen) + mov zeroones, #REP8_01 + bic src, srcin, #15 + ands tmp1, srcin, #15 + b.ne .Lmisaligned + /* + * NUL detection works on the principle that (X - 1) & (~X) & 0x80 + * (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and + * can be done in parallel across the entire word. + */ + /* + * The inner loop deals with two Dwords at a time. This has a + * slightly higher start-up cost, but we should win quite quickly, + * especially on cores with a high number of issue slots per + * cycle, as we get much better parallelism out of the operations. + */ +.Lloop: + ldp data1, data2, [src], #16 +.Lrealigned: + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + sub tmp3, data2, zeroones + orr tmp4, data2, #REP8_7f + bic has_nul1, tmp1, tmp2 + bics has_nul2, tmp3, tmp4 + ccmp has_nul1, #0, #0, eq /* NZCV = 0000 */ + b.eq .Lloop + + sub len, src, srcin + cbz has_nul1, .Lnul_in_data2 +CPU_BE( mov data2, data1 ) /*prepare data to re-calculate the syndrome*/ + sub len, len, #8 + mov has_nul2, has_nul1 +.Lnul_in_data2: + /* + * For big-endian, carry propagation (if the final byte in the + * string is 0x01) means we cannot use has_nul directly. The + * easiest way to get the correct byte is to byte-swap the data + * and calculate the syndrome a second time. + */ +CPU_BE( rev data2, data2 ) +CPU_BE( sub tmp1, data2, zeroones ) +CPU_BE( orr tmp2, data2, #REP8_7f ) +CPU_BE( bic has_nul2, tmp1, tmp2 ) + + sub len, len, #8 + rev has_nul2, has_nul2 + clz pos, has_nul2 + add len, len, pos, lsr #3 /* Bits to bytes. */ + ret + +.Lmisaligned: + cmp tmp1, #8 + neg tmp1, tmp1 + ldp data1, data2, [src], #16 + lsl tmp1, tmp1, #3 /* Bytes beyond alignment -> bits. */ + mov tmp2, #~0 + /* Big-endian. Early bytes are at MSB. */ +CPU_BE( lsl tmp2, tmp2, tmp1 ) /* Shift (tmp1 & 63). */ + /* Little-endian. Early bytes are at LSB. */ +CPU_LE( lsr tmp2, tmp2, tmp1 ) /* Shift (tmp1 & 63). */ + + orr data1, data1, tmp2 + orr data2a, data2, tmp2 + csinv data1, data1, xzr, le + csel data2, data2, data2a, le + b .Lrealigned +ENDPROC(strlen) diff --git a/arch/arm64/lib/strncmp.S b/arch/arm64/lib/strncmp.S new file mode 100644 index 000000000000..0224cf5a5533 --- /dev/null +++ b/arch/arm64/lib/strncmp.S @@ -0,0 +1,310 @@ +/* + * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. + * + * This code is based on glibc cortex strings work originally authored by Linaro + * and re-licensed under GPLv2 for the Linux kernel. The original code can + * be found @ + * + * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ + * files/head:/src/aarch64/ + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. 
+ */ + +#include <linux/linkage.h> +#include <asm/assembler.h> + +/* + * compare two strings + * + * Parameters: + * x0 - const string 1 pointer + * x1 - const string 2 pointer + * x2 - the maximal length to be compared + * Returns: + * x0 - an integer less than, equal to, or greater than zero if s1 is found, + * respectively, to be less than, to match, or be greater than s2. + */ + +#define REP8_01 0x0101010101010101 +#define REP8_7f 0x7f7f7f7f7f7f7f7f +#define REP8_80 0x8080808080808080 + +/* Parameters and result. */ +src1 .req x0 +src2 .req x1 +limit .req x2 +result .req x0 + +/* Internal variables. */ +data1 .req x3 +data1w .req w3 +data2 .req x4 +data2w .req w4 +has_nul .req x5 +diff .req x6 +syndrome .req x7 +tmp1 .req x8 +tmp2 .req x9 +tmp3 .req x10 +zeroones .req x11 +pos .req x12 +limit_wd .req x13 +mask .req x14 +endloop .req x15 + +ENTRY(strncmp) + cbz limit, .Lret0 + eor tmp1, src1, src2 + mov zeroones, #REP8_01 + tst tmp1, #7 + b.ne .Lmisaligned8 + ands tmp1, src1, #7 + b.ne .Lmutual_align + /* Calculate the number of full and partial words -1. */ + /* + * when limit is mulitply of 8, if not sub 1, + * the judgement of last dword will wrong. + */ + sub limit_wd, limit, #1 /* limit != 0, so no underflow. */ + lsr limit_wd, limit_wd, #3 /* Convert to Dwords. */ + + /* + * NUL detection works on the principle that (X - 1) & (~X) & 0x80 + * (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and + * can be done in parallel across the entire word. + */ +.Lloop_aligned: + ldr data1, [src1], #8 + ldr data2, [src2], #8 +.Lstart_realigned: + subs limit_wd, limit_wd, #1 + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + eor diff, data1, data2 /* Non-zero if differences found. */ + csinv endloop, diff, xzr, pl /* Last Dword or differences.*/ + bics has_nul, tmp1, tmp2 /* Non-zero if NUL terminator. */ + ccmp endloop, #0, #0, eq + b.eq .Lloop_aligned + + /*Not reached the limit, must have found the end or a diff. */ + tbz limit_wd, #63, .Lnot_limit + + /* Limit % 8 == 0 => all bytes significant. */ + ands limit, limit, #7 + b.eq .Lnot_limit + + lsl limit, limit, #3 /* Bits -> bytes. */ + mov mask, #~0 +CPU_BE( lsr mask, mask, limit ) +CPU_LE( lsl mask, mask, limit ) + bic data1, data1, mask + bic data2, data2, mask + + /* Make sure that the NUL byte is marked in the syndrome. */ + orr has_nul, has_nul, mask + +.Lnot_limit: + orr syndrome, diff, has_nul + b .Lcal_cmpresult + +.Lmutual_align: + /* + * Sources are mutually aligned, but are not currently at an + * alignment boundary. Round down the addresses and then mask off + * the bytes that precede the start point. + * We also need to adjust the limit calculations, but without + * overflowing if the limit is near ULONG_MAX. + */ + bic src1, src1, #7 + bic src2, src2, #7 + ldr data1, [src1], #8 + neg tmp3, tmp1, lsl #3 /* 64 - bits(bytes beyond align). */ + ldr data2, [src2], #8 + mov tmp2, #~0 + sub limit_wd, limit, #1 /* limit != 0, so no underflow. */ + /* Big-endian. Early bytes are at MSB. */ +CPU_BE( lsl tmp2, tmp2, tmp3 ) /* Shift (tmp1 & 63). */ + /* Little-endian. Early bytes are at LSB. */ +CPU_LE( lsr tmp2, tmp2, tmp3 ) /* Shift (tmp1 & 63). */ + + and tmp3, limit_wd, #7 + lsr limit_wd, limit_wd, #3 + /* Adjust the limit. 
Only low 3 bits used, so overflow irrelevant.*/ + add limit, limit, tmp1 + add tmp3, tmp3, tmp1 + orr data1, data1, tmp2 + orr data2, data2, tmp2 + add limit_wd, limit_wd, tmp3, lsr #3 + b .Lstart_realigned + +/*when src1 offset is not equal to src2 offset...*/ +.Lmisaligned8: + cmp limit, #8 + b.lo .Ltiny8proc /*limit < 8... */ + /* + * Get the align offset length to compare per byte first. + * After this process, one string's address will be aligned.*/ + and tmp1, src1, #7 + neg tmp1, tmp1 + add tmp1, tmp1, #8 + and tmp2, src2, #7 + neg tmp2, tmp2 + add tmp2, tmp2, #8 + subs tmp3, tmp1, tmp2 + csel pos, tmp1, tmp2, hi /*Choose the maximum. */ + /* + * Here, limit is not less than 8, so directly run .Ltinycmp + * without checking the limit.*/ + sub limit, limit, pos +.Ltinycmp: + ldrb data1w, [src1], #1 + ldrb data2w, [src2], #1 + subs pos, pos, #1 + ccmp data1w, #1, #0, ne /* NZCV = 0b0000. */ + ccmp data1w, data2w, #0, cs /* NZCV = 0b0000. */ + b.eq .Ltinycmp + cbnz pos, 1f /*find the null or unequal...*/ + cmp data1w, #1 + ccmp data1w, data2w, #0, cs + b.eq .Lstart_align /*the last bytes are equal....*/ +1: + sub result, data1, data2 + ret + +.Lstart_align: + lsr limit_wd, limit, #3 + cbz limit_wd, .Lremain8 + /*process more leading bytes to make str1 aligned...*/ + ands xzr, src1, #7 + b.eq .Lrecal_offset + add src1, src1, tmp3 /*tmp3 is positive in this branch.*/ + add src2, src2, tmp3 + ldr data1, [src1], #8 + ldr data2, [src2], #8 + + sub limit, limit, tmp3 + lsr limit_wd, limit, #3 + subs limit_wd, limit_wd, #1 + + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + eor diff, data1, data2 /* Non-zero if differences found. */ + csinv endloop, diff, xzr, ne/*if limit_wd is 0,will finish the cmp*/ + bics has_nul, tmp1, tmp2 + ccmp endloop, #0, #0, eq /*has_null is ZERO: no null byte*/ + b.ne .Lunequal_proc + /*How far is the current str2 from the alignment boundary...*/ + and tmp3, tmp3, #7 +.Lrecal_offset: + neg pos, tmp3 +.Lloopcmp_proc: + /* + * Divide the eight bytes into two parts. First,backwards the src2 + * to an alignment boundary,load eight bytes from the SRC2 alignment + * boundary,then compare with the relative bytes from SRC1. + * If all 8 bytes are equal,then start the second part's comparison. + * Otherwise finish the comparison. + * This special handle can garantee all the accesses are in the + * thread/task space in avoid to overrange access. + */ + ldr data1, [src1,pos] + ldr data2, [src2,pos] + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + bics has_nul, tmp1, tmp2 /* Non-zero if NUL terminator. */ + eor diff, data1, data2 /* Non-zero if differences found. */ + csinv endloop, diff, xzr, eq + cbnz endloop, .Lunequal_proc + + /*The second part process*/ + ldr data1, [src1], #8 + ldr data2, [src2], #8 + subs limit_wd, limit_wd, #1 + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + eor diff, data1, data2 /* Non-zero if differences found. */ + csinv endloop, diff, xzr, ne/*if limit_wd is 0,will finish the cmp*/ + bics has_nul, tmp1, tmp2 + ccmp endloop, #0, #0, eq /*has_null is ZERO: no null byte*/ + b.eq .Lloopcmp_proc + +.Lunequal_proc: + orr syndrome, diff, has_nul + cbz syndrome, .Lremain8 +.Lcal_cmpresult: + /* + * reversed the byte-order as big-endian,then CLZ can find the most + * significant zero bits. 
+ */ +CPU_LE( rev syndrome, syndrome ) +CPU_LE( rev data1, data1 ) +CPU_LE( rev data2, data2 ) + /* + * For big-endian we cannot use the trick with the syndrome value + * as carry-propagation can corrupt the upper bits if the trailing + * bytes in the string contain 0x01. + * However, if there is no NUL byte in the dword, we can generate + * the result directly. We can't just subtract the bytes as the + * MSB might be significant. + */ +CPU_BE( cbnz has_nul, 1f ) +CPU_BE( cmp data1, data2 ) +CPU_BE( cset result, ne ) +CPU_BE( cneg result, result, lo ) +CPU_BE( ret ) +CPU_BE( 1: ) + /* Re-compute the NUL-byte detection, using a byte-reversed value.*/ +CPU_BE( rev tmp3, data1 ) +CPU_BE( sub tmp1, tmp3, zeroones ) +CPU_BE( orr tmp2, tmp3, #REP8_7f ) +CPU_BE( bic has_nul, tmp1, tmp2 ) +CPU_BE( rev has_nul, has_nul ) +CPU_BE( orr syndrome, diff, has_nul ) + /* + * The MS-non-zero bit of the syndrome marks either the first bit + * that is different, or the top bit of the first zero byte. + * Shifting left now will bring the critical information into the + * top bits. + */ + clz pos, syndrome + lsl data1, data1, pos + lsl data2, data2, pos + /* + * But we need to zero-extend (char is unsigned) the value and then + * perform a signed 32-bit subtraction. + */ + lsr data1, data1, #56 + sub result, data1, data2, lsr #56 + ret + +.Lremain8: + /* Limit % 8 == 0 => all bytes significant. */ + ands limit, limit, #7 + b.eq .Lret0 +.Ltiny8proc: + ldrb data1w, [src1], #1 + ldrb data2w, [src2], #1 + subs limit, limit, #1 + + ccmp data1w, #1, #0, ne /* NZCV = 0b0000. */ + ccmp data1w, data2w, #0, cs /* NZCV = 0b0000. */ + b.eq .Ltiny8proc + sub result, data1, data2 + ret + +.Lret0: + mov result, #0 + ret +ENDPROC(strncmp) diff --git a/arch/arm64/lib/strnlen.S b/arch/arm64/lib/strnlen.S new file mode 100644 index 000000000000..2ca665711bf2 --- /dev/null +++ b/arch/arm64/lib/strnlen.S @@ -0,0 +1,171 @@ +/* + * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. + * + * This code is based on glibc cortex strings work originally authored by Linaro + * and re-licensed under GPLv2 for the Linux kernel. The original code can + * be found @ + * + * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ + * files/head:/src/aarch64/ + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + */ + +#include <linux/linkage.h> +#include <asm/assembler.h> + +/* + * determine the length of a fixed-size string + * + * Parameters: + * x0 - const string pointer + * x1 - maximal string length + * Returns: + * x0 - the return length of specific string + */ + +/* Arguments and results. */ +srcin .req x0 +len .req x0 +limit .req x1 + +/* Locals and temporaries. 
*/ +src .req x2 +data1 .req x3 +data2 .req x4 +data2a .req x5 +has_nul1 .req x6 +has_nul2 .req x7 +tmp1 .req x8 +tmp2 .req x9 +tmp3 .req x10 +tmp4 .req x11 +zeroones .req x12 +pos .req x13 +limit_wd .req x14 + +#define REP8_01 0x0101010101010101 +#define REP8_7f 0x7f7f7f7f7f7f7f7f +#define REP8_80 0x8080808080808080 + +ENTRY(strnlen) + cbz limit, .Lhit_limit + mov zeroones, #REP8_01 + bic src, srcin, #15 + ands tmp1, srcin, #15 + b.ne .Lmisaligned + /* Calculate the number of full and partial words -1. */ + sub limit_wd, limit, #1 /* Limit != 0, so no underflow. */ + lsr limit_wd, limit_wd, #4 /* Convert to Qwords. */ + + /* + * NUL detection works on the principle that (X - 1) & (~X) & 0x80 + * (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and + * can be done in parallel across the entire word. + */ + /* + * The inner loop deals with two Dwords at a time. This has a + * slightly higher start-up cost, but we should win quite quickly, + * especially on cores with a high number of issue slots per + * cycle, as we get much better parallelism out of the operations. + */ +.Lloop: + ldp data1, data2, [src], #16 +.Lrealigned: + sub tmp1, data1, zeroones + orr tmp2, data1, #REP8_7f + sub tmp3, data2, zeroones + orr tmp4, data2, #REP8_7f + bic has_nul1, tmp1, tmp2 + bic has_nul2, tmp3, tmp4 + subs limit_wd, limit_wd, #1 + orr tmp1, has_nul1, has_nul2 + ccmp tmp1, #0, #0, pl /* NZCV = 0000 */ + b.eq .Lloop + + cbz tmp1, .Lhit_limit /* No null in final Qword. */ + + /* + * We know there's a null in the final Qword. The easiest thing + * to do now is work out the length of the string and return + * MIN (len, limit). + */ + sub len, src, srcin + cbz has_nul1, .Lnul_in_data2 +CPU_BE( mov data2, data1 ) /*perpare data to re-calculate the syndrome*/ + + sub len, len, #8 + mov has_nul2, has_nul1 +.Lnul_in_data2: + /* + * For big-endian, carry propagation (if the final byte in the + * string is 0x01) means we cannot use has_nul directly. The + * easiest way to get the correct byte is to byte-swap the data + * and calculate the syndrome a second time. + */ +CPU_BE( rev data2, data2 ) +CPU_BE( sub tmp1, data2, zeroones ) +CPU_BE( orr tmp2, data2, #REP8_7f ) +CPU_BE( bic has_nul2, tmp1, tmp2 ) + + sub len, len, #8 + rev has_nul2, has_nul2 + clz pos, has_nul2 + add len, len, pos, lsr #3 /* Bits to bytes. */ + cmp len, limit + csel len, len, limit, ls /* Return the lower value. */ + ret + +.Lmisaligned: + /* + * Deal with a partial first word. + * We're doing two things in parallel here; + * 1) Calculate the number of words (but avoiding overflow if + * limit is near ULONG_MAX) - to do this we need to work out + * limit + tmp1 - 1 as a 65-bit value before shifting it; + * 2) Load and mask the initial data words - we force the bytes + * before the ones we are interested in to 0xff - this ensures + * early bytes will not hit any zero detection. + */ + ldp data1, data2, [src], #16 + + sub limit_wd, limit, #1 + and tmp3, limit_wd, #15 + lsr limit_wd, limit_wd, #4 + + add tmp3, tmp3, tmp1 + add limit_wd, limit_wd, tmp3, lsr #4 + + neg tmp4, tmp1 + lsl tmp4, tmp4, #3 /* Bytes beyond alignment -> bits. */ + + mov tmp2, #~0 + /* Big-endian. Early bytes are at MSB. */ +CPU_BE( lsl tmp2, tmp2, tmp4 ) /* Shift (tmp1 & 63). */ + /* Little-endian. Early bytes are at LSB. */ +CPU_LE( lsr tmp2, tmp2, tmp4 ) /* Shift (tmp1 & 63). 
*/ + + cmp tmp1, #8 + + orr data1, data1, tmp2 + orr data2a, data2, tmp2 + + csinv data1, data1, xzr, le + csel data2, data2, data2a, le + b .Lrealigned + +.Lhit_limit: + mov len, limit + ret +ENDPROC(strnlen) diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index b51d36401d83..3ecb56c624d3 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -1,5 +1,5 @@ obj-y := dma-mapping.o extable.o fault.o init.o \ cache.o copypage.o flush.o \ ioremap.o mmap.o pgd.o mmu.o \ - context.o tlb.o proc.o + context.o proc.o obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S index fda756875fa6..23663837acff 100644 --- a/arch/arm64/mm/cache.S +++ b/arch/arm64/mm/cache.S @@ -31,7 +31,7 @@ * Corrupted registers: x0-x7, x9-x11 */ __flush_dcache_all: - dsb sy // ensure ordering with previous memory accesses + dmb sy // ensure ordering with previous memory accesses mrs x0, clidr_el1 // read clidr and x3, x0, #0x7000000 // extract loc from clidr lsr x3, x3, #23 // left align loc bit field @@ -128,7 +128,7 @@ USER(9f, dc cvau, x4 ) // clean D line to PoU add x4, x4, x2 cmp x4, x1 b.lo 1b - dsb sy + dsb ish icache_line_size x2, x3 sub x3, x2, #1 @@ -139,7 +139,7 @@ USER(9f, ic ivau, x4 ) // invalidate I line PoU cmp x4, x1 b.lo 1b 9: // ignore any faulting cache operation - dsb sy + dsb ish isb ret ENDPROC(flush_icache_range) diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c index c851eb44dc50..4164c5ace9f8 100644 --- a/arch/arm64/mm/dma-mapping.c +++ b/arch/arm64/mm/dma-mapping.c @@ -115,7 +115,7 @@ static void *__dma_alloc_noncoherent(struct device *dev, size_t size, for (i = 0; i < (size >> PAGE_SHIFT); i++) map[i] = page + i; coherent_ptr = vmap(map, size >> PAGE_SHIFT, VM_MAP, - __get_dma_pgprot(attrs, pgprot_default, false)); + __get_dma_pgprot(attrs, __pgprot(PROT_NORMAL_NC), false)); kfree(map); if (!coherent_ptr) goto no_map; diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index c23751b06120..bcc965e2cce1 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -32,6 +32,7 @@ #include <asm/exception.h> #include <asm/debug-monitors.h> +#include <asm/esr.h> #include <asm/system_misc.h> #include <asm/pgtable.h> #include <asm/tlbflush.h> @@ -123,6 +124,7 @@ static void __do_user_fault(struct task_struct *tsk, unsigned long addr, } tsk->thread.fault_address = addr; + tsk->thread.fault_code = esr; si.si_signo = sig; si.si_errno = 0; si.si_code = code; @@ -148,8 +150,6 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re #define VM_FAULT_BADMAP 0x010000 #define VM_FAULT_BADACCESS 0x020000 -#define ESR_WRITE (1 << 6) -#define ESR_CM (1 << 8) #define ESR_LNX_EXEC (1 << 24) static int __do_page_fault(struct mm_struct *mm, unsigned long addr, @@ -218,7 +218,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr, if (esr & ESR_LNX_EXEC) { vm_flags = VM_EXEC; - } else if ((esr & ESR_WRITE) && !(esr & ESR_CM)) { + } else if ((esr & ESR_EL1_WRITE) && !(esr & ESR_EL1_CM)) { vm_flags = VM_WRITE; mm_flags |= FAULT_FLAG_WRITE; } @@ -525,7 +525,7 @@ asmlinkage int __exception do_debug_exception(unsigned long addr, info.si_errno = 0; info.si_code = inf->code; info.si_addr = (void __user *)addr; - arm64_notify_die("", regs, &info, esr); + arm64_notify_die("", regs, &info, 0); return 0; } diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 0a472c41a67f..c43f1dd19489 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -43,11 +43,6 @@ struct 
page *empty_zero_page; EXPORT_SYMBOL(empty_zero_page); -pgprot_t pgprot_default; -EXPORT_SYMBOL(pgprot_default); - -static pmdval_t prot_sect_kernel; - struct cachepolicy { const char policy[16]; u64 mair; @@ -122,33 +117,6 @@ static int __init early_cachepolicy(char *p) } early_param("cachepolicy", early_cachepolicy); -/* - * Adjust the PMD section entries according to the CPU in use. - */ -void __init init_mem_pgprot(void) -{ - pteval_t default_pgprot; - int i; - - default_pgprot = PTE_ATTRINDX(MT_NORMAL); - prot_sect_kernel = PMD_TYPE_SECT | PMD_SECT_AF | PMD_ATTRINDX(MT_NORMAL); - -#ifdef CONFIG_SMP - /* - * Mark memory with the "shared" attribute for SMP systems - */ - default_pgprot |= PTE_SHARED; - prot_sect_kernel |= PMD_SECT_S; -#endif - - for (i = 0; i < 16; i++) { - unsigned long v = pgprot_val(protection_map[i]); - protection_map[i] = __pgprot(v | default_pgprot); - } - - pgprot_default = __pgprot(PTE_TYPE_PAGE | PTE_AF | default_pgprot); -} - pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn, unsigned long size, pgprot_t vma_prot) { @@ -168,7 +136,8 @@ static void __init *early_alloc(unsigned long sz) } static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr, - unsigned long end, unsigned long pfn) + unsigned long end, unsigned long pfn, + pgprot_t prot) { pte_t *pte; @@ -180,16 +149,27 @@ static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr, pte = pte_offset_kernel(pmd, addr); do { - set_pte(pte, pfn_pte(pfn, PAGE_KERNEL_EXEC)); + set_pte(pte, pfn_pte(pfn, prot)); pfn++; } while (pte++, addr += PAGE_SIZE, addr != end); } static void __init alloc_init_pmd(pud_t *pud, unsigned long addr, - unsigned long end, phys_addr_t phys) + unsigned long end, phys_addr_t phys, + int map_io) { pmd_t *pmd; unsigned long next; + pmdval_t prot_sect; + pgprot_t prot_pte; + + if (map_io) { + prot_sect = PROT_SECT_DEVICE_nGnRE; + prot_pte = __pgprot(PROT_DEVICE_nGnRE); + } else { + prot_sect = PROT_SECT_NORMAL_EXEC; + prot_pte = PAGE_KERNEL_EXEC; + } /* * Check for initial section mappings in the pgd/pud and remove them. @@ -205,7 +185,7 @@ static void __init alloc_init_pmd(pud_t *pud, unsigned long addr, /* try section mapping first */ if (((addr | next | phys) & ~SECTION_MASK) == 0) { pmd_t old_pmd =*pmd; - set_pmd(pmd, __pmd(phys | prot_sect_kernel)); + set_pmd(pmd, __pmd(phys | prot_sect)); /* * Check for previous table entries created during * boot (__create_page_tables) and flush them. @@ -213,21 +193,46 @@ static void __init alloc_init_pmd(pud_t *pud, unsigned long addr, if (!pmd_none(old_pmd)) flush_tlb_all(); } else { - alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys)); + alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys), + prot_pte); } phys += next - addr; } while (pmd++, addr = next, addr != end); } static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr, - unsigned long end, unsigned long phys) + unsigned long end, unsigned long phys, + int map_io) { pud_t *pud = pud_offset(pgd, addr); unsigned long next; do { next = pud_addr_end(addr, end); - alloc_init_pmd(pud, addr, next, phys); + + /* + * For 4K granule only, attempt to put down a 1GB block + */ + if (!map_io && (PAGE_SHIFT == 12) && + ((addr | next | phys) & ~PUD_MASK) == 0) { + pud_t old_pud = *pud; + set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC)); + + /* + * If we have an old value for a pud, it will + * be pointing to a pmd table that we no longer + * need (from swapper_pg_dir). + * + * Look up the old pmd table and free it. 
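+			 * The 1GB block entry installed above supersedes that
+			 * table, so its page can be handed back to memblock;
+			 * the flush below removes any stale walks through it.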
+ */ + if (!pud_none(old_pud)) { + phys_addr_t table = __pa(pmd_offset(&old_pud, 0)); + memblock_free(table, PAGE_SIZE); + flush_tlb_all(); + } + } else { + alloc_init_pmd(pud, addr, next, phys, map_io); + } phys += next - addr; } while (pud++, addr = next, addr != end); } @@ -236,30 +241,44 @@ static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr, * Create the page directory entries and any necessary page tables for the * mapping specified by 'md'. */ -static void __init create_mapping(phys_addr_t phys, unsigned long virt, - phys_addr_t size) +static void __init __create_mapping(pgd_t *pgd, phys_addr_t phys, + unsigned long virt, phys_addr_t size, + int map_io) { unsigned long addr, length, end, next; - pgd_t *pgd; - - if (virt < VMALLOC_START) { - pr_warning("BUG: not creating mapping for 0x%016llx at 0x%016lx - outside kernel range\n", - phys, virt); - return; - } addr = virt & PAGE_MASK; length = PAGE_ALIGN(size + (virt & ~PAGE_MASK)); - pgd = pgd_offset_k(addr); end = addr + length; do { next = pgd_addr_end(addr, end); - alloc_init_pud(pgd, addr, next, phys); + alloc_init_pud(pgd, addr, next, phys, map_io); phys += next - addr; } while (pgd++, addr = next, addr != end); } +static void __init create_mapping(phys_addr_t phys, unsigned long virt, + phys_addr_t size) +{ + if (virt < VMALLOC_START) { + pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n", + &phys, virt); + return; + } + __create_mapping(pgd_offset_k(virt & PAGE_MASK), phys, virt, size, 0); +} + +void __init create_id_mapping(phys_addr_t addr, phys_addr_t size, int map_io) +{ + if ((addr >> PGDIR_SHIFT) >= ARRAY_SIZE(idmap_pg_dir)) { + pr_warn("BUG: not creating id mapping for %pa\n", &addr); + return; + } + __create_mapping(&idmap_pg_dir[pgd_index(addr)], + addr, addr, size, map_io); +} + static void __init map_mem(void) { struct memblock_region *reg; @@ -370,6 +389,9 @@ int kern_addr_valid(unsigned long addr) if (pud_none(*pud)) return 0; + if (pud_sect(*pud)) + return pfn_valid(pud_pfn(*pud)); + pmd = pmd_offset(pud, addr); if (pmd_none(*pmd)) return 0; @@ -417,7 +439,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node) if (!p) return -ENOMEM; - set_pmd(pmd, __pmd(__pa(p) | prot_sect_kernel)); + set_pmd(pmd, __pmd(__pa(p) | PROT_SECT_NORMAL)); } else vmemmap_verify((pte_t *)pmd, node, addr, next); } while (addr = next, addr != end); diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 9042aff5e9e3..7736779c9809 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -182,7 +182,7 @@ ENDPROC(cpu_do_switch_mm) ENTRY(__cpu_setup) ic iallu // I+BTB cache invalidate tlbi vmalle1is // invalidate I + D TLBs - dsb sy + dsb ish mov x0, #3 << 20 msr cpacr_el1, x0 // Enable FP/ASIMD diff --git a/arch/arm64/mm/tlb.S b/arch/arm64/mm/tlb.S deleted file mode 100644 index 19da91e0cd27..000000000000 --- a/arch/arm64/mm/tlb.S +++ /dev/null @@ -1,71 +0,0 @@ -/* - * Based on arch/arm/mm/tlb.S - * - * Copyright (C) 1997-2002 Russell King - * Copyright (C) 2012 ARM Ltd. - * Written by Catalin Marinas <catalin.marinas@arm.com> - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program. If not, see <http://www.gnu.org/licenses/>. - */ -#include <linux/linkage.h> -#include <asm/assembler.h> -#include <asm/asm-offsets.h> -#include <asm/page.h> -#include <asm/tlbflush.h> -#include "proc-macros.S" - -/* - * __cpu_flush_user_tlb_range(start, end, vma) - * - * Invalidate a range of TLB entries in the specified address space. - * - * - start - start address (may not be aligned) - * - end - end address (exclusive, may not be aligned) - * - vma - vma_struct describing address range - */ -ENTRY(__cpu_flush_user_tlb_range) - vma_vm_mm x3, x2 // get vma->vm_mm - mmid w3, x3 // get vm_mm->context.id - dsb sy - lsr x0, x0, #12 // align address - lsr x1, x1, #12 - bfi x0, x3, #48, #16 // start VA and ASID - bfi x1, x3, #48, #16 // end VA and ASID -1: tlbi vae1is, x0 // TLB invalidate by address and ASID - add x0, x0, #1 - cmp x0, x1 - b.lo 1b - dsb sy - ret -ENDPROC(__cpu_flush_user_tlb_range) - -/* - * __cpu_flush_kern_tlb_range(start,end) - * - * Invalidate a range of kernel TLB entries. - * - * - start - start address (may not be aligned) - * - end - end address (exclusive, may not be aligned) - */ -ENTRY(__cpu_flush_kern_tlb_range) - dsb sy - lsr x0, x0, #12 // align address - lsr x1, x1, #12 -1: tlbi vaae1is, x0 // TLB invalidate by address - add x0, x0, #1 - cmp x0, x1 - b.lo 1b - dsb sy - isb - ret -ENDPROC(__cpu_flush_kern_tlb_range) diff --git a/arch/blackfin/include/asm/ftrace.h b/arch/blackfin/include/asm/ftrace.h index 8a029505d7b7..2f1c3c2657ad 100644 --- a/arch/blackfin/include/asm/ftrace.h +++ b/arch/blackfin/include/asm/ftrace.h @@ -66,16 +66,7 @@ extern inline void *return_address(unsigned int level) #endif /* CONFIG_FRAME_POINTER */ -#define HAVE_ARCH_CALLER_ADDR - -/* inline function or macro may lead to unexpected result */ -#define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0)) -#define CALLER_ADDR1 ((unsigned long)return_address(1)) -#define CALLER_ADDR2 ((unsigned long)return_address(2)) -#define CALLER_ADDR3 ((unsigned long)return_address(3)) -#define CALLER_ADDR4 ((unsigned long)return_address(4)) -#define CALLER_ADDR5 ((unsigned long)return_address(5)) -#define CALLER_ADDR6 ((unsigned long)return_address(6)) +#define ftrace_return_address(n) return_address(n) #endif /* __ASSEMBLY__ */ diff --git a/arch/blackfin/kernel/ptrace.c b/arch/blackfin/kernel/ptrace.c index e1f88e028cfe..8b8fe671b1a6 100644 --- a/arch/blackfin/kernel/ptrace.c +++ b/arch/blackfin/kernel/ptrace.c @@ -117,6 +117,7 @@ put_reg(struct task_struct *task, unsigned long regno, unsigned long data) int is_user_addr_valid(struct task_struct *child, unsigned long start, unsigned long len) { + bool valid; struct vm_area_struct *vma; struct sram_list_struct *sraml; @@ -124,9 +125,12 @@ is_user_addr_valid(struct task_struct *child, unsigned long start, unsigned long if (start + len < start) return -EIO; + down_read(&child->mm->mmap_sem); vma = find_vma(child->mm, start); - if (vma && start >= vma->vm_start && start + len <= vma->vm_end) - return 0; + valid = vma && start >= vma->vm_start && start + len <= vma->vm_end; + up_read(&child->mm->mmap_sem); + if (valid) + return 0; for (sraml = child->mm->context.sram_list; sraml; sraml = sraml->next) if (start >= (unsigned long)sraml->addr diff --git a/arch/cris/arch-v10/drivers/gpio.c b/arch/cris/arch-v10/drivers/gpio.c index f4374bae4fb4..64285e0d3481 100644 --- 
a/arch/cris/arch-v10/drivers/gpio.c +++ b/arch/cris/arch-v10/drivers/gpio.c @@ -833,8 +833,8 @@ static int __init gpio_init(void) printk(KERN_INFO "ETRAX 100LX GPIO driver v2.5, (c) 2001-2008 " "Axis Communications AB\n"); /* We call etrax_gpio_wake_up_check() from timer interrupt and - * from cpu_idle() in kernel/process.c - * The check in cpu_idle() reduces latency from ~15 ms to ~6 ms + * from default_idle() in kernel/process.c + * The check in default_idle() reduces latency from ~15 ms to ~6 ms * in some tests. */ res = request_irq(TIMER0_IRQ_NBR, gpio_poll_timer_interrupt, diff --git a/arch/cris/arch-v32/drivers/mach-fs/gpio.c b/arch/cris/arch-v32/drivers/mach-fs/gpio.c index 9e54273af0ca..009f4ee1bd09 100644 --- a/arch/cris/arch-v32/drivers/mach-fs/gpio.c +++ b/arch/cris/arch-v32/drivers/mach-fs/gpio.c @@ -958,11 +958,7 @@ gpio_init(void) printk(KERN_INFO "ETRAX FS GPIO driver v2.5, (c) 2003-2007 " "Axis Communications AB\n"); - /* We call etrax_gpio_wake_up_check() from timer interrupt and - * from cpu_idle() in kernel/process.c - * The check in cpu_idle() reduces latency from ~15 ms to ~6 ms - * in some tests. - */ + /* We call etrax_gpio_wake_up_check() from timer interrupt */ if (request_irq(TIMER0_INTR_VECT, gpio_poll_timer_interrupt, IRQF_SHARED, "gpio poll", &alarmlist)) printk(KERN_ERR "timer0 irq for gpio\n"); diff --git a/arch/ia64/kernel/crash.c b/arch/ia64/kernel/crash.c index b942f4032d7a..2955f359e2a7 100644 --- a/arch/ia64/kernel/crash.c +++ b/arch/ia64/kernel/crash.c @@ -237,7 +237,7 @@ kdump_init_notifier(struct notifier_block *self, unsigned long val, void *data) } #ifdef CONFIG_SYSCTL -static ctl_table kdump_ctl_table[] = { +static struct ctl_table kdump_ctl_table[] = { { .procname = "kdump_on_init", .data = &kdump_on_init, @@ -255,7 +255,7 @@ static ctl_table kdump_ctl_table[] = { { } }; -static ctl_table sys_table[] = { +static struct ctl_table sys_table[] = { { .procname = "kernel", .mode = 0555, diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c index d841c4bd6864..5845ffea67c3 100644 --- a/arch/ia64/kernel/perfmon.c +++ b/arch/ia64/kernel/perfmon.c @@ -521,7 +521,7 @@ static pmu_config_t *pmu_conf; pfm_sysctl_t pfm_sysctl; EXPORT_SYMBOL(pfm_sysctl); -static ctl_table pfm_ctl_table[]={ +static struct ctl_table pfm_ctl_table[] = { { .procname = "debug", .data = &pfm_sysctl.debug, @@ -552,7 +552,7 @@ static ctl_table pfm_ctl_table[]={ }, {} }; -static ctl_table pfm_sysctl_dir[] = { +static struct ctl_table pfm_sysctl_dir[] = { { .procname = "perfmon", .mode = 0555, @@ -560,7 +560,7 @@ static ctl_table pfm_sysctl_dir[] = { }, {} }; -static ctl_table pfm_sysctl_root[] = { +static struct ctl_table pfm_sysctl_root[] = { { .procname = "kernel", .mode = 0555, diff --git a/arch/m68k/include/asm/signal.h b/arch/m68k/include/asm/signal.h index 214320b50384..8c8ce5e1ee0e 100644 --- a/arch/m68k/include/asm/signal.h +++ b/arch/m68k/include/asm/signal.h @@ -60,15 +60,6 @@ static inline int __gen_sigismember(sigset_t *set, int _sig) __const_sigismember(set,sig) : \ __gen_sigismember(set,sig)) -static inline int sigfindinword(unsigned long word) -{ - asm ("bfffo %1{#0,#0},%0" - : "=d" (word) - : "d" (word & -word) - : "cc"); - return word ^ 31; -} - #endif /* !CONFIG_CPU_HAS_NO_BITFIELDS */ #ifndef __uClinux__ diff --git a/arch/microblaze/configs/mmu_defconfig b/arch/microblaze/configs/mmu_defconfig index deaf45ab6429..e2f6543b91e7 100644 --- a/arch/microblaze/configs/mmu_defconfig +++ b/arch/microblaze/configs/mmu_defconfig @@ -49,6 +49,7 @@ 
CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_SERIAL_UARTLITE=y CONFIG_SERIAL_UARTLITE_CONSOLE=y +CONFIG_SERIAL_OF_PLATFORM=y # CONFIG_HW_RANDOM is not set CONFIG_XILINX_HWICAP=y CONFIG_I2C=y diff --git a/arch/microblaze/configs/nommu_defconfig b/arch/microblaze/configs/nommu_defconfig index 10b5172283d7..a29ebd4a9fcb 100644 --- a/arch/microblaze/configs/nommu_defconfig +++ b/arch/microblaze/configs/nommu_defconfig @@ -58,6 +58,7 @@ CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y CONFIG_SERIAL_UARTLITE=y CONFIG_SERIAL_UARTLITE_CONSOLE=y +CONFIG_SERIAL_OF_PLATFORM=y # CONFIG_HW_RANDOM is not set CONFIG_XILINX_HWICAP=y CONFIG_I2C=y diff --git a/arch/microblaze/include/asm/Kbuild b/arch/microblaze/include/asm/Kbuild index c98ed95c0541..35b3ecaf25d5 100644 --- a/arch/microblaze/include/asm/Kbuild +++ b/arch/microblaze/include/asm/Kbuild @@ -2,6 +2,7 @@ generic-y += barrier.h generic-y += clkdev.h generic-y += cputime.h +generic-y += device.h generic-y += exec.h generic-y += hash.h generic-y += mcs_spinlock.h diff --git a/arch/microblaze/include/asm/device.h b/arch/microblaze/include/asm/device.h deleted file mode 100644 index 123b2fe72d01..000000000000 --- a/arch/microblaze/include/asm/device.h +++ /dev/null @@ -1,26 +0,0 @@ -/* - * Arch specific extensions to struct device - * - * This file is subject to the terms and conditions of the GNU General Public - * License v2. See the file "COPYING" in the main directory of this archive - * for more details. - */ - -#ifndef _ASM_MICROBLAZE_DEVICE_H -#define _ASM_MICROBLAZE_DEVICE_H - -struct device_node; - -struct dev_archdata { - /* DMA operations on that device */ - struct dma_map_ops *dma_ops; - void *dma_data; -}; - -struct pdev_archdata { - u64 dma_mask; -}; - -#endif /* _ASM_MICROBLAZE_DEVICE_H */ - - diff --git a/arch/microblaze/include/asm/dma-mapping.h b/arch/microblaze/include/asm/dma-mapping.h index 46460f1c49c4..ab353723076a 100644 --- a/arch/microblaze/include/asm/dma-mapping.h +++ b/arch/microblaze/include/asm/dma-mapping.h @@ -35,16 +35,6 @@ #define __dma_alloc_coherent(dev, gfp, size, handle) NULL #define __dma_free_coherent(size, addr) ((void)0) -static inline unsigned long device_to_mask(struct device *dev) -{ - if (dev->dma_mask && *dev->dma_mask) - return *dev->dma_mask; - /* Assume devices without mask can take 32 bit addresses */ - return 0xfffffffful; -} - -extern struct dma_map_ops *dma_ops; - /* * Available generic sets of operations */ @@ -52,20 +42,7 @@ extern struct dma_map_ops dma_direct_ops; static inline struct dma_map_ops *get_dma_ops(struct device *dev) { - /* We don't handle the NULL dev case for ISA for now. We could - * do it via an out of line call but it is not needed for now. The - * only ISA DMA device we support is the floppy and we have a hack - * in the floppy driver directly to get a device for us. 
- */ - if (unlikely(!dev) || !dev->archdata.dma_ops) - return NULL; - - return dev->archdata.dma_ops; -} - -static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops) -{ - dev->archdata.dma_ops = ops; + return &dma_direct_ops; } static inline int dma_supported(struct device *dev, u64 mask) diff --git a/arch/microblaze/include/asm/io.h b/arch/microblaze/include/asm/io.h index 1e4c3329f62e..433751b2a003 100644 --- a/arch/microblaze/include/asm/io.h +++ b/arch/microblaze/include/asm/io.h @@ -19,17 +19,14 @@ #ifndef CONFIG_PCI #define _IO_BASE 0 #define _ISA_MEM_BASE 0 -#define PCI_DRAM_OFFSET 0 #else #define _IO_BASE isa_io_base #define _ISA_MEM_BASE isa_mem_base -#define PCI_DRAM_OFFSET pci_dram_offset struct pci_dev; extern void pci_iounmap(struct pci_dev *dev, void __iomem *); #define pci_iounmap pci_iounmap extern unsigned long isa_io_base; -extern unsigned long pci_dram_offset; extern resource_size_t isa_mem_base; #endif diff --git a/arch/microblaze/include/asm/pci.h b/arch/microblaze/include/asm/pci.h index 335524040fff..468aca8cec0d 100644 --- a/arch/microblaze/include/asm/pci.h +++ b/arch/microblaze/include/asm/pci.h @@ -45,14 +45,6 @@ struct pci_dev; #define pcibios_assign_all_busses() 0 #ifdef CONFIG_PCI -extern void set_pci_dma_ops(struct dma_map_ops *dma_ops); -extern struct dma_map_ops *get_pci_dma_ops(void); -#else /* CONFIG_PCI */ -#define set_pci_dma_ops(d) -#define get_pci_dma_ops() NULL -#endif - -#ifdef CONFIG_PCI static inline void pci_dma_burst_advice(struct pci_dev *pdev, enum pci_dma_burst_strategy *strat, unsigned long *strategy_parameter) diff --git a/arch/microblaze/kernel/dma.c b/arch/microblaze/kernel/dma.c index da68d00fd087..4633c36c1b32 100644 --- a/arch/microblaze/kernel/dma.c +++ b/arch/microblaze/kernel/dma.c @@ -13,23 +13,6 @@ #include <linux/export.h> #include <linux/bug.h> -/* - * Generic direct DMA implementation - * - * This implementation supports a per-device offset that can be applied if - * the address at which memory is visible to devices is not 0. Platform code - * can set archdata.dma_data to an unsigned long holding the offset. By - * default the offset is PCI_DRAM_OFFSET. 
- */ - -static unsigned long get_dma_direct_offset(struct device *dev) -{ - if (likely(dev)) - return (unsigned long)dev->archdata.dma_data; - - return PCI_DRAM_OFFSET; /* FIXME Not sure if is correct */ -} - #define NOT_COHERENT_CACHE static void *dma_direct_alloc_coherent(struct device *dev, size_t size, @@ -51,7 +34,7 @@ static void *dma_direct_alloc_coherent(struct device *dev, size_t size, return NULL; ret = page_address(page); memset(ret, 0, size); - *dma_handle = virt_to_phys(ret) + get_dma_direct_offset(dev); + *dma_handle = virt_to_phys(ret); return ret; #endif @@ -77,7 +60,7 @@ static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, /* FIXME this part of code is untested */ for_each_sg(sgl, sg, nents, i) { - sg->dma_address = sg_phys(sg) + get_dma_direct_offset(dev); + sg->dma_address = sg_phys(sg); __dma_sync(page_to_phys(sg_page(sg)) + sg->offset, sg->length, direction); } @@ -85,12 +68,6 @@ static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, return nents; } -static void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sg, - int nents, enum dma_data_direction direction, - struct dma_attrs *attrs) -{ -} - static int dma_direct_dma_supported(struct device *dev, u64 mask) { return 1; @@ -104,7 +81,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev, struct dma_attrs *attrs) { __dma_sync(page_to_phys(page) + offset, size, direction); - return page_to_phys(page) + offset + get_dma_direct_offset(dev); + return page_to_phys(page) + offset; } static inline void dma_direct_unmap_page(struct device *dev, @@ -181,7 +158,6 @@ struct dma_map_ops dma_direct_ops = { .alloc = dma_direct_alloc_coherent, .free = dma_direct_free_coherent, .map_sg = dma_direct_map_sg, - .unmap_sg = dma_direct_unmap_sg, .dma_supported = dma_direct_dma_supported, .map_page = dma_direct_map_page, .unmap_page = dma_direct_unmap_page, diff --git a/arch/microblaze/kernel/head.S b/arch/microblaze/kernel/head.S index 17645b2e2f07..4655ff342c64 100644 --- a/arch/microblaze/kernel/head.S +++ b/arch/microblaze/kernel/head.S @@ -205,7 +205,7 @@ GT4: /* r11 contains the rest - will be either 1 or 4 */ GT16: /* TLB0 is 16MB */ addik r9, r0, 0x1000000 /* means TLB0 is 16MB */ TLB1: - /* must be used r2 because of substract if failed */ + /* must be used r2 because of subtract if failed */ addik r2, r11, -0x0400000 bgei r2, GT20 /* size is greater than 16MB */ /* size is >16MB and <20MB */ diff --git a/arch/microblaze/kernel/setup.c b/arch/microblaze/kernel/setup.c index 67cc4b282cc1..ab5b488e1fde 100644 --- a/arch/microblaze/kernel/setup.c +++ b/arch/microblaze/kernel/setup.c @@ -71,13 +71,9 @@ void __init setup_arch(char **cmdline_p) xilinx_pci_init(); -#ifdef CONFIG_VT -#if defined(CONFIG_XILINX_CONSOLE) - conswitchp = &xil_con; -#elif defined(CONFIG_DUMMY_CONSOLE) +#if defined(CONFIG_DUMMY_CONSOLE) conswitchp = &dummy_con; #endif -#endif } #ifdef CONFIG_MTD_UCLINUX @@ -229,31 +225,3 @@ static int __init debugfs_tlb(void) device_initcall(debugfs_tlb); # endif #endif - -static int dflt_bus_notify(struct notifier_block *nb, - unsigned long action, void *data) -{ - struct device *dev = data; - - /* We are only intereted in device addition */ - if (action != BUS_NOTIFY_ADD_DEVICE) - return 0; - - set_dma_ops(dev, &dma_direct_ops); - - return NOTIFY_DONE; -} - -static struct notifier_block dflt_plat_bus_notifier = { - .notifier_call = dflt_bus_notify, - .priority = INT_MAX, -}; - -static int __init setup_bus_notifier(void) -{ - 
bus_register_notifier(&platform_bus_type, &dflt_plat_bus_notifier); - - return 0; -} - -arch_initcall(setup_bus_notifier); diff --git a/arch/microblaze/pci/pci-common.c b/arch/microblaze/pci/pci-common.c index a59de1bc1ce0..9037914f6985 100644 --- a/arch/microblaze/pci/pci-common.c +++ b/arch/microblaze/pci/pci-common.c @@ -47,24 +47,9 @@ static int global_phb_number; /* Global phb counter */ /* ISA Memory physical address */ resource_size_t isa_mem_base; -static struct dma_map_ops *pci_dma_ops = &dma_direct_ops; - unsigned long isa_io_base; -unsigned long pci_dram_offset; static int pci_bus_count; - -void set_pci_dma_ops(struct dma_map_ops *dma_ops) -{ - pci_dma_ops = dma_ops; -} - -struct dma_map_ops *get_pci_dma_ops(void) -{ - return pci_dma_ops; -} -EXPORT_SYMBOL(get_pci_dma_ops); - struct pci_controller *pcibios_alloc_controller(struct device_node *dev) { struct pci_controller *phb; @@ -866,10 +851,6 @@ void pcibios_setup_bus_devices(struct pci_bus *bus) */ set_dev_node(&dev->dev, pcibus_to_node(dev->bus)); - /* Hook up default DMA ops */ - set_dma_ops(&dev->dev, pci_dma_ops); - dev->dev.archdata.dma_data = (void *)PCI_DRAM_OFFSET; - /* Read default IRQs and fixup if necessary */ dev->irq = of_irq_parse_and_map_pci(dev, 0, 0); } diff --git a/arch/mips/dec/Makefile b/arch/mips/dec/Makefile index 3d5d2c56de8d..bd74e05c90b0 100644 --- a/arch/mips/dec/Makefile +++ b/arch/mips/dec/Makefile @@ -3,7 +3,7 @@ # obj-y := ecc-berr.o int-handler.o ioasic-irq.o kn01-berr.o \ - kn02-irq.o kn02xa-berr.o reset.o setup.o time.o + kn02-irq.o kn02xa-berr.o platform.o reset.o setup.o time.o obj-$(CONFIG_TC) += tc.o obj-$(CONFIG_CPU_HAS_WB) += wbflush.o diff --git a/arch/mips/dec/platform.c b/arch/mips/dec/platform.c new file mode 100644 index 000000000000..c7ac86af847a --- /dev/null +++ b/arch/mips/dec/platform.c @@ -0,0 +1,44 @@ +/* + * DEC platform devices. + * + * Copyright (c) 2014 Maciej W. Rozycki + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version + * 2 of the License, or (at your option) any later version. 
+ */ + +#include <linux/ioport.h> +#include <linux/kernel.h> +#include <linux/mc146818rtc.h> +#include <linux/platform_device.h> + +static struct resource dec_rtc_resources[] = { + { + .name = "rtc", + .flags = IORESOURCE_MEM, + }, +}; + +static struct cmos_rtc_board_info dec_rtc_info = { + .flags = CMOS_RTC_FLAGS_NOFREQ, + .address_space = 64, +}; + +static struct platform_device dec_rtc_device = { + .name = "rtc_cmos", + .id = PLATFORM_DEVID_NONE, + .dev.platform_data = &dec_rtc_info, + .resource = dec_rtc_resources, + .num_resources = ARRAY_SIZE(dec_rtc_resources), +}; + +static int __init dec_add_devices(void) +{ + dec_rtc_resources[0].start = RTC_PORT(0); + dec_rtc_resources[0].end = RTC_PORT(0) + dec_kn_slot_size - 1; + return platform_device_register(&dec_rtc_device); +} + +device_initcall(dec_add_devices); diff --git a/arch/parisc/include/asm/ftrace.h b/arch/parisc/include/asm/ftrace.h index 72c0fafaa039..544ed8ef87eb 100644 --- a/arch/parisc/include/asm/ftrace.h +++ b/arch/parisc/include/asm/ftrace.h @@ -24,15 +24,7 @@ extern void return_to_handler(void); extern unsigned long return_address(unsigned int); -#define HAVE_ARCH_CALLER_ADDR - -#define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0)) -#define CALLER_ADDR1 return_address(1) -#define CALLER_ADDR2 return_address(2) -#define CALLER_ADDR3 return_address(3) -#define CALLER_ADDR4 return_address(4) -#define CALLER_ADDR5 return_address(5) -#define CALLER_ADDR6 return_address(6) +#define ftrace_return_address(n) return_address(n) #endif /* __ASSEMBLY__ */ diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c index ca1cd7459c4a..248ee7e5bebd 100644 --- a/arch/powerpc/kernel/irq.c +++ b/arch/powerpc/kernel/irq.c @@ -304,7 +304,7 @@ void notrace restore_interrupts(void) * being re-enabled and generally sanitized the lazy irq state, * and in the latter case it will leave with interrupts hard * disabled and marked as such, so the local_irq_enable() call - * in cpu_idle() will properly re-enable everything. + * in arch_cpu_idle() will properly re-enable everything. 
*/ bool prep_irq_for_idle(void) { diff --git a/arch/sh/include/asm/ftrace.h b/arch/sh/include/asm/ftrace.h index 13e9966464c2..e79fb6ebaa42 100644 --- a/arch/sh/include/asm/ftrace.h +++ b/arch/sh/include/asm/ftrace.h @@ -40,15 +40,7 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr) /* arch/sh/kernel/return_address.c */ extern void *return_address(unsigned int); -#define HAVE_ARCH_CALLER_ADDR - -#define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0)) -#define CALLER_ADDR1 ((unsigned long)return_address(1)) -#define CALLER_ADDR2 ((unsigned long)return_address(2)) -#define CALLER_ADDR3 ((unsigned long)return_address(3)) -#define CALLER_ADDR4 ((unsigned long)return_address(4)) -#define CALLER_ADDR5 ((unsigned long)return_address(5)) -#define CALLER_ADDR6 ((unsigned long)return_address(6)) +#define ftrace_return_address(n) return_address(n) #endif /* __ASSEMBLY__ */ diff --git a/arch/tile/kernel/proc.c b/arch/tile/kernel/proc.c index 681100c59fda..6829a9508649 100644 --- a/arch/tile/kernel/proc.c +++ b/arch/tile/kernel/proc.c @@ -113,7 +113,7 @@ arch_initcall(proc_tile_init); * Support /proc/sys/tile directory */ -static ctl_table unaligned_subtable[] = { +static struct ctl_table unaligned_subtable[] = { { .procname = "enabled", .data = &unaligned_fixup, @@ -138,7 +138,7 @@ static ctl_table unaligned_subtable[] = { {} }; -static ctl_table unaligned_table[] = { +static struct ctl_table unaligned_table[] = { { .procname = "unaligned_fixup", .mode = 0555, diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c index 4703a6c4b8e3..0331d765c2bb 100644 --- a/arch/x86/boot/compressed/eboot.c +++ b/arch/x86/boot/compressed/eboot.c @@ -1087,8 +1087,7 @@ struct boot_params *make_boot_params(struct efi_config *c) hdr->type_of_loader = 0x21; /* Convert unicode cmdline to ascii */ - cmdline_ptr = efi_convert_cmdline_to_ascii(sys_table, image, - &options_size); + cmdline_ptr = efi_convert_cmdline(sys_table, image, &options_size); if (!cmdline_ptr) goto fail; hdr->cmd_line_ptr = (unsigned long)cmdline_ptr; diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S index 0d558ee899ae..2884e0c3e8a5 100644 --- a/arch/x86/boot/compressed/head_64.S +++ b/arch/x86/boot/compressed/head_64.S @@ -452,7 +452,7 @@ efi32_config: .global efi64_config efi64_config: .fill 11,8,0 - .quad efi_call6 + .quad efi_call .byte 1 #endif /* CONFIG_EFI_STUB */ diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S index 0ca9a5c362bc..84c223479e3c 100644 --- a/arch/x86/boot/header.S +++ b/arch/x86/boot/header.S @@ -375,8 +375,7 @@ xloadflags: # define XLF0 0 #endif -#if defined(CONFIG_RELOCATABLE) && defined(CONFIG_X86_64) && \ - !defined(CONFIG_EFI_MIXED) +#if defined(CONFIG_RELOCATABLE) && defined(CONFIG_X86_64) /* kernel/boot_param/ramdisk could be loaded above 4g */ # define XLF1 XLF_CAN_BE_LOADED_ABOVE_4G #else diff --git a/arch/x86/crypto/ghash-clmulni-intel_asm.S b/arch/x86/crypto/ghash-clmulni-intel_asm.S index 185fad49d86f..5d1e0075ac24 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_asm.S +++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S @@ -92,7 +92,7 @@ __clmul_gf128mul_ble: ret ENDPROC(__clmul_gf128mul_ble) -/* void clmul_ghash_mul(char *dst, const be128 *shash) */ +/* void clmul_ghash_mul(char *dst, const u128 *shash) */ ENTRY(clmul_ghash_mul) movups (%rdi), DATA movups (%rsi), SHASH @@ -106,7 +106,7 @@ ENDPROC(clmul_ghash_mul) /* * void clmul_ghash_update(char *dst, const char *src, unsigned int srclen, - * const be128 *shash); + * 
const u128 *shash); */ ENTRY(clmul_ghash_update) cmp $16, %rdx diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c index d785cf2c529c..88bb7ba8b175 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_glue.c +++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c @@ -25,17 +25,17 @@ #define GHASH_BLOCK_SIZE 16 #define GHASH_DIGEST_SIZE 16 -void clmul_ghash_mul(char *dst, const be128 *shash); +void clmul_ghash_mul(char *dst, const u128 *shash); void clmul_ghash_update(char *dst, const char *src, unsigned int srclen, - const be128 *shash); + const u128 *shash); struct ghash_async_ctx { struct cryptd_ahash *cryptd_tfm; }; struct ghash_ctx { - be128 shash; + u128 shash; }; struct ghash_desc_ctx { @@ -68,11 +68,11 @@ static int ghash_setkey(struct crypto_shash *tfm, a = be64_to_cpu(x->a); b = be64_to_cpu(x->b); - ctx->shash.a = (__be64)((b << 1) | (a >> 63)); - ctx->shash.b = (__be64)((a << 1) | (b >> 63)); + ctx->shash.a = (b << 1) | (a >> 63); + ctx->shash.b = (a << 1) | (b >> 63); if (a >> 63) - ctx->shash.b ^= cpu_to_be64(0xc2); + ctx->shash.b ^= ((u64)0xc2) << 56; return 0; } diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h index 0869434eaf72..1eb5f6433ad8 100644 --- a/arch/x86/include/asm/efi.h +++ b/arch/x86/include/asm/efi.h @@ -1,6 +1,7 @@ #ifndef _ASM_X86_EFI_H #define _ASM_X86_EFI_H +#include <asm/i387.h> /* * We map the EFI regions needed for runtime services non-contiguously, * with preserved alignment on virtual addresses starting from -4G down @@ -27,91 +28,58 @@ extern unsigned long asmlinkage efi_call_phys(void *, ...); -#define efi_call_phys0(f) efi_call_phys(f) -#define efi_call_phys1(f, a1) efi_call_phys(f, a1) -#define efi_call_phys2(f, a1, a2) efi_call_phys(f, a1, a2) -#define efi_call_phys3(f, a1, a2, a3) efi_call_phys(f, a1, a2, a3) -#define efi_call_phys4(f, a1, a2, a3, a4) \ - efi_call_phys(f, a1, a2, a3, a4) -#define efi_call_phys5(f, a1, a2, a3, a4, a5) \ - efi_call_phys(f, a1, a2, a3, a4, a5) -#define efi_call_phys6(f, a1, a2, a3, a4, a5, a6) \ - efi_call_phys(f, a1, a2, a3, a4, a5, a6) /* * Wrap all the virtual calls in a way that forces the parameters on the stack. */ +/* Use this macro if your virtual returns a non-void value */ #define efi_call_virt(f, args...) \ - ((efi_##f##_t __attribute__((regparm(0)))*)efi.systab->runtime->f)(args) - -#define efi_call_virt0(f) efi_call_virt(f) -#define efi_call_virt1(f, a1) efi_call_virt(f, a1) -#define efi_call_virt2(f, a1, a2) efi_call_virt(f, a1, a2) -#define efi_call_virt3(f, a1, a2, a3) efi_call_virt(f, a1, a2, a3) -#define efi_call_virt4(f, a1, a2, a3, a4) \ - efi_call_virt(f, a1, a2, a3, a4) -#define efi_call_virt5(f, a1, a2, a3, a4, a5) \ - efi_call_virt(f, a1, a2, a3, a4, a5) -#define efi_call_virt6(f, a1, a2, a3, a4, a5, a6) \ - efi_call_virt(f, a1, a2, a3, a4, a5, a6) +({ \ + efi_status_t __s; \ + kernel_fpu_begin(); \ + __s = ((efi_##f##_t __attribute__((regparm(0)))*) \ + efi.systab->runtime->f)(args); \ + kernel_fpu_end(); \ + __s; \ +}) + +/* Use this macro if your virtual call does not return any value */ +#define __efi_call_virt(f, args...) 
\ +({ \ + kernel_fpu_begin(); \ + ((efi_##f##_t __attribute__((regparm(0)))*) \ + efi.systab->runtime->f)(args); \ + kernel_fpu_end(); \ +}) #define efi_ioremap(addr, size, type, attr) ioremap_cache(addr, size) #else /* !CONFIG_X86_32 */ -extern u64 efi_call0(void *fp); -extern u64 efi_call1(void *fp, u64 arg1); -extern u64 efi_call2(void *fp, u64 arg1, u64 arg2); -extern u64 efi_call3(void *fp, u64 arg1, u64 arg2, u64 arg3); -extern u64 efi_call4(void *fp, u64 arg1, u64 arg2, u64 arg3, u64 arg4); -extern u64 efi_call5(void *fp, u64 arg1, u64 arg2, u64 arg3, - u64 arg4, u64 arg5); -extern u64 efi_call6(void *fp, u64 arg1, u64 arg2, u64 arg3, - u64 arg4, u64 arg5, u64 arg6); - -#define efi_call_phys0(f) \ - efi_call0((f)) -#define efi_call_phys1(f, a1) \ - efi_call1((f), (u64)(a1)) -#define efi_call_phys2(f, a1, a2) \ - efi_call2((f), (u64)(a1), (u64)(a2)) -#define efi_call_phys3(f, a1, a2, a3) \ - efi_call3((f), (u64)(a1), (u64)(a2), (u64)(a3)) -#define efi_call_phys4(f, a1, a2, a3, a4) \ - efi_call4((f), (u64)(a1), (u64)(a2), (u64)(a3), \ - (u64)(a4)) -#define efi_call_phys5(f, a1, a2, a3, a4, a5) \ - efi_call5((f), (u64)(a1), (u64)(a2), (u64)(a3), \ - (u64)(a4), (u64)(a5)) -#define efi_call_phys6(f, a1, a2, a3, a4, a5, a6) \ - efi_call6((f), (u64)(a1), (u64)(a2), (u64)(a3), \ - (u64)(a4), (u64)(a5), (u64)(a6)) - -#define _efi_call_virtX(x, f, ...) \ +#define EFI_LOADER_SIGNATURE "EL64" + +extern u64 asmlinkage efi_call(void *fp, ...); + +#define efi_call_phys(f, args...) efi_call((f), args) + +#define efi_call_virt(f, ...) \ ({ \ efi_status_t __s; \ \ efi_sync_low_kernel_mappings(); \ preempt_disable(); \ - __s = efi_call##x((void *)efi.systab->runtime->f, __VA_ARGS__); \ + __kernel_fpu_begin(); \ + __s = efi_call((void *)efi.systab->runtime->f, __VA_ARGS__); \ + __kernel_fpu_end(); \ preempt_enable(); \ __s; \ }) -#define efi_call_virt0(f) \ - _efi_call_virtX(0, f) -#define efi_call_virt1(f, a1) \ - _efi_call_virtX(1, f, (u64)(a1)) -#define efi_call_virt2(f, a1, a2) \ - _efi_call_virtX(2, f, (u64)(a1), (u64)(a2)) -#define efi_call_virt3(f, a1, a2, a3) \ - _efi_call_virtX(3, f, (u64)(a1), (u64)(a2), (u64)(a3)) -#define efi_call_virt4(f, a1, a2, a3, a4) \ - _efi_call_virtX(4, f, (u64)(a1), (u64)(a2), (u64)(a3), (u64)(a4)) -#define efi_call_virt5(f, a1, a2, a3, a4, a5) \ - _efi_call_virtX(5, f, (u64)(a1), (u64)(a2), (u64)(a3), (u64)(a4), (u64)(a5)) -#define efi_call_virt6(f, a1, a2, a3, a4, a5, a6) \ - _efi_call_virtX(6, f, (u64)(a1), (u64)(a2), (u64)(a3), (u64)(a4), (u64)(a5), (u64)(a6)) +/* + * All X86_64 virt calls return non-void values. Thus, use non-void call for + * virt calls that would be void on X86_32. + */ +#define __efi_call_virt(f, args...) 
efi_call_virt(f, args) extern void __iomem *efi_ioremap(unsigned long addr, unsigned long size, u32 type, u64 attribute); diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h index cea1c76d49bf..115e3689cd53 100644 --- a/arch/x86/include/asm/fpu-internal.h +++ b/arch/x86/include/asm/fpu-internal.h @@ -87,22 +87,22 @@ static inline int is_x32_frame(void) static __always_inline __pure bool use_eager_fpu(void) { - return static_cpu_has(X86_FEATURE_EAGER_FPU); + return static_cpu_has_safe(X86_FEATURE_EAGER_FPU); } static __always_inline __pure bool use_xsaveopt(void) { - return static_cpu_has(X86_FEATURE_XSAVEOPT); + return static_cpu_has_safe(X86_FEATURE_XSAVEOPT); } static __always_inline __pure bool use_xsave(void) { - return static_cpu_has(X86_FEATURE_XSAVE); + return static_cpu_has_safe(X86_FEATURE_XSAVE); } static __always_inline __pure bool use_fxsr(void) { - return static_cpu_has(X86_FEATURE_FXSR); + return static_cpu_has_safe(X86_FEATURE_FXSR); } static inline void fx_finit(struct i387_fxsave_struct *fx) @@ -293,7 +293,7 @@ static inline int restore_fpu_checking(struct task_struct *tsk) /* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception is pending. Clear the x87 state here by setting it to fixed values. "m" is a random variable that should be in L1 */ - if (unlikely(static_cpu_has(X86_FEATURE_FXSAVE_LEAK))) { + if (unlikely(static_cpu_has_safe(X86_FEATURE_FXSAVE_LEAK))) { asm volatile( "fnclex\n\t" "emms\n\t" diff --git a/arch/x86/include/asm/signal.h b/arch/x86/include/asm/signal.h index 35e67a457182..31eab867e6d3 100644 --- a/arch/x86/include/asm/signal.h +++ b/arch/x86/include/asm/signal.h @@ -92,12 +92,6 @@ static inline int __gen_sigismember(sigset_t *set, int _sig) ? __const_sigismember((set), (sig)) \ : __gen_sigismember((set), (sig))) -static inline int sigfindinword(unsigned long word) -{ - asm("bsfl %1,%0" : "=r"(word) : "rm"(word) : "cc"); - return word; -} - struct pt_regs; #else /* __i386__ */ diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c index 283a76a9cc40..11ccfb0a63e7 100644 --- a/arch/x86/kernel/irq.c +++ b/arch/x86/kernel/irq.c @@ -17,6 +17,7 @@ #include <asm/idle.h> #include <asm/mce.h> #include <asm/hw_irq.h> +#include <asm/desc.h> #define CREATE_TRACE_POINTS #include <asm/trace/irq_vectors.h> @@ -334,10 +335,17 @@ int check_irq_vectors_for_cpu_disable(void) for_each_online_cpu(cpu) { if (cpu == this_cpu) continue; - for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; - vector++) { - if (per_cpu(vector_irq, cpu)[vector] < 0) - count++; + /* + * We scan from FIRST_EXTERNAL_VECTOR to first system + * vector. If the vector is marked in the used vectors + * bitmap or an irq is assigned to it, we don't count + * it as available. 
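+		 * Vectors at or above first_system_vector are reserved for
+		 * system interrupts (IPIs, local APIC timer, etc.) and can
+		 * never be handed to a device, so the scan stops there.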
+ */ + for (vector = FIRST_EXTERNAL_VECTOR; + vector < first_system_vector; vector++) { + if (!test_bit(vector, used_vectors) && + per_cpu(vector_irq, cpu)[vector] < 0) + count++; } } diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c index 5d93ac1b72db..5492798930ef 100644 --- a/arch/x86/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -866,9 +866,6 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle) /* was set by cpu_init() */ cpumask_clear_cpu(cpu, cpu_initialized_mask); - - set_cpu_present(cpu, false); - per_cpu(x86_cpu_to_apicid, cpu) = BAD_APICID; } /* mark "stuck" area as not stuck */ @@ -928,7 +925,7 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle) err = do_boot_cpu(apicid, cpu, tidle); if (err) { - pr_debug("do_boot_cpu failed %d\n", err); + pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu); return -EIO; } diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c index 3781dd39e8bd..87fc96bcc13c 100644 --- a/arch/x86/platform/efi/efi.c +++ b/arch/x86/platform/efi/efi.c @@ -110,7 +110,7 @@ static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc) efi_status_t status; spin_lock_irqsave(&rtc_lock, flags); - status = efi_call_virt2(get_time, tm, tc); + status = efi_call_virt(get_time, tm, tc); spin_unlock_irqrestore(&rtc_lock, flags); return status; } @@ -121,7 +121,7 @@ static efi_status_t virt_efi_set_time(efi_time_t *tm) efi_status_t status; spin_lock_irqsave(&rtc_lock, flags); - status = efi_call_virt1(set_time, tm); + status = efi_call_virt(set_time, tm); spin_unlock_irqrestore(&rtc_lock, flags); return status; } @@ -134,8 +134,7 @@ static efi_status_t virt_efi_get_wakeup_time(efi_bool_t *enabled, efi_status_t status; spin_lock_irqsave(&rtc_lock, flags); - status = efi_call_virt3(get_wakeup_time, - enabled, pending, tm); + status = efi_call_virt(get_wakeup_time, enabled, pending, tm); spin_unlock_irqrestore(&rtc_lock, flags); return status; } @@ -146,8 +145,7 @@ static efi_status_t virt_efi_set_wakeup_time(efi_bool_t enabled, efi_time_t *tm) efi_status_t status; spin_lock_irqsave(&rtc_lock, flags); - status = efi_call_virt2(set_wakeup_time, - enabled, tm); + status = efi_call_virt(set_wakeup_time, enabled, tm); spin_unlock_irqrestore(&rtc_lock, flags); return status; } @@ -158,17 +156,17 @@ static efi_status_t virt_efi_get_variable(efi_char16_t *name, unsigned long *data_size, void *data) { - return efi_call_virt5(get_variable, - name, vendor, attr, - data_size, data); + return efi_call_virt(get_variable, + name, vendor, attr, + data_size, data); } static efi_status_t virt_efi_get_next_variable(unsigned long *name_size, efi_char16_t *name, efi_guid_t *vendor) { - return efi_call_virt3(get_next_variable, - name_size, name, vendor); + return efi_call_virt(get_next_variable, + name_size, name, vendor); } static efi_status_t virt_efi_set_variable(efi_char16_t *name, @@ -177,9 +175,9 @@ static efi_status_t virt_efi_set_variable(efi_char16_t *name, unsigned long data_size, void *data) { - return efi_call_virt5(set_variable, - name, vendor, attr, - data_size, data); + return efi_call_virt(set_variable, + name, vendor, attr, + data_size, data); } static efi_status_t virt_efi_query_variable_info(u32 attr, @@ -190,13 +188,13 @@ static efi_status_t virt_efi_query_variable_info(u32 attr, if (efi.runtime_version < EFI_2_00_SYSTEM_TABLE_REVISION) return EFI_UNSUPPORTED; - return efi_call_virt4(query_variable_info, attr, storage_space, - remaining_space, max_variable_size); + return 
efi_call_virt(query_variable_info, attr, storage_space, + remaining_space, max_variable_size); } static efi_status_t virt_efi_get_next_high_mono_count(u32 *count) { - return efi_call_virt1(get_next_high_mono_count, count); + return efi_call_virt(get_next_high_mono_count, count); } static void virt_efi_reset_system(int reset_type, @@ -204,8 +202,8 @@ static void virt_efi_reset_system(int reset_type, unsigned long data_size, efi_char16_t *data) { - efi_call_virt4(reset_system, reset_type, status, - data_size, data); + __efi_call_virt(reset_system, reset_type, status, + data_size, data); } static efi_status_t virt_efi_update_capsule(efi_capsule_header_t **capsules, @@ -215,7 +213,7 @@ static efi_status_t virt_efi_update_capsule(efi_capsule_header_t **capsules, if (efi.runtime_version < EFI_2_00_SYSTEM_TABLE_REVISION) return EFI_UNSUPPORTED; - return efi_call_virt3(update_capsule, capsules, count, sg_list); + return efi_call_virt(update_capsule, capsules, count, sg_list); } static efi_status_t virt_efi_query_capsule_caps(efi_capsule_header_t **capsules, @@ -226,8 +224,8 @@ static efi_status_t virt_efi_query_capsule_caps(efi_capsule_header_t **capsules, if (efi.runtime_version < EFI_2_00_SYSTEM_TABLE_REVISION) return EFI_UNSUPPORTED; - return efi_call_virt4(query_capsule_caps, capsules, count, max_size, - reset_type); + return efi_call_virt(query_capsule_caps, capsules, count, max_size, + reset_type); } static efi_status_t __init phys_efi_set_virtual_address_map( @@ -239,9 +237,9 @@ static efi_status_t __init phys_efi_set_virtual_address_map( efi_status_t status; efi_call_phys_prelog(); - status = efi_call_phys4(efi_phys.set_virtual_address_map, - memory_map_size, descriptor_size, - descriptor_version, virtual_map); + status = efi_call_phys(efi_phys.set_virtual_address_map, + memory_map_size, descriptor_size, + descriptor_version, virtual_map); efi_call_phys_epilog(); return status; } @@ -919,6 +917,9 @@ static void __init save_runtime_map(void) void *tmp, *p, *q = NULL; int count = 0; + if (efi_enabled(EFI_OLD_MEMMAP)) + return; + for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) { md = p; diff --git a/arch/x86/platform/efi/efi_stub_64.S b/arch/x86/platform/efi/efi_stub_64.S index e0984ef0374b..5fcda7272550 100644 --- a/arch/x86/platform/efi/efi_stub_64.S +++ b/arch/x86/platform/efi/efi_stub_64.S @@ -73,84 +73,7 @@ 2: .endm -ENTRY(efi_call0) - SAVE_XMM - subq $32, %rsp - SWITCH_PGT - call *%rdi - RESTORE_PGT - addq $32, %rsp - RESTORE_XMM - ret -ENDPROC(efi_call0) - -ENTRY(efi_call1) - SAVE_XMM - subq $32, %rsp - mov %rsi, %rcx - SWITCH_PGT - call *%rdi - RESTORE_PGT - addq $32, %rsp - RESTORE_XMM - ret -ENDPROC(efi_call1) - -ENTRY(efi_call2) - SAVE_XMM - subq $32, %rsp - mov %rsi, %rcx - SWITCH_PGT - call *%rdi - RESTORE_PGT - addq $32, %rsp - RESTORE_XMM - ret -ENDPROC(efi_call2) - -ENTRY(efi_call3) - SAVE_XMM - subq $32, %rsp - mov %rcx, %r8 - mov %rsi, %rcx - SWITCH_PGT - call *%rdi - RESTORE_PGT - addq $32, %rsp - RESTORE_XMM - ret -ENDPROC(efi_call3) - -ENTRY(efi_call4) - SAVE_XMM - subq $32, %rsp - mov %r8, %r9 - mov %rcx, %r8 - mov %rsi, %rcx - SWITCH_PGT - call *%rdi - RESTORE_PGT - addq $32, %rsp - RESTORE_XMM - ret -ENDPROC(efi_call4) - -ENTRY(efi_call5) - SAVE_XMM - subq $48, %rsp - mov %r9, 32(%rsp) - mov %r8, %r9 - mov %rcx, %r8 - mov %rsi, %rcx - SWITCH_PGT - call *%rdi - RESTORE_PGT - addq $48, %rsp - RESTORE_XMM - ret -ENDPROC(efi_call5) - -ENTRY(efi_call6) +ENTRY(efi_call) SAVE_XMM mov (%rsp), %rax mov 8(%rax), %rax @@ -166,7 +89,7 @@ ENTRY(efi_call6) addq $48, 
%rsp RESTORE_XMM ret -ENDPROC(efi_call6) +ENDPROC(efi_call) #ifdef CONFIG_EFI_MIXED diff --git a/arch/x86/platform/uv/bios_uv.c b/arch/x86/platform/uv/bios_uv.c index 766612137a62..1584cbed0dce 100644 --- a/arch/x86/platform/uv/bios_uv.c +++ b/arch/x86/platform/uv/bios_uv.c @@ -39,7 +39,7 @@ s64 uv_bios_call(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3, u64 a4, u64 a5) */ return BIOS_STATUS_UNIMPLEMENTED; - ret = efi_call6((void *)__va(tab->function), (u64)which, + ret = efi_call((void *)__va(tab->function), (u64)which, a1, a2, a3, a4, a5); return ret; } diff --git a/arch/xtensa/include/asm/ftrace.h b/arch/xtensa/include/asm/ftrace.h index 736b9d214d80..6c6d9a9f185f 100644 --- a/arch/xtensa/include/asm/ftrace.h +++ b/arch/xtensa/include/asm/ftrace.h @@ -12,24 +12,18 @@ #include <asm/processor.h> -#define HAVE_ARCH_CALLER_ADDR #ifndef __ASSEMBLY__ -#define CALLER_ADDR0 ({ unsigned long a0, a1; \ +#define ftrace_return_address0 ({ unsigned long a0, a1; \ __asm__ __volatile__ ( \ "mov %0, a0\n" \ "mov %1, a1\n" \ : "=r"(a0), "=r"(a1)); \ MAKE_PC_FROM_RA(a0, a1); }) + #ifdef CONFIG_FRAME_POINTER extern unsigned long return_address(unsigned level); -#define CALLER_ADDR1 return_address(1) -#define CALLER_ADDR2 return_address(2) -#define CALLER_ADDR3 return_address(3) -#else /* CONFIG_FRAME_POINTER */ -#define CALLER_ADDR1 (0) -#define CALLER_ADDR2 (0) -#define CALLER_ADDR3 (0) -#endif /* CONFIG_FRAME_POINTER */ +#define ftrace_return_address(n) return_address(n) +#endif #endif /* __ASSEMBLY__ */ #ifdef CONFIG_FUNCTION_TRACER diff --git a/block/bounce.c b/block/bounce.c index 523918b8c6dc..ab21ba203d5c 100644 --- a/block/bounce.c +++ b/block/bounce.c @@ -3,6 +3,8 @@ * - Split from highmem.c */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/mm.h> #include <linux/export.h> #include <linux/swap.h> @@ -15,6 +17,7 @@ #include <linux/hash.h> #include <linux/highmem.h> #include <linux/bootmem.h> +#include <linux/printk.h> #include <asm/tlbflush.h> #include <trace/events/block.h> @@ -34,7 +37,7 @@ static __init int init_emergency_pool(void) page_pool = mempool_create_page_pool(POOL_SIZE, 0); BUG_ON(!page_pool); - printk("bounce pool size: %d pages\n", POOL_SIZE); + pr_info("pool size: %d pages\n", POOL_SIZE); return 0; } @@ -86,7 +89,7 @@ int init_emergency_isa_pool(void) mempool_free_pages, (void *) 0); BUG_ON(!isa_page_pool); - printk("isa bounce pool size: %d pages\n", ISA_POOL_SIZE); + pr_info("isa pool size: %d pages\n", ISA_POOL_SIZE); return 0; } diff --git a/crypto/ahash.c b/crypto/ahash.c index 6e7223392e80..f2a5d8f656ff 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c @@ -15,6 +15,7 @@ #include <crypto/internal/hash.h> #include <crypto/scatterwalk.h> +#include <linux/bug.h> #include <linux/err.h> #include <linux/kernel.h> #include <linux/module.h> @@ -46,7 +47,10 @@ static int hash_walk_next(struct crypto_hash_walk *walk) unsigned int nbytes = min(walk->entrylen, ((unsigned int)(PAGE_SIZE)) - offset); - walk->data = kmap_atomic(walk->pg); + if (walk->flags & CRYPTO_ALG_ASYNC) + walk->data = kmap(walk->pg); + else + walk->data = kmap_atomic(walk->pg); walk->data += offset; if (offset & alignmask) { @@ -93,8 +97,16 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err) return nbytes; } - kunmap_atomic(walk->data); - crypto_yield(walk->flags); + if (walk->flags & CRYPTO_ALG_ASYNC) + kunmap(walk->pg); + else { + kunmap_atomic(walk->data); + /* + * The may sleep test only makes sense for sync users. + * Async users don't need to sleep here anyway. 
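+		 * (The sync path mapped its page with kmap_atomic(), which
+		 * disables preemption, so it had to unmap above before the
+		 * yield; the async path used plain kmap() in hash_walk_next()
+		 * and never yields here.)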
+ */ + crypto_yield(walk->flags); + } if (err) return err; @@ -124,12 +136,31 @@ int crypto_hash_walk_first(struct ahash_request *req, walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req)); walk->sg = req->src; - walk->flags = req->base.flags; + walk->flags = req->base.flags & CRYPTO_TFM_REQ_MASK; return hash_walk_new_entry(walk); } EXPORT_SYMBOL_GPL(crypto_hash_walk_first); +int crypto_ahash_walk_first(struct ahash_request *req, + struct crypto_hash_walk *walk) +{ + walk->total = req->nbytes; + + if (!walk->total) + return 0; + + walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req)); + walk->sg = req->src; + walk->flags = req->base.flags & CRYPTO_TFM_REQ_MASK; + walk->flags |= CRYPTO_ALG_ASYNC; + + BUILD_BUG_ON(CRYPTO_TFM_REQ_MASK & CRYPTO_ALG_ASYNC); + + return hash_walk_new_entry(walk); +} +EXPORT_SYMBOL_GPL(crypto_ahash_walk_first); + int crypto_hash_walk_first_compat(struct hash_desc *hdesc, struct crypto_hash_walk *walk, struct scatterlist *sg, unsigned int len) @@ -141,7 +172,7 @@ int crypto_hash_walk_first_compat(struct hash_desc *hdesc, walk->alignmask = crypto_hash_alignmask(hdesc->tfm); walk->sg = sg; - walk->flags = hdesc->flags; + walk->flags = hdesc->flags & CRYPTO_TFM_REQ_MASK; return hash_walk_new_entry(walk); } diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c index 43665d0d0905..e2a34feec7a4 100644 --- a/crypto/crypto_user.c +++ b/crypto/crypto_user.c @@ -265,6 +265,9 @@ static int crypto_update_alg(struct sk_buff *skb, struct nlmsghdr *nlh, struct nlattr *priority = attrs[CRYPTOCFGA_PRIORITY_VAL]; LIST_HEAD(list); + if (!netlink_capable(skb, CAP_NET_ADMIN)) + return -EPERM; + if (!null_terminated(p->cru_name) || !null_terminated(p->cru_driver_name)) return -EINVAL; @@ -295,6 +298,9 @@ static int crypto_del_alg(struct sk_buff *skb, struct nlmsghdr *nlh, struct crypto_alg *alg; struct crypto_user_alg *p = nlmsg_data(nlh); + if (!netlink_capable(skb, CAP_NET_ADMIN)) + return -EPERM; + if (!null_terminated(p->cru_name) || !null_terminated(p->cru_driver_name)) return -EINVAL; @@ -379,6 +385,9 @@ static int crypto_add_alg(struct sk_buff *skb, struct nlmsghdr *nlh, struct crypto_user_alg *p = nlmsg_data(nlh); struct nlattr *priority = attrs[CRYPTOCFGA_PRIORITY_VAL]; + if (!netlink_capable(skb, CAP_NET_ADMIN)) + return -EPERM; + if (!null_terminated(p->cru_name) || !null_terminated(p->cru_driver_name)) return -EINVAL; @@ -466,9 +475,6 @@ static int crypto_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) type -= CRYPTO_MSG_BASE; link = &crypto_dispatch[type]; - if (!netlink_capable(skb, CAP_NET_ADMIN)) - return -EPERM; - if ((type == (CRYPTO_MSG_GETALG - CRYPTO_MSG_BASE) && (nlh->nlmsg_flags & NLM_F_DUMP))) { struct crypto_alg *alg; diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index 870be7b4dc05..ba247cf30858 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -282,6 +282,11 @@ static void test_aead_speed(const char *algo, int enc, unsigned int sec, unsigned int *b_size; unsigned int iv_len; + if (aad_size >= PAGE_SIZE) { + pr_err("associate data length (%u) too big\n", aad_size); + return; + } + if (enc == ENCRYPT) e = "encryption"; else @@ -308,14 +313,14 @@ static void test_aead_speed(const char *algo, int enc, unsigned int sec, if (IS_ERR(tfm)) { pr_err("alg: aead: Failed to load transform for %s: %ld\n", algo, PTR_ERR(tfm)); - return; + goto out_notfm; } req = aead_request_alloc(tfm, GFP_KERNEL); if (!req) { pr_err("alg: aead: Failed to allocate request for %s\n", algo); - goto out; + goto out_noreq; } i = 0; @@ -323,14 +328,7 @@ 
static void test_aead_speed(const char *algo, int enc, unsigned int sec, b_size = aead_sizes; do { assoc = axbuf[0]; - - if (aad_size < PAGE_SIZE) - memset(assoc, 0xff, aad_size); - else { - pr_err("associate data length (%u) too big\n", - aad_size); - goto out_nosg; - } + memset(assoc, 0xff, aad_size); sg_init_one(&asg[0], assoc, aad_size); if ((*keysize + *b_size) > TVMEMSIZE * PAGE_SIZE) { @@ -392,7 +390,10 @@ static void test_aead_speed(const char *algo, int enc, unsigned int sec, } while (*keysize); out: + aead_request_free(req); +out_noreq: crypto_free_aead(tfm); +out_notfm: kfree(sg); out_nosg: testmgr_free_buf(xoutbuf); @@ -1518,7 +1519,36 @@ static int do_test(int m) case 157: ret += tcrypt_test("authenc(hmac(sha1),ecb(cipher_null))"); break; - + case 181: + ret += tcrypt_test("authenc(hmac(sha1),cbc(des))"); + break; + case 182: + ret += tcrypt_test("authenc(hmac(sha1),cbc(des3_ede))"); + break; + case 183: + ret += tcrypt_test("authenc(hmac(sha224),cbc(des))"); + break; + case 184: + ret += tcrypt_test("authenc(hmac(sha224),cbc(des3_ede))"); + break; + case 185: + ret += tcrypt_test("authenc(hmac(sha256),cbc(des))"); + break; + case 186: + ret += tcrypt_test("authenc(hmac(sha256),cbc(des3_ede))"); + break; + case 187: + ret += tcrypt_test("authenc(hmac(sha384),cbc(des))"); + break; + case 188: + ret += tcrypt_test("authenc(hmac(sha384),cbc(des3_ede))"); + break; + case 189: + ret += tcrypt_test("authenc(hmac(sha512),cbc(des))"); + break; + case 190: + ret += tcrypt_test("authenc(hmac(sha512),cbc(des3_ede))"); + break; case 200: test_cipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0, speed_template_16_24_32); diff --git a/crypto/testmgr.c b/crypto/testmgr.c index dc3cf3535ef0..498649ac1953 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -414,16 +414,18 @@ static int __test_aead(struct crypto_aead *tfm, int enc, void *input; void *output; void *assoc; - char iv[MAX_IVLEN]; + char *iv; char *xbuf[XBUFSIZE]; char *xoutbuf[XBUFSIZE]; char *axbuf[XBUFSIZE]; + iv = kzalloc(MAX_IVLEN, GFP_KERNEL); + if (!iv) + return ret; if (testmgr_alloc_buf(xbuf)) goto out_noxbuf; if (testmgr_alloc_buf(axbuf)) goto out_noaxbuf; - if (diff_dst && testmgr_alloc_buf(xoutbuf)) goto out_nooutbuf; @@ -767,6 +769,7 @@ out_nooutbuf: out_noaxbuf: testmgr_free_buf(xbuf); out_noxbuf: + kfree(iv); return ret; } @@ -1831,8 +1834,38 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .aead = { .enc = { - .vecs = hmac_sha1_aes_cbc_enc_tv_template, - .count = HMAC_SHA1_AES_CBC_ENC_TEST_VECTORS + .vecs = + hmac_sha1_aes_cbc_enc_tv_temp, + .count = + HMAC_SHA1_AES_CBC_ENC_TEST_VEC + } + } + } + }, { + .alg = "authenc(hmac(sha1),cbc(des))", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = { + .enc = { + .vecs = + hmac_sha1_des_cbc_enc_tv_temp, + .count = + HMAC_SHA1_DES_CBC_ENC_TEST_VEC + } + } + } + }, { + .alg = "authenc(hmac(sha1),cbc(des3_ede))", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = { + .enc = { + .vecs = + hmac_sha1_des3_ede_cbc_enc_tv_temp, + .count = + HMAC_SHA1_DES3_EDE_CBC_ENC_TEST_VEC } } } @@ -1843,12 +1876,44 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .aead = { .enc = { - .vecs = hmac_sha1_ecb_cipher_null_enc_tv_template, - .count = HMAC_SHA1_ECB_CIPHER_NULL_ENC_TEST_VECTORS + .vecs = + hmac_sha1_ecb_cipher_null_enc_tv_temp, + .count = + HMAC_SHA1_ECB_CIPHER_NULL_ENC_TEST_VEC }, .dec = { - .vecs = hmac_sha1_ecb_cipher_null_dec_tv_template, - .count = HMAC_SHA1_ECB_CIPHER_NULL_DEC_TEST_VECTORS + .vecs = + 
hmac_sha1_ecb_cipher_null_dec_tv_temp, + .count = + HMAC_SHA1_ECB_CIPHER_NULL_DEC_TEST_VEC + } + } + } + }, { + .alg = "authenc(hmac(sha224),cbc(des))", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = { + .enc = { + .vecs = + hmac_sha224_des_cbc_enc_tv_temp, + .count = + HMAC_SHA224_DES_CBC_ENC_TEST_VEC + } + } + } + }, { + .alg = "authenc(hmac(sha224),cbc(des3_ede))", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = { + .enc = { + .vecs = + hmac_sha224_des3_ede_cbc_enc_tv_temp, + .count = + HMAC_SHA224_DES3_EDE_CBC_ENC_TEST_VEC } } } @@ -1859,8 +1924,66 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .aead = { .enc = { - .vecs = hmac_sha256_aes_cbc_enc_tv_template, - .count = HMAC_SHA256_AES_CBC_ENC_TEST_VECTORS + .vecs = + hmac_sha256_aes_cbc_enc_tv_temp, + .count = + HMAC_SHA256_AES_CBC_ENC_TEST_VEC + } + } + } + }, { + .alg = "authenc(hmac(sha256),cbc(des))", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = { + .enc = { + .vecs = + hmac_sha256_des_cbc_enc_tv_temp, + .count = + HMAC_SHA256_DES_CBC_ENC_TEST_VEC + } + } + } + }, { + .alg = "authenc(hmac(sha256),cbc(des3_ede))", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = { + .enc = { + .vecs = + hmac_sha256_des3_ede_cbc_enc_tv_temp, + .count = + HMAC_SHA256_DES3_EDE_CBC_ENC_TEST_VEC + } + } + } + }, { + .alg = "authenc(hmac(sha384),cbc(des))", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = { + .enc = { + .vecs = + hmac_sha384_des_cbc_enc_tv_temp, + .count = + HMAC_SHA384_DES_CBC_ENC_TEST_VEC + } + } + } + }, { + .alg = "authenc(hmac(sha384),cbc(des3_ede))", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = { + .enc = { + .vecs = + hmac_sha384_des3_ede_cbc_enc_tv_temp, + .count = + HMAC_SHA384_DES3_EDE_CBC_ENC_TEST_VEC } } } @@ -1871,8 +1994,38 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .aead = { .enc = { - .vecs = hmac_sha512_aes_cbc_enc_tv_template, - .count = HMAC_SHA512_AES_CBC_ENC_TEST_VECTORS + .vecs = + hmac_sha512_aes_cbc_enc_tv_temp, + .count = + HMAC_SHA512_AES_CBC_ENC_TEST_VEC + } + } + } + }, { + .alg = "authenc(hmac(sha512),cbc(des))", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = { + .enc = { + .vecs = + hmac_sha512_des_cbc_enc_tv_temp, + .count = + HMAC_SHA512_DES_CBC_ENC_TEST_VEC + } + } + } + }, { + .alg = "authenc(hmac(sha512),cbc(des3_ede))", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = { + .enc = { + .vecs = + hmac_sha512_des3_ede_cbc_enc_tv_temp, + .count = + HMAC_SHA512_DES3_EDE_CBC_ENC_TEST_VEC } } } @@ -3273,8 +3426,8 @@ test_done: panic("%s: %s alg self test failed in fips mode!\n", driver, alg); if (fips_enabled && !rc) - printk(KERN_INFO "alg: self-tests for %s (%s) passed\n", - driver, alg); + pr_info(KERN_INFO "alg: self-tests for %s (%s) passed\n", + driver, alg); return rc; diff --git a/crypto/testmgr.h b/crypto/testmgr.h index 3db83dbba1d9..69d0dd8ef27e 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -487,10 +487,15 @@ static struct hash_testvec crct10dif_tv_template[] = { * SHA1 test vectors from from FIPS PUB 180-1 * Long vector from CAVS 5.0 */ -#define SHA1_TEST_VECTORS 3 +#define SHA1_TEST_VECTORS 6 static struct hash_testvec sha1_tv_template[] = { { + .plaintext = "", + .psize = 0, + .digest = "\xda\x39\xa3\xee\x5e\x6b\x4b\x0d\x32\x55" + "\xbf\xef\x95\x60\x18\x90\xaf\xd8\x07\x09", + }, { .plaintext = "abc", .psize = 3, .digest = "\xa9\x99\x3e\x36\x47\x06\x81\x6a\xba\x3e" @@ 
-529,6 +534,144 @@ static struct hash_testvec sha1_tv_template[] = { "\x45\x9c\x02\xb6\x9b\x4a\xa8\xf5\x82\x17", .np = 4, .tap = { 63, 64, 31, 5 } + }, { + .plaintext = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+-", + .psize = 64, + .digest = "\xc8\x71\xf6\x9a\x63\xcc\xa9\x84\x84\x82" + "\x64\xe7\x79\x95\x5d\xd7\x19\x41\x7c\x91", + }, { + .plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3" + "\x7a\x11\x85\x1c\xb3\x27\xbe\x55" + "\xec\x60\xf7\x8e\x02\x99\x30\xc7" + "\x3b\xd2\x69\x00\x74\x0b\xa2\x16" + "\xad\x44\xdb\x4f\xe6\x7d\x14\x88" + "\x1f\xb6\x2a\xc1\x58\xef\x63\xfa" + "\x91\x05\x9c\x33\xca\x3e\xd5\x6c" + "\x03\x77\x0e\xa5\x19\xb0\x47\xde" + "\x52\xe9\x80\x17\x8b\x22\xb9\x2d" + "\xc4\x5b\xf2\x66\xfd\x94\x08\x9f" + "\x36\xcd\x41\xd8\x6f\x06\x7a\x11" + "\xa8\x1c\xb3\x4a\xe1\x55\xec\x83" + "\x1a\x8e\x25\xbc\x30\xc7\x5e\xf5" + "\x69\x00\x97\x0b\xa2\x39\xd0\x44" + "\xdb\x72\x09\x7d\x14\xab\x1f\xb6" + "\x4d\xe4\x58\xef\x86\x1d\x91\x28" + "\xbf\x33\xca\x61\xf8\x6c\x03\x9a" + "\x0e\xa5\x3c\xd3\x47\xde\x75\x0c" + "\x80\x17\xae\x22\xb9\x50\xe7\x5b" + "\xf2\x89\x20\x94\x2b\xc2\x36\xcd" + "\x64\xfb\x6f\x06\x9d\x11\xa8\x3f" + "\xd6\x4a\xe1\x78\x0f\x83\x1a\xb1" + "\x25\xbc\x53\xea\x5e\xf5\x8c\x00" + "\x97\x2e\xc5\x39\xd0\x67\xfe\x72" + "\x09\xa0\x14\xab\x42\xd9\x4d\xe4" + "\x7b\x12\x86\x1d\xb4\x28\xbf\x56" + "\xed\x61\xf8\x8f\x03\x9a\x31\xc8" + "\x3c\xd3\x6a\x01\x75\x0c\xa3\x17" + "\xae\x45\xdc\x50\xe7\x7e\x15\x89" + "\x20\xb7\x2b\xc2\x59\xf0\x64\xfb" + "\x92\x06\x9d\x34\xcb\x3f\xd6\x6d" + "\x04\x78\x0f\xa6\x1a\xb1\x48\xdf" + "\x53\xea\x81\x18\x8c\x23\xba\x2e" + "\xc5\x5c\xf3\x67\xfe\x95\x09\xa0" + "\x37\xce\x42\xd9\x70\x07\x7b\x12" + "\xa9\x1d\xb4\x4b\xe2\x56\xed\x84" + "\x1b\x8f\x26\xbd\x31\xc8\x5f\xf6" + "\x6a\x01\x98\x0c\xa3\x3a\xd1\x45" + "\xdc\x73\x0a\x7e\x15\xac\x20\xb7" + "\x4e\xe5\x59\xf0\x87\x1e\x92\x29" + "\xc0\x34\xcb\x62\xf9\x6d\x04\x9b" + "\x0f\xa6\x3d\xd4\x48\xdf\x76\x0d" + "\x81\x18\xaf\x23\xba\x51\xe8\x5c" + "\xf3\x8a\x21\x95\x2c\xc3\x37\xce" + "\x65\xfc\x70\x07\x9e\x12\xa9\x40" + "\xd7\x4b\xe2\x79\x10\x84\x1b\xb2" + "\x26\xbd\x54\xeb\x5f\xf6\x8d\x01" + "\x98\x2f\xc6\x3a\xd1\x68\xff\x73" + "\x0a\xa1\x15\xac\x43\xda\x4e\xe5" + "\x7c\x13\x87\x1e\xb5\x29\xc0\x57" + "\xee\x62\xf9\x90\x04\x9b\x32\xc9" + "\x3d\xd4\x6b\x02\x76\x0d\xa4\x18" + "\xaf\x46\xdd\x51\xe8\x7f\x16\x8a" + "\x21\xb8\x2c\xc3\x5a\xf1\x65\xfc" + "\x93\x07\x9e\x35\xcc\x40\xd7\x6e" + "\x05\x79\x10\xa7\x1b\xb2\x49\xe0" + "\x54\xeb\x82\x19\x8d\x24\xbb\x2f" + "\xc6\x5d\xf4\x68\xff\x96\x0a\xa1" + "\x38\xcf\x43\xda\x71\x08\x7c\x13" + "\xaa\x1e\xb5\x4c\xe3\x57\xee\x85" + "\x1c\x90\x27\xbe\x32\xc9\x60\xf7" + "\x6b\x02\x99\x0d\xa4\x3b\xd2\x46" + "\xdd\x74\x0b\x7f\x16\xad\x21\xb8" + "\x4f\xe6\x5a\xf1\x88\x1f\x93\x2a" + "\xc1\x35\xcc\x63\xfa\x6e\x05\x9c" + "\x10\xa7\x3e\xd5\x49\xe0\x77\x0e" + "\x82\x19\xb0\x24\xbb\x52\xe9\x5d" + "\xf4\x8b\x22\x96\x2d\xc4\x38\xcf" + "\x66\xfd\x71\x08\x9f\x13\xaa\x41" + "\xd8\x4c\xe3\x7a\x11\x85\x1c\xb3" + "\x27\xbe\x55\xec\x60\xf7\x8e\x02" + "\x99\x30\xc7\x3b\xd2\x69\x00\x74" + "\x0b\xa2\x16\xad\x44\xdb\x4f\xe6" + "\x7d\x14\x88\x1f\xb6\x2a\xc1\x58" + "\xef\x63\xfa\x91\x05\x9c\x33\xca" + "\x3e\xd5\x6c\x03\x77\x0e\xa5\x19" + "\xb0\x47\xde\x52\xe9\x80\x17\x8b" + "\x22\xb9\x2d\xc4\x5b\xf2\x66\xfd" + "\x94\x08\x9f\x36\xcd\x41\xd8\x6f" + "\x06\x7a\x11\xa8\x1c\xb3\x4a\xe1" + "\x55\xec\x83\x1a\x8e\x25\xbc\x30" + "\xc7\x5e\xf5\x69\x00\x97\x0b\xa2" + "\x39\xd0\x44\xdb\x72\x09\x7d\x14" + "\xab\x1f\xb6\x4d\xe4\x58\xef\x86" + "\x1d\x91\x28\xbf\x33\xca\x61\xf8" + "\x6c\x03\x9a\x0e\xa5\x3c\xd3\x47" 
+ "\xde\x75\x0c\x80\x17\xae\x22\xb9" + "\x50\xe7\x5b\xf2\x89\x20\x94\x2b" + "\xc2\x36\xcd\x64\xfb\x6f\x06\x9d" + "\x11\xa8\x3f\xd6\x4a\xe1\x78\x0f" + "\x83\x1a\xb1\x25\xbc\x53\xea\x5e" + "\xf5\x8c\x00\x97\x2e\xc5\x39\xd0" + "\x67\xfe\x72\x09\xa0\x14\xab\x42" + "\xd9\x4d\xe4\x7b\x12\x86\x1d\xb4" + "\x28\xbf\x56\xed\x61\xf8\x8f\x03" + "\x9a\x31\xc8\x3c\xd3\x6a\x01\x75" + "\x0c\xa3\x17\xae\x45\xdc\x50\xe7" + "\x7e\x15\x89\x20\xb7\x2b\xc2\x59" + "\xf0\x64\xfb\x92\x06\x9d\x34\xcb" + "\x3f\xd6\x6d\x04\x78\x0f\xa6\x1a" + "\xb1\x48\xdf\x53\xea\x81\x18\x8c" + "\x23\xba\x2e\xc5\x5c\xf3\x67\xfe" + "\x95\x09\xa0\x37\xce\x42\xd9\x70" + "\x07\x7b\x12\xa9\x1d\xb4\x4b\xe2" + "\x56\xed\x84\x1b\x8f\x26\xbd\x31" + "\xc8\x5f\xf6\x6a\x01\x98\x0c\xa3" + "\x3a\xd1\x45\xdc\x73\x0a\x7e\x15" + "\xac\x20\xb7\x4e\xe5\x59\xf0\x87" + "\x1e\x92\x29\xc0\x34\xcb\x62\xf9" + "\x6d\x04\x9b\x0f\xa6\x3d\xd4\x48" + "\xdf\x76\x0d\x81\x18\xaf\x23\xba" + "\x51\xe8\x5c\xf3\x8a\x21\x95\x2c" + "\xc3\x37\xce\x65\xfc\x70\x07\x9e" + "\x12\xa9\x40\xd7\x4b\xe2\x79\x10" + "\x84\x1b\xb2\x26\xbd\x54\xeb\x5f" + "\xf6\x8d\x01\x98\x2f\xc6\x3a\xd1" + "\x68\xff\x73\x0a\xa1\x15\xac\x43" + "\xda\x4e\xe5\x7c\x13\x87\x1e\xb5" + "\x29\xc0\x57\xee\x62\xf9\x90\x04" + "\x9b\x32\xc9\x3d\xd4\x6b\x02\x76" + "\x0d\xa4\x18\xaf\x46\xdd\x51\xe8" + "\x7f\x16\x8a\x21\xb8\x2c\xc3\x5a" + "\xf1\x65\xfc\x93\x07\x9e\x35\xcc" + "\x40\xd7\x6e\x05\x79\x10\xa7\x1b" + "\xb2\x49\xe0\x54\xeb\x82\x19\x8d" + "\x24\xbb\x2f\xc6\x5d\xf4\x68\xff" + "\x96\x0a\xa1\x38\xcf\x43\xda\x71" + "\x08\x7c\x13\xaa\x1e\xb5\x4c", + .psize = 1023, + .digest = "\xb8\xe3\x54\xed\xc5\xfc\xef\xa4" + "\x55\x73\x4a\x81\x99\xe4\x47\x2a" + "\x30\xd6\xc9\x85", } }; @@ -536,10 +679,17 @@ static struct hash_testvec sha1_tv_template[] = { /* * SHA224 test vectors from from FIPS PUB 180-2 */ -#define SHA224_TEST_VECTORS 2 +#define SHA224_TEST_VECTORS 5 static struct hash_testvec sha224_tv_template[] = { { + .plaintext = "", + .psize = 0, + .digest = "\xd1\x4a\x02\x8c\x2a\x3a\x2b\xc9" + "\x47\x61\x02\xbb\x28\x82\x34\xc4" + "\x15\xa2\xb0\x1f\x82\x8e\xa6\x2a" + "\xc5\xb3\xe4\x2f", + }, { .plaintext = "abc", .psize = 3, .digest = "\x23\x09\x7D\x22\x34\x05\xD8\x22" @@ -556,16 +706,164 @@ static struct hash_testvec sha224_tv_template[] = { "\x52\x52\x25\x25", .np = 2, .tap = { 28, 28 } + }, { + .plaintext = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+-", + .psize = 64, + .digest = "\xc4\xdb\x2b\x3a\x58\xc3\x99\x01" + "\x42\xfd\x10\x92\xaa\x4e\x04\x08" + "\x58\xbb\xbb\xe8\xf8\x14\xa7\x0c" + "\xef\x3b\xcb\x0e", + }, { + .plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3" + "\x7a\x11\x85\x1c\xb3\x27\xbe\x55" + "\xec\x60\xf7\x8e\x02\x99\x30\xc7" + "\x3b\xd2\x69\x00\x74\x0b\xa2\x16" + "\xad\x44\xdb\x4f\xe6\x7d\x14\x88" + "\x1f\xb6\x2a\xc1\x58\xef\x63\xfa" + "\x91\x05\x9c\x33\xca\x3e\xd5\x6c" + "\x03\x77\x0e\xa5\x19\xb0\x47\xde" + "\x52\xe9\x80\x17\x8b\x22\xb9\x2d" + "\xc4\x5b\xf2\x66\xfd\x94\x08\x9f" + "\x36\xcd\x41\xd8\x6f\x06\x7a\x11" + "\xa8\x1c\xb3\x4a\xe1\x55\xec\x83" + "\x1a\x8e\x25\xbc\x30\xc7\x5e\xf5" + "\x69\x00\x97\x0b\xa2\x39\xd0\x44" + "\xdb\x72\x09\x7d\x14\xab\x1f\xb6" + "\x4d\xe4\x58\xef\x86\x1d\x91\x28" + "\xbf\x33\xca\x61\xf8\x6c\x03\x9a" + "\x0e\xa5\x3c\xd3\x47\xde\x75\x0c" + "\x80\x17\xae\x22\xb9\x50\xe7\x5b" + "\xf2\x89\x20\x94\x2b\xc2\x36\xcd" + "\x64\xfb\x6f\x06\x9d\x11\xa8\x3f" + "\xd6\x4a\xe1\x78\x0f\x83\x1a\xb1" + "\x25\xbc\x53\xea\x5e\xf5\x8c\x00" + "\x97\x2e\xc5\x39\xd0\x67\xfe\x72" + "\x09\xa0\x14\xab\x42\xd9\x4d\xe4" + "\x7b\x12\x86\x1d\xb4\x28\xbf\x56" + 
"\xed\x61\xf8\x8f\x03\x9a\x31\xc8" + "\x3c\xd3\x6a\x01\x75\x0c\xa3\x17" + "\xae\x45\xdc\x50\xe7\x7e\x15\x89" + "\x20\xb7\x2b\xc2\x59\xf0\x64\xfb" + "\x92\x06\x9d\x34\xcb\x3f\xd6\x6d" + "\x04\x78\x0f\xa6\x1a\xb1\x48\xdf" + "\x53\xea\x81\x18\x8c\x23\xba\x2e" + "\xc5\x5c\xf3\x67\xfe\x95\x09\xa0" + "\x37\xce\x42\xd9\x70\x07\x7b\x12" + "\xa9\x1d\xb4\x4b\xe2\x56\xed\x84" + "\x1b\x8f\x26\xbd\x31\xc8\x5f\xf6" + "\x6a\x01\x98\x0c\xa3\x3a\xd1\x45" + "\xdc\x73\x0a\x7e\x15\xac\x20\xb7" + "\x4e\xe5\x59\xf0\x87\x1e\x92\x29" + "\xc0\x34\xcb\x62\xf9\x6d\x04\x9b" + "\x0f\xa6\x3d\xd4\x48\xdf\x76\x0d" + "\x81\x18\xaf\x23\xba\x51\xe8\x5c" + "\xf3\x8a\x21\x95\x2c\xc3\x37\xce" + "\x65\xfc\x70\x07\x9e\x12\xa9\x40" + "\xd7\x4b\xe2\x79\x10\x84\x1b\xb2" + "\x26\xbd\x54\xeb\x5f\xf6\x8d\x01" + "\x98\x2f\xc6\x3a\xd1\x68\xff\x73" + "\x0a\xa1\x15\xac\x43\xda\x4e\xe5" + "\x7c\x13\x87\x1e\xb5\x29\xc0\x57" + "\xee\x62\xf9\x90\x04\x9b\x32\xc9" + "\x3d\xd4\x6b\x02\x76\x0d\xa4\x18" + "\xaf\x46\xdd\x51\xe8\x7f\x16\x8a" + "\x21\xb8\x2c\xc3\x5a\xf1\x65\xfc" + "\x93\x07\x9e\x35\xcc\x40\xd7\x6e" + "\x05\x79\x10\xa7\x1b\xb2\x49\xe0" + "\x54\xeb\x82\x19\x8d\x24\xbb\x2f" + "\xc6\x5d\xf4\x68\xff\x96\x0a\xa1" + "\x38\xcf\x43\xda\x71\x08\x7c\x13" + "\xaa\x1e\xb5\x4c\xe3\x57\xee\x85" + "\x1c\x90\x27\xbe\x32\xc9\x60\xf7" + "\x6b\x02\x99\x0d\xa4\x3b\xd2\x46" + "\xdd\x74\x0b\x7f\x16\xad\x21\xb8" + "\x4f\xe6\x5a\xf1\x88\x1f\x93\x2a" + "\xc1\x35\xcc\x63\xfa\x6e\x05\x9c" + "\x10\xa7\x3e\xd5\x49\xe0\x77\x0e" + "\x82\x19\xb0\x24\xbb\x52\xe9\x5d" + "\xf4\x8b\x22\x96\x2d\xc4\x38\xcf" + "\x66\xfd\x71\x08\x9f\x13\xaa\x41" + "\xd8\x4c\xe3\x7a\x11\x85\x1c\xb3" + "\x27\xbe\x55\xec\x60\xf7\x8e\x02" + "\x99\x30\xc7\x3b\xd2\x69\x00\x74" + "\x0b\xa2\x16\xad\x44\xdb\x4f\xe6" + "\x7d\x14\x88\x1f\xb6\x2a\xc1\x58" + "\xef\x63\xfa\x91\x05\x9c\x33\xca" + "\x3e\xd5\x6c\x03\x77\x0e\xa5\x19" + "\xb0\x47\xde\x52\xe9\x80\x17\x8b" + "\x22\xb9\x2d\xc4\x5b\xf2\x66\xfd" + "\x94\x08\x9f\x36\xcd\x41\xd8\x6f" + "\x06\x7a\x11\xa8\x1c\xb3\x4a\xe1" + "\x55\xec\x83\x1a\x8e\x25\xbc\x30" + "\xc7\x5e\xf5\x69\x00\x97\x0b\xa2" + "\x39\xd0\x44\xdb\x72\x09\x7d\x14" + "\xab\x1f\xb6\x4d\xe4\x58\xef\x86" + "\x1d\x91\x28\xbf\x33\xca\x61\xf8" + "\x6c\x03\x9a\x0e\xa5\x3c\xd3\x47" + "\xde\x75\x0c\x80\x17\xae\x22\xb9" + "\x50\xe7\x5b\xf2\x89\x20\x94\x2b" + "\xc2\x36\xcd\x64\xfb\x6f\x06\x9d" + "\x11\xa8\x3f\xd6\x4a\xe1\x78\x0f" + "\x83\x1a\xb1\x25\xbc\x53\xea\x5e" + "\xf5\x8c\x00\x97\x2e\xc5\x39\xd0" + "\x67\xfe\x72\x09\xa0\x14\xab\x42" + "\xd9\x4d\xe4\x7b\x12\x86\x1d\xb4" + "\x28\xbf\x56\xed\x61\xf8\x8f\x03" + "\x9a\x31\xc8\x3c\xd3\x6a\x01\x75" + "\x0c\xa3\x17\xae\x45\xdc\x50\xe7" + "\x7e\x15\x89\x20\xb7\x2b\xc2\x59" + "\xf0\x64\xfb\x92\x06\x9d\x34\xcb" + "\x3f\xd6\x6d\x04\x78\x0f\xa6\x1a" + "\xb1\x48\xdf\x53\xea\x81\x18\x8c" + "\x23\xba\x2e\xc5\x5c\xf3\x67\xfe" + "\x95\x09\xa0\x37\xce\x42\xd9\x70" + "\x07\x7b\x12\xa9\x1d\xb4\x4b\xe2" + "\x56\xed\x84\x1b\x8f\x26\xbd\x31" + "\xc8\x5f\xf6\x6a\x01\x98\x0c\xa3" + "\x3a\xd1\x45\xdc\x73\x0a\x7e\x15" + "\xac\x20\xb7\x4e\xe5\x59\xf0\x87" + "\x1e\x92\x29\xc0\x34\xcb\x62\xf9" + "\x6d\x04\x9b\x0f\xa6\x3d\xd4\x48" + "\xdf\x76\x0d\x81\x18\xaf\x23\xba" + "\x51\xe8\x5c\xf3\x8a\x21\x95\x2c" + "\xc3\x37\xce\x65\xfc\x70\x07\x9e" + "\x12\xa9\x40\xd7\x4b\xe2\x79\x10" + "\x84\x1b\xb2\x26\xbd\x54\xeb\x5f" + "\xf6\x8d\x01\x98\x2f\xc6\x3a\xd1" + "\x68\xff\x73\x0a\xa1\x15\xac\x43" + "\xda\x4e\xe5\x7c\x13\x87\x1e\xb5" + "\x29\xc0\x57\xee\x62\xf9\x90\x04" + "\x9b\x32\xc9\x3d\xd4\x6b\x02\x76" + "\x0d\xa4\x18\xaf\x46\xdd\x51\xe8" + "\x7f\x16\x8a\x21\xb8\x2c\xc3\x5a" + 
"\xf1\x65\xfc\x93\x07\x9e\x35\xcc" + "\x40\xd7\x6e\x05\x79\x10\xa7\x1b" + "\xb2\x49\xe0\x54\xeb\x82\x19\x8d" + "\x24\xbb\x2f\xc6\x5d\xf4\x68\xff" + "\x96\x0a\xa1\x38\xcf\x43\xda\x71" + "\x08\x7c\x13\xaa\x1e\xb5\x4c", + .psize = 1023, + .digest = "\x98\x43\x07\x63\x75\xe0\xa7\x1c" + "\x78\xb1\x8b\xfd\x04\xf5\x2d\x91" + "\x20\x48\xa4\x28\xff\x55\xb1\xd3" + "\xe6\xf9\x4f\xcc", } }; /* * SHA256 test vectors from from NIST */ -#define SHA256_TEST_VECTORS 2 +#define SHA256_TEST_VECTORS 5 static struct hash_testvec sha256_tv_template[] = { { + .plaintext = "", + .psize = 0, + .digest = "\xe3\xb0\xc4\x42\x98\xfc\x1c\x14" + "\x9a\xfb\xf4\xc8\x99\x6f\xb9\x24" + "\x27\xae\x41\xe4\x64\x9b\x93\x4c" + "\xa4\x95\x99\x1b\x78\x52\xb8\x55", + }, { .plaintext = "abc", .psize = 3, .digest = "\xba\x78\x16\xbf\x8f\x01\xcf\xea" @@ -581,16 +879,166 @@ static struct hash_testvec sha256_tv_template[] = { "\xf6\xec\xed\xd4\x19\xdb\x06\xc1", .np = 2, .tap = { 28, 28 } - }, + }, { + .plaintext = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+-", + .psize = 64, + .digest = "\xb5\xfe\xad\x56\x7d\xff\xcb\xa4" + "\x2c\x32\x29\x32\x19\xbb\xfb\xfa" + "\xd6\xff\x94\xa3\x72\x91\x85\x66" + "\x3b\xa7\x87\x77\x58\xa3\x40\x3a", + }, { + .plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3" + "\x7a\x11\x85\x1c\xb3\x27\xbe\x55" + "\xec\x60\xf7\x8e\x02\x99\x30\xc7" + "\x3b\xd2\x69\x00\x74\x0b\xa2\x16" + "\xad\x44\xdb\x4f\xe6\x7d\x14\x88" + "\x1f\xb6\x2a\xc1\x58\xef\x63\xfa" + "\x91\x05\x9c\x33\xca\x3e\xd5\x6c" + "\x03\x77\x0e\xa5\x19\xb0\x47\xde" + "\x52\xe9\x80\x17\x8b\x22\xb9\x2d" + "\xc4\x5b\xf2\x66\xfd\x94\x08\x9f" + "\x36\xcd\x41\xd8\x6f\x06\x7a\x11" + "\xa8\x1c\xb3\x4a\xe1\x55\xec\x83" + "\x1a\x8e\x25\xbc\x30\xc7\x5e\xf5" + "\x69\x00\x97\x0b\xa2\x39\xd0\x44" + "\xdb\x72\x09\x7d\x14\xab\x1f\xb6" + "\x4d\xe4\x58\xef\x86\x1d\x91\x28" + "\xbf\x33\xca\x61\xf8\x6c\x03\x9a" + "\x0e\xa5\x3c\xd3\x47\xde\x75\x0c" + "\x80\x17\xae\x22\xb9\x50\xe7\x5b" + "\xf2\x89\x20\x94\x2b\xc2\x36\xcd" + "\x64\xfb\x6f\x06\x9d\x11\xa8\x3f" + "\xd6\x4a\xe1\x78\x0f\x83\x1a\xb1" + "\x25\xbc\x53\xea\x5e\xf5\x8c\x00" + "\x97\x2e\xc5\x39\xd0\x67\xfe\x72" + "\x09\xa0\x14\xab\x42\xd9\x4d\xe4" + "\x7b\x12\x86\x1d\xb4\x28\xbf\x56" + "\xed\x61\xf8\x8f\x03\x9a\x31\xc8" + "\x3c\xd3\x6a\x01\x75\x0c\xa3\x17" + "\xae\x45\xdc\x50\xe7\x7e\x15\x89" + "\x20\xb7\x2b\xc2\x59\xf0\x64\xfb" + "\x92\x06\x9d\x34\xcb\x3f\xd6\x6d" + "\x04\x78\x0f\xa6\x1a\xb1\x48\xdf" + "\x53\xea\x81\x18\x8c\x23\xba\x2e" + "\xc5\x5c\xf3\x67\xfe\x95\x09\xa0" + "\x37\xce\x42\xd9\x70\x07\x7b\x12" + "\xa9\x1d\xb4\x4b\xe2\x56\xed\x84" + "\x1b\x8f\x26\xbd\x31\xc8\x5f\xf6" + "\x6a\x01\x98\x0c\xa3\x3a\xd1\x45" + "\xdc\x73\x0a\x7e\x15\xac\x20\xb7" + "\x4e\xe5\x59\xf0\x87\x1e\x92\x29" + "\xc0\x34\xcb\x62\xf9\x6d\x04\x9b" + "\x0f\xa6\x3d\xd4\x48\xdf\x76\x0d" + "\x81\x18\xaf\x23\xba\x51\xe8\x5c" + "\xf3\x8a\x21\x95\x2c\xc3\x37\xce" + "\x65\xfc\x70\x07\x9e\x12\xa9\x40" + "\xd7\x4b\xe2\x79\x10\x84\x1b\xb2" + "\x26\xbd\x54\xeb\x5f\xf6\x8d\x01" + "\x98\x2f\xc6\x3a\xd1\x68\xff\x73" + "\x0a\xa1\x15\xac\x43\xda\x4e\xe5" + "\x7c\x13\x87\x1e\xb5\x29\xc0\x57" + "\xee\x62\xf9\x90\x04\x9b\x32\xc9" + "\x3d\xd4\x6b\x02\x76\x0d\xa4\x18" + "\xaf\x46\xdd\x51\xe8\x7f\x16\x8a" + "\x21\xb8\x2c\xc3\x5a\xf1\x65\xfc" + "\x93\x07\x9e\x35\xcc\x40\xd7\x6e" + "\x05\x79\x10\xa7\x1b\xb2\x49\xe0" + "\x54\xeb\x82\x19\x8d\x24\xbb\x2f" + "\xc6\x5d\xf4\x68\xff\x96\x0a\xa1" + "\x38\xcf\x43\xda\x71\x08\x7c\x13" + "\xaa\x1e\xb5\x4c\xe3\x57\xee\x85" + "\x1c\x90\x27\xbe\x32\xc9\x60\xf7" + "\x6b\x02\x99\x0d\xa4\x3b\xd2\x46" + 
"\xdd\x74\x0b\x7f\x16\xad\x21\xb8" + "\x4f\xe6\x5a\xf1\x88\x1f\x93\x2a" + "\xc1\x35\xcc\x63\xfa\x6e\x05\x9c" + "\x10\xa7\x3e\xd5\x49\xe0\x77\x0e" + "\x82\x19\xb0\x24\xbb\x52\xe9\x5d" + "\xf4\x8b\x22\x96\x2d\xc4\x38\xcf" + "\x66\xfd\x71\x08\x9f\x13\xaa\x41" + "\xd8\x4c\xe3\x7a\x11\x85\x1c\xb3" + "\x27\xbe\x55\xec\x60\xf7\x8e\x02" + "\x99\x30\xc7\x3b\xd2\x69\x00\x74" + "\x0b\xa2\x16\xad\x44\xdb\x4f\xe6" + "\x7d\x14\x88\x1f\xb6\x2a\xc1\x58" + "\xef\x63\xfa\x91\x05\x9c\x33\xca" + "\x3e\xd5\x6c\x03\x77\x0e\xa5\x19" + "\xb0\x47\xde\x52\xe9\x80\x17\x8b" + "\x22\xb9\x2d\xc4\x5b\xf2\x66\xfd" + "\x94\x08\x9f\x36\xcd\x41\xd8\x6f" + "\x06\x7a\x11\xa8\x1c\xb3\x4a\xe1" + "\x55\xec\x83\x1a\x8e\x25\xbc\x30" + "\xc7\x5e\xf5\x69\x00\x97\x0b\xa2" + "\x39\xd0\x44\xdb\x72\x09\x7d\x14" + "\xab\x1f\xb6\x4d\xe4\x58\xef\x86" + "\x1d\x91\x28\xbf\x33\xca\x61\xf8" + "\x6c\x03\x9a\x0e\xa5\x3c\xd3\x47" + "\xde\x75\x0c\x80\x17\xae\x22\xb9" + "\x50\xe7\x5b\xf2\x89\x20\x94\x2b" + "\xc2\x36\xcd\x64\xfb\x6f\x06\x9d" + "\x11\xa8\x3f\xd6\x4a\xe1\x78\x0f" + "\x83\x1a\xb1\x25\xbc\x53\xea\x5e" + "\xf5\x8c\x00\x97\x2e\xc5\x39\xd0" + "\x67\xfe\x72\x09\xa0\x14\xab\x42" + "\xd9\x4d\xe4\x7b\x12\x86\x1d\xb4" + "\x28\xbf\x56\xed\x61\xf8\x8f\x03" + "\x9a\x31\xc8\x3c\xd3\x6a\x01\x75" + "\x0c\xa3\x17\xae\x45\xdc\x50\xe7" + "\x7e\x15\x89\x20\xb7\x2b\xc2\x59" + "\xf0\x64\xfb\x92\x06\x9d\x34\xcb" + "\x3f\xd6\x6d\x04\x78\x0f\xa6\x1a" + "\xb1\x48\xdf\x53\xea\x81\x18\x8c" + "\x23\xba\x2e\xc5\x5c\xf3\x67\xfe" + "\x95\x09\xa0\x37\xce\x42\xd9\x70" + "\x07\x7b\x12\xa9\x1d\xb4\x4b\xe2" + "\x56\xed\x84\x1b\x8f\x26\xbd\x31" + "\xc8\x5f\xf6\x6a\x01\x98\x0c\xa3" + "\x3a\xd1\x45\xdc\x73\x0a\x7e\x15" + "\xac\x20\xb7\x4e\xe5\x59\xf0\x87" + "\x1e\x92\x29\xc0\x34\xcb\x62\xf9" + "\x6d\x04\x9b\x0f\xa6\x3d\xd4\x48" + "\xdf\x76\x0d\x81\x18\xaf\x23\xba" + "\x51\xe8\x5c\xf3\x8a\x21\x95\x2c" + "\xc3\x37\xce\x65\xfc\x70\x07\x9e" + "\x12\xa9\x40\xd7\x4b\xe2\x79\x10" + "\x84\x1b\xb2\x26\xbd\x54\xeb\x5f" + "\xf6\x8d\x01\x98\x2f\xc6\x3a\xd1" + "\x68\xff\x73\x0a\xa1\x15\xac\x43" + "\xda\x4e\xe5\x7c\x13\x87\x1e\xb5" + "\x29\xc0\x57\xee\x62\xf9\x90\x04" + "\x9b\x32\xc9\x3d\xd4\x6b\x02\x76" + "\x0d\xa4\x18\xaf\x46\xdd\x51\xe8" + "\x7f\x16\x8a\x21\xb8\x2c\xc3\x5a" + "\xf1\x65\xfc\x93\x07\x9e\x35\xcc" + "\x40\xd7\x6e\x05\x79\x10\xa7\x1b" + "\xb2\x49\xe0\x54\xeb\x82\x19\x8d" + "\x24\xbb\x2f\xc6\x5d\xf4\x68\xff" + "\x96\x0a\xa1\x38\xcf\x43\xda\x71" + "\x08\x7c\x13\xaa\x1e\xb5\x4c", + .psize = 1023, + .digest = "\xc5\xce\x0c\xca\x01\x4f\x53\x3a" + "\x32\x32\x17\xcc\xd4\x6a\x71\xa9" + "\xf3\xed\x50\x10\x64\x8e\x06\xbe" + "\x9b\x4a\xa6\xbb\x05\x89\x59\x51", + } }; /* * SHA384 test vectors from from NIST and kerneli */ -#define SHA384_TEST_VECTORS 4 +#define SHA384_TEST_VECTORS 6 static struct hash_testvec sha384_tv_template[] = { { + .plaintext = "", + .psize = 0, + .digest = "\x38\xb0\x60\xa7\x51\xac\x96\x38" + "\x4c\xd9\x32\x7e\xb1\xb1\xe3\x6a" + "\x21\xfd\xb7\x11\x14\xbe\x07\x43" + "\x4c\x0c\xc7\xbf\x63\xf6\xe1\xda" + "\x27\x4e\xde\xbf\xe7\x6f\x65\xfb" + "\xd5\x1a\xd2\xf1\x48\x98\xb9\x5b", + }, { .plaintext= "abc", .psize = 3, .digest = "\xcb\x00\x75\x3f\x45\xa3\x5e\x8b" @@ -630,16 +1078,163 @@ static struct hash_testvec sha384_tv_template[] = { "\xc9\x38\xe2\xd1\x99\xe8\xbe\xa4", .np = 4, .tap = { 26, 26, 26, 26 } - }, + }, { + .plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3" + "\x7a\x11\x85\x1c\xb3\x27\xbe\x55" + "\xec\x60\xf7\x8e\x02\x99\x30\xc7" + "\x3b\xd2\x69\x00\x74\x0b\xa2\x16" + "\xad\x44\xdb\x4f\xe6\x7d\x14\x88" + "\x1f\xb6\x2a\xc1\x58\xef\x63\xfa" + 
"\x91\x05\x9c\x33\xca\x3e\xd5\x6c" + "\x03\x77\x0e\xa5\x19\xb0\x47\xde" + "\x52\xe9\x80\x17\x8b\x22\xb9\x2d" + "\xc4\x5b\xf2\x66\xfd\x94\x08\x9f" + "\x36\xcd\x41\xd8\x6f\x06\x7a\x11" + "\xa8\x1c\xb3\x4a\xe1\x55\xec\x83" + "\x1a\x8e\x25\xbc\x30\xc7\x5e\xf5" + "\x69\x00\x97\x0b\xa2\x39\xd0\x44" + "\xdb\x72\x09\x7d\x14\xab\x1f\xb6" + "\x4d\xe4\x58\xef\x86\x1d\x91\x28" + "\xbf\x33\xca\x61\xf8\x6c\x03\x9a" + "\x0e\xa5\x3c\xd3\x47\xde\x75\x0c" + "\x80\x17\xae\x22\xb9\x50\xe7\x5b" + "\xf2\x89\x20\x94\x2b\xc2\x36\xcd" + "\x64\xfb\x6f\x06\x9d\x11\xa8\x3f" + "\xd6\x4a\xe1\x78\x0f\x83\x1a\xb1" + "\x25\xbc\x53\xea\x5e\xf5\x8c\x00" + "\x97\x2e\xc5\x39\xd0\x67\xfe\x72" + "\x09\xa0\x14\xab\x42\xd9\x4d\xe4" + "\x7b\x12\x86\x1d\xb4\x28\xbf\x56" + "\xed\x61\xf8\x8f\x03\x9a\x31\xc8" + "\x3c\xd3\x6a\x01\x75\x0c\xa3\x17" + "\xae\x45\xdc\x50\xe7\x7e\x15\x89" + "\x20\xb7\x2b\xc2\x59\xf0\x64\xfb" + "\x92\x06\x9d\x34\xcb\x3f\xd6\x6d" + "\x04\x78\x0f\xa6\x1a\xb1\x48\xdf" + "\x53\xea\x81\x18\x8c\x23\xba\x2e" + "\xc5\x5c\xf3\x67\xfe\x95\x09\xa0" + "\x37\xce\x42\xd9\x70\x07\x7b\x12" + "\xa9\x1d\xb4\x4b\xe2\x56\xed\x84" + "\x1b\x8f\x26\xbd\x31\xc8\x5f\xf6" + "\x6a\x01\x98\x0c\xa3\x3a\xd1\x45" + "\xdc\x73\x0a\x7e\x15\xac\x20\xb7" + "\x4e\xe5\x59\xf0\x87\x1e\x92\x29" + "\xc0\x34\xcb\x62\xf9\x6d\x04\x9b" + "\x0f\xa6\x3d\xd4\x48\xdf\x76\x0d" + "\x81\x18\xaf\x23\xba\x51\xe8\x5c" + "\xf3\x8a\x21\x95\x2c\xc3\x37\xce" + "\x65\xfc\x70\x07\x9e\x12\xa9\x40" + "\xd7\x4b\xe2\x79\x10\x84\x1b\xb2" + "\x26\xbd\x54\xeb\x5f\xf6\x8d\x01" + "\x98\x2f\xc6\x3a\xd1\x68\xff\x73" + "\x0a\xa1\x15\xac\x43\xda\x4e\xe5" + "\x7c\x13\x87\x1e\xb5\x29\xc0\x57" + "\xee\x62\xf9\x90\x04\x9b\x32\xc9" + "\x3d\xd4\x6b\x02\x76\x0d\xa4\x18" + "\xaf\x46\xdd\x51\xe8\x7f\x16\x8a" + "\x21\xb8\x2c\xc3\x5a\xf1\x65\xfc" + "\x93\x07\x9e\x35\xcc\x40\xd7\x6e" + "\x05\x79\x10\xa7\x1b\xb2\x49\xe0" + "\x54\xeb\x82\x19\x8d\x24\xbb\x2f" + "\xc6\x5d\xf4\x68\xff\x96\x0a\xa1" + "\x38\xcf\x43\xda\x71\x08\x7c\x13" + "\xaa\x1e\xb5\x4c\xe3\x57\xee\x85" + "\x1c\x90\x27\xbe\x32\xc9\x60\xf7" + "\x6b\x02\x99\x0d\xa4\x3b\xd2\x46" + "\xdd\x74\x0b\x7f\x16\xad\x21\xb8" + "\x4f\xe6\x5a\xf1\x88\x1f\x93\x2a" + "\xc1\x35\xcc\x63\xfa\x6e\x05\x9c" + "\x10\xa7\x3e\xd5\x49\xe0\x77\x0e" + "\x82\x19\xb0\x24\xbb\x52\xe9\x5d" + "\xf4\x8b\x22\x96\x2d\xc4\x38\xcf" + "\x66\xfd\x71\x08\x9f\x13\xaa\x41" + "\xd8\x4c\xe3\x7a\x11\x85\x1c\xb3" + "\x27\xbe\x55\xec\x60\xf7\x8e\x02" + "\x99\x30\xc7\x3b\xd2\x69\x00\x74" + "\x0b\xa2\x16\xad\x44\xdb\x4f\xe6" + "\x7d\x14\x88\x1f\xb6\x2a\xc1\x58" + "\xef\x63\xfa\x91\x05\x9c\x33\xca" + "\x3e\xd5\x6c\x03\x77\x0e\xa5\x19" + "\xb0\x47\xde\x52\xe9\x80\x17\x8b" + "\x22\xb9\x2d\xc4\x5b\xf2\x66\xfd" + "\x94\x08\x9f\x36\xcd\x41\xd8\x6f" + "\x06\x7a\x11\xa8\x1c\xb3\x4a\xe1" + "\x55\xec\x83\x1a\x8e\x25\xbc\x30" + "\xc7\x5e\xf5\x69\x00\x97\x0b\xa2" + "\x39\xd0\x44\xdb\x72\x09\x7d\x14" + "\xab\x1f\xb6\x4d\xe4\x58\xef\x86" + "\x1d\x91\x28\xbf\x33\xca\x61\xf8" + "\x6c\x03\x9a\x0e\xa5\x3c\xd3\x47" + "\xde\x75\x0c\x80\x17\xae\x22\xb9" + "\x50\xe7\x5b\xf2\x89\x20\x94\x2b" + "\xc2\x36\xcd\x64\xfb\x6f\x06\x9d" + "\x11\xa8\x3f\xd6\x4a\xe1\x78\x0f" + "\x83\x1a\xb1\x25\xbc\x53\xea\x5e" + "\xf5\x8c\x00\x97\x2e\xc5\x39\xd0" + "\x67\xfe\x72\x09\xa0\x14\xab\x42" + "\xd9\x4d\xe4\x7b\x12\x86\x1d\xb4" + "\x28\xbf\x56\xed\x61\xf8\x8f\x03" + "\x9a\x31\xc8\x3c\xd3\x6a\x01\x75" + "\x0c\xa3\x17\xae\x45\xdc\x50\xe7" + "\x7e\x15\x89\x20\xb7\x2b\xc2\x59" + "\xf0\x64\xfb\x92\x06\x9d\x34\xcb" + "\x3f\xd6\x6d\x04\x78\x0f\xa6\x1a" + "\xb1\x48\xdf\x53\xea\x81\x18\x8c" + "\x23\xba\x2e\xc5\x5c\xf3\x67\xfe" + 
"\x95\x09\xa0\x37\xce\x42\xd9\x70" + "\x07\x7b\x12\xa9\x1d\xb4\x4b\xe2" + "\x56\xed\x84\x1b\x8f\x26\xbd\x31" + "\xc8\x5f\xf6\x6a\x01\x98\x0c\xa3" + "\x3a\xd1\x45\xdc\x73\x0a\x7e\x15" + "\xac\x20\xb7\x4e\xe5\x59\xf0\x87" + "\x1e\x92\x29\xc0\x34\xcb\x62\xf9" + "\x6d\x04\x9b\x0f\xa6\x3d\xd4\x48" + "\xdf\x76\x0d\x81\x18\xaf\x23\xba" + "\x51\xe8\x5c\xf3\x8a\x21\x95\x2c" + "\xc3\x37\xce\x65\xfc\x70\x07\x9e" + "\x12\xa9\x40\xd7\x4b\xe2\x79\x10" + "\x84\x1b\xb2\x26\xbd\x54\xeb\x5f" + "\xf6\x8d\x01\x98\x2f\xc6\x3a\xd1" + "\x68\xff\x73\x0a\xa1\x15\xac\x43" + "\xda\x4e\xe5\x7c\x13\x87\x1e\xb5" + "\x29\xc0\x57\xee\x62\xf9\x90\x04" + "\x9b\x32\xc9\x3d\xd4\x6b\x02\x76" + "\x0d\xa4\x18\xaf\x46\xdd\x51\xe8" + "\x7f\x16\x8a\x21\xb8\x2c\xc3\x5a" + "\xf1\x65\xfc\x93\x07\x9e\x35\xcc" + "\x40\xd7\x6e\x05\x79\x10\xa7\x1b" + "\xb2\x49\xe0\x54\xeb\x82\x19\x8d" + "\x24\xbb\x2f\xc6\x5d\xf4\x68\xff" + "\x96\x0a\xa1\x38\xcf\x43\xda\x71" + "\x08\x7c\x13\xaa\x1e\xb5\x4c", + .psize = 1023, + .digest = "\x4d\x97\x23\xc8\xea\x7a\x7c\x15" + "\xb8\xff\x97\x9c\xf5\x13\x4f\x31" + "\xde\x67\xf7\x24\x73\xcd\x70\x1c" + "\x03\x4a\xba\x8a\x87\x49\xfe\xdc" + "\x75\x29\x62\x83\xae\x3f\x17\xab" + "\xfd\x10\x4d\x8e\x17\x1c\x1f\xca", + } }; /* * SHA512 test vectors from from NIST and kerneli */ -#define SHA512_TEST_VECTORS 4 +#define SHA512_TEST_VECTORS 6 static struct hash_testvec sha512_tv_template[] = { { + .plaintext = "", + .psize = 0, + .digest = "\xcf\x83\xe1\x35\x7e\xef\xb8\xbd" + "\xf1\x54\x28\x50\xd6\x6d\x80\x07" + "\xd6\x20\xe4\x05\x0b\x57\x15\xdc" + "\x83\xf4\xa9\x21\xd3\x6c\xe9\xce" + "\x47\xd0\xd1\x3c\x5d\x85\xf2\xb0" + "\xff\x83\x18\xd2\x87\x7e\xec\x2f" + "\x63\xb9\x31\xbd\x47\x41\x7a\x81" + "\xa5\x38\x32\x7a\xf9\x27\xda\x3e", + }, { .plaintext = "abc", .psize = 3, .digest = "\xdd\xaf\x35\xa1\x93\x61\x7a\xba" @@ -687,7 +1282,145 @@ static struct hash_testvec sha512_tv_template[] = { "\xed\xb4\x19\x87\x23\x28\x50\xc9", .np = 4, .tap = { 26, 26, 26, 26 } - }, + }, { + .plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3" + "\x7a\x11\x85\x1c\xb3\x27\xbe\x55" + "\xec\x60\xf7\x8e\x02\x99\x30\xc7" + "\x3b\xd2\x69\x00\x74\x0b\xa2\x16" + "\xad\x44\xdb\x4f\xe6\x7d\x14\x88" + "\x1f\xb6\x2a\xc1\x58\xef\x63\xfa" + "\x91\x05\x9c\x33\xca\x3e\xd5\x6c" + "\x03\x77\x0e\xa5\x19\xb0\x47\xde" + "\x52\xe9\x80\x17\x8b\x22\xb9\x2d" + "\xc4\x5b\xf2\x66\xfd\x94\x08\x9f" + "\x36\xcd\x41\xd8\x6f\x06\x7a\x11" + "\xa8\x1c\xb3\x4a\xe1\x55\xec\x83" + "\x1a\x8e\x25\xbc\x30\xc7\x5e\xf5" + "\x69\x00\x97\x0b\xa2\x39\xd0\x44" + "\xdb\x72\x09\x7d\x14\xab\x1f\xb6" + "\x4d\xe4\x58\xef\x86\x1d\x91\x28" + "\xbf\x33\xca\x61\xf8\x6c\x03\x9a" + "\x0e\xa5\x3c\xd3\x47\xde\x75\x0c" + "\x80\x17\xae\x22\xb9\x50\xe7\x5b" + "\xf2\x89\x20\x94\x2b\xc2\x36\xcd" + "\x64\xfb\x6f\x06\x9d\x11\xa8\x3f" + "\xd6\x4a\xe1\x78\x0f\x83\x1a\xb1" + "\x25\xbc\x53\xea\x5e\xf5\x8c\x00" + "\x97\x2e\xc5\x39\xd0\x67\xfe\x72" + "\x09\xa0\x14\xab\x42\xd9\x4d\xe4" + "\x7b\x12\x86\x1d\xb4\x28\xbf\x56" + "\xed\x61\xf8\x8f\x03\x9a\x31\xc8" + "\x3c\xd3\x6a\x01\x75\x0c\xa3\x17" + "\xae\x45\xdc\x50\xe7\x7e\x15\x89" + "\x20\xb7\x2b\xc2\x59\xf0\x64\xfb" + "\x92\x06\x9d\x34\xcb\x3f\xd6\x6d" + "\x04\x78\x0f\xa6\x1a\xb1\x48\xdf" + "\x53\xea\x81\x18\x8c\x23\xba\x2e" + "\xc5\x5c\xf3\x67\xfe\x95\x09\xa0" + "\x37\xce\x42\xd9\x70\x07\x7b\x12" + "\xa9\x1d\xb4\x4b\xe2\x56\xed\x84" + "\x1b\x8f\x26\xbd\x31\xc8\x5f\xf6" + "\x6a\x01\x98\x0c\xa3\x3a\xd1\x45" + "\xdc\x73\x0a\x7e\x15\xac\x20\xb7" + "\x4e\xe5\x59\xf0\x87\x1e\x92\x29" + "\xc0\x34\xcb\x62\xf9\x6d\x04\x9b" + "\x0f\xa6\x3d\xd4\x48\xdf\x76\x0d" + 
"\x81\x18\xaf\x23\xba\x51\xe8\x5c" + "\xf3\x8a\x21\x95\x2c\xc3\x37\xce" + "\x65\xfc\x70\x07\x9e\x12\xa9\x40" + "\xd7\x4b\xe2\x79\x10\x84\x1b\xb2" + "\x26\xbd\x54\xeb\x5f\xf6\x8d\x01" + "\x98\x2f\xc6\x3a\xd1\x68\xff\x73" + "\x0a\xa1\x15\xac\x43\xda\x4e\xe5" + "\x7c\x13\x87\x1e\xb5\x29\xc0\x57" + "\xee\x62\xf9\x90\x04\x9b\x32\xc9" + "\x3d\xd4\x6b\x02\x76\x0d\xa4\x18" + "\xaf\x46\xdd\x51\xe8\x7f\x16\x8a" + "\x21\xb8\x2c\xc3\x5a\xf1\x65\xfc" + "\x93\x07\x9e\x35\xcc\x40\xd7\x6e" + "\x05\x79\x10\xa7\x1b\xb2\x49\xe0" + "\x54\xeb\x82\x19\x8d\x24\xbb\x2f" + "\xc6\x5d\xf4\x68\xff\x96\x0a\xa1" + "\x38\xcf\x43\xda\x71\x08\x7c\x13" + "\xaa\x1e\xb5\x4c\xe3\x57\xee\x85" + "\x1c\x90\x27\xbe\x32\xc9\x60\xf7" + "\x6b\x02\x99\x0d\xa4\x3b\xd2\x46" + "\xdd\x74\x0b\x7f\x16\xad\x21\xb8" + "\x4f\xe6\x5a\xf1\x88\x1f\x93\x2a" + "\xc1\x35\xcc\x63\xfa\x6e\x05\x9c" + "\x10\xa7\x3e\xd5\x49\xe0\x77\x0e" + "\x82\x19\xb0\x24\xbb\x52\xe9\x5d" + "\xf4\x8b\x22\x96\x2d\xc4\x38\xcf" + "\x66\xfd\x71\x08\x9f\x13\xaa\x41" + "\xd8\x4c\xe3\x7a\x11\x85\x1c\xb3" + "\x27\xbe\x55\xec\x60\xf7\x8e\x02" + "\x99\x30\xc7\x3b\xd2\x69\x00\x74" + "\x0b\xa2\x16\xad\x44\xdb\x4f\xe6" + "\x7d\x14\x88\x1f\xb6\x2a\xc1\x58" + "\xef\x63\xfa\x91\x05\x9c\x33\xca" + "\x3e\xd5\x6c\x03\x77\x0e\xa5\x19" + "\xb0\x47\xde\x52\xe9\x80\x17\x8b" + "\x22\xb9\x2d\xc4\x5b\xf2\x66\xfd" + "\x94\x08\x9f\x36\xcd\x41\xd8\x6f" + "\x06\x7a\x11\xa8\x1c\xb3\x4a\xe1" + "\x55\xec\x83\x1a\x8e\x25\xbc\x30" + "\xc7\x5e\xf5\x69\x00\x97\x0b\xa2" + "\x39\xd0\x44\xdb\x72\x09\x7d\x14" + "\xab\x1f\xb6\x4d\xe4\x58\xef\x86" + "\x1d\x91\x28\xbf\x33\xca\x61\xf8" + "\x6c\x03\x9a\x0e\xa5\x3c\xd3\x47" + "\xde\x75\x0c\x80\x17\xae\x22\xb9" + "\x50\xe7\x5b\xf2\x89\x20\x94\x2b" + "\xc2\x36\xcd\x64\xfb\x6f\x06\x9d" + "\x11\xa8\x3f\xd6\x4a\xe1\x78\x0f" + "\x83\x1a\xb1\x25\xbc\x53\xea\x5e" + "\xf5\x8c\x00\x97\x2e\xc5\x39\xd0" + "\x67\xfe\x72\x09\xa0\x14\xab\x42" + "\xd9\x4d\xe4\x7b\x12\x86\x1d\xb4" + "\x28\xbf\x56\xed\x61\xf8\x8f\x03" + "\x9a\x31\xc8\x3c\xd3\x6a\x01\x75" + "\x0c\xa3\x17\xae\x45\xdc\x50\xe7" + "\x7e\x15\x89\x20\xb7\x2b\xc2\x59" + "\xf0\x64\xfb\x92\x06\x9d\x34\xcb" + "\x3f\xd6\x6d\x04\x78\x0f\xa6\x1a" + "\xb1\x48\xdf\x53\xea\x81\x18\x8c" + "\x23\xba\x2e\xc5\x5c\xf3\x67\xfe" + "\x95\x09\xa0\x37\xce\x42\xd9\x70" + "\x07\x7b\x12\xa9\x1d\xb4\x4b\xe2" + "\x56\xed\x84\x1b\x8f\x26\xbd\x31" + "\xc8\x5f\xf6\x6a\x01\x98\x0c\xa3" + "\x3a\xd1\x45\xdc\x73\x0a\x7e\x15" + "\xac\x20\xb7\x4e\xe5\x59\xf0\x87" + "\x1e\x92\x29\xc0\x34\xcb\x62\xf9" + "\x6d\x04\x9b\x0f\xa6\x3d\xd4\x48" + "\xdf\x76\x0d\x81\x18\xaf\x23\xba" + "\x51\xe8\x5c\xf3\x8a\x21\x95\x2c" + "\xc3\x37\xce\x65\xfc\x70\x07\x9e" + "\x12\xa9\x40\xd7\x4b\xe2\x79\x10" + "\x84\x1b\xb2\x26\xbd\x54\xeb\x5f" + "\xf6\x8d\x01\x98\x2f\xc6\x3a\xd1" + "\x68\xff\x73\x0a\xa1\x15\xac\x43" + "\xda\x4e\xe5\x7c\x13\x87\x1e\xb5" + "\x29\xc0\x57\xee\x62\xf9\x90\x04" + "\x9b\x32\xc9\x3d\xd4\x6b\x02\x76" + "\x0d\xa4\x18\xaf\x46\xdd\x51\xe8" + "\x7f\x16\x8a\x21\xb8\x2c\xc3\x5a" + "\xf1\x65\xfc\x93\x07\x9e\x35\xcc" + "\x40\xd7\x6e\x05\x79\x10\xa7\x1b" + "\xb2\x49\xe0\x54\xeb\x82\x19\x8d" + "\x24\xbb\x2f\xc6\x5d\xf4\x68\xff" + "\x96\x0a\xa1\x38\xcf\x43\xda\x71" + "\x08\x7c\x13\xaa\x1e\xb5\x4c", + .psize = 1023, + .digest = "\x76\xc9\xd4\x91\x7a\x5f\x0f\xaa" + "\x13\x39\xf3\x01\x7a\xfa\xe5\x41" + "\x5f\x0b\xf8\xeb\x32\xfc\xbf\xb0" + "\xfa\x8c\xcd\x17\x83\xe2\xfa\xeb" + "\x1c\x19\xde\xe2\x75\xdc\x34\x64" + "\x5f\x35\x9c\x61\x2f\x10\xf9\xec" + "\x59\xca\x9d\xcc\x25\x0c\x43\xba" + "\x85\xa8\xf8\xfe\xb5\x24\xb2\xee", + } }; @@ -12823,11 +13556,11 @@ static struct 
cipher_testvec cast6_xts_dec_tv_template[] = { #define AES_CBC_DEC_TEST_VECTORS 5 #define HMAC_MD5_ECB_CIPHER_NULL_ENC_TEST_VECTORS 2 #define HMAC_MD5_ECB_CIPHER_NULL_DEC_TEST_VECTORS 2 -#define HMAC_SHA1_ECB_CIPHER_NULL_ENC_TEST_VECTORS 2 -#define HMAC_SHA1_ECB_CIPHER_NULL_DEC_TEST_VECTORS 2 -#define HMAC_SHA1_AES_CBC_ENC_TEST_VECTORS 7 -#define HMAC_SHA256_AES_CBC_ENC_TEST_VECTORS 7 -#define HMAC_SHA512_AES_CBC_ENC_TEST_VECTORS 7 +#define HMAC_SHA1_ECB_CIPHER_NULL_ENC_TEST_VEC 2 +#define HMAC_SHA1_ECB_CIPHER_NULL_DEC_TEST_VEC 2 +#define HMAC_SHA1_AES_CBC_ENC_TEST_VEC 7 +#define HMAC_SHA256_AES_CBC_ENC_TEST_VEC 7 +#define HMAC_SHA512_AES_CBC_ENC_TEST_VEC 7 #define AES_LRW_ENC_TEST_VECTORS 8 #define AES_LRW_DEC_TEST_VECTORS 8 #define AES_XTS_ENC_TEST_VECTORS 5 @@ -12844,7 +13577,7 @@ static struct cipher_testvec cast6_xts_dec_tv_template[] = { #define AES_GCM_4106_DEC_TEST_VECTORS 7 #define AES_GCM_4543_ENC_TEST_VECTORS 1 #define AES_GCM_4543_DEC_TEST_VECTORS 2 -#define AES_CCM_ENC_TEST_VECTORS 7 +#define AES_CCM_ENC_TEST_VECTORS 8 #define AES_CCM_DEC_TEST_VECTORS 7 #define AES_CCM_4309_ENC_TEST_VECTORS 7 #define AES_CCM_4309_DEC_TEST_VECTORS 10 @@ -13715,7 +14448,7 @@ static struct aead_testvec hmac_md5_ecb_cipher_null_dec_tv_template[] = { }, }; -static struct aead_testvec hmac_sha1_aes_cbc_enc_tv_template[] = { +static struct aead_testvec hmac_sha1_aes_cbc_enc_tv_temp[] = { { /* RFC 3602 Case 1 */ #ifdef __LITTLE_ENDIAN .key = "\x08\x00" /* rta length */ @@ -13964,7 +14697,7 @@ static struct aead_testvec hmac_sha1_aes_cbc_enc_tv_template[] = { }, }; -static struct aead_testvec hmac_sha1_ecb_cipher_null_enc_tv_template[] = { +static struct aead_testvec hmac_sha1_ecb_cipher_null_enc_tv_temp[] = { { /* Input data from RFC 2410 Case 1 */ #ifdef __LITTLE_ENDIAN .key = "\x08\x00" /* rta length */ @@ -14010,7 +14743,7 @@ static struct aead_testvec hmac_sha1_ecb_cipher_null_enc_tv_template[] = { }, }; -static struct aead_testvec hmac_sha1_ecb_cipher_null_dec_tv_template[] = { +static struct aead_testvec hmac_sha1_ecb_cipher_null_dec_tv_temp[] = { { #ifdef __LITTLE_ENDIAN .key = "\x08\x00" /* rta length */ @@ -14056,7 +14789,7 @@ static struct aead_testvec hmac_sha1_ecb_cipher_null_dec_tv_template[] = { }, }; -static struct aead_testvec hmac_sha256_aes_cbc_enc_tv_template[] = { +static struct aead_testvec hmac_sha256_aes_cbc_enc_tv_temp[] = { { /* RFC 3602 Case 1 */ #ifdef __LITTLE_ENDIAN .key = "\x08\x00" /* rta length */ @@ -14319,7 +15052,7 @@ static struct aead_testvec hmac_sha256_aes_cbc_enc_tv_template[] = { }, }; -static struct aead_testvec hmac_sha512_aes_cbc_enc_tv_template[] = { +static struct aead_testvec hmac_sha512_aes_cbc_enc_tv_temp[] = { { /* RFC 3602 Case 1 */ #ifdef __LITTLE_ENDIAN .key = "\x08\x00" /* rta length */ @@ -14638,6 +15371,652 @@ static struct aead_testvec hmac_sha512_aes_cbc_enc_tv_template[] = { }, }; +#define HMAC_SHA1_DES_CBC_ENC_TEST_VEC 1 + +static struct aead_testvec hmac_sha1_des_cbc_enc_tv_temp[] = { + { /*Generated with cryptopp*/ +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x08" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55" + "\xE9\xC0\xFF\x2E\x76\x0B\x64\x24", + .klen = 8 + 20 + 8, + .iv = "\x7D\x33\x88\x93\x0F\x93\xB2\x42", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01", + .alen = 8, + .input = "\x6f\x54\x20\x6f\x61\x4d\x79\x6e" + 
"\x53\x20\x63\x65\x65\x72\x73\x74" + "\x54\x20\x6f\x6f\x4d\x20\x6e\x61" + "\x20\x79\x65\x53\x72\x63\x74\x65" + "\x20\x73\x6f\x54\x20\x6f\x61\x4d" + "\x79\x6e\x53\x20\x63\x65\x65\x72" + "\x73\x74\x54\x20\x6f\x6f\x4d\x20" + "\x6e\x61\x20\x79\x65\x53\x72\x63" + "\x74\x65\x20\x73\x6f\x54\x20\x6f" + "\x61\x4d\x79\x6e\x53\x20\x63\x65" + "\x65\x72\x73\x74\x54\x20\x6f\x6f" + "\x4d\x20\x6e\x61\x20\x79\x65\x53" + "\x72\x63\x74\x65\x20\x73\x6f\x54" + "\x20\x6f\x61\x4d\x79\x6e\x53\x20" + "\x63\x65\x65\x72\x73\x74\x54\x20" + "\x6f\x6f\x4d\x20\x6e\x61\x0a\x79", + .ilen = 128, + .result = "\x70\xd6\xde\x64\x87\x17\xf1\xe8" + "\x54\x31\x85\x37\xed\x6b\x01\x8d" + "\xe3\xcc\xe0\x1d\x5e\xf3\xfe\xf1" + "\x41\xaa\x33\x91\xa7\x7d\x99\x88" + "\x4d\x85\x6e\x2f\xa3\x69\xf5\x82" + "\x3a\x6f\x25\xcb\x7d\x58\x1f\x9b" + "\xaa\x9c\x11\xd5\x76\x67\xce\xde" + "\x56\xd7\x5a\x80\x69\xea\x3a\x02" + "\xf0\xc7\x7c\xe3\xcb\x40\xe5\x52" + "\xd1\x10\x92\x78\x0b\x8e\x5b\xf1" + "\xe3\x26\x1f\xe1\x15\x41\xc7\xba" + "\x99\xdb\x08\x51\x1c\xd3\x01\xf4" + "\x87\x47\x39\xb8\xd2\xdd\xbd\xfb" + "\x66\x13\xdf\x1c\x01\x44\xf0\x7a" + "\x1a\x6b\x13\xf5\xd5\x0b\xb8\xba" + "\x53\xba\xe1\x76\xe3\x82\x07\x86" + "\x95\x16\x20\x09\xf5\x95\x19\xfd" + "\x3c\xc7\xe0\x42\xc0\x14\x69\xfa" + "\x5c\x44\xa9\x37", + .rlen = 128 + 20, + }, +}; + +#define HMAC_SHA224_DES_CBC_ENC_TEST_VEC 1 + +static struct aead_testvec hmac_sha224_des_cbc_enc_tv_temp[] = { + { /*Generated with cryptopp*/ +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x08" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xE9\xC0\xFF\x2E\x76\x0B\x64\x24", + .klen = 8 + 24 + 8, + .iv = "\x7D\x33\x88\x93\x0F\x93\xB2\x42", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01", + .alen = 8, + .input = "\x6f\x54\x20\x6f\x61\x4d\x79\x6e" + "\x53\x20\x63\x65\x65\x72\x73\x74" + "\x54\x20\x6f\x6f\x4d\x20\x6e\x61" + "\x20\x79\x65\x53\x72\x63\x74\x65" + "\x20\x73\x6f\x54\x20\x6f\x61\x4d" + "\x79\x6e\x53\x20\x63\x65\x65\x72" + "\x73\x74\x54\x20\x6f\x6f\x4d\x20" + "\x6e\x61\x20\x79\x65\x53\x72\x63" + "\x74\x65\x20\x73\x6f\x54\x20\x6f" + "\x61\x4d\x79\x6e\x53\x20\x63\x65" + "\x65\x72\x73\x74\x54\x20\x6f\x6f" + "\x4d\x20\x6e\x61\x20\x79\x65\x53" + "\x72\x63\x74\x65\x20\x73\x6f\x54" + "\x20\x6f\x61\x4d\x79\x6e\x53\x20" + "\x63\x65\x65\x72\x73\x74\x54\x20" + "\x6f\x6f\x4d\x20\x6e\x61\x0a\x79", + .ilen = 128, + .result = "\x70\xd6\xde\x64\x87\x17\xf1\xe8" + "\x54\x31\x85\x37\xed\x6b\x01\x8d" + "\xe3\xcc\xe0\x1d\x5e\xf3\xfe\xf1" + "\x41\xaa\x33\x91\xa7\x7d\x99\x88" + "\x4d\x85\x6e\x2f\xa3\x69\xf5\x82" + "\x3a\x6f\x25\xcb\x7d\x58\x1f\x9b" + "\xaa\x9c\x11\xd5\x76\x67\xce\xde" + "\x56\xd7\x5a\x80\x69\xea\x3a\x02" + "\xf0\xc7\x7c\xe3\xcb\x40\xe5\x52" + "\xd1\x10\x92\x78\x0b\x8e\x5b\xf1" + "\xe3\x26\x1f\xe1\x15\x41\xc7\xba" + "\x99\xdb\x08\x51\x1c\xd3\x01\xf4" + "\x87\x47\x39\xb8\xd2\xdd\xbd\xfb" + "\x66\x13\xdf\x1c\x01\x44\xf0\x7a" + "\x1a\x6b\x13\xf5\xd5\x0b\xb8\xba" + "\x53\xba\xe1\x76\xe3\x82\x07\x86" + "\x9c\x2d\x7e\xee\x20\x34\x55\x0a" + "\xce\xb5\x4e\x64\x53\xe7\xbf\x91" + "\xab\xd4\xd9\xda\xc9\x12\xae\xf7", + .rlen = 128 + 24, + }, +}; + +#define HMAC_SHA256_DES_CBC_ENC_TEST_VEC 1 + +static struct aead_testvec hmac_sha256_des_cbc_enc_tv_temp[] = { + { /*Generated with cryptopp*/ +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* 
rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x08" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\xE9\xC0\xFF\x2E\x76\x0B\x64\x24", + .klen = 8 + 32 + 8, + .iv = "\x7D\x33\x88\x93\x0F\x93\xB2\x42", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01", + .alen = 8, + .input = "\x6f\x54\x20\x6f\x61\x4d\x79\x6e" + "\x53\x20\x63\x65\x65\x72\x73\x74" + "\x54\x20\x6f\x6f\x4d\x20\x6e\x61" + "\x20\x79\x65\x53\x72\x63\x74\x65" + "\x20\x73\x6f\x54\x20\x6f\x61\x4d" + "\x79\x6e\x53\x20\x63\x65\x65\x72" + "\x73\x74\x54\x20\x6f\x6f\x4d\x20" + "\x6e\x61\x20\x79\x65\x53\x72\x63" + "\x74\x65\x20\x73\x6f\x54\x20\x6f" + "\x61\x4d\x79\x6e\x53\x20\x63\x65" + "\x65\x72\x73\x74\x54\x20\x6f\x6f" + "\x4d\x20\x6e\x61\x20\x79\x65\x53" + "\x72\x63\x74\x65\x20\x73\x6f\x54" + "\x20\x6f\x61\x4d\x79\x6e\x53\x20" + "\x63\x65\x65\x72\x73\x74\x54\x20" + "\x6f\x6f\x4d\x20\x6e\x61\x0a\x79", + .ilen = 128, + .result = "\x70\xd6\xde\x64\x87\x17\xf1\xe8" + "\x54\x31\x85\x37\xed\x6b\x01\x8d" + "\xe3\xcc\xe0\x1d\x5e\xf3\xfe\xf1" + "\x41\xaa\x33\x91\xa7\x7d\x99\x88" + "\x4d\x85\x6e\x2f\xa3\x69\xf5\x82" + "\x3a\x6f\x25\xcb\x7d\x58\x1f\x9b" + "\xaa\x9c\x11\xd5\x76\x67\xce\xde" + "\x56\xd7\x5a\x80\x69\xea\x3a\x02" + "\xf0\xc7\x7c\xe3\xcb\x40\xe5\x52" + "\xd1\x10\x92\x78\x0b\x8e\x5b\xf1" + "\xe3\x26\x1f\xe1\x15\x41\xc7\xba" + "\x99\xdb\x08\x51\x1c\xd3\x01\xf4" + "\x87\x47\x39\xb8\xd2\xdd\xbd\xfb" + "\x66\x13\xdf\x1c\x01\x44\xf0\x7a" + "\x1a\x6b\x13\xf5\xd5\x0b\xb8\xba" + "\x53\xba\xe1\x76\xe3\x82\x07\x86" + "\xc6\x58\xa1\x60\x70\x91\x39\x36" + "\x50\xf6\x5d\xab\x4b\x51\x4e\x5e" + "\xde\x63\xde\x76\x52\xde\x9f\xba" + "\x90\xcf\x15\xf2\xbb\x6e\x84\x00", + .rlen = 128 + 32, + }, +}; + +#define HMAC_SHA384_DES_CBC_ENC_TEST_VEC 1 + +static struct aead_testvec hmac_sha384_des_cbc_enc_tv_temp[] = { + { /*Generated with cryptopp*/ +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x08" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x33\x44\x55\x66\x77\x88\x99\xaa" + "\xbb\xcc\xdd\xee\xff\x11\x22\x33" + "\xE9\xC0\xFF\x2E\x76\x0B\x64\x24", + .klen = 8 + 48 + 8, + .iv = "\x7D\x33\x88\x93\x0F\x93\xB2\x42", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01", + .alen = 8, + .input = "\x6f\x54\x20\x6f\x61\x4d\x79\x6e" + "\x53\x20\x63\x65\x65\x72\x73\x74" + "\x54\x20\x6f\x6f\x4d\x20\x6e\x61" + "\x20\x79\x65\x53\x72\x63\x74\x65" + "\x20\x73\x6f\x54\x20\x6f\x61\x4d" + "\x79\x6e\x53\x20\x63\x65\x65\x72" + "\x73\x74\x54\x20\x6f\x6f\x4d\x20" + "\x6e\x61\x20\x79\x65\x53\x72\x63" + "\x74\x65\x20\x73\x6f\x54\x20\x6f" + "\x61\x4d\x79\x6e\x53\x20\x63\x65" + "\x65\x72\x73\x74\x54\x20\x6f\x6f" + "\x4d\x20\x6e\x61\x20\x79\x65\x53" + "\x72\x63\x74\x65\x20\x73\x6f\x54" + "\x20\x6f\x61\x4d\x79\x6e\x53\x20" + "\x63\x65\x65\x72\x73\x74\x54\x20" + "\x6f\x6f\x4d\x20\x6e\x61\x0a\x79", + .ilen = 128, + .result = "\x70\xd6\xde\x64\x87\x17\xf1\xe8" + "\x54\x31\x85\x37\xed\x6b\x01\x8d" + "\xe3\xcc\xe0\x1d\x5e\xf3\xfe\xf1" + "\x41\xaa\x33\x91\xa7\x7d\x99\x88" + "\x4d\x85\x6e\x2f\xa3\x69\xf5\x82" + "\x3a\x6f\x25\xcb\x7d\x58\x1f\x9b" + "\xaa\x9c\x11\xd5\x76\x67\xce\xde" + "\x56\xd7\x5a\x80\x69\xea\x3a\x02" + "\xf0\xc7\x7c\xe3\xcb\x40\xe5\x52" + "\xd1\x10\x92\x78\x0b\x8e\x5b\xf1" + 
"\xe3\x26\x1f\xe1\x15\x41\xc7\xba" + "\x99\xdb\x08\x51\x1c\xd3\x01\xf4" + "\x87\x47\x39\xb8\xd2\xdd\xbd\xfb" + "\x66\x13\xdf\x1c\x01\x44\xf0\x7a" + "\x1a\x6b\x13\xf5\xd5\x0b\xb8\xba" + "\x53\xba\xe1\x76\xe3\x82\x07\x86" + "\xa8\x8e\x9c\x74\x8c\x2b\x99\xa0" + "\xc8\x8c\xef\x25\x07\x83\x11\x3a" + "\x31\x8d\xbe\x3b\x6a\xd7\x96\xfe" + "\x5e\x67\xb5\x74\xe7\xe7\x85\x61" + "\x6a\x95\x26\x75\xcc\x53\x89\xf3" + "\x74\xc9\x2a\x76\x20\xa2\x64\x62", + .rlen = 128 + 48, + }, +}; + +#define HMAC_SHA512_DES_CBC_ENC_TEST_VEC 1 + +static struct aead_testvec hmac_sha512_des_cbc_enc_tv_temp[] = { + { /*Generated with cryptopp*/ +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x08" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x33\x44\x55\x66\x77\x88\x99\xaa" + "\xbb\xcc\xdd\xee\xff\x11\x22\x33" + "\x44\x55\x66\x77\x88\x99\xaa\xbb" + "\xcc\xdd\xee\xff\x11\x22\x33\x44" + "\xE9\xC0\xFF\x2E\x76\x0B\x64\x24", + .klen = 8 + 64 + 8, + .iv = "\x7D\x33\x88\x93\x0F\x93\xB2\x42", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01", + .alen = 8, + .input = "\x6f\x54\x20\x6f\x61\x4d\x79\x6e" + "\x53\x20\x63\x65\x65\x72\x73\x74" + "\x54\x20\x6f\x6f\x4d\x20\x6e\x61" + "\x20\x79\x65\x53\x72\x63\x74\x65" + "\x20\x73\x6f\x54\x20\x6f\x61\x4d" + "\x79\x6e\x53\x20\x63\x65\x65\x72" + "\x73\x74\x54\x20\x6f\x6f\x4d\x20" + "\x6e\x61\x20\x79\x65\x53\x72\x63" + "\x74\x65\x20\x73\x6f\x54\x20\x6f" + "\x61\x4d\x79\x6e\x53\x20\x63\x65" + "\x65\x72\x73\x74\x54\x20\x6f\x6f" + "\x4d\x20\x6e\x61\x20\x79\x65\x53" + "\x72\x63\x74\x65\x20\x73\x6f\x54" + "\x20\x6f\x61\x4d\x79\x6e\x53\x20" + "\x63\x65\x65\x72\x73\x74\x54\x20" + "\x6f\x6f\x4d\x20\x6e\x61\x0a\x79", + .ilen = 128, + .result = "\x70\xd6\xde\x64\x87\x17\xf1\xe8" + "\x54\x31\x85\x37\xed\x6b\x01\x8d" + "\xe3\xcc\xe0\x1d\x5e\xf3\xfe\xf1" + "\x41\xaa\x33\x91\xa7\x7d\x99\x88" + "\x4d\x85\x6e\x2f\xa3\x69\xf5\x82" + "\x3a\x6f\x25\xcb\x7d\x58\x1f\x9b" + "\xaa\x9c\x11\xd5\x76\x67\xce\xde" + "\x56\xd7\x5a\x80\x69\xea\x3a\x02" + "\xf0\xc7\x7c\xe3\xcb\x40\xe5\x52" + "\xd1\x10\x92\x78\x0b\x8e\x5b\xf1" + "\xe3\x26\x1f\xe1\x15\x41\xc7\xba" + "\x99\xdb\x08\x51\x1c\xd3\x01\xf4" + "\x87\x47\x39\xb8\xd2\xdd\xbd\xfb" + "\x66\x13\xdf\x1c\x01\x44\xf0\x7a" + "\x1a\x6b\x13\xf5\xd5\x0b\xb8\xba" + "\x53\xba\xe1\x76\xe3\x82\x07\x86" + "\xc6\x2c\x73\x88\xb0\x9d\x5f\x3e" + "\x5b\x78\xca\x0e\xab\x8a\xa3\xbb" + "\xd9\x1d\xc3\xe3\x05\xac\x76\xfb" + "\x58\x83\xda\x67\xfb\x21\x24\xa2" + "\xb1\xa7\xd7\x66\xa6\x8d\xa6\x93" + "\x97\xe2\xe3\xb8\xaa\x48\x85\xee" + "\x8c\xf6\x07\x95\x1f\xa6\x6c\x96" + "\x99\xc7\x5c\x8d\xd8\xb5\x68\x7b", + .rlen = 128 + 64, + }, +}; + +#define HMAC_SHA1_DES3_EDE_CBC_ENC_TEST_VEC 1 + +static struct aead_testvec hmac_sha1_des3_ede_cbc_enc_tv_temp[] = { + { /*Generated with cryptopp*/ +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x18" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55" + "\xE9\xC0\xFF\x2E\x76\x0B\x64\x24" + "\x44\x4D\x99\x5A\x12\xD6\x40\xC0" + "\xEA\xC2\x84\xE8\x14\x95\xDB\xE8", + .klen = 8 + 20 + 24, + .iv = "\x7D\x33\x88\x93\x0F\x93\xB2\x42", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01", + .alen = 8, + .input = 
"\x6f\x54\x20\x6f\x61\x4d\x79\x6e" + "\x53\x20\x63\x65\x65\x72\x73\x74" + "\x54\x20\x6f\x6f\x4d\x20\x6e\x61" + "\x20\x79\x65\x53\x72\x63\x74\x65" + "\x20\x73\x6f\x54\x20\x6f\x61\x4d" + "\x79\x6e\x53\x20\x63\x65\x65\x72" + "\x73\x74\x54\x20\x6f\x6f\x4d\x20" + "\x6e\x61\x20\x79\x65\x53\x72\x63" + "\x74\x65\x20\x73\x6f\x54\x20\x6f" + "\x61\x4d\x79\x6e\x53\x20\x63\x65" + "\x65\x72\x73\x74\x54\x20\x6f\x6f" + "\x4d\x20\x6e\x61\x20\x79\x65\x53" + "\x72\x63\x74\x65\x20\x73\x6f\x54" + "\x20\x6f\x61\x4d\x79\x6e\x53\x20" + "\x63\x65\x65\x72\x73\x74\x54\x20" + "\x6f\x6f\x4d\x20\x6e\x61\x0a\x79", + .ilen = 128, + .result = "\x0e\x2d\xb6\x97\x3c\x56\x33\xf4" + "\x67\x17\x21\xc7\x6e\x8a\xd5\x49" + "\x74\xb3\x49\x05\xc5\x1c\xd0\xed" + "\x12\x56\x5c\x53\x96\xb6\x00\x7d" + "\x90\x48\xfc\xf5\x8d\x29\x39\xcc" + "\x8a\xd5\x35\x18\x36\x23\x4e\xd7" + "\x76\xd1\xda\x0c\x94\x67\xbb\x04" + "\x8b\xf2\x03\x6c\xa8\xcf\xb6\xea" + "\x22\x64\x47\xaa\x8f\x75\x13\xbf" + "\x9f\xc2\xc3\xf0\xc9\x56\xc5\x7a" + "\x71\x63\x2e\x89\x7b\x1e\x12\xca" + "\xe2\x5f\xaf\xd8\xa4\xf8\xc9\x7a" + "\xd6\xf9\x21\x31\x62\x44\x45\xa6" + "\xd6\xbc\x5a\xd3\x2d\x54\x43\xcc" + "\x9d\xde\xa5\x70\xe9\x42\x45\x8a" + "\x6b\xfa\xb1\x91\x13\xb0\xd9\x19" + "\x67\x6d\xb1\xf5\xb8\x10\xdc\xc6" + "\x75\x86\x96\x6b\xb1\xc5\xe4\xcf" + "\xd1\x60\x91\xb3", + .rlen = 128 + 20, + }, +}; + +#define HMAC_SHA224_DES3_EDE_CBC_ENC_TEST_VEC 1 + +static struct aead_testvec hmac_sha224_des3_ede_cbc_enc_tv_temp[] = { + { /*Generated with cryptopp*/ +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x18" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xE9\xC0\xFF\x2E\x76\x0B\x64\x24" + "\x44\x4D\x99\x5A\x12\xD6\x40\xC0" + "\xEA\xC2\x84\xE8\x14\x95\xDB\xE8", + .klen = 8 + 24 + 24, + .iv = "\x7D\x33\x88\x93\x0F\x93\xB2\x42", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01", + .alen = 8, + .input = "\x6f\x54\x20\x6f\x61\x4d\x79\x6e" + "\x53\x20\x63\x65\x65\x72\x73\x74" + "\x54\x20\x6f\x6f\x4d\x20\x6e\x61" + "\x20\x79\x65\x53\x72\x63\x74\x65" + "\x20\x73\x6f\x54\x20\x6f\x61\x4d" + "\x79\x6e\x53\x20\x63\x65\x65\x72" + "\x73\x74\x54\x20\x6f\x6f\x4d\x20" + "\x6e\x61\x20\x79\x65\x53\x72\x63" + "\x74\x65\x20\x73\x6f\x54\x20\x6f" + "\x61\x4d\x79\x6e\x53\x20\x63\x65" + "\x65\x72\x73\x74\x54\x20\x6f\x6f" + "\x4d\x20\x6e\x61\x20\x79\x65\x53" + "\x72\x63\x74\x65\x20\x73\x6f\x54" + "\x20\x6f\x61\x4d\x79\x6e\x53\x20" + "\x63\x65\x65\x72\x73\x74\x54\x20" + "\x6f\x6f\x4d\x20\x6e\x61\x0a\x79", + .ilen = 128, + .result = "\x0e\x2d\xb6\x97\x3c\x56\x33\xf4" + "\x67\x17\x21\xc7\x6e\x8a\xd5\x49" + "\x74\xb3\x49\x05\xc5\x1c\xd0\xed" + "\x12\x56\x5c\x53\x96\xb6\x00\x7d" + "\x90\x48\xfc\xf5\x8d\x29\x39\xcc" + "\x8a\xd5\x35\x18\x36\x23\x4e\xd7" + "\x76\xd1\xda\x0c\x94\x67\xbb\x04" + "\x8b\xf2\x03\x6c\xa8\xcf\xb6\xea" + "\x22\x64\x47\xaa\x8f\x75\x13\xbf" + "\x9f\xc2\xc3\xf0\xc9\x56\xc5\x7a" + "\x71\x63\x2e\x89\x7b\x1e\x12\xca" + "\xe2\x5f\xaf\xd8\xa4\xf8\xc9\x7a" + "\xd6\xf9\x21\x31\x62\x44\x45\xa6" + "\xd6\xbc\x5a\xd3\x2d\x54\x43\xcc" + "\x9d\xde\xa5\x70\xe9\x42\x45\x8a" + "\x6b\xfa\xb1\x91\x13\xb0\xd9\x19" + "\x15\x24\x7f\x5a\x45\x4a\x66\xce" + "\x2b\x0b\x93\x99\x2f\x9d\x0c\x6c" + "\x56\x1f\xe1\xa6\x41\xb2\x4c\xd0", + .rlen = 128 + 24, + }, +}; + +#define HMAC_SHA256_DES3_EDE_CBC_ENC_TEST_VEC 1 + +static struct aead_testvec hmac_sha256_des3_ede_cbc_enc_tv_temp[] = { + { /*Generated with 
cryptopp*/ +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x18" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\xE9\xC0\xFF\x2E\x76\x0B\x64\x24" + "\x44\x4D\x99\x5A\x12\xD6\x40\xC0" + "\xEA\xC2\x84\xE8\x14\x95\xDB\xE8", + .klen = 8 + 32 + 24, + .iv = "\x7D\x33\x88\x93\x0F\x93\xB2\x42", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01", + .alen = 8, + .input = "\x6f\x54\x20\x6f\x61\x4d\x79\x6e" + "\x53\x20\x63\x65\x65\x72\x73\x74" + "\x54\x20\x6f\x6f\x4d\x20\x6e\x61" + "\x20\x79\x65\x53\x72\x63\x74\x65" + "\x20\x73\x6f\x54\x20\x6f\x61\x4d" + "\x79\x6e\x53\x20\x63\x65\x65\x72" + "\x73\x74\x54\x20\x6f\x6f\x4d\x20" + "\x6e\x61\x20\x79\x65\x53\x72\x63" + "\x74\x65\x20\x73\x6f\x54\x20\x6f" + "\x61\x4d\x79\x6e\x53\x20\x63\x65" + "\x65\x72\x73\x74\x54\x20\x6f\x6f" + "\x4d\x20\x6e\x61\x20\x79\x65\x53" + "\x72\x63\x74\x65\x20\x73\x6f\x54" + "\x20\x6f\x61\x4d\x79\x6e\x53\x20" + "\x63\x65\x65\x72\x73\x74\x54\x20" + "\x6f\x6f\x4d\x20\x6e\x61\x0a\x79", + .ilen = 128, + .result = "\x0e\x2d\xb6\x97\x3c\x56\x33\xf4" + "\x67\x17\x21\xc7\x6e\x8a\xd5\x49" + "\x74\xb3\x49\x05\xc5\x1c\xd0\xed" + "\x12\x56\x5c\x53\x96\xb6\x00\x7d" + "\x90\x48\xfc\xf5\x8d\x29\x39\xcc" + "\x8a\xd5\x35\x18\x36\x23\x4e\xd7" + "\x76\xd1\xda\x0c\x94\x67\xbb\x04" + "\x8b\xf2\x03\x6c\xa8\xcf\xb6\xea" + "\x22\x64\x47\xaa\x8f\x75\x13\xbf" + "\x9f\xc2\xc3\xf0\xc9\x56\xc5\x7a" + "\x71\x63\x2e\x89\x7b\x1e\x12\xca" + "\xe2\x5f\xaf\xd8\xa4\xf8\xc9\x7a" + "\xd6\xf9\x21\x31\x62\x44\x45\xa6" + "\xd6\xbc\x5a\xd3\x2d\x54\x43\xcc" + "\x9d\xde\xa5\x70\xe9\x42\x45\x8a" + "\x6b\xfa\xb1\x91\x13\xb0\xd9\x19" + "\x73\xb0\xea\x9f\xe8\x18\x80\xd6" + "\x56\x38\x44\xc0\xdb\xe3\x4f\x71" + "\xf7\xce\xd1\xd3\xf8\xbd\x3e\x4f" + "\xca\x43\x95\xdf\x80\x61\x81\xa9", + .rlen = 128 + 32, + }, +}; + +#define HMAC_SHA384_DES3_EDE_CBC_ENC_TEST_VEC 1 + +static struct aead_testvec hmac_sha384_des3_ede_cbc_enc_tv_temp[] = { + { /*Generated with cryptopp*/ +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x18" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x33\x44\x55\x66\x77\x88\x99\xaa" + "\xbb\xcc\xdd\xee\xff\x11\x22\x33" + "\xE9\xC0\xFF\x2E\x76\x0B\x64\x24" + "\x44\x4D\x99\x5A\x12\xD6\x40\xC0" + "\xEA\xC2\x84\xE8\x14\x95\xDB\xE8", + .klen = 8 + 48 + 24, + .iv = "\x7D\x33\x88\x93\x0F\x93\xB2\x42", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01", + .alen = 8, + .input = "\x6f\x54\x20\x6f\x61\x4d\x79\x6e" + "\x53\x20\x63\x65\x65\x72\x73\x74" + "\x54\x20\x6f\x6f\x4d\x20\x6e\x61" + "\x20\x79\x65\x53\x72\x63\x74\x65" + "\x20\x73\x6f\x54\x20\x6f\x61\x4d" + "\x79\x6e\x53\x20\x63\x65\x65\x72" + "\x73\x74\x54\x20\x6f\x6f\x4d\x20" + "\x6e\x61\x20\x79\x65\x53\x72\x63" + "\x74\x65\x20\x73\x6f\x54\x20\x6f" + "\x61\x4d\x79\x6e\x53\x20\x63\x65" + "\x65\x72\x73\x74\x54\x20\x6f\x6f" + "\x4d\x20\x6e\x61\x20\x79\x65\x53" + "\x72\x63\x74\x65\x20\x73\x6f\x54" + "\x20\x6f\x61\x4d\x79\x6e\x53\x20" + "\x63\x65\x65\x72\x73\x74\x54\x20" + "\x6f\x6f\x4d\x20\x6e\x61\x0a\x79", + .ilen = 128, + .result = "\x0e\x2d\xb6\x97\x3c\x56\x33\xf4" + "\x67\x17\x21\xc7\x6e\x8a\xd5\x49" + 
"\x74\xb3\x49\x05\xc5\x1c\xd0\xed" + "\x12\x56\x5c\x53\x96\xb6\x00\x7d" + "\x90\x48\xfc\xf5\x8d\x29\x39\xcc" + "\x8a\xd5\x35\x18\x36\x23\x4e\xd7" + "\x76\xd1\xda\x0c\x94\x67\xbb\x04" + "\x8b\xf2\x03\x6c\xa8\xcf\xb6\xea" + "\x22\x64\x47\xaa\x8f\x75\x13\xbf" + "\x9f\xc2\xc3\xf0\xc9\x56\xc5\x7a" + "\x71\x63\x2e\x89\x7b\x1e\x12\xca" + "\xe2\x5f\xaf\xd8\xa4\xf8\xc9\x7a" + "\xd6\xf9\x21\x31\x62\x44\x45\xa6" + "\xd6\xbc\x5a\xd3\x2d\x54\x43\xcc" + "\x9d\xde\xa5\x70\xe9\x42\x45\x8a" + "\x6b\xfa\xb1\x91\x13\xb0\xd9\x19" + "\x6d\x77\xfc\x80\x9d\x8a\x9c\xb7" + "\x70\xe7\x93\xbf\x73\xe6\x9f\x83" + "\x99\x62\x23\xe6\x5b\xd0\xda\x18" + "\xa4\x32\x8a\x0b\x46\xd7\xf0\x39" + "\x36\x5d\x13\x2f\x86\x10\x78\xd6" + "\xd6\xbe\x5c\xb9\x15\x89\xf9\x1b", + .rlen = 128 + 48, + }, +}; + +#define HMAC_SHA512_DES3_EDE_CBC_ENC_TEST_VEC 1 + +static struct aead_testvec hmac_sha512_des3_ede_cbc_enc_tv_temp[] = { + { /*Generated with cryptopp*/ +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x18" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x33\x44\x55\x66\x77\x88\x99\xaa" + "\xbb\xcc\xdd\xee\xff\x11\x22\x33" + "\x44\x55\x66\x77\x88\x99\xaa\xbb" + "\xcc\xdd\xee\xff\x11\x22\x33\x44" + "\xE9\xC0\xFF\x2E\x76\x0B\x64\x24" + "\x44\x4D\x99\x5A\x12\xD6\x40\xC0" + "\xEA\xC2\x84\xE8\x14\x95\xDB\xE8", + .klen = 8 + 64 + 24, + .iv = "\x7D\x33\x88\x93\x0F\x93\xB2\x42", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01", + .alen = 8, + .input = "\x6f\x54\x20\x6f\x61\x4d\x79\x6e" + "\x53\x20\x63\x65\x65\x72\x73\x74" + "\x54\x20\x6f\x6f\x4d\x20\x6e\x61" + "\x20\x79\x65\x53\x72\x63\x74\x65" + "\x20\x73\x6f\x54\x20\x6f\x61\x4d" + "\x79\x6e\x53\x20\x63\x65\x65\x72" + "\x73\x74\x54\x20\x6f\x6f\x4d\x20" + "\x6e\x61\x20\x79\x65\x53\x72\x63" + "\x74\x65\x20\x73\x6f\x54\x20\x6f" + "\x61\x4d\x79\x6e\x53\x20\x63\x65" + "\x65\x72\x73\x74\x54\x20\x6f\x6f" + "\x4d\x20\x6e\x61\x20\x79\x65\x53" + "\x72\x63\x74\x65\x20\x73\x6f\x54" + "\x20\x6f\x61\x4d\x79\x6e\x53\x20" + "\x63\x65\x65\x72\x73\x74\x54\x20" + "\x6f\x6f\x4d\x20\x6e\x61\x0a\x79", + .ilen = 128, + .result = "\x0e\x2d\xb6\x97\x3c\x56\x33\xf4" + "\x67\x17\x21\xc7\x6e\x8a\xd5\x49" + "\x74\xb3\x49\x05\xc5\x1c\xd0\xed" + "\x12\x56\x5c\x53\x96\xb6\x00\x7d" + "\x90\x48\xfc\xf5\x8d\x29\x39\xcc" + "\x8a\xd5\x35\x18\x36\x23\x4e\xd7" + "\x76\xd1\xda\x0c\x94\x67\xbb\x04" + "\x8b\xf2\x03\x6c\xa8\xcf\xb6\xea" + "\x22\x64\x47\xaa\x8f\x75\x13\xbf" + "\x9f\xc2\xc3\xf0\xc9\x56\xc5\x7a" + "\x71\x63\x2e\x89\x7b\x1e\x12\xca" + "\xe2\x5f\xaf\xd8\xa4\xf8\xc9\x7a" + "\xd6\xf9\x21\x31\x62\x44\x45\xa6" + "\xd6\xbc\x5a\xd3\x2d\x54\x43\xcc" + "\x9d\xde\xa5\x70\xe9\x42\x45\x8a" + "\x6b\xfa\xb1\x91\x13\xb0\xd9\x19" + "\x41\xb5\x1f\xbb\xbd\x4e\xb8\x32" + "\x22\x86\x4e\x57\x1b\x2a\xd8\x6e" + "\xa9\xfb\xc8\xf3\xbf\x2d\xae\x2b" + "\x3b\xbc\x41\xe8\x38\xbb\xf1\x60" + "\x4c\x68\xa9\x4e\x8c\x73\xa7\xc0" + "\x2a\x74\xd4\x65\x12\xcb\x55\xf2" + "\xd5\x02\x6d\xe6\xaf\xc9\x2f\xf2" + "\x57\xaa\x85\xf7\xf3\x6a\xcb\xdb", + .rlen = 128 + 64, + }, +}; + static struct cipher_testvec aes_lrw_enc_tv_template[] = { /* from http://grouper.ieee.org/groups/1619/email/pdf00017.pdf */ { /* LRW-32-AES 1 */ @@ -18746,7 +20125,29 @@ static struct aead_testvec aes_ccm_enc_tv_template[] = { "\x7c\xf9\xbe\xc2\x40\x88\x97\xc6" "\xba", .rlen = 33, - }, + }, { + /* + * This is the same vector as 
aes_ccm_rfc4309_enc_tv_template[0] + * below but rewritten to use the ccm algorithm directly. + */ + .key = "\x83\xac\x54\x66\xc2\xeb\xe5\x05" + "\x2e\x01\xd1\xfc\x5d\x82\x66\x2e", + .klen = 16, + .iv = "\x03\x96\xac\x59\x30\x07\xa1\xe2\xa2\xc7\x55\x24\0\0\0\0", + .alen = 0, + .input = "\x19\xc8\x81\xf6\xe9\x86\xff\x93" + "\x0b\x78\x67\xe5\xbb\xb7\xfc\x6e" + "\x83\x77\xb3\xa6\x0c\x8c\x9f\x9c" + "\x35\x2e\xad\xe0\x62\xf9\x91\xa1", + .ilen = 32, + .result = "\xab\x6f\xe1\x69\x1d\x19\x99\xa8" + "\x92\xa0\xc4\x6f\x7e\xe2\x8b\xb1" + "\x70\xbb\x8c\xa6\x4c\x6e\x97\x8a" + "\x57\x2b\xbe\x5d\x98\xa6\xb1\x32" + "\xda\x24\xea\xd9\xa1\x39\x98\xfd" + "\xa4\xbe\xd9\xf2\x1a\x6d\x22\xa8", + .rlen = 48, + } }; static struct aead_testvec aes_ccm_dec_tv_template[] = { diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c index ea83828bfea9..18d97d5c7d90 100644 --- a/drivers/ata/libata-core.c +++ b/drivers/ata/libata-core.c @@ -4224,10 +4224,10 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = { { "PIONEER DVD-RW DVR-216D", NULL, ATA_HORKAGE_NOSETXFER }, /* devices that don't properly handle queued TRIM commands */ - { "Micron_M500*", "MU0[1-4]*", ATA_HORKAGE_NO_NCQ_TRIM, }, - { "Crucial_CT???M500SSD*", "MU0[1-4]*", ATA_HORKAGE_NO_NCQ_TRIM, }, - { "Micron_M550*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, - { "Crucial_CT???M550SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, + { "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, + { "Crucial_CT???M500SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, + { "Micron_M550*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, + { "Crucial_CT???M550SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, /* * Some WD SATA-I drives spin up and down erratically when the link diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index 56a027d6115e..fb31b8ee4372 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -243,14 +243,11 @@ static int nbd_send_req(struct nbd_device *nbd, struct request *req) struct nbd_request request; unsigned long size = blk_rq_bytes(req); + memset(&request, 0, sizeof(request)); request.magic = htonl(NBD_REQUEST_MAGIC); request.type = htonl(nbd_cmd(req)); - if (nbd_cmd(req) == NBD_CMD_FLUSH) { - /* Other values are reserved for FLUSH requests. 
*/ - request.from = 0; - request.len = 0; - } else { + if (nbd_cmd(req) != NBD_CMD_FLUSH && nbd_cmd(req) != NBD_CMD_DISC) { request.from = cpu_to_be64((u64)blk_rq_pos(req) << 9); request.len = htonl(size); } diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c index 49ac5662585b..2a44767891f5 100644 --- a/drivers/cdrom/cdrom.c +++ b/drivers/cdrom/cdrom.c @@ -3470,7 +3470,7 @@ static int cdrom_print_info(const char *header, int val, char *info, return 0; } -static int cdrom_sysctl_info(ctl_table *ctl, int write, +static int cdrom_sysctl_info(struct ctl_table *ctl, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { int pos; @@ -3583,7 +3583,7 @@ static void cdrom_update_settings(void) mutex_unlock(&cdrom_mutex); } -static int cdrom_sysctl_handler(ctl_table *ctl, int write, +static int cdrom_sysctl_handler(struct ctl_table *ctl, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { int ret; @@ -3609,7 +3609,7 @@ static int cdrom_sysctl_handler(ctl_table *ctl, int write, } /* Place files in /proc/sys/dev/cdrom */ -static ctl_table cdrom_table[] = { +static struct ctl_table cdrom_table[] = { { .procname = "info", .data = &cdrom_sysctl_settings.info, @@ -3655,7 +3655,7 @@ static ctl_table cdrom_table[] = { { } }; -static ctl_table cdrom_cdrom_table[] = { +static struct ctl_table cdrom_cdrom_table[] = { { .procname = "cdrom", .maxlen = 0, @@ -3666,7 +3666,7 @@ static ctl_table cdrom_cdrom_table[] = { }; /* Make sure that /proc/sys/dev is there */ -static ctl_table cdrom_root_table[] = { +static struct ctl_table cdrom_root_table[] = { { .procname = "dev", .maxlen = 0, diff --git a/drivers/char/hw_random/Kconfig b/drivers/char/hw_random/Kconfig index 244759bbd7b7..836b061ced35 100644 --- a/drivers/char/hw_random/Kconfig +++ b/drivers/char/hw_random/Kconfig @@ -2,7 +2,7 @@ # Hardware Random Number Generator (RNG) configuration # -config HW_RANDOM +menuconfig HW_RANDOM tristate "Hardware Random Number Generator Core support" default m ---help--- @@ -20,9 +20,11 @@ config HW_RANDOM If unsure, say Y. +if HW_RANDOM + config HW_RANDOM_TIMERIOMEM tristate "Timer IOMEM HW Random Number Generator support" - depends on HW_RANDOM && HAS_IOMEM + depends on HAS_IOMEM ---help--- This driver provides kernel-side support for a generic Random Number Generator used by reading a 'dumb' iomem address that @@ -36,7 +38,7 @@ config HW_RANDOM_TIMERIOMEM config HW_RANDOM_INTEL tristate "Intel HW Random Number Generator support" - depends on HW_RANDOM && (X86 || IA64) && PCI + depends on (X86 || IA64) && PCI default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -49,7 +51,7 @@ config HW_RANDOM_INTEL config HW_RANDOM_AMD tristate "AMD HW Random Number Generator support" - depends on HW_RANDOM && (X86 || PPC_MAPLE) && PCI + depends on (X86 || PPC_MAPLE) && PCI default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -62,8 +64,8 @@ config HW_RANDOM_AMD config HW_RANDOM_ATMEL tristate "Atmel Random Number Generator support" - depends on HW_RANDOM && HAVE_CLK - default (HW_RANDOM && ARCH_AT91) + depends on ARCH_AT91 && HAVE_CLK + default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number Generator hardware found on Atmel AT91 devices. 
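An aside on the nbd_send_req() hunk a few files back: the request header is now zeroed up front with memset(), and request.from / request.len are only filled in for commands that actually address data, so NBD_CMD_FLUSH and NBD_CMD_DISC now go out with those fields cleared instead of being special-cased. As a rough, userspace-flavoured sketch of the same on-the-wire layout (illustrative only — the struct below mirrors the UAPI struct nbd_request, and htobe32()/htobe64() stand in for the kernel's byte-order helpers), building a read request could look like this:

	#include <endian.h>
	#include <stdint.h>
	#include <string.h>

	#define NBD_REQUEST_MAGIC	0x25609513	/* NBD protocol request magic */
	#define NBD_CMD_READ		0

	struct nbd_wire_request {		/* mirrors struct nbd_request */
		uint32_t magic;
		uint32_t type;
		char     handle[8];		/* request cookie, left zero here */
		uint64_t from;
		uint32_t len;
	} __attribute__((packed));

	static void build_read_request(struct nbd_wire_request *req,
				       uint64_t offset, uint32_t length)
	{
		memset(req, 0, sizeof(*req));		/* as the new code does */
		req->magic = htobe32(NBD_REQUEST_MAGIC);
		req->type  = htobe32(NBD_CMD_READ);
		req->from  = htobe64(offset);		/* skipped for FLUSH/DISC */
		req->len   = htobe32(length);
	}

For a flush, only magic and type would be set; the zeroed header carries the rest, which is exactly what the new memset() guarantees.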
@@ -75,7 +77,7 @@ config HW_RANDOM_ATMEL config HW_RANDOM_BCM63XX tristate "Broadcom BCM63xx Random Number Generator support" - depends on HW_RANDOM && BCM63XX + depends on BCM63XX default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -88,7 +90,7 @@ config HW_RANDOM_BCM63XX config HW_RANDOM_BCM2835 tristate "Broadcom BCM2835 Random Number Generator support" - depends on HW_RANDOM && ARCH_BCM2835 + depends on ARCH_BCM2835 default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -101,7 +103,7 @@ config HW_RANDOM_BCM2835 config HW_RANDOM_GEODE tristate "AMD Geode HW Random Number Generator support" - depends on HW_RANDOM && X86_32 && PCI + depends on X86_32 && PCI default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -114,7 +116,7 @@ config HW_RANDOM_GEODE config HW_RANDOM_N2RNG tristate "Niagara2 Random Number Generator support" - depends on HW_RANDOM && SPARC64 + depends on SPARC64 default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -127,7 +129,7 @@ config HW_RANDOM_N2RNG config HW_RANDOM_VIA tristate "VIA HW Random Number Generator support" - depends on HW_RANDOM && X86 + depends on X86 default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -140,7 +142,7 @@ config HW_RANDOM_VIA config HW_RANDOM_IXP4XX tristate "Intel IXP4xx NPU HW Pseudo-Random Number Generator support" - depends on HW_RANDOM && ARCH_IXP4XX + depends on ARCH_IXP4XX default HW_RANDOM ---help--- This driver provides kernel-side support for the Pseudo-Random @@ -153,7 +155,7 @@ config HW_RANDOM_IXP4XX config HW_RANDOM_OMAP tristate "OMAP Random Number Generator support" - depends on HW_RANDOM && (ARCH_OMAP16XX || ARCH_OMAP2PLUS) + depends on ARCH_OMAP16XX || ARCH_OMAP2PLUS default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -167,7 +169,7 @@ config HW_RANDOM_OMAP config HW_RANDOM_OMAP3_ROM tristate "OMAP3 ROM Random Number Generator support" - depends on HW_RANDOM && ARCH_OMAP3 + depends on ARCH_OMAP3 default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -180,7 +182,7 @@ config HW_RANDOM_OMAP3_ROM config HW_RANDOM_OCTEON tristate "Octeon Random Number Generator support" - depends on HW_RANDOM && CAVIUM_OCTEON_SOC + depends on CAVIUM_OCTEON_SOC default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -193,7 +195,7 @@ config HW_RANDOM_OCTEON config HW_RANDOM_PASEMI tristate "PA Semi HW Random Number Generator support" - depends on HW_RANDOM && PPC_PASEMI + depends on PPC_PASEMI default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -206,7 +208,7 @@ config HW_RANDOM_PASEMI config HW_RANDOM_VIRTIO tristate "VirtIO Random Number Generator support" - depends on HW_RANDOM && VIRTIO + depends on VIRTIO ---help--- This driver provides kernel-side support for the virtual Random Number Generator hardware. 
@@ -216,7 +218,7 @@ config HW_RANDOM_VIRTIO config HW_RANDOM_TX4939 tristate "TX4939 Random Number Generator support" - depends on HW_RANDOM && SOC_TX4939 + depends on SOC_TX4939 default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -229,7 +231,8 @@ config HW_RANDOM_TX4939 config HW_RANDOM_MXC_RNGA tristate "Freescale i.MX RNGA Random Number Generator" - depends on HW_RANDOM && ARCH_HAS_RNGA + depends on ARCH_HAS_RNGA + default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number Generator hardware found on Freescale i.MX processors. @@ -241,7 +244,8 @@ config HW_RANDOM_MXC_RNGA config HW_RANDOM_NOMADIK tristate "ST-Ericsson Nomadik Random Number Generator support" - depends on HW_RANDOM && ARCH_NOMADIK + depends on ARCH_NOMADIK + default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number Generator hardware found on ST-Ericsson SoCs (8815 and 8500). @@ -251,21 +255,10 @@ config HW_RANDOM_NOMADIK If unsure, say Y. -config HW_RANDOM_PICOXCELL - tristate "Picochip picoXcell true random number generator support" - depends on HW_RANDOM && ARCH_PICOXCELL && PICOXCELL_PC3X3 - ---help--- - This driver provides kernel-side support for the Random Number - Generator hardware found on Picochip PC3x3 and later devices. - - To compile this driver as a module, choose M here: the - module will be called picoxcell-rng. - - If unsure, say Y. - config HW_RANDOM_PPC4XX tristate "PowerPC 4xx generic true random number generator support" - depends on HW_RANDOM && PPC && 4xx + depends on PPC && 4xx + default HW_RANDOM ---help--- This driver provides the kernel-side support for the TRNG hardware found in the security function of some PowerPC 4xx SoCs. @@ -275,24 +268,9 @@ config HW_RANDOM_PPC4XX If unsure, say N. -config UML_RANDOM - depends on UML - tristate "Hardware random number generator" - help - This option enables UML's "hardware" random number generator. It - attaches itself to the host's /dev/random, supplying as much entropy - as the host has, rather than the small amount the UML gets from its - own drivers. It registers itself as a standard hardware random number - generator, major 10, minor 183, and the canonical device name is - /dev/hwrng. - The way to make use of this is to install the rng-tools package - (check your distro, or download from - http://sourceforge.net/projects/gkernel/). rngd periodically reads - /dev/hwrng and injects the entropy into /dev/random. - config HW_RANDOM_PSERIES tristate "pSeries HW Random Number Generator support" - depends on HW_RANDOM && PPC64 && IBMVIO + depends on PPC64 && IBMVIO default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -305,7 +283,7 @@ config HW_RANDOM_PSERIES config HW_RANDOM_POWERNV tristate "PowerNV Random Number Generator support" - depends on HW_RANDOM && PPC_POWERNV + depends on PPC_POWERNV default HW_RANDOM ---help--- This is the driver for Random Number Generator hardware found @@ -318,7 +296,8 @@ config HW_RANDOM_POWERNV config HW_RANDOM_EXYNOS tristate "EXYNOS HW random number generator support" - depends on HW_RANDOM && HAS_IOMEM && HAVE_CLK + depends on ARCH_EXYNOS + default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number Generator hardware found on EXYNOS SOCs. 
@@ -330,7 +309,7 @@ config HW_RANDOM_EXYNOS config HW_RANDOM_TPM tristate "TPM HW Random Number Generator support" - depends on HW_RANDOM && TCG_TPM + depends on TCG_TPM default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number @@ -344,6 +323,7 @@ config HW_RANDOM_TPM config HW_RANDOM_MSM tristate "Qualcomm SoCs Random Number Generator support" depends on HW_RANDOM && ARCH_QCOM + default HW_RANDOM ---help--- This driver provides kernel-side support for the Random Number Generator hardware found on Qualcomm SoCs. @@ -352,3 +332,20 @@ config HW_RANDOM_MSM module will be called msm-rng. If unsure, say Y. + +endif # HW_RANDOM + +config UML_RANDOM + depends on UML + tristate "Hardware random number generator" + help + This option enables UML's "hardware" random number generator. It + attaches itself to the host's /dev/random, supplying as much entropy + as the host has, rather than the small amount the UML gets from its + own drivers. It registers itself as a standard hardware random number + generator, major 10, minor 183, and the canonical device name is + /dev/hwrng. + The way to make use of this is to install the rng-tools package + (check your distro, or download from + http://sourceforge.net/projects/gkernel/). rngd periodically reads + /dev/hwrng and injects the entropy into /dev/random. diff --git a/drivers/char/hw_random/Makefile b/drivers/char/hw_random/Makefile index 3ae7755a52e7..199ed283e149 100644 --- a/drivers/char/hw_random/Makefile +++ b/drivers/char/hw_random/Makefile @@ -22,7 +22,6 @@ obj-$(CONFIG_HW_RANDOM_TX4939) += tx4939-rng.o obj-$(CONFIG_HW_RANDOM_MXC_RNGA) += mxc-rnga.o obj-$(CONFIG_HW_RANDOM_OCTEON) += octeon-rng.o obj-$(CONFIG_HW_RANDOM_NOMADIK) += nomadik-rng.o -obj-$(CONFIG_HW_RANDOM_PICOXCELL) += picoxcell-rng.o obj-$(CONFIG_HW_RANDOM_PPC4XX) += ppc4xx-rng.o obj-$(CONFIG_HW_RANDOM_PSERIES) += pseries-rng.o obj-$(CONFIG_HW_RANDOM_POWERNV) += powernv-rng.o diff --git a/drivers/char/hw_random/n2-drv.c b/drivers/char/hw_random/n2-drv.c index 432232eefe05..292a5889f675 100644 --- a/drivers/char/hw_random/n2-drv.c +++ b/drivers/char/hw_random/n2-drv.c @@ -632,7 +632,7 @@ static int n2rng_probe(struct platform_device *op) multi_capable = (match->data != NULL); n2rng_driver_version(); - np = kzalloc(sizeof(*np), GFP_KERNEL); + np = devm_kzalloc(&op->dev, sizeof(*np), GFP_KERNEL); if (!np) goto out; np->op = op; @@ -653,7 +653,7 @@ static int n2rng_probe(struct platform_device *op) &np->hvapi_minor)) { dev_err(&op->dev, "Cannot register suitable " "HVAPI version.\n"); - goto out_free; + goto out; } } @@ -676,15 +676,16 @@ static int n2rng_probe(struct platform_device *op) dev_info(&op->dev, "Registered RNG HVAPI major %lu minor %lu\n", np->hvapi_major, np->hvapi_minor); - np->units = kzalloc(sizeof(struct n2rng_unit) * np->num_units, - GFP_KERNEL); + np->units = devm_kzalloc(&op->dev, + sizeof(struct n2rng_unit) * np->num_units, + GFP_KERNEL); err = -ENOMEM; if (!np->units) goto out_hvapi_unregister; err = n2rng_init_control(np); if (err) - goto out_free_units; + goto out_hvapi_unregister; dev_info(&op->dev, "Found %s RNG, units: %d\n", ((np->flags & N2RNG_FLAG_MULTI) ? 
@@ -697,7 +698,7 @@ static int n2rng_probe(struct platform_device *op) err = hwrng_register(&np->hwrng); if (err) - goto out_free_units; + goto out_hvapi_unregister; platform_set_drvdata(op, np); @@ -705,15 +706,9 @@ static int n2rng_probe(struct platform_device *op) return 0; -out_free_units: - kfree(np->units); - np->units = NULL; - out_hvapi_unregister: sun4v_hvapi_unregister(HV_GRP_RNG); -out_free: - kfree(np); out: return err; } @@ -730,11 +725,6 @@ static int n2rng_remove(struct platform_device *op) sun4v_hvapi_unregister(HV_GRP_RNG); - kfree(np->units); - np->units = NULL; - - kfree(np); - return 0; } diff --git a/drivers/char/hw_random/omap-rng.c b/drivers/char/hw_random/omap-rng.c index 9b89ff4881de..f66ea258382f 100644 --- a/drivers/char/hw_random/omap-rng.c +++ b/drivers/char/hw_random/omap-rng.c @@ -369,10 +369,8 @@ static int omap_rng_probe(struct platform_device *pdev) int ret; priv = devm_kzalloc(dev, sizeof(struct omap_rng_dev), GFP_KERNEL); - if (!priv) { - dev_err(&pdev->dev, "could not allocate memory\n"); + if (!priv) return -ENOMEM; - }; omap_rng_ops.priv = (unsigned long)priv; platform_set_drvdata(pdev, priv); diff --git a/drivers/char/hw_random/picoxcell-rng.c b/drivers/char/hw_random/picoxcell-rng.c deleted file mode 100644 index eab5448ad56f..000000000000 --- a/drivers/char/hw_random/picoxcell-rng.c +++ /dev/null @@ -1,181 +0,0 @@ -/* - * Copyright (c) 2010-2011 Picochip Ltd., Jamie Iles - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * All enquiries to support@picochip.com - */ -#include <linux/clk.h> -#include <linux/delay.h> -#include <linux/err.h> -#include <linux/hw_random.h> -#include <linux/io.h> -#include <linux/kernel.h> -#include <linux/module.h> -#include <linux/platform_device.h> - -#define DATA_REG_OFFSET 0x0200 -#define CSR_REG_OFFSET 0x0278 -#define CSR_OUT_EMPTY_MASK (1 << 24) -#define CSR_FAULT_MASK (1 << 1) -#define TRNG_BLOCK_RESET_MASK (1 << 0) -#define TAI_REG_OFFSET 0x0380 - -/* - * The maximum amount of time in microseconds to spend waiting for data if the - * core wants us to wait. The TRNG should generate 32 bits every 320ns so a - * timeout of 20us seems reasonable. The TRNG does builtin tests of the data - * for randomness so we can't always assume there is data present. - */ -#define PICO_TRNG_TIMEOUT 20 - -static void __iomem *rng_base; -static struct clk *rng_clk; -static struct device *rng_dev; - -static inline u32 picoxcell_trng_read_csr(void) -{ - return __raw_readl(rng_base + CSR_REG_OFFSET); -} - -static inline bool picoxcell_trng_is_empty(void) -{ - return picoxcell_trng_read_csr() & CSR_OUT_EMPTY_MASK; -} - -/* - * Take the random number generator out of reset and make sure the interrupts - * are masked. We shouldn't need to get large amounts of random bytes so just - * poll the status register. The hardware generates 32 bits every 320ns so we - * shouldn't have to wait long enough to warrant waiting for an IRQ. - */ -static void picoxcell_trng_start(void) -{ - __raw_writel(0, rng_base + TAI_REG_OFFSET); - __raw_writel(0, rng_base + CSR_REG_OFFSET); -} - -static void picoxcell_trng_reset(void) -{ - __raw_writel(TRNG_BLOCK_RESET_MASK, rng_base + CSR_REG_OFFSET); - __raw_writel(TRNG_BLOCK_RESET_MASK, rng_base + TAI_REG_OFFSET); - picoxcell_trng_start(); -} - -/* - * Get some random data from the random number generator. The hw_random core - * layer provides us with locking. 
- */ -static int picoxcell_trng_read(struct hwrng *rng, void *buf, size_t max, - bool wait) -{ - int i; - - /* Wait for some data to become available. */ - for (i = 0; i < PICO_TRNG_TIMEOUT && picoxcell_trng_is_empty(); ++i) { - if (!wait) - return 0; - - udelay(1); - } - - if (picoxcell_trng_read_csr() & CSR_FAULT_MASK) { - dev_err(rng_dev, "fault detected, resetting TRNG\n"); - picoxcell_trng_reset(); - return -EIO; - } - - if (i == PICO_TRNG_TIMEOUT) - return 0; - - *(u32 *)buf = __raw_readl(rng_base + DATA_REG_OFFSET); - return sizeof(u32); -} - -static struct hwrng picoxcell_trng = { - .name = "picoxcell", - .read = picoxcell_trng_read, -}; - -static int picoxcell_trng_probe(struct platform_device *pdev) -{ - int ret; - struct resource *mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); - - rng_base = devm_ioremap_resource(&pdev->dev, mem); - if (IS_ERR(rng_base)) - return PTR_ERR(rng_base); - - rng_clk = devm_clk_get(&pdev->dev, NULL); - if (IS_ERR(rng_clk)) { - dev_warn(&pdev->dev, "no clk\n"); - return PTR_ERR(rng_clk); - } - - ret = clk_enable(rng_clk); - if (ret) { - dev_warn(&pdev->dev, "unable to enable clk\n"); - return ret; - } - - picoxcell_trng_start(); - ret = hwrng_register(&picoxcell_trng); - if (ret) - goto err_register; - - rng_dev = &pdev->dev; - dev_info(&pdev->dev, "pixoxcell random number generator active\n"); - - return 0; - -err_register: - clk_disable(rng_clk); - return ret; -} - -static int picoxcell_trng_remove(struct platform_device *pdev) -{ - hwrng_unregister(&picoxcell_trng); - clk_disable(rng_clk); - - return 0; -} - -#ifdef CONFIG_PM -static int picoxcell_trng_suspend(struct device *dev) -{ - clk_disable(rng_clk); - - return 0; -} - -static int picoxcell_trng_resume(struct device *dev) -{ - return clk_enable(rng_clk); -} - -static const struct dev_pm_ops picoxcell_trng_pm_ops = { - .suspend = picoxcell_trng_suspend, - .resume = picoxcell_trng_resume, -}; -#endif /* CONFIG_PM */ - -static struct platform_driver picoxcell_trng_driver = { - .probe = picoxcell_trng_probe, - .remove = picoxcell_trng_remove, - .driver = { - .name = "picoxcell-trng", - .owner = THIS_MODULE, -#ifdef CONFIG_PM - .pm = &picoxcell_trng_pm_ops, -#endif /* CONFIG_PM */ - }, -}; - -module_platform_driver(picoxcell_trng_driver); - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Jamie Iles"); -MODULE_DESCRIPTION("Picochip picoXcell TRNG driver"); diff --git a/drivers/char/hw_random/timeriomem-rng.c b/drivers/char/hw_random/timeriomem-rng.c index 439ff8b28c43..b6ab9ac3f34d 100644 --- a/drivers/char/hw_random/timeriomem-rng.c +++ b/drivers/char/hw_random/timeriomem-rng.c @@ -120,10 +120,8 @@ static int timeriomem_rng_probe(struct platform_device *pdev) /* Allocate memory for the device structure (and zero it) */ priv = devm_kzalloc(&pdev->dev, sizeof(struct timeriomem_rng_private_data), GFP_KERNEL); - if (!priv) { - dev_err(&pdev->dev, "failed to allocate device structure.\n"); + if (!priv) return -ENOMEM; - } platform_set_drvdata(pdev, priv); diff --git a/drivers/char/random.c b/drivers/char/random.c index 06cea7ff3a7c..4ad71ef2cd59 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1582,10 +1582,10 @@ static int proc_do_uuid(struct ctl_table *table, int write, /* * Return entropy available scaled to integral bits */ -static int proc_do_entropy(ctl_table *table, int write, +static int proc_do_entropy(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { - ctl_table fake_table; + struct ctl_table fake_table; int entropy_count; entropy_count = 
*(int *)table->data >> ENTROPY_SHIFT; diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig index 3a2196481b11..9f9c5ae5359b 100644 --- a/drivers/clk/Kconfig +++ b/drivers/clk/Kconfig @@ -58,12 +58,12 @@ config COMMON_CLK_SI570 clock generators. config COMMON_CLK_S2MPS11 - tristate "Clock driver for S2MPS11/S5M8767 MFD" + tristate "Clock driver for S2MPS1X/S5M8767 MFD" depends on MFD_SEC_CORE ---help--- - This driver supports S2MPS11/S5M8767 crystal oscillator clock. These - multi-function devices have 3 fixed-rate oscillators, clocked at - 32KHz each. + This driver supports S2MPS11/S2MPS14/S5M8767 crystal oscillator + clock. These multi-function devices have two (S2MPS14) or three + (S2MPS11, S5M8767) fixed-rate oscillators, clocked at 32KHz each. config CLK_TWL6040 tristate "External McPDM functional clock from twl6040" diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile index 50b2a7ebd747..567f10259029 100644 --- a/drivers/clk/Makefile +++ b/drivers/clk/Makefile @@ -13,6 +13,7 @@ obj-$(CONFIG_COMMON_CLK) += clk-fractional-divider.o # hardware specific clock types # please keep this section sorted lexicographically by file/directory path name obj-$(CONFIG_COMMON_CLK_AXI_CLKGEN) += clk-axi-clkgen.o +obj-$(CONFIG_ARCH_AXXIA) += clk-axm5516.o obj-$(CONFIG_ARCH_BCM2835) += clk-bcm2835.o obj-$(CONFIG_ARCH_EFM32) += clk-efm32gg.o obj-$(CONFIG_ARCH_HIGHBANK) += clk-highbank.o @@ -32,8 +33,10 @@ obj-$(CONFIG_COMMON_CLK_WM831X) += clk-wm831x.o obj-$(CONFIG_COMMON_CLK_XGENE) += clk-xgene.o obj-$(CONFIG_COMMON_CLK_AT91) += at91/ obj-$(CONFIG_ARCH_BCM_MOBILE) += bcm/ +obj-$(CONFIG_ARCH_BERLIN) += berlin/ obj-$(CONFIG_ARCH_HI3xxx) += hisilicon/ obj-$(CONFIG_ARCH_HIP04) += hisilicon/ +obj-$(CONFIG_ARCH_HIX5HD2) += hisilicon/ obj-$(CONFIG_COMMON_CLK_KEYSTONE) += keystone/ ifeq ($(CONFIG_COMMON_CLK), y) obj-$(CONFIG_ARCH_MMP) += mmp/ diff --git a/drivers/clk/bcm/Kconfig b/drivers/clk/bcm/Kconfig index a7262fb8ce55..75506e53075b 100644 --- a/drivers/clk/bcm/Kconfig +++ b/drivers/clk/bcm/Kconfig @@ -6,4 +6,4 @@ config CLK_BCM_KONA help Enable common clock framework support for Broadcom SoCs using "Kona" style clock control units, including those - in the BCM281xx family. + in the BCM281xx and BCM21664 families. diff --git a/drivers/clk/bcm/Makefile b/drivers/clk/bcm/Makefile index cf93359aa862..6297d05a9a10 100644 --- a/drivers/clk/bcm/Makefile +++ b/drivers/clk/bcm/Makefile @@ -1,3 +1,4 @@ obj-$(CONFIG_CLK_BCM_KONA) += clk-kona.o obj-$(CONFIG_CLK_BCM_KONA) += clk-kona-setup.o obj-$(CONFIG_CLK_BCM_KONA) += clk-bcm281xx.o +obj-$(CONFIG_CLK_BCM_KONA) += clk-bcm21664.o diff --git a/drivers/clk/bcm/clk-bcm21664.c b/drivers/clk/bcm/clk-bcm21664.c new file mode 100644 index 000000000000..eeae4cad2281 --- /dev/null +++ b/drivers/clk/bcm/clk-bcm21664.c @@ -0,0 +1,290 @@ +/* + * Copyright (C) 2014 Broadcom Corporation + * Copyright 2014 Linaro Limited + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation version 2. + * + * This program is distributed "as is" WITHOUT ANY WARRANTY of any + * kind, whether express or implied; without even the implied warranty + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ + +#include "clk-kona.h" +#include "dt-bindings/clock/bcm21664.h" + +#define BCM21664_CCU_COMMON(_name, _capname) \ + KONA_CCU_COMMON(BCM21664, _name, _capname) + +/* Root CCU */ + +static struct peri_clk_data frac_1m_data = { + .gate = HW_SW_GATE(0x214, 16, 0, 1), + .clocks = CLOCKS("ref_crystal"), +}; + +static struct ccu_data root_ccu_data = { + BCM21664_CCU_COMMON(root, ROOT), + /* no policy control */ + .kona_clks = { + [BCM21664_ROOT_CCU_FRAC_1M] = + KONA_CLK(root, frac_1m, peri), + [BCM21664_ROOT_CCU_CLOCK_COUNT] = LAST_KONA_CLK, + }, +}; + +/* AON CCU */ + +static struct peri_clk_data hub_timer_data = { + .gate = HW_SW_GATE(0x0414, 16, 0, 1), + .hyst = HYST(0x0414, 8, 9), + .clocks = CLOCKS("bbl_32k", + "frac_1m", + "dft_19_5m"), + .sel = SELECTOR(0x0a10, 0, 2), + .trig = TRIGGER(0x0a40, 4), +}; + +static struct ccu_data aon_ccu_data = { + BCM21664_CCU_COMMON(aon, AON), + .policy = { + .enable = CCU_LVM_EN(0x0034, 0), + .control = CCU_POLICY_CTL(0x000c, 0, 1, 2), + }, + .kona_clks = { + [BCM21664_AON_CCU_HUB_TIMER] = + KONA_CLK(aon, hub_timer, peri), + [BCM21664_AON_CCU_CLOCK_COUNT] = LAST_KONA_CLK, + }, +}; + +/* Master CCU */ + +static struct peri_clk_data sdio1_data = { + .gate = HW_SW_GATE(0x0358, 18, 2, 3), + .clocks = CLOCKS("ref_crystal", + "var_52m", + "ref_52m", + "var_96m", + "ref_96m"), + .sel = SELECTOR(0x0a28, 0, 3), + .div = DIVIDER(0x0a28, 4, 14), + .trig = TRIGGER(0x0afc, 9), +}; + +static struct peri_clk_data sdio2_data = { + .gate = HW_SW_GATE(0x035c, 18, 2, 3), + .clocks = CLOCKS("ref_crystal", + "var_52m", + "ref_52m", + "var_96m", + "ref_96m"), + .sel = SELECTOR(0x0a2c, 0, 3), + .div = DIVIDER(0x0a2c, 4, 14), + .trig = TRIGGER(0x0afc, 10), +}; + +static struct peri_clk_data sdio3_data = { + .gate = HW_SW_GATE(0x0364, 18, 2, 3), + .clocks = CLOCKS("ref_crystal", + "var_52m", + "ref_52m", + "var_96m", + "ref_96m"), + .sel = SELECTOR(0x0a34, 0, 3), + .div = DIVIDER(0x0a34, 4, 14), + .trig = TRIGGER(0x0afc, 12), +}; + +static struct peri_clk_data sdio4_data = { + .gate = HW_SW_GATE(0x0360, 18, 2, 3), + .clocks = CLOCKS("ref_crystal", + "var_52m", + "ref_52m", + "var_96m", + "ref_96m"), + .sel = SELECTOR(0x0a30, 0, 3), + .div = DIVIDER(0x0a30, 4, 14), + .trig = TRIGGER(0x0afc, 11), +}; + +static struct peri_clk_data sdio1_sleep_data = { + .clocks = CLOCKS("ref_32k"), /* Verify */ + .gate = HW_SW_GATE(0x0358, 18, 2, 3), +}; + +static struct peri_clk_data sdio2_sleep_data = { + .clocks = CLOCKS("ref_32k"), /* Verify */ + .gate = HW_SW_GATE(0x035c, 18, 2, 3), +}; + +static struct peri_clk_data sdio3_sleep_data = { + .clocks = CLOCKS("ref_32k"), /* Verify */ + .gate = HW_SW_GATE(0x0364, 18, 2, 3), +}; + +static struct peri_clk_data sdio4_sleep_data = { + .clocks = CLOCKS("ref_32k"), /* Verify */ + .gate = HW_SW_GATE(0x0360, 18, 2, 3), +}; + +static struct ccu_data master_ccu_data = { + BCM21664_CCU_COMMON(master, MASTER), + .policy = { + .enable = CCU_LVM_EN(0x0034, 0), + .control = CCU_POLICY_CTL(0x000c, 0, 1, 2), + }, + .kona_clks = { + [BCM21664_MASTER_CCU_SDIO1] = + KONA_CLK(master, sdio1, peri), + [BCM21664_MASTER_CCU_SDIO2] = + KONA_CLK(master, sdio2, peri), + [BCM21664_MASTER_CCU_SDIO3] = + KONA_CLK(master, sdio3, peri), + [BCM21664_MASTER_CCU_SDIO4] = + KONA_CLK(master, sdio4, peri), + [BCM21664_MASTER_CCU_SDIO1_SLEEP] = + KONA_CLK(master, sdio1_sleep, peri), + [BCM21664_MASTER_CCU_SDIO2_SLEEP] = + KONA_CLK(master, sdio2_sleep, peri), + [BCM21664_MASTER_CCU_SDIO3_SLEEP] = + KONA_CLK(master, sdio3_sleep, peri), + [BCM21664_MASTER_CCU_SDIO4_SLEEP] = + 
KONA_CLK(master, sdio4_sleep, peri), + [BCM21664_MASTER_CCU_CLOCK_COUNT] = LAST_KONA_CLK, + }, +}; + +/* Slave CCU */ + +static struct peri_clk_data uartb_data = { + .gate = HW_SW_GATE(0x0400, 18, 2, 3), + .clocks = CLOCKS("ref_crystal", + "var_156m", + "ref_156m"), + .sel = SELECTOR(0x0a10, 0, 2), + .div = FRAC_DIVIDER(0x0a10, 4, 12, 8), + .trig = TRIGGER(0x0afc, 2), +}; + +static struct peri_clk_data uartb2_data = { + .gate = HW_SW_GATE(0x0404, 18, 2, 3), + .clocks = CLOCKS("ref_crystal", + "var_156m", + "ref_156m"), + .sel = SELECTOR(0x0a14, 0, 2), + .div = FRAC_DIVIDER(0x0a14, 4, 12, 8), + .trig = TRIGGER(0x0afc, 3), +}; + +static struct peri_clk_data uartb3_data = { + .gate = HW_SW_GATE(0x0408, 18, 2, 3), + .clocks = CLOCKS("ref_crystal", + "var_156m", + "ref_156m"), + .sel = SELECTOR(0x0a18, 0, 2), + .div = FRAC_DIVIDER(0x0a18, 4, 12, 8), + .trig = TRIGGER(0x0afc, 4), +}; + +static struct peri_clk_data bsc1_data = { + .gate = HW_SW_GATE(0x0458, 18, 2, 3), + .clocks = CLOCKS("ref_crystal", + "var_104m", + "ref_104m", + "var_13m", + "ref_13m"), + .sel = SELECTOR(0x0a64, 0, 3), + .trig = TRIGGER(0x0afc, 23), +}; + +static struct peri_clk_data bsc2_data = { + .gate = HW_SW_GATE(0x045c, 18, 2, 3), + .clocks = CLOCKS("ref_crystal", + "var_104m", + "ref_104m", + "var_13m", + "ref_13m"), + .sel = SELECTOR(0x0a68, 0, 3), + .trig = TRIGGER(0x0afc, 24), +}; + +static struct peri_clk_data bsc3_data = { + .gate = HW_SW_GATE(0x0470, 18, 2, 3), + .clocks = CLOCKS("ref_crystal", + "var_104m", + "ref_104m", + "var_13m", + "ref_13m"), + .sel = SELECTOR(0x0a7c, 0, 3), + .trig = TRIGGER(0x0afc, 18), +}; + +static struct peri_clk_data bsc4_data = { + .gate = HW_SW_GATE(0x0474, 18, 2, 3), + .clocks = CLOCKS("ref_crystal", + "var_104m", + "ref_104m", + "var_13m", + "ref_13m"), + .sel = SELECTOR(0x0a80, 0, 3), + .trig = TRIGGER(0x0afc, 19), +}; + +static struct ccu_data slave_ccu_data = { + BCM21664_CCU_COMMON(slave, SLAVE), + .policy = { + .enable = CCU_LVM_EN(0x0034, 0), + .control = CCU_POLICY_CTL(0x000c, 0, 1, 2), + }, + .kona_clks = { + [BCM21664_SLAVE_CCU_UARTB] = + KONA_CLK(slave, uartb, peri), + [BCM21664_SLAVE_CCU_UARTB2] = + KONA_CLK(slave, uartb2, peri), + [BCM21664_SLAVE_CCU_UARTB3] = + KONA_CLK(slave, uartb3, peri), + [BCM21664_SLAVE_CCU_BSC1] = + KONA_CLK(slave, bsc1, peri), + [BCM21664_SLAVE_CCU_BSC2] = + KONA_CLK(slave, bsc2, peri), + [BCM21664_SLAVE_CCU_BSC3] = + KONA_CLK(slave, bsc3, peri), + [BCM21664_SLAVE_CCU_BSC4] = + KONA_CLK(slave, bsc4, peri), + [BCM21664_SLAVE_CCU_CLOCK_COUNT] = LAST_KONA_CLK, + }, +}; + +/* Device tree match table callback functions */ + +static void __init kona_dt_root_ccu_setup(struct device_node *node) +{ + kona_dt_ccu_setup(&root_ccu_data, node); +} + +static void __init kona_dt_aon_ccu_setup(struct device_node *node) +{ + kona_dt_ccu_setup(&aon_ccu_data, node); +} + +static void __init kona_dt_master_ccu_setup(struct device_node *node) +{ + kona_dt_ccu_setup(&master_ccu_data, node); +} + +static void __init kona_dt_slave_ccu_setup(struct device_node *node) +{ + kona_dt_ccu_setup(&slave_ccu_data, node); +} + +CLK_OF_DECLARE(bcm21664_root_ccu, BCM21664_DT_ROOT_CCU_COMPAT, + kona_dt_root_ccu_setup); +CLK_OF_DECLARE(bcm21664_aon_ccu, BCM21664_DT_AON_CCU_COMPAT, + kona_dt_aon_ccu_setup); +CLK_OF_DECLARE(bcm21664_master_ccu, BCM21664_DT_MASTER_CCU_COMPAT, + kona_dt_master_ccu_setup); +CLK_OF_DECLARE(bcm21664_slave_ccu, BCM21664_DT_SLAVE_CCU_COMPAT, + kona_dt_slave_ccu_setup); diff --git a/drivers/clk/bcm/clk-bcm281xx.c b/drivers/clk/bcm/clk-bcm281xx.c index 
3c66de696aeb..502a487d62c5 100644 --- a/drivers/clk/bcm/clk-bcm281xx.c +++ b/drivers/clk/bcm/clk-bcm281xx.c @@ -15,14 +15,10 @@ #include "clk-kona.h" #include "dt-bindings/clock/bcm281xx.h" -/* bcm11351 CCU device tree "compatible" strings */ -#define BCM11351_DT_ROOT_CCU_COMPAT "brcm,bcm11351-root-ccu" -#define BCM11351_DT_AON_CCU_COMPAT "brcm,bcm11351-aon-ccu" -#define BCM11351_DT_HUB_CCU_COMPAT "brcm,bcm11351-hub-ccu" -#define BCM11351_DT_MASTER_CCU_COMPAT "brcm,bcm11351-master-ccu" -#define BCM11351_DT_SLAVE_CCU_COMPAT "brcm,bcm11351-slave-ccu" +#define BCM281XX_CCU_COMMON(_name, _ucase_name) \ + KONA_CCU_COMMON(BCM281XX, _name, _ucase_name) -/* Root CCU clocks */ +/* Root CCU */ static struct peri_clk_data frac_1m_data = { .gate = HW_SW_GATE(0x214, 16, 0, 1), @@ -31,7 +27,16 @@ static struct peri_clk_data frac_1m_data = { .clocks = CLOCKS("ref_crystal"), }; -/* AON CCU clocks */ +static struct ccu_data root_ccu_data = { + BCM281XX_CCU_COMMON(root, ROOT), + .kona_clks = { + [BCM281XX_ROOT_CCU_FRAC_1M] = + KONA_CLK(root, frac_1m, peri), + [BCM281XX_ROOT_CCU_CLOCK_COUNT] = LAST_KONA_CLK, + }, +}; + +/* AON CCU */ static struct peri_clk_data hub_timer_data = { .gate = HW_SW_GATE(0x0414, 16, 0, 1), @@ -60,7 +65,20 @@ static struct peri_clk_data pmu_bsc_var_data = { .trig = TRIGGER(0x0a40, 2), }; -/* Hub CCU clocks */ +static struct ccu_data aon_ccu_data = { + BCM281XX_CCU_COMMON(aon, AON), + .kona_clks = { + [BCM281XX_AON_CCU_HUB_TIMER] = + KONA_CLK(aon, hub_timer, peri), + [BCM281XX_AON_CCU_PMU_BSC] = + KONA_CLK(aon, pmu_bsc, peri), + [BCM281XX_AON_CCU_PMU_BSC_VAR] = + KONA_CLK(aon, pmu_bsc_var, peri), + [BCM281XX_AON_CCU_CLOCK_COUNT] = LAST_KONA_CLK, + }, +}; + +/* Hub CCU */ static struct peri_clk_data tmon_1m_data = { .gate = HW_SW_GATE(0x04a4, 18, 2, 3), @@ -70,7 +88,16 @@ static struct peri_clk_data tmon_1m_data = { .trig = TRIGGER(0x0e84, 1), }; -/* Master CCU clocks */ +static struct ccu_data hub_ccu_data = { + BCM281XX_CCU_COMMON(hub, HUB), + .kona_clks = { + [BCM281XX_HUB_CCU_TMON_1M] = + KONA_CLK(hub, tmon_1m, peri), + [BCM281XX_HUB_CCU_CLOCK_COUNT] = LAST_KONA_CLK, + }, +}; + +/* Master CCU */ static struct peri_clk_data sdio1_data = { .gate = HW_SW_GATE(0x0358, 18, 2, 3), @@ -153,7 +180,28 @@ static struct peri_clk_data hsic2_12m_data = { .trig = TRIGGER(0x0afc, 5), }; -/* Slave CCU clocks */ +static struct ccu_data master_ccu_data = { + BCM281XX_CCU_COMMON(master, MASTER), + .kona_clks = { + [BCM281XX_MASTER_CCU_SDIO1] = + KONA_CLK(master, sdio1, peri), + [BCM281XX_MASTER_CCU_SDIO2] = + KONA_CLK(master, sdio2, peri), + [BCM281XX_MASTER_CCU_SDIO3] = + KONA_CLK(master, sdio3, peri), + [BCM281XX_MASTER_CCU_SDIO4] = + KONA_CLK(master, sdio4, peri), + [BCM281XX_MASTER_CCU_USB_IC] = + KONA_CLK(master, usb_ic, peri), + [BCM281XX_MASTER_CCU_HSIC2_48M] = + KONA_CLK(master, hsic2_48m, peri), + [BCM281XX_MASTER_CCU_HSIC2_12M] = + KONA_CLK(master, hsic2_12m, peri), + [BCM281XX_MASTER_CCU_CLOCK_COUNT] = LAST_KONA_CLK, + }, +}; + +/* Slave CCU */ static struct peri_clk_data uartb_data = { .gate = HW_SW_GATE(0x0400, 18, 2, 3), @@ -261,156 +309,67 @@ static struct peri_clk_data pwm_data = { .trig = TRIGGER(0x0afc, 15), }; -/* - * CCU setup routines - * - * These are called from kona_dt_ccu_setup() to initialize the array - * of clocks provided by the CCU. Once allocated, the entries in - * the array are initialized by calling kona_clk_setup() with the - * initialization data for each clock. They return 0 if successful - * or an error code otherwise. 
- */ -static int __init bcm281xx_root_ccu_clks_setup(struct ccu_data *ccu) -{ - struct clk **clks; - size_t count = BCM281XX_ROOT_CCU_CLOCK_COUNT; - - clks = kzalloc(count * sizeof(*clks), GFP_KERNEL); - if (!clks) { - pr_err("%s: failed to allocate root clocks\n", __func__); - return -ENOMEM; - } - ccu->data.clks = clks; - ccu->data.clk_num = count; - - PERI_CLK_SETUP(clks, ccu, BCM281XX_ROOT_CCU_FRAC_1M, frac_1m); - - return 0; -} - -static int __init bcm281xx_aon_ccu_clks_setup(struct ccu_data *ccu) -{ - struct clk **clks; - size_t count = BCM281XX_AON_CCU_CLOCK_COUNT; - - clks = kzalloc(count * sizeof(*clks), GFP_KERNEL); - if (!clks) { - pr_err("%s: failed to allocate aon clocks\n", __func__); - return -ENOMEM; - } - ccu->data.clks = clks; - ccu->data.clk_num = count; - - PERI_CLK_SETUP(clks, ccu, BCM281XX_AON_CCU_HUB_TIMER, hub_timer); - PERI_CLK_SETUP(clks, ccu, BCM281XX_AON_CCU_PMU_BSC, pmu_bsc); - PERI_CLK_SETUP(clks, ccu, BCM281XX_AON_CCU_PMU_BSC_VAR, pmu_bsc_var); - - return 0; -} - -static int __init bcm281xx_hub_ccu_clks_setup(struct ccu_data *ccu) -{ - struct clk **clks; - size_t count = BCM281XX_HUB_CCU_CLOCK_COUNT; - - clks = kzalloc(count * sizeof(*clks), GFP_KERNEL); - if (!clks) { - pr_err("%s: failed to allocate hub clocks\n", __func__); - return -ENOMEM; - } - ccu->data.clks = clks; - ccu->data.clk_num = count; - - PERI_CLK_SETUP(clks, ccu, BCM281XX_HUB_CCU_TMON_1M, tmon_1m); - - return 0; -} - -static int __init bcm281xx_master_ccu_clks_setup(struct ccu_data *ccu) -{ - struct clk **clks; - size_t count = BCM281XX_MASTER_CCU_CLOCK_COUNT; - - clks = kzalloc(count * sizeof(*clks), GFP_KERNEL); - if (!clks) { - pr_err("%s: failed to allocate master clocks\n", __func__); - return -ENOMEM; - } - ccu->data.clks = clks; - ccu->data.clk_num = count; - - PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_SDIO1, sdio1); - PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_SDIO2, sdio2); - PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_SDIO3, sdio3); - PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_SDIO4, sdio4); - PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_USB_IC, usb_ic); - PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_HSIC2_48M, hsic2_48m); - PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_HSIC2_12M, hsic2_12m); - - return 0; -} - -static int __init bcm281xx_slave_ccu_clks_setup(struct ccu_data *ccu) -{ - struct clk **clks; - size_t count = BCM281XX_SLAVE_CCU_CLOCK_COUNT; - - clks = kzalloc(count * sizeof(*clks), GFP_KERNEL); - if (!clks) { - pr_err("%s: failed to allocate slave clocks\n", __func__); - return -ENOMEM; - } - ccu->data.clks = clks; - ccu->data.clk_num = count; - - PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_UARTB, uartb); - PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_UARTB2, uartb2); - PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_UARTB3, uartb3); - PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_UARTB4, uartb4); - PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_SSP0, ssp0); - PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_SSP2, ssp2); - PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_BSC1, bsc1); - PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_BSC2, bsc2); - PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_BSC3, bsc3); - PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_PWM, pwm); - - return 0; -} +static struct ccu_data slave_ccu_data = { + BCM281XX_CCU_COMMON(slave, SLAVE), + .kona_clks = { + [BCM281XX_SLAVE_CCU_UARTB] = + KONA_CLK(slave, uartb, peri), + [BCM281XX_SLAVE_CCU_UARTB2] = + KONA_CLK(slave, uartb2, peri), + [BCM281XX_SLAVE_CCU_UARTB3] = + KONA_CLK(slave, uartb3, peri), + 
[BCM281XX_SLAVE_CCU_UARTB4] = + KONA_CLK(slave, uartb4, peri), + [BCM281XX_SLAVE_CCU_SSP0] = + KONA_CLK(slave, ssp0, peri), + [BCM281XX_SLAVE_CCU_SSP2] = + KONA_CLK(slave, ssp2, peri), + [BCM281XX_SLAVE_CCU_BSC1] = + KONA_CLK(slave, bsc1, peri), + [BCM281XX_SLAVE_CCU_BSC2] = + KONA_CLK(slave, bsc2, peri), + [BCM281XX_SLAVE_CCU_BSC3] = + KONA_CLK(slave, bsc3, peri), + [BCM281XX_SLAVE_CCU_PWM] = + KONA_CLK(slave, pwm, peri), + [BCM281XX_SLAVE_CCU_CLOCK_COUNT] = LAST_KONA_CLK, + }, +}; /* Device tree match table callback functions */ static void __init kona_dt_root_ccu_setup(struct device_node *node) { - kona_dt_ccu_setup(node, bcm281xx_root_ccu_clks_setup); + kona_dt_ccu_setup(&root_ccu_data, node); } static void __init kona_dt_aon_ccu_setup(struct device_node *node) { - kona_dt_ccu_setup(node, bcm281xx_aon_ccu_clks_setup); + kona_dt_ccu_setup(&aon_ccu_data, node); } static void __init kona_dt_hub_ccu_setup(struct device_node *node) { - kona_dt_ccu_setup(node, bcm281xx_hub_ccu_clks_setup); + kona_dt_ccu_setup(&hub_ccu_data, node); } static void __init kona_dt_master_ccu_setup(struct device_node *node) { - kona_dt_ccu_setup(node, bcm281xx_master_ccu_clks_setup); + kona_dt_ccu_setup(&master_ccu_data, node); } static void __init kona_dt_slave_ccu_setup(struct device_node *node) { - kona_dt_ccu_setup(node, bcm281xx_slave_ccu_clks_setup); + kona_dt_ccu_setup(&slave_ccu_data, node); } -CLK_OF_DECLARE(bcm11351_root_ccu, BCM11351_DT_ROOT_CCU_COMPAT, +CLK_OF_DECLARE(bcm281xx_root_ccu, BCM281XX_DT_ROOT_CCU_COMPAT, kona_dt_root_ccu_setup); -CLK_OF_DECLARE(bcm11351_aon_ccu, BCM11351_DT_AON_CCU_COMPAT, +CLK_OF_DECLARE(bcm281xx_aon_ccu, BCM281XX_DT_AON_CCU_COMPAT, kona_dt_aon_ccu_setup); -CLK_OF_DECLARE(bcm11351_hub_ccu, BCM11351_DT_HUB_CCU_COMPAT, +CLK_OF_DECLARE(bcm281xx_hub_ccu, BCM281XX_DT_HUB_CCU_COMPAT, kona_dt_hub_ccu_setup); -CLK_OF_DECLARE(bcm11351_master_ccu, BCM11351_DT_MASTER_CCU_COMPAT, +CLK_OF_DECLARE(bcm281xx_master_ccu, BCM281XX_DT_MASTER_CCU_COMPAT, kona_dt_master_ccu_setup); -CLK_OF_DECLARE(bcm11351_slave_ccu, BCM11351_DT_SLAVE_CCU_COMPAT, +CLK_OF_DECLARE(bcm281xx_slave_ccu, BCM281XX_DT_SLAVE_CCU_COMPAT, kona_dt_slave_ccu_setup); diff --git a/drivers/clk/bcm/clk-kona-setup.c b/drivers/clk/bcm/clk-kona-setup.c index 54a06526f64f..e5aededdd322 100644 --- a/drivers/clk/bcm/clk-kona-setup.c +++ b/drivers/clk/bcm/clk-kona-setup.c @@ -25,6 +25,31 @@ LIST_HEAD(ccu_list); /* The list of set up CCUs */ /* Validity checking */ +static bool ccu_data_offsets_valid(struct ccu_data *ccu) +{ + struct ccu_policy *ccu_policy = &ccu->policy; + u32 limit; + + limit = ccu->range - sizeof(u32); + limit = round_down(limit, sizeof(u32)); + if (ccu_policy_exists(ccu_policy)) { + if (ccu_policy->enable.offset > limit) { + pr_err("%s: bad policy enable offset for %s " + "(%u > %u)\n", __func__, + ccu->name, ccu_policy->enable.offset, limit); + return false; + } + if (ccu_policy->control.offset > limit) { + pr_err("%s: bad policy control offset for %s " + "(%u > %u)\n", __func__, + ccu->name, ccu_policy->control.offset, limit); + return false; + } + } + + return true; +} + static bool clk_requires_trigger(struct kona_clk *bcm_clk) { struct peri_clk_data *peri = bcm_clk->u.peri; @@ -54,7 +79,9 @@ static bool clk_requires_trigger(struct kona_clk *bcm_clk) static bool peri_clk_data_offsets_valid(struct kona_clk *bcm_clk) { struct peri_clk_data *peri; + struct bcm_clk_policy *policy; struct bcm_clk_gate *gate; + struct bcm_clk_hyst *hyst; struct bcm_clk_div *div; struct bcm_clk_sel *sel; struct bcm_clk_trig *trig; @@ 
-64,19 +91,41 @@ static bool peri_clk_data_offsets_valid(struct kona_clk *bcm_clk) BUG_ON(bcm_clk->type != bcm_clk_peri); peri = bcm_clk->u.peri; - name = bcm_clk->name; + name = bcm_clk->init_data.name; range = bcm_clk->ccu->range; limit = range - sizeof(u32); limit = round_down(limit, sizeof(u32)); + policy = &peri->policy; + if (policy_exists(policy)) { + if (policy->offset > limit) { + pr_err("%s: bad policy offset for %s (%u > %u)\n", + __func__, name, policy->offset, limit); + return false; + } + } + gate = &peri->gate; + hyst = &peri->hyst; if (gate_exists(gate)) { if (gate->offset > limit) { pr_err("%s: bad gate offset for %s (%u > %u)\n", __func__, name, gate->offset, limit); return false; } + + if (hyst_exists(hyst)) { + if (hyst->offset > limit) { + pr_err("%s: bad hysteresis offset for %s " + "(%u > %u)\n", __func__, + name, hyst->offset, limit); + return false; + } + } + } else if (hyst_exists(hyst)) { + pr_err("%s: hysteresis but no gate for %s\n", __func__, name); + return false; } div = &peri->div; @@ -167,6 +216,36 @@ static bool bitfield_valid(u32 shift, u32 width, const char *field_name, return true; } +static bool +ccu_policy_valid(struct ccu_policy *ccu_policy, const char *ccu_name) +{ + struct bcm_lvm_en *enable = &ccu_policy->enable; + struct bcm_policy_ctl *control; + + if (!bit_posn_valid(enable->bit, "policy enable", ccu_name)) + return false; + + control = &ccu_policy->control; + if (!bit_posn_valid(control->go_bit, "policy control GO", ccu_name)) + return false; + + if (!bit_posn_valid(control->atl_bit, "policy control ATL", ccu_name)) + return false; + + if (!bit_posn_valid(control->ac_bit, "policy control AC", ccu_name)) + return false; + + return true; +} + +static bool policy_valid(struct bcm_clk_policy *policy, const char *clock_name) +{ + if (!bit_posn_valid(policy->bit, "policy", clock_name)) + return false; + + return true; +} + /* * All gates, if defined, have a status bit, and for hardware-only * gates, that's it. Gates that can be software controlled also @@ -196,6 +275,17 @@ static bool gate_valid(struct bcm_clk_gate *gate, const char *field_name, return true; } +static bool hyst_valid(struct bcm_clk_hyst *hyst, const char *clock_name) +{ + if (!bit_posn_valid(hyst->en_bit, "hysteresis enable", clock_name)) + return false; + + if (!bit_posn_valid(hyst->val_bit, "hysteresis value", clock_name)) + return false; + + return true; +} + /* * A selector bitfield must be valid. Its parent_sel array must * also be reasonable for the field. 
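The validity helpers above all bound register offsets the same way: the highest legal offset is the last word-aligned address at which a full 32-bit register still fits inside the CCU's mapped range. A standalone restatement of that check (illustrative only, not part of the patch; round_down() is open-coded):

	/*
	 * True if a 32-bit register at 'offset' fits inside a mapped
	 * region of 'range' bytes, mirroring the
	 * limit = round_down(range - sizeof(u32), sizeof(u32))
	 * computation used by the helpers above.
	 */
	static bool reg_offset_in_range(u32 offset, u32 range)
	{
		u32 limit = range - sizeof(u32);

		limit &= ~(u32)(sizeof(u32) - 1);	/* round_down() */

		return offset <= limit;
	}

ccu_data_offsets_valid() and peri_clk_data_offsets_valid() apply this same limit to the policy, gate and hysteresis offsets before any register is touched.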
@@ -312,7 +402,9 @@ static bool peri_clk_data_valid(struct kona_clk *bcm_clk) { struct peri_clk_data *peri; + struct bcm_clk_policy *policy; struct bcm_clk_gate *gate; + struct bcm_clk_hyst *hyst; struct bcm_clk_sel *sel; struct bcm_clk_div *div; struct bcm_clk_div *pre_div; @@ -330,11 +422,20 @@ peri_clk_data_valid(struct kona_clk *bcm_clk) return false; peri = bcm_clk->u.peri; - name = bcm_clk->name; + name = bcm_clk->init_data.name; + + policy = &peri->policy; + if (policy_exists(policy) && !policy_valid(policy, name)) + return false; + gate = &peri->gate; if (gate_exists(gate) && !gate_valid(gate, "gate", name)) return false; + hyst = &peri->hyst; + if (hyst_exists(hyst) && !hyst_valid(hyst, name)) + return false; + sel = &peri->sel; if (selector_exists(sel)) { if (!sel_valid(sel, "selector", name)) @@ -567,7 +668,6 @@ static void peri_clk_teardown(struct peri_clk_data *data, struct clk_init_data *init_data) { clk_sel_teardown(&data->sel, init_data); - init_data->ops = NULL; } /* @@ -576,10 +676,9 @@ static void peri_clk_teardown(struct peri_clk_data *data, * that can be assigned if the clock has one or more parent clocks * associated with it. */ -static int peri_clk_setup(struct ccu_data *ccu, struct peri_clk_data *data, - struct clk_init_data *init_data) +static int +peri_clk_setup(struct peri_clk_data *data, struct clk_init_data *init_data) { - init_data->ops = &kona_peri_clk_ops; init_data->flags = CLK_IGNORE_UNUSED; return clk_sel_setup(data->clocks, &data->sel, init_data); @@ -617,39 +716,26 @@ static void kona_clk_teardown(struct clk *clk) bcm_clk_teardown(bcm_clk); } -struct clk *kona_clk_setup(struct ccu_data *ccu, const char *name, - enum bcm_clk_type type, void *data) +struct clk *kona_clk_setup(struct kona_clk *bcm_clk) { - struct kona_clk *bcm_clk; - struct clk_init_data *init_data; + struct clk_init_data *init_data = &bcm_clk->init_data; struct clk *clk = NULL; - bcm_clk = kzalloc(sizeof(*bcm_clk), GFP_KERNEL); - if (!bcm_clk) { - pr_err("%s: failed to allocate bcm_clk for %s\n", __func__, - name); - return NULL; - } - bcm_clk->ccu = ccu; - bcm_clk->name = name; - - init_data = &bcm_clk->init_data; - init_data->name = name; - switch (type) { + switch (bcm_clk->type) { case bcm_clk_peri: - if (peri_clk_setup(ccu, data, init_data)) - goto out_free; + if (peri_clk_setup(bcm_clk->u.data, init_data)) + return NULL; break; default: - data = NULL; - break; + pr_err("%s: clock type %d invalid for %s\n", __func__, + (int)bcm_clk->type, init_data->name); + return NULL; } - bcm_clk->type = type; - bcm_clk->u.data = data; /* Make sure everything makes sense before we set it up */ if (!kona_clk_valid(bcm_clk)) { - pr_err("%s: clock data invalid for %s\n", __func__, name); + pr_err("%s: clock data invalid for %s\n", __func__, + init_data->name); goto out_teardown; } @@ -657,7 +743,7 @@ struct clk *kona_clk_setup(struct ccu_data *ccu, const char *name, clk = clk_register(NULL, &bcm_clk->hw); if (IS_ERR(clk)) { pr_err("%s: error registering clock %s (%ld)\n", __func__, - name, PTR_ERR(clk)); + init_data->name, PTR_ERR(clk)); goto out_teardown; } BUG_ON(!clk); @@ -665,8 +751,6 @@ struct clk *kona_clk_setup(struct ccu_data *ccu, const char *name, return clk; out_teardown: bcm_clk_teardown(bcm_clk); -out_free: - kfree(bcm_clk); return NULL; } @@ -675,50 +759,64 @@ static void ccu_clks_teardown(struct ccu_data *ccu) { u32 i; - for (i = 0; i < ccu->data.clk_num; i++) - kona_clk_teardown(ccu->data.clks[i]); - kfree(ccu->data.clks); + for (i = 0; i < ccu->clk_data.clk_num; i++) + 
kona_clk_teardown(ccu->clk_data.clks[i]); + kfree(ccu->clk_data.clks); } static void kona_ccu_teardown(struct ccu_data *ccu) { - if (!ccu) - return; - + kfree(ccu->clk_data.clks); + ccu->clk_data.clks = NULL; if (!ccu->base) - goto done; + return; of_clk_del_provider(ccu->node); /* safe if never added */ ccu_clks_teardown(ccu); list_del(&ccu->links); of_node_put(ccu->node); + ccu->node = NULL; iounmap(ccu->base); -done: - kfree(ccu->name); - kfree(ccu); + ccu->base = NULL; +} + +static bool ccu_data_valid(struct ccu_data *ccu) +{ + struct ccu_policy *ccu_policy; + + if (!ccu_data_offsets_valid(ccu)) + return false; + + ccu_policy = &ccu->policy; + if (ccu_policy_exists(ccu_policy)) + if (!ccu_policy_valid(ccu_policy, ccu->name)) + return false; + + return true; } /* * Set up a CCU. Call the provided ccu_clks_setup callback to * initialize the array of clocks provided by the CCU. */ -void __init kona_dt_ccu_setup(struct device_node *node, - int (*ccu_clks_setup)(struct ccu_data *)) +void __init kona_dt_ccu_setup(struct ccu_data *ccu, + struct device_node *node) { - struct ccu_data *ccu; struct resource res = { 0 }; resource_size_t range; + unsigned int i; int ret; - ccu = kzalloc(sizeof(*ccu), GFP_KERNEL); - if (ccu) - ccu->name = kstrdup(node->name, GFP_KERNEL); - if (!ccu || !ccu->name) { - pr_err("%s: unable to allocate CCU struct for %s\n", - __func__, node->name); - kfree(ccu); + if (ccu->clk_data.clk_num) { + size_t size; - return; + size = ccu->clk_data.clk_num * sizeof(*ccu->clk_data.clks); + ccu->clk_data.clks = kzalloc(size, GFP_KERNEL); + if (!ccu->clk_data.clks) { + pr_err("%s: unable to allocate %u clocks for %s\n", + __func__, ccu->clk_data.clk_num, node->name); + return; + } } ret = of_address_to_resource(node, 0, &res); @@ -736,24 +834,33 @@ void __init kona_dt_ccu_setup(struct device_node *node, } ccu->range = (u32)range; + + if (!ccu_data_valid(ccu)) { + pr_err("%s: ccu data not valid for %s\n", __func__, node->name); + goto out_err; + } + ccu->base = ioremap(res.start, ccu->range); if (!ccu->base) { pr_err("%s: unable to map CCU registers for %s\n", __func__, node->name); goto out_err; } - - spin_lock_init(&ccu->lock); - INIT_LIST_HEAD(&ccu->links); ccu->node = of_node_get(node); - list_add_tail(&ccu->links, &ccu_list); - /* Set up clocks array (in ccu->data) */ - if (ccu_clks_setup(ccu)) - goto out_err; + /* + * Set up each defined kona clock and save the result in + * the clock framework clock array (in ccu->data). Then + * register as a provider for these clocks. + */ + for (i = 0; i < ccu->clk_data.clk_num; i++) { + if (!ccu->kona_clks[i].ccu) + continue; + ccu->clk_data.clks[i] = kona_clk_setup(&ccu->kona_clks[i]); + } - ret = of_clk_add_provider(node, of_clk_src_onecell_get, &ccu->data); + ret = of_clk_add_provider(node, of_clk_src_onecell_get, &ccu->clk_data); if (ret) { pr_err("%s: error adding ccu %s as provider (%d)\n", __func__, node->name, ret); diff --git a/drivers/clk/bcm/clk-kona.c b/drivers/clk/bcm/clk-kona.c index db11a87449f2..95af2e665dd3 100644 --- a/drivers/clk/bcm/clk-kona.c +++ b/drivers/clk/bcm/clk-kona.c @@ -16,6 +16,14 @@ #include <linux/delay.h> +/* + * "Policies" affect the frequencies of bus clocks provided by a + * CCU. (I believe these polices are named "Deep Sleep", "Economy", + * "Normal", and "Turbo".) A lower policy number has lower power + * consumption, and policy 2 is the default. 
+ */ +#define CCU_POLICY_COUNT 4 + #define CCU_ACCESS_PASSWORD 0xA5A500 #define CLK_GATE_DELAY_LOOP 2000 @@ -207,9 +215,154 @@ __ccu_wait_bit(struct ccu_data *ccu, u32 reg_offset, u32 bit, bool want) return true; udelay(1); } + pr_warn("%s: %s/0x%04x bit %u was never %s\n", __func__, + ccu->name, reg_offset, bit, want ? "set" : "clear"); + return false; } +/* Policy operations */ + +static bool __ccu_policy_engine_start(struct ccu_data *ccu, bool sync) +{ + struct bcm_policy_ctl *control = &ccu->policy.control; + u32 offset; + u32 go_bit; + u32 mask; + bool ret; + + /* If we don't need to control policy for this CCU, we're done. */ + if (!policy_ctl_exists(control)) + return true; + + offset = control->offset; + go_bit = control->go_bit; + + /* Ensure we're not busy before we start */ + ret = __ccu_wait_bit(ccu, offset, go_bit, false); + if (!ret) { + pr_err("%s: ccu %s policy engine wouldn't go idle\n", + __func__, ccu->name); + return false; + } + + /* + * If it's a synchronous request, we'll wait for the voltage + * and frequency of the active load to stabilize before + * returning. To do this we select the active load by + * setting the ATL bit. + * + * An asynchronous request instead ramps the voltage in the + * background, and when that process stabilizes, the target + * load is copied to the active load and the CCU frequency + * is switched. We do this by selecting the target load + * (ATL bit clear) and setting the request auto-copy (AC bit + * set). + * + * Note, we do NOT read-modify-write this register. + */ + mask = (u32)1 << go_bit; + if (sync) + mask |= 1 << control->atl_bit; + else + mask |= 1 << control->ac_bit; + __ccu_write(ccu, offset, mask); + + /* Wait for indication that operation is complete. */ + ret = __ccu_wait_bit(ccu, offset, go_bit, false); + if (!ret) + pr_err("%s: ccu %s policy engine never started\n", + __func__, ccu->name); + + return ret; +} + +static bool __ccu_policy_engine_stop(struct ccu_data *ccu) +{ + struct bcm_lvm_en *enable = &ccu->policy.enable; + u32 offset; + u32 enable_bit; + bool ret; + + /* If we don't need to control policy for this CCU, we're done. */ + if (!policy_lvm_en_exists(enable)) + return true; + + /* Ensure we're not busy before we start */ + offset = enable->offset; + enable_bit = enable->bit; + ret = __ccu_wait_bit(ccu, offset, enable_bit, false); + if (!ret) { + pr_err("%s: ccu %s policy engine already stopped\n", + __func__, ccu->name); + return false; + } + + /* Now set the bit to stop the engine (NO read-modify-write) */ + __ccu_write(ccu, offset, (u32)1 << enable_bit); + + /* Wait for indication that it has stopped. */ + ret = __ccu_wait_bit(ccu, offset, enable_bit, false); + if (!ret) + pr_err("%s: ccu %s policy engine never stopped\n", + __func__, ccu->name); + + return ret; +} + +/* + * A CCU has four operating conditions ("policies"), and some clocks + * can be disabled or enabled based on which policy is currently in + * effect. Such clocks have a bit in a "policy mask" register for + * each policy indicating whether the clock is enabled for that + * policy or not. The bit position for a clock is the same for all + * four registers, and the 32-bit registers are at consecutive + * addresses. + */ +static bool policy_init(struct ccu_data *ccu, struct bcm_clk_policy *policy) +{ + u32 offset; + u32 mask; + int i; + bool ret; + + if (!policy_exists(policy)) + return true; + + /* + * We need to stop the CCU policy engine to allow update + * of our policy bits. 
+ */ + if (!__ccu_policy_engine_stop(ccu)) { + pr_err("%s: unable to stop CCU %s policy engine\n", + __func__, ccu->name); + return false; + } + + /* + * For now, if a clock defines its policy bit we just mark + * it "enabled" for all four policies. + */ + offset = policy->offset; + mask = (u32)1 << policy->bit; + for (i = 0; i < CCU_POLICY_COUNT; i++) { + u32 reg_val; + + reg_val = __ccu_read(ccu, offset); + reg_val |= mask; + __ccu_write(ccu, offset, reg_val); + offset += sizeof(u32); + } + + /* We're done updating; fire up the policy engine again. */ + ret = __ccu_policy_engine_start(ccu, true); + if (!ret) + pr_err("%s: unable to restart CCU %s policy engine\n", + __func__, ccu->name); + + return ret; +} + /* Gate operations */ /* Determine whether a clock is gated. CCU lock must be held. */ @@ -374,6 +527,35 @@ static int clk_gate(struct ccu_data *ccu, const char *name, return -EIO; } +/* Hysteresis operations */ + +/* + * If a clock gate requires a turn-off delay it will have + * "hysteresis" register bits defined. The first, if set, enables + * the delay; and if enabled, the second bit determines whether the + * delay is "low" or "high" (1 means high). For now, if it's + * defined for a clock, we set it. + */ +static bool hyst_init(struct ccu_data *ccu, struct bcm_clk_hyst *hyst) +{ + u32 offset; + u32 reg_val; + u32 mask; + + if (!hyst_exists(hyst)) + return true; + + offset = hyst->offset; + mask = (u32)1 << hyst->en_bit; + mask |= (u32)1 << hyst->val_bit; + + reg_val = __ccu_read(ccu, offset); + reg_val |= mask; + __ccu_write(ccu, offset, reg_val); + + return true; +} + /* Trigger operations */ /* @@ -806,7 +988,7 @@ static int kona_peri_clk_enable(struct clk_hw *hw) struct kona_clk *bcm_clk = to_kona_clk(hw); struct bcm_clk_gate *gate = &bcm_clk->u.peri->gate; - return clk_gate(bcm_clk->ccu, bcm_clk->name, gate, true); + return clk_gate(bcm_clk->ccu, bcm_clk->init_data.name, gate, true); } static void kona_peri_clk_disable(struct clk_hw *hw) @@ -814,7 +996,7 @@ static void kona_peri_clk_disable(struct clk_hw *hw) struct kona_clk *bcm_clk = to_kona_clk(hw); struct bcm_clk_gate *gate = &bcm_clk->u.peri->gate; - (void)clk_gate(bcm_clk->ccu, bcm_clk->name, gate, false); + (void)clk_gate(bcm_clk->ccu, bcm_clk->init_data.name, gate, false); } static int kona_peri_clk_is_enabled(struct clk_hw *hw) @@ -849,6 +1031,58 @@ static long kona_peri_clk_round_rate(struct clk_hw *hw, unsigned long rate, rate ? rate : 1, *parent_rate, NULL); } +static long kona_peri_clk_determine_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *best_parent_rate, struct clk **best_parent) +{ + struct kona_clk *bcm_clk = to_kona_clk(hw); + struct clk *clk = hw->clk; + struct clk *current_parent; + unsigned long parent_rate; + unsigned long best_delta; + unsigned long best_rate; + u32 parent_count; + u32 which; + + /* + * If there is no other parent to choose, use the current one. + * Note: We don't honor (or use) CLK_SET_RATE_NO_REPARENT. 
+ */ + WARN_ON_ONCE(bcm_clk->init_data.flags & CLK_SET_RATE_NO_REPARENT); + parent_count = (u32)bcm_clk->init_data.num_parents; + if (parent_count < 2) + return kona_peri_clk_round_rate(hw, rate, best_parent_rate); + + /* Unless we can do better, stick with current parent */ + current_parent = clk_get_parent(clk); + parent_rate = __clk_get_rate(current_parent); + best_rate = kona_peri_clk_round_rate(hw, rate, &parent_rate); + best_delta = abs(best_rate - rate); + + /* Check whether any other parent clock can produce a better result */ + for (which = 0; which < parent_count; which++) { + struct clk *parent = clk_get_parent_by_index(clk, which); + unsigned long delta; + unsigned long other_rate; + + BUG_ON(!parent); + if (parent == current_parent) + continue; + + /* We don't support CLK_SET_RATE_PARENT */ + parent_rate = __clk_get_rate(parent); + other_rate = kona_peri_clk_round_rate(hw, rate, &parent_rate); + delta = abs(other_rate - rate); + if (delta < best_delta) { + best_delta = delta; + best_rate = other_rate; + *best_parent = parent; + *best_parent_rate = parent_rate; + } + } + + return best_rate; +} + static int kona_peri_clk_set_parent(struct clk_hw *hw, u8 index) { struct kona_clk *bcm_clk = to_kona_clk(hw); @@ -872,12 +1106,13 @@ static int kona_peri_clk_set_parent(struct clk_hw *hw, u8 index) ret = selector_write(bcm_clk->ccu, &data->gate, sel, trig, index); if (ret == -ENXIO) { - pr_err("%s: gating failure for %s\n", __func__, bcm_clk->name); + pr_err("%s: gating failure for %s\n", __func__, + bcm_clk->init_data.name); ret = -EIO; /* Don't proliferate weird errors */ } else if (ret == -EIO) { pr_err("%s: %strigger failed for %s\n", __func__, trig == &data->pre_trig ? "pre-" : "", - bcm_clk->name); + bcm_clk->init_data.name); } return ret; @@ -936,10 +1171,12 @@ static int kona_peri_clk_set_rate(struct clk_hw *hw, unsigned long rate, ret = divider_write(bcm_clk->ccu, &data->gate, &data->div, &data->trig, scaled_div); if (ret == -ENXIO) { - pr_err("%s: gating failure for %s\n", __func__, bcm_clk->name); + pr_err("%s: gating failure for %s\n", __func__, + bcm_clk->init_data.name); ret = -EIO; /* Don't proliferate weird errors */ } else if (ret == -EIO) { - pr_err("%s: trigger failed for %s\n", __func__, bcm_clk->name); + pr_err("%s: trigger failed for %s\n", __func__, + bcm_clk->init_data.name); } return ret; @@ -950,7 +1187,7 @@ struct clk_ops kona_peri_clk_ops = { .disable = kona_peri_clk_disable, .is_enabled = kona_peri_clk_is_enabled, .recalc_rate = kona_peri_clk_recalc_rate, - .round_rate = kona_peri_clk_round_rate, + .determine_rate = kona_peri_clk_determine_rate, .set_parent = kona_peri_clk_set_parent, .get_parent = kona_peri_clk_get_parent, .set_rate = kona_peri_clk_set_rate, @@ -961,15 +1198,24 @@ static bool __peri_clk_init(struct kona_clk *bcm_clk) { struct ccu_data *ccu = bcm_clk->ccu; struct peri_clk_data *peri = bcm_clk->u.peri; - const char *name = bcm_clk->name; + const char *name = bcm_clk->init_data.name; struct bcm_clk_trig *trig; BUG_ON(bcm_clk->type != bcm_clk_peri); + if (!policy_init(ccu, &peri->policy)) { + pr_err("%s: error initializing policy for %s\n", + __func__, name); + return false; + } if (!gate_init(ccu, &peri->gate)) { pr_err("%s: error initializing gate for %s\n", __func__, name); return false; } + if (!hyst_init(ccu, &peri->hyst)) { + pr_err("%s: error initializing hyst for %s\n", __func__, name); + return false; + } if (!div_init(ccu, &peri->gate, &peri->div, &peri->trig)) { pr_err("%s: error initializing divider for %s\n", __func__, name); @@ 
-1014,13 +1260,13 @@ bool __init kona_ccu_init(struct ccu_data *ccu) { unsigned long flags; unsigned int which; - struct clk **clks = ccu->data.clks; + struct clk **clks = ccu->clk_data.clks; bool success = true; flags = ccu_lock(ccu); __ccu_write_enable(ccu); - for (which = 0; which < ccu->data.clk_num; which++) { + for (which = 0; which < ccu->clk_data.clk_num; which++) { struct kona_clk *bcm_clk; if (!clks[which]) diff --git a/drivers/clk/bcm/clk-kona.h b/drivers/clk/bcm/clk-kona.h index dee690951bb6..2537b3072910 100644 --- a/drivers/clk/bcm/clk-kona.h +++ b/drivers/clk/bcm/clk-kona.h @@ -43,8 +43,14 @@ #define FLAG_FLIP(obj, type, flag) ((obj)->flags ^= FLAG(type, flag)) #define FLAG_TEST(obj, type, flag) (!!((obj)->flags & FLAG(type, flag))) +/* CCU field state tests */ + +#define ccu_policy_exists(ccu_policy) ((ccu_policy)->enable.offset != 0) + /* Clock field state tests */ +#define policy_exists(policy) ((policy)->offset != 0) + #define gate_exists(gate) FLAG_TEST(gate, GATE, EXISTS) #define gate_is_enabled(gate) FLAG_TEST(gate, GATE, ENABLED) #define gate_is_hw_controllable(gate) FLAG_TEST(gate, GATE, HW) @@ -54,6 +60,8 @@ #define gate_flip_enabled(gate) FLAG_FLIP(gate, GATE, ENABLED) +#define hyst_exists(hyst) ((hyst)->offset != 0) + #define divider_exists(div) FLAG_TEST(div, DIV, EXISTS) #define divider_is_fixed(div) FLAG_TEST(div, DIV, FIXED) #define divider_has_fraction(div) (!divider_is_fixed(div) && \ @@ -62,6 +70,9 @@ #define selector_exists(sel) ((sel)->width != 0) #define trigger_exists(trig) FLAG_TEST(trig, TRIG, EXISTS) +#define policy_lvm_en_exists(enable) ((enable)->offset != 0) +#define policy_ctl_exists(control) ((control)->offset != 0) + /* Clock type, used to tell common block what it's part of */ enum bcm_clk_type { bcm_clk_none, /* undefined clock type */ @@ -71,25 +82,26 @@ enum bcm_clk_type { }; /* - * Each CCU defines a mapped area of memory containing registers - * used to manage clocks implemented by the CCU. Access to memory - * within the CCU's space is serialized by a spinlock. Before any - * (other) address can be written, a special access "password" value - * must be written to its WR_ACCESS register (located at the base - * address of the range). We keep track of the name of each CCU as - * it is set up, and maintain them in a list. + * CCU policy control for clocks. Clocks can be enabled or disabled + * based on the CCU policy in effect. One bit in each policy mask + * register (one per CCU policy) represents whether the clock is + * enabled when that policy is effect or not. The CCU policy engine + * must be stopped to update these bits, and must be restarted again + * afterward. */ -struct ccu_data { - void __iomem *base; /* base of mapped address space */ - spinlock_t lock; /* serialization lock */ - bool write_enabled; /* write access is currently enabled */ - struct list_head links; /* for ccu_list */ - struct device_node *node; - struct clk_onecell_data data; - const char *name; - u32 range; /* byte range of address space */ +struct bcm_clk_policy { + u32 offset; /* first policy mask register offset */ + u32 bit; /* bit used in all mask registers */ }; +/* Policy initialization macro */ + +#define POLICY(_offset, _bit) \ + { \ + .offset = (_offset), \ + .bit = (_bit), \ + } + /* * Gating control and status is managed by a 32-bit gate register. 
* @@ -195,6 +207,22 @@ struct bcm_clk_gate { .flags = FLAG(GATE, HW)|FLAG(GATE, EXISTS), \ } +/* Gate hysteresis for clocks */ +struct bcm_clk_hyst { + u32 offset; /* hyst register offset (normally CLKGATE) */ + u32 en_bit; /* bit used to enable hysteresis */ + u32 val_bit; /* if enabled: 0 = low delay; 1 = high delay */ +}; + +/* Hysteresis initialization macro */ + +#define HYST(_offset, _en_bit, _val_bit) \ + { \ + .offset = (_offset), \ + .en_bit = (_en_bit), \ + .val_bit = (_val_bit), \ + } + /* * Each clock can have zero, one, or two dividers which change the * output rate of the clock. Each divider can be either fixed or @@ -360,7 +388,9 @@ struct bcm_clk_trig { } struct peri_clk_data { + struct bcm_clk_policy policy; struct bcm_clk_gate gate; + struct bcm_clk_hyst hyst; struct bcm_clk_trig pre_trig; struct bcm_clk_div pre_div; struct bcm_clk_trig trig; @@ -373,8 +403,7 @@ struct peri_clk_data { struct kona_clk { struct clk_hw hw; - struct clk_init_data init_data; - const char *name; /* name of this clock */ + struct clk_init_data init_data; /* includes name of this clock */ struct ccu_data *ccu; /* ccu this clock is associated with */ enum bcm_clk_type type; union { @@ -385,14 +414,92 @@ struct kona_clk { #define to_kona_clk(_hw) \ container_of(_hw, struct kona_clk, hw) -/* Exported globals */ +/* Initialization macro for an entry in a CCU's kona_clks[] array. */ +#define KONA_CLK(_ccu_name, _clk_name, _type) \ + { \ + .init_data = { \ + .name = #_clk_name, \ + .ops = &kona_ ## _type ## _clk_ops, \ + }, \ + .ccu = &_ccu_name ## _ccu_data, \ + .type = bcm_clk_ ## _type, \ + .u.data = &_clk_name ## _data, \ + } +#define LAST_KONA_CLK { .type = bcm_clk_none } -extern struct clk_ops kona_peri_clk_ops; +/* + * CCU policy control. To enable software update of the policy + * tables the CCU policy engine must be stopped by setting the + * software update enable bit (LVM_EN). After an update the engine + * is restarted using the GO bit and either the GO_ATL or GO_AC bit. + */ +struct bcm_lvm_en { + u32 offset; /* LVM_EN register offset */ + u32 bit; /* POLICY_CONFIG_EN bit in register */ +}; + +/* Policy enable initialization macro */ +#define CCU_LVM_EN(_offset, _bit) \ + { \ + .offset = (_offset), \ + .bit = (_bit), \ + } + +struct bcm_policy_ctl { + u32 offset; /* POLICY_CTL register offset */ + u32 go_bit; + u32 atl_bit; /* GO, GO_ATL, and GO_AC bits */ + u32 ac_bit; +}; + +/* Policy control initialization macro */ +#define CCU_POLICY_CTL(_offset, _go_bit, _ac_bit, _atl_bit) \ + { \ + .offset = (_offset), \ + .go_bit = (_go_bit), \ + .ac_bit = (_ac_bit), \ + .atl_bit = (_atl_bit), \ + } + +struct ccu_policy { + struct bcm_lvm_en enable; + struct bcm_policy_ctl control; +}; + +/* + * Each CCU defines a mapped area of memory containing registers + * used to manage clocks implemented by the CCU. Access to memory + * within the CCU's space is serialized by a spinlock. Before any + * (other) address can be written, a special access "password" value + * must be written to its WR_ACCESS register (located at the base + * address of the range). We keep track of the name of each CCU as + * it is set up, and maintain them in a list. 
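As a sketch of how the new descriptors are meant to be used (this example is not part of the patch; the register offset and bit numbers are invented), a peripheral clock that participates in CCU policies and needs gate hysteresis would add the corresponding initializers to its peri_clk_data:

/* Illustrative only: offsets and bit positions below are made up. */
static struct peri_clk_data hypothetical_clk_data = {
        .policy = POLICY(0x0010, 12),   /* bit 12 in each of the four policy masks */
        .hyst   = HYST(0x0400, 8, 9),   /* bit 8 enables hysteresis, bit 9 picks low/high delay */
        /* .gate, dividers, selector and triggers as before */
};

policy_init() and hyst_init() above then program the four consecutive policy mask registers and the gate register from these descriptions when the clock is set up.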
+ */ +struct ccu_data { + void __iomem *base; /* base of mapped address space */ + spinlock_t lock; /* serialization lock */ + bool write_enabled; /* write access is currently enabled */ + struct ccu_policy policy; + struct list_head links; /* for ccu_list */ + struct device_node *node; + struct clk_onecell_data clk_data; + const char *name; + u32 range; /* byte range of address space */ + struct kona_clk kona_clks[]; /* must be last */ +}; -/* Help functions */ +/* Initialization for common fields in a Kona ccu_data structure */ +#define KONA_CCU_COMMON(_prefix, _name, _ccuname) \ + .name = #_name "_ccu", \ + .lock = __SPIN_LOCK_UNLOCKED(_name ## _ccu_data.lock), \ + .links = LIST_HEAD_INIT(_name ## _ccu_data.links), \ + .clk_data = { \ + .clk_num = _prefix ## _ ## _ccuname ## _CCU_CLOCK_COUNT, \ + } + +/* Exported globals */ -#define PERI_CLK_SETUP(clks, ccu, id, name) \ - clks[id] = kona_clk_setup(ccu, #name, bcm_clk_peri, &name ## _data) +extern struct clk_ops kona_peri_clk_ops; /* Externally visible functions */ @@ -401,10 +508,9 @@ extern u64 scaled_div_max(struct bcm_clk_div *div); extern u64 scaled_div_build(struct bcm_clk_div *div, u32 div_value, u32 billionths); -extern struct clk *kona_clk_setup(struct ccu_data *ccu, const char *name, - enum bcm_clk_type type, void *data); -extern void __init kona_dt_ccu_setup(struct device_node *node, - int (*ccu_clks_setup)(struct ccu_data *)); +extern struct clk *kona_clk_setup(struct kona_clk *bcm_clk); +extern void __init kona_dt_ccu_setup(struct ccu_data *ccu, + struct device_node *node); extern bool __init kona_ccu_init(struct ccu_data *ccu); #endif /* _CLK_KONA_H */ diff --git a/drivers/clk/berlin/Makefile b/drivers/clk/berlin/Makefile new file mode 100644 index 000000000000..2a36ab710a07 --- /dev/null +++ b/drivers/clk/berlin/Makefile @@ -0,0 +1,4 @@ +obj-y += berlin2-avpll.o berlin2-pll.o berlin2-div.o +obj-$(CONFIG_MACH_BERLIN_BG2) += bg2.o +obj-$(CONFIG_MACH_BERLIN_BG2CD) += bg2.o +obj-$(CONFIG_MACH_BERLIN_BG2Q) += bg2q.o diff --git a/drivers/clk/berlin/berlin2-avpll.c b/drivers/clk/berlin/berlin2-avpll.c new file mode 100644 index 000000000000..fd0f26c38465 --- /dev/null +++ b/drivers/clk/berlin/berlin2-avpll.c @@ -0,0 +1,393 @@ +/* + * Copyright (c) 2014 Marvell Technology Group Ltd. + * + * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> + * Alexandre Belloni <alexandre.belloni@free-electrons.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see <http://www.gnu.org/licenses/>. + */ +#include <linux/clk-provider.h> +#include <linux/io.h> +#include <linux/kernel.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/slab.h> + +#include "berlin2-avpll.h" + +/* + * Berlin2 SoCs comprise up to two PLLs called AVPLL built upon a + * VCO with 8 channels each, channel 8 is the odd-one-out and does + * not provide mul/div. + * + * Unfortunately, its registers are not named but just numbered. 
To + * get in at least some kind of structure, we split each AVPLL into + * the VCOs and each channel into separate clock drivers. + * + * Also, here and there the VCO registers are a bit different with + * respect to bit shifts. Make sure to add a comment for those. + */ +#define NUM_CHANNELS 8 + +#define AVPLL_CTRL(x) ((x) * 0x4) + +#define VCO_CTRL0 AVPLL_CTRL(0) +/* BG2/BG2CDs VCO_B has an additional shift of 4 for its VCO_CTRL0 reg */ +#define VCO_RESET BIT(0) +#define VCO_POWERUP BIT(1) +#define VCO_INTERPOL_SHIFT 2 +#define VCO_INTERPOL_MASK (0xf << VCO_INTERPOL_SHIFT) +#define VCO_REG1V45_SEL_SHIFT 6 +#define VCO_REG1V45_SEL(x) ((x) << VCO_REG1V45_SEL_SHIFT) +#define VCO_REG1V45_SEL_1V40 VCO_REG1V45_SEL(0) +#define VCO_REG1V45_SEL_1V45 VCO_REG1V45_SEL(1) +#define VCO_REG1V45_SEL_1V50 VCO_REG1V45_SEL(2) +#define VCO_REG1V45_SEL_1V55 VCO_REG1V45_SEL(3) +#define VCO_REG1V45_SEL_MASK VCO_REG1V45_SEL(3) +#define VCO_REG0V9_SEL_SHIFT 8 +#define VCO_REG0V9_SEL_MASK (0xf << VCO_REG0V9_SEL_SHIFT) +#define VCO_VTHCAL_SHIFT 12 +#define VCO_VTHCAL(x) ((x) << VCO_VTHCAL_SHIFT) +#define VCO_VTHCAL_0V90 VCO_VTHCAL(0) +#define VCO_VTHCAL_0V95 VCO_VTHCAL(1) +#define VCO_VTHCAL_1V00 VCO_VTHCAL(2) +#define VCO_VTHCAL_1V05 VCO_VTHCAL(3) +#define VCO_VTHCAL_MASK VCO_VTHCAL(3) +#define VCO_KVCOEXT_SHIFT 14 +#define VCO_KVCOEXT_MASK (0x3 << VCO_KVCOEXT_SHIFT) +#define VCO_KVCOEXT_ENABLE BIT(17) +#define VCO_V2IEXT_SHIFT 18 +#define VCO_V2IEXT_MASK (0xf << VCO_V2IEXT_SHIFT) +#define VCO_V2IEXT_ENABLE BIT(22) +#define VCO_SPEED_SHIFT 23 +#define VCO_SPEED(x) ((x) << VCO_SPEED_SHIFT) +#define VCO_SPEED_1G08_1G21 VCO_SPEED(0) +#define VCO_SPEED_1G21_1G40 VCO_SPEED(1) +#define VCO_SPEED_1G40_1G61 VCO_SPEED(2) +#define VCO_SPEED_1G61_1G86 VCO_SPEED(3) +#define VCO_SPEED_1G86_2G00 VCO_SPEED(4) +#define VCO_SPEED_2G00_2G22 VCO_SPEED(5) +#define VCO_SPEED_2G22 VCO_SPEED(6) +#define VCO_SPEED_MASK VCO_SPEED(0x7) +#define VCO_CLKDET_ENABLE BIT(26) +#define VCO_CTRL1 AVPLL_CTRL(1) +#define VCO_REFDIV_SHIFT 0 +#define VCO_REFDIV(x) ((x) << VCO_REFDIV_SHIFT) +#define VCO_REFDIV_1 VCO_REFDIV(0) +#define VCO_REFDIV_2 VCO_REFDIV(1) +#define VCO_REFDIV_4 VCO_REFDIV(2) +#define VCO_REFDIV_3 VCO_REFDIV(3) +#define VCO_REFDIV_MASK VCO_REFDIV(0x3f) +#define VCO_FBDIV_SHIFT 6 +#define VCO_FBDIV(x) ((x) << VCO_FBDIV_SHIFT) +#define VCO_FBDIV_MASK VCO_FBDIV(0xff) +#define VCO_ICP_SHIFT 14 +/* PLL Charge Pump Current = 10uA * (x + 1) */ +#define VCO_ICP(x) ((x) << VCO_ICP_SHIFT) +#define VCO_ICP_MASK VCO_ICP(0xf) +#define VCO_LOAD_CAP BIT(18) +#define VCO_CALIBRATION_START BIT(19) +#define VCO_FREQOFFSETn(x) AVPLL_CTRL(3 + (x)) +#define VCO_FREQOFFSET_MASK 0x7ffff +#define VCO_CTRL10 AVPLL_CTRL(10) +#define VCO_POWERUP_CH1 BIT(20) +#define VCO_CTRL11 AVPLL_CTRL(11) +#define VCO_CTRL12 AVPLL_CTRL(12) +#define VCO_CTRL13 AVPLL_CTRL(13) +#define VCO_CTRL14 AVPLL_CTRL(14) +#define VCO_CTRL15 AVPLL_CTRL(15) +#define VCO_SYNC1n(x) AVPLL_CTRL(15 + (x)) +#define VCO_SYNC1_MASK 0x1ffff +#define VCO_SYNC2n(x) AVPLL_CTRL(23 + (x)) +#define VCO_SYNC2_MASK 0x1ffff +#define VCO_CTRL30 AVPLL_CTRL(30) +#define VCO_DPLL_CH1_ENABLE BIT(17) + +struct berlin2_avpll_vco { + struct clk_hw hw; + void __iomem *base; + u8 flags; +}; + +#define to_avpll_vco(hw) container_of(hw, struct berlin2_avpll_vco, hw) + +static int berlin2_avpll_vco_is_enabled(struct clk_hw *hw) +{ + struct berlin2_avpll_vco *vco = to_avpll_vco(hw); + u32 reg; + + reg = readl_relaxed(vco->base + VCO_CTRL0); + if (vco->flags & BERLIN2_AVPLL_BIT_QUIRK) + reg >>= 4; + + return !!(reg & 
VCO_POWERUP); +} + +static int berlin2_avpll_vco_enable(struct clk_hw *hw) +{ + struct berlin2_avpll_vco *vco = to_avpll_vco(hw); + u32 reg; + + reg = readl_relaxed(vco->base + VCO_CTRL0); + if (vco->flags & BERLIN2_AVPLL_BIT_QUIRK) + reg |= VCO_POWERUP << 4; + else + reg |= VCO_POWERUP; + writel_relaxed(reg, vco->base + VCO_CTRL0); + + return 0; +} + +static void berlin2_avpll_vco_disable(struct clk_hw *hw) +{ + struct berlin2_avpll_vco *vco = to_avpll_vco(hw); + u32 reg; + + reg = readl_relaxed(vco->base + VCO_CTRL0); + if (vco->flags & BERLIN2_AVPLL_BIT_QUIRK) + reg &= ~(VCO_POWERUP << 4); + else + reg &= ~VCO_POWERUP; + writel_relaxed(reg, vco->base + VCO_CTRL0); +} + +static u8 vco_refdiv[] = { 1, 2, 4, 3 }; + +static unsigned long +berlin2_avpll_vco_recalc_rate(struct clk_hw *hw, unsigned long parent_rate) +{ + struct berlin2_avpll_vco *vco = to_avpll_vco(hw); + u32 reg, refdiv, fbdiv; + u64 freq = parent_rate; + + /* AVPLL VCO frequency: Fvco = (Fref / refdiv) * fbdiv */ + reg = readl_relaxed(vco->base + VCO_CTRL1); + refdiv = (reg & VCO_REFDIV_MASK) >> VCO_REFDIV_SHIFT; + refdiv = vco_refdiv[refdiv]; + fbdiv = (reg & VCO_FBDIV_MASK) >> VCO_FBDIV_SHIFT; + freq *= fbdiv; + do_div(freq, refdiv); + + return (unsigned long)freq; +} + +static const struct clk_ops berlin2_avpll_vco_ops = { + .is_enabled = berlin2_avpll_vco_is_enabled, + .enable = berlin2_avpll_vco_enable, + .disable = berlin2_avpll_vco_disable, + .recalc_rate = berlin2_avpll_vco_recalc_rate, +}; + +struct clk * __init berlin2_avpll_vco_register(void __iomem *base, + const char *name, const char *parent_name, + u8 vco_flags, unsigned long flags) +{ + struct berlin2_avpll_vco *vco; + struct clk_init_data init; + + vco = kzalloc(sizeof(*vco), GFP_KERNEL); + if (!vco) + return ERR_PTR(-ENOMEM); + + vco->base = base; + vco->flags = vco_flags; + vco->hw.init = &init; + init.name = name; + init.ops = &berlin2_avpll_vco_ops; + init.parent_names = &parent_name; + init.num_parents = 1; + init.flags = flags; + + return clk_register(NULL, &vco->hw); +} + +struct berlin2_avpll_channel { + struct clk_hw hw; + void __iomem *base; + u8 flags; + u8 index; +}; + +#define to_avpll_channel(hw) container_of(hw, struct berlin2_avpll_channel, hw) + +static int berlin2_avpll_channel_is_enabled(struct clk_hw *hw) +{ + struct berlin2_avpll_channel *ch = to_avpll_channel(hw); + u32 reg; + + if (ch->index == 7) + return 1; + + reg = readl_relaxed(ch->base + VCO_CTRL10); + reg &= VCO_POWERUP_CH1 << ch->index; + + return !!reg; +} + +static int berlin2_avpll_channel_enable(struct clk_hw *hw) +{ + struct berlin2_avpll_channel *ch = to_avpll_channel(hw); + u32 reg; + + reg = readl_relaxed(ch->base + VCO_CTRL10); + reg |= VCO_POWERUP_CH1 << ch->index; + writel_relaxed(reg, ch->base + VCO_CTRL10); + + return 0; +} + +static void berlin2_avpll_channel_disable(struct clk_hw *hw) +{ + struct berlin2_avpll_channel *ch = to_avpll_channel(hw); + u32 reg; + + reg = readl_relaxed(ch->base + VCO_CTRL10); + reg &= ~(VCO_POWERUP_CH1 << ch->index); + writel_relaxed(reg, ch->base + VCO_CTRL10); +} + +static const u8 div_hdmi[] = { 1, 2, 4, 6 }; +static const u8 div_av1[] = { 1, 2, 5, 5 }; + +static unsigned long +berlin2_avpll_channel_recalc_rate(struct clk_hw *hw, unsigned long parent_rate) +{ + struct berlin2_avpll_channel *ch = to_avpll_channel(hw); + u32 reg, div_av2, div_av3, divider = 1; + u64 freq = parent_rate; + + reg = readl_relaxed(ch->base + VCO_CTRL30); + if ((reg & (VCO_DPLL_CH1_ENABLE << ch->index)) == 0) + goto skip_div; + + /* + * Fch = (Fref * sync2) 
/ + * (sync1 * div_hdmi * div_av1 * div_av2 * div_av3) + */ + + reg = readl_relaxed(ch->base + VCO_SYNC1n(ch->index)); + /* BG2/BG2CDs SYNC1 reg on AVPLL_B channel 1 is shifted by 4 */ + if (ch->flags & BERLIN2_AVPLL_BIT_QUIRK && ch->index == 0) + reg >>= 4; + divider = reg & VCO_SYNC1_MASK; + + reg = readl_relaxed(ch->base + VCO_SYNC2n(ch->index)); + freq *= reg & VCO_SYNC2_MASK; + + /* Channel 8 has no dividers */ + if (ch->index == 7) + goto skip_div; + + /* + * HDMI divider start at VCO_CTRL11, bit 7; MSB is enable, lower 2 bit + * determine divider. + */ + reg = readl_relaxed(ch->base + VCO_CTRL11) >> 7; + reg = (reg >> (ch->index * 3)); + if (reg & BIT(2)) + divider *= div_hdmi[reg & 0x3]; + + /* + * AV1 divider start at VCO_CTRL11, bit 28; MSB is enable, lower 2 bit + * determine divider. + */ + if (ch->index == 0) { + reg = readl_relaxed(ch->base + VCO_CTRL11); + reg >>= 28; + } else { + reg = readl_relaxed(ch->base + VCO_CTRL12); + reg >>= (ch->index-1) * 3; + } + if (reg & BIT(2)) + divider *= div_av1[reg & 0x3]; + + /* + * AV2 divider start at VCO_CTRL12, bit 18; each 7 bits wide, + * zero is not a valid value. + */ + if (ch->index < 2) { + reg = readl_relaxed(ch->base + VCO_CTRL12); + reg >>= 18 + (ch->index * 7); + } else if (ch->index < 7) { + reg = readl_relaxed(ch->base + VCO_CTRL13); + reg >>= (ch->index - 2) * 7; + } else { + reg = readl_relaxed(ch->base + VCO_CTRL14); + } + div_av2 = reg & 0x7f; + if (div_av2) + divider *= div_av2; + + /* + * AV3 divider start at VCO_CTRL14, bit 7; each 4 bits wide. + * AV2/AV3 form a fractional divider, where only specfic values for AV3 + * are allowed. AV3 != 0 divides by AV2/2, AV3=0 is bypass. + */ + if (ch->index < 6) { + reg = readl_relaxed(ch->base + VCO_CTRL14); + reg >>= 7 + (ch->index * 4); + } else { + reg = readl_relaxed(ch->base + VCO_CTRL15); + } + div_av3 = reg & 0xf; + if (div_av2 && div_av3) + freq *= 2; + +skip_div: + do_div(freq, divider); + return (unsigned long)freq; +} + +static const struct clk_ops berlin2_avpll_channel_ops = { + .is_enabled = berlin2_avpll_channel_is_enabled, + .enable = berlin2_avpll_channel_enable, + .disable = berlin2_avpll_channel_disable, + .recalc_rate = berlin2_avpll_channel_recalc_rate, +}; + +/* + * Another nice quirk: + * On some production SoCs, AVPLL channels are scrambled with respect + * to the channel numbering in the registers but still referenced by + * their original channel numbers. We deal with it by having a flag + * and a translation table for the index. + */ +static const u8 quirk_index[] __initconst = { 0, 6, 5, 4, 3, 2, 1, 7 }; + +struct clk * __init berlin2_avpll_channel_register(void __iomem *base, + const char *name, u8 index, const char *parent_name, + u8 ch_flags, unsigned long flags) +{ + struct berlin2_avpll_channel *ch; + struct clk_init_data init; + + ch = kzalloc(sizeof(*ch), GFP_KERNEL); + if (!ch) + return ERR_PTR(-ENOMEM); + + ch->base = base; + if (ch_flags & BERLIN2_AVPLL_SCRAMBLE_QUIRK) + ch->index = quirk_index[index]; + else + ch->index = index; + + ch->flags = ch_flags; + ch->hw.init = &init; + init.name = name; + init.ops = &berlin2_avpll_channel_ops; + init.parent_names = &parent_name; + init.num_parents = 1; + init.flags = flags; + + return clk_register(NULL, &ch->hw); +} diff --git a/drivers/clk/berlin/berlin2-avpll.h b/drivers/clk/berlin/berlin2-avpll.h new file mode 100644 index 000000000000..a37f5068d299 --- /dev/null +++ b/drivers/clk/berlin/berlin2-avpll.h @@ -0,0 +1,36 @@ +/* + * Copyright (c) 2014 Marvell Technology Group Ltd. 
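To make the two rate formulas above concrete (all numbers invented, not taken from any SoC or datasheet): with a 25 MHz reference, a REFDIV field of 0 (divide-by-1) and FBDIV = 40, the VCO runs at 25 MHz * 40 = 1000 MHz. A channel on that VCO with SYNC1 = 4, SYNC2 = 1, the HDMI and AV1 dividers bypassed, AV2 = 5 and AV3 = 0 then comes out at 1000 MHz * 1 / (4 * 5) = 50 MHz; a non-zero AV3 would double that, since AV2/AV3 then act as a fractional divide by AV2/2.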
+ * + * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> + * Alexandre Belloni <alexandre.belloni@free-electrons.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see <http://www.gnu.org/licenses/>. + */ +#ifndef __BERLIN2_AVPLL_H +#define __BERLIN2_AVPLL_H + +struct clk; + +#define BERLIN2_AVPLL_BIT_QUIRK BIT(0) +#define BERLIN2_AVPLL_SCRAMBLE_QUIRK BIT(1) + +struct clk * __init +berlin2_avpll_vco_register(void __iomem *base, const char *name, + const char *parent_name, u8 vco_flags, unsigned long flags); + +struct clk * __init +berlin2_avpll_channel_register(void __iomem *base, const char *name, + u8 index, const char *parent_name, u8 ch_flags, + unsigned long flags); + +#endif /* __BERLIN2_AVPLL_H */ diff --git a/drivers/clk/berlin/berlin2-div.c b/drivers/clk/berlin/berlin2-div.c new file mode 100644 index 000000000000..81ff97f8aa0b --- /dev/null +++ b/drivers/clk/berlin/berlin2-div.c @@ -0,0 +1,265 @@ +/* + * Copyright (c) 2014 Marvell Technology Group Ltd. + * + * Alexandre Belloni <alexandre.belloni@free-electrons.com> + * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see <http://www.gnu.org/licenses/>. + */ +#include <linux/bitops.h> +#include <linux/clk-provider.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/slab.h> +#include <linux/spinlock.h> + +#include "berlin2-div.h" + +/* + * Clock dividers in Berlin2 SoCs comprise a complex cell to select + * input pll and divider. The virtual structure as it is used in Marvell + * BSP code can be seen as: + * + * +---+ + * pll0 --------------->| 0 | +---+ + * +---+ |(B)|--+--------------->| 0 | +---+ + * pll1.0 -->| 0 | +-->| 1 | | +--------+ |(E)|----->| 0 | +---+ + * pll1.1 -->| 1 | | +---+ +-->|(C) 1:M |-->| 1 | |(F)|-->|(G)|-> + * ... -->|(A)|--+ | +--------+ +---+ +-->| 1 | +---+ + * ... 
-->| | +-->|(D) 1:3 |----------+ +---+ + * pll1.N -->| N | +--------- + * +---+ + * + * (A) input pll clock mux controlled by <PllSelect[1:n]> + * (B) input pll bypass mux controlled by <PllSwitch> + * (C) programmable clock divider controlled by <Select[1:n]> + * (D) constant div-by-3 clock divider + * (E) programmable clock divider bypass controlled by <Switch> + * (F) constant div-by-3 clock mux controlled by <D3Switch> + * (G) clock gate controlled by <Enable> + * + * For whatever reason, above control signals come in two flavors: + * - single register dividers with all bits in one register + * - shared register dividers with bits spread over multiple registers + * (including signals for the same cell spread over consecutive registers) + * + * Also, clock gate and pll mux is not available on every div cell, so + * we have to deal with those, too. We reuse common clock composite driver + * for it. + */ + +#define PLL_SELECT_MASK 0x7 +#define DIV_SELECT_MASK 0x7 + +struct berlin2_div { + struct clk_hw hw; + void __iomem *base; + struct berlin2_div_map map; + spinlock_t *lock; +}; + +#define to_berlin2_div(hw) container_of(hw, struct berlin2_div, hw) + +static u8 clk_div[] = { 1, 2, 4, 6, 8, 12, 1, 1 }; + +static int berlin2_div_is_enabled(struct clk_hw *hw) +{ + struct berlin2_div *div = to_berlin2_div(hw); + struct berlin2_div_map *map = &div->map; + u32 reg; + + if (div->lock) + spin_lock(div->lock); + + reg = readl_relaxed(div->base + map->gate_offs); + reg >>= map->gate_shift; + + if (div->lock) + spin_unlock(div->lock); + + return (reg & 0x1); +} + +static int berlin2_div_enable(struct clk_hw *hw) +{ + struct berlin2_div *div = to_berlin2_div(hw); + struct berlin2_div_map *map = &div->map; + u32 reg; + + if (div->lock) + spin_lock(div->lock); + + reg = readl_relaxed(div->base + map->gate_offs); + reg |= BIT(map->gate_shift); + writel_relaxed(reg, div->base + map->gate_offs); + + if (div->lock) + spin_unlock(div->lock); + + return 0; +} + +static void berlin2_div_disable(struct clk_hw *hw) +{ + struct berlin2_div *div = to_berlin2_div(hw); + struct berlin2_div_map *map = &div->map; + u32 reg; + + if (div->lock) + spin_lock(div->lock); + + reg = readl_relaxed(div->base + map->gate_offs); + reg &= ~BIT(map->gate_shift); + writel_relaxed(reg, div->base + map->gate_offs); + + if (div->lock) + spin_unlock(div->lock); +} + +static int berlin2_div_set_parent(struct clk_hw *hw, u8 index) +{ + struct berlin2_div *div = to_berlin2_div(hw); + struct berlin2_div_map *map = &div->map; + u32 reg; + + if (div->lock) + spin_lock(div->lock); + + /* index == 0 is PLL_SWITCH */ + reg = readl_relaxed(div->base + map->pll_switch_offs); + if (index == 0) + reg &= ~BIT(map->pll_switch_shift); + else + reg |= BIT(map->pll_switch_shift); + writel_relaxed(reg, div->base + map->pll_switch_offs); + + /* index > 0 is PLL_SELECT */ + if (index > 0) { + reg = readl_relaxed(div->base + map->pll_select_offs); + reg &= ~(PLL_SELECT_MASK << map->pll_select_shift); + reg |= (index - 1) << map->pll_select_shift; + writel_relaxed(reg, div->base + map->pll_select_offs); + } + + if (div->lock) + spin_unlock(div->lock); + + return 0; +} + +static u8 berlin2_div_get_parent(struct clk_hw *hw) +{ + struct berlin2_div *div = to_berlin2_div(hw); + struct berlin2_div_map *map = &div->map; + u32 reg; + u8 index = 0; + + if (div->lock) + spin_lock(div->lock); + + /* PLL_SWITCH == 0 is index 0 */ + reg = readl_relaxed(div->base + map->pll_switch_offs); + reg &= BIT(map->pll_switch_shift); + if (reg) { + reg = readl_relaxed(div->base 
+ map->pll_select_offs); + reg >>= map->pll_select_shift; + reg &= PLL_SELECT_MASK; + index = 1 + reg; + } + + if (div->lock) + spin_unlock(div->lock); + + return index; +} + +static unsigned long berlin2_div_recalc_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + struct berlin2_div *div = to_berlin2_div(hw); + struct berlin2_div_map *map = &div->map; + u32 divsw, div3sw, divider = 1; + + if (div->lock) + spin_lock(div->lock); + + divsw = readl_relaxed(div->base + map->div_switch_offs) & + (1 << map->div_switch_shift); + div3sw = readl_relaxed(div->base + map->div3_switch_offs) & + (1 << map->div3_switch_shift); + + /* constant divide-by-3 (dominant) */ + if (div3sw != 0) { + divider = 3; + /* divider can be bypassed with DIV_SWITCH == 0 */ + } else if (divsw == 0) { + divider = 1; + /* clock divider determined by DIV_SELECT */ + } else { + u32 reg; + reg = readl_relaxed(div->base + map->div_select_offs); + reg >>= map->div_select_shift; + reg &= DIV_SELECT_MASK; + divider = clk_div[reg]; + } + + if (div->lock) + spin_unlock(div->lock); + + return parent_rate / divider; +} + +static const struct clk_ops berlin2_div_rate_ops = { + .recalc_rate = berlin2_div_recalc_rate, +}; + +static const struct clk_ops berlin2_div_gate_ops = { + .is_enabled = berlin2_div_is_enabled, + .enable = berlin2_div_enable, + .disable = berlin2_div_disable, +}; + +static const struct clk_ops berlin2_div_mux_ops = { + .set_parent = berlin2_div_set_parent, + .get_parent = berlin2_div_get_parent, +}; + +struct clk * __init +berlin2_div_register(const struct berlin2_div_map *map, + void __iomem *base, const char *name, u8 div_flags, + const char **parent_names, int num_parents, + unsigned long flags, spinlock_t *lock) +{ + const struct clk_ops *mux_ops = &berlin2_div_mux_ops; + const struct clk_ops *rate_ops = &berlin2_div_rate_ops; + const struct clk_ops *gate_ops = &berlin2_div_gate_ops; + struct berlin2_div *div; + + div = kzalloc(sizeof(*div), GFP_KERNEL); + if (!div) + return ERR_PTR(-ENOMEM); + + /* copy div_map to allow __initconst */ + memcpy(&div->map, map, sizeof(*map)); + div->base = base; + div->lock = lock; + + if ((div_flags & BERLIN2_DIV_HAS_GATE) == 0) + gate_ops = NULL; + if ((div_flags & BERLIN2_DIV_HAS_MUX) == 0) + mux_ops = NULL; + + return clk_register_composite(NULL, name, parent_names, num_parents, + &div->hw, mux_ops, &div->hw, rate_ops, + &div->hw, gate_ops, flags); +} diff --git a/drivers/clk/berlin/berlin2-div.h b/drivers/clk/berlin/berlin2-div.h new file mode 100644 index 000000000000..15e3384f3116 --- /dev/null +++ b/drivers/clk/berlin/berlin2-div.h @@ -0,0 +1,89 @@ +/* + * Copyright (c) 2014 Marvell Technology Group Ltd. + * + * Alexandre Belloni <alexandre.belloni@free-electrons.com> + * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see <http://www.gnu.org/licenses/>. 
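Reading the rate calculation above with some invented register values: D3Switch = 0, Switch = 1 and Select = 5 picks clk_div[5] = 12, so a 600 MHz parent yields 50 MHz; setting D3Switch forces the fixed divide-by-3 (200 MHz here) regardless of Select; clearing Switch bypasses the programmable divider entirely and passes the parent rate through.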
+ */ +#ifndef __BERLIN2_DIV_H +#define __BERLIN2_DIV_H + +struct clk; + +#define BERLIN2_DIV_HAS_GATE BIT(0) +#define BERLIN2_DIV_HAS_MUX BIT(1) + +#define BERLIN2_PLL_SELECT(_off, _sh) \ + .pll_select_offs = _off, \ + .pll_select_shift = _sh + +#define BERLIN2_PLL_SWITCH(_off, _sh) \ + .pll_switch_offs = _off, \ + .pll_switch_shift = _sh + +#define BERLIN2_DIV_SELECT(_off, _sh) \ + .div_select_offs = _off, \ + .div_select_shift = _sh + +#define BERLIN2_DIV_SWITCH(_off, _sh) \ + .div_switch_offs = _off, \ + .div_switch_shift = _sh + +#define BERLIN2_DIV_D3SWITCH(_off, _sh) \ + .div3_switch_offs = _off, \ + .div3_switch_shift = _sh + +#define BERLIN2_DIV_GATE(_off, _sh) \ + .gate_offs = _off, \ + .gate_shift = _sh + +#define BERLIN2_SINGLE_DIV(_off) \ + BERLIN2_DIV_GATE(_off, 0), \ + BERLIN2_PLL_SELECT(_off, 1), \ + BERLIN2_PLL_SWITCH(_off, 4), \ + BERLIN2_DIV_SWITCH(_off, 5), \ + BERLIN2_DIV_D3SWITCH(_off, 6), \ + BERLIN2_DIV_SELECT(_off, 7) + +struct berlin2_div_map { + u16 pll_select_offs; + u16 pll_switch_offs; + u16 div_select_offs; + u16 div_switch_offs; + u16 div3_switch_offs; + u16 gate_offs; + u8 pll_select_shift; + u8 pll_switch_shift; + u8 div_select_shift; + u8 div_switch_shift; + u8 div3_switch_shift; + u8 gate_shift; +}; + +struct berlin2_div_data { + const char *name; + const u8 *parent_ids; + int num_parents; + unsigned long flags; + struct berlin2_div_map map; + u8 div_flags; +}; + +struct clk * __init +berlin2_div_register(const struct berlin2_div_map *map, + void __iomem *base, const char *name, u8 div_flags, + const char **parent_names, int num_parents, + unsigned long flags, spinlock_t *lock); + +#endif /* __BERLIN2_DIV_H */ diff --git a/drivers/clk/berlin/berlin2-pll.c b/drivers/clk/berlin/berlin2-pll.c new file mode 100644 index 000000000000..bdc506b03824 --- /dev/null +++ b/drivers/clk/berlin/berlin2-pll.c @@ -0,0 +1,117 @@ +/* + * Copyright (c) 2014 Marvell Technology Group Ltd. + * + * Alexandre Belloni <alexandre.belloni@free-electrons.com> + * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see <http://www.gnu.org/licenses/>. 
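For the "single register" flavour, the BERLIN2_SINGLE_DIV() helper above fixes the bit layout: gate enable at bit 0, PLL select at bits 1-3, PLL switch at bit 4, divider switch at bit 5, div-by-3 switch at bit 6 and divider select at bits 7-9, all in one register. A sketch with an invented offset:

/* Illustrative only: 0x1234 is a made-up register offset. */
static const struct berlin2_div_map example_single_div_map = {
        BERLIN2_SINGLE_DIV(0x1234),
};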
+ */ +#include <linux/clk-provider.h> +#include <linux/io.h> +#include <linux/kernel.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/slab.h> +#include <asm/div64.h> + +#include "berlin2-div.h" + +struct berlin2_pll_map { + const u8 vcodiv[16]; + u8 mult; + u8 fbdiv_shift; + u8 rfdiv_shift; + u8 divsel_shift; +}; + +struct berlin2_pll { + struct clk_hw hw; + void __iomem *base; + struct berlin2_pll_map map; +}; + +#define to_berlin2_pll(hw) container_of(hw, struct berlin2_pll, hw) + +#define SPLL_CTRL0 0x00 +#define SPLL_CTRL1 0x04 +#define SPLL_CTRL2 0x08 +#define SPLL_CTRL3 0x0c +#define SPLL_CTRL4 0x10 + +#define FBDIV_MASK 0x1ff +#define RFDIV_MASK 0x1f +#define DIVSEL_MASK 0xf + +/* + * The output frequency formula for the pll is: + * clkout = fbdiv / refdiv * parent / vcodiv + */ +static unsigned long +berlin2_pll_recalc_rate(struct clk_hw *hw, unsigned long parent_rate) +{ + struct berlin2_pll *pll = to_berlin2_pll(hw); + struct berlin2_pll_map *map = &pll->map; + u32 val, fbdiv, rfdiv, vcodivsel, vcodiv; + u64 rate = parent_rate; + + val = readl_relaxed(pll->base + SPLL_CTRL0); + fbdiv = (val >> map->fbdiv_shift) & FBDIV_MASK; + rfdiv = (val >> map->rfdiv_shift) & RFDIV_MASK; + if (rfdiv == 0) { + pr_warn("%s has zero rfdiv\n", __clk_get_name(hw->clk)); + rfdiv = 1; + } + + val = readl_relaxed(pll->base + SPLL_CTRL1); + vcodivsel = (val >> map->divsel_shift) & DIVSEL_MASK; + vcodiv = map->vcodiv[vcodivsel]; + if (vcodiv == 0) { + pr_warn("%s has zero vcodiv (index %d)\n", + __clk_get_name(hw->clk), vcodivsel); + vcodiv = 1; + } + + rate *= fbdiv * map->mult; + do_div(rate, rfdiv * vcodiv); + + return (unsigned long)rate; +} + +static const struct clk_ops berlin2_pll_ops = { + .recalc_rate = berlin2_pll_recalc_rate, +}; + +struct clk * __init +berlin2_pll_register(const struct berlin2_pll_map *map, + void __iomem *base, const char *name, + const char *parent_name, unsigned long flags) +{ + struct clk_init_data init; + struct berlin2_pll *pll; + + pll = kzalloc(sizeof(*pll), GFP_KERNEL); + if (!pll) + return ERR_PTR(-ENOMEM); + + /* copy pll_map to allow __initconst */ + memcpy(&pll->map, map, sizeof(*map)); + pll->base = base; + pll->hw.init = &init; + init.name = name; + init.ops = &berlin2_pll_ops; + init.parent_names = &parent_name; + init.num_parents = 1; + init.flags = flags; + + return clk_register(NULL, &pll->hw); +} diff --git a/drivers/clk/berlin/berlin2-pll.h b/drivers/clk/berlin/berlin2-pll.h new file mode 100644 index 000000000000..8831ce27ac1e --- /dev/null +++ b/drivers/clk/berlin/berlin2-pll.h @@ -0,0 +1,37 @@ +/* + * Copyright (c) 2014 Marvell Technology Group Ltd. + * + * Alexandre Belloni <alexandre.belloni@free-electrons.com> + * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see <http://www.gnu.org/licenses/>. 
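Plugging invented numbers into the PLL formula above: with mult = 1, a 25 MHz parent, FBDIV = 160, RFDIV = 1 and a VCO divider table entry of 2 for the selected index, the output is 25 MHz * 160 * 1 / (1 * 2) = 2000 MHz. The rfdiv and vcodiv zero checks simply fall back to 1 with a warning so a bad register or table value cannot cause a divide by zero.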
+ */ +#ifndef __BERLIN2_PLL_H +#define __BERLIN2_PLL_H + +struct clk; + +struct berlin2_pll_map { + const u8 vcodiv[16]; + u8 mult; + u8 fbdiv_shift; + u8 rfdiv_shift; + u8 divsel_shift; +}; + +struct clk * __init +berlin2_pll_register(const struct berlin2_pll_map *map, + void __iomem *base, const char *name, + const char *parent_name, unsigned long flags); + +#endif /* __BERLIN2_PLL_H */ diff --git a/drivers/clk/berlin/bg2.c b/drivers/clk/berlin/bg2.c new file mode 100644 index 000000000000..515fb133495c --- /dev/null +++ b/drivers/clk/berlin/bg2.c @@ -0,0 +1,691 @@ +/* + * Copyright (c) 2014 Marvell Technology Group Ltd. + * + * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> + * Alexandre Belloni <alexandre.belloni@free-electrons.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see <http://www.gnu.org/licenses/>. + */ + +#include <linux/clk.h> +#include <linux/clk-provider.h> +#include <linux/kernel.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/slab.h> + +#include <dt-bindings/clock/berlin2.h> + +#include "berlin2-avpll.h" +#include "berlin2-div.h" +#include "berlin2-pll.h" +#include "common.h" + +#define REG_PINMUX0 0x0000 +#define REG_PINMUX1 0x0004 +#define REG_SYSPLLCTL0 0x0014 +#define REG_SYSPLLCTL4 0x0024 +#define REG_MEMPLLCTL0 0x0028 +#define REG_MEMPLLCTL4 0x0038 +#define REG_CPUPLLCTL0 0x003c +#define REG_CPUPLLCTL4 0x004c +#define REG_AVPLLCTL0 0x0050 +#define REG_AVPLLCTL31 0x00cc +#define REG_AVPLLCTL62 0x0148 +#define REG_PLLSTATUS 0x014c +#define REG_CLKENABLE 0x0150 +#define REG_CLKSELECT0 0x0154 +#define REG_CLKSELECT1 0x0158 +#define REG_CLKSELECT2 0x015c +#define REG_CLKSELECT3 0x0160 +#define REG_CLKSWITCH0 0x0164 +#define REG_CLKSWITCH1 0x0168 +#define REG_RESET_TRIGGER 0x0178 +#define REG_RESET_STATUS0 0x017c +#define REG_RESET_STATUS1 0x0180 +#define REG_SW_GENERIC0 0x0184 +#define REG_SW_GENERIC3 0x0190 +#define REG_PRODUCTID 0x01cc +#define REG_PRODUCTID_EXT 0x01d0 +#define REG_GFX3DCORE_CLKCTL 0x022c +#define REG_GFX3DSYS_CLKCTL 0x0230 +#define REG_ARC_CLKCTL 0x0234 +#define REG_VIP_CLKCTL 0x0238 +#define REG_SDIO0XIN_CLKCTL 0x023c +#define REG_SDIO1XIN_CLKCTL 0x0240 +#define REG_GFX3DEXTRA_CLKCTL 0x0244 +#define REG_GFX3D_RESET 0x0248 +#define REG_GC360_CLKCTL 0x024c +#define REG_SDIO_DLLMST_CLKCTL 0x0250 + +/* + * BG2/BG2CD SoCs have the following audio/video I/O units: + * + * audiohd: HDMI TX audio + * audio0: 7.1ch TX + * audio1: 2ch TX + * audio2: 2ch RX + * audio3: SPDIF TX + * video0: HDMI video + * video1: Secondary video + * video2: SD auxiliary video + * + * There are no external audio clocks (ACLKI0, ACLKI1) and + * only one external video clock (VCLKI0). 
+ * + * Currently missing bits and pieces: + * - audio_fast_pll is unknown + * - audiohd_pll is unknown + * - video0_pll is unknown + * - audio[023], audiohd parent pll is assumed to be audio_fast_pll + * + */ + +#define MAX_CLKS 41 +static struct clk *clks[MAX_CLKS]; +static struct clk_onecell_data clk_data; +static DEFINE_SPINLOCK(lock); +static void __iomem *gbase; + +enum { + REFCLK, VIDEO_EXT0, + SYSPLL, MEMPLL, CPUPLL, + AVPLL_A1, AVPLL_A2, AVPLL_A3, AVPLL_A4, + AVPLL_A5, AVPLL_A6, AVPLL_A7, AVPLL_A8, + AVPLL_B1, AVPLL_B2, AVPLL_B3, AVPLL_B4, + AVPLL_B5, AVPLL_B6, AVPLL_B7, AVPLL_B8, + AUDIO1_PLL, AUDIO_FAST_PLL, + VIDEO0_PLL, VIDEO0_IN, + VIDEO1_PLL, VIDEO1_IN, + VIDEO2_PLL, VIDEO2_IN, +}; + +static const char *clk_names[] = { + [REFCLK] = "refclk", + [VIDEO_EXT0] = "video_ext0", + [SYSPLL] = "syspll", + [MEMPLL] = "mempll", + [CPUPLL] = "cpupll", + [AVPLL_A1] = "avpll_a1", + [AVPLL_A2] = "avpll_a2", + [AVPLL_A3] = "avpll_a3", + [AVPLL_A4] = "avpll_a4", + [AVPLL_A5] = "avpll_a5", + [AVPLL_A6] = "avpll_a6", + [AVPLL_A7] = "avpll_a7", + [AVPLL_A8] = "avpll_a8", + [AVPLL_B1] = "avpll_b1", + [AVPLL_B2] = "avpll_b2", + [AVPLL_B3] = "avpll_b3", + [AVPLL_B4] = "avpll_b4", + [AVPLL_B5] = "avpll_b5", + [AVPLL_B6] = "avpll_b6", + [AVPLL_B7] = "avpll_b7", + [AVPLL_B8] = "avpll_b8", + [AUDIO1_PLL] = "audio1_pll", + [AUDIO_FAST_PLL] = "audio_fast_pll", + [VIDEO0_PLL] = "video0_pll", + [VIDEO0_IN] = "video0_in", + [VIDEO1_PLL] = "video1_pll", + [VIDEO1_IN] = "video1_in", + [VIDEO2_PLL] = "video2_pll", + [VIDEO2_IN] = "video2_in", +}; + +static const struct berlin2_pll_map bg2_pll_map __initconst = { + .vcodiv = {10, 15, 20, 25, 30, 40, 50, 60, 80}, + .mult = 10, + .fbdiv_shift = 6, + .rfdiv_shift = 1, + .divsel_shift = 7, +}; + +static const u8 default_parent_ids[] = { + SYSPLL, AVPLL_B4, AVPLL_A5, AVPLL_B6, AVPLL_B7, SYSPLL +}; + +static const struct berlin2_div_data bg2_divs[] __initconst = { + { + .name = "sys", + .parent_ids = (const u8 []){ + SYSPLL, AVPLL_B4, AVPLL_B5, AVPLL_B6, AVPLL_B7, SYSPLL + }, + .num_parents = 6, + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 0), + BERLIN2_PLL_SELECT(REG_CLKSELECT0, 0), + BERLIN2_DIV_SELECT(REG_CLKSELECT0, 3), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 3), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 4), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 5), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = CLK_IGNORE_UNUSED, + }, + { + .name = "cpu", + .parent_ids = (const u8 []){ + CPUPLL, MEMPLL, MEMPLL, MEMPLL, MEMPLL + }, + .num_parents = 5, + .map = { + BERLIN2_PLL_SELECT(REG_CLKSELECT0, 6), + BERLIN2_DIV_SELECT(REG_CLKSELECT0, 9), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 6), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 7), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 8), + }, + .div_flags = BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "drmfigo", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 16), + BERLIN2_PLL_SELECT(REG_CLKSELECT0, 17), + BERLIN2_DIV_SELECT(REG_CLKSELECT0, 20), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 12), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 13), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 14), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "cfg", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 1), + BERLIN2_PLL_SELECT(REG_CLKSELECT0, 23), + BERLIN2_DIV_SELECT(REG_CLKSELECT0, 26), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 15), + 
BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 16), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 17), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "gfx", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 4), + BERLIN2_PLL_SELECT(REG_CLKSELECT0, 29), + BERLIN2_DIV_SELECT(REG_CLKSELECT1, 0), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 18), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 19), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 20), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "zsp", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 5), + BERLIN2_PLL_SELECT(REG_CLKSELECT1, 3), + BERLIN2_DIV_SELECT(REG_CLKSELECT1, 6), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 21), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 22), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 23), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "perif", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 6), + BERLIN2_PLL_SELECT(REG_CLKSELECT1, 9), + BERLIN2_DIV_SELECT(REG_CLKSELECT1, 12), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 24), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 25), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 26), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = CLK_IGNORE_UNUSED, + }, + { + .name = "pcube", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 2), + BERLIN2_PLL_SELECT(REG_CLKSELECT1, 15), + BERLIN2_DIV_SELECT(REG_CLKSELECT1, 18), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 27), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 28), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 29), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "vscope", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 3), + BERLIN2_PLL_SELECT(REG_CLKSELECT1, 21), + BERLIN2_DIV_SELECT(REG_CLKSELECT1, 24), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 30), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 31), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH1, 0), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "nfc_ecc", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 18), + BERLIN2_PLL_SELECT(REG_CLKSELECT1, 27), + BERLIN2_DIV_SELECT(REG_CLKSELECT2, 0), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH1, 1), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH1, 2), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH1, 3), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "vpp", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 21), + BERLIN2_PLL_SELECT(REG_CLKSELECT2, 3), + BERLIN2_DIV_SELECT(REG_CLKSELECT2, 6), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH1, 4), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH1, 5), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH1, 6), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "app", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 20), + BERLIN2_PLL_SELECT(REG_CLKSELECT2, 9), + 
BERLIN2_DIV_SELECT(REG_CLKSELECT2, 12), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH1, 7), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH1, 8), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH1, 9), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "audio0", + .parent_ids = (const u8 []){ AUDIO_FAST_PLL }, + .num_parents = 1, + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 22), + BERLIN2_DIV_SELECT(REG_CLKSELECT2, 17), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH1, 10), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH1, 11), + }, + .div_flags = BERLIN2_DIV_HAS_GATE, + .flags = 0, + }, + { + .name = "audio2", + .parent_ids = (const u8 []){ AUDIO_FAST_PLL }, + .num_parents = 1, + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 24), + BERLIN2_DIV_SELECT(REG_CLKSELECT2, 20), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH1, 14), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH1, 15), + }, + .div_flags = BERLIN2_DIV_HAS_GATE, + .flags = 0, + }, + { + .name = "audio3", + .parent_ids = (const u8 []){ AUDIO_FAST_PLL }, + .num_parents = 1, + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 25), + BERLIN2_DIV_SELECT(REG_CLKSELECT2, 23), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH1, 16), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH1, 17), + }, + .div_flags = BERLIN2_DIV_HAS_GATE, + .flags = 0, + }, + { + .name = "audio1", + .parent_ids = (const u8 []){ AUDIO1_PLL }, + .num_parents = 1, + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 23), + BERLIN2_DIV_SELECT(REG_CLKSELECT3, 0), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH1, 12), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH1, 13), + }, + .div_flags = BERLIN2_DIV_HAS_GATE, + .flags = 0, + }, + { + .name = "gfx3d_core", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_SINGLE_DIV(REG_GFX3DCORE_CLKCTL), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "gfx3d_sys", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_SINGLE_DIV(REG_GFX3DSYS_CLKCTL), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "arc", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_SINGLE_DIV(REG_ARC_CLKCTL), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "vip", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_SINGLE_DIV(REG_VIP_CLKCTL), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "sdio0xin", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_SINGLE_DIV(REG_SDIO0XIN_CLKCTL), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "sdio1xin", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_SINGLE_DIV(REG_SDIO1XIN_CLKCTL), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "gfx3d_extra", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_SINGLE_DIV(REG_GFX3DEXTRA_CLKCTL), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "gc360", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_SINGLE_DIV(REG_GC360_CLKCTL), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = 
"sdio_dllmst", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_SINGLE_DIV(REG_SDIO_DLLMST_CLKCTL), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, +}; + +static const struct berlin2_gate_data bg2_gates[] __initconst = { + { "geth0", "perif", 7 }, + { "geth1", "perif", 8 }, + { "sata", "perif", 9 }, + { "ahbapb", "perif", 10, CLK_IGNORE_UNUSED }, + { "usb0", "perif", 11 }, + { "usb1", "perif", 12 }, + { "pbridge", "perif", 13, CLK_IGNORE_UNUSED }, + { "sdio0", "perif", 14, CLK_IGNORE_UNUSED }, + { "sdio1", "perif", 15, CLK_IGNORE_UNUSED }, + { "nfc", "perif", 17 }, + { "smemc", "perif", 19 }, + { "audiohd", "audiohd_pll", 26 }, + { "video0", "video0_in", 27 }, + { "video1", "video1_in", 28 }, + { "video2", "video2_in", 29 }, +}; + +static void __init berlin2_clock_setup(struct device_node *np) +{ + const char *parent_names[9]; + struct clk *clk; + u8 avpll_flags = 0; + int n; + + gbase = of_iomap(np, 0); + if (!gbase) + return; + + /* overwrite default clock names with DT provided ones */ + clk = of_clk_get_by_name(np, clk_names[REFCLK]); + if (!IS_ERR(clk)) { + clk_names[REFCLK] = __clk_get_name(clk); + clk_put(clk); + } + + clk = of_clk_get_by_name(np, clk_names[VIDEO_EXT0]); + if (!IS_ERR(clk)) { + clk_names[VIDEO_EXT0] = __clk_get_name(clk); + clk_put(clk); + } + + /* simple register PLLs */ + clk = berlin2_pll_register(&bg2_pll_map, gbase + REG_SYSPLLCTL0, + clk_names[SYSPLL], clk_names[REFCLK], 0); + if (IS_ERR(clk)) + goto bg2_fail; + + clk = berlin2_pll_register(&bg2_pll_map, gbase + REG_MEMPLLCTL0, + clk_names[MEMPLL], clk_names[REFCLK], 0); + if (IS_ERR(clk)) + goto bg2_fail; + + clk = berlin2_pll_register(&bg2_pll_map, gbase + REG_CPUPLLCTL0, + clk_names[CPUPLL], clk_names[REFCLK], 0); + if (IS_ERR(clk)) + goto bg2_fail; + + if (of_device_is_compatible(np, "marvell,berlin2-global-register")) + avpll_flags |= BERLIN2_AVPLL_SCRAMBLE_QUIRK; + + /* audio/video VCOs */ + clk = berlin2_avpll_vco_register(gbase + REG_AVPLLCTL0, "avpll_vcoA", + clk_names[REFCLK], avpll_flags, 0); + if (IS_ERR(clk)) + goto bg2_fail; + + for (n = 0; n < 8; n++) { + clk = berlin2_avpll_channel_register(gbase + REG_AVPLLCTL0, + clk_names[AVPLL_A1 + n], n, "avpll_vcoA", + avpll_flags, 0); + if (IS_ERR(clk)) + goto bg2_fail; + } + + clk = berlin2_avpll_vco_register(gbase + REG_AVPLLCTL31, "avpll_vcoB", + clk_names[REFCLK], BERLIN2_AVPLL_BIT_QUIRK | + avpll_flags, 0); + if (IS_ERR(clk)) + goto bg2_fail; + + for (n = 0; n < 8; n++) { + clk = berlin2_avpll_channel_register(gbase + REG_AVPLLCTL31, + clk_names[AVPLL_B1 + n], n, "avpll_vcoB", + BERLIN2_AVPLL_BIT_QUIRK | avpll_flags, 0); + if (IS_ERR(clk)) + goto bg2_fail; + } + + /* reference clock bypass switches */ + parent_names[0] = clk_names[SYSPLL]; + parent_names[1] = clk_names[REFCLK]; + clk = clk_register_mux(NULL, "syspll_byp", parent_names, 2, + 0, gbase + REG_CLKSWITCH0, 0, 1, 0, &lock); + if (IS_ERR(clk)) + goto bg2_fail; + clk_names[SYSPLL] = __clk_get_name(clk); + + parent_names[0] = clk_names[MEMPLL]; + parent_names[1] = clk_names[REFCLK]; + clk = clk_register_mux(NULL, "mempll_byp", parent_names, 2, + 0, gbase + REG_CLKSWITCH0, 1, 1, 0, &lock); + if (IS_ERR(clk)) + goto bg2_fail; + clk_names[MEMPLL] = __clk_get_name(clk); + + parent_names[0] = clk_names[CPUPLL]; + parent_names[1] = clk_names[REFCLK]; + clk = clk_register_mux(NULL, "cpupll_byp", parent_names, 2, + 0, gbase + REG_CLKSWITCH0, 2, 1, 0, &lock); + if (IS_ERR(clk)) + goto bg2_fail; + 
clk_names[CPUPLL] = __clk_get_name(clk); + + /* clock muxes */ + parent_names[0] = clk_names[AVPLL_B3]; + parent_names[1] = clk_names[AVPLL_A3]; + clk = clk_register_mux(NULL, clk_names[AUDIO1_PLL], parent_names, 2, + 0, gbase + REG_CLKSELECT2, 29, 1, 0, &lock); + if (IS_ERR(clk)) + goto bg2_fail; + + parent_names[0] = clk_names[VIDEO0_PLL]; + parent_names[1] = clk_names[VIDEO_EXT0]; + clk = clk_register_mux(NULL, clk_names[VIDEO0_IN], parent_names, 2, + 0, gbase + REG_CLKSELECT3, 4, 1, 0, &lock); + if (IS_ERR(clk)) + goto bg2_fail; + + parent_names[0] = clk_names[VIDEO1_PLL]; + parent_names[1] = clk_names[VIDEO_EXT0]; + clk = clk_register_mux(NULL, clk_names[VIDEO1_IN], parent_names, 2, + 0, gbase + REG_CLKSELECT3, 6, 1, 0, &lock); + if (IS_ERR(clk)) + goto bg2_fail; + + parent_names[0] = clk_names[AVPLL_A2]; + parent_names[1] = clk_names[AVPLL_B2]; + clk = clk_register_mux(NULL, clk_names[VIDEO1_PLL], parent_names, 2, + 0, gbase + REG_CLKSELECT3, 7, 1, 0, &lock); + if (IS_ERR(clk)) + goto bg2_fail; + + parent_names[0] = clk_names[VIDEO2_PLL]; + parent_names[1] = clk_names[VIDEO_EXT0]; + clk = clk_register_mux(NULL, clk_names[VIDEO2_IN], parent_names, 2, + 0, gbase + REG_CLKSELECT3, 9, 1, 0, &lock); + if (IS_ERR(clk)) + goto bg2_fail; + + parent_names[0] = clk_names[AVPLL_B1]; + parent_names[1] = clk_names[AVPLL_A5]; + clk = clk_register_mux(NULL, clk_names[VIDEO2_PLL], parent_names, 2, + 0, gbase + REG_CLKSELECT3, 10, 1, 0, &lock); + if (IS_ERR(clk)) + goto bg2_fail; + + /* clock divider cells */ + for (n = 0; n < ARRAY_SIZE(bg2_divs); n++) { + const struct berlin2_div_data *dd = &bg2_divs[n]; + int k; + + for (k = 0; k < dd->num_parents; k++) + parent_names[k] = clk_names[dd->parent_ids[k]]; + + clks[CLKID_SYS + n] = berlin2_div_register(&dd->map, gbase, + dd->name, dd->div_flags, parent_names, + dd->num_parents, dd->flags, &lock); + } + + /* clock gate cells */ + for (n = 0; n < ARRAY_SIZE(bg2_gates); n++) { + const struct berlin2_gate_data *gd = &bg2_gates[n]; + + clks[CLKID_GETH0 + n] = clk_register_gate(NULL, gd->name, + gd->parent_name, gd->flags, gbase + REG_CLKENABLE, + gd->bit_idx, 0, &lock); + } + + /* twdclk is derived from cpu/3 */ + clks[CLKID_TWD] = + clk_register_fixed_factor(NULL, "twd", "cpu", 0, 1, 3); + + /* check for errors on leaf clocks */ + for (n = 0; n < MAX_CLKS; n++) { + if (!IS_ERR(clks[n])) + continue; + + pr_err("%s: Unable to register leaf clock %d\n", + np->full_name, n); + goto bg2_fail; + } + + /* register clk-provider */ + clk_data.clks = clks; + clk_data.clk_num = MAX_CLKS; + of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data); + + return; + +bg2_fail: + iounmap(gbase); +} +CLK_OF_DECLARE(berlin2_clock, "marvell,berlin2-chip-ctrl", + berlin2_clock_setup); +CLK_OF_DECLARE(berlin2cd_clock, "marvell,berlin2cd-chip-ctrl", + berlin2_clock_setup); diff --git a/drivers/clk/berlin/bg2q.c b/drivers/clk/berlin/bg2q.c new file mode 100644 index 000000000000..21784e4eb3f0 --- /dev/null +++ b/drivers/clk/berlin/bg2q.c @@ -0,0 +1,389 @@ +/* + * Copyright (c) 2014 Marvell Technology Group Ltd. + * + * Alexandre Belloni <alexandre.belloni@free-electrons.com> + * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. 
+ * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see <http://www.gnu.org/licenses/>. + */ + +#include <linux/clk.h> +#include <linux/clk-provider.h> +#include <linux/kernel.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/slab.h> + +#include <dt-bindings/clock/berlin2q.h> + +#include "berlin2-div.h" +#include "berlin2-pll.h" +#include "common.h" + +#define REG_PINMUX0 0x0018 +#define REG_PINMUX5 0x002c +#define REG_SYSPLLCTL0 0x0030 +#define REG_SYSPLLCTL4 0x0040 +#define REG_CLKENABLE 0x00e8 +#define REG_CLKSELECT0 0x00ec +#define REG_CLKSELECT1 0x00f0 +#define REG_CLKSELECT2 0x00f4 +#define REG_CLKSWITCH0 0x00f8 +#define REG_CLKSWITCH1 0x00fc +#define REG_SW_GENERIC0 0x0110 +#define REG_SW_GENERIC3 0x011c +#define REG_SDIO0XIN_CLKCTL 0x0158 +#define REG_SDIO1XIN_CLKCTL 0x015c + +#define MAX_CLKS 27 +static struct clk *clks[MAX_CLKS]; +static struct clk_onecell_data clk_data; +static DEFINE_SPINLOCK(lock); +static void __iomem *gbase; +static void __iomem *cpupll_base; + +enum { + REFCLK, + SYSPLL, CPUPLL, + AVPLL_B1, AVPLL_B2, AVPLL_B3, AVPLL_B4, + AVPLL_B5, AVPLL_B6, AVPLL_B7, AVPLL_B8, +}; + +static const char *clk_names[] = { + [REFCLK] = "refclk", + [SYSPLL] = "syspll", + [CPUPLL] = "cpupll", + [AVPLL_B1] = "avpll_b1", + [AVPLL_B2] = "avpll_b2", + [AVPLL_B3] = "avpll_b3", + [AVPLL_B4] = "avpll_b4", + [AVPLL_B5] = "avpll_b5", + [AVPLL_B6] = "avpll_b6", + [AVPLL_B7] = "avpll_b7", + [AVPLL_B8] = "avpll_b8", +}; + +static const struct berlin2_pll_map bg2q_pll_map __initconst = { + .vcodiv = {1, 0, 2, 0, 3, 4, 0, 6, 8}, + .mult = 1, + .fbdiv_shift = 7, + .rfdiv_shift = 2, + .divsel_shift = 9, +}; + +static const u8 default_parent_ids[] = { + SYSPLL, AVPLL_B4, AVPLL_B5, AVPLL_B6, AVPLL_B7, SYSPLL +}; + +static const struct berlin2_div_data bg2q_divs[] __initconst = { + { + .name = "sys", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 0), + BERLIN2_PLL_SELECT(REG_CLKSELECT0, 0), + BERLIN2_DIV_SELECT(REG_CLKSELECT0, 3), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 3), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 4), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 5), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = CLK_IGNORE_UNUSED, + }, + { + .name = "drmfigo", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 17), + BERLIN2_PLL_SELECT(REG_CLKSELECT0, 6), + BERLIN2_DIV_SELECT(REG_CLKSELECT0, 9), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 6), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 7), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 8), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "cfg", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 1), + BERLIN2_PLL_SELECT(REG_CLKSELECT0, 12), + BERLIN2_DIV_SELECT(REG_CLKSELECT0, 15), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 9), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 10), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 11), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "gfx2d", + .parent_ids = default_parent_ids, + 
.num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 4), + BERLIN2_PLL_SELECT(REG_CLKSELECT0, 18), + BERLIN2_DIV_SELECT(REG_CLKSELECT0, 21), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 12), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 13), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 14), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "zsp", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 6), + BERLIN2_PLL_SELECT(REG_CLKSELECT0, 24), + BERLIN2_DIV_SELECT(REG_CLKSELECT0, 27), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 15), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 16), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 17), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "perif", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 7), + BERLIN2_PLL_SELECT(REG_CLKSELECT1, 0), + BERLIN2_DIV_SELECT(REG_CLKSELECT1, 3), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 18), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 19), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 20), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = CLK_IGNORE_UNUSED, + }, + { + .name = "pcube", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 2), + BERLIN2_PLL_SELECT(REG_CLKSELECT1, 6), + BERLIN2_DIV_SELECT(REG_CLKSELECT1, 9), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 21), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 22), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 23), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "vscope", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 3), + BERLIN2_PLL_SELECT(REG_CLKSELECT1, 12), + BERLIN2_DIV_SELECT(REG_CLKSELECT1, 15), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 24), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 25), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 26), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "nfc_ecc", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 19), + BERLIN2_PLL_SELECT(REG_CLKSELECT1, 18), + BERLIN2_DIV_SELECT(REG_CLKSELECT1, 21), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 27), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 28), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH0, 29), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "vpp", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 21), + BERLIN2_PLL_SELECT(REG_CLKSELECT1, 24), + BERLIN2_DIV_SELECT(REG_CLKSELECT1, 27), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH0, 30), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH0, 31), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH1, 0), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "app", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_DIV_GATE(REG_CLKENABLE, 20), + BERLIN2_PLL_SELECT(REG_CLKSELECT2, 0), + BERLIN2_DIV_SELECT(REG_CLKSELECT2, 3), + BERLIN2_PLL_SWITCH(REG_CLKSWITCH1, 1), + BERLIN2_DIV_SWITCH(REG_CLKSWITCH1, 2), + BERLIN2_DIV_D3SWITCH(REG_CLKSWITCH1, 3), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 
0, + }, + { + .name = "sdio0xin", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_SINGLE_DIV(REG_SDIO0XIN_CLKCTL), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, + { + .name = "sdio1xin", + .parent_ids = default_parent_ids, + .num_parents = ARRAY_SIZE(default_parent_ids), + .map = { + BERLIN2_SINGLE_DIV(REG_SDIO1XIN_CLKCTL), + }, + .div_flags = BERLIN2_DIV_HAS_GATE | BERLIN2_DIV_HAS_MUX, + .flags = 0, + }, +}; + +static const struct berlin2_gate_data bg2q_gates[] __initconst = { + { "gfx2daxi", "perif", 5 }, + { "geth0", "perif", 8 }, + { "sata", "perif", 9 }, + { "ahbapb", "perif", 10, CLK_IGNORE_UNUSED }, + { "usb0", "perif", 11 }, + { "usb1", "perif", 12 }, + { "usb2", "perif", 13 }, + { "usb3", "perif", 14 }, + { "pbridge", "perif", 15, CLK_IGNORE_UNUSED }, + { "sdio", "perif", 16, CLK_IGNORE_UNUSED }, + { "nfc", "perif", 18 }, + { "smemc", "perif", 19 }, + { "pcie", "perif", 22 }, +}; + +static void __init berlin2q_clock_setup(struct device_node *np) +{ + const char *parent_names[9]; + struct clk *clk; + int n; + + gbase = of_iomap(np, 0); + if (!gbase) { + pr_err("%s: Unable to map global base\n", np->full_name); + return; + } + + /* BG2Q CPU PLL is not part of global registers */ + cpupll_base = of_iomap(np, 1); + if (!cpupll_base) { + pr_err("%s: Unable to map cpupll base\n", np->full_name); + iounmap(gbase); + return; + } + + /* overwrite default clock names with DT provided ones */ + clk = of_clk_get_by_name(np, clk_names[REFCLK]); + if (!IS_ERR(clk)) { + clk_names[REFCLK] = __clk_get_name(clk); + clk_put(clk); + } + + /* simple register PLLs */ + clk = berlin2_pll_register(&bg2q_pll_map, gbase + REG_SYSPLLCTL0, + clk_names[SYSPLL], clk_names[REFCLK], 0); + if (IS_ERR(clk)) + goto bg2q_fail; + + clk = berlin2_pll_register(&bg2q_pll_map, cpupll_base, + clk_names[CPUPLL], clk_names[REFCLK], 0); + if (IS_ERR(clk)) + goto bg2q_fail; + + /* TODO: add BG2Q AVPLL */ + + /* + * TODO: add reference clock bypass switches: + * memPLLSWBypass, cpuPLLSWBypass, and sysPLLSWBypass + */ + + /* clock divider cells */ + for (n = 0; n < ARRAY_SIZE(bg2q_divs); n++) { + const struct berlin2_div_data *dd = &bg2q_divs[n]; + int k; + + for (k = 0; k < dd->num_parents; k++) + parent_names[k] = clk_names[dd->parent_ids[k]]; + + clks[CLKID_SYS + n] = berlin2_div_register(&dd->map, gbase, + dd->name, dd->div_flags, parent_names, + dd->num_parents, dd->flags, &lock); + } + + /* clock gate cells */ + for (n = 0; n < ARRAY_SIZE(bg2q_gates); n++) { + const struct berlin2_gate_data *gd = &bg2q_gates[n]; + + clks[CLKID_GFX2DAXI + n] = clk_register_gate(NULL, gd->name, + gd->parent_name, gd->flags, gbase + REG_CLKENABLE, + gd->bit_idx, 0, &lock); + } + + /* + * twdclk is derived from cpu/3 + * TODO: use cpupll until cpuclk is not available + */ + clks[CLKID_TWD] = + clk_register_fixed_factor(NULL, "twd", clk_names[CPUPLL], + 0, 1, 3); + + /* check for errors on leaf clocks */ + for (n = 0; n < MAX_CLKS; n++) { + if (!IS_ERR(clks[n])) + continue; + + pr_err("%s: Unable to register leaf clock %d\n", + np->full_name, n); + goto bg2q_fail; + } + + /* register clk-provider */ + clk_data.clks = clks; + clk_data.clk_num = MAX_CLKS; + of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data); + + return; + +bg2q_fail: + iounmap(cpupll_base); + iounmap(gbase); +} +CLK_OF_DECLARE(berlin2q_clock, "marvell,berlin2q-chip-ctrl", + berlin2q_clock_setup); diff --git a/drivers/clk/berlin/common.h b/drivers/clk/berlin/common.h new file 
mode 100644 index 000000000000..bc68a14c4550 --- /dev/null +++ b/drivers/clk/berlin/common.h @@ -0,0 +1,29 @@ +/* + * Copyright (c) 2014 Marvell Technology Group Ltd. + * + * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> + * Alexandre Belloni <alexandre.belloni@free-electrons.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see <http://www.gnu.org/licenses/>. + */ +#ifndef __BERLIN2_COMMON_H +#define __BERLIN2_COMMON_H + +struct berlin2_gate_data { + const char *name; + const char *parent_name; + u8 bit_idx; + unsigned long flags; +}; + +#endif /* BERLIN2_COMMON_H */ diff --git a/drivers/clk/clk-axm5516.c b/drivers/clk/clk-axm5516.c new file mode 100644 index 000000000000..d2f1e119b450 --- /dev/null +++ b/drivers/clk/clk-axm5516.c @@ -0,0 +1,615 @@ +/* + * drivers/clk/clk-axm5516.c + * + * Provides clock implementations for three different types of clock devices on + * the Axxia device: PLL clock, a clock divider and a clock mux. + * + * Copyright (C) 2014 LSI Corporation + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + */ +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/slab.h> +#include <linux/platform_device.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/clk-provider.h> +#include <linux/regmap.h> +#include <dt-bindings/clock/lsi,axm5516-clks.h> + + +/** + * struct axxia_clk - Common struct to all Axxia clocks. + * @hw: clk_hw for the common clk framework + * @regmap: Regmap for the clock control registers + */ +struct axxia_clk { + struct clk_hw hw; + struct regmap *regmap; +}; +#define to_axxia_clk(_hw) container_of(_hw, struct axxia_clk, hw) + +/** + * struct axxia_pllclk - Axxia PLL generated clock. + * @aclk: Common struct + * @reg: Offset into regmap for PLL control register + */ +struct axxia_pllclk { + struct axxia_clk aclk; + u32 reg; +}; +#define to_axxia_pllclk(_aclk) container_of(_aclk, struct axxia_pllclk, aclk) + +/** + * axxia_pllclk_recalc - Calculate the PLL generated clock rate given the + * parent clock rate. 
+ */
+static unsigned long
+axxia_pllclk_recalc(struct clk_hw *hw, unsigned long parent_rate)
+{
+ struct axxia_clk *aclk = to_axxia_clk(hw);
+ struct axxia_pllclk *pll = to_axxia_pllclk(aclk);
+ unsigned long rate, fbdiv, refdiv, postdiv;
+ u32 control;
+
+ regmap_read(aclk->regmap, pll->reg, &control);
+ postdiv = ((control >> 0) & 0xf) + 1;
+ fbdiv = ((control >> 4) & 0xfff) + 3;
+ refdiv = ((control >> 16) & 0x1f) + 1;
+ rate = (parent_rate / (refdiv * postdiv)) * fbdiv;
+
+ return rate;
+}
+
+static const struct clk_ops axxia_pllclk_ops = {
+ .recalc_rate = axxia_pllclk_recalc,
+};
+
+/**
+ * struct axxia_divclk - Axxia clock divider
+ * @aclk: Common struct
+ * @reg: Offset into regmap for PLL control register
+ * @shift: Bit position for divider value
+ * @width: Number of bits in divider value
+ */
+struct axxia_divclk {
+ struct axxia_clk aclk;
+ u32 reg;
+ u32 shift;
+ u32 width;
+};
+#define to_axxia_divclk(_aclk) container_of(_aclk, struct axxia_divclk, aclk)
+
+/**
+ * axxia_divclk_recalc_rate - Calculate clock divider output rate
+ */
+static unsigned long
+axxia_divclk_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
+{
+ struct axxia_clk *aclk = to_axxia_clk(hw);
+ struct axxia_divclk *divclk = to_axxia_divclk(aclk);
+ u32 ctrl, div;
+
+ regmap_read(aclk->regmap, divclk->reg, &ctrl);
+ div = 1 + ((ctrl >> divclk->shift) & ((1 << divclk->width)-1));
+
+ return parent_rate / div;
+}
+
+static const struct clk_ops axxia_divclk_ops = {
+ .recalc_rate = axxia_divclk_recalc_rate,
+};
+
+/**
+ * struct axxia_clkmux - Axxia clock mux
+ * @aclk: Common struct
+ * @reg: Offset into regmap for PLL control register
+ * @shift: Bit position for selection value
+ * @width: Number of bits in selection value
+ */
+struct axxia_clkmux {
+ struct axxia_clk aclk;
+ u32 reg;
+ u32 shift;
+ u32 width;
+};
+#define to_axxia_clkmux(_aclk) container_of(_aclk, struct axxia_clkmux, aclk)
+
+/**
+ * axxia_clkmux_get_parent - Return the index of selected parent clock
+ */
+static u8 axxia_clkmux_get_parent(struct clk_hw *hw)
+{
+ struct axxia_clk *aclk = to_axxia_clk(hw);
+ struct axxia_clkmux *mux = to_axxia_clkmux(aclk);
+ u32 ctrl, parent;
+
+ regmap_read(aclk->regmap, mux->reg, &ctrl);
+ parent = (ctrl >> mux->shift) & ((1 << mux->width) - 1);
+
+ return (u8) parent;
+}
+
+static const struct clk_ops axxia_clkmux_ops = {
+ .get_parent = axxia_clkmux_get_parent,
+};
+
+
+/*
+ * PLLs
+ */
+
+static struct axxia_pllclk clk_fab_pll = {
+ .aclk.hw.init = &(struct clk_init_data){
+ .name = "clk_fab_pll",
+ .parent_names = (const char *[]){
+ "clk_ref0"
+ },
+ .num_parents = 1,
+ .ops = &axxia_pllclk_ops,
+ },
+ .reg = 0x01800,
+};
+
+static struct axxia_pllclk clk_cpu_pll = {
+ .aclk.hw.init = &(struct clk_init_data){
+ .name = "clk_cpu_pll",
+ .parent_names = (const char *[]){
+ "clk_ref0"
+ },
+ .num_parents = 1,
+ .ops = &axxia_pllclk_ops,
+ },
+ .reg = 0x02000,
+};
+
+static struct axxia_pllclk clk_sys_pll = {
+ .aclk.hw.init = &(struct clk_init_data){
+ .name = "clk_sys_pll",
+ .parent_names = (const char *[]){
+ "clk_ref0"
+ },
+ .num_parents = 1,
+ .ops = &axxia_pllclk_ops,
+ },
+ .reg = 0x02800,
+};
+
+static struct axxia_pllclk clk_sm0_pll = {
+ .aclk.hw.init = &(struct clk_init_data){
+ .name = "clk_sm0_pll",
+ .parent_names = (const char *[]){
+ "clk_ref2"
+ },
+ .num_parents = 1,
+ .ops = &axxia_pllclk_ops,
+ },
+ .reg = 0x03000,
+};
+
+static struct axxia_pllclk clk_sm1_pll = {
+ .aclk.hw.init = &(struct clk_init_data){
+ .name = "clk_sm1_pll",
+ .parent_names = (const char *[]){
+
"clk_ref1" + }, + .num_parents = 1, + .ops = &axxia_pllclk_ops, + }, + .reg = 0x03800, +}; + +/* + * Clock dividers + */ + +static struct axxia_divclk clk_cpu0_div = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_cpu0_div", + .parent_names = (const char *[]){ + "clk_cpu_pll" + }, + .num_parents = 1, + .ops = &axxia_divclk_ops, + }, + .reg = 0x10008, + .shift = 0, + .width = 4, +}; + +static struct axxia_divclk clk_cpu1_div = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_cpu1_div", + .parent_names = (const char *[]){ + "clk_cpu_pll" + }, + .num_parents = 1, + .ops = &axxia_divclk_ops, + }, + .reg = 0x10008, + .shift = 4, + .width = 4, +}; + +static struct axxia_divclk clk_cpu2_div = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_cpu2_div", + .parent_names = (const char *[]){ + "clk_cpu_pll" + }, + .num_parents = 1, + .ops = &axxia_divclk_ops, + }, + .reg = 0x10008, + .shift = 8, + .width = 4, +}; + +static struct axxia_divclk clk_cpu3_div = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_cpu3_div", + .parent_names = (const char *[]){ + "clk_cpu_pll" + }, + .num_parents = 1, + .ops = &axxia_divclk_ops, + }, + .reg = 0x10008, + .shift = 12, + .width = 4, +}; + +static struct axxia_divclk clk_nrcp_div = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_nrcp_div", + .parent_names = (const char *[]){ + "clk_sys_pll" + }, + .num_parents = 1, + .ops = &axxia_divclk_ops, + }, + .reg = 0x1000c, + .shift = 0, + .width = 4, +}; + +static struct axxia_divclk clk_sys_div = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_sys_div", + .parent_names = (const char *[]){ + "clk_sys_pll" + }, + .num_parents = 1, + .ops = &axxia_divclk_ops, + }, + .reg = 0x1000c, + .shift = 4, + .width = 4, +}; + +static struct axxia_divclk clk_fab_div = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_fab_div", + .parent_names = (const char *[]){ + "clk_fab_pll" + }, + .num_parents = 1, + .ops = &axxia_divclk_ops, + }, + .reg = 0x1000c, + .shift = 8, + .width = 4, +}; + +static struct axxia_divclk clk_per_div = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_per_div", + .parent_names = (const char *[]){ + "clk_sm1_pll" + }, + .num_parents = 1, + .flags = CLK_IS_BASIC, + .ops = &axxia_divclk_ops, + }, + .reg = 0x1000c, + .shift = 12, + .width = 4, +}; + +static struct axxia_divclk clk_mmc_div = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_mmc_div", + .parent_names = (const char *[]){ + "clk_sm1_pll" + }, + .num_parents = 1, + .flags = CLK_IS_BASIC, + .ops = &axxia_divclk_ops, + }, + .reg = 0x1000c, + .shift = 16, + .width = 4, +}; + +/* + * Clock MUXes + */ + +static struct axxia_clkmux clk_cpu0_mux = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_cpu0", + .parent_names = (const char *[]){ + "clk_ref0", + "clk_cpu_pll", + "clk_cpu0_div", + "clk_cpu0_div" + }, + .num_parents = 4, + .ops = &axxia_clkmux_ops, + }, + .reg = 0x10000, + .shift = 0, + .width = 2, +}; + +static struct axxia_clkmux clk_cpu1_mux = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_cpu1", + .parent_names = (const char *[]){ + "clk_ref0", + "clk_cpu_pll", + "clk_cpu1_div", + "clk_cpu1_div" + }, + .num_parents = 4, + .ops = &axxia_clkmux_ops, + }, + .reg = 0x10000, + .shift = 2, + .width = 2, +}; + +static struct axxia_clkmux clk_cpu2_mux = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_cpu2", + .parent_names = (const char *[]){ + "clk_ref0", + "clk_cpu_pll", + "clk_cpu2_div", + "clk_cpu2_div" + }, + .num_parents 
= 4, + .ops = &axxia_clkmux_ops, + }, + .reg = 0x10000, + .shift = 4, + .width = 2, +}; + +static struct axxia_clkmux clk_cpu3_mux = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_cpu3", + .parent_names = (const char *[]){ + "clk_ref0", + "clk_cpu_pll", + "clk_cpu3_div", + "clk_cpu3_div" + }, + .num_parents = 4, + .ops = &axxia_clkmux_ops, + }, + .reg = 0x10000, + .shift = 6, + .width = 2, +}; + +static struct axxia_clkmux clk_nrcp_mux = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_nrcp", + .parent_names = (const char *[]){ + "clk_ref0", + "clk_sys_pll", + "clk_nrcp_div", + "clk_nrcp_div" + }, + .num_parents = 4, + .ops = &axxia_clkmux_ops, + }, + .reg = 0x10004, + .shift = 0, + .width = 2, +}; + +static struct axxia_clkmux clk_sys_mux = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_sys", + .parent_names = (const char *[]){ + "clk_ref0", + "clk_sys_pll", + "clk_sys_div", + "clk_sys_div" + }, + .num_parents = 4, + .ops = &axxia_clkmux_ops, + }, + .reg = 0x10004, + .shift = 2, + .width = 2, +}; + +static struct axxia_clkmux clk_fab_mux = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_fab", + .parent_names = (const char *[]){ + "clk_ref0", + "clk_fab_pll", + "clk_fab_div", + "clk_fab_div" + }, + .num_parents = 4, + .ops = &axxia_clkmux_ops, + }, + .reg = 0x10004, + .shift = 4, + .width = 2, +}; + +static struct axxia_clkmux clk_per_mux = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_per", + .parent_names = (const char *[]){ + "clk_ref1", + "clk_per_div" + }, + .num_parents = 2, + .ops = &axxia_clkmux_ops, + }, + .reg = 0x10004, + .shift = 6, + .width = 1, +}; + +static struct axxia_clkmux clk_mmc_mux = { + .aclk.hw.init = &(struct clk_init_data){ + .name = "clk_mmc", + .parent_names = (const char *[]){ + "clk_ref1", + "clk_mmc_div" + }, + .num_parents = 2, + .ops = &axxia_clkmux_ops, + }, + .reg = 0x10004, + .shift = 9, + .width = 1, +}; + +/* Table of all supported clocks indexed by the clock identifiers from the + * device tree binding + */ +static struct axxia_clk *axmclk_clocks[] = { + [AXXIA_CLK_FAB_PLL] = &clk_fab_pll.aclk, + [AXXIA_CLK_CPU_PLL] = &clk_cpu_pll.aclk, + [AXXIA_CLK_SYS_PLL] = &clk_sys_pll.aclk, + [AXXIA_CLK_SM0_PLL] = &clk_sm0_pll.aclk, + [AXXIA_CLK_SM1_PLL] = &clk_sm1_pll.aclk, + [AXXIA_CLK_FAB_DIV] = &clk_fab_div.aclk, + [AXXIA_CLK_SYS_DIV] = &clk_sys_div.aclk, + [AXXIA_CLK_NRCP_DIV] = &clk_nrcp_div.aclk, + [AXXIA_CLK_CPU0_DIV] = &clk_cpu0_div.aclk, + [AXXIA_CLK_CPU1_DIV] = &clk_cpu1_div.aclk, + [AXXIA_CLK_CPU2_DIV] = &clk_cpu2_div.aclk, + [AXXIA_CLK_CPU3_DIV] = &clk_cpu3_div.aclk, + [AXXIA_CLK_PER_DIV] = &clk_per_div.aclk, + [AXXIA_CLK_MMC_DIV] = &clk_mmc_div.aclk, + [AXXIA_CLK_FAB] = &clk_fab_mux.aclk, + [AXXIA_CLK_SYS] = &clk_sys_mux.aclk, + [AXXIA_CLK_NRCP] = &clk_nrcp_mux.aclk, + [AXXIA_CLK_CPU0] = &clk_cpu0_mux.aclk, + [AXXIA_CLK_CPU1] = &clk_cpu1_mux.aclk, + [AXXIA_CLK_CPU2] = &clk_cpu2_mux.aclk, + [AXXIA_CLK_CPU3] = &clk_cpu3_mux.aclk, + [AXXIA_CLK_PER] = &clk_per_mux.aclk, + [AXXIA_CLK_MMC] = &clk_mmc_mux.aclk, +}; + +static const struct regmap_config axmclk_regmap_config = { + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, + .max_register = 0x1fffc, + .fast_io = true, +}; + +static const struct of_device_id axmclk_match_table[] = { + { .compatible = "lsi,axm5516-clks" }, + { } +}; +MODULE_DEVICE_TABLE(of, axmclk_match_table); + +struct axmclk_priv { + struct clk_onecell_data onecell; + struct clk *clks[]; +}; + +static int axmclk_probe(struct platform_device *pdev) +{ + void __iomem 
*base;
+ struct resource *res;
+ int i, ret;
+ struct device *dev = &pdev->dev;
+ struct clk *clk;
+ struct regmap *regmap;
+ size_t num_clks;
+ struct axmclk_priv *priv;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+ regmap = devm_regmap_init_mmio(dev, base, &axmclk_regmap_config);
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
+
+ num_clks = ARRAY_SIZE(axmclk_clocks);
+ pr_info("axmclk: supporting %zu clocks\n", num_clks);
+ priv = devm_kzalloc(dev, sizeof(*priv) + sizeof(*priv->clks) * num_clks,
+ GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ priv->onecell.clks = priv->clks;
+ priv->onecell.clk_num = num_clks;
+
+ /* Update each entry with the allocated regmap and register the clock
+ * with the common clock framework
+ */
+ for (i = 0; i < num_clks; i++) {
+ axmclk_clocks[i]->regmap = regmap;
+ clk = devm_clk_register(dev, &axmclk_clocks[i]->hw);
+ if (IS_ERR(clk))
+ return PTR_ERR(clk);
+ priv->clks[i] = clk;
+ }
+
+ ret = of_clk_add_provider(dev->of_node,
+ of_clk_src_onecell_get, &priv->onecell);
+
+ return ret;
+}
+
+static int axmclk_remove(struct platform_device *pdev)
+{
+ of_clk_del_provider(pdev->dev.of_node);
+ return 0;
+}
+
+static struct platform_driver axmclk_driver = {
+ .probe = axmclk_probe,
+ .remove = axmclk_remove,
+ .driver = {
+ .name = "clk-axm5516",
+ .owner = THIS_MODULE,
+ .of_match_table = axmclk_match_table,
+ },
+};
+
+static int __init axmclk_init(void)
+{
+ return platform_driver_register(&axmclk_driver);
+}
+core_initcall(axmclk_init);
+
+static void __exit axmclk_exit(void)
+{
+ platform_driver_unregister(&axmclk_driver);
+}
+module_exit(axmclk_exit);
+
+MODULE_DESCRIPTION("AXM5516 clock driver");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("platform:clk-axm5516");
diff --git a/drivers/clk/clk-divider.c b/drivers/clk/clk-divider.c index 3fbee4540228..18a9de29df0e 100644 --- a/drivers/clk/clk-divider.c +++ b/drivers/clk/clk-divider.c @@ -43,6 +43,17 @@ static unsigned int _get_table_maxdiv(const struct clk_div_table *table)
 return maxdiv;
 }

+static unsigned int _get_table_mindiv(const struct clk_div_table *table)
+{
+ unsigned int mindiv = UINT_MAX;
+ const struct clk_div_table *clkt;
+
+ for (clkt = table; clkt->div; clkt++)
+ if (clkt->div < mindiv)
+ mindiv = clkt->div;
+ return mindiv;
+}
+
 static unsigned int _get_maxdiv(struct clk_divider *divider)
 {
 if (divider->flags & CLK_DIVIDER_ONE_BASED)
@@ -162,6 +173,24 @@ static int _round_up_table(const struct clk_div_table *table, int div)
 return up;
 }

+static int _round_down_table(const struct clk_div_table *table, int div)
+{
+ const struct clk_div_table *clkt;
+ int down = _get_table_mindiv(table);
+
+ for (clkt = table; clkt->div; clkt++) {
+ if (clkt->div == div)
+ return clkt->div;
+ else if (clkt->div > div)
+ continue;
+
+ if ((div - clkt->div) < (div - down))
+ down = clkt->div;
+ }
+
+ return down;
+}
+
 static int _div_round_up(struct clk_divider *divider,
 unsigned long parent_rate, unsigned long rate)
 {
@@ -175,6 +204,54 @@ static int _div_round_up(struct clk_divider *divider,
 return div;
 }

+static int _div_round_closest(struct clk_divider *divider,
+ unsigned long parent_rate, unsigned long rate)
+{
+ int up, down, div;
+
+ up = down = div = DIV_ROUND_CLOSEST(parent_rate, rate);
+
+ if (divider->flags & CLK_DIVIDER_POWER_OF_TWO) {
+ up = __roundup_pow_of_two(div);
+ down = __rounddown_pow_of_two(div);
+ } else if (divider->table) {
+ up = _round_up_table(divider->table, div);
+ down =
_round_down_table(divider->table, div); + } + + return (up - div) <= (div - down) ? up : down; +} + +static int _div_round(struct clk_divider *divider, unsigned long parent_rate, + unsigned long rate) +{ + if (divider->flags & CLK_DIVIDER_ROUND_CLOSEST) + return _div_round_closest(divider, parent_rate, rate); + + return _div_round_up(divider, parent_rate, rate); +} + +static bool _is_best_div(struct clk_divider *divider, + unsigned long rate, unsigned long now, unsigned long best) +{ + if (divider->flags & CLK_DIVIDER_ROUND_CLOSEST) + return abs(rate - now) < abs(rate - best); + + return now <= rate && now > best; +} + +static int _next_div(struct clk_divider *divider, int div) +{ + div++; + + if (divider->flags & CLK_DIVIDER_POWER_OF_TWO) + return __roundup_pow_of_two(div); + if (divider->table) + return _round_up_table(divider->table, div); + + return div; +} + static int clk_divider_bestdiv(struct clk_hw *hw, unsigned long rate, unsigned long *best_parent_rate) { @@ -190,7 +267,7 @@ static int clk_divider_bestdiv(struct clk_hw *hw, unsigned long rate, if (!(__clk_get_flags(hw->clk) & CLK_SET_RATE_PARENT)) { parent_rate = *best_parent_rate; - bestdiv = _div_round_up(divider, parent_rate, rate); + bestdiv = _div_round(divider, parent_rate, rate); bestdiv = bestdiv == 0 ? 1 : bestdiv; bestdiv = bestdiv > maxdiv ? maxdiv : bestdiv; return bestdiv; @@ -202,7 +279,7 @@ static int clk_divider_bestdiv(struct clk_hw *hw, unsigned long rate, */ maxdiv = min(ULONG_MAX / rate, maxdiv); - for (i = 1; i <= maxdiv; i++) { + for (i = 1; i <= maxdiv; i = _next_div(divider, i)) { if (!_is_valid_div(divider, i)) continue; if (rate * i == parent_rate_saved) { @@ -217,7 +294,7 @@ static int clk_divider_bestdiv(struct clk_hw *hw, unsigned long rate, parent_rate = __clk_round_rate(__clk_get_parent(hw->clk), MULT_ROUND_UP(rate, i)); now = DIV_ROUND_UP(parent_rate, i); - if (now <= rate && now > best) { + if (_is_best_div(divider, rate, now, best)) { bestdiv = i; best = now; *best_parent_rate = parent_rate; @@ -284,6 +361,11 @@ const struct clk_ops clk_divider_ops = { }; EXPORT_SYMBOL_GPL(clk_divider_ops); +const struct clk_ops clk_divider_ro_ops = { + .recalc_rate = clk_divider_recalc_rate, +}; +EXPORT_SYMBOL_GPL(clk_divider_ro_ops); + static struct clk *_register_divider(struct device *dev, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 shift, u8 width, @@ -309,7 +391,10 @@ static struct clk *_register_divider(struct device *dev, const char *name, } init.name = name; - init.ops = &clk_divider_ops; + if (clk_divider_flags & CLK_DIVIDER_READ_ONLY) + init.ops = &clk_divider_ro_ops; + else + init.ops = &clk_divider_ops; init.flags = flags | CLK_IS_BASIC; init.parent_names = (parent_name ? &parent_name: NULL); init.num_parents = (parent_name ? 1 : 0); diff --git a/drivers/clk/clk-s2mps11.c b/drivers/clk/clk-s2mps11.c index f2f62a1bf61a..9b7b5859a420 100644 --- a/drivers/clk/clk-s2mps11.c +++ b/drivers/clk/clk-s2mps11.c @@ -1,7 +1,7 @@ /* * clk-s2mps11.c - Clock driver for S2MPS11. * - * Copyright (C) 2013 Samsung Electornics + * Copyright (C) 2013,2014 Samsung Electornics * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the @@ -13,10 +13,6 @@ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. 
* - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * */ #include <linux/module.h> @@ -27,6 +23,7 @@ #include <linux/clk-provider.h> #include <linux/platform_device.h> #include <linux/mfd/samsung/s2mps11.h> +#include <linux/mfd/samsung/s2mps14.h> #include <linux/mfd/samsung/s5m8767.h> #include <linux/mfd/samsung/core.h> @@ -44,6 +41,7 @@ enum { struct s2mps11_clk { struct sec_pmic_dev *iodev; + struct device_node *clk_np; struct clk_hw hw; struct clk *clk; struct clk_lookup *lookup; @@ -125,7 +123,21 @@ static struct clk_init_data s2mps11_clks_init[S2MPS11_CLKS_NUM] = { }, }; -static struct device_node *s2mps11_clk_parse_dt(struct platform_device *pdev) +static struct clk_init_data s2mps14_clks_init[S2MPS11_CLKS_NUM] = { + [S2MPS11_CLK_AP] = { + .name = "s2mps14_ap", + .ops = &s2mps11_clk_ops, + .flags = CLK_IS_ROOT, + }, + [S2MPS11_CLK_BT] = { + .name = "s2mps14_bt", + .ops = &s2mps11_clk_ops, + .flags = CLK_IS_ROOT, + }, +}; + +static struct device_node *s2mps11_clk_parse_dt(struct platform_device *pdev, + struct clk_init_data *clks_init) { struct sec_pmic_dev *iodev = dev_get_drvdata(pdev->dev.parent); struct device_node *clk_np; @@ -140,14 +152,12 @@ static struct device_node *s2mps11_clk_parse_dt(struct platform_device *pdev) return ERR_PTR(-EINVAL); } - clk_table = devm_kzalloc(&pdev->dev, sizeof(struct clk *) * - S2MPS11_CLKS_NUM, GFP_KERNEL); - if (!clk_table) - return ERR_PTR(-ENOMEM); - - for (i = 0; i < S2MPS11_CLKS_NUM; i++) + for (i = 0; i < S2MPS11_CLKS_NUM; i++) { + if (!clks_init[i].name) + continue; /* Skip clocks not present in some devices */ of_property_read_string_index(clk_np, "clock-output-names", i, - &s2mps11_clks_init[i].name); + &clks_init[i].name); + } return clk_np; } @@ -156,8 +166,8 @@ static int s2mps11_clk_probe(struct platform_device *pdev) { struct sec_pmic_dev *iodev = dev_get_drvdata(pdev->dev.parent); struct s2mps11_clk *s2mps11_clks, *s2mps11_clk; - struct device_node *clk_np = NULL; unsigned int s2mps11_reg; + struct clk_init_data *clks_init; int i, ret = 0; u32 val; @@ -168,25 +178,39 @@ static int s2mps11_clk_probe(struct platform_device *pdev) s2mps11_clk = s2mps11_clks; - clk_np = s2mps11_clk_parse_dt(pdev); - if (IS_ERR(clk_np)) - return PTR_ERR(clk_np); + clk_table = devm_kzalloc(&pdev->dev, sizeof(struct clk *) * + S2MPS11_CLKS_NUM, GFP_KERNEL); + if (!clk_table) + return -ENOMEM; switch(platform_get_device_id(pdev)->driver_data) { case S2MPS11X: s2mps11_reg = S2MPS11_REG_RTC_CTRL; + clks_init = s2mps11_clks_init; + break; + case S2MPS14X: + s2mps11_reg = S2MPS14_REG_RTCCTRL; + clks_init = s2mps14_clks_init; break; case S5M8767X: s2mps11_reg = S5M8767_REG_CTRL1; + clks_init = s2mps11_clks_init; break; default: dev_err(&pdev->dev, "Invalid device type\n"); return -EINVAL; }; + /* Store clocks of_node in first element of s2mps11_clks array */ + s2mps11_clks->clk_np = s2mps11_clk_parse_dt(pdev, clks_init); + if (IS_ERR(s2mps11_clks->clk_np)) + return PTR_ERR(s2mps11_clks->clk_np); + for (i = 0; i < S2MPS11_CLKS_NUM; i++, s2mps11_clk++) { + if (!clks_init[i].name) + continue; /* Skip clocks not present in some devices */ s2mps11_clk->iodev = iodev; - s2mps11_clk->hw.init = &s2mps11_clks_init[i]; + s2mps11_clk->hw.init = &clks_init[i]; s2mps11_clk->mask = 1 << i; s2mps11_clk->reg = s2mps11_reg; @@ -219,15 +243,18 @@ static int s2mps11_clk_probe(struct platform_device *pdev) 
clkdev_add(s2mps11_clk->lookup); } - if (clk_table) { - for (i = 0; i < S2MPS11_CLKS_NUM; i++) - clk_table[i] = s2mps11_clks[i].clk; - - clk_data.clks = clk_table; - clk_data.clk_num = S2MPS11_CLKS_NUM; - of_clk_add_provider(clk_np, of_clk_src_onecell_get, &clk_data); + for (i = 0; i < S2MPS11_CLKS_NUM; i++) { + /* Skip clocks not present on S2MPS14 */ + if (!clks_init[i].name) + continue; + clk_table[i] = s2mps11_clks[i].clk; } + clk_data.clks = clk_table; + clk_data.clk_num = S2MPS11_CLKS_NUM; + of_clk_add_provider(s2mps11_clks->clk_np, of_clk_src_onecell_get, + &clk_data); + platform_set_drvdata(pdev, s2mps11_clks); return ret; @@ -250,14 +277,23 @@ static int s2mps11_clk_remove(struct platform_device *pdev) struct s2mps11_clk *s2mps11_clks = platform_get_drvdata(pdev); int i; - for (i = 0; i < S2MPS11_CLKS_NUM; i++) + of_clk_del_provider(s2mps11_clks[0].clk_np); + /* Drop the reference obtained in s2mps11_clk_parse_dt */ + of_node_put(s2mps11_clks[0].clk_np); + + for (i = 0; i < S2MPS11_CLKS_NUM; i++) { + /* Skip clocks not present on S2MPS14 */ + if (!s2mps11_clks[i].lookup) + continue; clkdev_drop(s2mps11_clks[i].lookup); + } return 0; } static const struct platform_device_id s2mps11_clk_id[] = { { "s2mps11-clk", S2MPS11X}, + { "s2mps14-clk", S2MPS14X}, { "s5m8767-clk", S5M8767X}, { }, }; diff --git a/drivers/clk/clk-si570.c b/drivers/clk/clk-si570.c index 4bbbe32585ec..fc167b3f8919 100644 --- a/drivers/clk/clk-si570.c +++ b/drivers/clk/clk-si570.c @@ -526,6 +526,6 @@ static struct i2c_driver si570_driver = { module_i2c_driver(si570_driver); MODULE_AUTHOR("Guenter Roeck <guenter.roeck@ericsson.com>"); -MODULE_AUTHOR("Soeren Brinkmann <soren.brinkmann@xilinx.com"); +MODULE_AUTHOR("Soeren Brinkmann <soren.brinkmann@xilinx.com>"); MODULE_DESCRIPTION("Si570 driver"); MODULE_LICENSE("GPL"); diff --git a/drivers/clk/clk-u300.c b/drivers/clk/clk-u300.c index 3efbdd078d14..406bfc1375b2 100644 --- a/drivers/clk/clk-u300.c +++ b/drivers/clk/clk-u300.c @@ -1168,6 +1168,7 @@ static const struct of_device_id u300_clk_match[] __initconst = { .compatible = "stericsson,u300-syscon-mclk", .data = of_u300_syscon_mclk_init, }, + {} }; diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c index 7cf2c093cc54..8b73edef151d 100644 --- a/drivers/clk/clk.c +++ b/drivers/clk/clk.c @@ -106,12 +106,11 @@ static void clk_summary_show_one(struct seq_file *s, struct clk *c, int level) if (!c) return; - seq_printf(s, "%*s%-*s %-11d %-12d %-10lu %-11lu", + seq_printf(s, "%*s%-*s %11d %12d %11lu %10lu\n", level * 3 + 1, "", 30 - level * 3, c->name, c->enable_count, c->prepare_count, clk_get_rate(c), clk_get_accuracy(c)); - seq_printf(s, "\n"); } static void clk_summary_show_subtree(struct seq_file *s, struct clk *c, @@ -132,8 +131,8 @@ static int clk_summary_show(struct seq_file *s, void *data) { struct clk *c; - seq_printf(s, " clock enable_cnt prepare_cnt rate accuracy\n"); - seq_printf(s, "---------------------------------------------------------------------------------\n"); + seq_puts(s, " clock enable_cnt prepare_cnt rate accuracy\n"); + seq_puts(s, "--------------------------------------------------------------------------------\n"); clk_prepare_lock(); @@ -822,6 +821,9 @@ void __clk_unprepare(struct clk *clk) */ void clk_unprepare(struct clk *clk) { + if (IS_ERR_OR_NULL(clk)) + return; + clk_prepare_lock(); __clk_unprepare(clk); clk_prepare_unlock(); @@ -883,9 +885,6 @@ static void __clk_disable(struct clk *clk) if (!clk) return; - if (WARN_ON(IS_ERR(clk))) - return; - if (WARN_ON(clk->enable_count == 0)) return; 
@@ -914,6 +913,9 @@ void clk_disable(struct clk *clk) { unsigned long flags; + if (IS_ERR_OR_NULL(clk)) + return; + flags = clk_enable_lock(); __clk_disable(clk); clk_enable_unlock(flags); @@ -1004,6 +1006,7 @@ unsigned long __clk_round_rate(struct clk *clk, unsigned long rate) else return clk->rate; } +EXPORT_SYMBOL_GPL(__clk_round_rate); /** * clk_round_rate - round the given rate for a clk @@ -1115,6 +1118,13 @@ long clk_get_accuracy(struct clk *clk) } EXPORT_SYMBOL_GPL(clk_get_accuracy); +static unsigned long clk_recalc(struct clk *clk, unsigned long parent_rate) +{ + if (clk->ops->recalc_rate) + return clk->ops->recalc_rate(clk->hw, parent_rate); + return parent_rate; +} + /** * __clk_recalc_rates * @clk: first clk in the subtree @@ -1140,10 +1150,7 @@ static void __clk_recalc_rates(struct clk *clk, unsigned long msg) if (clk->parent) parent_rate = clk->parent->rate; - if (clk->ops->recalc_rate) - clk->rate = clk->ops->recalc_rate(clk->hw, parent_rate); - else - clk->rate = parent_rate; + clk->rate = clk_recalc(clk, parent_rate); /* * ignore NOTIFY_STOP and NOTIFY_BAD return values for POST_RATE_CHANGE @@ -1334,10 +1341,7 @@ static int __clk_speculate_rates(struct clk *clk, unsigned long parent_rate) unsigned long new_rate; int ret = NOTIFY_DONE; - if (clk->ops->recalc_rate) - new_rate = clk->ops->recalc_rate(clk->hw, parent_rate); - else - new_rate = parent_rate; + new_rate = clk_recalc(clk, parent_rate); /* abort rate change if a driver returns NOTIFY_BAD or NOTIFY_STOP */ if (clk->notifier_count) @@ -1373,10 +1377,7 @@ static void clk_calc_subtree(struct clk *clk, unsigned long new_rate, new_parent->new_child = clk; hlist_for_each_entry(child, &clk->children, child_node) { - if (child->ops->recalc_rate) - child->new_rate = child->ops->recalc_rate(child->hw, new_rate); - else - child->new_rate = new_rate; + child->new_rate = clk_recalc(child, new_rate); clk_calc_subtree(child, child->new_rate, NULL, 0); } } @@ -1524,10 +1525,7 @@ static void clk_change_rate(struct clk *clk) if (!skip_set_rate && clk->ops->set_rate) clk->ops->set_rate(clk->hw, clk->new_rate, best_parent_rate); - if (clk->ops->recalc_rate) - clk->rate = clk->ops->recalc_rate(clk->hw, best_parent_rate); - else - clk->rate = best_parent_rate; + clk->rate = clk_recalc(clk, best_parent_rate); if (clk->notifier_count && old_rate != clk->rate) __clk_notify(clk, POST_RATE_CHANGE, old_rate, clk->rate); @@ -1716,9 +1714,6 @@ int clk_set_parent(struct clk *clk, struct clk *parent) if (!clk) return 0; - if (!clk->ops) - return -EINVAL; - /* verify ops for for multi-parent clks */ if ((clk->num_parents > 1) && (!clk->ops->set_parent)) return -ENOSYS; diff --git a/drivers/clk/clk.h b/drivers/clk/clk.h index 795cc9f0dac0..c798138f023f 100644 --- a/drivers/clk/clk.h +++ b/drivers/clk/clk.h @@ -10,6 +10,7 @@ */ #if defined(CONFIG_OF) && defined(CONFIG_COMMON_CLK) +struct clk *of_clk_get_by_clkspec(struct of_phandle_args *clkspec); struct clk *__of_clk_get_from_provider(struct of_phandle_args *clkspec); void of_clk_lock(void); void of_clk_unlock(void); diff --git a/drivers/clk/clkdev.c b/drivers/clk/clkdev.c index a360b2eca5cb..f890b901c6bc 100644 --- a/drivers/clk/clkdev.c +++ b/drivers/clk/clkdev.c @@ -27,6 +27,32 @@ static LIST_HEAD(clocks); static DEFINE_MUTEX(clocks_mutex); #if defined(CONFIG_OF) && defined(CONFIG_COMMON_CLK) + +/** + * of_clk_get_by_clkspec() - Lookup a clock form a clock provider + * @clkspec: pointer to a clock specifier data structure + * + * This function looks up a struct clk from the registered list of 
clock + * providers, an input is a clock specifier data structure as returned + * from the of_parse_phandle_with_args() function call. + */ +struct clk *of_clk_get_by_clkspec(struct of_phandle_args *clkspec) +{ + struct clk *clk; + + if (!clkspec) + return ERR_PTR(-EINVAL); + + of_clk_lock(); + clk = __of_clk_get_from_provider(clkspec); + + if (!IS_ERR(clk) && !__clk_get(clk)) + clk = ERR_PTR(-ENOENT); + + of_clk_unlock(); + return clk; +} + struct clk *of_clk_get(struct device_node *np, int index) { struct of_phandle_args clkspec; @@ -41,13 +67,7 @@ struct clk *of_clk_get(struct device_node *np, int index) if (rc) return ERR_PTR(rc); - of_clk_lock(); - clk = __of_clk_get_from_provider(&clkspec); - - if (!IS_ERR(clk) && !__clk_get(clk)) - clk = ERR_PTR(-ENOENT); - - of_clk_unlock(); + clk = of_clk_get_by_clkspec(&clkspec); of_node_put(clkspec.np); return clk; } diff --git a/drivers/clk/hisilicon/Makefile b/drivers/clk/hisilicon/Makefile index 40b33c6a8257..038c02f4d0e7 100644 --- a/drivers/clk/hisilicon/Makefile +++ b/drivers/clk/hisilicon/Makefile @@ -6,3 +6,4 @@ obj-y += clk.o clkgate-separated.o obj-$(CONFIG_ARCH_HI3xxx) += clk-hi3620.o obj-$(CONFIG_ARCH_HIP04) += clk-hip04.o +obj-$(CONFIG_ARCH_HIX5HD2) += clk-hix5hd2.o diff --git a/drivers/clk/hisilicon/clk-hix5hd2.c b/drivers/clk/hisilicon/clk-hix5hd2.c new file mode 100644 index 000000000000..e5fcfb4e32ef --- /dev/null +++ b/drivers/clk/hisilicon/clk-hix5hd2.c @@ -0,0 +1,101 @@ +/* + * Copyright (c) 2014 Linaro Ltd. + * Copyright (c) 2014 Hisilicon Limited. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + */ + +#include <linux/of_address.h> +#include <dt-bindings/clock/hix5hd2-clock.h> +#include "clk.h" + +static struct hisi_fixed_rate_clock hix5hd2_fixed_rate_clks[] __initdata = { + { HIX5HD2_FIXED_1200M, "1200m", NULL, CLK_IS_ROOT, 1200000000, }, + { HIX5HD2_FIXED_400M, "400m", NULL, CLK_IS_ROOT, 400000000, }, + { HIX5HD2_FIXED_48M, "48m", NULL, CLK_IS_ROOT, 48000000, }, + { HIX5HD2_FIXED_24M, "24m", NULL, CLK_IS_ROOT, 24000000, }, + { HIX5HD2_FIXED_600M, "600m", NULL, CLK_IS_ROOT, 600000000, }, + { HIX5HD2_FIXED_300M, "300m", NULL, CLK_IS_ROOT, 300000000, }, + { HIX5HD2_FIXED_75M, "75m", NULL, CLK_IS_ROOT, 75000000, }, + { HIX5HD2_FIXED_200M, "200m", NULL, CLK_IS_ROOT, 200000000, }, + { HIX5HD2_FIXED_100M, "100m", NULL, CLK_IS_ROOT, 100000000, }, + { HIX5HD2_FIXED_40M, "40m", NULL, CLK_IS_ROOT, 40000000, }, + { HIX5HD2_FIXED_150M, "150m", NULL, CLK_IS_ROOT, 150000000, }, + { HIX5HD2_FIXED_1728M, "1728m", NULL, CLK_IS_ROOT, 1728000000, }, + { HIX5HD2_FIXED_28P8M, "28p8m", NULL, CLK_IS_ROOT, 28000000, }, + { HIX5HD2_FIXED_432M, "432m", NULL, CLK_IS_ROOT, 432000000, }, + { HIX5HD2_FIXED_345P6M, "345p6m", NULL, CLK_IS_ROOT, 345000000, }, + { HIX5HD2_FIXED_288M, "288m", NULL, CLK_IS_ROOT, 288000000, }, + { HIX5HD2_FIXED_60M, "60m", NULL, CLK_IS_ROOT, 60000000, }, + { HIX5HD2_FIXED_750M, "750m", NULL, CLK_IS_ROOT, 750000000, }, + { HIX5HD2_FIXED_500M, "500m", NULL, CLK_IS_ROOT, 500000000, }, + { HIX5HD2_FIXED_54M, "54m", NULL, CLK_IS_ROOT, 54000000, }, + { HIX5HD2_FIXED_27M, "27m", NULL, CLK_IS_ROOT, 27000000, }, + { HIX5HD2_FIXED_1500M, "1500m", NULL, CLK_IS_ROOT, 1500000000, }, + { HIX5HD2_FIXED_375M, "375m", NULL, CLK_IS_ROOT, 375000000, }, + { HIX5HD2_FIXED_187M, "187m", NULL, CLK_IS_ROOT, 187000000, }, + { HIX5HD2_FIXED_250M, "250m", NULL, CLK_IS_ROOT, 250000000, }, + { 
HIX5HD2_FIXED_125M, "125m", NULL, CLK_IS_ROOT, 125000000, }, + { HIX5HD2_FIXED_2P02M, "2m", NULL, CLK_IS_ROOT, 2000000, }, + { HIX5HD2_FIXED_50M, "50m", NULL, CLK_IS_ROOT, 50000000, }, + { HIX5HD2_FIXED_25M, "25m", NULL, CLK_IS_ROOT, 25000000, }, + { HIX5HD2_FIXED_83M, "83m", NULL, CLK_IS_ROOT, 83333333, }, +}; + +static const char *sfc_mux_p[] __initconst = { + "24m", "150m", "200m", "100m", "75m", }; +static u32 sfc_mux_table[] = {0, 4, 5, 6, 7}; + +static const char *sdio1_mux_p[] __initconst = { + "75m", "100m", "50m", "15m", }; +static u32 sdio1_mux_table[] = {0, 1, 2, 3}; + +static const char *fephy_mux_p[] __initconst = { "25m", "125m"}; +static u32 fephy_mux_table[] = {0, 1}; + + +static struct hisi_mux_clock hix5hd2_mux_clks[] __initdata = { + { HIX5HD2_SFC_MUX, "sfc_mux", sfc_mux_p, ARRAY_SIZE(sfc_mux_p), + CLK_SET_RATE_PARENT, 0x5c, 8, 3, 0, sfc_mux_table, }, + { HIX5HD2_MMC_MUX, "mmc_mux", sdio1_mux_p, ARRAY_SIZE(sdio1_mux_p), + CLK_SET_RATE_PARENT, 0xa0, 8, 2, 0, sdio1_mux_table, }, + { HIX5HD2_FEPHY_MUX, "fephy_mux", + fephy_mux_p, ARRAY_SIZE(fephy_mux_p), + CLK_SET_RATE_PARENT, 0x120, 8, 2, 0, fephy_mux_table, }, +}; + +static struct hisi_gate_clock hix5hd2_gate_clks[] __initdata = { + /*sfc*/ + { HIX5HD2_SFC_CLK, "clk_sfc", "sfc_mux", + CLK_SET_RATE_PARENT, 0x5c, 0, 0, }, + { HIX5HD2_SFC_RST, "rst_sfc", "clk_sfc", + CLK_SET_RATE_PARENT, 0x5c, 4, CLK_GATE_SET_TO_DISABLE, }, + /*sdio1*/ + { HIX5HD2_MMC_BIU_CLK, "clk_mmc_biu", "200m", + CLK_SET_RATE_PARENT, 0xa0, 0, 0, }, + { HIX5HD2_MMC_CIU_CLK, "clk_mmc_ciu", "mmc_mux", + CLK_SET_RATE_PARENT, 0xa0, 1, 0, }, + { HIX5HD2_MMC_CIU_RST, "rst_mmc_ciu", "clk_mmc_ciu", + CLK_SET_RATE_PARENT, 0xa0, 4, CLK_GATE_SET_TO_DISABLE, }, +}; + +static void __init hix5hd2_clk_init(struct device_node *np) +{ + struct hisi_clock_data *clk_data; + + clk_data = hisi_clk_init(np, HIX5HD2_NR_CLKS); + if (!clk_data) + return; + + hisi_clk_register_fixed_rate(hix5hd2_fixed_rate_clks, + ARRAY_SIZE(hix5hd2_fixed_rate_clks), + clk_data); + hisi_clk_register_mux(hix5hd2_mux_clks, ARRAY_SIZE(hix5hd2_mux_clks), + clk_data); + hisi_clk_register_gate(hix5hd2_gate_clks, + ARRAY_SIZE(hix5hd2_gate_clks), clk_data); +} + +CLK_OF_DECLARE(hix5hd2_clk, "hisilicon,hix5hd2-clock", hix5hd2_clk_init); diff --git a/drivers/clk/hisilicon/clk.c b/drivers/clk/hisilicon/clk.c index 276f672e7b1a..a078e84f7b05 100644 --- a/drivers/clk/hisilicon/clk.c +++ b/drivers/clk/hisilicon/clk.c @@ -127,11 +127,14 @@ void __init hisi_clk_register_mux(struct hisi_mux_clock *clks, int i; for (i = 0; i < nums; i++) { - clk = clk_register_mux(NULL, clks[i].name, clks[i].parent_names, - clks[i].num_parents, clks[i].flags, - base + clks[i].offset, clks[i].shift, - clks[i].width, clks[i].mux_flags, - &hisi_clk_lock); + u32 mask = BIT(clks[i].width) - 1; + + clk = clk_register_mux_table(NULL, clks[i].name, + clks[i].parent_names, + clks[i].num_parents, clks[i].flags, + base + clks[i].offset, clks[i].shift, + mask, clks[i].mux_flags, + clks[i].table, &hisi_clk_lock); if (IS_ERR(clk)) { pr_err("%s: failed to register clock %s\n", __func__, clks[i].name); @@ -174,6 +177,34 @@ void __init hisi_clk_register_divider(struct hisi_divider_clock *clks, } } +void __init hisi_clk_register_gate(struct hisi_gate_clock *clks, + int nums, struct hisi_clock_data *data) +{ + struct clk *clk; + void __iomem *base = data->base; + int i; + + for (i = 0; i < nums; i++) { + clk = clk_register_gate(NULL, clks[i].name, + clks[i].parent_name, + clks[i].flags, + base + clks[i].offset, + clks[i].bit_idx, + 
clks[i].gate_flags, + &hisi_clk_lock); + if (IS_ERR(clk)) { + pr_err("%s: failed to register clock %s\n", + __func__, clks[i].name); + continue; + } + + if (clks[i].alias) + clk_register_clkdev(clk, clks[i].alias, NULL); + + data->clk_data.clks[clks[i].id] = clk; + } +} + void __init hisi_clk_register_gate_sep(struct hisi_gate_clock *clks, int nums, struct hisi_clock_data *data) { diff --git a/drivers/clk/hisilicon/clk.h b/drivers/clk/hisilicon/clk.h index 43fa5da88f02..31083ffc0650 100644 --- a/drivers/clk/hisilicon/clk.h +++ b/drivers/clk/hisilicon/clk.h @@ -62,6 +62,7 @@ struct hisi_mux_clock { u8 shift; u8 width; u8 mux_flags; + u32 *table; const char *alias; }; @@ -103,6 +104,8 @@ void __init hisi_clk_register_mux(struct hisi_mux_clock *, int, struct hisi_clock_data *); void __init hisi_clk_register_divider(struct hisi_divider_clock *, int, struct hisi_clock_data *); +void __init hisi_clk_register_gate(struct hisi_gate_clock *, + int, struct hisi_clock_data *); void __init hisi_clk_register_gate_sep(struct hisi_gate_clock *, int, struct hisi_clock_data *); #endif /* __HISI_CLK_H */ diff --git a/drivers/clk/mvebu/Kconfig b/drivers/clk/mvebu/Kconfig index 693f7be129f1..3b34dba9178d 100644 --- a/drivers/clk/mvebu/Kconfig +++ b/drivers/clk/mvebu/Kconfig @@ -34,3 +34,7 @@ config DOVE_CLK config KIRKWOOD_CLK bool select MVEBU_CLK_COMMON + +config ORION_CLK + bool + select MVEBU_CLK_COMMON diff --git a/drivers/clk/mvebu/Makefile b/drivers/clk/mvebu/Makefile index 4c66162fb0b4..a9a56fc01901 100644 --- a/drivers/clk/mvebu/Makefile +++ b/drivers/clk/mvebu/Makefile @@ -8,3 +8,4 @@ obj-$(CONFIG_ARMADA_38X_CLK) += armada-38x.o obj-$(CONFIG_ARMADA_XP_CLK) += armada-xp.o obj-$(CONFIG_DOVE_CLK) += dove.o obj-$(CONFIG_KIRKWOOD_CLK) += kirkwood.o +obj-$(CONFIG_ORION_CLK) += orion.o diff --git a/drivers/clk/mvebu/orion.c b/drivers/clk/mvebu/orion.c new file mode 100644 index 000000000000..fd129566c1ce --- /dev/null +++ b/drivers/clk/mvebu/orion.c @@ -0,0 +1,210 @@ +/* + * Marvell Orion SoC clocks + * + * Copyright (C) 2014 Thomas Petazzoni + * + * Thomas Petazzoni <thomas.petazzoni@free-electrons.com> + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. 
+ */ + +#include <linux/kernel.h> +#include <linux/clk-provider.h> +#include <linux/io.h> +#include <linux/of.h> +#include "common.h" + +static const struct coreclk_ratio orion_coreclk_ratios[] __initconst = { + { .id = 0, .name = "ddrclk", } +}; + +/* + * Orion 5182 + */ + +#define SAR_MV88F5182_TCLK_FREQ 8 +#define SAR_MV88F5182_TCLK_FREQ_MASK 0x3 + +static u32 __init mv88f5182_get_tclk_freq(void __iomem *sar) +{ + u32 opt = (readl(sar) >> SAR_MV88F5182_TCLK_FREQ) & + SAR_MV88F5182_TCLK_FREQ_MASK; + if (opt == 1) + return 150000000; + else if (opt == 2) + return 166666667; + else + return 0; +} + +#define SAR_MV88F5182_CPU_FREQ 4 +#define SAR_MV88F5182_CPU_FREQ_MASK 0xf + +static u32 __init mv88f5182_get_cpu_freq(void __iomem *sar) +{ + u32 opt = (readl(sar) >> SAR_MV88F5182_CPU_FREQ) & + SAR_MV88F5182_CPU_FREQ_MASK; + if (opt == 0) + return 333333333; + else if (opt == 1 || opt == 2) + return 400000000; + else if (opt == 3) + return 500000000; + else + return 0; +} + +static void __init mv88f5182_get_clk_ratio(void __iomem *sar, int id, + int *mult, int *div) +{ + u32 opt = (readl(sar) >> SAR_MV88F5182_CPU_FREQ) & + SAR_MV88F5182_CPU_FREQ_MASK; + if (opt == 0 || opt == 1) { + *mult = 1; + *div = 2; + } else if (opt == 2 || opt == 3) { + *mult = 1; + *div = 3; + } else { + *mult = 0; + *div = 1; + } +} + +static const struct coreclk_soc_desc mv88f5182_coreclks = { + .get_tclk_freq = mv88f5182_get_tclk_freq, + .get_cpu_freq = mv88f5182_get_cpu_freq, + .get_clk_ratio = mv88f5182_get_clk_ratio, + .ratios = orion_coreclk_ratios, + .num_ratios = ARRAY_SIZE(orion_coreclk_ratios), +}; + +static void __init mv88f5182_clk_init(struct device_node *np) +{ + return mvebu_coreclk_setup(np, &mv88f5182_coreclks); +} + +CLK_OF_DECLARE(mv88f5182_clk, "marvell,mv88f5182-core-clock", mv88f5182_clk_init); + +/* + * Orion 5281 + */ + +static u32 __init mv88f5281_get_tclk_freq(void __iomem *sar) +{ + /* On 5281, tclk is always 166 Mhz */ + return 166666667; +} + +#define SAR_MV88F5281_CPU_FREQ 4 +#define SAR_MV88F5281_CPU_FREQ_MASK 0xf + +static u32 __init mv88f5281_get_cpu_freq(void __iomem *sar) +{ + u32 opt = (readl(sar) >> SAR_MV88F5281_CPU_FREQ) & + SAR_MV88F5281_CPU_FREQ_MASK; + if (opt == 1 || opt == 2) + return 400000000; + else if (opt == 3) + return 500000000; + else + return 0; +} + +static void __init mv88f5281_get_clk_ratio(void __iomem *sar, int id, + int *mult, int *div) +{ + u32 opt = (readl(sar) >> SAR_MV88F5281_CPU_FREQ) & + SAR_MV88F5281_CPU_FREQ_MASK; + if (opt == 1) { + *mult = 1; + *div = 2; + } else if (opt == 2 || opt == 3) { + *mult = 1; + *div = 3; + } else { + *mult = 0; + *div = 1; + } +} + +static const struct coreclk_soc_desc mv88f5281_coreclks = { + .get_tclk_freq = mv88f5281_get_tclk_freq, + .get_cpu_freq = mv88f5281_get_cpu_freq, + .get_clk_ratio = mv88f5281_get_clk_ratio, + .ratios = orion_coreclk_ratios, + .num_ratios = ARRAY_SIZE(orion_coreclk_ratios), +}; + +static void __init mv88f5281_clk_init(struct device_node *np) +{ + return mvebu_coreclk_setup(np, &mv88f5281_coreclks); +} + +CLK_OF_DECLARE(mv88f5281_clk, "marvell,mv88f5281-core-clock", mv88f5281_clk_init); + +/* + * Orion 6183 + */ + +#define SAR_MV88F6183_TCLK_FREQ 9 +#define SAR_MV88F6183_TCLK_FREQ_MASK 0x1 + +static u32 __init mv88f6183_get_tclk_freq(void __iomem *sar) +{ + u32 opt = (readl(sar) >> SAR_MV88F6183_TCLK_FREQ) & + SAR_MV88F6183_TCLK_FREQ_MASK; + if (opt == 0) + return 133333333; + else if (opt == 1) + return 166666667; + else + return 0; +} + +#define SAR_MV88F6183_CPU_FREQ 1 +#define 
SAR_MV88F6183_CPU_FREQ_MASK 0x3f + +static u32 __init mv88f6183_get_cpu_freq(void __iomem *sar) +{ + u32 opt = (readl(sar) >> SAR_MV88F6183_CPU_FREQ) & + SAR_MV88F6183_CPU_FREQ_MASK; + if (opt == 9) + return 333333333; + else if (opt == 17) + return 400000000; + else + return 0; +} + +static void __init mv88f6183_get_clk_ratio(void __iomem *sar, int id, + int *mult, int *div) +{ + u32 opt = (readl(sar) >> SAR_MV88F6183_CPU_FREQ) & + SAR_MV88F6183_CPU_FREQ_MASK; + if (opt == 9 || opt == 17) { + *mult = 1; + *div = 2; + } else { + *mult = 0; + *div = 1; + } +} + +static const struct coreclk_soc_desc mv88f6183_coreclks = { + .get_tclk_freq = mv88f6183_get_tclk_freq, + .get_cpu_freq = mv88f6183_get_cpu_freq, + .get_clk_ratio = mv88f6183_get_clk_ratio, + .ratios = orion_coreclk_ratios, + .num_ratios = ARRAY_SIZE(orion_coreclk_ratios), +}; + + +static void __init mv88f6183_clk_init(struct device_node *np) +{ + return mvebu_coreclk_setup(np, &mv88f6183_coreclks); +} + +CLK_OF_DECLARE(mv88f6183_clk, "marvell,mv88f6183-core-clock", mv88f6183_clk_init); diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig index 995bcfa021a4..7f696b7d4422 100644 --- a/drivers/clk/qcom/Kconfig +++ b/drivers/clk/qcom/Kconfig @@ -13,10 +13,10 @@ config MSM_GCC_8660 i2c, USB, SD/eMMC, etc. config MSM_GCC_8960 - tristate "MSM8960 Global Clock Controller" + tristate "APQ8064/MSM8960 Global Clock Controller" depends on COMMON_CLK_QCOM help - Support for the global clock controller on msm8960 devices. + Support for the global clock controller on apq8064/msm8960 devices. Say Y if you want to use peripheral devices such as UART, SPI, i2c, USB, SD/eMMC, SATA, PCIe, etc. diff --git a/drivers/clk/qcom/Makefile b/drivers/clk/qcom/Makefile index f60db2ef1aee..689e05bf4f95 100644 --- a/drivers/clk/qcom/Makefile +++ b/drivers/clk/qcom/Makefile @@ -1,5 +1,6 @@ obj-$(CONFIG_COMMON_CLK_QCOM) += clk-qcom.o +clk-qcom-y += common.o clk-qcom-y += clk-regmap.o clk-qcom-y += clk-pll.o clk-qcom-y += clk-rcg.o diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h index 1d6b6dece328..b9ec11dfd1b4 100644 --- a/drivers/clk/qcom/clk-rcg.h +++ b/drivers/clk/qcom/clk-rcg.h @@ -155,5 +155,8 @@ struct clk_rcg2 { #define to_clk_rcg2(_hw) container_of(to_clk_regmap(_hw), struct clk_rcg2, clkr) extern const struct clk_ops clk_rcg2_ops; +extern const struct clk_ops clk_edp_pixel_ops; +extern const struct clk_ops clk_byte_ops; +extern const struct clk_ops clk_pixel_ops; #endif diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c index 00f878a04d3f..cd185d5cc67a 100644 --- a/drivers/clk/qcom/clk-rcg2.c +++ b/drivers/clk/qcom/clk-rcg2.c @@ -19,6 +19,7 @@ #include <linux/clk-provider.h> #include <linux/delay.h> #include <linux/regmap.h> +#include <linux/math64.h> #include <asm/div64.h> @@ -55,7 +56,7 @@ static int clk_rcg2_is_enabled(struct clk_hw *hw) if (ret) return ret; - return (cmd & CMD_ROOT_OFF) != 0; + return (cmd & CMD_ROOT_OFF) == 0; } static u8 clk_rcg2_get_parent(struct clk_hw *hw) @@ -181,7 +182,8 @@ struct freq_tbl *find_freq(const struct freq_tbl *f, unsigned long rate) if (rate <= f->freq) return f; - return NULL; + /* Default to our fastest rate */ + return f - 1; } static long _freq_tbl_determine_rate(struct clk_hw *hw, @@ -224,31 +226,25 @@ static long clk_rcg2_determine_rate(struct clk_hw *hw, unsigned long rate, return _freq_tbl_determine_rate(hw, rcg->freq_tbl, rate, p_rate, p); } -static int __clk_rcg2_set_rate(struct clk_hw *hw, unsigned long rate) +static int clk_rcg2_configure(struct 
clk_rcg2 *rcg, const struct freq_tbl *f) { - struct clk_rcg2 *rcg = to_clk_rcg2(hw); - const struct freq_tbl *f; u32 cfg, mask; int ret; - f = find_freq(rcg->freq_tbl, rate); - if (!f) - return -EINVAL; - if (rcg->mnd_width && f->n) { mask = BIT(rcg->mnd_width) - 1; - ret = regmap_update_bits(rcg->clkr.regmap, rcg->cmd_rcgr + M_REG, - mask, f->m); + ret = regmap_update_bits(rcg->clkr.regmap, + rcg->cmd_rcgr + M_REG, mask, f->m); if (ret) return ret; - ret = regmap_update_bits(rcg->clkr.regmap, rcg->cmd_rcgr + N_REG, - mask, ~(f->n - f->m)); + ret = regmap_update_bits(rcg->clkr.regmap, + rcg->cmd_rcgr + N_REG, mask, ~(f->n - f->m)); if (ret) return ret; - ret = regmap_update_bits(rcg->clkr.regmap, rcg->cmd_rcgr + D_REG, - mask, ~f->n); + ret = regmap_update_bits(rcg->clkr.regmap, + rcg->cmd_rcgr + D_REG, mask, ~f->n); if (ret) return ret; } @@ -259,14 +255,26 @@ static int __clk_rcg2_set_rate(struct clk_hw *hw, unsigned long rate) cfg |= rcg->parent_map[f->src] << CFG_SRC_SEL_SHIFT; if (rcg->mnd_width && f->n) cfg |= CFG_MODE_DUAL_EDGE; - ret = regmap_update_bits(rcg->clkr.regmap, rcg->cmd_rcgr + CFG_REG, mask, - cfg); + ret = regmap_update_bits(rcg->clkr.regmap, + rcg->cmd_rcgr + CFG_REG, mask, cfg); if (ret) return ret; return update_config(rcg); } +static int __clk_rcg2_set_rate(struct clk_hw *hw, unsigned long rate) +{ + struct clk_rcg2 *rcg = to_clk_rcg2(hw); + const struct freq_tbl *f; + + f = find_freq(rcg->freq_tbl, rate); + if (!f) + return -EINVAL; + + return clk_rcg2_configure(rcg, f); +} + static int clk_rcg2_set_rate(struct clk_hw *hw, unsigned long rate, unsigned long parent_rate) { @@ -289,3 +297,265 @@ const struct clk_ops clk_rcg2_ops = { .set_rate_and_parent = clk_rcg2_set_rate_and_parent, }; EXPORT_SYMBOL_GPL(clk_rcg2_ops); + +struct frac_entry { + int num; + int den; +}; + +static const struct frac_entry frac_table_675m[] = { /* link rate of 270M */ + { 52, 295 }, /* 119 M */ + { 11, 57 }, /* 130.25 M */ + { 63, 307 }, /* 138.50 M */ + { 11, 50 }, /* 148.50 M */ + { 47, 206 }, /* 154 M */ + { 31, 100 }, /* 205.25 M */ + { 107, 269 }, /* 268.50 M */ + { }, +}; + +static struct frac_entry frac_table_810m[] = { /* Link rate of 162M */ + { 31, 211 }, /* 119 M */ + { 32, 199 }, /* 130.25 M */ + { 63, 307 }, /* 138.50 M */ + { 11, 60 }, /* 148.50 M */ + { 50, 263 }, /* 154 M */ + { 31, 120 }, /* 205.25 M */ + { 119, 359 }, /* 268.50 M */ + { }, +}; + +static int clk_edp_pixel_set_rate(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate) +{ + struct clk_rcg2 *rcg = to_clk_rcg2(hw); + struct freq_tbl f = *rcg->freq_tbl; + const struct frac_entry *frac; + int delta = 100000; + s64 src_rate = parent_rate; + s64 request; + u32 mask = BIT(rcg->hid_width) - 1; + u32 hid_div; + + if (src_rate == 810000000) + frac = frac_table_810m; + else + frac = frac_table_675m; + + for (; frac->num; frac++) { + request = rate; + request *= frac->den; + request = div_s64(request, frac->num); + if ((src_rate < (request - delta)) || + (src_rate > (request + delta))) + continue; + + regmap_read(rcg->clkr.regmap, rcg->cmd_rcgr + CFG_REG, + &hid_div); + f.pre_div = hid_div; + f.pre_div >>= CFG_SRC_DIV_SHIFT; + f.pre_div &= mask; + f.m = frac->num; + f.n = frac->den; + + return clk_rcg2_configure(rcg, &f); + } + + return -EINVAL; +} + +static int clk_edp_pixel_set_rate_and_parent(struct clk_hw *hw, + unsigned long rate, unsigned long parent_rate, u8 index) +{ + /* Parent index is set statically in frequency table */ + return clk_edp_pixel_set_rate(hw, rate, parent_rate); +} + +static long 
clk_edp_pixel_determine_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *p_rate, struct clk **p) +{ + struct clk_rcg2 *rcg = to_clk_rcg2(hw); + const struct freq_tbl *f = rcg->freq_tbl; + const struct frac_entry *frac; + int delta = 100000; + s64 src_rate = *p_rate; + s64 request; + u32 mask = BIT(rcg->hid_width) - 1; + u32 hid_div; + + /* Force the correct parent */ + *p = clk_get_parent_by_index(hw->clk, f->src); + + if (src_rate == 810000000) + frac = frac_table_810m; + else + frac = frac_table_675m; + + for (; frac->num; frac++) { + request = rate; + request *= frac->den; + request = div_s64(request, frac->num); + if ((src_rate < (request - delta)) || + (src_rate > (request + delta))) + continue; + + regmap_read(rcg->clkr.regmap, rcg->cmd_rcgr + CFG_REG, + &hid_div); + hid_div >>= CFG_SRC_DIV_SHIFT; + hid_div &= mask; + + return calc_rate(src_rate, frac->num, frac->den, !!frac->den, + hid_div); + } + + return -EINVAL; +} + +const struct clk_ops clk_edp_pixel_ops = { + .is_enabled = clk_rcg2_is_enabled, + .get_parent = clk_rcg2_get_parent, + .set_parent = clk_rcg2_set_parent, + .recalc_rate = clk_rcg2_recalc_rate, + .set_rate = clk_edp_pixel_set_rate, + .set_rate_and_parent = clk_edp_pixel_set_rate_and_parent, + .determine_rate = clk_edp_pixel_determine_rate, +}; +EXPORT_SYMBOL_GPL(clk_edp_pixel_ops); + +static long clk_byte_determine_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *p_rate, struct clk **p) +{ + struct clk_rcg2 *rcg = to_clk_rcg2(hw); + const struct freq_tbl *f = rcg->freq_tbl; + unsigned long parent_rate, div; + u32 mask = BIT(rcg->hid_width) - 1; + + if (rate == 0) + return -EINVAL; + + *p = clk_get_parent_by_index(hw->clk, f->src); + *p_rate = parent_rate = __clk_round_rate(*p, rate); + + div = DIV_ROUND_UP((2 * parent_rate), rate) - 1; + div = min_t(u32, div, mask); + + return calc_rate(parent_rate, 0, 0, 0, div); +} + +static int clk_byte_set_rate(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate) +{ + struct clk_rcg2 *rcg = to_clk_rcg2(hw); + struct freq_tbl f = *rcg->freq_tbl; + unsigned long div; + u32 mask = BIT(rcg->hid_width) - 1; + + div = DIV_ROUND_UP((2 * parent_rate), rate) - 1; + div = min_t(u32, div, mask); + + f.pre_div = div; + + return clk_rcg2_configure(rcg, &f); +} + +static int clk_byte_set_rate_and_parent(struct clk_hw *hw, + unsigned long rate, unsigned long parent_rate, u8 index) +{ + /* Parent index is set statically in frequency table */ + return clk_byte_set_rate(hw, rate, parent_rate); +} + +const struct clk_ops clk_byte_ops = { + .is_enabled = clk_rcg2_is_enabled, + .get_parent = clk_rcg2_get_parent, + .set_parent = clk_rcg2_set_parent, + .recalc_rate = clk_rcg2_recalc_rate, + .set_rate = clk_byte_set_rate, + .set_rate_and_parent = clk_byte_set_rate_and_parent, + .determine_rate = clk_byte_determine_rate, +}; +EXPORT_SYMBOL_GPL(clk_byte_ops); + +static const struct frac_entry frac_table_pixel[] = { + { 3, 8 }, + { 2, 9 }, + { 4, 9 }, + { 1, 1 }, + { } +}; + +static long clk_pixel_determine_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *p_rate, struct clk **p) +{ + struct clk_rcg2 *rcg = to_clk_rcg2(hw); + unsigned long request, src_rate; + int delta = 100000; + const struct freq_tbl *f = rcg->freq_tbl; + const struct frac_entry *frac = frac_table_pixel; + struct clk *parent = *p = clk_get_parent_by_index(hw->clk, f->src); + + for (; frac->num; frac++) { + request = (rate * frac->den) / frac->num; + + src_rate = __clk_round_rate(parent, request); + if ((src_rate < (request - delta)) || + 
(src_rate > (request + delta))) + continue; + + *p_rate = src_rate; + return (src_rate * frac->num) / frac->den; + } + + return -EINVAL; +} + +static int clk_pixel_set_rate(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate) +{ + struct clk_rcg2 *rcg = to_clk_rcg2(hw); + struct freq_tbl f = *rcg->freq_tbl; + const struct frac_entry *frac = frac_table_pixel; + unsigned long request, src_rate; + int delta = 100000; + u32 mask = BIT(rcg->hid_width) - 1; + u32 hid_div; + struct clk *parent = clk_get_parent_by_index(hw->clk, f.src); + + for (; frac->num; frac++) { + request = (rate * frac->den) / frac->num; + + src_rate = __clk_round_rate(parent, request); + if ((src_rate < (request - delta)) || + (src_rate > (request + delta))) + continue; + + regmap_read(rcg->clkr.regmap, rcg->cmd_rcgr + CFG_REG, + &hid_div); + f.pre_div = hid_div; + f.pre_div >>= CFG_SRC_DIV_SHIFT; + f.pre_div &= mask; + f.m = frac->num; + f.n = frac->den; + + return clk_rcg2_configure(rcg, &f); + } + return -EINVAL; +} + +static int clk_pixel_set_rate_and_parent(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate, u8 index) +{ + /* Parent index is set statically in frequency table */ + return clk_pixel_set_rate(hw, rate, parent_rate); +} + +const struct clk_ops clk_pixel_ops = { + .is_enabled = clk_rcg2_is_enabled, + .get_parent = clk_rcg2_get_parent, + .set_parent = clk_rcg2_set_parent, + .recalc_rate = clk_rcg2_recalc_rate, + .set_rate = clk_pixel_set_rate, + .set_rate_and_parent = clk_pixel_set_rate_and_parent, + .determine_rate = clk_pixel_determine_rate, +}; +EXPORT_SYMBOL_GPL(clk_pixel_ops); diff --git a/drivers/clk/qcom/common.c b/drivers/clk/qcom/common.c new file mode 100644 index 000000000000..9b5a1cfc6b91 --- /dev/null +++ b/drivers/clk/qcom/common.c @@ -0,0 +1,101 @@ +/* + * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved. + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
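
The eDP pixel, byte and pixel clock ops added to clk-rcg2.c above all pick their settings by working backwards from the requested rate: the pixel variants search a small table of M/N fractions for one whose implied parent rate the parent PLL can actually deliver to within 100 kHz, while the byte variant just computes a half-integer divider. A standalone sketch of the fractional search (the requested rate and the rate the parent is assumed to round to are invented):

/* Sketch of the M/N selection used by clk_pixel_determine_rate() above.
 * The requested pixel rate and the rate the parent PLL is assumed to
 * round to are hypothetical. */
#include <stdio.h>

struct frac_entry { int num; int den; };

int main(void)
{
	static const struct frac_entry frac_table_pixel[] = {
		{ 3, 8 }, { 2, 9 }, { 4, 9 }, { 1, 1 }, { 0, 0 }
	};
	unsigned long rate = 75000000;		/* requested pixel clock */
	unsigned long pll_rate = 200000000;	/* what the parent rounds to */
	long delta = 100000;
	const struct frac_entry *frac;

	for (frac = frac_table_pixel; frac->num; frac++) {
		unsigned long request = rate * frac->den / frac->num;

		if (pll_rate < request - delta || pll_rate > request + delta)
			continue;
		printf("use m/n = %d/%d, parent at %lu Hz -> %lu Hz\n",
		       frac->num, frac->den, pll_rate,
		       pll_rate * frac->num / frac->den);
		return 0;
	}
	printf("no m/n entry matches\n");
	return 1;
}

Here the 3/8 entry matches, giving a 75 MHz pixel clock from the assumed 200 MHz parent.
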
+ */ + +#include <linux/export.h> +#include <linux/regmap.h> +#include <linux/platform_device.h> +#include <linux/clk-provider.h> +#include <linux/reset-controller.h> + +#include "common.h" +#include "clk-regmap.h" +#include "reset.h" + +struct qcom_cc { + struct qcom_reset_controller reset; + struct clk_onecell_data data; + struct clk *clks[]; +}; + +int qcom_cc_probe(struct platform_device *pdev, const struct qcom_cc_desc *desc) +{ + void __iomem *base; + struct resource *res; + int i, ret; + struct device *dev = &pdev->dev; + struct clk *clk; + struct clk_onecell_data *data; + struct clk **clks; + struct regmap *regmap; + struct qcom_reset_controller *reset; + struct qcom_cc *cc; + size_t num_clks = desc->num_clks; + struct clk_regmap **rclks = desc->clks; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + base = devm_ioremap_resource(dev, res); + if (IS_ERR(base)) + return PTR_ERR(base); + + regmap = devm_regmap_init_mmio(dev, base, desc->config); + if (IS_ERR(regmap)) + return PTR_ERR(regmap); + + cc = devm_kzalloc(dev, sizeof(*cc) + sizeof(*clks) * num_clks, + GFP_KERNEL); + if (!cc) + return -ENOMEM; + + clks = cc->clks; + data = &cc->data; + data->clks = clks; + data->clk_num = num_clks; + + for (i = 0; i < num_clks; i++) { + if (!rclks[i]) { + clks[i] = ERR_PTR(-ENOENT); + continue; + } + clk = devm_clk_register_regmap(dev, rclks[i]); + if (IS_ERR(clk)) + return PTR_ERR(clk); + clks[i] = clk; + } + + ret = of_clk_add_provider(dev->of_node, of_clk_src_onecell_get, data); + if (ret) + return ret; + + reset = &cc->reset; + reset->rcdev.of_node = dev->of_node; + reset->rcdev.ops = &qcom_reset_ops; + reset->rcdev.owner = dev->driver->owner; + reset->rcdev.nr_resets = desc->num_resets; + reset->regmap = regmap; + reset->reset_map = desc->resets; + platform_set_drvdata(pdev, &reset->rcdev); + + ret = reset_controller_register(&reset->rcdev); + if (ret) + of_clk_del_provider(dev->of_node); + + return ret; +} +EXPORT_SYMBOL_GPL(qcom_cc_probe); + +void qcom_cc_remove(struct platform_device *pdev) +{ + of_clk_del_provider(pdev->dev.of_node); + reset_controller_unregister(platform_get_drvdata(pdev)); +} +EXPORT_SYMBOL_GPL(qcom_cc_remove); diff --git a/drivers/clk/qcom/common.h b/drivers/clk/qcom/common.h new file mode 100644 index 000000000000..2c3cfc860348 --- /dev/null +++ b/drivers/clk/qcom/common.h @@ -0,0 +1,34 @@ +/* + * Copyright (c) 2014, The Linux Foundation. All rights reserved. + * + * This software is licensed under the terms of the GNU General Public + * License version 2, as published by the Free Software Foundation, and + * may be copied, distributed, and modified under those terms. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
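
qcom_cc_probe() and qcom_cc_remove() above absorb the regmap setup, clock registration and reset-controller boilerplate that each Qualcomm clock-controller driver used to open-code, keyed off a qcom_cc_desc. The skeleton below shows how a driver is expected to use them, modelled on the converted gcc/mmcc drivers in this diff; the "foo" names, the empty tables and the max_register value are placeholders, not a real controller.

/*
 * Skeleton only (not part of this patch): hooking a hypothetical
 * controller into qcom_cc_probe()/qcom_cc_remove().
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>

#include "common.h"
#include "clk-regmap.h"
#include "reset.h"

/* A real driver fills these with its clk_regmap and reset entries. */
static struct clk_regmap *gcc_foo_clks[1];
static const struct qcom_reset_map gcc_foo_resets[1];

static const struct regmap_config gcc_foo_regmap_config = {
	.reg_bits	= 32,
	.reg_stride	= 4,
	.val_bits	= 32,
	.max_register	= 0x1000,
	.fast_io	= true,
};

static const struct qcom_cc_desc gcc_foo_desc = {
	.config = &gcc_foo_regmap_config,
	.clks = gcc_foo_clks,
	.num_clks = ARRAY_SIZE(gcc_foo_clks),
	.resets = gcc_foo_resets,
	.num_resets = ARRAY_SIZE(gcc_foo_resets),
};

static const struct of_device_id gcc_foo_match_table[] = {
	{ .compatible = "qcom,gcc-foo" },
	{ }
};
MODULE_DEVICE_TABLE(of, gcc_foo_match_table);

static int gcc_foo_probe(struct platform_device *pdev)
{
	return qcom_cc_probe(pdev, &gcc_foo_desc);
}

static int gcc_foo_remove(struct platform_device *pdev)
{
	qcom_cc_remove(pdev);
	return 0;
}

static struct platform_driver gcc_foo_driver = {
	.probe		= gcc_foo_probe,
	.remove		= gcc_foo_remove,
	.driver		= {
		.name	= "gcc-foo",
		.owner	= THIS_MODULE,
		.of_match_table = gcc_foo_match_table,
	},
};
module_platform_driver(gcc_foo_driver);

MODULE_LICENSE("GPL v2");

The probe and remove bodies shrink to single calls, which is exactly the reduction visible in the gcc-msm8660/8960/8974 and mmcc conversions that follow.
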
+ */ +#ifndef __QCOM_CLK_COMMON_H__ +#define __QCOM_CLK_COMMON_H__ + +struct platform_device; +struct regmap_config; +struct clk_regmap; +struct qcom_reset_map; + +struct qcom_cc_desc { + const struct regmap_config *config; + struct clk_regmap **clks; + size_t num_clks; + const struct qcom_reset_map *resets; + size_t num_resets; +}; + +extern int qcom_cc_probe(struct platform_device *pdev, + const struct qcom_cc_desc *desc); + +extern void qcom_cc_remove(struct platform_device *pdev); + +#endif diff --git a/drivers/clk/qcom/gcc-msm8660.c b/drivers/clk/qcom/gcc-msm8660.c index bc0b7f1fcfbe..0c4b727ae429 100644 --- a/drivers/clk/qcom/gcc-msm8660.c +++ b/drivers/clk/qcom/gcc-msm8660.c @@ -25,6 +25,7 @@ #include <dt-bindings/clock/qcom,gcc-msm8660.h> #include <dt-bindings/reset/qcom,gcc-msm8660.h> +#include "common.h" #include "clk-regmap.h" #include "clk-pll.h" #include "clk-rcg.h" @@ -2701,51 +2702,24 @@ static const struct regmap_config gcc_msm8660_regmap_config = { .fast_io = true, }; +static const struct qcom_cc_desc gcc_msm8660_desc = { + .config = &gcc_msm8660_regmap_config, + .clks = gcc_msm8660_clks, + .num_clks = ARRAY_SIZE(gcc_msm8660_clks), + .resets = gcc_msm8660_resets, + .num_resets = ARRAY_SIZE(gcc_msm8660_resets), +}; + static const struct of_device_id gcc_msm8660_match_table[] = { { .compatible = "qcom,gcc-msm8660" }, { } }; MODULE_DEVICE_TABLE(of, gcc_msm8660_match_table); -struct qcom_cc { - struct qcom_reset_controller reset; - struct clk_onecell_data data; - struct clk *clks[]; -}; - static int gcc_msm8660_probe(struct platform_device *pdev) { - void __iomem *base; - struct resource *res; - int i, ret; - struct device *dev = &pdev->dev; struct clk *clk; - struct clk_onecell_data *data; - struct clk **clks; - struct regmap *regmap; - size_t num_clks; - struct qcom_reset_controller *reset; - struct qcom_cc *cc; - - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - base = devm_ioremap_resource(dev, res); - if (IS_ERR(base)) - return PTR_ERR(base); - - regmap = devm_regmap_init_mmio(dev, base, &gcc_msm8660_regmap_config); - if (IS_ERR(regmap)) - return PTR_ERR(regmap); - - num_clks = ARRAY_SIZE(gcc_msm8660_clks); - cc = devm_kzalloc(dev, sizeof(*cc) + sizeof(*clks) * num_clks, - GFP_KERNEL); - if (!cc) - return -ENOMEM; - - clks = cc->clks; - data = &cc->data; - data->clks = clks; - data->clk_num = num_clks; + struct device *dev = &pdev->dev; /* Temporary until RPM clocks supported */ clk = clk_register_fixed_rate(dev, "cxo", NULL, CLK_IS_ROOT, 19200000); @@ -2756,39 +2730,12 @@ static int gcc_msm8660_probe(struct platform_device *pdev) if (IS_ERR(clk)) return PTR_ERR(clk); - for (i = 0; i < num_clks; i++) { - if (!gcc_msm8660_clks[i]) - continue; - clk = devm_clk_register_regmap(dev, gcc_msm8660_clks[i]); - if (IS_ERR(clk)) - return PTR_ERR(clk); - clks[i] = clk; - } - - ret = of_clk_add_provider(dev->of_node, of_clk_src_onecell_get, data); - if (ret) - return ret; - - reset = &cc->reset; - reset->rcdev.of_node = dev->of_node; - reset->rcdev.ops = &qcom_reset_ops, - reset->rcdev.owner = THIS_MODULE, - reset->rcdev.nr_resets = ARRAY_SIZE(gcc_msm8660_resets), - reset->regmap = regmap; - reset->reset_map = gcc_msm8660_resets, - platform_set_drvdata(pdev, &reset->rcdev); - - ret = reset_controller_register(&reset->rcdev); - if (ret) - of_clk_del_provider(dev->of_node); - - return ret; + return qcom_cc_probe(pdev, &gcc_msm8660_desc); } static int gcc_msm8660_remove(struct platform_device *pdev) { - of_clk_del_provider(pdev->dev.of_node); - 
reset_controller_unregister(platform_get_drvdata(pdev)); + qcom_cc_remove(pdev); return 0; } diff --git a/drivers/clk/qcom/gcc-msm8960.c b/drivers/clk/qcom/gcc-msm8960.c index fd446ab2fd98..f4ffd91901f8 100644 --- a/drivers/clk/qcom/gcc-msm8960.c +++ b/drivers/clk/qcom/gcc-msm8960.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2013, The Linux Foundation. All rights reserved. + * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved. * * This software is licensed under the terms of the GNU General Public * License version 2, as published by the Free Software Foundation, and @@ -25,6 +25,7 @@ #include <dt-bindings/clock/qcom,gcc-msm8960.h> #include <dt-bindings/reset/qcom,gcc-msm8960.h> +#include "common.h" #include "clk-regmap.h" #include "clk-pll.h" #include "clk-rcg.h" @@ -2809,7 +2810,7 @@ static const struct qcom_reset_map gcc_msm8960_resets[] = { [PPSS_PROC_RESET] = { 0x2594, 1 }, [PPSS_RESET] = { 0x2594}, [DMA_BAM_RESET] = { 0x25c0, 7 }, - [SIC_TIC_RESET] = { 0x2600, 7 }, + [SPS_TIC_H_RESET] = { 0x2600, 7 }, [SLIMBUS_H_RESET] = { 0x2620, 7 }, [SFAB_CFPB_M_RESET] = { 0x2680, 7 }, [SFAB_CFPB_S_RESET] = { 0x26c0, 7 }, @@ -2822,7 +2823,7 @@ static const struct qcom_reset_map gcc_msm8960_resets[] = { [SFAB_SFPB_M_RESET] = { 0x2780, 7 }, [SFAB_SFPB_S_RESET] = { 0x27a0, 7 }, [RPM_PROC_RESET] = { 0x27c0, 7 }, - [PMIC_SSBI2_RESET] = { 0x270c, 12 }, + [PMIC_SSBI2_RESET] = { 0x280c, 12 }, [SDC1_RESET] = { 0x2830 }, [SDC2_RESET] = { 0x2850 }, [SDC3_RESET] = { 0x2870 }, @@ -2867,6 +2868,16 @@ static const struct qcom_reset_map gcc_msm8960_resets[] = { [RIVA_RESET] = { 0x35e0 }, }; +static struct clk_regmap *gcc_apq8064_clks[] = { + [PLL8] = &pll8.clkr, + [PLL8_VOTE] = &pll8_vote, + [GSBI7_UART_SRC] = &gsbi7_uart_src.clkr, + [GSBI7_UART_CLK] = &gsbi7_uart_clk.clkr, + [GSBI7_QUP_SRC] = &gsbi7_qup_src.clkr, + [GSBI7_QUP_CLK] = &gsbi7_qup_clk.clkr, + [GSBI7_H_CLK] = &gsbi7_h_clk.clkr, +}; + static const struct regmap_config gcc_msm8960_regmap_config = { .reg_bits = 32, .reg_stride = 4, @@ -2875,51 +2886,38 @@ static const struct regmap_config gcc_msm8960_regmap_config = { .fast_io = true, }; +static const struct qcom_cc_desc gcc_msm8960_desc = { + .config = &gcc_msm8960_regmap_config, + .clks = gcc_msm8960_clks, + .num_clks = ARRAY_SIZE(gcc_msm8960_clks), + .resets = gcc_msm8960_resets, + .num_resets = ARRAY_SIZE(gcc_msm8960_resets), +}; + +static const struct qcom_cc_desc gcc_apq8064_desc = { + .config = &gcc_msm8960_regmap_config, + .clks = gcc_apq8064_clks, + .num_clks = ARRAY_SIZE(gcc_apq8064_clks), + .resets = gcc_msm8960_resets, + .num_resets = ARRAY_SIZE(gcc_msm8960_resets), +}; + static const struct of_device_id gcc_msm8960_match_table[] = { - { .compatible = "qcom,gcc-msm8960" }, + { .compatible = "qcom,gcc-msm8960", .data = &gcc_msm8960_desc }, + { .compatible = "qcom,gcc-apq8064", .data = &gcc_apq8064_desc }, { } }; MODULE_DEVICE_TABLE(of, gcc_msm8960_match_table); -struct qcom_cc { - struct qcom_reset_controller reset; - struct clk_onecell_data data; - struct clk *clks[]; -}; - static int gcc_msm8960_probe(struct platform_device *pdev) { - void __iomem *base; - struct resource *res; - int i, ret; - struct device *dev = &pdev->dev; struct clk *clk; - struct clk_onecell_data *data; - struct clk **clks; - struct regmap *regmap; - size_t num_clks; - struct qcom_reset_controller *reset; - struct qcom_cc *cc; - - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - base = devm_ioremap_resource(dev, res); - if (IS_ERR(base)) - return PTR_ERR(base); - - regmap = devm_regmap_init_mmio(dev, 
base, &gcc_msm8960_regmap_config); - if (IS_ERR(regmap)) - return PTR_ERR(regmap); - - num_clks = ARRAY_SIZE(gcc_msm8960_clks); - cc = devm_kzalloc(dev, sizeof(*cc) + sizeof(*clks) * num_clks, - GFP_KERNEL); - if (!cc) - return -ENOMEM; - - clks = cc->clks; - data = &cc->data; - data->clks = clks; - data->clk_num = num_clks; + struct device *dev = &pdev->dev; + const struct of_device_id *match; + + match = of_match_device(gcc_msm8960_match_table, &pdev->dev); + if (!match) + return -EINVAL; /* Temporary until RPM clocks supported */ clk = clk_register_fixed_rate(dev, "cxo", NULL, CLK_IS_ROOT, 19200000); @@ -2930,39 +2928,12 @@ static int gcc_msm8960_probe(struct platform_device *pdev) if (IS_ERR(clk)) return PTR_ERR(clk); - for (i = 0; i < num_clks; i++) { - if (!gcc_msm8960_clks[i]) - continue; - clk = devm_clk_register_regmap(dev, gcc_msm8960_clks[i]); - if (IS_ERR(clk)) - return PTR_ERR(clk); - clks[i] = clk; - } - - ret = of_clk_add_provider(dev->of_node, of_clk_src_onecell_get, data); - if (ret) - return ret; - - reset = &cc->reset; - reset->rcdev.of_node = dev->of_node; - reset->rcdev.ops = &qcom_reset_ops, - reset->rcdev.owner = THIS_MODULE, - reset->rcdev.nr_resets = ARRAY_SIZE(gcc_msm8960_resets), - reset->regmap = regmap; - reset->reset_map = gcc_msm8960_resets, - platform_set_drvdata(pdev, &reset->rcdev); - - ret = reset_controller_register(&reset->rcdev); - if (ret) - of_clk_del_provider(dev->of_node); - - return ret; + return qcom_cc_probe(pdev, match->data); } static int gcc_msm8960_remove(struct platform_device *pdev) { - of_clk_del_provider(pdev->dev.of_node); - reset_controller_unregister(platform_get_drvdata(pdev)); + qcom_cc_remove(pdev); return 0; } diff --git a/drivers/clk/qcom/gcc-msm8974.c b/drivers/clk/qcom/gcc-msm8974.c index 51d457e2b959..7af7c18d2144 100644 --- a/drivers/clk/qcom/gcc-msm8974.c +++ b/drivers/clk/qcom/gcc-msm8974.c @@ -25,6 +25,7 @@ #include <dt-bindings/clock/qcom,gcc-msm8974.h> #include <dt-bindings/reset/qcom,gcc-msm8974.h> +#include "common.h" #include "clk-regmap.h" #include "clk-pll.h" #include "clk-rcg.h" @@ -34,6 +35,7 @@ #define P_XO 0 #define P_GPLL0 1 #define P_GPLL1 1 +#define P_GPLL4 2 static const u8 gcc_xo_gpll0_map[] = { [P_XO] = 0, @@ -45,6 +47,18 @@ static const char *gcc_xo_gpll0[] = { "gpll0_vote", }; +static const u8 gcc_xo_gpll0_gpll4_map[] = { + [P_XO] = 0, + [P_GPLL0] = 1, + [P_GPLL4] = 5, +}; + +static const char *gcc_xo_gpll0_gpll4[] = { + "xo", + "gpll0_vote", + "gpll4_vote", +}; + #define F(f, s, h, m, n) { (f), (s), (2 * (h) - 1), (m), (n) } static struct clk_pll gpll0 = { @@ -137,6 +151,33 @@ static struct clk_regmap gpll1_vote = { }, }; +static struct clk_pll gpll4 = { + .l_reg = 0x1dc4, + .m_reg = 0x1dc8, + .n_reg = 0x1dcc, + .config_reg = 0x1dd4, + .mode_reg = 0x1dc0, + .status_reg = 0x1ddc, + .status_bit = 17, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll4", + .parent_names = (const char *[]){ "xo" }, + .num_parents = 1, + .ops = &clk_pll_ops, + }, +}; + +static struct clk_regmap gpll4_vote = { + .enable_reg = 0x1480, + .enable_mask = BIT(4), + .hw.init = &(struct clk_init_data){ + .name = "gpll4_vote", + .parent_names = (const char *[]){ "gpll4" }, + .num_parents = 1, + .ops = &clk_pll_vote_ops, + }, +}; + static const struct freq_tbl ftbl_gcc_usb30_master_clk[] = { F(125000000, P_GPLL0, 1, 5, 24), { } @@ -811,18 +852,33 @@ static const struct freq_tbl ftbl_gcc_sdcc1_4_apps_clk[] = { { } }; +static const struct freq_tbl ftbl_gcc_sdcc1_apps_clk_pro[] = { + F(144000, P_XO, 16, 3, 25), + F(400000, P_XO, 
12, 1, 4), + F(20000000, P_GPLL0, 15, 1, 2), + F(25000000, P_GPLL0, 12, 1, 2), + F(50000000, P_GPLL0, 12, 0, 0), + F(100000000, P_GPLL0, 6, 0, 0), + F(192000000, P_GPLL4, 4, 0, 0), + F(200000000, P_GPLL0, 3, 0, 0), + F(384000000, P_GPLL4, 2, 0, 0), + { } +}; + +static struct clk_init_data sdcc1_apps_clk_src_init = { + .name = "sdcc1_apps_clk_src", + .parent_names = gcc_xo_gpll0, + .num_parents = 2, + .ops = &clk_rcg2_ops, +}; + static struct clk_rcg2 sdcc1_apps_clk_src = { .cmd_rcgr = 0x04d0, .mnd_width = 8, .hid_width = 5, .parent_map = gcc_xo_gpll0_map, .freq_tbl = ftbl_gcc_sdcc1_4_apps_clk, - .clkr.hw.init = &(struct clk_init_data){ - .name = "sdcc1_apps_clk_src", - .parent_names = gcc_xo_gpll0, - .num_parents = 2, - .ops = &clk_rcg2_ops, - }, + .clkr.hw.init = &sdcc1_apps_clk_src_init, }; static struct clk_rcg2 sdcc2_apps_clk_src = { @@ -1340,7 +1396,7 @@ static struct clk_branch gcc_blsp1_uart6_apps_clk = { }; static struct clk_branch gcc_blsp2_ahb_clk = { - .halt_reg = 0x05c4, + .halt_reg = 0x0944, .halt_check = BRANCH_HALT_VOTED, .clkr = { .enable_reg = 0x1484, @@ -1994,6 +2050,38 @@ static struct clk_branch gcc_sdcc1_apps_clk = { }, }; +static struct clk_branch gcc_sdcc1_cdccal_ff_clk = { + .halt_reg = 0x04e8, + .clkr = { + .enable_reg = 0x04e8, + .enable_mask = BIT(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_sdcc1_cdccal_ff_clk", + .parent_names = (const char *[]){ + "xo" + }, + .num_parents = 1, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_sdcc1_cdccal_sleep_clk = { + .halt_reg = 0x04e4, + .clkr = { + .enable_reg = 0x04e4, + .enable_mask = BIT(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_sdcc1_cdccal_sleep_clk", + .parent_names = (const char *[]){ + "sleep_clk_src" + }, + .num_parents = 1, + .ops = &clk_branch2_ops, + }, + }, +}; + static struct clk_branch gcc_sdcc2_ahb_clk = { .halt_reg = 0x0508, .clkr = { @@ -2483,6 +2571,10 @@ static struct clk_regmap *gcc_msm8974_clocks[] = { [GCC_USB_HSIC_IO_CAL_SLEEP_CLK] = &gcc_usb_hsic_io_cal_sleep_clk.clkr, [GCC_USB_HSIC_SYSTEM_CLK] = &gcc_usb_hsic_system_clk.clkr, [GCC_MMSS_GPLL0_CLK_SRC] = &gcc_mmss_gpll0_clk_src, + [GPLL4] = NULL, + [GPLL4_VOTE] = NULL, + [GCC_SDCC1_CDCCAL_SLEEP_CLK] = NULL, + [GCC_SDCC1_CDCCAL_FF_CLK] = NULL, }; static const struct qcom_reset_map gcc_msm8974_resets[] = { @@ -2574,51 +2666,51 @@ static const struct regmap_config gcc_msm8974_regmap_config = { .fast_io = true, }; +static const struct qcom_cc_desc gcc_msm8974_desc = { + .config = &gcc_msm8974_regmap_config, + .clks = gcc_msm8974_clocks, + .num_clks = ARRAY_SIZE(gcc_msm8974_clocks), + .resets = gcc_msm8974_resets, + .num_resets = ARRAY_SIZE(gcc_msm8974_resets), +}; + static const struct of_device_id gcc_msm8974_match_table[] = { { .compatible = "qcom,gcc-msm8974" }, + { .compatible = "qcom,gcc-msm8974pro" , .data = (void *)1UL }, + { .compatible = "qcom,gcc-msm8974pro-ac", .data = (void *)1UL }, { } }; MODULE_DEVICE_TABLE(of, gcc_msm8974_match_table); -struct qcom_cc { - struct qcom_reset_controller reset; - struct clk_onecell_data data; - struct clk *clks[]; -}; +static void msm8974_pro_clock_override(void) +{ + sdcc1_apps_clk_src_init.parent_names = gcc_xo_gpll0_gpll4; + sdcc1_apps_clk_src_init.num_parents = 3; + sdcc1_apps_clk_src.freq_tbl = ftbl_gcc_sdcc1_apps_clk_pro; + sdcc1_apps_clk_src.parent_map = gcc_xo_gpll0_gpll4_map; + + gcc_msm8974_clocks[GPLL4] = &gpll4.clkr; + gcc_msm8974_clocks[GPLL4_VOTE] = &gpll4_vote; + gcc_msm8974_clocks[GCC_SDCC1_CDCCAL_SLEEP_CLK] = + 
&gcc_sdcc1_cdccal_sleep_clk.clkr; + gcc_msm8974_clocks[GCC_SDCC1_CDCCAL_FF_CLK] = + &gcc_sdcc1_cdccal_ff_clk.clkr; +} static int gcc_msm8974_probe(struct platform_device *pdev) { - void __iomem *base; - struct resource *res; - int i, ret; - struct device *dev = &pdev->dev; struct clk *clk; - struct clk_onecell_data *data; - struct clk **clks; - struct regmap *regmap; - size_t num_clks; - struct qcom_reset_controller *reset; - struct qcom_cc *cc; - - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - base = devm_ioremap_resource(dev, res); - if (IS_ERR(base)) - return PTR_ERR(base); - - regmap = devm_regmap_init_mmio(dev, base, &gcc_msm8974_regmap_config); - if (IS_ERR(regmap)) - return PTR_ERR(regmap); - - num_clks = ARRAY_SIZE(gcc_msm8974_clocks); - cc = devm_kzalloc(dev, sizeof(*cc) + sizeof(*clks) * num_clks, - GFP_KERNEL); - if (!cc) - return -ENOMEM; - - clks = cc->clks; - data = &cc->data; - data->clks = clks; - data->clk_num = num_clks; + struct device *dev = &pdev->dev; + bool pro; + const struct of_device_id *id; + + id = of_match_device(gcc_msm8974_match_table, dev); + if (!id) + return -ENODEV; + pro = !!(id->data); + + if (pro) + msm8974_pro_clock_override(); /* Temporary until RPM clocks supported */ clk = clk_register_fixed_rate(dev, "xo", NULL, CLK_IS_ROOT, 19200000); @@ -2631,39 +2723,12 @@ static int gcc_msm8974_probe(struct platform_device *pdev) if (IS_ERR(clk)) return PTR_ERR(clk); - for (i = 0; i < num_clks; i++) { - if (!gcc_msm8974_clocks[i]) - continue; - clk = devm_clk_register_regmap(dev, gcc_msm8974_clocks[i]); - if (IS_ERR(clk)) - return PTR_ERR(clk); - clks[i] = clk; - } - - ret = of_clk_add_provider(dev->of_node, of_clk_src_onecell_get, data); - if (ret) - return ret; - - reset = &cc->reset; - reset->rcdev.of_node = dev->of_node; - reset->rcdev.ops = &qcom_reset_ops, - reset->rcdev.owner = THIS_MODULE, - reset->rcdev.nr_resets = ARRAY_SIZE(gcc_msm8974_resets), - reset->regmap = regmap; - reset->reset_map = gcc_msm8974_resets, - platform_set_drvdata(pdev, &reset->rcdev); - - ret = reset_controller_register(&reset->rcdev); - if (ret) - of_clk_del_provider(dev->of_node); - - return ret; + return qcom_cc_probe(pdev, &gcc_msm8974_desc); } static int gcc_msm8974_remove(struct platform_device *pdev) { - of_clk_del_provider(pdev->dev.of_node); - reset_controller_unregister(platform_get_drvdata(pdev)); + qcom_cc_remove(pdev); return 0; } diff --git a/drivers/clk/qcom/mmcc-msm8960.c b/drivers/clk/qcom/mmcc-msm8960.c index f9b59c7e48e9..12f3c0b64fcd 100644 --- a/drivers/clk/qcom/mmcc-msm8960.c +++ b/drivers/clk/qcom/mmcc-msm8960.c @@ -26,6 +26,7 @@ #include <dt-bindings/clock/qcom,mmcc-msm8960.h> #include <dt-bindings/reset/qcom,mmcc-msm8960.h> +#include "common.h" #include "clk-regmap.h" #include "clk-pll.h" #include "clk-rcg.h" @@ -2222,85 +2223,28 @@ static const struct regmap_config mmcc_msm8960_regmap_config = { .fast_io = true, }; +static const struct qcom_cc_desc mmcc_msm8960_desc = { + .config = &mmcc_msm8960_regmap_config, + .clks = mmcc_msm8960_clks, + .num_clks = ARRAY_SIZE(mmcc_msm8960_clks), + .resets = mmcc_msm8960_resets, + .num_resets = ARRAY_SIZE(mmcc_msm8960_resets), +}; + static const struct of_device_id mmcc_msm8960_match_table[] = { { .compatible = "qcom,mmcc-msm8960" }, { } }; MODULE_DEVICE_TABLE(of, mmcc_msm8960_match_table); -struct qcom_cc { - struct qcom_reset_controller reset; - struct clk_onecell_data data; - struct clk *clks[]; -}; - static int mmcc_msm8960_probe(struct platform_device *pdev) { - void __iomem *base; - struct resource 
*res; - int i, ret; - struct device *dev = &pdev->dev; - struct clk *clk; - struct clk_onecell_data *data; - struct clk **clks; - struct regmap *regmap; - size_t num_clks; - struct qcom_reset_controller *reset; - struct qcom_cc *cc; - - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - base = devm_ioremap_resource(dev, res); - if (IS_ERR(base)) - return PTR_ERR(base); - - regmap = devm_regmap_init_mmio(dev, base, &mmcc_msm8960_regmap_config); - if (IS_ERR(regmap)) - return PTR_ERR(regmap); - - num_clks = ARRAY_SIZE(mmcc_msm8960_clks); - cc = devm_kzalloc(dev, sizeof(*cc) + sizeof(*clks) * num_clks, - GFP_KERNEL); - if (!cc) - return -ENOMEM; - - clks = cc->clks; - data = &cc->data; - data->clks = clks; - data->clk_num = num_clks; - - for (i = 0; i < num_clks; i++) { - if (!mmcc_msm8960_clks[i]) - continue; - clk = devm_clk_register_regmap(dev, mmcc_msm8960_clks[i]); - if (IS_ERR(clk)) - return PTR_ERR(clk); - clks[i] = clk; - } - - ret = of_clk_add_provider(dev->of_node, of_clk_src_onecell_get, data); - if (ret) - return ret; - - reset = &cc->reset; - reset->rcdev.of_node = dev->of_node; - reset->rcdev.ops = &qcom_reset_ops, - reset->rcdev.owner = THIS_MODULE, - reset->rcdev.nr_resets = ARRAY_SIZE(mmcc_msm8960_resets), - reset->regmap = regmap; - reset->reset_map = mmcc_msm8960_resets, - platform_set_drvdata(pdev, &reset->rcdev); - - ret = reset_controller_register(&reset->rcdev); - if (ret) - of_clk_del_provider(dev->of_node); - - return ret; + return qcom_cc_probe(pdev, &mmcc_msm8960_desc); } static int mmcc_msm8960_remove(struct platform_device *pdev) { - of_clk_del_provider(pdev->dev.of_node); - reset_controller_unregister(platform_get_drvdata(pdev)); + qcom_cc_remove(pdev); return 0; } diff --git a/drivers/clk/qcom/mmcc-msm8974.c b/drivers/clk/qcom/mmcc-msm8974.c index c95774514b81..c65b90515872 100644 --- a/drivers/clk/qcom/mmcc-msm8974.c +++ b/drivers/clk/qcom/mmcc-msm8974.c @@ -25,6 +25,7 @@ #include <dt-bindings/clock/qcom,mmcc-msm8974.h> #include <dt-bindings/reset/qcom,mmcc-msm8974.h> +#include "common.h" #include "clk-regmap.h" #include "clk-pll.h" #include "clk-rcg.h" @@ -40,9 +41,11 @@ #define P_EDPVCO 3 #define P_GPLL1 4 #define P_DSI0PLL 4 +#define P_DSI0PLL_BYTE 4 #define P_MMPLL2 4 #define P_MMPLL3 4 #define P_DSI1PLL 5 +#define P_DSI1PLL_BYTE 5 static const u8 mmcc_xo_mmpll0_mmpll1_gpll0_map[] = { [P_XO] = 0, @@ -160,6 +163,24 @@ static const char *mmcc_xo_dsi_hdmi_edp_gpll0[] = { "dsi1pll", }; +static const u8 mmcc_xo_dsibyte_hdmi_edp_gpll0_map[] = { + [P_XO] = 0, + [P_EDPLINK] = 4, + [P_HDMIPLL] = 3, + [P_GPLL0] = 5, + [P_DSI0PLL_BYTE] = 1, + [P_DSI1PLL_BYTE] = 2, +}; + +static const char *mmcc_xo_dsibyte_hdmi_edp_gpll0[] = { + "xo", + "edp_link_clk", + "hdmipll", + "gpll0_vote", + "dsi0pllbyte", + "dsi1pllbyte", +}; + #define F(f, s, h, m, n) { (f), (s), (2 * (h) - 1), (m), (n) } static struct clk_pll mmpll0 = { @@ -169,6 +190,7 @@ static struct clk_pll mmpll0 = { .config_reg = 0x0014, .mode_reg = 0x0000, .status_reg = 0x001c, + .status_bit = 17, .clkr.hw.init = &(struct clk_init_data){ .name = "mmpll0", .parent_names = (const char *[]){ "xo" }, @@ -192,9 +214,10 @@ static struct clk_pll mmpll1 = { .l_reg = 0x0044, .m_reg = 0x0048, .n_reg = 0x004c, - .config_reg = 0x0054, + .config_reg = 0x0050, .mode_reg = 0x0040, .status_reg = 0x005c, + .status_bit = 17, .clkr.hw.init = &(struct clk_init_data){ .name = "mmpll1", .parent_names = (const char *[]){ "xo" }, @@ -218,7 +241,7 @@ static struct clk_pll mmpll2 = { .l_reg = 0x4104, .m_reg = 0x4108, .n_reg = 0x410c, - 
.config_reg = 0x4114, + .config_reg = 0x4110, .mode_reg = 0x4100, .status_reg = 0x411c, .clkr.hw.init = &(struct clk_init_data){ @@ -233,9 +256,10 @@ static struct clk_pll mmpll3 = { .l_reg = 0x0084, .m_reg = 0x0088, .n_reg = 0x008c, - .config_reg = 0x0094, + .config_reg = 0x0090, .mode_reg = 0x0080, .status_reg = 0x009c, + .status_bit = 17, .clkr.hw.init = &(struct clk_init_data){ .name = "mmpll3", .parent_names = (const char *[]){ "xo" }, @@ -496,15 +520,8 @@ static struct clk_rcg2 jpeg2_clk_src = { }, }; -static struct freq_tbl ftbl_mdss_pclk0_clk[] = { - F(125000000, P_DSI0PLL, 2, 0, 0), - F(250000000, P_DSI0PLL, 1, 0, 0), - { } -}; - -static struct freq_tbl ftbl_mdss_pclk1_clk[] = { - F(125000000, P_DSI1PLL, 2, 0, 0), - F(250000000, P_DSI1PLL, 1, 0, 0), +static struct freq_tbl pixel_freq_tbl[] = { + { .src = P_DSI0PLL }, { } }; @@ -513,12 +530,13 @@ static struct clk_rcg2 pclk0_clk_src = { .mnd_width = 8, .hid_width = 5, .parent_map = mmcc_xo_dsi_hdmi_edp_gpll0_map, - .freq_tbl = ftbl_mdss_pclk0_clk, + .freq_tbl = pixel_freq_tbl, .clkr.hw.init = &(struct clk_init_data){ .name = "pclk0_clk_src", .parent_names = mmcc_xo_dsi_hdmi_edp_gpll0, .num_parents = 6, - .ops = &clk_rcg2_ops, + .ops = &clk_pixel_ops, + .flags = CLK_SET_RATE_PARENT, }, }; @@ -527,12 +545,13 @@ static struct clk_rcg2 pclk1_clk_src = { .mnd_width = 8, .hid_width = 5, .parent_map = mmcc_xo_dsi_hdmi_edp_gpll0_map, - .freq_tbl = ftbl_mdss_pclk1_clk, + .freq_tbl = pixel_freq_tbl, .clkr.hw.init = &(struct clk_init_data){ .name = "pclk1_clk_src", .parent_names = mmcc_xo_dsi_hdmi_edp_gpll0, .num_parents = 6, - .ops = &clk_rcg2_ops, + .ops = &clk_pixel_ops, + .flags = CLK_SET_RATE_PARENT, }, }; @@ -750,41 +769,36 @@ static struct clk_rcg2 cpp_clk_src = { }, }; -static struct freq_tbl ftbl_mdss_byte0_clk[] = { - F(93750000, P_DSI0PLL, 8, 0, 0), - F(187500000, P_DSI0PLL, 4, 0, 0), - { } -}; - -static struct freq_tbl ftbl_mdss_byte1_clk[] = { - F(93750000, P_DSI1PLL, 8, 0, 0), - F(187500000, P_DSI1PLL, 4, 0, 0), +static struct freq_tbl byte_freq_tbl[] = { + { .src = P_DSI0PLL_BYTE }, { } }; static struct clk_rcg2 byte0_clk_src = { .cmd_rcgr = 0x2120, .hid_width = 5, - .parent_map = mmcc_xo_dsi_hdmi_edp_gpll0_map, - .freq_tbl = ftbl_mdss_byte0_clk, + .parent_map = mmcc_xo_dsibyte_hdmi_edp_gpll0_map, + .freq_tbl = byte_freq_tbl, .clkr.hw.init = &(struct clk_init_data){ .name = "byte0_clk_src", - .parent_names = mmcc_xo_dsi_hdmi_edp_gpll0, + .parent_names = mmcc_xo_dsibyte_hdmi_edp_gpll0, .num_parents = 6, - .ops = &clk_rcg2_ops, + .ops = &clk_byte_ops, + .flags = CLK_SET_RATE_PARENT, }, }; static struct clk_rcg2 byte1_clk_src = { .cmd_rcgr = 0x2140, .hid_width = 5, - .parent_map = mmcc_xo_dsi_hdmi_edp_gpll0_map, - .freq_tbl = ftbl_mdss_byte1_clk, + .parent_map = mmcc_xo_dsibyte_hdmi_edp_gpll0_map, + .freq_tbl = byte_freq_tbl, .clkr.hw.init = &(struct clk_init_data){ .name = "byte1_clk_src", - .parent_names = mmcc_xo_dsi_hdmi_edp_gpll0, + .parent_names = mmcc_xo_dsibyte_hdmi_edp_gpll0, .num_parents = 6, - .ops = &clk_rcg2_ops, + .ops = &clk_byte_ops, + .flags = CLK_SET_RATE_PARENT, }, }; @@ -822,12 +836,12 @@ static struct clk_rcg2 edplink_clk_src = { .parent_names = mmcc_xo_dsi_hdmi_edp_gpll0, .num_parents = 6, .ops = &clk_rcg2_ops, + .flags = CLK_SET_RATE_PARENT, }, }; -static struct freq_tbl ftbl_mdss_edppixel_clk[] = { - F(175000000, P_EDPVCO, 2, 0, 0), - F(350000000, P_EDPVCO, 11, 0, 0), +static struct freq_tbl edp_pixel_freq_tbl[] = { + { .src = P_EDPVCO }, { } }; @@ -836,12 +850,12 @@ static struct clk_rcg2 edppixel_clk_src 
= { .mnd_width = 8, .hid_width = 5, .parent_map = mmcc_xo_dsi_hdmi_edp_map, - .freq_tbl = ftbl_mdss_edppixel_clk, + .freq_tbl = edp_pixel_freq_tbl, .clkr.hw.init = &(struct clk_init_data){ .name = "edppixel_clk_src", .parent_names = mmcc_xo_dsi_hdmi_edp, .num_parents = 6, - .ops = &clk_rcg2_ops, + .ops = &clk_edp_pixel_ops, }, }; @@ -853,11 +867,11 @@ static struct freq_tbl ftbl_mdss_esc0_1_clk[] = { static struct clk_rcg2 esc0_clk_src = { .cmd_rcgr = 0x2160, .hid_width = 5, - .parent_map = mmcc_xo_dsi_hdmi_edp_gpll0_map, + .parent_map = mmcc_xo_dsibyte_hdmi_edp_gpll0_map, .freq_tbl = ftbl_mdss_esc0_1_clk, .clkr.hw.init = &(struct clk_init_data){ .name = "esc0_clk_src", - .parent_names = mmcc_xo_dsi_hdmi_edp_gpll0, + .parent_names = mmcc_xo_dsibyte_hdmi_edp_gpll0, .num_parents = 6, .ops = &clk_rcg2_ops, }, @@ -866,26 +880,18 @@ static struct clk_rcg2 esc0_clk_src = { static struct clk_rcg2 esc1_clk_src = { .cmd_rcgr = 0x2180, .hid_width = 5, - .parent_map = mmcc_xo_dsi_hdmi_edp_gpll0_map, + .parent_map = mmcc_xo_dsibyte_hdmi_edp_gpll0_map, .freq_tbl = ftbl_mdss_esc0_1_clk, .clkr.hw.init = &(struct clk_init_data){ .name = "esc1_clk_src", - .parent_names = mmcc_xo_dsi_hdmi_edp_gpll0, + .parent_names = mmcc_xo_dsibyte_hdmi_edp_gpll0, .num_parents = 6, .ops = &clk_rcg2_ops, }, }; -static struct freq_tbl ftbl_mdss_extpclk_clk[] = { - F(25200000, P_HDMIPLL, 1, 0, 0), - F(27000000, P_HDMIPLL, 1, 0, 0), - F(27030000, P_HDMIPLL, 1, 0, 0), - F(65000000, P_HDMIPLL, 1, 0, 0), - F(74250000, P_HDMIPLL, 1, 0, 0), - F(108000000, P_HDMIPLL, 1, 0, 0), - F(148500000, P_HDMIPLL, 1, 0, 0), - F(268500000, P_HDMIPLL, 1, 0, 0), - F(297000000, P_HDMIPLL, 1, 0, 0), +static struct freq_tbl extpclk_freq_tbl[] = { + { .src = P_HDMIPLL }, { } }; @@ -893,12 +899,13 @@ static struct clk_rcg2 extpclk_clk_src = { .cmd_rcgr = 0x2060, .hid_width = 5, .parent_map = mmcc_xo_dsi_hdmi_edp_gpll0_map, - .freq_tbl = ftbl_mdss_extpclk_clk, + .freq_tbl = extpclk_freq_tbl, .clkr.hw.init = &(struct clk_init_data){ .name = "extpclk_clk_src", .parent_names = mmcc_xo_dsi_hdmi_edp_gpll0, .num_parents = 6, - .ops = &clk_rcg2_ops, + .ops = &clk_byte_ops, + .flags = CLK_SET_RATE_PARENT, }, }; @@ -2318,7 +2325,7 @@ static const struct pll_config mmpll1_config = { .vco_val = 0x0, .vco_mask = 0x3 << 20, .pre_div_val = 0x0, - .pre_div_mask = 0x3 << 12, + .pre_div_mask = 0x7 << 12, .post_div_val = 0x0, .post_div_mask = 0x3 << 8, .mn_ena_mask = BIT(24), @@ -2332,7 +2339,7 @@ static struct pll_config mmpll3_config = { .vco_val = 0x0, .vco_mask = 0x3 << 20, .pre_div_val = 0x0, - .pre_div_mask = 0x3 << 12, + .pre_div_mask = 0x7 << 12, .post_div_val = 0x0, .post_div_mask = 0x3 << 8, .mn_ena_mask = BIT(24), @@ -2524,88 +2531,39 @@ static const struct regmap_config mmcc_msm8974_regmap_config = { .fast_io = true, }; +static const struct qcom_cc_desc mmcc_msm8974_desc = { + .config = &mmcc_msm8974_regmap_config, + .clks = mmcc_msm8974_clocks, + .num_clks = ARRAY_SIZE(mmcc_msm8974_clocks), + .resets = mmcc_msm8974_resets, + .num_resets = ARRAY_SIZE(mmcc_msm8974_resets), +}; + static const struct of_device_id mmcc_msm8974_match_table[] = { { .compatible = "qcom,mmcc-msm8974" }, { } }; MODULE_DEVICE_TABLE(of, mmcc_msm8974_match_table); -struct qcom_cc { - struct qcom_reset_controller reset; - struct clk_onecell_data data; - struct clk *clks[]; -}; - static int mmcc_msm8974_probe(struct platform_device *pdev) { - void __iomem *base; - struct resource *res; - int i, ret; - struct device *dev = &pdev->dev; - struct clk *clk; - struct clk_onecell_data *data; - 
struct clk **clks; + int ret; struct regmap *regmap; - size_t num_clks; - struct qcom_reset_controller *reset; - struct qcom_cc *cc; - - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - base = devm_ioremap_resource(dev, res); - if (IS_ERR(base)) - return PTR_ERR(base); - - regmap = devm_regmap_init_mmio(dev, base, &mmcc_msm8974_regmap_config); - if (IS_ERR(regmap)) - return PTR_ERR(regmap); - - num_clks = ARRAY_SIZE(mmcc_msm8974_clocks); - cc = devm_kzalloc(dev, sizeof(*cc) + sizeof(*clks) * num_clks, - GFP_KERNEL); - if (!cc) - return -ENOMEM; - - clks = cc->clks; - data = &cc->data; - data->clks = clks; - data->clk_num = num_clks; - - clk_pll_configure_sr_hpm_lp(&mmpll1, regmap, &mmpll1_config, true); - clk_pll_configure_sr_hpm_lp(&mmpll3, regmap, &mmpll3_config, false); - - for (i = 0; i < num_clks; i++) { - if (!mmcc_msm8974_clocks[i]) - continue; - clk = devm_clk_register_regmap(dev, mmcc_msm8974_clocks[i]); - if (IS_ERR(clk)) - return PTR_ERR(clk); - clks[i] = clk; - } - ret = of_clk_add_provider(dev->of_node, of_clk_src_onecell_get, data); + ret = qcom_cc_probe(pdev, &mmcc_msm8974_desc); if (ret) return ret; - reset = &cc->reset; - reset->rcdev.of_node = dev->of_node; - reset->rcdev.ops = &qcom_reset_ops, - reset->rcdev.owner = THIS_MODULE, - reset->rcdev.nr_resets = ARRAY_SIZE(mmcc_msm8974_resets), - reset->regmap = regmap; - reset->reset_map = mmcc_msm8974_resets, - platform_set_drvdata(pdev, &reset->rcdev); - - ret = reset_controller_register(&reset->rcdev); - if (ret) - of_clk_del_provider(dev->of_node); + regmap = dev_get_regmap(&pdev->dev, NULL); + clk_pll_configure_sr_hpm_lp(&mmpll1, regmap, &mmpll1_config, true); + clk_pll_configure_sr_hpm_lp(&mmpll3, regmap, &mmpll3_config, false); - return ret; + return 0; } static int mmcc_msm8974_remove(struct platform_device *pdev) { - of_clk_del_provider(pdev->dev.of_node); - reset_controller_unregister(platform_get_drvdata(pdev)); + qcom_cc_remove(pdev); return 0; } diff --git a/drivers/clk/samsung/clk-exynos4.c b/drivers/clk/samsung/clk-exynos4.c index c4df294bb7fb..4f150c9dd38c 100644 --- a/drivers/clk/samsung/clk-exynos4.c +++ b/drivers/clk/samsung/clk-exynos4.c @@ -324,7 +324,7 @@ static struct syscore_ops exynos4_clk_syscore_ops = { .resume = exynos4_clk_resume, }; -static void exynos4_clk_sleep_init(void) +static void __init exynos4_clk_sleep_init(void) { exynos4_save_common = samsung_clk_alloc_reg_dump(exynos4_clk_regs, ARRAY_SIZE(exynos4_clk_regs)); @@ -359,7 +359,7 @@ err_warn: __func__); } #else -static void exynos4_clk_sleep_init(void) {} +static void __init exynos4_clk_sleep_init(void) {} #endif /* list of all parent clock list */ diff --git a/drivers/clk/shmobile/Makefile b/drivers/clk/shmobile/Makefile index 5404cb931ebf..e0029237827a 100644 --- a/drivers/clk/shmobile/Makefile +++ b/drivers/clk/shmobile/Makefile @@ -1,5 +1,7 @@ obj-$(CONFIG_ARCH_EMEV2) += clk-emev2.o obj-$(CONFIG_ARCH_R7S72100) += clk-rz.o +obj-$(CONFIG_ARCH_R8A7740) += clk-r8a7740.o +obj-$(CONFIG_ARCH_R8A7779) += clk-r8a7779.o obj-$(CONFIG_ARCH_R8A7790) += clk-rcar-gen2.o obj-$(CONFIG_ARCH_R8A7791) += clk-rcar-gen2.o obj-$(CONFIG_ARCH_SHMOBILE_MULTI) += clk-div6.o diff --git a/drivers/clk/shmobile/clk-mstp.c b/drivers/clk/shmobile/clk-mstp.c index 1f6324e29a80..2d2fe773ac81 100644 --- a/drivers/clk/shmobile/clk-mstp.c +++ b/drivers/clk/shmobile/clk-mstp.c @@ -112,7 +112,7 @@ static int cpg_mstp_clock_is_enabled(struct clk_hw *hw) else value = clk_readl(group->smstpcr); - return !!(value & BIT(clock->bit_index)); + return !(value & 
BIT(clock->bit_index)); } static const struct clk_ops cpg_mstp_clock_ops = { diff --git a/drivers/clk/shmobile/clk-r8a7740.c b/drivers/clk/shmobile/clk-r8a7740.c new file mode 100644 index 000000000000..1e2eaae21e01 --- /dev/null +++ b/drivers/clk/shmobile/clk-r8a7740.c @@ -0,0 +1,199 @@ +/* + * r8a7740 Core CPG Clocks + * + * Copyright (C) 2014 Ulrich Hecht + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; version 2 of the License. + */ + +#include <linux/clk-provider.h> +#include <linux/clkdev.h> +#include <linux/clk/shmobile.h> +#include <linux/init.h> +#include <linux/kernel.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/spinlock.h> + +struct r8a7740_cpg { + struct clk_onecell_data data; + spinlock_t lock; + void __iomem *reg; +}; + +#define CPG_FRQCRA 0x00 +#define CPG_FRQCRB 0x04 +#define CPG_PLLC2CR 0x2c +#define CPG_USBCKCR 0x8c +#define CPG_FRQCRC 0xe0 + +#define CLK_ENABLE_ON_INIT BIT(0) + +struct div4_clk { + const char *name; + unsigned int reg; + unsigned int shift; + int flags; +}; + +static struct div4_clk div4_clks[] = { + { "i", CPG_FRQCRA, 20, CLK_ENABLE_ON_INIT }, + { "zg", CPG_FRQCRA, 16, CLK_ENABLE_ON_INIT }, + { "b", CPG_FRQCRA, 8, CLK_ENABLE_ON_INIT }, + { "m1", CPG_FRQCRA, 4, CLK_ENABLE_ON_INIT }, + { "hp", CPG_FRQCRB, 4, 0 }, + { "hpp", CPG_FRQCRC, 20, 0 }, + { "usbp", CPG_FRQCRC, 16, 0 }, + { "s", CPG_FRQCRC, 12, 0 }, + { "zb", CPG_FRQCRC, 8, 0 }, + { "m3", CPG_FRQCRC, 4, 0 }, + { "cp", CPG_FRQCRC, 0, 0 }, + { NULL, 0, 0, 0 }, +}; + +static const struct clk_div_table div4_div_table[] = { + { 0, 2 }, { 1, 3 }, { 2, 4 }, { 3, 6 }, { 4, 8 }, { 5, 12 }, + { 6, 16 }, { 7, 18 }, { 8, 24 }, { 9, 32 }, { 10, 36 }, { 11, 48 }, + { 13, 72 }, { 14, 96 }, { 0, 0 } +}; + +static u32 cpg_mode __initdata; + +static struct clk * __init +r8a7740_cpg_register_clock(struct device_node *np, struct r8a7740_cpg *cpg, + const char *name) +{ + const struct clk_div_table *table = NULL; + const char *parent_name; + unsigned int shift, reg; + unsigned int mult = 1; + unsigned int div = 1; + + if (!strcmp(name, "r")) { + switch (cpg_mode & (BIT(2) | BIT(1))) { + case BIT(1) | BIT(2): + /* extal1 */ + parent_name = of_clk_get_parent_name(np, 0); + div = 2048; + break; + case BIT(2): + /* extal1 */ + parent_name = of_clk_get_parent_name(np, 0); + div = 1024; + break; + default: + /* extalr */ + parent_name = of_clk_get_parent_name(np, 2); + break; + } + } else if (!strcmp(name, "system")) { + parent_name = of_clk_get_parent_name(np, 0); + if (cpg_mode & BIT(1)) + div = 2; + } else if (!strcmp(name, "pllc0")) { + /* PLLC0/1 are configurable multiplier clocks. Register them as + * fixed factor clocks for now as there's no generic multiplier + * clock implementation and we currently have no need to change + * the multiplier value. 
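
Registering PLLC0/1/2 as fixed-factor clocks works because the driver only needs the multiplier the firmware already programmed: it is read back once from FRQCRC/FRQCRA/PLLC2CR at init time, as the code just below does. A small sketch of that decoding, using invented register values and an assumed EXTAL1 rate with the MD1 mode pin clear:

/* Sketch of the PLLC multiplier decoding below; register values and
 * the EXTAL1 rate are hypothetical. */
#include <stdio.h>

int main(void)
{
	unsigned long extal1 = 24000000;	/* hypothetical EXTAL1 */
	unsigned int frqcrc = 0x2f000000;	/* hypothetical CPG_FRQCRC */
	unsigned int frqcra = 0x27000000;	/* hypothetical CPG_FRQCRA */
	unsigned long sys, pllc0, pllc1;

	sys = extal1;				/* MD1 clear: no extra /2 */
	pllc0 = sys * (((frqcrc >> 24) & 0x7f) + 1);
	pllc1 = sys * (((frqcra >> 24) & 0x7f) + 1) / 2;

	printf("system %lu Hz, pllc0 %lu Hz, pllc1 %lu Hz\n",
	       sys, pllc0, pllc1);
	return 0;
}

With these values pllc0 comes out at 1.152 GHz and pllc1 at 480 MHz; the real driver simply hands the same mult/div pair to clk_register_fixed_factor().
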
+ */ + u32 value = clk_readl(cpg->reg + CPG_FRQCRC); + parent_name = "system"; + mult = ((value >> 24) & 0x7f) + 1; + } else if (!strcmp(name, "pllc1")) { + u32 value = clk_readl(cpg->reg + CPG_FRQCRA); + parent_name = "system"; + mult = ((value >> 24) & 0x7f) + 1; + div = 2; + } else if (!strcmp(name, "pllc2")) { + u32 value = clk_readl(cpg->reg + CPG_PLLC2CR); + parent_name = "system"; + mult = ((value >> 24) & 0x3f) + 1; + } else if (!strcmp(name, "usb24s")) { + u32 value = clk_readl(cpg->reg + CPG_USBCKCR); + if (value & BIT(7)) + /* extal2 */ + parent_name = of_clk_get_parent_name(np, 1); + else + parent_name = "system"; + if (!(value & BIT(6))) + div = 2; + } else { + struct div4_clk *c; + for (c = div4_clks; c->name; c++) { + if (!strcmp(name, c->name)) { + parent_name = "pllc1"; + table = div4_div_table; + reg = c->reg; + shift = c->shift; + break; + } + } + if (!c->name) + return ERR_PTR(-EINVAL); + } + + if (!table) { + return clk_register_fixed_factor(NULL, name, parent_name, 0, + mult, div); + } else { + return clk_register_divider_table(NULL, name, parent_name, 0, + cpg->reg + reg, shift, 4, 0, + table, &cpg->lock); + } +} + +static void __init r8a7740_cpg_clocks_init(struct device_node *np) +{ + struct r8a7740_cpg *cpg; + struct clk **clks; + unsigned int i; + int num_clks; + + if (of_property_read_u32(np, "renesas,mode", &cpg_mode)) + pr_warn("%s: missing renesas,mode property\n", __func__); + + num_clks = of_property_count_strings(np, "clock-output-names"); + if (num_clks < 0) { + pr_err("%s: failed to count clocks\n", __func__); + return; + } + + cpg = kzalloc(sizeof(*cpg), GFP_KERNEL); + clks = kzalloc(num_clks * sizeof(*clks), GFP_KERNEL); + if (cpg == NULL || clks == NULL) { + /* We're leaking memory on purpose, there's no point in cleaning + * up as the system won't boot anyway. + */ + return; + } + + spin_lock_init(&cpg->lock); + + cpg->data.clks = clks; + cpg->data.clk_num = num_clks; + + cpg->reg = of_iomap(np, 0); + if (WARN_ON(cpg->reg == NULL)) + return; + + for (i = 0; i < num_clks; ++i) { + const char *name; + struct clk *clk; + + of_property_read_string_index(np, "clock-output-names", i, + &name); + + clk = r8a7740_cpg_register_clock(np, cpg, name); + if (IS_ERR(clk)) + pr_err("%s: failed to register %s %s clock (%ld)\n", + __func__, np->name, name, PTR_ERR(clk)); + else + cpg->data.clks[i] = clk; + } + + of_clk_add_provider(np, of_clk_src_onecell_get, &cpg->data); +} +CLK_OF_DECLARE(r8a7740_cpg_clks, "renesas,r8a7740-cpg-clocks", + r8a7740_cpg_clocks_init); diff --git a/drivers/clk/shmobile/clk-r8a7779.c b/drivers/clk/shmobile/clk-r8a7779.c new file mode 100644 index 000000000000..652ecacb6daf --- /dev/null +++ b/drivers/clk/shmobile/clk-r8a7779.c @@ -0,0 +1,180 @@ +/* + * r8a7779 Core CPG Clocks + * + * Copyright (C) 2013, 2014 Horms Solutions Ltd. + * + * Contact: Simon Horman <horms@verge.net.au> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; version 2 of the License. 
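
The DIV4 clocks in the r8a7740 driver above are plain dividers on pllc1: each is a 4-bit field in FRQCRA/FRQCRB/FRQCRC whose value indexes div4_div_table. A quick sketch of the lookup for the "hp" clock (the register and pllc1 values are invented):

/* Sketch of the DIV4 lookup above: "hp" is the 4-bit field at bit 4 of
 * FRQCRB, indexing div4_div_table.  Register and pllc1 values are
 * hypothetical. */
#include <stdio.h>

struct div_entry { unsigned int val; unsigned int div; };

int main(void)
{
	static const struct div_entry div4_div_table[] = {
		{ 0, 2 }, { 1, 3 }, { 2, 4 }, { 3, 6 }, { 4, 8 }, { 5, 12 },
		{ 6, 16 }, { 7, 18 }, { 8, 24 }, { 9, 32 }, { 10, 36 },
		{ 11, 48 }, { 13, 72 }, { 14, 96 }, { 0, 0 }
	};
	unsigned long pllc1 = 480000000;	/* hypothetical */
	unsigned int frqcrb = 0x00000050;	/* hypothetical */
	unsigned int field = (frqcrb >> 4) & 0xf;
	const struct div_entry *e;

	for (e = div4_div_table; e->div; e++)
		if (e->val == field)
			break;
	if (e->div)
		printf("hp = %lu Hz (pllc1 / %u)\n", pllc1 / e->div, e->div);
	else
		printf("reserved divider setting %u\n", field);
	return 0;
}

A field value of 5 selects /12, so the assumed 480 MHz pllc1 yields a 40 MHz hp clock; values missing from the table (such as 12) fall through to the reserved case.
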
+ */ + +#include <linux/clk-provider.h> +#include <linux/clkdev.h> +#include <linux/clk/shmobile.h> +#include <linux/init.h> +#include <linux/kernel.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/spinlock.h> + +#include <dt-bindings/clock/r8a7779-clock.h> + +#define CPG_NUM_CLOCKS (R8A7779_CLK_OUT + 1) + +struct r8a7779_cpg { + struct clk_onecell_data data; + spinlock_t lock; + void __iomem *reg; +}; + +/* ----------------------------------------------------------------------------- + * CPG Clock Data + */ + +/* + * MD1 = 1 MD1 = 0 + * (PLLA = 1500) (PLLA = 1600) + * (MHz) (MHz) + *------------------------------------------------+-------------------- + * clkz 1000 (2/3) 800 (1/2) + * clkzs 250 (1/6) 200 (1/8) + * clki 750 (1/2) 800 (1/2) + * clks 250 (1/6) 200 (1/8) + * clks1 125 (1/12) 100 (1/16) + * clks3 187.5 (1/8) 200 (1/8) + * clks4 93.7 (1/16) 100 (1/16) + * clkp 62.5 (1/24) 50 (1/32) + * clkg 62.5 (1/24) 66.6 (1/24) + * clkb, CLKOUT + * (MD2 = 0) 62.5 (1/24) 66.6 (1/24) + * (MD2 = 1) 41.6 (1/36) 50 (1/32) + */ + +#define CPG_CLK_CONFIG_INDEX(md) (((md) & (BIT(2)|BIT(1))) >> 1) + +struct cpg_clk_config { + unsigned int z_mult; + unsigned int z_div; + unsigned int zs_and_s_div; + unsigned int s1_div; + unsigned int p_div; + unsigned int b_and_out_div; +}; + +static const struct cpg_clk_config cpg_clk_configs[4] __initconst = { + { 1, 2, 8, 16, 32, 24 }, + { 2, 3, 6, 12, 24, 24 }, + { 1, 2, 8, 16, 32, 32 }, + { 2, 3, 6, 12, 24, 36 }, +}; + +/* + * MD PLLA Ratio + * 12 11 + *------------------------ + * 0 0 x42 + * 0 1 x48 + * 1 0 x56 + * 1 1 x64 + */ + +#define CPG_PLLA_MULT_INDEX(md) (((md) & (BIT(12)|BIT(11))) >> 11) + +static const unsigned int cpg_plla_mult[4] __initconst = { 42, 48, 56, 64 }; + +/* ----------------------------------------------------------------------------- + * Initialization + */ + +static u32 cpg_mode __initdata; + +static struct clk * __init +r8a7779_cpg_register_clock(struct device_node *np, struct r8a7779_cpg *cpg, + const struct cpg_clk_config *config, + unsigned int plla_mult, const char *name) +{ + const char *parent_name = "plla"; + unsigned int mult = 1; + unsigned int div = 1; + + if (!strcmp(name, "plla")) { + parent_name = of_clk_get_parent_name(np, 0); + mult = plla_mult; + } else if (!strcmp(name, "z")) { + div = config->z_div; + mult = config->z_mult; + } else if (!strcmp(name, "zs") || !strcmp(name, "s")) { + div = config->zs_and_s_div; + } else if (!strcmp(name, "s1")) { + div = config->s1_div; + } else if (!strcmp(name, "p")) { + div = config->p_div; + } else if (!strcmp(name, "b") || !strcmp(name, "out")) { + div = config->b_and_out_div; + } else { + return ERR_PTR(-EINVAL); + } + + return clk_register_fixed_factor(NULL, name, parent_name, 0, mult, div); +} + +static void __init r8a7779_cpg_clocks_init(struct device_node *np) +{ + const struct cpg_clk_config *config; + struct r8a7779_cpg *cpg; + struct clk **clks; + unsigned int i, plla_mult; + int num_clks; + + num_clks = of_property_count_strings(np, "clock-output-names"); + if (num_clks < 0) { + pr_err("%s: failed to count clocks\n", __func__); + return; + } + + cpg = kzalloc(sizeof(*cpg), GFP_KERNEL); + clks = kzalloc(CPG_NUM_CLOCKS * sizeof(*clks), GFP_KERNEL); + if (cpg == NULL || clks == NULL) { + /* We're leaking memory on purpose, there's no point in cleaning + * up as the system won't boot anyway. 
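
The r8a7779 tables above reduce the whole CPG to two lookups on the mode pins: MD1/MD2 select one of four divider sets and MD11/MD12 select the PLLA multiplier, matching the frequency table in the comment. A standalone sketch with an invented mode-pin value and EXTAL rate:

/* Sketch of the r8a7779 mode-pin decoding above; the mode value and
 * EXTAL rate are hypothetical (MD1 = 1, MD2 = 0, MD11 = 1, MD12 = 0). */
#include <stdio.h>

#define BIT(n)	(1U << (n))

struct cfg { unsigned int z_mult, z_div, zs_div, s1_div, p_div, b_div; };

int main(void)
{
	static const struct cfg cfgs[4] = {
		{ 1, 2, 8, 16, 32, 24 },
		{ 2, 3, 6, 12, 24, 24 },
		{ 1, 2, 8, 16, 32, 32 },
		{ 2, 3, 6, 12, 24, 36 },
	};
	static const unsigned int plla_mult[4] = { 42, 48, 56, 64 };
	unsigned int mode = BIT(1) | BIT(11);	/* hypothetical MD pins */
	unsigned long extal = 31250000;		/* hypothetical */
	const struct cfg *c = &cfgs[(mode & (BIT(2) | BIT(1))) >> 1];
	unsigned long plla = extal *
			     plla_mult[(mode & (BIT(12) | BIT(11))) >> 11];

	printf("plla %lu, z %lu, zs/s %lu, s1 %lu, p %lu, b/out %lu Hz\n",
	       plla, plla * c->z_mult / c->z_div, plla / c->zs_div,
	       plla / c->s1_div, plla / c->p_div, plla / c->b_div);
	return 0;
}

That reproduces the MD1 = 1, MD2 = 0 column of the comment: PLLA at 1500 MHz, z 1000 MHz, zs/s 250 MHz, s1 125 MHz, p and b/out 62.5 MHz.
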
+ */ + return; + } + + spin_lock_init(&cpg->lock); + + cpg->data.clks = clks; + cpg->data.clk_num = num_clks; + + config = &cpg_clk_configs[CPG_CLK_CONFIG_INDEX(cpg_mode)]; + plla_mult = cpg_plla_mult[CPG_PLLA_MULT_INDEX(cpg_mode)]; + + for (i = 0; i < num_clks; ++i) { + const char *name; + struct clk *clk; + + of_property_read_string_index(np, "clock-output-names", i, + &name); + + clk = r8a7779_cpg_register_clock(np, cpg, config, + plla_mult, name); + if (IS_ERR(clk)) + pr_err("%s: failed to register %s %s clock (%ld)\n", + __func__, np->name, name, PTR_ERR(clk)); + else + cpg->data.clks[i] = clk; + } + + of_clk_add_provider(np, of_clk_src_onecell_get, &cpg->data); +} +CLK_OF_DECLARE(r8a7779_cpg_clks, "renesas,r8a7779-cpg-clocks", + r8a7779_cpg_clocks_init); + +void __init r8a7779_clocks_init(u32 mode) +{ + cpg_mode = mode; + + of_clk_init(NULL); +} diff --git a/drivers/clk/socfpga/clk-gate.c b/drivers/clk/socfpga/clk-gate.c index 501d513bf890..dd3a78c64795 100644 --- a/drivers/clk/socfpga/clk-gate.c +++ b/drivers/clk/socfpga/clk-gate.c @@ -32,7 +32,6 @@ #define SOCFPGA_MMC_CLK "sdmmc_clk" #define SOCFPGA_GPIO_DB_CLK_OFFSET 0xA8 -#define div_mask(width) ((1 << (width)) - 1) #define streq(a, b) (strcmp((a), (b)) == 0) #define to_socfpga_gate_clk(p) container_of(p, struct socfpga_gate_clk, hw.hw) diff --git a/drivers/clk/socfpga/clk-periph.c b/drivers/clk/socfpga/clk-periph.c index 81623a3736f9..46531c34ec9b 100644 --- a/drivers/clk/socfpga/clk-periph.c +++ b/drivers/clk/socfpga/clk-periph.c @@ -29,12 +29,18 @@ static unsigned long clk_periclk_recalc_rate(struct clk_hw *hwclk, unsigned long parent_rate) { struct socfpga_periph_clk *socfpgaclk = to_socfpga_periph_clk(hwclk); - u32 div; + u32 div, val; - if (socfpgaclk->fixed_div) + if (socfpgaclk->fixed_div) { div = socfpgaclk->fixed_div; - else + } else { + if (socfpgaclk->div_reg) { + val = readl(socfpgaclk->div_reg) >> socfpgaclk->shift; + val &= div_mask(socfpgaclk->width); + parent_rate /= (val + 1); + } div = ((readl(socfpgaclk->hw.reg) & 0x1ff) + 1); + } return parent_rate / div; } @@ -54,6 +60,7 @@ static __init void __socfpga_periph_init(struct device_node *node, struct clk_init_data init; int rc; u32 fixed_div; + u32 div_reg[3]; of_property_read_u32(node, "reg", ®); @@ -63,6 +70,15 @@ static __init void __socfpga_periph_init(struct device_node *node, periph_clk->hw.reg = clk_mgr_base_addr + reg; + rc = of_property_read_u32_array(node, "div-reg", div_reg, 3); + if (!rc) { + periph_clk->div_reg = clk_mgr_base_addr + div_reg[0]; + periph_clk->shift = div_reg[1]; + periph_clk->width = div_reg[2]; + } else { + periph_clk->div_reg = 0; + } + rc = of_property_read_u32(node, "fixed-divider", &fixed_div); if (rc) periph_clk->fixed_div = 0; diff --git a/drivers/clk/socfpga/clk.h b/drivers/clk/socfpga/clk.h index d2e54019c94f..d291f60c46e1 100644 --- a/drivers/clk/socfpga/clk.h +++ b/drivers/clk/socfpga/clk.h @@ -27,6 +27,7 @@ #define CLKMGR_PERPLL_SRC 0xAC #define SOCFPGA_MAX_PARENTS 3 +#define div_mask(width) ((1 << (width)) - 1) extern void __iomem *clk_mgr_base_addr; @@ -52,6 +53,9 @@ struct socfpga_periph_clk { struct clk_gate hw; char *parent_name; u32 fixed_div; + void __iomem *div_reg; + u32 width; /* only valid if div_reg != 0 */ + u32 shift; /* only valid if div_reg != 0 */ }; #endif /* SOCFPGA_CLK_H */ diff --git a/drivers/clk/st/clkgen-pll.c b/drivers/clk/st/clkgen-pll.c index a886702f7c8b..d8b9b1a2aeda 100644 --- a/drivers/clk/st/clkgen-pll.c +++ b/drivers/clk/st/clkgen-pll.c @@ -655,6 +655,7 @@ static struct of_device_id 
c32_gpu_pll_of_match[] = { .compatible = "st,stih416-gpu-pll-c32", .data = &st_pll1200c32_gpu_416, }, + {} }; static void __init clkgengpu_c32_pll_setup(struct device_node *np) diff --git a/drivers/clk/sunxi/clk-factors.c b/drivers/clk/sunxi/clk-factors.c index 9e232644f07e..3806d97e529b 100644 --- a/drivers/clk/sunxi/clk-factors.c +++ b/drivers/clk/sunxi/clk-factors.c @@ -77,6 +77,41 @@ static long clk_factors_round_rate(struct clk_hw *hw, unsigned long rate, return rate; } +static long clk_factors_determine_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *best_parent_rate, + struct clk **best_parent_p) +{ + struct clk *clk = hw->clk, *parent, *best_parent = NULL; + int i, num_parents; + unsigned long parent_rate, best = 0, child_rate, best_child_rate = 0; + + /* find the parent that can help provide the fastest rate <= rate */ + num_parents = __clk_get_num_parents(clk); + for (i = 0; i < num_parents; i++) { + parent = clk_get_parent_by_index(clk, i); + if (!parent) + continue; + if (__clk_get_flags(clk) & CLK_SET_RATE_PARENT) + parent_rate = __clk_round_rate(parent, rate); + else + parent_rate = __clk_get_rate(parent); + + child_rate = clk_factors_round_rate(hw, rate, &parent_rate); + + if (child_rate <= rate && child_rate > best_child_rate) { + best_parent = parent; + best = parent_rate; + best_child_rate = child_rate; + } + } + + if (best_parent) + *best_parent_p = best_parent; + *best_parent_rate = best; + + return best_child_rate; +} + static int clk_factors_set_rate(struct clk_hw *hw, unsigned long rate, unsigned long parent_rate) { @@ -113,6 +148,7 @@ static int clk_factors_set_rate(struct clk_hw *hw, unsigned long rate, } const struct clk_ops clk_factors_ops = { + .determine_rate = clk_factors_determine_rate, .recalc_rate = clk_factors_recalc_rate, .round_rate = clk_factors_round_rate, .set_rate = clk_factors_set_rate, diff --git a/drivers/clk/sunxi/clk-sunxi.c b/drivers/clk/sunxi/clk-sunxi.c index 9eddf22d56a4..426483422d3d 100644 --- a/drivers/clk/sunxi/clk-sunxi.c +++ b/drivers/clk/sunxi/clk-sunxi.c @@ -507,6 +507,43 @@ CLK_OF_DECLARE(sun7i_a20_gmac, "allwinner,sun7i-a20-gmac-clk", /** + * clk_sunxi_mmc_phase_control() - configures MMC clock phase control + */ + +void clk_sunxi_mmc_phase_control(struct clk *clk, u8 sample, u8 output) +{ + #define to_clk_composite(_hw) container_of(_hw, struct clk_composite, hw) + #define to_clk_factors(_hw) container_of(_hw, struct clk_factors, hw) + + struct clk_hw *hw = __clk_get_hw(clk); + struct clk_composite *composite = to_clk_composite(hw); + struct clk_hw *rate_hw = composite->rate_hw; + struct clk_factors *factors = to_clk_factors(rate_hw); + unsigned long flags = 0; + u32 reg; + + if (factors->lock) + spin_lock_irqsave(factors->lock, flags); + + reg = readl(factors->reg); + + /* set sample clock phase control */ + reg &= ~(0x7 << 20); + reg |= ((sample & 0x7) << 20); + + /* set output clock phase control */ + reg &= ~(0x7 << 8); + reg |= ((output & 0x7) << 8); + + writel(reg, factors->reg); + + if (factors->lock) + spin_unlock_irqrestore(factors->lock, flags); +} +EXPORT_SYMBOL(clk_sunxi_mmc_phase_control); + + +/** * sunxi_factors_clk_setup() - Setup function for factor clocks */ diff --git a/drivers/clk/tegra/clk-id.h b/drivers/clk/tegra/clk-id.h index c39613c519af..0011d547a9f7 100644 --- a/drivers/clk/tegra/clk-id.h +++ b/drivers/clk/tegra/clk-id.h @@ -233,6 +233,7 @@ enum clk_id { tegra_clk_xusb_hs_src, tegra_clk_xusb_ss, tegra_clk_xusb_ss_src, + tegra_clk_xusb_ss_div2, tegra_clk_max, }; diff --git 
a/drivers/clk/tegra/clk-pll.c b/drivers/clk/tegra/clk-pll.c index 6aad8abc69a2..637b62ccc91e 100644 --- a/drivers/clk/tegra/clk-pll.c +++ b/drivers/clk/tegra/clk-pll.c @@ -96,10 +96,20 @@ (PLLE_SS_MAX_VAL | PLLE_SS_INC_VAL | PLLE_SS_INCINTRV_VAL) #define PLLE_AUX_PLLP_SEL BIT(2) +#define PLLE_AUX_USE_LOCKDET BIT(3) #define PLLE_AUX_ENABLE_SWCTL BIT(4) +#define PLLE_AUX_SS_SWCTL BIT(6) #define PLLE_AUX_SEQ_ENABLE BIT(24) +#define PLLE_AUX_SEQ_START_STATE BIT(25) #define PLLE_AUX_PLLRE_SEL BIT(28) +#define XUSBIO_PLL_CFG0 0x51c +#define XUSBIO_PLL_CFG0_PADPLL_RESET_SWCTL BIT(0) +#define XUSBIO_PLL_CFG0_CLK_ENABLE_SWCTL BIT(2) +#define XUSBIO_PLL_CFG0_PADPLL_USE_LOCKDET BIT(6) +#define XUSBIO_PLL_CFG0_SEQ_ENABLE BIT(24) +#define XUSBIO_PLL_CFG0_SEQ_START_STATE BIT(25) + #define PLLE_MISC_PLLE_PTS BIT(8) #define PLLE_MISC_IDDQ_SW_VALUE BIT(13) #define PLLE_MISC_IDDQ_SW_CTRL BIT(14) @@ -1328,7 +1338,28 @@ static int clk_plle_tegra114_enable(struct clk_hw *hw) pll_writel(val, PLLE_SS_CTRL, pll); udelay(1); - /* TODO: enable hw control of xusb brick pll */ + /* Enable hw control of xusb brick pll */ + val = pll_readl_misc(pll); + val &= ~PLLE_MISC_IDDQ_SW_CTRL; + pll_writel_misc(val, pll); + + val = pll_readl(pll->params->aux_reg, pll); + val |= (PLLE_AUX_USE_LOCKDET | PLLE_AUX_SEQ_START_STATE); + val &= ~(PLLE_AUX_ENABLE_SWCTL | PLLE_AUX_SS_SWCTL); + pll_writel(val, pll->params->aux_reg, pll); + udelay(1); + val |= PLLE_AUX_SEQ_ENABLE; + pll_writel(val, pll->params->aux_reg, pll); + + val = pll_readl(XUSBIO_PLL_CFG0, pll); + val |= (XUSBIO_PLL_CFG0_PADPLL_USE_LOCKDET | + XUSBIO_PLL_CFG0_SEQ_START_STATE); + val &= ~(XUSBIO_PLL_CFG0_CLK_ENABLE_SWCTL | + XUSBIO_PLL_CFG0_PADPLL_RESET_SWCTL); + pll_writel(val, XUSBIO_PLL_CFG0, pll); + udelay(1); + val |= XUSBIO_PLL_CFG0_SEQ_ENABLE; + pll_writel(val, XUSBIO_PLL_CFG0, pll); out: if (pll->lock) diff --git a/drivers/clk/tegra/clk-tegra-periph.c b/drivers/clk/tegra/clk-tegra-periph.c index 1fa5c3f33b20..adf6b814b5bc 100644 --- a/drivers/clk/tegra/clk-tegra-periph.c +++ b/drivers/clk/tegra/clk-tegra-periph.c @@ -329,7 +329,9 @@ static u32 mux_clkm_pllp_pllc_pllre_idx[] = { static const char *mux_clkm_48M_pllp_480M[] = { "clk_m", "pll_u_48M", "pll_p", "pll_u_480M" }; -#define mux_clkm_48M_pllp_480M_idx NULL +static u32 mux_clkm_48M_pllp_480M_idx[] = { + [0] = 0, [1] = 2, [2] = 4, [3] = 6, +}; static const char *mux_clkm_pllre_clk32_480M_pllc_ref[] = { "clk_m", "pll_re_out", "clk_32k", "pll_u_480M", "pll_c", "pll_ref" @@ -338,6 +340,11 @@ static u32 mux_clkm_pllre_clk32_480M_pllc_ref_idx[] = { [0] = 0, [1] = 1, [2] = 3, [3] = 3, [4] = 4, [5] = 7, }; +static const char *mux_ss_60M[] = { + "xusb_ss_div2", "pll_u_60M" +}; +#define mux_ss_60M_idx NULL + static const char *mux_d_audio_clk[] = { "pll_a_out0", "pll_p", "clk_m", "spdif_in_sync", "i2s0_sync", "i2s1_sync", "i2s2_sync", "i2s3_sync", "i2s4_sync", "vimclk_sync", @@ -499,6 +506,7 @@ static struct tegra_periph_init_data periph_clks[] = { XUSB("xusb_falcon_src", mux_clkm_pllp_pllc_pllre, CLK_SOURCE_XUSB_FALCON_SRC, 143, TEGRA_PERIPH_NO_RESET, tegra_clk_xusb_falcon_src), XUSB("xusb_fs_src", mux_clkm_48M_pllp_480M, CLK_SOURCE_XUSB_FS_SRC, 143, TEGRA_PERIPH_NO_RESET, tegra_clk_xusb_fs_src), XUSB("xusb_ss_src", mux_clkm_pllre_clk32_480M_pllc_ref, CLK_SOURCE_XUSB_SS_SRC, 143, TEGRA_PERIPH_NO_RESET, tegra_clk_xusb_ss_src), + NODIV("xusb_hs_src", mux_ss_60M, CLK_SOURCE_XUSB_SS_SRC, 25, MASK(1), 143, TEGRA_PERIPH_NO_RESET, tegra_clk_xusb_hs_src, NULL), XUSB("xusb_dev_src", mux_clkm_pllp_pllc_pllre, 
CLK_SOURCE_XUSB_DEV_SRC, 95, TEGRA_PERIPH_ON_APB | TEGRA_PERIPH_NO_RESET, tegra_clk_xusb_dev_src), }; diff --git a/drivers/clk/tegra/clk-tegra114.c b/drivers/clk/tegra/clk-tegra114.c index 80431f0fb268..b9c8ba258ef0 100644 --- a/drivers/clk/tegra/clk-tegra114.c +++ b/drivers/clk/tegra/clk-tegra114.c @@ -142,7 +142,6 @@ #define UTMIPLL_HW_PWRDN_CFG0_IDDQ_SWCTL BIT(0) #define CLK_SOURCE_CSITE 0x1d4 -#define CLK_SOURCE_XUSB_SS_SRC 0x610 #define CLK_SOURCE_EMC 0x19c /* PLLM override registers */ @@ -834,6 +833,7 @@ static struct tegra_clk tegra114_clks[tegra_clk_max] __initdata = { [tegra_clk_xusb_falcon_src] = { .dt_id = TEGRA114_CLK_XUSB_FALCON_SRC, .present = true }, [tegra_clk_xusb_fs_src] = { .dt_id = TEGRA114_CLK_XUSB_FS_SRC, .present = true }, [tegra_clk_xusb_ss_src] = { .dt_id = TEGRA114_CLK_XUSB_SS_SRC, .present = true }, + [tegra_clk_xusb_ss_div2] = { .dt_id = TEGRA114_CLK_XUSB_SS_DIV2, .present = true}, [tegra_clk_xusb_dev_src] = { .dt_id = TEGRA114_CLK_XUSB_DEV_SRC, .present = true }, [tegra_clk_xusb_dev] = { .dt_id = TEGRA114_CLK_XUSB_DEV, .present = true }, [tegra_clk_xusb_hs_src] = { .dt_id = TEGRA114_CLK_XUSB_HS_SRC, .present = true }, @@ -1182,16 +1182,11 @@ static __init void tegra114_periph_clk_init(void __iomem *clk_base, void __iomem *pmc_base) { struct clk *clk; - u32 val; - - /* xusb_hs_src */ - val = readl(clk_base + CLK_SOURCE_XUSB_SS_SRC); - val |= BIT(25); /* always select PLLU_60M */ - writel(val, clk_base + CLK_SOURCE_XUSB_SS_SRC); - clk = clk_register_fixed_factor(NULL, "xusb_hs_src", "pll_u_60M", 0, - 1, 1); - clks[TEGRA114_CLK_XUSB_HS_SRC] = clk; + /* xusb_ss_div2 */ + clk = clk_register_fixed_factor(NULL, "xusb_ss_div2", "xusb_ss_src", 0, + 1, 2); + clks[TEGRA114_CLK_XUSB_SS_DIV2] = clk; /* dsia mux */ clk = clk_register_mux(NULL, "dsia_mux", mux_plld_out0_plld2_out0, @@ -1301,7 +1296,12 @@ static struct tegra_clk_init_table init_table[] __initdata = { {TEGRA114_CLK_GR3D, TEGRA114_CLK_PLL_C2, 300000000, 0}, {TEGRA114_CLK_DSIALP, TEGRA114_CLK_PLL_P, 68000000, 0}, {TEGRA114_CLK_DSIBLP, TEGRA114_CLK_PLL_P, 68000000, 0}, - + {TEGRA114_CLK_PLL_RE_VCO, TEGRA114_CLK_CLK_MAX, 612000000, 0}, + {TEGRA114_CLK_XUSB_SS_SRC, TEGRA114_CLK_PLL_RE_OUT, 122400000, 0}, + {TEGRA114_CLK_XUSB_FS_SRC, TEGRA114_CLK_PLL_U_48M, 48000000, 0}, + {TEGRA114_CLK_XUSB_HS_SRC, TEGRA114_CLK_XUSB_SS_DIV2, 61200000, 0}, + {TEGRA114_CLK_XUSB_FALCON_SRC, TEGRA114_CLK_PLL_P, 204000000, 0}, + {TEGRA114_CLK_XUSB_HOST_SRC, TEGRA114_CLK_PLL_P, 102000000, 0}, /* This MUST be the last entry. 
*/ {TEGRA114_CLK_CLK_MAX, TEGRA114_CLK_CLK_MAX, 0, 0}, }; diff --git a/drivers/clk/tegra/clk-tegra124.c b/drivers/clk/tegra/clk-tegra124.c index cc37c342c4cb..80efe51fdcdf 100644 --- a/drivers/clk/tegra/clk-tegra124.c +++ b/drivers/clk/tegra/clk-tegra124.c @@ -30,7 +30,6 @@ #define CLK_SOURCE_CSITE 0x1d4 #define CLK_SOURCE_EMC 0x19c -#define CLK_SOURCE_XUSB_SS_SRC 0x610 #define PLLC_BASE 0x80 #define PLLC_OUT 0x84 @@ -925,6 +924,7 @@ static struct tegra_clk tegra124_clks[tegra_clk_max] __initdata = { [tegra_clk_xusb_falcon_src] = { .dt_id = TEGRA124_CLK_XUSB_FALCON_SRC, .present = true }, [tegra_clk_xusb_fs_src] = { .dt_id = TEGRA124_CLK_XUSB_FS_SRC, .present = true }, [tegra_clk_xusb_ss_src] = { .dt_id = TEGRA124_CLK_XUSB_SS_SRC, .present = true }, + [tegra_clk_xusb_ss_div2] = { .dt_id = TEGRA124_CLK_XUSB_SS_DIV2, .present = true }, [tegra_clk_xusb_dev_src] = { .dt_id = TEGRA124_CLK_XUSB_DEV_SRC, .present = true }, [tegra_clk_xusb_dev] = { .dt_id = TEGRA124_CLK_XUSB_DEV, .present = true }, [tegra_clk_xusb_hs_src] = { .dt_id = TEGRA124_CLK_XUSB_HS_SRC, .present = true }, @@ -1105,16 +1105,11 @@ static __init void tegra124_periph_clk_init(void __iomem *clk_base, void __iomem *pmc_base) { struct clk *clk; - u32 val; - - /* xusb_hs_src */ - val = readl(clk_base + CLK_SOURCE_XUSB_SS_SRC); - val |= BIT(25); /* always select PLLU_60M */ - writel(val, clk_base + CLK_SOURCE_XUSB_SS_SRC); - clk = clk_register_fixed_factor(NULL, "xusb_hs_src", "pll_u_60M", 0, - 1, 1); - clks[TEGRA124_CLK_XUSB_HS_SRC] = clk; + /* xusb_ss_div2 */ + clk = clk_register_fixed_factor(NULL, "xusb_ss_div2", "xusb_ss_src", 0, + 1, 2); + clks[TEGRA124_CLK_XUSB_SS_DIV2] = clk; /* dsia mux */ clk = clk_register_mux(NULL, "dsia_mux", mux_plld_out0_plld2_out0, @@ -1368,6 +1363,12 @@ static struct tegra_clk_init_table init_table[] __initdata = { {TEGRA124_CLK_SBC4, TEGRA124_CLK_PLL_P, 12000000, 1}, {TEGRA124_CLK_TSEC, TEGRA124_CLK_PLL_C3, 0, 0}, {TEGRA124_CLK_MSENC, TEGRA124_CLK_PLL_C3, 0, 0}, + {TEGRA124_CLK_PLL_RE_VCO, TEGRA124_CLK_CLK_MAX, 672000000, 0}, + {TEGRA124_CLK_XUSB_SS_SRC, TEGRA124_CLK_PLL_U_480M, 120000000, 0}, + {TEGRA124_CLK_XUSB_FS_SRC, TEGRA124_CLK_PLL_U_48M, 48000000, 0}, + {TEGRA124_CLK_XUSB_HS_SRC, TEGRA124_CLK_PLL_U_60M, 60000000, 0}, + {TEGRA124_CLK_XUSB_FALCON_SRC, TEGRA124_CLK_PLL_RE_OUT, 224000000, 0}, + {TEGRA124_CLK_XUSB_HOST_SRC, TEGRA124_CLK_PLL_RE_OUT, 112000000, 0}, /* This MUST be the last entry. 
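
Aside on the two Tegra hunks: instead of hard-wiring "always select PLLU_60M" with BIT(25), the patches register xusb_ss_div2 as a fixed-factor child of xusb_ss_src (mult 1, div 2) and make xusb_hs_src a two-input mux between that divider and pll_u_60M. The arithmetic is restated below as a stand-alone sketch; the helper mirrors the usual fixed-factor computation (parent * mult / div), the rate comes from the Tegra114 init-table entry, and everything else is illustrative.

#include <stdio.h>

static unsigned long fixed_factor_rate(unsigned long parent_rate,
                                       unsigned int mult, unsigned int div)
{
        /* what a fixed-factor clock reports as its rate */
        return parent_rate * mult / div;
}

int main(void)
{
        unsigned long xusb_ss_src = 122400000UL;  /* Tegra114 init table */
        unsigned long xusb_ss_div2 = fixed_factor_rate(xusb_ss_src, 1, 2);

        printf("xusb_ss_div2 = %lu Hz\n", xusb_ss_div2);  /* 61200000 */
        return 0;
}

61.2 MHz is exactly what the Tegra114 table asks xusb_hs_src to take from xusb_ss_div2; the Tegra124 table parents xusb_hs_src to pll_u_60M at 60 MHz instead.
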
*/ {TEGRA124_CLK_CLK_MAX, TEGRA124_CLK_CLK_MAX, 0, 0}, }; diff --git a/drivers/clk/versatile/clk-icst.c b/drivers/clk/versatile/clk-icst.c index a820b0cfcf57..bc96f103bd7c 100644 --- a/drivers/clk/versatile/clk-icst.c +++ b/drivers/clk/versatile/clk-icst.c @@ -140,6 +140,7 @@ struct clk *icst_clk_register(struct device *dev, pclone = kmemdup(desc->params, sizeof(*pclone), GFP_KERNEL); if (!pclone) { + kfree(icst); pr_err("could not clone ICST params\n"); return ERR_PTR(-ENOMEM); } @@ -160,3 +161,4 @@ struct clk *icst_clk_register(struct device *dev, return clk; } +EXPORT_SYMBOL_GPL(icst_clk_register); diff --git a/drivers/clk/versatile/clk-impd1.c b/drivers/clk/versatile/clk-impd1.c index 31b44f025f9e..1cc1330dc570 100644 --- a/drivers/clk/versatile/clk-impd1.c +++ b/drivers/clk/versatile/clk-impd1.c @@ -20,6 +20,8 @@ #define IMPD1_LOCK 0x08 struct impd1_clk { + char *pclkname; + struct clk *pclk; char *vco1name; struct clk *vco1clk; char *vco2name; @@ -31,7 +33,7 @@ struct impd1_clk { struct clk *spiclk; char *scname; struct clk *scclk; - struct clk_lookup *clks[6]; + struct clk_lookup *clks[15]; }; /* One entry for each connected IM-PD1 LM */ @@ -86,6 +88,7 @@ void integrator_impd1_clk_init(void __iomem *base, unsigned int id) { struct impd1_clk *imc; struct clk *clk; + struct clk *pclk; int i; if (id > 3) { @@ -94,11 +97,18 @@ void integrator_impd1_clk_init(void __iomem *base, unsigned int id) } imc = &impd1_clks[id]; + /* Register the fixed rate PCLK */ + imc->pclkname = kasprintf(GFP_KERNEL, "lm%x-pclk", id); + pclk = clk_register_fixed_rate(NULL, imc->pclkname, NULL, + CLK_IS_ROOT, 0); + imc->pclk = pclk; + imc->vco1name = kasprintf(GFP_KERNEL, "lm%x-vco1", id); clk = icst_clk_register(NULL, &impd1_icst1_desc, imc->vco1name, NULL, base); imc->vco1clk = clk; - imc->clks[0] = clkdev_alloc(clk, NULL, "lm%x:01000", id); + imc->clks[0] = clkdev_alloc(pclk, "apb_pclk", "lm%x:01000", id); + imc->clks[1] = clkdev_alloc(clk, NULL, "lm%x:01000", id); /* VCO2 is also called "CLK2" */ imc->vco2name = kasprintf(GFP_KERNEL, "lm%x-vco2", id); @@ -107,32 +117,43 @@ void integrator_impd1_clk_init(void __iomem *base, unsigned int id) imc->vco2clk = clk; /* MMCI uses CLK2 right off */ - imc->clks[1] = clkdev_alloc(clk, NULL, "lm%x:00700", id); + imc->clks[2] = clkdev_alloc(pclk, "apb_pclk", "lm%x:00700", id); + imc->clks[3] = clkdev_alloc(clk, NULL, "lm%x:00700", id); /* UART reference clock divides CLK2 by a fixed factor 4 */ imc->uartname = kasprintf(GFP_KERNEL, "lm%x-uartclk", id); clk = clk_register_fixed_factor(NULL, imc->uartname, imc->vco2name, CLK_IGNORE_UNUSED, 1, 4); imc->uartclk = clk; - imc->clks[2] = clkdev_alloc(clk, NULL, "lm%x:00100", id); - imc->clks[3] = clkdev_alloc(clk, NULL, "lm%x:00200", id); + imc->clks[4] = clkdev_alloc(pclk, "apb_pclk", "lm%x:00100", id); + imc->clks[5] = clkdev_alloc(clk, NULL, "lm%x:00100", id); + imc->clks[6] = clkdev_alloc(pclk, "apb_pclk", "lm%x:00200", id); + imc->clks[7] = clkdev_alloc(clk, NULL, "lm%x:00200", id); /* SPI PL022 clock divides CLK2 by a fixed factor 64 */ imc->spiname = kasprintf(GFP_KERNEL, "lm%x-spiclk", id); clk = clk_register_fixed_factor(NULL, imc->spiname, imc->vco2name, CLK_IGNORE_UNUSED, 1, 64); - imc->clks[4] = clkdev_alloc(clk, NULL, "lm%x:00300", id); + imc->clks[8] = clkdev_alloc(pclk, "apb_pclk", "lm%x:00300", id); + imc->clks[9] = clkdev_alloc(clk, NULL, "lm%x:00300", id); + + /* The GPIO blocks and AACI have only PCLK */ + imc->clks[10] = clkdev_alloc(pclk, "apb_pclk", "lm%x:00400", id); + imc->clks[11] = clkdev_alloc(pclk, 
"apb_pclk", "lm%x:00500", id); + imc->clks[12] = clkdev_alloc(pclk, "apb_pclk", "lm%x:00800", id); /* Smart Card clock divides CLK2 by a fixed factor 4 */ imc->scname = kasprintf(GFP_KERNEL, "lm%x-scclk", id); clk = clk_register_fixed_factor(NULL, imc->scname, imc->vco2name, CLK_IGNORE_UNUSED, 1, 4); imc->scclk = clk; - imc->clks[5] = clkdev_alloc(clk, NULL, "lm%x:00600", id); + imc->clks[13] = clkdev_alloc(pclk, "apb_pclk", "lm%x:00600", id); + imc->clks[14] = clkdev_alloc(clk, NULL, "lm%x:00600", id); for (i = 0; i < ARRAY_SIZE(imc->clks); i++) clkdev_add(imc->clks[i]); } +EXPORT_SYMBOL_GPL(integrator_impd1_clk_init); void integrator_impd1_clk_exit(unsigned int id) { @@ -149,9 +170,12 @@ void integrator_impd1_clk_exit(unsigned int id) clk_unregister(imc->uartclk); clk_unregister(imc->vco2clk); clk_unregister(imc->vco1clk); + clk_unregister(imc->pclk); kfree(imc->scname); kfree(imc->spiname); kfree(imc->uartname); kfree(imc->vco2name); kfree(imc->vco1name); + kfree(imc->pclkname); } +EXPORT_SYMBOL_GPL(integrator_impd1_clk_exit); diff --git a/drivers/clk/zynq/clkc.c b/drivers/clk/zynq/clkc.c index 52c09afdcfb7..246cf1226eaa 100644 --- a/drivers/clk/zynq/clkc.c +++ b/drivers/clk/zynq/clkc.c @@ -53,6 +53,9 @@ static void __iomem *zynq_clkc_base; #define NUM_MIO_PINS 54 +#define DBG_CLK_CTRL_CLKACT_TRC BIT(0) +#define DBG_CLK_CTRL_CPU_1XCLKACT BIT(1) + enum zynq_clk { armpll, ddrpll, iopll, cpu_6or4x, cpu_3or2x, cpu_2x, cpu_1x, @@ -499,6 +502,15 @@ static void __init zynq_clk_setup(struct device_node *np) clk_output_name[cpu_1x], 0, SLCR_DBG_CLK_CTRL, 1, 0, &dbgclk_lock); + /* leave debug clocks in the state the bootloader set them up to */ + tmp = clk_readl(SLCR_DBG_CLK_CTRL); + if (tmp & DBG_CLK_CTRL_CLKACT_TRC) + if (clk_prepare_enable(clks[dbg_trc])) + pr_warn("%s: trace clk enable failed\n", __func__); + if (tmp & DBG_CLK_CTRL_CPU_1XCLKACT) + if (clk_prepare_enable(clks[dbg_apb])) + pr_warn("%s: debug APB clk enable failed\n", __func__); + /* One gated clock for all APER clocks. */ clks[dma] = clk_register_gate(NULL, clk_output_name[dma], clk_output_name[cpu_2x], 0, SLCR_APER_CLK_CTRL, 0, 0, diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 03ccdb0ccf9e..f066fa23cc05 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -301,14 +301,14 @@ config CRYPTO_DEV_SAHARA found in some Freescale i.MX chips. config CRYPTO_DEV_S5P - tristate "Support for Samsung S5PV210 crypto accelerator" - depends on ARCH_S5PV210 + tristate "Support for Samsung S5PV210/Exynos crypto accelerator" + depends on ARCH_S5PV210 || ARCH_EXYNOS select CRYPTO_AES select CRYPTO_ALGAPI select CRYPTO_BLKCIPHER help This option allows you to have support for S5P crypto acceleration. - Select this to offload Samsung S5PV210 or S5PC110 from AES + Select this to offload Samsung S5PV210 or S5PC110, Exynos from AES algorithms execution. 
config CRYPTO_DEV_NX diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c index d7c9e317423c..a083474991ab 100644 --- a/drivers/crypto/atmel-aes.c +++ b/drivers/crypto/atmel-aes.c @@ -716,6 +716,12 @@ static int atmel_aes_crypt(struct ablkcipher_request *req, unsigned long mode) return -EINVAL; } ctx->block_size = CFB32_BLOCK_SIZE; + } else if (mode & AES_FLAGS_CFB64) { + if (!IS_ALIGNED(req->nbytes, CFB64_BLOCK_SIZE)) { + pr_err("request size is not exact amount of CFB64 blocks\n"); + return -EINVAL; + } + ctx->block_size = CFB64_BLOCK_SIZE; } else { if (!IS_ALIGNED(req->nbytes, AES_BLOCK_SIZE)) { pr_err("request size is not exact amount of AES blocks\n"); @@ -1069,7 +1075,7 @@ static struct crypto_alg aes_algs[] = { .cra_driver_name = "atmel-cfb8-aes", .cra_priority = 100, .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = CFB64_BLOCK_SIZE, + .cra_blocksize = CFB8_BLOCK_SIZE, .cra_ctxsize = sizeof(struct atmel_aes_ctx), .cra_alignmask = 0x0, .cra_type = &crypto_ablkcipher_type, diff --git a/drivers/crypto/bfin_crc.c b/drivers/crypto/bfin_crc.c index c9ff298e6d26..b099e33cb073 100644 --- a/drivers/crypto/bfin_crc.c +++ b/drivers/crypto/bfin_crc.c @@ -29,10 +29,11 @@ #include <crypto/hash.h> #include <crypto/internal/hash.h> -#include <asm/blackfin.h> -#include <asm/bfin_crc.h> #include <asm/dma.h> #include <asm/portmux.h> +#include <asm/io.h> + +#include "bfin_crc.h" #define CRC_CCRYPTO_QUEUE_LENGTH 5 @@ -54,12 +55,13 @@ struct bfin_crypto_crc { int irq; int dma_ch; u32 poly; - volatile struct crc_register *regs; + struct crc_register *regs; struct ahash_request *req; /* current request in operation */ struct dma_desc_array *sg_cpu; /* virt addr of sg dma descriptors */ dma_addr_t sg_dma; /* phy addr of sg dma descriptors */ u8 *sg_mid_buf; + dma_addr_t sg_mid_dma; /* phy addr of sg mid buffer */ struct tasklet_struct done_task; struct crypto_queue queue; /* waiting requests */ @@ -132,13 +134,13 @@ static struct scatterlist *sg_get(struct scatterlist *sg_list, unsigned int nent static int bfin_crypto_crc_init_hw(struct bfin_crypto_crc *crc, u32 key) { - crc->regs->datacntrld = 0; - crc->regs->control = MODE_CALC_CRC << OPMODE_OFFSET; - crc->regs->curresult = key; + writel(0, &crc->regs->datacntrld); + writel(MODE_CALC_CRC << OPMODE_OFFSET, &crc->regs->control); + writel(key, &crc->regs->curresult); /* setup CRC interrupts */ - crc->regs->status = CMPERRI | DCNTEXPI; - crc->regs->intrenset = CMPERRI | DCNTEXPI; + writel(CMPERRI | DCNTEXPI, &crc->regs->status); + writel(CMPERRI | DCNTEXPI, &crc->regs->intrenset); return 0; } @@ -194,7 +196,6 @@ static void bfin_crypto_crc_config_dma(struct bfin_crypto_crc *crc) dma_map_sg(crc->dev, ctx->sg, ctx->sg_nents, DMA_TO_DEVICE); for_each_sg(ctx->sg, sg, ctx->sg_nents, j) { - dma_config = DMAFLOW_ARRAY | RESTART | NDSIZE_3 | DMAEN | PSIZE_32; dma_addr = sg_dma_address(sg); /* deduce extra bytes in last sg */ if (sg_is_last(sg)) @@ -207,12 +208,29 @@ static void bfin_crypto_crc_config_dma(struct bfin_crypto_crc *crc) bytes in current sg buffer. Move addr of current sg and deduce the length of current sg. 
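
Aside on the bfin_crc conversion: register accesses no longer go through a volatile struct crc_register pointer but through readl()/writel(), including read-modify-write updates of the control register. The sketch below shows the shape of that pattern against an ordinary variable so it compiles and runs anywhere; the local readl()/writel() stubs and the fake register are stand-ins, not the kernel MMIO helpers, and the BLKEN name is simply reused from the driver's header.

#include <stdio.h>
#include <stdint.h>

#define BLKEN 0x00000001u       /* block enable bit in the control register */

static uint32_t fake_control;   /* stands in for crc->regs->control */

static uint32_t readl(const volatile uint32_t *addr)      { return *addr; }
static void writel(uint32_t val, volatile uint32_t *addr) { *addr = val; }

int main(void)
{
        uint32_t reg;

        /* read-modify-write: start the block */
        reg = readl(&fake_control);
        writel(reg | BLKEN, &fake_control);
        printf("control after start: 0x%08x\n", (unsigned int)fake_control);

        /* read-modify-write: stop it again */
        reg = readl(&fake_control);
        writel(reg & ~BLKEN, &fake_control);
        printf("control after stop:  0x%08x\n", (unsigned int)fake_control);

        return 0;
}
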
*/ - memcpy(crc->sg_mid_buf +((i-1) << 2) + mid_dma_count, - (void *)dma_addr, + memcpy(crc->sg_mid_buf +(i << 2) + mid_dma_count, + sg_virt(sg), CHKSUM_DIGEST_SIZE - mid_dma_count); dma_addr += CHKSUM_DIGEST_SIZE - mid_dma_count; dma_count -= CHKSUM_DIGEST_SIZE - mid_dma_count; + + dma_config = DMAFLOW_ARRAY | RESTART | NDSIZE_3 | + DMAEN | PSIZE_32 | WDSIZE_32; + + /* setup new dma descriptor for next middle dma */ + crc->sg_cpu[i].start_addr = crc->sg_mid_dma + (i << 2); + crc->sg_cpu[i].cfg = dma_config; + crc->sg_cpu[i].x_count = 1; + crc->sg_cpu[i].x_modify = CHKSUM_DIGEST_SIZE; + dev_dbg(crc->dev, "%d: crc_dma: start_addr:0x%lx, " + "cfg:0x%lx, x_count:0x%lx, x_modify:0x%lx\n", + i, crc->sg_cpu[i].start_addr, + crc->sg_cpu[i].cfg, crc->sg_cpu[i].x_count, + crc->sg_cpu[i].x_modify); + i++; } + + dma_config = DMAFLOW_ARRAY | RESTART | NDSIZE_3 | DMAEN | PSIZE_32; /* chop current sg dma len to multiple of 32 bits */ mid_dma_count = dma_count % 4; dma_count &= ~0x3; @@ -243,24 +261,9 @@ static void bfin_crypto_crc_config_dma(struct bfin_crypto_crc *crc) if (mid_dma_count) { /* copy extra bytes to next middle dma buffer */ - dma_config = DMAFLOW_ARRAY | RESTART | NDSIZE_3 | - DMAEN | PSIZE_32 | WDSIZE_32; memcpy(crc->sg_mid_buf + (i << 2), - (void *)(dma_addr + (dma_count << 2)), + (u8*)sg_virt(sg) + (dma_count << 2), mid_dma_count); - /* setup new dma descriptor for next middle dma */ - crc->sg_cpu[i].start_addr = dma_map_single(crc->dev, - crc->sg_mid_buf + (i << 2), - CHKSUM_DIGEST_SIZE, DMA_TO_DEVICE); - crc->sg_cpu[i].cfg = dma_config; - crc->sg_cpu[i].x_count = 1; - crc->sg_cpu[i].x_modify = CHKSUM_DIGEST_SIZE; - dev_dbg(crc->dev, "%d: crc_dma: start_addr:0x%lx, " - "cfg:0x%lx, x_count:0x%lx, x_modify:0x%lx\n", - i, crc->sg_cpu[i].start_addr, - crc->sg_cpu[i].cfg, crc->sg_cpu[i].x_count, - crc->sg_cpu[i].x_modify); - i++; } } @@ -303,6 +306,7 @@ static int bfin_crypto_crc_handle_queue(struct bfin_crypto_crc *crc, int nsg, i, j; unsigned int nextlen; unsigned long flags; + u32 reg; spin_lock_irqsave(&crc->lock, flags); if (req) @@ -402,13 +406,14 @@ finish_update: ctx->sg_buflen += CHKSUM_DIGEST_SIZE; /* set CRC data count before start DMA */ - crc->regs->datacnt = ctx->sg_buflen >> 2; + writel(ctx->sg_buflen >> 2, &crc->regs->datacnt); /* setup and enable CRC DMA */ bfin_crypto_crc_config_dma(crc); /* finally kick off CRC operation */ - crc->regs->control |= BLKEN; + reg = readl(&crc->regs->control); + writel(reg | BLKEN, &crc->regs->control); return -EINPROGRESS; } @@ -529,14 +534,17 @@ static void bfin_crypto_crc_done_task(unsigned long data) static irqreturn_t bfin_crypto_crc_handler(int irq, void *dev_id) { struct bfin_crypto_crc *crc = dev_id; + u32 reg; - if (crc->regs->status & DCNTEXP) { - crc->regs->status = DCNTEXP; + if (readl(&crc->regs->status) & DCNTEXP) { + writel(DCNTEXP, &crc->regs->status); /* prepare results */ - put_unaligned_le32(crc->regs->result, crc->req->result); + put_unaligned_le32(readl(&crc->regs->result), + crc->req->result); - crc->regs->control &= ~BLKEN; + reg = readl(&crc->regs->control); + writel(reg & ~BLKEN, &crc->regs->control); crc->busy = 0; if (crc->req->base.complete) @@ -560,7 +568,7 @@ static int bfin_crypto_crc_suspend(struct platform_device *pdev, pm_message_t st struct bfin_crypto_crc *crc = platform_get_drvdata(pdev); int i = 100000; - while ((crc->regs->control & BLKEN) && --i) + while ((readl(&crc->regs->control) & BLKEN) && --i) cpu_relax(); if (i == 0) @@ -647,29 +655,32 @@ static int bfin_crypto_crc_probe(struct platform_device 
*pdev) * 1 last + 1 next dma descriptors */ crc->sg_mid_buf = (u8 *)(crc->sg_cpu + ((CRC_MAX_DMA_DESC + 1) << 1)); + crc->sg_mid_dma = crc->sg_dma + sizeof(struct dma_desc_array) + * ((CRC_MAX_DMA_DESC + 1) << 1); - crc->regs->control = 0; - crc->regs->poly = crc->poly = (u32)pdev->dev.platform_data; + writel(0, &crc->regs->control); + crc->poly = (u32)pdev->dev.platform_data; + writel(crc->poly, &crc->regs->poly); - while (!(crc->regs->status & LUTDONE) && (--timeout) > 0) + while (!(readl(&crc->regs->status) & LUTDONE) && (--timeout) > 0) cpu_relax(); if (timeout == 0) dev_info(&pdev->dev, "init crc poly timeout\n"); + platform_set_drvdata(pdev, crc); + spin_lock(&crc_list.lock); list_add(&crc->list, &crc_list.dev_list); spin_unlock(&crc_list.lock); - platform_set_drvdata(pdev, crc); - - ret = crypto_register_ahash(&algs); - if (ret) { - spin_lock(&crc_list.lock); - list_del(&crc->list); - spin_unlock(&crc_list.lock); - dev_err(&pdev->dev, "Cann't register crypto ahash device\n"); - goto out_error_dma; + if (list_is_singular(&crc_list.dev_list)) { + ret = crypto_register_ahash(&algs); + if (ret) { + dev_err(&pdev->dev, + "Can't register crypto ahash device\n"); + goto out_error_dma; + } } dev_info(&pdev->dev, "initialized\n"); diff --git a/drivers/crypto/bfin_crc.h b/drivers/crypto/bfin_crc.h new file mode 100644 index 000000000000..75cef4dc85a1 --- /dev/null +++ b/drivers/crypto/bfin_crc.h @@ -0,0 +1,125 @@ +/* + * bfin_crc.h - interface to Blackfin CRC controllers + * + * Copyright 2012 Analog Devices Inc. + * + * Licensed under the GPL-2 or later. + */ + +#ifndef __BFIN_CRC_H__ +#define __BFIN_CRC_H__ + +/* Function driver which use hardware crc must initialize the structure */ +struct crc_info { + /* Input data address */ + unsigned char *in_addr; + /* Output data address */ + unsigned char *out_addr; + /* Input or output bytes */ + unsigned long datasize; + union { + /* CRC to compare with that of input buffer */ + unsigned long crc_compare; + /* Value to compare with input data */ + unsigned long val_verify; + /* Value to fill */ + unsigned long val_fill; + }; + /* Value to program the 32b CRC Polynomial */ + unsigned long crc_poly; + union { + /* CRC calculated from the input data */ + unsigned long crc_result; + /* First failed position to verify input data */ + unsigned long pos_verify; + }; + /* CRC mirror flags */ + unsigned int bitmirr:1; + unsigned int bytmirr:1; + unsigned int w16swp:1; + unsigned int fdsel:1; + unsigned int rsltmirr:1; + unsigned int polymirr:1; + unsigned int cmpmirr:1; +}; + +/* Userspace interface */ +#define CRC_IOC_MAGIC 'C' +#define CRC_IOC_CALC_CRC _IOWR('C', 0x01, unsigned int) +#define CRC_IOC_MEMCPY_CRC _IOWR('C', 0x02, unsigned int) +#define CRC_IOC_VERIFY_VAL _IOWR('C', 0x03, unsigned int) +#define CRC_IOC_FILL_VAL _IOWR('C', 0x04, unsigned int) + + +#ifdef __KERNEL__ + +#include <linux/types.h> +#include <linux/spinlock.h> +#include <linux/miscdevice.h> + +struct crc_register { + u32 control; + u32 datacnt; + u32 datacntrld; + u32 __pad_1[2]; + u32 compare; + u32 fillval; + u32 datafifo; + u32 intren; + u32 intrenset; + u32 intrenclr; + u32 poly; + u32 __pad_2[4]; + u32 status; + u32 datacntcap; + u32 __pad_3; + u32 result; + u32 curresult; + u32 __pad_4[3]; + u32 revid; +}; + +/* CRC_STATUS Masks */ +#define CMPERR 0x00000002 /* Compare error */ +#define DCNTEXP 0x00000010 /* datacnt register expired */ +#define IBR 0x00010000 /* Input buffer ready */ +#define OBR 0x00020000 /* Output buffer ready */ +#define IRR 0x00040000 /* Immediate 
result readt */ +#define LUTDONE 0x00080000 /* Look-up table generation done */ +#define FSTAT 0x00700000 /* FIFO status */ +#define MAX_FIFO 4 /* Max fifo size */ + +/* CRC_CONTROL Masks */ +#define BLKEN 0x00000001 /* Block enable */ +#define OPMODE 0x000000F0 /* Operation mode */ +#define OPMODE_OFFSET 4 /* Operation mode mask offset*/ +#define MODE_DMACPY_CRC 1 /* MTM CRC compute and compare */ +#define MODE_DATA_FILL 2 /* MTM data fill */ +#define MODE_CALC_CRC 3 /* MSM CRC compute and compare */ +#define MODE_DATA_VERIFY 4 /* MSM data verify */ +#define AUTOCLRZ 0x00000100 /* Auto clear to zero */ +#define AUTOCLRF 0x00000200 /* Auto clear to one */ +#define OBRSTALL 0x00001000 /* Stall on output buffer ready */ +#define IRRSTALL 0x00002000 /* Stall on immediate result ready */ +#define BITMIRR 0x00010000 /* Mirror bits within each byte of 32-bit input data */ +#define BITMIRR_OFFSET 16 /* Mirror bits offset */ +#define BYTMIRR 0x00020000 /* Mirror bytes of 32-bit input data */ +#define BYTMIRR_OFFSET 17 /* Mirror bytes offset */ +#define W16SWP 0x00040000 /* Mirror uppper and lower 16-bit word of 32-bit input data */ +#define W16SWP_OFFSET 18 /* Mirror 16-bit word offset */ +#define FDSEL 0x00080000 /* FIFO is written after input data is mirrored */ +#define FDSEL_OFFSET 19 /* Mirror FIFO offset */ +#define RSLTMIRR 0x00100000 /* CRC result registers are mirrored. */ +#define RSLTMIRR_OFFSET 20 /* Mirror CRC result offset. */ +#define POLYMIRR 0x00200000 /* CRC poly register is mirrored. */ +#define POLYMIRR_OFFSET 21 /* Mirror CRC poly offset. */ +#define CMPMIRR 0x00400000 /* CRC compare register is mirrored. */ +#define CMPMIRR_OFFSET 22 /* Mirror CRC compare offset. */ + +/* CRC_INTREN Masks */ +#define CMPERRI 0x02 /* CRC_ERROR_INTR */ +#define DCNTEXPI 0x10 /* CRC_STATUS_INTR */ + +#endif + +#endif diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c index 5f891254db73..c09ce1f040d3 100644 --- a/drivers/crypto/caam/caamalg.c +++ b/drivers/crypto/caam/caamalg.c @@ -303,6 +303,7 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead) * Job Descriptor and Shared Descriptors * must all fit into the 64-word Descriptor h/w Buffer */ + keys_fit_inline = false; if (DESC_AEAD_NULL_DEC_LEN + DESC_JOB_IO_LEN + ctx->split_key_pad_len <= CAAM_DESC_BYTES_MAX) keys_fit_inline = true; @@ -472,6 +473,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead) * Job Descriptor and Shared Descriptors * must all fit into the 64-word Descriptor h/w Buffer */ + keys_fit_inline = false; if (DESC_AEAD_DEC_LEN + DESC_JOB_IO_LEN + ctx->split_key_pad_len + ctx->enckeylen <= CAAM_DESC_BYTES_MAX) @@ -527,6 +529,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead) * Job Descriptor and Shared Descriptors * must all fit into the 64-word Descriptor h/w Buffer */ + keys_fit_inline = false; if (DESC_AEAD_GIVENC_LEN + DESC_JOB_IO_LEN + ctx->split_key_pad_len + ctx->enckeylen <= CAAM_DESC_BYTES_MAX) @@ -918,11 +921,8 @@ static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err, edesc = (struct aead_edesc *)((char *)desc - offsetof(struct aead_edesc, hw_desc)); - if (err) { - char tmp[CAAM_ERROR_STR_MAX]; - - dev_err(jrdev, "%08x: %s\n", err, caam_jr_strstatus(tmp, err)); - } + if (err) + caam_jr_strstatus(jrdev, err); aead_unmap(jrdev, edesc, req); @@ -969,11 +969,8 @@ static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err, req->cryptlen - ctx->authsize, 1); #endif - if (err) { - char tmp[CAAM_ERROR_STR_MAX]; - - dev_err(jrdev, "%08x: %s\n", err, 
caam_jr_strstatus(tmp, err)); - } + if (err) + caam_jr_strstatus(jrdev, err); aead_unmap(jrdev, edesc, req); @@ -1018,11 +1015,8 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err, edesc = (struct ablkcipher_edesc *)((char *)desc - offsetof(struct ablkcipher_edesc, hw_desc)); - if (err) { - char tmp[CAAM_ERROR_STR_MAX]; - - dev_err(jrdev, "%08x: %s\n", err, caam_jr_strstatus(tmp, err)); - } + if (err) + caam_jr_strstatus(jrdev, err); #ifdef DEBUG print_hex_dump(KERN_ERR, "dstiv @"__stringify(__LINE__)": ", @@ -1053,11 +1047,8 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err, edesc = (struct ablkcipher_edesc *)((char *)desc - offsetof(struct ablkcipher_edesc, hw_desc)); - if (err) { - char tmp[CAAM_ERROR_STR_MAX]; - - dev_err(jrdev, "%08x: %s\n", err, caam_jr_strstatus(tmp, err)); - } + if (err) + caam_jr_strstatus(jrdev, err); #ifdef DEBUG print_hex_dump(KERN_ERR, "dstiv @"__stringify(__LINE__)": ", diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c index 0378328f47a7..0d9284ef96a8 100644 --- a/drivers/crypto/caam/caamhash.c +++ b/drivers/crypto/caam/caamhash.c @@ -545,7 +545,8 @@ static int ahash_setkey(struct crypto_ahash *ahash, DMA_TO_DEVICE); if (dma_mapping_error(jrdev, ctx->key_dma)) { dev_err(jrdev, "unable to map key i/o memory\n"); - return -ENOMEM; + ret = -ENOMEM; + goto map_err; } #ifdef DEBUG print_hex_dump(KERN_ERR, "ctx.key@"__stringify(__LINE__)": ", @@ -559,6 +560,7 @@ static int ahash_setkey(struct crypto_ahash *ahash, DMA_TO_DEVICE); } +map_err: kfree(hashed_key); return ret; badkey: @@ -631,11 +633,8 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err, edesc = (struct ahash_edesc *)((char *)desc - offsetof(struct ahash_edesc, hw_desc)); - if (err) { - char tmp[CAAM_ERROR_STR_MAX]; - - dev_err(jrdev, "%08x: %s\n", err, caam_jr_strstatus(tmp, err)); - } + if (err) + caam_jr_strstatus(jrdev, err); ahash_unmap(jrdev, edesc, req, digestsize); kfree(edesc); @@ -669,11 +668,8 @@ static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err, edesc = (struct ahash_edesc *)((char *)desc - offsetof(struct ahash_edesc, hw_desc)); - if (err) { - char tmp[CAAM_ERROR_STR_MAX]; - - dev_err(jrdev, "%08x: %s\n", err, caam_jr_strstatus(tmp, err)); - } + if (err) + caam_jr_strstatus(jrdev, err); ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_BIDIRECTIONAL); kfree(edesc); @@ -707,11 +703,8 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err, edesc = (struct ahash_edesc *)((char *)desc - offsetof(struct ahash_edesc, hw_desc)); - if (err) { - char tmp[CAAM_ERROR_STR_MAX]; - - dev_err(jrdev, "%08x: %s\n", err, caam_jr_strstatus(tmp, err)); - } + if (err) + caam_jr_strstatus(jrdev, err); ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); kfree(edesc); @@ -745,11 +738,8 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err, edesc = (struct ahash_edesc *)((char *)desc - offsetof(struct ahash_edesc, hw_desc)); - if (err) { - char tmp[CAAM_ERROR_STR_MAX]; - - dev_err(jrdev, "%08x: %s\n", err, caam_jr_strstatus(tmp, err)); - } + if (err) + caam_jr_strstatus(jrdev, err); ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_TO_DEVICE); kfree(edesc); diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c index 3529b54048c9..8c07d3153f12 100644 --- a/drivers/crypto/caam/caamrng.c +++ b/drivers/crypto/caam/caamrng.c @@ -103,11 +103,8 @@ static void rng_done(struct device *jrdev, u32 *desc, u32 err, void *context) bd 
= (struct buf_data *)((char *)desc - offsetof(struct buf_data, hw_desc)); - if (err) { - char tmp[CAAM_ERROR_STR_MAX]; - - dev_err(jrdev, "%08x: %s\n", err, caam_jr_strstatus(tmp, err)); - } + if (err) + caam_jr_strstatus(jrdev, err); atomic_set(&bd->empty, BUF_NOT_EMPTY); complete(&bd->filled); diff --git a/drivers/crypto/caam/error.c b/drivers/crypto/caam/error.c index 0eabd81e1a90..6531054a44c8 100644 --- a/drivers/crypto/caam/error.c +++ b/drivers/crypto/caam/error.c @@ -11,247 +11,208 @@ #include "jr.h" #include "error.h" -#define SPRINTFCAT(str, format, param, max_alloc) \ -{ \ - char *tmp; \ - \ - tmp = kmalloc(sizeof(format) + max_alloc, GFP_ATOMIC); \ - if (likely(tmp)) { \ - sprintf(tmp, format, param); \ - strcat(str, tmp); \ - kfree(tmp); \ - } else { \ - strcat(str, "kmalloc failure in SPRINTFCAT"); \ - } \ -} - -static void report_jump_idx(u32 status, char *outstr) +static const struct { + u8 value; + const char *error_text; +} desc_error_list[] = { + { 0x00, "No error." }, + { 0x01, "SGT Length Error. The descriptor is trying to read more data than is contained in the SGT table." }, + { 0x02, "SGT Null Entry Error." }, + { 0x03, "Job Ring Control Error. There is a bad value in the Job Ring Control register." }, + { 0x04, "Invalid Descriptor Command. The Descriptor Command field is invalid." }, + { 0x05, "Reserved." }, + { 0x06, "Invalid KEY Command" }, + { 0x07, "Invalid LOAD Command" }, + { 0x08, "Invalid STORE Command" }, + { 0x09, "Invalid OPERATION Command" }, + { 0x0A, "Invalid FIFO LOAD Command" }, + { 0x0B, "Invalid FIFO STORE Command" }, + { 0x0C, "Invalid MOVE/MOVE_LEN Command" }, + { 0x0D, "Invalid JUMP Command. A nonlocal JUMP Command is invalid because the target is not a Job Header Command, or the jump is from a Trusted Descriptor to a Job Descriptor, or because the target Descriptor contains a Shared Descriptor." }, + { 0x0E, "Invalid MATH Command" }, + { 0x0F, "Invalid SIGNATURE Command" }, + { 0x10, "Invalid Sequence Command. A SEQ IN PTR OR SEQ OUT PTR Command is invalid or a SEQ KEY, SEQ LOAD, SEQ FIFO LOAD, or SEQ FIFO STORE decremented the input or output sequence length below 0. This error may result if a built-in PROTOCOL Command has encountered a malformed PDU." }, + { 0x11, "Skip data type invalid. The type must be 0xE or 0xF."}, + { 0x12, "Shared Descriptor Header Error" }, + { 0x13, "Header Error. Invalid length or parity, or certain other problems." }, + { 0x14, "Burster Error. Burster has gotten to an illegal state" }, + { 0x15, "Context Register Length Error. The descriptor is trying to read or write past the end of the Context Register. A SEQ LOAD or SEQ STORE with the VLF bit set was executed with too large a length in the variable length register (VSOL for SEQ STORE or VSIL for SEQ LOAD)." }, + { 0x16, "DMA Error" }, + { 0x17, "Reserved." }, + { 0x1A, "Job failed due to JR reset" }, + { 0x1B, "Job failed due to Fail Mode" }, + { 0x1C, "DECO Watchdog timer timeout error" }, + { 0x1D, "DECO tried to copy a key from another DECO but the other DECO's Key Registers were locked" }, + { 0x1E, "DECO attempted to copy data from a DECO that had an unmasked Descriptor error" }, + { 0x1F, "LIODN error. DECO was trying to share from itself or from another DECO but the two Non-SEQ LIODN values didn't match or the 'shared from' DECO's Descriptor required that the SEQ LIODNs be the same and they aren't." }, + { 0x20, "DECO has completed a reset initiated via the DRR register" }, + { 0x21, "Nonce error. 
When using EKT (CCM) key encryption option in the FIFO STORE Command, the Nonce counter reached its maximum value and this encryption mode can no longer be used." }, + { 0x22, "Meta data is too large (> 511 bytes) for TLS decap (input frame; block ciphers) and IPsec decap (output frame, when doing the next header byte update) and DCRC (output frame)." }, + { 0x23, "Read Input Frame error" }, + { 0x24, "JDKEK, TDKEK or TDSK not loaded error" }, + { 0x80, "DNR (do not run) error" }, + { 0x81, "undefined protocol command" }, + { 0x82, "invalid setting in PDB" }, + { 0x83, "Anti-replay LATE error" }, + { 0x84, "Anti-replay REPLAY error" }, + { 0x85, "Sequence number overflow" }, + { 0x86, "Sigver invalid signature" }, + { 0x87, "DSA Sign Illegal test descriptor" }, + { 0x88, "Protocol Format Error - A protocol has seen an error in the format of data received. When running RSA, this means that formatting with random padding was used, and did not follow the form: 0x00, 0x02, 8-to-N bytes of non-zero pad, 0x00, F data." }, + { 0x89, "Protocol Size Error - A protocol has seen an error in size. When running RSA, pdb size N < (size of F) when no formatting is used; or pdb size N < (F + 11) when formatting is used." }, + { 0xC1, "Blob Command error: Undefined mode" }, + { 0xC2, "Blob Command error: Secure Memory Blob mode error" }, + { 0xC4, "Blob Command error: Black Blob key or input size error" }, + { 0xC5, "Blob Command error: Invalid key destination" }, + { 0xC8, "Blob Command error: Trusted/Secure mode error" }, + { 0xF0, "IPsec TTL or hop limit field either came in as 0, or was decremented to 0" }, + { 0xF1, "3GPP HFN matches or exceeds the Threshold" }, +}; + +static const char * const cha_id_list[] = { + "", + "AES", + "DES", + "ARC4", + "MDHA", + "RNG", + "SNOW f8", + "Kasumi f8/9", + "PKHA", + "CRCA", + "SNOW f9", + "ZUCE", + "ZUCA", +}; + +static const char * const err_id_list[] = { + "No error.", + "Mode error.", + "Data size error.", + "Key size error.", + "PKHA A memory size error.", + "PKHA B memory size error.", + "Data arrived out of sequence error.", + "PKHA divide-by-zero error.", + "PKHA modulus even error.", + "DES key parity error.", + "ICV check failed.", + "Hardware error.", + "Unsupported CCM AAD size.", + "Class 1 CHA is not reset", + "Invalid CHA combination was selected", + "Invalid CHA selected.", +}; + +static const char * const rng_err_id_list[] = { + "", + "", + "", + "Instantiate", + "Not instantiated", + "Test instantiate", + "Prediction resistance", + "Prediction resistance and test request", + "Uninstantiate", + "Secure key generation", +}; + +static void report_ccb_status(struct device *jrdev, const u32 status, + const char *error) { + u8 cha_id = (status & JRSTA_CCBERR_CHAID_MASK) >> + JRSTA_CCBERR_CHAID_SHIFT; + u8 err_id = status & JRSTA_CCBERR_ERRID_MASK; u8 idx = (status & JRSTA_DECOERR_INDEX_MASK) >> JRSTA_DECOERR_INDEX_SHIFT; + char *idx_str; + const char *cha_str = "unidentified cha_id value 0x"; + char cha_err_code[3] = { 0 }; + const char *err_str = "unidentified err_id value 0x"; + char err_err_code[3] = { 0 }; if (status & JRSTA_DECOERR_JUMP) - strcat(outstr, "jump tgt desc idx "); + idx_str = "jump tgt desc idx"; else - strcat(outstr, "desc idx "); - - SPRINTFCAT(outstr, "%d: ", idx, sizeof("255")); -} - -static void report_ccb_status(u32 status, char *outstr) -{ - static const char * const cha_id_list[] = { - "", - "AES", - "DES", - "ARC4", - "MDHA", - "RNG", - "SNOW f8", - "Kasumi f8/9", - "PKHA", - "CRCA", - "SNOW f9", - "ZUCE", - "ZUCA", - }; - 
static const char * const err_id_list[] = { - "No error.", - "Mode error.", - "Data size error.", - "Key size error.", - "PKHA A memory size error.", - "PKHA B memory size error.", - "Data arrived out of sequence error.", - "PKHA divide-by-zero error.", - "PKHA modulus even error.", - "DES key parity error.", - "ICV check failed.", - "Hardware error.", - "Unsupported CCM AAD size.", - "Class 1 CHA is not reset", - "Invalid CHA combination was selected", - "Invalid CHA selected.", - }; - static const char * const rng_err_id_list[] = { - "", - "", - "", - "Instantiate", - "Not instantiated", - "Test instantiate", - "Prediction resistance", - "Prediction resistance and test request", - "Uninstantiate", - "Secure key generation", - }; - u8 cha_id = (status & JRSTA_CCBERR_CHAID_MASK) >> - JRSTA_CCBERR_CHAID_SHIFT; - u8 err_id = status & JRSTA_CCBERR_ERRID_MASK; + idx_str = "desc idx"; - report_jump_idx(status, outstr); - - if (cha_id < ARRAY_SIZE(cha_id_list)) { - SPRINTFCAT(outstr, "%s: ", cha_id_list[cha_id], - strlen(cha_id_list[cha_id])); - } else { - SPRINTFCAT(outstr, "unidentified cha_id value 0x%02x: ", - cha_id, sizeof("ff")); - } + if (cha_id < ARRAY_SIZE(cha_id_list)) + cha_str = cha_id_list[cha_id]; + else + snprintf(cha_err_code, sizeof(cha_err_code), "%02x", cha_id); if ((cha_id << JRSTA_CCBERR_CHAID_SHIFT) == JRSTA_CCBERR_CHAID_RNG && err_id < ARRAY_SIZE(rng_err_id_list) && strlen(rng_err_id_list[err_id])) { /* RNG-only error */ - SPRINTFCAT(outstr, "%s", rng_err_id_list[err_id], - strlen(rng_err_id_list[err_id])); - } else if (err_id < ARRAY_SIZE(err_id_list)) { - SPRINTFCAT(outstr, "%s", err_id_list[err_id], - strlen(err_id_list[err_id])); - } else { - SPRINTFCAT(outstr, "unidentified err_id value 0x%02x", - err_id, sizeof("ff")); - } + err_str = rng_err_id_list[err_id]; + } else if (err_id < ARRAY_SIZE(err_id_list)) + err_str = err_id_list[err_id]; + else + snprintf(err_err_code, sizeof(err_err_code), "%02x", err_id); + + dev_err(jrdev, "%08x: %s: %s %d: %s%s: %s%s\n", + status, error, idx_str, idx, + cha_str, cha_err_code, + err_str, err_err_code); } -static void report_jump_status(u32 status, char *outstr) +static void report_jump_status(struct device *jrdev, const u32 status, + const char *error) { - SPRINTFCAT(outstr, "%s() not implemented", __func__, sizeof(__func__)); + dev_err(jrdev, "%08x: %s: %s() not implemented\n", + status, error, __func__); } -static void report_deco_status(u32 status, char *outstr) +static void report_deco_status(struct device *jrdev, const u32 status, + const char *error) { - static const struct { - u8 value; - char *error_text; - } desc_error_list[] = { - { 0x00, "No error." }, - { 0x01, "SGT Length Error. The descriptor is trying to read " - "more data than is contained in the SGT table." }, - { 0x02, "SGT Null Entry Error." }, - { 0x03, "Job Ring Control Error. There is a bad value in the " - "Job Ring Control register." }, - { 0x04, "Invalid Descriptor Command. The Descriptor Command " - "field is invalid." }, - { 0x05, "Reserved." }, - { 0x06, "Invalid KEY Command" }, - { 0x07, "Invalid LOAD Command" }, - { 0x08, "Invalid STORE Command" }, - { 0x09, "Invalid OPERATION Command" }, - { 0x0A, "Invalid FIFO LOAD Command" }, - { 0x0B, "Invalid FIFO STORE Command" }, - { 0x0C, "Invalid MOVE/MOVE_LEN Command" }, - { 0x0D, "Invalid JUMP Command. 
A nonlocal JUMP Command is " - "invalid because the target is not a Job Header " - "Command, or the jump is from a Trusted Descriptor to " - "a Job Descriptor, or because the target Descriptor " - "contains a Shared Descriptor." }, - { 0x0E, "Invalid MATH Command" }, - { 0x0F, "Invalid SIGNATURE Command" }, - { 0x10, "Invalid Sequence Command. A SEQ IN PTR OR SEQ OUT PTR " - "Command is invalid or a SEQ KEY, SEQ LOAD, SEQ FIFO " - "LOAD, or SEQ FIFO STORE decremented the input or " - "output sequence length below 0. This error may result " - "if a built-in PROTOCOL Command has encountered a " - "malformed PDU." }, - { 0x11, "Skip data type invalid. The type must be 0xE or 0xF."}, - { 0x12, "Shared Descriptor Header Error" }, - { 0x13, "Header Error. Invalid length or parity, or certain " - "other problems." }, - { 0x14, "Burster Error. Burster has gotten to an illegal " - "state" }, - { 0x15, "Context Register Length Error. The descriptor is " - "trying to read or write past the end of the Context " - "Register. A SEQ LOAD or SEQ STORE with the VLF bit " - "set was executed with too large a length in the " - "variable length register (VSOL for SEQ STORE or VSIL " - "for SEQ LOAD)." }, - { 0x16, "DMA Error" }, - { 0x17, "Reserved." }, - { 0x1A, "Job failed due to JR reset" }, - { 0x1B, "Job failed due to Fail Mode" }, - { 0x1C, "DECO Watchdog timer timeout error" }, - { 0x1D, "DECO tried to copy a key from another DECO but the " - "other DECO's Key Registers were locked" }, - { 0x1E, "DECO attempted to copy data from a DECO that had an " - "unmasked Descriptor error" }, - { 0x1F, "LIODN error. DECO was trying to share from itself or " - "from another DECO but the two Non-SEQ LIODN values " - "didn't match or the 'shared from' DECO's Descriptor " - "required that the SEQ LIODNs be the same and they " - "aren't." }, - { 0x20, "DECO has completed a reset initiated via the DRR " - "register" }, - { 0x21, "Nonce error. When using EKT (CCM) key encryption " - "option in the FIFO STORE Command, the Nonce counter " - "reached its maximum value and this encryption mode " - "can no longer be used." }, - { 0x22, "Meta data is too large (> 511 bytes) for TLS decap " - "(input frame; block ciphers) and IPsec decap (output " - "frame, when doing the next header byte update) and " - "DCRC (output frame)." }, - { 0x23, "Read Input Frame error" }, - { 0x24, "JDKEK, TDKEK or TDSK not loaded error" }, - { 0x80, "DNR (do not run) error" }, - { 0x81, "undefined protocol command" }, - { 0x82, "invalid setting in PDB" }, - { 0x83, "Anti-replay LATE error" }, - { 0x84, "Anti-replay REPLAY error" }, - { 0x85, "Sequence number overflow" }, - { 0x86, "Sigver invalid signature" }, - { 0x87, "DSA Sign Illegal test descriptor" }, - { 0x88, "Protocol Format Error - A protocol has seen an error " - "in the format of data received. When running RSA, " - "this means that formatting with random padding was " - "used, and did not follow the form: 0x00, 0x02, 8-to-N " - "bytes of non-zero pad, 0x00, F data." }, - { 0x89, "Protocol Size Error - A protocol has seen an error in " - "size. When running RSA, pdb size N < (size of F) when " - "no formatting is used; or pdb size N < (F + 11) when " - "formatting is used." 
}, - { 0xC1, "Blob Command error: Undefined mode" }, - { 0xC2, "Blob Command error: Secure Memory Blob mode error" }, - { 0xC4, "Blob Command error: Black Blob key or input size " - "error" }, - { 0xC5, "Blob Command error: Invalid key destination" }, - { 0xC8, "Blob Command error: Trusted/Secure mode error" }, - { 0xF0, "IPsec TTL or hop limit field either came in as 0, " - "or was decremented to 0" }, - { 0xF1, "3GPP HFN matches or exceeds the Threshold" }, - }; - u8 desc_error = status & JRSTA_DECOERR_ERROR_MASK; + u8 err_id = status & JRSTA_DECOERR_ERROR_MASK; + u8 idx = (status & JRSTA_DECOERR_INDEX_MASK) >> + JRSTA_DECOERR_INDEX_SHIFT; + char *idx_str; + const char *err_str = "unidentified error value 0x"; + char err_err_code[3] = { 0 }; int i; - report_jump_idx(status, outstr); + if (status & JRSTA_DECOERR_JUMP) + idx_str = "jump tgt desc idx"; + else + idx_str = "desc idx"; for (i = 0; i < ARRAY_SIZE(desc_error_list); i++) - if (desc_error_list[i].value == desc_error) + if (desc_error_list[i].value == err_id) break; - if (i != ARRAY_SIZE(desc_error_list) && desc_error_list[i].error_text) { - SPRINTFCAT(outstr, "%s", desc_error_list[i].error_text, - strlen(desc_error_list[i].error_text)); - } else { - SPRINTFCAT(outstr, "unidentified error value 0x%02x", - desc_error, sizeof("ff")); - } + if (i != ARRAY_SIZE(desc_error_list) && desc_error_list[i].error_text) + err_str = desc_error_list[i].error_text; + else + snprintf(err_err_code, sizeof(err_err_code), "%02x", err_id); + + dev_err(jrdev, "%08x: %s: %s %d: %s%s\n", + status, error, idx_str, idx, err_str, err_err_code); } -static void report_jr_status(u32 status, char *outstr) +static void report_jr_status(struct device *jrdev, const u32 status, + const char *error) { - SPRINTFCAT(outstr, "%s() not implemented", __func__, sizeof(__func__)); + dev_err(jrdev, "%08x: %s: %s() not implemented\n", + status, error, __func__); } -static void report_cond_code_status(u32 status, char *outstr) +static void report_cond_code_status(struct device *jrdev, const u32 status, + const char *error) { - SPRINTFCAT(outstr, "%s() not implemented", __func__, sizeof(__func__)); + dev_err(jrdev, "%08x: %s: %s() not implemented\n", + status, error, __func__); } -char *caam_jr_strstatus(char *outstr, u32 status) +void caam_jr_strstatus(struct device *jrdev, u32 status) { static const struct stat_src { - void (*report_ssed)(u32 status, char *outstr); - char *error; + void (*report_ssed)(struct device *jrdev, const u32 status, + const char *error); + const char *error; } status_src[] = { { NULL, "No error" }, { NULL, NULL }, @@ -263,12 +224,16 @@ char *caam_jr_strstatus(char *outstr, u32 status) { report_cond_code_status, "Condition Code" }, }; u32 ssrc = status >> JRSTA_SSRC_SHIFT; - - sprintf(outstr, "%s: ", status_src[ssrc].error); - - if (status_src[ssrc].report_ssed) - status_src[ssrc].report_ssed(status, outstr); - - return outstr; + const char *error = status_src[ssrc].error; + + /* + * If there is no further error handling function, just + * print the error code, error string and exit. Otherwise + * call the handler function. 
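
Aside on the caam/error.c rewrite: the SPRINTFCAT string building is replaced by single dev_err() calls driven by lookup tables. For DECO errors a small { value, text } table is searched and any unknown code falls back to its raw hex value; the CHA and RNG strings come from plain indexed arrays. Below is a stand-alone miniature of the DECO lookup; only a few desc_error_list entries are copied in, and a printf() harness stands in for dev_err().

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const struct {
        uint8_t value;
        const char *error_text;
} desc_error_list[] = {
        { 0x00, "No error." },
        { 0x16, "DMA Error" },
        { 0x24, "JDKEK, TDKEK or TDSK not loaded error" },
        { 0x80, "DNR (do not run) error" },
};

static void report_deco_error(unsigned int err_id)
{
        size_t i;

        /* linear search, exactly as the patched report_deco_status() does */
        for (i = 0; i < ARRAY_SIZE(desc_error_list); i++)
                if (desc_error_list[i].value == err_id)
                        break;

        if (i != ARRAY_SIZE(desc_error_list))
                printf("deco: %s\n", desc_error_list[i].error_text);
        else
                printf("deco: unidentified error value 0x%02x\n", err_id);
}

int main(void)
{
        report_deco_error(0x16);        /* found in the table */
        report_deco_error(0x42);        /* not listed: hex fallback */
        return 0;
}
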
+ */ + if (!status_src[ssrc].report_ssed) + dev_err(jrdev, "%08x: %s: \n", status, status_src[ssrc].error); + else + status_src[ssrc].report_ssed(jrdev, status, error); } EXPORT_SYMBOL(caam_jr_strstatus); diff --git a/drivers/crypto/caam/error.h b/drivers/crypto/caam/error.h index 02c7baa1748e..b6350b0d9153 100644 --- a/drivers/crypto/caam/error.h +++ b/drivers/crypto/caam/error.h @@ -7,5 +7,5 @@ #ifndef CAAM_ERROR_H #define CAAM_ERROR_H #define CAAM_ERROR_STR_MAX 302 -extern char *caam_jr_strstatus(char *outstr, u32 status); +void caam_jr_strstatus(struct device *jrdev, u32 status); #endif /* CAAM_ERROR_H */ diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c index ea2e406610eb..871703c49d2c 100644 --- a/drivers/crypto/caam/key_gen.c +++ b/drivers/crypto/caam/key_gen.c @@ -19,11 +19,8 @@ void split_key_done(struct device *dev, u32 *desc, u32 err, dev_err(dev, "%s %d: err 0x%x\n", __func__, __LINE__, err); #endif - if (err) { - char tmp[CAAM_ERROR_STR_MAX]; - - dev_err(dev, "%08x: %s\n", err, caam_jr_strstatus(tmp, err)); - } + if (err) + caam_jr_strstatus(dev, err); res->err = err; diff --git a/drivers/crypto/ccp/ccp-crypto-aes-xts.c b/drivers/crypto/ccp/ccp-crypto-aes-xts.c index 0237ab58f242..0cc5594b7de3 100644 --- a/drivers/crypto/ccp/ccp-crypto-aes-xts.c +++ b/drivers/crypto/ccp/ccp-crypto-aes-xts.c @@ -191,12 +191,12 @@ static int ccp_aes_xts_cra_init(struct crypto_tfm *tfm) ctx->complete = ccp_aes_xts_complete; ctx->u.aes.key_len = 0; - fallback_tfm = crypto_alloc_ablkcipher(tfm->__crt_alg->cra_name, 0, + fallback_tfm = crypto_alloc_ablkcipher(crypto_tfm_alg_name(tfm), 0, CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(fallback_tfm)) { pr_warn("could not load fallback driver %s\n", - tfm->__crt_alg->cra_name); + crypto_tfm_alg_name(tfm)); return PTR_ERR(fallback_tfm); } ctx->u.aes.tfm_ablkcipher = fallback_tfm; diff --git a/drivers/crypto/ccp/ccp-pci.c b/drivers/crypto/ccp/ccp-pci.c index 93319f9db753..0d746236df5e 100644 --- a/drivers/crypto/ccp/ccp-pci.c +++ b/drivers/crypto/ccp/ccp-pci.c @@ -48,12 +48,11 @@ static int ccp_get_msix_irqs(struct ccp_device *ccp) for (v = 0; v < ARRAY_SIZE(msix_entry); v++) msix_entry[v].entry = v; - while ((ret = pci_enable_msix(pdev, msix_entry, v)) > 0) - v = ret; - if (ret) + ret = pci_enable_msix_range(pdev, msix_entry, 1, v); + if (ret < 0) return ret; - ccp_pci->msix_count = v; + ccp_pci->msix_count = ret; for (v = 0; v < ccp_pci->msix_count; v++) { /* Set the interrupt names and request the irqs */ snprintf(ccp_pci->msix[v].name, name_len, "ccp-%u", v); diff --git a/drivers/crypto/geode-aes.c b/drivers/crypto/geode-aes.c index 0c9ff4971724..fe538e5287a5 100644 --- a/drivers/crypto/geode-aes.c +++ b/drivers/crypto/geode-aes.c @@ -226,7 +226,7 @@ geode_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) op->dst = (void *) out; op->mode = AES_MODE_ECB; op->flags = 0; - op->len = AES_MIN_BLOCK_SIZE; + op->len = AES_BLOCK_SIZE; op->dir = AES_DIR_ENCRYPT; geode_aes_crypt(op); @@ -247,7 +247,7 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) op->dst = (void *) out; op->mode = AES_MODE_ECB; op->flags = 0; - op->len = AES_MIN_BLOCK_SIZE; + op->len = AES_BLOCK_SIZE; op->dir = AES_DIR_DECRYPT; geode_aes_crypt(op); @@ -255,7 +255,7 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) static int fallback_init_cip(struct crypto_tfm *tfm) { - const char *name = tfm->__crt_alg->cra_name; + const char *name = crypto_tfm_alg_name(tfm); struct geode_aes_op *op = crypto_tfm_ctx(tfm); op->fallback.cip 
= crypto_alloc_cipher(name, 0, @@ -286,7 +286,7 @@ static struct crypto_alg geode_alg = { CRYPTO_ALG_NEED_FALLBACK, .cra_init = fallback_init_cip, .cra_exit = fallback_exit_cip, - .cra_blocksize = AES_MIN_BLOCK_SIZE, + .cra_blocksize = AES_BLOCK_SIZE, .cra_ctxsize = sizeof(struct geode_aes_op), .cra_module = THIS_MODULE, .cra_u = { @@ -320,7 +320,7 @@ geode_cbc_decrypt(struct blkcipher_desc *desc, op->src = walk.src.virt.addr, op->dst = walk.dst.virt.addr; op->mode = AES_MODE_CBC; - op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE); + op->len = nbytes - (nbytes % AES_BLOCK_SIZE); op->dir = AES_DIR_DECRYPT; ret = geode_aes_crypt(op); @@ -352,7 +352,7 @@ geode_cbc_encrypt(struct blkcipher_desc *desc, op->src = walk.src.virt.addr, op->dst = walk.dst.virt.addr; op->mode = AES_MODE_CBC; - op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE); + op->len = nbytes - (nbytes % AES_BLOCK_SIZE); op->dir = AES_DIR_ENCRYPT; ret = geode_aes_crypt(op); @@ -365,7 +365,7 @@ geode_cbc_encrypt(struct blkcipher_desc *desc, static int fallback_init_blk(struct crypto_tfm *tfm) { - const char *name = tfm->__crt_alg->cra_name; + const char *name = crypto_tfm_alg_name(tfm); struct geode_aes_op *op = crypto_tfm_ctx(tfm); op->fallback.blk = crypto_alloc_blkcipher(name, 0, @@ -396,7 +396,7 @@ static struct crypto_alg geode_cbc_alg = { CRYPTO_ALG_NEED_FALLBACK, .cra_init = fallback_init_blk, .cra_exit = fallback_exit_blk, - .cra_blocksize = AES_MIN_BLOCK_SIZE, + .cra_blocksize = AES_BLOCK_SIZE, .cra_ctxsize = sizeof(struct geode_aes_op), .cra_alignmask = 15, .cra_type = &crypto_blkcipher_type, @@ -408,7 +408,7 @@ static struct crypto_alg geode_cbc_alg = { .setkey = geode_setkey_blk, .encrypt = geode_cbc_encrypt, .decrypt = geode_cbc_decrypt, - .ivsize = AES_IV_LENGTH, + .ivsize = AES_BLOCK_SIZE, } } }; @@ -432,7 +432,7 @@ geode_ecb_decrypt(struct blkcipher_desc *desc, op->src = walk.src.virt.addr, op->dst = walk.dst.virt.addr; op->mode = AES_MODE_ECB; - op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE); + op->len = nbytes - (nbytes % AES_BLOCK_SIZE); op->dir = AES_DIR_DECRYPT; ret = geode_aes_crypt(op); @@ -462,7 +462,7 @@ geode_ecb_encrypt(struct blkcipher_desc *desc, op->src = walk.src.virt.addr, op->dst = walk.dst.virt.addr; op->mode = AES_MODE_ECB; - op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE); + op->len = nbytes - (nbytes % AES_BLOCK_SIZE); op->dir = AES_DIR_ENCRYPT; ret = geode_aes_crypt(op); @@ -482,7 +482,7 @@ static struct crypto_alg geode_ecb_alg = { CRYPTO_ALG_NEED_FALLBACK, .cra_init = fallback_init_blk, .cra_exit = fallback_exit_blk, - .cra_blocksize = AES_MIN_BLOCK_SIZE, + .cra_blocksize = AES_BLOCK_SIZE, .cra_ctxsize = sizeof(struct geode_aes_op), .cra_alignmask = 15, .cra_type = &crypto_blkcipher_type, @@ -547,7 +547,7 @@ static int geode_aes_probe(struct pci_dev *dev, const struct pci_device_id *id) if (ret) goto eecb; - printk(KERN_NOTICE "geode-aes: GEODE AES engine enabled.\n"); + dev_notice(&dev->dev, "GEODE AES engine enabled.\n"); return 0; eecb: @@ -565,7 +565,7 @@ static int geode_aes_probe(struct pci_dev *dev, const struct pci_device_id *id) eenable: pci_disable_device(dev); - printk(KERN_ERR "geode-aes: GEODE AES initialization failed.\n"); + dev_err(&dev->dev, "GEODE AES initialization failed.\n"); return ret; } diff --git a/drivers/crypto/geode-aes.h b/drivers/crypto/geode-aes.h index f1855b50da48..f442ca972e3c 100644 --- a/drivers/crypto/geode-aes.h +++ b/drivers/crypto/geode-aes.h @@ -10,10 +10,6 @@ #define _GEODE_AES_H_ /* driver logic flags */ -#define AES_IV_LENGTH 16 -#define 
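/*
 * Editor's note -- illustrative sketch, not part of the patch.  Several
 * drivers in this series (ccp, geode-aes, mv_cesa, mxs-dcp, n2_core,
 * padlock-sha, sahara) stop reaching into tfm->__crt_alg->cra_name and
 * use the crypto_tfm_alg_name() accessor when allocating a software
 * fallback under the same algorithm name.  example_fallback_init() is
 * hypothetical.
 */
#include <linux/crypto.h>
#include <linux/err.h>

static int example_fallback_init(struct crypto_tfm *tfm,
				 struct crypto_cipher **fallback)
{
	const char *name = crypto_tfm_alg_name(tfm);	/* e.g. "aes" */
	struct crypto_cipher *cip;

	cip = crypto_alloc_cipher(name, 0,
				  CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
	if (IS_ERR(cip))
		return PTR_ERR(cip);

	*fallback = cip;
	return 0;
}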
AES_KEY_LENGTH 16 -#define AES_MIN_BLOCK_SIZE 16 - #define AES_MODE_ECB 0 #define AES_MODE_CBC 1 @@ -64,7 +60,7 @@ struct geode_aes_op { u32 flags; int len; - u8 key[AES_KEY_LENGTH]; + u8 key[AES_KEYSIZE_128]; u8 *iv; union { diff --git a/drivers/crypto/mv_cesa.c b/drivers/crypto/mv_cesa.c index 8d1e6f8e9e9c..29d0ee504907 100644 --- a/drivers/crypto/mv_cesa.c +++ b/drivers/crypto/mv_cesa.c @@ -622,8 +622,8 @@ static int queue_manag(void *data) } if (async_req) { - if (async_req->tfm->__crt_alg->cra_type != - &crypto_ahash_type) { + if (crypto_tfm_alg_type(async_req->tfm) != + CRYPTO_ALG_TYPE_AHASH) { struct ablkcipher_request *req = ablkcipher_request_cast(async_req); mv_start_new_crypt_req(req); @@ -843,7 +843,7 @@ static int mv_hash_setkey(struct crypto_ahash *tfm, const u8 * key, static int mv_cra_hash_init(struct crypto_tfm *tfm, const char *base_hash_name, enum hash_op op, int count_add) { - const char *fallback_driver_name = tfm->__crt_alg->cra_name; + const char *fallback_driver_name = crypto_tfm_alg_name(tfm); struct mv_tfm_hash_ctx *ctx = crypto_tfm_ctx(tfm); struct crypto_shash *fallback_tfm = NULL; struct crypto_shash *base_hash = NULL; diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c index 7bbe0ab21eca..b5f7e6db24d4 100644 --- a/drivers/crypto/mxs-dcp.c +++ b/drivers/crypto/mxs-dcp.c @@ -104,7 +104,6 @@ struct dcp_sha_req_ctx { * design of Linux Crypto API. */ static struct dcp *global_sdcp; -static DEFINE_MUTEX(global_mutex); /* DCP register layout. */ #define MXS_DCP_CTRL 0x00 @@ -482,7 +481,7 @@ static int mxs_dcp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, static int mxs_dcp_aes_fallback_init(struct crypto_tfm *tfm) { - const char *name = tfm->__crt_alg->cra_name; + const char *name = crypto_tfm_alg_name(tfm); const uint32_t flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK; struct dcp_async_ctx *actx = crypto_tfm_ctx(tfm); struct crypto_ablkcipher *blk; @@ -907,60 +906,49 @@ static int mxs_dcp_probe(struct platform_device *pdev) struct resource *iores; int dcp_vmi_irq, dcp_irq; - mutex_lock(&global_mutex); if (global_sdcp) { dev_err(dev, "Only one DCP instance allowed!\n"); - ret = -ENODEV; - goto err_mutex; + return -ENODEV; } iores = platform_get_resource(pdev, IORESOURCE_MEM, 0); dcp_vmi_irq = platform_get_irq(pdev, 0); - if (dcp_vmi_irq < 0) { - ret = dcp_vmi_irq; - goto err_mutex; - } + if (dcp_vmi_irq < 0) + return dcp_vmi_irq; dcp_irq = platform_get_irq(pdev, 1); - if (dcp_irq < 0) { - ret = dcp_irq; - goto err_mutex; - } + if (dcp_irq < 0) + return dcp_irq; sdcp = devm_kzalloc(dev, sizeof(*sdcp), GFP_KERNEL); - if (!sdcp) { - ret = -ENOMEM; - goto err_mutex; - } + if (!sdcp) + return -ENOMEM; sdcp->dev = dev; sdcp->base = devm_ioremap_resource(dev, iores); - if (IS_ERR(sdcp->base)) { - ret = PTR_ERR(sdcp->base); - goto err_mutex; - } + if (IS_ERR(sdcp->base)) + return PTR_ERR(sdcp->base); + ret = devm_request_irq(dev, dcp_vmi_irq, mxs_dcp_irq, 0, "dcp-vmi-irq", sdcp); if (ret) { dev_err(dev, "Failed to claim DCP VMI IRQ!\n"); - goto err_mutex; + return ret; } ret = devm_request_irq(dev, dcp_irq, mxs_dcp_irq, 0, "dcp-irq", sdcp); if (ret) { dev_err(dev, "Failed to claim DCP IRQ!\n"); - goto err_mutex; + return ret; } /* Allocate coherent helper block. */ sdcp->coh = devm_kzalloc(dev, sizeof(*sdcp->coh) + DCP_ALIGNMENT, GFP_KERNEL); - if (!sdcp->coh) { - ret = -ENOMEM; - goto err_mutex; - } + if (!sdcp->coh) + return -ENOMEM; /* Re-align the structure so it fits the DCP constraints. 
*/ sdcp->coh = PTR_ALIGN(sdcp->coh, DCP_ALIGNMENT); @@ -968,7 +956,7 @@ static int mxs_dcp_probe(struct platform_device *pdev) /* Restart the DCP block. */ ret = stmp_reset_block(sdcp->base); if (ret) - goto err_mutex; + return ret; /* Initialize control register. */ writel(MXS_DCP_CTRL_GATHER_RESIDUAL_WRITES | @@ -1006,8 +994,7 @@ static int mxs_dcp_probe(struct platform_device *pdev) NULL, "mxs_dcp_chan/sha"); if (IS_ERR(sdcp->thread[DCP_CHAN_HASH_SHA])) { dev_err(dev, "Error starting SHA thread!\n"); - ret = PTR_ERR(sdcp->thread[DCP_CHAN_HASH_SHA]); - goto err_mutex; + return PTR_ERR(sdcp->thread[DCP_CHAN_HASH_SHA]); } sdcp->thread[DCP_CHAN_CRYPTO] = kthread_run(dcp_chan_thread_aes, @@ -1064,9 +1051,6 @@ err_destroy_aes_thread: err_destroy_sha_thread: kthread_stop(sdcp->thread[DCP_CHAN_HASH_SHA]); - -err_mutex: - mutex_unlock(&global_mutex); return ret; } @@ -1088,9 +1072,7 @@ static int mxs_dcp_remove(struct platform_device *pdev) platform_set_drvdata(pdev, NULL); - mutex_lock(&global_mutex); global_sdcp = NULL; - mutex_unlock(&global_mutex); return 0; } diff --git a/drivers/crypto/n2_core.c b/drivers/crypto/n2_core.c index e1f0ab413c3b..7263c10a56ee 100644 --- a/drivers/crypto/n2_core.c +++ b/drivers/crypto/n2_core.c @@ -356,7 +356,7 @@ static int n2_hash_async_finup(struct ahash_request *req) static int n2_hash_cra_init(struct crypto_tfm *tfm) { - const char *fallback_driver_name = tfm->__crt_alg->cra_name; + const char *fallback_driver_name = crypto_tfm_alg_name(tfm); struct crypto_ahash *ahash = __crypto_ahash_cast(tfm); struct n2_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct crypto_ahash *fallback_tfm; @@ -391,7 +391,7 @@ static void n2_hash_cra_exit(struct crypto_tfm *tfm) static int n2_hmac_cra_init(struct crypto_tfm *tfm) { - const char *fallback_driver_name = tfm->__crt_alg->cra_name; + const char *fallback_driver_name = crypto_tfm_alg_name(tfm); struct crypto_ahash *ahash = __crypto_ahash_cast(tfm); struct n2_hmac_ctx *ctx = crypto_ahash_ctx(ahash); struct n2_hmac_alg *n2alg = n2_hmac_alg(tfm); diff --git a/drivers/crypto/nx/nx-842.c b/drivers/crypto/nx/nx-842.c index 5ce8b5765121..502edf0a2933 100644 --- a/drivers/crypto/nx/nx-842.c +++ b/drivers/crypto/nx/nx-842.c @@ -1229,7 +1229,7 @@ static int __exit nx842_remove(struct vio_dev *viodev) old_devdata = rcu_dereference_check(devdata, lockdep_is_held(&devdata_mutex)); of_reconfig_notifier_unregister(&nx842_of_nb); - rcu_assign_pointer(devdata, NULL); + RCU_INIT_POINTER(devdata, NULL); spin_unlock_irqrestore(&devdata_mutex, flags); synchronize_rcu(); dev_set_drvdata(&viodev->dev, NULL); @@ -1280,7 +1280,7 @@ static void __exit nx842_exit(void) spin_lock_irqsave(&devdata_mutex, flags); old_devdata = rcu_dereference_check(devdata, lockdep_is_held(&devdata_mutex)); - rcu_assign_pointer(devdata, NULL); + RCU_INIT_POINTER(devdata, NULL); spin_unlock_irqrestore(&devdata_mutex, flags); synchronize_rcu(); if (old_devdata) diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c index ec5f13162b73..b8bc84be8741 100644 --- a/drivers/crypto/omap-des.c +++ b/drivers/crypto/omap-des.c @@ -223,12 +223,19 @@ static void omap_des_write_n(struct omap_des_dev *dd, u32 offset, static int omap_des_hw_init(struct omap_des_dev *dd) { + int err; + /* * clocks are enabled when request starts and disabled when finished. * It may be long delays between requests. * Device might go to off mode to save power. 
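/*
 * Editor's note -- illustrative sketch, not part of the nx-842 patch
 * above.  rcu_assign_pointer() includes a memory barrier so that a
 * newly published structure is fully initialised before readers can
 * see the pointer; when the new value is NULL there is nothing to
 * publish, so RCU_INIT_POINTER() can be used and the barrier avoided.
 * The names below are hypothetical.
 */
#include <linux/rcupdate.h>

struct example_state { int val; };
static struct example_state __rcu *example_state_ptr;

static void example_teardown(void)
{
	/* NULL carries no payload, so no ordering against readers is needed */
	RCU_INIT_POINTER(example_state_ptr, NULL);
	synchronize_rcu();	/* readers are done with the old state now */
}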
*/ - pm_runtime_get_sync(dd->dev); + err = pm_runtime_get_sync(dd->dev); + if (err < 0) { + pm_runtime_put_noidle(dd->dev); + dev_err(dd->dev, "%s: failed to get_sync(%d)\n", __func__, err); + return err; + } if (!(dd->flags & FLAGS_INIT)) { dd->flags |= FLAGS_INIT; @@ -1074,16 +1081,20 @@ static int omap_des_probe(struct platform_device *pdev) if (err) goto err_res; - dd->io_base = devm_request_and_ioremap(dev, res); - if (!dd->io_base) { - dev_err(dev, "can't ioremap\n"); - err = -ENOMEM; + dd->io_base = devm_ioremap_resource(dev, res); + if (IS_ERR(dd->io_base)) { + err = PTR_ERR(dd->io_base); goto err_res; } dd->phys_base = res->start; pm_runtime_enable(dev); - pm_runtime_get_sync(dev); + err = pm_runtime_get_sync(dev); + if (err < 0) { + pm_runtime_put_noidle(dev); + dev_err(dd->dev, "%s: failed to get_sync(%d)\n", __func__, err); + goto err_get; + } omap_des_dma_stop(dd); @@ -1148,6 +1159,7 @@ err_algs: err_irq: tasklet_kill(&dd->done_task); tasklet_kill(&dd->queue_task); +err_get: pm_runtime_disable(dev); err_res: dd = NULL; @@ -1191,7 +1203,14 @@ static int omap_des_suspend(struct device *dev) static int omap_des_resume(struct device *dev) { - pm_runtime_get_sync(dev); + int err; + + err = pm_runtime_get_sync(dev); + if (err < 0) { + pm_runtime_put_noidle(dev); + dev_err(dev, "%s: failed to get_sync(%d)\n", __func__, err); + return err; + } return 0; } #endif diff --git a/drivers/crypto/padlock-sha.c b/drivers/crypto/padlock-sha.c index 9266c0e25492..bace885634f2 100644 --- a/drivers/crypto/padlock-sha.c +++ b/drivers/crypto/padlock-sha.c @@ -211,7 +211,7 @@ static int padlock_sha256_final(struct shash_desc *desc, u8 *out) static int padlock_cra_init(struct crypto_tfm *tfm) { struct crypto_shash *hash = __crypto_shash_cast(tfm); - const char *fallback_driver_name = tfm->__crt_alg->cra_name; + const char *fallback_driver_name = crypto_tfm_alg_name(tfm); struct padlock_sha_ctx *ctx = crypto_tfm_ctx(tfm); struct crypto_shash *fallback_tfm; int err = -ENOMEM; diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c index be45762f390a..4197ad9a711b 100644 --- a/drivers/crypto/s5p-sss.c +++ b/drivers/crypto/s5p-sss.c @@ -22,6 +22,7 @@ #include <linux/scatterlist.h> #include <linux/dma-mapping.h> #include <linux/io.h> +#include <linux/of.h> #include <linux/crypto.h> #include <linux/interrupt.h> @@ -29,9 +30,6 @@ #include <crypto/aes.h> #include <crypto/ctr.h> -#include <plat/cpu.h> -#include <mach/dma.h> - #define _SBF(s, v) ((v) << (s)) #define _BIT(b) _SBF(b, 1) @@ -105,7 +103,7 @@ #define SSS_REG_FCPKDMAO 0x005C /* AES registers */ -#define SSS_REG_AES_CONTROL 0x4000 +#define SSS_REG_AES_CONTROL 0x00 #define SSS_AES_BYTESWAP_DI _BIT(11) #define SSS_AES_BYTESWAP_DO _BIT(10) #define SSS_AES_BYTESWAP_IV _BIT(9) @@ -121,21 +119,25 @@ #define SSS_AES_CHAIN_MODE_CTR _SBF(1, 0x02) #define SSS_AES_MODE_DECRYPT _BIT(0) -#define SSS_REG_AES_STATUS 0x4004 +#define SSS_REG_AES_STATUS 0x04 #define SSS_AES_BUSY _BIT(2) #define SSS_AES_INPUT_READY _BIT(1) #define SSS_AES_OUTPUT_READY _BIT(0) -#define SSS_REG_AES_IN_DATA(s) (0x4010 + (s << 2)) -#define SSS_REG_AES_OUT_DATA(s) (0x4020 + (s << 2)) -#define SSS_REG_AES_IV_DATA(s) (0x4030 + (s << 2)) -#define SSS_REG_AES_CNT_DATA(s) (0x4040 + (s << 2)) -#define SSS_REG_AES_KEY_DATA(s) (0x4080 + (s << 2)) +#define SSS_REG_AES_IN_DATA(s) (0x10 + (s << 2)) +#define SSS_REG_AES_OUT_DATA(s) (0x20 + (s << 2)) +#define SSS_REG_AES_IV_DATA(s) (0x30 + (s << 2)) +#define SSS_REG_AES_CNT_DATA(s) (0x40 + (s << 2)) +#define SSS_REG_AES_KEY_DATA(s) (0x80 + (s 
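/*
 * Editor's note -- illustrative sketch, not part of the omap-des patch
 * above.  pm_runtime_get_sync() increments the device's usage count
 * even when it fails, so the error path has to drop that reference
 * with pm_runtime_put_noidle() before returning, which is what the
 * hw_init/probe/resume hunks above add.  example_runtime_get() is
 * hypothetical.
 */
#include <linux/device.h>
#include <linux/pm_runtime.h>

static int example_runtime_get(struct device *dev)
{
	int err;

	err = pm_runtime_get_sync(dev);
	if (err < 0) {
		/* balance the usage count taken by pm_runtime_get_sync() */
		pm_runtime_put_noidle(dev);
		dev_err(dev, "pm_runtime_get_sync() failed: %d\n", err);
		return err;
	}
	return 0;
}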
<< 2)) #define SSS_REG(dev, reg) ((dev)->ioaddr + (SSS_REG_##reg)) #define SSS_READ(dev, reg) __raw_readl(SSS_REG(dev, reg)) #define SSS_WRITE(dev, reg, val) __raw_writel((val), SSS_REG(dev, reg)) +#define SSS_AES_REG(dev, reg) ((dev)->aes_ioaddr + SSS_REG_##reg) +#define SSS_AES_WRITE(dev, reg, val) __raw_writel((val), \ + SSS_AES_REG(dev, reg)) + /* HW engine modes */ #define FLAGS_AES_DECRYPT _BIT(0) #define FLAGS_AES_MODE_MASK _SBF(1, 0x03) @@ -145,6 +147,20 @@ #define AES_KEY_LEN 16 #define CRYPTO_QUEUE_LEN 1 +/** + * struct samsung_aes_variant - platform specific SSS driver data + * @has_hash_irq: true if SSS module uses hash interrupt, false otherwise + * @aes_offset: AES register offset from SSS module's base. + * + * Specifies platform specific configuration of SSS module. + * Note: A structure for driver specific platform data is used for future + * expansion of its usage. + */ +struct samsung_aes_variant { + bool has_hash_irq; + unsigned int aes_offset; +}; + struct s5p_aes_reqctx { unsigned long mode; }; @@ -161,6 +177,7 @@ struct s5p_aes_dev { struct device *dev; struct clk *clk; void __iomem *ioaddr; + void __iomem *aes_ioaddr; int irq_hash; int irq_fc; @@ -173,10 +190,48 @@ struct s5p_aes_dev { struct crypto_queue queue; bool busy; spinlock_t lock; + + struct samsung_aes_variant *variant; }; static struct s5p_aes_dev *s5p_dev; +static const struct samsung_aes_variant s5p_aes_data = { + .has_hash_irq = true, + .aes_offset = 0x4000, +}; + +static const struct samsung_aes_variant exynos_aes_data = { + .has_hash_irq = false, + .aes_offset = 0x200, +}; + +static const struct of_device_id s5p_sss_dt_match[] = { + { + .compatible = "samsung,s5pv210-secss", + .data = &s5p_aes_data, + }, + { + .compatible = "samsung,exynos4210-secss", + .data = &exynos_aes_data, + }, + { }, +}; +MODULE_DEVICE_TABLE(of, s5p_sss_dt_match); + +static inline struct samsung_aes_variant *find_s5p_sss_version + (struct platform_device *pdev) +{ + if (IS_ENABLED(CONFIG_OF) && (pdev->dev.of_node)) { + const struct of_device_id *match; + match = of_match_node(s5p_sss_dt_match, + pdev->dev.of_node); + return (struct samsung_aes_variant *)match->data; + } + return (struct samsung_aes_variant *) + platform_get_device_id(pdev)->driver_data; +} + static void s5p_set_dma_indata(struct s5p_aes_dev *dev, struct scatterlist *sg) { SSS_WRITE(dev, FCBRDMAS, sg_dma_address(sg)); @@ -272,8 +327,12 @@ static void s5p_aes_tx(struct s5p_aes_dev *dev) } s5p_set_dma_outdata(dev, dev->sg_dst); - } else + } else { s5p_aes_complete(dev, err); + + dev->busy = true; + tasklet_schedule(&dev->tasklet); + } } static void s5p_aes_rx(struct s5p_aes_dev *dev) @@ -322,14 +381,15 @@ static void s5p_set_aes(struct s5p_aes_dev *dev, { void __iomem *keystart; - memcpy(dev->ioaddr + SSS_REG_AES_IV_DATA(0), iv, 0x10); + if (iv) + memcpy(dev->aes_ioaddr + SSS_REG_AES_IV_DATA(0), iv, 0x10); if (keylen == AES_KEYSIZE_256) - keystart = dev->ioaddr + SSS_REG_AES_KEY_DATA(0); + keystart = dev->aes_ioaddr + SSS_REG_AES_KEY_DATA(0); else if (keylen == AES_KEYSIZE_192) - keystart = dev->ioaddr + SSS_REG_AES_KEY_DATA(2); + keystart = dev->aes_ioaddr + SSS_REG_AES_KEY_DATA(2); else - keystart = dev->ioaddr + SSS_REG_AES_KEY_DATA(4); + keystart = dev->aes_ioaddr + SSS_REG_AES_KEY_DATA(4); memcpy(keystart, key, keylen); } @@ -379,7 +439,7 @@ static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode) if (err) goto outdata_error; - SSS_WRITE(dev, AES_CONTROL, aes_control); + SSS_AES_WRITE(dev, AES_CONTROL, aes_control); s5p_set_aes(dev, 
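/*
 * Editor's note -- illustrative sketch, not part of the s5p-sss patch.
 * find_s5p_sss_version() above is an instance of a common pattern:
 * per-SoC configuration (here the AES register offset and whether a
 * separate hash interrupt exists) is carried in the .data field of the
 * of_device_id table for DT boots, with platform_device_id driver_data
 * as the non-DT fallback.  The names below are hypothetical.
 */
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

struct example_variant {
	unsigned int reg_offset;
	bool has_extra_irq;
};

static const struct example_variant *
example_get_variant(struct platform_device *pdev,
		    const struct of_device_id *dt_match)
{
	if (IS_ENABLED(CONFIG_OF) && pdev->dev.of_node) {
		const struct of_device_id *match;

		match = of_match_node(dt_match, pdev->dev.of_node);
		return match ? match->data : NULL;
	}
	return (const struct example_variant *)
		platform_get_device_id(pdev)->driver_data;
}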
dev->ctx->aes_key, req->info, dev->ctx->keylen); s5p_set_dma_indata(dev, req->src); @@ -410,10 +470,13 @@ static void s5p_tasklet_cb(unsigned long data) spin_lock_irqsave(&dev->lock, flags); backlog = crypto_get_backlog(&dev->queue); async_req = crypto_dequeue_request(&dev->queue); - spin_unlock_irqrestore(&dev->lock, flags); - if (!async_req) + if (!async_req) { + dev->busy = false; + spin_unlock_irqrestore(&dev->lock, flags); return; + } + spin_unlock_irqrestore(&dev->lock, flags); if (backlog) backlog->complete(backlog, -EINPROGRESS); @@ -432,14 +495,13 @@ static int s5p_aes_handle_req(struct s5p_aes_dev *dev, int err; spin_lock_irqsave(&dev->lock, flags); + err = ablkcipher_enqueue_request(&dev->queue, req); if (dev->busy) { - err = -EAGAIN; spin_unlock_irqrestore(&dev->lock, flags); goto exit; } dev->busy = true; - err = ablkcipher_enqueue_request(&dev->queue, req); spin_unlock_irqrestore(&dev->lock, flags); tasklet_schedule(&dev->tasklet); @@ -564,6 +626,7 @@ static int s5p_aes_probe(struct platform_device *pdev) struct s5p_aes_dev *pdata; struct device *dev = &pdev->dev; struct resource *res; + struct samsung_aes_variant *variant; if (s5p_dev) return -EEXIST; @@ -577,30 +640,25 @@ static int s5p_aes_probe(struct platform_device *pdev) if (IS_ERR(pdata->ioaddr)) return PTR_ERR(pdata->ioaddr); + variant = find_s5p_sss_version(pdev); + pdata->clk = devm_clk_get(dev, "secss"); if (IS_ERR(pdata->clk)) { dev_err(dev, "failed to find secss clock source\n"); return -ENOENT; } - clk_enable(pdata->clk); + err = clk_prepare_enable(pdata->clk); + if (err < 0) { + dev_err(dev, "Enabling SSS clk failed, err %d\n", err); + return err; + } spin_lock_init(&pdata->lock); - pdata->irq_hash = platform_get_irq_byname(pdev, "hash"); - if (pdata->irq_hash < 0) { - err = pdata->irq_hash; - dev_warn(dev, "hash interrupt is not available.\n"); - goto err_irq; - } - err = devm_request_irq(dev, pdata->irq_hash, s5p_aes_interrupt, - IRQF_SHARED, pdev->name, pdev); - if (err < 0) { - dev_warn(dev, "hash interrupt is not available.\n"); - goto err_irq; - } + pdata->aes_ioaddr = pdata->ioaddr + variant->aes_offset; - pdata->irq_fc = platform_get_irq_byname(pdev, "feed control"); + pdata->irq_fc = platform_get_irq(pdev, 0); if (pdata->irq_fc < 0) { err = pdata->irq_fc; dev_warn(dev, "feed control interrupt is not available.\n"); @@ -613,6 +671,23 @@ static int s5p_aes_probe(struct platform_device *pdev) goto err_irq; } + if (variant->has_hash_irq) { + pdata->irq_hash = platform_get_irq(pdev, 1); + if (pdata->irq_hash < 0) { + err = pdata->irq_hash; + dev_warn(dev, "hash interrupt is not available.\n"); + goto err_irq; + } + err = devm_request_irq(dev, pdata->irq_hash, s5p_aes_interrupt, + IRQF_SHARED, pdev->name, pdev); + if (err < 0) { + dev_warn(dev, "hash interrupt is not available.\n"); + goto err_irq; + } + } + + pdata->busy = false; + pdata->variant = variant; pdata->dev = dev; platform_set_drvdata(pdev, pdata); s5p_dev = pdata; @@ -639,7 +714,7 @@ static int s5p_aes_probe(struct platform_device *pdev) tasklet_kill(&pdata->tasklet); err_irq: - clk_disable(pdata->clk); + clk_disable_unprepare(pdata->clk); s5p_dev = NULL; @@ -659,7 +734,7 @@ static int s5p_aes_remove(struct platform_device *pdev) tasklet_kill(&pdata->tasklet); - clk_disable(pdata->clk); + clk_disable_unprepare(pdata->clk); s5p_dev = NULL; @@ -672,6 +747,7 @@ static struct platform_driver s5p_aes_crypto = { .driver = { .owner = THIS_MODULE, .name = "s5p-secss", + .of_match_table = s5p_sss_dt_match, }, }; diff --git a/drivers/crypto/sahara.c 
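/*
 * Editor's note -- illustrative sketch, not part of the patch.  Under
 * the common clock framework a clock must be prepared before it can be
 * enabled, so the probe/remove paths above move from clk_enable()/
 * clk_disable() to clk_prepare_enable()/clk_disable_unprepare() and
 * start checking the return value.  example_clk_on() is hypothetical.
 */
#include <linux/clk.h>
#include <linux/device.h>

static int example_clk_on(struct device *dev, struct clk *clk)
{
	int err;

	err = clk_prepare_enable(clk);
	if (err) {
		dev_err(dev, "enabling clock failed: %d\n", err);
		return err;
	}
	return 0;
}
/* The teardown path mirrors this with clk_disable_unprepare(clk). */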
b/drivers/crypto/sahara.c index 07a5987ce67d..164e1ec624e3 100644 --- a/drivers/crypto/sahara.c +++ b/drivers/crypto/sahara.c @@ -728,7 +728,7 @@ static int sahara_aes_cbc_decrypt(struct ablkcipher_request *req) static int sahara_aes_cra_init(struct crypto_tfm *tfm) { - const char *name = tfm->__crt_alg->cra_name; + const char *name = crypto_tfm_alg_name(tfm); struct sahara_ctx *ctx = crypto_tfm_ctx(tfm); ctx->fallback = crypto_alloc_ablkcipher(name, 0, diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig index 1e75f48b61f8..d420ae2d3413 100644 --- a/drivers/firmware/efi/Kconfig +++ b/drivers/firmware/efi/Kconfig @@ -47,6 +47,13 @@ config EFI_RUNTIME_MAP See also Documentation/ABI/testing/sysfs-firmware-efi-runtime-map. +config EFI_PARAMS_FROM_FDT + bool + help + Select this config option from the architecture Kconfig if + the EFI runtime support gets system table address, memory + map address, and other parameters from the device tree. + endmenu config UEFI_CPER diff --git a/drivers/firmware/efi/arm-stub.c b/drivers/firmware/efi/arm-stub.c new file mode 100644 index 000000000000..41114ce03b01 --- /dev/null +++ b/drivers/firmware/efi/arm-stub.c @@ -0,0 +1,278 @@ +/* + * EFI stub implementation that is shared by arm and arm64 architectures. + * This should be #included by the EFI stub implementation files. + * + * Copyright (C) 2013,2014 Linaro Limited + * Roy Franz <roy.franz@linaro.org + * Copyright (C) 2013 Red Hat, Inc. + * Mark Salter <msalter@redhat.com> + * + * This file is part of the Linux kernel, and is made available under the + * terms of the GNU General Public License version 2. + * + */ + +static int __init efi_secureboot_enabled(efi_system_table_t *sys_table_arg) +{ + static efi_guid_t const var_guid __initconst = EFI_GLOBAL_VARIABLE_GUID; + static efi_char16_t const var_name[] __initconst = { + 'S', 'e', 'c', 'u', 'r', 'e', 'B', 'o', 'o', 't', 0 }; + + efi_get_variable_t *f_getvar = sys_table_arg->runtime->get_variable; + unsigned long size = sizeof(u8); + efi_status_t status; + u8 val; + + status = f_getvar((efi_char16_t *)var_name, (efi_guid_t *)&var_guid, + NULL, &size, &val); + + switch (status) { + case EFI_SUCCESS: + return val; + case EFI_NOT_FOUND: + return 0; + default: + return 1; + } +} + +static efi_status_t efi_open_volume(efi_system_table_t *sys_table_arg, + void *__image, void **__fh) +{ + efi_file_io_interface_t *io; + efi_loaded_image_t *image = __image; + efi_file_handle_t *fh; + efi_guid_t fs_proto = EFI_FILE_SYSTEM_GUID; + efi_status_t status; + void *handle = (void *)(unsigned long)image->device_handle; + + status = sys_table_arg->boottime->handle_protocol(handle, + &fs_proto, (void **)&io); + if (status != EFI_SUCCESS) { + efi_printk(sys_table_arg, "Failed to handle fs_proto\n"); + return status; + } + + status = io->open_volume(io, &fh); + if (status != EFI_SUCCESS) + efi_printk(sys_table_arg, "Failed to open volume\n"); + + *__fh = fh; + return status; +} +static efi_status_t efi_file_close(void *handle) +{ + efi_file_handle_t *fh = handle; + + return fh->close(handle); +} + +static efi_status_t +efi_file_read(void *handle, unsigned long *size, void *addr) +{ + efi_file_handle_t *fh = handle; + + return fh->read(handle, size, addr); +} + + +static efi_status_t +efi_file_size(efi_system_table_t *sys_table_arg, void *__fh, + efi_char16_t *filename_16, void **handle, u64 *file_sz) +{ + efi_file_handle_t *h, *fh = __fh; + efi_file_info_t *info; + efi_status_t status; + efi_guid_t info_guid = EFI_FILE_INFO_ID; + unsigned long info_sz; + 
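/*
 * Editor's note: the get_info() calls just below follow the usual UEFI
 * sizing convention -- query once with a too-small (here zero-length)
 * buffer, receive EFI_BUFFER_TOO_SMALL together with the required
 * size, allocate a pool buffer of that size and query again, looping
 * if the requirement grew in between.  The stub's efi_get_memory_map()
 * helper uses the same convention for GetMemoryMap().
 */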
+ status = fh->open(fh, &h, filename_16, EFI_FILE_MODE_READ, (u64)0); + if (status != EFI_SUCCESS) { + efi_printk(sys_table_arg, "Failed to open file: "); + efi_char16_printk(sys_table_arg, filename_16); + efi_printk(sys_table_arg, "\n"); + return status; + } + + *handle = h; + + info_sz = 0; + status = h->get_info(h, &info_guid, &info_sz, NULL); + if (status != EFI_BUFFER_TOO_SMALL) { + efi_printk(sys_table_arg, "Failed to get file info size\n"); + return status; + } + +grow: + status = sys_table_arg->boottime->allocate_pool(EFI_LOADER_DATA, + info_sz, (void **)&info); + if (status != EFI_SUCCESS) { + efi_printk(sys_table_arg, "Failed to alloc mem for file info\n"); + return status; + } + + status = h->get_info(h, &info_guid, &info_sz, + info); + if (status == EFI_BUFFER_TOO_SMALL) { + sys_table_arg->boottime->free_pool(info); + goto grow; + } + + *file_sz = info->file_size; + sys_table_arg->boottime->free_pool(info); + + if (status != EFI_SUCCESS) + efi_printk(sys_table_arg, "Failed to get initrd info\n"); + + return status; +} + + + +static void efi_char16_printk(efi_system_table_t *sys_table_arg, + efi_char16_t *str) +{ + struct efi_simple_text_output_protocol *out; + + out = (struct efi_simple_text_output_protocol *)sys_table_arg->con_out; + out->output_string(out, str); +} + + +/* + * This function handles the architcture specific differences between arm and + * arm64 regarding where the kernel image must be loaded and any memory that + * must be reserved. On failure it is required to free all + * all allocations it has made. + */ +static efi_status_t handle_kernel_image(efi_system_table_t *sys_table, + unsigned long *image_addr, + unsigned long *image_size, + unsigned long *reserve_addr, + unsigned long *reserve_size, + unsigned long dram_base, + efi_loaded_image_t *image); +/* + * EFI entry point for the arm/arm64 EFI stubs. This is the entrypoint + * that is described in the PE/COFF header. Most of the code is the same + * for both archictectures, with the arch-specific code provided in the + * handle_kernel_image() function. + */ +unsigned long __init efi_entry(void *handle, efi_system_table_t *sys_table, + unsigned long *image_addr) +{ + efi_loaded_image_t *image; + efi_status_t status; + unsigned long image_size = 0; + unsigned long dram_base; + /* addr/point and size pairs for memory management*/ + unsigned long initrd_addr; + u64 initrd_size = 0; + unsigned long fdt_addr = 0; /* Original DTB */ + u64 fdt_size = 0; /* We don't get size from configuration table */ + char *cmdline_ptr = NULL; + int cmdline_size = 0; + unsigned long new_fdt_addr; + efi_guid_t loaded_image_proto = LOADED_IMAGE_PROTOCOL_GUID; + unsigned long reserve_addr = 0; + unsigned long reserve_size = 0; + + /* Check if we were booted by the EFI firmware */ + if (sys_table->hdr.signature != EFI_SYSTEM_TABLE_SIGNATURE) + goto fail; + + pr_efi(sys_table, "Booting Linux Kernel...\n"); + + /* + * Get a handle to the loaded image protocol. This is used to get + * information about the running image, such as size and the command + * line. 
+ */ + status = sys_table->boottime->handle_protocol(handle, + &loaded_image_proto, (void *)&image); + if (status != EFI_SUCCESS) { + pr_efi_err(sys_table, "Failed to get loaded image protocol\n"); + goto fail; + } + + dram_base = get_dram_base(sys_table); + if (dram_base == EFI_ERROR) { + pr_efi_err(sys_table, "Failed to find DRAM base\n"); + goto fail; + } + status = handle_kernel_image(sys_table, image_addr, &image_size, + &reserve_addr, + &reserve_size, + dram_base, image); + if (status != EFI_SUCCESS) { + pr_efi_err(sys_table, "Failed to relocate kernel\n"); + goto fail; + } + + /* + * Get the command line from EFI, using the LOADED_IMAGE + * protocol. We are going to copy the command line into the + * device tree, so this can be allocated anywhere. + */ + cmdline_ptr = efi_convert_cmdline(sys_table, image, &cmdline_size); + if (!cmdline_ptr) { + pr_efi_err(sys_table, "getting command line via LOADED_IMAGE_PROTOCOL\n"); + goto fail_free_image; + } + + /* + * Unauthenticated device tree data is a security hazard, so + * ignore 'dtb=' unless UEFI Secure Boot is disabled. + */ + if (efi_secureboot_enabled(sys_table)) { + pr_efi(sys_table, "UEFI Secure Boot is enabled.\n"); + } else { + status = handle_cmdline_files(sys_table, image, cmdline_ptr, + "dtb=", + ~0UL, (unsigned long *)&fdt_addr, + (unsigned long *)&fdt_size); + + if (status != EFI_SUCCESS) { + pr_efi_err(sys_table, "Failed to load device tree!\n"); + goto fail_free_cmdline; + } + } + if (!fdt_addr) + /* Look for a device tree configuration table entry. */ + fdt_addr = (uintptr_t)get_fdt(sys_table); + + status = handle_cmdline_files(sys_table, image, cmdline_ptr, + "initrd=", dram_base + SZ_512M, + (unsigned long *)&initrd_addr, + (unsigned long *)&initrd_size); + if (status != EFI_SUCCESS) + pr_efi_err(sys_table, "Failed initrd from command line!\n"); + + new_fdt_addr = fdt_addr; + status = allocate_new_fdt_and_exit_boot(sys_table, handle, + &new_fdt_addr, dram_base + MAX_FDT_OFFSET, + initrd_addr, initrd_size, cmdline_ptr, + fdt_addr, fdt_size); + + /* + * If all went well, we need to return the FDT address to the + * calling function so it can be passed to kernel as part of + * the kernel boot protocol. 
+ */ + if (status == EFI_SUCCESS) + return new_fdt_addr; + + pr_efi_err(sys_table, "Failed to update FDT and exit boot services\n"); + + efi_free(sys_table, initrd_size, initrd_addr); + efi_free(sys_table, fdt_size, fdt_addr); + +fail_free_cmdline: + efi_free(sys_table, cmdline_size, (unsigned long)cmdline_ptr); + +fail_free_image: + efi_free(sys_table, image_size, *image_addr); + efi_free(sys_table, reserve_size, reserve_addr); +fail: + return EFI_ERROR; +} diff --git a/drivers/firmware/efi/efi-stub-helper.c b/drivers/firmware/efi/efi-stub-helper.c index 2c41eaece2c1..eb6d4be9e722 100644 --- a/drivers/firmware/efi/efi-stub-helper.c +++ b/drivers/firmware/efi/efi-stub-helper.c @@ -11,6 +11,10 @@ */ #define EFI_READ_CHUNK_SIZE (1024 * 1024) +/* error code which can't be mistaken for valid address */ +#define EFI_ERROR (~0UL) + + struct file_info { efi_file_handle_t *handle; u64 size; @@ -33,6 +37,9 @@ static void efi_printk(efi_system_table_t *sys_table_arg, char *str) } } +#define pr_efi(sys_table, msg) efi_printk(sys_table, "EFI stub: "msg) +#define pr_efi_err(sys_table, msg) efi_printk(sys_table, "EFI stub: ERROR: "msg) + static efi_status_t efi_get_memory_map(efi_system_table_t *sys_table_arg, efi_memory_desc_t **map, @@ -80,6 +87,32 @@ fail: return status; } + +static unsigned long __init get_dram_base(efi_system_table_t *sys_table_arg) +{ + efi_status_t status; + unsigned long map_size; + unsigned long membase = EFI_ERROR; + struct efi_memory_map map; + efi_memory_desc_t *md; + + status = efi_get_memory_map(sys_table_arg, (efi_memory_desc_t **)&map.map, + &map_size, &map.desc_size, NULL, NULL); + if (status != EFI_SUCCESS) + return membase; + + map.map_end = map.map + map_size; + + for_each_efi_memory_desc(&map, md) + if (md->attribute & EFI_MEMORY_WB) + if (membase > md->phys_addr) + membase = md->phys_addr; + + efi_call_early(free_pool, map.map); + + return membase; +} + /* * Allocate at the highest possible address that is not above 'max'. */ @@ -267,7 +300,7 @@ static efi_status_t handle_cmdline_files(efi_system_table_t *sys_table_arg, struct file_info *files; unsigned long file_addr; u64 file_size_total; - efi_file_handle_t *fh; + efi_file_handle_t *fh = NULL; efi_status_t status; int nr_files; char *str; @@ -310,7 +343,7 @@ static efi_status_t handle_cmdline_files(efi_system_table_t *sys_table_arg, status = efi_call_early(allocate_pool, EFI_LOADER_DATA, nr_files * sizeof(*files), (void **)&files); if (status != EFI_SUCCESS) { - efi_printk(sys_table_arg, "Failed to alloc mem for file handle list\n"); + pr_efi_err(sys_table_arg, "Failed to alloc mem for file handle list\n"); goto fail; } @@ -374,13 +407,13 @@ static efi_status_t handle_cmdline_files(efi_system_table_t *sys_table_arg, status = efi_high_alloc(sys_table_arg, file_size_total, 0x1000, &file_addr, max_addr); if (status != EFI_SUCCESS) { - efi_printk(sys_table_arg, "Failed to alloc highmem for files\n"); + pr_efi_err(sys_table_arg, "Failed to alloc highmem for files\n"); goto close_handles; } /* We've run out of free low memory. 
*/ if (file_addr > max_addr) { - efi_printk(sys_table_arg, "We've run out of free low memory\n"); + pr_efi_err(sys_table_arg, "We've run out of free low memory\n"); status = EFI_INVALID_PARAMETER; goto free_file_total; } @@ -401,7 +434,7 @@ static efi_status_t handle_cmdline_files(efi_system_table_t *sys_table_arg, &chunksize, (void *)addr); if (status != EFI_SUCCESS) { - efi_printk(sys_table_arg, "Failed to read file\n"); + pr_efi_err(sys_table_arg, "Failed to read file\n"); goto free_file_total; } addr += chunksize; @@ -486,7 +519,7 @@ static efi_status_t efi_relocate_kernel(efi_system_table_t *sys_table_arg, &new_addr); } if (status != EFI_SUCCESS) { - efi_printk(sys_table_arg, "ERROR: Failed to allocate usable memory for kernel.\n"); + pr_efi_err(sys_table_arg, "Failed to allocate usable memory for kernel.\n"); return status; } @@ -503,62 +536,99 @@ static efi_status_t efi_relocate_kernel(efi_system_table_t *sys_table_arg, } /* + * Get the number of UTF-8 bytes corresponding to an UTF-16 character. + * This overestimates for surrogates, but that is okay. + */ +static int efi_utf8_bytes(u16 c) +{ + return 1 + (c >= 0x80) + (c >= 0x800); +} + +/* + * Convert an UTF-16 string, not necessarily null terminated, to UTF-8. + */ +static u8 *efi_utf16_to_utf8(u8 *dst, const u16 *src, int n) +{ + unsigned int c; + + while (n--) { + c = *src++; + if (n && c >= 0xd800 && c <= 0xdbff && + *src >= 0xdc00 && *src <= 0xdfff) { + c = 0x10000 + ((c & 0x3ff) << 10) + (*src & 0x3ff); + src++; + n--; + } + if (c >= 0xd800 && c <= 0xdfff) + c = 0xfffd; /* Unmatched surrogate */ + if (c < 0x80) { + *dst++ = c; + continue; + } + if (c < 0x800) { + *dst++ = 0xc0 + (c >> 6); + goto t1; + } + if (c < 0x10000) { + *dst++ = 0xe0 + (c >> 12); + goto t2; + } + *dst++ = 0xf0 + (c >> 18); + *dst++ = 0x80 + ((c >> 12) & 0x3f); + t2: + *dst++ = 0x80 + ((c >> 6) & 0x3f); + t1: + *dst++ = 0x80 + (c & 0x3f); + } + + return dst; +} + +/* * Convert the unicode UEFI command line to ASCII to pass to kernel. * Size of memory allocated return in *cmd_line_len. * Returns NULL on error. */ -static char *efi_convert_cmdline_to_ascii(efi_system_table_t *sys_table_arg, - efi_loaded_image_t *image, - int *cmd_line_len) +static char *efi_convert_cmdline(efi_system_table_t *sys_table_arg, + efi_loaded_image_t *image, + int *cmd_line_len) { - u16 *s2; + const u16 *s2; u8 *s1 = NULL; unsigned long cmdline_addr = 0; - int load_options_size = image->load_options_size / 2; /* ASCII */ - void *options = image->load_options; - int options_size = 0; + int load_options_chars = image->load_options_size / 2; /* UTF-16 */ + const u16 *options = image->load_options; + int options_bytes = 0; /* UTF-8 bytes */ + int options_chars = 0; /* UTF-16 chars */ efi_status_t status; - int i; u16 zero = 0; if (options) { s2 = options; - while (*s2 && *s2 != '\n' && options_size < load_options_size) { - s2++; - options_size++; + while (*s2 && *s2 != '\n' + && options_chars < load_options_chars) { + options_bytes += efi_utf8_bytes(*s2++); + options_chars++; } } - if (options_size == 0) { + if (!options_chars) { /* No command line options, so return empty string*/ - options_size = 1; options = &zero; } - options_size++; /* NUL termination */ -#ifdef CONFIG_ARM - /* - * For ARM, allocate at a high address to avoid reserved - * regions at low addresses that we don't know the specfics of - * at the time we are processing the command line. 
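/*
 * Editor's note: efi_utf8_bytes() above is deliberately a cheap upper
 * bound.  Per code point, UTF-8 needs
 *     U+0000..U+007F   -> 1 byte   (one UTF-16 unit)
 *     U+0080..U+07FF   -> 2 bytes  (one UTF-16 unit)
 *     U+0800..U+FFFF   -> 3 bytes  (one UTF-16 unit)
 *     U+10000 and up   -> 4 bytes  (a surrogate pair, two UTF-16 units)
 * Counting 3 bytes per surrogate half budgets 6 bytes where 4 are
 * emitted, so the buffer sized from options_bytes can only come out
 * slightly too large, never too small.
 */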
- */ - status = efi_high_alloc(sys_table_arg, options_size, 0, - &cmdline_addr, 0xfffff000); -#else - status = efi_low_alloc(sys_table_arg, options_size, 0, - &cmdline_addr); -#endif + options_bytes++; /* NUL termination */ + + status = efi_low_alloc(sys_table_arg, options_bytes, 0, &cmdline_addr); if (status != EFI_SUCCESS) return NULL; s1 = (u8 *)cmdline_addr; - s2 = (u16 *)options; - - for (i = 0; i < options_size - 1; i++) - *s1++ = *s2++; + s2 = (const u16 *)options; + s1 = efi_utf16_to_utf8(s1, s2, options_chars); *s1 = '\0'; - *cmd_line_len = options_size; + *cmd_line_len = options_bytes; return (char *)cmdline_addr; } diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c index af20f1712337..cd36deb619fa 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c @@ -20,6 +20,8 @@ #include <linux/init.h> #include <linux/device.h> #include <linux/efi.h> +#include <linux/of.h> +#include <linux/of_fdt.h> #include <linux/io.h> struct efi __read_mostly efi = { @@ -318,3 +320,80 @@ int __init efi_config_init(efi_config_table_type_t *arch_tables) return 0; } + +#ifdef CONFIG_EFI_PARAMS_FROM_FDT + +#define UEFI_PARAM(name, prop, field) \ + { \ + { name }, \ + { prop }, \ + offsetof(struct efi_fdt_params, field), \ + FIELD_SIZEOF(struct efi_fdt_params, field) \ + } + +static __initdata struct { + const char name[32]; + const char propname[32]; + int offset; + int size; +} dt_params[] = { + UEFI_PARAM("System Table", "linux,uefi-system-table", system_table), + UEFI_PARAM("MemMap Address", "linux,uefi-mmap-start", mmap), + UEFI_PARAM("MemMap Size", "linux,uefi-mmap-size", mmap_size), + UEFI_PARAM("MemMap Desc. Size", "linux,uefi-mmap-desc-size", desc_size), + UEFI_PARAM("MemMap Desc. Version", "linux,uefi-mmap-desc-ver", desc_ver) +}; + +struct param_info { + int verbose; + void *params; +}; + +static int __init fdt_find_uefi_params(unsigned long node, const char *uname, + int depth, void *data) +{ + struct param_info *info = data; + void *prop, *dest; + unsigned long len; + u64 val; + int i; + + if (depth != 1 || + (strcmp(uname, "chosen") != 0 && strcmp(uname, "chosen@0") != 0)) + return 0; + + pr_info("Getting parameters from FDT:\n"); + + for (i = 0; i < ARRAY_SIZE(dt_params); i++) { + prop = of_get_flat_dt_prop(node, dt_params[i].propname, &len); + if (!prop) { + pr_err("Can't find %s in device tree!\n", + dt_params[i].name); + return 0; + } + dest = info->params + dt_params[i].offset; + + val = of_read_number(prop, len / sizeof(u32)); + + if (dt_params[i].size == sizeof(u32)) + *(u32 *)dest = val; + else + *(u64 *)dest = val; + + if (info->verbose) + pr_info(" %s: 0x%0*llx\n", dt_params[i].name, + dt_params[i].size * 2, val); + } + return 1; +} + +int __init efi_get_fdt_params(struct efi_fdt_params *params, int verbose) +{ + struct param_info info; + + info.verbose = verbose; + info.params = params; + + return of_scan_flat_dt(fdt_find_uefi_params, &info); +} +#endif /* CONFIG_EFI_PARAMS_FROM_FDT */ diff --git a/drivers/firmware/efi/efivars.c b/drivers/firmware/efi/efivars.c index 50ea412a25e6..463c56545ae8 100644 --- a/drivers/firmware/efi/efivars.c +++ b/drivers/firmware/efi/efivars.c @@ -69,6 +69,7 @@ #include <linux/module.h> #include <linux/slab.h> #include <linux/ucs2_string.h> +#include <linux/compat.h> #define EFIVARS_VERSION "0.08" #define EFIVARS_DATE "2004-May-17" @@ -86,6 +87,15 @@ static struct kset *efivars_kset; static struct bin_attribute *efivars_new_var; static struct bin_attribute *efivars_del_var; +struct compat_efi_variable { + efi_char16_t 
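/*
 * Editor's note -- illustrative sketch, not part of the patch.  The new
 * efi_get_fdt_params() is meant to be called from architecture early
 * init code: it scans /chosen for the linux,uefi-* properties written
 * by the stub and fills a struct efi_fdt_params (system_table, mmap,
 * mmap_size, desc_size, desc_ver).  The caller below is hypothetical.
 */
#include <linux/efi.h>
#include <linux/errno.h>

static struct efi_fdt_params example_fdt_params;

static int __init example_uefi_init_from_fdt(int verbose)
{
	if (!efi_get_fdt_params(&example_fdt_params, verbose))
		return -ENODEV;	/* no usable UEFI parameters in the DT */

	/* example_fdt_params.system_table, .mmap, .mmap_size etc. are valid */
	return 0;
}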
VariableName[EFI_VAR_NAME_LEN/sizeof(efi_char16_t)]; + efi_guid_t VendorGuid; + __u32 DataSize; + __u8 Data[1024]; + __u32 Status; + __u32 Attributes; +} __packed; + struct efivar_attribute { struct attribute attr; ssize_t (*show) (struct efivar_entry *entry, char *buf); @@ -189,45 +199,107 @@ efivar_data_read(struct efivar_entry *entry, char *buf) memcpy(buf, var->Data, var->DataSize); return var->DataSize; } -/* - * We allow each variable to be edited via rewriting the - * entire efi variable structure. - */ -static ssize_t -efivar_store_raw(struct efivar_entry *entry, const char *buf, size_t count) -{ - struct efi_variable *new_var, *var = &entry->var; - int err; - if (count != sizeof(struct efi_variable)) - return -EINVAL; - - new_var = (struct efi_variable *)buf; +static inline int +sanity_check(struct efi_variable *var, efi_char16_t *name, efi_guid_t vendor, + unsigned long size, u32 attributes, u8 *data) +{ /* * If only updating the variable data, then the name * and guid should remain the same */ - if (memcmp(new_var->VariableName, var->VariableName, sizeof(var->VariableName)) || - efi_guidcmp(new_var->VendorGuid, var->VendorGuid)) { + if (memcmp(name, var->VariableName, sizeof(var->VariableName)) || + efi_guidcmp(vendor, var->VendorGuid)) { printk(KERN_ERR "efivars: Cannot edit the wrong variable!\n"); return -EINVAL; } - if ((new_var->DataSize <= 0) || (new_var->Attributes == 0)){ + if ((size <= 0) || (attributes == 0)){ printk(KERN_ERR "efivars: DataSize & Attributes must be valid!\n"); return -EINVAL; } - if ((new_var->Attributes & ~EFI_VARIABLE_MASK) != 0 || - efivar_validate(new_var, new_var->Data, new_var->DataSize) == false) { + if ((attributes & ~EFI_VARIABLE_MASK) != 0 || + efivar_validate(name, data, size) == false) { printk(KERN_ERR "efivars: Malformed variable content\n"); return -EINVAL; } - memcpy(&entry->var, new_var, count); + return 0; +} + +static inline bool is_compat(void) +{ + if (IS_ENABLED(CONFIG_COMPAT) && is_compat_task()) + return true; + + return false; +} + +static void +copy_out_compat(struct efi_variable *dst, struct compat_efi_variable *src) +{ + memcpy(dst->VariableName, src->VariableName, EFI_VAR_NAME_LEN); + memcpy(dst->Data, src->Data, sizeof(src->Data)); + + dst->VendorGuid = src->VendorGuid; + dst->DataSize = src->DataSize; + dst->Attributes = src->Attributes; +} + +/* + * We allow each variable to be edited via rewriting the + * entire efi variable structure. 
+ */ +static ssize_t +efivar_store_raw(struct efivar_entry *entry, const char *buf, size_t count) +{ + struct efi_variable *new_var, *var = &entry->var; + efi_char16_t *name; + unsigned long size; + efi_guid_t vendor; + u32 attributes; + u8 *data; + int err; + + if (is_compat()) { + struct compat_efi_variable *compat; + + if (count != sizeof(*compat)) + return -EINVAL; + + compat = (struct compat_efi_variable *)buf; + attributes = compat->Attributes; + vendor = compat->VendorGuid; + name = compat->VariableName; + size = compat->DataSize; + data = compat->Data; + + err = sanity_check(var, name, vendor, size, attributes, data); + if (err) + return err; + + copy_out_compat(&entry->var, compat); + } else { + if (count != sizeof(struct efi_variable)) + return -EINVAL; + + new_var = (struct efi_variable *)buf; - err = efivar_entry_set(entry, new_var->Attributes, - new_var->DataSize, new_var->Data, NULL); + attributes = new_var->Attributes; + vendor = new_var->VendorGuid; + name = new_var->VariableName; + size = new_var->DataSize; + data = new_var->Data; + + err = sanity_check(var, name, vendor, size, attributes, data); + if (err) + return err; + + memcpy(&entry->var, new_var, count); + } + + err = efivar_entry_set(entry, attributes, size, data, NULL); if (err) { printk(KERN_WARNING "efivars: set_variable() failed: status=%d\n", err); return -EIO; @@ -240,6 +312,8 @@ static ssize_t efivar_show_raw(struct efivar_entry *entry, char *buf) { struct efi_variable *var = &entry->var; + struct compat_efi_variable *compat; + size_t size; if (!entry || !buf) return 0; @@ -249,9 +323,23 @@ efivar_show_raw(struct efivar_entry *entry, char *buf) &entry->var.DataSize, entry->var.Data)) return -EIO; - memcpy(buf, var, sizeof(*var)); + if (is_compat()) { + compat = (struct compat_efi_variable *)buf; + + size = sizeof(*compat); + memcpy(compat->VariableName, var->VariableName, + EFI_VAR_NAME_LEN); + memcpy(compat->Data, var->Data, sizeof(compat->Data)); + + compat->VendorGuid = var->VendorGuid; + compat->DataSize = var->DataSize; + compat->Attributes = var->Attributes; + } else { + size = sizeof(*var); + memcpy(buf, var, size); + } - return sizeof(*var); + return size; } /* @@ -326,15 +414,39 @@ static ssize_t efivar_create(struct file *filp, struct kobject *kobj, struct bin_attribute *bin_attr, char *buf, loff_t pos, size_t count) { + struct compat_efi_variable *compat = (struct compat_efi_variable *)buf; struct efi_variable *new_var = (struct efi_variable *)buf; struct efivar_entry *new_entry; + bool need_compat = is_compat(); + efi_char16_t *name; + unsigned long size; + u32 attributes; + u8 *data; int err; if (!capable(CAP_SYS_ADMIN)) return -EACCES; - if ((new_var->Attributes & ~EFI_VARIABLE_MASK) != 0 || - efivar_validate(new_var, new_var->Data, new_var->DataSize) == false) { + if (need_compat) { + if (count != sizeof(*compat)) + return -EINVAL; + + attributes = compat->Attributes; + name = compat->VariableName; + size = compat->DataSize; + data = compat->Data; + } else { + if (count != sizeof(*new_var)) + return -EINVAL; + + attributes = new_var->Attributes; + name = new_var->VariableName; + size = new_var->DataSize; + data = new_var->Data; + } + + if ((attributes & ~EFI_VARIABLE_MASK) != 0 || + efivar_validate(name, data, size) == false) { printk(KERN_ERR "efivars: Malformed variable content\n"); return -EINVAL; } @@ -343,10 +455,13 @@ static ssize_t efivar_create(struct file *filp, struct kobject *kobj, if (!new_entry) return -ENOMEM; - memcpy(&new_entry->var, new_var, sizeof(*new_var)); + if 
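/*
 * Editor's note: the separate compat_efi_variable layout exists
 * because struct efi_variable carries unsigned-long-sized fields
 * (DataSize, Status), so its size and field offsets differ between a
 * 32-bit process and a 64-bit kernel.  is_compat() detects a 32-bit
 * caller, and the store/show/create/delete paths in this file convert
 * between the two layouts instead of failing the size check with
 * -EINVAL.
 */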
(need_compat) + copy_out_compat(&new_entry->var, compat); + else + memcpy(&new_entry->var, new_var, sizeof(*new_var)); - err = efivar_entry_set(new_entry, new_var->Attributes, new_var->DataSize, - new_var->Data, &efivar_sysfs_list); + err = efivar_entry_set(new_entry, attributes, size, + data, &efivar_sysfs_list); if (err) { if (err == -EEXIST) err = -EINVAL; @@ -369,15 +484,32 @@ static ssize_t efivar_delete(struct file *filp, struct kobject *kobj, char *buf, loff_t pos, size_t count) { struct efi_variable *del_var = (struct efi_variable *)buf; + struct compat_efi_variable *compat; struct efivar_entry *entry; + efi_char16_t *name; + efi_guid_t vendor; int err = 0; if (!capable(CAP_SYS_ADMIN)) return -EACCES; + if (is_compat()) { + if (count != sizeof(*compat)) + return -EINVAL; + + compat = (struct compat_efi_variable *)buf; + name = compat->VariableName; + vendor = compat->VendorGuid; + } else { + if (count != sizeof(*del_var)) + return -EINVAL; + + name = del_var->VariableName; + vendor = del_var->VendorGuid; + } + efivar_entry_iter_begin(); - entry = efivar_entry_find(del_var->VariableName, del_var->VendorGuid, - &efivar_sysfs_list, true); + entry = efivar_entry_find(name, vendor, &efivar_sysfs_list, true); if (!entry) err = -EINVAL; else if (__efivar_entry_delete(entry)) diff --git a/drivers/firmware/efi/fdt.c b/drivers/firmware/efi/fdt.c new file mode 100644 index 000000000000..5c6a8e8a9580 --- /dev/null +++ b/drivers/firmware/efi/fdt.c @@ -0,0 +1,285 @@ +/* + * FDT related Helper functions used by the EFI stub on multiple + * architectures. This should be #included by the EFI stub + * implementation files. + * + * Copyright 2013 Linaro Limited; author Roy Franz + * + * This file is part of the Linux kernel, and is made available + * under the terms of the GNU General Public License version 2. + * + */ + +static efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt, + unsigned long orig_fdt_size, + void *fdt, int new_fdt_size, char *cmdline_ptr, + u64 initrd_addr, u64 initrd_size, + efi_memory_desc_t *memory_map, + unsigned long map_size, unsigned long desc_size, + u32 desc_ver) +{ + int node, prev; + int status; + u32 fdt_val32; + u64 fdt_val64; + + /* + * Copy definition of linux_banner here. Since this code is + * built as part of the decompressor for ARM v7, pulling + * in version.c where linux_banner is defined for the + * kernel brings other kernel dependencies with it. + */ + const char linux_banner[] = + "Linux version " UTS_RELEASE " (" LINUX_COMPILE_BY "@" + LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION "\n"; + + /* Do some checks on provided FDT, if it exists*/ + if (orig_fdt) { + if (fdt_check_header(orig_fdt)) { + pr_efi_err(sys_table, "Device Tree header not valid!\n"); + return EFI_LOAD_ERROR; + } + /* + * We don't get the size of the FDT if we get if from a + * configuration table. + */ + if (orig_fdt_size && fdt_totalsize(orig_fdt) > orig_fdt_size) { + pr_efi_err(sys_table, "Truncated device tree! foo!\n"); + return EFI_LOAD_ERROR; + } + } + + if (orig_fdt) + status = fdt_open_into(orig_fdt, fdt, new_fdt_size); + else + status = fdt_create_empty_tree(fdt, new_fdt_size); + + if (status != 0) + goto fdt_set_fail; + + /* + * Delete any memory nodes present. We must delete nodes which + * early_init_dt_scan_memory may try to use. 
+ */ + prev = 0; + for (;;) { + const char *type, *name; + int len; + + node = fdt_next_node(fdt, prev, NULL); + if (node < 0) + break; + + type = fdt_getprop(fdt, node, "device_type", &len); + if (type && strncmp(type, "memory", len) == 0) { + fdt_del_node(fdt, node); + continue; + } + + prev = node; + } + + node = fdt_subnode_offset(fdt, 0, "chosen"); + if (node < 0) { + node = fdt_add_subnode(fdt, 0, "chosen"); + if (node < 0) { + status = node; /* node is error code when negative */ + goto fdt_set_fail; + } + } + + if ((cmdline_ptr != NULL) && (strlen(cmdline_ptr) > 0)) { + status = fdt_setprop(fdt, node, "bootargs", cmdline_ptr, + strlen(cmdline_ptr) + 1); + if (status) + goto fdt_set_fail; + } + + /* Set initrd address/end in device tree, if present */ + if (initrd_size != 0) { + u64 initrd_image_end; + u64 initrd_image_start = cpu_to_fdt64(initrd_addr); + + status = fdt_setprop(fdt, node, "linux,initrd-start", + &initrd_image_start, sizeof(u64)); + if (status) + goto fdt_set_fail; + initrd_image_end = cpu_to_fdt64(initrd_addr + initrd_size); + status = fdt_setprop(fdt, node, "linux,initrd-end", + &initrd_image_end, sizeof(u64)); + if (status) + goto fdt_set_fail; + } + + /* Add FDT entries for EFI runtime services in chosen node. */ + node = fdt_subnode_offset(fdt, 0, "chosen"); + fdt_val64 = cpu_to_fdt64((u64)(unsigned long)sys_table); + status = fdt_setprop(fdt, node, "linux,uefi-system-table", + &fdt_val64, sizeof(fdt_val64)); + if (status) + goto fdt_set_fail; + + fdt_val64 = cpu_to_fdt64((u64)(unsigned long)memory_map); + status = fdt_setprop(fdt, node, "linux,uefi-mmap-start", + &fdt_val64, sizeof(fdt_val64)); + if (status) + goto fdt_set_fail; + + fdt_val32 = cpu_to_fdt32(map_size); + status = fdt_setprop(fdt, node, "linux,uefi-mmap-size", + &fdt_val32, sizeof(fdt_val32)); + if (status) + goto fdt_set_fail; + + fdt_val32 = cpu_to_fdt32(desc_size); + status = fdt_setprop(fdt, node, "linux,uefi-mmap-desc-size", + &fdt_val32, sizeof(fdt_val32)); + if (status) + goto fdt_set_fail; + + fdt_val32 = cpu_to_fdt32(desc_ver); + status = fdt_setprop(fdt, node, "linux,uefi-mmap-desc-ver", + &fdt_val32, sizeof(fdt_val32)); + if (status) + goto fdt_set_fail; + + /* + * Add kernel version banner so stub/kernel match can be + * verified. + */ + status = fdt_setprop_string(fdt, node, "linux,uefi-stub-kern-ver", + linux_banner); + if (status) + goto fdt_set_fail; + + return EFI_SUCCESS; + +fdt_set_fail: + if (status == -FDT_ERR_NOSPACE) + return EFI_BUFFER_TOO_SMALL; + + return EFI_LOAD_ERROR; +} + +#ifndef EFI_FDT_ALIGN +#define EFI_FDT_ALIGN EFI_PAGE_SIZE +#endif + +/* + * Allocate memory for a new FDT, then add EFI, commandline, and + * initrd related fields to the FDT. This routine increases the + * FDT allocation size until the allocated memory is large + * enough. EFI allocations are in EFI_PAGE_SIZE granules, + * which are fixed at 4K bytes, so in most cases the first + * allocation should succeed. + * EFI boot services are exited at the end of this function. + * There must be no allocations between the get_memory_map() + * call and the exit_boot_services() call, so the exiting of + * boot services is very tightly tied to the creation of the FDT + * with the final memory map in it. 
+ */ + +efi_status_t allocate_new_fdt_and_exit_boot(efi_system_table_t *sys_table, + void *handle, + unsigned long *new_fdt_addr, + unsigned long max_addr, + u64 initrd_addr, u64 initrd_size, + char *cmdline_ptr, + unsigned long fdt_addr, + unsigned long fdt_size) +{ + unsigned long map_size, desc_size; + u32 desc_ver; + unsigned long mmap_key; + efi_memory_desc_t *memory_map; + unsigned long new_fdt_size; + efi_status_t status; + + /* + * Estimate size of new FDT, and allocate memory for it. We + * will allocate a bigger buffer if this ends up being too + * small, so a rough guess is OK here. + */ + new_fdt_size = fdt_size + EFI_PAGE_SIZE; + while (1) { + status = efi_high_alloc(sys_table, new_fdt_size, EFI_FDT_ALIGN, + new_fdt_addr, max_addr); + if (status != EFI_SUCCESS) { + pr_efi_err(sys_table, "Unable to allocate memory for new device tree.\n"); + goto fail; + } + + /* + * Now that we have done our final memory allocation (and free) + * we can get the memory map key needed for + * exit_boot_services(). + */ + status = efi_get_memory_map(sys_table, &memory_map, &map_size, + &desc_size, &desc_ver, &mmap_key); + if (status != EFI_SUCCESS) + goto fail_free_new_fdt; + + status = update_fdt(sys_table, + (void *)fdt_addr, fdt_size, + (void *)*new_fdt_addr, new_fdt_size, + cmdline_ptr, initrd_addr, initrd_size, + memory_map, map_size, desc_size, desc_ver); + + /* Succeeding the first time is the expected case. */ + if (status == EFI_SUCCESS) + break; + + if (status == EFI_BUFFER_TOO_SMALL) { + /* + * We need to allocate more space for the new + * device tree, so free existing buffer that is + * too small. Also free memory map, as we will need + * to get new one that reflects the free/alloc we do + * on the device tree buffer. + */ + efi_free(sys_table, new_fdt_size, *new_fdt_addr); + sys_table->boottime->free_pool(memory_map); + new_fdt_size += EFI_PAGE_SIZE; + } else { + pr_efi_err(sys_table, "Unable to constuct new device tree.\n"); + goto fail_free_mmap; + } + } + + /* Now we are ready to exit_boot_services.*/ + status = sys_table->boottime->exit_boot_services(handle, mmap_key); + + + if (status == EFI_SUCCESS) + return status; + + pr_efi_err(sys_table, "Exit boot services failed.\n"); + +fail_free_mmap: + sys_table->boottime->free_pool(memory_map); + +fail_free_new_fdt: + efi_free(sys_table, new_fdt_size, *new_fdt_addr); + +fail: + return EFI_LOAD_ERROR; +} + +static void *get_fdt(efi_system_table_t *sys_table) +{ + efi_guid_t fdt_guid = DEVICE_TREE_GUID; + efi_config_table_t *tables; + void *fdt; + int i; + + tables = (efi_config_table_t *) sys_table->tables; + fdt = NULL; + + for (i = 0; i < sys_table->nr_tables; i++) + if (efi_guidcmp(tables[i].guid, fdt_guid) == 0) { + fdt = (void *) tables[i].table; + break; + } + + return fdt; +} diff --git a/drivers/firmware/efi/vars.c b/drivers/firmware/efi/vars.c index b22659cccca4..f0a43646a2f3 100644 --- a/drivers/firmware/efi/vars.c +++ b/drivers/firmware/efi/vars.c @@ -42,7 +42,7 @@ DECLARE_WORK(efivar_work, NULL); EXPORT_SYMBOL_GPL(efivar_work); static bool -validate_device_path(struct efi_variable *var, int match, u8 *buffer, +validate_device_path(efi_char16_t *var_name, int match, u8 *buffer, unsigned long len) { struct efi_generic_dev_path *node; @@ -75,7 +75,7 @@ validate_device_path(struct efi_variable *var, int match, u8 *buffer, } static bool -validate_boot_order(struct efi_variable *var, int match, u8 *buffer, +validate_boot_order(efi_char16_t *var_name, int match, u8 *buffer, unsigned long len) { /* An array of 16-bit integers */ 
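/*
 * Editor's note -- illustrative sketch, not part of the patch.  The
 * validators in vars.c only ever inspected the variable's name, so
 * these hunks pass an efi_char16_t *var_name instead of a whole
 * struct efi_variable; efivar_validate() can then also be called from
 * paths that have just a name, a data buffer and a length (see the
 * efivar_entry_set_get_size() change further on).  A validator with
 * the new signature, as a hypothetical example:
 */
#include <linux/efi.h>

/* match is the offset in var_name where the table entry's wildcard began */
static bool example_validate_nonempty(efi_char16_t *var_name, int match,
				      u8 *buffer, unsigned long len)
{
	return len > 0;		/* reject empty payloads, accept the rest */
}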
@@ -86,18 +86,18 @@ validate_boot_order(struct efi_variable *var, int match, u8 *buffer, } static bool -validate_load_option(struct efi_variable *var, int match, u8 *buffer, +validate_load_option(efi_char16_t *var_name, int match, u8 *buffer, unsigned long len) { u16 filepathlength; int i, desclength = 0, namelen; - namelen = ucs2_strnlen(var->VariableName, sizeof(var->VariableName)); + namelen = ucs2_strnlen(var_name, EFI_VAR_NAME_LEN); /* Either "Boot" or "Driver" followed by four digits of hex */ for (i = match; i < match+4; i++) { - if (var->VariableName[i] > 127 || - hex_to_bin(var->VariableName[i] & 0xff) < 0) + if (var_name[i] > 127 || + hex_to_bin(var_name[i] & 0xff) < 0) return true; } @@ -132,12 +132,12 @@ validate_load_option(struct efi_variable *var, int match, u8 *buffer, /* * And, finally, check the filepath */ - return validate_device_path(var, match, buffer + desclength + 6, + return validate_device_path(var_name, match, buffer + desclength + 6, filepathlength); } static bool -validate_uint16(struct efi_variable *var, int match, u8 *buffer, +validate_uint16(efi_char16_t *var_name, int match, u8 *buffer, unsigned long len) { /* A single 16-bit integer */ @@ -148,7 +148,7 @@ validate_uint16(struct efi_variable *var, int match, u8 *buffer, } static bool -validate_ascii_string(struct efi_variable *var, int match, u8 *buffer, +validate_ascii_string(efi_char16_t *var_name, int match, u8 *buffer, unsigned long len) { int i; @@ -166,7 +166,7 @@ validate_ascii_string(struct efi_variable *var, int match, u8 *buffer, struct variable_validate { char *name; - bool (*validate)(struct efi_variable *var, int match, u8 *data, + bool (*validate)(efi_char16_t *var_name, int match, u8 *data, unsigned long len); }; @@ -189,10 +189,10 @@ static const struct variable_validate variable_validate[] = { }; bool -efivar_validate(struct efi_variable *var, u8 *data, unsigned long len) +efivar_validate(efi_char16_t *var_name, u8 *data, unsigned long len) { int i; - u16 *unicode_name = var->VariableName; + u16 *unicode_name = var_name; for (i = 0; variable_validate[i].validate != NULL; i++) { const char *name = variable_validate[i].name; @@ -208,7 +208,7 @@ efivar_validate(struct efi_variable *var, u8 *data, unsigned long len) /* Wildcard in the matching name means we've matched */ if (c == '*') - return variable_validate[i].validate(var, + return variable_validate[i].validate(var_name, match, data, len); /* Case sensitive match */ @@ -217,7 +217,7 @@ efivar_validate(struct efi_variable *var, u8 *data, unsigned long len) /* Reached the end of the string while matching */ if (!c) - return variable_validate[i].validate(var, + return variable_validate[i].validate(var_name, match, data, len); } } @@ -805,7 +805,7 @@ int efivar_entry_set_get_size(struct efivar_entry *entry, u32 attributes, *set = false; - if (efivar_validate(&entry->var, data, *size) == false) + if (efivar_validate(name, data, *size) == false) return -EINVAL; /* diff --git a/drivers/gpio/gpio-stmpe.c b/drivers/gpio/gpio-stmpe.c index 2776a09bee58..628b58494294 100644 --- a/drivers/gpio/gpio-stmpe.c +++ b/drivers/gpio/gpio-stmpe.c @@ -23,7 +23,8 @@ enum { REG_RE, REG_FE, REG_IE }; #define CACHE_NR_REGS 3 -#define CACHE_NR_BANKS (STMPE_NR_GPIOS / 8) +/* No variant has more than 24 GPIOs */ +#define CACHE_NR_BANKS (24 / 8) struct stmpe_gpio { struct gpio_chip chip; @@ -31,8 +32,6 @@ struct stmpe_gpio { struct device *dev; struct mutex irq_lock; struct irq_domain *domain; - - int irq_base; unsigned norequest_mask; /* Caches of interrupt control 
registers for bus_lock */ @@ -311,13 +310,8 @@ static const struct irq_domain_ops stmpe_gpio_irq_simple_ops = { static int stmpe_gpio_irq_init(struct stmpe_gpio *stmpe_gpio, struct device_node *np) { - int base = 0; - - if (!np) - base = stmpe_gpio->irq_base; - stmpe_gpio->domain = irq_domain_add_simple(np, - stmpe_gpio->chip.ngpio, base, + stmpe_gpio->chip.ngpio, 0, &stmpe_gpio_irq_simple_ops, stmpe_gpio); if (!stmpe_gpio->domain) { dev_err(stmpe_gpio->dev, "failed to create irqdomain\n"); @@ -354,7 +348,7 @@ static int stmpe_gpio_probe(struct platform_device *pdev) #ifdef CONFIG_OF stmpe_gpio->chip.of_node = np; #endif - stmpe_gpio->chip.base = pdata ? pdata->gpio_base : -1; + stmpe_gpio->chip.base = -1; if (pdata) stmpe_gpio->norequest_mask = pdata->norequest_mask; @@ -362,9 +356,7 @@ static int stmpe_gpio_probe(struct platform_device *pdev) of_property_read_u32(np, "st,norequest-mask", &stmpe_gpio->norequest_mask); - if (irq >= 0) - stmpe_gpio->irq_base = stmpe->irq_base + STMPE_INT_GPIO(0); - else + if (irq < 0) dev_info(&pdev->dev, "device configured in no-irq mode; " "irqs are not available\n"); diff --git a/drivers/gpu/drm/drm_crtc_helper.c b/drivers/gpu/drm/drm_crtc_helper.c index df281b54db01..872ba11c4533 100644 --- a/drivers/gpu/drm/drm_crtc_helper.c +++ b/drivers/gpu/drm/drm_crtc_helper.c @@ -29,6 +29,7 @@ * Jesse Barnes <jesse.barnes@intel.com> */ +#include <linux/kernel.h> #include <linux/export.h> #include <linux/moduleparam.h> @@ -88,7 +89,13 @@ bool drm_helper_encoder_in_use(struct drm_encoder *encoder) struct drm_connector *connector; struct drm_device *dev = encoder->dev; - WARN_ON(!mutex_is_locked(&dev->mode_config.mutex)); + /* + * We can expect this mutex to be locked if we are not panicking. + * Locking is currently fubar in the panic handler. + */ + if (!oops_in_progress) + WARN_ON(!mutex_is_locked(&dev->mode_config.mutex)); + list_for_each_entry(connector, &dev->mode_config.connector_list, head) if (connector->encoder == encoder) return true; @@ -112,7 +119,13 @@ bool drm_helper_crtc_in_use(struct drm_crtc *crtc) struct drm_encoder *encoder; struct drm_device *dev = crtc->dev; - WARN_ON(!mutex_is_locked(&dev->mode_config.mutex)); + /* + * We can expect this mutex to be locked if we are not panicking. + * Locking is currently fubar in the panic handler. 
+ */ + if (!oops_in_progress) + WARN_ON(!mutex_is_locked(&dev->mode_config.mutex)); + list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) if (encoder->crtc == crtc && drm_helper_encoder_in_use(encoder)) return true; diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c index c31c12b4e666..e911898348f8 100644 --- a/drivers/gpu/drm/radeon/atombios_crtc.c +++ b/drivers/gpu/drm/radeon/atombios_crtc.c @@ -270,8 +270,6 @@ void atombios_crtc_dpms(struct drm_crtc *crtc, int mode) switch (mode) { case DRM_MODE_DPMS_ON: radeon_crtc->enabled = true; - /* adjust pm to dpms changes BEFORE enabling crtcs */ - radeon_pm_compute_clocks(rdev); atombios_enable_crtc(crtc, ATOM_ENABLE); if (ASIC_IS_DCE3(rdev) && !ASIC_IS_DCE6(rdev)) atombios_enable_crtc_memreq(crtc, ATOM_ENABLE); @@ -289,10 +287,10 @@ void atombios_crtc_dpms(struct drm_crtc *crtc, int mode) atombios_enable_crtc_memreq(crtc, ATOM_DISABLE); atombios_enable_crtc(crtc, ATOM_DISABLE); radeon_crtc->enabled = false; - /* adjust pm to dpms changes AFTER disabling crtcs */ - radeon_pm_compute_clocks(rdev); break; } + /* adjust pm to dpms */ + radeon_pm_compute_clocks(rdev); } static void diff --git a/drivers/gpu/drm/radeon/radeon_asic.c b/drivers/gpu/drm/radeon/radeon_asic.c index be20e62dac83..e5f0177bea1e 100644 --- a/drivers/gpu/drm/radeon/radeon_asic.c +++ b/drivers/gpu/drm/radeon/radeon_asic.c @@ -2049,8 +2049,8 @@ static struct radeon_asic ci_asic = { .blit_ring_index = RADEON_RING_TYPE_GFX_INDEX, .dma = &cik_copy_dma, .dma_ring_index = R600_RING_TYPE_DMA_INDEX, - .copy = &cik_copy_dma, - .copy_ring_index = R600_RING_TYPE_DMA_INDEX, + .copy = &cik_copy_cpdma, + .copy_ring_index = RADEON_RING_TYPE_GFX_INDEX, }, .surface = { .set_reg = r600_set_surface_reg, diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c index 14671406212f..2cd144c378d6 100644 --- a/drivers/gpu/drm/radeon/radeon_device.c +++ b/drivers/gpu/drm/radeon/radeon_device.c @@ -1558,6 +1558,10 @@ int radeon_resume_kms(struct drm_device *dev, bool resume, bool fbcon) drm_kms_helper_poll_enable(dev); + /* set the power state here in case we are a PX system or headless */ + if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled) + radeon_pm_compute_clocks(rdev); + if (fbcon) { radeon_fbdev_set_suspend(rdev, 0); console_unlock(); diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c index 53d6e1bb48dc..2bdae61c0ac0 100644 --- a/drivers/gpu/drm/radeon/radeon_pm.c +++ b/drivers/gpu/drm/radeon/radeon_pm.c @@ -1104,7 +1104,6 @@ static void radeon_pm_resume_dpm(struct radeon_device *rdev) if (ret) goto dpm_resume_fail; rdev->pm.dpm_enabled = true; - radeon_pm_compute_clocks(rdev); return; dpm_resume_fail: diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c index 1f426696de36..c11b71d249e3 100644 --- a/drivers/gpu/drm/radeon/radeon_vm.c +++ b/drivers/gpu/drm/radeon/radeon_vm.c @@ -132,7 +132,7 @@ struct radeon_cs_reloc *radeon_vm_get_bos(struct radeon_device *rdev, struct radeon_cs_reloc *list; unsigned i, idx; - list = kmalloc_array(vm->max_pde_used + 1, + list = kmalloc_array(vm->max_pde_used + 2, sizeof(struct radeon_cs_reloc), GFP_KERNEL); if (!list) return NULL; @@ -585,7 +585,8 @@ int radeon_vm_update_page_directory(struct radeon_device *rdev, { static const uint32_t incr = RADEON_VM_PTE_COUNT * 8; - uint64_t pd_addr = radeon_bo_gpu_offset(vm->page_directory); + struct radeon_bo *pd = vm->page_directory; + uint64_t pd_addr 
= radeon_bo_gpu_offset(pd); uint64_t last_pde = ~0, last_pt = ~0; unsigned count = 0, pt_idx, ndw; struct radeon_ib ib; @@ -642,6 +643,7 @@ int radeon_vm_update_page_directory(struct radeon_device *rdev, incr, R600_PTE_VALID); if (ib.length_dw != 0) { + radeon_semaphore_sync_to(ib.semaphore, pd->tbo.sync_obj); radeon_semaphore_sync_to(ib.semaphore, vm->last_id_use); r = radeon_ib_schedule(rdev, &ib, NULL); if (r) { @@ -689,15 +691,18 @@ static void radeon_vm_update_ptes(struct radeon_device *rdev, /* walk over the address space and update the page tables */ for (addr = start; addr < end; ) { uint64_t pt_idx = addr >> RADEON_VM_BLOCK_SIZE; + struct radeon_bo *pt = vm->page_tables[pt_idx].bo; unsigned nptes; uint64_t pte; + radeon_semaphore_sync_to(ib->semaphore, pt->tbo.sync_obj); + if ((addr & ~mask) == (end & ~mask)) nptes = end - addr; else nptes = RADEON_VM_PTE_COUNT - (addr & mask); - pte = radeon_bo_gpu_offset(vm->page_tables[pt_idx].bo); + pte = radeon_bo_gpu_offset(pt); pte += (addr & mask) * 8; if ((last_pte + 8 * count) != pte) { diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig index c94db1c5e353..620d1004a1e7 100644 --- a/drivers/i2c/busses/Kconfig +++ b/drivers/i2c/busses/Kconfig @@ -449,7 +449,7 @@ config I2C_EFM32 config I2C_EG20T tristate "Intel EG20T PCH/LAPIS Semicon IOH(ML7213/ML7223/ML7831) I2C" - depends on PCI + depends on PCI && (X86_32 || COMPILE_TEST) help This driver is for PCH(Platform controller Hub) I2C of EG20T which is an IOH(Input/Output Hub) for x86 embedded processor. @@ -570,13 +570,6 @@ config I2C_NOMADIK I2C interface from ST-Ericsson's Nomadik and Ux500 architectures, as well as the STA2X11 PCIe I/O HUB. -config I2C_NUC900 - tristate "NUC900 I2C Driver" - depends on ARCH_W90X900 - help - Say Y here to include support for I2C controller in the - Winbond/Nuvoton NUC900 based System-on-Chip devices. - config I2C_OCORES tristate "OpenCores I2C Controller" help @@ -993,6 +986,15 @@ config I2C_SIBYTE help Supports the SiByte SOC on-chip I2C interfaces (2 channels). +config I2C_CROS_EC_TUNNEL + tristate "ChromeOS EC tunnel I2C bus" + depends on MFD_CROS_EC + help + If you say yes here you get an I2C bus that will tunnel i2c commands + through to the other side of the ChromeOS EC to the i2c bus + connected there. This will work whatever the interface used to + talk to the EC (SPI, I2C or LPC). 
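Since the tunnel registers as an ordinary Linux I2C adapter, client drivers on the far side of the EC use the usual i2c core calls and never see anything EC-specific. A minimal sketch of such a transfer follows; the slave address 0x1e, the register 0x02 and the surrounding driver context are invented for illustration, and <linux/i2c.h> is assumed to be included.

static int example_read_reg(struct i2c_adapter *adap, u8 *val)
{
        u8 reg = 0x02;
        struct i2c_msg msgs[] = {
                /* write the register number, then read one byte back */
                { .addr = 0x1e, .flags = 0,        .len = 1, .buf = &reg },
                { .addr = 0x1e, .flags = I2C_M_RD, .len = 1, .buf = val  },
        };
        int ret = i2c_transfer(adap, msgs, ARRAY_SIZE(msgs));

        if (ret < 0)
                return ret;
        return ret == ARRAY_SIZE(msgs) ? 0 : -EIO;
}

A write-then-read pair like this is exactly what ec_i2c_xfer() in the new i2c-cros-ec-tunnel.c driver below packs into a single EC_CMD_I2C_PASSTHRU command and unpacks again when the EC answers.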
+ config SCx200_I2C tristate "NatSemi SCx200 I2C using GPIO pins (DEPRECATED)" depends on SCx200_GPIO diff --git a/drivers/i2c/busses/Makefile b/drivers/i2c/busses/Makefile index 18d18ff9db93..298692cc6000 100644 --- a/drivers/i2c/busses/Makefile +++ b/drivers/i2c/busses/Makefile @@ -55,7 +55,6 @@ obj-$(CONFIG_I2C_MPC) += i2c-mpc.o obj-$(CONFIG_I2C_MV64XXX) += i2c-mv64xxx.o obj-$(CONFIG_I2C_MXS) += i2c-mxs.o obj-$(CONFIG_I2C_NOMADIK) += i2c-nomadik.o -obj-$(CONFIG_I2C_NUC900) += i2c-nuc900.o obj-$(CONFIG_I2C_OCORES) += i2c-ocores.o obj-$(CONFIG_I2C_OMAP) += i2c-omap.o obj-$(CONFIG_I2C_PASEMI) += i2c-pasemi.o @@ -95,6 +94,7 @@ obj-$(CONFIG_I2C_VIPERBOARD) += i2c-viperboard.o # Other I2C/SMBus bus drivers obj-$(CONFIG_I2C_ACORN) += i2c-acorn.o obj-$(CONFIG_I2C_BCM_KONA) += i2c-bcm-kona.o +obj-$(CONFIG_I2C_CROS_EC_TUNNEL) += i2c-cros-ec-tunnel.o obj-$(CONFIG_I2C_ELEKTOR) += i2c-elektor.o obj-$(CONFIG_I2C_PCA_ISA) += i2c-pca-isa.o obj-$(CONFIG_I2C_SIBYTE) += i2c-sibyte.o diff --git a/drivers/i2c/busses/i2c-ali1563.c b/drivers/i2c/busses/i2c-ali1563.c index 98a1c97739ba..15517d78d5ff 100644 --- a/drivers/i2c/busses/i2c-ali1563.c +++ b/drivers/i2c/busses/i2c-ali1563.c @@ -63,7 +63,7 @@ static struct pci_driver ali1563_pci_driver; static unsigned short ali1563_smba; -static int ali1563_transaction(struct i2c_adapter * a, int size) +static int ali1563_transaction(struct i2c_adapter *a, int size) { u32 data; int timeout; @@ -78,7 +78,7 @@ static int ali1563_transaction(struct i2c_adapter * a, int size) data = inb_p(SMB_HST_STS); if (data & HST_STS_BAD) { dev_err(&a->dev, "ali1563: Trying to reset busy device\n"); - outb_p(data | HST_STS_BAD,SMB_HST_STS); + outb_p(data | HST_STS_BAD, SMB_HST_STS); data = inb_p(SMB_HST_STS); if (data & HST_STS_BAD) return -EBUSY; @@ -102,10 +102,10 @@ static int ali1563_transaction(struct i2c_adapter * a, int size) if (!timeout) { dev_err(&a->dev, "Timeout - Trying to KILL transaction!\n"); /* Issue 'kill' to host controller */ - outb_p(HST_CNTL2_KILL,SMB_HST_CNTL2); + outb_p(HST_CNTL2_KILL, SMB_HST_CNTL2); data = inb_p(SMB_HST_STS); status = -ETIMEDOUT; - } + } /* device error - no response, ignore the autodetection case */ if (data & HST_STS_DEVERR) { @@ -117,18 +117,18 @@ static int ali1563_transaction(struct i2c_adapter * a, int size) if (data & HST_STS_BUSERR) { dev_err(&a->dev, "Bus collision!\n"); /* Issue timeout, hoping it helps */ - outb_p(HST_CNTL1_TIMEOUT,SMB_HST_CNTL1); + outb_p(HST_CNTL1_TIMEOUT, SMB_HST_CNTL1); } if (data & HST_STS_FAIL) { dev_err(&a->dev, "Cleaning fail after KILL!\n"); - outb_p(0x0,SMB_HST_CNTL2); + outb_p(0x0, SMB_HST_CNTL2); } return status; } -static int ali1563_block_start(struct i2c_adapter * a) +static int ali1563_block_start(struct i2c_adapter *a) { u32 data; int timeout; @@ -142,8 +142,8 @@ static int ali1563_block_start(struct i2c_adapter * a) data = inb_p(SMB_HST_STS); if (data & HST_STS_BAD) { - dev_warn(&a->dev,"ali1563: Trying to reset busy device\n"); - outb_p(data | HST_STS_BAD,SMB_HST_STS); + dev_warn(&a->dev, "ali1563: Trying to reset busy device\n"); + outb_p(data | HST_STS_BAD, SMB_HST_STS); data = inb_p(SMB_HST_STS); if (data & HST_STS_BAD) return -EBUSY; @@ -184,13 +184,14 @@ static int ali1563_block_start(struct i2c_adapter * a) return status; } -static int ali1563_block(struct i2c_adapter * a, union i2c_smbus_data * data, u8 rw) +static int ali1563_block(struct i2c_adapter *a, + union i2c_smbus_data *data, u8 rw) { int i, len; int error = 0; /* Do we need this? 
*/ - outb_p(HST_CNTL1_LAST,SMB_HST_CNTL1); + outb_p(HST_CNTL1_LAST, SMB_HST_CNTL1); if (rw == I2C_SMBUS_WRITE) { len = data->block[0]; @@ -198,8 +199,8 @@ static int ali1563_block(struct i2c_adapter * a, union i2c_smbus_data * data, u8 len = 1; else if (len > 32) len = 32; - outb_p(len,SMB_HST_DAT0); - outb_p(data->block[1],SMB_BLK_DAT); + outb_p(len, SMB_HST_DAT0); + outb_p(data->block[1], SMB_BLK_DAT); } else len = 32; @@ -208,10 +209,12 @@ static int ali1563_block(struct i2c_adapter * a, union i2c_smbus_data * data, u8 for (i = 0; i < len; i++) { if (rw == I2C_SMBUS_WRITE) { outb_p(data->block[i + 1], SMB_BLK_DAT); - if ((error = ali1563_block_start(a))) + error = ali1563_block_start(a); + if (error) break; } else { - if ((error = ali1563_block_start(a))) + error = ali1563_block_start(a); + if (error) break; if (i == 0) { len = inb_p(SMB_HST_DAT0); @@ -224,25 +227,26 @@ static int ali1563_block(struct i2c_adapter * a, union i2c_smbus_data * data, u8 } } /* Do we need this? */ - outb_p(HST_CNTL1_LAST,SMB_HST_CNTL1); + outb_p(HST_CNTL1_LAST, SMB_HST_CNTL1); return error; } -static s32 ali1563_access(struct i2c_adapter * a, u16 addr, +static s32 ali1563_access(struct i2c_adapter *a, u16 addr, unsigned short flags, char rw, u8 cmd, - int size, union i2c_smbus_data * data) + int size, union i2c_smbus_data *data) { int error = 0; int timeout; u32 reg; for (timeout = ALI1563_MAX_TIMEOUT; timeout; timeout--) { - if (!(reg = inb_p(SMB_HST_STS) & HST_STS_BUSY)) + reg = inb_p(SMB_HST_STS); + if (!(reg & HST_STS_BUSY)) break; } if (!timeout) - dev_warn(&a->dev,"SMBus not idle. HST_STS = %02x\n",reg); - outb_p(0xff,SMB_HST_STS); + dev_warn(&a->dev, "SMBus not idle. HST_STS = %02x\n", reg); + outb_p(0xff, SMB_HST_STS); /* Map the size to what the chip understands */ switch (size) { @@ -268,13 +272,14 @@ static s32 ali1563_access(struct i2c_adapter * a, u16 addr, } outb_p(((addr & 0x7f) << 1) | (rw & 0x01), SMB_HST_ADD); - outb_p((inb_p(SMB_HST_CNTL2) & ~HST_CNTL2_SIZEMASK) | (size << 3), SMB_HST_CNTL2); + outb_p((inb_p(SMB_HST_CNTL2) & ~HST_CNTL2_SIZEMASK) | + (size << 3), SMB_HST_CNTL2); /* Write the command register */ - switch(size) { + switch (size) { case HST_CNTL2_BYTE: - if (rw== I2C_SMBUS_WRITE) + if (rw == I2C_SMBUS_WRITE) /* Beware it uses DAT0 register and not CMD! */ outb_p(cmd, SMB_HST_DAT0); break; @@ -292,11 +297,12 @@ static s32 ali1563_access(struct i2c_adapter * a, u16 addr, break; case HST_CNTL2_BLOCK: outb_p(cmd, SMB_HST_CMD); - error = ali1563_block(a,data,rw); + error = ali1563_block(a, data, rw); goto Done; } - if ((error = ali1563_transaction(a, size))) + error = ali1563_transaction(a, size); + if (error) goto Done; if ((rw == I2C_SMBUS_WRITE) || (size == HST_CNTL2_QUICK)) @@ -317,7 +323,7 @@ Done: return error; } -static u32 ali1563_func(struct i2c_adapter * a) +static u32 ali1563_func(struct i2c_adapter *a) { return I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE | I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA | @@ -329,13 +335,13 @@ static int ali1563_setup(struct pci_dev *dev) { u16 ctrl; - pci_read_config_word(dev,ALI1563_SMBBA,&ctrl); + pci_read_config_word(dev, ALI1563_SMBBA, &ctrl); /* SMB I/O Base in high 12 bits and must be aligned with the * size of the I/O space. 
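A quick worked instance of the decode that follows, with an invented config word and ALI1563_SMB_IOSIZE assumed to be 16 (the mask only works because the I/O size is a power of two):

        /* hypothetical readback: ctrl == 0xe40d, ALI1563_SMB_IOSIZE == 16 */
        ali1563_smba = 0xe40d & ~(16 - 1);      /* 0xe40d & 0xfff0 == 0xe400 */

i.e. the low four bits are dropped and the remaining high 12 bits give an I/O base aligned to the 16-byte register window.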
*/ ali1563_smba = ctrl & ~(ALI1563_SMB_IOSIZE - 1); if (!ali1563_smba) { - dev_warn(&dev->dev,"ali1563_smba Uninitialized\n"); + dev_warn(&dev->dev, "ali1563_smba Uninitialized\n"); goto Err; } @@ -350,8 +356,8 @@ static int ali1563_setup(struct pci_dev *dev) ctrl | ALI1563_SMB_IOEN); pci_read_config_word(dev, ALI1563_SMBBA, &ctrl); if (!(ctrl & ALI1563_SMB_IOEN)) { - dev_err(&dev->dev, "I/O space still not enabled, " - "giving up\n"); + dev_err(&dev->dev, + "I/O space still not enabled, giving up\n"); goto Err; } } @@ -375,7 +381,7 @@ Err: static void ali1563_shutdown(struct pci_dev *dev) { - release_region(ali1563_smba,ALI1563_SMB_IOSIZE); + release_region(ali1563_smba, ALI1563_SMB_IOSIZE); } static const struct i2c_algorithm ali1563_algorithm = { @@ -394,12 +400,14 @@ static int ali1563_probe(struct pci_dev *dev, { int error; - if ((error = ali1563_setup(dev))) + error = ali1563_setup(dev); + if (error) goto exit; ali1563_adapter.dev.parent = &dev->dev; snprintf(ali1563_adapter.name, sizeof(ali1563_adapter.name), "SMBus ALi 1563 Adapter @ %04x", ali1563_smba); - if ((error = i2c_add_adapter(&ali1563_adapter))) + error = i2c_add_adapter(&ali1563_adapter); + if (error) goto exit_shutdown; return 0; @@ -421,12 +429,12 @@ static const struct pci_device_id ali1563_id_table[] = { {}, }; -MODULE_DEVICE_TABLE (pci, ali1563_id_table); +MODULE_DEVICE_TABLE(pci, ali1563_id_table); static struct pci_driver ali1563_pci_driver = { - .name = "ali1563_smbus", + .name = "ali1563_smbus", .id_table = ali1563_id_table, - .probe = ali1563_probe, + .probe = ali1563_probe, .remove = ali1563_remove, }; diff --git a/drivers/i2c/busses/i2c-bcm2835.c b/drivers/i2c/busses/i2c-bcm2835.c index c60719577fc3..214ff9700efe 100644 --- a/drivers/i2c/busses/i2c-bcm2835.c +++ b/drivers/i2c/busses/i2c-bcm2835.c @@ -225,10 +225,8 @@ static int bcm2835_i2c_probe(struct platform_device *pdev) struct i2c_adapter *adap; i2c_dev = devm_kzalloc(&pdev->dev, sizeof(*i2c_dev), GFP_KERNEL); - if (!i2c_dev) { - dev_err(&pdev->dev, "Cannot allocate i2c_dev\n"); + if (!i2c_dev) return -ENOMEM; - } platform_set_drvdata(pdev, i2c_dev); i2c_dev->dev = &pdev->dev; init_completion(&i2c_dev->completion); diff --git a/drivers/i2c/busses/i2c-bfin-twi.c b/drivers/i2c/busses/i2c-bfin-twi.c index e6d5162b6379..3e271e7558d3 100644 --- a/drivers/i2c/busses/i2c-bfin-twi.c +++ b/drivers/i2c/busses/i2c-bfin-twi.c @@ -620,35 +620,27 @@ static int i2c_bfin_twi_probe(struct platform_device *pdev) int rc; unsigned int clkhilow; - iface = kzalloc(sizeof(struct bfin_twi_iface), GFP_KERNEL); + iface = devm_kzalloc(&pdev->dev, sizeof(struct bfin_twi_iface), + GFP_KERNEL); if (!iface) { dev_err(&pdev->dev, "Cannot allocate memory\n"); - rc = -ENOMEM; - goto out_error_nomem; + return -ENOMEM; } spin_lock_init(&(iface->lock)); /* Find and map our resources */ res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - if (res == NULL) { - dev_err(&pdev->dev, "Cannot get IORESOURCE_MEM\n"); - rc = -ENOENT; - goto out_error_get_res; - } - - iface->regs_base = ioremap(res->start, resource_size(res)); - if (iface->regs_base == NULL) { + iface->regs_base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(iface->regs_base)) { dev_err(&pdev->dev, "Cannot map IO\n"); - rc = -ENXIO; - goto out_error_ioremap; + return PTR_ERR(iface->regs_base); } iface->irq = platform_get_irq(pdev, 0); if (iface->irq < 0) { dev_err(&pdev->dev, "No IRQ specified\n"); - rc = -ENOENT; - goto out_error_no_irq; + return -ENOENT; } p_adap = &iface->adap; @@ -666,15 +658,15 @@ static int 
i2c_bfin_twi_probe(struct platform_device *pdev) "i2c-bfin-twi"); if (rc) { dev_err(&pdev->dev, "Can't setup pin mux!\n"); - goto out_error_pin_mux; + return -EBUSY; } - rc = request_irq(iface->irq, bfin_twi_interrupt_entry, + rc = devm_request_irq(&pdev->dev, iface->irq, bfin_twi_interrupt_entry, 0, pdev->name, iface); if (rc) { dev_err(&pdev->dev, "Can't get IRQ %d !\n", iface->irq); rc = -ENODEV; - goto out_error_req_irq; + goto out_error; } /* Set TWI internal clock as 10MHz */ @@ -695,7 +687,7 @@ static int i2c_bfin_twi_probe(struct platform_device *pdev) rc = i2c_add_numbered_adapter(p_adap); if (rc < 0) { dev_err(&pdev->dev, "Can't add i2c adapter!\n"); - goto out_error_add_adapter; + goto out_error; } platform_set_drvdata(pdev, iface); @@ -705,17 +697,8 @@ static int i2c_bfin_twi_probe(struct platform_device *pdev) return 0; -out_error_add_adapter: - free_irq(iface->irq, iface); -out_error_req_irq: -out_error_no_irq: +out_error: peripheral_free_list(dev_get_platdata(&pdev->dev)); -out_error_pin_mux: - iounmap(iface->regs_base); -out_error_ioremap: -out_error_get_res: - kfree(iface); -out_error_nomem: return rc; } @@ -724,10 +707,7 @@ static int i2c_bfin_twi_remove(struct platform_device *pdev) struct bfin_twi_iface *iface = platform_get_drvdata(pdev); i2c_del_adapter(&(iface->adap)); - free_irq(iface->irq, iface); peripheral_free_list(dev_get_platdata(&pdev->dev)); - iounmap(iface->regs_base); - kfree(iface); return 0; } diff --git a/drivers/i2c/busses/i2c-cros-ec-tunnel.c b/drivers/i2c/busses/i2c-cros-ec-tunnel.c new file mode 100644 index 000000000000..8e7a71487bb1 --- /dev/null +++ b/drivers/i2c/busses/i2c-cros-ec-tunnel.c @@ -0,0 +1,318 @@ +/* + * Copyright (C) 2013 Google, Inc + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * Expose an I2C passthrough to the ChromeOS EC. + */ + +#include <linux/module.h> +#include <linux/i2c.h> +#include <linux/mfd/cros_ec.h> +#include <linux/mfd/cros_ec_commands.h> +#include <linux/platform_device.h> +#include <linux/slab.h> + +/** + * struct ec_i2c_device - Driver data for I2C tunnel + * + * @dev: Device node + * @adap: I2C adapter + * @ec: Pointer to EC device + * @remote_bus: The EC bus number we tunnel to on the other side. + * @request_buf: Buffer for transmitting data; we expect most transfers to fit. + * @response_buf: Buffer for receiving data; we expect most transfers to fit. + */ + +struct ec_i2c_device { + struct device *dev; + struct i2c_adapter adap; + struct cros_ec_device *ec; + + u16 remote_bus; + + u8 request_buf[256]; + u8 response_buf[256]; +}; + +/** + * ec_i2c_count_message - Count bytes needed for ec_i2c_construct_message + * + * @i2c_msgs: The i2c messages to read + * @num: The number of i2c messages. + * + * Returns the number of bytes the messages will take up. + */ +static int ec_i2c_count_message(const struct i2c_msg i2c_msgs[], int num) +{ + int i; + int size; + + size = sizeof(struct ec_params_i2c_passthru); + size += num * sizeof(struct ec_params_i2c_passthru_msg); + for (i = 0; i < num; i++) + if (!(i2c_msgs[i].flags & I2C_M_RD)) + size += i2c_msgs[i].len; + + return size; +} + +/** + * ec_i2c_construct_message - construct a message to go to the EC + * + * This function effectively stuffs the standard i2c_msg format of Linux into + * a format that the EC understands. 
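The kerneldoc describes the repacking only in prose; for one 2-byte write followed by one 4-byte read (lengths invented, struct sizes left symbolic since they come from cros_ec_commands.h) the request buffer built here, and sized by ec_i2c_count_message() above, looks like:

        /*
         * +--------------------------------+
         * | struct ec_params_i2c_passthru  |  port, num_msgs = 2
         * +--------------------------------+
         * | ec_params_i2c_passthru_msg[0]  |  addr_flags, len = 2            (write)
         * | ec_params_i2c_passthru_msg[1]  |  addr_flags | EC_I2C_FLAG_READ, len = 4
         * +--------------------------------+
         * | 2 bytes of write payload       |  read messages add no payload here
         * +--------------------------------+
         *
         * total = sizeof(struct ec_params_i2c_passthru)
         *         + 2 * sizeof(struct ec_params_i2c_passthru_msg) + 2
         */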
+ * + * @buf: The buffer to fill. We assume that the buffer is big enough. + * @i2c_msgs: The i2c messages to read. + * @num: The number of i2c messages. + * @bus_num: The remote bus number we want to talk to. + * + * Returns 0 or a negative error number. + */ +static int ec_i2c_construct_message(u8 *buf, const struct i2c_msg i2c_msgs[], + int num, u16 bus_num) +{ + struct ec_params_i2c_passthru *params; + u8 *out_data; + int i; + + out_data = buf + sizeof(struct ec_params_i2c_passthru) + + num * sizeof(struct ec_params_i2c_passthru_msg); + + params = (struct ec_params_i2c_passthru *)buf; + params->port = bus_num; + params->num_msgs = num; + for (i = 0; i < num; i++) { + const struct i2c_msg *i2c_msg = &i2c_msgs[i]; + struct ec_params_i2c_passthru_msg *msg = ¶ms->msg[i]; + + msg->len = i2c_msg->len; + msg->addr_flags = i2c_msg->addr; + + if (i2c_msg->flags & I2C_M_TEN) + msg->addr_flags |= EC_I2C_FLAG_10BIT; + + if (i2c_msg->flags & I2C_M_RD) { + msg->addr_flags |= EC_I2C_FLAG_READ; + } else { + memcpy(out_data, i2c_msg->buf, msg->len); + out_data += msg->len; + } + } + + return 0; +} + +/** + * ec_i2c_count_response - Count bytes needed for ec_i2c_parse_response + * + * @i2c_msgs: The i2c messages to to fill up. + * @num: The number of i2c messages expected. + * + * Returns the number of response bytes expeced. + */ +static int ec_i2c_count_response(struct i2c_msg i2c_msgs[], int num) +{ + int size; + int i; + + size = sizeof(struct ec_response_i2c_passthru); + for (i = 0; i < num; i++) + if (i2c_msgs[i].flags & I2C_M_RD) + size += i2c_msgs[i].len; + + return size; +} + +/** + * ec_i2c_parse_response - Parse a response from the EC + * + * We'll take the EC's response and copy it back into msgs. + * + * @buf: The buffer to parse. + * @i2c_msgs: The i2c messages to to fill up. + * @num: The number of i2c messages; will be modified to include the actual + * number received. + * + * Returns 0 or a negative error number. 
+ */ +static int ec_i2c_parse_response(const u8 *buf, struct i2c_msg i2c_msgs[], + int *num) +{ + const struct ec_response_i2c_passthru *resp; + const u8 *in_data; + int i; + + in_data = buf + sizeof(struct ec_response_i2c_passthru); + + resp = (const struct ec_response_i2c_passthru *)buf; + if (resp->i2c_status & EC_I2C_STATUS_TIMEOUT) + return -ETIMEDOUT; + else if (resp->i2c_status & EC_I2C_STATUS_ERROR) + return -EREMOTEIO; + + /* Other side could send us back fewer messages, but not more */ + if (resp->num_msgs > *num) + return -EPROTO; + *num = resp->num_msgs; + + for (i = 0; i < *num; i++) { + struct i2c_msg *i2c_msg = &i2c_msgs[i]; + + if (i2c_msgs[i].flags & I2C_M_RD) { + memcpy(i2c_msg->buf, in_data, i2c_msg->len); + in_data += i2c_msg->len; + } + } + + return 0; +} + +static int ec_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg i2c_msgs[], + int num) +{ + struct ec_i2c_device *bus = adap->algo_data; + struct device *dev = bus->dev; + const u16 bus_num = bus->remote_bus; + int request_len; + int response_len; + u8 *request = NULL; + u8 *response = NULL; + int result; + + request_len = ec_i2c_count_message(i2c_msgs, num); + if (request_len < 0) { + dev_warn(dev, "Error constructing message %d\n", request_len); + result = request_len; + goto exit; + } + response_len = ec_i2c_count_response(i2c_msgs, num); + if (response_len < 0) { + /* Unexpected; no errors should come when NULL response */ + dev_warn(dev, "Error preparing response %d\n", response_len); + result = response_len; + goto exit; + } + + if (request_len <= ARRAY_SIZE(bus->request_buf)) { + request = bus->request_buf; + } else { + request = kzalloc(request_len, GFP_KERNEL); + if (request == NULL) { + result = -ENOMEM; + goto exit; + } + } + if (response_len <= ARRAY_SIZE(bus->response_buf)) { + response = bus->response_buf; + } else { + response = kzalloc(response_len, GFP_KERNEL); + if (response == NULL) { + result = -ENOMEM; + goto exit; + } + } + + ec_i2c_construct_message(request, i2c_msgs, num, bus_num); + result = bus->ec->command_sendrecv(bus->ec, EC_CMD_I2C_PASSTHRU, + request, request_len, + response, response_len); + if (result) + goto exit; + + result = ec_i2c_parse_response(response, i2c_msgs, &num); + if (result < 0) + goto exit; + + /* Indicate success by saying how many messages were sent */ + result = num; +exit: + if (request != bus->request_buf) + kfree(request); + if (response != bus->response_buf) + kfree(response); + + return result; +} + +static u32 ec_i2c_functionality(struct i2c_adapter *adap) +{ + return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL; +} + +static const struct i2c_algorithm ec_i2c_algorithm = { + .master_xfer = ec_i2c_xfer, + .functionality = ec_i2c_functionality, +}; + +static int ec_i2c_probe(struct platform_device *pdev) +{ + struct device_node *np = pdev->dev.of_node; + struct cros_ec_device *ec = dev_get_drvdata(pdev->dev.parent); + struct device *dev = &pdev->dev; + struct ec_i2c_device *bus = NULL; + u32 remote_bus; + int err; + + if (!ec->command_sendrecv) { + dev_err(dev, "Missing sendrecv\n"); + return -EINVAL; + } + + bus = devm_kzalloc(dev, sizeof(*bus), GFP_KERNEL); + if (bus == NULL) + return -ENOMEM; + + err = of_property_read_u32(np, "google,remote-bus", &remote_bus); + if (err) { + dev_err(dev, "Couldn't read remote-bus property\n"); + return err; + } + bus->remote_bus = remote_bus; + + bus->ec = ec; + bus->dev = dev; + + bus->adap.owner = THIS_MODULE; + strlcpy(bus->adap.name, "cros-ec-i2c-tunnel", sizeof(bus->adap.name)); + bus->adap.algo = &ec_i2c_algorithm; + 
bus->adap.algo_data = bus; + bus->adap.dev.parent = &pdev->dev; + bus->adap.dev.of_node = np; + + err = i2c_add_adapter(&bus->adap); + if (err) { + dev_err(dev, "cannot register i2c adapter\n"); + return err; + } + platform_set_drvdata(pdev, bus); + + return err; +} + +static int ec_i2c_remove(struct platform_device *dev) +{ + struct ec_i2c_device *bus = platform_get_drvdata(dev); + + i2c_del_adapter(&bus->adap); + + return 0; +} + +static struct platform_driver ec_i2c_tunnel_driver = { + .probe = ec_i2c_probe, + .remove = ec_i2c_remove, + .driver = { + .name = "cros-ec-i2c-tunnel", + }, +}; + +module_platform_driver(ec_i2c_tunnel_driver); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("EC I2C tunnel driver"); +MODULE_ALIAS("platform:cros-ec-i2c-tunnel"); diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c index 85056c22d21e..3356f7ab9f79 100644 --- a/drivers/i2c/busses/i2c-designware-pcidrv.c +++ b/drivers/i2c/busses/i2c-designware-pcidrv.c @@ -56,6 +56,7 @@ enum dw_pci_ctl_id_t { medfield_5, baytrail, + haswell, }; struct dw_scl_sda_cfg { @@ -95,6 +96,15 @@ static struct dw_scl_sda_cfg byt_config = { .sda_hold = 0x6, }; +/* Haswell HCNT/LCNT/SDA hold time */ +static struct dw_scl_sda_cfg hsw_config = { + .ss_hcnt = 0x01b0, + .fs_hcnt = 0x48, + .ss_lcnt = 0x01fb, + .fs_lcnt = 0xa0, + .sda_hold = 0x9, +}; + static struct dw_pci_controller dw_pci_controllers[] = { [moorestown_0] = { .bus_num = 0, @@ -168,6 +178,15 @@ static struct dw_pci_controller dw_pci_controllers[] = { .functionality = I2C_FUNC_10BIT_ADDR, .scl_sda_cfg = &byt_config, }, + [haswell] = { + .bus_num = -1, + .bus_cfg = INTEL_MID_STD_CFG | DW_IC_CON_SPEED_FAST, + .tx_fifo_depth = 32, + .rx_fifo_depth = 32, + .clk_khz = 100000, + .functionality = I2C_FUNC_10BIT_ADDR, + .scl_sda_cfg = &hsw_config, + }, }; static struct i2c_algorithm i2c_dw_algo = { .master_xfer = i2c_dw_xfer, @@ -328,6 +347,9 @@ static const struct pci_device_id i2_designware_pci_ids[] = { { PCI_VDEVICE(INTEL, 0x0F45), baytrail }, { PCI_VDEVICE(INTEL, 0x0F46), baytrail }, { PCI_VDEVICE(INTEL, 0x0F47), baytrail }, + /* Haswell */ + { PCI_VDEVICE(INTEL, 0x9c61), haswell }, + { PCI_VDEVICE(INTEL, 0x9c62), haswell }, { 0,} }; MODULE_DEVICE_TABLE(pci, i2_designware_pci_ids); diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c index 9c7802614342..402ec3970fed 100644 --- a/drivers/i2c/busses/i2c-designware-platdrv.c +++ b/drivers/i2c/busses/i2c-designware-platdrv.c @@ -247,12 +247,13 @@ static const struct of_device_id dw_i2c_of_match[] = { MODULE_DEVICE_TABLE(of, dw_i2c_of_match); #endif -#ifdef CONFIG_PM_SLEEP +#ifdef CONFIG_PM static int dw_i2c_suspend(struct device *dev) { struct platform_device *pdev = to_platform_device(dev); struct dw_i2c_dev *i_dev = platform_get_drvdata(pdev); + i2c_dw_disable(i_dev); clk_disable_unprepare(i_dev->clk); return 0; @@ -268,13 +269,11 @@ static int dw_i2c_resume(struct device *dev) return 0; } - -static SIMPLE_DEV_PM_OPS(dw_i2c_dev_pm_ops, dw_i2c_suspend, dw_i2c_resume); -#define DW_I2C_DEV_PM_OPS (&dw_i2c_dev_pm_ops) -#else -#define DW_I2C_DEV_PM_OPS NULL #endif +static UNIVERSAL_DEV_PM_OPS(dw_i2c_dev_pm_ops, dw_i2c_suspend, + dw_i2c_resume, NULL); + /* work with hotplug and coldplug */ MODULE_ALIAS("platform:i2c_designware"); @@ -286,7 +285,7 @@ static struct platform_driver dw_i2c_driver = { .owner = THIS_MODULE, .of_match_table = of_match_ptr(dw_i2c_of_match), .acpi_match_table = ACPI_PTR(dw_i2c_acpi_match), - .pm = 
DW_I2C_DEV_PM_OPS, + .pm = &dw_i2c_dev_pm_ops, }, }; diff --git a/drivers/i2c/busses/i2c-diolan-u2c.c b/drivers/i2c/busses/i2c-diolan-u2c.c index 721f7ebf9a3b..b19a310bf9b3 100644 --- a/drivers/i2c/busses/i2c-diolan-u2c.c +++ b/drivers/i2c/busses/i2c-diolan-u2c.c @@ -455,7 +455,6 @@ static int diolan_u2c_probe(struct usb_interface *interface, /* allocate memory for our device state and initialize it */ dev = kzalloc(sizeof(*dev), GFP_KERNEL); if (dev == NULL) { - dev_err(&interface->dev, "no memory for device state\n"); ret = -ENOMEM; goto error; } diff --git a/drivers/i2c/busses/i2c-efm32.c b/drivers/i2c/busses/i2c-efm32.c index 777ed409a24a..f7eccd682de9 100644 --- a/drivers/i2c/busses/i2c-efm32.c +++ b/drivers/i2c/busses/i2c-efm32.c @@ -320,10 +320,8 @@ static int efm32_i2c_probe(struct platform_device *pdev) return -EINVAL; ddata = devm_kzalloc(&pdev->dev, sizeof(*ddata), GFP_KERNEL); - if (!ddata) { - dev_dbg(&pdev->dev, "failed to allocate private data\n"); + if (!ddata) return -ENOMEM; - } platform_set_drvdata(pdev, ddata); init_completion(&ddata->done); diff --git a/drivers/i2c/busses/i2c-eg20t.c b/drivers/i2c/busses/i2c-eg20t.c index ff775ac29e49..a44ea13d1434 100644 --- a/drivers/i2c/busses/i2c-eg20t.c +++ b/drivers/i2c/busses/i2c-eg20t.c @@ -751,10 +751,8 @@ static int pch_i2c_probe(struct pci_dev *pdev, pch_pci_dbg(pdev, "Entered.\n"); adap_info = kzalloc((sizeof(struct adapter_info)), GFP_KERNEL); - if (adap_info == NULL) { - pch_pci_err(pdev, "Memory allocation FAILED\n"); + if (adap_info == NULL) return -ENOMEM; - } ret = pci_enable_device(pdev); if (ret) { diff --git a/drivers/i2c/busses/i2c-exynos5.c b/drivers/i2c/busses/i2c-exynos5.c index 00af0a0a3361..63d229202854 100644 --- a/drivers/i2c/busses/i2c-exynos5.c +++ b/drivers/i2c/busses/i2c-exynos5.c @@ -76,12 +76,6 @@ #define HSI2C_RXFIFO_TRIGGER_LEVEL(x) ((x) << 4) #define HSI2C_TXFIFO_TRIGGER_LEVEL(x) ((x) << 16) -/* As per user manual FIFO max depth is 64bytes */ -#define HSI2C_FIFO_MAX 0x40 -/* default trigger levels for Tx and Rx FIFOs */ -#define HSI2C_DEF_TXFIFO_LVL (HSI2C_FIFO_MAX - 0x30) -#define HSI2C_DEF_RXFIFO_LVL (HSI2C_FIFO_MAX - 0x10) - /* I2C_TRAILING_CTL Register bits */ #define HSI2C_TRAILING_COUNT (0xf) @@ -183,14 +177,54 @@ struct exynos5_i2c { * 2. Fast speed upto 1Mbps */ int speed_mode; + + /* Version of HS-I2C Hardware */ + struct exynos_hsi2c_variant *variant; +}; + +/** + * struct exynos_hsi2c_variant - platform specific HSI2C driver data + * @fifo_depth: the fifo depth supported by the HSI2C module + * + * Specifies platform specific configuration of HSI2C module. + * Note: A structure for driver specific platform data is used for future + * expansion of its usage. 
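The FIFO depth is the only per-variant datum so far; the exynos5_i2c_message_start() changes further down in this patch derive the FIFO trigger levels from it. A few worked numbers, with the transfer lengths invented for illustration:

        /*
         * exynos5250 (fifo_depth = 64), 100-byte read : RX trigger = 64 * 3 / 4 = 48
         * exynos5250 (fifo_depth = 64), 100-byte write: TX trigger = 64 * 1 / 4 = 16
         * exynos5260 (fifo_depth = 16),  10-byte read : RX trigger = 10 (whole message fits)
         */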
+ */ +struct exynos_hsi2c_variant { + unsigned int fifo_depth; +}; + +static const struct exynos_hsi2c_variant exynos5250_hsi2c_data = { + .fifo_depth = 64, +}; + +static const struct exynos_hsi2c_variant exynos5260_hsi2c_data = { + .fifo_depth = 16, }; static const struct of_device_id exynos5_i2c_match[] = { - { .compatible = "samsung,exynos5-hsi2c" }, - {}, + { + .compatible = "samsung,exynos5-hsi2c", + .data = &exynos5250_hsi2c_data + }, { + .compatible = "samsung,exynos5250-hsi2c", + .data = &exynos5250_hsi2c_data + }, { + .compatible = "samsung,exynos5260-hsi2c", + .data = &exynos5260_hsi2c_data + }, {}, }; MODULE_DEVICE_TABLE(of, exynos5_i2c_match); +static inline struct exynos_hsi2c_variant *exynos5_i2c_get_variant + (struct platform_device *pdev) +{ + const struct of_device_id *match; + + match = of_match_node(exynos5_i2c_match, pdev->dev.of_node); + return (struct exynos_hsi2c_variant *)match->data; +} + static void exynos5_i2c_clr_pend_irq(struct exynos5_i2c *i2c) { writel(readl(i2c->regs + HSI2C_INT_STATUS), @@ -415,7 +449,7 @@ static irqreturn_t exynos5_i2c_irq(int irqno, void *dev_id) fifo_status = readl(i2c->regs + HSI2C_FIFO_STATUS); fifo_level = HSI2C_TX_FIFO_LVL(fifo_status); - len = HSI2C_FIFO_MAX - fifo_level; + len = i2c->variant->fifo_depth - fifo_level; if (len > (i2c->msg->len - i2c->msg_ptr)) len = i2c->msg->len - i2c->msg_ptr; @@ -483,6 +517,7 @@ static void exynos5_i2c_message_start(struct exynos5_i2c *i2c, int stop) u32 i2c_auto_conf = 0; u32 fifo_ctl; unsigned long flags; + unsigned short trig_lvl; i2c_ctl = readl(i2c->regs + HSI2C_CTL); i2c_ctl &= ~(HSI2C_TXCHON | HSI2C_RXCHON); @@ -493,13 +528,19 @@ static void exynos5_i2c_message_start(struct exynos5_i2c *i2c, int stop) i2c_auto_conf = HSI2C_READ_WRITE; - fifo_ctl |= HSI2C_RXFIFO_TRIGGER_LEVEL(HSI2C_DEF_TXFIFO_LVL); + trig_lvl = (i2c->msg->len > i2c->variant->fifo_depth) ? + (i2c->variant->fifo_depth * 3 / 4) : i2c->msg->len; + fifo_ctl |= HSI2C_RXFIFO_TRIGGER_LEVEL(trig_lvl); + int_en |= (HSI2C_INT_RX_ALMOSTFULL_EN | HSI2C_INT_TRAILING_EN); } else { i2c_ctl |= HSI2C_TXCHON; - fifo_ctl |= HSI2C_TXFIFO_TRIGGER_LEVEL(HSI2C_DEF_RXFIFO_LVL); + trig_lvl = (i2c->msg->len > i2c->variant->fifo_depth) ? 
+ (i2c->variant->fifo_depth * 1 / 4) : i2c->msg->len; + fifo_ctl |= HSI2C_TXFIFO_TRIGGER_LEVEL(trig_lvl); + int_en |= HSI2C_INT_TX_ALMOSTEMPTY_EN; } @@ -621,10 +662,8 @@ static int exynos5_i2c_probe(struct platform_device *pdev) int ret; i2c = devm_kzalloc(&pdev->dev, sizeof(struct exynos5_i2c), GFP_KERNEL); - if (!i2c) { - dev_err(&pdev->dev, "no memory for state\n"); + if (!i2c) return -ENOMEM; - } if (of_property_read_u32(np, "clock-frequency", &op_clock)) { i2c->speed_mode = HSI2C_FAST_SPD; @@ -691,7 +730,9 @@ static int exynos5_i2c_probe(struct platform_device *pdev) if (ret) goto err_clk; - exynos5_i2c_init(i2c); + i2c->variant = exynos5_i2c_get_variant(pdev); + + exynos5_i2c_reset(i2c); ret = i2c_add_adapter(&i2c->adap); if (ret < 0) { diff --git a/drivers/i2c/busses/i2c-gpio.c b/drivers/i2c/busses/i2c-gpio.c index 02d2d4abb9dd..71a45b210a24 100644 --- a/drivers/i2c/busses/i2c-gpio.c +++ b/drivers/i2c/busses/i2c-gpio.c @@ -147,24 +147,22 @@ static int i2c_gpio_probe(struct platform_device *pdev) scl_pin = pdata->scl_pin; } - ret = gpio_request(sda_pin, "sda"); + ret = devm_gpio_request(&pdev->dev, sda_pin, "sda"); if (ret) { if (ret == -EINVAL) ret = -EPROBE_DEFER; /* Try again later */ - goto err_request_sda; + return ret; } - ret = gpio_request(scl_pin, "scl"); + ret = devm_gpio_request(&pdev->dev, scl_pin, "scl"); if (ret) { if (ret == -EINVAL) ret = -EPROBE_DEFER; /* Try again later */ - goto err_request_scl; + return ret; } priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); - if (!priv) { - ret = -ENOMEM; - goto err_add_bus; - } + if (!priv) + return -ENOMEM; adap = &priv->adap; bit_data = &priv->bit_data; pdata = &priv->pdata; @@ -225,7 +223,7 @@ static int i2c_gpio_probe(struct platform_device *pdev) adap->nr = pdev->id; ret = i2c_bit_add_numbered_bus(adap); if (ret) - goto err_add_bus; + return ret; platform_set_drvdata(pdev, priv); @@ -235,13 +233,6 @@ static int i2c_gpio_probe(struct platform_device *pdev) ? 
", no clock stretching" : ""); return 0; - -err_add_bus: - gpio_free(scl_pin); -err_request_scl: - gpio_free(sda_pin); -err_request_sda: - return ret; } static int i2c_gpio_remove(struct platform_device *pdev) @@ -255,8 +246,6 @@ static int i2c_gpio_remove(struct platform_device *pdev) pdata = &priv->pdata; i2c_del_adapter(adap); - gpio_free(pdata->scl_pin); - gpio_free(pdata->sda_pin); return 0; } diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c index db895fb22e65..aa8bc146718b 100644 --- a/drivers/i2c/busses/i2c-imx.c +++ b/drivers/i2c/busses/i2c-imx.c @@ -183,6 +183,8 @@ struct imx_i2c_struct { unsigned int disable_delay; int stopped; unsigned int ifdr; /* IMX_I2C_IFDR */ + unsigned int cur_clk; + unsigned int bitrate; const struct imx_i2c_hwdata *hwdata; }; @@ -305,6 +307,48 @@ static int i2c_imx_acked(struct imx_i2c_struct *i2c_imx) return 0; } +static void i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx) +{ + struct imx_i2c_clk_pair *i2c_clk_div = i2c_imx->hwdata->clk_div; + unsigned int i2c_clk_rate; + unsigned int div; + int i; + + /* Divider value calculation */ + i2c_clk_rate = clk_get_rate(i2c_imx->clk); + if (i2c_imx->cur_clk == i2c_clk_rate) + return; + else + i2c_imx->cur_clk = i2c_clk_rate; + + div = (i2c_clk_rate + i2c_imx->bitrate - 1) / i2c_imx->bitrate; + if (div < i2c_clk_div[0].div) + i = 0; + else if (div > i2c_clk_div[i2c_imx->hwdata->ndivs - 1].div) + i = i2c_imx->hwdata->ndivs - 1; + else + for (i = 0; i2c_clk_div[i].div < div; i++); + + /* Store divider value */ + i2c_imx->ifdr = i2c_clk_div[i].val; + + /* + * There dummy delay is calculated. + * It should be about one I2C clock period long. + * This delay is used in I2C bus disable function + * to fix chip hardware bug. + */ + i2c_imx->disable_delay = (500000U * i2c_clk_div[i].div + + (i2c_clk_rate / 2) - 1) / (i2c_clk_rate / 2); + +#ifdef CONFIG_I2C_DEBUG_BUS + dev_dbg(&i2c_imx->adapter.dev, "I2C_CLK=%d, REQ DIV=%d\n", + i2c_clk_rate, div); + dev_dbg(&i2c_imx->adapter.dev, "IFDR[IC]=0x%x, REAL DIV=%d\n", + i2c_clk_div[i].val, i2c_clk_div[i].div); +#endif +} + static int i2c_imx_start(struct imx_i2c_struct *i2c_imx) { unsigned int temp = 0; @@ -312,6 +356,8 @@ static int i2c_imx_start(struct imx_i2c_struct *i2c_imx) dev_dbg(&i2c_imx->adapter.dev, "<%s>\n", __func__); + i2c_imx_set_clk(i2c_imx); + result = clk_prepare_enable(i2c_imx->clk); if (result) return result; @@ -367,45 +413,6 @@ static void i2c_imx_stop(struct imx_i2c_struct *i2c_imx) clk_disable_unprepare(i2c_imx->clk); } -static void i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx, - unsigned int rate) -{ - struct imx_i2c_clk_pair *i2c_clk_div = i2c_imx->hwdata->clk_div; - unsigned int i2c_clk_rate; - unsigned int div; - int i; - - /* Divider value calculation */ - i2c_clk_rate = clk_get_rate(i2c_imx->clk); - div = (i2c_clk_rate + rate - 1) / rate; - if (div < i2c_clk_div[0].div) - i = 0; - else if (div > i2c_clk_div[i2c_imx->hwdata->ndivs - 1].div) - i = i2c_imx->hwdata->ndivs - 1; - else - for (i = 0; i2c_clk_div[i].div < div; i++); - - /* Store divider value */ - i2c_imx->ifdr = i2c_clk_div[i].val; - - /* - * There dummy delay is calculated. - * It should be about one I2C clock period long. - * This delay is used in I2C bus disable function - * to fix chip hardware bug. 
- */ - i2c_imx->disable_delay = (500000U * i2c_clk_div[i].div - + (i2c_clk_rate / 2) - 1) / (i2c_clk_rate / 2); - - /* dev_dbg() can't be used, because adapter is not yet registered */ -#ifdef CONFIG_I2C_DEBUG_BUS - dev_dbg(&i2c_imx->adapter.dev, "<%s> I2C_CLK=%d, REQ DIV=%d\n", - __func__, i2c_clk_rate, div); - dev_dbg(&i2c_imx->adapter.dev, "<%s> IFDR[IC]=0x%x, REAL DIV=%d\n", - __func__, i2c_clk_div[i].val, i2c_clk_div[i].div); -#endif -} - static irqreturn_t i2c_imx_isr(int irq, void *dev_id) { struct imx_i2c_struct *i2c_imx = dev_id; @@ -458,10 +465,11 @@ static int i2c_imx_write(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs) return 0; } -static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs) +static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs, bool is_lastmsg) { int i, result; unsigned int temp; + int block_data = msgs->flags & I2C_M_RECV_LEN; dev_dbg(&i2c_imx->adapter.dev, "<%s> write slave address: addr=0x%x\n", @@ -481,7 +489,12 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs) /* setup bus to read data */ temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR); temp &= ~I2CR_MTX; - if (msgs->len - 1) + + /* + * Reset the I2CR_TXAK flag initially for SMBus block read since the + * length is unknown + */ + if ((msgs->len - 1) || block_data) temp &= ~I2CR_TXAK; imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR); imx_i2c_read_reg(i2c_imx, IMX_I2C_I2DR); /* dummy read */ @@ -490,19 +503,49 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs) /* read data */ for (i = 0; i < msgs->len; i++) { + u8 len = 0; result = i2c_imx_trx_complete(i2c_imx); if (result) return result; - if (i == (msgs->len - 1)) { - /* It must generate STOP before read I2DR to prevent - controller from generating another clock cycle */ + /* + * First byte is the length of remaining packet + * in the SMBus block data read. Add it to + * msgs->len. + */ + if ((!i) && block_data) { + len = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2DR); + if ((len == 0) || (len > I2C_SMBUS_BLOCK_MAX)) + return -EPROTO; dev_dbg(&i2c_imx->adapter.dev, - "<%s> clear MSTA\n", __func__); - temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR); - temp &= ~(I2CR_MSTA | I2CR_MTX); - imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR); - i2c_imx_bus_busy(i2c_imx, 0); - i2c_imx->stopped = 1; + "<%s> read length: 0x%X\n", + __func__, len); + msgs->len += len; + } + if (i == (msgs->len - 1)) { + if (is_lastmsg) { + /* + * It must generate STOP before read I2DR to prevent + * controller from generating another clock cycle + */ + dev_dbg(&i2c_imx->adapter.dev, + "<%s> clear MSTA\n", __func__); + temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR); + temp &= ~(I2CR_MSTA | I2CR_MTX); + imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR); + i2c_imx_bus_busy(i2c_imx, 0); + i2c_imx->stopped = 1; + } else { + /* + * For i2c master receiver repeat restart operation like: + * read -> repeat MSTA -> read/write + * The controller must set MTX before read the last byte in + * the first read operation, otherwise the first read cost + * one extra clock cycle. 
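The divider selection and the one-SCL-period "dummy delay" in i2c_imx_set_clk() are easier to follow with numbers. In this worked example the 66 MHz clock rate and the table entry 768 are assumptions; the real values come from clk_get_rate() and the SoC-specific hwdata divider table:

        /*
         * i2c_clk_rate = 66 MHz, bitrate = 100 kHz:
         *
         *   div = (66000000 + 100000 - 1) / 100000 = 660
         *
         * the search loop picks the first table entry with .div >= 660
         * (assume 768), so SCL ~= 66 MHz / 768 ~= 86 kHz, and
         *
         *   disable_delay = (500000 * 768 + 33000000 - 1) / 33000000 = 12 us
         *
         * which is indeed roughly one SCL period (~11.6 us).
         */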
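With I2C_FUNC_SMBUS_READ_BLOCK_DATA now advertised by i2c_imx_func(), client drivers can issue SMBus block reads; the i2c core turns them into an I2C_M_RECV_LEN read whose first byte on the wire is the count, which is exactly what the new block_data handling above consumes. A minimal caller-side sketch — the client and the command byte 0x10 are hypothetical:

static int example_block_read(struct i2c_client *client)
{
        u8 buf[I2C_SMBUS_BLOCK_MAX];
        int n = i2c_smbus_read_block_data(client, 0x10, buf);

        /* on success, n is the byte count the slave reported first */
        return n < 0 ? n : 0;
}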
+ */ + temp = readb(i2c_imx->base + IMX_I2C_I2CR); + temp |= I2CR_MTX; + writeb(temp, i2c_imx->base + IMX_I2C_I2CR); + } } else if (i == (msgs->len - 2)) { dev_dbg(&i2c_imx->adapter.dev, "<%s> set TXAK\n", __func__); @@ -510,7 +553,10 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs) temp |= I2CR_TXAK; imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR); } - msgs->buf[i] = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2DR); + if ((!i) && block_data) + msgs->buf[0] = len; + else + msgs->buf[i] = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2DR); dev_dbg(&i2c_imx->adapter.dev, "<%s> read byte: B%d=0x%X\n", __func__, i, msgs->buf[i]); @@ -523,6 +569,7 @@ static int i2c_imx_xfer(struct i2c_adapter *adapter, { unsigned int i, temp; int result; + bool is_lastmsg = false; struct imx_i2c_struct *i2c_imx = i2c_get_adapdata(adapter); dev_dbg(&i2c_imx->adapter.dev, "<%s>\n", __func__); @@ -534,6 +581,9 @@ static int i2c_imx_xfer(struct i2c_adapter *adapter, /* read/write data */ for (i = 0; i < num; i++) { + if (i == num - 1) + is_lastmsg = true; + if (i) { dev_dbg(&i2c_imx->adapter.dev, "<%s> repeated start\n", __func__); @@ -564,7 +614,7 @@ static int i2c_imx_xfer(struct i2c_adapter *adapter, (temp & I2SR_RXAK ? 1 : 0)); #endif if (msgs[i].flags & I2C_M_RD) - result = i2c_imx_read(i2c_imx, &msgs[i]); + result = i2c_imx_read(i2c_imx, &msgs[i], is_lastmsg); else result = i2c_imx_write(i2c_imx, &msgs[i]); if (result) @@ -583,7 +633,8 @@ fail0: static u32 i2c_imx_func(struct i2c_adapter *adapter) { - return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL; + return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL + | I2C_FUNC_SMBUS_READ_BLOCK_DATA; } static struct i2c_algorithm i2c_imx_algo = { @@ -600,7 +651,6 @@ static int i2c_imx_probe(struct platform_device *pdev) struct imxi2c_platform_data *pdata = dev_get_platdata(&pdev->dev); void __iomem *base; int irq, ret; - u32 bitrate; dev_dbg(&pdev->dev, "<%s>\n", __func__); @@ -617,10 +667,8 @@ static int i2c_imx_probe(struct platform_device *pdev) i2c_imx = devm_kzalloc(&pdev->dev, sizeof(struct imx_i2c_struct), GFP_KERNEL); - if (!i2c_imx) { - dev_err(&pdev->dev, "can't allocate interface\n"); + if (!i2c_imx) return -ENOMEM; - } if (of_id) i2c_imx->hwdata = of_id->data; @@ -664,12 +712,11 @@ static int i2c_imx_probe(struct platform_device *pdev) i2c_set_adapdata(&i2c_imx->adapter, i2c_imx); /* Set up clock divider */ - bitrate = IMX_I2C_BIT_RATE; + i2c_imx->bitrate = IMX_I2C_BIT_RATE; ret = of_property_read_u32(pdev->dev.of_node, - "clock-frequency", &bitrate); + "clock-frequency", &i2c_imx->bitrate); if (ret < 0 && pdata && pdata->bitrate) - bitrate = pdata->bitrate; - i2c_imx_set_clk(i2c_imx, bitrate); + i2c_imx->bitrate = pdata->bitrate; /* Set up chip registers to defaults */ imx_i2c_write_reg(i2c_imx->hwdata->i2cr_ien_opcode ^ I2CR_IEN, diff --git a/drivers/i2c/busses/i2c-mpc.c b/drivers/i2c/busses/i2c-mpc.c index f5391633b53a..6a32aa095f83 100644 --- a/drivers/i2c/busses/i2c-mpc.c +++ b/drivers/i2c/busses/i2c-mpc.c @@ -115,7 +115,7 @@ static void mpc_i2c_fixup(struct mpc_i2c *i2c) for (k = 9; k; k--) { writeccr(i2c, 0); writeccr(i2c, CCR_MSTA | CCR_MTX | CCR_MEN); - udelay(delay_val); + readb(i2c->base + MPC_I2C_DR); writeccr(i2c, CCR_MEN); udelay(delay_val << 1); } diff --git a/drivers/i2c/busses/i2c-mv64xxx.c b/drivers/i2c/busses/i2c-mv64xxx.c index 540ea692bf79..9f4b775e2e39 100644 --- a/drivers/i2c/busses/i2c-mv64xxx.c +++ b/drivers/i2c/busses/i2c-mv64xxx.c @@ -681,7 +681,7 @@ static const struct i2c_algorithm mv64xxx_i2c_algo = { 
***************************************************************************** */ static const struct of_device_id mv64xxx_i2c_of_match_table[] = { - { .compatible = "allwinner,sun4i-i2c", .data = &mv64xxx_i2c_regs_sun4i}, + { .compatible = "allwinner,sun4i-a10-i2c", .data = &mv64xxx_i2c_regs_sun4i}, { .compatible = "allwinner,sun6i-a31-i2c", .data = &mv64xxx_i2c_regs_sun4i}, { .compatible = "marvell,mv64xxx-i2c", .data = &mv64xxx_i2c_regs_mv64xxx}, { .compatible = "marvell,mv78230-i2c", .data = &mv64xxx_i2c_regs_mv64xxx}, diff --git a/drivers/i2c/busses/i2c-nomadik.c b/drivers/i2c/busses/i2c-nomadik.c index 32c85e9ecdae..0e55d85fd4ed 100644 --- a/drivers/i2c/busses/i2c-nomadik.c +++ b/drivers/i2c/busses/i2c-nomadik.c @@ -879,19 +879,19 @@ static irqreturn_t i2c_irq_handler(int irq, void *arg) #ifdef CONFIG_PM_SLEEP static int nmk_i2c_suspend_late(struct device *dev) { - pinctrl_pm_select_sleep_state(dev); + int ret; + ret = pm_runtime_force_suspend(dev); + if (ret) + return ret; + + pinctrl_pm_select_sleep_state(dev); return 0; } static int nmk_i2c_resume_early(struct device *dev) { - /* First go to the default state */ - pinctrl_pm_select_default_state(dev); - /* Then let's idle the pins until the next transfer happens */ - pinctrl_pm_select_idle_state(dev); - - return 0; + return pm_runtime_force_resume(dev); } #endif diff --git a/drivers/i2c/busses/i2c-nuc900.c b/drivers/i2c/busses/i2c-nuc900.c deleted file mode 100644 index 36394d737faf..000000000000 --- a/drivers/i2c/busses/i2c-nuc900.c +++ /dev/null @@ -1,709 +0,0 @@ -/* - * linux/drivers/i2c/busses/i2c-nuc900.c - * - * Copyright (c) 2010 Nuvoton technology corporation. - * - * This driver based on S3C2410 I2C driver of Ben Dooks <ben-Y5A6D6n0/KfQXOPxS62xeg@public.gmane.org>. - * Written by Wan ZongShun <mcuos.com-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation;version 2 of the License. 
- * - */ - -#include <linux/kernel.h> -#include <linux/module.h> - -#include <linux/i2c.h> -#include <linux/init.h> -#include <linux/time.h> -#include <linux/interrupt.h> -#include <linux/delay.h> -#include <linux/errno.h> -#include <linux/err.h> -#include <linux/platform_device.h> -#include <linux/clk.h> -#include <linux/cpufreq.h> -#include <linux/slab.h> -#include <linux/io.h> - -#include <mach/mfp.h> -#include <linux/platform_data/i2c-nuc900.h> - -/* nuc900 i2c registers offset */ - -#define CSR 0x00 -#define DIVIDER 0x04 -#define CMDR 0x08 -#define SWR 0x0C -#define RXR 0x10 -#define TXR 0x14 - -/* nuc900 i2c CSR register bits */ - -#define IRQEN 0x003 -#define I2CBUSY 0x400 -#define I2CSTART 0x018 -#define IRQFLAG 0x004 -#define ARBIT_LOST 0x200 -#define SLAVE_ACK 0x800 - -/* nuc900 i2c CMDR register bits */ - -#define I2C_CMD_START 0x10 -#define I2C_CMD_STOP 0x08 -#define I2C_CMD_READ 0x04 -#define I2C_CMD_WRITE 0x02 -#define I2C_CMD_NACK 0x01 - -/* i2c controller state */ - -enum nuc900_i2c_state { - STATE_IDLE, - STATE_START, - STATE_READ, - STATE_WRITE, - STATE_STOP -}; - -/* i2c controller private data */ - -struct nuc900_i2c { - spinlock_t lock; - wait_queue_head_t wait; - - struct i2c_msg *msg; - unsigned int msg_num; - unsigned int msg_idx; - unsigned int msg_ptr; - unsigned int irq; - - enum nuc900_i2c_state state; - - void __iomem *regs; - struct clk *clk; - struct device *dev; - struct resource *ioarea; - struct i2c_adapter adap; -}; - -/* nuc900_i2c_master_complete - * - * complete the message and wake up the caller, using the given return code, - * or zero to mean ok. -*/ - -static inline void nuc900_i2c_master_complete(struct nuc900_i2c *i2c, int ret) -{ - dev_dbg(i2c->dev, "master_complete %d\n", ret); - - i2c->msg_ptr = 0; - i2c->msg = NULL; - i2c->msg_idx++; - i2c->msg_num = 0; - if (ret) - i2c->msg_idx = ret; - - wake_up(&i2c->wait); -} - -/* irq enable/disable functions */ - -static inline void nuc900_i2c_disable_irq(struct nuc900_i2c *i2c) -{ - unsigned long tmp; - - tmp = readl(i2c->regs + CSR); - writel(tmp & ~IRQEN, i2c->regs + CSR); -} - -static inline void nuc900_i2c_enable_irq(struct nuc900_i2c *i2c) -{ - unsigned long tmp; - - tmp = readl(i2c->regs + CSR); - writel(tmp | IRQEN, i2c->regs + CSR); -} - - -/* nuc900_i2c_message_start - * - * put the start of a message onto the bus -*/ - -static void nuc900_i2c_message_start(struct nuc900_i2c *i2c, - struct i2c_msg *msg) -{ - unsigned int addr = (msg->addr & 0x7f) << 1; - - if (msg->flags & I2C_M_RD) - addr |= 0x1; - writel(addr & 0xff, i2c->regs + TXR); - writel(I2C_CMD_START | I2C_CMD_WRITE, i2c->regs + CMDR); -} - -static inline void nuc900_i2c_stop(struct nuc900_i2c *i2c, int ret) -{ - - dev_dbg(i2c->dev, "STOP\n"); - - /* stop the transfer */ - i2c->state = STATE_STOP; - writel(I2C_CMD_STOP, i2c->regs + CMDR); - - nuc900_i2c_master_complete(i2c, ret); - nuc900_i2c_disable_irq(i2c); -} - -/* helper functions to determine the current state in the set of - * messages we are sending -*/ - -/* is_lastmsg() - * - * returns TRUE if the current message is the last in the set -*/ - -static inline int is_lastmsg(struct nuc900_i2c *i2c) -{ - return i2c->msg_idx >= (i2c->msg_num - 1); -} - -/* is_msglast - * - * returns TRUE if we this is the last byte in the current message -*/ - -static inline int is_msglast(struct nuc900_i2c *i2c) -{ - return i2c->msg_ptr == i2c->msg->len-1; -} - -/* is_msgend - * - * returns TRUE if we reached the end of the current message -*/ - -static inline int is_msgend(struct nuc900_i2c 
*i2c) -{ - return i2c->msg_ptr >= i2c->msg->len; -} - -/* i2c_nuc900_irq_nextbyte - * - * process an interrupt and work out what to do - */ - -static void i2c_nuc900_irq_nextbyte(struct nuc900_i2c *i2c, - unsigned long iicstat) -{ - unsigned char byte; - - switch (i2c->state) { - - case STATE_IDLE: - dev_err(i2c->dev, "%s: called in STATE_IDLE\n", __func__); - break; - - case STATE_STOP: - dev_err(i2c->dev, "%s: called in STATE_STOP\n", __func__); - nuc900_i2c_disable_irq(i2c); - break; - - case STATE_START: - /* last thing we did was send a start condition on the - * bus, or started a new i2c message - */ - - if (iicstat & SLAVE_ACK && - !(i2c->msg->flags & I2C_M_IGNORE_NAK)) { - /* ack was not received... */ - - dev_dbg(i2c->dev, "ack was not received\n"); - nuc900_i2c_stop(i2c, -ENXIO); - break; - } - - if (i2c->msg->flags & I2C_M_RD) - i2c->state = STATE_READ; - else - i2c->state = STATE_WRITE; - - /* terminate the transfer if there is nothing to do - * as this is used by the i2c probe to find devices. - */ - - if (is_lastmsg(i2c) && i2c->msg->len == 0) { - nuc900_i2c_stop(i2c, 0); - break; - } - - if (i2c->state == STATE_READ) - goto prepare_read; - - /* fall through to the write state, as we will need to - * send a byte as well - */ - - case STATE_WRITE: - /* we are writing data to the device... check for the - * end of the message, and if so, work out what to do - */ - - if (!(i2c->msg->flags & I2C_M_IGNORE_NAK)) { - if (iicstat & SLAVE_ACK) { - dev_dbg(i2c->dev, "WRITE: No Ack\n"); - - nuc900_i2c_stop(i2c, -ECONNREFUSED); - break; - } - } - -retry_write: - - if (!is_msgend(i2c)) { - byte = i2c->msg->buf[i2c->msg_ptr++]; - writeb(byte, i2c->regs + TXR); - writel(I2C_CMD_WRITE, i2c->regs + CMDR); - - } else if (!is_lastmsg(i2c)) { - /* we need to go to the next i2c message */ - - dev_dbg(i2c->dev, "WRITE: Next Message\n"); - - i2c->msg_ptr = 0; - i2c->msg_idx++; - i2c->msg++; - - /* check to see if we need to do another message */ - if (i2c->msg->flags & I2C_M_NOSTART) { - - if (i2c->msg->flags & I2C_M_RD) { - /* cannot do this, the controller - * forces us to send a new START - * when we change direction - */ - - nuc900_i2c_stop(i2c, -EINVAL); - } - - goto retry_write; - } else { - /* send the new start */ - nuc900_i2c_message_start(i2c, i2c->msg); - i2c->state = STATE_START; - } - - } else { - /* send stop */ - - nuc900_i2c_stop(i2c, 0); - } - break; - - case STATE_READ: - /* we have a byte of data in the data register, do - * something with it, and then work out whether we are - * going to do any more read/write - */ - - byte = readb(i2c->regs + RXR); - i2c->msg->buf[i2c->msg_ptr++] = byte; - -prepare_read: - if (is_msglast(i2c)) { - /* last byte of buffer */ - - if (is_lastmsg(i2c)) - writel(I2C_CMD_READ | I2C_CMD_NACK, - i2c->regs + CMDR); - - } else if (is_msgend(i2c)) { - /* ok, we've read the entire buffer, see if there - * is anything else we need to do - */ - - if (is_lastmsg(i2c)) { - /* last message, send stop and complete */ - dev_dbg(i2c->dev, "READ: Send Stop\n"); - - nuc900_i2c_stop(i2c, 0); - } else { - /* go to the next transfer */ - dev_dbg(i2c->dev, "READ: Next Transfer\n"); - - i2c->msg_ptr = 0; - i2c->msg_idx++; - i2c->msg++; - - writel(I2C_CMD_READ, i2c->regs + CMDR); - } - - } else { - writel(I2C_CMD_READ, i2c->regs + CMDR); - } - - break; - } -} - -/* nuc900_i2c_irq - * - * top level IRQ servicing routine -*/ - -static irqreturn_t nuc900_i2c_irq(int irqno, void *dev_id) -{ - struct nuc900_i2c *i2c = dev_id; - unsigned long status; - - status = readl(i2c->regs 
+ CSR); - writel(status | IRQFLAG, i2c->regs + CSR); - - if (status & ARBIT_LOST) { - /* deal with arbitration loss */ - dev_err(i2c->dev, "deal with arbitration loss\n"); - goto out; - } - - if (i2c->state == STATE_IDLE) { - dev_dbg(i2c->dev, "IRQ: error i2c->state == IDLE\n"); - goto out; - } - - /* pretty much this leaves us with the fact that we've - * transmitted or received whatever byte we last sent - */ - - i2c_nuc900_irq_nextbyte(i2c, status); - - out: - return IRQ_HANDLED; -} - - -/* nuc900_i2c_set_master - * - * get the i2c bus for a master transaction -*/ - -static int nuc900_i2c_set_master(struct nuc900_i2c *i2c) -{ - int timeout = 400; - - while (timeout-- > 0) { - if (((readl(i2c->regs + SWR) & I2CSTART) == I2CSTART) && - ((readl(i2c->regs + CSR) & I2CBUSY) == 0)) { - return 0; - } - - msleep(1); - } - - return -ETIMEDOUT; -} - -/* nuc900_i2c_doxfer - * - * this starts an i2c transfer -*/ - -static int nuc900_i2c_doxfer(struct nuc900_i2c *i2c, - struct i2c_msg *msgs, int num) -{ - unsigned long iicstat, timeout; - int spins = 20; - int ret; - - ret = nuc900_i2c_set_master(i2c); - if (ret != 0) { - dev_err(i2c->dev, "cannot get bus (error %d)\n", ret); - ret = -EAGAIN; - goto out; - } - - spin_lock_irq(&i2c->lock); - - i2c->msg = msgs; - i2c->msg_num = num; - i2c->msg_ptr = 0; - i2c->msg_idx = 0; - i2c->state = STATE_START; - - nuc900_i2c_message_start(i2c, msgs); - spin_unlock_irq(&i2c->lock); - - timeout = wait_event_timeout(i2c->wait, i2c->msg_num == 0, HZ * 5); - - ret = i2c->msg_idx; - - /* having these next two as dev_err() makes life very - * noisy when doing an i2cdetect - */ - - if (timeout == 0) - dev_dbg(i2c->dev, "timeout\n"); - else if (ret != num) - dev_dbg(i2c->dev, "incomplete xfer (%d)\n", ret); - - /* ensure the stop has been through the bus */ - - dev_dbg(i2c->dev, "waiting for bus idle\n"); - - /* first, try busy waiting briefly */ - do { - iicstat = readl(i2c->regs + CSR); - } while ((iicstat & I2CBUSY) && --spins); - - /* if that timed out sleep */ - if (!spins) { - msleep(1); - iicstat = readl(i2c->regs + CSR); - } - - if (iicstat & I2CBUSY) - dev_warn(i2c->dev, "timeout waiting for bus idle\n"); - - out: - return ret; -} - -/* nuc900_i2c_xfer - * - * first port of call from the i2c bus code when an message needs - * transferring across the i2c bus. 
-*/ - -static int nuc900_i2c_xfer(struct i2c_adapter *adap, - struct i2c_msg *msgs, int num) -{ - struct nuc900_i2c *i2c = (struct nuc900_i2c *)adap->algo_data; - int retry; - int ret; - - nuc900_i2c_enable_irq(i2c); - - for (retry = 0; retry < adap->retries; retry++) { - - ret = nuc900_i2c_doxfer(i2c, msgs, num); - - if (ret != -EAGAIN) - return ret; - - dev_dbg(i2c->dev, "Retrying transmission (%d)\n", retry); - - udelay(100); - } - - return -EREMOTEIO; -} - -/* declare our i2c functionality */ -static u32 nuc900_i2c_func(struct i2c_adapter *adap) -{ - return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL | I2C_FUNC_NOSTART | - I2C_FUNC_PROTOCOL_MANGLING; -} - -/* i2c bus registration info */ - -static const struct i2c_algorithm nuc900_i2c_algorithm = { - .master_xfer = nuc900_i2c_xfer, - .functionality = nuc900_i2c_func, -}; - -/* nuc900_i2c_probe - * - * called by the bus driver when a suitable device is found -*/ - -static int nuc900_i2c_probe(struct platform_device *pdev) -{ - struct nuc900_i2c *i2c; - struct nuc900_platform_i2c *pdata; - struct resource *res; - int ret; - - pdata = dev_get_platdata(&pdev->dev); - if (!pdata) { - dev_err(&pdev->dev, "no platform data\n"); - return -EINVAL; - } - - i2c = kzalloc(sizeof(struct nuc900_i2c), GFP_KERNEL); - if (!i2c) { - dev_err(&pdev->dev, "no memory for state\n"); - return -ENOMEM; - } - - strlcpy(i2c->adap.name, "nuc900-i2c0", sizeof(i2c->adap.name)); - i2c->adap.owner = THIS_MODULE; - i2c->adap.algo = &nuc900_i2c_algorithm; - i2c->adap.retries = 2; - i2c->adap.class = I2C_CLASS_HWMON | I2C_CLASS_SPD; - - spin_lock_init(&i2c->lock); - init_waitqueue_head(&i2c->wait); - - /* find the clock and enable it */ - - i2c->dev = &pdev->dev; - i2c->clk = clk_get(&pdev->dev, NULL); - if (IS_ERR(i2c->clk)) { - dev_err(&pdev->dev, "cannot get clock\n"); - ret = -ENOENT; - goto err_noclk; - } - - dev_dbg(&pdev->dev, "clock source %p\n", i2c->clk); - - clk_enable(i2c->clk); - - /* map the registers */ - - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - if (res == NULL) { - dev_err(&pdev->dev, "cannot find IO resource\n"); - ret = -ENOENT; - goto err_clk; - } - - i2c->ioarea = request_mem_region(res->start, resource_size(res), - pdev->name); - - if (i2c->ioarea == NULL) { - dev_err(&pdev->dev, "cannot request IO\n"); - ret = -ENXIO; - goto err_clk; - } - - i2c->regs = ioremap(res->start, resource_size(res)); - - if (i2c->regs == NULL) { - dev_err(&pdev->dev, "cannot map IO\n"); - ret = -ENXIO; - goto err_ioarea; - } - - dev_dbg(&pdev->dev, "registers %p (%p, %p)\n", - i2c->regs, i2c->ioarea, res); - - /* setup info block for the i2c core */ - - i2c->adap.algo_data = i2c; - i2c->adap.dev.parent = &pdev->dev; - - mfp_set_groupg(&pdev->dev, NULL); - - clk_get_rate(i2c->clk); - - ret = (i2c->clk.apbfreq)/(pdata->bus_freq * 5) - 1; - writel(ret & 0xffff, i2c->regs + DIVIDER); - - /* find the IRQ for this unit (note, this relies on the init call to - * ensure no current IRQs pending - */ - - i2c->irq = ret = platform_get_irq(pdev, 0); - if (ret <= 0) { - dev_err(&pdev->dev, "cannot find IRQ\n"); - goto err_iomap; - } - - ret = request_irq(i2c->irq, nuc900_i2c_irq, IRQF_SHARED, - dev_name(&pdev->dev), i2c); - - if (ret != 0) { - dev_err(&pdev->dev, "cannot claim IRQ %d\n", i2c->irq); - goto err_iomap; - } - - /* Note, previous versions of the driver used i2c_add_adapter() - * to add the bus at any number. We now pass the bus number via - * the platform data, so if unset it will now default to always - * being bus 0. 
- */ - - i2c->adap.nr = pdata->bus_num; - - ret = i2c_add_numbered_adapter(&i2c->adap); - if (ret < 0) { - dev_err(&pdev->dev, "failed to add bus to i2c core\n"); - goto err_irq; - } - - platform_set_drvdata(pdev, i2c); - - dev_info(&pdev->dev, "%s: NUC900 I2C adapter\n", - dev_name(&i2c->adap.dev)); - return 0; - - err_irq: - free_irq(i2c->irq, i2c); - - err_iomap: - iounmap(i2c->regs); - - err_ioarea: - release_resource(i2c->ioarea); - kfree(i2c->ioarea); - - err_clk: - clk_disable(i2c->clk); - clk_put(i2c->clk); - - err_noclk: - kfree(i2c); - return ret; -} - -/* nuc900_i2c_remove - * - * called when device is removed from the bus -*/ - -static int nuc900_i2c_remove(struct platform_device *pdev) -{ - struct nuc900_i2c *i2c = platform_get_drvdata(pdev); - - i2c_del_adapter(&i2c->adap); - free_irq(i2c->irq, i2c); - - clk_disable(i2c->clk); - clk_put(i2c->clk); - - iounmap(i2c->regs); - - release_resource(i2c->ioarea); - kfree(i2c->ioarea); - kfree(i2c); - - return 0; -} - -static struct platform_driver nuc900_i2c_driver = { - .probe = nuc900_i2c_probe, - .remove = nuc900_i2c_remove, - .driver = { - .owner = THIS_MODULE, - .name = "nuc900-i2c0", - }, -}; - -static int __init i2c_adap_nuc900_init(void) -{ - return platform_driver_register(&nuc900_i2c_driver); -} - -static void __exit i2c_adap_nuc900_exit(void) -{ - platform_driver_unregister(&nuc900_i2c_driver); -} -subsys_initcall(i2c_adap_nuc900_init); -module_exit(i2c_adap_nuc900_exit); - -MODULE_DESCRIPTION("NUC900 I2C Bus driver"); -MODULE_AUTHOR("Wan ZongShun, <mcuos.com-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>"); -MODULE_LICENSE("GPL"); -MODULE_ALIAS("platform:nuc900-i2c0"); diff --git a/drivers/i2c/busses/i2c-ocores.c b/drivers/i2c/busses/i2c-ocores.c index 1f6369f14fb6..0e10cc6182f0 100644 --- a/drivers/i2c/busses/i2c-ocores.c +++ b/drivers/i2c/busses/i2c-ocores.c @@ -250,7 +250,7 @@ static struct i2c_adapter ocores_adapter = { .algo = &ocores_algorithm, }; -static struct of_device_id ocores_i2c_match[] = { +static const struct of_device_id ocores_i2c_match[] = { { .compatible = "opencores,i2c-ocores", .data = (void *)TYPE_OCORES, diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c index 85f8eac9ba18..b182793a4051 100644 --- a/drivers/i2c/busses/i2c-omap.c +++ b/drivers/i2c/busses/i2c-omap.c @@ -1114,10 +1114,8 @@ omap_i2c_probe(struct platform_device *pdev) } dev = devm_kzalloc(&pdev->dev, sizeof(struct omap_i2c_dev), GFP_KERNEL); - if (!dev) { - dev_err(&pdev->dev, "Menory allocation failed\n"); + if (!dev) return -ENOMEM; - } mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); dev->base = devm_ioremap_resource(&pdev->dev, mem); diff --git a/drivers/i2c/busses/i2c-pxa.c b/drivers/i2c/busses/i2c-pxa.c index bbe6dfbc5c05..be671f7a0e06 100644 --- a/drivers/i2c/busses/i2c-pxa.c +++ b/drivers/i2c/busses/i2c-pxa.c @@ -1084,7 +1084,7 @@ static const struct i2c_algorithm i2c_pxa_pio_algorithm = { .functionality = i2c_pxa_functionality, }; -static struct of_device_id i2c_pxa_dt_ids[] = { +static const struct of_device_id i2c_pxa_dt_ids[] = { { .compatible = "mrvl,pxa-i2c", .data = (void *)REGS_PXA2XX }, { .compatible = "mrvl,pwri2c", .data = (void *)REGS_PXA3XX }, { .compatible = "mrvl,mmp-twsi", .data = (void *)REGS_PXA2XX }, diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c index 06d47aafbb79..899405923678 100644 --- a/drivers/i2c/busses/i2c-rcar.c +++ b/drivers/i2c/busses/i2c-rcar.c @@ -1,7 +1,9 @@ /* - * drivers/i2c/busses/i2c-rcar.c + * Driver for the Renesas RCar I2C unit * - * 
Copyright (C) 2012 Renesas Solutions Corp. + * Copyright (C) 2014 Wolfram Sang <wsa@sang-engineering.com> + * + * Copyright (C) 2012-14 Renesas Solutions Corp. * Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> * * This file is based on the drivers/i2c/busses/i2c-sh7760.c @@ -12,16 +14,12 @@ * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License + * the Free Software Foundation; version 2 of the License. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ #include <linux/clk.h> #include <linux/delay.h> @@ -36,7 +34,6 @@ #include <linux/platform_device.h> #include <linux/pm_runtime.h> #include <linux/slab.h> -#include <linux/spinlock.h> /* register offsets */ #define ICSCR 0x00 /* slave ctrl */ @@ -60,7 +57,7 @@ #define FSB (1 << 1) /* force stop bit */ #define ESG (1 << 0) /* en startbit gen */ -/* ICMSR */ +/* ICMSR (also for ICMIE) */ #define MNR (1 << 6) /* nack received */ #define MAL (1 << 5) /* arbitration lost */ #define MST (1 << 4) /* sent a stop */ @@ -69,32 +66,18 @@ #define MDR (1 << 1) #define MAT (1 << 0) /* slave addr xfer done */ -/* ICMIE */ -#define MNRE (1 << 6) /* nack irq en */ -#define MALE (1 << 5) /* arblos irq en */ -#define MSTE (1 << 4) /* stop irq en */ -#define MDEE (1 << 3) -#define MDTE (1 << 2) -#define MDRE (1 << 1) -#define MATE (1 << 0) /* address sent irq en */ +#define RCAR_BUS_PHASE_START (MDBS | MIE | ESG) +#define RCAR_BUS_PHASE_DATA (MDBS | MIE) +#define RCAR_BUS_PHASE_STOP (MDBS | MIE | FSB) -enum { - RCAR_BUS_PHASE_ADDR, - RCAR_BUS_PHASE_DATA, - RCAR_BUS_PHASE_STOP, -}; +#define RCAR_IRQ_SEND (MNR | MAL | MST | MAT | MDE) +#define RCAR_IRQ_RECV (MNR | MAL | MST | MAT | MDR) +#define RCAR_IRQ_STOP (MST) -enum { - RCAR_IRQ_CLOSE, - RCAR_IRQ_OPEN_FOR_SEND, - RCAR_IRQ_OPEN_FOR_RECV, - RCAR_IRQ_OPEN_FOR_STOP, -}; +#define RCAR_IRQ_ACK_SEND (~(MAT | MDE)) +#define RCAR_IRQ_ACK_RECV (~(MAT | MDR)) -/* - * flags - */ #define ID_LAST_MSG (1 << 0) #define ID_IOERROR (1 << 1) #define ID_DONE (1 << 2) @@ -112,14 +95,12 @@ struct rcar_i2c_priv { struct i2c_msg *msg; struct clk *clk; - spinlock_t lock; wait_queue_head_t wait; int pos; - int irq; u32 icccr; u32 flags; - enum rcar_i2c_type devtype; + enum rcar_i2c_type devtype; }; #define rcar_i2c_priv_to_dev(p) ((p)->adap.dev.parent) @@ -130,9 +111,7 @@ struct rcar_i2c_priv { #define LOOP_TIMEOUT 1024 -/* - * basic functions - */ + static void rcar_i2c_write(struct rcar_i2c_priv *priv, int reg, u32 val) { writel(val, priv->io + reg); @@ -161,36 +140,6 @@ static void rcar_i2c_init(struct rcar_i2c_priv *priv) rcar_i2c_write(priv, ICMAR, 0); } -static void rcar_i2c_irq_mask(struct rcar_i2c_priv *priv, int open) -{ - u32 val = MNRE | MALE | MSTE | MATE; /* default */ - - switch (open) { - case RCAR_IRQ_OPEN_FOR_SEND: - val |= MDEE; /* default + send */ - break; - case RCAR_IRQ_OPEN_FOR_RECV: - val |= MDRE; /* default + read */ - break; - case RCAR_IRQ_OPEN_FOR_STOP: - val = MSTE; /* stop irq only */ - break; - case RCAR_IRQ_CLOSE: - default: - val = 0; /* all 
close */ - break; - } - rcar_i2c_write(priv, ICMIER, val); -} - -static void rcar_i2c_set_addr(struct rcar_i2c_priv *priv, u32 recv) -{ - rcar_i2c_write(priv, ICMAR, (priv->msg->addr << 1) | recv); -} - -/* - * bus control functions - */ static int rcar_i2c_bus_barrier(struct rcar_i2c_priv *priv) { int i; @@ -205,24 +154,6 @@ static int rcar_i2c_bus_barrier(struct rcar_i2c_priv *priv) return -EBUSY; } -static void rcar_i2c_bus_phase(struct rcar_i2c_priv *priv, int phase) -{ - switch (phase) { - case RCAR_BUS_PHASE_ADDR: - rcar_i2c_write(priv, ICMCR, MDBS | MIE | ESG); - break; - case RCAR_BUS_PHASE_DATA: - rcar_i2c_write(priv, ICMCR, MDBS | MIE); - break; - case RCAR_BUS_PHASE_STOP: - rcar_i2c_write(priv, ICMCR, MDBS | MIE | FSB); - break; - } -} - -/* - * clock function - */ static int rcar_i2c_clock_calculate(struct rcar_i2c_priv *priv, u32 bus_speed, struct device *dev) @@ -312,60 +243,18 @@ scgd_find: return 0; } -static void rcar_i2c_clock_start(struct rcar_i2c_priv *priv) -{ - rcar_i2c_write(priv, ICCCR, priv->icccr); -} - -/* - * status functions - */ -static u32 rcar_i2c_status_get(struct rcar_i2c_priv *priv) -{ - return rcar_i2c_read(priv, ICMSR); -} - -#define rcar_i2c_status_clear(priv) rcar_i2c_status_bit_clear(priv, 0xffffffff) -static void rcar_i2c_status_bit_clear(struct rcar_i2c_priv *priv, u32 bit) -{ - rcar_i2c_write(priv, ICMSR, ~bit); -} - -/* - * recv/send functions - */ -static int rcar_i2c_recv(struct rcar_i2c_priv *priv) -{ - rcar_i2c_set_addr(priv, 1); - rcar_i2c_status_clear(priv); - rcar_i2c_bus_phase(priv, RCAR_BUS_PHASE_ADDR); - rcar_i2c_irq_mask(priv, RCAR_IRQ_OPEN_FOR_RECV); - - return 0; -} - -static int rcar_i2c_send(struct rcar_i2c_priv *priv) +static int rcar_i2c_prepare_msg(struct rcar_i2c_priv *priv) { - int ret; + int read = !!rcar_i2c_is_recv(priv); - /* - * It should check bus status when send case - */ - ret = rcar_i2c_bus_barrier(priv); - if (ret < 0) - return ret; - - rcar_i2c_set_addr(priv, 0); - rcar_i2c_status_clear(priv); - rcar_i2c_bus_phase(priv, RCAR_BUS_PHASE_ADDR); - rcar_i2c_irq_mask(priv, RCAR_IRQ_OPEN_FOR_SEND); + rcar_i2c_write(priv, ICMAR, (priv->msg->addr << 1) | read); + rcar_i2c_write(priv, ICMSR, 0); + rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_START); + rcar_i2c_write(priv, ICMIER, read ? RCAR_IRQ_RECV : RCAR_IRQ_SEND); return 0; } -#define rcar_i2c_send_restart(priv) rcar_i2c_status_bit_clear(priv, (MAT | MDE)) -#define rcar_i2c_recv_restart(priv) rcar_i2c_status_bit_clear(priv, (MAT | MDR)) - /* * interrupt functions */ @@ -386,7 +275,7 @@ static int rcar_i2c_irq_send(struct rcar_i2c_priv *priv, u32 msr) * goto data phase. */ if (msr & MAT) - rcar_i2c_bus_phase(priv, RCAR_BUS_PHASE_DATA); + rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA); if (priv->pos < msg->len) { /* @@ -414,7 +303,7 @@ static int rcar_i2c_irq_send(struct rcar_i2c_priv *priv, u32 msr) * prepare stop condition here. * ID_DONE will be set on STOP irq. */ - rcar_i2c_bus_phase(priv, RCAR_BUS_PHASE_STOP); + rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_STOP); else /* * If current msg is _NOT_ last msg, @@ -425,7 +314,7 @@ static int rcar_i2c_irq_send(struct rcar_i2c_priv *priv, u32 msr) return ID_DONE; } - rcar_i2c_send_restart(priv); + rcar_i2c_write(priv, ICMSR, RCAR_IRQ_ACK_SEND); return 0; } @@ -462,11 +351,11 @@ static int rcar_i2c_irq_recv(struct rcar_i2c_priv *priv, u32 msr) * otherwise, go to DATA phase. 
*/ if (priv->pos + 1 >= msg->len) - rcar_i2c_bus_phase(priv, RCAR_BUS_PHASE_STOP); + rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_STOP); else - rcar_i2c_bus_phase(priv, RCAR_BUS_PHASE_DATA); + rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA); - rcar_i2c_recv_restart(priv); + rcar_i2c_write(priv, ICMSR, RCAR_IRQ_ACK_RECV); return 0; } @@ -474,53 +363,31 @@ static int rcar_i2c_irq_recv(struct rcar_i2c_priv *priv, u32 msr) static irqreturn_t rcar_i2c_irq(int irq, void *ptr) { struct rcar_i2c_priv *priv = ptr; - struct device *dev = rcar_i2c_priv_to_dev(priv); u32 msr; - /*-------------- spin lock -----------------*/ - spin_lock(&priv->lock); - - msr = rcar_i2c_status_get(priv); + msr = rcar_i2c_read(priv, ICMSR); - /* - * Arbitration lost - */ + /* Arbitration lost */ if (msr & MAL) { - /* - * CAUTION - * - * When arbitration lost, device become _slave_ mode. - */ - dev_dbg(dev, "Arbitration Lost\n"); rcar_i2c_flags_set(priv, (ID_DONE | ID_ARBLOST)); goto out; } - /* - * Stop - */ + /* Stop */ if (msr & MST) { - dev_dbg(dev, "Stop\n"); rcar_i2c_flags_set(priv, ID_DONE); goto out; } - /* - * Nack - */ + /* Nack */ if (msr & MNR) { - dev_dbg(dev, "Nack\n"); - /* go to stop phase */ - rcar_i2c_bus_phase(priv, RCAR_BUS_PHASE_STOP); - rcar_i2c_irq_mask(priv, RCAR_IRQ_OPEN_FOR_STOP); + rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_STOP); + rcar_i2c_write(priv, ICMIER, RCAR_IRQ_STOP); rcar_i2c_flags_set(priv, ID_NACK); goto out; } - /* - * recv/send - */ if (rcar_i2c_is_recv(priv)) rcar_i2c_flags_set(priv, rcar_i2c_irq_recv(priv, msr)); else @@ -528,14 +395,11 @@ static irqreturn_t rcar_i2c_irq(int irq, void *ptr) out: if (rcar_i2c_flags_has(priv, ID_DONE)) { - rcar_i2c_irq_mask(priv, RCAR_IRQ_CLOSE); - rcar_i2c_status_clear(priv); + rcar_i2c_write(priv, ICMIER, 0); + rcar_i2c_write(priv, ICMSR, 0); wake_up(&priv->wait); } - spin_unlock(&priv->lock); - /*-------------- spin unlock -----------------*/ - return IRQ_HANDLED; } @@ -545,21 +409,18 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap, { struct rcar_i2c_priv *priv = i2c_get_adapdata(adap); struct device *dev = rcar_i2c_priv_to_dev(priv); - unsigned long flags; int i, ret, timeout; pm_runtime_get_sync(dev); - /*-------------- spin lock -----------------*/ - spin_lock_irqsave(&priv->lock, flags); - rcar_i2c_init(priv); - rcar_i2c_clock_start(priv); + /* start clock */ + rcar_i2c_write(priv, ICCCR, priv->icccr); - spin_unlock_irqrestore(&priv->lock, flags); - /*-------------- spin unlock -----------------*/ + ret = rcar_i2c_bus_barrier(priv); + if (ret < 0) + goto out; - ret = -EINVAL; for (i = 0; i < num; i++) { /* This HW can't send STOP after address phase */ if (msgs[i].len == 0) { @@ -567,9 +428,6 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap, break; } - /*-------------- spin lock -----------------*/ - spin_lock_irqsave(&priv->lock, flags); - /* init each data */ priv->msg = &msgs[i]; priv->pos = 0; @@ -577,21 +435,11 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap, if (priv->msg == &msgs[num - 1]) rcar_i2c_flags_set(priv, ID_LAST_MSG); - /* start send/recv */ - if (rcar_i2c_is_recv(priv)) - ret = rcar_i2c_recv(priv); - else - ret = rcar_i2c_send(priv); - - spin_unlock_irqrestore(&priv->lock, flags); - /*-------------- spin unlock -----------------*/ + ret = rcar_i2c_prepare_msg(priv); if (ret < 0) break; - /* - * wait result - */ timeout = wait_event_timeout(priv->wait, rcar_i2c_flags_has(priv, ID_DONE), 5 * HZ); @@ -600,9 +448,6 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap, break; } - /* - * 
error handling - */ if (rcar_i2c_flags_has(priv, ID_NACK)) { ret = -ENXIO; break; @@ -620,7 +465,7 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap, ret = i + 1; /* The number of transfer */ } - +out: pm_runtime_put(dev); if (ret < 0 && ret != -ENXIO) @@ -646,6 +491,9 @@ static const struct of_device_id rcar_i2c_dt_ids[] = { { .compatible = "renesas,i2c-r8a7779", .data = (void *)I2C_RCAR_GEN1 }, { .compatible = "renesas,i2c-r8a7790", .data = (void *)I2C_RCAR_GEN2 }, { .compatible = "renesas,i2c-r8a7791", .data = (void *)I2C_RCAR_GEN2 }, + { .compatible = "renesas,i2c-r8a7792", .data = (void *)I2C_RCAR_GEN2 }, + { .compatible = "renesas,i2c-r8a7793", .data = (void *)I2C_RCAR_GEN2 }, + { .compatible = "renesas,i2c-r8a7794", .data = (void *)I2C_RCAR_GEN2 }, {}, }; MODULE_DEVICE_TABLE(of, rcar_i2c_dt_ids); @@ -658,13 +506,11 @@ static int rcar_i2c_probe(struct platform_device *pdev) struct resource *res; struct device *dev = &pdev->dev; u32 bus_speed; - int ret; + int irq, ret; priv = devm_kzalloc(dev, sizeof(struct rcar_i2c_priv), GFP_KERNEL); - if (!priv) { - dev_err(dev, "no mem for private data\n"); + if (!priv) return -ENOMEM; - } priv->clk = devm_clk_get(dev, NULL); if (IS_ERR(priv->clk)) { @@ -692,9 +538,8 @@ static int rcar_i2c_probe(struct platform_device *pdev) if (IS_ERR(priv->io)) return PTR_ERR(priv->io); - priv->irq = platform_get_irq(pdev, 0); + irq = platform_get_irq(pdev, 0); init_waitqueue_head(&priv->wait); - spin_lock_init(&priv->lock); adap = &priv->adap; adap->nr = pdev->id; @@ -706,10 +551,10 @@ static int rcar_i2c_probe(struct platform_device *pdev) i2c_set_adapdata(adap, priv); strlcpy(adap->name, pdev->name, sizeof(adap->name)); - ret = devm_request_irq(dev, priv->irq, rcar_i2c_irq, 0, + ret = devm_request_irq(dev, irq, rcar_i2c_irq, 0, dev_name(dev), priv); if (ret < 0) { - dev_err(dev, "cannot get irq %d\n", priv->irq); + dev_err(dev, "cannot get irq %d\n", irq); return ret; } @@ -759,6 +604,6 @@ static struct platform_driver rcar_i2c_driver = { module_platform_driver(rcar_i2c_driver); -MODULE_LICENSE("GPL"); +MODULE_LICENSE("GPL v2"); MODULE_DESCRIPTION("Renesas R-Car I2C bus driver"); MODULE_AUTHOR("Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>"); diff --git a/drivers/i2c/busses/i2c-riic.c b/drivers/i2c/busses/i2c-riic.c index 9e1f8bacfb39..af3b3d032a9f 100644 --- a/drivers/i2c/busses/i2c-riic.c +++ b/drivers/i2c/busses/i2c-riic.c @@ -404,7 +404,7 @@ static int riic_i2c_remove(struct platform_device *pdev) return 0; } -static struct of_device_id riic_i2c_dt_ids[] = { +static const struct of_device_id riic_i2c_dt_ids[] = { { .compatible = "renesas,riic-rz" }, { /* Sentinel */ }, }; diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c index bb3a9964f7e0..e828a1dba0e5 100644 --- a/drivers/i2c/busses/i2c-s3c2410.c +++ b/drivers/i2c/busses/i2c-s3c2410.c @@ -1114,16 +1114,12 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev) } i2c = devm_kzalloc(&pdev->dev, sizeof(struct s3c24xx_i2c), GFP_KERNEL); - if (!i2c) { - dev_err(&pdev->dev, "no memory for state\n"); + if (!i2c) return -ENOMEM; - } i2c->pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); - if (!i2c->pdata) { - dev_err(&pdev->dev, "no memory for platform data\n"); + if (!i2c->pdata) return -ENOMEM; - } i2c->quirks = s3c24xx_get_device_quirks(pdev); if (pdata) diff --git a/drivers/i2c/busses/i2c-sh_mobile.c b/drivers/i2c/busses/i2c-sh_mobile.c index 1d79585ba4b3..8b5e79cb4468 100644 --- a/drivers/i2c/busses/i2c-sh_mobile.c +++ 
b/drivers/i2c/busses/i2c-sh_mobile.c @@ -32,6 +32,7 @@ #include <linux/clk.h> #include <linux/io.h> #include <linux/slab.h> +#include <linux/of_device.h> #include <linux/i2c/i2c-sh_mobile.h> /* Transmit operation: */ @@ -139,6 +140,10 @@ struct sh_mobile_i2c_data { bool send_stop; }; +struct sh_mobile_dt_config { + int clks_per_count; +}; + #define IIC_FLAG_HAS_ICIC67 (1 << 0) #define STANDARD_MODE 100000 @@ -194,7 +199,7 @@ static void iic_set_clr(struct sh_mobile_i2c_data *pd, int offs, iic_wr(pd, offs, (iic_rd(pd, offs) | set) & ~clr); } -static u32 sh_mobile_i2c_iccl(unsigned long count_khz, u32 tLOW, u32 tf, int offset) +static u32 sh_mobile_i2c_iccl(unsigned long count_khz, u32 tLOW, u32 tf) { /* * Conditional expression: @@ -206,10 +211,10 @@ static u32 sh_mobile_i2c_iccl(unsigned long count_khz, u32 tLOW, u32 tf, int off * account the fall time of SCL signal (tf). Default tf value * should be 0.3 us, for safety. */ - return (((count_khz * (tLOW + tf)) + 5000) / 10000) + offset; + return (((count_khz * (tLOW + tf)) + 5000) / 10000); } -static u32 sh_mobile_i2c_icch(unsigned long count_khz, u32 tHIGH, u32 tf, int offset) +static u32 sh_mobile_i2c_icch(unsigned long count_khz, u32 tHIGH, u32 tf) { /* * Conditional expression: @@ -225,52 +230,58 @@ static u32 sh_mobile_i2c_icch(unsigned long count_khz, u32 tHIGH, u32 tf, int of * to take into account the fall time of SDA signal (tf) at START * condition, in order to meet both tHIGH and tHD;STA specs. */ - return (((count_khz * (tHIGH + tf)) + 5000) / 10000) + offset; + return (((count_khz * (tHIGH + tf)) + 5000) / 10000); } -static void sh_mobile_i2c_init(struct sh_mobile_i2c_data *pd) +static int sh_mobile_i2c_init(struct sh_mobile_i2c_data *pd) { unsigned long i2c_clk_khz; u32 tHIGH, tLOW, tf; - int offset; + uint16_t max_val; /* Get clock rate after clock is enabled */ clk_prepare_enable(pd->clk); i2c_clk_khz = clk_get_rate(pd->clk) / 1000; + clk_disable_unprepare(pd->clk); i2c_clk_khz /= pd->clks_per_count; if (pd->bus_speed == STANDARD_MODE) { tLOW = 47; /* tLOW = 4.7 us */ tHIGH = 40; /* tHD;STA = tHIGH = 4.0 us */ tf = 3; /* tf = 0.3 us */ - offset = 0; /* No offset */ } else if (pd->bus_speed == FAST_MODE) { tLOW = 13; /* tLOW = 1.3 us */ tHIGH = 6; /* tHD;STA = tHIGH = 0.6 us */ tf = 3; /* tf = 0.3 us */ - offset = 0; /* No offset */ } else { dev_err(pd->dev, "unrecognized bus speed %lu Hz\n", pd->bus_speed); - goto out; + return -EINVAL; + } + + pd->iccl = sh_mobile_i2c_iccl(i2c_clk_khz, tLOW, tf); + pd->icch = sh_mobile_i2c_icch(i2c_clk_khz, tHIGH, tf); + + max_val = pd->flags & IIC_FLAG_HAS_ICIC67 ? 
0x1ff : 0xff; + if (pd->iccl > max_val || pd->icch > max_val) { + dev_err(pd->dev, "timing values out of range: L/H=0x%x/0x%x\n", + pd->iccl, pd->icch); + return -EINVAL; } - pd->iccl = sh_mobile_i2c_iccl(i2c_clk_khz, tLOW, tf, offset); /* one more bit of ICCL in ICIC */ - if ((pd->iccl > 0xff) && (pd->flags & IIC_FLAG_HAS_ICIC67)) + if (pd->iccl & 0x100) pd->icic |= ICIC_ICCLB8; else pd->icic &= ~ICIC_ICCLB8; - pd->icch = sh_mobile_i2c_icch(i2c_clk_khz, tHIGH, tf, offset); /* one more bit of ICCH in ICIC */ - if ((pd->icch > 0xff) && (pd->flags & IIC_FLAG_HAS_ICIC67)) + if (pd->icch & 0x100) pd->icic |= ICIC_ICCHB8; else pd->icic &= ~ICIC_ICCHB8; -out: - clk_disable_unprepare(pd->clk); + return 0; } static void activate_ch(struct sh_mobile_i2c_data *pd) @@ -316,7 +327,7 @@ static unsigned char i2c_op(struct sh_mobile_i2c_data *pd, switch (op) { case OP_START: /* issue start and trigger DTE interrupt */ - iic_wr(pd, ICCR, 0x94); + iic_wr(pd, ICCR, ICCR_ICE | ICCR_TRS | ICCR_BBSY); break; case OP_TX_FIRST: /* disable DTE interrupt and write data */ iic_wr(pd, ICIC, ICIC_WAITE | ICIC_ALE | ICIC_TACKE); @@ -327,10 +338,11 @@ static unsigned char i2c_op(struct sh_mobile_i2c_data *pd, break; case OP_TX_STOP: /* write data and issue a stop afterwards */ iic_wr(pd, ICDR, data); - iic_wr(pd, ICCR, pd->send_stop ? 0x90 : 0x94); + iic_wr(pd, ICCR, pd->send_stop ? ICCR_ICE | ICCR_TRS + : ICCR_ICE | ICCR_TRS | ICCR_BBSY); break; case OP_TX_TO_RX: /* select read mode */ - iic_wr(pd, ICCR, 0x81); + iic_wr(pd, ICCR, ICCR_ICE | ICCR_SCP); break; case OP_RX: /* just read data */ ret = iic_rd(pd, ICDR); @@ -338,13 +350,13 @@ static unsigned char i2c_op(struct sh_mobile_i2c_data *pd, case OP_RX_STOP: /* enable DTE interrupt, issue stop */ iic_wr(pd, ICIC, ICIC_DTEE | ICIC_WAITE | ICIC_ALE | ICIC_TACKE); - iic_wr(pd, ICCR, 0xc0); + iic_wr(pd, ICCR, ICCR_ICE | ICCR_RACK); break; case OP_RX_STOP_DATA: /* enable DTE interrupt, read data, issue stop */ iic_wr(pd, ICIC, ICIC_DTEE | ICIC_WAITE | ICIC_ALE | ICIC_TACKE); ret = iic_rd(pd, ICDR); - iic_wr(pd, ICCR, 0xc0); + iic_wr(pd, ICCR, ICCR_ICE | ICCR_RACK); break; } @@ -479,7 +491,7 @@ static int start_ch(struct sh_mobile_i2c_data *pd, struct i2c_msg *usr_msg, { if (usr_msg->len == 0 && (usr_msg->flags & I2C_M_RD)) { dev_err(pd->dev, "Unsupported zero length i2c read\n"); - return -EIO; + return -EOPNOTSUPP; } if (do_init) { @@ -514,17 +526,12 @@ static int poll_dte(struct sh_mobile_i2c_data *pd) break; if (val & ICSR_TACK) - return -EIO; + return -ENXIO; udelay(10); } - if (!i) { - dev_warn(pd->dev, "Timeout polling for DTE!\n"); - return -ETIMEDOUT; - } - - return 0; + return i ? 0 : -ETIMEDOUT; } static int poll_busy(struct sh_mobile_i2c_data *pd) @@ -542,20 +549,18 @@ static int poll_busy(struct sh_mobile_i2c_data *pd) */ if (!(val & ICSR_BUSY)) { /* handle missing acknowledge and arbitration lost */ - if ((val | pd->sr) & (ICSR_TACK | ICSR_AL)) - return -EIO; + val |= pd->sr; + if (val & ICSR_TACK) + return -ENXIO; + if (val & ICSR_AL) + return -EAGAIN; break; } udelay(10); } - if (!i) { - dev_err(pd->dev, "Polling timed out\n"); - return -ETIMEDOUT; - } - - return 0; + return i ? 
0 : -ETIMEDOUT; } static int sh_mobile_i2c_xfer(struct i2c_adapter *adapter, @@ -617,42 +622,44 @@ static struct i2c_algorithm sh_mobile_i2c_algorithm = { .master_xfer = sh_mobile_i2c_xfer, }; -static int sh_mobile_i2c_hook_irqs(struct platform_device *dev, int hook) +static const struct sh_mobile_dt_config default_dt_config = { + .clks_per_count = 1, +}; + +static const struct sh_mobile_dt_config rcar_gen2_dt_config = { + .clks_per_count = 2, +}; + +static const struct of_device_id sh_mobile_i2c_dt_ids[] = { + { .compatible = "renesas,rmobile-iic", .data = &default_dt_config }, + { .compatible = "renesas,iic-r8a7790", .data = &rcar_gen2_dt_config }, + { .compatible = "renesas,iic-r8a7791", .data = &rcar_gen2_dt_config }, + { .compatible = "renesas,iic-r8a7792", .data = &rcar_gen2_dt_config }, + { .compatible = "renesas,iic-r8a7793", .data = &rcar_gen2_dt_config }, + { .compatible = "renesas,iic-r8a7794", .data = &rcar_gen2_dt_config }, + {}, +}; +MODULE_DEVICE_TABLE(of, sh_mobile_i2c_dt_ids); + +static int sh_mobile_i2c_hook_irqs(struct platform_device *dev) { struct resource *res; - int ret = -ENXIO; - int n, k = 0; + resource_size_t n; + int k = 0, ret; while ((res = platform_get_resource(dev, IORESOURCE_IRQ, k))) { - for (n = res->start; hook && n <= res->end; n++) { - if (request_irq(n, sh_mobile_i2c_isr, 0, - dev_name(&dev->dev), dev)) { - for (n--; n >= res->start; n--) - free_irq(n, dev); - - goto rollback; + for (n = res->start; n <= res->end; n++) { + ret = devm_request_irq(&dev->dev, n, sh_mobile_i2c_isr, + 0, dev_name(&dev->dev), dev); + if (ret) { + dev_err(&dev->dev, "cannot request IRQ %pa\n", &n); + return ret; } } k++; } - if (hook) - return k > 0 ? 0 : -ENOENT; - - ret = 0; - - rollback: - k--; - - while (k >= 0) { - res = platform_get_resource(dev, IORESOURCE_IRQ, k); - for (n = res->start; n <= res->end; n++) - free_irq(n, dev); - - k--; - } - - return ret; + return k > 0 ? 
0 : -ENOENT; } static int sh_mobile_i2c_probe(struct platform_device *dev) @@ -661,62 +668,64 @@ static int sh_mobile_i2c_probe(struct platform_device *dev) struct sh_mobile_i2c_data *pd; struct i2c_adapter *adap; struct resource *res; - int size; int ret; + u32 bus_speed; - pd = kzalloc(sizeof(struct sh_mobile_i2c_data), GFP_KERNEL); - if (pd == NULL) { - dev_err(&dev->dev, "cannot allocate private data\n"); + pd = devm_kzalloc(&dev->dev, sizeof(struct sh_mobile_i2c_data), GFP_KERNEL); + if (!pd) return -ENOMEM; - } - pd->clk = clk_get(&dev->dev, NULL); + pd->clk = devm_clk_get(&dev->dev, NULL); if (IS_ERR(pd->clk)) { dev_err(&dev->dev, "cannot get clock\n"); - ret = PTR_ERR(pd->clk); - goto err; + return PTR_ERR(pd->clk); } - ret = sh_mobile_i2c_hook_irqs(dev, 1); - if (ret) { - dev_err(&dev->dev, "cannot request IRQ\n"); - goto err_clk; - } + ret = sh_mobile_i2c_hook_irqs(dev); + if (ret) + return ret; pd->dev = &dev->dev; platform_set_drvdata(dev, pd); res = platform_get_resource(dev, IORESOURCE_MEM, 0); - if (res == NULL) { - dev_err(&dev->dev, "cannot find IO resource\n"); - ret = -ENOENT; - goto err_irq; - } - - size = resource_size(res); - pd->reg = ioremap(res->start, size); - if (pd->reg == NULL) { - dev_err(&dev->dev, "cannot map IO\n"); - ret = -ENXIO; - goto err_irq; - } + pd->reg = devm_ioremap_resource(&dev->dev, res); + if (IS_ERR(pd->reg)) + return PTR_ERR(pd->reg); /* Use platform data bus speed or STANDARD_MODE */ - pd->bus_speed = STANDARD_MODE; - if (pdata && pdata->bus_speed) - pd->bus_speed = pdata->bus_speed; + ret = of_property_read_u32(dev->dev.of_node, "clock-frequency", &bus_speed); + pd->bus_speed = ret ? STANDARD_MODE : bus_speed; + pd->clks_per_count = 1; - if (pdata && pdata->clks_per_count) - pd->clks_per_count = pdata->clks_per_count; + + if (dev->dev.of_node) { + const struct of_device_id *match; + + match = of_match_device(sh_mobile_i2c_dt_ids, &dev->dev); + if (match) { + const struct sh_mobile_dt_config *config; + + config = match->data; + pd->clks_per_count = config->clks_per_count; + } + } else { + if (pdata && pdata->bus_speed) + pd->bus_speed = pdata->bus_speed; + if (pdata && pdata->clks_per_count) + pd->clks_per_count = pdata->clks_per_count; + } /* The IIC blocks on SH-Mobile ARM processors * come with two new bits in ICIC. */ - if (size > 0x17) + if (resource_size(res) > 0x17) pd->flags |= IIC_FLAG_HAS_ICIC67; - sh_mobile_i2c_init(pd); + ret = sh_mobile_i2c_init(pd); + if (ret) + return ret; /* Enable Runtime PM for this device. 
* @@ -750,24 +759,14 @@ static int sh_mobile_i2c_probe(struct platform_device *dev) ret = i2c_add_numbered_adapter(adap); if (ret < 0) { dev_err(&dev->dev, "cannot add numbered adapter\n"); - goto err_all; + return ret; } dev_info(&dev->dev, - "I2C adapter %d with bus speed %lu Hz (L/H=%x/%x)\n", + "I2C adapter %d with bus speed %lu Hz (L/H=0x%x/0x%x)\n", adap->nr, pd->bus_speed, pd->iccl, pd->icch); return 0; - - err_all: - iounmap(pd->reg); - err_irq: - sh_mobile_i2c_hook_irqs(dev, 0); - err_clk: - clk_put(pd->clk); - err: - kfree(pd); - return ret; } static int sh_mobile_i2c_remove(struct platform_device *dev) @@ -775,11 +774,7 @@ static int sh_mobile_i2c_remove(struct platform_device *dev) struct sh_mobile_i2c_data *pd = platform_get_drvdata(dev); i2c_del_adapter(&pd->adap); - iounmap(pd->reg); - sh_mobile_i2c_hook_irqs(dev, 0); - clk_put(pd->clk); pm_runtime_disable(&dev->dev); - kfree(pd); return 0; } @@ -800,12 +795,6 @@ static const struct dev_pm_ops sh_mobile_i2c_dev_pm_ops = { .runtime_resume = sh_mobile_i2c_runtime_nop, }; -static const struct of_device_id sh_mobile_i2c_dt_ids[] = { - { .compatible = "renesas,rmobile-iic", }, - {}, -}; -MODULE_DEVICE_TABLE(of, sh_mobile_i2c_dt_ids); - static struct platform_driver sh_mobile_i2c_driver = { .driver = { .name = "i2c-sh_mobile", diff --git a/drivers/i2c/busses/i2c-simtec.c b/drivers/i2c/busses/i2c-simtec.c index 294c80f21d65..964e5c6f84ab 100644 --- a/drivers/i2c/busses/i2c-simtec.c +++ b/drivers/i2c/busses/i2c-simtec.c @@ -77,10 +77,8 @@ static int simtec_i2c_probe(struct platform_device *dev) int ret; pd = kzalloc(sizeof(struct simtec_i2c_data), GFP_KERNEL); - if (pd == NULL) { - dev_err(&dev->dev, "cannot allocate private data\n"); + if (pd == NULL) return -ENOMEM; - } platform_set_drvdata(dev, pd); diff --git a/drivers/i2c/busses/i2c-sirf.c b/drivers/i2c/busses/i2c-sirf.c index 8e3be7ed0586..a3216defc1d3 100644 --- a/drivers/i2c/busses/i2c-sirf.c +++ b/drivers/i2c/busses/i2c-sirf.c @@ -307,7 +307,6 @@ static int i2c_sirfsoc_probe(struct platform_device *pdev) siic = devm_kzalloc(&pdev->dev, sizeof(*siic), GFP_KERNEL); if (!siic) { - dev_err(&pdev->dev, "Can't allocate driver data\n"); err = -ENOMEM; goto out; } diff --git a/drivers/i2c/busses/i2c-st.c b/drivers/i2c/busses/i2c-st.c index 872016196ef3..95b947670386 100644 --- a/drivers/i2c/busses/i2c-st.c +++ b/drivers/i2c/busses/i2c-st.c @@ -847,7 +847,7 @@ static int st_i2c_remove(struct platform_device *pdev) return 0; } -static struct of_device_id st_i2c_match[] = { +static const struct of_device_id st_i2c_match[] = { { .compatible = "st,comms-ssc-i2c", }, { .compatible = "st,comms-ssc4-i2c", }, {}, diff --git a/drivers/i2c/busses/i2c-stu300.c b/drivers/i2c/busses/i2c-stu300.c index 29b1fb778943..fefb1c19ec1d 100644 --- a/drivers/i2c/busses/i2c-stu300.c +++ b/drivers/i2c/busses/i2c-stu300.c @@ -868,10 +868,8 @@ static int stu300_probe(struct platform_device *pdev) int ret = 0; dev = devm_kzalloc(&pdev->dev, sizeof(struct stu300_dev), GFP_KERNEL); - if (!dev) { - dev_err(&pdev->dev, "could not allocate device struct\n"); + if (!dev) return -ENOMEM; - } bus_nr = pdev->id; dev->clk = devm_clk_get(&pdev->dev, NULL); diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c index 00f04cb5b4eb..f1bb2fc06791 100644 --- a/drivers/i2c/busses/i2c-tegra.c +++ b/drivers/i2c/busses/i2c-tegra.c @@ -732,10 +732,8 @@ static int tegra_i2c_probe(struct platform_device *pdev) } i2c_dev = devm_kzalloc(&pdev->dev, sizeof(*i2c_dev), GFP_KERNEL); - if (!i2c_dev) { - 
dev_err(&pdev->dev, "Could not allocate struct tegra_i2c_dev"); + if (!i2c_dev) return -ENOMEM; - } i2c_dev->base = base; i2c_dev->div_clk = div_clk; diff --git a/drivers/i2c/busses/i2c-wmt.c b/drivers/i2c/busses/i2c-wmt.c index 2c8a3e4f9008..f80a38c2072c 100644 --- a/drivers/i2c/busses/i2c-wmt.c +++ b/drivers/i2c/busses/i2c-wmt.c @@ -379,10 +379,8 @@ static int wmt_i2c_probe(struct platform_device *pdev) u32 clk_rate; i2c_dev = devm_kzalloc(&pdev->dev, sizeof(*i2c_dev), GFP_KERNEL); - if (!i2c_dev) { - dev_err(&pdev->dev, "device memory allocation failed\n"); + if (!i2c_dev) return -ENOMEM; - } res = platform_get_resource(pdev, IORESOURCE_MEM, 0); i2c_dev->base = devm_ioremap_resource(&pdev->dev, res); @@ -454,7 +452,7 @@ static int wmt_i2c_remove(struct platform_device *pdev) return 0; } -static struct of_device_id wmt_i2c_dt_ids[] = { +static const struct of_device_id wmt_i2c_dt_ids[] = { { .compatible = "wm,wm8505-i2c" }, { /* Sentinel */ }, }; diff --git a/drivers/i2c/busses/scx200_acb.c b/drivers/i2c/busses/scx200_acb.c index cb66f9586f76..ff3f5747e43b 100644 --- a/drivers/i2c/busses/scx200_acb.c +++ b/drivers/i2c/busses/scx200_acb.c @@ -431,10 +431,8 @@ static struct scx200_acb_iface *scx200_create_iface(const char *text, struct i2c_adapter *adapter; iface = kzalloc(sizeof(*iface), GFP_KERNEL); - if (!iface) { - pr_err("can't allocate memory\n"); + if (!iface) return NULL; - } adapter = &iface->adapter; i2c_set_adapdata(adapter, iface); diff --git a/drivers/i2c/muxes/i2c-mux-pca954x.c b/drivers/i2c/muxes/i2c-mux-pca954x.c index 550bd36aa5d6..9bd4212782ab 100644 --- a/drivers/i2c/muxes/i2c-mux-pca954x.c +++ b/drivers/i2c/muxes/i2c-mux-pca954x.c @@ -36,12 +36,11 @@ */ #include <linux/device.h> -#include <linux/gpio.h> +#include <linux/gpio/consumer.h> #include <linux/i2c.h> #include <linux/i2c-mux.h> #include <linux/i2c/pca954x.h> #include <linux/module.h> -#include <linux/of_gpio.h> #include <linux/slab.h> #define PCA954X_MAX_NCHANS 8 @@ -186,7 +185,7 @@ static int pca954x_probe(struct i2c_client *client, { struct i2c_adapter *adap = to_i2c_adapter(client->dev.parent); struct pca954x_platform_data *pdata = dev_get_platdata(&client->dev); - struct device_node *np = client->dev.of_node; + struct gpio_desc *gpio; int num, force, class; struct pca954x *data; int ret; @@ -200,21 +199,10 @@ static int pca954x_probe(struct i2c_client *client, i2c_set_clientdata(client, data); - if (IS_ENABLED(CONFIG_OF) && np) { - enum of_gpio_flags flags; - int gpio; - - /* Get the mux out of reset if a reset GPIO is specified. */ - gpio = of_get_named_gpio_flags(np, "reset-gpio", 0, &flags); - if (gpio_is_valid(gpio)) { - ret = devm_gpio_request_one(&client->dev, gpio, - flags & OF_GPIO_ACTIVE_LOW ? - GPIOF_OUT_INIT_HIGH : GPIOF_OUT_INIT_LOW, - "pca954x reset"); - if (ret < 0) - return ret; - } - } + /* Get the mux out of reset if a reset GPIO is specified. */ + gpio = devm_gpiod_get(&client->dev, "reset"); + if (!IS_ERR(gpio)) + gpiod_direction_output(gpio, 0); /* Write the mux register at addr to verify * that the mux is in fact present. 
This also diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c index a1710465faaf..b9d647468b99 100644 --- a/drivers/infiniband/ulp/isert/ib_isert.c +++ b/drivers/infiniband/ulp/isert/ib_isert.c @@ -1210,6 +1210,8 @@ sequence_cmd: if (!rc && dump_payload == false && unsol_data) iscsit_set_unsoliticed_dataout(cmd); + else if (dump_payload && imm_data) + target_put_sess_cmd(conn->sess->se_sess, &cmd->se_cmd); return 0; } diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index df56e4c74a7e..d260605e6d5f 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -178,13 +178,13 @@ config TEGRA_IOMMU_SMMU config EXYNOS_IOMMU bool "Exynos IOMMU Support" - depends on ARCH_EXYNOS && EXYNOS_DEV_SYSMMU + depends on ARCH_EXYNOS select IOMMU_API help - Support for the IOMMU(System MMU) of Samsung Exynos application - processor family. This enables H/W multimedia accellerators to see - non-linear physical memory chunks as a linear memory in their - address spaces + Support for the IOMMU (System MMU) of Samsung Exynos application + processor family. This enables H/W multimedia accelerators to see + non-linear physical memory chunks as linear memory in their + address space. If unsure, say N here. @@ -193,9 +193,9 @@ config EXYNOS_IOMMU_DEBUG depends on EXYNOS_IOMMU help Select this to see the detailed log message that shows what - happens in the IOMMU driver + happens in the IOMMU driver. - Say N unless you need kernel log message for IOMMU debugging + Say N unless you need kernel log message for IOMMU debugging. config SHMOBILE_IPMMU bool @@ -272,6 +272,18 @@ config SHMOBILE_IOMMU_L1SIZE default 256 if SHMOBILE_IOMMU_ADDRSIZE_64MB default 128 if SHMOBILE_IOMMU_ADDRSIZE_32MB +config IPMMU_VMSA + bool "Renesas VMSA-compatible IPMMU" + depends on ARM_LPAE + depends on ARCH_SHMOBILE || COMPILE_TEST + select IOMMU_API + select ARM_DMA_USE_IOMMU + help + Support for the Renesas VMSA-compatible IPMMU Renesas found in the + R-Mobile APE6 and R-Car H2/M2 SoCs. + + If unsure, say N. 
+ config SPAPR_TCE_IOMMU bool "sPAPR TCE IOMMU Support" depends on PPC_POWERNV || PPC_PSERIES diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile index 5d58bf16e9e3..8893bad048e0 100644 --- a/drivers/iommu/Makefile +++ b/drivers/iommu/Makefile @@ -7,6 +7,7 @@ obj-$(CONFIG_AMD_IOMMU_V2) += amd_iommu_v2.o obj-$(CONFIG_ARM_SMMU) += arm-smmu.o obj-$(CONFIG_DMAR_TABLE) += dmar.o obj-$(CONFIG_INTEL_IOMMU) += iova.o intel-iommu.o +obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o obj-$(CONFIG_IRQ_REMAP) += intel_irq_remapping.o irq_remapping.o obj-$(CONFIG_OMAP_IOMMU) += omap-iommu.o obj-$(CONFIG_OMAP_IOMMU) += omap-iommu2.o diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c index 57068e8035b5..4aec6a29e316 100644 --- a/drivers/iommu/amd_iommu.c +++ b/drivers/iommu/amd_iommu.c @@ -3499,8 +3499,6 @@ int __init amd_iommu_init_passthrough(void) { struct iommu_dev_data *dev_data; struct pci_dev *dev = NULL; - struct amd_iommu *iommu; - u16 devid; int ret; ret = alloc_passthrough_domain(); @@ -3514,12 +3512,6 @@ int __init amd_iommu_init_passthrough(void) dev_data = get_dev_data(&dev->dev); dev_data->passthrough = true; - devid = get_device_id(&dev->dev); - - iommu = amd_iommu_rlookup_table[devid]; - if (!iommu) - continue; - attach_device(&dev->dev, pt_domain); } diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c index 203b2e6a91cf..d4daa05efe60 100644 --- a/drivers/iommu/amd_iommu_v2.c +++ b/drivers/iommu/amd_iommu_v2.c @@ -45,6 +45,8 @@ struct pri_queue { struct pasid_state { struct list_head list; /* For global state-list */ atomic_t count; /* Reference count */ + atomic_t mmu_notifier_count; /* Counting nested mmu_notifier + calls */ struct task_struct *task; /* Task bound to this PASID */ struct mm_struct *mm; /* mm_struct for the faults */ struct mmu_notifier mn; /* mmu_otifier handle */ @@ -56,6 +58,8 @@ struct pasid_state { }; struct device_state { + struct list_head list; + u16 devid; atomic_t count; struct pci_dev *pdev; struct pasid_state **states; @@ -81,13 +85,9 @@ struct fault { u16 flags; }; -static struct device_state **state_table; +static LIST_HEAD(state_list); static spinlock_t state_lock; -/* List and lock for all pasid_states */ -static LIST_HEAD(pasid_state_list); -static DEFINE_SPINLOCK(ps_lock); - static struct workqueue_struct *iommu_wq; /* @@ -99,7 +99,6 @@ static u64 *empty_page_table; static void free_pasid_states(struct device_state *dev_state); static void unbind_pasid(struct device_state *dev_state, int pasid); -static int task_exit(struct notifier_block *nb, unsigned long e, void *data); static u16 device_id(struct pci_dev *pdev) { @@ -111,13 +110,25 @@ static u16 device_id(struct pci_dev *pdev) return devid; } +static struct device_state *__get_device_state(u16 devid) +{ + struct device_state *dev_state; + + list_for_each_entry(dev_state, &state_list, list) { + if (dev_state->devid == devid) + return dev_state; + } + + return NULL; +} + static struct device_state *get_device_state(u16 devid) { struct device_state *dev_state; unsigned long flags; spin_lock_irqsave(&state_lock, flags); - dev_state = state_table[devid]; + dev_state = __get_device_state(devid); if (dev_state != NULL) atomic_inc(&dev_state->count); spin_unlock_irqrestore(&state_lock, flags); @@ -158,29 +169,6 @@ static void put_device_state_wait(struct device_state *dev_state) free_device_state(dev_state); } -static struct notifier_block profile_nb = { - .notifier_call = task_exit, -}; - -static void link_pasid_state(struct pasid_state *pasid_state) -{ - 
spin_lock(&ps_lock); - list_add_tail(&pasid_state->list, &pasid_state_list); - spin_unlock(&ps_lock); -} - -static void __unlink_pasid_state(struct pasid_state *pasid_state) -{ - list_del(&pasid_state->list); -} - -static void unlink_pasid_state(struct pasid_state *pasid_state) -{ - spin_lock(&ps_lock); - __unlink_pasid_state(pasid_state); - spin_unlock(&ps_lock); -} - /* Must be called under dev_state->lock */ static struct pasid_state **__get_pasid_state_ptr(struct device_state *dev_state, int pasid, bool alloc) @@ -337,7 +325,6 @@ static void unbind_pasid(struct device_state *dev_state, int pasid) if (pasid_state == NULL) return; - unlink_pasid_state(pasid_state); __unbind_pasid(pasid_state); put_pasid_state_wait(pasid_state); /* Reference taken in this function */ } @@ -379,7 +366,12 @@ static void free_pasid_states(struct device_state *dev_state) continue; put_pasid_state(pasid_state); - unbind_pasid(dev_state, i); + + /* + * This will call the mn_release function and + * unbind the PASID + */ + mmu_notifier_unregister(&pasid_state->mn, pasid_state->mm); } if (dev_state->pasid_levels == 2) @@ -443,8 +435,11 @@ static void mn_invalidate_range_start(struct mmu_notifier *mn, pasid_state = mn_to_state(mn); dev_state = pasid_state->device_state; - amd_iommu_domain_set_gcr3(dev_state->domain, pasid_state->pasid, - __pa(empty_page_table)); + if (atomic_add_return(1, &pasid_state->mmu_notifier_count) == 1) { + amd_iommu_domain_set_gcr3(dev_state->domain, + pasid_state->pasid, + __pa(empty_page_table)); + } } static void mn_invalidate_range_end(struct mmu_notifier *mn, @@ -457,11 +452,31 @@ static void mn_invalidate_range_end(struct mmu_notifier *mn, pasid_state = mn_to_state(mn); dev_state = pasid_state->device_state; - amd_iommu_domain_set_gcr3(dev_state->domain, pasid_state->pasid, - __pa(pasid_state->mm->pgd)); + if (atomic_dec_and_test(&pasid_state->mmu_notifier_count)) { + amd_iommu_domain_set_gcr3(dev_state->domain, + pasid_state->pasid, + __pa(pasid_state->mm->pgd)); + } +} + +static void mn_release(struct mmu_notifier *mn, struct mm_struct *mm) +{ + struct pasid_state *pasid_state; + struct device_state *dev_state; + + might_sleep(); + + pasid_state = mn_to_state(mn); + dev_state = pasid_state->device_state; + + if (pasid_state->device_state->inv_ctx_cb) + dev_state->inv_ctx_cb(dev_state->pdev, pasid_state->pasid); + + unbind_pasid(dev_state, pasid_state->pasid); } static struct mmu_notifier_ops iommu_mn = { + .release = mn_release, .clear_flush_young = mn_clear_flush_young, .change_pte = mn_change_pte, .invalidate_page = mn_invalidate_page, @@ -606,53 +621,6 @@ static struct notifier_block ppr_nb = { .notifier_call = ppr_notifier, }; -static int task_exit(struct notifier_block *nb, unsigned long e, void *data) -{ - struct pasid_state *pasid_state; - struct task_struct *task; - - task = data; - - /* - * Using this notifier is a hack - but there is no other choice - * at the moment. What I really want is a sleeping notifier that - * is called when an MM goes down. But such a notifier doesn't - * exist yet. The notifier needs to sleep because it has to make - * sure that the device does not use the PASID and the address - * space anymore before it is destroyed. This includes waiting - * for pending PRI requests to pass the workqueue. The - * MMU-Notifiers would be a good fit, but they use RCU and so - * they are not allowed to sleep. Lets see how we can solve this - * in a more intelligent way in the future. 
- */ -again: - spin_lock(&ps_lock); - list_for_each_entry(pasid_state, &pasid_state_list, list) { - struct device_state *dev_state; - int pasid; - - if (pasid_state->task != task) - continue; - - /* Drop Lock and unbind */ - spin_unlock(&ps_lock); - - dev_state = pasid_state->device_state; - pasid = pasid_state->pasid; - - if (pasid_state->device_state->inv_ctx_cb) - dev_state->inv_ctx_cb(dev_state->pdev, pasid); - - unbind_pasid(dev_state, pasid); - - /* Task may be in the list multiple times */ - goto again; - } - spin_unlock(&ps_lock); - - return NOTIFY_OK; -} - int amd_iommu_bind_pasid(struct pci_dev *pdev, int pasid, struct task_struct *task) { @@ -682,6 +650,7 @@ int amd_iommu_bind_pasid(struct pci_dev *pdev, int pasid, goto out; atomic_set(&pasid_state->count, 1); + atomic_set(&pasid_state->mmu_notifier_count, 0); init_waitqueue_head(&pasid_state->wq); spin_lock_init(&pasid_state->lock); @@ -705,8 +674,6 @@ int amd_iommu_bind_pasid(struct pci_dev *pdev, int pasid, if (ret) goto out_clear_state; - link_pasid_state(pasid_state); - return 0; out_clear_state: @@ -727,6 +694,7 @@ EXPORT_SYMBOL(amd_iommu_bind_pasid); void amd_iommu_unbind_pasid(struct pci_dev *pdev, int pasid) { + struct pasid_state *pasid_state; struct device_state *dev_state; u16 devid; @@ -743,7 +711,17 @@ void amd_iommu_unbind_pasid(struct pci_dev *pdev, int pasid) if (pasid < 0 || pasid >= dev_state->max_pasids) goto out; - unbind_pasid(dev_state, pasid); + pasid_state = get_pasid_state(dev_state, pasid); + if (pasid_state == NULL) + goto out; + /* + * Drop reference taken here. We are safe because we still hold + * the reference taken in the amd_iommu_bind_pasid function. + */ + put_pasid_state(pasid_state); + + /* This will call the mn_release function and unbind the PASID */ + mmu_notifier_unregister(&pasid_state->mn, pasid_state->mm); out: put_device_state(dev_state); @@ -773,7 +751,8 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids) spin_lock_init(&dev_state->lock); init_waitqueue_head(&dev_state->wq); - dev_state->pdev = pdev; + dev_state->pdev = pdev; + dev_state->devid = devid; tmp = pasids; for (dev_state->pasid_levels = 0; (tmp - 1) & ~0x1ff; tmp >>= 9) @@ -803,13 +782,13 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids) spin_lock_irqsave(&state_lock, flags); - if (state_table[devid] != NULL) { + if (__get_device_state(devid) != NULL) { spin_unlock_irqrestore(&state_lock, flags); ret = -EBUSY; goto out_free_domain; } - state_table[devid] = dev_state; + list_add_tail(&dev_state->list, &state_list); spin_unlock_irqrestore(&state_lock, flags); @@ -841,13 +820,13 @@ void amd_iommu_free_device(struct pci_dev *pdev) spin_lock_irqsave(&state_lock, flags); - dev_state = state_table[devid]; + dev_state = __get_device_state(devid); if (dev_state == NULL) { spin_unlock_irqrestore(&state_lock, flags); return; } - state_table[devid] = NULL; + list_del(&dev_state->list); spin_unlock_irqrestore(&state_lock, flags); @@ -874,7 +853,7 @@ int amd_iommu_set_invalid_ppr_cb(struct pci_dev *pdev, spin_lock_irqsave(&state_lock, flags); ret = -EINVAL; - dev_state = state_table[devid]; + dev_state = __get_device_state(devid); if (dev_state == NULL) goto out_unlock; @@ -905,7 +884,7 @@ int amd_iommu_set_invalidate_ctx_cb(struct pci_dev *pdev, spin_lock_irqsave(&state_lock, flags); ret = -EINVAL; - dev_state = state_table[devid]; + dev_state = __get_device_state(devid); if (dev_state == NULL) goto out_unlock; @@ -922,7 +901,6 @@ EXPORT_SYMBOL(amd_iommu_set_invalidate_ctx_cb); static int __init 
amd_iommu_v2_init(void) { - size_t state_table_size; int ret; pr_info("AMD IOMMUv2 driver by Joerg Roedel <joerg.roedel@amd.com>\n"); @@ -938,16 +916,10 @@ static int __init amd_iommu_v2_init(void) spin_lock_init(&state_lock); - state_table_size = MAX_DEVICES * sizeof(struct device_state *); - state_table = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, - get_order(state_table_size)); - if (state_table == NULL) - return -ENOMEM; - ret = -ENOMEM; iommu_wq = create_workqueue("amd_iommu_v2"); if (iommu_wq == NULL) - goto out_free; + goto out; ret = -ENOMEM; empty_page_table = (u64 *)get_zeroed_page(GFP_KERNEL); @@ -955,29 +927,24 @@ static int __init amd_iommu_v2_init(void) goto out_destroy_wq; amd_iommu_register_ppr_notifier(&ppr_nb); - profile_event_register(PROFILE_TASK_EXIT, &profile_nb); return 0; out_destroy_wq: destroy_workqueue(iommu_wq); -out_free: - free_pages((unsigned long)state_table, get_order(state_table_size)); - +out: return ret; } static void __exit amd_iommu_v2_exit(void) { struct device_state *dev_state; - size_t state_table_size; int i; if (!amd_iommu_v2_supported()) return; - profile_event_unregister(PROFILE_TASK_EXIT, &profile_nb); amd_iommu_unregister_ppr_notifier(&ppr_nb); flush_workqueue(iommu_wq); @@ -1000,9 +967,6 @@ static void __exit amd_iommu_v2_exit(void) destroy_workqueue(iommu_wq); - state_table_size = MAX_DEVICES * sizeof(struct device_state *); - free_pages((unsigned long)state_table, get_order(state_table_size)); - free_page((unsigned long)empty_page_table); } diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index 647c3c7fd742..1599354e974d 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -1167,7 +1167,7 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain, for (i = 0; i < master->num_streamids; ++i) { u32 idx, s2cr; idx = master->smrs ? master->smrs[i].idx : master->streamids[i]; - s2cr = (S2CR_TYPE_TRANS << S2CR_TYPE_SHIFT) | + s2cr = S2CR_TYPE_TRANS | (smmu_domain->root_cfg.cbndx << S2CR_CBNDX_SHIFT); writel_relaxed(s2cr, gr0_base + ARM_SMMU_GR0_S2CR(idx)); } @@ -1804,7 +1804,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) * allocation (PTRS_PER_PGD). 
*/ #ifdef CONFIG_64BIT - smmu->s1_output_size = min(39UL, size); + smmu->s1_output_size = min((unsigned long)VA_BITS, size); #else smmu->s1_output_size = min(32UL, size); #endif diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c index 685f9263cfee..99054d2c040d 100644 --- a/drivers/iommu/exynos-iommu.c +++ b/drivers/iommu/exynos-iommu.c @@ -29,7 +29,8 @@ #include <asm/cacheflush.h> #include <asm/pgtable.h> -#include <mach/sysmmu.h> +typedef u32 sysmmu_iova_t; +typedef u32 sysmmu_pte_t; /* We does not consider super section mapping (16MB) */ #define SECT_ORDER 20 @@ -44,28 +45,44 @@ #define LPAGE_MASK (~(LPAGE_SIZE - 1)) #define SPAGE_MASK (~(SPAGE_SIZE - 1)) -#define lv1ent_fault(sent) (((*(sent) & 3) == 0) || ((*(sent) & 3) == 3)) -#define lv1ent_page(sent) ((*(sent) & 3) == 1) +#define lv1ent_fault(sent) ((*(sent) == ZERO_LV2LINK) || \ + ((*(sent) & 3) == 0) || ((*(sent) & 3) == 3)) +#define lv1ent_zero(sent) (*(sent) == ZERO_LV2LINK) +#define lv1ent_page_zero(sent) ((*(sent) & 3) == 1) +#define lv1ent_page(sent) ((*(sent) != ZERO_LV2LINK) && \ + ((*(sent) & 3) == 1)) #define lv1ent_section(sent) ((*(sent) & 3) == 2) #define lv2ent_fault(pent) ((*(pent) & 3) == 0) #define lv2ent_small(pent) ((*(pent) & 2) == 2) #define lv2ent_large(pent) ((*(pent) & 3) == 1) +static u32 sysmmu_page_offset(sysmmu_iova_t iova, u32 size) +{ + return iova & (size - 1); +} + #define section_phys(sent) (*(sent) & SECT_MASK) -#define section_offs(iova) ((iova) & 0xFFFFF) +#define section_offs(iova) sysmmu_page_offset((iova), SECT_SIZE) #define lpage_phys(pent) (*(pent) & LPAGE_MASK) -#define lpage_offs(iova) ((iova) & 0xFFFF) +#define lpage_offs(iova) sysmmu_page_offset((iova), LPAGE_SIZE) #define spage_phys(pent) (*(pent) & SPAGE_MASK) -#define spage_offs(iova) ((iova) & 0xFFF) - -#define lv1ent_offset(iova) ((iova) >> SECT_ORDER) -#define lv2ent_offset(iova) (((iova) & 0xFF000) >> SPAGE_ORDER) +#define spage_offs(iova) sysmmu_page_offset((iova), SPAGE_SIZE) #define NUM_LV1ENTRIES 4096 -#define NUM_LV2ENTRIES 256 +#define NUM_LV2ENTRIES (SECT_SIZE / SPAGE_SIZE) + +static u32 lv1ent_offset(sysmmu_iova_t iova) +{ + return iova >> SECT_ORDER; +} + +static u32 lv2ent_offset(sysmmu_iova_t iova) +{ + return (iova >> SPAGE_ORDER) & (NUM_LV2ENTRIES - 1); +} -#define LV2TABLE_SIZE (NUM_LV2ENTRIES * sizeof(long)) +#define LV2TABLE_SIZE (NUM_LV2ENTRIES * sizeof(sysmmu_pte_t)) #define SPAGES_PER_LPAGE (LPAGE_SIZE / SPAGE_SIZE) @@ -80,6 +97,13 @@ #define CTRL_BLOCK 0x7 #define CTRL_DISABLE 0x0 +#define CFG_LRU 0x1 +#define CFG_QOS(n) ((n & 0xF) << 7) +#define CFG_MASK 0x0150FFFF /* Selecting bit 0-15, 20, 22 and 24 */ +#define CFG_ACGEN (1 << 24) /* System MMU 3.3 only */ +#define CFG_SYSSEL (1 << 22) /* System MMU 3.2 only */ +#define CFG_FLPDCACHE (1 << 20) /* System MMU 3.2+ only */ + #define REG_MMU_CTRL 0x000 #define REG_MMU_CFG 0x004 #define REG_MMU_STATUS 0x008 @@ -96,19 +120,32 @@ #define REG_MMU_VERSION 0x034 +#define MMU_MAJ_VER(val) ((val) >> 7) +#define MMU_MIN_VER(val) ((val) & 0x7F) +#define MMU_RAW_VER(reg) (((reg) >> 21) & ((1 << 11) - 1)) /* 11 bits */ + +#define MAKE_MMU_VER(maj, min) ((((maj) & 0xF) << 7) | ((min) & 0x7F)) + #define REG_PB0_SADDR 0x04C #define REG_PB0_EADDR 0x050 #define REG_PB1_SADDR 0x054 #define REG_PB1_EADDR 0x058 -static unsigned long *section_entry(unsigned long *pgtable, unsigned long iova) +#define has_sysmmu(dev) (dev->archdata.iommu != NULL) + +static struct kmem_cache *lv2table_kmem_cache; +static sysmmu_pte_t *zero_lv2_table; +#define ZERO_LV2LINK 
mk_lv1ent_page(virt_to_phys(zero_lv2_table)) + +static sysmmu_pte_t *section_entry(sysmmu_pte_t *pgtable, sysmmu_iova_t iova) { return pgtable + lv1ent_offset(iova); } -static unsigned long *page_entry(unsigned long *sent, unsigned long iova) +static sysmmu_pte_t *page_entry(sysmmu_pte_t *sent, sysmmu_iova_t iova) { - return (unsigned long *)__va(lv2table_base(sent)) + lv2ent_offset(iova); + return (sysmmu_pte_t *)phys_to_virt( + lv2table_base(sent)) + lv2ent_offset(iova); } enum exynos_sysmmu_inttype { @@ -124,16 +161,6 @@ enum exynos_sysmmu_inttype { SYSMMU_FAULTS_NUM }; -/* - * @itype: type of fault. - * @pgtable_base: the physical address of page table base. This is 0 if @itype - * is SYSMMU_BUSERROR. - * @fault_addr: the device (virtual) address that the System MMU tried to - * translated. This is 0 if @itype is SYSMMU_BUSERROR. - */ -typedef int (*sysmmu_fault_handler_t)(enum exynos_sysmmu_inttype itype, - unsigned long pgtable_base, unsigned long fault_addr); - static unsigned short fault_reg_offset[SYSMMU_FAULTS_NUM] = { REG_PAGE_FAULT_ADDR, REG_AR_FAULT_ADDR, @@ -157,27 +184,34 @@ static char *sysmmu_fault_name[SYSMMU_FAULTS_NUM] = { "UNKNOWN FAULT" }; +/* attached to dev.archdata.iommu of the master device */ +struct exynos_iommu_owner { + struct list_head client; /* entry of exynos_iommu_domain.clients */ + struct device *dev; + struct device *sysmmu; + struct iommu_domain *domain; + void *vmm_data; /* IO virtual memory manager's data */ + spinlock_t lock; /* Lock to preserve consistency of System MMU */ +}; + struct exynos_iommu_domain { struct list_head clients; /* list of sysmmu_drvdata.node */ - unsigned long *pgtable; /* lv1 page table, 16KB */ + sysmmu_pte_t *pgtable; /* lv1 page table, 16KB */ short *lv2entcnt; /* free lv2 entry counter for each section */ spinlock_t lock; /* lock for this structure */ spinlock_t pgtablelock; /* lock for modifying page table @ pgtable */ }; struct sysmmu_drvdata { - struct list_head node; /* entry of exynos_iommu_domain.clients */ struct device *sysmmu; /* System MMU's device descriptor */ - struct device *dev; /* Owner of system MMU */ - char *dbgname; - int nsfrs; - void __iomem **sfrbases; - struct clk *clk[2]; + struct device *master; /* Owner of system MMU */ + void __iomem *sfrbase; + struct clk *clk; + struct clk *clk_master; int activations; - rwlock_t lock; + spinlock_t lock; struct iommu_domain *domain; - sysmmu_fault_handler_t fault_handler; - unsigned long pgtable; + phys_addr_t pgtable; }; static bool set_sysmmu_active(struct sysmmu_drvdata *data) @@ -204,6 +238,11 @@ static void sysmmu_unblock(void __iomem *sfrbase) __raw_writel(CTRL_ENABLE, sfrbase + REG_MMU_CTRL); } +static unsigned int __raw_sysmmu_version(struct sysmmu_drvdata *data) +{ + return MMU_RAW_VER(__raw_readl(data->sfrbase + REG_MMU_VERSION)); +} + static bool sysmmu_block(void __iomem *sfrbase) { int i = 120; @@ -226,429 +265,428 @@ static void __sysmmu_tlb_invalidate(void __iomem *sfrbase) } static void __sysmmu_tlb_invalidate_entry(void __iomem *sfrbase, - unsigned long iova) + sysmmu_iova_t iova, unsigned int num_inv) { - __raw_writel((iova & SPAGE_MASK) | 1, sfrbase + REG_MMU_FLUSH_ENTRY); + unsigned int i; + + for (i = 0; i < num_inv; i++) { + __raw_writel((iova & SPAGE_MASK) | 1, + sfrbase + REG_MMU_FLUSH_ENTRY); + iova += SPAGE_SIZE; + } } static void __sysmmu_set_ptbase(void __iomem *sfrbase, - unsigned long pgd) + phys_addr_t pgd) { - __raw_writel(0x1, sfrbase + REG_MMU_CFG); /* 16KB LV1, LRU */ __raw_writel(pgd, sfrbase + REG_PT_BASE_ADDR); 
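As an illustrative aside (not part of the patch): the lv1ent_offset()/lv2ent_offset() helpers and the MMU_*_VER() macros added in this hunk encode two small pieces of arithmetic — how a 32-bit I/O virtual address splits across the two-level page table (1MiB first-level sections, each covering 256 second-level 4KiB slots), and how REG_MMU_VERSION packs the major/minor revision. The following standalone sketch uses only constants taken from this file; the example IOVA and register value are arbitrary, not read from hardware.

#include <stdio.h>

#define SECT_ORDER	20				/* 1MiB sections */
#define SPAGE_ORDER	12				/* 4KiB small pages */
#define NUM_LV2ENTRIES	(1 << (SECT_ORDER - SPAGE_ORDER))	/* 256 */

/* same splits as lv1ent_offset()/lv2ent_offset() above */
static unsigned int lv1_index(unsigned int iova)
{
	return iova >> SECT_ORDER;	/* index into the 4096-entry lv1 table */
}

static unsigned int lv2_index(unsigned int iova)
{
	return (iova >> SPAGE_ORDER) & (NUM_LV2ENTRIES - 1);
}

/* REG_MMU_VERSION decode, as in MMU_RAW_VER()/MMU_MAJ_VER()/MMU_MIN_VER() */
static unsigned int raw_ver(unsigned int reg)
{
	return (reg >> 21) & ((1 << 11) - 1);
}

int main(void)
{
	unsigned int iova = 0x20123abc;		/* arbitrary example address */
	unsigned int ver = raw_ver(0x30600000);	/* hypothetical register value */

	printf("lv1 %u, lv2 %u, page offset 0x%x\n", lv1_index(iova),
	       lv2_index(iova), iova & ((1 << SPAGE_ORDER) - 1));
	printf("SysMMU v%u.%u\n", ver >> 7, ver & 0x7f);	/* prints v3.3 */
	return 0;
}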
__sysmmu_tlb_invalidate(sfrbase); } -static void __sysmmu_set_prefbuf(void __iomem *sfrbase, unsigned long base, - unsigned long size, int idx) +static void show_fault_information(const char *name, + enum exynos_sysmmu_inttype itype, + phys_addr_t pgtable_base, sysmmu_iova_t fault_addr) { - __raw_writel(base, sfrbase + REG_PB0_SADDR + idx * 8); - __raw_writel(size - 1 + base, sfrbase + REG_PB0_EADDR + idx * 8); -} - -static void __set_fault_handler(struct sysmmu_drvdata *data, - sysmmu_fault_handler_t handler) -{ - unsigned long flags; - - write_lock_irqsave(&data->lock, flags); - data->fault_handler = handler; - write_unlock_irqrestore(&data->lock, flags); -} - -void exynos_sysmmu_set_fault_handler(struct device *dev, - sysmmu_fault_handler_t handler) -{ - struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu); - - __set_fault_handler(data, handler); -} - -static int default_fault_handler(enum exynos_sysmmu_inttype itype, - unsigned long pgtable_base, unsigned long fault_addr) -{ - unsigned long *ent; + sysmmu_pte_t *ent; if ((itype >= SYSMMU_FAULTS_NUM) || (itype < SYSMMU_PAGEFAULT)) itype = SYSMMU_FAULT_UNKNOWN; - pr_err("%s occurred at 0x%lx(Page table base: 0x%lx)\n", - sysmmu_fault_name[itype], fault_addr, pgtable_base); + pr_err("%s occurred at %#x by %s(Page table base: %pa)\n", + sysmmu_fault_name[itype], fault_addr, name, &pgtable_base); - ent = section_entry(__va(pgtable_base), fault_addr); - pr_err("\tLv1 entry: 0x%lx\n", *ent); + ent = section_entry(phys_to_virt(pgtable_base), fault_addr); + pr_err("\tLv1 entry: %#x\n", *ent); if (lv1ent_page(ent)) { ent = page_entry(ent, fault_addr); - pr_err("\t Lv2 entry: 0x%lx\n", *ent); + pr_err("\t Lv2 entry: %#x\n", *ent); } - - pr_err("Generating Kernel OOPS... because it is unrecoverable.\n"); - - BUG(); - - return 0; } static irqreturn_t exynos_sysmmu_irq(int irq, void *dev_id) { /* SYSMMU is in blocked when interrupt occurred. 
*/ struct sysmmu_drvdata *data = dev_id; - struct resource *irqres; - struct platform_device *pdev; enum exynos_sysmmu_inttype itype; - unsigned long addr = -1; - - int i, ret = -ENOSYS; - - read_lock(&data->lock); + sysmmu_iova_t addr = -1; + int ret = -ENOSYS; WARN_ON(!is_sysmmu_active(data)); - pdev = to_platform_device(data->sysmmu); - for (i = 0; i < (pdev->num_resources / 2); i++) { - irqres = platform_get_resource(pdev, IORESOURCE_IRQ, i); - if (irqres && ((int)irqres->start == irq)) - break; - } + spin_lock(&data->lock); + + if (!IS_ERR(data->clk_master)) + clk_enable(data->clk_master); - if (i == pdev->num_resources) { + itype = (enum exynos_sysmmu_inttype) + __ffs(__raw_readl(data->sfrbase + REG_INT_STATUS)); + if (WARN_ON(!((itype >= 0) && (itype < SYSMMU_FAULT_UNKNOWN)))) itype = SYSMMU_FAULT_UNKNOWN; + else + addr = __raw_readl(data->sfrbase + fault_reg_offset[itype]); + + if (itype == SYSMMU_FAULT_UNKNOWN) { + pr_err("%s: Fault is not occurred by System MMU '%s'!\n", + __func__, dev_name(data->sysmmu)); + pr_err("%s: Please check if IRQ is correctly configured.\n", + __func__); + BUG(); } else { - itype = (enum exynos_sysmmu_inttype) - __ffs(__raw_readl(data->sfrbases[i] + REG_INT_STATUS)); - if (WARN_ON(!((itype >= 0) && (itype < SYSMMU_FAULT_UNKNOWN)))) - itype = SYSMMU_FAULT_UNKNOWN; - else - addr = __raw_readl( - data->sfrbases[i] + fault_reg_offset[itype]); + unsigned int base = + __raw_readl(data->sfrbase + REG_PT_BASE_ADDR); + show_fault_information(dev_name(data->sysmmu), + itype, base, addr); + if (data->domain) + ret = report_iommu_fault(data->domain, + data->master, addr, itype); } - if (data->domain) - ret = report_iommu_fault(data->domain, data->dev, - addr, itype); + /* fault is not recovered by fault handler */ + BUG_ON(ret != 0); - if ((ret == -ENOSYS) && data->fault_handler) { - unsigned long base = data->pgtable; - if (itype != SYSMMU_FAULT_UNKNOWN) - base = __raw_readl( - data->sfrbases[i] + REG_PT_BASE_ADDR); - ret = data->fault_handler(itype, base, addr); - } + __raw_writel(1 << itype, data->sfrbase + REG_INT_CLEAR); - if (!ret && (itype != SYSMMU_FAULT_UNKNOWN)) - __raw_writel(1 << itype, data->sfrbases[i] + REG_INT_CLEAR); - else - dev_dbg(data->sysmmu, "(%s) %s is not handled.\n", - data->dbgname, sysmmu_fault_name[itype]); + sysmmu_unblock(data->sfrbase); - if (itype != SYSMMU_FAULT_UNKNOWN) - sysmmu_unblock(data->sfrbases[i]); + if (!IS_ERR(data->clk_master)) + clk_disable(data->clk_master); - read_unlock(&data->lock); + spin_unlock(&data->lock); return IRQ_HANDLED; } -static bool __exynos_sysmmu_disable(struct sysmmu_drvdata *data) +static void __sysmmu_disable_nocount(struct sysmmu_drvdata *data) { + if (!IS_ERR(data->clk_master)) + clk_enable(data->clk_master); + + __raw_writel(CTRL_DISABLE, data->sfrbase + REG_MMU_CTRL); + __raw_writel(0, data->sfrbase + REG_MMU_CFG); + + clk_disable(data->clk); + if (!IS_ERR(data->clk_master)) + clk_disable(data->clk_master); +} + +static bool __sysmmu_disable(struct sysmmu_drvdata *data) +{ + bool disabled; unsigned long flags; - bool disabled = false; - int i; - write_lock_irqsave(&data->lock, flags); + spin_lock_irqsave(&data->lock, flags); - if (!set_sysmmu_inactive(data)) - goto finish; + disabled = set_sysmmu_inactive(data); - for (i = 0; i < data->nsfrs; i++) - __raw_writel(CTRL_DISABLE, data->sfrbases[i] + REG_MMU_CTRL); + if (disabled) { + data->pgtable = 0; + data->domain = NULL; - if (data->clk[1]) - clk_disable(data->clk[1]); - if (data->clk[0]) - clk_disable(data->clk[0]); + 
__sysmmu_disable_nocount(data); - disabled = true; - data->pgtable = 0; - data->domain = NULL; -finish: - write_unlock_irqrestore(&data->lock, flags); + dev_dbg(data->sysmmu, "Disabled\n"); + } else { + dev_dbg(data->sysmmu, "%d times left to disable\n", + data->activations); + } - if (disabled) - dev_dbg(data->sysmmu, "(%s) Disabled\n", data->dbgname); - else - dev_dbg(data->sysmmu, "(%s) %d times left to be disabled\n", - data->dbgname, data->activations); + spin_unlock_irqrestore(&data->lock, flags); return disabled; } -/* __exynos_sysmmu_enable: Enables System MMU - * - * returns -error if an error occurred and System MMU is not enabled, - * 0 if the System MMU has been just enabled and 1 if System MMU was already - * enabled before. - */ -static int __exynos_sysmmu_enable(struct sysmmu_drvdata *data, - unsigned long pgtable, struct iommu_domain *domain) +static void __sysmmu_init_config(struct sysmmu_drvdata *data) { - int i, ret = 0; - unsigned long flags; + unsigned int cfg = CFG_LRU | CFG_QOS(15); + unsigned int ver; + + ver = __raw_sysmmu_version(data); + if (MMU_MAJ_VER(ver) == 3) { + if (MMU_MIN_VER(ver) >= 2) { + cfg |= CFG_FLPDCACHE; + if (MMU_MIN_VER(ver) == 3) { + cfg |= CFG_ACGEN; + cfg &= ~CFG_LRU; + } else { + cfg |= CFG_SYSSEL; + } + } + } - write_lock_irqsave(&data->lock, flags); + __raw_writel(cfg, data->sfrbase + REG_MMU_CFG); +} - if (!set_sysmmu_active(data)) { - if (WARN_ON(pgtable != data->pgtable)) { - ret = -EBUSY; - set_sysmmu_inactive(data); - } else { - ret = 1; - } +static void __sysmmu_enable_nocount(struct sysmmu_drvdata *data) +{ + if (!IS_ERR(data->clk_master)) + clk_enable(data->clk_master); + clk_enable(data->clk); - dev_dbg(data->sysmmu, "(%s) Already enabled\n", data->dbgname); - goto finish; - } + __raw_writel(CTRL_BLOCK, data->sfrbase + REG_MMU_CTRL); - if (data->clk[0]) - clk_enable(data->clk[0]); - if (data->clk[1]) - clk_enable(data->clk[1]); + __sysmmu_init_config(data); - data->pgtable = pgtable; + __sysmmu_set_ptbase(data->sfrbase, data->pgtable); - for (i = 0; i < data->nsfrs; i++) { - __sysmmu_set_ptbase(data->sfrbases[i], pgtable); + __raw_writel(CTRL_ENABLE, data->sfrbase + REG_MMU_CTRL); - if ((readl(data->sfrbases[i] + REG_MMU_VERSION) >> 28) == 3) { - /* System MMU version is 3.x */ - __raw_writel((1 << 12) | (2 << 28), - data->sfrbases[i] + REG_MMU_CFG); - __sysmmu_set_prefbuf(data->sfrbases[i], 0, -1, 0); - __sysmmu_set_prefbuf(data->sfrbases[i], 0, -1, 1); - } + if (!IS_ERR(data->clk_master)) + clk_disable(data->clk_master); +} + +static int __sysmmu_enable(struct sysmmu_drvdata *data, + phys_addr_t pgtable, struct iommu_domain *domain) +{ + int ret = 0; + unsigned long flags; + + spin_lock_irqsave(&data->lock, flags); + if (set_sysmmu_active(data)) { + data->pgtable = pgtable; + data->domain = domain; + + __sysmmu_enable_nocount(data); + + dev_dbg(data->sysmmu, "Enabled\n"); + } else { + ret = (pgtable == data->pgtable) ? 
1 : -EBUSY; - __raw_writel(CTRL_ENABLE, data->sfrbases[i] + REG_MMU_CTRL); + dev_dbg(data->sysmmu, "already enabled\n"); } - data->domain = domain; + if (WARN_ON(ret < 0)) + set_sysmmu_inactive(data); /* decrement count */ - dev_dbg(data->sysmmu, "(%s) Enabled\n", data->dbgname); -finish: - write_unlock_irqrestore(&data->lock, flags); + spin_unlock_irqrestore(&data->lock, flags); return ret; } -int exynos_sysmmu_enable(struct device *dev, unsigned long pgtable) +/* __exynos_sysmmu_enable: Enables System MMU + * + * returns -error if an error occurred and System MMU is not enabled, + * 0 if the System MMU has been just enabled and 1 if System MMU was already + * enabled before. + */ +static int __exynos_sysmmu_enable(struct device *dev, phys_addr_t pgtable, + struct iommu_domain *domain) { - struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu); - int ret; + int ret = 0; + unsigned long flags; + struct exynos_iommu_owner *owner = dev->archdata.iommu; + struct sysmmu_drvdata *data; - BUG_ON(!memblock_is_memory(pgtable)); + BUG_ON(!has_sysmmu(dev)); - ret = pm_runtime_get_sync(data->sysmmu); - if (ret < 0) { - dev_dbg(data->sysmmu, "(%s) Failed to enable\n", data->dbgname); - return ret; - } + spin_lock_irqsave(&owner->lock, flags); - ret = __exynos_sysmmu_enable(data, pgtable, NULL); - if (WARN_ON(ret < 0)) { - pm_runtime_put(data->sysmmu); - dev_err(data->sysmmu, - "(%s) Already enabled with page table %#lx\n", - data->dbgname, data->pgtable); - } else { - data->dev = dev; - } + data = dev_get_drvdata(owner->sysmmu); + + ret = __sysmmu_enable(data, pgtable, domain); + if (ret >= 0) + data->master = dev; + + spin_unlock_irqrestore(&owner->lock, flags); return ret; } +int exynos_sysmmu_enable(struct device *dev, phys_addr_t pgtable) +{ + BUG_ON(!memblock_is_memory(pgtable)); + + return __exynos_sysmmu_enable(dev, pgtable, NULL); +} + static bool exynos_sysmmu_disable(struct device *dev) { - struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu); - bool disabled; + unsigned long flags; + bool disabled = true; + struct exynos_iommu_owner *owner = dev->archdata.iommu; + struct sysmmu_drvdata *data; + + BUG_ON(!has_sysmmu(dev)); + + spin_lock_irqsave(&owner->lock, flags); + + data = dev_get_drvdata(owner->sysmmu); + + disabled = __sysmmu_disable(data); + if (disabled) + data->master = NULL; - disabled = __exynos_sysmmu_disable(data); - pm_runtime_put(data->sysmmu); + spin_unlock_irqrestore(&owner->lock, flags); return disabled; } -static void sysmmu_tlb_invalidate_entry(struct device *dev, unsigned long iova) +static void __sysmmu_tlb_invalidate_flpdcache(struct sysmmu_drvdata *data, + sysmmu_iova_t iova) +{ + if (__raw_sysmmu_version(data) == MAKE_MMU_VER(3, 3)) + __raw_writel(iova | 0x1, data->sfrbase + REG_MMU_FLUSH_ENTRY); +} + +static void sysmmu_tlb_invalidate_flpdcache(struct device *dev, + sysmmu_iova_t iova) { unsigned long flags; - struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu); + struct exynos_iommu_owner *owner = dev->archdata.iommu; + struct sysmmu_drvdata *data = dev_get_drvdata(owner->sysmmu); + + if (!IS_ERR(data->clk_master)) + clk_enable(data->clk_master); - read_lock_irqsave(&data->lock, flags); + spin_lock_irqsave(&data->lock, flags); + if (is_sysmmu_active(data)) + __sysmmu_tlb_invalidate_flpdcache(data, iova); + spin_unlock_irqrestore(&data->lock, flags); + if (!IS_ERR(data->clk_master)) + clk_disable(data->clk_master); +} + +static void sysmmu_tlb_invalidate_entry(struct device *dev, sysmmu_iova_t iova, + size_t size) +{ + struct 
exynos_iommu_owner *owner = dev->archdata.iommu; + unsigned long flags; + struct sysmmu_drvdata *data; + + data = dev_get_drvdata(owner->sysmmu); + + spin_lock_irqsave(&data->lock, flags); if (is_sysmmu_active(data)) { - int i; - for (i = 0; i < data->nsfrs; i++) { - if (sysmmu_block(data->sfrbases[i])) { - __sysmmu_tlb_invalidate_entry( - data->sfrbases[i], iova); - sysmmu_unblock(data->sfrbases[i]); - } + unsigned int num_inv = 1; + + if (!IS_ERR(data->clk_master)) + clk_enable(data->clk_master); + + /* + * L2TLB invalidation required + * 4KB page: 1 invalidation + * 64KB page: 16 invalidation + * 1MB page: 64 invalidation + * because it is set-associative TLB + * with 8-way and 64 sets. + * 1MB page can be cached in one of all sets. + * 64KB page can be one of 16 consecutive sets. + */ + if (MMU_MAJ_VER(__raw_sysmmu_version(data)) == 2) + num_inv = min_t(unsigned int, size / PAGE_SIZE, 64); + + if (sysmmu_block(data->sfrbase)) { + __sysmmu_tlb_invalidate_entry( + data->sfrbase, iova, num_inv); + sysmmu_unblock(data->sfrbase); } + if (!IS_ERR(data->clk_master)) + clk_disable(data->clk_master); } else { - dev_dbg(data->sysmmu, - "(%s) Disabled. Skipping invalidating TLB.\n", - data->dbgname); + dev_dbg(dev, "disabled. Skipping TLB invalidation @ %#x\n", + iova); } - - read_unlock_irqrestore(&data->lock, flags); + spin_unlock_irqrestore(&data->lock, flags); } void exynos_sysmmu_tlb_invalidate(struct device *dev) { + struct exynos_iommu_owner *owner = dev->archdata.iommu; unsigned long flags; - struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu); + struct sysmmu_drvdata *data; - read_lock_irqsave(&data->lock, flags); + data = dev_get_drvdata(owner->sysmmu); + spin_lock_irqsave(&data->lock, flags); if (is_sysmmu_active(data)) { - int i; - for (i = 0; i < data->nsfrs; i++) { - if (sysmmu_block(data->sfrbases[i])) { - __sysmmu_tlb_invalidate(data->sfrbases[i]); - sysmmu_unblock(data->sfrbases[i]); - } + if (!IS_ERR(data->clk_master)) + clk_enable(data->clk_master); + if (sysmmu_block(data->sfrbase)) { + __sysmmu_tlb_invalidate(data->sfrbase); + sysmmu_unblock(data->sfrbase); } + if (!IS_ERR(data->clk_master)) + clk_disable(data->clk_master); } else { - dev_dbg(data->sysmmu, - "(%s) Disabled. Skipping invalidating TLB.\n", - data->dbgname); + dev_dbg(dev, "disabled. 
Skipping TLB invalidation\n"); } - - read_unlock_irqrestore(&data->lock, flags); + spin_unlock_irqrestore(&data->lock, flags); } -static int exynos_sysmmu_probe(struct platform_device *pdev) +static int __init exynos_sysmmu_probe(struct platform_device *pdev) { - int i, ret; - struct device *dev; + int irq, ret; + struct device *dev = &pdev->dev; struct sysmmu_drvdata *data; + struct resource *res; - dev = &pdev->dev; + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); + if (!data) + return -ENOMEM; - data = kzalloc(sizeof(*data), GFP_KERNEL); - if (!data) { - dev_dbg(dev, "Not enough memory\n"); - ret = -ENOMEM; - goto err_alloc; - } + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + data->sfrbase = devm_ioremap_resource(dev, res); + if (IS_ERR(data->sfrbase)) + return PTR_ERR(data->sfrbase); - dev_set_drvdata(dev, data); - data->nsfrs = pdev->num_resources / 2; - data->sfrbases = kmalloc(sizeof(*data->sfrbases) * data->nsfrs, - GFP_KERNEL); - if (data->sfrbases == NULL) { - dev_dbg(dev, "Not enough memory\n"); - ret = -ENOMEM; - goto err_init; + irq = platform_get_irq(pdev, 0); + if (irq <= 0) { + dev_err(dev, "Unable to find IRQ resource\n"); + return irq; } - for (i = 0; i < data->nsfrs; i++) { - struct resource *res; - res = platform_get_resource(pdev, IORESOURCE_MEM, i); - if (!res) { - dev_dbg(dev, "Unable to find IOMEM region\n"); - ret = -ENOENT; - goto err_res; - } - - data->sfrbases[i] = ioremap(res->start, resource_size(res)); - if (!data->sfrbases[i]) { - dev_dbg(dev, "Unable to map IOMEM @ PA:%#x\n", - res->start); - ret = -ENOENT; - goto err_res; - } + ret = devm_request_irq(dev, irq, exynos_sysmmu_irq, 0, + dev_name(dev), data); + if (ret) { + dev_err(dev, "Unabled to register handler of irq %d\n", irq); + return ret; } - for (i = 0; i < data->nsfrs; i++) { - ret = platform_get_irq(pdev, i); - if (ret <= 0) { - dev_dbg(dev, "Unable to find IRQ resource\n"); - goto err_irq; - } - - ret = request_irq(ret, exynos_sysmmu_irq, 0, - dev_name(dev), data); + data->clk = devm_clk_get(dev, "sysmmu"); + if (IS_ERR(data->clk)) { + dev_err(dev, "Failed to get clock!\n"); + return PTR_ERR(data->clk); + } else { + ret = clk_prepare(data->clk); if (ret) { - dev_dbg(dev, "Unabled to register interrupt handler\n"); - goto err_irq; + dev_err(dev, "Failed to prepare clk\n"); + return ret; } } - if (dev_get_platdata(dev)) { - char *deli, *beg; - struct sysmmu_platform_data *platdata = dev_get_platdata(dev); - - beg = platdata->clockname; - - for (deli = beg; (*deli != '\0') && (*deli != ','); deli++) - /* NOTHING */; - - if (*deli == '\0') - deli = NULL; - else - *deli = '\0'; - - data->clk[0] = clk_get(dev, beg); - if (IS_ERR(data->clk[0])) { - data->clk[0] = NULL; - dev_dbg(dev, "No clock descriptor registered\n"); - } - - if (data->clk[0] && deli) { - *deli = ','; - data->clk[1] = clk_get(dev, deli + 1); - if (IS_ERR(data->clk[1])) - data->clk[1] = NULL; + data->clk_master = devm_clk_get(dev, "master"); + if (!IS_ERR(data->clk_master)) { + ret = clk_prepare(data->clk_master); + if (ret) { + clk_unprepare(data->clk); + dev_err(dev, "Failed to prepare master's clk\n"); + return ret; } - - data->dbgname = platdata->dbgname; } data->sysmmu = dev; - rwlock_init(&data->lock); - INIT_LIST_HEAD(&data->node); + spin_lock_init(&data->lock); - __set_fault_handler(data, &default_fault_handler); + platform_set_drvdata(pdev, data); - if (dev->parent) - pm_runtime_enable(dev); + pm_runtime_enable(dev); - dev_dbg(dev, "(%s) Initialized\n", data->dbgname); return 0; -err_irq: - while (i-- > 0) { - 
int irq; - - irq = platform_get_irq(pdev, i); - free_irq(irq, data); - } -err_res: - while (data->nsfrs-- > 0) - iounmap(data->sfrbases[data->nsfrs]); - kfree(data->sfrbases); -err_init: - kfree(data); -err_alloc: - dev_err(dev, "Failed to initialize\n"); - return ret; } -static struct platform_driver exynos_sysmmu_driver = { - .probe = exynos_sysmmu_probe, - .driver = { +static const struct of_device_id sysmmu_of_match[] __initconst = { + { .compatible = "samsung,exynos-sysmmu", }, + { }, +}; + +static struct platform_driver exynos_sysmmu_driver __refdata = { + .probe = exynos_sysmmu_probe, + .driver = { .owner = THIS_MODULE, .name = "exynos-sysmmu", + .of_match_table = sysmmu_of_match, } }; @@ -662,21 +700,32 @@ static inline void pgtable_flush(void *vastart, void *vaend) static int exynos_iommu_domain_init(struct iommu_domain *domain) { struct exynos_iommu_domain *priv; + int i; priv = kzalloc(sizeof(*priv), GFP_KERNEL); if (!priv) return -ENOMEM; - priv->pgtable = (unsigned long *)__get_free_pages( - GFP_KERNEL | __GFP_ZERO, 2); + priv->pgtable = (sysmmu_pte_t *)__get_free_pages(GFP_KERNEL, 2); if (!priv->pgtable) goto err_pgtable; - priv->lv2entcnt = (short *)__get_free_pages( - GFP_KERNEL | __GFP_ZERO, 1); + priv->lv2entcnt = (short *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1); if (!priv->lv2entcnt) goto err_counter; + /* w/a of System MMU v3.3 to prevent caching 1MiB mapping */ + for (i = 0; i < NUM_LV1ENTRIES; i += 8) { + priv->pgtable[i + 0] = ZERO_LV2LINK; + priv->pgtable[i + 1] = ZERO_LV2LINK; + priv->pgtable[i + 2] = ZERO_LV2LINK; + priv->pgtable[i + 3] = ZERO_LV2LINK; + priv->pgtable[i + 4] = ZERO_LV2LINK; + priv->pgtable[i + 5] = ZERO_LV2LINK; + priv->pgtable[i + 6] = ZERO_LV2LINK; + priv->pgtable[i + 7] = ZERO_LV2LINK; + } + pgtable_flush(priv->pgtable, priv->pgtable + NUM_LV1ENTRIES); spin_lock_init(&priv->lock); @@ -700,7 +749,7 @@ err_pgtable: static void exynos_iommu_domain_destroy(struct iommu_domain *domain) { struct exynos_iommu_domain *priv = domain->priv; - struct sysmmu_drvdata *data; + struct exynos_iommu_owner *owner; unsigned long flags; int i; @@ -708,16 +757,20 @@ static void exynos_iommu_domain_destroy(struct iommu_domain *domain) spin_lock_irqsave(&priv->lock, flags); - list_for_each_entry(data, &priv->clients, node) { - while (!exynos_sysmmu_disable(data->dev)) + list_for_each_entry(owner, &priv->clients, client) { + while (!exynos_sysmmu_disable(owner->dev)) ; /* until System MMU is actually disabled */ } + while (!list_empty(&priv->clients)) + list_del_init(priv->clients.next); + spin_unlock_irqrestore(&priv->lock, flags); for (i = 0; i < NUM_LV1ENTRIES; i++) if (lv1ent_page(priv->pgtable + i)) - kfree(__va(lv2table_base(priv->pgtable + i))); + kmem_cache_free(lv2table_kmem_cache, + phys_to_virt(lv2table_base(priv->pgtable + i))); free_pages((unsigned long)priv->pgtable, 2); free_pages((unsigned long)priv->lv2entcnt, 1); @@ -728,114 +781,134 @@ static void exynos_iommu_domain_destroy(struct iommu_domain *domain) static int exynos_iommu_attach_device(struct iommu_domain *domain, struct device *dev) { - struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu); + struct exynos_iommu_owner *owner = dev->archdata.iommu; struct exynos_iommu_domain *priv = domain->priv; + phys_addr_t pagetable = virt_to_phys(priv->pgtable); unsigned long flags; int ret; - ret = pm_runtime_get_sync(data->sysmmu); - if (ret < 0) - return ret; - - ret = 0; - spin_lock_irqsave(&priv->lock, flags); - ret = __exynos_sysmmu_enable(data, __pa(priv->pgtable), domain); - + ret 
= __exynos_sysmmu_enable(dev, pagetable, domain); if (ret == 0) { - /* 'data->node' must not be appeared in priv->clients */ - BUG_ON(!list_empty(&data->node)); - data->dev = dev; - list_add_tail(&data->node, &priv->clients); + list_add_tail(&owner->client, &priv->clients); + owner->domain = domain; } spin_unlock_irqrestore(&priv->lock, flags); if (ret < 0) { - dev_err(dev, "%s: Failed to attach IOMMU with pgtable %#lx\n", - __func__, __pa(priv->pgtable)); - pm_runtime_put(data->sysmmu); - } else if (ret > 0) { - dev_dbg(dev, "%s: IOMMU with pgtable 0x%lx already attached\n", - __func__, __pa(priv->pgtable)); - } else { - dev_dbg(dev, "%s: Attached new IOMMU with pgtable 0x%lx\n", - __func__, __pa(priv->pgtable)); + dev_err(dev, "%s: Failed to attach IOMMU with pgtable %pa\n", + __func__, &pagetable); + return ret; } + dev_dbg(dev, "%s: Attached IOMMU with pgtable %pa %s\n", + __func__, &pagetable, (ret == 0) ? "" : ", again"); + return ret; } static void exynos_iommu_detach_device(struct iommu_domain *domain, struct device *dev) { - struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu); + struct exynos_iommu_owner *owner; struct exynos_iommu_domain *priv = domain->priv; - struct list_head *pos; + phys_addr_t pagetable = virt_to_phys(priv->pgtable); unsigned long flags; - bool found = false; spin_lock_irqsave(&priv->lock, flags); - list_for_each(pos, &priv->clients) { - if (list_entry(pos, struct sysmmu_drvdata, node) == data) { - found = true; + list_for_each_entry(owner, &priv->clients, client) { + if (owner == dev->archdata.iommu) { + if (exynos_sysmmu_disable(dev)) { + list_del_init(&owner->client); + owner->domain = NULL; + } break; } } - if (!found) - goto finish; - - if (__exynos_sysmmu_disable(data)) { - dev_dbg(dev, "%s: Detached IOMMU with pgtable %#lx\n", - __func__, __pa(priv->pgtable)); - list_del_init(&data->node); - - } else { - dev_dbg(dev, "%s: Detaching IOMMU with pgtable %#lx delayed", - __func__, __pa(priv->pgtable)); - } - -finish: spin_unlock_irqrestore(&priv->lock, flags); - if (found) - pm_runtime_put(data->sysmmu); + if (owner == dev->archdata.iommu) + dev_dbg(dev, "%s: Detached IOMMU with pgtable %pa\n", + __func__, &pagetable); + else + dev_err(dev, "%s: No IOMMU is attached\n", __func__); } -static unsigned long *alloc_lv2entry(unsigned long *sent, unsigned long iova, - short *pgcounter) +static sysmmu_pte_t *alloc_lv2entry(struct exynos_iommu_domain *priv, + sysmmu_pte_t *sent, sysmmu_iova_t iova, short *pgcounter) { + if (lv1ent_section(sent)) { + WARN(1, "Trying mapping on %#08x mapped with 1MiB page", iova); + return ERR_PTR(-EADDRINUSE); + } + if (lv1ent_fault(sent)) { - unsigned long *pent; + sysmmu_pte_t *pent; + bool need_flush_flpd_cache = lv1ent_zero(sent); - pent = kzalloc(LV2TABLE_SIZE, GFP_ATOMIC); - BUG_ON((unsigned long)pent & (LV2TABLE_SIZE - 1)); + pent = kmem_cache_zalloc(lv2table_kmem_cache, GFP_ATOMIC); + BUG_ON((unsigned int)pent & (LV2TABLE_SIZE - 1)); if (!pent) - return NULL; + return ERR_PTR(-ENOMEM); - *sent = mk_lv1ent_page(__pa(pent)); + *sent = mk_lv1ent_page(virt_to_phys(pent)); *pgcounter = NUM_LV2ENTRIES; pgtable_flush(pent, pent + NUM_LV2ENTRIES); pgtable_flush(sent, sent + 1); + + /* + * If pretched SLPD is a fault SLPD in zero_l2_table, FLPD cache + * may caches the address of zero_l2_table. This function + * replaces the zero_l2_table with new L2 page table to write + * valid mappings. 
+ * Accessing the valid area may cause page fault since FLPD + * cache may still caches zero_l2_table for the valid area + * instead of new L2 page table that have the mapping + * information of the valid area + * Thus any replacement of zero_l2_table with other valid L2 + * page table must involve FLPD cache invalidation for System + * MMU v3.3. + * FLPD cache invalidation is performed with TLB invalidation + * by VPN without blocking. It is safe to invalidate TLB without + * blocking because the target address of TLB invalidation is + * not currently mapped. + */ + if (need_flush_flpd_cache) { + struct exynos_iommu_owner *owner; + + spin_lock(&priv->lock); + list_for_each_entry(owner, &priv->clients, client) + sysmmu_tlb_invalidate_flpdcache( + owner->dev, iova); + spin_unlock(&priv->lock); + } } return page_entry(sent, iova); } -static int lv1set_section(unsigned long *sent, phys_addr_t paddr, short *pgcnt) +static int lv1set_section(struct exynos_iommu_domain *priv, + sysmmu_pte_t *sent, sysmmu_iova_t iova, + phys_addr_t paddr, short *pgcnt) { - if (lv1ent_section(sent)) + if (lv1ent_section(sent)) { + WARN(1, "Trying mapping on 1MiB@%#08x that is mapped", + iova); return -EADDRINUSE; + } if (lv1ent_page(sent)) { - if (*pgcnt != NUM_LV2ENTRIES) + if (*pgcnt != NUM_LV2ENTRIES) { + WARN(1, "Trying mapping on 1MiB@%#08x that is mapped", + iova); return -EADDRINUSE; + } - kfree(page_entry(sent, 0)); - + kmem_cache_free(lv2table_kmem_cache, page_entry(sent, 0)); *pgcnt = 0; } @@ -843,14 +916,26 @@ static int lv1set_section(unsigned long *sent, phys_addr_t paddr, short *pgcnt) pgtable_flush(sent, sent + 1); + spin_lock(&priv->lock); + if (lv1ent_page_zero(sent)) { + struct exynos_iommu_owner *owner; + /* + * Flushing FLPD cache in System MMU v3.3 that may cache a FLPD + * entry by speculative prefetch of SLPD which has no mapping. + */ + list_for_each_entry(owner, &priv->clients, client) + sysmmu_tlb_invalidate_flpdcache(owner->dev, iova); + } + spin_unlock(&priv->lock); + return 0; } -static int lv2set_page(unsigned long *pent, phys_addr_t paddr, size_t size, +static int lv2set_page(sysmmu_pte_t *pent, phys_addr_t paddr, size_t size, short *pgcnt) { if (size == SPAGE_SIZE) { - if (!lv2ent_fault(pent)) + if (WARN_ON(!lv2ent_fault(pent))) return -EADDRINUSE; *pent = mk_lv2ent_spage(paddr); @@ -858,9 +943,11 @@ static int lv2set_page(unsigned long *pent, phys_addr_t paddr, size_t size, *pgcnt -= 1; } else { /* size == LPAGE_SIZE */ int i; + for (i = 0; i < SPAGES_PER_LPAGE; i++, pent++) { - if (!lv2ent_fault(pent)) { - memset(pent, 0, sizeof(*pent) * i); + if (WARN_ON(!lv2ent_fault(pent))) { + if (i > 0) + memset(pent - i, 0, sizeof(*pent) * i); return -EADDRINUSE; } @@ -873,11 +960,38 @@ static int lv2set_page(unsigned long *pent, phys_addr_t paddr, size_t size, return 0; } -static int exynos_iommu_map(struct iommu_domain *domain, unsigned long iova, +/* + * *CAUTION* to the I/O virtual memory managers that support exynos-iommu: + * + * System MMU v3.x have an advanced logic to improve address translation + * performance with caching more page table entries by a page table walk. + * However, the logic has a bug that caching fault page table entries and System + * MMU reports page fault if the cached fault entry is hit even though the fault + * entry is updated to a valid entry after the entry is cached. + * To prevent caching fault page table entries which may be updated to valid + * entries later, the virtual memory manager should care about the w/a about the + * problem. 
The followings describe w/a. + * + * Any two consecutive I/O virtual address regions must have a hole of 128KiB + * in maximum to prevent misbehavior of System MMU 3.x. (w/a of h/w bug) + * + * Precisely, any start address of I/O virtual region must be aligned by + * the following sizes for System MMU v3.1 and v3.2. + * System MMU v3.1: 128KiB + * System MMU v3.2: 256KiB + * + * Because System MMU v3.3 caches page table entries more aggressively, it needs + * more w/a. + * - Any two consecutive I/O virtual regions must be have a hole of larger size + * than or equal size to 128KiB. + * - Start address of an I/O virtual region must be aligned by 128KiB. + */ +static int exynos_iommu_map(struct iommu_domain *domain, unsigned long l_iova, phys_addr_t paddr, size_t size, int prot) { struct exynos_iommu_domain *priv = domain->priv; - unsigned long *entry; + sysmmu_pte_t *entry; + sysmmu_iova_t iova = (sysmmu_iova_t)l_iova; unsigned long flags; int ret = -ENOMEM; @@ -888,38 +1002,52 @@ static int exynos_iommu_map(struct iommu_domain *domain, unsigned long iova, entry = section_entry(priv->pgtable, iova); if (size == SECT_SIZE) { - ret = lv1set_section(entry, paddr, + ret = lv1set_section(priv, entry, iova, paddr, &priv->lv2entcnt[lv1ent_offset(iova)]); } else { - unsigned long *pent; + sysmmu_pte_t *pent; - pent = alloc_lv2entry(entry, iova, + pent = alloc_lv2entry(priv, entry, iova, &priv->lv2entcnt[lv1ent_offset(iova)]); - if (!pent) - ret = -ENOMEM; + if (IS_ERR(pent)) + ret = PTR_ERR(pent); else ret = lv2set_page(pent, paddr, size, &priv->lv2entcnt[lv1ent_offset(iova)]); } - if (ret) { - pr_debug("%s: Failed to map iova 0x%lx/0x%x bytes\n", - __func__, iova, size); - } + if (ret) + pr_err("%s: Failed(%d) to map %#zx bytes @ %#x\n", + __func__, ret, size, iova); spin_unlock_irqrestore(&priv->pgtablelock, flags); return ret; } +static void exynos_iommu_tlb_invalidate_entry(struct exynos_iommu_domain *priv, + sysmmu_iova_t iova, size_t size) +{ + struct exynos_iommu_owner *owner; + unsigned long flags; + + spin_lock_irqsave(&priv->lock, flags); + + list_for_each_entry(owner, &priv->clients, client) + sysmmu_tlb_invalidate_entry(owner->dev, iova, size); + + spin_unlock_irqrestore(&priv->lock, flags); +} + static size_t exynos_iommu_unmap(struct iommu_domain *domain, - unsigned long iova, size_t size) + unsigned long l_iova, size_t size) { struct exynos_iommu_domain *priv = domain->priv; - struct sysmmu_drvdata *data; + sysmmu_iova_t iova = (sysmmu_iova_t)l_iova; + sysmmu_pte_t *ent; + size_t err_pgsize; unsigned long flags; - unsigned long *ent; BUG_ON(priv->pgtable == NULL); @@ -928,9 +1056,12 @@ static size_t exynos_iommu_unmap(struct iommu_domain *domain, ent = section_entry(priv->pgtable, iova); if (lv1ent_section(ent)) { - BUG_ON(size < SECT_SIZE); + if (WARN_ON(size < SECT_SIZE)) { + err_pgsize = SECT_SIZE; + goto err; + } - *ent = 0; + *ent = ZERO_LV2LINK; /* w/a for h/w bug in Sysmem MMU v3.3 */ pgtable_flush(ent, ent + 1); size = SECT_SIZE; goto done; @@ -954,34 +1085,42 @@ static size_t exynos_iommu_unmap(struct iommu_domain *domain, if (lv2ent_small(ent)) { *ent = 0; size = SPAGE_SIZE; + pgtable_flush(ent, ent + 1); priv->lv2entcnt[lv1ent_offset(iova)] += 1; goto done; } /* lv1ent_large(ent) == true here */ - BUG_ON(size < LPAGE_SIZE); + if (WARN_ON(size < LPAGE_SIZE)) { + err_pgsize = LPAGE_SIZE; + goto err; + } memset(ent, 0, sizeof(*ent) * SPAGES_PER_LPAGE); + pgtable_flush(ent, ent + SPAGES_PER_LPAGE); size = LPAGE_SIZE; priv->lv2entcnt[lv1ent_offset(iova)] += SPAGES_PER_LPAGE; 
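As an aside on the counter the unmap path adjusts just above: lv2entcnt[] is the per-section budget of free second-level slots. alloc_lv2entry() starts a fresh table at NUM_LV2ENTRIES, lv2set_page() subtracts 1 per 4KiB small page and SPAGES_PER_LPAGE per 64KiB large page, unmap adds them back, and lv1set_section() will only replace the table with a 1MiB section mapping once the counter has returned to NUM_LV2ENTRIES. A minimal standalone sketch of that bookkeeping (constants copied from this file; the map/unmap sequence is invented):

#include <assert.h>

#define NUM_LV2ENTRIES		256	/* 4KiB slots per 1MiB section */
#define SPAGES_PER_LPAGE	16	/* one 64KiB large page spans 16 slots */

int main(void)
{
	short free_slots = NUM_LV2ENTRIES;	/* fresh lv2 table: nothing mapped */

	free_slots -= 1;			/* map a 4KiB small page */
	free_slots -= SPAGES_PER_LPAGE;		/* map a 64KiB large page */
	assert(free_slots == NUM_LV2ENTRIES - 17);

	free_slots += SPAGES_PER_LPAGE;		/* unmap the large page */
	free_slots += 1;			/* unmap the small page */

	/* only now could the whole section be remapped as a 1MiB entry */
	assert(free_slots == NUM_LV2ENTRIES);
	return 0;
}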
done: spin_unlock_irqrestore(&priv->pgtablelock, flags); - spin_lock_irqsave(&priv->lock, flags); - list_for_each_entry(data, &priv->clients, node) - sysmmu_tlb_invalidate_entry(data->dev, iova); - spin_unlock_irqrestore(&priv->lock, flags); - + exynos_iommu_tlb_invalidate_entry(priv, iova, size); return size; +err: + spin_unlock_irqrestore(&priv->pgtablelock, flags); + + pr_err("%s: Failed: size(%#zx) @ %#x is smaller than page size %#zx\n", + __func__, size, iova, err_pgsize); + + return 0; } static phys_addr_t exynos_iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova) { struct exynos_iommu_domain *priv = domain->priv; - unsigned long *entry; + sysmmu_pte_t *entry; unsigned long flags; phys_addr_t phys = 0; @@ -1005,6 +1144,32 @@ static phys_addr_t exynos_iommu_iova_to_phys(struct iommu_domain *domain, return phys; } +static int exynos_iommu_add_device(struct device *dev) +{ + struct iommu_group *group; + int ret; + + group = iommu_group_get(dev); + + if (!group) { + group = iommu_group_alloc(); + if (IS_ERR(group)) { + dev_err(dev, "Failed to allocate IOMMU group\n"); + return PTR_ERR(group); + } + } + + ret = iommu_group_add_device(group, dev); + iommu_group_put(group); + + return ret; +} + +static void exynos_iommu_remove_device(struct device *dev) +{ + iommu_group_remove_device(dev); +} + static struct iommu_ops exynos_iommu_ops = { .domain_init = exynos_iommu_domain_init, .domain_destroy = exynos_iommu_domain_destroy, @@ -1013,6 +1178,8 @@ static struct iommu_ops exynos_iommu_ops = { .map = exynos_iommu_map, .unmap = exynos_iommu_unmap, .iova_to_phys = exynos_iommu_iova_to_phys, + .add_device = exynos_iommu_add_device, + .remove_device = exynos_iommu_remove_device, .pgsize_bitmap = SECT_SIZE | LPAGE_SIZE | SPAGE_SIZE, }; @@ -1020,11 +1187,41 @@ static int __init exynos_iommu_init(void) { int ret; + lv2table_kmem_cache = kmem_cache_create("exynos-iommu-lv2table", + LV2TABLE_SIZE, LV2TABLE_SIZE, 0, NULL); + if (!lv2table_kmem_cache) { + pr_err("%s: Failed to create kmem cache\n", __func__); + return -ENOMEM; + } + ret = platform_driver_register(&exynos_sysmmu_driver); + if (ret) { + pr_err("%s: Failed to register driver\n", __func__); + goto err_reg_driver; + } - if (ret == 0) - bus_set_iommu(&platform_bus_type, &exynos_iommu_ops); + zero_lv2_table = kmem_cache_zalloc(lv2table_kmem_cache, GFP_KERNEL); + if (zero_lv2_table == NULL) { + pr_err("%s: Failed to allocate zero level2 page table\n", + __func__); + ret = -ENOMEM; + goto err_zero_lv2; + } + ret = bus_set_iommu(&platform_bus_type, &exynos_iommu_ops); + if (ret) { + pr_err("%s: Failed to register exynos-iommu driver.\n", + __func__); + goto err_set_iommu; + } + + return 0; +err_set_iommu: + kmem_cache_free(lv2table_kmem_cache, zero_lv2_table); +err_zero_lv2: + platform_driver_unregister(&exynos_sysmmu_driver); +err_reg_driver: + kmem_cache_destroy(lv2table_kmem_cache); return ret; } subsys_initcall(exynos_iommu_init); diff --git a/drivers/iommu/fsl_pamu.c b/drivers/iommu/fsl_pamu.c index cba0498eb011..b99dd88e31b9 100644 --- a/drivers/iommu/fsl_pamu.c +++ b/drivers/iommu/fsl_pamu.c @@ -592,8 +592,7 @@ found_cpu_node: /* advance to next node in cache hierarchy */ node = of_find_node_by_phandle(*prop); if (!node) { - pr_debug("Invalid node for cache hierarchy %s\n", - node->full_name); + pr_debug("Invalid node for cache hierarchy\n"); return ~(u32)0; } } diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c new file mode 100644 index 000000000000..53cde086e83b --- /dev/null +++ 
b/drivers/iommu/ipmmu-vmsa.c @@ -0,0 +1,1255 @@ +/* + * IPMMU VMSA + * + * Copyright (C) 2014 Renesas Electronics Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; version 2 of the License. + */ + +#include <linux/delay.h> +#include <linux/dma-mapping.h> +#include <linux/err.h> +#include <linux/export.h> +#include <linux/interrupt.h> +#include <linux/io.h> +#include <linux/iommu.h> +#include <linux/module.h> +#include <linux/platform_data/ipmmu-vmsa.h> +#include <linux/platform_device.h> +#include <linux/sizes.h> +#include <linux/slab.h> + +#include <asm/dma-iommu.h> +#include <asm/pgalloc.h> + +struct ipmmu_vmsa_device { + struct device *dev; + void __iomem *base; + struct list_head list; + + const struct ipmmu_vmsa_platform_data *pdata; + unsigned int num_utlbs; + + struct dma_iommu_mapping *mapping; +}; + +struct ipmmu_vmsa_domain { + struct ipmmu_vmsa_device *mmu; + struct iommu_domain *io_domain; + + unsigned int context_id; + spinlock_t lock; /* Protects mappings */ + pgd_t *pgd; +}; + +struct ipmmu_vmsa_archdata { + struct ipmmu_vmsa_device *mmu; + unsigned int utlb; +}; + +static DEFINE_SPINLOCK(ipmmu_devices_lock); +static LIST_HEAD(ipmmu_devices); + +#define TLB_LOOP_TIMEOUT 100 /* 100us */ + +/* ----------------------------------------------------------------------------- + * Registers Definition + */ + +#define IM_CTX_SIZE 0x40 + +#define IMCTR 0x0000 +#define IMCTR_TRE (1 << 17) +#define IMCTR_AFE (1 << 16) +#define IMCTR_RTSEL_MASK (3 << 4) +#define IMCTR_RTSEL_SHIFT 4 +#define IMCTR_TREN (1 << 3) +#define IMCTR_INTEN (1 << 2) +#define IMCTR_FLUSH (1 << 1) +#define IMCTR_MMUEN (1 << 0) + +#define IMCAAR 0x0004 + +#define IMTTBCR 0x0008 +#define IMTTBCR_EAE (1 << 31) +#define IMTTBCR_PMB (1 << 30) +#define IMTTBCR_SH1_NON_SHAREABLE (0 << 28) +#define IMTTBCR_SH1_OUTER_SHAREABLE (2 << 28) +#define IMTTBCR_SH1_INNER_SHAREABLE (3 << 28) +#define IMTTBCR_SH1_MASK (3 << 28) +#define IMTTBCR_ORGN1_NC (0 << 26) +#define IMTTBCR_ORGN1_WB_WA (1 << 26) +#define IMTTBCR_ORGN1_WT (2 << 26) +#define IMTTBCR_ORGN1_WB (3 << 26) +#define IMTTBCR_ORGN1_MASK (3 << 26) +#define IMTTBCR_IRGN1_NC (0 << 24) +#define IMTTBCR_IRGN1_WB_WA (1 << 24) +#define IMTTBCR_IRGN1_WT (2 << 24) +#define IMTTBCR_IRGN1_WB (3 << 24) +#define IMTTBCR_IRGN1_MASK (3 << 24) +#define IMTTBCR_TSZ1_MASK (7 << 16) +#define IMTTBCR_TSZ1_SHIFT 16 +#define IMTTBCR_SH0_NON_SHAREABLE (0 << 12) +#define IMTTBCR_SH0_OUTER_SHAREABLE (2 << 12) +#define IMTTBCR_SH0_INNER_SHAREABLE (3 << 12) +#define IMTTBCR_SH0_MASK (3 << 12) +#define IMTTBCR_ORGN0_NC (0 << 10) +#define IMTTBCR_ORGN0_WB_WA (1 << 10) +#define IMTTBCR_ORGN0_WT (2 << 10) +#define IMTTBCR_ORGN0_WB (3 << 10) +#define IMTTBCR_ORGN0_MASK (3 << 10) +#define IMTTBCR_IRGN0_NC (0 << 8) +#define IMTTBCR_IRGN0_WB_WA (1 << 8) +#define IMTTBCR_IRGN0_WT (2 << 8) +#define IMTTBCR_IRGN0_WB (3 << 8) +#define IMTTBCR_IRGN0_MASK (3 << 8) +#define IMTTBCR_SL0_LVL_2 (0 << 4) +#define IMTTBCR_SL0_LVL_1 (1 << 4) +#define IMTTBCR_TSZ0_MASK (7 << 0) +#define IMTTBCR_TSZ0_SHIFT O + +#define IMBUSCR 0x000c +#define IMBUSCR_DVM (1 << 2) +#define IMBUSCR_BUSSEL_SYS (0 << 0) +#define IMBUSCR_BUSSEL_CCI (1 << 0) +#define IMBUSCR_BUSSEL_IMCAAR (2 << 0) +#define IMBUSCR_BUSSEL_CCI_IMCAAR (3 << 0) +#define IMBUSCR_BUSSEL_MASK (3 << 0) + +#define IMTTLBR0 0x0010 +#define IMTTUBR0 0x0014 +#define IMTTLBR1 0x0018 +#define IMTTUBR1 0x001c + +#define IMSTR 0x0020 
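A short aside on the register map listed so far (IMCTR at 0x0000 up to IMSTR at 0x0020): these context registers are banked per translation context with IM_CTX_SIZE (0x40) as the stride, and the ipmmu_ctx_read()/ipmmu_ctx_write() helpers further down address them as context_id * IM_CTX_SIZE + reg; the patch itself only ever uses context 0. A standalone sketch of that offset arithmetic, with the MMIO access itself left out:

#include <stdio.h>

#define IM_CTX_SIZE	0x40		/* stride between per-context banks */
#define IMCTR		0x0000
#define IMTTBCR		0x0008
#define IMSTR		0x0020

/* same formula as ipmmu_ctx_read()/ipmmu_ctx_write() further down */
static unsigned int ctx_reg(unsigned int context_id, unsigned int reg)
{
	return context_id * IM_CTX_SIZE + reg;
}

int main(void)
{
	printf("ctx0 IMCTR   = 0x%03x\n", ctx_reg(0, IMCTR));	/* 0x000 */
	printf("ctx1 IMCTR   = 0x%03x\n", ctx_reg(1, IMCTR));	/* 0x040 */
	printf("ctx1 IMTTBCR = 0x%03x\n", ctx_reg(1, IMTTBCR));	/* 0x048 */
	printf("ctx1 IMSTR   = 0x%03x\n", ctx_reg(1, IMSTR));	/* 0x060 */
	return 0;
}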
+#define IMSTR_ERRLVL_MASK (3 << 12) +#define IMSTR_ERRLVL_SHIFT 12 +#define IMSTR_ERRCODE_TLB_FORMAT (1 << 8) +#define IMSTR_ERRCODE_ACCESS_PERM (4 << 8) +#define IMSTR_ERRCODE_SECURE_ACCESS (5 << 8) +#define IMSTR_ERRCODE_MASK (7 << 8) +#define IMSTR_MHIT (1 << 4) +#define IMSTR_ABORT (1 << 2) +#define IMSTR_PF (1 << 1) +#define IMSTR_TF (1 << 0) + +#define IMMAIR0 0x0028 +#define IMMAIR1 0x002c +#define IMMAIR_ATTR_MASK 0xff +#define IMMAIR_ATTR_DEVICE 0x04 +#define IMMAIR_ATTR_NC 0x44 +#define IMMAIR_ATTR_WBRWA 0xff +#define IMMAIR_ATTR_SHIFT(n) ((n) << 3) +#define IMMAIR_ATTR_IDX_NC 0 +#define IMMAIR_ATTR_IDX_WBRWA 1 +#define IMMAIR_ATTR_IDX_DEV 2 + +#define IMEAR 0x0030 + +#define IMPCTR 0x0200 +#define IMPSTR 0x0208 +#define IMPEAR 0x020c +#define IMPMBA(n) (0x0280 + ((n) * 4)) +#define IMPMBD(n) (0x02c0 + ((n) * 4)) + +#define IMUCTR(n) (0x0300 + ((n) * 16)) +#define IMUCTR_FIXADDEN (1 << 31) +#define IMUCTR_FIXADD_MASK (0xff << 16) +#define IMUCTR_FIXADD_SHIFT 16 +#define IMUCTR_TTSEL_MMU(n) ((n) << 4) +#define IMUCTR_TTSEL_PMB (8 << 4) +#define IMUCTR_TTSEL_MASK (15 << 4) +#define IMUCTR_FLUSH (1 << 1) +#define IMUCTR_MMUEN (1 << 0) + +#define IMUASID(n) (0x0308 + ((n) * 16)) +#define IMUASID_ASID8_MASK (0xff << 8) +#define IMUASID_ASID8_SHIFT 8 +#define IMUASID_ASID0_MASK (0xff << 0) +#define IMUASID_ASID0_SHIFT 0 + +/* ----------------------------------------------------------------------------- + * Page Table Bits + */ + +/* + * VMSA states in section B3.6.3 "Control of Secure or Non-secure memory access, + * Long-descriptor format" that the NStable bit being set in a table descriptor + * will result in the NStable and NS bits of all child entries being ignored and + * considered as being set. The IPMMU seems not to comply with this, as it + * generates a secure access page fault if any of the NStable and NS bits isn't + * set when running in non-secure mode. 
+ */ +#ifndef PMD_NSTABLE +#define PMD_NSTABLE (_AT(pmdval_t, 1) << 63) +#endif + +#define ARM_VMSA_PTE_XN (((pteval_t)3) << 53) +#define ARM_VMSA_PTE_CONT (((pteval_t)1) << 52) +#define ARM_VMSA_PTE_AF (((pteval_t)1) << 10) +#define ARM_VMSA_PTE_SH_NS (((pteval_t)0) << 8) +#define ARM_VMSA_PTE_SH_OS (((pteval_t)2) << 8) +#define ARM_VMSA_PTE_SH_IS (((pteval_t)3) << 8) +#define ARM_VMSA_PTE_SH_MASK (((pteval_t)3) << 8) +#define ARM_VMSA_PTE_NS (((pteval_t)1) << 5) +#define ARM_VMSA_PTE_PAGE (((pteval_t)3) << 0) + +/* Stage-1 PTE */ +#define ARM_VMSA_PTE_nG (((pteval_t)1) << 11) +#define ARM_VMSA_PTE_AP_UNPRIV (((pteval_t)1) << 6) +#define ARM_VMSA_PTE_AP_RDONLY (((pteval_t)2) << 6) +#define ARM_VMSA_PTE_AP_MASK (((pteval_t)3) << 6) +#define ARM_VMSA_PTE_ATTRINDX_MASK (((pteval_t)3) << 2) +#define ARM_VMSA_PTE_ATTRINDX_SHIFT 2 + +#define ARM_VMSA_PTE_ATTRS_MASK \ + (ARM_VMSA_PTE_XN | ARM_VMSA_PTE_CONT | ARM_VMSA_PTE_nG | \ + ARM_VMSA_PTE_AF | ARM_VMSA_PTE_SH_MASK | ARM_VMSA_PTE_AP_MASK | \ + ARM_VMSA_PTE_NS | ARM_VMSA_PTE_ATTRINDX_MASK) + +#define ARM_VMSA_PTE_CONT_ENTRIES 16 +#define ARM_VMSA_PTE_CONT_SIZE (PAGE_SIZE * ARM_VMSA_PTE_CONT_ENTRIES) + +#define IPMMU_PTRS_PER_PTE 512 +#define IPMMU_PTRS_PER_PMD 512 +#define IPMMU_PTRS_PER_PGD 4 + +/* ----------------------------------------------------------------------------- + * Read/Write Access + */ + +static u32 ipmmu_read(struct ipmmu_vmsa_device *mmu, unsigned int offset) +{ + return ioread32(mmu->base + offset); +} + +static void ipmmu_write(struct ipmmu_vmsa_device *mmu, unsigned int offset, + u32 data) +{ + iowrite32(data, mmu->base + offset); +} + +static u32 ipmmu_ctx_read(struct ipmmu_vmsa_domain *domain, unsigned int reg) +{ + return ipmmu_read(domain->mmu, domain->context_id * IM_CTX_SIZE + reg); +} + +static void ipmmu_ctx_write(struct ipmmu_vmsa_domain *domain, unsigned int reg, + u32 data) +{ + ipmmu_write(domain->mmu, domain->context_id * IM_CTX_SIZE + reg, data); +} + +/* ----------------------------------------------------------------------------- + * TLB and microTLB Management + */ + +/* Wait for any pending TLB invalidations to complete */ +static void ipmmu_tlb_sync(struct ipmmu_vmsa_domain *domain) +{ + unsigned int count = 0; + + while (ipmmu_ctx_read(domain, IMCTR) & IMCTR_FLUSH) { + cpu_relax(); + if (++count == TLB_LOOP_TIMEOUT) { + dev_err_ratelimited(domain->mmu->dev, + "TLB sync timed out -- MMU may be deadlocked\n"); + return; + } + udelay(1); + } +} + +static void ipmmu_tlb_invalidate(struct ipmmu_vmsa_domain *domain) +{ + u32 reg; + + reg = ipmmu_ctx_read(domain, IMCTR); + reg |= IMCTR_FLUSH; + ipmmu_ctx_write(domain, IMCTR, reg); + + ipmmu_tlb_sync(domain); +} + +/* + * Enable MMU translation for the microTLB. + */ +static void ipmmu_utlb_enable(struct ipmmu_vmsa_domain *domain, + unsigned int utlb) +{ + struct ipmmu_vmsa_device *mmu = domain->mmu; + + /* + * TODO: Reference-count the microTLB as several bus masters can be + * connected to the same microTLB. + */ + + /* TODO: What should we set the ASID to ? */ + ipmmu_write(mmu, IMUASID(utlb), 0); + /* TODO: Do we need to flush the microTLB ? */ + ipmmu_write(mmu, IMUCTR(utlb), + IMUCTR_TTSEL_MMU(domain->context_id) | IMUCTR_FLUSH | + IMUCTR_MMUEN); +} + +/* + * Disable MMU translation for the microTLB. 
+ */ +static void ipmmu_utlb_disable(struct ipmmu_vmsa_domain *domain, + unsigned int utlb) +{ + struct ipmmu_vmsa_device *mmu = domain->mmu; + + ipmmu_write(mmu, IMUCTR(utlb), 0); +} + +static void ipmmu_flush_pgtable(struct ipmmu_vmsa_device *mmu, void *addr, + size_t size) +{ + unsigned long offset = (unsigned long)addr & ~PAGE_MASK; + + /* + * TODO: Add support for coherent walk through CCI with DVM and remove + * cache handling. + */ + dma_map_page(mmu->dev, virt_to_page(addr), offset, size, DMA_TO_DEVICE); +} + +/* ----------------------------------------------------------------------------- + * Domain/Context Management + */ + +static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain) +{ + phys_addr_t ttbr; + u32 reg; + + /* + * TODO: When adding support for multiple contexts, find an unused + * context. + */ + domain->context_id = 0; + + /* TTBR0 */ + ipmmu_flush_pgtable(domain->mmu, domain->pgd, + IPMMU_PTRS_PER_PGD * sizeof(*domain->pgd)); + ttbr = __pa(domain->pgd); + ipmmu_ctx_write(domain, IMTTLBR0, ttbr); + ipmmu_ctx_write(domain, IMTTUBR0, ttbr >> 32); + + /* + * TTBCR + * We use long descriptors with inner-shareable WBWA tables and allocate + * the whole 32-bit VA space to TTBR0. + */ + ipmmu_ctx_write(domain, IMTTBCR, IMTTBCR_EAE | + IMTTBCR_SH0_INNER_SHAREABLE | IMTTBCR_ORGN0_WB_WA | + IMTTBCR_IRGN0_WB_WA | IMTTBCR_SL0_LVL_1); + + /* + * MAIR0 + * We need three attributes only, non-cacheable, write-back read/write + * allocate and device memory. + */ + reg = (IMMAIR_ATTR_NC << IMMAIR_ATTR_SHIFT(IMMAIR_ATTR_IDX_NC)) + | (IMMAIR_ATTR_WBRWA << IMMAIR_ATTR_SHIFT(IMMAIR_ATTR_IDX_WBRWA)) + | (IMMAIR_ATTR_DEVICE << IMMAIR_ATTR_SHIFT(IMMAIR_ATTR_IDX_DEV)); + ipmmu_ctx_write(domain, IMMAIR0, reg); + + /* IMBUSCR */ + ipmmu_ctx_write(domain, IMBUSCR, + ipmmu_ctx_read(domain, IMBUSCR) & + ~(IMBUSCR_DVM | IMBUSCR_BUSSEL_MASK)); + + /* + * IMSTR + * Clear all interrupt flags. + */ + ipmmu_ctx_write(domain, IMSTR, ipmmu_ctx_read(domain, IMSTR)); + + /* + * IMCTR + * Enable the MMU and interrupt generation. The long-descriptor + * translation table format doesn't use TEX remapping. Don't enable AF + * software management as we have no use for it. Flush the TLB as + * required when modifying the context registers. + */ + ipmmu_ctx_write(domain, IMCTR, IMCTR_INTEN | IMCTR_FLUSH | IMCTR_MMUEN); + + return 0; +} + +static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain) +{ + /* + * Disable the context. Flush the TLB as required when modifying the + * context registers. + * + * TODO: Is TLB flush really needed ? + */ + ipmmu_ctx_write(domain, IMCTR, IMCTR_FLUSH); + ipmmu_tlb_sync(domain); +} + +/* ----------------------------------------------------------------------------- + * Fault Handling + */ + +static irqreturn_t ipmmu_domain_irq(struct ipmmu_vmsa_domain *domain) +{ + const u32 err_mask = IMSTR_MHIT | IMSTR_ABORT | IMSTR_PF | IMSTR_TF; + struct ipmmu_vmsa_device *mmu = domain->mmu; + u32 status; + u32 iova; + + status = ipmmu_ctx_read(domain, IMSTR); + if (!(status & err_mask)) + return IRQ_NONE; + + iova = ipmmu_ctx_read(domain, IMEAR); + + /* + * Clear the error status flags. Unlike traditional interrupt flag + * registers that must be cleared by writing 1, this status register + * seems to require 0. The error address register must be read before, + * otherwise its value will be 0. + */ + ipmmu_ctx_write(domain, IMSTR, 0); + + /* Log fatal errors. 
*/ + if (status & IMSTR_MHIT) + dev_err_ratelimited(mmu->dev, "Multiple TLB hits @0x%08x\n", + iova); + if (status & IMSTR_ABORT) + dev_err_ratelimited(mmu->dev, "Page Table Walk Abort @0x%08x\n", + iova); + + if (!(status & (IMSTR_PF | IMSTR_TF))) + return IRQ_NONE; + + /* + * Try to handle page faults and translation faults. + * + * TODO: We need to look up the faulty device based on the I/O VA. Use + * the IOMMU device for now. + */ + if (!report_iommu_fault(domain->io_domain, mmu->dev, iova, 0)) + return IRQ_HANDLED; + + dev_err_ratelimited(mmu->dev, + "Unhandled fault: status 0x%08x iova 0x%08x\n", + status, iova); + + return IRQ_HANDLED; +} + +static irqreturn_t ipmmu_irq(int irq, void *dev) +{ + struct ipmmu_vmsa_device *mmu = dev; + struct iommu_domain *io_domain; + struct ipmmu_vmsa_domain *domain; + + if (!mmu->mapping) + return IRQ_NONE; + + io_domain = mmu->mapping->domain; + domain = io_domain->priv; + + return ipmmu_domain_irq(domain); +} + +/* ----------------------------------------------------------------------------- + * Page Table Management + */ + +#define pud_pgtable(pud) pfn_to_page(__phys_to_pfn(pud_val(pud) & PHYS_MASK)) + +static void ipmmu_free_ptes(pmd_t *pmd) +{ + pgtable_t table = pmd_pgtable(*pmd); + __free_page(table); +} + +static void ipmmu_free_pmds(pud_t *pud) +{ + pmd_t *pmd = pmd_offset(pud, 0); + pgtable_t table; + unsigned int i; + + for (i = 0; i < IPMMU_PTRS_PER_PMD; ++i) { + if (!pmd_table(*pmd)) + continue; + + ipmmu_free_ptes(pmd); + pmd++; + } + + table = pud_pgtable(*pud); + __free_page(table); +} + +static void ipmmu_free_pgtables(struct ipmmu_vmsa_domain *domain) +{ + pgd_t *pgd, *pgd_base = domain->pgd; + unsigned int i; + + /* + * Recursively free the page tables for this domain. We don't care about + * speculative TLB filling, because the TLB will be nuked next time this + * context bank is re-allocated and no devices currently map to these + * tables. + */ + pgd = pgd_base; + for (i = 0; i < IPMMU_PTRS_PER_PGD; ++i) { + if (pgd_none(*pgd)) + continue; + ipmmu_free_pmds((pud_t *)pgd); + pgd++; + } + + kfree(pgd_base); +} + +/* + * We can't use the (pgd|pud|pmd|pte)_populate or the set_(pgd|pud|pmd|pte) + * functions as they would flush the CPU TLB. 
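As the comment above notes, this driver never uses the kernel's set_pgd()/set_pmd()/set_pte() helpers for its tables: the IPMMU walker is not coherent with the CPU caches here, so descriptors are written with plain stores and then cleaned out to memory through ipmmu_flush_pgtable() (a dma_map_page(..., DMA_TO_DEVICE) call), and the CPU TLB flush those helpers would perform is unnecessary for I/O page tables. A minimal sketch of that store-then-clean pattern; install_desc() and the flush callback are hypothetical stand-ins, not driver functions:

#include <stddef.h>
#include <stdint.h>

typedef void (*flush_fn)(void *addr, size_t size);	/* stands in for ipmmu_flush_pgtable() */

/* Plain store first, then clean the cache lines so the walker observes it. */
static void install_desc(uint64_t *slot, uint64_t desc, flush_fn flush)
{
	*slot = desc;
	flush(slot, sizeof(*slot));
}

static void noop_flush(void *addr, size_t size)
{
	(void)addr;
	(void)size;		/* real code would clean the D-cache range here */
}

int main(void)
{
	uint64_t table[512] = { 0 };

	/* example only: 0x3 marks a table/page-type descriptor, address is invented */
	install_desc(&table[0], 0x40000000ULL | 0x3, noop_flush);
	return 0;
}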
+ */ + +static pte_t *ipmmu_alloc_pte(struct ipmmu_vmsa_device *mmu, pmd_t *pmd, + unsigned long iova) +{ + pte_t *pte; + + if (!pmd_none(*pmd)) + return pte_offset_kernel(pmd, iova); + + pte = (pte_t *)get_zeroed_page(GFP_ATOMIC); + if (!pte) + return NULL; + + ipmmu_flush_pgtable(mmu, pte, PAGE_SIZE); + *pmd = __pmd(__pa(pte) | PMD_NSTABLE | PMD_TYPE_TABLE); + ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd)); + + return pte + pte_index(iova); +} + +static pmd_t *ipmmu_alloc_pmd(struct ipmmu_vmsa_device *mmu, pgd_t *pgd, + unsigned long iova) +{ + pud_t *pud = (pud_t *)pgd; + pmd_t *pmd; + + if (!pud_none(*pud)) + return pmd_offset(pud, iova); + + pmd = (pmd_t *)get_zeroed_page(GFP_ATOMIC); + if (!pmd) + return NULL; + + ipmmu_flush_pgtable(mmu, pmd, PAGE_SIZE); + *pud = __pud(__pa(pmd) | PMD_NSTABLE | PMD_TYPE_TABLE); + ipmmu_flush_pgtable(mmu, pud, sizeof(*pud)); + + return pmd + pmd_index(iova); +} + +static u64 ipmmu_page_prot(unsigned int prot, u64 type) +{ + u64 pgprot = ARM_VMSA_PTE_XN | ARM_VMSA_PTE_nG | ARM_VMSA_PTE_AF + | ARM_VMSA_PTE_SH_IS | ARM_VMSA_PTE_AP_UNPRIV + | ARM_VMSA_PTE_NS | type; + + if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ)) + pgprot |= ARM_VMSA_PTE_AP_RDONLY; + + if (prot & IOMMU_CACHE) + pgprot |= IMMAIR_ATTR_IDX_WBRWA << ARM_VMSA_PTE_ATTRINDX_SHIFT; + + if (prot & IOMMU_EXEC) + pgprot &= ~ARM_VMSA_PTE_XN; + else if (!(prot & (IOMMU_READ | IOMMU_WRITE))) + /* If no access create a faulting entry to avoid TLB fills. */ + pgprot &= ~ARM_VMSA_PTE_PAGE; + + return pgprot; +} + +static int ipmmu_alloc_init_pte(struct ipmmu_vmsa_device *mmu, pmd_t *pmd, + unsigned long iova, unsigned long pfn, + size_t size, int prot) +{ + pteval_t pteval = ipmmu_page_prot(prot, ARM_VMSA_PTE_PAGE); + unsigned int num_ptes = 1; + pte_t *pte, *start; + unsigned int i; + + pte = ipmmu_alloc_pte(mmu, pmd, iova); + if (!pte) + return -ENOMEM; + + start = pte; + + /* + * Install the page table entries. We can be called both for a single + * page or for a block of 16 physically contiguous pages. In the latter + * case set the PTE contiguous hint. + */ + if (size == SZ_64K) { + pteval |= ARM_VMSA_PTE_CONT; + num_ptes = ARM_VMSA_PTE_CONT_ENTRIES; + } + + for (i = num_ptes; i; --i) + *pte++ = pfn_pte(pfn++, __pgprot(pteval)); + + ipmmu_flush_pgtable(mmu, start, sizeof(*pte) * num_ptes); + + return 0; +} + +static int ipmmu_alloc_init_pmd(struct ipmmu_vmsa_device *mmu, pmd_t *pmd, + unsigned long iova, unsigned long pfn, + int prot) +{ + pmdval_t pmdval = ipmmu_page_prot(prot, PMD_TYPE_SECT); + + *pmd = pfn_pmd(pfn, __pgprot(pmdval)); + ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd)); + + return 0; +} + +static int ipmmu_create_mapping(struct ipmmu_vmsa_domain *domain, + unsigned long iova, phys_addr_t paddr, + size_t size, int prot) +{ + struct ipmmu_vmsa_device *mmu = domain->mmu; + pgd_t *pgd = domain->pgd; + unsigned long flags; + unsigned long pfn; + pmd_t *pmd; + int ret; + + if (!pgd) + return -EINVAL; + + if (size & ~PAGE_MASK) + return -EINVAL; + + if (paddr & ~((1ULL << 40) - 1)) + return -ERANGE; + + pfn = __phys_to_pfn(paddr); + pgd += pgd_index(iova); + + /* Update the page tables. 
*/ + spin_lock_irqsave(&domain->lock, flags); + + pmd = ipmmu_alloc_pmd(mmu, pgd, iova); + if (!pmd) { + ret = -ENOMEM; + goto done; + } + + switch (size) { + case SZ_2M: + ret = ipmmu_alloc_init_pmd(mmu, pmd, iova, pfn, prot); + break; + case SZ_64K: + case SZ_4K: + ret = ipmmu_alloc_init_pte(mmu, pmd, iova, pfn, size, prot); + break; + default: + ret = -EINVAL; + break; + } + +done: + spin_unlock_irqrestore(&domain->lock, flags); + + if (!ret) + ipmmu_tlb_invalidate(domain); + + return ret; +} + +static void ipmmu_clear_pud(struct ipmmu_vmsa_device *mmu, pud_t *pud) +{ + /* Free the page table. */ + pgtable_t table = pud_pgtable(*pud); + __free_page(table); + + /* Clear the PUD. */ + *pud = __pud(0); + ipmmu_flush_pgtable(mmu, pud, sizeof(*pud)); +} + +static void ipmmu_clear_pmd(struct ipmmu_vmsa_device *mmu, pud_t *pud, + pmd_t *pmd) +{ + unsigned int i; + + /* Free the page table. */ + if (pmd_table(*pmd)) { + pgtable_t table = pmd_pgtable(*pmd); + __free_page(table); + } + + /* Clear the PMD. */ + *pmd = __pmd(0); + ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd)); + + /* Check whether the PUD is still needed. */ + pmd = pmd_offset(pud, 0); + for (i = 0; i < IPMMU_PTRS_PER_PMD; ++i) { + if (!pmd_none(pmd[i])) + return; + } + + /* Clear the parent PUD. */ + ipmmu_clear_pud(mmu, pud); +} + +static void ipmmu_clear_pte(struct ipmmu_vmsa_device *mmu, pud_t *pud, + pmd_t *pmd, pte_t *pte, unsigned int num_ptes) +{ + unsigned int i; + + /* Clear the PTE. */ + for (i = num_ptes; i; --i) + pte[i-1] = __pte(0); + + ipmmu_flush_pgtable(mmu, pte, sizeof(*pte) * num_ptes); + + /* Check whether the PMD is still needed. */ + pte = pte_offset_kernel(pmd, 0); + for (i = 0; i < IPMMU_PTRS_PER_PTE; ++i) { + if (!pte_none(pte[i])) + return; + } + + /* Clear the parent PMD. */ + ipmmu_clear_pmd(mmu, pud, pmd); +} + +static int ipmmu_split_pmd(struct ipmmu_vmsa_device *mmu, pmd_t *pmd) +{ + pte_t *pte, *start; + pteval_t pteval; + unsigned long pfn; + unsigned int i; + + pte = (pte_t *)get_zeroed_page(GFP_ATOMIC); + if (!pte) + return -ENOMEM; + + /* Copy the PMD attributes. */ + pteval = (pmd_val(*pmd) & ARM_VMSA_PTE_ATTRS_MASK) + | ARM_VMSA_PTE_CONT | ARM_VMSA_PTE_PAGE; + + pfn = pmd_pfn(*pmd); + start = pte; + + for (i = IPMMU_PTRS_PER_PTE; i; --i) + *pte++ = pfn_pte(pfn++, __pgprot(pteval)); + + ipmmu_flush_pgtable(mmu, start, PAGE_SIZE); + *pmd = __pmd(__pa(start) | PMD_NSTABLE | PMD_TYPE_TABLE); + ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd)); + + return 0; +} + +static void ipmmu_split_pte(struct ipmmu_vmsa_device *mmu, pte_t *pte) +{ + unsigned int i; + + for (i = ARM_VMSA_PTE_CONT_ENTRIES; i; --i) + pte[i-1] = __pte(pte_val(*pte) & ~ARM_VMSA_PTE_CONT); + + ipmmu_flush_pgtable(mmu, pte, sizeof(*pte) * ARM_VMSA_PTE_CONT_ENTRIES); +} + +static int ipmmu_clear_mapping(struct ipmmu_vmsa_domain *domain, + unsigned long iova, size_t size) +{ + struct ipmmu_vmsa_device *mmu = domain->mmu; + unsigned long flags; + pgd_t *pgd = domain->pgd; + pud_t *pud; + pmd_t *pmd; + pte_t *pte; + int ret = 0; + + if (!pgd) + return -EINVAL; + + if (size & ~PAGE_MASK) + return -EINVAL; + + pgd += pgd_index(iova); + pud = (pud_t *)pgd; + + spin_lock_irqsave(&domain->lock, flags); + + /* If there's no PUD or PMD we're done. */ + if (pud_none(*pud)) + goto done; + + pmd = pmd_offset(pud, iova); + if (pmd_none(*pmd)) + goto done; + + /* + * When freeing a 2MB block just clear the PMD. In the unlikely case the + * block is mapped as individual pages this will free the corresponding + * PTE page table. 
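One more piece of context for the clear path below: a 64KiB mapping is written as 16 consecutive 4KiB PTEs carrying the ARM_VMSA_PTE_CONT hint, so before a single 4KiB page inside such a block can be unmapped, ipmmu_split_pte() conceptually rewrites all ARM_VMSA_PTE_CONT_ENTRIES entries with the hint cleared (and ipmmu_split_pmd() likewise turns a 2MiB section into a page table first). A standalone sketch of the hint-stripping step, using the bit position defined earlier in this file; the descriptor values are invented:

#include <assert.h>
#include <stdint.h>

#define ARM_VMSA_PTE_CONT	(1ULL << 52)	/* contiguous-range hint */
#define CONT_ENTRIES		16		/* 16 x 4KiB = one 64KiB block */

int main(void)
{
	uint64_t pte[CONT_ENTRIES];
	unsigned int i;

	/* a 64KiB mapping: 16 PTEs, all carrying the contiguous hint */
	for (i = 0; i < CONT_ENTRIES; i++)
		pte[i] = (0x40000000ULL + i * 0x1000) | ARM_VMSA_PTE_CONT | 0x3;

	/* split before a partial unmap: same translations, hint removed */
	for (i = 0; i < CONT_ENTRIES; i++)
		pte[i] &= ~ARM_VMSA_PTE_CONT;

	for (i = 0; i < CONT_ENTRIES; i++)
		assert((pte[i] & ARM_VMSA_PTE_CONT) == 0);
	return 0;
}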
+ */ + if (size == SZ_2M) { + ipmmu_clear_pmd(mmu, pud, pmd); + goto done; + } + + /* + * If the PMD has been mapped as a section remap it as pages to allow + * freeing individual pages. + */ + if (pmd_sect(*pmd)) + ipmmu_split_pmd(mmu, pmd); + + pte = pte_offset_kernel(pmd, iova); + + /* + * When freeing a 64kB block just clear the PTE entries. We don't have + * to care about the contiguous hint of the surrounding entries. + */ + if (size == SZ_64K) { + ipmmu_clear_pte(mmu, pud, pmd, pte, ARM_VMSA_PTE_CONT_ENTRIES); + goto done; + } + + /* + * If the PTE has been mapped with the contiguous hint set remap it and + * its surrounding PTEs to allow unmapping a single page. + */ + if (pte_val(*pte) & ARM_VMSA_PTE_CONT) + ipmmu_split_pte(mmu, pte); + + /* Clear the PTE. */ + ipmmu_clear_pte(mmu, pud, pmd, pte, 1); + +done: + spin_unlock_irqrestore(&domain->lock, flags); + + if (ret) + ipmmu_tlb_invalidate(domain); + + return 0; +} + +/* ----------------------------------------------------------------------------- + * IOMMU Operations + */ + +static int ipmmu_domain_init(struct iommu_domain *io_domain) +{ + struct ipmmu_vmsa_domain *domain; + + domain = kzalloc(sizeof(*domain), GFP_KERNEL); + if (!domain) + return -ENOMEM; + + spin_lock_init(&domain->lock); + + domain->pgd = kzalloc(IPMMU_PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL); + if (!domain->pgd) { + kfree(domain); + return -ENOMEM; + } + + io_domain->priv = domain; + domain->io_domain = io_domain; + + return 0; +} + +static void ipmmu_domain_destroy(struct iommu_domain *io_domain) +{ + struct ipmmu_vmsa_domain *domain = io_domain->priv; + + /* + * Free the domain resources. We assume that all devices have already + * been detached. + */ + ipmmu_domain_destroy_context(domain); + ipmmu_free_pgtables(domain); + kfree(domain); +} + +static int ipmmu_attach_device(struct iommu_domain *io_domain, + struct device *dev) +{ + struct ipmmu_vmsa_archdata *archdata = dev->archdata.iommu; + struct ipmmu_vmsa_device *mmu = archdata->mmu; + struct ipmmu_vmsa_domain *domain = io_domain->priv; + unsigned long flags; + int ret = 0; + + if (!mmu) { + dev_err(dev, "Cannot attach to IPMMU\n"); + return -ENXIO; + } + + spin_lock_irqsave(&domain->lock, flags); + + if (!domain->mmu) { + /* The domain hasn't been used yet, initialize it. */ + domain->mmu = mmu; + ret = ipmmu_domain_init_context(domain); + } else if (domain->mmu != mmu) { + /* + * Something is wrong, we can't attach two devices using + * different IOMMUs to the same domain. + */ + dev_err(dev, "Can't attach IPMMU %s to domain on IPMMU %s\n", + dev_name(mmu->dev), dev_name(domain->mmu->dev)); + ret = -EINVAL; + } + + spin_unlock_irqrestore(&domain->lock, flags); + + if (ret < 0) + return ret; + + ipmmu_utlb_enable(domain, archdata->utlb); + + return 0; +} + +static void ipmmu_detach_device(struct iommu_domain *io_domain, + struct device *dev) +{ + struct ipmmu_vmsa_archdata *archdata = dev->archdata.iommu; + struct ipmmu_vmsa_domain *domain = io_domain->priv; + + ipmmu_utlb_disable(domain, archdata->utlb); + + /* + * TODO: Optimize by disabling the context when no device is attached. 
+ */ +} + +static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova, + phys_addr_t paddr, size_t size, int prot) +{ + struct ipmmu_vmsa_domain *domain = io_domain->priv; + + if (!domain) + return -ENODEV; + + return ipmmu_create_mapping(domain, iova, paddr, size, prot); +} + +static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova, + size_t size) +{ + struct ipmmu_vmsa_domain *domain = io_domain->priv; + int ret; + + ret = ipmmu_clear_mapping(domain, iova, size); + return ret ? 0 : size; +} + +static phys_addr_t ipmmu_iova_to_phys(struct iommu_domain *io_domain, + dma_addr_t iova) +{ + struct ipmmu_vmsa_domain *domain = io_domain->priv; + pgd_t pgd; + pud_t pud; + pmd_t pmd; + pte_t pte; + + /* TODO: Is locking needed ? */ + + if (!domain->pgd) + return 0; + + pgd = *(domain->pgd + pgd_index(iova)); + if (pgd_none(pgd)) + return 0; + + pud = *pud_offset(&pgd, iova); + if (pud_none(pud)) + return 0; + + pmd = *pmd_offset(&pud, iova); + if (pmd_none(pmd)) + return 0; + + if (pmd_sect(pmd)) + return __pfn_to_phys(pmd_pfn(pmd)) | (iova & ~PMD_MASK); + + pte = *(pmd_page_vaddr(pmd) + pte_index(iova)); + if (pte_none(pte)) + return 0; + + return __pfn_to_phys(pte_pfn(pte)) | (iova & ~PAGE_MASK); +} + +static int ipmmu_find_utlb(struct ipmmu_vmsa_device *mmu, struct device *dev) +{ + const struct ipmmu_vmsa_master *master = mmu->pdata->masters; + const char *devname = dev_name(dev); + unsigned int i; + + for (i = 0; i < mmu->pdata->num_masters; ++i, ++master) { + if (strcmp(master->name, devname) == 0) + return master->utlb; + } + + return -1; +} + +static int ipmmu_add_device(struct device *dev) +{ + struct ipmmu_vmsa_archdata *archdata; + struct ipmmu_vmsa_device *mmu; + struct iommu_group *group; + int utlb = -1; + int ret; + + if (dev->archdata.iommu) { + dev_warn(dev, "IOMMU driver already assigned to device %s\n", + dev_name(dev)); + return -EINVAL; + } + + /* Find the master corresponding to the device. */ + spin_lock(&ipmmu_devices_lock); + + list_for_each_entry(mmu, &ipmmu_devices, list) { + utlb = ipmmu_find_utlb(mmu, dev); + if (utlb >= 0) { + /* + * TODO Take a reference to the MMU to protect + * against device removal. + */ + break; + } + } + + spin_unlock(&ipmmu_devices_lock); + + if (utlb < 0) + return -ENODEV; + + if (utlb >= mmu->num_utlbs) + return -EINVAL; + + /* Create a device group and add the device to it. */ + group = iommu_group_alloc(); + if (IS_ERR(group)) { + dev_err(dev, "Failed to allocate IOMMU group\n"); + return PTR_ERR(group); + } + + ret = iommu_group_add_device(group, dev); + iommu_group_put(group); + + if (ret < 0) { + dev_err(dev, "Failed to add device to IPMMU group\n"); + return ret; + } + + archdata = kzalloc(sizeof(*archdata), GFP_KERNEL); + if (!archdata) { + ret = -ENOMEM; + goto error; + } + + archdata->mmu = mmu; + archdata->utlb = utlb; + dev->archdata.iommu = archdata; + + /* + * Create the ARM mapping, used by the ARM DMA mapping core to allocate + * VAs. This will allocate a corresponding IOMMU domain. + * + * TODO: + * - Create one mapping per context (TLB). + * - Make the mapping size configurable ? We currently use a 2GB mapping + * at a 1GB offset to ensure that NULL VAs will fault. 
+ */ + if (!mmu->mapping) { + struct dma_iommu_mapping *mapping; + + mapping = arm_iommu_create_mapping(&platform_bus_type, + SZ_1G, SZ_2G); + if (IS_ERR(mapping)) { + dev_err(mmu->dev, "failed to create ARM IOMMU mapping\n"); + return PTR_ERR(mapping); + } + + mmu->mapping = mapping; + } + + /* Attach the ARM VA mapping to the device. */ + ret = arm_iommu_attach_device(dev, mmu->mapping); + if (ret < 0) { + dev_err(dev, "Failed to attach device to VA mapping\n"); + goto error; + } + + return 0; + +error: + kfree(dev->archdata.iommu); + dev->archdata.iommu = NULL; + iommu_group_remove_device(dev); + return ret; +} + +static void ipmmu_remove_device(struct device *dev) +{ + arm_iommu_detach_device(dev); + iommu_group_remove_device(dev); + kfree(dev->archdata.iommu); + dev->archdata.iommu = NULL; +} + +static struct iommu_ops ipmmu_ops = { + .domain_init = ipmmu_domain_init, + .domain_destroy = ipmmu_domain_destroy, + .attach_dev = ipmmu_attach_device, + .detach_dev = ipmmu_detach_device, + .map = ipmmu_map, + .unmap = ipmmu_unmap, + .iova_to_phys = ipmmu_iova_to_phys, + .add_device = ipmmu_add_device, + .remove_device = ipmmu_remove_device, + .pgsize_bitmap = SZ_2M | SZ_64K | SZ_4K, +}; + +/* ----------------------------------------------------------------------------- + * Probe/remove and init + */ + +static void ipmmu_device_reset(struct ipmmu_vmsa_device *mmu) +{ + unsigned int i; + + /* Disable all contexts. */ + for (i = 0; i < 4; ++i) + ipmmu_write(mmu, i * IM_CTX_SIZE + IMCTR, 0); +} + +static int ipmmu_probe(struct platform_device *pdev) +{ + struct ipmmu_vmsa_device *mmu; + struct resource *res; + int irq; + int ret; + + if (!pdev->dev.platform_data) { + dev_err(&pdev->dev, "missing platform data\n"); + return -EINVAL; + } + + mmu = devm_kzalloc(&pdev->dev, sizeof(*mmu), GFP_KERNEL); + if (!mmu) { + dev_err(&pdev->dev, "cannot allocate device data\n"); + return -ENOMEM; + } + + mmu->dev = &pdev->dev; + mmu->pdata = pdev->dev.platform_data; + mmu->num_utlbs = 32; + + /* Map I/O memory and request IRQ. */ + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + mmu->base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(mmu->base)) + return PTR_ERR(mmu->base); + + irq = platform_get_irq(pdev, 0); + if (irq < 0) { + dev_err(&pdev->dev, "no IRQ found\n"); + return irq; + } + + ret = devm_request_irq(&pdev->dev, irq, ipmmu_irq, 0, + dev_name(&pdev->dev), mmu); + if (ret < 0) { + dev_err(&pdev->dev, "failed to request IRQ %d\n", irq); + return irq; + } + + ipmmu_device_reset(mmu); + + /* + * We can't create the ARM mapping here as it requires the bus to have + * an IOMMU, which only happens when bus_set_iommu() is called in + * ipmmu_init() after the probe function returns. 
+ */ + + spin_lock(&ipmmu_devices_lock); + list_add(&mmu->list, &ipmmu_devices); + spin_unlock(&ipmmu_devices_lock); + + platform_set_drvdata(pdev, mmu); + + return 0; +} + +static int ipmmu_remove(struct platform_device *pdev) +{ + struct ipmmu_vmsa_device *mmu = platform_get_drvdata(pdev); + + spin_lock(&ipmmu_devices_lock); + list_del(&mmu->list); + spin_unlock(&ipmmu_devices_lock); + + arm_iommu_release_mapping(mmu->mapping); + + ipmmu_device_reset(mmu); + + return 0; +} + +static struct platform_driver ipmmu_driver = { + .driver = { + .owner = THIS_MODULE, + .name = "ipmmu-vmsa", + }, + .probe = ipmmu_probe, + .remove = ipmmu_remove, +}; + +static int __init ipmmu_init(void) +{ + int ret; + + ret = platform_driver_register(&ipmmu_driver); + if (ret < 0) + return ret; + + if (!iommu_present(&platform_bus_type)) + bus_set_iommu(&platform_bus_type, &ipmmu_ops); + + return 0; +} + +static void __exit ipmmu_exit(void) +{ + return platform_driver_unregister(&ipmmu_driver); +} + +subsys_initcall(ipmmu_init); +module_exit(ipmmu_exit); + +MODULE_DESCRIPTION("IOMMU API for Renesas VMSA-compatible IPMMU"); +MODULE_AUTHOR("Laurent Pinchart <laurent.pinchart@ideasonboard.com>"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/iommu/msm_iommu_dev.c b/drivers/iommu/msm_iommu_dev.c index 08ba4972da9d..61def7cb5263 100644 --- a/drivers/iommu/msm_iommu_dev.c +++ b/drivers/iommu/msm_iommu_dev.c @@ -127,13 +127,12 @@ static void msm_iommu_reset(void __iomem *base, int ncb) static int msm_iommu_probe(struct platform_device *pdev) { - struct resource *r, *r2; + struct resource *r; struct clk *iommu_clk; struct clk *iommu_pclk; struct msm_iommu_drvdata *drvdata; struct msm_iommu_dev *iommu_dev = pdev->dev.platform_data; void __iomem *regs_base; - resource_size_t len; int ret, irq, par; if (pdev->id == -1) { @@ -178,35 +177,16 @@ static int msm_iommu_probe(struct platform_device *pdev) iommu_clk = NULL; r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "physbase"); - - if (!r) { - ret = -ENODEV; - goto fail_clk; - } - - len = resource_size(r); - - r2 = request_mem_region(r->start, len, r->name); - if (!r2) { - pr_err("Could not request memory region: start=%p, len=%d\n", - (void *) r->start, len); - ret = -EBUSY; + regs_base = devm_ioremap_resource(&pdev->dev, r); + if (IS_ERR(regs_base)) { + ret = PTR_ERR(regs_base); goto fail_clk; } - regs_base = ioremap(r2->start, len); - - if (!regs_base) { - pr_err("Could not ioremap: start=%p, len=%d\n", - (void *) r2->start, len); - ret = -EBUSY; - goto fail_mem; - } - irq = platform_get_irq_byname(pdev, "secure_irq"); if (irq < 0) { ret = -ENODEV; - goto fail_io; + goto fail_clk; } msm_iommu_reset(regs_base, iommu_dev->ncb); @@ -222,14 +202,14 @@ static int msm_iommu_probe(struct platform_device *pdev) if (!par) { pr_err("%s: Invalid PAR value detected\n", iommu_dev->name); ret = -ENODEV; - goto fail_io; + goto fail_clk; } ret = request_irq(irq, msm_iommu_fault_handler, 0, "msm_iommu_secure_irpt_handler", drvdata); if (ret) { pr_err("Request IRQ %d failed with ret=%d\n", irq, ret); - goto fail_io; + goto fail_clk; } @@ -250,10 +230,6 @@ static int msm_iommu_probe(struct platform_device *pdev) clk_disable(iommu_pclk); return 0; -fail_io: - iounmap(regs_base); -fail_mem: - release_mem_region(r->start, len); fail_clk: if (iommu_clk) { clk_disable(iommu_clk); diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c index 7fcbfc498fa9..895af06a667f 100644 --- a/drivers/iommu/omap-iommu.c +++ b/drivers/iommu/omap-iommu.c @@ -34,6 +34,9 @@ #include 
"omap-iopgtable.h" #include "omap-iommu.h" +#define to_iommu(dev) \ + ((struct omap_iommu *)platform_get_drvdata(to_platform_device(dev))) + #define for_each_iotlb_cr(obj, n, __i, cr) \ for (__i = 0; \ (__i < (n)) && (cr = __iotlb_read_cr((obj), __i), true); \ @@ -391,6 +394,7 @@ static void flush_iotlb_page(struct omap_iommu *obj, u32 da) __func__, start, da, bytes); iotlb_load_cr(obj, &cr); iommu_write_reg(obj, 1, MMU_FLUSH_ENTRY); + break; } } pm_runtime_put_sync(obj->dev); @@ -1037,19 +1041,18 @@ static void iopte_cachep_ctor(void *iopte) clean_dcache_area(iopte, IOPTE_TABLE_SIZE); } -static u32 iotlb_init_entry(struct iotlb_entry *e, u32 da, u32 pa, - u32 flags) +static u32 iotlb_init_entry(struct iotlb_entry *e, u32 da, u32 pa, int pgsz) { memset(e, 0, sizeof(*e)); e->da = da; e->pa = pa; - e->valid = 1; + e->valid = MMU_CAM_V; /* FIXME: add OMAP1 support */ - e->pgsz = flags & MMU_CAM_PGSZ_MASK; - e->endian = flags & MMU_RAM_ENDIAN_MASK; - e->elsz = flags & MMU_RAM_ELSZ_MASK; - e->mixed = flags & MMU_RAM_MIXED_MASK; + e->pgsz = pgsz; + e->endian = MMU_RAM_ENDIAN_LITTLE; + e->elsz = MMU_RAM_ELSZ_8; + e->mixed = 0; return iopgsz_to_bytes(e->pgsz); } @@ -1062,9 +1065,8 @@ static int omap_iommu_map(struct iommu_domain *domain, unsigned long da, struct device *dev = oiommu->dev; struct iotlb_entry e; int omap_pgsz; - u32 ret, flags; + u32 ret; - /* we only support mapping a single iommu page for now */ omap_pgsz = bytes_to_iopgsz(bytes); if (omap_pgsz < 0) { dev_err(dev, "invalid size to map: %d\n", bytes); @@ -1073,9 +1075,7 @@ static int omap_iommu_map(struct iommu_domain *domain, unsigned long da, dev_dbg(dev, "mapping da 0x%lx to pa 0x%x size 0x%x\n", da, pa, bytes); - flags = omap_pgsz | prot; - - iotlb_init_entry(&e, da, pa, flags); + iotlb_init_entry(&e, da, pa, omap_pgsz); ret = omap_iopgtable_store_entry(oiommu, &e); if (ret) @@ -1248,12 +1248,6 @@ static phys_addr_t omap_iommu_iova_to_phys(struct iommu_domain *domain, return ret; } -static int omap_iommu_domain_has_cap(struct iommu_domain *domain, - unsigned long cap) -{ - return 0; -} - static int omap_iommu_add_device(struct device *dev) { struct omap_iommu_arch_data *arch_data; @@ -1305,7 +1299,6 @@ static struct iommu_ops omap_iommu_ops = { .map = omap_iommu_map, .unmap = omap_iommu_unmap, .iova_to_phys = omap_iommu_iova_to_phys, - .domain_has_cap = omap_iommu_domain_has_cap, .add_device = omap_iommu_add_device, .remove_device = omap_iommu_remove_device, .pgsize_bitmap = OMAP_IOMMU_PGSIZES, diff --git a/drivers/iommu/omap-iopgtable.h b/drivers/iommu/omap-iopgtable.h index b6f9a51746ca..f891683e3f05 100644 --- a/drivers/iommu/omap-iopgtable.h +++ b/drivers/iommu/omap-iopgtable.h @@ -93,6 +93,3 @@ static inline phys_addr_t omap_iommu_translate(u32 d, u32 va, u32 mask) /* to find an entry in the second-level page table. 
*/ #define iopte_index(da) (((da) >> IOPTE_SHIFT) & (PTRS_PER_IOPTE - 1)) #define iopte_offset(iopgd, da) (iopgd_page_vaddr(iopgd) + iopte_index(da)) - -#define to_iommu(dev) \ - (platform_get_drvdata(to_platform_device(dev))) diff --git a/drivers/iommu/shmobile-ipmmu.c b/drivers/iommu/shmobile-ipmmu.c index e3bc2e19b6dd..bd97adecb1fd 100644 --- a/drivers/iommu/shmobile-ipmmu.c +++ b/drivers/iommu/shmobile-ipmmu.c @@ -94,11 +94,6 @@ static int ipmmu_probe(struct platform_device *pdev) struct resource *res; struct shmobile_ipmmu_platform_data *pdata = pdev->dev.platform_data; - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - if (!res) { - dev_err(&pdev->dev, "cannot get platform resources\n"); - return -ENOENT; - } ipmmu = devm_kzalloc(&pdev->dev, sizeof(*ipmmu), GFP_KERNEL); if (!ipmmu) { dev_err(&pdev->dev, "cannot allocate device data\n"); @@ -106,19 +101,18 @@ static int ipmmu_probe(struct platform_device *pdev) } spin_lock_init(&ipmmu->flush_lock); ipmmu->dev = &pdev->dev; - ipmmu->ipmmu_base = devm_ioremap_nocache(&pdev->dev, res->start, - resource_size(res)); - if (!ipmmu->ipmmu_base) { - dev_err(&pdev->dev, "ioremap_nocache failed\n"); - return -ENOMEM; - } + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + ipmmu->ipmmu_base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(ipmmu->ipmmu_base)) + return PTR_ERR(ipmmu->ipmmu_base); + ipmmu->dev_names = pdata->dev_names; ipmmu->num_dev_names = pdata->num_dev_names; platform_set_drvdata(pdev, ipmmu); ipmmu_reg_write(ipmmu, IMCTR1, 0x0); /* disable TLB */ ipmmu_reg_write(ipmmu, IMCTR2, 0x0); /* disable PMB */ - ipmmu_iommu_init(ipmmu); - return 0; + return ipmmu_iommu_init(ipmmu); } static struct platform_driver ipmmu_driver = { diff --git a/drivers/md/md.c b/drivers/md/md.c index 237b7e0ddc7a..2382cfc9bb3f 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -7381,8 +7381,10 @@ void md_do_sync(struct md_thread *thread) /* just incase thread restarts... */ if (test_bit(MD_RECOVERY_DONE, &mddev->recovery)) return; - if (mddev->ro) /* never try to sync a read-only array */ + if (mddev->ro) {/* never try to sync a read-only array */ + set_bit(MD_RECOVERY_INTR, &mddev->recovery); return; + } if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) { if (test_bit(MD_RECOVERY_CHECK, &mddev->recovery)) { @@ -7824,6 +7826,7 @@ void md_check_recovery(struct mddev *mddev) /* There is no thread, but we need to call * ->spare_active and clear saved_raid_disk */ + set_bit(MD_RECOVERY_INTR, &mddev->recovery); md_reap_sync_thread(mddev); clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery); goto unlock; diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c index 338baa4c23ef..1778d320272e 100644 --- a/drivers/media/i2c/adv7604.c +++ b/drivers/media/i2c/adv7604.c @@ -27,19 +27,21 @@ * REF_03 - Analog devices, ADV7604, Hardware Manual, Rev. 
F, August 2010 */ - +#include <linux/delay.h> +#include <linux/gpio/consumer.h> +#include <linux/i2c.h> #include <linux/kernel.h> #include <linux/module.h> #include <linux/slab.h> -#include <linux/i2c.h> -#include <linux/delay.h> +#include <linux/v4l2-dv-timings.h> #include <linux/videodev2.h> #include <linux/workqueue.h> -#include <linux/v4l2-dv-timings.h> -#include <media/v4l2-device.h> + +#include <media/adv7604.h> #include <media/v4l2-ctrls.h> +#include <media/v4l2-device.h> #include <media/v4l2-dv-timings.h> -#include <media/adv7604.h> +#include <media/v4l2-of.h> static int debug; module_param(debug, int, 0644); @@ -53,6 +55,76 @@ MODULE_LICENSE("GPL"); /* ADV7604 system clock frequency */ #define ADV7604_fsc (28636360) +#define ADV7604_RGB_OUT (1 << 1) + +#define ADV7604_OP_FORMAT_SEL_8BIT (0 << 0) +#define ADV7604_OP_FORMAT_SEL_10BIT (1 << 0) +#define ADV7604_OP_FORMAT_SEL_12BIT (2 << 0) + +#define ADV7604_OP_MODE_SEL_SDR_422 (0 << 5) +#define ADV7604_OP_MODE_SEL_DDR_422 (1 << 5) +#define ADV7604_OP_MODE_SEL_SDR_444 (2 << 5) +#define ADV7604_OP_MODE_SEL_DDR_444 (3 << 5) +#define ADV7604_OP_MODE_SEL_SDR_422_2X (4 << 5) +#define ADV7604_OP_MODE_SEL_ADI_CM (5 << 5) + +#define ADV7604_OP_CH_SEL_GBR (0 << 5) +#define ADV7604_OP_CH_SEL_GRB (1 << 5) +#define ADV7604_OP_CH_SEL_BGR (2 << 5) +#define ADV7604_OP_CH_SEL_RGB (3 << 5) +#define ADV7604_OP_CH_SEL_BRG (4 << 5) +#define ADV7604_OP_CH_SEL_RBG (5 << 5) + +#define ADV7604_OP_SWAP_CB_CR (1 << 0) + +enum adv7604_type { + ADV7604, + ADV7611, +}; + +struct adv7604_reg_seq { + unsigned int reg; + u8 val; +}; + +struct adv7604_format_info { + enum v4l2_mbus_pixelcode code; + u8 op_ch_sel; + bool rgb_out; + bool swap_cb_cr; + u8 op_format_sel; +}; + +struct adv7604_chip_info { + enum adv7604_type type; + + bool has_afe; + unsigned int max_port; + unsigned int num_dv_ports; + + unsigned int edid_enable_reg; + unsigned int edid_status_reg; + unsigned int lcf_reg; + + unsigned int cable_det_mask; + unsigned int tdms_lock_mask; + unsigned int fmt_change_digital_mask; + + const struct adv7604_format_info *formats; + unsigned int nformats; + + void (*set_termination)(struct v4l2_subdev *sd, bool enable); + void (*setup_irqs)(struct v4l2_subdev *sd); + unsigned int (*read_hdmi_pixelclock)(struct v4l2_subdev *sd); + unsigned int (*read_cable_det)(struct v4l2_subdev *sd); + + /* 0 = AFE, 1 = HDMI */ + const struct adv7604_reg_seq *recommended_settings[2]; + unsigned int num_recommended_settings[2]; + + unsigned long page_mask; +}; + /* ********************************************************************** * @@ -60,13 +132,24 @@ MODULE_LICENSE("GPL"); * ********************************************************************** */ + struct adv7604_state { + const struct adv7604_chip_info *info; struct adv7604_platform_data pdata; + + struct gpio_desc *hpd_gpio[4]; + struct v4l2_subdev sd; - struct media_pad pad; + struct media_pad pads[ADV7604_PAD_MAX]; + unsigned int source_pad; + struct v4l2_ctrl_handler hdl; - enum adv7604_input_port selected_input; + + enum adv7604_pad selected_input; + struct v4l2_dv_timings timings; + const struct adv7604_format_info *format; + struct { u8 edid[256]; u32 present; @@ -80,18 +163,7 @@ struct adv7604_state { bool restart_stdi_once; /* i2c clients */ - struct i2c_client *i2c_avlink; - struct i2c_client *i2c_cec; - struct i2c_client *i2c_infoframe; - struct i2c_client *i2c_esdp; - struct i2c_client *i2c_dpp; - struct i2c_client *i2c_afe; - struct i2c_client *i2c_repeater; - struct i2c_client *i2c_edid; - struct i2c_client 
*i2c_hdmi; - struct i2c_client *i2c_test; - struct i2c_client *i2c_cp; - struct i2c_client *i2c_vdp; + struct i2c_client *i2c_clients[ADV7604_PAGE_MAX]; /* controls */ struct v4l2_ctrl *detect_tx_5v_ctrl; @@ -101,6 +173,11 @@ struct adv7604_state { struct v4l2_ctrl *rgb_quantization_range_ctrl; }; +static bool adv7604_has_afe(struct adv7604_state *state) +{ + return state->info->has_afe; +} + /* Supported CEA and DMT timings */ static const struct v4l2_dv_timings adv7604_timings[] = { V4L2_DV_BT_CEA_720X480P59_94, @@ -256,11 +333,6 @@ static inline struct adv7604_state *to_state(struct v4l2_subdev *sd) return container_of(sd, struct adv7604_state, sd); } -static inline struct v4l2_subdev *to_sd(struct v4l2_ctrl *ctrl) -{ - return &container_of(ctrl->handler, struct adv7604_state, hdl)->sd; -} - static inline unsigned hblanking(const struct v4l2_bt_timings *t) { return V4L2_DV_BT_BLANKING_WIDTH(t); @@ -298,14 +370,18 @@ static s32 adv_smbus_read_byte_data_check(struct i2c_client *client, return -EIO; } -static s32 adv_smbus_read_byte_data(struct i2c_client *client, u8 command) +static s32 adv_smbus_read_byte_data(struct adv7604_state *state, + enum adv7604_page page, u8 command) { - return adv_smbus_read_byte_data_check(client, command, true); + return adv_smbus_read_byte_data_check(state->i2c_clients[page], + command, true); } -static s32 adv_smbus_write_byte_data(struct i2c_client *client, - u8 command, u8 value) +static s32 adv_smbus_write_byte_data(struct adv7604_state *state, + enum adv7604_page page, u8 command, + u8 value) { + struct i2c_client *client = state->i2c_clients[page]; union i2c_smbus_data data; int err; int i; @@ -325,9 +401,11 @@ static s32 adv_smbus_write_byte_data(struct i2c_client *client, return err; } -static s32 adv_smbus_write_i2c_block_data(struct i2c_client *client, - u8 command, unsigned length, const u8 *values) +static s32 adv_smbus_write_i2c_block_data(struct adv7604_state *state, + enum adv7604_page page, u8 command, + unsigned length, const u8 *values) { + struct i2c_client *client = state->i2c_clients[page]; union i2c_smbus_data data; if (length > I2C_SMBUS_BLOCK_MAX) @@ -343,149 +421,150 @@ static s32 adv_smbus_write_i2c_block_data(struct i2c_client *client, static inline int io_read(struct v4l2_subdev *sd, u8 reg) { - struct i2c_client *client = v4l2_get_subdevdata(sd); + struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(client, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_IO, reg); } static inline int io_write(struct v4l2_subdev *sd, u8 reg, u8 val) { - struct i2c_client *client = v4l2_get_subdevdata(sd); + struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(client, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_IO, reg, val); } -static inline int io_write_and_or(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val) +static inline int io_write_clr_set(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val) { - return io_write(sd, reg, (io_read(sd, reg) & mask) | val); + return io_write(sd, reg, (io_read(sd, reg) & ~mask) | val); } static inline int avlink_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_avlink, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_AVLINK, reg); } static inline int avlink_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_avlink, reg, val); + return 
adv_smbus_write_byte_data(state, ADV7604_PAGE_AVLINK, reg, val); } static inline int cec_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_cec, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_CEC, reg); } static inline int cec_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_cec, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_CEC, reg, val); } -static inline int cec_write_and_or(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val) +static inline int cec_write_clr_set(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val) { - return cec_write(sd, reg, (cec_read(sd, reg) & mask) | val); + return cec_write(sd, reg, (cec_read(sd, reg) & ~mask) | val); } static inline int infoframe_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_infoframe, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_INFOFRAME, reg); } static inline int infoframe_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_infoframe, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_INFOFRAME, + reg, val); } static inline int esdp_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_esdp, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_ESDP, reg); } static inline int esdp_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_esdp, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_ESDP, reg, val); } static inline int dpp_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_dpp, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_DPP, reg); } static inline int dpp_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_dpp, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_DPP, reg, val); } static inline int afe_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_afe, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_AFE, reg); } static inline int afe_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_afe, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_AFE, reg, val); } static inline int rep_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_repeater, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_REP, reg); } static inline int rep_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_repeater, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_REP, reg, val); } -static inline int rep_write_and_or(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val) +static inline int rep_write_clr_set(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val) { - return rep_write(sd, reg, (rep_read(sd, reg) & mask) | val); + return rep_write(sd, reg, 
(rep_read(sd, reg) & ~mask) | val); } static inline int edid_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_edid, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_EDID, reg); } static inline int edid_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_edid, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_EDID, reg, val); } static inline int edid_read_block(struct v4l2_subdev *sd, unsigned len, u8 *val) { struct adv7604_state *state = to_state(sd); - struct i2c_client *client = state->i2c_edid; + struct i2c_client *client = state->i2c_clients[ADV7604_PAGE_EDID]; u8 msgbuf0[1] = { 0 }; u8 msgbuf1[256]; struct i2c_msg msg[2] = { @@ -518,11 +597,25 @@ static inline int edid_write_block(struct v4l2_subdev *sd, v4l2_dbg(2, debug, sd, "%s: write EDID block (%d byte)\n", __func__, len); for (i = 0; !err && i < len; i += I2C_SMBUS_BLOCK_MAX) - err = adv_smbus_write_i2c_block_data(state->i2c_edid, i, - I2C_SMBUS_BLOCK_MAX, val + i); + err = adv_smbus_write_i2c_block_data(state, ADV7604_PAGE_EDID, + i, I2C_SMBUS_BLOCK_MAX, val + i); return err; } +static void adv7604_set_hpd(struct adv7604_state *state, unsigned int hpd) +{ + unsigned int i; + + for (i = 0; i < state->info->num_dv_ports; ++i) { + if (IS_ERR(state->hpd_gpio[i])) + continue; + + gpiod_set_value_cansleep(state->hpd_gpio[i], hpd & BIT(i)); + } + + v4l2_subdev_notify(&state->sd, ADV7604_HOTPLUG, &hpd); +} + static void adv7604_delayed_work_enable_hotplug(struct work_struct *work) { struct delayed_work *dwork = to_delayed_work(work); @@ -532,73 +625,210 @@ static void adv7604_delayed_work_enable_hotplug(struct work_struct *work) v4l2_dbg(2, debug, sd, "%s: enable hotplug\n", __func__); - v4l2_subdev_notify(sd, ADV7604_HOTPLUG, (void *)&state->edid.present); + adv7604_set_hpd(state, state->edid.present); } static inline int hdmi_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_hdmi, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_HDMI, reg); +} + +static u16 hdmi_read16(struct v4l2_subdev *sd, u8 reg, u16 mask) +{ + return ((hdmi_read(sd, reg) << 8) | hdmi_read(sd, reg + 1)) & mask; } static inline int hdmi_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_hdmi, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_HDMI, reg, val); } -static inline int hdmi_write_and_or(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val) +static inline int hdmi_write_clr_set(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val) { - return hdmi_write(sd, reg, (hdmi_read(sd, reg) & mask) | val); + return hdmi_write(sd, reg, (hdmi_read(sd, reg) & ~mask) | val); } static inline int test_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_test, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_TEST, reg); } static inline int test_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_test, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_TEST, reg, val); } static inline int cp_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_cp, 
reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_CP, reg); +} + +static u16 cp_read16(struct v4l2_subdev *sd, u8 reg, u16 mask) +{ + return ((cp_read(sd, reg) << 8) | cp_read(sd, reg + 1)) & mask; } static inline int cp_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_cp, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_CP, reg, val); } -static inline int cp_write_and_or(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val) +static inline int cp_write_clr_set(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val) { - return cp_write(sd, reg, (cp_read(sd, reg) & mask) | val); + return cp_write(sd, reg, (cp_read(sd, reg) & ~mask) | val); } static inline int vdp_read(struct v4l2_subdev *sd, u8 reg) { struct adv7604_state *state = to_state(sd); - return adv_smbus_read_byte_data(state->i2c_vdp, reg); + return adv_smbus_read_byte_data(state, ADV7604_PAGE_VDP, reg); } static inline int vdp_write(struct v4l2_subdev *sd, u8 reg, u8 val) { struct adv7604_state *state = to_state(sd); - return adv_smbus_write_byte_data(state->i2c_vdp, reg, val); + return adv_smbus_write_byte_data(state, ADV7604_PAGE_VDP, reg, val); +} + +#define ADV7604_REG(page, offset) (((page) << 8) | (offset)) +#define ADV7604_REG_SEQ_TERM 0xffff + +#ifdef CONFIG_VIDEO_ADV_DEBUG +static int adv7604_read_reg(struct v4l2_subdev *sd, unsigned int reg) +{ + struct adv7604_state *state = to_state(sd); + unsigned int page = reg >> 8; + + if (!(BIT(page) & state->info->page_mask)) + return -EINVAL; + + reg &= 0xff; + + return adv_smbus_read_byte_data(state, page, reg); +} +#endif + +static int adv7604_write_reg(struct v4l2_subdev *sd, unsigned int reg, u8 val) +{ + struct adv7604_state *state = to_state(sd); + unsigned int page = reg >> 8; + + if (!(BIT(page) & state->info->page_mask)) + return -EINVAL; + + reg &= 0xff; + + return adv_smbus_write_byte_data(state, page, reg, val); +} + +static void adv7604_write_reg_seq(struct v4l2_subdev *sd, + const struct adv7604_reg_seq *reg_seq) +{ + unsigned int i; + + for (i = 0; reg_seq[i].reg != ADV7604_REG_SEQ_TERM; i++) + adv7604_write_reg(sd, reg_seq[i].reg, reg_seq[i].val); +} + +/* ----------------------------------------------------------------------------- + * Format helpers + */ + +static const struct adv7604_format_info adv7604_formats[] = { + { V4L2_MBUS_FMT_RGB888_1X24, ADV7604_OP_CH_SEL_RGB, true, false, + ADV7604_OP_MODE_SEL_SDR_444 | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_YUYV8_2X8, ADV7604_OP_CH_SEL_RGB, false, false, + ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_YVYU8_2X8, ADV7604_OP_CH_SEL_RGB, false, true, + ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_YUYV10_2X10, ADV7604_OP_CH_SEL_RGB, false, false, + ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_10BIT }, + { V4L2_MBUS_FMT_YVYU10_2X10, ADV7604_OP_CH_SEL_RGB, false, true, + ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_10BIT }, + { V4L2_MBUS_FMT_YUYV12_2X12, ADV7604_OP_CH_SEL_RGB, false, false, + ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_12BIT }, + { V4L2_MBUS_FMT_YVYU12_2X12, ADV7604_OP_CH_SEL_RGB, false, true, + ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_12BIT }, + { V4L2_MBUS_FMT_UYVY8_1X16, ADV7604_OP_CH_SEL_RBG, false, false, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_VYUY8_1X16, ADV7604_OP_CH_SEL_RBG, false, true, + ADV7604_OP_MODE_SEL_SDR_422_2X | 
ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_YUYV8_1X16, ADV7604_OP_CH_SEL_RGB, false, false, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_YVYU8_1X16, ADV7604_OP_CH_SEL_RGB, false, true, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_UYVY10_1X20, ADV7604_OP_CH_SEL_RBG, false, false, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT }, + { V4L2_MBUS_FMT_VYUY10_1X20, ADV7604_OP_CH_SEL_RBG, false, true, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT }, + { V4L2_MBUS_FMT_YUYV10_1X20, ADV7604_OP_CH_SEL_RGB, false, false, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT }, + { V4L2_MBUS_FMT_YVYU10_1X20, ADV7604_OP_CH_SEL_RGB, false, true, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT }, + { V4L2_MBUS_FMT_UYVY12_1X24, ADV7604_OP_CH_SEL_RBG, false, false, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT }, + { V4L2_MBUS_FMT_VYUY12_1X24, ADV7604_OP_CH_SEL_RBG, false, true, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT }, + { V4L2_MBUS_FMT_YUYV12_1X24, ADV7604_OP_CH_SEL_RGB, false, false, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT }, + { V4L2_MBUS_FMT_YVYU12_1X24, ADV7604_OP_CH_SEL_RGB, false, true, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT }, +}; + +static const struct adv7604_format_info adv7611_formats[] = { + { V4L2_MBUS_FMT_RGB888_1X24, ADV7604_OP_CH_SEL_RGB, true, false, + ADV7604_OP_MODE_SEL_SDR_444 | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_YUYV8_2X8, ADV7604_OP_CH_SEL_RGB, false, false, + ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_YVYU8_2X8, ADV7604_OP_CH_SEL_RGB, false, true, + ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_YUYV12_2X12, ADV7604_OP_CH_SEL_RGB, false, false, + ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_12BIT }, + { V4L2_MBUS_FMT_YVYU12_2X12, ADV7604_OP_CH_SEL_RGB, false, true, + ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_12BIT }, + { V4L2_MBUS_FMT_UYVY8_1X16, ADV7604_OP_CH_SEL_RBG, false, false, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_VYUY8_1X16, ADV7604_OP_CH_SEL_RBG, false, true, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_YUYV8_1X16, ADV7604_OP_CH_SEL_RGB, false, false, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_YVYU8_1X16, ADV7604_OP_CH_SEL_RGB, false, true, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT }, + { V4L2_MBUS_FMT_UYVY12_1X24, ADV7604_OP_CH_SEL_RBG, false, false, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT }, + { V4L2_MBUS_FMT_VYUY12_1X24, ADV7604_OP_CH_SEL_RBG, false, true, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT }, + { V4L2_MBUS_FMT_YUYV12_1X24, ADV7604_OP_CH_SEL_RGB, false, false, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT }, + { V4L2_MBUS_FMT_YVYU12_1X24, ADV7604_OP_CH_SEL_RGB, false, true, + ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT }, +}; + +static const struct adv7604_format_info * +adv7604_format_info(struct adv7604_state *state, enum v4l2_mbus_pixelcode code) +{ + unsigned int i; + + for (i = 0; i < state->info->nformats; ++i) { + if (state->info->formats[i].code == code) + return &state->info->formats[i]; + } + + return NULL; } /* ----------------------------------------------------------------------- */ @@ -607,18 +837,18 @@ static inline 
bool is_analog_input(struct v4l2_subdev *sd) { struct adv7604_state *state = to_state(sd); - return state->selected_input == ADV7604_INPUT_VGA_RGB || - state->selected_input == ADV7604_INPUT_VGA_COMP; + return state->selected_input == ADV7604_PAD_VGA_RGB || + state->selected_input == ADV7604_PAD_VGA_COMP; } static inline bool is_digital_input(struct v4l2_subdev *sd) { struct adv7604_state *state = to_state(sd); - return state->selected_input == ADV7604_INPUT_HDMI_PORT_A || - state->selected_input == ADV7604_INPUT_HDMI_PORT_B || - state->selected_input == ADV7604_INPUT_HDMI_PORT_C || - state->selected_input == ADV7604_INPUT_HDMI_PORT_D; + return state->selected_input == ADV7604_PAD_HDMI_PORT_A || + state->selected_input == ADV7604_PAD_HDMI_PORT_B || + state->selected_input == ADV7604_PAD_HDMI_PORT_C || + state->selected_input == ADV7604_PAD_HDMI_PORT_D; } /* ----------------------------------------------------------------------- */ @@ -644,119 +874,61 @@ static void adv7604_inv_register(struct v4l2_subdev *sd) static int adv7604_g_register(struct v4l2_subdev *sd, struct v4l2_dbg_register *reg) { - reg->size = 1; - switch (reg->reg >> 8) { - case 0: - reg->val = io_read(sd, reg->reg & 0xff); - break; - case 1: - reg->val = avlink_read(sd, reg->reg & 0xff); - break; - case 2: - reg->val = cec_read(sd, reg->reg & 0xff); - break; - case 3: - reg->val = infoframe_read(sd, reg->reg & 0xff); - break; - case 4: - reg->val = esdp_read(sd, reg->reg & 0xff); - break; - case 5: - reg->val = dpp_read(sd, reg->reg & 0xff); - break; - case 6: - reg->val = afe_read(sd, reg->reg & 0xff); - break; - case 7: - reg->val = rep_read(sd, reg->reg & 0xff); - break; - case 8: - reg->val = edid_read(sd, reg->reg & 0xff); - break; - case 9: - reg->val = hdmi_read(sd, reg->reg & 0xff); - break; - case 0xa: - reg->val = test_read(sd, reg->reg & 0xff); - break; - case 0xb: - reg->val = cp_read(sd, reg->reg & 0xff); - break; - case 0xc: - reg->val = vdp_read(sd, reg->reg & 0xff); - break; - default: + int ret; + + ret = adv7604_read_reg(sd, reg->reg); + if (ret < 0) { v4l2_info(sd, "Register %03llx not supported\n", reg->reg); adv7604_inv_register(sd); - break; + return ret; } + + reg->size = 1; + reg->val = ret; + return 0; } static int adv7604_s_register(struct v4l2_subdev *sd, const struct v4l2_dbg_register *reg) { - u8 val = reg->val & 0xff; + int ret; - switch (reg->reg >> 8) { - case 0: - io_write(sd, reg->reg & 0xff, val); - break; - case 1: - avlink_write(sd, reg->reg & 0xff, val); - break; - case 2: - cec_write(sd, reg->reg & 0xff, val); - break; - case 3: - infoframe_write(sd, reg->reg & 0xff, val); - break; - case 4: - esdp_write(sd, reg->reg & 0xff, val); - break; - case 5: - dpp_write(sd, reg->reg & 0xff, val); - break; - case 6: - afe_write(sd, reg->reg & 0xff, val); - break; - case 7: - rep_write(sd, reg->reg & 0xff, val); - break; - case 8: - edid_write(sd, reg->reg & 0xff, val); - break; - case 9: - hdmi_write(sd, reg->reg & 0xff, val); - break; - case 0xa: - test_write(sd, reg->reg & 0xff, val); - break; - case 0xb: - cp_write(sd, reg->reg & 0xff, val); - break; - case 0xc: - vdp_write(sd, reg->reg & 0xff, val); - break; - default: + ret = adv7604_write_reg(sd, reg->reg, reg->val); + if (ret < 0) { v4l2_info(sd, "Register %03llx not supported\n", reg->reg); adv7604_inv_register(sd); - break; + return ret; } + return 0; } #endif +static unsigned int adv7604_read_cable_det(struct v4l2_subdev *sd) +{ + u8 value = io_read(sd, 0x6f); + + return ((value & 0x10) >> 4) + | ((value & 0x08) >> 2) + | ((value & 
0x04) << 0) + | ((value & 0x02) << 2); +} + +static unsigned int adv7611_read_cable_det(struct v4l2_subdev *sd) +{ + u8 value = io_read(sd, 0x6f); + + return value & 1; +} + static int adv7604_s_detect_tx_5v_ctrl(struct v4l2_subdev *sd) { struct adv7604_state *state = to_state(sd); - u8 reg_io_6f = io_read(sd, 0x6f); + const struct adv7604_chip_info *info = state->info; return v4l2_ctrl_s_ctrl(state->detect_tx_5v_ctrl, - ((reg_io_6f & 0x10) >> 4) | - ((reg_io_6f & 0x08) >> 2) | - (reg_io_6f & 0x04) | - ((reg_io_6f & 0x02) << 2)); + info->read_cable_det(sd)); } static int find_and_set_predefined_video_timings(struct v4l2_subdev *sd, @@ -787,11 +959,13 @@ static int configure_predefined_video_timings(struct v4l2_subdev *sd, v4l2_dbg(1, debug, sd, "%s", __func__); - /* reset to default values */ - io_write(sd, 0x16, 0x43); - io_write(sd, 0x17, 0x5a); + if (adv7604_has_afe(state)) { + /* reset to default values */ + io_write(sd, 0x16, 0x43); + io_write(sd, 0x17, 0x5a); + } /* disable embedded syncs for auto graphics mode */ - cp_write_and_or(sd, 0x81, 0xef, 0x00); + cp_write_clr_set(sd, 0x81, 0x10, 0x00); cp_write(sd, 0x8f, 0x00); cp_write(sd, 0x90, 0x00); cp_write(sd, 0xa2, 0x00); @@ -829,7 +1003,6 @@ static void configure_custom_video_timings(struct v4l2_subdev *sd, const struct v4l2_bt_timings *bt) { struct adv7604_state *state = to_state(sd); - struct i2c_client *client = v4l2_get_subdevdata(sd); u32 width = htotal(bt); u32 height = vtotal(bt); u16 cp_start_sav = bt->hsync + bt->hbackporch - 4; @@ -850,12 +1023,13 @@ static void configure_custom_video_timings(struct v4l2_subdev *sd, io_write(sd, 0x00, 0x07); /* video std */ io_write(sd, 0x01, 0x02); /* prim mode */ /* enable embedded syncs for auto graphics mode */ - cp_write_and_or(sd, 0x81, 0xef, 0x10); + cp_write_clr_set(sd, 0x81, 0x10, 0x10); /* Should only be set in auto-graphics mode [REF_02, p. 91-92] */ /* setup PLL_DIV_MAN_EN and PLL_DIV_RATIO */ /* IO-map reg. 
0x16 and 0x17 should be written in sequence */ - if (adv_smbus_write_i2c_block_data(client, 0x16, 2, pll)) + if (adv_smbus_write_i2c_block_data(state, ADV7604_PAGE_IO, + 0x16, 2, pll)) v4l2_err(sd, "writing to reg 0x16 and 0x17 failed\n"); /* active video - horizontal timing */ @@ -906,7 +1080,8 @@ static void adv7604_set_offset(struct v4l2_subdev *sd, bool auto_offset, u16 off offset_buf[3] = offset_c & 0x0ff; /* Registers must be written in this order with no i2c access in between */ - if (adv_smbus_write_i2c_block_data(state->i2c_cp, 0x77, 4, offset_buf)) + if (adv_smbus_write_i2c_block_data(state, ADV7604_PAGE_CP, + 0x77, 4, offset_buf)) v4l2_err(sd, "%s: i2c error writing to CP reg 0x77, 0x78, 0x79, 0x7a\n", __func__); } @@ -935,7 +1110,8 @@ static void adv7604_set_gain(struct v4l2_subdev *sd, bool auto_gain, u16 gain_a, gain_buf[3] = ((gain_c & 0x0ff)); /* Registers must be written in this order with no i2c access in between */ - if (adv_smbus_write_i2c_block_data(state->i2c_cp, 0x73, 4, gain_buf)) + if (adv_smbus_write_i2c_block_data(state, ADV7604_PAGE_CP, + 0x73, 4, gain_buf)) v4l2_err(sd, "%s: i2c error writing to CP reg 0x73, 0x74, 0x75, 0x76\n", __func__); } @@ -954,24 +1130,24 @@ static void set_rgb_quantization_range(struct v4l2_subdev *sd) switch (state->rgb_quantization_range) { case V4L2_DV_RGB_RANGE_AUTO: - if (state->selected_input == ADV7604_INPUT_VGA_RGB) { + if (state->selected_input == ADV7604_PAD_VGA_RGB) { /* Receiving analog RGB signal * Set RGB full range (0-255) */ - io_write_and_or(sd, 0x02, 0x0f, 0x10); + io_write_clr_set(sd, 0x02, 0xf0, 0x10); break; } - if (state->selected_input == ADV7604_INPUT_VGA_COMP) { + if (state->selected_input == ADV7604_PAD_VGA_COMP) { /* Receiving analog YPbPr signal * Set automode */ - io_write_and_or(sd, 0x02, 0x0f, 0xf0); + io_write_clr_set(sd, 0x02, 0xf0, 0xf0); break; } if (hdmi_signal) { /* Receiving HDMI signal * Set automode */ - io_write_and_or(sd, 0x02, 0x0f, 0xf0); + io_write_clr_set(sd, 0x02, 0xf0, 0xf0); break; } @@ -980,10 +1156,10 @@ static void set_rgb_quantization_range(struct v4l2_subdev *sd) * input format (CE/IT) in automatic mode */ if (state->timings.bt.standards & V4L2_DV_BT_STD_CEA861) { /* RGB limited range (16-235) */ - io_write_and_or(sd, 0x02, 0x0f, 0x00); + io_write_clr_set(sd, 0x02, 0xf0, 0x00); } else { /* RGB full range (0-255) */ - io_write_and_or(sd, 0x02, 0x0f, 0x10); + io_write_clr_set(sd, 0x02, 0xf0, 0x10); if (is_digital_input(sd) && rgb_output) { adv7604_set_offset(sd, false, 0x40, 0x40, 0x40); @@ -994,25 +1170,25 @@ static void set_rgb_quantization_range(struct v4l2_subdev *sd) } break; case V4L2_DV_RGB_RANGE_LIMITED: - if (state->selected_input == ADV7604_INPUT_VGA_COMP) { + if (state->selected_input == ADV7604_PAD_VGA_COMP) { /* YCrCb limited range (16-235) */ - io_write_and_or(sd, 0x02, 0x0f, 0x20); + io_write_clr_set(sd, 0x02, 0xf0, 0x20); break; } /* RGB limited range (16-235) */ - io_write_and_or(sd, 0x02, 0x0f, 0x00); + io_write_clr_set(sd, 0x02, 0xf0, 0x00); break; case V4L2_DV_RGB_RANGE_FULL: - if (state->selected_input == ADV7604_INPUT_VGA_COMP) { + if (state->selected_input == ADV7604_PAD_VGA_COMP) { /* YCrCb full range (0-255) */ - io_write_and_or(sd, 0x02, 0x0f, 0x60); + io_write_clr_set(sd, 0x02, 0xf0, 0x60); break; } /* RGB full range (0-255) */ - io_write_and_or(sd, 0x02, 0x0f, 0x10); + io_write_clr_set(sd, 0x02, 0xf0, 0x10); if (is_analog_input(sd) || hdmi_signal) break; @@ -1030,7 +1206,9 @@ static void set_rgb_quantization_range(struct v4l2_subdev *sd) static int 
adv7604_s_ctrl(struct v4l2_ctrl *ctrl) { - struct v4l2_subdev *sd = to_sd(ctrl); + struct v4l2_subdev *sd = + &container_of(ctrl->handler, struct adv7604_state, hdl)->sd; + struct adv7604_state *state = to_state(sd); switch (ctrl->id) { @@ -1051,6 +1229,8 @@ static int adv7604_s_ctrl(struct v4l2_ctrl *ctrl) set_rgb_quantization_range(sd); return 0; case V4L2_CID_ADV_RX_ANALOG_SAMPLING_PHASE: + if (!adv7604_has_afe(state)) + return -EINVAL; /* Set the analog sampling phase. This is needed to find the best sampling phase for analog video: an application or driver has to try a number of phases and analyze the picture @@ -1060,7 +1240,7 @@ static int adv7604_s_ctrl(struct v4l2_ctrl *ctrl) case V4L2_CID_ADV_RX_FREE_RUN_COLOR_MANUAL: /* Use the default blue color for free running mode, or supply your own. */ - cp_write_and_or(sd, 0xbf, ~0x04, (ctrl->val << 2)); + cp_write_clr_set(sd, 0xbf, 0x04, ctrl->val << 2); return 0; case V4L2_CID_ADV_RX_FREE_RUN_COLOR: cp_write(sd, 0xc0, (ctrl->val & 0xff0000) >> 16); @@ -1088,7 +1268,10 @@ static inline bool no_signal_tmds(struct v4l2_subdev *sd) static inline bool no_lock_tmds(struct v4l2_subdev *sd) { - return (io_read(sd, 0x6a) & 0xe0) != 0xe0; + struct adv7604_state *state = to_state(sd); + const struct adv7604_chip_info *info = state->info; + + return (io_read(sd, 0x6a) & info->tdms_lock_mask) != info->tdms_lock_mask; } static inline bool is_hdmi(struct v4l2_subdev *sd) @@ -1098,6 +1281,15 @@ static inline bool is_hdmi(struct v4l2_subdev *sd) static inline bool no_lock_sspd(struct v4l2_subdev *sd) { + struct adv7604_state *state = to_state(sd); + + /* + * Chips without a AFE don't expose registers for the SSPD, so just assume + * that we have a lock. + */ + if (adv7604_has_afe(state)) + return false; + /* TODO channel 2 */ return ((cp_read(sd, 0xb5) & 0xd0) != 0xd0); } @@ -1127,6 +1319,11 @@ static inline bool no_signal(struct v4l2_subdev *sd) static inline bool no_lock_cp(struct v4l2_subdev *sd) { + struct adv7604_state *state = to_state(sd); + + if (!adv7604_has_afe(state)) + return false; + /* CP has detected a non standard number of lines on the incoming video compared to what it is configured to receive by s_dv_timings */ return io_read(sd, 0x12) & 0x01; @@ -1195,28 +1392,40 @@ static int stdi2dv_timings(struct v4l2_subdev *sd, return -1; } + static int read_stdi(struct v4l2_subdev *sd, struct stdi_readback *stdi) { + struct adv7604_state *state = to_state(sd); + const struct adv7604_chip_info *info = state->info; + u8 polarity; + if (no_lock_stdi(sd) || no_lock_sspd(sd)) { v4l2_dbg(2, debug, sd, "%s: STDI and/or SSPD not locked\n", __func__); return -1; } /* read STDI */ - stdi->bl = ((cp_read(sd, 0xb1) & 0x3f) << 8) | cp_read(sd, 0xb2); - stdi->lcf = ((cp_read(sd, 0xb3) & 0x7) << 8) | cp_read(sd, 0xb4); + stdi->bl = cp_read16(sd, 0xb1, 0x3fff); + stdi->lcf = cp_read16(sd, info->lcf_reg, 0x7ff); stdi->lcvs = cp_read(sd, 0xb3) >> 3; stdi->interlaced = io_read(sd, 0x12) & 0x10; - /* read SSPD */ - if ((cp_read(sd, 0xb5) & 0x03) == 0x01) { - stdi->hs_pol = ((cp_read(sd, 0xb5) & 0x10) ? - ((cp_read(sd, 0xb5) & 0x08) ? '+' : '-') : 'x'); - stdi->vs_pol = ((cp_read(sd, 0xb5) & 0x40) ? - ((cp_read(sd, 0xb5) & 0x20) ? '+' : '-') : 'x'); + if (adv7604_has_afe(state)) { + /* read SSPD */ + polarity = cp_read(sd, 0xb5); + if ((polarity & 0x03) == 0x01) { + stdi->hs_pol = polarity & 0x10 + ? (polarity & 0x08 ? '+' : '-') : 'x'; + stdi->vs_pol = polarity & 0x40 + ? (polarity & 0x20 ? 
'+' : '-') : 'x'; + } else { + stdi->hs_pol = 'x'; + stdi->vs_pol = 'x'; + } } else { - stdi->hs_pol = 'x'; - stdi->vs_pol = 'x'; + polarity = hdmi_read(sd, 0x05); + stdi->hs_pol = polarity & 0x20 ? '+' : '-'; + stdi->vs_pol = polarity & 0x10 ? '+' : '-'; } if (no_lock_stdi(sd) || no_lock_sspd(sd)) { @@ -1243,8 +1452,14 @@ static int read_stdi(struct v4l2_subdev *sd, struct stdi_readback *stdi) static int adv7604_enum_dv_timings(struct v4l2_subdev *sd, struct v4l2_enum_dv_timings *timings) { + struct adv7604_state *state = to_state(sd); + if (timings->index >= ARRAY_SIZE(adv7604_timings) - 1) return -EINVAL; + + if (timings->pad >= state->source_pad) + return -EINVAL; + memset(timings->reserved, 0, sizeof(timings->reserved)); timings->timings = adv7604_timings[timings->index]; return 0; @@ -1253,14 +1468,30 @@ static int adv7604_enum_dv_timings(struct v4l2_subdev *sd, static int adv7604_dv_timings_cap(struct v4l2_subdev *sd, struct v4l2_dv_timings_cap *cap) { + struct adv7604_state *state = to_state(sd); + + if (cap->pad >= state->source_pad) + return -EINVAL; + cap->type = V4L2_DV_BT_656_1120; cap->bt.max_width = 1920; cap->bt.max_height = 1200; cap->bt.min_pixelclock = 25000000; - if (is_digital_input(sd)) + + switch (cap->pad) { + case ADV7604_PAD_HDMI_PORT_A: + case ADV7604_PAD_HDMI_PORT_B: + case ADV7604_PAD_HDMI_PORT_C: + case ADV7604_PAD_HDMI_PORT_D: cap->bt.max_pixelclock = 225000000; - else + break; + case ADV7604_PAD_VGA_RGB: + case ADV7604_PAD_VGA_COMP: + default: cap->bt.max_pixelclock = 170000000; + break; + } + cap->bt.standards = V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT | V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT; cap->bt.capabilities = V4L2_DV_BT_CAP_PROGRESSIVE | @@ -1284,10 +1515,43 @@ static void adv7604_fill_optional_dv_timings_fields(struct v4l2_subdev *sd, } } +static unsigned int adv7604_read_hdmi_pixelclock(struct v4l2_subdev *sd) +{ + unsigned int freq; + int a, b; + + a = hdmi_read(sd, 0x06); + b = hdmi_read(sd, 0x3b); + if (a < 0 || b < 0) + return 0; + freq = a * 1000000 + ((b & 0x30) >> 4) * 250000; + + if (is_hdmi(sd)) { + /* adjust for deep color mode */ + unsigned bits_per_channel = ((hdmi_read(sd, 0x0b) & 0x60) >> 4) + 8; + + freq = freq * 8 / bits_per_channel; + } + + return freq; +} + +static unsigned int adv7611_read_hdmi_pixelclock(struct v4l2_subdev *sd) +{ + int a, b; + + a = hdmi_read(sd, 0x51); + b = hdmi_read(sd, 0x52); + if (a < 0 || b < 0) + return 0; + return ((a << 1) | (b >> 7)) * 1000000 + (b & 0x7f) * 1000000 / 128; +} + static int adv7604_query_dv_timings(struct v4l2_subdev *sd, struct v4l2_dv_timings *timings) { struct adv7604_state *state = to_state(sd); + const struct adv7604_chip_info *info = state->info; struct v4l2_bt_timings *bt = &timings->bt; struct stdi_readback stdi; @@ -1311,44 +1575,25 @@ static int adv7604_query_dv_timings(struct v4l2_subdev *sd, V4L2_DV_INTERLACED : V4L2_DV_PROGRESSIVE; if (is_digital_input(sd)) { - uint32_t freq; - timings->type = V4L2_DV_BT_656_1120; - bt->width = (hdmi_read(sd, 0x07) & 0x0f) * 256 + hdmi_read(sd, 0x08); - bt->height = (hdmi_read(sd, 0x09) & 0x0f) * 256 + hdmi_read(sd, 0x0a); - freq = (hdmi_read(sd, 0x06) * 1000000) + - ((hdmi_read(sd, 0x3b) & 0x30) >> 4) * 250000; - if (is_hdmi(sd)) { - /* adjust for deep color mode */ - unsigned bits_per_channel = ((hdmi_read(sd, 0x0b) & 0x60) >> 4) + 8; - - freq = freq * 8 / bits_per_channel; - } - bt->pixelclock = freq; - bt->hfrontporch = (hdmi_read(sd, 0x20) & 0x03) * 256 + - hdmi_read(sd, 0x21); - bt->hsync = (hdmi_read(sd, 0x22) & 0x03) * 256 + - 
hdmi_read(sd, 0x23); - bt->hbackporch = (hdmi_read(sd, 0x24) & 0x03) * 256 + - hdmi_read(sd, 0x25); - bt->vfrontporch = ((hdmi_read(sd, 0x2a) & 0x1f) * 256 + - hdmi_read(sd, 0x2b)) / 2; - bt->vsync = ((hdmi_read(sd, 0x2e) & 0x1f) * 256 + - hdmi_read(sd, 0x2f)) / 2; - bt->vbackporch = ((hdmi_read(sd, 0x32) & 0x1f) * 256 + - hdmi_read(sd, 0x33)) / 2; + /* FIXME: All masks are incorrect for ADV7611 */ + bt->width = hdmi_read16(sd, 0x07, 0xfff); + bt->height = hdmi_read16(sd, 0x09, 0xfff); + bt->pixelclock = info->read_hdmi_pixelclock(sd); + bt->hfrontporch = hdmi_read16(sd, 0x20, 0x3ff); + bt->hsync = hdmi_read16(sd, 0x22, 0x3ff); + bt->hbackporch = hdmi_read16(sd, 0x24, 0x3ff); + bt->vfrontporch = hdmi_read16(sd, 0x2a, 0x1fff) / 2; + bt->vsync = hdmi_read16(sd, 0x2e, 0x1fff) / 2; + bt->vbackporch = hdmi_read16(sd, 0x32, 0x1fff) / 2; bt->polarities = ((hdmi_read(sd, 0x05) & 0x10) ? V4L2_DV_VSYNC_POS_POL : 0) | ((hdmi_read(sd, 0x05) & 0x20) ? V4L2_DV_HSYNC_POS_POL : 0); if (bt->interlaced == V4L2_DV_INTERLACED) { - bt->height += (hdmi_read(sd, 0x0b) & 0x0f) * 256 + - hdmi_read(sd, 0x0c); - bt->il_vfrontporch = ((hdmi_read(sd, 0x2c) & 0x1f) * 256 + - hdmi_read(sd, 0x2d)) / 2; - bt->il_vsync = ((hdmi_read(sd, 0x30) & 0x1f) * 256 + - hdmi_read(sd, 0x31)) / 2; - bt->vbackporch = ((hdmi_read(sd, 0x34) & 0x1f) * 256 + - hdmi_read(sd, 0x35)) / 2; + bt->height += hdmi_read16(sd, 0x0b, 0xfff); + bt->il_vfrontporch = hdmi_read16(sd, 0x2c, 0x1fff) / 2; + bt->il_vsync = hdmi_read16(sd, 0x30, 0x1fff) / 2; + bt->vbackporch = hdmi_read16(sd, 0x34, 0x1fff) / 2; } adv7604_fill_optional_dv_timings_fields(sd, timings); } else { @@ -1378,11 +1623,11 @@ static int adv7604_query_dv_timings(struct v4l2_subdev *sd, v4l2_dbg(1, debug, sd, "%s: restart STDI\n", __func__); /* TODO restart STDI for Sync Channel 2 */ /* enter one-shot mode */ - cp_write_and_or(sd, 0x86, 0xf9, 0x00); + cp_write_clr_set(sd, 0x86, 0x06, 0x00); /* trigger STDI restart */ - cp_write_and_or(sd, 0x86, 0xf9, 0x04); + cp_write_clr_set(sd, 0x86, 0x06, 0x04); /* reset to continuous mode */ - cp_write_and_or(sd, 0x86, 0xf9, 0x02); + cp_write_clr_set(sd, 0x86, 0x06, 0x02); state->restart_stdi_once = false; return -ENOLINK; } @@ -1441,7 +1686,7 @@ static int adv7604_s_dv_timings(struct v4l2_subdev *sd, state->timings = *timings; - cp_write(sd, 0x91, bt->interlaced ? 0x50 : 0x10); + cp_write_clr_set(sd, 0x91, 0x40, bt->interlaced ? 0x40 : 0x00); /* Use prim_mode and vid_std when available */ err = configure_predefined_video_timings(sd, timings); @@ -1468,6 +1713,16 @@ static int adv7604_g_dv_timings(struct v4l2_subdev *sd, return 0; } +static void adv7604_set_termination(struct v4l2_subdev *sd, bool enable) +{ + hdmi_write(sd, 0x01, enable ? 0x00 : 0x78); +} + +static void adv7611_set_termination(struct v4l2_subdev *sd, bool enable) +{ + hdmi_write(sd, 0x83, enable ? 
0xfe : 0xff); +} + static void enable_input(struct v4l2_subdev *sd) { struct adv7604_state *state = to_state(sd); @@ -1475,10 +1730,10 @@ static void enable_input(struct v4l2_subdev *sd) if (is_analog_input(sd)) { io_write(sd, 0x15, 0xb0); /* Disable Tristate of Pins (no audio) */ } else if (is_digital_input(sd)) { - hdmi_write_and_or(sd, 0x00, 0xfc, state->selected_input); - hdmi_write(sd, 0x01, 0x00); /* Enable HDMI clock terminators */ + hdmi_write_clr_set(sd, 0x00, 0x03, state->selected_input); + state->info->set_termination(sd, true); io_write(sd, 0x15, 0xa0); /* Disable Tristate of Pins */ - hdmi_write_and_or(sd, 0x1a, 0xef, 0x00); /* Unmute audio */ + hdmi_write_clr_set(sd, 0x1a, 0x10, 0x00); /* Unmute audio */ } else { v4l2_dbg(2, debug, sd, "%s: Unknown port %d selected\n", __func__, state->selected_input); @@ -1487,67 +1742,36 @@ static void enable_input(struct v4l2_subdev *sd) static void disable_input(struct v4l2_subdev *sd) { - hdmi_write_and_or(sd, 0x1a, 0xef, 0x10); /* Mute audio */ + struct adv7604_state *state = to_state(sd); + + hdmi_write_clr_set(sd, 0x1a, 0x10, 0x10); /* Mute audio */ msleep(16); /* 512 samples with >= 32 kHz sample rate [REF_03, c. 7.16.10] */ io_write(sd, 0x15, 0xbe); /* Tristate all outputs from video core */ - hdmi_write(sd, 0x01, 0x78); /* Disable HDMI clock terminators */ + state->info->set_termination(sd, false); } static void select_input(struct v4l2_subdev *sd) { struct adv7604_state *state = to_state(sd); + const struct adv7604_chip_info *info = state->info; if (is_analog_input(sd)) { - /* reset ADI recommended settings for HDMI: */ - /* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 4. */ - hdmi_write(sd, 0x0d, 0x04); /* HDMI filter optimization */ - hdmi_write(sd, 0x3d, 0x00); /* DDC bus active pull-up control */ - hdmi_write(sd, 0x3e, 0x74); /* TMDS PLL optimization */ - hdmi_write(sd, 0x4e, 0x3b); /* TMDS PLL optimization */ - hdmi_write(sd, 0x57, 0x74); /* TMDS PLL optimization */ - hdmi_write(sd, 0x58, 0x63); /* TMDS PLL optimization */ - hdmi_write(sd, 0x8d, 0x18); /* equaliser */ - hdmi_write(sd, 0x8e, 0x34); /* equaliser */ - hdmi_write(sd, 0x93, 0x88); /* equaliser */ - hdmi_write(sd, 0x94, 0x2e); /* equaliser */ - hdmi_write(sd, 0x96, 0x00); /* enable automatic EQ changing */ + adv7604_write_reg_seq(sd, info->recommended_settings[0]); afe_write(sd, 0x00, 0x08); /* power up ADC */ afe_write(sd, 0x01, 0x06); /* power up Analog Front End */ afe_write(sd, 0xc8, 0x00); /* phase control */ - - /* set ADI recommended settings for digitizer */ - /* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 17. */ - afe_write(sd, 0x12, 0x7b); /* ADC noise shaping filter controls */ - afe_write(sd, 0x0c, 0x1f); /* CP core gain controls */ - cp_write(sd, 0x3e, 0x04); /* CP core pre-gain control */ - cp_write(sd, 0xc3, 0x39); /* CP coast control. Graphics mode */ - cp_write(sd, 0x40, 0x5c); /* CP core pre-gain control. Graphics mode */ } else if (is_digital_input(sd)) { hdmi_write(sd, 0x00, state->selected_input & 0x03); - /* set ADI recommended settings for HDMI: */ - /* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 4. 
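select_input() above now pulls the per-chip ADI recommended settings from info->recommended_settings[] instead of open-coding the register writes. The table walker, adv7604_write_reg_seq(), is defined outside this excerpt; a rough sketch, assuming each sequence entry simply pairs a paged register address with a value (the field names .reg/.val and the paged write helper adv7604_write_reg() are placeholders for illustration only):

/* Illustrative sketch only: walk a table terminated by ADV7604_REG_SEQ_TERM
 * and write each paged register/value pair. */
static void adv7604_write_reg_seq(struct v4l2_subdev *sd,
				  const struct adv7604_reg_seq *seq)
{
	unsigned int i;

	for (i = 0; seq[i].reg != ADV7604_REG_SEQ_TERM; i++)
		adv7604_write_reg(sd, seq[i].reg, seq[i].val);	/* placeholder paged write helper */
}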
*/ - hdmi_write(sd, 0x0d, 0x84); /* HDMI filter optimization */ - hdmi_write(sd, 0x3d, 0x10); /* DDC bus active pull-up control */ - hdmi_write(sd, 0x3e, 0x39); /* TMDS PLL optimization */ - hdmi_write(sd, 0x4e, 0x3b); /* TMDS PLL optimization */ - hdmi_write(sd, 0x57, 0xb6); /* TMDS PLL optimization */ - hdmi_write(sd, 0x58, 0x03); /* TMDS PLL optimization */ - hdmi_write(sd, 0x8d, 0x18); /* equaliser */ - hdmi_write(sd, 0x8e, 0x34); /* equaliser */ - hdmi_write(sd, 0x93, 0x8b); /* equaliser */ - hdmi_write(sd, 0x94, 0x2d); /* equaliser */ - hdmi_write(sd, 0x96, 0x01); /* enable automatic EQ changing */ - - afe_write(sd, 0x00, 0xff); /* power down ADC */ - afe_write(sd, 0x01, 0xfe); /* power down Analog Front End */ - afe_write(sd, 0xc8, 0x40); /* phase control */ - - /* reset ADI recommended settings for digitizer */ - /* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 17. */ - afe_write(sd, 0x12, 0xfb); /* ADC noise shaping filter controls */ - afe_write(sd, 0x0c, 0x0d); /* CP core gain controls */ + adv7604_write_reg_seq(sd, info->recommended_settings[1]); + + if (adv7604_has_afe(state)) { + afe_write(sd, 0x00, 0xff); /* power down ADC */ + afe_write(sd, 0x01, 0xfe); /* power down Analog Front End */ + afe_write(sd, 0xc8, 0x40); /* phase control */ + } + cp_write(sd, 0x3e, 0x00); /* CP core pre-gain control */ cp_write(sd, 0xc3, 0x39); /* CP coast control. Graphics mode */ cp_write(sd, 0x40, 0x80); /* CP core pre-gain control. Graphics mode */ @@ -1568,6 +1792,9 @@ static int adv7604_s_routing(struct v4l2_subdev *sd, if (input == state->selected_input) return 0; + if (input > state->info->max_port) + return -EINVAL; + state->selected_input = input; disable_input(sd); @@ -1579,34 +1806,139 @@ static int adv7604_s_routing(struct v4l2_subdev *sd, return 0; } -static int adv7604_enum_mbus_fmt(struct v4l2_subdev *sd, unsigned int index, - enum v4l2_mbus_pixelcode *code) +static int adv7604_enum_mbus_code(struct v4l2_subdev *sd, + struct v4l2_subdev_fh *fh, + struct v4l2_subdev_mbus_code_enum *code) +{ + struct adv7604_state *state = to_state(sd); + + if (code->index >= state->info->nformats) + return -EINVAL; + + code->code = state->info->formats[code->index].code; + + return 0; +} + +static void adv7604_fill_format(struct adv7604_state *state, + struct v4l2_mbus_framefmt *format) +{ + memset(format, 0, sizeof(*format)); + + format->width = state->timings.bt.width; + format->height = state->timings.bt.height; + format->field = V4L2_FIELD_NONE; + + if (state->timings.bt.standards & V4L2_DV_BT_STD_CEA861) + format->colorspace = (state->timings.bt.height <= 576) ? + V4L2_COLORSPACE_SMPTE170M : V4L2_COLORSPACE_REC709; +} + +/* + * Compute the op_ch_sel value required to obtain on the bus the component order + * corresponding to the selected format taking into account bus reordering + * applied by the board at the output of the device. + * + * The following table gives the op_ch_value from the format component order + * (expressed as op_ch_sel value in column) and the bus reordering (expressed as + * adv7604_bus_order value in row). 
+ * + * | GBR(0) GRB(1) BGR(2) RGB(3) BRG(4) RBG(5) + * ----------+------------------------------------------------- + * RGB (NOP) | GBR GRB BGR RGB BRG RBG + * GRB (1-2) | BGR RGB GBR GRB RBG BRG + * RBG (2-3) | GRB GBR BRG RBG BGR RGB + * BGR (1-3) | RBG BRG RGB BGR GRB GBR + * BRG (ROR) | BRG RBG GRB GBR RGB BGR + * GBR (ROL) | RGB BGR RBG BRG GBR GRB + */ +static unsigned int adv7604_op_ch_sel(struct adv7604_state *state) +{ +#define _SEL(a,b,c,d,e,f) { \ + ADV7604_OP_CH_SEL_##a, ADV7604_OP_CH_SEL_##b, ADV7604_OP_CH_SEL_##c, \ + ADV7604_OP_CH_SEL_##d, ADV7604_OP_CH_SEL_##e, ADV7604_OP_CH_SEL_##f } +#define _BUS(x) [ADV7604_BUS_ORDER_##x] + + static const unsigned int op_ch_sel[6][6] = { + _BUS(RGB) /* NOP */ = _SEL(GBR, GRB, BGR, RGB, BRG, RBG), + _BUS(GRB) /* 1-2 */ = _SEL(BGR, RGB, GBR, GRB, RBG, BRG), + _BUS(RBG) /* 2-3 */ = _SEL(GRB, GBR, BRG, RBG, BGR, RGB), + _BUS(BGR) /* 1-3 */ = _SEL(RBG, BRG, RGB, BGR, GRB, GBR), + _BUS(BRG) /* ROR */ = _SEL(BRG, RBG, GRB, GBR, RGB, BGR), + _BUS(GBR) /* ROL */ = _SEL(RGB, BGR, RBG, BRG, GBR, GRB), + }; + + return op_ch_sel[state->pdata.bus_order][state->format->op_ch_sel >> 5]; +} + +static void adv7604_setup_format(struct adv7604_state *state) +{ + struct v4l2_subdev *sd = &state->sd; + + io_write_clr_set(sd, 0x02, 0x02, + state->format->rgb_out ? ADV7604_RGB_OUT : 0); + io_write(sd, 0x03, state->format->op_format_sel | + state->pdata.op_format_mode_sel); + io_write_clr_set(sd, 0x04, 0xe0, adv7604_op_ch_sel(state)); + io_write_clr_set(sd, 0x05, 0x01, + state->format->swap_cb_cr ? ADV7604_OP_SWAP_CB_CR : 0); +} + +static int adv7604_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *format) { - if (index) + struct adv7604_state *state = to_state(sd); + + if (format->pad != state->source_pad) return -EINVAL; - /* Good enough for now */ - *code = V4L2_MBUS_FMT_FIXED; + + adv7604_fill_format(state, &format->format); + + if (format->which == V4L2_SUBDEV_FORMAT_TRY) { + struct v4l2_mbus_framefmt *fmt; + + fmt = v4l2_subdev_get_try_format(fh, format->pad); + format->format.code = fmt->code; + } else { + format->format.code = state->format->code; + } + return 0; } -static int adv7604_g_mbus_fmt(struct v4l2_subdev *sd, - struct v4l2_mbus_framefmt *fmt) +static int adv7604_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *format) { struct adv7604_state *state = to_state(sd); + const struct adv7604_format_info *info; - fmt->width = state->timings.bt.width; - fmt->height = state->timings.bt.height; - fmt->code = V4L2_MBUS_FMT_FIXED; - fmt->field = V4L2_FIELD_NONE; - if (state->timings.bt.standards & V4L2_DV_BT_STD_CEA861) { - fmt->colorspace = (state->timings.bt.height <= 576) ? 
- V4L2_COLORSPACE_SMPTE170M : V4L2_COLORSPACE_REC709; + if (format->pad != state->source_pad) + return -EINVAL; + + info = adv7604_format_info(state, format->format.code); + if (info == NULL) + info = adv7604_format_info(state, V4L2_MBUS_FMT_YUYV8_2X8); + + adv7604_fill_format(state, &format->format); + format->format.code = info->code; + + if (format->which == V4L2_SUBDEV_FORMAT_TRY) { + struct v4l2_mbus_framefmt *fmt; + + fmt = v4l2_subdev_get_try_format(fh, format->pad); + fmt->code = format->format.code; + } else { + state->format = info; + adv7604_setup_format(state); } + return 0; } static int adv7604_isr(struct v4l2_subdev *sd, u32 status, bool *handled) { + struct adv7604_state *state = to_state(sd); + const struct adv7604_chip_info *info = state->info; const u8 irq_reg_0x43 = io_read(sd, 0x43); const u8 irq_reg_0x6b = io_read(sd, 0x6b); const u8 irq_reg_0x70 = io_read(sd, 0x70); @@ -1625,7 +1957,9 @@ static int adv7604_isr(struct v4l2_subdev *sd, u32 status, bool *handled) /* format change */ fmt_change = irq_reg_0x43 & 0x98; - fmt_change_digital = is_digital_input(sd) ? (irq_reg_0x6b & 0xc0) : 0; + fmt_change_digital = is_digital_input(sd) + ? irq_reg_0x6b & info->fmt_change_digital_mask + : 0; if (fmt_change || fmt_change_digital) { v4l2_dbg(1, debug, sd, @@ -1647,7 +1981,7 @@ static int adv7604_isr(struct v4l2_subdev *sd, u32 status, bool *handled) } /* tx 5v detect */ - tx_5v = io_read(sd, 0x70) & 0x1e; + tx_5v = io_read(sd, 0x70) & info->cable_det_mask; if (tx_5v) { v4l2_dbg(1, debug, sd, "%s: tx_5v: 0x%x\n", __func__, tx_5v); io_write(sd, 0x71, tx_5v); @@ -1663,7 +1997,7 @@ static int adv7604_get_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid) struct adv7604_state *state = to_state(sd); u8 *data = NULL; - if (edid->pad > ADV7604_EDID_PORT_D) + if (edid->pad > ADV7604_PAD_HDMI_PORT_D) return -EINVAL; if (edid->blocks == 0) return -EINVAL; @@ -1678,10 +2012,10 @@ static int adv7604_get_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid) edid->blocks = state->edid.blocks; switch (edid->pad) { - case ADV7604_EDID_PORT_A: - case ADV7604_EDID_PORT_B: - case ADV7604_EDID_PORT_C: - case ADV7604_EDID_PORT_D: + case ADV7604_PAD_HDMI_PORT_A: + case ADV7604_PAD_HDMI_PORT_B: + case ADV7604_PAD_HDMI_PORT_C: + case ADV7604_PAD_HDMI_PORT_D: if (state->edid.present & (1 << edid->pad)) data = state->edid.edid; break; @@ -1729,20 +2063,20 @@ static int get_edid_spa_location(const u8 *edid) static int adv7604_set_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid) { struct adv7604_state *state = to_state(sd); + const struct adv7604_chip_info *info = state->info; int spa_loc; - int tmp = 0; int err; int i; - if (edid->pad > ADV7604_EDID_PORT_D) + if (edid->pad > ADV7604_PAD_HDMI_PORT_D) return -EINVAL; if (edid->start_block != 0) return -EINVAL; if (edid->blocks == 0) { /* Disable hotplug and I2C access to EDID RAM from DDC port */ state->edid.present &= ~(1 << edid->pad); - v4l2_subdev_notify(sd, ADV7604_HOTPLUG, (void *)&state->edid.present); - rep_write_and_or(sd, 0x77, 0xf0, state->edid.present); + adv7604_set_hpd(state, state->edid.present); + rep_write_clr_set(sd, info->edid_enable_reg, 0x0f, state->edid.present); /* Fall back to a 16:9 aspect ratio */ state->aspect_ratio.numerator = 16; @@ -1765,35 +2099,41 @@ static int adv7604_set_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid) /* Disable hotplug and I2C access to EDID RAM from DDC port */ cancel_delayed_work_sync(&state->delayed_work_enable_hotplug); - v4l2_subdev_notify(sd, ADV7604_HOTPLUG, (void *)&tmp); - 
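adv7604_set_format() above implements the pad-level set_fmt operation. As the code only dereferences the file handle for TRY formats, an in-kernel bridge driver can pass NULL when setting the active format. A hedged usage sketch (source_pad stands for the subdev's source pad index, i.e. state->source_pad, and error handling is omitted):

struct v4l2_subdev_format fmt = {
	.which = V4L2_SUBDEV_FORMAT_ACTIVE,
	.pad = source_pad,		/* illustrative: the single source pad */
	.format = {
		.code = V4L2_MBUS_FMT_YUYV8_2X8,	/* driver's fallback code */
	},
};
int ret;

/* Negotiate the active format on the source pad of the adv7604/adv7611. */
ret = v4l2_subdev_call(sd, pad, set_fmt, NULL, &fmt);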
rep_write_and_or(sd, 0x77, 0xf0, 0x00); + adv7604_set_hpd(state, 0); + rep_write_clr_set(sd, info->edid_enable_reg, 0x0f, 0x00); spa_loc = get_edid_spa_location(edid->edid); if (spa_loc < 0) spa_loc = 0xc0; /* Default value [REF_02, p. 116] */ switch (edid->pad) { - case ADV7604_EDID_PORT_A: + case ADV7604_PAD_HDMI_PORT_A: state->spa_port_a[0] = edid->edid[spa_loc]; state->spa_port_a[1] = edid->edid[spa_loc + 1]; break; - case ADV7604_EDID_PORT_B: + case ADV7604_PAD_HDMI_PORT_B: rep_write(sd, 0x70, edid->edid[spa_loc]); rep_write(sd, 0x71, edid->edid[spa_loc + 1]); break; - case ADV7604_EDID_PORT_C: + case ADV7604_PAD_HDMI_PORT_C: rep_write(sd, 0x72, edid->edid[spa_loc]); rep_write(sd, 0x73, edid->edid[spa_loc + 1]); break; - case ADV7604_EDID_PORT_D: + case ADV7604_PAD_HDMI_PORT_D: rep_write(sd, 0x74, edid->edid[spa_loc]); rep_write(sd, 0x75, edid->edid[spa_loc + 1]); break; default: return -EINVAL; } - rep_write(sd, 0x76, spa_loc & 0xff); - rep_write_and_or(sd, 0x77, 0xbf, (spa_loc >> 2) & 0x40); + + if (info->type == ADV7604) { + rep_write(sd, 0x76, spa_loc & 0xff); + rep_write_clr_set(sd, 0x77, 0x40, (spa_loc & 0x100) >> 2); + } else { + /* FIXME: Where is the SPA location LSB register ? */ + rep_write_clr_set(sd, 0x71, 0x01, (spa_loc & 0x100) >> 8); + } edid->edid[spa_loc] = state->spa_port_a[0]; edid->edid[spa_loc + 1] = state->spa_port_a[1]; @@ -1812,10 +2152,10 @@ static int adv7604_set_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid) /* adv7604 calculates the checksums and enables I2C access to internal EDID RAM from DDC port. */ - rep_write_and_or(sd, 0x77, 0xf0, state->edid.present); + rep_write_clr_set(sd, info->edid_enable_reg, 0x0f, state->edid.present); for (i = 0; i < 1000; i++) { - if (rep_read(sd, 0x7d) & state->edid.present) + if (rep_read(sd, info->edid_status_reg) & state->edid.present) break; mdelay(1); } @@ -1878,17 +2218,20 @@ static void print_avi_infoframe(struct v4l2_subdev *sd) static int adv7604_log_status(struct v4l2_subdev *sd) { struct adv7604_state *state = to_state(sd); + const struct adv7604_chip_info *info = state->info; struct v4l2_dv_timings timings; struct stdi_readback stdi; u8 reg_io_0x02 = io_read(sd, 0x02); + u8 edid_enabled; + u8 cable_det; - char *csc_coeff_sel_rb[16] = { + static const char * const csc_coeff_sel_rb[16] = { "bypassed", "YPbPr601 -> RGB", "reserved", "YPbPr709 -> RGB", "reserved", "RGB -> YPbPr601", "reserved", "RGB -> YPbPr709", "reserved", "YPbPr709 -> YPbPr601", "YPbPr601 -> YPbPr709", "reserved", "reserved", "reserved", "reserved", "manual" }; - char *input_color_space_txt[16] = { + static const char * const input_color_space_txt[16] = { "RGB limited range (16-235)", "RGB full range (0-255)", "YCbCr Bt.601 (16-235)", "YCbCr Bt.709 (16-235)", "xvYCC Bt.601", "xvYCC Bt.709", @@ -1896,12 +2239,12 @@ static int adv7604_log_status(struct v4l2_subdev *sd) "invalid", "invalid", "invalid", "invalid", "invalid", "invalid", "invalid", "automatic" }; - char *rgb_quantization_range_txt[] = { + static const char * const rgb_quantization_range_txt[] = { "Automatic", "RGB limited range (16-235)", "RGB full range (0-255)", }; - char *deep_color_mode_txt[4] = { + static const char * const deep_color_mode_txt[4] = { "8-bits per channel", "10-bits per channel", "12-bits per channel", @@ -1910,20 +2253,22 @@ static int adv7604_log_status(struct v4l2_subdev *sd) v4l2_info(sd, "-----Chip status-----\n"); v4l2_info(sd, "Chip power: %s\n", no_power(sd) ? 
"off" : "on"); + edid_enabled = rep_read(sd, info->edid_status_reg); v4l2_info(sd, "EDID enabled port A: %s, B: %s, C: %s, D: %s\n", - ((rep_read(sd, 0x7d) & 0x01) ? "Yes" : "No"), - ((rep_read(sd, 0x7d) & 0x02) ? "Yes" : "No"), - ((rep_read(sd, 0x7d) & 0x04) ? "Yes" : "No"), - ((rep_read(sd, 0x7d) & 0x08) ? "Yes" : "No")); + ((edid_enabled & 0x01) ? "Yes" : "No"), + ((edid_enabled & 0x02) ? "Yes" : "No"), + ((edid_enabled & 0x04) ? "Yes" : "No"), + ((edid_enabled & 0x08) ? "Yes" : "No")); v4l2_info(sd, "CEC: %s\n", !!(cec_read(sd, 0x2a) & 0x01) ? "enabled" : "disabled"); v4l2_info(sd, "-----Signal status-----\n"); + cable_det = info->read_cable_det(sd); v4l2_info(sd, "Cable detected (+5V power) port A: %s, B: %s, C: %s, D: %s\n", - ((io_read(sd, 0x6f) & 0x10) ? "Yes" : "No"), - ((io_read(sd, 0x6f) & 0x08) ? "Yes" : "No"), - ((io_read(sd, 0x6f) & 0x04) ? "Yes" : "No"), - ((io_read(sd, 0x6f) & 0x02) ? "Yes" : "No")); + ((cable_det & 0x01) ? "Yes" : "No"), + ((cable_det & 0x02) ? "Yes" : "No"), + ((cable_det & 0x04) ? "Yes" : "No"), + ((cable_det & 0x08) ? "Yes" : "No")); v4l2_info(sd, "TMDS signal detected: %s\n", no_signal_tmds(sd) ? "false" : "true"); v4l2_info(sd, "TMDS signal locked: %s\n", @@ -2017,13 +2362,6 @@ static const struct v4l2_ctrl_ops adv7604_ctrl_ops = { static const struct v4l2_subdev_core_ops adv7604_core_ops = { .log_status = adv7604_log_status, - .g_ext_ctrls = v4l2_subdev_g_ext_ctrls, - .try_ext_ctrls = v4l2_subdev_try_ext_ctrls, - .s_ext_ctrls = v4l2_subdev_s_ext_ctrls, - .g_ctrl = v4l2_subdev_g_ctrl, - .s_ctrl = v4l2_subdev_s_ctrl, - .queryctrl = v4l2_subdev_queryctrl, - .querymenu = v4l2_subdev_querymenu, .interrupt_service_routine = adv7604_isr, #ifdef CONFIG_VIDEO_ADV_DEBUG .g_register = adv7604_g_register, @@ -2037,17 +2375,16 @@ static const struct v4l2_subdev_video_ops adv7604_video_ops = { .s_dv_timings = adv7604_s_dv_timings, .g_dv_timings = adv7604_g_dv_timings, .query_dv_timings = adv7604_query_dv_timings, - .enum_dv_timings = adv7604_enum_dv_timings, - .dv_timings_cap = adv7604_dv_timings_cap, - .enum_mbus_fmt = adv7604_enum_mbus_fmt, - .g_mbus_fmt = adv7604_g_mbus_fmt, - .try_mbus_fmt = adv7604_g_mbus_fmt, - .s_mbus_fmt = adv7604_g_mbus_fmt, }; static const struct v4l2_subdev_pad_ops adv7604_pad_ops = { + .enum_mbus_code = adv7604_enum_mbus_code, + .get_fmt = adv7604_get_format, + .set_fmt = adv7604_set_format, .get_edid = adv7604_get_edid, .set_edid = adv7604_set_edid, + .dv_timings_cap = adv7604_dv_timings_cap, + .enum_dv_timings = adv7604_enum_dv_timings, }; static const struct v4l2_subdev_ops adv7604_ops = { @@ -2096,6 +2433,7 @@ static const struct v4l2_ctrl_config adv7604_ctrl_free_run_color = { static int adv7604_core_init(struct v4l2_subdev *sd) { struct adv7604_state *state = to_state(sd); + const struct adv7604_chip_info *info = state->info; struct adv7604_platform_data *pdata = &state->pdata; hdmi_write(sd, 0x48, @@ -2104,28 +2442,33 @@ static int adv7604_core_init(struct v4l2_subdev *sd) disable_input(sd); + if (pdata->default_input >= 0 && + pdata->default_input < state->source_pad) { + state->selected_input = pdata->default_input; + select_input(sd); + enable_input(sd); + } + /* power */ io_write(sd, 0x0c, 0x42); /* Power up part and power down VDP */ io_write(sd, 0x0b, 0x44); /* Power down ESDP block */ cp_write(sd, 0xcf, 0x01); /* Power down macrovision */ /* video format */ - io_write_and_or(sd, 0x02, 0xf0, + io_write_clr_set(sd, 0x02, 0x0f, pdata->alt_gamma << 3 | pdata->op_656_range << 2 | - pdata->rgb_out << 1 | pdata->alt_data_sat << 
0); - io_write(sd, 0x03, pdata->op_format_sel); - io_write_and_or(sd, 0x04, 0x1f, pdata->op_ch_sel << 5); - io_write_and_or(sd, 0x05, 0xf0, pdata->blank_data << 3 | - pdata->insert_av_codes << 2 | - pdata->replicate_av_codes << 1 | - pdata->invert_cbcr << 0); + io_write_clr_set(sd, 0x05, 0x0e, pdata->blank_data << 3 | + pdata->insert_av_codes << 2 | + pdata->replicate_av_codes << 1); + adv7604_setup_format(state); cp_write(sd, 0x69, 0x30); /* Enable CP CSC */ /* VS, HS polarities */ - io_write(sd, 0x06, 0xa0 | pdata->inv_vs_pol << 2 | pdata->inv_hs_pol << 1); + io_write(sd, 0x06, 0xa0 | pdata->inv_vs_pol << 2 | + pdata->inv_hs_pol << 1 | pdata->inv_llc_pol); /* Adjust drive strength */ io_write(sd, 0x14, 0x40 | pdata->dr_str_data << 4 | @@ -2142,52 +2485,46 @@ static int adv7604_core_init(struct v4l2_subdev *sd) for digital formats */ /* HDMI audio */ - hdmi_write_and_or(sd, 0x15, 0xfc, 0x03); /* Mute on FIFO over-/underflow [REF_01, c. 1.2.18] */ - hdmi_write_and_or(sd, 0x1a, 0xf1, 0x08); /* Wait 1 s before unmute */ - hdmi_write_and_or(sd, 0x68, 0xf9, 0x06); /* FIFO reset on over-/underflow [REF_01, c. 1.2.19] */ + hdmi_write_clr_set(sd, 0x15, 0x03, 0x03); /* Mute on FIFO over-/underflow [REF_01, c. 1.2.18] */ + hdmi_write_clr_set(sd, 0x1a, 0x0e, 0x08); /* Wait 1 s before unmute */ + hdmi_write_clr_set(sd, 0x68, 0x06, 0x06); /* FIFO reset on over-/underflow [REF_01, c. 1.2.19] */ /* TODO from platform data */ afe_write(sd, 0xb5, 0x01); /* Setting MCLK to 256Fs */ - afe_write(sd, 0x02, pdata->ain_sel); /* Select analog input muxing mode */ - io_write_and_or(sd, 0x30, ~(1 << 4), pdata->output_bus_lsb_to_msb << 4); + if (adv7604_has_afe(state)) { + afe_write(sd, 0x02, pdata->ain_sel); /* Select analog input muxing mode */ + io_write_clr_set(sd, 0x30, 1 << 4, pdata->output_bus_lsb_to_msb << 4); + } /* interrupts */ - io_write(sd, 0x40, 0xc2); /* Configure INT1 */ - io_write(sd, 0x41, 0xd7); /* STDI irq for any change, disable INT2 */ + io_write(sd, 0x40, 0xc0 | pdata->int1_config); /* Configure INT1 */ io_write(sd, 0x46, 0x98); /* Enable SSPD, STDI and CP unlocked interrupts */ - io_write(sd, 0x6e, 0xc1); /* Enable V_LOCKED, DE_REGEN_LCK, HDMI_MODE interrupts */ - io_write(sd, 0x73, 0x1e); /* Enable CABLE_DET_A_ST (+5v) interrupts */ + io_write(sd, 0x6e, info->fmt_change_digital_mask); /* Enable V_LOCKED and DE_REGEN_LCK interrupts */ + io_write(sd, 0x73, info->cable_det_mask); /* Enable cable detection (+5v) interrupts */ + info->setup_irqs(sd); return v4l2_ctrl_handler_setup(sd->ctrl_handler); } +static void adv7604_setup_irqs(struct v4l2_subdev *sd) +{ + io_write(sd, 0x41, 0xd7); /* STDI irq for any change, disable INT2 */ +} + +static void adv7611_setup_irqs(struct v4l2_subdev *sd) +{ + io_write(sd, 0x41, 0xd0); /* STDI irq for any change, disable INT2 */ +} + static void adv7604_unregister_clients(struct adv7604_state *state) { - if (state->i2c_avlink) - i2c_unregister_device(state->i2c_avlink); - if (state->i2c_cec) - i2c_unregister_device(state->i2c_cec); - if (state->i2c_infoframe) - i2c_unregister_device(state->i2c_infoframe); - if (state->i2c_esdp) - i2c_unregister_device(state->i2c_esdp); - if (state->i2c_dpp) - i2c_unregister_device(state->i2c_dpp); - if (state->i2c_afe) - i2c_unregister_device(state->i2c_afe); - if (state->i2c_repeater) - i2c_unregister_device(state->i2c_repeater); - if (state->i2c_edid) - i2c_unregister_device(state->i2c_edid); - if (state->i2c_hdmi) - i2c_unregister_device(state->i2c_hdmi); - if (state->i2c_test) - i2c_unregister_device(state->i2c_test); - if 
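Throughout this patch, *_write_and_or(sd, reg, and_mask, value) calls become *_write_clr_set(sd, reg, mask, value), where mask is the bitwise complement of the old and_mask (for example hdmi_write_and_or(sd, 0x15, 0xfc, 0x03) above becomes hdmi_write_clr_set(sd, 0x15, 0x03, 0x03)). The new helpers are defined outside this excerpt; a sketch of the likely read-modify-write, using the IO map variant as an example and offered only as an assumption:

/* Plausible sketch: clear the bits in 'mask', then set 'val' within that field.
 * The other register maps (cp_, hdmi_, rep_, ...) would have analogous wrappers. */
static void io_write_clr_set(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val)
{
	io_write(sd, reg, (io_read(sd, reg) & ~mask) | (val & mask));
}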
(state->i2c_cp) - i2c_unregister_device(state->i2c_cp); - if (state->i2c_vdp) - i2c_unregister_device(state->i2c_vdp); + unsigned int i; + + for (i = 1; i < ARRAY_SIZE(state->i2c_clients); ++i) { + if (state->i2c_clients[i]) + i2c_unregister_device(state->i2c_clients[i]); + } } static struct i2c_client *adv7604_dummy_client(struct v4l2_subdev *sd, @@ -2200,15 +2537,219 @@ static struct i2c_client *adv7604_dummy_client(struct v4l2_subdev *sd, return i2c_new_dummy(client->adapter, io_read(sd, io_reg) >> 1); } +static const struct adv7604_reg_seq adv7604_recommended_settings_afe[] = { + /* reset ADI recommended settings for HDMI: */ + /* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 4. */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x0d), 0x04 }, /* HDMI filter optimization */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x0d), 0x04 }, /* HDMI filter optimization */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x3d), 0x00 }, /* DDC bus active pull-up control */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x3e), 0x74 }, /* TMDS PLL optimization */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x4e), 0x3b }, /* TMDS PLL optimization */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x57), 0x74 }, /* TMDS PLL optimization */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x58), 0x63 }, /* TMDS PLL optimization */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8d), 0x18 }, /* equaliser */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8e), 0x34 }, /* equaliser */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x93), 0x88 }, /* equaliser */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x94), 0x2e }, /* equaliser */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x96), 0x00 }, /* enable automatic EQ changing */ + + /* set ADI recommended settings for digitizer */ + /* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 17. */ + { ADV7604_REG(ADV7604_PAGE_AFE, 0x12), 0x7b }, /* ADC noise shaping filter controls */ + { ADV7604_REG(ADV7604_PAGE_AFE, 0x0c), 0x1f }, /* CP core gain controls */ + { ADV7604_REG(ADV7604_PAGE_CP, 0x3e), 0x04 }, /* CP core pre-gain control */ + { ADV7604_REG(ADV7604_PAGE_CP, 0xc3), 0x39 }, /* CP coast control. Graphics mode */ + { ADV7604_REG(ADV7604_PAGE_CP, 0x40), 0x5c }, /* CP core pre-gain control. Graphics mode */ + + { ADV7604_REG_SEQ_TERM, 0 }, +}; + +static const struct adv7604_reg_seq adv7604_recommended_settings_hdmi[] = { + /* set ADI recommended settings for HDMI: */ + /* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 4. */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x0d), 0x84 }, /* HDMI filter optimization */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x3d), 0x10 }, /* DDC bus active pull-up control */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x3e), 0x39 }, /* TMDS PLL optimization */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x4e), 0x3b }, /* TMDS PLL optimization */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x57), 0xb6 }, /* TMDS PLL optimization */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x58), 0x03 }, /* TMDS PLL optimization */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8d), 0x18 }, /* equaliser */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8e), 0x34 }, /* equaliser */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x93), 0x8b }, /* equaliser */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x94), 0x2d }, /* equaliser */ + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x96), 0x01 }, /* enable automatic EQ changing */ + + /* reset ADI recommended settings for digitizer */ + /* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 17. 
*/ + { ADV7604_REG(ADV7604_PAGE_AFE, 0x12), 0xfb }, /* ADC noise shaping filter controls */ + { ADV7604_REG(ADV7604_PAGE_AFE, 0x0c), 0x0d }, /* CP core gain controls */ + + { ADV7604_REG_SEQ_TERM, 0 }, +}; + +static const struct adv7604_reg_seq adv7611_recommended_settings_hdmi[] = { + { ADV7604_REG(ADV7604_PAGE_CP, 0x6c), 0x00 }, + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x6f), 0x0c }, + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x87), 0x70 }, + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x57), 0xda }, + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x58), 0x01 }, + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x03), 0x98 }, + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x4c), 0x44 }, + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8d), 0x04 }, + { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8e), 0x1e }, + + { ADV7604_REG_SEQ_TERM, 0 }, +}; + +static const struct adv7604_chip_info adv7604_chip_info[] = { + [ADV7604] = { + .type = ADV7604, + .has_afe = true, + .max_port = ADV7604_PAD_VGA_COMP, + .num_dv_ports = 4, + .edid_enable_reg = 0x77, + .edid_status_reg = 0x7d, + .lcf_reg = 0xb3, + .tdms_lock_mask = 0xe0, + .cable_det_mask = 0x1e, + .fmt_change_digital_mask = 0xc1, + .formats = adv7604_formats, + .nformats = ARRAY_SIZE(adv7604_formats), + .set_termination = adv7604_set_termination, + .setup_irqs = adv7604_setup_irqs, + .read_hdmi_pixelclock = adv7604_read_hdmi_pixelclock, + .read_cable_det = adv7604_read_cable_det, + .recommended_settings = { + [0] = adv7604_recommended_settings_afe, + [1] = adv7604_recommended_settings_hdmi, + }, + .num_recommended_settings = { + [0] = ARRAY_SIZE(adv7604_recommended_settings_afe), + [1] = ARRAY_SIZE(adv7604_recommended_settings_hdmi), + }, + .page_mask = BIT(ADV7604_PAGE_IO) | BIT(ADV7604_PAGE_AVLINK) | + BIT(ADV7604_PAGE_CEC) | BIT(ADV7604_PAGE_INFOFRAME) | + BIT(ADV7604_PAGE_ESDP) | BIT(ADV7604_PAGE_DPP) | + BIT(ADV7604_PAGE_AFE) | BIT(ADV7604_PAGE_REP) | + BIT(ADV7604_PAGE_EDID) | BIT(ADV7604_PAGE_HDMI) | + BIT(ADV7604_PAGE_TEST) | BIT(ADV7604_PAGE_CP) | + BIT(ADV7604_PAGE_VDP), + }, + [ADV7611] = { + .type = ADV7611, + .has_afe = false, + .max_port = ADV7604_PAD_HDMI_PORT_A, + .num_dv_ports = 1, + .edid_enable_reg = 0x74, + .edid_status_reg = 0x76, + .lcf_reg = 0xa3, + .tdms_lock_mask = 0x43, + .cable_det_mask = 0x01, + .fmt_change_digital_mask = 0x03, + .formats = adv7611_formats, + .nformats = ARRAY_SIZE(adv7611_formats), + .set_termination = adv7611_set_termination, + .setup_irqs = adv7611_setup_irqs, + .read_hdmi_pixelclock = adv7611_read_hdmi_pixelclock, + .read_cable_det = adv7611_read_cable_det, + .recommended_settings = { + [1] = adv7611_recommended_settings_hdmi, + }, + .num_recommended_settings = { + [1] = ARRAY_SIZE(adv7611_recommended_settings_hdmi), + }, + .page_mask = BIT(ADV7604_PAGE_IO) | BIT(ADV7604_PAGE_CEC) | + BIT(ADV7604_PAGE_INFOFRAME) | BIT(ADV7604_PAGE_AFE) | + BIT(ADV7604_PAGE_REP) | BIT(ADV7604_PAGE_EDID) | + BIT(ADV7604_PAGE_HDMI) | BIT(ADV7604_PAGE_CP), + }, +}; + +static struct i2c_device_id adv7604_i2c_id[] = { + { "adv7604", (kernel_ulong_t)&adv7604_chip_info[ADV7604] }, + { "adv7611", (kernel_ulong_t)&adv7604_chip_info[ADV7611] }, + { } +}; +MODULE_DEVICE_TABLE(i2c, adv7604_i2c_id); + +static struct of_device_id adv7604_of_id[] __maybe_unused = { + { .compatible = "adi,adv7611", .data = &adv7604_chip_info[ADV7611] }, + { } +}; +MODULE_DEVICE_TABLE(of, adv7604_of_id); + +static int adv7604_parse_dt(struct adv7604_state *state) +{ + struct v4l2_of_endpoint bus_cfg; + struct device_node *endpoint; + struct device_node *np; + unsigned int flags; + + np = 
state->i2c_clients[ADV7604_PAGE_IO]->dev.of_node; + + /* Parse the endpoint. */ + endpoint = of_graph_get_next_endpoint(np, NULL); + if (!endpoint) + return -EINVAL; + + v4l2_of_parse_endpoint(endpoint, &bus_cfg); + of_node_put(endpoint); + + flags = bus_cfg.bus.parallel.flags; + + if (flags & V4L2_MBUS_HSYNC_ACTIVE_HIGH) + state->pdata.inv_hs_pol = 1; + + if (flags & V4L2_MBUS_VSYNC_ACTIVE_HIGH) + state->pdata.inv_vs_pol = 1; + + if (flags & V4L2_MBUS_PCLK_SAMPLE_RISING) + state->pdata.inv_llc_pol = 1; + + if (bus_cfg.bus_type == V4L2_MBUS_BT656) { + state->pdata.insert_av_codes = 1; + state->pdata.op_656_range = 1; + } + + /* Disable the interrupt for now as no DT-based board uses it. */ + state->pdata.int1_config = ADV7604_INT1_CONFIG_DISABLED; + + /* Use the default I2C addresses. */ + state->pdata.i2c_addresses[ADV7604_PAGE_AVLINK] = 0x42; + state->pdata.i2c_addresses[ADV7604_PAGE_CEC] = 0x40; + state->pdata.i2c_addresses[ADV7604_PAGE_INFOFRAME] = 0x3e; + state->pdata.i2c_addresses[ADV7604_PAGE_ESDP] = 0x38; + state->pdata.i2c_addresses[ADV7604_PAGE_DPP] = 0x3c; + state->pdata.i2c_addresses[ADV7604_PAGE_AFE] = 0x26; + state->pdata.i2c_addresses[ADV7604_PAGE_REP] = 0x32; + state->pdata.i2c_addresses[ADV7604_PAGE_EDID] = 0x36; + state->pdata.i2c_addresses[ADV7604_PAGE_HDMI] = 0x34; + state->pdata.i2c_addresses[ADV7604_PAGE_TEST] = 0x30; + state->pdata.i2c_addresses[ADV7604_PAGE_CP] = 0x22; + state->pdata.i2c_addresses[ADV7604_PAGE_VDP] = 0x24; + + /* Hardcode the remaining platform data fields. */ + state->pdata.disable_pwrdnb = 0; + state->pdata.disable_cable_det_rst = 0; + state->pdata.default_input = -1; + state->pdata.blank_data = 1; + state->pdata.alt_data_sat = 1; + state->pdata.op_format_mode_sel = ADV7604_OP_FORMAT_MODE0; + state->pdata.bus_order = ADV7604_BUS_ORDER_RGB; + + return 0; +} + static int adv7604_probe(struct i2c_client *client, const struct i2c_device_id *id) { static const struct v4l2_dv_timings cea640x480 = V4L2_DV_BT_CEA_640X480P59_94; struct adv7604_state *state; - struct adv7604_platform_data *pdata = client->dev.platform_data; struct v4l2_ctrl_handler *hdl; struct v4l2_subdev *sd; + unsigned int i; + u16 val; int err; /* Check if the adapter supports the needed features */ @@ -2223,32 +2764,80 @@ static int adv7604_probe(struct i2c_client *client, return -ENOMEM; } + state->i2c_clients[ADV7604_PAGE_IO] = client; + /* initialize variables */ state->restart_stdi_once = true; state->selected_input = ~0; - /* platform data */ - if (!pdata) { + if (IS_ENABLED(CONFIG_OF) && client->dev.of_node) { + const struct of_device_id *oid; + + oid = of_match_node(adv7604_of_id, client->dev.of_node); + state->info = oid->data; + + err = adv7604_parse_dt(state); + if (err < 0) { + v4l_err(client, "DT parsing error\n"); + return err; + } + } else if (client->dev.platform_data) { + struct adv7604_platform_data *pdata = client->dev.platform_data; + + state->info = (const struct adv7604_chip_info *)id->driver_data; + state->pdata = *pdata; + } else { v4l_err(client, "No platform data!\n"); return -ENODEV; } - state->pdata = *pdata; + + /* Request GPIOs. 
*/ + for (i = 0; i < state->info->num_dv_ports; ++i) { + state->hpd_gpio[i] = + devm_gpiod_get_index(&client->dev, "hpd", i); + if (IS_ERR(state->hpd_gpio[i])) + continue; + + gpiod_direction_output(state->hpd_gpio[i], 0); + + v4l_info(client, "Handling HPD %u GPIO\n", i); + } + state->timings = cea640x480; + state->format = adv7604_format_info(state, V4L2_MBUS_FMT_YUYV8_2X8); sd = &state->sd; v4l2_i2c_subdev_init(sd, client, &adv7604_ops); + snprintf(sd->name, sizeof(sd->name), "%s %d-%04x", + id->name, i2c_adapter_id(client->adapter), + client->addr); sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE; - /* i2c access to adv7604? */ - if (adv_smbus_read_byte_data_check(client, 0xfb, false) != 0x68) { - v4l2_info(sd, "not an adv7604 on address 0x%x\n", - client->addr << 1); - return -ENODEV; + /* + * Verify that the chip is present. On ADV7604 the RD_INFO register only + * identifies the revision, while on ADV7611 it identifies the model as + * well. Use the HDMI slave address on ADV7604 and RD_INFO on ADV7611. + */ + if (state->info->type == ADV7604) { + val = adv_smbus_read_byte_data_check(client, 0xfb, false); + if (val != 0x68) { + v4l2_info(sd, "not an adv7604 on address 0x%x\n", + client->addr << 1); + return -ENODEV; + } + } else { + val = (adv_smbus_read_byte_data_check(client, 0xea, false) << 8) + | (adv_smbus_read_byte_data_check(client, 0xeb, false) << 0); + if (val != 0x2051) { + v4l2_info(sd, "not an adv7611 on address 0x%x\n", + client->addr << 1); + return -ENODEV; + } } /* control handlers */ hdl = &state->hdl; - v4l2_ctrl_handler_init(hdl, 9); + v4l2_ctrl_handler_init(hdl, adv7604_has_afe(state) ? 9 : 8); v4l2_ctrl_new_std(hdl, &adv7604_ctrl_ops, V4L2_CID_BRIGHTNESS, -128, 127, 1, 0); @@ -2261,15 +2850,17 @@ static int adv7604_probe(struct i2c_client *client, /* private controls */ state->detect_tx_5v_ctrl = v4l2_ctrl_new_std(hdl, NULL, - V4L2_CID_DV_RX_POWER_PRESENT, 0, 0x0f, 0, 0); + V4L2_CID_DV_RX_POWER_PRESENT, 0, + (1 << state->info->num_dv_ports) - 1, 0, 0); state->rgb_quantization_range_ctrl = v4l2_ctrl_new_std_menu(hdl, &adv7604_ctrl_ops, V4L2_CID_DV_RX_RGB_RANGE, V4L2_DV_RGB_RANGE_FULL, 0, V4L2_DV_RGB_RANGE_AUTO); /* custom controls */ - state->analog_sampling_phase_ctrl = - v4l2_ctrl_new_custom(hdl, &adv7604_ctrl_analog_sampling_phase, NULL); + if (adv7604_has_afe(state)) + state->analog_sampling_phase_ctrl = + v4l2_ctrl_new_custom(hdl, &adv7604_ctrl_analog_sampling_phase, NULL); state->free_run_color_manual_ctrl = v4l2_ctrl_new_custom(hdl, &adv7604_ctrl_free_run_color_manual, NULL); state->free_run_color_ctrl = @@ -2282,7 +2873,8 @@ static int adv7604_probe(struct i2c_client *client, } state->detect_tx_5v_ctrl->is_private = true; state->rgb_quantization_range_ctrl->is_private = true; - state->analog_sampling_phase_ctrl->is_private = true; + if (adv7604_has_afe(state)) + state->analog_sampling_phase_ctrl->is_private = true; state->free_run_color_manual_ctrl->is_private = true; state->free_run_color_ctrl->is_private = true; @@ -2291,25 +2883,18 @@ static int adv7604_probe(struct i2c_client *client, goto err_hdl; } - state->i2c_avlink = adv7604_dummy_client(sd, pdata->i2c_avlink, 0xf3); - state->i2c_cec = adv7604_dummy_client(sd, pdata->i2c_cec, 0xf4); - state->i2c_infoframe = adv7604_dummy_client(sd, pdata->i2c_infoframe, 0xf5); - state->i2c_esdp = adv7604_dummy_client(sd, pdata->i2c_esdp, 0xf6); - state->i2c_dpp = adv7604_dummy_client(sd, pdata->i2c_dpp, 0xf7); - state->i2c_afe = adv7604_dummy_client(sd, pdata->i2c_afe, 0xf8); - state->i2c_repeater = 
adv7604_dummy_client(sd, pdata->i2c_repeater, 0xf9); - state->i2c_edid = adv7604_dummy_client(sd, pdata->i2c_edid, 0xfa); - state->i2c_hdmi = adv7604_dummy_client(sd, pdata->i2c_hdmi, 0xfb); - state->i2c_test = adv7604_dummy_client(sd, pdata->i2c_test, 0xfc); - state->i2c_cp = adv7604_dummy_client(sd, pdata->i2c_cp, 0xfd); - state->i2c_vdp = adv7604_dummy_client(sd, pdata->i2c_vdp, 0xfe); - if (!state->i2c_avlink || !state->i2c_cec || !state->i2c_infoframe || - !state->i2c_esdp || !state->i2c_dpp || !state->i2c_afe || - !state->i2c_repeater || !state->i2c_edid || !state->i2c_hdmi || - !state->i2c_test || !state->i2c_cp || !state->i2c_vdp) { - err = -ENOMEM; - v4l2_err(sd, "failed to create all i2c clients\n"); - goto err_i2c; + for (i = 1; i < ADV7604_PAGE_MAX; ++i) { + if (!(BIT(i) & state->info->page_mask)) + continue; + + state->i2c_clients[i] = + adv7604_dummy_client(sd, state->pdata.i2c_addresses[i], + 0xf2 + i); + if (state->i2c_clients[i] == NULL) { + err = -ENOMEM; + v4l2_err(sd, "failed to create i2c client %u\n", i); + goto err_i2c; + } } /* work queues */ @@ -2323,8 +2908,14 @@ static int adv7604_probe(struct i2c_client *client, INIT_DELAYED_WORK(&state->delayed_work_enable_hotplug, adv7604_delayed_work_enable_hotplug); - state->pad.flags = MEDIA_PAD_FL_SOURCE; - err = media_entity_init(&sd->entity, 1, &state->pad, 0); + state->source_pad = state->info->num_dv_ports + + (state->info->has_afe ? 2 : 0); + for (i = 0; i < state->source_pad; ++i) + state->pads[i].flags = MEDIA_PAD_FL_SINK; + state->pads[state->source_pad].flags = MEDIA_PAD_FL_SOURCE; + + err = media_entity_init(&sd->entity, state->source_pad + 1, + state->pads, 0); if (err) goto err_work_queues; @@ -2333,6 +2924,11 @@ static int adv7604_probe(struct i2c_client *client, goto err_entity; v4l2_info(sd, "%s found @ 0x%x (%s)\n", client->name, client->addr << 1, client->adapter->name); + + err = v4l2_async_register_subdev(sd); + if (err) + goto err_entity; + return 0; err_entity: @@ -2356,6 +2952,7 @@ static int adv7604_remove(struct i2c_client *client) cancel_delayed_work(&state->delayed_work_enable_hotplug); destroy_workqueue(state->work_queues); + v4l2_async_unregister_subdev(sd); v4l2_device_unregister_subdev(sd); media_entity_cleanup(&sd->entity); adv7604_unregister_clients(to_state(sd)); @@ -2365,20 +2962,15 @@ static int adv7604_remove(struct i2c_client *client) /* ----------------------------------------------------------------------- */ -static struct i2c_device_id adv7604_id[] = { - { "adv7604", 0 }, - { } -}; -MODULE_DEVICE_TABLE(i2c, adv7604_id); - static struct i2c_driver adv7604_driver = { .driver = { .owner = THIS_MODULE, .name = "adv7604", + .of_match_table = of_match_ptr(adv7604_of_id), }, .probe = adv7604_probe, .remove = adv7604_remove, - .id_table = adv7604_id, + .id_table = adv7604_i2c_id, }; module_i2c_driver(adv7604_driver); diff --git a/drivers/media/platform/vsp1/Makefile b/drivers/media/platform/vsp1/Makefile index 151cecd0ea25..6a93f928dfde 100644 --- a/drivers/media/platform/vsp1/Makefile +++ b/drivers/media/platform/vsp1/Makefile @@ -1,6 +1,6 @@ vsp1-y := vsp1_drv.o vsp1_entity.o vsp1_video.o vsp1-y += vsp1_rpf.o vsp1_rwpf.o vsp1_wpf.o vsp1-y += vsp1_hsit.o vsp1_lif.o vsp1_lut.o -vsp1-y += vsp1_sru.o vsp1_uds.o +vsp1-y += vsp1_bru.o vsp1_sru.o vsp1_uds.o obj-$(CONFIG_VIDEO_RENESAS_VSP1) += vsp1.o diff --git a/drivers/media/platform/vsp1/vsp1.h b/drivers/media/platform/vsp1/vsp1.h index 0313210c6e9e..6ca2cf20d545 100644 --- a/drivers/media/platform/vsp1/vsp1.h +++ 
b/drivers/media/platform/vsp1/vsp1.h @@ -28,6 +28,7 @@ struct clk; struct device; struct vsp1_platform_data; +struct vsp1_bru; struct vsp1_hsit; struct vsp1_lif; struct vsp1_lut; @@ -45,11 +46,11 @@ struct vsp1_device { void __iomem *mmio; struct clk *clock; - struct clk *rt_clock; struct mutex lock; int ref_count; + struct vsp1_bru *bru; struct vsp1_hsit *hsi; struct vsp1_hsit *hst; struct vsp1_lif *lif; diff --git a/drivers/media/platform/vsp1/vsp1_bru.c b/drivers/media/platform/vsp1/vsp1_bru.c new file mode 100644 index 000000000000..f80695480060 --- /dev/null +++ b/drivers/media/platform/vsp1/vsp1_bru.c @@ -0,0 +1,395 @@ +/* + * vsp1_bru.c -- R-Car VSP1 Blend ROP Unit + * + * Copyright (C) 2013 Renesas Corporation + * + * Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com) + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + */ + +#include <linux/device.h> +#include <linux/gfp.h> + +#include <media/v4l2-subdev.h> + +#include "vsp1.h" +#include "vsp1_bru.h" + +#define BRU_MIN_SIZE 4U +#define BRU_MAX_SIZE 8190U + +/* ----------------------------------------------------------------------------- + * Device Access + */ + +static inline u32 vsp1_bru_read(struct vsp1_bru *bru, u32 reg) +{ + return vsp1_read(bru->entity.vsp1, reg); +} + +static inline void vsp1_bru_write(struct vsp1_bru *bru, u32 reg, u32 data) +{ + vsp1_write(bru->entity.vsp1, reg, data); +} + +/* ----------------------------------------------------------------------------- + * V4L2 Subdevice Core Operations + */ + +static bool bru_is_input_enabled(struct vsp1_bru *bru, unsigned int input) +{ + return media_entity_remote_pad(&bru->entity.pads[input]) != NULL; +} + +static int bru_s_stream(struct v4l2_subdev *subdev, int enable) +{ + struct vsp1_bru *bru = to_bru(subdev); + struct v4l2_mbus_framefmt *format; + unsigned int i; + + if (!enable) + return 0; + + format = &bru->entity.formats[BRU_PAD_SOURCE]; + + /* The hardware is extremely flexible but we have no userspace API to + * expose all the parameters, nor is it clear whether we would have use + * cases for all the supported modes. Let's just harcode the parameters + * to sane default values for now. + */ + + /* Disable both color data normalization and dithering. */ + vsp1_bru_write(bru, VI6_BRU_INCTRL, 0); + + /* Set the background position to cover the whole output image and + * set its color to opaque black. + */ + vsp1_bru_write(bru, VI6_BRU_VIRRPF_SIZE, + (format->width << VI6_BRU_VIRRPF_SIZE_HSIZE_SHIFT) | + (format->height << VI6_BRU_VIRRPF_SIZE_VSIZE_SHIFT)); + vsp1_bru_write(bru, VI6_BRU_VIRRPF_LOC, 0); + vsp1_bru_write(bru, VI6_BRU_VIRRPF_COL, + 0xff << VI6_BRU_VIRRPF_COL_A_SHIFT); + + /* Route BRU input 1 as SRC input to the ROP unit and configure the ROP + * unit with a NOP operation to make BRU input 1 available as the + * Blend/ROP unit B SRC input. + */ + vsp1_bru_write(bru, VI6_BRU_ROP, VI6_BRU_ROP_DSTSEL_BRUIN(1) | + VI6_BRU_ROP_CROP(VI6_ROP_NOP) | + VI6_BRU_ROP_AROP(VI6_ROP_NOP)); + + for (i = 0; i < 4; ++i) { + u32 ctrl = 0; + + /* Configure all Blend/ROP units corresponding to an enabled BRU + * input for alpha blending. Blend/ROP units corresponding to + * disabled BRU inputs are used in ROP NOP mode to ignore the + * SRC input. 
+ */ + if (bru_is_input_enabled(bru, i)) + ctrl |= VI6_BRU_CTRL_RBC; + else + ctrl |= VI6_BRU_CTRL_CROP(VI6_ROP_NOP) + | VI6_BRU_CTRL_AROP(VI6_ROP_NOP); + + /* Select the virtual RPF as the Blend/ROP unit A DST input to + * serve as a background color. + */ + if (i == 0) + ctrl |= VI6_BRU_CTRL_DSTSEL_VRPF; + + /* Route BRU inputs 0 to 3 as SRC inputs to Blend/ROP units A to + * D in that order. The Blend/ROP unit B SRC is hardwired to the + * ROP unit output, the corresponding register bits must be set + * to 0. + */ + if (i != 1) + ctrl |= VI6_BRU_CTRL_SRCSEL_BRUIN(i); + + vsp1_bru_write(bru, VI6_BRU_CTRL(i), ctrl); + + /* Harcode the blending formula to + * + * DSTc = DSTc * (1 - SRCa) + SRCc * SRCa + * DSTa = DSTa * (1 - SRCa) + SRCa + */ + vsp1_bru_write(bru, VI6_BRU_BLD(i), + VI6_BRU_BLD_CCMDX_255_SRC_A | + VI6_BRU_BLD_CCMDY_SRC_A | + VI6_BRU_BLD_ACMDX_255_SRC_A | + VI6_BRU_BLD_ACMDY_COEFY | + (0xff << VI6_BRU_BLD_COEFY_SHIFT)); + } + + return 0; +} + +/* ----------------------------------------------------------------------------- + * V4L2 Subdevice Pad Operations + */ + +/* + * The BRU can't perform format conversion, all sink and source formats must be + * identical. We pick the format on the first sink pad (pad 0) and propagate it + * to all other pads. + */ + +static int bru_enum_mbus_code(struct v4l2_subdev *subdev, + struct v4l2_subdev_fh *fh, + struct v4l2_subdev_mbus_code_enum *code) +{ + static const unsigned int codes[] = { + V4L2_MBUS_FMT_ARGB8888_1X32, + V4L2_MBUS_FMT_AYUV8_1X32, + }; + struct v4l2_mbus_framefmt *format; + + if (code->pad == BRU_PAD_SINK(0)) { + if (code->index >= ARRAY_SIZE(codes)) + return -EINVAL; + + code->code = codes[code->index]; + } else { + if (code->index) + return -EINVAL; + + format = v4l2_subdev_get_try_format(fh, BRU_PAD_SINK(0)); + code->code = format->code; + } + + return 0; +} + +static int bru_enum_frame_size(struct v4l2_subdev *subdev, + struct v4l2_subdev_fh *fh, + struct v4l2_subdev_frame_size_enum *fse) +{ + if (fse->index) + return -EINVAL; + + if (fse->code != V4L2_MBUS_FMT_ARGB8888_1X32 && + fse->code != V4L2_MBUS_FMT_AYUV8_1X32) + return -EINVAL; + + fse->min_width = BRU_MIN_SIZE; + fse->max_width = BRU_MAX_SIZE; + fse->min_height = BRU_MIN_SIZE; + fse->max_height = BRU_MAX_SIZE; + + return 0; +} + +static struct v4l2_rect *bru_get_compose(struct vsp1_bru *bru, + struct v4l2_subdev_fh *fh, + unsigned int pad, u32 which) +{ + switch (which) { + case V4L2_SUBDEV_FORMAT_TRY: + return v4l2_subdev_get_try_crop(fh, pad); + case V4L2_SUBDEV_FORMAT_ACTIVE: + return &bru->compose[pad]; + default: + return NULL; + } +} + +static int bru_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *fmt) +{ + struct vsp1_bru *bru = to_bru(subdev); + + fmt->format = *vsp1_entity_get_pad_format(&bru->entity, fh, fmt->pad, + fmt->which); + + return 0; +} + +static void bru_try_format(struct vsp1_bru *bru, struct v4l2_subdev_fh *fh, + unsigned int pad, struct v4l2_mbus_framefmt *fmt, + enum v4l2_subdev_format_whence which) +{ + struct v4l2_mbus_framefmt *format; + + switch (pad) { + case BRU_PAD_SINK(0): + /* Default to YUV if the requested format is not supported. */ + if (fmt->code != V4L2_MBUS_FMT_ARGB8888_1X32 && + fmt->code != V4L2_MBUS_FMT_AYUV8_1X32) + fmt->code = V4L2_MBUS_FMT_AYUV8_1X32; + break; + + default: + /* The BRU can't perform format conversion. 
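The Blend/ROP setup above programs each unit with the classic "source over destination" rule spelled out in the comment (DSTc = DSTc * (1 - SRCa) + SRCc * SRCa, DSTa = DSTa * (1 - SRCa) + SRCa). A quick numeric sanity check with made-up pixel values, not hardware data:

#include <stdio.h>

int main(void)
{
	double src_c = 200, src_a = 0.25;	/* source colour and alpha */
	double dst_c = 40,  dst_a = 1.0;	/* destination, e.g. the previous unit's output */

	double out_c = dst_c * (1.0 - src_a) + src_c * src_a;	/* 80 */
	double out_a = dst_a * (1.0 - src_a) + src_a;		/* 1.0 */

	printf("colour %.0f, alpha %.2f\n", out_c, out_a);
	return 0;
}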
*/ + format = vsp1_entity_get_pad_format(&bru->entity, fh, + BRU_PAD_SINK(0), which); + fmt->code = format->code; + break; + } + + fmt->width = clamp(fmt->width, BRU_MIN_SIZE, BRU_MAX_SIZE); + fmt->height = clamp(fmt->height, BRU_MIN_SIZE, BRU_MAX_SIZE); + fmt->field = V4L2_FIELD_NONE; + fmt->colorspace = V4L2_COLORSPACE_SRGB; +} + +static int bru_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *fmt) +{ + struct vsp1_bru *bru = to_bru(subdev); + struct v4l2_mbus_framefmt *format; + + bru_try_format(bru, fh, fmt->pad, &fmt->format, fmt->which); + + format = vsp1_entity_get_pad_format(&bru->entity, fh, fmt->pad, + fmt->which); + *format = fmt->format; + + /* Reset the compose rectangle */ + if (fmt->pad != BRU_PAD_SOURCE) { + struct v4l2_rect *compose; + + compose = bru_get_compose(bru, fh, fmt->pad, fmt->which); + compose->left = 0; + compose->top = 0; + compose->width = format->width; + compose->height = format->height; + } + + /* Propagate the format code to all pads */ + if (fmt->pad == BRU_PAD_SINK(0)) { + unsigned int i; + + for (i = 0; i <= BRU_PAD_SOURCE; ++i) { + format = vsp1_entity_get_pad_format(&bru->entity, fh, + i, fmt->which); + format->code = fmt->format.code; + } + } + + return 0; +} + +static int bru_get_selection(struct v4l2_subdev *subdev, + struct v4l2_subdev_fh *fh, + struct v4l2_subdev_selection *sel) +{ + struct vsp1_bru *bru = to_bru(subdev); + + if (sel->pad == BRU_PAD_SOURCE) + return -EINVAL; + + switch (sel->target) { + case V4L2_SEL_TGT_COMPOSE_BOUNDS: + sel->r.left = 0; + sel->r.top = 0; + sel->r.width = BRU_MAX_SIZE; + sel->r.height = BRU_MAX_SIZE; + return 0; + + case V4L2_SEL_TGT_COMPOSE: + sel->r = *bru_get_compose(bru, fh, sel->pad, sel->which); + return 0; + + default: + return -EINVAL; + } +} + +static int bru_set_selection(struct v4l2_subdev *subdev, + struct v4l2_subdev_fh *fh, + struct v4l2_subdev_selection *sel) +{ + struct vsp1_bru *bru = to_bru(subdev); + struct v4l2_mbus_framefmt *format; + struct v4l2_rect *compose; + + if (sel->pad == BRU_PAD_SOURCE) + return -EINVAL; + + if (sel->target != V4L2_SEL_TGT_COMPOSE) + return -EINVAL; + + /* The compose rectangle top left corner must be inside the output + * frame. + */ + format = vsp1_entity_get_pad_format(&bru->entity, fh, BRU_PAD_SOURCE, + sel->which); + sel->r.left = clamp_t(unsigned int, sel->r.left, 0, format->width - 1); + sel->r.top = clamp_t(unsigned int, sel->r.top, 0, format->height - 1); + + /* Scaling isn't supported, the compose rectangle size must be identical + * to the sink format size. 
+ */ + format = vsp1_entity_get_pad_format(&bru->entity, fh, sel->pad, + sel->which); + sel->r.width = format->width; + sel->r.height = format->height; + + compose = bru_get_compose(bru, fh, sel->pad, sel->which); + *compose = sel->r; + + return 0; +} + +/* ----------------------------------------------------------------------------- + * V4L2 Subdevice Operations + */ + +static struct v4l2_subdev_video_ops bru_video_ops = { + .s_stream = bru_s_stream, +}; + +static struct v4l2_subdev_pad_ops bru_pad_ops = { + .enum_mbus_code = bru_enum_mbus_code, + .enum_frame_size = bru_enum_frame_size, + .get_fmt = bru_get_format, + .set_fmt = bru_set_format, + .get_selection = bru_get_selection, + .set_selection = bru_set_selection, +}; + +static struct v4l2_subdev_ops bru_ops = { + .video = &bru_video_ops, + .pad = &bru_pad_ops, +}; + +/* ----------------------------------------------------------------------------- + * Initialization and Cleanup + */ + +struct vsp1_bru *vsp1_bru_create(struct vsp1_device *vsp1) +{ + struct v4l2_subdev *subdev; + struct vsp1_bru *bru; + int ret; + + bru = devm_kzalloc(vsp1->dev, sizeof(*bru), GFP_KERNEL); + if (bru == NULL) + return ERR_PTR(-ENOMEM); + + bru->entity.type = VSP1_ENTITY_BRU; + + ret = vsp1_entity_init(vsp1, &bru->entity, 5); + if (ret < 0) + return ERR_PTR(ret); + + /* Initialize the V4L2 subdev. */ + subdev = &bru->entity.subdev; + v4l2_subdev_init(subdev, &bru_ops); + + subdev->entity.ops = &vsp1_media_ops; + subdev->internal_ops = &vsp1_subdev_internal_ops; + snprintf(subdev->name, sizeof(subdev->name), "%s bru", + dev_name(vsp1->dev)); + v4l2_set_subdevdata(subdev, bru); + subdev->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE; + + vsp1_entity_init_formats(subdev, NULL); + + return bru; +} diff --git a/drivers/media/platform/vsp1/vsp1_bru.h b/drivers/media/platform/vsp1/vsp1_bru.h new file mode 100644 index 000000000000..37062704dbf6 --- /dev/null +++ b/drivers/media/platform/vsp1/vsp1_bru.h @@ -0,0 +1,39 @@ +/* + * vsp1_bru.h -- R-Car VSP1 Blend ROP Unit + * + * Copyright (C) 2013 Renesas Corporation + * + * Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com) + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. 
+ */ +#ifndef __VSP1_BRU_H__ +#define __VSP1_BRU_H__ + +#include <media/media-entity.h> +#include <media/v4l2-subdev.h> + +#include "vsp1_entity.h" + +struct vsp1_device; + +#define BRU_PAD_SINK(n) (n) +#define BRU_PAD_SOURCE 4 + +struct vsp1_bru { + struct vsp1_entity entity; + + struct v4l2_rect compose[4]; +}; + +static inline struct vsp1_bru *to_bru(struct v4l2_subdev *subdev) +{ + return container_of(subdev, struct vsp1_bru, entity.subdev); +} + +struct vsp1_bru *vsp1_bru_create(struct vsp1_device *vsp1); + +#endif /* __VSP1_BRU_H__ */ diff --git a/drivers/media/platform/vsp1/vsp1_drv.c b/drivers/media/platform/vsp1/vsp1_drv.c index 2f74f0e0ddf5..c69ee0657f75 100644 --- a/drivers/media/platform/vsp1/vsp1_drv.c +++ b/drivers/media/platform/vsp1/vsp1_drv.c @@ -16,10 +16,12 @@ #include <linux/device.h> #include <linux/interrupt.h> #include <linux/module.h> +#include <linux/of.h> #include <linux/platform_device.h> #include <linux/videodev2.h> #include "vsp1.h" +#include "vsp1_bru.h" #include "vsp1_hsit.h" #include "vsp1_lif.h" #include "vsp1_lut.h" @@ -155,6 +157,14 @@ static int vsp1_create_entities(struct vsp1_device *vsp1) } /* Instantiate all the entities. */ + vsp1->bru = vsp1_bru_create(vsp1); + if (IS_ERR(vsp1->bru)) { + ret = PTR_ERR(vsp1->bru); + goto done; + } + + list_add_tail(&vsp1->bru->entity.list_dev, &vsp1->entities); + vsp1->hsi = vsp1_hsit_create(vsp1, true); if (IS_ERR(vsp1->hsi)) { ret = PTR_ERR(vsp1->hsi); @@ -329,33 +339,6 @@ static int vsp1_device_init(struct vsp1_device *vsp1) return 0; } -static int vsp1_clocks_enable(struct vsp1_device *vsp1) -{ - int ret; - - ret = clk_prepare_enable(vsp1->clock); - if (ret < 0) - return ret; - - if (IS_ERR(vsp1->rt_clock)) - return 0; - - ret = clk_prepare_enable(vsp1->rt_clock); - if (ret < 0) { - clk_disable_unprepare(vsp1->clock); - return ret; - } - - return 0; -} - -static void vsp1_clocks_disable(struct vsp1_device *vsp1) -{ - if (!IS_ERR(vsp1->rt_clock)) - clk_disable_unprepare(vsp1->rt_clock); - clk_disable_unprepare(vsp1->clock); -} - /* * vsp1_device_get - Acquire the VSP1 device * @@ -373,7 +356,7 @@ struct vsp1_device *vsp1_device_get(struct vsp1_device *vsp1) if (vsp1->ref_count > 0) goto done; - ret = vsp1_clocks_enable(vsp1); + ret = clk_prepare_enable(vsp1->clock); if (ret < 0) { __vsp1 = NULL; goto done; @@ -381,7 +364,7 @@ struct vsp1_device *vsp1_device_get(struct vsp1_device *vsp1) ret = vsp1_device_init(vsp1); if (ret < 0) { - vsp1_clocks_disable(vsp1); + clk_disable_unprepare(vsp1->clock); __vsp1 = NULL; goto done; } @@ -405,7 +388,7 @@ void vsp1_device_put(struct vsp1_device *vsp1) mutex_lock(&vsp1->lock); if (--vsp1->ref_count == 0) - vsp1_clocks_disable(vsp1); + clk_disable_unprepare(vsp1->clock); mutex_unlock(&vsp1->lock); } @@ -424,7 +407,7 @@ static int vsp1_pm_suspend(struct device *dev) if (vsp1->ref_count == 0) return 0; - vsp1_clocks_disable(vsp1); + clk_disable_unprepare(vsp1->clock); return 0; } @@ -437,7 +420,7 @@ static int vsp1_pm_resume(struct device *dev) if (vsp1->ref_count) return 0; - return vsp1_clocks_enable(vsp1); + return clk_prepare_enable(vsp1->clock); } #endif @@ -449,34 +432,59 @@ static const struct dev_pm_ops vsp1_pm_ops = { * Platform Driver */ -static struct vsp1_platform_data * -vsp1_get_platform_data(struct platform_device *pdev) +static int vsp1_validate_platform_data(struct platform_device *pdev, + struct vsp1_platform_data *pdata) { - struct vsp1_platform_data *pdata = pdev->dev.platform_data; - if (pdata == NULL) { dev_err(&pdev->dev, "missing platform data\n"); - 
return NULL; + return -EINVAL; } if (pdata->rpf_count <= 0 || pdata->rpf_count > VPS1_MAX_RPF) { dev_err(&pdev->dev, "invalid number of RPF (%u)\n", pdata->rpf_count); - return NULL; + return -EINVAL; } if (pdata->uds_count <= 0 || pdata->uds_count > VPS1_MAX_UDS) { dev_err(&pdev->dev, "invalid number of UDS (%u)\n", pdata->uds_count); - return NULL; + return -EINVAL; } if (pdata->wpf_count <= 0 || pdata->wpf_count > VPS1_MAX_WPF) { dev_err(&pdev->dev, "invalid number of WPF (%u)\n", pdata->wpf_count); - return NULL; + return -EINVAL; } + return 0; +} + +static struct vsp1_platform_data * +vsp1_get_platform_data(struct platform_device *pdev) +{ + struct device_node *np = pdev->dev.of_node; + struct vsp1_platform_data *pdata; + + if (!IS_ENABLED(CONFIG_OF) || np == NULL) + return pdev->dev.platform_data; + + pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); + if (pdata == NULL) + return NULL; + + if (of_property_read_bool(np, "renesas,has-lif")) + pdata->features |= VSP1_HAS_LIF; + if (of_property_read_bool(np, "renesas,has-lut")) + pdata->features |= VSP1_HAS_LUT; + if (of_property_read_bool(np, "renesas,has-sru")) + pdata->features |= VSP1_HAS_SRU; + + of_property_read_u32(np, "renesas,#rpf", &pdata->rpf_count); + of_property_read_u32(np, "renesas,#uds", &pdata->uds_count); + of_property_read_u32(np, "renesas,#wpf", &pdata->wpf_count); + return pdata; } @@ -499,6 +507,10 @@ static int vsp1_probe(struct platform_device *pdev) if (vsp1->pdata == NULL) return -ENODEV; + ret = vsp1_validate_platform_data(pdev, vsp1->pdata); + if (ret < 0) + return ret; + /* I/O, IRQ and clock resources */ io = platform_get_resource(pdev, IORESOURCE_MEM, 0); vsp1->mmio = devm_ioremap_resource(&pdev->dev, io); @@ -511,9 +523,6 @@ static int vsp1_probe(struct platform_device *pdev) return PTR_ERR(vsp1->clock); } - /* The RT clock is optional */ - vsp1->rt_clock = devm_clk_get(&pdev->dev, "rt"); - irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0); if (!irq) { dev_err(&pdev->dev, "missing IRQ\n"); @@ -548,6 +557,11 @@ static int vsp1_remove(struct platform_device *pdev) return 0; } +static const struct of_device_id vsp1_of_match[] = { + { .compatible = "renesas,vsp1" }, + { }, +}; + static struct platform_driver vsp1_platform_driver = { .probe = vsp1_probe, .remove = vsp1_remove, @@ -555,6 +569,7 @@ static struct platform_driver vsp1_platform_driver = { .owner = THIS_MODULE, .name = "vsp1", .pm = &vsp1_pm_ops, + .of_match_table = vsp1_of_match, }, }; diff --git a/drivers/media/platform/vsp1/vsp1_entity.c b/drivers/media/platform/vsp1/vsp1_entity.c index 3fc9e4266caf..44167834285d 100644 --- a/drivers/media/platform/vsp1/vsp1_entity.c +++ b/drivers/media/platform/vsp1/vsp1_entity.c @@ -100,8 +100,10 @@ static int vsp1_entity_link_setup(struct media_entity *entity, if (source->sink) return -EBUSY; source->sink = remote->entity; + source->sink_pad = remote->index; } else { source->sink = NULL; + source->sink_pad = 0; } return 0; @@ -116,42 +118,43 @@ const struct media_entity_operations vsp1_media_ops = { * Initialization */ +static const struct vsp1_route vsp1_routes[] = { + { VSP1_ENTITY_BRU, 0, VI6_DPR_BRU_ROUTE, + { VI6_DPR_NODE_BRU_IN(0), VI6_DPR_NODE_BRU_IN(1), + VI6_DPR_NODE_BRU_IN(2), VI6_DPR_NODE_BRU_IN(3), } }, + { VSP1_ENTITY_HSI, 0, VI6_DPR_HSI_ROUTE, { VI6_DPR_NODE_HSI, } }, + { VSP1_ENTITY_HST, 0, VI6_DPR_HST_ROUTE, { VI6_DPR_NODE_HST, } }, + { VSP1_ENTITY_LIF, 0, 0, { VI6_DPR_NODE_LIF, } }, + { VSP1_ENTITY_LUT, 0, VI6_DPR_LUT_ROUTE, { VI6_DPR_NODE_LUT, } }, + { VSP1_ENTITY_RPF, 0, 
VI6_DPR_RPF_ROUTE(0), { VI6_DPR_NODE_RPF(0), } }, + { VSP1_ENTITY_RPF, 1, VI6_DPR_RPF_ROUTE(1), { VI6_DPR_NODE_RPF(1), } }, + { VSP1_ENTITY_RPF, 2, VI6_DPR_RPF_ROUTE(2), { VI6_DPR_NODE_RPF(2), } }, + { VSP1_ENTITY_RPF, 3, VI6_DPR_RPF_ROUTE(3), { VI6_DPR_NODE_RPF(3), } }, + { VSP1_ENTITY_RPF, 4, VI6_DPR_RPF_ROUTE(4), { VI6_DPR_NODE_RPF(4), } }, + { VSP1_ENTITY_SRU, 0, VI6_DPR_SRU_ROUTE, { VI6_DPR_NODE_SRU, } }, + { VSP1_ENTITY_UDS, 0, VI6_DPR_UDS_ROUTE(0), { VI6_DPR_NODE_UDS(0), } }, + { VSP1_ENTITY_UDS, 1, VI6_DPR_UDS_ROUTE(1), { VI6_DPR_NODE_UDS(1), } }, + { VSP1_ENTITY_UDS, 2, VI6_DPR_UDS_ROUTE(2), { VI6_DPR_NODE_UDS(2), } }, + { VSP1_ENTITY_WPF, 0, 0, { VI6_DPR_NODE_WPF(0), } }, + { VSP1_ENTITY_WPF, 1, 0, { VI6_DPR_NODE_WPF(1), } }, + { VSP1_ENTITY_WPF, 2, 0, { VI6_DPR_NODE_WPF(2), } }, + { VSP1_ENTITY_WPF, 3, 0, { VI6_DPR_NODE_WPF(3), } }, +}; + int vsp1_entity_init(struct vsp1_device *vsp1, struct vsp1_entity *entity, unsigned int num_pads) { - static const struct { - unsigned int id; - unsigned int reg; - } routes[] = { - { VI6_DPR_NODE_HSI, VI6_DPR_HSI_ROUTE }, - { VI6_DPR_NODE_HST, VI6_DPR_HST_ROUTE }, - { VI6_DPR_NODE_LIF, 0 }, - { VI6_DPR_NODE_LUT, VI6_DPR_LUT_ROUTE }, - { VI6_DPR_NODE_RPF(0), VI6_DPR_RPF_ROUTE(0) }, - { VI6_DPR_NODE_RPF(1), VI6_DPR_RPF_ROUTE(1) }, - { VI6_DPR_NODE_RPF(2), VI6_DPR_RPF_ROUTE(2) }, - { VI6_DPR_NODE_RPF(3), VI6_DPR_RPF_ROUTE(3) }, - { VI6_DPR_NODE_RPF(4), VI6_DPR_RPF_ROUTE(4) }, - { VI6_DPR_NODE_SRU, VI6_DPR_SRU_ROUTE }, - { VI6_DPR_NODE_UDS(0), VI6_DPR_UDS_ROUTE(0) }, - { VI6_DPR_NODE_UDS(1), VI6_DPR_UDS_ROUTE(1) }, - { VI6_DPR_NODE_UDS(2), VI6_DPR_UDS_ROUTE(2) }, - { VI6_DPR_NODE_WPF(0), 0 }, - { VI6_DPR_NODE_WPF(1), 0 }, - { VI6_DPR_NODE_WPF(2), 0 }, - { VI6_DPR_NODE_WPF(3), 0 }, - }; - unsigned int i; - for (i = 0; i < ARRAY_SIZE(routes); ++i) { - if (routes[i].id == entity->id) { - entity->route = routes[i].reg; + for (i = 0; i < ARRAY_SIZE(vsp1_routes); ++i) { + if (vsp1_routes[i].type == entity->type && + vsp1_routes[i].index == entity->index) { + entity->route = &vsp1_routes[i]; break; } } - if (i == ARRAY_SIZE(routes)) + if (i == ARRAY_SIZE(vsp1_routes)) return -EINVAL; entity->vsp1 = vsp1; diff --git a/drivers/media/platform/vsp1/vsp1_entity.h b/drivers/media/platform/vsp1/vsp1_entity.h index f6fd6988aeb0..7afbd8a7ba66 100644 --- a/drivers/media/platform/vsp1/vsp1_entity.h +++ b/drivers/media/platform/vsp1/vsp1_entity.h @@ -20,6 +20,7 @@ struct vsp1_device; enum vsp1_entity_type { + VSP1_ENTITY_BRU, VSP1_ENTITY_HSI, VSP1_ENTITY_HST, VSP1_ENTITY_LIF, @@ -30,13 +31,31 @@ enum vsp1_entity_type { VSP1_ENTITY_WPF, }; +/* + * struct vsp1_route - Entity routing configuration + * @type: Entity type this routing entry is associated with + * @index: Entity index this routing entry is associated with + * @reg: Output routing configuration register + * @inputs: Target node value for each input + * + * Each $vsp1_route entry describes routing configuration for the entity + * specified by the entry's @type and @index. @reg indicates the register that + * holds output routing configuration for the entity, and the @inputs array + * store the target node value for each input of the entity. 
+ */ +struct vsp1_route { + enum vsp1_entity_type type; + unsigned int index; + unsigned int reg; + unsigned int inputs[4]; +}; + struct vsp1_entity { struct vsp1_device *vsp1; enum vsp1_entity_type type; unsigned int index; - unsigned int id; - unsigned int route; + const struct vsp1_route *route; struct list_head list_dev; struct list_head list_pipe; @@ -45,6 +64,7 @@ struct vsp1_entity { unsigned int source_pad; struct media_entity *sink; + unsigned int sink_pad; struct v4l2_subdev subdev; struct v4l2_mbus_framefmt *formats; diff --git a/drivers/media/platform/vsp1/vsp1_hsit.c b/drivers/media/platform/vsp1/vsp1_hsit.c index 285485350d82..db2950a73c60 100644 --- a/drivers/media/platform/vsp1/vsp1_hsit.c +++ b/drivers/media/platform/vsp1/vsp1_hsit.c @@ -193,13 +193,10 @@ struct vsp1_hsit *vsp1_hsit_create(struct vsp1_device *vsp1, bool inverse) hsit->inverse = inverse; - if (inverse) { + if (inverse) hsit->entity.type = VSP1_ENTITY_HSI; - hsit->entity.id = VI6_DPR_NODE_HSI; - } else { + else hsit->entity.type = VSP1_ENTITY_HST; - hsit->entity.id = VI6_DPR_NODE_HST; - } ret = vsp1_entity_init(vsp1, &hsit->entity, 2); if (ret < 0) diff --git a/drivers/media/platform/vsp1/vsp1_lif.c b/drivers/media/platform/vsp1/vsp1_lif.c index 135a78957014..d4fb23e9c4a8 100644 --- a/drivers/media/platform/vsp1/vsp1_lif.c +++ b/drivers/media/platform/vsp1/vsp1_lif.c @@ -215,7 +215,6 @@ struct vsp1_lif *vsp1_lif_create(struct vsp1_device *vsp1) return ERR_PTR(-ENOMEM); lif->entity.type = VSP1_ENTITY_LIF; - lif->entity.id = VI6_DPR_NODE_LIF; ret = vsp1_entity_init(vsp1, &lif->entity, 2); if (ret < 0) diff --git a/drivers/media/platform/vsp1/vsp1_lut.c b/drivers/media/platform/vsp1/vsp1_lut.c index 4e9dc7c86ef8..fea36ebe2565 100644 --- a/drivers/media/platform/vsp1/vsp1_lut.c +++ b/drivers/media/platform/vsp1/vsp1_lut.c @@ -229,7 +229,6 @@ struct vsp1_lut *vsp1_lut_create(struct vsp1_device *vsp1) return ERR_PTR(-ENOMEM); lut->entity.type = VSP1_ENTITY_LUT; - lut->entity.id = VI6_DPR_NODE_LUT; ret = vsp1_entity_init(vsp1, &lut->entity, 2); if (ret < 0) diff --git a/drivers/media/platform/vsp1/vsp1_regs.h b/drivers/media/platform/vsp1/vsp1_regs.h index 28650806c20f..3e74b44286f6 100644 --- a/drivers/media/platform/vsp1/vsp1_regs.h +++ b/drivers/media/platform/vsp1/vsp1_regs.h @@ -451,13 +451,111 @@ * BRU Control Registers */ +#define VI6_ROP_NOP 0 +#define VI6_ROP_AND 1 +#define VI6_ROP_AND_REV 2 +#define VI6_ROP_COPY 3 +#define VI6_ROP_AND_INV 4 +#define VI6_ROP_CLEAR 5 +#define VI6_ROP_XOR 6 +#define VI6_ROP_OR 7 +#define VI6_ROP_NOR 8 +#define VI6_ROP_EQUIV 9 +#define VI6_ROP_INVERT 10 +#define VI6_ROP_OR_REV 11 +#define VI6_ROP_COPY_INV 12 +#define VI6_ROP_OR_INV 13 +#define VI6_ROP_NAND 14 +#define VI6_ROP_SET 15 + #define VI6_BRU_INCTRL 0x2c00 +#define VI6_BRU_INCTRL_NRM (1 << 28) +#define VI6_BRU_INCTRL_DnON (1 << (16 + (n))) +#define VI6_BRU_INCTRL_DITHn_OFF (0 << ((n) * 4)) +#define VI6_BRU_INCTRL_DITHn_18BPP (1 << ((n) * 4)) +#define VI6_BRU_INCTRL_DITHn_16BPP (2 << ((n) * 4)) +#define VI6_BRU_INCTRL_DITHn_15BPP (3 << ((n) * 4)) +#define VI6_BRU_INCTRL_DITHn_12BPP (4 << ((n) * 4)) +#define VI6_BRU_INCTRL_DITHn_8BPP (5 << ((n) * 4)) +#define VI6_BRU_INCTRL_DITHn_MASK (7 << ((n) * 4)) +#define VI6_BRU_INCTRL_DITHn_SHIFT ((n) * 4) + #define VI6_BRU_VIRRPF_SIZE 0x2c04 +#define VI6_BRU_VIRRPF_SIZE_HSIZE_MASK (0x1fff << 16) +#define VI6_BRU_VIRRPF_SIZE_HSIZE_SHIFT 16 +#define VI6_BRU_VIRRPF_SIZE_VSIZE_MASK (0x1fff << 0) +#define VI6_BRU_VIRRPF_SIZE_VSIZE_SHIFT 0 + #define VI6_BRU_VIRRPF_LOC 0x2c08 +#define 
VI6_BRU_VIRRPF_LOC_HCOORD_MASK (0x1fff << 16) +#define VI6_BRU_VIRRPF_LOC_HCOORD_SHIFT 16 +#define VI6_BRU_VIRRPF_LOC_VCOORD_MASK (0x1fff << 0) +#define VI6_BRU_VIRRPF_LOC_VCOORD_SHIFT 0 + #define VI6_BRU_VIRRPF_COL 0x2c0c +#define VI6_BRU_VIRRPF_COL_A_MASK (0xff << 24) +#define VI6_BRU_VIRRPF_COL_A_SHIFT 24 +#define VI6_BRU_VIRRPF_COL_RCR_MASK (0xff << 16) +#define VI6_BRU_VIRRPF_COL_RCR_SHIFT 16 +#define VI6_BRU_VIRRPF_COL_GY_MASK (0xff << 8) +#define VI6_BRU_VIRRPF_COL_GY_SHIFT 8 +#define VI6_BRU_VIRRPF_COL_BCB_MASK (0xff << 0) +#define VI6_BRU_VIRRPF_COL_BCB_SHIFT 0 + #define VI6_BRU_CTRL(n) (0x2c10 + (n) * 8) +#define VI6_BRU_CTRL_RBC (1 << 31) +#define VI6_BRU_CTRL_DSTSEL_BRUIN(n) ((n) << 20) +#define VI6_BRU_CTRL_DSTSEL_VRPF (4 << 20) +#define VI6_BRU_CTRL_DSTSEL_MASK (7 << 20) +#define VI6_BRU_CTRL_SRCSEL_BRUIN(n) ((n) << 16) +#define VI6_BRU_CTRL_SRCSEL_VRPF (4 << 16) +#define VI6_BRU_CTRL_SRCSEL_MASK (7 << 16) +#define VI6_BRU_CTRL_CROP(rop) ((rop) << 4) +#define VI6_BRU_CTRL_CROP_MASK (0xf << 4) +#define VI6_BRU_CTRL_AROP(rop) ((rop) << 0) +#define VI6_BRU_CTRL_AROP_MASK (0xf << 0) + #define VI6_BRU_BLD(n) (0x2c14 + (n) * 8) +#define VI6_BRU_BLD_CBES (1 << 31) +#define VI6_BRU_BLD_CCMDX_DST_A (0 << 28) +#define VI6_BRU_BLD_CCMDX_255_DST_A (1 << 28) +#define VI6_BRU_BLD_CCMDX_SRC_A (2 << 28) +#define VI6_BRU_BLD_CCMDX_255_SRC_A (3 << 28) +#define VI6_BRU_BLD_CCMDX_COEFX (4 << 28) +#define VI6_BRU_BLD_CCMDX_MASK (7 << 28) +#define VI6_BRU_BLD_CCMDY_DST_A (0 << 24) +#define VI6_BRU_BLD_CCMDY_255_DST_A (1 << 24) +#define VI6_BRU_BLD_CCMDY_SRC_A (2 << 24) +#define VI6_BRU_BLD_CCMDY_255_SRC_A (3 << 24) +#define VI6_BRU_BLD_CCMDY_COEFY (4 << 24) +#define VI6_BRU_BLD_CCMDY_MASK (7 << 24) +#define VI6_BRU_BLD_CCMDY_SHIFT 24 +#define VI6_BRU_BLD_ABES (1 << 23) +#define VI6_BRU_BLD_ACMDX_DST_A (0 << 20) +#define VI6_BRU_BLD_ACMDX_255_DST_A (1 << 20) +#define VI6_BRU_BLD_ACMDX_SRC_A (2 << 20) +#define VI6_BRU_BLD_ACMDX_255_SRC_A (3 << 20) +#define VI6_BRU_BLD_ACMDX_COEFX (4 << 20) +#define VI6_BRU_BLD_ACMDX_MASK (7 << 20) +#define VI6_BRU_BLD_ACMDY_DST_A (0 << 16) +#define VI6_BRU_BLD_ACMDY_255_DST_A (1 << 16) +#define VI6_BRU_BLD_ACMDY_SRC_A (2 << 16) +#define VI6_BRU_BLD_ACMDY_255_SRC_A (3 << 16) +#define VI6_BRU_BLD_ACMDY_COEFY (4 << 16) +#define VI6_BRU_BLD_ACMDY_MASK (7 << 16) +#define VI6_BRU_BLD_COEFX_MASK (0xff << 8) +#define VI6_BRU_BLD_COEFX_SHIFT 8 +#define VI6_BRU_BLD_COEFY_MASK (0xff << 0) +#define VI6_BRU_BLD_COEFY_SHIFT 0 + #define VI6_BRU_ROP 0x2c30 +#define VI6_BRU_ROP_DSTSEL_BRUIN(n) ((n) << 20) +#define VI6_BRU_ROP_DSTSEL_VRPF (4 << 20) +#define VI6_BRU_ROP_DSTSEL_MASK (7 << 20) +#define VI6_BRU_ROP_CROP(rop) ((rop) << 4) +#define VI6_BRU_ROP_CROP_MASK (0xf << 4) +#define VI6_BRU_ROP_AROP(rop) ((rop) << 0) +#define VI6_BRU_ROP_AROP_MASK (0xf << 0) /* ----------------------------------------------------------------------------- * HGO Control Registers diff --git a/drivers/media/platform/vsp1/vsp1_rpf.c b/drivers/media/platform/vsp1/vsp1_rpf.c index 2b04d0f95c62..c3d98642a4aa 100644 --- a/drivers/media/platform/vsp1/vsp1_rpf.c +++ b/drivers/media/platform/vsp1/vsp1_rpf.c @@ -96,8 +96,10 @@ static int rpf_s_stream(struct v4l2_subdev *subdev, int enable) vsp1_rpf_write(rpf, VI6_RPF_INFMT, infmt); vsp1_rpf_write(rpf, VI6_RPF_DSWAP, fmtinfo->swap); - /* Output location. Composing isn't supported yet. 
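/*
 * Illustrative sketch, not part of the patch: the BRU helpers added to
 * vsp1_regs.h above are plain shift macros, so a blend-control value is
 * built by OR-ing them together. The stand-alone program below copies a
 * few of those definitions verbatim and prints one possible VI6_BRU_CTRL
 * setting: BRU input 1 as blend source and, judging only by the macro
 * names, a "copy" raster operation for both the colour (CROP) and alpha
 * (AROP) channels. The field semantics are assumptions drawn from the
 * names; only the bit positions come from the hunk itself.
 */
#include <stdio.h>

#define VI6_ROP_COPY			3
#define VI6_BRU_CTRL_DSTSEL_BRUIN(n)	((n) << 20)
#define VI6_BRU_CTRL_SRCSEL_BRUIN(n)	((n) << 16)
#define VI6_BRU_CTRL_CROP(rop)		((rop) << 4)
#define VI6_BRU_CTRL_AROP(rop)		((rop) << 0)

int main(void)
{
	/* Destination from BRU input 0, source from BRU input 1. */
	unsigned int ctrl = VI6_BRU_CTRL_DSTSEL_BRUIN(0) |
			    VI6_BRU_CTRL_SRCSEL_BRUIN(1) |
			    VI6_BRU_CTRL_CROP(VI6_ROP_COPY) |
			    VI6_BRU_CTRL_AROP(VI6_ROP_COPY);

	printf("VI6_BRU_CTRL value: 0x%08x\n", ctrl); /* 0x00010033 */
	return 0;
}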
*/ - vsp1_rpf_write(rpf, VI6_RPF_LOC, 0); + /* Output location */ + vsp1_rpf_write(rpf, VI6_RPF_LOC, + (rpf->location.left << VI6_RPF_LOC_HCOORD_SHIFT) | + (rpf->location.top << VI6_RPF_LOC_VCOORD_SHIFT)); /* Disable alpha, mask and color key. Set the alpha channel to a fixed * value of 255. @@ -176,7 +178,6 @@ struct vsp1_rwpf *vsp1_rpf_create(struct vsp1_device *vsp1, unsigned int index) rpf->entity.type = VSP1_ENTITY_RPF; rpf->entity.index = index; - rpf->entity.id = VI6_DPR_NODE_RPF(index); ret = vsp1_entity_init(vsp1, &rpf->entity, 2); if (ret < 0) diff --git a/drivers/media/platform/vsp1/vsp1_rwpf.h b/drivers/media/platform/vsp1/vsp1_rwpf.h index 5c5ee81bbeae..b4fb65e58770 100644 --- a/drivers/media/platform/vsp1/vsp1_rwpf.h +++ b/drivers/media/platform/vsp1/vsp1_rwpf.h @@ -30,6 +30,10 @@ struct vsp1_rwpf { unsigned int max_width; unsigned int max_height; + struct { + unsigned int left; + unsigned int top; + } location; struct v4l2_rect crop; unsigned int offsets[2]; diff --git a/drivers/media/platform/vsp1/vsp1_sru.c b/drivers/media/platform/vsp1/vsp1_sru.c index 7ab1a0b2d656..aa0e04c56f3f 100644 --- a/drivers/media/platform/vsp1/vsp1_sru.c +++ b/drivers/media/platform/vsp1/vsp1_sru.c @@ -327,7 +327,6 @@ struct vsp1_sru *vsp1_sru_create(struct vsp1_device *vsp1) return ERR_PTR(-ENOMEM); sru->entity.type = VSP1_ENTITY_SRU; - sru->entity.id = VI6_DPR_NODE_SRU; ret = vsp1_entity_init(vsp1, &sru->entity, 2); if (ret < 0) diff --git a/drivers/media/platform/vsp1/vsp1_uds.c b/drivers/media/platform/vsp1/vsp1_uds.c index 622342ac7770..0293bdbb4401 100644 --- a/drivers/media/platform/vsp1/vsp1_uds.c +++ b/drivers/media/platform/vsp1/vsp1_uds.c @@ -131,7 +131,7 @@ static int uds_s_stream(struct v4l2_subdev *subdev, int enable) return 0; /* Enable multi-tap scaling. */ - vsp1_uds_write(uds, VI6_UDS_CTRL, VI6_UDS_CTRL_BC); + vsp1_uds_write(uds, VI6_UDS_CTRL, VI6_UDS_CTRL_AON | VI6_UDS_CTRL_BC); vsp1_uds_write(uds, VI6_UDS_PASS_BWIDTH, (uds_passband_width(uds->hscale) @@ -139,7 +139,6 @@ static int uds_s_stream(struct v4l2_subdev *subdev, int enable) (uds_passband_width(uds->vscale) << VI6_UDS_PASS_BWIDTH_V_SHIFT)); - /* Set the scaling ratios and the output size. */ format = &uds->entity.formats[UDS_PAD_SOURCE]; @@ -323,7 +322,6 @@ struct vsp1_uds *vsp1_uds_create(struct vsp1_device *vsp1, unsigned int index) uds->entity.type = VSP1_ENTITY_UDS; uds->entity.index = index; - uds->entity.id = VI6_DPR_NODE_UDS(index); ret = vsp1_entity_init(vsp1, &uds->entity, 2); if (ret < 0) diff --git a/drivers/media/platform/vsp1/vsp1_video.c b/drivers/media/platform/vsp1/vsp1_video.c index a0595c17700f..8a1253e51f04 100644 --- a/drivers/media/platform/vsp1/vsp1_video.c +++ b/drivers/media/platform/vsp1/vsp1_video.c @@ -28,6 +28,7 @@ #include <media/videobuf2-dma-contig.h> #include "vsp1.h" +#include "vsp1_bru.h" #include "vsp1_entity.h" #include "vsp1_rwpf.h" #include "vsp1_video.h" @@ -280,6 +281,9 @@ static int vsp1_pipeline_validate_branch(struct vsp1_rwpf *input, struct media_pad *pad; bool uds_found = false; + input->location.left = 0; + input->location.top = 0; + pad = media_entity_remote_pad(&input->entity.pads[RWPF_PAD_SOURCE]); while (1) { @@ -292,6 +296,17 @@ static int vsp1_pipeline_validate_branch(struct vsp1_rwpf *input, entity = to_vsp1_entity(media_entity_to_v4l2_subdev(pad->entity)); + /* A BRU is present in the pipeline, store the compose rectangle + * location in the input RPF for use when configuring the RPF. 
+ */ + if (entity->type == VSP1_ENTITY_BRU) { + struct vsp1_bru *bru = to_bru(&entity->subdev); + struct v4l2_rect *rect = &bru->compose[pad->index]; + + input->location.left = rect->left; + input->location.top = rect->top; + } + /* We've reached the WPF, we're done. */ if (entity->type == VSP1_ENTITY_WPF) break; @@ -363,6 +378,8 @@ static int vsp1_pipeline_validate(struct vsp1_pipeline *pipe, rwpf->video.pipe_index = 0; } else if (e->type == VSP1_ENTITY_LIF) { pipe->lif = e; + } else if (e->type == VSP1_ENTITY_BRU) { + pipe->bru = e; } } @@ -392,6 +409,7 @@ error: pipe->num_video = 0; pipe->num_inputs = 0; pipe->output = NULL; + pipe->bru = NULL; pipe->lif = NULL; return ret; } @@ -430,6 +448,7 @@ static void vsp1_pipeline_cleanup(struct vsp1_pipeline *pipe) pipe->num_video = 0; pipe->num_inputs = 0; pipe->output = NULL; + pipe->bru = NULL; pipe->lif = NULL; } @@ -461,7 +480,7 @@ static int vsp1_pipeline_stop(struct vsp1_pipeline *pipe) list_for_each_entry(entity, &pipe->entities, list_pipe) { if (entity->route) - vsp1_write(entity->vsp1, entity->route, + vsp1_write(entity->vsp1, entity->route->reg, VI6_DPR_NODE_UNUSED); v4l2_subdev_call(&entity->subdev, video, s_stream, 0); @@ -680,11 +699,12 @@ static void vsp1_entity_route_setup(struct vsp1_entity *source) { struct vsp1_entity *sink; - if (source->route == 0) + if (source->route->reg == 0) return; sink = container_of(source->sink, struct vsp1_entity, subdev.entity); - vsp1_write(source->vsp1, source->route, sink->id); + vsp1_write(source->vsp1, source->route->reg, + sink->route->inputs[source->sink_pad]); } static int vsp1_video_start_streaming(struct vb2_queue *vq, unsigned int count) diff --git a/drivers/media/platform/vsp1/vsp1_video.h b/drivers/media/platform/vsp1/vsp1_video.h index 53e4b3745940..c04d48fa2999 100644 --- a/drivers/media/platform/vsp1/vsp1_video.h +++ b/drivers/media/platform/vsp1/vsp1_video.h @@ -75,6 +75,7 @@ struct vsp1_pipeline { unsigned int num_inputs; struct vsp1_rwpf *inputs[VPS1_MAX_RPF]; struct vsp1_rwpf *output; + struct vsp1_entity *bru; struct vsp1_entity *lif; struct list_head entities; diff --git a/drivers/media/platform/vsp1/vsp1_wpf.c b/drivers/media/platform/vsp1/vsp1_wpf.c index 11a61c601da0..1294340dcb36 100644 --- a/drivers/media/platform/vsp1/vsp1_wpf.c +++ b/drivers/media/platform/vsp1/vsp1_wpf.c @@ -58,13 +58,21 @@ static int wpf_s_stream(struct v4l2_subdev *subdev, int enable) return 0; } - /* Sources */ + /* Sources. If the pipeline has a single input configure it as the + * master layer. Otherwise configure all inputs as sub-layers and + * select the virtual RPF as the master layer. + */ for (i = 0; i < pipe->num_inputs; ++i) { struct vsp1_rwpf *input = pipe->inputs[i]; - srcrpf |= VI6_WPF_SRCRPF_RPF_ACT_MST(input->entity.index); + srcrpf |= pipe->num_inputs == 1 + ? VI6_WPF_SRCRPF_RPF_ACT_MST(input->entity.index) + : VI6_WPF_SRCRPF_RPF_ACT_SUB(input->entity.index); } + if (pipe->num_inputs > 1) + srcrpf |= VI6_WPF_SRCRPF_VIRACT_MST; + vsp1_wpf_write(wpf, VI6_WPF_SRCRPF, srcrpf); /* Destination stride. 
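/*
 * A toy sketch rather than part of the patch: with the new vsp1_route
 * table, routing no longer writes a single per-entity "id" but the node
 * value of the specific sink pad being fed, which is what makes a
 * multi-input entity such as the BRU addressable. The program below only
 * mirrors the shape of struct vsp1_route and vsp1_entity_route_setup();
 * the register offsets and node numbers are made-up placeholders, not the
 * real VI6_DPR_* constants.
 */
#include <stdio.h>

struct route {
	unsigned int reg;	/* output routing register of the entity */
	unsigned int inputs[4];	/* DPR node value for each sink pad */
};

/* Placeholder routes: an RPF feeding one of the BRU's four inputs. */
static const struct route rpf1_route = { .reg = 0x100 /* hypothetical */ };
static const struct route bru_route = {
	.reg = 0x104,			/* hypothetical */
	.inputs = { 20, 21, 22, 23 },	/* hypothetical BRU_IN(0..3) nodes */
};

static void route_setup(const struct route *source, const struct route *sink,
			unsigned int sink_pad)
{
	/* As in vsp1_entity_route_setup(): program the source's routing
	 * register with the node value of the sink pad it is linked to. */
	printf("write node %u to routing register 0x%x\n",
	       sink->inputs[sink_pad], source->reg);
}

int main(void)
{
	route_setup(&rpf1_route, &bru_route, 2);	/* RPF1 -> BRU pad 2 */
	return 0;
}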
*/ @@ -181,7 +189,6 @@ struct vsp1_rwpf *vsp1_wpf_create(struct vsp1_device *vsp1, unsigned int index) wpf->entity.type = VSP1_ENTITY_WPF; wpf->entity.index = index; - wpf->entity.id = VI6_DPR_NODE_WPF(index); ret = vsp1_entity_init(vsp1, &wpf->entity, 2); if (ret < 0) diff --git a/drivers/memstick/host/Kconfig b/drivers/memstick/host/Kconfig index 1b37cf8cd204..7310e32b5991 100644 --- a/drivers/memstick/host/Kconfig +++ b/drivers/memstick/host/Kconfig @@ -52,3 +52,13 @@ config MEMSTICK_REALTEK_PCI To compile this driver as a module, choose M here: the module will be called rtsx_pci_ms. + +config MEMSTICK_REALTEK_USB + tristate "Realtek USB Memstick Card Interface Driver" + depends on MFD_RTSX_USB + help + Say Y here to include driver code to support Memstick card interface + of Realtek RTS5129/39 series USB card reader + + To compile this driver as a module, choose M here: the module will + be called rts5139_ms. diff --git a/drivers/memstick/host/Makefile b/drivers/memstick/host/Makefile index af3459d7686e..491c9557441d 100644 --- a/drivers/memstick/host/Makefile +++ b/drivers/memstick/host/Makefile @@ -6,3 +6,4 @@ obj-$(CONFIG_MEMSTICK_TIFM_MS) += tifm_ms.o obj-$(CONFIG_MEMSTICK_JMICRON_38X) += jmb38x_ms.o obj-$(CONFIG_MEMSTICK_R592) += r592.o obj-$(CONFIG_MEMSTICK_REALTEK_PCI) += rtsx_pci_ms.o +obj-$(CONFIG_MEMSTICK_REALTEK_USB) += rtsx_usb_ms.o diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c new file mode 100644 index 000000000000..a7282b7d4de8 --- /dev/null +++ b/drivers/memstick/host/rtsx_usb_ms.c @@ -0,0 +1,839 @@ +/* Realtek USB Memstick Card Interface driver + * + * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see <http://www.gnu.org/licenses/>. 
+ * + * Author: + * Roger Tseng <rogerable@realtek.com> + */ + +#include <linux/module.h> +#include <linux/highmem.h> +#include <linux/delay.h> +#include <linux/platform_device.h> +#include <linux/workqueue.h> +#include <linux/memstick.h> +#include <linux/kthread.h> +#include <linux/mfd/rtsx_usb.h> +#include <linux/pm_runtime.h> +#include <linux/mutex.h> +#include <linux/sched.h> +#include <linux/completion.h> +#include <asm/unaligned.h> + +struct rtsx_usb_ms { + struct platform_device *pdev; + struct rtsx_ucr *ucr; + struct memstick_host *msh; + struct memstick_request *req; + + struct mutex host_mutex; + struct work_struct handle_req; + + struct task_struct *detect_ms; + struct completion detect_ms_exit; + + u8 ssc_depth; + unsigned int clock; + int power_mode; + unsigned char ifmode; + bool eject; +}; + +static inline struct device *ms_dev(struct rtsx_usb_ms *host) +{ + return &(host->pdev->dev); +} + +static inline void ms_clear_error(struct rtsx_usb_ms *host) +{ + struct rtsx_ucr *ucr = host->ucr; + rtsx_usb_ep0_write_register(ucr, CARD_STOP, + MS_STOP | MS_CLR_ERR, + MS_STOP | MS_CLR_ERR); + + rtsx_usb_clear_dma_err(ucr); + rtsx_usb_clear_fsm_err(ucr); +} + +#ifdef DEBUG + +static void ms_print_debug_regs(struct rtsx_usb_ms *host) +{ + struct rtsx_ucr *ucr = host->ucr; + u16 i; + u8 *ptr; + + /* Print MS host internal registers */ + rtsx_usb_init_cmd(ucr); + + /* MS_CFG to MS_INT_REG */ + for (i = 0xFD40; i <= 0xFD44; i++) + rtsx_usb_add_cmd(ucr, READ_REG_CMD, i, 0, 0); + + /* CARD_SHARE_MODE to CARD_GPIO */ + for (i = 0xFD51; i <= 0xFD56; i++) + rtsx_usb_add_cmd(ucr, READ_REG_CMD, i, 0, 0); + + /* CARD_PULL_CTLx */ + for (i = 0xFD60; i <= 0xFD65; i++) + rtsx_usb_add_cmd(ucr, READ_REG_CMD, i, 0, 0); + + /* CARD_DATA_SOURCE, CARD_SELECT, CARD_CLK_EN, CARD_PWR_CTL */ + rtsx_usb_add_cmd(ucr, READ_REG_CMD, CARD_DATA_SOURCE, 0, 0); + rtsx_usb_add_cmd(ucr, READ_REG_CMD, CARD_SELECT, 0, 0); + rtsx_usb_add_cmd(ucr, READ_REG_CMD, CARD_CLK_EN, 0, 0); + rtsx_usb_add_cmd(ucr, READ_REG_CMD, CARD_PWR_CTL, 0, 0); + + rtsx_usb_send_cmd(ucr, MODE_CR, 100); + rtsx_usb_get_rsp(ucr, 21, 100); + + ptr = ucr->rsp_buf; + for (i = 0xFD40; i <= 0xFD44; i++) + dev_dbg(ms_dev(host), "0x%04X: 0x%02x\n", i, *(ptr++)); + for (i = 0xFD51; i <= 0xFD56; i++) + dev_dbg(ms_dev(host), "0x%04X: 0x%02x\n", i, *(ptr++)); + for (i = 0xFD60; i <= 0xFD65; i++) + dev_dbg(ms_dev(host), "0x%04X: 0x%02x\n", i, *(ptr++)); + + dev_dbg(ms_dev(host), "0x%04X: 0x%02x\n", CARD_DATA_SOURCE, *(ptr++)); + dev_dbg(ms_dev(host), "0x%04X: 0x%02x\n", CARD_SELECT, *(ptr++)); + dev_dbg(ms_dev(host), "0x%04X: 0x%02x\n", CARD_CLK_EN, *(ptr++)); + dev_dbg(ms_dev(host), "0x%04X: 0x%02x\n", CARD_PWR_CTL, *(ptr++)); +} + +#else + +static void ms_print_debug_regs(struct rtsx_usb_ms *host) +{ +} + +#endif + +static int ms_pull_ctl_disable_lqfp48(struct rtsx_ucr *ucr) +{ + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL1, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL2, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL3, 0xFF, 0x95); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL4, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL5, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL6, 0xFF, 0xA5); + + return rtsx_usb_send_cmd(ucr, MODE_C, 100); +} + +static int ms_pull_ctl_disable_qfn24(struct rtsx_ucr *ucr) +{ + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL1, 0xFF, 0x65); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, 
CARD_PULL_CTL2, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL3, 0xFF, 0x95); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL4, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL5, 0xFF, 0x56); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL6, 0xFF, 0x59); + + return rtsx_usb_send_cmd(ucr, MODE_C, 100); +} + +static int ms_pull_ctl_enable_lqfp48(struct rtsx_ucr *ucr) +{ + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL1, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL2, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL3, 0xFF, 0x95); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL4, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL5, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL6, 0xFF, 0xA5); + + return rtsx_usb_send_cmd(ucr, MODE_C, 100); +} + +static int ms_pull_ctl_enable_qfn24(struct rtsx_ucr *ucr) +{ + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL1, 0xFF, 0x65); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL2, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL3, 0xFF, 0x95); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL4, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL5, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL6, 0xFF, 0x59); + + return rtsx_usb_send_cmd(ucr, MODE_C, 100); +} + +static int ms_power_on(struct rtsx_usb_ms *host) +{ + struct rtsx_ucr *ucr = host->ucr; + int err; + + dev_dbg(ms_dev(host), "%s\n", __func__); + + rtsx_usb_init_cmd(ucr); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_SELECT, 0x07, MS_MOD_SEL); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_SHARE_MODE, + CARD_SHARE_MASK, CARD_SHARE_MS); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_CLK_EN, + MS_CLK_EN, MS_CLK_EN); + err = rtsx_usb_send_cmd(ucr, MODE_C, 100); + if (err < 0) + return err; + + if (CHECK_PKG(ucr, LQFP48)) + err = ms_pull_ctl_enable_lqfp48(ucr); + else + err = ms_pull_ctl_enable_qfn24(ucr); + if (err < 0) + return err; + + err = rtsx_usb_write_register(ucr, CARD_PWR_CTL, + POWER_MASK, PARTIAL_POWER_ON); + if (err) + return err; + + usleep_range(800, 1000); + + rtsx_usb_init_cmd(ucr); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PWR_CTL, + POWER_MASK, POWER_ON); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_OE, + MS_OUTPUT_EN, MS_OUTPUT_EN); + + return rtsx_usb_send_cmd(ucr, MODE_C, 100); +} + +static int ms_power_off(struct rtsx_usb_ms *host) +{ + struct rtsx_ucr *ucr = host->ucr; + int err; + + dev_dbg(ms_dev(host), "%s\n", __func__); + + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_CLK_EN, MS_CLK_EN, 0); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_OE, MS_OUTPUT_EN, 0); + + err = rtsx_usb_send_cmd(ucr, MODE_C, 100); + if (err < 0) + return err; + + if (CHECK_PKG(ucr, LQFP48)) + return ms_pull_ctl_disable_lqfp48(ucr); + + return ms_pull_ctl_disable_qfn24(ucr); +} + +static int ms_transfer_data(struct rtsx_usb_ms *host, unsigned char data_dir, + u8 tpc, u8 cfg, struct scatterlist *sg) +{ + struct rtsx_ucr *ucr = host->ucr; + int err; + unsigned int length = sg->length; + u16 sec_cnt = (u16)(length / 512); + u8 trans_mode, dma_dir, flag; + unsigned int pipe; + struct memstick_dev *card = host->msh->card; + + dev_dbg(ms_dev(host), "%s: tpc = 0x%02x, data_dir = %s, length = %d\n", + __func__, tpc, (data_dir == READ) ? 
"READ" : "WRITE", + length); + + if (data_dir == READ) { + flag = MODE_CDIR; + dma_dir = DMA_DIR_FROM_CARD; + if (card->id.type != MEMSTICK_TYPE_PRO) + trans_mode = MS_TM_NORMAL_READ; + else + trans_mode = MS_TM_AUTO_READ; + pipe = usb_rcvbulkpipe(ucr->pusb_dev, EP_BULK_IN); + } else { + flag = MODE_CDOR; + dma_dir = DMA_DIR_TO_CARD; + if (card->id.type != MEMSTICK_TYPE_PRO) + trans_mode = MS_TM_NORMAL_WRITE; + else + trans_mode = MS_TM_AUTO_WRITE; + pipe = usb_sndbulkpipe(ucr->pusb_dev, EP_BULK_OUT); + } + + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_TPC, 0xFF, tpc); + if (card->id.type == MEMSTICK_TYPE_PRO) { + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_SECTOR_CNT_H, + 0xFF, (u8)(sec_cnt >> 8)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_SECTOR_CNT_L, + 0xFF, (u8)sec_cnt); + } + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_TRANS_CFG, 0xFF, cfg); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MC_DMA_TC3, + 0xFF, (u8)(length >> 24)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MC_DMA_TC2, + 0xFF, (u8)(length >> 16)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MC_DMA_TC1, + 0xFF, (u8)(length >> 8)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MC_DMA_TC0, 0xFF, + (u8)length); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MC_DMA_CTL, + 0x03 | DMA_PACK_SIZE_MASK, dma_dir | DMA_EN | DMA_512); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_DATA_SOURCE, + 0x01, RING_BUFFER); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_TRANSFER, + 0xFF, MS_TRANSFER_START | trans_mode); + rtsx_usb_add_cmd(ucr, CHECK_REG_CMD, MS_TRANSFER, + MS_TRANSFER_END, MS_TRANSFER_END); + + err = rtsx_usb_send_cmd(ucr, flag | STAGE_MS_STATUS, 100); + if (err) + return err; + + err = rtsx_usb_transfer_data(ucr, pipe, sg, length, + 1, NULL, 10000); + if (err) + goto err_out; + + err = rtsx_usb_get_rsp(ucr, 3, 15000); + if (err) + goto err_out; + + if (ucr->rsp_buf[0] & MS_TRANSFER_ERR || + ucr->rsp_buf[1] & (MS_CRC16_ERR | MS_RDY_TIMEOUT)) { + err = -EIO; + goto err_out; + } + return 0; +err_out: + ms_clear_error(host); + return err; +} + +static int ms_write_bytes(struct rtsx_usb_ms *host, u8 tpc, + u8 cfg, u8 cnt, u8 *data, u8 *int_reg) +{ + struct rtsx_ucr *ucr = host->ucr; + int err, i; + + dev_dbg(ms_dev(host), "%s: tpc = 0x%02x\n", __func__, tpc); + + rtsx_usb_init_cmd(ucr); + + for (i = 0; i < cnt; i++) + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + PPBUF_BASE2 + i, 0xFF, data[i]); + + if (cnt % 2) + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + PPBUF_BASE2 + i, 0xFF, 0xFF); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_TPC, 0xFF, tpc); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_BYTE_CNT, 0xFF, cnt); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_TRANS_CFG, 0xFF, cfg); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_DATA_SOURCE, + 0x01, PINGPONG_BUFFER); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_TRANSFER, + 0xFF, MS_TRANSFER_START | MS_TM_WRITE_BYTES); + rtsx_usb_add_cmd(ucr, CHECK_REG_CMD, MS_TRANSFER, + MS_TRANSFER_END, MS_TRANSFER_END); + rtsx_usb_add_cmd(ucr, READ_REG_CMD, MS_TRANS_CFG, 0, 0); + + err = rtsx_usb_send_cmd(ucr, MODE_CR, 100); + if (err) + return err; + + err = rtsx_usb_get_rsp(ucr, 2, 5000); + if (err || (ucr->rsp_buf[0] & MS_TRANSFER_ERR)) { + u8 val; + + rtsx_usb_ep0_read_register(ucr, MS_TRANS_CFG, &val); + dev_dbg(ms_dev(host), "MS_TRANS_CFG: 0x%02x\n", val); + + if (int_reg) + *int_reg = val & 0x0F; + + ms_print_debug_regs(host); + + ms_clear_error(host); + + if (!(tpc & 0x08)) { + if (val & MS_CRC16_ERR) + return -EIO; + } else { + if (!(val & 0x80)) { + if (val & (MS_INT_ERR | MS_INT_CMDNK)) + return -EIO; + } + } + + return 
-ETIMEDOUT; + } + + if (int_reg) + *int_reg = ucr->rsp_buf[1] & 0x0F; + + return 0; +} + +static int ms_read_bytes(struct rtsx_usb_ms *host, u8 tpc, + u8 cfg, u8 cnt, u8 *data, u8 *int_reg) +{ + struct rtsx_ucr *ucr = host->ucr; + int err, i; + u8 *ptr; + + dev_dbg(ms_dev(host), "%s: tpc = 0x%02x\n", __func__, tpc); + + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_TPC, 0xFF, tpc); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_BYTE_CNT, 0xFF, cnt); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_TRANS_CFG, 0xFF, cfg); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_DATA_SOURCE, + 0x01, PINGPONG_BUFFER); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MS_TRANSFER, + 0xFF, MS_TRANSFER_START | MS_TM_READ_BYTES); + rtsx_usb_add_cmd(ucr, CHECK_REG_CMD, MS_TRANSFER, + MS_TRANSFER_END, MS_TRANSFER_END); + for (i = 0; i < cnt - 1; i++) + rtsx_usb_add_cmd(ucr, READ_REG_CMD, PPBUF_BASE2 + i, 0, 0); + if (cnt % 2) + rtsx_usb_add_cmd(ucr, READ_REG_CMD, PPBUF_BASE2 + cnt, 0, 0); + else + rtsx_usb_add_cmd(ucr, READ_REG_CMD, + PPBUF_BASE2 + cnt - 1, 0, 0); + + rtsx_usb_add_cmd(ucr, READ_REG_CMD, MS_TRANS_CFG, 0, 0); + + err = rtsx_usb_send_cmd(ucr, MODE_CR, 100); + if (err) + return err; + + err = rtsx_usb_get_rsp(ucr, cnt + 2, 5000); + if (err || (ucr->rsp_buf[0] & MS_TRANSFER_ERR)) { + u8 val; + + rtsx_usb_ep0_read_register(ucr, MS_TRANS_CFG, &val); + dev_dbg(ms_dev(host), "MS_TRANS_CFG: 0x%02x\n", val); + + if (int_reg && (host->ifmode != MEMSTICK_SERIAL)) + *int_reg = val & 0x0F; + + ms_print_debug_regs(host); + + ms_clear_error(host); + + if (!(tpc & 0x08)) { + if (val & MS_CRC16_ERR) + return -EIO; + } else { + if (!(val & 0x80)) { + if (val & (MS_INT_ERR | MS_INT_CMDNK)) + return -EIO; + } + } + + return -ETIMEDOUT; + } + + ptr = ucr->rsp_buf + 1; + for (i = 0; i < cnt; i++) + data[i] = *ptr++; + + + if (int_reg && (host->ifmode != MEMSTICK_SERIAL)) + *int_reg = *ptr & 0x0F; + + return 0; +} + +static int rtsx_usb_ms_issue_cmd(struct rtsx_usb_ms *host) +{ + struct memstick_request *req = host->req; + int err = 0; + u8 cfg = 0, int_reg; + + dev_dbg(ms_dev(host), "%s\n", __func__); + + if (req->need_card_int) { + if (host->ifmode != MEMSTICK_SERIAL) + cfg = WAIT_INT; + } + + if (req->long_data) { + err = ms_transfer_data(host, req->data_dir, + req->tpc, cfg, &(req->sg)); + } else { + if (req->data_dir == READ) + err = ms_read_bytes(host, req->tpc, cfg, + req->data_len, req->data, &int_reg); + else + err = ms_write_bytes(host, req->tpc, cfg, + req->data_len, req->data, &int_reg); + } + if (err < 0) + return err; + + if (req->need_card_int) { + if (host->ifmode == MEMSTICK_SERIAL) { + err = ms_read_bytes(host, MS_TPC_GET_INT, + NO_WAIT_INT, 1, &req->int_reg, NULL); + if (err < 0) + return err; + } else { + + if (int_reg & MS_INT_CMDNK) + req->int_reg |= MEMSTICK_INT_CMDNAK; + if (int_reg & MS_INT_BREQ) + req->int_reg |= MEMSTICK_INT_BREQ; + if (int_reg & MS_INT_ERR) + req->int_reg |= MEMSTICK_INT_ERR; + if (int_reg & MS_INT_CED) + req->int_reg |= MEMSTICK_INT_CED; + } + dev_dbg(ms_dev(host), "int_reg: 0x%02x\n", req->int_reg); + } + + return 0; +} + +static void rtsx_usb_ms_handle_req(struct work_struct *work) +{ + struct rtsx_usb_ms *host = container_of(work, + struct rtsx_usb_ms, handle_req); + struct rtsx_ucr *ucr = host->ucr; + struct memstick_host *msh = host->msh; + int rc; + + if (!host->req) { + do { + rc = memstick_next_req(msh, &host->req); + dev_dbg(ms_dev(host), "next req %d\n", rc); + + if (!rc) { + mutex_lock(&ucr->dev_mutex); + + if (rtsx_usb_card_exclusive_check(ucr, + 
RTSX_USB_MS_CARD)) + host->req->error = -EIO; + else + host->req->error = + rtsx_usb_ms_issue_cmd(host); + + mutex_unlock(&ucr->dev_mutex); + + dev_dbg(ms_dev(host), "req result %d\n", + host->req->error); + } + } while (!rc); + } + +} + +static void rtsx_usb_ms_request(struct memstick_host *msh) +{ + struct rtsx_usb_ms *host = memstick_priv(msh); + + dev_dbg(ms_dev(host), "--> %s\n", __func__); + + if (!host->eject) + schedule_work(&host->handle_req); +} + +static int rtsx_usb_ms_set_param(struct memstick_host *msh, + enum memstick_param param, int value) +{ + struct rtsx_usb_ms *host = memstick_priv(msh); + struct rtsx_ucr *ucr = host->ucr; + unsigned int clock = 0; + u8 ssc_depth = 0; + int err; + + dev_dbg(ms_dev(host), "%s: param = %d, value = %d\n", + __func__, param, value); + + mutex_lock(&ucr->dev_mutex); + + err = rtsx_usb_card_exclusive_check(ucr, RTSX_USB_MS_CARD); + if (err) + goto out; + + switch (param) { + case MEMSTICK_POWER: + if (value == host->power_mode) + break; + + if (value == MEMSTICK_POWER_ON) { + pm_runtime_get_sync(ms_dev(host)); + err = ms_power_on(host); + } else if (value == MEMSTICK_POWER_OFF) { + err = ms_power_off(host); + if (host->msh->card) + pm_runtime_put_noidle(ms_dev(host)); + else + pm_runtime_put(ms_dev(host)); + } else + err = -EINVAL; + if (!err) + host->power_mode = value; + break; + + case MEMSTICK_INTERFACE: + if (value == MEMSTICK_SERIAL) { + clock = 19000000; + ssc_depth = SSC_DEPTH_512K; + err = rtsx_usb_write_register(ucr, MS_CFG, 0x5A, + MS_BUS_WIDTH_1 | PUSH_TIME_DEFAULT); + if (err < 0) + break; + } else if (value == MEMSTICK_PAR4) { + clock = 39000000; + ssc_depth = SSC_DEPTH_1M; + + err = rtsx_usb_write_register(ucr, MS_CFG, 0x5A, + MS_BUS_WIDTH_4 | PUSH_TIME_ODD | + MS_NO_CHECK_INT); + if (err < 0) + break; + } else { + err = -EINVAL; + break; + } + + err = rtsx_usb_switch_clock(ucr, clock, + ssc_depth, false, true, false); + if (err < 0) { + dev_dbg(ms_dev(host), "switch clock failed\n"); + break; + } + + host->ssc_depth = ssc_depth; + host->clock = clock; + host->ifmode = value; + break; + default: + err = -EINVAL; + break; + } +out: + mutex_unlock(&ucr->dev_mutex); + + /* power-on delay */ + if (param == MEMSTICK_POWER && value == MEMSTICK_POWER_ON) + usleep_range(10000, 12000); + + dev_dbg(ms_dev(host), "%s: return = %d\n", __func__, err); + return err; +} + +#ifdef CONFIG_PM_SLEEP +static int rtsx_usb_ms_suspend(struct device *dev) +{ + struct rtsx_usb_ms *host = dev_get_drvdata(dev); + struct memstick_host *msh = host->msh; + + dev_dbg(ms_dev(host), "--> %s\n", __func__); + + memstick_suspend_host(msh); + return 0; +} + +static int rtsx_usb_ms_resume(struct device *dev) +{ + struct rtsx_usb_ms *host = dev_get_drvdata(dev); + struct memstick_host *msh = host->msh; + + dev_dbg(ms_dev(host), "--> %s\n", __func__); + + memstick_resume_host(msh); + return 0; +} +#endif /* CONFIG_PM_SLEEP */ + +/* + * Thread function of ms card slot detection. The thread starts right after + * successful host addition. It stops while the driver removal function sets + * host->eject true. 
+ */ +static int rtsx_usb_detect_ms_card(void *__host) +{ + struct rtsx_usb_ms *host = (struct rtsx_usb_ms *)__host; + struct rtsx_ucr *ucr = host->ucr; + u8 val = 0; + int err; + + for (;;) { + mutex_lock(&ucr->dev_mutex); + + /* Check pending MS card changes */ + err = rtsx_usb_read_register(ucr, CARD_INT_PEND, &val); + if (err) { + mutex_unlock(&ucr->dev_mutex); + goto poll_again; + } + + /* Clear the pending */ + rtsx_usb_write_register(ucr, CARD_INT_PEND, + XD_INT | MS_INT | SD_INT, + XD_INT | MS_INT | SD_INT); + + mutex_unlock(&ucr->dev_mutex); + + if (val & MS_INT) { + dev_dbg(ms_dev(host), "MS slot change detected\n"); + memstick_detect_change(host->msh); + } + +poll_again: + if (host->eject) + break; + + msleep(1000); + } + + complete(&host->detect_ms_exit); + return 0; +} + +static int rtsx_usb_ms_drv_probe(struct platform_device *pdev) +{ + struct memstick_host *msh; + struct rtsx_usb_ms *host; + struct rtsx_ucr *ucr; + int err; + + ucr = usb_get_intfdata(to_usb_interface(pdev->dev.parent)); + if (!ucr) + return -ENXIO; + + dev_dbg(&(pdev->dev), + "Realtek USB Memstick controller found\n"); + + msh = memstick_alloc_host(sizeof(*host), &pdev->dev); + if (!msh) + return -ENOMEM; + + host = memstick_priv(msh); + host->ucr = ucr; + host->msh = msh; + host->pdev = pdev; + host->power_mode = MEMSTICK_POWER_OFF; + platform_set_drvdata(pdev, host); + + mutex_init(&host->host_mutex); + INIT_WORK(&host->handle_req, rtsx_usb_ms_handle_req); + + init_completion(&host->detect_ms_exit); + host->detect_ms = kthread_create(rtsx_usb_detect_ms_card, host, + "rtsx_usb_ms_%d", pdev->id); + if (IS_ERR(host->detect_ms)) { + dev_dbg(&(pdev->dev), + "Unable to create polling thread.\n"); + err = PTR_ERR(host->detect_ms); + goto err_out; + } + + msh->request = rtsx_usb_ms_request; + msh->set_param = rtsx_usb_ms_set_param; + msh->caps = MEMSTICK_CAP_PAR4; + + pm_runtime_enable(&pdev->dev); + err = memstick_add_host(msh); + if (err) + goto err_out; + + wake_up_process(host->detect_ms); + return 0; +err_out: + memstick_free_host(msh); + return err; +} + +static int rtsx_usb_ms_drv_remove(struct platform_device *pdev) +{ + struct rtsx_usb_ms *host = platform_get_drvdata(pdev); + struct memstick_host *msh; + int err; + + msh = host->msh; + host->eject = true; + cancel_work_sync(&host->handle_req); + + mutex_lock(&host->host_mutex); + if (host->req) { + dev_dbg(&(pdev->dev), + "%s: Controller removed during transfer\n", + dev_name(&msh->dev)); + host->req->error = -ENOMEDIUM; + do { + err = memstick_next_req(msh, &host->req); + if (!err) + host->req->error = -ENOMEDIUM; + } while (!err); + } + mutex_unlock(&host->host_mutex); + + wait_for_completion(&host->detect_ms_exit); + memstick_remove_host(msh); + memstick_free_host(msh); + + /* Balance possible unbalanced usage count + * e.g. 
unconditional module removal + */ + if (pm_runtime_active(ms_dev(host))) + pm_runtime_put(ms_dev(host)); + + pm_runtime_disable(&pdev->dev); + platform_set_drvdata(pdev, NULL); + + dev_dbg(&(pdev->dev), + ": Realtek USB Memstick controller has been removed\n"); + + return 0; +} + +static SIMPLE_DEV_PM_OPS(rtsx_usb_ms_pm_ops, + rtsx_usb_ms_suspend, rtsx_usb_ms_resume); + +static struct platform_device_id rtsx_usb_ms_ids[] = { + { + .name = "rtsx_usb_ms", + }, { + /* sentinel */ + } +}; +MODULE_DEVICE_TABLE(platform, rtsx_usb_ms_ids); + +static struct platform_driver rtsx_usb_ms_driver = { + .probe = rtsx_usb_ms_drv_probe, + .remove = rtsx_usb_ms_drv_remove, + .id_table = rtsx_usb_ms_ids, + .driver = { + .owner = THIS_MODULE, + .name = "rtsx_usb_ms", + .pm = &rtsx_usb_ms_pm_ops, + }, +}; +module_platform_driver(rtsx_usb_ms_driver); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Roger Tseng <rogerable@realtek.com>"); +MODULE_DESCRIPTION("Realtek USB Memstick Card Host Driver"); diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig index 6deb8a11c12f..ee8204cc31e9 100644 --- a/drivers/mfd/Kconfig +++ b/drivers/mfd/Kconfig @@ -67,6 +67,18 @@ config MFD_BCM590XX help Support for the BCM590xx PMUs from Broadcom +config MFD_AXP20X + bool "X-Powers AXP20X" + select MFD_CORE + select REGMAP_I2C + select REGMAP_IRQ + depends on I2C=y + help + If you say Y here you get support for the X-Powers AXP202 and AXP209. + This driver include only the core APIs. You have to select individual + components like regulators or the PEK (Power Enable Key) under the + corresponding menus. + config MFD_CROS_EC tristate "ChromeOS Embedded Controller" select MFD_CORE @@ -250,6 +262,16 @@ config MFD_INTEL_MSIC Passage) chip. This chip embeds audio, battery, GPIO, etc. devices used in Intel Medfield platforms. +config MFD_IPAQ_MICRO + bool "Atmel Micro ASIC (iPAQ h3100/h3600/h3700) Support" + depends on SA1100_H3100 || SA1100_H3600 + select MFD_CORE + help + Select this to get support for the Microcontroller found in + the Compaq iPAQ handheld computers. This is an Atmel + AT90LS8535 microcontroller flashed with a special iPAQ + firmware using the custom protocol implemented in this driver. + config MFD_JANZ_CMODIO tristate "Janz CMOD-IO PCI MODULbus Carrier Board" select MFD_CORE @@ -675,6 +697,7 @@ config MFD_DB8500_PRCMU config MFD_STMPE bool "STMicroelectronics STMPE" depends on (I2C=y || SPI_MASTER=y) + depends on OF select MFD_CORE help Support for the STMPE family of I/O Expanders from @@ -719,6 +742,14 @@ config MFD_STA2X11 select MFD_CORE select REGMAP_MMIO +config MFD_SUN6I_PRCM + bool "Allwinner A31 PRCM controller" + depends on ARCH_SUNXI + select MFD_CORE + help + Support for the PRCM (Power/Reset/Clock Management) unit available + in A31 SoC. 
+ config MFD_SYSCON bool "System Controller Register R/W Based on Regmap" select REGMAP_MMIO diff --git a/drivers/mfd/Makefile b/drivers/mfd/Makefile index cec3487b539e..8afedba535c7 100644 --- a/drivers/mfd/Makefile +++ b/drivers/mfd/Makefile @@ -29,6 +29,7 @@ obj-$(CONFIG_MFD_STA2X11) += sta2x11-mfd.o obj-$(CONFIG_MFD_STMPE) += stmpe.o obj-$(CONFIG_STMPE_I2C) += stmpe-i2c.o obj-$(CONFIG_STMPE_SPI) += stmpe-spi.o +obj-$(CONFIG_MFD_SUN6I_PRCM) += sun6i-prcm.o obj-$(CONFIG_MFD_TC3589X) += tc3589x.o obj-$(CONFIG_MFD_T7L66XB) += t7l66xb.o tmio_core.o obj-$(CONFIG_MFD_TC6387XB) += tc6387xb.o tmio_core.o @@ -102,6 +103,7 @@ obj-$(CONFIG_PMIC_DA9052) += da9052-irq.o obj-$(CONFIG_PMIC_DA9052) += da9052-core.o obj-$(CONFIG_MFD_DA9052_SPI) += da9052-spi.o obj-$(CONFIG_MFD_DA9052_I2C) += da9052-i2c.o +obj-$(CONFIG_MFD_AXP20X) += axp20x.o obj-$(CONFIG_MFD_LP3943) += lp3943.o obj-$(CONFIG_MFD_LP8788) += lp8788.o lp8788-irq.o @@ -166,3 +168,4 @@ obj-$(CONFIG_MFD_RETU) += retu-mfd.o obj-$(CONFIG_MFD_AS3711) += as3711.o obj-$(CONFIG_MFD_AS3722) += as3722.o obj-$(CONFIG_MFD_STW481X) += stw481x.o +obj-$(CONFIG_MFD_IPAQ_MICRO) += ipaq-micro.o diff --git a/drivers/mfd/abx500-core.c b/drivers/mfd/abx500-core.c index f3a15aa54d7b..fe418995108c 100644 --- a/drivers/mfd/abx500-core.c +++ b/drivers/mfd/abx500-core.c @@ -151,22 +151,6 @@ int abx500_startup_irq_enabled(struct device *dev, unsigned int irq) } EXPORT_SYMBOL(abx500_startup_irq_enabled); -void abx500_dump_all_banks(void) -{ - struct abx500_ops *ops; - struct device dummy_child = {NULL}; - struct abx500_device_entry *dev_entry; - - list_for_each_entry(dev_entry, &abx500_list, list) { - dummy_child.parent = dev_entry->dev; - ops = &dev_entry->ops; - - if ((ops != NULL) && (ops->dump_all_banks != NULL)) - ops->dump_all_banks(&dummy_child); - } -} -EXPORT_SYMBOL(abx500_dump_all_banks); - MODULE_AUTHOR("Mattias Wallin <mattias.wallin@stericsson.com>"); MODULE_DESCRIPTION("ABX500 core driver"); MODULE_LICENSE("GPL"); diff --git a/drivers/mfd/arizona-core.c b/drivers/mfd/arizona-core.c index 07e6e27be23c..cfc191abae4a 100644 --- a/drivers/mfd/arizona-core.c +++ b/drivers/mfd/arizona-core.c @@ -583,6 +583,7 @@ static const char *wm5102_supplies[] = { "CPVDD", "SPKVDDL", "SPKVDDR", + "MICVDD", }; static const struct mfd_cell wm5102_devs[] = { diff --git a/drivers/mfd/arizona-irq.c b/drivers/mfd/arizona-irq.c index 88758ab9402b..17102f589100 100644 --- a/drivers/mfd/arizona-irq.c +++ b/drivers/mfd/arizona-irq.c @@ -285,7 +285,7 @@ int arizona_irq_init(struct arizona *arizona) IRQF_ONESHOT, -1, irq, &arizona->irq_chip); if (ret != 0) { - dev_err(arizona->dev, "Failed to add AOD IRQs: %d\n", ret); + dev_err(arizona->dev, "Failed to add main IRQs: %d\n", ret); goto err_aod; } diff --git a/drivers/mfd/as3711.c b/drivers/mfd/as3711.c index ec684fcedb42..d9706ede8d39 100644 --- a/drivers/mfd/as3711.c +++ b/drivers/mfd/as3711.c @@ -114,7 +114,7 @@ static const struct regmap_config as3711_regmap_config = { }; #ifdef CONFIG_OF -static struct of_device_id as3711_of_match[] = { +static const struct of_device_id as3711_of_match[] = { {.compatible = "ams,as3711",}, {} }; diff --git a/drivers/mfd/axp20x.c b/drivers/mfd/axp20x.c new file mode 100644 index 000000000000..dee653989e3a --- /dev/null +++ b/drivers/mfd/axp20x.c @@ -0,0 +1,258 @@ +/* + * axp20x.c - MFD core driver for the X-Powers AXP202 and AXP209 + * + * AXP20x comprises an adaptive USB-Compatible PWM charger, 2 BUCK DC-DC + * converters, 5 LDOs, multiple 12-bit ADCs of voltage, current and temperature + * as well 
as 4 configurable GPIOs. + * + * Author: Carlo Caione <carlo@caione.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <linux/err.h> +#include <linux/i2c.h> +#include <linux/interrupt.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/pm_runtime.h> +#include <linux/regmap.h> +#include <linux/slab.h> +#include <linux/regulator/consumer.h> +#include <linux/mfd/axp20x.h> +#include <linux/mfd/core.h> +#include <linux/of_device.h> +#include <linux/of_irq.h> + +#define AXP20X_OFF 0x80 + +static const struct regmap_range axp20x_writeable_ranges[] = { + regmap_reg_range(AXP20X_DATACACHE(0), AXP20X_IRQ5_STATE), + regmap_reg_range(AXP20X_DCDC_MODE, AXP20X_FG_RES), +}; + +static const struct regmap_range axp20x_volatile_ranges[] = { + regmap_reg_range(AXP20X_IRQ1_EN, AXP20X_IRQ5_STATE), +}; + +static const struct regmap_access_table axp20x_writeable_table = { + .yes_ranges = axp20x_writeable_ranges, + .n_yes_ranges = ARRAY_SIZE(axp20x_writeable_ranges), +}; + +static const struct regmap_access_table axp20x_volatile_table = { + .yes_ranges = axp20x_volatile_ranges, + .n_yes_ranges = ARRAY_SIZE(axp20x_volatile_ranges), +}; + +static struct resource axp20x_pek_resources[] = { + { + .name = "PEK_DBR", + .start = AXP20X_IRQ_PEK_RIS_EDGE, + .end = AXP20X_IRQ_PEK_RIS_EDGE, + .flags = IORESOURCE_IRQ, + }, { + .name = "PEK_DBF", + .start = AXP20X_IRQ_PEK_FAL_EDGE, + .end = AXP20X_IRQ_PEK_FAL_EDGE, + .flags = IORESOURCE_IRQ, + }, +}; + +static const struct regmap_config axp20x_regmap_config = { + .reg_bits = 8, + .val_bits = 8, + .wr_table = &axp20x_writeable_table, + .volatile_table = &axp20x_volatile_table, + .max_register = AXP20X_FG_RES, + .cache_type = REGCACHE_RBTREE, +}; + +#define AXP20X_IRQ(_irq, _off, _mask) \ + [AXP20X_IRQ_##_irq] = { .reg_offset = (_off), .mask = BIT(_mask) } + +static const struct regmap_irq axp20x_regmap_irqs[] = { + AXP20X_IRQ(ACIN_OVER_V, 0, 7), + AXP20X_IRQ(ACIN_PLUGIN, 0, 6), + AXP20X_IRQ(ACIN_REMOVAL, 0, 5), + AXP20X_IRQ(VBUS_OVER_V, 0, 4), + AXP20X_IRQ(VBUS_PLUGIN, 0, 3), + AXP20X_IRQ(VBUS_REMOVAL, 0, 2), + AXP20X_IRQ(VBUS_V_LOW, 0, 1), + AXP20X_IRQ(BATT_PLUGIN, 1, 7), + AXP20X_IRQ(BATT_REMOVAL, 1, 6), + AXP20X_IRQ(BATT_ENT_ACT_MODE, 1, 5), + AXP20X_IRQ(BATT_EXIT_ACT_MODE, 1, 4), + AXP20X_IRQ(CHARG, 1, 3), + AXP20X_IRQ(CHARG_DONE, 1, 2), + AXP20X_IRQ(BATT_TEMP_HIGH, 1, 1), + AXP20X_IRQ(BATT_TEMP_LOW, 1, 0), + AXP20X_IRQ(DIE_TEMP_HIGH, 2, 7), + AXP20X_IRQ(CHARG_I_LOW, 2, 6), + AXP20X_IRQ(DCDC1_V_LONG, 2, 5), + AXP20X_IRQ(DCDC2_V_LONG, 2, 4), + AXP20X_IRQ(DCDC3_V_LONG, 2, 3), + AXP20X_IRQ(PEK_SHORT, 2, 1), + AXP20X_IRQ(PEK_LONG, 2, 0), + AXP20X_IRQ(N_OE_PWR_ON, 3, 7), + AXP20X_IRQ(N_OE_PWR_OFF, 3, 6), + AXP20X_IRQ(VBUS_VALID, 3, 5), + AXP20X_IRQ(VBUS_NOT_VALID, 3, 4), + AXP20X_IRQ(VBUS_SESS_VALID, 3, 3), + AXP20X_IRQ(VBUS_SESS_END, 3, 2), + AXP20X_IRQ(LOW_PWR_LVL1, 3, 1), + AXP20X_IRQ(LOW_PWR_LVL2, 3, 0), + AXP20X_IRQ(TIMER, 4, 7), + AXP20X_IRQ(PEK_RIS_EDGE, 4, 6), + AXP20X_IRQ(PEK_FAL_EDGE, 4, 5), + AXP20X_IRQ(GPIO3_INPUT, 4, 3), + AXP20X_IRQ(GPIO2_INPUT, 4, 2), + AXP20X_IRQ(GPIO1_INPUT, 4, 1), + AXP20X_IRQ(GPIO0_INPUT, 4, 0), +}; + +static const struct of_device_id axp20x_of_match[] = { + { .compatible = "x-powers,axp202", .data = (void *) AXP202_ID }, + { .compatible = "x-powers,axp209", .data = (void *) AXP209_ID }, + { }, +}; +MODULE_DEVICE_TABLE(of, axp20x_of_match); + +/* + * This is 
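/*
 * Worked example, not from the patch: the AXP20X_IRQ() helper above packs
 * an IRQ bank offset and bit number into a regmap_irq entry indexed by the
 * AXP20X_IRQ_* number. For instance,
 *
 *	AXP20X_IRQ(ACIN_OVER_V, 0, 7)
 *
 * expands to
 *
 *	[AXP20X_IRQ_ACIN_OVER_V] = { .reg_offset = 0, .mask = BIT(7) }
 *
 * i.e. bit 7 of the first IRQ bank, which regmap-irq then resolves against
 * the AXP20X_IRQ1_STATE/AXP20X_IRQ1_EN bases given in the irq_chip
 * description further down.
 */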
useless for OF-enabled devices, but it is needed by I2C subsystem + */ +static const struct i2c_device_id axp20x_i2c_id[] = { + { }, +}; +MODULE_DEVICE_TABLE(i2c, axp20x_i2c_id); + +static const struct regmap_irq_chip axp20x_regmap_irq_chip = { + .name = "axp20x_irq_chip", + .status_base = AXP20X_IRQ1_STATE, + .ack_base = AXP20X_IRQ1_STATE, + .mask_base = AXP20X_IRQ1_EN, + .num_regs = 5, + .irqs = axp20x_regmap_irqs, + .num_irqs = ARRAY_SIZE(axp20x_regmap_irqs), + .mask_invert = true, + .init_ack_masked = true, +}; + +static const char * const axp20x_supplies[] = { + "acin", + "vin2", + "vin3", + "ldo24in", + "ldo3in", + "ldo5in", +}; + +static struct mfd_cell axp20x_cells[] = { + { + .name = "axp20x-pek", + .num_resources = ARRAY_SIZE(axp20x_pek_resources), + .resources = axp20x_pek_resources, + }, { + .name = "axp20x-regulator", + .parent_supplies = axp20x_supplies, + .num_parent_supplies = ARRAY_SIZE(axp20x_supplies), + }, +}; + +static struct axp20x_dev *axp20x_pm_power_off; +static void axp20x_power_off(void) +{ + regmap_write(axp20x_pm_power_off->regmap, AXP20X_OFF_CTRL, + AXP20X_OFF); +} + +static int axp20x_i2c_probe(struct i2c_client *i2c, + const struct i2c_device_id *id) +{ + struct axp20x_dev *axp20x; + const struct of_device_id *of_id; + int ret; + + axp20x = devm_kzalloc(&i2c->dev, sizeof(*axp20x), GFP_KERNEL); + if (!axp20x) + return -ENOMEM; + + of_id = of_match_device(axp20x_of_match, &i2c->dev); + if (!of_id) { + dev_err(&i2c->dev, "Unable to setup AXP20X data\n"); + return -ENODEV; + } + axp20x->variant = (long) of_id->data; + + axp20x->i2c_client = i2c; + axp20x->dev = &i2c->dev; + dev_set_drvdata(axp20x->dev, axp20x); + + axp20x->regmap = devm_regmap_init_i2c(i2c, &axp20x_regmap_config); + if (IS_ERR(axp20x->regmap)) { + ret = PTR_ERR(axp20x->regmap); + dev_err(&i2c->dev, "regmap init failed: %d\n", ret); + return ret; + } + + ret = regmap_add_irq_chip(axp20x->regmap, i2c->irq, + IRQF_ONESHOT | IRQF_SHARED, -1, + &axp20x_regmap_irq_chip, + &axp20x->regmap_irqc); + if (ret) { + dev_err(&i2c->dev, "failed to add irq chip: %d\n", ret); + return ret; + } + + ret = mfd_add_devices(axp20x->dev, -1, axp20x_cells, + ARRAY_SIZE(axp20x_cells), NULL, 0, NULL); + + if (ret) { + dev_err(&i2c->dev, "failed to add MFD devices: %d\n", ret); + regmap_del_irq_chip(i2c->irq, axp20x->regmap_irqc); + return ret; + } + + if (!pm_power_off) { + axp20x_pm_power_off = axp20x; + pm_power_off = axp20x_power_off; + } + + dev_info(&i2c->dev, "AXP20X driver loaded\n"); + + return 0; +} + +static int axp20x_i2c_remove(struct i2c_client *i2c) +{ + struct axp20x_dev *axp20x = i2c_get_clientdata(i2c); + + if (axp20x == axp20x_pm_power_off) { + axp20x_pm_power_off = NULL; + pm_power_off = NULL; + } + + mfd_remove_devices(axp20x->dev); + regmap_del_irq_chip(axp20x->i2c_client->irq, axp20x->regmap_irqc); + + return 0; +} + +static struct i2c_driver axp20x_i2c_driver = { + .driver = { + .name = "axp20x", + .owner = THIS_MODULE, + .of_match_table = of_match_ptr(axp20x_of_match), + }, + .probe = axp20x_i2c_probe, + .remove = axp20x_i2c_remove, + .id_table = axp20x_i2c_id, +}; + +module_i2c_driver(axp20x_i2c_driver); + +MODULE_DESCRIPTION("PMIC MFD core driver for AXP20X"); +MODULE_AUTHOR("Carlo Caione <carlo@caione.org>"); +MODULE_LICENSE("GPL"); diff --git a/drivers/mfd/bcm590xx.c b/drivers/mfd/bcm590xx.c index 43cba1a1973c..e334de000e8c 100644 --- a/drivers/mfd/bcm590xx.c +++ b/drivers/mfd/bcm590xx.c @@ -96,6 +96,12 @@ err: return ret; } +static int bcm590xx_i2c_remove(struct i2c_client *i2c) +{ + 
mfd_remove_devices(&i2c->dev); + return 0; +} + static const struct of_device_id bcm590xx_of_match[] = { { .compatible = "brcm,bcm59056" }, { } @@ -115,6 +121,7 @@ static struct i2c_driver bcm590xx_i2c_driver = { .of_match_table = of_match_ptr(bcm590xx_of_match), }, .probe = bcm590xx_i2c_probe, + .remove = bcm590xx_i2c_remove, .id_table = bcm590xx_i2c_id, }; module_i2c_driver(bcm590xx_i2c_driver); @@ -122,4 +129,4 @@ module_i2c_driver(bcm590xx_i2c_driver); MODULE_AUTHOR("Matt Porter <mporter@linaro.org>"); MODULE_DESCRIPTION("BCM590xx multi-function driver"); MODULE_LICENSE("GPL v2"); -MODULE_ALIAS("platform:bcm590xx"); +MODULE_ALIAS("i2c:bcm590xx"); diff --git a/drivers/mfd/cros_ec.c b/drivers/mfd/cros_ec.c index 783fe2e73e1e..38fe9bf0d169 100644 --- a/drivers/mfd/cros_ec.c +++ b/drivers/mfd/cros_ec.c @@ -30,7 +30,7 @@ int cros_ec_prepare_tx(struct cros_ec_device *ec_dev, uint8_t *out; int csum, i; - BUG_ON(msg->out_len > EC_HOST_PARAM_SIZE); + BUG_ON(msg->out_len > EC_PROTO2_MAX_PARAM_SIZE); out = ec_dev->dout; out[0] = EC_CMD_VERSION0 + msg->version; out[1] = msg->cmd; @@ -90,6 +90,11 @@ static const struct mfd_cell cros_devs[] = { .id = 1, .of_compatible = "google,cros-ec-keyb", }, + { + .name = "cros-ec-i2c-tunnel", + .id = 2, + .of_compatible = "google,cros-ec-i2c-tunnel", + }, }; int cros_ec_register(struct cros_ec_device *ec_dev) @@ -184,3 +189,6 @@ int cros_ec_resume(struct cros_ec_device *ec_dev) EXPORT_SYMBOL(cros_ec_resume); #endif + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("ChromeOS EC core driver"); diff --git a/drivers/mfd/cros_ec_spi.c b/drivers/mfd/cros_ec_spi.c index 84af8d7a4295..0b8d32829166 100644 --- a/drivers/mfd/cros_ec_spi.c +++ b/drivers/mfd/cros_ec_spi.c @@ -39,14 +39,22 @@ #define EC_MSG_PREAMBLE_COUNT 32 /* - * We must get a response from the EC in 5ms. This is a very long - * time, but the flash write command can take 2-3ms. The EC command - * processing is currently not very fast (about 500us). We could - * look at speeding this up and making the flash write command a - * 'slow' command, requiring a GET_STATUS wait loop, like flash - * erase. - */ -#define EC_MSG_DEADLINE_MS 5 + * Allow for a long time for the EC to respond. We support i2c + * tunneling and support fairly long messages for the tunnel (249 + * bytes long at the moment). If we're talking to a 100 kHz device + * on the other end and need to transfer ~256 bytes, then we need: + * 10 us/bit * ~10 bits/byte * ~256 bytes = ~25ms + * + * We'll wait 4 times that to handle clock stretching and other + * paranoia. + * + * It's pretty unlikely that we'll really see a 249 byte tunnel in + * anything other than testing. If this was more common we might + * consider having slow commands like this require a GET_STATUS + * wait loop. The 'flash write' command would be another candidate + * for this, clocking in at 2-3ms. + */ +#define EC_MSG_DEADLINE_MS 100 /* * Time between raising the SPI chip select (for the end of a @@ -65,11 +73,13 @@ * if no record * @end_of_msg_delay: used to set the delay_usecs on the spi_transfer that * is sent when we want to turn off CS at the end of a transaction. 
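/*
 * Worked arithmetic, not from the patch: the new 100 ms value of
 * EC_MSG_DEADLINE_MS follows from the worst-case tunnel estimate in the
 * comment above.
 *
 *	10 us/bit * ~10 bits/byte        = ~100 us per byte on a 100 kHz bus
 *	~100 us/byte * ~256 bytes        = ~25.6 ms for one tunnelled message
 *	~25.6 ms * 4 (stretching margin) = ~100 ms
 *
 * which is the deadline now used in place of the old 5 ms.
 */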
+ * @lock: mutex to ensure only one user of cros_ec_command_spi_xfer at a time */ struct cros_ec_spi { struct spi_device *spi; s64 last_transfer_ns; unsigned int end_of_msg_delay; + struct mutex lock; }; static void debug_packet(struct device *dev, const char *name, u8 *ptr, @@ -111,7 +121,9 @@ static int cros_ec_spi_receive_response(struct cros_ec_device *ec_dev, /* Receive data until we see the header byte */ deadline = jiffies + msecs_to_jiffies(EC_MSG_DEADLINE_MS); - do { + while (true) { + unsigned long start_jiffies = jiffies; + memset(&trans, 0, sizeof(trans)); trans.cs_change = 1; trans.rx_buf = ptr = ec_dev->din; @@ -132,12 +144,19 @@ static int cros_ec_spi_receive_response(struct cros_ec_device *ec_dev, break; } } + if (ptr != end) + break; - if (time_after(jiffies, deadline)) { + /* + * Use the time at the start of the loop as a timeout. This + * gives us one last shot at getting the transfer and is useful + * in case we got context switched out for a while. + */ + if (time_after(start_jiffies, deadline)) { dev_warn(ec_dev->dev, "EC failed to respond in time\n"); return -ETIMEDOUT; } - } while (ptr == end); + } /* * ptr now points to the header byte. Copy any valid data to the @@ -208,6 +227,13 @@ static int cros_ec_command_spi_xfer(struct cros_ec_device *ec_dev, int ret = 0, final_ret; struct timespec ts; + /* + * We have the shared ec_dev buffer plus we do lots of separate spi_sync + * calls, so we need to make sure only one person is using this at a + * time. + */ + mutex_lock(&ec_spi->lock); + len = cros_ec_prepare_tx(ec_dev, ec_msg); dev_dbg(ec_dev->dev, "prepared, len=%d\n", len); @@ -219,7 +245,7 @@ static int cros_ec_command_spi_xfer(struct cros_ec_device *ec_dev, ktime_get_ts(&ts); delay = timespec_to_ns(&ts) - ec_spi->last_transfer_ns; if (delay < EC_SPI_RECOVERY_TIME_NS) - ndelay(delay); + ndelay(EC_SPI_RECOVERY_TIME_NS - delay); } /* Transmit phase - send our message */ @@ -260,7 +286,7 @@ static int cros_ec_command_spi_xfer(struct cros_ec_device *ec_dev, ret = final_ret; if (ret < 0) { dev_err(ec_dev->dev, "spi transfer failed: %d\n", ret); - return ret; + goto exit; } /* check response error code */ @@ -269,14 +295,16 @@ static int cros_ec_command_spi_xfer(struct cros_ec_device *ec_dev, dev_warn(ec_dev->dev, "command 0x%02x returned an error %d\n", ec_msg->cmd, ptr[0]); debug_packet(ec_dev->dev, "in_err", ptr, len); - return -EINVAL; + ret = -EINVAL; + goto exit; } len = ptr[1]; sum = ptr[0] + ptr[1]; if (len > ec_msg->in_len) { dev_err(ec_dev->dev, "packet too long (%d bytes, expected %d)", len, ec_msg->in_len); - return -ENOSPC; + ret = -ENOSPC; + goto exit; } /* copy response packet payload and compute checksum */ @@ -293,10 +321,14 @@ static int cros_ec_command_spi_xfer(struct cros_ec_device *ec_dev, dev_err(ec_dev->dev, "bad packet checksum, expected %02x, got %02x\n", sum, ptr[len + 2]); - return -EBADMSG; + ret = -EBADMSG; + goto exit; } - return 0; + ret = 0; +exit: + mutex_unlock(&ec_spi->lock); + return ret; } static void cros_ec_spi_dt_probe(struct cros_ec_spi *ec_spi, struct device *dev) @@ -327,6 +359,7 @@ static int cros_ec_spi_probe(struct spi_device *spi) if (ec_spi == NULL) return -ENOMEM; ec_spi->spi = spi; + mutex_init(&ec_spi->lock); ec_dev = devm_kzalloc(dev, sizeof(*ec_dev), GFP_KERNEL); if (!ec_dev) return -ENOMEM; diff --git a/drivers/mfd/db8500-prcmu.c b/drivers/mfd/db8500-prcmu.c index b11fdd63eecd..193cf168ba84 100644 --- a/drivers/mfd/db8500-prcmu.c +++ b/drivers/mfd/db8500-prcmu.c @@ -2300,9 +2300,6 @@ int prcmu_ac_wake_req(void) if 
(!wait_for_completion_timeout(&mb0_transfer.ac_wake_work, msecs_to_jiffies(5000))) { -#if defined(CONFIG_DBX500_PRCMU_DEBUG) - db8500_prcmu_debug_dump(__func__, true, true); -#endif pr_crit("prcmu: %s timed out (5 s) waiting for a reply.\n", __func__); ret = -EFAULT; @@ -3112,7 +3109,7 @@ static int db8500_prcmu_register_ab8500(struct device *parent, { struct device_node *np; struct resource ab8500_resource; - struct mfd_cell ab8500_cell = { + const struct mfd_cell ab8500_cell = { .name = "ab8500-core", .of_compatible = "stericsson,ab8500", .id = AB8500_VERSION_AB8500, diff --git a/drivers/mfd/ipaq-micro.c b/drivers/mfd/ipaq-micro.c new file mode 100644 index 000000000000..7e50fe0118e3 --- /dev/null +++ b/drivers/mfd/ipaq-micro.c @@ -0,0 +1,482 @@ +/* + * Compaq iPAQ h3xxx Atmel microcontroller companion support + * + * This is an Atmel AT90LS8535 with a special flashed-in firmware that + * implements the special protocol used by this driver. + * + * based on previous kernel 2.4 version by Andrew Christian + * Author : Alessandro Gardich <gremlin@gremlin.it> + * Author : Dmitry Artamonow <mad_soft@inbox.ru> + * Author : Linus Walleij <linus.walleij@linaro.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <linux/module.h> +#include <linux/init.h> +#include <linux/interrupt.h> +#include <linux/pm.h> +#include <linux/delay.h> +#include <linux/device.h> +#include <linux/platform_device.h> +#include <linux/io.h> +#include <linux/mfd/core.h> +#include <linux/mfd/ipaq-micro.h> +#include <linux/string.h> +#include <linux/random.h> +#include <linux/slab.h> +#include <linux/list.h> + +#include <mach/hardware.h> + +static void ipaq_micro_trigger_tx(struct ipaq_micro *micro) +{ + struct ipaq_micro_txdev *tx = µ->tx; + struct ipaq_micro_msg *msg = micro->msg; + int i, bp; + u8 checksum; + u32 val; + + bp = 0; + tx->buf[bp++] = CHAR_SOF; + + checksum = ((msg->id & 0x0f) << 4) | (msg->tx_len & 0x0f); + tx->buf[bp++] = checksum; + + for (i = 0; i < msg->tx_len; i++) { + tx->buf[bp++] = msg->tx_data[i]; + checksum += msg->tx_data[i]; + } + + tx->buf[bp++] = checksum; + tx->len = bp; + tx->index = 0; + print_hex_dump(KERN_DEBUG, "data: ", DUMP_PREFIX_OFFSET, 16, 1, + tx->buf, tx->len, true); + + /* Enable interrupt */ + val = readl(micro->base + UTCR3); + val |= UTCR3_TIE; + writel(val, micro->base + UTCR3); +} + +int ipaq_micro_tx_msg(struct ipaq_micro *micro, struct ipaq_micro_msg *msg) +{ + unsigned long flags; + + dev_dbg(micro->dev, "TX msg: %02x, %d bytes\n", msg->id, msg->tx_len); + + spin_lock_irqsave(µ->lock, flags); + if (micro->msg) { + list_add_tail(&msg->node, µ->queue); + spin_unlock_irqrestore(µ->lock, flags); + return 0; + } + micro->msg = msg; + ipaq_micro_trigger_tx(micro); + spin_unlock_irqrestore(µ->lock, flags); + return 0; +} +EXPORT_SYMBOL(ipaq_micro_tx_msg); + +static void micro_rx_msg(struct ipaq_micro *micro, u8 id, int len, u8 *data) +{ + int i; + + dev_dbg(micro->dev, "RX msg: %02x, %d bytes\n", id, len); + + spin_lock(µ->lock); + switch (id) { + case MSG_VERSION: + case MSG_EEPROM_READ: + case MSG_EEPROM_WRITE: + case MSG_BACKLIGHT: + case MSG_NOTIFY_LED: + case MSG_THERMAL_SENSOR: + case MSG_BATTERY: + /* Handle synchronous messages */ + if (micro->msg && micro->msg->id == id) { + struct ipaq_micro_msg *msg = micro->msg; + + memcpy(msg->rx_data, data, len); + msg->rx_len = len; + complete(µ->msg->ack); + if 
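
For illustration only, here is a minimal user-space sketch of the frame format that ipaq_micro_trigger_tx() above builds and that the RX state machine further down parses: SOF, one id/len byte ((id << 4) | len), the payload, then a checksum byte equal to the modulo-256 sum of the id/len byte and the payload. The SOF value and the message id used below are assumptions; the real CHAR_SOF and MSG_* constants come from linux/mfd/ipaq-micro.h, which is not shown in this hunk.

    #include <stdint.h>
    #include <stdio.h>

    #define CHAR_SOF 0x02   /* assumed SOF value, not taken from this patch */

    /* Build one frame the way ipaq_micro_trigger_tx() does (sketch, not kernel code). */
    static int build_frame(uint8_t id, const uint8_t *data, uint8_t len, uint8_t *buf)
    {
            int bp = 0;
            uint8_t csum = ((id & 0x0f) << 4) | (len & 0x0f);

            buf[bp++] = CHAR_SOF;
            buf[bp++] = csum;               /* id/len byte */
            for (int i = 0; i < len; i++) {
                    buf[bp++] = data[i];
                    csum += data[i];        /* checksum covers id/len byte + payload */
            }
            buf[bp++] = csum;
            return bp;
    }

    int main(void)
    {
            uint8_t payload[] = { 0x10, 0x01 };     /* hypothetical 2-byte payload */
            uint8_t frame[16];
            int n = build_frame(0x4, payload, 2, frame);    /* 0x4 is a made-up msg id */

            for (int i = 0; i < n; i++)
                    printf("%02x ", frame[i]);      /* prints: 02 42 10 01 53 */
            printf("\n");
            return 0;
    }
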
(!list_empty(µ->queue)) { + micro->msg = list_entry(micro->queue.next, + struct ipaq_micro_msg, + node); + list_del_init(µ->msg->node); + ipaq_micro_trigger_tx(micro); + } else + micro->msg = NULL; + dev_dbg(micro->dev, "OK RX message 0x%02x\n", id); + } else { + dev_err(micro->dev, + "out of band RX message 0x%02x\n", id); + if(!micro->msg) + dev_info(micro->dev, "no message queued\n"); + else + dev_info(micro->dev, "expected message %02x\n", + micro->msg->id); + } + break; + case MSG_KEYBOARD: + if (micro->key) + micro->key(micro->key_data, len, data); + else + dev_dbg(micro->dev, "key message ignored, no handle \n"); + break; + case MSG_TOUCHSCREEN: + if (micro->ts) + micro->ts(micro->ts_data, len, data); + else + dev_dbg(micro->dev, "touchscreen message ignored, no handle \n"); + break; + default: + dev_err(micro->dev, + "unknown msg %d [%d] ", id, len); + for (i = 0; i < len; ++i) + pr_cont("0x%02x ", data[i]); + pr_cont("\n"); + } + spin_unlock(µ->lock); +} + +static void micro_process_char(struct ipaq_micro *micro, u8 ch) +{ + struct ipaq_micro_rxdev *rx = µ->rx; + + switch (rx->state) { + case STATE_SOF: /* Looking for SOF */ + if (ch == CHAR_SOF) + rx->state = STATE_ID; /* Next byte is the id and len */ + break; + case STATE_ID: /* Looking for id and len byte */ + rx->id = (ch & 0xf0) >> 4 ; + rx->len = (ch & 0x0f); + rx->index = 0; + rx->chksum = ch; + rx->state = (rx->len > 0) ? STATE_DATA : STATE_CHKSUM; + break; + case STATE_DATA: /* Looking for 'len' data bytes */ + rx->chksum += ch; + rx->buf[rx->index] = ch; + if (++rx->index == rx->len) + rx->state = STATE_CHKSUM; + break; + case STATE_CHKSUM: /* Looking for the checksum */ + if (ch == rx->chksum) + micro_rx_msg(micro, rx->id, rx->len, rx->buf); + rx->state = STATE_SOF; + break; + } +} + +static void micro_rx_chars(struct ipaq_micro *micro) +{ + u32 status, ch; + + while ((status = readl(micro->base + UTSR1)) & UTSR1_RNE) { + ch = readl(micro->base + UTDR); + if (status & UTSR1_PRE) + dev_err(micro->dev, "rx: parity error\n"); + else if (status & UTSR1_FRE) + dev_err(micro->dev, "rx: framing error\n"); + else if (status & UTSR1_ROR) + dev_err(micro->dev, "rx: overrun error\n"); + micro_process_char(micro, ch); + } +} + +static void ipaq_micro_get_version(struct ipaq_micro *micro) +{ + struct ipaq_micro_msg msg = { + .id = MSG_VERSION, + }; + + ipaq_micro_tx_msg_sync(micro, &msg); + if (msg.rx_len == 4) { + memcpy(micro->version, msg.rx_data, 4); + micro->version[4] = '\0'; + } else if (msg.rx_len == 9) { + memcpy(micro->version, msg.rx_data, 4); + micro->version[4] = '\0'; + /* Bytes 4-7 are "pack", byte 8 is "boot type" */ + } else { + dev_err(micro->dev, + "illegal version message %d bytes\n", msg.rx_len); + } +} + +static void ipaq_micro_eeprom_read(struct ipaq_micro *micro, + u8 address, u8 len, u8 *data) +{ + struct ipaq_micro_msg msg = { + .id = MSG_EEPROM_READ, + }; + u8 i; + + for (i = 0; i < len; i++) { + msg.tx_data[0] = address + i; + msg.tx_data[1] = 1; + msg.tx_len = 2; + ipaq_micro_tx_msg_sync(micro, &msg); + memcpy(data + (i * 2), msg.rx_data, 2); + } +} + +static char *ipaq_micro_str(u8 *wchar, u8 len) +{ + char retstr[256]; + u8 i; + + for (i = 0; i < len / 2; i++) + retstr[i] = wchar[i * 2]; + return kstrdup(retstr, GFP_KERNEL); +} + +static u16 ipaq_micro_to_u16(u8 *data) +{ + return data[1] << 8 | data[0]; +} + +static void ipaq_micro_eeprom_dump(struct ipaq_micro *micro) +{ + u8 dump[256]; + char *str; + + ipaq_micro_eeprom_read(micro, 0, 128, dump); + str = ipaq_micro_str(dump, 10); + if (str) { + 
dev_info(micro->dev, "HM version %s\n", str); + kfree(str); + } + str = ipaq_micro_str(dump+10, 40); + if (str) { + dev_info(micro->dev, "serial number: %s\n", str); + /* Feed the random pool with this */ + add_device_randomness(str, strlen(str)); + kfree(str); + } + str = ipaq_micro_str(dump+50, 20); + if (str) { + dev_info(micro->dev, "module ID: %s\n", str); + kfree(str); + } + str = ipaq_micro_str(dump+70, 10); + if (str) { + dev_info(micro->dev, "product revision: %s\n", str); + kfree(str); + } + dev_info(micro->dev, "product ID: %u\n", ipaq_micro_to_u16(dump+80)); + dev_info(micro->dev, "frame rate: %u fps\n", + ipaq_micro_to_u16(dump+82)); + dev_info(micro->dev, "page mode: %u\n", ipaq_micro_to_u16(dump+84)); + dev_info(micro->dev, "country ID: %u\n", ipaq_micro_to_u16(dump+86)); + dev_info(micro->dev, "color display: %s\n", + ipaq_micro_to_u16(dump+88) ? "yes" : "no"); + dev_info(micro->dev, "ROM size: %u MiB\n", ipaq_micro_to_u16(dump+90)); + dev_info(micro->dev, "RAM size: %u KiB\n", ipaq_micro_to_u16(dump+92)); + dev_info(micro->dev, "screen: %u x %u\n", + ipaq_micro_to_u16(dump+94), ipaq_micro_to_u16(dump+96)); + print_hex_dump(KERN_DEBUG, "eeprom: ", DUMP_PREFIX_OFFSET, 16, 1, + dump, 256, true); + +} + +static void micro_tx_chars(struct ipaq_micro *micro) +{ + struct ipaq_micro_txdev *tx = µ->tx; + u32 val; + + while ((tx->index < tx->len) && + (readl(micro->base + UTSR1) & UTSR1_TNF)) { + writel(tx->buf[tx->index], micro->base + UTDR); + tx->index++; + } + + /* Stop interrupts */ + val = readl(micro->base + UTCR3); + val &= ~UTCR3_TIE; + writel(val, micro->base + UTCR3); +} + +static void micro_reset_comm(struct ipaq_micro *micro) +{ + struct ipaq_micro_rxdev *rx = µ->rx; + u32 val; + + if (micro->msg) + complete(µ->msg->ack); + + /* Initialize Serial channel protocol frame */ + rx->state = STATE_SOF; /* Reset the state machine */ + + /* Set up interrupts */ + writel(0x01, micro->sdlc + 0x0); /* Select UART mode */ + + /* Clean up CR3 */ + writel(0x0, micro->base + UTCR3); + + /* Format: 8N1 */ + writel(UTCR0_8BitData | UTCR0_1StpBit, micro->base + UTCR0); + + /* Baud rate: 115200 */ + writel(0x0, micro->base + UTCR1); + writel(0x1, micro->base + UTCR2); + + /* Clear SR0 */ + writel(0xff, micro->base + UTSR0); + + /* Enable RX int, disable TX int */ + writel(UTCR3_TXE | UTCR3_RXE | UTCR3_RIE, micro->base + UTCR3); + val = readl(micro->base + UTCR3); + val &= ~UTCR3_TIE; + writel(val, micro->base + UTCR3); +} + +static irqreturn_t micro_serial_isr(int irq, void *dev_id) +{ + struct ipaq_micro *micro = dev_id; + struct ipaq_micro_txdev *tx = µ->tx; + u32 status; + + status = readl(micro->base + UTSR0); + do { + if (status & (UTSR0_RID | UTSR0_RFS)) { + if (status & UTSR0_RID) + /* Clear the Receiver IDLE bit */ + writel(UTSR0_RID, micro->base + UTSR0); + micro_rx_chars(micro); + } + + /* Clear break bits */ + if (status & (UTSR0_RBB | UTSR0_REB)) + writel(status & (UTSR0_RBB | UTSR0_REB), + micro->base + UTSR0); + + if (status & UTSR0_TFS) + micro_tx_chars(micro); + + status = readl(micro->base + UTSR0); + + } while (((tx->index < tx->len) && (status & UTSR0_TFS)) || + (status & (UTSR0_RFS | UTSR0_RID))); + + return IRQ_HANDLED; +} + +static const struct mfd_cell micro_cells[] = { + { .name = "ipaq-micro-backlight", }, + { .name = "ipaq-micro-battery", }, + { .name = "ipaq-micro-keys", }, + { .name = "ipaq-micro-ts", }, + { .name = "ipaq-micro-leds", }, +}; + +static int micro_resume(struct device *dev) +{ + struct ipaq_micro *micro = dev_get_drvdata(dev); + + 
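
The register writes above configure the companion UART for 8N1 at 115200 baud by programming UTCR1 = 0x0 and UTCR2 = 0x1. A short sketch of the arithmetic, assuming the usual SA-1100 UART divisor formula baud = 3.6864 MHz / (16 * (BRD + 1)) with BRD = (UTCR1 << 8) | UTCR2 (the formula itself is not part of this patch):

    #include <stdio.h>

    int main(void)
    {
            /* Assumed SA-1100 UART baud formula; micro_reset_comm() writes
             * UTCR1 = 0x0 and UTCR2 = 0x1, i.e. BRD = 1.
             */
            unsigned int brd = (0x0 << 8) | 0x1;
            unsigned int baud = 3686400 / (16 * (brd + 1));

            printf("%u\n", baud);   /* 3686400 / 32 = 115200 */
            return 0;
    }
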
micro_reset_comm(micro); + mdelay(10); + + return 0; +} + +static int micro_probe(struct platform_device *pdev) +{ + struct ipaq_micro *micro; + struct resource *res; + int ret; + int irq; + + micro = devm_kzalloc(&pdev->dev, sizeof(*micro), GFP_KERNEL); + if (!micro) + return -ENOMEM; + + micro->dev = &pdev->dev; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) + return -EINVAL; + + micro->base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(micro->base)) + return PTR_ERR(micro->base); + + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); + if (!res) + return -EINVAL; + + micro->sdlc = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(micro->sdlc)) + return PTR_ERR(micro->sdlc); + + micro_reset_comm(micro); + + irq = platform_get_irq(pdev, 0); + if (!irq) + return -EINVAL; + ret = devm_request_irq(&pdev->dev, irq, micro_serial_isr, + IRQF_SHARED, "ipaq-micro", + micro); + if (ret) { + dev_err(&pdev->dev, "unable to grab serial port IRQ\n"); + return ret; + } else + dev_info(&pdev->dev, "grabbed serial port IRQ\n"); + + spin_lock_init(µ->lock); + INIT_LIST_HEAD(µ->queue); + platform_set_drvdata(pdev, micro); + + ret = mfd_add_devices(&pdev->dev, pdev->id, micro_cells, + ARRAY_SIZE(micro_cells), NULL, 0, NULL); + if (ret) { + dev_err(&pdev->dev, "error adding MFD cells"); + return ret; + } + + /* Check version */ + ipaq_micro_get_version(micro); + dev_info(&pdev->dev, "Atmel micro ASIC version %s\n", micro->version); + ipaq_micro_eeprom_dump(micro); + + return 0; +} + +static int micro_remove(struct platform_device *pdev) +{ + struct ipaq_micro *micro = platform_get_drvdata(pdev); + u32 val; + + mfd_remove_devices(&pdev->dev); + + val = readl(micro->base + UTCR3); + val &= ~(UTCR3_RXE | UTCR3_RIE); /* disable receive interrupt */ + val &= ~(UTCR3_TXE | UTCR3_TIE); /* disable transmit interrupt */ + writel(val, micro->base + UTCR3); + + return 0; +} + +static const struct dev_pm_ops micro_dev_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(NULL, micro_resume) +}; + +static struct platform_driver micro_device_driver = { + .driver = { + .name = "ipaq-h3xxx-micro", + .pm = µ_dev_pm_ops, + }, + .probe = micro_probe, + .remove = micro_remove, + /* .shutdown = micro_suspend, // FIXME */ +}; +module_platform_driver(micro_device_driver); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("driver for iPAQ Atmel micro core and backlight"); diff --git a/drivers/mfd/kempld-core.c b/drivers/mfd/kempld-core.c index 07692604e119..f7ff0188603d 100644 --- a/drivers/mfd/kempld-core.c +++ b/drivers/mfd/kempld-core.c @@ -86,7 +86,7 @@ enum kempld_cells { KEMPLD_UART, }; -static struct mfd_cell kempld_devs[] = { +static const struct mfd_cell kempld_devs[] = { [KEMPLD_I2C] = { .name = "kempld-i2c", }, @@ -288,9 +288,38 @@ EXPORT_SYMBOL_GPL(kempld_release_mutex); */ static int kempld_get_info(struct kempld_device_data *pld) { + int ret; struct kempld_platform_data *pdata = dev_get_platdata(pld->dev); + char major, minor; + + ret = pdata->get_info(pld); + if (ret) + return ret; + + /* The Kontron PLD firmware version string has the following format: + * Pwxy.zzzz + * P: Fixed + * w: PLD number - 1 hex digit + * x: Major version - 1 alphanumerical digit (0-9A-V) + * y: Minor version - 1 alphanumerical digit (0-9A-V) + * zzzz: Build number - 4 zero padded hex digits */ - return pdata->get_info(pld); + if (pld->info.major < 10) + major = pld->info.major + '0'; + else + major = (pld->info.major - 10) + 'A'; + if (pld->info.minor < 10) + minor = pld->info.minor + '0'; + else + minor = (pld->info.minor 
- 10) + 'A'; + + ret = scnprintf(pld->info.version, sizeof(pld->info.version), + "P%X%c%c.%04X", pld->info.number, major, minor, + pld->info.buildnr); + if (ret < 0) + return ret; + + return 0; } /* @@ -307,9 +336,71 @@ static int kempld_register_cells(struct kempld_device_data *pld) return pdata->register_cells(pld); } +static const char *kempld_get_type_string(struct kempld_device_data *pld) +{ + const char *version_type; + + switch (pld->info.type) { + case 0: + version_type = "release"; + break; + case 1: + version_type = "debug"; + break; + case 2: + version_type = "custom"; + break; + default: + version_type = "unspecified"; + break; + } + + return version_type; +} + +static ssize_t kempld_version_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct kempld_device_data *pld = dev_get_drvdata(dev); + + return scnprintf(buf, PAGE_SIZE, "%s\n", pld->info.version); +} + +static ssize_t kempld_specification_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct kempld_device_data *pld = dev_get_drvdata(dev); + + return scnprintf(buf, PAGE_SIZE, "%d.%d\n", pld->info.spec_major, + pld->info.spec_minor); +} + +static ssize_t kempld_type_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct kempld_device_data *pld = dev_get_drvdata(dev); + + return scnprintf(buf, PAGE_SIZE, "%s\n", kempld_get_type_string(pld)); +} + +static DEVICE_ATTR(pld_version, S_IRUGO, kempld_version_show, NULL); +static DEVICE_ATTR(pld_specification, S_IRUGO, kempld_specification_show, + NULL); +static DEVICE_ATTR(pld_type, S_IRUGO, kempld_type_show, NULL); + +static struct attribute *pld_attributes[] = { + &dev_attr_pld_version.attr, + &dev_attr_pld_specification.attr, + &dev_attr_pld_type.attr, + NULL +}; + +static const struct attribute_group pld_attr_group = { + .attrs = pld_attributes, +}; + static int kempld_detect_device(struct kempld_device_data *pld) { - char *version_type; u8 index_reg; int ret; @@ -335,27 +426,19 @@ static int kempld_detect_device(struct kempld_device_data *pld) if (ret) return ret; - switch (pld->info.type) { - case 0: - version_type = "release"; - break; - case 1: - version_type = "debug"; - break; - case 2: - version_type = "custom"; - break; - default: - version_type = "unspecified"; - } + dev_info(pld->dev, "Found Kontron PLD - %s (%s), spec %d.%d\n", + pld->info.version, kempld_get_type_string(pld), + pld->info.spec_major, pld->info.spec_minor); + + ret = sysfs_create_group(&pld->dev->kobj, &pld_attr_group); + if (ret) + return ret; - dev_info(pld->dev, "Found Kontron PLD %d\n", pld->info.number); - dev_info(pld->dev, "%s version %d.%d build %d, specification %d.%d\n", - version_type, pld->info.major, pld->info.minor, - pld->info.buildnr, pld->info.spec_major, - pld->info.spec_minor); + ret = kempld_register_cells(pld); + if (ret) + sysfs_remove_group(&pld->dev->kobj, &pld_attr_group); - return kempld_register_cells(pld); + return ret; } static int kempld_probe(struct platform_device *pdev) @@ -399,6 +482,8 @@ static int kempld_remove(struct platform_device *pdev) struct kempld_device_data *pld = platform_get_drvdata(pdev); struct kempld_platform_data *pdata = dev_get_platdata(pld->dev); + sysfs_remove_group(&pld->dev->kobj, &pld_attr_group); + mfd_remove_devices(&pdev->dev); pdata->release_hardware_mutex(pld); diff --git a/drivers/mfd/lp3943.c b/drivers/mfd/lp3943.c index e32226836fb4..335b930112b2 100644 --- a/drivers/mfd/lp3943.c +++ b/drivers/mfd/lp3943.c @@ -62,7 +62,7 @@ static const struct 
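
As a worked example of the Pwxy.zzzz version string that kempld_get_info() assembles above (all values below are made up): PLD number 2, major version 10, minor version 3 and build 0x1234 give the major digit 'A', the minor digit '3' and the string P2A3.1234. A small stand-alone sketch of the same mapping:

    #include <stdio.h>

    /* Same digit mapping as kempld_get_info(): 0-9, then A-V (sketch only). */
    static char pld_digit(int v)
    {
            return v < 10 ? '0' + v : 'A' + (v - 10);
    }

    int main(void)
    {
            /* Hypothetical firmware info: PLD number 2, version 10.3, build 0x1234. */
            int number = 2, major = 10, minor = 3, buildnr = 0x1234;
            char buf[16];

            snprintf(buf, sizeof(buf), "P%X%c%c.%04X",
                     number, pld_digit(major), pld_digit(minor), buildnr);
            printf("%s\n", buf);    /* P2A3.1234 */
            return 0;
    }
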
lp3943_reg_cfg lp3943_mux_cfg[] = { { LP3943_REG_MUX3, 0xC0, 6 }, }; -static struct mfd_cell lp3943_devs[] = { +static const struct mfd_cell lp3943_devs[] = { { .name = "lp3943-pwm", .of_compatible = "ti,lp3943-pwm", diff --git a/drivers/mfd/lpc_ich.c b/drivers/mfd/lpc_ich.c index 3f10ea3f45d1..7d8482ff5868 100644 --- a/drivers/mfd/lpc_ich.c +++ b/drivers/mfd/lpc_ich.c @@ -488,6 +488,7 @@ static struct lpc_ich_info lpc_chipset_info[] = { [LPC_PPT] = { .name = "Panther Point", .iTCO_version = 2, + .gpio_version = ICH_V5_GPIO, }, [LPC_LPT] = { .name = "Lynx Point", diff --git a/drivers/mfd/max14577.c b/drivers/mfd/max14577.c index 484d372a4892..4a5e885383f8 100644 --- a/drivers/mfd/max14577.c +++ b/drivers/mfd/max14577.c @@ -26,7 +26,7 @@ #include <linux/mfd/max14577.h> #include <linux/mfd/max14577-private.h> -static struct mfd_cell max14577_devs[] = { +static const struct mfd_cell max14577_devs[] = { { .name = "max14577-muic", .of_compatible = "maxim,max14577-muic", @@ -38,7 +38,7 @@ static struct mfd_cell max14577_devs[] = { { .name = "max14577-charger", }, }; -static struct mfd_cell max77836_devs[] = { +static const struct mfd_cell max77836_devs[] = { { .name = "max77836-muic", .of_compatible = "maxim,max77836-muic", @@ -57,7 +57,7 @@ static struct mfd_cell max77836_devs[] = { }, }; -static struct of_device_id max14577_dt_match[] = { +static const struct of_device_id max14577_dt_match[] = { { .compatible = "maxim,max14577", .data = (void *)MAXIM_DEVICE_TYPE_MAX14577, @@ -292,7 +292,7 @@ static int max14577_i2c_probe(struct i2c_client *i2c, struct device_node *np = i2c->dev.of_node; int ret = 0; const struct regmap_irq_chip *irq_chip; - struct mfd_cell *mfd_devs; + const struct mfd_cell *mfd_devs; unsigned int mfd_devs_size; int irq_flags; @@ -331,7 +331,8 @@ static int max14577_i2c_probe(struct i2c_client *i2c, of_id = of_match_device(max14577_dt_match, &i2c->dev); if (of_id) - max14577->dev_type = (unsigned int)of_id->data; + max14577->dev_type = + (enum maxim_device_type)of_id->data; } else { max14577->dev_type = id->driver_data; } @@ -414,20 +415,18 @@ static int max14577_suspend(struct device *dev) struct i2c_client *i2c = container_of(dev, struct i2c_client, dev); struct max14577 *max14577 = i2c_get_clientdata(i2c); - if (device_may_wakeup(dev)) { + if (device_may_wakeup(dev)) enable_irq_wake(max14577->irq); - /* - * MUIC IRQ must be disabled during suspend if this is - * a wake up source because it will be handled before - * resuming I2C. - * - * When device is woken up from suspend (e.g. by ADC change), - * an interrupt occurs before resuming I2C bus controller. - * Interrupt handler tries to read registers but this read - * will fail because I2C is still suspended. - */ - disable_irq(max14577->irq); - } + /* + * MUIC IRQ must be disabled during suspend because if it happens + * while suspended it will be handled before resuming I2C. + * + * When device is woken up from suspend (e.g. by ADC change), + * an interrupt occurs before resuming I2C bus controller. + * Interrupt handler tries to read registers but this read + * will fail because I2C is still suspended. 
+ */ + disable_irq(max14577->irq); return 0; } @@ -437,10 +436,9 @@ static int max14577_resume(struct device *dev) struct i2c_client *i2c = container_of(dev, struct i2c_client, dev); struct max14577 *max14577 = i2c_get_clientdata(i2c); - if (device_may_wakeup(dev)) { + if (device_may_wakeup(dev)) disable_irq_wake(max14577->irq); - enable_irq(max14577->irq); - } + enable_irq(max14577->irq); return 0; } diff --git a/drivers/mfd/max77686.c b/drivers/mfd/max77686.c index e5fce765accb..ce869acf27ae 100644 --- a/drivers/mfd/max77686.c +++ b/drivers/mfd/max77686.c @@ -47,7 +47,7 @@ static struct regmap_config max77686_regmap_config = { }; #ifdef CONFIG_OF -static struct of_device_id max77686_pmic_dt_match[] = { +static const struct of_device_id max77686_pmic_dt_match[] = { {.compatible = "maxim,max77686", .data = NULL}, {}, }; diff --git a/drivers/mfd/max77693.c b/drivers/mfd/max77693.c index c5535f018466..7e05428c756d 100644 --- a/drivers/mfd/max77693.c +++ b/drivers/mfd/max77693.c @@ -243,7 +243,7 @@ static const struct dev_pm_ops max77693_pm = { }; #ifdef CONFIG_OF -static struct of_device_id max77693_dt_match[] = { +static const struct of_device_id max77693_dt_match[] = { { .compatible = "maxim,max77693" }, {}, }; diff --git a/drivers/mfd/max8907.c b/drivers/mfd/max8907.c index 07740314b29d..232749c8813d 100644 --- a/drivers/mfd/max8907.c +++ b/drivers/mfd/max8907.c @@ -305,7 +305,7 @@ static int max8907_i2c_remove(struct i2c_client *i2c) } #ifdef CONFIG_OF -static struct of_device_id max8907_of_match[] = { +static const struct of_device_id max8907_of_match[] = { { .compatible = "maxim,max8907" }, { }, }; diff --git a/drivers/mfd/max8997.c b/drivers/mfd/max8997.c index 8cf7a015cfe5..595364ee178a 100644 --- a/drivers/mfd/max8997.c +++ b/drivers/mfd/max8997.c @@ -51,7 +51,7 @@ static const struct mfd_cell max8997_devs[] = { }; #ifdef CONFIG_OF -static struct of_device_id max8997_pmic_dt_match[] = { +static const struct of_device_id max8997_pmic_dt_match[] = { { .compatible = "maxim,max8997-pmic", .data = (void *)TYPE_MAX8997 }, {}, }; diff --git a/drivers/mfd/max8998.c b/drivers/mfd/max8998.c index 592db06098e6..a37cb7444b6e 100644 --- a/drivers/mfd/max8998.c +++ b/drivers/mfd/max8998.c @@ -132,7 +132,7 @@ int max8998_update_reg(struct i2c_client *i2c, u8 reg, u8 val, u8 mask) EXPORT_SYMBOL(max8998_update_reg); #ifdef CONFIG_OF -static struct of_device_id max8998_dt_match[] = { +static const struct of_device_id max8998_dt_match[] = { { .compatible = "maxim,max8998", .data = (void *)TYPE_MAX8998 }, { .compatible = "national,lp3974", .data = (void *)TYPE_LP3974 }, { .compatible = "ti,lp3974", .data = (void *)TYPE_LP3974 }, diff --git a/drivers/mfd/mc13xxx-core.c b/drivers/mfd/mc13xxx-core.c index 0c6c21c5b1a8..acf5dd712eb2 100644 --- a/drivers/mfd/mc13xxx-core.c +++ b/drivers/mfd/mc13xxx-core.c @@ -660,34 +660,22 @@ int mc13xxx_common_init(struct device *dev) if (ret) return ret; + mutex_init(&mc13xxx->lock); + ret = request_threaded_irq(mc13xxx->irq, NULL, mc13xxx_irq_thread, IRQF_ONESHOT | IRQF_TRIGGER_HIGH, "mc13xxx", mc13xxx); if (ret) return ret; - mutex_init(&mc13xxx->lock); - if (mc13xxx_probe_flags_dt(mc13xxx) < 0 && pdata) mc13xxx->flags = pdata->flags; if (mc13xxx->flags & MC13XXX_USE_ADC) mc13xxx_add_subdevice(mc13xxx, "%s-adc"); - if (mc13xxx->flags & MC13XXX_USE_CODEC) { - if (pdata) - mc13xxx_add_subdevice_pdata(mc13xxx, "%s-codec", - pdata->codec, sizeof(*pdata->codec)); - else - mc13xxx_add_subdevice(mc13xxx, "%s-codec"); - } - if (mc13xxx->flags & MC13XXX_USE_RTC) 
mc13xxx_add_subdevice(mc13xxx, "%s-rtc"); - if (mc13xxx->flags & MC13XXX_USE_TOUCHSCREEN) - mc13xxx_add_subdevice_pdata(mc13xxx, "%s-ts", - &pdata->touch, sizeof(pdata->touch)); - if (pdata) { mc13xxx_add_subdevice_pdata(mc13xxx, "%s-regulator", &pdata->regulators, sizeof(pdata->regulators)); @@ -695,10 +683,20 @@ int mc13xxx_common_init(struct device *dev) pdata->leds, sizeof(*pdata->leds)); mc13xxx_add_subdevice_pdata(mc13xxx, "%s-pwrbutton", pdata->buttons, sizeof(*pdata->buttons)); + if (mc13xxx->flags & MC13XXX_USE_CODEC) + mc13xxx_add_subdevice_pdata(mc13xxx, "%s-codec", + pdata->codec, sizeof(*pdata->codec)); + if (mc13xxx->flags & MC13XXX_USE_TOUCHSCREEN) + mc13xxx_add_subdevice_pdata(mc13xxx, "%s-ts", + &pdata->touch, sizeof(pdata->touch)); } else { mc13xxx_add_subdevice(mc13xxx, "%s-regulator"); mc13xxx_add_subdevice(mc13xxx, "%s-led"); mc13xxx_add_subdevice(mc13xxx, "%s-pwrbutton"); + if (mc13xxx->flags & MC13XXX_USE_CODEC) + mc13xxx_add_subdevice(mc13xxx, "%s-codec"); + if (mc13xxx->flags & MC13XXX_USE_TOUCHSCREEN) + mc13xxx_add_subdevice(mc13xxx, "%s-ts"); } return 0; diff --git a/drivers/mfd/menelaus.c b/drivers/mfd/menelaus.c index ad25bfa3fb02..5e2667afe2bc 100644 --- a/drivers/mfd/menelaus.c +++ b/drivers/mfd/menelaus.c @@ -1287,29 +1287,8 @@ static struct i2c_driver menelaus_i2c_driver = { .id_table = menelaus_id, }; -static int __init menelaus_init(void) -{ - int res; - - res = i2c_add_driver(&menelaus_i2c_driver); - if (res < 0) { - pr_err(DRIVER_NAME ": driver registration failed\n"); - return res; - } - - return 0; -} - -static void __exit menelaus_exit(void) -{ - i2c_del_driver(&menelaus_i2c_driver); - - /* FIXME: Shutdown menelaus parts that can be shut down */ -} +module_i2c_driver(menelaus_i2c_driver); MODULE_AUTHOR("Texas Instruments, Inc. 
(and others)"); MODULE_DESCRIPTION("I2C interface for Menelaus."); MODULE_LICENSE("GPL"); - -module_init(menelaus_init); -module_exit(menelaus_exit); diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c index 267649244737..892d343193ad 100644 --- a/drivers/mfd/mfd-core.c +++ b/drivers/mfd/mfd-core.c @@ -102,7 +102,7 @@ static int mfd_add_device(struct device *parent, int id, pdev->dev.dma_mask = parent->dma_mask; pdev->dev.dma_parms = parent->dma_parms; - ret = devm_regulator_bulk_register_supply_alias( + ret = regulator_bulk_register_supply_alias( &pdev->dev, cell->parent_supplies, parent, cell->parent_supplies, cell->num_parent_supplies); @@ -182,9 +182,9 @@ static int mfd_add_device(struct device *parent, int id, return 0; fail_alias: - devm_regulator_bulk_unregister_supply_alias(&pdev->dev, - cell->parent_supplies, - cell->num_parent_supplies); + regulator_bulk_unregister_supply_alias(&pdev->dev, + cell->parent_supplies, + cell->num_parent_supplies); fail_res: kfree(res); fail_device: @@ -238,6 +238,9 @@ static int mfd_remove_devices_fn(struct device *dev, void *c) pdev = to_platform_device(dev); cell = mfd_get_cell(pdev); + regulator_bulk_unregister_supply_alias(dev, cell->parent_supplies, + cell->num_parent_supplies); + /* find the base address of usage_count pointers (for freeing) */ if (!*usage_count || (cell->usage_count < *usage_count)) *usage_count = cell->usage_count; diff --git a/drivers/mfd/omap-usb-host.c b/drivers/mfd/omap-usb-host.c index 651e249287dc..b48d80c367f9 100644 --- a/drivers/mfd/omap-usb-host.c +++ b/drivers/mfd/omap-usb-host.c @@ -557,7 +557,7 @@ static int usbhs_omap_get_dt_pdata(struct device *dev, return 0; } -static struct of_device_id usbhs_child_match_table[] = { +static const struct of_device_id usbhs_child_match_table[] = { { .compatible = "ti,omap-ehci", }, { .compatible = "ti,omap-ohci", }, { } diff --git a/drivers/mfd/pm8921-core.c b/drivers/mfd/pm8921-core.c index b97a97187ae9..959513803542 100644 --- a/drivers/mfd/pm8921-core.c +++ b/drivers/mfd/pm8921-core.c @@ -26,7 +26,6 @@ #include <linux/regmap.h> #include <linux/of_platform.h> #include <linux/mfd/core.h> -#include <linux/mfd/pm8xxx/core.h> #define SSBI_REG_ADDR_IRQ_BASE 0x1BB @@ -57,7 +56,6 @@ #define PM8921_NR_IRQS 256 struct pm_irq_chip { - struct device *dev; struct regmap *regmap; spinlock_t pm_irq_lock; struct irq_domain *irqdomain; @@ -67,11 +65,6 @@ struct pm_irq_chip { u8 config[0]; }; -struct pm8921 { - struct device *dev; - struct pm_irq_chip *irq_chip; -}; - static int pm8xxx_read_block_irq(struct pm_irq_chip *chip, unsigned int bp, unsigned int *ip) { @@ -255,55 +248,6 @@ static struct irq_chip pm8xxx_irq_chip = { .flags = IRQCHIP_MASK_ON_SUSPEND | IRQCHIP_SKIP_SET_WAKE, }; -/** - * pm8xxx_get_irq_stat - get the status of the irq line - * @chip: pointer to identify a pmic irq controller - * @irq: the irq number - * - * The pm8xxx gpio and mpp rely on the interrupt block to read - * the values on their pins. This function is to facilitate reading - * the status of a gpio or an mpp line. The caller has to convert the - * gpio number to irq number. 
- * - * RETURNS: - * an int indicating the value read on that line - */ -static int pm8xxx_get_irq_stat(struct pm_irq_chip *chip, int irq) -{ - int pmirq, rc; - unsigned int block, bits, bit; - unsigned long flags; - struct irq_data *irq_data = irq_get_irq_data(irq); - - pmirq = irq_data->hwirq; - - block = pmirq / 8; - bit = pmirq % 8; - - spin_lock_irqsave(&chip->pm_irq_lock, flags); - - rc = regmap_write(chip->regmap, SSBI_REG_ADDR_IRQ_BLK_SEL, block); - if (rc) { - pr_err("Failed Selecting block irq=%d pmirq=%d blk=%d rc=%d\n", - irq, pmirq, block, rc); - goto bail_out; - } - - rc = regmap_read(chip->regmap, SSBI_REG_ADDR_IRQ_RT_STATUS, &bits); - if (rc) { - pr_err("Failed Configuring irq=%d pmirq=%d blk=%d rc=%d\n", - irq, pmirq, block, rc); - goto bail_out; - } - - rc = (bits & (1 << bit)) ? 1 : 0; - -bail_out: - spin_unlock_irqrestore(&chip->pm_irq_lock, flags); - - return rc; -} - static int pm8xxx_irq_domain_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hwirq) { @@ -324,56 +268,6 @@ static const struct irq_domain_ops pm8xxx_irq_domain_ops = { .map = pm8xxx_irq_domain_map, }; -static int pm8921_readb(const struct device *dev, u16 addr, u8 *val) -{ - const struct pm8xxx_drvdata *pm8921_drvdata = dev_get_drvdata(dev); - const struct pm8921 *pmic = pm8921_drvdata->pm_chip_data; - - return ssbi_read(pmic->dev->parent, addr, val, 1); -} - -static int pm8921_writeb(const struct device *dev, u16 addr, u8 val) -{ - const struct pm8xxx_drvdata *pm8921_drvdata = dev_get_drvdata(dev); - const struct pm8921 *pmic = pm8921_drvdata->pm_chip_data; - - return ssbi_write(pmic->dev->parent, addr, &val, 1); -} - -static int pm8921_read_buf(const struct device *dev, u16 addr, u8 *buf, - int cnt) -{ - const struct pm8xxx_drvdata *pm8921_drvdata = dev_get_drvdata(dev); - const struct pm8921 *pmic = pm8921_drvdata->pm_chip_data; - - return ssbi_read(pmic->dev->parent, addr, buf, cnt); -} - -static int pm8921_write_buf(const struct device *dev, u16 addr, u8 *buf, - int cnt) -{ - const struct pm8xxx_drvdata *pm8921_drvdata = dev_get_drvdata(dev); - const struct pm8921 *pmic = pm8921_drvdata->pm_chip_data; - - return ssbi_write(pmic->dev->parent, addr, buf, cnt); -} - -static int pm8921_read_irq_stat(const struct device *dev, int irq) -{ - const struct pm8xxx_drvdata *pm8921_drvdata = dev_get_drvdata(dev); - const struct pm8921 *pmic = pm8921_drvdata->pm_chip_data; - - return pm8xxx_get_irq_stat(pmic->irq_chip, irq); -} - -static struct pm8xxx_drvdata pm8921_drvdata = { - .pmic_readb = pm8921_readb, - .pmic_writeb = pm8921_writeb, - .pmic_read_buf = pm8921_read_buf, - .pmic_write_buf = pm8921_write_buf, - .pmic_read_irq_stat = pm8921_read_irq_stat, -}; - static const struct regmap_config ssbi_regmap_config = { .reg_bits = 16, .val_bits = 8, @@ -392,7 +286,6 @@ MODULE_DEVICE_TABLE(of, pm8921_id_table); static int pm8921_probe(struct platform_device *pdev) { - struct pm8921 *pmic; struct regmap *regmap; int irq, rc; unsigned int val; @@ -404,12 +297,6 @@ static int pm8921_probe(struct platform_device *pdev) if (irq < 0) return irq; - pmic = devm_kzalloc(&pdev->dev, sizeof(struct pm8921), GFP_KERNEL); - if (!pmic) { - pr_err("Cannot alloc pm8921 struct\n"); - return -ENOMEM; - } - regmap = devm_regmap_init(&pdev->dev, NULL, pdev->dev.parent, &ssbi_regmap_config); if (IS_ERR(regmap)) @@ -434,18 +321,13 @@ static int pm8921_probe(struct platform_device *pdev) pr_info("PMIC revision 2: %02X\n", val); rev |= val << BITS_PER_BYTE; - pmic->dev = &pdev->dev; - pm8921_drvdata.pm_chip_data = pmic; - 
platform_set_drvdata(pdev, &pm8921_drvdata); - chip = devm_kzalloc(&pdev->dev, sizeof(*chip) + sizeof(chip->config[0]) * nirqs, GFP_KERNEL); if (!chip) return -ENOMEM; - pmic->irq_chip = chip; - chip->dev = &pdev->dev; + platform_set_drvdata(pdev, chip); chip->regmap = regmap; chip->num_irqs = nirqs; chip->num_blocks = DIV_ROUND_UP(chip->num_irqs, 8); @@ -481,8 +363,7 @@ static int pm8921_remove_child(struct device *dev, void *unused) static int pm8921_remove(struct platform_device *pdev) { int irq = platform_get_irq(pdev, 0); - struct pm8921 *pmic = pm8921_drvdata.pm_chip_data; - struct pm_irq_chip *chip = pmic->irq_chip; + struct pm_irq_chip *chip = platform_get_drvdata(pdev); device_for_each_child(&pdev->dev, NULL, pm8921_remove_child); irq_set_chained_handler(irq, NULL); diff --git a/drivers/mfd/rdc321x-southbridge.c b/drivers/mfd/rdc321x-southbridge.c index c79569750be9..6575585f1d1f 100644 --- a/drivers/mfd/rdc321x-southbridge.c +++ b/drivers/mfd/rdc321x-southbridge.c @@ -38,7 +38,7 @@ static struct resource rdc321x_wdt_resource[] = { }; static struct rdc321x_gpio_pdata rdc321x_gpio_pdata = { - .max_gpios = RDC321X_MAX_GPIO, + .max_gpios = RDC321X_NUM_GPIO, }; static struct resource rdc321x_gpio_resources[] = { diff --git a/drivers/mfd/rtsx_usb.c b/drivers/mfd/rtsx_usb.c index b53b9d46cc45..6352bec8419a 100644 --- a/drivers/mfd/rtsx_usb.c +++ b/drivers/mfd/rtsx_usb.c @@ -29,7 +29,7 @@ static int polling_pipe = 1; module_param(polling_pipe, int, S_IRUGO | S_IWUSR); MODULE_PARM_DESC(polling_pipe, "polling pipe (0: ctl, 1: bulk)"); -static struct mfd_cell rtsx_usb_cells[] = { +static const struct mfd_cell rtsx_usb_cells[] = { [RTSX_USB_SD_CARD] = { .name = "rtsx_usb_sdmmc", .pdata_size = 0, @@ -67,7 +67,7 @@ static int rtsx_usb_bulk_transfer_sglist(struct rtsx_ucr *ucr, ucr->sg_timer.expires = jiffies + msecs_to_jiffies(timeout); add_timer(&ucr->sg_timer); usb_sg_wait(&ucr->current_sg); - del_timer(&ucr->sg_timer); + del_timer_sync(&ucr->sg_timer); if (act_len) *act_len = ucr->current_sg.bytes; @@ -644,14 +644,14 @@ static int rtsx_usb_probe(struct usb_interface *intf, if (ret) goto out_init_fail; + /* initialize USB SG transfer timer */ + setup_timer(&ucr->sg_timer, rtsx_usb_sg_timed_out, (unsigned long) ucr); + ret = mfd_add_devices(&intf->dev, usb_dev->devnum, rtsx_usb_cells, ARRAY_SIZE(rtsx_usb_cells), NULL, 0, NULL); if (ret) goto out_init_fail; - /* initialize USB SG transfer timer */ - init_timer(&ucr->sg_timer); - setup_timer(&ucr->sg_timer, rtsx_usb_sg_timed_out, (unsigned long) ucr); #ifdef CONFIG_PM intf->needs_remote_wakeup = 1; usb_enable_autosuspend(usb_dev); @@ -687,9 +687,15 @@ static int rtsx_usb_suspend(struct usb_interface *intf, pm_message_t message) dev_dbg(&intf->dev, "%s called with pm message 0x%04u\n", __func__, message.event); + /* + * Call to make sure LED is off during suspend to save more power. + * It is NOT a permanent state and could be turned on anytime later. + * Thus no need to call turn_on when resunming. 
+ */ mutex_lock(&ucr->dev_mutex); rtsx_usb_turn_off_led(ucr); mutex_unlock(&ucr->dev_mutex); + return 0; } diff --git a/drivers/mfd/sec-core.c b/drivers/mfd/sec-core.c index 1cf27521fff4..be06d0abbf19 100644 --- a/drivers/mfd/sec-core.c +++ b/drivers/mfd/sec-core.c @@ -25,7 +25,6 @@ #include <linux/mfd/core.h> #include <linux/mfd/samsung/core.h> #include <linux/mfd/samsung/irq.h> -#include <linux/mfd/samsung/rtc.h> #include <linux/mfd/samsung/s2mpa01.h> #include <linux/mfd/samsung/s2mps11.h> #include <linux/mfd/samsung/s2mps14.h> @@ -91,7 +90,7 @@ static const struct mfd_cell s2mpa01_devs[] = { }; #ifdef CONFIG_OF -static struct of_device_id sec_dt_match[] = { +static const struct of_device_id sec_dt_match[] = { { .compatible = "samsung,s5m8767-pmic", .data = (void *)S5M8767X, }, { @@ -196,20 +195,6 @@ static const struct regmap_config s5m8767_regmap_config = { .cache_type = REGCACHE_FLAT, }; -static const struct regmap_config s5m_rtc_regmap_config = { - .reg_bits = 8, - .val_bits = 8, - - .max_register = SEC_RTC_REG_MAX, -}; - -static const struct regmap_config s2mps14_rtc_regmap_config = { - .reg_bits = 8, - .val_bits = 8, - - .max_register = S2MPS_RTC_REG_MAX, -}; - #ifdef CONFIG_OF /* * Only the common platform data elements for s5m8767 are parsed here from the @@ -264,8 +249,9 @@ static int sec_pmic_probe(struct i2c_client *i2c, const struct i2c_device_id *id) { struct sec_platform_data *pdata = dev_get_platdata(&i2c->dev); - const struct regmap_config *regmap, *regmap_rtc; + const struct regmap_config *regmap; struct sec_pmic_dev *sec_pmic; + unsigned long device_type; int ret; sec_pmic = devm_kzalloc(&i2c->dev, sizeof(struct sec_pmic_dev), @@ -277,7 +263,7 @@ static int sec_pmic_probe(struct i2c_client *i2c, sec_pmic->dev = &i2c->dev; sec_pmic->i2c = i2c; sec_pmic->irq = i2c->irq; - sec_pmic->type = sec_i2c_get_driver_data(i2c, id); + device_type = sec_i2c_get_driver_data(i2c, id); if (sec_pmic->dev->of_node) { pdata = sec_pmic_i2c_parse_dt_pdata(sec_pmic->dev); @@ -285,7 +271,7 @@ static int sec_pmic_probe(struct i2c_client *i2c, ret = PTR_ERR(pdata); return ret; } - pdata->device_type = sec_pmic->type; + pdata->device_type = device_type; } if (pdata) { sec_pmic->device_type = pdata->device_type; @@ -298,39 +284,21 @@ static int sec_pmic_probe(struct i2c_client *i2c, switch (sec_pmic->device_type) { case S2MPA01: regmap = &s2mpa01_regmap_config; - /* - * The rtc-s5m driver does not support S2MPA01 and there - * is no mfd_cell for S2MPA01 RTC device. - * However we must pass something to devm_regmap_init_i2c() - * so use S5M-like regmap config even though it wouldn't work. - */ - regmap_rtc = &s5m_rtc_regmap_config; break; case S2MPS11X: regmap = &s2mps11_regmap_config; - /* - * The rtc-s5m driver does not support S2MPS11 and there - * is no mfd_cell for S2MPS11 RTC device. - * However we must pass something to devm_regmap_init_i2c() - * so use S5M-like regmap config even though it wouldn't work. 
- */ - regmap_rtc = &s5m_rtc_regmap_config; break; case S2MPS14X: regmap = &s2mps14_regmap_config; - regmap_rtc = &s2mps14_rtc_regmap_config; break; case S5M8763X: regmap = &s5m8763_regmap_config; - regmap_rtc = &s5m_rtc_regmap_config; break; case S5M8767X: regmap = &s5m8767_regmap_config; - regmap_rtc = &s5m_rtc_regmap_config; break; default: regmap = &sec_regmap_config; - regmap_rtc = &s5m_rtc_regmap_config; break; } @@ -342,21 +310,6 @@ static int sec_pmic_probe(struct i2c_client *i2c, return ret; } - sec_pmic->rtc = i2c_new_dummy(i2c->adapter, RTC_I2C_ADDR); - if (!sec_pmic->rtc) { - dev_err(&i2c->dev, "Failed to allocate I2C for RTC\n"); - return -ENODEV; - } - i2c_set_clientdata(sec_pmic->rtc, sec_pmic); - - sec_pmic->regmap_rtc = devm_regmap_init_i2c(sec_pmic->rtc, regmap_rtc); - if (IS_ERR(sec_pmic->regmap_rtc)) { - ret = PTR_ERR(sec_pmic->regmap_rtc); - dev_err(&i2c->dev, "Failed to allocate RTC register map: %d\n", - ret); - goto err_regmap_rtc; - } - if (pdata && pdata->cfg_pmic_irq) pdata->cfg_pmic_irq(); @@ -403,8 +356,6 @@ static int sec_pmic_probe(struct i2c_client *i2c, err_mfd: sec_irq_exit(sec_pmic); -err_regmap_rtc: - i2c_unregister_device(sec_pmic->rtc); return ret; } @@ -414,7 +365,6 @@ static int sec_pmic_remove(struct i2c_client *i2c) mfd_remove_devices(sec_pmic->dev); sec_irq_exit(sec_pmic); - i2c_unregister_device(sec_pmic->rtc); return 0; } @@ -424,19 +374,18 @@ static int sec_pmic_suspend(struct device *dev) struct i2c_client *i2c = container_of(dev, struct i2c_client, dev); struct sec_pmic_dev *sec_pmic = i2c_get_clientdata(i2c); - if (device_may_wakeup(dev)) { + if (device_may_wakeup(dev)) enable_irq_wake(sec_pmic->irq); - /* - * PMIC IRQ must be disabled during suspend for RTC alarm - * to work properly. - * When device is woken up from suspend by RTC Alarm, an - * interrupt occurs before resuming I2C bus controller. - * The interrupt is handled by regmap_irq_thread which tries - * to read RTC registers. This read fails (I2C is still - * suspended) and RTC Alarm interrupt is disabled. - */ - disable_irq(sec_pmic->irq); - } + /* + * PMIC IRQ must be disabled during suspend for RTC alarm + * to work properly. + * When device is woken up from suspend, an + * interrupt occurs before resuming I2C bus controller. + * The interrupt is handled by regmap_irq_thread which tries + * to read RTC registers. This read fails (I2C is still + * suspended) and RTC Alarm interrupt is disabled. 
+ */ + disable_irq(sec_pmic->irq); return 0; } @@ -446,10 +395,9 @@ static int sec_pmic_resume(struct device *dev) struct i2c_client *i2c = container_of(dev, struct i2c_client, dev); struct sec_pmic_dev *sec_pmic = i2c_get_clientdata(i2c); - if (device_may_wakeup(dev)) { + if (device_may_wakeup(dev)) disable_irq_wake(sec_pmic->irq); - enable_irq(sec_pmic->irq); - } + enable_irq(sec_pmic->irq); return 0; } diff --git a/drivers/mfd/sec-irq.c b/drivers/mfd/sec-irq.c index 64e7913aadc6..654e2c1dbf7a 100644 --- a/drivers/mfd/sec-irq.c +++ b/drivers/mfd/sec-irq.c @@ -385,7 +385,7 @@ int sec_irq_init(struct sec_pmic_dev *sec_pmic) &sec_pmic->irq_data); break; default: - dev_err(sec_pmic->dev, "Unknown device type %d\n", + dev_err(sec_pmic->dev, "Unknown device type %lu\n", sec_pmic->device_type); return -EINVAL; } diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c index e7dc441a8f8a..81e6d0932bf0 100644 --- a/drivers/mfd/sm501.c +++ b/drivers/mfd/sm501.c @@ -1726,7 +1726,7 @@ static struct pci_driver sm501_pci_driver = { MODULE_ALIAS("platform:sm501"); -static struct of_device_id of_sm501_match_tbl[] = { +static const struct of_device_id of_sm501_match_tbl[] = { { .compatible = "smi,sm501", }, { /* end */ } }; diff --git a/drivers/mfd/stmpe-i2c.c b/drivers/mfd/stmpe-i2c.c index 0da02e11d58e..a45f9c0a330a 100644 --- a/drivers/mfd/stmpe-i2c.c +++ b/drivers/mfd/stmpe-i2c.c @@ -14,6 +14,7 @@ #include <linux/kernel.h> #include <linux/module.h> #include <linux/types.h> +#include <linux/of_device.h> #include "stmpe.h" static int i2c_reg_read(struct stmpe *stmpe, u8 reg) @@ -52,15 +53,41 @@ static struct stmpe_client_info i2c_ci = { .write_block = i2c_block_write, }; +static const struct of_device_id stmpe_of_match[] = { + { .compatible = "st,stmpe610", .data = (void *)STMPE610, }, + { .compatible = "st,stmpe801", .data = (void *)STMPE801, }, + { .compatible = "st,stmpe811", .data = (void *)STMPE811, }, + { .compatible = "st,stmpe1601", .data = (void *)STMPE1601, }, + { .compatible = "st,stmpe1801", .data = (void *)STMPE1801, }, + { .compatible = "st,stmpe2401", .data = (void *)STMPE2401, }, + { .compatible = "st,stmpe2403", .data = (void *)STMPE2403, }, + {}, +}; +MODULE_DEVICE_TABLE(of, stmpe_of_match); + static int stmpe_i2c_probe(struct i2c_client *i2c, const struct i2c_device_id *id) { + int partnum; + const struct of_device_id *of_id; + i2c_ci.data = (void *)id; i2c_ci.irq = i2c->irq; i2c_ci.client = i2c; i2c_ci.dev = &i2c->dev; - return stmpe_probe(&i2c_ci, id->driver_data); + of_id = of_match_device(stmpe_of_match, &i2c->dev); + if (!of_id) { + /* + * This happens when the I2C ID matches the node name + * but no real compatible string has been given. 
+ */ + dev_info(&i2c->dev, "matching on node name, compatible is preferred\n"); + partnum = id->driver_data; + } else + partnum = (int)of_id->data; + + return stmpe_probe(&i2c_ci, partnum); } static int stmpe_i2c_remove(struct i2c_client *i2c) @@ -89,6 +116,7 @@ static struct i2c_driver stmpe_i2c_driver = { #ifdef CONFIG_PM .pm = &stmpe_dev_pm_ops, #endif + .of_match_table = stmpe_of_match, }, .probe = stmpe_i2c_probe, .remove = stmpe_i2c_remove, diff --git a/drivers/mfd/stmpe.c b/drivers/mfd/stmpe.c index 4a91f6771fb8..3b6bfa7184ad 100644 --- a/drivers/mfd/stmpe.c +++ b/drivers/mfd/stmpe.c @@ -20,6 +20,7 @@ #include <linux/slab.h> #include <linux/mfd/core.h> #include <linux/delay.h> +#include <linux/regulator/consumer.h> #include "stmpe.h" static int __stmpe_enable(struct stmpe *stmpe, unsigned int blocks) @@ -605,9 +606,18 @@ static int stmpe1601_enable(struct stmpe *stmpe, unsigned int blocks, if (blocks & STMPE_BLOCK_GPIO) mask |= STMPE1601_SYS_CTRL_ENABLE_GPIO; + else + mask &= ~STMPE1601_SYS_CTRL_ENABLE_GPIO; if (blocks & STMPE_BLOCK_KEYPAD) mask |= STMPE1601_SYS_CTRL_ENABLE_KPC; + else + mask &= ~STMPE1601_SYS_CTRL_ENABLE_KPC; + + if (blocks & STMPE_BLOCK_PWM) + mask |= STMPE1601_SYS_CTRL_ENABLE_SPWM; + else + mask &= ~STMPE1601_SYS_CTRL_ENABLE_SPWM; return __stmpe_set_bits(stmpe, STMPE1601_REG_SYS_CTRL, mask, enable ? mask : 0); @@ -986,9 +996,6 @@ static int stmpe_irq_init(struct stmpe *stmpe, struct device_node *np) int base = 0; int num_irqs = stmpe->variant->num_irqs; - if (!np) - base = stmpe->irq_base; - stmpe->domain = irq_domain_add_simple(np, num_irqs, base, &stmpe_irq_ops, stmpe); if (!stmpe->domain) { @@ -1067,7 +1074,7 @@ static int stmpe_chip_init(struct stmpe *stmpe) static int stmpe_add_device(struct stmpe *stmpe, const struct mfd_cell *cell) { return mfd_add_devices(stmpe->dev, stmpe->pdata->id, cell, 1, - NULL, stmpe->irq_base, stmpe->domain); + NULL, 0, stmpe->domain); } static int stmpe_devices_init(struct stmpe *stmpe) @@ -1171,12 +1178,23 @@ int stmpe_probe(struct stmpe_client_info *ci, int partnum) stmpe->dev = ci->dev; stmpe->client = ci->client; stmpe->pdata = pdata; - stmpe->irq_base = pdata->irq_base; stmpe->ci = ci; stmpe->partnum = partnum; stmpe->variant = stmpe_variant_info[partnum]; stmpe->regs = stmpe->variant->regs; stmpe->num_gpios = stmpe->variant->num_gpios; + stmpe->vcc = devm_regulator_get_optional(ci->dev, "vcc"); + if (!IS_ERR(stmpe->vcc)) { + ret = regulator_enable(stmpe->vcc); + if (ret) + dev_warn(ci->dev, "failed to enable VCC supply\n"); + } + stmpe->vio = devm_regulator_get_optional(ci->dev, "vio"); + if (!IS_ERR(stmpe->vio)) { + ret = regulator_enable(stmpe->vio); + if (ret) + dev_warn(ci->dev, "failed to enable VIO supply\n"); + } dev_set_drvdata(stmpe->dev, stmpe); if (ci->init) @@ -1243,6 +1261,11 @@ int stmpe_probe(struct stmpe_client_info *ci, int partnum) int stmpe_remove(struct stmpe *stmpe) { + if (!IS_ERR(stmpe->vio)) + regulator_disable(stmpe->vio); + if (!IS_ERR(stmpe->vcc)) + regulator_disable(stmpe->vcc); + mfd_remove_devices(stmpe->dev); return 0; diff --git a/drivers/mfd/stmpe.h b/drivers/mfd/stmpe.h index 6639f1b0fef5..9e4d21d37a11 100644 --- a/drivers/mfd/stmpe.h +++ b/drivers/mfd/stmpe.h @@ -192,7 +192,7 @@ int stmpe_remove(struct stmpe *stmpe); #define STMPE1601_SYS_CTRL_ENABLE_GPIO (1 << 3) #define STMPE1601_SYS_CTRL_ENABLE_KPC (1 << 1) -#define STMPE1601_SYSCON_ENABLE_SPWM (1 << 0) +#define STMPE1601_SYS_CTRL_ENABLE_SPWM (1 << 0) /* The 1601/2403 share the same masks */ #define STMPE1601_AUTOSLEEP_TIMEOUT_MASK 
(0x7) diff --git a/drivers/mfd/sun6i-prcm.c b/drivers/mfd/sun6i-prcm.c new file mode 100644 index 000000000000..718fc4d2adc0 --- /dev/null +++ b/drivers/mfd/sun6i-prcm.c @@ -0,0 +1,134 @@ +/* + * Copyright (C) 2014 Free Electrons + * + * License Terms: GNU General Public License v2 + * Author: Boris BREZILLON <boris.brezillon@free-electrons.com> + * + * Allwinner PRCM (Power/Reset/Clock Management) driver + * + */ + +#include <linux/mfd/core.h> +#include <linux/module.h> +#include <linux/of.h> + +struct prcm_data { + int nsubdevs; + const struct mfd_cell *subdevs; +}; + +static const struct resource sun6i_a31_ar100_clk_res[] = { + { + .start = 0x0, + .end = 0x3, + .flags = IORESOURCE_MEM, + }, +}; + +static const struct resource sun6i_a31_apb0_clk_res[] = { + { + .start = 0xc, + .end = 0xf, + .flags = IORESOURCE_MEM, + }, +}; + +static const struct resource sun6i_a31_apb0_gates_clk_res[] = { + { + .start = 0x28, + .end = 0x2b, + .flags = IORESOURCE_MEM, + }, +}; + +static const struct resource sun6i_a31_apb0_rstc_res[] = { + { + .start = 0xb0, + .end = 0xb3, + .flags = IORESOURCE_MEM, + }, +}; + +static const struct mfd_cell sun6i_a31_prcm_subdevs[] = { + { + .name = "sun6i-a31-ar100-clk", + .of_compatible = "allwinner,sun6i-a31-ar100-clk", + .num_resources = ARRAY_SIZE(sun6i_a31_ar100_clk_res), + .resources = sun6i_a31_ar100_clk_res, + }, + { + .name = "sun6i-a31-apb0-clk", + .of_compatible = "allwinner,sun6i-a31-apb0-clk", + .num_resources = ARRAY_SIZE(sun6i_a31_apb0_clk_res), + .resources = sun6i_a31_apb0_clk_res, + }, + { + .name = "sun6i-a31-apb0-gates-clk", + .of_compatible = "allwinner,sun6i-a31-apb0-gates-clk", + .num_resources = ARRAY_SIZE(sun6i_a31_apb0_gates_clk_res), + .resources = sun6i_a31_apb0_gates_clk_res, + }, + { + .name = "sun6i-a31-apb0-clock-reset", + .of_compatible = "allwinner,sun6i-a31-clock-reset", + .num_resources = ARRAY_SIZE(sun6i_a31_apb0_rstc_res), + .resources = sun6i_a31_apb0_rstc_res, + }, +}; + +static const struct prcm_data sun6i_a31_prcm_data = { + .nsubdevs = ARRAY_SIZE(sun6i_a31_prcm_subdevs), + .subdevs = sun6i_a31_prcm_subdevs, +}; + +static const struct of_device_id sun6i_prcm_dt_ids[] = { + { + .compatible = "allwinner,sun6i-a31-prcm", + .data = &sun6i_a31_prcm_data, + }, + { /* sentinel */ }, +}; + +static int sun6i_prcm_probe(struct platform_device *pdev) +{ + struct device_node *np = pdev->dev.of_node; + const struct of_device_id *match; + const struct prcm_data *data; + struct resource *res; + int ret; + + match = of_match_node(sun6i_prcm_dt_ids, np); + if (!match) + return -EINVAL; + + data = match->data; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) { + dev_err(&pdev->dev, "no prcm memory region provided\n"); + return -ENOENT; + } + + ret = mfd_add_devices(&pdev->dev, 0, data->subdevs, data->nsubdevs, + res, -1, NULL); + if (ret) { + dev_err(&pdev->dev, "failed to add subdevices\n"); + return ret; + } + + return 0; +} + +static struct platform_driver sun6i_prcm_driver = { + .driver = { + .name = "sun6i-prcm", + .owner = THIS_MODULE, + .of_match_table = sun6i_prcm_dt_ids, + }, + .probe = sun6i_prcm_probe, +}; +module_platform_driver(sun6i_prcm_driver); + +MODULE_AUTHOR("Boris BREZILLON <boris.brezillon@free-electrons.com>"); +MODULE_DESCRIPTION("Allwinner sun6i PRCM driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/mfd/syscon.c b/drivers/mfd/syscon.c index e2a04bb8bc1e..ca15878ce5c0 100644 --- a/drivers/mfd/syscon.c +++ b/drivers/mfd/syscon.c @@ -95,7 +95,11 @@ struct regmap 
*syscon_regmap_lookup_by_phandle(struct device_node *np, struct device_node *syscon_np; struct regmap *regmap; - syscon_np = of_parse_phandle(np, property, 0); + if (property) + syscon_np = of_parse_phandle(np, property, 0); + else + syscon_np = np; + if (!syscon_np) return ERR_PTR(-ENODEV); diff --git a/drivers/mfd/tps6507x.c b/drivers/mfd/tps6507x.c index 3b27482a174f..a2e1990c9de7 100644 --- a/drivers/mfd/tps6507x.c +++ b/drivers/mfd/tps6507x.c @@ -119,7 +119,7 @@ static const struct i2c_device_id tps6507x_i2c_id[] = { MODULE_DEVICE_TABLE(i2c, tps6507x_i2c_id); #ifdef CONFIG_OF -static struct of_device_id tps6507x_of_match[] = { +static const struct of_device_id tps6507x_of_match[] = { {.compatible = "ti,tps6507x", }, {}, }; diff --git a/drivers/mfd/tps65218.c b/drivers/mfd/tps65218.c index a74bfb59f18f..0d256cb002eb 100644 --- a/drivers/mfd/tps65218.c +++ b/drivers/mfd/tps65218.c @@ -197,6 +197,7 @@ static struct regmap_irq_chip tps65218_irq_chip = { static const struct of_device_id of_tps65218_match_table[] = { { .compatible = "ti,tps65218", }, + {} }; static int tps65218_probe(struct i2c_client *client, diff --git a/drivers/mfd/tps6586x.c b/drivers/mfd/tps6586x.c index 835e5549ecdd..8e1dbc469580 100644 --- a/drivers/mfd/tps6586x.c +++ b/drivers/mfd/tps6586x.c @@ -444,7 +444,7 @@ static struct tps6586x_platform_data *tps6586x_parse_dt(struct i2c_client *clien return pdata; } -static struct of_device_id tps6586x_of_match[] = { +static const struct of_device_id tps6586x_of_match[] = { { .compatible = "ti,tps6586x", }, { }, }; diff --git a/drivers/mfd/tps65910.c b/drivers/mfd/tps65910.c index 460a014ca629..f9e42ea1cb1a 100644 --- a/drivers/mfd/tps65910.c +++ b/drivers/mfd/tps65910.c @@ -379,7 +379,7 @@ err_sleep_init: } #ifdef CONFIG_OF -static struct of_device_id tps65910_of_match[] = { +static const struct of_device_id tps65910_of_match[] = { { .compatible = "ti,tps65910", .data = (void *)TPS65910}, { .compatible = "ti,tps65911", .data = (void *)TPS65911}, { }, diff --git a/drivers/mfd/twl6040.c b/drivers/mfd/twl6040.c index 6e88f25832fb..ae26d84b3a59 100644 --- a/drivers/mfd/twl6040.c +++ b/drivers/mfd/twl6040.c @@ -87,8 +87,13 @@ static struct reg_default twl6040_defaults[] = { }; static struct reg_default twl6040_patch[] = { - /* Select I2C bus access to dual access registers */ - { TWL6040_REG_ACCCTL, 0x09 }, + /* + * Select I2C bus access to dual access registers + * Interrupt register is cleared on read + * Select fast mode for i2c (400KHz) + */ + { TWL6040_REG_ACCCTL, + TWL6040_I2CSEL | TWL6040_INTCLRMODE | TWL6040_I2CMODE(1) }, }; @@ -286,6 +291,8 @@ int twl6040_power(struct twl6040 *twl6040, int on) if (twl6040->power_count++) goto out; + clk_prepare_enable(twl6040->clk32k); + /* Allow writes to the chip */ regcache_cache_only(twl6040->regmap, false); @@ -341,6 +348,8 @@ int twl6040_power(struct twl6040 *twl6040, int on) twl6040->sysclk = 0; twl6040->mclk = 0; + + clk_disable_unprepare(twl6040->clk32k); } out: @@ -432,12 +441,9 @@ int twl6040_set_pll(struct twl6040 *twl6040, int pll_id, TWL6040_HPLLENA; break; case 19200000: - /* - * PLL disabled - * (enable PLL if MCLK jitter quality - * doesn't meet specification) - */ - hppllctl |= TWL6040_MCLK_19200KHZ; + /* PLL enabled, bypass mode */ + hppllctl |= TWL6040_MCLK_19200KHZ | + TWL6040_HPLLBP | TWL6040_HPLLENA; break; case 26000000: /* PLL enabled, active mode */ @@ -445,9 +451,9 @@ int twl6040_set_pll(struct twl6040 *twl6040, int pll_id, TWL6040_HPLLENA; break; case 38400000: - /* PLL enabled, active mode */ + /* PLL 
enabled, bypass mode */ hppllctl |= TWL6040_MCLK_38400KHZ | - TWL6040_HPLLENA; + TWL6040_HPLLBP | TWL6040_HPLLENA; break; default: dev_err(twl6040->dev, @@ -639,6 +645,12 @@ static int twl6040_probe(struct i2c_client *client, i2c_set_clientdata(client, twl6040); + twl6040->clk32k = devm_clk_get(&client->dev, "clk32k"); + if (IS_ERR(twl6040->clk32k)) { + dev_info(&client->dev, "clk32k is not handled\n"); + twl6040->clk32k = NULL; + } + twl6040->supplies[0].supply = "vio"; twl6040->supplies[1].supply = "v2v1"; ret = devm_regulator_bulk_get(&client->dev, TWL6040_NUM_SUPPLIES, @@ -660,6 +672,9 @@ static int twl6040_probe(struct i2c_client *client, mutex_init(&twl6040->mutex); init_completion(&twl6040->ready); + regmap_register_patch(twl6040->regmap, twl6040_patch, + ARRAY_SIZE(twl6040_patch)); + twl6040->rev = twl6040_reg_read(twl6040, TWL6040_REG_ASICREV); if (twl6040->rev < 0) { dev_err(&client->dev, "Failed to read revision register: %d\n", @@ -679,6 +694,9 @@ static int twl6040_probe(struct i2c_client *client, GPIOF_OUT_INIT_LOW, "audpwron"); if (ret) goto gpio_err; + + /* Clear any pending interrupt */ + twl6040_reg_read(twl6040, TWL6040_REG_INTID); } ret = regmap_add_irq_chip(twl6040->regmap, twl6040->irq, IRQF_ONESHOT, @@ -707,10 +725,6 @@ static int twl6040_probe(struct i2c_client *client, goto readyirq_err; } - /* dual-access registers controlled by I2C only */ - regmap_register_patch(twl6040->regmap, twl6040_patch, - ARRAY_SIZE(twl6040_patch)); - /* * The main functionality of twl6040 to provide audio on OMAP4+ systems. * We can add the ASoC codec child whenever this driver has been loaded. diff --git a/drivers/mfd/wm5102-tables.c b/drivers/mfd/wm5102-tables.c index 070f8cfbbd7a..c8a993bd17ae 100644 --- a/drivers/mfd/wm5102-tables.c +++ b/drivers/mfd/wm5102-tables.c @@ -333,7 +333,7 @@ static const struct reg_default wm5102_reg_default[] = { { 0x000001AA, 0x0004 }, /* R426 - FLL2 GPIO Clock */ { 0x00000200, 0x0006 }, /* R512 - Mic Charge Pump 1 */ { 0x00000210, 0x00D4 }, /* R528 - LDO1 Control 1 */ - { 0x00000212, 0x0001 }, /* R530 - LDO1 Control 2 */ + { 0x00000212, 0x0000 }, /* R530 - LDO1 Control 2 */ { 0x00000213, 0x0344 }, /* R531 - LDO2 Control 1 */ { 0x00000218, 0x01A6 }, /* R536 - Mic Bias Ctrl 1 */ { 0x00000219, 0x01A6 }, /* R537 - Mic Bias Ctrl 2 */ @@ -1037,6 +1037,8 @@ static bool wm5102_readable_register(struct device *dev, unsigned int reg) case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_4: case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_5: case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_6: + case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_7: + case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_8: case ARIZONA_COMFORT_NOISE_GENERATOR: case ARIZONA_HAPTICS_CONTROL_1: case ARIZONA_HAPTICS_CONTROL_2: diff --git a/drivers/mfd/wm5110-tables.c b/drivers/mfd/wm5110-tables.c index 1942b6f231da..41a7f6fb7802 100644 --- a/drivers/mfd/wm5110-tables.c +++ b/drivers/mfd/wm5110-tables.c @@ -468,10 +468,12 @@ static const struct reg_default wm5110_reg_default[] = { { 0x00000062, 0x01FF }, /* R98 - Sample Rate Sequence Select 2 */ { 0x00000063, 0x01FF }, /* R99 - Sample Rate Sequence Select 3 */ { 0x00000064, 0x01FF }, /* R100 - Sample Rate Sequence Select 4 */ - { 0x00000068, 0x01FF }, /* R104 - Always On Triggers Sequence Select 1 */ - { 0x00000069, 0x01FF }, /* R105 - Always On Triggers Sequence Select 2 */ - { 0x0000006A, 0x01FF }, /* R106 - Always On Triggers Sequence Select 3 */ - { 0x0000006B, 0x01FF }, /* R107 - Always On Triggers Sequence Select 4 */ + { 0x00000066, 0x01FF }, /* 
R102 - Always On Triggers Sequence Select 1 */ + { 0x00000067, 0x01FF }, /* R103 - Always On Triggers Sequence Select 2 */ + { 0x00000068, 0x01FF }, /* R104 - Always On Triggers Sequence Select 3 */ + { 0x00000069, 0x01FF }, /* R105 - Always On Triggers Sequence Select 4 */ + { 0x0000006A, 0x01FF }, /* R106 - Always On Triggers Sequence Select 5 */ + { 0x0000006B, 0x01FF }, /* R107 - Always On Triggers Sequence Select 6 */ { 0x00000070, 0x0000 }, /* R112 - Comfort Noise Generator */ { 0x00000090, 0x0000 }, /* R144 - Haptics Control 1 */ { 0x00000091, 0x7FFF }, /* R145 - Haptics Control 2 */ @@ -549,6 +551,7 @@ static const struct reg_default wm5110_reg_default[] = { { 0x000002A8, 0x1422 }, /* R680 - Mic Detect Level 3 */ { 0x000002A9, 0x300A }, /* R681 - Mic Detect Level 4 */ { 0x000002C3, 0x0000 }, /* R707 - Mic noise mix control 1 */ + { 0x000002CB, 0x0000 }, /* R715 - Isolation control */ { 0x000002D3, 0x0000 }, /* R723 - Jack detect analogue */ { 0x00000300, 0x0000 }, /* R768 - Input Enables */ { 0x00000308, 0x0000 }, /* R776 - Input Rate */ @@ -1498,6 +1501,8 @@ static bool wm5110_readable_register(struct device *dev, unsigned int reg) case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_2: case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_3: case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_4: + case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_5: + case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_6: case ARIZONA_COMFORT_NOISE_GENERATOR: case ARIZONA_HAPTICS_CONTROL_1: case ARIZONA_HAPTICS_CONTROL_2: @@ -1580,6 +1585,7 @@ static bool wm5110_readable_register(struct device *dev, unsigned int reg) case ARIZONA_MIC_DETECT_LEVEL_3: case ARIZONA_MIC_DETECT_LEVEL_4: case ARIZONA_MIC_NOISE_MIX_CONTROL_1: + case ARIZONA_ISOLATION_CONTROL: case ARIZONA_JACK_DETECT_ANALOGUE: case ARIZONA_INPUT_ENABLES: case ARIZONA_INPUT_ENABLES_STATUS: diff --git a/drivers/mfd/wm8400-core.c b/drivers/mfd/wm8400-core.c index e5eae751aa1b..c6fb5d16ca09 100644 --- a/drivers/mfd/wm8400-core.c +++ b/drivers/mfd/wm8400-core.c @@ -64,7 +64,7 @@ EXPORT_SYMBOL_GPL(wm8400_block_read); static int wm8400_register_codec(struct wm8400 *wm8400) { - struct mfd_cell cell = { + const struct mfd_cell cell = { .name = "wm8400-codec", .platform_data = wm8400, .pdata_size = sizeof(*wm8400), diff --git a/drivers/mfd/wm8997-tables.c b/drivers/mfd/wm8997-tables.c index 5aa807687777..c7a81da64ee1 100644 --- a/drivers/mfd/wm8997-tables.c +++ b/drivers/mfd/wm8997-tables.c @@ -174,10 +174,10 @@ static const struct reg_default wm8997_reg_default[] = { { 0x00000062, 0x01FF }, /* R98 - Sample Rate Sequence Select 2 */ { 0x00000063, 0x01FF }, /* R99 - Sample Rate Sequence Select 3 */ { 0x00000064, 0x01FF }, /* R100 - Sample Rate Sequence Select 4 */ - { 0x00000068, 0x01FF }, /* R104 - Always On Triggers Sequence Select 1 */ - { 0x00000069, 0x01FF }, /* R105 - Always On Triggers Sequence Select 2 */ - { 0x0000006A, 0x01FF }, /* R106 - Always On Triggers Sequence Select 3 */ - { 0x0000006B, 0x01FF }, /* R107 - Always On Triggers Sequence Select 4 */ + { 0x00000068, 0x01FF }, /* R104 - Always On Triggers Sequence Select 3 */ + { 0x00000069, 0x01FF }, /* R105 - Always On Triggers Sequence Select 4 */ + { 0x0000006A, 0x01FF }, /* R106 - Always On Triggers Sequence Select 5 */ + { 0x0000006B, 0x01FF }, /* R107 - Always On Triggers Sequence Select 6 */ { 0x00000070, 0x0000 }, /* R112 - Comfort Noise Generator */ { 0x00000090, 0x0000 }, /* R144 - Haptics Control 1 */ { 0x00000091, 0x7FFF }, /* R145 - Haptics Control 2 */ @@ -814,10 +814,10 @@ static bool 
wm8997_readable_register(struct device *dev, unsigned int reg) case ARIZONA_SAMPLE_RATE_SEQUENCE_SELECT_2: case ARIZONA_SAMPLE_RATE_SEQUENCE_SELECT_3: case ARIZONA_SAMPLE_RATE_SEQUENCE_SELECT_4: - case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_1: - case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_2: case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_3: case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_4: + case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_5: + case ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_6: case ARIZONA_COMFORT_NOISE_GENERATOR: case ARIZONA_HAPTICS_CONTROL_1: case ARIZONA_HAPTICS_CONTROL_2: @@ -846,6 +846,7 @@ static bool wm8997_readable_register(struct device *dev, unsigned int reg) case ARIZONA_RATE_ESTIMATOR_3: case ARIZONA_RATE_ESTIMATOR_4: case ARIZONA_RATE_ESTIMATOR_5: + case ARIZONA_DYNAMIC_FREQUENCY_SCALING_1: case ARIZONA_FLL1_CONTROL_1: case ARIZONA_FLL1_CONTROL_2: case ARIZONA_FLL1_CONTROL_3: @@ -880,6 +881,7 @@ static bool wm8997_readable_register(struct device *dev, unsigned int reg) case ARIZONA_FLL2_GPIO_CLOCK: case ARIZONA_MIC_CHARGE_PUMP_1: case ARIZONA_LDO1_CONTROL_1: + case ARIZONA_LDO1_CONTROL_2: case ARIZONA_LDO2_CONTROL_1: case ARIZONA_MIC_BIAS_CTRL_1: case ARIZONA_MIC_BIAS_CTRL_2: diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig index 8aaf8c1f3f63..779368b683d0 100644 --- a/drivers/mmc/host/Kconfig +++ b/drivers/mmc/host/Kconfig @@ -694,3 +694,17 @@ config MMC_REALTEK_PCI help Say Y here to include driver code to support SD/MMC card interface of Realtek PCI-E card reader + +config MMC_REALTEK_USB + tristate "Realtek USB SD/MMC Card Interface Driver" + depends on MFD_RTSX_USB + help + Say Y here to include driver code to support SD/MMC card interface + of Realtek RTS5129/39 series card reader + +config MMC_SUNXI + tristate "Allwinner sunxi SD/MMC Host Controller support" + depends on ARCH_SUNXI + help + This selects support for the SD/MMC Host Controller on + Allwinner sunxi SoCs. diff --git a/drivers/mmc/host/Makefile b/drivers/mmc/host/Makefile index 0c8aa5e1e304..61cbc241935b 100644 --- a/drivers/mmc/host/Makefile +++ b/drivers/mmc/host/Makefile @@ -50,8 +50,10 @@ obj-$(CONFIG_MMC_JZ4740) += jz4740_mmc.o obj-$(CONFIG_MMC_VUB300) += vub300.o obj-$(CONFIG_MMC_USHC) += ushc.o obj-$(CONFIG_MMC_WMT) += wmt-sdmmc.o +obj-$(CONFIG_MMC_SUNXI) += sunxi-mmc.o obj-$(CONFIG_MMC_REALTEK_PCI) += rtsx_pci_sdmmc.o +obj-$(CONFIG_MMC_REALTEK_USB) += rtsx_usb_sdmmc.o obj-$(CONFIG_MMC_SDHCI_PLTFM) += sdhci-pltfm.o obj-$(CONFIG_MMC_SDHCI_CNS3XXX) += sdhci-cns3xxx.o diff --git a/drivers/mmc/host/rtsx_usb_sdmmc.c b/drivers/mmc/host/rtsx_usb_sdmmc.c new file mode 100644 index 000000000000..e11fafa6fc6b --- /dev/null +++ b/drivers/mmc/host/rtsx_usb_sdmmc.c @@ -0,0 +1,1455 @@ +/* Realtek USB SD/MMC Card Interface driver + * + * Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see <http://www.gnu.org/licenses/>. 
+ * + * Author: + * Roger Tseng <rogerable@realtek.com> + */ + +#include <linux/module.h> +#include <linux/slab.h> +#include <linux/delay.h> +#include <linux/platform_device.h> +#include <linux/usb.h> +#include <linux/mmc/host.h> +#include <linux/mmc/mmc.h> +#include <linux/mmc/sd.h> +#include <linux/mmc/sdio.h> +#include <linux/mmc/card.h> +#include <linux/scatterlist.h> +#include <linux/pm_runtime.h> + +#include <linux/mfd/rtsx_usb.h> +#include <asm/unaligned.h> + +#if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE) +#include <linux/leds.h> +#include <linux/workqueue.h> +#define RTSX_USB_USE_LEDS_CLASS +#endif + +struct rtsx_usb_sdmmc { + struct platform_device *pdev; + struct rtsx_ucr *ucr; + struct mmc_host *mmc; + struct mmc_request *mrq; + + struct mutex host_mutex; + + u8 ssc_depth; + unsigned int clock; + bool vpclk; + bool double_clk; + bool host_removal; + bool card_exist; + bool initial_mode; + bool ddr_mode; + + unsigned char power_mode; + +#if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE) + struct led_classdev led; + char led_name[32]; + struct work_struct led_work; +#endif +}; + +static inline struct device *sdmmc_dev(struct rtsx_usb_sdmmc *host) +{ + return &(host->pdev->dev); +} + +static inline void sd_clear_error(struct rtsx_usb_sdmmc *host) +{ + struct rtsx_ucr *ucr = host->ucr; + rtsx_usb_ep0_write_register(ucr, CARD_STOP, + SD_STOP | SD_CLR_ERR, + SD_STOP | SD_CLR_ERR); + + rtsx_usb_clear_dma_err(ucr); + rtsx_usb_clear_fsm_err(ucr); +} + +#ifdef DEBUG +static void sd_print_debug_regs(struct rtsx_usb_sdmmc *host) +{ + struct rtsx_ucr *ucr = host->ucr; + u8 val = 0; + + rtsx_usb_ep0_read_register(ucr, SD_STAT1, &val); + dev_dbg(sdmmc_dev(host), "SD_STAT1: 0x%x\n", val); + rtsx_usb_ep0_read_register(ucr, SD_STAT2, &val); + dev_dbg(sdmmc_dev(host), "SD_STAT2: 0x%x\n", val); + rtsx_usb_ep0_read_register(ucr, SD_BUS_STAT, &val); + dev_dbg(sdmmc_dev(host), "SD_BUS_STAT: 0x%x\n", val); +} +#else +#define sd_print_debug_regs(host) +#endif /* DEBUG */ + +static int sd_read_data(struct rtsx_usb_sdmmc *host, struct mmc_command *cmd, + u16 byte_cnt, u8 *buf, int buf_len, int timeout) +{ + struct rtsx_ucr *ucr = host->ucr; + int err; + u8 trans_mode; + + if (!buf) + buf_len = 0; + + rtsx_usb_init_cmd(ucr); + if (cmd != NULL) { + dev_dbg(sdmmc_dev(host), "%s: SD/MMC CMD%d\n", __func__ + , cmd->opcode); + if (cmd->opcode == MMC_SEND_TUNING_BLOCK) + trans_mode = SD_TM_AUTO_TUNING; + else + trans_mode = SD_TM_NORMAL_READ; + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_CMD0, 0xFF, (u8)(cmd->opcode) | 0x40); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_CMD1, 0xFF, (u8)(cmd->arg >> 24)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_CMD2, 0xFF, (u8)(cmd->arg >> 16)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_CMD3, 0xFF, (u8)(cmd->arg >> 8)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_CMD4, 0xFF, (u8)cmd->arg); + } else { + trans_mode = SD_TM_AUTO_READ_3; + } + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BYTE_CNT_L, 0xFF, (u8)byte_cnt); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BYTE_CNT_H, + 0xFF, (u8)(byte_cnt >> 8)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BLOCK_CNT_L, 0xFF, 1); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BLOCK_CNT_H, 0xFF, 0); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CFG2, 0xFF, + SD_CALCULATE_CRC7 | SD_CHECK_CRC16 | + SD_NO_WAIT_BUSY_END | SD_CHECK_CRC7 | SD_RSP_LEN_6); + if (trans_mode != SD_TM_AUTO_TUNING) + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + CARD_DATA_SOURCE, 0x01, PINGPONG_BUFFER); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, 
SD_TRANSFER, + 0xFF, trans_mode | SD_TRANSFER_START); + rtsx_usb_add_cmd(ucr, CHECK_REG_CMD, SD_TRANSFER, + SD_TRANSFER_END, SD_TRANSFER_END); + + if (cmd != NULL) { + rtsx_usb_add_cmd(ucr, READ_REG_CMD, SD_CMD1, 0, 0); + rtsx_usb_add_cmd(ucr, READ_REG_CMD, SD_CMD2, 0, 0); + rtsx_usb_add_cmd(ucr, READ_REG_CMD, SD_CMD3, 0, 0); + rtsx_usb_add_cmd(ucr, READ_REG_CMD, SD_CMD4, 0, 0); + } + + err = rtsx_usb_send_cmd(ucr, MODE_CR, timeout); + if (err) { + dev_dbg(sdmmc_dev(host), + "rtsx_usb_send_cmd failed (err = %d)\n", err); + return err; + } + + err = rtsx_usb_get_rsp(ucr, !cmd ? 1 : 5, timeout); + if (err || (ucr->rsp_buf[0] & SD_TRANSFER_ERR)) { + sd_print_debug_regs(host); + + if (!err) { + dev_dbg(sdmmc_dev(host), + "Transfer failed (SD_TRANSFER = %02x)\n", + ucr->rsp_buf[0]); + err = -EIO; + } else { + dev_dbg(sdmmc_dev(host), + "rtsx_usb_get_rsp failed (err = %d)\n", err); + } + + return err; + } + + if (cmd != NULL) { + cmd->resp[0] = get_unaligned_be32(ucr->rsp_buf + 1); + dev_dbg(sdmmc_dev(host), "cmd->resp[0] = 0x%08x\n", + cmd->resp[0]); + } + + if (buf && buf_len) { + /* 2-byte aligned part */ + err = rtsx_usb_read_ppbuf(ucr, buf, byte_cnt - (byte_cnt % 2)); + if (err) { + dev_dbg(sdmmc_dev(host), + "rtsx_usb_read_ppbuf failed (err = %d)\n", err); + return err; + } + + /* unaligned byte */ + if (byte_cnt % 2) + return rtsx_usb_read_register(ucr, + PPBUF_BASE2 + byte_cnt, + buf + byte_cnt - 1); + } + + return 0; +} + +static int sd_write_data(struct rtsx_usb_sdmmc *host, struct mmc_command *cmd, + u16 byte_cnt, u8 *buf, int buf_len, int timeout) +{ + struct rtsx_ucr *ucr = host->ucr; + int err; + u8 trans_mode; + + if (!buf) + buf_len = 0; + + if (buf && buf_len) { + err = rtsx_usb_write_ppbuf(ucr, buf, buf_len); + if (err) { + dev_dbg(sdmmc_dev(host), + "rtsx_usb_write_ppbuf failed (err = %d)\n", + err); + return err; + } + } + + trans_mode = (cmd != NULL) ? 
SD_TM_AUTO_WRITE_2 : SD_TM_AUTO_WRITE_3; + rtsx_usb_init_cmd(ucr); + + if (cmd != NULL) { + dev_dbg(sdmmc_dev(host), "%s: SD/MMC CMD%d\n", __func__, + cmd->opcode); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_CMD0, 0xFF, (u8)(cmd->opcode) | 0x40); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_CMD1, 0xFF, (u8)(cmd->arg >> 24)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_CMD2, 0xFF, (u8)(cmd->arg >> 16)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_CMD3, 0xFF, (u8)(cmd->arg >> 8)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_CMD4, 0xFF, (u8)cmd->arg); + } + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BYTE_CNT_L, 0xFF, (u8)byte_cnt); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BYTE_CNT_H, + 0xFF, (u8)(byte_cnt >> 8)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BLOCK_CNT_L, 0xFF, 1); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BLOCK_CNT_H, 0xFF, 0); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CFG2, 0xFF, + SD_CALCULATE_CRC7 | SD_CHECK_CRC16 | + SD_NO_WAIT_BUSY_END | SD_CHECK_CRC7 | SD_RSP_LEN_6); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + CARD_DATA_SOURCE, 0x01, PINGPONG_BUFFER); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_TRANSFER, 0xFF, + trans_mode | SD_TRANSFER_START); + rtsx_usb_add_cmd(ucr, CHECK_REG_CMD, SD_TRANSFER, + SD_TRANSFER_END, SD_TRANSFER_END); + + if (cmd != NULL) { + rtsx_usb_add_cmd(ucr, READ_REG_CMD, SD_CMD1, 0, 0); + rtsx_usb_add_cmd(ucr, READ_REG_CMD, SD_CMD2, 0, 0); + rtsx_usb_add_cmd(ucr, READ_REG_CMD, SD_CMD3, 0, 0); + rtsx_usb_add_cmd(ucr, READ_REG_CMD, SD_CMD4, 0, 0); + } + + err = rtsx_usb_send_cmd(ucr, MODE_CR, timeout); + if (err) { + dev_dbg(sdmmc_dev(host), + "rtsx_usb_send_cmd failed (err = %d)\n", err); + return err; + } + + err = rtsx_usb_get_rsp(ucr, !cmd ? 1 : 5, timeout); + if (err) { + sd_print_debug_regs(host); + dev_dbg(sdmmc_dev(host), + "rtsx_usb_get_rsp failed (err = %d)\n", err); + return err; + } + + if (cmd != NULL) { + cmd->resp[0] = get_unaligned_be32(ucr->rsp_buf + 1); + dev_dbg(sdmmc_dev(host), "cmd->resp[0] = 0x%08x\n", + cmd->resp[0]); + } + + return 0; +} + +static void sd_send_cmd_get_rsp(struct rtsx_usb_sdmmc *host, + struct mmc_command *cmd) +{ + struct rtsx_ucr *ucr = host->ucr; + u8 cmd_idx = (u8)cmd->opcode; + u32 arg = cmd->arg; + int err = 0; + int timeout = 100; + int i; + u8 *ptr; + int stat_idx = 0; + int len = 2; + u8 rsp_type; + + dev_dbg(sdmmc_dev(host), "%s: SD/MMC CMD %d, arg = 0x%08x\n", + __func__, cmd_idx, arg); + + /* Response type: + * R0 + * R1, R5, R6, R7 + * R1b + * R2 + * R3, R4 + */ + switch (mmc_resp_type(cmd)) { + case MMC_RSP_NONE: + rsp_type = SD_RSP_TYPE_R0; + break; + case MMC_RSP_R1: + rsp_type = SD_RSP_TYPE_R1; + break; + case MMC_RSP_R1 & ~MMC_RSP_CRC: + rsp_type = SD_RSP_TYPE_R1 | SD_NO_CHECK_CRC7; + break; + case MMC_RSP_R1B: + rsp_type = SD_RSP_TYPE_R1b; + break; + case MMC_RSP_R2: + rsp_type = SD_RSP_TYPE_R2; + break; + case MMC_RSP_R3: + rsp_type = SD_RSP_TYPE_R3; + break; + default: + dev_dbg(sdmmc_dev(host), "cmd->flag is not valid\n"); + err = -EINVAL; + goto out; + } + + if (rsp_type == SD_RSP_TYPE_R1b) + timeout = 3000; + + if (cmd->opcode == SD_SWITCH_VOLTAGE) { + err = rtsx_usb_write_register(ucr, SD_BUS_STAT, + SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, + SD_CLK_TOGGLE_EN); + if (err) + goto out; + } + + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CMD0, 0xFF, 0x40 | cmd_idx); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CMD1, 0xFF, (u8)(arg >> 24)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CMD2, 0xFF, (u8)(arg >> 16)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CMD3, 0xFF, (u8)(arg >> 
8)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CMD4, 0xFF, (u8)arg); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CFG2, 0xFF, rsp_type); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_DATA_SOURCE, + 0x01, PINGPONG_BUFFER); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_TRANSFER, + 0xFF, SD_TM_CMD_RSP | SD_TRANSFER_START); + rtsx_usb_add_cmd(ucr, CHECK_REG_CMD, SD_TRANSFER, + SD_TRANSFER_END | SD_STAT_IDLE, + SD_TRANSFER_END | SD_STAT_IDLE); + + if (rsp_type == SD_RSP_TYPE_R2) { + /* Read data from ping-pong buffer */ + for (i = PPBUF_BASE2; i < PPBUF_BASE2 + 16; i++) + rtsx_usb_add_cmd(ucr, READ_REG_CMD, (u16)i, 0, 0); + stat_idx = 16; + } else if (rsp_type != SD_RSP_TYPE_R0) { + /* Read data from SD_CMDx registers */ + for (i = SD_CMD0; i <= SD_CMD4; i++) + rtsx_usb_add_cmd(ucr, READ_REG_CMD, (u16)i, 0, 0); + stat_idx = 5; + } + len += stat_idx; + + rtsx_usb_add_cmd(ucr, READ_REG_CMD, SD_STAT1, 0, 0); + + err = rtsx_usb_send_cmd(ucr, MODE_CR, 100); + if (err) { + dev_dbg(sdmmc_dev(host), + "rtsx_usb_send_cmd error (err = %d)\n", err); + goto out; + } + + err = rtsx_usb_get_rsp(ucr, len, timeout); + if (err || (ucr->rsp_buf[0] & SD_TRANSFER_ERR)) { + sd_print_debug_regs(host); + sd_clear_error(host); + + if (!err) { + dev_dbg(sdmmc_dev(host), + "Transfer failed (SD_TRANSFER = %02x)\n", + ucr->rsp_buf[0]); + err = -EIO; + } else { + dev_dbg(sdmmc_dev(host), + "rtsx_usb_get_rsp failed (err = %d)\n", err); + } + + goto out; + } + + if (rsp_type == SD_RSP_TYPE_R0) { + err = 0; + goto out; + } + + /* Skip result of CHECK_REG_CMD */ + ptr = ucr->rsp_buf + 1; + + /* Check (Start,Transmission) bit of Response */ + if ((ptr[0] & 0xC0) != 0) { + err = -EILSEQ; + dev_dbg(sdmmc_dev(host), "Invalid response bit\n"); + goto out; + } + + /* Check CRC7 */ + if (!(rsp_type & SD_NO_CHECK_CRC7)) { + if (ptr[stat_idx] & SD_CRC7_ERR) { + err = -EILSEQ; + dev_dbg(sdmmc_dev(host), "CRC7 error\n"); + goto out; + } + } + + if (rsp_type == SD_RSP_TYPE_R2) { + for (i = 0; i < 4; i++) { + cmd->resp[i] = get_unaligned_be32(ptr + 1 + i * 4); + dev_dbg(sdmmc_dev(host), "cmd->resp[%d] = 0x%08x\n", + i, cmd->resp[i]); + } + } else { + cmd->resp[0] = get_unaligned_be32(ptr + 1); + dev_dbg(sdmmc_dev(host), "cmd->resp[0] = 0x%08x\n", + cmd->resp[0]); + } + +out: + cmd->error = err; +} + +static int sd_rw_multi(struct rtsx_usb_sdmmc *host, struct mmc_request *mrq) +{ + struct rtsx_ucr *ucr = host->ucr; + struct mmc_data *data = mrq->data; + int read = (data->flags & MMC_DATA_READ) ? 
1 : 0; + u8 cfg2, trans_mode; + int err; + u8 flag; + size_t data_len = data->blksz * data->blocks; + unsigned int pipe; + + if (read) { + dev_dbg(sdmmc_dev(host), "%s: read %zu bytes\n", + __func__, data_len); + cfg2 = SD_CALCULATE_CRC7 | SD_CHECK_CRC16 | + SD_NO_WAIT_BUSY_END | SD_CHECK_CRC7 | SD_RSP_LEN_0; + trans_mode = SD_TM_AUTO_READ_3; + } else { + dev_dbg(sdmmc_dev(host), "%s: write %zu bytes\n", + __func__, data_len); + cfg2 = SD_NO_CALCULATE_CRC7 | SD_CHECK_CRC16 | + SD_NO_WAIT_BUSY_END | SD_NO_CHECK_CRC7 | SD_RSP_LEN_0; + trans_mode = SD_TM_AUTO_WRITE_3; + } + + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BYTE_CNT_L, 0xFF, 0x00); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BYTE_CNT_H, 0xFF, 0x02); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BLOCK_CNT_L, + 0xFF, (u8)data->blocks); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BLOCK_CNT_H, + 0xFF, (u8)(data->blocks >> 8)); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_DATA_SOURCE, + 0x01, RING_BUFFER); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MC_DMA_TC3, + 0xFF, (u8)(data_len >> 24)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MC_DMA_TC2, + 0xFF, (u8)(data_len >> 16)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MC_DMA_TC1, + 0xFF, (u8)(data_len >> 8)); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MC_DMA_TC0, + 0xFF, (u8)data_len); + if (read) { + flag = MODE_CDIR; + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MC_DMA_CTL, + 0x03 | DMA_PACK_SIZE_MASK, + DMA_DIR_FROM_CARD | DMA_EN | DMA_512); + } else { + flag = MODE_CDOR; + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, MC_DMA_CTL, + 0x03 | DMA_PACK_SIZE_MASK, + DMA_DIR_TO_CARD | DMA_EN | DMA_512); + } + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CFG2, 0xFF, cfg2); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_TRANSFER, 0xFF, + trans_mode | SD_TRANSFER_START); + rtsx_usb_add_cmd(ucr, CHECK_REG_CMD, SD_TRANSFER, + SD_TRANSFER_END, SD_TRANSFER_END); + + err = rtsx_usb_send_cmd(ucr, flag, 100); + if (err) + return err; + + if (read) + pipe = usb_rcvbulkpipe(ucr->pusb_dev, EP_BULK_IN); + else + pipe = usb_sndbulkpipe(ucr->pusb_dev, EP_BULK_OUT); + + err = rtsx_usb_transfer_data(ucr, pipe, data->sg, data_len, + data->sg_len, NULL, 10000); + if (err) { + dev_dbg(sdmmc_dev(host), "rtsx_usb_transfer_data error %d\n" + , err); + sd_clear_error(host); + return err; + } + + return rtsx_usb_get_rsp(ucr, 1, 2000); +} + +static inline void sd_enable_initial_mode(struct rtsx_usb_sdmmc *host) +{ + rtsx_usb_write_register(host->ucr, SD_CFG1, + SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_128); +} + +static inline void sd_disable_initial_mode(struct rtsx_usb_sdmmc *host) +{ + rtsx_usb_write_register(host->ucr, SD_CFG1, + SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_0); +} + +static void sd_normal_rw(struct rtsx_usb_sdmmc *host, + struct mmc_request *mrq) +{ + struct mmc_command *cmd = mrq->cmd; + struct mmc_data *data = mrq->data; + u8 *buf; + + buf = kzalloc(data->blksz, GFP_NOIO); + if (!buf) { + cmd->error = -ENOMEM; + return; + } + + if (data->flags & MMC_DATA_READ) { + if (host->initial_mode) + sd_disable_initial_mode(host); + + cmd->error = sd_read_data(host, cmd, (u16)data->blksz, buf, + data->blksz, 200); + + if (host->initial_mode) + sd_enable_initial_mode(host); + + sg_copy_from_buffer(data->sg, data->sg_len, buf, data->blksz); + } else { + sg_copy_to_buffer(data->sg, data->sg_len, buf, data->blksz); + + cmd->error = sd_write_data(host, cmd, (u16)data->blksz, buf, + data->blksz, 200); + } + + kfree(buf); +} + +static int sd_change_phase(struct rtsx_usb_sdmmc *host, u8 sample_point, int tx) +{ + struct rtsx_ucr *ucr = host->ucr; + 
int err; + + dev_dbg(sdmmc_dev(host), "%s: %s sample_point = %d\n", + __func__, tx ? "TX" : "RX", sample_point); + + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CLK_DIV, CLK_CHANGE, CLK_CHANGE); + + if (tx) + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_VPCLK0_CTL, + 0x0F, sample_point); + else + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_VPCLK1_CTL, + 0x0F, sample_point); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_VPCLK0_CTL, PHASE_NOT_RESET, 0); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_VPCLK0_CTL, + PHASE_NOT_RESET, PHASE_NOT_RESET); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CLK_DIV, CLK_CHANGE, 0); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CFG1, SD_ASYNC_FIFO_RST, 0); + + err = rtsx_usb_send_cmd(ucr, MODE_C, 100); + if (err) + return err; + + return 0; +} + +static inline u32 get_phase_point(u32 phase_map, unsigned int idx) +{ + idx &= MAX_PHASE; + return phase_map & (1 << idx); +} + +static int get_phase_len(u32 phase_map, unsigned int idx) +{ + int i; + + for (i = 0; i < MAX_PHASE + 1; i++) { + if (get_phase_point(phase_map, idx + i) == 0) + return i; + } + return MAX_PHASE + 1; +} + +static u8 sd_search_final_phase(struct rtsx_usb_sdmmc *host, u32 phase_map) +{ + int start = 0, len = 0; + int start_final = 0, len_final = 0; + u8 final_phase = 0xFF; + + if (phase_map == 0) { + dev_dbg(sdmmc_dev(host), "Phase: [map:%x]\n", phase_map); + return final_phase; + } + + while (start < MAX_PHASE + 1) { + len = get_phase_len(phase_map, start); + if (len_final < len) { + start_final = start; + len_final = len; + } + start += len ? len : 1; + } + + final_phase = (start_final + len_final / 2) & MAX_PHASE; + dev_dbg(sdmmc_dev(host), "Phase: [map:%x] [maxlen:%d] [final:%d]\n", + phase_map, len_final, final_phase); + + return final_phase; +} + +static void sd_wait_data_idle(struct rtsx_usb_sdmmc *host) +{ + int err, i; + u8 val = 0; + + for (i = 0; i < 100; i++) { + err = rtsx_usb_ep0_read_register(host->ucr, + SD_DATA_STATE, &val); + if (val & SD_DATA_IDLE) + return; + + usleep_range(100, 1000); + } +} + +static int sd_tuning_rx_cmd(struct rtsx_usb_sdmmc *host, + u8 opcode, u8 sample_point) +{ + int err; + struct mmc_command cmd = {0}; + + err = sd_change_phase(host, sample_point, 0); + if (err) + return err; + + cmd.opcode = MMC_SEND_TUNING_BLOCK; + err = sd_read_data(host, &cmd, 0x40, NULL, 0, 100); + if (err) { + /* Wait till SD DATA IDLE */ + sd_wait_data_idle(host); + sd_clear_error(host); + return err; + } + + return 0; +} + +static void sd_tuning_phase(struct rtsx_usb_sdmmc *host, + u8 opcode, u16 *phase_map) +{ + int err, i; + u16 raw_phase_map = 0; + + for (i = MAX_PHASE; i >= 0; i--) { + err = sd_tuning_rx_cmd(host, opcode, (u8)i); + if (!err) + raw_phase_map |= 1 << i; + } + + if (phase_map) + *phase_map = raw_phase_map; +} + +static int sd_tuning_rx(struct rtsx_usb_sdmmc *host, u8 opcode) +{ + int err, i; + u16 raw_phase_map[RX_TUNING_CNT] = {0}, phase_map; + u8 final_phase; + + /* setting fixed default TX phase */ + err = sd_change_phase(host, 0x01, 1); + if (err) { + dev_dbg(sdmmc_dev(host), "TX phase setting failed\n"); + return err; + } + + /* tuning RX phase */ + for (i = 0; i < RX_TUNING_CNT; i++) { + sd_tuning_phase(host, opcode, &(raw_phase_map[i])); + + if (raw_phase_map[i] == 0) + break; + } + + phase_map = 0xFFFF; + for (i = 0; i < RX_TUNING_CNT; i++) { + dev_dbg(sdmmc_dev(host), "RX raw_phase_map[%d] = 0x%04x\n", + i, raw_phase_map[i]); + phase_map &= raw_phase_map[i]; + } + dev_dbg(sdmmc_dev(host), "RX phase_map = 0x%04x\n", phase_map); + + if (phase_map) { + 
final_phase = sd_search_final_phase(host, phase_map); + if (final_phase == 0xFF) + return -EINVAL; + + err = sd_change_phase(host, final_phase, 0); + if (err) + return err; + } else { + return -EINVAL; + } + + return 0; +} + +static int sdmmc_get_ro(struct mmc_host *mmc) +{ + struct rtsx_usb_sdmmc *host = mmc_priv(mmc); + struct rtsx_ucr *ucr = host->ucr; + int err; + u16 val; + + if (host->host_removal) + return -ENOMEDIUM; + + mutex_lock(&ucr->dev_mutex); + + /* Check SD card detect */ + err = rtsx_usb_get_card_status(ucr, &val); + + mutex_unlock(&ucr->dev_mutex); + + + /* Treat failed detection as non-ro */ + if (err) + return 0; + + if (val & SD_WP) + return 1; + + return 0; +} + +static int sdmmc_get_cd(struct mmc_host *mmc) +{ + struct rtsx_usb_sdmmc *host = mmc_priv(mmc); + struct rtsx_ucr *ucr = host->ucr; + int err; + u16 val; + + if (host->host_removal) + return -ENOMEDIUM; + + mutex_lock(&ucr->dev_mutex); + + /* Check SD card detect */ + err = rtsx_usb_get_card_status(ucr, &val); + + mutex_unlock(&ucr->dev_mutex); + + /* Treat failed detection as non-exist */ + if (err) + goto no_card; + + if (val & SD_CD) { + host->card_exist = true; + return 1; + } + +no_card: + host->card_exist = false; + return 0; +} + +static void sdmmc_request(struct mmc_host *mmc, struct mmc_request *mrq) +{ + struct rtsx_usb_sdmmc *host = mmc_priv(mmc); + struct rtsx_ucr *ucr = host->ucr; + struct mmc_command *cmd = mrq->cmd; + struct mmc_data *data = mrq->data; + unsigned int data_size = 0; + + dev_dbg(sdmmc_dev(host), "%s\n", __func__); + + if (host->host_removal) { + cmd->error = -ENOMEDIUM; + goto finish; + } + + if ((!host->card_exist)) { + cmd->error = -ENOMEDIUM; + goto finish_detect_card; + } + + /* + * Reject SDIO CMDs to speed up card identification + * since unsupported + */ + if (cmd->opcode == SD_IO_SEND_OP_COND || + cmd->opcode == SD_IO_RW_DIRECT || + cmd->opcode == SD_IO_RW_EXTENDED) { + cmd->error = -EINVAL; + goto finish; + } + + mutex_lock(&ucr->dev_mutex); + + mutex_lock(&host->host_mutex); + host->mrq = mrq; + mutex_unlock(&host->host_mutex); + + if (mrq->data) + data_size = data->blocks * data->blksz; + + if (!data_size) { + sd_send_cmd_get_rsp(host, cmd); + } else if ((!(data_size % 512) && cmd->opcode != MMC_SEND_EXT_CSD) || + mmc_op_multi(cmd->opcode)) { + sd_send_cmd_get_rsp(host, cmd); + + if (!cmd->error) { + sd_rw_multi(host, mrq); + + if (mmc_op_multi(cmd->opcode) && mrq->stop) { + sd_send_cmd_get_rsp(host, mrq->stop); + rtsx_usb_write_register(ucr, MC_FIFO_CTL, + FIFO_FLUSH, FIFO_FLUSH); + } + } + } else { + sd_normal_rw(host, mrq); + } + + if (mrq->data) { + if (cmd->error || data->error) + data->bytes_xfered = 0; + else + data->bytes_xfered = data->blocks * data->blksz; + } + + mutex_unlock(&ucr->dev_mutex); + +finish_detect_card: + if (cmd->error) { + /* + * detect card when fail to update card existence state and + * speed up card removal when retry + */ + sdmmc_get_cd(mmc); + dev_dbg(sdmmc_dev(host), "cmd->error = %d\n", cmd->error); + } + +finish: + mutex_lock(&host->host_mutex); + host->mrq = NULL; + mutex_unlock(&host->host_mutex); + + mmc_request_done(mmc, mrq); +} + +static int sd_set_bus_width(struct rtsx_usb_sdmmc *host, + unsigned char bus_width) +{ + int err = 0; + u8 width[] = { + [MMC_BUS_WIDTH_1] = SD_BUS_WIDTH_1BIT, + [MMC_BUS_WIDTH_4] = SD_BUS_WIDTH_4BIT, + [MMC_BUS_WIDTH_8] = SD_BUS_WIDTH_8BIT, + }; + + if (bus_width <= MMC_BUS_WIDTH_8) + err = rtsx_usb_write_register(host->ucr, SD_CFG1, + 0x03, width[bus_width]); + + return err; +} + +static int 
sd_pull_ctl_disable_lqfp48(struct rtsx_ucr *ucr) +{ + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL1, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL2, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL3, 0xFF, 0x95); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL4, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL5, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL6, 0xFF, 0xA5); + + return rtsx_usb_send_cmd(ucr, MODE_C, 100); +} + +static int sd_pull_ctl_disable_qfn24(struct rtsx_ucr *ucr) +{ + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL1, 0xFF, 0x65); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL2, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL3, 0xFF, 0x95); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL4, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL5, 0xFF, 0x56); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL6, 0xFF, 0x59); + + return rtsx_usb_send_cmd(ucr, MODE_C, 100); +} + +static int sd_pull_ctl_enable_lqfp48(struct rtsx_ucr *ucr) +{ + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL1, 0xFF, 0xAA); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL2, 0xFF, 0xAA); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL3, 0xFF, 0xA9); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL4, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL5, 0xFF, 0x55); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL6, 0xFF, 0xA5); + + return rtsx_usb_send_cmd(ucr, MODE_C, 100); +} + +static int sd_pull_ctl_enable_qfn24(struct rtsx_ucr *ucr) +{ + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL1, 0xFF, 0xA5); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL2, 0xFF, 0x9A); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL3, 0xFF, 0xA5); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL4, 0xFF, 0x9A); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL5, 0xFF, 0x65); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PULL_CTL6, 0xFF, 0x5A); + + return rtsx_usb_send_cmd(ucr, MODE_C, 100); +} + +static int sd_power_on(struct rtsx_usb_sdmmc *host) +{ + struct rtsx_ucr *ucr = host->ucr; + int err; + + dev_dbg(sdmmc_dev(host), "%s\n", __func__); + rtsx_usb_init_cmd(ucr); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_SELECT, 0x07, SD_MOD_SEL); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_SHARE_MODE, + CARD_SHARE_MASK, CARD_SHARE_SD); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_CLK_EN, + SD_CLK_EN, SD_CLK_EN); + err = rtsx_usb_send_cmd(ucr, MODE_C, 100); + if (err) + return err; + + if (CHECK_PKG(ucr, LQFP48)) + err = sd_pull_ctl_enable_lqfp48(ucr); + else + err = sd_pull_ctl_enable_qfn24(ucr); + if (err) + return err; + + err = rtsx_usb_write_register(ucr, CARD_PWR_CTL, + POWER_MASK, PARTIAL_POWER_ON); + if (err) + return err; + + usleep_range(800, 1000); + + rtsx_usb_init_cmd(ucr); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PWR_CTL, + POWER_MASK|LDO3318_PWR_MASK, POWER_ON|LDO_ON); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_OE, + SD_OUTPUT_EN, SD_OUTPUT_EN); + + return rtsx_usb_send_cmd(ucr, MODE_C, 100); +} + +static int sd_power_off(struct rtsx_usb_sdmmc *host) +{ + struct rtsx_ucr *ucr = host->ucr; + int err; + + dev_dbg(sdmmc_dev(host), "%s\n", __func__); + rtsx_usb_init_cmd(ucr); + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_CLK_EN, SD_CLK_EN, 0); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_OE, SD_OUTPUT_EN, 0); + 
rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PWR_CTL, + POWER_MASK, POWER_OFF); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_PWR_CTL, + POWER_MASK|LDO3318_PWR_MASK, POWER_OFF|LDO_SUSPEND); + + err = rtsx_usb_send_cmd(ucr, MODE_C, 100); + if (err) + return err; + + if (CHECK_PKG(ucr, LQFP48)) + return sd_pull_ctl_disable_lqfp48(ucr); + return sd_pull_ctl_disable_qfn24(ucr); +} + +static int sd_set_power_mode(struct rtsx_usb_sdmmc *host, + unsigned char power_mode) +{ + int err; + + if (power_mode != MMC_POWER_OFF) + power_mode = MMC_POWER_ON; + + if (power_mode == host->power_mode) + return 0; + + if (power_mode == MMC_POWER_OFF) { + err = sd_power_off(host); + pm_runtime_put(sdmmc_dev(host)); + } else { + pm_runtime_get_sync(sdmmc_dev(host)); + err = sd_power_on(host); + } + + if (!err) + host->power_mode = power_mode; + + return err; +} + +static int sd_set_timing(struct rtsx_usb_sdmmc *host, + unsigned char timing, bool *ddr_mode) +{ + struct rtsx_ucr *ucr = host->ucr; + int err; + + *ddr_mode = false; + + rtsx_usb_init_cmd(ucr); + + switch (timing) { + case MMC_TIMING_UHS_SDR104: + case MMC_TIMING_UHS_SDR50: + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CFG1, + 0x0C | SD_ASYNC_FIFO_RST, + SD_30_MODE | SD_ASYNC_FIFO_RST); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_CLK_SOURCE, 0xFF, + CRC_VAR_CLK0 | SD30_FIX_CLK | SAMPLE_VAR_CLK1); + break; + + case MMC_TIMING_UHS_DDR50: + *ddr_mode = true; + + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CFG1, + 0x0C | SD_ASYNC_FIFO_RST, + SD_DDR_MODE | SD_ASYNC_FIFO_RST); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_CLK_SOURCE, 0xFF, + CRC_VAR_CLK0 | SD30_FIX_CLK | SAMPLE_VAR_CLK1); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_PUSH_POINT_CTL, + DDR_VAR_TX_CMD_DAT, DDR_VAR_TX_CMD_DAT); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_SAMPLE_POINT_CTL, + DDR_VAR_RX_DAT | DDR_VAR_RX_CMD, + DDR_VAR_RX_DAT | DDR_VAR_RX_CMD); + break; + + case MMC_TIMING_MMC_HS: + case MMC_TIMING_SD_HS: + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_CFG1, + 0x0C, SD_20_MODE); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_CLK_SOURCE, 0xFF, + CRC_FIX_CLK | SD30_VAR_CLK0 | SAMPLE_VAR_CLK1); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_PUSH_POINT_CTL, + SD20_TX_SEL_MASK, SD20_TX_14_AHEAD); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_SAMPLE_POINT_CTL, + SD20_RX_SEL_MASK, SD20_RX_14_DELAY); + break; + + default: + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_CFG1, 0x0C, SD_20_MODE); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, CARD_CLK_SOURCE, 0xFF, + CRC_FIX_CLK | SD30_VAR_CLK0 | SAMPLE_VAR_CLK1); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, + SD_PUSH_POINT_CTL, 0xFF, 0); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_SAMPLE_POINT_CTL, + SD20_RX_SEL_MASK, SD20_RX_POS_EDGE); + break; + } + + err = rtsx_usb_send_cmd(ucr, MODE_C, 100); + + return err; +} + +static void sdmmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) +{ + struct rtsx_usb_sdmmc *host = mmc_priv(mmc); + struct rtsx_ucr *ucr = host->ucr; + + dev_dbg(sdmmc_dev(host), "%s\n", __func__); + mutex_lock(&ucr->dev_mutex); + + if (rtsx_usb_card_exclusive_check(ucr, RTSX_USB_SD_CARD)) { + mutex_unlock(&ucr->dev_mutex); + return; + } + + sd_set_power_mode(host, ios->power_mode); + sd_set_bus_width(host, ios->bus_width); + sd_set_timing(host, ios->timing, &host->ddr_mode); + + host->vpclk = false; + host->double_clk = true; + + switch (ios->timing) { + case MMC_TIMING_UHS_SDR104: + case MMC_TIMING_UHS_SDR50: + host->ssc_depth = SSC_DEPTH_2M; + host->vpclk = true; + host->double_clk = false; + break; + case MMC_TIMING_UHS_DDR50: + case MMC_TIMING_UHS_SDR25: + host->ssc_depth = 
SSC_DEPTH_1M; + break; + default: + host->ssc_depth = SSC_DEPTH_512K; + break; + } + + host->initial_mode = (ios->clock <= 1000000) ? true : false; + host->clock = ios->clock; + + rtsx_usb_switch_clock(host->ucr, host->clock, host->ssc_depth, + host->initial_mode, host->double_clk, host->vpclk); + + mutex_unlock(&ucr->dev_mutex); + dev_dbg(sdmmc_dev(host), "%s end\n", __func__); +} + +static int sdmmc_switch_voltage(struct mmc_host *mmc, struct mmc_ios *ios) +{ + struct rtsx_usb_sdmmc *host = mmc_priv(mmc); + struct rtsx_ucr *ucr = host->ucr; + int err = 0; + + dev_dbg(sdmmc_dev(host), "%s: signal_voltage = %d\n", + __func__, ios->signal_voltage); + + if (host->host_removal) + return -ENOMEDIUM; + + if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_120) + return -EPERM; + + mutex_lock(&ucr->dev_mutex); + + err = rtsx_usb_card_exclusive_check(ucr, RTSX_USB_SD_CARD); + if (err) { + mutex_unlock(&ucr->dev_mutex); + return err; + } + + /* Let mmc core do the busy checking, simply stop the forced-toggle + * clock(while issuing CMD11) and switch voltage. + */ + rtsx_usb_init_cmd(ucr); + + if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_330) { + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_PAD_CTL, + SD_IO_USING_1V8, SD_IO_USING_3V3); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, LDO_POWER_CFG, + TUNE_SD18_MASK, TUNE_SD18_3V3); + } else { + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_BUS_STAT, + SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, + SD_CLK_FORCE_STOP); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, SD_PAD_CTL, + SD_IO_USING_1V8, SD_IO_USING_1V8); + rtsx_usb_add_cmd(ucr, WRITE_REG_CMD, LDO_POWER_CFG, + TUNE_SD18_MASK, TUNE_SD18_1V8); + } + + err = rtsx_usb_send_cmd(ucr, MODE_C, 100); + mutex_unlock(&ucr->dev_mutex); + + return err; +} + +static int sdmmc_card_busy(struct mmc_host *mmc) +{ + struct rtsx_usb_sdmmc *host = mmc_priv(mmc); + struct rtsx_ucr *ucr = host->ucr; + int err; + u8 stat; + u8 mask = SD_DAT3_STATUS | SD_DAT2_STATUS | SD_DAT1_STATUS + | SD_DAT0_STATUS; + + dev_dbg(sdmmc_dev(host), "%s\n", __func__); + + mutex_lock(&ucr->dev_mutex); + + err = rtsx_usb_write_register(ucr, SD_BUS_STAT, + SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, + SD_CLK_TOGGLE_EN); + if (err) + goto out; + + mdelay(1); + + err = rtsx_usb_read_register(ucr, SD_BUS_STAT, &stat); + if (err) + goto out; + + err = rtsx_usb_write_register(ucr, SD_BUS_STAT, + SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, 0); +out: + mutex_unlock(&ucr->dev_mutex); + + if (err) + return err; + + /* check if any pin between dat[0:3] is low */ + if ((stat & mask) != mask) + return 1; + else + return 0; +} + +static int sdmmc_execute_tuning(struct mmc_host *mmc, u32 opcode) +{ + struct rtsx_usb_sdmmc *host = mmc_priv(mmc); + struct rtsx_ucr *ucr = host->ucr; + int err = 0; + + if (host->host_removal) + return -ENOMEDIUM; + + mutex_lock(&ucr->dev_mutex); + + if (!host->ddr_mode) + err = sd_tuning_rx(host, MMC_SEND_TUNING_BLOCK); + + mutex_unlock(&ucr->dev_mutex); + + return err; +} + +static const struct mmc_host_ops rtsx_usb_sdmmc_ops = { + .request = sdmmc_request, + .set_ios = sdmmc_set_ios, + .get_ro = sdmmc_get_ro, + .get_cd = sdmmc_get_cd, + .start_signal_voltage_switch = sdmmc_switch_voltage, + .card_busy = sdmmc_card_busy, + .execute_tuning = sdmmc_execute_tuning, +}; + +#ifdef RTSX_USB_USE_LEDS_CLASS +static void rtsx_usb_led_control(struct led_classdev *led, + enum led_brightness brightness) +{ + struct rtsx_usb_sdmmc *host = container_of(led, + struct rtsx_usb_sdmmc, led); + + if (host->host_removal) + return; + + host->led.brightness = brightness; + 
schedule_work(&host->led_work); +} + +static void rtsx_usb_update_led(struct work_struct *work) +{ + struct rtsx_usb_sdmmc *host = + container_of(work, struct rtsx_usb_sdmmc, led_work); + struct rtsx_ucr *ucr = host->ucr; + + mutex_lock(&ucr->dev_mutex); + + if (host->led.brightness == LED_OFF) + rtsx_usb_turn_off_led(ucr); + else + rtsx_usb_turn_on_led(ucr); + + mutex_unlock(&ucr->dev_mutex); +} +#endif + +static void rtsx_usb_init_host(struct rtsx_usb_sdmmc *host) +{ + struct mmc_host *mmc = host->mmc; + + mmc->f_min = 250000; + mmc->f_max = 208000000; + mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34 | MMC_VDD_165_195; + mmc->caps = MMC_CAP_4_BIT_DATA | MMC_CAP_SD_HIGHSPEED | + MMC_CAP_MMC_HIGHSPEED | MMC_CAP_BUS_WIDTH_TEST | + MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 | MMC_CAP_UHS_SDR50 | + MMC_CAP_NEEDS_POLL; + + mmc->max_current_330 = 400; + mmc->max_current_180 = 800; + mmc->ops = &rtsx_usb_sdmmc_ops; + mmc->max_segs = 256; + mmc->max_seg_size = 65536; + mmc->max_blk_size = 512; + mmc->max_blk_count = 65535; + mmc->max_req_size = 524288; + + host->power_mode = MMC_POWER_OFF; +} + +static int rtsx_usb_sdmmc_drv_probe(struct platform_device *pdev) +{ + struct mmc_host *mmc; + struct rtsx_usb_sdmmc *host; + struct rtsx_ucr *ucr; +#ifdef RTSX_USB_USE_LEDS_CLASS + int err; +#endif + + ucr = usb_get_intfdata(to_usb_interface(pdev->dev.parent)); + if (!ucr) + return -ENXIO; + + dev_dbg(&(pdev->dev), ": Realtek USB SD/MMC controller found\n"); + + mmc = mmc_alloc_host(sizeof(*host), &pdev->dev); + if (!mmc) + return -ENOMEM; + + host = mmc_priv(mmc); + host->ucr = ucr; + host->mmc = mmc; + host->pdev = pdev; + platform_set_drvdata(pdev, host); + + mutex_init(&host->host_mutex); + rtsx_usb_init_host(host); + pm_runtime_enable(&pdev->dev); + +#ifdef RTSX_USB_USE_LEDS_CLASS + snprintf(host->led_name, sizeof(host->led_name), + "%s::", mmc_hostname(mmc)); + host->led.name = host->led_name; + host->led.brightness = LED_OFF; + host->led.default_trigger = mmc_hostname(mmc); + host->led.brightness_set = rtsx_usb_led_control; + + err = led_classdev_register(mmc_dev(mmc), &host->led); + if (err) + dev_err(&(pdev->dev), + "Failed to register LED device: %d\n", err); + INIT_WORK(&host->led_work, rtsx_usb_update_led); + +#endif + mmc_add_host(mmc); + + return 0; +} + +static int rtsx_usb_sdmmc_drv_remove(struct platform_device *pdev) +{ + struct rtsx_usb_sdmmc *host = platform_get_drvdata(pdev); + struct mmc_host *mmc; + + if (!host) + return 0; + + mmc = host->mmc; + host->host_removal = true; + + mutex_lock(&host->host_mutex); + if (host->mrq) { + dev_dbg(&(pdev->dev), + "%s: Controller removed during transfer\n", + mmc_hostname(mmc)); + host->mrq->cmd->error = -ENOMEDIUM; + if (host->mrq->stop) + host->mrq->stop->error = -ENOMEDIUM; + mmc_request_done(mmc, host->mrq); + } + mutex_unlock(&host->host_mutex); + + mmc_remove_host(mmc); + +#ifdef RTSX_USB_USE_LEDS_CLASS + cancel_work_sync(&host->led_work); + led_classdev_unregister(&host->led); +#endif + + mmc_free_host(mmc); + pm_runtime_disable(&pdev->dev); + platform_set_drvdata(pdev, NULL); + + dev_dbg(&(pdev->dev), + ": Realtek USB SD/MMC module has been removed\n"); + + return 0; +} + +static struct platform_device_id rtsx_usb_sdmmc_ids[] = { + { + .name = "rtsx_usb_sdmmc", + }, { + /* sentinel */ + } +}; +MODULE_DEVICE_TABLE(platform, rtsx_usb_sdmmc_ids); + +static struct platform_driver rtsx_usb_sdmmc_driver = { + .probe = rtsx_usb_sdmmc_drv_probe, + .remove = rtsx_usb_sdmmc_drv_remove, + .id_table = rtsx_usb_sdmmc_ids, + .driver = { + .owner = 
THIS_MODULE, + .name = "rtsx_usb_sdmmc", + }, +}; +module_platform_driver(rtsx_usb_sdmmc_driver); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Roger Tseng <rogerable@realtek.com>"); +MODULE_DESCRIPTION("Realtek USB SD/MMC Card Host Driver"); diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c new file mode 100644 index 000000000000..024f67c98cdc --- /dev/null +++ b/drivers/mmc/host/sunxi-mmc.c @@ -0,0 +1,1049 @@ +/* + * Driver for sunxi SD/MMC host controllers + * (C) Copyright 2007-2011 Reuuimlla Technology Co., Ltd. + * (C) Copyright 2007-2011 Aaron Maoye <leafy.myeh@reuuimllatech.com> + * (C) Copyright 2013-2014 O2S GmbH <www.o2s.ch> + * (C) Copyright 2013-2014 David Lanzendörfer <david.lanzendoerfer@o2s.ch> + * (C) Copyright 2013-2014 Hans de Goede <hdegoede@redhat.com> + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 of + * the License, or (at your option) any later version. + */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/io.h> +#include <linux/device.h> +#include <linux/interrupt.h> +#include <linux/delay.h> +#include <linux/err.h> + +#include <linux/clk.h> +#include <linux/clk-private.h> +#include <linux/clk/sunxi.h> + +#include <linux/gpio.h> +#include <linux/platform_device.h> +#include <linux/spinlock.h> +#include <linux/scatterlist.h> +#include <linux/dma-mapping.h> +#include <linux/slab.h> +#include <linux/reset.h> + +#include <linux/of_address.h> +#include <linux/of_gpio.h> +#include <linux/of_platform.h> + +#include <linux/mmc/host.h> +#include <linux/mmc/sd.h> +#include <linux/mmc/sdio.h> +#include <linux/mmc/mmc.h> +#include <linux/mmc/core.h> +#include <linux/mmc/card.h> +#include <linux/mmc/slot-gpio.h> + +/* register offset definitions */ +#define SDXC_REG_GCTRL (0x00) /* SMC Global Control Register */ +#define SDXC_REG_CLKCR (0x04) /* SMC Clock Control Register */ +#define SDXC_REG_TMOUT (0x08) /* SMC Time Out Register */ +#define SDXC_REG_WIDTH (0x0C) /* SMC Bus Width Register */ +#define SDXC_REG_BLKSZ (0x10) /* SMC Block Size Register */ +#define SDXC_REG_BCNTR (0x14) /* SMC Byte Count Register */ +#define SDXC_REG_CMDR (0x18) /* SMC Command Register */ +#define SDXC_REG_CARG (0x1C) /* SMC Argument Register */ +#define SDXC_REG_RESP0 (0x20) /* SMC Response Register 0 */ +#define SDXC_REG_RESP1 (0x24) /* SMC Response Register 1 */ +#define SDXC_REG_RESP2 (0x28) /* SMC Response Register 2 */ +#define SDXC_REG_RESP3 (0x2C) /* SMC Response Register 3 */ +#define SDXC_REG_IMASK (0x30) /* SMC Interrupt Mask Register */ +#define SDXC_REG_MISTA (0x34) /* SMC Masked Interrupt Status Register */ +#define SDXC_REG_RINTR (0x38) /* SMC Raw Interrupt Status Register */ +#define SDXC_REG_STAS (0x3C) /* SMC Status Register */ +#define SDXC_REG_FTRGL (0x40) /* SMC FIFO Threshold Watermark Registe */ +#define SDXC_REG_FUNS (0x44) /* SMC Function Select Register */ +#define SDXC_REG_CBCR (0x48) /* SMC CIU Byte Count Register */ +#define SDXC_REG_BBCR (0x4C) /* SMC BIU Byte Count Register */ +#define SDXC_REG_DBGC (0x50) /* SMC Debug Enable Register */ +#define SDXC_REG_HWRST (0x78) /* SMC Card Hardware Reset for Register */ +#define SDXC_REG_DMAC (0x80) /* SMC IDMAC Control Register */ +#define SDXC_REG_DLBA (0x84) /* SMC IDMAC Descriptor List Base Addre */ +#define SDXC_REG_IDST (0x88) /* SMC IDMAC Status Register */ +#define SDXC_REG_IDIE (0x8C) /* SMC IDMAC Interrupt Enable Register
*/ +#define SDXC_REG_CHDA (0x90) +#define SDXC_REG_CBDA (0x94) + +#define mmc_readl(host, reg) \ + readl((host)->reg_base + SDXC_##reg) +#define mmc_writel(host, reg, value) \ + writel((value), (host)->reg_base + SDXC_##reg) + +/* global control register bits */ +#define SDXC_SOFT_RESET BIT(0) +#define SDXC_FIFO_RESET BIT(1) +#define SDXC_DMA_RESET BIT(2) +#define SDXC_INTERRUPT_ENABLE_BIT BIT(4) +#define SDXC_DMA_ENABLE_BIT BIT(5) +#define SDXC_DEBOUNCE_ENABLE_BIT BIT(8) +#define SDXC_POSEDGE_LATCH_DATA BIT(9) +#define SDXC_DDR_MODE BIT(10) +#define SDXC_MEMORY_ACCESS_DONE BIT(29) +#define SDXC_ACCESS_DONE_DIRECT BIT(30) +#define SDXC_ACCESS_BY_AHB BIT(31) +#define SDXC_ACCESS_BY_DMA (0 << 31) +#define SDXC_HARDWARE_RESET \ + (SDXC_SOFT_RESET | SDXC_FIFO_RESET | SDXC_DMA_RESET) + +/* clock control bits */ +#define SDXC_CARD_CLOCK_ON BIT(16) +#define SDXC_LOW_POWER_ON BIT(17) + +/* bus width */ +#define SDXC_WIDTH1 0 +#define SDXC_WIDTH4 1 +#define SDXC_WIDTH8 2 + +/* smc command bits */ +#define SDXC_RESP_EXPIRE BIT(6) +#define SDXC_LONG_RESPONSE BIT(7) +#define SDXC_CHECK_RESPONSE_CRC BIT(8) +#define SDXC_DATA_EXPIRE BIT(9) +#define SDXC_WRITE BIT(10) +#define SDXC_SEQUENCE_MODE BIT(11) +#define SDXC_SEND_AUTO_STOP BIT(12) +#define SDXC_WAIT_PRE_OVER BIT(13) +#define SDXC_STOP_ABORT_CMD BIT(14) +#define SDXC_SEND_INIT_SEQUENCE BIT(15) +#define SDXC_UPCLK_ONLY BIT(21) +#define SDXC_READ_CEATA_DEV BIT(22) +#define SDXC_CCS_EXPIRE BIT(23) +#define SDXC_ENABLE_BIT_BOOT BIT(24) +#define SDXC_ALT_BOOT_OPTIONS BIT(25) +#define SDXC_BOOT_ACK_EXPIRE BIT(26) +#define SDXC_BOOT_ABORT BIT(27) +#define SDXC_VOLTAGE_SWITCH BIT(28) +#define SDXC_USE_HOLD_REGISTER BIT(29) +#define SDXC_START BIT(31) + +/* interrupt bits */ +#define SDXC_RESP_ERROR BIT(1) +#define SDXC_COMMAND_DONE BIT(2) +#define SDXC_DATA_OVER BIT(3) +#define SDXC_TX_DATA_REQUEST BIT(4) +#define SDXC_RX_DATA_REQUEST BIT(5) +#define SDXC_RESP_CRC_ERROR BIT(6) +#define SDXC_DATA_CRC_ERROR BIT(7) +#define SDXC_RESP_TIMEOUT BIT(8) +#define SDXC_DATA_TIMEOUT BIT(9) +#define SDXC_VOLTAGE_CHANGE_DONE BIT(10) +#define SDXC_FIFO_RUN_ERROR BIT(11) +#define SDXC_HARD_WARE_LOCKED BIT(12) +#define SDXC_START_BIT_ERROR BIT(13) +#define SDXC_AUTO_COMMAND_DONE BIT(14) +#define SDXC_END_BIT_ERROR BIT(15) +#define SDXC_SDIO_INTERRUPT BIT(16) +#define SDXC_CARD_INSERT BIT(30) +#define SDXC_CARD_REMOVE BIT(31) +#define SDXC_INTERRUPT_ERROR_BIT \ + (SDXC_RESP_ERROR | SDXC_RESP_CRC_ERROR | SDXC_DATA_CRC_ERROR | \ + SDXC_RESP_TIMEOUT | SDXC_DATA_TIMEOUT | SDXC_FIFO_RUN_ERROR | \ + SDXC_HARD_WARE_LOCKED | SDXC_START_BIT_ERROR | SDXC_END_BIT_ERROR) +#define SDXC_INTERRUPT_DONE_BIT \ + (SDXC_AUTO_COMMAND_DONE | SDXC_DATA_OVER | \ + SDXC_COMMAND_DONE | SDXC_VOLTAGE_CHANGE_DONE) + +/* status */ +#define SDXC_RXWL_FLAG BIT(0) +#define SDXC_TXWL_FLAG BIT(1) +#define SDXC_FIFO_EMPTY BIT(2) +#define SDXC_FIFO_FULL BIT(3) +#define SDXC_CARD_PRESENT BIT(8) +#define SDXC_CARD_DATA_BUSY BIT(9) +#define SDXC_DATA_FSM_BUSY BIT(10) +#define SDXC_DMA_REQUEST BIT(31) +#define SDXC_FIFO_SIZE 16 + +/* Function select */ +#define SDXC_CEATA_ON (0xceaa << 16) +#define SDXC_SEND_IRQ_RESPONSE BIT(0) +#define SDXC_SDIO_READ_WAIT BIT(1) +#define SDXC_ABORT_READ_DATA BIT(2) +#define SDXC_SEND_CCSD BIT(8) +#define SDXC_SEND_AUTO_STOPCCSD BIT(9) +#define SDXC_CEATA_DEV_IRQ_ENABLE BIT(10) + +/* IDMA controller bus mod bit field */ +#define SDXC_IDMAC_SOFT_RESET BIT(0) +#define SDXC_IDMAC_FIX_BURST BIT(1) +#define SDXC_IDMAC_IDMA_ON BIT(7) +#define SDXC_IDMAC_REFETCH_DES BIT(31) + +/* 
IDMA status bit field */ +#define SDXC_IDMAC_TRANSMIT_INTERRUPT BIT(0) +#define SDXC_IDMAC_RECEIVE_INTERRUPT BIT(1) +#define SDXC_IDMAC_FATAL_BUS_ERROR BIT(2) +#define SDXC_IDMAC_DESTINATION_INVALID BIT(4) +#define SDXC_IDMAC_CARD_ERROR_SUM BIT(5) +#define SDXC_IDMAC_NORMAL_INTERRUPT_SUM BIT(8) +#define SDXC_IDMAC_ABNORMAL_INTERRUPT_SUM BIT(9) +#define SDXC_IDMAC_HOST_ABORT_INTERRUPT BIT(10) +#define SDXC_IDMAC_IDLE (0 << 13) +#define SDXC_IDMAC_SUSPEND (1 << 13) +#define SDXC_IDMAC_DESC_READ (2 << 13) +#define SDXC_IDMAC_DESC_CHECK (3 << 13) +#define SDXC_IDMAC_READ_REQUEST_WAIT (4 << 13) +#define SDXC_IDMAC_WRITE_REQUEST_WAIT (5 << 13) +#define SDXC_IDMAC_READ (6 << 13) +#define SDXC_IDMAC_WRITE (7 << 13) +#define SDXC_IDMAC_DESC_CLOSE (8 << 13) + +/* +* If the idma-des-size-bits of property is ie 13, bufsize bits are: +* Bits 0-12: buf1 size +* Bits 13-25: buf2 size +* Bits 26-31: not used +* Since we only ever set buf1 size, we can simply store it directly. +*/ +#define SDXC_IDMAC_DES0_DIC BIT(1) /* disable interrupt on completion */ +#define SDXC_IDMAC_DES0_LD BIT(2) /* last descriptor */ +#define SDXC_IDMAC_DES0_FD BIT(3) /* first descriptor */ +#define SDXC_IDMAC_DES0_CH BIT(4) /* chain mode */ +#define SDXC_IDMAC_DES0_ER BIT(5) /* end of ring */ +#define SDXC_IDMAC_DES0_CES BIT(30) /* card error summary */ +#define SDXC_IDMAC_DES0_OWN BIT(31) /* 1-idma owns it, 0-host owns it */ + +struct sunxi_idma_des { + u32 config; + u32 buf_size; + u32 buf_addr_ptr1; + u32 buf_addr_ptr2; +}; + +struct sunxi_mmc_host { + struct mmc_host *mmc; + struct reset_control *reset; + + /* IO mapping base */ + void __iomem *reg_base; + + /* clock management */ + struct clk *clk_ahb; + struct clk *clk_mmc; + + /* irq */ + spinlock_t lock; + int irq; + u32 int_sum; + u32 sdio_imask; + + /* dma */ + u32 idma_des_size_bits; + dma_addr_t sg_dma; + void *sg_cpu; + bool wait_dma; + + struct mmc_request *mrq; + struct mmc_request *manual_stop_mrq; + int ferror; +}; + +static int sunxi_mmc_reset_host(struct sunxi_mmc_host *host) +{ + unsigned long expire = jiffies + msecs_to_jiffies(250); + u32 rval; + + mmc_writel(host, REG_CMDR, SDXC_HARDWARE_RESET); + do { + rval = mmc_readl(host, REG_GCTRL); + } while (time_before(jiffies, expire) && (rval & SDXC_HARDWARE_RESET)); + + if (rval & SDXC_HARDWARE_RESET) { + dev_err(mmc_dev(host->mmc), "fatal err reset timeout\n"); + return -EIO; + } + + return 0; +} + +static int sunxi_mmc_init_host(struct mmc_host *mmc) +{ + u32 rval; + struct sunxi_mmc_host *host = mmc_priv(mmc); + + if (sunxi_mmc_reset_host(host)) + return -EIO; + + mmc_writel(host, REG_FTRGL, 0x20070008); + mmc_writel(host, REG_TMOUT, 0xffffffff); + mmc_writel(host, REG_IMASK, host->sdio_imask); + mmc_writel(host, REG_RINTR, 0xffffffff); + mmc_writel(host, REG_DBGC, 0xdeb); + mmc_writel(host, REG_FUNS, SDXC_CEATA_ON); + mmc_writel(host, REG_DLBA, host->sg_dma); + + rval = mmc_readl(host, REG_GCTRL); + rval |= SDXC_INTERRUPT_ENABLE_BIT; + rval &= ~SDXC_ACCESS_DONE_DIRECT; + mmc_writel(host, REG_GCTRL, rval); + + return 0; +} + +static void sunxi_mmc_init_idma_des(struct sunxi_mmc_host *host, + struct mmc_data *data) +{ + struct sunxi_idma_des *pdes = (struct sunxi_idma_des *)host->sg_cpu; + struct sunxi_idma_des *pdes_pa = (struct sunxi_idma_des *)host->sg_dma; + int i, max_len = (1 << host->idma_des_size_bits); + + for (i = 0; i < data->sg_len; i++) { + pdes[i].config = SDXC_IDMAC_DES0_CH | SDXC_IDMAC_DES0_OWN | + SDXC_IDMAC_DES0_DIC; + + if (data->sg[i].length == max_len) + pdes[i].buf_size = 0; /* 0 == 
max_len */ + else + pdes[i].buf_size = data->sg[i].length; + + pdes[i].buf_addr_ptr1 = sg_dma_address(&data->sg[i]); + pdes[i].buf_addr_ptr2 = (u32)&pdes_pa[i + 1]; + } + + pdes[0].config |= SDXC_IDMAC_DES0_FD; + pdes[i - 1].config = SDXC_IDMAC_DES0_OWN | SDXC_IDMAC_DES0_LD; + + /* + * Avoid the io-store starting the idmac hitting io-mem before the + * descriptors hit the main-mem. + */ + wmb(); +} + +static enum dma_data_direction sunxi_mmc_get_dma_dir(struct mmc_data *data) +{ + if (data->flags & MMC_DATA_WRITE) + return DMA_TO_DEVICE; + else + return DMA_FROM_DEVICE; +} + +static int sunxi_mmc_map_dma(struct sunxi_mmc_host *host, + struct mmc_data *data) +{ + u32 i, dma_len; + struct scatterlist *sg; + + dma_len = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len, + sunxi_mmc_get_dma_dir(data)); + if (dma_len == 0) { + dev_err(mmc_dev(host->mmc), "dma_map_sg failed\n"); + return -ENOMEM; + } + + for_each_sg(data->sg, sg, data->sg_len, i) { + if (sg->offset & 3 || sg->length & 3) { + dev_err(mmc_dev(host->mmc), + "unaligned scatterlist: os %x length %d\n", + sg->offset, sg->length); + return -EINVAL; + } + } + + return 0; +} + +static void sunxi_mmc_start_dma(struct sunxi_mmc_host *host, + struct mmc_data *data) +{ + u32 rval; + + sunxi_mmc_init_idma_des(host, data); + + rval = mmc_readl(host, REG_GCTRL); + rval |= SDXC_DMA_ENABLE_BIT; + mmc_writel(host, REG_GCTRL, rval); + rval |= SDXC_DMA_RESET; + mmc_writel(host, REG_GCTRL, rval); + + mmc_writel(host, REG_DMAC, SDXC_IDMAC_SOFT_RESET); + + if (!(data->flags & MMC_DATA_WRITE)) + mmc_writel(host, REG_IDIE, SDXC_IDMAC_RECEIVE_INTERRUPT); + + mmc_writel(host, REG_DMAC, + SDXC_IDMAC_FIX_BURST | SDXC_IDMAC_IDMA_ON); +} + +static void sunxi_mmc_send_manual_stop(struct sunxi_mmc_host *host, + struct mmc_request *req) +{ + u32 arg, cmd_val, ri; + unsigned long expire = jiffies + msecs_to_jiffies(1000); + + cmd_val = SDXC_START | SDXC_RESP_EXPIRE | + SDXC_STOP_ABORT_CMD | SDXC_CHECK_RESPONSE_CRC; + + if (req->cmd->opcode == SD_IO_RW_EXTENDED) { + cmd_val |= SD_IO_RW_DIRECT; + arg = (1 << 31) | (0 << 28) | (SDIO_CCCR_ABORT << 9) | + ((req->cmd->arg >> 28) & 0x7); + } else { + cmd_val |= MMC_STOP_TRANSMISSION; + arg = 0; + } + + mmc_writel(host, REG_CARG, arg); + mmc_writel(host, REG_CMDR, cmd_val); + + do { + ri = mmc_readl(host, REG_RINTR); + } while (!(ri & (SDXC_COMMAND_DONE | SDXC_INTERRUPT_ERROR_BIT)) && + time_before(jiffies, expire)); + + if (!(ri & SDXC_COMMAND_DONE) || (ri & SDXC_INTERRUPT_ERROR_BIT)) { + dev_err(mmc_dev(host->mmc), "send stop command failed\n"); + if (req->stop) + req->stop->resp[0] = -ETIMEDOUT; + } else { + if (req->stop) + req->stop->resp[0] = mmc_readl(host, REG_RESP0); + } + + mmc_writel(host, REG_RINTR, 0xffff); +} + +static void sunxi_mmc_dump_errinfo(struct sunxi_mmc_host *host) +{ + struct mmc_command *cmd = host->mrq->cmd; + struct mmc_data *data = host->mrq->data; + + /* For some cmds timeout is normal with sd/mmc cards */ + if ((host->int_sum & SDXC_INTERRUPT_ERROR_BIT) == + SDXC_RESP_TIMEOUT && (cmd->opcode == SD_IO_SEND_OP_COND || + cmd->opcode == SD_IO_RW_DIRECT)) + return; + + dev_err(mmc_dev(host->mmc), + "smc %d err, cmd %d,%s%s%s%s%s%s%s%s%s%s !!\n", + host->mmc->index, cmd->opcode, + data ? (data->flags & MMC_DATA_WRITE ? " WR" : " RD") : "", + host->int_sum & SDXC_RESP_ERROR ? " RE" : "", + host->int_sum & SDXC_RESP_CRC_ERROR ? " RCE" : "", + host->int_sum & SDXC_DATA_CRC_ERROR ? " DCE" : "", + host->int_sum & SDXC_RESP_TIMEOUT ? " RTO" : "", + host->int_sum & SDXC_DATA_TIMEOUT ? 
" DTO" : "", + host->int_sum & SDXC_FIFO_RUN_ERROR ? " FE" : "", + host->int_sum & SDXC_HARD_WARE_LOCKED ? " HL" : "", + host->int_sum & SDXC_START_BIT_ERROR ? " SBE" : "", + host->int_sum & SDXC_END_BIT_ERROR ? " EBE" : "" + ); +} + +/* Called in interrupt context! */ +static irqreturn_t sunxi_mmc_finalize_request(struct sunxi_mmc_host *host) +{ + struct mmc_request *mrq = host->mrq; + struct mmc_data *data = mrq->data; + u32 rval; + + mmc_writel(host, REG_IMASK, host->sdio_imask); + mmc_writel(host, REG_IDIE, 0); + + if (host->int_sum & SDXC_INTERRUPT_ERROR_BIT) { + sunxi_mmc_dump_errinfo(host); + mrq->cmd->error = -ETIMEDOUT; + + if (data) { + data->error = -ETIMEDOUT; + host->manual_stop_mrq = mrq; + } + + if (mrq->stop) + mrq->stop->error = -ETIMEDOUT; + } else { + if (mrq->cmd->flags & MMC_RSP_136) { + mrq->cmd->resp[0] = mmc_readl(host, REG_RESP3); + mrq->cmd->resp[1] = mmc_readl(host, REG_RESP2); + mrq->cmd->resp[2] = mmc_readl(host, REG_RESP1); + mrq->cmd->resp[3] = mmc_readl(host, REG_RESP0); + } else { + mrq->cmd->resp[0] = mmc_readl(host, REG_RESP0); + } + + if (data) + data->bytes_xfered = data->blocks * data->blksz; + } + + if (data) { + mmc_writel(host, REG_IDST, 0x337); + mmc_writel(host, REG_DMAC, 0); + rval = mmc_readl(host, REG_GCTRL); + rval |= SDXC_DMA_RESET; + mmc_writel(host, REG_GCTRL, rval); + rval &= ~SDXC_DMA_ENABLE_BIT; + mmc_writel(host, REG_GCTRL, rval); + rval |= SDXC_FIFO_RESET; + mmc_writel(host, REG_GCTRL, rval); + dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, + sunxi_mmc_get_dma_dir(data)); + } + + mmc_writel(host, REG_RINTR, 0xffff); + + host->mrq = NULL; + host->int_sum = 0; + host->wait_dma = false; + + return host->manual_stop_mrq ? IRQ_WAKE_THREAD : IRQ_HANDLED; +} + +static irqreturn_t sunxi_mmc_irq(int irq, void *dev_id) +{ + struct sunxi_mmc_host *host = dev_id; + struct mmc_request *mrq; + u32 msk_int, idma_int; + bool finalize = false; + bool sdio_int = false; + irqreturn_t ret = IRQ_HANDLED; + + spin_lock(&host->lock); + + idma_int = mmc_readl(host, REG_IDST); + msk_int = mmc_readl(host, REG_MISTA); + + dev_dbg(mmc_dev(host->mmc), "irq: rq %p mi %08x idi %08x\n", + host->mrq, msk_int, idma_int); + + mrq = host->mrq; + if (mrq) { + if (idma_int & SDXC_IDMAC_RECEIVE_INTERRUPT) + host->wait_dma = false; + + host->int_sum |= msk_int; + + /* Wait for COMMAND_DONE on RESPONSE_TIMEOUT before finalize */ + if ((host->int_sum & SDXC_RESP_TIMEOUT) && + !(host->int_sum & SDXC_COMMAND_DONE)) + mmc_writel(host, REG_IMASK, + host->sdio_imask | SDXC_COMMAND_DONE); + /* Don't wait for dma on error */ + else if (host->int_sum & SDXC_INTERRUPT_ERROR_BIT) + finalize = true; + else if ((host->int_sum & SDXC_INTERRUPT_DONE_BIT) && + !host->wait_dma) + finalize = true; + } + + if (msk_int & SDXC_SDIO_INTERRUPT) + sdio_int = true; + + mmc_writel(host, REG_RINTR, msk_int); + mmc_writel(host, REG_IDST, idma_int); + + if (finalize) + ret = sunxi_mmc_finalize_request(host); + + spin_unlock(&host->lock); + + if (finalize && ret == IRQ_HANDLED) + mmc_request_done(host->mmc, mrq); + + if (sdio_int) + mmc_signal_sdio_irq(host->mmc); + + return ret; +} + +static irqreturn_t sunxi_mmc_handle_manual_stop(int irq, void *dev_id) +{ + struct sunxi_mmc_host *host = dev_id; + struct mmc_request *mrq; + unsigned long iflags; + + spin_lock_irqsave(&host->lock, iflags); + mrq = host->manual_stop_mrq; + spin_unlock_irqrestore(&host->lock, iflags); + + if (!mrq) { + dev_err(mmc_dev(host->mmc), "no request for manual stop\n"); + return IRQ_HANDLED; + } + + 
dev_err(mmc_dev(host->mmc), "data error, sending stop command\n"); + sunxi_mmc_send_manual_stop(host, mrq); + + spin_lock_irqsave(&host->lock, iflags); + host->manual_stop_mrq = NULL; + spin_unlock_irqrestore(&host->lock, iflags); + + mmc_request_done(host->mmc, mrq); + + return IRQ_HANDLED; +} + +static int sunxi_mmc_oclk_onoff(struct sunxi_mmc_host *host, u32 oclk_en) +{ + unsigned long expire = jiffies + msecs_to_jiffies(250); + u32 rval; + + rval = mmc_readl(host, REG_CLKCR); + rval &= ~(SDXC_CARD_CLOCK_ON | SDXC_LOW_POWER_ON); + + if (oclk_en) + rval |= SDXC_CARD_CLOCK_ON; + + mmc_writel(host, REG_CLKCR, rval); + + rval = SDXC_START | SDXC_UPCLK_ONLY | SDXC_WAIT_PRE_OVER; + mmc_writel(host, REG_CMDR, rval); + + do { + rval = mmc_readl(host, REG_CMDR); + } while (time_before(jiffies, expire) && (rval & SDXC_START)); + + /* clear irq status bits set by the command */ + mmc_writel(host, REG_RINTR, + mmc_readl(host, REG_RINTR) & ~SDXC_SDIO_INTERRUPT); + + if (rval & SDXC_START) { + dev_err(mmc_dev(host->mmc), "fatal err update clk timeout\n"); + return -EIO; + } + + return 0; +} + +static int sunxi_mmc_clk_set_rate(struct sunxi_mmc_host *host, + struct mmc_ios *ios) +{ + u32 rate, oclk_dly, rval, sclk_dly, src_clk; + int ret; + + rate = clk_round_rate(host->clk_mmc, ios->clock); + dev_dbg(mmc_dev(host->mmc), "setting clk to %d, rounded %d\n", + ios->clock, rate); + + /* setting clock rate */ + ret = clk_set_rate(host->clk_mmc, rate); + if (ret) { + dev_err(mmc_dev(host->mmc), "error setting clk to %d: %d\n", + rate, ret); + return ret; + } + + ret = sunxi_mmc_oclk_onoff(host, 0); + if (ret) + return ret; + + /* clear internal divider */ + rval = mmc_readl(host, REG_CLKCR); + rval &= ~0xff; + mmc_writel(host, REG_CLKCR, rval); + + /* determine delays */ + if (rate <= 400000) { + oclk_dly = 0; + sclk_dly = 7; + } else if (rate <= 25000000) { + oclk_dly = 0; + sclk_dly = 5; + } else if (rate <= 50000000) { + if (ios->timing == MMC_TIMING_UHS_DDR50) { + oclk_dly = 2; + sclk_dly = 4; + } else { + oclk_dly = 3; + sclk_dly = 5; + } + } else { + /* rate > 50000000 */ + oclk_dly = 2; + sclk_dly = 4; + } + + src_clk = clk_get_rate(clk_get_parent(host->clk_mmc)); + if (src_clk >= 300000000 && src_clk <= 400000000) { + if (oclk_dly) + oclk_dly--; + if (sclk_dly) + sclk_dly--; + } + + clk_sunxi_mmc_phase_control(host->clk_mmc, sclk_dly, oclk_dly); + + return sunxi_mmc_oclk_onoff(host, 1); +} + +static void sunxi_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) +{ + struct sunxi_mmc_host *host = mmc_priv(mmc); + u32 rval; + + /* Set the power state */ + switch (ios->power_mode) { + case MMC_POWER_ON: + break; + + case MMC_POWER_UP: + mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, ios->vdd); + + host->ferror = sunxi_mmc_init_host(mmc); + if (host->ferror) + return; + + dev_dbg(mmc_dev(mmc), "power on!\n"); + break; + + case MMC_POWER_OFF: + dev_dbg(mmc_dev(mmc), "power off!\n"); + sunxi_mmc_reset_host(host); + mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, 0); + break; + } + + /* set bus width */ + switch (ios->bus_width) { + case MMC_BUS_WIDTH_1: + mmc_writel(host, REG_WIDTH, SDXC_WIDTH1); + break; + case MMC_BUS_WIDTH_4: + mmc_writel(host, REG_WIDTH, SDXC_WIDTH4); + break; + case MMC_BUS_WIDTH_8: + mmc_writel(host, REG_WIDTH, SDXC_WIDTH8); + break; + } + + /* set ddr mode */ + rval = mmc_readl(host, REG_GCTRL); + if (ios->timing == MMC_TIMING_UHS_DDR50) + rval |= SDXC_DDR_MODE; + else + rval &= ~SDXC_DDR_MODE; + mmc_writel(host, REG_GCTRL, rval); + + /* set up clock */ + if (ios->clock && 
ios->power_mode) { + host->ferror = sunxi_mmc_clk_set_rate(host, ios); + /* Android code had a usleep_range(50000, 55000); here */ + } +} + +static void sunxi_mmc_enable_sdio_irq(struct mmc_host *mmc, int enable) +{ + struct sunxi_mmc_host *host = mmc_priv(mmc); + unsigned long flags; + u32 imask; + + spin_lock_irqsave(&host->lock, flags); + + imask = mmc_readl(host, REG_IMASK); + if (enable) { + host->sdio_imask = SDXC_SDIO_INTERRUPT; + imask |= SDXC_SDIO_INTERRUPT; + } else { + host->sdio_imask = 0; + imask &= ~SDXC_SDIO_INTERRUPT; + } + mmc_writel(host, REG_IMASK, imask); + spin_unlock_irqrestore(&host->lock, flags); +} + +static void sunxi_mmc_hw_reset(struct mmc_host *mmc) +{ + struct sunxi_mmc_host *host = mmc_priv(mmc); + mmc_writel(host, REG_HWRST, 0); + udelay(10); + mmc_writel(host, REG_HWRST, 1); + udelay(300); +} + +static void sunxi_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq) +{ + struct sunxi_mmc_host *host = mmc_priv(mmc); + struct mmc_command *cmd = mrq->cmd; + struct mmc_data *data = mrq->data; + unsigned long iflags; + u32 imask = SDXC_INTERRUPT_ERROR_BIT; + u32 cmd_val = SDXC_START | (cmd->opcode & 0x3f); + int ret; + + /* Check for set_ios errors (should never happen) */ + if (host->ferror) { + mrq->cmd->error = host->ferror; + mmc_request_done(mmc, mrq); + return; + } + + if (data) { + ret = sunxi_mmc_map_dma(host, data); + if (ret < 0) { + dev_err(mmc_dev(mmc), "map DMA failed\n"); + cmd->error = ret; + data->error = ret; + mmc_request_done(mmc, mrq); + return; + } + } + + if (cmd->opcode == MMC_GO_IDLE_STATE) { + cmd_val |= SDXC_SEND_INIT_SEQUENCE; + imask |= SDXC_COMMAND_DONE; + } + + if (cmd->flags & MMC_RSP_PRESENT) { + cmd_val |= SDXC_RESP_EXPIRE; + if (cmd->flags & MMC_RSP_136) + cmd_val |= SDXC_LONG_RESPONSE; + if (cmd->flags & MMC_RSP_CRC) + cmd_val |= SDXC_CHECK_RESPONSE_CRC; + + if ((cmd->flags & MMC_CMD_MASK) == MMC_CMD_ADTC) { + cmd_val |= SDXC_DATA_EXPIRE | SDXC_WAIT_PRE_OVER; + if (cmd->data->flags & MMC_DATA_STREAM) { + imask |= SDXC_AUTO_COMMAND_DONE; + cmd_val |= SDXC_SEQUENCE_MODE | + SDXC_SEND_AUTO_STOP; + } + + if (cmd->data->stop) { + imask |= SDXC_AUTO_COMMAND_DONE; + cmd_val |= SDXC_SEND_AUTO_STOP; + } else { + imask |= SDXC_DATA_OVER; + } + + if (cmd->data->flags & MMC_DATA_WRITE) + cmd_val |= SDXC_WRITE; + else + host->wait_dma = true; + } else { + imask |= SDXC_COMMAND_DONE; + } + } else { + imask |= SDXC_COMMAND_DONE; + } + + dev_dbg(mmc_dev(mmc), "cmd %d(%08x) arg %x ie 0x%08x len %d\n", + cmd_val & 0x3f, cmd_val, cmd->arg, imask, + mrq->data ? 
mrq->data->blksz * mrq->data->blocks : 0); + + spin_lock_irqsave(&host->lock, iflags); + + if (host->mrq || host->manual_stop_mrq) { + spin_unlock_irqrestore(&host->lock, iflags); + + if (data) + dma_unmap_sg(mmc_dev(mmc), data->sg, data->sg_len, + sunxi_mmc_get_dma_dir(data)); + + dev_err(mmc_dev(mmc), "request already pending\n"); + mrq->cmd->error = -EBUSY; + mmc_request_done(mmc, mrq); + return; + } + + if (data) { + mmc_writel(host, REG_BLKSZ, data->blksz); + mmc_writel(host, REG_BCNTR, data->blksz * data->blocks); + sunxi_mmc_start_dma(host, data); + } + + host->mrq = mrq; + mmc_writel(host, REG_IMASK, host->sdio_imask | imask); + mmc_writel(host, REG_CARG, cmd->arg); + mmc_writel(host, REG_CMDR, cmd_val); + + spin_unlock_irqrestore(&host->lock, iflags); +} + +static const struct of_device_id sunxi_mmc_of_match[] = { + { .compatible = "allwinner,sun4i-a10-mmc", }, + { .compatible = "allwinner,sun5i-a13-mmc", }, + { /* sentinel */ } +}; +MODULE_DEVICE_TABLE(of, sunxi_mmc_of_match); + +static struct mmc_host_ops sunxi_mmc_ops = { + .request = sunxi_mmc_request, + .set_ios = sunxi_mmc_set_ios, + .get_ro = mmc_gpio_get_ro, + .get_cd = mmc_gpio_get_cd, + .enable_sdio_irq = sunxi_mmc_enable_sdio_irq, + .hw_reset = sunxi_mmc_hw_reset, +}; + +static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host, + struct platform_device *pdev) +{ + struct device_node *np = pdev->dev.of_node; + int ret; + + if (of_device_is_compatible(np, "allwinner,sun4i-a10-mmc")) + host->idma_des_size_bits = 13; + else + host->idma_des_size_bits = 16; + + ret = mmc_regulator_get_supply(host->mmc); + if (ret) { + if (ret != -EPROBE_DEFER) + dev_err(&pdev->dev, "Could not get vmmc supply\n"); + return ret; + } + + host->reg_base = devm_ioremap_resource(&pdev->dev, + platform_get_resource(pdev, IORESOURCE_MEM, 0)); + if (IS_ERR(host->reg_base)) + return PTR_ERR(host->reg_base); + + host->clk_ahb = devm_clk_get(&pdev->dev, "ahb"); + if (IS_ERR(host->clk_ahb)) { + dev_err(&pdev->dev, "Could not get ahb clock\n"); + return PTR_ERR(host->clk_ahb); + } + + host->clk_mmc = devm_clk_get(&pdev->dev, "mmc"); + if (IS_ERR(host->clk_mmc)) { + dev_err(&pdev->dev, "Could not get mmc clock\n"); + return PTR_ERR(host->clk_mmc); + } + + host->reset = devm_reset_control_get(&pdev->dev, "ahb"); + + ret = clk_prepare_enable(host->clk_ahb); + if (ret) { + dev_err(&pdev->dev, "Enable ahb clk err %d\n", ret); + return ret; + } + + ret = clk_prepare_enable(host->clk_mmc); + if (ret) { + dev_err(&pdev->dev, "Enable mmc clk err %d\n", ret); + goto error_disable_clk_ahb; + } + + if (!IS_ERR(host->reset)) { + ret = reset_control_deassert(host->reset); + if (ret) { + dev_err(&pdev->dev, "reset err %d\n", ret); + goto error_disable_clk_mmc; + } + } + + /* + * Sometimes the controller asserts the irq on boot for some reason, + * make sure the controller is in a sane state before enabling irqs. 
+ */ + ret = sunxi_mmc_reset_host(host); + if (ret) + goto error_assert_reset; + + host->irq = platform_get_irq(pdev, 0); + return devm_request_threaded_irq(&pdev->dev, host->irq, sunxi_mmc_irq, + sunxi_mmc_handle_manual_stop, 0, "sunxi-mmc", host); + +error_assert_reset: + if (!IS_ERR(host->reset)) + reset_control_assert(host->reset); +error_disable_clk_mmc: + clk_disable_unprepare(host->clk_mmc); +error_disable_clk_ahb: + clk_disable_unprepare(host->clk_ahb); + return ret; +} + +static int sunxi_mmc_probe(struct platform_device *pdev) +{ + struct sunxi_mmc_host *host; + struct mmc_host *mmc; + int ret; + + mmc = mmc_alloc_host(sizeof(struct sunxi_mmc_host), &pdev->dev); + if (!mmc) { + dev_err(&pdev->dev, "mmc alloc host failed\n"); + return -ENOMEM; + } + + host = mmc_priv(mmc); + host->mmc = mmc; + spin_lock_init(&host->lock); + + ret = sunxi_mmc_resource_request(host, pdev); + if (ret) + goto error_free_host; + + host->sg_cpu = dma_alloc_coherent(&pdev->dev, PAGE_SIZE, + &host->sg_dma, GFP_KERNEL); + if (!host->sg_cpu) { + dev_err(&pdev->dev, "Failed to allocate DMA descriptor mem\n"); + ret = -ENOMEM; + goto error_free_host; + } + + mmc->ops = &sunxi_mmc_ops; + mmc->max_blk_count = 8192; + mmc->max_blk_size = 4096; + mmc->max_segs = PAGE_SIZE / sizeof(struct sunxi_idma_des); + mmc->max_seg_size = (1 << host->idma_des_size_bits); + mmc->max_req_size = mmc->max_seg_size * mmc->max_segs; + /* 400kHz ~ 50MHz */ + mmc->f_min = 400000; + mmc->f_max = 50000000; + mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED; + + ret = mmc_of_parse(mmc); + if (ret) + goto error_free_dma; + + ret = mmc_add_host(mmc); + if (ret) + goto error_free_dma; + + dev_info(&pdev->dev, "base:0x%p irq:%u\n", host->reg_base, host->irq); + platform_set_drvdata(pdev, mmc); + return 0; + +error_free_dma: + dma_free_coherent(&pdev->dev, PAGE_SIZE, host->sg_cpu, host->sg_dma); +error_free_host: + mmc_free_host(mmc); + return ret; +} + +static int sunxi_mmc_remove(struct platform_device *pdev) +{ + struct mmc_host *mmc = platform_get_drvdata(pdev); + struct sunxi_mmc_host *host = mmc_priv(mmc); + + mmc_remove_host(mmc); + disable_irq(host->irq); + sunxi_mmc_reset_host(host); + + if (!IS_ERR(host->reset)) + reset_control_assert(host->reset); + + clk_disable_unprepare(host->clk_mmc); + clk_disable_unprepare(host->clk_ahb); + + dma_free_coherent(&pdev->dev, PAGE_SIZE, host->sg_cpu, host->sg_dma); + mmc_free_host(mmc); + + return 0; +} + +static struct platform_driver sunxi_mmc_driver = { + .driver = { + .name = "sunxi-mmc", + .owner = THIS_MODULE, + .of_match_table = of_match_ptr(sunxi_mmc_of_match), + }, + .probe = sunxi_mmc_probe, + .remove = sunxi_mmc_remove, +}; +module_platform_driver(sunxi_mmc_driver); + +MODULE_DESCRIPTION("Allwinner's SD/MMC Card Controller Driver"); +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("David Lanzendörfer <david.lanzendoerfer@o2s.ch>"); +MODULE_ALIAS("platform:sunxi-mmc"); diff --git a/drivers/net/can/led.c b/drivers/net/can/led.c index a3d99a8fd2d1..ab7f1b01be49 100644 --- a/drivers/net/can/led.c +++ b/drivers/net/can/led.c @@ -97,6 +97,9 @@ static int can_led_notifier(struct notifier_block *nb, unsigned long msg, if (!priv) return NOTIFY_DONE; + if (!priv->tx_led_trig || !priv->rx_led_trig) + return NOTIFY_DONE; + if (msg == NETDEV_CHANGENAME) { snprintf(name, sizeof(name), "%s-tx", netdev->name); led_trigger_rename_static(name, priv->tx_led_trig); diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig index d7401017a3f1..051349458462 100644 --- 
a/drivers/net/ethernet/Kconfig +++ b/drivers/net/ethernet/Kconfig @@ -39,6 +39,7 @@ source "drivers/net/ethernet/cisco/Kconfig" config CX_ECAT tristate "Beckhoff CX5020 EtherCAT master support" depends on PCI + depends on X86 || COMPILE_TEST ---help--- Driver for EtherCAT master module located on CCAT FPGA that can be found on Beckhoff CX5020, and possibly other of CX diff --git a/drivers/net/ethernet/ibm/emac/mal.c b/drivers/net/ethernet/ibm/emac/mal.c index 9d75fef6396f..63eb959a28aa 100644 --- a/drivers/net/ethernet/ibm/emac/mal.c +++ b/drivers/net/ethernet/ibm/emac/mal.c @@ -682,10 +682,7 @@ static int mal_probe(struct platform_device *ofdev) goto fail6; /* Enable all MAL SERR interrupt sources */ - if (mal->version == 2) - set_mal_dcrn(mal, MAL_IER, MAL2_IER_EVENTS); - else - set_mal_dcrn(mal, MAL_IER, MAL1_IER_EVENTS); + set_mal_dcrn(mal, MAL_IER, MAL_IER_EVENTS); /* Enable EOB interrupt */ mal_enable_eob_irq(mal); diff --git a/drivers/net/ethernet/ibm/emac/mal.h b/drivers/net/ethernet/ibm/emac/mal.h index e431a32e3d69..eeade2ea8334 100644 --- a/drivers/net/ethernet/ibm/emac/mal.h +++ b/drivers/net/ethernet/ibm/emac/mal.h @@ -95,24 +95,20 @@ #define MAL_IER 0x02 +/* MAL IER bits */ #define MAL_IER_DE 0x00000010 #define MAL_IER_OTE 0x00000004 #define MAL_IER_OE 0x00000002 #define MAL_IER_PE 0x00000001 -/* MAL V1 IER bits */ -#define MAL1_IER_NWE 0x00000008 -#define MAL1_IER_SOC_EVENTS MAL1_IER_NWE -#define MAL1_IER_EVENTS (MAL1_IER_SOC_EVENTS | MAL_IER_DE | \ - MAL_IER_OTE | MAL_IER_OE | MAL_IER_PE) -/* MAL V2 IER bits */ -#define MAL2_IER_PT 0x00000080 -#define MAL2_IER_PRE 0x00000040 -#define MAL2_IER_PWE 0x00000020 -#define MAL2_IER_SOC_EVENTS (MAL2_IER_PT | MAL2_IER_PRE | MAL2_IER_PWE) -#define MAL2_IER_EVENTS (MAL2_IER_SOC_EVENTS | MAL_IER_DE | \ - MAL_IER_OTE | MAL_IER_OE | MAL_IER_PE) +/* PLB read/write/timeout errors */ +#define MAL_IER_PTE 0x00000080 +#define MAL_IER_PRE 0x00000040 +#define MAL_IER_PWE 0x00000020 +#define MAL_IER_SOC_EVENTS (MAL_IER_PTE | MAL_IER_PRE | MAL_IER_PWE) +#define MAL_IER_EVENTS (MAL_IER_SOC_EVENTS | MAL_IER_DE | \ + MAL_IER_OTE | MAL_IER_OE | MAL_IER_PE) #define MAL_TXCASR 0x04 #define MAL_TXCARR 0x05 diff --git a/drivers/net/ethernet/ibm/emac/rgmii.c b/drivers/net/ethernet/ibm/emac/rgmii.c index 4fb2f96da23b..a01182cce965 100644 --- a/drivers/net/ethernet/ibm/emac/rgmii.c +++ b/drivers/net/ethernet/ibm/emac/rgmii.c @@ -45,6 +45,7 @@ /* RGMIIx_SSR */ #define RGMII_SSR_MASK(idx) (0x7 << ((idx) * 8)) +#define RGMII_SSR_10(idx) (0x1 << ((idx) * 8)) #define RGMII_SSR_100(idx) (0x2 << ((idx) * 8)) #define RGMII_SSR_1000(idx) (0x4 << ((idx) * 8)) @@ -139,6 +140,8 @@ void rgmii_set_speed(struct platform_device *ofdev, int input, int speed) ssr |= RGMII_SSR_1000(input); else if (speed == SPEED_100) ssr |= RGMII_SSR_100(input); + else if (speed == SPEED_10) + ssr |= RGMII_SSR_10(input); out_be32(&p->ssr, ssr); diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c index 7cf9dadcb471..c187d748115f 100644 --- a/drivers/net/ethernet/mellanox/mlx4/main.c +++ b/drivers/net/ethernet/mellanox/mlx4/main.c @@ -2044,6 +2044,7 @@ static int mlx4_init_port_info(struct mlx4_dev *dev, int port) if (!mlx4_is_slave(dev)) { mlx4_init_mac_table(dev, &info->mac_table); mlx4_init_vlan_table(dev, &info->vlan_table); + mlx4_init_roce_gid_table(dev, &info->gid_table); info->base_qpn = mlx4_get_base_qpn(dev, port); } diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4.h b/drivers/net/ethernet/mellanox/mlx4/mlx4.h index 
212cea440f90..8e9eb02e09cb 100644 --- a/drivers/net/ethernet/mellanox/mlx4/mlx4.h +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4.h @@ -695,6 +695,17 @@ struct mlx4_mac_table { int max; }; +#define MLX4_ROCE_GID_ENTRY_SIZE 16 + +struct mlx4_roce_gid_entry { + u8 raw[MLX4_ROCE_GID_ENTRY_SIZE]; +}; + +struct mlx4_roce_gid_table { + struct mlx4_roce_gid_entry roce_gids[MLX4_ROCE_MAX_GIDS]; + struct mutex mutex; +}; + #define MLX4_MAX_VLAN_NUM 128 #define MLX4_VLAN_TABLE_SIZE (MLX4_MAX_VLAN_NUM << 2) @@ -758,6 +769,7 @@ struct mlx4_port_info { struct device_attribute port_mtu_attr; struct mlx4_mac_table mac_table; struct mlx4_vlan_table vlan_table; + struct mlx4_roce_gid_table gid_table; int base_qpn; }; @@ -788,10 +800,6 @@ enum { MLX4_USE_RR = 1, }; -struct mlx4_roce_gid_entry { - u8 raw[16]; -}; - struct mlx4_priv { struct mlx4_dev dev; @@ -839,7 +847,6 @@ struct mlx4_priv { int fs_hash_mode; u8 virt2phys_pkey[MLX4_MFUNC_MAX][MLX4_MAX_PORTS][MLX4_MAX_PORT_PKEYS]; __be64 slave_node_guids[MLX4_MFUNC_MAX]; - struct mlx4_roce_gid_entry roce_gids[MLX4_MAX_PORTS][MLX4_ROCE_MAX_GIDS]; atomic_t opreq_count; struct work_struct opreq_task; @@ -1140,6 +1147,8 @@ int mlx4_change_port_types(struct mlx4_dev *dev, void mlx4_init_mac_table(struct mlx4_dev *dev, struct mlx4_mac_table *table); void mlx4_init_vlan_table(struct mlx4_dev *dev, struct mlx4_vlan_table *table); +void mlx4_init_roce_gid_table(struct mlx4_dev *dev, + struct mlx4_roce_gid_table *table); void __mlx4_unregister_vlan(struct mlx4_dev *dev, u8 port, u16 vlan); int __mlx4_register_vlan(struct mlx4_dev *dev, u8 port, u16 vlan, int *index); @@ -1149,6 +1158,7 @@ int mlx4_get_slave_from_resource_id(struct mlx4_dev *dev, enum mlx4_resource resource_type, u64 resource_id, int *slave); void mlx4_delete_all_resources_for_slave(struct mlx4_dev *dev, int slave_id); +void mlx4_reset_roce_gids(struct mlx4_dev *dev, int slave); int mlx4_init_resource_tracker(struct mlx4_dev *dev); void mlx4_free_resource_tracker(struct mlx4_dev *dev, diff --git a/drivers/net/ethernet/mellanox/mlx4/port.c b/drivers/net/ethernet/mellanox/mlx4/port.c index b5b3549b0c8d..5ec6f203c6e6 100644 --- a/drivers/net/ethernet/mellanox/mlx4/port.c +++ b/drivers/net/ethernet/mellanox/mlx4/port.c @@ -75,6 +75,16 @@ void mlx4_init_vlan_table(struct mlx4_dev *dev, struct mlx4_vlan_table *table) table->total = 0; } +void mlx4_init_roce_gid_table(struct mlx4_dev *dev, + struct mlx4_roce_gid_table *table) +{ + int i; + + mutex_init(&table->mutex); + for (i = 0; i < MLX4_ROCE_MAX_GIDS; i++) + memset(table->roce_gids[i].raw, 0, MLX4_ROCE_GID_ENTRY_SIZE); +} + static int validate_index(struct mlx4_dev *dev, struct mlx4_mac_table *table, int index) { @@ -584,6 +594,84 @@ int mlx4_get_base_gid_ix(struct mlx4_dev *dev, int slave, int port) } EXPORT_SYMBOL_GPL(mlx4_get_base_gid_ix); +static int mlx4_reset_roce_port_gids(struct mlx4_dev *dev, int slave, + int port, struct mlx4_cmd_mailbox *mailbox) +{ + struct mlx4_roce_gid_entry *gid_entry_mbox; + struct mlx4_priv *priv = mlx4_priv(dev); + int num_gids, base, offset; + int i, err; + + num_gids = mlx4_get_slave_num_gids(dev, slave, port); + base = mlx4_get_base_gid_ix(dev, slave, port); + + memset(mailbox->buf, 0, MLX4_MAILBOX_SIZE); + + mutex_lock(&(priv->port[port].gid_table.mutex)); + /* Zero-out gids belonging to that slave in the port GID table */ + for (i = 0, offset = base; i < num_gids; offset++, i++) + memcpy(priv->port[port].gid_table.roce_gids[offset].raw, + zgid_entry.raw, MLX4_ROCE_GID_ENTRY_SIZE); + + /* Now, copy roce port gids table 
to mailbox for passing to FW */ + gid_entry_mbox = (struct mlx4_roce_gid_entry *)mailbox->buf; + for (i = 0; i < MLX4_ROCE_MAX_GIDS; gid_entry_mbox++, i++) + memcpy(gid_entry_mbox->raw, + priv->port[port].gid_table.roce_gids[i].raw, + MLX4_ROCE_GID_ENTRY_SIZE); + + err = mlx4_cmd(dev, mailbox->dma, + ((u32)port) | (MLX4_SET_PORT_GID_TABLE << 8), 1, + MLX4_CMD_SET_PORT, MLX4_CMD_TIME_CLASS_B, + MLX4_CMD_NATIVE); + mutex_unlock(&(priv->port[port].gid_table.mutex)); + return err; +} + + +void mlx4_reset_roce_gids(struct mlx4_dev *dev, int slave) +{ + struct mlx4_active_ports actv_ports; + struct mlx4_cmd_mailbox *mailbox; + int num_eth_ports, err; + int i; + + if (slave < 0 || slave > dev->num_vfs) + return; + + actv_ports = mlx4_get_active_ports(dev, slave); + + for (i = 0, num_eth_ports = 0; i < dev->caps.num_ports; i++) { + if (test_bit(i, actv_ports.ports)) { + if (dev->caps.port_type[i + 1] != MLX4_PORT_TYPE_ETH) + continue; + num_eth_ports++; + } + } + + if (!num_eth_ports) + return; + + /* have ETH ports. Alloc mailbox for SET_PORT command */ + mailbox = mlx4_alloc_cmd_mailbox(dev); + if (IS_ERR(mailbox)) + return; + + for (i = 0; i < dev->caps.num_ports; i++) { + if (test_bit(i, actv_ports.ports)) { + if (dev->caps.port_type[i + 1] != MLX4_PORT_TYPE_ETH) + continue; + err = mlx4_reset_roce_port_gids(dev, slave, i + 1, mailbox); + if (err) + mlx4_warn(dev, "Could not reset ETH port GID table for slave %d, port %d (%d)\n", + slave, i + 1, err); + } + } + + mlx4_free_cmd_mailbox(dev, mailbox); + return; +} + static int mlx4_common_set_port(struct mlx4_dev *dev, int slave, u32 in_mod, u8 op_mod, struct mlx4_cmd_mailbox *inbox) { @@ -692,10 +780,12 @@ static int mlx4_common_set_port(struct mlx4_dev *dev, int slave, u32 in_mod, /* 2. Check that do not have duplicates in OTHER * entries in the port GID table */ + + mutex_lock(&(priv->port[port].gid_table.mutex)); for (i = 0; i < MLX4_ROCE_MAX_GIDS; i++) { if (i >= base && i < base + num_gids) continue; /* don't compare to slave's current gids */ - gid_entry_tbl = &priv->roce_gids[port - 1][i]; + gid_entry_tbl = &priv->port[port].gid_table.roce_gids[i]; if (!memcmp(gid_entry_tbl->raw, zgid_entry.raw, sizeof(zgid_entry))) continue; gid_entry_mbox = (struct mlx4_roce_gid_entry *)(inbox->buf); @@ -709,6 +799,7 @@ static int mlx4_common_set_port(struct mlx4_dev *dev, int slave, u32 in_mod, mlx4_warn(dev, "requested gid entry for slave:%d " "is a duplicate of gid at index %d\n", slave, i); + mutex_unlock(&(priv->port[port].gid_table.mutex)); return -EINVAL; } } @@ -717,16 +808,24 @@ static int mlx4_common_set_port(struct mlx4_dev *dev, int slave, u32 in_mod, /* insert slave GIDs with memcpy, starting at slave's base index */ gid_entry_mbox = (struct mlx4_roce_gid_entry *)(inbox->buf); for (i = 0, offset = base; i < num_gids; gid_entry_mbox++, offset++, i++) - memcpy(priv->roce_gids[port - 1][offset].raw, gid_entry_mbox->raw, 16); + memcpy(priv->port[port].gid_table.roce_gids[offset].raw, + gid_entry_mbox->raw, MLX4_ROCE_GID_ENTRY_SIZE); /* Now, copy roce port gids table to current mailbox for passing to FW */ gid_entry_mbox = (struct mlx4_roce_gid_entry *)(inbox->buf); for (i = 0; i < MLX4_ROCE_MAX_GIDS; gid_entry_mbox++, i++) - memcpy(gid_entry_mbox->raw, priv->roce_gids[port - 1][i].raw, 16); - - break; + memcpy(gid_entry_mbox->raw, + priv->port[port].gid_table.roce_gids[i].raw, + MLX4_ROCE_GID_ENTRY_SIZE); + + err = mlx4_cmd(dev, inbox->dma, in_mod & 0xffff, op_mod, + MLX4_CMD_SET_PORT, MLX4_CMD_TIME_CLASS_B, + MLX4_CMD_NATIVE); + 
mutex_unlock(&(priv->port[port].gid_table.mutex)); + return err; } - return mlx4_cmd(dev, inbox->dma, in_mod, op_mod, + + return mlx4_cmd(dev, inbox->dma, in_mod & 0xffff, op_mod, MLX4_CMD_SET_PORT, MLX4_CMD_TIME_CLASS_B, MLX4_CMD_NATIVE); } @@ -1099,7 +1198,8 @@ int mlx4_get_slave_from_roce_gid(struct mlx4_dev *dev, int port, u8 *gid, num_vfs = bitmap_weight(slaves_pport.slaves, dev->num_vfs + 1) - 1; for (i = 0; i < MLX4_ROCE_MAX_GIDS; i++) { - if (!memcmp(priv->roce_gids[port - 1][i].raw, gid, 16)) { + if (!memcmp(priv->port[port].gid_table.roce_gids[i].raw, gid, + MLX4_ROCE_GID_ENTRY_SIZE)) { found_ix = i; break; } @@ -1187,7 +1287,8 @@ int mlx4_get_roce_gid_from_slave(struct mlx4_dev *dev, int port, int slave_id, if (!mlx4_is_master(dev)) return -EINVAL; - memcpy(gid, priv->roce_gids[port - 1][slave_id].raw, 16); + memcpy(gid, priv->port[port].gid_table.roce_gids[slave_id].raw, + MLX4_ROCE_GID_ENTRY_SIZE); return 0; } EXPORT_SYMBOL(mlx4_get_roce_gid_from_slave); diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c index 8f1254a79832..f16e539749c4 100644 --- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c +++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c @@ -586,6 +586,7 @@ void mlx4_free_resource_tracker(struct mlx4_dev *dev, } /* free master's vlans */ i = dev->caps.function; + mlx4_reset_roce_gids(dev, i); mutex_lock(&priv->mfunc.master.res_tracker.slave_list[i].mutex); rem_slave_vlans(dev, i); mutex_unlock(&priv->mfunc.master.res_tracker.slave_list[i].mutex); @@ -4681,7 +4682,7 @@ static void rem_slave_xrcdns(struct mlx4_dev *dev, int slave) void mlx4_delete_all_resources_for_slave(struct mlx4_dev *dev, int slave) { struct mlx4_priv *priv = mlx4_priv(dev); - + mlx4_reset_roce_gids(dev, slave); mutex_lock(&priv->mfunc.master.res_tracker.slave_list[slave].mutex); rem_slave_vlans(dev, slave); rem_slave_macs(dev, slave); diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.c index a51fe18f09a8..561cb11ca58c 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.c @@ -1020,6 +1020,7 @@ static int qlcnic_dcb_peer_app_info(struct net_device *netdev, struct qlcnic_dcb_cee *peer; int i; + memset(info, 0, sizeof(*info)); *app_count = 0; if (!test_bit(QLCNIC_DCB_STATE, &adapter->dcb->state)) diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c index 767fe61b5ac9..ce4989be86d9 100644 --- a/drivers/net/team/team.c +++ b/drivers/net/team/team.c @@ -1724,6 +1724,7 @@ static int team_change_mtu(struct net_device *dev, int new_mtu) * to traverse list in reverse under rcu_read_lock */ mutex_lock(&team->lock); + team->port_mtu_change_allowed = true; list_for_each_entry(port, &team->port_list, list) { err = dev_set_mtu(port->dev, new_mtu); if (err) { @@ -1732,6 +1733,7 @@ static int team_change_mtu(struct net_device *dev, int new_mtu) goto unwind; } } + team->port_mtu_change_allowed = false; mutex_unlock(&team->lock); dev->mtu = new_mtu; @@ -1741,6 +1743,7 @@ static int team_change_mtu(struct net_device *dev, int new_mtu) unwind: list_for_each_entry_continue_reverse(port, &team->port_list, list) dev_set_mtu(port->dev, dev->mtu); + team->port_mtu_change_allowed = false; mutex_unlock(&team->lock); return err; @@ -2851,7 +2854,9 @@ static int team_device_event(struct notifier_block *unused, break; case NETDEV_PRECHANGEMTU: /* Forbid to change mtu of underlaying device */ - return NOTIFY_BAD; + if 
(!port->team->port_mtu_change_allowed) + return NOTIFY_BAD; + break; case NETDEV_PRE_TYPE_CHANGE: /* Forbid to change type of underlaying device */ return NOTIFY_BAD; diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c index 421934c83f1c..973275fef250 100644 --- a/drivers/net/usb/ipheth.c +++ b/drivers/net/usb/ipheth.c @@ -59,6 +59,8 @@ #define USB_PRODUCT_IPHONE_3GS 0x1294 #define USB_PRODUCT_IPHONE_4 0x1297 #define USB_PRODUCT_IPAD 0x129a +#define USB_PRODUCT_IPAD_2 0x12a2 +#define USB_PRODUCT_IPAD_3 0x12a6 #define USB_PRODUCT_IPAD_MINI 0x12ab #define USB_PRODUCT_IPHONE_4_VZW 0x129c #define USB_PRODUCT_IPHONE_4S 0x12a0 @@ -107,6 +109,14 @@ static struct usb_device_id ipheth_table[] = { IPHETH_USBINTF_CLASS, IPHETH_USBINTF_SUBCLASS, IPHETH_USBINTF_PROTO) }, { USB_DEVICE_AND_INTERFACE_INFO( + USB_VENDOR_APPLE, USB_PRODUCT_IPAD_2, + IPHETH_USBINTF_CLASS, IPHETH_USBINTF_SUBCLASS, + IPHETH_USBINTF_PROTO) }, + { USB_DEVICE_AND_INTERFACE_INFO( + USB_VENDOR_APPLE, USB_PRODUCT_IPAD_3, + IPHETH_USBINTF_CLASS, IPHETH_USBINTF_SUBCLASS, + IPHETH_USBINTF_PROTO) }, + { USB_DEVICE_AND_INTERFACE_INFO( USB_VENDOR_APPLE, USB_PRODUCT_IPAD_MINI, IPHETH_USBINTF_CLASS, IPHETH_USBINTF_SUBCLASS, IPHETH_USBINTF_PROTO) }, diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c index 83208d4fdc59..dc4bf06948c7 100644 --- a/drivers/net/usb/qmi_wwan.c +++ b/drivers/net/usb/qmi_wwan.c @@ -748,11 +748,15 @@ static const struct usb_device_id products[] = { {QMI_FIXED_INTF(0x1199, 0x68a2, 19)}, /* Sierra Wireless MC7710 in QMI mode */ {QMI_FIXED_INTF(0x1199, 0x68c0, 8)}, /* Sierra Wireless MC73xx */ {QMI_FIXED_INTF(0x1199, 0x68c0, 10)}, /* Sierra Wireless MC73xx */ - {QMI_FIXED_INTF(0x1199, 0x68c0, 11)}, /* Sierra Wireless MC73xx */ {QMI_FIXED_INTF(0x1199, 0x901c, 8)}, /* Sierra Wireless EM7700 */ {QMI_FIXED_INTF(0x1199, 0x901f, 8)}, /* Sierra Wireless EM7355 */ {QMI_FIXED_INTF(0x1199, 0x9041, 8)}, /* Sierra Wireless MC7305/MC7355 */ {QMI_FIXED_INTF(0x1199, 0x9051, 8)}, /* Netgear AirCard 340U */ + {QMI_FIXED_INTF(0x1199, 0x9053, 8)}, /* Sierra Wireless Modem */ + {QMI_FIXED_INTF(0x1199, 0x9054, 8)}, /* Sierra Wireless Modem */ + {QMI_FIXED_INTF(0x1199, 0x9055, 8)}, /* Netgear AirCard 341U */ + {QMI_FIXED_INTF(0x1199, 0x9056, 8)}, /* Sierra Wireless Modem */ + {QMI_FIXED_INTF(0x1199, 0x9061, 8)}, /* Sierra Wireless Modem */ {QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */ {QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */ {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ diff --git a/drivers/net/wireless/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/iwlwifi/mvm/mac80211.c index b41dc84e9431..8735ef1f44ae 100644 --- a/drivers/net/wireless/iwlwifi/mvm/mac80211.c +++ b/drivers/net/wireless/iwlwifi/mvm/mac80211.c @@ -826,7 +826,7 @@ static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw, if (ret) goto out_remove_mac; - if (!mvm->bf_allowed_vif && + if (!mvm->bf_allowed_vif && false && vif->type == NL80211_IFTYPE_STATION && !vif->p2p && mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_BF_UPDATED){ mvm->bf_allowed_vif = mvmvif; diff --git a/drivers/of/address.c b/drivers/of/address.c index 95351b2a112c..5edfcb0da37d 100644 --- a/drivers/of/address.c +++ b/drivers/of/address.c @@ -701,3 +701,113 @@ void __iomem *of_iomap(struct device_node *np, int index) return ioremap(res.start, resource_size(&res)); } EXPORT_SYMBOL(of_iomap); + +/** + * of_dma_get_range - Get DMA range info + * @np: device node to get DMA range info + * 
@dma_addr: pointer to store initial DMA address of DMA range + * @paddr: pointer to store initial CPU address of DMA range + * @size: pointer to store size of DMA range + * + * Look in bottom up direction for the first "dma-ranges" property + * and parse it. + * dma-ranges format: + * DMA addr (dma_addr) : naddr cells + * CPU addr (phys_addr_t) : pna cells + * size : nsize cells + * + * It returns -ENODEV if "dma-ranges" property was not found + * for this device in DT. + */ +int of_dma_get_range(struct device_node *np, u64 *dma_addr, u64 *paddr, u64 *size) +{ + struct device_node *node = of_node_get(np); + const __be32 *ranges = NULL; + int len, naddr, nsize, pna; + int ret = 0; + u64 dmaaddr; + + if (!node) + return -EINVAL; + + while (1) { + naddr = of_n_addr_cells(node); + nsize = of_n_size_cells(node); + node = of_get_next_parent(node); + if (!node) + break; + + ranges = of_get_property(node, "dma-ranges", &len); + + /* Ignore empty ranges, they imply no translation required */ + if (ranges && len > 0) + break; + + /* + * At least an empty dma-ranges property has to be defined for + * the parent node if DMA is supported + */ + if (!ranges) + break; + } + + if (!ranges) { + pr_debug("%s: no dma-ranges found for node(%s)\n", + __func__, np->full_name); + ret = -ENODEV; + goto out; + } + + len /= sizeof(u32); + + pna = of_n_addr_cells(node); + + /* dma-ranges format: + * DMA addr : naddr cells + * CPU addr : pna cells + * size : nsize cells + */ + dmaaddr = of_read_number(ranges, naddr); + *paddr = of_translate_dma_address(np, ranges); + if (*paddr == OF_BAD_ADDR) { + pr_err("%s: translation of DMA address(%pad) to CPU address failed node(%s)\n", + __func__, dma_addr, np->full_name); + ret = -EINVAL; + goto out; + } + *dma_addr = dmaaddr; + + *size = of_read_number(ranges + naddr + pna, nsize); + + pr_debug("dma_addr(%llx) cpu_addr(%llx) size(%llx)\n", + *dma_addr, *paddr, *size); + +out: + of_node_put(node); + + return ret; +} +EXPORT_SYMBOL_GPL(of_dma_get_range); + +/** + * of_dma_is_coherent - Check if device is coherent + * @np: device node + * + * It returns true if "dma-coherent" property was found + * for this device in DT. + */ +bool of_dma_is_coherent(struct device_node *np) +{ + struct device_node *node = of_node_get(np); + + while (node) { + if (of_property_read_bool(node, "dma-coherent")) { + of_node_put(node); + return true; + } + node = of_get_next_parent(node); + } + of_node_put(node); + return false; +} +EXPORT_SYMBOL_GPL(of_dma_is_coherent); diff --git a/drivers/of/platform.c b/drivers/of/platform.c index 92c060e58b02..6c48d73a7fd7 100644 --- a/drivers/of/platform.c +++ b/drivers/of/platform.c @@ -138,9 +138,6 @@ struct platform_device *of_device_alloc(struct device_node *np, } dev->dev.of_node = of_node_get(np); -#if defined(CONFIG_MICROBLAZE) - dev->dev.dma_mask = &dev->archdata.dma_mask; -#endif dev->dev.parent = parent; if (bus_id) @@ -153,6 +150,64 @@ struct platform_device *of_device_alloc(struct device_node *np, EXPORT_SYMBOL(of_device_alloc); /** + * of_dma_configure - Setup DMA configuration + * @dev: Device to apply DMA configuration + * + * Try to get the device's DMA configuration from DT and update it + * accordingly. + * + * If the platform code needs to use its own special DMA configuration, it + * can use a platform bus notifier and handle the BUS_NOTIFY_ADD_DEVICE event + * to fix up the DMA configuration. 
+ */ +static void of_dma_configure(struct platform_device *pdev) +{ + u64 dma_addr, paddr, size; + int ret; + struct device *dev = &pdev->dev; + +#if defined(CONFIG_MICROBLAZE) + pdev->archdata.dma_mask = 0xffffffffUL; +#endif + + /* + * Set default dma-mask to 32 bit. Drivers are expected to setup + * the correct supported dma_mask. + */ + dev->coherent_dma_mask = DMA_BIT_MASK(32); + + /* + * Set it to coherent_dma_mask by default if the architecture + * code has not set it. + */ + if (!dev->dma_mask) + dev->dma_mask = &dev->coherent_dma_mask; + + /* + * if dma-coherent property exist, call arch hook to setup + * dma coherent operations. + */ + if (of_dma_is_coherent(dev->of_node)) { + set_arch_dma_coherent_ops(dev); + dev_dbg(dev, "device is dma coherent\n"); + } + + /* + * if dma-ranges property doesn't exist - just return else + * setup the dma offset + */ + ret = of_dma_get_range(dev->of_node, &dma_addr, &paddr, &size); + if (ret < 0) { + dev_dbg(dev, "no dma range information to setup\n"); + return; + } + + /* DMA ranges found. Calculate and set dma_pfn_offset */ + dev->dma_pfn_offset = PFN_DOWN(paddr - dma_addr); + dev_dbg(dev, "dma_pfn_offset(%#08lx)\n", dev->dma_pfn_offset); +} + +/** * of_platform_device_create_pdata - Alloc, initialize and register an of_device * @np: pointer to node to create device for * @bus_id: name to assign device @@ -178,12 +233,7 @@ static struct platform_device *of_platform_device_create_pdata( if (!dev) goto err_clear_flag; -#if defined(CONFIG_MICROBLAZE) - dev->archdata.dma_mask = 0xffffffffUL; -#endif - dev->dev.coherent_dma_mask = DMA_BIT_MASK(32); - if (!dev->dev.dma_mask) - dev->dev.dma_mask = &dev->dev.coherent_dma_mask; + of_dma_configure(dev); dev->dev.bus = &platform_bus_type; dev->dev.platform_data = platform_data; diff --git a/drivers/parport/procfs.c b/drivers/parport/procfs.c index 92ed045a5f93..3b470801a04f 100644 --- a/drivers/parport/procfs.c +++ b/drivers/parport/procfs.c @@ -31,7 +31,7 @@ #define PARPORT_MIN_SPINTIME_VALUE 1 #define PARPORT_MAX_SPINTIME_VALUE 1000 -static int do_active_device(ctl_table *table, int write, +static int do_active_device(struct ctl_table *table, int write, void __user *result, size_t *lenp, loff_t *ppos) { struct parport *port = (struct parport *)table->extra1; @@ -68,7 +68,7 @@ static int do_active_device(ctl_table *table, int write, } #ifdef CONFIG_PARPORT_1284 -static int do_autoprobe(ctl_table *table, int write, +static int do_autoprobe(struct ctl_table *table, int write, void __user *result, size_t *lenp, loff_t *ppos) { struct parport_device_info *info = table->extra2; @@ -110,9 +110,9 @@ static int do_autoprobe(ctl_table *table, int write, } #endif /* IEEE1284.3 support. */ -static int do_hardware_base_addr (ctl_table *table, int write, - void __user *result, - size_t *lenp, loff_t *ppos) +static int do_hardware_base_addr(struct ctl_table *table, int write, + void __user *result, + size_t *lenp, loff_t *ppos) { struct parport *port = (struct parport *)table->extra1; char buffer[20]; @@ -138,9 +138,9 @@ static int do_hardware_base_addr (ctl_table *table, int write, return copy_to_user(result, buffer, len) ? 
-EFAULT : 0; } -static int do_hardware_irq (ctl_table *table, int write, - void __user *result, - size_t *lenp, loff_t *ppos) +static int do_hardware_irq(struct ctl_table *table, int write, + void __user *result, + size_t *lenp, loff_t *ppos) { struct parport *port = (struct parport *)table->extra1; char buffer[20]; @@ -166,9 +166,9 @@ static int do_hardware_irq (ctl_table *table, int write, return copy_to_user(result, buffer, len) ? -EFAULT : 0; } -static int do_hardware_dma (ctl_table *table, int write, - void __user *result, - size_t *lenp, loff_t *ppos) +static int do_hardware_dma(struct ctl_table *table, int write, + void __user *result, + size_t *lenp, loff_t *ppos) { struct parport *port = (struct parport *)table->extra1; char buffer[20]; @@ -194,9 +194,9 @@ static int do_hardware_dma (ctl_table *table, int write, return copy_to_user(result, buffer, len) ? -EFAULT : 0; } -static int do_hardware_modes (ctl_table *table, int write, - void __user *result, - size_t *lenp, loff_t *ppos) +static int do_hardware_modes(struct ctl_table *table, int write, + void __user *result, + size_t *lenp, loff_t *ppos) { struct parport *port = (struct parport *)table->extra1; char buffer[40]; @@ -255,11 +255,11 @@ PARPORT_MAX_SPINTIME_VALUE; struct parport_sysctl_table { struct ctl_table_header *sysctl_header; - ctl_table vars[12]; - ctl_table device_dir[2]; - ctl_table port_dir[2]; - ctl_table parport_dir[2]; - ctl_table dev_dir[2]; + struct ctl_table vars[12]; + struct ctl_table device_dir[2]; + struct ctl_table port_dir[2]; + struct ctl_table parport_dir[2]; + struct ctl_table dev_dir[2]; }; static const struct parport_sysctl_table parport_sysctl_template = { @@ -369,12 +369,12 @@ static const struct parport_sysctl_table parport_sysctl_template = { struct parport_device_sysctl_table { struct ctl_table_header *sysctl_header; - ctl_table vars[2]; - ctl_table device_dir[2]; - ctl_table devices_root_dir[2]; - ctl_table port_dir[2]; - ctl_table parport_dir[2]; - ctl_table dev_dir[2]; + struct ctl_table vars[2]; + struct ctl_table device_dir[2]; + struct ctl_table devices_root_dir[2]; + struct ctl_table port_dir[2]; + struct ctl_table parport_dir[2]; + struct ctl_table dev_dir[2]; }; static const struct parport_device_sysctl_table @@ -422,10 +422,10 @@ parport_device_sysctl_template = { struct parport_default_sysctl_table { struct ctl_table_header *sysctl_header; - ctl_table vars[3]; - ctl_table default_dir[2]; - ctl_table parport_dir[2]; - ctl_table dev_dir[2]; + struct ctl_table vars[3]; + struct ctl_table default_dir[2]; + struct ctl_table parport_dir[2]; + struct ctl_table dev_dir[2]; }; static struct parport_default_sysctl_table diff --git a/drivers/rapidio/devices/tsi721.c b/drivers/rapidio/devices/tsi721.c index 1753dc693c15..2ca1a0b3ad57 100644 --- a/drivers/rapidio/devices/tsi721.c +++ b/drivers/rapidio/devices/tsi721.c @@ -768,15 +768,10 @@ static int tsi721_enable_msix(struct tsi721_device *priv) } #endif /* CONFIG_RAPIDIO_DMA_ENGINE */ - err = pci_enable_msix(priv->pdev, entries, ARRAY_SIZE(entries)); + err = pci_enable_msix_exact(priv->pdev, entries, ARRAY_SIZE(entries)); if (err) { - if (err > 0) - dev_info(&priv->pdev->dev, - "Only %d MSI-X vectors available, " - "not using MSI-X\n", err); - else - dev_err(&priv->pdev->dev, - "Failed to enable MSI-X (err=%d)\n", err); + dev_err(&priv->pdev->dev, + "Failed to enable MSI-X (err=%d)\n", err); return err; } diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig index 2e565f8e5165..71988b69eca6 100644 --- a/drivers/rtc/Kconfig +++ 
b/drivers/rtc/Kconfig @@ -386,12 +386,12 @@ config RTC_DRV_PCF8583 will be called rtc-pcf8583. config RTC_DRV_M41T80 - tristate "ST M41T62/65/M41T80/81/82/83/84/85/87" + tristate "ST M41T62/65/M41T80/81/82/83/84/85/87 and compatible" help If you say Y here you will get support for the ST M41T60 and M41T80 RTC chips series. Currently, the following chips are supported: M41T62, M41T65, M41T80, M41T81, M41T82, M41T83, M41ST84, - M41ST85, and M41ST87. + M41ST85, M41ST87, and MicroCrystal RV4162. This driver can also be built as a module. If so, the module will be called rtc-m41t80. @@ -573,6 +573,17 @@ config RTC_DRV_DS1305 This driver can also be built as a module. If so, the module will be called rtc-ds1305. +config RTC_DRV_DS1343 + select REGMAP_SPI + tristate "Dallas/Maxim DS1343/DS1344" + help + If you say yes here you get support for the + Dallas/Maxim DS1343 and DS1344 real time clock chips. + Support for trickle charger, alarm is provided. + + This driver can also be built as a module. If so, the module + will be called rtc-ds1343. + config RTC_DRV_DS1347 tristate "Dallas/Maxim DS1347" help @@ -650,6 +661,14 @@ config RTC_DRV_RX4581 This driver can also be built as a module. If so the module will be called rtc-rx4581. +config RTC_DRV_MCP795 + tristate "Microchip MCP795" + help + If you say yes here you will get support for the Microchip MCP795. + + This driver can also be built as a module. If so the module + will be called rtc-mcp795. + endif # SPI_MASTER comment "Platform RTC drivers" @@ -758,6 +777,16 @@ config RTC_DRV_DA9055 This driver can also be built as a module. If so, the module will be called rtc-da9055 +config RTC_DRV_DA9063 + tristate "Dialog Semiconductor DA9063 RTC" + depends on MFD_DA9063 + help + If you say yes here you will get support for the RTC subsystem + of the Dialog Semiconductor DA9063. + + This driver can also be built as a module. If so, the module + will be called "rtc-da9063". + config RTC_DRV_EFI tristate "EFI RTC" depends on IA64 @@ -1327,6 +1356,15 @@ config RTC_DRV_MOXART This driver can also be built as a module. If so, the module will be called rtc-moxart +config RTC_DRV_XGENE + tristate "APM X-Gene RTC" + help + If you say yes here you get support for the APM X-Gene SoC real time + clock. + + This driver can also be built as a module, if so, the module + will be called "rtc-xgene". 
+ comment "HID Sensor RTC drivers" config RTC_DRV_HID_SENSOR_TIME diff --git a/drivers/rtc/Makefile b/drivers/rtc/Makefile index 40a09915c8f6..70347d041d10 100644 --- a/drivers/rtc/Makefile +++ b/drivers/rtc/Makefile @@ -32,6 +32,7 @@ obj-$(CONFIG_RTC_DRV_CMOS) += rtc-cmos.o obj-$(CONFIG_RTC_DRV_COH901331) += rtc-coh901331.o obj-$(CONFIG_RTC_DRV_DA9052) += rtc-da9052.o obj-$(CONFIG_RTC_DRV_DA9055) += rtc-da9055.o +obj-$(CONFIG_RTC_DRV_DA9063) += rtc-da9063.o obj-$(CONFIG_RTC_DRV_DAVINCI) += rtc-davinci.o obj-$(CONFIG_RTC_DRV_DM355EVM) += rtc-dm355evm.o obj-$(CONFIG_RTC_DRV_VRTC) += rtc-mrst.o @@ -40,6 +41,7 @@ obj-$(CONFIG_RTC_DRV_DS1286) += rtc-ds1286.o obj-$(CONFIG_RTC_DRV_DS1302) += rtc-ds1302.o obj-$(CONFIG_RTC_DRV_DS1305) += rtc-ds1305.o obj-$(CONFIG_RTC_DRV_DS1307) += rtc-ds1307.o +obj-$(CONFIG_RTC_DRV_DS1343) += rtc-ds1343.o obj-$(CONFIG_RTC_DRV_DS1347) += rtc-ds1347.o obj-$(CONFIG_RTC_DRV_DS1374) += rtc-ds1374.o obj-$(CONFIG_RTC_DRV_DS1390) += rtc-ds1390.o @@ -80,6 +82,7 @@ obj-$(CONFIG_RTC_DRV_MAX8997) += rtc-max8997.o obj-$(CONFIG_RTC_DRV_MAX6902) += rtc-max6902.o obj-$(CONFIG_RTC_DRV_MAX77686) += rtc-max77686.o obj-$(CONFIG_RTC_DRV_MC13XXX) += rtc-mc13xxx.o +obj-$(CONFIG_RTC_DRV_MCP795) += rtc-mcp795.o obj-$(CONFIG_RTC_DRV_MSM6242) += rtc-msm6242.o obj-$(CONFIG_RTC_DRV_MPC5121) += rtc-mpc5121.o obj-$(CONFIG_RTC_DRV_MV) += rtc-mv.o @@ -135,5 +138,6 @@ obj-$(CONFIG_RTC_DRV_VT8500) += rtc-vt8500.o obj-$(CONFIG_RTC_DRV_WM831X) += rtc-wm831x.o obj-$(CONFIG_RTC_DRV_WM8350) += rtc-wm8350.o obj-$(CONFIG_RTC_DRV_X1205) += rtc-x1205.o +obj-$(CONFIG_RTC_DRV_XGENE) += rtc-xgene.o obj-$(CONFIG_RTC_DRV_SIRFSOC) += rtc-sirfsoc.o obj-$(CONFIG_RTC_DRV_MOXART) += rtc-moxart.o diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c index c2eff6082363..5813fa52c3d4 100644 --- a/drivers/rtc/interface.c +++ b/drivers/rtc/interface.c @@ -292,7 +292,8 @@ int __rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm) dev_dbg(&rtc->dev, "alarm rollover: %s\n", "year"); do { alarm->time.tm_year++; - } while (rtc_valid_tm(&alarm->time) != 0); + } while (!is_leap_year(alarm->time.tm_year + 1900) + && rtc_valid_tm(&alarm->time) != 0); break; default: @@ -300,7 +301,16 @@ int __rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm) } done: - return 0; + err = rtc_valid_tm(&alarm->time); + + if (err) { + dev_warn(&rtc->dev, "invalid alarm value: %d-%d-%d %d:%d:%d\n", + alarm->time.tm_year + 1900, alarm->time.tm_mon + 1, + alarm->time.tm_mday, alarm->time.tm_hour, alarm->time.tm_min, + alarm->time.tm_sec); + } + + return err; } int rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm) diff --git a/drivers/rtc/rtc-88pm860x.c b/drivers/rtc/rtc-88pm860x.c index 816504846cdd..0c6add1a38dc 100644 --- a/drivers/rtc/rtc-88pm860x.c +++ b/drivers/rtc/rtc-88pm860x.c @@ -293,7 +293,7 @@ static int pm860x_rtc_dt_init(struct platform_device *pdev, int ret; if (!np) return -ENODEV; - np = of_find_node_by_name(np, "rtc"); + np = of_get_child_by_name(np, "rtc"); if (!np) { dev_err(&pdev->dev, "failed to find rtc node\n"); return -ENODEV; @@ -301,6 +301,7 @@ static int pm860x_rtc_dt_init(struct platform_device *pdev, ret = of_property_read_u32(np, "marvell,88pm860x-vrtc", &info->vrtc); if (ret) info->vrtc = 0; + of_node_put(np); return 0; } #else diff --git a/drivers/rtc/rtc-at91rm9200.c b/drivers/rtc/rtc-at91rm9200.c index 3281c90691c3..44fe83ee9bee 100644 --- a/drivers/rtc/rtc-at91rm9200.c +++ b/drivers/rtc/rtc-at91rm9200.c @@ -48,6 +48,7 @@ struct at91_rtc_config { static const struct 
at91_rtc_config *at91_rtc_config; static DECLARE_COMPLETION(at91_rtc_updated); +static DECLARE_COMPLETION(at91_rtc_upd_rdy); static unsigned int at91_alarm_year = AT91_RTC_EPOCH; static void __iomem *at91_rtc_regs; static int irq; @@ -161,6 +162,8 @@ static int at91_rtc_settime(struct device *dev, struct rtc_time *tm) 1900 + tm->tm_year, tm->tm_mon, tm->tm_mday, tm->tm_hour, tm->tm_min, tm->tm_sec); + wait_for_completion(&at91_rtc_upd_rdy); + /* Stop Time/Calendar from counting */ cr = at91_rtc_read(AT91_RTC_CR); at91_rtc_write(AT91_RTC_CR, cr | AT91_RTC_UPDCAL | AT91_RTC_UPDTIM); @@ -183,7 +186,9 @@ static int at91_rtc_settime(struct device *dev, struct rtc_time *tm) /* Restart Time/Calendar */ cr = at91_rtc_read(AT91_RTC_CR); + at91_rtc_write(AT91_RTC_SCCR, AT91_RTC_SECEV); at91_rtc_write(AT91_RTC_CR, cr & ~(AT91_RTC_UPDCAL | AT91_RTC_UPDTIM)); + at91_rtc_write_ier(AT91_RTC_SECEV); return 0; } @@ -290,8 +295,10 @@ static irqreturn_t at91_rtc_interrupt(int irq, void *dev_id) if (rtsr) { /* this interrupt is shared! Is it ours? */ if (rtsr & AT91_RTC_ALARM) events |= (RTC_AF | RTC_IRQF); - if (rtsr & AT91_RTC_SECEV) - events |= (RTC_UF | RTC_IRQF); + if (rtsr & AT91_RTC_SECEV) { + complete(&at91_rtc_upd_rdy); + at91_rtc_write_idr(AT91_RTC_SECEV); + } if (rtsr & AT91_RTC_ACKUPD) complete(&at91_rtc_updated); @@ -413,6 +420,11 @@ static int __init at91_rtc_probe(struct platform_device *pdev) return PTR_ERR(rtc); platform_set_drvdata(pdev, rtc); + /* enable SECEV interrupt in order to initialize at91_rtc_upd_rdy + * completion. + */ + at91_rtc_write_ier(AT91_RTC_SECEV); + dev_info(&pdev->dev, "AT91 Real Time Clock driver.\n"); return 0; } diff --git a/drivers/rtc/rtc-bfin.c b/drivers/rtc/rtc-bfin.c index 0c53f452849d..fe4bdb06a55a 100644 --- a/drivers/rtc/rtc-bfin.c +++ b/drivers/rtc/rtc-bfin.c @@ -346,7 +346,7 @@ static int bfin_rtc_probe(struct platform_device *pdev) { struct bfin_rtc *rtc; struct device *dev = &pdev->dev; - int ret = 0; + int ret; unsigned long timeout = jiffies + HZ; dev_dbg_stamp(dev); @@ -361,16 +361,17 @@ static int bfin_rtc_probe(struct platform_device *pdev) /* Register our RTC with the RTC framework */ rtc->rtc_dev = devm_rtc_device_register(dev, pdev->name, &bfin_rtc_ops, THIS_MODULE); - if (unlikely(IS_ERR(rtc->rtc_dev))) { - ret = PTR_ERR(rtc->rtc_dev); - goto err; - } + if (unlikely(IS_ERR(rtc->rtc_dev))) + return PTR_ERR(rtc->rtc_dev); /* Grab the IRQ and init the hardware */ ret = devm_request_irq(dev, IRQ_RTC, bfin_rtc_interrupt, 0, pdev->name, dev); if (unlikely(ret)) - goto err; + dev_err(&pdev->dev, + "unable to request IRQ; alarm won't work, " + "and writes will be delayed\n"); + /* sometimes the bootloader touched things, but the write complete was not * enabled, so let's just do a quick timeout here since the IRQ will not fire ... */ @@ -381,9 +382,6 @@ static int bfin_rtc_probe(struct platform_device *pdev) bfin_write_RTC_SWCNT(0); return 0; - -err: - return ret; } static int bfin_rtc_remove(struct platform_device *pdev) diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c index 0963c9309c74..b0e4a3eb33c7 100644 --- a/drivers/rtc/rtc-cmos.c +++ b/drivers/rtc/rtc-cmos.c @@ -647,6 +647,7 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq) int retval = 0; unsigned char rtc_control; unsigned address_space; + u32 flags = 0; /* there can be only one ... 
*/ if (cmos_rtc.dev) @@ -660,9 +661,12 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq) * REVISIT non-x86 systems may instead use memory space resources * (needing ioremap etc), not i/o space resources like this ... */ - ports = request_region(ports->start, - resource_size(ports), - driver_name); + if (RTC_IOMAPPED) + ports = request_region(ports->start, resource_size(ports), + driver_name); + else + ports = request_mem_region(ports->start, resource_size(ports), + driver_name); if (!ports) { dev_dbg(dev, "i/o registers already in use\n"); return -EBUSY; @@ -699,6 +703,11 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq) * expect CMOS_READ and friends to handle. */ if (info) { + if (info->flags) + flags = info->flags; + if (info->address_space) + address_space = info->address_space; + if (info->rtc_day_alarm && info->rtc_day_alarm < 128) cmos_rtc.day_alrm = info->rtc_day_alarm; if (info->rtc_mon_alarm && info->rtc_mon_alarm < 128) @@ -726,18 +735,21 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq) spin_lock_irq(&rtc_lock); - /* force periodic irq to CMOS reset default of 1024Hz; - * - * REVISIT it's been reported that at least one x86_64 ALI mobo - * doesn't use 32KHz here ... for portability we might need to - * do something about other clock frequencies. - */ - cmos_rtc.rtc->irq_freq = 1024; - hpet_set_periodic_freq(cmos_rtc.rtc->irq_freq); - CMOS_WRITE(RTC_REF_CLCK_32KHZ | 0x06, RTC_FREQ_SELECT); + if (!(flags & CMOS_RTC_FLAGS_NOFREQ)) { + /* force periodic irq to CMOS reset default of 1024Hz; + * + * REVISIT it's been reported that at least one x86_64 ALI + * mobo doesn't use 32KHz here ... for portability we might + * need to do something about other clock frequencies. + */ + cmos_rtc.rtc->irq_freq = 1024; + hpet_set_periodic_freq(cmos_rtc.rtc->irq_freq); + CMOS_WRITE(RTC_REF_CLCK_32KHZ | 0x06, RTC_FREQ_SELECT); + } /* disable irqs */ - cmos_irq_disable(&cmos_rtc, RTC_PIE | RTC_AIE | RTC_UIE); + if (is_valid_irq(rtc_irq)) + cmos_irq_disable(&cmos_rtc, RTC_PIE | RTC_AIE | RTC_UIE); rtc_control = CMOS_READ(RTC_CONTROL); @@ -802,14 +814,18 @@ cleanup1: cmos_rtc.dev = NULL; rtc_device_unregister(cmos_rtc.rtc); cleanup0: - release_region(ports->start, resource_size(ports)); + if (RTC_IOMAPPED) + release_region(ports->start, resource_size(ports)); + else + release_mem_region(ports->start, resource_size(ports)); return retval; } -static void cmos_do_shutdown(void) +static void cmos_do_shutdown(int rtc_irq) { spin_lock_irq(&rtc_lock); - cmos_irq_disable(&cmos_rtc, RTC_IRQMASK); + if (is_valid_irq(rtc_irq)) + cmos_irq_disable(&cmos_rtc, RTC_IRQMASK); spin_unlock_irq(&rtc_lock); } @@ -818,7 +834,7 @@ static void __exit cmos_do_remove(struct device *dev) struct cmos_rtc *cmos = dev_get_drvdata(dev); struct resource *ports; - cmos_do_shutdown(); + cmos_do_shutdown(cmos->irq); sysfs_remove_bin_file(&dev->kobj, &nvram); @@ -831,7 +847,10 @@ static void __exit cmos_do_remove(struct device *dev) cmos->rtc = NULL; ports = cmos->iomem; - release_region(ports->start, resource_size(ports)); + if (RTC_IOMAPPED) + release_region(ports->start, resource_size(ports)); + else + release_mem_region(ports->start, resource_size(ports)); cmos->iomem = NULL; cmos->dev = NULL; @@ -1065,10 +1084,13 @@ static void __exit cmos_pnp_remove(struct pnp_dev *pnp) static void cmos_pnp_shutdown(struct pnp_dev *pnp) { - if (system_state == SYSTEM_POWER_OFF && !cmos_poweroff(&pnp->dev)) + struct device *dev = &pnp->dev; + struct cmos_rtc *cmos = 
dev_get_drvdata(dev); + + if (system_state == SYSTEM_POWER_OFF && !cmos_poweroff(dev)) return; - cmos_do_shutdown(); + cmos_do_shutdown(cmos->irq); } static const struct pnp_device_id rtc_ids[] = { @@ -1143,11 +1165,21 @@ static inline void cmos_of_init(struct platform_device *pdev) {} static int __init cmos_platform_probe(struct platform_device *pdev) { + struct resource *resource; + int irq; + cmos_of_init(pdev); cmos_wake_setup(&pdev->dev); - return cmos_do_probe(&pdev->dev, - platform_get_resource(pdev, IORESOURCE_IO, 0), - platform_get_irq(pdev, 0)); + + if (RTC_IOMAPPED) + resource = platform_get_resource(pdev, IORESOURCE_IO, 0); + else + resource = platform_get_resource(pdev, IORESOURCE_MEM, 0); + irq = platform_get_irq(pdev, 0); + if (irq < 0) + irq = -1; + + return cmos_do_probe(&pdev->dev, resource, irq); } static int __exit cmos_platform_remove(struct platform_device *pdev) @@ -1158,10 +1190,13 @@ static int __exit cmos_platform_remove(struct platform_device *pdev) static void cmos_platform_shutdown(struct platform_device *pdev) { - if (system_state == SYSTEM_POWER_OFF && !cmos_poweroff(&pdev->dev)) + struct device *dev = &pdev->dev; + struct cmos_rtc *cmos = dev_get_drvdata(dev); + + if (system_state == SYSTEM_POWER_OFF && !cmos_poweroff(dev)) return; - cmos_do_shutdown(); + cmos_do_shutdown(cmos->irq); } /* work with hotplug and coldplug */ diff --git a/drivers/rtc/rtc-da9052.c b/drivers/rtc/rtc-da9052.c index a1cbf64242a5..e5c9486cf452 100644 --- a/drivers/rtc/rtc-da9052.c +++ b/drivers/rtc/rtc-da9052.c @@ -20,28 +20,28 @@ #include <linux/mfd/da9052/da9052.h> #include <linux/mfd/da9052/reg.h> -#define rtc_err(da9052, fmt, ...) \ - dev_err(da9052->dev, "%s: " fmt, __func__, ##__VA_ARGS__) +#define rtc_err(rtc, fmt, ...) \ + dev_err(rtc->da9052->dev, "%s: " fmt, __func__, ##__VA_ARGS__) struct da9052_rtc { struct rtc_device *rtc; struct da9052 *da9052; }; -static int da9052_rtc_enable_alarm(struct da9052 *da9052, bool enable) +static int da9052_rtc_enable_alarm(struct da9052_rtc *rtc, bool enable) { int ret; if (enable) { - ret = da9052_reg_update(da9052, DA9052_ALARM_Y_REG, - DA9052_ALARM_Y_ALARM_ON, - DA9052_ALARM_Y_ALARM_ON); + ret = da9052_reg_update(rtc->da9052, DA9052_ALARM_Y_REG, + DA9052_ALARM_Y_ALARM_ON|DA9052_ALARM_Y_TICK_ON, + DA9052_ALARM_Y_ALARM_ON); if (ret != 0) - rtc_err(da9052, "Failed to enable ALM: %d\n", ret); + rtc_err(rtc, "Failed to enable ALM: %d\n", ret); } else { - ret = da9052_reg_update(da9052, DA9052_ALARM_Y_REG, - DA9052_ALARM_Y_ALARM_ON, 0); + ret = da9052_reg_update(rtc->da9052, DA9052_ALARM_Y_REG, + DA9052_ALARM_Y_ALARM_ON|DA9052_ALARM_Y_TICK_ON, 0); if (ret != 0) - rtc_err(da9052, "Write error: %d\n", ret); + rtc_err(rtc, "Write error: %d\n", ret); } return ret; } @@ -49,31 +49,20 @@ static int da9052_rtc_enable_alarm(struct da9052 *da9052, bool enable) static irqreturn_t da9052_rtc_irq(int irq, void *data) { struct da9052_rtc *rtc = data; - int ret; - ret = da9052_reg_read(rtc->da9052, DA9052_ALARM_MI_REG); - if (ret < 0) { - rtc_err(rtc->da9052, "Read error: %d\n", ret); - return IRQ_NONE; - } - - if (ret & DA9052_ALARMMI_ALARMTYPE) { - da9052_rtc_enable_alarm(rtc->da9052, 0); - rtc_update_irq(rtc->rtc, 1, RTC_IRQF | RTC_AF); - } else - rtc_update_irq(rtc->rtc, 1, RTC_IRQF | RTC_PF); + rtc_update_irq(rtc->rtc, 1, RTC_IRQF | RTC_AF); return IRQ_HANDLED; } -static int da9052_read_alarm(struct da9052 *da9052, struct rtc_time *rtc_tm) +static int da9052_read_alarm(struct da9052_rtc *rtc, struct rtc_time *rtc_tm) { int ret; uint8_t v[5]; - ret = 
da9052_group_read(da9052, DA9052_ALARM_MI_REG, 5, v); + ret = da9052_group_read(rtc->da9052, DA9052_ALARM_MI_REG, 5, v); if (ret != 0) { - rtc_err(da9052, "Failed to group read ALM: %d\n", ret); + rtc_err(rtc, "Failed to group read ALM: %d\n", ret); return ret; } @@ -84,23 +73,33 @@ static int da9052_read_alarm(struct da9052 *da9052, struct rtc_time *rtc_tm) rtc_tm->tm_min = v[0] & DA9052_RTC_MIN; ret = rtc_valid_tm(rtc_tm); - if (ret != 0) - return ret; return ret; } -static int da9052_set_alarm(struct da9052 *da9052, struct rtc_time *rtc_tm) +static int da9052_set_alarm(struct da9052_rtc *rtc, struct rtc_time *rtc_tm) { + struct da9052 *da9052 = rtc->da9052; + unsigned long alm_time; int ret; uint8_t v[3]; + ret = rtc_tm_to_time(rtc_tm, &alm_time); + if (ret != 0) + return ret; + + if (rtc_tm->tm_sec > 0) { + alm_time += 60 - rtc_tm->tm_sec; + rtc_time_to_tm(alm_time, rtc_tm); + } + BUG_ON(rtc_tm->tm_sec); /* it will cause repeated irqs if not zero */ + rtc_tm->tm_year -= 100; rtc_tm->tm_mon += 1; ret = da9052_reg_update(da9052, DA9052_ALARM_MI_REG, DA9052_RTC_MIN, rtc_tm->tm_min); if (ret != 0) { - rtc_err(da9052, "Failed to write ALRM MIN: %d\n", ret); + rtc_err(rtc, "Failed to write ALRM MIN: %d\n", ret); return ret; } @@ -115,22 +114,22 @@ static int da9052_set_alarm(struct da9052 *da9052, struct rtc_time *rtc_tm) ret = da9052_reg_update(da9052, DA9052_ALARM_Y_REG, DA9052_RTC_YEAR, rtc_tm->tm_year); if (ret != 0) - rtc_err(da9052, "Failed to write ALRM YEAR: %d\n", ret); + rtc_err(rtc, "Failed to write ALRM YEAR: %d\n", ret); return ret; } -static int da9052_rtc_get_alarm_status(struct da9052 *da9052) +static int da9052_rtc_get_alarm_status(struct da9052_rtc *rtc) { int ret; - ret = da9052_reg_read(da9052, DA9052_ALARM_Y_REG); + ret = da9052_reg_read(rtc->da9052, DA9052_ALARM_Y_REG); if (ret < 0) { - rtc_err(da9052, "Failed to read ALM: %d\n", ret); + rtc_err(rtc, "Failed to read ALM: %d\n", ret); return ret; } - ret &= DA9052_ALARM_Y_ALARM_ON; - return (ret > 0) ? 
1 : 0; + + return !!(ret&DA9052_ALARM_Y_ALARM_ON); } static int da9052_rtc_read_time(struct device *dev, struct rtc_time *rtc_tm) @@ -141,7 +140,7 @@ static int da9052_rtc_read_time(struct device *dev, struct rtc_time *rtc_tm) ret = da9052_group_read(rtc->da9052, DA9052_COUNT_S_REG, 6, v); if (ret < 0) { - rtc_err(rtc->da9052, "Failed to read RTC time : %d\n", ret); + rtc_err(rtc, "Failed to read RTC time : %d\n", ret); return ret; } @@ -153,18 +152,14 @@ static int da9052_rtc_read_time(struct device *dev, struct rtc_time *rtc_tm) rtc_tm->tm_sec = v[0] & DA9052_RTC_SEC; ret = rtc_valid_tm(rtc_tm); - if (ret != 0) { - rtc_err(rtc->da9052, "rtc_valid_tm failed: %d\n", ret); - return ret; - } - - return 0; + return ret; } static int da9052_rtc_set_time(struct device *dev, struct rtc_time *tm) { struct da9052_rtc *rtc; uint8_t v[6]; + int ret; rtc = dev_get_drvdata(dev); @@ -175,7 +170,10 @@ static int da9052_rtc_set_time(struct device *dev, struct rtc_time *tm) v[4] = tm->tm_mon + 1; v[5] = tm->tm_year - 100; - return da9052_group_write(rtc->da9052, DA9052_COUNT_S_REG, 6, v); + ret = da9052_group_write(rtc->da9052, DA9052_COUNT_S_REG, 6, v); + if (ret < 0) + rtc_err(rtc, "failed to set RTC time: %d\n", ret); + return ret; } static int da9052_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm) @@ -184,13 +182,13 @@ static int da9052_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm) struct rtc_time *tm = &alrm->time; struct da9052_rtc *rtc = dev_get_drvdata(dev); - ret = da9052_read_alarm(rtc->da9052, tm); - - if (ret) + ret = da9052_read_alarm(rtc, tm); + if (ret < 0) { + rtc_err(rtc, "failed to read RTC alarm: %d\n", ret); return ret; + } - alrm->enabled = da9052_rtc_get_alarm_status(rtc->da9052); - + alrm->enabled = da9052_rtc_get_alarm_status(rtc); return 0; } @@ -200,16 +198,15 @@ static int da9052_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alrm) struct rtc_time *tm = &alrm->time; struct da9052_rtc *rtc = dev_get_drvdata(dev); - ret = da9052_rtc_enable_alarm(rtc->da9052, 0); + ret = da9052_rtc_enable_alarm(rtc, 0); if (ret < 0) return ret; - ret = da9052_set_alarm(rtc->da9052, tm); - if (ret) + ret = da9052_set_alarm(rtc, tm); + if (ret < 0) return ret; - ret = da9052_rtc_enable_alarm(rtc->da9052, 1); - + ret = da9052_rtc_enable_alarm(rtc, 1); return ret; } @@ -217,7 +214,7 @@ static int da9052_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled) { struct da9052_rtc *rtc = dev_get_drvdata(dev); - return da9052_rtc_enable_alarm(rtc->da9052, enabled); + return da9052_rtc_enable_alarm(rtc, enabled); } static const struct rtc_class_ops da9052_rtc_ops = { @@ -239,10 +236,23 @@ static int da9052_rtc_probe(struct platform_device *pdev) rtc->da9052 = dev_get_drvdata(pdev->dev.parent); platform_set_drvdata(pdev, rtc); + + ret = da9052_reg_write(rtc->da9052, DA9052_BBAT_CONT_REG, 0xFE); + if (ret < 0) { + rtc_err(rtc, + "Failed to setup RTC battery charging: %d\n", ret); + return ret; + } + + ret = da9052_reg_update(rtc->da9052, DA9052_ALARM_Y_REG, + DA9052_ALARM_Y_TICK_ON, 0); + if (ret != 0) + rtc_err(rtc, "Failed to disable TICKS: %d\n", ret); + ret = da9052_request_irq(rtc->da9052, DA9052_IRQ_ALARM, "ALM", da9052_rtc_irq, rtc); if (ret != 0) { - rtc_err(rtc->da9052, "irq registration failed: %d\n", ret); + rtc_err(rtc, "irq registration failed: %d\n", ret); return ret; } @@ -261,7 +271,7 @@ static struct platform_driver da9052_rtc_driver = { module_platform_driver(da9052_rtc_driver); -MODULE_AUTHOR("David Dajun Chen <dchen@diasemi.com>"); 
+MODULE_AUTHOR("Anthony Olech <Anthony.Olech@diasemi.com>"); MODULE_DESCRIPTION("RTC driver for Dialog DA9052 PMIC"); MODULE_LICENSE("GPL"); MODULE_ALIAS("platform:da9052-rtc"); diff --git a/drivers/rtc/rtc-da9063.c b/drivers/rtc/rtc-da9063.c new file mode 100644 index 000000000000..595393098b09 --- /dev/null +++ b/drivers/rtc/rtc-da9063.c @@ -0,0 +1,333 @@ +/* rtc-da9063.c - Real time clock device driver for DA9063 + * Copyright (C) 2013-14 Dialog Semiconductor Ltd. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Library General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Library General Public License for more details. + */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/init.h> +#include <linux/platform_device.h> +#include <linux/interrupt.h> +#include <linux/rtc.h> +#include <linux/slab.h> +#include <linux/delay.h> +#include <linux/regmap.h> +#include <linux/mfd/da9063/registers.h> +#include <linux/mfd/da9063/core.h> + +#define YEARS_TO_DA9063(year) ((year) - 100) +#define MONTHS_TO_DA9063(month) ((month) + 1) +#define YEARS_FROM_DA9063(year) ((year) + 100) +#define MONTHS_FROM_DA9063(month) ((month) - 1) + +#define RTC_DATA_LEN (DA9063_REG_COUNT_Y - DA9063_REG_COUNT_S + 1) +#define RTC_SEC 0 +#define RTC_MIN 1 +#define RTC_HOUR 2 +#define RTC_DAY 3 +#define RTC_MONTH 4 +#define RTC_YEAR 5 + +struct da9063_rtc { + struct rtc_device *rtc_dev; + struct da9063 *hw; + struct rtc_time alarm_time; + bool rtc_sync; +}; + +static void da9063_data_to_tm(u8 *data, struct rtc_time *tm) +{ + tm->tm_sec = data[RTC_SEC] & DA9063_COUNT_SEC_MASK; + tm->tm_min = data[RTC_MIN] & DA9063_COUNT_MIN_MASK; + tm->tm_hour = data[RTC_HOUR] & DA9063_COUNT_HOUR_MASK; + tm->tm_mday = data[RTC_DAY] & DA9063_COUNT_DAY_MASK; + tm->tm_mon = MONTHS_FROM_DA9063(data[RTC_MONTH] & + DA9063_COUNT_MONTH_MASK); + tm->tm_year = YEARS_FROM_DA9063(data[RTC_YEAR] & + DA9063_COUNT_YEAR_MASK); +} + +static void da9063_tm_to_data(struct rtc_time *tm, u8 *data) +{ + data[RTC_SEC] &= ~DA9063_COUNT_SEC_MASK; + data[RTC_SEC] |= tm->tm_sec & DA9063_COUNT_SEC_MASK; + + data[RTC_MIN] &= ~DA9063_COUNT_MIN_MASK; + data[RTC_MIN] |= tm->tm_min & DA9063_COUNT_MIN_MASK; + + data[RTC_HOUR] &= ~DA9063_COUNT_HOUR_MASK; + data[RTC_HOUR] |= tm->tm_hour & DA9063_COUNT_HOUR_MASK; + + data[RTC_DAY] &= ~DA9063_COUNT_DAY_MASK; + data[RTC_DAY] |= tm->tm_mday & DA9063_COUNT_DAY_MASK; + + data[RTC_MONTH] &= ~DA9063_COUNT_MONTH_MASK; + data[RTC_MONTH] |= MONTHS_TO_DA9063(tm->tm_mon) & + DA9063_COUNT_MONTH_MASK; + + data[RTC_YEAR] &= ~DA9063_COUNT_YEAR_MASK; + data[RTC_YEAR] |= YEARS_TO_DA9063(tm->tm_year) & + DA9063_COUNT_YEAR_MASK; +} + +static int da9063_rtc_stop_alarm(struct device *dev) +{ + struct da9063_rtc *rtc = dev_get_drvdata(dev); + + return regmap_update_bits(rtc->hw->regmap, DA9063_REG_ALARM_Y, + DA9063_ALARM_ON, 0); +} + +static int da9063_rtc_start_alarm(struct device *dev) +{ + struct da9063_rtc *rtc = dev_get_drvdata(dev); + + return regmap_update_bits(rtc->hw->regmap, DA9063_REG_ALARM_Y, + DA9063_ALARM_ON, DA9063_ALARM_ON); +} + +static int da9063_rtc_read_time(struct device *dev, struct rtc_time *tm) +{ + struct da9063_rtc *rtc = 
dev_get_drvdata(dev); + unsigned long tm_secs; + unsigned long al_secs; + u8 data[RTC_DATA_LEN]; + int ret; + + ret = regmap_bulk_read(rtc->hw->regmap, DA9063_REG_COUNT_S, + data, RTC_DATA_LEN); + if (ret < 0) { + dev_err(dev, "Failed to read RTC time data: %d\n", ret); + return ret; + } + + if (!(data[RTC_SEC] & DA9063_RTC_READ)) { + dev_dbg(dev, "RTC not yet ready to be read by the host\n"); + return -EINVAL; + } + + da9063_data_to_tm(data, tm); + + rtc_tm_to_time(tm, &tm_secs); + rtc_tm_to_time(&rtc->alarm_time, &al_secs); + + /* handle the rtc synchronisation delay */ + if (rtc->rtc_sync == true && al_secs - tm_secs == 1) + memcpy(tm, &rtc->alarm_time, sizeof(struct rtc_time)); + else + rtc->rtc_sync = false; + + return rtc_valid_tm(tm); +} + +static int da9063_rtc_set_time(struct device *dev, struct rtc_time *tm) +{ + struct da9063_rtc *rtc = dev_get_drvdata(dev); + u8 data[RTC_DATA_LEN]; + int ret; + + da9063_tm_to_data(tm, data); + ret = regmap_bulk_write(rtc->hw->regmap, DA9063_REG_COUNT_S, + data, RTC_DATA_LEN); + if (ret < 0) + dev_err(dev, "Failed to set RTC time data: %d\n", ret); + + return ret; +} + +static int da9063_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm) +{ + struct da9063_rtc *rtc = dev_get_drvdata(dev); + u8 data[RTC_DATA_LEN]; + int ret; + unsigned int val; + + ret = regmap_bulk_read(rtc->hw->regmap, DA9063_REG_ALARM_S, + &data[RTC_SEC], RTC_DATA_LEN); + if (ret < 0) + return ret; + + da9063_data_to_tm(data, &alrm->time); + + alrm->enabled = !!(data[RTC_YEAR] & DA9063_ALARM_ON); + + ret = regmap_read(rtc->hw->regmap, DA9063_REG_EVENT_A, &val); + if (ret < 0) + return ret; + + if (val & (DA9063_E_ALARM)) + alrm->pending = 1; + else + alrm->pending = 0; + + return 0; +} + +static int da9063_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alrm) +{ + struct da9063_rtc *rtc = dev_get_drvdata(dev); + u8 data[RTC_DATA_LEN]; + int ret; + + da9063_tm_to_data(&alrm->time, data); + + ret = da9063_rtc_stop_alarm(dev); + if (ret < 0) { + dev_err(dev, "Failed to stop alarm: %d\n", ret); + return ret; + } + + ret = regmap_bulk_write(rtc->hw->regmap, DA9063_REG_ALARM_S, + data, RTC_DATA_LEN); + if (ret < 0) { + dev_err(dev, "Failed to write alarm: %d\n", ret); + return ret; + } + + rtc->alarm_time = alrm->time; + + if (alrm->enabled) { + ret = da9063_rtc_start_alarm(dev); + if (ret < 0) { + dev_err(dev, "Failed to start alarm: %d\n", ret); + return ret; + } + } + + return ret; +} + +static int da9063_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled) +{ + if (enabled) + return da9063_rtc_start_alarm(dev); + else + return da9063_rtc_stop_alarm(dev); +} + +static irqreturn_t da9063_alarm_event(int irq, void *data) +{ + struct da9063_rtc *rtc = data; + + regmap_update_bits(rtc->hw->regmap, DA9063_REG_ALARM_Y, + DA9063_ALARM_ON, 0); + + rtc->rtc_sync = true; + rtc_update_irq(rtc->rtc_dev, 1, RTC_IRQF | RTC_AF); + + return IRQ_HANDLED; +} + +static const struct rtc_class_ops da9063_rtc_ops = { + .read_time = da9063_rtc_read_time, + .set_time = da9063_rtc_set_time, + .read_alarm = da9063_rtc_read_alarm, + .set_alarm = da9063_rtc_set_alarm, + .alarm_irq_enable = da9063_rtc_alarm_irq_enable, +}; + +static int da9063_rtc_probe(struct platform_device *pdev) +{ + struct da9063 *da9063 = dev_get_drvdata(pdev->dev.parent); + struct da9063_rtc *rtc; + int irq_alarm; + u8 data[RTC_DATA_LEN]; + int ret; + + ret = regmap_update_bits(da9063->regmap, DA9063_REG_CONTROL_E, + DA9063_RTC_EN, DA9063_RTC_EN); + if (ret < 0) { + dev_err(&pdev->dev, "Failed to enable 
RTC\n"); + goto err; + } + + ret = regmap_update_bits(da9063->regmap, DA9063_REG_EN_32K, + DA9063_CRYSTAL, DA9063_CRYSTAL); + if (ret < 0) { + dev_err(&pdev->dev, "Failed to run 32kHz oscillator\n"); + goto err; + } + + ret = regmap_update_bits(da9063->regmap, DA9063_REG_ALARM_S, + DA9063_ALARM_STATUS_TICK | DA9063_ALARM_STATUS_ALARM, + 0); + if (ret < 0) { + dev_err(&pdev->dev, "Failed to access RTC alarm register\n"); + goto err; + } + + ret = regmap_update_bits(da9063->regmap, DA9063_REG_ALARM_S, + DA9063_ALARM_STATUS_ALARM, + DA9063_ALARM_STATUS_ALARM); + if (ret < 0) { + dev_err(&pdev->dev, "Failed to access RTC alarm register\n"); + goto err; + } + + ret = regmap_update_bits(da9063->regmap, DA9063_REG_ALARM_Y, + DA9063_TICK_ON, 0); + if (ret < 0) { + dev_err(&pdev->dev, "Failed to disable TICKs\n"); + goto err; + } + + ret = regmap_bulk_read(da9063->regmap, DA9063_REG_ALARM_S, + data, RTC_DATA_LEN); + if (ret < 0) { + dev_err(&pdev->dev, "Failed to read initial alarm data: %d\n", + ret); + goto err; + } + + rtc = devm_kzalloc(&pdev->dev, sizeof(*rtc), GFP_KERNEL); + if (!rtc) + return -ENOMEM; + + platform_set_drvdata(pdev, rtc); + + irq_alarm = platform_get_irq_byname(pdev, "ALARM"); + ret = devm_request_threaded_irq(&pdev->dev, irq_alarm, NULL, + da9063_alarm_event, + IRQF_TRIGGER_LOW | IRQF_ONESHOT, + "ALARM", rtc); + if (ret) { + dev_err(&pdev->dev, "Failed to request ALARM IRQ %d: %d\n", + irq_alarm, ret); + goto err; + } + + rtc->hw = da9063; + rtc->rtc_dev = devm_rtc_device_register(&pdev->dev, DA9063_DRVNAME_RTC, + &da9063_rtc_ops, THIS_MODULE); + if (IS_ERR(rtc->rtc_dev)) + return PTR_ERR(rtc->rtc_dev); + + da9063_data_to_tm(data, &rtc->alarm_time); + rtc->rtc_sync = false; +err: + return ret; +} + +static struct platform_driver da9063_rtc_driver = { + .probe = da9063_rtc_probe, + .driver = { + .name = DA9063_DRVNAME_RTC, + .owner = THIS_MODULE, + }, +}; + +module_platform_driver(da9063_rtc_driver); + +MODULE_AUTHOR("S Twiss <stwiss.opensource@diasemi.com>"); +MODULE_DESCRIPTION("Real time clock device driver for Dialog DA9063"); +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS("platform:" DA9063_DRVNAME_RTC); diff --git a/drivers/rtc/rtc-ds1343.c b/drivers/rtc/rtc-ds1343.c new file mode 100644 index 000000000000..c3719189dd96 --- /dev/null +++ b/drivers/rtc/rtc-ds1343.c @@ -0,0 +1,689 @@ +/* rtc-ds1343.c + * + * Driver for Dallas Semiconductor DS1343 Low Current, SPI Compatible + * Real Time Clock + * + * Author : Raghavendra Chandra Ganiga <ravi23ganiga@gmail.com> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/device.h>
+#include <linux/spi/spi.h>
+#include <linux/regmap.h>
+#include <linux/rtc.h>
+#include <linux/bcd.h>
+#include <linux/pm.h>
+#include <linux/slab.h>
+
+#define DS1343_DRV_VERSION	"01.00"
+#define DALLAS_MAXIM_DS1343	0
+#define DALLAS_MAXIM_DS1344	1
+
+/* RTC DS1343 Registers */
+#define DS1343_SECONDS_REG	0x00
+#define DS1343_MINUTES_REG	0x01
+#define DS1343_HOURS_REG	0x02
+#define DS1343_DAY_REG		0x03
+#define DS1343_DATE_REG		0x04
+#define DS1343_MONTH_REG	0x05
+#define DS1343_YEAR_REG		0x06
+#define DS1343_ALM0_SEC_REG	0x07
+#define DS1343_ALM0_MIN_REG	0x08
+#define DS1343_ALM0_HOUR_REG	0x09
+#define DS1343_ALM0_DAY_REG	0x0A
+#define DS1343_ALM1_SEC_REG	0x0B
+#define DS1343_ALM1_MIN_REG	0x0C
+#define DS1343_ALM1_HOUR_REG	0x0D
+#define DS1343_ALM1_DAY_REG	0x0E
+#define DS1343_CONTROL_REG	0x0F
+#define DS1343_STATUS_REG	0x10
+#define DS1343_TRICKLE_REG	0x11
+
+/* DS1343 Control Registers bits */
+#define DS1343_EOSC		0x80
+#define DS1343_DOSF		0x20
+#define DS1343_EGFIL		0x10
+#define DS1343_SQW		0x08
+#define DS1343_INTCN		0x04
+#define DS1343_A1IE		0x02
+#define DS1343_A0IE		0x01
+
+/* DS1343 Status Registers bits */
+#define DS1343_OSF		0x80
+#define DS1343_IRQF1		0x02
+#define DS1343_IRQF0		0x01
+
+/* DS1343 Trickle Charger Registers bits */
+#define DS1343_TRICKLE_MAGIC	0xa0
+#define DS1343_TRICKLE_DS1	0x08
+#define DS1343_TRICKLE_1K	0x01
+#define DS1343_TRICKLE_2K	0x02
+#define DS1343_TRICKLE_4K	0x03
+
+static const struct spi_device_id ds1343_id[] = {
+	{ "ds1343", DALLAS_MAXIM_DS1343 },
+	{ "ds1344", DALLAS_MAXIM_DS1344 },
+	{ }
+};
+MODULE_DEVICE_TABLE(spi, ds1343_id);
+
+struct ds1343_priv {
+	struct spi_device *spi;
+	struct rtc_device *rtc;
+	struct regmap *map;
+	struct mutex mutex;
+	unsigned int irqen;
+	int irq;
+	int alarm_sec;
+	int alarm_min;
+	int alarm_hour;
+	int alarm_mday;
+};
+
+static int ds1343_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
+{
+	switch (cmd) {
+#ifdef RTC_SET_CHARGE
+	case RTC_SET_CHARGE:
+	{
+		int val;
+
+		if (copy_from_user(&val, (int __user *)arg, sizeof(int)))
+			return -EFAULT;
+
+		return regmap_write(priv->map, DS1343_TRICKLE_REG, val);
+	}
+	break;
+#endif
+	}
+
+	return -ENOIOCTLCMD;
+}
+
+static ssize_t ds1343_show_glitchfilter(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct ds1343_priv *priv = dev_get_drvdata(dev);
+	int glitch_filt_status, data;
+
+	regmap_read(priv->map, DS1343_CONTROL_REG, &data);
+
+	glitch_filt_status = !!(data & DS1343_EGFIL);
+
+	if (glitch_filt_status)
+		return sprintf(buf, "enabled\n");
+	else
+		return sprintf(buf, "disabled\n");
+}
+
+static ssize_t ds1343_store_glitchfilter(struct device *dev,
+					struct device_attribute *attr,
+					const char *buf, size_t count)
+{
+	struct ds1343_priv *priv = dev_get_drvdata(dev);
+	int data;
+
+	regmap_read(priv->map, DS1343_CONTROL_REG, &data);
+
+	if (strncmp(buf, "enabled", 7) == 0)
+		data |= DS1343_EGFIL;
+
+	else if (strncmp(buf, "disabled", 8) == 0)
+		data &= ~(DS1343_EGFIL);
+
+	else
+		return -EINVAL;
+
+	regmap_write(priv->map, DS1343_CONTROL_REG, data);
+
+	return count;
+}
+
+static DEVICE_ATTR(glitch_filter, S_IRUGO | S_IWUSR, ds1343_show_glitchfilter,
+			ds1343_store_glitchfilter);
+
+static ssize_t ds1343_show_alarmstatus(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct ds1343_priv *priv = dev_get_drvdata(dev);
+	int alarmstatus, data;
+
+	regmap_read(priv->map, DS1343_CONTROL_REG, &data);
+ + alarmstatus = !!(data & DS1343_A0IE); + + if (alarmstatus) + return sprintf(buf, "enabled\n"); + else + return sprintf(buf, "disabled\n"); +} + +static DEVICE_ATTR(alarm_status, S_IRUGO, ds1343_show_alarmstatus, NULL); + +static ssize_t ds1343_show_alarmmode(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct ds1343_priv *priv = dev_get_drvdata(dev); + int alarm_mode, data; + char *alarm_str; + + regmap_read(priv->map, DS1343_ALM0_SEC_REG, &data); + alarm_mode = (data & 0x80) >> 4; + + regmap_read(priv->map, DS1343_ALM0_MIN_REG, &data); + alarm_mode |= (data & 0x80) >> 5; + + regmap_read(priv->map, DS1343_ALM0_HOUR_REG, &data); + alarm_mode |= (data & 0x80) >> 6; + + regmap_read(priv->map, DS1343_ALM0_DAY_REG, &data); + alarm_mode |= (data & 0x80) >> 7; + + switch (alarm_mode) { + case 15: + alarm_str = "each second"; + break; + + case 7: + alarm_str = "seconds match"; + break; + + case 3: + alarm_str = "minutes and seconds match"; + break; + + case 1: + alarm_str = "hours, minutes and seconds match"; + break; + + case 0: + alarm_str = "day, hours, minutes and seconds match"; + break; + + default: + alarm_str = "invalid"; + break; + } + + return sprintf(buf, "%s\n", alarm_str); +} + +static DEVICE_ATTR(alarm_mode, S_IRUGO, ds1343_show_alarmmode, NULL); + +static ssize_t ds1343_show_tricklecharger(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct ds1343_priv *priv = dev_get_drvdata(dev); + int data; + char *diodes = "disabled", *resistors = " "; + + regmap_read(priv->map, DS1343_TRICKLE_REG, &data); + + if ((data & 0xf0) == DS1343_TRICKLE_MAGIC) { + switch (data & 0x0c) { + case DS1343_TRICKLE_DS1: + diodes = "one diode,"; + break; + + default: + diodes = "no diode,"; + break; + } + + switch (data & 0x03) { + case DS1343_TRICKLE_1K: + resistors = "1k Ohm"; + break; + + case DS1343_TRICKLE_2K: + resistors = "2k Ohm"; + break; + + case DS1343_TRICKLE_4K: + resistors = "4k Ohm"; + break; + + default: + diodes = "disabled"; + break; + } + } + + return sprintf(buf, "%s %s\n", diodes, resistors); +} + +static DEVICE_ATTR(trickle_charger, S_IRUGO, ds1343_show_tricklecharger, NULL); + +static int ds1343_sysfs_register(struct device *dev) +{ + struct ds1343_priv *priv = dev_get_drvdata(dev); + int err; + + err = device_create_file(dev, &dev_attr_glitch_filter); + if (err) + return err; + + err = device_create_file(dev, &dev_attr_trickle_charger); + if (err) + goto error1; + + if (priv->irq <= 0) + return err; + + err = device_create_file(dev, &dev_attr_alarm_mode); + if (err) + goto error2; + + err = device_create_file(dev, &dev_attr_alarm_status); + if (!err) + return err; + + device_remove_file(dev, &dev_attr_alarm_mode); + +error2: + device_remove_file(dev, &dev_attr_trickle_charger); + +error1: + device_remove_file(dev, &dev_attr_glitch_filter); + + return err; +} + +static void ds1343_sysfs_unregister(struct device *dev) +{ + struct ds1343_priv *priv = dev_get_drvdata(dev); + + device_remove_file(dev, &dev_attr_glitch_filter); + device_remove_file(dev, &dev_attr_trickle_charger); + + if (priv->irq <= 0) + return; + + device_remove_file(dev, &dev_attr_alarm_status); + device_remove_file(dev, &dev_attr_alarm_mode); +} + +static int ds1343_read_time(struct device *dev, struct rtc_time *dt) +{ + struct ds1343_priv *priv = dev_get_drvdata(dev); + unsigned char buf[7]; + int res; + + res = regmap_bulk_read(priv->map, DS1343_SECONDS_REG, buf, 7); + if (res) + return res; + + dt->tm_sec = bcd2bin(buf[0]); + dt->tm_min = bcd2bin(buf[1]); + 
dt->tm_hour = bcd2bin(buf[2] & 0x3F); + dt->tm_wday = bcd2bin(buf[3]) - 1; + dt->tm_mday = bcd2bin(buf[4]); + dt->tm_mon = bcd2bin(buf[5] & 0x1F) - 1; + dt->tm_year = bcd2bin(buf[6]) + 100; /* year offset from 1900 */ + + return rtc_valid_tm(dt); +} + +static int ds1343_set_time(struct device *dev, struct rtc_time *dt) +{ + struct ds1343_priv *priv = dev_get_drvdata(dev); + int res; + + res = regmap_write(priv->map, DS1343_SECONDS_REG, + bin2bcd(dt->tm_sec)); + if (res) + return res; + + res = regmap_write(priv->map, DS1343_MINUTES_REG, + bin2bcd(dt->tm_min)); + if (res) + return res; + + res = regmap_write(priv->map, DS1343_HOURS_REG, + bin2bcd(dt->tm_hour) & 0x3F); + if (res) + return res; + + res = regmap_write(priv->map, DS1343_DAY_REG, + bin2bcd(dt->tm_wday + 1)); + if (res) + return res; + + res = regmap_write(priv->map, DS1343_DATE_REG, + bin2bcd(dt->tm_mday)); + if (res) + return res; + + res = regmap_write(priv->map, DS1343_MONTH_REG, + bin2bcd(dt->tm_mon + 1)); + if (res) + return res; + + dt->tm_year %= 100; + + res = regmap_write(priv->map, DS1343_YEAR_REG, + bin2bcd(dt->tm_year)); + if (res) + return res; + + return 0; +} + +static int ds1343_update_alarm(struct device *dev) +{ + struct ds1343_priv *priv = dev_get_drvdata(dev); + unsigned int control, stat; + unsigned char buf[4]; + int res = 0; + + res = regmap_read(priv->map, DS1343_CONTROL_REG, &control); + if (res) + return res; + + res = regmap_read(priv->map, DS1343_STATUS_REG, &stat); + if (res) + return res; + + control &= ~(DS1343_A0IE); + stat &= ~(DS1343_IRQF0); + + res = regmap_write(priv->map, DS1343_CONTROL_REG, control); + if (res) + return res; + + res = regmap_write(priv->map, DS1343_STATUS_REG, stat); + if (res) + return res; + + buf[0] = priv->alarm_sec < 0 || (priv->irqen & RTC_UF) ? + 0x80 : bin2bcd(priv->alarm_sec) & 0x7F; + buf[1] = priv->alarm_min < 0 || (priv->irqen & RTC_UF) ? + 0x80 : bin2bcd(priv->alarm_min) & 0x7F; + buf[2] = priv->alarm_hour < 0 || (priv->irqen & RTC_UF) ? + 0x80 : bin2bcd(priv->alarm_hour) & 0x3F; + buf[3] = priv->alarm_mday < 0 || (priv->irqen & RTC_UF) ? + 0x80 : bin2bcd(priv->alarm_mday) & 0x7F; + + res = regmap_bulk_write(priv->map, DS1343_ALM0_SEC_REG, buf, 4); + if (res) + return res; + + if (priv->irqen) { + control |= DS1343_A0IE; + res = regmap_write(priv->map, DS1343_CONTROL_REG, control); + } + + return res; +} + +static int ds1343_read_alarm(struct device *dev, struct rtc_wkalrm *alarm) +{ + struct ds1343_priv *priv = dev_get_drvdata(dev); + int res = 0; + unsigned int stat; + + if (priv->irq <= 0) + return -EINVAL; + + mutex_lock(&priv->mutex); + + res = regmap_read(priv->map, DS1343_STATUS_REG, &stat); + if (res) + goto out; + + alarm->enabled = !!(priv->irqen & RTC_AF); + alarm->pending = !!(stat & DS1343_IRQF0); + + alarm->time.tm_sec = priv->alarm_sec < 0 ? 0 : priv->alarm_sec; + alarm->time.tm_min = priv->alarm_min < 0 ? 0 : priv->alarm_min; + alarm->time.tm_hour = priv->alarm_hour < 0 ? 0 : priv->alarm_hour; + alarm->time.tm_mday = priv->alarm_mday < 0 ? 
0 : priv->alarm_mday; + + alarm->time.tm_mon = -1; + alarm->time.tm_year = -1; + alarm->time.tm_wday = -1; + alarm->time.tm_yday = -1; + alarm->time.tm_isdst = -1; + +out: + mutex_unlock(&priv->mutex); + return res; +} + +static int ds1343_set_alarm(struct device *dev, struct rtc_wkalrm *alarm) +{ + struct ds1343_priv *priv = dev_get_drvdata(dev); + int res = 0; + + if (priv->irq <= 0) + return -EINVAL; + + mutex_lock(&priv->mutex); + + priv->alarm_sec = alarm->time.tm_sec; + priv->alarm_min = alarm->time.tm_min; + priv->alarm_hour = alarm->time.tm_hour; + priv->alarm_mday = alarm->time.tm_mday; + + if (alarm->enabled) + priv->irqen |= RTC_AF; + + res = ds1343_update_alarm(dev); + + mutex_unlock(&priv->mutex); + + return res; +} + +static int ds1343_alarm_irq_enable(struct device *dev, unsigned int enabled) +{ + struct ds1343_priv *priv = dev_get_drvdata(dev); + int res = 0; + + if (priv->irq <= 0) + return -EINVAL; + + mutex_lock(&priv->mutex); + + if (enabled) + priv->irqen |= RTC_AF; + else + priv->irqen &= ~RTC_AF; + + res = ds1343_update_alarm(dev); + + mutex_unlock(&priv->mutex); + + return res; +} + +static irqreturn_t ds1343_thread(int irq, void *dev_id) +{ + struct ds1343_priv *priv = dev_id; + unsigned int stat, control; + int res = 0; + + mutex_lock(&priv->mutex); + + res = regmap_read(priv->map, DS1343_STATUS_REG, &stat); + if (res) + goto out; + + if (stat & DS1343_IRQF0) { + stat &= ~DS1343_IRQF0; + regmap_write(priv->map, DS1343_STATUS_REG, stat); + + res = regmap_read(priv->map, DS1343_CONTROL_REG, &control); + if (res) + goto out; + + control &= ~DS1343_A0IE; + regmap_write(priv->map, DS1343_CONTROL_REG, control); + + rtc_update_irq(priv->rtc, 1, RTC_AF | RTC_IRQF); + } + +out: + mutex_unlock(&priv->mutex); + return IRQ_HANDLED; +} + +static const struct rtc_class_ops ds1343_rtc_ops = { + .ioctl = ds1343_ioctl, + .read_time = ds1343_read_time, + .set_time = ds1343_set_time, + .read_alarm = ds1343_read_alarm, + .set_alarm = ds1343_set_alarm, + .alarm_irq_enable = ds1343_alarm_irq_enable, +}; + +static int ds1343_probe(struct spi_device *spi) +{ + struct ds1343_priv *priv; + struct regmap_config config; + unsigned int data; + int res; + + memset(&config, 0, sizeof(config)); + config.reg_bits = 8; + config.val_bits = 8; + config.write_flag_mask = 0x80; + + priv = devm_kzalloc(&spi->dev, sizeof(struct ds1343_priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + priv->spi = spi; + mutex_init(&priv->mutex); + + /* RTC DS1347 works in spi mode 3 and + * its chip select is active high + */ + spi->mode = SPI_MODE_3 | SPI_CS_HIGH; + spi->bits_per_word = 8; + res = spi_setup(spi); + if (res) + return res; + + spi_set_drvdata(spi, priv); + + priv->map = devm_regmap_init_spi(spi, &config); + + if (IS_ERR(priv->map)) { + dev_err(&spi->dev, "spi regmap init failed for rtc ds1343\n"); + return PTR_ERR(priv->map); + } + + res = regmap_read(priv->map, DS1343_SECONDS_REG, &data); + if (res) + return res; + + regmap_read(priv->map, DS1343_CONTROL_REG, &data); + data |= DS1343_INTCN; + data &= ~(DS1343_EOSC | DS1343_A1IE | DS1343_A0IE); + regmap_write(priv->map, DS1343_CONTROL_REG, data); + + regmap_read(priv->map, DS1343_STATUS_REG, &data); + data &= ~(DS1343_OSF | DS1343_IRQF1 | DS1343_IRQF0); + regmap_write(priv->map, DS1343_STATUS_REG, data); + + priv->rtc = devm_rtc_device_register(&spi->dev, "ds1343", + &ds1343_rtc_ops, THIS_MODULE); + if (IS_ERR(priv->rtc)) { + dev_err(&spi->dev, "unable to register rtc ds1343\n"); + return PTR_ERR(priv->rtc); + } + + priv->irq = spi->irq; + + if 
(priv->irq >= 0) { + res = devm_request_threaded_irq(&spi->dev, spi->irq, NULL, + ds1343_thread, + IRQF_NO_SUSPEND | IRQF_ONESHOT, + "ds1343", priv); + if (res) { + priv->irq = -1; + dev_err(&spi->dev, + "unable to request irq for rtc ds1343\n"); + } else { + device_set_wakeup_capable(&spi->dev, 1); + } + } + + res = ds1343_sysfs_register(&spi->dev); + if (res) + dev_err(&spi->dev, + "unable to create sysfs entries for rtc ds1343\n"); + + return 0; +} + +static int ds1343_remove(struct spi_device *spi) +{ + struct ds1343_priv *priv = spi_get_drvdata(spi); + + if (spi->irq) { + mutex_lock(&priv->mutex); + priv->irqen &= ~RTC_AF; + mutex_unlock(&priv->mutex); + + devm_free_irq(&spi->dev, spi->irq, priv); + } + + spi_set_drvdata(spi, NULL); + + ds1343_sysfs_unregister(&spi->dev); + + return 0; +} + +#ifdef CONFIG_PM_SLEEP + +static int ds1343_suspend(struct device *dev) +{ + struct spi_device *spi = to_spi_device(dev); + + if (spi->irq >= 0 && device_may_wakeup(dev)) + enable_irq_wake(spi->irq); + + return 0; +} + +static int ds1343_resume(struct device *dev) +{ + struct spi_device *spi = to_spi_device(dev); + + if (spi->irq >= 0 && device_may_wakeup(dev)) + disable_irq_wake(spi->irq); + + return 0; +} + +#endif + +static SIMPLE_DEV_PM_OPS(ds1343_pm, ds1343_suspend, ds1343_resume); + +static struct spi_driver ds1343_driver = { + .driver = { + .name = "ds1343", + .owner = THIS_MODULE, + .pm = &ds1343_pm, + }, + .probe = ds1343_probe, + .remove = ds1343_remove, + .id_table = ds1343_id, +}; + +module_spi_driver(ds1343_driver); + +MODULE_DESCRIPTION("DS1343 RTC SPI Driver"); +MODULE_AUTHOR("Raghavendra Chandra Ganiga <ravi23ganiga@gmail.com>"); +MODULE_LICENSE("GPL v2"); +MODULE_VERSION(DS1343_DRV_VERSION); diff --git a/drivers/rtc/rtc-ds1742.c b/drivers/rtc/rtc-ds1742.c index 942103dac30f..c6b2191a4128 100644 --- a/drivers/rtc/rtc-ds1742.c +++ b/drivers/rtc/rtc-ds1742.c @@ -219,7 +219,7 @@ static int ds1742_rtc_remove(struct platform_device *pdev) return 0; } -static struct of_device_id __maybe_unused ds1742_rtc_of_match[] = { +static const struct of_device_id __maybe_unused ds1742_rtc_of_match[] = { { .compatible = "maxim,ds1742", }, { } }; diff --git a/drivers/rtc/rtc-efi.c b/drivers/rtc/rtc-efi.c index 797aa0252ba9..c4c38431012e 100644 --- a/drivers/rtc/rtc-efi.c +++ b/drivers/rtc/rtc-efi.c @@ -35,7 +35,7 @@ static inline int compute_yday(efi_time_t *eft) { /* efi_time_t.month is in the [1-12] so, we need -1 */ - return rtc_year_days(eft->day - 1, eft->month - 1, eft->year); + return rtc_year_days(eft->day, eft->month - 1, eft->year); } /* * returns day of the week [0-6] 0=Sunday diff --git a/drivers/rtc/rtc-hym8563.c b/drivers/rtc/rtc-hym8563.c index e5f13c4310fe..b936bb4096b5 100644 --- a/drivers/rtc/rtc-hym8563.c +++ b/drivers/rtc/rtc-hym8563.c @@ -418,6 +418,9 @@ static struct clk *hym8563_clkout_register_clk(struct hym8563 *hym8563) init.num_parents = 0; hym8563->clkout_hw.init = &init; + /* optional override of the clockname */ + of_property_read_string(node, "clock-output-names", &init.name); + /* register the clock */ clk = clk_register(&client->dev, &hym8563->clkout_hw); @@ -585,7 +588,7 @@ static const struct i2c_device_id hym8563_id[] = { }; MODULE_DEVICE_TABLE(i2c, hym8563_id); -static struct of_device_id hym8563_dt_idtable[] = { +static const struct of_device_id hym8563_dt_idtable[] = { { .compatible = "haoyu,hym8563" }, {}, }; diff --git a/drivers/rtc/rtc-isl12057.c b/drivers/rtc/rtc-isl12057.c index 41bd76aaff76..455b601d731d 100644 --- a/drivers/rtc/rtc-isl12057.c +++ 
b/drivers/rtc/rtc-isl12057.c @@ -278,7 +278,7 @@ static int isl12057_probe(struct i2c_client *client, } #ifdef CONFIG_OF -static struct of_device_id isl12057_dt_match[] = { +static const struct of_device_id isl12057_dt_match[] = { { .compatible = "isl,isl12057" }, { }, }; diff --git a/drivers/rtc/rtc-m41t80.c b/drivers/rtc/rtc-m41t80.c index a5248aa1abf1..7ff7427c2e6a 100644 --- a/drivers/rtc/rtc-m41t80.c +++ b/drivers/rtc/rtc-m41t80.c @@ -66,8 +66,6 @@ #define M41T80_FEATURE_WD (1 << 3) /* Extra watchdog resolution */ #define M41T80_FEATURE_SQ_ALT (1 << 4) /* RSx bits are in reg 4 */ -#define DRV_VERSION "0.05" - static DEFINE_MUTEX(m41t80_rtc_mutex); static const struct i2c_device_id m41t80_id[] = { { "m41t62", M41T80_FEATURE_SQ | M41T80_FEATURE_SQ_ALT }, @@ -80,6 +78,7 @@ static const struct i2c_device_id m41t80_id[] = { { "m41st84", M41T80_FEATURE_HT | M41T80_FEATURE_BL | M41T80_FEATURE_SQ }, { "m41st85", M41T80_FEATURE_HT | M41T80_FEATURE_BL | M41T80_FEATURE_SQ }, { "m41st87", M41T80_FEATURE_HT | M41T80_FEATURE_BL | M41T80_FEATURE_SQ }, + { "rv4162", M41T80_FEATURE_SQ | M41T80_FEATURE_WD | M41T80_FEATURE_SQ_ALT }, { } }; MODULE_DEVICE_TABLE(i2c, m41t80_id); @@ -232,7 +231,7 @@ static ssize_t m41t80_sysfs_show_flags(struct device *dev, val = i2c_smbus_read_byte_data(client, M41T80_REG_FLAGS); if (val < 0) - return -EIO; + return val; return sprintf(buf, "%#x\n", val); } static DEVICE_ATTR(flags, S_IRUGO, m41t80_sysfs_show_flags, NULL); @@ -252,7 +251,7 @@ static ssize_t m41t80_sysfs_show_sqwfreq(struct device *dev, reg_sqw = M41T80_REG_WDAY; val = i2c_smbus_read_byte_data(client, reg_sqw); if (val < 0) - return -EIO; + return val; val = (val >> 4) & 0xf; switch (val) { case 0: @@ -271,7 +270,7 @@ static ssize_t m41t80_sysfs_set_sqwfreq(struct device *dev, { struct i2c_client *client = to_i2c_client(dev); struct m41t80_data *clientdata = i2c_get_clientdata(client); - int almon, sqw, reg_sqw; + int almon, sqw, reg_sqw, rc; int val = simple_strtoul(buf, NULL, 0); if (!(clientdata->features & M41T80_FEATURE_SQ)) @@ -291,21 +290,30 @@ static ssize_t m41t80_sysfs_set_sqwfreq(struct device *dev, /* disable SQW, set SQW frequency & re-enable */ almon = i2c_smbus_read_byte_data(client, M41T80_REG_ALARM_MON); if (almon < 0) - return -EIO; + return almon; reg_sqw = M41T80_REG_SQW; if (clientdata->features & M41T80_FEATURE_SQ_ALT) reg_sqw = M41T80_REG_WDAY; sqw = i2c_smbus_read_byte_data(client, reg_sqw); if (sqw < 0) - return -EIO; + return sqw; sqw = (sqw & 0x0f) | (val << 4); - if (i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_MON, - almon & ~M41T80_ALMON_SQWE) < 0 || - i2c_smbus_write_byte_data(client, reg_sqw, sqw) < 0) - return -EIO; - if (val && i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_MON, - almon | M41T80_ALMON_SQWE) < 0) - return -EIO; + + rc = i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_MON, + almon & ~M41T80_ALMON_SQWE); + if (rc < 0) + return rc; + + if (val) { + rc = i2c_smbus_write_byte_data(client, reg_sqw, sqw); + if (rc < 0) + return rc; + + rc = i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_MON, + almon | M41T80_ALMON_SQWE); + if (rc <0) + return rc; + } return count; } static DEVICE_ATTR(sqwfreq, S_IRUGO | S_IWUSR, @@ -629,40 +637,28 @@ static int m41t80_probe(struct i2c_client *client, struct m41t80_data *clientdata = NULL; if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C - | I2C_FUNC_SMBUS_BYTE_DATA)) { - rc = -ENODEV; - goto exit; - } - - dev_info(&client->dev, - "chip found, driver version " DRV_VERSION "\n"); + | 
I2C_FUNC_SMBUS_BYTE_DATA)) + return -ENODEV; clientdata = devm_kzalloc(&client->dev, sizeof(*clientdata), GFP_KERNEL); - if (!clientdata) { - rc = -ENOMEM; - goto exit; - } + if (!clientdata) + return -ENOMEM; clientdata->features = id->driver_data; i2c_set_clientdata(client, clientdata); rtc = devm_rtc_device_register(&client->dev, client->name, &m41t80_rtc_ops, THIS_MODULE); - if (IS_ERR(rtc)) { - rc = PTR_ERR(rtc); - rtc = NULL; - goto exit; - } + if (IS_ERR(rtc)) + return PTR_ERR(rtc); clientdata->rtc = rtc; /* Make sure HT (Halt Update) bit is cleared */ rc = i2c_smbus_read_byte_data(client, M41T80_REG_ALARM_HOUR); - if (rc < 0) - goto ht_err; - if (rc & M41T80_ALHOUR_HT) { + if (rc >= 0 && rc & M41T80_ALHOUR_HT) { if (clientdata->features & M41T80_FEATURE_HT) { m41t80_get_datetime(client, &tm); dev_info(&client->dev, "HT bit was set!\n"); @@ -673,53 +669,44 @@ static int m41t80_probe(struct i2c_client *client, tm.tm_mon + 1, tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec); } - if (i2c_smbus_write_byte_data(client, - M41T80_REG_ALARM_HOUR, - rc & ~M41T80_ALHOUR_HT) < 0) - goto ht_err; + rc = i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_HOUR, + rc & ~M41T80_ALHOUR_HT); + } + + if (rc < 0) { + dev_err(&client->dev, "Can't clear HT bit\n"); + return rc; } /* Make sure ST (stop) bit is cleared */ rc = i2c_smbus_read_byte_data(client, M41T80_REG_SEC); - if (rc < 0) - goto st_err; - if (rc & M41T80_SEC_ST) { - if (i2c_smbus_write_byte_data(client, M41T80_REG_SEC, - rc & ~M41T80_SEC_ST) < 0) - goto st_err; + if (rc >= 0 && rc & M41T80_SEC_ST) + rc = i2c_smbus_write_byte_data(client, M41T80_REG_SEC, + rc & ~M41T80_SEC_ST); + if (rc < 0) { + dev_err(&client->dev, "Can't clear ST bit\n"); + return rc; } rc = m41t80_sysfs_register(&client->dev); if (rc) - goto exit; + return rc; #ifdef CONFIG_RTC_DRV_M41T80_WDT if (clientdata->features & M41T80_FEATURE_HT) { save_client = client; rc = misc_register(&wdt_dev); if (rc) - goto exit; + return rc; rc = register_reboot_notifier(&wdt_notifier); if (rc) { misc_deregister(&wdt_dev); - goto exit; + return rc; } } #endif return 0; - -st_err: - rc = -EIO; - dev_err(&client->dev, "Can't clear ST bit\n"); - goto exit; -ht_err: - rc = -EIO; - dev_err(&client->dev, "Can't clear HT bit\n"); - goto exit; - -exit: - return rc; } static int m41t80_remove(struct i2c_client *client) @@ -750,4 +737,3 @@ module_i2c_driver(m41t80_driver); MODULE_AUTHOR("Alexander Bigga <ab@mycable.de>"); MODULE_DESCRIPTION("ST Microelectronics M41T80 series RTC I2C Client Driver"); MODULE_LICENSE("GPL"); -MODULE_VERSION(DRV_VERSION); diff --git a/drivers/rtc/rtc-mcp795.c b/drivers/rtc/rtc-mcp795.c new file mode 100644 index 000000000000..34295bf00416 --- /dev/null +++ b/drivers/rtc/rtc-mcp795.c @@ -0,0 +1,199 @@ +/* + * SPI Driver for Microchip MCP795 RTC + * + * Copyright (C) Josef Gajdusek <atx@atx.name> + * + * based on other Linux RTC drivers + * + * Device datasheet: + * http://ww1.microchip.com/downloads/en/DeviceDoc/22280A.pdf + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + * */ + +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/device.h> +#include <linux/printk.h> +#include <linux/spi/spi.h> +#include <linux/rtc.h> + +/* MCP795 Instructions, see datasheet table 3-1 */ +#define MCP795_EEREAD 0x03 +#define MCP795_EEWRITE 0x02 +#define MCP795_EEWRDI 0x04 +#define MCP795_EEWREN 0x06 +#define MCP795_SRREAD 0x05 +#define MCP795_SRWRITE 0x01 +#define MCP795_READ 0x13 +#define MCP795_WRITE 0x12 +#define MCP795_UNLOCK 0x14 +#define MCP795_IDWRITE 0x32 +#define MCP795_IDREAD 0x33 +#define MCP795_CLRWDT 0x44 +#define MCP795_CLRRAM 0x54 + +#define MCP795_ST_BIT 0x80 +#define MCP795_24_BIT 0x40 + +static int mcp795_rtcc_read(struct device *dev, u8 addr, u8 *buf, u8 count) +{ + struct spi_device *spi = to_spi_device(dev); + int ret; + u8 tx[2]; + + tx[0] = MCP795_READ; + tx[1] = addr; + ret = spi_write_then_read(spi, tx, sizeof(tx), buf, count); + + if (ret) + dev_err(dev, "Failed reading %d bytes from address %x.\n", + count, addr); + + return ret; +} + +static int mcp795_rtcc_write(struct device *dev, u8 addr, u8 *data, u8 count) +{ + struct spi_device *spi = to_spi_device(dev); + int ret; + u8 tx[2 + count]; + + tx[0] = MCP795_WRITE; + tx[1] = addr; + memcpy(&tx[2], data, count); + + ret = spi_write(spi, tx, 2 + count); + + if (ret) + dev_err(dev, "Failed to write %d bytes to address %x.\n", + count, addr); + + return ret; +} + +static int mcp795_rtcc_set_bits(struct device *dev, u8 addr, u8 mask, u8 state) +{ + int ret; + u8 tmp; + + ret = mcp795_rtcc_read(dev, addr, &tmp, 1); + if (ret) + return ret; + + if ((tmp & mask) != state) { + tmp = (tmp & ~mask) | state; + ret = mcp795_rtcc_write(dev, addr, &tmp, 1); + } + + return ret; +} + +static int mcp795_set_time(struct device *dev, struct rtc_time *tim) +{ + int ret; + u8 data[7]; + + /* Read first, so we can leave config bits untouched */ + ret = mcp795_rtcc_read(dev, 0x01, data, sizeof(data)); + + if (ret) + return ret; + + data[0] = (data[0] & 0x80) | ((tim->tm_sec / 10) << 4) | (tim->tm_sec % 10); + data[1] = (data[1] & 0x80) | ((tim->tm_min / 10) << 4) | (tim->tm_min % 10); + data[2] = ((tim->tm_hour / 10) << 4) | (tim->tm_hour % 10); + data[4] = ((tim->tm_mday / 10) << 4) | ((tim->tm_mday) % 10); + data[5] = (data[5] & 0x10) | (tim->tm_mon / 10) | (tim->tm_mon % 10); + + if (tim->tm_year > 100) + tim->tm_year -= 100; + + data[6] = ((tim->tm_year / 10) << 4) | (tim->tm_year % 10); + + ret = mcp795_rtcc_write(dev, 0x01, data, sizeof(data)); + + if (ret) + return ret; + + dev_dbg(dev, "Set mcp795: %04d-%02d-%02d %02d:%02d:%02d\n", + tim->tm_year + 1900, tim->tm_mon, tim->tm_mday, + tim->tm_hour, tim->tm_min, tim->tm_sec); + + return 0; +} + +static int mcp795_read_time(struct device *dev, struct rtc_time *tim) +{ + int ret; + u8 data[7]; + + ret = mcp795_rtcc_read(dev, 0x01, data, sizeof(data)); + + if (ret) + return ret; + + tim->tm_sec = ((data[0] & 0x70) >> 4) * 10 + (data[0] & 0x0f); + tim->tm_min = ((data[1] & 0x70) >> 4) * 10 + (data[1] & 0x0f); + tim->tm_hour = ((data[2] & 0x30) >> 4) * 10 + (data[2] & 0x0f); + tim->tm_mday = ((data[4] & 0x30) >> 4) * 10 + (data[4] & 0x0f); + tim->tm_mon = ((data[5] & 0x10) >> 4) * 10 + (data[5] & 0x0f); + tim->tm_year = ((data[6] & 0xf0) >> 4) * 10 + (data[6] & 0x0f) + 100; /* Assume we are in 20xx */ + + dev_dbg(dev, "Read from mcp795: %04d-%02d-%02d %02d:%02d:%02d\n", + tim->tm_year + 1900, tim->tm_mon, tim->tm_mday, + tim->tm_hour, tim->tm_min, tim->tm_sec); + + return rtc_valid_tm(tim); +} + +static struct rtc_class_ops mcp795_rtc_ops = { + 
.read_time = mcp795_read_time, + .set_time = mcp795_set_time +}; + +static int mcp795_probe(struct spi_device *spi) +{ + struct rtc_device *rtc; + int ret; + + spi->mode = SPI_MODE_0; + spi->bits_per_word = 8; + ret = spi_setup(spi); + if (ret) { + dev_err(&spi->dev, "Unable to setup SPI\n"); + return ret; + } + + /* Start the oscillator */ + mcp795_rtcc_set_bits(&spi->dev, 0x01, MCP795_ST_BIT, MCP795_ST_BIT); + /* Clear the 12 hour mode flag*/ + mcp795_rtcc_set_bits(&spi->dev, 0x03, MCP795_24_BIT, 0); + + rtc = devm_rtc_device_register(&spi->dev, "rtc-mcp795", + &mcp795_rtc_ops, THIS_MODULE); + if (IS_ERR(rtc)) + return PTR_ERR(rtc); + + spi_set_drvdata(spi, rtc); + + return 0; +} + +static struct spi_driver mcp795_driver = { + .driver = { + .name = "rtc-mcp795", + .owner = THIS_MODULE, + }, + .probe = mcp795_probe, +}; + +module_spi_driver(mcp795_driver); + +MODULE_DESCRIPTION("MCP795 RTC SPI Driver"); +MODULE_AUTHOR("Josef Gajdusek <atx@atx.name>"); +MODULE_LICENSE("GPL"); +MODULE_ALIAS("spi:mcp795"); diff --git a/drivers/rtc/rtc-mv.c b/drivers/rtc/rtc-mv.c index d15a999363fc..6aaec2fc7c0d 100644 --- a/drivers/rtc/rtc-mv.c +++ b/drivers/rtc/rtc-mv.c @@ -319,7 +319,7 @@ static int __exit mv_rtc_remove(struct platform_device *pdev) } #ifdef CONFIG_OF -static struct of_device_id rtc_mv_of_match_table[] = { +static const struct of_device_id rtc_mv_of_match_table[] = { { .compatible = "marvell,orion-rtc", }, {} }; diff --git a/drivers/rtc/rtc-omap.c b/drivers/rtc/rtc-omap.c index 26de5f8c2ae4..21142e6574a9 100644 --- a/drivers/rtc/rtc-omap.c +++ b/drivers/rtc/rtc-omap.c @@ -73,43 +73,52 @@ #define OMAP_RTC_IRQWAKEEN 0x7c /* OMAP_RTC_CTRL_REG bit fields: */ -#define OMAP_RTC_CTRL_SPLIT (1<<7) -#define OMAP_RTC_CTRL_DISABLE (1<<6) -#define OMAP_RTC_CTRL_SET_32_COUNTER (1<<5) -#define OMAP_RTC_CTRL_TEST (1<<4) -#define OMAP_RTC_CTRL_MODE_12_24 (1<<3) -#define OMAP_RTC_CTRL_AUTO_COMP (1<<2) -#define OMAP_RTC_CTRL_ROUND_30S (1<<1) -#define OMAP_RTC_CTRL_STOP (1<<0) +#define OMAP_RTC_CTRL_SPLIT BIT(7) +#define OMAP_RTC_CTRL_DISABLE BIT(6) +#define OMAP_RTC_CTRL_SET_32_COUNTER BIT(5) +#define OMAP_RTC_CTRL_TEST BIT(4) +#define OMAP_RTC_CTRL_MODE_12_24 BIT(3) +#define OMAP_RTC_CTRL_AUTO_COMP BIT(2) +#define OMAP_RTC_CTRL_ROUND_30S BIT(1) +#define OMAP_RTC_CTRL_STOP BIT(0) /* OMAP_RTC_STATUS_REG bit fields: */ -#define OMAP_RTC_STATUS_POWER_UP (1<<7) -#define OMAP_RTC_STATUS_ALARM (1<<6) -#define OMAP_RTC_STATUS_1D_EVENT (1<<5) -#define OMAP_RTC_STATUS_1H_EVENT (1<<4) -#define OMAP_RTC_STATUS_1M_EVENT (1<<3) -#define OMAP_RTC_STATUS_1S_EVENT (1<<2) -#define OMAP_RTC_STATUS_RUN (1<<1) -#define OMAP_RTC_STATUS_BUSY (1<<0) +#define OMAP_RTC_STATUS_POWER_UP BIT(7) +#define OMAP_RTC_STATUS_ALARM BIT(6) +#define OMAP_RTC_STATUS_1D_EVENT BIT(5) +#define OMAP_RTC_STATUS_1H_EVENT BIT(4) +#define OMAP_RTC_STATUS_1M_EVENT BIT(3) +#define OMAP_RTC_STATUS_1S_EVENT BIT(2) +#define OMAP_RTC_STATUS_RUN BIT(1) +#define OMAP_RTC_STATUS_BUSY BIT(0) /* OMAP_RTC_INTERRUPTS_REG bit fields: */ -#define OMAP_RTC_INTERRUPTS_IT_ALARM (1<<3) -#define OMAP_RTC_INTERRUPTS_IT_TIMER (1<<2) +#define OMAP_RTC_INTERRUPTS_IT_ALARM BIT(3) +#define OMAP_RTC_INTERRUPTS_IT_TIMER BIT(2) + +/* OMAP_RTC_OSC_REG bit fields: */ +#define OMAP_RTC_OSC_32KCLK_EN BIT(6) /* OMAP_RTC_IRQWAKEEN bit fields: */ -#define OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN (1<<1) +#define OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN BIT(1) /* OMAP_RTC_KICKER values */ #define KICK0_VALUE 0x83e70b13 #define KICK1_VALUE 0x95a4f1e0 -#define OMAP_RTC_HAS_KICKER 0x1 +#define 
OMAP_RTC_HAS_KICKER BIT(0) /* * Few RTC IP revisions has special WAKE-EN Register to enable Wakeup * generation for event Alarm. */ -#define OMAP_RTC_HAS_IRQWAKEEN 0x2 +#define OMAP_RTC_HAS_IRQWAKEEN BIT(1) + +/* + * Some RTC IP revisions (like those in AM335x and DRA7x) need + * the 32KHz clock to be explicitly enabled. + */ +#define OMAP_RTC_HAS_32KCLK_EN BIT(2) static void __iomem *rtc_base; @@ -162,17 +171,28 @@ static irqreturn_t rtc_irq(int irq, void *rtc) static int omap_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled) { - u8 reg; + u8 reg, irqwake_reg = 0; + struct platform_device *pdev = to_platform_device(dev); + const struct platform_device_id *id_entry = + platform_get_device_id(pdev); local_irq_disable(); rtc_wait_not_busy(); reg = rtc_read(OMAP_RTC_INTERRUPTS_REG); - if (enabled) + if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) + irqwake_reg = rtc_read(OMAP_RTC_IRQWAKEEN); + + if (enabled) { reg |= OMAP_RTC_INTERRUPTS_IT_ALARM; - else + irqwake_reg |= OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; + } else { reg &= ~OMAP_RTC_INTERRUPTS_IT_ALARM; + irqwake_reg &= ~OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; + } rtc_wait_not_busy(); rtc_write(reg, OMAP_RTC_INTERRUPTS_REG); + if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) + rtc_write(irqwake_reg, OMAP_RTC_IRQWAKEEN); local_irq_enable(); return 0; @@ -272,7 +292,10 @@ static int omap_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alm) static int omap_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alm) { - u8 reg; + u8 reg, irqwake_reg = 0; + struct platform_device *pdev = to_platform_device(dev); + const struct platform_device_id *id_entry = + platform_get_device_id(pdev); if (tm2bcd(&alm->time) < 0) return -EINVAL; @@ -288,11 +311,19 @@ static int omap_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alm) rtc_write(alm->time.tm_sec, OMAP_RTC_ALARM_SECONDS_REG); reg = rtc_read(OMAP_RTC_INTERRUPTS_REG); - if (alm->enabled) + if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) + irqwake_reg = rtc_read(OMAP_RTC_IRQWAKEEN); + + if (alm->enabled) { reg |= OMAP_RTC_INTERRUPTS_IT_ALARM; - else + irqwake_reg |= OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; + } else { reg &= ~OMAP_RTC_INTERRUPTS_IT_ALARM; + irqwake_reg &= ~OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; + } rtc_write(reg, OMAP_RTC_INTERRUPTS_REG); + if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) + rtc_write(irqwake_reg, OMAP_RTC_IRQWAKEEN); local_irq_enable(); @@ -319,7 +350,8 @@ static struct platform_device_id omap_rtc_devtype[] = { }, [OMAP_RTC_DATA_AM3352_IDX] = { .name = "am3352-rtc", - .driver_data = OMAP_RTC_HAS_KICKER | OMAP_RTC_HAS_IRQWAKEEN, + .driver_data = OMAP_RTC_HAS_KICKER | OMAP_RTC_HAS_IRQWAKEEN | + OMAP_RTC_HAS_32KCLK_EN, }, [OMAP_RTC_DATA_DA830_IDX] = { .name = "da830-rtc", @@ -352,6 +384,12 @@ static int __init omap_rtc_probe(struct platform_device *pdev) if (of_id) pdev->id_entry = of_id->data; + id_entry = platform_get_device_id(pdev); + if (!id_entry) { + dev_err(&pdev->dev, "no matching device entry\n"); + return -ENODEV; + } + omap_rtc_timer = platform_get_irq(pdev, 0); if (omap_rtc_timer <= 0) { pr_debug("%s: no update irq?\n", pdev->name); @@ -373,8 +411,7 @@ static int __init omap_rtc_probe(struct platform_device *pdev) pm_runtime_enable(&pdev->dev); pm_runtime_get_sync(&pdev->dev); - id_entry = platform_get_device_id(pdev); - if (id_entry && (id_entry->driver_data & OMAP_RTC_HAS_KICKER)) { + if (id_entry->driver_data & OMAP_RTC_HAS_KICKER) { rtc_writel(KICK0_VALUE, OMAP_RTC_KICK0_REG); rtc_writel(KICK1_VALUE, OMAP_RTC_KICK1_REG); } @@ -393,6 +430,10 
@@ static int __init omap_rtc_probe(struct platform_device *pdev) */ rtc_write(0, OMAP_RTC_INTERRUPTS_REG); + /* enable RTC functional clock */ + if (id_entry->driver_data & OMAP_RTC_HAS_32KCLK_EN) + rtc_writel(OMAP_RTC_OSC_32KCLK_EN, OMAP_RTC_OSC_REG); + /* clear old status */ reg = rtc_read(OMAP_RTC_STATUS_REG); if (reg & (u8) OMAP_RTC_STATUS_POWER_UP) { @@ -452,7 +493,7 @@ static int __init omap_rtc_probe(struct platform_device *pdev) return 0; fail0: - if (id_entry && (id_entry->driver_data & OMAP_RTC_HAS_KICKER)) + if (id_entry->driver_data & OMAP_RTC_HAS_KICKER) rtc_writel(0, OMAP_RTC_KICK0_REG); pm_runtime_put_sync(&pdev->dev); pm_runtime_disable(&pdev->dev); @@ -469,7 +510,7 @@ static int __exit omap_rtc_remove(struct platform_device *pdev) /* leave rtc running, but disable irqs */ rtc_write(0, OMAP_RTC_INTERRUPTS_REG); - if (id_entry && (id_entry->driver_data & OMAP_RTC_HAS_KICKER)) + if (id_entry->driver_data & OMAP_RTC_HAS_KICKER) rtc_writel(0, OMAP_RTC_KICK0_REG); /* Disable the clock/module */ @@ -484,28 +525,16 @@ static u8 irqstat; static int omap_rtc_suspend(struct device *dev) { - u8 irqwake_stat; - struct platform_device *pdev = to_platform_device(dev); - const struct platform_device_id *id_entry = - platform_get_device_id(pdev); - irqstat = rtc_read(OMAP_RTC_INTERRUPTS_REG); /* FIXME the RTC alarm is not currently acting as a wakeup event * source on some platforms, and in fact this enable() call is just * saving a flag that's never used... */ - if (device_may_wakeup(dev)) { + if (device_may_wakeup(dev)) enable_irq_wake(omap_rtc_alarm); - - if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) { - irqwake_stat = rtc_read(OMAP_RTC_IRQWAKEEN); - irqwake_stat |= OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; - rtc_write(irqwake_stat, OMAP_RTC_IRQWAKEEN); - } - } else { + else rtc_write(0, OMAP_RTC_INTERRUPTS_REG); - } /* Disable the clock/module */ pm_runtime_put_sync(dev); @@ -515,25 +544,14 @@ static int omap_rtc_suspend(struct device *dev) static int omap_rtc_resume(struct device *dev) { - u8 irqwake_stat; - struct platform_device *pdev = to_platform_device(dev); - const struct platform_device_id *id_entry = - platform_get_device_id(pdev); - /* Enable the clock/module so that we can access the registers */ pm_runtime_get_sync(dev); - if (device_may_wakeup(dev)) { + if (device_may_wakeup(dev)) disable_irq_wake(omap_rtc_alarm); - - if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) { - irqwake_stat = rtc_read(OMAP_RTC_IRQWAKEEN); - irqwake_stat &= ~OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; - rtc_write(irqwake_stat, OMAP_RTC_IRQWAKEEN); - } - } else { + else rtc_write(irqstat, OMAP_RTC_INTERRUPTS_REG); - } + return 0; } #endif diff --git a/drivers/rtc/rtc-palmas.c b/drivers/rtc/rtc-palmas.c index c360d62fb3f6..4dfe2d793fa3 100644 --- a/drivers/rtc/rtc-palmas.c +++ b/drivers/rtc/rtc-palmas.c @@ -352,7 +352,7 @@ static SIMPLE_DEV_PM_OPS(palmas_rtc_pm_ops, palmas_rtc_suspend, palmas_rtc_resume); #ifdef CONFIG_OF -static struct of_device_id of_palmas_rtc_match[] = { +static const struct of_device_id of_palmas_rtc_match[] = { { .compatible = "ti,palmas-rtc"}, { }, }; diff --git a/drivers/rtc/rtc-pxa.c b/drivers/rtc/rtc-pxa.c index cccbf9d89729..4561f375327d 100644 --- a/drivers/rtc/rtc-pxa.c +++ b/drivers/rtc/rtc-pxa.c @@ -389,7 +389,7 @@ static int __exit pxa_rtc_remove(struct platform_device *pdev) } #ifdef CONFIG_OF -static struct of_device_id pxa_rtc_dt_ids[] = { +static const struct of_device_id pxa_rtc_dt_ids[] = { { .compatible = "marvell,pxa-rtc" }, {} }; diff --git a/drivers/rtc/rtc-s5m.c 
b/drivers/rtc/rtc-s5m.c index 476af93543f6..8ec2d6a1dbe1 100644 --- a/drivers/rtc/rtc-s5m.c +++ b/drivers/rtc/rtc-s5m.c @@ -40,6 +40,7 @@ struct s5m_rtc_info { struct device *dev; + struct i2c_client *i2c; struct sec_pmic_dev *s5m87xx; struct regmap *regmap; struct rtc_device *rtc_dev; @@ -49,6 +50,20 @@ struct s5m_rtc_info { bool wtsr_smpl; }; +static const struct regmap_config s5m_rtc_regmap_config = { + .reg_bits = 8, + .val_bits = 8, + + .max_register = SEC_RTC_REG_MAX, +}; + +static const struct regmap_config s2mps14_rtc_regmap_config = { + .reg_bits = 8, + .val_bits = 8, + + .max_register = S2MPS_RTC_REG_MAX, +}; + static void s5m8767_data_to_tm(u8 *data, struct rtc_time *tm, int rtc_24hr_mode) { @@ -554,6 +569,7 @@ static int s5m_rtc_probe(struct platform_device *pdev) struct sec_pmic_dev *s5m87xx = dev_get_drvdata(pdev->dev.parent); struct sec_platform_data *pdata = s5m87xx->pdata; struct s5m_rtc_info *info; + const struct regmap_config *regmap_cfg; int ret; if (!pdata) { @@ -565,9 +581,37 @@ static int s5m_rtc_probe(struct platform_device *pdev) if (!info) return -ENOMEM; + switch (pdata->device_type) { + case S2MPS14X: + regmap_cfg = &s2mps14_rtc_regmap_config; + break; + case S5M8763X: + regmap_cfg = &s5m_rtc_regmap_config; + break; + case S5M8767X: + regmap_cfg = &s5m_rtc_regmap_config; + break; + default: + dev_err(&pdev->dev, "Device type is not supported by RTC driver\n"); + return -ENODEV; + } + + info->i2c = i2c_new_dummy(s5m87xx->i2c->adapter, RTC_I2C_ADDR); + if (!info->i2c) { + dev_err(&pdev->dev, "Failed to allocate I2C for RTC\n"); + return -ENODEV; + } + + info->regmap = devm_regmap_init_i2c(info->i2c, regmap_cfg); + if (IS_ERR(info->regmap)) { + ret = PTR_ERR(info->regmap); + dev_err(&pdev->dev, "Failed to allocate RTC register map: %d\n", + ret); + goto err; + } + info->dev = &pdev->dev; info->s5m87xx = s5m87xx; - info->regmap = s5m87xx->regmap_rtc; info->device_type = s5m87xx->device_type; info->wtsr_smpl = s5m87xx->wtsr_smpl; @@ -585,7 +629,7 @@ static int s5m_rtc_probe(struct platform_device *pdev) default: ret = -EINVAL; dev_err(&pdev->dev, "Unsupported device type: %d\n", ret); - return ret; + goto err; } platform_set_drvdata(pdev, info); @@ -602,15 +646,24 @@ static int s5m_rtc_probe(struct platform_device *pdev) info->rtc_dev = devm_rtc_device_register(&pdev->dev, "s5m-rtc", &s5m_rtc_ops, THIS_MODULE); - if (IS_ERR(info->rtc_dev)) - return PTR_ERR(info->rtc_dev); + if (IS_ERR(info->rtc_dev)) { + ret = PTR_ERR(info->rtc_dev); + goto err; + } ret = devm_request_threaded_irq(&pdev->dev, info->irq, NULL, s5m_rtc_alarm_irq, 0, "rtc-alarm0", info); - if (ret < 0) + if (ret < 0) { dev_err(&pdev->dev, "Failed to request alarm IRQ: %d: %d\n", info->irq, ret); + goto err; + } + + return 0; + +err: + i2c_unregister_device(info->i2c); return ret; } @@ -639,6 +692,17 @@ static void s5m_rtc_shutdown(struct platform_device *pdev) s5m_rtc_enable_smpl(info, false); } +static int s5m_rtc_remove(struct platform_device *pdev) +{ + struct s5m_rtc_info *info = platform_get_drvdata(pdev); + + /* Perform also all shutdown steps when removing */ + s5m_rtc_shutdown(pdev); + i2c_unregister_device(info->i2c); + + return 0; +} + #ifdef CONFIG_PM_SLEEP static int s5m_rtc_resume(struct device *dev) { @@ -676,6 +740,7 @@ static struct platform_driver s5m_rtc_driver = { .pm = &s5m_rtc_pm_ops, }, .probe = s5m_rtc_probe, + .remove = s5m_rtc_remove, .shutdown = s5m_rtc_shutdown, .id_table = s5m_rtc_id, }; diff --git a/drivers/rtc/rtc-sa1100.c b/drivers/rtc/rtc-sa1100.c index 
0f7adeb1944a..b6e1ca08c2c0 100644 --- a/drivers/rtc/rtc-sa1100.c +++ b/drivers/rtc/rtc-sa1100.c @@ -338,7 +338,7 @@ static SIMPLE_DEV_PM_OPS(sa1100_rtc_pm_ops, sa1100_rtc_suspend, sa1100_rtc_resume); #ifdef CONFIG_OF -static struct of_device_id sa1100_rtc_dt_ids[] = { +static const struct of_device_id sa1100_rtc_dt_ids[] = { { .compatible = "mrvl,sa1100-rtc", }, { .compatible = "mrvl,mmp-rtc", }, {} diff --git a/drivers/rtc/rtc-xgene.c b/drivers/rtc/rtc-xgene.c new file mode 100644 index 000000000000..14129cc85bdb --- /dev/null +++ b/drivers/rtc/rtc-xgene.c @@ -0,0 +1,278 @@ +/* + * APM X-Gene SoC Real Time Clock Driver + * + * Copyright (c) 2014, Applied Micro Circuits Corporation + * Author: Rameshwar Prasad Sahu <rsahu@apm.com> + * Loc Ho <lho@apm.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; either version 2 of the License, or (at your + * option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + * + */ + +#include <linux/init.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/platform_device.h> +#include <linux/io.h> +#include <linux/slab.h> +#include <linux/clk.h> +#include <linux/delay.h> +#include <linux/rtc.h> + +/* RTC CSR Registers */ +#define RTC_CCVR 0x00 +#define RTC_CMR 0x04 +#define RTC_CLR 0x08 +#define RTC_CCR 0x0C +#define RTC_CCR_IE BIT(0) +#define RTC_CCR_MASK BIT(1) +#define RTC_CCR_EN BIT(2) +#define RTC_CCR_WEN BIT(3) +#define RTC_STAT 0x10 +#define RTC_STAT_BIT BIT(0) +#define RTC_RSTAT 0x14 +#define RTC_EOI 0x18 +#define RTC_VER 0x1C + +struct xgene_rtc_dev { + struct rtc_device *rtc; + struct device *dev; + unsigned long alarm_time; + void __iomem *csr_base; + struct clk *clk; + unsigned int irq_wake; +}; + +static int xgene_rtc_read_time(struct device *dev, struct rtc_time *tm) +{ + struct xgene_rtc_dev *pdata = dev_get_drvdata(dev); + + rtc_time_to_tm(readl(pdata->csr_base + RTC_CCVR), tm); + return rtc_valid_tm(tm); +} + +static int xgene_rtc_set_mmss(struct device *dev, unsigned long secs) +{ + struct xgene_rtc_dev *pdata = dev_get_drvdata(dev); + + /* + * NOTE: After the following write, the RTC_CCVR is only reflected + * after the update cycle of 1 seconds. 
+ */ + writel((u32) secs, pdata->csr_base + RTC_CLR); + readl(pdata->csr_base + RTC_CLR); /* Force a barrier */ + + return 0; +} + +static int xgene_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm) +{ + struct xgene_rtc_dev *pdata = dev_get_drvdata(dev); + + rtc_time_to_tm(pdata->alarm_time, &alrm->time); + alrm->enabled = readl(pdata->csr_base + RTC_CCR) & RTC_CCR_IE; + + return 0; +} + +static int xgene_rtc_alarm_irq_enable(struct device *dev, u32 enabled) +{ + struct xgene_rtc_dev *pdata = dev_get_drvdata(dev); + u32 ccr; + + ccr = readl(pdata->csr_base + RTC_CCR); + if (enabled) { + ccr &= ~RTC_CCR_MASK; + ccr |= RTC_CCR_IE; + } else { + ccr &= ~RTC_CCR_IE; + ccr |= RTC_CCR_MASK; + } + writel(ccr, pdata->csr_base + RTC_CCR); + + return 0; +} + +static int xgene_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alrm) +{ + struct xgene_rtc_dev *pdata = dev_get_drvdata(dev); + unsigned long rtc_time; + unsigned long alarm_time; + + rtc_time = readl(pdata->csr_base + RTC_CCVR); + rtc_tm_to_time(&alrm->time, &alarm_time); + + pdata->alarm_time = alarm_time; + writel((u32) pdata->alarm_time, pdata->csr_base + RTC_CMR); + + xgene_rtc_alarm_irq_enable(dev, alrm->enabled); + + return 0; +} + +static const struct rtc_class_ops xgene_rtc_ops = { + .read_time = xgene_rtc_read_time, + .set_mmss = xgene_rtc_set_mmss, + .read_alarm = xgene_rtc_read_alarm, + .set_alarm = xgene_rtc_set_alarm, + .alarm_irq_enable = xgene_rtc_alarm_irq_enable, +}; + +static irqreturn_t xgene_rtc_interrupt(int irq, void *id) +{ + struct xgene_rtc_dev *pdata = (struct xgene_rtc_dev *) id; + + /* Check if interrupt asserted */ + if (!(readl(pdata->csr_base + RTC_STAT) & RTC_STAT_BIT)) + return IRQ_NONE; + + /* Clear interrupt */ + readl(pdata->csr_base + RTC_EOI); + + rtc_update_irq(pdata->rtc, 1, RTC_IRQF | RTC_AF); + + return IRQ_HANDLED; +} + +static int xgene_rtc_probe(struct platform_device *pdev) +{ + struct xgene_rtc_dev *pdata; + struct resource *res; + int ret; + int irq; + + pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); + if (!pdata) + return -ENOMEM; + platform_set_drvdata(pdev, pdata); + pdata->dev = &pdev->dev; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + pdata->csr_base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(pdata->csr_base)) + return PTR_ERR(pdata->csr_base); + + irq = platform_get_irq(pdev, 0); + if (irq < 0) { + dev_err(&pdev->dev, "No IRQ resource\n"); + return irq; + } + ret = devm_request_irq(&pdev->dev, irq, xgene_rtc_interrupt, 0, + dev_name(&pdev->dev), pdata); + if (ret) { + dev_err(&pdev->dev, "Could not request IRQ\n"); + return ret; + } + + pdata->clk = devm_clk_get(&pdev->dev, NULL); + if (IS_ERR(pdata->clk)) { + dev_err(&pdev->dev, "Couldn't get the clock for RTC\n"); + return -ENODEV; + } + clk_prepare_enable(pdata->clk); + + /* Turn on the clock and the crystal */ + writel(RTC_CCR_EN, pdata->csr_base + RTC_CCR); + + device_init_wakeup(&pdev->dev, 1); + + pdata->rtc = devm_rtc_device_register(&pdev->dev, pdev->name, + &xgene_rtc_ops, THIS_MODULE); + if (IS_ERR(pdata->rtc)) { + clk_disable_unprepare(pdata->clk); + return PTR_ERR(pdata->rtc); + } + + /* HW does not support update faster than 1 seconds */ + pdata->rtc->uie_unsupported = 1; + + return 0; +} + +static int xgene_rtc_remove(struct platform_device *pdev) +{ + struct xgene_rtc_dev *pdata = platform_get_drvdata(pdev); + + xgene_rtc_alarm_irq_enable(&pdev->dev, 0); + device_init_wakeup(&pdev->dev, 0); + clk_disable_unprepare(pdata->clk); + return 0; +} + +#ifdef CONFIG_PM_SLEEP 
+static int xgene_rtc_suspend(struct device *dev) +{ + struct platform_device *pdev = to_platform_device(dev); + struct xgene_rtc_dev *pdata = platform_get_drvdata(pdev); + int irq; + + irq = platform_get_irq(pdev, 0); + if (device_may_wakeup(&pdev->dev)) { + if (!enable_irq_wake(irq)) + pdata->irq_wake = 1; + } else { + xgene_rtc_alarm_irq_enable(dev, 0); + clk_disable(pdata->clk); + } + + return 0; +} + +static int xgene_rtc_resume(struct device *dev) +{ + struct platform_device *pdev = to_platform_device(dev); + struct xgene_rtc_dev *pdata = platform_get_drvdata(pdev); + int irq; + + irq = platform_get_irq(pdev, 0); + if (device_may_wakeup(&pdev->dev)) { + if (pdata->irq_wake) { + disable_irq_wake(irq); + pdata->irq_wake = 0; + } + } else { + clk_enable(pdata->clk); + xgene_rtc_alarm_irq_enable(dev, 1); + } + + return 0; +} +#endif + +static SIMPLE_DEV_PM_OPS(xgene_rtc_pm_ops, xgene_rtc_suspend, xgene_rtc_resume); + +#ifdef CONFIG_OF +static const struct of_device_id xgene_rtc_of_match[] = { + {.compatible = "apm,xgene-rtc" }, + { } +}; +MODULE_DEVICE_TABLE(of, xgene_rtc_of_match); +#endif + +static struct platform_driver xgene_rtc_driver = { + .probe = xgene_rtc_probe, + .remove = xgene_rtc_remove, + .driver = { + .owner = THIS_MODULE, + .name = "xgene-rtc", + .pm = &xgene_rtc_pm_ops, + .of_match_table = of_match_ptr(xgene_rtc_of_match), + }, +}; + +module_platform_driver(xgene_rtc_driver); + +MODULE_DESCRIPTION("APM X-Gene SoC RTC driver"); +MODULE_AUTHOR("Rameshwar Sahu <rsahu@apm.com>"); +MODULE_LICENSE("GPL"); diff --git a/drivers/scsi/scsi_sysctl.c b/drivers/scsi/scsi_sysctl.c index 2b6b93f7d8ef..546f16299ef9 100644 --- a/drivers/scsi/scsi_sysctl.c +++ b/drivers/scsi/scsi_sysctl.c @@ -12,7 +12,7 @@ #include "scsi_priv.h" -static ctl_table scsi_table[] = { +static struct ctl_table scsi_table[] = { { .procname = "logging_level", .data = &scsi_logging_level, .maxlen = sizeof(scsi_logging_level), @@ -21,14 +21,14 @@ static ctl_table scsi_table[] = { { } }; -static ctl_table scsi_dir_table[] = { +static struct ctl_table scsi_dir_table[] = { { .procname = "scsi", .mode = 0555, .child = scsi_table }, { } }; -static ctl_table scsi_root_table[] = { +static struct ctl_table scsi_root_table[] = { { .procname = "dev", .mode = 0555, .child = scsi_dir_table }, diff --git a/drivers/sh/intc/Kconfig b/drivers/sh/intc/Kconfig index f7d90617c9d9..60228fae943f 100644 --- a/drivers/sh/intc/Kconfig +++ b/drivers/sh/intc/Kconfig @@ -6,7 +6,7 @@ comment "Interrupt controller options" config INTC_USERIMASK bool "Userspace interrupt masking support" - depends on ARCH_SHMOBILE || (SUPERH && CPU_SH4A) || COMPILE_TEST + depends on (SUPERH && CPU_SH4A) || COMPILE_TEST help This enables support for hardware-assisted userspace hardirq masking. 
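The X-Gene RTC above exposes time as a bare 32-bit seconds counter: .read_time hands RTC_CCVR to rtc_time_to_tm() and .set_mmss writes the seconds straight into RTC_CLR. A minimal user-space sketch of the same conversion, using the standard C calendar helpers in place of the kernel's rtc_time_to_tm()/rtc_tm_to_time() and a made-up counter value:

/*
 * Illustrative only: user-space analogue of what rtc-xgene does in
 * .read_time/.set_mmss -- treat the hardware counter (RTC_CCVR) as a
 * plain count of seconds since the Unix epoch.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
        uint32_t ccvr = 1400000000u;    /* pretend value read from RTC_CCVR */
        time_t secs = (time_t)ccvr;
        struct tm tm;

        gmtime_r(&secs, &tm);           /* kernel: rtc_time_to_tm(ccvr, &tm) */
        printf("%04d-%02d-%02d %02d:%02d:%02d\n",
               tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
               tm.tm_hour, tm.tm_min, tm.tm_sec);

        /* the reverse direction is what .set_mmss writes into RTC_CLR */
        return 0;
}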
diff --git a/drivers/sh/pm_runtime.c b/drivers/sh/pm_runtime.c index 10c65eb51f85..72f63817a1a0 100644 --- a/drivers/sh/pm_runtime.c +++ b/drivers/sh/pm_runtime.c @@ -21,18 +21,43 @@ #include <linux/slab.h> #ifdef CONFIG_PM_RUNTIME - -static int default_platform_runtime_idle(struct device *dev) +static int sh_pm_runtime_suspend(struct device *dev) { - /* suspend synchronously to disable clocks immediately */ + int ret; + + ret = pm_generic_runtime_suspend(dev); + if (ret) { + dev_err(dev, "failed to suspend device\n"); + return ret; + } + + ret = pm_clk_suspend(dev); + if (ret) { + dev_err(dev, "failed to suspend clock\n"); + pm_generic_runtime_resume(dev); + return ret; + } + return 0; } +static int sh_pm_runtime_resume(struct device *dev) +{ + int ret; + + ret = pm_clk_resume(dev); + if (ret) { + dev_err(dev, "failed to resume clock\n"); + return ret; + } + + return pm_generic_runtime_resume(dev); +} + static struct dev_pm_domain default_pm_domain = { .ops = { - .runtime_suspend = pm_clk_suspend, - .runtime_resume = pm_clk_resume, - .runtime_idle = default_platform_runtime_idle, + .runtime_suspend = sh_pm_runtime_suspend, + .runtime_resume = sh_pm_runtime_resume, USE_PLATFORM_PM_SLEEP_OPS }, }; @@ -63,6 +88,9 @@ static int __init sh_pm_runtime_init(void) !of_machine_is_compatible("renesas,r8a7779") && !of_machine_is_compatible("renesas,r8a7790") && !of_machine_is_compatible("renesas,r8a7791") && + !of_machine_is_compatible("renesas,r8a7792") && + !of_machine_is_compatible("renesas,r8a7793") && + !of_machine_is_compatible("renesas,r8a7794") && !of_machine_is_compatible("renesas,sh7372") && !of_machine_is_compatible("renesas,sh73a0")) return 0; diff --git a/drivers/staging/comedi/drivers/ni_daq_700.c b/drivers/staging/comedi/drivers/ni_daq_700.c index 171a71d20c88..728bf7f14f7b 100644 --- a/drivers/staging/comedi/drivers/ni_daq_700.c +++ b/drivers/staging/comedi/drivers/ni_daq_700.c @@ -139,6 +139,8 @@ static int daq700_ai_rinsn(struct comedi_device *dev, /* write channel to multiplexer */ /* set mask scan bit high to disable scanning */ outb(chan | 0x80, dev->iobase + CMD_R1); + /* mux needs 2us to really settle [Fred Brooks]. 
*/ + udelay(2); /* convert n samples */ for (n = 0; n < insn->n; n++) { diff --git a/drivers/staging/rtl8192e/rtllib_tx.c b/drivers/staging/rtl8192e/rtllib_tx.c index 11d0a9d8ee59..b7dd1539bbc4 100644 --- a/drivers/staging/rtl8192e/rtllib_tx.c +++ b/drivers/staging/rtl8192e/rtllib_tx.c @@ -171,7 +171,7 @@ inline int rtllib_put_snap(u8 *data, u16 h_proto) snap->oui[1] = oui[1]; snap->oui[2] = oui[2]; - *(u16 *)(data + SNAP_SIZE) = h_proto; + *(__be16 *)(data + SNAP_SIZE) = htons(h_proto); return SNAP_SIZE + sizeof(u16); } diff --git a/drivers/staging/speakup/main.c b/drivers/staging/speakup/main.c index 3b6e5358c723..7de79d59a4cd 100644 --- a/drivers/staging/speakup/main.c +++ b/drivers/staging/speakup/main.c @@ -2218,6 +2218,7 @@ static void __exit speakup_exit(void) unregister_keyboard_notifier(&keyboard_notifier_block); unregister_vt_notifier(&vt_notifier_block); speakup_unregister_devsynth(); + speakup_cancel_paste(); del_timer(&cursor_timer); kthread_stop(speakup_task); speakup_task = NULL; diff --git a/drivers/staging/speakup/selection.c b/drivers/staging/speakup/selection.c index f0fb00392d6b..ca04d3669acc 100644 --- a/drivers/staging/speakup/selection.c +++ b/drivers/staging/speakup/selection.c @@ -4,6 +4,10 @@ #include <linux/sched.h> #include <linux/device.h> /* for dev_warn */ #include <linux/selection.h> +#include <linux/workqueue.h> +#include <linux/tty.h> +#include <linux/tty_flip.h> +#include <asm/cmpxchg.h> #include "speakup.h" @@ -121,31 +125,61 @@ int speakup_set_selection(struct tty_struct *tty) return 0; } -/* TODO: move to some helper thread, probably. That'd fix having to check for - * in_atomic(). */ -int speakup_paste_selection(struct tty_struct *tty) +struct speakup_paste_work { + struct work_struct work; + struct tty_struct *tty; +}; + +static void __speakup_paste_selection(struct work_struct *work) { + struct speakup_paste_work *spw = + container_of(work, struct speakup_paste_work, work); + struct tty_struct *tty = xchg(&spw->tty, NULL); struct vc_data *vc = (struct vc_data *) tty->driver_data; int pasted = 0, count; + struct tty_ldisc *ld; DECLARE_WAITQUEUE(wait, current); + + ld = tty_ldisc_ref_wait(tty); + tty_buffer_lock_exclusive(&vc->port); + add_wait_queue(&vc->paste_wait, &wait); while (sel_buffer && sel_buffer_lth > pasted) { set_current_state(TASK_INTERRUPTIBLE); if (test_bit(TTY_THROTTLED, &tty->flags)) { - if (in_atomic()) - /* if we are in an interrupt handler, abort */ - break; schedule(); continue; } count = sel_buffer_lth - pasted; - count = min_t(int, count, tty->receive_room); - tty->ldisc->ops->receive_buf(tty, sel_buffer + pasted, - NULL, count); + count = tty_ldisc_receive_buf(ld, sel_buffer + pasted, NULL, + count); pasted += count; } remove_wait_queue(&vc->paste_wait, &wait); current->state = TASK_RUNNING; + + tty_buffer_unlock_exclusive(&vc->port); + tty_ldisc_deref(ld); + tty_kref_put(tty); +} + +static struct speakup_paste_work speakup_paste_work = { + .work = __WORK_INITIALIZER(speakup_paste_work.work, + __speakup_paste_selection) +}; + +int speakup_paste_selection(struct tty_struct *tty) +{ + if (cmpxchg(&speakup_paste_work.tty, NULL, tty) != NULL) + return -EBUSY; + + tty_kref_get(tty); + schedule_work_on(WORK_CPU_UNBOUND, &speakup_paste_work.work); return 0; } +void speakup_cancel_paste(void) +{ + cancel_work_sync(&speakup_paste_work.work); + tty_kref_put(speakup_paste_work.tty); +} diff --git a/drivers/staging/speakup/speakup.h b/drivers/staging/speakup/speakup.h index a7bcceec436a..898dce5e1243 100644 --- 
a/drivers/staging/speakup/speakup.h +++ b/drivers/staging/speakup/speakup.h @@ -75,6 +75,7 @@ extern void synth_buffer_clear(void); extern void speakup_clear_selection(void); extern int speakup_set_selection(struct tty_struct *tty); extern int speakup_paste_selection(struct tty_struct *tty); +extern void speakup_cancel_paste(void); extern void speakup_register_devsynth(void); extern void speakup_unregister_devsynth(void); extern void synth_write(const char *buf, size_t count); diff --git a/drivers/staging/speakup/speakup_acntsa.c b/drivers/staging/speakup/speakup_acntsa.c index 1f374845f610..3f2b5698a3d8 100644 --- a/drivers/staging/speakup/speakup_acntsa.c +++ b/drivers/staging/speakup/speakup_acntsa.c @@ -60,15 +60,15 @@ static struct kobj_attribute vol_attribute = __ATTR(vol, S_IWUSR|S_IRUGO, spk_var_show, spk_var_store); static struct kobj_attribute delay_time_attribute = - __ATTR(delay_time, S_IRUSR|S_IRUGO, spk_var_show, spk_var_store); + __ATTR(delay_time, S_IWUSR|S_IRUGO, spk_var_show, spk_var_store); static struct kobj_attribute direct_attribute = __ATTR(direct, S_IWUSR|S_IRUGO, spk_var_show, spk_var_store); static struct kobj_attribute full_time_attribute = - __ATTR(full_time, S_IRUSR|S_IRUGO, spk_var_show, spk_var_store); + __ATTR(full_time, S_IWUSR|S_IRUGO, spk_var_show, spk_var_store); static struct kobj_attribute jiffy_delta_attribute = - __ATTR(jiffy_delta, S_IRUSR|S_IRUGO, spk_var_show, spk_var_store); + __ATTR(jiffy_delta, S_IWUSR|S_IRUGO, spk_var_show, spk_var_store); static struct kobj_attribute trigger_time_attribute = - __ATTR(trigger_time, S_IRUSR|S_IRUGO, spk_var_show, spk_var_store); + __ATTR(trigger_time, S_IWUSR|S_IRUGO, spk_var_show, spk_var_store); /* * Create a group of attributes so that we can create and destroy them all diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c index 46588c85d39b..9189bc0a87ae 100644 --- a/drivers/target/iscsi/iscsi_target.c +++ b/drivers/target/iscsi/iscsi_target.c @@ -460,6 +460,7 @@ int iscsit_del_np(struct iscsi_np *np) spin_lock_bh(&np->np_thread_lock); np->np_exports--; if (np->np_exports) { + np->enabled = true; spin_unlock_bh(&np->np_thread_lock); return 0; } diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c index ca31fa1b8a4b..d9b1d88e1ad3 100644 --- a/drivers/target/iscsi/iscsi_target_login.c +++ b/drivers/target/iscsi/iscsi_target_login.c @@ -249,6 +249,28 @@ static void iscsi_login_set_conn_values( mutex_unlock(&auth_id_lock); } +static __printf(2, 3) int iscsi_change_param_sprintf( + struct iscsi_conn *conn, + const char *fmt, ...) +{ + va_list args; + unsigned char buf[64]; + + memset(buf, 0, sizeof buf); + + va_start(args, fmt); + vsnprintf(buf, sizeof buf, fmt, args); + va_end(args); + + if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { + iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, + ISCSI_LOGIN_STATUS_NO_RESOURCES); + return -1; + } + + return 0; +} + /* * This is the leading connection of a new session, * or session reinstatement. @@ -339,7 +361,6 @@ static int iscsi_login_zero_tsih_s2( { struct iscsi_node_attrib *na; struct iscsi_session *sess = conn->sess; - unsigned char buf[32]; bool iser = false; sess->tpg = conn->tpg; @@ -380,26 +401,16 @@ static int iscsi_login_zero_tsih_s2( * * In our case, we have already located the struct iscsi_tiqn at this point. 
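The new iscsi_change_param_sprintf() helper above folds the repeated sprintf-into-a-stack-buffer, iscsi_change_param_value() and error-response sequence into one __printf(2, 3) varargs wrapper. A stand-alone sketch of that C idiom, with change_param() and send_err() as hypothetical stand-ins for the iSCSI calls:

/* Sketch of the varargs-wrapper idiom; change_param() and send_err()
 * are made-up stand-ins for iscsi_change_param_value() and
 * iscsit_tx_login_rsp(). */
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

static int change_param(const char *kv) { return strchr(kv, '=') ? 0 : -1; }
static void send_err(void) { fprintf(stderr, "login: out of resources\n"); }

static __attribute__((format(printf, 1, 2)))
int change_param_sprintf(const char *fmt, ...)
{
        char buf[64];
        va_list args;

        va_start(args, fmt);
        vsnprintf(buf, sizeof(buf), fmt, args); /* always NUL-terminates */
        va_end(args);

        if (change_param(buf) < 0) {
                send_err();
                return -1;
        }
        return 0;
}

int main(void)
{
        /* the format attribute lets the compiler type-check every call site */
        return change_param_sprintf("ErrorRecoveryLevel=%d", 0);
}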
*/ - memset(buf, 0, 32); - sprintf(buf, "TargetPortalGroupTag=%hu", sess->tpg->tpgt); - if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { - iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, - ISCSI_LOGIN_STATUS_NO_RESOURCES); + if (iscsi_change_param_sprintf(conn, "TargetPortalGroupTag=%hu", sess->tpg->tpgt)) return -1; - } /* * Workaround for Initiators that have broken connection recovery logic. * * "We would really like to get rid of this." Linux-iSCSI.org team */ - memset(buf, 0, 32); - sprintf(buf, "ErrorRecoveryLevel=%d", na->default_erl); - if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { - iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, - ISCSI_LOGIN_STATUS_NO_RESOURCES); + if (iscsi_change_param_sprintf(conn, "ErrorRecoveryLevel=%d", na->default_erl)) return -1; - } if (iscsi_login_disable_FIM_keys(conn->param_list, conn) < 0) return -1; @@ -411,12 +422,9 @@ static int iscsi_login_zero_tsih_s2( unsigned long mrdsl, off; int rc; - sprintf(buf, "RDMAExtensions=Yes"); - if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { - iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, - ISCSI_LOGIN_STATUS_NO_RESOURCES); + if (iscsi_change_param_sprintf(conn, "RDMAExtensions=Yes")) return -1; - } + /* * Make MaxRecvDataSegmentLength PAGE_SIZE aligned for * Immediate Data + Unsolicitied Data-OUT if necessary.. @@ -446,12 +454,8 @@ static int iscsi_login_zero_tsih_s2( pr_warn("Aligning ISER MaxRecvDataSegmentLength: %lu down" " to PAGE_SIZE\n", mrdsl); - sprintf(buf, "MaxRecvDataSegmentLength=%lu\n", mrdsl); - if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { - iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, - ISCSI_LOGIN_STATUS_NO_RESOURCES); + if (iscsi_change_param_sprintf(conn, "MaxRecvDataSegmentLength=%lu\n", mrdsl)) return -1; - } /* * ISER currently requires that ImmediateData + Unsolicited * Data be disabled when protection / signature MRs are enabled. @@ -461,19 +465,12 @@ check_prot: (TARGET_PROT_DOUT_STRIP | TARGET_PROT_DOUT_PASS | TARGET_PROT_DOUT_INSERT)) { - sprintf(buf, "ImmediateData=No"); - if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { - iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, - ISCSI_LOGIN_STATUS_NO_RESOURCES); + if (iscsi_change_param_sprintf(conn, "ImmediateData=No")) return -1; - } - sprintf(buf, "InitialR2T=Yes"); - if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { - iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, - ISCSI_LOGIN_STATUS_NO_RESOURCES); + if (iscsi_change_param_sprintf(conn, "InitialR2T=Yes")) return -1; - } + pr_debug("Forcing ImmediateData=No + InitialR2T=Yes for" " T10-PI enabled ISER session\n"); } @@ -618,13 +615,8 @@ static int iscsi_login_non_zero_tsih_s2( * * In our case, we have already located the struct iscsi_tiqn at this point. 
*/ - memset(buf, 0, 32); - sprintf(buf, "TargetPortalGroupTag=%hu", sess->tpg->tpgt); - if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { - iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, - ISCSI_LOGIN_STATUS_NO_RESOURCES); + if (iscsi_change_param_sprintf(conn, "TargetPortalGroupTag=%hu", sess->tpg->tpgt)) return -1; - } return iscsi_login_disable_FIM_keys(conn->param_list, conn); } diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c index ca1811858afd..1431e8400d28 100644 --- a/drivers/target/iscsi/iscsi_target_tpg.c +++ b/drivers/target/iscsi/iscsi_target_tpg.c @@ -184,7 +184,8 @@ static void iscsit_clear_tpg_np_login_thread( return; } - tpg_np->tpg_np->enabled = false; + if (shutdown) + tpg_np->tpg_np->enabled = false; iscsit_reset_np_thread(tpg_np->tpg_np, tpg_np, tpg, shutdown); } diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c index 0b79b852f4b2..fbc5ebb5f761 100644 --- a/drivers/target/target_core_alua.c +++ b/drivers/target/target_core_alua.c @@ -576,7 +576,16 @@ static inline int core_alua_state_standby( case REPORT_LUNS: case RECEIVE_DIAGNOSTIC: case SEND_DIAGNOSTIC: + case READ_CAPACITY: return 0; + case SERVICE_ACTION_IN: + switch (cdb[1] & 0x1f) { + case SAI_READ_CAPACITY_16: + return 0; + default: + set_ascq(cmd, ASCQ_04H_ALUA_TG_PT_STANDBY); + return 1; + } case MAINTENANCE_IN: switch (cdb[1] & 0x1f) { case MI_REPORT_TARGET_PGS: diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c index 60a9ae6df763..bf55c5a04cfa 100644 --- a/drivers/target/target_core_configfs.c +++ b/drivers/target/target_core_configfs.c @@ -2227,6 +2227,11 @@ static ssize_t target_core_alua_tg_pt_gp_store_attr_alua_access_state( " tg_pt_gp ID: %hu\n", tg_pt_gp->tg_pt_gp_valid_id); return -EINVAL; } + if (!(dev->dev_flags & DF_CONFIGURED)) { + pr_err("Unable to set alua_access_state while device is" + " not configured\n"); + return -ENODEV; + } ret = kstrtoul(page, 0, &tmp); if (ret < 0) { diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c index b767a64e49d9..454b65898e2c 100644 --- a/drivers/tty/sysrq.c +++ b/drivers/tty/sysrq.c @@ -46,6 +46,7 @@ #include <linux/jiffies.h> #include <linux/syscalls.h> #include <linux/of.h> +#include <linux/rcupdate.h> #include <asm/ptrace.h> #include <asm/irq_regs.h> @@ -510,9 +511,9 @@ void __handle_sysrq(int key, bool check_mask) struct sysrq_key_op *op_p; int orig_log_level; int i; - unsigned long flags; - spin_lock_irqsave(&sysrq_key_table_lock, flags); + rcu_sysrq_start(); + rcu_read_lock(); /* * Raise the apparent loglevel to maximum so that the sysrq header * is shown to provide the user with positive feedback. We do not @@ -554,7 +555,8 @@ void __handle_sysrq(int key, bool check_mask) printk("\n"); console_loglevel = orig_log_level; } - spin_unlock_irqrestore(&sysrq_key_table_lock, flags); + rcu_read_unlock(); + rcu_sysrq_end(); } void handle_sysrq(int key) @@ -1043,16 +1045,23 @@ static int __sysrq_swap_key_ops(int key, struct sysrq_key_op *insert_op_p, struct sysrq_key_op *remove_op_p) { int retval; - unsigned long flags; - spin_lock_irqsave(&sysrq_key_table_lock, flags); + spin_lock(&sysrq_key_table_lock); if (__sysrq_get_key_op(key) == remove_op_p) { __sysrq_put_key_op(key, insert_op_p); retval = 0; } else { retval = -1; } - spin_unlock_irqrestore(&sysrq_key_table_lock, flags); + spin_unlock(&sysrq_key_table_lock); + + /* + * A concurrent __handle_sysrq either got the old op or the new op. 
+ * Wait for it to go away before returning, so the code for an old + * op is not freed (eg. on module unload) while it is in use. + */ + synchronize_rcu(); + return retval; } diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c index cf78d1985cd8..143deb62467d 100644 --- a/drivers/tty/tty_buffer.c +++ b/drivers/tty/tty_buffer.c @@ -60,6 +60,7 @@ void tty_buffer_lock_exclusive(struct tty_port *port) atomic_inc(&buf->priority); mutex_lock(&buf->lock); } +EXPORT_SYMBOL_GPL(tty_buffer_lock_exclusive); void tty_buffer_unlock_exclusive(struct tty_port *port) { @@ -73,6 +74,7 @@ void tty_buffer_unlock_exclusive(struct tty_port *port) if (restart) queue_work(system_unbound_wq, &buf->work); } +EXPORT_SYMBOL_GPL(tty_buffer_unlock_exclusive); /** * tty_buffer_space_avail - return unused buffer space diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c index 888881e5f292..4aeb10034de7 100644 --- a/drivers/usb/core/driver.c +++ b/drivers/usb/core/driver.c @@ -1822,10 +1822,13 @@ int usb_runtime_suspend(struct device *dev) if (status == -EAGAIN || status == -EBUSY) usb_mark_last_busy(udev); - /* The PM core reacts badly unless the return code is 0, - * -EAGAIN, or -EBUSY, so always return -EBUSY on an error. + /* + * The PM core reacts badly unless the return code is 0, + * -EAGAIN, or -EBUSY, so always return -EBUSY on an error + * (except for root hubs, because they don't suspend through + * an upstream port like other USB devices). */ - if (status != 0) + if (status != 0 && udev->parent) return -EBUSY; return status; } diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c index db6287025c06..879b66e13370 100644 --- a/drivers/usb/core/hub.c +++ b/drivers/usb/core/hub.c @@ -1706,8 +1706,19 @@ static int hub_probe(struct usb_interface *intf, const struct usb_device_id *id) */ pm_runtime_set_autosuspend_delay(&hdev->dev, 0); - /* Hubs have proper suspend/resume support. */ - usb_enable_autosuspend(hdev); + /* + * Hubs have proper suspend/resume support, except for root hubs + * where the controller driver doesn't have bus_suspend and + * bus_resume methods. 
+ */ + if (hdev->parent) { /* normal device */ + usb_enable_autosuspend(hdev); + } else { /* root hub */ + const struct hc_driver *drv = bus_to_hcd(hdev->bus)->driver; + + if (drv->bus_suspend && drv->bus_resume) + usb_enable_autosuspend(hdev); + } if (hdev->level == MAX_TOPO_LEVEL) { dev_err(&intf->dev, diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c index 00661d305143..4a6d3dd68572 100644 --- a/drivers/usb/host/pci-quirks.c +++ b/drivers/usb/host/pci-quirks.c @@ -847,6 +847,13 @@ void usb_enable_intel_xhci_ports(struct pci_dev *xhci_pdev) bool ehci_found = false; struct pci_dev *companion = NULL; + /* Sony VAIO t-series with subsystem device ID 90a8 is not capable of + * switching ports from EHCI to xHCI + */ + if (xhci_pdev->subsystem_vendor == PCI_VENDOR_ID_SONY && + xhci_pdev->subsystem_device == 0x90a8) + return; + /* make sure an intel EHCI controller exists */ for_each_pci_dev(companion) { if (companion->class == PCI_CLASS_SERIAL_USB_EHCI && diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index 6a57e81c2a76..8056d90690ee 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -1818,6 +1818,16 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci) xhci_dbg_trace(xhci, trace_xhci_dbg_init, "Freed command ring"); xhci_cleanup_command_queue(xhci); + num_ports = HCS_MAX_PORTS(xhci->hcs_params1); + for (i = 0; i < num_ports; i++) { + struct xhci_interval_bw_table *bwt = &xhci->rh_bw[i].bw_table; + for (j = 0; j < XHCI_MAX_INTERVAL; j++) { + struct list_head *ep = &bwt->interval_bw[j].endpoints; + while (!list_empty(ep)) + list_del_init(ep->next); + } + } + for (i = 1; i < MAX_HC_SLOTS; ++i) xhci_free_virt_device(xhci, i); @@ -1853,16 +1863,6 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci) if (!xhci->rh_bw) goto no_bw; - num_ports = HCS_MAX_PORTS(xhci->hcs_params1); - for (i = 0; i < num_ports; i++) { - struct xhci_interval_bw_table *bwt = &xhci->rh_bw[i].bw_table; - for (j = 0; j < XHCI_MAX_INTERVAL; j++) { - struct list_head *ep = &bwt->interval_bw[j].endpoints; - while (!list_empty(ep)) - list_del_init(ep->next); - } - } - for (i = 0; i < num_ports; i++) { struct xhci_tt_bw_info *tt, *n; list_for_each_entry_safe(tt, n, &xhci->rh_bw[i].tts, tt_list) { diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c index 7c6e1dedeb06..edf3b124583c 100644 --- a/drivers/usb/serial/ftdi_sio.c +++ b/drivers/usb/serial/ftdi_sio.c @@ -580,6 +580,8 @@ static const struct usb_device_id id_table_combined[] = { { USB_DEVICE(FTDI_VID, FTDI_TAVIR_STK500_PID) }, { USB_DEVICE(FTDI_VID, FTDI_TIAO_UMPA_PID), .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, + { USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLXM_PID), + .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, /* * ELV devices: */ diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h index 993c93df6874..500474c48f4b 100644 --- a/drivers/usb/serial/ftdi_sio_ids.h +++ b/drivers/usb/serial/ftdi_sio_ids.h @@ -538,6 +538,11 @@ */ #define FTDI_TIAO_UMPA_PID 0x8a98 /* TIAO/DIYGADGET USB Multi-Protocol Adapter */ +/* + * NovaTech product ids (FTDI_VID) + */ +#define FTDI_NT_ORIONLXM_PID 0x7c90 /* OrionLXm Substation Automation Platform */ + /********************************/ /** third-party VID/PID combos **/ diff --git a/drivers/usb/serial/io_ti.c b/drivers/usb/serial/io_ti.c index df90dae53eb9..c0a42e9e6777 100644 --- a/drivers/usb/serial/io_ti.c +++ b/drivers/usb/serial/io_ti.c @@ -821,7 +821,7 @@ static int build_i2c_fw_hdr(__u8 *header, struct device *dev) 
firmware_rec = (struct ti_i2c_firmware_rec*)i2c_header->Data; i2c_header->Type = I2C_DESC_TYPE_FIRMWARE_BLANK; - i2c_header->Size = (__u16)buffer_size; + i2c_header->Size = cpu_to_le16(buffer_size); i2c_header->CheckSum = cs; firmware_rec->Ver_Major = OperationalMajorVersion; firmware_rec->Ver_Minor = OperationalMinorVersion; diff --git a/drivers/usb/serial/io_usbvend.h b/drivers/usb/serial/io_usbvend.h index 51f83fbb73bb..6f6a856bc37c 100644 --- a/drivers/usb/serial/io_usbvend.h +++ b/drivers/usb/serial/io_usbvend.h @@ -594,7 +594,7 @@ struct edge_boot_descriptor { struct ti_i2c_desc { __u8 Type; // Type of descriptor - __u16 Size; // Size of data only not including header + __le16 Size; // Size of data only not including header __u8 CheckSum; // Checksum (8 bit sum of data only) __u8 Data[0]; // Data starts here } __attribute__((packed)); diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index 51e30740b2fe..59c3108cc136 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -161,6 +161,7 @@ static void option_instat_callback(struct urb *urb); #define NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_FULLSPEED 0x9000 #define NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_HIGHSPEED 0x9001 #define NOVATELWIRELESS_PRODUCT_E362 0x9010 +#define NOVATELWIRELESS_PRODUCT_E371 0x9011 #define NOVATELWIRELESS_PRODUCT_G2 0xA010 #define NOVATELWIRELESS_PRODUCT_MC551 0xB001 @@ -1012,6 +1013,7 @@ static const struct usb_device_id option_ids[] = { /* Novatel Ovation MC551 a.k.a. Verizon USB551L */ { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_MC551, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_E362, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_E371, 0xff, 0xff, 0xff) }, { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01) }, { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01A) }, diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c index 7ba042498857..010e0f8b8e4f 100644 --- a/drivers/vfio/pci/vfio_pci.c +++ b/drivers/vfio/pci/vfio_pci.c @@ -57,7 +57,8 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev) ret = vfio_config_init(vdev); if (ret) { - pci_load_and_free_saved_state(pdev, &vdev->pci_saved_state); + kfree(vdev->pci_saved_state); + vdev->pci_saved_state = NULL; pci_disable_device(pdev); return ret; } @@ -196,8 +197,7 @@ static int vfio_pci_get_irq_count(struct vfio_pci_device *vdev, int irq_type) if (pos) { pci_read_config_word(vdev->pdev, pos + PCI_MSI_FLAGS, &flags); - - return 1 << (flags & PCI_MSI_FLAGS_QMASK); + return 1 << ((flags & PCI_MSI_FLAGS_QMASK) >> 1); } } else if (irq_type == VFIO_PCI_MSIX_IRQ_INDEX) { u8 pos; diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c index 83cd1574c810..e50790e91f76 100644 --- a/drivers/vfio/pci/vfio_pci_config.c +++ b/drivers/vfio/pci/vfio_pci_config.c @@ -1126,8 +1126,7 @@ static int vfio_ext_cap_len(struct vfio_pci_device *vdev, u16 ecap, u16 epos) return pcibios_err_to_errno(ret); byte &= PCI_DPA_CAP_SUBSTATE_MASK; - byte = round_up(byte + 1, 4); - return PCI_DPA_BASE_SIZEOF + byte; + return PCI_DPA_BASE_SIZEOF + byte + 1; case PCI_EXT_CAP_ID_TPH: ret = pci_read_config_dword(pdev, epos + PCI_TPH_CAP, &dword); if (ret) @@ -1136,9 +1135,9 @@ static int vfio_ext_cap_len(struct vfio_pci_device *vdev, u16 ecap, u16 epos) if ((dword & PCI_TPH_CAP_LOC_MASK) == PCI_TPH_LOC_CAP) { int sts; - sts = byte & PCI_TPH_CAP_ST_MASK; + sts = 
dword & PCI_TPH_CAP_ST_MASK; sts >>= PCI_TPH_CAP_ST_SHIFT; - return PCI_TPH_BASE_SIZEOF + round_up(sts * 2, 4); + return PCI_TPH_BASE_SIZEOF + (sts * 2) + 2; } return PCI_TPH_BASE_SIZEOF; default: diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index 6673e7be507f..0734fbe5b651 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -524,7 +524,7 @@ unwind: static int vfio_dma_do_map(struct vfio_iommu *iommu, struct vfio_iommu_type1_dma_map *map) { - dma_addr_t end, iova; + dma_addr_t iova = map->iova; unsigned long vaddr = map->vaddr; size_t size = map->size; long npage; @@ -533,39 +533,30 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu, struct vfio_dma *dma; unsigned long pfn; - end = map->iova + map->size; + /* Verify that none of our __u64 fields overflow */ + if (map->size != size || map->vaddr != vaddr || map->iova != iova) + return -EINVAL; mask = ((uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu))) - 1; + WARN_ON(mask & PAGE_MASK); + /* READ/WRITE from device perspective */ if (map->flags & VFIO_DMA_MAP_FLAG_WRITE) prot |= IOMMU_WRITE; if (map->flags & VFIO_DMA_MAP_FLAG_READ) prot |= IOMMU_READ; - if (!prot) - return -EINVAL; /* No READ/WRITE? */ - - if (vaddr & mask) - return -EINVAL; - if (map->iova & mask) - return -EINVAL; - if (!map->size || map->size & mask) - return -EINVAL; - - WARN_ON(mask & PAGE_MASK); - - /* Don't allow IOVA wrap */ - if (end && end < map->iova) + if (!prot || !size || (size | iova | vaddr) & mask) return -EINVAL; - /* Don't allow virtual address wrap */ - if (vaddr + map->size && vaddr + map->size < vaddr) + /* Don't allow IOVA or virtual address wrap */ + if (iova + size - 1 < iova || vaddr + size - 1 < vaddr) return -EINVAL; mutex_lock(&iommu->lock); - if (vfio_find_dma(iommu, map->iova, map->size)) { + if (vfio_find_dma(iommu, iova, size)) { mutex_unlock(&iommu->lock); return -EEXIST; } @@ -576,17 +567,17 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu, return -ENOMEM; } - dma->iova = map->iova; - dma->vaddr = map->vaddr; + dma->iova = iova; + dma->vaddr = vaddr; dma->prot = prot; /* Insert zero-sized and grow as we map chunks of it */ vfio_link_dma(iommu, dma); - for (iova = map->iova; iova < end; iova += size, vaddr += size) { + while (size) { /* Pin a contiguous chunk of memory */ - npage = vfio_pin_pages(vaddr, (end - iova) >> PAGE_SHIFT, - prot, &pfn); + npage = vfio_pin_pages(vaddr + dma->size, + size >> PAGE_SHIFT, prot, &pfn); if (npage <= 0) { WARN_ON(!npage); ret = (int)npage; @@ -594,14 +585,14 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu, } /* Map it! 
*/ - ret = vfio_iommu_map(iommu, iova, pfn, npage, prot); + ret = vfio_iommu_map(iommu, iova + dma->size, pfn, npage, prot); if (ret) { vfio_unpin_pages(pfn, npage, prot, true); break; } - size = npage << PAGE_SHIFT; - dma->size += size; + size -= npage << PAGE_SHIFT; + dma->size += npage << PAGE_SHIFT; } if (ret) diff --git a/fs/affs/affs.h b/fs/affs/affs.h index 25b23b1e7f22..9bca88159725 100644 --- a/fs/affs/affs.h +++ b/fs/affs/affs.h @@ -1,3 +1,9 @@ +#ifdef pr_fmt +#undef pr_fmt +#endif + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/types.h> #include <linux/fs.h> #include <linux/buffer_head.h> @@ -206,7 +212,7 @@ affs_set_blocksize(struct super_block *sb, int size) static inline struct buffer_head * affs_bread(struct super_block *sb, int block) { - pr_debug("affs_bread: %d\n", block); + pr_debug("%s: %d\n", __func__, block); if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) return sb_bread(sb, block); return NULL; @@ -214,7 +220,7 @@ affs_bread(struct super_block *sb, int block) static inline struct buffer_head * affs_getblk(struct super_block *sb, int block) { - pr_debug("affs_getblk: %d\n", block); + pr_debug("%s: %d\n", __func__, block); if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) return sb_getblk(sb, block); return NULL; @@ -223,7 +229,7 @@ static inline struct buffer_head * affs_getzeroblk(struct super_block *sb, int block) { struct buffer_head *bh; - pr_debug("affs_getzeroblk: %d\n", block); + pr_debug("%s: %d\n", __func__, block); if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) { bh = sb_getblk(sb, block); lock_buffer(bh); @@ -238,7 +244,7 @@ static inline struct buffer_head * affs_getemptyblk(struct super_block *sb, int block) { struct buffer_head *bh; - pr_debug("affs_getemptyblk: %d\n", block); + pr_debug("%s: %d\n", __func__, block); if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) { bh = sb_getblk(sb, block); wait_on_buffer(bh); @@ -251,7 +257,7 @@ static inline void affs_brelse(struct buffer_head *bh) { if (bh) - pr_debug("affs_brelse: %lld\n", (long long) bh->b_blocknr); + pr_debug("%s: %lld\n", __func__, (long long) bh->b_blocknr); brelse(bh); } diff --git a/fs/affs/amigaffs.c b/fs/affs/amigaffs.c index 533a322c41c0..406b29836b19 100644 --- a/fs/affs/amigaffs.c +++ b/fs/affs/amigaffs.c @@ -34,7 +34,7 @@ affs_insert_hash(struct inode *dir, struct buffer_head *bh) ino = bh->b_blocknr; offset = affs_hash_name(sb, AFFS_TAIL(sb, bh)->name + 1, AFFS_TAIL(sb, bh)->name[0]); - pr_debug("AFFS: insert_hash(dir=%u, ino=%d)\n", (u32)dir->i_ino, ino); + pr_debug("%s(dir=%u, ino=%d)\n", __func__, (u32)dir->i_ino, ino); dir_bh = affs_bread(sb, dir->i_ino); if (!dir_bh) @@ -84,7 +84,8 @@ affs_remove_hash(struct inode *dir, struct buffer_head *rem_bh) sb = dir->i_sb; rem_ino = rem_bh->b_blocknr; offset = affs_hash_name(sb, AFFS_TAIL(sb, rem_bh)->name+1, AFFS_TAIL(sb, rem_bh)->name[0]); - pr_debug("AFFS: remove_hash(dir=%d, ino=%d, hashval=%d)\n", (u32)dir->i_ino, rem_ino, offset); + pr_debug("%s(dir=%d, ino=%d, hashval=%d)\n", + __func__, (u32)dir->i_ino, rem_ino, offset); bh = affs_bread(sb, dir->i_ino); if (!bh) @@ -147,7 +148,7 @@ affs_remove_link(struct dentry *dentry) u32 link_ino, ino; int retval; - pr_debug("AFFS: remove_link(key=%ld)\n", inode->i_ino); + pr_debug("%s(key=%ld)\n", __func__, inode->i_ino); retval = -EIO; bh = affs_bread(sb, inode->i_ino); if (!bh) @@ -279,7 +280,7 @@ affs_remove_header(struct dentry *dentry) if (!inode) 
goto done; - pr_debug("AFFS: remove_header(key=%ld)\n", inode->i_ino); + pr_debug("%s(key=%ld)\n", __func__, inode->i_ino); retval = -EIO; bh = affs_bread(sb, (u32)(long)dentry->d_fsdata); if (!bh) @@ -451,10 +452,10 @@ affs_error(struct super_block *sb, const char *function, const char *fmt, ...) vsnprintf(ErrorBuffer,sizeof(ErrorBuffer),fmt,args); va_end(args); - printk(KERN_CRIT "AFFS error (device %s): %s(): %s\n", sb->s_id, + pr_crit("error (device %s): %s(): %s\n", sb->s_id, function,ErrorBuffer); if (!(sb->s_flags & MS_RDONLY)) - printk(KERN_WARNING "AFFS: Remounting filesystem read-only\n"); + pr_warn("Remounting filesystem read-only\n"); sb->s_flags |= MS_RDONLY; } @@ -467,7 +468,7 @@ affs_warning(struct super_block *sb, const char *function, const char *fmt, ...) vsnprintf(ErrorBuffer,sizeof(ErrorBuffer),fmt,args); va_end(args); - printk(KERN_WARNING "AFFS warning (device %s): %s(): %s\n", sb->s_id, + pr_warn("(device %s): %s(): %s\n", sb->s_id, function,ErrorBuffer); } diff --git a/fs/affs/bitmap.c b/fs/affs/bitmap.c index a32246b8359e..c8de51185c23 100644 --- a/fs/affs/bitmap.c +++ b/fs/affs/bitmap.c @@ -17,7 +17,7 @@ affs_count_free_blocks(struct super_block *sb) u32 free; int i; - pr_debug("AFFS: count_free_blocks()\n"); + pr_debug("%s()\n", __func__); if (sb->s_flags & MS_RDONLY) return 0; @@ -43,7 +43,7 @@ affs_free_block(struct super_block *sb, u32 block) u32 blk, bmap, bit, mask, tmp; __be32 *data; - pr_debug("AFFS: free_block(%u)\n", block); + pr_debug("%s(%u)\n", __func__, block); if (block > sbi->s_partition_size) goto err_range; @@ -125,7 +125,7 @@ affs_alloc_block(struct inode *inode, u32 goal) sb = inode->i_sb; sbi = AFFS_SB(sb); - pr_debug("AFFS: balloc(inode=%lu,goal=%u): ", inode->i_ino, goal); + pr_debug("balloc(inode=%lu,goal=%u): ", inode->i_ino, goal); if (AFFS_I(inode)->i_pa_cnt) { pr_debug("%d\n", AFFS_I(inode)->i_lastalloc+1); @@ -254,8 +254,7 @@ int affs_init_bitmap(struct super_block *sb, int *flags) return 0; if (!AFFS_ROOT_TAIL(sb, sbi->s_root_bh)->bm_flag) { - printk(KERN_NOTICE "AFFS: Bitmap invalid - mounting %s read only\n", - sb->s_id); + pr_notice("Bitmap invalid - mounting %s read only\n", sb->s_id); *flags |= MS_RDONLY; return 0; } @@ -268,7 +267,7 @@ int affs_init_bitmap(struct super_block *sb, int *flags) size = sbi->s_bmap_count * sizeof(*bm); bm = sbi->s_bitmap = kzalloc(size, GFP_KERNEL); if (!sbi->s_bitmap) { - printk(KERN_ERR "AFFS: Bitmap allocation failed\n"); + pr_err("Bitmap allocation failed\n"); return -ENOMEM; } @@ -282,17 +281,17 @@ int affs_init_bitmap(struct super_block *sb, int *flags) bm->bm_key = be32_to_cpu(bmap_blk[blk]); bh = affs_bread(sb, bm->bm_key); if (!bh) { - printk(KERN_ERR "AFFS: Cannot read bitmap\n"); + pr_err("Cannot read bitmap\n"); res = -EIO; goto out; } if (affs_checksum_block(sb, bh)) { - printk(KERN_WARNING "AFFS: Bitmap %u invalid - mounting %s read only.\n", - bm->bm_key, sb->s_id); + pr_warn("Bitmap %u invalid - mounting %s read only.\n", + bm->bm_key, sb->s_id); *flags |= MS_RDONLY; goto out; } - pr_debug("AFFS: read bitmap block %d: %d\n", blk, bm->bm_key); + pr_debug("read bitmap block %d: %d\n", blk, bm->bm_key); bm->bm_free = memweight(bh->b_data + 4, sb->s_blocksize - 4); /* Don't try read the extension if this is the last block, @@ -304,7 +303,7 @@ int affs_init_bitmap(struct super_block *sb, int *flags) affs_brelse(bmap_bh); bmap_bh = affs_bread(sb, be32_to_cpu(bmap_blk[blk])); if (!bmap_bh) { - printk(KERN_ERR "AFFS: Cannot read bitmap extension\n"); + pr_err("Cannot read bitmap 
extension\n"); res = -EIO; goto out; } diff --git a/fs/affs/dir.c b/fs/affs/dir.c index cbbda476a805..59f07bec92a6 100644 --- a/fs/affs/dir.c +++ b/fs/affs/dir.c @@ -54,8 +54,8 @@ affs_readdir(struct file *file, struct dir_context *ctx) u32 ino; int error = 0; - pr_debug("AFFS: readdir(ino=%lu,f_pos=%lx)\n", - inode->i_ino, (unsigned long)ctx->pos); + pr_debug("%s(ino=%lu,f_pos=%lx)\n", + __func__, inode->i_ino, (unsigned long)ctx->pos); if (ctx->pos < 2) { file->private_data = (void *)0; @@ -81,7 +81,7 @@ affs_readdir(struct file *file, struct dir_context *ctx) */ ino = (u32)(long)file->private_data; if (ino && file->f_version == inode->i_version) { - pr_debug("AFFS: readdir() left off=%d\n", ino); + pr_debug("readdir() left off=%d\n", ino); goto inside; } @@ -117,7 +117,7 @@ inside: namelen = min(AFFS_TAIL(sb, fh_bh)->name[0], (u8)30); name = AFFS_TAIL(sb, fh_bh)->name + 1; - pr_debug("AFFS: readdir(): dir_emit(\"%.*s\", " + pr_debug("readdir(): dir_emit(\"%.*s\", " "ino=%u), hash=%d, f_pos=%x\n", namelen, name, ino, hash_pos, (u32)ctx->pos); diff --git a/fs/affs/file.c b/fs/affs/file.c index 8669b6ecddee..0270303388ee 100644 --- a/fs/affs/file.c +++ b/fs/affs/file.c @@ -45,7 +45,7 @@ const struct inode_operations affs_file_inode_operations = { static int affs_file_open(struct inode *inode, struct file *filp) { - pr_debug("AFFS: open(%lu,%d)\n", + pr_debug("open(%lu,%d)\n", inode->i_ino, atomic_read(&AFFS_I(inode)->i_opencnt)); atomic_inc(&AFFS_I(inode)->i_opencnt); return 0; @@ -54,7 +54,7 @@ affs_file_open(struct inode *inode, struct file *filp) static int affs_file_release(struct inode *inode, struct file *filp) { - pr_debug("AFFS: release(%lu, %d)\n", + pr_debug("release(%lu, %d)\n", inode->i_ino, atomic_read(&AFFS_I(inode)->i_opencnt)); if (atomic_dec_and_test(&AFFS_I(inode)->i_opencnt)) { @@ -324,7 +324,8 @@ affs_get_block(struct inode *inode, sector_t block, struct buffer_head *bh_resul struct buffer_head *ext_bh; u32 ext; - pr_debug("AFFS: get_block(%u, %lu)\n", (u32)inode->i_ino, (unsigned long)block); + pr_debug("%s(%u, %lu)\n", + __func__, (u32)inode->i_ino, (unsigned long)block); BUG_ON(block > (sector_t)0x7fffffffUL); @@ -498,34 +499,36 @@ affs_getemptyblk_ino(struct inode *inode, int block) } static int -affs_do_readpage_ofs(struct file *file, struct page *page, unsigned from, unsigned to) +affs_do_readpage_ofs(struct page *page, unsigned to) { struct inode *inode = page->mapping->host; struct super_block *sb = inode->i_sb; struct buffer_head *bh; char *data; + unsigned pos = 0; u32 bidx, boff, bsize; u32 tmp; - pr_debug("AFFS: read_page(%u, %ld, %d, %d)\n", (u32)inode->i_ino, page->index, from, to); - BUG_ON(from > to || to > PAGE_CACHE_SIZE); + pr_debug("%s(%u, %ld, 0, %d)\n", __func__, (u32)inode->i_ino, + page->index, to); + BUG_ON(to > PAGE_CACHE_SIZE); kmap(page); data = page_address(page); bsize = AFFS_SB(sb)->s_data_blksize; - tmp = (page->index << PAGE_CACHE_SHIFT) + from; + tmp = page->index << PAGE_CACHE_SHIFT; bidx = tmp / bsize; boff = tmp % bsize; - while (from < to) { + while (pos < to) { bh = affs_bread_ino(inode, bidx, 0); if (IS_ERR(bh)) return PTR_ERR(bh); - tmp = min(bsize - boff, to - from); - BUG_ON(from + tmp > to || tmp > bsize); - memcpy(data + from, AFFS_DATA(bh) + boff, tmp); + tmp = min(bsize - boff, to - pos); + BUG_ON(pos + tmp > to || tmp > bsize); + memcpy(data + pos, AFFS_DATA(bh) + boff, tmp); affs_brelse(bh); bidx++; - from += tmp; + pos += tmp; boff = 0; } flush_dcache_page(page); @@ -542,7 +545,7 @@ affs_extent_file_ofs(struct inode 
*inode, u32 newsize) u32 size, bsize; u32 tmp; - pr_debug("AFFS: extent_file(%u, %d)\n", (u32)inode->i_ino, newsize); + pr_debug("%s(%u, %d)\n", __func__, (u32)inode->i_ino, newsize); bsize = AFFS_SB(sb)->s_data_blksize; bh = NULL; size = AFFS_I(inode)->mmu_private; @@ -608,14 +611,14 @@ affs_readpage_ofs(struct file *file, struct page *page) u32 to; int err; - pr_debug("AFFS: read_page(%u, %ld)\n", (u32)inode->i_ino, page->index); + pr_debug("%s(%u, %ld)\n", __func__, (u32)inode->i_ino, page->index); to = PAGE_CACHE_SIZE; if (((page->index + 1) << PAGE_CACHE_SHIFT) > inode->i_size) { to = inode->i_size & ~PAGE_CACHE_MASK; memset(page_address(page) + to, 0, PAGE_CACHE_SIZE - to); } - err = affs_do_readpage_ofs(file, page, 0, to); + err = affs_do_readpage_ofs(page, to); if (!err) SetPageUptodate(page); unlock_page(page); @@ -631,7 +634,8 @@ static int affs_write_begin_ofs(struct file *file, struct address_space *mapping pgoff_t index; int err = 0; - pr_debug("AFFS: write_begin(%u, %llu, %llu)\n", (u32)inode->i_ino, (unsigned long long)pos, (unsigned long long)pos + len); + pr_debug("%s(%u, %llu, %llu)\n", __func__, (u32)inode->i_ino, + (unsigned long long)pos, (unsigned long long)pos + len); if (pos > AFFS_I(inode)->mmu_private) { /* XXX: this probably leaves a too-big i_size in case of * failure. Should really be updating i_size at write_end time @@ -651,7 +655,7 @@ static int affs_write_begin_ofs(struct file *file, struct address_space *mapping return 0; /* XXX: inefficient but safe in the face of short writes */ - err = affs_do_readpage_ofs(file, page, 0, PAGE_CACHE_SIZE); + err = affs_do_readpage_ofs(page, PAGE_CACHE_SIZE); if (err) { unlock_page(page); page_cache_release(page); @@ -680,7 +684,9 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping, * due to write_begin. 
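The affs conversions above hinge on defining pr_fmt() once in affs.h so every pr_*() call picks up the module prefix, and on using __func__ instead of spelling each function name inside the message. A stand-alone sketch of the idiom, with a simplified pr_debug() standing in for the kernel macro:

/* The real pr_debug() also prepends pr_fmt(fmt); the kernel header in
 * this series uses KBUILD_MODNAME rather than a literal "affs". */
#include <stdio.h>

#define pr_fmt(fmt) "affs" ": " fmt
#define pr_debug(fmt, ...) printf(pr_fmt(fmt), ##__VA_ARGS__)

static void read_super(const char *opts)
{
        /* prints "affs: read_super(...)" without repeating the name by hand */
        pr_debug("%s(%s)\n", __func__, opts);
}

int main(void)
{
        read_super("no options");
        return 0;
}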
*/ - pr_debug("AFFS: write_begin(%u, %llu, %llu)\n", (u32)inode->i_ino, (unsigned long long)pos, (unsigned long long)pos + len); + pr_debug("%s(%u, %llu, %llu)\n", + __func__, (u32)inode->i_ino, (unsigned long long)pos, + (unsigned long long)pos + len); bsize = AFFS_SB(sb)->s_data_blksize; data = page_address(page); @@ -802,7 +808,7 @@ affs_free_prealloc(struct inode *inode) { struct super_block *sb = inode->i_sb; - pr_debug("AFFS: free_prealloc(ino=%lu)\n", inode->i_ino); + pr_debug("free_prealloc(ino=%lu)\n", inode->i_ino); while (AFFS_I(inode)->i_pa_cnt) { AFFS_I(inode)->i_pa_cnt--; @@ -822,7 +828,7 @@ affs_truncate(struct inode *inode) struct buffer_head *ext_bh; int i; - pr_debug("AFFS: truncate(inode=%d, oldsize=%u, newsize=%u)\n", + pr_debug("truncate(inode=%d, oldsize=%u, newsize=%u)\n", (u32)inode->i_ino, (u32)AFFS_I(inode)->mmu_private, (u32)inode->i_size); last_blk = 0; diff --git a/fs/affs/inode.c b/fs/affs/inode.c index 96df91e8c334..bec2d1a0c91c 100644 --- a/fs/affs/inode.c +++ b/fs/affs/inode.c @@ -34,7 +34,7 @@ struct inode *affs_iget(struct super_block *sb, unsigned long ino) if (!(inode->i_state & I_NEW)) return inode; - pr_debug("AFFS: affs_iget(%lu)\n", inode->i_ino); + pr_debug("affs_iget(%lu)\n", inode->i_ino); block = inode->i_ino; bh = affs_bread(sb, block); @@ -175,7 +175,7 @@ affs_write_inode(struct inode *inode, struct writeback_control *wbc) uid_t uid; gid_t gid; - pr_debug("AFFS: write_inode(%lu)\n",inode->i_ino); + pr_debug("write_inode(%lu)\n", inode->i_ino); if (!inode->i_nlink) // possibly free block @@ -220,7 +220,7 @@ affs_notify_change(struct dentry *dentry, struct iattr *attr) struct inode *inode = dentry->d_inode; int error; - pr_debug("AFFS: notify_change(%lu,0x%x)\n",inode->i_ino,attr->ia_valid); + pr_debug("notify_change(%lu,0x%x)\n", inode->i_ino, attr->ia_valid); error = inode_change_ok(inode,attr); if (error) @@ -258,7 +258,8 @@ void affs_evict_inode(struct inode *inode) { unsigned long cache_page; - pr_debug("AFFS: evict_inode(ino=%lu, nlink=%u)\n", inode->i_ino, inode->i_nlink); + pr_debug("evict_inode(ino=%lu, nlink=%u)\n", + inode->i_ino, inode->i_nlink); truncate_inode_pages_final(&inode->i_data); if (!inode->i_nlink) { @@ -271,7 +272,7 @@ affs_evict_inode(struct inode *inode) affs_free_prealloc(inode); cache_page = (unsigned long)AFFS_I(inode)->i_lc; if (cache_page) { - pr_debug("AFFS: freeing ext cache\n"); + pr_debug("freeing ext cache\n"); AFFS_I(inode)->i_lc = NULL; AFFS_I(inode)->i_ac = NULL; free_page(cache_page); @@ -350,7 +351,8 @@ affs_add_entry(struct inode *dir, struct inode *inode, struct dentry *dentry, s3 u32 block = 0; int retval; - pr_debug("AFFS: add_entry(dir=%u, inode=%u, \"%*s\", type=%d)\n", (u32)dir->i_ino, + pr_debug("%s(dir=%u, inode=%u, \"%*s\", type=%d)\n", + __func__, (u32)dir->i_ino, (u32)inode->i_ino, (int)dentry->d_name.len, dentry->d_name.name, type); retval = -EIO; diff --git a/fs/affs/namei.c b/fs/affs/namei.c index 6dae1ccd176d..035bd31556fc 100644 --- a/fs/affs/namei.c +++ b/fs/affs/namei.c @@ -190,7 +190,8 @@ affs_find_entry(struct inode *dir, struct dentry *dentry) toupper_t toupper = affs_get_toupper(sb); u32 key; - pr_debug("AFFS: find_entry(\"%.*s\")\n", (int)dentry->d_name.len, dentry->d_name.name); + pr_debug("%s(\"%.*s\")\n", + __func__, (int)dentry->d_name.len, dentry->d_name.name); bh = affs_bread(sb, dir->i_ino); if (!bh) @@ -218,7 +219,8 @@ affs_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags) struct buffer_head *bh; struct inode *inode = NULL; - pr_debug("AFFS: 
lookup(\"%.*s\")\n",(int)dentry->d_name.len,dentry->d_name.name); + pr_debug("%s(\"%.*s\")\n", + __func__, (int)dentry->d_name.len, dentry->d_name.name); affs_lock_dir(dir); bh = affs_find_entry(dir, dentry); @@ -248,9 +250,9 @@ affs_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags) int affs_unlink(struct inode *dir, struct dentry *dentry) { - pr_debug("AFFS: unlink(dir=%d, %lu \"%.*s\")\n", (u32)dir->i_ino, - dentry->d_inode->i_ino, - (int)dentry->d_name.len, dentry->d_name.name); + pr_debug("%s(dir=%d, %lu \"%.*s\")\n", + __func__, (u32)dir->i_ino, dentry->d_inode->i_ino, + (int)dentry->d_name.len, dentry->d_name.name); return affs_remove_header(dentry); } @@ -262,7 +264,8 @@ affs_create(struct inode *dir, struct dentry *dentry, umode_t mode, bool excl) struct inode *inode; int error; - pr_debug("AFFS: create(%lu,\"%.*s\",0%ho)\n",dir->i_ino,(int)dentry->d_name.len, + pr_debug("%s(%lu,\"%.*s\",0%ho)\n", + __func__, dir->i_ino, (int)dentry->d_name.len, dentry->d_name.name,mode); inode = affs_new_inode(dir); @@ -291,8 +294,9 @@ affs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) struct inode *inode; int error; - pr_debug("AFFS: mkdir(%lu,\"%.*s\",0%ho)\n",dir->i_ino, - (int)dentry->d_name.len,dentry->d_name.name,mode); + pr_debug("%s(%lu,\"%.*s\",0%ho)\n", + __func__, dir->i_ino, (int)dentry->d_name.len, + dentry->d_name.name, mode); inode = affs_new_inode(dir); if (!inode) @@ -317,8 +321,8 @@ affs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) int affs_rmdir(struct inode *dir, struct dentry *dentry) { - pr_debug("AFFS: rmdir(dir=%u, %lu \"%.*s\")\n", (u32)dir->i_ino, - dentry->d_inode->i_ino, + pr_debug("%s(dir=%u, %lu \"%.*s\")\n", + __func__, (u32)dir->i_ino, dentry->d_inode->i_ino, (int)dentry->d_name.len, dentry->d_name.name); return affs_remove_header(dentry); @@ -334,8 +338,9 @@ affs_symlink(struct inode *dir, struct dentry *dentry, const char *symname) int i, maxlen, error; char c, lc; - pr_debug("AFFS: symlink(%lu,\"%.*s\" -> \"%s\")\n",dir->i_ino, - (int)dentry->d_name.len,dentry->d_name.name,symname); + pr_debug("%s(%lu,\"%.*s\" -> \"%s\")\n", + __func__, dir->i_ino, (int)dentry->d_name.len, + dentry->d_name.name, symname); maxlen = AFFS_SB(sb)->s_hashsize * sizeof(u32) - 1; inode = affs_new_inode(dir); @@ -404,7 +409,8 @@ affs_link(struct dentry *old_dentry, struct inode *dir, struct dentry *dentry) { struct inode *inode = old_dentry->d_inode; - pr_debug("AFFS: link(%u, %u, \"%.*s\")\n", (u32)inode->i_ino, (u32)dir->i_ino, + pr_debug("%s(%u, %u, \"%.*s\")\n", + __func__, (u32)inode->i_ino, (u32)dir->i_ino, (int)dentry->d_name.len,dentry->d_name.name); return affs_add_entry(dir, inode, dentry, ST_LINKFILE); @@ -418,9 +424,10 @@ affs_rename(struct inode *old_dir, struct dentry *old_dentry, struct buffer_head *bh = NULL; int retval; - pr_debug("AFFS: rename(old=%u,\"%*s\" to new=%u,\"%*s\")\n", - (u32)old_dir->i_ino, (int)old_dentry->d_name.len, old_dentry->d_name.name, - (u32)new_dir->i_ino, (int)new_dentry->d_name.len, new_dentry->d_name.name); + pr_debug("%s(old=%u,\"%*s\" to new=%u,\"%*s\")\n", + __func__, (u32)old_dir->i_ino, (int)old_dentry->d_name.len, + old_dentry->d_name.name, (u32)new_dir->i_ino, + (int)new_dentry->d_name.len, new_dentry->d_name.name); retval = affs_check_name(new_dentry->d_name.name, new_dentry->d_name.len, diff --git a/fs/affs/super.c b/fs/affs/super.c index 895ac7dc9dbf..51f1a95bff73 100644 --- a/fs/affs/super.c +++ b/fs/affs/super.c @@ -46,7 +46,7 @@ static void affs_put_super(struct super_block *sb) { 
struct affs_sb_info *sbi = AFFS_SB(sb); - pr_debug("AFFS: put_super()\n"); + pr_debug("%s()\n", __func__); cancel_delayed_work_sync(&sbi->sb_work); } @@ -220,7 +220,7 @@ parse_options(char *options, kuid_t *uid, kgid_t *gid, int *mode, int *reserved, return 0; if (n != 512 && n != 1024 && n != 2048 && n != 4096) { - printk ("AFFS: Invalid blocksize (512, 1024, 2048, 4096 allowed)\n"); + pr_warn("Invalid blocksize (512, 1024, 2048, 4096 allowed)\n"); return 0; } *blocksize = n; @@ -285,8 +285,8 @@ parse_options(char *options, kuid_t *uid, kgid_t *gid, int *mode, int *reserved, /* Silently ignore the quota options */ break; default: - printk("AFFS: Unrecognized mount option \"%s\" " - "or missing value\n", p); + pr_warn("Unrecognized mount option \"%s\" or missing value\n", + p); return 0; } } @@ -319,7 +319,7 @@ static int affs_fill_super(struct super_block *sb, void *data, int silent) save_mount_options(sb, data); - pr_debug("AFFS: read_super(%s)\n",data ? (const char *)data : "no options"); + pr_debug("read_super(%s)\n", data ? (const char *)data : "no options"); sb->s_magic = AFFS_SUPER_MAGIC; sb->s_op = &affs_sops; @@ -339,7 +339,7 @@ static int affs_fill_super(struct super_block *sb, void *data, int silent) if (!parse_options(data,&uid,&gid,&i,&reserved,&root_block, &blocksize,&sbi->s_prefix, sbi->s_volume, &mount_flags)) { - printk(KERN_ERR "AFFS: Error parsing options\n"); + pr_err("Error parsing options\n"); return -EINVAL; } /* N.B. after this point s_prefix must be released */ @@ -356,7 +356,7 @@ static int affs_fill_super(struct super_block *sb, void *data, int silent) */ size = sb->s_bdev->bd_inode->i_size >> 9; - pr_debug("AFFS: initial blocksize=%d, #blocks=%d\n", 512, size); + pr_debug("initial blocksize=%d, #blocks=%d\n", 512, size); affs_set_blocksize(sb, PAGE_SIZE); /* Try to find root block. Its location depends on the block size. */ @@ -371,7 +371,7 @@ static int affs_fill_super(struct super_block *sb, void *data, int silent) sbi->s_root_block = root_block; if (root_block < 0) sbi->s_root_block = (reserved + size - 1) / 2; - pr_debug("AFFS: setting blocksize to %d\n", blocksize); + pr_debug("setting blocksize to %d\n", blocksize); affs_set_blocksize(sb, blocksize); sbi->s_partition_size = size; @@ -386,7 +386,7 @@ static int affs_fill_super(struct super_block *sb, void *data, int silent) * block behind the calculated one. So we check this one, too. */ for (num_bm = 0; num_bm < 2; num_bm++) { - pr_debug("AFFS: Dev %s, trying root=%u, bs=%d, " + pr_debug("Dev %s, trying root=%u, bs=%d, " "size=%d, reserved=%d\n", sb->s_id, sbi->s_root_block + num_bm, @@ -407,8 +407,7 @@ static int affs_fill_super(struct super_block *sb, void *data, int silent) } } if (!silent) - printk(KERN_ERR "AFFS: No valid root block on device %s\n", - sb->s_id); + pr_err("No valid root block on device %s\n", sb->s_id); return -EINVAL; /* N.B. 
after this point bh must be released */ @@ -420,7 +419,7 @@ got_root: /* Find out which kind of FS we have */ boot_bh = sb_bread(sb, 0); if (!boot_bh) { - printk(KERN_ERR "AFFS: Cannot read boot block\n"); + pr_err("Cannot read boot block\n"); return -EINVAL; } memcpy(sig, boot_bh->b_data, 4); @@ -433,8 +432,7 @@ got_root: */ if ((chksum == FS_DCFFS || chksum == MUFS_DCFFS || chksum == FS_DCOFS || chksum == MUFS_DCOFS) && !(sb->s_flags & MS_RDONLY)) { - printk(KERN_NOTICE "AFFS: Dircache FS - mounting %s read only\n", - sb->s_id); + pr_notice("Dircache FS - mounting %s read only\n", sb->s_id); sb->s_flags |= MS_RDONLY; } switch (chksum) { @@ -468,14 +466,14 @@ got_root: sb->s_flags |= MS_NOEXEC; break; default: - printk(KERN_ERR "AFFS: Unknown filesystem on device %s: %08X\n", - sb->s_id, chksum); + pr_err("Unknown filesystem on device %s: %08X\n", + sb->s_id, chksum); return -EINVAL; } if (mount_flags & SF_VERBOSE) { u8 len = AFFS_ROOT_TAIL(sb, root_bh)->disk_name[0]; - printk(KERN_NOTICE "AFFS: Mounting volume \"%.*s\": Type=%.3s\\%c, Blocksize=%d\n", + pr_notice("Mounting volume \"%.*s\": Type=%.3s\\%c, Blocksize=%d\n", len > 31 ? 31 : len, AFFS_ROOT_TAIL(sb, root_bh)->disk_name + 1, sig, sig[3] + '0', blocksize); @@ -506,11 +504,11 @@ got_root: sb->s_root = d_make_root(root_inode); if (!sb->s_root) { - printk(KERN_ERR "AFFS: Get root inode failed\n"); + pr_err("AFFS: Get root inode failed\n"); return -ENOMEM; } - pr_debug("AFFS: s_flags=%lX\n",sb->s_flags); + pr_debug("s_flags=%lX\n", sb->s_flags); return 0; } @@ -530,7 +528,7 @@ affs_remount(struct super_block *sb, int *flags, char *data) char volume[32]; char *prefix = NULL; - pr_debug("AFFS: remount(flags=0x%x,opts=\"%s\")\n",*flags,data); + pr_debug("%s(flags=0x%x,opts=\"%s\")\n", __func__, *flags, data); sync_filesystem(sb); *flags |= MS_NODIRATIME; @@ -578,8 +576,9 @@ affs_statfs(struct dentry *dentry, struct kstatfs *buf) int free; u64 id = huge_encode_dev(sb->s_bdev->bd_dev); - pr_debug("AFFS: statfs() partsize=%d, reserved=%d\n",AFFS_SB(sb)->s_partition_size, - AFFS_SB(sb)->s_reserved); + pr_debug("%s() partsize=%d, reserved=%d\n", + __func__, AFFS_SB(sb)->s_partition_size, + AFFS_SB(sb)->s_reserved); free = affs_count_free_blocks(sb); buf->f_type = AFFS_SUPER_MAGIC; diff --git a/fs/affs/symlink.c b/fs/affs/symlink.c index ee00f08c4f53..f39b71c3981e 100644 --- a/fs/affs/symlink.c +++ b/fs/affs/symlink.c @@ -21,7 +21,7 @@ static int affs_symlink_readpage(struct file *file, struct page *page) char c; char lc; - pr_debug("AFFS: follow_link(ino=%lu)\n",inode->i_ino); + pr_debug("follow_link(ino=%lu)\n", inode->i_ino); err = -EIO; bh = affs_bread(inode->i_sb, inode->i_ino); diff --git a/fs/befs/btree.c b/fs/befs/btree.c index a2cd305a993a..9c7faa8a9288 100644 --- a/fs/befs/btree.c +++ b/fs/befs/btree.c @@ -318,7 +318,7 @@ befs_btree_find(struct super_block *sb, befs_data_stream * ds, * befs_find_key - Search for a key within a node * @sb: Filesystem superblock * @node: Node to find the key within - * @key: Keystring to search for + * @findkey: Keystring to search for * @value: If key is found, the value stored with the key is put here * * finds exact match if one exists, and returns BEFS_BT_MATCH @@ -405,7 +405,7 @@ befs_find_key(struct super_block *sb, befs_btree_node * node, * Heres how it works: Key_no is the index of the key/value pair to * return in keybuf/value. * Bufsize is the size of keybuf (BEFS_NAME_LEN+1 is a good size). Keysize is - * the number of charecters in the key (just a convenience). 
+ * the number of characters in the key (just a convenience). * * Algorithm: * Get the first leafnode of the tree. See if the requested key is in that @@ -502,12 +502,11 @@ befs_btree_read(struct super_block *sb, befs_data_stream * ds, "for key of size %d", __func__, bufsize, keylen); brelse(this_node->bh); goto error_alloc; - }; + } - strncpy(keybuf, keystart, keylen); + strlcpy(keybuf, keystart, keylen + 1); *value = fs64_to_cpu(sb, valarray[cur_key]); *keysize = keylen; - keybuf[keylen] = '\0'; befs_debug(sb, "Read [%llu,%d]: Key \"%.*s\", Value %llu", node_off, cur_key, keylen, keybuf, *value); @@ -707,7 +706,7 @@ befs_bt_get_key(struct super_block *sb, befs_btree_node * node, * @key1: pointer to the first key to be compared * @keylen1: length in bytes of key1 * @key2: pointer to the second key to be compared - * @kelen2: length in bytes of key2 + * @keylen2: length in bytes of key2 * * Returns 0 if @key1 and @key2 are equal. * Returns >0 if @key1 is greater. diff --git a/fs/befs/datastream.c b/fs/befs/datastream.c index c467bebd50af..1e8e0b8d8836 100644 --- a/fs/befs/datastream.c +++ b/fs/befs/datastream.c @@ -116,7 +116,7 @@ befs_fblock2brun(struct super_block *sb, befs_data_stream * data, * befs_read_lsmylink - read long symlink from datastream. * @sb: Filesystem superblock * @ds: Datastrem to read from - * @buf: Buffer in which to place long symlink data + * @buff: Buffer in which to place long symlink data * @len: Length of the long symlink in bytes * * Returns the number of bytes read diff --git a/fs/befs/linuxvfs.c b/fs/befs/linuxvfs.c index d626756ff721..a16fbd4e8241 100644 --- a/fs/befs/linuxvfs.c +++ b/fs/befs/linuxvfs.c @@ -133,14 +133,6 @@ befs_get_block(struct inode *inode, sector_t block, befs_debug(sb, "---> befs_get_block() for inode %lu, block %ld", (unsigned long)inode->i_ino, (long)block); - - if (block < 0) { - befs_error(sb, "befs_get_block() was asked for a block " - "number less than zero: block %ld in inode %lu", - (long)block, (unsigned long)inode->i_ino); - return -EIO; - } - if (create) { befs_error(sb, "befs_get_block() was asked to write to " "block %ld in inode %lu", (long)block, @@ -396,9 +388,8 @@ static struct inode *befs_iget(struct super_block *sb, unsigned long ino) if (S_ISLNK(inode->i_mode) && !(befs_ino->i_flags & BEFS_LONG_SYMLINK)){ inode->i_size = 0; inode->i_blocks = befs_sb->block_size / VFS_BLOCK_SIZE; - strncpy(befs_ino->i_data.symlink, raw_inode->data.symlink, - BEFS_SYMLINK_LEN - 1); - befs_ino->i_data.symlink[BEFS_SYMLINK_LEN - 1] = '\0'; + strlcpy(befs_ino->i_data.symlink, raw_inode->data.symlink, + BEFS_SYMLINK_LEN); } else { int num_blks; @@ -591,21 +582,21 @@ befs_utf2nls(struct super_block *sb, const char *in, /** * befs_nls2utf - Convert NLS string to utf8 encodeing * @sb: Superblock - * @src: Input string buffer in NLS format - * @srclen: Length of input string in bytes - * @dest: The output string in UTF-8 format - * @destlen: Length of the output buffer + * @in: Input string buffer in NLS format + * @in_len: Length of input string in bytes + * @out: The output string in UTF-8 format + * @out_len: Length of the output buffer * - * Converts input string @src, which is in the format of the loaded NLS map, + * Converts input string @in, which is in the format of the loaded NLS map, * into a utf8 string. 
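
The befs hunks above replace strncpy() plus a manual terminator with a single strlcpy(keybuf, keystart, keylen + 1) call, which always leaves the destination NUL-terminated. As a hedged illustration of why that matters, the userspace sketch below uses bounded_copy(), a local stand-in written for this note with the same contract as the kernel's strlcpy(); it is not the kernel function itself.

    #include <stdio.h>
    #include <string.h>

    /* Userspace stand-in with the strlcpy() contract: copy at most
     * size-1 bytes and always NUL-terminate dst; return strlen(src). */
    static size_t bounded_copy(char *dst, const char *src, size_t size)
    {
            size_t len = strlen(src);

            if (size) {
                    size_t n = (len >= size) ? size - 1 : len;

                    memcpy(dst, src, n);
                    dst[n] = '\0';
            }
            return len;
    }

    int main(void)
    {
            char a[8], b[8];

            strncpy(a, "longname", sizeof(a));      /* 8 bytes copied, no NUL */
            bounded_copy(b, "longname", sizeof(b)); /* 7 bytes copied, plus NUL */

            printf("strncpy copy:      %.8s (needs explicit length)\n", a);
            printf("bounded_copy copy: %s\n", b);
            return 0;
    }

The strncpy() result fills the whole buffer with no terminator, so it can only be printed with an explicit precision; the bounded copy is always a valid C string, which is exactly the guarantee the befs change buys.
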
* - * The destination string @dest is allocated by this function and the caller is + * The destination string @out is allocated by this function and the caller is * responsible for freeing it with kfree() * - * On return, *@destlen is the length of @dest in bytes. + * On return, *@out_len is the length of @out in bytes. * * On success, the return value is the number of utf8 characters written to - * the output buffer @dest. + * the output buffer @out. * * On Failure, a negative number coresponding to the error code is returned. */ diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index fd38b5053479..484aacac2c89 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -360,10 +360,13 @@ static int fs_path_ensure_buf(struct fs_path *p, int len) /* * First time the inline_buf does not suffice */ - if (p->buf == p->inline_buf) + if (p->buf == p->inline_buf) { tmp_buf = kmalloc(len, GFP_NOFS); - else + if (tmp_buf) + memcpy(tmp_buf, p->buf, old_buf_len); + } else { tmp_buf = krealloc(p->buf, len, GFP_NOFS); + } if (!tmp_buf) return -ENOMEM; p->buf = tmp_buf; diff --git a/fs/cachefiles/bind.c b/fs/cachefiles/bind.c index 5b99bafc31d1..d749731dc0ee 100644 --- a/fs/cachefiles/bind.c +++ b/fs/cachefiles/bind.c @@ -50,18 +50,18 @@ int cachefiles_daemon_bind(struct cachefiles_cache *cache, char *args) cache->brun_percent < 100); if (*args) { - kerror("'bind' command doesn't take an argument"); + pr_err("'bind' command doesn't take an argument"); return -EINVAL; } if (!cache->rootdirname) { - kerror("No cache directory specified"); + pr_err("No cache directory specified"); return -EINVAL; } /* don't permit already bound caches to be re-bound */ if (test_bit(CACHEFILES_READY, &cache->flags)) { - kerror("Cache already bound"); + pr_err("Cache already bound"); return -EBUSY; } @@ -228,9 +228,7 @@ static int cachefiles_daemon_add_cache(struct cachefiles_cache *cache) set_bit(CACHEFILES_READY, &cache->flags); dput(root); - printk(KERN_INFO "CacheFiles:" - " File cache on %s registered\n", - cache->cache.identifier); + pr_info("File cache on %s registered\n", cache->cache.identifier); /* check how much space the cache has */ cachefiles_has_space(cache, 0, 0); @@ -250,7 +248,7 @@ error_open_root: kmem_cache_free(cachefiles_object_jar, fsdef); error_root_object: cachefiles_end_secure(cache, saved_cred); - kerror("Failed to register: %d", ret); + pr_err("Failed to register: %d", ret); return ret; } @@ -262,9 +260,8 @@ void cachefiles_daemon_unbind(struct cachefiles_cache *cache) _enter(""); if (test_bit(CACHEFILES_READY, &cache->flags)) { - printk(KERN_INFO "CacheFiles:" - " File cache on %s unregistering\n", - cache->cache.identifier); + pr_info("File cache on %s unregistering\n", + cache->cache.identifier); fscache_withdraw_cache(&cache->cache); } diff --git a/fs/cachefiles/daemon.c b/fs/cachefiles/daemon.c index 0a1467b15516..b078d3081d6c 100644 --- a/fs/cachefiles/daemon.c +++ b/fs/cachefiles/daemon.c @@ -315,8 +315,7 @@ static unsigned int cachefiles_daemon_poll(struct file *file, static int cachefiles_daemon_range_error(struct cachefiles_cache *cache, char *args) { - kerror("Free space limits must be in range" - " 0%%<=stop<cull<run<100%%"); + pr_err("Free space limits must be in range 0%%<=stop<cull<run<100%%"); return -EINVAL; } @@ -476,12 +475,12 @@ static int cachefiles_daemon_dir(struct cachefiles_cache *cache, char *args) _enter(",%s", args); if (!*args) { - kerror("Empty directory specified"); + pr_err("Empty directory specified"); return -EINVAL; } if (cache->rootdirname) { - kerror("Second cache 
directory specified"); + pr_err("Second cache directory specified"); return -EEXIST; } @@ -504,12 +503,12 @@ static int cachefiles_daemon_secctx(struct cachefiles_cache *cache, char *args) _enter(",%s", args); if (!*args) { - kerror("Empty security context specified"); + pr_err("Empty security context specified"); return -EINVAL; } if (cache->secctx) { - kerror("Second security context specified"); + pr_err("Second security context specified"); return -EINVAL; } @@ -532,7 +531,7 @@ static int cachefiles_daemon_tag(struct cachefiles_cache *cache, char *args) _enter(",%s", args); if (!*args) { - kerror("Empty tag specified"); + pr_err("Empty tag specified"); return -EINVAL; } @@ -563,12 +562,12 @@ static int cachefiles_daemon_cull(struct cachefiles_cache *cache, char *args) goto inval; if (!test_bit(CACHEFILES_READY, &cache->flags)) { - kerror("cull applied to unready cache"); + pr_err("cull applied to unready cache"); return -EIO; } if (test_bit(CACHEFILES_DEAD, &cache->flags)) { - kerror("cull applied to dead cache"); + pr_err("cull applied to dead cache"); return -EIO; } @@ -588,11 +587,11 @@ static int cachefiles_daemon_cull(struct cachefiles_cache *cache, char *args) notdir: path_put(&path); - kerror("cull command requires dirfd to be a directory"); + pr_err("cull command requires dirfd to be a directory"); return -ENOTDIR; inval: - kerror("cull command requires dirfd and filename"); + pr_err("cull command requires dirfd and filename"); return -EINVAL; } @@ -615,7 +614,7 @@ static int cachefiles_daemon_debug(struct cachefiles_cache *cache, char *args) return 0; inval: - kerror("debug command requires mask"); + pr_err("debug command requires mask"); return -EINVAL; } @@ -635,12 +634,12 @@ static int cachefiles_daemon_inuse(struct cachefiles_cache *cache, char *args) goto inval; if (!test_bit(CACHEFILES_READY, &cache->flags)) { - kerror("inuse applied to unready cache"); + pr_err("inuse applied to unready cache"); return -EIO; } if (test_bit(CACHEFILES_DEAD, &cache->flags)) { - kerror("inuse applied to dead cache"); + pr_err("inuse applied to dead cache"); return -EIO; } @@ -660,11 +659,11 @@ static int cachefiles_daemon_inuse(struct cachefiles_cache *cache, char *args) notdir: path_put(&path); - kerror("inuse command requires dirfd to be a directory"); + pr_err("inuse command requires dirfd to be a directory"); return -ENOTDIR; inval: - kerror("inuse command requires dirfd and filename"); + pr_err("inuse command requires dirfd and filename"); return -EINVAL; } diff --git a/fs/cachefiles/interface.c b/fs/cachefiles/interface.c index 57e17fe6121a..584743d456c3 100644 --- a/fs/cachefiles/interface.c +++ b/fs/cachefiles/interface.c @@ -146,8 +146,7 @@ static int cachefiles_lookup_object(struct fscache_object *_object) if (ret < 0 && ret != -ETIMEDOUT) { if (ret != -ENOBUFS) - printk(KERN_WARNING - "CacheFiles: Lookup failed error %d\n", ret); + pr_warn("Lookup failed error %d\n", ret); fscache_object_lookup_error(&object->fscache); } diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h index 5349473df1b1..3d50998abf57 100644 --- a/fs/cachefiles/internal.h +++ b/fs/cachefiles/internal.h @@ -9,6 +9,13 @@ * 2 of the Licence, or (at your option) any later version. */ +#ifdef pr_fmt +#undef pr_fmt +#endif + +#define pr_fmt(fmt) "CacheFiles: " fmt + + #include <linux/fscache-cache.h> #include <linux/timer.h> #include <linux/wait.h> @@ -245,11 +252,10 @@ extern int cachefiles_remove_object_xattr(struct cachefiles_cache *cache, /* * error handling */ -#define kerror(FMT, ...) 
printk(KERN_ERR "CacheFiles: "FMT"\n", ##__VA_ARGS__) #define cachefiles_io_error(___cache, FMT, ...) \ do { \ - kerror("I/O Error: " FMT, ##__VA_ARGS__); \ + pr_err("I/O Error: " FMT, ##__VA_ARGS__); \ fscache_io_error(&(___cache)->cache); \ set_bit(CACHEFILES_DEAD, &(___cache)->flags); \ } while (0) @@ -310,8 +316,8 @@ do { \ #define ASSERT(X) \ do { \ if (unlikely(!(X))) { \ - printk(KERN_ERR "\n"); \ - printk(KERN_ERR "CacheFiles: Assertion failed\n"); \ + pr_err("\n"); \ + pr_err("Assertion failed\n"); \ BUG(); \ } \ } while (0) @@ -319,9 +325,9 @@ do { \ #define ASSERTCMP(X, OP, Y) \ do { \ if (unlikely(!((X) OP (Y)))) { \ - printk(KERN_ERR "\n"); \ - printk(KERN_ERR "CacheFiles: Assertion failed\n"); \ - printk(KERN_ERR "%lx " #OP " %lx is false\n", \ + pr_err("\n"); \ + pr_err("Assertion failed\n"); \ + pr_err("%lx " #OP " %lx is false\n", \ (unsigned long)(X), (unsigned long)(Y)); \ BUG(); \ } \ @@ -330,8 +336,8 @@ do { \ #define ASSERTIF(C, X) \ do { \ if (unlikely((C) && !(X))) { \ - printk(KERN_ERR "\n"); \ - printk(KERN_ERR "CacheFiles: Assertion failed\n"); \ + pr_err("\n"); \ + pr_err("Assertion failed\n"); \ BUG(); \ } \ } while (0) @@ -339,9 +345,9 @@ do { \ #define ASSERTIFCMP(C, X, OP, Y) \ do { \ if (unlikely((C) && !((X) OP (Y)))) { \ - printk(KERN_ERR "\n"); \ - printk(KERN_ERR "CacheFiles: Assertion failed\n"); \ - printk(KERN_ERR "%lx " #OP " %lx is false\n", \ + pr_err("\n"); \ + pr_err("Assertion failed\n"); \ + pr_err("%lx " #OP " %lx is false\n", \ (unsigned long)(X), (unsigned long)(Y)); \ BUG(); \ } \ diff --git a/fs/cachefiles/main.c b/fs/cachefiles/main.c index 4bfa8cf43bf5..180edfb45f66 100644 --- a/fs/cachefiles/main.c +++ b/fs/cachefiles/main.c @@ -68,8 +68,7 @@ static int __init cachefiles_init(void) SLAB_HWCACHE_ALIGN, cachefiles_object_init_once); if (!cachefiles_object_jar) { - printk(KERN_NOTICE - "CacheFiles: Failed to allocate an object jar\n"); + pr_notice("Failed to allocate an object jar\n"); goto error_object_jar; } @@ -77,7 +76,7 @@ static int __init cachefiles_init(void) if (ret < 0) goto error_proc; - printk(KERN_INFO "CacheFiles: Loaded\n"); + pr_info("Loaded\n"); return 0; error_proc: @@ -85,7 +84,7 @@ error_proc: error_object_jar: misc_deregister(&cachefiles_dev); error_dev: - kerror("failed to register: %d", ret); + pr_err("failed to register: %d", ret); return ret; } @@ -96,7 +95,7 @@ fs_initcall(cachefiles_init); */ static void __exit cachefiles_exit(void) { - printk(KERN_INFO "CacheFiles: Unloading\n"); + pr_info("Unloading\n"); cachefiles_proc_cleanup(); kmem_cache_destroy(cachefiles_object_jar); diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c index c0a681705104..5bf2b41e66d3 100644 --- a/fs/cachefiles/namei.c +++ b/fs/cachefiles/namei.c @@ -35,22 +35,21 @@ void __cachefiles_printk_object(struct cachefiles_object *object, struct fscache_cookie *cookie; unsigned keylen, loop; - printk(KERN_ERR "%sobject: OBJ%x\n", - prefix, object->fscache.debug_id); - printk(KERN_ERR "%sobjstate=%s fl=%lx wbusy=%x ev=%lx[%lx]\n", + pr_err("%sobject: OBJ%x\n", prefix, object->fscache.debug_id); + pr_err("%sobjstate=%s fl=%lx wbusy=%x ev=%lx[%lx]\n", prefix, object->fscache.state->name, object->fscache.flags, work_busy(&object->fscache.work), object->fscache.events, object->fscache.event_mask); - printk(KERN_ERR "%sops=%u inp=%u exc=%u\n", + pr_err("%sops=%u inp=%u exc=%u\n", prefix, object->fscache.n_ops, object->fscache.n_in_progress, object->fscache.n_exclusive); - printk(KERN_ERR "%sparent=%p\n", + pr_err("%sparent=%p\n", prefix, 
object->fscache.parent); spin_lock(&object->fscache.lock); cookie = object->fscache.cookie; if (cookie) { - printk(KERN_ERR "%scookie=%p [pr=%p nd=%p fl=%lx]\n", + pr_err("%scookie=%p [pr=%p nd=%p fl=%lx]\n", prefix, object->fscache.cookie, object->fscache.cookie->parent, @@ -62,16 +61,16 @@ void __cachefiles_printk_object(struct cachefiles_object *object, else keylen = 0; } else { - printk(KERN_ERR "%scookie=NULL\n", prefix); + pr_err("%scookie=NULL\n", prefix); keylen = 0; } spin_unlock(&object->fscache.lock); if (keylen) { - printk(KERN_ERR "%skey=[%u] '", prefix, keylen); + pr_err("%skey=[%u] '", prefix, keylen); for (loop = 0; loop < keylen; loop++) - printk("%02x", keybuf[loop]); - printk("'\n"); + pr_cont("%02x", keybuf[loop]); + pr_cont("'\n"); } } @@ -131,13 +130,11 @@ found_dentry: dentry); if (fscache_object_is_live(&object->fscache)) { - printk(KERN_ERR "\n"); - printk(KERN_ERR "CacheFiles: Error:" - " Can't preemptively bury live object\n"); + pr_err("\n"); + pr_err("Error: Can't preemptively bury live object\n"); cachefiles_printk_object(object, NULL); } else if (test_and_set_bit(CACHEFILES_OBJECT_BURIED, &object->flags)) { - printk(KERN_ERR "CacheFiles: Error:" - " Object already preemptively buried\n"); + pr_err("Error: Object already preemptively buried\n"); } write_unlock(&cache->active_lock); @@ -160,7 +157,7 @@ try_again: write_lock(&cache->active_lock); if (test_and_set_bit(CACHEFILES_OBJECT_ACTIVE, &object->flags)) { - printk(KERN_ERR "CacheFiles: Error: Object already active\n"); + pr_err("Error: Object already active\n"); cachefiles_printk_object(object, NULL); BUG(); } @@ -193,9 +190,8 @@ try_again: * need to wait for it to be destroyed */ wait_for_old_object: if (fscache_object_is_live(&object->fscache)) { - printk(KERN_ERR "\n"); - printk(KERN_ERR "CacheFiles: Error:" - " Unexpected object collision\n"); + pr_err("\n"); + pr_err("Error: Unexpected object collision\n"); cachefiles_printk_object(object, xobject); BUG(); } @@ -241,9 +237,8 @@ wait_for_old_object: } if (timeout <= 0) { - printk(KERN_ERR "\n"); - printk(KERN_ERR "CacheFiles: Error: Overlong" - " wait for old active object to go away\n"); + pr_err("\n"); + pr_err("Error: Overlong wait for old active object to go away\n"); cachefiles_printk_object(object, xobject); goto requeue; } @@ -548,7 +543,7 @@ lookup_again: next, next->d_inode, next->d_inode->i_ino); } else if (!S_ISDIR(next->d_inode->i_mode)) { - kerror("inode %lu is not a directory", + pr_err("inode %lu is not a directory", next->d_inode->i_ino); ret = -ENOBUFS; goto error; @@ -579,7 +574,7 @@ lookup_again: } else if (!S_ISDIR(next->d_inode->i_mode) && !S_ISREG(next->d_inode->i_mode) ) { - kerror("inode %lu is not a file or directory", + pr_err("inode %lu is not a file or directory", next->d_inode->i_ino); ret = -ENOBUFS; goto error; @@ -773,7 +768,7 @@ struct dentry *cachefiles_get_directory(struct cachefiles_cache *cache, ASSERT(subdir->d_inode); if (!S_ISDIR(subdir->d_inode->i_mode)) { - kerror("%s is not a directory", dirname); + pr_err("%s is not a directory", dirname); ret = -EIO; goto check_error; } @@ -800,13 +795,13 @@ check_error: mkdir_error: mutex_unlock(&dir->d_inode->i_mutex); dput(subdir); - kerror("mkdir %s failed with error %d", dirname, ret); + pr_err("mkdir %s failed with error %d", dirname, ret); return ERR_PTR(ret); lookup_error: mutex_unlock(&dir->d_inode->i_mutex); ret = PTR_ERR(subdir); - kerror("Lookup %s failed with error %d", dirname, ret); + pr_err("Lookup %s failed with error %d", dirname, ret); return ERR_PTR(ret); 
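
The CacheFiles conversion above works because fs/cachefiles/internal.h now defines pr_fmt() before the printk helpers are used, so every pr_err()/pr_info()/pr_warn() call site picks up the "CacheFiles: " prefix automatically and the old hand-rolled kerror() wrapper becomes redundant. The fragment below is only a minimal userspace sketch of that mechanism; the fprintf-based pr_info/pr_err macros are local stand-ins, not the kernel's, and it assumes a compiler that accepts the common ##__VA_ARGS__ extension.

    #include <stdio.h>

    /* Sketch of the pr_fmt() convention: each call site passes a string
     * literal, so the prefix can be pasted on at preprocessing time. */
    #define pr_fmt(fmt) "CacheFiles: " fmt

    #define pr_info(fmt, ...) fprintf(stdout, pr_fmt(fmt), ##__VA_ARGS__)
    #define pr_err(fmt, ...)  fprintf(stderr, pr_fmt(fmt), ##__VA_ARGS__)

    int main(void)
    {
            pr_info("File cache on %s registered\n", "mycache");
            pr_err("mkdir %s failed with error %d\n", "cache", -17);
            return 0;
    }

The coda and devpts hunks further down rely on the same trick, there with pr_fmt(fmt) defined as KBUILD_MODNAME ": " fmt so the prefix tracks the module name.
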
nomem_d_alloc: @@ -896,7 +891,7 @@ lookup_error: if (ret == -EIO) { cachefiles_io_error(cache, "Lookup failed"); } else if (ret != -ENOMEM) { - kerror("Internal error: %d", ret); + pr_err("Internal error: %d", ret); ret = -EIO; } @@ -955,7 +950,7 @@ error: } if (ret != -ENOMEM) { - kerror("Internal error: %d", ret); + pr_err("Internal error: %d", ret); ret = -EIO; } diff --git a/fs/cachefiles/security.c b/fs/cachefiles/security.c index 039b5011d83b..396c18ea2764 100644 --- a/fs/cachefiles/security.c +++ b/fs/cachefiles/security.c @@ -34,9 +34,7 @@ int cachefiles_get_security_ID(struct cachefiles_cache *cache) ret = set_security_override_from_ctx(new, cache->secctx); if (ret < 0) { put_cred(new); - printk(KERN_ERR "CacheFiles:" - " Security denies permission to nominate" - " security context: error %d\n", + pr_err("Security denies permission to nominate security context: error %d\n", ret); goto error; } @@ -59,16 +57,14 @@ static int cachefiles_check_cache_dir(struct cachefiles_cache *cache, ret = security_inode_mkdir(root->d_inode, root, 0); if (ret < 0) { - printk(KERN_ERR "CacheFiles:" - " Security denies permission to make dirs: error %d", + pr_err("Security denies permission to make dirs: error %d", ret); return ret; } ret = security_inode_create(root->d_inode, root, 0); if (ret < 0) - printk(KERN_ERR "CacheFiles:" - " Security denies permission to create files: error %d", + pr_err("Security denies permission to create files: error %d", ret); return ret; diff --git a/fs/cachefiles/xattr.c b/fs/cachefiles/xattr.c index 12b0eef84183..1ad51ffbb275 100644 --- a/fs/cachefiles/xattr.c +++ b/fs/cachefiles/xattr.c @@ -51,7 +51,7 @@ int cachefiles_check_object_type(struct cachefiles_object *object) } if (ret != -EEXIST) { - kerror("Can't set xattr on %*.*s [%lu] (err %d)", + pr_err("Can't set xattr on %*.*s [%lu] (err %d)", dentry->d_name.len, dentry->d_name.len, dentry->d_name.name, dentry->d_inode->i_ino, -ret); @@ -64,7 +64,7 @@ int cachefiles_check_object_type(struct cachefiles_object *object) if (ret == -ERANGE) goto bad_type_length; - kerror("Can't read xattr on %*.*s [%lu] (err %d)", + pr_err("Can't read xattr on %*.*s [%lu] (err %d)", dentry->d_name.len, dentry->d_name.len, dentry->d_name.name, dentry->d_inode->i_ino, -ret); @@ -85,14 +85,14 @@ error: return ret; bad_type_length: - kerror("Cache object %lu type xattr length incorrect", + pr_err("Cache object %lu type xattr length incorrect", dentry->d_inode->i_ino); ret = -EIO; goto error; bad_type: xtype[2] = 0; - kerror("Cache object %*.*s [%lu] type %s not %s", + pr_err("Cache object %*.*s [%lu] type %s not %s", dentry->d_name.len, dentry->d_name.len, dentry->d_name.name, dentry->d_inode->i_ino, xtype, type); @@ -293,7 +293,7 @@ error: return ret; bad_type_length: - kerror("Cache object %lu xattr length incorrect", + pr_err("Cache object %lu xattr length incorrect", dentry->d_inode->i_ino); ret = -EIO; goto error; diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c index b53278c9fd97..65a30e817dd8 100644 --- a/fs/ceph/addr.c +++ b/fs/ceph/addr.c @@ -694,7 +694,7 @@ static int ceph_writepages_start(struct address_space *mapping, (wbc->sync_mode == WB_SYNC_ALL ? "ALL" : "HOLD")); if (fsc->mount_state == CEPH_MOUNT_SHUTDOWN) { - pr_warning("writepage_start %p on forced umount\n", inode); + pr_warn("writepage_start %p on forced umount\n", inode); return -EIO; /* we're in a forced umount, don't write! 
*/ } if (fsc->mount_options->wsize && fsc->mount_options->wsize < wsize) diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c index 16b54aa31f08..5a743ac141ab 100644 --- a/fs/ceph/debugfs.c +++ b/fs/ceph/debugfs.c @@ -71,9 +71,9 @@ static int mdsc_show(struct seq_file *s, void *p) seq_printf(s, "%s", ceph_mds_op_name(req->r_op)); if (req->r_got_unsafe) - seq_printf(s, "\t(unsafe)"); + seq_puts(s, "\t(unsafe)"); else - seq_printf(s, "\t"); + seq_puts(s, "\t"); if (req->r_inode) { seq_printf(s, " #%llx", ceph_ino(req->r_inode)); @@ -119,7 +119,7 @@ static int mdsc_show(struct seq_file *s, void *p) seq_printf(s, " %s", req->r_path2); } - seq_printf(s, "\n"); + seq_puts(s, "\n"); } mutex_unlock(&mdsc->mutex); diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c index 233c6f96910a..e4fff9ff1c27 100644 --- a/fs/ceph/inode.c +++ b/fs/ceph/inode.c @@ -821,7 +821,7 @@ no_change: spin_unlock(&ci->i_ceph_lock); } } else if (cap_fmode >= 0) { - pr_warning("mds issued no caps on %llx.%llx\n", + pr_warn("mds issued no caps on %llx.%llx\n", ceph_vinop(inode)); __ceph_get_fmode(ci, cap_fmode); } diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c index 2b4d093d0563..9a33b98cb000 100644 --- a/fs/ceph/mds_client.c +++ b/fs/ceph/mds_client.c @@ -2218,13 +2218,13 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg) /* dup? */ if ((req->r_got_unsafe && !head->safe) || (req->r_got_safe && head->safe)) { - pr_warning("got a dup %s reply on %llu from mds%d\n", + pr_warn("got a dup %s reply on %llu from mds%d\n", head->safe ? "safe" : "unsafe", tid, mds); mutex_unlock(&mdsc->mutex); goto out; } if (req->r_got_safe && !head->safe) { - pr_warning("got unsafe after safe on %llu from mds%d\n", + pr_warn("got unsafe after safe on %llu from mds%d\n", tid, mds); mutex_unlock(&mdsc->mutex); goto out; @@ -3525,7 +3525,7 @@ static void peer_reset(struct ceph_connection *con) struct ceph_mds_session *s = con->private; struct ceph_mds_client *mdsc = s->s_mdsc; - pr_warning("mds%d closed our session\n", s->s_mds); + pr_warn("mds%d closed our session\n", s->s_mds); send_mds_reconnect(mdsc, s); } diff --git a/fs/ceph/mdsmap.c b/fs/ceph/mdsmap.c index 132b64eeecd4..261531e55e9d 100644 --- a/fs/ceph/mdsmap.c +++ b/fs/ceph/mdsmap.c @@ -62,7 +62,7 @@ struct ceph_mdsmap *ceph_mdsmap_decode(void **p, void *end) ceph_decode_16_safe(p, end, version, bad); if (version > 3) { - pr_warning("got mdsmap version %d > 3, failing", version); + pr_warn("got mdsmap version %d > 3, failing", version); goto bad; } diff --git a/fs/coda/cnode.c b/fs/coda/cnode.c index 911cf30d057d..7740b1c871c1 100644 --- a/fs/coda/cnode.c +++ b/fs/coda/cnode.c @@ -101,7 +101,7 @@ struct inode *coda_cnode_make(struct CodaFid *fid, struct super_block *sb) inode = coda_iget(sb, fid, &attr); if (IS_ERR(inode)) - printk("coda_cnode_make: coda_iget failed\n"); + pr_warn("%s: coda_iget failed\n", __func__); return inode; } @@ -137,7 +137,7 @@ struct inode *coda_fid_to_inode(struct CodaFid *fid, struct super_block *sb) unsigned long hash = coda_f2i(fid); if ( !sb ) { - printk("coda_fid_to_inode: no sb!\n"); + pr_warn("%s: no sb!\n", __func__); return NULL; } diff --git a/fs/coda/coda_linux.h b/fs/coda/coda_linux.h index e7550cb9fb74..d42b725b1d21 100644 --- a/fs/coda/coda_linux.h +++ b/fs/coda/coda_linux.h @@ -12,6 +12,12 @@ #ifndef _LINUX_CODA_FS #define _LINUX_CODA_FS +#ifdef pr_fmt +#undef pr_fmt +#endif + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/kernel.h> #include <linux/param.h> #include <linux/mm.h> @@ -63,7 +69,7 @@ 
void coda_sysctl_clean(void); else \ ptr = (cast)vzalloc((unsigned long) size); \ if (!ptr) \ - printk("kernel malloc returns 0 at %s:%d\n", __FILE__, __LINE__); \ + pr_warn("kernel malloc returns 0 at %s:%d\n", __FILE__, __LINE__); \ } while (0) diff --git a/fs/coda/dir.c b/fs/coda/dir.c index 5efbb5ee0adc..cd8a63238b11 100644 --- a/fs/coda/dir.c +++ b/fs/coda/dir.c @@ -102,7 +102,7 @@ static struct dentry *coda_lookup(struct inode *dir, struct dentry *entry, unsig int type = 0; if (length > CODA_MAXNAMLEN) { - printk(KERN_ERR "name too long: lookup, %s (%*s)\n", + pr_err("name too long: lookup, %s (%*s)\n", coda_i2s(dir), (int)length, name); return ERR_PTR(-ENAMETOOLONG); } @@ -453,23 +453,23 @@ static int coda_venus_readdir(struct file *coda_file, struct dir_context *ctx) ret = kernel_read(host_file, ctx->pos - 2, (char *)vdir, sizeof(*vdir)); if (ret < 0) { - printk(KERN_ERR "coda readdir: read dir %s failed %d\n", - coda_f2s(&cii->c_fid), ret); + pr_err("%s: read dir %s failed %d\n", + __func__, coda_f2s(&cii->c_fid), ret); break; } if (ret == 0) break; /* end of directory file reached */ /* catch truncated reads */ if (ret < vdir_size || ret < vdir_size + vdir->d_namlen) { - printk(KERN_ERR "coda readdir: short read on %s\n", - coda_f2s(&cii->c_fid)); + pr_err("%s: short read on %s\n", + __func__, coda_f2s(&cii->c_fid)); ret = -EBADF; break; } /* validate whether the directory file actually makes sense */ if (vdir->d_reclen < vdir_size + vdir->d_namlen) { - printk(KERN_ERR "coda readdir: invalid dir %s\n", - coda_f2s(&cii->c_fid)); + pr_err("%s: invalid dir %s\n", + __func__, coda_f2s(&cii->c_fid)); ret = -EBADF; break; } @@ -589,8 +589,8 @@ int coda_revalidate_inode(struct inode *inode) coda_vattr_to_iattr(inode, &attr); if ((old_mode & S_IFMT) != (inode->i_mode & S_IFMT)) { - printk("Coda: inode %ld, fid %s changed type!\n", - inode->i_ino, coda_f2s(&(cii->c_fid))); + pr_warn("inode %ld, fid %s changed type!\n", + inode->i_ino, coda_f2s(&(cii->c_fid))); } /* the following can happen when a local fid is replaced diff --git a/fs/coda/inode.c b/fs/coda/inode.c index d9c7751f10ac..fe3afb2de880 100644 --- a/fs/coda/inode.c +++ b/fs/coda/inode.c @@ -119,12 +119,12 @@ static int get_device_index(struct coda_mount_data *data) int idx; if (data == NULL) { - printk("coda_read_super: Bad mount data\n"); + pr_warn("%s: Bad mount data\n", __func__); return -1; } if (data->version != CODA_MOUNT_VERSION) { - printk("coda_read_super: Bad mount version\n"); + pr_warn("%s: Bad mount version\n", __func__); return -1; } @@ -141,13 +141,13 @@ static int get_device_index(struct coda_mount_data *data) fdput(f); if (idx < 0 || idx >= MAX_CODADEVS) { - printk("coda_read_super: Bad minor number\n"); + pr_warn("%s: Bad minor number\n", __func__); return -1; } return idx; Ebadf: - printk("coda_read_super: Bad file\n"); + pr_warn("%s: Bad file\n", __func__); return -1; } @@ -168,19 +168,19 @@ static int coda_fill_super(struct super_block *sb, void *data, int silent) if(idx == -1) idx = 0; - printk(KERN_INFO "coda_read_super: device index: %i\n", idx); + pr_info("%s: device index: %i\n", __func__, idx); vc = &coda_comms[idx]; mutex_lock(&vc->vc_mutex); if (!vc->vc_inuse) { - printk("coda_read_super: No pseudo device\n"); + pr_warn("%s: No pseudo device\n", __func__); error = -EINVAL; goto unlock_out; } if (vc->vc_sb) { - printk("coda_read_super: Device already mounted\n"); + pr_warn("%s: Device already mounted\n", __func__); error = -EBUSY; goto unlock_out; } @@ -204,22 +204,23 @@ static int 
coda_fill_super(struct super_block *sb, void *data, int silent) /* get root fid from Venus: this needs the root inode */ error = venus_rootfid(sb, &fid); if ( error ) { - printk("coda_read_super: coda_get_rootfid failed with %d\n", - error); + pr_warn("%s: coda_get_rootfid failed with %d\n", + __func__, error); goto error; } - printk("coda_read_super: rootfid is %s\n", coda_f2s(&fid)); + pr_info("%s: rootfid is %s\n", __func__, coda_f2s(&fid)); /* make root inode */ root = coda_cnode_make(&fid, sb); if (IS_ERR(root)) { error = PTR_ERR(root); - printk("Failure of coda_cnode_make for root: error %d\n", error); + pr_warn("Failure of coda_cnode_make for root: error %d\n", + error); goto error; } - printk("coda_read_super: rootinode is %ld dev %s\n", - root->i_ino, root->i_sb->s_id); + pr_info("%s: rootinode is %ld dev %s\n", + __func__, root->i_ino, root->i_sb->s_id); sb->s_root = d_make_root(root); if (!sb->s_root) { error = -EINVAL; @@ -246,7 +247,7 @@ static void coda_put_super(struct super_block *sb) sb->s_fs_info = NULL; mutex_unlock(&vcp->vc_mutex); - printk("Coda: Bye bye.\n"); + pr_info("Bye bye.\n"); } static void coda_evict_inode(struct inode *inode) diff --git a/fs/coda/psdev.c b/fs/coda/psdev.c index ebc2bae6c289..5c1e4242368b 100644 --- a/fs/coda/psdev.c +++ b/fs/coda/psdev.c @@ -114,14 +114,14 @@ static ssize_t coda_psdev_write(struct file *file, const char __user *buf, int size = sizeof(*dcbuf); if ( nbytes < sizeof(struct coda_out_hdr) ) { - printk("coda_downcall opc %d uniq %d, not enough!\n", - hdr.opcode, hdr.unique); + pr_warn("coda_downcall opc %d uniq %d, not enough!\n", + hdr.opcode, hdr.unique); count = nbytes; goto out; } if ( nbytes > size ) { - printk("Coda: downcall opc %d, uniq %d, too much!", - hdr.opcode, hdr.unique); + pr_warn("downcall opc %d, uniq %d, too much!", + hdr.opcode, hdr.unique); nbytes = size; } CODA_ALLOC(dcbuf, union outputArgs *, nbytes); @@ -136,7 +136,8 @@ static ssize_t coda_psdev_write(struct file *file, const char __user *buf, CODA_FREE(dcbuf, nbytes); if (error) { - printk("psdev_write: coda_downcall error: %d\n", error); + pr_warn("%s: coda_downcall error: %d\n", + __func__, error); retval = error; goto out; } @@ -157,16 +158,17 @@ static ssize_t coda_psdev_write(struct file *file, const char __user *buf, mutex_unlock(&vcp->vc_mutex); if (!req) { - printk("psdev_write: msg (%d, %d) not found\n", - hdr.opcode, hdr.unique); + pr_warn("%s: msg (%d, %d) not found\n", + __func__, hdr.opcode, hdr.unique); retval = -ESRCH; goto out; } /* move data into response buffer. */ if (req->uc_outSize < nbytes) { - printk("psdev_write: too much cnt: %d, cnt: %ld, opc: %d, uniq: %d.\n", - req->uc_outSize, (long)nbytes, hdr.opcode, hdr.unique); + pr_warn("%s: too much cnt: %d, cnt: %ld, opc: %d, uniq: %d.\n", + __func__, req->uc_outSize, (long)nbytes, + hdr.opcode, hdr.unique); nbytes = req->uc_outSize; /* don't have more space! 
*/ } if (copy_from_user(req->uc_data, buf, nbytes)) { @@ -240,8 +242,8 @@ static ssize_t coda_psdev_read(struct file * file, char __user * buf, /* Move the input args into userspace */ count = req->uc_inSize; if (nbytes < req->uc_inSize) { - printk ("psdev_read: Venus read %ld bytes of %d in message\n", - (long)nbytes, req->uc_inSize); + pr_warn("%s: Venus read %ld bytes of %d in message\n", + __func__, (long)nbytes, req->uc_inSize); count = nbytes; } @@ -305,7 +307,7 @@ static int coda_psdev_release(struct inode * inode, struct file * file) struct upc_req *req, *tmp; if (!vcp || !vcp->vc_inuse ) { - printk("psdev_release: Not open.\n"); + pr_warn("%s: Not open.\n", __func__); return -1; } @@ -354,8 +356,8 @@ static int init_coda_psdev(void) { int i, err = 0; if (register_chrdev(CODA_PSDEV_MAJOR, "coda", &coda_psdev_fops)) { - printk(KERN_ERR "coda_psdev: unable to get major %d\n", - CODA_PSDEV_MAJOR); + pr_err("%s: unable to get major %d\n", + __func__, CODA_PSDEV_MAJOR); return -EIO; } coda_psdev_class = class_create(THIS_MODULE, "coda"); @@ -393,13 +395,13 @@ static int __init init_coda(void) goto out2; status = init_coda_psdev(); if ( status ) { - printk("Problem (%d) in init_coda_psdev\n", status); + pr_warn("Problem (%d) in init_coda_psdev\n", status); goto out1; } status = register_filesystem(&coda_fs_type); if (status) { - printk("coda: failed to register filesystem!\n"); + pr_warn("failed to register filesystem!\n"); goto out; } return 0; @@ -420,9 +422,8 @@ static void __exit exit_coda(void) int err, i; err = unregister_filesystem(&coda_fs_type); - if ( err != 0 ) { - printk("coda: failed to unregister filesystem\n"); - } + if (err != 0) + pr_warn("failed to unregister filesystem\n"); for (i = 0; i < MAX_CODADEVS; i++) device_destroy(coda_psdev_class, MKDEV(CODA_PSDEV_MAJOR, i)); class_destroy(coda_psdev_class); diff --git a/fs/coda/sysctl.c b/fs/coda/sysctl.c index af56ad56a89a..34218a8a28cd 100644 --- a/fs/coda/sysctl.c +++ b/fs/coda/sysctl.c @@ -14,7 +14,7 @@ #ifdef CONFIG_SYSCTL static struct ctl_table_header *fs_table_header; -static ctl_table coda_table[] = { +static struct ctl_table coda_table[] = { { .procname = "timeout", .data = &coda_timeout, @@ -39,7 +39,7 @@ static ctl_table coda_table[] = { {} }; -static ctl_table fs_table[] = { +static struct ctl_table fs_table[] = { { .procname = "coda", .mode = 0555, diff --git a/fs/coda/upcall.c b/fs/coda/upcall.c index 3a731976dc5e..21fcf8dcb9cd 100644 --- a/fs/coda/upcall.c +++ b/fs/coda/upcall.c @@ -508,8 +508,8 @@ int venus_pioctl(struct super_block *sb, struct CodaFid *fid, inp->coda_ioctl.data = (char *)(INSIZE(ioctl)); /* get the data out of user space */ - if ( copy_from_user((char*)inp + (long)inp->coda_ioctl.data, - data->vi.in, data->vi.in_size) ) { + if (copy_from_user((char *)inp + (long)inp->coda_ioctl.data, + data->vi.in, data->vi.in_size)) { error = -EINVAL; goto exit; } @@ -518,8 +518,8 @@ int venus_pioctl(struct super_block *sb, struct CodaFid *fid, &outsize, inp); if (error) { - printk("coda_pioctl: Venus returns: %d for %s\n", - error, coda_f2s(fid)); + pr_warn("%s: Venus returns: %d for %s\n", + __func__, error, coda_f2s(fid)); goto exit; } @@ -675,7 +675,7 @@ static int coda_upcall(struct venus_comm *vcp, mutex_lock(&vcp->vc_mutex); if (!vcp->vc_inuse) { - printk(KERN_NOTICE "coda: Venus dead, not sending upcall\n"); + pr_notice("Venus dead, not sending upcall\n"); error = -ENXIO; goto exit; } @@ -725,7 +725,7 @@ static int coda_upcall(struct venus_comm *vcp, error = -EINTR; if ((req->uc_flags & 
CODA_REQ_ABORT) || !signal_pending(current)) { - printk(KERN_WARNING "coda: Unexpected interruption.\n"); + pr_warn("Unexpected interruption.\n"); goto exit; } @@ -735,7 +735,7 @@ static int coda_upcall(struct venus_comm *vcp, /* Venus saw the upcall, make sure we can send interrupt signal */ if (!vcp->vc_inuse) { - printk(KERN_INFO "coda: Venus dead, not sending signal.\n"); + pr_info("Venus dead, not sending signal.\n"); goto exit; } diff --git a/fs/dcache.c b/fs/dcache.c index be2bea834bf4..1792d6075b4f 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -150,7 +150,7 @@ static long get_nr_dentry_unused(void) return sum < 0 ? 0 : sum; } -int proc_nr_dentry(ctl_table *table, int write, void __user *buffer, +int proc_nr_dentry(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { dentry_stat.nr_dentry = get_nr_dentry(); diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c index c71038079b47..cfe8466f7fef 100644 --- a/fs/devpts/inode.c +++ b/fs/devpts/inode.c @@ -10,6 +10,8 @@ * * ------------------------------------------------------------------------- */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/module.h> #include <linux/init.h> #include <linux/fs.h> @@ -148,10 +150,10 @@ static inline struct super_block *pts_sb_from_inode(struct inode *inode) /* * parse_mount_options(): - * Set @opts to mount options specified in @data. If an option is not - * specified in @data, set it to its default value. The exception is - * 'newinstance' option which can only be set/cleared on a mount (i.e. - * cannot be changed during remount). + * Set @opts to mount options specified in @data. If an option is not + * specified in @data, set it to its default value. The exception is + * 'newinstance' option which can only be set/cleared on a mount (i.e. + * cannot be changed during remount). * * Note: @data may be NULL (in which case all options are set to default). 
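
Several of the affs and coda hunks above also swap hard-coded function names inside log strings for the __func__ identifier, so a message can no longer go stale when its function is renamed. A small, hedged userspace sketch of the idiom follows; the pr_debug macro here is a local stand-in, not the kernel's.

    #include <stdio.h>

    #define pr_debug(fmt, ...) printf(fmt, ##__VA_ARGS__)

    static void read_super(const char *options)
    {
            /* __func__ expands to "read_super" at compile time, so the
             * message tracks the real function name automatically. */
            pr_debug("%s(%s)\n", __func__, options ? options : "no options");
    }

    int main(void)
    {
            read_super("uid=0,gid=0");
            read_super(NULL);
            return 0;
    }
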
*/ @@ -225,7 +227,7 @@ static int parse_mount_options(char *data, int op, struct pts_mount_opts *opts) break; #endif default: - printk(KERN_ERR "devpts: called with bogus options\n"); + pr_err("called with bogus options\n"); return -EINVAL; } } @@ -261,7 +263,7 @@ static int mknod_ptmx(struct super_block *sb) dentry = d_alloc_name(root, "ptmx"); if (!dentry) { - printk(KERN_NOTICE "Unable to alloc dentry for ptmx node\n"); + pr_err("Unable to alloc dentry for ptmx node\n"); goto out; } @@ -270,7 +272,7 @@ static int mknod_ptmx(struct super_block *sb) */ inode = new_inode(sb); if (!inode) { - printk(KERN_ERR "Unable to alloc inode for ptmx node\n"); + pr_err("Unable to alloc inode for ptmx node\n"); dput(dentry); goto out; } @@ -303,7 +305,7 @@ static void update_ptmx_mode(struct pts_fs_info *fsi) #else static inline void update_ptmx_mode(struct pts_fs_info *fsi) { - return; + return; } #endif @@ -333,9 +335,11 @@ static int devpts_show_options(struct seq_file *seq, struct dentry *root) struct pts_mount_opts *opts = &fsi->mount_opts; if (opts->setuid) - seq_printf(seq, ",uid=%u", from_kuid_munged(&init_user_ns, opts->uid)); + seq_printf(seq, ",uid=%u", + from_kuid_munged(&init_user_ns, opts->uid)); if (opts->setgid) - seq_printf(seq, ",gid=%u", from_kgid_munged(&init_user_ns, opts->gid)); + seq_printf(seq, ",gid=%u", + from_kgid_munged(&init_user_ns, opts->gid)); seq_printf(seq, ",mode=%03o", opts->mode); #ifdef CONFIG_DEVPTS_MULTIPLE_INSTANCES seq_printf(seq, ",ptmxmode=%03o", opts->ptmxmode); @@ -396,7 +400,7 @@ devpts_fill_super(struct super_block *s, void *data, int silent) if (s->s_root) return 0; - printk(KERN_ERR "devpts: get root dentry failed\n"); + pr_err("get root dentry failed\n"); fail: return -ENOMEM; diff --git a/fs/dlm/config.c b/fs/dlm/config.c index 76feb4b60fa6..d521bddf876d 100644 --- a/fs/dlm/config.c +++ b/fs/dlm/config.c @@ -157,11 +157,13 @@ static ssize_t cluster_set(struct dlm_cluster *cl, unsigned int *cl_field, const char *buf, size_t len) { unsigned int x; + int rc; if (!capable(CAP_SYS_ADMIN)) return -EPERM; - - x = simple_strtoul(buf, NULL, 0); + rc = kstrtouint(buf, 0, &x); + if (rc) + return rc; if (check_zero && !x) return -EINVAL; @@ -730,7 +732,10 @@ static ssize_t comm_nodeid_read(struct dlm_comm *cm, char *buf) static ssize_t comm_nodeid_write(struct dlm_comm *cm, const char *buf, size_t len) { - cm->nodeid = simple_strtol(buf, NULL, 0); + int rc = kstrtoint(buf, 0, &cm->nodeid); + + if (rc) + return rc; return len; } @@ -742,7 +747,10 @@ static ssize_t comm_local_read(struct dlm_comm *cm, char *buf) static ssize_t comm_local_write(struct dlm_comm *cm, const char *buf, size_t len) { - cm->local= simple_strtol(buf, NULL, 0); + int rc = kstrtoint(buf, 0, &cm->local); + + if (rc) + return rc; if (cm->local && !local_comm) local_comm = cm; return len; @@ -846,7 +854,10 @@ static ssize_t node_nodeid_write(struct dlm_node *nd, const char *buf, size_t len) { uint32_t seq = 0; - nd->nodeid = simple_strtol(buf, NULL, 0); + int rc = kstrtoint(buf, 0, &nd->nodeid); + + if (rc) + return rc; dlm_comm_seq(nd->nodeid, &seq); nd->comm_seq = seq; return len; @@ -860,7 +871,10 @@ static ssize_t node_weight_read(struct dlm_node *nd, char *buf) static ssize_t node_weight_write(struct dlm_node *nd, const char *buf, size_t len) { - nd->weight = simple_strtol(buf, NULL, 0); + int rc = kstrtoint(buf, 0, &nd->weight); + + if (rc) + return rc; return len; } diff --git a/fs/dlm/debug_fs.c b/fs/dlm/debug_fs.c index b969deef9ebb..8d77ba7b1756 100644 --- a/fs/dlm/debug_fs.c +++ 
b/fs/dlm/debug_fs.c @@ -68,7 +68,7 @@ static int print_format1_lock(struct seq_file *s, struct dlm_lkb *lkb, if (lkb->lkb_wait_type) seq_printf(s, " wait_type: %d", lkb->lkb_wait_type); - return seq_printf(s, "\n"); + return seq_puts(s, "\n"); } static int print_format1(struct dlm_rsb *res, struct seq_file *s) @@ -92,31 +92,31 @@ static int print_format1(struct dlm_rsb *res, struct seq_file *s) } if (res->res_nodeid > 0) - rv = seq_printf(s, "\" \nLocal Copy, Master is node %d\n", + rv = seq_printf(s, "\"\nLocal Copy, Master is node %d\n", res->res_nodeid); else if (res->res_nodeid == 0) - rv = seq_printf(s, "\" \nMaster Copy\n"); + rv = seq_puts(s, "\"\nMaster Copy\n"); else if (res->res_nodeid == -1) - rv = seq_printf(s, "\" \nLooking up master (lkid %x)\n", + rv = seq_printf(s, "\"\nLooking up master (lkid %x)\n", res->res_first_lkid); else - rv = seq_printf(s, "\" \nInvalid master %d\n", + rv = seq_printf(s, "\"\nInvalid master %d\n", res->res_nodeid); if (rv) goto out; /* Print the LVB: */ if (res->res_lvbptr) { - seq_printf(s, "LVB: "); + seq_puts(s, "LVB: "); for (i = 0; i < lvblen; i++) { if (i == lvblen / 2) - seq_printf(s, "\n "); + seq_puts(s, "\n "); seq_printf(s, "%02x ", (unsigned char) res->res_lvbptr[i]); } if (rsb_flag(res, RSB_VALNOTVALID)) - seq_printf(s, " (INVALID)"); - rv = seq_printf(s, "\n"); + seq_puts(s, " (INVALID)"); + rv = seq_puts(s, "\n"); if (rv) goto out; } @@ -133,21 +133,21 @@ static int print_format1(struct dlm_rsb *res, struct seq_file *s) } /* Print the locks attached to this resource */ - seq_printf(s, "Granted Queue\n"); + seq_puts(s, "Granted Queue\n"); list_for_each_entry(lkb, &res->res_grantqueue, lkb_statequeue) { rv = print_format1_lock(s, lkb, res); if (rv) goto out; } - seq_printf(s, "Conversion Queue\n"); + seq_puts(s, "Conversion Queue\n"); list_for_each_entry(lkb, &res->res_convertqueue, lkb_statequeue) { rv = print_format1_lock(s, lkb, res); if (rv) goto out; } - seq_printf(s, "Waiting Queue\n"); + seq_puts(s, "Waiting Queue\n"); list_for_each_entry(lkb, &res->res_waitqueue, lkb_statequeue) { rv = print_format1_lock(s, lkb, res); if (rv) @@ -157,13 +157,13 @@ static int print_format1(struct dlm_rsb *res, struct seq_file *s) if (list_empty(&res->res_lookup)) goto out; - seq_printf(s, "Lookup Queue\n"); + seq_puts(s, "Lookup Queue\n"); list_for_each_entry(lkb, &res->res_lookup, lkb_rsb_lookup) { rv = seq_printf(s, "%08x %s", lkb->lkb_id, print_lockmode(lkb->lkb_rqmode)); if (lkb->lkb_wait_type) seq_printf(s, " wait_type: %d", lkb->lkb_wait_type); - rv = seq_printf(s, "\n"); + rv = seq_puts(s, "\n"); } out: unlock_rsb(res); @@ -300,7 +300,7 @@ static int print_format3(struct dlm_rsb *r, struct seq_file *s) else seq_printf(s, " %02x", (unsigned char)r->res_name[i]); } - rv = seq_printf(s, "\n"); + rv = seq_puts(s, "\n"); if (rv) goto out; @@ -311,7 +311,7 @@ static int print_format3(struct dlm_rsb *r, struct seq_file *s) for (i = 0; i < lvblen; i++) seq_printf(s, " %02x", (unsigned char)r->res_lvbptr[i]); - rv = seq_printf(s, "\n"); + rv = seq_puts(s, "\n"); if (rv) goto out; @@ -377,7 +377,7 @@ static int print_format4(struct dlm_rsb *r, struct seq_file *s) else seq_printf(s, " %02x", (unsigned char)r->res_name[i]); } - rv = seq_printf(s, "\n"); + rv = seq_puts(s, "\n"); out: unlock_rsb(r); return rv; diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c index 04d6398c1f1c..f3e72787e7f9 100644 --- a/fs/dlm/lockspace.c +++ b/fs/dlm/lockspace.c @@ -35,8 +35,11 @@ static struct task_struct * scand_task; static ssize_t dlm_control_store(struct 
dlm_ls *ls, const char *buf, size_t len) { ssize_t ret = len; - int n = simple_strtol(buf, NULL, 0); + int n; + int rc = kstrtoint(buf, 0, &n); + if (rc) + return rc; ls = dlm_find_lockspace_local(ls->ls_local_handle); if (!ls) return -EINVAL; @@ -57,7 +60,10 @@ static ssize_t dlm_control_store(struct dlm_ls *ls, const char *buf, size_t len) static ssize_t dlm_event_store(struct dlm_ls *ls, const char *buf, size_t len) { - ls->ls_uevent_result = simple_strtol(buf, NULL, 0); + int rc = kstrtoint(buf, 0, &ls->ls_uevent_result); + + if (rc) + return rc; set_bit(LSFL_UEVENT_WAIT, &ls->ls_flags); wake_up(&ls->ls_uevent_wait); return len; @@ -70,7 +76,10 @@ static ssize_t dlm_id_show(struct dlm_ls *ls, char *buf) static ssize_t dlm_id_store(struct dlm_ls *ls, const char *buf, size_t len) { - ls->ls_global_id = simple_strtoul(buf, NULL, 0); + int rc = kstrtouint(buf, 0, &ls->ls_global_id); + + if (rc) + return rc; return len; } @@ -81,7 +90,11 @@ static ssize_t dlm_nodir_show(struct dlm_ls *ls, char *buf) static ssize_t dlm_nodir_store(struct dlm_ls *ls, const char *buf, size_t len) { - int val = simple_strtoul(buf, NULL, 0); + int val; + int rc = kstrtoint(buf, 0, &val); + + if (rc) + return rc; if (val == 1) set_bit(LSFL_NODIR, &ls->ls_flags); return len; diff --git a/fs/drop_caches.c b/fs/drop_caches.c index 9280202e488c..1de7294aad20 100644 --- a/fs/drop_caches.c +++ b/fs/drop_caches.c @@ -50,7 +50,7 @@ static void drop_slab(void) } while (nr_objects > 10); } -int drop_caches_sysctl_handler(ctl_table *table, int write, +int drop_caches_sysctl_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos) { int ret; diff --git a/fs/eventpoll.c b/fs/eventpoll.c index af903128891c..b73e0621ce9e 100644 --- a/fs/eventpoll.c +++ b/fs/eventpoll.c @@ -293,7 +293,7 @@ static LIST_HEAD(tfile_check_list); static long zero; static long long_max = LONG_MAX; -ctl_table epoll_table[] = { +struct ctl_table epoll_table[] = { { .procname = "max_user_watches", .data = &max_user_watches, diff --git a/fs/exofs/Kconfig.ore b/fs/exofs/Kconfig.ore index 1ca7fb7b6ba8..2daf2329c28d 100644 --- a/fs/exofs/Kconfig.ore +++ b/fs/exofs/Kconfig.ore @@ -9,4 +9,6 @@ config ORE tristate depends on EXOFS_FS || PNFS_OBJLAYOUT select ASYNC_XOR + select RAID6_PQ + select ASYNC_PQ default SCSI_OSD_ULD diff --git a/fs/exofs/ore.c b/fs/exofs/ore.c index dae884694bd9..cfc0205d62c4 100644 --- a/fs/exofs/ore.c +++ b/fs/exofs/ore.c @@ -58,9 +58,12 @@ int ore_verify_layout(unsigned total_comps, struct ore_layout *layout) layout->parity = 1; break; case PNFS_OSD_RAID_PQ: + layout->parity = 2; + break; case PNFS_OSD_RAID_4: default: - ORE_ERR("Only RAID_0/5 for now\n"); + ORE_ERR("Only RAID_0/5/6 for now received-enum=%d\n", + layout->raid_algorithm); return -EINVAL; } if (0 != (layout->stripe_unit & ~PAGE_MASK)) { @@ -112,6 +115,8 @@ int ore_verify_layout(unsigned total_comps, struct ore_layout *layout) layout->max_io_length /= stripe_length; layout->max_io_length *= stripe_length; } + ORE_DBGMSG("max_io_length=0x%lx\n", layout->max_io_length); + return 0; } EXPORT_SYMBOL(ore_verify_layout); @@ -545,21 +550,24 @@ void ore_calc_stripe_info(struct ore_layout *layout, u64 file_offset, /* "H - (N * U)" is just "H % U" so it's bound to u32 */ u32 C = (u32)(H - (N * U)) / stripe_unit + G * group_width; + u32 first_dev = C - C % group_width; div_u64_rem(file_offset, stripe_unit, &si->unit_off); si->obj_offset = si->unit_off + (N * stripe_unit) + (M * group_depth * stripe_unit); + si->cur_comp = C - first_dev; + 
si->cur_pg = si->unit_off / PAGE_SIZE; if (parity) { u32 LCMdP = lcm(group_width, parity) / parity; /* R = N % LCMdP; */ u32 RxP = (N % LCMdP) * parity; - u32 first_dev = C - C % group_width; si->par_dev = (group_width + group_width - parity - RxP) % group_width + first_dev; - si->dev = (group_width + C - RxP) % group_width + first_dev; + si->dev = (group_width + group_width + C - RxP) % + group_width + first_dev; si->bytes_in_stripe = U; si->first_stripe_start = M * S + G * T + N * U; } else { @@ -649,6 +657,43 @@ out: /* we fail the complete unit on an error eg don't advance return ret; } +static int _add_parity_units(struct ore_io_state *ios, + struct ore_striping_info *si, + unsigned dev, unsigned first_dev, + unsigned mirrors_p1, unsigned devs_in_group, + unsigned cur_len) +{ + unsigned do_parity; + int ret = 0; + + for (do_parity = ios->layout->parity; do_parity; --do_parity) { + struct ore_per_dev_state *per_dev; + + per_dev = &ios->per_dev[dev - first_dev]; + if (!per_dev->length && !per_dev->offset) { + /* Only/always the parity unit of the first + * stripe will be empty. So this is a chance to + * initialize the per_dev info. + */ + per_dev->dev = dev; + per_dev->offset = si->obj_offset - si->unit_off; + } + + ret = _ore_add_parity_unit(ios, si, per_dev, cur_len, + do_parity == 1); + if (unlikely(ret)) + break; + + if (do_parity != 1) { + dev = ((dev + mirrors_p1) % devs_in_group) + first_dev; + si->cur_comp = (si->cur_comp + 1) % + ios->layout->group_width; + } + } + + return ret; +} + static int _prepare_for_striping(struct ore_io_state *ios) { struct ore_striping_info *si = &ios->si; @@ -658,7 +703,6 @@ static int _prepare_for_striping(struct ore_io_state *ios) unsigned devs_in_group = group_width * mirrors_p1; unsigned dev = si->dev; unsigned first_dev = dev - (dev % devs_in_group); - unsigned dev_order; unsigned cur_pg = ios->pages_consumed; u64 length = ios->length; int ret = 0; @@ -670,16 +714,13 @@ static int _prepare_for_striping(struct ore_io_state *ios) BUG_ON(length > si->length); - dev_order = _dev_order(devs_in_group, mirrors_p1, si->par_dev, dev); - si->cur_comp = dev_order; - si->cur_pg = si->unit_off / PAGE_SIZE; - while (length) { - unsigned comp = dev - first_dev; - struct ore_per_dev_state *per_dev = &ios->per_dev[comp]; + struct ore_per_dev_state *per_dev = + &ios->per_dev[dev - first_dev]; unsigned cur_len, page_off = 0; - if (!per_dev->length) { + if (!per_dev->length && !per_dev->offset) { + /* First time initialize the per_dev info. */ per_dev->dev = dev; if (dev == si->dev) { WARN_ON(dev == si->par_dev); @@ -688,13 +729,7 @@ static int _prepare_for_striping(struct ore_io_state *ios) page_off = si->unit_off & ~PAGE_MASK; BUG_ON(page_off && (page_off != ios->pgbase)); } else { - if (si->cur_comp > dev_order) - per_dev->offset = - si->obj_offset - si->unit_off; - else /* si->cur_comp < dev_order */ - per_dev->offset = - si->obj_offset + stripe_unit - - si->unit_off; + per_dev->offset = si->obj_offset - si->unit_off; cur_len = stripe_unit; } } else { @@ -708,11 +743,9 @@ static int _prepare_for_striping(struct ore_io_state *ios) if (unlikely(ret)) goto out; - dev += mirrors_p1; - dev = (dev % devs_in_group) + first_dev; - length -= cur_len; + dev = ((dev + mirrors_p1) % devs_in_group) + first_dev; si->cur_comp = (si->cur_comp + 1) % group_width; if (unlikely((dev == si->par_dev) || (!length && ios->sp2d))) { if (!length && ios->sp2d) { @@ -720,23 +753,16 @@ static int _prepare_for_striping(struct ore_io_state *ios) * stripe. then operate on parity dev. 
*/ dev = si->par_dev; - } - if (ios->sp2d) - /* In writes cur_len just means if it's the - * last one. See _ore_add_parity_unit. - */ - cur_len = length; - per_dev = &ios->per_dev[dev - first_dev]; - if (!per_dev->length) { - /* Only/always the parity unit of the first - * stripe will be empty. So this is a chance to - * initialize the per_dev info. - */ - per_dev->dev = dev; - per_dev->offset = si->obj_offset - si->unit_off; + /* If last stripe operate on parity comp */ + si->cur_comp = group_width - ios->layout->parity; } - ret = _ore_add_parity_unit(ios, si, per_dev, cur_len); + /* In writes cur_len just means if it's the + * last one. See _ore_add_parity_unit. + */ + ret = _add_parity_units(ios, si, dev, first_dev, + mirrors_p1, devs_in_group, + ios->sp2d ? length : cur_len); if (unlikely(ret)) goto out; @@ -747,6 +773,8 @@ static int _prepare_for_striping(struct ore_io_state *ios) /* Next stripe, start fresh */ si->cur_comp = 0; si->cur_pg = 0; + si->obj_offset += cur_len; + si->unit_off = 0; } } out: diff --git a/fs/exofs/ore_raid.c b/fs/exofs/ore_raid.c index 4e2c032ab8a1..7f20f25c232c 100644 --- a/fs/exofs/ore_raid.c +++ b/fs/exofs/ore_raid.c @@ -218,22 +218,28 @@ static unsigned _sp2d_max_pg(struct __stripe_pages_2d *sp2d) static void _gen_xor_unit(struct __stripe_pages_2d *sp2d) { unsigned p; + unsigned tx_flags = ASYNC_TX_ACK; + + if (sp2d->parity == 1) + tx_flags |= ASYNC_TX_XOR_ZERO_DST; + for (p = 0; p < sp2d->pages_in_unit; p++) { struct __1_page_stripe *_1ps = &sp2d->_1p_stripes[p]; if (!_1ps->write_count) continue; - init_async_submit(&_1ps->submit, - ASYNC_TX_XOR_ZERO_DST | ASYNC_TX_ACK, - NULL, - NULL, NULL, - (addr_conv_t *)_1ps->scribble); - - /* TODO: raid6 */ - _1ps->tx = async_xor(_1ps->pages[sp2d->data_devs], _1ps->pages, - 0, sp2d->data_devs, PAGE_SIZE, - &_1ps->submit); + init_async_submit(&_1ps->submit, tx_flags, + NULL, NULL, NULL, (addr_conv_t *)_1ps->scribble); + + if (sp2d->parity == 1) + _1ps->tx = async_xor(_1ps->pages[sp2d->data_devs], + _1ps->pages, 0, sp2d->data_devs, + PAGE_SIZE, &_1ps->submit); + else /* parity == 2 */ + _1ps->tx = async_gen_syndrome(_1ps->pages, 0, + sp2d->data_devs + sp2d->parity, + PAGE_SIZE, &_1ps->submit); } for (p = 0; p < sp2d->pages_in_unit; p++) { @@ -404,9 +410,8 @@ static int _add_to_r4w_last_page(struct ore_io_state *ios, u64 *offset) ore_calc_stripe_info(ios->layout, *offset, 0, &si); - p = si.unit_off / PAGE_SIZE; - c = _dev_order(ios->layout->group_width * ios->layout->mirrors_p1, - ios->layout->mirrors_p1, si.par_dev, si.dev); + p = si.cur_pg; + c = si.cur_comp; page = ios->sp2d->_1p_stripes[p].pages[c]; pg_len = PAGE_SIZE - (si.unit_off % PAGE_SIZE); @@ -534,9 +539,8 @@ static int _read_4_write_last_stripe(struct ore_io_state *ios) goto read_it; ore_calc_stripe_info(ios->layout, offset, 0, &read_si); - p = read_si.unit_off / PAGE_SIZE; - c = _dev_order(ios->layout->group_width * ios->layout->mirrors_p1, - ios->layout->mirrors_p1, read_si.par_dev, read_si.dev); + p = read_si.cur_pg; + c = read_si.cur_comp; if (min_p == sp2d->pages_in_unit) { /* Didn't do it yet */ @@ -620,7 +624,7 @@ static int _read_4_write_execute(struct ore_io_state *ios) int _ore_add_parity_unit(struct ore_io_state *ios, struct ore_striping_info *si, struct ore_per_dev_state *per_dev, - unsigned cur_len) + unsigned cur_len, bool do_xor) { if (ios->reading) { if (per_dev->cur_sg >= ios->sgs_per_dev) { @@ -640,17 +644,16 @@ int _ore_add_parity_unit(struct ore_io_state *ios, si->cur_pg = _sp2d_min_pg(sp2d); num_pages = _sp2d_max_pg(sp2d) + 1 - 
si->cur_pg; - if (!cur_len) /* If last stripe operate on parity comp */ - si->cur_comp = sp2d->data_devs; - if (!per_dev->length) { per_dev->offset += si->cur_pg * PAGE_SIZE; /* If first stripe, Read in all read4write pages * (if needed) before we calculate the first parity. */ - _read_4_write_first_stripe(ios); + if (do_xor) + _read_4_write_first_stripe(ios); } - if (!cur_len) /* If last stripe r4w pages of last stripe */ + if (!cur_len && do_xor) + /* If last stripe r4w pages of last stripe */ _read_4_write_last_stripe(ios); _read_4_write_execute(ios); @@ -662,7 +665,7 @@ int _ore_add_parity_unit(struct ore_io_state *ios, ++(ios->cur_par_page); } - BUG_ON(si->cur_comp != sp2d->data_devs); + BUG_ON(si->cur_comp < sp2d->data_devs); BUG_ON(si->cur_pg + num_pages > sp2d->pages_in_unit); ret = _ore_add_stripe_unit(ios, &array_start, 0, pages, @@ -670,9 +673,10 @@ int _ore_add_parity_unit(struct ore_io_state *ios, if (unlikely(ret)) return ret; - /* TODO: raid6 if (last_parity_dev) */ - _gen_xor_unit(sp2d); - _sp2d_reset(sp2d, ios->r4w, ios->private); + if (do_xor) { + _gen_xor_unit(sp2d); + _sp2d_reset(sp2d, ios->r4w, ios->private); + } } return 0; } diff --git a/fs/exofs/ore_raid.h b/fs/exofs/ore_raid.h index 2ffd2c3c6e46..cf6375d82129 100644 --- a/fs/exofs/ore_raid.h +++ b/fs/exofs/ore_raid.h @@ -31,24 +31,6 @@ #define ORE_DBGMSG2(M...) do {} while (0) /* #define ORE_DBGMSG2 ORE_DBGMSG */ -/* Calculate the component order in a stripe. eg the logical data unit - * address within the stripe of @dev given the @par_dev of this stripe. - */ -static inline unsigned _dev_order(unsigned devs_in_group, unsigned mirrors_p1, - unsigned par_dev, unsigned dev) -{ - unsigned first_dev = dev - dev % devs_in_group; - - dev -= first_dev; - par_dev -= first_dev; - - if (devs_in_group == par_dev) /* The raid 0 case */ - return dev / mirrors_p1; - /* raid4/5/6 case */ - return ((devs_in_group + dev - par_dev - mirrors_p1) % devs_in_group) / - mirrors_p1; -} - /* ios_raid.c stuff needed by ios.c */ int _ore_post_alloc_raid_stuff(struct ore_io_state *ios); void _ore_free_raid_stuff(struct ore_io_state *ios); @@ -56,7 +38,8 @@ void _ore_free_raid_stuff(struct ore_io_state *ios); void _ore_add_sg_seg(struct ore_per_dev_state *per_dev, unsigned cur_len, bool not_last); int _ore_add_parity_unit(struct ore_io_state *ios, struct ore_striping_info *si, - struct ore_per_dev_state *per_dev, unsigned cur_len); + struct ore_per_dev_state *per_dev, unsigned cur_len, + bool do_xor); void _ore_add_stripe_page(struct __stripe_pages_2d *sp2d, struct ore_striping_info *si, struct page *page); static inline void _add_stripe_page(struct __stripe_pages_2d *sp2d, diff --git a/fs/fat/fat.h b/fs/fat/fat.h index 7c31f4bc74a9..e0c4ba39a377 100644 --- a/fs/fat/fat.h +++ b/fs/fat/fat.h @@ -52,7 +52,8 @@ struct fat_mount_options { usefree:1, /* Use free_clusters for FAT32 */ tz_set:1, /* Filesystem timestamps' offset set */ rodir:1, /* allow ATTR_RO for directory */ - discard:1; /* Issue discard requests on deletions */ + discard:1, /* Issue discard requests on deletions */ + dos1xfloppy:1; /* Assume default BPB for DOS 1.x floppies */ }; #define FAT_HASH_BITS 8 diff --git a/fs/fat/inode.c b/fs/fat/inode.c index b3361fe2bcb5..9c83594d7fb5 100644 --- a/fs/fat/inode.c +++ b/fs/fat/inode.c @@ -35,9 +35,71 @@ #define CONFIG_FAT_DEFAULT_IOCHARSET "" #endif +#define KB_IN_SECTORS 2 + +/* + * A deserialized copy of the on-disk structure laid out in struct + * fat_boot_sector. 
+ */ +struct fat_bios_param_block { + u16 fat_sector_size; + u8 fat_sec_per_clus; + u16 fat_reserved; + u8 fat_fats; + u16 fat_dir_entries; + u16 fat_sectors; + u16 fat_fat_length; + u32 fat_total_sect; + + u8 fat16_state; + u32 fat16_vol_id; + + u32 fat32_length; + u32 fat32_root_cluster; + u16 fat32_info_sector; + u8 fat32_state; + u32 fat32_vol_id; +}; + static int fat_default_codepage = CONFIG_FAT_DEFAULT_CODEPAGE; static char fat_default_iocharset[] = CONFIG_FAT_DEFAULT_IOCHARSET; +static struct fat_floppy_defaults { + unsigned nr_sectors; + unsigned sec_per_clus; + unsigned dir_entries; + unsigned media; + unsigned fat_length; +} floppy_defaults[] = { +{ + .nr_sectors = 160 * KB_IN_SECTORS, + .sec_per_clus = 1, + .dir_entries = 64, + .media = 0xFE, + .fat_length = 1, +}, +{ + .nr_sectors = 180 * KB_IN_SECTORS, + .sec_per_clus = 1, + .dir_entries = 64, + .media = 0xFC, + .fat_length = 2, +}, +{ + .nr_sectors = 320 * KB_IN_SECTORS, + .sec_per_clus = 2, + .dir_entries = 112, + .media = 0xFF, + .fat_length = 1, +}, +{ + .nr_sectors = 360 * KB_IN_SECTORS, + .sec_per_clus = 2, + .dir_entries = 112, + .media = 0xFD, + .fat_length = 2, +}, +}; static int fat_add_cluster(struct inode *inode) { @@ -359,7 +421,7 @@ struct inode *fat_iget(struct super_block *sb, loff_t i_pos) static int is_exec(unsigned char *extension) { - unsigned char *exe_extensions = "EXECOMBAT", *walk; + unsigned char exe_extensions[] = "EXECOMBAT", *walk; for (walk = exe_extensions; *walk; walk += 3) if (!strncmp(extension, walk, 3)) @@ -853,6 +915,8 @@ static int fat_show_options(struct seq_file *m, struct dentry *root) seq_puts(m, ",nfs=stale_rw"); if (opts->discard) seq_puts(m, ",discard"); + if (opts->dos1xfloppy) + seq_puts(m, ",dos1xfloppy"); return 0; } @@ -867,7 +931,7 @@ enum { Opt_uni_xl_no, Opt_uni_xl_yes, Opt_nonumtail_no, Opt_nonumtail_yes, Opt_obsolete, Opt_flush, Opt_tz_utc, Opt_rodir, Opt_err_cont, Opt_err_panic, Opt_err_ro, Opt_discard, Opt_nfs, Opt_time_offset, - Opt_nfs_stale_rw, Opt_nfs_nostale_ro, Opt_err, + Opt_nfs_stale_rw, Opt_nfs_nostale_ro, Opt_err, Opt_dos1xfloppy, }; static const match_table_t fat_tokens = { @@ -900,6 +964,7 @@ static const match_table_t fat_tokens = { {Opt_nfs_stale_rw, "nfs"}, {Opt_nfs_stale_rw, "nfs=stale_rw"}, {Opt_nfs_nostale_ro, "nfs=nostale_ro"}, + {Opt_dos1xfloppy, "dos1xfloppy"}, {Opt_obsolete, "conv=binary"}, {Opt_obsolete, "conv=text"}, {Opt_obsolete, "conv=auto"}, @@ -1102,6 +1167,9 @@ static int parse_options(struct super_block *sb, char *options, int is_vfat, case Opt_nfs_nostale_ro: opts->nfs = FAT_NFS_NOSTALE_RO; break; + case Opt_dos1xfloppy: + opts->dos1xfloppy = 1; + break; /* msdos specific */ case Opt_dots: @@ -1247,6 +1315,169 @@ static unsigned long calc_fat_clusters(struct super_block *sb) return sbi->fat_length * sb->s_blocksize * 8 / sbi->fat_bits; } +static bool fat_bpb_is_zero(struct fat_boot_sector *b) +{ + if (get_unaligned_le16(&b->sector_size)) + return false; + if (b->sec_per_clus) + return false; + if (b->reserved) + return false; + if (b->fats) + return false; + if (get_unaligned_le16(&b->dir_entries)) + return false; + if (get_unaligned_le16(&b->sectors)) + return false; + if (b->media) + return false; + if (b->fat_length) + return false; + if (b->secs_track) + return false; + if (b->heads) + return false; + return true; +} + +static int fat_read_bpb(struct super_block *sb, struct fat_boot_sector *b, + int silent, struct fat_bios_param_block *bpb) +{ + int error = -EINVAL; + + /* Read in BPB ... 
*/ + memset(bpb, 0, sizeof(*bpb)); + bpb->fat_sector_size = get_unaligned_le16(&b->sector_size); + bpb->fat_sec_per_clus = b->sec_per_clus; + bpb->fat_reserved = le16_to_cpu(b->reserved); + bpb->fat_fats = b->fats; + bpb->fat_dir_entries = get_unaligned_le16(&b->dir_entries); + bpb->fat_sectors = get_unaligned_le16(&b->sectors); + bpb->fat_fat_length = le16_to_cpu(b->fat_length); + bpb->fat_total_sect = le32_to_cpu(b->total_sect); + + bpb->fat16_state = b->fat16.state; + bpb->fat16_vol_id = get_unaligned_le32(b->fat16.vol_id); + + bpb->fat32_length = le32_to_cpu(b->fat32.length); + bpb->fat32_root_cluster = le32_to_cpu(b->fat32.root_cluster); + bpb->fat32_info_sector = le16_to_cpu(b->fat32.info_sector); + bpb->fat32_state = b->fat32.state; + bpb->fat32_vol_id = get_unaligned_le32(b->fat32.vol_id); + + /* Validate this looks like a FAT filesystem BPB */ + if (!bpb->fat_reserved) { + if (!silent) + fat_msg(sb, KERN_ERR, + "bogus number of reserved sectors"); + goto out; + } + if (!bpb->fat_fats) { + if (!silent) + fat_msg(sb, KERN_ERR, "bogus number of FAT structure"); + goto out; + } + + /* + * Earlier we checked here that b->secs_track and b->head are nonzero, + * but it turns out valid FAT filesystems can have zero there. + */ + + if (!fat_valid_media(b->media)) { + if (!silent) + fat_msg(sb, KERN_ERR, "invalid media value (0x%02x)", + (unsigned)b->media); + goto out; + } + + if (!is_power_of_2(bpb->fat_sector_size) + || (bpb->fat_sector_size < 512) + || (bpb->fat_sector_size > 4096)) { + if (!silent) + fat_msg(sb, KERN_ERR, "bogus logical sector size %u", + (unsigned)bpb->fat_sector_size); + goto out; + } + + if (!is_power_of_2(bpb->fat_sec_per_clus)) { + if (!silent) + fat_msg(sb, KERN_ERR, "bogus sectors per cluster %u", + (unsigned)bpb->fat_sec_per_clus); + goto out; + } + + error = 0; + +out: + return error; +} + +static int fat_read_static_bpb(struct super_block *sb, + struct fat_boot_sector *b, int silent, + struct fat_bios_param_block *bpb) +{ + static const char *notdos1x = "This doesn't look like a DOS 1.x volume"; + + struct fat_floppy_defaults *fdefaults = NULL; + int error = -EINVAL; + sector_t bd_sects; + unsigned i; + + bd_sects = i_size_read(sb->s_bdev->bd_inode) / SECTOR_SIZE; + + /* 16-bit DOS 1.x reliably wrote bootstrap short-jmp code */ + if (b->ignored[0] != 0xeb || b->ignored[2] != 0x90) { + if (!silent) + fat_msg(sb, KERN_ERR, + "%s; no bootstrapping code", notdos1x); + goto out; + } + + /* + * If any value in this region is non-zero, it isn't archaic + * DOS. + */ + if (!fat_bpb_is_zero(b)) { + if (!silent) + fat_msg(sb, KERN_ERR, + "%s; DOS 2.x BPB is non-zero", notdos1x); + goto out; + } + + for (i = 0; i < ARRAY_SIZE(floppy_defaults); i++) { + if (floppy_defaults[i].nr_sectors == bd_sects) { + fdefaults = &floppy_defaults[i]; + break; + } + } + + if (fdefaults == NULL) { + if (!silent) + fat_msg(sb, KERN_WARNING, + "This looks like a DOS 1.x volume, but isn't a recognized floppy size (%llu sectors)", + (u64)bd_sects); + goto out; + } + + if (!silent) + fat_msg(sb, KERN_INFO, + "This looks like a DOS 1.x volume; assuming default BPB values"); + + memset(bpb, 0, sizeof(*bpb)); + bpb->fat_sector_size = SECTOR_SIZE; + bpb->fat_sec_per_clus = fdefaults->sec_per_clus; + bpb->fat_reserved = 1; + bpb->fat_fats = 2; + bpb->fat_dir_entries = fdefaults->dir_entries; + bpb->fat_sectors = fdefaults->nr_sectors; + bpb->fat_fat_length = fdefaults->fat_length; + + error = 0; + +out: + return error; +} + /* * Read the super block of an MS-DOS FS. 
*/ @@ -1256,12 +1487,11 @@ int fat_fill_super(struct super_block *sb, void *data, int silent, int isvfat, struct inode *root_inode = NULL, *fat_inode = NULL; struct inode *fsinfo_inode = NULL; struct buffer_head *bh; - struct fat_boot_sector *b; + struct fat_bios_param_block bpb; struct msdos_sb_info *sbi; u16 logical_sector_size; u32 total_sectors, total_clusters, fat_clusters, rootdir_sectors; int debug; - unsigned int media; long error; char buf[50]; @@ -1298,100 +1528,72 @@ int fat_fill_super(struct super_block *sb, void *data, int silent, int isvfat, goto out_fail; } - b = (struct fat_boot_sector *) bh->b_data; - if (!b->reserved) { - if (!silent) - fat_msg(sb, KERN_ERR, "bogus number of reserved sectors"); - brelse(bh); - goto out_invalid; - } - if (!b->fats) { - if (!silent) - fat_msg(sb, KERN_ERR, "bogus number of FAT structure"); - brelse(bh); - goto out_invalid; - } - - /* - * Earlier we checked here that b->secs_track and b->head are nonzero, - * but it turns out valid FAT filesystems can have zero there. - */ + error = fat_read_bpb(sb, (struct fat_boot_sector *)bh->b_data, silent, + &bpb); + if (error == -EINVAL && sbi->options.dos1xfloppy) + error = fat_read_static_bpb(sb, + (struct fat_boot_sector *)bh->b_data, silent, &bpb); + brelse(bh); - media = b->media; - if (!fat_valid_media(media)) { - if (!silent) - fat_msg(sb, KERN_ERR, "invalid media value (0x%02x)", - media); - brelse(bh); - goto out_invalid; - } - logical_sector_size = get_unaligned_le16(&b->sector_size); - if (!is_power_of_2(logical_sector_size) - || (logical_sector_size < 512) - || (logical_sector_size > 4096)) { - if (!silent) - fat_msg(sb, KERN_ERR, "bogus logical sector size %u", - logical_sector_size); - brelse(bh); - goto out_invalid; - } - sbi->sec_per_clus = b->sec_per_clus; - if (!is_power_of_2(sbi->sec_per_clus)) { - if (!silent) - fat_msg(sb, KERN_ERR, "bogus sectors per cluster %u", - sbi->sec_per_clus); - brelse(bh); + if (error == -EINVAL) goto out_invalid; - } + else if (error) + goto out_fail; + + logical_sector_size = bpb.fat_sector_size; + sbi->sec_per_clus = bpb.fat_sec_per_clus; + error = -EIO; if (logical_sector_size < sb->s_blocksize) { fat_msg(sb, KERN_ERR, "logical sector size too small for device" " (logical sector size = %u)", logical_sector_size); - brelse(bh); goto out_fail; } + if (logical_sector_size > sb->s_blocksize) { - brelse(bh); + struct buffer_head *bh_resize; if (!sb_set_blocksize(sb, logical_sector_size)) { fat_msg(sb, KERN_ERR, "unable to set blocksize %u", logical_sector_size); goto out_fail; } - bh = sb_bread(sb, 0); - if (bh == NULL) { + + /* Verify that the larger boot sector is fully readable */ + bh_resize = sb_bread(sb, 0); + if (bh_resize == NULL) { fat_msg(sb, KERN_ERR, "unable to read boot sector" " (logical sector size = %lu)", sb->s_blocksize); goto out_fail; } - b = (struct fat_boot_sector *) bh->b_data; + brelse(bh_resize); } mutex_init(&sbi->s_lock); sbi->cluster_size = sb->s_blocksize * sbi->sec_per_clus; sbi->cluster_bits = ffs(sbi->cluster_size) - 1; - sbi->fats = b->fats; + sbi->fats = bpb.fat_fats; sbi->fat_bits = 0; /* Don't know yet */ - sbi->fat_start = le16_to_cpu(b->reserved); - sbi->fat_length = le16_to_cpu(b->fat_length); + sbi->fat_start = bpb.fat_reserved; + sbi->fat_length = bpb.fat_fat_length; sbi->root_cluster = 0; sbi->free_clusters = -1; /* Don't know yet */ sbi->free_clus_valid = 0; sbi->prev_free = FAT_START_ENT; sb->s_maxbytes = 0xffffffff; - if (!sbi->fat_length && b->fat32.length) { + if (!sbi->fat_length && bpb.fat32_length) { struct 
fat_boot_fsinfo *fsinfo; struct buffer_head *fsinfo_bh; /* Must be FAT32 */ sbi->fat_bits = 32; - sbi->fat_length = le32_to_cpu(b->fat32.length); - sbi->root_cluster = le32_to_cpu(b->fat32.root_cluster); + sbi->fat_length = bpb.fat32_length; + sbi->root_cluster = bpb.fat32_root_cluster; /* MC - if info_sector is 0, don't multiply by 0 */ - sbi->fsinfo_sector = le16_to_cpu(b->fat32.info_sector); + sbi->fsinfo_sector = bpb.fat32_info_sector; if (sbi->fsinfo_sector == 0) sbi->fsinfo_sector = 1; @@ -1399,7 +1601,6 @@ int fat_fill_super(struct super_block *sb, void *data, int silent, int isvfat, if (fsinfo_bh == NULL) { fat_msg(sb, KERN_ERR, "bread failed, FSINFO block" " (sector = %lu)", sbi->fsinfo_sector); - brelse(bh); goto out_fail; } @@ -1422,35 +1623,28 @@ int fat_fill_super(struct super_block *sb, void *data, int silent, int isvfat, /* interpret volume ID as a little endian 32 bit integer */ if (sbi->fat_bits == 32) - sbi->vol_id = (((u32)b->fat32.vol_id[0]) | - ((u32)b->fat32.vol_id[1] << 8) | - ((u32)b->fat32.vol_id[2] << 16) | - ((u32)b->fat32.vol_id[3] << 24)); + sbi->vol_id = bpb.fat32_vol_id; else /* fat 16 or 12 */ - sbi->vol_id = (((u32)b->fat16.vol_id[0]) | - ((u32)b->fat16.vol_id[1] << 8) | - ((u32)b->fat16.vol_id[2] << 16) | - ((u32)b->fat16.vol_id[3] << 24)); + sbi->vol_id = bpb.fat16_vol_id; sbi->dir_per_block = sb->s_blocksize / sizeof(struct msdos_dir_entry); sbi->dir_per_block_bits = ffs(sbi->dir_per_block) - 1; sbi->dir_start = sbi->fat_start + sbi->fats * sbi->fat_length; - sbi->dir_entries = get_unaligned_le16(&b->dir_entries); + sbi->dir_entries = bpb.fat_dir_entries; if (sbi->dir_entries & (sbi->dir_per_block - 1)) { if (!silent) fat_msg(sb, KERN_ERR, "bogus directory-entries per block" " (%u)", sbi->dir_entries); - brelse(bh); goto out_invalid; } rootdir_sectors = sbi->dir_entries * sizeof(struct msdos_dir_entry) / sb->s_blocksize; sbi->data_start = sbi->dir_start + rootdir_sectors; - total_sectors = get_unaligned_le16(&b->sectors); + total_sectors = bpb.fat_sectors; if (total_sectors == 0) - total_sectors = le32_to_cpu(b->total_sect); + total_sectors = bpb.fat_total_sect; total_clusters = (total_sectors - sbi->data_start) / sbi->sec_per_clus; @@ -1459,9 +1653,9 @@ int fat_fill_super(struct super_block *sb, void *data, int silent, int isvfat, /* some OSes set FAT_STATE_DIRTY and clean it on unmount. 
*/ if (sbi->fat_bits == 32) - sbi->dirty = b->fat32.state & FAT_STATE_DIRTY; + sbi->dirty = bpb.fat32_state & FAT_STATE_DIRTY; else /* fat 16 or 12 */ - sbi->dirty = b->fat16.state & FAT_STATE_DIRTY; + sbi->dirty = bpb.fat16_state & FAT_STATE_DIRTY; /* check that FAT table does not overflow */ fat_clusters = calc_fat_clusters(sb); @@ -1470,7 +1664,6 @@ int fat_fill_super(struct super_block *sb, void *data, int silent, int isvfat, if (!silent) fat_msg(sb, KERN_ERR, "count of clusters too big (%u)", total_clusters); - brelse(bh); goto out_invalid; } @@ -1483,8 +1676,6 @@ int fat_fill_super(struct super_block *sb, void *data, int silent, int isvfat, if (sbi->prev_free < FAT_START_ENT) sbi->prev_free = FAT_START_ENT; - brelse(bh); - /* set up enough so that it can read an inode */ fat_hash_init(sb); dir_hash_init(sb); diff --git a/fs/file_table.c b/fs/file_table.c index a374f5033e97..40bf4660f0a3 100644 --- a/fs/file_table.c +++ b/fs/file_table.c @@ -76,14 +76,14 @@ EXPORT_SYMBOL_GPL(get_max_files); * Handle nr_files sysctl */ #if defined(CONFIG_SYSCTL) && defined(CONFIG_PROC_FS) -int proc_nr_files(ctl_table *table, int write, +int proc_nr_files(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { files_stat.nr_files = get_nr_files(); return proc_doulongvec_minmax(table, write, buffer, lenp, ppos); } #else -int proc_nr_files(ctl_table *table, int write, +int proc_nr_files(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; diff --git a/fs/fscache/main.c b/fs/fscache/main.c index acd4bf1fc277..63f868e869b9 100644 --- a/fs/fscache/main.c +++ b/fs/fscache/main.c @@ -67,7 +67,7 @@ static int fscache_max_active_sysctl(struct ctl_table *table, int write, return ret; } -ctl_table fscache_sysctls[] = { +struct ctl_table fscache_sysctls[] = { { .procname = "object_max_active", .data = &fscache_object_max_active, @@ -87,7 +87,7 @@ ctl_table fscache_sysctls[] = { {} }; -ctl_table fscache_sysctls_root[] = { +struct ctl_table fscache_sysctls_root[] = { { .procname = "fscache", .mode = 0555, diff --git a/fs/hfsplus/attributes.c b/fs/hfsplus/attributes.c index caf89a7be0a1..e5b221de7de6 100644 --- a/fs/hfsplus/attributes.c +++ b/fs/hfsplus/attributes.c @@ -54,14 +54,11 @@ int hfsplus_attr_build_key(struct super_block *sb, hfsplus_btree_key *key, memset(key, 0, sizeof(struct hfsplus_attr_key)); key->attr.cnid = cpu_to_be32(cnid); if (name) { - len = strlen(name); - if (len > HFSPLUS_ATTR_MAX_STRLEN) { - pr_err("invalid xattr name's length\n"); - return -EINVAL; - } - hfsplus_asc2uni(sb, + int res = hfsplus_asc2uni(sb, (struct hfsplus_unistr *)&key->attr.key_name, - HFSPLUS_ATTR_MAX_STRLEN, name, len); + HFSPLUS_ATTR_MAX_STRLEN, name, strlen(name)); + if (res) + return res; len = be16_to_cpu(key->attr.key_name.length); } else { key->attr.key_name.length = 0; @@ -82,31 +79,6 @@ int hfsplus_attr_build_key(struct super_block *sb, hfsplus_btree_key *key, return 0; } -void hfsplus_attr_build_key_uni(hfsplus_btree_key *key, - u32 cnid, - struct hfsplus_attr_unistr *name) -{ - int ustrlen; - - memset(key, 0, sizeof(struct hfsplus_attr_key)); - ustrlen = be16_to_cpu(name->length); - key->attr.cnid = cpu_to_be32(cnid); - key->attr.key_name.length = cpu_to_be16(ustrlen); - ustrlen *= 2; - memcpy(key->attr.key_name.unicode, name->unicode, ustrlen); - - /* The length of the key, as stored in key_len field, does not include - * the size of the key_len field itself. 
- * So, offsetof(hfsplus_attr_key, key_name) is a trick because - * it takes into consideration key_len field (__be16) of - * hfsplus_attr_key structure instead of length field (__be16) of - * hfsplus_attr_unistr structure. - */ - key->key_len = - cpu_to_be16(offsetof(struct hfsplus_attr_key, key_name) + - ustrlen); -} - hfsplus_attr_entry *hfsplus_alloc_attr_entry(void) { return kmem_cache_alloc(hfsplus_attr_tree_cachep, GFP_KERNEL); diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c index 11c860204520..759708fd9331 100644 --- a/fs/hfsplus/bnode.c +++ b/fs/hfsplus/bnode.c @@ -27,13 +27,13 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len) pagep = node->page + (off >> PAGE_CACHE_SHIFT); off &= ~PAGE_CACHE_MASK; - l = min(len, (int)PAGE_CACHE_SIZE - off); + l = min_t(int, len, PAGE_CACHE_SIZE - off); memcpy(buf, kmap(*pagep) + off, l); kunmap(*pagep); while ((len -= l) != 0) { buf += l; - l = min(len, (int)PAGE_CACHE_SIZE); + l = min_t(int, len, PAGE_CACHE_SIZE); memcpy(buf, kmap(*++pagep), l); kunmap(*pagep); } @@ -80,14 +80,14 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len) pagep = node->page + (off >> PAGE_CACHE_SHIFT); off &= ~PAGE_CACHE_MASK; - l = min(len, (int)PAGE_CACHE_SIZE - off); + l = min_t(int, len, PAGE_CACHE_SIZE - off); memcpy(kmap(*pagep) + off, buf, l); set_page_dirty(*pagep); kunmap(*pagep); while ((len -= l) != 0) { buf += l; - l = min(len, (int)PAGE_CACHE_SIZE); + l = min_t(int, len, PAGE_CACHE_SIZE); memcpy(kmap(*++pagep), buf, l); set_page_dirty(*pagep); kunmap(*pagep); @@ -110,13 +110,13 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len) pagep = node->page + (off >> PAGE_CACHE_SHIFT); off &= ~PAGE_CACHE_MASK; - l = min(len, (int)PAGE_CACHE_SIZE - off); + l = min_t(int, len, PAGE_CACHE_SIZE - off); memset(kmap(*pagep) + off, 0, l); set_page_dirty(*pagep); kunmap(*pagep); while ((len -= l) != 0) { - l = min(len, (int)PAGE_CACHE_SIZE); + l = min_t(int, len, PAGE_CACHE_SIZE); memset(kmap(*++pagep), 0, l); set_page_dirty(*pagep); kunmap(*pagep); @@ -142,14 +142,14 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst, dst &= ~PAGE_CACHE_MASK; if (src == dst) { - l = min(len, (int)PAGE_CACHE_SIZE - src); + l = min_t(int, len, PAGE_CACHE_SIZE - src); memcpy(kmap(*dst_page) + src, kmap(*src_page) + src, l); kunmap(*src_page); set_page_dirty(*dst_page); kunmap(*dst_page); while ((len -= l) != 0) { - l = min(len, (int)PAGE_CACHE_SIZE); + l = min_t(int, len, PAGE_CACHE_SIZE); memcpy(kmap(*++dst_page), kmap(*++src_page), l); kunmap(*src_page); set_page_dirty(*dst_page); @@ -251,7 +251,7 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len) dst &= ~PAGE_CACHE_MASK; if (src == dst) { - l = min(len, (int)PAGE_CACHE_SIZE - src); + l = min_t(int, len, PAGE_CACHE_SIZE - src); memmove(kmap(*dst_page) + src, kmap(*src_page) + src, l); kunmap(*src_page); @@ -259,7 +259,7 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len) kunmap(*dst_page); while ((len -= l) != 0) { - l = min(len, (int)PAGE_CACHE_SIZE); + l = min_t(int, len, PAGE_CACHE_SIZE); memmove(kmap(*++dst_page), kmap(*++src_page), l); kunmap(*src_page); @@ -386,9 +386,8 @@ struct hfs_bnode *hfs_bnode_findhash(struct hfs_btree *tree, u32 cnid) struct hfs_bnode *node; if (cnid >= tree->node_count) { - pr_err("request for non-existent node " - "%d in B*Tree\n", - cnid); + pr_err("request for non-existent node %d in B*Tree\n", + cnid); return NULL; } @@ -409,9 +408,8 @@ static struct hfs_bnode 
*__hfs_bnode_create(struct hfs_btree *tree, u32 cnid) loff_t off; if (cnid >= tree->node_count) { - pr_err("request for non-existent node " - "%d in B*Tree\n", - cnid); + pr_err("request for non-existent node %d in B*Tree\n", + cnid); return NULL; } @@ -602,7 +600,7 @@ struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num) pagep = node->page; memset(kmap(*pagep) + node->page_offset, 0, - min((int)PAGE_CACHE_SIZE, (int)tree->node_size)); + min_t(int, PAGE_CACHE_SIZE, tree->node_size)); set_page_dirty(*pagep); kunmap(*pagep); for (i = 1; i < tree->pages_per_bnode; i++) { @@ -648,8 +646,8 @@ void hfs_bnode_put(struct hfs_bnode *node) if (test_bit(HFS_BNODE_DELETED, &node->flags)) { hfs_bnode_unhash(node); spin_unlock(&tree->hash_lock); - hfs_bnode_clear(node, 0, - PAGE_CACHE_SIZE * tree->pages_per_bnode); + if (hfs_bnode_need_zeroout(tree)) + hfs_bnode_clear(node, 0, tree->node_size); hfs_bmap_free(node); hfs_bnode_free(node); return; @@ -658,3 +656,16 @@ void hfs_bnode_put(struct hfs_bnode *node) } } +/* + * Unused nodes have to be zeroed if this is the catalog tree and + * a corresponding flag in the volume header is set. + */ +bool hfs_bnode_need_zeroout(struct hfs_btree *tree) +{ + struct super_block *sb = tree->inode->i_sb; + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb); + const u32 volume_attr = be32_to_cpu(sbi->s_vhdr->attributes); + + return tree->cnid == HFSPLUS_CAT_CNID && + volume_attr & HFSPLUS_VOL_UNUSED_NODE_FIX; +} diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c index 0fcec8b2a90b..3345c7553edc 100644 --- a/fs/hfsplus/btree.c +++ b/fs/hfsplus/btree.c @@ -358,7 +358,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree) u32 count; int res; - res = hfsplus_file_extend(inode); + res = hfsplus_file_extend(inode, hfs_bnode_need_zeroout(tree)); if (res) return ERR_PTR(res); hip->phys_size = inode->i_size = diff --git a/fs/hfsplus/dir.c b/fs/hfsplus/dir.c index bdec66522de3..610a3260bef1 100644 --- a/fs/hfsplus/dir.c +++ b/fs/hfsplus/dir.c @@ -12,6 +12,7 @@ #include <linux/fs.h> #include <linux/slab.h> #include <linux/random.h> +#include <linux/nls.h> #include "hfsplus_fs.h" #include "hfsplus_raw.h" @@ -127,7 +128,7 @@ static int hfsplus_readdir(struct file *file, struct dir_context *ctx) struct inode *inode = file_inode(file); struct super_block *sb = inode->i_sb; int len, err; - char strbuf[HFSPLUS_MAX_STRLEN + 1]; + char *strbuf; hfsplus_cat_entry entry; struct hfs_find_data fd; struct hfsplus_readdir_data *rd; @@ -139,6 +140,11 @@ static int hfsplus_readdir(struct file *file, struct dir_context *ctx) err = hfs_find_init(HFSPLUS_SB(sb)->cat_tree, &fd); if (err) return err; + strbuf = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_MAX_STRLEN + 1, GFP_KERNEL); + if (!strbuf) { + err = -ENOMEM; + goto out; + } hfsplus_cat_build_key(sb, fd.search_key, inode->i_ino, NULL); err = hfs_brec_find(&fd, hfs_find_rec_by_key); if (err) @@ -193,7 +199,7 @@ static int hfsplus_readdir(struct file *file, struct dir_context *ctx) hfs_bnode_read(fd.bnode, &entry, fd.entryoffset, fd.entrylength); type = be16_to_cpu(entry.type); - len = HFSPLUS_MAX_STRLEN; + len = NLS_MAX_CHARSET_SIZE * HFSPLUS_MAX_STRLEN; err = hfsplus_uni2asc(sb, &fd.key->cat.name, strbuf, &len); if (err) goto out; @@ -212,13 +218,31 @@ static int hfsplus_readdir(struct file *file, struct dir_context *ctx) be32_to_cpu(entry.folder.id), DT_DIR)) break; } else if (type == HFSPLUS_FILE) { + u16 mode; + unsigned type = DT_UNKNOWN; + if (fd.entrylength < sizeof(struct hfsplus_cat_file)) { pr_err("small file entry\n"); err 
= -EIO; goto out; } + + mode = be16_to_cpu(entry.file.permissions.mode); + if (S_ISREG(mode)) + type = DT_REG; + else if (S_ISLNK(mode)) + type = DT_LNK; + else if (S_ISFIFO(mode)) + type = DT_FIFO; + else if (S_ISCHR(mode)) + type = DT_CHR; + else if (S_ISBLK(mode)) + type = DT_BLK; + else if (S_ISSOCK(mode)) + type = DT_SOCK; + if (!dir_emit(ctx, strbuf, len, - be32_to_cpu(entry.file.id), DT_REG)) + be32_to_cpu(entry.file.id), type)) break; } else { pr_err("bad catalog entry type\n"); @@ -246,6 +270,7 @@ next: } memcpy(&rd->key, fd.key, sizeof(struct hfsplus_cat_key)); out: + kfree(strbuf); hfs_find_exit(&fd); return err; } diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c index a7aafb35b624..feca524ce2a5 100644 --- a/fs/hfsplus/extents.c +++ b/fs/hfsplus/extents.c @@ -235,7 +235,7 @@ int hfsplus_get_block(struct inode *inode, sector_t iblock, if (iblock > hip->fs_blocks || !create) return -EIO; if (ablock >= hip->alloc_blocks) { - res = hfsplus_file_extend(inode); + res = hfsplus_file_extend(inode, false); if (res) return res; } @@ -425,7 +425,7 @@ int hfsplus_free_fork(struct super_block *sb, u32 cnid, return res; } -int hfsplus_file_extend(struct inode *inode) +int hfsplus_file_extend(struct inode *inode, bool zeroout) { struct super_block *sb = inode->i_sb; struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb); @@ -436,10 +436,9 @@ int hfsplus_file_extend(struct inode *inode) if (sbi->alloc_file->i_size * 8 < sbi->total_blocks - sbi->free_blocks + 8) { /* extend alloc file */ - pr_err("extend alloc file! " - "(%llu,%u,%u)\n", - sbi->alloc_file->i_size * 8, - sbi->total_blocks, sbi->free_blocks); + pr_err("extend alloc file! (%llu,%u,%u)\n", + sbi->alloc_file->i_size * 8, + sbi->total_blocks, sbi->free_blocks); return -ENOSPC; } @@ -463,6 +462,12 @@ int hfsplus_file_extend(struct inode *inode) } } + if (zeroout) { + res = sb_issue_zeroout(sb, start, len, GFP_NOFS); + if (res) + goto out; + } + hfs_dbg(EXTENT, "extend %lu: %u,%u\n", inode->i_ino, start, len); if (hip->alloc_blocks <= hip->first_blocks) { diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h index 83dc29286b10..eb5e059f481a 100644 --- a/fs/hfsplus/hfsplus_fs.h +++ b/fs/hfsplus/hfsplus_fs.h @@ -369,114 +369,119 @@ typedef int (*search_strategy_t)(struct hfs_bnode *, /* attributes.c */ int __init hfsplus_create_attr_tree_cache(void); void hfsplus_destroy_attr_tree_cache(void); +int hfsplus_attr_bin_cmp_key(const hfsplus_btree_key *k1, + const hfsplus_btree_key *k2); +int hfsplus_attr_build_key(struct super_block *sb, hfsplus_btree_key *key, + u32 cnid, const char *name); hfsplus_attr_entry *hfsplus_alloc_attr_entry(void); -void hfsplus_destroy_attr_entry(hfsplus_attr_entry *entry_p); -int hfsplus_attr_bin_cmp_key(const hfsplus_btree_key *, - const hfsplus_btree_key *); -int hfsplus_attr_build_key(struct super_block *, hfsplus_btree_key *, - u32, const char *); -void hfsplus_attr_build_key_uni(hfsplus_btree_key *key, - u32 cnid, - struct hfsplus_attr_unistr *name); -int hfsplus_find_attr(struct super_block *, u32, - const char *, struct hfs_find_data *); +void hfsplus_destroy_attr_entry(hfsplus_attr_entry *entry); +int hfsplus_find_attr(struct super_block *sb, u32 cnid, const char *name, + struct hfs_find_data *fd); int hfsplus_attr_exists(struct inode *inode, const char *name); -int hfsplus_create_attr(struct inode *, const char *, const void *, size_t); -int hfsplus_delete_attr(struct inode *, const char *); +int hfsplus_create_attr(struct inode *inode, const char *name, + const void *value, size_t size); +int 
hfsplus_delete_attr(struct inode *inode, const char *name); int hfsplus_delete_all_attrs(struct inode *dir, u32 cnid); /* bitmap.c */ -int hfsplus_block_allocate(struct super_block *, u32, u32, u32 *); -int hfsplus_block_free(struct super_block *, u32, u32); +int hfsplus_block_allocate(struct super_block *sb, u32 size, u32 offset, + u32 *max); +int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count); /* btree.c */ -u32 hfsplus_calc_btree_clump_size(u32, u32, u64, int); -struct hfs_btree *hfs_btree_open(struct super_block *, u32); -void hfs_btree_close(struct hfs_btree *); -int hfs_btree_write(struct hfs_btree *); -struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *); -void hfs_bmap_free(struct hfs_bnode *); +u32 hfsplus_calc_btree_clump_size(u32 block_size, u32 node_size, u64 sectors, + int file_id); +struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id); +void hfs_btree_close(struct hfs_btree *tree); +int hfs_btree_write(struct hfs_btree *tree); +struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree); +void hfs_bmap_free(struct hfs_bnode *node); /* bnode.c */ -void hfs_bnode_read(struct hfs_bnode *, void *, int, int); -u16 hfs_bnode_read_u16(struct hfs_bnode *, int); -u8 hfs_bnode_read_u8(struct hfs_bnode *, int); -void hfs_bnode_read_key(struct hfs_bnode *, void *, int); -void hfs_bnode_write(struct hfs_bnode *, void *, int, int); -void hfs_bnode_write_u16(struct hfs_bnode *, int, u16); -void hfs_bnode_clear(struct hfs_bnode *, int, int); -void hfs_bnode_copy(struct hfs_bnode *, int, - struct hfs_bnode *, int, int); -void hfs_bnode_move(struct hfs_bnode *, int, int, int); -void hfs_bnode_dump(struct hfs_bnode *); -void hfs_bnode_unlink(struct hfs_bnode *); -struct hfs_bnode *hfs_bnode_findhash(struct hfs_btree *, u32); -struct hfs_bnode *hfs_bnode_find(struct hfs_btree *, u32); -void hfs_bnode_unhash(struct hfs_bnode *); -void hfs_bnode_free(struct hfs_bnode *); -struct hfs_bnode *hfs_bnode_create(struct hfs_btree *, u32); -void hfs_bnode_get(struct hfs_bnode *); -void hfs_bnode_put(struct hfs_bnode *); +void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len); +u16 hfs_bnode_read_u16(struct hfs_bnode *node, int off); +u8 hfs_bnode_read_u8(struct hfs_bnode *node, int off); +void hfs_bnode_read_key(struct hfs_bnode *node, void *key, int off); +void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len); +void hfs_bnode_write_u16(struct hfs_bnode *node, int off, u16 data); +void hfs_bnode_clear(struct hfs_bnode *node, int off, int len); +void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst, + struct hfs_bnode *src_node, int src, int len); +void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len); +void hfs_bnode_dump(struct hfs_bnode *node); +void hfs_bnode_unlink(struct hfs_bnode *node); +struct hfs_bnode *hfs_bnode_findhash(struct hfs_btree *tree, u32 cnid); +void hfs_bnode_unhash(struct hfs_bnode *node); +struct hfs_bnode *hfs_bnode_find(struct hfs_btree *tree, u32 num); +void hfs_bnode_free(struct hfs_bnode *node); +struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num); +void hfs_bnode_get(struct hfs_bnode *node); +void hfs_bnode_put(struct hfs_bnode *node); +bool hfs_bnode_need_zeroout(struct hfs_btree *tree); /* brec.c */ -u16 hfs_brec_lenoff(struct hfs_bnode *, u16, u16 *); -u16 hfs_brec_keylen(struct hfs_bnode *, u16); -int hfs_brec_insert(struct hfs_find_data *, void *, int); -int hfs_brec_remove(struct hfs_find_data *); +u16 hfs_brec_lenoff(struct hfs_bnode *node, u16 rec, u16 *off); +u16 
hfs_brec_keylen(struct hfs_bnode *node, u16 rec); +int hfs_brec_insert(struct hfs_find_data *fd, void *entry, int entry_len); +int hfs_brec_remove(struct hfs_find_data *fd); /* bfind.c */ -int hfs_find_init(struct hfs_btree *, struct hfs_find_data *); -void hfs_find_exit(struct hfs_find_data *); -int hfs_find_1st_rec_by_cnid(struct hfs_bnode *, - struct hfs_find_data *, - int *, int *, int *); -int hfs_find_rec_by_key(struct hfs_bnode *, - struct hfs_find_data *, - int *, int *, int *); -int __hfs_brec_find(struct hfs_bnode *, struct hfs_find_data *, - search_strategy_t); -int hfs_brec_find(struct hfs_find_data *, search_strategy_t); -int hfs_brec_read(struct hfs_find_data *, void *, int); -int hfs_brec_goto(struct hfs_find_data *, int); +int hfs_find_init(struct hfs_btree *tree, struct hfs_find_data *fd); +void hfs_find_exit(struct hfs_find_data *fd); +int hfs_find_1st_rec_by_cnid(struct hfs_bnode *bnode, struct hfs_find_data *fd, + int *begin, int *end, int *cur_rec); +int hfs_find_rec_by_key(struct hfs_bnode *bnode, struct hfs_find_data *fd, + int *begin, int *end, int *cur_rec); +int __hfs_brec_find(struct hfs_bnode *bnode, struct hfs_find_data *fd, + search_strategy_t rec_found); +int hfs_brec_find(struct hfs_find_data *fd, search_strategy_t do_key_compare); +int hfs_brec_read(struct hfs_find_data *fd, void *rec, int rec_len); +int hfs_brec_goto(struct hfs_find_data *fd, int cnt); /* catalog.c */ -int hfsplus_cat_case_cmp_key(const hfsplus_btree_key *, - const hfsplus_btree_key *); -int hfsplus_cat_bin_cmp_key(const hfsplus_btree_key *, - const hfsplus_btree_key *); -void hfsplus_cat_build_key(struct super_block *sb, - hfsplus_btree_key *, u32, struct qstr *); -int hfsplus_find_cat(struct super_block *, u32, struct hfs_find_data *); -int hfsplus_create_cat(u32, struct inode *, struct qstr *, struct inode *); -int hfsplus_delete_cat(u32, struct inode *, struct qstr *); -int hfsplus_rename_cat(u32, struct inode *, struct qstr *, - struct inode *, struct qstr *); +int hfsplus_cat_case_cmp_key(const hfsplus_btree_key *k1, + const hfsplus_btree_key *k2); +int hfsplus_cat_bin_cmp_key(const hfsplus_btree_key *k1, + const hfsplus_btree_key *k2); +void hfsplus_cat_build_key(struct super_block *sb, hfsplus_btree_key *key, + u32 parent, struct qstr *str); void hfsplus_cat_set_perms(struct inode *inode, struct hfsplus_perm *perms); +int hfsplus_find_cat(struct super_block *sb, u32 cnid, + struct hfs_find_data *fd); +int hfsplus_create_cat(u32 cnid, struct inode *dir, struct qstr *str, + struct inode *inode); +int hfsplus_delete_cat(u32 cnid, struct inode *dir, struct qstr *str); +int hfsplus_rename_cat(u32 cnid, struct inode *src_dir, struct qstr *src_name, + struct inode *dst_dir, struct qstr *dst_name); /* dir.c */ extern const struct inode_operations hfsplus_dir_inode_operations; extern const struct file_operations hfsplus_dir_operations; /* extents.c */ -int hfsplus_ext_cmp_key(const hfsplus_btree_key *, const hfsplus_btree_key *); -int hfsplus_ext_write_extent(struct inode *); -int hfsplus_get_block(struct inode *, sector_t, struct buffer_head *, int); -int hfsplus_free_fork(struct super_block *, u32, - struct hfsplus_fork_raw *, int); -int hfsplus_file_extend(struct inode *); -void hfsplus_file_truncate(struct inode *); +int hfsplus_ext_cmp_key(const hfsplus_btree_key *k1, + const hfsplus_btree_key *k2); +int hfsplus_ext_write_extent(struct inode *inode); +int hfsplus_get_block(struct inode *inode, sector_t iblock, + struct buffer_head *bh_result, int create); +int hfsplus_free_fork(struct 
super_block *sb, u32 cnid, + struct hfsplus_fork_raw *fork, int type); +int hfsplus_file_extend(struct inode *inode, bool zeroout); +void hfsplus_file_truncate(struct inode *inode); /* inode.c */ extern const struct address_space_operations hfsplus_aops; extern const struct address_space_operations hfsplus_btree_aops; extern const struct dentry_operations hfsplus_dentry_operations; -void hfsplus_inode_read_fork(struct inode *, struct hfsplus_fork_raw *); -void hfsplus_inode_write_fork(struct inode *, struct hfsplus_fork_raw *); -int hfsplus_cat_read_inode(struct inode *, struct hfs_find_data *); -int hfsplus_cat_write_inode(struct inode *); -struct inode *hfsplus_new_inode(struct super_block *, umode_t); -void hfsplus_delete_inode(struct inode *); +struct inode *hfsplus_new_inode(struct super_block *sb, umode_t mode); +void hfsplus_delete_inode(struct inode *inode); +void hfsplus_inode_read_fork(struct inode *inode, + struct hfsplus_fork_raw *fork); +void hfsplus_inode_write_fork(struct inode *inode, + struct hfsplus_fork_raw *fork); +int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd); +int hfsplus_cat_write_inode(struct inode *inode); int hfsplus_file_fsync(struct file *file, loff_t start, loff_t end, int datasync); @@ -484,13 +489,17 @@ int hfsplus_file_fsync(struct file *file, loff_t start, loff_t end, long hfsplus_ioctl(struct file *filp, unsigned int cmd, unsigned long arg); /* options.c */ -int hfsplus_parse_options(char *, struct hfsplus_sb_info *); +void hfsplus_fill_defaults(struct hfsplus_sb_info *opts); int hfsplus_parse_options_remount(char *input, int *force); -void hfsplus_fill_defaults(struct hfsplus_sb_info *); -int hfsplus_show_options(struct seq_file *, struct dentry *); +int hfsplus_parse_options(char *input, struct hfsplus_sb_info *sbi); +int hfsplus_show_options(struct seq_file *seq, struct dentry *root); + +/* part_tbl.c */ +int hfs_part_find(struct super_block *sb, sector_t *part_start, + sector_t *part_size); /* super.c */ -struct inode *hfsplus_iget(struct super_block *, unsigned long); +struct inode *hfsplus_iget(struct super_block *sb, unsigned long ino); void hfsplus_mark_mdb_dirty(struct super_block *sb); /* tables.c */ @@ -499,23 +508,23 @@ extern u16 hfsplus_decompose_table[]; extern u16 hfsplus_compose_table[]; /* unicode.c */ -int hfsplus_strcasecmp(const struct hfsplus_unistr *, - const struct hfsplus_unistr *); -int hfsplus_strcmp(const struct hfsplus_unistr *, - const struct hfsplus_unistr *); -int hfsplus_uni2asc(struct super_block *, - const struct hfsplus_unistr *, char *, int *); -int hfsplus_asc2uni(struct super_block *, - struct hfsplus_unistr *, int, const char *, int); +int hfsplus_strcasecmp(const struct hfsplus_unistr *s1, + const struct hfsplus_unistr *s2); +int hfsplus_strcmp(const struct hfsplus_unistr *s1, + const struct hfsplus_unistr *s2); +int hfsplus_uni2asc(struct super_block *sb, const struct hfsplus_unistr *ustr, + char *astr, int *len_p); +int hfsplus_asc2uni(struct super_block *sb, struct hfsplus_unistr *ustr, + int max_unistr_len, const char *astr, int len); int hfsplus_hash_dentry(const struct dentry *dentry, struct qstr *str); -int hfsplus_compare_dentry(const struct dentry *parent, const struct dentry *dentry, - unsigned int len, const char *str, const struct qstr *name); +int hfsplus_compare_dentry(const struct dentry *parent, + const struct dentry *dentry, unsigned int len, + const char *str, const struct qstr *name); /* wrapper.c */ -int hfsplus_read_wrapper(struct super_block *); -int 
hfs_part_find(struct super_block *, sector_t *, sector_t *); -int hfsplus_submit_bio(struct super_block *sb, sector_t sector, - void *buf, void **data, int rw); +int hfsplus_submit_bio(struct super_block *sb, sector_t sector, void *buf, + void **data, int rw); +int hfsplus_read_wrapper(struct super_block *sb); /* time macros */ #define __hfsp_mt2ut(t) (be32_to_cpu(t) - 2082844800U) diff --git a/fs/hfsplus/hfsplus_raw.h b/fs/hfsplus/hfsplus_raw.h index 5a126828d85e..8298d0985f81 100644 --- a/fs/hfsplus/hfsplus_raw.h +++ b/fs/hfsplus/hfsplus_raw.h @@ -144,6 +144,7 @@ struct hfsplus_vh { #define HFSPLUS_VOL_NODEID_REUSED (1 << 12) #define HFSPLUS_VOL_JOURNALED (1 << 13) #define HFSPLUS_VOL_SOFTLOCK (1 << 15) +#define HFSPLUS_VOL_UNUSED_NODE_FIX (1 << 31) /* HFS+ BTree node descriptor */ struct hfs_bnode_desc { diff --git a/fs/hfsplus/options.c b/fs/hfsplus/options.c index 68537e8b7a09..c90b72ee676d 100644 --- a/fs/hfsplus/options.c +++ b/fs/hfsplus/options.c @@ -173,9 +173,8 @@ int hfsplus_parse_options(char *input, struct hfsplus_sb_info *sbi) if (p) sbi->nls = load_nls(p); if (!sbi->nls) { - pr_err("unable to load " - "nls mapping \"%s\"\n", - p); + pr_err("unable to load nls mapping \"%s\"\n", + p); kfree(p); return 0; } @@ -232,8 +231,8 @@ int hfsplus_show_options(struct seq_file *seq, struct dentry *root) if (sbi->nls) seq_printf(seq, ",nls=%s", sbi->nls->charset); if (test_bit(HFSPLUS_SB_NODECOMPOSE, &sbi->flags)) - seq_printf(seq, ",nodecompose"); + seq_puts(seq, ",nodecompose"); if (test_bit(HFSPLUS_SB_NOBARRIER, &sbi->flags)) - seq_printf(seq, ",nobarrier"); + seq_puts(seq, ",nobarrier"); return 0; } diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c index a513d2d36be9..4cf2024b87da 100644 --- a/fs/hfsplus/super.c +++ b/fs/hfsplus/super.c @@ -131,9 +131,10 @@ static int hfsplus_system_write_inode(struct inode *inode) hfsplus_inode_write_fork(inode, fork); if (tree) { int err = hfs_btree_write(tree); + if (err) { pr_err("b-tree write err: %d, ino %lu\n", - err, inode->i_ino); + err, inode->i_ino); return err; } } diff --git a/fs/hfsplus/wrapper.c b/fs/hfsplus/wrapper.c index 3f999649587f..cc6235671437 100644 --- a/fs/hfsplus/wrapper.c +++ b/fs/hfsplus/wrapper.c @@ -24,8 +24,8 @@ struct hfsplus_wd { u16 embed_count; }; -/* - * hfsplus_submit_bio - Perfrom block I/O +/** + * hfsplus_submit_bio - Perform block I/O * @sb: super block of volume for I/O * @sector: block to read or write, for blocks of HFSPLUS_SECTOR_SIZE bytes * @buf: buffer for I/O @@ -231,10 +231,8 @@ reread: if (blocksize < HFSPLUS_SECTOR_SIZE || ((blocksize - 1) & blocksize)) goto out_free_backup_vhdr; sbi->alloc_blksz = blocksize; - sbi->alloc_blksz_shift = 0; - while ((blocksize >>= 1) != 0) - sbi->alloc_blksz_shift++; - blocksize = min(sbi->alloc_blksz, (u32)PAGE_SIZE); + sbi->alloc_blksz_shift = ilog2(blocksize); + blocksize = min_t(u32, sbi->alloc_blksz, PAGE_SIZE); /* * Align block size to block offset. 
diff --git a/fs/hfsplus/xattr.c b/fs/hfsplus/xattr.c index 4e27edc082a4..d98094a9f476 100644 --- a/fs/hfsplus/xattr.c +++ b/fs/hfsplus/xattr.c @@ -8,6 +8,7 @@ #include "hfsplus_fs.h" #include <linux/posix_acl_xattr.h> +#include <linux/nls.h> #include "xattr.h" #include "acl.h" @@ -66,10 +67,10 @@ static void hfsplus_init_header_node(struct inode *attr_file, char *bmp; u32 used_nodes; u32 used_bmp_bytes; - loff_t tmp; + u64 tmp; hfs_dbg(ATTR_MOD, "init_hdr_attr_file: clump %u, node_size %u\n", - clump_size, node_size); + clump_size, node_size); /* The end of the node contains list of record offsets */ rec_offsets = (__be16 *)(buf + node_size); @@ -195,7 +196,7 @@ check_attr_tree_state_again: } while (hip->alloc_blocks < hip->clump_blocks) { - err = hfsplus_file_extend(attr_file); + err = hfsplus_file_extend(attr_file, false); if (unlikely(err)) { pr_err("failed to extend attributes file\n"); goto end_attr_file_creation; @@ -645,8 +646,7 @@ ssize_t hfsplus_listxattr(struct dentry *dentry, char *buffer, size_t size) struct hfs_find_data fd; u16 key_len = 0; struct hfsplus_attr_key attr_key; - char strbuf[HFSPLUS_ATTR_MAX_STRLEN + - XATTR_MAC_OSX_PREFIX_LEN + 1] = {0}; + char *strbuf; int xattr_name_len; if ((!S_ISREG(inode->i_mode) && @@ -666,6 +666,13 @@ ssize_t hfsplus_listxattr(struct dentry *dentry, char *buffer, size_t size) return err; } + strbuf = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + + XATTR_MAC_OSX_PREFIX_LEN + 1, GFP_KERNEL); + if (!strbuf) { + res = -ENOMEM; + goto out; + } + err = hfsplus_find_attr(inode->i_sb, inode->i_ino, NULL, &fd); if (err) { if (err == -ENOENT) { @@ -692,7 +699,7 @@ ssize_t hfsplus_listxattr(struct dentry *dentry, char *buffer, size_t size) if (be32_to_cpu(attr_key.cnid) != inode->i_ino) goto end_listxattr; - xattr_name_len = HFSPLUS_ATTR_MAX_STRLEN; + xattr_name_len = NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN; if (hfsplus_uni2asc(inode->i_sb, (const struct hfsplus_unistr *)&fd.key->attr.key_name, strbuf, &xattr_name_len)) { @@ -718,6 +725,8 @@ ssize_t hfsplus_listxattr(struct dentry *dentry, char *buffer, size_t size) } end_listxattr: + kfree(strbuf); +out: hfs_find_exit(&fd); return res; } @@ -797,47 +806,55 @@ end_removexattr: static int hfsplus_osx_getxattr(struct dentry *dentry, const char *name, void *buffer, size_t size, int type) { - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + - XATTR_MAC_OSX_PREFIX_LEN + 1] = {0}; - size_t len = strlen(name); + char *xattr_name; + int res; if (!strcmp(name, "")) return -EINVAL; - if (len > HFSPLUS_ATTR_MAX_STRLEN) - return -EOPNOTSUPP; - /* * Don't allow retrieving properly prefixed attributes * by prepending them with "osx." 
*/ if (is_known_namespace(name)) return -EOPNOTSUPP; + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + + XATTR_MAC_OSX_PREFIX_LEN + 1, GFP_KERNEL); + if (!xattr_name) + return -ENOMEM; + strcpy(xattr_name, XATTR_MAC_OSX_PREFIX); + strcpy(xattr_name + XATTR_MAC_OSX_PREFIX_LEN, name); - return hfsplus_getxattr(dentry, xattr_name, buffer, size); + res = hfsplus_getxattr(dentry, xattr_name, buffer, size); + kfree(xattr_name); + return res; } static int hfsplus_osx_setxattr(struct dentry *dentry, const char *name, const void *buffer, size_t size, int flags, int type) { - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + - XATTR_MAC_OSX_PREFIX_LEN + 1] = {0}; - size_t len = strlen(name); + char *xattr_name; + int res; if (!strcmp(name, "")) return -EINVAL; - if (len > HFSPLUS_ATTR_MAX_STRLEN) - return -EOPNOTSUPP; - /* * Don't allow setting properly prefixed attributes * by prepending them with "osx." */ if (is_known_namespace(name)) return -EOPNOTSUPP; + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + + XATTR_MAC_OSX_PREFIX_LEN + 1, GFP_KERNEL); + if (!xattr_name) + return -ENOMEM; + strcpy(xattr_name, XATTR_MAC_OSX_PREFIX); + strcpy(xattr_name + XATTR_MAC_OSX_PREFIX_LEN, name); - return hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); + res = hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); + kfree(xattr_name); + return res; } static size_t hfsplus_osx_listxattr(struct dentry *dentry, char *list, diff --git a/fs/hfsplus/xattr_security.c b/fs/hfsplus/xattr_security.c index 00722765ea79..6ec5e107691f 100644 --- a/fs/hfsplus/xattr_security.c +++ b/fs/hfsplus/xattr_security.c @@ -7,6 +7,8 @@ */ #include <linux/security.h> +#include <linux/nls.h> + #include "hfsplus_fs.h" #include "xattr.h" #include "acl.h" @@ -14,37 +16,43 @@ static int hfsplus_security_getxattr(struct dentry *dentry, const char *name, void *buffer, size_t size, int type) { - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; - size_t len = strlen(name); + char *xattr_name; + int res; if (!strcmp(name, "")) return -EINVAL; - if (len + XATTR_SECURITY_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) - return -EOPNOTSUPP; - + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, + GFP_KERNEL); + if (!xattr_name) + return -ENOMEM; strcpy(xattr_name, XATTR_SECURITY_PREFIX); strcpy(xattr_name + XATTR_SECURITY_PREFIX_LEN, name); - return hfsplus_getxattr(dentry, xattr_name, buffer, size); + res = hfsplus_getxattr(dentry, xattr_name, buffer, size); + kfree(xattr_name); + return res; } static int hfsplus_security_setxattr(struct dentry *dentry, const char *name, const void *buffer, size_t size, int flags, int type) { - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; - size_t len = strlen(name); + char *xattr_name; + int res; if (!strcmp(name, "")) return -EINVAL; - if (len + XATTR_SECURITY_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) - return -EOPNOTSUPP; - + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, + GFP_KERNEL); + if (!xattr_name) + return -ENOMEM; strcpy(xattr_name, XATTR_SECURITY_PREFIX); strcpy(xattr_name + XATTR_SECURITY_PREFIX_LEN, name); - return hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); + res = hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); + kfree(xattr_name); + return res; } static size_t hfsplus_security_listxattr(struct dentry *dentry, char *list, @@ -62,31 +70,30 @@ static int hfsplus_initxattrs(struct inode *inode, void *fs_info) { const struct xattr *xattr; - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] 
= {0}; - size_t xattr_name_len; + char *xattr_name; int err = 0; + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, + GFP_KERNEL); + if (!xattr_name) + return -ENOMEM; for (xattr = xattr_array; xattr->name != NULL; xattr++) { - xattr_name_len = strlen(xattr->name); - if (xattr_name_len == 0) + if (!strcmp(xattr->name, "")) continue; - if (xattr_name_len + XATTR_SECURITY_PREFIX_LEN > - HFSPLUS_ATTR_MAX_STRLEN) - return -EOPNOTSUPP; - strcpy(xattr_name, XATTR_SECURITY_PREFIX); strcpy(xattr_name + XATTR_SECURITY_PREFIX_LEN, xattr->name); memset(xattr_name + - XATTR_SECURITY_PREFIX_LEN + xattr_name_len, 0, 1); + XATTR_SECURITY_PREFIX_LEN + strlen(xattr->name), 0, 1); err = __hfsplus_setxattr(inode, xattr_name, xattr->value, xattr->value_len, 0); if (err) break; } + kfree(xattr_name); return err; } diff --git a/fs/hfsplus/xattr_trusted.c b/fs/hfsplus/xattr_trusted.c index 426cee277542..3c5f27e4746a 100644 --- a/fs/hfsplus/xattr_trusted.c +++ b/fs/hfsplus/xattr_trusted.c @@ -6,43 +6,51 @@ * Handler for trusted extended attributes. */ +#include <linux/nls.h> + #include "hfsplus_fs.h" #include "xattr.h" static int hfsplus_trusted_getxattr(struct dentry *dentry, const char *name, void *buffer, size_t size, int type) { - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; - size_t len = strlen(name); + char *xattr_name; + int res; if (!strcmp(name, "")) return -EINVAL; - if (len + XATTR_TRUSTED_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) - return -EOPNOTSUPP; - + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, + GFP_KERNEL); + if (!xattr_name) + return -ENOMEM; strcpy(xattr_name, XATTR_TRUSTED_PREFIX); strcpy(xattr_name + XATTR_TRUSTED_PREFIX_LEN, name); - return hfsplus_getxattr(dentry, xattr_name, buffer, size); + res = hfsplus_getxattr(dentry, xattr_name, buffer, size); + kfree(xattr_name); + return res; } static int hfsplus_trusted_setxattr(struct dentry *dentry, const char *name, const void *buffer, size_t size, int flags, int type) { - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; - size_t len = strlen(name); + char *xattr_name; + int res; if (!strcmp(name, "")) return -EINVAL; - if (len + XATTR_TRUSTED_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) - return -EOPNOTSUPP; - + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, + GFP_KERNEL); + if (!xattr_name) + return -ENOMEM; strcpy(xattr_name, XATTR_TRUSTED_PREFIX); strcpy(xattr_name + XATTR_TRUSTED_PREFIX_LEN, name); - return hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); + res = hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); + kfree(xattr_name); + return res; } static size_t hfsplus_trusted_listxattr(struct dentry *dentry, char *list, diff --git a/fs/hfsplus/xattr_user.c b/fs/hfsplus/xattr_user.c index e34016561ae0..2b625a538b64 100644 --- a/fs/hfsplus/xattr_user.c +++ b/fs/hfsplus/xattr_user.c @@ -6,43 +6,51 @@ * Handler for user extended attributes. 
*/ +#include <linux/nls.h> + #include "hfsplus_fs.h" #include "xattr.h" static int hfsplus_user_getxattr(struct dentry *dentry, const char *name, void *buffer, size_t size, int type) { - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; - size_t len = strlen(name); + char *xattr_name; + int res; if (!strcmp(name, "")) return -EINVAL; - if (len + XATTR_USER_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) - return -EOPNOTSUPP; - + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, + GFP_KERNEL); + if (!xattr_name) + return -ENOMEM; strcpy(xattr_name, XATTR_USER_PREFIX); strcpy(xattr_name + XATTR_USER_PREFIX_LEN, name); - return hfsplus_getxattr(dentry, xattr_name, buffer, size); + res = hfsplus_getxattr(dentry, xattr_name, buffer, size); + kfree(xattr_name); + return res; } static int hfsplus_user_setxattr(struct dentry *dentry, const char *name, const void *buffer, size_t size, int flags, int type) { - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; - size_t len = strlen(name); + char *xattr_name; + int res; if (!strcmp(name, "")) return -EINVAL; - if (len + XATTR_USER_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) - return -EOPNOTSUPP; - + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, + GFP_KERNEL); + if (!xattr_name) + return -ENOMEM; strcpy(xattr_name, XATTR_USER_PREFIX); strcpy(xattr_name + XATTR_USER_PREFIX_LEN, name); - return hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); + res = hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); + kfree(xattr_name); + return res; } static size_t hfsplus_user_listxattr(struct dentry *dentry, char *list, diff --git a/fs/hpfs/alloc.c b/fs/hpfs/alloc.c index 58b5106186d0..f005046e1591 100644 --- a/fs/hpfs/alloc.c +++ b/fs/hpfs/alloc.c @@ -316,7 +316,7 @@ void hpfs_free_sectors(struct super_block *s, secno sec, unsigned n) struct quad_buffer_head qbh; __le32 *bmp; struct hpfs_sb_info *sbi = hpfs_sb(s); - /*printk("2 - ");*/ + /*pr_info("2 - ");*/ if (!n) return; if (sec < 0x12) { hpfs_error(s, "Trying to free reserved sector %08x", sec); diff --git a/fs/hpfs/buffer.c b/fs/hpfs/buffer.c index 139ef1684d07..8057fe4e6574 100644 --- a/fs/hpfs/buffer.c +++ b/fs/hpfs/buffer.c @@ -55,7 +55,7 @@ void *hpfs_map_sector(struct super_block *s, unsigned secno, struct buffer_head if (bh != NULL) return bh->b_data; else { - printk("HPFS: hpfs_map_sector: read error\n"); + pr_err("%s(): read error\n", __func__); return NULL; } } @@ -76,7 +76,7 @@ void *hpfs_get_sector(struct super_block *s, unsigned secno, struct buffer_head set_buffer_uptodate(bh); return bh->b_data; } else { - printk("HPFS: hpfs_get_sector: getblk failed\n"); + pr_err("%s(): getblk failed\n", __func__); return NULL; } } @@ -93,7 +93,7 @@ void *hpfs_map_4sectors(struct super_block *s, unsigned secno, struct quad_buffe cond_resched(); if (secno & 3) { - printk("HPFS: hpfs_map_4sectors: unaligned read\n"); + pr_err("%s(): unaligned read\n", __func__); return NULL; } @@ -112,7 +112,7 @@ void *hpfs_map_4sectors(struct super_block *s, unsigned secno, struct quad_buffe qbh->data = data = kmalloc(2048, GFP_NOFS); if (!data) { - printk("HPFS: hpfs_map_4sectors: out of memory\n"); + pr_err("%s(): out of memory\n", __func__); goto bail4; } @@ -145,7 +145,7 @@ void *hpfs_get_4sectors(struct super_block *s, unsigned secno, hpfs_lock_assert(s); if (secno & 3) { - printk("HPFS: hpfs_get_4sectors: unaligned read\n"); + pr_err("%s(): unaligned read\n", __func__); return NULL; } @@ -161,7 +161,7 @@ void *hpfs_get_4sectors(struct super_block *s, unsigned secno, } 
if (!(qbh->data = kmalloc(2048, GFP_NOFS))) { - printk("HPFS: hpfs_get_4sectors: out of memory\n"); + pr_err("%s(): out of memory\n", __func__); goto bail4; } return qbh->data; diff --git a/fs/hpfs/dir.c b/fs/hpfs/dir.c index 292b1acb9b81..2a8e07425de0 100644 --- a/fs/hpfs/dir.c +++ b/fs/hpfs/dir.c @@ -36,7 +36,7 @@ static loff_t hpfs_dir_lseek(struct file *filp, loff_t off, int whence) mutex_lock(&i->i_mutex); hpfs_lock(s); - /*printk("dir lseek\n");*/ + /*pr_info("dir lseek\n");*/ if (new_off == 0 || new_off == 1 || new_off == 11 || new_off == 12 || new_off == 13) goto ok; pos = ((loff_t) hpfs_de_as_down_as_possible(s, hpfs_inode->i_dno) << 4) + 1; while (pos != new_off) { @@ -51,7 +51,7 @@ ok: mutex_unlock(&i->i_mutex); return new_off; fail: - /*printk("illegal lseek: %016llx\n", new_off);*/ + /*pr_warn("illegal lseek: %016llx\n", new_off);*/ hpfs_unlock(s); mutex_unlock(&i->i_mutex); return -ESPIPE; @@ -127,7 +127,7 @@ static int hpfs_readdir(struct file *file, struct dir_context *ctx) if (ctx->pos == 12) goto out; if (ctx->pos == 3 || ctx->pos == 4 || ctx->pos == 5) { - printk("HPFS: warning: pos==%d\n",(int)ctx->pos); + pr_err("pos==%d\n", (int)ctx->pos); goto out; } if (ctx->pos == 0) { diff --git a/fs/hpfs/dnode.c b/fs/hpfs/dnode.c index 4364b2a02c5d..f36fc010fccb 100644 --- a/fs/hpfs/dnode.c +++ b/fs/hpfs/dnode.c @@ -17,7 +17,7 @@ static loff_t get_pos(struct dnode *d, struct hpfs_dirent *fde) if (de == fde) return ((loff_t) le32_to_cpu(d->self) << 4) | (loff_t)i; i++; } - printk("HPFS: get_pos: not_found\n"); + pr_info("%s(): not_found\n", __func__); return ((loff_t)le32_to_cpu(d->self) << 4) | (loff_t)1; } @@ -32,7 +32,7 @@ void hpfs_add_pos(struct inode *inode, loff_t *pos) if (hpfs_inode->i_rddir_off[i] == pos) return; if (!(i&0x0f)) { if (!(ppos = kmalloc((i+0x11) * sizeof(loff_t*), GFP_NOFS))) { - printk("HPFS: out of memory for position list\n"); + pr_err("out of memory for position list\n"); return; } if (hpfs_inode->i_rddir_off) { @@ -63,7 +63,8 @@ void hpfs_del_pos(struct inode *inode, loff_t *pos) } return; not_f: - /*printk("HPFS: warning: position pointer %p->%08x not found\n", pos, (int)*pos);*/ + /*pr_warn("position pointer %p->%08x not found\n", + pos, (int)*pos);*/ return; } @@ -92,8 +93,11 @@ static void hpfs_pos_ins(loff_t *p, loff_t d, loff_t c) { if ((*p & ~0x3f) == (d & ~0x3f) && (*p & 0x3f) >= (d & 0x3f)) { int n = (*p & 0x3f) + c; - if (n > 0x3f) printk("HPFS: hpfs_pos_ins: %08x + %d\n", (int)*p, (int)c >> 8); - else *p = (*p & ~0x3f) | n; + if (n > 0x3f) + pr_err("%s(): %08x + %d\n", + __func__, (int)*p, (int)c >> 8); + else + *p = (*p & ~0x3f) | n; } } @@ -101,8 +105,11 @@ static void hpfs_pos_del(loff_t *p, loff_t d, loff_t c) { if ((*p & ~0x3f) == (d & ~0x3f) && (*p & 0x3f) >= (d & 0x3f)) { int n = (*p & 0x3f) - c; - if (n < 1) printk("HPFS: hpfs_pos_ins: %08x - %d\n", (int)*p, (int)c >> 8); - else *p = (*p & ~0x3f) | n; + if (n < 1) + pr_err("%s(): %08x - %d\n", + __func__, (int)*p, (int)c >> 8); + else + *p = (*p & ~0x3f) | n; } } @@ -239,12 +246,12 @@ static int hpfs_add_to_dnode(struct inode *i, dnode_secno dno, struct fnode *fnode; int c1, c2 = 0; if (!(nname = kmalloc(256, GFP_NOFS))) { - printk("HPFS: out of memory, can't add to dnode\n"); + pr_err("out of memory, can't add to dnode\n"); return 1; } go_up: if (namelen >= 256) { - hpfs_error(i->i_sb, "hpfs_add_to_dnode: namelen == %d", namelen); + hpfs_error(i->i_sb, "%s(): namelen == %d", __func__, namelen); kfree(nd); kfree(nname); return 1; @@ -281,7 +288,7 @@ static int 
hpfs_add_to_dnode(struct inode *i, dnode_secno dno, not be any error while splitting dnodes, otherwise the whole directory, not only file we're adding, would be lost. */ - printk("HPFS: out of memory for dnode splitting\n"); + pr_err("out of memory for dnode splitting\n"); hpfs_brelse4(&qbh); kfree(nname); return 1; @@ -597,7 +604,7 @@ static void delete_empty_dnode(struct inode *i, dnode_secno dno) if (!de_next->down) goto endm; ndown = de_down_pointer(de_next); if (!(de_cp = kmalloc(le16_to_cpu(de->length), GFP_NOFS))) { - printk("HPFS: out of memory for dtree balancing\n"); + pr_err("out of memory for dtree balancing\n"); goto endm; } memcpy(de_cp, de, le16_to_cpu(de->length)); @@ -612,7 +619,8 @@ static void delete_empty_dnode(struct inode *i, dnode_secno dno) hpfs_brelse4(&qbh1); } hpfs_add_to_dnode(i, ndown, de_cp->name, de_cp->namelen, de_cp, de_cp->down ? de_down_pointer(de_cp) : 0); - /*printk("UP-TO-DNODE: %08x (ndown = %08x, down = %08x, dno = %08x)\n", up, ndown, down, dno);*/ + /*pr_info("UP-TO-DNODE: %08x (ndown = %08x, down = %08x, dno = %08x)\n", + up, ndown, down, dno);*/ dno = up; kfree(de_cp); goto try_it_again; @@ -637,15 +645,15 @@ static void delete_empty_dnode(struct inode *i, dnode_secno dno) if (!dlp && down) { if (le32_to_cpu(d1->first_free) > 2044) { if (hpfs_sb(i->i_sb)->sb_chk >= 2) { - printk("HPFS: warning: unbalanced dnode tree, see hpfs.txt 4 more info\n"); - printk("HPFS: warning: terminating balancing operation\n"); + pr_err("unbalanced dnode tree, see hpfs.txt 4 more info\n"); + pr_err("terminating balancing operation\n"); } hpfs_brelse4(&qbh1); goto endm; } if (hpfs_sb(i->i_sb)->sb_chk >= 2) { - printk("HPFS: warning: unbalanced dnode tree, see hpfs.txt 4 more info\n"); - printk("HPFS: warning: goin'on\n"); + pr_err("unbalanced dnode tree, see hpfs.txt 4 more info\n"); + pr_err("goin'on\n"); } le16_add_cpu(&del->length, 4); del->down = 1; @@ -659,7 +667,7 @@ static void delete_empty_dnode(struct inode *i, dnode_secno dno) *(__le32 *) ((void *) del + le16_to_cpu(del->length) - 4) = cpu_to_le32(down); } else goto endm; if (!(de_cp = kmalloc(le16_to_cpu(de_prev->length), GFP_NOFS))) { - printk("HPFS: out of memory for dtree balancing\n"); + pr_err("out of memory for dtree balancing\n"); hpfs_brelse4(&qbh1); goto endm; } @@ -1000,7 +1008,7 @@ struct hpfs_dirent *map_fnode_dirent(struct super_block *s, fnode_secno fno, int d1, d2 = 0; name1 = f->name; if (!(name2 = kmalloc(256, GFP_NOFS))) { - printk("HPFS: out of memory, can't map dirent\n"); + pr_err("out of memory, can't map dirent\n"); return NULL; } if (f->len <= 15) diff --git a/fs/hpfs/ea.c b/fs/hpfs/ea.c index bcaafcd2666a..ce3f98ba993a 100644 --- a/fs/hpfs/ea.c +++ b/fs/hpfs/ea.c @@ -51,7 +51,7 @@ static char *get_indirect_ea(struct super_block *s, int ano, secno a, int size) { char *ret; if (!(ret = kmalloc(size + 1, GFP_NOFS))) { - printk("HPFS: out of memory for EA\n"); + pr_err("out of memory for EA\n"); return NULL; } if (hpfs_ea_read(s, a, ano, 0, size, ret)) { @@ -139,7 +139,7 @@ char *hpfs_get_ea(struct super_block *s, struct fnode *fnode, char *key, int *si if (ea_indirect(ea)) return get_indirect_ea(s, ea_in_anode(ea), ea_sec(ea), *size = ea_len(ea)); if (!(ret = kmalloc((*size = ea_valuelen(ea)) + 1, GFP_NOFS))) { - printk("HPFS: out of memory for EA\n"); + pr_err("out of memory for EA\n"); return NULL; } memcpy(ret, ea_data(ea), ea_valuelen(ea)); @@ -165,7 +165,7 @@ char *hpfs_get_ea(struct super_block *s, struct fnode *fnode, char *key, int *si if (ea_indirect(ea)) return 
get_indirect_ea(s, ea_in_anode(ea), ea_sec(ea), *size = ea_len(ea)); if (!(ret = kmalloc((*size = ea_valuelen(ea)) + 1, GFP_NOFS))) { - printk("HPFS: out of memory for EA\n"); + pr_err("out of memory for EA\n"); return NULL; } if (hpfs_ea_read(s, a, ano, pos + 4 + ea->namelen + 1, ea_valuelen(ea), ret)) { diff --git a/fs/hpfs/hpfs_fn.h b/fs/hpfs/hpfs_fn.h index 3ba49c080e42..b63b75fa00e7 100644 --- a/fs/hpfs/hpfs_fn.h +++ b/fs/hpfs/hpfs_fn.h @@ -8,6 +8,11 @@ //#define DBG //#define DEBUG_LOCKS +#ifdef pr_fmt +#undef pr_fmt +#endif + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/mutex.h> #include <linux/pagemap.h> diff --git a/fs/hpfs/inode.c b/fs/hpfs/inode.c index 50a427313835..7ce4b74234a1 100644 --- a/fs/hpfs/inode.c +++ b/fs/hpfs/inode.c @@ -183,7 +183,8 @@ void hpfs_write_inode(struct inode *i) struct inode *parent; if (i->i_ino == hpfs_sb(i->i_sb)->sb_root) return; if (hpfs_inode->i_rddir_off && !atomic_read(&i->i_count)) { - if (*hpfs_inode->i_rddir_off) printk("HPFS: write_inode: some position still there\n"); + if (*hpfs_inode->i_rddir_off) + pr_err("write_inode: some position still there\n"); kfree(hpfs_inode->i_rddir_off); hpfs_inode->i_rddir_off = NULL; } diff --git a/fs/hpfs/map.c b/fs/hpfs/map.c index 3aa66ae1031e..442770edcdc7 100644 --- a/fs/hpfs/map.c +++ b/fs/hpfs/map.c @@ -65,12 +65,13 @@ unsigned char *hpfs_load_code_page(struct super_block *s, secno cps) struct code_page_directory *cp = hpfs_map_sector(s, cps, &bh, 0); if (!cp) return NULL; if (le32_to_cpu(cp->magic) != CP_DIR_MAGIC) { - printk("HPFS: Code page directory magic doesn't match (magic = %08x)\n", le32_to_cpu(cp->magic)); + pr_err("Code page directory magic doesn't match (magic = %08x)\n", + le32_to_cpu(cp->magic)); brelse(bh); return NULL; } if (!le32_to_cpu(cp->n_code_pages)) { - printk("HPFS: n_code_pages == 0\n"); + pr_err("n_code_pages == 0\n"); brelse(bh); return NULL; } @@ -79,19 +80,19 @@ unsigned char *hpfs_load_code_page(struct super_block *s, secno cps) brelse(bh); if (cpi >= 3) { - printk("HPFS: Code page index out of array\n"); + pr_err("Code page index out of array\n"); return NULL; } if (!(cpd = hpfs_map_sector(s, cpds, &bh, 0))) return NULL; if (le16_to_cpu(cpd->offs[cpi]) > 0x178) { - printk("HPFS: Code page index out of sector\n"); + pr_err("Code page index out of sector\n"); brelse(bh); return NULL; } ptr = (unsigned char *)cpd + le16_to_cpu(cpd->offs[cpi]) + 6; if (!(cp_table = kmalloc(256, GFP_KERNEL))) { - printk("HPFS: out of memory for code page table\n"); + pr_err("out of memory for code page table\n"); brelse(bh); return NULL; } @@ -114,7 +115,7 @@ __le32 *hpfs_load_bitmap_directory(struct super_block *s, secno bmp) int i; __le32 *b; if (!(b = kmalloc(n * 512, GFP_KERNEL))) { - printk("HPFS: can't allocate memory for bitmap directory\n"); + pr_err("can't allocate memory for bitmap directory\n"); return NULL; } for (i=0;i<n;i++) { @@ -281,7 +282,9 @@ struct dnode *hpfs_map_dnode(struct super_block *s, unsigned secno, hpfs_error(s, "dnode %08x does not end with \\377 entry", secno); goto bail; } - if (b == 3) printk("HPFS: warning: unbalanced dnode tree, dnode %08x; see hpfs.txt 4 more info\n", secno); + if (b == 3) + pr_err("unbalanced dnode tree, dnode %08x; see hpfs.txt 4 more info\n", + secno); } return dnode; bail: diff --git a/fs/hpfs/name.c b/fs/hpfs/name.c index 9acdf338def0..b00d396d22c6 100644 --- a/fs/hpfs/name.c +++ b/fs/hpfs/name.c @@ -56,14 +56,15 @@ unsigned char *hpfs_translate_name(struct super_block *s, unsigned char *from, unsigned char *to; int i; 
if (hpfs_sb(s)->sb_chk >= 2) if (hpfs_is_name_long(from, len) != lng) { - printk("HPFS: Long name flag mismatch - name "); - for (i=0; i<len; i++) printk("%c", from[i]); - printk(" misidentified as %s.\n", lng ? "short" : "long"); - printk("HPFS: It's nothing serious. It could happen because of bug in OS/2.\nHPFS: Set checks=normal to disable this message.\n"); + pr_err("Long name flag mismatch - name "); + for (i = 0; i < len; i++) + pr_cont("%c", from[i]); + pr_cont(" misidentified as %s.\n", lng ? "short" : "long"); + pr_err("It's nothing serious. It could happen because of bug in OS/2.\nSet checks=normal to disable this message.\n"); } if (!lc) return from; if (!(to = kmalloc(len, GFP_KERNEL))) { - printk("HPFS: can't allocate memory for name conversion buffer\n"); + pr_err("can't allocate memory for name conversion buffer\n"); return from; } for (i = 0; i < len; i++) to[i] = locase(hpfs_sb(s)->sb_cp_table,from[i]); diff --git a/fs/hpfs/namei.c b/fs/hpfs/namei.c index 1b39afdd86fd..bdbc2c3080a4 100644 --- a/fs/hpfs/namei.c +++ b/fs/hpfs/namei.c @@ -404,7 +404,7 @@ again: d_rehash(dentry); } else { struct iattr newattrs; - /*printk("HPFS: truncating file before delete.\n");*/ + /*pr_info("truncating file before delete.\n");*/ newattrs.ia_size = 0; newattrs.ia_valid = ATTR_SIZE | ATTR_CTIME; err = notify_change(dentry, &newattrs, NULL); diff --git a/fs/hpfs/super.c b/fs/hpfs/super.c index fe3463a43236..7cd00d3a7c9b 100644 --- a/fs/hpfs/super.c +++ b/fs/hpfs/super.c @@ -62,22 +62,26 @@ void hpfs_error(struct super_block *s, const char *fmt, ...) vsnprintf(err_buf, sizeof(err_buf), fmt, args); va_end(args); - printk("HPFS: filesystem error: %s", err_buf); + pr_err("filesystem error: %s", err_buf); if (!hpfs_sb(s)->sb_was_error) { if (hpfs_sb(s)->sb_err == 2) { - printk("; crashing the system because you wanted it\n"); + pr_cont("; crashing the system because you wanted it\n"); mark_dirty(s, 0); panic("HPFS panic"); } else if (hpfs_sb(s)->sb_err == 1) { - if (s->s_flags & MS_RDONLY) printk("; already mounted read-only\n"); + if (s->s_flags & MS_RDONLY) + pr_cont("; already mounted read-only\n"); else { - printk("; remounting read-only\n"); + pr_cont("; remounting read-only\n"); mark_dirty(s, 0); s->s_flags |= MS_RDONLY; } - } else if (s->s_flags & MS_RDONLY) printk("; going on - but anything won't be destroyed because it's read-only\n"); - else printk("; corrupted filesystem mounted read/write - your computer will explode within 20 seconds ... but you wanted it so!\n"); - } else printk("\n"); + } else if (s->s_flags & MS_RDONLY) + pr_cont("; going on - but anything won't be destroyed because it's read-only\n"); + else + pr_cont("; corrupted filesystem mounted read/write - your computer will explode within 20 seconds ... 
but you wanted it so!\n"); + } else + pr_cont("\n"); hpfs_sb(s)->sb_was_error = 1; } @@ -292,7 +296,7 @@ static int parse_opts(char *opts, kuid_t *uid, kgid_t *gid, umode_t *umask, if (!opts) return 1; - /*printk("Parsing opts: '%s'\n",opts);*/ + /*pr_info("Parsing opts: '%s'\n",opts);*/ while ((p = strsep(&opts, ",")) != NULL) { substring_t args[MAX_OPT_ARGS]; @@ -387,7 +391,7 @@ static int parse_opts(char *opts, kuid_t *uid, kgid_t *gid, umode_t *umask, static inline void hpfs_help(void) { - printk("\n\ + pr_info("\n\ HPFS filesystem options:\n\ help do not mount and display this text\n\ uid=xxx set uid of files that don't have uid specified in eas\n\ @@ -434,7 +438,7 @@ static int hpfs_remount_fs(struct super_block *s, int *flags, char *data) if (!(o = parse_opts(data, &uid, &gid, &umask, &lowercase, &eas, &chk, &errs, &chkdsk, &timeshift))) { - printk("HPFS: bad mount options.\n"); + pr_err("bad mount options.\n"); goto out_err; } if (o == 2) { @@ -442,7 +446,7 @@ static int hpfs_remount_fs(struct super_block *s, int *flags, char *data) goto out_err; } if (timeshift != sbi->sb_timeshift) { - printk("HPFS: timeshift can't be changed using remount.\n"); + pr_err("timeshift can't be changed using remount.\n"); goto out_err; } @@ -523,7 +527,7 @@ static int hpfs_fill_super(struct super_block *s, void *options, int silent) if (!(o = parse_opts(options, &uid, &gid, &umask, &lowercase, &eas, &chk, &errs, &chkdsk, &timeshift))) { - printk("HPFS: bad mount options.\n"); + pr_err("bad mount options.\n"); goto bail0; } if (o==2) { @@ -542,16 +546,17 @@ static int hpfs_fill_super(struct super_block *s, void *options, int silent) if (/*le16_to_cpu(bootblock->magic) != BB_MAGIC ||*/ le32_to_cpu(superblock->magic) != SB_MAGIC || le32_to_cpu(spareblock->magic) != SP_MAGIC) { - if (!silent) printk("HPFS: Bad magic ... probably not HPFS\n"); + if (!silent) + pr_err("Bad magic ... probably not HPFS\n"); goto bail4; } /* Check version */ if (!(s->s_flags & MS_RDONLY) && superblock->funcversion != 2 && superblock->funcversion != 3) { - printk("HPFS: Bad version %d,%d. Mount readonly to go around\n", + pr_err("Bad version %d,%d. 
Mount readonly to go around\n", (int)superblock->version, (int)superblock->funcversion); - printk("HPFS: please try recent version of HPFS driver at http://artax.karlin.mff.cuni.cz/~mikulas/vyplody/hpfs/index-e.cgi and if it still can't understand this format, contact author - mikulas@artax.karlin.mff.cuni.cz\n"); + pr_err("please try recent version of HPFS driver at http://artax.karlin.mff.cuni.cz/~mikulas/vyplody/hpfs/index-e.cgi and if it still can't understand this format, contact author - mikulas@artax.karlin.mff.cuni.cz\n"); goto bail4; } @@ -597,7 +602,7 @@ static int hpfs_fill_super(struct super_block *s, void *options, int silent) /* Check for general fs errors*/ if (spareblock->dirty && !spareblock->old_wrote) { if (errs == 2) { - printk("HPFS: Improperly stopped, not mounted\n"); + pr_err("Improperly stopped, not mounted\n"); goto bail4; } hpfs_error(s, "improperly stopped"); @@ -611,22 +616,25 @@ static int hpfs_fill_super(struct super_block *s, void *options, int silent) if (spareblock->hotfixes_used || spareblock->n_spares_used) { if (errs >= 2) { - printk("HPFS: Hotfixes not supported here, try chkdsk\n"); + pr_err("Hotfixes not supported here, try chkdsk\n"); mark_dirty(s, 0); goto bail4; } hpfs_error(s, "hotfixes not supported here, try chkdsk"); - if (errs == 0) printk("HPFS: Proceeding, but your filesystem will be probably corrupted by this driver...\n"); - else printk("HPFS: This driver may read bad files or crash when operating on disk with hotfixes.\n"); + if (errs == 0) + pr_err("Proceeding, but your filesystem will be probably corrupted by this driver...\n"); + else + pr_err("This driver may read bad files or crash when operating on disk with hotfixes.\n"); } if (le32_to_cpu(spareblock->n_dnode_spares) != le32_to_cpu(spareblock->n_dnode_spares_free)) { if (errs >= 2) { - printk("HPFS: Spare dnodes used, try chkdsk\n"); + pr_err("Spare dnodes used, try chkdsk\n"); mark_dirty(s, 0); goto bail4; } hpfs_error(s, "warning: spare dnodes used, try chkdsk"); - if (errs == 0) printk("HPFS: Proceeding, but your filesystem could be corrupted if you delete files or directories\n"); + if (errs == 0) + pr_err("Proceeding, but your filesystem could be corrupted if you delete files or directories\n"); } if (chk) { unsigned a; @@ -645,12 +653,13 @@ static int hpfs_fill_super(struct super_block *s, void *options, int silent) goto bail4; } sbi->sb_dirband_size = a; - } else printk("HPFS: You really don't want any checks? You are crazy...\n"); + } else + pr_err("You really don't want any checks? 
You are crazy...\n"); /* Load code page table */ if (le32_to_cpu(spareblock->n_code_pages)) if (!(sbi->sb_cp_table = hpfs_load_code_page(s, le32_to_cpu(spareblock->code_page_dir)))) - printk("HPFS: Warning: code page support is disabled\n"); + pr_err("code page support is disabled\n"); brelse(bh2); brelse(bh1); diff --git a/fs/inode.c b/fs/inode.c index f96d2a6f88cc..2feb9b69f1be 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -105,7 +105,7 @@ long get_nr_dirty_inodes(void) * Handle nr_inode sysctl */ #ifdef CONFIG_SYSCTL -int proc_nr_inodes(ctl_table *table, int write, +int proc_nr_inodes(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { inodes_stat.nr_inodes = get_nr_inodes(); diff --git a/fs/jffs2/background.c b/fs/jffs2/background.c index 2b60ce1996aa..bb9cebc9ca8a 100644 --- a/fs/jffs2/background.c +++ b/fs/jffs2/background.c @@ -75,10 +75,13 @@ void jffs2_stop_garbage_collect_thread(struct jffs2_sb_info *c) static int jffs2_garbage_collect_thread(void *_c) { struct jffs2_sb_info *c = _c; + sigset_t hupmask; + siginitset(&hupmask, sigmask(SIGHUP)); allow_signal(SIGKILL); allow_signal(SIGSTOP); allow_signal(SIGCONT); + allow_signal(SIGHUP); c->gc_task = current; complete(&c->gc_thread_start); @@ -87,7 +90,7 @@ static int jffs2_garbage_collect_thread(void *_c) set_freezable(); for (;;) { - allow_signal(SIGHUP); + sigprocmask(SIG_UNBLOCK, &hupmask, NULL); again: spin_lock(&c->erase_completion_lock); if (!jffs2_thread_should_wake(c)) { @@ -95,10 +98,9 @@ static int jffs2_garbage_collect_thread(void *_c) spin_unlock(&c->erase_completion_lock); jffs2_dbg(1, "%s(): sleeping...\n", __func__); schedule(); - } else + } else { spin_unlock(&c->erase_completion_lock); - - + } /* Problem - immediately after bootup, the GCD spends a lot * of time in places like jffs2_kill_fragtree(); so much so * that userspace processes (like gdm and X) are starved @@ -150,7 +152,7 @@ static int jffs2_garbage_collect_thread(void *_c) } } /* We don't want SIGHUP to interrupt us. STOP and KILL are OK though. */ - disallow_signal(SIGHUP); + sigprocmask(SIG_BLOCK, &hupmask, NULL); jffs2_dbg(1, "%s(): pass\n", __func__); if (jffs2_garbage_collect_pass(c) == -ENOSPC) { diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index 6bf06a07f3e0..de051cb1f553 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -436,7 +436,7 @@ EXPORT_SYMBOL_GPL(lockd_down); * Sysctl parameters (same as module parameters, different interface). 
*/ -static ctl_table nlm_sysctls[] = { +static struct ctl_table nlm_sysctls[] = { { .procname = "nlm_grace_period", .data = &nlm_grace_period, @@ -490,7 +490,7 @@ static ctl_table nlm_sysctls[] = { { } }; -static ctl_table nlm_sysctl_dir[] = { +static struct ctl_table nlm_sysctl_dir[] = { { .procname = "nfs", .mode = 0555, @@ -499,7 +499,7 @@ static ctl_table nlm_sysctl_dir[] = { { } }; -static ctl_table nlm_sysctl_root[] = { +static struct ctl_table nlm_sysctl_root[] = { { .procname = "fs", .mode = 0555, diff --git a/fs/nfs/nfs4sysctl.c b/fs/nfs/nfs4sysctl.c index 2628d921b7e3..b6ebe7e445f6 100644 --- a/fs/nfs/nfs4sysctl.c +++ b/fs/nfs/nfs4sysctl.c @@ -16,7 +16,7 @@ static const int nfs_set_port_min = 0; static const int nfs_set_port_max = 65535; static struct ctl_table_header *nfs4_callback_sysctl_table; -static ctl_table nfs4_cb_sysctls[] = { +static struct ctl_table nfs4_cb_sysctls[] = { { .procname = "nfs_callback_tcpport", .data = &nfs_callback_set_tcpport, @@ -36,7 +36,7 @@ static ctl_table nfs4_cb_sysctls[] = { { } }; -static ctl_table nfs4_cb_sysctl_dir[] = { +static struct ctl_table nfs4_cb_sysctl_dir[] = { { .procname = "nfs", .mode = 0555, @@ -45,7 +45,7 @@ static ctl_table nfs4_cb_sysctl_dir[] = { { } }; -static ctl_table nfs4_cb_sysctl_root[] = { +static struct ctl_table nfs4_cb_sysctl_root[] = { { .procname = "fs", .mode = 0555, diff --git a/fs/nfs/sysctl.c b/fs/nfs/sysctl.c index 6b3f2535a3ec..bb6ed810fa6f 100644 --- a/fs/nfs/sysctl.c +++ b/fs/nfs/sysctl.c @@ -13,7 +13,7 @@ static struct ctl_table_header *nfs_callback_sysctl_table; -static ctl_table nfs_cb_sysctls[] = { +static struct ctl_table nfs_cb_sysctls[] = { { .procname = "nfs_mountpoint_timeout", .data = &nfs_mountpoint_expiry_timeout, @@ -31,7 +31,7 @@ static ctl_table nfs_cb_sysctls[] = { { } }; -static ctl_table nfs_cb_sysctl_dir[] = { +static struct ctl_table nfs_cb_sysctl_dir[] = { { .procname = "nfs", .mode = 0555, @@ -40,7 +40,7 @@ static ctl_table nfs_cb_sysctl_dir[] = { { } }; -static ctl_table nfs_cb_sysctl_root[] = { +static struct ctl_table nfs_cb_sysctl_root[] = { { .procname = "fs", .mode = 0555, diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c index 78a2ca3966c3..cc423a30a0c8 100644 --- a/fs/notify/inotify/inotify_user.c +++ b/fs/notify/inotify/inotify_user.c @@ -57,7 +57,7 @@ static struct kmem_cache *inotify_inode_mark_cachep __read_mostly; static int zero; -ctl_table inotify_table[] = { +struct ctl_table inotify_table[] = { { .procname = "max_user_instances", .data = &inotify_max_user_instances, diff --git a/fs/ntfs/sysctl.c b/fs/ntfs/sysctl.c index 1927170a35ce..a503156ec15f 100644 --- a/fs/ntfs/sysctl.c +++ b/fs/ntfs/sysctl.c @@ -34,7 +34,7 @@ #include "debug.h" /* Definition of the ntfs sysctl. */ -static ctl_table ntfs_sysctls[] = { +static struct ctl_table ntfs_sysctls[] = { { .procname = "ntfs-debug", .data = &debug_msgs, /* Data pointer and size. */ @@ -46,7 +46,7 @@ static ctl_table ntfs_sysctls[] = { }; /* Define the parent directory /proc/sys/fs. 
*/ -static ctl_table sysctls_root[] = { +static struct ctl_table sysctls_root[] = { { .procname = "fs", .mode = 0555, diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 48cbe4c0b2a5..cfa63ee92c96 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -1361,7 +1361,7 @@ static int gather_hugetbl_stats(pte_t *pte, unsigned long hmask, struct numa_maps *md; struct page *page; - if (pte_none(*pte)) + if (!pte_present(*pte)) return 0; page = pte_page(*pte); @@ -1418,10 +1418,10 @@ static int show_numa_map(struct seq_file *m, void *v, int is_pid) seq_printf(m, "%08lx %s", vma->vm_start, buffer); if (file) { - seq_printf(m, " file="); + seq_puts(m, " file="); seq_path(m, &file->f_path, "\n\t= "); } else if (vma->vm_start <= mm->brk && vma->vm_end >= mm->start_brk) { - seq_printf(m, " heap"); + seq_puts(m, " heap"); } else { pid_t tid = vm_is_stack(task, vma, is_pid); if (tid != 0) { @@ -1431,14 +1431,14 @@ static int show_numa_map(struct seq_file *m, void *v, int is_pid) */ if (!is_pid || (vma->vm_start <= mm->start_stack && vma->vm_end >= mm->start_stack)) - seq_printf(m, " stack"); + seq_puts(m, " stack"); else seq_printf(m, " stack:%d", tid); } } if (is_vm_hugetlb_page(vma)) - seq_printf(m, " huge"); + seq_puts(m, " huge"); walk_page_range(vma->vm_start, vma->vm_end, &walk); diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c index 6a8e785b29da..382aa890e228 100644 --- a/fs/proc/vmcore.c +++ b/fs/proc/vmcore.c @@ -42,7 +42,7 @@ static size_t elfnotes_sz; /* Total size of vmcore file. */ static u64 vmcore_size; -static struct proc_dir_entry *proc_vmcore = NULL; +static struct proc_dir_entry *proc_vmcore; /* * Returns > 0 for RAM pages, 0 for non-RAM pages, < 0 on error diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c index 46d269e38706..0a9b72cdfeca 100644 --- a/fs/pstore/platform.c +++ b/fs/pstore/platform.c @@ -18,6 +18,8 @@ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ +#define pr_fmt(fmt) "pstore: " fmt + #include <linux/atomic.h> #include <linux/types.h> #include <linux/errno.h> @@ -224,14 +226,12 @@ static void allocate_buf_for_compression(void) zlib_inflate_workspacesize()); stream.workspace = kmalloc(size, GFP_KERNEL); if (!stream.workspace) { - pr_err("pstore: No memory for compression workspace; " - "skipping compression\n"); + pr_err("No memory for compression workspace; skipping compression\n"); kfree(big_oops_buf); big_oops_buf = NULL; } } else { - pr_err("No memory for uncompressed data; " - "skipping compression\n"); + pr_err("No memory for uncompressed data; skipping compression\n"); stream.workspace = NULL; } @@ -455,8 +455,7 @@ int pstore_register(struct pstore_info *psi) add_timer(&pstore_timer); } - pr_info("pstore: Registered %s as persistent store backend\n", - psi->name); + pr_info("Registered %s as persistent store backend\n", psi->name); return 0; } @@ -502,8 +501,8 @@ void pstore_get_records(int quiet) size = unzipped_len; compressed = false; } else { - pr_err("pstore: decompression failed;" - "returned %d\n", unzipped_len); + pr_err("decompression failed;returned %d\n", + unzipped_len); compressed = true; } } @@ -524,8 +523,8 @@ out: mutex_unlock(&psi->read_mutex); if (failed) - printk(KERN_WARNING "pstore: failed to load %d record(s) from '%s'\n", - failed, psi->name); + pr_warn("failed to load %d record(s) from '%s'\n", + failed, psi->name); } static void pstore_dowork(struct work_struct *work) diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c index ff7e3d4df5a1..34a1e5aa848c 100644 --- a/fs/pstore/ram_core.c 
+++ b/fs/pstore/ram_core.c @@ -12,6 +12,8 @@ * */ +#define pr_fmt(fmt) "persistent_ram: " fmt + #include <linux/device.h> #include <linux/err.h> #include <linux/errno.h> @@ -205,12 +207,10 @@ static void persistent_ram_ecc_old(struct persistent_ram_zone *prz) size = buffer->data + prz->buffer_size - block; numerr = persistent_ram_decode_rs8(prz, block, size, par); if (numerr > 0) { - pr_devel("persistent_ram: error in block %p, %d\n", - block, numerr); + pr_devel("error in block %p, %d\n", block, numerr); prz->corrected_bytes += numerr; } else if (numerr < 0) { - pr_devel("persistent_ram: uncorrectable error in block %p\n", - block); + pr_devel("uncorrectable error in block %p\n", block); prz->bad_blocks++; } block += prz->ecc_info.block_size; @@ -257,7 +257,7 @@ static int persistent_ram_init_ecc(struct persistent_ram_zone *prz, prz->rs_decoder = init_rs(prz->ecc_info.symsize, prz->ecc_info.poly, 0, 1, prz->ecc_info.ecc_size); if (prz->rs_decoder == NULL) { - pr_info("persistent_ram: init_rs failed\n"); + pr_info("init_rs failed\n"); return -EINVAL; } @@ -267,10 +267,10 @@ static int persistent_ram_init_ecc(struct persistent_ram_zone *prz, numerr = persistent_ram_decode_rs8(prz, buffer, sizeof(*buffer), prz->par_header); if (numerr > 0) { - pr_info("persistent_ram: error in header, %d\n", numerr); + pr_info("error in header, %d\n", numerr); prz->corrected_bytes += numerr; } else if (numerr < 0) { - pr_info("persistent_ram: uncorrectable error in header\n"); + pr_info("uncorrectable error in header\n"); prz->bad_blocks++; } @@ -317,7 +317,7 @@ void persistent_ram_save_old(struct persistent_ram_zone *prz) prz->old_log = kmalloc(size, GFP_KERNEL); } if (!prz->old_log) { - pr_err("persistent_ram: failed to allocate buffer\n"); + pr_err("failed to allocate buffer\n"); return; } @@ -396,8 +396,8 @@ static void *persistent_ram_vmap(phys_addr_t start, size_t size) pages = kmalloc(sizeof(struct page *) * page_count, GFP_KERNEL); if (!pages) { - pr_err("%s: Failed to allocate array for %u pages\n", __func__, - page_count); + pr_err("%s: Failed to allocate array for %u pages\n", + __func__, page_count); return NULL; } @@ -462,19 +462,17 @@ static int persistent_ram_post_init(struct persistent_ram_zone *prz, u32 sig, if (prz->buffer->sig == sig) { if (buffer_size(prz) > prz->buffer_size || buffer_start(prz) > buffer_size(prz)) - pr_info("persistent_ram: found existing invalid buffer," - " size %zu, start %zu\n", - buffer_size(prz), buffer_start(prz)); + pr_info("found existing invalid buffer, size %zu, start %zu\n", + buffer_size(prz), buffer_start(prz)); else { - pr_debug("persistent_ram: found existing buffer," - " size %zu, start %zu\n", - buffer_size(prz), buffer_start(prz)); + pr_debug("found existing buffer, size %zu, start %zu\n", + buffer_size(prz), buffer_start(prz)); persistent_ram_save_old(prz); return 0; } } else { - pr_debug("persistent_ram: no valid data in buffer" - " (sig = 0x%08x)\n", prz->buffer->sig); + pr_debug("no valid data in buffer (sig = 0x%08x)\n", + prz->buffer->sig); } prz->buffer->sig = sig; @@ -509,7 +507,7 @@ struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size, prz = kzalloc(sizeof(struct persistent_ram_zone), GFP_KERNEL); if (!prz) { - pr_err("persistent_ram: failed to allocate persistent ram zone\n"); + pr_err("failed to allocate persistent ram zone\n"); goto err; } diff --git a/fs/reiserfs/bitmap.c b/fs/reiserfs/bitmap.c index dc9a6829f7c6..1bcffeab713c 100644 --- a/fs/reiserfs/bitmap.c +++ b/fs/reiserfs/bitmap.c @@ -142,7 +142,6 @@ static 
int scan_bitmap_block(struct reiserfs_transaction_handle *th, int org = *beg; BUG_ON(!th->t_trans_id); - RFALSE(bmap_n >= reiserfs_bmap_count(s), "Bitmap %u is out of " "range (0..%u)", bmap_n, reiserfs_bmap_count(s) - 1); PROC_INFO_INC(s, scan_bitmap.bmap); @@ -321,7 +320,6 @@ static int scan_bitmap(struct reiserfs_transaction_handle *th, unsigned int off_max = s->s_blocksize << 3; BUG_ON(!th->t_trans_id); - PROC_INFO_INC(s, scan_bitmap.call); if (SB_FREE_BLOCKS(s) <= 0) return 0; // No point in looking for more free blocks @@ -388,9 +386,7 @@ static void _reiserfs_free_block(struct reiserfs_transaction_handle *th, unsigned int nr, offset; BUG_ON(!th->t_trans_id); - PROC_INFO_INC(s, free_block); - rs = SB_DISK_SUPER_BLOCK(s); sbh = SB_BUFFER_WITH_SB(s); apbi = SB_AP_BITMAP(s); @@ -435,8 +431,8 @@ void reiserfs_free_block(struct reiserfs_transaction_handle *th, int for_unformatted) { struct super_block *s = th->t_super; - BUG_ON(!th->t_trans_id); + BUG_ON(!th->t_trans_id); RFALSE(!s, "vs-4061: trying to free block on nonexistent device"); if (!is_reusable(s, block, 1)) return; @@ -471,6 +467,7 @@ static void __discard_prealloc(struct reiserfs_transaction_handle *th, unsigned long save = ei->i_prealloc_block; int dirty = 0; struct inode *inode = &ei->vfs_inode; + BUG_ON(!th->t_trans_id); #ifdef CONFIG_REISERFS_CHECK if (ei->i_prealloc_count < 0) @@ -494,6 +491,7 @@ void reiserfs_discard_prealloc(struct reiserfs_transaction_handle *th, struct inode *inode) { struct reiserfs_inode_info *ei = REISERFS_I(inode); + BUG_ON(!th->t_trans_id); if (ei->i_prealloc_count) __discard_prealloc(th, ei); @@ -504,7 +502,6 @@ void reiserfs_discard_all_prealloc(struct reiserfs_transaction_handle *th) struct list_head *plist = &SB_JOURNAL(th->t_super)->j_prealloc_list; BUG_ON(!th->t_trans_id); - while (!list_empty(plist)) { struct reiserfs_inode_info *ei; ei = list_entry(plist->next, struct reiserfs_inode_info, @@ -562,7 +559,7 @@ int reiserfs_parse_alloc_options(struct super_block *s, char *options) if (!strcmp(this_char, "displacing_new_packing_localities")) { SET_OPTION(displacing_new_packing_localities); continue; - }; + } if (!strcmp(this_char, "old_hashed_relocation")) { SET_OPTION(old_hashed_relocation); @@ -729,6 +726,7 @@ void show_alloc_options(struct seq_file *seq, struct super_block *s) static inline void new_hashed_relocation(reiserfs_blocknr_hint_t * hint) { char *hash_in; + if (hint->formatted_node) { hash_in = (char *)&hint->key.k_dir_id; } else { @@ -757,6 +755,7 @@ static void dirid_groups(reiserfs_blocknr_hint_t * hint) __u32 dirid = 0; int bm = 0; struct super_block *sb = hint->th->t_super; + if (hint->inode) dirid = le32_to_cpu(INODE_PKEY(hint->inode)->k_dir_id); else if (hint->formatted_node) diff --git a/fs/reiserfs/stree.c b/fs/reiserfs/stree.c index b14706a05d52..615cd9ab7940 100644 --- a/fs/reiserfs/stree.c +++ b/fs/reiserfs/stree.c @@ -228,10 +228,10 @@ const struct reiserfs_key MIN_KEY = { 0, 0, {{0, 0},} }; /* Maximal possible key. It is never in the tree. 
*/ static const struct reiserfs_key MAX_KEY = { - __constant_cpu_to_le32(0xffffffff), - __constant_cpu_to_le32(0xffffffff), - {{__constant_cpu_to_le32(0xffffffff), - __constant_cpu_to_le32(0xffffffff)},} + cpu_to_le32(0xffffffff), + cpu_to_le32(0xffffffff), + {{cpu_to_le32(0xffffffff), + cpu_to_le32(0xffffffff)},} }; /* Get delimiting key of the buffer by looking for it in the buffers in the path, starting from the bottom diff --git a/fs/ufs/balloc.c b/fs/ufs/balloc.c index 0ab1de4b39a5..7bc20809c99e 100644 --- a/fs/ufs/balloc.c +++ b/fs/ufs/balloc.c @@ -24,7 +24,7 @@ #define INVBLOCK ((u64)-1L) -static u64 ufs_add_fragments(struct inode *, u64, unsigned, unsigned, int *); +static u64 ufs_add_fragments(struct inode *, u64, unsigned, unsigned); static u64 ufs_alloc_fragments(struct inode *, unsigned, u64, unsigned, int *); static u64 ufs_alloccg_block(struct inode *, struct ufs_cg_private_info *, u64, int *); static u64 ufs_bitmap_search (struct super_block *, struct ufs_cg_private_info *, u64, unsigned); @@ -52,7 +52,7 @@ void ufs_free_fragments(struct inode *inode, u64 fragment, unsigned count) if (ufs_fragnum(fragment) + count > uspi->s_fpg) ufs_error (sb, "ufs_free_fragments", "internal error"); - mutex_lock(&UFS_SB(sb)->s_lock); + lock_ufs(sb); cgno = ufs_dtog(uspi, fragment); bit = ufs_dtogd(uspi, fragment); @@ -116,12 +116,12 @@ void ufs_free_fragments(struct inode *inode, u64 fragment, unsigned count) ubh_sync_block(UCPI_UBH(ucpi)); ufs_mark_sb_dirty(sb); - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); UFSD("EXIT\n"); return; failed: - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); UFSD("EXIT (FAILED)\n"); return; } @@ -151,7 +151,7 @@ void ufs_free_blocks(struct inode *inode, u64 fragment, unsigned count) goto failed; } - mutex_lock(&UFS_SB(sb)->s_lock); + lock_ufs(sb); do_more: overflow = 0; @@ -211,12 +211,12 @@ do_more: } ufs_mark_sb_dirty(sb); - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); UFSD("EXIT\n"); return; failed_unlock: - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); failed: UFSD("EXIT (FAILED)\n"); return; @@ -357,7 +357,7 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment, usb1 = ubh_get_usb_first(uspi); *err = -ENOSPC; - mutex_lock(&UFS_SB(sb)->s_lock); + lock_ufs(sb); tmp = ufs_data_ptr_to_cpu(sb, p); if (count + ufs_fragnum(fragment) > uspi->s_fpb) { @@ -378,19 +378,19 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment, "fragment %llu, tmp %llu\n", (unsigned long long)fragment, (unsigned long long)tmp); - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); return INVBLOCK; } if (fragment < UFS_I(inode)->i_lastfrag) { UFSD("EXIT (ALREADY ALLOCATED)\n"); - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); return 0; } } else { if (tmp) { UFSD("EXIT (ALREADY ALLOCATED)\n"); - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); return 0; } } @@ -399,7 +399,7 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment, * There is not enough space for user on the device */ if (!capable(CAP_SYS_RESOURCE) && ufs_freespace(uspi, UFS_MINFREE) <= 0) { - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); UFSD("EXIT (FAILED)\n"); return 0; } @@ -424,7 +424,7 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment, ufs_clear_frags(inode, result + oldcount, newcount - oldcount, locked_page != NULL); } - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); UFSD("EXIT, result %llu\n", (unsigned long long)result); return result; } @@ -432,14 +432,14 @@ u64 ufs_new_fragments(struct inode *inode, 
void *p, u64 fragment, /* * resize block */ - result = ufs_add_fragments (inode, tmp, oldcount, newcount, err); + result = ufs_add_fragments(inode, tmp, oldcount, newcount); if (result) { *err = 0; UFS_I(inode)->i_lastfrag = max(UFS_I(inode)->i_lastfrag, fragment + count); ufs_clear_frags(inode, result + oldcount, newcount - oldcount, locked_page != NULL); - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); UFSD("EXIT, result %llu\n", (unsigned long long)result); return result; } @@ -477,7 +477,7 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment, *err = 0; UFS_I(inode)->i_lastfrag = max(UFS_I(inode)->i_lastfrag, fragment + count); - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); if (newcount < request) ufs_free_fragments (inode, result + newcount, request - newcount); ufs_free_fragments (inode, tmp, oldcount); @@ -485,13 +485,13 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment, return result; } - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); UFSD("EXIT (FAILED)\n"); return 0; } static u64 ufs_add_fragments(struct inode *inode, u64 fragment, - unsigned oldcount, unsigned newcount, int *err) + unsigned oldcount, unsigned newcount) { struct super_block * sb; struct ufs_sb_private_info * uspi; diff --git a/fs/ufs/ialloc.c b/fs/ufs/ialloc.c index 98f7211599ff..a9cc75ffa925 100644 --- a/fs/ufs/ialloc.c +++ b/fs/ufs/ialloc.c @@ -69,11 +69,11 @@ void ufs_free_inode (struct inode * inode) ino = inode->i_ino; - mutex_lock(&UFS_SB(sb)->s_lock); + lock_ufs(sb); if (!((ino > 1) && (ino < (uspi->s_ncg * uspi->s_ipg )))) { ufs_warning(sb, "ufs_free_inode", "reserved inode or nonexistent inode %u\n", ino); - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); return; } @@ -81,7 +81,7 @@ void ufs_free_inode (struct inode * inode) bit = ufs_inotocgoff (ino); ucpi = ufs_load_cylinder (sb, cg); if (!ucpi) { - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); return; } ucg = ubh_get_ucg(UCPI_UBH(ucpi)); @@ -115,7 +115,7 @@ void ufs_free_inode (struct inode * inode) ubh_sync_block(UCPI_UBH(ucpi)); ufs_mark_sb_dirty(sb); - mutex_unlock(&UFS_SB(sb)->s_lock); + unlock_ufs(sb); UFSD("EXIT\n"); } @@ -193,7 +193,7 @@ struct inode *ufs_new_inode(struct inode *dir, umode_t mode) sbi = UFS_SB(sb); uspi = sbi->s_uspi; - mutex_lock(&sbi->s_lock); + lock_ufs(sb); /* * Try to place the inode in its parent directory @@ -328,21 +328,20 @@ cg_found: sync_dirty_buffer(bh); brelse(bh); } - - mutex_unlock(&sbi->s_lock); + unlock_ufs(sb); UFSD("allocating inode %lu\n", inode->i_ino); UFSD("EXIT\n"); return inode; fail_remove_inode: - mutex_unlock(&sbi->s_lock); + unlock_ufs(sb); clear_nlink(inode); iput(inode); UFSD("EXIT (FAILED): err %d\n", err); return ERR_PTR(err); failed: - mutex_unlock(&sbi->s_lock); + unlock_ufs(sb); make_bad_inode(inode); iput (inode); UFSD("EXIT (FAILED): err %d\n", err); diff --git a/fs/ufs/super.c b/fs/ufs/super.c index c1183f9f69dc..b879f1ba3439 100644 --- a/fs/ufs/super.c +++ b/fs/ufs/super.c @@ -697,7 +697,6 @@ static int ufs_sync_fs(struct super_block *sb, int wait) unsigned flags; lock_ufs(sb); - mutex_lock(&UFS_SB(sb)->s_lock); UFSD("ENTER\n"); @@ -715,7 +714,6 @@ static int ufs_sync_fs(struct super_block *sb, int wait) ufs_put_cstotal(sb); UFSD("EXIT\n"); - mutex_unlock(&UFS_SB(sb)->s_lock); unlock_ufs(sb); return 0; @@ -760,6 +758,7 @@ static void ufs_put_super(struct super_block *sb) ubh_brelse_uspi (sbi->s_uspi); kfree (sbi->s_uspi); + mutex_destroy(&sbi->mutex); kfree (sbi); sb->s_fs_info = NULL; UFSD("EXIT\n"); @@ -786,6 +785,14 @@ 
static int ufs_fill_super(struct super_block *sb, void *data, int silent) flags = 0; UFSD("ENTER\n"); + +#ifndef CONFIG_UFS_FS_WRITE + if (!(sb->s_flags & MS_RDONLY)) { + printk("ufs was compiled with read-only support, " + "can't be mounted as read-write\n"); + return -EROFS; + } +#endif sbi = kzalloc(sizeof(struct ufs_sb_info), GFP_KERNEL); if (!sbi) @@ -795,15 +802,7 @@ static int ufs_fill_super(struct super_block *sb, void *data, int silent) UFSD("flag %u\n", (int)(sb->s_flags & MS_RDONLY)); -#ifndef CONFIG_UFS_FS_WRITE - if (!(sb->s_flags & MS_RDONLY)) { - printk("ufs was compiled with read-only support, " - "can't be mounted as read-write\n"); - goto failed; - } -#endif mutex_init(&sbi->mutex); - mutex_init(&sbi->s_lock); spin_lock_init(&sbi->work_lock); INIT_DELAYED_WORK(&sbi->sync_work, delayed_sync_fs); /* @@ -1257,6 +1256,7 @@ magic_found: return 0; failed: + mutex_destroy(&sbi->mutex); if (ubh) ubh_brelse_uspi (uspi); kfree (uspi); @@ -1280,7 +1280,6 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data) sync_filesystem(sb); lock_ufs(sb); - mutex_lock(&UFS_SB(sb)->s_lock); uspi = UFS_SB(sb)->s_uspi; flags = UFS_SB(sb)->s_flags; usb1 = ubh_get_usb_first(uspi); @@ -1294,7 +1293,6 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data) new_mount_opt = 0; ufs_set_opt (new_mount_opt, ONERROR_LOCK); if (!ufs_parse_options (data, &new_mount_opt)) { - mutex_unlock(&UFS_SB(sb)->s_lock); unlock_ufs(sb); return -EINVAL; } @@ -1302,14 +1300,12 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data) new_mount_opt |= ufstype; } else if ((new_mount_opt & UFS_MOUNT_UFSTYPE) != ufstype) { printk("ufstype can't be changed during remount\n"); - mutex_unlock(&UFS_SB(sb)->s_lock); unlock_ufs(sb); return -EINVAL; } if ((*mount_flags & MS_RDONLY) == (sb->s_flags & MS_RDONLY)) { UFS_SB(sb)->s_mount_opt = new_mount_opt; - mutex_unlock(&UFS_SB(sb)->s_lock); unlock_ufs(sb); return 0; } @@ -1334,7 +1330,6 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data) #ifndef CONFIG_UFS_FS_WRITE printk("ufs was compiled with read-only support, " "can't be mounted as read-write\n"); - mutex_unlock(&UFS_SB(sb)->s_lock); unlock_ufs(sb); return -EINVAL; #else @@ -1344,13 +1339,11 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data) ufstype != UFS_MOUNT_UFSTYPE_SUNx86 && ufstype != UFS_MOUNT_UFSTYPE_UFS2) { printk("this ufstype is read-only supported\n"); - mutex_unlock(&UFS_SB(sb)->s_lock); unlock_ufs(sb); return -EINVAL; } if (!ufs_read_cylinder_structures(sb)) { printk("failed during remounting\n"); - mutex_unlock(&UFS_SB(sb)->s_lock); unlock_ufs(sb); return -EPERM; } @@ -1358,7 +1351,6 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data) #endif } UFS_SB(sb)->s_mount_opt = new_mount_opt; - mutex_unlock(&UFS_SB(sb)->s_lock); unlock_ufs(sb); return 0; } diff --git a/fs/ufs/ufs.h b/fs/ufs/ufs.h index ff2c15ab81aa..343e6fc571e5 100644 --- a/fs/ufs/ufs.h +++ b/fs/ufs/ufs.h @@ -24,7 +24,6 @@ struct ufs_sb_info { int work_queued; /* non-zero if the delayed work is queued */ struct delayed_work sync_work; /* FS sync delayed work */ spinlock_t work_lock; /* protects sync_work and work_queued */ - struct mutex s_lock; }; struct ufs_inode_info { diff --git a/include/asm-generic/ioctl.h b/include/asm-generic/ioctl.h index d17295b290fa..297fb0d7cd6c 100644 --- a/include/asm-generic/ioctl.h +++ b/include/asm-generic/ioctl.h @@ -3,10 +3,15 @@ #include 
<uapi/asm-generic/ioctl.h> +#ifdef __CHECKER__ +#define _IOC_TYPECHECK(t) (sizeof(t)) +#else /* provoke compile error for invalid uses of size argument */ extern unsigned int __invalid_size_argument_for_IOC; #define _IOC_TYPECHECK(t) \ ((sizeof(t) == sizeof(t[1]) && \ sizeof(t) < (1 << _IOC_SIZEBITS)) ? \ sizeof(t) : __invalid_size_argument_for_IOC) +#endif + #endif /* _ASM_GENERIC_IOCTL_H */ diff --git a/include/asm-generic/unaligned.h b/include/asm-generic/unaligned.h index 03cf5936bad6..1ac097279db1 100644 --- a/include/asm-generic/unaligned.h +++ b/include/asm-generic/unaligned.h @@ -4,22 +4,27 @@ /* * This is the most generic implementation of unaligned accesses * and should work almost anywhere. - * - * If an architecture can handle unaligned accesses in hardware, - * it may want to use the linux/unaligned/access_ok.h implementation - * instead. */ #include <asm/byteorder.h> +/* Set by the arch if it can handle unaligned accesses in hardware. */ +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS +# include <linux/unaligned/access_ok.h> +#endif + #if defined(__LITTLE_ENDIAN) -# include <linux/unaligned/le_struct.h> -# include <linux/unaligned/be_byteshift.h> +# ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS +# include <linux/unaligned/le_struct.h> +# include <linux/unaligned/be_byteshift.h> +# endif # include <linux/unaligned/generic.h> # define get_unaligned __get_unaligned_le # define put_unaligned __put_unaligned_le #elif defined(__BIG_ENDIAN) -# include <linux/unaligned/be_struct.h> -# include <linux/unaligned/le_byteshift.h> +# ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS +# include <linux/unaligned/be_struct.h> +# include <linux/unaligned/le_byteshift.h> +# endif # include <linux/unaligned/generic.h> # define get_unaligned __get_unaligned_be # define put_unaligned __put_unaligned_be diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h index 821eae8cbd8c..9b6f32a6cad1 100644 --- a/include/crypto/internal/hash.h +++ b/include/crypto/internal/hash.h @@ -55,15 +55,28 @@ extern const struct crypto_type crypto_ahash_type; int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err); int crypto_hash_walk_first(struct ahash_request *req, struct crypto_hash_walk *walk); +int crypto_ahash_walk_first(struct ahash_request *req, + struct crypto_hash_walk *walk); int crypto_hash_walk_first_compat(struct hash_desc *hdesc, struct crypto_hash_walk *walk, struct scatterlist *sg, unsigned int len); +static inline int crypto_ahash_walk_done(struct crypto_hash_walk *walk, + int err) +{ + return crypto_hash_walk_done(walk, err); +} + static inline int crypto_hash_walk_last(struct crypto_hash_walk *walk) { return !(walk->entrylen | walk->total); } +static inline int crypto_ahash_walk_last(struct crypto_hash_walk *walk) +{ + return crypto_hash_walk_last(walk); +} + int crypto_register_ahash(struct ahash_alg *alg); int crypto_unregister_ahash(struct ahash_alg *alg); int ahash_register_instance(struct crypto_template *tmpl, diff --git a/include/dt-bindings/clock/bcm21664.h b/include/dt-bindings/clock/bcm21664.h new file mode 100644 index 000000000000..5a7f0e4750a8 --- /dev/null +++ b/include/dt-bindings/clock/bcm21664.h @@ -0,0 +1,62 @@ +/* + * Copyright (C) 2013 Broadcom Corporation + * Copyright 2013 Linaro Limited + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation version 2. 
+ * + * This program is distributed "as is" WITHOUT ANY WARRANTY of any + * kind, whether express or implied; without even the implied warranty + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#ifndef _CLOCK_BCM21664_H +#define _CLOCK_BCM21664_H + +/* + * This file defines the values used to specify clocks provided by + * the clock control units (CCUs) on Broadcom BCM21664 family SoCs. + */ + +/* bcm21664 CCU device tree "compatible" strings */ +#define BCM21664_DT_ROOT_CCU_COMPAT "brcm,bcm21664-root-ccu" +#define BCM21664_DT_AON_CCU_COMPAT "brcm,bcm21664-aon-ccu" +#define BCM21664_DT_MASTER_CCU_COMPAT "brcm,bcm21664-master-ccu" +#define BCM21664_DT_SLAVE_CCU_COMPAT "brcm,bcm21664-slave-ccu" + +/* root CCU clock ids */ + +#define BCM21664_ROOT_CCU_FRAC_1M 0 +#define BCM21664_ROOT_CCU_CLOCK_COUNT 1 + +/* aon CCU clock ids */ + +#define BCM21664_AON_CCU_HUB_TIMER 0 +#define BCM21664_AON_CCU_CLOCK_COUNT 1 + +/* master CCU clock ids */ + +#define BCM21664_MASTER_CCU_SDIO1 0 +#define BCM21664_MASTER_CCU_SDIO2 1 +#define BCM21664_MASTER_CCU_SDIO3 2 +#define BCM21664_MASTER_CCU_SDIO4 3 +#define BCM21664_MASTER_CCU_SDIO1_SLEEP 4 +#define BCM21664_MASTER_CCU_SDIO2_SLEEP 5 +#define BCM21664_MASTER_CCU_SDIO3_SLEEP 6 +#define BCM21664_MASTER_CCU_SDIO4_SLEEP 7 +#define BCM21664_MASTER_CCU_CLOCK_COUNT 8 + +/* slave CCU clock ids */ + +#define BCM21664_SLAVE_CCU_UARTB 0 +#define BCM21664_SLAVE_CCU_UARTB2 1 +#define BCM21664_SLAVE_CCU_UARTB3 2 +#define BCM21664_SLAVE_CCU_BSC1 3 +#define BCM21664_SLAVE_CCU_BSC2 4 +#define BCM21664_SLAVE_CCU_BSC3 5 +#define BCM21664_SLAVE_CCU_BSC4 6 +#define BCM21664_SLAVE_CCU_CLOCK_COUNT 7 + +#endif /* _CLOCK_BCM21664_H */ diff --git a/include/dt-bindings/clock/bcm281xx.h b/include/dt-bindings/clock/bcm281xx.h index e0096940886d..a763460cf1af 100644 --- a/include/dt-bindings/clock/bcm281xx.h +++ b/include/dt-bindings/clock/bcm281xx.h @@ -20,6 +20,18 @@ * the clock control units (CCUs) on Broadcom BCM281XX family SoCs. */ +/* + * These are the bcm281xx CCU device tree "compatible" strings. + * We're stuck with using "bcm11351" in the string because wild + * cards aren't allowed, and that name was the first one defined + * in this family of devices. + */ +#define BCM281XX_DT_ROOT_CCU_COMPAT "brcm,bcm11351-root-ccu" +#define BCM281XX_DT_AON_CCU_COMPAT "brcm,bcm11351-aon-ccu" +#define BCM281XX_DT_HUB_CCU_COMPAT "brcm,bcm11351-hub-ccu" +#define BCM281XX_DT_MASTER_CCU_COMPAT "brcm,bcm11351-master-ccu" +#define BCM281XX_DT_SLAVE_CCU_COMPAT "brcm,bcm11351-slave-ccu" + /* root CCU clock ids */ #define BCM281XX_ROOT_CCU_FRAC_1M 0 diff --git a/include/dt-bindings/clock/hix5hd2-clock.h b/include/dt-bindings/clock/hix5hd2-clock.h new file mode 100644 index 000000000000..aad579a75802 --- /dev/null +++ b/include/dt-bindings/clock/hix5hd2-clock.h @@ -0,0 +1,58 @@ +/* + * Copyright (c) 2014 Linaro Ltd. + * Copyright (c) 2014 Hisilicon Limited. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. 
+ */ + +#ifndef __DTS_HIX5HD2_CLOCK_H +#define __DTS_HIX5HD2_CLOCK_H + +/* fixed rate */ +#define HIX5HD2_FIXED_1200M 1 +#define HIX5HD2_FIXED_400M 2 +#define HIX5HD2_FIXED_48M 3 +#define HIX5HD2_FIXED_24M 4 +#define HIX5HD2_FIXED_600M 5 +#define HIX5HD2_FIXED_300M 6 +#define HIX5HD2_FIXED_75M 7 +#define HIX5HD2_FIXED_200M 8 +#define HIX5HD2_FIXED_100M 9 +#define HIX5HD2_FIXED_40M 10 +#define HIX5HD2_FIXED_150M 11 +#define HIX5HD2_FIXED_1728M 12 +#define HIX5HD2_FIXED_28P8M 13 +#define HIX5HD2_FIXED_432M 14 +#define HIX5HD2_FIXED_345P6M 15 +#define HIX5HD2_FIXED_288M 16 +#define HIX5HD2_FIXED_60M 17 +#define HIX5HD2_FIXED_750M 18 +#define HIX5HD2_FIXED_500M 19 +#define HIX5HD2_FIXED_54M 20 +#define HIX5HD2_FIXED_27M 21 +#define HIX5HD2_FIXED_1500M 22 +#define HIX5HD2_FIXED_375M 23 +#define HIX5HD2_FIXED_187M 24 +#define HIX5HD2_FIXED_250M 25 +#define HIX5HD2_FIXED_125M 26 +#define HIX5HD2_FIXED_2P02M 27 +#define HIX5HD2_FIXED_50M 28 +#define HIX5HD2_FIXED_25M 29 +#define HIX5HD2_FIXED_83M 30 + +/* mux clocks */ +#define HIX5HD2_SFC_MUX 64 +#define HIX5HD2_MMC_MUX 65 +#define HIX5HD2_FEPHY_MUX 66 + +/* gate clocks */ +#define HIX5HD2_SFC_RST 128 +#define HIX5HD2_SFC_CLK 129 +#define HIX5HD2_MMC_CIU_CLK 130 +#define HIX5HD2_MMC_BIU_CLK 131 +#define HIX5HD2_MMC_CIU_RST 132 + +#define HIX5HD2_NR_CLKS 256 +#endif /* __DTS_HIX5HD2_CLOCK_H */ diff --git a/include/dt-bindings/clock/qcom,gcc-msm8960.h b/include/dt-bindings/clock/qcom,gcc-msm8960.h index 03bbf49d43b7..f9f547146a15 100644 --- a/include/dt-bindings/clock/qcom,gcc-msm8960.h +++ b/include/dt-bindings/clock/qcom,gcc-msm8960.h @@ -51,7 +51,7 @@ #define QDSS_TSCTR_CLK 34 #define SFAB_ADM0_M0_A_CLK 35 #define SFAB_ADM0_M1_A_CLK 36 -#define SFAB_ADM0_M2_A_CLK 37 +#define SFAB_ADM0_M2_H_CLK 37 #define ADM0_CLK 38 #define ADM0_PBUS_CLK 39 #define MSS_XPU_CLK 40 @@ -99,7 +99,7 @@ #define CFPB2_H_CLK 82 #define SFAB_CFPB_M_H_CLK 83 #define CFPB_MASTER_H_CLK 84 -#define SFAB_CFPB_S_HCLK 85 +#define SFAB_CFPB_S_H_CLK 85 #define CFPB_SPLITTER_H_CLK 86 #define TSIF_H_CLK 87 #define TSIF_INACTIVITY_TIMERS_CLK 88 @@ -110,7 +110,6 @@ #define CE1_SLEEP_CLK 93 #define CE2_H_CLK 94 #define CE2_CORE_CLK 95 -#define CE2_SLEEP_CLK 96 #define SFPB_H_CLK_SRC 97 #define SFPB_H_CLK 98 #define SFAB_SFPB_M_H_CLK 99 @@ -252,7 +251,7 @@ #define MSS_S_H_CLK 235 #define MSS_CXO_SRC_CLK 236 #define SATA_H_CLK 237 -#define SATA_SRC_CLK 238 +#define SATA_CLK_SRC 238 #define SATA_RXOOB_CLK 239 #define SATA_PMALIVE_CLK 240 #define SATA_PHY_REF_CLK 241 diff --git a/include/dt-bindings/clock/qcom,gcc-msm8974.h b/include/dt-bindings/clock/qcom,gcc-msm8974.h index 223ca174d9d3..51e51c860fe6 100644 --- a/include/dt-bindings/clock/qcom,gcc-msm8974.h +++ b/include/dt-bindings/clock/qcom,gcc-msm8974.h @@ -316,5 +316,9 @@ #define GCC_CE2_CLK_SLEEP_ENA 299 #define GCC_CE2_AXI_CLK_SLEEP_ENA 300 #define GCC_CE2_AHB_CLK_SLEEP_ENA 301 +#define GPLL4 302 +#define GPLL4_VOTE 303 +#define GCC_SDCC1_CDCCAL_SLEEP_CLK 304 +#define GCC_SDCC1_CDCCAL_FF_CLK 305 #endif diff --git a/include/dt-bindings/clock/r8a7779-clock.h b/include/dt-bindings/clock/r8a7779-clock.h new file mode 100644 index 000000000000..381a6114237a --- /dev/null +++ b/include/dt-bindings/clock/r8a7779-clock.h @@ -0,0 +1,64 @@ +/* + * Copyright (C) 2013 Horms Solutions Ltd. 
+ * + * Contact: Simon Horman <horms@verge.net.au> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + */ + +#ifndef __DT_BINDINGS_CLOCK_R8A7779_H__ +#define __DT_BINDINGS_CLOCK_R8A7779_H__ + +/* CPG */ +#define R8A7779_CLK_PLLA 0 +#define R8A7779_CLK_Z 1 +#define R8A7779_CLK_ZS 2 +#define R8A7779_CLK_S 3 +#define R8A7779_CLK_S1 4 +#define R8A7779_CLK_P 5 +#define R8A7779_CLK_B 6 +#define R8A7779_CLK_OUT 7 + +/* MSTP 0 */ +#define R8A7779_CLK_HSPI 7 +#define R8A7779_CLK_TMU2 14 +#define R8A7779_CLK_TMU1 15 +#define R8A7779_CLK_TMU0 16 +#define R8A7779_CLK_HSCIF1 18 +#define R8A7779_CLK_HSCIF0 19 +#define R8A7779_CLK_SCIF5 21 +#define R8A7779_CLK_SCIF4 22 +#define R8A7779_CLK_SCIF3 23 +#define R8A7779_CLK_SCIF2 24 +#define R8A7779_CLK_SCIF1 25 +#define R8A7779_CLK_SCIF0 26 +#define R8A7779_CLK_I2C3 27 +#define R8A7779_CLK_I2C2 28 +#define R8A7779_CLK_I2C1 29 +#define R8A7779_CLK_I2C0 30 + +/* MSTP 1 */ +#define R8A7779_CLK_USB01 0 +#define R8A7779_CLK_USB2 1 +#define R8A7779_CLK_DU 3 +#define R8A7779_CLK_VIN2 8 +#define R8A7779_CLK_VIN1 9 +#define R8A7779_CLK_VIN0 10 +#define R8A7779_CLK_ETHER 14 +#define R8A7779_CLK_SATA 15 +#define R8A7779_CLK_PCIE 16 +#define R8A7779_CLK_VIN3 20 + +/* MSTP 3 */ +#define R8A7779_CLK_SDHI3 20 +#define R8A7779_CLK_SDHI2 21 +#define R8A7779_CLK_SDHI1 22 +#define R8A7779_CLK_SDHI0 23 +#define R8A7779_CLK_MMC1 30 +#define R8A7779_CLK_MMC0 31 + + +#endif /* __DT_BINDINGS_CLOCK_R8A7779_H__ */ diff --git a/include/dt-bindings/clock/tegra114-car.h b/include/dt-bindings/clock/tegra114-car.h index 6d0d8d8ef31e..fc12621fb432 100644 --- a/include/dt-bindings/clock/tegra114-car.h +++ b/include/dt-bindings/clock/tegra114-car.h @@ -337,6 +337,7 @@ #define TEGRA114_CLK_CLK_OUT_3_MUX 308 #define TEGRA114_CLK_DSIA_MUX 309 #define TEGRA114_CLK_DSIB_MUX 310 -#define TEGRA114_CLK_CLK_MAX 311 +#define TEGRA114_CLK_XUSB_SS_DIV2 311 +#define TEGRA114_CLK_CLK_MAX 312 #endif /* _DT_BINDINGS_CLOCK_TEGRA114_CAR_H */ diff --git a/include/dt-bindings/clock/tegra124-car.h b/include/dt-bindings/clock/tegra124-car.h index 433528ab5161..8a4c5892890f 100644 --- a/include/dt-bindings/clock/tegra124-car.h +++ b/include/dt-bindings/clock/tegra124-car.h @@ -336,6 +336,7 @@ #define TEGRA124_CLK_DSIA_MUX 309 #define TEGRA124_CLK_DSIB_MUX 310 #define TEGRA124_CLK_SOR0_LVDS 311 -#define TEGRA124_CLK_CLK_MAX 312 +#define TEGRA124_CLK_XUSB_SS_DIV2 312 +#define TEGRA124_CLK_CLK_MAX 313 #endif /* _DT_BINDINGS_CLOCK_TEGRA124_CAR_H */ diff --git a/include/dt-bindings/reset/qcom,gcc-msm8960.h b/include/dt-bindings/reset/qcom,gcc-msm8960.h index a840e680323c..07edd0e65eed 100644 --- a/include/dt-bindings/reset/qcom,gcc-msm8960.h +++ b/include/dt-bindings/reset/qcom,gcc-msm8960.h @@ -58,7 +58,7 @@ #define PPSS_PROC_RESET 41 #define PPSS_RESET 42 #define DMA_BAM_RESET 43 -#define SIC_TIC_RESET 44 +#define SPS_TIC_H_RESET 44 #define SLIMBUS_H_RESET 45 #define SFAB_CFPB_M_RESET 46 #define SFAB_CFPB_S_RESET 47 diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h index f295bab1865d..0c287dbbb144 100644 --- a/include/linux/clk-provider.h +++ b/include/linux/clk-provider.h @@ -40,14 +40,14 @@ struct dentry; * through the clk_* api. * * @prepare: Prepare the clock for enabling. This must not return until - * the clock is fully prepared, and it's safe to call clk_enable. 
- * This callback is intended to allow clock implementations to - * do any initialisation that may sleep. Called with - * prepare_lock held. + * the clock is fully prepared, and it's safe to call clk_enable. + * This callback is intended to allow clock implementations to + * do any initialisation that may sleep. Called with + * prepare_lock held. * * @unprepare: Release the clock from its prepared state. This will typically - * undo any work done in the @prepare callback. Called with - * prepare_lock held. + * undo any work done in the @prepare callback. Called with + * prepare_lock held. * * @is_prepared: Queries the hardware to determine if the clock is prepared. * This function is allowed to sleep. Optional, if this op is not @@ -58,16 +58,16 @@ struct dentry; * Called with prepare mutex held. This function may sleep. * * @enable: Enable the clock atomically. This must not return until the - * clock is generating a valid clock signal, usable by consumer - * devices. Called with enable_lock held. This function must not - * sleep. + * clock is generating a valid clock signal, usable by consumer + * devices. Called with enable_lock held. This function must not + * sleep. * * @disable: Disable the clock atomically. Called with enable_lock held. - * This function must not sleep. + * This function must not sleep. * * @is_enabled: Queries the hardware to determine if the clock is enabled. - * This function must not sleep. Optional, if this op is not - * set then the enable count will be used. + * This function must not sleep. Optional, if this op is not + * set then the enable count will be used. * * @disable_unused: Disable the clock atomically. Only called from * clk_disable_unused for gate clocks with special needs. @@ -75,34 +75,35 @@ struct dentry; * sleep. * * @recalc_rate Recalculate the rate of this clock, by querying hardware. The - * parent rate is an input parameter. It is up to the caller to - * ensure that the prepare_mutex is held across this call. - * Returns the calculated rate. Optional, but recommended - if - * this op is not set then clock rate will be initialized to 0. + * parent rate is an input parameter. It is up to the caller to + * ensure that the prepare_mutex is held across this call. + * Returns the calculated rate. Optional, but recommended - if + * this op is not set then clock rate will be initialized to 0. * * @round_rate: Given a target rate as input, returns the closest rate actually - * supported by the clock. + * supported by the clock. The parent rate is an input/output + * parameter. * * @determine_rate: Given a target rate as input, returns the closest rate * actually supported by the clock, and optionally the parent clock * that should be used to provide the clock rate. * - * @get_parent: Queries the hardware to determine the parent of a clock. The - * return value is a u8 which specifies the index corresponding to - * the parent clock. This index can be applied to either the - * .parent_names or .parents arrays. In short, this function - * translates the parent value read from hardware into an array - * index. Currently only called when the clock is initialized by - * __clk_init. This callback is mandatory for clocks with - * multiple parents. It is optional (and unnecessary) for clocks - * with 0 or 1 parents. - * * @set_parent: Change the input source of this clock; for clocks with multiple - * possible parents specify a new parent by passing in the index - * as a u8 corresponding to the parent in either the .parent_names - * or .parents arrays. 
This function in affect translates an
- * array index into the value programmed into the hardware.
- * Returns 0 on success, -EERROR otherwise.
+ * possible parents specify a new parent by passing in the index
+ * as a u8 corresponding to the parent in either the .parent_names
+ * or .parents arrays. This function in effect translates an
+ * array index into the value programmed into the hardware.
+ * Returns 0 on success, -EERROR otherwise.
+
+ * @get_parent: Queries the hardware to determine the parent of a clock. The
+ * return value is a u8 which specifies the index corresponding to
+ * the parent clock. This index can be applied to either the
+ * .parent_names or .parents arrays. In short, this function
+ * translates the parent value read from hardware into an array
+ * index. Currently only called when the clock is initialized by
+ * __clk_init. This callback is mandatory for clocks with
+ * multiple parents. It is optional (and unnecessary) for clocks
+ * with 0 or 1 parents.
 *
 * @set_rate: Change the rate of this clock. The requested rate is specified
 * by the second argument, which should typically be the return
@@ -110,13 +111,6 @@ struct dentry;
 * which is likely helpful for most .set_rate implementation.
 * Returns 0 on success, -EERROR otherwise.
 *
- * @recalc_accuracy: Recalculate the accuracy of this clock. The clock accuracy
- * is expressed in ppb (parts per billion). The parent accuracy is
- * an input parameter.
- * Returns the calculated accuracy. Optional - if this op is not
- * set then clock accuracy will be initialized to parent accuracy
- * or 0 (perfect clock) if clock has no parent.
- *
 * @set_rate_and_parent: Change the rate and the parent of this clock. The
 * requested rate is specified by the second argument, which
 * should typically be the return of .round_rate call. The
@@ -128,6 +122,18 @@ struct dentry;
 * separately via calls to .set_parent and .set_rate.
 * Returns 0 on success, -EERROR otherwise.
 *
+ * @recalc_accuracy: Recalculate the accuracy of this clock. The clock accuracy
+ * is expressed in ppb (parts per billion). The parent accuracy is
+ * an input parameter.
+ * Returns the calculated accuracy. Optional - if this op is not
+ * set then clock accuracy will be initialized to parent accuracy
+ * or 0 (perfect clock) if clock has no parent.
+
+ * @init: Perform platform-specific initialization magic.
+ * This is not used by any of the basic clock types.
+ * Please consider other ways of solving initialization problems
+ * before using this callback, as its use is discouraged.
+
+ * @debug_init: Set up type-specific debugfs entries for this clock. This
 * is called once, after the debugfs directory entry for this
 * clock has been created.
The dentry pointer representing that @@ -157,15 +163,15 @@ struct clk_ops { void (*disable_unused)(struct clk_hw *hw); unsigned long (*recalc_rate)(struct clk_hw *hw, unsigned long parent_rate); - long (*round_rate)(struct clk_hw *hw, unsigned long, - unsigned long *); + long (*round_rate)(struct clk_hw *hw, unsigned long rate, + unsigned long *parent_rate); long (*determine_rate)(struct clk_hw *hw, unsigned long rate, unsigned long *best_parent_rate, struct clk **best_parent_clk); int (*set_parent)(struct clk_hw *hw, u8 index); u8 (*get_parent)(struct clk_hw *hw); - int (*set_rate)(struct clk_hw *hw, unsigned long, - unsigned long); + int (*set_rate)(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate); int (*set_rate_and_parent)(struct clk_hw *hw, unsigned long rate, unsigned long parent_rate, u8 index); @@ -254,12 +260,12 @@ void of_fixed_clk_setup(struct device_node *np); * * Flags: * CLK_GATE_SET_TO_DISABLE - by default this clock sets the bit at bit_idx to - * enable the clock. Setting this flag does the opposite: setting the bit - * disable the clock and clearing it enables the clock + * enable the clock. Setting this flag does the opposite: setting the bit + * disable the clock and clearing it enables the clock * CLK_GATE_HIWORD_MASK - The gate settings are only in lower 16-bit - * of this register, and mask of gate bits are in higher 16-bit of this - * register. While setting the gate bits, higher 16-bit should also be - * updated to indicate changing gate bits. + * of this register, and mask of gate bits are in higher 16-bit of this + * register. While setting the gate bits, higher 16-bit should also be + * updated to indicate changing gate bits. */ struct clk_gate { struct clk_hw hw; @@ -298,20 +304,24 @@ struct clk_div_table { * * Flags: * CLK_DIVIDER_ONE_BASED - by default the divisor is the value read from the - * register plus one. If CLK_DIVIDER_ONE_BASED is set then the divider is - * the raw value read from the register, with the value of zero considered + * register plus one. If CLK_DIVIDER_ONE_BASED is set then the divider is + * the raw value read from the register, with the value of zero considered * invalid, unless CLK_DIVIDER_ALLOW_ZERO is set. * CLK_DIVIDER_POWER_OF_TWO - clock divisor is 2 raised to the value read from - * the hardware register + * the hardware register * CLK_DIVIDER_ALLOW_ZERO - Allow zero divisors. For dividers which have * CLK_DIVIDER_ONE_BASED set, it is possible to end up with a zero divisor. * Some hardware implementations gracefully handle this case and allow a * zero divisor by not modifying their input clock * (divide by one / bypass). * CLK_DIVIDER_HIWORD_MASK - The divider settings are only in lower 16-bit - * of this register, and mask of divider bits are in higher 16-bit of this - * register. While setting the divider bits, higher 16-bit should also be - * updated to indicate changing divider bits. + * of this register, and mask of divider bits are in higher 16-bit of this + * register. While setting the divider bits, higher 16-bit should also be + * updated to indicate changing divider bits. + * CLK_DIVIDER_ROUND_CLOSEST - Makes the best calculated divider to be rounded + * to the closest integer instead of the up one. + * CLK_DIVIDER_READ_ONLY - The divider settings are preconfigured and should + * not be changed by the clock framework. 
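An illustrative use of the two new divider flags (hypothetical "foo" clock names and register layout, not part of this patch): a 4-bit divider field at bit 0 of a control register, registered so that rate requests round to the closest achievable divider rather than always rounding the divider up.

#include <linux/clk-provider.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(foo_div_lock);

static struct clk *foo_register_divider(void __iomem *ctrl_reg)
{
	return clk_register_divider(NULL, "foo_div", "foo_parent",
				    0,			/* framework flags */
				    ctrl_reg, 0, 4,	/* reg, shift, width */
				    CLK_DIVIDER_ROUND_CLOSEST,
				    &foo_div_lock);
}

A divider that firmware has already programmed and that Linux must leave alone would pass CLK_DIVIDER_READ_ONLY instead, which pairs with the new clk_divider_ro_ops.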
*/ struct clk_divider { struct clk_hw hw; @@ -327,8 +337,11 @@ struct clk_divider { #define CLK_DIVIDER_POWER_OF_TWO BIT(1) #define CLK_DIVIDER_ALLOW_ZERO BIT(2) #define CLK_DIVIDER_HIWORD_MASK BIT(3) +#define CLK_DIVIDER_ROUND_CLOSEST BIT(4) +#define CLK_DIVIDER_READ_ONLY BIT(5) extern const struct clk_ops clk_divider_ops; +extern const struct clk_ops clk_divider_ro_ops; struct clk *clk_register_divider(struct device *dev, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 shift, u8 width, @@ -356,9 +369,9 @@ struct clk *clk_register_divider_table(struct device *dev, const char *name, * CLK_MUX_INDEX_ONE - register index starts at 1, not 0 * CLK_MUX_INDEX_BIT - register index is a single bit (power of two) * CLK_MUX_HIWORD_MASK - The mux settings are only in lower 16-bit of this - * register, and mask of mux bits are in higher 16-bit of this register. - * While setting the mux bits, higher 16-bit should also be updated to - * indicate changing mux bits. + * register, and mask of mux bits are in higher 16-bit of this register. + * While setting the mux bits, higher 16-bit should also be updated to + * indicate changing mux bits. */ struct clk_mux { struct clk_hw hw; diff --git a/include/linux/clk/shmobile.h b/include/linux/clk/shmobile.h index f9bf080a1123..9f8a14041dd5 100644 --- a/include/linux/clk/shmobile.h +++ b/include/linux/clk/shmobile.h @@ -1,7 +1,9 @@ /* * Copyright 2013 Ideas On Board SPRL + * Copyright 2013, 2014 Horms Solutions Ltd. * * Contact: Laurent Pinchart <laurent.pinchart@ideasonboard.com> + * Contact: Simon Horman <horms@verge.net.au> * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by @@ -14,6 +16,7 @@ #include <linux/types.h> +void r8a7779_clocks_init(u32 mode); void rcar_gen2_clocks_init(u32 mode); #endif diff --git a/include/linux/clk/sunxi.h b/include/linux/clk/sunxi.h new file mode 100644 index 000000000000..aed28c4451d9 --- /dev/null +++ b/include/linux/clk/sunxi.h @@ -0,0 +1,22 @@ +/* + * Copyright 2013 - Hans de Goede <hdegoede@redhat.com> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#ifndef __LINUX_CLK_SUNXI_H_ +#define __LINUX_CLK_SUNXI_H_ + +#include <linux/clk.h> + +void clk_sunxi_mmc_phase_control(struct clk *clk, u8 sample, u8 output); + +#endif diff --git a/include/linux/cpu.h b/include/linux/cpu.h index 81887120395c..95978ad7fcdd 100644 --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -256,7 +256,6 @@ enum cpuhp_state { }; void cpu_startup_entry(enum cpuhp_state state); -void cpu_idle(void); void cpu_idle_poll_ctrl(bool enable); diff --git a/include/linux/device.h b/include/linux/device.h index 580e3eed4b78..af424acd393d 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -692,6 +692,7 @@ struct acpi_dev_node { * @coherent_dma_mask: Like dma_mask, but for alloc_coherent mapping as not all * hardware supports 64-bit addresses for consistent allocations * such descriptors. 
+ * @dma_pfn_offset: offset of DMA memory range relatively of RAM * @dma_parms: A low level driver may set these to teach IOMMU code about * segment limitations. * @dma_pools: Dma pools (if dma'ble device). @@ -759,6 +760,7 @@ struct device { not all hardware supports 64 bit addresses for consistent allocations such descriptors. */ + unsigned long dma_pfn_offset; struct device_dma_parameters *dma_parms; diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 0c3eab1e39ac..931b70986272 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -129,6 +129,13 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask) extern u64 dma_get_required_mask(struct device *dev); +#ifndef set_arch_dma_coherent_ops +static inline int set_arch_dma_coherent_ops(struct device *dev) +{ + return 0; +} +#endif + static inline unsigned int dma_get_max_seg_size(struct device *dev) { return dev->dma_parms ? dev->dma_parms->max_segment_size : 65536; diff --git a/include/linux/efi.h b/include/linux/efi.h index 6c100ff0cae4..41bbf8ba4ba8 100644 --- a/include/linux/efi.h +++ b/include/linux/efi.h @@ -575,6 +575,9 @@ typedef efi_status_t efi_query_variable_store_t(u32 attributes, unsigned long si #define EFI_FILE_SYSTEM_GUID \ EFI_GUID( 0x964e5b22, 0x6459, 0x11d2, 0x8e, 0x39, 0x00, 0xa0, 0xc9, 0x69, 0x72, 0x3b ) +#define DEVICE_TREE_GUID \ + EFI_GUID( 0xb1b621d5, 0xf19c, 0x41a5, 0x83, 0x0b, 0xd9, 0x15, 0x2c, 0x69, 0xaa, 0xe0 ) + typedef struct { efi_guid_t guid; u64 table; @@ -664,6 +667,14 @@ struct efi_memory_map { unsigned long desc_size; }; +struct efi_fdt_params { + u64 system_table; + u64 mmap; + u32 mmap_size; + u32 desc_size; + u32 desc_ver; +}; + typedef struct { u32 revision; u32 parent_handle; @@ -861,8 +872,15 @@ extern void efi_initialize_iomem_resources(struct resource *code_resource, extern void efi_get_time(struct timespec *now); extern int efi_set_rtc_mmss(const struct timespec *now); extern void efi_reserve_boot_services(void); +extern int efi_get_fdt_params(struct efi_fdt_params *params, int verbose); extern struct efi_memory_map memmap; +/* Iterate through an efi_memory_map */ +#define for_each_efi_memory_desc(m, md) \ + for ((md) = (m)->map; \ + (md) <= (efi_memory_desc_t *)((m)->map_end - (m)->desc_size); \ + (md) = (void *)(md) + (m)->desc_size) + /** * efi_range_is_wc - check the WC bit on an address range * @start: starting kvirt address @@ -1033,8 +1051,10 @@ struct efivars { * and we use a page for reading/writing. 
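A small usage sketch for the for_each_efi_memory_desc() iterator added above (illustrative only; it assumes the global memmap has already been populated from the firmware's GetMemoryMap() data):

#include <linux/efi.h>
#include <linux/printk.h>

static void report_conventional_memory(void)
{
	efi_memory_desc_t *md;

	for_each_efi_memory_desc(&memmap, md) {
		if (md->type != EFI_CONVENTIONAL_MEMORY)
			continue;

		pr_info("RAM: 0x%llx, %llu pages\n",
			(unsigned long long)md->phys_addr,
			(unsigned long long)md->num_pages);
	}
}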
*/ +#define EFI_VAR_NAME_LEN 1024 + struct efi_variable { - efi_char16_t VariableName[1024/sizeof(efi_char16_t)]; + efi_char16_t VariableName[EFI_VAR_NAME_LEN/sizeof(efi_char16_t)]; efi_guid_t VendorGuid; unsigned long DataSize; __u8 Data[1024]; @@ -1116,7 +1136,7 @@ int efivar_entry_iter(int (*func)(struct efivar_entry *, void *), struct efivar_entry *efivar_entry_find(efi_char16_t *name, efi_guid_t guid, struct list_head *head, bool remove); -bool efivar_validate(struct efi_variable *var, u8 *data, unsigned long len); +bool efivar_validate(efi_char16_t *var_name, u8 *data, unsigned long len); extern struct work_struct efivar_work; void efivar_run_worker(void); diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index ae9504b4b67d..2018751cad9e 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -616,25 +616,27 @@ static inline void __ftrace_enabled_restore(int enabled) #endif } -#ifndef HAVE_ARCH_CALLER_ADDR +/* All archs should have this, but we define it for consistency */ +#ifndef ftrace_return_address0 +# define ftrace_return_address0 __builtin_return_address(0) +#endif + +/* Archs may use other ways for ADDR1 and beyond */ +#ifndef ftrace_return_address # ifdef CONFIG_FRAME_POINTER -# define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0)) -# define CALLER_ADDR1 ((unsigned long)__builtin_return_address(1)) -# define CALLER_ADDR2 ((unsigned long)__builtin_return_address(2)) -# define CALLER_ADDR3 ((unsigned long)__builtin_return_address(3)) -# define CALLER_ADDR4 ((unsigned long)__builtin_return_address(4)) -# define CALLER_ADDR5 ((unsigned long)__builtin_return_address(5)) -# define CALLER_ADDR6 ((unsigned long)__builtin_return_address(6)) +# define ftrace_return_address(n) __builtin_return_address(n) # else -# define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0)) -# define CALLER_ADDR1 0UL -# define CALLER_ADDR2 0UL -# define CALLER_ADDR3 0UL -# define CALLER_ADDR4 0UL -# define CALLER_ADDR5 0UL -# define CALLER_ADDR6 0UL +# define ftrace_return_address(n) 0UL # endif -#endif /* ifndef HAVE_ARCH_CALLER_ADDR */ +#endif + +#define CALLER_ADDR0 ((unsigned long)ftrace_return_address0) +#define CALLER_ADDR1 ((unsigned long)ftrace_return_address(1)) +#define CALLER_ADDR2 ((unsigned long)ftrace_return_address(2)) +#define CALLER_ADDR3 ((unsigned long)ftrace_return_address(3)) +#define CALLER_ADDR4 ((unsigned long)ftrace_return_address(4)) +#define CALLER_ADDR5 ((unsigned long)ftrace_return_address(5)) +#define CALLER_ADDR6 ((unsigned long)ftrace_return_address(6)) #ifdef CONFIG_IRQSOFF_TRACER extern void time_hardirqs_on(unsigned long a0, unsigned long a1); diff --git a/include/linux/idr.h b/include/linux/idr.h index 6af3400b9b2f..013fd9bc4cb6 100644 --- a/include/linux/idr.h +++ b/include/linux/idr.h @@ -29,21 +29,24 @@ struct idr_layer { int prefix; /* the ID prefix of this idr_layer */ - DECLARE_BITMAP(bitmap, IDR_SIZE); /* A zero bit means "space here" */ + int layer; /* distance from leaf */ struct idr_layer __rcu *ary[1<<IDR_BITS]; int count; /* When zero, we can release it */ - int layer; /* distance from leaf */ - struct rcu_head rcu_head; + union { + /* A zero bit means "space here" */ + DECLARE_BITMAP(bitmap, IDR_SIZE); + struct rcu_head rcu_head; + }; }; struct idr { struct idr_layer __rcu *hint; /* the last layer allocated from */ struct idr_layer __rcu *top; - struct idr_layer *id_free; int layers; /* only valid w/o concurrent changes */ - int id_free_cnt; int cur; /* current pos for cyclic allocation */ spinlock_t lock; + int 
id_free_cnt; + struct idr_layer *id_free; }; #define IDR_INIT(name) \ diff --git a/include/linux/if_team.h b/include/linux/if_team.h index a899dc24be15..a6aa970758a2 100644 --- a/include/linux/if_team.h +++ b/include/linux/if_team.h @@ -194,6 +194,7 @@ struct team { bool user_carrier_enabled; bool queue_override_enabled; struct list_head *qom_lists; /* array of queue override mapping lists */ + bool port_mtu_change_allowed; struct { unsigned int count; unsigned int interval; /* in ms */ diff --git a/include/linux/key.h b/include/linux/key.h index 80d677483e31..3ae45f09589b 100644 --- a/include/linux/key.h +++ b/include/linux/key.h @@ -332,7 +332,7 @@ do { \ } while (0) #ifdef CONFIG_SYSCTL -extern ctl_table key_sysctls[]; +extern struct ctl_table key_sysctls[]; #endif /* * the userspace interface diff --git a/include/linux/kmemleak.h b/include/linux/kmemleak.h index 5bb424659c04..057e95971014 100644 --- a/include/linux/kmemleak.h +++ b/include/linux/kmemleak.h @@ -30,6 +30,7 @@ extern void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size) __ref; extern void kmemleak_free(const void *ptr) __ref; extern void kmemleak_free_part(const void *ptr, size_t size) __ref; extern void kmemleak_free_percpu(const void __percpu *ptr) __ref; +extern void kmemleak_update_trace(const void *ptr) __ref; extern void kmemleak_not_leak(const void *ptr) __ref; extern void kmemleak_ignore(const void *ptr) __ref; extern void kmemleak_scan_area(const void *ptr, size_t size, gfp_t gfp) __ref; @@ -83,6 +84,9 @@ static inline void kmemleak_free_recursive(const void *ptr, unsigned long flags) static inline void kmemleak_free_percpu(const void __percpu *ptr) { } +static inline void kmemleak_update_trace(const void *ptr) +{ +} static inline void kmemleak_not_leak(const void *ptr) { } diff --git a/include/linux/mc146818rtc.h b/include/linux/mc146818rtc.h index 2f4e957af656..433e0c74d643 100644 --- a/include/linux/mc146818rtc.h +++ b/include/linux/mc146818rtc.h @@ -31,6 +31,10 @@ struct cmos_rtc_board_info { void (*wake_on)(struct device *dev); void (*wake_off)(struct device *dev); + u32 flags; +#define CMOS_RTC_FLAGS_NOFREQ (1 << 0) + int address_space; + u8 rtc_day_alarm; /* zero, or register index */ u8 rtc_mon_alarm; /* zero, or register index */ u8 rtc_century; /* zero, or register index */ diff --git a/include/linux/mfd/abx500.h b/include/linux/mfd/abx500.h index 3301b2031c8d..552cc1d61cc7 100644 --- a/include/linux/mfd/abx500.h +++ b/include/linux/mfd/abx500.h @@ -330,7 +330,6 @@ int abx500_mask_and_set_register_interruptible(struct device *dev, u8 bank, int abx500_get_chip_id(struct device *dev); int abx500_event_registers_startup_state_get(struct device *dev, u8 *event); int abx500_startup_irq_enabled(struct device *dev, unsigned int irq); -void abx500_dump_all_banks(void); struct abx500_ops { int (*get_chip_id) (struct device *); diff --git a/include/linux/mfd/arizona/registers.h b/include/linux/mfd/arizona/registers.h index 7b35c21170d5..7204d8138b24 100644 --- a/include/linux/mfd/arizona/registers.h +++ b/include/linux/mfd/arizona/registers.h @@ -42,12 +42,14 @@ #define ARIZONA_SAMPLE_RATE_SEQUENCE_SELECT_2 0x62 #define ARIZONA_SAMPLE_RATE_SEQUENCE_SELECT_3 0x63 #define ARIZONA_SAMPLE_RATE_SEQUENCE_SELECT_4 0x64 -#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_1 0x68 -#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_2 0x69 -#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_3 0x6A -#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_4 0x6B -#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_5 0x6C 
-#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_6 0x6D +#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_1 0x66 +#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_2 0x67 +#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_3 0x68 +#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_4 0x69 +#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_5 0x6A +#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_6 0x6B +#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_7 0x6C +#define ARIZONA_ALWAYS_ON_TRIGGERS_SEQUENCE_SELECT_8 0x6D #define ARIZONA_COMFORT_NOISE_GENERATOR 0x70 #define ARIZONA_HAPTICS_CONTROL_1 0x90 #define ARIZONA_HAPTICS_CONTROL_2 0x91 diff --git a/include/linux/mfd/axp20x.h b/include/linux/mfd/axp20x.h new file mode 100644 index 000000000000..d0e31a2287ac --- /dev/null +++ b/include/linux/mfd/axp20x.h @@ -0,0 +1,180 @@ +/* + * Functions and registers to access AXP20X power management chip. + * + * Copyright (C) 2013, Carlo Caione <carlo@caione.org> + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#ifndef __LINUX_MFD_AXP20X_H +#define __LINUX_MFD_AXP20X_H + +enum { + AXP202_ID = 0, + AXP209_ID, +}; + +#define AXP20X_DATACACHE(m) (0x04 + (m)) + +/* Power supply */ +#define AXP20X_PWR_INPUT_STATUS 0x00 +#define AXP20X_PWR_OP_MODE 0x01 +#define AXP20X_USB_OTG_STATUS 0x02 +#define AXP20X_PWR_OUT_CTRL 0x12 +#define AXP20X_DCDC2_V_OUT 0x23 +#define AXP20X_DCDC2_LDO3_V_SCAL 0x25 +#define AXP20X_DCDC3_V_OUT 0x27 +#define AXP20X_LDO24_V_OUT 0x28 +#define AXP20X_LDO3_V_OUT 0x29 +#define AXP20X_VBUS_IPSOUT_MGMT 0x30 +#define AXP20X_V_OFF 0x31 +#define AXP20X_OFF_CTRL 0x32 +#define AXP20X_CHRG_CTRL1 0x33 +#define AXP20X_CHRG_CTRL2 0x34 +#define AXP20X_CHRG_BAK_CTRL 0x35 +#define AXP20X_PEK_KEY 0x36 +#define AXP20X_DCDC_FREQ 0x37 +#define AXP20X_V_LTF_CHRG 0x38 +#define AXP20X_V_HTF_CHRG 0x39 +#define AXP20X_APS_WARN_L1 0x3a +#define AXP20X_APS_WARN_L2 0x3b +#define AXP20X_V_LTF_DISCHRG 0x3c +#define AXP20X_V_HTF_DISCHRG 0x3d + +/* Interrupt */ +#define AXP20X_IRQ1_EN 0x40 +#define AXP20X_IRQ2_EN 0x41 +#define AXP20X_IRQ3_EN 0x42 +#define AXP20X_IRQ4_EN 0x43 +#define AXP20X_IRQ5_EN 0x44 +#define AXP20X_IRQ1_STATE 0x48 +#define AXP20X_IRQ2_STATE 0x49 +#define AXP20X_IRQ3_STATE 0x4a +#define AXP20X_IRQ4_STATE 0x4b +#define AXP20X_IRQ5_STATE 0x4c + +/* ADC */ +#define AXP20X_ACIN_V_ADC_H 0x56 +#define AXP20X_ACIN_V_ADC_L 0x57 +#define AXP20X_ACIN_I_ADC_H 0x58 +#define AXP20X_ACIN_I_ADC_L 0x59 +#define AXP20X_VBUS_V_ADC_H 0x5a +#define AXP20X_VBUS_V_ADC_L 0x5b +#define AXP20X_VBUS_I_ADC_H 0x5c +#define AXP20X_VBUS_I_ADC_L 0x5d +#define AXP20X_TEMP_ADC_H 0x5e +#define AXP20X_TEMP_ADC_L 0x5f +#define AXP20X_TS_IN_H 0x62 +#define AXP20X_TS_IN_L 0x63 +#define AXP20X_GPIO0_V_ADC_H 0x64 +#define AXP20X_GPIO0_V_ADC_L 0x65 +#define AXP20X_GPIO1_V_ADC_H 0x66 +#define AXP20X_GPIO1_V_ADC_L 0x67 +#define AXP20X_PWR_BATT_H 0x70 +#define AXP20X_PWR_BATT_M 0x71 +#define AXP20X_PWR_BATT_L 0x72 +#define AXP20X_BATT_V_H 0x78 +#define AXP20X_BATT_V_L 0x79 +#define AXP20X_BATT_CHRG_I_H 0x7a +#define AXP20X_BATT_CHRG_I_L 0x7b +#define AXP20X_BATT_DISCHRG_I_H 0x7c +#define AXP20X_BATT_DISCHRG_I_L 0x7d +#define AXP20X_IPSOUT_V_HIGH_H 0x7e +#define AXP20X_IPSOUT_V_HIGH_L 0x7f + +/* Power supply */ +#define AXP20X_DCDC_MODE 0x80 +#define AXP20X_ADC_EN1 0x82 +#define AXP20X_ADC_EN2 0x83 +#define AXP20X_ADC_RATE 0x84 +#define AXP20X_GPIO10_IN_RANGE 0x85 +#define 
AXP20X_GPIO1_ADC_IRQ_RIS 0x86 +#define AXP20X_GPIO1_ADC_IRQ_FAL 0x87 +#define AXP20X_TIMER_CTRL 0x8a +#define AXP20X_VBUS_MON 0x8b +#define AXP20X_OVER_TMP 0x8f + +/* GPIO */ +#define AXP20X_GPIO0_CTRL 0x90 +#define AXP20X_LDO5_V_OUT 0x91 +#define AXP20X_GPIO1_CTRL 0x92 +#define AXP20X_GPIO2_CTRL 0x93 +#define AXP20X_GPIO20_SS 0x94 +#define AXP20X_GPIO3_CTRL 0x95 + +/* Battery */ +#define AXP20X_CHRG_CC_31_24 0xb0 +#define AXP20X_CHRG_CC_23_16 0xb1 +#define AXP20X_CHRG_CC_15_8 0xb2 +#define AXP20X_CHRG_CC_7_0 0xb3 +#define AXP20X_DISCHRG_CC_31_24 0xb4 +#define AXP20X_DISCHRG_CC_23_16 0xb5 +#define AXP20X_DISCHRG_CC_15_8 0xb6 +#define AXP20X_DISCHRG_CC_7_0 0xb7 +#define AXP20X_CC_CTRL 0xb8 +#define AXP20X_FG_RES 0xb9 + +/* Regulators IDs */ +enum { + AXP20X_LDO1 = 0, + AXP20X_LDO2, + AXP20X_LDO3, + AXP20X_LDO4, + AXP20X_LDO5, + AXP20X_DCDC2, + AXP20X_DCDC3, + AXP20X_REG_ID_MAX, +}; + +/* IRQs */ +enum { + AXP20X_IRQ_ACIN_OVER_V = 1, + AXP20X_IRQ_ACIN_PLUGIN, + AXP20X_IRQ_ACIN_REMOVAL, + AXP20X_IRQ_VBUS_OVER_V, + AXP20X_IRQ_VBUS_PLUGIN, + AXP20X_IRQ_VBUS_REMOVAL, + AXP20X_IRQ_VBUS_V_LOW, + AXP20X_IRQ_BATT_PLUGIN, + AXP20X_IRQ_BATT_REMOVAL, + AXP20X_IRQ_BATT_ENT_ACT_MODE, + AXP20X_IRQ_BATT_EXIT_ACT_MODE, + AXP20X_IRQ_CHARG, + AXP20X_IRQ_CHARG_DONE, + AXP20X_IRQ_BATT_TEMP_HIGH, + AXP20X_IRQ_BATT_TEMP_LOW, + AXP20X_IRQ_DIE_TEMP_HIGH, + AXP20X_IRQ_CHARG_I_LOW, + AXP20X_IRQ_DCDC1_V_LONG, + AXP20X_IRQ_DCDC2_V_LONG, + AXP20X_IRQ_DCDC3_V_LONG, + AXP20X_IRQ_PEK_SHORT = 22, + AXP20X_IRQ_PEK_LONG, + AXP20X_IRQ_N_OE_PWR_ON, + AXP20X_IRQ_N_OE_PWR_OFF, + AXP20X_IRQ_VBUS_VALID, + AXP20X_IRQ_VBUS_NOT_VALID, + AXP20X_IRQ_VBUS_SESS_VALID, + AXP20X_IRQ_VBUS_SESS_END, + AXP20X_IRQ_LOW_PWR_LVL1, + AXP20X_IRQ_LOW_PWR_LVL2, + AXP20X_IRQ_TIMER, + AXP20X_IRQ_PEK_RIS_EDGE, + AXP20X_IRQ_PEK_FAL_EDGE, + AXP20X_IRQ_GPIO3_INPUT, + AXP20X_IRQ_GPIO2_INPUT, + AXP20X_IRQ_GPIO1_INPUT, + AXP20X_IRQ_GPIO0_INPUT, +}; + +struct axp20x_dev { + struct device *dev; + struct i2c_client *i2c_client; + struct regmap *regmap; + struct regmap_irq_chip_data *regmap_irqc; + long variant; +}; + +#endif /* __LINUX_MFD_AXP20X_H */ diff --git a/include/linux/mfd/cros_ec.h b/include/linux/mfd/cros_ec.h index 032af7fc5b2e..887ef4f7bef7 100644 --- a/include/linux/mfd/cros_ec.h +++ b/include/linux/mfd/cros_ec.h @@ -29,8 +29,8 @@ enum { EC_MSG_RX_PROTO_BYTES = 3, /* Max length of messages */ - EC_MSG_BYTES = EC_HOST_PARAM_SIZE + EC_MSG_TX_PROTO_BYTES, - + EC_MSG_BYTES = EC_PROTO2_MAX_PARAM_SIZE + + EC_MSG_TX_PROTO_BYTES, }; /** diff --git a/include/linux/mfd/cros_ec_commands.h b/include/linux/mfd/cros_ec_commands.h index 86fd06953bcd..7853a6410d14 100644 --- a/include/linux/mfd/cros_ec_commands.h +++ b/include/linux/mfd/cros_ec_commands.h @@ -24,25 +24,12 @@ #define __CROS_EC_COMMANDS_H /* - * Protocol overview + * Current version of this protocol * - * request: CMD [ P0 P1 P2 ... Pn S ] - * response: ERR [ P0 P1 P2 ... Pn S ] - * - * where the bytes are defined as follow : - * - CMD is the command code. (defined by EC_CMD_ constants) - * - ERR is the error code. (defined by EC_RES_ constants) - * - Px is the optional payload. - * it is not sent if the error code is not success. - * (defined by ec_params_ and ec_response_ structures) - * - S is the checksum which is the sum of all payload bytes. - * - * On LPC, CMD and ERR are sent/received at EC_LPC_ADDR_KERNEL|USER_CMD - * and the payloads are sent/received at EC_LPC_ADDR_KERNEL|USER_PARAM. - * On I2C, all bytes are sent serially in the same message. 
+ * TODO(crosbug.com/p/11223): This is effectively useless; protocol is + * determined in other ways. Remove this once the kernel code no longer + * depends on it. */ - -/* Current version of this protocol */ #define EC_PROTO_VERSION 0x00000002 /* Command version mask */ @@ -57,13 +44,19 @@ #define EC_LPC_ADDR_HOST_CMD 0x204 /* I/O addresses for host command args and params */ -#define EC_LPC_ADDR_HOST_ARGS 0x800 -#define EC_LPC_ADDR_HOST_PARAM 0x804 -#define EC_HOST_PARAM_SIZE 0x0fc /* Size of param area in bytes */ - -/* I/O addresses for host command params, old interface */ -#define EC_LPC_ADDR_OLD_PARAM 0x880 -#define EC_OLD_PARAM_SIZE 0x080 /* Size of param area in bytes */ +/* Protocol version 2 */ +#define EC_LPC_ADDR_HOST_ARGS 0x800 /* And 0x801, 0x802, 0x803 */ +#define EC_LPC_ADDR_HOST_PARAM 0x804 /* For version 2 params; size is + * EC_PROTO2_MAX_PARAM_SIZE */ +/* Protocol version 3 */ +#define EC_LPC_ADDR_HOST_PACKET 0x800 /* Offset of version 3 packet */ +#define EC_LPC_HOST_PACKET_SIZE 0x100 /* Max size of version 3 packet */ + +/* The actual block is 0x800-0x8ff, but some BIOSes think it's 0x880-0x8ff + * and they tell the kernel that so we have to think of it as two parts. */ +#define EC_HOST_CMD_REGION0 0x800 +#define EC_HOST_CMD_REGION1 0x880 +#define EC_HOST_CMD_REGION_SIZE 0x80 /* EC command register bit functions */ #define EC_LPC_CMDR_DATA (1 << 0) /* Data ready for host to read */ @@ -79,18 +72,22 @@ #define EC_MEMMAP_TEXT_MAX 8 /* Size of a string in the memory map */ /* The offset address of each type of data in mapped memory. */ -#define EC_MEMMAP_TEMP_SENSOR 0x00 /* Temp sensors */ -#define EC_MEMMAP_FAN 0x10 /* Fan speeds */ -#define EC_MEMMAP_TEMP_SENSOR_B 0x18 /* Temp sensors (second set) */ -#define EC_MEMMAP_ID 0x20 /* 'E' 'C' */ +#define EC_MEMMAP_TEMP_SENSOR 0x00 /* Temp sensors 0x00 - 0x0f */ +#define EC_MEMMAP_FAN 0x10 /* Fan speeds 0x10 - 0x17 */ +#define EC_MEMMAP_TEMP_SENSOR_B 0x18 /* More temp sensors 0x18 - 0x1f */ +#define EC_MEMMAP_ID 0x20 /* 0x20 == 'E', 0x21 == 'C' */ #define EC_MEMMAP_ID_VERSION 0x22 /* Version of data in 0x20 - 0x2f */ #define EC_MEMMAP_THERMAL_VERSION 0x23 /* Version of data in 0x00 - 0x1f */ #define EC_MEMMAP_BATTERY_VERSION 0x24 /* Version of data in 0x40 - 0x7f */ #define EC_MEMMAP_SWITCHES_VERSION 0x25 /* Version of data in 0x30 - 0x33 */ #define EC_MEMMAP_EVENTS_VERSION 0x26 /* Version of data in 0x34 - 0x3f */ -#define EC_MEMMAP_HOST_CMD_FLAGS 0x27 /* Host command interface flags */ -#define EC_MEMMAP_SWITCHES 0x30 -#define EC_MEMMAP_HOST_EVENTS 0x34 +#define EC_MEMMAP_HOST_CMD_FLAGS 0x27 /* Host cmd interface flags (8 bits) */ +/* Unused 0x28 - 0x2f */ +#define EC_MEMMAP_SWITCHES 0x30 /* 8 bits */ +/* Unused 0x31 - 0x33 */ +#define EC_MEMMAP_HOST_EVENTS 0x34 /* 32 bits */ +/* Reserve 0x38 - 0x3f for additional host event-related stuff */ +/* Battery values are all 32 bits */ #define EC_MEMMAP_BATT_VOLT 0x40 /* Battery Present Voltage */ #define EC_MEMMAP_BATT_RATE 0x44 /* Battery Present Rate */ #define EC_MEMMAP_BATT_CAP 0x48 /* Battery Remaining Capacity */ @@ -99,10 +96,24 @@ #define EC_MEMMAP_BATT_DVLT 0x54 /* Battery Design Voltage */ #define EC_MEMMAP_BATT_LFCC 0x58 /* Battery Last Full Charge Capacity */ #define EC_MEMMAP_BATT_CCNT 0x5c /* Battery Cycle Count */ +/* Strings are all 8 bytes (EC_MEMMAP_TEXT_MAX) */ #define EC_MEMMAP_BATT_MFGR 0x60 /* Battery Manufacturer String */ #define EC_MEMMAP_BATT_MODEL 0x68 /* Battery Model Number String */ #define EC_MEMMAP_BATT_SERIAL 0x70 /* Battery Serial Number String 
*/ #define EC_MEMMAP_BATT_TYPE 0x78 /* Battery Type String */ +#define EC_MEMMAP_ALS 0x80 /* ALS readings in lux (2 X 16 bits) */ +/* Unused 0x84 - 0x8f */ +#define EC_MEMMAP_ACC_STATUS 0x90 /* Accelerometer status (8 bits )*/ +/* Unused 0x91 */ +#define EC_MEMMAP_ACC_DATA 0x92 /* Accelerometer data 0x92 - 0x9f */ +#define EC_MEMMAP_GYRO_DATA 0xa0 /* Gyroscope data 0xa0 - 0xa5 */ +/* Unused 0xa6 - 0xfe (remember, 0xff is NOT part of the memmap region) */ + + +/* Define the format of the accelerometer mapped memory status byte. */ +#define EC_MEMMAP_ACC_STATUS_SAMPLE_ID_MASK 0x0f +#define EC_MEMMAP_ACC_STATUS_BUSY_BIT (1 << 4) +#define EC_MEMMAP_ACC_STATUS_PRESENCE_BIT (1 << 7) /* Number of temp sensors at EC_MEMMAP_TEMP_SENSOR */ #define EC_TEMP_SENSOR_ENTRIES 16 @@ -112,6 +123,8 @@ * Valid only if EC_MEMMAP_THERMAL_VERSION returns >= 2. */ #define EC_TEMP_SENSOR_B_ENTRIES 8 + +/* Special values for mapped temperature sensors */ #define EC_TEMP_SENSOR_NOT_PRESENT 0xff #define EC_TEMP_SENSOR_ERROR 0xfe #define EC_TEMP_SENSOR_NOT_POWERED 0xfd @@ -122,6 +135,18 @@ */ #define EC_TEMP_SENSOR_OFFSET 200 +/* + * Number of ALS readings at EC_MEMMAP_ALS + */ +#define EC_ALS_ENTRIES 2 + +/* + * The default value a temperature sensor will return when it is present but + * has not been read this boot. This is a reasonable number to avoid + * triggering alarms on the host. + */ +#define EC_TEMP_SENSOR_DEFAULT (296 - EC_TEMP_SENSOR_OFFSET) + #define EC_FAN_SPEED_ENTRIES 4 /* Number of fans at EC_MEMMAP_FAN */ #define EC_FAN_SPEED_NOT_PRESENT 0xffff /* Entry not present */ #define EC_FAN_SPEED_STALLED 0xfffe /* Fan stalled */ @@ -137,8 +162,8 @@ #define EC_SWITCH_LID_OPEN 0x01 #define EC_SWITCH_POWER_BUTTON_PRESSED 0x02 #define EC_SWITCH_WRITE_PROTECT_DISABLED 0x04 -/* Recovery requested via keyboard */ -#define EC_SWITCH_KEYBOARD_RECOVERY 0x08 +/* Was recovery requested via keyboard; now unused. */ +#define EC_SWITCH_IGNORE1 0x08 /* Recovery requested via dedicated signal (from servo board) */ #define EC_SWITCH_DEDICATED_RECOVERY 0x10 /* Was fake developer mode switch; now unused. Remove in next refactor. */ @@ -147,10 +172,15 @@ /* Host command interface flags */ /* Host command interface supports LPC args (LPC interface only) */ #define EC_HOST_CMD_FLAG_LPC_ARGS_SUPPORTED 0x01 +/* Host command interface supports version 3 protocol */ +#define EC_HOST_CMD_FLAG_VERSION_3 0x02 /* Wireless switch flags */ -#define EC_WIRELESS_SWITCH_WLAN 0x01 -#define EC_WIRELESS_SWITCH_BLUETOOTH 0x02 +#define EC_WIRELESS_SWITCH_ALL ~0x00 /* All flags */ +#define EC_WIRELESS_SWITCH_WLAN 0x01 /* WLAN radio */ +#define EC_WIRELESS_SWITCH_BLUETOOTH 0x02 /* Bluetooth radio */ +#define EC_WIRELESS_SWITCH_WWAN 0x04 /* WWAN power */ +#define EC_WIRELESS_SWITCH_WLAN_POWER 0x08 /* WLAN power */ /* * This header file is used in coreboot both in C and ACPI code. The ACPI code @@ -159,6 +189,14 @@ */ #ifndef __ACPI__ +/* + * Define __packed if someone hasn't beat us to it. Linux kernel style + * checking prefers __packed over __attribute__((packed)). 
+ */ +#ifndef __packed +#define __packed __attribute__((packed)) +#endif + /* LPC command status byte masks */ /* EC has written a byte in the data register and host hasn't read it yet */ #define EC_LPC_STATUS_TO_HOST 0x01 @@ -198,6 +236,9 @@ enum ec_status { EC_RES_UNAVAILABLE = 9, /* No response available */ EC_RES_TIMEOUT = 10, /* We got a timeout */ EC_RES_OVERFLOW = 11, /* Table / data overflow */ + EC_RES_INVALID_HEADER = 12, /* Header contains invalid data */ + EC_RES_REQUEST_TRUNCATED = 13, /* Didn't get the entire request */ + EC_RES_RESPONSE_TOO_BIG = 14 /* Response was too big to handle */ }; /* @@ -235,6 +276,16 @@ enum host_event_code { /* Shutdown due to battery level too low */ EC_HOST_EVENT_BATTERY_SHUTDOWN = 17, + /* Suggest that the AP throttle itself */ + EC_HOST_EVENT_THROTTLE_START = 18, + /* Suggest that the AP resume normal speed */ + EC_HOST_EVENT_THROTTLE_STOP = 19, + + /* Hang detect logic detected a hang and host event timeout expired */ + EC_HOST_EVENT_HANG_DETECT = 20, + /* Hang detect logic detected a hang and warm rebooted the AP */ + EC_HOST_EVENT_HANG_REBOOT = 21, + /* * The high bit of the event mask is not used as a host event code. If * it reads back as set, then the entire event mask should be @@ -279,6 +330,188 @@ struct ec_lpc_host_args { */ #define EC_HOST_ARGS_FLAG_TO_HOST 0x02 +/*****************************************************************************/ +/* + * Byte codes returned by EC over SPI interface. + * + * These can be used by the AP to debug the EC interface, and to determine + * when the EC is not in a state where it will ever get around to responding + * to the AP. + * + * Example of sequence of bytes read from EC for a current good transfer: + * 1. - - AP asserts chip select (CS#) + * 2. EC_SPI_OLD_READY - AP sends first byte(s) of request + * 3. - - EC starts handling CS# interrupt + * 4. EC_SPI_RECEIVING - AP sends remaining byte(s) of request + * 5. EC_SPI_PROCESSING - EC starts processing request; AP is clocking in + * bytes looking for EC_SPI_FRAME_START + * 6. - - EC finishes processing and sets up response + * 7. EC_SPI_FRAME_START - AP reads frame byte + * 8. (response packet) - AP reads response packet + * 9. EC_SPI_PAST_END - Any additional bytes read by AP + * 10 - - AP deasserts chip select + * 11 - - EC processes CS# interrupt and sets up DMA for + * next request + * + * If the AP is waiting for EC_SPI_FRAME_START and sees any value other than + * the following byte values: + * EC_SPI_OLD_READY + * EC_SPI_RX_READY + * EC_SPI_RECEIVING + * EC_SPI_PROCESSING + * + * Then the EC found an error in the request, or was not ready for the request + * and lost data. The AP should give up waiting for EC_SPI_FRAME_START, + * because the EC is unable to tell when the AP is done sending its request. + */ + +/* + * Framing byte which precedes a response packet from the EC. After sending a + * request, the AP will clock in bytes until it sees the framing byte, then + * clock in the response packet. + */ +#define EC_SPI_FRAME_START 0xec + +/* + * Padding bytes which are clocked out after the end of a response packet. + */ +#define EC_SPI_PAST_END 0xed + +/* + * EC is ready to receive, and has ignored the byte sent by the AP. EC expects + * that the AP will send a valid packet header (starting with + * EC_COMMAND_PROTOCOL_3) in the next 32 bytes. + */ +#define EC_SPI_RX_READY 0xf8 + +/* + * EC has started receiving the request from the AP, but hasn't started + * processing it yet. 
+ */ +#define EC_SPI_RECEIVING 0xf9 + +/* EC has received the entire request from the AP and is processing it. */ +#define EC_SPI_PROCESSING 0xfa + +/* + * EC received bad data from the AP, such as a packet header with an invalid + * length. EC will ignore all data until chip select deasserts. + */ +#define EC_SPI_RX_BAD_DATA 0xfb + +/* + * EC received data from the AP before it was ready. That is, the AP asserted + * chip select and started clocking data before the EC was ready to receive it. + * EC will ignore all data until chip select deasserts. + */ +#define EC_SPI_NOT_READY 0xfc + +/* + * EC was ready to receive a request from the AP. EC has treated the byte sent + * by the AP as part of a request packet, or (for old-style ECs) is processing + * a fully received packet but is not ready to respond yet. + */ +#define EC_SPI_OLD_READY 0xfd + +/*****************************************************************************/ + +/* + * Protocol version 2 for I2C and SPI send a request this way: + * + * 0 EC_CMD_VERSION0 + (command version) + * 1 Command number + * 2 Length of params = N + * 3..N+2 Params, if any + * N+3 8-bit checksum of bytes 0..N+2 + * + * The corresponding response is: + * + * 0 Result code (EC_RES_*) + * 1 Length of params = M + * 2..M+1 Params, if any + * M+2 8-bit checksum of bytes 0..M+1 + */ +#define EC_PROTO2_REQUEST_HEADER_BYTES 3 +#define EC_PROTO2_REQUEST_TRAILER_BYTES 1 +#define EC_PROTO2_REQUEST_OVERHEAD (EC_PROTO2_REQUEST_HEADER_BYTES + \ + EC_PROTO2_REQUEST_TRAILER_BYTES) + +#define EC_PROTO2_RESPONSE_HEADER_BYTES 2 +#define EC_PROTO2_RESPONSE_TRAILER_BYTES 1 +#define EC_PROTO2_RESPONSE_OVERHEAD (EC_PROTO2_RESPONSE_HEADER_BYTES + \ + EC_PROTO2_RESPONSE_TRAILER_BYTES) + +/* Parameter length was limited by the LPC interface */ +#define EC_PROTO2_MAX_PARAM_SIZE 0xfc + +/* Maximum request and response packet sizes for protocol version 2 */ +#define EC_PROTO2_MAX_REQUEST_SIZE (EC_PROTO2_REQUEST_OVERHEAD + \ + EC_PROTO2_MAX_PARAM_SIZE) +#define EC_PROTO2_MAX_RESPONSE_SIZE (EC_PROTO2_RESPONSE_OVERHEAD + \ + EC_PROTO2_MAX_PARAM_SIZE) + +/*****************************************************************************/ + +/* + * Value written to legacy command port / prefix byte to indicate protocol + * 3+ structs are being used. Usage is bus-dependent. + */ +#define EC_COMMAND_PROTOCOL_3 0xda + +#define EC_HOST_REQUEST_VERSION 3 + +/* Version 3 request from host */ +struct ec_host_request { + /* Struct version (=3) + * + * EC will return EC_RES_INVALID_HEADER if it receives a header with a + * version it doesn't know how to parse. + */ + uint8_t struct_version; + + /* + * Checksum of request and data; sum of all bytes including checksum + * should total to 0. + */ + uint8_t checksum; + + /* Command code */ + uint16_t command; + + /* Command version */ + uint8_t command_version; + + /* Unused byte in current protocol version; set to 0 */ + uint8_t reserved; + + /* Length of data which follows this header */ + uint16_t data_len; +} __packed; + +#define EC_HOST_RESPONSE_VERSION 3 + +/* Version 3 response from EC */ +struct ec_host_response { + /* Struct version (=3) */ + uint8_t struct_version; + + /* + * Checksum of response and data; sum of all bytes including checksum + * should total to 0. 
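A minimal sketch of the checksum rule just stated (illustrative only, not part of this patch; it assumes the request data is laid out contiguously after the struct ec_host_request header, as data_len implies):

#include <linux/types.h>

static bool ec_host_request_csum_ok(const struct ec_host_request *rq)
{
	const u8 *p = (const u8 *)rq;
	size_t len = sizeof(*rq) + rq->data_len;
	u8 sum = 0;
	size_t i;

	for (i = 0; i < len; i++)
		sum += p[i];

	/* Header, data and checksum byte must sum to 0 (mod 256) */
	return sum == 0;
}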
+ */ + uint8_t checksum; + + /* Result code (EC_RES_*) */ + uint16_t result; + + /* Length of data which follows this header */ + uint16_t data_len; + + /* Unused bytes in current protocol version; set to 0 */ + uint16_t reserved; +} __packed; + +/*****************************************************************************/ /* * Notes on commands: * @@ -418,6 +651,68 @@ struct ec_response_get_comms_status { uint32_t flags; /* Mask of enum ec_comms_status */ } __packed; +/* Fake a variety of responses, purely for testing purposes. */ +#define EC_CMD_TEST_PROTOCOL 0x0a + +/* Tell the EC what to send back to us. */ +struct ec_params_test_protocol { + uint32_t ec_result; + uint32_t ret_len; + uint8_t buf[32]; +} __packed; + +/* Here it comes... */ +struct ec_response_test_protocol { + uint8_t buf[32]; +} __packed; + +/* Get prococol information */ +#define EC_CMD_GET_PROTOCOL_INFO 0x0b + +/* Flags for ec_response_get_protocol_info.flags */ +/* EC_RES_IN_PROGRESS may be returned if a command is slow */ +#define EC_PROTOCOL_INFO_IN_PROGRESS_SUPPORTED (1 << 0) + +struct ec_response_get_protocol_info { + /* Fields which exist if at least protocol version 3 supported */ + + /* Bitmask of protocol versions supported (1 << n means version n)*/ + uint32_t protocol_versions; + + /* Maximum request packet size, in bytes */ + uint16_t max_request_packet_size; + + /* Maximum response packet size, in bytes */ + uint16_t max_response_packet_size; + + /* Flags; see EC_PROTOCOL_INFO_* */ + uint32_t flags; +} __packed; + + +/*****************************************************************************/ +/* Get/Set miscellaneous values */ + +/* The upper byte of .flags tells what to do (nothing means "get") */ +#define EC_GSV_SET 0x80000000 + +/* The lower three bytes of .flags identifies the parameter, if that has + meaning for an individual command. */ +#define EC_GSV_PARAM_MASK 0x00ffffff + +struct ec_params_get_set_value { + uint32_t flags; + uint32_t value; +} __packed; + +struct ec_response_get_set_value { + uint32_t flags; + uint32_t value; +} __packed; + +/* More than one command can use these structs to get/set paramters. */ +#define EC_CMD_GSV_PAUSE_IN_S5 0x0c + /*****************************************************************************/ /* Flash commands */ @@ -425,6 +720,7 @@ struct ec_response_get_comms_status { /* Get flash info */ #define EC_CMD_FLASH_INFO 0x10 +/* Version 0 returns these fields */ struct ec_response_flash_info { /* Usable flash size, in bytes */ uint32_t flash_size; @@ -445,6 +741,37 @@ struct ec_response_flash_info { uint32_t protect_block_size; } __packed; +/* Flags for version 1+ flash info command */ +/* EC flash erases bits to 0 instead of 1 */ +#define EC_FLASH_INFO_ERASE_TO_0 (1 << 0) + +/* + * Version 1 returns the same initial fields as version 0, with additional + * fields following. + * + * gcc anonymous structs don't seem to get along with the __packed directive; + * if they did we'd define the version 0 struct as a sub-struct of this one. + */ +struct ec_response_flash_info_1 { + /* Version 0 fields; see above for description */ + uint32_t flash_size; + uint32_t write_block_size; + uint32_t erase_block_size; + uint32_t protect_block_size; + + /* Version 1 adds these fields: */ + /* + * Ideal write size in bytes. Writes will be fastest if size is + * exactly this and offset is a multiple of this. For example, an EC + * may have a write buffer which can do half-page operations if data is + * aligned, and a slower word-at-a-time write mode. 
+ */ + uint32_t write_ideal_size; + + /* Flags; see EC_FLASH_INFO_* */ + uint32_t flags; +} __packed; + /* * Read flash * @@ -459,15 +786,15 @@ struct ec_params_flash_read { /* Write flash */ #define EC_CMD_FLASH_WRITE 0x12 +#define EC_VER_FLASH_WRITE 1 + +/* Version 0 of the flash command supported only 64 bytes of data */ +#define EC_FLASH_WRITE_VER0_SIZE 64 struct ec_params_flash_write { uint32_t offset; /* Byte offset to write */ uint32_t size; /* Size to write in bytes */ - /* - * Data to write. Could really use EC_PARAM_SIZE - 8, but tidiest to - * use a power of 2 so writes stay aligned. - */ - uint8_t data[64]; + /* Followed by data to write */ } __packed; /* Erase flash */ @@ -543,7 +870,7 @@ struct ec_response_flash_protect { enum ec_flash_region { /* Region which holds read-only EC image */ - EC_FLASH_REGION_RO, + EC_FLASH_REGION_RO = 0, /* Region which holds rewritable EC image */ EC_FLASH_REGION_RW, /* @@ -551,6 +878,8 @@ enum ec_flash_region { * EC_FLASH_REGION_RO) */ EC_FLASH_REGION_WP_RO, + /* Number of regions */ + EC_FLASH_REGION_COUNT, }; struct ec_params_flash_region_info { @@ -639,15 +968,15 @@ struct rgb_s { */ struct lightbar_params { /* Timing */ - int google_ramp_up; - int google_ramp_down; - int s3s0_ramp_up; - int s0_tick_delay[2]; /* AC=0/1 */ - int s0a_tick_delay[2]; /* AC=0/1 */ - int s0s3_ramp_down; - int s3_sleep_for; - int s3_ramp_up; - int s3_ramp_down; + int32_t google_ramp_up; + int32_t google_ramp_down; + int32_t s3s0_ramp_up; + int32_t s0_tick_delay[2]; /* AC=0/1 */ + int32_t s0a_tick_delay[2]; /* AC=0/1 */ + int32_t s0s3_ramp_down; + int32_t s3_sleep_for; + int32_t s3_ramp_up; + int32_t s3_ramp_down; /* Oscillation */ uint8_t new_s0; @@ -676,7 +1005,7 @@ struct ec_params_lightbar { union { struct { /* no args */ - } dump, off, on, init, get_seq, get_params; + } dump, off, on, init, get_seq, get_params, version; struct num { uint8_t num; @@ -710,6 +1039,11 @@ struct ec_response_lightbar { struct lightbar_params get_params; + struct version { + uint32_t num; + uint32_t flags; + } version; + struct { /* no return params */ } off, on, init, brightness, seq, reg, rgb, demo, set_params; @@ -730,10 +1064,62 @@ enum lightbar_command { LIGHTBAR_CMD_DEMO = 9, LIGHTBAR_CMD_GET_PARAMS = 10, LIGHTBAR_CMD_SET_PARAMS = 11, + LIGHTBAR_CMD_VERSION = 12, LIGHTBAR_NUM_CMDS }; /*****************************************************************************/ +/* LED control commands */ + +#define EC_CMD_LED_CONTROL 0x29 + +enum ec_led_id { + /* LED to indicate battery state of charge */ + EC_LED_ID_BATTERY_LED = 0, + /* + * LED to indicate system power state (on or in suspend). + * May be on power button or on C-panel. + */ + EC_LED_ID_POWER_LED, + /* LED on power adapter or its plug */ + EC_LED_ID_ADAPTER_LED, + + EC_LED_ID_COUNT +}; + +/* LED control flags */ +#define EC_LED_FLAGS_QUERY (1 << 0) /* Query LED capability only */ +#define EC_LED_FLAGS_AUTO (1 << 1) /* Switch LED back to automatic control */ + +enum ec_led_colors { + EC_LED_COLOR_RED = 0, + EC_LED_COLOR_GREEN, + EC_LED_COLOR_BLUE, + EC_LED_COLOR_YELLOW, + EC_LED_COLOR_WHITE, + + EC_LED_COLOR_COUNT +}; + +struct ec_params_led_control { + uint8_t led_id; /* Which LED to control */ + uint8_t flags; /* Control flags */ + + uint8_t brightness[EC_LED_COLOR_COUNT]; +} __packed; + +struct ec_response_led_control { + /* + * Available brightness value range. + * + * Range 0 means color channel not present. + * Range 1 means on/off control. + * Other values means the LED is control by PWM. 
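A short illustrative sketch of filling the LED control request above (not part of this patch; how the buffer is then sent as host command EC_CMD_LED_CONTROL over LPC/I2C/SPI is outside the scope of this header). This forces the battery LED to green; passing EC_LED_FLAGS_QUERY instead reads back the per-colour brightness_range.

#include <linux/string.h>

static void fill_battery_led_green(struct ec_params_led_control *p)
{
	memset(p, 0, sizeof(*p));
	p->led_id = EC_LED_ID_BATTERY_LED;
	p->flags = 0;	/* neither QUERY nor AUTO: apply the values below */
	p->brightness[EC_LED_COLOR_GREEN] = 1;	/* non-zero = on */
}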
+ */ + uint8_t brightness_range[EC_LED_COLOR_COUNT]; +} __packed; + +/*****************************************************************************/ /* Verified boot commands */ /* @@ -790,6 +1176,181 @@ enum ec_vboot_hash_status { #define EC_VBOOT_HASH_OFFSET_RW 0xfffffffd /*****************************************************************************/ +/* + * Motion sense commands. We'll make separate structs for sub-commands with + * different input args, so that we know how much to expect. + */ +#define EC_CMD_MOTION_SENSE_CMD 0x2B + +/* Motion sense commands */ +enum motionsense_command { + /* + * Dump command returns all motion sensor data including motion sense + * module flags and individual sensor flags. + */ + MOTIONSENSE_CMD_DUMP = 0, + + /* + * Info command returns data describing the details of a given sensor, + * including enum motionsensor_type, enum motionsensor_location, and + * enum motionsensor_chip. + */ + MOTIONSENSE_CMD_INFO = 1, + + /* + * EC Rate command is a setter/getter command for the EC sampling rate + * of all motion sensors in milliseconds. + */ + MOTIONSENSE_CMD_EC_RATE = 2, + + /* + * Sensor ODR command is a setter/getter command for the output data + * rate of a specific motion sensor in millihertz. + */ + MOTIONSENSE_CMD_SENSOR_ODR = 3, + + /* + * Sensor range command is a setter/getter command for the range of + * a specified motion sensor in +/-G's or +/- deg/s. + */ + MOTIONSENSE_CMD_SENSOR_RANGE = 4, + + /* + * Setter/getter command for the keyboard wake angle. When the lid + * angle is greater than this value, keyboard wake is disabled in S3, + * and when the lid angle goes less than this value, keyboard wake is + * enabled. Note, the lid angle measurement is an approximate, + * un-calibrated value, hence the wake angle isn't exact. + */ + MOTIONSENSE_CMD_KB_WAKE_ANGLE = 5, + + /* Number of motionsense sub-commands. */ + MOTIONSENSE_NUM_CMDS +}; + +enum motionsensor_id { + EC_MOTION_SENSOR_ACCEL_BASE = 0, + EC_MOTION_SENSOR_ACCEL_LID = 1, + EC_MOTION_SENSOR_GYRO = 2, + + /* + * Note, if more sensors are added and this count changes, the padding + * in ec_response_motion_sense dump command must be modified. + */ + EC_MOTION_SENSOR_COUNT = 3 +}; + +/* List of motion sensor types. */ +enum motionsensor_type { + MOTIONSENSE_TYPE_ACCEL = 0, + MOTIONSENSE_TYPE_GYRO = 1, +}; + +/* List of motion sensor locations. */ +enum motionsensor_location { + MOTIONSENSE_LOC_BASE = 0, + MOTIONSENSE_LOC_LID = 1, +}; + +/* List of motion sensor chips. */ +enum motionsensor_chip { + MOTIONSENSE_CHIP_KXCJ9 = 0, +}; + +/* Module flag masks used for the dump sub-command. */ +#define MOTIONSENSE_MODULE_FLAG_ACTIVE (1<<0) + +/* Sensor flag masks used for the dump sub-command. */ +#define MOTIONSENSE_SENSOR_FLAG_PRESENT (1<<0) + +/* + * Send this value for the data element to only perform a read. If you + * send any other value, the EC will interpret it as data to set and will + * return the actual value set. + */ +#define EC_MOTION_SENSE_NO_VALUE -1 + +struct ec_params_motion_sense { + uint8_t cmd; + union { + /* Used for MOTIONSENSE_CMD_DUMP. */ + struct { + /* no args */ + } dump; + + /* + * Used for MOTIONSENSE_CMD_EC_RATE and + * MOTIONSENSE_CMD_KB_WAKE_ANGLE. + */ + struct { + /* Data to set or EC_MOTION_SENSE_NO_VALUE to read. */ + int16_t data; + } ec_rate, kb_wake_angle; + + /* Used for MOTIONSENSE_CMD_INFO. */ + struct { + /* Should be element of enum motionsensor_id. 
*/ + uint8_t sensor_num; + } info; + + /* + * Used for MOTIONSENSE_CMD_SENSOR_ODR and + * MOTIONSENSE_CMD_SENSOR_RANGE. + */ + struct { + /* Should be element of enum motionsensor_id. */ + uint8_t sensor_num; + + /* Rounding flag, true for round-up, false for down. */ + uint8_t roundup; + + uint16_t reserved; + + /* Data to set or EC_MOTION_SENSE_NO_VALUE to read. */ + int32_t data; + } sensor_odr, sensor_range; + }; +} __packed; + +struct ec_response_motion_sense { + union { + /* Used for MOTIONSENSE_CMD_DUMP. */ + struct { + /* Flags representing the motion sensor module. */ + uint8_t module_flags; + + /* Flags for each sensor in enum motionsensor_id. */ + uint8_t sensor_flags[EC_MOTION_SENSOR_COUNT]; + + /* Array of all sensor data. Each sensor is 3-axis. */ + int16_t data[3*EC_MOTION_SENSOR_COUNT]; + } dump; + + /* Used for MOTIONSENSE_CMD_INFO. */ + struct { + /* Should be element of enum motionsensor_type. */ + uint8_t type; + + /* Should be element of enum motionsensor_location. */ + uint8_t location; + + /* Should be element of enum motionsensor_chip. */ + uint8_t chip; + } info; + + /* + * Used for MOTIONSENSE_CMD_EC_RATE, MOTIONSENSE_CMD_SENSOR_ODR, + * MOTIONSENSE_CMD_SENSOR_RANGE, and + * MOTIONSENSE_CMD_KB_WAKE_ANGLE. + */ + struct { + /* Current value of the parameter queried. */ + int32_t ret; + } ec_rate, sensor_odr, sensor_range, kb_wake_angle; + }; +} __packed; + +/*****************************************************************************/ /* USB charging control commands */ /* Set USB port charging mode */ @@ -868,20 +1429,27 @@ struct ec_response_port80_last_boot { } __packed; /*****************************************************************************/ -/* Thermal engine commands */ +/* Thermal engine commands. Note that there are two implementations. We'll + * reuse the command number, but the data and behavior is incompatible. + * Version 0 is what originally shipped on Link. + * Version 1 separates the CPU thermal limits from the fan control. + */ -/* Set thershold value */ #define EC_CMD_THERMAL_SET_THRESHOLD 0x50 +#define EC_CMD_THERMAL_GET_THRESHOLD 0x51 + +/* The version 0 structs are opaque. You have to know what they are for + * the get/set commands to make any sense. + */ +/* Version 0 - set */ struct ec_params_thermal_set_threshold { uint8_t sensor_type; uint8_t threshold_id; uint16_t value; } __packed; -/* Get threshold value */ -#define EC_CMD_THERMAL_GET_THRESHOLD 0x51 - +/* Version 0 - get */ struct ec_params_thermal_get_threshold { uint8_t sensor_type; uint8_t threshold_id; @@ -891,6 +1459,41 @@ struct ec_response_thermal_get_threshold { uint16_t value; } __packed; + +/* The version 1 structs are visible. */ +enum ec_temp_thresholds { + EC_TEMP_THRESH_WARN = 0, + EC_TEMP_THRESH_HIGH, + EC_TEMP_THRESH_HALT, + + EC_TEMP_THRESH_COUNT +}; + +/* Thermal configuration for one temperature sensor. Temps are in degrees K. + * Zero values will be silently ignored by the thermal task. + */ +struct ec_thermal_config { + uint32_t temp_host[EC_TEMP_THRESH_COUNT]; /* levels of hotness */ + uint32_t temp_fan_off; /* no active cooling needed */ + uint32_t temp_fan_max; /* max active cooling needed */ +} __packed; + +/* Version 1 - get config for one sensor. */ +struct ec_params_thermal_get_threshold_v1 { + uint32_t sensor_num; +} __packed; +/* This returns a struct ec_thermal_config */ + +/* Version 1 - set config for one sensor. + * Use read-modify-write for best results! 
*/ +struct ec_params_thermal_set_threshold_v1 { + uint32_t sensor_num; + struct ec_thermal_config cfg; +} __packed; +/* This returns no data */ + +/****************************************************************************/ + /* Toggle automatic fan control */ #define EC_CMD_THERMAL_AUTO_FAN_CTRL 0x52 @@ -920,6 +1523,18 @@ struct ec_params_tmp006_set_calibration { float b2; } __packed; +/* Read raw TMP006 data */ +#define EC_CMD_TMP006_GET_RAW 0x55 + +struct ec_params_tmp006_get_raw { + uint8_t index; +} __packed; + +struct ec_response_tmp006_get_raw { + int32_t t; /* In 1/100 K */ + int32_t v; /* In nV */ +}; + /*****************************************************************************/ /* MKBP - Matrix KeyBoard Protocol */ @@ -1118,11 +1733,41 @@ struct ec_params_switch_enable_backlight { /* Enable/disable WLAN/Bluetooth */ #define EC_CMD_SWITCH_ENABLE_WIRELESS 0x91 +#define EC_VER_SWITCH_ENABLE_WIRELESS 1 -struct ec_params_switch_enable_wireless { +/* Version 0 params; no response */ +struct ec_params_switch_enable_wireless_v0 { uint8_t enabled; } __packed; +/* Version 1 params */ +struct ec_params_switch_enable_wireless_v1 { + /* Flags to enable now */ + uint8_t now_flags; + + /* Which flags to copy from now_flags */ + uint8_t now_mask; + + /* + * Flags to leave enabled in S3, if they're on at the S0->S3 + * transition. (Other flags will be disabled by the S0->S3 + * transition.) + */ + uint8_t suspend_flags; + + /* Which flags to copy from suspend_flags */ + uint8_t suspend_mask; +} __packed; + +/* Version 1 response */ +struct ec_response_switch_enable_wireless_v1 { + /* Flags to enable now */ + uint8_t now_flags; + + /* Flags to leave enabled in S3 */ + uint8_t suspend_flags; +} __packed; + /*****************************************************************************/ /* GPIO commands. Only available on EC if write protect has been disabled. */ @@ -1147,11 +1792,16 @@ struct ec_response_gpio_get { /*****************************************************************************/ /* I2C commands. Only available when flash write protect is unlocked. */ +/* + * TODO(crosbug.com/p/23570): These commands are deprecated, and will be + * removed soon. Use EC_CMD_I2C_XFER instead. + */ + /* Read I2C bus */ #define EC_CMD_I2C_READ 0x94 struct ec_params_i2c_read { - uint16_t addr; + uint16_t addr; /* 8-bit address (7-bit shifted << 1) */ uint8_t read_size; /* Either 8 or 16. */ uint8_t port; uint8_t offset; @@ -1165,7 +1815,7 @@ struct ec_response_i2c_read { struct ec_params_i2c_write { uint16_t data; - uint16_t addr; + uint16_t addr; /* 8-bit address (7-bit shifted << 1) */ uint8_t write_size; /* Either 8 or 16. */ uint8_t port; uint8_t offset; @@ -1174,11 +1824,20 @@ struct ec_params_i2c_write { /*****************************************************************************/ /* Charge state commands. Only available when flash write protect unlocked. */ -/* Force charge state machine to stop in idle mode */ -#define EC_CMD_CHARGE_FORCE_IDLE 0x96 +/* Force charge state machine to stop charging the battery or force it to + * discharge the battery. 
+ */ +#define EC_CMD_CHARGE_CONTROL 0x96 +#define EC_VER_CHARGE_CONTROL 1 -struct ec_params_force_idle { - uint8_t enabled; +enum ec_charge_control_mode { + CHARGE_CONTROL_NORMAL = 0, + CHARGE_CONTROL_IDLE, + CHARGE_CONTROL_DISCHARGE, +}; + +struct ec_params_charge_control { + uint32_t mode; /* enum charge_control_mode */ } __packed; /*****************************************************************************/ @@ -1206,14 +1865,231 @@ struct ec_params_force_idle { #define EC_CMD_BATTERY_CUT_OFF 0x99 /*****************************************************************************/ -/* Temporary debug commands. TODO: remove this crosbug.com/p/13849 */ +/* USB port mux control. */ /* - * Dump charge state machine context. - * - * Response is a binary dump of charge state machine context. + * Switch USB mux or return to automatic switching. + */ +#define EC_CMD_USB_MUX 0x9a + +struct ec_params_usb_mux { + uint8_t mux; +} __packed; + +/*****************************************************************************/ +/* LDOs / FETs control. */ + +enum ec_ldo_state { + EC_LDO_STATE_OFF = 0, /* the LDO / FET is shut down */ + EC_LDO_STATE_ON = 1, /* the LDO / FET is ON / providing power */ +}; + +/* + * Switch on/off a LDO. + */ +#define EC_CMD_LDO_SET 0x9b + +struct ec_params_ldo_set { + uint8_t index; + uint8_t state; +} __packed; + +/* + * Get LDO state. + */ +#define EC_CMD_LDO_GET 0x9c + +struct ec_params_ldo_get { + uint8_t index; +} __packed; + +struct ec_response_ldo_get { + uint8_t state; +} __packed; + +/*****************************************************************************/ +/* Power info. */ + +/* + * Get power info. + */ +#define EC_CMD_POWER_INFO 0x9d + +struct ec_response_power_info { + uint32_t usb_dev_type; + uint16_t voltage_ac; + uint16_t voltage_system; + uint16_t current_system; + uint16_t usb_current_limit; +} __packed; + +/*****************************************************************************/ +/* I2C passthru command */ + +#define EC_CMD_I2C_PASSTHRU 0x9e + +/* Slave address is 10 (not 7) bit */ +#define EC_I2C_FLAG_10BIT (1 << 16) + +/* Read data; if not present, message is a write */ +#define EC_I2C_FLAG_READ (1 << 15) + +/* Mask for address */ +#define EC_I2C_ADDR_MASK 0x3ff + +#define EC_I2C_STATUS_NAK (1 << 0) /* Transfer was not acknowledged */ +#define EC_I2C_STATUS_TIMEOUT (1 << 1) /* Timeout during transfer */ + +/* Any error */ +#define EC_I2C_STATUS_ERROR (EC_I2C_STATUS_NAK | EC_I2C_STATUS_TIMEOUT) + +struct ec_params_i2c_passthru_msg { + uint16_t addr_flags; /* I2C slave address (7 or 10 bits) and flags */ + uint16_t len; /* Number of bytes to read or write */ +} __packed; + +struct ec_params_i2c_passthru { + uint8_t port; /* I2C port number */ + uint8_t num_msgs; /* Number of messages */ + struct ec_params_i2c_passthru_msg msg[]; + /* Data to write for all messages is concatenated here */ +} __packed; + +struct ec_response_i2c_passthru { + uint8_t i2c_status; /* Status flags (EC_I2C_STATUS_...) 
*/ + uint8_t num_msgs; /* Number of messages processed */ + uint8_t data[]; /* Data read by messages concatenated here */ +} __packed; + +/*****************************************************************************/ +/* Power button hang detect */ + +#define EC_CMD_HANG_DETECT 0x9f + +/* Reasons to start hang detection timer */ +/* Power button pressed */ +#define EC_HANG_START_ON_POWER_PRESS (1 << 0) + +/* Lid closed */ +#define EC_HANG_START_ON_LID_CLOSE (1 << 1) + + /* Lid opened */ +#define EC_HANG_START_ON_LID_OPEN (1 << 2) + +/* Start of AP S3->S0 transition (booting or resuming from suspend) */ +#define EC_HANG_START_ON_RESUME (1 << 3) + +/* Reasons to cancel hang detection */ + +/* Power button released */ +#define EC_HANG_STOP_ON_POWER_RELEASE (1 << 8) + +/* Any host command from AP received */ +#define EC_HANG_STOP_ON_HOST_COMMAND (1 << 9) + +/* Stop on end of AP S0->S3 transition (suspending or shutting down) */ +#define EC_HANG_STOP_ON_SUSPEND (1 << 10) + +/* + * If this flag is set, all the other fields are ignored, and the hang detect + * timer is started. This provides the AP a way to start the hang timer + * without reconfiguring any of the other hang detect settings. Note that + * you must previously have configured the timeouts. + */ +#define EC_HANG_START_NOW (1 << 30) + +/* + * If this flag is set, all the other fields are ignored (including + * EC_HANG_START_NOW). This provides the AP a way to stop the hang timer + * without reconfiguring any of the other hang detect settings. */ -#define EC_CMD_CHARGE_DUMP 0xa0 +#define EC_HANG_STOP_NOW (1 << 31) + +struct ec_params_hang_detect { + /* Flags; see EC_HANG_* */ + uint32_t flags; + + /* Timeout in msec before generating host event, if enabled */ + uint16_t host_event_timeout_msec; + + /* Timeout in msec before generating warm reboot, if enabled */ + uint16_t warm_reboot_timeout_msec; +} __packed; + +/*****************************************************************************/ +/* Commands for battery charging */ + +/* + * This is the single catch-all host command to exchange data regarding the + * charge state machine (v2 and up). + */ +#define EC_CMD_CHARGE_STATE 0xa0 + +/* Subcommands for this host command */ +enum charge_state_command { + CHARGE_STATE_CMD_GET_STATE, + CHARGE_STATE_CMD_GET_PARAM, + CHARGE_STATE_CMD_SET_PARAM, + CHARGE_STATE_NUM_CMDS +}; + +/* + * Known param numbers are defined here. Ranges are reserved for board-specific + * params, which are handled by the particular implementations. + */ +enum charge_state_params { + CS_PARAM_CHG_VOLTAGE, /* charger voltage limit */ + CS_PARAM_CHG_CURRENT, /* charger current limit */ + CS_PARAM_CHG_INPUT_CURRENT, /* charger input current limit */ + CS_PARAM_CHG_STATUS, /* charger-specific status */ + CS_PARAM_CHG_OPTION, /* charger-specific options */ + /* How many so far? */ + CS_NUM_BASE_PARAMS, + + /* Range for CONFIG_CHARGER_PROFILE_OVERRIDE params */ + CS_PARAM_CUSTOM_PROFILE_MIN = 0x10000, + CS_PARAM_CUSTOM_PROFILE_MAX = 0x1ffff, + + /* Other custom param ranges go here... 
*/ +}; + +struct ec_params_charge_state { + uint8_t cmd; /* enum charge_state_command */ + union { + struct { + /* no args */ + } get_state; + + struct { + uint32_t param; /* enum charge_state_param */ + } get_param; + + struct { + uint32_t param; /* param to set */ + uint32_t value; /* value to set */ + } set_param; + }; +} __packed; + +struct ec_response_charge_state { + union { + struct { + int ac; + int chg_voltage; + int chg_current; + int chg_input_current; + int batt_state_of_charge; + } get_state; + + struct { + uint32_t value; + } get_param; + struct { + /* no return values */ + } set_param; + }; +} __packed; + /* * Set maximum battery charging current. @@ -1221,15 +2097,59 @@ struct ec_params_force_idle { #define EC_CMD_CHARGE_CURRENT_LIMIT 0xa1 struct ec_params_current_limit { - uint32_t limit; + uint32_t limit; /* in mA */ +} __packed; + +/* + * Set maximum external power current. + */ +#define EC_CMD_EXT_POWER_CURRENT_LIMIT 0xa2 + +struct ec_params_ext_power_current_limit { + uint32_t limit; /* in mA */ +} __packed; + +/*****************************************************************************/ +/* Smart battery pass-through */ + +/* Get / Set 16-bit smart battery registers */ +#define EC_CMD_SB_READ_WORD 0xb0 +#define EC_CMD_SB_WRITE_WORD 0xb1 + +/* Get / Set string smart battery parameters + * formatted as SMBUS "block". + */ +#define EC_CMD_SB_READ_BLOCK 0xb2 +#define EC_CMD_SB_WRITE_BLOCK 0xb3 + +struct ec_params_sb_rd { + uint8_t reg; +} __packed; + +struct ec_response_sb_rd_word { + uint16_t value; +} __packed; + +struct ec_params_sb_wr_word { + uint8_t reg; + uint16_t value; +} __packed; + +struct ec_response_sb_rd_block { + uint8_t data[32]; +} __packed; + +struct ec_params_sb_wr_block { + uint8_t reg; + uint16_t data[32]; } __packed; /*****************************************************************************/ /* System commands */ /* - * TODO: this is a confusing name, since it doesn't necessarily reboot the EC. - * Rename to "set image" or something similar. + * TODO(crosbug.com/p/23747): This is a confusing name, since it doesn't + * necessarily reboot the EC. Rename to "image" or something similar? */ #define EC_CMD_REBOOT_EC 0xd2 @@ -1308,6 +2228,7 @@ struct ec_params_reboot_ec { #define EC_CMD_ACPI_QUERY_EVENT 0x84 /* Valid addresses in ACPI memory space, for read/write commands */ + /* Memory space version; set to EC_ACPI_MEM_VERSION_CURRENT */ #define EC_ACPI_MEM_VERSION 0x00 /* @@ -1317,8 +2238,60 @@ struct ec_params_reboot_ec { #define EC_ACPI_MEM_TEST 0x01 /* Test compliment; writes here are ignored. */ #define EC_ACPI_MEM_TEST_COMPLIMENT 0x02 + /* Keyboard backlight brightness percent (0 - 100) */ #define EC_ACPI_MEM_KEYBOARD_BACKLIGHT 0x03 +/* DPTF Target Fan Duty (0-100, 0xff for auto/none) */ +#define EC_ACPI_MEM_FAN_DUTY 0x04 + +/* + * DPTF temp thresholds. Any of the EC's temp sensors can have up to two + * independent thresholds attached to them. The current value of the ID + * register determines which sensor is affected by the THRESHOLD and COMMIT + * registers. The THRESHOLD register uses the same EC_TEMP_SENSOR_OFFSET scheme + * as the memory-mapped sensors. The COMMIT register applies those settings. + * + * The spec does not mandate any way to read back the threshold settings + * themselves, but when a threshold is crossed the AP needs a way to determine + * which sensor(s) are responsible. 
Each reading of the ID register clears and + * returns one sensor ID that has crossed one of its threshold (in either + * direction) since the last read. A value of 0xFF means "no new thresholds + * have tripped". Setting or enabling the thresholds for a sensor will clear + * the unread event count for that sensor. + */ +#define EC_ACPI_MEM_TEMP_ID 0x05 +#define EC_ACPI_MEM_TEMP_THRESHOLD 0x06 +#define EC_ACPI_MEM_TEMP_COMMIT 0x07 +/* + * Here are the bits for the COMMIT register: + * bit 0 selects the threshold index for the chosen sensor (0/1) + * bit 1 enables/disables the selected threshold (0 = off, 1 = on) + * Each write to the commit register affects one threshold. + */ +#define EC_ACPI_MEM_TEMP_COMMIT_SELECT_MASK (1 << 0) +#define EC_ACPI_MEM_TEMP_COMMIT_ENABLE_MASK (1 << 1) +/* + * Example: + * + * Set the thresholds for sensor 2 to 50 C and 60 C: + * write 2 to [0x05] -- select temp sensor 2 + * write 0x7b to [0x06] -- C_TO_K(50) - EC_TEMP_SENSOR_OFFSET + * write 0x2 to [0x07] -- enable threshold 0 with this value + * write 0x85 to [0x06] -- C_TO_K(60) - EC_TEMP_SENSOR_OFFSET + * write 0x3 to [0x07] -- enable threshold 1 with this value + * + * Disable the 60 C threshold, leaving the 50 C threshold unchanged: + * write 2 to [0x05] -- select temp sensor 2 + * write 0x1 to [0x07] -- disable threshold 1 + */ + +/* DPTF battery charging current limit */ +#define EC_ACPI_MEM_CHARGING_LIMIT 0x08 + +/* Charging limit is specified in 64 mA steps */ +#define EC_ACPI_MEM_CHARGING_LIMIT_STEP_MA 64 +/* Value to disable DPTF battery charging limit */ +#define EC_ACPI_MEM_CHARGING_LIMIT_DISABLED 0xff /* Current version of ACPI memory address space */ #define EC_ACPI_MEM_VERSION_CURRENT 1 @@ -1360,10 +2333,21 @@ struct ec_params_reboot_ec { * Header bytes greater than this indicate a later version. For example, * EC_CMD_VERSION0 + 1 means we are using version 1. * - * The old EC interface must not use commands 0dc or higher. + * The old EC interface must not use commands 0xdc or higher. */ #define EC_CMD_VERSION0 0xdc #endif /* !__ACPI__ */ +/*****************************************************************************/ +/* + * Deprecated constants. These constants have been renamed for clarity. The + * meaning and size has not changed. Programs that use the old names should + * switch to the new names soon, as the old names may not be carried forward + * forever. + */ +#define EC_HOST_PARAM_SIZE EC_PROTO2_MAX_PARAM_SIZE +#define EC_LPC_ADDR_OLD_PARAM EC_HOST_CMD_REGION1 +#define EC_OLD_PARAM_SIZE EC_HOST_CMD_REGION_SIZE + #endif /* __CROS_EC_COMMANDS_H */ diff --git a/include/linux/mfd/ipaq-micro.h b/include/linux/mfd/ipaq-micro.h new file mode 100644 index 000000000000..5c4d29f6674f --- /dev/null +++ b/include/linux/mfd/ipaq-micro.h @@ -0,0 +1,148 @@ +/* + * Header file for the compaq Micro MFD + */ + +#ifndef _MFD_IPAQ_MICRO_H_ +#define _MFD_IPAQ_MICRO_H_ + +#include <linux/spinlock.h> +#include <linux/completion.h> +#include <linux/list.h> + +#define TX_BUF_SIZE 32 +#define RX_BUF_SIZE 16 +#define CHAR_SOF 0x02 + +/* + * These are the different messages that can be sent to the microcontroller + * to control various aspects. 
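+ * For example (an illustrative sketch, not part of the original patch),
+ * given a struct ipaq_micro *micro, a client driver could query the battery
+ * state with roughly:
+ *
+ *     struct ipaq_micro_msg msg = { .id = MSG_BATTERY };
+ *
+ *     ipaq_micro_tx_msg_sync(micro, &msg);
+ *
+ * and then inspect msg.rx_len / msg.rx_data once the call returns (see the
+ * message IDs, struct ipaq_micro_msg and helpers defined below).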
+ */ +#define MSG_VERSION 0x0 +#define MSG_KEYBOARD 0x2 +#define MSG_TOUCHSCREEN 0x3 +#define MSG_EEPROM_READ 0x4 +#define MSG_EEPROM_WRITE 0x5 +#define MSG_THERMAL_SENSOR 0x6 +#define MSG_NOTIFY_LED 0x8 +#define MSG_BATTERY 0x9 +#define MSG_SPI_READ 0xb +#define MSG_SPI_WRITE 0xc +#define MSG_BACKLIGHT 0xd /* H3600 only */ +#define MSG_CODEC_CTRL 0xe /* H3100 only */ +#define MSG_DISPLAY_CTRL 0xf /* H3100 only */ + +/* state of receiver parser */ +enum rx_state { + STATE_SOF = 0, /* Next byte should be start of frame */ + STATE_ID, /* Next byte is ID & message length */ + STATE_DATA, /* Next byte is a data byte */ + STATE_CHKSUM /* Next byte should be checksum */ +}; + +/** + * struct ipaq_micro_txdev - TX state + * @len: length of message in TX buffer + * @index: current index into TX buffer + * @buf: TX buffer + */ +struct ipaq_micro_txdev { + u8 len; + u8 index; + u8 buf[TX_BUF_SIZE]; +}; + +/** + * struct ipaq_micro_rxdev - RX state + * @state: context of RX state machine + * @chksum: calculated checksum + * @id: message ID from packet + * @len: RX buffer length + * @index: RX buffer index + * @buf: RX buffer + */ +struct ipaq_micro_rxdev { + enum rx_state state; + unsigned char chksum; + u8 id; + unsigned int len; + unsigned int index; + u8 buf[RX_BUF_SIZE]; +}; + +/** + * struct ipaq_micro_msg - message to the iPAQ microcontroller + * @id: 4-bit ID of the message + * @tx_len: length of TX data + * @tx_data: TX data to send + * @rx_len: length of receieved RX data + * @rx_data: RX data to recieve + * @ack: a completion that will be completed when RX is complete + * @node: list node if message gets queued + */ +struct ipaq_micro_msg { + u8 id; + u8 tx_len; + u8 tx_data[TX_BUF_SIZE]; + u8 rx_len; + u8 rx_data[RX_BUF_SIZE]; + struct completion ack; + struct list_head node; +}; + +/** + * struct ipaq_micro - iPAQ microcontroller state + * @dev: corresponding platform device + * @base: virtual memory base for underlying serial device + * @sdlc: virtual memory base for Synchronous Data Link Controller + * @version: version string + * @tx: TX state + * @rx: RX state + * @lock: lock for this state container + * @msg: current message + * @queue: message queue + * @key: callback for asynchronous key events + * @key_data: data to pass along with key events + * @ts: callback for asynchronous touchscreen events + * @ts_data: data to pass along with key events + */ +struct ipaq_micro { + struct device *dev; + void __iomem *base; + void __iomem *sdlc; + char version[5]; + struct ipaq_micro_txdev tx; /* transmit ISR state */ + struct ipaq_micro_rxdev rx; /* receive ISR state */ + spinlock_t lock; + struct ipaq_micro_msg *msg; + struct list_head queue; + void (*key) (void *data, int len, unsigned char *rxdata); + void *key_data; + void (*ts) (void *data, int len, unsigned char *rxdata); + void *ts_data; +}; + +extern int +ipaq_micro_tx_msg(struct ipaq_micro *micro, struct ipaq_micro_msg *msg); + +static inline int +ipaq_micro_tx_msg_sync(struct ipaq_micro *micro, + struct ipaq_micro_msg *msg) +{ + int ret; + + init_completion(&msg->ack); + ret = ipaq_micro_tx_msg(micro, msg); + wait_for_completion(&msg->ack); + + return ret; +} + +static inline int +ipaq_micro_tx_msg_async(struct ipaq_micro *micro, + struct ipaq_micro_msg *msg) +{ + init_completion(&msg->ack); + return ipaq_micro_tx_msg(micro, msg); +} + +#endif /* _MFD_IPAQ_MICRO_H_ */ diff --git a/include/linux/mfd/kempld.h b/include/linux/mfd/kempld.h index b911ef3add03..26e0b469e567 100644 --- a/include/linux/mfd/kempld.h +++ 
b/include/linux/mfd/kempld.h @@ -51,6 +51,8 @@ #define KEMPLD_TYPE_DEBUG 0x1 #define KEMPLD_TYPE_CUSTOM 0x2 +#define KEMPLD_VERSION_LEN 10 + /** * struct kempld_info - PLD device information structure * @major: PLD major revision @@ -60,6 +62,7 @@ * @type: PLD type * @spec_major: PLD FW specification major revision * @spec_minor: PLD FW specification minor revision + * @version: PLD version string */ struct kempld_info { unsigned int major; @@ -69,6 +72,7 @@ struct kempld_info { unsigned int type; unsigned int spec_major; unsigned int spec_minor; + char version[KEMPLD_VERSION_LEN]; }; /** diff --git a/include/linux/mfd/mc13xxx.h b/include/linux/mfd/mc13xxx.h index a326c850f046..d63b1d309106 100644 --- a/include/linux/mfd/mc13xxx.h +++ b/include/linux/mfd/mc13xxx.h @@ -117,10 +117,6 @@ struct mc13xxx_led_platform_data { #define MAX_LED_CONTROL_REGS 6 -struct mc13xxx_leds_platform_data { - struct mc13xxx_led_platform_data *led; - int num_leds; - /* MC13783 LED Control 0 */ #define MC13783_LED_C0_ENABLE (1 << 0) #define MC13783_LED_C0_TRIODE_MD (1 << 7) @@ -169,10 +165,13 @@ struct mc13xxx_leds_platform_data { /* MC34708 LED Control 0 */ #define MC34708_LED_C0_CURRENT_R(x) (((x) & 0x3) << 9) #define MC34708_LED_C0_CURRENT_G(x) (((x) & 0x3) << 21) + +struct mc13xxx_leds_platform_data { + struct mc13xxx_led_platform_data *led; + int num_leds; u32 led_control[MAX_LED_CONTROL_REGS]; }; -struct mc13xxx_buttons_platform_data { #define MC13783_BUTTON_DBNC_0MS 0 #define MC13783_BUTTON_DBNC_30MS 1 #define MC13783_BUTTON_DBNC_150MS 2 @@ -180,6 +179,8 @@ struct mc13xxx_buttons_platform_data { #define MC13783_BUTTON_ENABLE (1 << 2) #define MC13783_BUTTON_POL_INVERT (1 << 3) #define MC13783_BUTTON_RESET_EN (1 << 4) + +struct mc13xxx_buttons_platform_data { int b1on_flags; unsigned short b1on_key; int b2on_flags; @@ -188,14 +189,14 @@ struct mc13xxx_buttons_platform_data { unsigned short b3on_key; }; +#define MC13783_TS_ATO_FIRST false +#define MC13783_TS_ATO_EACH true + struct mc13xxx_ts_platform_data { /* Delay between Touchscreen polarization and ADC Conversion. 
* Given in clock ticks of a 32 kHz clock which gives a granularity of * about 30.5ms */ u8 ato; - -#define MC13783_TS_ATO_FIRST false -#define MC13783_TS_ATO_EACH true /* Use the ATO delay only for the first conversion or for each one */ bool atox; }; @@ -210,11 +211,12 @@ struct mc13xxx_codec_platform_data { enum mc13783_ssi_port dac_ssi_port; }; -struct mc13xxx_platform_data { -#define MC13XXX_USE_TOUCHSCREEN (1 << 0) +#define MC13XXX_USE_TOUCHSCREEN (1 << 0) #define MC13XXX_USE_CODEC (1 << 1) #define MC13XXX_USE_ADC (1 << 2) #define MC13XXX_USE_RTC (1 << 3) + +struct mc13xxx_platform_data { unsigned int flags; struct mc13xxx_regulator_platform_data regulators; diff --git a/include/linux/mfd/palmas.h b/include/linux/mfd/palmas.h index b8f87b704409..3420e09e2e20 100644 --- a/include/linux/mfd/palmas.h +++ b/include/linux/mfd/palmas.h @@ -482,10 +482,10 @@ enum usb_irq_events { /* helper macro to get correct slave number */ #define PALMAS_BASE_TO_SLAVE(x) ((x >> 8) - 1) -#define PALMAS_BASE_TO_REG(x, y) ((x & 0xff) + y) +#define PALMAS_BASE_TO_REG(x, y) ((x & 0xFF) + y) /* Base addresses of IP blocks in Palmas */ -#define PALMAS_SMPS_DVS_BASE 0x20 +#define PALMAS_SMPS_DVS_BASE 0x020 #define PALMAS_RTC_BASE 0x100 #define PALMAS_VALIDITY_BASE 0x118 #define PALMAS_SMPS_BASE 0x120 @@ -504,19 +504,19 @@ enum usb_irq_events { #define PALMAS_TRIM_GPADC_BASE 0x3CD /* Registers for function RTC */ -#define PALMAS_SECONDS_REG 0x0 -#define PALMAS_MINUTES_REG 0x1 -#define PALMAS_HOURS_REG 0x2 -#define PALMAS_DAYS_REG 0x3 -#define PALMAS_MONTHS_REG 0x4 -#define PALMAS_YEARS_REG 0x5 -#define PALMAS_WEEKS_REG 0x6 -#define PALMAS_ALARM_SECONDS_REG 0x8 -#define PALMAS_ALARM_MINUTES_REG 0x9 -#define PALMAS_ALARM_HOURS_REG 0xA -#define PALMAS_ALARM_DAYS_REG 0xB -#define PALMAS_ALARM_MONTHS_REG 0xC -#define PALMAS_ALARM_YEARS_REG 0xD +#define PALMAS_SECONDS_REG 0x00 +#define PALMAS_MINUTES_REG 0x01 +#define PALMAS_HOURS_REG 0x02 +#define PALMAS_DAYS_REG 0x03 +#define PALMAS_MONTHS_REG 0x04 +#define PALMAS_YEARS_REG 0x05 +#define PALMAS_WEEKS_REG 0x06 +#define PALMAS_ALARM_SECONDS_REG 0x08 +#define PALMAS_ALARM_MINUTES_REG 0x09 +#define PALMAS_ALARM_HOURS_REG 0x0A +#define PALMAS_ALARM_DAYS_REG 0x0B +#define PALMAS_ALARM_MONTHS_REG 0x0C +#define PALMAS_ALARM_YEARS_REG 0x0D #define PALMAS_RTC_CTRL_REG 0x10 #define PALMAS_RTC_STATUS_REG 0x11 #define PALMAS_RTC_INTERRUPTS_REG 0x12 @@ -527,201 +527,201 @@ enum usb_irq_events { /* Bit definitions for SECONDS_REG */ #define PALMAS_SECONDS_REG_SEC1_MASK 0x70 -#define PALMAS_SECONDS_REG_SEC1_SHIFT 4 -#define PALMAS_SECONDS_REG_SEC0_MASK 0x0f -#define PALMAS_SECONDS_REG_SEC0_SHIFT 0 +#define PALMAS_SECONDS_REG_SEC1_SHIFT 0x04 +#define PALMAS_SECONDS_REG_SEC0_MASK 0x0F +#define PALMAS_SECONDS_REG_SEC0_SHIFT 0x00 /* Bit definitions for MINUTES_REG */ #define PALMAS_MINUTES_REG_MIN1_MASK 0x70 -#define PALMAS_MINUTES_REG_MIN1_SHIFT 4 -#define PALMAS_MINUTES_REG_MIN0_MASK 0x0f -#define PALMAS_MINUTES_REG_MIN0_SHIFT 0 +#define PALMAS_MINUTES_REG_MIN1_SHIFT 0x04 +#define PALMAS_MINUTES_REG_MIN0_MASK 0x0F +#define PALMAS_MINUTES_REG_MIN0_SHIFT 0x00 /* Bit definitions for HOURS_REG */ #define PALMAS_HOURS_REG_PM_NAM 0x80 -#define PALMAS_HOURS_REG_PM_NAM_SHIFT 7 +#define PALMAS_HOURS_REG_PM_NAM_SHIFT 0x07 #define PALMAS_HOURS_REG_HOUR1_MASK 0x30 -#define PALMAS_HOURS_REG_HOUR1_SHIFT 4 -#define PALMAS_HOURS_REG_HOUR0_MASK 0x0f -#define PALMAS_HOURS_REG_HOUR0_SHIFT 0 +#define PALMAS_HOURS_REG_HOUR1_SHIFT 0x04 +#define PALMAS_HOURS_REG_HOUR0_MASK 0x0F +#define 
PALMAS_HOURS_REG_HOUR0_SHIFT 0x00 /* Bit definitions for DAYS_REG */ #define PALMAS_DAYS_REG_DAY1_MASK 0x30 -#define PALMAS_DAYS_REG_DAY1_SHIFT 4 -#define PALMAS_DAYS_REG_DAY0_MASK 0x0f -#define PALMAS_DAYS_REG_DAY0_SHIFT 0 +#define PALMAS_DAYS_REG_DAY1_SHIFT 0x04 +#define PALMAS_DAYS_REG_DAY0_MASK 0x0F +#define PALMAS_DAYS_REG_DAY0_SHIFT 0x00 /* Bit definitions for MONTHS_REG */ #define PALMAS_MONTHS_REG_MONTH1 0x10 -#define PALMAS_MONTHS_REG_MONTH1_SHIFT 4 -#define PALMAS_MONTHS_REG_MONTH0_MASK 0x0f -#define PALMAS_MONTHS_REG_MONTH0_SHIFT 0 +#define PALMAS_MONTHS_REG_MONTH1_SHIFT 0x04 +#define PALMAS_MONTHS_REG_MONTH0_MASK 0x0F +#define PALMAS_MONTHS_REG_MONTH0_SHIFT 0x00 /* Bit definitions for YEARS_REG */ #define PALMAS_YEARS_REG_YEAR1_MASK 0xf0 -#define PALMAS_YEARS_REG_YEAR1_SHIFT 4 -#define PALMAS_YEARS_REG_YEAR0_MASK 0x0f -#define PALMAS_YEARS_REG_YEAR0_SHIFT 0 +#define PALMAS_YEARS_REG_YEAR1_SHIFT 0x04 +#define PALMAS_YEARS_REG_YEAR0_MASK 0x0F +#define PALMAS_YEARS_REG_YEAR0_SHIFT 0x00 /* Bit definitions for WEEKS_REG */ #define PALMAS_WEEKS_REG_WEEK_MASK 0x07 -#define PALMAS_WEEKS_REG_WEEK_SHIFT 0 +#define PALMAS_WEEKS_REG_WEEK_SHIFT 0x00 /* Bit definitions for ALARM_SECONDS_REG */ #define PALMAS_ALARM_SECONDS_REG_ALARM_SEC1_MASK 0x70 -#define PALMAS_ALARM_SECONDS_REG_ALARM_SEC1_SHIFT 4 -#define PALMAS_ALARM_SECONDS_REG_ALARM_SEC0_MASK 0x0f -#define PALMAS_ALARM_SECONDS_REG_ALARM_SEC0_SHIFT 0 +#define PALMAS_ALARM_SECONDS_REG_ALARM_SEC1_SHIFT 0x04 +#define PALMAS_ALARM_SECONDS_REG_ALARM_SEC0_MASK 0x0F +#define PALMAS_ALARM_SECONDS_REG_ALARM_SEC0_SHIFT 0x00 /* Bit definitions for ALARM_MINUTES_REG */ #define PALMAS_ALARM_MINUTES_REG_ALARM_MIN1_MASK 0x70 -#define PALMAS_ALARM_MINUTES_REG_ALARM_MIN1_SHIFT 4 -#define PALMAS_ALARM_MINUTES_REG_ALARM_MIN0_MASK 0x0f -#define PALMAS_ALARM_MINUTES_REG_ALARM_MIN0_SHIFT 0 +#define PALMAS_ALARM_MINUTES_REG_ALARM_MIN1_SHIFT 0x04 +#define PALMAS_ALARM_MINUTES_REG_ALARM_MIN0_MASK 0x0F +#define PALMAS_ALARM_MINUTES_REG_ALARM_MIN0_SHIFT 0x00 /* Bit definitions for ALARM_HOURS_REG */ #define PALMAS_ALARM_HOURS_REG_ALARM_PM_NAM 0x80 -#define PALMAS_ALARM_HOURS_REG_ALARM_PM_NAM_SHIFT 7 +#define PALMAS_ALARM_HOURS_REG_ALARM_PM_NAM_SHIFT 0x07 #define PALMAS_ALARM_HOURS_REG_ALARM_HOUR1_MASK 0x30 -#define PALMAS_ALARM_HOURS_REG_ALARM_HOUR1_SHIFT 4 -#define PALMAS_ALARM_HOURS_REG_ALARM_HOUR0_MASK 0x0f -#define PALMAS_ALARM_HOURS_REG_ALARM_HOUR0_SHIFT 0 +#define PALMAS_ALARM_HOURS_REG_ALARM_HOUR1_SHIFT 0x04 +#define PALMAS_ALARM_HOURS_REG_ALARM_HOUR0_MASK 0x0F +#define PALMAS_ALARM_HOURS_REG_ALARM_HOUR0_SHIFT 0x00 /* Bit definitions for ALARM_DAYS_REG */ #define PALMAS_ALARM_DAYS_REG_ALARM_DAY1_MASK 0x30 -#define PALMAS_ALARM_DAYS_REG_ALARM_DAY1_SHIFT 4 -#define PALMAS_ALARM_DAYS_REG_ALARM_DAY0_MASK 0x0f -#define PALMAS_ALARM_DAYS_REG_ALARM_DAY0_SHIFT 0 +#define PALMAS_ALARM_DAYS_REG_ALARM_DAY1_SHIFT 0x04 +#define PALMAS_ALARM_DAYS_REG_ALARM_DAY0_MASK 0x0F +#define PALMAS_ALARM_DAYS_REG_ALARM_DAY0_SHIFT 0x00 /* Bit definitions for ALARM_MONTHS_REG */ #define PALMAS_ALARM_MONTHS_REG_ALARM_MONTH1 0x10 -#define PALMAS_ALARM_MONTHS_REG_ALARM_MONTH1_SHIFT 4 -#define PALMAS_ALARM_MONTHS_REG_ALARM_MONTH0_MASK 0x0f -#define PALMAS_ALARM_MONTHS_REG_ALARM_MONTH0_SHIFT 0 +#define PALMAS_ALARM_MONTHS_REG_ALARM_MONTH1_SHIFT 0x04 +#define PALMAS_ALARM_MONTHS_REG_ALARM_MONTH0_MASK 0x0F +#define PALMAS_ALARM_MONTHS_REG_ALARM_MONTH0_SHIFT 0x00 /* Bit definitions for ALARM_YEARS_REG */ #define PALMAS_ALARM_YEARS_REG_ALARM_YEAR1_MASK 0xf0 -#define 
PALMAS_ALARM_YEARS_REG_ALARM_YEAR1_SHIFT 4 -#define PALMAS_ALARM_YEARS_REG_ALARM_YEAR0_MASK 0x0f -#define PALMAS_ALARM_YEARS_REG_ALARM_YEAR0_SHIFT 0 +#define PALMAS_ALARM_YEARS_REG_ALARM_YEAR1_SHIFT 0x04 +#define PALMAS_ALARM_YEARS_REG_ALARM_YEAR0_MASK 0x0F +#define PALMAS_ALARM_YEARS_REG_ALARM_YEAR0_SHIFT 0x00 /* Bit definitions for RTC_CTRL_REG */ #define PALMAS_RTC_CTRL_REG_RTC_V_OPT 0x80 -#define PALMAS_RTC_CTRL_REG_RTC_V_OPT_SHIFT 7 +#define PALMAS_RTC_CTRL_REG_RTC_V_OPT_SHIFT 0x07 #define PALMAS_RTC_CTRL_REG_GET_TIME 0x40 -#define PALMAS_RTC_CTRL_REG_GET_TIME_SHIFT 6 +#define PALMAS_RTC_CTRL_REG_GET_TIME_SHIFT 0x06 #define PALMAS_RTC_CTRL_REG_SET_32_COUNTER 0x20 -#define PALMAS_RTC_CTRL_REG_SET_32_COUNTER_SHIFT 5 +#define PALMAS_RTC_CTRL_REG_SET_32_COUNTER_SHIFT 0x05 #define PALMAS_RTC_CTRL_REG_TEST_MODE 0x10 -#define PALMAS_RTC_CTRL_REG_TEST_MODE_SHIFT 4 +#define PALMAS_RTC_CTRL_REG_TEST_MODE_SHIFT 0x04 #define PALMAS_RTC_CTRL_REG_MODE_12_24 0x08 -#define PALMAS_RTC_CTRL_REG_MODE_12_24_SHIFT 3 +#define PALMAS_RTC_CTRL_REG_MODE_12_24_SHIFT 0x03 #define PALMAS_RTC_CTRL_REG_AUTO_COMP 0x04 -#define PALMAS_RTC_CTRL_REG_AUTO_COMP_SHIFT 2 +#define PALMAS_RTC_CTRL_REG_AUTO_COMP_SHIFT 0x02 #define PALMAS_RTC_CTRL_REG_ROUND_30S 0x02 -#define PALMAS_RTC_CTRL_REG_ROUND_30S_SHIFT 1 +#define PALMAS_RTC_CTRL_REG_ROUND_30S_SHIFT 0x01 #define PALMAS_RTC_CTRL_REG_STOP_RTC 0x01 -#define PALMAS_RTC_CTRL_REG_STOP_RTC_SHIFT 0 +#define PALMAS_RTC_CTRL_REG_STOP_RTC_SHIFT 0x00 /* Bit definitions for RTC_STATUS_REG */ #define PALMAS_RTC_STATUS_REG_POWER_UP 0x80 -#define PALMAS_RTC_STATUS_REG_POWER_UP_SHIFT 7 +#define PALMAS_RTC_STATUS_REG_POWER_UP_SHIFT 0x07 #define PALMAS_RTC_STATUS_REG_ALARM 0x40 -#define PALMAS_RTC_STATUS_REG_ALARM_SHIFT 6 +#define PALMAS_RTC_STATUS_REG_ALARM_SHIFT 0x06 #define PALMAS_RTC_STATUS_REG_EVENT_1D 0x20 -#define PALMAS_RTC_STATUS_REG_EVENT_1D_SHIFT 5 +#define PALMAS_RTC_STATUS_REG_EVENT_1D_SHIFT 0x05 #define PALMAS_RTC_STATUS_REG_EVENT_1H 0x10 -#define PALMAS_RTC_STATUS_REG_EVENT_1H_SHIFT 4 +#define PALMAS_RTC_STATUS_REG_EVENT_1H_SHIFT 0x04 #define PALMAS_RTC_STATUS_REG_EVENT_1M 0x08 -#define PALMAS_RTC_STATUS_REG_EVENT_1M_SHIFT 3 +#define PALMAS_RTC_STATUS_REG_EVENT_1M_SHIFT 0x03 #define PALMAS_RTC_STATUS_REG_EVENT_1S 0x04 -#define PALMAS_RTC_STATUS_REG_EVENT_1S_SHIFT 2 +#define PALMAS_RTC_STATUS_REG_EVENT_1S_SHIFT 0x02 #define PALMAS_RTC_STATUS_REG_RUN 0x02 -#define PALMAS_RTC_STATUS_REG_RUN_SHIFT 1 +#define PALMAS_RTC_STATUS_REG_RUN_SHIFT 0x01 /* Bit definitions for RTC_INTERRUPTS_REG */ #define PALMAS_RTC_INTERRUPTS_REG_IT_SLEEP_MASK_EN 0x10 -#define PALMAS_RTC_INTERRUPTS_REG_IT_SLEEP_MASK_EN_SHIFT 4 +#define PALMAS_RTC_INTERRUPTS_REG_IT_SLEEP_MASK_EN_SHIFT 0x04 #define PALMAS_RTC_INTERRUPTS_REG_IT_ALARM 0x08 -#define PALMAS_RTC_INTERRUPTS_REG_IT_ALARM_SHIFT 3 +#define PALMAS_RTC_INTERRUPTS_REG_IT_ALARM_SHIFT 0x03 #define PALMAS_RTC_INTERRUPTS_REG_IT_TIMER 0x04 -#define PALMAS_RTC_INTERRUPTS_REG_IT_TIMER_SHIFT 2 +#define PALMAS_RTC_INTERRUPTS_REG_IT_TIMER_SHIFT 0x02 #define PALMAS_RTC_INTERRUPTS_REG_EVERY_MASK 0x03 -#define PALMAS_RTC_INTERRUPTS_REG_EVERY_SHIFT 0 +#define PALMAS_RTC_INTERRUPTS_REG_EVERY_SHIFT 0x00 /* Bit definitions for RTC_COMP_LSB_REG */ -#define PALMAS_RTC_COMP_LSB_REG_RTC_COMP_LSB_MASK 0xff -#define PALMAS_RTC_COMP_LSB_REG_RTC_COMP_LSB_SHIFT 0 +#define PALMAS_RTC_COMP_LSB_REG_RTC_COMP_LSB_MASK 0xFF +#define PALMAS_RTC_COMP_LSB_REG_RTC_COMP_LSB_SHIFT 0x00 /* Bit definitions for RTC_COMP_MSB_REG */ -#define PALMAS_RTC_COMP_MSB_REG_RTC_COMP_MSB_MASK 0xff 
-#define PALMAS_RTC_COMP_MSB_REG_RTC_COMP_MSB_SHIFT 0 +#define PALMAS_RTC_COMP_MSB_REG_RTC_COMP_MSB_MASK 0xFF +#define PALMAS_RTC_COMP_MSB_REG_RTC_COMP_MSB_SHIFT 0x00 /* Bit definitions for RTC_RES_PROG_REG */ -#define PALMAS_RTC_RES_PROG_REG_SW_RES_PROG_MASK 0x3f -#define PALMAS_RTC_RES_PROG_REG_SW_RES_PROG_SHIFT 0 +#define PALMAS_RTC_RES_PROG_REG_SW_RES_PROG_MASK 0x3F +#define PALMAS_RTC_RES_PROG_REG_SW_RES_PROG_SHIFT 0x00 /* Bit definitions for RTC_RESET_STATUS_REG */ #define PALMAS_RTC_RESET_STATUS_REG_RESET_STATUS 0x01 -#define PALMAS_RTC_RESET_STATUS_REG_RESET_STATUS_SHIFT 0 +#define PALMAS_RTC_RESET_STATUS_REG_RESET_STATUS_SHIFT 0x00 /* Registers for function BACKUP */ -#define PALMAS_BACKUP0 0x0 -#define PALMAS_BACKUP1 0x1 -#define PALMAS_BACKUP2 0x2 -#define PALMAS_BACKUP3 0x3 -#define PALMAS_BACKUP4 0x4 -#define PALMAS_BACKUP5 0x5 -#define PALMAS_BACKUP6 0x6 -#define PALMAS_BACKUP7 0x7 +#define PALMAS_BACKUP0 0x00 +#define PALMAS_BACKUP1 0x01 +#define PALMAS_BACKUP2 0x02 +#define PALMAS_BACKUP3 0x03 +#define PALMAS_BACKUP4 0x04 +#define PALMAS_BACKUP5 0x05 +#define PALMAS_BACKUP6 0x06 +#define PALMAS_BACKUP7 0x07 /* Bit definitions for BACKUP0 */ -#define PALMAS_BACKUP0_BACKUP_MASK 0xff -#define PALMAS_BACKUP0_BACKUP_SHIFT 0 +#define PALMAS_BACKUP0_BACKUP_MASK 0xFF +#define PALMAS_BACKUP0_BACKUP_SHIFT 0x00 /* Bit definitions for BACKUP1 */ -#define PALMAS_BACKUP1_BACKUP_MASK 0xff -#define PALMAS_BACKUP1_BACKUP_SHIFT 0 +#define PALMAS_BACKUP1_BACKUP_MASK 0xFF +#define PALMAS_BACKUP1_BACKUP_SHIFT 0x00 /* Bit definitions for BACKUP2 */ -#define PALMAS_BACKUP2_BACKUP_MASK 0xff -#define PALMAS_BACKUP2_BACKUP_SHIFT 0 +#define PALMAS_BACKUP2_BACKUP_MASK 0xFF +#define PALMAS_BACKUP2_BACKUP_SHIFT 0x00 /* Bit definitions for BACKUP3 */ -#define PALMAS_BACKUP3_BACKUP_MASK 0xff -#define PALMAS_BACKUP3_BACKUP_SHIFT 0 +#define PALMAS_BACKUP3_BACKUP_MASK 0xFF +#define PALMAS_BACKUP3_BACKUP_SHIFT 0x00 /* Bit definitions for BACKUP4 */ -#define PALMAS_BACKUP4_BACKUP_MASK 0xff -#define PALMAS_BACKUP4_BACKUP_SHIFT 0 +#define PALMAS_BACKUP4_BACKUP_MASK 0xFF +#define PALMAS_BACKUP4_BACKUP_SHIFT 0x00 /* Bit definitions for BACKUP5 */ -#define PALMAS_BACKUP5_BACKUP_MASK 0xff -#define PALMAS_BACKUP5_BACKUP_SHIFT 0 +#define PALMAS_BACKUP5_BACKUP_MASK 0xFF +#define PALMAS_BACKUP5_BACKUP_SHIFT 0x00 /* Bit definitions for BACKUP6 */ -#define PALMAS_BACKUP6_BACKUP_MASK 0xff -#define PALMAS_BACKUP6_BACKUP_SHIFT 0 +#define PALMAS_BACKUP6_BACKUP_MASK 0xFF +#define PALMAS_BACKUP6_BACKUP_SHIFT 0x00 /* Bit definitions for BACKUP7 */ -#define PALMAS_BACKUP7_BACKUP_MASK 0xff -#define PALMAS_BACKUP7_BACKUP_SHIFT 0 +#define PALMAS_BACKUP7_BACKUP_MASK 0xFF +#define PALMAS_BACKUP7_BACKUP_SHIFT 0x00 /* Registers for function SMPS */ -#define PALMAS_SMPS12_CTRL 0x0 -#define PALMAS_SMPS12_TSTEP 0x1 -#define PALMAS_SMPS12_FORCE 0x2 -#define PALMAS_SMPS12_VOLTAGE 0x3 -#define PALMAS_SMPS3_CTRL 0x4 -#define PALMAS_SMPS3_VOLTAGE 0x7 -#define PALMAS_SMPS45_CTRL 0x8 -#define PALMAS_SMPS45_TSTEP 0x9 -#define PALMAS_SMPS45_FORCE 0xA -#define PALMAS_SMPS45_VOLTAGE 0xB -#define PALMAS_SMPS6_CTRL 0xC -#define PALMAS_SMPS6_TSTEP 0xD -#define PALMAS_SMPS6_FORCE 0xE -#define PALMAS_SMPS6_VOLTAGE 0xF +#define PALMAS_SMPS12_CTRL 0x00 +#define PALMAS_SMPS12_TSTEP 0x01 +#define PALMAS_SMPS12_FORCE 0x02 +#define PALMAS_SMPS12_VOLTAGE 0x03 +#define PALMAS_SMPS3_CTRL 0x04 +#define PALMAS_SMPS3_VOLTAGE 0x07 +#define PALMAS_SMPS45_CTRL 0x08 +#define PALMAS_SMPS45_TSTEP 0x09 +#define PALMAS_SMPS45_FORCE 0x0A +#define PALMAS_SMPS45_VOLTAGE 0x0B 
+#define PALMAS_SMPS6_CTRL 0x0C +#define PALMAS_SMPS6_TSTEP 0x0D +#define PALMAS_SMPS6_FORCE 0x0E +#define PALMAS_SMPS6_VOLTAGE 0x0F #define PALMAS_SMPS7_CTRL 0x10 #define PALMAS_SMPS7_VOLTAGE 0x13 #define PALMAS_SMPS8_CTRL 0x14 @@ -744,303 +744,303 @@ enum usb_irq_events { /* Bit definitions for SMPS12_CTRL */ #define PALMAS_SMPS12_CTRL_WR_S 0x80 -#define PALMAS_SMPS12_CTRL_WR_S_SHIFT 7 +#define PALMAS_SMPS12_CTRL_WR_S_SHIFT 0x07 #define PALMAS_SMPS12_CTRL_ROOF_FLOOR_EN 0x40 -#define PALMAS_SMPS12_CTRL_ROOF_FLOOR_EN_SHIFT 6 +#define PALMAS_SMPS12_CTRL_ROOF_FLOOR_EN_SHIFT 0x06 #define PALMAS_SMPS12_CTRL_STATUS_MASK 0x30 -#define PALMAS_SMPS12_CTRL_STATUS_SHIFT 4 +#define PALMAS_SMPS12_CTRL_STATUS_SHIFT 0x04 #define PALMAS_SMPS12_CTRL_MODE_SLEEP_MASK 0x0c -#define PALMAS_SMPS12_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_SMPS12_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_SMPS12_CTRL_MODE_ACTIVE_MASK 0x03 -#define PALMAS_SMPS12_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_SMPS12_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for SMPS12_TSTEP */ #define PALMAS_SMPS12_TSTEP_TSTEP_MASK 0x03 -#define PALMAS_SMPS12_TSTEP_TSTEP_SHIFT 0 +#define PALMAS_SMPS12_TSTEP_TSTEP_SHIFT 0x00 /* Bit definitions for SMPS12_FORCE */ #define PALMAS_SMPS12_FORCE_CMD 0x80 -#define PALMAS_SMPS12_FORCE_CMD_SHIFT 7 -#define PALMAS_SMPS12_FORCE_VSEL_MASK 0x7f -#define PALMAS_SMPS12_FORCE_VSEL_SHIFT 0 +#define PALMAS_SMPS12_FORCE_CMD_SHIFT 0x07 +#define PALMAS_SMPS12_FORCE_VSEL_MASK 0x7F +#define PALMAS_SMPS12_FORCE_VSEL_SHIFT 0x00 /* Bit definitions for SMPS12_VOLTAGE */ #define PALMAS_SMPS12_VOLTAGE_RANGE 0x80 -#define PALMAS_SMPS12_VOLTAGE_RANGE_SHIFT 7 -#define PALMAS_SMPS12_VOLTAGE_VSEL_MASK 0x7f -#define PALMAS_SMPS12_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_SMPS12_VOLTAGE_RANGE_SHIFT 0x07 +#define PALMAS_SMPS12_VOLTAGE_VSEL_MASK 0x7F +#define PALMAS_SMPS12_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for SMPS3_CTRL */ #define PALMAS_SMPS3_CTRL_WR_S 0x80 -#define PALMAS_SMPS3_CTRL_WR_S_SHIFT 7 +#define PALMAS_SMPS3_CTRL_WR_S_SHIFT 0x07 #define PALMAS_SMPS3_CTRL_STATUS_MASK 0x30 -#define PALMAS_SMPS3_CTRL_STATUS_SHIFT 4 +#define PALMAS_SMPS3_CTRL_STATUS_SHIFT 0x04 #define PALMAS_SMPS3_CTRL_MODE_SLEEP_MASK 0x0c -#define PALMAS_SMPS3_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_SMPS3_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_SMPS3_CTRL_MODE_ACTIVE_MASK 0x03 -#define PALMAS_SMPS3_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_SMPS3_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for SMPS3_VOLTAGE */ #define PALMAS_SMPS3_VOLTAGE_RANGE 0x80 -#define PALMAS_SMPS3_VOLTAGE_RANGE_SHIFT 7 -#define PALMAS_SMPS3_VOLTAGE_VSEL_MASK 0x7f -#define PALMAS_SMPS3_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_SMPS3_VOLTAGE_RANGE_SHIFT 0x07 +#define PALMAS_SMPS3_VOLTAGE_VSEL_MASK 0x7F +#define PALMAS_SMPS3_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for SMPS45_CTRL */ #define PALMAS_SMPS45_CTRL_WR_S 0x80 -#define PALMAS_SMPS45_CTRL_WR_S_SHIFT 7 +#define PALMAS_SMPS45_CTRL_WR_S_SHIFT 0x07 #define PALMAS_SMPS45_CTRL_ROOF_FLOOR_EN 0x40 -#define PALMAS_SMPS45_CTRL_ROOF_FLOOR_EN_SHIFT 6 +#define PALMAS_SMPS45_CTRL_ROOF_FLOOR_EN_SHIFT 0x06 #define PALMAS_SMPS45_CTRL_STATUS_MASK 0x30 -#define PALMAS_SMPS45_CTRL_STATUS_SHIFT 4 +#define PALMAS_SMPS45_CTRL_STATUS_SHIFT 0x04 #define PALMAS_SMPS45_CTRL_MODE_SLEEP_MASK 0x0c -#define PALMAS_SMPS45_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_SMPS45_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_SMPS45_CTRL_MODE_ACTIVE_MASK 0x03 -#define PALMAS_SMPS45_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_SMPS45_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions 
for SMPS45_TSTEP */ #define PALMAS_SMPS45_TSTEP_TSTEP_MASK 0x03 -#define PALMAS_SMPS45_TSTEP_TSTEP_SHIFT 0 +#define PALMAS_SMPS45_TSTEP_TSTEP_SHIFT 0x00 /* Bit definitions for SMPS45_FORCE */ #define PALMAS_SMPS45_FORCE_CMD 0x80 -#define PALMAS_SMPS45_FORCE_CMD_SHIFT 7 -#define PALMAS_SMPS45_FORCE_VSEL_MASK 0x7f -#define PALMAS_SMPS45_FORCE_VSEL_SHIFT 0 +#define PALMAS_SMPS45_FORCE_CMD_SHIFT 0x07 +#define PALMAS_SMPS45_FORCE_VSEL_MASK 0x7F +#define PALMAS_SMPS45_FORCE_VSEL_SHIFT 0x00 /* Bit definitions for SMPS45_VOLTAGE */ #define PALMAS_SMPS45_VOLTAGE_RANGE 0x80 -#define PALMAS_SMPS45_VOLTAGE_RANGE_SHIFT 7 -#define PALMAS_SMPS45_VOLTAGE_VSEL_MASK 0x7f -#define PALMAS_SMPS45_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_SMPS45_VOLTAGE_RANGE_SHIFT 0x07 +#define PALMAS_SMPS45_VOLTAGE_VSEL_MASK 0x7F +#define PALMAS_SMPS45_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for SMPS6_CTRL */ #define PALMAS_SMPS6_CTRL_WR_S 0x80 -#define PALMAS_SMPS6_CTRL_WR_S_SHIFT 7 +#define PALMAS_SMPS6_CTRL_WR_S_SHIFT 0x07 #define PALMAS_SMPS6_CTRL_ROOF_FLOOR_EN 0x40 -#define PALMAS_SMPS6_CTRL_ROOF_FLOOR_EN_SHIFT 6 +#define PALMAS_SMPS6_CTRL_ROOF_FLOOR_EN_SHIFT 0x06 #define PALMAS_SMPS6_CTRL_STATUS_MASK 0x30 -#define PALMAS_SMPS6_CTRL_STATUS_SHIFT 4 +#define PALMAS_SMPS6_CTRL_STATUS_SHIFT 0x04 #define PALMAS_SMPS6_CTRL_MODE_SLEEP_MASK 0x0c -#define PALMAS_SMPS6_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_SMPS6_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_SMPS6_CTRL_MODE_ACTIVE_MASK 0x03 -#define PALMAS_SMPS6_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_SMPS6_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for SMPS6_TSTEP */ #define PALMAS_SMPS6_TSTEP_TSTEP_MASK 0x03 -#define PALMAS_SMPS6_TSTEP_TSTEP_SHIFT 0 +#define PALMAS_SMPS6_TSTEP_TSTEP_SHIFT 0x00 /* Bit definitions for SMPS6_FORCE */ #define PALMAS_SMPS6_FORCE_CMD 0x80 -#define PALMAS_SMPS6_FORCE_CMD_SHIFT 7 -#define PALMAS_SMPS6_FORCE_VSEL_MASK 0x7f -#define PALMAS_SMPS6_FORCE_VSEL_SHIFT 0 +#define PALMAS_SMPS6_FORCE_CMD_SHIFT 0x07 +#define PALMAS_SMPS6_FORCE_VSEL_MASK 0x7F +#define PALMAS_SMPS6_FORCE_VSEL_SHIFT 0x00 /* Bit definitions for SMPS6_VOLTAGE */ #define PALMAS_SMPS6_VOLTAGE_RANGE 0x80 -#define PALMAS_SMPS6_VOLTAGE_RANGE_SHIFT 7 -#define PALMAS_SMPS6_VOLTAGE_VSEL_MASK 0x7f -#define PALMAS_SMPS6_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_SMPS6_VOLTAGE_RANGE_SHIFT 0x07 +#define PALMAS_SMPS6_VOLTAGE_VSEL_MASK 0x7F +#define PALMAS_SMPS6_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for SMPS7_CTRL */ #define PALMAS_SMPS7_CTRL_WR_S 0x80 -#define PALMAS_SMPS7_CTRL_WR_S_SHIFT 7 +#define PALMAS_SMPS7_CTRL_WR_S_SHIFT 0x07 #define PALMAS_SMPS7_CTRL_STATUS_MASK 0x30 -#define PALMAS_SMPS7_CTRL_STATUS_SHIFT 4 +#define PALMAS_SMPS7_CTRL_STATUS_SHIFT 0x04 #define PALMAS_SMPS7_CTRL_MODE_SLEEP_MASK 0x0c -#define PALMAS_SMPS7_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_SMPS7_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_SMPS7_CTRL_MODE_ACTIVE_MASK 0x03 -#define PALMAS_SMPS7_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_SMPS7_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for SMPS7_VOLTAGE */ #define PALMAS_SMPS7_VOLTAGE_RANGE 0x80 -#define PALMAS_SMPS7_VOLTAGE_RANGE_SHIFT 7 -#define PALMAS_SMPS7_VOLTAGE_VSEL_MASK 0x7f -#define PALMAS_SMPS7_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_SMPS7_VOLTAGE_RANGE_SHIFT 0x07 +#define PALMAS_SMPS7_VOLTAGE_VSEL_MASK 0x7F +#define PALMAS_SMPS7_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for SMPS8_CTRL */ #define PALMAS_SMPS8_CTRL_WR_S 0x80 -#define PALMAS_SMPS8_CTRL_WR_S_SHIFT 7 +#define PALMAS_SMPS8_CTRL_WR_S_SHIFT 0x07 #define PALMAS_SMPS8_CTRL_ROOF_FLOOR_EN 0x40 -#define 
PALMAS_SMPS8_CTRL_ROOF_FLOOR_EN_SHIFT 6 +#define PALMAS_SMPS8_CTRL_ROOF_FLOOR_EN_SHIFT 0x06 #define PALMAS_SMPS8_CTRL_STATUS_MASK 0x30 -#define PALMAS_SMPS8_CTRL_STATUS_SHIFT 4 +#define PALMAS_SMPS8_CTRL_STATUS_SHIFT 0x04 #define PALMAS_SMPS8_CTRL_MODE_SLEEP_MASK 0x0c -#define PALMAS_SMPS8_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_SMPS8_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_SMPS8_CTRL_MODE_ACTIVE_MASK 0x03 -#define PALMAS_SMPS8_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_SMPS8_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for SMPS8_TSTEP */ #define PALMAS_SMPS8_TSTEP_TSTEP_MASK 0x03 -#define PALMAS_SMPS8_TSTEP_TSTEP_SHIFT 0 +#define PALMAS_SMPS8_TSTEP_TSTEP_SHIFT 0x00 /* Bit definitions for SMPS8_FORCE */ #define PALMAS_SMPS8_FORCE_CMD 0x80 -#define PALMAS_SMPS8_FORCE_CMD_SHIFT 7 -#define PALMAS_SMPS8_FORCE_VSEL_MASK 0x7f -#define PALMAS_SMPS8_FORCE_VSEL_SHIFT 0 +#define PALMAS_SMPS8_FORCE_CMD_SHIFT 0x07 +#define PALMAS_SMPS8_FORCE_VSEL_MASK 0x7F +#define PALMAS_SMPS8_FORCE_VSEL_SHIFT 0x00 /* Bit definitions for SMPS8_VOLTAGE */ #define PALMAS_SMPS8_VOLTAGE_RANGE 0x80 -#define PALMAS_SMPS8_VOLTAGE_RANGE_SHIFT 7 -#define PALMAS_SMPS8_VOLTAGE_VSEL_MASK 0x7f -#define PALMAS_SMPS8_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_SMPS8_VOLTAGE_RANGE_SHIFT 0x07 +#define PALMAS_SMPS8_VOLTAGE_VSEL_MASK 0x7F +#define PALMAS_SMPS8_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for SMPS9_CTRL */ #define PALMAS_SMPS9_CTRL_WR_S 0x80 -#define PALMAS_SMPS9_CTRL_WR_S_SHIFT 7 +#define PALMAS_SMPS9_CTRL_WR_S_SHIFT 0x07 #define PALMAS_SMPS9_CTRL_STATUS_MASK 0x30 -#define PALMAS_SMPS9_CTRL_STATUS_SHIFT 4 +#define PALMAS_SMPS9_CTRL_STATUS_SHIFT 0x04 #define PALMAS_SMPS9_CTRL_MODE_SLEEP_MASK 0x0c -#define PALMAS_SMPS9_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_SMPS9_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_SMPS9_CTRL_MODE_ACTIVE_MASK 0x03 -#define PALMAS_SMPS9_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_SMPS9_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for SMPS9_VOLTAGE */ #define PALMAS_SMPS9_VOLTAGE_RANGE 0x80 -#define PALMAS_SMPS9_VOLTAGE_RANGE_SHIFT 7 -#define PALMAS_SMPS9_VOLTAGE_VSEL_MASK 0x7f -#define PALMAS_SMPS9_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_SMPS9_VOLTAGE_RANGE_SHIFT 0x07 +#define PALMAS_SMPS9_VOLTAGE_VSEL_MASK 0x7F +#define PALMAS_SMPS9_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for SMPS10_CTRL */ #define PALMAS_SMPS10_CTRL_MODE_SLEEP_MASK 0xf0 -#define PALMAS_SMPS10_CTRL_MODE_SLEEP_SHIFT 4 -#define PALMAS_SMPS10_CTRL_MODE_ACTIVE_MASK 0x0f -#define PALMAS_SMPS10_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_SMPS10_CTRL_MODE_SLEEP_SHIFT 0x04 +#define PALMAS_SMPS10_CTRL_MODE_ACTIVE_MASK 0x0F +#define PALMAS_SMPS10_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for SMPS10_STATUS */ -#define PALMAS_SMPS10_STATUS_STATUS_MASK 0x0f -#define PALMAS_SMPS10_STATUS_STATUS_SHIFT 0 +#define PALMAS_SMPS10_STATUS_STATUS_MASK 0x0F +#define PALMAS_SMPS10_STATUS_STATUS_SHIFT 0x00 /* Bit definitions for SMPS_CTRL */ #define PALMAS_SMPS_CTRL_SMPS45_SMPS457_EN 0x20 -#define PALMAS_SMPS_CTRL_SMPS45_SMPS457_EN_SHIFT 5 +#define PALMAS_SMPS_CTRL_SMPS45_SMPS457_EN_SHIFT 0x05 #define PALMAS_SMPS_CTRL_SMPS12_SMPS123_EN 0x10 -#define PALMAS_SMPS_CTRL_SMPS12_SMPS123_EN_SHIFT 4 +#define PALMAS_SMPS_CTRL_SMPS12_SMPS123_EN_SHIFT 0x04 #define PALMAS_SMPS_CTRL_SMPS45_PHASE_CTRL_MASK 0x0c -#define PALMAS_SMPS_CTRL_SMPS45_PHASE_CTRL_SHIFT 2 +#define PALMAS_SMPS_CTRL_SMPS45_PHASE_CTRL_SHIFT 0x02 #define PALMAS_SMPS_CTRL_SMPS123_PHASE_CTRL_MASK 0x03 -#define PALMAS_SMPS_CTRL_SMPS123_PHASE_CTRL_SHIFT 0 +#define PALMAS_SMPS_CTRL_SMPS123_PHASE_CTRL_SHIFT 
0x00 /* Bit definitions for SMPS_PD_CTRL */ #define PALMAS_SMPS_PD_CTRL_SMPS9 0x40 -#define PALMAS_SMPS_PD_CTRL_SMPS9_SHIFT 6 +#define PALMAS_SMPS_PD_CTRL_SMPS9_SHIFT 0x06 #define PALMAS_SMPS_PD_CTRL_SMPS8 0x20 -#define PALMAS_SMPS_PD_CTRL_SMPS8_SHIFT 5 +#define PALMAS_SMPS_PD_CTRL_SMPS8_SHIFT 0x05 #define PALMAS_SMPS_PD_CTRL_SMPS7 0x10 -#define PALMAS_SMPS_PD_CTRL_SMPS7_SHIFT 4 +#define PALMAS_SMPS_PD_CTRL_SMPS7_SHIFT 0x04 #define PALMAS_SMPS_PD_CTRL_SMPS6 0x08 -#define PALMAS_SMPS_PD_CTRL_SMPS6_SHIFT 3 +#define PALMAS_SMPS_PD_CTRL_SMPS6_SHIFT 0x03 #define PALMAS_SMPS_PD_CTRL_SMPS45 0x04 -#define PALMAS_SMPS_PD_CTRL_SMPS45_SHIFT 2 +#define PALMAS_SMPS_PD_CTRL_SMPS45_SHIFT 0x02 #define PALMAS_SMPS_PD_CTRL_SMPS3 0x02 -#define PALMAS_SMPS_PD_CTRL_SMPS3_SHIFT 1 +#define PALMAS_SMPS_PD_CTRL_SMPS3_SHIFT 0x01 #define PALMAS_SMPS_PD_CTRL_SMPS12 0x01 -#define PALMAS_SMPS_PD_CTRL_SMPS12_SHIFT 0 +#define PALMAS_SMPS_PD_CTRL_SMPS12_SHIFT 0x00 /* Bit definitions for SMPS_THERMAL_EN */ #define PALMAS_SMPS_THERMAL_EN_SMPS9 0x40 -#define PALMAS_SMPS_THERMAL_EN_SMPS9_SHIFT 6 +#define PALMAS_SMPS_THERMAL_EN_SMPS9_SHIFT 0x06 #define PALMAS_SMPS_THERMAL_EN_SMPS8 0x20 -#define PALMAS_SMPS_THERMAL_EN_SMPS8_SHIFT 5 +#define PALMAS_SMPS_THERMAL_EN_SMPS8_SHIFT 0x05 #define PALMAS_SMPS_THERMAL_EN_SMPS6 0x08 -#define PALMAS_SMPS_THERMAL_EN_SMPS6_SHIFT 3 +#define PALMAS_SMPS_THERMAL_EN_SMPS6_SHIFT 0x03 #define PALMAS_SMPS_THERMAL_EN_SMPS457 0x04 -#define PALMAS_SMPS_THERMAL_EN_SMPS457_SHIFT 2 +#define PALMAS_SMPS_THERMAL_EN_SMPS457_SHIFT 0x02 #define PALMAS_SMPS_THERMAL_EN_SMPS123 0x01 -#define PALMAS_SMPS_THERMAL_EN_SMPS123_SHIFT 0 +#define PALMAS_SMPS_THERMAL_EN_SMPS123_SHIFT 0x00 /* Bit definitions for SMPS_THERMAL_STATUS */ #define PALMAS_SMPS_THERMAL_STATUS_SMPS9 0x40 -#define PALMAS_SMPS_THERMAL_STATUS_SMPS9_SHIFT 6 +#define PALMAS_SMPS_THERMAL_STATUS_SMPS9_SHIFT 0x06 #define PALMAS_SMPS_THERMAL_STATUS_SMPS8 0x20 -#define PALMAS_SMPS_THERMAL_STATUS_SMPS8_SHIFT 5 +#define PALMAS_SMPS_THERMAL_STATUS_SMPS8_SHIFT 0x05 #define PALMAS_SMPS_THERMAL_STATUS_SMPS6 0x08 -#define PALMAS_SMPS_THERMAL_STATUS_SMPS6_SHIFT 3 +#define PALMAS_SMPS_THERMAL_STATUS_SMPS6_SHIFT 0x03 #define PALMAS_SMPS_THERMAL_STATUS_SMPS457 0x04 -#define PALMAS_SMPS_THERMAL_STATUS_SMPS457_SHIFT 2 +#define PALMAS_SMPS_THERMAL_STATUS_SMPS457_SHIFT 0x02 #define PALMAS_SMPS_THERMAL_STATUS_SMPS123 0x01 -#define PALMAS_SMPS_THERMAL_STATUS_SMPS123_SHIFT 0 +#define PALMAS_SMPS_THERMAL_STATUS_SMPS123_SHIFT 0x00 /* Bit definitions for SMPS_SHORT_STATUS */ #define PALMAS_SMPS_SHORT_STATUS_SMPS10 0x80 -#define PALMAS_SMPS_SHORT_STATUS_SMPS10_SHIFT 7 +#define PALMAS_SMPS_SHORT_STATUS_SMPS10_SHIFT 0x07 #define PALMAS_SMPS_SHORT_STATUS_SMPS9 0x40 -#define PALMAS_SMPS_SHORT_STATUS_SMPS9_SHIFT 6 +#define PALMAS_SMPS_SHORT_STATUS_SMPS9_SHIFT 0x06 #define PALMAS_SMPS_SHORT_STATUS_SMPS8 0x20 -#define PALMAS_SMPS_SHORT_STATUS_SMPS8_SHIFT 5 +#define PALMAS_SMPS_SHORT_STATUS_SMPS8_SHIFT 0x05 #define PALMAS_SMPS_SHORT_STATUS_SMPS7 0x10 -#define PALMAS_SMPS_SHORT_STATUS_SMPS7_SHIFT 4 +#define PALMAS_SMPS_SHORT_STATUS_SMPS7_SHIFT 0x04 #define PALMAS_SMPS_SHORT_STATUS_SMPS6 0x08 -#define PALMAS_SMPS_SHORT_STATUS_SMPS6_SHIFT 3 +#define PALMAS_SMPS_SHORT_STATUS_SMPS6_SHIFT 0x03 #define PALMAS_SMPS_SHORT_STATUS_SMPS45 0x04 -#define PALMAS_SMPS_SHORT_STATUS_SMPS45_SHIFT 2 +#define PALMAS_SMPS_SHORT_STATUS_SMPS45_SHIFT 0x02 #define PALMAS_SMPS_SHORT_STATUS_SMPS3 0x02 -#define PALMAS_SMPS_SHORT_STATUS_SMPS3_SHIFT 1 +#define PALMAS_SMPS_SHORT_STATUS_SMPS3_SHIFT 0x01 #define 
PALMAS_SMPS_SHORT_STATUS_SMPS12 0x01 -#define PALMAS_SMPS_SHORT_STATUS_SMPS12_SHIFT 0 +#define PALMAS_SMPS_SHORT_STATUS_SMPS12_SHIFT 0x00 /* Bit definitions for SMPS_NEGATIVE_CURRENT_LIMIT_EN */ #define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS9 0x40 -#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS9_SHIFT 6 +#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS9_SHIFT 0x06 #define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS8 0x20 -#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS8_SHIFT 5 +#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS8_SHIFT 0x05 #define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS7 0x10 -#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS7_SHIFT 4 +#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS7_SHIFT 0x04 #define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS6 0x08 -#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS6_SHIFT 3 +#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS6_SHIFT 0x03 #define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS45 0x04 -#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS45_SHIFT 2 +#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS45_SHIFT 0x02 #define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS3 0x02 -#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS3_SHIFT 1 +#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS3_SHIFT 0x01 #define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS12 0x01 -#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS12_SHIFT 0 +#define PALMAS_SMPS_NEGATIVE_CURRENT_LIMIT_EN_SMPS12_SHIFT 0x00 /* Bit definitions for SMPS_POWERGOOD_MASK1 */ #define PALMAS_SMPS_POWERGOOD_MASK1_SMPS10 0x80 -#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS10_SHIFT 7 +#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS10_SHIFT 0x07 #define PALMAS_SMPS_POWERGOOD_MASK1_SMPS9 0x40 -#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS9_SHIFT 6 +#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS9_SHIFT 0x06 #define PALMAS_SMPS_POWERGOOD_MASK1_SMPS8 0x20 -#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS8_SHIFT 5 +#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS8_SHIFT 0x05 #define PALMAS_SMPS_POWERGOOD_MASK1_SMPS7 0x10 -#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS7_SHIFT 4 +#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS7_SHIFT 0x04 #define PALMAS_SMPS_POWERGOOD_MASK1_SMPS6 0x08 -#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS6_SHIFT 3 +#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS6_SHIFT 0x03 #define PALMAS_SMPS_POWERGOOD_MASK1_SMPS45 0x04 -#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS45_SHIFT 2 +#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS45_SHIFT 0x02 #define PALMAS_SMPS_POWERGOOD_MASK1_SMPS3 0x02 -#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS3_SHIFT 1 +#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS3_SHIFT 0x01 #define PALMAS_SMPS_POWERGOOD_MASK1_SMPS12 0x01 -#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS12_SHIFT 0 +#define PALMAS_SMPS_POWERGOOD_MASK1_SMPS12_SHIFT 0x00 /* Bit definitions for SMPS_POWERGOOD_MASK2 */ #define PALMAS_SMPS_POWERGOOD_MASK2_POWERGOOD_TYPE_SELECT 0x80 -#define PALMAS_SMPS_POWERGOOD_MASK2_POWERGOOD_TYPE_SELECT_SHIFT 7 +#define PALMAS_SMPS_POWERGOOD_MASK2_POWERGOOD_TYPE_SELECT_SHIFT 0x07 #define PALMAS_SMPS_POWERGOOD_MASK2_GPIO_7 0x04 -#define PALMAS_SMPS_POWERGOOD_MASK2_GPIO_7_SHIFT 2 +#define PALMAS_SMPS_POWERGOOD_MASK2_GPIO_7_SHIFT 0x02 #define PALMAS_SMPS_POWERGOOD_MASK2_VBUS 0x02 -#define PALMAS_SMPS_POWERGOOD_MASK2_VBUS_SHIFT 1 +#define PALMAS_SMPS_POWERGOOD_MASK2_VBUS_SHIFT 0x01 #define PALMAS_SMPS_POWERGOOD_MASK2_ACOK 0x01 -#define PALMAS_SMPS_POWERGOOD_MASK2_ACOK_SHIFT 0 +#define PALMAS_SMPS_POWERGOOD_MASK2_ACOK_SHIFT 0x00 /* Registers for function LDO */ -#define PALMAS_LDO1_CTRL 0x0 -#define PALMAS_LDO1_VOLTAGE 0x1 -#define 
PALMAS_LDO2_CTRL 0x2 -#define PALMAS_LDO2_VOLTAGE 0x3 -#define PALMAS_LDO3_CTRL 0x4 -#define PALMAS_LDO3_VOLTAGE 0x5 -#define PALMAS_LDO4_CTRL 0x6 -#define PALMAS_LDO4_VOLTAGE 0x7 -#define PALMAS_LDO5_CTRL 0x8 -#define PALMAS_LDO5_VOLTAGE 0x9 -#define PALMAS_LDO6_CTRL 0xA -#define PALMAS_LDO6_VOLTAGE 0xB -#define PALMAS_LDO7_CTRL 0xC -#define PALMAS_LDO7_VOLTAGE 0xD -#define PALMAS_LDO8_CTRL 0xE -#define PALMAS_LDO8_VOLTAGE 0xF +#define PALMAS_LDO1_CTRL 0x00 +#define PALMAS_LDO1_VOLTAGE 0x01 +#define PALMAS_LDO2_CTRL 0x02 +#define PALMAS_LDO2_VOLTAGE 0x03 +#define PALMAS_LDO3_CTRL 0x04 +#define PALMAS_LDO3_VOLTAGE 0x05 +#define PALMAS_LDO4_CTRL 0x06 +#define PALMAS_LDO4_VOLTAGE 0x07 +#define PALMAS_LDO5_CTRL 0x08 +#define PALMAS_LDO5_VOLTAGE 0x09 +#define PALMAS_LDO6_CTRL 0x0A +#define PALMAS_LDO6_VOLTAGE 0x0B +#define PALMAS_LDO7_CTRL 0x0C +#define PALMAS_LDO7_VOLTAGE 0x0D +#define PALMAS_LDO8_CTRL 0x0E +#define PALMAS_LDO8_VOLTAGE 0x0F #define PALMAS_LDO9_CTRL 0x10 #define PALMAS_LDO9_VOLTAGE 0x11 #define PALMAS_LDOLN_CTRL 0x12 @@ -1055,236 +1055,236 @@ enum usb_irq_events { /* Bit definitions for LDO1_CTRL */ #define PALMAS_LDO1_CTRL_WR_S 0x80 -#define PALMAS_LDO1_CTRL_WR_S_SHIFT 7 +#define PALMAS_LDO1_CTRL_WR_S_SHIFT 0x07 #define PALMAS_LDO1_CTRL_STATUS 0x10 -#define PALMAS_LDO1_CTRL_STATUS_SHIFT 4 +#define PALMAS_LDO1_CTRL_STATUS_SHIFT 0x04 #define PALMAS_LDO1_CTRL_MODE_SLEEP 0x04 -#define PALMAS_LDO1_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_LDO1_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_LDO1_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_LDO1_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_LDO1_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for LDO1_VOLTAGE */ -#define PALMAS_LDO1_VOLTAGE_VSEL_MASK 0x3f -#define PALMAS_LDO1_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_LDO1_VOLTAGE_VSEL_MASK 0x3F +#define PALMAS_LDO1_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for LDO2_CTRL */ #define PALMAS_LDO2_CTRL_WR_S 0x80 -#define PALMAS_LDO2_CTRL_WR_S_SHIFT 7 +#define PALMAS_LDO2_CTRL_WR_S_SHIFT 0x07 #define PALMAS_LDO2_CTRL_STATUS 0x10 -#define PALMAS_LDO2_CTRL_STATUS_SHIFT 4 +#define PALMAS_LDO2_CTRL_STATUS_SHIFT 0x04 #define PALMAS_LDO2_CTRL_MODE_SLEEP 0x04 -#define PALMAS_LDO2_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_LDO2_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_LDO2_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_LDO2_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_LDO2_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for LDO2_VOLTAGE */ -#define PALMAS_LDO2_VOLTAGE_VSEL_MASK 0x3f -#define PALMAS_LDO2_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_LDO2_VOLTAGE_VSEL_MASK 0x3F +#define PALMAS_LDO2_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for LDO3_CTRL */ #define PALMAS_LDO3_CTRL_WR_S 0x80 -#define PALMAS_LDO3_CTRL_WR_S_SHIFT 7 +#define PALMAS_LDO3_CTRL_WR_S_SHIFT 0x07 #define PALMAS_LDO3_CTRL_STATUS 0x10 -#define PALMAS_LDO3_CTRL_STATUS_SHIFT 4 +#define PALMAS_LDO3_CTRL_STATUS_SHIFT 0x04 #define PALMAS_LDO3_CTRL_MODE_SLEEP 0x04 -#define PALMAS_LDO3_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_LDO3_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_LDO3_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_LDO3_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_LDO3_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for LDO3_VOLTAGE */ -#define PALMAS_LDO3_VOLTAGE_VSEL_MASK 0x3f -#define PALMAS_LDO3_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_LDO3_VOLTAGE_VSEL_MASK 0x3F +#define PALMAS_LDO3_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for LDO4_CTRL */ #define PALMAS_LDO4_CTRL_WR_S 0x80 -#define PALMAS_LDO4_CTRL_WR_S_SHIFT 7 +#define PALMAS_LDO4_CTRL_WR_S_SHIFT 0x07 #define PALMAS_LDO4_CTRL_STATUS 
0x10 -#define PALMAS_LDO4_CTRL_STATUS_SHIFT 4 +#define PALMAS_LDO4_CTRL_STATUS_SHIFT 0x04 #define PALMAS_LDO4_CTRL_MODE_SLEEP 0x04 -#define PALMAS_LDO4_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_LDO4_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_LDO4_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_LDO4_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_LDO4_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for LDO4_VOLTAGE */ -#define PALMAS_LDO4_VOLTAGE_VSEL_MASK 0x3f -#define PALMAS_LDO4_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_LDO4_VOLTAGE_VSEL_MASK 0x3F +#define PALMAS_LDO4_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for LDO5_CTRL */ #define PALMAS_LDO5_CTRL_WR_S 0x80 -#define PALMAS_LDO5_CTRL_WR_S_SHIFT 7 +#define PALMAS_LDO5_CTRL_WR_S_SHIFT 0x07 #define PALMAS_LDO5_CTRL_STATUS 0x10 -#define PALMAS_LDO5_CTRL_STATUS_SHIFT 4 +#define PALMAS_LDO5_CTRL_STATUS_SHIFT 0x04 #define PALMAS_LDO5_CTRL_MODE_SLEEP 0x04 -#define PALMAS_LDO5_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_LDO5_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_LDO5_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_LDO5_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_LDO5_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for LDO5_VOLTAGE */ -#define PALMAS_LDO5_VOLTAGE_VSEL_MASK 0x3f -#define PALMAS_LDO5_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_LDO5_VOLTAGE_VSEL_MASK 0x3F +#define PALMAS_LDO5_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for LDO6_CTRL */ #define PALMAS_LDO6_CTRL_WR_S 0x80 -#define PALMAS_LDO6_CTRL_WR_S_SHIFT 7 +#define PALMAS_LDO6_CTRL_WR_S_SHIFT 0x07 #define PALMAS_LDO6_CTRL_LDO_VIB_EN 0x40 -#define PALMAS_LDO6_CTRL_LDO_VIB_EN_SHIFT 6 +#define PALMAS_LDO6_CTRL_LDO_VIB_EN_SHIFT 0x06 #define PALMAS_LDO6_CTRL_STATUS 0x10 -#define PALMAS_LDO6_CTRL_STATUS_SHIFT 4 +#define PALMAS_LDO6_CTRL_STATUS_SHIFT 0x04 #define PALMAS_LDO6_CTRL_MODE_SLEEP 0x04 -#define PALMAS_LDO6_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_LDO6_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_LDO6_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_LDO6_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_LDO6_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for LDO6_VOLTAGE */ -#define PALMAS_LDO6_VOLTAGE_VSEL_MASK 0x3f -#define PALMAS_LDO6_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_LDO6_VOLTAGE_VSEL_MASK 0x3F +#define PALMAS_LDO6_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for LDO7_CTRL */ #define PALMAS_LDO7_CTRL_WR_S 0x80 -#define PALMAS_LDO7_CTRL_WR_S_SHIFT 7 +#define PALMAS_LDO7_CTRL_WR_S_SHIFT 0x07 #define PALMAS_LDO7_CTRL_STATUS 0x10 -#define PALMAS_LDO7_CTRL_STATUS_SHIFT 4 +#define PALMAS_LDO7_CTRL_STATUS_SHIFT 0x04 #define PALMAS_LDO7_CTRL_MODE_SLEEP 0x04 -#define PALMAS_LDO7_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_LDO7_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_LDO7_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_LDO7_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_LDO7_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for LDO7_VOLTAGE */ -#define PALMAS_LDO7_VOLTAGE_VSEL_MASK 0x3f -#define PALMAS_LDO7_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_LDO7_VOLTAGE_VSEL_MASK 0x3F +#define PALMAS_LDO7_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for LDO8_CTRL */ #define PALMAS_LDO8_CTRL_WR_S 0x80 -#define PALMAS_LDO8_CTRL_WR_S_SHIFT 7 +#define PALMAS_LDO8_CTRL_WR_S_SHIFT 0x07 #define PALMAS_LDO8_CTRL_LDO_TRACKING_EN 0x40 -#define PALMAS_LDO8_CTRL_LDO_TRACKING_EN_SHIFT 6 +#define PALMAS_LDO8_CTRL_LDO_TRACKING_EN_SHIFT 0x06 #define PALMAS_LDO8_CTRL_STATUS 0x10 -#define PALMAS_LDO8_CTRL_STATUS_SHIFT 4 +#define PALMAS_LDO8_CTRL_STATUS_SHIFT 0x04 #define PALMAS_LDO8_CTRL_MODE_SLEEP 0x04 -#define PALMAS_LDO8_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_LDO8_CTRL_MODE_SLEEP_SHIFT 0x02 #define 
PALMAS_LDO8_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_LDO8_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_LDO8_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for LDO8_VOLTAGE */ -#define PALMAS_LDO8_VOLTAGE_VSEL_MASK 0x3f -#define PALMAS_LDO8_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_LDO8_VOLTAGE_VSEL_MASK 0x3F +#define PALMAS_LDO8_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for LDO9_CTRL */ #define PALMAS_LDO9_CTRL_WR_S 0x80 -#define PALMAS_LDO9_CTRL_WR_S_SHIFT 7 +#define PALMAS_LDO9_CTRL_WR_S_SHIFT 0x07 #define PALMAS_LDO9_CTRL_LDO_BYPASS_EN 0x40 -#define PALMAS_LDO9_CTRL_LDO_BYPASS_EN_SHIFT 6 +#define PALMAS_LDO9_CTRL_LDO_BYPASS_EN_SHIFT 0x06 #define PALMAS_LDO9_CTRL_STATUS 0x10 -#define PALMAS_LDO9_CTRL_STATUS_SHIFT 4 +#define PALMAS_LDO9_CTRL_STATUS_SHIFT 0x04 #define PALMAS_LDO9_CTRL_MODE_SLEEP 0x04 -#define PALMAS_LDO9_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_LDO9_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_LDO9_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_LDO9_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_LDO9_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for LDO9_VOLTAGE */ -#define PALMAS_LDO9_VOLTAGE_VSEL_MASK 0x3f -#define PALMAS_LDO9_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_LDO9_VOLTAGE_VSEL_MASK 0x3F +#define PALMAS_LDO9_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for LDOLN_CTRL */ #define PALMAS_LDOLN_CTRL_WR_S 0x80 -#define PALMAS_LDOLN_CTRL_WR_S_SHIFT 7 +#define PALMAS_LDOLN_CTRL_WR_S_SHIFT 0x07 #define PALMAS_LDOLN_CTRL_STATUS 0x10 -#define PALMAS_LDOLN_CTRL_STATUS_SHIFT 4 +#define PALMAS_LDOLN_CTRL_STATUS_SHIFT 0x04 #define PALMAS_LDOLN_CTRL_MODE_SLEEP 0x04 -#define PALMAS_LDOLN_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_LDOLN_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_LDOLN_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_LDOLN_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_LDOLN_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for LDOLN_VOLTAGE */ -#define PALMAS_LDOLN_VOLTAGE_VSEL_MASK 0x3f -#define PALMAS_LDOLN_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_LDOLN_VOLTAGE_VSEL_MASK 0x3F +#define PALMAS_LDOLN_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for LDOUSB_CTRL */ #define PALMAS_LDOUSB_CTRL_WR_S 0x80 -#define PALMAS_LDOUSB_CTRL_WR_S_SHIFT 7 +#define PALMAS_LDOUSB_CTRL_WR_S_SHIFT 0x07 #define PALMAS_LDOUSB_CTRL_STATUS 0x10 -#define PALMAS_LDOUSB_CTRL_STATUS_SHIFT 4 +#define PALMAS_LDOUSB_CTRL_STATUS_SHIFT 0x04 #define PALMAS_LDOUSB_CTRL_MODE_SLEEP 0x04 -#define PALMAS_LDOUSB_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_LDOUSB_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_LDOUSB_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_LDOUSB_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_LDOUSB_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for LDOUSB_VOLTAGE */ -#define PALMAS_LDOUSB_VOLTAGE_VSEL_MASK 0x3f -#define PALMAS_LDOUSB_VOLTAGE_VSEL_SHIFT 0 +#define PALMAS_LDOUSB_VOLTAGE_VSEL_MASK 0x3F +#define PALMAS_LDOUSB_VOLTAGE_VSEL_SHIFT 0x00 /* Bit definitions for LDO_CTRL */ #define PALMAS_LDO_CTRL_LDOUSB_ON_VBUS_VSYS 0x01 -#define PALMAS_LDO_CTRL_LDOUSB_ON_VBUS_VSYS_SHIFT 0 +#define PALMAS_LDO_CTRL_LDOUSB_ON_VBUS_VSYS_SHIFT 0x00 /* Bit definitions for LDO_PD_CTRL1 */ #define PALMAS_LDO_PD_CTRL1_LDO8 0x80 -#define PALMAS_LDO_PD_CTRL1_LDO8_SHIFT 7 +#define PALMAS_LDO_PD_CTRL1_LDO8_SHIFT 0x07 #define PALMAS_LDO_PD_CTRL1_LDO7 0x40 -#define PALMAS_LDO_PD_CTRL1_LDO7_SHIFT 6 +#define PALMAS_LDO_PD_CTRL1_LDO7_SHIFT 0x06 #define PALMAS_LDO_PD_CTRL1_LDO6 0x20 -#define PALMAS_LDO_PD_CTRL1_LDO6_SHIFT 5 +#define PALMAS_LDO_PD_CTRL1_LDO6_SHIFT 0x05 #define PALMAS_LDO_PD_CTRL1_LDO5 0x10 -#define PALMAS_LDO_PD_CTRL1_LDO5_SHIFT 4 +#define PALMAS_LDO_PD_CTRL1_LDO5_SHIFT 0x04 
#define PALMAS_LDO_PD_CTRL1_LDO4 0x08 -#define PALMAS_LDO_PD_CTRL1_LDO4_SHIFT 3 +#define PALMAS_LDO_PD_CTRL1_LDO4_SHIFT 0x03 #define PALMAS_LDO_PD_CTRL1_LDO3 0x04 -#define PALMAS_LDO_PD_CTRL1_LDO3_SHIFT 2 +#define PALMAS_LDO_PD_CTRL1_LDO3_SHIFT 0x02 #define PALMAS_LDO_PD_CTRL1_LDO2 0x02 -#define PALMAS_LDO_PD_CTRL1_LDO2_SHIFT 1 +#define PALMAS_LDO_PD_CTRL1_LDO2_SHIFT 0x01 #define PALMAS_LDO_PD_CTRL1_LDO1 0x01 -#define PALMAS_LDO_PD_CTRL1_LDO1_SHIFT 0 +#define PALMAS_LDO_PD_CTRL1_LDO1_SHIFT 0x00 /* Bit definitions for LDO_PD_CTRL2 */ #define PALMAS_LDO_PD_CTRL2_LDOUSB 0x04 -#define PALMAS_LDO_PD_CTRL2_LDOUSB_SHIFT 2 +#define PALMAS_LDO_PD_CTRL2_LDOUSB_SHIFT 0x02 #define PALMAS_LDO_PD_CTRL2_LDOLN 0x02 -#define PALMAS_LDO_PD_CTRL2_LDOLN_SHIFT 1 +#define PALMAS_LDO_PD_CTRL2_LDOLN_SHIFT 0x01 #define PALMAS_LDO_PD_CTRL2_LDO9 0x01 -#define PALMAS_LDO_PD_CTRL2_LDO9_SHIFT 0 +#define PALMAS_LDO_PD_CTRL2_LDO9_SHIFT 0x00 /* Bit definitions for LDO_SHORT_STATUS1 */ #define PALMAS_LDO_SHORT_STATUS1_LDO8 0x80 -#define PALMAS_LDO_SHORT_STATUS1_LDO8_SHIFT 7 +#define PALMAS_LDO_SHORT_STATUS1_LDO8_SHIFT 0x07 #define PALMAS_LDO_SHORT_STATUS1_LDO7 0x40 -#define PALMAS_LDO_SHORT_STATUS1_LDO7_SHIFT 6 +#define PALMAS_LDO_SHORT_STATUS1_LDO7_SHIFT 0x06 #define PALMAS_LDO_SHORT_STATUS1_LDO6 0x20 -#define PALMAS_LDO_SHORT_STATUS1_LDO6_SHIFT 5 +#define PALMAS_LDO_SHORT_STATUS1_LDO6_SHIFT 0x05 #define PALMAS_LDO_SHORT_STATUS1_LDO5 0x10 -#define PALMAS_LDO_SHORT_STATUS1_LDO5_SHIFT 4 +#define PALMAS_LDO_SHORT_STATUS1_LDO5_SHIFT 0x04 #define PALMAS_LDO_SHORT_STATUS1_LDO4 0x08 -#define PALMAS_LDO_SHORT_STATUS1_LDO4_SHIFT 3 +#define PALMAS_LDO_SHORT_STATUS1_LDO4_SHIFT 0x03 #define PALMAS_LDO_SHORT_STATUS1_LDO3 0x04 -#define PALMAS_LDO_SHORT_STATUS1_LDO3_SHIFT 2 +#define PALMAS_LDO_SHORT_STATUS1_LDO3_SHIFT 0x02 #define PALMAS_LDO_SHORT_STATUS1_LDO2 0x02 -#define PALMAS_LDO_SHORT_STATUS1_LDO2_SHIFT 1 +#define PALMAS_LDO_SHORT_STATUS1_LDO2_SHIFT 0x01 #define PALMAS_LDO_SHORT_STATUS1_LDO1 0x01 -#define PALMAS_LDO_SHORT_STATUS1_LDO1_SHIFT 0 +#define PALMAS_LDO_SHORT_STATUS1_LDO1_SHIFT 0x00 /* Bit definitions for LDO_SHORT_STATUS2 */ #define PALMAS_LDO_SHORT_STATUS2_LDOVANA 0x08 -#define PALMAS_LDO_SHORT_STATUS2_LDOVANA_SHIFT 3 +#define PALMAS_LDO_SHORT_STATUS2_LDOVANA_SHIFT 0x03 #define PALMAS_LDO_SHORT_STATUS2_LDOUSB 0x04 -#define PALMAS_LDO_SHORT_STATUS2_LDOUSB_SHIFT 2 +#define PALMAS_LDO_SHORT_STATUS2_LDOUSB_SHIFT 0x02 #define PALMAS_LDO_SHORT_STATUS2_LDOLN 0x02 -#define PALMAS_LDO_SHORT_STATUS2_LDOLN_SHIFT 1 +#define PALMAS_LDO_SHORT_STATUS2_LDOLN_SHIFT 0x01 #define PALMAS_LDO_SHORT_STATUS2_LDO9 0x01 -#define PALMAS_LDO_SHORT_STATUS2_LDO9_SHIFT 0 +#define PALMAS_LDO_SHORT_STATUS2_LDO9_SHIFT 0x00 /* Registers for function PMU_CONTROL */ -#define PALMAS_DEV_CTRL 0x0 -#define PALMAS_POWER_CTRL 0x1 -#define PALMAS_VSYS_LO 0x2 -#define PALMAS_VSYS_MON 0x3 -#define PALMAS_VBAT_MON 0x4 -#define PALMAS_WATCHDOG 0x5 -#define PALMAS_BOOT_STATUS 0x6 -#define PALMAS_BATTERY_BOUNCE 0x7 -#define PALMAS_BACKUP_BATTERY_CTRL 0x8 -#define PALMAS_LONG_PRESS_KEY 0x9 -#define PALMAS_OSC_THERM_CTRL 0xA -#define PALMAS_BATDEBOUNCING 0xB -#define PALMAS_SWOFF_HWRST 0xF +#define PALMAS_DEV_CTRL 0x00 +#define PALMAS_POWER_CTRL 0x01 +#define PALMAS_VSYS_LO 0x02 +#define PALMAS_VSYS_MON 0x03 +#define PALMAS_VBAT_MON 0x04 +#define PALMAS_WATCHDOG 0x05 +#define PALMAS_BOOT_STATUS 0x06 +#define PALMAS_BATTERY_BOUNCE 0x07 +#define PALMAS_BACKUP_BATTERY_CTRL 0x08 +#define PALMAS_LONG_PRESS_KEY 0x09 +#define PALMAS_OSC_THERM_CTRL 0x0A +#define 
PALMAS_BATDEBOUNCING 0x0B +#define PALMAS_SWOFF_HWRST 0x0F #define PALMAS_SWOFF_COLDRST 0x10 #define PALMAS_SWOFF_STATUS 0x11 #define PALMAS_PMU_CONFIG 0x12 @@ -1296,668 +1296,668 @@ enum usb_irq_events { /* Bit definitions for DEV_CTRL */ #define PALMAS_DEV_CTRL_DEV_STATUS_MASK 0x0c -#define PALMAS_DEV_CTRL_DEV_STATUS_SHIFT 2 +#define PALMAS_DEV_CTRL_DEV_STATUS_SHIFT 0x02 #define PALMAS_DEV_CTRL_SW_RST 0x02 -#define PALMAS_DEV_CTRL_SW_RST_SHIFT 1 +#define PALMAS_DEV_CTRL_SW_RST_SHIFT 0x01 #define PALMAS_DEV_CTRL_DEV_ON 0x01 -#define PALMAS_DEV_CTRL_DEV_ON_SHIFT 0 +#define PALMAS_DEV_CTRL_DEV_ON_SHIFT 0x00 /* Bit definitions for POWER_CTRL */ #define PALMAS_POWER_CTRL_ENABLE2_MASK 0x04 -#define PALMAS_POWER_CTRL_ENABLE2_MASK_SHIFT 2 +#define PALMAS_POWER_CTRL_ENABLE2_MASK_SHIFT 0x02 #define PALMAS_POWER_CTRL_ENABLE1_MASK 0x02 -#define PALMAS_POWER_CTRL_ENABLE1_MASK_SHIFT 1 +#define PALMAS_POWER_CTRL_ENABLE1_MASK_SHIFT 0x01 #define PALMAS_POWER_CTRL_NSLEEP_MASK 0x01 -#define PALMAS_POWER_CTRL_NSLEEP_MASK_SHIFT 0 +#define PALMAS_POWER_CTRL_NSLEEP_MASK_SHIFT 0x00 /* Bit definitions for VSYS_LO */ -#define PALMAS_VSYS_LO_THRESHOLD_MASK 0x1f -#define PALMAS_VSYS_LO_THRESHOLD_SHIFT 0 +#define PALMAS_VSYS_LO_THRESHOLD_MASK 0x1F +#define PALMAS_VSYS_LO_THRESHOLD_SHIFT 0x00 /* Bit definitions for VSYS_MON */ #define PALMAS_VSYS_MON_ENABLE 0x80 -#define PALMAS_VSYS_MON_ENABLE_SHIFT 7 -#define PALMAS_VSYS_MON_THRESHOLD_MASK 0x3f -#define PALMAS_VSYS_MON_THRESHOLD_SHIFT 0 +#define PALMAS_VSYS_MON_ENABLE_SHIFT 0x07 +#define PALMAS_VSYS_MON_THRESHOLD_MASK 0x3F +#define PALMAS_VSYS_MON_THRESHOLD_SHIFT 0x00 /* Bit definitions for VBAT_MON */ #define PALMAS_VBAT_MON_ENABLE 0x80 -#define PALMAS_VBAT_MON_ENABLE_SHIFT 7 -#define PALMAS_VBAT_MON_THRESHOLD_MASK 0x3f -#define PALMAS_VBAT_MON_THRESHOLD_SHIFT 0 +#define PALMAS_VBAT_MON_ENABLE_SHIFT 0x07 +#define PALMAS_VBAT_MON_THRESHOLD_MASK 0x3F +#define PALMAS_VBAT_MON_THRESHOLD_SHIFT 0x00 /* Bit definitions for WATCHDOG */ #define PALMAS_WATCHDOG_LOCK 0x20 -#define PALMAS_WATCHDOG_LOCK_SHIFT 5 +#define PALMAS_WATCHDOG_LOCK_SHIFT 0x05 #define PALMAS_WATCHDOG_ENABLE 0x10 -#define PALMAS_WATCHDOG_ENABLE_SHIFT 4 +#define PALMAS_WATCHDOG_ENABLE_SHIFT 0x04 #define PALMAS_WATCHDOG_MODE 0x08 -#define PALMAS_WATCHDOG_MODE_SHIFT 3 +#define PALMAS_WATCHDOG_MODE_SHIFT 0x03 #define PALMAS_WATCHDOG_TIMER_MASK 0x07 -#define PALMAS_WATCHDOG_TIMER_SHIFT 0 +#define PALMAS_WATCHDOG_TIMER_SHIFT 0x00 /* Bit definitions for BOOT_STATUS */ #define PALMAS_BOOT_STATUS_BOOT1 0x02 -#define PALMAS_BOOT_STATUS_BOOT1_SHIFT 1 +#define PALMAS_BOOT_STATUS_BOOT1_SHIFT 0x01 #define PALMAS_BOOT_STATUS_BOOT0 0x01 -#define PALMAS_BOOT_STATUS_BOOT0_SHIFT 0 +#define PALMAS_BOOT_STATUS_BOOT0_SHIFT 0x00 /* Bit definitions for BATTERY_BOUNCE */ -#define PALMAS_BATTERY_BOUNCE_BB_DELAY_MASK 0x3f -#define PALMAS_BATTERY_BOUNCE_BB_DELAY_SHIFT 0 +#define PALMAS_BATTERY_BOUNCE_BB_DELAY_MASK 0x3F +#define PALMAS_BATTERY_BOUNCE_BB_DELAY_SHIFT 0x00 /* Bit definitions for BACKUP_BATTERY_CTRL */ #define PALMAS_BACKUP_BATTERY_CTRL_VRTC_18_15 0x80 -#define PALMAS_BACKUP_BATTERY_CTRL_VRTC_18_15_SHIFT 7 +#define PALMAS_BACKUP_BATTERY_CTRL_VRTC_18_15_SHIFT 0x07 #define PALMAS_BACKUP_BATTERY_CTRL_VRTC_EN_SLP 0x40 -#define PALMAS_BACKUP_BATTERY_CTRL_VRTC_EN_SLP_SHIFT 6 +#define PALMAS_BACKUP_BATTERY_CTRL_VRTC_EN_SLP_SHIFT 0x06 #define PALMAS_BACKUP_BATTERY_CTRL_VRTC_EN_OFF 0x20 -#define PALMAS_BACKUP_BATTERY_CTRL_VRTC_EN_OFF_SHIFT 5 +#define PALMAS_BACKUP_BATTERY_CTRL_VRTC_EN_OFF_SHIFT 0x05 #define 
PALMAS_BACKUP_BATTERY_CTRL_VRTC_PWEN 0x10 -#define PALMAS_BACKUP_BATTERY_CTRL_VRTC_PWEN_SHIFT 4 +#define PALMAS_BACKUP_BATTERY_CTRL_VRTC_PWEN_SHIFT 0x04 #define PALMAS_BACKUP_BATTERY_CTRL_BBS_BBC_LOW_ICHRG 0x08 -#define PALMAS_BACKUP_BATTERY_CTRL_BBS_BBC_LOW_ICHRG_SHIFT 3 +#define PALMAS_BACKUP_BATTERY_CTRL_BBS_BBC_LOW_ICHRG_SHIFT 0x03 #define PALMAS_BACKUP_BATTERY_CTRL_BB_SEL_MASK 0x06 -#define PALMAS_BACKUP_BATTERY_CTRL_BB_SEL_SHIFT 1 +#define PALMAS_BACKUP_BATTERY_CTRL_BB_SEL_SHIFT 0x01 #define PALMAS_BACKUP_BATTERY_CTRL_BB_CHG_EN 0x01 -#define PALMAS_BACKUP_BATTERY_CTRL_BB_CHG_EN_SHIFT 0 +#define PALMAS_BACKUP_BATTERY_CTRL_BB_CHG_EN_SHIFT 0x00 /* Bit definitions for LONG_PRESS_KEY */ #define PALMAS_LONG_PRESS_KEY_LPK_LOCK 0x80 -#define PALMAS_LONG_PRESS_KEY_LPK_LOCK_SHIFT 7 +#define PALMAS_LONG_PRESS_KEY_LPK_LOCK_SHIFT 0x07 #define PALMAS_LONG_PRESS_KEY_LPK_INT_CLR 0x10 -#define PALMAS_LONG_PRESS_KEY_LPK_INT_CLR_SHIFT 4 +#define PALMAS_LONG_PRESS_KEY_LPK_INT_CLR_SHIFT 0x04 #define PALMAS_LONG_PRESS_KEY_LPK_TIME_MASK 0x0c -#define PALMAS_LONG_PRESS_KEY_LPK_TIME_SHIFT 2 +#define PALMAS_LONG_PRESS_KEY_LPK_TIME_SHIFT 0x02 #define PALMAS_LONG_PRESS_KEY_PWRON_DEBOUNCE_MASK 0x03 -#define PALMAS_LONG_PRESS_KEY_PWRON_DEBOUNCE_SHIFT 0 +#define PALMAS_LONG_PRESS_KEY_PWRON_DEBOUNCE_SHIFT 0x00 /* Bit definitions for OSC_THERM_CTRL */ #define PALMAS_OSC_THERM_CTRL_VANA_ON_IN_SLEEP 0x80 -#define PALMAS_OSC_THERM_CTRL_VANA_ON_IN_SLEEP_SHIFT 7 +#define PALMAS_OSC_THERM_CTRL_VANA_ON_IN_SLEEP_SHIFT 0x07 #define PALMAS_OSC_THERM_CTRL_INT_MASK_IN_SLEEP 0x40 -#define PALMAS_OSC_THERM_CTRL_INT_MASK_IN_SLEEP_SHIFT 6 +#define PALMAS_OSC_THERM_CTRL_INT_MASK_IN_SLEEP_SHIFT 0x06 #define PALMAS_OSC_THERM_CTRL_RC15MHZ_ON_IN_SLEEP 0x20 -#define PALMAS_OSC_THERM_CTRL_RC15MHZ_ON_IN_SLEEP_SHIFT 5 +#define PALMAS_OSC_THERM_CTRL_RC15MHZ_ON_IN_SLEEP_SHIFT 0x05 #define PALMAS_OSC_THERM_CTRL_THERM_OFF_IN_SLEEP 0x10 -#define PALMAS_OSC_THERM_CTRL_THERM_OFF_IN_SLEEP_SHIFT 4 +#define PALMAS_OSC_THERM_CTRL_THERM_OFF_IN_SLEEP_SHIFT 0x04 #define PALMAS_OSC_THERM_CTRL_THERM_HD_SEL_MASK 0x0c -#define PALMAS_OSC_THERM_CTRL_THERM_HD_SEL_SHIFT 2 +#define PALMAS_OSC_THERM_CTRL_THERM_HD_SEL_SHIFT 0x02 #define PALMAS_OSC_THERM_CTRL_OSC_BYPASS 0x02 -#define PALMAS_OSC_THERM_CTRL_OSC_BYPASS_SHIFT 1 +#define PALMAS_OSC_THERM_CTRL_OSC_BYPASS_SHIFT 0x01 #define PALMAS_OSC_THERM_CTRL_OSC_HPMODE 0x01 -#define PALMAS_OSC_THERM_CTRL_OSC_HPMODE_SHIFT 0 +#define PALMAS_OSC_THERM_CTRL_OSC_HPMODE_SHIFT 0x00 /* Bit definitions for BATDEBOUNCING */ #define PALMAS_BATDEBOUNCING_BAT_DEB_BYPASS 0x80 -#define PALMAS_BATDEBOUNCING_BAT_DEB_BYPASS_SHIFT 7 +#define PALMAS_BATDEBOUNCING_BAT_DEB_BYPASS_SHIFT 0x07 #define PALMAS_BATDEBOUNCING_BINS_DEB_MASK 0x78 -#define PALMAS_BATDEBOUNCING_BINS_DEB_SHIFT 3 +#define PALMAS_BATDEBOUNCING_BINS_DEB_SHIFT 0x03 #define PALMAS_BATDEBOUNCING_BEXT_DEB_MASK 0x07 -#define PALMAS_BATDEBOUNCING_BEXT_DEB_SHIFT 0 +#define PALMAS_BATDEBOUNCING_BEXT_DEB_SHIFT 0x00 /* Bit definitions for SWOFF_HWRST */ #define PALMAS_SWOFF_HWRST_PWRON_LPK 0x80 -#define PALMAS_SWOFF_HWRST_PWRON_LPK_SHIFT 7 +#define PALMAS_SWOFF_HWRST_PWRON_LPK_SHIFT 0x07 #define PALMAS_SWOFF_HWRST_PWRDOWN 0x40 -#define PALMAS_SWOFF_HWRST_PWRDOWN_SHIFT 6 +#define PALMAS_SWOFF_HWRST_PWRDOWN_SHIFT 0x06 #define PALMAS_SWOFF_HWRST_WTD 0x20 -#define PALMAS_SWOFF_HWRST_WTD_SHIFT 5 +#define PALMAS_SWOFF_HWRST_WTD_SHIFT 0x05 #define PALMAS_SWOFF_HWRST_TSHUT 0x10 -#define PALMAS_SWOFF_HWRST_TSHUT_SHIFT 4 +#define PALMAS_SWOFF_HWRST_TSHUT_SHIFT 0x04 #define 
PALMAS_SWOFF_HWRST_RESET_IN 0x08 -#define PALMAS_SWOFF_HWRST_RESET_IN_SHIFT 3 +#define PALMAS_SWOFF_HWRST_RESET_IN_SHIFT 0x03 #define PALMAS_SWOFF_HWRST_SW_RST 0x04 -#define PALMAS_SWOFF_HWRST_SW_RST_SHIFT 2 +#define PALMAS_SWOFF_HWRST_SW_RST_SHIFT 0x02 #define PALMAS_SWOFF_HWRST_VSYS_LO 0x02 -#define PALMAS_SWOFF_HWRST_VSYS_LO_SHIFT 1 +#define PALMAS_SWOFF_HWRST_VSYS_LO_SHIFT 0x01 #define PALMAS_SWOFF_HWRST_GPADC_SHUTDOWN 0x01 -#define PALMAS_SWOFF_HWRST_GPADC_SHUTDOWN_SHIFT 0 +#define PALMAS_SWOFF_HWRST_GPADC_SHUTDOWN_SHIFT 0x00 /* Bit definitions for SWOFF_COLDRST */ #define PALMAS_SWOFF_COLDRST_PWRON_LPK 0x80 -#define PALMAS_SWOFF_COLDRST_PWRON_LPK_SHIFT 7 +#define PALMAS_SWOFF_COLDRST_PWRON_LPK_SHIFT 0x07 #define PALMAS_SWOFF_COLDRST_PWRDOWN 0x40 -#define PALMAS_SWOFF_COLDRST_PWRDOWN_SHIFT 6 +#define PALMAS_SWOFF_COLDRST_PWRDOWN_SHIFT 0x06 #define PALMAS_SWOFF_COLDRST_WTD 0x20 -#define PALMAS_SWOFF_COLDRST_WTD_SHIFT 5 +#define PALMAS_SWOFF_COLDRST_WTD_SHIFT 0x05 #define PALMAS_SWOFF_COLDRST_TSHUT 0x10 -#define PALMAS_SWOFF_COLDRST_TSHUT_SHIFT 4 +#define PALMAS_SWOFF_COLDRST_TSHUT_SHIFT 0x04 #define PALMAS_SWOFF_COLDRST_RESET_IN 0x08 -#define PALMAS_SWOFF_COLDRST_RESET_IN_SHIFT 3 +#define PALMAS_SWOFF_COLDRST_RESET_IN_SHIFT 0x03 #define PALMAS_SWOFF_COLDRST_SW_RST 0x04 -#define PALMAS_SWOFF_COLDRST_SW_RST_SHIFT 2 +#define PALMAS_SWOFF_COLDRST_SW_RST_SHIFT 0x02 #define PALMAS_SWOFF_COLDRST_VSYS_LO 0x02 -#define PALMAS_SWOFF_COLDRST_VSYS_LO_SHIFT 1 +#define PALMAS_SWOFF_COLDRST_VSYS_LO_SHIFT 0x01 #define PALMAS_SWOFF_COLDRST_GPADC_SHUTDOWN 0x01 -#define PALMAS_SWOFF_COLDRST_GPADC_SHUTDOWN_SHIFT 0 +#define PALMAS_SWOFF_COLDRST_GPADC_SHUTDOWN_SHIFT 0x00 /* Bit definitions for SWOFF_STATUS */ #define PALMAS_SWOFF_STATUS_PWRON_LPK 0x80 -#define PALMAS_SWOFF_STATUS_PWRON_LPK_SHIFT 7 +#define PALMAS_SWOFF_STATUS_PWRON_LPK_SHIFT 0x07 #define PALMAS_SWOFF_STATUS_PWRDOWN 0x40 -#define PALMAS_SWOFF_STATUS_PWRDOWN_SHIFT 6 +#define PALMAS_SWOFF_STATUS_PWRDOWN_SHIFT 0x06 #define PALMAS_SWOFF_STATUS_WTD 0x20 -#define PALMAS_SWOFF_STATUS_WTD_SHIFT 5 +#define PALMAS_SWOFF_STATUS_WTD_SHIFT 0x05 #define PALMAS_SWOFF_STATUS_TSHUT 0x10 -#define PALMAS_SWOFF_STATUS_TSHUT_SHIFT 4 +#define PALMAS_SWOFF_STATUS_TSHUT_SHIFT 0x04 #define PALMAS_SWOFF_STATUS_RESET_IN 0x08 -#define PALMAS_SWOFF_STATUS_RESET_IN_SHIFT 3 +#define PALMAS_SWOFF_STATUS_RESET_IN_SHIFT 0x03 #define PALMAS_SWOFF_STATUS_SW_RST 0x04 -#define PALMAS_SWOFF_STATUS_SW_RST_SHIFT 2 +#define PALMAS_SWOFF_STATUS_SW_RST_SHIFT 0x02 #define PALMAS_SWOFF_STATUS_VSYS_LO 0x02 -#define PALMAS_SWOFF_STATUS_VSYS_LO_SHIFT 1 +#define PALMAS_SWOFF_STATUS_VSYS_LO_SHIFT 0x01 #define PALMAS_SWOFF_STATUS_GPADC_SHUTDOWN 0x01 -#define PALMAS_SWOFF_STATUS_GPADC_SHUTDOWN_SHIFT 0 +#define PALMAS_SWOFF_STATUS_GPADC_SHUTDOWN_SHIFT 0x00 /* Bit definitions for PMU_CONFIG */ #define PALMAS_PMU_CONFIG_MULTI_CELL_EN 0x40 -#define PALMAS_PMU_CONFIG_MULTI_CELL_EN_SHIFT 6 +#define PALMAS_PMU_CONFIG_MULTI_CELL_EN_SHIFT 0x06 #define PALMAS_PMU_CONFIG_SPARE_MASK 0x30 -#define PALMAS_PMU_CONFIG_SPARE_SHIFT 4 +#define PALMAS_PMU_CONFIG_SPARE_SHIFT 0x04 #define PALMAS_PMU_CONFIG_SWOFF_DLY_MASK 0x0c -#define PALMAS_PMU_CONFIG_SWOFF_DLY_SHIFT 2 +#define PALMAS_PMU_CONFIG_SWOFF_DLY_SHIFT 0x02 #define PALMAS_PMU_CONFIG_GATE_RESET_OUT 0x02 -#define PALMAS_PMU_CONFIG_GATE_RESET_OUT_SHIFT 1 +#define PALMAS_PMU_CONFIG_GATE_RESET_OUT_SHIFT 0x01 #define PALMAS_PMU_CONFIG_AUTODEVON 0x01 -#define PALMAS_PMU_CONFIG_AUTODEVON_SHIFT 0 +#define PALMAS_PMU_CONFIG_AUTODEVON_SHIFT 0x00 /* Bit definitions 
for SPARE */ #define PALMAS_SPARE_SPARE_MASK 0xf8 -#define PALMAS_SPARE_SPARE_SHIFT 3 +#define PALMAS_SPARE_SPARE_SHIFT 0x03 #define PALMAS_SPARE_REGEN3_OD 0x04 -#define PALMAS_SPARE_REGEN3_OD_SHIFT 2 +#define PALMAS_SPARE_REGEN3_OD_SHIFT 0x02 #define PALMAS_SPARE_REGEN2_OD 0x02 -#define PALMAS_SPARE_REGEN2_OD_SHIFT 1 +#define PALMAS_SPARE_REGEN2_OD_SHIFT 0x01 #define PALMAS_SPARE_REGEN1_OD 0x01 -#define PALMAS_SPARE_REGEN1_OD_SHIFT 0 +#define PALMAS_SPARE_REGEN1_OD_SHIFT 0x00 /* Bit definitions for PMU_SECONDARY_INT */ #define PALMAS_PMU_SECONDARY_INT_VBUS_OVV_INT_SRC 0x80 -#define PALMAS_PMU_SECONDARY_INT_VBUS_OVV_INT_SRC_SHIFT 7 +#define PALMAS_PMU_SECONDARY_INT_VBUS_OVV_INT_SRC_SHIFT 0x07 #define PALMAS_PMU_SECONDARY_INT_CHARG_DET_N_INT_SRC 0x40 -#define PALMAS_PMU_SECONDARY_INT_CHARG_DET_N_INT_SRC_SHIFT 6 +#define PALMAS_PMU_SECONDARY_INT_CHARG_DET_N_INT_SRC_SHIFT 0x06 #define PALMAS_PMU_SECONDARY_INT_BB_INT_SRC 0x20 -#define PALMAS_PMU_SECONDARY_INT_BB_INT_SRC_SHIFT 5 +#define PALMAS_PMU_SECONDARY_INT_BB_INT_SRC_SHIFT 0x05 #define PALMAS_PMU_SECONDARY_INT_FBI_INT_SRC 0x10 -#define PALMAS_PMU_SECONDARY_INT_FBI_INT_SRC_SHIFT 4 +#define PALMAS_PMU_SECONDARY_INT_FBI_INT_SRC_SHIFT 0x04 #define PALMAS_PMU_SECONDARY_INT_VBUS_OVV_MASK 0x08 -#define PALMAS_PMU_SECONDARY_INT_VBUS_OVV_MASK_SHIFT 3 +#define PALMAS_PMU_SECONDARY_INT_VBUS_OVV_MASK_SHIFT 0x03 #define PALMAS_PMU_SECONDARY_INT_CHARG_DET_N_MASK 0x04 -#define PALMAS_PMU_SECONDARY_INT_CHARG_DET_N_MASK_SHIFT 2 +#define PALMAS_PMU_SECONDARY_INT_CHARG_DET_N_MASK_SHIFT 0x02 #define PALMAS_PMU_SECONDARY_INT_BB_MASK 0x02 -#define PALMAS_PMU_SECONDARY_INT_BB_MASK_SHIFT 1 +#define PALMAS_PMU_SECONDARY_INT_BB_MASK_SHIFT 0x01 #define PALMAS_PMU_SECONDARY_INT_FBI_MASK 0x01 -#define PALMAS_PMU_SECONDARY_INT_FBI_MASK_SHIFT 0 +#define PALMAS_PMU_SECONDARY_INT_FBI_MASK_SHIFT 0x00 /* Bit definitions for SW_REVISION */ -#define PALMAS_SW_REVISION_SW_REVISION_MASK 0xff -#define PALMAS_SW_REVISION_SW_REVISION_SHIFT 0 +#define PALMAS_SW_REVISION_SW_REVISION_MASK 0xFF +#define PALMAS_SW_REVISION_SW_REVISION_SHIFT 0x00 /* Bit definitions for EXT_CHRG_CTRL */ #define PALMAS_EXT_CHRG_CTRL_VBUS_OVV_STATUS 0x80 -#define PALMAS_EXT_CHRG_CTRL_VBUS_OVV_STATUS_SHIFT 7 +#define PALMAS_EXT_CHRG_CTRL_VBUS_OVV_STATUS_SHIFT 0x07 #define PALMAS_EXT_CHRG_CTRL_CHARG_DET_N_STATUS 0x40 -#define PALMAS_EXT_CHRG_CTRL_CHARG_DET_N_STATUS_SHIFT 6 +#define PALMAS_EXT_CHRG_CTRL_CHARG_DET_N_STATUS_SHIFT 0x06 #define PALMAS_EXT_CHRG_CTRL_VSYS_DEBOUNCE_DELAY 0x08 -#define PALMAS_EXT_CHRG_CTRL_VSYS_DEBOUNCE_DELAY_SHIFT 3 +#define PALMAS_EXT_CHRG_CTRL_VSYS_DEBOUNCE_DELAY_SHIFT 0x03 #define PALMAS_EXT_CHRG_CTRL_CHRG_DET_N 0x04 -#define PALMAS_EXT_CHRG_CTRL_CHRG_DET_N_SHIFT 2 +#define PALMAS_EXT_CHRG_CTRL_CHRG_DET_N_SHIFT 0x02 #define PALMAS_EXT_CHRG_CTRL_AUTO_ACA_EN 0x02 -#define PALMAS_EXT_CHRG_CTRL_AUTO_ACA_EN_SHIFT 1 +#define PALMAS_EXT_CHRG_CTRL_AUTO_ACA_EN_SHIFT 0x01 #define PALMAS_EXT_CHRG_CTRL_AUTO_LDOUSB_EN 0x01 -#define PALMAS_EXT_CHRG_CTRL_AUTO_LDOUSB_EN_SHIFT 0 +#define PALMAS_EXT_CHRG_CTRL_AUTO_LDOUSB_EN_SHIFT 0x00 /* Bit definitions for PMU_SECONDARY_INT2 */ #define PALMAS_PMU_SECONDARY_INT2_DVFS2_INT_SRC 0x20 -#define PALMAS_PMU_SECONDARY_INT2_DVFS2_INT_SRC_SHIFT 5 +#define PALMAS_PMU_SECONDARY_INT2_DVFS2_INT_SRC_SHIFT 0x05 #define PALMAS_PMU_SECONDARY_INT2_DVFS1_INT_SRC 0x10 -#define PALMAS_PMU_SECONDARY_INT2_DVFS1_INT_SRC_SHIFT 4 +#define PALMAS_PMU_SECONDARY_INT2_DVFS1_INT_SRC_SHIFT 0x04 #define PALMAS_PMU_SECONDARY_INT2_DVFS2_MASK 0x02 -#define 
PALMAS_PMU_SECONDARY_INT2_DVFS2_MASK_SHIFT 1 +#define PALMAS_PMU_SECONDARY_INT2_DVFS2_MASK_SHIFT 0x01 #define PALMAS_PMU_SECONDARY_INT2_DVFS1_MASK 0x01 -#define PALMAS_PMU_SECONDARY_INT2_DVFS1_MASK_SHIFT 0 +#define PALMAS_PMU_SECONDARY_INT2_DVFS1_MASK_SHIFT 0x00 /* Registers for function RESOURCE */ -#define PALMAS_CLK32KG_CTRL 0x0 -#define PALMAS_CLK32KGAUDIO_CTRL 0x1 -#define PALMAS_REGEN1_CTRL 0x2 -#define PALMAS_REGEN2_CTRL 0x3 -#define PALMAS_SYSEN1_CTRL 0x4 -#define PALMAS_SYSEN2_CTRL 0x5 -#define PALMAS_NSLEEP_RES_ASSIGN 0x6 -#define PALMAS_NSLEEP_SMPS_ASSIGN 0x7 -#define PALMAS_NSLEEP_LDO_ASSIGN1 0x8 -#define PALMAS_NSLEEP_LDO_ASSIGN2 0x9 -#define PALMAS_ENABLE1_RES_ASSIGN 0xA -#define PALMAS_ENABLE1_SMPS_ASSIGN 0xB -#define PALMAS_ENABLE1_LDO_ASSIGN1 0xC -#define PALMAS_ENABLE1_LDO_ASSIGN2 0xD -#define PALMAS_ENABLE2_RES_ASSIGN 0xE -#define PALMAS_ENABLE2_SMPS_ASSIGN 0xF +#define PALMAS_CLK32KG_CTRL 0x00 +#define PALMAS_CLK32KGAUDIO_CTRL 0x01 +#define PALMAS_REGEN1_CTRL 0x02 +#define PALMAS_REGEN2_CTRL 0x03 +#define PALMAS_SYSEN1_CTRL 0x04 +#define PALMAS_SYSEN2_CTRL 0x05 +#define PALMAS_NSLEEP_RES_ASSIGN 0x06 +#define PALMAS_NSLEEP_SMPS_ASSIGN 0x07 +#define PALMAS_NSLEEP_LDO_ASSIGN1 0x08 +#define PALMAS_NSLEEP_LDO_ASSIGN2 0x09 +#define PALMAS_ENABLE1_RES_ASSIGN 0x0A +#define PALMAS_ENABLE1_SMPS_ASSIGN 0x0B +#define PALMAS_ENABLE1_LDO_ASSIGN1 0x0C +#define PALMAS_ENABLE1_LDO_ASSIGN2 0x0D +#define PALMAS_ENABLE2_RES_ASSIGN 0x0E +#define PALMAS_ENABLE2_SMPS_ASSIGN 0x0F #define PALMAS_ENABLE2_LDO_ASSIGN1 0x10 #define PALMAS_ENABLE2_LDO_ASSIGN2 0x11 #define PALMAS_REGEN3_CTRL 0x12 /* Bit definitions for CLK32KG_CTRL */ #define PALMAS_CLK32KG_CTRL_STATUS 0x10 -#define PALMAS_CLK32KG_CTRL_STATUS_SHIFT 4 +#define PALMAS_CLK32KG_CTRL_STATUS_SHIFT 0x04 #define PALMAS_CLK32KG_CTRL_MODE_SLEEP 0x04 -#define PALMAS_CLK32KG_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_CLK32KG_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_CLK32KG_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_CLK32KG_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_CLK32KG_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for CLK32KGAUDIO_CTRL */ #define PALMAS_CLK32KGAUDIO_CTRL_STATUS 0x10 -#define PALMAS_CLK32KGAUDIO_CTRL_STATUS_SHIFT 4 +#define PALMAS_CLK32KGAUDIO_CTRL_STATUS_SHIFT 0x04 #define PALMAS_CLK32KGAUDIO_CTRL_RESERVED3 0x08 -#define PALMAS_CLK32KGAUDIO_CTRL_RESERVED3_SHIFT 3 +#define PALMAS_CLK32KGAUDIO_CTRL_RESERVED3_SHIFT 0x03 #define PALMAS_CLK32KGAUDIO_CTRL_MODE_SLEEP 0x04 -#define PALMAS_CLK32KGAUDIO_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_CLK32KGAUDIO_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_CLK32KGAUDIO_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_CLK32KGAUDIO_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_CLK32KGAUDIO_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for REGEN1_CTRL */ #define PALMAS_REGEN1_CTRL_STATUS 0x10 -#define PALMAS_REGEN1_CTRL_STATUS_SHIFT 4 +#define PALMAS_REGEN1_CTRL_STATUS_SHIFT 0x04 #define PALMAS_REGEN1_CTRL_MODE_SLEEP 0x04 -#define PALMAS_REGEN1_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_REGEN1_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_REGEN1_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_REGEN1_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_REGEN1_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for REGEN2_CTRL */ #define PALMAS_REGEN2_CTRL_STATUS 0x10 -#define PALMAS_REGEN2_CTRL_STATUS_SHIFT 4 +#define PALMAS_REGEN2_CTRL_STATUS_SHIFT 0x04 #define PALMAS_REGEN2_CTRL_MODE_SLEEP 0x04 -#define PALMAS_REGEN2_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_REGEN2_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_REGEN2_CTRL_MODE_ACTIVE 0x01 -#define 
PALMAS_REGEN2_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_REGEN2_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for SYSEN1_CTRL */ #define PALMAS_SYSEN1_CTRL_STATUS 0x10 -#define PALMAS_SYSEN1_CTRL_STATUS_SHIFT 4 +#define PALMAS_SYSEN1_CTRL_STATUS_SHIFT 0x04 #define PALMAS_SYSEN1_CTRL_MODE_SLEEP 0x04 -#define PALMAS_SYSEN1_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_SYSEN1_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_SYSEN1_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_SYSEN1_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_SYSEN1_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for SYSEN2_CTRL */ #define PALMAS_SYSEN2_CTRL_STATUS 0x10 -#define PALMAS_SYSEN2_CTRL_STATUS_SHIFT 4 +#define PALMAS_SYSEN2_CTRL_STATUS_SHIFT 0x04 #define PALMAS_SYSEN2_CTRL_MODE_SLEEP 0x04 -#define PALMAS_SYSEN2_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_SYSEN2_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_SYSEN2_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_SYSEN2_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_SYSEN2_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Bit definitions for NSLEEP_RES_ASSIGN */ #define PALMAS_NSLEEP_RES_ASSIGN_REGEN3 0x40 -#define PALMAS_NSLEEP_RES_ASSIGN_REGEN3_SHIFT 6 +#define PALMAS_NSLEEP_RES_ASSIGN_REGEN3_SHIFT 0x06 #define PALMAS_NSLEEP_RES_ASSIGN_CLK32KGAUDIO 0x20 -#define PALMAS_NSLEEP_RES_ASSIGN_CLK32KGAUDIO_SHIFT 5 +#define PALMAS_NSLEEP_RES_ASSIGN_CLK32KGAUDIO_SHIFT 0x05 #define PALMAS_NSLEEP_RES_ASSIGN_CLK32KG 0x10 -#define PALMAS_NSLEEP_RES_ASSIGN_CLK32KG_SHIFT 4 +#define PALMAS_NSLEEP_RES_ASSIGN_CLK32KG_SHIFT 0x04 #define PALMAS_NSLEEP_RES_ASSIGN_SYSEN2 0x08 -#define PALMAS_NSLEEP_RES_ASSIGN_SYSEN2_SHIFT 3 +#define PALMAS_NSLEEP_RES_ASSIGN_SYSEN2_SHIFT 0x03 #define PALMAS_NSLEEP_RES_ASSIGN_SYSEN1 0x04 -#define PALMAS_NSLEEP_RES_ASSIGN_SYSEN1_SHIFT 2 +#define PALMAS_NSLEEP_RES_ASSIGN_SYSEN1_SHIFT 0x02 #define PALMAS_NSLEEP_RES_ASSIGN_REGEN2 0x02 -#define PALMAS_NSLEEP_RES_ASSIGN_REGEN2_SHIFT 1 +#define PALMAS_NSLEEP_RES_ASSIGN_REGEN2_SHIFT 0x01 #define PALMAS_NSLEEP_RES_ASSIGN_REGEN1 0x01 -#define PALMAS_NSLEEP_RES_ASSIGN_REGEN1_SHIFT 0 +#define PALMAS_NSLEEP_RES_ASSIGN_REGEN1_SHIFT 0x00 /* Bit definitions for NSLEEP_SMPS_ASSIGN */ #define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS10 0x80 -#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS10_SHIFT 7 +#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS10_SHIFT 0x07 #define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS9 0x40 -#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS9_SHIFT 6 +#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS9_SHIFT 0x06 #define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS8 0x20 -#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS8_SHIFT 5 +#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS8_SHIFT 0x05 #define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS7 0x10 -#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS7_SHIFT 4 +#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS7_SHIFT 0x04 #define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS6 0x08 -#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS6_SHIFT 3 +#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS6_SHIFT 0x03 #define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS45 0x04 -#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS45_SHIFT 2 +#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS45_SHIFT 0x02 #define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS3 0x02 -#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS3_SHIFT 1 +#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS3_SHIFT 0x01 #define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS12 0x01 -#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS12_SHIFT 0 +#define PALMAS_NSLEEP_SMPS_ASSIGN_SMPS12_SHIFT 0x00 /* Bit definitions for NSLEEP_LDO_ASSIGN1 */ #define PALMAS_NSLEEP_LDO_ASSIGN1_LDO8 0x80 -#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO8_SHIFT 7 +#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO8_SHIFT 0x07 #define PALMAS_NSLEEP_LDO_ASSIGN1_LDO7 0x40 -#define 
PALMAS_NSLEEP_LDO_ASSIGN1_LDO7_SHIFT 6 +#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO7_SHIFT 0x06 #define PALMAS_NSLEEP_LDO_ASSIGN1_LDO6 0x20 -#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO6_SHIFT 5 +#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO6_SHIFT 0x05 #define PALMAS_NSLEEP_LDO_ASSIGN1_LDO5 0x10 -#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO5_SHIFT 4 +#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO5_SHIFT 0x04 #define PALMAS_NSLEEP_LDO_ASSIGN1_LDO4 0x08 -#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO4_SHIFT 3 +#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO4_SHIFT 0x03 #define PALMAS_NSLEEP_LDO_ASSIGN1_LDO3 0x04 -#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO3_SHIFT 2 +#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO3_SHIFT 0x02 #define PALMAS_NSLEEP_LDO_ASSIGN1_LDO2 0x02 -#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO2_SHIFT 1 +#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO2_SHIFT 0x01 #define PALMAS_NSLEEP_LDO_ASSIGN1_LDO1 0x01 -#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO1_SHIFT 0 +#define PALMAS_NSLEEP_LDO_ASSIGN1_LDO1_SHIFT 0x00 /* Bit definitions for NSLEEP_LDO_ASSIGN2 */ #define PALMAS_NSLEEP_LDO_ASSIGN2_LDOUSB 0x04 -#define PALMAS_NSLEEP_LDO_ASSIGN2_LDOUSB_SHIFT 2 +#define PALMAS_NSLEEP_LDO_ASSIGN2_LDOUSB_SHIFT 0x02 #define PALMAS_NSLEEP_LDO_ASSIGN2_LDOLN 0x02 -#define PALMAS_NSLEEP_LDO_ASSIGN2_LDOLN_SHIFT 1 +#define PALMAS_NSLEEP_LDO_ASSIGN2_LDOLN_SHIFT 0x01 #define PALMAS_NSLEEP_LDO_ASSIGN2_LDO9 0x01 -#define PALMAS_NSLEEP_LDO_ASSIGN2_LDO9_SHIFT 0 +#define PALMAS_NSLEEP_LDO_ASSIGN2_LDO9_SHIFT 0x00 /* Bit definitions for ENABLE1_RES_ASSIGN */ #define PALMAS_ENABLE1_RES_ASSIGN_REGEN3 0x40 -#define PALMAS_ENABLE1_RES_ASSIGN_REGEN3_SHIFT 6 +#define PALMAS_ENABLE1_RES_ASSIGN_REGEN3_SHIFT 0x06 #define PALMAS_ENABLE1_RES_ASSIGN_CLK32KGAUDIO 0x20 -#define PALMAS_ENABLE1_RES_ASSIGN_CLK32KGAUDIO_SHIFT 5 +#define PALMAS_ENABLE1_RES_ASSIGN_CLK32KGAUDIO_SHIFT 0x05 #define PALMAS_ENABLE1_RES_ASSIGN_CLK32KG 0x10 -#define PALMAS_ENABLE1_RES_ASSIGN_CLK32KG_SHIFT 4 +#define PALMAS_ENABLE1_RES_ASSIGN_CLK32KG_SHIFT 0x04 #define PALMAS_ENABLE1_RES_ASSIGN_SYSEN2 0x08 -#define PALMAS_ENABLE1_RES_ASSIGN_SYSEN2_SHIFT 3 +#define PALMAS_ENABLE1_RES_ASSIGN_SYSEN2_SHIFT 0x03 #define PALMAS_ENABLE1_RES_ASSIGN_SYSEN1 0x04 -#define PALMAS_ENABLE1_RES_ASSIGN_SYSEN1_SHIFT 2 +#define PALMAS_ENABLE1_RES_ASSIGN_SYSEN1_SHIFT 0x02 #define PALMAS_ENABLE1_RES_ASSIGN_REGEN2 0x02 -#define PALMAS_ENABLE1_RES_ASSIGN_REGEN2_SHIFT 1 +#define PALMAS_ENABLE1_RES_ASSIGN_REGEN2_SHIFT 0x01 #define PALMAS_ENABLE1_RES_ASSIGN_REGEN1 0x01 -#define PALMAS_ENABLE1_RES_ASSIGN_REGEN1_SHIFT 0 +#define PALMAS_ENABLE1_RES_ASSIGN_REGEN1_SHIFT 0x00 /* Bit definitions for ENABLE1_SMPS_ASSIGN */ #define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS10 0x80 -#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS10_SHIFT 7 +#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS10_SHIFT 0x07 #define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS9 0x40 -#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS9_SHIFT 6 +#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS9_SHIFT 0x06 #define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS8 0x20 -#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS8_SHIFT 5 +#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS8_SHIFT 0x05 #define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS7 0x10 -#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS7_SHIFT 4 +#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS7_SHIFT 0x04 #define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS6 0x08 -#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS6_SHIFT 3 +#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS6_SHIFT 0x03 #define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS45 0x04 -#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS45_SHIFT 2 +#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS45_SHIFT 0x02 #define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS3 0x02 -#define 
PALMAS_ENABLE1_SMPS_ASSIGN_SMPS3_SHIFT 1 +#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS3_SHIFT 0x01 #define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS12 0x01 -#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS12_SHIFT 0 +#define PALMAS_ENABLE1_SMPS_ASSIGN_SMPS12_SHIFT 0x00 /* Bit definitions for ENABLE1_LDO_ASSIGN1 */ #define PALMAS_ENABLE1_LDO_ASSIGN1_LDO8 0x80 -#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO8_SHIFT 7 +#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO8_SHIFT 0x07 #define PALMAS_ENABLE1_LDO_ASSIGN1_LDO7 0x40 -#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO7_SHIFT 6 +#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO7_SHIFT 0x06 #define PALMAS_ENABLE1_LDO_ASSIGN1_LDO6 0x20 -#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO6_SHIFT 5 +#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO6_SHIFT 0x05 #define PALMAS_ENABLE1_LDO_ASSIGN1_LDO5 0x10 -#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO5_SHIFT 4 +#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO5_SHIFT 0x04 #define PALMAS_ENABLE1_LDO_ASSIGN1_LDO4 0x08 -#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO4_SHIFT 3 +#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO4_SHIFT 0x03 #define PALMAS_ENABLE1_LDO_ASSIGN1_LDO3 0x04 -#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO3_SHIFT 2 +#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO3_SHIFT 0x02 #define PALMAS_ENABLE1_LDO_ASSIGN1_LDO2 0x02 -#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO2_SHIFT 1 +#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO2_SHIFT 0x01 #define PALMAS_ENABLE1_LDO_ASSIGN1_LDO1 0x01 -#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO1_SHIFT 0 +#define PALMAS_ENABLE1_LDO_ASSIGN1_LDO1_SHIFT 0x00 /* Bit definitions for ENABLE1_LDO_ASSIGN2 */ #define PALMAS_ENABLE1_LDO_ASSIGN2_LDOUSB 0x04 -#define PALMAS_ENABLE1_LDO_ASSIGN2_LDOUSB_SHIFT 2 +#define PALMAS_ENABLE1_LDO_ASSIGN2_LDOUSB_SHIFT 0x02 #define PALMAS_ENABLE1_LDO_ASSIGN2_LDOLN 0x02 -#define PALMAS_ENABLE1_LDO_ASSIGN2_LDOLN_SHIFT 1 +#define PALMAS_ENABLE1_LDO_ASSIGN2_LDOLN_SHIFT 0x01 #define PALMAS_ENABLE1_LDO_ASSIGN2_LDO9 0x01 -#define PALMAS_ENABLE1_LDO_ASSIGN2_LDO9_SHIFT 0 +#define PALMAS_ENABLE1_LDO_ASSIGN2_LDO9_SHIFT 0x00 /* Bit definitions for ENABLE2_RES_ASSIGN */ #define PALMAS_ENABLE2_RES_ASSIGN_REGEN3 0x40 -#define PALMAS_ENABLE2_RES_ASSIGN_REGEN3_SHIFT 6 +#define PALMAS_ENABLE2_RES_ASSIGN_REGEN3_SHIFT 0x06 #define PALMAS_ENABLE2_RES_ASSIGN_CLK32KGAUDIO 0x20 -#define PALMAS_ENABLE2_RES_ASSIGN_CLK32KGAUDIO_SHIFT 5 +#define PALMAS_ENABLE2_RES_ASSIGN_CLK32KGAUDIO_SHIFT 0x05 #define PALMAS_ENABLE2_RES_ASSIGN_CLK32KG 0x10 -#define PALMAS_ENABLE2_RES_ASSIGN_CLK32KG_SHIFT 4 +#define PALMAS_ENABLE2_RES_ASSIGN_CLK32KG_SHIFT 0x04 #define PALMAS_ENABLE2_RES_ASSIGN_SYSEN2 0x08 -#define PALMAS_ENABLE2_RES_ASSIGN_SYSEN2_SHIFT 3 +#define PALMAS_ENABLE2_RES_ASSIGN_SYSEN2_SHIFT 0x03 #define PALMAS_ENABLE2_RES_ASSIGN_SYSEN1 0x04 -#define PALMAS_ENABLE2_RES_ASSIGN_SYSEN1_SHIFT 2 +#define PALMAS_ENABLE2_RES_ASSIGN_SYSEN1_SHIFT 0x02 #define PALMAS_ENABLE2_RES_ASSIGN_REGEN2 0x02 -#define PALMAS_ENABLE2_RES_ASSIGN_REGEN2_SHIFT 1 +#define PALMAS_ENABLE2_RES_ASSIGN_REGEN2_SHIFT 0x01 #define PALMAS_ENABLE2_RES_ASSIGN_REGEN1 0x01 -#define PALMAS_ENABLE2_RES_ASSIGN_REGEN1_SHIFT 0 +#define PALMAS_ENABLE2_RES_ASSIGN_REGEN1_SHIFT 0x00 /* Bit definitions for ENABLE2_SMPS_ASSIGN */ #define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS10 0x80 -#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS10_SHIFT 7 +#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS10_SHIFT 0x07 #define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS9 0x40 -#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS9_SHIFT 6 +#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS9_SHIFT 0x06 #define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS8 0x20 -#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS8_SHIFT 5 +#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS8_SHIFT 
0x05 #define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS7 0x10 -#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS7_SHIFT 4 +#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS7_SHIFT 0x04 #define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS6 0x08 -#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS6_SHIFT 3 +#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS6_SHIFT 0x03 #define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS45 0x04 -#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS45_SHIFT 2 +#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS45_SHIFT 0x02 #define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS3 0x02 -#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS3_SHIFT 1 +#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS3_SHIFT 0x01 #define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS12 0x01 -#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS12_SHIFT 0 +#define PALMAS_ENABLE2_SMPS_ASSIGN_SMPS12_SHIFT 0x00 /* Bit definitions for ENABLE2_LDO_ASSIGN1 */ #define PALMAS_ENABLE2_LDO_ASSIGN1_LDO8 0x80 -#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO8_SHIFT 7 +#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO8_SHIFT 0x07 #define PALMAS_ENABLE2_LDO_ASSIGN1_LDO7 0x40 -#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO7_SHIFT 6 +#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO7_SHIFT 0x06 #define PALMAS_ENABLE2_LDO_ASSIGN1_LDO6 0x20 -#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO6_SHIFT 5 +#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO6_SHIFT 0x05 #define PALMAS_ENABLE2_LDO_ASSIGN1_LDO5 0x10 -#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO5_SHIFT 4 +#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO5_SHIFT 0x04 #define PALMAS_ENABLE2_LDO_ASSIGN1_LDO4 0x08 -#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO4_SHIFT 3 +#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO4_SHIFT 0x03 #define PALMAS_ENABLE2_LDO_ASSIGN1_LDO3 0x04 -#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO3_SHIFT 2 +#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO3_SHIFT 0x02 #define PALMAS_ENABLE2_LDO_ASSIGN1_LDO2 0x02 -#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO2_SHIFT 1 +#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO2_SHIFT 0x01 #define PALMAS_ENABLE2_LDO_ASSIGN1_LDO1 0x01 -#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO1_SHIFT 0 +#define PALMAS_ENABLE2_LDO_ASSIGN1_LDO1_SHIFT 0x00 /* Bit definitions for ENABLE2_LDO_ASSIGN2 */ #define PALMAS_ENABLE2_LDO_ASSIGN2_LDOUSB 0x04 -#define PALMAS_ENABLE2_LDO_ASSIGN2_LDOUSB_SHIFT 2 +#define PALMAS_ENABLE2_LDO_ASSIGN2_LDOUSB_SHIFT 0x02 #define PALMAS_ENABLE2_LDO_ASSIGN2_LDOLN 0x02 -#define PALMAS_ENABLE2_LDO_ASSIGN2_LDOLN_SHIFT 1 +#define PALMAS_ENABLE2_LDO_ASSIGN2_LDOLN_SHIFT 0x01 #define PALMAS_ENABLE2_LDO_ASSIGN2_LDO9 0x01 -#define PALMAS_ENABLE2_LDO_ASSIGN2_LDO9_SHIFT 0 +#define PALMAS_ENABLE2_LDO_ASSIGN2_LDO9_SHIFT 0x00 /* Bit definitions for REGEN3_CTRL */ #define PALMAS_REGEN3_CTRL_STATUS 0x10 -#define PALMAS_REGEN3_CTRL_STATUS_SHIFT 4 +#define PALMAS_REGEN3_CTRL_STATUS_SHIFT 0x04 #define PALMAS_REGEN3_CTRL_MODE_SLEEP 0x04 -#define PALMAS_REGEN3_CTRL_MODE_SLEEP_SHIFT 2 +#define PALMAS_REGEN3_CTRL_MODE_SLEEP_SHIFT 0x02 #define PALMAS_REGEN3_CTRL_MODE_ACTIVE 0x01 -#define PALMAS_REGEN3_CTRL_MODE_ACTIVE_SHIFT 0 +#define PALMAS_REGEN3_CTRL_MODE_ACTIVE_SHIFT 0x00 /* Registers for function PAD_CONTROL */ -#define PALMAS_OD_OUTPUT_CTRL2 0x2 -#define PALMAS_POLARITY_CTRL2 0x3 -#define PALMAS_PU_PD_INPUT_CTRL1 0x4 -#define PALMAS_PU_PD_INPUT_CTRL2 0x5 -#define PALMAS_PU_PD_INPUT_CTRL3 0x6 -#define PALMAS_PU_PD_INPUT_CTRL5 0x7 -#define PALMAS_OD_OUTPUT_CTRL 0x8 -#define PALMAS_POLARITY_CTRL 0x9 -#define PALMAS_PRIMARY_SECONDARY_PAD1 0xA -#define PALMAS_PRIMARY_SECONDARY_PAD2 0xB -#define PALMAS_I2C_SPI 0xC -#define PALMAS_PU_PD_INPUT_CTRL4 0xD -#define PALMAS_PRIMARY_SECONDARY_PAD3 0xE -#define PALMAS_PRIMARY_SECONDARY_PAD4 0xF +#define PALMAS_OD_OUTPUT_CTRL2 0x02 +#define PALMAS_POLARITY_CTRL2 
0x03 +#define PALMAS_PU_PD_INPUT_CTRL1 0x04 +#define PALMAS_PU_PD_INPUT_CTRL2 0x05 +#define PALMAS_PU_PD_INPUT_CTRL3 0x06 +#define PALMAS_PU_PD_INPUT_CTRL5 0x07 +#define PALMAS_OD_OUTPUT_CTRL 0x08 +#define PALMAS_POLARITY_CTRL 0x09 +#define PALMAS_PRIMARY_SECONDARY_PAD1 0x0A +#define PALMAS_PRIMARY_SECONDARY_PAD2 0x0B +#define PALMAS_I2C_SPI 0x0C +#define PALMAS_PU_PD_INPUT_CTRL4 0x0D +#define PALMAS_PRIMARY_SECONDARY_PAD3 0x0E +#define PALMAS_PRIMARY_SECONDARY_PAD4 0x0F /* Bit definitions for PU_PD_INPUT_CTRL1 */ #define PALMAS_PU_PD_INPUT_CTRL1_RESET_IN_PD 0x40 -#define PALMAS_PU_PD_INPUT_CTRL1_RESET_IN_PD_SHIFT 6 +#define PALMAS_PU_PD_INPUT_CTRL1_RESET_IN_PD_SHIFT 0x06 #define PALMAS_PU_PD_INPUT_CTRL1_GPADC_START_PU 0x20 -#define PALMAS_PU_PD_INPUT_CTRL1_GPADC_START_PU_SHIFT 5 +#define PALMAS_PU_PD_INPUT_CTRL1_GPADC_START_PU_SHIFT 0x05 #define PALMAS_PU_PD_INPUT_CTRL1_GPADC_START_PD 0x10 -#define PALMAS_PU_PD_INPUT_CTRL1_GPADC_START_PD_SHIFT 4 +#define PALMAS_PU_PD_INPUT_CTRL1_GPADC_START_PD_SHIFT 0x04 #define PALMAS_PU_PD_INPUT_CTRL1_PWRDOWN_PD 0x04 -#define PALMAS_PU_PD_INPUT_CTRL1_PWRDOWN_PD_SHIFT 2 +#define PALMAS_PU_PD_INPUT_CTRL1_PWRDOWN_PD_SHIFT 0x02 #define PALMAS_PU_PD_INPUT_CTRL1_NRESWARM_PU 0x02 -#define PALMAS_PU_PD_INPUT_CTRL1_NRESWARM_PU_SHIFT 1 +#define PALMAS_PU_PD_INPUT_CTRL1_NRESWARM_PU_SHIFT 0x01 /* Bit definitions for PU_PD_INPUT_CTRL2 */ #define PALMAS_PU_PD_INPUT_CTRL2_ENABLE2_PU 0x20 -#define PALMAS_PU_PD_INPUT_CTRL2_ENABLE2_PU_SHIFT 5 +#define PALMAS_PU_PD_INPUT_CTRL2_ENABLE2_PU_SHIFT 0x05 #define PALMAS_PU_PD_INPUT_CTRL2_ENABLE2_PD 0x10 -#define PALMAS_PU_PD_INPUT_CTRL2_ENABLE2_PD_SHIFT 4 +#define PALMAS_PU_PD_INPUT_CTRL2_ENABLE2_PD_SHIFT 0x04 #define PALMAS_PU_PD_INPUT_CTRL2_ENABLE1_PU 0x08 -#define PALMAS_PU_PD_INPUT_CTRL2_ENABLE1_PU_SHIFT 3 +#define PALMAS_PU_PD_INPUT_CTRL2_ENABLE1_PU_SHIFT 0x03 #define PALMAS_PU_PD_INPUT_CTRL2_ENABLE1_PD 0x04 -#define PALMAS_PU_PD_INPUT_CTRL2_ENABLE1_PD_SHIFT 2 +#define PALMAS_PU_PD_INPUT_CTRL2_ENABLE1_PD_SHIFT 0x02 #define PALMAS_PU_PD_INPUT_CTRL2_NSLEEP_PU 0x02 -#define PALMAS_PU_PD_INPUT_CTRL2_NSLEEP_PU_SHIFT 1 +#define PALMAS_PU_PD_INPUT_CTRL2_NSLEEP_PU_SHIFT 0x01 #define PALMAS_PU_PD_INPUT_CTRL2_NSLEEP_PD 0x01 -#define PALMAS_PU_PD_INPUT_CTRL2_NSLEEP_PD_SHIFT 0 +#define PALMAS_PU_PD_INPUT_CTRL2_NSLEEP_PD_SHIFT 0x00 /* Bit definitions for PU_PD_INPUT_CTRL3 */ #define PALMAS_PU_PD_INPUT_CTRL3_ACOK_PD 0x40 -#define PALMAS_PU_PD_INPUT_CTRL3_ACOK_PD_SHIFT 6 +#define PALMAS_PU_PD_INPUT_CTRL3_ACOK_PD_SHIFT 0x06 #define PALMAS_PU_PD_INPUT_CTRL3_CHRG_DET_N_PD 0x10 -#define PALMAS_PU_PD_INPUT_CTRL3_CHRG_DET_N_PD_SHIFT 4 +#define PALMAS_PU_PD_INPUT_CTRL3_CHRG_DET_N_PD_SHIFT 0x04 #define PALMAS_PU_PD_INPUT_CTRL3_POWERHOLD_PD 0x04 -#define PALMAS_PU_PD_INPUT_CTRL3_POWERHOLD_PD_SHIFT 2 +#define PALMAS_PU_PD_INPUT_CTRL3_POWERHOLD_PD_SHIFT 0x02 #define PALMAS_PU_PD_INPUT_CTRL3_MSECURE_PD 0x01 -#define PALMAS_PU_PD_INPUT_CTRL3_MSECURE_PD_SHIFT 0 +#define PALMAS_PU_PD_INPUT_CTRL3_MSECURE_PD_SHIFT 0x00 /* Bit definitions for OD_OUTPUT_CTRL */ #define PALMAS_OD_OUTPUT_CTRL_PWM_2_OD 0x80 -#define PALMAS_OD_OUTPUT_CTRL_PWM_2_OD_SHIFT 7 +#define PALMAS_OD_OUTPUT_CTRL_PWM_2_OD_SHIFT 0x07 #define PALMAS_OD_OUTPUT_CTRL_VBUSDET_OD 0x40 -#define PALMAS_OD_OUTPUT_CTRL_VBUSDET_OD_SHIFT 6 +#define PALMAS_OD_OUTPUT_CTRL_VBUSDET_OD_SHIFT 0x06 #define PALMAS_OD_OUTPUT_CTRL_PWM_1_OD 0x20 -#define PALMAS_OD_OUTPUT_CTRL_PWM_1_OD_SHIFT 5 +#define PALMAS_OD_OUTPUT_CTRL_PWM_1_OD_SHIFT 0x05 #define PALMAS_OD_OUTPUT_CTRL_INT_OD 0x08 -#define 
PALMAS_OD_OUTPUT_CTRL_INT_OD_SHIFT 3 +#define PALMAS_OD_OUTPUT_CTRL_INT_OD_SHIFT 0x03 /* Bit definitions for POLARITY_CTRL */ #define PALMAS_POLARITY_CTRL_INT_POLARITY 0x80 -#define PALMAS_POLARITY_CTRL_INT_POLARITY_SHIFT 7 +#define PALMAS_POLARITY_CTRL_INT_POLARITY_SHIFT 0x07 #define PALMAS_POLARITY_CTRL_ENABLE2_POLARITY 0x40 -#define PALMAS_POLARITY_CTRL_ENABLE2_POLARITY_SHIFT 6 +#define PALMAS_POLARITY_CTRL_ENABLE2_POLARITY_SHIFT 0x06 #define PALMAS_POLARITY_CTRL_ENABLE1_POLARITY 0x20 -#define PALMAS_POLARITY_CTRL_ENABLE1_POLARITY_SHIFT 5 +#define PALMAS_POLARITY_CTRL_ENABLE1_POLARITY_SHIFT 0x05 #define PALMAS_POLARITY_CTRL_NSLEEP_POLARITY 0x10 -#define PALMAS_POLARITY_CTRL_NSLEEP_POLARITY_SHIFT 4 +#define PALMAS_POLARITY_CTRL_NSLEEP_POLARITY_SHIFT 0x04 #define PALMAS_POLARITY_CTRL_RESET_IN_POLARITY 0x08 -#define PALMAS_POLARITY_CTRL_RESET_IN_POLARITY_SHIFT 3 +#define PALMAS_POLARITY_CTRL_RESET_IN_POLARITY_SHIFT 0x03 #define PALMAS_POLARITY_CTRL_GPIO_3_CHRG_DET_N_POLARITY 0x04 -#define PALMAS_POLARITY_CTRL_GPIO_3_CHRG_DET_N_POLARITY_SHIFT 2 +#define PALMAS_POLARITY_CTRL_GPIO_3_CHRG_DET_N_POLARITY_SHIFT 0x02 #define PALMAS_POLARITY_CTRL_POWERGOOD_USB_PSEL_POLARITY 0x02 -#define PALMAS_POLARITY_CTRL_POWERGOOD_USB_PSEL_POLARITY_SHIFT 1 +#define PALMAS_POLARITY_CTRL_POWERGOOD_USB_PSEL_POLARITY_SHIFT 0x01 #define PALMAS_POLARITY_CTRL_PWRDOWN_POLARITY 0x01 -#define PALMAS_POLARITY_CTRL_PWRDOWN_POLARITY_SHIFT 0 +#define PALMAS_POLARITY_CTRL_PWRDOWN_POLARITY_SHIFT 0x00 /* Bit definitions for PRIMARY_SECONDARY_PAD1 */ #define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_3 0x80 -#define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_3_SHIFT 7 +#define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_3_SHIFT 0x07 #define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_2_MASK 0x60 -#define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_2_SHIFT 5 +#define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_2_SHIFT 0x05 #define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_1_MASK 0x18 -#define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_1_SHIFT 3 +#define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_1_SHIFT 0x03 #define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_0 0x04 -#define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_0_SHIFT 2 +#define PALMAS_PRIMARY_SECONDARY_PAD1_GPIO_0_SHIFT 0x02 #define PALMAS_PRIMARY_SECONDARY_PAD1_VAC 0x02 -#define PALMAS_PRIMARY_SECONDARY_PAD1_VAC_SHIFT 1 +#define PALMAS_PRIMARY_SECONDARY_PAD1_VAC_SHIFT 0x01 #define PALMAS_PRIMARY_SECONDARY_PAD1_POWERGOOD 0x01 -#define PALMAS_PRIMARY_SECONDARY_PAD1_POWERGOOD_SHIFT 0 +#define PALMAS_PRIMARY_SECONDARY_PAD1_POWERGOOD_SHIFT 0x00 /* Bit definitions for PRIMARY_SECONDARY_PAD2 */ #define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_7_MASK 0x30 -#define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_7_SHIFT 4 +#define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_7_SHIFT 0x04 #define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_6 0x08 -#define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_6_SHIFT 3 +#define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_6_SHIFT 0x03 #define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_5_MASK 0x06 -#define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_5_SHIFT 1 +#define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_5_SHIFT 0x01 #define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_4 0x01 -#define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_4_SHIFT 0 +#define PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_4_SHIFT 0x00 /* Bit definitions for I2C_SPI */ #define PALMAS_I2C_SPI_I2C2OTP_EN 0x80 -#define PALMAS_I2C_SPI_I2C2OTP_EN_SHIFT 7 +#define PALMAS_I2C_SPI_I2C2OTP_EN_SHIFT 0x07 #define PALMAS_I2C_SPI_I2C2OTP_PAGESEL 0x40 -#define PALMAS_I2C_SPI_I2C2OTP_PAGESEL_SHIFT 6 +#define PALMAS_I2C_SPI_I2C2OTP_PAGESEL_SHIFT 0x06 #define PALMAS_I2C_SPI_ID_I2C2 
0x20 -#define PALMAS_I2C_SPI_ID_I2C2_SHIFT 5 +#define PALMAS_I2C_SPI_ID_I2C2_SHIFT 0x05 #define PALMAS_I2C_SPI_I2C_SPI 0x10 -#define PALMAS_I2C_SPI_I2C_SPI_SHIFT 4 -#define PALMAS_I2C_SPI_ID_I2C1_MASK 0x0f -#define PALMAS_I2C_SPI_ID_I2C1_SHIFT 0 +#define PALMAS_I2C_SPI_I2C_SPI_SHIFT 0x04 +#define PALMAS_I2C_SPI_ID_I2C1_MASK 0x0F +#define PALMAS_I2C_SPI_ID_I2C1_SHIFT 0x00 /* Bit definitions for PU_PD_INPUT_CTRL4 */ #define PALMAS_PU_PD_INPUT_CTRL4_DVFS2_DAT_PD 0x40 -#define PALMAS_PU_PD_INPUT_CTRL4_DVFS2_DAT_PD_SHIFT 6 +#define PALMAS_PU_PD_INPUT_CTRL4_DVFS2_DAT_PD_SHIFT 0x06 #define PALMAS_PU_PD_INPUT_CTRL4_DVFS2_CLK_PD 0x10 -#define PALMAS_PU_PD_INPUT_CTRL4_DVFS2_CLK_PD_SHIFT 4 +#define PALMAS_PU_PD_INPUT_CTRL4_DVFS2_CLK_PD_SHIFT 0x04 #define PALMAS_PU_PD_INPUT_CTRL4_DVFS1_DAT_PD 0x04 -#define PALMAS_PU_PD_INPUT_CTRL4_DVFS1_DAT_PD_SHIFT 2 +#define PALMAS_PU_PD_INPUT_CTRL4_DVFS1_DAT_PD_SHIFT 0x02 #define PALMAS_PU_PD_INPUT_CTRL4_DVFS1_CLK_PD 0x01 -#define PALMAS_PU_PD_INPUT_CTRL4_DVFS1_CLK_PD_SHIFT 0 +#define PALMAS_PU_PD_INPUT_CTRL4_DVFS1_CLK_PD_SHIFT 0x00 /* Bit definitions for PRIMARY_SECONDARY_PAD3 */ #define PALMAS_PRIMARY_SECONDARY_PAD3_DVFS2 0x02 -#define PALMAS_PRIMARY_SECONDARY_PAD3_DVFS2_SHIFT 1 +#define PALMAS_PRIMARY_SECONDARY_PAD3_DVFS2_SHIFT 0x01 #define PALMAS_PRIMARY_SECONDARY_PAD3_DVFS1 0x01 -#define PALMAS_PRIMARY_SECONDARY_PAD3_DVFS1_SHIFT 0 +#define PALMAS_PRIMARY_SECONDARY_PAD3_DVFS1_SHIFT 0x00 /* Registers for function LED_PWM */ -#define PALMAS_LED_PERIOD_CTRL 0x0 -#define PALMAS_LED_CTRL 0x1 -#define PALMAS_PWM_CTRL1 0x2 -#define PALMAS_PWM_CTRL2 0x3 +#define PALMAS_LED_PERIOD_CTRL 0x00 +#define PALMAS_LED_CTRL 0x01 +#define PALMAS_PWM_CTRL1 0x02 +#define PALMAS_PWM_CTRL2 0x03 /* Bit definitions for LED_PERIOD_CTRL */ #define PALMAS_LED_PERIOD_CTRL_LED_2_PERIOD_MASK 0x38 -#define PALMAS_LED_PERIOD_CTRL_LED_2_PERIOD_SHIFT 3 +#define PALMAS_LED_PERIOD_CTRL_LED_2_PERIOD_SHIFT 0x03 #define PALMAS_LED_PERIOD_CTRL_LED_1_PERIOD_MASK 0x07 -#define PALMAS_LED_PERIOD_CTRL_LED_1_PERIOD_SHIFT 0 +#define PALMAS_LED_PERIOD_CTRL_LED_1_PERIOD_SHIFT 0x00 /* Bit definitions for LED_CTRL */ #define PALMAS_LED_CTRL_LED_2_SEQ 0x20 -#define PALMAS_LED_CTRL_LED_2_SEQ_SHIFT 5 +#define PALMAS_LED_CTRL_LED_2_SEQ_SHIFT 0x05 #define PALMAS_LED_CTRL_LED_1_SEQ 0x10 -#define PALMAS_LED_CTRL_LED_1_SEQ_SHIFT 4 +#define PALMAS_LED_CTRL_LED_1_SEQ_SHIFT 0x04 #define PALMAS_LED_CTRL_LED_2_ON_TIME_MASK 0x0c -#define PALMAS_LED_CTRL_LED_2_ON_TIME_SHIFT 2 +#define PALMAS_LED_CTRL_LED_2_ON_TIME_SHIFT 0x02 #define PALMAS_LED_CTRL_LED_1_ON_TIME_MASK 0x03 -#define PALMAS_LED_CTRL_LED_1_ON_TIME_SHIFT 0 +#define PALMAS_LED_CTRL_LED_1_ON_TIME_SHIFT 0x00 /* Bit definitions for PWM_CTRL1 */ #define PALMAS_PWM_CTRL1_PWM_FREQ_EN 0x02 -#define PALMAS_PWM_CTRL1_PWM_FREQ_EN_SHIFT 1 +#define PALMAS_PWM_CTRL1_PWM_FREQ_EN_SHIFT 0x01 #define PALMAS_PWM_CTRL1_PWM_FREQ_SEL 0x01 -#define PALMAS_PWM_CTRL1_PWM_FREQ_SEL_SHIFT 0 +#define PALMAS_PWM_CTRL1_PWM_FREQ_SEL_SHIFT 0x00 /* Bit definitions for PWM_CTRL2 */ -#define PALMAS_PWM_CTRL2_PWM_DUTY_SEL_MASK 0xff -#define PALMAS_PWM_CTRL2_PWM_DUTY_SEL_SHIFT 0 +#define PALMAS_PWM_CTRL2_PWM_DUTY_SEL_MASK 0xFF +#define PALMAS_PWM_CTRL2_PWM_DUTY_SEL_SHIFT 0x00 /* Registers for function INTERRUPT */ -#define PALMAS_INT1_STATUS 0x0 -#define PALMAS_INT1_MASK 0x1 -#define PALMAS_INT1_LINE_STATE 0x2 -#define PALMAS_INT1_EDGE_DETECT1_RESERVED 0x3 -#define PALMAS_INT1_EDGE_DETECT2_RESERVED 0x4 -#define PALMAS_INT2_STATUS 0x5 -#define PALMAS_INT2_MASK 0x6 -#define PALMAS_INT2_LINE_STATE 0x7 
-#define PALMAS_INT2_EDGE_DETECT1_RESERVED 0x8 -#define PALMAS_INT2_EDGE_DETECT2_RESERVED 0x9 -#define PALMAS_INT3_STATUS 0xA -#define PALMAS_INT3_MASK 0xB -#define PALMAS_INT3_LINE_STATE 0xC -#define PALMAS_INT3_EDGE_DETECT1_RESERVED 0xD -#define PALMAS_INT3_EDGE_DETECT2_RESERVED 0xE -#define PALMAS_INT4_STATUS 0xF +#define PALMAS_INT1_STATUS 0x00 +#define PALMAS_INT1_MASK 0x01 +#define PALMAS_INT1_LINE_STATE 0x02 +#define PALMAS_INT1_EDGE_DETECT1_RESERVED 0x03 +#define PALMAS_INT1_EDGE_DETECT2_RESERVED 0x04 +#define PALMAS_INT2_STATUS 0x05 +#define PALMAS_INT2_MASK 0x06 +#define PALMAS_INT2_LINE_STATE 0x07 +#define PALMAS_INT2_EDGE_DETECT1_RESERVED 0x08 +#define PALMAS_INT2_EDGE_DETECT2_RESERVED 0x09 +#define PALMAS_INT3_STATUS 0x0A +#define PALMAS_INT3_MASK 0x0B +#define PALMAS_INT3_LINE_STATE 0x0C +#define PALMAS_INT3_EDGE_DETECT1_RESERVED 0x0D +#define PALMAS_INT3_EDGE_DETECT2_RESERVED 0x0E +#define PALMAS_INT4_STATUS 0x0F #define PALMAS_INT4_MASK 0x10 #define PALMAS_INT4_LINE_STATE 0x11 #define PALMAS_INT4_EDGE_DETECT1 0x12 @@ -1966,276 +1966,276 @@ enum usb_irq_events { /* Bit definitions for INT1_STATUS */ #define PALMAS_INT1_STATUS_VBAT_MON 0x80 -#define PALMAS_INT1_STATUS_VBAT_MON_SHIFT 7 +#define PALMAS_INT1_STATUS_VBAT_MON_SHIFT 0x07 #define PALMAS_INT1_STATUS_VSYS_MON 0x40 -#define PALMAS_INT1_STATUS_VSYS_MON_SHIFT 6 +#define PALMAS_INT1_STATUS_VSYS_MON_SHIFT 0x06 #define PALMAS_INT1_STATUS_HOTDIE 0x20 -#define PALMAS_INT1_STATUS_HOTDIE_SHIFT 5 +#define PALMAS_INT1_STATUS_HOTDIE_SHIFT 0x05 #define PALMAS_INT1_STATUS_PWRDOWN 0x10 -#define PALMAS_INT1_STATUS_PWRDOWN_SHIFT 4 +#define PALMAS_INT1_STATUS_PWRDOWN_SHIFT 0x04 #define PALMAS_INT1_STATUS_RPWRON 0x08 -#define PALMAS_INT1_STATUS_RPWRON_SHIFT 3 +#define PALMAS_INT1_STATUS_RPWRON_SHIFT 0x03 #define PALMAS_INT1_STATUS_LONG_PRESS_KEY 0x04 -#define PALMAS_INT1_STATUS_LONG_PRESS_KEY_SHIFT 2 +#define PALMAS_INT1_STATUS_LONG_PRESS_KEY_SHIFT 0x02 #define PALMAS_INT1_STATUS_PWRON 0x02 -#define PALMAS_INT1_STATUS_PWRON_SHIFT 1 +#define PALMAS_INT1_STATUS_PWRON_SHIFT 0x01 #define PALMAS_INT1_STATUS_CHARG_DET_N_VBUS_OVV 0x01 -#define PALMAS_INT1_STATUS_CHARG_DET_N_VBUS_OVV_SHIFT 0 +#define PALMAS_INT1_STATUS_CHARG_DET_N_VBUS_OVV_SHIFT 0x00 /* Bit definitions for INT1_MASK */ #define PALMAS_INT1_MASK_VBAT_MON 0x80 -#define PALMAS_INT1_MASK_VBAT_MON_SHIFT 7 +#define PALMAS_INT1_MASK_VBAT_MON_SHIFT 0x07 #define PALMAS_INT1_MASK_VSYS_MON 0x40 -#define PALMAS_INT1_MASK_VSYS_MON_SHIFT 6 +#define PALMAS_INT1_MASK_VSYS_MON_SHIFT 0x06 #define PALMAS_INT1_MASK_HOTDIE 0x20 -#define PALMAS_INT1_MASK_HOTDIE_SHIFT 5 +#define PALMAS_INT1_MASK_HOTDIE_SHIFT 0x05 #define PALMAS_INT1_MASK_PWRDOWN 0x10 -#define PALMAS_INT1_MASK_PWRDOWN_SHIFT 4 +#define PALMAS_INT1_MASK_PWRDOWN_SHIFT 0x04 #define PALMAS_INT1_MASK_RPWRON 0x08 -#define PALMAS_INT1_MASK_RPWRON_SHIFT 3 +#define PALMAS_INT1_MASK_RPWRON_SHIFT 0x03 #define PALMAS_INT1_MASK_LONG_PRESS_KEY 0x04 -#define PALMAS_INT1_MASK_LONG_PRESS_KEY_SHIFT 2 +#define PALMAS_INT1_MASK_LONG_PRESS_KEY_SHIFT 0x02 #define PALMAS_INT1_MASK_PWRON 0x02 -#define PALMAS_INT1_MASK_PWRON_SHIFT 1 +#define PALMAS_INT1_MASK_PWRON_SHIFT 0x01 #define PALMAS_INT1_MASK_CHARG_DET_N_VBUS_OVV 0x01 -#define PALMAS_INT1_MASK_CHARG_DET_N_VBUS_OVV_SHIFT 0 +#define PALMAS_INT1_MASK_CHARG_DET_N_VBUS_OVV_SHIFT 0x00 /* Bit definitions for INT1_LINE_STATE */ #define PALMAS_INT1_LINE_STATE_VBAT_MON 0x80 -#define PALMAS_INT1_LINE_STATE_VBAT_MON_SHIFT 7 +#define PALMAS_INT1_LINE_STATE_VBAT_MON_SHIFT 0x07 #define PALMAS_INT1_LINE_STATE_VSYS_MON 
0x40 -#define PALMAS_INT1_LINE_STATE_VSYS_MON_SHIFT 6 +#define PALMAS_INT1_LINE_STATE_VSYS_MON_SHIFT 0x06 #define PALMAS_INT1_LINE_STATE_HOTDIE 0x20 -#define PALMAS_INT1_LINE_STATE_HOTDIE_SHIFT 5 +#define PALMAS_INT1_LINE_STATE_HOTDIE_SHIFT 0x05 #define PALMAS_INT1_LINE_STATE_PWRDOWN 0x10 -#define PALMAS_INT1_LINE_STATE_PWRDOWN_SHIFT 4 +#define PALMAS_INT1_LINE_STATE_PWRDOWN_SHIFT 0x04 #define PALMAS_INT1_LINE_STATE_RPWRON 0x08 -#define PALMAS_INT1_LINE_STATE_RPWRON_SHIFT 3 +#define PALMAS_INT1_LINE_STATE_RPWRON_SHIFT 0x03 #define PALMAS_INT1_LINE_STATE_LONG_PRESS_KEY 0x04 -#define PALMAS_INT1_LINE_STATE_LONG_PRESS_KEY_SHIFT 2 +#define PALMAS_INT1_LINE_STATE_LONG_PRESS_KEY_SHIFT 0x02 #define PALMAS_INT1_LINE_STATE_PWRON 0x02 -#define PALMAS_INT1_LINE_STATE_PWRON_SHIFT 1 +#define PALMAS_INT1_LINE_STATE_PWRON_SHIFT 0x01 #define PALMAS_INT1_LINE_STATE_CHARG_DET_N_VBUS_OVV 0x01 -#define PALMAS_INT1_LINE_STATE_CHARG_DET_N_VBUS_OVV_SHIFT 0 +#define PALMAS_INT1_LINE_STATE_CHARG_DET_N_VBUS_OVV_SHIFT 0x00 /* Bit definitions for INT2_STATUS */ #define PALMAS_INT2_STATUS_VAC_ACOK 0x80 -#define PALMAS_INT2_STATUS_VAC_ACOK_SHIFT 7 +#define PALMAS_INT2_STATUS_VAC_ACOK_SHIFT 0x07 #define PALMAS_INT2_STATUS_SHORT 0x40 -#define PALMAS_INT2_STATUS_SHORT_SHIFT 6 +#define PALMAS_INT2_STATUS_SHORT_SHIFT 0x06 #define PALMAS_INT2_STATUS_FBI_BB 0x20 -#define PALMAS_INT2_STATUS_FBI_BB_SHIFT 5 +#define PALMAS_INT2_STATUS_FBI_BB_SHIFT 0x05 #define PALMAS_INT2_STATUS_RESET_IN 0x10 -#define PALMAS_INT2_STATUS_RESET_IN_SHIFT 4 +#define PALMAS_INT2_STATUS_RESET_IN_SHIFT 0x04 #define PALMAS_INT2_STATUS_BATREMOVAL 0x08 -#define PALMAS_INT2_STATUS_BATREMOVAL_SHIFT 3 +#define PALMAS_INT2_STATUS_BATREMOVAL_SHIFT 0x03 #define PALMAS_INT2_STATUS_WDT 0x04 -#define PALMAS_INT2_STATUS_WDT_SHIFT 2 +#define PALMAS_INT2_STATUS_WDT_SHIFT 0x02 #define PALMAS_INT2_STATUS_RTC_TIMER 0x02 -#define PALMAS_INT2_STATUS_RTC_TIMER_SHIFT 1 +#define PALMAS_INT2_STATUS_RTC_TIMER_SHIFT 0x01 #define PALMAS_INT2_STATUS_RTC_ALARM 0x01 -#define PALMAS_INT2_STATUS_RTC_ALARM_SHIFT 0 +#define PALMAS_INT2_STATUS_RTC_ALARM_SHIFT 0x00 /* Bit definitions for INT2_MASK */ #define PALMAS_INT2_MASK_VAC_ACOK 0x80 -#define PALMAS_INT2_MASK_VAC_ACOK_SHIFT 7 +#define PALMAS_INT2_MASK_VAC_ACOK_SHIFT 0x07 #define PALMAS_INT2_MASK_SHORT 0x40 -#define PALMAS_INT2_MASK_SHORT_SHIFT 6 +#define PALMAS_INT2_MASK_SHORT_SHIFT 0x06 #define PALMAS_INT2_MASK_FBI_BB 0x20 -#define PALMAS_INT2_MASK_FBI_BB_SHIFT 5 +#define PALMAS_INT2_MASK_FBI_BB_SHIFT 0x05 #define PALMAS_INT2_MASK_RESET_IN 0x10 -#define PALMAS_INT2_MASK_RESET_IN_SHIFT 4 +#define PALMAS_INT2_MASK_RESET_IN_SHIFT 0x04 #define PALMAS_INT2_MASK_BATREMOVAL 0x08 -#define PALMAS_INT2_MASK_BATREMOVAL_SHIFT 3 +#define PALMAS_INT2_MASK_BATREMOVAL_SHIFT 0x03 #define PALMAS_INT2_MASK_WDT 0x04 -#define PALMAS_INT2_MASK_WDT_SHIFT 2 +#define PALMAS_INT2_MASK_WDT_SHIFT 0x02 #define PALMAS_INT2_MASK_RTC_TIMER 0x02 -#define PALMAS_INT2_MASK_RTC_TIMER_SHIFT 1 +#define PALMAS_INT2_MASK_RTC_TIMER_SHIFT 0x01 #define PALMAS_INT2_MASK_RTC_ALARM 0x01 -#define PALMAS_INT2_MASK_RTC_ALARM_SHIFT 0 +#define PALMAS_INT2_MASK_RTC_ALARM_SHIFT 0x00 /* Bit definitions for INT2_LINE_STATE */ #define PALMAS_INT2_LINE_STATE_VAC_ACOK 0x80 -#define PALMAS_INT2_LINE_STATE_VAC_ACOK_SHIFT 7 +#define PALMAS_INT2_LINE_STATE_VAC_ACOK_SHIFT 0x07 #define PALMAS_INT2_LINE_STATE_SHORT 0x40 -#define PALMAS_INT2_LINE_STATE_SHORT_SHIFT 6 +#define PALMAS_INT2_LINE_STATE_SHORT_SHIFT 0x06 #define PALMAS_INT2_LINE_STATE_FBI_BB 0x20 -#define 
PALMAS_INT2_LINE_STATE_FBI_BB_SHIFT 5 +#define PALMAS_INT2_LINE_STATE_FBI_BB_SHIFT 0x05 #define PALMAS_INT2_LINE_STATE_RESET_IN 0x10 -#define PALMAS_INT2_LINE_STATE_RESET_IN_SHIFT 4 +#define PALMAS_INT2_LINE_STATE_RESET_IN_SHIFT 0x04 #define PALMAS_INT2_LINE_STATE_BATREMOVAL 0x08 -#define PALMAS_INT2_LINE_STATE_BATREMOVAL_SHIFT 3 +#define PALMAS_INT2_LINE_STATE_BATREMOVAL_SHIFT 0x03 #define PALMAS_INT2_LINE_STATE_WDT 0x04 -#define PALMAS_INT2_LINE_STATE_WDT_SHIFT 2 +#define PALMAS_INT2_LINE_STATE_WDT_SHIFT 0x02 #define PALMAS_INT2_LINE_STATE_RTC_TIMER 0x02 -#define PALMAS_INT2_LINE_STATE_RTC_TIMER_SHIFT 1 +#define PALMAS_INT2_LINE_STATE_RTC_TIMER_SHIFT 0x01 #define PALMAS_INT2_LINE_STATE_RTC_ALARM 0x01 -#define PALMAS_INT2_LINE_STATE_RTC_ALARM_SHIFT 0 +#define PALMAS_INT2_LINE_STATE_RTC_ALARM_SHIFT 0x00 /* Bit definitions for INT3_STATUS */ #define PALMAS_INT3_STATUS_VBUS 0x80 -#define PALMAS_INT3_STATUS_VBUS_SHIFT 7 +#define PALMAS_INT3_STATUS_VBUS_SHIFT 0x07 #define PALMAS_INT3_STATUS_VBUS_OTG 0x40 -#define PALMAS_INT3_STATUS_VBUS_OTG_SHIFT 6 +#define PALMAS_INT3_STATUS_VBUS_OTG_SHIFT 0x06 #define PALMAS_INT3_STATUS_ID 0x20 -#define PALMAS_INT3_STATUS_ID_SHIFT 5 +#define PALMAS_INT3_STATUS_ID_SHIFT 0x05 #define PALMAS_INT3_STATUS_ID_OTG 0x10 -#define PALMAS_INT3_STATUS_ID_OTG_SHIFT 4 +#define PALMAS_INT3_STATUS_ID_OTG_SHIFT 0x04 #define PALMAS_INT3_STATUS_GPADC_EOC_RT 0x08 -#define PALMAS_INT3_STATUS_GPADC_EOC_RT_SHIFT 3 +#define PALMAS_INT3_STATUS_GPADC_EOC_RT_SHIFT 0x03 #define PALMAS_INT3_STATUS_GPADC_EOC_SW 0x04 -#define PALMAS_INT3_STATUS_GPADC_EOC_SW_SHIFT 2 +#define PALMAS_INT3_STATUS_GPADC_EOC_SW_SHIFT 0x02 #define PALMAS_INT3_STATUS_GPADC_AUTO_1 0x02 -#define PALMAS_INT3_STATUS_GPADC_AUTO_1_SHIFT 1 +#define PALMAS_INT3_STATUS_GPADC_AUTO_1_SHIFT 0x01 #define PALMAS_INT3_STATUS_GPADC_AUTO_0 0x01 -#define PALMAS_INT3_STATUS_GPADC_AUTO_0_SHIFT 0 +#define PALMAS_INT3_STATUS_GPADC_AUTO_0_SHIFT 0x00 /* Bit definitions for INT3_MASK */ #define PALMAS_INT3_MASK_VBUS 0x80 -#define PALMAS_INT3_MASK_VBUS_SHIFT 7 +#define PALMAS_INT3_MASK_VBUS_SHIFT 0x07 #define PALMAS_INT3_MASK_VBUS_OTG 0x40 -#define PALMAS_INT3_MASK_VBUS_OTG_SHIFT 6 +#define PALMAS_INT3_MASK_VBUS_OTG_SHIFT 0x06 #define PALMAS_INT3_MASK_ID 0x20 -#define PALMAS_INT3_MASK_ID_SHIFT 5 +#define PALMAS_INT3_MASK_ID_SHIFT 0x05 #define PALMAS_INT3_MASK_ID_OTG 0x10 -#define PALMAS_INT3_MASK_ID_OTG_SHIFT 4 +#define PALMAS_INT3_MASK_ID_OTG_SHIFT 0x04 #define PALMAS_INT3_MASK_GPADC_EOC_RT 0x08 -#define PALMAS_INT3_MASK_GPADC_EOC_RT_SHIFT 3 +#define PALMAS_INT3_MASK_GPADC_EOC_RT_SHIFT 0x03 #define PALMAS_INT3_MASK_GPADC_EOC_SW 0x04 -#define PALMAS_INT3_MASK_GPADC_EOC_SW_SHIFT 2 +#define PALMAS_INT3_MASK_GPADC_EOC_SW_SHIFT 0x02 #define PALMAS_INT3_MASK_GPADC_AUTO_1 0x02 -#define PALMAS_INT3_MASK_GPADC_AUTO_1_SHIFT 1 +#define PALMAS_INT3_MASK_GPADC_AUTO_1_SHIFT 0x01 #define PALMAS_INT3_MASK_GPADC_AUTO_0 0x01 -#define PALMAS_INT3_MASK_GPADC_AUTO_0_SHIFT 0 +#define PALMAS_INT3_MASK_GPADC_AUTO_0_SHIFT 0x00 /* Bit definitions for INT3_LINE_STATE */ #define PALMAS_INT3_LINE_STATE_VBUS 0x80 -#define PALMAS_INT3_LINE_STATE_VBUS_SHIFT 7 +#define PALMAS_INT3_LINE_STATE_VBUS_SHIFT 0x07 #define PALMAS_INT3_LINE_STATE_VBUS_OTG 0x40 -#define PALMAS_INT3_LINE_STATE_VBUS_OTG_SHIFT 6 +#define PALMAS_INT3_LINE_STATE_VBUS_OTG_SHIFT 0x06 #define PALMAS_INT3_LINE_STATE_ID 0x20 -#define PALMAS_INT3_LINE_STATE_ID_SHIFT 5 +#define PALMAS_INT3_LINE_STATE_ID_SHIFT 0x05 #define PALMAS_INT3_LINE_STATE_ID_OTG 0x10 -#define PALMAS_INT3_LINE_STATE_ID_OTG_SHIFT 4 
+#define PALMAS_INT3_LINE_STATE_ID_OTG_SHIFT 0x04 #define PALMAS_INT3_LINE_STATE_GPADC_EOC_RT 0x08 -#define PALMAS_INT3_LINE_STATE_GPADC_EOC_RT_SHIFT 3 +#define PALMAS_INT3_LINE_STATE_GPADC_EOC_RT_SHIFT 0x03 #define PALMAS_INT3_LINE_STATE_GPADC_EOC_SW 0x04 -#define PALMAS_INT3_LINE_STATE_GPADC_EOC_SW_SHIFT 2 +#define PALMAS_INT3_LINE_STATE_GPADC_EOC_SW_SHIFT 0x02 #define PALMAS_INT3_LINE_STATE_GPADC_AUTO_1 0x02 -#define PALMAS_INT3_LINE_STATE_GPADC_AUTO_1_SHIFT 1 +#define PALMAS_INT3_LINE_STATE_GPADC_AUTO_1_SHIFT 0x01 #define PALMAS_INT3_LINE_STATE_GPADC_AUTO_0 0x01 -#define PALMAS_INT3_LINE_STATE_GPADC_AUTO_0_SHIFT 0 +#define PALMAS_INT3_LINE_STATE_GPADC_AUTO_0_SHIFT 0x00 /* Bit definitions for INT4_STATUS */ #define PALMAS_INT4_STATUS_GPIO_7 0x80 -#define PALMAS_INT4_STATUS_GPIO_7_SHIFT 7 +#define PALMAS_INT4_STATUS_GPIO_7_SHIFT 0x07 #define PALMAS_INT4_STATUS_GPIO_6 0x40 -#define PALMAS_INT4_STATUS_GPIO_6_SHIFT 6 +#define PALMAS_INT4_STATUS_GPIO_6_SHIFT 0x06 #define PALMAS_INT4_STATUS_GPIO_5 0x20 -#define PALMAS_INT4_STATUS_GPIO_5_SHIFT 5 +#define PALMAS_INT4_STATUS_GPIO_5_SHIFT 0x05 #define PALMAS_INT4_STATUS_GPIO_4 0x10 -#define PALMAS_INT4_STATUS_GPIO_4_SHIFT 4 +#define PALMAS_INT4_STATUS_GPIO_4_SHIFT 0x04 #define PALMAS_INT4_STATUS_GPIO_3 0x08 -#define PALMAS_INT4_STATUS_GPIO_3_SHIFT 3 +#define PALMAS_INT4_STATUS_GPIO_3_SHIFT 0x03 #define PALMAS_INT4_STATUS_GPIO_2 0x04 -#define PALMAS_INT4_STATUS_GPIO_2_SHIFT 2 +#define PALMAS_INT4_STATUS_GPIO_2_SHIFT 0x02 #define PALMAS_INT4_STATUS_GPIO_1 0x02 -#define PALMAS_INT4_STATUS_GPIO_1_SHIFT 1 +#define PALMAS_INT4_STATUS_GPIO_1_SHIFT 0x01 #define PALMAS_INT4_STATUS_GPIO_0 0x01 -#define PALMAS_INT4_STATUS_GPIO_0_SHIFT 0 +#define PALMAS_INT4_STATUS_GPIO_0_SHIFT 0x00 /* Bit definitions for INT4_MASK */ #define PALMAS_INT4_MASK_GPIO_7 0x80 -#define PALMAS_INT4_MASK_GPIO_7_SHIFT 7 +#define PALMAS_INT4_MASK_GPIO_7_SHIFT 0x07 #define PALMAS_INT4_MASK_GPIO_6 0x40 -#define PALMAS_INT4_MASK_GPIO_6_SHIFT 6 +#define PALMAS_INT4_MASK_GPIO_6_SHIFT 0x06 #define PALMAS_INT4_MASK_GPIO_5 0x20 -#define PALMAS_INT4_MASK_GPIO_5_SHIFT 5 +#define PALMAS_INT4_MASK_GPIO_5_SHIFT 0x05 #define PALMAS_INT4_MASK_GPIO_4 0x10 -#define PALMAS_INT4_MASK_GPIO_4_SHIFT 4 +#define PALMAS_INT4_MASK_GPIO_4_SHIFT 0x04 #define PALMAS_INT4_MASK_GPIO_3 0x08 -#define PALMAS_INT4_MASK_GPIO_3_SHIFT 3 +#define PALMAS_INT4_MASK_GPIO_3_SHIFT 0x03 #define PALMAS_INT4_MASK_GPIO_2 0x04 -#define PALMAS_INT4_MASK_GPIO_2_SHIFT 2 +#define PALMAS_INT4_MASK_GPIO_2_SHIFT 0x02 #define PALMAS_INT4_MASK_GPIO_1 0x02 -#define PALMAS_INT4_MASK_GPIO_1_SHIFT 1 +#define PALMAS_INT4_MASK_GPIO_1_SHIFT 0x01 #define PALMAS_INT4_MASK_GPIO_0 0x01 -#define PALMAS_INT4_MASK_GPIO_0_SHIFT 0 +#define PALMAS_INT4_MASK_GPIO_0_SHIFT 0x00 /* Bit definitions for INT4_LINE_STATE */ #define PALMAS_INT4_LINE_STATE_GPIO_7 0x80 -#define PALMAS_INT4_LINE_STATE_GPIO_7_SHIFT 7 +#define PALMAS_INT4_LINE_STATE_GPIO_7_SHIFT 0x07 #define PALMAS_INT4_LINE_STATE_GPIO_6 0x40 -#define PALMAS_INT4_LINE_STATE_GPIO_6_SHIFT 6 +#define PALMAS_INT4_LINE_STATE_GPIO_6_SHIFT 0x06 #define PALMAS_INT4_LINE_STATE_GPIO_5 0x20 -#define PALMAS_INT4_LINE_STATE_GPIO_5_SHIFT 5 +#define PALMAS_INT4_LINE_STATE_GPIO_5_SHIFT 0x05 #define PALMAS_INT4_LINE_STATE_GPIO_4 0x10 -#define PALMAS_INT4_LINE_STATE_GPIO_4_SHIFT 4 +#define PALMAS_INT4_LINE_STATE_GPIO_4_SHIFT 0x04 #define PALMAS_INT4_LINE_STATE_GPIO_3 0x08 -#define PALMAS_INT4_LINE_STATE_GPIO_3_SHIFT 3 +#define PALMAS_INT4_LINE_STATE_GPIO_3_SHIFT 0x03 #define PALMAS_INT4_LINE_STATE_GPIO_2 0x04 -#define 
PALMAS_INT4_LINE_STATE_GPIO_2_SHIFT 2 +#define PALMAS_INT4_LINE_STATE_GPIO_2_SHIFT 0x02 #define PALMAS_INT4_LINE_STATE_GPIO_1 0x02 -#define PALMAS_INT4_LINE_STATE_GPIO_1_SHIFT 1 +#define PALMAS_INT4_LINE_STATE_GPIO_1_SHIFT 0x01 #define PALMAS_INT4_LINE_STATE_GPIO_0 0x01 -#define PALMAS_INT4_LINE_STATE_GPIO_0_SHIFT 0 +#define PALMAS_INT4_LINE_STATE_GPIO_0_SHIFT 0x00 /* Bit definitions for INT4_EDGE_DETECT1 */ #define PALMAS_INT4_EDGE_DETECT1_GPIO_3_RISING 0x80 -#define PALMAS_INT4_EDGE_DETECT1_GPIO_3_RISING_SHIFT 7 +#define PALMAS_INT4_EDGE_DETECT1_GPIO_3_RISING_SHIFT 0x07 #define PALMAS_INT4_EDGE_DETECT1_GPIO_3_FALLING 0x40 -#define PALMAS_INT4_EDGE_DETECT1_GPIO_3_FALLING_SHIFT 6 +#define PALMAS_INT4_EDGE_DETECT1_GPIO_3_FALLING_SHIFT 0x06 #define PALMAS_INT4_EDGE_DETECT1_GPIO_2_RISING 0x20 -#define PALMAS_INT4_EDGE_DETECT1_GPIO_2_RISING_SHIFT 5 +#define PALMAS_INT4_EDGE_DETECT1_GPIO_2_RISING_SHIFT 0x05 #define PALMAS_INT4_EDGE_DETECT1_GPIO_2_FALLING 0x10 -#define PALMAS_INT4_EDGE_DETECT1_GPIO_2_FALLING_SHIFT 4 +#define PALMAS_INT4_EDGE_DETECT1_GPIO_2_FALLING_SHIFT 0x04 #define PALMAS_INT4_EDGE_DETECT1_GPIO_1_RISING 0x08 -#define PALMAS_INT4_EDGE_DETECT1_GPIO_1_RISING_SHIFT 3 +#define PALMAS_INT4_EDGE_DETECT1_GPIO_1_RISING_SHIFT 0x03 #define PALMAS_INT4_EDGE_DETECT1_GPIO_1_FALLING 0x04 -#define PALMAS_INT4_EDGE_DETECT1_GPIO_1_FALLING_SHIFT 2 +#define PALMAS_INT4_EDGE_DETECT1_GPIO_1_FALLING_SHIFT 0x02 #define PALMAS_INT4_EDGE_DETECT1_GPIO_0_RISING 0x02 -#define PALMAS_INT4_EDGE_DETECT1_GPIO_0_RISING_SHIFT 1 +#define PALMAS_INT4_EDGE_DETECT1_GPIO_0_RISING_SHIFT 0x01 #define PALMAS_INT4_EDGE_DETECT1_GPIO_0_FALLING 0x01 -#define PALMAS_INT4_EDGE_DETECT1_GPIO_0_FALLING_SHIFT 0 +#define PALMAS_INT4_EDGE_DETECT1_GPIO_0_FALLING_SHIFT 0x00 /* Bit definitions for INT4_EDGE_DETECT2 */ #define PALMAS_INT4_EDGE_DETECT2_GPIO_7_RISING 0x80 -#define PALMAS_INT4_EDGE_DETECT2_GPIO_7_RISING_SHIFT 7 +#define PALMAS_INT4_EDGE_DETECT2_GPIO_7_RISING_SHIFT 0x07 #define PALMAS_INT4_EDGE_DETECT2_GPIO_7_FALLING 0x40 -#define PALMAS_INT4_EDGE_DETECT2_GPIO_7_FALLING_SHIFT 6 +#define PALMAS_INT4_EDGE_DETECT2_GPIO_7_FALLING_SHIFT 0x06 #define PALMAS_INT4_EDGE_DETECT2_GPIO_6_RISING 0x20 -#define PALMAS_INT4_EDGE_DETECT2_GPIO_6_RISING_SHIFT 5 +#define PALMAS_INT4_EDGE_DETECT2_GPIO_6_RISING_SHIFT 0x05 #define PALMAS_INT4_EDGE_DETECT2_GPIO_6_FALLING 0x10 -#define PALMAS_INT4_EDGE_DETECT2_GPIO_6_FALLING_SHIFT 4 +#define PALMAS_INT4_EDGE_DETECT2_GPIO_6_FALLING_SHIFT 0x04 #define PALMAS_INT4_EDGE_DETECT2_GPIO_5_RISING 0x08 -#define PALMAS_INT4_EDGE_DETECT2_GPIO_5_RISING_SHIFT 3 +#define PALMAS_INT4_EDGE_DETECT2_GPIO_5_RISING_SHIFT 0x03 #define PALMAS_INT4_EDGE_DETECT2_GPIO_5_FALLING 0x04 -#define PALMAS_INT4_EDGE_DETECT2_GPIO_5_FALLING_SHIFT 2 +#define PALMAS_INT4_EDGE_DETECT2_GPIO_5_FALLING_SHIFT 0x02 #define PALMAS_INT4_EDGE_DETECT2_GPIO_4_RISING 0x02 -#define PALMAS_INT4_EDGE_DETECT2_GPIO_4_RISING_SHIFT 1 +#define PALMAS_INT4_EDGE_DETECT2_GPIO_4_RISING_SHIFT 0x01 #define PALMAS_INT4_EDGE_DETECT2_GPIO_4_FALLING 0x01 -#define PALMAS_INT4_EDGE_DETECT2_GPIO_4_FALLING_SHIFT 0 +#define PALMAS_INT4_EDGE_DETECT2_GPIO_4_FALLING_SHIFT 0x00 /* Bit definitions for INT_CTRL */ #define PALMAS_INT_CTRL_INT_PENDING 0x04 -#define PALMAS_INT_CTRL_INT_PENDING_SHIFT 2 +#define PALMAS_INT_CTRL_INT_PENDING_SHIFT 0x02 #define PALMAS_INT_CTRL_INT_CLEAR 0x01 -#define PALMAS_INT_CTRL_INT_CLEAR_SHIFT 0 +#define PALMAS_INT_CTRL_INT_CLEAR_SHIFT 0x00 /* Registers for function USB_OTG */ -#define PALMAS_USB_WAKEUP 0x3 -#define PALMAS_USB_VBUS_CTRL_SET 0x4 
-#define PALMAS_USB_VBUS_CTRL_CLR 0x5 -#define PALMAS_USB_ID_CTRL_SET 0x6 -#define PALMAS_USB_ID_CTRL_CLEAR 0x7 -#define PALMAS_USB_VBUS_INT_SRC 0x8 -#define PALMAS_USB_VBUS_INT_LATCH_SET 0x9 -#define PALMAS_USB_VBUS_INT_LATCH_CLR 0xA -#define PALMAS_USB_VBUS_INT_EN_LO_SET 0xB -#define PALMAS_USB_VBUS_INT_EN_LO_CLR 0xC -#define PALMAS_USB_VBUS_INT_EN_HI_SET 0xD -#define PALMAS_USB_VBUS_INT_EN_HI_CLR 0xE -#define PALMAS_USB_ID_INT_SRC 0xF +#define PALMAS_USB_WAKEUP 0x03 +#define PALMAS_USB_VBUS_CTRL_SET 0x04 +#define PALMAS_USB_VBUS_CTRL_CLR 0x05 +#define PALMAS_USB_ID_CTRL_SET 0x06 +#define PALMAS_USB_ID_CTRL_CLEAR 0x07 +#define PALMAS_USB_VBUS_INT_SRC 0x08 +#define PALMAS_USB_VBUS_INT_LATCH_SET 0x09 +#define PALMAS_USB_VBUS_INT_LATCH_CLR 0x0A +#define PALMAS_USB_VBUS_INT_EN_LO_SET 0x0B +#define PALMAS_USB_VBUS_INT_EN_LO_CLR 0x0C +#define PALMAS_USB_VBUS_INT_EN_HI_SET 0x0D +#define PALMAS_USB_VBUS_INT_EN_HI_CLR 0x0E +#define PALMAS_USB_ID_INT_SRC 0x0F #define PALMAS_USB_ID_INT_LATCH_SET 0x10 #define PALMAS_USB_ID_INT_LATCH_CLR 0x11 #define PALMAS_USB_ID_INT_EN_LO_SET 0x12 @@ -2250,306 +2250,306 @@ enum usb_irq_events { /* Bit definitions for USB_WAKEUP */ #define PALMAS_USB_WAKEUP_ID_WK_UP_COMP 0x01 -#define PALMAS_USB_WAKEUP_ID_WK_UP_COMP_SHIFT 0 +#define PALMAS_USB_WAKEUP_ID_WK_UP_COMP_SHIFT 0x00 /* Bit definitions for USB_VBUS_CTRL_SET */ #define PALMAS_USB_VBUS_CTRL_SET_VBUS_CHRG_VSYS 0x80 -#define PALMAS_USB_VBUS_CTRL_SET_VBUS_CHRG_VSYS_SHIFT 7 +#define PALMAS_USB_VBUS_CTRL_SET_VBUS_CHRG_VSYS_SHIFT 0x07 #define PALMAS_USB_VBUS_CTRL_SET_VBUS_DISCHRG 0x20 -#define PALMAS_USB_VBUS_CTRL_SET_VBUS_DISCHRG_SHIFT 5 +#define PALMAS_USB_VBUS_CTRL_SET_VBUS_DISCHRG_SHIFT 0x05 #define PALMAS_USB_VBUS_CTRL_SET_VBUS_IADP_SRC 0x10 -#define PALMAS_USB_VBUS_CTRL_SET_VBUS_IADP_SRC_SHIFT 4 +#define PALMAS_USB_VBUS_CTRL_SET_VBUS_IADP_SRC_SHIFT 0x04 #define PALMAS_USB_VBUS_CTRL_SET_VBUS_IADP_SINK 0x08 -#define PALMAS_USB_VBUS_CTRL_SET_VBUS_IADP_SINK_SHIFT 3 +#define PALMAS_USB_VBUS_CTRL_SET_VBUS_IADP_SINK_SHIFT 0x03 #define PALMAS_USB_VBUS_CTRL_SET_VBUS_ACT_COMP 0x04 -#define PALMAS_USB_VBUS_CTRL_SET_VBUS_ACT_COMP_SHIFT 2 +#define PALMAS_USB_VBUS_CTRL_SET_VBUS_ACT_COMP_SHIFT 0x02 /* Bit definitions for USB_VBUS_CTRL_CLR */ #define PALMAS_USB_VBUS_CTRL_CLR_VBUS_CHRG_VSYS 0x80 -#define PALMAS_USB_VBUS_CTRL_CLR_VBUS_CHRG_VSYS_SHIFT 7 +#define PALMAS_USB_VBUS_CTRL_CLR_VBUS_CHRG_VSYS_SHIFT 0x07 #define PALMAS_USB_VBUS_CTRL_CLR_VBUS_DISCHRG 0x20 -#define PALMAS_USB_VBUS_CTRL_CLR_VBUS_DISCHRG_SHIFT 5 +#define PALMAS_USB_VBUS_CTRL_CLR_VBUS_DISCHRG_SHIFT 0x05 #define PALMAS_USB_VBUS_CTRL_CLR_VBUS_IADP_SRC 0x10 -#define PALMAS_USB_VBUS_CTRL_CLR_VBUS_IADP_SRC_SHIFT 4 +#define PALMAS_USB_VBUS_CTRL_CLR_VBUS_IADP_SRC_SHIFT 0x04 #define PALMAS_USB_VBUS_CTRL_CLR_VBUS_IADP_SINK 0x08 -#define PALMAS_USB_VBUS_CTRL_CLR_VBUS_IADP_SINK_SHIFT 3 +#define PALMAS_USB_VBUS_CTRL_CLR_VBUS_IADP_SINK_SHIFT 0x03 #define PALMAS_USB_VBUS_CTRL_CLR_VBUS_ACT_COMP 0x04 -#define PALMAS_USB_VBUS_CTRL_CLR_VBUS_ACT_COMP_SHIFT 2 +#define PALMAS_USB_VBUS_CTRL_CLR_VBUS_ACT_COMP_SHIFT 0x02 /* Bit definitions for USB_ID_CTRL_SET */ #define PALMAS_USB_ID_CTRL_SET_ID_PU_220K 0x80 -#define PALMAS_USB_ID_CTRL_SET_ID_PU_220K_SHIFT 7 +#define PALMAS_USB_ID_CTRL_SET_ID_PU_220K_SHIFT 0x07 #define PALMAS_USB_ID_CTRL_SET_ID_PU_100K 0x40 -#define PALMAS_USB_ID_CTRL_SET_ID_PU_100K_SHIFT 6 +#define PALMAS_USB_ID_CTRL_SET_ID_PU_100K_SHIFT 0x06 #define PALMAS_USB_ID_CTRL_SET_ID_GND_DRV 0x20 -#define PALMAS_USB_ID_CTRL_SET_ID_GND_DRV_SHIFT 5 +#define 
PALMAS_USB_ID_CTRL_SET_ID_GND_DRV_SHIFT 0x05 #define PALMAS_USB_ID_CTRL_SET_ID_SRC_16U 0x10 -#define PALMAS_USB_ID_CTRL_SET_ID_SRC_16U_SHIFT 4 +#define PALMAS_USB_ID_CTRL_SET_ID_SRC_16U_SHIFT 0x04 #define PALMAS_USB_ID_CTRL_SET_ID_SRC_5U 0x08 -#define PALMAS_USB_ID_CTRL_SET_ID_SRC_5U_SHIFT 3 +#define PALMAS_USB_ID_CTRL_SET_ID_SRC_5U_SHIFT 0x03 #define PALMAS_USB_ID_CTRL_SET_ID_ACT_COMP 0x04 -#define PALMAS_USB_ID_CTRL_SET_ID_ACT_COMP_SHIFT 2 +#define PALMAS_USB_ID_CTRL_SET_ID_ACT_COMP_SHIFT 0x02 /* Bit definitions for USB_ID_CTRL_CLEAR */ #define PALMAS_USB_ID_CTRL_CLEAR_ID_PU_220K 0x80 -#define PALMAS_USB_ID_CTRL_CLEAR_ID_PU_220K_SHIFT 7 +#define PALMAS_USB_ID_CTRL_CLEAR_ID_PU_220K_SHIFT 0x07 #define PALMAS_USB_ID_CTRL_CLEAR_ID_PU_100K 0x40 -#define PALMAS_USB_ID_CTRL_CLEAR_ID_PU_100K_SHIFT 6 +#define PALMAS_USB_ID_CTRL_CLEAR_ID_PU_100K_SHIFT 0x06 #define PALMAS_USB_ID_CTRL_CLEAR_ID_GND_DRV 0x20 -#define PALMAS_USB_ID_CTRL_CLEAR_ID_GND_DRV_SHIFT 5 +#define PALMAS_USB_ID_CTRL_CLEAR_ID_GND_DRV_SHIFT 0x05 #define PALMAS_USB_ID_CTRL_CLEAR_ID_SRC_16U 0x10 -#define PALMAS_USB_ID_CTRL_CLEAR_ID_SRC_16U_SHIFT 4 +#define PALMAS_USB_ID_CTRL_CLEAR_ID_SRC_16U_SHIFT 0x04 #define PALMAS_USB_ID_CTRL_CLEAR_ID_SRC_5U 0x08 -#define PALMAS_USB_ID_CTRL_CLEAR_ID_SRC_5U_SHIFT 3 +#define PALMAS_USB_ID_CTRL_CLEAR_ID_SRC_5U_SHIFT 0x03 #define PALMAS_USB_ID_CTRL_CLEAR_ID_ACT_COMP 0x04 -#define PALMAS_USB_ID_CTRL_CLEAR_ID_ACT_COMP_SHIFT 2 +#define PALMAS_USB_ID_CTRL_CLEAR_ID_ACT_COMP_SHIFT 0x02 /* Bit definitions for USB_VBUS_INT_SRC */ #define PALMAS_USB_VBUS_INT_SRC_VOTG_SESS_VLD 0x80 -#define PALMAS_USB_VBUS_INT_SRC_VOTG_SESS_VLD_SHIFT 7 +#define PALMAS_USB_VBUS_INT_SRC_VOTG_SESS_VLD_SHIFT 0x07 #define PALMAS_USB_VBUS_INT_SRC_VADP_PRB 0x40 -#define PALMAS_USB_VBUS_INT_SRC_VADP_PRB_SHIFT 6 +#define PALMAS_USB_VBUS_INT_SRC_VADP_PRB_SHIFT 0x06 #define PALMAS_USB_VBUS_INT_SRC_VADP_SNS 0x20 -#define PALMAS_USB_VBUS_INT_SRC_VADP_SNS_SHIFT 5 +#define PALMAS_USB_VBUS_INT_SRC_VADP_SNS_SHIFT 0x05 #define PALMAS_USB_VBUS_INT_SRC_VA_VBUS_VLD 0x08 -#define PALMAS_USB_VBUS_INT_SRC_VA_VBUS_VLD_SHIFT 3 +#define PALMAS_USB_VBUS_INT_SRC_VA_VBUS_VLD_SHIFT 0x03 #define PALMAS_USB_VBUS_INT_SRC_VA_SESS_VLD 0x04 -#define PALMAS_USB_VBUS_INT_SRC_VA_SESS_VLD_SHIFT 2 +#define PALMAS_USB_VBUS_INT_SRC_VA_SESS_VLD_SHIFT 0x02 #define PALMAS_USB_VBUS_INT_SRC_VB_SESS_VLD 0x02 -#define PALMAS_USB_VBUS_INT_SRC_VB_SESS_VLD_SHIFT 1 +#define PALMAS_USB_VBUS_INT_SRC_VB_SESS_VLD_SHIFT 0x01 #define PALMAS_USB_VBUS_INT_SRC_VB_SESS_END 0x01 -#define PALMAS_USB_VBUS_INT_SRC_VB_SESS_END_SHIFT 0 +#define PALMAS_USB_VBUS_INT_SRC_VB_SESS_END_SHIFT 0x00 /* Bit definitions for USB_VBUS_INT_LATCH_SET */ #define PALMAS_USB_VBUS_INT_LATCH_SET_VOTG_SESS_VLD 0x80 -#define PALMAS_USB_VBUS_INT_LATCH_SET_VOTG_SESS_VLD_SHIFT 7 +#define PALMAS_USB_VBUS_INT_LATCH_SET_VOTG_SESS_VLD_SHIFT 0x07 #define PALMAS_USB_VBUS_INT_LATCH_SET_VADP_PRB 0x40 -#define PALMAS_USB_VBUS_INT_LATCH_SET_VADP_PRB_SHIFT 6 +#define PALMAS_USB_VBUS_INT_LATCH_SET_VADP_PRB_SHIFT 0x06 #define PALMAS_USB_VBUS_INT_LATCH_SET_VADP_SNS 0x20 -#define PALMAS_USB_VBUS_INT_LATCH_SET_VADP_SNS_SHIFT 5 +#define PALMAS_USB_VBUS_INT_LATCH_SET_VADP_SNS_SHIFT 0x05 #define PALMAS_USB_VBUS_INT_LATCH_SET_ADP 0x10 -#define PALMAS_USB_VBUS_INT_LATCH_SET_ADP_SHIFT 4 +#define PALMAS_USB_VBUS_INT_LATCH_SET_ADP_SHIFT 0x04 #define PALMAS_USB_VBUS_INT_LATCH_SET_VA_VBUS_VLD 0x08 -#define PALMAS_USB_VBUS_INT_LATCH_SET_VA_VBUS_VLD_SHIFT 3 +#define PALMAS_USB_VBUS_INT_LATCH_SET_VA_VBUS_VLD_SHIFT 0x03 #define 
PALMAS_USB_VBUS_INT_LATCH_SET_VA_SESS_VLD 0x04 -#define PALMAS_USB_VBUS_INT_LATCH_SET_VA_SESS_VLD_SHIFT 2 +#define PALMAS_USB_VBUS_INT_LATCH_SET_VA_SESS_VLD_SHIFT 0x02 #define PALMAS_USB_VBUS_INT_LATCH_SET_VB_SESS_VLD 0x02 -#define PALMAS_USB_VBUS_INT_LATCH_SET_VB_SESS_VLD_SHIFT 1 +#define PALMAS_USB_VBUS_INT_LATCH_SET_VB_SESS_VLD_SHIFT 0x01 #define PALMAS_USB_VBUS_INT_LATCH_SET_VB_SESS_END 0x01 -#define PALMAS_USB_VBUS_INT_LATCH_SET_VB_SESS_END_SHIFT 0 +#define PALMAS_USB_VBUS_INT_LATCH_SET_VB_SESS_END_SHIFT 0x00 /* Bit definitions for USB_VBUS_INT_LATCH_CLR */ #define PALMAS_USB_VBUS_INT_LATCH_CLR_VOTG_SESS_VLD 0x80 -#define PALMAS_USB_VBUS_INT_LATCH_CLR_VOTG_SESS_VLD_SHIFT 7 +#define PALMAS_USB_VBUS_INT_LATCH_CLR_VOTG_SESS_VLD_SHIFT 0x07 #define PALMAS_USB_VBUS_INT_LATCH_CLR_VADP_PRB 0x40 -#define PALMAS_USB_VBUS_INT_LATCH_CLR_VADP_PRB_SHIFT 6 +#define PALMAS_USB_VBUS_INT_LATCH_CLR_VADP_PRB_SHIFT 0x06 #define PALMAS_USB_VBUS_INT_LATCH_CLR_VADP_SNS 0x20 -#define PALMAS_USB_VBUS_INT_LATCH_CLR_VADP_SNS_SHIFT 5 +#define PALMAS_USB_VBUS_INT_LATCH_CLR_VADP_SNS_SHIFT 0x05 #define PALMAS_USB_VBUS_INT_LATCH_CLR_ADP 0x10 -#define PALMAS_USB_VBUS_INT_LATCH_CLR_ADP_SHIFT 4 +#define PALMAS_USB_VBUS_INT_LATCH_CLR_ADP_SHIFT 0x04 #define PALMAS_USB_VBUS_INT_LATCH_CLR_VA_VBUS_VLD 0x08 -#define PALMAS_USB_VBUS_INT_LATCH_CLR_VA_VBUS_VLD_SHIFT 3 +#define PALMAS_USB_VBUS_INT_LATCH_CLR_VA_VBUS_VLD_SHIFT 0x03 #define PALMAS_USB_VBUS_INT_LATCH_CLR_VA_SESS_VLD 0x04 -#define PALMAS_USB_VBUS_INT_LATCH_CLR_VA_SESS_VLD_SHIFT 2 +#define PALMAS_USB_VBUS_INT_LATCH_CLR_VA_SESS_VLD_SHIFT 0x02 #define PALMAS_USB_VBUS_INT_LATCH_CLR_VB_SESS_VLD 0x02 -#define PALMAS_USB_VBUS_INT_LATCH_CLR_VB_SESS_VLD_SHIFT 1 +#define PALMAS_USB_VBUS_INT_LATCH_CLR_VB_SESS_VLD_SHIFT 0x01 #define PALMAS_USB_VBUS_INT_LATCH_CLR_VB_SESS_END 0x01 -#define PALMAS_USB_VBUS_INT_LATCH_CLR_VB_SESS_END_SHIFT 0 +#define PALMAS_USB_VBUS_INT_LATCH_CLR_VB_SESS_END_SHIFT 0x00 /* Bit definitions for USB_VBUS_INT_EN_LO_SET */ #define PALMAS_USB_VBUS_INT_EN_LO_SET_VOTG_SESS_VLD 0x80 -#define PALMAS_USB_VBUS_INT_EN_LO_SET_VOTG_SESS_VLD_SHIFT 7 +#define PALMAS_USB_VBUS_INT_EN_LO_SET_VOTG_SESS_VLD_SHIFT 0x07 #define PALMAS_USB_VBUS_INT_EN_LO_SET_VADP_PRB 0x40 -#define PALMAS_USB_VBUS_INT_EN_LO_SET_VADP_PRB_SHIFT 6 +#define PALMAS_USB_VBUS_INT_EN_LO_SET_VADP_PRB_SHIFT 0x06 #define PALMAS_USB_VBUS_INT_EN_LO_SET_VADP_SNS 0x20 -#define PALMAS_USB_VBUS_INT_EN_LO_SET_VADP_SNS_SHIFT 5 +#define PALMAS_USB_VBUS_INT_EN_LO_SET_VADP_SNS_SHIFT 0x05 #define PALMAS_USB_VBUS_INT_EN_LO_SET_VA_VBUS_VLD 0x08 -#define PALMAS_USB_VBUS_INT_EN_LO_SET_VA_VBUS_VLD_SHIFT 3 +#define PALMAS_USB_VBUS_INT_EN_LO_SET_VA_VBUS_VLD_SHIFT 0x03 #define PALMAS_USB_VBUS_INT_EN_LO_SET_VA_SESS_VLD 0x04 -#define PALMAS_USB_VBUS_INT_EN_LO_SET_VA_SESS_VLD_SHIFT 2 +#define PALMAS_USB_VBUS_INT_EN_LO_SET_VA_SESS_VLD_SHIFT 0x02 #define PALMAS_USB_VBUS_INT_EN_LO_SET_VB_SESS_VLD 0x02 -#define PALMAS_USB_VBUS_INT_EN_LO_SET_VB_SESS_VLD_SHIFT 1 +#define PALMAS_USB_VBUS_INT_EN_LO_SET_VB_SESS_VLD_SHIFT 0x01 #define PALMAS_USB_VBUS_INT_EN_LO_SET_VB_SESS_END 0x01 -#define PALMAS_USB_VBUS_INT_EN_LO_SET_VB_SESS_END_SHIFT 0 +#define PALMAS_USB_VBUS_INT_EN_LO_SET_VB_SESS_END_SHIFT 0x00 /* Bit definitions for USB_VBUS_INT_EN_LO_CLR */ #define PALMAS_USB_VBUS_INT_EN_LO_CLR_VOTG_SESS_VLD 0x80 -#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VOTG_SESS_VLD_SHIFT 7 +#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VOTG_SESS_VLD_SHIFT 0x07 #define PALMAS_USB_VBUS_INT_EN_LO_CLR_VADP_PRB 0x40 -#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VADP_PRB_SHIFT 6 
+#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VADP_PRB_SHIFT 0x06 #define PALMAS_USB_VBUS_INT_EN_LO_CLR_VADP_SNS 0x20 -#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VADP_SNS_SHIFT 5 +#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VADP_SNS_SHIFT 0x05 #define PALMAS_USB_VBUS_INT_EN_LO_CLR_VA_VBUS_VLD 0x08 -#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VA_VBUS_VLD_SHIFT 3 +#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VA_VBUS_VLD_SHIFT 0x03 #define PALMAS_USB_VBUS_INT_EN_LO_CLR_VA_SESS_VLD 0x04 -#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VA_SESS_VLD_SHIFT 2 +#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VA_SESS_VLD_SHIFT 0x02 #define PALMAS_USB_VBUS_INT_EN_LO_CLR_VB_SESS_VLD 0x02 -#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VB_SESS_VLD_SHIFT 1 +#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VB_SESS_VLD_SHIFT 0x01 #define PALMAS_USB_VBUS_INT_EN_LO_CLR_VB_SESS_END 0x01 -#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VB_SESS_END_SHIFT 0 +#define PALMAS_USB_VBUS_INT_EN_LO_CLR_VB_SESS_END_SHIFT 0x00 /* Bit definitions for USB_VBUS_INT_EN_HI_SET */ #define PALMAS_USB_VBUS_INT_EN_HI_SET_VOTG_SESS_VLD 0x80 -#define PALMAS_USB_VBUS_INT_EN_HI_SET_VOTG_SESS_VLD_SHIFT 7 +#define PALMAS_USB_VBUS_INT_EN_HI_SET_VOTG_SESS_VLD_SHIFT 0x07 #define PALMAS_USB_VBUS_INT_EN_HI_SET_VADP_PRB 0x40 -#define PALMAS_USB_VBUS_INT_EN_HI_SET_VADP_PRB_SHIFT 6 +#define PALMAS_USB_VBUS_INT_EN_HI_SET_VADP_PRB_SHIFT 0x06 #define PALMAS_USB_VBUS_INT_EN_HI_SET_VADP_SNS 0x20 -#define PALMAS_USB_VBUS_INT_EN_HI_SET_VADP_SNS_SHIFT 5 +#define PALMAS_USB_VBUS_INT_EN_HI_SET_VADP_SNS_SHIFT 0x05 #define PALMAS_USB_VBUS_INT_EN_HI_SET_ADP 0x10 -#define PALMAS_USB_VBUS_INT_EN_HI_SET_ADP_SHIFT 4 +#define PALMAS_USB_VBUS_INT_EN_HI_SET_ADP_SHIFT 0x04 #define PALMAS_USB_VBUS_INT_EN_HI_SET_VA_VBUS_VLD 0x08 -#define PALMAS_USB_VBUS_INT_EN_HI_SET_VA_VBUS_VLD_SHIFT 3 +#define PALMAS_USB_VBUS_INT_EN_HI_SET_VA_VBUS_VLD_SHIFT 0x03 #define PALMAS_USB_VBUS_INT_EN_HI_SET_VA_SESS_VLD 0x04 -#define PALMAS_USB_VBUS_INT_EN_HI_SET_VA_SESS_VLD_SHIFT 2 +#define PALMAS_USB_VBUS_INT_EN_HI_SET_VA_SESS_VLD_SHIFT 0x02 #define PALMAS_USB_VBUS_INT_EN_HI_SET_VB_SESS_VLD 0x02 -#define PALMAS_USB_VBUS_INT_EN_HI_SET_VB_SESS_VLD_SHIFT 1 +#define PALMAS_USB_VBUS_INT_EN_HI_SET_VB_SESS_VLD_SHIFT 0x01 #define PALMAS_USB_VBUS_INT_EN_HI_SET_VB_SESS_END 0x01 -#define PALMAS_USB_VBUS_INT_EN_HI_SET_VB_SESS_END_SHIFT 0 +#define PALMAS_USB_VBUS_INT_EN_HI_SET_VB_SESS_END_SHIFT 0x00 /* Bit definitions for USB_VBUS_INT_EN_HI_CLR */ #define PALMAS_USB_VBUS_INT_EN_HI_CLR_VOTG_SESS_VLD 0x80 -#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VOTG_SESS_VLD_SHIFT 7 +#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VOTG_SESS_VLD_SHIFT 0x07 #define PALMAS_USB_VBUS_INT_EN_HI_CLR_VADP_PRB 0x40 -#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VADP_PRB_SHIFT 6 +#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VADP_PRB_SHIFT 0x06 #define PALMAS_USB_VBUS_INT_EN_HI_CLR_VADP_SNS 0x20 -#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VADP_SNS_SHIFT 5 +#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VADP_SNS_SHIFT 0x05 #define PALMAS_USB_VBUS_INT_EN_HI_CLR_ADP 0x10 -#define PALMAS_USB_VBUS_INT_EN_HI_CLR_ADP_SHIFT 4 +#define PALMAS_USB_VBUS_INT_EN_HI_CLR_ADP_SHIFT 0x04 #define PALMAS_USB_VBUS_INT_EN_HI_CLR_VA_VBUS_VLD 0x08 -#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VA_VBUS_VLD_SHIFT 3 +#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VA_VBUS_VLD_SHIFT 0x03 #define PALMAS_USB_VBUS_INT_EN_HI_CLR_VA_SESS_VLD 0x04 -#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VA_SESS_VLD_SHIFT 2 +#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VA_SESS_VLD_SHIFT 0x02 #define PALMAS_USB_VBUS_INT_EN_HI_CLR_VB_SESS_VLD 0x02 -#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VB_SESS_VLD_SHIFT 1 +#define 
PALMAS_USB_VBUS_INT_EN_HI_CLR_VB_SESS_VLD_SHIFT 0x01 #define PALMAS_USB_VBUS_INT_EN_HI_CLR_VB_SESS_END 0x01 -#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VB_SESS_END_SHIFT 0 +#define PALMAS_USB_VBUS_INT_EN_HI_CLR_VB_SESS_END_SHIFT 0x00 /* Bit definitions for USB_ID_INT_SRC */ #define PALMAS_USB_ID_INT_SRC_ID_FLOAT 0x10 -#define PALMAS_USB_ID_INT_SRC_ID_FLOAT_SHIFT 4 +#define PALMAS_USB_ID_INT_SRC_ID_FLOAT_SHIFT 0x04 #define PALMAS_USB_ID_INT_SRC_ID_A 0x08 -#define PALMAS_USB_ID_INT_SRC_ID_A_SHIFT 3 +#define PALMAS_USB_ID_INT_SRC_ID_A_SHIFT 0x03 #define PALMAS_USB_ID_INT_SRC_ID_B 0x04 -#define PALMAS_USB_ID_INT_SRC_ID_B_SHIFT 2 +#define PALMAS_USB_ID_INT_SRC_ID_B_SHIFT 0x02 #define PALMAS_USB_ID_INT_SRC_ID_C 0x02 -#define PALMAS_USB_ID_INT_SRC_ID_C_SHIFT 1 +#define PALMAS_USB_ID_INT_SRC_ID_C_SHIFT 0x01 #define PALMAS_USB_ID_INT_SRC_ID_GND 0x01 -#define PALMAS_USB_ID_INT_SRC_ID_GND_SHIFT 0 +#define PALMAS_USB_ID_INT_SRC_ID_GND_SHIFT 0x00 /* Bit definitions for USB_ID_INT_LATCH_SET */ #define PALMAS_USB_ID_INT_LATCH_SET_ID_FLOAT 0x10 -#define PALMAS_USB_ID_INT_LATCH_SET_ID_FLOAT_SHIFT 4 +#define PALMAS_USB_ID_INT_LATCH_SET_ID_FLOAT_SHIFT 0x04 #define PALMAS_USB_ID_INT_LATCH_SET_ID_A 0x08 -#define PALMAS_USB_ID_INT_LATCH_SET_ID_A_SHIFT 3 +#define PALMAS_USB_ID_INT_LATCH_SET_ID_A_SHIFT 0x03 #define PALMAS_USB_ID_INT_LATCH_SET_ID_B 0x04 -#define PALMAS_USB_ID_INT_LATCH_SET_ID_B_SHIFT 2 +#define PALMAS_USB_ID_INT_LATCH_SET_ID_B_SHIFT 0x02 #define PALMAS_USB_ID_INT_LATCH_SET_ID_C 0x02 -#define PALMAS_USB_ID_INT_LATCH_SET_ID_C_SHIFT 1 +#define PALMAS_USB_ID_INT_LATCH_SET_ID_C_SHIFT 0x01 #define PALMAS_USB_ID_INT_LATCH_SET_ID_GND 0x01 -#define PALMAS_USB_ID_INT_LATCH_SET_ID_GND_SHIFT 0 +#define PALMAS_USB_ID_INT_LATCH_SET_ID_GND_SHIFT 0x00 /* Bit definitions for USB_ID_INT_LATCH_CLR */ #define PALMAS_USB_ID_INT_LATCH_CLR_ID_FLOAT 0x10 -#define PALMAS_USB_ID_INT_LATCH_CLR_ID_FLOAT_SHIFT 4 +#define PALMAS_USB_ID_INT_LATCH_CLR_ID_FLOAT_SHIFT 0x04 #define PALMAS_USB_ID_INT_LATCH_CLR_ID_A 0x08 -#define PALMAS_USB_ID_INT_LATCH_CLR_ID_A_SHIFT 3 +#define PALMAS_USB_ID_INT_LATCH_CLR_ID_A_SHIFT 0x03 #define PALMAS_USB_ID_INT_LATCH_CLR_ID_B 0x04 -#define PALMAS_USB_ID_INT_LATCH_CLR_ID_B_SHIFT 2 +#define PALMAS_USB_ID_INT_LATCH_CLR_ID_B_SHIFT 0x02 #define PALMAS_USB_ID_INT_LATCH_CLR_ID_C 0x02 -#define PALMAS_USB_ID_INT_LATCH_CLR_ID_C_SHIFT 1 +#define PALMAS_USB_ID_INT_LATCH_CLR_ID_C_SHIFT 0x01 #define PALMAS_USB_ID_INT_LATCH_CLR_ID_GND 0x01 -#define PALMAS_USB_ID_INT_LATCH_CLR_ID_GND_SHIFT 0 +#define PALMAS_USB_ID_INT_LATCH_CLR_ID_GND_SHIFT 0x00 /* Bit definitions for USB_ID_INT_EN_LO_SET */ #define PALMAS_USB_ID_INT_EN_LO_SET_ID_FLOAT 0x10 -#define PALMAS_USB_ID_INT_EN_LO_SET_ID_FLOAT_SHIFT 4 +#define PALMAS_USB_ID_INT_EN_LO_SET_ID_FLOAT_SHIFT 0x04 #define PALMAS_USB_ID_INT_EN_LO_SET_ID_A 0x08 -#define PALMAS_USB_ID_INT_EN_LO_SET_ID_A_SHIFT 3 +#define PALMAS_USB_ID_INT_EN_LO_SET_ID_A_SHIFT 0x03 #define PALMAS_USB_ID_INT_EN_LO_SET_ID_B 0x04 -#define PALMAS_USB_ID_INT_EN_LO_SET_ID_B_SHIFT 2 +#define PALMAS_USB_ID_INT_EN_LO_SET_ID_B_SHIFT 0x02 #define PALMAS_USB_ID_INT_EN_LO_SET_ID_C 0x02 -#define PALMAS_USB_ID_INT_EN_LO_SET_ID_C_SHIFT 1 +#define PALMAS_USB_ID_INT_EN_LO_SET_ID_C_SHIFT 0x01 #define PALMAS_USB_ID_INT_EN_LO_SET_ID_GND 0x01 -#define PALMAS_USB_ID_INT_EN_LO_SET_ID_GND_SHIFT 0 +#define PALMAS_USB_ID_INT_EN_LO_SET_ID_GND_SHIFT 0x00 /* Bit definitions for USB_ID_INT_EN_LO_CLR */ #define PALMAS_USB_ID_INT_EN_LO_CLR_ID_FLOAT 0x10 -#define PALMAS_USB_ID_INT_EN_LO_CLR_ID_FLOAT_SHIFT 4 +#define 
PALMAS_USB_ID_INT_EN_LO_CLR_ID_FLOAT_SHIFT 0x04 #define PALMAS_USB_ID_INT_EN_LO_CLR_ID_A 0x08 -#define PALMAS_USB_ID_INT_EN_LO_CLR_ID_A_SHIFT 3 +#define PALMAS_USB_ID_INT_EN_LO_CLR_ID_A_SHIFT 0x03 #define PALMAS_USB_ID_INT_EN_LO_CLR_ID_B 0x04 -#define PALMAS_USB_ID_INT_EN_LO_CLR_ID_B_SHIFT 2 +#define PALMAS_USB_ID_INT_EN_LO_CLR_ID_B_SHIFT 0x02 #define PALMAS_USB_ID_INT_EN_LO_CLR_ID_C 0x02 -#define PALMAS_USB_ID_INT_EN_LO_CLR_ID_C_SHIFT 1 +#define PALMAS_USB_ID_INT_EN_LO_CLR_ID_C_SHIFT 0x01 #define PALMAS_USB_ID_INT_EN_LO_CLR_ID_GND 0x01 -#define PALMAS_USB_ID_INT_EN_LO_CLR_ID_GND_SHIFT 0 +#define PALMAS_USB_ID_INT_EN_LO_CLR_ID_GND_SHIFT 0x00 /* Bit definitions for USB_ID_INT_EN_HI_SET */ #define PALMAS_USB_ID_INT_EN_HI_SET_ID_FLOAT 0x10 -#define PALMAS_USB_ID_INT_EN_HI_SET_ID_FLOAT_SHIFT 4 +#define PALMAS_USB_ID_INT_EN_HI_SET_ID_FLOAT_SHIFT 0x04 #define PALMAS_USB_ID_INT_EN_HI_SET_ID_A 0x08 -#define PALMAS_USB_ID_INT_EN_HI_SET_ID_A_SHIFT 3 +#define PALMAS_USB_ID_INT_EN_HI_SET_ID_A_SHIFT 0x03 #define PALMAS_USB_ID_INT_EN_HI_SET_ID_B 0x04 -#define PALMAS_USB_ID_INT_EN_HI_SET_ID_B_SHIFT 2 +#define PALMAS_USB_ID_INT_EN_HI_SET_ID_B_SHIFT 0x02 #define PALMAS_USB_ID_INT_EN_HI_SET_ID_C 0x02 -#define PALMAS_USB_ID_INT_EN_HI_SET_ID_C_SHIFT 1 +#define PALMAS_USB_ID_INT_EN_HI_SET_ID_C_SHIFT 0x01 #define PALMAS_USB_ID_INT_EN_HI_SET_ID_GND 0x01 -#define PALMAS_USB_ID_INT_EN_HI_SET_ID_GND_SHIFT 0 +#define PALMAS_USB_ID_INT_EN_HI_SET_ID_GND_SHIFT 0x00 /* Bit definitions for USB_ID_INT_EN_HI_CLR */ #define PALMAS_USB_ID_INT_EN_HI_CLR_ID_FLOAT 0x10 -#define PALMAS_USB_ID_INT_EN_HI_CLR_ID_FLOAT_SHIFT 4 +#define PALMAS_USB_ID_INT_EN_HI_CLR_ID_FLOAT_SHIFT 0x04 #define PALMAS_USB_ID_INT_EN_HI_CLR_ID_A 0x08 -#define PALMAS_USB_ID_INT_EN_HI_CLR_ID_A_SHIFT 3 +#define PALMAS_USB_ID_INT_EN_HI_CLR_ID_A_SHIFT 0x03 #define PALMAS_USB_ID_INT_EN_HI_CLR_ID_B 0x04 -#define PALMAS_USB_ID_INT_EN_HI_CLR_ID_B_SHIFT 2 +#define PALMAS_USB_ID_INT_EN_HI_CLR_ID_B_SHIFT 0x02 #define PALMAS_USB_ID_INT_EN_HI_CLR_ID_C 0x02 -#define PALMAS_USB_ID_INT_EN_HI_CLR_ID_C_SHIFT 1 +#define PALMAS_USB_ID_INT_EN_HI_CLR_ID_C_SHIFT 0x01 #define PALMAS_USB_ID_INT_EN_HI_CLR_ID_GND 0x01 -#define PALMAS_USB_ID_INT_EN_HI_CLR_ID_GND_SHIFT 0 +#define PALMAS_USB_ID_INT_EN_HI_CLR_ID_GND_SHIFT 0x00 /* Bit definitions for USB_OTG_ADP_CTRL */ #define PALMAS_USB_OTG_ADP_CTRL_ADP_EN 0x04 -#define PALMAS_USB_OTG_ADP_CTRL_ADP_EN_SHIFT 2 +#define PALMAS_USB_OTG_ADP_CTRL_ADP_EN_SHIFT 0x02 #define PALMAS_USB_OTG_ADP_CTRL_ADP_MODE_MASK 0x03 -#define PALMAS_USB_OTG_ADP_CTRL_ADP_MODE_SHIFT 0 +#define PALMAS_USB_OTG_ADP_CTRL_ADP_MODE_SHIFT 0x00 /* Bit definitions for USB_OTG_ADP_HIGH */ -#define PALMAS_USB_OTG_ADP_HIGH_T_ADP_HIGH_MASK 0xff -#define PALMAS_USB_OTG_ADP_HIGH_T_ADP_HIGH_SHIFT 0 +#define PALMAS_USB_OTG_ADP_HIGH_T_ADP_HIGH_MASK 0xFF +#define PALMAS_USB_OTG_ADP_HIGH_T_ADP_HIGH_SHIFT 0x00 /* Bit definitions for USB_OTG_ADP_LOW */ -#define PALMAS_USB_OTG_ADP_LOW_T_ADP_LOW_MASK 0xff -#define PALMAS_USB_OTG_ADP_LOW_T_ADP_LOW_SHIFT 0 +#define PALMAS_USB_OTG_ADP_LOW_T_ADP_LOW_MASK 0xFF +#define PALMAS_USB_OTG_ADP_LOW_T_ADP_LOW_SHIFT 0x00 /* Bit definitions for USB_OTG_ADP_RISE */ -#define PALMAS_USB_OTG_ADP_RISE_T_ADP_RISE_MASK 0xff -#define PALMAS_USB_OTG_ADP_RISE_T_ADP_RISE_SHIFT 0 +#define PALMAS_USB_OTG_ADP_RISE_T_ADP_RISE_MASK 0xFF +#define PALMAS_USB_OTG_ADP_RISE_T_ADP_RISE_SHIFT 0x00 /* Bit definitions for USB_OTG_REVISION */ #define PALMAS_USB_OTG_REVISION_OTG_REV 0x01 -#define PALMAS_USB_OTG_REVISION_OTG_REV_SHIFT 0 +#define 
PALMAS_USB_OTG_REVISION_OTG_REV_SHIFT 0x00 /* Registers for function VIBRATOR */ -#define PALMAS_VIBRA_CTRL 0x0 +#define PALMAS_VIBRA_CTRL 0x00 /* Bit definitions for VIBRA_CTRL */ #define PALMAS_VIBRA_CTRL_PWM_DUTY_SEL_MASK 0x06 -#define PALMAS_VIBRA_CTRL_PWM_DUTY_SEL_SHIFT 1 +#define PALMAS_VIBRA_CTRL_PWM_DUTY_SEL_SHIFT 0x01 #define PALMAS_VIBRA_CTRL_PWM_FREQ_SEL 0x01 -#define PALMAS_VIBRA_CTRL_PWM_FREQ_SEL_SHIFT 0 +#define PALMAS_VIBRA_CTRL_PWM_FREQ_SEL_SHIFT 0x00 /* Registers for function GPIO */ -#define PALMAS_GPIO_DATA_IN 0x0 -#define PALMAS_GPIO_DATA_DIR 0x1 -#define PALMAS_GPIO_DATA_OUT 0x2 -#define PALMAS_GPIO_DEBOUNCE_EN 0x3 -#define PALMAS_GPIO_CLEAR_DATA_OUT 0x4 -#define PALMAS_GPIO_SET_DATA_OUT 0x5 -#define PALMAS_PU_PD_GPIO_CTRL1 0x6 -#define PALMAS_PU_PD_GPIO_CTRL2 0x7 -#define PALMAS_OD_OUTPUT_GPIO_CTRL 0x8 -#define PALMAS_GPIO_DATA_IN2 0x9 +#define PALMAS_GPIO_DATA_IN 0x00 +#define PALMAS_GPIO_DATA_DIR 0x01 +#define PALMAS_GPIO_DATA_OUT 0x02 +#define PALMAS_GPIO_DEBOUNCE_EN 0x03 +#define PALMAS_GPIO_CLEAR_DATA_OUT 0x04 +#define PALMAS_GPIO_SET_DATA_OUT 0x05 +#define PALMAS_PU_PD_GPIO_CTRL1 0x06 +#define PALMAS_PU_PD_GPIO_CTRL2 0x07 +#define PALMAS_OD_OUTPUT_GPIO_CTRL 0x08 +#define PALMAS_GPIO_DATA_IN2 0x09 #define PALMAS_GPIO_DATA_DIR2 0x0A #define PALMAS_GPIO_DATA_OUT2 0x0B #define PALMAS_GPIO_DEBOUNCE_EN2 0x0C @@ -2561,167 +2561,167 @@ enum usb_irq_events { /* Bit definitions for GPIO_DATA_IN */ #define PALMAS_GPIO_DATA_IN_GPIO_7_IN 0x80 -#define PALMAS_GPIO_DATA_IN_GPIO_7_IN_SHIFT 7 +#define PALMAS_GPIO_DATA_IN_GPIO_7_IN_SHIFT 0x07 #define PALMAS_GPIO_DATA_IN_GPIO_6_IN 0x40 -#define PALMAS_GPIO_DATA_IN_GPIO_6_IN_SHIFT 6 +#define PALMAS_GPIO_DATA_IN_GPIO_6_IN_SHIFT 0x06 #define PALMAS_GPIO_DATA_IN_GPIO_5_IN 0x20 -#define PALMAS_GPIO_DATA_IN_GPIO_5_IN_SHIFT 5 +#define PALMAS_GPIO_DATA_IN_GPIO_5_IN_SHIFT 0x05 #define PALMAS_GPIO_DATA_IN_GPIO_4_IN 0x10 -#define PALMAS_GPIO_DATA_IN_GPIO_4_IN_SHIFT 4 +#define PALMAS_GPIO_DATA_IN_GPIO_4_IN_SHIFT 0x04 #define PALMAS_GPIO_DATA_IN_GPIO_3_IN 0x08 -#define PALMAS_GPIO_DATA_IN_GPIO_3_IN_SHIFT 3 +#define PALMAS_GPIO_DATA_IN_GPIO_3_IN_SHIFT 0x03 #define PALMAS_GPIO_DATA_IN_GPIO_2_IN 0x04 -#define PALMAS_GPIO_DATA_IN_GPIO_2_IN_SHIFT 2 +#define PALMAS_GPIO_DATA_IN_GPIO_2_IN_SHIFT 0x02 #define PALMAS_GPIO_DATA_IN_GPIO_1_IN 0x02 -#define PALMAS_GPIO_DATA_IN_GPIO_1_IN_SHIFT 1 +#define PALMAS_GPIO_DATA_IN_GPIO_1_IN_SHIFT 0x01 #define PALMAS_GPIO_DATA_IN_GPIO_0_IN 0x01 -#define PALMAS_GPIO_DATA_IN_GPIO_0_IN_SHIFT 0 +#define PALMAS_GPIO_DATA_IN_GPIO_0_IN_SHIFT 0x00 /* Bit definitions for GPIO_DATA_DIR */ #define PALMAS_GPIO_DATA_DIR_GPIO_7_DIR 0x80 -#define PALMAS_GPIO_DATA_DIR_GPIO_7_DIR_SHIFT 7 +#define PALMAS_GPIO_DATA_DIR_GPIO_7_DIR_SHIFT 0x07 #define PALMAS_GPIO_DATA_DIR_GPIO_6_DIR 0x40 -#define PALMAS_GPIO_DATA_DIR_GPIO_6_DIR_SHIFT 6 +#define PALMAS_GPIO_DATA_DIR_GPIO_6_DIR_SHIFT 0x06 #define PALMAS_GPIO_DATA_DIR_GPIO_5_DIR 0x20 -#define PALMAS_GPIO_DATA_DIR_GPIO_5_DIR_SHIFT 5 +#define PALMAS_GPIO_DATA_DIR_GPIO_5_DIR_SHIFT 0x05 #define PALMAS_GPIO_DATA_DIR_GPIO_4_DIR 0x10 -#define PALMAS_GPIO_DATA_DIR_GPIO_4_DIR_SHIFT 4 +#define PALMAS_GPIO_DATA_DIR_GPIO_4_DIR_SHIFT 0x04 #define PALMAS_GPIO_DATA_DIR_GPIO_3_DIR 0x08 -#define PALMAS_GPIO_DATA_DIR_GPIO_3_DIR_SHIFT 3 +#define PALMAS_GPIO_DATA_DIR_GPIO_3_DIR_SHIFT 0x03 #define PALMAS_GPIO_DATA_DIR_GPIO_2_DIR 0x04 -#define PALMAS_GPIO_DATA_DIR_GPIO_2_DIR_SHIFT 2 +#define PALMAS_GPIO_DATA_DIR_GPIO_2_DIR_SHIFT 0x02 #define PALMAS_GPIO_DATA_DIR_GPIO_1_DIR 0x02 -#define 
PALMAS_GPIO_DATA_DIR_GPIO_1_DIR_SHIFT 1 +#define PALMAS_GPIO_DATA_DIR_GPIO_1_DIR_SHIFT 0x01 #define PALMAS_GPIO_DATA_DIR_GPIO_0_DIR 0x01 -#define PALMAS_GPIO_DATA_DIR_GPIO_0_DIR_SHIFT 0 +#define PALMAS_GPIO_DATA_DIR_GPIO_0_DIR_SHIFT 0x00 /* Bit definitions for GPIO_DATA_OUT */ #define PALMAS_GPIO_DATA_OUT_GPIO_7_OUT 0x80 -#define PALMAS_GPIO_DATA_OUT_GPIO_7_OUT_SHIFT 7 +#define PALMAS_GPIO_DATA_OUT_GPIO_7_OUT_SHIFT 0x07 #define PALMAS_GPIO_DATA_OUT_GPIO_6_OUT 0x40 -#define PALMAS_GPIO_DATA_OUT_GPIO_6_OUT_SHIFT 6 +#define PALMAS_GPIO_DATA_OUT_GPIO_6_OUT_SHIFT 0x06 #define PALMAS_GPIO_DATA_OUT_GPIO_5_OUT 0x20 -#define PALMAS_GPIO_DATA_OUT_GPIO_5_OUT_SHIFT 5 +#define PALMAS_GPIO_DATA_OUT_GPIO_5_OUT_SHIFT 0x05 #define PALMAS_GPIO_DATA_OUT_GPIO_4_OUT 0x10 -#define PALMAS_GPIO_DATA_OUT_GPIO_4_OUT_SHIFT 4 +#define PALMAS_GPIO_DATA_OUT_GPIO_4_OUT_SHIFT 0x04 #define PALMAS_GPIO_DATA_OUT_GPIO_3_OUT 0x08 -#define PALMAS_GPIO_DATA_OUT_GPIO_3_OUT_SHIFT 3 +#define PALMAS_GPIO_DATA_OUT_GPIO_3_OUT_SHIFT 0x03 #define PALMAS_GPIO_DATA_OUT_GPIO_2_OUT 0x04 -#define PALMAS_GPIO_DATA_OUT_GPIO_2_OUT_SHIFT 2 +#define PALMAS_GPIO_DATA_OUT_GPIO_2_OUT_SHIFT 0x02 #define PALMAS_GPIO_DATA_OUT_GPIO_1_OUT 0x02 -#define PALMAS_GPIO_DATA_OUT_GPIO_1_OUT_SHIFT 1 +#define PALMAS_GPIO_DATA_OUT_GPIO_1_OUT_SHIFT 0x01 #define PALMAS_GPIO_DATA_OUT_GPIO_0_OUT 0x01 -#define PALMAS_GPIO_DATA_OUT_GPIO_0_OUT_SHIFT 0 +#define PALMAS_GPIO_DATA_OUT_GPIO_0_OUT_SHIFT 0x00 /* Bit definitions for GPIO_DEBOUNCE_EN */ #define PALMAS_GPIO_DEBOUNCE_EN_GPIO_7_DEBOUNCE_EN 0x80 -#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_7_DEBOUNCE_EN_SHIFT 7 +#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_7_DEBOUNCE_EN_SHIFT 0x07 #define PALMAS_GPIO_DEBOUNCE_EN_GPIO_6_DEBOUNCE_EN 0x40 -#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_6_DEBOUNCE_EN_SHIFT 6 +#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_6_DEBOUNCE_EN_SHIFT 0x06 #define PALMAS_GPIO_DEBOUNCE_EN_GPIO_5_DEBOUNCE_EN 0x20 -#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_5_DEBOUNCE_EN_SHIFT 5 +#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_5_DEBOUNCE_EN_SHIFT 0x05 #define PALMAS_GPIO_DEBOUNCE_EN_GPIO_4_DEBOUNCE_EN 0x10 -#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_4_DEBOUNCE_EN_SHIFT 4 +#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_4_DEBOUNCE_EN_SHIFT 0x04 #define PALMAS_GPIO_DEBOUNCE_EN_GPIO_3_DEBOUNCE_EN 0x08 -#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_3_DEBOUNCE_EN_SHIFT 3 +#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_3_DEBOUNCE_EN_SHIFT 0x03 #define PALMAS_GPIO_DEBOUNCE_EN_GPIO_2_DEBOUNCE_EN 0x04 -#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_2_DEBOUNCE_EN_SHIFT 2 +#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_2_DEBOUNCE_EN_SHIFT 0x02 #define PALMAS_GPIO_DEBOUNCE_EN_GPIO_1_DEBOUNCE_EN 0x02 -#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_1_DEBOUNCE_EN_SHIFT 1 +#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_1_DEBOUNCE_EN_SHIFT 0x01 #define PALMAS_GPIO_DEBOUNCE_EN_GPIO_0_DEBOUNCE_EN 0x01 -#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_0_DEBOUNCE_EN_SHIFT 0 +#define PALMAS_GPIO_DEBOUNCE_EN_GPIO_0_DEBOUNCE_EN_SHIFT 0x00 /* Bit definitions for GPIO_CLEAR_DATA_OUT */ #define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_7_CLEAR_DATA_OUT 0x80 -#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_7_CLEAR_DATA_OUT_SHIFT 7 +#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_7_CLEAR_DATA_OUT_SHIFT 0x07 #define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_6_CLEAR_DATA_OUT 0x40 -#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_6_CLEAR_DATA_OUT_SHIFT 6 +#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_6_CLEAR_DATA_OUT_SHIFT 0x06 #define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_5_CLEAR_DATA_OUT 0x20 -#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_5_CLEAR_DATA_OUT_SHIFT 5 +#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_5_CLEAR_DATA_OUT_SHIFT 
0x05 #define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_4_CLEAR_DATA_OUT 0x10 -#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_4_CLEAR_DATA_OUT_SHIFT 4 +#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_4_CLEAR_DATA_OUT_SHIFT 0x04 #define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_3_CLEAR_DATA_OUT 0x08 -#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_3_CLEAR_DATA_OUT_SHIFT 3 +#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_3_CLEAR_DATA_OUT_SHIFT 0x03 #define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_2_CLEAR_DATA_OUT 0x04 -#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_2_CLEAR_DATA_OUT_SHIFT 2 +#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_2_CLEAR_DATA_OUT_SHIFT 0x02 #define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_1_CLEAR_DATA_OUT 0x02 -#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_1_CLEAR_DATA_OUT_SHIFT 1 +#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_1_CLEAR_DATA_OUT_SHIFT 0x01 #define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_0_CLEAR_DATA_OUT 0x01 -#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_0_CLEAR_DATA_OUT_SHIFT 0 +#define PALMAS_GPIO_CLEAR_DATA_OUT_GPIO_0_CLEAR_DATA_OUT_SHIFT 0x00 /* Bit definitions for GPIO_SET_DATA_OUT */ #define PALMAS_GPIO_SET_DATA_OUT_GPIO_7_SET_DATA_OUT 0x80 -#define PALMAS_GPIO_SET_DATA_OUT_GPIO_7_SET_DATA_OUT_SHIFT 7 +#define PALMAS_GPIO_SET_DATA_OUT_GPIO_7_SET_DATA_OUT_SHIFT 0x07 #define PALMAS_GPIO_SET_DATA_OUT_GPIO_6_SET_DATA_OUT 0x40 -#define PALMAS_GPIO_SET_DATA_OUT_GPIO_6_SET_DATA_OUT_SHIFT 6 +#define PALMAS_GPIO_SET_DATA_OUT_GPIO_6_SET_DATA_OUT_SHIFT 0x06 #define PALMAS_GPIO_SET_DATA_OUT_GPIO_5_SET_DATA_OUT 0x20 -#define PALMAS_GPIO_SET_DATA_OUT_GPIO_5_SET_DATA_OUT_SHIFT 5 +#define PALMAS_GPIO_SET_DATA_OUT_GPIO_5_SET_DATA_OUT_SHIFT 0x05 #define PALMAS_GPIO_SET_DATA_OUT_GPIO_4_SET_DATA_OUT 0x10 -#define PALMAS_GPIO_SET_DATA_OUT_GPIO_4_SET_DATA_OUT_SHIFT 4 +#define PALMAS_GPIO_SET_DATA_OUT_GPIO_4_SET_DATA_OUT_SHIFT 0x04 #define PALMAS_GPIO_SET_DATA_OUT_GPIO_3_SET_DATA_OUT 0x08 -#define PALMAS_GPIO_SET_DATA_OUT_GPIO_3_SET_DATA_OUT_SHIFT 3 +#define PALMAS_GPIO_SET_DATA_OUT_GPIO_3_SET_DATA_OUT_SHIFT 0x03 #define PALMAS_GPIO_SET_DATA_OUT_GPIO_2_SET_DATA_OUT 0x04 -#define PALMAS_GPIO_SET_DATA_OUT_GPIO_2_SET_DATA_OUT_SHIFT 2 +#define PALMAS_GPIO_SET_DATA_OUT_GPIO_2_SET_DATA_OUT_SHIFT 0x02 #define PALMAS_GPIO_SET_DATA_OUT_GPIO_1_SET_DATA_OUT 0x02 -#define PALMAS_GPIO_SET_DATA_OUT_GPIO_1_SET_DATA_OUT_SHIFT 1 +#define PALMAS_GPIO_SET_DATA_OUT_GPIO_1_SET_DATA_OUT_SHIFT 0x01 #define PALMAS_GPIO_SET_DATA_OUT_GPIO_0_SET_DATA_OUT 0x01 -#define PALMAS_GPIO_SET_DATA_OUT_GPIO_0_SET_DATA_OUT_SHIFT 0 +#define PALMAS_GPIO_SET_DATA_OUT_GPIO_0_SET_DATA_OUT_SHIFT 0x00 /* Bit definitions for PU_PD_GPIO_CTRL1 */ #define PALMAS_PU_PD_GPIO_CTRL1_GPIO_3_PD 0x40 -#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_3_PD_SHIFT 6 +#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_3_PD_SHIFT 0x06 #define PALMAS_PU_PD_GPIO_CTRL1_GPIO_2_PU 0x20 -#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_2_PU_SHIFT 5 +#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_2_PU_SHIFT 0x05 #define PALMAS_PU_PD_GPIO_CTRL1_GPIO_2_PD 0x10 -#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_2_PD_SHIFT 4 +#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_2_PD_SHIFT 0x04 #define PALMAS_PU_PD_GPIO_CTRL1_GPIO_1_PU 0x08 -#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_1_PU_SHIFT 3 +#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_1_PU_SHIFT 0x03 #define PALMAS_PU_PD_GPIO_CTRL1_GPIO_1_PD 0x04 -#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_1_PD_SHIFT 2 +#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_1_PD_SHIFT 0x02 #define PALMAS_PU_PD_GPIO_CTRL1_GPIO_0_PD 0x01 -#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_0_PD_SHIFT 0 +#define PALMAS_PU_PD_GPIO_CTRL1_GPIO_0_PD_SHIFT 0x00 /* Bit definitions for PU_PD_GPIO_CTRL2 */ #define PALMAS_PU_PD_GPIO_CTRL2_GPIO_7_PD 
0x40 -#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_7_PD_SHIFT 6 +#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_7_PD_SHIFT 0x06 #define PALMAS_PU_PD_GPIO_CTRL2_GPIO_6_PU 0x20 -#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_6_PU_SHIFT 5 +#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_6_PU_SHIFT 0x05 #define PALMAS_PU_PD_GPIO_CTRL2_GPIO_6_PD 0x10 -#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_6_PD_SHIFT 4 +#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_6_PD_SHIFT 0x04 #define PALMAS_PU_PD_GPIO_CTRL2_GPIO_5_PU 0x08 -#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_5_PU_SHIFT 3 +#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_5_PU_SHIFT 0x03 #define PALMAS_PU_PD_GPIO_CTRL2_GPIO_5_PD 0x04 -#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_5_PD_SHIFT 2 +#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_5_PD_SHIFT 0x02 #define PALMAS_PU_PD_GPIO_CTRL2_GPIO_4_PU 0x02 -#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_4_PU_SHIFT 1 +#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_4_PU_SHIFT 0x01 #define PALMAS_PU_PD_GPIO_CTRL2_GPIO_4_PD 0x01 -#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_4_PD_SHIFT 0 +#define PALMAS_PU_PD_GPIO_CTRL2_GPIO_4_PD_SHIFT 0x00 /* Bit definitions for OD_OUTPUT_GPIO_CTRL */ #define PALMAS_OD_OUTPUT_GPIO_CTRL_GPIO_5_OD 0x20 -#define PALMAS_OD_OUTPUT_GPIO_CTRL_GPIO_5_OD_SHIFT 5 +#define PALMAS_OD_OUTPUT_GPIO_CTRL_GPIO_5_OD_SHIFT 0x05 #define PALMAS_OD_OUTPUT_GPIO_CTRL_GPIO_2_OD 0x04 -#define PALMAS_OD_OUTPUT_GPIO_CTRL_GPIO_2_OD_SHIFT 2 +#define PALMAS_OD_OUTPUT_GPIO_CTRL_GPIO_2_OD_SHIFT 0x02 #define PALMAS_OD_OUTPUT_GPIO_CTRL_GPIO_1_OD 0x02 -#define PALMAS_OD_OUTPUT_GPIO_CTRL_GPIO_1_OD_SHIFT 1 +#define PALMAS_OD_OUTPUT_GPIO_CTRL_GPIO_1_OD_SHIFT 0x01 /* Registers for function GPADC */ -#define PALMAS_GPADC_CTRL1 0x0 -#define PALMAS_GPADC_CTRL2 0x1 -#define PALMAS_GPADC_RT_CTRL 0x2 -#define PALMAS_GPADC_AUTO_CTRL 0x3 -#define PALMAS_GPADC_STATUS 0x4 -#define PALMAS_GPADC_RT_SELECT 0x5 -#define PALMAS_GPADC_RT_CONV0_LSB 0x6 -#define PALMAS_GPADC_RT_CONV0_MSB 0x7 -#define PALMAS_GPADC_AUTO_SELECT 0x8 -#define PALMAS_GPADC_AUTO_CONV0_LSB 0x9 -#define PALMAS_GPADC_AUTO_CONV0_MSB 0xA -#define PALMAS_GPADC_AUTO_CONV1_LSB 0xB -#define PALMAS_GPADC_AUTO_CONV1_MSB 0xC -#define PALMAS_GPADC_SW_SELECT 0xD -#define PALMAS_GPADC_SW_CONV0_LSB 0xE -#define PALMAS_GPADC_SW_CONV0_MSB 0xF +#define PALMAS_GPADC_CTRL1 0x00 +#define PALMAS_GPADC_CTRL2 0x01 +#define PALMAS_GPADC_RT_CTRL 0x02 +#define PALMAS_GPADC_AUTO_CTRL 0x03 +#define PALMAS_GPADC_STATUS 0x04 +#define PALMAS_GPADC_RT_SELECT 0x05 +#define PALMAS_GPADC_RT_CONV0_LSB 0x06 +#define PALMAS_GPADC_RT_CONV0_MSB 0x07 +#define PALMAS_GPADC_AUTO_SELECT 0x08 +#define PALMAS_GPADC_AUTO_CONV0_LSB 0x09 +#define PALMAS_GPADC_AUTO_CONV0_MSB 0x0A +#define PALMAS_GPADC_AUTO_CONV1_LSB 0x0B +#define PALMAS_GPADC_AUTO_CONV1_MSB 0x0C +#define PALMAS_GPADC_SW_SELECT 0x0D +#define PALMAS_GPADC_SW_CONV0_LSB 0x0E +#define PALMAS_GPADC_SW_CONV0_MSB 0x0F #define PALMAS_GPADC_THRES_CONV0_LSB 0x10 #define PALMAS_GPADC_THRES_CONV0_MSB 0x11 #define PALMAS_GPADC_THRES_CONV1_LSB 0x12 @@ -2731,150 +2731,150 @@ enum usb_irq_events { /* Bit definitions for GPADC_CTRL1 */ #define PALMAS_GPADC_CTRL1_RESERVED_MASK 0xc0 -#define PALMAS_GPADC_CTRL1_RESERVED_SHIFT 6 +#define PALMAS_GPADC_CTRL1_RESERVED_SHIFT 0x06 #define PALMAS_GPADC_CTRL1_CURRENT_SRC_CH3_MASK 0x30 -#define PALMAS_GPADC_CTRL1_CURRENT_SRC_CH3_SHIFT 4 +#define PALMAS_GPADC_CTRL1_CURRENT_SRC_CH3_SHIFT 0x04 #define PALMAS_GPADC_CTRL1_CURRENT_SRC_CH0_MASK 0x0c -#define PALMAS_GPADC_CTRL1_CURRENT_SRC_CH0_SHIFT 2 +#define PALMAS_GPADC_CTRL1_CURRENT_SRC_CH0_SHIFT 0x02 #define PALMAS_GPADC_CTRL1_BAT_REMOVAL_DET 0x02 -#define 
PALMAS_GPADC_CTRL1_BAT_REMOVAL_DET_SHIFT 1 +#define PALMAS_GPADC_CTRL1_BAT_REMOVAL_DET_SHIFT 0x01 #define PALMAS_GPADC_CTRL1_GPADC_FORCE 0x01 -#define PALMAS_GPADC_CTRL1_GPADC_FORCE_SHIFT 0 +#define PALMAS_GPADC_CTRL1_GPADC_FORCE_SHIFT 0x00 /* Bit definitions for GPADC_CTRL2 */ #define PALMAS_GPADC_CTRL2_RESERVED_MASK 0x06 -#define PALMAS_GPADC_CTRL2_RESERVED_SHIFT 1 +#define PALMAS_GPADC_CTRL2_RESERVED_SHIFT 0x01 /* Bit definitions for GPADC_RT_CTRL */ #define PALMAS_GPADC_RT_CTRL_EXTEND_DELAY 0x02 -#define PALMAS_GPADC_RT_CTRL_EXTEND_DELAY_SHIFT 1 +#define PALMAS_GPADC_RT_CTRL_EXTEND_DELAY_SHIFT 0x01 #define PALMAS_GPADC_RT_CTRL_START_POLARITY 0x01 -#define PALMAS_GPADC_RT_CTRL_START_POLARITY_SHIFT 0 +#define PALMAS_GPADC_RT_CTRL_START_POLARITY_SHIFT 0x00 /* Bit definitions for GPADC_AUTO_CTRL */ #define PALMAS_GPADC_AUTO_CTRL_SHUTDOWN_CONV1 0x80 -#define PALMAS_GPADC_AUTO_CTRL_SHUTDOWN_CONV1_SHIFT 7 +#define PALMAS_GPADC_AUTO_CTRL_SHUTDOWN_CONV1_SHIFT 0x07 #define PALMAS_GPADC_AUTO_CTRL_SHUTDOWN_CONV0 0x40 -#define PALMAS_GPADC_AUTO_CTRL_SHUTDOWN_CONV0_SHIFT 6 +#define PALMAS_GPADC_AUTO_CTRL_SHUTDOWN_CONV0_SHIFT 0x06 #define PALMAS_GPADC_AUTO_CTRL_AUTO_CONV1_EN 0x20 -#define PALMAS_GPADC_AUTO_CTRL_AUTO_CONV1_EN_SHIFT 5 +#define PALMAS_GPADC_AUTO_CTRL_AUTO_CONV1_EN_SHIFT 0x05 #define PALMAS_GPADC_AUTO_CTRL_AUTO_CONV0_EN 0x10 -#define PALMAS_GPADC_AUTO_CTRL_AUTO_CONV0_EN_SHIFT 4 -#define PALMAS_GPADC_AUTO_CTRL_COUNTER_CONV_MASK 0x0f -#define PALMAS_GPADC_AUTO_CTRL_COUNTER_CONV_SHIFT 0 +#define PALMAS_GPADC_AUTO_CTRL_AUTO_CONV0_EN_SHIFT 0x04 +#define PALMAS_GPADC_AUTO_CTRL_COUNTER_CONV_MASK 0x0F +#define PALMAS_GPADC_AUTO_CTRL_COUNTER_CONV_SHIFT 0x00 /* Bit definitions for GPADC_STATUS */ #define PALMAS_GPADC_STATUS_GPADC_AVAILABLE 0x10 -#define PALMAS_GPADC_STATUS_GPADC_AVAILABLE_SHIFT 4 +#define PALMAS_GPADC_STATUS_GPADC_AVAILABLE_SHIFT 0x04 /* Bit definitions for GPADC_RT_SELECT */ #define PALMAS_GPADC_RT_SELECT_RT_CONV_EN 0x80 -#define PALMAS_GPADC_RT_SELECT_RT_CONV_EN_SHIFT 7 -#define PALMAS_GPADC_RT_SELECT_RT_CONV0_SEL_MASK 0x0f -#define PALMAS_GPADC_RT_SELECT_RT_CONV0_SEL_SHIFT 0 +#define PALMAS_GPADC_RT_SELECT_RT_CONV_EN_SHIFT 0x07 +#define PALMAS_GPADC_RT_SELECT_RT_CONV0_SEL_MASK 0x0F +#define PALMAS_GPADC_RT_SELECT_RT_CONV0_SEL_SHIFT 0x00 /* Bit definitions for GPADC_RT_CONV0_LSB */ -#define PALMAS_GPADC_RT_CONV0_LSB_RT_CONV0_LSB_MASK 0xff -#define PALMAS_GPADC_RT_CONV0_LSB_RT_CONV0_LSB_SHIFT 0 +#define PALMAS_GPADC_RT_CONV0_LSB_RT_CONV0_LSB_MASK 0xFF +#define PALMAS_GPADC_RT_CONV0_LSB_RT_CONV0_LSB_SHIFT 0x00 /* Bit definitions for GPADC_RT_CONV0_MSB */ -#define PALMAS_GPADC_RT_CONV0_MSB_RT_CONV0_MSB_MASK 0x0f -#define PALMAS_GPADC_RT_CONV0_MSB_RT_CONV0_MSB_SHIFT 0 +#define PALMAS_GPADC_RT_CONV0_MSB_RT_CONV0_MSB_MASK 0x0F +#define PALMAS_GPADC_RT_CONV0_MSB_RT_CONV0_MSB_SHIFT 0x00 /* Bit definitions for GPADC_AUTO_SELECT */ -#define PALMAS_GPADC_AUTO_SELECT_AUTO_CONV1_SEL_MASK 0xf0 -#define PALMAS_GPADC_AUTO_SELECT_AUTO_CONV1_SEL_SHIFT 4 -#define PALMAS_GPADC_AUTO_SELECT_AUTO_CONV0_SEL_MASK 0x0f -#define PALMAS_GPADC_AUTO_SELECT_AUTO_CONV0_SEL_SHIFT 0 +#define PALMAS_GPADC_AUTO_SELECT_AUTO_CONV1_SEL_MASK 0xF0 +#define PALMAS_GPADC_AUTO_SELECT_AUTO_CONV1_SEL_SHIFT 0x04 +#define PALMAS_GPADC_AUTO_SELECT_AUTO_CONV0_SEL_MASK 0x0F +#define PALMAS_GPADC_AUTO_SELECT_AUTO_CONV0_SEL_SHIFT 0x00 /* Bit definitions for GPADC_AUTO_CONV0_LSB */ -#define PALMAS_GPADC_AUTO_CONV0_LSB_AUTO_CONV0_LSB_MASK 0xff -#define PALMAS_GPADC_AUTO_CONV0_LSB_AUTO_CONV0_LSB_SHIFT 0 +#define 
PALMAS_GPADC_AUTO_CONV0_LSB_AUTO_CONV0_LSB_MASK 0xFF +#define PALMAS_GPADC_AUTO_CONV0_LSB_AUTO_CONV0_LSB_SHIFT 0x00 /* Bit definitions for GPADC_AUTO_CONV0_MSB */ -#define PALMAS_GPADC_AUTO_CONV0_MSB_AUTO_CONV0_MSB_MASK 0x0f -#define PALMAS_GPADC_AUTO_CONV0_MSB_AUTO_CONV0_MSB_SHIFT 0 +#define PALMAS_GPADC_AUTO_CONV0_MSB_AUTO_CONV0_MSB_MASK 0x0F +#define PALMAS_GPADC_AUTO_CONV0_MSB_AUTO_CONV0_MSB_SHIFT 0x00 /* Bit definitions for GPADC_AUTO_CONV1_LSB */ -#define PALMAS_GPADC_AUTO_CONV1_LSB_AUTO_CONV1_LSB_MASK 0xff -#define PALMAS_GPADC_AUTO_CONV1_LSB_AUTO_CONV1_LSB_SHIFT 0 +#define PALMAS_GPADC_AUTO_CONV1_LSB_AUTO_CONV1_LSB_MASK 0xFF +#define PALMAS_GPADC_AUTO_CONV1_LSB_AUTO_CONV1_LSB_SHIFT 0x00 /* Bit definitions for GPADC_AUTO_CONV1_MSB */ -#define PALMAS_GPADC_AUTO_CONV1_MSB_AUTO_CONV1_MSB_MASK 0x0f -#define PALMAS_GPADC_AUTO_CONV1_MSB_AUTO_CONV1_MSB_SHIFT 0 +#define PALMAS_GPADC_AUTO_CONV1_MSB_AUTO_CONV1_MSB_MASK 0x0F +#define PALMAS_GPADC_AUTO_CONV1_MSB_AUTO_CONV1_MSB_SHIFT 0x00 /* Bit definitions for GPADC_SW_SELECT */ #define PALMAS_GPADC_SW_SELECT_SW_CONV_EN 0x80 -#define PALMAS_GPADC_SW_SELECT_SW_CONV_EN_SHIFT 7 +#define PALMAS_GPADC_SW_SELECT_SW_CONV_EN_SHIFT 0x07 #define PALMAS_GPADC_SW_SELECT_SW_START_CONV0 0x10 -#define PALMAS_GPADC_SW_SELECT_SW_START_CONV0_SHIFT 4 -#define PALMAS_GPADC_SW_SELECT_SW_CONV0_SEL_MASK 0x0f -#define PALMAS_GPADC_SW_SELECT_SW_CONV0_SEL_SHIFT 0 +#define PALMAS_GPADC_SW_SELECT_SW_START_CONV0_SHIFT 0x04 +#define PALMAS_GPADC_SW_SELECT_SW_CONV0_SEL_MASK 0x0F +#define PALMAS_GPADC_SW_SELECT_SW_CONV0_SEL_SHIFT 0x00 /* Bit definitions for GPADC_SW_CONV0_LSB */ -#define PALMAS_GPADC_SW_CONV0_LSB_SW_CONV0_LSB_MASK 0xff -#define PALMAS_GPADC_SW_CONV0_LSB_SW_CONV0_LSB_SHIFT 0 +#define PALMAS_GPADC_SW_CONV0_LSB_SW_CONV0_LSB_MASK 0xFF +#define PALMAS_GPADC_SW_CONV0_LSB_SW_CONV0_LSB_SHIFT 0x00 /* Bit definitions for GPADC_SW_CONV0_MSB */ -#define PALMAS_GPADC_SW_CONV0_MSB_SW_CONV0_MSB_MASK 0x0f -#define PALMAS_GPADC_SW_CONV0_MSB_SW_CONV0_MSB_SHIFT 0 +#define PALMAS_GPADC_SW_CONV0_MSB_SW_CONV0_MSB_MASK 0x0F +#define PALMAS_GPADC_SW_CONV0_MSB_SW_CONV0_MSB_SHIFT 0x00 /* Bit definitions for GPADC_THRES_CONV0_LSB */ -#define PALMAS_GPADC_THRES_CONV0_LSB_THRES_CONV0_LSB_MASK 0xff -#define PALMAS_GPADC_THRES_CONV0_LSB_THRES_CONV0_LSB_SHIFT 0 +#define PALMAS_GPADC_THRES_CONV0_LSB_THRES_CONV0_LSB_MASK 0xFF +#define PALMAS_GPADC_THRES_CONV0_LSB_THRES_CONV0_LSB_SHIFT 0x00 /* Bit definitions for GPADC_THRES_CONV0_MSB */ #define PALMAS_GPADC_THRES_CONV0_MSB_THRES_CONV0_POL 0x80 -#define PALMAS_GPADC_THRES_CONV0_MSB_THRES_CONV0_POL_SHIFT 7 -#define PALMAS_GPADC_THRES_CONV0_MSB_THRES_CONV0_MSB_MASK 0x0f -#define PALMAS_GPADC_THRES_CONV0_MSB_THRES_CONV0_MSB_SHIFT 0 +#define PALMAS_GPADC_THRES_CONV0_MSB_THRES_CONV0_POL_SHIFT 0x07 +#define PALMAS_GPADC_THRES_CONV0_MSB_THRES_CONV0_MSB_MASK 0x0F +#define PALMAS_GPADC_THRES_CONV0_MSB_THRES_CONV0_MSB_SHIFT 0x00 /* Bit definitions for GPADC_THRES_CONV1_LSB */ -#define PALMAS_GPADC_THRES_CONV1_LSB_THRES_CONV1_LSB_MASK 0xff -#define PALMAS_GPADC_THRES_CONV1_LSB_THRES_CONV1_LSB_SHIFT 0 +#define PALMAS_GPADC_THRES_CONV1_LSB_THRES_CONV1_LSB_MASK 0xFF +#define PALMAS_GPADC_THRES_CONV1_LSB_THRES_CONV1_LSB_SHIFT 0x00 /* Bit definitions for GPADC_THRES_CONV1_MSB */ #define PALMAS_GPADC_THRES_CONV1_MSB_THRES_CONV1_POL 0x80 -#define PALMAS_GPADC_THRES_CONV1_MSB_THRES_CONV1_POL_SHIFT 7 -#define PALMAS_GPADC_THRES_CONV1_MSB_THRES_CONV1_MSB_MASK 0x0f -#define PALMAS_GPADC_THRES_CONV1_MSB_THRES_CONV1_MSB_SHIFT 0 +#define 
PALMAS_GPADC_THRES_CONV1_MSB_THRES_CONV1_POL_SHIFT 0x07 +#define PALMAS_GPADC_THRES_CONV1_MSB_THRES_CONV1_MSB_MASK 0x0F +#define PALMAS_GPADC_THRES_CONV1_MSB_THRES_CONV1_MSB_SHIFT 0x00 /* Bit definitions for GPADC_SMPS_ILMONITOR_EN */ #define PALMAS_GPADC_SMPS_ILMONITOR_EN_SMPS_ILMON_EN 0x20 -#define PALMAS_GPADC_SMPS_ILMONITOR_EN_SMPS_ILMON_EN_SHIFT 5 +#define PALMAS_GPADC_SMPS_ILMONITOR_EN_SMPS_ILMON_EN_SHIFT 0x05 #define PALMAS_GPADC_SMPS_ILMONITOR_EN_SMPS_ILMON_REXT 0x10 -#define PALMAS_GPADC_SMPS_ILMONITOR_EN_SMPS_ILMON_REXT_SHIFT 4 -#define PALMAS_GPADC_SMPS_ILMONITOR_EN_SMPS_ILMON_SEL_MASK 0x0f -#define PALMAS_GPADC_SMPS_ILMONITOR_EN_SMPS_ILMON_SEL_SHIFT 0 +#define PALMAS_GPADC_SMPS_ILMONITOR_EN_SMPS_ILMON_REXT_SHIFT 0x04 +#define PALMAS_GPADC_SMPS_ILMONITOR_EN_SMPS_ILMON_SEL_MASK 0x0F +#define PALMAS_GPADC_SMPS_ILMONITOR_EN_SMPS_ILMON_SEL_SHIFT 0x00 /* Bit definitions for GPADC_SMPS_VSEL_MONITORING */ #define PALMAS_GPADC_SMPS_VSEL_MONITORING_ACTIVE_PHASE 0x80 -#define PALMAS_GPADC_SMPS_VSEL_MONITORING_ACTIVE_PHASE_SHIFT 7 -#define PALMAS_GPADC_SMPS_VSEL_MONITORING_SMPS_VSEL_MONITORING_MASK 0x7f -#define PALMAS_GPADC_SMPS_VSEL_MONITORING_SMPS_VSEL_MONITORING_SHIFT 0 +#define PALMAS_GPADC_SMPS_VSEL_MONITORING_ACTIVE_PHASE_SHIFT 0x07 +#define PALMAS_GPADC_SMPS_VSEL_MONITORING_SMPS_VSEL_MONITORING_MASK 0x7F +#define PALMAS_GPADC_SMPS_VSEL_MONITORING_SMPS_VSEL_MONITORING_SHIFT 0x00 /* Registers for function GPADC */ -#define PALMAS_GPADC_TRIM1 0x0 -#define PALMAS_GPADC_TRIM2 0x1 -#define PALMAS_GPADC_TRIM3 0x2 -#define PALMAS_GPADC_TRIM4 0x3 -#define PALMAS_GPADC_TRIM5 0x4 -#define PALMAS_GPADC_TRIM6 0x5 -#define PALMAS_GPADC_TRIM7 0x6 -#define PALMAS_GPADC_TRIM8 0x7 -#define PALMAS_GPADC_TRIM9 0x8 -#define PALMAS_GPADC_TRIM10 0x9 -#define PALMAS_GPADC_TRIM11 0xA -#define PALMAS_GPADC_TRIM12 0xB -#define PALMAS_GPADC_TRIM13 0xC -#define PALMAS_GPADC_TRIM14 0xD -#define PALMAS_GPADC_TRIM15 0xE -#define PALMAS_GPADC_TRIM16 0xF +#define PALMAS_GPADC_TRIM1 0x00 +#define PALMAS_GPADC_TRIM2 0x01 +#define PALMAS_GPADC_TRIM3 0x02 +#define PALMAS_GPADC_TRIM4 0x03 +#define PALMAS_GPADC_TRIM5 0x04 +#define PALMAS_GPADC_TRIM6 0x05 +#define PALMAS_GPADC_TRIM7 0x06 +#define PALMAS_GPADC_TRIM8 0x07 +#define PALMAS_GPADC_TRIM9 0x08 +#define PALMAS_GPADC_TRIM10 0x09 +#define PALMAS_GPADC_TRIM11 0x0A +#define PALMAS_GPADC_TRIM12 0x0B +#define PALMAS_GPADC_TRIM13 0x0C +#define PALMAS_GPADC_TRIM14 0x0D +#define PALMAS_GPADC_TRIM15 0x0E +#define PALMAS_GPADC_TRIM16 0x0F static inline int palmas_read(struct palmas *palmas, unsigned int base, unsigned int reg, unsigned int *val) { - unsigned int addr = PALMAS_BASE_TO_REG(base, reg); + unsigned int addr = PALMAS_BASE_TO_REG(base, reg); int slave_id = PALMAS_BASE_TO_SLAVE(base); return regmap_read(palmas->regmap[slave_id], addr, val); diff --git a/include/linux/mfd/pm8xxx/core.h b/include/linux/mfd/pm8xxx/core.h deleted file mode 100644 index bd2f4f64e931..000000000000 --- a/include/linux/mfd/pm8xxx/core.h +++ /dev/null @@ -1,81 +0,0 @@ -/* - * Copyright (c) 2011, Code Aurora Forum. All rights reserved. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 and - * only version 2 as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. 
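(Illustrative aside, not part of the patch above: the *_MASK/*_SHIFT pairs being renumbered in palmas.h are normally combined with the palmas_read() regmap helper shown at the end of that hunk. Below is a minimal sketch of that pattern; PALMAS_GPADC_BASE and the wrapper function are assumptions made only for this example and are not code from this series.)

/* Sketch only: read GPADC_SW_SELECT and extract the SW_CONV0_SEL field. */
static int example_gpadc_sw_conv0_sel(struct palmas *palmas)
{
	unsigned int val;
	int ret;

	/* palmas_read() resolves the slave and register address via regmap. */
	ret = palmas_read(palmas, PALMAS_GPADC_BASE,
			  PALMAS_GPADC_SW_SELECT, &val);
	if (ret < 0)
		return ret;

	/* The mask selects bits 3..0; the shift (0x00 here) aligns the field. */
	return (val & PALMAS_GPADC_SW_SELECT_SW_CONV0_SEL_MASK) >>
		PALMAS_GPADC_SW_SELECT_SW_CONV0_SEL_SHIFT;
}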
- */ -/* - * Qualcomm PMIC 8xxx driver header file - * - */ - -#ifndef __MFD_PM8XXX_CORE_H -#define __MFD_PM8XXX_CORE_H - -#include <linux/mfd/core.h> - -struct pm8xxx_drvdata { - int (*pmic_readb) (const struct device *dev, u16 addr, u8 *val); - int (*pmic_writeb) (const struct device *dev, u16 addr, u8 val); - int (*pmic_read_buf) (const struct device *dev, u16 addr, u8 *buf, - int n); - int (*pmic_write_buf) (const struct device *dev, u16 addr, u8 *buf, - int n); - int (*pmic_read_irq_stat) (const struct device *dev, int irq); - void *pm_chip_data; -}; - -static inline int pm8xxx_readb(const struct device *dev, u16 addr, u8 *val) -{ - struct pm8xxx_drvdata *dd = dev_get_drvdata(dev); - - if (!dd) - return -EINVAL; - return dd->pmic_readb(dev, addr, val); -} - -static inline int pm8xxx_writeb(const struct device *dev, u16 addr, u8 val) -{ - struct pm8xxx_drvdata *dd = dev_get_drvdata(dev); - - if (!dd) - return -EINVAL; - return dd->pmic_writeb(dev, addr, val); -} - -static inline int pm8xxx_read_buf(const struct device *dev, u16 addr, u8 *buf, - int n) -{ - struct pm8xxx_drvdata *dd = dev_get_drvdata(dev); - - if (!dd) - return -EINVAL; - return dd->pmic_read_buf(dev, addr, buf, n); -} - -static inline int pm8xxx_write_buf(const struct device *dev, u16 addr, u8 *buf, - int n) -{ - struct pm8xxx_drvdata *dd = dev_get_drvdata(dev); - - if (!dd) - return -EINVAL; - return dd->pmic_write_buf(dev, addr, buf, n); -} - -static inline int pm8xxx_read_irq_stat(const struct device *dev, int irq) -{ - struct pm8xxx_drvdata *dd = dev_get_drvdata(dev); - - if (!dd) - return -EINVAL; - return dd->pmic_read_irq_stat(dev, irq); -} - -#endif diff --git a/include/linux/mfd/rdc321x.h b/include/linux/mfd/rdc321x.h index 4bdf19c8eedf..442743a8f915 100644 --- a/include/linux/mfd/rdc321x.h +++ b/include/linux/mfd/rdc321x.h @@ -12,7 +12,7 @@ #define RDC321X_GPIO_CTRL_REG2 0x84 #define RDC321X_GPIO_DATA_REG2 0x88 -#define RDC321X_MAX_GPIO 58 +#define RDC321X_NUM_GPIO 59 struct rdc321x_gpio_pdata { struct pci_dev *sb_pdev; diff --git a/include/linux/mfd/samsung/core.h b/include/linux/mfd/samsung/core.h index 157e32b6ca28..47d84242940b 100644 --- a/include/linux/mfd/samsung/core.h +++ b/include/linux/mfd/samsung/core.h @@ -24,35 +24,36 @@ enum sec_device_type { }; /** - * struct sec_pmic_dev - s5m87xx master device for sub-drivers - * @dev: master device of the chip (can be used to access platform data) - * @pdata: pointer to private data used to pass platform data to child - * @i2c: i2c client private data for regulator - * @rtc: i2c client private data for rtc - * @iolock: mutex for serializing io access - * @irqlock: mutex for buslock - * @irq_base: base IRQ number for sec-pmic, required for IRQs - * @irq: generic IRQ number for s5m87xx - * @ono: power onoff IRQ number for s5m87xx - * @irq_masks_cur: currently active value - * @irq_masks_cache: cached hardware value - * @type: indicate which s5m87xx "variant" is used + * struct sec_pmic_dev - s2m/s5m master device for sub-drivers + * @dev: Master device of the chip + * @pdata: Platform data populated with data from DTS + * or board files + * @regmap_pmic: Regmap associated with PMIC's I2C address + * @i2c: I2C client of the main driver + * @device_type: Type of device, matches enum sec_device_type + * @irq_base: Base IRQ number for device, required for IRQs + * @irq: Generic IRQ number for device + * @irq_data: Runtime data structure for IRQ controller + * @ono: Power onoff IRQ number for s5m87xx + * @wakeup: Whether or not this is a wakeup device + * 
@wtsr_smpl: Whether or not to enable in RTC driver the Watchdog + * Timer Software Reset (registers set to default value + * after PWRHOLD falling) and Sudden Momentary Power Loss + * (PMIC will enter power on sequence after short drop in + * VBATT voltage). */ struct sec_pmic_dev { struct device *dev; struct sec_platform_data *pdata; struct regmap *regmap_pmic; - struct regmap *regmap_rtc; struct i2c_client *i2c; - struct i2c_client *rtc; - int device_type; + unsigned long device_type; int irq_base; int irq; struct regmap_irq_chip_data *irq_data; int ono; - unsigned long type; bool wakeup; bool wtsr_smpl; }; diff --git a/include/linux/mfd/stmpe.h b/include/linux/mfd/stmpe.h index 48395a69a7e9..575a86c7fcbd 100644 --- a/include/linux/mfd/stmpe.h +++ b/include/linux/mfd/stmpe.h @@ -11,6 +11,7 @@ #include <linux/mutex.h> struct device; +struct regulator; enum stmpe_block { STMPE_BLOCK_GPIO = 1 << 0, @@ -62,6 +63,8 @@ struct stmpe_client_info; /** * struct stmpe - STMPE MFD structure + * @vcc: optional VCC regulator + * @vio: optional VIO regulator * @lock: lock protecting I/O operations * @irq_lock: IRQ bus lock * @dev: device, mostly for dev_dbg() @@ -73,13 +76,14 @@ struct stmpe_client_info; * @regs: list of addresses of registers which are at different addresses on * different variants. Indexed by one of STMPE_IDX_*. * @irq: irq number for stmpe - * @irq_base: starting IRQ number for internal IRQs * @num_gpios: number of gpios, differs for variants * @ier: cache of IER registers for bus_lock * @oldier: cache of IER registers for bus_lock * @pdata: platform data */ struct stmpe { + struct regulator *vcc; + struct regulator *vio; struct mutex lock; struct mutex irq_lock; struct device *dev; @@ -91,7 +95,6 @@ struct stmpe { const u8 *regs; int irq; - int irq_base; int num_gpios; u8 ier[2]; u8 oldier[2]; @@ -132,8 +135,6 @@ struct stmpe_keypad_platform_data { /** * struct stmpe_gpio_platform_data - STMPE GPIO platform data - * @gpio_base: first gpio number assigned. A maximum of - * %STMPE_NR_GPIOS GPIOs will be allocated. * @norequest_mask: bitmask specifying which GPIOs should _not_ be * requestable due to different usage (e.g. touch, keypad) * STMPE_GPIO_NOREQ_* macros can be used here. @@ -141,7 +142,6 @@ struct stmpe_keypad_platform_data { * @remove: board specific remove callback */ struct stmpe_gpio_platform_data { - int gpio_base; unsigned norequest_mask; void (*setup)(struct stmpe *stmpe, unsigned gpio_base); void (*remove)(struct stmpe *stmpe, unsigned gpio_base); @@ -195,8 +195,6 @@ struct stmpe_ts_platform_data { * @irq_trigger: IRQ trigger to use for the interrupt to the host * @autosleep: bool to enable/disable stmpe autosleep * @autosleep_timeout: inactivity timeout in milliseconds for autosleep - * @irq_base: base IRQ number. %STMPE_NR_IRQS irqs will be used, or - * %STMPE_NR_INTERNAL_IRQS if the GPIO driver is not used. 
* @irq_over_gpio: true if gpio is used to get irq * @irq_gpio: gpio number over which irq will be requested (significant only if * irq_over_gpio is true) @@ -207,7 +205,6 @@ struct stmpe_ts_platform_data { struct stmpe_platform_data { int id; unsigned int blocks; - int irq_base; unsigned int irq_trigger; bool autosleep; bool irq_over_gpio; @@ -219,10 +216,4 @@ struct stmpe_platform_data { struct stmpe_ts_platform_data *ts; }; -#define STMPE_NR_INTERNAL_IRQS 9 -#define STMPE_INT_GPIO(x) (STMPE_NR_INTERNAL_IRQS + (x)) - -#define STMPE_NR_GPIOS 24 -#define STMPE_NR_IRQS STMPE_INT_GPIO(STMPE_NR_GPIOS) - #endif diff --git a/include/linux/mfd/syscon.h b/include/linux/mfd/syscon.h index 8789fa3c7fd9..75e543b78f53 100644 --- a/include/linux/mfd/syscon.h +++ b/include/linux/mfd/syscon.h @@ -15,6 +15,8 @@ #ifndef __LINUX_MFD_SYSCON_H__ #define __LINUX_MFD_SYSCON_H__ +#include <linux/err.h> + struct device_node; #ifdef CONFIG_MFD_SYSCON diff --git a/include/linux/mfd/tps65218.h b/include/linux/mfd/tps65218.h index d2e357df5a0e..2f9b593246ee 100644 --- a/include/linux/mfd/tps65218.h +++ b/include/linux/mfd/tps65218.h @@ -267,7 +267,6 @@ struct tps65218 { u32 irq_mask; struct regmap_irq_chip_data *irq_data; struct regulator_desc desc[TPS65218_NUM_REGULATOR]; - struct regulator_dev *rdev[TPS65218_NUM_REGULATOR]; struct tps_info *info[TPS65218_NUM_REGULATOR]; struct regmap *regmap; }; diff --git a/include/linux/mfd/twl6040.h b/include/linux/mfd/twl6040.h index 81f639bc1ae6..8f9fc3d26e6d 100644 --- a/include/linux/mfd/twl6040.h +++ b/include/linux/mfd/twl6040.h @@ -28,6 +28,7 @@ #include <linux/interrupt.h> #include <linux/mfd/core.h> #include <linux/regulator/consumer.h> +#include <linux/clk.h> #define TWL6040_REG_ASICID 0x01 #define TWL6040_REG_ASICREV 0x02 @@ -157,6 +158,7 @@ #define TWL6040_I2CSEL 0x01 #define TWL6040_RESETSPLIT 0x04 #define TWL6040_INTCLRMODE 0x08 +#define TWL6040_I2CMODE(x) ((x & 0x3) << 4) /* STATUS (0x2E) fields */ @@ -222,6 +224,7 @@ struct twl6040 { struct regmap *regmap; struct regmap_irq_chip_data *irq_data; struct regulator_bulk_data supplies[2]; /* supplies for vio, v2v1 */ + struct clk *clk32k; struct mutex mutex; struct mutex irq_mutex; struct mfd_cell cells[TWL6040_CELLS]; diff --git a/include/linux/netlink.h b/include/linux/netlink.h index f64b01787ddc..034cda789a15 100644 --- a/include/linux/netlink.h +++ b/include/linux/netlink.h @@ -16,9 +16,10 @@ static inline struct nlmsghdr *nlmsg_hdr(const struct sk_buff *skb) } enum netlink_skb_flags { - NETLINK_SKB_MMAPED = 0x1, /* Packet data is mmaped */ - NETLINK_SKB_TX = 0x2, /* Packet was sent by userspace */ - NETLINK_SKB_DELIVERED = 0x4, /* Packet was delivered */ + NETLINK_SKB_MMAPED = 0x1, /* Packet data is mmaped */ + NETLINK_SKB_TX = 0x2, /* Packet was sent by userspace */ + NETLINK_SKB_DELIVERED = 0x4, /* Packet was delivered */ + NETLINK_SKB_DST = 0x8, /* Dst set in sendto or sendmsg */ }; struct netlink_skb_parms { diff --git a/include/linux/of_address.h b/include/linux/of_address.h index 906ca7681756..c13b8782a4eb 100644 --- a/include/linux/of_address.h +++ b/include/linux/of_address.h @@ -62,6 +62,9 @@ extern int of_pci_range_parser_init(struct of_pci_range_parser *parser, extern struct of_pci_range *of_pci_range_parser_one( struct of_pci_range_parser *parser, struct of_pci_range *range); +extern int of_dma_get_range(struct device_node *np, u64 *dma_addr, + u64 *paddr, u64 *size); +extern bool of_dma_is_coherent(struct device_node *np); #else /* CONFIG_OF_ADDRESS */ static inline struct device_node 
*of_find_matching_node_by_address( struct device_node *from, @@ -89,6 +92,17 @@ static inline struct of_pci_range *of_pci_range_parser_one( { return NULL; } + +static inline int of_dma_get_range(struct device_node *np, u64 *dma_addr, + u64 *paddr, u64 *size) +{ + return -ENODEV; +} + +static inline bool of_dma_is_coherent(struct device_node *np) +{ + return false; +} #endif /* CONFIG_OF_ADDRESS */ #ifdef CONFIG_OF diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h index 95961f0bf62d..0afb48fd449d 100644 --- a/include/linux/percpu-refcount.h +++ b/include/linux/percpu-refcount.h @@ -110,7 +110,7 @@ static inline void percpu_ref_get(struct percpu_ref *ref) pcpu_count = ACCESS_ONCE(ref->pcpu_count); if (likely(REF_STATUS(pcpu_count) == PCPU_REF_PTR)) - __this_cpu_inc(*pcpu_count); + this_cpu_inc(*pcpu_count); else atomic_inc(&ref->count); @@ -139,7 +139,7 @@ static inline bool percpu_ref_tryget(struct percpu_ref *ref) pcpu_count = ACCESS_ONCE(ref->pcpu_count); if (likely(REF_STATUS(pcpu_count) == PCPU_REF_PTR)) { - __this_cpu_inc(*pcpu_count); + this_cpu_inc(*pcpu_count); ret = true; } @@ -164,7 +164,7 @@ static inline void percpu_ref_put(struct percpu_ref *ref) pcpu_count = ACCESS_ONCE(ref->pcpu_count); if (likely(REF_STATUS(pcpu_count) == PCPU_REF_PTR)) - __this_cpu_dec(*pcpu_count); + this_cpu_dec(*pcpu_count); else if (unlikely(atomic_dec_and_test(&ref->count))) ref->release(ref); diff --git a/include/linux/platform_data/ipmmu-vmsa.h b/include/linux/platform_data/ipmmu-vmsa.h new file mode 100644 index 000000000000..5275b3ac6d37 --- /dev/null +++ b/include/linux/platform_data/ipmmu-vmsa.h @@ -0,0 +1,24 @@ +/* + * IPMMU VMSA Platform Data + * + * Copyright (C) 2014 Renesas Electronics Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; version 2 of the License. + */ + +#ifndef __IPMMU_VMSA_H__ +#define __IPMMU_VMSA_H__ + +struct ipmmu_vmsa_master { + const char *name; + unsigned int utlb; +}; + +struct ipmmu_vmsa_platform_data { + const struct ipmmu_vmsa_master *masters; + unsigned int num_masters; +}; + +#endif /* __IPMMU_VMSA_H__ */ diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h index 07d0df6bf768..077904c8b70d 100644 --- a/include/linux/ptrace.h +++ b/include/linux/ptrace.h @@ -5,6 +5,7 @@ #include <linux/sched.h> /* For struct task_struct. */ #include <linux/err.h> /* for IS_ERR_VALUE */ #include <linux/bug.h> /* For BUG_ON. */ +#include <linux/pid_namespace.h> /* For task_active_pid_ns. */ #include <uapi/linux/ptrace.h> /* @@ -129,6 +130,37 @@ static inline void ptrace_event(int event, unsigned long message) } /** + * ptrace_event_pid - possibly stop for a ptrace event notification + * @event: %PTRACE_EVENT_* value to report + * @pid: process identifier for %PTRACE_GETEVENTMSG to return + * + * Check whether @event is enabled and, if so, report @event and @pid + * to the ptrace parent. @pid is reported as the pid_t seen from the + * the ptrace parent's pid namespace. + * + * Called without locks. + */ +static inline void ptrace_event_pid(int event, struct pid *pid) +{ + /* + * FIXME: There's a potential race if a ptracer in a different pid + * namespace than parent attaches between computing message below and + * when we acquire tasklist_lock in ptrace_stop(). If this happens, + * the ptracer will get a bogus pid from PTRACE_GETEVENTMSG. 
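
ptrace_event_pid() reports the new task's pid translated into the ptracer's pid namespace, and the tracer picks that value up with PTRACE_GETEVENTMSG. A minimal userspace sketch of the tracer side (not part of the patch; error handling is mostly omitted):

#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

int main(void)
{
	pid_t child = fork();

	if (child == 0) {
		/* Tracee: let the parent attach, then fork once. */
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);
		if (fork() == 0)
			_exit(0);
		_exit(0);
	}

	int status;
	unsigned long msg = 0;

	waitpid(child, &status, 0);		/* initial SIGSTOP */
	ptrace(PTRACE_SETOPTIONS, child, NULL, (void *)(long)PTRACE_O_TRACEFORK);
	ptrace(PTRACE_CONT, child, NULL, NULL);

	waitpid(child, &status, 0);		/* PTRACE_EVENT_FORK stop */
	if (status >> 8 == (SIGTRAP | (PTRACE_EVENT_FORK << 8))) {
		ptrace(PTRACE_GETEVENTMSG, child, NULL, &msg);
		printf("forked child as seen by the tracer: %lu\n", msg);
	}

	/* Detach from both tracees so they can run to completion. */
	if (msg) {
		waitpid((pid_t)msg, &status, 0);	/* new child's first stop */
		ptrace(PTRACE_DETACH, (pid_t)msg, NULL, NULL);
	}
	ptrace(PTRACE_DETACH, child, NULL, NULL);
	waitpid(child, NULL, 0);
	return 0;
}
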
+ */ + unsigned long message = 0; + struct pid_namespace *ns; + + rcu_read_lock(); + ns = task_active_pid_ns(rcu_dereference(current->parent)); + if (ns) + message = pid_nr_ns(pid, ns); + rcu_read_unlock(); + + ptrace_event(event, message); +} + +/** * ptrace_init_task - initialize ptrace state for a new child * @child: new child task * @ptrace: true if child should be ptrace'd by parent's tracer diff --git a/include/linux/sched.h b/include/linux/sched.h index 8fcd0e6098d9..ea74596014a2 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -2414,9 +2414,6 @@ extern void flush_itimer_signals(void); extern void do_group_exit(int); -extern int allow_signal(int); -extern int disallow_signal(int); - extern int do_execve(struct filename *, const char __user * const __user *, const char __user * const __user *); diff --git a/include/linux/shm.h b/include/linux/shm.h index 1e2cd2e6b540..57d77709fbe2 100644 --- a/include/linux/shm.h +++ b/include/linux/shm.h @@ -3,9 +3,8 @@ #include <asm/page.h> #include <uapi/linux/shm.h> - -#define SHMALL (SHMMAX/PAGE_SIZE*(SHMMNI/16)) /* max shm system wide (pages) */ #include <asm/shmparam.h> + struct shmid_kernel /* private to the kernel */ { struct kern_ipc_perm shm_perm; diff --git a/include/linux/signal.h b/include/linux/signal.h index 2ac423bdb676..c9e65360c49a 100644 --- a/include/linux/signal.h +++ b/include/linux/signal.h @@ -63,11 +63,6 @@ static inline int sigismember(sigset_t *set, int _sig) return 1 & (set->sig[sig / _NSIG_BPW] >> (sig % _NSIG_BPW)); } -static inline int sigfindinword(unsigned long word) -{ - return ffz(~word); -} - #endif /* __HAVE_ARCH_SIG_BITOPS */ static inline int sigisemptyset(sigset_t *set) @@ -289,6 +284,22 @@ extern int get_signal_to_deliver(siginfo_t *info, struct k_sigaction *return_ka, extern void signal_setup_done(int failed, struct ksignal *ksig, int stepping); extern void signal_delivered(int sig, siginfo_t *info, struct k_sigaction *ka, struct pt_regs *regs, int stepping); extern void exit_signals(struct task_struct *tsk); +extern void kernel_sigaction(int, __sighandler_t); + +static inline void allow_signal(int sig) +{ + /* + * Kernel threads handle their own signals. Let the signal code + * know it'll be handled, so that they don't get converted to + * SIGKILL or just silently dropped. 
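
allow_signal() and disallow_signal() become inlines around kernel_sigaction(); their usual consumer is a kernel thread that wants to see a particular signal rather than have it dropped. A hedged sketch of that pattern (the thread function and messages are made up, not taken from any driver):

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/signal.h>
#include <linux/delay.h>
#include <linux/printk.h>

static int example_thread(void *data)
{
	/* Ask the signal code to leave SIGTERM pending for us. */
	allow_signal(SIGTERM);

	while (!kthread_should_stop()) {
		if (signal_pending(current)) {
			pr_info("example_thread: got SIGTERM, exiting\n");
			break;
		}
		msleep_interruptible(1000);
	}
	return 0;
}

/* Started elsewhere with: kthread_run(example_thread, NULL, "example"); */
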
+ */ + kernel_sigaction(sig, (__force __sighandler_t)2); +} + +static inline void disallow_signal(int sig) +{ + kernel_sigaction(sig, SIG_IGN); +} /* * Eventually that'll replace get_signal_to_deliver(); macro for now, diff --git a/include/linux/smp.h b/include/linux/smp.h index 633f5edd7470..34347f26be9b 100644 --- a/include/linux/smp.h +++ b/include/linux/smp.h @@ -13,8 +13,6 @@ #include <linux/init.h> #include <linux/llist.h> -extern void cpu_idle(void); - typedef void (*smp_call_func_t)(void *info); struct call_single_data { struct llist_node llist; diff --git a/include/linux/suspend.h b/include/linux/suspend.h index 91d66fd8dce1..f76994b9396c 100644 --- a/include/linux/suspend.h +++ b/include/linux/suspend.h @@ -327,6 +327,8 @@ extern unsigned long get_safe_page(gfp_t gfp_mask); extern void hibernation_set_ops(const struct platform_hibernation_ops *ops); extern int hibernate(void); extern bool system_entering_hibernation(void); +asmlinkage int swsusp_save(void); +extern struct pbe *restore_pblist; #else /* CONFIG_HIBERNATION */ static inline void register_nosave_region(unsigned long b, unsigned long e) {} static inline void register_nosave_region_late(unsigned long b, unsigned long e) {} diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h index edff2b97b864..c52f827ba6ce 100644 --- a/include/linux/uprobes.h +++ b/include/linux/uprobes.h @@ -32,6 +32,7 @@ struct vm_area_struct; struct mm_struct; struct inode; struct notifier_block; +struct page; #define UPROBE_HANDLER_REMOVE 1 #define UPROBE_HANDLER_MASK 1 @@ -127,6 +128,8 @@ extern int arch_uprobe_exception_notify(struct notifier_block *self, unsigned l extern void arch_uprobe_abort_xol(struct arch_uprobe *aup, struct pt_regs *regs); extern unsigned long arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr, struct pt_regs *regs); extern bool __weak arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs); +extern void __weak arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr, + void *src, unsigned long len); #else /* !CONFIG_UPROBES */ struct uprobes_state { }; diff --git a/include/linux/vfio.h b/include/linux/vfio.h index 81022a52bc34..8ec980b5e3af 100644 --- a/include/linux/vfio.h +++ b/include/linux/vfio.h @@ -86,9 +86,8 @@ extern void vfio_unregister_iommu_driver( * from user space. This allows us to easily determine if the provided * structure is sized to include various fields. */ -#define offsetofend(TYPE, MEMBER) ({ \ - TYPE tmp; \ - offsetof(TYPE, MEMBER) + sizeof(tmp.MEMBER); }) \ +#define offsetofend(TYPE, MEMBER) \ + (offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER)) /* * External user API diff --git a/include/media/adv7604.h b/include/media/adv7604.h index c6b39372eed7..aa1c4477722d 100644 --- a/include/media/adv7604.h +++ b/include/media/adv7604.h @@ -32,14 +32,18 @@ enum adv7604_ain_sel { ADV7604_AIN9_4_5_6_SYNC_2_1 = 4, }; -/* Bus rotation and reordering (IO register 0x04, [7:5]) */ -enum adv7604_op_ch_sel { - ADV7604_OP_CH_SEL_GBR = 0, - ADV7604_OP_CH_SEL_GRB = 1, - ADV7604_OP_CH_SEL_BGR = 2, - ADV7604_OP_CH_SEL_RGB = 3, - ADV7604_OP_CH_SEL_BRG = 4, - ADV7604_OP_CH_SEL_RBG = 5, +/* + * Bus rotation and reordering. This is used to specify component reordering on + * the board and describes the components order on the bus when the ADV7604 + * outputs RGB. 
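
The offsetofend() rewrite above drops the statement-expression temporary in favour of a plain constant expression. A small standalone check of what the macro computes (the struct here is only illustrative, not a real VFIO structure):

#include <stdio.h>
#include <stddef.h>

#define offsetofend(TYPE, MEMBER) \
	(offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

struct region_info_like {
	unsigned int argsz;
	unsigned int flags;
	unsigned int index;
	unsigned long long size;
	unsigned long long offset;
};

int main(void)
{
	/* Bytes needed for a structure that ends at .index: */
	printf("offsetofend(..., index) = %zu\n",
	       offsetofend(struct region_info_like, index));

	/* The new form is an integer constant expression, so it can
	 * size arrays and appear outside of function bodies. */
	char buf[offsetofend(struct region_info_like, flags)];
	printf("sizeof(buf) = %zu\n", sizeof(buf));
	return 0;
}
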
+ */ +enum adv7604_bus_order { + ADV7604_BUS_ORDER_RGB, /* No operation */ + ADV7604_BUS_ORDER_GRB, /* Swap 1-2 */ + ADV7604_BUS_ORDER_RBG, /* Swap 2-3 */ + ADV7604_BUS_ORDER_BGR, /* Swap 1-3 */ + ADV7604_BUS_ORDER_BRG, /* Rotate right */ + ADV7604_BUS_ORDER_GBR, /* Rotate left */ }; /* Input Color Space (IO register 0x02, [7:4]) */ @@ -55,29 +59,11 @@ enum adv7604_inp_color_space { ADV7604_INP_COLOR_SPACE_AUTO = 0xf, }; -/* Select output format (IO register 0x03, [7:0]) */ -enum adv7604_op_format_sel { - ADV7604_OP_FORMAT_SEL_SDR_ITU656_8 = 0x00, - ADV7604_OP_FORMAT_SEL_SDR_ITU656_10 = 0x01, - ADV7604_OP_FORMAT_SEL_SDR_ITU656_12_MODE0 = 0x02, - ADV7604_OP_FORMAT_SEL_SDR_ITU656_12_MODE1 = 0x06, - ADV7604_OP_FORMAT_SEL_SDR_ITU656_12_MODE2 = 0x0a, - ADV7604_OP_FORMAT_SEL_DDR_422_8 = 0x20, - ADV7604_OP_FORMAT_SEL_DDR_422_10 = 0x21, - ADV7604_OP_FORMAT_SEL_DDR_422_12_MODE0 = 0x22, - ADV7604_OP_FORMAT_SEL_DDR_422_12_MODE1 = 0x23, - ADV7604_OP_FORMAT_SEL_DDR_422_12_MODE2 = 0x24, - ADV7604_OP_FORMAT_SEL_SDR_444_24 = 0x40, - ADV7604_OP_FORMAT_SEL_SDR_444_30 = 0x41, - ADV7604_OP_FORMAT_SEL_SDR_444_36_MODE0 = 0x42, - ADV7604_OP_FORMAT_SEL_DDR_444_24 = 0x60, - ADV7604_OP_FORMAT_SEL_DDR_444_30 = 0x61, - ADV7604_OP_FORMAT_SEL_DDR_444_36 = 0x62, - ADV7604_OP_FORMAT_SEL_SDR_ITU656_16 = 0x80, - ADV7604_OP_FORMAT_SEL_SDR_ITU656_20 = 0x81, - ADV7604_OP_FORMAT_SEL_SDR_ITU656_24_MODE0 = 0x82, - ADV7604_OP_FORMAT_SEL_SDR_ITU656_24_MODE1 = 0x86, - ADV7604_OP_FORMAT_SEL_SDR_ITU656_24_MODE2 = 0x8a, +/* Select output format (IO register 0x03, [4:2]) */ +enum adv7604_op_format_mode_sel { + ADV7604_OP_FORMAT_MODE0 = 0x00, + ADV7604_OP_FORMAT_MODE1 = 0x04, + ADV7604_OP_FORMAT_MODE2 = 0x08, }; enum adv7604_drive_strength { @@ -86,6 +72,30 @@ enum adv7604_drive_strength { ADV7604_DR_STR_HIGH = 3, }; +enum adv7604_int1_config { + ADV7604_INT1_CONFIG_OPEN_DRAIN, + ADV7604_INT1_CONFIG_ACTIVE_LOW, + ADV7604_INT1_CONFIG_ACTIVE_HIGH, + ADV7604_INT1_CONFIG_DISABLED, +}; + +enum adv7604_page { + ADV7604_PAGE_IO, + ADV7604_PAGE_AVLINK, + ADV7604_PAGE_CEC, + ADV7604_PAGE_INFOFRAME, + ADV7604_PAGE_ESDP, + ADV7604_PAGE_DPP, + ADV7604_PAGE_AFE, + ADV7604_PAGE_REP, + ADV7604_PAGE_EDID, + ADV7604_PAGE_HDMI, + ADV7604_PAGE_TEST, + ADV7604_PAGE_CP, + ADV7604_PAGE_VDP, + ADV7604_PAGE_MAX, +}; + /* Platform dependent definition */ struct adv7604_platform_data { /* DIS_PWRDNB: 1 if the PWRDNB pin is unused and unconnected */ @@ -94,30 +104,34 @@ struct adv7604_platform_data { /* DIS_CABLE_DET_RST: 1 if the 5V pins are unused and unconnected */ unsigned disable_cable_det_rst:1; + int default_input; + /* Analog input muxing mode */ enum adv7604_ain_sel ain_sel; /* Bus rotation and reordering */ - enum adv7604_op_ch_sel op_ch_sel; + enum adv7604_bus_order bus_order; - /* Select output format */ - enum adv7604_op_format_sel op_format_sel; + /* Select output format mode */ + enum adv7604_op_format_mode_sel op_format_mode_sel; + + /* Configuration of the INT1 pin */ + enum adv7604_int1_config int1_config; /* IO register 0x02 */ unsigned alt_gamma:1; unsigned op_656_range:1; - unsigned rgb_out:1; unsigned alt_data_sat:1; /* IO register 0x05 */ unsigned blank_data:1; unsigned insert_av_codes:1; unsigned replicate_av_codes:1; - unsigned invert_cbcr:1; /* IO register 0x06 */ unsigned inv_vs_pol:1; unsigned inv_hs_pol:1; + unsigned inv_llc_pol:1; /* IO register 0x14 */ enum adv7604_drive_strength dr_str_data; @@ -131,34 +145,22 @@ struct adv7604_platform_data { unsigned hdmi_free_run_mode; /* i2c addresses: 0 == use default */ - u8 i2c_avlink; - u8 
i2c_cec; - u8 i2c_infoframe; - u8 i2c_esdp; - u8 i2c_dpp; - u8 i2c_afe; - u8 i2c_repeater; - u8 i2c_edid; - u8 i2c_hdmi; - u8 i2c_test; - u8 i2c_cp; - u8 i2c_vdp; + u8 i2c_addresses[ADV7604_PAGE_MAX]; }; -enum adv7604_input_port { - ADV7604_INPUT_HDMI_PORT_A, - ADV7604_INPUT_HDMI_PORT_B, - ADV7604_INPUT_HDMI_PORT_C, - ADV7604_INPUT_HDMI_PORT_D, - ADV7604_INPUT_VGA_RGB, - ADV7604_INPUT_VGA_COMP, +enum adv7604_pad { + ADV7604_PAD_HDMI_PORT_A = 0, + ADV7604_PAD_HDMI_PORT_B = 1, + ADV7604_PAD_HDMI_PORT_C = 2, + ADV7604_PAD_HDMI_PORT_D = 3, + ADV7604_PAD_VGA_RGB = 4, + ADV7604_PAD_VGA_COMP = 5, + /* The source pad is either 1 (ADV7611) or 6 (ADV7604) */ + ADV7604_PAD_SOURCE = 6, + ADV7611_PAD_SOURCE = 1, + ADV7604_PAD_MAX = 7, }; -#define ADV7604_EDID_PORT_A 0 -#define ADV7604_EDID_PORT_B 1 -#define ADV7604_EDID_PORT_C 2 -#define ADV7604_EDID_PORT_D 3 - #define V4L2_CID_ADV_RX_ANALOG_SAMPLING_PHASE (V4L2_CID_DV_CLASS_BASE + 0x1000) #define V4L2_CID_ADV_RX_FREE_RUN_COLOR_MANUAL (V4L2_CID_DV_CLASS_BASE + 0x1001) #define V4L2_CID_ADV_RX_FREE_RUN_COLOR (V4L2_CID_DV_CLASS_BASE + 0x1002) diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h index 9fab013eea86..d7465725773d 100644 --- a/include/media/v4l2-subdev.h +++ b/include/media/v4l2-subdev.h @@ -338,12 +338,8 @@ struct v4l2_subdev_video_ops { struct v4l2_dv_timings *timings); int (*g_dv_timings)(struct v4l2_subdev *sd, struct v4l2_dv_timings *timings); - int (*enum_dv_timings)(struct v4l2_subdev *sd, - struct v4l2_enum_dv_timings *timings); int (*query_dv_timings)(struct v4l2_subdev *sd, struct v4l2_dv_timings *timings); - int (*dv_timings_cap)(struct v4l2_subdev *sd, - struct v4l2_dv_timings_cap *cap); int (*enum_mbus_fmt)(struct v4l2_subdev *sd, unsigned int index, enum v4l2_mbus_pixelcode *code); int (*enum_mbus_fsizes)(struct v4l2_subdev *sd, diff --git a/include/net/inetpeer.h b/include/net/inetpeer.h index 6efe73c79c52..058271bde27a 100644 --- a/include/net/inetpeer.h +++ b/include/net/inetpeer.h @@ -177,16 +177,9 @@ static inline void inet_peer_refcheck(const struct inet_peer *p) /* can be called with or without local BH being disabled */ static inline int inet_getid(struct inet_peer *p, int more) { - int old, new; more++; inet_peer_refcheck(p); - do { - old = atomic_read(&p->ip_id_count); - new = old + more; - if (!new) - new = 1; - } while (atomic_cmpxchg(&p->ip_id_count, old, new) != old); - return new; + return atomic_add_return(more, &p->ip_id_count) - more; } #endif /* _NET_INETPEER_H */ diff --git a/include/uapi/linux/shm.h b/include/uapi/linux/shm.h index 78b69413f582..1fbf24ea37fd 100644 --- a/include/uapi/linux/shm.h +++ b/include/uapi/linux/shm.h @@ -8,19 +8,20 @@ #endif /* - * SHMMAX, SHMMNI and SHMALL are upper limits are defaults which can - * be increased by sysctl + * SHMMNI, SHMMAX and SHMALL are default upper limits which can be + * modified by sysctl. The SHMMAX and SHMALL values have been chosen to + * be as large possible without facilitating scenarios where userspace + * causes overflows when adjusting the limits via operations of the form + * "retrieve current limit; add X; update limit". It is therefore not + * advised to make SHMMAX and SHMALL any larger. These limits are + * suitable for both 32 and 64-bit systems. 
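
The inet_getid() rewrite above replaces a cmpxchg retry loop with one atomic_add_return(); subtracting 'more' afterwards yields the pre-increment value, i.e. the first ID of the freshly reserved block. A userspace sketch of the same arithmetic with C11 atomics (names are illustrative, not kernel code):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int ip_id_count;

/* Reserve 'more + 1' consecutive IDs and return the first one, mirroring
 * atomic_add_return(more, &p->ip_id_count) - more in the hunk above. */
static int getid(int more)
{
	more++;
	/* fetch_add returns the old value; add_return returns the new one.
	 * Since old == (old + more) - more, both formulations agree. */
	return atomic_fetch_add(&ip_id_count, more);
}

int main(void)
{
	int first = getid(0);
	int second = getid(0);
	int batch = getid(2);	/* reserves IDs 2, 3 and 4 */

	printf("%d %d %d\n", first, second, batch);	/* prints: 0 1 2 */
	return 0;
}
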
*/ - -#define SHMMAX 0x2000000 /* max shared seg size (bytes) */ #define SHMMIN 1 /* min shared seg size (bytes) */ #define SHMMNI 4096 /* max num of segs system wide */ -#ifndef __KERNEL__ -#define SHMALL (SHMMAX/getpagesize()*(SHMMNI/16)) -#endif +#define SHMMAX (ULONG_MAX - (1UL << 24)) /* max shared seg size (bytes) */ +#define SHMALL (ULONG_MAX - (1UL << 24)) /* max shm system wide (pages) */ #define SHMSEG SHMMNI /* max shared segs per process */ - /* Obsolete, used only for backwards compatibility and libc5 compiles */ struct shmid_ds { struct ipc_perm shm_perm; /* operation perms */ diff --git a/include/uapi/linux/usb/Kbuild b/include/uapi/linux/usb/Kbuild index 6cb4ea826834..4cc4d6e7e523 100644 --- a/include/uapi/linux/usb/Kbuild +++ b/include/uapi/linux/usb/Kbuild @@ -1,6 +1,7 @@ # UAPI Header export list header-y += audio.h header-y += cdc.h +header-y += cdc-wdm.h header-y += ch11.h header-y += ch9.h header-y += functionfs.h diff --git a/include/uapi/linux/usb/cdc-wdm.h b/include/uapi/linux/usb/cdc-wdm.h index f03134feebd6..0dc132e75030 100644 --- a/include/uapi/linux/usb/cdc-wdm.h +++ b/include/uapi/linux/usb/cdc-wdm.h @@ -9,6 +9,8 @@ #ifndef _UAPI__LINUX_USB_CDC_WDM_H #define _UAPI__LINUX_USB_CDC_WDM_H +#include <linux/types.h> + /* * This IOCTL is used to retrieve the wMaxCommand for the device, * defining the message limit for both reading and writing. diff --git a/ipc/compat.c b/ipc/compat.c index 45d035d4cedc..b5ef4f7946dc 100644 --- a/ipc/compat.c +++ b/ipc/compat.c @@ -30,7 +30,7 @@ #include <linux/ptrace.h> #include <linux/mutex.h> -#include <asm/uaccess.h> +#include <linux/uaccess.h> #include "util.h" diff --git a/ipc/compat_mq.c b/ipc/compat_mq.c index 90d29f59cac6..ef6f91cc4490 100644 --- a/ipc/compat_mq.c +++ b/ipc/compat_mq.c @@ -12,7 +12,7 @@ #include <linux/mqueue.h> #include <linux/syscalls.h> -#include <asm/uaccess.h> +#include <linux/uaccess.h> struct compat_mq_attr { compat_long_t mq_flags; /* message queue flags */ diff --git a/ipc/ipc_sysctl.c b/ipc/ipc_sysctl.c index 998d31b230f1..c3f0326e98db 100644 --- a/ipc/ipc_sysctl.c +++ b/ipc/ipc_sysctl.c @@ -18,7 +18,7 @@ #include <linux/msg.h> #include "util.h" -static void *get_ipc(ctl_table *table) +static void *get_ipc(struct ctl_table *table) { char *which = table->data; struct ipc_namespace *ipc_ns = current->nsproxy->ipc_ns; @@ -27,7 +27,7 @@ static void *get_ipc(ctl_table *table) } #ifdef CONFIG_PROC_SYSCTL -static int proc_ipc_dointvec(ctl_table *table, int write, +static int proc_ipc_dointvec(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { struct ctl_table ipc_table; @@ -38,7 +38,7 @@ static int proc_ipc_dointvec(ctl_table *table, int write, return proc_dointvec(&ipc_table, write, buffer, lenp, ppos); } -static int proc_ipc_dointvec_minmax(ctl_table *table, int write, +static int proc_ipc_dointvec_minmax(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { struct ctl_table ipc_table; @@ -49,7 +49,7 @@ static int proc_ipc_dointvec_minmax(ctl_table *table, int write, return proc_dointvec_minmax(&ipc_table, write, buffer, lenp, ppos); } -static int proc_ipc_dointvec_minmax_orphans(ctl_table *table, int write, +static int proc_ipc_dointvec_minmax_orphans(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { struct ipc_namespace *ns = current->nsproxy->ipc_ns; @@ -62,7 +62,7 @@ static int proc_ipc_dointvec_minmax_orphans(ctl_table *table, int write, return err; } -static int 
proc_ipc_callback_dointvec_minmax(ctl_table *table, int write, +static int proc_ipc_callback_dointvec_minmax(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { struct ctl_table ipc_table; @@ -85,7 +85,7 @@ static int proc_ipc_callback_dointvec_minmax(ctl_table *table, int write, return rc; } -static int proc_ipc_doulongvec_minmax(ctl_table *table, int write, +static int proc_ipc_doulongvec_minmax(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { struct ctl_table ipc_table; @@ -119,7 +119,7 @@ static void ipc_auto_callback(int val) } } -static int proc_ipcauto_dointvec_minmax(ctl_table *table, int write, +static int proc_ipcauto_dointvec_minmax(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { struct ctl_table ipc_table; diff --git a/ipc/mq_sysctl.c b/ipc/mq_sysctl.c index 5bb8bfe67149..68d4e953762c 100644 --- a/ipc/mq_sysctl.c +++ b/ipc/mq_sysctl.c @@ -14,7 +14,7 @@ #include <linux/sysctl.h> #ifdef CONFIG_PROC_SYSCTL -static void *get_mq(ctl_table *table) +static void *get_mq(struct ctl_table *table) { char *which = table->data; struct ipc_namespace *ipc_ns = current->nsproxy->ipc_ns; @@ -22,7 +22,7 @@ static void *get_mq(ctl_table *table) return which; } -static int proc_mq_dointvec(ctl_table *table, int write, +static int proc_mq_dointvec(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { struct ctl_table mq_table; @@ -32,7 +32,7 @@ static int proc_mq_dointvec(ctl_table *table, int write, return proc_dointvec(&mq_table, write, buffer, lenp, ppos); } -static int proc_mq_dointvec_minmax(ctl_table *table, int write, +static int proc_mq_dointvec_minmax(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { struct ctl_table mq_table; @@ -53,7 +53,7 @@ static int msg_max_limit_max = HARD_MSGMAX; static int msg_maxsize_limit_min = MIN_MSGSIZEMAX; static int msg_maxsize_limit_max = HARD_MSGSIZEMAX; -static ctl_table mq_sysctls[] = { +static struct ctl_table mq_sysctls[] = { { .procname = "queues_max", .data = &init_ipc_ns.mq_queues_max, @@ -100,7 +100,7 @@ static ctl_table mq_sysctls[] = { {} }; -static ctl_table mq_sysctl_dir[] = { +static struct ctl_table mq_sysctl_dir[] = { { .procname = "mqueue", .mode = 0555, @@ -109,7 +109,7 @@ static ctl_table mq_sysctl_dir[] = { {} }; -static ctl_table mq_sysctl_root[] = { +static struct ctl_table mq_sysctl_root[] = { { .procname = "fs", .mode = 0555, diff --git a/ipc/msg.c b/ipc/msg.c index 649853105a5d..c5d8e3749985 100644 --- a/ipc/msg.c +++ b/ipc/msg.c @@ -39,12 +39,10 @@ #include <linux/ipc_namespace.h> #include <asm/current.h> -#include <asm/uaccess.h> +#include <linux/uaccess.h> #include "util.h" -/* - * one msg_receiver structure for each sleeping receiver: - */ +/* one msg_receiver structure for each sleeping receiver */ struct msg_receiver { struct list_head r_list; struct task_struct *r_tsk; @@ -53,6 +51,12 @@ struct msg_receiver { long r_msgtype; long r_maxsize; + /* + * Mark r_msg volatile so that the compiler + * does not try to get smart and optimize + * it. We rely on this for the lockless + * receive algorithm. 
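
The comment above explains why r_msg is volatile: the receiver re-reads it without holding the queue lock while a sender publishes the message. A rough userspace analogue (not the kernel algorithm itself, which sleeps rather than spins and uses explicit barriers) using C11 acquire/release atomics; build with -pthread:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct msg { int payload; };

/* NULL = not delivered yet; set exactly once by the sender. */
static _Atomic(struct msg *) r_msg;

static void *sender(void *arg)
{
	static struct msg m = { .payload = 42 };

	(void)arg;
	/* Release: the payload is visible before the pointer is published. */
	atomic_store_explicit(&r_msg, &m, memory_order_release);
	return NULL;
}

int main(void)
{
	pthread_t t;
	struct msg *m;

	pthread_create(&t, NULL, sender, NULL);

	/* Acquire: the shared pointer is re-read on every iteration; the
	 * compiler may not cache it in a register, which is the role the
	 * volatile qualifier plays for r_msg in do_msgrcv(). */
	while (!(m = atomic_load_explicit(&r_msg, memory_order_acquire)))
		;

	printf("received payload %d\n", m->payload);
	pthread_join(t, NULL);
	return 0;
}
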
+ */ struct msg_msg *volatile r_msg; }; @@ -70,75 +74,6 @@ struct msg_sender { #define msg_ids(ns) ((ns)->ids[IPC_MSG_IDS]) -static void freeque(struct ipc_namespace *, struct kern_ipc_perm *); -static int newque(struct ipc_namespace *, struct ipc_params *); -#ifdef CONFIG_PROC_FS -static int sysvipc_msg_proc_show(struct seq_file *s, void *it); -#endif - -/* - * Scale msgmni with the available lowmem size: the memory dedicated to msg - * queues should occupy at most 1/MSG_MEM_SCALE of lowmem. - * Also take into account the number of nsproxies created so far. - * This should be done staying within the (MSGMNI , IPCMNI/nr_ipc_ns) range. - */ -void recompute_msgmni(struct ipc_namespace *ns) -{ - struct sysinfo i; - unsigned long allowed; - int nb_ns; - - si_meminfo(&i); - allowed = (((i.totalram - i.totalhigh) / MSG_MEM_SCALE) * i.mem_unit) - / MSGMNB; - nb_ns = atomic_read(&nr_ipc_ns); - allowed /= nb_ns; - - if (allowed < MSGMNI) { - ns->msg_ctlmni = MSGMNI; - return; - } - - if (allowed > IPCMNI / nb_ns) { - ns->msg_ctlmni = IPCMNI / nb_ns; - return; - } - - ns->msg_ctlmni = allowed; -} - -void msg_init_ns(struct ipc_namespace *ns) -{ - ns->msg_ctlmax = MSGMAX; - ns->msg_ctlmnb = MSGMNB; - - recompute_msgmni(ns); - - atomic_set(&ns->msg_bytes, 0); - atomic_set(&ns->msg_hdrs, 0); - ipc_init_ids(&ns->ids[IPC_MSG_IDS]); -} - -#ifdef CONFIG_IPC_NS -void msg_exit_ns(struct ipc_namespace *ns) -{ - free_ipcs(ns, &msg_ids(ns), freeque); - idr_destroy(&ns->ids[IPC_MSG_IDS].ipcs_idr); -} -#endif - -void __init msg_init(void) -{ - msg_init_ns(&init_ipc_ns); - - printk(KERN_INFO "msgmni has been set to %d\n", - init_ipc_ns.msg_ctlmni); - - ipc_init_proc_interface("sysvipc/msg", - " key msqid perms cbytes qnum lspid lrpid uid gid cuid cgid stime rtime ctime\n", - IPC_MSG_IDS, sysvipc_msg_proc_show); -} - static inline struct msg_queue *msq_obtain_object(struct ipc_namespace *ns, int id) { struct kern_ipc_perm *ipcp = ipc_obtain_object(&msg_ids(ns), id); @@ -227,7 +162,7 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params) static inline void ss_add(struct msg_queue *msq, struct msg_sender *mss) { mss->tsk = current; - current->state = TASK_INTERRUPTIBLE; + __set_current_state(TASK_INTERRUPTIBLE); list_add_tail(&mss->list, &msq->q_senders); } @@ -306,15 +241,14 @@ static inline int msg_security(struct kern_ipc_perm *ipcp, int msgflg) SYSCALL_DEFINE2(msgget, key_t, key, int, msgflg) { struct ipc_namespace *ns; - struct ipc_ops msg_ops; + static const struct ipc_ops msg_ops = { + .getnew = newque, + .associate = msg_security, + }; struct ipc_params msg_params; ns = current->nsproxy->ipc_ns; - msg_ops.getnew = newque; - msg_ops.associate = msg_security; - msg_ops.more_checks = NULL; - msg_params.key = key; msg_params.flg = msgflg; @@ -612,23 +546,22 @@ SYSCALL_DEFINE3(msgctl, int, msqid, int, cmd, struct msqid_ds __user *, buf) static int testmsg(struct msg_msg *msg, long type, int mode) { - switch (mode) - { - case SEARCH_ANY: - case SEARCH_NUMBER: + switch (mode) { + case SEARCH_ANY: + case SEARCH_NUMBER: + return 1; + case SEARCH_LESSEQUAL: + if (msg->m_type <= type) return 1; - case SEARCH_LESSEQUAL: - if (msg->m_type <= type) - return 1; - break; - case SEARCH_EQUAL: - if (msg->m_type == type) - return 1; - break; - case SEARCH_NOTEQUAL: - if (msg->m_type != type) - return 1; - break; + break; + case SEARCH_EQUAL: + if (msg->m_type == type) + return 1; + break; + case SEARCH_NOTEQUAL: + if (msg->m_type != type) + return 1; + break; } return 0; } @@ -978,7 +911,7 @@ long 
do_msgrcv(int msqid, void __user *buf, size_t bufsz, long msgtyp, int msgfl else msr_d.r_maxsize = bufsz; msr_d.r_msg = ERR_PTR(-EAGAIN); - current->state = TASK_INTERRUPTIBLE; + __set_current_state(TASK_INTERRUPTIBLE); ipc_unlock_object(&msq->q_perm); rcu_read_unlock(); @@ -1056,6 +989,57 @@ SYSCALL_DEFINE5(msgrcv, int, msqid, struct msgbuf __user *, msgp, size_t, msgsz, return do_msgrcv(msqid, msgp, msgsz, msgtyp, msgflg, do_msg_fill); } +/* + * Scale msgmni with the available lowmem size: the memory dedicated to msg + * queues should occupy at most 1/MSG_MEM_SCALE of lowmem. + * Also take into account the number of nsproxies created so far. + * This should be done staying within the (MSGMNI , IPCMNI/nr_ipc_ns) range. + */ +void recompute_msgmni(struct ipc_namespace *ns) +{ + struct sysinfo i; + unsigned long allowed; + int nb_ns; + + si_meminfo(&i); + allowed = (((i.totalram - i.totalhigh) / MSG_MEM_SCALE) * i.mem_unit) + / MSGMNB; + nb_ns = atomic_read(&nr_ipc_ns); + allowed /= nb_ns; + + if (allowed < MSGMNI) { + ns->msg_ctlmni = MSGMNI; + return; + } + + if (allowed > IPCMNI / nb_ns) { + ns->msg_ctlmni = IPCMNI / nb_ns; + return; + } + + ns->msg_ctlmni = allowed; +} + +void msg_init_ns(struct ipc_namespace *ns) +{ + ns->msg_ctlmax = MSGMAX; + ns->msg_ctlmnb = MSGMNB; + + recompute_msgmni(ns); + + atomic_set(&ns->msg_bytes, 0); + atomic_set(&ns->msg_hdrs, 0); + ipc_init_ids(&ns->ids[IPC_MSG_IDS]); +} + +#ifdef CONFIG_IPC_NS +void msg_exit_ns(struct ipc_namespace *ns) +{ + free_ipcs(ns, &msg_ids(ns), freeque); + idr_destroy(&ns->ids[IPC_MSG_IDS].ipcs_idr); +} +#endif + #ifdef CONFIG_PROC_FS static int sysvipc_msg_proc_show(struct seq_file *s, void *it) { @@ -1080,3 +1064,15 @@ static int sysvipc_msg_proc_show(struct seq_file *s, void *it) msq->q_ctime); } #endif + +void __init msg_init(void) +{ + msg_init_ns(&init_ipc_ns); + + printk(KERN_INFO "msgmni has been set to %d\n", + init_ipc_ns.msg_ctlmni); + + ipc_init_proc_interface("sysvipc/msg", + " key msqid perms cbytes qnum lspid lrpid uid gid cuid cgid stime rtime ctime\n", + IPC_MSG_IDS, sysvipc_msg_proc_show); +} diff --git a/ipc/sem.c b/ipc/sem.c index bee555417312..454f6c6020a8 100644 --- a/ipc/sem.c +++ b/ipc/sem.c @@ -47,8 +47,7 @@ * Thus: Perfect SMP scaling between independent semaphore arrays. * If multiple semaphores in one array are used, then cache line * trashing on the semaphore array spinlock will limit the scaling. - * - semncnt and semzcnt are calculated on demand in count_semncnt() and - * count_semzcnt() + * - semncnt and semzcnt are calculated on demand in count_semcnt() * - the task that performs a successful semop() scans the list of all * sleeping tasks and completes any pending operations that can be fulfilled. * Semaphores are actively given to waiting tasks (necessary for FIFO). @@ -87,7 +86,7 @@ #include <linux/nsproxy.h> #include <linux/ipc_namespace.h> -#include <asm/uaccess.h> +#include <linux/uaccess.h> #include "util.h" /* One semaphore structure for each semaphore in the system. */ @@ -110,6 +109,7 @@ struct sem_queue { int pid; /* process id of requesting process */ int status; /* completion status of operation */ struct sembuf *sops; /* array of pending operations */ + struct sembuf *blocking; /* the operation that blocked */ int nsops; /* number of operations */ int alter; /* does *sops alter the array? 
*/ }; @@ -160,7 +160,7 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it); * sem_array.pending{_alter,_cont}, * sem_array.sem_undo: global sem_lock() for read/write * sem_undo.proc_next: only "current" is allowed to read/write that field. - * + * * sem_array.sem_base[i].pending_{const,alter}: * global or semaphore sem_lock() for read/write */ @@ -564,7 +564,11 @@ static inline int sem_more_checks(struct kern_ipc_perm *ipcp, SYSCALL_DEFINE3(semget, key_t, key, int, nsems, int, semflg) { struct ipc_namespace *ns; - struct ipc_ops sem_ops; + static const struct ipc_ops sem_ops = { + .getnew = newary, + .associate = sem_security, + .more_checks = sem_more_checks, + }; struct ipc_params sem_params; ns = current->nsproxy->ipc_ns; @@ -572,10 +576,6 @@ SYSCALL_DEFINE3(semget, key_t, key, int, nsems, int, semflg) if (nsems < 0 || nsems > ns->sc_semmsl) return -EINVAL; - sem_ops.getnew = newary; - sem_ops.associate = sem_security; - sem_ops.more_checks = sem_more_checks; - sem_params.key = key; sem_params.flg = semflg; sem_params.u.nsems = nsems; @@ -586,21 +586,23 @@ SYSCALL_DEFINE3(semget, key_t, key, int, nsems, int, semflg) /** * perform_atomic_semop - Perform (if possible) a semaphore operation * @sma: semaphore array - * @sops: array with operations that should be checked - * @nsops: number of operations - * @un: undo array - * @pid: pid that did the change + * @q: struct sem_queue that describes the operation * * Returns 0 if the operation was possible. * Returns 1 if the operation is impossible, the caller must sleep. * Negative values are error codes. */ -static int perform_atomic_semop(struct sem_array *sma, struct sembuf *sops, - int nsops, struct sem_undo *un, int pid) +static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q) { - int result, sem_op; + int result, sem_op, nsops, pid; struct sembuf *sop; struct sem *curr; + struct sembuf *sops; + struct sem_undo *un; + + sops = q->sops; + nsops = q->nsops; + un = q->undo; for (sop = sops; sop < sops + nsops; sop++) { curr = sma->sem_base + sop->sem_num; @@ -628,6 +630,7 @@ static int perform_atomic_semop(struct sem_array *sma, struct sembuf *sops, } sop--; + pid = q->pid; while (sop >= sops) { sma->sem_base[sop->sem_num].sempid = pid; sop--; @@ -640,6 +643,8 @@ out_of_range: goto undo; would_block: + q->blocking = sop; + if (sop->sem_flg & IPC_NOWAIT) result = -EAGAIN; else @@ -780,8 +785,7 @@ static int wake_const_ops(struct sem_array *sma, int semnum, q = container_of(walk, struct sem_queue, list); walk = walk->next; - error = perform_atomic_semop(sma, q->sops, q->nsops, - q->undo, q->pid); + error = perform_atomic_semop(sma, q); if (error <= 0) { /* operation completed, remove from queue & wakeup */ @@ -893,8 +897,7 @@ again: if (semnum != -1 && sma->sem_base[semnum].semval == 0) break; - error = perform_atomic_semop(sma, q->sops, q->nsops, - q->undo, q->pid); + error = perform_atomic_semop(sma, q); /* Does q->sleeper still need to sleep? */ if (error > 0) @@ -989,65 +992,74 @@ static void do_smart_update(struct sem_array *sma, struct sembuf *sops, int nsop set_semotime(sma, sops); } -/* The following counts are associated to each semaphore: - * semncnt number of tasks waiting on semval being nonzero - * semzcnt number of tasks waiting on semval being zero - * This model assumes that a task waits on exactly one semaphore. - * Since semaphore operations are to be performed atomically, tasks actually - * wait on a whole sequence of semaphores simultaneously. 
- * The counts we return here are a rough approximation, but still - * warrant that semncnt+semzcnt>0 if the task is on the pending queue. +/* + * check_qop: Test if a queued operation sleeps on the semaphore semnum */ -static int count_semncnt(struct sem_array *sma, ushort semnum) +static int check_qop(struct sem_array *sma, int semnum, struct sem_queue *q, + bool count_zero) { - int semncnt; - struct sem_queue *q; + struct sembuf *sop = q->blocking; - semncnt = 0; - list_for_each_entry(q, &sma->sem_base[semnum].pending_alter, list) { - struct sembuf *sops = q->sops; - BUG_ON(sops->sem_num != semnum); - if ((sops->sem_op < 0) && !(sops->sem_flg & IPC_NOWAIT)) - semncnt++; - } + /* + * Linux always (since 0.99.10) reported a task as sleeping on all + * semaphores. This violates SUS, therefore it was changed to the + * standard compliant behavior. + * Give the administrators a chance to notice that an application + * might misbehave because it relies on the Linux behavior. + */ + pr_info_once("semctl(GETNCNT/GETZCNT) is since 3.16 Single Unix Specification compliant.\n" + "The task %s (%d) triggered the difference, watch for misbehavior.\n", + current->comm, task_pid_nr(current)); - list_for_each_entry(q, &sma->pending_alter, list) { - struct sembuf *sops = q->sops; - int nsops = q->nsops; - int i; - for (i = 0; i < nsops; i++) - if (sops[i].sem_num == semnum - && (sops[i].sem_op < 0) - && !(sops[i].sem_flg & IPC_NOWAIT)) - semncnt++; - } - return semncnt; + if (sop->sem_num != semnum) + return 0; + + if (count_zero && sop->sem_op == 0) + return 1; + if (!count_zero && sop->sem_op < 0) + return 1; + + return 0; } -static int count_semzcnt(struct sem_array *sma, ushort semnum) +/* The following counts are associated to each semaphore: + * semncnt number of tasks waiting on semval being nonzero + * semzcnt number of tasks waiting on semval being zero + * + * Per definition, a task waits only on the semaphore of the first semop + * that cannot proceed, even if additional operation would block, too. + */ +static int count_semcnt(struct sem_array *sma, ushort semnum, + bool count_zero) { - int semzcnt; + struct list_head *l; struct sem_queue *q; + int semcnt; + + semcnt = 0; + /* First: check the simple operations. They are easy to evaluate */ + if (count_zero) + l = &sma->sem_base[semnum].pending_const; + else + l = &sma->sem_base[semnum].pending_alter; - semzcnt = 0; - list_for_each_entry(q, &sma->sem_base[semnum].pending_const, list) { - struct sembuf *sops = q->sops; - BUG_ON(sops->sem_num != semnum); - if ((sops->sem_op == 0) && !(sops->sem_flg & IPC_NOWAIT)) - semzcnt++; + list_for_each_entry(q, l, list) { + /* all task on a per-semaphore list sleep on exactly + * that semaphore + */ + semcnt++; } - list_for_each_entry(q, &sma->pending_const, list) { - struct sembuf *sops = q->sops; - int nsops = q->nsops; - int i; - for (i = 0; i < nsops; i++) - if (sops[i].sem_num == semnum - && (sops[i].sem_op == 0) - && !(sops[i].sem_flg & IPC_NOWAIT)) - semzcnt++; + /* Then: check the complex operations. */ + list_for_each_entry(q, &sma->pending_alter, list) { + semcnt += check_qop(sma, semnum, q, count_zero); } - return semzcnt; + if (count_zero) { + list_for_each_entry(q, &sma->pending_const, list) { + semcnt += check_qop(sma, semnum, q, count_zero); + } + } + return semcnt; } /* Free a semaphore set. 
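
count_semcnt() above now charges a task only to the semaphore its first blocking operation sleeps on, and pr_info_once() flags the first caller that could notice the SUS-compliant behaviour. A small userspace program (not part of the patch) to observe GETNCNT/GETZCNT with one sleeping waiter:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/wait.h>

int main(void)
{
	int id = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);

	if (id < 0) {
		perror("semget");
		return 1;
	}

	/* The semaphore starts at 0, so a decrement must block. */
	if (fork() == 0) {
		struct sembuf dec = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };

		semop(id, &dec, 1);
		_exit(0);
	}

	sleep(1);	/* crude: give the child time to go to sleep */
	printf("GETNCNT=%d GETZCNT=%d\n",
	       semctl(id, 0, GETNCNT), semctl(id, 0, GETZCNT));

	/* Wake the child and clean up. */
	struct sembuf inc = { .sem_num = 0, .sem_op = 1, .sem_flg = 0 };
	semop(id, &inc, 1);
	wait(NULL);
	semctl(id, 0, IPC_RMID);
	return 0;
}
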
freeary() is called with sem_ids.rwsem locked @@ -1161,7 +1173,7 @@ static int semctl_nolock(struct ipc_namespace *ns, int semid, err = security_sem_semctl(NULL, cmd); if (err) return err; - + memset(&seminfo, 0, sizeof(seminfo)); seminfo.semmni = ns->sc_semmni; seminfo.semmns = ns->sc_semmns; @@ -1181,7 +1193,7 @@ static int semctl_nolock(struct ipc_namespace *ns, int semid, } max_id = ipc_get_maxid(&sem_ids(ns)); up_read(&sem_ids(ns).rwsem); - if (copy_to_user(p, &seminfo, sizeof(struct seminfo))) + if (copy_to_user(p, &seminfo, sizeof(struct seminfo))) return -EFAULT; return (max_id < 0) ? 0 : max_id; } @@ -1449,10 +1461,10 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum, err = curr->sempid; goto out_unlock; case GETNCNT: - err = count_semncnt(sma, semnum); + err = count_semcnt(sma, semnum, 0); goto out_unlock; case GETZCNT: - err = count_semzcnt(sma, semnum); + err = count_semcnt(sma, semnum, 1); goto out_unlock; } @@ -1866,8 +1878,13 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops, if (un && un->semid == -1) goto out_unlock_free; - error = perform_atomic_semop(sma, sops, nsops, un, - task_tgid_vnr(current)); + queue.sops = sops; + queue.nsops = nsops; + queue.undo = un; + queue.pid = task_tgid_vnr(current); + queue.alter = alter; + + error = perform_atomic_semop(sma, &queue); if (error == 0) { /* If the operation was successful, then do * the required updates. @@ -1883,12 +1900,6 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops, /* We need to sleep on this operation, so we put the current * task into the pending queue and go to sleep. */ - - queue.sops = sops; - queue.nsops = nsops; - queue.undo = un; - queue.pid = task_tgid_vnr(current); - queue.alter = alter; if (nsops == 1) { struct sem *curr; @@ -2016,7 +2027,7 @@ int copy_semundo(unsigned long clone_flags, struct task_struct *tsk) return error; atomic_inc(&undo_list->refcnt); tsk->sysvsem.undo_list = undo_list; - } else + } else tsk->sysvsem.undo_list = NULL; return 0; diff --git a/ipc/shm.c b/ipc/shm.c index 76459616a7fa..89fc354156cb 100644 --- a/ipc/shm.c +++ b/ipc/shm.c @@ -43,7 +43,7 @@ #include <linux/mount.h> #include <linux/ipc_namespace.h> -#include <asm/uaccess.h> +#include <linux/uaccess.h> #include "util.h" @@ -493,7 +493,11 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params) if (size < SHMMIN || size > ns->shm_ctlmax) return -EINVAL; - if (ns->shm_tot + numpages > ns->shm_ctlall) + if (numpages << PAGE_SHIFT < size) + return -ENOSPC; + + if (ns->shm_tot + numpages < ns->shm_tot || + ns->shm_tot + numpages > ns->shm_ctlall) return -ENOSPC; shp = ipc_rcu_alloc(sizeof(*shp)); @@ -609,15 +613,15 @@ static inline int shm_more_checks(struct kern_ipc_perm *ipcp, SYSCALL_DEFINE3(shmget, key_t, key, size_t, size, int, shmflg) { struct ipc_namespace *ns; - struct ipc_ops shm_ops; + static const struct ipc_ops shm_ops = { + .getnew = newseg, + .associate = shm_security, + .more_checks = shm_more_checks, + }; struct ipc_params shm_params; ns = current->nsproxy->ipc_ns; - shm_ops.getnew = newseg; - shm_ops.associate = shm_security; - shm_ops.more_checks = shm_more_checks; - shm_params.key = key; shm_params.flg = shmflg; shm_params.u.size = size; @@ -694,7 +698,7 @@ static inline unsigned long copy_shminfo_to_user(void __user *buf, struct shminf out.shmmin = in->shmmin; out.shmmni = in->shmmni; out.shmseg = in->shmseg; - out.shmall = in->shmall; + out.shmall = in->shmall; return copy_to_user(buf, &out, sizeof(out)); } @@ -1160,6 
+1164,9 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr, down_write(¤t->mm->mmap_sem); if (addr && !(shmflg & SHM_REMAP)) { err = -EINVAL; + if (addr + size < addr) + goto invalid; + if (find_vma_intersection(current->mm, addr, addr + size)) goto invalid; /* diff --git a/ipc/util.c b/ipc/util.c index 2eb0d1eaa312..27d74e69fd57 100644 --- a/ipc/util.c +++ b/ipc/util.c @@ -183,7 +183,7 @@ void __init ipc_init_proc_interface(const char *path, const char *header, * ipc_findkey - find a key in an ipc identifier set * @ids: ipc identifier set * @key: key to find - * + * * Returns the locked pointer to the ipc structure if found or NULL * otherwise. If key is found ipc points to the owning ipc structure * @@ -317,7 +317,7 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size) * when the key is IPC_PRIVATE. */ static int ipcget_new(struct ipc_namespace *ns, struct ipc_ids *ids, - struct ipc_ops *ops, struct ipc_params *params) + const struct ipc_ops *ops, struct ipc_params *params) { int err; @@ -344,7 +344,7 @@ static int ipcget_new(struct ipc_namespace *ns, struct ipc_ids *ids, */ static int ipc_check_perms(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp, - struct ipc_ops *ops, + const struct ipc_ops *ops, struct ipc_params *params) { int err; @@ -375,7 +375,7 @@ static int ipc_check_perms(struct ipc_namespace *ns, * On success, the ipc id is returned. */ static int ipcget_public(struct ipc_namespace *ns, struct ipc_ids *ids, - struct ipc_ops *ops, struct ipc_params *params) + const struct ipc_ops *ops, struct ipc_params *params) { struct kern_ipc_perm *ipcp; int flg = params->flg; @@ -538,7 +538,7 @@ int ipcperms(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp, short flag) else if (in_group_p(ipcp->cgid) || in_group_p(ipcp->gid)) granted_mode >>= 3; /* is there some bit set in requested_mode but not in granted_mode? */ - if ((requested_mode & ~granted_mode & 0007) && + if ((requested_mode & ~granted_mode & 0007) && !ns_capable(ns->user_ns, CAP_IPC_OWNER)) return -1; @@ -678,7 +678,7 @@ out: * Common routine called by sys_msgget(), sys_semget() and sys_shmget(). */ int ipcget(struct ipc_namespace *ns, struct ipc_ids *ids, - struct ipc_ops *ops, struct ipc_params *params) + const struct ipc_ops *ops, struct ipc_params *params) { if (params->key == IPC_PRIVATE) return ipcget_new(ns, ids, ops, params); diff --git a/ipc/util.h b/ipc/util.h index 9c47d6f6c7b4..1a5a0fcd099c 100644 --- a/ipc/util.h +++ b/ipc/util.h @@ -78,9 +78,9 @@ struct ipc_params { * . routine to call for an extra check if needed */ struct ipc_ops { - int (*getnew) (struct ipc_namespace *, struct ipc_params *); - int (*associate) (struct kern_ipc_perm *, int); - int (*more_checks) (struct kern_ipc_perm *, struct ipc_params *); + int (*getnew)(struct ipc_namespace *, struct ipc_params *); + int (*associate)(struct kern_ipc_perm *, int); + int (*more_checks)(struct kern_ipc_perm *, struct ipc_params *); }; struct seq_file; @@ -142,7 +142,7 @@ struct kern_ipc_perm *ipcctl_pre_down_nolock(struct ipc_namespace *ns, struct ipc64_perm *perm, int extra_perm); #ifndef CONFIG_ARCH_WANT_IPC_PARSE_VERSION - /* On IA-64, we always use the "64-bit version" of the IPC structures. */ +/* On IA-64, we always use the "64-bit version" of the IPC structures. 
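
The newseg() and do_shmat() hunks above add unsigned wrap-around checks for the page-rounded size and for addr + size. A tiny standalone illustration of the pattern (values chosen only to force the overflow):

#include <stdio.h>
#include <limits.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

int main(void)
{
	/* A size so large that rounding it up to whole pages overflows. */
	unsigned long size = ULONG_MAX - 100;
	unsigned long numpages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;

	/* Mirrors: if (numpages << PAGE_SHIFT < size) return -ENOSPC; */
	if (numpages << PAGE_SHIFT < size)
		printf("page count wrapped: reject the segment\n");

	/* Mirrors: if (addr + size < addr) goto invalid; */
	unsigned long addr = ULONG_MAX - PAGE_SIZE;
	if (addr + size < addr)
		printf("addr + size wrapped: reject the attach\n");

	return 0;
}
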
*/ # define ipc_parse_version(cmd) IPC_64 #else int ipc_parse_version(int *cmd); @@ -201,7 +201,7 @@ static inline bool ipc_valid_object(struct kern_ipc_perm *perm) struct kern_ipc_perm *ipc_obtain_object_check(struct ipc_ids *ids, int id); int ipcget(struct ipc_namespace *ns, struct ipc_ids *ids, - struct ipc_ops *ops, struct ipc_params *params); + const struct ipc_ops *ops, struct ipc_params *params); void free_ipcs(struct ipc_namespace *ns, struct ipc_ids *ids, void (*free)(struct ipc_namespace *, struct kern_ipc_perm *)); #endif diff --git a/kernel/acct.c b/kernel/acct.c index 8d6e145138bb..808a86ff229d 100644 --- a/kernel/acct.c +++ b/kernel/acct.c @@ -55,7 +55,7 @@ #include <linux/times.h> #include <linux/syscalls.h> #include <linux/mount.h> -#include <asm/uaccess.h> +#include <linux/uaccess.h> #include <asm/div64.h> #include <linux/blkdev.h> /* sector_div */ #include <linux/pid_namespace.h> @@ -134,7 +134,7 @@ static int check_free_space(struct bsd_acct_struct *acct, struct file *file) spin_lock(&acct_lock); if (file != acct->file) { if (act) - res = act>0; + res = act > 0; goto out; } @@ -262,7 +262,7 @@ SYSCALL_DEFINE1(acct, const char __user *, name) if (name) { struct filename *tmp = getname(name); if (IS_ERR(tmp)) - return (PTR_ERR(tmp)); + return PTR_ERR(tmp); error = acct_on(tmp); putname(tmp); } else { diff --git a/kernel/audit.c b/kernel/audit.c index 47845c57eb19..f30106459a32 100644 --- a/kernel/audit.c +++ b/kernel/audit.c @@ -44,7 +44,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/init.h> -#include <asm/types.h> +#include <linux/types.h> #include <linux/atomic.h> #include <linux/mm.h> #include <linux/export.h> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index d1edc5e6fd03..adcd76a96839 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -1291,14 +1291,8 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe) if (unlikely(!xol_vaddr)) return 0; - /* Initialize the slot */ - copy_to_page(area->page, xol_vaddr, - &uprobe->arch.ixol, sizeof(uprobe->arch.ixol)); - /* - * We probably need flush_icache_user_range() but it needs vma. - * This should work on supported architectures too. - */ - flush_dcache_page(area->page); + arch_uprobe_copy_ixol(area->page, xol_vaddr, + &uprobe->arch.ixol, sizeof(uprobe->arch.ixol)); return xol_vaddr; } @@ -1341,6 +1335,21 @@ static void xol_free_insn_slot(struct task_struct *tsk) } } +void __weak arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr, + void *src, unsigned long len) +{ + /* Initialize the slot */ + copy_to_page(page, vaddr, src, len); + + /* + * We probably need flush_icache_user_range() but it needs vma. + * This should work on most of architectures by default. If + * architecture needs to do something different it can define + * its own version of the function. + */ + flush_dcache_page(page); +} + /** * uprobe_get_swbp_addr - compute address of swbp given post-swbp regs * @regs: Reflects the saved state of the task after it has hit a breakpoint diff --git a/kernel/exit.c b/kernel/exit.c index 750c2e594617..e5c4668f1799 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -313,45 +313,6 @@ kill_orphaned_pgrp(struct task_struct *tsk, struct task_struct *parent) } } -/* - * Let kernel threads use this to say that they allow a certain signal. - * Must not be used if kthread was cloned with CLONE_SIGHAND. 
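
The __weak arch_uprobe_copy_ixol() default above follows the kernel's weak-symbol convention: a generic definition that an architecture replaces simply by providing a strong one. A stripped-down userspace sketch of the linkage trick, split across the three files marked in the comments (file and function names are illustrative):

/* generic.c -- the default, analogous to the __weak definition above */
#include <stdio.h>

void __attribute__((weak)) copy_ixol(void)
{
	printf("generic copy_ixol: memcpy + flush_dcache_page\n");
}

/* arch.c -- an architecture overrides it by defining the same symbol
 * without the weak attribute; the linker prefers the strong definition */
#include <stdio.h>

void copy_ixol(void)
{
	printf("arch copy_ixol: also flush the instruction cache\n");
}

/* main.c */
void copy_ixol(void);

int main(void)
{
	copy_ixol();	/* prints the arch variant when arch.c is linked in */
	return 0;
}
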
- */ -int allow_signal(int sig) -{ - if (!valid_signal(sig) || sig < 1) - return -EINVAL; - - spin_lock_irq(¤t->sighand->siglock); - /* This is only needed for daemonize()'ed kthreads */ - sigdelset(¤t->blocked, sig); - /* - * Kernel threads handle their own signals. Let the signal code - * know it'll be handled, so that they don't get converted to - * SIGKILL or just silently dropped. - */ - current->sighand->action[(sig)-1].sa.sa_handler = (void __user *)2; - recalc_sigpending(); - spin_unlock_irq(¤t->sighand->siglock); - return 0; -} - -EXPORT_SYMBOL(allow_signal); - -int disallow_signal(int sig) -{ - if (!valid_signal(sig) || sig < 1) - return -EINVAL; - - spin_lock_irq(¤t->sighand->siglock); - current->sighand->action[(sig)-1].sa.sa_handler = SIG_IGN; - recalc_sigpending(); - spin_unlock_irq(¤t->sighand->siglock); - return 0; -} - -EXPORT_SYMBOL(disallow_signal); - #ifdef CONFIG_MEMCG /* * A task is exiting. If it owned this mm, find a new owner for the mm. diff --git a/kernel/fork.c b/kernel/fork.c index 0d53eb0dfb6f..d2799d1fc952 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -1606,10 +1606,12 @@ long do_fork(unsigned long clone_flags, */ if (!IS_ERR(p)) { struct completion vfork; + struct pid *pid; trace_sched_process_fork(current, p); - nr = task_pid_vnr(p); + pid = get_task_pid(p, PIDTYPE_PID); + nr = pid_vnr(pid); if (clone_flags & CLONE_PARENT_SETTID) put_user(nr, parent_tidptr); @@ -1624,12 +1626,14 @@ long do_fork(unsigned long clone_flags, /* forking complete and child started to run, tell ptracer */ if (unlikely(trace)) - ptrace_event(trace, nr); + ptrace_event_pid(trace, pid); if (clone_flags & CLONE_VFORK) { if (!wait_for_vfork_done(p, &vfork)) - ptrace_event(PTRACE_EVENT_VFORK_DONE, nr); + ptrace_event_pid(PTRACE_EVENT_VFORK_DONE, pid); } + + put_pid(pid); } else { nr = PTR_ERR(p); } diff --git a/kernel/futex.c b/kernel/futex.c index 89bc9d59ac65..b632b5f3f094 100644 --- a/kernel/futex.c +++ b/kernel/futex.c @@ -743,10 +743,58 @@ void exit_pi_state_list(struct task_struct *curr) raw_spin_unlock_irq(&curr->pi_lock); } +/* + * We need to check the following states: + * + * Waiter | pi_state | pi->owner | uTID | uODIED | ? + * + * [1] NULL | --- | --- | 0 | 0/1 | Valid + * [2] NULL | --- | --- | >0 | 0/1 | Valid + * + * [3] Found | NULL | -- | Any | 0/1 | Invalid + * + * [4] Found | Found | NULL | 0 | 1 | Valid + * [5] Found | Found | NULL | >0 | 1 | Invalid + * + * [6] Found | Found | task | 0 | 1 | Valid + * + * [7] Found | Found | NULL | Any | 0 | Invalid + * + * [8] Found | Found | task | ==taskTID | 0/1 | Valid + * [9] Found | Found | task | 0 | 0 | Invalid + * [10] Found | Found | task | !=taskTID | 0/1 | Invalid + * + * [1] Indicates that the kernel can acquire the futex atomically. We + * came came here due to a stale FUTEX_WAITERS/FUTEX_OWNER_DIED bit. + * + * [2] Valid, if TID does not belong to a kernel thread. If no matching + * thread is found then it indicates that the owner TID has died. + * + * [3] Invalid. The waiter is queued on a non PI futex + * + * [4] Valid state after exit_robust_list(), which sets the user space + * value to FUTEX_WAITERS | FUTEX_OWNER_DIED. + * + * [5] The user space value got manipulated between exit_robust_list() + * and exit_pi_state_list() + * + * [6] Valid state after exit_pi_state_list() which sets the new owner in + * the pi_state but cannot access the user space value. + * + * [7] pi_state->owner can only be NULL when the OWNER_DIED bit is set. 
+ * + * [8] Owner and user space value match + * + * [9] There is no transient state which sets the user space TID to 0 + * except exit_robust_list(), but this is indicated by the + * FUTEX_OWNER_DIED bit. See [4] + * + * [10] There is no transient state which leaves owner and user space + * TID out of sync. + */ static int lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, - union futex_key *key, struct futex_pi_state **ps, - struct task_struct *task) + union futex_key *key, struct futex_pi_state **ps) { struct futex_pi_state *pi_state = NULL; struct futex_q *this, *next; @@ -756,12 +804,13 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, plist_for_each_entry_safe(this, next, &hb->chain, list) { if (match_futex(&this->key, key)) { /* - * Another waiter already exists - bump up - * the refcount and return its pi_state: + * Sanity check the waiter before increasing + * the refcount and attaching to it. */ pi_state = this->pi_state; /* - * Userspace might have messed up non-PI and PI futexes + * Userspace might have messed up non-PI and + * PI futexes [3] */ if (unlikely(!pi_state)) return -EINVAL; @@ -769,44 +818,70 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, WARN_ON(!atomic_read(&pi_state->refcount)); /* - * When pi_state->owner is NULL then the owner died - * and another waiter is on the fly. pi_state->owner - * is fixed up by the task which acquires - * pi_state->rt_mutex. - * - * We do not check for pid == 0 which can happen when - * the owner died and robust_list_exit() cleared the - * TID. + * Handle the owner died case: */ - if (pid && pi_state->owner) { + if (uval & FUTEX_OWNER_DIED) { /* - * Bail out if user space manipulated the - * futex value. + * exit_pi_state_list sets owner to NULL and + * wakes the topmost waiter. The task which + * acquires the pi_state->rt_mutex will fixup + * owner. */ - if (pid != task_pid_vnr(pi_state->owner)) + if (!pi_state->owner) { + /* + * No pi state owner, but the user + * space TID is not 0. Inconsistent + * state. [5] + */ + if (pid) + return -EINVAL; + /* + * Take a ref on the state and + * return. [4] + */ + goto out_state; + } + + /* + * If TID is 0, then either the dying owner + * has not yet executed exit_pi_state_list() + * or some waiter acquired the rtmutex in the + * pi state, but did not yet fixup the TID in + * user space. + * + * Take a ref on the state and return. [6] + */ + if (!pid) + goto out_state; + } else { + /* + * If the owner died bit is not set, + * then the pi_state must have an + * owner. [7] + */ + if (!pi_state->owner) return -EINVAL; } /* - * Protect against a corrupted uval. If uval - * is 0x80000000 then pid is 0 and the waiter - * bit is set. So the deadlock check in the - * calling code has failed and we did not fall - * into the check above due to !pid. + * Bail out if user space manipulated the + * futex value. If pi state exists then the + * owner TID must be the same as the user + * space TID. [9/10] */ - if (task && pi_state->owner == task) - return -EDEADLK; + if (pid != task_pid_vnr(pi_state->owner)) + return -EINVAL; + out_state: atomic_inc(&pi_state->refcount); *ps = pi_state; - return 0; } } /* * We are the first waiter - try to look up the real owner and attach - * the new pi_state to it, but bail out when TID = 0 + * the new pi_state to it, but bail out when TID = 0 [1] */ if (!pid) return -ESRCH; @@ -839,6 +914,9 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, return ret; } + /* + * No existing pi state. First waiter. 
[2] + */ pi_state = alloc_pi_state(); /* @@ -910,10 +988,18 @@ retry: return -EDEADLK; /* - * Surprise - we got the lock. Just return to userspace: + * Surprise - we got the lock, but we do not trust user space at all. */ - if (unlikely(!curval)) - return 1; + if (unlikely(!curval)) { + /* + * We verify whether there is kernel state for this + * futex. If not, we can safely assume, that the 0 -> + * TID transition is correct. If state exists, we do + * not bother to fixup the user space state as it was + * corrupted already. + */ + return futex_top_waiter(hb, key) ? -EINVAL : 1; + } uval = curval; @@ -951,7 +1037,7 @@ retry: * We dont have the lock. Look up the PI state (or create it if * we are the first waiter): */ - ret = lookup_pi_state(uval, hb, key, ps, task); + ret = lookup_pi_state(uval, hb, key, ps); if (unlikely(ret)) { switch (ret) { @@ -1044,6 +1130,7 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this) struct task_struct *new_owner; struct futex_pi_state *pi_state = this->pi_state; u32 uninitialized_var(curval), newval; + int ret = 0; if (!pi_state) return -EINVAL; @@ -1067,23 +1154,19 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this) new_owner = this->task; /* - * We pass it to the next owner. (The WAITERS bit is always - * kept enabled while there is PI state around. We must also - * preserve the owner died bit.) + * We pass it to the next owner. The WAITERS bit is always + * kept enabled while there is PI state around. We cleanup the + * owner died bit, because we are the owner. */ - if (!(uval & FUTEX_OWNER_DIED)) { - int ret = 0; - - newval = FUTEX_WAITERS | task_pid_vnr(new_owner); + newval = FUTEX_WAITERS | task_pid_vnr(new_owner); - if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) - ret = -EFAULT; - else if (curval != uval) - ret = -EINVAL; - if (ret) { - raw_spin_unlock(&pi_state->pi_mutex.wait_lock); - return ret; - } + if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) + ret = -EFAULT; + else if (curval != uval) + ret = -EINVAL; + if (ret) { + raw_spin_unlock(&pi_state->pi_mutex.wait_lock); + return ret; } raw_spin_lock_irq(&pi_state->owner->pi_lock); @@ -1442,6 +1525,13 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, if (requeue_pi) { /* + * Requeue PI only works on two distinct uaddrs. This + * check is only valid for private futexes. See below. + */ + if (uaddr1 == uaddr2) + return -EINVAL; + + /* * requeue_pi requires a pi_state, try to allocate it now * without any locks in case it fails. */ @@ -1479,6 +1569,15 @@ retry: if (unlikely(ret != 0)) goto out_put_key1; + /* + * The check above which compares uaddrs is not sufficient for + * shared futexes. We need to compare the keys: + */ + if (requeue_pi && match_futex(&key1, &key2)) { + ret = -EINVAL; + goto out_put_keys; + } + hb1 = hash_futex(&key1); hb2 = hash_futex(&key2); @@ -1544,7 +1643,7 @@ retry_private: * rereading and handing potential crap to * lookup_pi_state. */ - ret = lookup_pi_state(ret, hb2, &key2, &pi_state, NULL); + ret = lookup_pi_state(ret, hb2, &key2, &pi_state); } switch (ret) { @@ -2327,9 +2426,10 @@ retry: /* * To avoid races, try to do the TID -> 0 atomic transition * again. If it succeeds then we can return without waking - * anyone else up: + * anyone else up. We only try this if neither the waiters nor + * the owner died bit are set. 
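
Most of the PI-futex hardening above is about the futex word itself, which holds the owner's TID plus the FUTEX_WAITERS/FUTEX_OWNER_DIED bits. A minimal, single-threaded userspace sketch (not part of the patch, uncontended case only) that takes and releases a PI futex through the kernel so the TID ends up in the word:

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <linux/futex.h>

static uint32_t futex_word;	/* 0 = unlocked */

static long futex(uint32_t *uaddr, int op, uint32_t val)
{
	return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

int main(void)
{
	pid_t tid = syscall(SYS_gettid);

	/* Uncontended user space code would cmpxchg 0 -> TID itself; going
	 * through the kernel exercises the futex_lock_pi() path instead. */
	if (futex(&futex_word, FUTEX_LOCK_PI, 0) != 0) {
		perror("FUTEX_LOCK_PI");
		return 1;
	}
	printf("owner TID in futex word: %u (gettid() = %d)\n",
	       futex_word & FUTEX_TID_MASK, tid);

	if (futex(&futex_word, FUTEX_UNLOCK_PI, 0) != 0) {
		perror("FUTEX_UNLOCK_PI");
		return 1;
	}
	printf("after unlock the word is %u again\n", futex_word);
	return 0;
}
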
*/ - if (!(uval & FUTEX_OWNER_DIED) && + if (!(uval & ~FUTEX_TID_MASK) && cmpxchg_futex_value_locked(&uval, uaddr, vpid, 0)) goto pi_faulted; /* @@ -2359,11 +2459,9 @@ retry: /* * No waiters - kernel unlocks the futex: */ - if (!(uval & FUTEX_OWNER_DIED)) { - ret = unlock_futex_pi(uaddr, uval); - if (ret == -EFAULT) - goto pi_faulted; - } + ret = unlock_futex_pi(uaddr, uval); + if (ret == -EFAULT) + goto pi_faulted; out_unlock: spin_unlock(&hb->lock); @@ -2525,6 +2623,15 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, if (ret) goto out_key2; + /* + * The check above which compares uaddrs is not sufficient for + * shared futexes. We need to compare the keys: + */ + if (match_futex(&q.key, &key2)) { + ret = -EINVAL; + goto out_put_keys; + } + /* Queue the futex_q, drop the hb lock, wait for wakeup. */ futex_wait_queue_me(hb, &q, to); diff --git a/kernel/kexec.c b/kernel/kexec.c index 28c57069ef68..6748688813d0 100644 --- a/kernel/kexec.c +++ b/kernel/kexec.c @@ -125,8 +125,8 @@ static struct page *kimage_alloc_page(struct kimage *image, unsigned long dest); static int do_kimage_alloc(struct kimage **rimage, unsigned long entry, - unsigned long nr_segments, - struct kexec_segment __user *segments) + unsigned long nr_segments, + struct kexec_segment __user *segments) { size_t segment_bytes; struct kimage *image; @@ -257,13 +257,13 @@ static int kimage_normal_alloc(struct kimage **rimage, unsigned long entry, image->control_code_page = kimage_alloc_control_pages(image, get_order(KEXEC_CONTROL_PAGE_SIZE)); if (!image->control_code_page) { - printk(KERN_ERR "Could not allocate control_code_buffer\n"); + pr_err("Could not allocate control_code_buffer\n"); goto out_free; } image->swap_page = kimage_alloc_control_pages(image, 0); if (!image->swap_page) { - printk(KERN_ERR "Could not allocate swap buffer\n"); + pr_err("Could not allocate swap buffer\n"); goto out_free; } @@ -332,7 +332,7 @@ static int kimage_crash_alloc(struct kimage **rimage, unsigned long entry, image->control_code_page = kimage_alloc_control_pages(image, get_order(KEXEC_CONTROL_PAGE_SIZE)); if (!image->control_code_page) { - printk(KERN_ERR "Could not allocate control_code_buffer\n"); + pr_err("Could not allocate control_code_buffer\n"); goto out_free; } @@ -621,8 +621,8 @@ static void kimage_terminate(struct kimage *image) #define for_each_kimage_entry(image, ptr, entry) \ for (ptr = &image->head; (entry = *ptr) && !(entry & IND_DONE); \ - ptr = (entry & IND_INDIRECTION)? \ - phys_to_virt((entry & PAGE_MASK)): ptr +1) + ptr = (entry & IND_INDIRECTION) ? \ + phys_to_virt((entry & PAGE_MASK)) : ptr + 1) static void kimage_free_entry(kimage_entry_t entry) { @@ -650,8 +650,7 @@ static void kimage_free(struct kimage *image) * done with it. */ ind = entry; - } - else if (entry & IND_SOURCE) + } else if (entry & IND_SOURCE) kimage_free_entry(entry); } /* Free the final indirection page */ @@ -774,8 +773,7 @@ static struct page *kimage_alloc_page(struct kimage *image, addr = old_addr; page = old_page; break; - } - else { + } else { /* Place the page on the destination list I * will use it later. */ @@ -1059,7 +1057,7 @@ COMPAT_SYSCALL_DEFINE4(kexec_load, compat_ulong_t, entry, return -EINVAL; ksegments = compat_alloc_user_space(nr_segments * sizeof(out)); - for (i=0; i < nr_segments; i++) { + for (i = 0; i < nr_segments; i++) { result = copy_from_user(&in, &segments[i], sizeof(in)); if (result) return -EFAULT; @@ -1214,14 +1212,14 @@ void crash_save_cpu(struct pt_regs *regs, int cpu) * squirrelled away. 
ELF notes happen to provide * all of that, so there is no need to invent something new. */ - buf = (u32*)per_cpu_ptr(crash_notes, cpu); + buf = (u32 *)per_cpu_ptr(crash_notes, cpu); if (!buf) return; memset(&prstatus, 0, sizeof(prstatus)); prstatus.pr_pid = current->pid; elf_core_copy_kernel_regs(&prstatus.pr_reg, regs); buf = append_elf_note(buf, KEXEC_CORE_NOTE_NAME, NT_PRSTATUS, - &prstatus, sizeof(prstatus)); + &prstatus, sizeof(prstatus)); final_note(buf); } @@ -1230,8 +1228,7 @@ static int __init crash_notes_memory_init(void) /* Allocate memory for saving cpu registers. */ crash_notes = alloc_percpu(note_buf_t); if (!crash_notes) { - printk("Kexec: Memory allocation for saving cpu register" - " states failed\n"); + pr_warn("Kexec: Memory allocation for saving cpu register states failed\n"); return -ENOMEM; } return 0; @@ -1253,10 +1250,10 @@ subsys_initcall(crash_notes_memory_init); * * The function returns 0 on success and -EINVAL on failure. */ -static int __init parse_crashkernel_mem(char *cmdline, - unsigned long long system_ram, - unsigned long long *crash_size, - unsigned long long *crash_base) +static int __init parse_crashkernel_mem(char *cmdline, + unsigned long long system_ram, + unsigned long long *crash_size, + unsigned long long *crash_base) { char *cur = cmdline, *tmp; @@ -1267,12 +1264,12 @@ static int __init parse_crashkernel_mem(char *cmdline, /* get the start of the range */ start = memparse(cur, &tmp); if (cur == tmp) { - pr_warning("crashkernel: Memory value expected\n"); + pr_warn("crashkernel: Memory value expected\n"); return -EINVAL; } cur = tmp; if (*cur != '-') { - pr_warning("crashkernel: '-' expected\n"); + pr_warn("crashkernel: '-' expected\n"); return -EINVAL; } cur++; @@ -1281,31 +1278,30 @@ static int __init parse_crashkernel_mem(char *cmdline, if (*cur != ':') { end = memparse(cur, &tmp); if (cur == tmp) { - pr_warning("crashkernel: Memory " - "value expected\n"); + pr_warn("crashkernel: Memory value expected\n"); return -EINVAL; } cur = tmp; if (end <= start) { - pr_warning("crashkernel: end <= start\n"); + pr_warn("crashkernel: end <= start\n"); return -EINVAL; } } if (*cur != ':') { - pr_warning("crashkernel: ':' expected\n"); + pr_warn("crashkernel: ':' expected\n"); return -EINVAL; } cur++; size = memparse(cur, &tmp); if (cur == tmp) { - pr_warning("Memory value expected\n"); + pr_warn("Memory value expected\n"); return -EINVAL; } cur = tmp; if (size >= system_ram) { - pr_warning("crashkernel: invalid size\n"); + pr_warn("crashkernel: invalid size\n"); return -EINVAL; } @@ -1323,8 +1319,7 @@ static int __init parse_crashkernel_mem(char *cmdline, cur++; *crash_base = memparse(cur, &tmp); if (cur == tmp) { - pr_warning("Memory value expected " - "after '@'\n"); + pr_warn("Memory value expected after '@'\n"); return -EINVAL; } } @@ -1336,26 +1331,26 @@ static int __init parse_crashkernel_mem(char *cmdline, /* * That function parses "simple" (old) crashkernel command lines like * - * crashkernel=size[@offset] + * crashkernel=size[@offset] * * It returns 0 on success and -EINVAL on failure. 
*/ -static int __init parse_crashkernel_simple(char *cmdline, - unsigned long long *crash_size, - unsigned long long *crash_base) +static int __init parse_crashkernel_simple(char *cmdline, + unsigned long long *crash_size, + unsigned long long *crash_base) { char *cur = cmdline; *crash_size = memparse(cmdline, &cur); if (cmdline == cur) { - pr_warning("crashkernel: memory value expected\n"); + pr_warn("crashkernel: memory value expected\n"); return -EINVAL; } if (*cur == '@') *crash_base = memparse(cur+1, &cur); else if (*cur != ' ' && *cur != '\0') { - pr_warning("crashkernel: unrecognized char\n"); + pr_warn("crashkernel: unrecognized char\n"); return -EINVAL; } @@ -1691,7 +1686,7 @@ int kernel_kexec(void) * CPU hotplug again; so re-enable it here. */ cpu_hotplug_enable(); - printk(KERN_EMERG "Starting new kernel\n"); + pr_emerg("Starting new kernel\n"); machine_shutdown(); } diff --git a/kernel/kmod.c b/kernel/kmod.c index 0ac67a5861c5..8637e041a247 100644 --- a/kernel/kmod.c +++ b/kernel/kmod.c @@ -285,10 +285,7 @@ static int wait_for_helper(void *data) pid_t pid; /* If SIGCLD is ignored sys_wait4 won't populate the status. */ - spin_lock_irq(¤t->sighand->siglock); - current->sighand->action[SIGCHLD-1].sa.sa_handler = SIG_DFL; - spin_unlock_irq(¤t->sighand->siglock); - + kernel_sigaction(SIGCHLD, SIG_DFL); pid = kernel_thread(____call_usermodehelper, sub_info, SIGCHLD); if (pid < 0) { sub_info->retval = pid; diff --git a/kernel/panic.c b/kernel/panic.c index d02fa9fef46a..62e16cef9cc2 100644 --- a/kernel/panic.c +++ b/kernel/panic.c @@ -32,6 +32,7 @@ static unsigned long tainted_mask; static int pause_on_oops; static int pause_on_oops_flag; static DEFINE_SPINLOCK(pause_on_oops_lock); +static bool crash_kexec_post_notifiers; int panic_timeout = CONFIG_PANIC_TIMEOUT; EXPORT_SYMBOL_GPL(panic_timeout); @@ -112,9 +113,11 @@ void panic(const char *fmt, ...) /* * If we have crashed and we have a crash kernel loaded let it handle * everything else. - * Do we want to call this before we try to display a message? + * If we want to run this after calling panic_notifiers, pass + * the "crash_kexec_post_notifiers" option to the kernel. */ - crash_kexec(NULL); + if (!crash_kexec_post_notifiers) + crash_kexec(NULL); /* * Note smp_send_stop is the usual smp shutdown function, which @@ -131,6 +134,15 @@ void panic(const char *fmt, ...) kmsg_dump(KMSG_DUMP_PANIC); + /* + * If you doubt kdump always works fine in any situation, + * "crash_kexec_post_notifiers" offers you a chance to run + * panic_notifiers and dumping kmsg before kdump. + * Note: since some panic_notifiers can make crashed kernel + * more unstable, it can increase risks of the kdump failure too. 
+ */ + crash_kexec(NULL); + bust_spinlocks(0); if (!panic_blink) @@ -472,6 +484,13 @@ EXPORT_SYMBOL(__stack_chk_fail); core_param(panic, panic_timeout, int, 0644); core_param(pause_on_oops, pause_on_oops, int, 0644); +static int __init setup_crash_kexec_post_notifiers(char *s) +{ + crash_kexec_post_notifiers = true; + return 0; +} +early_param("crash_kexec_post_notifiers", setup_crash_kexec_post_notifiers); + static int __init oops_setup(char *s) { if (!s) diff --git a/kernel/profile.c b/kernel/profile.c index cb980f0c731b..54bf5ba26420 100644 --- a/kernel/profile.c +++ b/kernel/profile.c @@ -52,9 +52,9 @@ static DEFINE_MUTEX(profile_flip_mutex); int profile_setup(char *str) { - static char schedstr[] = "schedule"; - static char sleepstr[] = "sleep"; - static char kvmstr[] = "kvm"; + static const char schedstr[] = "schedule"; + static const char sleepstr[] = "sleep"; + static const char kvmstr[] = "kvm"; int par; if (!strncmp(str, sleepstr, strlen(sleepstr))) { @@ -64,12 +64,10 @@ int profile_setup(char *str) str += strlen(sleepstr) + 1; if (get_option(&str, &par)) prof_shift = par; - printk(KERN_INFO - "kernel sleep profiling enabled (shift: %ld)\n", + pr_info("kernel sleep profiling enabled (shift: %ld)\n", prof_shift); #else - printk(KERN_WARNING - "kernel sleep profiling requires CONFIG_SCHEDSTATS\n"); + pr_warn("kernel sleep profiling requires CONFIG_SCHEDSTATS\n"); #endif /* CONFIG_SCHEDSTATS */ } else if (!strncmp(str, schedstr, strlen(schedstr))) { prof_on = SCHED_PROFILING; @@ -77,8 +75,7 @@ int profile_setup(char *str) str += strlen(schedstr) + 1; if (get_option(&str, &par)) prof_shift = par; - printk(KERN_INFO - "kernel schedule profiling enabled (shift: %ld)\n", + pr_info("kernel schedule profiling enabled (shift: %ld)\n", prof_shift); } else if (!strncmp(str, kvmstr, strlen(kvmstr))) { prof_on = KVM_PROFILING; @@ -86,13 +83,12 @@ int profile_setup(char *str) str += strlen(kvmstr) + 1; if (get_option(&str, &par)) prof_shift = par; - printk(KERN_INFO - "kernel KVM profiling enabled (shift: %ld)\n", + pr_info("kernel KVM profiling enabled (shift: %ld)\n", prof_shift); } else if (get_option(&str, &par)) { prof_shift = par; prof_on = CPU_PROFILING; - printk(KERN_INFO "kernel profiling enabled (shift: %ld)\n", + pr_info("kernel profiling enabled (shift: %ld)\n", prof_shift); } return 1; diff --git a/kernel/sched/core.c b/kernel/sched/core.c index caf03e89a068..48e78b657d23 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -3723,7 +3723,7 @@ SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr, if (retval) return retval; - if (attr.sched_policy < 0) + if ((int)attr.sched_policy < 0) return -EINVAL; rcu_read_lock(); @@ -7800,8 +7800,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota) /* restart the period timer (if active) to handle new period expiry */ if (runtime_enabled && cfs_b->timer_active) { /* force a reprogram */ - cfs_b->timer_active = 0; - __start_cfs_bandwidth(cfs_b); + __start_cfs_bandwidth(cfs_b, true); } raw_spin_unlock_irq(&cfs_b->lock); diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index e1574fca03b5..2b8cbf09d1a4 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -508,9 +508,17 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer) struct sched_dl_entity, dl_timer); struct task_struct *p = dl_task_of(dl_se); - struct rq *rq = task_rq(p); + struct rq *rq; +again: + rq = task_rq(p); raw_spin_lock(&rq->lock); + if (rq != task_rq(p)) { + /* Task was moved, 
retrying. */ + raw_spin_unlock(&rq->lock); + goto again; + } + /* * We need to take care of a possible races here. In fact, the * task might have changed its scheduling policy to something diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index c9617b73bcc0..17de1956ddad 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -1745,18 +1745,19 @@ no_join: void task_numa_free(struct task_struct *p) { struct numa_group *grp = p->numa_group; - int i; void *numa_faults = p->numa_faults_memory; + unsigned long flags; + int i; if (grp) { - spin_lock_irq(&grp->lock); + spin_lock_irqsave(&grp->lock, flags); for (i = 0; i < NR_NUMA_HINT_FAULT_STATS * nr_node_ids; i++) grp->faults[i] -= p->numa_faults_memory[i]; grp->total_faults -= p->total_numa_faults; list_del(&p->numa_entry); grp->nr_tasks--; - spin_unlock_irq(&grp->lock); + spin_unlock_irqrestore(&grp->lock, flags); rcu_assign_pointer(p->numa_group, NULL); put_numa_group(grp); } @@ -3179,7 +3180,7 @@ static int assign_cfs_rq_runtime(struct cfs_rq *cfs_rq) */ if (!cfs_b->timer_active) { __refill_cfs_bandwidth_runtime(cfs_b); - __start_cfs_bandwidth(cfs_b); + __start_cfs_bandwidth(cfs_b, false); } if (cfs_b->runtime > 0) { @@ -3358,7 +3359,7 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq) raw_spin_lock(&cfs_b->lock); list_add_tail_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq); if (!cfs_b->timer_active) - __start_cfs_bandwidth(cfs_b); + __start_cfs_bandwidth(cfs_b, false); raw_spin_unlock(&cfs_b->lock); } @@ -3740,7 +3741,7 @@ static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq) } /* requires cfs_b->lock, may release to reprogram timer */ -void __start_cfs_bandwidth(struct cfs_bandwidth *cfs_b) +void __start_cfs_bandwidth(struct cfs_bandwidth *cfs_b, bool force) { /* * The timer may be active because we're trying to set a new bandwidth @@ -3755,7 +3756,7 @@ void __start_cfs_bandwidth(struct cfs_bandwidth *cfs_b) cpu_relax(); raw_spin_lock(&cfs_b->lock); /* if someone else restarted the timer then we're done */ - if (cfs_b->timer_active) + if (!force && cfs_b->timer_active) return; } diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 600e2291a75c..e47679b04d16 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -278,7 +278,7 @@ extern void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b); extern int sched_group_set_shares(struct task_group *tg, unsigned long shares); extern void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b); -extern void __start_cfs_bandwidth(struct cfs_bandwidth *cfs_b); +extern void __start_cfs_bandwidth(struct cfs_bandwidth *cfs_b, bool force); extern void unthrottle_cfs_rq(struct cfs_rq *cfs_rq); extern void free_rt_sched_group(struct task_group *tg); diff --git a/kernel/seccomp.c b/kernel/seccomp.c index b35c21503a36..f6d76bebe69f 100644 --- a/kernel/seccomp.c +++ b/kernel/seccomp.c @@ -39,7 +39,7 @@ * is only needed for handling filters shared across tasks. * @prev: points to a previously installed, or inherited, filter * @len: the number of instructions in the program - * @insns: the BPF program instructions to evaluate + * @insnsi: the BPF program instructions to evaluate * * seccomp_filter objects are organized in a tree linked via the @prev * pointer. 
For any task, it appears to be a singly-linked list starting @@ -220,7 +220,7 @@ static long seccomp_attach_filter(struct sock_fprog *fprog) return -ENOMEM; /* - * Installing a seccomp filter requires that the task have + * Installing a seccomp filter requires that the task has * CAP_SYS_ADMIN in its namespace or be running with no_new_privs. * This avoids scenarios where unprivileged tasks can affect the * behavior of privileged children. diff --git a/kernel/signal.c b/kernel/signal.c index 6e600aaa2af4..a4077e90f19f 100644 --- a/kernel/signal.c +++ b/kernel/signal.c @@ -277,6 +277,7 @@ void task_clear_jobctl_trapping(struct task_struct *task) { if (unlikely(task->jobctl & JOBCTL_TRAPPING)) { task->jobctl &= ~JOBCTL_TRAPPING; + smp_mb(); /* advised by wake_up_bit() */ wake_up_bit(&task->jobctl, JOBCTL_TRAPPING_BIT); } } @@ -705,11 +706,8 @@ void signal_wake_up_state(struct task_struct *t, unsigned int state) * Returns 1 if any signals were found. * * All callers must be holding the siglock. - * - * This version takes a sigset mask and looks at all signals, - * not just those in the first mask word. */ -static int rm_from_queue_full(sigset_t *mask, struct sigpending *s) +static int flush_sigqueue_mask(sigset_t *mask, struct sigpending *s) { struct sigqueue *q, *n; sigset_t m; @@ -727,29 +725,6 @@ static int rm_from_queue_full(sigset_t *mask, struct sigpending *s) } return 1; } -/* - * Remove signals in mask from the pending set and queue. - * Returns 1 if any signals were found. - * - * All callers must be holding the siglock. - */ -static int rm_from_queue(unsigned long mask, struct sigpending *s) -{ - struct sigqueue *q, *n; - - if (!sigtestsetmask(&s->signal, mask)) - return 0; - - sigdelsetmask(&s->signal, mask); - list_for_each_entry_safe(q, n, &s->list, list) { - if (q->info.si_signo < SIGRTMIN && - (mask & sigmask(q->info.si_signo))) { - list_del_init(&q->list); - __sigqueue_free(q); - } - } - return 1; -} static inline int is_si_special(const struct siginfo *info) { @@ -861,6 +836,7 @@ static bool prepare_signal(int sig, struct task_struct *p, bool force) { struct signal_struct *signal = p->signal; struct task_struct *t; + sigset_t flush; if (signal->flags & (SIGNAL_GROUP_EXIT | SIGNAL_GROUP_COREDUMP)) { if (signal->flags & SIGNAL_GROUP_COREDUMP) @@ -872,26 +848,25 @@ static bool prepare_signal(int sig, struct task_struct *p, bool force) /* * This is a stop signal. Remove SIGCONT from all queues. */ - rm_from_queue(sigmask(SIGCONT), &signal->shared_pending); - t = p; - do { - rm_from_queue(sigmask(SIGCONT), &t->pending); - } while_each_thread(p, t); + siginitset(&flush, sigmask(SIGCONT)); + flush_sigqueue_mask(&flush, &signal->shared_pending); + for_each_thread(p, t) + flush_sigqueue_mask(&flush, &t->pending); } else if (sig == SIGCONT) { unsigned int why; /* * Remove all stop signals from all queues, wake all threads. */ - rm_from_queue(SIG_KERNEL_STOP_MASK, &signal->shared_pending); - t = p; - do { + siginitset(&flush, SIG_KERNEL_STOP_MASK); + flush_sigqueue_mask(&flush, &signal->shared_pending); + for_each_thread(p, t) { + flush_sigqueue_mask(&flush, &t->pending); task_clear_jobctl_pending(t, JOBCTL_STOP_PENDING); - rm_from_queue(SIG_KERNEL_STOP_MASK, &t->pending); if (likely(!(t->ptrace & PT_SEIZED))) wake_up_state(t, __TASK_STOPPED); else ptrace_trap_notify(t); - } while_each_thread(p, t); + } /* * Notify the parent with CLD_CONTINUED if we were stopped. 
@@ -2854,7 +2829,7 @@ int do_sigtimedwait(const sigset_t *which, siginfo_t *info, spin_lock_irq(&tsk->sighand->siglock); __set_task_blocked(tsk, &tsk->real_blocked); - siginitset(&tsk->real_blocked, 0); + sigemptyset(&tsk->real_blocked); sig = dequeue_signal(tsk, &mask, info); } spin_unlock_irq(&tsk->sighand->siglock); @@ -3091,18 +3066,39 @@ COMPAT_SYSCALL_DEFINE4(rt_tgsigqueueinfo, } #endif +/* + * For kthreads only, must not be used if cloned with CLONE_SIGHAND + */ +void kernel_sigaction(int sig, __sighandler_t action) +{ + spin_lock_irq(¤t->sighand->siglock); + current->sighand->action[sig - 1].sa.sa_handler = action; + if (action == SIG_IGN) { + sigset_t mask; + + sigemptyset(&mask); + sigaddset(&mask, sig); + + flush_sigqueue_mask(&mask, ¤t->signal->shared_pending); + flush_sigqueue_mask(&mask, ¤t->pending); + recalc_sigpending(); + } + spin_unlock_irq(¤t->sighand->siglock); +} +EXPORT_SYMBOL(kernel_sigaction); + int do_sigaction(int sig, struct k_sigaction *act, struct k_sigaction *oact) { - struct task_struct *t = current; + struct task_struct *p = current, *t; struct k_sigaction *k; sigset_t mask; if (!valid_signal(sig) || sig < 1 || (act && sig_kernel_only(sig))) return -EINVAL; - k = &t->sighand->action[sig-1]; + k = &p->sighand->action[sig-1]; - spin_lock_irq(¤t->sighand->siglock); + spin_lock_irq(&p->sighand->siglock); if (oact) *oact = *k; @@ -3121,21 +3117,20 @@ int do_sigaction(int sig, struct k_sigaction *act, struct k_sigaction *oact) * (for example, SIGCHLD), shall cause the pending signal to * be discarded, whether or not it is blocked" */ - if (sig_handler_ignored(sig_handler(t, sig), sig)) { + if (sig_handler_ignored(sig_handler(p, sig), sig)) { sigemptyset(&mask); sigaddset(&mask, sig); - rm_from_queue_full(&mask, &t->signal->shared_pending); - do { - rm_from_queue_full(&mask, &t->pending); - } while_each_thread(current, t); + flush_sigqueue_mask(&mask, &p->signal->shared_pending); + for_each_thread(p, t) + flush_sigqueue_mask(&mask, &t->pending); } } - spin_unlock_irq(¤t->sighand->siglock); + spin_unlock_irq(&p->sighand->siglock); return 0; } -static int +static int do_sigaltstack (const stack_t __user *uss, stack_t __user *uoss, unsigned long sp) { stack_t oss; diff --git a/kernel/smp.c b/kernel/smp.c index 06d574e42c72..306f8180b0d5 100644 --- a/kernel/smp.c +++ b/kernel/smp.c @@ -185,14 +185,26 @@ void generic_smp_call_function_single_interrupt(void) { struct llist_node *entry; struct call_single_data *csd, *csd_next; + static bool warned; + + entry = llist_del_all(&__get_cpu_var(call_single_queue)); + entry = llist_reverse_order(entry); /* * Shouldn't receive this interrupt on a cpu that is not yet online. */ - WARN_ON_ONCE(!cpu_online(smp_processor_id())); + if (unlikely(!cpu_online(smp_processor_id()) && !warned)) { + warned = true; + WARN(1, "IPI on offline CPU %d\n", smp_processor_id()); - entry = llist_del_all(&__get_cpu_var(call_single_queue)); - entry = llist_reverse_order(entry); + /* + * We don't have to use the _safe() variant here + * because we are not invoking the IPI handlers yet. 
+ */ + llist_for_each_entry(csd, entry, llist) + pr_warn("IPI callback %pS sent to offline CPU\n", + csd->func); + } llist_for_each_entry_safe(csd, csd_next, entry, llist) { csd->func(csd->info); diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 40ce2d983b12..db19e3e2aa4b 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -173,6 +173,13 @@ extern int no_unaligned_warning; #endif #ifdef CONFIG_PROC_SYSCTL + +#define SYSCTL_WRITES_LEGACY -1 +#define SYSCTL_WRITES_WARN 0 +#define SYSCTL_WRITES_STRICT 1 + +static int sysctl_writes_strict = SYSCTL_WRITES_WARN; + static int proc_do_cad_pid(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos); static int proc_taint(struct ctl_table *table, int write, @@ -195,7 +202,7 @@ static int proc_dostring_coredump(struct ctl_table *table, int write, /* Note: sysrq code uses it's own private copy */ static int __sysrq_enabled = CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE; -static int sysrq_sysctl_handler(ctl_table *table, int write, +static int sysrq_sysctl_handler(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { @@ -495,6 +502,15 @@ static struct ctl_table kern_table[] = { .mode = 0644, .proc_handler = proc_taint, }, + { + .procname = "sysctl_writes_strict", + .data = &sysctl_writes_strict, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec_minmax, + .extra1 = &neg_one, + .extra2 = &one, + }, #endif #ifdef CONFIG_LATENCYTOP { @@ -1703,8 +1719,8 @@ int __init sysctl_init(void) #ifdef CONFIG_PROC_SYSCTL -static int _proc_do_string(void* data, int maxlen, int write, - void __user *buffer, +static int _proc_do_string(char *data, int maxlen, int write, + char __user *buffer, size_t *lenp, loff_t *ppos) { size_t len; @@ -1717,21 +1733,30 @@ static int _proc_do_string(void* data, int maxlen, int write, } if (write) { - len = 0; + if (sysctl_writes_strict == SYSCTL_WRITES_STRICT) { + /* Only continue writes not past the end of buffer. */ + len = strlen(data); + if (len > maxlen - 1) + len = maxlen - 1; + + if (*ppos > len) + return 0; + len = *ppos; + } else { + /* Start writing from beginning of buffer. */ + len = 0; + } + + *ppos += *lenp; p = buffer; - while (len < *lenp) { + while ((p - buffer) < *lenp && len < maxlen - 1) { if (get_user(c, p++)) return -EFAULT; if (c == 0 || c == '\n') break; - len++; + data[len++] = c; } - if (len >= maxlen) - len = maxlen-1; - if(copy_from_user(data, buffer, len)) - return -EFAULT; - ((char *) data)[len] = 0; - *ppos += *lenp; + data[len] = 0; } else { len = strlen(data); if (len > maxlen) @@ -1748,10 +1773,10 @@ static int _proc_do_string(void* data, int maxlen, int write, if (len > *lenp) len = *lenp; if (len) - if(copy_to_user(buffer, data, len)) + if (copy_to_user(buffer, data, len)) return -EFAULT; if (len < *lenp) { - if(put_user('\n', ((char __user *) buffer) + len)) + if (put_user('\n', buffer + len)) return -EFAULT; len++; } @@ -1761,6 +1786,14 @@ static int _proc_do_string(void* data, int maxlen, int write, return 0; } +static void warn_sysctl_write(struct ctl_table *table) +{ + pr_warn_once("%s wrote to %s when file position was not 0!\n" + "This will not be supported in the future. 
To silence this\n" + "warning, set kernel.sysctl_writes_strict = -1\n", + current->comm, table->procname); +} + /** * proc_dostring - read a string sysctl * @table: the sysctl table @@ -1781,8 +1814,11 @@ static int _proc_do_string(void* data, int maxlen, int write, int proc_dostring(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { - return _proc_do_string(table->data, table->maxlen, write, - buffer, lenp, ppos); + if (write && *ppos && sysctl_writes_strict == SYSCTL_WRITES_WARN) + warn_sysctl_write(table); + + return _proc_do_string((char *)(table->data), table->maxlen, write, + (char __user *)buffer, lenp, ppos); } static size_t proc_skip_spaces(char **buf) @@ -1956,6 +1992,18 @@ static int __do_proc_dointvec(void *tbl_data, struct ctl_table *table, conv = do_proc_dointvec_conv; if (write) { + if (*ppos) { + switch (sysctl_writes_strict) { + case SYSCTL_WRITES_STRICT: + goto out; + case SYSCTL_WRITES_WARN: + warn_sysctl_write(table); + break; + default: + break; + } + } + if (left > PAGE_SIZE - 1) left = PAGE_SIZE - 1; page = __get_free_page(GFP_TEMPORARY); @@ -2013,6 +2061,7 @@ free: return err ? : -EINVAL; } *lenp -= left; +out: *ppos += *lenp; return err; } @@ -2205,6 +2254,18 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table, int left = *lenp; if (write) { + if (*ppos) { + switch (sysctl_writes_strict) { + case SYSCTL_WRITES_STRICT: + goto out; + case SYSCTL_WRITES_WARN: + warn_sysctl_write(table); + break; + default: + break; + } + } + if (left > PAGE_SIZE - 1) left = PAGE_SIZE - 1; page = __get_free_page(GFP_TEMPORARY); @@ -2260,6 +2321,7 @@ free: return err ? : -EINVAL; } *lenp -= left; +out: *ppos += *lenp; return err; } diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c index bf71b4b2d632..fcc02560fd6b 100644 --- a/kernel/user_namespace.c +++ b/kernel/user_namespace.c @@ -286,7 +286,7 @@ EXPORT_SYMBOL(from_kuid_munged); /** * make_kgid - Map a user-namespace gid pair into a kgid. * @ns: User namespace that the gid is in - * @uid: group identifier + * @gid: group identifier * * Maps a user-namespace gid pair into a kernel internal kgid, * and returns that kgid. 
@@ -482,7 +482,8 @@ static int projid_m_show(struct seq_file *seq, void *v) return 0; } -static void *m_start(struct seq_file *seq, loff_t *ppos, struct uid_gid_map *map) +static void *m_start(struct seq_file *seq, loff_t *ppos, + struct uid_gid_map *map) { struct uid_gid_extent *extent = NULL; loff_t pos = *ppos; @@ -546,7 +547,8 @@ struct seq_operations proc_projid_seq_operations = { .show = projid_m_show, }; -static bool mappings_overlap(struct uid_gid_map *new_map, struct uid_gid_extent *extent) +static bool mappings_overlap(struct uid_gid_map *new_map, + struct uid_gid_extent *extent) { u32 upper_first, lower_first, upper_last, lower_last; unsigned idx; @@ -653,7 +655,7 @@ static ssize_t map_write(struct file *file, const char __user *buf, ret = -EINVAL; pos = kbuf; new_map.nr_extents = 0; - for (;pos; pos = next_line) { + for (; pos; pos = next_line) { extent = &new_map.extent[new_map.nr_extents]; /* Find the end of line and ensure I don't look past it */ @@ -687,13 +689,16 @@ static ssize_t map_write(struct file *file, const char __user *buf, /* Verify we have been given valid starting values */ if ((extent->first == (u32) -1) || - (extent->lower_first == (u32) -1 )) + (extent->lower_first == (u32) -1)) goto out; - /* Verify count is not zero and does not cause the extent to wrap */ + /* Verify count is not zero and does not cause the + * extent to wrap + */ if ((extent->first + extent->count) <= extent->first) goto out; - if ((extent->lower_first + extent->count) <= extent->lower_first) + if ((extent->lower_first + extent->count) <= + extent->lower_first) goto out; /* Do the ranges in extent overlap any previous extents? */ @@ -751,7 +756,8 @@ out: return ret; } -ssize_t proc_uid_map_write(struct file *file, const char __user *buf, size_t size, loff_t *ppos) +ssize_t proc_uid_map_write(struct file *file, const char __user *buf, + size_t size, loff_t *ppos) { struct seq_file *seq = file->private_data; struct user_namespace *ns = seq->private; @@ -767,7 +773,8 @@ ssize_t proc_uid_map_write(struct file *file, const char __user *buf, size_t siz &ns->uid_map, &ns->parent->uid_map); } -ssize_t proc_gid_map_write(struct file *file, const char __user *buf, size_t size, loff_t *ppos) +ssize_t proc_gid_map_write(struct file *file, const char __user *buf, + size_t size, loff_t *ppos) { struct seq_file *seq = file->private_data; struct user_namespace *ns = seq->private; @@ -783,7 +790,8 @@ ssize_t proc_gid_map_write(struct file *file, const char __user *buf, size_t siz &ns->gid_map, &ns->parent->gid_map); } -ssize_t proc_projid_map_write(struct file *file, const char __user *buf, size_t size, loff_t *ppos) +ssize_t proc_projid_map_write(struct file *file, const char __user *buf, + size_t size, loff_t *ppos) { struct seq_file *seq = file->private_data; struct user_namespace *ns = seq->private; @@ -800,7 +808,7 @@ ssize_t proc_projid_map_write(struct file *file, const char __user *buf, size_t &ns->projid_map, &ns->parent->projid_map); } -static bool new_idmap_permitted(const struct file *file, +static bool new_idmap_permitted(const struct file *file, struct user_namespace *ns, int cap_setid, struct uid_gid_map *new_map) { @@ -811,8 +819,7 @@ static bool new_idmap_permitted(const struct file *file, kuid_t uid = make_kuid(ns->parent, id); if (uid_eq(uid, file->f_cred->fsuid)) return true; - } - else if (cap_setid == CAP_SETGID) { + } else if (cap_setid == CAP_SETGID) { kgid_t gid = make_kgid(ns->parent, id); if (gid_eq(gid, file->f_cred->fsgid)) return true; diff --git a/kernel/utsname_sysctl.c 
b/kernel/utsname_sysctl.c index 6fbe811c7ad1..c8eac43267e9 100644 --- a/kernel/utsname_sysctl.c +++ b/kernel/utsname_sysctl.c @@ -17,7 +17,7 @@ #ifdef CONFIG_PROC_SYSCTL -static void *get_uts(ctl_table *table, int write) +static void *get_uts(struct ctl_table *table, int write) { char *which = table->data; struct uts_namespace *uts_ns; @@ -32,7 +32,7 @@ static void *get_uts(ctl_table *table, int write) return which; } -static void put_uts(ctl_table *table, int write, void *which) +static void put_uts(struct ctl_table *table, int write, void *which) { if (!write) up_read(&uts_sem); @@ -44,7 +44,7 @@ static void put_uts(ctl_table *table, int write, void *which) * Special case of dostring for the UTS structure. This has locks * to observe. Should this be in kernel/sys.c ???? */ -static int proc_do_uts_string(ctl_table *table, int write, +static int proc_do_uts_string(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { struct ctl_table uts_table; diff --git a/lib/Makefile b/lib/Makefile index 0cd7b68e1382..74a32dc49a93 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -148,7 +148,8 @@ obj-$(CONFIG_GENERIC_NET_UTILS) += net_utils.o obj-$(CONFIG_STMP_DEVICE) += stmp_device.o -libfdt_files = fdt.o fdt_ro.o fdt_wip.o fdt_rw.o fdt_sw.o fdt_strerror.o +libfdt_files = fdt.o fdt_ro.o fdt_wip.o fdt_rw.o fdt_sw.o fdt_strerror.o \ + fdt_empty_tree.o $(foreach file, $(libfdt_files), \ $(eval CFLAGS_$(file) = -I$(src)/../scripts/dtc/libfdt)) lib-$(CONFIG_LIBFDT) += $(libfdt_files) diff --git a/lib/fdt_empty_tree.c b/lib/fdt_empty_tree.c new file mode 100644 index 000000000000..5d30c58150ad --- /dev/null +++ b/lib/fdt_empty_tree.c @@ -0,0 +1,2 @@ +#include <linux/libfdt_env.h> +#include "../scripts/dtc/libfdt/fdt_empty_tree.c" diff --git a/lib/idr.c b/lib/idr.c index 2642fa8e424d..39158abebad1 100644 --- a/lib/idr.c +++ b/lib/idr.c @@ -18,12 +18,6 @@ * pointer or what ever, we treat it as a (void *). You can pass this * id to a user for him to pass back at a later time. You then pass * that id to this code and it returns your pointer. - - * You can release ids at any time. When all ids are released, most of - * the memory is returned (we keep MAX_IDR_FREE) in a local pool so we - * don't need to go to the memory "store" during an id allocate, just - * so you don't need to be too concerned about locking and conflicts - * with the slab allocator. */ #ifndef TEST // to test in user space... 
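The idr changes below add explicit upper-bound checks (id > idr_max(...)) and make
idr_replace() report an unallocated id as ERR_PTR(-ENOENT) instead of
ERR_PTR(-EINVAL). For readers without the API in their head, here is a kernel-style
sketch of the calling convention those return values feed into (illustrative
in-kernel code, not a standalone program; idr_store_swap() is a made-up helper name):

    #include <linux/idr.h>
    #include <linux/err.h>
    #include <linux/gfp.h>

    /* Allocate an id for @ptr, swap the stored pointer for @new_ptr,
     * then release the id again. */
    static int idr_store_swap(struct idr *ids, void *ptr, void *new_ptr)
    {
            void *old;
            int id;

            id = idr_alloc(ids, ptr, 0, 0, GFP_KERNEL);
            if (id < 0)
                    return id;

            /* With the change below, an id that was never allocated (or
             * lies beyond the current tree height) yields -ENOENT here
             * rather than -EINVAL. */
            old = idr_replace(ids, new_ptr, id);
            if (IS_ERR(old)) {
                    idr_remove(ids, id);
                    return PTR_ERR(old);
            }

            idr_remove(ids, id);
            return 0;
    }
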
@@ -151,7 +145,7 @@ static void idr_layer_rcu_free(struct rcu_head *head) static inline void free_layer(struct idr *idr, struct idr_layer *p) { - if (idr->hint && idr->hint == p) + if (idr->hint == p) RCU_INIT_POINTER(idr->hint, NULL); call_rcu(&p->rcu_head, idr_layer_rcu_free); } @@ -249,7 +243,7 @@ static int sub_alloc(struct idr *idp, int *starting_id, struct idr_layer **pa, id = (id | ((1 << (IDR_BITS * l)) - 1)) + 1; /* if already at the top layer, we need to grow */ - if (id >= 1 << (idp->layers * IDR_BITS)) { + if (id > idr_max(idp->layers)) { *starting_id = id; return -EAGAIN; } @@ -562,6 +556,11 @@ void idr_remove(struct idr *idp, int id) if (id < 0) return; + if (id > idr_max(idp->layers)) { + idr_remove_warning(id); + return; + } + sub_remove(idp, (idp->layers - 1) * IDR_BITS, id); if (idp->top && idp->top->count == 1 && (idp->layers > 1) && idp->top->ary[0]) { @@ -579,16 +578,6 @@ void idr_remove(struct idr *idp, int id) bitmap_clear(to_free->bitmap, 0, IDR_SIZE); free_layer(idp, to_free); } - while (idp->id_free_cnt >= MAX_IDR_FREE) { - p = get_from_free_list(idp); - /* - * Note: we don't call the rcu callback here, since the only - * layers that fall into the freelist are those that have been - * preallocated. - */ - kmem_cache_free(idr_layer_cache, p); - } - return; } EXPORT_SYMBOL(idr_remove); @@ -809,14 +798,12 @@ void *idr_replace(struct idr *idp, void *ptr, int id) p = idp->top; if (!p) - return ERR_PTR(-EINVAL); - - n = (p->layer+1) * IDR_BITS; + return ERR_PTR(-ENOENT); - if (id >= (1 << n)) - return ERR_PTR(-EINVAL); + if (id > idr_max(p->layer + 1)) + return ERR_PTR(-ENOENT); - n -= IDR_BITS; + n = p->layer * IDR_BITS; while ((n > 0) && p) { p = p->ary[(id >> n) & IDR_MASK]; n -= IDR_BITS; @@ -1027,6 +1014,9 @@ void ida_remove(struct ida *ida, int id) int n; struct ida_bitmap *bitmap; + if (idr_id > idr_max(ida->idr.layers)) + goto err; + /* clear full bits while looking up the leaf idr_layer */ while ((shift > 0) && p) { n = (idr_id >> shift) & IDR_MASK; @@ -1042,7 +1032,7 @@ void ida_remove(struct ida *ida, int id) __clear_bit(n, p->bitmap); bitmap = (void *)p->ary[n]; - if (!test_bit(offset, bitmap->bitmap)) + if (!bitmap || !test_bit(offset, bitmap->bitmap)) goto err; /* update bitmap and remove it if empty */ diff --git a/lib/nlattr.c b/lib/nlattr.c index 0c5778752aec..9c3e85ff0a6c 100644 --- a/lib/nlattr.c +++ b/lib/nlattr.c @@ -203,8 +203,8 @@ int nla_parse(struct nlattr **tb, int maxtype, const struct nlattr *head, } if (unlikely(rem > 0)) - printk(KERN_WARNING "netlink: %d bytes leftover after parsing " - "attributes.\n", rem); + pr_warn_ratelimited("netlink: %d bytes leftover after parsing attributes in process `%s'.\n", + rem, current->comm); err = 0; errout: diff --git a/lib/radix-tree.c b/lib/radix-tree.c index d64815651e90..3291a8e37490 100644 --- a/lib/radix-tree.c +++ b/lib/radix-tree.c @@ -27,6 +27,7 @@ #include <linux/radix-tree.h> #include <linux/percpu.h> #include <linux/slab.h> +#include <linux/kmemleak.h> #include <linux/notifier.h> #include <linux/cpu.h> #include <linux/string.h> @@ -200,6 +201,11 @@ radix_tree_node_alloc(struct radix_tree_root *root) rtp->nodes[rtp->nr - 1] = NULL; rtp->nr--; } + /* + * Update the allocation stack trace as this is more useful + * for debugging. 
+ */ + kmemleak_update_trace(ret); } if (ret == NULL) ret = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask); diff --git a/mm/fremap.c b/mm/fremap.c index 2c5646f11f41..72b8fa361433 100644 --- a/mm/fremap.c +++ b/mm/fremap.c @@ -149,6 +149,10 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size, int has_write_lock = 0; vm_flags_t vm_flags = 0; + pr_warn_once("%s (%d) uses deprecated remap_file_pages() syscall. " + "See Documentation/vm/remap_file_pages.txt.\n", + current->comm, current->pid); + if (prot) return err; /* diff --git a/mm/kmemleak-test.c b/mm/kmemleak-test.c index ff0d9779cec8..dcdcadb69533 100644 --- a/mm/kmemleak-test.c +++ b/mm/kmemleak-test.c @@ -18,6 +18,8 @@ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ +#define pr_fmt(fmt) "kmemleak: " fmt + #include <linux/init.h> #include <linux/kernel.h> #include <linux/module.h> @@ -50,25 +52,25 @@ static int __init kmemleak_test_init(void) printk(KERN_INFO "Kmemleak testing\n"); /* make some orphan objects */ - pr_info("kmemleak: kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL)); - pr_info("kmemleak: kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL)); - pr_info("kmemleak: kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL)); - pr_info("kmemleak: kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL)); - pr_info("kmemleak: kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL)); - pr_info("kmemleak: kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL)); - pr_info("kmemleak: kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL)); - pr_info("kmemleak: kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL)); + pr_info("kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL)); + pr_info("kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL)); + pr_info("kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL)); + pr_info("kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL)); + pr_info("kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL)); + pr_info("kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL)); + pr_info("kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL)); + pr_info("kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL)); #ifndef CONFIG_MODULES - pr_info("kmemleak: kmem_cache_alloc(files_cachep) = %p\n", + pr_info("kmem_cache_alloc(files_cachep) = %p\n", kmem_cache_alloc(files_cachep, GFP_KERNEL)); - pr_info("kmemleak: kmem_cache_alloc(files_cachep) = %p\n", + pr_info("kmem_cache_alloc(files_cachep) = %p\n", kmem_cache_alloc(files_cachep, GFP_KERNEL)); #endif - pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64)); - pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64)); - pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64)); - pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64)); - pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64)); + pr_info("vmalloc(64) = %p\n", vmalloc(64)); + pr_info("vmalloc(64) = %p\n", vmalloc(64)); + pr_info("vmalloc(64) = %p\n", vmalloc(64)); + pr_info("vmalloc(64) = %p\n", vmalloc(64)); + pr_info("vmalloc(64) = %p\n", vmalloc(64)); /* * Add elements to a list. 
They should only appear as orphan @@ -76,7 +78,7 @@ static int __init kmemleak_test_init(void) */ for (i = 0; i < 10; i++) { elem = kzalloc(sizeof(*elem), GFP_KERNEL); - pr_info("kmemleak: kzalloc(sizeof(*elem)) = %p\n", elem); + pr_info("kzalloc(sizeof(*elem)) = %p\n", elem); if (!elem) return -ENOMEM; INIT_LIST_HEAD(&elem->list); @@ -85,7 +87,7 @@ static int __init kmemleak_test_init(void) for_each_possible_cpu(i) { per_cpu(kmemleak_test_pointer, i) = kmalloc(129, GFP_KERNEL); - pr_info("kmemleak: kmalloc(129) = %p\n", + pr_info("kmalloc(129) = %p\n", per_cpu(kmemleak_test_pointer, i)); } diff --git a/mm/kmemleak.c b/mm/kmemleak.c index 736ade31d1dc..3cda50c1e394 100644 --- a/mm/kmemleak.c +++ b/mm/kmemleak.c @@ -387,7 +387,7 @@ static void dump_object_info(struct kmemleak_object *object) pr_notice(" min_count = %d\n", object->min_count); pr_notice(" count = %d\n", object->count); pr_notice(" flags = 0x%lx\n", object->flags); - pr_notice(" checksum = %d\n", object->checksum); + pr_notice(" checksum = %u\n", object->checksum); pr_notice(" backtrace:\n"); print_stack_trace(&trace, 4); } @@ -990,6 +990,40 @@ void __ref kmemleak_free_percpu(const void __percpu *ptr) EXPORT_SYMBOL_GPL(kmemleak_free_percpu); /** + * kmemleak_update_trace - update object allocation stack trace + * @ptr: pointer to beginning of the object + * + * Override the object allocation stack trace for cases where the actual + * allocation place is not always useful. + */ +void __ref kmemleak_update_trace(const void *ptr) +{ + struct kmemleak_object *object; + unsigned long flags; + + pr_debug("%s(0x%p)\n", __func__, ptr); + + if (!kmemleak_enabled || IS_ERR_OR_NULL(ptr)) + return; + + object = find_and_get_object((unsigned long)ptr, 1); + if (!object) { +#ifdef DEBUG + kmemleak_warn("Updating stack trace for unknown object at %p\n", + ptr); +#endif + return; + } + + spin_lock_irqsave(&object->lock, flags); + object->trace_len = __save_stack_trace(object->trace); + spin_unlock_irqrestore(&object->lock, flags); + + put_object(object); +} +EXPORT_SYMBOL(kmemleak_update_trace); + +/** * kmemleak_not_leak - mark an allocated object as false positive * @ptr: pointer to beginning of the object * diff --git a/mm/memblock.c b/mm/memblock.c index 0aa0d2b07624..6d2f219a48b0 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -691,6 +691,7 @@ int __init_memblock memblock_free(phys_addr_t base, phys_addr_t size) (unsigned long long)base + size - 1, (void *)_RET_IP_); + kmemleak_free_part(__va(base), size); return memblock_remove_range(&memblock.reserved, base, size); } @@ -1043,9 +1044,14 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size, align = SMP_CACHE_BYTES; found = memblock_find_in_range_node(size, align, start, end, nid); - if (found && !memblock_reserve(found, size)) + if (found && !memblock_reserve(found, size)) { + /* + * The min_count is set to 0 so that memblock allocations are + * never reported as leaks. 
+ */ + kmemleak_alloc(__va(found), size, 0, 0); return found; - + } return 0; } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index a500cb0594c4..a9559b91603c 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -676,9 +676,11 @@ static void disarm_static_keys(struct mem_cgroup *memcg) static void drain_all_stock_async(struct mem_cgroup *memcg); static struct mem_cgroup_per_zone * -mem_cgroup_zoneinfo(struct mem_cgroup *memcg, int nid, int zid) +mem_cgroup_zone_zoneinfo(struct mem_cgroup *memcg, struct zone *zone) { - VM_BUG_ON((unsigned)nid >= nr_node_ids); + int nid = zone_to_nid(zone); + int zid = zone_idx(zone); + return &memcg->nodeinfo[nid]->zoneinfo[zid]; } @@ -688,12 +690,12 @@ struct cgroup_subsys_state *mem_cgroup_css(struct mem_cgroup *memcg) } static struct mem_cgroup_per_zone * -page_cgroup_zoneinfo(struct mem_cgroup *memcg, struct page *page) +mem_cgroup_page_zoneinfo(struct mem_cgroup *memcg, struct page *page) { int nid = page_to_nid(page); int zid = page_zonenum(page); - return mem_cgroup_zoneinfo(memcg, nid, zid); + return &memcg->nodeinfo[nid]->zoneinfo[zid]; } static struct mem_cgroup_tree_per_zone * @@ -711,11 +713,9 @@ soft_limit_tree_from_page(struct page *page) return &soft_limit_tree.rb_tree_per_node[nid]->rb_tree_per_zone[zid]; } -static void -__mem_cgroup_insert_exceeded(struct mem_cgroup *memcg, - struct mem_cgroup_per_zone *mz, - struct mem_cgroup_tree_per_zone *mctz, - unsigned long long new_usage_in_excess) +static void __mem_cgroup_insert_exceeded(struct mem_cgroup_per_zone *mz, + struct mem_cgroup_tree_per_zone *mctz, + unsigned long long new_usage_in_excess) { struct rb_node **p = &mctz->rb_root.rb_node; struct rb_node *parent = NULL; @@ -745,10 +745,8 @@ __mem_cgroup_insert_exceeded(struct mem_cgroup *memcg, mz->on_tree = true; } -static void -__mem_cgroup_remove_exceeded(struct mem_cgroup *memcg, - struct mem_cgroup_per_zone *mz, - struct mem_cgroup_tree_per_zone *mctz) +static void __mem_cgroup_remove_exceeded(struct mem_cgroup_per_zone *mz, + struct mem_cgroup_tree_per_zone *mctz) { if (!mz->on_tree) return; @@ -756,13 +754,11 @@ __mem_cgroup_remove_exceeded(struct mem_cgroup *memcg, mz->on_tree = false; } -static void -mem_cgroup_remove_exceeded(struct mem_cgroup *memcg, - struct mem_cgroup_per_zone *mz, - struct mem_cgroup_tree_per_zone *mctz) +static void mem_cgroup_remove_exceeded(struct mem_cgroup_per_zone *mz, + struct mem_cgroup_tree_per_zone *mctz) { spin_lock(&mctz->lock); - __mem_cgroup_remove_exceeded(memcg, mz, mctz); + __mem_cgroup_remove_exceeded(mz, mctz); spin_unlock(&mctz->lock); } @@ -772,16 +768,14 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page) unsigned long long excess; struct mem_cgroup_per_zone *mz; struct mem_cgroup_tree_per_zone *mctz; - int nid = page_to_nid(page); - int zid = page_zonenum(page); - mctz = soft_limit_tree_from_page(page); + mctz = soft_limit_tree_from_page(page); /* * Necessary to update all ancestors when hierarchy is used. * because their event counter is not touched. 
*/ for (; memcg; memcg = parent_mem_cgroup(memcg)) { - mz = mem_cgroup_zoneinfo(memcg, nid, zid); + mz = mem_cgroup_page_zoneinfo(memcg, page); excess = res_counter_soft_limit_excess(&memcg->res); /* * We have to update the tree if mz is on RB-tree or @@ -791,12 +785,12 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page) spin_lock(&mctz->lock); /* if on-tree, remove it */ if (mz->on_tree) - __mem_cgroup_remove_exceeded(memcg, mz, mctz); + __mem_cgroup_remove_exceeded(mz, mctz); /* * Insert again. mz->usage_in_excess will be updated. * If excess is 0, no tree ops. */ - __mem_cgroup_insert_exceeded(memcg, mz, mctz, excess); + __mem_cgroup_insert_exceeded(mz, mctz, excess); spin_unlock(&mctz->lock); } } @@ -804,15 +798,15 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page) static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg) { - int node, zone; - struct mem_cgroup_per_zone *mz; struct mem_cgroup_tree_per_zone *mctz; + struct mem_cgroup_per_zone *mz; + int nid, zid; - for_each_node(node) { - for (zone = 0; zone < MAX_NR_ZONES; zone++) { - mz = mem_cgroup_zoneinfo(memcg, node, zone); - mctz = soft_limit_tree_node_zone(node, zone); - mem_cgroup_remove_exceeded(memcg, mz, mctz); + for_each_node(nid) { + for (zid = 0; zid < MAX_NR_ZONES; zid++) { + mz = &memcg->nodeinfo[nid]->zoneinfo[zid]; + mctz = soft_limit_tree_node_zone(nid, zid); + mem_cgroup_remove_exceeded(mz, mctz); } } } @@ -835,7 +829,7 @@ retry: * we will to add it back at the end of reclaim to its correct * position in the tree. */ - __mem_cgroup_remove_exceeded(mz->memcg, mz, mctz); + __mem_cgroup_remove_exceeded(mz, mctz); if (!res_counter_soft_limit_excess(&mz->memcg->res) || !css_tryget(&mz->memcg->css)) goto retry; @@ -946,8 +940,7 @@ static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg, __this_cpu_add(memcg->stat->nr_page_events, nr_pages); } -unsigned long -mem_cgroup_get_lru_size(struct lruvec *lruvec, enum lru_list lru) +unsigned long mem_cgroup_get_lru_size(struct lruvec *lruvec, enum lru_list lru) { struct mem_cgroup_per_zone *mz; @@ -955,46 +948,38 @@ mem_cgroup_get_lru_size(struct lruvec *lruvec, enum lru_list lru) return mz->lru_size[lru]; } -static unsigned long -mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg, int nid, int zid, - unsigned int lru_mask) +static unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg, + int nid, + unsigned int lru_mask) { - struct mem_cgroup_per_zone *mz; - enum lru_list lru; - unsigned long ret = 0; - - mz = mem_cgroup_zoneinfo(memcg, nid, zid); - - for_each_lru(lru) { - if (BIT(lru) & lru_mask) - ret += mz->lru_size[lru]; - } - return ret; -} - -static unsigned long -mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg, - int nid, unsigned int lru_mask) -{ - u64 total = 0; + unsigned long nr = 0; int zid; - for (zid = 0; zid < MAX_NR_ZONES; zid++) - total += mem_cgroup_zone_nr_lru_pages(memcg, - nid, zid, lru_mask); + VM_BUG_ON((unsigned)nid >= nr_node_ids); - return total; + for (zid = 0; zid < MAX_NR_ZONES; zid++) { + struct mem_cgroup_per_zone *mz; + enum lru_list lru; + + for_each_lru(lru) { + if (!(BIT(lru) & lru_mask)) + continue; + mz = &memcg->nodeinfo[nid]->zoneinfo[zid]; + nr += mz->lru_size[lru]; + } + } + return nr; } static unsigned long mem_cgroup_nr_lru_pages(struct mem_cgroup *memcg, unsigned int lru_mask) { + unsigned long nr = 0; int nid; - u64 total = 0; for_each_node_state(nid, N_MEMORY) - total += mem_cgroup_node_nr_lru_pages(memcg, nid, lru_mask); - return 
total; + nr += mem_cgroup_node_nr_lru_pages(memcg, nid, lru_mask); + return nr; } static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg, @@ -1242,11 +1227,9 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root, int uninitialized_var(seq); if (reclaim) { - int nid = zone_to_nid(reclaim->zone); - int zid = zone_idx(reclaim->zone); struct mem_cgroup_per_zone *mz; - mz = mem_cgroup_zoneinfo(root, nid, zid); + mz = mem_cgroup_zone_zoneinfo(root, reclaim->zone); iter = &mz->reclaim_iter[reclaim->priority]; if (prev && reclaim->generation != iter->generation) { iter->last_visited = NULL; @@ -1353,7 +1336,7 @@ struct lruvec *mem_cgroup_zone_lruvec(struct zone *zone, goto out; } - mz = mem_cgroup_zoneinfo(memcg, zone_to_nid(zone), zone_idx(zone)); + mz = mem_cgroup_zone_zoneinfo(memcg, zone); lruvec = &mz->lruvec; out: /* @@ -1412,7 +1395,7 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct zone *zone) if (!PageLRU(page) && !PageCgroupUsed(pc) && memcg != root_mem_cgroup) pc->mem_cgroup = memcg = root_mem_cgroup; - mz = page_cgroup_zoneinfo(memcg, page); + mz = mem_cgroup_page_zoneinfo(memcg, page); lruvec = &mz->lruvec; out: /* @@ -1550,7 +1533,7 @@ static unsigned long mem_cgroup_margin(struct mem_cgroup *memcg) int mem_cgroup_swappiness(struct mem_cgroup *memcg) { /* root ? */ - if (!css_parent(&memcg->css)) + if (mem_cgroup_disabled() || !css_parent(&memcg->css)) return vm_swappiness; return memcg->swappiness; @@ -4597,7 +4580,7 @@ unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order, break; } while (1); } - __mem_cgroup_remove_exceeded(mz->memcg, mz, mctz); + __mem_cgroup_remove_exceeded(mz, mctz); excess = res_counter_soft_limit_excess(&mz->memcg->res); /* * One school of thought says that we should not add @@ -4608,7 +4591,7 @@ unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order, * term TODO. */ /* If excess == 0, no tree ops */ - __mem_cgroup_insert_exceeded(mz->memcg, mz, mctz, excess); + __mem_cgroup_insert_exceeded(mz, mctz, excess); spin_unlock(&mctz->lock); css_put(&mz->memcg->css); loop++; @@ -5305,7 +5288,7 @@ static int memcg_stat_show(struct seq_file *m, void *v) for_each_online_node(nid) for (zid = 0; zid < MAX_NR_ZONES; zid++) { - mz = mem_cgroup_zoneinfo(memcg, nid, zid); + mz = &memcg->nodeinfo[nid]->zoneinfo[zid]; rstat = &mz->lruvec.reclaim_stat; recent_rotated[0] += rstat->recent_rotated[0]; diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 16bc9fa42998..284974230459 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -65,6 +65,8 @@ kernel is not always grateful with that. 
*/ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/mempolicy.h> #include <linux/mm.h> #include <linux/highmem.h> @@ -91,6 +93,7 @@ #include <linux/ctype.h> #include <linux/mm_inline.h> #include <linux/mmu_notifier.h> +#include <linux/printk.h> #include <asm/tlbflush.h> #include <asm/uaccess.h> @@ -526,9 +529,13 @@ static void queue_pages_hugetlb_pmd_range(struct vm_area_struct *vma, int nid; struct page *page; spinlock_t *ptl; + pte_t entry; ptl = huge_pte_lock(hstate_vma(vma), vma->vm_mm, (pte_t *)pmd); - page = pte_page(huge_ptep_get((pte_t *)pmd)); + entry = huge_ptep_get((pte_t *)pmd); + if (!pte_present(entry)) + goto unlock; + page = pte_page(entry); nid = page_to_nid(page); if (node_isset(nid, *nodes) == !!(flags & MPOL_MF_INVERT)) goto unlock; @@ -2645,7 +2652,7 @@ void __init numa_policy_init(void) node_set(prefer, interleave_nodes); if (do_set_mempolicy(MPOL_INTERLEAVE, 0, &interleave_nodes)) - printk("numa_policy_init: interleaving failed\n"); + pr_err("%s: interleaving failed\n", __func__); check_numabalancing_enable(); } diff --git a/mm/mempool.c b/mm/mempool.c index 455d468c3a5d..e209c98c7203 100644 --- a/mm/mempool.c +++ b/mm/mempool.c @@ -10,6 +10,7 @@ #include <linux/mm.h> #include <linux/slab.h> +#include <linux/kmemleak.h> #include <linux/export.h> #include <linux/mempool.h> #include <linux/blkdev.h> @@ -222,6 +223,11 @@ repeat_alloc: spin_unlock_irqrestore(&pool->lock, flags); /* paired with rmb in mempool_free(), read comment there */ smp_wmb(); + /* + * Update the allocation stack trace as this is more useful + * for debugging. + */ + kmemleak_update_trace(element); return element; } diff --git a/mm/mmap.c b/mm/mmap.c index ced5efcdd4b6..129b847d30cc 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -6,6 +6,8 @@ * Address space accounting code <alan@lxorguk.ukuu.org.uk> */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/kernel.h> #include <linux/slab.h> #include <linux/backing-dev.h> @@ -37,6 +39,7 @@ #include <linux/sched/sysctl.h> #include <linux/notifier.h> #include <linux/memory.h> +#include <linux/printk.h> #include <asm/uaccess.h> #include <asm/cacheflush.h> @@ -361,20 +364,20 @@ static int browse_rb(struct rb_root *root) struct vm_area_struct *vma; vma = rb_entry(nd, struct vm_area_struct, vm_rb); if (vma->vm_start < prev) { - printk("vm_start %lx prev %lx\n", vma->vm_start, prev); + pr_info("vm_start %lx prev %lx\n", vma->vm_start, prev); bug = 1; } if (vma->vm_start < pend) { - printk("vm_start %lx pend %lx\n", vma->vm_start, pend); + pr_info("vm_start %lx pend %lx\n", vma->vm_start, pend); bug = 1; } if (vma->vm_start > vma->vm_end) { - printk("vm_end %lx < vm_start %lx\n", + pr_info("vm_end %lx < vm_start %lx\n", vma->vm_end, vma->vm_start); bug = 1; } if (vma->rb_subtree_gap != vma_compute_subtree_gap(vma)) { - printk("free gap %lx, correct %lx\n", + pr_info("free gap %lx, correct %lx\n", vma->rb_subtree_gap, vma_compute_subtree_gap(vma)); bug = 1; @@ -388,7 +391,7 @@ static int browse_rb(struct rb_root *root) for (nd = pn; nd; nd = rb_prev(nd)) j++; if (i != j) { - printk("backwards %d, forwards %d\n", j, i); + pr_info("backwards %d, forwards %d\n", j, i); bug = 1; } return bug ? 
-1 : i; @@ -423,17 +426,17 @@ static void validate_mm(struct mm_struct *mm) i++; } if (i != mm->map_count) { - printk("map_count %d vm_next %d\n", mm->map_count, i); + pr_info("map_count %d vm_next %d\n", mm->map_count, i); bug = 1; } if (highest_address != mm->highest_vm_end) { - printk("mm->highest_vm_end %lx, found %lx\n", + pr_info("mm->highest_vm_end %lx, found %lx\n", mm->highest_vm_end, highest_address); bug = 1; } i = browse_rb(&mm->mm_rb); if (i != mm->map_count) { - printk("map_count %d rb %d\n", mm->map_count, i); + pr_info("map_count %d rb %d\n", mm->map_count, i); bug = 1; } BUG_ON(bug); @@ -3280,7 +3283,7 @@ static struct notifier_block reserve_mem_nb = { static int __meminit init_reserve_notifier(void) { if (register_hotmemory_notifier(&reserve_mem_nb)) - printk("Failed registering memory add/remove notifier for admin reserve"); + pr_err("Failed registering memory add/remove notifier for admin reserve\n"); return 0; } diff --git a/mm/nobootmem.c b/mm/nobootmem.c index 04a9d94333a5..7ed58602e71b 100644 --- a/mm/nobootmem.c +++ b/mm/nobootmem.c @@ -197,7 +197,6 @@ unsigned long __init free_all_bootmem(void) void __init free_bootmem_node(pg_data_t *pgdat, unsigned long physaddr, unsigned long size) { - kmemleak_free_part(__va(physaddr), size); memblock_free(physaddr, size); } @@ -212,7 +211,6 @@ void __init free_bootmem_node(pg_data_t *pgdat, unsigned long physaddr, */ void __init free_bootmem(unsigned long addr, unsigned long size) { - kmemleak_free_part(__va(addr), size); memblock_free(addr, size); } diff --git a/mm/nommu.c b/mm/nommu.c index 85f8d6698d48..b78e3a8f5ee7 100644 --- a/mm/nommu.c +++ b/mm/nommu.c @@ -13,6 +13,8 @@ * Copyright (c) 2007-2010 Paul Mundt <lethal@linux-sh.org> */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/export.h> #include <linux/mm.h> #include <linux/vmacache.h> @@ -32,6 +34,7 @@ #include <linux/syscalls.h> #include <linux/audit.h> #include <linux/sched/sysctl.h> +#include <linux/printk.h> #include <asm/uaccess.h> #include <asm/tlb.h> @@ -1246,7 +1249,7 @@ error_free: return ret; enomem: - printk("Allocation of length %lu from process %d (%s) failed\n", + pr_err("Allocation of length %lu from process %d (%s) failed\n", len, current->pid, current->comm); show_free_areas(0); return -ENOMEM; diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 533fa60c9ac1..7d9a4ef0a078 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1664,7 +1664,7 @@ void throttle_vm_writeout(gfp_t gfp_mask) /* * sysctl handler for /proc/sys/vm/dirty_writeback_centisecs */ -int dirty_writeback_centisecs_handler(ctl_table *table, int write, +int dirty_writeback_centisecs_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos) { proc_dointvec(table, write, buffer, length, ppos); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index a59bdb653958..4f59fa29eda8 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -3389,7 +3389,7 @@ early_param("numa_zonelist_order", setup_numa_zonelist_order); /* * sysctl handler for numa_zonelist_order */ -int numa_zonelist_order_handler(ctl_table *table, int write, +int numa_zonelist_order_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos) { @@ -5805,7 +5805,7 @@ module_init(init_per_zone_wmark_min) * that we can call two helper functions whenever min_free_kbytes * changes. 
*/ -int min_free_kbytes_sysctl_handler(ctl_table *table, int write, +int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos) { int rc; @@ -5822,7 +5822,7 @@ int min_free_kbytes_sysctl_handler(ctl_table *table, int write, } #ifdef CONFIG_NUMA -int sysctl_min_unmapped_ratio_sysctl_handler(ctl_table *table, int write, +int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos) { struct zone *zone; @@ -5838,7 +5838,7 @@ int sysctl_min_unmapped_ratio_sysctl_handler(ctl_table *table, int write, return 0; } -int sysctl_min_slab_ratio_sysctl_handler(ctl_table *table, int write, +int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos) { struct zone *zone; @@ -5864,7 +5864,7 @@ int sysctl_min_slab_ratio_sysctl_handler(ctl_table *table, int write, * minimum watermarks. The lowmem reserve ratio can only make sense * if in function of the boot time zone sizes. */ -int lowmem_reserve_ratio_sysctl_handler(ctl_table *table, int write, +int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos) { proc_dointvec_minmax(table, write, buffer, length, ppos); @@ -5877,7 +5877,7 @@ int lowmem_reserve_ratio_sysctl_handler(ctl_table *table, int write, * cpu. It is the fraction of total pages in each zone that a hot per cpu * pagelist can have before it gets flushed back to buddy allocator. */ -int percpu_pagelist_fraction_sysctl_handler(ctl_table *table, int write, +int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos) { struct zone *zone; diff --git a/mm/rmap.c b/mm/rmap.c index ea8e20d75b29..bf05fc872ae8 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1591,10 +1591,9 @@ void __put_anon_vma(struct anon_vma *anon_vma) { struct anon_vma *root = anon_vma->root; + anon_vma_free(anon_vma); if (root != anon_vma && atomic_dec_and_test(&root->refcount)) anon_vma_free(root); - - anon_vma_free(anon_vma); } static struct anon_vma *rmap_walk_anon_lock(struct page *page, diff --git a/mm/slub.c b/mm/slub.c index fdf0fe4da9a9..b2b047327d76 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1726,7 +1726,7 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node, struct kmem_cache_cpu *c) { void *object; - int searchnode = (node == NUMA_NO_NODE) ? numa_node_id() : node; + int searchnode = (node == NUMA_NO_NODE) ? numa_mem_id() : node; object = get_partial_node(s, get_node(s, searchnode), c, flags); if (object || node != NUMA_NO_NODE) diff --git a/mm/vmscan.c b/mm/vmscan.c index 9149444f947d..71f23c0c1090 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -11,6 +11,8 @@ * Multiqueue VM started 5.8.00, Rik van Riel. */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/mm.h> #include <linux/module.h> #include <linux/gfp.h> @@ -43,6 +45,7 @@ #include <linux/sysctl.h> #include <linux/oom.h> #include <linux/prefetch.h> +#include <linux/printk.h> #include <asm/tlbflush.h> #include <asm/div64.h> @@ -83,6 +86,9 @@ struct scan_control { /* Scan (total_size >> priority) pages at once */ int priority; + /* anon vs. file LRUs scanning "ratio" */ + int swappiness; + /* * The memory cgroup that hit its limit and as a result is the * primary target of this reclaim invocation. 
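
The mm/vmscan.c hunk above adds a swappiness field to struct scan_control, and the hunks that follow switch get_scan_count() from vmscan_swappiness() to sc->swappiness so that memcg reclaim carries the cgroup's own setting. Below is a minimal stand-alone C sketch of how such a value biases the anonymous vs. file scan targets; it borrows only the anon_prio/file_prio split visible in the hunks, and the helper name split_scan() plus the omission of the recent_rotated/recent_scanned weighting are this example's simplifications, not kernel code.

	#include <stdio.h>

	struct scan_control {
		int priority;		/* scan (lru_size >> priority) pages per pass */
		int swappiness;		/* 0 = avoid swapping anon, 100 = treat equally */
	};

	static void split_scan(const struct scan_control *sc,
			       unsigned long anon_lru, unsigned long file_lru,
			       unsigned long *scan_anon, unsigned long *scan_file)
	{
		/* With swappiness at 100, anon and file get the same priority. */
		int anon_prio = sc->swappiness;
		int file_prio = 200 - anon_prio;

		*scan_anon = (anon_lru >> sc->priority) * anon_prio / 200;
		*scan_file = (file_lru >> sc->priority) * file_prio / 200;
	}

	int main(void)
	{
		struct scan_control sc = { .priority = 2, .swappiness = 60 };
		unsigned long scan_anon, scan_file;

		split_scan(&sc, 1UL << 20, 1UL << 20, &scan_anon, &scan_file);
		printf("scan %lu anon pages, %lu file pages\n", scan_anon, scan_file);
		return 0;
	}

With the default swappiness of 60, the sketch scans fewer anonymous than file pages, which is the bias the real get_scan_count() code expresses through the same 200-point split.
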
@@ -477,7 +483,7 @@ static pageout_t pageout(struct page *page, struct address_space *mapping, if (page_has_private(page)) { if (try_to_free_buffers(page)) { ClearPageDirty(page); - printk("%s: orphaned page\n", __func__); + pr_info("%s: orphaned page\n", __func__); return PAGE_CLEAN; } } @@ -1845,13 +1851,6 @@ static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan, return shrink_inactive_list(nr_to_scan, lruvec, sc, lru); } -static int vmscan_swappiness(struct scan_control *sc) -{ - if (global_reclaim(sc)) - return vm_swappiness; - return mem_cgroup_swappiness(sc->target_mem_cgroup); -} - enum scan_balance { SCAN_EQUAL, SCAN_FRACT, @@ -1912,7 +1911,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, * using the memory controller's swap limit feature would be * too expensive. */ - if (!global_reclaim(sc) && !vmscan_swappiness(sc)) { + if (!global_reclaim(sc) && !sc->swappiness) { scan_balance = SCAN_FILE; goto out; } @@ -1922,7 +1921,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, * system is close to OOM, scan both anon and file equally * (unless the swappiness setting disagrees with swapping). */ - if (!sc->priority && vmscan_swappiness(sc)) { + if (!sc->priority && sc->swappiness) { scan_balance = SCAN_EQUAL; goto out; } @@ -1965,7 +1964,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, * With swappiness at 100, anonymous and file have the same priority. * This scanning priority is essentially the inverse of IO cost. */ - anon_prio = vmscan_swappiness(sc); + anon_prio = sc->swappiness; file_prio = 200 - anon_prio; /* @@ -2265,6 +2264,7 @@ static void shrink_zone(struct zone *zone, struct scan_control *sc) lruvec = mem_cgroup_zone_lruvec(zone, memcg); + sc->swappiness = mem_cgroup_swappiness(memcg); shrink_lruvec(lruvec, sc); /* @@ -2731,6 +2731,7 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *memcg, .may_swap = !noswap, .order = 0, .priority = 0, + .swappiness = mem_cgroup_swappiness(memcg), .target_mem_cgroup = memcg, }; struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg); @@ -3372,7 +3373,10 @@ static int kswapd(void *p) } } + tsk->flags &= ~(PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD); current->reclaim_state = NULL; + lockdep_clear_current_reclaim_state(); + return 0; } diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c index 8c7ca811de6e..96b66fd30f96 100644 --- a/net/batman-adv/multicast.c +++ b/net/batman-adv/multicast.c @@ -415,7 +415,7 @@ batadv_mcast_forw_ipv4_node_get(struct batadv_priv *bat_priv) hlist_for_each_entry_rcu(tmp_orig_node, &bat_priv->mcast.want_all_ipv4_list, mcast_want_all_ipv4_node) { - if (!atomic_inc_not_zero(&orig_node->refcount)) + if (!atomic_inc_not_zero(&tmp_orig_node->refcount)) continue; orig_node = tmp_orig_node; @@ -442,7 +442,7 @@ batadv_mcast_forw_ipv6_node_get(struct batadv_priv *bat_priv) hlist_for_each_entry_rcu(tmp_orig_node, &bat_priv->mcast.want_all_ipv6_list, mcast_want_all_ipv6_node) { - if (!atomic_inc_not_zero(&orig_node->refcount)) + if (!atomic_inc_not_zero(&tmp_orig_node->refcount)) continue; orig_node = tmp_orig_node; @@ -493,7 +493,7 @@ batadv_mcast_forw_unsnoop_node_get(struct batadv_priv *bat_priv) hlist_for_each_entry_rcu(tmp_orig_node, &bat_priv->mcast.want_all_unsnoopables_list, mcast_want_all_unsnoopables_node) { - if (!atomic_inc_not_zero(&orig_node->refcount)) + if (!atomic_inc_not_zero(&tmp_orig_node->refcount)) continue; orig_node = tmp_orig_node; diff --git 
a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index a1e5bb7d06e8..dc4d301d3a72 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -7519,9 +7519,9 @@ int __init l2cap_init(void) l2cap_debugfs = debugfs_create_file("l2cap", 0444, bt_debugfs, NULL, &l2cap_debugfs_fops); - debugfs_create_u16("l2cap_le_max_credits", 0466, bt_debugfs, + debugfs_create_u16("l2cap_le_max_credits", 0644, bt_debugfs, &le_max_credits); - debugfs_create_u16("l2cap_le_default_mps", 0466, bt_debugfs, + debugfs_create_u16("l2cap_le_default_mps", 0644, bt_debugfs, &le_default_mps); bt_6lowpan_init(); diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c index 9203d5a1943f..474d36f93342 100644 --- a/net/bridge/br_fdb.c +++ b/net/bridge/br_fdb.c @@ -487,6 +487,7 @@ void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source, { struct hlist_head *head = &br->hash[br_mac_hash(addr, vid)]; struct net_bridge_fdb_entry *fdb; + bool fdb_modified = false; /* some users want to always flood. */ if (hold_time(br) == 0) @@ -507,10 +508,15 @@ void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source, source->dev->name); } else { /* fastpath: update of existing entry */ - fdb->dst = source; + if (unlikely(source != fdb->dst)) { + fdb->dst = source; + fdb_modified = true; + } fdb->updated = jiffies; if (unlikely(added_by_user)) fdb->added_by_user = 1; + if (unlikely(fdb_modified)) + fdb_notify(br, fdb, RTM_NEWNEIGH); } } else { spin_lock(&br->hash_lock); diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c index 7985deaff52f..04d6348fd530 100644 --- a/net/bridge/br_input.c +++ b/net/bridge/br_input.c @@ -147,8 +147,8 @@ static int br_handle_local_finish(struct sk_buff *skb) struct net_bridge_port *p = br_port_get_rcu(skb->dev); u16 vid = 0; - br_vlan_get_tag(skb, &vid); - if (p->flags & BR_LEARNING) + /* check if vlan is allowed, to avoid spoofing */ + if (p->flags & BR_LEARNING && br_should_learn(p, skb, &vid)) br_fdb_update(p->br, p, eth_hdr(skb)->h_source, vid, false); return 0; /* process further */ } diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 06811d79f89f..59d3a85c5873 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -581,6 +581,7 @@ bool br_allowed_ingress(struct net_bridge *br, struct net_port_vlans *v, struct sk_buff *skb, u16 *vid); bool br_allowed_egress(struct net_bridge *br, const struct net_port_vlans *v, const struct sk_buff *skb); +bool br_should_learn(struct net_bridge_port *p, struct sk_buff *skb, u16 *vid); struct sk_buff *br_handle_vlan(struct net_bridge *br, const struct net_port_vlans *v, struct sk_buff *skb); @@ -648,6 +649,12 @@ static inline bool br_allowed_egress(struct net_bridge *br, return true; } +static inline bool br_should_learn(struct net_bridge_port *p, + struct sk_buff *skb, u16 *vid) +{ + return true; +} + static inline struct sk_buff *br_handle_vlan(struct net_bridge *br, const struct net_port_vlans *v, struct sk_buff *skb) diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index 4a3716102789..5fee2feaf292 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -241,6 +241,34 @@ bool br_allowed_egress(struct net_bridge *br, return false; } +/* Called under RCU */ +bool br_should_learn(struct net_bridge_port *p, struct sk_buff *skb, u16 *vid) +{ + struct net_bridge *br = p->br; + struct net_port_vlans *v; + + if (!br->vlan_enabled) + return true; + + v = rcu_dereference(p->vlan_info); + if (!v) + return false; + + br_vlan_get_tag(skb, vid); + if (!*vid) { + *vid = 
br_get_pvid(v); + if (*vid == VLAN_N_VID) + return false; + + return true; + } + + if (test_bit(*vid, v->vlan_bitmap)) + return true; + + return false; +} + /* Must be protected by RTNL. * Must be called with vid in range from 1 to 4094 inclusive. */ diff --git a/net/core/dev.c b/net/core/dev.c index 8b07db37dc10..8908a68db449 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -2283,8 +2283,8 @@ EXPORT_SYMBOL(skb_checksum_help); __be16 skb_network_protocol(struct sk_buff *skb, int *depth) { + unsigned int vlan_depth = skb->mac_len; __be16 type = skb->protocol; - int vlan_depth = skb->mac_len; /* Tunnel gso handlers can set protocol to ethernet. */ if (type == htons(ETH_P_TEB)) { @@ -2297,15 +2297,30 @@ __be16 skb_network_protocol(struct sk_buff *skb, int *depth) type = eth->h_proto; } - while (type == htons(ETH_P_8021Q) || type == htons(ETH_P_8021AD)) { - struct vlan_hdr *vh; - - if (unlikely(!pskb_may_pull(skb, vlan_depth + VLAN_HLEN))) - return 0; - - vh = (struct vlan_hdr *)(skb->data + vlan_depth); - type = vh->h_vlan_encapsulated_proto; - vlan_depth += VLAN_HLEN; + /* if skb->protocol is 802.1Q/AD then the header should already be + * present at mac_len - VLAN_HLEN (if mac_len > 0), or at + * ETH_HLEN otherwise + */ + if (type == htons(ETH_P_8021Q) || type == htons(ETH_P_8021AD)) { + if (vlan_depth) { + if (unlikely(WARN_ON(vlan_depth < VLAN_HLEN))) + return 0; + vlan_depth -= VLAN_HLEN; + } else { + vlan_depth = ETH_HLEN; + } + do { + struct vlan_hdr *vh; + + if (unlikely(!pskb_may_pull(skb, + vlan_depth + VLAN_HLEN))) + return 0; + + vh = (struct vlan_hdr *)(skb->data + vlan_depth); + type = vh->h_vlan_encapsulated_proto; + vlan_depth += VLAN_HLEN; + } while (type == htons(ETH_P_8021Q) || + type == htons(ETH_P_8021AD)); } *depth = vlan_depth; diff --git a/net/core/filter.c b/net/core/filter.c index 9d79ca0a6e8e..4aec7b93f1a9 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -1559,8 +1559,13 @@ static struct sk_filter *__sk_prepare_filter(struct sk_filter *fp, fp->jited = 0; err = sk_chk_filter(fp->insns, fp->len); - if (err) + if (err) { + if (sk != NULL) + sk_filter_uncharge(sk, fp); + else + kfree(fp); return ERR_PTR(err); + } /* Probe if we can JIT compile the filter and if so, do * the compilation of the filter. diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index d6b46eb2f94c..3a26b3b23f16 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -2684,13 +2684,12 @@ static void tcp_process_loss(struct sock *sk, int flag, bool is_dupack) bool recovered = !before(tp->snd_una, tp->high_seq); if (tp->frto) { /* F-RTO RFC5682 sec 3.1 (sack enhanced version). */ - if (flag & FLAG_ORIG_SACK_ACKED) { - /* Step 3.b. A timeout is spurious if not all data are - * lost, i.e., never-retransmitted data are (s)acked. - */ - tcp_try_undo_loss(sk, true); + /* Step 3.b. A timeout is spurious if not all data are + * lost, i.e., never-retransmitted data are (s)acked. 
+ */ + if (tcp_try_undo_loss(sk, flag & FLAG_ORIG_SACK_ACKED)) return; - } + if (after(tp->snd_nxt, tp->high_seq) && (flag & FLAG_DATA_SACKED || is_dupack)) { tp->frto = 0; /* Loss was real: 2nd part of step 3.a */ diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c index 6313abd53c9d..56596ce390a1 100644 --- a/net/ipv6/output_core.c +++ b/net/ipv6/output_core.c @@ -12,7 +12,7 @@ void ipv6_select_ident(struct frag_hdr *fhdr, struct rt6_info *rt) { static atomic_t ipv6_fragmentation_id; struct in6_addr addr; - int old, new; + int ident; #if IS_ENABLED(CONFIG_IPV6) struct inet_peer *peer; @@ -26,15 +26,10 @@ void ipv6_select_ident(struct frag_hdr *fhdr, struct rt6_info *rt) return; } #endif - do { - old = atomic_read(&ipv6_fragmentation_id); - new = old + 1; - if (!new) - new = 1; - } while (atomic_cmpxchg(&ipv6_fragmentation_id, old, new) != old); + ident = atomic_inc_return(&ipv6_fragmentation_id); addr = rt->rt6i_dst.addr; - addr.s6_addr32[0] ^= (__force __be32)new; + addr.s6_addr32[0] ^= (__force __be32)ident; fhdr->identification = htonl(secure_ipv6_id(addr.s6_addr32)); } EXPORT_SYMBOL(ipv6_select_ident); diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c index 4f26ee46b51f..3d2d2c8108ca 100644 --- a/net/netfilter/ipvs/ip_vs_core.c +++ b/net/netfilter/ipvs/ip_vs_core.c @@ -1392,15 +1392,19 @@ ip_vs_in_icmp(struct sk_buff *skb, int *related, unsigned int hooknum) if (ipip) { __be32 info = ic->un.gateway; + __u8 type = ic->type; + __u8 code = ic->code; /* Update the MTU */ if (ic->type == ICMP_DEST_UNREACH && ic->code == ICMP_FRAG_NEEDED) { struct ip_vs_dest *dest = cp->dest; u32 mtu = ntohs(ic->un.frag.mtu); + __be16 frag_off = cih->frag_off; /* Strip outer IP and ICMP, go to IPIP header */ - __skb_pull(skb, ihl + sizeof(_icmph)); + if (pskb_pull(skb, ihl + sizeof(_icmph)) == NULL) + goto ignore_ipip; offset2 -= ihl + sizeof(_icmph); skb_reset_network_header(skb); IP_VS_DBG(12, "ICMP for IPIP %pI4->%pI4: mtu=%u\n", @@ -1408,7 +1412,7 @@ ip_vs_in_icmp(struct sk_buff *skb, int *related, unsigned int hooknum) ipv4_update_pmtu(skb, dev_net(skb->dev), mtu, 0, 0, 0, 0); /* Client uses PMTUD? */ - if (!(cih->frag_off & htons(IP_DF))) + if (!(frag_off & htons(IP_DF))) goto ignore_ipip; /* Prefer the resulting PMTU */ if (dest) { @@ -1427,12 +1431,13 @@ ip_vs_in_icmp(struct sk_buff *skb, int *related, unsigned int hooknum) /* Strip outer IP, ICMP and IPIP, go to IP header of * original request. 
*/ - __skb_pull(skb, offset2); + if (pskb_pull(skb, offset2) == NULL) + goto ignore_ipip; skb_reset_network_header(skb); IP_VS_DBG(12, "Sending ICMP for %pI4->%pI4: t=%u, c=%u, i=%u\n", &ip_hdr(skb)->saddr, &ip_hdr(skb)->daddr, - ic->type, ic->code, ntohl(info)); - icmp_send(skb, ic->type, ic->code, info); + type, code, ntohl(info)); + icmp_send(skb, type, code, info); /* ICMP can be shorter but anyways, account it */ ip_vs_out_stats(cp, skb); diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c index 81dca96d2be6..f22757a29cd0 100644 --- a/net/netlink/af_netlink.c +++ b/net/netlink/af_netlink.c @@ -1373,7 +1373,9 @@ retry: bool __netlink_ns_capable(const struct netlink_skb_parms *nsp, struct user_namespace *user_ns, int cap) { - return sk_ns_capable(nsp->sk, user_ns, cap); + return ((nsp->flags & NETLINK_SKB_DST) || + file_ns_capable(nsp->sk->sk_socket->file, user_ns, cap)) && + ns_capable(user_ns, cap); } EXPORT_SYMBOL(__netlink_ns_capable); @@ -2293,6 +2295,7 @@ static int netlink_sendmsg(struct kiocb *kiocb, struct socket *sock, struct sk_buff *skb; int err; struct scm_cookie scm; + u32 netlink_skb_flags = 0; if (msg->msg_flags&MSG_OOB) return -EOPNOTSUPP; @@ -2314,6 +2317,7 @@ static int netlink_sendmsg(struct kiocb *kiocb, struct socket *sock, if ((dst_group || dst_portid) && !netlink_allowed(sock, NL_CFG_F_NONROOT_SEND)) goto out; + netlink_skb_flags |= NETLINK_SKB_DST; } else { dst_portid = nlk->dst_portid; dst_group = nlk->dst_group; @@ -2343,6 +2347,7 @@ static int netlink_sendmsg(struct kiocb *kiocb, struct socket *sock, NETLINK_CB(skb).portid = nlk->portid; NETLINK_CB(skb).dst_group = dst_group; NETLINK_CB(skb).creds = siocb->scm->creds; + NETLINK_CB(skb).flags = netlink_skb_flags; err = -EFAULT; if (memcpy_fromiovec(skb_put(skb, len), msg->msg_iov, len)) { diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c index 9c22317778eb..e11aa4a156d2 100644 --- a/scripts/recordmcount.c +++ b/scripts/recordmcount.c @@ -40,6 +40,11 @@ #define R_METAG_NONE 3 #endif +#ifndef EM_AARCH64 +#define EM_AARCH64 183 +#define R_AARCH64_ABS64 257 +#endif + static int fd_map; /* File descriptor for file being modified. */ static int mmap_failed; /* Boolean flag. */ static void *ehdr_curr; /* current ElfXX_Ehdr * for resource cleanup */ @@ -347,6 +352,8 @@ do_file(char const *const fname) case EM_ARM: reltype = R_ARM_ABS32; altmcount = "__gnu_mcount_nc"; break; + case EM_AARCH64: + reltype = R_AARCH64_ABS64; gpfx = '_'; break; case EM_IA_64: reltype = R_IA64_IMM64; gpfx = '_'; break; case EM_METAG: reltype = R_METAG_ADDR32; altmcount = "_mcount_wrapper"; diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl index 91280b82da08..397b6b84e8c5 100755 --- a/scripts/recordmcount.pl +++ b/scripts/recordmcount.pl @@ -279,6 +279,11 @@ if ($arch eq "x86_64") { $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*R_ARM_(CALL|PC24|THM_CALL)" . 
"\\s+(__gnu_mcount_nc|mcount)\$"; +} elsif ($arch eq "arm64") { + $alignment = 3; + $section_type = '%progbits'; + $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*R_AARCH64_CALL26\\s+_mcount\$"; + $type = ".quad"; } elsif ($arch eq "ia64") { $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s_mcount\$"; $type = "data8"; diff --git a/tools/perf/util/dwarf-aux.c b/tools/perf/util/dwarf-aux.c index 7defd77105d0..cc66c4049e09 100644 --- a/tools/perf/util/dwarf-aux.c +++ b/tools/perf/util/dwarf-aux.c @@ -747,14 +747,17 @@ struct __find_variable_param { static int __die_find_variable_cb(Dwarf_Die *die_mem, void *data) { struct __find_variable_param *fvp = data; + Dwarf_Attribute attr; int tag; tag = dwarf_tag(die_mem); if ((tag == DW_TAG_formal_parameter || tag == DW_TAG_variable) && - die_compare_name(die_mem, fvp->name)) + die_compare_name(die_mem, fvp->name) && + /* Does the DIE have location information or external instance? */ + (dwarf_attr(die_mem, DW_AT_external, &attr) || + dwarf_attr(die_mem, DW_AT_location, &attr))) return DIE_FIND_CB_END; - if (dwarf_haspc(die_mem, fvp->addr)) return DIE_FIND_CB_CONTINUE; else diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c index 562762117639..9d8eb26f0533 100644 --- a/tools/perf/util/probe-finder.c +++ b/tools/perf/util/probe-finder.c @@ -511,12 +511,12 @@ static int convert_variable(Dwarf_Die *vr_die, struct probe_finder *pf) ret = convert_variable_location(vr_die, pf->addr, pf->fb_ops, &pf->sp_die, pf->tvar); - if (ret == -ENOENT) + if (ret == -ENOENT || ret == -EINVAL) pr_err("Failed to find the location of %s at this address.\n" " Perhaps, it has been optimized out.\n", pf->pvar->var); else if (ret == -ENOTSUP) pr_err("Sorry, we don't support this variable location yet.\n"); - else if (pf->pvar->field) { + else if (ret == 0 && pf->pvar->field) { ret = convert_variable_fields(vr_die, pf->pvar->var, pf->pvar->field, &pf->tvar->ref, &die_mem); diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile index 32487ed18354..e66e710cc595 100644 --- a/tools/testing/selftests/Makefile +++ b/tools/testing/selftests/Makefile @@ -10,6 +10,7 @@ TARGETS += timers TARGETS += vm TARGETS += powerpc TARGETS += user +TARGETS += sysctl all: for TARGET in $(TARGETS); do \ diff --git a/tools/testing/selftests/sysctl/Makefile b/tools/testing/selftests/sysctl/Makefile new file mode 100644 index 000000000000..0a92adaf0865 --- /dev/null +++ b/tools/testing/selftests/sysctl/Makefile @@ -0,0 +1,19 @@ +# Makefile for sysctl selftests. +# Expects kernel.sysctl_writes_strict=1. + +# No binaries, but make sure arg-less "make" doesn't trigger "run_tests". +all: + +# Allow specific tests to be selected. +test_num: + @/bin/sh ./run_numerictests + +test_string: + @/bin/sh ./run_stringtests + +run_tests: all test_num test_string + +# Nothing to clean up. +clean: + +.PHONY: all run_tests clean test_num test_string diff --git a/tools/testing/selftests/sysctl/common_tests b/tools/testing/selftests/sysctl/common_tests new file mode 100644 index 000000000000..17d534b1b7b4 --- /dev/null +++ b/tools/testing/selftests/sysctl/common_tests @@ -0,0 +1,109 @@ +#!/bin/sh + +TEST_FILE=$(mktemp) + +echo "== Testing sysctl behavior against ${TARGET} ==" + +set_orig() +{ + echo "${ORIG}" > "${TARGET}" +} + +set_test() +{ + echo "${TEST_STR}" > "${TARGET}" +} + +verify() +{ + local seen + seen=$(cat "$1") + if [ "${seen}" != "${TEST_STR}" ]; then + return 1 + fi + return 0 +} + +trap 'set_orig; rm -f "${TEST_FILE}"' EXIT + +rc=0 + +echo -n "Writing test file ... 
" +echo "${TEST_STR}" > "${TEST_FILE}" +if ! verify "${TEST_FILE}"; then + echo "FAIL" >&2 + exit 1 +else + echo "ok" +fi + +echo -n "Checking sysctl is not set to test value ... " +if verify "${TARGET}"; then + echo "FAIL" >&2 + exit 1 +else + echo "ok" +fi + +echo -n "Writing sysctl from shell ... " +set_test +if ! verify "${TARGET}"; then + echo "FAIL" >&2 + exit 1 +else + echo "ok" +fi + +echo -n "Resetting sysctl to original value ... " +set_orig +if verify "${TARGET}"; then + echo "FAIL" >&2 + exit 1 +else + echo "ok" +fi + +# Now that we've validated the sanity of "set_test" and "set_orig", +# we can use those functions to set starting states before running +# specific behavioral tests. + +echo -n "Writing entire sysctl in single write ... " +set_orig +dd if="${TEST_FILE}" of="${TARGET}" bs=4096 2>/dev/null +if ! verify "${TARGET}"; then + echo "FAIL" >&2 + rc=1 +else + echo "ok" +fi + +echo -n "Writing middle of sysctl after synchronized seek ... " +set_test +dd if="${TEST_FILE}" of="${TARGET}" bs=1 seek=1 skip=1 2>/dev/null +if ! verify "${TARGET}"; then + echo "FAIL" >&2 + rc=1 +else + echo "ok" +fi + +echo -n "Writing beyond end of sysctl ... " +set_orig +dd if="${TEST_FILE}" of="${TARGET}" bs=20 seek=2 2>/dev/null +if verify "${TARGET}"; then + echo "FAIL" >&2 + rc=1 +else + echo "ok" +fi + +echo -n "Writing sysctl with multiple long writes ... " +set_orig +(perl -e 'print "A" x 50;'; echo "${TEST_STR}") | \ + dd of="${TARGET}" bs=50 2>/dev/null +if verify "${TARGET}"; then + echo "FAIL" >&2 + rc=1 +else + echo "ok" +fi diff --git a/tools/testing/selftests/sysctl/run_numerictests b/tools/testing/selftests/sysctl/run_numerictests new file mode 100644 index 000000000000..8510f93f2d14 --- /dev/null +++ b/tools/testing/selftests/sysctl/run_numerictests @@ -0,0 +1,10 @@ +#!/bin/sh + +SYSCTL="/proc/sys" +TARGET="${SYSCTL}/vm/swappiness" +ORIG=$(cat "${TARGET}") +TEST_STR=$(( $ORIG + 1 )) + +. ./common_tests + +exit $rc diff --git a/tools/testing/selftests/sysctl/run_stringtests b/tools/testing/selftests/sysctl/run_stringtests new file mode 100644 index 000000000000..90a9293d520c --- /dev/null +++ b/tools/testing/selftests/sysctl/run_stringtests @@ -0,0 +1,77 @@ +#!/bin/sh + +SYSCTL="/proc/sys" +TARGET="${SYSCTL}/kernel/domainname" +ORIG=$(cat "${TARGET}") +TEST_STR="Testing sysctl" + +. ./common_tests + +# Only string sysctls support seeking/appending. +MAXLEN=65 + +echo -n "Writing entire sysctl in short writes ... " +set_orig +dd if="${TEST_FILE}" of="${TARGET}" bs=1 2>/dev/null +if ! verify "${TARGET}"; then + echo "FAIL" >&2 + rc=1 +else + echo "ok" +fi + +echo -n "Writing middle of sysctl after unsynchronized seek ... " +set_test +dd if="${TEST_FILE}" of="${TARGET}" bs=1 seek=1 2>/dev/null +if verify "${TARGET}"; then + echo "FAIL" >&2 + rc=1 +else + echo "ok" +fi + +echo -n "Checking sysctl maxlen is at least $MAXLEN ... " +set_orig +perl -e 'print "A" x ('"${MAXLEN}"'-2), "B";' | \ + dd of="${TARGET}" bs="${MAXLEN}" 2>/dev/null +if ! grep -q B "${TARGET}"; then + echo "FAIL" >&2 + rc=1 +else + echo "ok" +fi + +echo -n "Checking sysctl keeps original string on overflow append ... " +set_orig +perl -e 'print "A" x ('"${MAXLEN}"'-1), "B";' | \ + dd of="${TARGET}" bs=$(( MAXLEN - 1 )) 2>/dev/null +if grep -q B "${TARGET}"; then + echo "FAIL" >&2 + rc=1 +else + echo "ok" +fi + +echo -n "Checking sysctl stays NULL terminated on write ... 
" +set_orig +perl -e 'print "A" x ('"${MAXLEN}"'-1), "B";' | \ + dd of="${TARGET}" bs="${MAXLEN}" 2>/dev/null +if grep -q B "${TARGET}"; then + echo "FAIL" >&2 + rc=1 +else + echo "ok" +fi + +echo -n "Checking sysctl stays NULL terminated on overwrite ... " +set_orig +perl -e 'print "A" x ('"${MAXLEN}"'-1), "BB";' | \ + dd of="${TARGET}" bs=$(( $MAXLEN + 1 )) 2>/dev/null +if grep -q B "${TARGET}"; then + echo "FAIL" >&2 + rc=1 +else + echo "ok" +fi + +exit $rc diff --git a/usr/Kconfig b/usr/Kconfig index 642f503d3e9f..2d4c77eecf2e 100644 --- a/usr/Kconfig +++ b/usr/Kconfig @@ -98,80 +98,3 @@ config RD_LZ4 help Support loading of a LZ4 encoded initial ramdisk or cpio buffer If unsure, say N. - -choice - prompt "Built-in initramfs compression mode" if INITRAMFS_SOURCE!="" - help - This option decides by which algorithm the builtin initramfs - will be compressed. Several compression algorithms are - available, which differ in efficiency, compression and - decompression speed. Compression speed is only relevant - when building a kernel. Decompression speed is relevant at - each boot. - - If you have any problems with bzip2 or LZMA compressed - initramfs, mail me (Alain Knaff) <alain@knaff.lu>. - - High compression options are mostly useful for users who are - low on RAM, since it reduces the memory consumption during - boot. - - If in doubt, select 'gzip' - -config INITRAMFS_COMPRESSION_NONE - bool "None" - help - Do not compress the built-in initramfs at all. This may - sound wasteful in space, but, you should be aware that the - built-in initramfs will be compressed at a later stage - anyways along with the rest of the kernel, on those - architectures that support this. - However, not compressing the initramfs may lead to slightly - higher memory consumption during a short time at boot, while - both the cpio image and the unpacked filesystem image will - be present in memory simultaneously - -config INITRAMFS_COMPRESSION_GZIP - bool "Gzip" - depends on RD_GZIP - help - The old and tried gzip compression. It provides a good balance - between compression ratio and decompression speed. - -config INITRAMFS_COMPRESSION_BZIP2 - bool "Bzip2" - depends on RD_BZIP2 - help - Its compression ratio and speed is intermediate. - Decompression speed is slowest among the choices. The initramfs - size is about 10% smaller with bzip2, in comparison to gzip. - Bzip2 uses a large amount of memory. For modern kernels you - will need at least 8MB RAM or more for booting. - -config INITRAMFS_COMPRESSION_LZMA - bool "LZMA" - depends on RD_LZMA - help - This algorithm's compression ratio is best. - Decompression speed is between the other choices. - Compression is slowest. The initramfs size is about 33% - smaller with LZMA in comparison to gzip. - -config INITRAMFS_COMPRESSION_XZ - bool "XZ" - depends on RD_XZ - help - XZ uses the LZMA2 algorithm. The initramfs size is about 30% - smaller with XZ in comparison to gzip. Decompression speed - is better than that of bzip2 but worse than gzip and LZO. - Compression is slow. - -config INITRAMFS_COMPRESSION_LZO - bool "LZO" - depends on RD_LZO - help - Its compression ratio is the poorest among the choices. The kernel - size is about 10% bigger than gzip; however its speed - (both compression and decompression) is the fastest. - -endchoice |