author | Ingo Molnar <mingo@kernel.org> | 2014-07-28 10:00:33 +0200
---|---|---
committer | Ingo Molnar <mingo@kernel.org> | 2014-07-28 10:00:33 +0200
commit | 5030c69755416d19516c0a61cd988a0e0062e041 | (patch)
tree | ca4d3088428cf033420f9df9111d796650bc59e5 |
parent | 2336ebc32676df5b794acfe0c980583ec6c05f34 | (diff)
parent | 64aa90f26c06e1cb2aacfb98a7d0eccfbd6c1a91 | (diff)
download | linux-5030c69755416d19516c0a61cd988a0e0062e041.tar.bz2 |
Merge tag 'v3.16-rc7' into perf/core, to merge in the latest fixes before applying new changes
Signed-off-by: Ingo Molnar <mingo@kernel.org>
1000 files changed, 9707 insertions, 5791 deletions
@@ -9,6 +9,10 @@ Linus ---------- +M: Matt Mackal +E: mpm@selenic.com +D: SLOB slab allocator + N: Matti Aarnio E: mea@nic.funet.fi D: Alpha systems hacking, IPv6 and other network related stuff diff --git a/Documentation/Changes b/Documentation/Changes index 2254db0f00a5..227bec88021e 100644 --- a/Documentation/Changes +++ b/Documentation/Changes @@ -280,12 +280,9 @@ that is possible. mcelog ------ -In Linux 2.6.31+ the i386 kernel needs to run the mcelog utility -as a regular cronjob similar to the x86-64 kernel to process and log -machine check events when CONFIG_X86_NEW_MCE is enabled. Machine check -events are errors reported by the CPU. Processing them is strongly encouraged. -All x86-64 kernels since 2.6.4 require the mcelog utility to -process machine checks. +On x86 kernels the mcelog utility is needed to process and log machine check +events when CONFIG_X86_MCE is enabled. Machine check events are errors reported +by the CPU. Processing them is strongly encouraged. Getting updated software ======================== diff --git a/Documentation/DocBook/gadget.tmpl b/Documentation/DocBook/gadget.tmpl index 4017f147ba2f..2c425d70f7e2 100644 --- a/Documentation/DocBook/gadget.tmpl +++ b/Documentation/DocBook/gadget.tmpl @@ -708,7 +708,7 @@ hardware level details could be very different. <para>Systems need specialized hardware support to implement OTG, notably including a special <emphasis>Mini-AB</emphasis> jack -and associated transciever to support <emphasis>Dual-Role</emphasis> +and associated transceiver to support <emphasis>Dual-Role</emphasis> operation: they can act either as a host, using the standard Linux-USB host side driver stack, diff --git a/Documentation/DocBook/genericirq.tmpl b/Documentation/DocBook/genericirq.tmpl index 46347f603353..59fb5c077541 100644 --- a/Documentation/DocBook/genericirq.tmpl +++ b/Documentation/DocBook/genericirq.tmpl @@ -182,7 +182,7 @@ <para> Each interrupt is described by an interrupt descriptor structure irq_desc. The interrupt is referenced by an 'unsigned int' numeric - value which selects the corresponding interrupt decription structure + value which selects the corresponding interrupt description structure in the descriptor structures array. The descriptor structure contains status information and pointers to the interrupt flow method and the interrupt chip structure @@ -470,7 +470,7 @@ if (desc->irq_data.chip->irq_eoi) <para> To avoid copies of identical implementations of IRQ chips the core provides a configurable generic interrupt chip - implementation. Developers should check carefuly whether the + implementation. Developers should check carefully whether the generic chip fits their needs before implementing the same functionality slightly differently themselves. </para> diff --git a/Documentation/DocBook/kernel-locking.tmpl b/Documentation/DocBook/kernel-locking.tmpl index 19f2a5a5a5b4..e584ee12a1e7 100644 --- a/Documentation/DocBook/kernel-locking.tmpl +++ b/Documentation/DocBook/kernel-locking.tmpl @@ -1760,7 +1760,7 @@ as it would be on UP. </para> <para> -There is a furthur optimization possible here: remember our original +There is a further optimization possible here: remember our original cache code, where there were no reference counts and the caller simply held the lock whenever using the object? 
This is still possible: if you hold the lock, no one can delete the object, so you don't need to diff --git a/Documentation/DocBook/libata.tmpl b/Documentation/DocBook/libata.tmpl index deb71baed328..d7fcdc5a4379 100644 --- a/Documentation/DocBook/libata.tmpl +++ b/Documentation/DocBook/libata.tmpl @@ -677,7 +677,7 @@ and other resources, etc. <listitem> <para> - ATA_QCFLAG_ACTIVE is clared from qc->flags. + ATA_QCFLAG_ACTIVE is cleared from qc->flags. </para> </listitem> @@ -708,7 +708,7 @@ and other resources, etc. <listitem> <para> - qc->waiting is claread & completed (in that order). + qc->waiting is cleared & completed (in that order). </para> </listitem> @@ -1163,7 +1163,7 @@ and other resources, etc. <para> Once sense data is acquired, this type of errors can be - handled similary to other SCSI errors. Note that sense data + handled similarly to other SCSI errors. Note that sense data may indicate ATA bus error (e.g. Sense Key 04h HARDWARE ERROR && ASC/ASCQ 47h/00h SCSI PARITY ERROR). In such cases, the error should be considered as an ATA bus error and diff --git a/Documentation/DocBook/media_api.tmpl b/Documentation/DocBook/media_api.tmpl index 4decb46bfa76..03f9a1f8d413 100644 --- a/Documentation/DocBook/media_api.tmpl +++ b/Documentation/DocBook/media_api.tmpl @@ -68,7 +68,7 @@ several digital tv standards. While it is called as DVB API, in fact it covers several different video standards including DVB-T, DVB-S, DVB-C and ATSC. The API is currently being updated - to documment support also for DVB-S2, ISDB-T and ISDB-S.</para> + to document support also for DVB-S2, ISDB-T and ISDB-S.</para> <para>The third part covers the Remote Controller API.</para> <para>The fourth part covers the Media Controller API.</para> <para>For additional information and for the latest development code, diff --git a/Documentation/DocBook/mtdnand.tmpl b/Documentation/DocBook/mtdnand.tmpl index cd11926e07c7..7da8f0402af5 100644 --- a/Documentation/DocBook/mtdnand.tmpl +++ b/Documentation/DocBook/mtdnand.tmpl @@ -91,7 +91,7 @@ <listitem><para> [MTD Interface]</para><para> These functions provide the interface to the MTD kernel API. - They are not replacable and provide functionality + They are not replaceable and provide functionality which is complete hardware independent. </para></listitem> <listitem><para> @@ -100,14 +100,14 @@ </para></listitem> <listitem><para> [GENERIC]</para><para> - Generic functions are not replacable and provide functionality + Generic functions are not replaceable and provide functionality which is complete hardware independent. </para></listitem> <listitem><para> [DEFAULT]</para><para> Default functions provide hardware related functionality which is suitable for most of the implementations. These functions can be replaced by the - board driver if neccecary. Those functions are called via pointers in the + board driver if necessary. Those functions are called via pointers in the NAND chip description structure. The board driver can set the functions which should be replaced by board dependent functions before calling nand_scan(). If the function pointer is NULL on entry to nand_scan() then the pointer @@ -264,7 +264,7 @@ static void board_hwcontrol(struct mtd_info *mtd, int cmd) is set up nand_scan() is called. This function tries to detect and identify then chip. If a chip is found all the internal data fields are initialized accordingly. 
- The structure(s) have to be zeroed out first and then filled with the neccecary + The structure(s) have to be zeroed out first and then filled with the necessary information about the device. </para> <programlisting> @@ -327,7 +327,7 @@ module_init(board_init); <sect1 id="Exit_function"> <title>Exit function</title> <para> - The exit function is only neccecary if the driver is + The exit function is only necessary if the driver is compiled as a module. It releases all resources which are held by the chip driver and unregisters the partitions in the MTD layer. @@ -494,7 +494,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip) in this case. See rts_from4.c and diskonchip.c for implementation reference. In those cases we must also use bad block tables on FLASH, because the ECC layout is - interferring with the bad block marker positions. + interfering with the bad block marker positions. See bad block table support for details. </para> </sect2> @@ -542,7 +542,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip) <para> nand_scan() calls the function nand_default_bbt(). nand_default_bbt() selects appropriate default - bad block table desriptors depending on the chip information + bad block table descriptors depending on the chip information which was retrieved by nand_scan(). </para> <para> @@ -554,7 +554,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip) <sect2 id="Flash_based_tables"> <title>Flash based tables</title> <para> - It may be desired or neccecary to keep a bad block table in FLASH. + It may be desired or necessary to keep a bad block table in FLASH. For AG-AND chips this is mandatory, as they have no factory marked bad blocks. They have factory marked good blocks. The marker pattern is erased when the block is erased to be reused. So in case of @@ -565,10 +565,10 @@ static void board_select_chip (struct mtd_info *mtd, int chip) of the blocks. </para> <para> - The blocks in which the tables are stored are procteted against + The blocks in which the tables are stored are protected against accidental access by marking them bad in the memory bad block table. The bad block table management functions are allowed - to circumvernt this protection. + to circumvent this protection. </para> <para> The simplest way to activate the FLASH based bad block table support @@ -592,7 +592,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip) User defined tables are created by filling out a nand_bbt_descr structure and storing the pointer in the nand_chip structure member bbt_td before calling nand_scan(). - If a mirror table is neccecary a second structure must be + If a mirror table is necessary a second structure must be created and a pointer to this structure must be stored in bbt_md inside the nand_chip structure. If the bbt_md member is set to NULL then only the main table is used @@ -666,7 +666,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip) <para> For automatic placement some blocks must be reserved for bad block table storage. The number of reserved blocks is defined - in the maxblocks member of the babd block table description structure. + in the maxblocks member of the bad block table description structure. Reserving 4 blocks for mirrored tables should be a reasonable number. This also limits the number of blocks which are scanned for the bad block table ident pattern. 
@@ -1068,11 +1068,11 @@ in this page</entry> <chapter id="filesystems"> <title>Filesystem support</title> <para> - The NAND driver provides all neccecary functions for a + The NAND driver provides all necessary functions for a filesystem via the MTD interface. </para> <para> - Filesystems must be aware of the NAND pecularities and + Filesystems must be aware of the NAND peculiarities and restrictions. One major restrictions of NAND Flash is, that you cannot write as often as you want to a page. The consecutive writes to a page, before erasing it again, are restricted to 1-3 writes, depending on the @@ -1222,7 +1222,7 @@ in this page</entry> #define NAND_BBT_VERSION 0x00000100 /* Create a bbt if none axists */ #define NAND_BBT_CREATE 0x00000200 -/* Write bbt if neccecary */ +/* Write bbt if necessary */ #define NAND_BBT_WRITE 0x00001000 /* Read and write back block contents when writing bbt */ #define NAND_BBT_SAVECONTENT 0x00002000 diff --git a/Documentation/DocBook/regulator.tmpl b/Documentation/DocBook/regulator.tmpl index 346e552fa2cc..3b08a085d2c7 100644 --- a/Documentation/DocBook/regulator.tmpl +++ b/Documentation/DocBook/regulator.tmpl @@ -155,7 +155,7 @@ release regulators. Functions are provided to <link linkend='API-regulator-enable'>enable</link> and <link linkend='API-regulator-disable'>disable</link> the - reguator and to get and set the runtime parameters of the + regulator and to get and set the runtime parameters of the regulator. </para> <para> diff --git a/Documentation/DocBook/uio-howto.tmpl b/Documentation/DocBook/uio-howto.tmpl index 95618159e29b..bbe9c1fd5cef 100644 --- a/Documentation/DocBook/uio-howto.tmpl +++ b/Documentation/DocBook/uio-howto.tmpl @@ -766,10 +766,10 @@ framework to set up sysfs files for this region. Simply leave it alone. <para> The dynamic memory regions will be allocated when the UIO device file, <varname>/dev/uioX</varname> is opened. - Simiar to static memory resources, the memory region information for + Similar to static memory resources, the memory region information for dynamic regions is then visible via sysfs at <varname>/sys/class/uio/uioX/maps/mapY/*</varname>. - The dynmaic memory regions will be freed when the UIO device file is + The dynamic memory regions will be freed when the UIO device file is closed. When no processes are holding the device file open, the address returned to userspace is ~0. </para> diff --git a/Documentation/DocBook/usb.tmpl b/Documentation/DocBook/usb.tmpl index 8d57c1888dca..85fc0e28576f 100644 --- a/Documentation/DocBook/usb.tmpl +++ b/Documentation/DocBook/usb.tmpl @@ -153,7 +153,7 @@ <listitem><para>The Linux USB API supports synchronous calls for control and bulk messages. - It also supports asynchnous calls for all kinds of data transfer, + It also supports asynchronous calls for all kinds of data transfer, using request structures called "URBs" (USB Request Blocks). </para></listitem> diff --git a/Documentation/DocBook/writing-an-alsa-driver.tmpl b/Documentation/DocBook/writing-an-alsa-driver.tmpl index d0056a4e9c53..6f639d9530b5 100644 --- a/Documentation/DocBook/writing-an-alsa-driver.tmpl +++ b/Documentation/DocBook/writing-an-alsa-driver.tmpl @@ -5696,7 +5696,7 @@ struct _snd_pcm_runtime { suspending the PCM operations via <function>snd_pcm_suspend_all()</function> or <function>snd_pcm_suspend()</function>. It means that the PCM - streams are already stoppped when the register snapshot is + streams are already stopped when the register snapshot is taken. 
But, remember that you don't have to restart the PCM stream in the resume callback. It'll be restarted via trigger call with <constant>SNDRV_PCM_TRIGGER_RESUME</constant> diff --git a/Documentation/accounting/getdelays.c b/Documentation/accounting/getdelays.c index c6a06b71594d..f40578026a04 100644 --- a/Documentation/accounting/getdelays.c +++ b/Documentation/accounting/getdelays.c @@ -314,6 +314,7 @@ int main(int argc, char *argv[]) break; case 'm': strncpy(cpumask, optarg, sizeof(cpumask)); + cpumask[sizeof(cpumask) - 1] = '\0'; maskset = 1; printf("cpumask %s maskset %d\n", cpumask, maskset); break; diff --git a/Documentation/acpi/enumeration.txt b/Documentation/acpi/enumeration.txt index fd786ea13a1f..e182be5e3c83 100644 --- a/Documentation/acpi/enumeration.txt +++ b/Documentation/acpi/enumeration.txt @@ -60,12 +60,6 @@ If the driver needs to perform more complex initialization like getting and configuring GPIOs it can get its ACPI handle and extract this information from ACPI tables. -Currently the kernel is not able to automatically determine from which ACPI -device it should make the corresponding platform device so we need to add -the ACPI device explicitly to acpi_platform_device_ids list defined in -drivers/acpi/acpi_platform.c. This limitation is only for the platform -devices, SPI and I2C devices are created automatically as described below. - DMA support ~~~~~~~~~~~ DMA controllers enumerated via ACPI should be registered in the system to diff --git a/Documentation/cpu-freq/intel-pstate.txt b/Documentation/cpu-freq/intel-pstate.txt index e742d21dbd96..a69ffe1d54d5 100644 --- a/Documentation/cpu-freq/intel-pstate.txt +++ b/Documentation/cpu-freq/intel-pstate.txt @@ -15,10 +15,13 @@ New sysfs files for controlling P state selection have been added to /sys/devices/system/cpu/intel_pstate/ max_perf_pct: limits the maximum P state that will be requested by - the driver stated as a percentage of the available performance. + the driver stated as a percentage of the available performance. The + available (P states) performance may be reduced by the no_turbo + setting described below. min_perf_pct: limits the minimum P state that will be requested by - the driver stated as a percentage of the available performance. + the driver stated as a percentage of the max (non-turbo) + performance level. no_turbo: limits the driver to selecting P states below the turbo frequency range. diff --git a/Documentation/devicetree/bindings/arm/armada-38x.txt b/Documentation/devicetree/bindings/arm/armada-38x.txt index 11f2330a6554..ad9f8ed4d9bd 100644 --- a/Documentation/devicetree/bindings/arm/armada-38x.txt +++ b/Documentation/devicetree/bindings/arm/armada-38x.txt @@ -6,5 +6,15 @@ following property: Required root node property: - - compatible: must contain either "marvell,armada380" or - "marvell,armada385" depending on the variant of the SoC being used. 
+ - compatible: must contain "marvell,armada380" + +In addition, boards using the Marvell Armada 385 SoC shall have the +following property before the previous one: + +Required root node property: + +compatible: must contain "marvell,armada385" + +Example: + +compatible = "marvell,a385-rd", "marvell,armada385", "marvell,armada380"; diff --git a/Documentation/devicetree/bindings/arm/exynos/power_domain.txt b/Documentation/devicetree/bindings/arm/exynos/power_domain.txt index 5216b419016a..8b4f7b7fe88b 100644 --- a/Documentation/devicetree/bindings/arm/exynos/power_domain.txt +++ b/Documentation/devicetree/bindings/arm/exynos/power_domain.txt @@ -9,6 +9,18 @@ Required Properties: - reg: physical base address of the controller and length of memory mapped region. +Optional Properties: +- clocks: List of clock handles. The parent clocks of the input clocks to the + devices in this power domain are set to oscclk before power gating + and restored back after powering on a domain. This is required for + all domains which are powered on and off and not required for unused + domains. +- clock-names: The following clocks can be specified: + - oscclk: Oscillator clock. + - pclkN, clkN: Pairs of parent of input clock and input clock to the + devices in this power domain. Maximum of 4 pairs (N = 0 to 3) + are supported currently. + Node of a device using power domains must have a samsung,power-domain property defined with a phandle to respective power domain. @@ -19,6 +31,14 @@ Example: reg = <0x10023C00 0x10>; }; + mfc_pd: power-domain@10044060 { + compatible = "samsung,exynos4210-pd"; + reg = <0x10044060 0x20>; + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>, + <&clock CLK_MOUT_USER_ACLK333>; + clock-names = "oscclk", "pclk0", "clk0"; + }; + Example of the node using power domain: node { diff --git a/Documentation/devicetree/bindings/arm/l2cc.txt b/Documentation/devicetree/bindings/arm/l2cc.txt index b513cb8196fe..af527ee111c2 100644 --- a/Documentation/devicetree/bindings/arm/l2cc.txt +++ b/Documentation/devicetree/bindings/arm/l2cc.txt @@ -40,6 +40,9 @@ Optional properties: - arm,filter-ranges : <start length> Starting address and length of window to filter. Addresses in the filter window are directed to the M1 port. Other addresses will go to the M0 port. +- arm,io-coherent : indicates that the system is operating in an hardware + I/O coherent mode. Valid only when the arm,pl310-cache compatible + string is used. - interrupts : 1 combined interrupt. - cache-id-part: cache id part number to be used if it is not present on hardware diff --git a/Documentation/devicetree/bindings/arm/samsung/exynos-adc.txt b/Documentation/devicetree/bindings/arm/samsung/exynos-adc.txt index 5d49f2b37f68..832fe8cc24d7 100644 --- a/Documentation/devicetree/bindings/arm/samsung/exynos-adc.txt +++ b/Documentation/devicetree/bindings/arm/samsung/exynos-adc.txt @@ -48,7 +48,7 @@ adc@12D10000 { /* NTC thermistor is a hwmon device */ ncp15wb473@0 { - compatible = "ntc,ncp15wb473"; + compatible = "murata,ncp15wb473"; pullup-uv = <1800000>; pullup-ohm = <47000>; pulldown-ohm = <0>; diff --git a/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt b/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt index f055515d2b62..366690cb86a3 100644 --- a/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt +++ b/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt @@ -8,10 +8,12 @@ Both required and optional properties listed below must be defined under node /cpus/cpu@0. 
Required properties: -- operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt - for details +- None Optional properties: +- operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt for + details. OPPs *must* be supplied either via DT, i.e. this property, or + populated at runtime. - clock-latency: Specify the possible maximum transition latency for clock, in unit of nanoseconds. - voltage-tolerance: Specify the CPU voltage tolerance in percentage. diff --git a/Documentation/devicetree/bindings/hwmon/ntc_thermistor.txt b/Documentation/devicetree/bindings/hwmon/ntc_thermistor.txt index c6f66674f19c..b117b2e9e1a7 100644 --- a/Documentation/devicetree/bindings/hwmon/ntc_thermistor.txt +++ b/Documentation/devicetree/bindings/hwmon/ntc_thermistor.txt @@ -3,11 +3,19 @@ NTC Thermistor hwmon sensors Requires node properties: - "compatible" value : one of - "ntc,ncp15wb473" - "ntc,ncp18wb473" - "ntc,ncp21wb473" - "ntc,ncp03wb473" - "ntc,ncp15wl333" + "murata,ncp15wb473" + "murata,ncp18wb473" + "murata,ncp21wb473" + "murata,ncp03wb473" + "murata,ncp15wl333" + +/* Usage of vendor name "ntc" is deprecated */ +<DEPRECATED> "ntc,ncp15wb473" +<DEPRECATED> "ntc,ncp18wb473" +<DEPRECATED> "ntc,ncp21wb473" +<DEPRECATED> "ntc,ncp03wb473" +<DEPRECATED> "ntc,ncp15wl333" + - "pullup-uv" Pull up voltage in micro volts - "pullup-ohm" Pull up resistor value in ohms - "pulldown-ohm" Pull down resistor value in ohms @@ -21,7 +29,7 @@ Read more about iio bindings at Example: ncp15wb473@0 { - compatible = "ntc,ncp15wb473"; + compatible = "murata,ncp15wb473"; pullup-uv = <1800000>; pullup-ohm = <47000>; pulldown-ohm = <0>; diff --git a/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt b/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt index 64fd7dec1bbc..b3556609a06f 100644 --- a/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt +++ b/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt @@ -4,6 +4,13 @@ Required properties: - compatible: Must contain one of the following: + - "renesas,scifa-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFA compatible UART. + - "renesas,scifb-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFB compatible UART. + - "renesas,scifa-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFA compatible UART. + - "renesas,scifb-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFB compatible UART. + - "renesas,scifa-r8a7740" for R8A7740 (R-Mobile A1) SCIFA compatible UART. + - "renesas,scifb-r8a7740" for R8A7740 (R-Mobile A1) SCIFB compatible UART. + - "renesas,scif-r8a7778" for R8A7778 (R-Car M1) SCIF compatible UART. - "renesas,scif-r8a7779" for R8A7779 (R-Car H1) SCIF compatible UART. - "renesas,scif-r8a7790" for R8A7790 (R-Car H2) SCIF compatible UART. - "renesas,scifa-r8a7790" for R8A7790 (R-Car H2) SCIFA compatible UART. diff --git a/Documentation/devicetree/bindings/spi/qcom,spi-qup.txt b/Documentation/devicetree/bindings/spi/qcom,spi-qup.txt index b82a268f1bd4..bee6ff204baf 100644 --- a/Documentation/devicetree/bindings/spi/qcom,spi-qup.txt +++ b/Documentation/devicetree/bindings/spi/qcom,spi-qup.txt @@ -23,6 +23,12 @@ Optional properties: - spi-max-frequency: Specifies maximum SPI clock frequency, Units - Hz. Definition as per Documentation/devicetree/bindings/spi/spi-bus.txt +- num-cs: total number of chipselects +- cs-gpios: should specify GPIOs used for chipselects. + The gpios will be referred to as reg = <index> in the SPI child + nodes. If unspecified, a single SPI device without a chip + select can be used. 
+ SPI slave nodes must be children of the SPI master node and can contain properties described in Documentation/devicetree/bindings/spi/spi-bus.txt diff --git a/Documentation/devicetree/bindings/vendor-prefixes.txt b/Documentation/devicetree/bindings/vendor-prefixes.txt index 4d7f3758d1b4..46a311e728a8 100644 --- a/Documentation/devicetree/bindings/vendor-prefixes.txt +++ b/Documentation/devicetree/bindings/vendor-prefixes.txt @@ -83,6 +83,7 @@ mosaixtech Mosaix Technologies, Inc. moxa Moxa mpl MPL AG mundoreader Mundo Reader S.L. +murata Murata Manufacturing Co., Ltd. mxicy Macronix International Co., Ltd. national National Semiconductor neonode Neonode Inc. diff --git a/Documentation/email-clients.txt b/Documentation/email-clients.txt index 4e30ebaa9e5b..9af538be3751 100644 --- a/Documentation/email-clients.txt +++ b/Documentation/email-clients.txt @@ -1,6 +1,17 @@ Email clients info for Linux ====================================================================== +Git +---------------------------------------------------------------------- +These days most developers use `git send-email` instead of regular +email clients. The man page for this is quite good. On the receiving +end, maintainers use `git am` to apply the patches. + +If you are new to git then send your first patch to yourself. Save it +as raw text including all the headers. Run `git am raw_email.txt` and +then review the changelog with `git log`. When that works then send +the patch to the appropriate mailing list(s). + General Preferences ---------------------------------------------------------------------- Patches for the Linux kernel are submitted via email, preferably as diff --git a/Documentation/hwmon/ntc_thermistor b/Documentation/hwmon/ntc_thermistor index 3bfda94096fd..057b77029f26 100644 --- a/Documentation/hwmon/ntc_thermistor +++ b/Documentation/hwmon/ntc_thermistor @@ -1,7 +1,7 @@ Kernel driver ntc_thermistor ================= -Supported thermistors: +Supported thermistors from Murata: * Murata NTC Thermistors NCP15WB473, NCP18WB473, NCP21WB473, NCP03WB473, NCP15WL333 Prefixes: 'ncp15wb473', 'ncp18wb473', 'ncp21wb473', 'ncp03wb473', 'ncp15wl333' Datasheet: Publicly available at Murata @@ -15,9 +15,9 @@ Authors: Description ----------- -The NTC thermistor is a simple thermistor that requires users to provide the -resistance and lookup the corresponding compensation table to get the -temperature input. +The NTC (Negative Temperature Coefficient) thermistor is a simple thermistor +that requires users to provide the resistance and lookup the corresponding +compensation table to get the temperature input. The NTC driver provides lookup tables with a linear approximation function and four circuit models with an option not to use any of the four models. diff --git a/Documentation/input/event-codes.txt b/Documentation/input/event-codes.txt index f1ea2c69648d..c587a966413e 100644 --- a/Documentation/input/event-codes.txt +++ b/Documentation/input/event-codes.txt @@ -281,6 +281,19 @@ gestures can normally be extracted from it. If INPUT_PROP_SEMI_MT is not set, the device is assumed to be a true MT device. +INPUT_PROP_TOPBUTTONPAD: +----------------------- +Some laptops, most notably the Lenovo *40 series provide a trackstick +device but do not have physical buttons associated with the trackstick +device. Instead, the top area of the touchpad is marked to show +visual/haptic areas for left, middle, right buttons intended to be used +with the trackstick. 
+ +If INPUT_PROP_TOPBUTTONPAD is set, userspace should emulate buttons +accordingly. This property does not affect kernel behavior. +The kernel does not provide button emulation for such devices but treats +them as any other INPUT_PROP_BUTTONPAD device. + Guidelines: ========== The guidelines below ensure proper single-touch and multi-finger functionality. diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt index 884904975d0b..b7fa2f599459 100644 --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -2790,6 +2790,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted. leaf rcu_node structure. Useful for very large systems. + rcutree.jiffies_till_sched_qs= [KNL] + Set required age in jiffies for a + given grace period before RCU starts + soliciting quiescent-state help from + rcu_note_context_switch(). + rcutree.jiffies_till_first_fqs= [KNL] Set delay from grace-period initialization to first attempt to force quiescent states. @@ -3130,6 +3136,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted. [KNL] Should the soft-lockup detector generate panics. Format: <integer> + softlockup_all_cpu_backtrace= + [KNL] Should the soft-lockup detector generate + backtraces on all cpus. + Format: <integer> + sonypi.*= [HW] Sony Programmable I/O Control Device driver See Documentation/laptops/sonypi.txt @@ -3521,7 +3532,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted. the allocated input device; If set to 0, video driver will only send out the event without touching backlight brightness level. - default: 0 + default: 1 virtio_mmio.device= [VMMIO] Memory mapped virtio (platform) device. diff --git a/Documentation/laptops/00-INDEX b/Documentation/laptops/00-INDEX index d13b9a9a9e00..d399ae1fc724 100644 --- a/Documentation/laptops/00-INDEX +++ b/Documentation/laptops/00-INDEX @@ -8,8 +8,8 @@ disk-shock-protection.txt - information on hard disk shock protection. dslm.c - Simple Disk Sleep Monitor program -hpfall.c - - (HP) laptop accelerometer program for disk protection. +freefall.c + - (HP/DELL) laptop accelerometer program for disk protection. laptop-mode.txt - how to conserve battery power using laptop-mode. sony-laptop.txt diff --git a/Documentation/laptops/hpfall.c b/Documentation/laptops/freefall.c index b85dbbac0499..aab2ff09e868 100644 --- a/Documentation/laptops/hpfall.c +++ b/Documentation/laptops/freefall.c @@ -1,7 +1,9 @@ -/* Disk protection for HP machines. +/* Disk protection for HP/DELL machines. * * Copyright 2008 Eric Piel * Copyright 2009 Pavel Machek <pavel@ucw.cz> + * Copyright 2012 Sonal Santan + * Copyright 2014 Pali Rohár <pali.rohar@gmail.com> * * GPLv2. 
*/ @@ -18,24 +20,31 @@ #include <signal.h> #include <sys/mman.h> #include <sched.h> +#include <syslog.h> -char unload_heads_path[64]; +static int noled; +static char unload_heads_path[64]; +static char device_path[32]; +static const char app_name[] = "FREE FALL"; -int set_unload_heads_path(char *device) +static int set_unload_heads_path(char *device) { char devname[64]; if (strlen(device) <= 5 || strncmp(device, "/dev/", 5) != 0) return -EINVAL; - strncpy(devname, device + 5, sizeof(devname)); + strncpy(devname, device + 5, sizeof(devname) - 1); + strncpy(device_path, device, sizeof(device_path) - 1); snprintf(unload_heads_path, sizeof(unload_heads_path) - 1, "/sys/block/%s/device/unload_heads", devname); return 0; } -int valid_disk(void) + +static int valid_disk(void) { int fd = open(unload_heads_path, O_RDONLY); + if (fd < 0) { perror(unload_heads_path); return 0; @@ -45,43 +54,54 @@ int valid_disk(void) return 1; } -void write_int(char *path, int i) +static void write_int(char *path, int i) { char buf[1024]; int fd = open(path, O_RDWR); + if (fd < 0) { perror("open"); exit(1); } + sprintf(buf, "%d", i); + if (write(fd, buf, strlen(buf)) != strlen(buf)) { perror("write"); exit(1); } + close(fd); } -void set_led(int on) +static void set_led(int on) { + if (noled) + return; write_int("/sys/class/leds/hp::hddprotect/brightness", on); } -void protect(int seconds) +static void protect(int seconds) { + const char *str = (seconds == 0) ? "Unparked" : "Parked"; + write_int(unload_heads_path, seconds*1000); + syslog(LOG_INFO, "%s %s disk head\n", str, device_path); } -int on_ac(void) +static int on_ac(void) { -// /sys/class/power_supply/AC0/online + /* /sys/class/power_supply/AC0/online */ + return 1; } -int lid_open(void) +static int lid_open(void) { -// /proc/acpi/button/lid/LID/state + /* /proc/acpi/button/lid/LID/state */ + return 1; } -void ignore_me(void) +static void ignore_me(int signum) { protect(0); set_led(0); @@ -90,6 +110,7 @@ void ignore_me(void) int main(int argc, char **argv) { int fd, ret; + struct stat st; struct sched_param param; if (argc == 1) @@ -111,7 +132,16 @@ int main(int argc, char **argv) return EXIT_FAILURE; } - daemon(0, 0); + if (stat("/sys/class/leds/hp::hddprotect/brightness", &st)) + noled = 1; + + if (daemon(0, 0) != 0) { + perror("daemon"); + return EXIT_FAILURE; + } + + openlog(app_name, LOG_CONS | LOG_PID | LOG_NDELAY, LOG_LOCAL1); + param.sched_priority = sched_get_priority_max(SCHED_FIFO); sched_setscheduler(0, SCHED_FIFO, ¶m); mlockall(MCL_CURRENT|MCL_FUTURE); @@ -141,6 +171,7 @@ int main(int argc, char **argv) alarm(20); } + closelog(); close(fd); return EXIT_SUCCESS; } diff --git a/Documentation/memory-hotplug.txt b/Documentation/memory-hotplug.txt index f304edb8fbe7..45134dc23854 100644 --- a/Documentation/memory-hotplug.txt +++ b/Documentation/memory-hotplug.txt @@ -209,15 +209,12 @@ If memory device is found, memory hotplug code will be called. 4.2 Notify memory hot-add event by hand ------------ -On powerpc, the firmware does not notify a memory hotplug event to the kernel. -Therefore, "probe" interface is supported to notify the event to the kernel. -This interface depends on CONFIG_ARCH_MEMORY_PROBE. - -CONFIG_ARCH_MEMORY_PROBE is supported on powerpc only. On x86, this config -option is disabled by default since ACPI notifies a memory hotplug event to -the kernel, which performs its hotplug operation as the result. Please -enable this option if you need the "probe" interface for testing purposes -on x86. 
+On some architectures, the firmware may not notify the kernel of a memory +hotplug event. Therefore, the memory "probe" interface is supported to +explicitly notify the kernel. This interface depends on +CONFIG_ARCH_MEMORY_PROBE and can be configured on powerpc, sh, and x86 +if hotplug is supported, although for x86 this should be handled by ACPI +notification. Probe interface is located at /sys/devices/system/memory/probe diff --git a/Documentation/ptp/testptp.c b/Documentation/ptp/testptp.c index f1ac2dae999e..ba1d50200c46 100644 --- a/Documentation/ptp/testptp.c +++ b/Documentation/ptp/testptp.c @@ -17,6 +17,7 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ +#define _GNU_SOURCE #include <errno.h> #include <fcntl.h> #include <inttypes.h> @@ -46,12 +47,14 @@ #define CLOCK_INVALID -1 #endif -/* When glibc offers the syscall, this will go away. */ +/* clock_adjtime is not available in GLIBC < 2.14 */ +#if !__GLIBC_PREREQ(2, 14) #include <sys/syscall.h> static int clock_adjtime(clockid_t id, struct timex *tx) { return syscall(__NR_clock_adjtime, id, tx); } +#endif static clockid_t get_clockid(int fd) { diff --git a/Documentation/sound/alsa/HD-Audio-Models.txt b/Documentation/sound/alsa/HD-Audio-Models.txt index 85c362d8ea34..d1ab5e17eb13 100644 --- a/Documentation/sound/alsa/HD-Audio-Models.txt +++ b/Documentation/sound/alsa/HD-Audio-Models.txt @@ -286,6 +286,11 @@ STAC92HD83* hp-inv-led HP with broken BIOS for inverted mute LED auto BIOS setup (default) +STAC92HD95 +========== + hp-led LED support for HP laptops + hp-bass Bass HPF setup for HP Spectre 13 + STAC9872 ======== vaio VAIO laptop without SPDIF diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt index 708bb7f1b7e0..c14374e71775 100644 --- a/Documentation/sysctl/kernel.txt +++ b/Documentation/sysctl/kernel.txt @@ -75,6 +75,7 @@ show up in /proc/sys/kernel: - shmall - shmmax [ sysv ipc ] - shmmni +- softlockup_all_cpu_backtrace - stop-a [ SPARC only ] - sysrq ==> Documentation/sysrq.txt - sysctl_writes_strict @@ -783,6 +784,22 @@ via the /proc/sys interface: ============================================================== +softlockup_all_cpu_backtrace: + +This value controls the soft lockup detector thread's behavior +when a soft lockup condition is detected as to whether or not +to gather further debug information. If enabled, each cpu will +be issued an NMI and instructed to capture stack trace. + +This feature is only applicable for architectures which support +NMI. + +0: do nothing. This is the default behavior. + +1: on detection capture more debug information. + +============================================================== + tainted: Non-zero if the kernel has been tainted. Numeric values, which diff --git a/Documentation/sysctl/vm.txt b/Documentation/sysctl/vm.txt index bd4b34c03738..4415aa915681 100644 --- a/Documentation/sysctl/vm.txt +++ b/Documentation/sysctl/vm.txt @@ -702,7 +702,8 @@ The batch value of each per cpu pagelist is also updated as a result. It is set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8) The initial value is zero. Kernel does not use this value at boot time to set -the high water marks for each per cpu page list. +the high water marks for each per cpu page list. If the user writes '0' to this +sysctl, it will revert to this default behavior. 
============================================================== diff --git a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl index 00e425faa2fd..78c9a7b2b58f 100644 --- a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl +++ b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl @@ -47,7 +47,6 @@ use constant HIGH_KSWAPD_REWAKEUP => 21; use constant HIGH_NR_SCANNED => 22; use constant HIGH_NR_TAKEN => 23; use constant HIGH_NR_RECLAIMED => 24; -use constant HIGH_NR_CONTIG_DIRTY => 25; my %perprocesspid; my %perprocess; @@ -105,7 +104,7 @@ my $regex_direct_end_default = 'nr_reclaimed=([0-9]*)'; my $regex_kswapd_wake_default = 'nid=([0-9]*) order=([0-9]*)'; my $regex_kswapd_sleep_default = 'nid=([0-9]*)'; my $regex_wakeup_kswapd_default = 'nid=([0-9]*) zid=([0-9]*) order=([0-9]*)'; -my $regex_lru_isolate_default = 'isolate_mode=([0-9]*) order=([0-9]*) nr_requested=([0-9]*) nr_scanned=([0-9]*) nr_taken=([0-9]*) contig_taken=([0-9]*) contig_dirty=([0-9]*) contig_failed=([0-9]*)'; +my $regex_lru_isolate_default = 'isolate_mode=([0-9]*) order=([0-9]*) nr_requested=([0-9]*) nr_scanned=([0-9]*) nr_taken=([0-9]*) file=([0-9]*)'; my $regex_lru_shrink_inactive_default = 'nid=([0-9]*) zid=([0-9]*) nr_scanned=([0-9]*) nr_reclaimed=([0-9]*) priority=([0-9]*) flags=([A-Z_|]*)'; my $regex_lru_shrink_active_default = 'lru=([A-Z_]*) nr_scanned=([0-9]*) nr_rotated=([0-9]*) priority=([0-9]*)'; my $regex_writepage_default = 'page=([0-9a-f]*) pfn=([0-9]*) flags=([A-Z_|]*)'; @@ -200,7 +199,7 @@ $regex_lru_isolate = generate_traceevent_regex( $regex_lru_isolate_default, "isolate_mode", "order", "nr_requested", "nr_scanned", "nr_taken", - "contig_taken", "contig_dirty", "contig_failed"); + "file"); $regex_lru_shrink_inactive = generate_traceevent_regex( "vmscan/mm_vmscan_lru_shrink_inactive", $regex_lru_shrink_inactive_default, @@ -375,7 +374,6 @@ EVENT_PROCESS: } my $isolate_mode = $1; my $nr_scanned = $4; - my $nr_contig_dirty = $7; # To closer match vmstat scanning statistics, only count isolate_both # and isolate_inactive as scanning. 
isolate_active is rotation @@ -385,7 +383,6 @@ EVENT_PROCESS: if ($isolate_mode != 2) { $perprocesspid{$process_pid}->{HIGH_NR_SCANNED} += $nr_scanned; } - $perprocesspid{$process_pid}->{HIGH_NR_CONTIG_DIRTY} += $nr_contig_dirty; } elsif ($tracepoint eq "mm_vmscan_lru_shrink_inactive") { $details = $6; if ($details !~ /$regex_lru_shrink_inactive/o) { @@ -539,13 +536,6 @@ sub dump_stats { } } } - if ($stats{$process_pid}->{HIGH_NR_CONTIG_DIRTY}) { - print " "; - my $count = $stats{$process_pid}->{HIGH_NR_CONTIG_DIRTY}; - if ($count != 0) { - print "contig-dirty=$count "; - } - } print "\n"; } diff --git a/MAINTAINERS b/MAINTAINERS index 3f2e171047b9..86efa7e213c2 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -156,7 +156,6 @@ F: drivers/net/hamradio/6pack.c 8169 10/100/1000 GIGABIT ETHERNET DRIVER M: Realtek linux nic maintainers <nic_swsd@realtek.com> -M: Francois Romieu <romieu@fr.zoreil.com> L: netdev@vger.kernel.org S: Maintained F: drivers/net/ethernet/realtek/r8169.c @@ -943,16 +942,10 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git F: arch/arm/mach-imx/ +F: arch/arm/mach-mxs/ F: arch/arm/boot/dts/imx* F: arch/arm/configs/imx*_defconfig -ARM/FREESCALE MXS ARM ARCHITECTURE -M: Shawn Guo <shawn.guo@linaro.org> -L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) -S: Maintained -T: git git://git.linaro.org/people/shawnguo/linux-2.6.git -F: arch/arm/mach-mxs/ - ARM/GLOMATION GESBC9312SX MACHINE SUPPORT M: Lennert Buytenhek <kernel@wantstofly.org> L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) @@ -1052,9 +1045,33 @@ M: Santosh Shilimkar <santosh.shilimkar@ti.com> L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: arch/arm/mach-keystone/ -F: drivers/clk/keystone/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone.git +ARM/TEXAS INSTRUMENT KEYSTONE CLOCK FRAMEWORK +M: Santosh Shilimkar <santosh.shilimkar@ti.com> +L: linux-kernel@vger.kernel.org +S: Maintained +F: drivers/clk/keystone/ + +ARM/TEXAS INSTRUMENT KEYSTONE ClOCKSOURCE +M: Santosh Shilimkar <santosh.shilimkar@ti.com> +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) +L: linux-kernel@vger.kernel.org +S: Maintained +F: drivers/clocksource/timer-keystone.c + +ARM/TEXAS INSTRUMENT KEYSTONE RESET DRIVER +M: Santosh Shilimkar <santosh.shilimkar@ti.com> +L: linux-kernel@vger.kernel.org +S: Maintained +F: drivers/power/reset/keystone-reset.c + +ARM/TEXAS INSTRUMENT AEMIF/EMIF DRIVERS +M: Santosh Shilimkar <santosh.shilimkar@ti.com> +L: linux-kernel@vger.kernel.org +S: Maintained +F: drivers/memory/*emif* + ARM/LOGICPD PXA270 MACHINE SUPPORT M: Lennert Buytenhek <kernel@wantstofly.org> L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) @@ -1296,6 +1313,20 @@ W: http://oss.renesas.com Q: http://patchwork.kernel.org/project/linux-sh/list/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas.git next S: Supported +F: arch/arm/boot/dts/emev2* +F: arch/arm/boot/dts/r7s* +F: arch/arm/boot/dts/r8a* +F: arch/arm/boot/dts/sh* +F: arch/arm/configs/ape6evm_defconfig +F: arch/arm/configs/armadillo800eva_defconfig +F: arch/arm/configs/bockw_defconfig +F: arch/arm/configs/genmai_defconfig +F: arch/arm/configs/koelsch_defconfig +F: arch/arm/configs/kzm9g_defconfig +F: arch/arm/configs/lager_defconfig +F: arch/arm/configs/mackerel_defconfig +F: arch/arm/configs/marzen_defconfig +F: 
arch/arm/configs/shmobile_defconfig F: arch/arm/mach-shmobile/ F: drivers/sh/ @@ -2917,6 +2948,9 @@ L: linux-doc@vger.kernel.org T: quilt http://www.infradead.org/~rdunlap/Doc/patches/ S: Maintained F: Documentation/ +X: Documentation/ABI/ +X: Documentation/devicetree/ +X: Documentation/[a-z][a-z]_[A-Z][A-Z]/ DOUBLETALK DRIVER M: "James R. Van Zandt" <jrv@vanzandt.mv.com> @@ -3189,14 +3223,6 @@ L: linux-scsi@vger.kernel.org S: Maintained F: drivers/scsi/eata_pio.* -EBTABLES -L: netfilter-devel@vger.kernel.org -W: http://ebtables.sourceforge.net/ -S: Orphan -F: include/linux/netfilter_bridge/ebt_*.h -F: include/uapi/linux/netfilter_bridge/ebt_*.h -F: net/bridge/netfilter/ebt*.c - EC100 MEDIA DRIVER M: Antti Palosaari <crope@iki.fi> L: linux-media@vger.kernel.org @@ -4484,8 +4510,7 @@ S: Supported F: drivers/idle/i7300_idle.c IEEE 802.15.4 SUBSYSTEM -M: Alexander Smirnov <alex.bluesman.smirnov@gmail.com> -M: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> +M: Alexander Aring <alex.aring@gmail.com> L: linux-zigbee-devel@lists.sourceforge.net (moderated for non-subscribers) W: http://apps.sourceforge.net/trac/linux-zigbee T: git git://git.kernel.org/pub/scm/linux/kernel/git/lowpan/lowpan.git @@ -5517,10 +5542,11 @@ S: Maintained F: arch/arm/mach-lpc32xx/ LSILOGIC MPT FUSION DRIVERS (FC/SAS/SPI) -M: Nagalakshmi Nandigama <Nagalakshmi.Nandigama@lsi.com> -M: Sreekanth Reddy <Sreekanth.Reddy@lsi.com> -M: support@lsi.com -L: DL-MPTFusionLinux@lsi.com +M: Nagalakshmi Nandigama <nagalakshmi.nandigama@avagotech.com> +M: Praveen Krishnamoorthy <praveen.krishnamoorthy@avagotech.com> +M: Sreekanth Reddy <sreekanth.reddy@avagotech.com> +M: Abhijit Mahajan <abhijit.mahajan@avagotech.com> +L: MPT-FusionLinux.pdl@avagotech.com L: linux-scsi@vger.kernel.org W: http://www.lsilogic.com/support S: Supported @@ -6105,12 +6131,11 @@ F: Documentation/networking/s2io.txt F: Documentation/networking/vxge.txt F: drivers/net/ethernet/neterion/ -NETFILTER/IPTABLES +NETFILTER ({IP,IP6,ARP,EB,NF}TABLES) M: Pablo Neira Ayuso <pablo@netfilter.org> M: Patrick McHardy <kaber@trash.net> M: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu> L: netfilter-devel@vger.kernel.org -L: netfilter@vger.kernel.org L: coreteam@netfilter.org W: http://www.netfilter.org/ W: http://www.iptables.org/ @@ -6774,7 +6799,7 @@ F: arch/x86/kernel/quirks.c PCI DRIVER FOR IMX6 M: Richard Zhu <r65037@freescale.com> -M: Shawn Guo <shawn.guo@linaro.org> +M: Shawn Guo <shawn.guo@freescale.com> L: linux-pci@vger.kernel.org L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained @@ -6931,6 +6956,12 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: drivers/pinctrl/pinctrl-at91.c +PIN CONTROLLER - RENESAS +M: Laurent Pinchart <laurent.pinchart@ideasonboard.com> +L: linux-sh@vger.kernel.org +S: Maintained +F: drivers/pinctrl/sh-pfc/ + PIN CONTROLLER - SAMSUNG M: Tomasz Figa <t.figa@samsung.com> M: Thomas Abraham <thomas.abraham@linaro.org> @@ -7994,6 +8025,16 @@ F: drivers/ata/ F: include/linux/ata.h F: include/linux/libata.h +SERIAL ATA AHCI PLATFORM devices support +M: Hans de Goede <hdegoede@redhat.com> +M: Tejun Heo <tj@kernel.org> +L: linux-ide@vger.kernel.org +T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git +S: Supported +F: drivers/ata/ahci_platform.c +F: drivers/ata/libahci_platform.c +F: include/linux/ahci_platform.h + SERVER ENGINES 10Gbps iSCSI - BladeEngine 2 DRIVER M: Jayamohan Kallickal <jayamohan.kallickal@emulex.com> L: linux-scsi@vger.kernel.org @@ -8196,13 
+8237,15 @@ S: Maintained F: drivers/usb/misc/sisusbvga/ SLAB ALLOCATOR -M: Christoph Lameter <cl@linux-foundation.org> +M: Christoph Lameter <cl@linux.com> M: Pekka Enberg <penberg@kernel.org> -M: Matt Mackall <mpm@selenic.com> +M: David Rientjes <rientjes@google.com> +M: Joonsoo Kim <iamjoonsoo.kim@lge.com> +M: Andrew Morton <akpm@linux-foundation.org> L: linux-mm@kvack.org S: Maintained F: include/linux/sl?b*.h -F: mm/sl?b.c +F: mm/sl?b* SLEEPABLE READ-COPY UPDATE (SRCU) M: Lai Jiangshan <laijs@cn.fujitsu.com> @@ -8969,7 +9012,7 @@ F: drivers/media/radio/radio-raremono.c THERMAL M: Zhang Rui <rui.zhang@intel.com> -M: Eduardo Valentin <eduardo.valentin@ti.com> +M: Eduardo Valentin <edubezval@gmail.com> L: linux-pm@vger.kernel.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux-soc-thermal.git @@ -8996,7 +9039,7 @@ S: Maintained F: drivers/platform/x86/thinkpad_acpi.c TI BANDGAP AND THERMAL DRIVER -M: Eduardo Valentin <eduardo.valentin@ti.com> +M: Eduardo Valentin <edubezval@gmail.com> L: linux-pm@vger.kernel.org S: Supported F: drivers/thermal/ti-soc-thermal/ @@ -9410,12 +9453,6 @@ S: Maintained F: drivers/usb/host/isp116x* F: include/linux/usb/isp116x.h -USB KAWASAKI LSI DRIVER -M: Oliver Neukum <oliver@neukum.org> -L: linux-usb@vger.kernel.org -S: Maintained -F: drivers/usb/serial/kl5kusb105.* - USB MASS STORAGE DRIVER M: Matthew Dharm <mdharm-usb@one-eyed-alien.net> L: linux-usb@vger.kernel.org @@ -9443,12 +9480,6 @@ S: Maintained F: Documentation/usb/ohci.txt F: drivers/usb/host/ohci* -USB OPTION-CARD DRIVER -M: Matthias Urlichs <smurf@smurf.noris.de> -L: linux-usb@vger.kernel.org -S: Maintained -F: drivers/usb/serial/option.c - USB PEGASUS DRIVER M: Petko Manolov <petkan@nucleusys.com> L: linux-usb@vger.kernel.org @@ -9481,7 +9512,7 @@ S: Maintained F: drivers/net/usb/rtl8150.c USB SERIAL SUBSYSTEM -M: Johan Hovold <jhovold@gmail.com> +M: Johan Hovold <johan@kernel.org> L: linux-usb@vger.kernel.org S: Maintained F: Documentation/usb/usb-serial.txt @@ -1,7 +1,7 @@ VERSION = 3 PATCHLEVEL = 16 SUBLEVEL = 0 -EXTRAVERSION = -rc2 +EXTRAVERSION = -rc7 NAME = Shuffling Zombie Juror # *DOCUMENTATION* @@ -41,6 +41,29 @@ unexport GREP_OPTIONS # descending is started. They are now explicitly listed as the # prepare rule. +# Beautify output +# --------------------------------------------------------------------------- +# +# Normally, we echo the whole command before executing it. By making +# that echo $($(quiet)$(cmd)), we now have the possibility to set +# $(quiet) to choose other forms of output instead, e.g. +# +# quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@ +# cmd_cc_o_c = $(CC) $(c_flags) -c -o $@ $< +# +# If $(quiet) is empty, the whole command will be printed. +# If it is set to "quiet_", only the short version will be printed. +# If it is set to "silent_", nothing will be printed at all, since +# the variable $(silent_cmd_cc_o_c) doesn't exist. +# +# A simple variant is to prefix commands with $(Q) - that's useful +# for commands that shall be hidden in non-verbose mode. +# +# $(Q)ln $@ :< +# +# If KBUILD_VERBOSE equals 0 then the above command will be hidden. +# If KBUILD_VERBOSE equals 1 then the above command is displayed. 
+# # To put more focus on warnings, be less verbose as default # Use 'make V=1' to see the full commands @@ -51,6 +74,29 @@ ifndef KBUILD_VERBOSE KBUILD_VERBOSE = 0 endif +ifeq ($(KBUILD_VERBOSE),1) + quiet = + Q = +else + quiet=quiet_ + Q = @ +endif + +# If the user is running make -s (silent mode), suppress echoing of +# commands + +ifneq ($(filter 4.%,$(MAKE_VERSION)),) # make-4 +ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),) + quiet=silent_ +endif +else # make-3.8x +ifneq ($(filter s% -s%,$(MAKEFLAGS)),) + quiet=silent_ +endif +endif + +export quiet Q KBUILD_VERBOSE + # Call a source code checker (by default, "sparse") as part of the # C compilation. # @@ -126,7 +172,13 @@ PHONY += $(MAKECMDGOALS) sub-make $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make @: +# Fake the "Entering directory" message once, so that IDEs/editors are +# able to understand relative filenames. + echodir := @echo + quiet_echodir := @echo +silent_echodir := @: sub-make: FORCE + $($(quiet)echodir) "make[1]: Entering directory \`$(KBUILD_OUTPUT)'" $(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \ KBUILD_SRC=$(CURDIR) \ KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile \ @@ -289,52 +341,6 @@ endif export KBUILD_MODULES KBUILD_BUILTIN export KBUILD_CHECKSRC KBUILD_SRC KBUILD_EXTMOD -# Beautify output -# --------------------------------------------------------------------------- -# -# Normally, we echo the whole command before executing it. By making -# that echo $($(quiet)$(cmd)), we now have the possibility to set -# $(quiet) to choose other forms of output instead, e.g. -# -# quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@ -# cmd_cc_o_c = $(CC) $(c_flags) -c -o $@ $< -# -# If $(quiet) is empty, the whole command will be printed. -# If it is set to "quiet_", only the short version will be printed. -# If it is set to "silent_", nothing will be printed at all, since -# the variable $(silent_cmd_cc_o_c) doesn't exist. -# -# A simple variant is to prefix commands with $(Q) - that's useful -# for commands that shall be hidden in non-verbose mode. -# -# $(Q)ln $@ :< -# -# If KBUILD_VERBOSE equals 0 then the above command will be hidden. -# If KBUILD_VERBOSE equals 1 then the above command is displayed. 
- -ifeq ($(KBUILD_VERBOSE),1) - quiet = - Q = -else - quiet=quiet_ - Q = @ -endif - -# If the user is running make -s (silent mode), suppress echoing of -# commands - -ifneq ($(filter 4.%,$(MAKE_VERSION)),) # make-4 -ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),) - quiet=silent_ -endif -else # make-3.8x -ifneq ($(filter s% -s%,$(MAKEFLAGS)),) - quiet=silent_ -endif -endif - -export quiet Q KBUILD_VERBOSE - ifneq ($(CC),) ifeq ($(shell $(CC) -v 2>&1 | grep -c "clang version"), 1) COMPILER := clang @@ -682,6 +688,8 @@ KBUILD_CFLAGS += -fomit-frame-pointer endif endif +KBUILD_CFLAGS += $(call cc-option, -fno-var-tracking-assignments) + ifdef CONFIG_DEBUG_INFO KBUILD_CFLAGS += -g KBUILD_AFLAGS += -Wa,-gdwarf-2 @@ -1170,7 +1178,7 @@ distclean: mrproper # Packaging of the kernel to various formats # --------------------------------------------------------------------------- # rpm target kept for backward compatibility -package-dir := $(srctree)/scripts/package +package-dir := scripts/package %src-pkg: FORCE $(Q)$(MAKE) $(build)=$(package-dir) $@ diff --git a/arch/arc/include/asm/cache.h b/arch/arc/include/asm/cache.h index c1d3d2da1191..b3c750979aa1 100644 --- a/arch/arc/include/asm/cache.h +++ b/arch/arc/include/asm/cache.h @@ -60,7 +60,7 @@ extern void read_decode_cache_bcr(void); #define ARC_REG_IC_IVIC 0x10 #define ARC_REG_IC_CTRL 0x11 #define ARC_REG_IC_IVIL 0x19 -#if defined(CONFIG_ARC_MMU_V3) || defined (CONFIG_ARC_MMU_V4) +#if defined(CONFIG_ARC_MMU_V3) #define ARC_REG_IC_PTAG 0x1E #endif @@ -74,7 +74,7 @@ extern void read_decode_cache_bcr(void); #define ARC_REG_DC_IVDL 0x4A #define ARC_REG_DC_FLSH 0x4B #define ARC_REG_DC_FLDL 0x4C -#if defined(CONFIG_ARC_MMU_V3) || defined (CONFIG_ARC_MMU_V4) +#if defined(CONFIG_ARC_MMU_V3) #define ARC_REG_DC_PTAG 0x5C #endif diff --git a/arch/arc/include/uapi/asm/ptrace.h b/arch/arc/include/uapi/asm/ptrace.h index 2618cc13ba75..76a7739aab1c 100644 --- a/arch/arc/include/uapi/asm/ptrace.h +++ b/arch/arc/include/uapi/asm/ptrace.h @@ -11,6 +11,7 @@ #ifndef _UAPI__ASM_ARC_PTRACE_H #define _UAPI__ASM_ARC_PTRACE_H +#define PTRACE_GET_THREAD_AREA 25 #ifndef __ASSEMBLY__ /* diff --git a/arch/arc/kernel/ctx_sw_asm.S b/arch/arc/kernel/ctx_sw_asm.S index 2ff0347a2fd7..e248594097e7 100644 --- a/arch/arc/kernel/ctx_sw_asm.S +++ b/arch/arc/kernel/ctx_sw_asm.S @@ -10,9 +10,9 @@ * -This is the more "natural" hand written assembler */ +#include <linux/linkage.h> #include <asm/entry.h> /* For the SAVE_* macros */ #include <asm/asm-offsets.h> -#include <asm/linkage.h> #define KSP_WORD_OFF ((TASK_THREAD + THREAD_KSP) / 4) diff --git a/arch/arc/kernel/devtree.c b/arch/arc/kernel/devtree.c index 0b3ef4025d89..fffdb5e41b20 100644 --- a/arch/arc/kernel/devtree.c +++ b/arch/arc/kernel/devtree.c @@ -41,7 +41,7 @@ const struct machine_desc * __init setup_machine_fdt(void *dt) { const struct machine_desc *mdesc; unsigned long dt_root; - void *clk; + const void *clk; int len; if (!early_init_dt_scan(dt)) diff --git a/arch/arc/kernel/head.S b/arch/arc/kernel/head.S index 07a58f2d3077..4d2481bd8b98 100644 --- a/arch/arc/kernel/head.S +++ b/arch/arc/kernel/head.S @@ -77,10 +77,11 @@ stext: ; Clear BSS before updating any globals ; XXX: use ZOL here mov r5, __bss_start - mov r6, __bss_stop + sub r6, __bss_stop, r5 + lsr.f lp_count, r6, 2 + lpnz 1f + st.ab 0, [r5, 4] 1: - st.ab 0, [r5,4] - brlt r5, r6, 1b ; Uboot - kernel ABI ; r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2 diff --git a/arch/arc/kernel/ptrace.c b/arch/arc/kernel/ptrace.c index 
5d76706139dd..13b3ffb27a38 100644 --- a/arch/arc/kernel/ptrace.c +++ b/arch/arc/kernel/ptrace.c @@ -146,6 +146,10 @@ long arch_ptrace(struct task_struct *child, long request, pr_debug("REQ=%ld: ADDR =0x%lx, DATA=0x%lx)\n", request, addr, data); switch (request) { + case PTRACE_GET_THREAD_AREA: + ret = put_user(task_thread_info(child)->thr_ptr, + (unsigned long __user *)data); + break; default: ret = ptrace_request(child, request, addr, data); break; diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c index cf90b6f4d3e0..c802bb500602 100644 --- a/arch/arc/kernel/smp.c +++ b/arch/arc/kernel/smp.c @@ -337,8 +337,19 @@ irqreturn_t do_IPI(int irq, void *dev_id) * API called by platform code to hookup arch-common ISR to their IPI IRQ */ static DEFINE_PER_CPU(int, ipi_dev); + +static struct irqaction arc_ipi_irq = { + .name = "IPI Interrupt", + .flags = IRQF_PERCPU, + .handler = do_IPI, +}; + int smp_ipi_irq_setup(int cpu, int irq) { - int *dev_id = &per_cpu(ipi_dev, smp_processor_id()); - return request_percpu_irq(irq, do_IPI, "IPI Interrupt", dev_id); + if (!cpu) + return setup_irq(irq, &arc_ipi_irq); + else + arch_unmask_irq(irq); + + return 0; } diff --git a/arch/arc/kernel/vmlinux.lds.S b/arch/arc/kernel/vmlinux.lds.S index 2555f5886af6..dd35bde39f69 100644 --- a/arch/arc/kernel/vmlinux.lds.S +++ b/arch/arc/kernel/vmlinux.lds.S @@ -116,7 +116,7 @@ SECTIONS _edata = .; - BSS_SECTION(0, 0, 0) + BSS_SECTION(4, 4, 4) #ifdef CONFIG_ARC_DW2_UNWIND . = ALIGN(PAGE_SIZE); diff --git a/arch/arc/mm/cache_arc700.c b/arch/arc/mm/cache_arc700.c index 1f676c4794e0..353b202c37c9 100644 --- a/arch/arc/mm/cache_arc700.c +++ b/arch/arc/mm/cache_arc700.c @@ -389,7 +389,7 @@ static inline void __dc_line_op(unsigned long paddr, unsigned long vaddr, /*********************************************************** * Machine specific helper for per line I-Cache invalidate. 
*/ -static void __ic_line_inv_vaddr(unsigned long paddr, unsigned long vaddr, +static void __ic_line_inv_vaddr_local(unsigned long paddr, unsigned long vaddr, unsigned long sz) { unsigned long flags; @@ -405,6 +405,23 @@ static inline void __ic_entire_inv(void) read_aux_reg(ARC_REG_IC_CTRL); /* blocks */ } +struct ic_line_inv_vaddr_ipi { + unsigned long paddr, vaddr; + int sz; +}; + +static void __ic_line_inv_vaddr_helper(void *info) +{ + struct ic_line_inv_vaddr_ipi *ic_inv = (struct ic_line_inv_vaddr_ipi*) info; + __ic_line_inv_vaddr_local(ic_inv->paddr, ic_inv->vaddr, ic_inv->sz); +} + +static void __ic_line_inv_vaddr(unsigned long paddr, unsigned long vaddr, + unsigned long sz) +{ + struct ic_line_inv_vaddr_ipi ic_inv = { paddr, vaddr , sz}; + on_each_cpu(__ic_line_inv_vaddr_helper, &ic_inv, 1); +} #else #define __ic_entire_inv() @@ -553,12 +570,8 @@ void flush_icache_range(unsigned long kstart, unsigned long kend) */ void __sync_icache_dcache(unsigned long paddr, unsigned long vaddr, int len) { - unsigned long flags; - - local_irq_save(flags); - __ic_line_inv_vaddr(paddr, vaddr, len); __dc_line_op(paddr, vaddr, len, OP_FLUSH_N_INV); - local_irq_restore(flags); + __ic_line_inv_vaddr(paddr, vaddr, len); } /* wrapper to compile time eliminate alignment checks in flush loop */ diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 245058b3b0ef..88acf8bc1490 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -6,6 +6,7 @@ config ARM select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_HAVE_CUSTOM_GPIO_H select ARCH_MIGHT_HAVE_PC_PARPORT + select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_USE_BUILTIN_BSWAP select ARCH_USE_CMPXCHG_LOCKREF select ARCH_WANT_IPC_PARSE_VERSION diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile index 5986ff63b901..adb5ed9e269e 100644 --- a/arch/arm/boot/dts/Makefile +++ b/arch/arm/boot/dts/Makefile @@ -357,7 +357,7 @@ dtb-$(CONFIG_ARCH_STI)+= stih407-b2120.dtb \ stih415-b2020.dtb \ stih416-b2000.dtb \ stih416-b2020.dtb \ - stih416-b2020-revE.dtb + stih416-b2020e.dtb dtb-$(CONFIG_MACH_SUN4I) += \ sun4i-a10-a1000.dtb \ sun4i-a10-cubieboard.dtb \ diff --git a/arch/arm/boot/dts/am335x-evm.dts b/arch/arm/boot/dts/am335x-evm.dts index ecb267767cf5..e2156a583de7 100644 --- a/arch/arm/boot/dts/am335x-evm.dts +++ b/arch/arm/boot/dts/am335x-evm.dts @@ -529,8 +529,8 @@ serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */ 0 0 1 2 >; - tx-num-evt = <1>; - rx-num-evt = <1>; + tx-num-evt = <32>; + rx-num-evt = <32>; }; &tps { diff --git a/arch/arm/boot/dts/am335x-evmsk.dts b/arch/arm/boot/dts/am335x-evmsk.dts index ab9a34ce524c..80a3b215e7d6 100644 --- a/arch/arm/boot/dts/am335x-evmsk.dts +++ b/arch/arm/boot/dts/am335x-evmsk.dts @@ -560,8 +560,8 @@ serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */ 0 0 1 2 >; - tx-num-evt = <1>; - rx-num-evt = <1>; + tx-num-evt = <32>; + rx-num-evt = <32>; }; &tscadc { diff --git a/arch/arm/boot/dts/am335x-igep0033.dtsi b/arch/arm/boot/dts/am335x-igep0033.dtsi index 8a0a72dc7dd7..a1a0cc5eb35c 100644 --- a/arch/arm/boot/dts/am335x-igep0033.dtsi +++ b/arch/arm/boot/dts/am335x-igep0033.dtsi @@ -105,10 +105,16 @@ &cpsw_emac0 { phy_id = <&davinci_mdio>, <0>; + phy-mode = "rmii"; }; &cpsw_emac1 { phy_id = <&davinci_mdio>, <1>; + phy-mode = "rmii"; +}; + +&phy_sel { + rmii-clock-ext; }; &elm { diff --git a/arch/arm/boot/dts/am43x-epos-evm.dts b/arch/arm/boot/dts/am43x-epos-evm.dts index 19f1f7e87597..90098f98a5c8 100644 --- a/arch/arm/boot/dts/am43x-epos-evm.dts +++ b/arch/arm/boot/dts/am43x-epos-evm.dts @@ -319,6 +319,10 
@@ phy-mode = "rmii"; }; +&phy_sel { + rmii-clock-ext; +}; + &i2c0 { status = "okay"; pinctrl-names = "default"; diff --git a/arch/arm/boot/dts/armada-380.dtsi b/arch/arm/boot/dts/armada-380.dtsi index e69bc6759c39..4173a8ab34e7 100644 --- a/arch/arm/boot/dts/armada-380.dtsi +++ b/arch/arm/boot/dts/armada-380.dtsi @@ -16,7 +16,7 @@ / { model = "Marvell Armada 380 family SoC"; - compatible = "marvell,armada380", "marvell,armada38x"; + compatible = "marvell,armada380"; cpus { #address-cells = <1>; diff --git a/arch/arm/boot/dts/armada-385-db.dts b/arch/arm/boot/dts/armada-385-db.dts index 5bae4731828b..1af886f1e486 100644 --- a/arch/arm/boot/dts/armada-385-db.dts +++ b/arch/arm/boot/dts/armada-385-db.dts @@ -16,7 +16,7 @@ / { model = "Marvell Armada 385 Development Board"; - compatible = "marvell,a385-db", "marvell,armada385", "marvell,armada38x"; + compatible = "marvell,a385-db", "marvell,armada385", "marvell,armada380"; chosen { bootargs = "console=ttyS0,115200 earlyprintk"; diff --git a/arch/arm/boot/dts/armada-385-rd.dts b/arch/arm/boot/dts/armada-385-rd.dts index 40893255a3f0..aaca2861dc87 100644 --- a/arch/arm/boot/dts/armada-385-rd.dts +++ b/arch/arm/boot/dts/armada-385-rd.dts @@ -17,7 +17,7 @@ / { model = "Marvell Armada 385 Reference Design"; - compatible = "marvell,a385-rd", "marvell,armada385", "marvell,armada38x"; + compatible = "marvell,a385-rd", "marvell,armada385", "marvell,armada380"; chosen { bootargs = "console=ttyS0,115200 earlyprintk"; diff --git a/arch/arm/boot/dts/armada-385.dtsi b/arch/arm/boot/dts/armada-385.dtsi index f011009bf4cf..6283d7912f71 100644 --- a/arch/arm/boot/dts/armada-385.dtsi +++ b/arch/arm/boot/dts/armada-385.dtsi @@ -16,7 +16,7 @@ / { model = "Marvell Armada 385 family SoC"; - compatible = "marvell,armada385", "marvell,armada38x"; + compatible = "marvell,armada385", "marvell,armada380"; cpus { #address-cells = <1>; diff --git a/arch/arm/boot/dts/armada-38x.dtsi b/arch/arm/boot/dts/armada-38x.dtsi index 3de364e81b52..689fa1a46728 100644 --- a/arch/arm/boot/dts/armada-38x.dtsi +++ b/arch/arm/boot/dts/armada-38x.dtsi @@ -20,7 +20,7 @@ / { model = "Marvell Armada 38x family SoC"; - compatible = "marvell,armada38x"; + compatible = "marvell,armada380"; aliases { gpio0 = &gpio0; diff --git a/arch/arm/boot/dts/at91sam9261.dtsi b/arch/arm/boot/dts/at91sam9261.dtsi index b309c1c6e848..04927db1d6bf 100644 --- a/arch/arm/boot/dts/at91sam9261.dtsi +++ b/arch/arm/boot/dts/at91sam9261.dtsi @@ -568,24 +568,17 @@ #size-cells = <0>; #interrupt-cells = <1>; - slow_rc_osc: slow_rc_osc { - compatible = "fixed-clock"; + main_osc: main_osc { + compatible = "atmel,at91rm9200-clk-main-osc"; #clock-cells = <0>; - clock-frequency = <32768>; - clock-accuracy = <50000000>; - }; - - clk32k: slck { - compatible = "atmel,at91sam9260-clk-slow"; - #clock-cells = <0>; - clocks = <&slow_rc_osc &slow_xtal>; + interrupts-extended = <&pmc AT91_PMC_MOSCS>; + clocks = <&main_xtal>; }; main: mainck { compatible = "atmel,at91rm9200-clk-main"; #clock-cells = <0>; - interrupts-extended = <&pmc AT91_PMC_MOSCS>; - clocks = <&main_xtal>; + clocks = <&main_osc>; }; plla: pllack { @@ -615,7 +608,7 @@ compatible = "atmel,at91rm9200-clk-master"; #clock-cells = <0>; interrupts-extended = <&pmc AT91_PMC_MCKRDY>; - clocks = <&clk32k>, <&main>, <&plla>, <&pllb>; + clocks = <&slow_xtal>, <&main>, <&plla>, <&pllb>; atmel,clk-output-range = <0 94000000>; atmel,clk-divisors = <1 2 4 0>; }; @@ -632,7 +625,7 @@ #address-cells = <1>; #size-cells = <0>; interrupt-parent = <&pmc>; - clocks = <&clk32k>, <&main>, 
<&plla>, <&pllb>; + clocks = <&slow_xtal>, <&main>, <&plla>, <&pllb>; prog0: prog0 { #clock-cells = <0>; diff --git a/arch/arm/boot/dts/at91sam9261ek.dts b/arch/arm/boot/dts/at91sam9261ek.dts index c6683ea8b743..aa35a7aec9a8 100644 --- a/arch/arm/boot/dts/at91sam9261ek.dts +++ b/arch/arm/boot/dts/at91sam9261ek.dts @@ -20,6 +20,10 @@ reg = <0x20000000 0x4000000>; }; + slow_xtal { + clock-frequency = <32768>; + }; + main_xtal { clock-frequency = <18432000>; }; diff --git a/arch/arm/boot/dts/at91sam9n12.dtsi b/arch/arm/boot/dts/at91sam9n12.dtsi index d1b82e6635d5..b84bac5bada4 100644 --- a/arch/arm/boot/dts/at91sam9n12.dtsi +++ b/arch/arm/boot/dts/at91sam9n12.dtsi @@ -132,8 +132,8 @@ <595000000 650000000 3 0>, <545000000 600000000 0 1>, <495000000 555000000 1 1>, - <445000000 500000000 1 2>, - <400000000 450000000 1 3>; + <445000000 500000000 2 1>, + <400000000 450000000 3 1>; }; plladiv: plladivck { @@ -925,7 +925,7 @@ compatible = "atmel,at91rm9200-ohci", "usb-ohci"; reg = <0x00500000 0x00100000>; interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; - clocks = <&usb>, <&uhphs_clk>, <&udphs_clk>, + clocks = <&usb>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>; clock-names = "usb_clk", "ohci_clk", "hclk", "uhpck"; status = "disabled"; diff --git a/arch/arm/boot/dts/at91sam9x5.dtsi b/arch/arm/boot/dts/at91sam9x5.dtsi index 1a57298636a5..2c0d6ea3ab41 100644 --- a/arch/arm/boot/dts/at91sam9x5.dtsi +++ b/arch/arm/boot/dts/at91sam9x5.dtsi @@ -140,8 +140,8 @@ 595000000 650000000 3 0 545000000 600000000 0 1 495000000 555000000 1 1 - 445000000 500000000 1 2 - 400000000 450000000 1 3>; + 445000000 500000000 2 1 + 400000000 450000000 3 1>; }; plladiv: plladivck { @@ -1045,6 +1045,8 @@ reg = <0x00500000 0x80000 0xf803c000 0x400>; interrupts = <23 IRQ_TYPE_LEVEL_HIGH 0>; + clocks = <&usb>, <&udphs_clk>; + clock-names = "hclk", "pclk"; status = "disabled"; ep0 { @@ -1122,6 +1124,7 @@ compatible = "atmel,at91sam9rl-pwm"; reg = <0xf8034000 0x300>; interrupts = <18 IRQ_TYPE_LEVEL_HIGH 4>; + clocks = <&pwm_clk>; #pwm-cells = <3>; status = "disabled"; }; @@ -1153,8 +1156,7 @@ compatible = "atmel,at91rm9200-ohci", "usb-ohci"; reg = <0x00600000 0x100000>; interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; - clocks = <&usb>, <&uhphs_clk>, <&udphs_clk>, - <&uhpck>; + clocks = <&usb>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>; clock-names = "usb_clk", "ohci_clk", "hclk", "uhpck"; status = "disabled"; }; diff --git a/arch/arm/boot/dts/dra7-evm.dts b/arch/arm/boot/dts/dra7-evm.dts index 4adc28039c30..83089540e324 100644 --- a/arch/arm/boot/dts/dra7-evm.dts +++ b/arch/arm/boot/dts/dra7-evm.dts @@ -240,6 +240,7 @@ regulator-name = "ldo3"; regulator-min-microvolt = <1800000>; regulator-max-microvolt = <1800000>; + regulator-always-on; regulator-boot-on; }; diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi index c29945e07c5a..80127638b379 100644 --- a/arch/arm/boot/dts/dra7.dtsi +++ b/arch/arm/boot/dts/dra7.dtsi @@ -773,7 +773,6 @@ clocks = <&qspi_gfclk_div>; clock-names = "fck"; num-cs = <4>; - interrupts = <0 343 0x4>; status = "disabled"; }; @@ -984,6 +983,17 @@ #size-cells = <1>; status = "disabled"; }; + + atl: atl@4843c000 { + compatible = "ti,dra7-atl"; + reg = <0x4843c000 0x3ff>; + ti,hwmods = "atl"; + ti,provided-clocks = <&atl_clkin0_ck>, <&atl_clkin1_ck>, + <&atl_clkin2_ck>, <&atl_clkin3_ck>; + clocks = <&atl_gfclk_mux>; + clock-names = "fck"; + status = "disabled"; + }; }; }; diff --git a/arch/arm/boot/dts/dra7xx-clocks.dtsi b/arch/arm/boot/dts/dra7xx-clocks.dtsi index b03cfe49d22b..dc7a292fe939 100644 --- 
a/arch/arm/boot/dts/dra7xx-clocks.dtsi +++ b/arch/arm/boot/dts/dra7xx-clocks.dtsi @@ -10,26 +10,26 @@ &cm_core_aon_clocks { atl_clkin0_ck: atl_clkin0_ck { #clock-cells = <0>; - compatible = "fixed-clock"; - clock-frequency = <0>; + compatible = "ti,dra7-atl-clock"; + clocks = <&atl_gfclk_mux>; }; atl_clkin1_ck: atl_clkin1_ck { #clock-cells = <0>; - compatible = "fixed-clock"; - clock-frequency = <0>; + compatible = "ti,dra7-atl-clock"; + clocks = <&atl_gfclk_mux>; }; atl_clkin2_ck: atl_clkin2_ck { #clock-cells = <0>; - compatible = "fixed-clock"; - clock-frequency = <0>; + compatible = "ti,dra7-atl-clock"; + clocks = <&atl_gfclk_mux>; }; atl_clkin3_ck: atl_clkin3_ck { #clock-cells = <0>; - compatible = "fixed-clock"; - clock-frequency = <0>; + compatible = "ti,dra7-atl-clock"; + clocks = <&atl_gfclk_mux>; }; hdmi_clkin_ck: hdmi_clkin_ck { @@ -673,10 +673,12 @@ l3_iclk_div: l3_iclk_div { #clock-cells = <0>; - compatible = "fixed-factor-clock"; + compatible = "ti,divider-clock"; + ti,max-div = <2>; + ti,bit-shift = <4>; + reg = <0x0100>; clocks = <&dpll_core_h12x2_ck>; - clock-mult = <1>; - clock-div = <1>; + ti,index-power-of-two; }; l4_root_clk_div: l4_root_clk_div { @@ -684,7 +686,7 @@ compatible = "fixed-factor-clock"; clocks = <&l3_iclk_div>; clock-mult = <1>; - clock-div = <1>; + clock-div = <2>; }; video1_clk2_div: video1_clk2_div { diff --git a/arch/arm/boot/dts/exynos4.dtsi b/arch/arm/boot/dts/exynos4.dtsi index b8ece4be41ca..17b22e9cc2aa 100644 --- a/arch/arm/boot/dts/exynos4.dtsi +++ b/arch/arm/boot/dts/exynos4.dtsi @@ -113,7 +113,7 @@ compatible = "arm,cortex-a9-gic"; #interrupt-cells = <3>; interrupt-controller; - reg = <0x10490000 0x1000>, <0x10480000 0x100>; + reg = <0x10490000 0x10000>, <0x10480000 0x10000>; }; combiner: interrupt-controller@10440000 { @@ -554,7 +554,7 @@ interrupts = <0 37 0>, <0 38 0>, <0 39 0>, <0 40 0>, <0 41 0>; clocks = <&clock CLK_PWM>; clock-names = "timers"; - #pwm-cells = <2>; + #pwm-cells = <3>; status = "disabled"; }; diff --git a/arch/arm/boot/dts/exynos5420.dtsi b/arch/arm/boot/dts/exynos5420.dtsi index e38532271ef9..15957227ffda 100644 --- a/arch/arm/boot/dts/exynos5420.dtsi +++ b/arch/arm/boot/dts/exynos5420.dtsi @@ -167,7 +167,7 @@ compatible = "samsung,exynos5420-audss-clock"; reg = <0x03810000 0x0C>; #clock-cells = <1>; - clocks = <&clock CLK_FIN_PLL>, <&clock CLK_FOUT_EPLL>, + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MAU_EPLL>, <&clock CLK_SCLK_MAUDIO0>, <&clock CLK_SCLK_MAUPCM0>; clock-names = "pll_ref", "pll_in", "sclk_audio", "sclk_pcm_in"; }; @@ -260,6 +260,9 @@ mfc_pd: power-domain@10044060 { compatible = "samsung,exynos4210-pd"; reg = <0x10044060 0x20>; + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>, + <&clock CLK_MOUT_USER_ACLK333>; + clock-names = "oscclk", "pclk0", "clk0"; }; disp_pd: power-domain@100440C0 { diff --git a/arch/arm/boot/dts/imx51-babbage.dts b/arch/arm/boot/dts/imx51-babbage.dts index 6bc3243a80d3..181d77fa2fa6 100644 --- a/arch/arm/boot/dts/imx51-babbage.dts +++ b/arch/arm/boot/dts/imx51-babbage.dts @@ -315,15 +315,15 @@ &esdhc1 { pinctrl-names = "default"; pinctrl-0 = <&pinctrl_esdhc1>; - fsl,cd-controller; - fsl,wp-controller; + cd-gpios = <&gpio1 0 GPIO_ACTIVE_LOW>; + wp-gpios = <&gpio1 1 GPIO_ACTIVE_HIGH>; status = "okay"; }; &esdhc2 { pinctrl-names = "default"; pinctrl-0 = <&pinctrl_esdhc2>; - cd-gpios = <&gpio1 6 GPIO_ACTIVE_HIGH>; + cd-gpios = <&gpio1 6 GPIO_ACTIVE_LOW>; wp-gpios = <&gpio1 5 GPIO_ACTIVE_HIGH>; status = "okay"; }; @@ -468,8 +468,8 @@ MX51_PAD_SD1_DATA1__SD1_DATA1 0x20d5 
MX51_PAD_SD1_DATA2__SD1_DATA2 0x20d5 MX51_PAD_SD1_DATA3__SD1_DATA3 0x20d5 - MX51_PAD_GPIO1_0__SD1_CD 0x20d5 - MX51_PAD_GPIO1_1__SD1_WP 0x20d5 + MX51_PAD_GPIO1_0__GPIO1_0 0x100 + MX51_PAD_GPIO1_1__GPIO1_1 0x100 >; }; diff --git a/arch/arm/boot/dts/imx51-eukrea-mbimxsd51-baseboard.dts b/arch/arm/boot/dts/imx51-eukrea-mbimxsd51-baseboard.dts index 75e66c9c6144..31cfb7f2b02e 100644 --- a/arch/arm/boot/dts/imx51-eukrea-mbimxsd51-baseboard.dts +++ b/arch/arm/boot/dts/imx51-eukrea-mbimxsd51-baseboard.dts @@ -107,7 +107,7 @@ &esdhc1 { pinctrl-names = "default"; pinctrl-0 = <&pinctrl_esdhc1 &pinctrl_esdhc1_cd>; - fsl,cd-controller; + cd-gpios = <&gpio1 0 GPIO_ACTIVE_LOW>; status = "okay"; }; @@ -206,7 +206,7 @@ pinctrl_esdhc1_cd: esdhc1_cd { fsl,pins = < - MX51_PAD_GPIO1_0__SD1_CD 0x20d5 + MX51_PAD_GPIO1_0__GPIO1_0 0xd5 >; }; diff --git a/arch/arm/boot/dts/imx53-m53evk.dts b/arch/arm/boot/dts/imx53-m53evk.dts index d5d146a8b149..c4956b0ffb35 100644 --- a/arch/arm/boot/dts/imx53-m53evk.dts +++ b/arch/arm/boot/dts/imx53-m53evk.dts @@ -21,27 +21,25 @@ <0xb0000000 0x20000000>; }; - soc { - display1: display@di1 { - compatible = "fsl,imx-parallel-display"; - interface-pix-fmt = "bgr666"; - pinctrl-names = "default"; - pinctrl-0 = <&pinctrl_ipu_disp1>; - - display-timings { - 800x480p60 { - native-mode; - clock-frequency = <31500000>; - hactive = <800>; - vactive = <480>; - hfront-porch = <40>; - hback-porch = <88>; - hsync-len = <128>; - vback-porch = <33>; - vfront-porch = <9>; - vsync-len = <3>; - vsync-active = <1>; - }; + display1: display@di1 { + compatible = "fsl,imx-parallel-display"; + interface-pix-fmt = "bgr666"; + pinctrl-names = "default"; + pinctrl-0 = <&pinctrl_ipu_disp1>; + + display-timings { + 800x480p60 { + native-mode; + clock-frequency = <31500000>; + hactive = <800>; + vactive = <480>; + hfront-porch = <40>; + hback-porch = <88>; + hsync-len = <128>; + vback-porch = <33>; + vfront-porch = <9>; + vsync-len = <3>; + vsync-active = <1>; }; }; diff --git a/arch/arm/boot/dts/imx6dl-hummingboard.dts b/arch/arm/boot/dts/imx6dl-hummingboard.dts index 5373a5f2782b..c8e51dd41b8f 100644 --- a/arch/arm/boot/dts/imx6dl-hummingboard.dts +++ b/arch/arm/boot/dts/imx6dl-hummingboard.dts @@ -143,6 +143,14 @@ fsl,pins = <MX6QDL_PAD_GPIO_0__GPIO1_IO00 0x1b0b0>; }; + pinctrl_hummingboard_usbotg_id: hummingboard-usbotg-id { + /* + * Similar to pinctrl_usbotg_2, but we want it + * pulled down for a fixed host connection. 
+ */ + fsl,pins = <MX6QDL_PAD_GPIO_1__USB_OTG_ID 0x13059>; + }; + pinctrl_hummingboard_usbotg_vbus: hummingboard-usbotg-vbus { fsl,pins = <MX6QDL_PAD_EIM_D22__GPIO3_IO22 0x1b0b0>; }; @@ -178,6 +186,8 @@ }; &usbotg { + pinctrl-names = "default"; + pinctrl-0 = <&pinctrl_hummingboard_usbotg_id>; vbus-supply = <®_usbotg_vbus>; status = "okay"; }; diff --git a/arch/arm/boot/dts/imx6q-gw51xx.dts b/arch/arm/boot/dts/imx6q-gw51xx.dts index af4929aee075..0e1406e58eff 100644 --- a/arch/arm/boot/dts/imx6q-gw51xx.dts +++ b/arch/arm/boot/dts/imx6q-gw51xx.dts @@ -11,7 +11,7 @@ /dts-v1/; #include "imx6q.dtsi" -#include "imx6qdl-gw54xx.dtsi" +#include "imx6qdl-gw51xx.dtsi" / { model = "Gateworks Ventana i.MX6 Quad GW51XX"; diff --git a/arch/arm/boot/dts/imx6qdl-cubox-i.dtsi b/arch/arm/boot/dts/imx6qdl-cubox-i.dtsi index 25da82a03110..e8e781656b3f 100644 --- a/arch/arm/boot/dts/imx6qdl-cubox-i.dtsi +++ b/arch/arm/boot/dts/imx6qdl-cubox-i.dtsi @@ -12,6 +12,19 @@ pinctrl-0 = <&pinctrl_cubox_i_ir>; }; + pwmleds { + compatible = "pwm-leds"; + pinctrl-names = "default"; + pinctrl-0 = <&pinctrl_cubox_i_pwm1>; + + front { + active-low; + label = "imx6:red:front"; + max-brightness = <248>; + pwms = <&pwm1 0 50000>; + }; + }; + regulators { compatible = "simple-bus"; @@ -109,6 +122,10 @@ >; }; + pinctrl_cubox_i_pwm1: cubox-i-pwm1-front-led { + fsl,pins = <MX6QDL_PAD_DISP0_DAT8__PWM1_OUT 0x1b0b0>; + }; + pinctrl_cubox_i_spdif: cubox-i-spdif { fsl,pins = <MX6QDL_PAD_GPIO_17__SPDIF_OUT 0x13091>; }; @@ -117,6 +134,14 @@ fsl,pins = <MX6QDL_PAD_GPIO_0__GPIO1_IO00 0x4001b0b0>; }; + pinctrl_cubox_i_usbotg_id: cubox-i-usbotg-id { + /* + * The Cubox-i pulls this low, but as it's pointless + * leaving it as a pull-up, even if it is just 10uA. + */ + fsl,pins = <MX6QDL_PAD_GPIO_1__USB_OTG_ID 0x13059>; + }; + pinctrl_cubox_i_usbotg_vbus: cubox-i-usbotg-vbus { fsl,pins = <MX6QDL_PAD_EIM_D22__GPIO3_IO22 0x4001b0b0>; }; @@ -153,6 +178,8 @@ }; &usbotg { + pinctrl-names = "default"; + pinctrl-0 = <&pinctrl_cubox_i_usbotg_id>; vbus-supply = <®_usbotg_vbus>; status = "okay"; }; diff --git a/arch/arm/boot/dts/imx6qdl-gw51xx.dtsi b/arch/arm/boot/dts/imx6qdl-gw51xx.dtsi index 31665adcbf39..0db15af41cb1 100644 --- a/arch/arm/boot/dts/imx6qdl-gw51xx.dtsi +++ b/arch/arm/boot/dts/imx6qdl-gw51xx.dtsi @@ -161,7 +161,7 @@ status = "okay"; pmic: ltc3676@3c { - compatible = "ltc,ltc3676"; + compatible = "lltc,ltc3676"; reg = <0x3c>; regulators { diff --git a/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi b/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi index 367af3ec9435..744c8a2d81f6 100644 --- a/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi +++ b/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi @@ -220,7 +220,7 @@ }; pmic: ltc3676@3c { - compatible = "ltc,ltc3676"; + compatible = "lltc,ltc3676"; reg = <0x3c>; regulators { @@ -288,7 +288,7 @@ codec: sgtl5000@0a { compatible = "fsl,sgtl5000"; reg = <0x0a>; - clocks = <&clks 169>; + clocks = <&clks 201>; VDDA-supply = <®_1p8v>; VDDIO-supply = <®_3p3v>; }; diff --git a/arch/arm/boot/dts/imx6qdl-gw53xx.dtsi b/arch/arm/boot/dts/imx6qdl-gw53xx.dtsi index c91b5a6c769b..adf150c1be90 100644 --- a/arch/arm/boot/dts/imx6qdl-gw53xx.dtsi +++ b/arch/arm/boot/dts/imx6qdl-gw53xx.dtsi @@ -234,7 +234,7 @@ }; pmic: ltc3676@3c { - compatible = "ltc,ltc3676"; + compatible = "lltc,ltc3676"; reg = <0x3c>; regulators { diff --git a/arch/arm/boot/dts/imx6qdl-microsom.dtsi b/arch/arm/boot/dts/imx6qdl-microsom.dtsi index d729d0b15f25..79eac6849d4c 100644 --- a/arch/arm/boot/dts/imx6qdl-microsom.dtsi +++ b/arch/arm/boot/dts/imx6qdl-microsom.dtsi @@ -10,14 
+10,6 @@ MX6QDL_PAD_CSI0_DAT11__UART1_RX_DATA 0x1b0b1 >; }; - - pinctrl_microsom_usbotg: microsom-usbotg { - /* - * Similar to pinctrl_usbotg_2, but we want it - * pulled down for a fixed host connection. - */ - fsl,pins = <MX6QDL_PAD_GPIO_1__USB_OTG_ID 0x13059>; - }; }; }; @@ -26,8 +18,3 @@ pinctrl-0 = <&pinctrl_microsom_uart1>; status = "okay"; }; - -&usbotg { - pinctrl-names = "default"; - pinctrl-0 = <&pinctrl_microsom_usbotg>; -}; diff --git a/arch/arm/boot/dts/imx6sl.dtsi b/arch/arm/boot/dts/imx6sl.dtsi index 2d4e5285f3f3..57d4abe03a94 100644 --- a/arch/arm/boot/dts/imx6sl.dtsi +++ b/arch/arm/boot/dts/imx6sl.dtsi @@ -686,7 +686,7 @@ compatible = "fsl,imx6sl-fec", "fsl,imx25-fec"; reg = <0x02188000 0x4000>; interrupts = <0 114 IRQ_TYPE_LEVEL_HIGH>; - clocks = <&clks IMX6SL_CLK_ENET_REF>, + clocks = <&clks IMX6SL_CLK_ENET>, <&clks IMX6SL_CLK_ENET_REF>; clock-names = "ipg", "ahb"; status = "disabled"; diff --git a/arch/arm/boot/dts/kirkwood-guruplug-server-plus.dts b/arch/arm/boot/dts/kirkwood-guruplug-server-plus.dts index c5a1fc75c7a3..b2d9834bf458 100644 --- a/arch/arm/boot/dts/kirkwood-guruplug-server-plus.dts +++ b/arch/arm/boot/dts/kirkwood-guruplug-server-plus.dts @@ -105,7 +105,6 @@ compatible = "ethernet-phy-id0141.0cb0", "ethernet-phy-ieee802.3-c22"; reg = <0>; - phy-connection-type = "rgmii-id"; }; ethphy1: ethernet-phy@1 { @@ -113,7 +112,6 @@ compatible = "ethernet-phy-id0141.0cb0", "ethernet-phy-ieee802.3-c22"; reg = <1>; - phy-connection-type = "rgmii-id"; }; }; @@ -121,6 +119,7 @@ status = "okay"; ethernet0-port@0 { phy-handle = <ðphy0>; + phy-connection-type = "rgmii-id"; }; }; @@ -128,5 +127,6 @@ status = "okay"; ethernet1-port@0 { phy-handle = <ðphy1>; + phy-connection-type = "rgmii-id"; }; }; diff --git a/arch/arm/boot/dts/omap3-beagle-xm.dts b/arch/arm/boot/dts/omap3-beagle-xm.dts index cf0be662297e..1becefce821b 100644 --- a/arch/arm/boot/dts/omap3-beagle-xm.dts +++ b/arch/arm/boot/dts/omap3-beagle-xm.dts @@ -251,6 +251,11 @@ codec { }; }; + + twl_power: power { + compatible = "ti,twl4030-power-beagleboard-xm", "ti,twl4030-power-idle-osc-off"; + ti,use_poweroff; + }; }; }; @@ -301,6 +306,7 @@ }; &uart3 { + interrupts-extended = <&intc 74 &omap3_pmx_core OMAP3_UART3_RX>; pinctrl-names = "default"; pinctrl-0 = <&uart3_pins>; }; diff --git a/arch/arm/boot/dts/omap3-evm-common.dtsi b/arch/arm/boot/dts/omap3-evm-common.dtsi index 8ae8f007c8ad..c8747c7f1cc8 100644 --- a/arch/arm/boot/dts/omap3-evm-common.dtsi +++ b/arch/arm/boot/dts/omap3-evm-common.dtsi @@ -50,6 +50,13 @@ gpios = <&twl_gpio 18 GPIO_ACTIVE_LOW>; }; +&twl { + twl_power: power { + compatible = "ti,twl4030-power-omap3-evm", "ti,twl4030-power-idle"; + ti,use_poweroff; + }; +}; + &i2c2 { clock-frequency = <400000>; }; diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts index ae8ae3f4f9bf..1fe45d1f75ec 100644 --- a/arch/arm/boot/dts/omap3-n900.dts +++ b/arch/arm/boot/dts/omap3-n900.dts @@ -351,6 +351,11 @@ compatible = "ti,twl4030-audio"; ti,enable-vibra = <1>; }; + + twl_power: power { + compatible = "ti,twl4030-power-n900", "ti,twl4030-power-idle-osc-off"; + ti,use_poweroff; + }; }; &twl_keypad { diff --git a/arch/arm/boot/dts/omap5.dtsi b/arch/arm/boot/dts/omap5.dtsi index 3bfda16c8b52..a4ed54988866 100644 --- a/arch/arm/boot/dts/omap5.dtsi +++ b/arch/arm/boot/dts/omap5.dtsi @@ -45,7 +45,6 @@ operating-points = < /* kHz uV */ - 500000 880000 1000000 1060000 1500000 1250000 >; diff --git a/arch/arm/boot/dts/stih415.dtsi b/arch/arm/boot/dts/stih415.dtsi index 
d6f254f302fe..a0f6f75fe3b5 100644 --- a/arch/arm/boot/dts/stih415.dtsi +++ b/arch/arm/boot/dts/stih415.dtsi @@ -169,8 +169,8 @@ pinctrl-names = "default"; pinctrl-0 = <&pinctrl_mii0>; - clock-names = "stmmaceth"; - clocks = <&clk_s_a1_ls CLK_GMAC0_PHY>; + clock-names = "stmmaceth", "sti-ethclk"; + clocks = <&clk_s_a1_ls CLK_ICN_IF_2>, <&clk_s_a1_ls CLK_GMAC0_PHY>; }; ethernet1: dwmac@fef08000 { @@ -192,8 +192,8 @@ reset-names = "stmmaceth"; pinctrl-names = "default"; pinctrl-0 = <&pinctrl_mii1>; - clock-names = "stmmaceth"; - clocks = <&clk_s_a0_ls CLK_ETH1_PHY>; + clock-names = "stmmaceth", "sti-ethclk"; + clocks = <&clk_s_a0_ls CLK_ICN_REG>, <&clk_s_a0_ls CLK_ETH1_PHY>; }; rc: rc@fe518000 { diff --git a/arch/arm/boot/dts/stih416-b2020-revE.dts b/arch/arm/boot/dts/stih416-b2020e.dts index ba0fa2caaf18..ba0fa2caaf18 100644 --- a/arch/arm/boot/dts/stih416-b2020-revE.dts +++ b/arch/arm/boot/dts/stih416-b2020e.dts diff --git a/arch/arm/boot/dts/stih416.dtsi b/arch/arm/boot/dts/stih416.dtsi index 06473c5d9ea9..84758d76d064 100644 --- a/arch/arm/boot/dts/stih416.dtsi +++ b/arch/arm/boot/dts/stih416.dtsi @@ -175,8 +175,8 @@ reset-names = "stmmaceth"; pinctrl-names = "default"; pinctrl-0 = <&pinctrl_mii0>; - clock-names = "stmmaceth"; - clocks = <&clk_s_a1_ls CLK_GMAC0_PHY>; + clock-names = "stmmaceth", "sti-ethclk"; + clocks = <&clk_s_a1_ls CLK_ICN_IF_2>, <&clk_s_a1_ls CLK_GMAC0_PHY>; }; ethernet1: dwmac@fef08000 { @@ -197,8 +197,8 @@ reset-names = "stmmaceth"; pinctrl-names = "default"; pinctrl-0 = <&pinctrl_mii1>; - clock-names = "stmmaceth"; - clocks = <&clk_s_a0_ls CLK_ETH1_PHY>; + clock-names = "stmmaceth", "sti-ethclk"; + clocks = <&clk_s_a0_ls CLK_ICN_REG>, <&clk_s_a0_ls CLK_ETH1_PHY>; }; rc: rc@fe518000 { diff --git a/arch/arm/common/scoop.c b/arch/arm/common/scoop.c index 6ef146edd0cd..a20fa80776d3 100644 --- a/arch/arm/common/scoop.c +++ b/arch/arm/common/scoop.c @@ -182,7 +182,6 @@ static int scoop_probe(struct platform_device *pdev) struct scoop_config *inf; struct resource *mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); int ret; - int temp; if (!mem) return -EINVAL; diff --git a/arch/arm/configs/bcm_defconfig b/arch/arm/configs/bcm_defconfig index 9d13dae99125..4bf72264b175 100644 --- a/arch/arm/configs/bcm_defconfig +++ b/arch/arm/configs/bcm_defconfig @@ -94,10 +94,10 @@ CONFIG_BACKLIGHT_CLASS_DEVICE=y CONFIG_BACKLIGHT_PWM=y # CONFIG_USB_SUPPORT is not set CONFIG_MMC=y -CONFIG_MMC_UNSAFE_RESUME=y CONFIG_MMC_BLOCK_MINORS=32 CONFIG_MMC_TEST=y CONFIG_MMC_SDHCI=y +CONFIG_MMC_SDHCI_PLTFM=y CONFIG_MMC_SDHCI_BCM_KONA=y CONFIG_NEW_LEDS=y CONFIG_LEDS_CLASS=y diff --git a/arch/arm/configs/imx_v6_v7_defconfig b/arch/arm/configs/imx_v6_v7_defconfig index ef8815327e5b..59b7e45142d8 100644 --- a/arch/arm/configs/imx_v6_v7_defconfig +++ b/arch/arm/configs/imx_v6_v7_defconfig @@ -186,6 +186,7 @@ CONFIG_VIDEO_MX3=y CONFIG_V4L_MEM2MEM_DRIVERS=y CONFIG_VIDEO_CODA=y CONFIG_SOC_CAMERA_OV2640=y +CONFIG_IMX_IPUV3_CORE=y CONFIG_DRM=y CONFIG_DRM_PANEL_SIMPLE=y CONFIG_BACKLIGHT_LCD_SUPPORT=y diff --git a/arch/arm/configs/multi_v7_defconfig b/arch/arm/configs/multi_v7_defconfig index 17d9462b9fb9..534836497998 100644 --- a/arch/arm/configs/multi_v7_defconfig +++ b/arch/arm/configs/multi_v7_defconfig @@ -223,12 +223,12 @@ CONFIG_POWER_RESET_GPIO=y CONFIG_POWER_RESET_SUN6I=y CONFIG_SENSORS_LM90=y CONFIG_THERMAL=y -CONFIG_DOVE_THERMAL=y CONFIG_ARMADA_THERMAL=y CONFIG_WATCHDOG=y CONFIG_ORION_WATCHDOG=y CONFIG_SUNXI_WATCHDOG=y CONFIG_MFD_AS3722=y +CONFIG_MFD_BCM590XX=y CONFIG_MFD_CROS_EC=y 
CONFIG_MFD_CROS_EC_SPI=y CONFIG_MFD_MAX8907=y @@ -240,6 +240,7 @@ CONFIG_MFD_TPS65910=y CONFIG_REGULATOR_VIRTUAL_CONSUMER=y CONFIG_REGULATOR_AB8500=y CONFIG_REGULATOR_AS3722=y +CONFIG_REGULATOR_BCM590XX=y CONFIG_REGULATOR_GPIO=y CONFIG_REGULATOR_MAX8907=y CONFIG_REGULATOR_PALMAS=y @@ -353,6 +354,7 @@ CONFIG_MFD_NVEC=y CONFIG_KEYBOARD_NVEC=y CONFIG_SERIO_NVEC_PS2=y CONFIG_NVEC_POWER=y +CONFIG_QCOM_GSBI=y CONFIG_COMMON_CLK_QCOM=y CONFIG_MSM_GCC_8660=y CONFIG_MSM_MMCC_8960=y diff --git a/arch/arm/configs/mvebu_v7_defconfig b/arch/arm/configs/mvebu_v7_defconfig index e11170e37442..b0bfefa23902 100644 --- a/arch/arm/configs/mvebu_v7_defconfig +++ b/arch/arm/configs/mvebu_v7_defconfig @@ -14,6 +14,7 @@ CONFIG_MACH_ARMADA_370=y CONFIG_MACH_ARMADA_375=y CONFIG_MACH_ARMADA_38X=y CONFIG_MACH_ARMADA_XP=y +CONFIG_MACH_DOVE=y CONFIG_NEON=y # CONFIG_CACHE_L2X0 is not set # CONFIG_SWP_EMULATE is not set @@ -52,6 +53,7 @@ CONFIG_INPUT_EVDEV=y CONFIG_KEYBOARD_GPIO=y CONFIG_SERIAL_8250=y CONFIG_SERIAL_8250_CONSOLE=y +CONFIG_SERIAL_OF_PLATFORM=y CONFIG_I2C=y CONFIG_SPI=y CONFIG_SPI_ORION=y diff --git a/arch/arm/include/asm/mcpm.h b/arch/arm/include/asm/mcpm.h index d9702eb0b02b..94060adba174 100644 --- a/arch/arm/include/asm/mcpm.h +++ b/arch/arm/include/asm/mcpm.h @@ -208,8 +208,6 @@ struct sync_struct { struct mcpm_sync_struct clusters[MAX_NR_CLUSTERS]; }; -extern unsigned long sync_phys; /* physical address of *mcpm_sync */ - void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster); void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster); void __mcpm_outbound_leave_critical(unsigned int cluster, int state); diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h index f989d7c22dc5..e4e4208a9130 100644 --- a/arch/arm/include/asm/thread_info.h +++ b/arch/arm/include/asm/thread_info.h @@ -114,8 +114,14 @@ static inline struct thread_info *current_thread_info(void) ((unsigned long)(task_thread_info(tsk)->cpu_context.pc)) #define thread_saved_sp(tsk) \ ((unsigned long)(task_thread_info(tsk)->cpu_context.sp)) + +#ifndef CONFIG_THUMB2_KERNEL #define thread_saved_fp(tsk) \ ((unsigned long)(task_thread_info(tsk)->cpu_context.fp)) +#else +#define thread_saved_fp(tsk) \ + ((unsigned long)(task_thread_info(tsk)->cpu_context.r7)) +#endif extern void crunch_task_disable(struct thread_info *); extern void crunch_task_copy(struct thread_info *, void *); diff --git a/arch/arm/kernel/kprobes-test-arm.c b/arch/arm/kernel/kprobes-test-arm.c index 9db4b659d03e..cb1424240ff6 100644 --- a/arch/arm/kernel/kprobes-test-arm.c +++ b/arch/arm/kernel/kprobes-test-arm.c @@ -74,8 +74,6 @@ void kprobe_arm_test_cases(void) TEST_RRR( op "lt" s " r11, r",11,VAL1,", r",14,N(val),", asr r",7, 6,"")\ TEST_RR( op "gt" s " r12, r13" ", r",14,val, ", ror r",14,7,"")\ TEST_RR( op "le" s " r14, r",0, val, ", r13" ", lsl r",14,8,"")\ - TEST_RR( op s " r12, pc" ", r",14,val, ", ror r",14,7,"")\ - TEST_RR( op s " r14, r",0, val, ", pc" ", lsl r",14,8,"")\ TEST_R( op "eq" s " r0, r",11,VAL1,", #0xf5") \ TEST_R( op "ne" s " r11, r",0, VAL1,", #0xf5000000") \ TEST_R( op s " r7, r",8, VAL2,", #0x000af000") \ @@ -103,8 +101,6 @@ void kprobe_arm_test_cases(void) TEST_RRR( op "ge r",11,VAL1,", r",14,N(val),", asr r",7, 6,"") \ TEST_RR( op "le r13" ", r",14,val, ", ror r",14,7,"") \ TEST_RR( op "gt r",0, val, ", r13" ", lsl r",14,8,"") \ - TEST_RR( op " pc" ", r",14,val, ", ror r",14,7,"") \ - TEST_RR( op " r",0, val, ", pc" ", lsl r",14,8,"") \ TEST_R( op "eq r",11,VAL1,", #0xf5") \ TEST_R( op "ne r",0, VAL1,", 
#0xf5000000") \ TEST_R( op " r",8, VAL2,", #0x000af000") @@ -125,7 +121,6 @@ void kprobe_arm_test_cases(void) TEST_RR( op "ge" s " r11, r",11,N(val),", asr r",7, 6,"") \ TEST_RR( op "lt" s " r12, r",11,val, ", ror r",14,7,"") \ TEST_R( op "gt" s " r14, r13" ", lsl r",14,8,"") \ - TEST_R( op "le" s " r14, pc" ", lsl r",14,8,"") \ TEST( op "eq" s " r0, #0xf5") \ TEST( op "ne" s " r11, #0xf5000000") \ TEST( op s " r7, #0x000af000") \ @@ -159,12 +154,19 @@ void kprobe_arm_test_cases(void) TEST_SUPPORTED("cmp pc, #0x1000"); TEST_SUPPORTED("cmp sp, #0x1000"); - /* Data-processing with PC as shift*/ + /* Data-processing with PC and a shift count in a register */ TEST_UNSUPPORTED(__inst_arm(0xe15c0f1e) " @ cmp r12, r14, asl pc") TEST_UNSUPPORTED(__inst_arm(0xe1a0cf1e) " @ mov r12, r14, asl pc") TEST_UNSUPPORTED(__inst_arm(0xe08caf1e) " @ add r10, r12, r14, asl pc") - - /* Data-processing with PC as shift*/ + TEST_UNSUPPORTED(__inst_arm(0xe151021f) " @ cmp r1, pc, lsl r2") + TEST_UNSUPPORTED(__inst_arm(0xe17f0211) " @ cmn pc, r1, lsl r2") + TEST_UNSUPPORTED(__inst_arm(0xe1a0121f) " @ mov r1, pc, lsl r2") + TEST_UNSUPPORTED(__inst_arm(0xe1a0f211) " @ mov pc, r1, lsl r2") + TEST_UNSUPPORTED(__inst_arm(0xe042131f) " @ sub r1, r2, pc, lsl r3") + TEST_UNSUPPORTED(__inst_arm(0xe1cf1312) " @ bic r1, pc, r2, lsl r3") + TEST_UNSUPPORTED(__inst_arm(0xe081f312) " @ add pc, r1, r2, lsl r3") + + /* Data-processing with PC as a target and status registers updated */ TEST_UNSUPPORTED("movs pc, r1") TEST_UNSUPPORTED("movs pc, r1, lsl r2") TEST_UNSUPPORTED("movs pc, #0x10000") @@ -187,14 +189,14 @@ void kprobe_arm_test_cases(void) TEST_BF_R ("add pc, pc, r",14,2f-1f-8,"") TEST_BF_R ("add pc, r",14,2f-1f-8,", pc") TEST_BF_R ("mov pc, r",0,2f,"") - TEST_BF_RR("mov pc, r",0,2f,", asl r",1,0,"") + TEST_BF_R ("add pc, pc, r",14,(2f-1f-8)*2,", asr #1") TEST_BB( "sub pc, pc, #1b-2b+8") #if __LINUX_ARM_ARCH__ == 6 && !defined(CONFIG_CPU_V7) TEST_BB( "sub pc, pc, #1b-2b+8-2") /* UNPREDICTABLE before and after ARMv6 */ #endif TEST_BB_R( "sub pc, pc, r",14, 1f-2f+8,"") TEST_BB_R( "rsb pc, r",14,1f-2f+8,", pc") - TEST_RR( "add pc, pc, r",10,-2,", asl r",11,1,"") + TEST_R( "add pc, pc, r",10,-2,", asl #1") #ifdef CONFIG_THUMB2_KERNEL TEST_ARM_TO_THUMB_INTERWORK_R("add pc, pc, r",0,3f-1f-8+1,"") TEST_ARM_TO_THUMB_INTERWORK_R("sub pc, r",0,3f+8+1,", #8") @@ -216,6 +218,7 @@ void kprobe_arm_test_cases(void) TEST_BB_R("bx r",7,2f,"") TEST_BF_R("bxeq r",14,2f,"") +#if __LINUX_ARM_ARCH__ >= 5 TEST_R("clz r0, r",0, 0x0,"") TEST_R("clzeq r7, r",14,0x1,"") TEST_R("clz lr, r",7, 0xffffffff,"") @@ -337,6 +340,7 @@ void kprobe_arm_test_cases(void) TEST_UNSUPPORTED(__inst_arm(0xe16f02e1) " @ smultt pc, r1, r2") TEST_UNSUPPORTED(__inst_arm(0xe16002ef) " @ smultt r0, pc, r2") TEST_UNSUPPORTED(__inst_arm(0xe1600fe1) " @ smultt r0, r1, pc") +#endif TEST_GROUP("Multiply and multiply-accumulate") @@ -559,6 +563,7 @@ void kprobe_arm_test_cases(void) TEST_UNSUPPORTED("ldrsht r1, [r2], #48") #endif +#if __LINUX_ARM_ARCH__ >= 5 TEST_RPR( "strd r",0, VAL1,", [r",1, 48,", -r",2,24,"]") TEST_RPR( "strccd r",8, VAL2,", [r",13,0, ", r",12,48,"]") TEST_RPR( "strd r",4, VAL1,", [r",2, 24,", r",3, 48,"]!") @@ -595,6 +600,7 @@ void kprobe_arm_test_cases(void) TEST_UNSUPPORTED(__inst_arm(0xe1efc3d0) " @ ldrd r12, [pc, #48]!") TEST_UNSUPPORTED(__inst_arm(0xe0c9f3d0) " @ ldrd pc, [r9], #48") TEST_UNSUPPORTED(__inst_arm(0xe0c9e3d0) " @ ldrd lr, [r9], #48") +#endif TEST_GROUP("Miscellaneous") @@ -1227,7 +1233,9 @@ void kprobe_arm_test_cases(void) TEST_COPROCESSOR( 
"mrc"two" 0, 0, r0, cr0, cr0, 0") COPROCESSOR_INSTRUCTIONS_ST_LD("",e) +#if __LINUX_ARM_ARCH__ >= 5 COPROCESSOR_INSTRUCTIONS_MC_MR("",e) +#endif TEST_UNSUPPORTED("svc 0") TEST_UNSUPPORTED("svc 0xffffff") @@ -1287,7 +1295,9 @@ void kprobe_arm_test_cases(void) TEST( "blx __dummy_thumb_subroutine_odd") #endif /* __LINUX_ARM_ARCH__ >= 6 */ +#if __LINUX_ARM_ARCH__ >= 5 COPROCESSOR_INSTRUCTIONS_ST_LD("2",f) +#endif #if __LINUX_ARM_ARCH__ >= 6 COPROCESSOR_INSTRUCTIONS_MC_MR("2",f) #endif diff --git a/arch/arm/kernel/kprobes-test.c b/arch/arm/kernel/kprobes-test.c index 379639998d5a..08d731294bcd 100644 --- a/arch/arm/kernel/kprobes-test.c +++ b/arch/arm/kernel/kprobes-test.c @@ -225,6 +225,7 @@ static int pre_handler_called; static int post_handler_called; static int jprobe_func_called; static int kretprobe_handler_called; +static int tests_failed; #define FUNC_ARG1 0x12345678 #define FUNC_ARG2 0xabcdef @@ -461,6 +462,13 @@ static int run_api_tests(long (*func)(long, long)) pr_info(" jprobe\n"); ret = test_jprobe(func); +#if defined(CONFIG_THUMB2_KERNEL) && !defined(MODULE) + if (ret == -EINVAL) { + pr_err("FAIL: Known longtime bug with jprobe on Thumb kernels\n"); + tests_failed = ret; + ret = 0; + } +#endif if (ret < 0) return ret; @@ -1672,6 +1680,8 @@ static int __init run_all_tests(void) out: if (ret == 0) + ret = tests_failed; + if (ret == 0) pr_info("Finished kprobe tests OK\n"); else pr_err("kprobe tests failed\n"); diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c index 2037f7205987..1d37568c547a 100644 --- a/arch/arm/kernel/perf_event_v7.c +++ b/arch/arm/kernel/perf_event_v7.c @@ -1924,7 +1924,7 @@ static int krait_pmu_get_event_idx(struct pmu_hw_events *cpuc, struct perf_event *event) { int idx; - int bit; + int bit = -1; unsigned int prefix; unsigned int region; unsigned int code; @@ -1953,7 +1953,7 @@ static int krait_pmu_get_event_idx(struct pmu_hw_events *cpuc, } idx = armv7pmu_get_event_idx(cpuc, event); - if (idx < 0 && krait_event) + if (idx < 0 && bit >= 0) clear_bit(bit, cpuc->used_mask); return idx; diff --git a/arch/arm/kernel/probes-arm.c b/arch/arm/kernel/probes-arm.c index 51a13a027989..8eaef81d8344 100644 --- a/arch/arm/kernel/probes-arm.c +++ b/arch/arm/kernel/probes-arm.c @@ -341,12 +341,12 @@ static const union decode_item arm_cccc_000x_table[] = { /* CMP (reg-shift reg) cccc 0001 0101 xxxx xxxx xxxx 0xx1 xxxx */ /* CMN (reg-shift reg) cccc 0001 0111 xxxx xxxx xxxx 0xx1 xxxx */ DECODE_EMULATEX (0x0f900090, 0x01100010, PROBES_DATA_PROCESSING_REG, - REGS(ANY, 0, NOPC, 0, ANY)), + REGS(NOPC, 0, NOPC, 0, NOPC)), /* MOV (reg-shift reg) cccc 0001 101x xxxx xxxx xxxx 0xx1 xxxx */ /* MVN (reg-shift reg) cccc 0001 111x xxxx xxxx xxxx 0xx1 xxxx */ DECODE_EMULATEX (0x0fa00090, 0x01a00010, PROBES_DATA_PROCESSING_REG, - REGS(0, ANY, NOPC, 0, ANY)), + REGS(0, NOPC, NOPC, 0, NOPC)), /* AND (reg-shift reg) cccc 0000 000x xxxx xxxx xxxx 0xx1 xxxx */ /* EOR (reg-shift reg) cccc 0000 001x xxxx xxxx xxxx 0xx1 xxxx */ @@ -359,7 +359,7 @@ static const union decode_item arm_cccc_000x_table[] = { /* ORR (reg-shift reg) cccc 0001 100x xxxx xxxx xxxx 0xx1 xxxx */ /* BIC (reg-shift reg) cccc 0001 110x xxxx xxxx xxxx 0xx1 xxxx */ DECODE_EMULATEX (0x0e000090, 0x00000010, PROBES_DATA_PROCESSING_REG, - REGS(ANY, ANY, NOPC, 0, ANY)), + REGS(NOPC, NOPC, NOPC, 0, NOPC)), DECODE_END }; diff --git a/arch/arm/kernel/ptrace.c b/arch/arm/kernel/ptrace.c index 0dd3b79b15c3..0c27ed6f3f23 100644 --- a/arch/arm/kernel/ptrace.c +++ b/arch/arm/kernel/ptrace.c @@ -908,7 +908,7 @@ enum 
ptrace_syscall_dir { PTRACE_SYSCALL_EXIT, }; -static int tracehook_report_syscall(struct pt_regs *regs, +static void tracehook_report_syscall(struct pt_regs *regs, enum ptrace_syscall_dir dir) { unsigned long ip; @@ -926,7 +926,6 @@ static int tracehook_report_syscall(struct pt_regs *regs, current_thread_info()->syscall = -1; regs->ARM_ip = ip; - return current_thread_info()->syscall; } asmlinkage int syscall_trace_enter(struct pt_regs *regs, int scno) @@ -938,7 +937,9 @@ asmlinkage int syscall_trace_enter(struct pt_regs *regs, int scno) return -1; if (test_thread_flag(TIF_SYSCALL_TRACE)) - scno = tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER); + tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER); + + scno = current_thread_info()->syscall; if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) trace_sys_enter(regs, scno); diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c index 9d853189028b..e35d880f9773 100644 --- a/arch/arm/kernel/topology.c +++ b/arch/arm/kernel/topology.c @@ -275,7 +275,7 @@ void store_cpu_topology(unsigned int cpuid) cpu_topology[cpuid].socket_id, mpidr); } -static inline const int cpu_corepower_flags(void) +static inline int cpu_corepower_flags(void) { return SD_SHARE_PKG_RESOURCES | SD_SHARE_POWERDOMAIN; } diff --git a/arch/arm/mach-exynos/exynos.c b/arch/arm/mach-exynos/exynos.c index f38cf7c110cc..46d893fcbe85 100644 --- a/arch/arm/mach-exynos/exynos.c +++ b/arch/arm/mach-exynos/exynos.c @@ -173,10 +173,8 @@ static struct platform_device exynos_cpuidle = { void __init exynos_cpuidle_init(void) { - if (soc_is_exynos5440()) - return; - - platform_device_register(&exynos_cpuidle); + if (soc_is_exynos4210() || soc_is_exynos5250()) + platform_device_register(&exynos_cpuidle); } void __init exynos_cpufreq_init(void) @@ -297,7 +295,7 @@ static void __init exynos_dt_machine_init(void) * This is called from smp_prepare_cpus if we've built for SMP, but * we still need to set it up for PM and firmware ops if not. */ - if (!IS_ENABLED(SMP)) + if (!IS_ENABLED(CONFIG_SMP)) exynos_sysram_init(); exynos_cpuidle_init(); diff --git a/arch/arm/mach-exynos/firmware.c b/arch/arm/mach-exynos/firmware.c index eb91d2350f8c..e8797bb78871 100644 --- a/arch/arm/mach-exynos/firmware.c +++ b/arch/arm/mach-exynos/firmware.c @@ -57,8 +57,13 @@ static int exynos_set_cpu_boot_addr(int cpu, unsigned long boot_addr) boot_reg = sysram_ns_base_addr + 0x1c; - if (!soc_is_exynos4212() && !soc_is_exynos3250()) - boot_reg += 4*cpu; + /* + * Almost all Exynos-series of SoCs that run in secure mode don't need + * additional offset for every CPU, with Exynos4412 being the only + * exception. + */ + if (soc_is_exynos4412()) + boot_reg += 4 * cpu; __raw_writel(boot_addr, boot_reg); return 0; diff --git a/arch/arm/mach-exynos/hotplug.c b/arch/arm/mach-exynos/hotplug.c index 69fa48397394..920a4baa53cd 100644 --- a/arch/arm/mach-exynos/hotplug.c +++ b/arch/arm/mach-exynos/hotplug.c @@ -40,21 +40,17 @@ static inline void cpu_leave_lowpower(void) static inline void platform_do_lowpower(unsigned int cpu, int *spurious) { + u32 mpidr = cpu_logical_map(cpu); + u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); + for (;;) { - /* make cpu1 to be turned off at next WFI command */ - if (cpu == 1) - exynos_cpu_power_down(cpu); + /* Turn the CPU off on next WFI instruction. 
*/ + exynos_cpu_power_down(core_id); - /* - * here's the WFI - */ - asm(".word 0xe320f003\n" - : - : - : "memory", "cc"); + wfi(); - if (pen_release == cpu_logical_map(cpu)) { + if (pen_release == core_id) { /* * OK, proper wakeup, we're done */ diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c index 0498d0b887ef..ace0ed617476 100644 --- a/arch/arm/mach-exynos/mcpm-exynos.c +++ b/arch/arm/mach-exynos/mcpm-exynos.c @@ -25,7 +25,6 @@ #define EXYNOS5420_CPUS_PER_CLUSTER 4 #define EXYNOS5420_NR_CLUSTERS 2 -#define MCPM_BOOT_ADDR_OFFSET 0x1c /* * The common v7_exit_coherency_flush API could not be used because of the @@ -343,11 +342,13 @@ static int __init exynos_mcpm_init(void) pr_info("Exynos MCPM support installed\n"); /* - * Future entries into the kernel can now go - * through the cluster entry vectors. + * U-Boot SPL is hardcoded to jump to the start of ns_sram_base_addr + * as part of secondary_cpu_start(). Let's redirect it to the + * mcpm_entry_point(). */ - __raw_writel(virt_to_phys(mcpm_entry_point), - ns_sram_base_addr + MCPM_BOOT_ADDR_OFFSET); + __raw_writel(0xe59f0000, ns_sram_base_addr); /* ldr r0, [pc, #0] */ + __raw_writel(0xe12fff10, ns_sram_base_addr + 4); /* bx r0 */ + __raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 8); iounmap(ns_sram_base_addr); diff --git a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c index 1c8d31e39520..50b9aad5e27b 100644 --- a/arch/arm/mach-exynos/platsmp.c +++ b/arch/arm/mach-exynos/platsmp.c @@ -90,7 +90,8 @@ static void exynos_secondary_init(unsigned int cpu) static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) { unsigned long timeout; - unsigned long phys_cpu = cpu_logical_map(cpu); + u32 mpidr = cpu_logical_map(cpu); + u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); int ret = -ENOSYS; /* @@ -104,17 +105,18 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) * the holding pen - release it, then wait for it to flag * that it has been released by resetting pen_release. * - * Note that "pen_release" is the hardware CPU ID, whereas + * Note that "pen_release" is the hardware CPU core ID, whereas * "cpu" is Linux's internal ID. */ - write_pen_release(phys_cpu); + write_pen_release(core_id); - if (!exynos_cpu_power_state(cpu)) { - exynos_cpu_power_up(cpu); + if (!exynos_cpu_power_state(core_id)) { + exynos_cpu_power_up(core_id); timeout = 10; /* wait max 10 ms until cpu1 is on */ - while (exynos_cpu_power_state(cpu) != S5P_CORE_LOCAL_PWR_EN) { + while (exynos_cpu_power_state(core_id) + != S5P_CORE_LOCAL_PWR_EN) { if (timeout-- == 0) break; @@ -145,20 +147,20 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) * Try to set boot address using firmware first * and fall back to boot register if it fails. */ - ret = call_firmware_op(set_cpu_boot_addr, phys_cpu, boot_addr); + ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr); if (ret && ret != -ENOSYS) goto fail; if (ret == -ENOSYS) { - void __iomem *boot_reg = cpu_boot_reg(phys_cpu); + void __iomem *boot_reg = cpu_boot_reg(core_id); if (IS_ERR(boot_reg)) { ret = PTR_ERR(boot_reg); goto fail; } - __raw_writel(boot_addr, cpu_boot_reg(phys_cpu)); + __raw_writel(boot_addr, cpu_boot_reg(core_id)); } - call_firmware_op(cpu_boot, phys_cpu); + call_firmware_op(cpu_boot, core_id); arch_send_wakeup_ipi_mask(cpumask_of(cpu)); @@ -227,22 +229,24 @@ static void __init exynos_smp_prepare_cpus(unsigned int max_cpus) * boot register if it fails. 
*/ for (i = 1; i < max_cpus; ++i) { - unsigned long phys_cpu; unsigned long boot_addr; + u32 mpidr; + u32 core_id; int ret; - phys_cpu = cpu_logical_map(i); + mpidr = cpu_logical_map(i); + core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); boot_addr = virt_to_phys(exynos4_secondary_startup); - ret = call_firmware_op(set_cpu_boot_addr, phys_cpu, boot_addr); + ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr); if (ret && ret != -ENOSYS) break; if (ret == -ENOSYS) { - void __iomem *boot_reg = cpu_boot_reg(phys_cpu); + void __iomem *boot_reg = cpu_boot_reg(core_id); if (IS_ERR(boot_reg)) break; - __raw_writel(boot_addr, cpu_boot_reg(phys_cpu)); + __raw_writel(boot_addr, cpu_boot_reg(core_id)); } } } diff --git a/arch/arm/mach-exynos/pm.c b/arch/arm/mach-exynos/pm.c index 87c0d34c7fba..202ca73e49c4 100644 --- a/arch/arm/mach-exynos/pm.c +++ b/arch/arm/mach-exynos/pm.c @@ -300,7 +300,7 @@ static int exynos_pm_suspend(void) tmp = (S5P_USE_STANDBY_WFI0 | S5P_USE_STANDBY_WFE0); __raw_writel(tmp, S5P_CENTRAL_SEQ_OPTION); - if (!soc_is_exynos5250()) + if (read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9) exynos_cpu_save_register(); return 0; @@ -334,7 +334,7 @@ static void exynos_pm_resume(void) if (exynos_pm_central_resume()) goto early_wakeup; - if (!soc_is_exynos5250()) + if (read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9) exynos_cpu_restore_register(); /* For release retention */ @@ -353,7 +353,7 @@ static void exynos_pm_resume(void) s3c_pm_do_restore_core(exynos_core_save, ARRAY_SIZE(exynos_core_save)); - if (!soc_is_exynos5250()) + if (read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9) scu_enable(S5P_VA_SCU); early_wakeup: @@ -440,15 +440,18 @@ static int exynos_cpu_pm_notifier(struct notifier_block *self, case CPU_PM_ENTER: if (cpu == 0) { exynos_pm_central_suspend(); - exynos_cpu_save_register(); + if (read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9) + exynos_cpu_save_register(); } break; case CPU_PM_EXIT: if (cpu == 0) { - if (!soc_is_exynos5250()) + if (read_cpuid_part_number() == + ARM_CPU_PART_CORTEX_A9) { scu_enable(S5P_VA_SCU); - exynos_cpu_restore_register(); + exynos_cpu_restore_register(); + } exynos_pm_central_resume(); } break; diff --git a/arch/arm/mach-exynos/pm_domains.c b/arch/arm/mach-exynos/pm_domains.c index fe6570ebbdde..797cb134bfff 100644 --- a/arch/arm/mach-exynos/pm_domains.c +++ b/arch/arm/mach-exynos/pm_domains.c @@ -17,6 +17,7 @@ #include <linux/err.h> #include <linux/slab.h> #include <linux/pm_domain.h> +#include <linux/clk.h> #include <linux/delay.h> #include <linux/of_address.h> #include <linux/of_platform.h> @@ -24,6 +25,8 @@ #include "regs-pmu.h" +#define MAX_CLK_PER_DOMAIN 4 + /* * Exynos specific wrapper around the generic power domain */ @@ -32,6 +35,9 @@ struct exynos_pm_domain { char const *name; bool is_off; struct generic_pm_domain pd; + struct clk *oscclk; + struct clk *clk[MAX_CLK_PER_DOMAIN]; + struct clk *pclk[MAX_CLK_PER_DOMAIN]; }; static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on) @@ -44,6 +50,19 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on) pd = container_of(domain, struct exynos_pm_domain, pd); base = pd->base; + /* Set oscclk before powering off a domain*/ + if (!power_on) { + int i; + + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { + if (IS_ERR(pd->clk[i])) + break; + if (clk_set_parent(pd->clk[i], pd->oscclk)) + pr_err("%s: error setting oscclk as parent to clock %d\n", + pd->name, i); + } + } + pwr = power_on ? 
S5P_INT_LOCAL_PWR_EN : 0; __raw_writel(pwr, base); @@ -60,6 +79,20 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on) cpu_relax(); usleep_range(80, 100); } + + /* Restore clocks after powering on a domain*/ + if (power_on) { + int i; + + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { + if (IS_ERR(pd->clk[i])) + break; + if (clk_set_parent(pd->clk[i], pd->pclk[i])) + pr_err("%s: error setting parent to clock%d\n", + pd->name, i); + } + } + return 0; } @@ -152,9 +185,11 @@ static __init int exynos4_pm_init_power_domain(void) for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") { struct exynos_pm_domain *pd; - int on; + int on, i; + struct device *dev; pdev = of_find_device_by_node(np); + dev = &pdev->dev; pd = kzalloc(sizeof(*pd), GFP_KERNEL); if (!pd) { @@ -170,6 +205,30 @@ static __init int exynos4_pm_init_power_domain(void) pd->pd.power_on = exynos_pd_power_on; pd->pd.of_node = np; + pd->oscclk = clk_get(dev, "oscclk"); + if (IS_ERR(pd->oscclk)) + goto no_clk; + + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { + char clk_name[8]; + + snprintf(clk_name, sizeof(clk_name), "clk%d", i); + pd->clk[i] = clk_get(dev, clk_name); + if (IS_ERR(pd->clk[i])) + break; + snprintf(clk_name, sizeof(clk_name), "pclk%d", i); + pd->pclk[i] = clk_get(dev, clk_name); + if (IS_ERR(pd->pclk[i])) { + clk_put(pd->clk[i]); + pd->clk[i] = ERR_PTR(-EINVAL); + break; + } + } + + if (IS_ERR(pd->clk[0])) + clk_put(pd->oscclk); + +no_clk: platform_set_drvdata(pdev, pd); on = __raw_readl(pd->base + 0x4) & S5P_INT_LOCAL_PWR_EN; diff --git a/arch/arm/mach-imx/Kconfig b/arch/arm/mach-imx/Kconfig index 28fa2fa49e5d..4b5185748f74 100644 --- a/arch/arm/mach-imx/Kconfig +++ b/arch/arm/mach-imx/Kconfig @@ -734,9 +734,9 @@ config SOC_IMX6 select HAVE_IMX_MMDC select HAVE_IMX_SRC select MFD_SYSCON - select PL310_ERRATA_588369 if CACHE_PL310 - select PL310_ERRATA_727915 if CACHE_PL310 - select PL310_ERRATA_769419 if CACHE_PL310 + select PL310_ERRATA_588369 if CACHE_L2X0 + select PL310_ERRATA_727915 if CACHE_L2X0 + select PL310_ERRATA_769419 if CACHE_L2X0 config SOC_IMX6Q bool "i.MX6 Quad/DualLite support" @@ -771,9 +771,9 @@ config SOC_VF610 select ARM_GIC select PINCTRL_VF610 select VF_PIT_TIMER - select PL310_ERRATA_588369 if CACHE_PL310 - select PL310_ERRATA_727915 if CACHE_PL310 - select PL310_ERRATA_769419 if CACHE_PL310 + select PL310_ERRATA_588369 if CACHE_L2X0 + select PL310_ERRATA_727915 if CACHE_L2X0 + select PL310_ERRATA_769419 if CACHE_L2X0 help This enable support for Freescale Vybrid VF610 processor. 
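A minimal sketch of the clock-reparenting pattern introduced in exynos_pd_power() a few hunks above: the domain's mux clocks are parked on the always-on oscillator before the power register is written to gate the domain, and their original parents are restored once the domain is powered again. Names (struct fields, MAX_CLK_PER_DOMAIN) mirror the patch; the surrounding genpd/driver glue is omitted, so treat this as an illustration of the pattern rather than the driver itself.

/* Assumed context: a per-domain clock bookkeeping structure as in the patch. */
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/printk.h>

#define MAX_CLK_PER_DOMAIN 4

struct pd_clks {
	struct clk *oscclk;                  /* always-on parent to park on */
	struct clk *clk[MAX_CLK_PER_DOMAIN]; /* mux clocks to re-parent */
	struct clk *pclk[MAX_CLK_PER_DOMAIN];/* original parents to restore */
};

/* Call before gating domain power: park every mux on oscclk. */
static void pd_park_clocks(struct pd_clks *pd)
{
	int i;

	for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
		if (IS_ERR(pd->clk[i]))
			break; /* list terminated by an error pointer */
		if (clk_set_parent(pd->clk[i], pd->oscclk))
			pr_err("failed to park clk%d on oscclk\n", i);
	}
}

/* Call after the domain reports power-on: restore the saved parents. */
static void pd_restore_clocks(struct pd_clks *pd)
{
	int i;

	for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
		if (IS_ERR(pd->clk[i]))
			break;
		if (clk_set_parent(pd->clk[i], pd->pclk[i]))
			pr_err("failed to restore parent of clk%d\n", i);
	}
}

In the patch, the park step runs just before __raw_writel(pwr, base) when powering off, and the restore step runs after the S5P_INT_LOCAL_PWR_EN poll succeeds when powering on; the clk_get() loop in exynos4_pm_init_power_domain() fills the arrays from "clkN"/"pclkN" lookups and falls back to no clock handling when "oscclk" is absent.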
diff --git a/arch/arm/mach-imx/clk-gate2.c b/arch/arm/mach-imx/clk-gate2.c index 4ba587da89d2..84acdfd1d715 100644 --- a/arch/arm/mach-imx/clk-gate2.c +++ b/arch/arm/mach-imx/clk-gate2.c @@ -67,8 +67,12 @@ static void clk_gate2_disable(struct clk_hw *hw) spin_lock_irqsave(gate->lock, flags); - if (gate->share_count && --(*gate->share_count) > 0) - goto out; + if (gate->share_count) { + if (WARN_ON(*gate->share_count == 0)) + goto out; + else if (--(*gate->share_count) > 0) + goto out; + } reg = readl(gate->reg); reg &= ~(3 << gate->bit_idx); @@ -78,19 +82,26 @@ out: spin_unlock_irqrestore(gate->lock, flags); } -static int clk_gate2_is_enabled(struct clk_hw *hw) +static int clk_gate2_reg_is_enabled(void __iomem *reg, u8 bit_idx) { - u32 reg; - struct clk_gate2 *gate = to_clk_gate2(hw); + u32 val = readl(reg); - reg = readl(gate->reg); - - if (((reg >> gate->bit_idx) & 1) == 1) + if (((val >> bit_idx) & 1) == 1) return 1; return 0; } +static int clk_gate2_is_enabled(struct clk_hw *hw) +{ + struct clk_gate2 *gate = to_clk_gate2(hw); + + if (gate->share_count) + return !!(*gate->share_count); + else + return clk_gate2_reg_is_enabled(gate->reg, gate->bit_idx); +} + static struct clk_ops clk_gate2_ops = { .enable = clk_gate2_enable, .disable = clk_gate2_disable, @@ -116,6 +127,10 @@ struct clk *clk_register_gate2(struct device *dev, const char *name, gate->bit_idx = bit_idx; gate->flags = clk_gate2_flags; gate->lock = lock; + + /* Initialize share_count per hardware state */ + if (share_count) + *share_count = clk_gate2_reg_is_enabled(reg, bit_idx) ? 1 : 0; gate->share_count = share_count; init.name = name; diff --git a/arch/arm/mach-imx/clk-imx6q.c b/arch/arm/mach-imx/clk-imx6q.c index 8e795dea02ec..8556c787e59c 100644 --- a/arch/arm/mach-imx/clk-imx6q.c +++ b/arch/arm/mach-imx/clk-imx6q.c @@ -70,7 +70,7 @@ static const char *cko_sels[] = { "cko1", "cko2", }; static const char *lvds_sels[] = { "dummy", "dummy", "dummy", "dummy", "dummy", "dummy", "pll4_audio", "pll5_video", "pll8_mlb", "enet_ref", - "pcie_ref", "sata_ref", + "pcie_ref_125m", "sata_ref_100m", }; enum mx6q_clks { @@ -491,7 +491,7 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node) /* All existing boards with PCIe use LVDS1 */ if (IS_ENABLED(CONFIG_PCI_IMX6)) - clk_set_parent(clk[lvds1_sel], clk[sata_ref]); + clk_set_parent(clk[lvds1_sel], clk[sata_ref_100m]); /* Set initial power mode */ imx6q_set_lpm(WAIT_CLOCKED); diff --git a/arch/arm/mach-imx/clk-imx6sl.c b/arch/arm/mach-imx/clk-imx6sl.c index 21cf06cebade..5408ca70c8d6 100644 --- a/arch/arm/mach-imx/clk-imx6sl.c +++ b/arch/arm/mach-imx/clk-imx6sl.c @@ -312,6 +312,7 @@ static void __init imx6sl_clocks_init(struct device_node *ccm_node) clks[IMX6SL_CLK_ECSPI2] = imx_clk_gate2("ecspi2", "ecspi_root", base + 0x6c, 2); clks[IMX6SL_CLK_ECSPI3] = imx_clk_gate2("ecspi3", "ecspi_root", base + 0x6c, 4); clks[IMX6SL_CLK_ECSPI4] = imx_clk_gate2("ecspi4", "ecspi_root", base + 0x6c, 6); + clks[IMX6SL_CLK_ENET] = imx_clk_gate2("enet", "ipg", base + 0x6c, 10); clks[IMX6SL_CLK_EPIT1] = imx_clk_gate2("epit1", "perclk", base + 0x6c, 12); clks[IMX6SL_CLK_EPIT2] = imx_clk_gate2("epit2", "perclk", base + 0x6c, 14); clks[IMX6SL_CLK_EXTERN_AUDIO] = imx_clk_gate2("extern_audio", "extern_audio_podf", base + 0x6c, 16); diff --git a/arch/arm/mach-integrator/integrator_ap.c b/arch/arm/mach-integrator/integrator_ap.c index dd0cc677d596..660ca6feff40 100644 --- a/arch/arm/mach-integrator/integrator_ap.c +++ b/arch/arm/mach-integrator/integrator_ap.c @@ -480,25 +480,18 @@ static const 
struct of_device_id ebi_match[] = { static void __init ap_init_of(void) { unsigned long sc_dec; - struct device_node *root; struct device_node *syscon; struct device_node *ebi; struct device *parent; struct soc_device *soc_dev; struct soc_device_attribute *soc_dev_attr; u32 ap_sc_id; - int err; int i; - /* Here we create an SoC device for the root node */ - root = of_find_node_by_path("/"); - if (!root) - return; - - syscon = of_find_matching_node(root, ap_syscon_match); + syscon = of_find_matching_node(NULL, ap_syscon_match); if (!syscon) return; - ebi = of_find_matching_node(root, ebi_match); + ebi = of_find_matching_node(NULL, ebi_match); if (!ebi) return; @@ -509,19 +502,17 @@ static void __init ap_init_of(void) if (!ebi_base) return; + of_platform_populate(NULL, of_default_bus_match_table, + ap_auxdata_lookup, NULL); + ap_sc_id = readl(ap_syscon_base); soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL); if (!soc_dev_attr) return; - err = of_property_read_string(root, "compatible", - &soc_dev_attr->soc_id); - if (err) - return; - err = of_property_read_string(root, "model", &soc_dev_attr->machine); - if (err) - return; + soc_dev_attr->soc_id = "XVC"; + soc_dev_attr->machine = "Integrator/AP"; soc_dev_attr->family = "Integrator"; soc_dev_attr->revision = kasprintf(GFP_KERNEL, "%c", 'A' + (ap_sc_id & 0x0f)); @@ -536,9 +527,6 @@ static void __init ap_init_of(void) parent = soc_device_to_device(soc_dev); integrator_init_sysfs(parent, ap_sc_id); - of_platform_populate(root, of_default_bus_match_table, - ap_auxdata_lookup, parent); - sc_dec = readl(ap_syscon_base + INTEGRATOR_SC_DEC_OFFSET); for (i = 0; i < 4; i++) { struct lm_device *lmdev; diff --git a/arch/arm/mach-integrator/integrator_cp.c b/arch/arm/mach-integrator/integrator_cp.c index a938242b0c95..0e57f8f820a5 100644 --- a/arch/arm/mach-integrator/integrator_cp.c +++ b/arch/arm/mach-integrator/integrator_cp.c @@ -279,20 +279,13 @@ static const struct of_device_id intcp_syscon_match[] = { static void __init intcp_init_of(void) { - struct device_node *root; struct device_node *cpcon; struct device *parent; struct soc_device *soc_dev; struct soc_device_attribute *soc_dev_attr; u32 intcp_sc_id; - int err; - /* Here we create an SoC device for the root node */ - root = of_find_node_by_path("/"); - if (!root) - return; - - cpcon = of_find_matching_node(root, intcp_syscon_match); + cpcon = of_find_matching_node(NULL, intcp_syscon_match); if (!cpcon) return; @@ -300,19 +293,17 @@ static void __init intcp_init_of(void) if (!intcp_con_base) return; + of_platform_populate(NULL, of_default_bus_match_table, + intcp_auxdata_lookup, NULL); + intcp_sc_id = readl(intcp_con_base); soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL); if (!soc_dev_attr) return; - err = of_property_read_string(root, "compatible", - &soc_dev_attr->soc_id); - if (err) - return; - err = of_property_read_string(root, "model", &soc_dev_attr->machine); - if (err) - return; + soc_dev_attr->soc_id = "XCV"; + soc_dev_attr->machine = "Integrator/CP"; soc_dev_attr->family = "Integrator"; soc_dev_attr->revision = kasprintf(GFP_KERNEL, "%c", 'A' + (intcp_sc_id & 0x0f)); @@ -326,8 +317,6 @@ static void __init intcp_init_of(void) parent = soc_device_to_device(soc_dev); integrator_init_sysfs(parent, intcp_sc_id); - of_platform_populate(root, of_default_bus_match_table, - intcp_auxdata_lookup, parent); } static const char * intcp_dt_board_compat[] = { diff --git a/arch/arm/mach-mvebu/Kconfig b/arch/arm/mach-mvebu/Kconfig index 4a7c250c9a30..b9bc599a5fd0 100644 --- 
a/arch/arm/mach-mvebu/Kconfig +++ b/arch/arm/mach-mvebu/Kconfig @@ -10,6 +10,7 @@ menuconfig ARCH_MVEBU select ZONE_DMA if ARM_LPAE select ARCH_REQUIRE_GPIOLIB select PCI_QUIRKS if PCI + select OF_ADDRESS_PCI if ARCH_MVEBU @@ -17,6 +18,7 @@ config MACH_MVEBU_V7 bool select ARMADA_370_XP_TIMER select CACHE_L2X0 + select ARM_CPU_SUSPEND config MACH_ARMADA_370 bool "Marvell Armada 370 boards" if ARCH_MULTI_V7 diff --git a/arch/arm/mach-mvebu/Makefile b/arch/arm/mach-mvebu/Makefile index 2ecb828e4a8b..1636cdbef01a 100644 --- a/arch/arm/mach-mvebu/Makefile +++ b/arch/arm/mach-mvebu/Makefile @@ -7,7 +7,7 @@ CFLAGS_pmsu.o := -march=armv7-a obj-y += system-controller.o mvebu-soc-id.o ifeq ($(CONFIG_MACH_MVEBU_V7),y) -obj-y += cpu-reset.o board-v7.o coherency.o coherency_ll.o pmsu.o +obj-y += cpu-reset.o board-v7.o coherency.o coherency_ll.o pmsu.o pmsu_ll.o obj-$(CONFIG_SMP) += platsmp.o headsmp.o platsmp-a9.o headsmp-a9.o obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o endif diff --git a/arch/arm/mach-mvebu/board-v7.c b/arch/arm/mach-mvebu/board-v7.c index 8bb742fdf5ca..b2524d689f21 100644 --- a/arch/arm/mach-mvebu/board-v7.c +++ b/arch/arm/mach-mvebu/board-v7.c @@ -23,6 +23,7 @@ #include <linux/mbus.h> #include <linux/signal.h> #include <linux/slab.h> +#include <linux/irqchip.h> #include <asm/hardware/cache-l2x0.h> #include <asm/mach/arch.h> #include <asm/mach/map.h> @@ -71,17 +72,23 @@ static int armada_375_external_abort_wa(unsigned long addr, unsigned int fsr, return 1; } -static void __init mvebu_timer_and_clk_init(void) +static void __init mvebu_init_irq(void) { - of_clk_init(NULL); - clocksource_of_init(); + irqchip_init(); mvebu_scu_enable(); coherency_init(); BUG_ON(mvebu_mbus_dt_init(coherency_available())); +} + +static void __init external_abort_quirk(void) +{ + u32 dev, rev; - if (of_machine_is_compatible("marvell,armada375")) - hook_fault_code(16 + 6, armada_375_external_abort_wa, SIGBUS, 0, - "imprecise external abort"); + if (mvebu_get_soc_id(&dev, &rev) == 0 && rev > ARMADA_375_Z1_REV) + return; + + hook_fault_code(16 + 6, armada_375_external_abort_wa, SIGBUS, 0, + "imprecise external abort"); } static void __init i2c_quirk(void) @@ -169,8 +176,10 @@ static void __init mvebu_dt_init(void) { if (of_machine_is_compatible("plathome,openblocks-ax3-4")) i2c_quirk(); - if (of_machine_is_compatible("marvell,a375-db")) + if (of_machine_is_compatible("marvell,a375-db")) { + external_abort_quirk(); thermal_quirk(); + } of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); } @@ -185,7 +194,7 @@ DT_MACHINE_START(ARMADA_370_XP_DT, "Marvell Armada 370/XP (Device Tree)") .l2c_aux_mask = ~0, .smp = smp_ops(armada_xp_smp_ops), .init_machine = mvebu_dt_init, - .init_time = mvebu_timer_and_clk_init, + .init_irq = mvebu_init_irq, .restart = mvebu_restart, .dt_compat = armada_370_xp_dt_compat, MACHINE_END @@ -198,7 +207,7 @@ static const char * const armada_375_dt_compat[] = { DT_MACHINE_START(ARMADA_375_DT, "Marvell Armada 375 (Device Tree)") .l2c_aux_val = 0, .l2c_aux_mask = ~0, - .init_time = mvebu_timer_and_clk_init, + .init_irq = mvebu_init_irq, .init_machine = mvebu_dt_init, .restart = mvebu_restart, .dt_compat = armada_375_dt_compat, @@ -213,7 +222,7 @@ static const char * const armada_38x_dt_compat[] = { DT_MACHINE_START(ARMADA_38X_DT, "Marvell Armada 380/385 (Device Tree)") .l2c_aux_val = 0, .l2c_aux_mask = ~0, - .init_time = mvebu_timer_and_clk_init, + .init_irq = mvebu_init_irq, .restart = mvebu_restart, .dt_compat = armada_38x_dt_compat, MACHINE_END diff --git 
a/arch/arm/mach-mvebu/coherency.c b/arch/arm/mach-mvebu/coherency.c index 477202fd39cc..2bdc3233abe2 100644 --- a/arch/arm/mach-mvebu/coherency.c +++ b/arch/arm/mach-mvebu/coherency.c @@ -292,6 +292,10 @@ static struct notifier_block mvebu_hwcc_nb = { .notifier_call = mvebu_hwcc_notifier, }; +static struct notifier_block mvebu_hwcc_pci_nb = { + .notifier_call = mvebu_hwcc_notifier, +}; + static void __init armada_370_coherency_init(struct device_node *np) { struct resource res; @@ -427,7 +431,7 @@ static int __init coherency_pci_init(void) { if (coherency_available()) bus_register_notifier(&pci_bus_type, - &mvebu_hwcc_nb); + &mvebu_hwcc_pci_nb); return 0; } diff --git a/arch/arm/mach-mvebu/headsmp-a9.S b/arch/arm/mach-mvebu/headsmp-a9.S index 5925366bc03c..da5bb292b91c 100644 --- a/arch/arm/mach-mvebu/headsmp-a9.S +++ b/arch/arm/mach-mvebu/headsmp-a9.S @@ -15,6 +15,8 @@ #include <linux/linkage.h> #include <linux/init.h> +#include <asm/assembler.h> + __CPUINIT #define CPU_RESUME_ADDR_REG 0xf10182d4 @@ -22,13 +24,18 @@ .global armada_375_smp_cpu1_enable_code_end armada_375_smp_cpu1_enable_code_start: - ldr r0, [pc, #4] +ARM_BE8(setend be) + adr r0, 1f + ldr r0, [r0] ldr r1, [r0] +ARM_BE8(rev r1, r1) mov pc, r1 +1: .word CPU_RESUME_ADDR_REG armada_375_smp_cpu1_enable_code_end: ENTRY(mvebu_cortex_a9_secondary_startup) +ARM_BE8(setend be) bl v7_invalidate_l1 b secondary_startup ENDPROC(mvebu_cortex_a9_secondary_startup) diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c index 53a55c8520bf..25aa8237d668 100644 --- a/arch/arm/mach-mvebu/pmsu.c +++ b/arch/arm/mach-mvebu/pmsu.c @@ -66,6 +66,8 @@ static void __iomem *pmsu_mp_base; extern void ll_disable_coherency(void); extern void ll_enable_coherency(void); +extern void armada_370_xp_cpu_resume(void); + static struct platform_device armada_xp_cpuidle_device = { .name = "cpuidle-armada-370-xp", }; @@ -140,13 +142,6 @@ static void armada_370_xp_pmsu_enable_l2_powerdown_onidle(void) writel(reg, pmsu_mp_base + L2C_NFABRIC_PM_CTL); } -static void armada_370_xp_cpu_resume(void) -{ - asm volatile("bl ll_add_cpu_to_smp_group\n\t" - "bl ll_enable_coherency\n\t" - "b cpu_resume\n\t"); -} - /* No locking is needed because we only access per-CPU registers */ void armada_370_xp_pmsu_idle_prepare(bool deepidle) { @@ -206,12 +201,12 @@ static noinline int do_armada_370_xp_cpu_suspend(unsigned long deepidle) /* Test the CR_C bit and set it if it was cleared */ asm volatile( - "mrc p15, 0, %0, c1, c0, 0 \n\t" - "tst %0, #(1 << 2) \n\t" - "orreq %0, %0, #(1 << 2) \n\t" - "mcreq p15, 0, %0, c1, c0, 0 \n\t" + "mrc p15, 0, r0, c1, c0, 0 \n\t" + "tst r0, #(1 << 2) \n\t" + "orreq r0, r0, #(1 << 2) \n\t" + "mcreq p15, 0, r0, c1, c0, 0 \n\t" "isb " - : : "r" (0)); + : : : "r0"); pr_warn("Failed to suspend the system\n"); diff --git a/arch/arm/mach-mvebu/pmsu_ll.S b/arch/arm/mach-mvebu/pmsu_ll.S new file mode 100644 index 000000000000..fc3de68d8c54 --- /dev/null +++ b/arch/arm/mach-mvebu/pmsu_ll.S @@ -0,0 +1,25 @@ +/* + * Copyright (C) 2014 Marvell + * + * Thomas Petazzoni <thomas.petazzoni@free-electrons.com> + * Gregory Clement <gregory.clement@free-electrons.com> + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. + */ + +#include <linux/linkage.h> +#include <asm/assembler.h> + +/* + * This is the entry point through which CPUs exiting cpuidle deep + * idle state are going. 
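The pmsu.c hunk above also tightens the inline assembly in do_armada_370_xp_cpu_suspend(): the old code passed a dummy "r" (0) input operand while the template scribbled on %0, so the compiler was never told that a register gets overwritten; the new version names r0 directly and declares it in the clobber list. For a related reason, armada_370_xp_cpu_resume() stops being a C function that merely wraps three branches and becomes real assembly in the pmsu_ll.S file being added here, where nothing can insert a prologue or disturb the BE8 setend handling. A minimal sketch of the corrected clobber pattern, not the kernel code itself (the added "cc" clobber accounts for the flag-setting tst):

static inline void sctlr_set_c_bit_sketch(void)
{
	asm volatile(
		"mrc	p15, 0, r0, c1, c0, 0	\n\t"	/* read SCTLR */
		"tst	r0, #(1 << 2)		\n\t"	/* C bit already set? */
		"orreq	r0, r0, #(1 << 2)	\n\t"	/* if not, set it */
		"mcreq	p15, 0, r0, c1, c0, 0	\n\t"	/* write SCTLR back */
		"isb"
		: : : "r0", "cc");
}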
+ */ +ENTRY(armada_370_xp_cpu_resume) +ARM_BE8(setend be ) @ go BE8 if entered LE + bl ll_add_cpu_to_smp_group + bl ll_enable_coherency + b cpu_resume +ENDPROC(armada_370_xp_cpu_resume) + diff --git a/arch/arm/mach-omap2/Kconfig b/arch/arm/mach-omap2/Kconfig index 062505345c95..1c1ed737f7ab 100644 --- a/arch/arm/mach-omap2/Kconfig +++ b/arch/arm/mach-omap2/Kconfig @@ -34,8 +34,8 @@ config ARCH_OMAP4 select HAVE_ARM_SCU if SMP select HAVE_ARM_TWD if SMP select OMAP_INTERCONNECT - select PL310_ERRATA_588369 - select PL310_ERRATA_727915 + select PL310_ERRATA_588369 if CACHE_L2X0 + select PL310_ERRATA_727915 if CACHE_L2X0 select PM_OPP if PM select PM_RUNTIME if CPU_IDLE select ARM_ERRATA_754322 diff --git a/arch/arm/mach-omap2/Makefile b/arch/arm/mach-omap2/Makefile index 8421f38cf445..8ca99e9321e3 100644 --- a/arch/arm/mach-omap2/Makefile +++ b/arch/arm/mach-omap2/Makefile @@ -110,14 +110,16 @@ obj-y += prm_common.o cm_common.o obj-$(CONFIG_ARCH_OMAP2) += prm2xxx_3xxx.o prm2xxx.o cm2xxx.o obj-$(CONFIG_ARCH_OMAP3) += prm2xxx_3xxx.o prm3xxx.o cm3xxx.o obj-$(CONFIG_ARCH_OMAP3) += vc3xxx_data.o vp3xxx_data.o -obj-$(CONFIG_SOC_AM33XX) += prm33xx.o cm33xx.o omap-prcm-4-5-common = cminst44xx.o cm44xx.o prm44xx.o \ prcm_mpu44xx.o prminst44xx.o \ vc44xx_data.o vp44xx_data.o obj-$(CONFIG_ARCH_OMAP4) += $(omap-prcm-4-5-common) obj-$(CONFIG_SOC_OMAP5) += $(omap-prcm-4-5-common) obj-$(CONFIG_SOC_DRA7XX) += $(omap-prcm-4-5-common) -obj-$(CONFIG_SOC_AM43XX) += $(omap-prcm-4-5-common) +am33xx-43xx-prcm-common += prm33xx.o cm33xx.o +obj-$(CONFIG_SOC_AM33XX) += $(am33xx-43xx-prcm-common) +obj-$(CONFIG_SOC_AM43XX) += $(omap-prcm-4-5-common) \ + $(am33xx-43xx-prcm-common) # OMAP voltage domains voltagedomain-common := voltage.o vc.o vp.o diff --git a/arch/arm/mach-omap2/clkt_dpll.c b/arch/arm/mach-omap2/clkt_dpll.c index 332af927f4d3..67fd26a18441 100644 --- a/arch/arm/mach-omap2/clkt_dpll.c +++ b/arch/arm/mach-omap2/clkt_dpll.c @@ -76,7 +76,7 @@ * (assuming that it is counting N upwards), or -2 if the enclosing loop * should skip to the next iteration (again assuming N is increasing). 
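In the clkt_dpll.c hunk that follows, the divider argument of _dpll_test_fint() is widened from u8 to unsigned int, presumably so that N values larger than 255 are no longer silently truncated on the way in. A trivial stand-alone illustration of that truncation (not OMAP code):

#include <stdio.h>

static unsigned int take_u8(unsigned char n)  { return n; }
static unsigned int take_uint(unsigned int n) { return n; }

int main(void)
{
	/* 256 does not fit in a u8, so the first call sees 0 */
	printf("%u %u\n", take_u8(256), take_uint(256));	/* prints "0 256" */
	return 0;
}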
*/ -static int _dpll_test_fint(struct clk_hw_omap *clk, u8 n) +static int _dpll_test_fint(struct clk_hw_omap *clk, unsigned int n) { struct dpll_data *dd; long fint, fint_min, fint_max; diff --git a/arch/arm/mach-omap2/cm-regbits-34xx.h b/arch/arm/mach-omap2/cm-regbits-34xx.h index 04dab2fcf862..ee6c784cd6b7 100644 --- a/arch/arm/mach-omap2/cm-regbits-34xx.h +++ b/arch/arm/mach-omap2/cm-regbits-34xx.h @@ -26,11 +26,14 @@ #define OMAP3430_EN_WDT3_SHIFT 12 #define OMAP3430_CM_FCLKEN_IVA2_EN_IVA2_MASK (1 << 0) #define OMAP3430_CM_FCLKEN_IVA2_EN_IVA2_SHIFT 0 +#define OMAP3430_IVA2_DPLL_FREQSEL_SHIFT 4 #define OMAP3430_IVA2_DPLL_FREQSEL_MASK (0xf << 4) #define OMAP3430_EN_IVA2_DPLL_DRIFTGUARD_SHIFT 3 +#define OMAP3430_EN_IVA2_DPLL_SHIFT 0 #define OMAP3430_EN_IVA2_DPLL_MASK (0x7 << 0) #define OMAP3430_ST_IVA2_SHIFT 0 #define OMAP3430_ST_IVA2_CLK_MASK (1 << 0) +#define OMAP3430_AUTO_IVA2_DPLL_SHIFT 0 #define OMAP3430_AUTO_IVA2_DPLL_MASK (0x7 << 0) #define OMAP3430_IVA2_CLK_SRC_SHIFT 19 #define OMAP3430_IVA2_CLK_SRC_WIDTH 3 diff --git a/arch/arm/mach-omap2/cm33xx.h b/arch/arm/mach-omap2/cm33xx.h index 15a778ce7707..bd2441790779 100644 --- a/arch/arm/mach-omap2/cm33xx.h +++ b/arch/arm/mach-omap2/cm33xx.h @@ -380,7 +380,7 @@ void am33xx_cm_clkdm_disable_hwsup(u16 inst, u16 cdoffs); void am33xx_cm_clkdm_force_sleep(u16 inst, u16 cdoffs); void am33xx_cm_clkdm_force_wakeup(u16 inst, u16 cdoffs); -#ifdef CONFIG_SOC_AM33XX +#if defined(CONFIG_SOC_AM33XX) || defined(CONFIG_SOC_AM43XX) extern int am33xx_cm_wait_module_idle(u16 inst, s16 cdoffs, u16 clkctrl_offs); extern void am33xx_cm_module_enable(u8 mode, u16 inst, s16 cdoffs, diff --git a/arch/arm/mach-omap2/common.h b/arch/arm/mach-omap2/common.h index a373d508799a..dc571f1d3b8a 100644 --- a/arch/arm/mach-omap2/common.h +++ b/arch/arm/mach-omap2/common.h @@ -162,7 +162,8 @@ static inline void omap3xxx_restart(enum reboot_mode mode, const char *cmd) } #endif -#if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) +#if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ + defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM43XX) void omap44xx_restart(enum reboot_mode mode, const char *cmd); #else static inline void omap44xx_restart(enum reboot_mode mode, const char *cmd) @@ -248,7 +249,6 @@ static inline void __iomem *omap4_get_scu_base(void) } #endif -extern void __init gic_init_irq(void); extern void gic_dist_disable(void); extern void gic_dist_enable(void); extern bool gic_dist_disabled(void); diff --git a/arch/arm/mach-omap2/devices.c b/arch/arm/mach-omap2/devices.c index 592ba0a0ecf3..b6f8f348296e 100644 --- a/arch/arm/mach-omap2/devices.c +++ b/arch/arm/mach-omap2/devices.c @@ -297,33 +297,6 @@ static void omap_init_audio(void) static inline void omap_init_audio(void) {} #endif -#if defined(CONFIG_SND_OMAP_SOC_OMAP_HDMI) || \ - defined(CONFIG_SND_OMAP_SOC_OMAP_HDMI_MODULE) - -static struct platform_device omap_hdmi_audio = { - .name = "omap-hdmi-audio", - .id = -1, -}; - -static void __init omap_init_hdmi_audio(void) -{ - struct omap_hwmod *oh; - struct platform_device *pdev; - - oh = omap_hwmod_lookup("dss_hdmi"); - if (!oh) - return; - - pdev = omap_device_build("omap-hdmi-audio-dai", -1, oh, NULL, 0); - WARN(IS_ERR(pdev), - "Can't build omap_device for omap-hdmi-audio-dai.\n"); - - platform_device_register(&omap_hdmi_audio); -} -#else -static inline void omap_init_hdmi_audio(void) {} -#endif - #if defined(CONFIG_SPI_OMAP24XX) || defined(CONFIG_SPI_OMAP24XX_MODULE) #include <linux/platform_data/spi-omap2-mcspi.h> @@ -459,7 +432,6 @@ 
static int __init omap2_init_devices(void) */ omap_init_audio(); omap_init_camera(); - omap_init_hdmi_audio(); omap_init_mbox(); /* If dtb is there, the devices will be created dynamically */ if (!of_have_populated_dt()) { diff --git a/arch/arm/mach-omap2/dsp.c b/arch/arm/mach-omap2/dsp.c index b8208b4b1bd9..f7492df1cbba 100644 --- a/arch/arm/mach-omap2/dsp.c +++ b/arch/arm/mach-omap2/dsp.c @@ -29,6 +29,7 @@ #ifdef CONFIG_TIDSPBRIDGE_DVFS #include "omap-pm.h" #endif +#include "soc.h" #include <linux/platform_data/dsp-omap.h> @@ -59,6 +60,9 @@ void __init omap_dsp_reserve_sdram_memblock(void) phys_addr_t size = CONFIG_TIDSPBRIDGE_MEMPOOL_SIZE; phys_addr_t paddr; + if (!cpu_is_omap34xx()) + return; + if (!size) return; @@ -83,6 +87,9 @@ static int __init omap_dsp_init(void) int err = -ENOMEM; struct omap_dsp_platform_data *pdata = &omap_dsp_pdata; + if (!cpu_is_omap34xx()) + return 0; + pdata->phys_mempool_base = omap_dsp_get_mempool_base(); if (pdata->phys_mempool_base) { @@ -115,6 +122,9 @@ module_init(omap_dsp_init); static void __exit omap_dsp_exit(void) { + if (!cpu_is_omap34xx()) + return; + platform_device_unregister(omap_dsp_pdev); } module_exit(omap_dsp_exit); diff --git a/arch/arm/mach-omap2/gpmc.c b/arch/arm/mach-omap2/gpmc.c index 2c0c2816900f..8bc13380f0a0 100644 --- a/arch/arm/mach-omap2/gpmc.c +++ b/arch/arm/mach-omap2/gpmc.c @@ -1615,7 +1615,7 @@ static int gpmc_probe_dt(struct platform_device *pdev) return ret; } - for_each_child_of_node(pdev->dev.of_node, child) { + for_each_available_child_of_node(pdev->dev.of_node, child) { if (!child->name) continue; diff --git a/arch/arm/mach-omap2/id.c b/arch/arm/mach-omap2/id.c index 43969da5d50b..d42022f2a71e 100644 --- a/arch/arm/mach-omap2/id.c +++ b/arch/arm/mach-omap2/id.c @@ -649,6 +649,18 @@ void __init dra7xxx_check_revision(void) } break; + case 0xb9bc: + switch (rev) { + case 0: + omap_revision = DRA722_REV_ES1_0; + break; + default: + /* If we have no new revisions */ + omap_revision = DRA722_REV_ES1_0; + break; + } + break; + default: /* Unknown default to latest silicon rev as default*/ pr_warn("%s: unknown idcode=0x%08x (hawkeye=0x%08x,rev=0x%d)\n", diff --git a/arch/arm/mach-omap2/mux.c b/arch/arm/mach-omap2/mux.c index fd88edeb027f..f62f7537d899 100644 --- a/arch/arm/mach-omap2/mux.c +++ b/arch/arm/mach-omap2/mux.c @@ -183,8 +183,10 @@ static int __init _omap_mux_get_by_name(struct omap_mux_partition *partition, m0_entry = mux->muxnames[0]; /* First check for full name in mode0.muxmode format */ - if (mode0_len && strncmp(muxname, m0_entry, mode0_len)) - continue; + if (mode0_len) + if (strncmp(muxname, m0_entry, mode0_len) || + (strlen(m0_entry) != mode0_len)) + continue; /* Then check for muxmode only */ for (i = 0; i < OMAP_MUX_NR_MODES; i++) { diff --git a/arch/arm/mach-omap2/omap4-common.c b/arch/arm/mach-omap2/omap4-common.c index 326cd982a3cb..539e8106eb96 100644 --- a/arch/arm/mach-omap2/omap4-common.c +++ b/arch/arm/mach-omap2/omap4-common.c @@ -102,26 +102,6 @@ void __init omap_barriers_init(void) {} #endif -void __init gic_init_irq(void) -{ - void __iomem *omap_irq_base; - - /* Static mapping, never released */ - gic_dist_base_addr = ioremap(OMAP44XX_GIC_DIST_BASE, SZ_4K); - BUG_ON(!gic_dist_base_addr); - - twd_base = ioremap(OMAP44XX_LOCAL_TWD_BASE, SZ_4K); - BUG_ON(!twd_base); - - /* Static mapping, never released */ - omap_irq_base = ioremap(OMAP44XX_GIC_CPU_BASE, SZ_512); - BUG_ON(!omap_irq_base); - - omap_wakeupgen_init(); - - gic_init(0, 29, gic_dist_base_addr, omap_irq_base); -} - void 
gic_dist_disable(void) { if (gic_dist_base_addr) diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c index f7bb435bb543..6c074f37cdd2 100644 --- a/arch/arm/mach-omap2/omap_hwmod.c +++ b/arch/arm/mach-omap2/omap_hwmod.c @@ -4251,9 +4251,9 @@ void __init omap_hwmod_init(void) soc_ops.enable_module = _omap4_enable_module; soc_ops.disable_module = _omap4_disable_module; soc_ops.wait_target_ready = _omap4_wait_target_ready; - soc_ops.assert_hardreset = _omap4_assert_hardreset; - soc_ops.deassert_hardreset = _omap4_deassert_hardreset; - soc_ops.is_hardreset_asserted = _omap4_is_hardreset_asserted; + soc_ops.assert_hardreset = _am33xx_assert_hardreset; + soc_ops.deassert_hardreset = _am33xx_deassert_hardreset; + soc_ops.is_hardreset_asserted = _am33xx_is_hardreset_asserted; soc_ops.init_clkdm = _init_clkdm; } else if (soc_is_am33xx()) { soc_ops.enable_module = _am33xx_enable_module; diff --git a/arch/arm/mach-omap2/omap_hwmod_54xx_data.c b/arch/arm/mach-omap2/omap_hwmod_54xx_data.c index 290213f2cbe3..1103aa0e0d29 100644 --- a/arch/arm/mach-omap2/omap_hwmod_54xx_data.c +++ b/arch/arm/mach-omap2/omap_hwmod_54xx_data.c @@ -2020,6 +2020,77 @@ static struct omap_hwmod omap54xx_wd_timer2_hwmod = { }, }; +/* + * 'ocp2scp' class + * bridge to transform ocp interface protocol to scp (serial control port) + * protocol + */ +/* ocp2scp3 */ +static struct omap_hwmod omap54xx_ocp2scp3_hwmod; +/* l4_cfg -> ocp2scp3 */ +static struct omap_hwmod_ocp_if omap54xx_l4_cfg__ocp2scp3 = { + .master = &omap54xx_l4_cfg_hwmod, + .slave = &omap54xx_ocp2scp3_hwmod, + .clk = "l4_root_clk_div", + .user = OCP_USER_MPU | OCP_USER_SDMA, +}; + +static struct omap_hwmod omap54xx_ocp2scp3_hwmod = { + .name = "ocp2scp3", + .class = &omap54xx_ocp2scp_hwmod_class, + .clkdm_name = "l3init_clkdm", + .prcm = { + .omap4 = { + .clkctrl_offs = OMAP54XX_CM_L3INIT_OCP2SCP3_CLKCTRL_OFFSET, + .context_offs = OMAP54XX_RM_L3INIT_OCP2SCP3_CONTEXT_OFFSET, + .modulemode = MODULEMODE_HWCTRL, + }, + }, +}; + +/* + * 'sata' class + * sata: serial ata interface gen2 compliant ( 1 rx/ 1 tx) + */ + +static struct omap_hwmod_class_sysconfig omap54xx_sata_sysc = { + .sysc_offs = 0x0000, + .sysc_flags = (SYSC_HAS_MIDLEMODE | SYSC_HAS_SIDLEMODE), + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART | + SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO | + MSTANDBY_SMART | MSTANDBY_SMART_WKUP), + .sysc_fields = &omap_hwmod_sysc_type2, +}; + +static struct omap_hwmod_class omap54xx_sata_hwmod_class = { + .name = "sata", + .sysc = &omap54xx_sata_sysc, +}; + +/* sata */ +static struct omap_hwmod omap54xx_sata_hwmod = { + .name = "sata", + .class = &omap54xx_sata_hwmod_class, + .clkdm_name = "l3init_clkdm", + .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY, + .main_clk = "func_48m_fclk", + .mpu_rt_idx = 1, + .prcm = { + .omap4 = { + .clkctrl_offs = OMAP54XX_CM_L3INIT_SATA_CLKCTRL_OFFSET, + .context_offs = OMAP54XX_RM_L3INIT_SATA_CONTEXT_OFFSET, + .modulemode = MODULEMODE_SWCTRL, + }, + }, +}; + +/* l4_cfg -> sata */ +static struct omap_hwmod_ocp_if omap54xx_l4_cfg__sata = { + .master = &omap54xx_l4_cfg_hwmod, + .slave = &omap54xx_sata_hwmod, + .clk = "l3_iclk_div", + .user = OCP_USER_MPU | OCP_USER_SDMA, +}; /* * Interfaces @@ -2765,6 +2836,8 @@ static struct omap_hwmod_ocp_if *omap54xx_hwmod_ocp_ifs[] __initdata = { &omap54xx_l4_cfg__usb_tll_hs, &omap54xx_l4_cfg__usb_otg_ss, &omap54xx_l4_wkup__wd_timer2, + &omap54xx_l4_cfg__ocp2scp3, + &omap54xx_l4_cfg__sata, NULL, }; diff --git a/arch/arm/mach-omap2/omap_hwmod_7xx_data.c 
b/arch/arm/mach-omap2/omap_hwmod_7xx_data.c index 20b4398cec05..284324f2b98a 100644 --- a/arch/arm/mach-omap2/omap_hwmod_7xx_data.c +++ b/arch/arm/mach-omap2/omap_hwmod_7xx_data.c @@ -1268,9 +1268,6 @@ static struct omap_hwmod_class dra7xx_sata_hwmod_class = { }; /* sata */ -static struct omap_hwmod_opt_clk sata_opt_clks[] = { - { .role = "ref_clk", .clk = "sata_ref_clk" }, -}; static struct omap_hwmod dra7xx_sata_hwmod = { .name = "sata", @@ -1278,6 +1275,7 @@ static struct omap_hwmod dra7xx_sata_hwmod = { .clkdm_name = "l3init_clkdm", .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY, .main_clk = "func_48m_fclk", + .mpu_rt_idx = 1, .prcm = { .omap4 = { .clkctrl_offs = DRA7XX_CM_L3INIT_SATA_CLKCTRL_OFFSET, @@ -1285,8 +1283,6 @@ static struct omap_hwmod dra7xx_sata_hwmod = { .modulemode = MODULEMODE_SWCTRL, }, }, - .opt_clks = sata_opt_clks, - .opt_clks_cnt = ARRAY_SIZE(sata_opt_clks), }; /* @@ -1731,8 +1727,20 @@ static struct omap_hwmod dra7xx_uart6_hwmod = { * */ +static struct omap_hwmod_class_sysconfig dra7xx_usb_otg_ss_sysc = { + .rev_offs = 0x0000, + .sysc_offs = 0x0010, + .sysc_flags = (SYSC_HAS_DMADISABLE | SYSC_HAS_MIDLEMODE | + SYSC_HAS_SIDLEMODE), + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART | + SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO | + MSTANDBY_SMART | MSTANDBY_SMART_WKUP), + .sysc_fields = &omap_hwmod_sysc_type2, +}; + static struct omap_hwmod_class dra7xx_usb_otg_ss_hwmod_class = { .name = "usb_otg_ss", + .sysc = &dra7xx_usb_otg_ss_sysc, }; /* usb_otg_ss1 */ diff --git a/arch/arm/mach-omap2/prm-regbits-34xx.h b/arch/arm/mach-omap2/prm-regbits-34xx.h index 106132db532b..cbefbd7cfdb5 100644 --- a/arch/arm/mach-omap2/prm-regbits-34xx.h +++ b/arch/arm/mach-omap2/prm-regbits-34xx.h @@ -35,6 +35,8 @@ #define OMAP3430_LOGICSTATEST_MASK (1 << 2) #define OMAP3430_LASTLOGICSTATEENTERED_MASK (1 << 2) #define OMAP3430_LASTPOWERSTATEENTERED_MASK (0x3 << 0) +#define OMAP3430_GRPSEL_MCBSP5_MASK (1 << 10) +#define OMAP3430_GRPSEL_MCBSP1_MASK (1 << 9) #define OMAP3630_GRPSEL_UART4_MASK (1 << 18) #define OMAP3430_GRPSEL_GPIO6_MASK (1 << 17) #define OMAP3430_GRPSEL_GPIO5_MASK (1 << 16) @@ -42,6 +44,10 @@ #define OMAP3430_GRPSEL_GPIO3_MASK (1 << 14) #define OMAP3430_GRPSEL_GPIO2_MASK (1 << 13) #define OMAP3430_GRPSEL_UART3_MASK (1 << 11) +#define OMAP3430_GRPSEL_GPT8_MASK (1 << 9) +#define OMAP3430_GRPSEL_GPT7_MASK (1 << 8) +#define OMAP3430_GRPSEL_GPT6_MASK (1 << 7) +#define OMAP3430_GRPSEL_GPT5_MASK (1 << 6) #define OMAP3430_GRPSEL_MCBSP4_MASK (1 << 2) #define OMAP3430_GRPSEL_MCBSP3_MASK (1 << 1) #define OMAP3430_GRPSEL_MCBSP2_MASK (1 << 0) diff --git a/arch/arm/mach-omap2/soc.h b/arch/arm/mach-omap2/soc.h index de2a34c423a7..01ca8086fb6c 100644 --- a/arch/arm/mach-omap2/soc.h +++ b/arch/arm/mach-omap2/soc.h @@ -462,6 +462,7 @@ IS_OMAP_TYPE(3430, 0x3430) #define DRA7XX_CLASS 0x07000000 #define DRA752_REV_ES1_0 (DRA7XX_CLASS | (0x52 << 16) | (0x10 << 8)) #define DRA752_REV_ES1_1 (DRA7XX_CLASS | (0x52 << 16) | (0x11 << 8)) +#define DRA722_REV_ES1_0 (DRA7XX_CLASS | (0x22 << 16) | (0x10 << 8)) void omap2xxx_check_revision(void); void omap3xxx_check_revision(void); diff --git a/arch/arm/mach-sa1100/collie.c b/arch/arm/mach-sa1100/collie.c index f9874ba60cc8..108939f8d053 100644 --- a/arch/arm/mach-sa1100/collie.c +++ b/arch/arm/mach-sa1100/collie.c @@ -329,6 +329,11 @@ static struct mtd_partition collie_partitions[] = { .name = "rootfs", .offset = MTDPART_OFS_APPEND, .size = 0x00e20000, + }, { + .name = "bootblock", + .offset = MTDPART_OFS_APPEND, + .size = 0x00020000, + 
.mask_flags = MTD_WRITEABLE } }; @@ -356,7 +361,7 @@ static void collie_flash_exit(void) } static struct flash_platform_data collie_flash_data = { - .map_name = "jedec_probe", + .map_name = "cfi_probe", .init = collie_flash_init, .set_vpp = collie_set_vpp, .exit = collie_flash_exit, diff --git a/arch/arm/mach-sti/Kconfig b/arch/arm/mach-sti/Kconfig index 7e33e9d2c42e..878e9ec97d0f 100644 --- a/arch/arm/mach-sti/Kconfig +++ b/arch/arm/mach-sti/Kconfig @@ -11,8 +11,8 @@ menuconfig ARCH_STI select ARM_ERRATA_754322 select ARM_ERRATA_764369 if SMP select ARM_ERRATA_775420 - select PL310_ERRATA_753970 if CACHE_PL310 - select PL310_ERRATA_769419 if CACHE_PL310 + select PL310_ERRATA_753970 if CACHE_L2X0 + select PL310_ERRATA_769419 if CACHE_L2X0 help Include support for STiH41x SOCs like STiH415/416 using the device tree for discovery diff --git a/arch/arm/mach-sunxi/sunxi.c b/arch/arm/mach-sunxi/sunxi.c index 3f9587bb51f6..b6085084e0ff 100644 --- a/arch/arm/mach-sunxi/sunxi.c +++ b/arch/arm/mach-sunxi/sunxi.c @@ -12,8 +12,81 @@ #include <linux/clk-provider.h> #include <linux/clocksource.h> +#include <linux/delay.h> +#include <linux/kernel.h> +#include <linux/init.h> +#include <linux/of_address.h> +#include <linux/of_irq.h> +#include <linux/of_platform.h> +#include <linux/io.h> +#include <linux/reboot.h> #include <asm/mach/arch.h> +#include <asm/mach/map.h> +#include <asm/system_misc.h> + +#define SUN4I_WATCHDOG_CTRL_REG 0x00 +#define SUN4I_WATCHDOG_CTRL_RESTART BIT(0) +#define SUN4I_WATCHDOG_MODE_REG 0x04 +#define SUN4I_WATCHDOG_MODE_ENABLE BIT(0) +#define SUN4I_WATCHDOG_MODE_RESET_ENABLE BIT(1) + +#define SUN6I_WATCHDOG1_IRQ_REG 0x00 +#define SUN6I_WATCHDOG1_CTRL_REG 0x10 +#define SUN6I_WATCHDOG1_CTRL_RESTART BIT(0) +#define SUN6I_WATCHDOG1_CONFIG_REG 0x14 +#define SUN6I_WATCHDOG1_CONFIG_RESTART BIT(0) +#define SUN6I_WATCHDOG1_CONFIG_IRQ BIT(1) +#define SUN6I_WATCHDOG1_MODE_REG 0x18 +#define SUN6I_WATCHDOG1_MODE_ENABLE BIT(0) + +static void __iomem *wdt_base; + +static void sun4i_restart(enum reboot_mode mode, const char *cmd) +{ + if (!wdt_base) + return; + + /* Enable timer and set reset bit in the watchdog */ + writel(SUN4I_WATCHDOG_MODE_ENABLE | SUN4I_WATCHDOG_MODE_RESET_ENABLE, + wdt_base + SUN4I_WATCHDOG_MODE_REG); + + /* + * Restart the watchdog. The default (and lowest) interval + * value for the watchdog is 0.5s. 
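The sunxi hunk in progress here adds a watchdog-based restart hook for the sun4i/sun7i machines. Condensed into readable form, the idiom sun4i_restart() implements is: arm the watchdog with its shortest (0.5 s) timeout and reset output enabled, kick the countdown once, then keep re-arming it until the reset actually fires. A sketch using the register names defined earlier in this hunk (an illustrative restatement only; the real function continues below):

static void sun4i_restart_sketch(void __iomem *wdt)
{
	/* enable the watchdog and let it reset the SoC on expiry */
	writel(SUN4I_WATCHDOG_MODE_ENABLE | SUN4I_WATCHDOG_MODE_RESET_ENABLE,
	       wdt + SUN4I_WATCHDOG_MODE_REG);

	/* start the countdown; the minimum interval is 0.5 s */
	writel(SUN4I_WATCHDOG_CTRL_RESTART, wdt + SUN4I_WATCHDOG_CTRL_REG);

	/* spin, re-arming every 5 ms, until the watchdog resets the SoC */
	while (1) {
		mdelay(5);
		writel(SUN4I_WATCHDOG_MODE_ENABLE |
		       SUN4I_WATCHDOG_MODE_RESET_ENABLE,
		       wdt + SUN4I_WATCHDOG_MODE_REG);
	}
}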
+ */ + writel(SUN4I_WATCHDOG_CTRL_RESTART, wdt_base + SUN4I_WATCHDOG_CTRL_REG); + + while (1) { + mdelay(5); + writel(SUN4I_WATCHDOG_MODE_ENABLE | SUN4I_WATCHDOG_MODE_RESET_ENABLE, + wdt_base + SUN4I_WATCHDOG_MODE_REG); + } +} + +static struct of_device_id sunxi_restart_ids[] = { + { .compatible = "allwinner,sun4i-a10-wdt" }, + { /*sentinel*/ } +}; + +static void sunxi_setup_restart(void) +{ + struct device_node *np; + + np = of_find_matching_node(NULL, sunxi_restart_ids); + if (WARN(!np, "unable to setup watchdog restart")) + return; + + wdt_base = of_iomap(np, 0); + WARN(!wdt_base, "failed to map watchdog base address"); +} + +static void __init sunxi_dt_init(void) +{ + sunxi_setup_restart(); + + of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); +} static const char * const sunxi_board_dt_compat[] = { "allwinner,sun4i-a10", @@ -23,7 +96,9 @@ static const char * const sunxi_board_dt_compat[] = { }; DT_MACHINE_START(SUNXI_DT, "Allwinner A1X (Device Tree)") + .init_machine = sunxi_dt_init, .dt_compat = sunxi_board_dt_compat, + .restart = sun4i_restart, MACHINE_END static const char * const sun6i_board_dt_compat[] = { @@ -51,5 +126,7 @@ static const char * const sun7i_board_dt_compat[] = { }; DT_MACHINE_START(SUN7I_DT, "Allwinner sun7i (A20) Family") + .init_machine = sunxi_dt_init, .dt_compat = sun7i_board_dt_compat, + .restart = sun4i_restart, MACHINE_END diff --git a/arch/arm/mach-ux500/Kconfig b/arch/arm/mach-ux500/Kconfig index 5be7c4583a93..699e8601dbf0 100644 --- a/arch/arm/mach-ux500/Kconfig +++ b/arch/arm/mach-ux500/Kconfig @@ -15,7 +15,7 @@ menuconfig ARCH_U8500 select PINCTRL select PINCTRL_ABX500 select PINCTRL_NOMADIK - select PL310_ERRATA_753970 if CACHE_PL310 + select PL310_ERRATA_753970 if CACHE_L2X0 help Support for ST-Ericsson's Ux500 architecture diff --git a/arch/arm/mach-vexpress/Kconfig b/arch/arm/mach-vexpress/Kconfig index 99c1f151c403..d8b9330f896a 100644 --- a/arch/arm/mach-vexpress/Kconfig +++ b/arch/arm/mach-vexpress/Kconfig @@ -43,7 +43,7 @@ config ARCH_VEXPRESS_CORTEX_A5_A9_ERRATA bool "Enable A5 and A9 only errata work-arounds" default y select ARM_ERRATA_720789 - select PL310_ERRATA_753970 if CACHE_PL310 + select PL310_ERRATA_753970 if CACHE_L2X0 help Provides common dependencies for Versatile Express platforms based on Cortex-A5 and Cortex-A9 processors. In order to diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig index eda0dd0ab97b..c348eaee7ee2 100644 --- a/arch/arm/mm/Kconfig +++ b/arch/arm/mm/Kconfig @@ -889,9 +889,10 @@ config CACHE_L2X0 help This option enables the L2x0 PrimeCell. +if CACHE_L2X0 + config CACHE_PL310 bool - depends on CACHE_L2X0 default y if CPU_V7 && !(CPU_V6 || CPU_V6K) help This option enables optimisations for the PL310 cache @@ -899,7 +900,6 @@ config CACHE_PL310 config PL310_ERRATA_588369 bool "PL310 errata: Clean & Invalidate maintenance operations do not invalidate clean lines" - depends on CACHE_L2X0 help The PL310 L2 cache controller implements three types of Clean & Invalidate maintenance operations: by Physical Address @@ -912,7 +912,6 @@ config PL310_ERRATA_588369 config PL310_ERRATA_727915 bool "PL310 errata: Background Clean & Invalidate by Way operation can cause data corruption" - depends on CACHE_L2X0 help PL310 implements the Clean & Invalidate by Way L2 cache maintenance operation (offset 0x7FC). 
This operation runs in background so that @@ -923,7 +922,6 @@ config PL310_ERRATA_727915 config PL310_ERRATA_753970 bool "PL310 errata: cache sync operation may be faulty" - depends on CACHE_PL310 help This option enables the workaround for the 753970 PL310 (r3p0) erratum. @@ -938,7 +936,6 @@ config PL310_ERRATA_753970 config PL310_ERRATA_769419 bool "PL310 errata: no automatic Store Buffer drain" - depends on CACHE_L2X0 help On revisions of the PL310 prior to r3p2, the Store Buffer does not automatically drain. This can cause normal, non-cacheable @@ -948,6 +945,8 @@ config PL310_ERRATA_769419 on systems with an outer cache, the store buffer is drained explicitly. +endif + config CACHE_TAUROS2 bool "Enable the Tauros2 L2 cache controller" depends on (ARCH_DOVE || ARCH_MMP || CPU_PJ4) diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c index efc5cabf70e0..7c3fb41a462e 100644 --- a/arch/arm/mm/cache-l2x0.c +++ b/arch/arm/mm/cache-l2x0.c @@ -664,7 +664,7 @@ static int l2c310_cpu_enable_flz(struct notifier_block *nb, unsigned long act, v static void __init l2c310_enable(void __iomem *base, u32 aux, unsigned num_lock) { - unsigned rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_PART_MASK; + unsigned rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_RTL_MASK; bool cortex_a9 = read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9; if (rev >= L310_CACHE_ID_RTL_R2P0) { @@ -1069,6 +1069,33 @@ static const struct l2c_init_data of_l2c310_data __initconst = { }; /* + * This is a variant of the of_l2c310_data with .sync set to + * NULL. Outer sync operations are not needed when the system is I/O + * coherent, and potentially harmful in certain situations (PCIe/PL310 + * deadlock on Armada 375/38x due to hardware I/O coherency). The + * other operations are kept because they are infrequent (therefore do + * not cause the deadlock in practice) and needed for secondary CPU + * boot and other power management activities. + */ +static const struct l2c_init_data of_l2c310_coherent_data __initconst = { + .type = "L2C-310 Coherent", + .way_size_0 = SZ_8K, + .num_lock = 8, + .of_parse = l2c310_of_parse, + .enable = l2c310_enable, + .fixup = l2c310_fixup, + .save = l2c310_save, + .outer_cache = { + .inv_range = l2c210_inv_range, + .clean_range = l2c210_clean_range, + .flush_range = l2c210_flush_range, + .flush_all = l2c210_flush_all, + .disable = l2c310_disable, + .resume = l2c310_resume, + }, +}; + +/* * Note that the end addresses passed to Linux primitives are * noninclusive, while the hardware cache range operations use * inclusive start and end addresses. 
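The comment just above notes that Linux hands the outer-cache range operations an exclusive end address while the L2C hardware expects inclusive start and end addresses; in practice the last address programmed into the controller is therefore derived from end - 1. A minimal hypothetical helper to make the convention concrete (not the kernel's implementation):

static inline unsigned long l2c_last_line_sketch(unsigned long end,
						 unsigned long line_size)
{
	/* exclusive 'end' -> address of the last cache line touched */
	return (end - 1) & ~(line_size - 1);
}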
@@ -1487,6 +1514,10 @@ int __init l2x0_of_init(u32 aux_val, u32 aux_mask) data = of_match_node(l2x0_ids, np)->data; + if (of_device_is_compatible(np, "arm,pl310-cache") && + of_property_read_bool(np, "arm,io-coherent")) + data = &of_l2c310_coherent_data; + old_aux = readl_relaxed(l2x0_base + L2X0_AUX_CTRL); if (old_aux != ((old_aux & aux_mask) | aux_val)) { pr_warn("L2C: platform modifies aux control register: 0x%08x -> 0x%08x\n", diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c index da1874f9f8cf..a014dfacd5ca 100644 --- a/arch/arm/mm/nommu.c +++ b/arch/arm/mm/nommu.c @@ -300,6 +300,7 @@ void __init sanity_check_meminfo(void) sanity_check_meminfo_mpu(); end = memblock_end_of_DRAM(); high_memory = __va(end - 1) + 1; + memblock_set_current_limit(end); } /* diff --git a/arch/arm/mm/proc-arm925.S b/arch/arm/mm/proc-arm925.S index 97448c3acf38..ba0d58e1a2a2 100644 --- a/arch/arm/mm/proc-arm925.S +++ b/arch/arm/mm/proc-arm925.S @@ -502,6 +502,7 @@ __\name\()_proc_info: .long \cpu_val .long \cpu_mask .long PMD_TYPE_SECT | \ + PMD_SECT_CACHEABLE | \ PMD_BIT4 | \ PMD_SECT_AP_WRITE | \ PMD_SECT_AP_READ diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index a474de346be6..839f48c26ef0 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -4,6 +4,7 @@ config ARM64 select ARCH_HAS_OPP select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_USE_CMPXCHG_LOCKREF + select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_WANT_OPTIONAL_GPIOLIB select ARCH_WANT_COMPAT_IPC_PARSE_VERSION select ARCH_WANT_FRAME_POINTERS diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index 993bce527b85..902eb708804a 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -56,6 +56,8 @@ #define TASK_SIZE_32 UL(0x100000000) #define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \ TASK_SIZE_32 : TASK_SIZE_64) +#define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? 
\ + TASK_SIZE_32 : TASK_SIZE_64) #else #define TASK_SIZE TASK_SIZE_64 #endif /* CONFIG_COMPAT */ diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 579702086488..e0ccceb317d9 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -292,7 +292,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn, #define pmd_sect(pmd) ((pmd_val(pmd) & PMD_TYPE_MASK) == \ PMD_TYPE_SECT) -#ifdef ARM64_64K_PAGES +#ifdef CONFIG_ARM64_64K_PAGES #define pud_sect(pud) (0) #else #define pud_sect(pud) ((pud_val(pud) & PUD_TYPE_MASK) == \ diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h index a429b5940be2..501000fadb6f 100644 --- a/arch/arm64/include/asm/ptrace.h +++ b/arch/arm64/include/asm/ptrace.h @@ -21,6 +21,10 @@ #include <uapi/asm/ptrace.h> +/* Current Exception Level values, as contained in CurrentEL */ +#define CurrentEL_EL1 (1 << 2) +#define CurrentEL_EL2 (2 << 2) + /* AArch32-specific ptrace requests */ #define COMPAT_PTRACE_GETREGS 12 #define COMPAT_PTRACE_SETREGS 13 diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S index 66716c9b9e5f..619b1dd7bcde 100644 --- a/arch/arm64/kernel/efi-entry.S +++ b/arch/arm64/kernel/efi-entry.S @@ -78,8 +78,7 @@ ENTRY(efi_stub_entry) /* Turn off Dcache and MMU */ mrs x0, CurrentEL - cmp x0, #PSR_MODE_EL2t - ccmp x0, #PSR_MODE_EL2h, #0x4, ne + cmp x0, #CurrentEL_EL2 b.ne 1f mrs x0, sctlr_el2 bic x0, x0, #1 << 0 // clear SCTLR.M diff --git a/arch/arm64/kernel/efi-stub.c b/arch/arm64/kernel/efi-stub.c index 60e98a639ac5..e786e6cdc400 100644 --- a/arch/arm64/kernel/efi-stub.c +++ b/arch/arm64/kernel/efi-stub.c @@ -12,8 +12,6 @@ #include <linux/efi.h> #include <linux/libfdt.h> #include <asm/sections.h> -#include <generated/compile.h> -#include <generated/utsrelease.h> /* * AArch64 requires the DTB to be 8-byte aligned in the first 512MiB from diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index a96d3a6a63f6..a2c1195abb7f 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -270,8 +270,7 @@ ENDPROC(stext) */ ENTRY(el2_setup) mrs x0, CurrentEL - cmp x0, #PSR_MODE_EL2t - ccmp x0, #PSR_MODE_EL2h, #0x4, ne + cmp x0, #CurrentEL_EL2 b.ne 1f mrs x0, sctlr_el2 CPU_BE( orr x0, x0, #(1 << 25) ) // Set the EE bit for EL2 diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c index 9aecbace4128..13bbc3be6f5a 100644 --- a/arch/arm64/mm/copypage.c +++ b/arch/arm64/mm/copypage.c @@ -27,8 +27,10 @@ void __cpu_copy_user_page(void *kto, const void *kfrom, unsigned long vaddr) copy_page(kto, kfrom); __flush_dcache_area(kto, PAGE_SIZE); } +EXPORT_SYMBOL_GPL(__cpu_copy_user_page); void __cpu_clear_user_page(void *kaddr, unsigned long vaddr) { clear_page(kaddr); } +EXPORT_SYMBOL_GPL(__cpu_clear_user_page); diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c index e4193e3adc7f..0d64089d28b5 100644 --- a/arch/arm64/mm/flush.c +++ b/arch/arm64/mm/flush.c @@ -79,7 +79,8 @@ void __sync_icache_dcache(pte_t pte, unsigned long addr) return; if (!test_and_set_bit(PG_dcache_clean, &page->flags)) { - __flush_dcache_area(page_address(page), PAGE_SIZE); + __flush_dcache_area(page_address(page), + PAGE_SIZE << compound_order(page)); __flush_icache_all(); } else if (icache_is_aivivt()) { __flush_icache_all(); diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index f43db8a69262..e90c5426fe14 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -60,6 +60,17 @@ static int __init early_initrd(char *p) 
early_param("initrd", early_initrd); #endif +/* + * Return the maximum physical address for ZONE_DMA (DMA_BIT_MASK(32)). It + * currently assumes that for memory starting above 4G, 32-bit devices will + * use a DMA offset. + */ +static phys_addr_t max_zone_dma_phys(void) +{ + phys_addr_t offset = memblock_start_of_DRAM() & GENMASK_ULL(63, 32); + return min(offset + (1ULL << 32), memblock_end_of_DRAM()); +} + static void __init zone_sizes_init(unsigned long min, unsigned long max) { struct memblock_region *reg; @@ -70,9 +81,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max) /* 4GB maximum for 32-bit only capable devices */ if (IS_ENABLED(CONFIG_ZONE_DMA)) { - unsigned long max_dma_phys = - (unsigned long)(dma_to_phys(NULL, DMA_BIT_MASK(32)) + 1); - max_dma = max(min, min(max, max_dma_phys >> PAGE_SHIFT)); + max_dma = PFN_DOWN(max_zone_dma_phys()); zone_size[ZONE_DMA] = max_dma - min; } zone_size[ZONE_NORMAL] = max - max_dma; @@ -146,7 +155,7 @@ void __init arm64_memblock_init(void) /* 4GB maximum for 32-bit only capable devices */ if (IS_ENABLED(CONFIG_ZONE_DMA)) - dma_phys_limit = dma_to_phys(NULL, DMA_BIT_MASK(32)) + 1; + dma_phys_limit = max_zone_dma_phys(); dma_contiguous_reserve(dma_phys_limit); memblock_allow_resize(); diff --git a/arch/blackfin/configs/BF609-EZKIT_defconfig b/arch/blackfin/configs/BF609-EZKIT_defconfig index a7e9bfd84183..fcec5ce71392 100644 --- a/arch/blackfin/configs/BF609-EZKIT_defconfig +++ b/arch/blackfin/configs/BF609-EZKIT_defconfig @@ -102,7 +102,7 @@ CONFIG_I2C_CHARDEV=y CONFIG_I2C_BLACKFIN_TWI=y CONFIG_I2C_BLACKFIN_TWI_CLK_KHZ=100 CONFIG_SPI=y -CONFIG_SPI_BFIN_V3=y +CONFIG_SPI_ADI_V3=y CONFIG_GPIOLIB=y CONFIG_GPIO_SYSFS=y # CONFIG_HWMON is not set diff --git a/arch/blackfin/kernel/vmlinux.lds.S b/arch/blackfin/kernel/vmlinux.lds.S index ba35864b2b74..c9eec84aa258 100644 --- a/arch/blackfin/kernel/vmlinux.lds.S +++ b/arch/blackfin/kernel/vmlinux.lds.S @@ -145,7 +145,7 @@ SECTIONS .text_l1 L1_CODE_START : AT(LOADADDR(.exit.data) + SIZEOF(.exit.data)) #else - .init.data : AT(__data_lma + __data_len) + .init.data : AT(__data_lma + __data_len + 32) { __sinitdata = .; INIT_DATA diff --git a/arch/blackfin/mach-bf533/boards/blackstamp.c b/arch/blackfin/mach-bf533/boards/blackstamp.c index 63b0e4fe760c..0ccf0cf4daaf 100644 --- a/arch/blackfin/mach-bf533/boards/blackstamp.c +++ b/arch/blackfin/mach-bf533/boards/blackstamp.c @@ -20,6 +20,7 @@ #include <linux/spi/spi.h> #include <linux/spi/flash.h> #include <linux/irq.h> +#include <linux/gpio.h> #include <linux/i2c.h> #include <asm/dma.h> #include <asm/bfin5xx_spi.h> diff --git a/arch/blackfin/mach-bf537/boards/cm_bf537e.c b/arch/blackfin/mach-bf537/boards/cm_bf537e.c index c65c6dbda3da..1e7290ef3525 100644 --- a/arch/blackfin/mach-bf537/boards/cm_bf537e.c +++ b/arch/blackfin/mach-bf537/boards/cm_bf537e.c @@ -21,6 +21,7 @@ #endif #include <linux/ata_platform.h> #include <linux/irq.h> +#include <linux/gpio.h> #include <asm/dma.h> #include <asm/bfin5xx_spi.h> #include <asm/portmux.h> diff --git a/arch/blackfin/mach-bf537/boards/cm_bf537u.c b/arch/blackfin/mach-bf537/boards/cm_bf537u.c index af58454b4bff..c7495dc74690 100644 --- a/arch/blackfin/mach-bf537/boards/cm_bf537u.c +++ b/arch/blackfin/mach-bf537/boards/cm_bf537u.c @@ -21,6 +21,7 @@ #endif #include <linux/ata_platform.h> #include <linux/irq.h> +#include <linux/gpio.h> #include <asm/dma.h> #include <asm/bfin5xx_spi.h> #include <asm/portmux.h> diff --git a/arch/blackfin/mach-bf537/boards/tcm_bf537.c 
b/arch/blackfin/mach-bf537/boards/tcm_bf537.c index a0211225748d..6b988ad653d8 100644 --- a/arch/blackfin/mach-bf537/boards/tcm_bf537.c +++ b/arch/blackfin/mach-bf537/boards/tcm_bf537.c @@ -21,6 +21,7 @@ #endif #include <linux/ata_platform.h> #include <linux/irq.h> +#include <linux/gpio.h> #include <asm/dma.h> #include <asm/bfin5xx_spi.h> #include <asm/portmux.h> diff --git a/arch/blackfin/mach-bf548/boards/ezkit.c b/arch/blackfin/mach-bf548/boards/ezkit.c index 90138e6112c1..1fe7ff286619 100644 --- a/arch/blackfin/mach-bf548/boards/ezkit.c +++ b/arch/blackfin/mach-bf548/boards/ezkit.c @@ -2118,7 +2118,7 @@ static struct pinctrl_map __initdata bfin_pinmux_map[] = { PIN_MAP_MUX_GROUP_DEFAULT("bfin-rotary", "pinctrl-adi2.0", NULL, "rotary"), PIN_MAP_MUX_GROUP_DEFAULT("bfin_can.0", "pinctrl-adi2.0", NULL, "can0"), PIN_MAP_MUX_GROUP_DEFAULT("bfin_can.1", "pinctrl-adi2.0", NULL, "can1"), - PIN_MAP_MUX_GROUP_DEFAULT("bf54x-lq043", "pinctrl-adi2.0", NULL, "ppi0_24b"), + PIN_MAP_MUX_GROUP_DEFAULT("bf54x-lq043", "pinctrl-adi2.0", "ppi0_24bgrp", "ppi0"), PIN_MAP_MUX_GROUP_DEFAULT("bfin-i2s.0", "pinctrl-adi2.0", NULL, "sport0"), PIN_MAP_MUX_GROUP_DEFAULT("bfin-tdm.0", "pinctrl-adi2.0", NULL, "sport0"), PIN_MAP_MUX_GROUP_DEFAULT("bfin-ac97.0", "pinctrl-adi2.0", NULL, "sport0"), @@ -2140,7 +2140,9 @@ static struct pinctrl_map __initdata bfin_pinmux_map[] = { PIN_MAP_MUX_GROUP_DEFAULT("pata-bf54x", "pinctrl-adi2.0", NULL, "atapi_alter"), #endif PIN_MAP_MUX_GROUP_DEFAULT("bf5xx-nand.0", "pinctrl-adi2.0", NULL, "nfc0"), - PIN_MAP_MUX_GROUP_DEFAULT("bf54x-keys", "pinctrl-adi2.0", NULL, "keys_4x4"), + PIN_MAP_MUX_GROUP_DEFAULT("bf54x-keys", "pinctrl-adi2.0", "keys_4x4grp", "keys"), + PIN_MAP_MUX_GROUP("bf54x-keys", "4bit", "pinctrl-adi2.0", "keys_4x4grp", "keys"), + PIN_MAP_MUX_GROUP("bf54x-keys", "8bit", "pinctrl-adi2.0", "keys_8x8grp", "keys"), }; static int __init ezkit_init(void) diff --git a/arch/blackfin/mach-bf561/boards/acvilon.c b/arch/blackfin/mach-bf561/boards/acvilon.c index 430b16d5ccb1..6ab951534d79 100644 --- a/arch/blackfin/mach-bf561/boards/acvilon.c +++ b/arch/blackfin/mach-bf561/boards/acvilon.c @@ -44,6 +44,7 @@ #include <linux/spi/flash.h> #include <linux/irq.h> #include <linux/interrupt.h> +#include <linux/gpio.h> #include <linux/jiffies.h> #include <linux/i2c-pca-platform.h> #include <linux/delay.h> diff --git a/arch/blackfin/mach-bf561/boards/cm_bf561.c b/arch/blackfin/mach-bf561/boards/cm_bf561.c index 9f777df4cacc..e862f7823e68 100644 --- a/arch/blackfin/mach-bf561/boards/cm_bf561.c +++ b/arch/blackfin/mach-bf561/boards/cm_bf561.c @@ -18,6 +18,7 @@ #endif #include <linux/ata_platform.h> #include <linux/irq.h> +#include <linux/gpio.h> #include <asm/dma.h> #include <asm/bfin5xx_spi.h> #include <asm/portmux.h> diff --git a/arch/blackfin/mach-bf561/boards/ezkit.c b/arch/blackfin/mach-bf561/boards/ezkit.c index 88dee43e7abe..2de71e8c104b 100644 --- a/arch/blackfin/mach-bf561/boards/ezkit.c +++ b/arch/blackfin/mach-bf561/boards/ezkit.c @@ -14,6 +14,7 @@ #include <linux/spi/spi.h> #include <linux/irq.h> #include <linux/interrupt.h> +#include <linux/gpio.h> #include <linux/delay.h> #include <asm/dma.h> #include <asm/bfin5xx_spi.h> diff --git a/arch/blackfin/mach-bf609/boards/ezkit.c b/arch/blackfin/mach-bf609/boards/ezkit.c index 1ba4600de69f..e2c0b024ce88 100644 --- a/arch/blackfin/mach-bf609/boards/ezkit.c +++ b/arch/blackfin/mach-bf609/boards/ezkit.c @@ -698,8 +698,6 @@ int bf609_nor_flash_init(struct platform_device *pdev) { #define CONFIG_SMC_GCTL_VAL 0x00000010 - if 
(!devm_pinctrl_get_select_default(&pdev->dev)) - return -EBUSY; bfin_write32(SMC_GCTL, CONFIG_SMC_GCTL_VAL); bfin_write32(SMC_B0CTL, 0x01002011); bfin_write32(SMC_B0TIM, 0x08170977); @@ -709,7 +707,6 @@ int bf609_nor_flash_init(struct platform_device *pdev) void bf609_nor_flash_exit(struct platform_device *pdev) { - devm_pinctrl_put(pdev->dev.pins->p); bfin_write32(SMC_GCTL, 0); } @@ -2058,15 +2055,14 @@ static struct pinctrl_map __initdata bfin_pinmux_map[] = { PIN_MAP_MUX_GROUP_DEFAULT("bfin-rotary", "pinctrl-adi2.0", NULL, "rotary"), PIN_MAP_MUX_GROUP_DEFAULT("bfin_can.0", "pinctrl-adi2.0", NULL, "can0"), PIN_MAP_MUX_GROUP_DEFAULT("physmap-flash.0", "pinctrl-adi2.0", NULL, "smc0"), - PIN_MAP_MUX_GROUP_DEFAULT("bf609_nl8048.2", "pinctrl-adi2.0", NULL, "ppi2_16b"), - PIN_MAP_MUX_GROUP_DEFAULT("bfin_display.0", "pinctrl-adi2.0", NULL, "ppi0_16b"), -#if IS_ENABLED(CONFIG_VIDEO_MT9M114) - PIN_MAP_MUX_GROUP_DEFAULT("bfin_capture.0", "pinctrl-adi2.0", NULL, "ppi0_8b"), -#elif IS_ENABLED(CONFIG_VIDEO_VS6624) - PIN_MAP_MUX_GROUP_DEFAULT("bfin_capture.0", "pinctrl-adi2.0", NULL, "ppi0_16b"), -#else - PIN_MAP_MUX_GROUP_DEFAULT("bfin_capture.0", "pinctrl-adi2.0", NULL, "ppi0_24b"), -#endif + PIN_MAP_MUX_GROUP_DEFAULT("bf609_nl8048.2", "pinctrl-adi2.0", "ppi2_16bgrp", "ppi2"), + PIN_MAP_MUX_GROUP("bfin_display.0", "8bit", "pinctrl-adi2.0", "ppi2_8bgrp", "ppi2"), + PIN_MAP_MUX_GROUP_DEFAULT("bfin_display.0", "pinctrl-adi2.0", "ppi2_16bgrp", "ppi2"), + PIN_MAP_MUX_GROUP("bfin_display.0", "16bit", "pinctrl-adi2.0", "ppi2_16bgrp", "ppi2"), + PIN_MAP_MUX_GROUP("bfin_capture.0", "8bit", "pinctrl-adi2.0", "ppi0_8bgrp", "ppi0"), + PIN_MAP_MUX_GROUP_DEFAULT("bfin_capture.0", "pinctrl-adi2.0", "ppi0_16bgrp", "ppi0"), + PIN_MAP_MUX_GROUP("bfin_capture.0", "16bit", "pinctrl-adi2.0", "ppi0_16bgrp", "ppi0"), + PIN_MAP_MUX_GROUP("bfin_capture.0", "24bit", "pinctrl-adi2.0", "ppi0_24bgrp", "ppi0"), PIN_MAP_MUX_GROUP_DEFAULT("bfin-i2s.0", "pinctrl-adi2.0", NULL, "sport0"), PIN_MAP_MUX_GROUP_DEFAULT("bfin-tdm.0", "pinctrl-adi2.0", NULL, "sport0"), PIN_MAP_MUX_GROUP_DEFAULT("bfin-i2s.1", "pinctrl-adi2.0", NULL, "sport1"), diff --git a/arch/blackfin/mach-bf609/include/mach/pm.h b/arch/blackfin/mach-bf609/include/mach/pm.h index 3ca0fb965636..a1efd936dd30 100644 --- a/arch/blackfin/mach-bf609/include/mach/pm.h +++ b/arch/blackfin/mach-bf609/include/mach/pm.h @@ -10,6 +10,7 @@ #define __MACH_BF609_PM_H__ #include <linux/suspend.h> +#include <linux/platform_device.h> extern int bfin609_pm_enter(suspend_state_t state); extern int bf609_pm_prepare(void); @@ -19,6 +20,6 @@ void bf609_hibernate(void); void bfin_sec_raise_irq(unsigned int sid); void coreb_enable(void); -int bf609_nor_flash_init(void); -void bf609_nor_flash_exit(void); +int bf609_nor_flash_init(struct platform_device *pdev); +void bf609_nor_flash_exit(struct platform_device *pdev); #endif diff --git a/arch/blackfin/mach-bf609/pm.c b/arch/blackfin/mach-bf609/pm.c index 0cdd6955c7be..b1bfcf434d16 100644 --- a/arch/blackfin/mach-bf609/pm.c +++ b/arch/blackfin/mach-bf609/pm.c @@ -291,13 +291,13 @@ static struct bfin_cpu_pm_fns bf609_cpu_pm = { #if defined(CONFIG_MTD_PHYSMAP) || defined(CONFIG_MTD_PHYSMAP_MODULE) static int smc_pm_syscore_suspend(void) { - bf609_nor_flash_exit(); + bf609_nor_flash_exit(NULL); return 0; } static void smc_pm_syscore_resume(void) { - bf609_nor_flash_init(); + bf609_nor_flash_init(NULL); } static struct syscore_ops smc_pm_syscore_ops = { diff --git a/arch/blackfin/mach-common/ints-priority.c b/arch/blackfin/mach-common/ints-priority.c 
index 867b7cef204c..1f94784eab6d 100644 --- a/arch/blackfin/mach-common/ints-priority.c +++ b/arch/blackfin/mach-common/ints-priority.c @@ -1208,8 +1208,6 @@ int __init init_arch_irq(void) bfin_sec_set_priority(CONFIG_SEC_IRQ_PRIORITY_LEVELS, sec_int_priority); - bfin_sec_set_priority(CONFIG_SEC_IRQ_PRIORITY_LEVELS, sec_int_priority); - /* Enable interrupts IVG7-15 */ bfin_irq_flags |= IMASK_IVG15 | IMASK_IVG14 | IMASK_IVG13 | IMASK_IVG12 | IMASK_IVG11 | diff --git a/arch/ia64/include/uapi/asm/fcntl.h b/arch/ia64/include/uapi/asm/fcntl.h index 1dd275dc8f65..7b485876cad4 100644 --- a/arch/ia64/include/uapi/asm/fcntl.h +++ b/arch/ia64/include/uapi/asm/fcntl.h @@ -8,6 +8,7 @@ #define force_o_largefile() \ (personality(current->personality) != PER_LINUX32) +#include <linux/personality.h> #include <asm-generic/fcntl.h> #endif /* _ASM_IA64_FCNTL_H */ diff --git a/arch/m68k/kernel/head.S b/arch/m68k/kernel/head.S index dbb118e1a4e0..a54788458ca3 100644 --- a/arch/m68k/kernel/head.S +++ b/arch/m68k/kernel/head.S @@ -921,7 +921,8 @@ L(nocon): jls 1f lsrl #1,%d1 1: - movel %d1,m68k_init_mapped_size + lea %pc@(m68k_init_mapped_size),%a0 + movel %d1,%a0@ mmu_map #PAGE_OFFSET,%pc@(L(phys_kernel_start)),%d1,\ %pc@(m68k_supervisor_cachemode) diff --git a/arch/m68k/kernel/time.c b/arch/m68k/kernel/time.c index 958f1adb9d0c..3857737e3958 100644 --- a/arch/m68k/kernel/time.c +++ b/arch/m68k/kernel/time.c @@ -11,6 +11,7 @@ */ #include <linux/errno.h> +#include <linux/export.h> #include <linux/module.h> #include <linux/sched.h> #include <linux/kernel.h> @@ -30,6 +31,7 @@ unsigned long (*mach_random_get_entropy)(void); +EXPORT_SYMBOL_GPL(mach_random_get_entropy); /* diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig index 7a469acee33c..4e238e6e661c 100644 --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -269,6 +269,7 @@ config LANTIQ config LASAT bool "LASAT Networks platforms" select CEVT_R4K + select CRC32 select CSRC_R4K select DMA_NONCOHERENT select SYS_HAS_EARLY_PRINTK diff --git a/arch/mips/include/asm/sigcontext.h b/arch/mips/include/asm/sigcontext.h index f54bdbe85c0d..eeeb0f48c767 100644 --- a/arch/mips/include/asm/sigcontext.h +++ b/arch/mips/include/asm/sigcontext.h @@ -32,8 +32,6 @@ struct sigcontext32 { __u32 sc_lo2; __u32 sc_hi3; __u32 sc_lo3; - __u64 sc_msaregs[32]; /* Most significant 64 bits */ - __u32 sc_msa_csr; }; #endif /* _MIPS_SIM == _MIPS_SIM_ABI64 || _MIPS_SIM == _MIPS_SIM_NABI32 */ #endif /* _ASM_SIGCONTEXT_H */ diff --git a/arch/mips/include/asm/uasm.h b/arch/mips/include/asm/uasm.h index f8d63b3b40b4..708c5d414905 100644 --- a/arch/mips/include/asm/uasm.h +++ b/arch/mips/include/asm/uasm.h @@ -67,6 +67,9 @@ void ISAOPC(op)(u32 **buf, unsigned int a, unsigned int b, signed int c) #define Ip_u2s3u1(op) \ void ISAOPC(op)(u32 **buf, unsigned int a, signed int b, unsigned int c) +#define Ip_s3s1s2(op) \ +void ISAOPC(op)(u32 **buf, int a, int b, int c) + #define Ip_u2u1s3(op) \ void ISAOPC(op)(u32 **buf, unsigned int a, unsigned int b, signed int c) @@ -147,6 +150,7 @@ Ip_u2s3u1(_scd); Ip_u2s3u1(_sd); Ip_u2u1u3(_sll); Ip_u3u2u1(_sllv); +Ip_s3s1s2(_slt); Ip_u2u1s3(_sltiu); Ip_u3u1u2(_sltu); Ip_u2u1u3(_sra); diff --git a/arch/mips/include/uapi/asm/inst.h b/arch/mips/include/uapi/asm/inst.h index 4b7160259292..4bfdb9d4c186 100644 --- a/arch/mips/include/uapi/asm/inst.h +++ b/arch/mips/include/uapi/asm/inst.h @@ -273,6 +273,7 @@ enum mm_32a_minor_op { mm_and_op = 0x250, mm_or32_op = 0x290, mm_xor32_op = 0x310, + mm_slt_op = 0x350, mm_sltu_op = 0x390, }; diff --git 
a/arch/mips/include/uapi/asm/sigcontext.h b/arch/mips/include/uapi/asm/sigcontext.h index 681c17603a48..6c9906f59c6e 100644 --- a/arch/mips/include/uapi/asm/sigcontext.h +++ b/arch/mips/include/uapi/asm/sigcontext.h @@ -12,10 +12,6 @@ #include <linux/types.h> #include <asm/sgidefs.h> -/* Bits which may be set in sc_used_math */ -#define USEDMATH_FP (1 << 0) -#define USEDMATH_MSA (1 << 1) - #if _MIPS_SIM == _MIPS_SIM_ABI32 /* @@ -41,8 +37,6 @@ struct sigcontext { unsigned long sc_lo2; unsigned long sc_hi3; unsigned long sc_lo3; - unsigned long long sc_msaregs[32]; /* Most significant 64 bits */ - unsigned long sc_msa_csr; }; #endif /* _MIPS_SIM == _MIPS_SIM_ABI32 */ @@ -76,8 +70,6 @@ struct sigcontext { __u32 sc_used_math; __u32 sc_dsp; __u32 sc_reserved; - __u64 sc_msaregs[32]; - __u32 sc_msa_csr; }; diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c index 02f075df8f2e..4bb5107511e2 100644 --- a/arch/mips/kernel/asm-offsets.c +++ b/arch/mips/kernel/asm-offsets.c @@ -293,7 +293,6 @@ void output_sc_defines(void) OFFSET(SC_LO2, sigcontext, sc_lo2); OFFSET(SC_HI3, sigcontext, sc_hi3); OFFSET(SC_LO3, sigcontext, sc_lo3); - OFFSET(SC_MSAREGS, sigcontext, sc_msaregs); BLANK(); } #endif @@ -308,7 +307,6 @@ void output_sc_defines(void) OFFSET(SC_MDLO, sigcontext, sc_mdlo); OFFSET(SC_PC, sigcontext, sc_pc); OFFSET(SC_FPC_CSR, sigcontext, sc_fpc_csr); - OFFSET(SC_MSAREGS, sigcontext, sc_msaregs); BLANK(); } #endif @@ -320,7 +318,6 @@ void output_sc32_defines(void) OFFSET(SC32_FPREGS, sigcontext32, sc_fpregs); OFFSET(SC32_FPC_CSR, sigcontext32, sc_fpc_csr); OFFSET(SC32_FPC_EIR, sigcontext32, sc_fpc_eir); - OFFSET(SC32_MSAREGS, sigcontext32, sc_msaregs); BLANK(); } #endif diff --git a/arch/mips/kernel/irq-msc01.c b/arch/mips/kernel/irq-msc01.c index 4858642d543d..a734b2c2f9ea 100644 --- a/arch/mips/kernel/irq-msc01.c +++ b/arch/mips/kernel/irq-msc01.c @@ -126,7 +126,7 @@ void __init init_msc_irqs(unsigned long icubase, unsigned int irqbase, msc_irqma board_bind_eic_interrupt = &msc_bind_eic_interrupt; - for (; nirq >= 0; nirq--, imp++) { + for (; nirq > 0; nirq--, imp++) { int n = imp->im_irq; switch (imp->im_type) { diff --git a/arch/mips/kernel/pm-cps.c b/arch/mips/kernel/pm-cps.c index 5aa4c6f8cf83..c4c2069d3a20 100644 --- a/arch/mips/kernel/pm-cps.c +++ b/arch/mips/kernel/pm-cps.c @@ -101,7 +101,7 @@ static void coupled_barrier(atomic_t *a, unsigned online) if (!coupled_coherence) return; - smp_mb__before_atomic_inc(); + smp_mb__before_atomic(); atomic_inc(a); while (atomic_read(a) < online) @@ -158,7 +158,7 @@ int cps_pm_enter_state(enum cps_pm_state state) /* Indicate that this CPU might not be coherent */ cpumask_clear_cpu(cpu, &cpu_coherent_mask); - smp_mb__after_clear_bit(); + smp_mb__after_atomic(); /* Create a non-coherent mapping of the core ready_count */ core_ready_count = per_cpu(ready_count, core); diff --git a/arch/mips/kernel/r4k_fpu.S b/arch/mips/kernel/r4k_fpu.S index 71814272d148..8352523568e6 100644 --- a/arch/mips/kernel/r4k_fpu.S +++ b/arch/mips/kernel/r4k_fpu.S @@ -13,7 +13,6 @@ * Copyright (C) 1999, 2001 Silicon Graphics, Inc. 
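The irq-msc01.c hunk above changes the loop condition from "nirq >= 0" to "nirq > 0". With nirq holding a count of interrupts, the old test ran the body one extra time and walked one element past the imp[] array it indexes. A stand-alone illustration of the off-by-one (not MIPS code):

#include <stdio.h>

int main(void)
{
	int count = 3, with_ge = 0, with_gt = 0;

	for (int n = count; n >= 0; n--)
		with_ge++;		/* runs count + 1 = 4 times */
	for (int n = count; n > 0; n--)
		with_gt++;		/* runs count = 3 times */

	printf("%d %d\n", with_ge, with_gt);	/* prints "4 3" */
	return 0;
}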
*/ #include <asm/asm.h> -#include <asm/asmmacro.h> #include <asm/errno.h> #include <asm/fpregdef.h> #include <asm/mipsregs.h> @@ -246,218 +245,6 @@ LEAF(_restore_fp_context32) END(_restore_fp_context32) #endif -#ifdef CONFIG_CPU_HAS_MSA - - .macro save_sc_msareg wr, off, sc, tmp -#ifdef CONFIG_64BIT - copy_u_d \tmp, \wr, 1 - EX sd \tmp, (\off+(\wr*8))(\sc) -#elif defined(CONFIG_CPU_LITTLE_ENDIAN) - copy_u_w \tmp, \wr, 2 - EX sw \tmp, (\off+(\wr*8)+0)(\sc) - copy_u_w \tmp, \wr, 3 - EX sw \tmp, (\off+(\wr*8)+4)(\sc) -#else /* CONFIG_CPU_BIG_ENDIAN */ - copy_u_w \tmp, \wr, 2 - EX sw \tmp, (\off+(\wr*8)+4)(\sc) - copy_u_w \tmp, \wr, 3 - EX sw \tmp, (\off+(\wr*8)+0)(\sc) -#endif - .endm - -/* - * int _save_msa_context(struct sigcontext *sc) - * - * Save the upper 64 bits of each vector register along with the MSA_CSR - * register into sc. Returns zero on success, else non-zero. - */ -LEAF(_save_msa_context) - save_sc_msareg 0, SC_MSAREGS, a0, t0 - save_sc_msareg 1, SC_MSAREGS, a0, t0 - save_sc_msareg 2, SC_MSAREGS, a0, t0 - save_sc_msareg 3, SC_MSAREGS, a0, t0 - save_sc_msareg 4, SC_MSAREGS, a0, t0 - save_sc_msareg 5, SC_MSAREGS, a0, t0 - save_sc_msareg 6, SC_MSAREGS, a0, t0 - save_sc_msareg 7, SC_MSAREGS, a0, t0 - save_sc_msareg 8, SC_MSAREGS, a0, t0 - save_sc_msareg 9, SC_MSAREGS, a0, t0 - save_sc_msareg 10, SC_MSAREGS, a0, t0 - save_sc_msareg 11, SC_MSAREGS, a0, t0 - save_sc_msareg 12, SC_MSAREGS, a0, t0 - save_sc_msareg 13, SC_MSAREGS, a0, t0 - save_sc_msareg 14, SC_MSAREGS, a0, t0 - save_sc_msareg 15, SC_MSAREGS, a0, t0 - save_sc_msareg 16, SC_MSAREGS, a0, t0 - save_sc_msareg 17, SC_MSAREGS, a0, t0 - save_sc_msareg 18, SC_MSAREGS, a0, t0 - save_sc_msareg 19, SC_MSAREGS, a0, t0 - save_sc_msareg 20, SC_MSAREGS, a0, t0 - save_sc_msareg 21, SC_MSAREGS, a0, t0 - save_sc_msareg 22, SC_MSAREGS, a0, t0 - save_sc_msareg 23, SC_MSAREGS, a0, t0 - save_sc_msareg 24, SC_MSAREGS, a0, t0 - save_sc_msareg 25, SC_MSAREGS, a0, t0 - save_sc_msareg 26, SC_MSAREGS, a0, t0 - save_sc_msareg 27, SC_MSAREGS, a0, t0 - save_sc_msareg 28, SC_MSAREGS, a0, t0 - save_sc_msareg 29, SC_MSAREGS, a0, t0 - save_sc_msareg 30, SC_MSAREGS, a0, t0 - save_sc_msareg 31, SC_MSAREGS, a0, t0 - jr ra - li v0, 0 - END(_save_msa_context) - -#ifdef CONFIG_MIPS32_COMPAT - -/* - * int _save_msa_context32(struct sigcontext32 *sc) - * - * Save the upper 64 bits of each vector register along with the MSA_CSR - * register into sc. Returns zero on success, else non-zero. 
- */ -LEAF(_save_msa_context32) - save_sc_msareg 0, SC32_MSAREGS, a0, t0 - save_sc_msareg 1, SC32_MSAREGS, a0, t0 - save_sc_msareg 2, SC32_MSAREGS, a0, t0 - save_sc_msareg 3, SC32_MSAREGS, a0, t0 - save_sc_msareg 4, SC32_MSAREGS, a0, t0 - save_sc_msareg 5, SC32_MSAREGS, a0, t0 - save_sc_msareg 6, SC32_MSAREGS, a0, t0 - save_sc_msareg 7, SC32_MSAREGS, a0, t0 - save_sc_msareg 8, SC32_MSAREGS, a0, t0 - save_sc_msareg 9, SC32_MSAREGS, a0, t0 - save_sc_msareg 10, SC32_MSAREGS, a0, t0 - save_sc_msareg 11, SC32_MSAREGS, a0, t0 - save_sc_msareg 12, SC32_MSAREGS, a0, t0 - save_sc_msareg 13, SC32_MSAREGS, a0, t0 - save_sc_msareg 14, SC32_MSAREGS, a0, t0 - save_sc_msareg 15, SC32_MSAREGS, a0, t0 - save_sc_msareg 16, SC32_MSAREGS, a0, t0 - save_sc_msareg 17, SC32_MSAREGS, a0, t0 - save_sc_msareg 18, SC32_MSAREGS, a0, t0 - save_sc_msareg 19, SC32_MSAREGS, a0, t0 - save_sc_msareg 20, SC32_MSAREGS, a0, t0 - save_sc_msareg 21, SC32_MSAREGS, a0, t0 - save_sc_msareg 22, SC32_MSAREGS, a0, t0 - save_sc_msareg 23, SC32_MSAREGS, a0, t0 - save_sc_msareg 24, SC32_MSAREGS, a0, t0 - save_sc_msareg 25, SC32_MSAREGS, a0, t0 - save_sc_msareg 26, SC32_MSAREGS, a0, t0 - save_sc_msareg 27, SC32_MSAREGS, a0, t0 - save_sc_msareg 28, SC32_MSAREGS, a0, t0 - save_sc_msareg 29, SC32_MSAREGS, a0, t0 - save_sc_msareg 30, SC32_MSAREGS, a0, t0 - save_sc_msareg 31, SC32_MSAREGS, a0, t0 - jr ra - li v0, 0 - END(_save_msa_context32) - -#endif /* CONFIG_MIPS32_COMPAT */ - - .macro restore_sc_msareg wr, off, sc, tmp -#ifdef CONFIG_64BIT - EX ld \tmp, (\off+(\wr*8))(\sc) - insert_d \wr, 1, \tmp -#elif defined(CONFIG_CPU_LITTLE_ENDIAN) - EX lw \tmp, (\off+(\wr*8)+0)(\sc) - insert_w \wr, 2, \tmp - EX lw \tmp, (\off+(\wr*8)+4)(\sc) - insert_w \wr, 3, \tmp -#else /* CONFIG_CPU_BIG_ENDIAN */ - EX lw \tmp, (\off+(\wr*8)+4)(\sc) - insert_w \wr, 2, \tmp - EX lw \tmp, (\off+(\wr*8)+0)(\sc) - insert_w \wr, 3, \tmp -#endif - .endm - -/* - * int _restore_msa_context(struct sigcontext *sc) - */ -LEAF(_restore_msa_context) - restore_sc_msareg 0, SC_MSAREGS, a0, t0 - restore_sc_msareg 1, SC_MSAREGS, a0, t0 - restore_sc_msareg 2, SC_MSAREGS, a0, t0 - restore_sc_msareg 3, SC_MSAREGS, a0, t0 - restore_sc_msareg 4, SC_MSAREGS, a0, t0 - restore_sc_msareg 5, SC_MSAREGS, a0, t0 - restore_sc_msareg 6, SC_MSAREGS, a0, t0 - restore_sc_msareg 7, SC_MSAREGS, a0, t0 - restore_sc_msareg 8, SC_MSAREGS, a0, t0 - restore_sc_msareg 9, SC_MSAREGS, a0, t0 - restore_sc_msareg 10, SC_MSAREGS, a0, t0 - restore_sc_msareg 11, SC_MSAREGS, a0, t0 - restore_sc_msareg 12, SC_MSAREGS, a0, t0 - restore_sc_msareg 13, SC_MSAREGS, a0, t0 - restore_sc_msareg 14, SC_MSAREGS, a0, t0 - restore_sc_msareg 15, SC_MSAREGS, a0, t0 - restore_sc_msareg 16, SC_MSAREGS, a0, t0 - restore_sc_msareg 17, SC_MSAREGS, a0, t0 - restore_sc_msareg 18, SC_MSAREGS, a0, t0 - restore_sc_msareg 19, SC_MSAREGS, a0, t0 - restore_sc_msareg 20, SC_MSAREGS, a0, t0 - restore_sc_msareg 21, SC_MSAREGS, a0, t0 - restore_sc_msareg 22, SC_MSAREGS, a0, t0 - restore_sc_msareg 23, SC_MSAREGS, a0, t0 - restore_sc_msareg 24, SC_MSAREGS, a0, t0 - restore_sc_msareg 25, SC_MSAREGS, a0, t0 - restore_sc_msareg 26, SC_MSAREGS, a0, t0 - restore_sc_msareg 27, SC_MSAREGS, a0, t0 - restore_sc_msareg 28, SC_MSAREGS, a0, t0 - restore_sc_msareg 29, SC_MSAREGS, a0, t0 - restore_sc_msareg 30, SC_MSAREGS, a0, t0 - restore_sc_msareg 31, SC_MSAREGS, a0, t0 - jr ra - li v0, 0 - END(_restore_msa_context) - -#ifdef CONFIG_MIPS32_COMPAT - -/* - * int _restore_msa_context32(struct sigcontext32 *sc) - */ -LEAF(_restore_msa_context32) - 
restore_sc_msareg 0, SC32_MSAREGS, a0, t0 - restore_sc_msareg 1, SC32_MSAREGS, a0, t0 - restore_sc_msareg 2, SC32_MSAREGS, a0, t0 - restore_sc_msareg 3, SC32_MSAREGS, a0, t0 - restore_sc_msareg 4, SC32_MSAREGS, a0, t0 - restore_sc_msareg 5, SC32_MSAREGS, a0, t0 - restore_sc_msareg 6, SC32_MSAREGS, a0, t0 - restore_sc_msareg 7, SC32_MSAREGS, a0, t0 - restore_sc_msareg 8, SC32_MSAREGS, a0, t0 - restore_sc_msareg 9, SC32_MSAREGS, a0, t0 - restore_sc_msareg 10, SC32_MSAREGS, a0, t0 - restore_sc_msareg 11, SC32_MSAREGS, a0, t0 - restore_sc_msareg 12, SC32_MSAREGS, a0, t0 - restore_sc_msareg 13, SC32_MSAREGS, a0, t0 - restore_sc_msareg 14, SC32_MSAREGS, a0, t0 - restore_sc_msareg 15, SC32_MSAREGS, a0, t0 - restore_sc_msareg 16, SC32_MSAREGS, a0, t0 - restore_sc_msareg 17, SC32_MSAREGS, a0, t0 - restore_sc_msareg 18, SC32_MSAREGS, a0, t0 - restore_sc_msareg 19, SC32_MSAREGS, a0, t0 - restore_sc_msareg 20, SC32_MSAREGS, a0, t0 - restore_sc_msareg 21, SC32_MSAREGS, a0, t0 - restore_sc_msareg 22, SC32_MSAREGS, a0, t0 - restore_sc_msareg 23, SC32_MSAREGS, a0, t0 - restore_sc_msareg 24, SC32_MSAREGS, a0, t0 - restore_sc_msareg 25, SC32_MSAREGS, a0, t0 - restore_sc_msareg 26, SC32_MSAREGS, a0, t0 - restore_sc_msareg 27, SC32_MSAREGS, a0, t0 - restore_sc_msareg 28, SC32_MSAREGS, a0, t0 - restore_sc_msareg 29, SC32_MSAREGS, a0, t0 - restore_sc_msareg 30, SC32_MSAREGS, a0, t0 - restore_sc_msareg 31, SC32_MSAREGS, a0, t0 - jr ra - li v0, 0 - END(_restore_msa_context32) - -#endif /* CONFIG_MIPS32_COMPAT */ - -#endif /* CONFIG_CPU_HAS_MSA */ - .set reorder .type fault@function diff --git a/arch/mips/kernel/signal.c b/arch/mips/kernel/signal.c index 33133d3df3e5..9e60d117e41e 100644 --- a/arch/mips/kernel/signal.c +++ b/arch/mips/kernel/signal.c @@ -31,7 +31,6 @@ #include <linux/bitops.h> #include <asm/cacheflush.h> #include <asm/fpu.h> -#include <asm/msa.h> #include <asm/sim.h> #include <asm/ucontext.h> #include <asm/cpu-features.h> @@ -48,9 +47,6 @@ static int (*restore_fp_context)(struct sigcontext __user *sc); extern asmlinkage int _save_fp_context(struct sigcontext __user *sc); extern asmlinkage int _restore_fp_context(struct sigcontext __user *sc); -extern asmlinkage int _save_msa_context(struct sigcontext __user *sc); -extern asmlinkage int _restore_msa_context(struct sigcontext __user *sc); - struct sigframe { u32 sf_ass[4]; /* argument save space for o32 */ u32 sf_pad[2]; /* Was: signal trampoline */ @@ -100,60 +96,20 @@ static int copy_fp_from_sigcontext(struct sigcontext __user *sc) } /* - * These functions will save only the upper 64 bits of the vector registers, - * since the lower 64 bits have already been saved as the scalar FP context. 
- */ -static int copy_msa_to_sigcontext(struct sigcontext __user *sc) -{ - int i; - int err = 0; - - for (i = 0; i < NUM_FPU_REGS; i++) { - err |= - __put_user(get_fpr64(¤t->thread.fpu.fpr[i], 1), - &sc->sc_msaregs[i]); - } - err |= __put_user(current->thread.fpu.msacsr, &sc->sc_msa_csr); - - return err; -} - -static int copy_msa_from_sigcontext(struct sigcontext __user *sc) -{ - int i; - int err = 0; - u64 val; - - for (i = 0; i < NUM_FPU_REGS; i++) { - err |= __get_user(val, &sc->sc_msaregs[i]); - set_fpr64(¤t->thread.fpu.fpr[i], 1, val); - } - err |= __get_user(current->thread.fpu.msacsr, &sc->sc_msa_csr); - - return err; -} - -/* * Helper routines */ -static int protected_save_fp_context(struct sigcontext __user *sc, - unsigned used_math) +static int protected_save_fp_context(struct sigcontext __user *sc) { int err; - bool save_msa = cpu_has_msa && (used_math & USEDMATH_MSA); #ifndef CONFIG_EVA while (1) { lock_fpu_owner(); if (is_fpu_owner()) { err = save_fp_context(sc); - if (save_msa && !err) - err = _save_msa_context(sc); unlock_fpu_owner(); } else { unlock_fpu_owner(); err = copy_fp_to_sigcontext(sc); - if (save_msa && !err) - err = copy_msa_to_sigcontext(sc); } if (likely(!err)) break; @@ -169,38 +125,24 @@ static int protected_save_fp_context(struct sigcontext __user *sc, * EVA does not have FPU EVA instructions so saving fpu context directly * does not work. */ - disable_msa(); lose_fpu(1); err = save_fp_context(sc); /* this might fail */ - if (save_msa && !err) - err = copy_msa_to_sigcontext(sc); #endif return err; } -static int protected_restore_fp_context(struct sigcontext __user *sc, - unsigned used_math) +static int protected_restore_fp_context(struct sigcontext __user *sc) { int err, tmp __maybe_unused; - bool restore_msa = cpu_has_msa && (used_math & USEDMATH_MSA); #ifndef CONFIG_EVA while (1) { lock_fpu_owner(); if (is_fpu_owner()) { err = restore_fp_context(sc); - if (restore_msa && !err) { - enable_msa(); - err = _restore_msa_context(sc); - } else { - /* signal handler may have used MSA */ - disable_msa(); - } unlock_fpu_owner(); } else { unlock_fpu_owner(); err = copy_fp_from_sigcontext(sc); - if (!err && (used_math & USEDMATH_MSA)) - err = copy_msa_from_sigcontext(sc); } if (likely(!err)) break; @@ -216,11 +158,8 @@ static int protected_restore_fp_context(struct sigcontext __user *sc, * EVA does not have FPU EVA instructions so restoring fpu context * directly does not work. */ - enable_msa(); lose_fpu(0); err = restore_fp_context(sc); /* this might fail */ - if (restore_msa && !err) - err = copy_msa_from_sigcontext(sc); #endif return err; } @@ -252,8 +191,7 @@ int setup_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc) err |= __put_user(rddsp(DSP_MASK), &sc->sc_dsp); } - used_math = used_math() ? USEDMATH_FP : 0; - used_math |= thread_msa_context_live() ? USEDMATH_MSA : 0; + used_math = !!used_math(); err |= __put_user(used_math, &sc->sc_used_math); if (used_math) { @@ -261,7 +199,7 @@ int setup_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc) * Save FPU state to signal context. Signal handler * will "inherit" current FPU state. 
*/ - err |= protected_save_fp_context(sc, used_math); + err |= protected_save_fp_context(sc); } return err; } @@ -286,14 +224,14 @@ int fpcsr_pending(unsigned int __user *fpcsr) } static int -check_and_restore_fp_context(struct sigcontext __user *sc, unsigned used_math) +check_and_restore_fp_context(struct sigcontext __user *sc) { int err, sig; err = sig = fpcsr_pending(&sc->sc_fpc_csr); if (err > 0) err = 0; - err |= protected_restore_fp_context(sc, used_math); + err |= protected_restore_fp_context(sc); return err ?: sig; } @@ -333,10 +271,9 @@ int restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc) if (used_math) { /* restore fpu context if we have used it before */ if (!err) - err = check_and_restore_fp_context(sc, used_math); + err = check_and_restore_fp_context(sc); } else { - /* signal handler may have used FPU or MSA. Disable them. */ - disable_msa(); + /* signal handler may have used FPU. Give it up. */ lose_fpu(0); } diff --git a/arch/mips/kernel/signal32.c b/arch/mips/kernel/signal32.c index 299f956e4db3..bae2e6ee2109 100644 --- a/arch/mips/kernel/signal32.c +++ b/arch/mips/kernel/signal32.c @@ -30,7 +30,6 @@ #include <asm/sim.h> #include <asm/ucontext.h> #include <asm/fpu.h> -#include <asm/msa.h> #include <asm/war.h> #include <asm/vdso.h> #include <asm/dsp.h> @@ -43,9 +42,6 @@ static int (*restore_fp_context32)(struct sigcontext32 __user *sc); extern asmlinkage int _save_fp_context32(struct sigcontext32 __user *sc); extern asmlinkage int _restore_fp_context32(struct sigcontext32 __user *sc); -extern asmlinkage int _save_msa_context32(struct sigcontext32 __user *sc); -extern asmlinkage int _restore_msa_context32(struct sigcontext32 __user *sc); - /* * Including <asm/unistd.h> would give use the 64-bit syscall numbers ... */ @@ -115,59 +111,19 @@ static int copy_fp_from_sigcontext32(struct sigcontext32 __user *sc) } /* - * These functions will save only the upper 64 bits of the vector registers, - * since the lower 64 bits have already been saved as the scalar FP context. 
- */ -static int copy_msa_to_sigcontext32(struct sigcontext32 __user *sc) -{ - int i; - int err = 0; - - for (i = 0; i < NUM_FPU_REGS; i++) { - err |= - __put_user(get_fpr64(¤t->thread.fpu.fpr[i], 1), - &sc->sc_msaregs[i]); - } - err |= __put_user(current->thread.fpu.msacsr, &sc->sc_msa_csr); - - return err; -} - -static int copy_msa_from_sigcontext32(struct sigcontext32 __user *sc) -{ - int i; - int err = 0; - u64 val; - - for (i = 0; i < NUM_FPU_REGS; i++) { - err |= __get_user(val, &sc->sc_msaregs[i]); - set_fpr64(¤t->thread.fpu.fpr[i], 1, val); - } - err |= __get_user(current->thread.fpu.msacsr, &sc->sc_msa_csr); - - return err; -} - -/* * sigcontext handlers */ -static int protected_save_fp_context32(struct sigcontext32 __user *sc, - unsigned used_math) +static int protected_save_fp_context32(struct sigcontext32 __user *sc) { int err; - bool save_msa = cpu_has_msa && (used_math & USEDMATH_MSA); while (1) { lock_fpu_owner(); if (is_fpu_owner()) { err = save_fp_context32(sc); - if (save_msa && !err) - err = _save_msa_context32(sc); unlock_fpu_owner(); } else { unlock_fpu_owner(); err = copy_fp_to_sigcontext32(sc); - if (save_msa && !err) - err = copy_msa_to_sigcontext32(sc); } if (likely(!err)) break; @@ -181,28 +137,17 @@ static int protected_save_fp_context32(struct sigcontext32 __user *sc, return err; } -static int protected_restore_fp_context32(struct sigcontext32 __user *sc, - unsigned used_math) +static int protected_restore_fp_context32(struct sigcontext32 __user *sc) { int err, tmp __maybe_unused; - bool restore_msa = cpu_has_msa && (used_math & USEDMATH_MSA); while (1) { lock_fpu_owner(); if (is_fpu_owner()) { err = restore_fp_context32(sc); - if (restore_msa && !err) { - enable_msa(); - err = _restore_msa_context32(sc); - } else { - /* signal handler may have used MSA */ - disable_msa(); - } unlock_fpu_owner(); } else { unlock_fpu_owner(); err = copy_fp_from_sigcontext32(sc); - if (restore_msa && !err) - err = copy_msa_from_sigcontext32(sc); } if (likely(!err)) break; @@ -241,8 +186,7 @@ static int setup_sigcontext32(struct pt_regs *regs, err |= __put_user(mflo3(), &sc->sc_lo3); } - used_math = used_math() ? USEDMATH_FP : 0; - used_math |= thread_msa_context_live() ? USEDMATH_MSA : 0; + used_math = !!used_math(); err |= __put_user(used_math, &sc->sc_used_math); if (used_math) { @@ -250,21 +194,20 @@ static int setup_sigcontext32(struct pt_regs *regs, * Save FPU state to signal context. Signal handler * will "inherit" current FPU state. */ - err |= protected_save_fp_context32(sc, used_math); + err |= protected_save_fp_context32(sc); } return err; } static int -check_and_restore_fp_context32(struct sigcontext32 __user *sc, - unsigned used_math) +check_and_restore_fp_context32(struct sigcontext32 __user *sc) { int err, sig; err = sig = fpcsr_pending(&sc->sc_fpc_csr); if (err > 0) err = 0; - err |= protected_restore_fp_context32(sc, used_math); + err |= protected_restore_fp_context32(sc); return err ?: sig; } @@ -301,10 +244,9 @@ static int restore_sigcontext32(struct pt_regs *regs, if (used_math) { /* restore fpu context if we have used it before */ if (!err) - err = check_and_restore_fp_context32(sc, used_math); + err = check_and_restore_fp_context32(sc); } else { - /* signal handler may have used FPU or MSA. Disable them. */ - disable_msa(); + /* signal handler may have used FPU. Give it up. 
*/ lose_fpu(0); } diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c index df0598d9bfdd..949f2c6827a0 100644 --- a/arch/mips/kernel/smp-cps.c +++ b/arch/mips/kernel/smp-cps.c @@ -301,7 +301,7 @@ static int cps_cpu_disable(void) core_cfg = &mips_cps_core_bootcfg[current_cpu_data.core]; atomic_sub(1 << cpu_vpe_id(¤t_cpu_data), &core_cfg->vpe_mask); - smp_mb__after_atomic_dec(); + smp_mb__after_atomic(); set_cpu_online(cpu, false); cpu_clear(cpu, cpu_callin_map); diff --git a/arch/mips/kvm/kvm_mips.c b/arch/mips/kvm/kvm_mips.c index cd5e4f568439..f3c56a182fd8 100644 --- a/arch/mips/kvm/kvm_mips.c +++ b/arch/mips/kvm/kvm_mips.c @@ -384,6 +384,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu) kfree(vcpu->arch.guest_ebase); kfree(vcpu->arch.kseg0_commpage); + kfree(vcpu); } void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) diff --git a/arch/mips/math-emu/ieee754.c b/arch/mips/math-emu/ieee754.c index 53f1d2287084..8e97acbbe22c 100644 --- a/arch/mips/math-emu/ieee754.c +++ b/arch/mips/math-emu/ieee754.c @@ -34,13 +34,22 @@ * Special constants */ -#define DPCNST(s, b, m) \ +/* + * Older GCC requires the inner braces for initialization of union ieee754dp's + * anonymous struct member. Without an error will result. + */ +#define xPCNST(s, b, m, ebias) \ { \ - .sign = (s), \ - .bexp = (b) + DP_EBIAS, \ - .mant = (m) \ + { \ + .sign = (s), \ + .bexp = (b) + ebias, \ + .mant = (m) \ + } \ } +#define DPCNST(s, b, m) \ + xPCNST(s, b, m, DP_EBIAS) + const union ieee754dp __ieee754dp_spcvals[] = { DPCNST(0, DP_EMIN - 1, 0x0000000000000ULL), /* + zero */ DPCNST(1, DP_EMIN - 1, 0x0000000000000ULL), /* - zero */ @@ -62,11 +71,7 @@ const union ieee754dp __ieee754dp_spcvals[] = { }; #define SPCNST(s, b, m) \ -{ \ - .sign = (s), \ - .bexp = (b) + SP_EBIAS, \ - .mant = (m) \ -} + xPCNST(s, b, m, SP_EBIAS) const union ieee754sp __ieee754sp_spcvals[] = { SPCNST(0, SP_EMIN - 1, 0x000000), /* + zero */ diff --git a/arch/mips/mm/uasm-micromips.c b/arch/mips/mm/uasm-micromips.c index 775c2800cba2..8399ddf03a02 100644 --- a/arch/mips/mm/uasm-micromips.c +++ b/arch/mips/mm/uasm-micromips.c @@ -102,6 +102,7 @@ static struct insn insn_table_MM[] = { { insn_sd, 0, 0 }, { insn_sll, M(mm_pool32a_op, 0, 0, 0, 0, mm_sll32_op), RT | RS | RD }, { insn_sllv, M(mm_pool32a_op, 0, 0, 0, 0, mm_sllv32_op), RT | RS | RD }, + { insn_slt, M(mm_pool32a_op, 0, 0, 0, 0, mm_slt_op), RT | RS | RD }, { insn_sltiu, M(mm_sltiu32_op, 0, 0, 0, 0, 0), RT | RS | SIMM }, { insn_sltu, M(mm_pool32a_op, 0, 0, 0, 0, mm_sltu_op), RT | RS | RD }, { insn_sra, M(mm_pool32a_op, 0, 0, 0, 0, mm_sra_op), RT | RS | RD }, diff --git a/arch/mips/mm/uasm-mips.c b/arch/mips/mm/uasm-mips.c index 38792c2364f5..6708a2dbf934 100644 --- a/arch/mips/mm/uasm-mips.c +++ b/arch/mips/mm/uasm-mips.c @@ -89,7 +89,7 @@ static struct insn insn_table[] = { { insn_lb, M(lb_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, { insn_ld, M(ld_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, { insn_ldx, M(spec3_op, 0, 0, 0, ldx_op, lx_op), RS | RT | RD }, - { insn_lh, M(lw_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, + { insn_lh, M(lh_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, { insn_lld, M(lld_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, { insn_ll, M(ll_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, { insn_lui, M(lui_op, 0, 0, 0, 0, 0), RT | SIMM }, @@ -110,6 +110,7 @@ static struct insn insn_table[] = { { insn_sd, M(sd_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, { insn_sll, M(spec_op, 0, 0, 0, 0, sll_op), RT | RD | RE }, { insn_sllv, M(spec_op, 0, 0, 0, 0, sllv_op), RS | RT | RD }, + { insn_slt, M(spec_op, 0, 0, 0, 0, 
slt_op), RS | RT | RD }, { insn_sltiu, M(sltiu_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, { insn_sltu, M(spec_op, 0, 0, 0, 0, sltu_op), RS | RT | RD }, { insn_sra, M(spec_op, 0, 0, 0, 0, sra_op), RT | RD | RE }, diff --git a/arch/mips/mm/uasm.c b/arch/mips/mm/uasm.c index 00515805fe41..a01b0d6cedd2 100644 --- a/arch/mips/mm/uasm.c +++ b/arch/mips/mm/uasm.c @@ -53,7 +53,7 @@ enum opcode { insn_ld, insn_ldx, insn_lh, insn_ll, insn_lld, insn_lui, insn_lw, insn_lwx, insn_mfc0, insn_mfhi, insn_mflo, insn_mtc0, insn_mul, insn_or, insn_ori, insn_pref, insn_rfe, insn_rotr, insn_sc, insn_scd, - insn_sd, insn_sll, insn_sllv, insn_sltiu, insn_sltu, insn_sra, + insn_sd, insn_sll, insn_sllv, insn_slt, insn_sltiu, insn_sltu, insn_sra, insn_srl, insn_srlv, insn_subu, insn_sw, insn_sync, insn_syscall, insn_tlbp, insn_tlbr, insn_tlbwi, insn_tlbwr, insn_wait, insn_wsbh, insn_xor, insn_xori, insn_yield, @@ -139,6 +139,13 @@ Ip_u1u2u3(op) \ } \ UASM_EXPORT_SYMBOL(uasm_i##op); +#define I_s3s1s2(op) \ +Ip_s3s1s2(op) \ +{ \ + build_insn(buf, insn##op, b, c, a); \ +} \ +UASM_EXPORT_SYMBOL(uasm_i##op); + #define I_u2u1u3(op) \ Ip_u2u1u3(op) \ { \ @@ -289,6 +296,7 @@ I_u2s3u1(_scd) I_u2s3u1(_sd) I_u2u1u3(_sll) I_u3u2u1(_sllv) +I_s3s1s2(_slt) I_u2u1s3(_sltiu) I_u3u1u2(_sltu) I_u2u1u3(_sra) diff --git a/arch/mips/net/bpf_jit.c b/arch/mips/net/bpf_jit.c index a67b9753330b..b87390a56a2f 100644 --- a/arch/mips/net/bpf_jit.c +++ b/arch/mips/net/bpf_jit.c @@ -119,8 +119,6 @@ /* Arguments used by JIT */ #define ARGS_USED_BY_JIT 2 /* only applicable to 64-bit */ -#define FLAG_NEED_X_RESET (1 << 0) - #define SBIT(x) (1 << (x)) /* Signed version of BIT() */ /** @@ -153,6 +151,8 @@ static inline int optimize_div(u32 *k) return 0; } +static inline void emit_jit_reg_move(ptr dst, ptr src, struct jit_ctx *ctx); + /* Simply emit the instruction if the JIT memory space has been allocated */ #define emit_instr(ctx, func, ...) 
\ do { \ @@ -166,9 +166,7 @@ do { \ /* Determine if immediate is within the 16-bit signed range */ static inline bool is_range16(s32 imm) { - if (imm >= SBIT(15) || imm < -SBIT(15)) - return true; - return false; + return !(imm >= SBIT(15) || imm < -SBIT(15)); } static inline void emit_addu(unsigned int dst, unsigned int src1, @@ -187,7 +185,7 @@ static inline void emit_load_imm(unsigned int dst, u32 imm, struct jit_ctx *ctx) { if (ctx->target != NULL) { /* addiu can only handle s16 */ - if (is_range16(imm)) { + if (!is_range16(imm)) { u32 *p = &ctx->target[ctx->idx]; uasm_i_lui(&p, r_tmp_imm, (s32)imm >> 16); p = &ctx->target[ctx->idx + 1]; @@ -199,7 +197,7 @@ static inline void emit_load_imm(unsigned int dst, u32 imm, struct jit_ctx *ctx) } ctx->idx++; - if (is_range16(imm)) + if (!is_range16(imm)) ctx->idx++; } @@ -240,7 +238,7 @@ static inline void emit_daddiu(unsigned int dst, unsigned int src, static inline void emit_addiu(unsigned int dst, unsigned int src, u32 imm, struct jit_ctx *ctx) { - if (is_range16(imm)) { + if (!is_range16(imm)) { emit_load_imm(r_tmp, imm, ctx); emit_addu(dst, r_tmp, src, ctx); } else { @@ -313,8 +311,11 @@ static inline void emit_sll(unsigned int dst, unsigned int src, unsigned int sa, struct jit_ctx *ctx) { /* sa is 5-bits long */ - BUG_ON(sa >= BIT(5)); - emit_instr(ctx, sll, dst, src, sa); + if (sa >= BIT(5)) + /* Shifting >= 32 results in zero */ + emit_jit_reg_move(dst, r_zero, ctx); + else + emit_instr(ctx, sll, dst, src, sa); } static inline void emit_srlv(unsigned int dst, unsigned int src, @@ -327,8 +328,17 @@ static inline void emit_srl(unsigned int dst, unsigned int src, unsigned int sa, struct jit_ctx *ctx) { /* sa is 5-bits long */ - BUG_ON(sa >= BIT(5)); - emit_instr(ctx, srl, dst, src, sa); + if (sa >= BIT(5)) + /* Shifting >= 32 results in zero */ + emit_jit_reg_move(dst, r_zero, ctx); + else + emit_instr(ctx, srl, dst, src, sa); +} + +static inline void emit_slt(unsigned int dst, unsigned int src1, + unsigned int src2, struct jit_ctx *ctx) +{ + emit_instr(ctx, slt, dst, src1, src2); } static inline void emit_sltu(unsigned int dst, unsigned int src1, @@ -341,7 +351,7 @@ static inline void emit_sltiu(unsigned dst, unsigned int src, unsigned int imm, struct jit_ctx *ctx) { /* 16 bit immediate */ - if (is_range16((s32)imm)) { + if (!is_range16((s32)imm)) { emit_load_imm(r_tmp, imm, ctx); emit_sltu(dst, src, r_tmp, ctx); } else { @@ -408,7 +418,7 @@ static inline void emit_div(unsigned int dst, unsigned int src, u32 *p = &ctx->target[ctx->idx]; uasm_i_divu(&p, dst, src); p = &ctx->target[ctx->idx + 1]; - uasm_i_mfhi(&p, dst); + uasm_i_mflo(&p, dst); } ctx->idx += 2; /* 2 insts */ } @@ -443,6 +453,17 @@ static inline void emit_wsbh(unsigned int dst, unsigned int src, emit_instr(ctx, wsbh, dst, src); } +/* load pointer to register */ +static inline void emit_load_ptr(unsigned int dst, unsigned int src, + int imm, struct jit_ctx *ctx) +{ + /* src contains the base addr of the 32/64-pointer */ + if (config_enabled(CONFIG_64BIT)) + emit_instr(ctx, ld, dst, imm, src); + else + emit_instr(ctx, lw, dst, imm, src); +} + /* load a function pointer to register */ static inline void emit_load_func(unsigned int reg, ptr imm, struct jit_ctx *ctx) @@ -545,29 +566,13 @@ static inline u16 align_sp(unsigned int num) return num; } -static inline void update_on_xread(struct jit_ctx *ctx) -{ - if (!(ctx->flags & SEEN_X)) - ctx->flags |= FLAG_NEED_X_RESET; - - ctx->flags |= SEEN_X; -} - static bool is_load_to_a(u16 inst) { switch (inst) { - case BPF_S_LD_W_LEN: - 
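As context for the is_range16() change above: after the fix it returns true when the immediate does fit the signed 16-bit field of addiu/sltiu, so every call site's test is inverted to match. Below is a standalone sketch of the two lowering paths emit_load_imm() then chooses between; it is illustrative only, not the JIT's code.

#include <stdbool.h>
#include <stdint.h>

/* True when imm fits a signed 16-bit immediate (a single addiu is enough). */
static bool fits_simm16(int32_t imm)
{
    return imm >= -(1 << 15) && imm < (1 << 15);
}

/* Otherwise the constant is rebuilt from a lui/ori pair; this checks that
 * the split used in the JIT reassembles the original 32-bit value. */
static uint32_t lui_ori(int32_t imm)
{
    uint32_t hi = ((uint32_t)imm >> 16) << 16;  /* lui  dst, imm >> 16         */
    uint32_t lo = (uint32_t)imm & 0xffff;       /* ori  dst, dst, imm & 0xffff */

    return hi | lo;                             /* equals (uint32_t)imm */
}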
case BPF_S_LD_W_ABS: - case BPF_S_LD_H_ABS: - case BPF_S_LD_B_ABS: - case BPF_S_ANC_CPU: - case BPF_S_ANC_IFINDEX: - case BPF_S_ANC_MARK: - case BPF_S_ANC_PROTOCOL: - case BPF_S_ANC_RXHASH: - case BPF_S_ANC_VLAN_TAG: - case BPF_S_ANC_VLAN_TAG_PRESENT: - case BPF_S_ANC_QUEUE: + case BPF_LD | BPF_W | BPF_LEN: + case BPF_LD | BPF_W | BPF_ABS: + case BPF_LD | BPF_H | BPF_ABS: + case BPF_LD | BPF_B | BPF_ABS: return true; default: return false; @@ -618,7 +623,10 @@ static void save_bpf_jit_regs(struct jit_ctx *ctx, unsigned offset) if (ctx->flags & SEEN_MEM) { if (real_off % (RSIZE * 2)) real_off += RSIZE; - emit_addiu(r_M, r_sp, real_off, ctx); + if (config_enabled(CONFIG_64BIT)) + emit_daddiu(r_M, r_sp, real_off, ctx); + else + emit_addiu(r_M, r_sp, real_off, ctx); } } @@ -705,11 +713,11 @@ static void build_prologue(struct jit_ctx *ctx) if (ctx->flags & SEEN_SKB) emit_reg_move(r_skb, MIPS_R_A0, ctx); - if (ctx->flags & FLAG_NEED_X_RESET) + if (ctx->flags & SEEN_X) emit_jit_reg_move(r_X, r_zero, ctx); /* Do not leak kernel data to userspace */ - if ((first_inst != BPF_S_RET_K) && !(is_load_to_a(first_inst))) + if ((first_inst != (BPF_RET | BPF_K)) && !(is_load_to_a(first_inst))) emit_jit_reg_move(r_A, r_zero, ctx); } @@ -757,13 +765,17 @@ static u64 jit_get_skb_w(struct sk_buff *skb, unsigned offset) return (u64)err << 32 | ntohl(ret); } -#define PKT_TYPE_MAX 7 +#ifdef __BIG_ENDIAN_BITFIELD +#define PKT_TYPE_MAX (7 << 5) +#else +#define PKT_TYPE_MAX 7 +#endif static int pkt_type_offset(void) { struct sk_buff skb_probe = { .pkt_type = ~0, }; - char *ct = (char *)&skb_probe; + u8 *ct = (u8 *)&skb_probe; unsigned int off; for (off = 0; off < sizeof(struct sk_buff); off++) { @@ -783,46 +795,62 @@ static int build_body(struct jit_ctx *ctx) u32 k, b_off __maybe_unused; for (i = 0; i < prog->len; i++) { + u16 code; + inst = &(prog->insns[i]); pr_debug("%s: code->0x%02x, jt->0x%x, jf->0x%x, k->0x%x\n", __func__, inst->code, inst->jt, inst->jf, inst->k); k = inst->k; + code = bpf_anc_helper(inst); if (ctx->target == NULL) ctx->offsets[i] = ctx->idx * 4; - switch (inst->code) { - case BPF_S_LD_IMM: + switch (code) { + case BPF_LD | BPF_IMM: /* A <- k ==> li r_A, k */ ctx->flags |= SEEN_A; emit_load_imm(r_A, k, ctx); break; - case BPF_S_LD_W_LEN: + case BPF_LD | BPF_W | BPF_LEN: BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, len) != 4); /* A <- len ==> lw r_A, offset(skb) */ ctx->flags |= SEEN_SKB | SEEN_A; off = offsetof(struct sk_buff, len); emit_load(r_A, r_skb, off, ctx); break; - case BPF_S_LD_MEM: + case BPF_LD | BPF_MEM: /* A <- M[k] ==> lw r_A, offset(M) */ ctx->flags |= SEEN_MEM | SEEN_A; emit_load(r_A, r_M, SCRATCH_OFF(k), ctx); break; - case BPF_S_LD_W_ABS: + case BPF_LD | BPF_W | BPF_ABS: /* A <- P[k:4] */ load_order = 2; goto load; - case BPF_S_LD_H_ABS: + case BPF_LD | BPF_H | BPF_ABS: /* A <- P[k:2] */ load_order = 1; goto load; - case BPF_S_LD_B_ABS: + case BPF_LD | BPF_B | BPF_ABS: /* A <- P[k:1] */ load_order = 0; load: + /* the interpreter will deal with the negative K */ + if ((int)k < 0) + return -ENOTSUPP; + emit_load_imm(r_off, k, ctx); load_common: + /* + * We may got here from the indirect loads so + * return if offset is negative. 
+ */ + emit_slt(r_s0, r_off, r_zero, ctx); + emit_bcond(MIPS_COND_NE, r_s0, r_zero, + b_imm(prog->len, ctx), ctx); + emit_reg_move(r_ret, r_zero, ctx); + ctx->flags |= SEEN_CALL | SEEN_OFF | SEEN_S0 | SEEN_SKB | SEEN_A; @@ -852,39 +880,42 @@ load_common: emit_b(b_imm(prog->len, ctx), ctx); emit_reg_move(r_ret, r_zero, ctx); break; - case BPF_S_LD_W_IND: + case BPF_LD | BPF_W | BPF_IND: /* A <- P[X + k:4] */ load_order = 2; goto load_ind; - case BPF_S_LD_H_IND: + case BPF_LD | BPF_H | BPF_IND: /* A <- P[X + k:2] */ load_order = 1; goto load_ind; - case BPF_S_LD_B_IND: + case BPF_LD | BPF_B | BPF_IND: /* A <- P[X + k:1] */ load_order = 0; load_ind: - update_on_xread(ctx); ctx->flags |= SEEN_OFF | SEEN_X; emit_addiu(r_off, r_X, k, ctx); goto load_common; - case BPF_S_LDX_IMM: + case BPF_LDX | BPF_IMM: /* X <- k */ ctx->flags |= SEEN_X; emit_load_imm(r_X, k, ctx); break; - case BPF_S_LDX_MEM: + case BPF_LDX | BPF_MEM: /* X <- M[k] */ ctx->flags |= SEEN_X | SEEN_MEM; emit_load(r_X, r_M, SCRATCH_OFF(k), ctx); break; - case BPF_S_LDX_W_LEN: + case BPF_LDX | BPF_W | BPF_LEN: /* X <- len */ ctx->flags |= SEEN_X | SEEN_SKB; off = offsetof(struct sk_buff, len); emit_load(r_X, r_skb, off, ctx); break; - case BPF_S_LDX_B_MSH: + case BPF_LDX | BPF_B | BPF_MSH: + /* the interpreter will deal with the negative K */ + if ((int)k < 0) + return -ENOTSUPP; + /* X <- 4 * (P[k:1] & 0xf) */ ctx->flags |= SEEN_X | SEEN_CALL | SEEN_S0 | SEEN_SKB; /* Load offset to a1 */ @@ -917,50 +948,49 @@ load_ind: emit_b(b_imm(prog->len, ctx), ctx); emit_load_imm(r_ret, 0, ctx); /* delay slot */ break; - case BPF_S_ST: + case BPF_ST: /* M[k] <- A */ ctx->flags |= SEEN_MEM | SEEN_A; emit_store(r_A, r_M, SCRATCH_OFF(k), ctx); break; - case BPF_S_STX: + case BPF_STX: /* M[k] <- X */ ctx->flags |= SEEN_MEM | SEEN_X; emit_store(r_X, r_M, SCRATCH_OFF(k), ctx); break; - case BPF_S_ALU_ADD_K: + case BPF_ALU | BPF_ADD | BPF_K: /* A += K */ ctx->flags |= SEEN_A; emit_addiu(r_A, r_A, k, ctx); break; - case BPF_S_ALU_ADD_X: + case BPF_ALU | BPF_ADD | BPF_X: /* A += X */ ctx->flags |= SEEN_A | SEEN_X; emit_addu(r_A, r_A, r_X, ctx); break; - case BPF_S_ALU_SUB_K: + case BPF_ALU | BPF_SUB | BPF_K: /* A -= K */ ctx->flags |= SEEN_A; emit_addiu(r_A, r_A, -k, ctx); break; - case BPF_S_ALU_SUB_X: + case BPF_ALU | BPF_SUB | BPF_X: /* A -= X */ ctx->flags |= SEEN_A | SEEN_X; emit_subu(r_A, r_A, r_X, ctx); break; - case BPF_S_ALU_MUL_K: + case BPF_ALU | BPF_MUL | BPF_K: /* A *= K */ /* Load K to scratch register before MUL */ ctx->flags |= SEEN_A | SEEN_S0; emit_load_imm(r_s0, k, ctx); emit_mul(r_A, r_A, r_s0, ctx); break; - case BPF_S_ALU_MUL_X: + case BPF_ALU | BPF_MUL | BPF_X: /* A *= X */ - update_on_xread(ctx); ctx->flags |= SEEN_A | SEEN_X; emit_mul(r_A, r_A, r_X, ctx); break; - case BPF_S_ALU_DIV_K: + case BPF_ALU | BPF_DIV | BPF_K: /* A /= k */ if (k == 1) break; @@ -973,7 +1003,7 @@ load_ind: emit_load_imm(r_s0, k, ctx); emit_div(r_A, r_s0, ctx); break; - case BPF_S_ALU_MOD_K: + case BPF_ALU | BPF_MOD | BPF_K: /* A %= k */ if (k == 1 || optimize_div(&k)) { ctx->flags |= SEEN_A; @@ -984,9 +1014,8 @@ load_ind: emit_mod(r_A, r_s0, ctx); } break; - case BPF_S_ALU_DIV_X: + case BPF_ALU | BPF_DIV | BPF_X: /* A /= X */ - update_on_xread(ctx); ctx->flags |= SEEN_X | SEEN_A; /* Check if r_X is zero */ emit_bcond(MIPS_COND_EQ, r_X, r_zero, @@ -994,9 +1023,8 @@ load_ind: emit_load_imm(r_val, 0, ctx); /* delay slot */ emit_div(r_A, r_X, ctx); break; - case BPF_S_ALU_MOD_X: + case BPF_ALU | BPF_MOD | BPF_X: /* A %= X */ - update_on_xread(ctx); 
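The case labels rewritten throughout this hunk drop the BPF_S_* enumeration in favour of the classic composed encoding, where an opcode is the OR of its class, size and mode, and ancillary loads are folded to BPF_ANC | SKF_AD_* by bpf_anc_helper(). A small userspace check of that composition, assuming the uapi <linux/filter.h> header provides the macros:

#include <linux/filter.h>
#include <stdio.h>

int main(void)
{
    unsigned short code = BPF_LD | BPF_W | BPF_ABS; /* "A <- P[k:4]" above */

    /* BPF_CLASS/BPF_SIZE/BPF_MODE take the encoding back apart. */
    printf("class=%#x size=%#x mode=%#x\n",
           BPF_CLASS(code), BPF_SIZE(code), BPF_MODE(code));
    return 0;
}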
ctx->flags |= SEEN_X | SEEN_A; /* Check if r_X is zero */ emit_bcond(MIPS_COND_EQ, r_X, r_zero, @@ -1004,94 +1032,89 @@ load_ind: emit_load_imm(r_val, 0, ctx); /* delay slot */ emit_mod(r_A, r_X, ctx); break; - case BPF_S_ALU_OR_K: + case BPF_ALU | BPF_OR | BPF_K: /* A |= K */ ctx->flags |= SEEN_A; emit_ori(r_A, r_A, k, ctx); break; - case BPF_S_ALU_OR_X: + case BPF_ALU | BPF_OR | BPF_X: /* A |= X */ - update_on_xread(ctx); ctx->flags |= SEEN_A; emit_ori(r_A, r_A, r_X, ctx); break; - case BPF_S_ALU_XOR_K: + case BPF_ALU | BPF_XOR | BPF_K: /* A ^= k */ ctx->flags |= SEEN_A; emit_xori(r_A, r_A, k, ctx); break; - case BPF_S_ANC_ALU_XOR_X: - case BPF_S_ALU_XOR_X: + case BPF_ANC | SKF_AD_ALU_XOR_X: + case BPF_ALU | BPF_XOR | BPF_X: /* A ^= X */ - update_on_xread(ctx); ctx->flags |= SEEN_A; emit_xor(r_A, r_A, r_X, ctx); break; - case BPF_S_ALU_AND_K: + case BPF_ALU | BPF_AND | BPF_K: /* A &= K */ ctx->flags |= SEEN_A; emit_andi(r_A, r_A, k, ctx); break; - case BPF_S_ALU_AND_X: + case BPF_ALU | BPF_AND | BPF_X: /* A &= X */ - update_on_xread(ctx); ctx->flags |= SEEN_A | SEEN_X; emit_and(r_A, r_A, r_X, ctx); break; - case BPF_S_ALU_LSH_K: + case BPF_ALU | BPF_LSH | BPF_K: /* A <<= K */ ctx->flags |= SEEN_A; emit_sll(r_A, r_A, k, ctx); break; - case BPF_S_ALU_LSH_X: + case BPF_ALU | BPF_LSH | BPF_X: /* A <<= X */ ctx->flags |= SEEN_A | SEEN_X; - update_on_xread(ctx); emit_sllv(r_A, r_A, r_X, ctx); break; - case BPF_S_ALU_RSH_K: + case BPF_ALU | BPF_RSH | BPF_K: /* A >>= K */ ctx->flags |= SEEN_A; emit_srl(r_A, r_A, k, ctx); break; - case BPF_S_ALU_RSH_X: + case BPF_ALU | BPF_RSH | BPF_X: ctx->flags |= SEEN_A | SEEN_X; - update_on_xread(ctx); emit_srlv(r_A, r_A, r_X, ctx); break; - case BPF_S_ALU_NEG: + case BPF_ALU | BPF_NEG: /* A = -A */ ctx->flags |= SEEN_A; emit_neg(r_A, ctx); break; - case BPF_S_JMP_JA: + case BPF_JMP | BPF_JA: /* pc += K */ emit_b(b_imm(i + k + 1, ctx), ctx); emit_nop(ctx); break; - case BPF_S_JMP_JEQ_K: + case BPF_JMP | BPF_JEQ | BPF_K: /* pc += ( A == K ) ? pc->jt : pc->jf */ condt = MIPS_COND_EQ | MIPS_COND_K; goto jmp_cmp; - case BPF_S_JMP_JEQ_X: + case BPF_JMP | BPF_JEQ | BPF_X: ctx->flags |= SEEN_X; /* pc += ( A == X ) ? pc->jt : pc->jf */ condt = MIPS_COND_EQ | MIPS_COND_X; goto jmp_cmp; - case BPF_S_JMP_JGE_K: + case BPF_JMP | BPF_JGE | BPF_K: /* pc += ( A >= K ) ? pc->jt : pc->jf */ condt = MIPS_COND_GE | MIPS_COND_K; goto jmp_cmp; - case BPF_S_JMP_JGE_X: + case BPF_JMP | BPF_JGE | BPF_X: ctx->flags |= SEEN_X; /* pc += ( A >= X ) ? pc->jt : pc->jf */ condt = MIPS_COND_GE | MIPS_COND_X; goto jmp_cmp; - case BPF_S_JMP_JGT_K: + case BPF_JMP | BPF_JGT | BPF_K: /* pc += ( A > K ) ? pc->jt : pc->jf */ condt = MIPS_COND_GT | MIPS_COND_K; goto jmp_cmp; - case BPF_S_JMP_JGT_X: + case BPF_JMP | BPF_JGT | BPF_X: ctx->flags |= SEEN_X; /* pc += ( A > X ) ? pc->jt : pc->jf */ condt = MIPS_COND_GT | MIPS_COND_X; @@ -1109,7 +1132,7 @@ jmp_cmp: } /* A < (K|X) ? r_scrach = 1 */ b_off = b_imm(i + inst->jf + 1, ctx); - emit_bcond(MIPS_COND_GT, r_s0, r_zero, b_off, + emit_bcond(MIPS_COND_NE, r_s0, r_zero, b_off, ctx); emit_nop(ctx); /* A > (K|X) ? scratch = 0 */ @@ -1167,7 +1190,7 @@ jmp_cmp: } } break; - case BPF_S_JMP_JSET_K: + case BPF_JMP | BPF_JSET | BPF_K: ctx->flags |= SEEN_S0 | SEEN_S1 | SEEN_A; /* pc += (A & K) ? pc -> jt : pc -> jf */ emit_load_imm(r_s1, k, ctx); @@ -1181,7 +1204,7 @@ jmp_cmp: emit_b(b_off, ctx); emit_nop(ctx); break; - case BPF_S_JMP_JSET_X: + case BPF_JMP | BPF_JSET | BPF_X: ctx->flags |= SEEN_S0 | SEEN_X | SEEN_A; /* pc += (A & X) ? 
pc -> jt : pc -> jf */ emit_and(r_s0, r_A, r_X, ctx); @@ -1194,7 +1217,7 @@ jmp_cmp: emit_b(b_off, ctx); emit_nop(ctx); break; - case BPF_S_RET_A: + case BPF_RET | BPF_A: ctx->flags |= SEEN_A; if (i != prog->len - 1) /* @@ -1204,7 +1227,7 @@ jmp_cmp: emit_b(b_imm(prog->len, ctx), ctx); emit_reg_move(r_ret, r_A, ctx); /* delay slot */ break; - case BPF_S_RET_K: + case BPF_RET | BPF_K: /* * It can emit two instructions so it does not fit on * the delay slot. @@ -1219,19 +1242,18 @@ jmp_cmp: emit_nop(ctx); } break; - case BPF_S_MISC_TAX: + case BPF_MISC | BPF_TAX: /* X = A */ ctx->flags |= SEEN_X | SEEN_A; emit_jit_reg_move(r_X, r_A, ctx); break; - case BPF_S_MISC_TXA: + case BPF_MISC | BPF_TXA: /* A = X */ ctx->flags |= SEEN_A | SEEN_X; - update_on_xread(ctx); emit_jit_reg_move(r_A, r_X, ctx); break; /* AUX */ - case BPF_S_ANC_PROTOCOL: + case BPF_ANC | SKF_AD_PROTOCOL: /* A = ntohs(skb->protocol */ ctx->flags |= SEEN_SKB | SEEN_OFF | SEEN_A; BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, @@ -1256,7 +1278,7 @@ jmp_cmp: } #endif break; - case BPF_S_ANC_CPU: + case BPF_ANC | SKF_AD_CPU: ctx->flags |= SEEN_A | SEEN_OFF; /* A = current_thread_info()->cpu */ BUILD_BUG_ON(FIELD_SIZEOF(struct thread_info, @@ -1265,11 +1287,12 @@ jmp_cmp: /* $28/gp points to the thread_info struct */ emit_load(r_A, 28, off, ctx); break; - case BPF_S_ANC_IFINDEX: + case BPF_ANC | SKF_AD_IFINDEX: /* A = skb->dev->ifindex */ ctx->flags |= SEEN_SKB | SEEN_A | SEEN_S0; off = offsetof(struct sk_buff, dev); - emit_load(r_s0, r_skb, off, ctx); + /* Load *dev pointer */ + emit_load_ptr(r_s0, r_skb, off, ctx); /* error (0) in the delay slot */ emit_bcond(MIPS_COND_EQ, r_s0, r_zero, b_imm(prog->len, ctx), ctx); @@ -1279,31 +1302,36 @@ jmp_cmp: off = offsetof(struct net_device, ifindex); emit_load(r_A, r_s0, off, ctx); break; - case BPF_S_ANC_MARK: + case BPF_ANC | SKF_AD_MARK: ctx->flags |= SEEN_SKB | SEEN_A; BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, mark) != 4); off = offsetof(struct sk_buff, mark); emit_load(r_A, r_skb, off, ctx); break; - case BPF_S_ANC_RXHASH: + case BPF_ANC | SKF_AD_RXHASH: ctx->flags |= SEEN_SKB | SEEN_A; BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, hash) != 4); off = offsetof(struct sk_buff, hash); emit_load(r_A, r_skb, off, ctx); break; - case BPF_S_ANC_VLAN_TAG: - case BPF_S_ANC_VLAN_TAG_PRESENT: + case BPF_ANC | SKF_AD_VLAN_TAG: + case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: ctx->flags |= SEEN_SKB | SEEN_S0 | SEEN_A; BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, vlan_tci) != 2); off = offsetof(struct sk_buff, vlan_tci); emit_half_load(r_s0, r_skb, off, ctx); - if (inst->code == BPF_S_ANC_VLAN_TAG) - emit_and(r_A, r_s0, VLAN_VID_MASK, ctx); - else - emit_and(r_A, r_s0, VLAN_TAG_PRESENT, ctx); + if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) { + emit_andi(r_A, r_s0, (u16)~VLAN_TAG_PRESENT, ctx); + } else { + emit_andi(r_A, r_s0, VLAN_TAG_PRESENT, ctx); + /* return 1 if present */ + emit_sltu(r_A, r_zero, r_A, ctx); + } break; - case BPF_S_ANC_PKTTYPE: + case BPF_ANC | SKF_AD_PKTTYPE: + ctx->flags |= SEEN_SKB; + off = pkt_type_offset(); if (off < 0) @@ -1311,8 +1339,12 @@ jmp_cmp: emit_load_byte(r_tmp, r_skb, off, ctx); /* Keep only the last 3 bits */ emit_andi(r_A, r_tmp, PKT_TYPE_MAX, ctx); +#ifdef __BIG_ENDIAN_BITFIELD + /* Get the actual packet type to the lower 3 bits */ + emit_srl(r_A, r_A, 5, ctx); +#endif break; - case BPF_S_ANC_QUEUE: + case BPF_ANC | SKF_AD_QUEUE: ctx->flags |= SEEN_SKB | SEEN_A; BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, queue_mapping) != 2); @@ -1322,8 +1354,8 @@ jmp_cmp: emit_half_load(r_A, r_skb, 
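In plain C, the two ancillary VLAN loads handled just above compute the following; VLAN_TAG_PRESENT is assumed to be the <linux/if_vlan.h> bit and the helpers are a standalone sketch, not kernel code.

#include <stdint.h>

#define VLAN_TAG_PRESENT 0x1000 /* assumed value, as in <linux/if_vlan.h> */

/* SKF_AD_VLAN_TAG: the tag with the "present" bit masked off
 * (andi r_A, r_s0, (u16)~VLAN_TAG_PRESENT). */
static uint32_t vlan_tag(uint16_t vlan_tci)
{
    return vlan_tci & (uint16_t)~VLAN_TAG_PRESENT;
}

/* SKF_AD_VLAN_TAG_PRESENT: the sltu r_A, r_zero, r_A above normalises
 * the masked value to 0 or 1. */
static uint32_t vlan_tag_present(uint16_t vlan_tci)
{
    return (vlan_tci & VLAN_TAG_PRESENT) != 0;
}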
off, ctx); break; default: - pr_warn("%s: Unhandled opcode: 0x%02x\n", __FILE__, - inst->code); + pr_debug("%s: Unhandled opcode: 0x%02x\n", __FILE__, + inst->code); return -1; } } diff --git a/arch/parisc/include/uapi/asm/signal.h b/arch/parisc/include/uapi/asm/signal.h index a2fa297196bc..f5645d6a89f2 100644 --- a/arch/parisc/include/uapi/asm/signal.h +++ b/arch/parisc/include/uapi/asm/signal.h @@ -69,8 +69,6 @@ #define SA_NOMASK SA_NODEFER #define SA_ONESHOT SA_RESETHAND -#define SA_RESTORER 0x04000000 /* obsolete -- ignored */ - #define MINSIGSTKSZ 2048 #define SIGSTKSZ 8192 diff --git a/arch/parisc/kernel/hardware.c b/arch/parisc/kernel/hardware.c index 608716f8496b..af3bc359dc70 100644 --- a/arch/parisc/kernel/hardware.c +++ b/arch/parisc/kernel/hardware.c @@ -1210,7 +1210,8 @@ static struct hp_hardware hp_hardware_list[] = { {HPHW_FIO, 0x004, 0x00320, 0x0, "Metheus Frame Buffer"}, {HPHW_FIO, 0x004, 0x00340, 0x0, "BARCO CX4500 VME Grphx Cnsl"}, {HPHW_FIO, 0x004, 0x00360, 0x0, "Hughes TOG VME FDDI"}, - {HPHW_FIO, 0x076, 0x000AD, 0x00, "Crestone Peak RS-232"}, + {HPHW_FIO, 0x076, 0x000AD, 0x0, "Crestone Peak Core RS-232"}, + {HPHW_FIO, 0x077, 0x000AD, 0x0, "Crestone Peak Fast? Core RS-232"}, {HPHW_IOA, 0x185, 0x0000B, 0x00, "Java BC Summit Port"}, {HPHW_IOA, 0x1FF, 0x0000B, 0x00, "Hitachi Ghostview Summit Port"}, {HPHW_IOA, 0x580, 0x0000B, 0x10, "U2-IOA BC Runway Port"}, diff --git a/arch/parisc/kernel/sys_parisc32.c b/arch/parisc/kernel/sys_parisc32.c index bb9f3b64de55..93c1963d76fe 100644 --- a/arch/parisc/kernel/sys_parisc32.c +++ b/arch/parisc/kernel/sys_parisc32.c @@ -4,6 +4,7 @@ * Copyright (C) 2000-2001 Hewlett Packard Company * Copyright (C) 2000 John Marvin * Copyright (C) 2001 Matthew Wilcox + * Copyright (C) 2014 Helge Deller <deller@gmx.de> * * These routines maintain argument size conversion between 32bit and 64bit * environment. Based heavily on sys_ia32.c and sys_sparc32.c. 
@@ -11,44 +12,8 @@ #include <linux/compat.h> #include <linux/kernel.h> -#include <linux/sched.h> -#include <linux/fs.h> -#include <linux/mm.h> -#include <linux/file.h> -#include <linux/signal.h> -#include <linux/resource.h> -#include <linux/times.h> -#include <linux/time.h> -#include <linux/smp.h> -#include <linux/sem.h> -#include <linux/shm.h> -#include <linux/slab.h> -#include <linux/uio.h> -#include <linux/ncp_fs.h> -#include <linux/poll.h> -#include <linux/personality.h> -#include <linux/stat.h> -#include <linux/highmem.h> -#include <linux/highuid.h> -#include <linux/mman.h> -#include <linux/binfmts.h> -#include <linux/namei.h> -#include <linux/vfs.h> -#include <linux/ptrace.h> -#include <linux/swap.h> #include <linux/syscalls.h> -#include <asm/types.h> -#include <asm/uaccess.h> -#include <asm/mmu_context.h> - -#undef DEBUG - -#ifdef DEBUG -#define DBG(x) printk x -#else -#define DBG(x) -#endif asmlinkage long sys32_unimplemented(int r26, int r25, int r24, int r23, int r22, int r21, int r20) @@ -57,3 +22,12 @@ asmlinkage long sys32_unimplemented(int r26, int r25, int r24, int r23, current->comm, current->pid, r20); return -ENOSYS; } + +asmlinkage long sys32_fanotify_mark(compat_int_t fanotify_fd, compat_uint_t flags, + compat_uint_t mask0, compat_uint_t mask1, compat_int_t dfd, + const char __user * pathname) +{ + return sys_fanotify_mark(fanotify_fd, flags, + ((__u64)mask1 << 32) | mask0, + dfd, pathname); +} diff --git a/arch/parisc/kernel/syscall_table.S b/arch/parisc/kernel/syscall_table.S index c5fa7a697fba..84c5d3a58fa1 100644 --- a/arch/parisc/kernel/syscall_table.S +++ b/arch/parisc/kernel/syscall_table.S @@ -418,7 +418,7 @@ ENTRY_SAME(accept4) /* 320 */ ENTRY_SAME(prlimit64) ENTRY_SAME(fanotify_init) - ENTRY_COMP(fanotify_mark) + ENTRY_DIFF(fanotify_mark) ENTRY_COMP(clock_adjtime) ENTRY_SAME(name_to_handle_at) /* 325 */ ENTRY_COMP(open_by_handle_at) diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c index ae085ad0fba0..0bef864264c0 100644 --- a/arch/parisc/mm/init.c +++ b/arch/parisc/mm/init.c @@ -728,7 +728,6 @@ static void __init pagetable_init(void) #endif empty_zero_page = alloc_bootmem_pages(PAGE_SIZE); - memset(empty_zero_page, 0, PAGE_SIZE); } static void __init gateway_init(void) diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index bd6dd6ed3a9f..80b94b0add1f 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -145,6 +145,7 @@ config PPC select HAVE_IRQ_EXIT_ON_IRQ_STACK select ARCH_USE_CMPXCHG_LOCKREF if PPC64 select HAVE_ARCH_AUDITSYSCALL + select ARCH_SUPPORTS_ATOMIC_RMW config GENERIC_CSUM def_bool CPU_LITTLE_ENDIAN @@ -414,7 +415,7 @@ config KEXEC config CRASH_DUMP bool "Build a kdump crash kernel" depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP) - select RELOCATABLE if PPC64 || 44x || FSL_BOOKE + select RELOCATABLE if (PPC64 && !COMPILE_TEST) || 44x || FSL_BOOKE help Build a kernel suitable for use as a kdump capture kernel. The same kernel binary can be used as production kernel and dump @@ -1017,6 +1018,7 @@ endmenu if PPC64 config RELOCATABLE bool "Build a relocatable kernel" + depends on !COMPILE_TEST select NONSTATIC_KERNEL help This builds a kernel image that is capable of running anywhere diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug index 790352f93700..35d16bd2760b 100644 --- a/arch/powerpc/Kconfig.debug +++ b/arch/powerpc/Kconfig.debug @@ -303,7 +303,6 @@ config PPC_EARLY_DEBUG_OPAL_VTERMNO This correspond to which /dev/hvcN you want to use for early debug. 
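The compat wrapper added above is needed because the 64-bit mask reaches 32-bit callers as two register-sized halves; recombining them is a shift and an OR, as in this hypothetical standalone helper (mask0 low word, mask1 high word, matching the call above):

#include <stdint.h>

static uint64_t merge_compat_mask(uint32_t mask0, uint32_t mask1)
{
    return ((uint64_t)mask1 << 32) | mask0; /* same expression as the wrapper */
}

The syscall table entry switches from ENTRY_COMP to ENTRY_DIFF so this parisc-specific sys32_fanotify_mark() is used instead of the generic compat entry.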
- On OPAL v1 (takeover) this should always be 0 On OPAL v2, this will be 0 for network console and 1 or 2 for the machine built-in serial ports. diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h index 37991e154ef8..840a5509b3f1 100644 --- a/arch/powerpc/include/asm/code-patching.h +++ b/arch/powerpc/include/asm/code-patching.h @@ -88,4 +88,15 @@ static inline unsigned long ppc_function_entry(void *func) #endif } +static inline unsigned long ppc_global_function_entry(void *func) +{ +#if defined(CONFIG_PPC64) && defined(_CALL_ELF) && _CALL_ELF == 2 + /* PPC64 ABIv2 the global entry point is at the address */ + return (unsigned long)func; +#else + /* All other cases there is no change vs ppc_function_entry() */ + return ppc_function_entry(func); +#endif +} + #endif /* _ASM_POWERPC_CODE_PATCHING_H */ diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h index bc2347774f0a..0fdd7eece6d9 100644 --- a/arch/powerpc/include/asm/cputable.h +++ b/arch/powerpc/include/asm/cputable.h @@ -447,6 +447,7 @@ extern const char *powerpc_base_platform; CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \ CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP) #define CPU_FTRS_POWER8E (CPU_FTRS_POWER8 | CPU_FTR_PMAO_BUG) +#define CPU_FTRS_POWER8_DD1 (CPU_FTRS_POWER8 & ~CPU_FTR_DBELL) #define CPU_FTRS_CELL (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ CPU_FTR_ALTIVEC_COMP | CPU_FTR_MMCRA | CPU_FTR_SMT | \ diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index fddb72b48ce9..d645428a65a4 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -198,8 +198,10 @@ static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r, return rb; } -static inline unsigned long hpte_page_size(unsigned long h, unsigned long l) +static inline unsigned long __hpte_page_size(unsigned long h, unsigned long l, + bool is_base_size) { + int size, a_psize; /* Look at the 8 bit LP value */ unsigned int lp = (l >> LP_SHIFT) & ((1 << LP_BITS) - 1); @@ -214,14 +216,27 @@ static inline unsigned long hpte_page_size(unsigned long h, unsigned long l) continue; a_psize = __hpte_actual_psize(lp, size); - if (a_psize != -1) + if (a_psize != -1) { + if (is_base_size) + return 1ul << mmu_psize_defs[size].shift; return 1ul << mmu_psize_defs[a_psize].shift; + } } } return 0; } +static inline unsigned long hpte_page_size(unsigned long h, unsigned long l) +{ + return __hpte_page_size(h, l, 0); +} + +static inline unsigned long hpte_base_page_size(unsigned long h, unsigned long l) +{ + return __hpte_page_size(h, l, 1); +} + static inline unsigned long hpte_rpn(unsigned long ptel, unsigned long psize) { return ((ptel & HPTE_R_RPN) & ~(psize - 1)) >> PAGE_SHIFT; diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h index 807014dde821..c2b4dcf23d03 100644 --- a/arch/powerpc/include/asm/mmu-hash64.h +++ b/arch/powerpc/include/asm/mmu-hash64.h @@ -22,6 +22,7 @@ */ #include <asm/pgtable-ppc64.h> #include <asm/bug.h> +#include <asm/processor.h> /* * Segment table @@ -496,7 +497,7 @@ extern void slb_set_size(u16 size); */ struct subpage_prot_table { unsigned long maxaddr; /* only addresses < this are protected */ - unsigned int **protptrs[2]; + unsigned int **protptrs[(TASK_SIZE_USER64 >> 43)]; unsigned int *low_prot[4]; }; diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h index 
f8d1d6dcf7db..e61f24ed4e65 100644 --- a/arch/powerpc/include/asm/mmu.h +++ b/arch/powerpc/include/asm/mmu.h @@ -19,8 +19,7 @@ #define MMU_FTR_TYPE_40x ASM_CONST(0x00000004) #define MMU_FTR_TYPE_44x ASM_CONST(0x00000008) #define MMU_FTR_TYPE_FSL_E ASM_CONST(0x00000010) -#define MMU_FTR_TYPE_3E ASM_CONST(0x00000020) -#define MMU_FTR_TYPE_47x ASM_CONST(0x00000040) +#define MMU_FTR_TYPE_47x ASM_CONST(0x00000020) /* * This is individual features @@ -106,13 +105,6 @@ MMU_FTR_CI_LARGE_PAGE #define MMU_FTRS_PA6T MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \ MMU_FTR_CI_LARGE_PAGE | MMU_FTR_NO_SLBIE_B -#define MMU_FTRS_A2 MMU_FTR_TYPE_3E | MMU_FTR_USE_TLBILX | \ - MMU_FTR_USE_TLBIVAX_BCAST | \ - MMU_FTR_LOCK_BCAST_INVAL | \ - MMU_FTR_USE_TLBRSRV | \ - MMU_FTR_USE_PAIRED_MAS | \ - MMU_FTR_TLBIEL | \ - MMU_FTR_16M_PAGE #ifndef __ASSEMBLY__ #include <asm/cputable.h> diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h index 460018889ba9..0da1dbd42e02 100644 --- a/arch/powerpc/include/asm/opal.h +++ b/arch/powerpc/include/asm/opal.h @@ -12,27 +12,7 @@ #ifndef __OPAL_H #define __OPAL_H -/****** Takeover interface ********/ - -/* PAPR H-Call used to querty the HAL existence and/or instanciate - * it from within pHyp (tech preview only). - * - * This is exclusively used in prom_init.c - */ - #ifndef __ASSEMBLY__ - -struct opal_takeover_args { - u64 k_image; /* r4 */ - u64 k_size; /* r5 */ - u64 k_entry; /* r6 */ - u64 k_entry2; /* r7 */ - u64 hal_addr; /* r8 */ - u64 rd_image; /* r9 */ - u64 rd_size; /* r10 */ - u64 rd_loc; /* r11 */ -}; - /* * SG entry * @@ -55,15 +35,6 @@ struct opal_sg_list { /* We calculate number of sg entries based on PAGE_SIZE */ #define SG_ENTRIES_PER_NODE ((PAGE_SIZE - 16) / sizeof(struct opal_sg_entry)) -extern long opal_query_takeover(u64 *hal_size, u64 *hal_align); - -extern long opal_do_takeover(struct opal_takeover_args *args); - -struct rtas_args; -extern int opal_enter_rtas(struct rtas_args *args, - unsigned long data, - unsigned long entry); - #endif /* __ASSEMBLY__ */ /****** OPAL APIs ******/ diff --git a/arch/powerpc/include/asm/perf_event_server.h b/arch/powerpc/include/asm/perf_event_server.h index 9ed737146dbb..b3e936027b26 100644 --- a/arch/powerpc/include/asm/perf_event_server.h +++ b/arch/powerpc/include/asm/perf_event_server.h @@ -61,8 +61,7 @@ struct power_pmu { #define PPMU_SIAR_VALID 0x00000010 /* Processor has SIAR Valid bit */ #define PPMU_HAS_SSLOT 0x00000020 /* Has sampled slot in MMCRA */ #define PPMU_HAS_SIER 0x00000040 /* Has SIER */ -#define PPMU_BHRB 0x00000080 /* has BHRB feature enabled */ -#define PPMU_EBB 0x00000100 /* supports event based branch */ +#define PPMU_ARCH_207S 0x00000080 /* PMC is architecture v2.07S */ /* * Values for flags to get_alternatives() diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h index 9ea266eae33e..7e4612528546 100644 --- a/arch/powerpc/include/asm/ppc_asm.h +++ b/arch/powerpc/include/asm/ppc_asm.h @@ -277,6 +277,8 @@ n: .globl n; \ n: +#define _GLOBAL_TOC(name) _GLOBAL(name) + #define _KPROBE(n) \ .section ".kprobes.text","a"; \ .globl n; \ diff --git a/arch/powerpc/include/asm/swab.h b/arch/powerpc/include/asm/swab.h index b9bd1ca944d0..96f59de61855 100644 --- a/arch/powerpc/include/asm/swab.h +++ b/arch/powerpc/include/asm/swab.h @@ -9,10 +9,6 @@ #include <uapi/asm/swab.h> -#ifdef __GNUC__ -#ifndef __powerpc64__ -#endif /* __powerpc64__ */ - static __inline__ __u16 ld_le16(const volatile __u16 *addr) { __u16 val; @@ -20,19 +16,12 @@ static __inline__ __u16 
ld_le16(const volatile __u16 *addr) __asm__ __volatile__ ("lhbrx %0,0,%1" : "=r" (val) : "r" (addr), "m" (*addr)); return val; } -#define __arch_swab16p ld_le16 static __inline__ void st_le16(volatile __u16 *addr, const __u16 val) { __asm__ __volatile__ ("sthbrx %1,0,%2" : "=m" (*addr) : "r" (val), "r" (addr)); } -static inline void __arch_swab16s(__u16 *addr) -{ - st_le16(addr, *addr); -} -#define __arch_swab16s __arch_swab16s - static __inline__ __u32 ld_le32(const volatile __u32 *addr) { __u32 val; @@ -40,42 +29,10 @@ static __inline__ __u32 ld_le32(const volatile __u32 *addr) __asm__ __volatile__ ("lwbrx %0,0,%1" : "=r" (val) : "r" (addr), "m" (*addr)); return val; } -#define __arch_swab32p ld_le32 static __inline__ void st_le32(volatile __u32 *addr, const __u32 val) { __asm__ __volatile__ ("stwbrx %1,0,%2" : "=m" (*addr) : "r" (val), "r" (addr)); } -static inline void __arch_swab32s(__u32 *addr) -{ - st_le32(addr, *addr); -} -#define __arch_swab32s __arch_swab32s - -static inline __attribute_const__ __u16 __arch_swab16(__u16 value) -{ - __u16 result; - - __asm__("rlwimi %0,%1,8,16,23" - : "=r" (result) - : "r" (value), "0" (value >> 8)); - return result; -} -#define __arch_swab16 __arch_swab16 - -static inline __attribute_const__ __u32 __arch_swab32(__u32 value) -{ - __u32 result; - - __asm__("rlwimi %0,%1,24,16,23\n\t" - "rlwimi %0,%1,8,8,15\n\t" - "rlwimi %0,%1,24,0,7" - : "=r" (result) - : "r" (value), "0" (value >> 24)); - return result; -} -#define __arch_swab32 __arch_swab32 - -#endif /* __GNUC__ */ #endif /* _ASM_POWERPC_SWAB_H */ diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c index 965291b4c2fa..0c157642c2a1 100644 --- a/arch/powerpc/kernel/cputable.c +++ b/arch/powerpc/kernel/cputable.c @@ -527,6 +527,26 @@ static struct cpu_spec __initdata cpu_specs[] = { .machine_check_early = __machine_check_early_realmode_p8, .platform = "power8", }, + { /* Power8 DD1: Does not support doorbell IPIs */ + .pvr_mask = 0xffffff00, + .pvr_value = 0x004d0100, + .cpu_name = "POWER8 (raw)", + .cpu_features = CPU_FTRS_POWER8_DD1, + .cpu_user_features = COMMON_USER_POWER8, + .cpu_user_features2 = COMMON_USER2_POWER8, + .mmu_features = MMU_FTRS_POWER8, + .icache_bsize = 128, + .dcache_bsize = 128, + .num_pmcs = 6, + .pmc_type = PPC_PMC_IBM, + .oprofile_cpu_type = "ppc64/power8", + .oprofile_type = PPC_OPROFILE_INVALID, + .cpu_setup = __setup_cpu_power8, + .cpu_restore = __restore_cpu_power8, + .flush_tlb = __flush_tlb_power8, + .machine_check_early = __machine_check_early_realmode_p8, + .platform = "power8", + }, { /* Power8 */ .pvr_mask = 0xffff0000, .pvr_value = 0x004d0000, diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c index f202d0731b06..d178834fe508 100644 --- a/arch/powerpc/kernel/ftrace.c +++ b/arch/powerpc/kernel/ftrace.c @@ -10,6 +10,8 @@ * */ +#define pr_fmt(fmt) "ftrace-powerpc: " fmt + #include <linux/spinlock.h> #include <linux/hardirq.h> #include <linux/uaccess.h> @@ -105,7 +107,7 @@ __ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr) { unsigned int op; - unsigned long ptr; + unsigned long entry, ptr; unsigned long ip = rec->ip; void *tramp; @@ -115,7 +117,7 @@ __ftrace_make_nop(struct module *mod, /* Make sure that that this is still a 24bit jump */ if (!is_bl_op(op)) { - printk(KERN_ERR "Not expected bl: opcode is %x\n", op); + pr_err("Not expected bl: opcode is %x\n", op); return -EINVAL; } @@ -125,21 +127,21 @@ __ftrace_make_nop(struct module *mod, pr_devel("ip:%lx jumps to %p", ip, tramp); if 
(!is_module_trampoline(tramp)) { - printk(KERN_ERR "Not a trampoline\n"); + pr_err("Not a trampoline\n"); return -EINVAL; } if (module_trampoline_target(mod, tramp, &ptr)) { - printk(KERN_ERR "Failed to get trampoline target\n"); + pr_err("Failed to get trampoline target\n"); return -EFAULT; } pr_devel("trampoline target %lx", ptr); + entry = ppc_global_function_entry((void *)addr); /* This should match what was called */ - if (ptr != ppc_function_entry((void *)addr)) { - printk(KERN_ERR "addr %lx does not match expected %lx\n", - ptr, ppc_function_entry((void *)addr)); + if (ptr != entry) { + pr_err("addr %lx does not match expected %lx\n", ptr, entry); return -EINVAL; } @@ -179,7 +181,7 @@ __ftrace_make_nop(struct module *mod, /* Make sure that that this is still a 24bit jump */ if (!is_bl_op(op)) { - printk(KERN_ERR "Not expected bl: opcode is %x\n", op); + pr_err("Not expected bl: opcode is %x\n", op); return -EINVAL; } @@ -198,7 +200,7 @@ __ftrace_make_nop(struct module *mod, /* Find where the trampoline jumps to */ if (probe_kernel_read(jmp, (void *)tramp, sizeof(jmp))) { - printk(KERN_ERR "Failed to read %lx\n", tramp); + pr_err("Failed to read %lx\n", tramp); return -EFAULT; } @@ -209,7 +211,7 @@ __ftrace_make_nop(struct module *mod, ((jmp[1] & 0xffff0000) != 0x398c0000) || (jmp[2] != 0x7d8903a6) || (jmp[3] != 0x4e800420)) { - printk(KERN_ERR "Not a trampoline\n"); + pr_err("Not a trampoline\n"); return -EINVAL; } @@ -221,8 +223,7 @@ __ftrace_make_nop(struct module *mod, pr_devel(" %lx ", tramp); if (tramp != addr) { - printk(KERN_ERR - "Trampoline location %08lx does not match addr\n", + pr_err("Trampoline location %08lx does not match addr\n", tramp); return -EINVAL; } @@ -263,15 +264,13 @@ int ftrace_make_nop(struct module *mod, */ if (!rec->arch.mod) { if (!mod) { - printk(KERN_ERR "No module loaded addr=%lx\n", - addr); + pr_err("No module loaded addr=%lx\n", addr); return -EFAULT; } rec->arch.mod = mod; } else if (mod) { if (mod != rec->arch.mod) { - printk(KERN_ERR - "Record mod %p not equal to passed in mod %p\n", + pr_err("Record mod %p not equal to passed in mod %p\n", rec->arch.mod, mod); return -EINVAL; } @@ -307,26 +306,25 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) * The load offset is different depending on the ABI. For simplicity * just mask it out when doing the compare. 
*/ - if ((op[0] != 0x48000008) || ((op[1] & 0xffff00000) != 0xe8410000)) { - printk(KERN_ERR "Unexpected call sequence: %x %x\n", - op[0], op[1]); + if ((op[0] != 0x48000008) || ((op[1] & 0xffff0000) != 0xe8410000)) { + pr_err("Unexpected call sequence: %x %x\n", op[0], op[1]); return -EINVAL; } /* If we never set up a trampoline to ftrace_caller, then bail */ if (!rec->arch.mod->arch.tramp) { - printk(KERN_ERR "No ftrace trampoline\n"); + pr_err("No ftrace trampoline\n"); return -EINVAL; } /* Ensure branch is within 24 bits */ - if (create_branch(ip, rec->arch.mod->arch.tramp, BRANCH_SET_LINK)) { - printk(KERN_ERR "Branch out of range"); + if (!create_branch(ip, rec->arch.mod->arch.tramp, BRANCH_SET_LINK)) { + pr_err("Branch out of range\n"); return -EINVAL; } if (patch_branch(ip, rec->arch.mod->arch.tramp, BRANCH_SET_LINK)) { - printk(KERN_ERR "REL24 out of range!\n"); + pr_err("REL24 out of range!\n"); return -EINVAL; } @@ -345,13 +343,13 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) /* It should be pointing to a nop */ if (op != PPC_INST_NOP) { - printk(KERN_ERR "Expected NOP but have %x\n", op); + pr_err("Expected NOP but have %x\n", op); return -EINVAL; } /* If we never set up a trampoline to ftrace_caller, then bail */ if (!rec->arch.mod->arch.tramp) { - printk(KERN_ERR "No ftrace trampoline\n"); + pr_err("No ftrace trampoline\n"); return -EINVAL; } @@ -359,7 +357,7 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) op = create_branch((unsigned int *)ip, rec->arch.mod->arch.tramp, BRANCH_SET_LINK); if (!op) { - printk(KERN_ERR "REL24 out of range!\n"); + pr_err("REL24 out of range!\n"); return -EINVAL; } @@ -397,7 +395,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) * already have a module defined. 
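Background for the corrected check above: create_branch() returns the encoded instruction, or 0 when the target is out of reach, hence the added negation. A bl instruction carries a 24-bit, word-aligned signed displacement (26 bits once the implied low zero bits are included); a rough standalone equivalent of the reachability test:

#include <stdbool.h>
#include <stdint.h>

static bool bl_in_range(uint64_t from, uint64_t to)
{
    int64_t offset = (int64_t)(to - from);

    return (offset & 3) == 0 &&     /* word aligned             */
           offset >= -0x2000000 &&  /* -2^25 bytes backwards    */
           offset <=  0x1fffffc;    /*  2^25 - 4 bytes forwards */
}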
*/ if (!rec->arch.mod) { - printk(KERN_ERR "No module loaded\n"); + pr_err("No module loaded\n"); return -EINVAL; } diff --git a/arch/powerpc/kernel/idle_power7.S b/arch/powerpc/kernel/idle_power7.S index 2480256272d4..5cf3d367190d 100644 --- a/arch/powerpc/kernel/idle_power7.S +++ b/arch/powerpc/kernel/idle_power7.S @@ -131,7 +131,7 @@ _GLOBAL(power7_nap) _GLOBAL(power7_sleep) li r3,1 - li r4,0 + li r4,1 b power7_powersave_common /* No return */ diff --git a/arch/powerpc/kernel/iomap.c b/arch/powerpc/kernel/iomap.c index b82227e7e21b..12e48d56f771 100644 --- a/arch/powerpc/kernel/iomap.c +++ b/arch/powerpc/kernel/iomap.c @@ -23,7 +23,7 @@ unsigned int ioread16(void __iomem *addr) } unsigned int ioread16be(void __iomem *addr) { - return in_be16(addr); + return readw_be(addr); } unsigned int ioread32(void __iomem *addr) { @@ -31,7 +31,7 @@ unsigned int ioread32(void __iomem *addr) } unsigned int ioread32be(void __iomem *addr) { - return in_be32(addr); + return readl_be(addr); } EXPORT_SYMBOL(ioread8); EXPORT_SYMBOL(ioread16); @@ -49,7 +49,7 @@ void iowrite16(u16 val, void __iomem *addr) } void iowrite16be(u16 val, void __iomem *addr) { - out_be16(addr, val); + writew_be(val, addr); } void iowrite32(u32 val, void __iomem *addr) { @@ -57,7 +57,7 @@ void iowrite32(u32 val, void __iomem *addr) } void iowrite32be(u32 val, void __iomem *addr) { - out_be32(addr, val); + writel_be(val, addr); } EXPORT_SYMBOL(iowrite8); EXPORT_SYMBOL(iowrite16); @@ -75,15 +75,15 @@ EXPORT_SYMBOL(iowrite32be); */ void ioread8_rep(void __iomem *addr, void *dst, unsigned long count) { - _insb((u8 __iomem *) addr, dst, count); + readsb(addr, dst, count); } void ioread16_rep(void __iomem *addr, void *dst, unsigned long count) { - _insw_ns((u16 __iomem *) addr, dst, count); + readsw(addr, dst, count); } void ioread32_rep(void __iomem *addr, void *dst, unsigned long count) { - _insl_ns((u32 __iomem *) addr, dst, count); + readsl(addr, dst, count); } EXPORT_SYMBOL(ioread8_rep); EXPORT_SYMBOL(ioread16_rep); @@ -91,15 +91,15 @@ EXPORT_SYMBOL(ioread32_rep); void iowrite8_rep(void __iomem *addr, const void *src, unsigned long count) { - _outsb((u8 __iomem *) addr, src, count); + writesb(addr, src, count); } void iowrite16_rep(void __iomem *addr, const void *src, unsigned long count) { - _outsw_ns((u16 __iomem *) addr, src, count); + writesw(addr, src, count); } void iowrite32_rep(void __iomem *addr, const void *src, unsigned long count) { - _outsl_ns((u32 __iomem *) addr, src, count); + writesl(addr, src, count); } EXPORT_SYMBOL(iowrite8_rep); EXPORT_SYMBOL(iowrite16_rep); diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c index 90fab64d911d..2f72af82513c 100644 --- a/arch/powerpc/kernel/kprobes.c +++ b/arch/powerpc/kernel/kprobes.c @@ -32,6 +32,7 @@ #include <linux/module.h> #include <linux/kdebug.h> #include <linux/slab.h> +#include <asm/code-patching.h> #include <asm/cacheflush.h> #include <asm/sstep.h> #include <asm/uaccess.h> @@ -491,12 +492,10 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self, return ret; } -#ifdef CONFIG_PPC64 unsigned long arch_deref_entry_point(void *entry) { - return ((func_descr_t *)entry)->entry; + return ppc_global_function_entry(entry); } -#endif int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) { @@ -508,8 +507,12 @@ int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) /* setup return addr to the jprobe handler routine */ regs->nip = arch_deref_entry_point(jp->entry); #ifdef CONFIG_PPC64 +#if defined(_CALL_ELF) 
&& _CALL_ELF == 2 + regs->gpr[12] = (unsigned long)jp->entry; +#else regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc); #endif +#endif return 1; } diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c index 077d2ce6c5a7..d807ee626af9 100644 --- a/arch/powerpc/kernel/module_64.c +++ b/arch/powerpc/kernel/module_64.c @@ -315,8 +315,17 @@ static void dedotify_versions(struct modversion_info *vers, struct modversion_info *end; for (end = (void *)vers + size; vers < end; vers++) - if (vers->name[0] == '.') + if (vers->name[0] == '.') { memmove(vers->name, vers->name+1, strlen(vers->name)); +#ifdef ARCH_RELOCATES_KCRCTAB + /* The TOC symbol has no CRC computed. To avoid CRC + * check failing, we must force it to the expected + * value (see CRC check in module.c). + */ + if (!strcmp(vers->name, "TOC.")) + vers->crc = -(unsigned long)reloc_start; +#endif + } } /* Undefined symbols which refer to .funcname, hack to funcname (or .TOC.) */ diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c index 613a860a203c..b694b0730971 100644 --- a/arch/powerpc/kernel/prom.c +++ b/arch/powerpc/kernel/prom.c @@ -662,13 +662,6 @@ void __init early_init_devtree(void *params) of_scan_flat_dt(early_init_dt_scan_fw_dump, NULL); #endif - /* Pre-initialize the cmd_line with the content of boot_commmand_line, - * which will be empty except when the content of the variable has - * been overriden by a bootloading mechanism. This happens typically - * with HAL takeover - */ - strlcpy(cmd_line, boot_command_line, COMMAND_LINE_SIZE); - /* Retrieve various informations from the /chosen node of the * device-tree, including the platform type, initrd location and * size, TCE reserve, and more ... diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c index 078145acf7fb..1a85d8f96739 100644 --- a/arch/powerpc/kernel/prom_init.c +++ b/arch/powerpc/kernel/prom_init.c @@ -1268,201 +1268,6 @@ static u64 __initdata prom_opal_base; static u64 __initdata prom_opal_entry; #endif -#ifdef __BIG_ENDIAN__ -/* XXX Don't change this structure without updating opal-takeover.S */ -static struct opal_secondary_data { - s64 ack; /* 0 */ - u64 go; /* 8 */ - struct opal_takeover_args args; /* 16 */ -} opal_secondary_data; - -static u64 __initdata prom_opal_align; -static u64 __initdata prom_opal_size; -static int __initdata prom_rtas_start_cpu; -static u64 __initdata prom_rtas_data; -static u64 __initdata prom_rtas_entry; - -extern char opal_secondary_entry; - -static void __init prom_query_opal(void) -{ - long rc; - - /* We must not query for OPAL presence on a machine that - * supports TNK takeover (970 blades), as this uses the same - * h-call with different arguments and will crash - */ - if (PHANDLE_VALID(call_prom("finddevice", 1, 1, - ADDR("/tnk-memory-map")))) { - prom_printf("TNK takeover detected, skipping OPAL check\n"); - return; - } - - prom_printf("Querying for OPAL presence... "); - - rc = opal_query_takeover(&prom_opal_size, - &prom_opal_align); - prom_debug("(rc = %ld) ", rc); - if (rc != 0) { - prom_printf("not there.\n"); - return; - } - of_platform = PLATFORM_OPAL; - prom_printf(" there !\n"); - prom_debug(" opal_size = 0x%lx\n", prom_opal_size); - prom_debug(" opal_align = 0x%lx\n", prom_opal_align); - if (prom_opal_align < 0x10000) - prom_opal_align = 0x10000; -} - -static int __init prom_rtas_call(int token, int nargs, int nret, - int *outputs, ...) 
-{ - struct rtas_args rtas_args; - va_list list; - int i; - - rtas_args.token = token; - rtas_args.nargs = nargs; - rtas_args.nret = nret; - rtas_args.rets = (rtas_arg_t *)&(rtas_args.args[nargs]); - va_start(list, outputs); - for (i = 0; i < nargs; ++i) - rtas_args.args[i] = va_arg(list, rtas_arg_t); - va_end(list); - - for (i = 0; i < nret; ++i) - rtas_args.rets[i] = 0; - - opal_enter_rtas(&rtas_args, prom_rtas_data, - prom_rtas_entry); - - if (nret > 1 && outputs != NULL) - for (i = 0; i < nret-1; ++i) - outputs[i] = rtas_args.rets[i+1]; - return (nret > 0)? rtas_args.rets[0]: 0; -} - -static void __init prom_opal_hold_cpus(void) -{ - int i, cnt, cpu, rc; - long j; - phandle node; - char type[64]; - u32 servers[8]; - void *entry = (unsigned long *)&opal_secondary_entry; - struct opal_secondary_data *data = &opal_secondary_data; - - prom_debug("prom_opal_hold_cpus: start...\n"); - prom_debug(" - entry = 0x%x\n", entry); - prom_debug(" - data = 0x%x\n", data); - - data->ack = -1; - data->go = 0; - - /* look for cpus */ - for (node = 0; prom_next_node(&node); ) { - type[0] = 0; - prom_getprop(node, "device_type", type, sizeof(type)); - if (strcmp(type, "cpu") != 0) - continue; - - /* Skip non-configured cpus. */ - if (prom_getprop(node, "status", type, sizeof(type)) > 0) - if (strcmp(type, "okay") != 0) - continue; - - cnt = prom_getprop(node, "ibm,ppc-interrupt-server#s", servers, - sizeof(servers)); - if (cnt == PROM_ERROR) - break; - cnt >>= 2; - for (i = 0; i < cnt; i++) { - cpu = servers[i]; - prom_debug("CPU %d ... ", cpu); - if (cpu == prom.cpu) { - prom_debug("booted !\n"); - continue; - } - prom_debug("starting ... "); - - /* Init the acknowledge var which will be reset by - * the secondary cpu when it awakens from its OF - * spinloop. - */ - data->ack = -1; - rc = prom_rtas_call(prom_rtas_start_cpu, 3, 1, - NULL, cpu, entry, data); - prom_debug("rtas rc=%d ...", rc); - - for (j = 0; j < 100000000 && data->ack == -1; j++) { - HMT_low(); - mb(); - } - HMT_medium(); - if (data->ack != -1) - prom_debug("done, PIR=0x%x\n", data->ack); - else - prom_debug("timeout !\n"); - } - } - prom_debug("prom_opal_hold_cpus: end...\n"); -} - -static void __init prom_opal_takeover(void) -{ - struct opal_secondary_data *data = &opal_secondary_data; - struct opal_takeover_args *args = &data->args; - u64 align = prom_opal_align; - u64 top_addr, opal_addr; - - args->k_image = (u64)_stext; - args->k_size = _end - _stext; - args->k_entry = 0; - args->k_entry2 = 0x60; - - top_addr = _ALIGN_UP(args->k_size, align); - - if (prom_initrd_start != 0) { - args->rd_image = prom_initrd_start; - args->rd_size = prom_initrd_end - args->rd_image; - args->rd_loc = top_addr; - top_addr = _ALIGN_UP(args->rd_loc + args->rd_size, align); - } - - /* Pickup an address for the HAL. We want to go really high - * up to avoid problem with future kexecs. On the other hand - * we don't want to be all over the TCEs on P5IOC2 machines - * which are going to be up there too. 
We assume the machine - * has plenty of memory, and we ask for the HAL for now to - * be just below the 1G point, or above the initrd - */ - opal_addr = _ALIGN_DOWN(0x40000000 - prom_opal_size, align); - if (opal_addr < top_addr) - opal_addr = top_addr; - args->hal_addr = opal_addr; - - /* Copy the command line to the kernel image */ - strlcpy(boot_command_line, prom_cmd_line, - COMMAND_LINE_SIZE); - - prom_debug(" k_image = 0x%lx\n", args->k_image); - prom_debug(" k_size = 0x%lx\n", args->k_size); - prom_debug(" k_entry = 0x%lx\n", args->k_entry); - prom_debug(" k_entry2 = 0x%lx\n", args->k_entry2); - prom_debug(" hal_addr = 0x%lx\n", args->hal_addr); - prom_debug(" rd_image = 0x%lx\n", args->rd_image); - prom_debug(" rd_size = 0x%lx\n", args->rd_size); - prom_debug(" rd_loc = 0x%lx\n", args->rd_loc); - prom_printf("Performing OPAL takeover,this can take a few minutes..\n"); - prom_close_stdin(); - mb(); - data->go = 1; - for (;;) - opal_do_takeover(args); -} -#endif /* __BIG_ENDIAN__ */ - /* * Allocate room for and instantiate OPAL */ @@ -1597,12 +1402,6 @@ static void __init prom_instantiate_rtas(void) &val, sizeof(val)) != PROM_ERROR) rtas_has_query_cpu_stopped = true; -#if defined(CONFIG_PPC_POWERNV) && defined(__BIG_ENDIAN__) - /* PowerVN takeover hack */ - prom_rtas_data = base; - prom_rtas_entry = entry; - prom_getprop(rtas_node, "start-cpu", &prom_rtas_start_cpu, 4); -#endif prom_debug("rtas base = 0x%x\n", base); prom_debug("rtas entry = 0x%x\n", entry); prom_debug("rtas size = 0x%x\n", (long)size); @@ -3027,16 +2826,6 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4, prom_instantiate_rtas(); #ifdef CONFIG_PPC_POWERNV -#ifdef __BIG_ENDIAN__ - /* Detect HAL and try instanciating it & doing takeover */ - if (of_platform == PLATFORM_PSERIES_LPAR) { - prom_query_opal(); - if (of_platform == PLATFORM_OPAL) { - prom_opal_hold_cpus(); - prom_opal_takeover(); - } - } else -#endif /* __BIG_ENDIAN__ */ if (of_platform == PLATFORM_OPAL) prom_instantiate_opal(); #endif /* CONFIG_PPC_POWERNV */ diff --git a/arch/powerpc/kernel/prom_init_check.sh b/arch/powerpc/kernel/prom_init_check.sh index 77aa1e95e904..fe8e54b9ef7d 100644 --- a/arch/powerpc/kernel/prom_init_check.sh +++ b/arch/powerpc/kernel/prom_init_check.sh @@ -21,9 +21,7 @@ _end enter_prom memcpy memset reloc_offset __secondary_hold __secondary_hold_acknowledge __secondary_hold_spinloop __start strcmp strcpy strlcpy strlen strncmp strstr logo_linux_clut224 reloc_got2 kernstart_addr memstart_addr linux_banner _stext -opal_query_takeover opal_do_takeover opal_enter_rtas opal_secondary_entry -boot_command_line __prom_init_toc_start __prom_init_toc_end -btext_setup_display TOC." +__prom_init_toc_start __prom_init_toc_end btext_setup_display TOC." 
NM="$1" OBJ="$2" diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c index e239df3768ac..e5b022c55ccd 100644 --- a/arch/powerpc/kernel/setup-common.c +++ b/arch/powerpc/kernel/setup-common.c @@ -469,9 +469,17 @@ void __init smp_setup_cpu_maps(void) } for (j = 0; j < nthreads && cpu < nr_cpu_ids; j++) { + bool avail; + DBG(" thread %d -> cpu %d (hard id %d)\n", j, cpu, be32_to_cpu(intserv[j])); - set_cpu_present(cpu, of_device_is_available(dn)); + + avail = of_device_is_available(dn); + if (!avail) + avail = !of_property_match_string(dn, + "enable-method", "spin-table"); + + set_cpu_present(cpu, avail); set_hard_smp_processor_id(cpu, be32_to_cpu(intserv[j])); set_cpu_possible(cpu, true); cpu++; diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c index 4e47db686b5d..1bc5a1755ed4 100644 --- a/arch/powerpc/kernel/signal_32.c +++ b/arch/powerpc/kernel/signal_32.c @@ -54,7 +54,6 @@ #include "signal.h" -#undef DEBUG_SIG #ifdef CONFIG_PPC64 #define sys_rt_sigreturn compat_sys_rt_sigreturn @@ -1063,10 +1062,6 @@ int handle_rt_signal32(unsigned long sig, struct k_sigaction *ka, return 1; badframe: -#ifdef DEBUG_SIG - printk("badframe in handle_rt_signal, regs=%p frame=%p newsp=%lx\n", - regs, frame, newsp); -#endif if (show_unhandled_signals) printk_ratelimited(KERN_INFO "%s[%d]: bad frame in handle_rt_signal32: " @@ -1484,10 +1479,6 @@ int handle_signal32(unsigned long sig, struct k_sigaction *ka, return 1; badframe: -#ifdef DEBUG_SIG - printk("badframe in handle_signal, regs=%p frame=%p newsp=%lx\n", - regs, frame, newsp); -#endif if (show_unhandled_signals) printk_ratelimited(KERN_INFO "%s[%d]: bad frame in handle_signal32: " diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c index d501dc4dc3e6..97c1e4b683fc 100644 --- a/arch/powerpc/kernel/signal_64.c +++ b/arch/powerpc/kernel/signal_64.c @@ -38,7 +38,6 @@ #include "signal.h" -#define DEBUG_SIG 0 #define GP_REGS_SIZE min(sizeof(elf_gregset_t), sizeof(struct pt_regs)) #define FP_REGS_SIZE sizeof(elf_fpregset_t) @@ -700,10 +699,6 @@ int sys_rt_sigreturn(unsigned long r3, unsigned long r4, unsigned long r5, return 0; badframe: -#if DEBUG_SIG - printk("badframe in sys_rt_sigreturn, regs=%p uc=%p &uc->uc_mcontext=%p\n", - regs, uc, &uc->uc_mcontext); -#endif if (show_unhandled_signals) printk_ratelimited(regs->msr & MSR_64BIT ? fmt64 : fmt32, current->comm, current->pid, "rt_sigreturn", @@ -809,10 +804,6 @@ int handle_rt_signal64(int signr, struct k_sigaction *ka, siginfo_t *info, return 1; badframe: -#if DEBUG_SIG - printk("badframe in setup_rt_frame, regs=%p frame=%p newsp=%lx\n", - regs, frame, newsp); -#endif if (show_unhandled_signals) printk_ratelimited(regs->msr & MSR_64BIT ? 
fmt64 : fmt32, current->comm, current->pid, "setup_rt_frame", diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 51a3ff78838a..1007fb802e6b 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -747,7 +747,7 @@ int setup_profiling_timer(unsigned int multiplier) #ifdef CONFIG_SCHED_SMT /* cpumask of CPUs with asymetric SMT dependancy */ -static const int powerpc_smt_flags(void) +static int powerpc_smt_flags(void) { int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c index 80561074078d..68468d695f12 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c @@ -1562,7 +1562,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf, goto out; } if (!rma_setup && is_vrma_hpte(v)) { - unsigned long psize = hpte_page_size(v, r); + unsigned long psize = hpte_base_page_size(v, r); unsigned long senc = slb_pgsize_encoding(psize); unsigned long lpcr; diff --git a/arch/powerpc/kvm/book3s_hv_interrupts.S b/arch/powerpc/kvm/book3s_hv_interrupts.S index 8c86422a1e37..731be7478b27 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupts.S +++ b/arch/powerpc/kvm/book3s_hv_interrupts.S @@ -127,11 +127,6 @@ BEGIN_FTR_SECTION stw r10, HSTATE_PMC + 24(r13) stw r11, HSTATE_PMC + 28(r13) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201) -BEGIN_FTR_SECTION - mfspr r9, SPRN_SIER - std r8, HSTATE_MMCR + 40(r13) - std r9, HSTATE_MMCR + 48(r13) -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 31: /* diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c index 6e6224318c36..5a24d3c2b6b8 100644 --- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c +++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c @@ -814,13 +814,10 @@ long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr, unsigned long slb_v, r = hpte[i+1]; /* - * Check the HPTE again, including large page size - * Since we don't currently allow any MPSS (mixed - * page-size segment) page sizes, it is sufficient - * to check against the actual page size. 
+ * Check the HPTE again, including base page size */ if ((v & valid) && (v & mask) == val && - hpte_page_size(v, r) == (1ul << pshift)) + hpte_base_page_size(v, r) == (1ul << pshift)) /* Return with the HPTE still locked */ return (hash << 3) + (i >> 1); diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 868347ef09fd..558a67df8126 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -48,7 +48,7 @@ * * LR = return address to continue at after eventually re-enabling MMU */ -_GLOBAL(kvmppc_hv_entry_trampoline) +_GLOBAL_TOC(kvmppc_hv_entry_trampoline) mflr r0 std r0, PPC_LR_STKOFF(r1) stdu r1, -112(r1) diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S index e2c29e381dc7..d044b8b7c69d 100644 --- a/arch/powerpc/kvm/book3s_interrupts.S +++ b/arch/powerpc/kvm/book3s_interrupts.S @@ -25,7 +25,11 @@ #include <asm/exception-64s.h> #if defined(CONFIG_PPC_BOOK3S_64) +#if defined(_CALL_ELF) && _CALL_ELF == 2 +#define FUNC(name) name +#else #define FUNC(name) GLUE(.,name) +#endif #define GET_SHADOW_VCPU(reg) addi reg, r13, PACA_SVCPU #elif defined(CONFIG_PPC_BOOK3S_32) diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S index 9eec675220e6..16c4d88ba27d 100644 --- a/arch/powerpc/kvm/book3s_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_rmhandlers.S @@ -36,7 +36,11 @@ #if defined(CONFIG_PPC_BOOK3S_64) +#if defined(_CALL_ELF) && _CALL_ELF == 2 +#define FUNC(name) name +#else #define FUNC(name) GLUE(.,name) +#endif #elif defined(CONFIG_PPC_BOOK3S_32) @@ -146,7 +150,7 @@ kvmppc_handler_skip_ins: * On entry, r4 contains the guest shadow MSR * MSR.EE has to be 0 when calling this function */ -_GLOBAL(kvmppc_entry_trampoline) +_GLOBAL_TOC(kvmppc_entry_trampoline) mfmsr r5 LOAD_REG_ADDR(r7, kvmppc_handler_trampoline_enter) toreal(r7) diff --git a/arch/powerpc/kvm/book3s_rtas.c b/arch/powerpc/kvm/book3s_rtas.c index edb14ba992b3..ef27fbd5d9c5 100644 --- a/arch/powerpc/kvm/book3s_rtas.c +++ b/arch/powerpc/kvm/book3s_rtas.c @@ -23,20 +23,20 @@ static void kvm_rtas_set_xive(struct kvm_vcpu *vcpu, struct rtas_args *args) u32 irq, server, priority; int rc; - if (args->nargs != 3 || args->nret != 1) { + if (be32_to_cpu(args->nargs) != 3 || be32_to_cpu(args->nret) != 1) { rc = -3; goto out; } - irq = args->args[0]; - server = args->args[1]; - priority = args->args[2]; + irq = be32_to_cpu(args->args[0]); + server = be32_to_cpu(args->args[1]); + priority = be32_to_cpu(args->args[2]); rc = kvmppc_xics_set_xive(vcpu->kvm, irq, server, priority); if (rc) rc = -3; out: - args->rets[0] = rc; + args->rets[0] = cpu_to_be32(rc); } static void kvm_rtas_get_xive(struct kvm_vcpu *vcpu, struct rtas_args *args) @@ -44,12 +44,12 @@ static void kvm_rtas_get_xive(struct kvm_vcpu *vcpu, struct rtas_args *args) u32 irq, server, priority; int rc; - if (args->nargs != 1 || args->nret != 3) { + if (be32_to_cpu(args->nargs) != 1 || be32_to_cpu(args->nret) != 3) { rc = -3; goto out; } - irq = args->args[0]; + irq = be32_to_cpu(args->args[0]); server = priority = 0; rc = kvmppc_xics_get_xive(vcpu->kvm, irq, &server, &priority); @@ -58,10 +58,10 @@ static void kvm_rtas_get_xive(struct kvm_vcpu *vcpu, struct rtas_args *args) goto out; } - args->rets[1] = server; - args->rets[2] = priority; + args->rets[1] = cpu_to_be32(server); + args->rets[2] = cpu_to_be32(priority); out: - args->rets[0] = rc; + args->rets[0] = cpu_to_be32(rc); } static void kvm_rtas_int_off(struct kvm_vcpu 
*vcpu, struct rtas_args *args) @@ -69,18 +69,18 @@ static void kvm_rtas_int_off(struct kvm_vcpu *vcpu, struct rtas_args *args) u32 irq; int rc; - if (args->nargs != 1 || args->nret != 1) { + if (be32_to_cpu(args->nargs) != 1 || be32_to_cpu(args->nret) != 1) { rc = -3; goto out; } - irq = args->args[0]; + irq = be32_to_cpu(args->args[0]); rc = kvmppc_xics_int_off(vcpu->kvm, irq); if (rc) rc = -3; out: - args->rets[0] = rc; + args->rets[0] = cpu_to_be32(rc); } static void kvm_rtas_int_on(struct kvm_vcpu *vcpu, struct rtas_args *args) @@ -88,18 +88,18 @@ static void kvm_rtas_int_on(struct kvm_vcpu *vcpu, struct rtas_args *args) u32 irq; int rc; - if (args->nargs != 1 || args->nret != 1) { + if (be32_to_cpu(args->nargs) != 1 || be32_to_cpu(args->nret) != 1) { rc = -3; goto out; } - irq = args->args[0]; + irq = be32_to_cpu(args->args[0]); rc = kvmppc_xics_int_on(vcpu->kvm, irq); if (rc) rc = -3; out: - args->rets[0] = rc; + args->rets[0] = cpu_to_be32(rc); } #endif /* CONFIG_KVM_XICS */ @@ -205,32 +205,6 @@ int kvm_vm_ioctl_rtas_define_token(struct kvm *kvm, void __user *argp) return rc; } -static void kvmppc_rtas_swap_endian_in(struct rtas_args *args) -{ -#ifdef __LITTLE_ENDIAN__ - int i; - - args->token = be32_to_cpu(args->token); - args->nargs = be32_to_cpu(args->nargs); - args->nret = be32_to_cpu(args->nret); - for (i = 0; i < args->nargs; i++) - args->args[i] = be32_to_cpu(args->args[i]); -#endif -} - -static void kvmppc_rtas_swap_endian_out(struct rtas_args *args) -{ -#ifdef __LITTLE_ENDIAN__ - int i; - - for (i = 0; i < args->nret; i++) - args->args[i] = cpu_to_be32(args->args[i]); - args->token = cpu_to_be32(args->token); - args->nargs = cpu_to_be32(args->nargs); - args->nret = cpu_to_be32(args->nret); -#endif -} - int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu) { struct rtas_token_definition *d; @@ -249,8 +223,6 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu) if (rc) goto fail; - kvmppc_rtas_swap_endian_in(&args); - /* * args->rets is a pointer into args->args. Now that we've * copied args we need to fix it up to point into our copy, @@ -258,13 +230,13 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu) * value so we can restore it on the way out. 
*/ orig_rets = args.rets; - args.rets = &args.args[args.nargs]; + args.rets = &args.args[be32_to_cpu(args.nargs)]; mutex_lock(&vcpu->kvm->lock); rc = -ENOENT; list_for_each_entry(d, &vcpu->kvm->arch.rtas_tokens, list) { - if (d->token == args.token) { + if (d->token == be32_to_cpu(args.token)) { d->handler->handler(vcpu, &args); rc = 0; break; @@ -275,7 +247,6 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu) if (rc == 0) { args.rets = orig_rets; - kvmppc_rtas_swap_endian_out(&args); rc = kvm_write_guest(vcpu->kvm, args_phys, &args, sizeof(args)); if (rc) goto fail; diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c index dd2cc03f406f..86903d3f5a03 100644 --- a/arch/powerpc/kvm/e500_mmu_host.c +++ b/arch/powerpc/kvm/e500_mmu_host.c @@ -473,7 +473,8 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500, if (printk_ratelimit()) pr_err("%s: pte not present: gfn %lx, pfn %lx\n", __func__, (long)gfn, pfn); - return -EINVAL; + ret = -EINVAL; + goto out; } kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg); diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S index 0738f96befbf..43435c6892fb 100644 --- a/arch/powerpc/lib/mem_64.S +++ b/arch/powerpc/lib/mem_64.S @@ -77,7 +77,7 @@ _GLOBAL(memset) stb r4,0(r6) blr -_GLOBAL(memmove) +_GLOBAL_TOC(memmove) cmplw 0,r3,r4 bgt backwards_memcpy b memcpy diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c index 412dd46dd0b7..5c09f365c842 100644 --- a/arch/powerpc/lib/sstep.c +++ b/arch/powerpc/lib/sstep.c @@ -1198,7 +1198,7 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr) sh = regs->gpr[rb] & 0x3f; ival = (signed int) regs->gpr[rd]; regs->gpr[ra] = ival >> (sh < 32 ? sh : 31); - if (ival < 0 && (sh >= 32 || (ival & ((1 << sh) - 1)) != 0)) + if (ival < 0 && (sh >= 32 || (ival & ((1ul << sh) - 1)) != 0)) regs->xer |= XER_CA; else regs->xer &= ~XER_CA; @@ -1208,7 +1208,7 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr) sh = rb; ival = (signed int) regs->gpr[rd]; regs->gpr[ra] = ival >> sh; - if (ival < 0 && (ival & ((1 << sh) - 1)) != 0) + if (ival < 0 && (ival & ((1ul << sh) - 1)) != 0) regs->xer |= XER_CA; else regs->xer &= ~XER_CA; @@ -1216,7 +1216,7 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr) #ifdef __powerpc64__ case 27: /* sld */ - sh = regs->gpr[rd] & 0x7f; + sh = regs->gpr[rb] & 0x7f; if (sh < 64) regs->gpr[ra] = regs->gpr[rd] << sh; else @@ -1235,7 +1235,7 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr) sh = regs->gpr[rb] & 0x7f; ival = (signed long int) regs->gpr[rd]; regs->gpr[ra] = ival >> (sh < 64 ? 
sh : 63); - if (ival < 0 && (sh >= 64 || (ival & ((1 << sh) - 1)) != 0)) + if (ival < 0 && (sh >= 64 || (ival & ((1ul << sh) - 1)) != 0)) regs->xer |= XER_CA; else regs->xer &= ~XER_CA; @@ -1246,7 +1246,7 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr) sh = rb | ((instr & 2) << 4); ival = (signed long int) regs->gpr[rd]; regs->gpr[ra] = ival >> sh; - if (ival < 0 && (ival & ((1 << sh) - 1)) != 0) + if (ival < 0 && (ival & ((1ul << sh) - 1)) != 0) regs->xer |= XER_CA; else regs->xer &= ~XER_CA; diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c index af3d78e19302..928ebe79668b 100644 --- a/arch/powerpc/mm/mmu_context_nohash.c +++ b/arch/powerpc/mm/mmu_context_nohash.c @@ -410,17 +410,7 @@ void __init mmu_context_init(void) } else if (mmu_has_feature(MMU_FTR_TYPE_47x)) { first_context = 1; last_context = 65535; - } else -#ifdef CONFIG_PPC_BOOK3E_MMU - if (mmu_has_feature(MMU_FTR_TYPE_3E)) { - u32 mmucfg = mfspr(SPRN_MMUCFG); - u32 pid_bits = (mmucfg & MMUCFG_PIDSIZE_MASK) - >> MMUCFG_PIDSIZE_SHIFT; - first_context = 1; - last_context = (1UL << (pid_bits + 1)) - 1; - } else -#endif - { + } else { first_context = 1; last_context = 255; } diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c index 6dcdadefd8d0..82e82cadcde5 100644 --- a/arch/powerpc/net/bpf_jit_comp.c +++ b/arch/powerpc/net/bpf_jit_comp.c @@ -390,12 +390,16 @@ static int bpf_jit_build_body(struct sk_filter *fp, u32 *image, case BPF_ANC | SKF_AD_VLAN_TAG: case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, vlan_tci) != 2); + BUILD_BUG_ON(VLAN_TAG_PRESENT != 0x1000); + PPC_LHZ_OFFS(r_A, r_skb, offsetof(struct sk_buff, vlan_tci)); - if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) - PPC_ANDI(r_A, r_A, VLAN_VID_MASK); - else + if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) { + PPC_ANDI(r_A, r_A, ~VLAN_TAG_PRESENT); + } else { PPC_ANDI(r_A, r_A, VLAN_TAG_PRESENT); + PPC_SRWI(r_A, r_A, 12); + } break; case BPF_ANC | SKF_AD_QUEUE: BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c index 4520c9356b54..6b0641c3f03f 100644 --- a/arch/powerpc/perf/core-book3s.c +++ b/arch/powerpc/perf/core-book3s.c @@ -485,7 +485,7 @@ static bool is_ebb_event(struct perf_event *event) * check that the PMU supports EBB, meaning those that don't can still * use bit 63 of the event code for something else if they wish. */ - return (ppmu->flags & PPMU_EBB) && + return (ppmu->flags & PPMU_ARCH_207S) && ((event->attr.config >> PERF_EVENT_CONFIG_EBB_SHIFT) & 1); } @@ -777,7 +777,7 @@ void perf_event_print_debug(void) if (ppmu->flags & PPMU_HAS_SIER) sier = mfspr(SPRN_SIER); - if (ppmu->flags & PPMU_EBB) { + if (ppmu->flags & PPMU_ARCH_207S) { pr_info("MMCR2: %016lx EBBHR: %016lx\n", mfspr(SPRN_MMCR2), mfspr(SPRN_EBBHR)); pr_info("EBBRR: %016lx BESCR: %016lx\n", @@ -996,7 +996,22 @@ static void power_pmu_read(struct perf_event *event) } while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev); local64_add(delta, &event->count); - local64_sub(delta, &event->hw.period_left); + + /* + * A number of places program the PMC with (0x80000000 - period_left). + * We never want period_left to be less than 1 because we will program + * the PMC with a value >= 0x800000000 and an edge detected PMC will + * roll around to 0 before taking an exception. We have seen this + * on POWER8. + * + * To fix this, clamp the minimum value of period_left to 1. 
+ */ + do { + prev = local64_read(&event->hw.period_left); + val = prev - delta; + if (val < 1) + val = 1; + } while (local64_cmpxchg(&event->hw.period_left, prev, val) != prev); } /* @@ -1300,6 +1315,9 @@ static void power_pmu_enable(struct pmu *pmu) write_mmcr0(cpuhw, mmcr0); + if (ppmu->flags & PPMU_ARCH_207S) + mtspr(SPRN_MMCR2, 0); + /* * Enable instruction sampling if necessary */ @@ -1696,7 +1714,7 @@ static int power_pmu_event_init(struct perf_event *event) if (has_branch_stack(event)) { /* PMU has BHRB enabled */ - if (!(ppmu->flags & PPMU_BHRB)) + if (!(ppmu->flags & PPMU_ARCH_207S)) return -EOPNOTSUPP; } diff --git a/arch/powerpc/perf/power8-pmu.c b/arch/powerpc/perf/power8-pmu.c index fe2763b6e039..639cd9156585 100644 --- a/arch/powerpc/perf/power8-pmu.c +++ b/arch/powerpc/perf/power8-pmu.c @@ -792,7 +792,7 @@ static struct power_pmu power8_pmu = { .get_constraint = power8_get_constraint, .get_alternatives = power8_get_alternatives, .disable_pmc = power8_disable_pmc, - .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_BHRB | PPMU_EBB, + .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_ARCH_207S, .n_generic = ARRAY_SIZE(power8_generic_events), .generic_events = power8_generic_events, .cache_events = &power8_cache_events, diff --git a/arch/powerpc/platforms/cell/cbe_thermal.c b/arch/powerpc/platforms/cell/cbe_thermal.c index 94560db788bf..2c15ff094483 100644 --- a/arch/powerpc/platforms/cell/cbe_thermal.c +++ b/arch/powerpc/platforms/cell/cbe_thermal.c @@ -125,7 +125,7 @@ static ssize_t show_throttle(struct cbe_pmd_regs __iomem *pmd_regs, char *buf, i static ssize_t store_throttle(struct cbe_pmd_regs __iomem *pmd_regs, const char *buf, size_t size, int pos) { u64 reg_value; - int temp; + unsigned int temp; u64 new_value; int ret; diff --git a/arch/powerpc/platforms/cell/spu_syscalls.c b/arch/powerpc/platforms/cell/spu_syscalls.c index 38e0a1a5cec3..5e6e0bad6db6 100644 --- a/arch/powerpc/platforms/cell/spu_syscalls.c +++ b/arch/powerpc/platforms/cell/spu_syscalls.c @@ -111,6 +111,7 @@ asmlinkage long sys_spu_run(int fd, __u32 __user *unpc, __u32 __user *ustatus) return ret; } +#ifdef CONFIG_COREDUMP int elf_coredump_extra_notes_size(void) { struct spufs_calls *calls; @@ -142,6 +143,7 @@ int elf_coredump_extra_notes_write(struct coredump_params *cprm) return ret; } +#endif void notify_spus_active(void) { diff --git a/arch/powerpc/platforms/cell/spufs/Makefile b/arch/powerpc/platforms/cell/spufs/Makefile index b9d5d678aa44..52a7d2596d30 100644 --- a/arch/powerpc/platforms/cell/spufs/Makefile +++ b/arch/powerpc/platforms/cell/spufs/Makefile @@ -1,8 +1,9 @@ obj-$(CONFIG_SPU_FS) += spufs.o -spufs-y += inode.o file.o context.o syscalls.o coredump.o +spufs-y += inode.o file.o context.o syscalls.o spufs-y += sched.o backing_ops.o hw_ops.o run.o gang.o spufs-y += switch.o fault.o lscsa_alloc.o +spufs-$(CONFIG_COREDUMP) += coredump.o # magic for the trace events CFLAGS_sched.o := -I$(src) diff --git a/arch/powerpc/platforms/cell/spufs/syscalls.c b/arch/powerpc/platforms/cell/spufs/syscalls.c index b045fdda4845..a87200a535fa 100644 --- a/arch/powerpc/platforms/cell/spufs/syscalls.c +++ b/arch/powerpc/platforms/cell/spufs/syscalls.c @@ -79,8 +79,10 @@ static long do_spu_create(const char __user *pathname, unsigned int flags, struct spufs_calls spufs_calls = { .create_thread = do_spu_create, .spu_run = do_spu_run, - .coredump_extra_notes_size = spufs_coredump_extra_notes_size, - .coredump_extra_notes_write = spufs_coredump_extra_notes_write, .notify_spus_active = do_notify_spus_active, .owner = 
THIS_MODULE, +#ifdef CONFIG_COREDUMP + .coredump_extra_notes_size = spufs_coredump_extra_notes_size, + .coredump_extra_notes_write = spufs_coredump_extra_notes_write, +#endif }; diff --git a/arch/powerpc/platforms/powernv/Makefile b/arch/powerpc/platforms/powernv/Makefile index d55891f89a2c..4ad227d04c1a 100644 --- a/arch/powerpc/platforms/powernv/Makefile +++ b/arch/powerpc/platforms/powernv/Makefile @@ -1,4 +1,4 @@ -obj-y += setup.o opal-takeover.o opal-wrappers.o opal.o opal-async.o +obj-y += setup.o opal-wrappers.o opal.o opal-async.o obj-y += opal-rtc.o opal-nvram.o opal-lpc.o opal-flash.o obj-y += rng.o opal-elog.o opal-dump.o opal-sysparam.o opal-sensor.o obj-y += opal-msglog.o diff --git a/arch/powerpc/platforms/powernv/opal-takeover.S b/arch/powerpc/platforms/powernv/opal-takeover.S deleted file mode 100644 index 11a3169ee583..000000000000 --- a/arch/powerpc/platforms/powernv/opal-takeover.S +++ /dev/null @@ -1,140 +0,0 @@ -/* - * PowerNV OPAL takeover assembly code, for use by prom_init.c - * - * Copyright 2011 IBM Corp. - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; either version - * 2 of the License, or (at your option) any later version. - */ - -#include <asm/ppc_asm.h> -#include <asm/hvcall.h> -#include <asm/asm-offsets.h> -#include <asm/opal.h> - -#define H_HAL_TAKEOVER 0x5124 -#define H_HAL_TAKEOVER_QUERY_MAGIC -1 - - .text -_GLOBAL(opal_query_takeover) - mfcr r0 - stw r0,8(r1) - stdu r1,-STACKFRAMESIZE(r1) - std r3,STK_PARAM(R3)(r1) - std r4,STK_PARAM(R4)(r1) - li r3,H_HAL_TAKEOVER - li r4,H_HAL_TAKEOVER_QUERY_MAGIC - HVSC - addi r1,r1,STACKFRAMESIZE - ld r10,STK_PARAM(R3)(r1) - std r4,0(r10) - ld r10,STK_PARAM(R4)(r1) - std r5,0(r10) - lwz r0,8(r1) - mtcrf 0xff,r0 - blr - -_GLOBAL(opal_do_takeover) - mfcr r0 - stw r0,8(r1) - mflr r0 - std r0,16(r1) - bl __opal_do_takeover - ld r0,16(r1) - mtlr r0 - lwz r0,8(r1) - mtcrf 0xff,r0 - blr - -__opal_do_takeover: - ld r4,0(r3) - ld r5,0x8(r3) - ld r6,0x10(r3) - ld r7,0x18(r3) - ld r8,0x20(r3) - ld r9,0x28(r3) - ld r10,0x30(r3) - ld r11,0x38(r3) - li r3,H_HAL_TAKEOVER - HVSC - blr - - .globl opal_secondary_entry -opal_secondary_entry: - mr r31,r3 - mfmsr r11 - li r12,(MSR_SF | MSR_ISF)@highest - sldi r12,r12,48 - or r11,r11,r12 - mtmsrd r11 - isync - mfspr r4,SPRN_PIR - std r4,0(r3) -1: HMT_LOW - ld r4,8(r3) - cmpli cr0,r4,0 - beq 1b - HMT_MEDIUM -1: addi r3,r31,16 - bl __opal_do_takeover - b 1b - -_GLOBAL(opal_enter_rtas) - mflr r0 - std r0,16(r1) - stdu r1,-PROM_FRAME_SIZE(r1) /* Save SP and create stack space */ - - /* Because PROM is running in 32b mode, it clobbers the high order half - * of all registers that it saves. We therefore save those registers - * PROM might touch to the stack. (r0, r3-r13 are caller saved) - */ - SAVE_GPR(2, r1) - SAVE_GPR(13, r1) - SAVE_8GPRS(14, r1) - SAVE_10GPRS(22, r1) - mfcr r10 - mfmsr r11 - std r10,_CCR(r1) - std r11,_MSR(r1) - - /* Get the PROM entrypoint */ - mtlr r5 - - /* Switch MSR to 32 bits mode - */ - li r12,1 - rldicr r12,r12,MSR_SF_LG,(63-MSR_SF_LG) - andc r11,r11,r12 - li r12,1 - rldicr r12,r12,MSR_ISF_LG,(63-MSR_ISF_LG) - andc r11,r11,r12 - mtmsrd r11 - isync - - /* Enter RTAS here... 
*/ - blrl - - /* Just make sure that r1 top 32 bits didn't get - * corrupt by OF - */ - rldicl r1,r1,0,32 - - /* Restore the MSR (back to 64 bits) */ - ld r0,_MSR(r1) - MTMSRD(r0) - isync - - /* Restore other registers */ - REST_GPR(2, r1) - REST_GPR(13, r1) - REST_8GPRS(14, r1) - REST_10GPRS(22, r1) - ld r4,_CCR(r1) - mtcr r4 - - addi r1,r1,PROM_FRAME_SIZE - ld r0,16(r1) - mtlr r0 - blr diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c index 022b38e6a80b..2d0b4d68a40a 100644 --- a/arch/powerpc/platforms/pseries/dlpar.c +++ b/arch/powerpc/platforms/pseries/dlpar.c @@ -86,6 +86,7 @@ static struct device_node *dlpar_parse_cc_node(struct cc_workarea *ccwa, } of_node_set_flag(dn, OF_DYNAMIC); + of_node_init(dn); return dn; } diff --git a/arch/powerpc/platforms/pseries/reconfig.c b/arch/powerpc/platforms/pseries/reconfig.c index 0435bb65d0aa..1c0a60d98867 100644 --- a/arch/powerpc/platforms/pseries/reconfig.c +++ b/arch/powerpc/platforms/pseries/reconfig.c @@ -69,6 +69,7 @@ static int pSeries_reconfig_add_node(const char *path, struct property *proplist np->properties = proplist; of_node_set_flag(np, OF_DYNAMIC); + of_node_init(np); np->parent = derive_parent(path); if (IS_ERR(np->parent)) { diff --git a/arch/powerpc/sysdev/dart_iommu.c b/arch/powerpc/sysdev/dart_iommu.c index 62c47bb76517..9e5353ff6d1b 100644 --- a/arch/powerpc/sysdev/dart_iommu.c +++ b/arch/powerpc/sysdev/dart_iommu.c @@ -476,6 +476,11 @@ void __init alloc_dart_table(void) */ dart_tablebase = (unsigned long) __va(memblock_alloc_base(1UL<<24, 1UL<<24, 0x80000000L)); + /* + * The DART space is later unmapped from the kernel linear mapping and + * accessing dart_tablebase during kmemleak scanning will fault. + */ + kmemleak_no_scan((void *)dart_tablebase); printk(KERN_INFO "DART table allocated at: %lx\n", dart_tablebase); } diff --git a/arch/s390/include/asm/switch_to.h b/arch/s390/include/asm/switch_to.h index df38c70cd59e..18ea9e3f8142 100644 --- a/arch/s390/include/asm/switch_to.h +++ b/arch/s390/include/asm/switch_to.h @@ -51,8 +51,8 @@ static inline int restore_fp_ctl(u32 *fpc) return 0; asm volatile( - "0: lfpc %1\n" - " la %0,0\n" + " lfpc %1\n" + "0: la %0,0\n" "1:\n" EX_TABLE(0b,1b) : "=d" (rc) : "Q" (*fpc), "0" (-EINVAL)); diff --git a/arch/s390/include/uapi/asm/Kbuild b/arch/s390/include/uapi/asm/Kbuild index 0e2b54db82bc..08fe6dad9026 100644 --- a/arch/s390/include/uapi/asm/Kbuild +++ b/arch/s390/include/uapi/asm/Kbuild @@ -37,6 +37,7 @@ header-y += signal.h header-y += socket.h header-y += sockios.h header-y += sclp_ctl.h +header-y += sie.h header-y += stat.h header-y += statfs.h header-y += swab.h diff --git a/arch/s390/include/uapi/asm/sie.h b/arch/s390/include/uapi/asm/sie.h index 3d97f610198d..5d9cc19462c4 100644 --- a/arch/s390/include/uapi/asm/sie.h +++ b/arch/s390/include/uapi/asm/sie.h @@ -1,8 +1,6 @@ #ifndef _UAPI_ASM_S390_SIE_H #define _UAPI_ASM_S390_SIE_H -#include <asm/sigp.h> - #define diagnose_codes \ { 0x10, "DIAG (0x10) release pages" }, \ { 0x44, "DIAG (0x44) time slice end" }, \ @@ -13,18 +11,18 @@ { 0x500, "DIAG (0x500) KVM virtio functions" }, \ { 0x501, "DIAG (0x501) KVM breakpoint" } -#define sigp_order_codes \ - { SIGP_SENSE, "SIGP sense" }, \ - { SIGP_EXTERNAL_CALL, "SIGP external call" }, \ - { SIGP_EMERGENCY_SIGNAL, "SIGP emergency signal" }, \ - { SIGP_STOP, "SIGP stop" }, \ - { SIGP_STOP_AND_STORE_STATUS, "SIGP stop and store status" }, \ - { SIGP_SET_ARCHITECTURE, "SIGP set architecture" }, \ - { SIGP_SET_PREFIX, "SIGP set prefix" }, \ - { 
SIGP_SENSE_RUNNING, "SIGP sense running" }, \ - { SIGP_RESTART, "SIGP restart" }, \ - { SIGP_INITIAL_CPU_RESET, "SIGP initial cpu reset" }, \ - { SIGP_STORE_STATUS_AT_ADDRESS, "SIGP store status at address" } +#define sigp_order_codes \ + { 0x01, "SIGP sense" }, \ + { 0x02, "SIGP external call" }, \ + { 0x03, "SIGP emergency signal" }, \ + { 0x05, "SIGP stop" }, \ + { 0x06, "SIGP restart" }, \ + { 0x09, "SIGP stop and store status" }, \ + { 0x0b, "SIGP initial cpu reset" }, \ + { 0x0d, "SIGP set prefix" }, \ + { 0x0e, "SIGP store status at address" }, \ + { 0x12, "SIGP set architecture" }, \ + { 0x15, "SIGP sense running" } #define icpt_prog_codes \ { 0x0001, "Prog Operation" }, \ diff --git a/arch/s390/kernel/head.S b/arch/s390/kernel/head.S index 7ba7d6784510..e88d35d74950 100644 --- a/arch/s390/kernel/head.S +++ b/arch/s390/kernel/head.S @@ -437,11 +437,11 @@ ENTRY(startup_kdump) #if defined(CONFIG_64BIT) #if defined(CONFIG_MARCH_ZEC12) - .long 3, 0xc100efea, 0xf46ce800, 0x00400000 + .long 3, 0xc100eff2, 0xf46ce800, 0x00400000 #elif defined(CONFIG_MARCH_Z196) - .long 2, 0xc100efea, 0xf46c0000 + .long 2, 0xc100eff2, 0xf46c0000 #elif defined(CONFIG_MARCH_Z10) - .long 2, 0xc100efea, 0xf0680000 + .long 2, 0xc100eff2, 0xf0680000 #elif defined(CONFIG_MARCH_Z9_109) .long 1, 0xc100efc2 #elif defined(CONFIG_MARCH_Z990) diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c index 2d716734b5b1..5dc7ad9e2fbf 100644 --- a/arch/s390/kernel/ptrace.c +++ b/arch/s390/kernel/ptrace.c @@ -334,9 +334,14 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data) unsigned long mask = PSW_MASK_USER; mask |= is_ri_task(child) ? PSW_MASK_RI : 0; - if ((data & ~mask) != PSW_USER_BITS) + if ((data ^ PSW_USER_BITS) & ~mask) + /* Invalid psw mask. */ + return -EINVAL; + if ((data & PSW_MASK_ASC) == PSW_ASC_HOME) + /* Invalid address-space-control bits */ return -EINVAL; if ((data & PSW_MASK_EA) && !(data & PSW_MASK_BA)) + /* Invalid addressing mode bits */ return -EINVAL; } *(addr_t *)((addr_t) &task_pt_regs(child)->psw + addr) = data; @@ -672,9 +677,12 @@ static int __poke_user_compat(struct task_struct *child, mask |= is_ri_task(child) ? PSW32_MASK_RI : 0; /* Build a 64 bit psw mask from 31 bit mask. */ - if ((tmp & ~mask) != PSW32_USER_BITS) + if ((tmp ^ PSW32_USER_BITS) & ~mask) /* Invalid psw mask. 
*/ return -EINVAL; + if ((data & PSW32_MASK_ASC) == PSW32_ASC_HOME) + /* Invalid address-space-control bits */ + return -EINVAL; regs->psw.mask = (regs->psw.mask & ~PSW_MASK_USER) | (regs->psw.mask & PSW_MASK_BA) | (__u64)(tmp & mask) << 32; diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c index 9ddc51eeb8d6..30de42730b2f 100644 --- a/arch/s390/pci/pci.c +++ b/arch/s390/pci/pci.c @@ -48,13 +48,10 @@ static LIST_HEAD(zpci_list); static DEFINE_SPINLOCK(zpci_list_lock); -static void zpci_enable_irq(struct irq_data *data); -static void zpci_disable_irq(struct irq_data *data); - static struct irq_chip zpci_irq_chip = { .name = "zPCI", - .irq_unmask = zpci_enable_irq, - .irq_mask = zpci_disable_irq, + .irq_unmask = unmask_msi_irq, + .irq_mask = mask_msi_irq, }; static DECLARE_BITMAP(zpci_domain, ZPCI_NR_DEVICES); @@ -244,43 +241,6 @@ static int zpci_cfg_store(struct zpci_dev *zdev, int offset, u32 val, u8 len) return rc; } -static int zpci_msi_set_mask_bits(struct msi_desc *msi, u32 mask, u32 flag) -{ - int offset, pos; - u32 mask_bits; - - if (msi->msi_attrib.is_msix) { - offset = msi->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE + - PCI_MSIX_ENTRY_VECTOR_CTRL; - msi->masked = readl(msi->mask_base + offset); - writel(flag, msi->mask_base + offset); - } else if (msi->msi_attrib.maskbit) { - pos = (long) msi->mask_base; - pci_read_config_dword(msi->dev, pos, &mask_bits); - mask_bits &= ~(mask); - mask_bits |= flag & mask; - pci_write_config_dword(msi->dev, pos, mask_bits); - } else - return 0; - - msi->msi_attrib.maskbit = !!flag; - return 1; -} - -static void zpci_enable_irq(struct irq_data *data) -{ - struct msi_desc *msi = irq_get_msi_desc(data->irq); - - zpci_msi_set_mask_bits(msi, 1, 0); -} - -static void zpci_disable_irq(struct irq_data *data) -{ - struct msi_desc *msi = irq_get_msi_desc(data->irq); - - zpci_msi_set_mask_bits(msi, 1, 1); -} - void pcibios_fixup_bus(struct pci_bus *bus) { } @@ -487,7 +447,10 @@ void arch_teardown_msi_irqs(struct pci_dev *pdev) /* Release MSI interrupts */ list_for_each_entry(msi, &pdev->msi_list, list) { - zpci_msi_set_mask_bits(msi, 1, 1); + if (msi->msi_attrib.is_msix) + default_msix_mask_irq(msi, 1); + else + default_msi_mask_irq(msi, 1, 1); irq_set_msi_desc(msi->irq, NULL); irq_free_desc(msi->irq); msi->msg.address_lo = 0; diff --git a/arch/sh/Makefile b/arch/sh/Makefile index d4d16e4be07c..bf5b3f5f4962 100644 --- a/arch/sh/Makefile +++ b/arch/sh/Makefile @@ -32,7 +32,8 @@ endif cflags-$(CONFIG_CPU_SH2) := $(call cc-option,-m2,) cflags-$(CONFIG_CPU_SH2A) += $(call cc-option,-m2a,) \ - $(call cc-option,-m2a-nofpu,) + $(call cc-option,-m2a-nofpu,) \ + $(call cc-option,-m4-nofpu,) cflags-$(CONFIG_CPU_SH3) := $(call cc-option,-m3,) cflags-$(CONFIG_CPU_SH4) := $(call cc-option,-m4,) \ $(call cc-option,-mno-implicit-fp,-m4-nofpu) diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig index 29f2e988c56a..407c87d9879a 100644 --- a/arch/sparc/Kconfig +++ b/arch/sparc/Kconfig @@ -78,6 +78,7 @@ config SPARC64 select HAVE_C_RECORDMCOUNT select NO_BOOTMEM select HAVE_ARCH_AUDITSYSCALL + select ARCH_SUPPORTS_ATOMIC_RMW config ARCH_DEFCONFIG string diff --git a/arch/sparc/include/asm/irq_64.h b/arch/sparc/include/asm/irq_64.h index 375cffcf7dbd..91d219381306 100644 --- a/arch/sparc/include/asm/irq_64.h +++ b/arch/sparc/include/asm/irq_64.h @@ -89,7 +89,7 @@ static inline unsigned long get_softint(void) return retval; } -void arch_trigger_all_cpu_backtrace(void); +void arch_trigger_all_cpu_backtrace(bool); #define arch_trigger_all_cpu_backtrace 
arch_trigger_all_cpu_backtrace extern void *hardirq_stack[NR_CPUS]; diff --git a/arch/sparc/include/uapi/asm/unistd.h b/arch/sparc/include/uapi/asm/unistd.h index b73274fb961a..42f2bca1d338 100644 --- a/arch/sparc/include/uapi/asm/unistd.h +++ b/arch/sparc/include/uapi/asm/unistd.h @@ -410,8 +410,9 @@ #define __NR_finit_module 342 #define __NR_sched_setattr 343 #define __NR_sched_getattr 344 +#define __NR_renameat2 345 -#define NR_syscalls 345 +#define NR_syscalls 346 /* Bitmask values returned from kern_features system call. */ #define KERN_FEATURE_MIXED_MODE_STACK 0x00000001 diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c index b2988f25e230..027e09986194 100644 --- a/arch/sparc/kernel/process_64.c +++ b/arch/sparc/kernel/process_64.c @@ -239,7 +239,7 @@ static void __global_reg_poll(struct global_reg_snapshot *gp) } } -void arch_trigger_all_cpu_backtrace(void) +void arch_trigger_all_cpu_backtrace(bool include_self) { struct thread_info *tp = current_thread_info(); struct pt_regs *regs = get_irq_regs(); @@ -251,16 +251,22 @@ void arch_trigger_all_cpu_backtrace(void) spin_lock_irqsave(&global_cpu_snapshot_lock, flags); - memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot)); - this_cpu = raw_smp_processor_id(); - __global_reg_self(tp, regs, this_cpu); + memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot)); + + if (include_self) + __global_reg_self(tp, regs, this_cpu); smp_fetch_global_regs(); for_each_online_cpu(cpu) { - struct global_reg_snapshot *gp = &global_cpu_snapshot[cpu].reg; + struct global_reg_snapshot *gp; + + if (!include_self && cpu == this_cpu) + continue; + + gp = &global_cpu_snapshot[cpu].reg; __global_reg_poll(gp); @@ -292,7 +298,7 @@ void arch_trigger_all_cpu_backtrace(void) static void sysrq_handle_globreg(int key) { - arch_trigger_all_cpu_backtrace(); + arch_trigger_all_cpu_backtrace(true); } static struct sysrq_key_op sparc_globalreg_op = { diff --git a/arch/sparc/kernel/sys32.S b/arch/sparc/kernel/sys32.S index d066eb18650c..f834224208ed 100644 --- a/arch/sparc/kernel/sys32.S +++ b/arch/sparc/kernel/sys32.S @@ -48,6 +48,7 @@ SIGN1(sys32_futex, compat_sys_futex, %o1) SIGN1(sys32_recvfrom, compat_sys_recvfrom, %o0) SIGN1(sys32_recvmsg, compat_sys_recvmsg, %o0) SIGN1(sys32_sendmsg, compat_sys_sendmsg, %o0) +SIGN2(sys32_renameat2, sys_renameat2, %o0, %o2) .globl sys32_mmap2 sys32_mmap2: diff --git a/arch/sparc/kernel/systbls_32.S b/arch/sparc/kernel/systbls_32.S index 151ace8766cc..85fe9b1087cd 100644 --- a/arch/sparc/kernel/systbls_32.S +++ b/arch/sparc/kernel/systbls_32.S @@ -86,3 +86,4 @@ sys_call_table: /*330*/ .long sys_fanotify_mark, sys_prlimit64, sys_name_to_handle_at, sys_open_by_handle_at, sys_clock_adjtime /*335*/ .long sys_syncfs, sys_sendmmsg, sys_setns, sys_process_vm_readv, sys_process_vm_writev /*340*/ .long sys_ni_syscall, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr +/*345*/ .long sys_renameat2 diff --git a/arch/sparc/kernel/systbls_64.S b/arch/sparc/kernel/systbls_64.S index 4bd4e2bb26cf..33ecba2826ea 100644 --- a/arch/sparc/kernel/systbls_64.S +++ b/arch/sparc/kernel/systbls_64.S @@ -87,6 +87,7 @@ sys_call_table32: /*330*/ .word compat_sys_fanotify_mark, sys_prlimit64, sys_name_to_handle_at, compat_sys_open_by_handle_at, compat_sys_clock_adjtime .word sys_syncfs, compat_sys_sendmmsg, sys_setns, compat_sys_process_vm_readv, compat_sys_process_vm_writev /*340*/ .word sys_kern_features, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr + .word sys32_renameat2 #endif /* 
CONFIG_COMPAT */ @@ -165,3 +166,4 @@ sys_call_table: /*330*/ .word sys_fanotify_mark, sys_prlimit64, sys_name_to_handle_at, sys_open_by_handle_at, sys_clock_adjtime .word sys_syncfs, sys_sendmmsg, sys_setns, sys_process_vm_readv, sys_process_vm_writev /*340*/ .word sys_kern_features, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr + .word sys_renameat2 diff --git a/arch/um/kernel/tlb.c b/arch/um/kernel/tlb.c index 9472079471bb..f1b3eb14b855 100644 --- a/arch/um/kernel/tlb.c +++ b/arch/um/kernel/tlb.c @@ -12,6 +12,7 @@ #include <mem_user.h> #include <os.h> #include <skas.h> +#include <kern_util.h> struct host_vm_change { struct host_vm_op { @@ -124,6 +125,9 @@ static int add_munmap(unsigned long addr, unsigned long len, struct host_vm_op *last; int ret = 0; + if ((addr >= STUB_START) && (addr < STUB_END)) + return -EINVAL; + if (hvc->index != 0) { last = &hvc->ops[hvc->index - 1]; if ((last->type == MUNMAP) && @@ -283,8 +287,11 @@ void fix_range_common(struct mm_struct *mm, unsigned long start_addr, /* This is not an else because ret is modified above */ if (ret) { printk(KERN_ERR "fix_range_common: failed, killing current " - "process\n"); + "process: %d\n", task_tgid_vnr(current)); + /* We are under mmap_sem, release it such that current can terminate */ + up_write(&current->mm->mmap_sem); force_sig(SIGKILL, current); + do_signal(); } } diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c index 974b87474a99..5678c3571e7c 100644 --- a/arch/um/kernel/trap.c +++ b/arch/um/kernel/trap.c @@ -206,7 +206,7 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user, int is_write = FAULT_WRITE(fi); unsigned long address = FAULT_ADDRESS(fi); - if (regs) + if (!is_user && regs) current->thread.segv_regs = container_of(regs, struct pt_regs, regs); if (!is_user && (address >= start_vm) && (address < end_vm)) { diff --git a/arch/um/os-Linux/skas/process.c b/arch/um/os-Linux/skas/process.c index d531879a4617..908579f2b0ab 100644 --- a/arch/um/os-Linux/skas/process.c +++ b/arch/um/os-Linux/skas/process.c @@ -54,7 +54,7 @@ static int ptrace_dump_regs(int pid) void wait_stub_done(int pid) { - int n, status, err, bad_stop = 0; + int n, status, err; while (1) { CATCH_EINTR(n = waitpid(pid, &status, WUNTRACED | __WALL)); @@ -74,8 +74,6 @@ void wait_stub_done(int pid) if (((1 << WSTOPSIG(status)) & STUB_DONE_MASK) != 0) return; - else - bad_stop = 1; bad_wait: err = ptrace_dump_regs(pid); @@ -85,10 +83,7 @@ bad_wait: printk(UM_KERN_ERR "wait_stub_done : failed to wait for SIGTRAP, " "pid = %d, n = %d, errno = %d, status = 0x%x\n", pid, n, errno, status); - if (bad_stop) - kill(pid, SIGKILL); - else - fatal_sigsegv(); + fatal_sigsegv(); } extern unsigned long current_stub_stack(void); diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index a8f749ef0fdc..d24887b645dc 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -131,6 +131,7 @@ config X86 select HAVE_CC_STACKPROTECTOR select GENERIC_CPU_AUTOPROBE select HAVE_ARCH_AUDITSYSCALL + select ARCH_SUPPORTS_ATOMIC_RMW config INSTRUCTION_DECODER def_bool y diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S index 84c223479e3c..7a6d43a554d7 100644 --- a/arch/x86/boot/header.S +++ b/arch/x86/boot/header.S @@ -91,10 +91,9 @@ bs_die: .section ".bsdata", "a" bugger_off_msg: - .ascii "Direct floppy boot is not supported. 
" - .ascii "Use a boot loader program instead.\r\n" + .ascii "Use a boot loader.\r\n" .ascii "\n" - .ascii "Remove disk and press any key to reboot ...\r\n" + .ascii "Remove disk and press any key to reboot...\r\n" .byte 0 #ifdef CONFIG_EFI_STUB @@ -108,7 +107,7 @@ coff_header: #else .word 0x8664 # x86-64 #endif - .word 3 # nr_sections + .word 4 # nr_sections .long 0 # TimeDateStamp .long 0 # PointerToSymbolTable .long 1 # NumberOfSymbols @@ -250,6 +249,25 @@ section_table: .word 0 # NumberOfLineNumbers .long 0x60500020 # Characteristics (section flags) + # + # The offset & size fields are filled in by build.c. + # + .ascii ".bss" + .byte 0 + .byte 0 + .byte 0 + .byte 0 + .long 0 + .long 0x0 + .long 0 # Size of initialized data + # on disk + .long 0x0 + .long 0 # PointerToRelocations + .long 0 # PointerToLineNumbers + .word 0 # NumberOfRelocations + .word 0 # NumberOfLineNumbers + .long 0xc8000080 # Characteristics (section flags) + #endif /* CONFIG_EFI_STUB */ # Kernel attributes; used by setup. This is part 1 of the diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c index 1a2f2121cada..a7661c430cd9 100644 --- a/arch/x86/boot/tools/build.c +++ b/arch/x86/boot/tools/build.c @@ -143,7 +143,7 @@ static void usage(void) #ifdef CONFIG_EFI_STUB -static void update_pecoff_section_header(char *section_name, u32 offset, u32 size) +static void update_pecoff_section_header_fields(char *section_name, u32 vma, u32 size, u32 datasz, u32 offset) { unsigned int pe_header; unsigned short num_sections; @@ -164,10 +164,10 @@ static void update_pecoff_section_header(char *section_name, u32 offset, u32 siz put_unaligned_le32(size, section + 0x8); /* section header vma field */ - put_unaligned_le32(offset, section + 0xc); + put_unaligned_le32(vma, section + 0xc); /* section header 'size of initialised data' field */ - put_unaligned_le32(size, section + 0x10); + put_unaligned_le32(datasz, section + 0x10); /* section header 'file offset' field */ put_unaligned_le32(offset, section + 0x14); @@ -179,6 +179,11 @@ static void update_pecoff_section_header(char *section_name, u32 offset, u32 siz } } +static void update_pecoff_section_header(char *section_name, u32 offset, u32 size) +{ + update_pecoff_section_header_fields(section_name, offset, size, size, offset); +} + static void update_pecoff_setup_and_reloc(unsigned int size) { u32 setup_offset = 0x200; @@ -203,9 +208,6 @@ static void update_pecoff_text(unsigned int text_start, unsigned int file_sz) pe_header = get_unaligned_le32(&buf[0x3c]); - /* Size of image */ - put_unaligned_le32(file_sz, &buf[pe_header + 0x50]); - /* * Size of code: Subtract the size of the first sector (512 bytes) * which includes the header. 
@@ -220,6 +222,22 @@ static void update_pecoff_text(unsigned int text_start, unsigned int file_sz) update_pecoff_section_header(".text", text_start, text_sz); } +static void update_pecoff_bss(unsigned int file_sz, unsigned int init_sz) +{ + unsigned int pe_header; + unsigned int bss_sz = init_sz - file_sz; + + pe_header = get_unaligned_le32(&buf[0x3c]); + + /* Size of uninitialized data */ + put_unaligned_le32(bss_sz, &buf[pe_header + 0x24]); + + /* Size of image */ + put_unaligned_le32(init_sz, &buf[pe_header + 0x50]); + + update_pecoff_section_header_fields(".bss", file_sz, bss_sz, 0, 0); +} + static int reserve_pecoff_reloc_section(int c) { /* Reserve 0x20 bytes for .reloc section */ @@ -259,6 +277,8 @@ static void efi_stub_entry_update(void) static inline void update_pecoff_setup_and_reloc(unsigned int size) {} static inline void update_pecoff_text(unsigned int text_start, unsigned int file_sz) {} +static inline void update_pecoff_bss(unsigned int file_sz, + unsigned int init_sz) {} static inline void efi_stub_defaults(void) {} static inline void efi_stub_entry_update(void) {} @@ -310,7 +330,7 @@ static void parse_zoffset(char *fname) int main(int argc, char ** argv) { - unsigned int i, sz, setup_sectors; + unsigned int i, sz, setup_sectors, init_sz; int c; u32 sys_size; struct stat sb; @@ -376,7 +396,9 @@ int main(int argc, char ** argv) buf[0x1f1] = setup_sectors-1; put_unaligned_le32(sys_size, &buf[0x1f4]); - update_pecoff_text(setup_sectors * 512, sz + i + ((sys_size * 16) - sz)); + update_pecoff_text(setup_sectors * 512, i + (sys_size * 16)); + init_sz = get_unaligned_le32(&buf[0x260]); + update_pecoff_bss(i + (sys_size * 16), init_sz); efi_stub_entry_update(); diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c index f30cd10293f0..8626b03e83b7 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -141,7 +141,7 @@ static int sha512_ssse3_final(struct shash_desc *desc, u8 *out) /* save number of bits */ bits[1] = cpu_to_be64(sctx->count[0] << 3); - bits[0] = cpu_to_be64(sctx->count[1] << 3) | sctx->count[0] >> 61; + bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61); /* Pad out to 112 mod 128 and append length */ index = sctx->count[0] & 0x7f; diff --git a/arch/x86/include/asm/irq.h b/arch/x86/include/asm/irq.h index cb6cfcd034cf..a80cbb88ea91 100644 --- a/arch/x86/include/asm/irq.h +++ b/arch/x86/include/asm/irq.h @@ -43,7 +43,7 @@ extern int vector_used_by_percpu_irq(unsigned int vector); extern void init_ISA_irqs(void); #ifdef CONFIG_X86_LOCAL_APIC -void arch_trigger_all_cpu_backtrace(void); +void arch_trigger_all_cpu_backtrace(bool); #define arch_trigger_all_cpu_backtrace arch_trigger_all_cpu_backtrace #endif diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 49314155b66c..49205d01b9ad 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -95,7 +95,7 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level) #define KVM_REFILL_PAGES 25 #define KVM_MAX_CPUID_ENTRIES 80 #define KVM_NR_FIXED_MTRR_REGION 88 -#define KVM_NR_VAR_MTRR 8 +#define KVM_NR_VAR_MTRR 10 #define ASYNC_PF_PER_VCPU 64 @@ -461,7 +461,7 @@ struct kvm_vcpu_arch { bool nmi_injected; /* Trying to inject an NMI this entry */ struct mtrr_state_type mtrr_state; - u32 pat; + u64 pat; unsigned switch_db_regs; unsigned long db[KVM_NR_DB_REGS]; diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h index 14fd6fd75a19..6205f0c434db 
100644 --- a/arch/x86/include/asm/ptrace.h +++ b/arch/x86/include/asm/ptrace.h @@ -231,6 +231,22 @@ static inline unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, #define ARCH_HAS_USER_SINGLE_STEP_INFO +/* + * When hitting ptrace_stop(), we cannot return using SYSRET because + * that does not restore the full CPU state, only a minimal set. The + * ptracer can change arbitrary register values, which is usually okay + * because the usual ptrace stops run off the signal delivery path which + * forces IRET; however, ptrace_event() stops happen in arbitrary places + * in the kernel and don't force IRET path. + * + * So force IRET path after a ptrace stop. + */ +#define arch_ptrace_stop_needed(code, info) \ +({ \ + set_thread_flag(TIF_NOTIFY_RESUME); \ + false; \ +}) + struct user_desc; extern int do_get_thread_area(struct task_struct *p, int idx, struct user_desc __user *info); diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c index c3fcb5de5083..6a1e71bde323 100644 --- a/arch/x86/kernel/apic/hw_nmi.c +++ b/arch/x86/kernel/apic/hw_nmi.c @@ -33,31 +33,41 @@ static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly; /* "in progress" flag of arch_trigger_all_cpu_backtrace */ static unsigned long backtrace_flag; -void arch_trigger_all_cpu_backtrace(void) +void arch_trigger_all_cpu_backtrace(bool include_self) { int i; + int cpu = get_cpu(); - if (test_and_set_bit(0, &backtrace_flag)) + if (test_and_set_bit(0, &backtrace_flag)) { /* * If there is already a trigger_all_cpu_backtrace() in progress * (backtrace_flag == 1), don't output double cpu dump infos. */ + put_cpu(); return; + } cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask); + if (!include_self) + cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask)); - printk(KERN_INFO "sending NMI to all CPUs:\n"); - apic->send_IPI_all(NMI_VECTOR); + if (!cpumask_empty(to_cpumask(backtrace_mask))) { + pr_info("sending NMI to %s CPUs:\n", + (include_self ? "all" : "other")); + apic->send_IPI_mask(to_cpumask(backtrace_mask), NMI_VECTOR); + } /* Wait for up to 10 seconds for all CPUs to do the backtrace */ for (i = 0; i < 10 * 1000; i++) { if (cpumask_empty(to_cpumask(backtrace_mask))) break; mdelay(1); + touch_softlockup_watchdog(); } clear_bit(0, &backtrace_flag); smp_mb__after_atomic(); + put_cpu(); } static int diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c index f3a1f04ed4cb..584874451414 100644 --- a/arch/x86/kernel/apm_32.c +++ b/arch/x86/kernel/apm_32.c @@ -841,7 +841,6 @@ static int apm_do_idle(void) u32 eax; u8 ret = 0; int idled = 0; - int polling; int err = 0; if (!need_resched()) { diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c index a80029035bf2..f9e4fdd3b877 100644 --- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -370,6 +370,17 @@ static void init_intel(struct cpuinfo_x86 *c) */ detect_extended_topology(c); + if (!cpu_has(c, X86_FEATURE_XTOPOLOGY)) { + /* + * let's use the legacy cpuid vector 0x1 and 0x4 for topology + * detection. + */ + c->x86_max_cores = intel_num_cpu_cores(c); +#ifdef CONFIG_X86_32 + detect_ht(c); +#endif + } + l2 = init_intel_cacheinfo(c); if (c->cpuid_level > 9) { unsigned eax = cpuid_eax(10); @@ -438,17 +449,6 @@ static void init_intel(struct cpuinfo_x86 *c) set_cpu_cap(c, X86_FEATURE_P3); #endif - if (!cpu_has(c, X86_FEATURE_XTOPOLOGY)) { - /* - * let's use the legacy cpuid vector 0x1 and 0x4 for topology - * detection. 
- */ - c->x86_max_cores = intel_num_cpu_cores(c); -#ifdef CONFIG_X86_32 - detect_ht(c); -#endif - } - /* Work around errata */ srat_detect_node(c); diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c index a952e9c85b6f..9c8f7394c612 100644 --- a/arch/x86/kernel/cpu/intel_cacheinfo.c +++ b/arch/x86/kernel/cpu/intel_cacheinfo.c @@ -730,6 +730,18 @@ unsigned int init_intel_cacheinfo(struct cpuinfo_x86 *c) #endif } +#ifdef CONFIG_X86_HT + /* + * If cpu_llc_id is not yet set, this means cpuid_level < 4 which in + * turns means that the only possibility is SMT (as indicated in + * cpuid1). Since cpuid2 doesn't specify shared caches, and we know + * that SMT shares all caches, we can unconditionally set cpu_llc_id to + * c->phys_proc_id. + */ + if (per_cpu(cpu_llc_id, cpu) == BAD_APICID) + per_cpu(cpu_llc_id, cpu) = c->phys_proc_id; +#endif + c->x86_cache_size = l3 ? l3 : (l2 ? l2 : (l1i+l1d)); return l2; diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c index bb92f38153b2..9a79c8dbd8e8 100644 --- a/arch/x86/kernel/cpu/mcheck/mce.c +++ b/arch/x86/kernel/cpu/mcheck/mce.c @@ -2451,6 +2451,12 @@ static __init int mcheck_init_device(void) for_each_online_cpu(i) { err = mce_device_create(i); if (err) { + /* + * Register notifier anyway (and do not unreg it) so + * that we don't leave undeleted timers, see notifier + * callback above. + */ + __register_hotcpu_notifier(&mce_cpu_notifier); cpu_notifier_register_done(); goto err_device_create; } @@ -2471,10 +2477,6 @@ static __init int mcheck_init_device(void) err_register: unregister_syscore_ops(&mce_syscore_ops); - cpu_notifier_register_begin(); - __unregister_hotcpu_notifier(&mce_cpu_notifier); - cpu_notifier_register_done(); - err_device_create: /* * We didn't keep track of which devices were created above, but diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c index 2bdfbff8a4f6..2879ecdaac43 100644 --- a/arch/x86/kernel/cpu/perf_event.c +++ b/arch/x86/kernel/cpu/perf_event.c @@ -118,6 +118,9 @@ static int x86_pmu_extra_regs(u64 config, struct perf_event *event) continue; if (event->attr.config1 & ~er->valid_mask) return -EINVAL; + /* Check if the extra msrs can be safely accessed*/ + if (!er->extra_msr_access) + return -ENXIO; reg->idx = er->idx; reg->config = event->attr.config1; diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h index 3b2f9bdd974b..8ade93111e03 100644 --- a/arch/x86/kernel/cpu/perf_event.h +++ b/arch/x86/kernel/cpu/perf_event.h @@ -295,14 +295,16 @@ struct extra_reg { u64 config_mask; u64 valid_mask; int idx; /* per_xxx->regs[] reg index */ + bool extra_msr_access; }; #define EVENT_EXTRA_REG(e, ms, m, vm, i) { \ - .event = (e), \ - .msr = (ms), \ - .config_mask = (m), \ - .valid_mask = (vm), \ - .idx = EXTRA_REG_##i, \ + .event = (e), \ + .msr = (ms), \ + .config_mask = (m), \ + .valid_mask = (vm), \ + .idx = EXTRA_REG_##i, \ + .extra_msr_access = true, \ } #define INTEL_EVENT_EXTRA_REG(event, msr, vm, idx) \ diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c index adb02aa62af5..2502d0d9d246 100644 --- a/arch/x86/kernel/cpu/perf_event_intel.c +++ b/arch/x86/kernel/cpu/perf_event_intel.c @@ -1382,6 +1382,15 @@ again: intel_pmu_lbr_read(); /* + * CondChgd bit 63 doesn't mean any overflow status. Ignore + * and clear the bit. 
+ */ + if (__test_and_clear_bit(63, (unsigned long *)&status)) { + if (!status) + goto done; + } + + /* * PEBS overflow sets bit 62 in the global status register */ if (__test_and_clear_bit(62, (unsigned long *)&status)) { @@ -2173,6 +2182,41 @@ static void intel_snb_check_microcode(void) } } +/* + * Under certain circumstances, access certain MSR may cause #GP. + * The function tests if the input MSR can be safely accessed. + */ +static bool check_msr(unsigned long msr, u64 mask) +{ + u64 val_old, val_new, val_tmp; + + /* + * Read the current value, change it and read it back to see if it + * matches, this is needed to detect certain hardware emulators + * (qemu/kvm) that don't trap on the MSR access and always return 0s. + */ + if (rdmsrl_safe(msr, &val_old)) + return false; + + /* + * Only change the bits which can be updated by wrmsrl. + */ + val_tmp = val_old ^ mask; + if (wrmsrl_safe(msr, val_tmp) || + rdmsrl_safe(msr, &val_new)) + return false; + + if (val_new != val_tmp) + return false; + + /* Here it's sure that the MSR can be safely accessed. + * Restore the old value and return. + */ + wrmsrl(msr, val_old); + + return true; +} + static __init void intel_sandybridge_quirk(void) { x86_pmu.check_microcode = intel_snb_check_microcode; @@ -2262,7 +2306,8 @@ __init int intel_pmu_init(void) union cpuid10_ebx ebx; struct event_constraint *c; unsigned int unused; - int version; + struct extra_reg *er; + int version, i; if (!cpu_has(&boot_cpu_data, X86_FEATURE_ARCH_PERFMON)) { switch (boot_cpu_data.x86) { @@ -2465,6 +2510,9 @@ __init int intel_pmu_init(void) case 62: /* IvyBridge EP */ memcpy(hw_cache_event_ids, snb_hw_cache_event_ids, sizeof(hw_cache_event_ids)); + /* dTLB-load-misses on IVB is different than SNB */ + hw_cache_event_ids[C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = 0x8108; /* DTLB_LOAD_MISSES.DEMAND_LD_MISS_CAUSES_A_WALK */ + memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs, sizeof(hw_cache_extra_regs)); @@ -2565,6 +2613,34 @@ __init int intel_pmu_init(void) } } + /* + * Access LBR MSR may cause #GP under certain circumstances. + * E.g. KVM doesn't support LBR MSR + * Check all LBT MSR here. + * Disable LBR access if any LBR MSRs can not be accessed. + */ + if (x86_pmu.lbr_nr && !check_msr(x86_pmu.lbr_tos, 0x3UL)) + x86_pmu.lbr_nr = 0; + for (i = 0; i < x86_pmu.lbr_nr; i++) { + if (!(check_msr(x86_pmu.lbr_from + i, 0xffffUL) && + check_msr(x86_pmu.lbr_to + i, 0xffffUL))) + x86_pmu.lbr_nr = 0; + } + + /* + * Access extra MSR may cause #GP under certain circumstances. + * E.g. KVM doesn't support offcore event + * Check all extra_regs here. 
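check_msr() above decides whether an MSR is usable by reading it, writing the value back with a few bits flipped, and re-reading: a hypervisor or emulator that silently ignores the write (or always returns zeros) fails the comparison, and the original value is restored on success. The same probe-and-restore idea, sketched in user space against an abstract register backend; the function pointers stand in for rdmsrl_safe()/wrmsrl_safe() and this is not real MSR access:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct reg_ops {
    int (*read)(uint64_t *val);   /* returns non-zero on fault */
    int (*write)(uint64_t val);
};

/* Probe: flip the maskable bits, check the readback, then restore. */
static bool reg_is_usable(const struct reg_ops *ops, uint64_t mask)
{
    uint64_t old, tmp, new;

    if (ops->read(&old))
        return false;

    tmp = old ^ mask;
    if (ops->write(tmp) || ops->read(&new))
        return false;

    if (new != tmp)           /* backend ignored the write: treat as unusable */
        return false;

    ops->write(old);          /* restore the original value */
    return true;
}

/* A real backing store ... */
static uint64_t real_reg;
static int real_read(uint64_t *v) { *v = real_reg; return 0; }
static int real_write(uint64_t v) { real_reg = v; return 0; }

/* ... and a fake one that accepts writes but always reads back 0. */
static int fake_read(uint64_t *v) { *v = 0; return 0; }
static int fake_write(uint64_t v) { (void)v; return 0; }

int main(void)
{
    struct reg_ops real = { real_read, real_write };
    struct reg_ops fake = { fake_read, fake_write };

    printf("real: %d, fake: %d\n",
           reg_is_usable(&real, 0x3), reg_is_usable(&fake, 0x3));
    return 0;
}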
+ */ + if (x86_pmu.extra_regs) { + for (er = x86_pmu.extra_regs; er->msr; er++) { + er->extra_msr_access = check_msr(er->msr, 0x1ffUL); + /* Disable LBR select mapping */ + if ((er->idx == EXTRA_REG_LBR) && !er->extra_msr_access) + x86_pmu.lbr_sel_map = NULL; + } + } + /* Support full width counters using alternative MSR range */ if (x86_pmu.intel_cap.full_width_write) { x86_pmu.max_period = x86_pmu.cntval_mask; diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c index 980970cb744d..696ade311ded 100644 --- a/arch/x86/kernel/cpu/perf_event_intel_ds.c +++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c @@ -311,9 +311,11 @@ static int alloc_bts_buffer(int cpu) if (!x86_pmu.bts) return 0; - buffer = kzalloc_node(BTS_BUFFER_SIZE, GFP_KERNEL, node); - if (unlikely(!buffer)) + buffer = kzalloc_node(BTS_BUFFER_SIZE, GFP_KERNEL | __GFP_NOWARN, node); + if (unlikely(!buffer)) { + WARN_ONCE(1, "%s: BTS buffer allocation failure\n", __func__); return -ENOMEM; + } max = BTS_BUFFER_SIZE / BTS_RECORD_SIZE; thresh = max / 16; diff --git a/arch/x86/kernel/cpu/perf_event_intel_uncore.c b/arch/x86/kernel/cpu/perf_event_intel_uncore.c index e009f3cedf89..cfc6f9dfcd90 100644 --- a/arch/x86/kernel/cpu/perf_event_intel_uncore.c +++ b/arch/x86/kernel/cpu/perf_event_intel_uncore.c @@ -550,16 +550,16 @@ static struct extra_reg snbep_uncore_cbox_extra_regs[] = { SNBEP_CBO_EVENT_EXTRA_REG(0x4134, 0xffff, 0x6), SNBEP_CBO_EVENT_EXTRA_REG(0x0135, 0xffff, 0x8), SNBEP_CBO_EVENT_EXTRA_REG(0x0335, 0xffff, 0x8), - SNBEP_CBO_EVENT_EXTRA_REG(0x4135, 0xffff, 0xc), - SNBEP_CBO_EVENT_EXTRA_REG(0x4335, 0xffff, 0xc), + SNBEP_CBO_EVENT_EXTRA_REG(0x4135, 0xffff, 0xa), + SNBEP_CBO_EVENT_EXTRA_REG(0x4335, 0xffff, 0xa), SNBEP_CBO_EVENT_EXTRA_REG(0x4435, 0xffff, 0x2), SNBEP_CBO_EVENT_EXTRA_REG(0x4835, 0xffff, 0x2), SNBEP_CBO_EVENT_EXTRA_REG(0x4a35, 0xffff, 0x2), SNBEP_CBO_EVENT_EXTRA_REG(0x5035, 0xffff, 0x2), SNBEP_CBO_EVENT_EXTRA_REG(0x0136, 0xffff, 0x8), SNBEP_CBO_EVENT_EXTRA_REG(0x0336, 0xffff, 0x8), - SNBEP_CBO_EVENT_EXTRA_REG(0x4136, 0xffff, 0xc), - SNBEP_CBO_EVENT_EXTRA_REG(0x4336, 0xffff, 0xc), + SNBEP_CBO_EVENT_EXTRA_REG(0x4136, 0xffff, 0xa), + SNBEP_CBO_EVENT_EXTRA_REG(0x4336, 0xffff, 0xa), SNBEP_CBO_EVENT_EXTRA_REG(0x4436, 0xffff, 0x2), SNBEP_CBO_EVENT_EXTRA_REG(0x4836, 0xffff, 0x2), SNBEP_CBO_EVENT_EXTRA_REG(0x4a36, 0xffff, 0x2), @@ -1222,6 +1222,7 @@ static struct extra_reg ivt_uncore_cbox_extra_regs[] = { SNBEP_CBO_EVENT_EXTRA_REG(SNBEP_CBO_PMON_CTL_TID_EN, SNBEP_CBO_PMON_CTL_TID_EN, 0x1), SNBEP_CBO_EVENT_EXTRA_REG(0x1031, 0x10ff, 0x2), + SNBEP_CBO_EVENT_EXTRA_REG(0x1134, 0xffff, 0x4), SNBEP_CBO_EVENT_EXTRA_REG(0x4134, 0xffff, 0xc), SNBEP_CBO_EVENT_EXTRA_REG(0x5134, 0xffff, 0xc), @@ -1245,7 +1246,7 @@ static struct extra_reg ivt_uncore_cbox_extra_regs[] = { SNBEP_CBO_EVENT_EXTRA_REG(0x8335, 0xffff, 0x10), SNBEP_CBO_EVENT_EXTRA_REG(0x0136, 0xffff, 0x10), SNBEP_CBO_EVENT_EXTRA_REG(0x0336, 0xffff, 0x10), - SNBEP_CBO_EVENT_EXTRA_REG(0x2336, 0xffff, 0x10), + SNBEP_CBO_EVENT_EXTRA_REG(0x2136, 0xffff, 0x10), SNBEP_CBO_EVENT_EXTRA_REG(0x2336, 0xffff, 0x10), SNBEP_CBO_EVENT_EXTRA_REG(0x4136, 0xffff, 0x18), SNBEP_CBO_EVENT_EXTRA_REG(0x4336, 0xffff, 0x18), diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S index f0da82b8e634..0d0c9d4ab6d5 100644 --- a/arch/x86/kernel/entry_32.S +++ b/arch/x86/kernel/entry_32.S @@ -423,8 +423,9 @@ sysenter_past_esp: jnz sysenter_audit sysenter_do_call: cmpl $(NR_syscalls), %eax - jae syscall_badsys + jae sysenter_badsys call 
*sys_call_table(,%eax,4) +sysenter_after_call: movl %eax,PT_EAX(%esp) LOCKDEP_SYS_EXIT DISABLE_INTERRUPTS(CLBR_ANY) @@ -501,6 +502,7 @@ ENTRY(system_call) jae syscall_badsys syscall_call: call *sys_call_table(,%eax,4) +syscall_after_call: movl %eax,PT_EAX(%esp) # store the return value syscall_exit: LOCKDEP_SYS_EXIT @@ -674,8 +676,13 @@ syscall_fault: END(syscall_fault) syscall_badsys: - movl $-ENOSYS,PT_EAX(%esp) - jmp resume_userspace + movl $-ENOSYS,%eax + jmp syscall_after_call +END(syscall_badsys) + +sysenter_badsys: + movl $-ENOSYS,%eax + jmp sysenter_after_call END(syscall_badsys) CFI_ENDPROC diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c index 6afbb16e9b79..94d857fb1033 100644 --- a/arch/x86/kernel/espfix_64.c +++ b/arch/x86/kernel/espfix_64.c @@ -175,7 +175,7 @@ void init_espfix_ap(void) if (!pud_present(pud)) { pmd_p = (pmd_t *)__get_free_page(PGALLOC_GFP); pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask)); - paravirt_alloc_pud(&init_mm, __pa(pmd_p) >> PAGE_SHIFT); + paravirt_alloc_pmd(&init_mm, __pa(pmd_p) >> PAGE_SHIFT); for (n = 0; n < ESPFIX_PUD_CLONES; n++) set_pud(&pud_p[n], pud); } @@ -185,7 +185,7 @@ void init_espfix_ap(void) if (!pmd_present(pmd)) { pte_p = (pte_t *)__get_free_page(PGALLOC_GFP); pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask)); - paravirt_alloc_pmd(&init_mm, __pa(pte_p) >> PAGE_SHIFT); + paravirt_alloc_pte(&init_mm, __pa(pte_p) >> PAGE_SHIFT); for (n = 0; n < ESPFIX_PMD_CLONES; n++) set_pmd(&pmd_p[n], pmd); } @@ -193,7 +193,6 @@ void init_espfix_ap(void) pte_p = pte_offset_kernel(&pmd, addr); stack_page = (void *)__get_free_page(GFP_KERNEL); pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask)); - paravirt_alloc_pte(&init_mm, __pa(stack_page) >> PAGE_SHIFT); for (n = 0; n < ESPFIX_PTE_CLONES; n++) set_pte(&pte_p[n*PTE_STRIDE], pte); diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c index 7596df664901..67e6d19ef1be 100644 --- a/arch/x86/kernel/kprobes/core.c +++ b/arch/x86/kernel/kprobes/core.c @@ -574,6 +574,9 @@ int kprobe_int3_handler(struct pt_regs *regs) struct kprobe *p; struct kprobe_ctlblk *kcb; + if (user_mode_vm(regs)) + return 0; + addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t)); /* * We don't want to be preempted for the entire diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c index a0da58db43a8..2851d63c1202 100644 --- a/arch/x86/kernel/signal.c +++ b/arch/x86/kernel/signal.c @@ -363,7 +363,7 @@ static int __setup_rt_frame(int sig, struct ksignal *ksig, /* Set up to return from userspace. 
*/ restorer = current->mm->context.vdso + - selected_vdso32->sym___kernel_sigreturn; + selected_vdso32->sym___kernel_rt_sigreturn; if (ksig->ka.sa.sa_flags & SA_RESTORER) restorer = ksig->ka.sa.sa_restorer; put_user_ex(restorer, &frame->pretcode); diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c index 57e5ce126d5a..ea030319b321 100644 --- a/arch/x86/kernel/tsc.c +++ b/arch/x86/kernel/tsc.c @@ -920,9 +920,9 @@ static int time_cpufreq_notifier(struct notifier_block *nb, unsigned long val, tsc_khz = cpufreq_scale(tsc_khz_ref, ref_freq, freq->new); if (!(freq->flags & CPUFREQ_CONST_LOOPS)) mark_tsc_unstable("cpufreq changes"); - } - set_cyc2ns_scale(tsc_khz, freq->cpu); + set_cyc2ns_scale(tsc_khz, freq->cpu); + } return 0; } diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index ec8366c5cfea..b5e994ad0135 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -1462,6 +1462,7 @@ static void svm_get_segment(struct kvm_vcpu *vcpu, */ if (var->unusable) var->db = 0; + var->dpl = to_svm(vcpu)->vmcb->save.cpl; break; } } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index f32a02578c0d..ef432f891d30 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -1898,7 +1898,7 @@ static int set_msr_hyperv_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data) if (!(data & HV_X64_MSR_TSC_REFERENCE_ENABLE)) break; gfn = data >> HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT; - if (kvm_write_guest(kvm, data, + if (kvm_write_guest(kvm, gfn << HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT, &tsc_ref, sizeof(tsc_ref))) return 1; mark_page_dirty(kvm, gfn); @@ -5887,6 +5887,18 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win) kvm_x86_ops->set_nmi(vcpu); } } else if (kvm_cpu_has_injectable_intr(vcpu)) { + /* + * Because interrupts can be injected asynchronously, we are + * calling check_nested_events again here to avoid a race condition. + * See https://lkml.org/lkml/2014/7/2/60 for discussion about this + * proposal and current concerns. Perhaps we should be setting + * KVM_REQ_EVENT only on certain events and not unconditionally? 
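The set_msr_hyperv_pw() fix above writes the TSC reference structure at the guest-physical address derived from the guest frame number, not at the raw MSR value, which still carries the enable bit in its low bits. A small sketch of the address arithmetic; the macro names are illustrative stand-ins for the kernel's HV_X64_MSR_TSC_REFERENCE_* constants, and the 12-bit page shift is an assumption:

#include <stdint.h>
#include <stdio.h>

#define HV_TSC_REFERENCE_ENABLE        0x1ULL
#define HV_TSC_REFERENCE_ADDRESS_SHIFT 12    /* assumed: one 4 KiB page */

int main(void)
{
    /* MSR value as the guest wrote it: page frame in the high bits, enable bit 0. */
    uint64_t msr_val = (0x12345ULL << HV_TSC_REFERENCE_ADDRESS_SHIFT) |
                       HV_TSC_REFERENCE_ENABLE;

    uint64_t gfn = msr_val >> HV_TSC_REFERENCE_ADDRESS_SHIFT;
    uint64_t gpa = gfn << HV_TSC_REFERENCE_ADDRESS_SHIFT;

    /* Writing at msr_val would be off by the low control bits; gpa is page aligned. */
    printf("msr=%#llx gfn=%#llx gpa=%#llx\n",
           (unsigned long long)msr_val,
           (unsigned long long)gfn,
           (unsigned long long)gpa);
    return 0;
}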
+ */ + if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) { + r = kvm_x86_ops->check_nested_events(vcpu, req_int_win); + if (r != 0) + return r; + } if (kvm_x86_ops->interrupt_allowed(vcpu)) { kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false); diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile index 3c0809a0631f..61b04fe36e66 100644 --- a/arch/x86/vdso/Makefile +++ b/arch/x86/vdso/Makefile @@ -11,7 +11,6 @@ VDSO32-$(CONFIG_COMPAT) := y # files to link into the vdso vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vdso-fakesections.o -vobjs-nox32 := vdso-fakesections.o # files to link into kernel obj-y += vma.o @@ -67,7 +66,8 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso2c FORCE # CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \ $(filter -g%,$(KBUILD_CFLAGS)) $(call cc-option, -fno-stack-protector) \ - -fno-omit-frame-pointer -foptimize-sibling-calls + -fno-omit-frame-pointer -foptimize-sibling-calls \ + -DDISABLE_BRANCH_PROFILING $(vobjs): KBUILD_CFLAGS += $(CFL) @@ -134,7 +134,7 @@ override obj-dirs = $(dir $(obj)) $(obj)/vdso32/ targets += vdso32/vdso32.lds targets += vdso32/note.o vdso32/vclock_gettime.o $(vdso32.so-y:%=vdso32/%.o) -targets += vdso32/vclock_gettime.o +targets += vdso32/vclock_gettime.o vdso32/vdso-fakesections.o $(obj)/vdso32.o: $(vdso32-images:%=$(obj)/%) @@ -150,11 +150,13 @@ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic KBUILD_CFLAGS_32 += $(call cc-option, -fno-stack-protector) KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls) KBUILD_CFLAGS_32 += -fno-omit-frame-pointer +KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING $(vdso32-images:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_32) $(vdso32-images:%=$(obj)/%.dbg): $(obj)/vdso32-%.so.dbg: FORCE \ $(obj)/vdso32/vdso32.lds \ $(obj)/vdso32/vclock_gettime.o \ + $(obj)/vdso32/vdso-fakesections.o \ $(obj)/vdso32/note.o \ $(obj)/vdso32/%.o $(call if_changed,vdso) @@ -169,14 +171,24 @@ quiet_cmd_vdso = VDSO $@ sh $(srctree)/$(src)/checkundef.sh '$(NM)' '$@' VDSO_LDFLAGS = -fPIC -shared $(call cc-ldoption, -Wl$(comma)--hash-style=sysv) \ - -Wl,-Bsymbolic $(LTO_CFLAGS) + $(call cc-ldoption, -Wl$(comma)--build-id) -Wl,-Bsymbolic $(LTO_CFLAGS) GCOV_PROFILE := n # -# Install the unstripped copies of vdso*.so. +# Install the unstripped copies of vdso*.so. If our toolchain supports +# build-id, install .build-id links as well. # quiet_cmd_vdso_install = INSTALL $(@:install_%=%) - cmd_vdso_install = cp $< $(MODLIB)/vdso/$(@:install_%=%) +define cmd_vdso_install + cp $< "$(MODLIB)/vdso/$(@:install_%=%)"; \ + if readelf -n $< |grep -q 'Build ID'; then \ + buildid=`readelf -n $< |grep 'Build ID' |sed -e 's/^.*Build ID: \(.*\)$$/\1/'`; \ + first=`echo $$buildid | cut -b-2`; \ + last=`echo $$buildid | cut -b3-`; \ + mkdir -p "$(MODLIB)/vdso/.build-id/$$first"; \ + ln -sf "../../$(@:install_%=%)" "$(MODLIB)/vdso/.build-id/$$first/$$last.debug"; \ + fi +endef vdso_img_insttargets := $(vdso_img_sodbg:%.dbg=install_%) diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c index b2e4f493e5b0..9793322751e0 100644 --- a/arch/x86/vdso/vclock_gettime.c +++ b/arch/x86/vdso/vclock_gettime.c @@ -11,9 +11,6 @@ * Check with readelf after changing. 
*/ -/* Disable profiling for userspace code: */ -#define DISABLE_BRANCH_PROFILING - #include <uapi/linux/time.h> #include <asm/vgtod.h> #include <asm/hpet.h> diff --git a/arch/x86/vdso/vdso-fakesections.c b/arch/x86/vdso/vdso-fakesections.c index cb8a8d72c24b..aa5fbfab20a5 100644 --- a/arch/x86/vdso/vdso-fakesections.c +++ b/arch/x86/vdso/vdso-fakesections.c @@ -2,31 +2,20 @@ * Copyright 2014 Andy Lutomirski * Subject to the GNU Public License, v.2 * - * Hack to keep broken Go programs working. - * - * The Go runtime had a couple of bugs: it would read the section table to try - * to figure out how many dynamic symbols there were (it shouldn't have looked - * at the section table at all) and, if there were no SHT_SYNDYM section table - * entry, it would use an uninitialized value for the number of symbols. As a - * workaround, we supply a minimal section table. vdso2c will adjust the - * in-memory image so that "vdso_fake_sections" becomes the section table. - * - * The bug was introduced by: - * https://code.google.com/p/go/source/detail?r=56ea40aac72b (2012-08-31) - * and is being addressed in the Go runtime in this issue: - * https://code.google.com/p/go/issues/detail?id=8197 + * String table for loadable section headers. See vdso2c.h for why + * this exists. */ -#ifndef __x86_64__ -#error This hack is specific to the 64-bit vDSO -#endif - -#include <linux/elf.h> - -extern const __visible struct elf64_shdr vdso_fake_sections[]; -const __visible struct elf64_shdr vdso_fake_sections[] = { - { - .sh_type = SHT_DYNSYM, - .sh_entsize = sizeof(Elf64_Sym), - } -}; +const char fake_shstrtab[] __attribute__((section(".fake_shstrtab"))) = + ".hash\0" + ".dynsym\0" + ".dynstr\0" + ".gnu.version\0" + ".gnu.version_d\0" + ".dynamic\0" + ".rodata\0" + ".fake_shstrtab\0" /* Yay, self-referential code. */ + ".note\0" + ".eh_frame_hdr\0" + ".eh_frame\0" + ".text"; diff --git a/arch/x86/vdso/vdso-layout.lds.S b/arch/x86/vdso/vdso-layout.lds.S index 2ec72f651ebf..9197544eea9a 100644 --- a/arch/x86/vdso/vdso-layout.lds.S +++ b/arch/x86/vdso/vdso-layout.lds.S @@ -6,6 +6,16 @@ * This script controls its layout. */ +#if defined(BUILD_VDSO64) +# define SHDR_SIZE 64 +#elif defined(BUILD_VDSO32) || defined(BUILD_VDSOX32) +# define SHDR_SIZE 40 +#else +# error unknown VDSO target +#endif + +#define NUM_FAKE_SHDRS 13 + SECTIONS { . = SIZEOF_HEADERS; @@ -18,36 +28,53 @@ SECTIONS .gnu.version_d : { *(.gnu.version_d) } .gnu.version_r : { *(.gnu.version_r) } + .dynamic : { *(.dynamic) } :text :dynamic + + .rodata : { + *(.rodata*) + *(.data*) + *(.sdata*) + *(.got.plt) *(.got) + *(.gnu.linkonce.d.*) + *(.bss*) + *(.dynbss*) + *(.gnu.linkonce.b.*) + + /* + * Ideally this would live in a C file, but that won't + * work cleanly for x32 until we start building the x32 + * C code using an x32 toolchain. + */ + VDSO_FAKE_SECTION_TABLE_START = .; + . = . + NUM_FAKE_SHDRS * SHDR_SIZE; + VDSO_FAKE_SECTION_TABLE_END = .; + } :text + + .fake_shstrtab : { *(.fake_shstrtab) } :text + + .note : { *(.note.*) } :text :note .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr .eh_frame : { KEEP (*(.eh_frame)) } :text - .dynamic : { *(.dynamic) } :text :dynamic - - .rodata : { *(.rodata*) } :text - .data : { - *(.data*) - *(.sdata*) - *(.got.plt) *(.got) - *(.gnu.linkonce.d.*) - *(.bss*) - *(.dynbss*) - *(.gnu.linkonce.b.*) - } - - .altinstructions : { *(.altinstructions) } - .altinstr_replacement : { *(.altinstr_replacement) } /* - * Align the actual code well away from the non-instruction data. 
- * This is the best thing for the I-cache. + * Text is well-separated from actual data: there's plenty of + * stuff that isn't used at runtime in between. */ - . = ALIGN(0x100); .text : { *(.text*) } :text =0x90909090, /* + * At the end so that eu-elflint stays happy when vdso2c strips + * these. A better implementation would avoid allocating space + * for these. + */ + .altinstructions : { *(.altinstructions) } :text + .altinstr_replacement : { *(.altinstr_replacement) } :text + + /* * The remainder of the vDSO consists of special pages that are * shared between the kernel and userspace. It needs to be at the * end so that it doesn't overlap the mapping of the actual @@ -75,6 +102,7 @@ SECTIONS /DISCARD/ : { *(.discard) *(.discard.*) + *(__bug_table) } } diff --git a/arch/x86/vdso/vdso.lds.S b/arch/x86/vdso/vdso.lds.S index 75e3404c83b1..6807932643c2 100644 --- a/arch/x86/vdso/vdso.lds.S +++ b/arch/x86/vdso/vdso.lds.S @@ -6,6 +6,8 @@ * the DSO. */ +#define BUILD_VDSO64 + #include "vdso-layout.lds.S" /* diff --git a/arch/x86/vdso/vdso2c.c b/arch/x86/vdso/vdso2c.c index 7a6bf50f9165..238dbe82776e 100644 --- a/arch/x86/vdso/vdso2c.c +++ b/arch/x86/vdso/vdso2c.c @@ -23,6 +23,8 @@ enum { sym_vvar_page, sym_hpet_page, sym_end_mapping, + sym_VDSO_FAKE_SECTION_TABLE_START, + sym_VDSO_FAKE_SECTION_TABLE_END, }; const int special_pages[] = { @@ -30,15 +32,26 @@ const int special_pages[] = { sym_hpet_page, }; -char const * const required_syms[] = { - [sym_vvar_page] = "vvar_page", - [sym_hpet_page] = "hpet_page", - [sym_end_mapping] = "end_mapping", - "VDSO32_NOTE_MASK", - "VDSO32_SYSENTER_RETURN", - "__kernel_vsyscall", - "__kernel_sigreturn", - "__kernel_rt_sigreturn", +struct vdso_sym { + const char *name; + bool export; +}; + +struct vdso_sym required_syms[] = { + [sym_vvar_page] = {"vvar_page", true}, + [sym_hpet_page] = {"hpet_page", true}, + [sym_end_mapping] = {"end_mapping", true}, + [sym_VDSO_FAKE_SECTION_TABLE_START] = { + "VDSO_FAKE_SECTION_TABLE_START", false + }, + [sym_VDSO_FAKE_SECTION_TABLE_END] = { + "VDSO_FAKE_SECTION_TABLE_END", false + }, + {"VDSO32_NOTE_MASK", true}, + {"VDSO32_SYSENTER_RETURN", true}, + {"__kernel_vsyscall", true}, + {"__kernel_sigreturn", true}, + {"__kernel_rt_sigreturn", true}, }; __attribute__((format(printf, 1, 2))) __attribute__((noreturn)) @@ -83,37 +96,21 @@ extern void bad_put_le(void); #define NSYMS (sizeof(required_syms) / sizeof(required_syms[0])) -#define BITS 64 -#define GOFUNC go64 -#define Elf_Ehdr Elf64_Ehdr -#define Elf_Shdr Elf64_Shdr -#define Elf_Phdr Elf64_Phdr -#define Elf_Sym Elf64_Sym -#define Elf_Dyn Elf64_Dyn +#define BITSFUNC3(name, bits) name##bits +#define BITSFUNC2(name, bits) BITSFUNC3(name, bits) +#define BITSFUNC(name) BITSFUNC2(name, ELF_BITS) + +#define ELF_BITS_XFORM2(bits, x) Elf##bits##_##x +#define ELF_BITS_XFORM(bits, x) ELF_BITS_XFORM2(bits, x) +#define ELF(x) ELF_BITS_XFORM(ELF_BITS, x) + +#define ELF_BITS 64 #include "vdso2c.h" -#undef BITS -#undef GOFUNC -#undef Elf_Ehdr -#undef Elf_Shdr -#undef Elf_Phdr -#undef Elf_Sym -#undef Elf_Dyn - -#define BITS 32 -#define GOFUNC go32 -#define Elf_Ehdr Elf32_Ehdr -#define Elf_Shdr Elf32_Shdr -#define Elf_Phdr Elf32_Phdr -#define Elf_Sym Elf32_Sym -#define Elf_Dyn Elf32_Dyn +#undef ELF_BITS + +#define ELF_BITS 32 #include "vdso2c.h" -#undef BITS -#undef GOFUNC -#undef Elf_Ehdr -#undef Elf_Shdr -#undef Elf_Phdr -#undef Elf_Sym -#undef Elf_Dyn +#undef ELF_BITS static void go(void *addr, size_t len, FILE *outfile, const char *name) { diff --git a/arch/x86/vdso/vdso2c.h 
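The vdso2c.c hunk above replaces the long list of per-width #define/#undef pairs with two helper macro families, so vdso2c.h can be included once with ELF_BITS set to 64 and once with 32 while ELF(Ehdr), ELF(Shdr), BITSFUNC(go) and friends expand to the matching 64- or 32-bit names. The extra indirection (XFORM2 underneath XFORM) is what forces ELF_BITS to be expanded to its value before the ## paste happens. A stand-alone illustration of the same pattern, using dummy typedefs rather than the real ELF structures:

#include <stdio.h>

#define ELF_BITS_XFORM2(bits, x) Elf##bits##_##x
#define ELF_BITS_XFORM(bits, x)  ELF_BITS_XFORM2(bits, x)
#define ELF(x) ELF_BITS_XFORM(ELF_BITS, x)

/* Dummy stand-ins for the real <elf.h> structures. */
typedef struct { char pad[64]; } Elf64_Ehdr;
typedef struct { char pad[52]; } Elf32_Ehdr;

#define ELF_BITS 64
ELF(Ehdr) hdr64;          /* expands to: Elf64_Ehdr hdr64; */
#undef ELF_BITS

#define ELF_BITS 32
ELF(Ehdr) hdr32;          /* expands to: Elf32_Ehdr hdr32; */
#undef ELF_BITS

int main(void)
{
    /* Without the two-level expansion, the paste would produce ElfELF_BITS_Ehdr. */
    printf("sizeof hdr64=%zu sizeof hdr32=%zu\n", sizeof(hdr64), sizeof(hdr32));
    return 0;
}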
b/arch/x86/vdso/vdso2c.h index c6eefaf389b9..11b65d4f9414 100644 --- a/arch/x86/vdso/vdso2c.h +++ b/arch/x86/vdso/vdso2c.h @@ -4,23 +4,139 @@ * are built for 32-bit userspace. */ -static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name) +/* + * We're writing a section table for a few reasons: + * + * The Go runtime had a couple of bugs: it would read the section + * table to try to figure out how many dynamic symbols there were (it + * shouldn't have looked at the section table at all) and, if there + * were no SHT_SYNDYM section table entry, it would use an + * uninitialized value for the number of symbols. An empty DYNSYM + * table would work, but I see no reason not to write a valid one (and + * keep full performance for old Go programs). This hack is only + * needed on x86_64. + * + * The bug was introduced on 2012-08-31 by: + * https://code.google.com/p/go/source/detail?r=56ea40aac72b + * and was fixed on 2014-06-13 by: + * https://code.google.com/p/go/source/detail?r=fc1cd5e12595 + * + * Binutils has issues debugging the vDSO: it reads the section table to + * find SHT_NOTE; it won't look at PT_NOTE for the in-memory vDSO, which + * would break build-id if we removed the section table. Binutils + * also requires that shstrndx != 0. See: + * https://sourceware.org/bugzilla/show_bug.cgi?id=17064 + * + * elfutils might not look for PT_NOTE if there is a section table at + * all. I don't know whether this matters for any practical purpose. + * + * For simplicity, rather than hacking up a partial section table, we + * just write a mostly complete one. We omit non-dynamic symbols, + * though, since they're rather large. + * + * Once binutils gets fixed, we might be able to drop this for all but + * the 64-bit vdso, since build-id only works in kernel RPMs, and + * systems that update to new enough kernel RPMs will likely update + * binutils in sync. build-id has never worked for home-built kernel + * RPMs without manual symlinking, and I suspect that no one ever does + * that. + */ +struct BITSFUNC(fake_sections) +{ + ELF(Shdr) *table; + unsigned long table_offset; + int count, max_count; + + int in_shstrndx; + unsigned long shstr_offset; + const char *shstrtab; + size_t shstrtab_len; + + int out_shstrndx; +}; + +static unsigned int BITSFUNC(find_shname)(struct BITSFUNC(fake_sections) *out, + const char *name) +{ + const char *outname = out->shstrtab; + while (outname - out->shstrtab < out->shstrtab_len) { + if (!strcmp(name, outname)) + return (outname - out->shstrtab) + out->shstr_offset; + outname += strlen(outname) + 1; + } + + if (*name) + printf("Warning: could not find output name \"%s\"\n", name); + return out->shstr_offset + out->shstrtab_len - 1; /* Use a null. 
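find_shname() above resolves a section name to an sh_name offset by walking the NUL-separated fake_shstrtab blob; the adjacent string literals in vdso-fakesections.c concatenate, embedded "\0"s included, into exactly such a table. A user-space sketch of the same lookup over a tiny table:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Adjacent literals with embedded NULs form one ELF-style string table. */
static const char shstrtab[] = ".hash\0" ".dynsym\0" ".text";

/* Return the offset of name inside the table, or (size_t)-1 if absent. */
static size_t shname_offset(const char *tab, size_t len, const char *name)
{
    const char *p = tab;

    while ((size_t)(p - tab) < len) {
        if (!strcmp(p, name))
            return (size_t)(p - tab);
        p += strlen(p) + 1;
    }
    return (size_t)-1;
}

int main(void)
{
    /* sizeof includes the final implicit NUL, much like shstrtab_len in the tool. */
    printf(".dynsym at offset %zu\n",
           shname_offset(shstrtab, sizeof(shstrtab), ".dynsym"));   /* prints 6 */
    return 0;
}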
*/ +} + +static void BITSFUNC(init_sections)(struct BITSFUNC(fake_sections) *out) +{ + if (!out->in_shstrndx) + fail("didn't find the fake shstrndx\n"); + + memset(out->table, 0, out->max_count * sizeof(ELF(Shdr))); + + if (out->max_count < 1) + fail("we need at least two fake output sections\n"); + + PUT_LE(&out->table[0].sh_type, SHT_NULL); + PUT_LE(&out->table[0].sh_name, BITSFUNC(find_shname)(out, "")); + + out->count = 1; +} + +static void BITSFUNC(copy_section)(struct BITSFUNC(fake_sections) *out, + int in_idx, const ELF(Shdr) *in, + const char *name) +{ + uint64_t flags = GET_LE(&in->sh_flags); + + bool copy = flags & SHF_ALLOC && + (GET_LE(&in->sh_size) || + (GET_LE(&in->sh_type) != SHT_RELA && + GET_LE(&in->sh_type) != SHT_REL)) && + strcmp(name, ".altinstructions") && + strcmp(name, ".altinstr_replacement"); + + if (!copy) + return; + + if (out->count >= out->max_count) + fail("too many copied sections (max = %d)\n", out->max_count); + + if (in_idx == out->in_shstrndx) + out->out_shstrndx = out->count; + + out->table[out->count] = *in; + PUT_LE(&out->table[out->count].sh_name, + BITSFUNC(find_shname)(out, name)); + + /* elfutils requires that a strtab have the correct type. */ + if (!strcmp(name, ".fake_shstrtab")) + PUT_LE(&out->table[out->count].sh_type, SHT_STRTAB); + + out->count++; +} + +static void BITSFUNC(go)(void *addr, size_t len, + FILE *outfile, const char *name) { int found_load = 0; unsigned long load_size = -1; /* Work around bogus warning */ unsigned long data_size; - Elf_Ehdr *hdr = (Elf_Ehdr *)addr; + ELF(Ehdr) *hdr = (ELF(Ehdr) *)addr; int i; unsigned long j; - Elf_Shdr *symtab_hdr = NULL, *strtab_hdr, *secstrings_hdr, + ELF(Shdr) *symtab_hdr = NULL, *strtab_hdr, *secstrings_hdr, *alt_sec = NULL; - Elf_Dyn *dyn = 0, *dyn_end = 0; + ELF(Dyn) *dyn = 0, *dyn_end = 0; const char *secstrings; uint64_t syms[NSYMS] = {}; - uint64_t fake_sections_value = 0, fake_sections_size = 0; + struct BITSFUNC(fake_sections) fake_sections = {}; - Elf_Phdr *pt = (Elf_Phdr *)(addr + GET_LE(&hdr->e_phoff)); + ELF(Phdr) *pt = (ELF(Phdr) *)(addr + GET_LE(&hdr->e_phoff)); /* Walk the segment table. 
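BITSFUNC(go)() above never dereferences ELF fields directly; every access goes through GET_LE()/PUT_LE(), since vdso2c may run on a host of either endianness while the vDSO image is always little-endian. The actual helpers are defined elsewhere in vdso2c.c and are not shown in this hunk; as an illustration of what such an accessor has to do, here is a minimal, alignment-safe read of a 32-bit little-endian field:

#include <stdint.h>
#include <stdio.h>

/* Read a little-endian 32-bit value from an arbitrarily aligned buffer. */
static uint32_t get_le32(const void *p)
{
    const unsigned char *b = p;

    return (uint32_t)b[0]       |
           (uint32_t)b[1] << 8  |
           (uint32_t)b[2] << 16 |
           (uint32_t)b[3] << 24;
}

int main(void)
{
    unsigned char buf[] = { 0x34, 0x00, 0x00, 0x00 };   /* a header-style field */

    printf("%u\n", get_le32(buf));   /* 52 on any host, big- or little-endian */
    return 0;
}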
*/ for (i = 0; i < GET_LE(&hdr->e_phnum); i++) { @@ -51,7 +167,7 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name) for (i = 0; dyn + i < dyn_end && GET_LE(&dyn[i].d_tag) != DT_NULL; i++) { typeof(dyn[i].d_tag) tag = GET_LE(&dyn[i].d_tag); - if (tag == DT_REL || tag == DT_RELSZ || + if (tag == DT_REL || tag == DT_RELSZ || tag == DT_RELA || tag == DT_RELENT || tag == DT_TEXTREL) fail("vdso image contains dynamic relocations\n"); } @@ -61,7 +177,7 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name) GET_LE(&hdr->e_shentsize)*GET_LE(&hdr->e_shstrndx); secstrings = addr + GET_LE(&secstrings_hdr->sh_offset); for (i = 0; i < GET_LE(&hdr->e_shnum); i++) { - Elf_Shdr *sh = addr + GET_LE(&hdr->e_shoff) + + ELF(Shdr) *sh = addr + GET_LE(&hdr->e_shoff) + GET_LE(&hdr->e_shentsize) * i; if (GET_LE(&sh->sh_type) == SHT_SYMTAB) symtab_hdr = sh; @@ -82,29 +198,63 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name) i < GET_LE(&symtab_hdr->sh_size) / GET_LE(&symtab_hdr->sh_entsize); i++) { int k; - Elf_Sym *sym = addr + GET_LE(&symtab_hdr->sh_offset) + + ELF(Sym) *sym = addr + GET_LE(&symtab_hdr->sh_offset) + GET_LE(&symtab_hdr->sh_entsize) * i; const char *name = addr + GET_LE(&strtab_hdr->sh_offset) + GET_LE(&sym->st_name); for (k = 0; k < NSYMS; k++) { - if (!strcmp(name, required_syms[k])) { + if (!strcmp(name, required_syms[k].name)) { if (syms[k]) { fail("duplicate symbol %s\n", - required_syms[k]); + required_syms[k].name); } syms[k] = GET_LE(&sym->st_value); } } - if (!strcmp(name, "vdso_fake_sections")) { - if (fake_sections_value) - fail("duplicate vdso_fake_sections\n"); - fake_sections_value = GET_LE(&sym->st_value); - fake_sections_size = GET_LE(&sym->st_size); + if (!strcmp(name, "fake_shstrtab")) { + ELF(Shdr) *sh; + + fake_sections.in_shstrndx = GET_LE(&sym->st_shndx); + fake_sections.shstrtab = addr + GET_LE(&sym->st_value); + fake_sections.shstrtab_len = GET_LE(&sym->st_size); + sh = addr + GET_LE(&hdr->e_shoff) + + GET_LE(&hdr->e_shentsize) * + fake_sections.in_shstrndx; + fake_sections.shstr_offset = GET_LE(&sym->st_value) - + GET_LE(&sh->sh_addr); } } + /* Build the output section table. */ + if (!syms[sym_VDSO_FAKE_SECTION_TABLE_START] || + !syms[sym_VDSO_FAKE_SECTION_TABLE_END]) + fail("couldn't find fake section table\n"); + if ((syms[sym_VDSO_FAKE_SECTION_TABLE_END] - + syms[sym_VDSO_FAKE_SECTION_TABLE_START]) % sizeof(ELF(Shdr))) + fail("fake section table size isn't a multiple of sizeof(Shdr)\n"); + fake_sections.table = addr + syms[sym_VDSO_FAKE_SECTION_TABLE_START]; + fake_sections.table_offset = syms[sym_VDSO_FAKE_SECTION_TABLE_START]; + fake_sections.max_count = (syms[sym_VDSO_FAKE_SECTION_TABLE_END] - + syms[sym_VDSO_FAKE_SECTION_TABLE_START]) / + sizeof(ELF(Shdr)); + + BITSFUNC(init_sections)(&fake_sections); + for (i = 0; i < GET_LE(&hdr->e_shnum); i++) { + ELF(Shdr) *sh = addr + GET_LE(&hdr->e_shoff) + + GET_LE(&hdr->e_shentsize) * i; + BITSFUNC(copy_section)(&fake_sections, i, sh, + secstrings + GET_LE(&sh->sh_name)); + } + if (!fake_sections.out_shstrndx) + fail("didn't generate shstrndx?!?\n"); + + PUT_LE(&hdr->e_shoff, fake_sections.table_offset); + PUT_LE(&hdr->e_shentsize, sizeof(ELF(Shdr))); + PUT_LE(&hdr->e_shnum, fake_sections.count); + PUT_LE(&hdr->e_shstrndx, fake_sections.out_shstrndx); + /* Validate mapping addresses. 
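After building the in-image table, the code above repoints the ELF header at it: e_shoff gets the offset of the reserved area, e_shentsize and e_shnum describe the copied entries, and e_shstrndx names the .fake_shstrtab entry. For reference, a small reader that prints those header fields of any 64-bit ELF file using the standard <elf.h> layout (it reads the struct in host byte order, which is fine for inspecting a native image):

#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <64-bit ELF file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    Elf64_Ehdr hdr;
    if (fread(&hdr, sizeof(hdr), 1, f) != 1) {
        fprintf(stderr, "short read\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    /* Section table location, entry size/count, and the string-table index. */
    printf("e_shoff=%llu e_shentsize=%u e_shnum=%u e_shstrndx=%u\n",
           (unsigned long long)hdr.e_shoff,
           hdr.e_shentsize, hdr.e_shnum, hdr.e_shstrndx);
    return 0;
}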
*/ for (i = 0; i < sizeof(special_pages) / sizeof(special_pages[0]); i++) { if (!syms[i]) @@ -112,25 +262,17 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name) if (syms[i] % 4096) fail("%s must be a multiple of 4096\n", - required_syms[i]); + required_syms[i].name); if (syms[i] < data_size) fail("%s must be after the text mapping\n", - required_syms[i]); + required_syms[i].name); if (syms[sym_end_mapping] < syms[i] + 4096) - fail("%s overruns end_mapping\n", required_syms[i]); + fail("%s overruns end_mapping\n", + required_syms[i].name); } if (syms[sym_end_mapping] % 4096) fail("end_mapping must be a multiple of 4096\n"); - /* Remove sections or use fakes */ - if (fake_sections_size % sizeof(Elf_Shdr)) - fail("vdso_fake_sections size is not a multiple of %ld\n", - (long)sizeof(Elf_Shdr)); - PUT_LE(&hdr->e_shoff, fake_sections_value); - PUT_LE(&hdr->e_shentsize, fake_sections_value ? sizeof(Elf_Shdr) : 0); - PUT_LE(&hdr->e_shnum, fake_sections_size / sizeof(Elf_Shdr)); - PUT_LE(&hdr->e_shstrndx, SHN_UNDEF); - if (!name) { fwrite(addr, load_size, 1, outfile); return; @@ -168,9 +310,9 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name) (unsigned long)GET_LE(&alt_sec->sh_size)); } for (i = 0; i < NSYMS; i++) { - if (syms[i]) + if (required_syms[i].export && syms[i]) fprintf(outfile, "\t.sym_%s = 0x%" PRIx64 ",\n", - required_syms[i], syms[i]); + required_syms[i].name, syms[i]); } fprintf(outfile, "};\n"); } diff --git a/arch/x86/vdso/vdso32/vdso-fakesections.c b/arch/x86/vdso/vdso32/vdso-fakesections.c new file mode 100644 index 000000000000..541468e25265 --- /dev/null +++ b/arch/x86/vdso/vdso32/vdso-fakesections.c @@ -0,0 +1 @@ +#include "../vdso-fakesections.c" diff --git a/arch/x86/vdso/vdsox32.lds.S b/arch/x86/vdso/vdsox32.lds.S index 46b991b578a8..697c11ece90c 100644 --- a/arch/x86/vdso/vdsox32.lds.S +++ b/arch/x86/vdso/vdsox32.lds.S @@ -6,6 +6,8 @@ * the DSO. */ +#define BUILD_VDSOX32 + #include "vdso-layout.lds.S" /* diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c index e1513c47872a..5a5176de8d0a 100644 --- a/arch/x86/vdso/vma.c +++ b/arch/x86/vdso/vma.c @@ -62,6 +62,9 @@ struct linux_binprm; Only used for the 64-bit and x32 vdsos. */ static unsigned long vdso_addr(unsigned long start, unsigned len) { +#ifdef CONFIG_X86_32 + return 0; +#else unsigned long addr, end; unsigned offset; end = (start + PMD_SIZE - 1) & PMD_MASK; @@ -83,6 +86,7 @@ static unsigned long vdso_addr(unsigned long start, unsigned len) addr = align_vdso_addr(addr); return addr; +#endif } static int map_vdso(const struct vdso_image *image, bool calculate_addr) diff --git a/arch/xtensa/kernel/vectors.S b/arch/xtensa/kernel/vectors.S index f9e1ec346e35..8453e6e39895 100644 --- a/arch/xtensa/kernel/vectors.S +++ b/arch/xtensa/kernel/vectors.S @@ -376,38 +376,42 @@ _DoubleExceptionVector_WindowOverflow: beqz a2, 1f # if at start of vector, don't restore addi a0, a0, -128 - bbsi a0, 8, 1f # don't restore except for overflow 8 and 12 - bbsi a0, 7, 2f + bbsi.l a0, 8, 1f # don't restore except for overflow 8 and 12 + + /* + * This fixup handler is for the extremely unlikely case where the + * overflow handler's reference thru a0 gets a hardware TLB refill + * that bumps out the (distinct, aliasing) TLB entry that mapped its + * prior references thru a9/a13, and where our reference now thru + * a9/a13 gets a 2nd-level miss exception (not hardware TLB refill). 
+ */ + movi a2, window_overflow_restore_a0_fixup + s32i a2, a3, EXC_TABLE_FIXUP + l32i a2, a3, EXC_TABLE_DOUBLE_SAVE + xsr a3, excsave1 + + bbsi.l a0, 7, 2f /* * Restore a0 as saved by _WindowOverflow8(). - * - * FIXME: we really need a fixup handler for this L32E, - * for the extremely unlikely case where the overflow handler's - * reference thru a0 gets a hardware TLB refill that bumps out - * the (distinct, aliasing) TLB entry that mapped its prior - * references thru a9, and where our reference now thru a9 - * gets a 2nd-level miss exception (not hardware TLB refill). */ - l32e a2, a9, -16 - wsr a2, depc # replace the saved a0 - j 1f + l32e a0, a9, -16 + wsr a0, depc # replace the saved a0 + j 3f 2: /* * Restore a0 as saved by _WindowOverflow12(). - * - * FIXME: we really need a fixup handler for this L32E, - * for the extremely unlikely case where the overflow handler's - * reference thru a0 gets a hardware TLB refill that bumps out - * the (distinct, aliasing) TLB entry that mapped its prior - * references thru a13, and where our reference now thru a13 - * gets a 2nd-level miss exception (not hardware TLB refill). */ - l32e a2, a13, -16 - wsr a2, depc # replace the saved a0 + l32e a0, a13, -16 + wsr a0, depc # replace the saved a0 +3: + xsr a3, excsave1 + movi a0, 0 + s32i a0, a3, EXC_TABLE_FIXUP + s32i a2, a3, EXC_TABLE_DOUBLE_SAVE 1: /* * Restore WindowBase while leaving all address registers restored. @@ -449,6 +453,7 @@ _DoubleExceptionVector_WindowOverflow: s32i a0, a2, PT_DEPC +_DoubleExceptionVector_handle_exception: addx4 a0, a0, a3 l32i a0, a0, EXC_TABLE_FAST_USER xsr a3, excsave1 @@ -464,11 +469,120 @@ _DoubleExceptionVector_WindowOverflow: rotw -3 j 1b - .end literal_prefix ENDPROC(_DoubleExceptionVector) /* + * Fixup handler for TLB miss in double exception handler for window owerflow. + * We get here with windowbase set to the window that was being spilled and + * a0 trashed. a0 bit 7 determines if this is a call8 (bit clear) or call12 + * (bit set) window. + * + * We do the following here: + * - go to the original window retaining a0 value; + * - set up exception stack to return back to appropriate a0 restore code + * (we'll need to rotate window back and there's no place to save this + * information, use different return address for that); + * - handle the exception; + * - go to the window that was being spilled; + * - set up window_overflow_restore_a0_fixup as a fixup routine; + * - reload a0; + * - restore the original window; + * - reset the default fixup routine; + * - return to user. By the time we get to this fixup handler all information + * about the conditions of the original double exception that happened in + * the window overflow handler is lost, so we just return to userspace to + * retry overflow from start. 
+ * + * a0: value of depc, original value in depc + * a2: trashed, original value in EXC_TABLE_DOUBLE_SAVE + * a3: exctable, original value in excsave1 + */ + +ENTRY(window_overflow_restore_a0_fixup) + + rsr a0, ps + extui a0, a0, PS_OWB_SHIFT, PS_OWB_WIDTH + rsr a2, windowbase + sub a0, a2, a0 + extui a0, a0, 0, 3 + l32i a2, a3, EXC_TABLE_DOUBLE_SAVE + xsr a3, excsave1 + + _beqi a0, 1, .Lhandle_1 + _beqi a0, 3, .Lhandle_3 + + .macro overflow_fixup_handle_exception_pane n + + rsr a0, depc + rotw -\n + + xsr a3, excsave1 + wsr a2, depc + l32i a2, a3, EXC_TABLE_KSTK + s32i a0, a2, PT_AREG0 + + movi a0, .Lrestore_\n + s32i a0, a2, PT_DEPC + rsr a0, exccause + j _DoubleExceptionVector_handle_exception + + .endm + + overflow_fixup_handle_exception_pane 2 +.Lhandle_1: + overflow_fixup_handle_exception_pane 1 +.Lhandle_3: + overflow_fixup_handle_exception_pane 3 + + .macro overflow_fixup_restore_a0_pane n + + rotw \n + /* Need to preserve a0 value here to be able to handle exception + * that may occur on a0 reload from stack. It may occur because + * TLB miss handler may not be atomic and pointer to page table + * may be lost before we get here. There are no free registers, + * so we need to use EXC_TABLE_DOUBLE_SAVE area. + */ + xsr a3, excsave1 + s32i a2, a3, EXC_TABLE_DOUBLE_SAVE + movi a2, window_overflow_restore_a0_fixup + s32i a2, a3, EXC_TABLE_FIXUP + l32i a2, a3, EXC_TABLE_DOUBLE_SAVE + xsr a3, excsave1 + bbsi.l a0, 7, 1f + l32e a0, a9, -16 + j 2f +1: + l32e a0, a13, -16 +2: + rotw -\n + + .endm + +.Lrestore_2: + overflow_fixup_restore_a0_pane 2 + +.Lset_default_fixup: + xsr a3, excsave1 + s32i a2, a3, EXC_TABLE_DOUBLE_SAVE + movi a2, 0 + s32i a2, a3, EXC_TABLE_FIXUP + l32i a2, a3, EXC_TABLE_DOUBLE_SAVE + xsr a3, excsave1 + rfe + +.Lrestore_1: + overflow_fixup_restore_a0_pane 1 + j .Lset_default_fixup +.Lrestore_3: + overflow_fixup_restore_a0_pane 3 + j .Lset_default_fixup + +ENDPROC(window_overflow_restore_a0_fixup) + + .end literal_prefix +/* * Debug interrupt vector * * There is not much space here, so simply jump to another handler. diff --git a/arch/xtensa/kernel/vmlinux.lds.S b/arch/xtensa/kernel/vmlinux.lds.S index ee32c0085dff..d16db6df86f8 100644 --- a/arch/xtensa/kernel/vmlinux.lds.S +++ b/arch/xtensa/kernel/vmlinux.lds.S @@ -269,13 +269,13 @@ SECTIONS .UserExceptionVector.literal) SECTION_VECTOR (_DoubleExceptionVector_literal, .DoubleExceptionVector.literal, - DOUBLEEXC_VECTOR_VADDR - 16, + DOUBLEEXC_VECTOR_VADDR - 40, SIZEOF(.UserExceptionVector.text), .UserExceptionVector.text) SECTION_VECTOR (_DoubleExceptionVector_text, .DoubleExceptionVector.text, DOUBLEEXC_VECTOR_VADDR, - 32, + 40, .DoubleExceptionVector.literal) . = (LOADADDR( .DoubleExceptionVector.text ) + SIZEOF( .DoubleExceptionVector.text ) + 3) & ~ 3; diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c index 4224256bb215..77ed20209ca5 100644 --- a/arch/xtensa/mm/init.c +++ b/arch/xtensa/mm/init.c @@ -191,7 +191,7 @@ int __init mem_reserve(unsigned long start, unsigned long end, int must_exist) return -EINVAL; } - if (it && start - it->start < bank_sz) { + if (it && start - it->start <= bank_sz) { if (start == it->start) { if (end - it->start < bank_sz) { it->start = end; diff --git a/block/bio.c b/block/bio.c index 8c2e55e39a1b..0ec61c9e536c 100644 --- a/block/bio.c +++ b/block/bio.c @@ -746,6 +746,14 @@ static int __bio_add_page(struct request_queue *q, struct bio *bio, struct page goto done; } + + /* + * If the queue doesn't support SG gaps and adding this + * offset would create a gap, disallow it. 
+ */ + if (q->queue_flags & (1 << QUEUE_FLAG_SG_GAPS) && + bvec_gap_to_prev(prev, offset)) + return 0; } if (bio->bi_vcnt >= bio->bi_max_vecs) diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index 069bc202ffe3..28d227c5ca77 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -80,7 +80,7 @@ static struct blkcg_gq *blkg_alloc(struct blkcg *blkcg, struct request_queue *q, blkg->q = q; INIT_LIST_HEAD(&blkg->q_node); blkg->blkcg = blkcg; - blkg->refcnt = 1; + atomic_set(&blkg->refcnt, 1); /* root blkg uses @q->root_rl, init rl only for !root blkgs */ if (blkcg != &blkcg_root) { @@ -399,11 +399,8 @@ void __blkg_release_rcu(struct rcu_head *rcu_head) /* release the blkcg and parent blkg refs this blkg has been holding */ css_put(&blkg->blkcg->css); - if (blkg->parent) { - spin_lock_irq(blkg->q->queue_lock); + if (blkg->parent) blkg_put(blkg->parent); - spin_unlock_irq(blkg->q->queue_lock); - } blkg_free(blkg); } @@ -875,6 +872,13 @@ void blkcg_drain_queue(struct request_queue *q) { lockdep_assert_held(q->queue_lock); + /* + * @q could be exiting and already have destroyed all blkgs as + * indicated by NULL root_blkg. If so, don't confuse policies. + */ + if (!q->root_blkg) + return; + blk_throtl_drain(q); } @@ -1093,7 +1097,7 @@ EXPORT_SYMBOL_GPL(blkcg_deactivate_policy); * Register @pol with blkcg core. Might sleep and @pol may be modified on * successful registration. Returns 0 on success and -errno on failure. */ -int __init blkcg_policy_register(struct blkcg_policy *pol) +int blkcg_policy_register(struct blkcg_policy *pol) { int i, ret; diff --git a/block/blk-cgroup.h b/block/blk-cgroup.h index cbb7f943f78a..d3fd7aa3d2a3 100644 --- a/block/blk-cgroup.h +++ b/block/blk-cgroup.h @@ -18,6 +18,7 @@ #include <linux/seq_file.h> #include <linux/radix-tree.h> #include <linux/blkdev.h> +#include <linux/atomic.h> /* Max limits for throttle policy */ #define THROTL_IOPS_MAX UINT_MAX @@ -104,7 +105,7 @@ struct blkcg_gq { struct request_list rl; /* reference count */ - int refcnt; + atomic_t refcnt; /* is this blkg online? protected by both blkcg and q locks */ bool online; @@ -145,7 +146,7 @@ void blkcg_drain_queue(struct request_queue *q); void blkcg_exit_queue(struct request_queue *q); /* Blkio controller policy registration */ -int __init blkcg_policy_register(struct blkcg_policy *pol); +int blkcg_policy_register(struct blkcg_policy *pol); void blkcg_policy_unregister(struct blkcg_policy *pol); int blkcg_activate_policy(struct request_queue *q, const struct blkcg_policy *pol); @@ -257,13 +258,12 @@ static inline int blkg_path(struct blkcg_gq *blkg, char *buf, int buflen) * blkg_get - get a blkg reference * @blkg: blkg to get * - * The caller should be holding queue_lock and an existing reference. + * The caller should be holding an existing reference. */ static inline void blkg_get(struct blkcg_gq *blkg) { - lockdep_assert_held(blkg->q->queue_lock); - WARN_ON_ONCE(!blkg->refcnt); - blkg->refcnt++; + WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0); + atomic_inc(&blkg->refcnt); } void __blkg_release_rcu(struct rcu_head *rcu); @@ -271,14 +271,11 @@ void __blkg_release_rcu(struct rcu_head *rcu); /** * blkg_put - put a blkg reference * @blkg: blkg to put - * - * The caller should be holding queue_lock. 
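The blk-cgroup change above converts blkg->refcnt from a queue_lock-protected int into an atomic_t, so blkg_get() (and blkg_put(), just below) no longer need the queue lock, and the final put can defer freeing through call_rcu() without taking it either. A user-space analogue of the pattern using C11 atomics; free() stands in for the RCU-deferred release:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
    atomic_int refcnt;
    /* payload ... */
};

static struct obj *obj_alloc(void)
{
    struct obj *o = calloc(1, sizeof(*o));

    if (o)
        atomic_init(&o->refcnt, 1);   /* caller holds the initial reference */
    return o;
}

static void obj_get(struct obj *o)
{
    /* No lock needed: analogous to atomic_inc(&blkg->refcnt). */
    atomic_fetch_add_explicit(&o->refcnt, 1, memory_order_relaxed);
}

static void obj_put(struct obj *o)
{
    /* Analogous to atomic_dec_and_test(): the last reference frees the object. */
    if (atomic_fetch_sub_explicit(&o->refcnt, 1, memory_order_acq_rel) == 1)
        free(o);                      /* the kernel defers this via call_rcu() */
}

int main(void)
{
    struct obj *o = obj_alloc();

    obj_get(o);    /* second user */
    obj_put(o);    /* drops to 1 */
    obj_put(o);    /* drops to 0, freed */
    printf("done\n");
    return 0;
}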
*/ static inline void blkg_put(struct blkcg_gq *blkg) { - lockdep_assert_held(blkg->q->queue_lock); - WARN_ON_ONCE(blkg->refcnt <= 0); - if (!--blkg->refcnt) + WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0); + if (atomic_dec_and_test(&blkg->refcnt)) call_rcu(&blkg->rcu_head, __blkg_release_rcu); } @@ -580,7 +577,7 @@ static inline struct blkcg_gq *blkg_lookup(struct blkcg *blkcg, void *key) { ret static inline int blkcg_init_queue(struct request_queue *q) { return 0; } static inline void blkcg_drain_queue(struct request_queue *q) { } static inline void blkcg_exit_queue(struct request_queue *q) { } -static inline int __init blkcg_policy_register(struct blkcg_policy *pol) { return 0; } +static inline int blkcg_policy_register(struct blkcg_policy *pol) { return 0; } static inline void blkcg_policy_unregister(struct blkcg_policy *pol) { } static inline int blkcg_activate_policy(struct request_queue *q, const struct blkcg_policy *pol) { return 0; } diff --git a/block/blk-merge.c b/block/blk-merge.c index b3bf0df0f4c2..54535831f1e1 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -568,6 +568,8 @@ int blk_attempt_req_merge(struct request_queue *q, struct request *rq, bool blk_rq_merge_ok(struct request *rq, struct bio *bio) { + struct request_queue *q = rq->q; + if (!rq_mergeable(rq) || !bio_mergeable(bio)) return false; @@ -591,6 +593,14 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio) !blk_write_same_mergeable(rq->bio, bio)) return false; + if (q->queue_flags & (1 << QUEUE_FLAG_SG_GAPS)) { + struct bio_vec *bprev; + + bprev = &rq->biotail->bi_io_vec[bio->bi_vcnt - 1]; + if (bvec_gap_to_prev(bprev, bio->bi_io_vec[0].bv_offset)) + return false; + } + return true; } diff --git a/block/blk-mq.c b/block/blk-mq.c index 0ef2dc7f01bf..ad69ef657e85 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -878,7 +878,7 @@ void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx) clear_bit(BLK_MQ_S_STOPPED, &hctx->state); preempt_disable(); - __blk_mq_run_hw_queue(hctx); + blk_mq_run_hw_queue(hctx, false); preempt_enable(); } EXPORT_SYMBOL(blk_mq_start_hw_queue); diff --git a/block/blk-tag.c b/block/blk-tag.c index 3f33d8672268..a185b86741e5 100644 --- a/block/blk-tag.c +++ b/block/blk-tag.c @@ -27,18 +27,15 @@ struct request *blk_queue_find_tag(struct request_queue *q, int tag) EXPORT_SYMBOL(blk_queue_find_tag); /** - * __blk_free_tags - release a given set of tag maintenance info + * blk_free_tags - release a given set of tag maintenance info * @bqt: the tag map to free * - * Tries to free the specified @bqt. Returns true if it was - * actually freed and false if there are still references using it + * Drop the reference count on @bqt and frees it when the last reference + * is dropped. 
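The bio.c and blk-merge.c hunks above consult a new QUEUE_FLAG_SG_GAPS flag so that drivers with strict scatter-gather rules can refuse a page, or a request merge, that would leave a hole in the middle of a request. bvec_gap_to_prev() itself is not part of these hunks; the sketch below encodes the assumed PRP-style rule that only the first segment of a request may start mid-page and only the last may end mid-page, with an illustrative stand-in for struct bio_vec:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

struct bio_vec_lite {           /* illustrative stand-in for struct bio_vec */
    unsigned int bv_offset;
    unsigned int bv_len;
};

/*
 * Assumed gap rule: appending a segment that does not start at offset 0,
 * or appending after a segment that does not end on a page boundary,
 * would create a hole in the SG list.
 */
static bool gap_to_prev(const struct bio_vec_lite *prev, unsigned int offset)
{
    return offset != 0 || ((prev->bv_offset + prev->bv_len) & (PAGE_SIZE - 1));
}

int main(void)
{
    struct bio_vec_lite full_page = { .bv_offset = 0,   .bv_len = PAGE_SIZE };
    struct bio_vec_lite partial   = { .bv_offset = 512, .bv_len = 1024 };

    printf("%d %d\n",
           gap_to_prev(&full_page, 0),     /* 0: contiguous, no gap   */
           gap_to_prev(&partial, 256));    /* 1: would leave a hole   */
    return 0;
}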
*/ -static int __blk_free_tags(struct blk_queue_tag *bqt) +void blk_free_tags(struct blk_queue_tag *bqt) { - int retval; - - retval = atomic_dec_and_test(&bqt->refcnt); - if (retval) { + if (atomic_dec_and_test(&bqt->refcnt)) { BUG_ON(find_first_bit(bqt->tag_map, bqt->max_depth) < bqt->max_depth); @@ -50,9 +47,8 @@ static int __blk_free_tags(struct blk_queue_tag *bqt) kfree(bqt); } - - return retval; } +EXPORT_SYMBOL(blk_free_tags); /** * __blk_queue_free_tags - release tag maintenance info @@ -69,28 +65,13 @@ void __blk_queue_free_tags(struct request_queue *q) if (!bqt) return; - __blk_free_tags(bqt); + blk_free_tags(bqt); q->queue_tags = NULL; queue_flag_clear_unlocked(QUEUE_FLAG_QUEUED, q); } /** - * blk_free_tags - release a given set of tag maintenance info - * @bqt: the tag map to free - * - * For externally managed @bqt frees the map. Callers of this - * function must guarantee to have released all the queues that - * might have been using this tag map. - */ -void blk_free_tags(struct blk_queue_tag *bqt) -{ - if (unlikely(!__blk_free_tags(bqt))) - BUG(); -} -EXPORT_SYMBOL(blk_free_tags); - -/** * blk_queue_free_tags - release tag maintenance info * @q: the request queue for the device * diff --git a/block/compat_ioctl.c b/block/compat_ioctl.c index fbd5a67cb773..a0926a6094b2 100644 --- a/block/compat_ioctl.c +++ b/block/compat_ioctl.c @@ -690,6 +690,7 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg) case BLKROSET: case BLKDISCARD: case BLKSECDISCARD: + case BLKZEROOUT: /* * the ones below are implemented in blkdev_locked_ioctl, * but we call blkdev_ioctl, which gets the lock for us diff --git a/block/elevator.c b/block/elevator.c index 34bded18910e..24c28b659bb3 100644 --- a/block/elevator.c +++ b/block/elevator.c @@ -825,7 +825,7 @@ void elv_unregister_queue(struct request_queue *q) } EXPORT_SYMBOL(elv_unregister_queue); -int __init elv_register(struct elevator_type *e) +int elv_register(struct elevator_type *e) { char *def = ""; diff --git a/drivers/acpi/ac.c b/drivers/acpi/ac.c index c67f6f5ad611..36b0e61f9c09 100644 --- a/drivers/acpi/ac.c +++ b/drivers/acpi/ac.c @@ -30,6 +30,10 @@ #include <linux/types.h> #include <linux/dmi.h> #include <linux/delay.h> +#ifdef CONFIG_ACPI_PROCFS_POWER +#include <linux/proc_fs.h> +#include <linux/seq_file.h> +#endif #include <linux/platform_device.h> #include <linux/power_supply.h> #include <linux/acpi.h> @@ -52,6 +56,7 @@ MODULE_AUTHOR("Paul Diefenbaugh"); MODULE_DESCRIPTION("ACPI AC Adapter Driver"); MODULE_LICENSE("GPL"); + static int acpi_ac_add(struct acpi_device *device); static int acpi_ac_remove(struct acpi_device *device); static void acpi_ac_notify(struct acpi_device *device, u32 event); @@ -67,6 +72,13 @@ static int acpi_ac_resume(struct device *dev); #endif static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume); +#ifdef CONFIG_ACPI_PROCFS_POWER +extern struct proc_dir_entry *acpi_lock_ac_dir(void); +extern void *acpi_unlock_ac_dir(struct proc_dir_entry *acpi_ac_dir); +static int acpi_ac_open_fs(struct inode *inode, struct file *file); +#endif + + static int ac_sleep_before_get_state_ms; static struct acpi_driver acpi_ac_driver = { @@ -91,6 +103,16 @@ struct acpi_ac { #define to_acpi_ac(x) container_of(x, struct acpi_ac, charger) +#ifdef CONFIG_ACPI_PROCFS_POWER +static const struct file_operations acpi_ac_fops = { + .owner = THIS_MODULE, + .open = acpi_ac_open_fs, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; +#endif + /* 
-------------------------------------------------------------------------- AC Adapter Management -------------------------------------------------------------------------- */ @@ -143,6 +165,83 @@ static enum power_supply_property ac_props[] = { POWER_SUPPLY_PROP_ONLINE, }; +#ifdef CONFIG_ACPI_PROCFS_POWER +/* -------------------------------------------------------------------------- + FS Interface (/proc) + -------------------------------------------------------------------------- */ + +static struct proc_dir_entry *acpi_ac_dir; + +static int acpi_ac_seq_show(struct seq_file *seq, void *offset) +{ + struct acpi_ac *ac = seq->private; + + + if (!ac) + return 0; + + if (acpi_ac_get_state(ac)) { + seq_puts(seq, "ERROR: Unable to read AC Adapter state\n"); + return 0; + } + + seq_puts(seq, "state: "); + switch (ac->state) { + case ACPI_AC_STATUS_OFFLINE: + seq_puts(seq, "off-line\n"); + break; + case ACPI_AC_STATUS_ONLINE: + seq_puts(seq, "on-line\n"); + break; + default: + seq_puts(seq, "unknown\n"); + break; + } + + return 0; +} + +static int acpi_ac_open_fs(struct inode *inode, struct file *file) +{ + return single_open(file, acpi_ac_seq_show, PDE_DATA(inode)); +} + +static int acpi_ac_add_fs(struct acpi_ac *ac) +{ + struct proc_dir_entry *entry = NULL; + + printk(KERN_WARNING PREFIX "Deprecated procfs I/F for AC is loaded," + " please retry with CONFIG_ACPI_PROCFS_POWER cleared\n"); + if (!acpi_device_dir(ac->device)) { + acpi_device_dir(ac->device) = + proc_mkdir(acpi_device_bid(ac->device), acpi_ac_dir); + if (!acpi_device_dir(ac->device)) + return -ENODEV; + } + + /* 'state' [R] */ + entry = proc_create_data(ACPI_AC_FILE_STATE, + S_IRUGO, acpi_device_dir(ac->device), + &acpi_ac_fops, ac); + if (!entry) + return -ENODEV; + return 0; +} + +static int acpi_ac_remove_fs(struct acpi_ac *ac) +{ + + if (acpi_device_dir(ac->device)) { + remove_proc_entry(ACPI_AC_FILE_STATE, + acpi_device_dir(ac->device)); + remove_proc_entry(acpi_device_bid(ac->device), acpi_ac_dir); + acpi_device_dir(ac->device) = NULL; + } + + return 0; +} +#endif + /* -------------------------------------------------------------------------- Driver Model -------------------------------------------------------------------------- */ @@ -243,6 +342,11 @@ static int acpi_ac_add(struct acpi_device *device) goto end; ac->charger.name = acpi_device_bid(device); +#ifdef CONFIG_ACPI_PROCFS_POWER + result = acpi_ac_add_fs(ac); + if (result) + goto end; +#endif ac->charger.type = POWER_SUPPLY_TYPE_MAINS; ac->charger.properties = ac_props; ac->charger.num_properties = ARRAY_SIZE(ac_props); @@ -258,8 +362,12 @@ static int acpi_ac_add(struct acpi_device *device) ac->battery_nb.notifier_call = acpi_ac_battery_notify; register_acpi_notifier(&ac->battery_nb); end: - if (result) + if (result) { +#ifdef CONFIG_ACPI_PROCFS_POWER + acpi_ac_remove_fs(ac); +#endif kfree(ac); + } dmi_check_system(ac_dmi_table); return result; @@ -303,6 +411,10 @@ static int acpi_ac_remove(struct acpi_device *device) power_supply_unregister(&ac->charger); unregister_acpi_notifier(&ac->battery_nb); +#ifdef CONFIG_ACPI_PROCFS_POWER + acpi_ac_remove_fs(ac); +#endif + kfree(ac); return 0; @@ -315,9 +427,20 @@ static int __init acpi_ac_init(void) if (acpi_disabled) return -ENODEV; +#ifdef CONFIG_ACPI_PROCFS_POWER + acpi_ac_dir = acpi_lock_ac_dir(); + if (!acpi_ac_dir) + return -ENODEV; +#endif + + result = acpi_bus_register_driver(&acpi_ac_driver); - if (result < 0) + if (result < 0) { +#ifdef CONFIG_ACPI_PROCFS_POWER + acpi_unlock_ac_dir(acpi_ac_dir); +#endif return 
-ENODEV; + } return 0; } @@ -325,6 +448,9 @@ static int __init acpi_ac_init(void) static void __exit acpi_ac_exit(void) { acpi_bus_unregister_driver(&acpi_ac_driver); +#ifdef CONFIG_ACPI_PROCFS_POWER + acpi_unlock_ac_dir(acpi_ac_dir); +#endif } module_init(acpi_ac_init); module_exit(acpi_ac_exit); diff --git a/drivers/acpi/acpi_pnp.c b/drivers/acpi/acpi_pnp.c index 6703c1fd993a..4ddb0dca56f6 100644 --- a/drivers/acpi/acpi_pnp.c +++ b/drivers/acpi/acpi_pnp.c @@ -14,6 +14,8 @@ #include <linux/module.h> static const struct acpi_device_id acpi_pnp_device_ids[] = { + /* soc_button_array */ + {"PNP0C40"}, /* pata_isapnp */ {"PNP0600"}, /* Generic ESDI/IDE/ATA compatible hard disk controller */ /* floppy */ diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c index 0d7116f34b95..130f513e08c9 100644 --- a/drivers/acpi/battery.c +++ b/drivers/acpi/battery.c @@ -35,6 +35,7 @@ #include <linux/delay.h> #include <linux/slab.h> #include <linux/suspend.h> +#include <linux/delay.h> #include <asm/unaligned.h> #ifdef CONFIG_ACPI_PROCFS_POWER @@ -534,6 +535,20 @@ static int acpi_battery_get_state(struct acpi_battery *battery) " invalid.\n"); } + /* + * When fully charged, some batteries wrongly report + * capacity_now = design_capacity instead of = full_charge_capacity + */ + if (battery->capacity_now > battery->full_charge_capacity + && battery->full_charge_capacity != ACPI_BATTERY_VALUE_UNKNOWN) { + battery->capacity_now = battery->full_charge_capacity; + if (battery->capacity_now != battery->design_capacity) + printk_once(KERN_WARNING FW_BUG + "battery: reported current charge level (%d) " + "is higher than reported maximum charge level (%d).\n", + battery->capacity_now, battery->full_charge_capacity); + } + if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags) && battery->capacity_now >= 0 && battery->capacity_now <= 100) battery->capacity_now = (battery->capacity_now * @@ -1151,6 +1166,28 @@ static struct dmi_system_id bat_dmi_table[] = { {}, }; +/* + * Some machines'(E,G Lenovo Z480) ECs are not stable + * during boot up and this causes battery driver fails to be + * probed due to failure of getting battery information + * from EC sometimes. After several retries, the operation + * may work. So add retry code here and 20ms sleep between + * every retries. 
+ */ +static int acpi_battery_update_retry(struct acpi_battery *battery) +{ + int retry, ret; + + for (retry = 5; retry; retry--) { + ret = acpi_battery_update(battery, false); + if (!ret) + break; + + msleep(20); + } + return ret; +} + static int acpi_battery_add(struct acpi_device *device) { int result = 0; @@ -1169,9 +1206,11 @@ static int acpi_battery_add(struct acpi_device *device) mutex_init(&battery->sysfs_lock); if (acpi_has_method(battery->device->handle, "_BIX")) set_bit(ACPI_BATTERY_XINFO_PRESENT, &battery->flags); - result = acpi_battery_update(battery, false); + + result = acpi_battery_update_retry(battery); if (result) goto fail; + #ifdef CONFIG_ACPI_PROCFS_POWER result = acpi_battery_add_fs(device); #endif diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c index ad11ba4a412d..a66ab658abbc 100644 --- a/drivers/acpi/ec.c +++ b/drivers/acpi/ec.c @@ -1,11 +1,14 @@ /* - * ec.c - ACPI Embedded Controller Driver (v2.1) + * ec.c - ACPI Embedded Controller Driver (v2.2) * - * Copyright (C) 2006-2008 Alexey Starikovskiy <astarikovskiy@suse.de> - * Copyright (C) 2006 Denis Sadykov <denis.m.sadykov@intel.com> - * Copyright (C) 2004 Luming Yu <luming.yu@intel.com> - * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com> - * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com> + * Copyright (C) 2001-2014 Intel Corporation + * Author: 2014 Lv Zheng <lv.zheng@intel.com> + * 2006, 2007 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com> + * 2006 Denis Sadykov <denis.m.sadykov@intel.com> + * 2004 Luming Yu <luming.yu@intel.com> + * 2001, 2002 Andy Grover <andrew.grover@intel.com> + * 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com> + * Copyright (C) 2008 Alexey Starikovskiy <astarikovskiy@suse.de> * * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * @@ -52,6 +55,7 @@ /* EC status register */ #define ACPI_EC_FLAG_OBF 0x01 /* Output buffer full */ #define ACPI_EC_FLAG_IBF 0x02 /* Input buffer full */ +#define ACPI_EC_FLAG_CMD 0x08 /* Input buffer contains a command */ #define ACPI_EC_FLAG_BURST 0x10 /* burst mode */ #define ACPI_EC_FLAG_SCI 0x20 /* EC-SCI occurred */ @@ -78,6 +82,9 @@ enum { EC_FLAGS_BLOCKED, /* Transactions are blocked */ }; +#define ACPI_EC_COMMAND_POLL 0x01 /* Available for command byte */ +#define ACPI_EC_COMMAND_COMPLETE 0x02 /* Completed last byte */ + /* ec.c is compiled in acpi namespace so this shows up as acpi.ec_delay param */ static unsigned int ec_delay __read_mostly = ACPI_EC_DELAY; module_param(ec_delay, uint, 0644); @@ -109,7 +116,7 @@ struct transaction { u8 ri; u8 wlen; u8 rlen; - bool done; + u8 flags; }; struct acpi_ec *boot_ec, *first_ec; @@ -127,83 +134,104 @@ static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */ static inline u8 acpi_ec_read_status(struct acpi_ec *ec) { u8 x = inb(ec->command_addr); - pr_debug("---> status = 0x%2.2x\n", x); + pr_debug("EC_SC(R) = 0x%2.2x " + "SCI_EVT=%d BURST=%d CMD=%d IBF=%d OBF=%d\n", + x, + !!(x & ACPI_EC_FLAG_SCI), + !!(x & ACPI_EC_FLAG_BURST), + !!(x & ACPI_EC_FLAG_CMD), + !!(x & ACPI_EC_FLAG_IBF), + !!(x & ACPI_EC_FLAG_OBF)); return x; } static inline u8 acpi_ec_read_data(struct acpi_ec *ec) { u8 x = inb(ec->data_addr); - pr_debug("---> data = 0x%2.2x\n", x); + pr_debug("EC_DATA(R) = 0x%2.2x\n", x); return x; } static inline void acpi_ec_write_cmd(struct acpi_ec *ec, u8 command) { - pr_debug("<--- command = 0x%2.2x\n", command); + pr_debug("EC_SC(W) = 0x%2.2x\n", command); outb(command, ec->command_addr); } static inline 
void acpi_ec_write_data(struct acpi_ec *ec, u8 data) { - pr_debug("<--- data = 0x%2.2x\n", data); + pr_debug("EC_DATA(W) = 0x%2.2x\n", data); outb(data, ec->data_addr); } -static int ec_transaction_done(struct acpi_ec *ec) +static int ec_transaction_completed(struct acpi_ec *ec) { unsigned long flags; int ret = 0; spin_lock_irqsave(&ec->lock, flags); - if (!ec->curr || ec->curr->done) + if (ec->curr && (ec->curr->flags & ACPI_EC_COMMAND_COMPLETE)) ret = 1; spin_unlock_irqrestore(&ec->lock, flags); return ret; } -static void start_transaction(struct acpi_ec *ec) +static bool advance_transaction(struct acpi_ec *ec) { - ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0; - ec->curr->done = false; - acpi_ec_write_cmd(ec, ec->curr->command); -} - -static void advance_transaction(struct acpi_ec *ec, u8 status) -{ - unsigned long flags; struct transaction *t; + u8 status; + bool wakeup = false; - spin_lock_irqsave(&ec->lock, flags); + pr_debug("===== %s =====\n", in_interrupt() ? "IRQ" : "TASK"); + status = acpi_ec_read_status(ec); t = ec->curr; if (!t) - goto unlock; - if (t->wlen > t->wi) { - if ((status & ACPI_EC_FLAG_IBF) == 0) - acpi_ec_write_data(ec, - t->wdata[t->wi++]); - else - goto err; - } else if (t->rlen > t->ri) { - if ((status & ACPI_EC_FLAG_OBF) == 1) { - t->rdata[t->ri++] = acpi_ec_read_data(ec); - if (t->rlen == t->ri) - t->done = true; + goto err; + if (t->flags & ACPI_EC_COMMAND_POLL) { + if (t->wlen > t->wi) { + if ((status & ACPI_EC_FLAG_IBF) == 0) + acpi_ec_write_data(ec, t->wdata[t->wi++]); + else + goto err; + } else if (t->rlen > t->ri) { + if ((status & ACPI_EC_FLAG_OBF) == 1) { + t->rdata[t->ri++] = acpi_ec_read_data(ec); + if (t->rlen == t->ri) { + t->flags |= ACPI_EC_COMMAND_COMPLETE; + wakeup = true; + } + } else + goto err; + } else if (t->wlen == t->wi && + (status & ACPI_EC_FLAG_IBF) == 0) { + t->flags |= ACPI_EC_COMMAND_COMPLETE; + wakeup = true; + } + return wakeup; + } else { + if ((status & ACPI_EC_FLAG_IBF) == 0) { + acpi_ec_write_cmd(ec, t->command); + t->flags |= ACPI_EC_COMMAND_POLL; } else goto err; - } else if (t->wlen == t->wi && - (status & ACPI_EC_FLAG_IBF) == 0) - t->done = true; - goto unlock; + return wakeup; + } err: /* * If SCI bit is set, then don't think it's a false IRQ * otherwise will take a not handled IRQ as a false one. 
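 * In other words, the spurious-IRQ counter below is only advanced while
 * SCI_EVT is clear, so genuine EC event interrupts are never counted as
 * spurious ones.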
*/ - if (in_interrupt() && !(status & ACPI_EC_FLAG_SCI)) - ++t->irq_count; + if (!(status & ACPI_EC_FLAG_SCI)) { + if (in_interrupt() && t) + ++t->irq_count; + } + return wakeup; +} -unlock: - spin_unlock_irqrestore(&ec->lock, flags); +static void start_transaction(struct acpi_ec *ec) +{ + ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0; + ec->curr->flags = 0; + (void)advance_transaction(ec); } static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data); @@ -228,15 +256,17 @@ static int ec_poll(struct acpi_ec *ec) /* don't sleep with disabled interrupts */ if (EC_FLAGS_MSI || irqs_disabled()) { udelay(ACPI_EC_MSI_UDELAY); - if (ec_transaction_done(ec)) + if (ec_transaction_completed(ec)) return 0; } else { if (wait_event_timeout(ec->wait, - ec_transaction_done(ec), + ec_transaction_completed(ec), msecs_to_jiffies(1))) return 0; } - advance_transaction(ec, acpi_ec_read_status(ec)); + spin_lock_irqsave(&ec->lock, flags); + (void)advance_transaction(ec); + spin_unlock_irqrestore(&ec->lock, flags); } while (time_before(jiffies, delay)); pr_debug("controller reset, restart transaction\n"); spin_lock_irqsave(&ec->lock, flags); @@ -268,23 +298,6 @@ static int acpi_ec_transaction_unlocked(struct acpi_ec *ec, return ret; } -static int ec_check_ibf0(struct acpi_ec *ec) -{ - u8 status = acpi_ec_read_status(ec); - return (status & ACPI_EC_FLAG_IBF) == 0; -} - -static int ec_wait_ibf0(struct acpi_ec *ec) -{ - unsigned long delay = jiffies + msecs_to_jiffies(ec_delay); - /* interrupt wait manually if GPE mode is not active */ - while (time_before(jiffies, delay)) - if (wait_event_timeout(ec->wait, ec_check_ibf0(ec), - msecs_to_jiffies(1))) - return 0; - return -ETIME; -} - static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t) { int status; @@ -305,12 +318,6 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t) goto unlock; } } - if (ec_wait_ibf0(ec)) { - pr_err("input buffer is not empty, " - "aborting transaction\n"); - status = -ETIME; - goto end; - } pr_debug("transaction start (cmd=0x%02x, addr=0x%02x)\n", t->command, t->wdata ? 
t->wdata[0] : 0); /* disable GPE during transaction if storm is detected */ @@ -334,7 +341,6 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t) set_bit(EC_FLAGS_GPE_STORM, &ec->flags); } pr_debug("transaction end\n"); -end: if (ec->global_lock) acpi_release_global_lock(glk); unlock: @@ -634,17 +640,14 @@ static int ec_check_sci(struct acpi_ec *ec, u8 state) static u32 acpi_ec_gpe_handler(acpi_handle gpe_device, u32 gpe_number, void *data) { + unsigned long flags; struct acpi_ec *ec = data; - u8 status = acpi_ec_read_status(ec); - pr_debug("~~~> interrupt, status:0x%02x\n", status); - - advance_transaction(ec, status); - if (ec_transaction_done(ec) && - (acpi_ec_read_status(ec) & ACPI_EC_FLAG_IBF) == 0) { + spin_lock_irqsave(&ec->lock, flags); + if (advance_transaction(ec)) wake_up(&ec->wait); - ec_check_sci(ec, acpi_ec_read_status(ec)); - } + spin_unlock_irqrestore(&ec->lock, flags); + ec_check_sci(ec, acpi_ec_read_status(ec)); return ACPI_INTERRUPT_HANDLED | ACPI_REENABLE_GPE; } @@ -1066,8 +1069,10 @@ int __init acpi_ec_ecdt_probe(void) /* fall through */ } - if (EC_FLAGS_SKIP_DSDT_SCAN) + if (EC_FLAGS_SKIP_DSDT_SCAN) { + kfree(saved_ec); return -ENODEV; + } /* This workaround is needed only on some broken machines, * which require early EC, but fail to provide ECDT */ @@ -1105,6 +1110,7 @@ install: } error: kfree(boot_ec); + kfree(saved_ec); boot_ec = NULL; return -ENODEV; } diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c index 0bdacc5e26a3..2ba8f02ced36 100644 --- a/drivers/acpi/resource.c +++ b/drivers/acpi/resource.c @@ -77,7 +77,7 @@ bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res) switch (ares->type) { case ACPI_RESOURCE_TYPE_MEMORY24: memory24 = &ares->data.memory24; - if (!memory24->address_length) + if (!memory24->minimum && !memory24->address_length) return false; acpi_dev_get_memresource(res, memory24->minimum, memory24->address_length, @@ -85,7 +85,7 @@ bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res) break; case ACPI_RESOURCE_TYPE_MEMORY32: memory32 = &ares->data.memory32; - if (!memory32->address_length) + if (!memory32->minimum && !memory32->address_length) return false; acpi_dev_get_memresource(res, memory32->minimum, memory32->address_length, @@ -93,7 +93,7 @@ bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res) break; case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: fixed_memory32 = &ares->data.fixed_memory32; - if (!fixed_memory32->address_length) + if (!fixed_memory32->address && !fixed_memory32->address_length) return false; acpi_dev_get_memresource(res, fixed_memory32->address, fixed_memory32->address_length, @@ -150,7 +150,7 @@ bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res) switch (ares->type) { case ACPI_RESOURCE_TYPE_IO: io = &ares->data.io; - if (!io->address_length) + if (!io->minimum && !io->address_length) return false; acpi_dev_get_ioresource(res, io->minimum, io->address_length, @@ -158,7 +158,7 @@ bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res) break; case ACPI_RESOURCE_TYPE_FIXED_IO: fixed_io = &ares->data.fixed_io; - if (!fixed_io->address_length) + if (!fixed_io->address && !fixed_io->address_length) return false; acpi_dev_get_ioresource(res, fixed_io->address, fixed_io->address_length, diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c index fb9ffe9adc64..350d52a8f781 100644 --- a/drivers/acpi/video.c +++ b/drivers/acpi/video.c @@ -68,7 +68,7 @@ MODULE_AUTHOR("Bruno 
Ducrot"); MODULE_DESCRIPTION("ACPI Video Driver"); MODULE_LICENSE("GPL"); -static bool brightness_switch_enabled; +static bool brightness_switch_enabled = 1; module_param(brightness_switch_enabled, bool, 0644); /* @@ -241,13 +241,14 @@ static bool acpi_video_use_native_backlight(void) return use_native_backlight_dmi; } -static bool acpi_video_verify_backlight_support(void) +bool acpi_video_verify_backlight_support(void) { if (acpi_osi_is_win8() && acpi_video_use_native_backlight() && backlight_device_registered(BACKLIGHT_RAW)) return false; return acpi_video_backlight_support(); } +EXPORT_SYMBOL_GPL(acpi_video_verify_backlight_support); /* backlight device sysfs support */ static int acpi_video_get_brightness(struct backlight_device *bd) @@ -563,6 +564,14 @@ static struct dmi_system_id video_dmi_table[] __initdata = { }, }, { + .callback = video_set_use_native_backlight, + .ident = "Acer TravelMate B113", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), + DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate B113"), + }, + }, + { .callback = video_set_use_native_backlight, .ident = "HP ProBook 4340s", .matches = { @@ -572,6 +581,14 @@ static struct dmi_system_id video_dmi_table[] __initdata = { }, { .callback = video_set_use_native_backlight, + .ident = "HP ProBook 4540s", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), + DMI_MATCH(DMI_PRODUCT_VERSION, "HP ProBook 4540s"), + }, + }, + { + .callback = video_set_use_native_backlight, .ident = "HP ProBook 2013 models", .matches = { DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c index 33e3db548a29..c42feb2bacd0 100644 --- a/drivers/acpi/video_detect.c +++ b/drivers/acpi/video_detect.c @@ -166,6 +166,14 @@ static struct dmi_system_id video_detect_dmi_table[] = { DMI_MATCH(DMI_PRODUCT_NAME, "UL30A"), }, }, + { + .callback = video_detect_force_vendor, + .ident = "Dell Inspiron 5737", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), + DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 5737"), + }, + }, { }, }; diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c index dae5607e1115..4cd52a4541a9 100644 --- a/drivers/ata/ahci.c +++ b/drivers/ata/ahci.c @@ -456,6 +456,7 @@ static const struct pci_device_id ahci_pci_tbl[] = { /* Promise */ { PCI_VDEVICE(PROMISE, 0x3f20), board_ahci }, /* PDC42819 */ + { PCI_VDEVICE(PROMISE, 0x3781), board_ahci }, /* FastTrak TX8660 ahci-mode */ /* Asmedia */ { PCI_VDEVICE(ASMEDIA, 0x0601), board_ahci }, /* ASM1060 */ diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h index 05882e4445a6..5513296e5e2e 100644 --- a/drivers/ata/ahci.h +++ b/drivers/ata/ahci.h @@ -371,7 +371,9 @@ int ahci_do_softreset(struct ata_link *link, unsigned int *class, int pmp, unsigned long deadline, int (*check_ready)(struct ata_link *link)); +unsigned int ahci_qc_issue(struct ata_queued_cmd *qc); int ahci_stop_engine(struct ata_port *ap); +void ahci_start_fis_rx(struct ata_port *ap); void ahci_start_engine(struct ata_port *ap); int ahci_check_ready(struct ata_link *link); int ahci_kick_engine(struct ata_port *ap); diff --git a/drivers/ata/ahci_imx.c b/drivers/ata/ahci_imx.c index 3a901520c62b..cac4360f272a 100644 --- a/drivers/ata/ahci_imx.c +++ b/drivers/ata/ahci_imx.c @@ -58,6 +58,8 @@ enum ahci_imx_type { struct imx_ahci_priv { struct platform_device *ahci_pdev; enum ahci_imx_type type; + struct clk *sata_clk; + struct clk *sata_ref_clk; struct clk *ahb_clk; struct regmap *gpr; bool no_device; @@ -224,7 +226,7 @@ static int imx_sata_enable(struct ahci_host_priv *hpriv) 
return ret; } - ret = ahci_platform_enable_clks(hpriv); + ret = clk_prepare_enable(imxpriv->sata_ref_clk); if (ret < 0) goto disable_regulator; @@ -291,7 +293,7 @@ static void imx_sata_disable(struct ahci_host_priv *hpriv) !IMX6Q_GPR13_SATA_MPLL_CLK_EN); } - ahci_platform_disable_clks(hpriv); + clk_disable_unprepare(imxpriv->sata_ref_clk); if (hpriv->target_pwr) regulator_disable(hpriv->target_pwr); @@ -324,6 +326,9 @@ static void ahci_imx_error_handler(struct ata_port *ap) writel(reg_val | IMX_P0PHYCR_TEST_PDDQ, mmio + IMX_P0PHYCR); imx_sata_disable(hpriv); imxpriv->no_device = true; + + dev_info(ap->dev, "no device found, disabling link.\n"); + dev_info(ap->dev, "pass " MODULE_PARAM_PREFIX ".hotplug=1 to enable hotplug\n"); } static int ahci_imx_softreset(struct ata_link *link, unsigned int *class, @@ -385,6 +390,19 @@ static int imx_ahci_probe(struct platform_device *pdev) imxpriv->no_device = false; imxpriv->first_time = true; imxpriv->type = (enum ahci_imx_type)of_id->data; + + imxpriv->sata_clk = devm_clk_get(dev, "sata"); + if (IS_ERR(imxpriv->sata_clk)) { + dev_err(dev, "can't get sata clock.\n"); + return PTR_ERR(imxpriv->sata_clk); + } + + imxpriv->sata_ref_clk = devm_clk_get(dev, "sata_ref"); + if (IS_ERR(imxpriv->sata_ref_clk)) { + dev_err(dev, "can't get sata_ref clock.\n"); + return PTR_ERR(imxpriv->sata_ref_clk); + } + imxpriv->ahb_clk = devm_clk_get(dev, "ahb"); if (IS_ERR(imxpriv->ahb_clk)) { dev_err(dev, "can't get ahb clock.\n"); @@ -407,10 +425,14 @@ static int imx_ahci_probe(struct platform_device *pdev) hpriv->plat_data = imxpriv; - ret = imx_sata_enable(hpriv); + ret = clk_prepare_enable(imxpriv->sata_clk); if (ret) return ret; + ret = imx_sata_enable(hpriv); + if (ret) + goto disable_clk; + /* * Configure the HWINIT bits of the HOST_CAP and HOST_PORTS_IMPL, * and IP vendor specific register IMX_TIMER1MS. 
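
/*
 * A minimal, self-contained sketch of the clock handling pattern the
 * imx_ahci_probe()/imx_sata_enable() changes above follow: look a clock up
 * with devm_clk_get(), enable it with clk_prepare_enable(), and undo with
 * clk_disable_unprepare() on teardown or on any later failure.  The clock
 * name and helper below are illustrative only, not the driver's actual ones.
 */
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

static int example_get_and_enable_clk(struct device *dev, struct clk **out)
{
	struct clk *clk;
	int ret;

	clk = devm_clk_get(dev, "example");	/* hypothetical clock name */
	if (IS_ERR(clk))
		return PTR_ERR(clk);		/* clock missing or not ready */

	ret = clk_prepare_enable(clk);
	if (ret)
		return ret;

	*out = clk;	/* caller calls clk_disable_unprepare(*out) to undo */
	return 0;
}
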
@@ -435,16 +457,24 @@ static int imx_ahci_probe(struct platform_device *pdev) ret = ahci_platform_init_host(pdev, hpriv, &ahci_imx_port_info, 0, 0, 0); if (ret) - imx_sata_disable(hpriv); + goto disable_sata; + return 0; + +disable_sata: + imx_sata_disable(hpriv); +disable_clk: + clk_disable_unprepare(imxpriv->sata_clk); return ret; } static void ahci_imx_host_stop(struct ata_host *host) { struct ahci_host_priv *hpriv = host->private_data; + struct imx_ahci_priv *imxpriv = hpriv->plat_data; imx_sata_disable(hpriv); + clk_disable_unprepare(imxpriv->sata_clk); } #ifdef CONFIG_PM_SLEEP diff --git a/drivers/ata/ahci_platform.c b/drivers/ata/ahci_platform.c index ebe505c17763..b10d81ddb528 100644 --- a/drivers/ata/ahci_platform.c +++ b/drivers/ata/ahci_platform.c @@ -58,7 +58,7 @@ static int ahci_probe(struct platform_device *pdev) } if (of_device_is_compatible(dev->of_node, "hisilicon,hisi-ahci")) - hflags |= AHCI_HFLAG_NO_FBS; + hflags |= AHCI_HFLAG_NO_FBS | AHCI_HFLAG_NO_NCQ; rc = ahci_platform_init_host(pdev, hpriv, &ahci_port_info, hflags, 0, 0); diff --git a/drivers/ata/ahci_xgene.c b/drivers/ata/ahci_xgene.c index 042a9bb45c86..ee3a3659bd9e 100644 --- a/drivers/ata/ahci_xgene.c +++ b/drivers/ata/ahci_xgene.c @@ -78,6 +78,7 @@ struct xgene_ahci_context { struct ahci_host_priv *hpriv; struct device *dev; + u8 last_cmd[MAX_AHCI_CHN_PERCTR]; /* tracking the last command issued*/ void __iomem *csr_core; /* Core CSR address of IP */ void __iomem *csr_diag; /* Diag CSR address of IP */ void __iomem *csr_axi; /* AXI CSR address of IP */ @@ -98,20 +99,62 @@ static int xgene_ahci_init_memram(struct xgene_ahci_context *ctx) } /** + * xgene_ahci_restart_engine - Restart the dma engine. + * @ap : ATA port of interest + * + * Restarts the dma engine inside the controller. + */ +static int xgene_ahci_restart_engine(struct ata_port *ap) +{ + struct ahci_host_priv *hpriv = ap->host->private_data; + + ahci_stop_engine(ap); + ahci_start_fis_rx(ap); + hpriv->start_engine(ap); + + return 0; +} + +/** + * xgene_ahci_qc_issue - Issue commands to the device + * @qc: Command to issue + * + * Due to Hardware errata for IDENTIFY DEVICE command, the controller cannot + * clear the BSY bit after receiving the PIO setup FIS. This results in the dma + * state machine goes into the CMFatalErrorUpdate state and locks up. By + * restarting the dma engine, it removes the controller out of lock up state. + */ +static unsigned int xgene_ahci_qc_issue(struct ata_queued_cmd *qc) +{ + struct ata_port *ap = qc->ap; + struct ahci_host_priv *hpriv = ap->host->private_data; + struct xgene_ahci_context *ctx = hpriv->plat_data; + int rc = 0; + + if (unlikely(ctx->last_cmd[ap->port_no] == ATA_CMD_ID_ATA)) + xgene_ahci_restart_engine(ap); + + rc = ahci_qc_issue(qc); + + /* Save the last command issued */ + ctx->last_cmd[ap->port_no] = qc->tf.command; + + return rc; +} + +/** * xgene_ahci_read_id - Read ID data from the specified device * @dev: device * @tf: proposed taskfile * @id: data buffer * * This custom read ID function is required due to the fact that the HW - * does not support DEVSLP and the controller state machine may get stuck - * after processing the ID query command. + * does not support DEVSLP. 
*/ static unsigned int xgene_ahci_read_id(struct ata_device *dev, struct ata_taskfile *tf, u16 *id) { u32 err_mask; - void __iomem *port_mmio = ahci_port_base(dev->link->ap); err_mask = ata_do_dev_read_id(dev, tf, id); if (err_mask) @@ -133,16 +176,6 @@ static unsigned int xgene_ahci_read_id(struct ata_device *dev, */ id[ATA_ID_FEATURE_SUPP] &= ~(1 << 8); - /* - * Due to HW errata, restart the port if no other command active. - * Otherwise the controller may get stuck. - */ - if (!readl(port_mmio + PORT_CMD_ISSUE)) { - writel(PORT_CMD_FIS_RX, port_mmio + PORT_CMD); - readl(port_mmio + PORT_CMD); /* Force a barrier */ - writel(PORT_CMD_FIS_RX | PORT_CMD_START, port_mmio + PORT_CMD); - readl(port_mmio + PORT_CMD); /* Force a barrier */ - } return 0; } @@ -300,6 +333,7 @@ static struct ata_port_operations xgene_ahci_ops = { .host_stop = xgene_ahci_host_stop, .hardreset = xgene_ahci_hardreset, .read_id = xgene_ahci_read_id, + .qc_issue = xgene_ahci_qc_issue, }; static const struct ata_port_info xgene_ahci_port_info = { diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c index 40ea583d3610..d72ce0470309 100644 --- a/drivers/ata/libahci.c +++ b/drivers/ata/libahci.c @@ -68,7 +68,6 @@ static ssize_t ahci_transmit_led_message(struct ata_port *ap, u32 state, static int ahci_scr_read(struct ata_link *link, unsigned int sc_reg, u32 *val); static int ahci_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val); -static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc); static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc); static int ahci_port_start(struct ata_port *ap); static void ahci_port_stop(struct ata_port *ap); @@ -620,7 +619,7 @@ int ahci_stop_engine(struct ata_port *ap) } EXPORT_SYMBOL_GPL(ahci_stop_engine); -static void ahci_start_fis_rx(struct ata_port *ap) +void ahci_start_fis_rx(struct ata_port *ap) { void __iomem *port_mmio = ahci_port_base(ap); struct ahci_host_priv *hpriv = ap->host->private_data; @@ -646,6 +645,7 @@ static void ahci_start_fis_rx(struct ata_port *ap) /* flush */ readl(port_mmio + PORT_CMD); } +EXPORT_SYMBOL_GPL(ahci_start_fis_rx); static int ahci_stop_fis_rx(struct ata_port *ap) { @@ -1945,7 +1945,7 @@ irqreturn_t ahci_interrupt(int irq, void *dev_instance) } EXPORT_SYMBOL_GPL(ahci_interrupt); -static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) +unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) { struct ata_port *ap = qc->ap; void __iomem *port_mmio = ahci_port_base(ap); @@ -1974,6 +1974,7 @@ static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) return 0; } +EXPORT_SYMBOL_GPL(ahci_qc_issue); static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc) { diff --git a/drivers/ata/libahci_platform.c b/drivers/ata/libahci_platform.c index 3a5b4ed25a4f..b0077589f065 100644 --- a/drivers/ata/libahci_platform.c +++ b/drivers/ata/libahci_platform.c @@ -250,8 +250,13 @@ struct ahci_host_priv *ahci_platform_get_resources(struct platform_device *pdev) if (IS_ERR(hpriv->phy)) { rc = PTR_ERR(hpriv->phy); switch (rc) { - case -ENODEV: case -ENOSYS: + /* No PHY support. Check if PHY is required. 
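+ * A "phys" property in the DT node means a PHY really is expected, so
+ * missing PHY support is treated as a hard error below; nodes without the
+ * property fall through and run with hpriv->phy left NULL.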
*/ + if (of_find_property(dev->of_node, "phys", NULL)) { + dev_err(dev, "couldn't get sata-phy: ENOSYS\n"); + goto err_out; + } + case -ENODEV: /* continue normally */ hpriv->phy = NULL; break; diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c index 18d97d5c7d90..677c0c1b03bd 100644 --- a/drivers/ata/libata-core.c +++ b/drivers/ata/libata-core.c @@ -4787,6 +4787,10 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words) * ata_qc_new - Request an available ATA command, for queueing * @ap: target port * + * Some ATA host controllers may implement a queue depth which is less + * than ATA_MAX_QUEUE. So we shouldn't allocate a tag which is beyond + * the hardware limitation. + * * LOCKING: * None. */ @@ -4794,14 +4798,15 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words) static struct ata_queued_cmd *ata_qc_new(struct ata_port *ap) { struct ata_queued_cmd *qc = NULL; + unsigned int max_queue = ap->host->n_tags; unsigned int i, tag; /* no command while frozen */ if (unlikely(ap->pflags & ATA_PFLAG_FROZEN)) return NULL; - for (i = 0; i < ATA_MAX_QUEUE; i++) { - tag = (i + ap->last_tag + 1) % ATA_MAX_QUEUE; + for (i = 0, tag = ap->last_tag + 1; i < max_queue; i++, tag++) { + tag = tag < max_queue ? tag : 0; /* the last tag is reserved for internal command. */ if (tag == ATA_TAG_INTERNAL) @@ -6088,6 +6093,7 @@ void ata_host_init(struct ata_host *host, struct device *dev, { spin_lock_init(&host->lock); mutex_init(&host->eh_mutex); + host->n_tags = ATA_MAX_QUEUE - 1; host->dev = dev; host->ops = ops; } @@ -6169,6 +6175,8 @@ int ata_host_register(struct ata_host *host, struct scsi_host_template *sht) { int i, rc; + host->n_tags = clamp(sht->can_queue, 1, ATA_MAX_QUEUE - 1); + /* host must have been started */ if (!(host->flags & ATA_HOST_STARTED)) { dev_err(host->dev, "BUG: trying to register unstarted host\n"); diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c index 6760fc4e85b8..dad83df555c4 100644 --- a/drivers/ata/libata-eh.c +++ b/drivers/ata/libata-eh.c @@ -1811,7 +1811,7 @@ static unsigned int ata_eh_analyze_tf(struct ata_queued_cmd *qc, case ATA_DEV_ATA: if (err & ATA_ICRC) qc->err_mask |= AC_ERR_ATA_BUS; - if (err & ATA_UNC) + if (err & (ATA_UNC | ATA_AMNF)) qc->err_mask |= AC_ERR_MEDIA; if (err & ATA_IDNF) qc->err_mask |= AC_ERR_INVALID; @@ -2556,11 +2556,12 @@ static void ata_eh_link_report(struct ata_link *link) } if (cmd->command != ATA_CMD_PACKET && - (res->feature & (ATA_ICRC | ATA_UNC | ATA_IDNF | - ATA_ABORTED))) - ata_dev_err(qc->dev, "error: { %s%s%s%s}\n", + (res->feature & (ATA_ICRC | ATA_UNC | ATA_AMNF | + ATA_IDNF | ATA_ABORTED))) + ata_dev_err(qc->dev, "error: { %s%s%s%s%s}\n", res->feature & ATA_ICRC ? "ICRC " : "", res->feature & ATA_UNC ? "UNC " : "", + res->feature & ATA_AMNF ? "AMNF " : "", res->feature & ATA_IDNF ? "IDNF " : "", res->feature & ATA_ABORTED ? 
"ABRT " : ""); #endif diff --git a/drivers/ata/pata_ep93xx.c b/drivers/ata/pata_ep93xx.c index 6ad5c072ce34..4d37c5415fc7 100644 --- a/drivers/ata/pata_ep93xx.c +++ b/drivers/ata/pata_ep93xx.c @@ -915,7 +915,7 @@ static int ep93xx_pata_probe(struct platform_device *pdev) struct ep93xx_pata_data *drv_data; struct ata_host *host; struct ata_port *ap; - unsigned int irq; + int irq; struct resource *mem_res; void __iomem *ide_base; int err; diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c index 83969f8c5727..6467c919c509 100644 --- a/drivers/base/dma-contiguous.c +++ b/drivers/base/dma-contiguous.c @@ -176,14 +176,24 @@ static int __init cma_activate_area(struct cma *cma) base_pfn = pfn; for (j = pageblock_nr_pages; j; --j, pfn++) { WARN_ON_ONCE(!pfn_valid(pfn)); + /* + * alloc_contig_range requires the pfn range + * specified to be in the same zone. Make this + * simple by forcing the entire CMA resv range + * to be in the same zone. + */ if (page_zone(pfn_to_page(pfn)) != zone) - return -EINVAL; + goto err; } init_cma_reserved_pageblock(pfn_to_page(base_pfn)); } while (--i); mutex_init(&cma->lock); return 0; + +err: + kfree(cma->bitmap); + return -EINVAL; } static struct cma cma_areas[MAX_CMA_AREAS]; diff --git a/drivers/base/platform.c b/drivers/base/platform.c index 9e9227e1762d..eee48c49f5de 100644 --- a/drivers/base/platform.c +++ b/drivers/base/platform.c @@ -89,8 +89,13 @@ int platform_get_irq(struct platform_device *dev, unsigned int num) return dev->archdata.irqs[num]; #else struct resource *r; - if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) - return of_irq_get(dev->dev.of_node, num); + if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) { + int ret; + + ret = of_irq_get(dev->dev.of_node, num); + if (ret >= 0 || ret == -EPROBE_DEFER) + return ret; + } r = platform_get_resource(dev, IORESOURCE_IRQ, num); @@ -133,8 +138,13 @@ int platform_get_irq_byname(struct platform_device *dev, const char *name) { struct resource *r; - if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) - return of_irq_get_byname(dev->dev.of_node, name); + if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) { + int ret; + + ret = of_irq_get_byname(dev->dev.of_node, name); + if (ret >= 0 || ret == -EPROBE_DEFER) + return ret; + } r = platform_get_resource_byname(dev, IORESOURCE_IRQ, name); return r ? r->start : -ENXIO; diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c index 1b35c45c92b7..3f2e16738080 100644 --- a/drivers/block/drbd/drbd_nl.c +++ b/drivers/block/drbd/drbd_nl.c @@ -544,6 +544,12 @@ void conn_try_outdate_peer_async(struct drbd_connection *connection) struct task_struct *opa; kref_get(&connection->kref); + /* We may just have force_sig()'ed this thread + * to get it out of some blocking network function. + * Clear signals; otherwise kthread_run(), which internally uses + * wait_on_completion_killable(), will mistake our pending signal + * for a new fatal signal and fail. */ + flush_signals(current); opa = kthread_run(_try_outdate_peer_async, connection, "drbd_async_h"); if (IS_ERR(opa)) { drbd_err(connection, "out of mem, failed to invoke fence-peer helper\n"); diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c index b6c8aaf4931b..5b17ec88ea05 100644 --- a/drivers/block/drbd/drbd_receiver.c +++ b/drivers/block/drbd/drbd_receiver.c @@ -1337,8 +1337,11 @@ int drbd_submit_peer_request(struct drbd_device *device, return 0; } + /* Discards don't have any payload. 
+ * But the scsi layer still expects a bio_vec it can use internally, + * see sd_setup_discard_cmnd() and blk_add_request_payload(). */ if (peer_req->flags & EE_IS_TRIM) - nr_pages = 0; /* discards don't have any payload. */ + nr_pages = 1; /* In most cases, we will only need one bio. But in case the lower * level restrictions happen to be different at this offset on this diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c index 677db049f55a..56d46ffb08e1 100644 --- a/drivers/block/floppy.c +++ b/drivers/block/floppy.c @@ -3777,7 +3777,7 @@ static void floppy_rb0_cb(struct bio *bio, int err) int drive = cbdata->drive; if (err) { - pr_info("floppy: error %d while reading block 0", err); + pr_info("floppy: error %d while reading block 0\n", err); set_bit(FD_OPEN_SHOULD_FAIL_BIT, &UDRS->flags); } complete(&cbdata->complete); diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c index bbeb404b3a07..b2c98c1bc037 100644 --- a/drivers/block/rbd.c +++ b/drivers/block/rbd.c @@ -1431,6 +1431,14 @@ static bool obj_request_exists_test(struct rbd_obj_request *obj_request) return test_bit(OBJ_REQ_EXISTS, &obj_request->flags) != 0; } +static bool obj_request_overlaps_parent(struct rbd_obj_request *obj_request) +{ + struct rbd_device *rbd_dev = obj_request->img_request->rbd_dev; + + return obj_request->img_offset < + round_up(rbd_dev->parent_overlap, rbd_obj_bytes(&rbd_dev->header)); +} + static void rbd_obj_request_get(struct rbd_obj_request *obj_request) { dout("%s: obj %p (was %d)\n", __func__, obj_request, @@ -2748,7 +2756,7 @@ static int rbd_img_obj_request_submit(struct rbd_obj_request *obj_request) */ if (!img_request_write_test(img_request) || !img_request_layered_test(img_request) || - rbd_dev->parent_overlap <= obj_request->img_offset || + !obj_request_overlaps_parent(obj_request) || ((known = obj_request_known_test(obj_request)) && obj_request_exists_test(obj_request))) { diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 48eccb350180..36e54be402df 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -624,7 +624,16 @@ static void zram_reset_device(struct zram *zram, bool reset_capacity) zram->disksize = 0; if (reset_capacity) set_capacity(zram->disk, 0); + up_write(&zram->init_lock); + + /* + * Revalidate disk out of the init_lock to avoid lockdep splat. + * It's okay because disk's capacity is protected by init_lock + * so that revalidate_disk always sees up-to-date capacity. + */ + if (reset_capacity) + revalidate_disk(zram->disk); } static ssize_t disksize_store(struct device *dev, @@ -665,6 +674,14 @@ static ssize_t disksize_store(struct device *dev, zram->disksize = disksize; set_capacity(zram->disk, zram->disksize >> SECTOR_SHIFT); up_write(&zram->init_lock); + + /* + * Revalidate disk out of the init_lock to avoid lockdep splat. + * It's okay because disk's capacity is protected by init_lock + * so that revalidate_disk always sees up-to-date capacity. 
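+ * (The same ordering is used in zram_reset_device() above.)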
+ */ + revalidate_disk(zram->disk); + return len; out_destroy_comp: diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c index f98380648cb3..f50dffc0374f 100644 --- a/drivers/bluetooth/ath3k.c +++ b/drivers/bluetooth/ath3k.c @@ -90,7 +90,6 @@ static const struct usb_device_id ath3k_table[] = { { USB_DEVICE(0x0b05, 0x17d0) }, { USB_DEVICE(0x0CF3, 0x0036) }, { USB_DEVICE(0x0CF3, 0x3004) }, - { USB_DEVICE(0x0CF3, 0x3005) }, { USB_DEVICE(0x0CF3, 0x3008) }, { USB_DEVICE(0x0CF3, 0x311D) }, { USB_DEVICE(0x0CF3, 0x311E) }, @@ -140,7 +139,6 @@ static const struct usb_device_id ath3k_blist_tbl[] = { { USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0CF3, 0x0036), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, - { USB_DEVICE(0x0cf3, 0x3005), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x311E), .driver_info = BTUSB_ATH3012 }, diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index a1c80b0c7663..6250fc2fb93a 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -162,7 +162,6 @@ static const struct usb_device_id blacklist_table[] = { { USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, - { USB_DEVICE(0x0cf3, 0x3005), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x311e), .driver_info = BTUSB_ATH3012 }, diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c index 04680ead9275..fede8ca7147c 100644 --- a/drivers/bluetooth/hci_h5.c +++ b/drivers/bluetooth/hci_h5.c @@ -406,6 +406,7 @@ static int h5_rx_3wire_hdr(struct hci_uart *hu, unsigned char c) H5_HDR_PKT_TYPE(hdr) != HCI_3WIRE_LINK_PKT) { BT_ERR("Non-link packet received in non-active state"); h5_reset_rx(h5); + return 0; } h5->rx_func = h5_rx_payload; diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c index 334601cc81cf..c4419ea1ab07 100644 --- a/drivers/char/hw_random/core.c +++ b/drivers/char/hw_random/core.c @@ -55,16 +55,41 @@ static DEFINE_MUTEX(rng_mutex); static int data_avail; static u8 *rng_buffer; +static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size, + int wait); + static size_t rng_buffer_size(void) { return SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES; } +static void add_early_randomness(struct hwrng *rng) +{ + unsigned char bytes[16]; + int bytes_read; + + /* + * Currently only virtio-rng cannot return data during device + * probe, and that's handled in virtio-rng.c itself. If there + * are more such devices, this call to rng_get_data can be + * made conditional here instead of doing it per-device. 
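+ * Only 16 bytes are pulled here, and the final argument to rng_get_data()
+ * asks it to wait for data if none is available yet.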
+ */ + bytes_read = rng_get_data(rng, bytes, sizeof(bytes), 1); + if (bytes_read > 0) + add_device_randomness(bytes, bytes_read); +} + static inline int hwrng_init(struct hwrng *rng) { - if (!rng->init) - return 0; - return rng->init(rng); + if (rng->init) { + int ret; + + ret = rng->init(rng); + if (ret) + return ret; + } + add_early_randomness(rng); + return 0; } static inline void hwrng_cleanup(struct hwrng *rng) @@ -304,8 +329,6 @@ int hwrng_register(struct hwrng *rng) { int err = -EINVAL; struct hwrng *old_rng, *tmp; - unsigned char bytes[16]; - int bytes_read; if (rng->name == NULL || (rng->data_read == NULL && rng->read == NULL)) @@ -347,9 +370,17 @@ int hwrng_register(struct hwrng *rng) INIT_LIST_HEAD(&rng->list); list_add_tail(&rng->list, &rng_list); - bytes_read = rng_get_data(rng, bytes, sizeof(bytes), 1); - if (bytes_read > 0) - add_device_randomness(bytes, bytes_read); + if (old_rng && !rng->init) { + /* + * Use a new device's input to add some randomness to + * the system. If this rng device isn't going to be + * used right away, its init function hasn't been + * called yet; so only use the randomness from devices + * that don't need an init callback. + */ + add_early_randomness(rng); + } + out_unlock: mutex_unlock(&rng_mutex); out: diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c index f3e71501de54..e9b15bc18b4d 100644 --- a/drivers/char/hw_random/virtio-rng.c +++ b/drivers/char/hw_random/virtio-rng.c @@ -38,6 +38,8 @@ struct virtrng_info { int index; }; +static bool probe_done; + static void random_recv_done(struct virtqueue *vq) { struct virtrng_info *vi = vq->vdev->priv; @@ -67,6 +69,13 @@ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait) int ret; struct virtrng_info *vi = (struct virtrng_info *)rng->priv; + /* + * Don't ask host for data till we're setup. This call can + * happen during hwrng_register(), after commit d9e7972619. 
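+ * Until probe_common() below sets probe_done, simply report zero bytes
+ * read so the core sees "no data yet" rather than an error.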
+ */ + if (unlikely(!probe_done)) + return 0; + if (!vi->busy) { vi->busy = true; init_completion(&vi->have_data); @@ -137,6 +146,7 @@ static int probe_common(struct virtio_device *vdev) return err; } + probe_done = true; return 0; } diff --git a/drivers/char/i8k.c b/drivers/char/i8k.c index d915707d2ba1..93dcad0c1cbe 100644 --- a/drivers/char/i8k.c +++ b/drivers/char/i8k.c @@ -138,7 +138,9 @@ static int i8k_smm(struct smm_regs *regs) if (!alloc_cpumask_var(&old_mask, GFP_KERNEL)) return -ENOMEM; cpumask_copy(old_mask, ¤t->cpus_allowed); - set_cpus_allowed_ptr(current, cpumask_of(0)); + rc = set_cpus_allowed_ptr(current, cpumask_of(0)); + if (rc) + goto out; if (smp_processor_id() != 0) { rc = -EBUSY; goto out; diff --git a/drivers/char/random.c b/drivers/char/random.c index 0a7ac0a7b252..71529e196b84 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -641,7 +641,7 @@ retry: } while (unlikely(entropy_count < pool_size-2 && pnfrac)); } - if (entropy_count < 0) { + if (unlikely(entropy_count < 0)) { pr_warn("random: negative entropy/overflow: pool %s count %d\n", r->name, entropy_count); WARN_ON(1); @@ -981,7 +981,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min, int reserved) { int entropy_count, orig; - size_t ibytes; + size_t ibytes, nfrac; BUG_ON(r->entropy_count > r->poolinfo->poolfracbits); @@ -999,7 +999,17 @@ retry: } if (ibytes < min) ibytes = 0; - if ((entropy_count -= ibytes << (ENTROPY_SHIFT + 3)) < 0) + + if (unlikely(entropy_count < 0)) { + pr_warn("random: negative entropy count: pool %s count %d\n", + r->name, entropy_count); + WARN_ON(1); + entropy_count = 0; + } + nfrac = ibytes << (ENTROPY_SHIFT + 3); + if ((size_t) entropy_count > nfrac) + entropy_count -= nfrac; + else entropy_count = 0; if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) @@ -1376,6 +1386,7 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) "with %d bits of entropy available\n", current->comm, nonblocking_pool.entropy_total); + nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3)); ret = extract_entropy_user(&nonblocking_pool, buf, nbytes); trace_urandom_read(8 * nbytes, ENTROPY_BITS(&nonblocking_pool), diff --git a/drivers/clk/clk-s2mps11.c b/drivers/clk/clk-s2mps11.c index 9b7b5859a420..3757e9e72d37 100644 --- a/drivers/clk/clk-s2mps11.c +++ b/drivers/clk/clk-s2mps11.c @@ -230,16 +230,13 @@ static int s2mps11_clk_probe(struct platform_device *pdev) goto err_reg; } - s2mps11_clk->lookup = devm_kzalloc(&pdev->dev, - sizeof(struct clk_lookup), GFP_KERNEL); + s2mps11_clk->lookup = clkdev_alloc(s2mps11_clk->clk, + s2mps11_name(s2mps11_clk), NULL); if (!s2mps11_clk->lookup) { ret = -ENOMEM; goto err_lup; } - s2mps11_clk->lookup->con_id = s2mps11_name(s2mps11_clk); - s2mps11_clk->lookup->clk = s2mps11_clk->clk; - clkdev_add(s2mps11_clk->lookup); } diff --git a/drivers/clk/qcom/mmcc-msm8960.c b/drivers/clk/qcom/mmcc-msm8960.c index 12f3c0b64fcd..4c449b3170f6 100644 --- a/drivers/clk/qcom/mmcc-msm8960.c +++ b/drivers/clk/qcom/mmcc-msm8960.c @@ -1209,7 +1209,7 @@ static struct clk_branch rot_clk = { static u8 mmcc_pxo_hdmi_map[] = { [P_PXO] = 0, - [P_HDMI_PLL] = 2, + [P_HDMI_PLL] = 3, }; static const char *mmcc_pxo_hdmi[] = { diff --git a/drivers/clk/samsung/clk-exynos4.c b/drivers/clk/samsung/clk-exynos4.c index 4f150c9dd38c..7f4a473a7ad7 100644 --- a/drivers/clk/samsung/clk-exynos4.c +++ b/drivers/clk/samsung/clk-exynos4.c @@ -925,21 +925,13 @@ static struct samsung_gate_clock exynos4x12_gate_clks[] __initdata = { 
GATE(CLK_RTC, "rtc", "aclk100", E4X12_GATE_IP_PERIR, 15, 0, 0), GATE(CLK_KEYIF, "keyif", "aclk100", E4X12_GATE_IP_PERIR, 16, 0, 0), - GATE(CLK_SCLK_PWM_ISP, "sclk_pwm_isp", "div_pwm_isp", - E4X12_SRC_MASK_ISP, 0, CLK_SET_RATE_PARENT, 0), - GATE(CLK_SCLK_SPI0_ISP, "sclk_spi0_isp", "div_spi0_isp_pre", - E4X12_SRC_MASK_ISP, 4, CLK_SET_RATE_PARENT, 0), - GATE(CLK_SCLK_SPI1_ISP, "sclk_spi1_isp", "div_spi1_isp_pre", - E4X12_SRC_MASK_ISP, 8, CLK_SET_RATE_PARENT, 0), - GATE(CLK_SCLK_UART_ISP, "sclk_uart_isp", "div_uart_isp", - E4X12_SRC_MASK_ISP, 12, CLK_SET_RATE_PARENT, 0), - GATE(CLK_PWM_ISP_SCLK, "pwm_isp_sclk", "sclk_pwm_isp", + GATE(CLK_PWM_ISP_SCLK, "pwm_isp_sclk", "div_pwm_isp", E4X12_GATE_IP_ISP, 0, 0, 0), - GATE(CLK_SPI0_ISP_SCLK, "spi0_isp_sclk", "sclk_spi0_isp", + GATE(CLK_SPI0_ISP_SCLK, "spi0_isp_sclk", "div_spi0_isp_pre", E4X12_GATE_IP_ISP, 1, 0, 0), - GATE(CLK_SPI1_ISP_SCLK, "spi1_isp_sclk", "sclk_spi1_isp", + GATE(CLK_SPI1_ISP_SCLK, "spi1_isp_sclk", "div_spi1_isp_pre", E4X12_GATE_IP_ISP, 2, 0, 0), - GATE(CLK_UART_ISP_SCLK, "uart_isp_sclk", "sclk_uart_isp", + GATE(CLK_UART_ISP_SCLK, "uart_isp_sclk", "div_uart_isp", E4X12_GATE_IP_ISP, 3, 0, 0), GATE(CLK_WDT, "watchdog", "aclk100", E4X12_GATE_IP_PERIR, 14, 0, 0), GATE(CLK_PCM0, "pcm0", "aclk100", E4X12_GATE_IP_MAUDIO, 2, diff --git a/drivers/clk/samsung/clk-exynos5250.c b/drivers/clk/samsung/clk-exynos5250.c index 1fad4c5e3f5d..184f64293b26 100644 --- a/drivers/clk/samsung/clk-exynos5250.c +++ b/drivers/clk/samsung/clk-exynos5250.c @@ -661,7 +661,7 @@ static struct samsung_gate_clock exynos5250_gate_clks[] __initdata = { GATE(CLK_RTC, "rtc", "div_aclk66", GATE_IP_PERIS, 20, 0, 0), GATE(CLK_TMU, "tmu", "div_aclk66", GATE_IP_PERIS, 21, 0, 0), GATE(CLK_SMMU_TV, "smmu_tv", "mout_aclk200_disp1_sub", - GATE_IP_DISP1, 2, 0, 0), + GATE_IP_DISP1, 9, 0, 0), GATE(CLK_SMMU_FIMD1, "smmu_fimd1", "mout_aclk200_disp1_sub", GATE_IP_DISP1, 8, 0, 0), GATE(CLK_SMMU_2D, "smmu_2d", "div_aclk200", GATE_IP_ACP, 7, 0, 0), diff --git a/drivers/clk/samsung/clk-exynos5420.c b/drivers/clk/samsung/clk-exynos5420.c index 9d7d7eed03fd..a4e6cc782e5c 100644 --- a/drivers/clk/samsung/clk-exynos5420.c +++ b/drivers/clk/samsung/clk-exynos5420.c @@ -631,7 +631,8 @@ static struct samsung_mux_clock exynos5x_mux_clks[] __initdata = { SRC_TOP4, 16, 1), MUX(0, "mout_user_aclk266", mout_user_aclk266_p, SRC_TOP4, 20, 1), MUX(0, "mout_user_aclk166", mout_user_aclk166_p, SRC_TOP4, 24, 1), - MUX(0, "mout_user_aclk333", mout_user_aclk333_p, SRC_TOP4, 28, 1), + MUX(CLK_MOUT_USER_ACLK333, "mout_user_aclk333", mout_user_aclk333_p, + SRC_TOP4, 28, 1), MUX(0, "mout_user_aclk400_disp1", mout_user_aclk400_disp1_p, SRC_TOP5, 0, 1), @@ -684,7 +685,8 @@ static struct samsung_mux_clock exynos5x_mux_clks[] __initdata = { SRC_TOP11, 12, 1), MUX(0, "mout_sw_aclk266", mout_sw_aclk266_p, SRC_TOP11, 20, 1), MUX(0, "mout_sw_aclk166", mout_sw_aclk166_p, SRC_TOP11, 24, 1), - MUX(0, "mout_sw_aclk333", mout_sw_aclk333_p, SRC_TOP11, 28, 1), + MUX(CLK_MOUT_SW_ACLK333, "mout_sw_aclk333", mout_sw_aclk333_p, + SRC_TOP11, 28, 1), MUX(0, "mout_sw_aclk400_disp1", mout_sw_aclk400_disp1_p, SRC_TOP12, 4, 1), @@ -890,8 +892,6 @@ static struct samsung_gate_clock exynos5x_gate_clks[] __initdata = { GATE_BUS_TOP, 9, CLK_IGNORE_UNUSED, 0), GATE(0, "aclk66_psgen", "mout_user_aclk66_psgen", GATE_BUS_TOP, 10, CLK_IGNORE_UNUSED, 0), - GATE(CLK_ACLK66_PERIC, "aclk66_peric", "mout_user_aclk66_peric", - GATE_BUS_TOP, 11, CLK_IGNORE_UNUSED, 0), GATE(0, "aclk266_isp", "mout_user_aclk266_isp", GATE_BUS_TOP, 13, 0, 0), GATE(0, 
"aclk166", "mout_user_aclk166", @@ -994,34 +994,61 @@ static struct samsung_gate_clock exynos5x_gate_clks[] __initdata = { SRC_MASK_FSYS, 24, CLK_SET_RATE_PARENT, 0), /* PERIC Block */ - GATE(CLK_UART0, "uart0", "aclk66_peric", GATE_IP_PERIC, 0, 0, 0), - GATE(CLK_UART1, "uart1", "aclk66_peric", GATE_IP_PERIC, 1, 0, 0), - GATE(CLK_UART2, "uart2", "aclk66_peric", GATE_IP_PERIC, 2, 0, 0), - GATE(CLK_UART3, "uart3", "aclk66_peric", GATE_IP_PERIC, 3, 0, 0), - GATE(CLK_I2C0, "i2c0", "aclk66_peric", GATE_IP_PERIC, 6, 0, 0), - GATE(CLK_I2C1, "i2c1", "aclk66_peric", GATE_IP_PERIC, 7, 0, 0), - GATE(CLK_I2C2, "i2c2", "aclk66_peric", GATE_IP_PERIC, 8, 0, 0), - GATE(CLK_I2C3, "i2c3", "aclk66_peric", GATE_IP_PERIC, 9, 0, 0), - GATE(CLK_USI0, "usi0", "aclk66_peric", GATE_IP_PERIC, 10, 0, 0), - GATE(CLK_USI1, "usi1", "aclk66_peric", GATE_IP_PERIC, 11, 0, 0), - GATE(CLK_USI2, "usi2", "aclk66_peric", GATE_IP_PERIC, 12, 0, 0), - GATE(CLK_USI3, "usi3", "aclk66_peric", GATE_IP_PERIC, 13, 0, 0), - GATE(CLK_I2C_HDMI, "i2c_hdmi", "aclk66_peric", GATE_IP_PERIC, 14, 0, 0), - GATE(CLK_TSADC, "tsadc", "aclk66_peric", GATE_IP_PERIC, 15, 0, 0), - GATE(CLK_SPI0, "spi0", "aclk66_peric", GATE_IP_PERIC, 16, 0, 0), - GATE(CLK_SPI1, "spi1", "aclk66_peric", GATE_IP_PERIC, 17, 0, 0), - GATE(CLK_SPI2, "spi2", "aclk66_peric", GATE_IP_PERIC, 18, 0, 0), - GATE(CLK_I2S1, "i2s1", "aclk66_peric", GATE_IP_PERIC, 20, 0, 0), - GATE(CLK_I2S2, "i2s2", "aclk66_peric", GATE_IP_PERIC, 21, 0, 0), - GATE(CLK_PCM1, "pcm1", "aclk66_peric", GATE_IP_PERIC, 22, 0, 0), - GATE(CLK_PCM2, "pcm2", "aclk66_peric", GATE_IP_PERIC, 23, 0, 0), - GATE(CLK_PWM, "pwm", "aclk66_peric", GATE_IP_PERIC, 24, 0, 0), - GATE(CLK_SPDIF, "spdif", "aclk66_peric", GATE_IP_PERIC, 26, 0, 0), - GATE(CLK_USI4, "usi4", "aclk66_peric", GATE_IP_PERIC, 28, 0, 0), - GATE(CLK_USI5, "usi5", "aclk66_peric", GATE_IP_PERIC, 30, 0, 0), - GATE(CLK_USI6, "usi6", "aclk66_peric", GATE_IP_PERIC, 31, 0, 0), - - GATE(CLK_KEYIF, "keyif", "aclk66_peric", GATE_BUS_PERIC, 22, 0, 0), + GATE(CLK_UART0, "uart0", "mout_user_aclk66_peric", + GATE_IP_PERIC, 0, 0, 0), + GATE(CLK_UART1, "uart1", "mout_user_aclk66_peric", + GATE_IP_PERIC, 1, 0, 0), + GATE(CLK_UART2, "uart2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 2, 0, 0), + GATE(CLK_UART3, "uart3", "mout_user_aclk66_peric", + GATE_IP_PERIC, 3, 0, 0), + GATE(CLK_I2C0, "i2c0", "mout_user_aclk66_peric", + GATE_IP_PERIC, 6, 0, 0), + GATE(CLK_I2C1, "i2c1", "mout_user_aclk66_peric", + GATE_IP_PERIC, 7, 0, 0), + GATE(CLK_I2C2, "i2c2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 8, 0, 0), + GATE(CLK_I2C3, "i2c3", "mout_user_aclk66_peric", + GATE_IP_PERIC, 9, 0, 0), + GATE(CLK_USI0, "usi0", "mout_user_aclk66_peric", + GATE_IP_PERIC, 10, 0, 0), + GATE(CLK_USI1, "usi1", "mout_user_aclk66_peric", + GATE_IP_PERIC, 11, 0, 0), + GATE(CLK_USI2, "usi2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 12, 0, 0), + GATE(CLK_USI3, "usi3", "mout_user_aclk66_peric", + GATE_IP_PERIC, 13, 0, 0), + GATE(CLK_I2C_HDMI, "i2c_hdmi", "mout_user_aclk66_peric", + GATE_IP_PERIC, 14, 0, 0), + GATE(CLK_TSADC, "tsadc", "mout_user_aclk66_peric", + GATE_IP_PERIC, 15, 0, 0), + GATE(CLK_SPI0, "spi0", "mout_user_aclk66_peric", + GATE_IP_PERIC, 16, 0, 0), + GATE(CLK_SPI1, "spi1", "mout_user_aclk66_peric", + GATE_IP_PERIC, 17, 0, 0), + GATE(CLK_SPI2, "spi2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 18, 0, 0), + GATE(CLK_I2S1, "i2s1", "mout_user_aclk66_peric", + GATE_IP_PERIC, 20, 0, 0), + GATE(CLK_I2S2, "i2s2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 21, 0, 0), + GATE(CLK_PCM1, "pcm1", 
"mout_user_aclk66_peric", + GATE_IP_PERIC, 22, 0, 0), + GATE(CLK_PCM2, "pcm2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 23, 0, 0), + GATE(CLK_PWM, "pwm", "mout_user_aclk66_peric", + GATE_IP_PERIC, 24, 0, 0), + GATE(CLK_SPDIF, "spdif", "mout_user_aclk66_peric", + GATE_IP_PERIC, 26, 0, 0), + GATE(CLK_USI4, "usi4", "mout_user_aclk66_peric", + GATE_IP_PERIC, 28, 0, 0), + GATE(CLK_USI5, "usi5", "mout_user_aclk66_peric", + GATE_IP_PERIC, 30, 0, 0), + GATE(CLK_USI6, "usi6", "mout_user_aclk66_peric", + GATE_IP_PERIC, 31, 0, 0), + + GATE(CLK_KEYIF, "keyif", "mout_user_aclk66_peric", + GATE_BUS_PERIC, 22, 0, 0), /* PERIS Block */ GATE(CLK_CHIPID, "chipid", "aclk66_psgen", diff --git a/drivers/clk/samsung/clk-s3c2410.c b/drivers/clk/samsung/clk-s3c2410.c index ba0716801db2..140f4733c02e 100644 --- a/drivers/clk/samsung/clk-s3c2410.c +++ b/drivers/clk/samsung/clk-s3c2410.c @@ -152,6 +152,11 @@ struct samsung_clock_alias s3c2410_common_aliases[] __initdata = { ALIAS(HCLK, NULL, "hclk"), ALIAS(MPLL, NULL, "mpll"), ALIAS(FCLK, NULL, "fclk"), + ALIAS(PCLK, NULL, "watchdog"), + ALIAS(PCLK_SDI, NULL, "sdi"), + ALIAS(HCLK_NAND, NULL, "nand"), + ALIAS(PCLK_I2S, NULL, "iis"), + ALIAS(PCLK_I2C, NULL, "i2c"), }; /* S3C2410 specific clocks */ @@ -378,7 +383,7 @@ void __init s3c2410_common_clk_init(struct device_node *np, unsigned long xti_f, if (!np) s3c2410_common_clk_register_fixed_ext(ctx, xti_f); - if (current_soc == 2410) { + if (current_soc == S3C2410) { if (_get_rate("xti") == 12 * MHZ) { s3c2410_plls[mpll].rate_table = pll_s3c2410_12mhz_tbl; s3c2410_plls[upll].rate_table = pll_s3c2410_12mhz_tbl; @@ -432,7 +437,7 @@ void __init s3c2410_common_clk_init(struct device_node *np, unsigned long xti_f, samsung_clk_register_fixed_factor(ctx, s3c2410_ffactor, ARRAY_SIZE(s3c2410_ffactor)); samsung_clk_register_alias(ctx, s3c2410_aliases, - ARRAY_SIZE(s3c2410_common_aliases)); + ARRAY_SIZE(s3c2410_aliases)); break; case S3C2440: samsung_clk_register_mux(ctx, s3c2440_muxes, diff --git a/drivers/clk/samsung/clk-s3c64xx.c b/drivers/clk/samsung/clk-s3c64xx.c index efa16ee592c8..8889ff1c10fc 100644 --- a/drivers/clk/samsung/clk-s3c64xx.c +++ b/drivers/clk/samsung/clk-s3c64xx.c @@ -418,8 +418,10 @@ static struct samsung_clock_alias s3c64xx_clock_aliases[] = { ALIAS(SCLK_MMC2, "s3c-sdhci.2", "mmc_busclk.2"), ALIAS(SCLK_MMC1, "s3c-sdhci.1", "mmc_busclk.2"), ALIAS(SCLK_MMC0, "s3c-sdhci.0", "mmc_busclk.2"), - ALIAS(SCLK_SPI1, "s3c6410-spi.1", "spi-bus"), - ALIAS(SCLK_SPI0, "s3c6410-spi.0", "spi-bus"), + ALIAS(PCLK_SPI1, "s3c6410-spi.1", "spi_busclk0"), + ALIAS(SCLK_SPI1, "s3c6410-spi.1", "spi_busclk2"), + ALIAS(PCLK_SPI0, "s3c6410-spi.0", "spi_busclk0"), + ALIAS(SCLK_SPI0, "s3c6410-spi.0", "spi_busclk2"), ALIAS(SCLK_AUDIO1, "samsung-pcm.1", "audio-bus"), ALIAS(SCLK_AUDIO1, "samsung-i2s.1", "audio-bus"), ALIAS(SCLK_AUDIO0, "samsung-pcm.0", "audio-bus"), diff --git a/drivers/clk/spear/spear3xx_clock.c b/drivers/clk/spear/spear3xx_clock.c index c2d204315546..bb5f387774e2 100644 --- a/drivers/clk/spear/spear3xx_clock.c +++ b/drivers/clk/spear/spear3xx_clock.c @@ -211,7 +211,7 @@ static inline void spear310_clk_init(void) { } /* array of all spear 320 clock lookups */ #ifdef CONFIG_MACH_SPEAR320 -#define SPEAR320_CONTROL_REG (soc_config_base + 0x0000) +#define SPEAR320_CONTROL_REG (soc_config_base + 0x0010) #define SPEAR320_EXT_CTRL_REG (soc_config_base + 0x0018) #define SPEAR320_UARTX_PCLK_MASK 0x1 @@ -245,7 +245,8 @@ static const char *smii0_parents[] = { "smii_125m_pad", "ras_pll2_clk", "ras_syn0_gclk", }; static const char 
*uartx_parents[] = { "ras_syn1_gclk", "ras_apb_clk", }; -static void __init spear320_clk_init(void __iomem *soc_config_base) +static void __init spear320_clk_init(void __iomem *soc_config_base, + struct clk *ras_apb_clk) { struct clk *clk; @@ -342,6 +343,8 @@ static void __init spear320_clk_init(void __iomem *soc_config_base) SPEAR320_CONTROL_REG, UART1_PCLK_SHIFT, UART1_PCLK_MASK, 0, &_lock); clk_register_clkdev(clk, NULL, "a3000000.serial"); + /* Enforce ras_apb_clk */ + clk_set_parent(clk, ras_apb_clk); clk = clk_register_mux(NULL, "uart2_clk", uartx_parents, ARRAY_SIZE(uartx_parents), @@ -349,6 +352,8 @@ static void __init spear320_clk_init(void __iomem *soc_config_base) SPEAR320_EXT_CTRL_REG, SPEAR320_UART2_PCLK_SHIFT, SPEAR320_UARTX_PCLK_MASK, 0, &_lock); clk_register_clkdev(clk, NULL, "a4000000.serial"); + /* Enforce ras_apb_clk */ + clk_set_parent(clk, ras_apb_clk); clk = clk_register_mux(NULL, "uart3_clk", uartx_parents, ARRAY_SIZE(uartx_parents), @@ -379,12 +384,12 @@ static void __init spear320_clk_init(void __iomem *soc_config_base) clk_register_clkdev(clk, NULL, "60100000.serial"); } #else -static inline void spear320_clk_init(void __iomem *soc_config_base) { } +static inline void spear320_clk_init(void __iomem *sb, struct clk *rc) { } #endif void __init spear3xx_clk_init(void __iomem *misc_base, void __iomem *soc_config_base) { - struct clk *clk, *clk1; + struct clk *clk, *clk1, *ras_apb_clk; clk = clk_register_fixed_rate(NULL, "osc_32k_clk", NULL, CLK_IS_ROOT, 32000); @@ -613,6 +618,7 @@ void __init spear3xx_clk_init(void __iomem *misc_base, void __iomem *soc_config_ clk = clk_register_gate(NULL, "ras_apb_clk", "apb_clk", 0, RAS_CLK_ENB, RAS_APB_CLK_ENB, 0, &_lock); clk_register_clkdev(clk, "ras_apb_clk", NULL); + ras_apb_clk = clk; clk = clk_register_gate(NULL, "ras_32k_clk", "osc_32k_clk", 0, RAS_CLK_ENB, RAS_32K_CLK_ENB, 0, &_lock); @@ -659,5 +665,5 @@ void __init spear3xx_clk_init(void __iomem *misc_base, void __iomem *soc_config_ else if (of_machine_is_compatible("st,spear310")) spear310_clk_init(); else if (of_machine_is_compatible("st,spear320")) - spear320_clk_init(soc_config_base); + spear320_clk_init(soc_config_base, ras_apb_clk); } diff --git a/drivers/clk/sunxi/clk-sun6i-apb0-gates.c b/drivers/clk/sunxi/clk-sun6i-apb0-gates.c index 44cd27c5c401..670f90d629d7 100644 --- a/drivers/clk/sunxi/clk-sun6i-apb0-gates.c +++ b/drivers/clk/sunxi/clk-sun6i-apb0-gates.c @@ -29,7 +29,7 @@ static int sun6i_a31_apb0_gates_clk_probe(struct platform_device *pdev) r = platform_get_resource(pdev, IORESOURCE_MEM, 0); reg = devm_ioremap_resource(&pdev->dev, r); - if (!reg) + if (IS_ERR(reg)) return PTR_ERR(reg); clk_parent = of_clk_get_parent_name(np, 0); diff --git a/drivers/clk/ti/apll.c b/drivers/clk/ti/apll.c index 5428c9c547cd..72d97279eae1 100644 --- a/drivers/clk/ti/apll.c +++ b/drivers/clk/ti/apll.c @@ -77,13 +77,11 @@ static int dra7_apll_enable(struct clk_hw *hw) if (i == MAX_APLL_WAIT_TRIES) { pr_warn("clock: %s failed transition to '%s'\n", clk_name, (state) ? "locked" : "bypassed"); - } else { + r = -EBUSY; + } else pr_debug("clock: %s transition to '%s' in %d loops\n", clk_name, (state) ? 
"locked" : "bypassed", i); - r = 0; - } - return r; } @@ -338,7 +336,7 @@ static void __init of_omap2_apll_setup(struct device_node *node) const char *parent_name; u32 val; - ad = kzalloc(sizeof(*clk_hw), GFP_KERNEL); + ad = kzalloc(sizeof(*ad), GFP_KERNEL); clk_hw = kzalloc(sizeof(*clk_hw), GFP_KERNEL); init = kzalloc(sizeof(*init), GFP_KERNEL); diff --git a/drivers/clk/ti/dpll.c b/drivers/clk/ti/dpll.c index abd956d5f838..79791e1bf282 100644 --- a/drivers/clk/ti/dpll.c +++ b/drivers/clk/ti/dpll.c @@ -161,7 +161,8 @@ cleanup: } #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ - defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM33XX) + defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM33XX) || \ + defined(CONFIG_SOC_AM43XX) /** * ti_clk_register_dpll_x2 - Registers a DPLLx2 clock * @node: device node for this clock @@ -322,7 +323,7 @@ CLK_OF_DECLARE(ti_omap4_dpll_x2_clock, "ti,omap4-dpll-x2-clock", of_ti_omap4_dpll_x2_setup); #endif -#ifdef CONFIG_SOC_AM33XX +#if defined(CONFIG_SOC_AM33XX) || defined(CONFIG_SOC_AM43XX) static void __init of_ti_am3_dpll_x2_setup(struct device_node *node) { ti_clk_register_dpll_x2(node, &dpll_x2_ck_ops, NULL); diff --git a/drivers/clk/ti/mux.c b/drivers/clk/ti/mux.c index 0197a478720c..e9d650e51287 100644 --- a/drivers/clk/ti/mux.c +++ b/drivers/clk/ti/mux.c @@ -160,7 +160,7 @@ static void of_mux_clk_setup(struct device_node *node) u8 clk_mux_flags = 0; u32 mask = 0; u32 shift = 0; - u32 flags = 0; + u32 flags = CLK_SET_RATE_NO_REPARENT; num_parents = of_clk_get_parent_count(node); if (num_parents < 2) { diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c index 8d6420013a04..ab51bf20a3ed 100644 --- a/drivers/clocksource/exynos_mct.c +++ b/drivers/clocksource/exynos_mct.c @@ -153,19 +153,16 @@ static void exynos4_mct_write(unsigned int value, unsigned long offset) } /* Clocksource handling */ -static void exynos4_mct_frc_start(u32 hi, u32 lo) +static void exynos4_mct_frc_start(void) { u32 reg; - exynos4_mct_write(lo, EXYNOS4_MCT_G_CNT_L); - exynos4_mct_write(hi, EXYNOS4_MCT_G_CNT_U); - reg = __raw_readl(reg_base + EXYNOS4_MCT_G_TCON); reg |= MCT_G_TCON_START; exynos4_mct_write(reg, EXYNOS4_MCT_G_TCON); } -static cycle_t exynos4_frc_read(struct clocksource *cs) +static cycle_t notrace _exynos4_frc_read(void) { unsigned int lo, hi; u32 hi2 = __raw_readl(reg_base + EXYNOS4_MCT_G_CNT_U); @@ -179,9 +176,14 @@ static cycle_t exynos4_frc_read(struct clocksource *cs) return ((cycle_t)hi << 32) | lo; } +static cycle_t exynos4_frc_read(struct clocksource *cs) +{ + return _exynos4_frc_read(); +} + static void exynos4_frc_resume(struct clocksource *cs) { - exynos4_mct_frc_start(0, 0); + exynos4_mct_frc_start(); } struct clocksource mct_frc = { @@ -195,12 +197,23 @@ struct clocksource mct_frc = { static u64 notrace exynos4_read_sched_clock(void) { - return exynos4_frc_read(&mct_frc); + return _exynos4_frc_read(); +} + +static struct delay_timer exynos4_delay_timer; + +static cycles_t exynos4_read_current_timer(void) +{ + return _exynos4_frc_read(); } static void __init exynos4_clocksource_init(void) { - exynos4_mct_frc_start(0, 0); + exynos4_mct_frc_start(); + + exynos4_delay_timer.read_current_timer = &exynos4_read_current_timer; + exynos4_delay_timer.freq = clk_rate; + register_current_timer_delay(&exynos4_delay_timer); if (clocksource_register_hz(&mct_frc, clk_rate)) panic("%s: can't register clocksource\n", mct_frc.name); diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm index 
ebac67115009..7364a538e056 100644 --- a/drivers/cpufreq/Kconfig.arm +++ b/drivers/cpufreq/Kconfig.arm @@ -104,6 +104,7 @@ config ARM_IMX6Q_CPUFREQ tristate "Freescale i.MX6 cpufreq support" depends on ARCH_MXC depends on REGULATOR_ANATOP + select PM_OPP help This adds cpufreq driver support for Freescale i.MX6 series SoCs. @@ -118,7 +119,7 @@ config ARM_INTEGRATOR If in doubt, say Y. config ARM_KIRKWOOD_CPUFREQ - def_bool MACH_KIRKWOOD + def_bool ARCH_KIRKWOOD || MACH_KIRKWOOD help This adds the CPUFreq driver for Marvell Kirkwood SoCs. diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile index 738c8b7b17dc..db6d9a2fea4d 100644 --- a/drivers/cpufreq/Makefile +++ b/drivers/cpufreq/Makefile @@ -49,7 +49,7 @@ obj-$(CONFIG_ARM_BIG_LITTLE_CPUFREQ) += arm_big_little.o # LITTLE drivers, so that it is probed last. obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o -obj-$(CONFIG_ARCH_DAVINCI_DA850) += davinci-cpufreq.o +obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o obj-$(CONFIG_ARM_EXYNOS_CPUFREQ) += exynos-cpufreq.o obj-$(CONFIG_ARM_EXYNOS4210_CPUFREQ) += exynos4210-cpufreq.o diff --git a/drivers/cpufreq/cpufreq-cpu0.c b/drivers/cpufreq/cpufreq-cpu0.c index ee1ae303a07c..86beda9f950b 100644 --- a/drivers/cpufreq/cpufreq-cpu0.c +++ b/drivers/cpufreq/cpufreq-cpu0.c @@ -152,11 +152,8 @@ static int cpu0_cpufreq_probe(struct platform_device *pdev) goto out_put_reg; } - ret = of_init_opp_table(cpu_dev); - if (ret) { - pr_err("failed to init OPP table: %d\n", ret); - goto out_put_clk; - } + /* OPPs might be populated at runtime, don't check for error here */ + of_init_opp_table(cpu_dev); ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table); if (ret) { diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 62259d27f03e..6f024852c6fb 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1153,10 +1153,12 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) * the creation of a brand new one. So we need to perform this update * by invoking update_policy_cpu(). 
*/ - if (recover_policy && cpu != policy->cpu) + if (recover_policy && cpu != policy->cpu) { update_policy_cpu(policy, cpu); - else + WARN_ON(kobject_move(&policy->kobj, &dev->kobj)); + } else { policy->cpu = cpu; + } cpumask_copy(policy->cpus, cpumask_of(cpu)); diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c index 924bb2d42b1c..86631cb6f7de 100644 --- a/drivers/cpufreq/intel_pstate.c +++ b/drivers/cpufreq/intel_pstate.c @@ -128,6 +128,7 @@ static struct pstate_funcs pstate_funcs; struct perf_limits { int no_turbo; + int turbo_disabled; int max_perf_pct; int min_perf_pct; int32_t max_perf; @@ -287,7 +288,10 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b, if (ret != 1) return -EINVAL; limits.no_turbo = clamp_t(int, input, 0 , 1); - + if (limits.turbo_disabled) { + pr_warn("Turbo disabled by BIOS or unavailable on processor\n"); + limits.no_turbo = limits.turbo_disabled; + } return count; } @@ -357,21 +361,21 @@ static int byt_get_min_pstate(void) { u64 value; rdmsrl(BYT_RATIOS, value); - return (value >> 8) & 0x3F; + return (value >> 8) & 0x7F; } static int byt_get_max_pstate(void) { u64 value; rdmsrl(BYT_RATIOS, value); - return (value >> 16) & 0x3F; + return (value >> 16) & 0x7F; } static int byt_get_turbo_pstate(void) { u64 value; rdmsrl(BYT_TURBO_RATIOS, value); - return value & 0x3F; + return value & 0x7F; } static void byt_set_pstate(struct cpudata *cpudata, int pstate) @@ -381,7 +385,7 @@ static void byt_set_pstate(struct cpudata *cpudata, int pstate) u32 vid; val = pstate << 8; - if (limits.no_turbo) + if (limits.no_turbo && !limits.turbo_disabled) val |= (u64)1 << 32; vid_fp = cpudata->vid.min + mul_fp( @@ -405,8 +409,8 @@ static void byt_get_vid(struct cpudata *cpudata) rdmsrl(BYT_VIDS, value); - cpudata->vid.min = int_tofp((value >> 8) & 0x3f); - cpudata->vid.max = int_tofp((value >> 16) & 0x3f); + cpudata->vid.min = int_tofp((value >> 8) & 0x7f); + cpudata->vid.max = int_tofp((value >> 16) & 0x7f); cpudata->vid.ratio = div_fp( cpudata->vid.max - cpudata->vid.min, int_tofp(cpudata->pstate.max_pstate - @@ -448,7 +452,7 @@ static void core_set_pstate(struct cpudata *cpudata, int pstate) u64 val; val = pstate << 8; - if (limits.no_turbo) + if (limits.no_turbo && !limits.turbo_disabled) val |= (u64)1 << 32; wrmsrl_on_cpu(cpudata->cpu, MSR_IA32_PERF_CTL, val); @@ -696,9 +700,8 @@ static int intel_pstate_init_cpu(unsigned int cpunum) cpu = all_cpu_data[cpunum]; - intel_pstate_get_cpu_pstates(cpu); - cpu->cpu = cpunum; + intel_pstate_get_cpu_pstates(cpu); init_timer_deferrable(&cpu->timer); cpu->timer.function = intel_pstate_timer_func; @@ -741,7 +744,7 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy) limits.min_perf = int_tofp(1); limits.max_perf_pct = 100; limits.max_perf = int_tofp(1); - limits.no_turbo = 0; + limits.no_turbo = limits.turbo_disabled; return 0; } limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq; @@ -784,6 +787,7 @@ static int intel_pstate_cpu_init(struct cpufreq_policy *policy) { struct cpudata *cpu; int rc; + u64 misc_en; rc = intel_pstate_init_cpu(policy->cpu); if (rc) @@ -791,8 +795,13 @@ static int intel_pstate_cpu_init(struct cpufreq_policy *policy) cpu = all_cpu_data[policy->cpu]; - if (!limits.no_turbo && - limits.min_perf_pct == 100 && limits.max_perf_pct == 100) + rdmsrl(MSR_IA32_MISC_ENABLE, misc_en); + if (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE || + cpu->pstate.max_pstate == cpu->pstate.turbo_pstate) { + limits.turbo_disabled = 1; + limits.no_turbo = 1; + } + 
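/*
 * Illustrative sketch, not the intel_pstate driver: the hunks above widen the
 * Baytrail ratio/VID fields from 6 to 7 bits and refuse to clear no_turbo
 * once turbo has been disabled by firmware.  Roughly, the field extraction
 * and the clamp-then-override logic look like this; the MSR value and the
 * limits struct below are invented for the example.
 */
#include <stdint.h>
#include <stdio.h>

struct limits {
        int no_turbo;
        int turbo_disabled;
};

static int get_min_pstate(uint64_t msr) { return (msr >>  8) & 0x7F; }
static int get_max_pstate(uint64_t msr) { return (msr >> 16) & 0x7F; }

static void store_no_turbo(struct limits *l, int input)
{
        l->no_turbo = input ? 1 : 0;              /* clamp to 0..1 */
        if (l->turbo_disabled) {
                fprintf(stderr, "turbo disabled by firmware\n");
                l->no_turbo = l->turbo_disabled;  /* cannot be re-enabled */
        }
}

int main(void)
{
        uint64_t msr = 0x004d2a00;                /* hypothetical register value */
        struct limits l = { .no_turbo = 0, .turbo_disabled = 1 };

        printf("min %d max %d\n", get_min_pstate(msr), get_max_pstate(msr));
        store_no_turbo(&l, 0);
        printf("no_turbo %d\n", l.no_turbo);
        return 0;
}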
if (limits.min_perf_pct == 100 && limits.max_perf_pct == 100) policy->policy = CPUFREQ_POLICY_PERFORMANCE; else policy->policy = CPUFREQ_POLICY_POWERSAVE; diff --git a/drivers/cpufreq/sa1110-cpufreq.c b/drivers/cpufreq/sa1110-cpufreq.c index 546376719d8f..b5befc211172 100644 --- a/drivers/cpufreq/sa1110-cpufreq.c +++ b/drivers/cpufreq/sa1110-cpufreq.c @@ -349,7 +349,7 @@ static int __init sa1110_clk_init(void) name = "K4S641632D"; if (machine_is_h3100()) name = "KM416S4030CT"; - if (machine_is_jornada720()) + if (machine_is_jornada720() || machine_is_h3600()) name = "K4S281632B-1H"; if (machine_is_nanoengine()) name = "MT48LC8M16A2TG-75"; diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c index 1d80bd3636c5..b512a4ba7569 100644 --- a/drivers/crypto/caam/jr.c +++ b/drivers/crypto/caam/jr.c @@ -453,8 +453,8 @@ static int caam_jr_probe(struct platform_device *pdev) int error; jrdev = &pdev->dev; - jrpriv = kmalloc(sizeof(struct caam_drv_private_jr), - GFP_KERNEL); + jrpriv = devm_kmalloc(jrdev, sizeof(struct caam_drv_private_jr), + GFP_KERNEL); if (!jrpriv) return -ENOMEM; @@ -487,10 +487,8 @@ static int caam_jr_probe(struct platform_device *pdev) /* Now do the platform independent part */ error = caam_jr_init(jrdev); /* now turn on hardware */ - if (error) { - kfree(jrpriv); + if (error) return error; - } jrpriv->dev = jrdev; spin_lock(&driver_data.jr_alloc_lock); diff --git a/drivers/dma/cppi41.c b/drivers/dma/cppi41.c index d028f36ae655..8f8b0b608875 100644 --- a/drivers/dma/cppi41.c +++ b/drivers/dma/cppi41.c @@ -86,6 +86,9 @@ #define USBSS_IRQ_PD_COMP (1 << 2) +/* Packet Descriptor */ +#define PD2_ZERO_LENGTH (1 << 19) + struct cppi41_channel { struct dma_chan chan; struct dma_async_tx_descriptor txd; @@ -307,7 +310,7 @@ static irqreturn_t cppi41_irq(int irq, void *data) __iormb(); while (val) { - u32 desc; + u32 desc, len; q_num = __fls(val); val &= ~(1 << q_num); @@ -319,9 +322,13 @@ static irqreturn_t cppi41_irq(int irq, void *data) q_num, desc); continue; } - c->residue = pd_trans_len(c->desc->pd6) - - pd_trans_len(c->desc->pd0); + if (c->desc->pd2 & PD2_ZERO_LENGTH) + len = 0; + else + len = pd_trans_len(c->desc->pd0); + + c->residue = pd_trans_len(c->desc->pd6) - len; dma_cookie_complete(&c->txd); c->txd.callback(c->txd.callback_param); } diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c index 128714622bf5..14867e3ac8ff 100644 --- a/drivers/dma/imx-sdma.c +++ b/drivers/dma/imx-sdma.c @@ -255,6 +255,7 @@ struct sdma_channel { enum dma_slave_buswidth word_size; unsigned int buf_tail; unsigned int num_bd; + unsigned int period_len; struct sdma_buffer_descriptor *bd; dma_addr_t bd_phys; unsigned int pc_from_device, pc_to_device; @@ -593,6 +594,12 @@ static void sdma_event_disable(struct sdma_channel *sdmac, unsigned int event) static void sdma_handle_channel_loop(struct sdma_channel *sdmac) { + if (sdmac->desc.callback) + sdmac->desc.callback(sdmac->desc.callback_param); +} + +static void sdma_update_channel_loop(struct sdma_channel *sdmac) +{ struct sdma_buffer_descriptor *bd; /* @@ -611,9 +618,6 @@ static void sdma_handle_channel_loop(struct sdma_channel *sdmac) bd->mode.status |= BD_DONE; sdmac->buf_tail++; sdmac->buf_tail %= sdmac->num_bd; - - if (sdmac->desc.callback) - sdmac->desc.callback(sdmac->desc.callback_param); } } @@ -669,6 +673,9 @@ static irqreturn_t sdma_int_handler(int irq, void *dev_id) int channel = fls(stat) - 1; struct sdma_channel *sdmac = &sdma->channel[channel]; + if (sdmac->flags & IMX_DMA_SG_LOOP) + sdma_update_channel_loop(sdmac); + 
tasklet_schedule(&sdmac->tasklet); __clear_bit(channel, &stat); @@ -1129,6 +1136,7 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic( sdmac->status = DMA_IN_PROGRESS; sdmac->buf_tail = 0; + sdmac->period_len = period_len; sdmac->flags |= IMX_DMA_SG_LOOP; sdmac->direction = direction; @@ -1225,9 +1233,15 @@ static enum dma_status sdma_tx_status(struct dma_chan *chan, struct dma_tx_state *txstate) { struct sdma_channel *sdmac = to_sdma_chan(chan); + u32 residue; + + if (sdmac->flags & IMX_DMA_SG_LOOP) + residue = (sdmac->num_bd - sdmac->buf_tail) * sdmac->period_len; + else + residue = sdmac->chn_count - sdmac->chn_real_count; dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie, - sdmac->chn_count - sdmac->chn_real_count); + residue); return sdmac->status; } diff --git a/drivers/firewire/Kconfig b/drivers/firewire/Kconfig index 4199849e3758..145974f9662b 100644 --- a/drivers/firewire/Kconfig +++ b/drivers/firewire/Kconfig @@ -1,4 +1,5 @@ menu "IEEE 1394 (FireWire) support" + depends on HAS_DMA depends on PCI || COMPILE_TEST # firewire-core does not depend on PCI but is # not useful without PCI controller driver diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c index 57985410f12f..a66a3217f1d9 100644 --- a/drivers/firewire/ohci.c +++ b/drivers/firewire/ohci.c @@ -336,10 +336,10 @@ static const struct { QUIRK_CYCLE_TIMER | QUIRK_IR_WAKE}, {PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_VT6315, 0, - QUIRK_CYCLE_TIMER | QUIRK_NO_MSI}, + QUIRK_CYCLE_TIMER /* FIXME: necessary? */ | QUIRK_NO_MSI}, {PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_VT6315, PCI_ANY_ID, - 0}, + QUIRK_NO_MSI}, {PCI_VENDOR_ID_VIA, PCI_ANY_ID, PCI_ANY_ID, QUIRK_CYCLE_TIMER | QUIRK_NO_MSI}, diff --git a/drivers/firmware/efi/efi-pstore.c b/drivers/firmware/efi/efi-pstore.c index 4b9dc836dcf9..e992abc5ef26 100644 --- a/drivers/firmware/efi/efi-pstore.c +++ b/drivers/firmware/efi/efi-pstore.c @@ -40,7 +40,7 @@ struct pstore_read_data { static inline u64 generic_id(unsigned long timestamp, unsigned int part, int count) { - return (timestamp * 100 + part) * 1000 + count; + return ((u64) timestamp * 100 + part) * 1000 + count; } static int efi_pstore_read_func(struct efivar_entry *entry, void *data) diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c index cd36deb619fa..dc79346689e6 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c @@ -346,6 +346,7 @@ static __initdata struct { struct param_info { int verbose; + int found; void *params; }; @@ -353,25 +354,21 @@ static int __init fdt_find_uefi_params(unsigned long node, const char *uname, int depth, void *data) { struct param_info *info = data; - void *prop, *dest; - unsigned long len; + const void *prop; + void *dest; u64 val; - int i; + int i, len; if (depth != 1 || (strcmp(uname, "chosen") != 0 && strcmp(uname, "chosen@0") != 0)) return 0; - pr_info("Getting parameters from FDT:\n"); - for (i = 0; i < ARRAY_SIZE(dt_params); i++) { prop = of_get_flat_dt_prop(node, dt_params[i].propname, &len); - if (!prop) { - pr_err("Can't find %s in device tree!\n", - dt_params[i].name); + if (!prop) return 0; - } dest = info->params + dt_params[i].offset; + info->found++; val = of_read_number(prop, len / sizeof(u32)); @@ -390,10 +387,21 @@ static int __init fdt_find_uefi_params(unsigned long node, const char *uname, int __init efi_get_fdt_params(struct efi_fdt_params *params, int verbose) { struct param_info info; + int ret; + + pr_info("Getting EFI parameters from FDT:\n"); info.verbose = verbose; + info.found = 0; info.params = params; 
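/*
 * Illustrative aside, not the efi-pstore code: the one-line generic_id() fix
 * above promotes the arithmetic to 64 bits before the multiplications,
 * because (timestamp * 100 + part) * 1000 wraps when evaluated in a 32-bit
 * type (e.g. a 32-bit unsigned long) for any recent timestamp.  Compare:
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t id_broken(uint32_t ts, unsigned int part, int count)
{
        return (ts * 100 + part) * 1000 + count;          /* wraps in 32 bits */
}

static uint64_t id_fixed(uint32_t ts, unsigned int part, int count)
{
        return ((uint64_t)ts * 100 + part) * 1000 + count;
}

int main(void)
{
        uint32_t ts = 1406505633;                         /* July 2014 */

        printf("broken: %" PRIu64 "\n", id_broken(ts, 2, 3));
        printf("fixed:  %" PRIu64 "\n", id_fixed(ts, 2, 3));
        return 0;
}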
- return of_scan_flat_dt(fdt_find_uefi_params, &info); + ret = of_scan_flat_dt(fdt_find_uefi_params, &info); + if (!info.found) + pr_info("UEFI not found.\n"); + else if (!ret) + pr_err("Can't find '%s' in device tree!\n", + dt_params[info.found].name); + + return ret; } #endif /* CONFIG_EFI_PARAMS_FROM_FDT */ diff --git a/drivers/firmware/efi/fdt.c b/drivers/firmware/efi/fdt.c index 5c6a8e8a9580..507a3df46a5d 100644 --- a/drivers/firmware/efi/fdt.c +++ b/drivers/firmware/efi/fdt.c @@ -23,16 +23,6 @@ static efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt, u32 fdt_val32; u64 fdt_val64; - /* - * Copy definition of linux_banner here. Since this code is - * built as part of the decompressor for ARM v7, pulling - * in version.c where linux_banner is defined for the - * kernel brings other kernel dependencies with it. - */ - const char linux_banner[] = - "Linux version " UTS_RELEASE " (" LINUX_COMPILE_BY "@" - LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION "\n"; - /* Do some checks on provided FDT, if it exists*/ if (orig_fdt) { if (fdt_check_header(orig_fdt)) { @@ -63,7 +53,7 @@ static efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt, */ prev = 0; for (;;) { - const char *type, *name; + const char *type; int len; node = fdt_next_node(fdt, prev, NULL); diff --git a/drivers/gpio/gpio-mcp23s08.c b/drivers/gpio/gpio-mcp23s08.c index fe7c0e211f9a..57adbc90fdad 100644 --- a/drivers/gpio/gpio-mcp23s08.c +++ b/drivers/gpio/gpio-mcp23s08.c @@ -900,8 +900,6 @@ static int mcp23s08_probe(struct spi_device *spi) if (spi_present_mask & (1 << addr)) chips++; } - if (!chips) - return -ENODEV; } else { type = spi_get_device_id(spi)->driver_data; pdata = dev_get_platdata(&spi->dev); @@ -940,10 +938,6 @@ static int mcp23s08_probe(struct spi_device *spi) if (!(spi_present_mask & (1 << addr))) continue; chips--; - if (chips < 0) { - dev_err(&spi->dev, "FATAL: invalid negative chip id\n"); - goto fail; - } data->mcp[addr] = &data->chip[chips]; status = mcp23s08_probe_one(data->mcp[addr], &spi->dev, spi, 0x40 | (addr << 1), type, base, diff --git a/drivers/gpio/gpio-rcar.c b/drivers/gpio/gpio-rcar.c index 0c9f803fc1ac..b6ae89ea8811 100644 --- a/drivers/gpio/gpio-rcar.c +++ b/drivers/gpio/gpio-rcar.c @@ -284,6 +284,7 @@ static int gpio_rcar_irq_domain_map(struct irq_domain *h, unsigned int irq, static struct irq_domain_ops gpio_rcar_irq_domain_ops = { .map = gpio_rcar_irq_domain_map, + .xlate = irq_domain_xlate_twocell, }; struct gpio_rcar_info { diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c index 03711d00aaae..8218078b6133 100644 --- a/drivers/gpu/drm/drm_drv.c +++ b/drivers/gpu/drm/drm_drv.c @@ -419,8 +419,9 @@ long drm_ioctl(struct file *filp, retcode = -EFAULT; goto err_i1; } - } else + } else if (cmd & IOC_OUT) { memset(kdata, 0, usize); + } if (ioctl->flags & DRM_UNLOCKED) retcode = func(dev, kdata, file_priv); diff --git a/drivers/gpu/drm/exynos/exynos_drm_dpi.c b/drivers/gpu/drm/exynos/exynos_drm_dpi.c index 482127f633c5..9e530f205ad2 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_dpi.c +++ b/drivers/gpu/drm/exynos/exynos_drm_dpi.c @@ -40,7 +40,7 @@ exynos_dpi_detect(struct drm_connector *connector, bool force) { struct exynos_dpi *ctx = connector_to_dpi(connector); - if (!ctx->panel->connector) + if (ctx->panel && !ctx->panel->connector) drm_panel_attach(ctx->panel, &ctx->connector); return connector_status_connected; diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.c b/drivers/gpu/drm/exynos/exynos_drm_drv.c index 
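/*
 * Simplified illustration, not the EFI stub code: the efi.c rework above
 * counts how many of the expected FDT properties were found, so the caller
 * can tell "no UEFI parameters present at all" apart from "stopped at a
 * missing property" and name that property in the error.  The params[] table
 * and lookup() below are invented for the example.
 */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static const char *const params[] = { "system-table", "mmap", "mmap-size" };

static int lookup(const char *name)
{
        /* pretend the FDT carries every property except "mmap-size" */
        return strcmp(name, "mmap-size") != 0;
}

static int scan(size_t *found)
{
        size_t i;

        *found = 0;
        for (i = 0; i < sizeof(params) / sizeof(params[0]); i++) {
                if (!lookup(params[i]))
                        return 0;               /* stop at the first missing one */
                (*found)++;
        }
        return 1;
}

int main(void)
{
        size_t found;

        if (!scan(&found)) {
                if (!found)
                        printf("UEFI not found.\n");
                else
                        printf("Can't find '%s' in device tree!\n", params[found]);
        }
        return 0;
}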
d91f27777537..ab7d182063c3 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_drv.c +++ b/drivers/gpu/drm/exynos/exynos_drm_drv.c @@ -765,24 +765,24 @@ static int exynos_drm_init(void) return 0; -err_unregister_pd: - platform_device_unregister(exynos_drm_pdev); - err_remove_vidi: #ifdef CONFIG_DRM_EXYNOS_VIDI exynos_drm_remove_vidi(); + +err_unregister_pd: #endif + platform_device_unregister(exynos_drm_pdev); return ret; } static void exynos_drm_exit(void) { + platform_driver_unregister(&exynos_drm_platform_driver); #ifdef CONFIG_DRM_EXYNOS_VIDI exynos_drm_remove_vidi(); #endif platform_device_unregister(exynos_drm_pdev); - platform_driver_unregister(&exynos_drm_platform_driver); } module_init(exynos_drm_init); diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.h b/drivers/gpu/drm/exynos/exynos_drm_drv.h index 36535f398848..06cde4506278 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_drv.h +++ b/drivers/gpu/drm/exynos/exynos_drm_drv.h @@ -343,7 +343,7 @@ struct exynos_drm_display * exynos_dpi_probe(struct device *dev); int exynos_dpi_remove(struct device *dev); #else static inline struct exynos_drm_display * -exynos_dpi_probe(struct device *dev) { return 0; } +exynos_dpi_probe(struct device *dev) { return NULL; } static inline int exynos_dpi_remove(struct device *dev) { return 0; } #endif diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c index bb45ab2e7384..33161ad38201 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c +++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c @@ -741,6 +741,8 @@ static void fimd_apply(struct exynos_drm_manager *mgr) win_data = &ctx->win_data[i]; if (win_data->enabled) fimd_win_commit(mgr, i); + else + fimd_win_disable(mgr, i); } fimd_commit(mgr); diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c b/drivers/gpu/drm/exynos/exynos_hdmi.c index c104d0c9b385..aa259b0a873a 100644 --- a/drivers/gpu/drm/exynos/exynos_hdmi.c +++ b/drivers/gpu/drm/exynos/exynos_hdmi.c @@ -2090,6 +2090,11 @@ out: static void hdmi_dpms(struct exynos_drm_display *display, int mode) { + struct hdmi_context *hdata = display->ctx; + struct drm_encoder *encoder = hdata->encoder; + struct drm_crtc *crtc = encoder->crtc; + struct drm_crtc_helper_funcs *funcs = NULL; + DRM_DEBUG_KMS("mode %d\n", mode); switch (mode) { @@ -2099,6 +2104,20 @@ static void hdmi_dpms(struct exynos_drm_display *display, int mode) case DRM_MODE_DPMS_STANDBY: case DRM_MODE_DPMS_SUSPEND: case DRM_MODE_DPMS_OFF: + /* + * The SFRs of VP and Mixer are updated by Vertical Sync of + * Timing generator which is a part of HDMI so the sequence + * to disable TV Subsystem should be as following, + * VP -> Mixer -> HDMI + * + * Below codes will try to disable Mixer and VP(if used) + * prior to disabling HDMI. 
+ */ + if (crtc) + funcs = crtc->helper_private; + if (funcs && funcs->dpms) + (*funcs->dpms)(crtc, mode); + hdmi_poweroff(display); break; default: diff --git a/drivers/gpu/drm/exynos/exynos_mixer.c b/drivers/gpu/drm/exynos/exynos_mixer.c index 4c5aed7e54c8..7529946d0a74 100644 --- a/drivers/gpu/drm/exynos/exynos_mixer.c +++ b/drivers/gpu/drm/exynos/exynos_mixer.c @@ -377,6 +377,20 @@ static void mixer_run(struct mixer_context *ctx) mixer_regs_dump(ctx); } +static void mixer_stop(struct mixer_context *ctx) +{ + struct mixer_resources *res = &ctx->mixer_res; + int timeout = 20; + + mixer_reg_writemask(res, MXR_STATUS, 0, MXR_STATUS_REG_RUN); + + while (!(mixer_reg_read(res, MXR_STATUS) & MXR_STATUS_REG_IDLE) && + --timeout) + usleep_range(10000, 12000); + + mixer_regs_dump(ctx); +} + static void vp_video_buffer(struct mixer_context *ctx, int win) { struct mixer_resources *res = &ctx->mixer_res; @@ -497,13 +511,8 @@ static void vp_video_buffer(struct mixer_context *ctx, int win) static void mixer_layer_update(struct mixer_context *ctx) { struct mixer_resources *res = &ctx->mixer_res; - u32 val; - - val = mixer_reg_read(res, MXR_CFG); - /* allow one update per vsync only */ - if (!(val & MXR_CFG_LAYER_UPDATE_COUNT_MASK)) - mixer_reg_writemask(res, MXR_CFG, ~0, MXR_CFG_LAYER_UPDATE); + mixer_reg_writemask(res, MXR_CFG, ~0, MXR_CFG_LAYER_UPDATE); } static void mixer_graph_buffer(struct mixer_context *ctx, int win) @@ -1010,6 +1019,8 @@ static void mixer_wait_for_vblank(struct exynos_drm_manager *mgr) } mutex_unlock(&mixer_ctx->mixer_mutex); + drm_vblank_get(mgr->crtc->dev, mixer_ctx->pipe); + atomic_set(&mixer_ctx->wait_vsync_event, 1); /* @@ -1020,6 +1031,8 @@ static void mixer_wait_for_vblank(struct exynos_drm_manager *mgr) !atomic_read(&mixer_ctx->wait_vsync_event), HZ/20)) DRM_DEBUG_KMS("vblank wait timed out.\n"); + + drm_vblank_put(mgr->crtc->dev, mixer_ctx->pipe); } static void mixer_window_suspend(struct exynos_drm_manager *mgr) @@ -1061,7 +1074,7 @@ static void mixer_poweron(struct exynos_drm_manager *mgr) mutex_unlock(&ctx->mixer_mutex); return; } - ctx->powered = true; + mutex_unlock(&ctx->mixer_mutex); pm_runtime_get_sync(ctx->dev); @@ -1072,6 +1085,12 @@ static void mixer_poweron(struct exynos_drm_manager *mgr) clk_prepare_enable(res->sclk_mixer); } + mutex_lock(&ctx->mixer_mutex); + ctx->powered = true; + mutex_unlock(&ctx->mixer_mutex); + + mixer_reg_writemask(res, MXR_STATUS, ~0, MXR_STATUS_SOFT_RESET); + mixer_reg_write(res, MXR_INT_EN, ctx->int_en); mixer_win_reset(ctx); @@ -1084,14 +1103,21 @@ static void mixer_poweroff(struct exynos_drm_manager *mgr) struct mixer_resources *res = &ctx->mixer_res; mutex_lock(&ctx->mixer_mutex); - if (!ctx->powered) - goto out; + if (!ctx->powered) { + mutex_unlock(&ctx->mixer_mutex); + return; + } mutex_unlock(&ctx->mixer_mutex); + mixer_stop(ctx); mixer_window_suspend(mgr); ctx->int_en = mixer_reg_read(res, MXR_INT_EN); + mutex_lock(&ctx->mixer_mutex); + ctx->powered = false; + mutex_unlock(&ctx->mixer_mutex); + clk_disable_unprepare(res->mixer); if (ctx->vp_enabled) { clk_disable_unprepare(res->vp); @@ -1099,12 +1125,6 @@ static void mixer_poweroff(struct exynos_drm_manager *mgr) } pm_runtime_put_sync(ctx->dev); - - mutex_lock(&ctx->mixer_mutex); - ctx->powered = false; - -out: - mutex_unlock(&ctx->mixer_mutex); } static void mixer_dpms(struct exynos_drm_manager *mgr, int mode) diff --git a/drivers/gpu/drm/exynos/regs-mixer.h b/drivers/gpu/drm/exynos/regs-mixer.h index 4537026bc385..5f32e1a29411 100644 --- 
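/*
 * Illustrative sketch, not the exynos_mixer code: the poweron/poweroff rework
 * above publishes ctx->powered only under the mutex, sets it after the
 * hardware is actually up, and clears it before tearing the hardware down, so
 * a concurrent reader never sees "powered" mid-transition.  The shape of that
 * pattern, with pthreads standing in for the kernel mutex and empty stubs for
 * the hardware work:
 */
#include <pthread.h>
#include <stdbool.h>

struct mixer {
        pthread_mutex_t lock;
        bool powered;
};

static void hw_up(struct mixer *m)   { (void)m; /* clocks on, soft reset */ }
static void hw_stop(struct mixer *m) { (void)m; /* wait for idle */ }
static void hw_down(struct mixer *m) { (void)m; /* clocks off */ }

static void mixer_power_on(struct mixer *m)
{
        pthread_mutex_lock(&m->lock);
        if (m->powered) {
                pthread_mutex_unlock(&m->lock);
                return;
        }
        pthread_mutex_unlock(&m->lock);

        hw_up(m);                               /* bring the hardware up first */

        pthread_mutex_lock(&m->lock);
        m->powered = true;                      /* ...then publish the state */
        pthread_mutex_unlock(&m->lock);
}

static void mixer_power_off(struct mixer *m)
{
        pthread_mutex_lock(&m->lock);
        if (!m->powered) {
                pthread_mutex_unlock(&m->lock);
                return;
        }
        pthread_mutex_unlock(&m->lock);

        hw_stop(m);                             /* quiesce while still powered */

        pthread_mutex_lock(&m->lock);
        m->powered = false;                     /* unpublish before power-down */
        pthread_mutex_unlock(&m->lock);

        hw_down(m);
}

int main(void)
{
        struct mixer m = { PTHREAD_MUTEX_INITIALIZER, false };

        mixer_power_on(&m);
        mixer_power_off(&m);
        return 0;
}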
a/drivers/gpu/drm/exynos/regs-mixer.h +++ b/drivers/gpu/drm/exynos/regs-mixer.h @@ -78,6 +78,7 @@ #define MXR_STATUS_BIG_ENDIAN (1 << 3) #define MXR_STATUS_ENDIAN_MASK (1 << 3) #define MXR_STATUS_SYNC_ENABLE (1 << 2) +#define MXR_STATUS_REG_IDLE (1 << 1) #define MXR_STATUS_REG_RUN (1 << 0) /* bits for MXR_CFG */ diff --git a/drivers/gpu/drm/i2c/tda998x_drv.c b/drivers/gpu/drm/i2c/tda998x_drv.c index 240c331405b9..ac357b02bd35 100644 --- a/drivers/gpu/drm/i2c/tda998x_drv.c +++ b/drivers/gpu/drm/i2c/tda998x_drv.c @@ -810,6 +810,12 @@ static int tda998x_encoder_mode_valid(struct drm_encoder *encoder, struct drm_display_mode *mode) { + if (mode->clock > 150000) + return MODE_CLOCK_HIGH; + if (mode->htotal >= BIT(13)) + return MODE_BAD_HVALUE; + if (mode->vtotal >= BIT(11)) + return MODE_BAD_VVALUE; return MODE_OK; } @@ -1048,8 +1054,8 @@ read_edid_block(struct drm_encoder *encoder, uint8_t *buf, int blk) return i; } } else { - for (i = 10; i > 0; i--) { - msleep(10); + for (i = 100; i > 0; i--) { + msleep(1); ret = reg_read(priv, REG_INT_FLAGS_2); if (ret < 0) return ret; @@ -1183,7 +1189,6 @@ static void tda998x_encoder_destroy(struct drm_encoder *encoder) { struct tda998x_priv *priv = to_tda998x_priv(encoder); - drm_i2c_encoder_destroy(encoder); /* disable all IRQs and free the IRQ handler */ cec_write(priv, REG_CEC_RXSHPDINTENA, 0); @@ -1193,6 +1198,7 @@ tda998x_encoder_destroy(struct drm_encoder *encoder) if (priv->cec) i2c_unregister_device(priv->cec); + drm_i2c_encoder_destroy(encoder); kfree(priv); } diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index 601caa88c092..b8c689202c40 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -446,7 +446,9 @@ static int i915_gem_object_info(struct seq_file *m, void* data) memset(&stats, 0, sizeof(stats)); stats.file_priv = file->driver_priv; + spin_lock(&file->table_lock); idr_for_each(&file->object_idr, per_file_stats, &stats); + spin_unlock(&file->table_lock); /* * Although we have a valid reference on file->pid, that does * not guarantee that the task_struct who called get_pid() is diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c index 6c656392d67d..d44344140627 100644 --- a/drivers/gpu/drm/i915/i915_dma.c +++ b/drivers/gpu/drm/i915/i915_dma.c @@ -1464,12 +1464,13 @@ static int i915_kick_out_vgacon(struct drm_i915_private *dev_priv) #else static int i915_kick_out_vgacon(struct drm_i915_private *dev_priv) { - int ret; + int ret = 0; DRM_INFO("Replacing VGA console driver\n"); console_lock(); - ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1); + if (con_is_bound(&vga_con)) + ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1); if (ret == 0) { ret = do_unregister_con_driver(&vga_con); diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 49414d30e8d4..374f964323ad 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -656,6 +656,7 @@ enum intel_sbi_destination { #define QUIRK_PIPEA_FORCE (1<<0) #define QUIRK_LVDS_SSC_DISABLE (1<<1) #define QUIRK_INVERT_BRIGHTNESS (1<<2) +#define QUIRK_BACKLIGHT_PRESENT (1<<3) struct intel_fbdev; struct intel_fbc_work; @@ -977,6 +978,8 @@ struct i915_power_well { bool always_on; /* power well enable/disable usage count */ int count; + /* cached hw enabled state */ + bool hw_enabled; unsigned long domains; unsigned long data; const struct i915_power_well_ops *ops; diff --git a/drivers/gpu/drm/i915/i915_gem.c 
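/*
 * Illustrative sketch, with limits taken from the tda998x_drv.c hunk above
 * (not an authoritative datasheet): mode_valid now rejects modes the encoder
 * cannot drive, i.e. pixel clocks above 150 MHz and h/v totals that do not
 * fit in 13- and 11-bit counters.  As a plain validation routine:
 */
#include <stdio.h>

enum mode_status { MODE_OK, MODE_CLOCK_HIGH, MODE_BAD_HVALUE, MODE_BAD_VVALUE };

struct display_mode { int clock /* kHz */, htotal, vtotal; };

static enum mode_status mode_valid(const struct display_mode *m)
{
        if (m->clock > 150000)
                return MODE_CLOCK_HIGH;
        if (m->htotal >= (1 << 13))
                return MODE_BAD_HVALUE;
        if (m->vtotal >= (1 << 11))
                return MODE_BAD_VVALUE;
        return MODE_OK;
}

int main(void)
{
        struct display_mode m = { 148500, 2200, 1125 }; /* 1080p60-like timings */

        printf("%s\n", mode_valid(&m) == MODE_OK ? "ok" : "rejected");
        return 0;
}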
b/drivers/gpu/drm/i915/i915_gem.c index f36126383d26..d893e4da5dce 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -1616,22 +1616,6 @@ out: return ret; } -void i915_gem_release_all_mmaps(struct drm_i915_private *dev_priv) -{ - struct i915_vma *vma; - - /* - * Only the global gtt is relevant for gtt memory mappings, so restrict - * list traversal to objects bound into the global address space. Note - * that the active list should be empty, but better safe than sorry. - */ - WARN_ON(!list_empty(&dev_priv->gtt.base.active_list)); - list_for_each_entry(vma, &dev_priv->gtt.base.active_list, mm_list) - i915_gem_release_mmap(vma->obj); - list_for_each_entry(vma, &dev_priv->gtt.base.inactive_list, mm_list) - i915_gem_release_mmap(vma->obj); -} - /** * i915_gem_release_mmap - remove physical page mappings * @obj: obj in question @@ -1657,6 +1641,15 @@ i915_gem_release_mmap(struct drm_i915_gem_object *obj) obj->fault_mappable = false; } +void +i915_gem_release_all_mmaps(struct drm_i915_private *dev_priv) +{ + struct drm_i915_gem_object *obj; + + list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) + i915_gem_release_mmap(obj); +} + uint32_t i915_gem_get_gtt_size(struct drm_device *dev, uint32_t size, int tiling_mode) { diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c index 3ffe308d5893..a5ddf3bce9c3 100644 --- a/drivers/gpu/drm/i915/i915_gem_context.c +++ b/drivers/gpu/drm/i915/i915_gem_context.c @@ -598,6 +598,7 @@ static int do_switch(struct intel_engine_cs *ring, struct intel_context *from = ring->last_context; struct i915_hw_ppgtt *ppgtt = ctx_to_ppgtt(to); u32 hw_flags = 0; + bool uninitialized = false; int ret, i; if (from != NULL && ring == &dev_priv->ring[RCS]) { @@ -696,19 +697,20 @@ static int do_switch(struct intel_engine_cs *ring, i915_gem_context_unreference(from); } + uninitialized = !to->is_initialized && from == NULL; + to->is_initialized = true; + done: i915_gem_context_reference(to); ring->last_context = to; to->last_ring = ring; - if (ring->id == RCS && !to->is_initialized && from == NULL) { + if (uninitialized) { ret = i915_gem_render_state_init(ring); if (ret) DRM_ERROR("init render state: %d\n", ret); } - to->is_initialized = true; - return 0; unpin_out: diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c index 3521f998a178..34894b573064 100644 --- a/drivers/gpu/drm/i915/i915_gem_render_state.c +++ b/drivers/gpu/drm/i915/i915_gem_render_state.c @@ -31,7 +31,7 @@ struct i915_render_state { struct drm_i915_gem_object *obj; unsigned long ggtt_offset; - void *batch; + u32 *batch; u32 size; u32 len; }; @@ -80,7 +80,7 @@ free: static void render_state_free(struct i915_render_state *so) { - kunmap(so->batch); + kunmap(kmap_to_page(so->batch)); i915_gem_object_ggtt_unpin(so->obj); drm_gem_object_unreference(&so->obj->base); kfree(so); diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c index 62ef55ba061c..7465ab0fd396 100644 --- a/drivers/gpu/drm/i915/i915_gem_stolen.c +++ b/drivers/gpu/drm/i915/i915_gem_stolen.c @@ -74,6 +74,50 @@ static unsigned long i915_stolen_to_physical(struct drm_device *dev) if (base == 0) return 0; + /* make sure we don't clobber the GTT if it's within stolen memory */ + if (INTEL_INFO(dev)->gen <= 4 && !IS_G33(dev) && !IS_G4X(dev)) { + struct { + u32 start, end; + } stolen[2] = { + { .start = base, .end = base + dev_priv->gtt.stolen_size, }, + { .start = base, .end = base + 
dev_priv->gtt.stolen_size, }, + }; + u64 gtt_start, gtt_end; + + gtt_start = I915_READ(PGTBL_CTL); + if (IS_GEN4(dev)) + gtt_start = (gtt_start & PGTBL_ADDRESS_LO_MASK) | + (gtt_start & PGTBL_ADDRESS_HI_MASK) << 28; + else + gtt_start &= PGTBL_ADDRESS_LO_MASK; + gtt_end = gtt_start + gtt_total_entries(dev_priv->gtt) * 4; + + if (gtt_start >= stolen[0].start && gtt_start < stolen[0].end) + stolen[0].end = gtt_start; + if (gtt_end > stolen[1].start && gtt_end <= stolen[1].end) + stolen[1].start = gtt_end; + + /* pick the larger of the two chunks */ + if (stolen[0].end - stolen[0].start > + stolen[1].end - stolen[1].start) { + base = stolen[0].start; + dev_priv->gtt.stolen_size = stolen[0].end - stolen[0].start; + } else { + base = stolen[1].start; + dev_priv->gtt.stolen_size = stolen[1].end - stolen[1].start; + } + + if (stolen[0].start != stolen[1].start || + stolen[0].end != stolen[1].end) { + DRM_DEBUG_KMS("GTT within stolen memory at 0x%llx-0x%llx\n", + (unsigned long long) gtt_start, + (unsigned long long) gtt_end - 1); + DRM_DEBUG_KMS("Stolen memory adjusted to 0x%x-0x%x\n", + base, base + (u32) dev_priv->gtt.stolen_size - 1); + } + } + + /* Verify that nothing else uses this physical address. Stolen * memory should be reserved by the BIOS and hidden from the * kernel. So if the region is already marked as busy, something diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c index 267f069765ad..c05c84f3f091 100644 --- a/drivers/gpu/drm/i915/i915_irq.c +++ b/drivers/gpu/drm/i915/i915_irq.c @@ -2845,7 +2845,7 @@ static int semaphore_passed(struct intel_engine_cs *ring) { struct drm_i915_private *dev_priv = ring->dev->dev_private; struct intel_engine_cs *signaller; - u32 seqno, ctl; + u32 seqno; ring->hangcheck.deadlock++; @@ -2857,15 +2857,12 @@ static int semaphore_passed(struct intel_engine_cs *ring) if (signaller->hangcheck.deadlock >= I915_NUM_RINGS) return -1; - /* cursory check for an unkickable deadlock */ - ctl = I915_READ_CTL(signaller); - if (ctl & RING_WAIT_SEMAPHORE && semaphore_passed(signaller) < 0) - return -1; - if (i915_seqno_passed(signaller->get_seqno(signaller, false), seqno)) return 1; - if (signaller->hangcheck.deadlock) + /* cursory check for an unkickable deadlock */ + if (I915_READ_CTL(signaller) & RING_WAIT_SEMAPHORE && + semaphore_passed(signaller) < 0) return -1; return 0; diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index e691b30b2817..a5bab61bfc00 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -942,6 +942,9 @@ enum punit_power_well { /* * Instruction and interrupt control regs */ +#define PGTBL_CTL 0x02020 +#define PGTBL_ADDRESS_LO_MASK 0xfffff000 /* bits [31:12] */ +#define PGTBL_ADDRESS_HI_MASK 0x000000f0 /* bits [35:32] (gen4) */ #define PGTBL_ER 0x02024 #define RENDER_RING_BASE 0x02000 #define BSD_RING_BASE 0x04000 diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c index 1ee98f121a00..827498e081df 100644 --- a/drivers/gpu/drm/i915/intel_bios.c +++ b/drivers/gpu/drm/i915/intel_bios.c @@ -315,9 +315,6 @@ parse_lfp_backlight(struct drm_i915_private *dev_priv, struct bdb_header *bdb) const struct bdb_lfp_backlight_data *backlight_data; const struct bdb_lfp_backlight_data_entry *entry; - /* Err to enabling backlight if no backlight block. 
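/*
 * Illustrative sketch of the algorithm in the i915_gem_stolen.c hunk above,
 * not the driver code itself: when the GTT turns out to live inside the
 * stolen range, the range is split around it and the larger remaining chunk
 * is kept.  The addresses below are made up.
 */
#include <stdint.h>
#include <stdio.h>

struct range { uint64_t start, end; };          /* [start, end) */

static struct range shrink_around(struct range stolen,
                                  uint64_t gtt_start, uint64_t gtt_end)
{
        struct range below = stolen, above = stolen;

        if (gtt_start >= stolen.start && gtt_start < stolen.end)
                below.end = gtt_start;          /* chunk below the GTT */
        if (gtt_end > stolen.start && gtt_end <= stolen.end)
                above.start = gtt_end;          /* chunk above the GTT */

        /* keep whichever chunk is larger */
        return (below.end - below.start > above.end - above.start) ? below : above;
}

int main(void)
{
        struct range stolen = { 0x40000000, 0x44000000 };
        struct range r = shrink_around(stolen, 0x40100000, 0x40180000);

        printf("0x%llx-0x%llx\n",
               (unsigned long long)r.start, (unsigned long long)r.end);
        return 0;
}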
*/ - dev_priv->vbt.backlight.present = true; - backlight_data = find_section(bdb, BDB_LVDS_BACKLIGHT); if (!backlight_data) return; @@ -1088,6 +1085,9 @@ init_vbt_defaults(struct drm_i915_private *dev_priv) dev_priv->vbt.crt_ddc_pin = GMBUS_PORT_VGADDC; + /* Default to having backlight */ + dev_priv->vbt.backlight.present = true; + /* LFP panel data */ dev_priv->vbt.lvds_dither = 1; dev_priv->vbt.lvds_vbt = 0; diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c index efd3cf50cb0f..f0be855ddf45 100644 --- a/drivers/gpu/drm/i915/intel_display.c +++ b/drivers/gpu/drm/i915/intel_display.c @@ -2087,6 +2087,7 @@ void intel_flush_primary_plane(struct drm_i915_private *dev_priv, static void intel_enable_primary_hw_plane(struct drm_i915_private *dev_priv, enum plane plane, enum pipe pipe) { + struct drm_device *dev = dev_priv->dev; struct intel_crtc *intel_crtc = to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]); int reg; @@ -2106,6 +2107,14 @@ static void intel_enable_primary_hw_plane(struct drm_i915_private *dev_priv, I915_WRITE(reg, val | DISPLAY_PLANE_ENABLE); intel_flush_primary_plane(dev_priv, plane); + + /* + * BDW signals flip done immediately if the plane + * is disabled, even if the plane enable is already + * armed to occur at the next vblank :( + */ + if (IS_BROADWELL(dev)) + intel_wait_for_vblank(dev, intel_crtc->pipe); } /** @@ -4564,7 +4573,10 @@ static void valleyview_crtc_enable(struct drm_crtc *crtc) if (intel_crtc->active) return; - vlv_prepare_pll(intel_crtc); + is_dsi = intel_pipe_has_type(crtc, INTEL_OUTPUT_DSI); + + if (!is_dsi && !IS_CHERRYVIEW(dev)) + vlv_prepare_pll(intel_crtc); /* Set up the display plane register */ dspcntr = DISPPLANE_GAMMA_ENABLE; @@ -4598,8 +4610,6 @@ static void valleyview_crtc_enable(struct drm_crtc *crtc) if (encoder->pre_pll_enable) encoder->pre_pll_enable(encoder); - is_dsi = intel_pipe_has_type(crtc, INTEL_OUTPUT_DSI); - if (!is_dsi) { if (IS_CHERRYVIEW(dev)) chv_enable_pll(intel_crtc); @@ -11087,6 +11097,22 @@ const char *intel_output_name(int output) return names[output]; } +static bool intel_crt_present(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + + if (IS_ULT(dev)) + return false; + + if (IS_CHERRYVIEW(dev)) + return false; + + if (IS_VALLEYVIEW(dev) && !dev_priv->vbt.int_crt_support) + return false; + + return true; +} + static void intel_setup_outputs(struct drm_device *dev) { struct drm_i915_private *dev_priv = dev->dev_private; @@ -11095,7 +11121,7 @@ static void intel_setup_outputs(struct drm_device *dev) intel_lvds_init(dev); - if (!IS_ULT(dev) && !IS_CHERRYVIEW(dev) && dev_priv->vbt.int_crt_support) + if (intel_crt_present(dev)) intel_crt_init(dev); if (HAS_DDI(dev)) { @@ -11565,6 +11591,14 @@ static void quirk_invert_brightness(struct drm_device *dev) DRM_INFO("applying inverted panel brightness quirk\n"); } +/* Some VBT's incorrectly indicate no backlight is present */ +static void quirk_backlight_present(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + dev_priv->quirks |= QUIRK_BACKLIGHT_PRESENT; + DRM_INFO("applying backlight present quirk\n"); +} + struct intel_quirk { int device; int subsystem_vendor; @@ -11633,6 +11667,15 @@ static struct intel_quirk intel_quirks[] = { /* Acer Aspire 5336 */ { 0x2a42, 0x1025, 0x048a, quirk_invert_brightness }, + + /* Acer C720 and C720P Chromebooks (Celeron 2955U) have backlights */ + { 0x0a06, 0x1025, 0x0a11, quirk_backlight_present }, + + /* Toshiba CB35 Chromebook (Celeron 
2955U) */ + { 0x0a06, 0x1179, 0x0a88, quirk_backlight_present }, + + /* HP Chromebook 14 (Celeron 2955U) */ + { 0x0a06, 0x103c, 0x21ed, quirk_backlight_present }, }; static void intel_init_quirks(struct drm_device *dev) @@ -11871,6 +11914,7 @@ static void intel_sanitize_crtc(struct intel_crtc *crtc) * ... */ plane = crtc->plane; crtc->plane = !plane; + crtc->primary_enabled = true; dev_priv->display.crtc_disable(&crtc->base); crtc->plane = plane; @@ -12411,8 +12455,8 @@ intel_display_capture_error_state(struct drm_device *dev) for_each_pipe(i) { error->pipe[i].power_domain_on = - intel_display_power_enabled_sw(dev_priv, - POWER_DOMAIN_PIPE(i)); + intel_display_power_enabled_unlocked(dev_priv, + POWER_DOMAIN_PIPE(i)); if (!error->pipe[i].power_domain_on) continue; @@ -12447,7 +12491,7 @@ intel_display_capture_error_state(struct drm_device *dev) enum transcoder cpu_transcoder = transcoders[i]; error->transcoder[i].power_domain_on = - intel_display_power_enabled_sw(dev_priv, + intel_display_power_enabled_unlocked(dev_priv, POWER_DOMAIN_TRANSCODER(cpu_transcoder)); if (!error->transcoder[i].power_domain_on) continue; diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c index 52fda950fd2a..8a1a4fbc06ac 100644 --- a/drivers/gpu/drm/i915/intel_dp.c +++ b/drivers/gpu/drm/i915/intel_dp.c @@ -28,6 +28,8 @@ #include <linux/i2c.h> #include <linux/slab.h> #include <linux/export.h> +#include <linux/notifier.h> +#include <linux/reboot.h> #include <drm/drmP.h> #include <drm/drm_crtc.h> #include <drm/drm_crtc_helper.h> @@ -336,6 +338,37 @@ static u32 _pp_stat_reg(struct intel_dp *intel_dp) return VLV_PIPE_PP_STATUS(vlv_power_sequencer_pipe(intel_dp)); } +/* Reboot notifier handler to shutdown panel power to guarantee T12 timing + This function only applicable when panel PM state is not to be tracked */ +static int edp_notify_handler(struct notifier_block *this, unsigned long code, + void *unused) +{ + struct intel_dp *intel_dp = container_of(this, typeof(* intel_dp), + edp_notifier); + struct drm_device *dev = intel_dp_to_dev(intel_dp); + struct drm_i915_private *dev_priv = dev->dev_private; + u32 pp_div; + u32 pp_ctrl_reg, pp_div_reg; + enum pipe pipe = vlv_power_sequencer_pipe(intel_dp); + + if (!is_edp(intel_dp) || code != SYS_RESTART) + return 0; + + if (IS_VALLEYVIEW(dev)) { + pp_ctrl_reg = VLV_PIPE_PP_CONTROL(pipe); + pp_div_reg = VLV_PIPE_PP_DIVISOR(pipe); + pp_div = I915_READ(pp_div_reg); + pp_div &= PP_REFERENCE_DIVIDER_MASK; + + /* 0x1F write to PP_DIV_REG sets max cycle delay */ + I915_WRITE(pp_div_reg, pp_div | 0x1F); + I915_WRITE(pp_ctrl_reg, PANEL_UNLOCK_REGS | PANEL_POWER_OFF); + msleep(intel_dp->panel_power_cycle_delay); + } + + return 0; +} + static bool edp_have_panel_power(struct intel_dp *intel_dp) { struct drm_device *dev = intel_dp_to_dev(intel_dp); @@ -873,8 +906,8 @@ intel_dp_compute_config(struct intel_encoder *encoder, mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock, bpp); - for (lane_count = min_lane_count; lane_count <= max_lane_count; lane_count <<= 1) { - for (clock = min_clock; clock <= max_clock; clock++) { + for (clock = min_clock; clock <= max_clock; clock++) { + for (lane_count = min_lane_count; lane_count <= max_lane_count; lane_count <<= 1) { link_clock = drm_dp_bw_code_to_link_rate(bws[clock]); link_avail = intel_dp_max_data_rate(link_clock, lane_count); @@ -3707,6 +3740,10 @@ void intel_dp_encoder_destroy(struct drm_encoder *encoder) drm_modeset_lock(&dev->mode_config.connection_mutex, NULL); edp_panel_vdd_off_sync(intel_dp); 
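/*
 * Illustrative aside, not the intel_dp code: the hunk above swaps the nesting
 * of the link-configuration search so the link clock is walked in the outer
 * loop; a first-fit search then settles on the lowest clock that can carry
 * the mode rather than on the smallest lane count.  The effect of loop order
 * on such a search, with made-up rates and a crude capacity check:
 */
#include <stdio.h>

static const int rates_khz[] = { 162000, 270000, 540000 };
static const int lanes[]     = { 1, 2, 4 };

static void pick(int needed)
{
        unsigned int c, l;

        for (c = 0; c < 3; c++)                 /* outer: link rate */
                for (l = 0; l < 3; l++)         /* inner: lane count */
                        if (rates_khz[c] * lanes[l] >= needed) {
                                printf("rate %d kHz x %d lanes\n",
                                       rates_khz[c], lanes[l]);
                                return;
                        }
}

int main(void)
{
        /* rate-outer order settles on 162000 kHz x 4 lanes; a lane-outer
         * search would have picked 540000 kHz x 1 lane instead */
        pick(500000);
        return 0;
}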
drm_modeset_unlock(&dev->mode_config.connection_mutex); + if (intel_dp->edp_notifier.notifier_call) { + unregister_reboot_notifier(&intel_dp->edp_notifier); + intel_dp->edp_notifier.notifier_call = NULL; + } } kfree(intel_dig_port); } @@ -4184,6 +4221,11 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp, } mutex_unlock(&dev->mode_config.mutex); + if (IS_VALLEYVIEW(dev)) { + intel_dp->edp_notifier.notifier_call = edp_notify_handler; + register_reboot_notifier(&intel_dp->edp_notifier); + } + intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode); intel_panel_setup_backlight(connector); diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h index bda0ae3d80cc..f67340ed2c12 100644 --- a/drivers/gpu/drm/i915/intel_drv.h +++ b/drivers/gpu/drm/i915/intel_drv.h @@ -538,6 +538,8 @@ struct intel_dp { unsigned long last_power_on; unsigned long last_backlight_off; bool psr_setup_done; + struct notifier_block edp_notifier; + bool use_tps3; struct intel_connector *attached_connector; @@ -950,8 +952,8 @@ int intel_power_domains_init(struct drm_i915_private *); void intel_power_domains_remove(struct drm_i915_private *); bool intel_display_power_enabled(struct drm_i915_private *dev_priv, enum intel_display_power_domain domain); -bool intel_display_power_enabled_sw(struct drm_i915_private *dev_priv, - enum intel_display_power_domain domain); +bool intel_display_power_enabled_unlocked(struct drm_i915_private *dev_priv, + enum intel_display_power_domain domain); void intel_display_power_get(struct drm_i915_private *dev_priv, enum intel_display_power_domain domain); void intel_display_power_put(struct drm_i915_private *dev_priv, diff --git a/drivers/gpu/drm/i915/intel_dsi.c b/drivers/gpu/drm/i915/intel_dsi.c index 02f99d768d49..3fd082933c87 100644 --- a/drivers/gpu/drm/i915/intel_dsi.c +++ b/drivers/gpu/drm/i915/intel_dsi.c @@ -117,17 +117,18 @@ static void intel_dsi_device_ready(struct intel_encoder *encoder) /* bandgap reset is needed after everytime we do power gate */ band_gap_reset(dev_priv); + I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); + usleep_range(2500, 3000); + val = I915_READ(MIPI_PORT_CTRL(pipe)); I915_WRITE(MIPI_PORT_CTRL(pipe), val | LP_OUTPUT_HOLD); usleep_range(1000, 1500); - I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_EXIT); - usleep_range(2000, 2500); - I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY); - usleep_range(2000, 2500); - I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00); - usleep_range(2000, 2500); + + I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_EXIT); + usleep_range(2500, 3000); + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY); - usleep_range(2000, 2500); + usleep_range(2500, 3000); } static void intel_dsi_enable(struct intel_encoder *encoder) @@ -271,23 +272,23 @@ static void intel_dsi_clear_device_ready(struct intel_encoder *encoder) DRM_DEBUG_KMS("\n"); - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_ENTER); usleep_range(2000, 2500); - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_EXIT); + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_EXIT); usleep_range(2000, 2500); - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_ENTER); usleep_range(2000, 2500); - val = I915_READ(MIPI_PORT_CTRL(pipe)); - I915_WRITE(MIPI_PORT_CTRL(pipe), val & ~LP_OUTPUT_HOLD); - usleep_range(1000, 1500); - if (wait_for(((I915_READ(MIPI_PORT_CTRL(pipe)) & 
AFE_LATCHOUT) == 0x00000), 30)) DRM_ERROR("DSI LP not going Low\n"); + val = I915_READ(MIPI_PORT_CTRL(pipe)); + I915_WRITE(MIPI_PORT_CTRL(pipe), val & ~LP_OUTPUT_HOLD); + usleep_range(1000, 1500); + I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00); usleep_range(2000, 2500); diff --git a/drivers/gpu/drm/i915/intel_dsi_cmd.c b/drivers/gpu/drm/i915/intel_dsi_cmd.c index 3eeb21b9fddf..933c86305237 100644 --- a/drivers/gpu/drm/i915/intel_dsi_cmd.c +++ b/drivers/gpu/drm/i915/intel_dsi_cmd.c @@ -404,12 +404,6 @@ int dpi_send_cmd(struct intel_dsi *intel_dsi, u32 cmd, bool hs) else cmd |= DPI_LP_MODE; - /* DPI virtual channel?! */ - - mask = DPI_FIFO_EMPTY; - if (wait_for((I915_READ(MIPI_GEN_FIFO_STAT(pipe)) & mask) == mask, 50)) - DRM_ERROR("Timeout waiting for DPI FIFO empty.\n"); - /* clear bit */ I915_WRITE(MIPI_INTR_STAT(pipe), SPL_PKT_SENT_INTERRUPT); diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c index 23126023aeba..5e5a72fca5fb 100644 --- a/drivers/gpu/drm/i915/intel_lvds.c +++ b/drivers/gpu/drm/i915/intel_lvds.c @@ -111,6 +111,13 @@ static void intel_lvds_get_config(struct intel_encoder *encoder, pipe_config->adjusted_mode.flags |= flags; + /* gen2/3 store dither state in pfit control, needs to match */ + if (INTEL_INFO(dev)->gen < 4) { + tmp = I915_READ(PFIT_CONTROL); + + pipe_config->gmch_pfit.control |= tmp & PANEL_8TO6_DITHER_ENABLE; + } + dotclock = pipe_config->port_clock; if (HAS_PCH_SPLIT(dev_priv->dev)) diff --git a/drivers/gpu/drm/i915/intel_opregion.c b/drivers/gpu/drm/i915/intel_opregion.c index 2e2c71fcc9ed..4f6b53998d79 100644 --- a/drivers/gpu/drm/i915/intel_opregion.c +++ b/drivers/gpu/drm/i915/intel_opregion.c @@ -403,6 +403,15 @@ static u32 asle_set_backlight(struct drm_device *dev, u32 bclp) DRM_DEBUG_DRIVER("bclp = 0x%08x\n", bclp); + /* + * If the acpi_video interface is not supposed to be used, don't + * bother processing backlight level change requests from firmware. + */ + if (!acpi_video_verify_backlight_support()) { + DRM_DEBUG_KMS("opregion backlight request ignored\n"); + return 0; + } + if (!(bclp & ASLE_BCLP_VALID)) return ASLC_BACKLIGHT_FAILED; diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c index 38a98570d10c..12b02fe1d0ae 100644 --- a/drivers/gpu/drm/i915/intel_panel.c +++ b/drivers/gpu/drm/i915/intel_panel.c @@ -361,16 +361,16 @@ void intel_gmch_panel_fitting(struct intel_crtc *intel_crtc, pfit_control |= ((intel_crtc->pipe << PFIT_PIPE_SHIFT) | PFIT_FILTER_FUZZY); - /* Make sure pre-965 set dither correctly for 18bpp panels. */ - if (INTEL_INFO(dev)->gen < 4 && pipe_config->pipe_bpp == 18) - pfit_control |= PANEL_8TO6_DITHER_ENABLE; - out: if ((pfit_control & PFIT_ENABLE) == 0) { pfit_control = 0; pfit_pgm_ratios = 0; } + /* Make sure pre-965 set dither correctly for 18bpp panels. 
*/ + if (INTEL_INFO(dev)->gen < 4 && pipe_config->pipe_bpp == 18) + pfit_control |= PANEL_8TO6_DITHER_ENABLE; + pipe_config->gmch_pfit.control = pfit_control; pipe_config->gmch_pfit.pgm_ratios = pfit_pgm_ratios; pipe_config->gmch_pfit.lvds_border_bits = border; @@ -1118,8 +1118,12 @@ int intel_panel_setup_backlight(struct drm_connector *connector) int ret; if (!dev_priv->vbt.backlight.present) { - DRM_DEBUG_KMS("native backlight control not available per VBT\n"); - return 0; + if (dev_priv->quirks & QUIRK_BACKLIGHT_PRESENT) { + DRM_DEBUG_KMS("no backlight present per VBT, but present per quirk\n"); + } else { + DRM_DEBUG_KMS("no backlight present per VBT\n"); + return 0; + } } /* set level and max in panel struct */ diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c index 54242e4f6f4c..ee72807069e4 100644 --- a/drivers/gpu/drm/i915/intel_pm.c +++ b/drivers/gpu/drm/i915/intel_pm.c @@ -3209,6 +3209,14 @@ void gen6_set_rps(struct drm_device *dev, u8 val) */ static void vlv_set_rps_idle(struct drm_i915_private *dev_priv) { + struct drm_device *dev = dev_priv->dev; + + /* Latest VLV doesn't need to force the gfx clock */ + if (dev->pdev->revision >= 0xd) { + valleyview_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit); + return; + } + /* * When we are idle. Drop to min voltage state. */ @@ -5603,8 +5611,8 @@ static bool hsw_power_well_enabled(struct drm_i915_private *dev_priv, (HSW_PWR_WELL_ENABLE_REQUEST | HSW_PWR_WELL_STATE_ENABLED); } -bool intel_display_power_enabled_sw(struct drm_i915_private *dev_priv, - enum intel_display_power_domain domain) +bool intel_display_power_enabled_unlocked(struct drm_i915_private *dev_priv, + enum intel_display_power_domain domain) { struct i915_power_domains *power_domains; struct i915_power_well *power_well; @@ -5615,16 +5623,19 @@ bool intel_display_power_enabled_sw(struct drm_i915_private *dev_priv, return false; power_domains = &dev_priv->power_domains; + is_enabled = true; + for_each_power_well_rev(i, power_well, BIT(domain), power_domains) { if (power_well->always_on) continue; - if (!power_well->count) { + if (!power_well->hw_enabled) { is_enabled = false; break; } } + return is_enabled; } @@ -5632,30 +5643,15 @@ bool intel_display_power_enabled(struct drm_i915_private *dev_priv, enum intel_display_power_domain domain) { struct i915_power_domains *power_domains; - struct i915_power_well *power_well; - bool is_enabled; - int i; - - if (dev_priv->pm.suspended) - return false; + bool ret; power_domains = &dev_priv->power_domains; - is_enabled = true; - mutex_lock(&power_domains->lock); - for_each_power_well_rev(i, power_well, BIT(domain), power_domains) { - if (power_well->always_on) - continue; - - if (!power_well->ops->is_enabled(dev_priv, power_well)) { - is_enabled = false; - break; - } - } + ret = intel_display_power_enabled_unlocked(dev_priv, domain); mutex_unlock(&power_domains->lock); - return is_enabled; + return ret; } /* @@ -5976,6 +5972,7 @@ void intel_display_power_get(struct drm_i915_private *dev_priv, if (!power_well->count++) { DRM_DEBUG_KMS("enabling %s\n", power_well->name); power_well->ops->enable(dev_priv, power_well); + power_well->hw_enabled = true; } check_power_well_state(dev_priv, power_well); @@ -6005,6 +6002,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, if (!--power_well->count && i915.disable_power_well) { DRM_DEBUG_KMS("disabling %s\n", power_well->name); + power_well->hw_enabled = false; power_well->ops->disable(dev_priv, power_well); } @@ -6048,6 +6046,27 @@ int 
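/*
 * Illustrative sketch, not the i915 code: the intel_pm.c changes above cache
 * the power well's hardware state in a hw_enabled flag updated wherever the
 * well is actually switched, so the _unlocked query can be answered without
 * reading hardware or taking the lock.  The shape of that bookkeeping; the
 * struct and helpers below are invented for the example.
 */
#include <stdbool.h>

struct power_well {
        int count;              /* usage count, protected by an external lock */
        bool hw_enabled;        /* cached hardware state */
        bool always_on;
};

static void hw_enable(struct power_well *w)  { (void)w; /* write registers */ }
static void hw_disable(struct power_well *w) { (void)w; }

static void power_get(struct power_well *w)
{
        if (!w->count++) {
                hw_enable(w);
                w->hw_enabled = true;   /* cache state after enabling */
        }
}

static void power_put(struct power_well *w)
{
        if (!--w->count) {
                w->hw_enabled = false;  /* clear cache before disabling */
                hw_disable(w);
        }
}

/* lock-free query (e.g. from error capture): trusts only the cached flag */
static bool power_enabled_unlocked(const struct power_well *w)
{
        return w->always_on || w->hw_enabled;
}

int main(void)
{
        struct power_well pipe_a = { 0, false, false };

        power_get(&pipe_a);
        bool on = power_enabled_unlocked(&pipe_a);
        power_put(&pipe_a);
        return on ? 0 : 1;
}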
i915_release_power_well(void) } EXPORT_SYMBOL_GPL(i915_release_power_well); +/* + * Private interface for the audio driver to get CDCLK in kHz. + * + * Caller must request power well using i915_request_power_well() prior to + * making the call. + */ +int i915_get_cdclk_freq(void) +{ + struct drm_i915_private *dev_priv; + + if (!hsw_pwr) + return -ENODEV; + + dev_priv = container_of(hsw_pwr, struct drm_i915_private, + power_domains); + + return intel_ddi_get_cdclk_freq(dev_priv); +} +EXPORT_SYMBOL_GPL(i915_get_cdclk_freq); + + #define POWER_DOMAIN_MASK (BIT(POWER_DOMAIN_NUM) - 1) #define HSW_ALWAYS_ON_POWER_DOMAINS ( \ @@ -6267,8 +6286,11 @@ static void intel_power_domains_resume(struct drm_i915_private *dev_priv) int i; mutex_lock(&power_domains->lock); - for_each_power_well(i, power_well, POWER_DOMAIN_MASK, power_domains) + for_each_power_well(i, power_well, POWER_DOMAIN_MASK, power_domains) { power_well->ops->sync_hw(dev_priv, power_well); + power_well->hw_enabled = power_well->ops->is_enabled(dev_priv, + power_well); + } mutex_unlock(&power_domains->lock); } diff --git a/drivers/gpu/drm/i915/intel_sprite.c b/drivers/gpu/drm/i915/intel_sprite.c index 1b66ddcdfb33..9a17b4e92ef4 100644 --- a/drivers/gpu/drm/i915/intel_sprite.c +++ b/drivers/gpu/drm/i915/intel_sprite.c @@ -691,6 +691,14 @@ intel_post_enable_primary(struct drm_crtc *crtc) struct intel_crtc *intel_crtc = to_intel_crtc(crtc); /* + * BDW signals flip done immediately if the plane + * is disabled, even if the plane enable is already + * armed to occur at the next vblank :( + */ + if (IS_BROADWELL(dev)) + intel_wait_for_vblank(dev, intel_crtc->pipe); + + /* * FIXME IPS should be fine as long as one plane is * enabled, but in practice it seems to have problems * when going from primary only to sprite only and vice diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c index ae750f6928c1..7f7aadef8a82 100644 --- a/drivers/gpu/drm/msm/hdmi/hdmi.c +++ b/drivers/gpu/drm/msm/hdmi/hdmi.c @@ -277,6 +277,7 @@ static int hdmi_bind(struct device *dev, struct device *master, void *data) static const char *hpd_reg_names[] = {"hpd-gdsc", "hpd-5v"}; static const char *pwr_reg_names[] = {"core-vdda", "core-vcc"}; static const char *hpd_clk_names[] = {"iface_clk", "core_clk", "mdp_core_clk"}; + static unsigned long hpd_clk_freq[] = {0, 19200000, 0}; static const char *pwr_clk_names[] = {"extp_clk", "alt_iface_clk"}; config.phy_init = hdmi_phy_8x74_init; @@ -286,6 +287,7 @@ static int hdmi_bind(struct device *dev, struct device *master, void *data) config.pwr_reg_names = pwr_reg_names; config.pwr_reg_cnt = ARRAY_SIZE(pwr_reg_names); config.hpd_clk_names = hpd_clk_names; + config.hpd_freq = hpd_clk_freq; config.hpd_clk_cnt = ARRAY_SIZE(hpd_clk_names); config.pwr_clk_names = pwr_clk_names; config.pwr_clk_cnt = ARRAY_SIZE(pwr_clk_names); diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.h b/drivers/gpu/drm/msm/hdmi/hdmi.h index 9fafee6a3e43..9d7723c6528a 100644 --- a/drivers/gpu/drm/msm/hdmi/hdmi.h +++ b/drivers/gpu/drm/msm/hdmi/hdmi.h @@ -87,6 +87,7 @@ struct hdmi_platform_config { /* clks that need to be on for hpd: */ const char **hpd_clk_names; + const long unsigned *hpd_freq; int hpd_clk_cnt; /* clks that need to be on for screen pwr (ie pixel clk): */ diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_connector.c b/drivers/gpu/drm/msm/hdmi/hdmi_connector.c index e56a6196867c..28f7e3ec6c28 100644 --- a/drivers/gpu/drm/msm/hdmi/hdmi_connector.c +++ b/drivers/gpu/drm/msm/hdmi/hdmi_connector.c @@ -127,6 +127,14 @@ static int 
hpd_enable(struct hdmi_connector *hdmi_connector) } for (i = 0; i < config->hpd_clk_cnt; i++) { + if (config->hpd_freq && config->hpd_freq[i]) { + ret = clk_set_rate(hdmi->hpd_clks[i], + config->hpd_freq[i]); + if (ret) + dev_warn(dev->dev, "failed to set clk %s (%d)\n", + config->hpd_clk_names[i], ret); + } + ret = clk_prepare_enable(hdmi->hpd_clks[i]); if (ret) { dev_err(dev->dev, "failed to enable hpd clk: %s (%d)\n", diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c index 42caf7fcb0b9..71510ee26e96 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c @@ -20,6 +20,10 @@ #include "msm_mmu.h" #include "mdp5_kms.h" +static const char *iommu_ports[] = { + "mdp_0", +}; + static struct mdp5_platform_config *mdp5_get_config(struct platform_device *dev); static int mdp5_hw_init(struct msm_kms *kms) @@ -104,6 +108,12 @@ static void mdp5_preclose(struct msm_kms *kms, struct drm_file *file) static void mdp5_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); + struct msm_mmu *mmu = mdp5_kms->mmu; + + if (mmu) { + mmu->funcs->detach(mmu, iommu_ports, ARRAY_SIZE(iommu_ports)); + mmu->funcs->destroy(mmu); + } kfree(mdp5_kms); } @@ -216,10 +226,6 @@ fail: return ret; } -static const char *iommu_ports[] = { - "mdp_0", -}; - static int get_clk(struct platform_device *pdev, struct clk **clkp, const char *name) { @@ -317,17 +323,23 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev) mmu = msm_iommu_new(dev, config->iommu); if (IS_ERR(mmu)) { ret = PTR_ERR(mmu); + dev_err(dev->dev, "failed to init iommu: %d\n", ret); goto fail; } + ret = mmu->funcs->attach(mmu, iommu_ports, ARRAY_SIZE(iommu_ports)); - if (ret) + if (ret) { + dev_err(dev->dev, "failed to attach iommu: %d\n", ret); + mmu->funcs->destroy(mmu); goto fail; + } } else { dev_info(dev->dev, "no iommu, fallback to phys " "contig buffers for scanout\n"); mmu = NULL; } + mdp5_kms->mmu = mmu; mdp5_kms->id = msm_register_mmu(dev, mmu); if (mdp5_kms->id < 0) { diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h index c8b1a2522c25..6e981b692d1d 100644 --- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h +++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h @@ -33,6 +33,7 @@ struct mdp5_kms { /* mapper-id used to request GEM buffer mapped for scanout: */ int id; + struct msm_mmu *mmu; /* for tracking smp allocation amongst pipes: */ mdp5_smp_state_t smp_state; diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 0d2562fb681e..9a5d87db5c23 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -159,7 +159,7 @@ static int msm_unload(struct drm_device *dev) static int get_mdp_ver(struct platform_device *pdev) { #ifdef CONFIG_OF - const static struct of_device_id match_types[] = { { + static const struct of_device_id match_types[] = { { .compatible = "qcom,mdss_mdp", .data = (void *)5, }, { diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c index a752ab83b810..5107fc4826bc 100644 --- a/drivers/gpu/drm/msm/msm_fbdev.c +++ b/drivers/gpu/drm/msm/msm_fbdev.c @@ -59,7 +59,7 @@ static int msm_fbdev_create(struct drm_fb_helper *helper, struct drm_framebuffer *fb = NULL; struct fb_info *fbi = NULL; struct drm_mode_fb_cmd2 mode_cmd = {0}; - dma_addr_t paddr; + uint32_t paddr; int ret, size; sizes->surface_bpp = 32; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index bb8026daebc9..690d7e7b6d1e 100644 
--- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -278,6 +278,7 @@ int msm_gem_get_iova_locked(struct drm_gem_object *obj, int id, uint32_t *iova) { struct msm_gem_object *msm_obj = to_msm_bo(obj); + struct drm_device *dev = obj->dev; int ret = 0; if (!msm_obj->domain[id].iova) { @@ -285,6 +286,11 @@ int msm_gem_get_iova_locked(struct drm_gem_object *obj, int id, struct msm_mmu *mmu = priv->mmus[id]; struct page **pages = get_pages(obj); + if (!mmu) { + dev_err(dev->dev, "null MMU pointer\n"); + return -EINVAL; + } + if (IS_ERR(pages)) return PTR_ERR(pages); diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 92b745986231..4b2ad9181edf 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -28,7 +28,7 @@ static int msm_fault_handler(struct iommu_domain *iommu, struct device *dev, unsigned long iova, int flags, void *arg) { DBG("*** fault: iova=%08lx, flags=%d", iova, flags); - return 0; + return -ENOSYS; } static int msm_iommu_attach(struct msm_mmu *mmu, const char **names, int cnt) @@ -40,8 +40,10 @@ static int msm_iommu_attach(struct msm_mmu *mmu, const char **names, int cnt) for (i = 0; i < cnt; i++) { struct device *msm_iommu_get_ctx(const char *ctx_name); struct device *ctx = msm_iommu_get_ctx(names[i]); - if (IS_ERR_OR_NULL(ctx)) + if (IS_ERR_OR_NULL(ctx)) { + dev_warn(dev->dev, "couldn't get %s context", names[i]); continue; + } ret = iommu_attach_device(iommu->domain, ctx); if (ret) { dev_warn(dev->dev, "could not attach iommu to %s", names[i]); @@ -52,6 +54,20 @@ static int msm_iommu_attach(struct msm_mmu *mmu, const char **names, int cnt) return 0; } +static void msm_iommu_detach(struct msm_mmu *mmu, const char **names, int cnt) +{ + struct msm_iommu *iommu = to_msm_iommu(mmu); + int i; + + for (i = 0; i < cnt; i++) { + struct device *msm_iommu_get_ctx(const char *ctx_name); + struct device *ctx = msm_iommu_get_ctx(names[i]); + if (IS_ERR_OR_NULL(ctx)) + continue; + iommu_detach_device(iommu->domain, ctx); + } +} + static int msm_iommu_map(struct msm_mmu *mmu, uint32_t iova, struct sg_table *sgt, unsigned len, int prot) { @@ -110,7 +126,7 @@ static int msm_iommu_unmap(struct msm_mmu *mmu, uint32_t iova, VERB("unmap[%d]: %08x(%x)", i, iova, bytes); - BUG_ON(!IS_ALIGNED(bytes, PAGE_SIZE)); + BUG_ON(!PAGE_ALIGNED(bytes)); da += bytes; } @@ -127,6 +143,7 @@ static void msm_iommu_destroy(struct msm_mmu *mmu) static const struct msm_mmu_funcs funcs = { .attach = msm_iommu_attach, + .detach = msm_iommu_detach, .map = msm_iommu_map, .unmap = msm_iommu_unmap, .destroy = msm_iommu_destroy, diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h index 030324482b4a..21da6d154f71 100644 --- a/drivers/gpu/drm/msm/msm_mmu.h +++ b/drivers/gpu/drm/msm/msm_mmu.h @@ -22,6 +22,7 @@ struct msm_mmu_funcs { int (*attach)(struct msm_mmu *mmu, const char **names, int cnt); + void (*detach)(struct msm_mmu *mmu, const char **names, int cnt); int (*map)(struct msm_mmu *mmu, uint32_t iova, struct sg_table *sgt, unsigned len, int prot); int (*unmap)(struct msm_mmu *mmu, uint32_t iova, struct sg_table *sgt, diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c b/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c index 26e962b7e702..2283c442a10d 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c @@ -1516,11 +1516,11 @@ nv50_disp_intr_unk20_2(struct nv50_disp_priv *priv, int head) } switch ((ctrl & 0x000f0000) >> 16) { - case 6: datarate = 
pclk * 30 / 8; break; - case 5: datarate = pclk * 24 / 8; break; + case 6: datarate = pclk * 30; break; + case 5: datarate = pclk * 24; break; case 2: default: - datarate = pclk * 18 / 8; + datarate = pclk * 18; break; } diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c b/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c index 48aa38a87e3f..fa30d8196f35 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c @@ -1159,11 +1159,11 @@ nvd0_disp_intr_unk2_2(struct nv50_disp_priv *priv, int head) if (outp->info.type == DCB_OUTPUT_DP) { u32 sync = nv_rd32(priv, 0x660404 + (head * 0x300)); switch ((sync & 0x000003c0) >> 6) { - case 6: pclk = pclk * 30 / 8; break; - case 5: pclk = pclk * 24 / 8; break; + case 6: pclk = pclk * 30; break; + case 5: pclk = pclk * 24; break; case 2: default: - pclk = pclk * 18 / 8; + pclk = pclk * 18; break; } diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/outpdp.c b/drivers/gpu/drm/nouveau/core/engine/disp/outpdp.c index 52c299c3d300..eb2d7789555d 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/outpdp.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/outpdp.c @@ -34,7 +34,7 @@ nvkm_output_dp_train(struct nvkm_output *base, u32 datarate, bool wait) struct nvkm_output_dp *outp = (void *)base; bool retrain = true; u8 link[2], stat[3]; - u32 rate; + u32 linkrate; int ret, i; /* check that the link is trained at a high enough rate */ @@ -44,8 +44,10 @@ nvkm_output_dp_train(struct nvkm_output *base, u32 datarate, bool wait) goto done; } - rate = link[0] * 27000 * (link[1] & DPCD_LC01_LANE_COUNT_SET); - if (rate < ((datarate / 8) * 10)) { + linkrate = link[0] * 27000 * (link[1] & DPCD_LC01_LANE_COUNT_SET); + linkrate = (linkrate * 8) / 10; /* 8B/10B coding overhead */ + datarate = (datarate + 9) / 10; /* -> decakilobits */ + if (linkrate < datarate) { DBG("link not trained at sufficient rate\n"); goto done; } diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c b/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c index e1832778e8b6..7a1ebdfa9e1b 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c @@ -87,6 +87,7 @@ nv50_sor_mthd(struct nouveau_object *object, u32 mthd, void *args, u32 size) struct nvkm_output_dp *outpdp = (void *)outp; switch (data) { case NV94_DISP_SOR_DP_PWR_STATE_OFF: + nouveau_event_put(outpdp->irq); ((struct nvkm_output_dp_impl *)nv_oclass(outp)) ->lnk_pwr(outpdp, 0); atomic_set(&outpdp->lt.done, 0); diff --git a/drivers/gpu/drm/nouveau/core/subdev/fb/ramfuc.h b/drivers/gpu/drm/nouveau/core/subdev/fb/ramfuc.h index 0f57fcfe0bbf..2af9cfd2c60f 100644 --- a/drivers/gpu/drm/nouveau/core/subdev/fb/ramfuc.h +++ b/drivers/gpu/drm/nouveau/core/subdev/fb/ramfuc.h @@ -26,7 +26,7 @@ ramfuc_reg2(u32 addr1, u32 addr2) }; } -static inline struct ramfuc_reg +static noinline struct ramfuc_reg ramfuc_reg(u32 addr) { return ramfuc_reg2(addr, addr); @@ -107,7 +107,7 @@ ramfuc_nsec(struct ramfuc *ram, u32 nsec) #define ram_init(s,p) ramfuc_init(&(s)->base, (p)) #define ram_exec(s,e) ramfuc_exec(&(s)->base, (e)) -#define ram_have(s,r) ((s)->r_##r.addr != 0x000000) +#define ram_have(s,r) ((s)->r_##r.addr[0] != 0x000000) #define ram_rd32(s,r) ramfuc_rd32(&(s)->base, &(s)->r_##r) #define ram_wr32(s,r,d) ramfuc_wr32(&(s)->base, &(s)->r_##r, (d)) #define ram_nuke(s,r) ramfuc_nuke(&(s)->base, &(s)->r_##r) diff --git a/drivers/gpu/drm/nouveau/core/subdev/fb/ramnve0.c b/drivers/gpu/drm/nouveau/core/subdev/fb/ramnve0.c index 
1ad3ea503133..c5b46e302319 100644 --- a/drivers/gpu/drm/nouveau/core/subdev/fb/ramnve0.c +++ b/drivers/gpu/drm/nouveau/core/subdev/fb/ramnve0.c @@ -200,6 +200,7 @@ r1373f4_init(struct nve0_ramfuc *fuc) /* (re)program mempll, if required */ if (ram->mode == 2) { ram_mask(fuc, 0x1373f4, 0x00010000, 0x00000000); + ram_mask(fuc, 0x132000, 0x80000000, 0x80000000); ram_mask(fuc, 0x132000, 0x00000001, 0x00000000); ram_mask(fuc, 0x132004, 0x103fffff, mcoef); ram_mask(fuc, 0x132000, 0x00000001, 0x00000001); diff --git a/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c b/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c index cfde9eb44ad0..6212537b90c5 100644 --- a/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c +++ b/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c @@ -192,11 +192,11 @@ alarm_timer_callback(struct nouveau_alarm *alarm) nouveau_therm_threshold_hyst_polling(therm, &sensor->thrs_shutdown, NOUVEAU_THERM_THRS_SHUTDOWN); + spin_unlock_irqrestore(&priv->sensor.alarm_program_lock, flags); + /* schedule the next poll in one second */ if (therm->temp_get(therm) >= 0 && list_empty(&alarm->head)) - ptimer->alarm(ptimer, 1000 * 1000 * 1000, alarm); - - spin_unlock_irqrestore(&priv->sensor.alarm_program_lock, flags); + ptimer->alarm(ptimer, 1000000000ULL, alarm); } void diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c index ddd83756b9a2..5425ffe3931d 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drm.c +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c @@ -652,12 +652,12 @@ int nouveau_pmops_resume(struct device *dev) ret = nouveau_do_resume(drm_dev); if (ret) return ret; - if (drm_dev->mode_config.num_crtc) - nouveau_fbcon_set_suspend(drm_dev, 0); - nouveau_fbcon_zfill_all(drm_dev); - if (drm_dev->mode_config.num_crtc) + if (drm_dev->mode_config.num_crtc) { nouveau_display_resume(drm_dev); + nouveau_fbcon_set_suspend(drm_dev, 0); + } + return 0; } @@ -683,11 +683,12 @@ static int nouveau_pmops_thaw(struct device *dev) ret = nouveau_do_resume(drm_dev); if (ret) return ret; - if (drm_dev->mode_config.num_crtc) - nouveau_fbcon_set_suspend(drm_dev, 0); - nouveau_fbcon_zfill_all(drm_dev); - if (drm_dev->mode_config.num_crtc) + + if (drm_dev->mode_config.num_crtc) { nouveau_display_resume(drm_dev); + nouveau_fbcon_set_suspend(drm_dev, 0); + } + return 0; } diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c index 64a42cfd3717..191665ee7f52 100644 --- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c +++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c @@ -531,17 +531,10 @@ nouveau_fbcon_set_suspend(struct drm_device *dev, int state) if (state == 1) nouveau_fbcon_save_disable_accel(dev); fb_set_suspend(drm->fbcon->helper.fbdev, state); - if (state == 0) + if (state == 0) { nouveau_fbcon_restore_accel(dev); + nouveau_fbcon_zfill(dev, drm->fbcon); + } console_unlock(); } } - -void -nouveau_fbcon_zfill_all(struct drm_device *dev) -{ - struct nouveau_drm *drm = nouveau_drm(dev); - if (drm->fbcon) { - nouveau_fbcon_zfill(dev, drm->fbcon); - } -} diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.h b/drivers/gpu/drm/nouveau/nouveau_fbcon.h index fdfc0c94fbcc..fcff797d2084 100644 --- a/drivers/gpu/drm/nouveau/nouveau_fbcon.h +++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.h @@ -61,7 +61,6 @@ void nouveau_fbcon_gpu_lockup(struct fb_info *info); int nouveau_fbcon_init(struct drm_device *dev); void nouveau_fbcon_fini(struct drm_device *dev); void nouveau_fbcon_set_suspend(struct drm_device *dev, int state); -void nouveau_fbcon_zfill_all(struct drm_device 
*dev); void nouveau_fbcon_save_disable_accel(struct drm_device *dev); void nouveau_fbcon_restore_accel(struct drm_device *dev); diff --git a/drivers/gpu/drm/nouveau/nv50_display.c b/drivers/gpu/drm/nouveau/nv50_display.c index afdf607df3e6..4c534b7b04da 100644 --- a/drivers/gpu/drm/nouveau/nv50_display.c +++ b/drivers/gpu/drm/nouveau/nv50_display.c @@ -1741,7 +1741,8 @@ nv50_sor_dpms(struct drm_encoder *encoder, int mode) } } - mthd = (ffs(nv_encoder->dcb->sorconf.link) - 1) << 2; + mthd = (ffs(nv_encoder->dcb->heads) - 1) << 3; + mthd |= (ffs(nv_encoder->dcb->sorconf.link) - 1) << 2; mthd |= nv_encoder->or; if (nv_encoder->dcb->type == DCB_OUTPUT_DP) { diff --git a/drivers/gpu/drm/qxl/qxl_irq.c b/drivers/gpu/drm/qxl/qxl_irq.c index 34d6a85e9023..0bf1e20c6e44 100644 --- a/drivers/gpu/drm/qxl/qxl_irq.c +++ b/drivers/gpu/drm/qxl/qxl_irq.c @@ -33,6 +33,9 @@ irqreturn_t qxl_irq_handler(int irq, void *arg) pending = xchg(&qdev->ram_header->int_pending, 0); + if (!pending) + return IRQ_NONE; + atomic_inc(&qdev->irq_received); if (pending & QXL_INTERRUPT_DISPLAY) { diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c index a03c73411a56..30d242b25078 100644 --- a/drivers/gpu/drm/radeon/atombios_crtc.c +++ b/drivers/gpu/drm/radeon/atombios_crtc.c @@ -1414,8 +1414,8 @@ static int dce4_crtc_do_set_base(struct drm_crtc *crtc, tmp &= ~EVERGREEN_GRPH_SURFACE_UPDATE_H_RETRACE_EN; WREG32(EVERGREEN_GRPH_FLIP_CONTROL + radeon_crtc->crtc_offset, tmp); - /* set pageflip to happen anywhere in vblank interval */ - WREG32(EVERGREEN_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 0); + /* set pageflip to happen only at start of vblank interval (front porch) */ + WREG32(EVERGREEN_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 3); if (!atomic && fb && fb != crtc->primary->fb) { radeon_fb = to_radeon_framebuffer(fb); @@ -1614,8 +1614,8 @@ static int avivo_crtc_do_set_base(struct drm_crtc *crtc, tmp &= ~AVIVO_D1GRPH_SURFACE_UPDATE_H_RETRACE_EN; WREG32(AVIVO_D1GRPH_FLIP_CONTROL + radeon_crtc->crtc_offset, tmp); - /* set pageflip to happen anywhere in vblank interval */ - WREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 0); + /* set pageflip to happen only at start of vblank interval (front porch) */ + WREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 3); if (!atomic && fb && fb != crtc->primary->fb) { radeon_fb = to_radeon_framebuffer(fb); diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c index c5b1f2da3954..b1e11f8434e2 100644 --- a/drivers/gpu/drm/radeon/atombios_dp.c +++ b/drivers/gpu/drm/radeon/atombios_dp.c @@ -127,7 +127,7 @@ static int radeon_process_aux_ch(struct radeon_i2c_chan *chan, /* flags not zero */ if (args.v1.ucReplyStatus == 2) { DRM_DEBUG_KMS("dp_aux_ch flags not zero\n"); - r = -EBUSY; + r = -EIO; goto done; } @@ -403,16 +403,18 @@ bool radeon_dp_getdpcd(struct radeon_connector *radeon_connector) { struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv; u8 msg[DP_DPCD_SIZE]; - int ret, i; + int ret; + + char dpcd_hex_dump[DP_DPCD_SIZE * 3]; ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg, DP_DPCD_SIZE); if (ret > 0) { memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE); - DRM_DEBUG_KMS("DPCD: "); - for (i = 0; i < DP_DPCD_SIZE; i++) - DRM_DEBUG_KMS("%02x ", msg[i]); - DRM_DEBUG_KMS("\n"); + + hex_dump_to_buffer(dig_connector->dpcd, sizeof(dig_connector->dpcd), + 32, 1, dpcd_hex_dump, sizeof(dpcd_hex_dump), false); + DRM_DEBUG_KMS("DPCD: %s\n", 
dpcd_hex_dump); radeon_dp_probe_oui(radeon_connector); diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c index 2b2908440644..7d68203a3737 100644 --- a/drivers/gpu/drm/radeon/atombios_encoders.c +++ b/drivers/gpu/drm/radeon/atombios_encoders.c @@ -183,7 +183,6 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder, struct backlight_properties props; struct radeon_backlight_privdata *pdata; struct radeon_encoder_atom_dig *dig; - u8 backlight_level; char bl_name[16]; /* Mac laptops with multiple GPUs use the gmux driver for backlight @@ -222,12 +221,17 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder, pdata->encoder = radeon_encoder; - backlight_level = radeon_atom_get_backlight_level_from_reg(rdev); - dig = radeon_encoder->enc_priv; dig->bl_dev = bd; bd->props.brightness = radeon_atom_backlight_get_brightness(bd); + /* Set a reasonable default here if the level is 0 otherwise + * fbdev will attempt to turn the backlight on after console + * unblanking and it will try and restore 0 which turns the backlight + * off again. + */ + if (bd->props.brightness == 0) + bd->props.brightness = RADEON_MAX_BL_LEVEL; bd->props.power = FB_BLANK_UNBLANK; backlight_update_status(bd); diff --git a/drivers/gpu/drm/radeon/ci_dpm.c b/drivers/gpu/drm/radeon/ci_dpm.c index 10dae4106c08..584090ac3eb9 100644 --- a/drivers/gpu/drm/radeon/ci_dpm.c +++ b/drivers/gpu/drm/radeon/ci_dpm.c @@ -1179,7 +1179,7 @@ static int ci_stop_dpm(struct radeon_device *rdev) tmp &= ~GLOBAL_PWRMGT_EN; WREG32_SMC(GENERAL_PWRMGT, tmp); - tmp = RREG32(SCLK_PWRMGT_CNTL); + tmp = RREG32_SMC(SCLK_PWRMGT_CNTL); tmp &= ~DYNAMIC_PM_EN; WREG32_SMC(SCLK_PWRMGT_CNTL, tmp); diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c index dcd4518a9b08..c0ea66192fe0 100644 --- a/drivers/gpu/drm/radeon/cik.c +++ b/drivers/gpu/drm/radeon/cik.c @@ -2291,6 +2291,7 @@ static void cik_tiling_mode_table_init(struct radeon_device *rdev) gb_tile_moden = 0; break; } + rdev->config.cik.macrotile_mode_array[reg_offset] = gb_tile_moden; WREG32(GB_MACROTILE_MODE0 + (reg_offset * 4), gb_tile_moden); } } else if (num_pipe_configs == 8) { @@ -7376,6 +7377,7 @@ static inline u32 cik_get_ih_wptr(struct radeon_device *rdev) tmp = RREG32(IH_RB_CNTL); tmp |= IH_WPTR_OVERFLOW_CLEAR; WREG32(IH_RB_CNTL, tmp); + wptr &= ~RB_OVERFLOW; } return (wptr & rdev->ih.ptr_mask); } @@ -7676,14 +7678,16 @@ restart_ih: addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); mc_client = RREG32(VM_CONTEXT1_PROTECTION_FAULT_MCCLIENT); + /* reset addr and status */ + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); + if (addr == 0x0 && status == 0x0) + break; dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", addr); dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", status); cik_vm_decode_fault(rdev, status, addr, mc_client); - /* reset addr and status */ - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); break; case 167: /* VCE */ DRM_DEBUG("IH: VCE int: 0x%08x\n", src_data); diff --git a/drivers/gpu/drm/radeon/cikd.h b/drivers/gpu/drm/radeon/cikd.h index ae88660f34ea..0c6e1b55d968 100644 --- a/drivers/gpu/drm/radeon/cikd.h +++ b/drivers/gpu/drm/radeon/cikd.h @@ -1752,12 +1752,12 @@ #define EOP_TC_WB_ACTION_EN (1 << 15) /* L2 */ #define EOP_TCL1_ACTION_EN (1 << 16) #define EOP_TC_ACTION_EN (1 << 17) /* L2 */ +#define EOP_TCL2_VOLATILE (1 << 24) #define 
EOP_CACHE_POLICY(x) ((x) << 25) /* 0 - LRU * 1 - Stream * 2 - Bypass */ -#define EOP_TCL2_VOLATILE (1 << 27) #define DATA_SEL(x) ((x) << 29) /* 0 - discard * 1 - send low 32bit data diff --git a/drivers/gpu/drm/radeon/cypress_dpm.c b/drivers/gpu/drm/radeon/cypress_dpm.c index 5a9a5f4d7888..47d31e915758 100644 --- a/drivers/gpu/drm/radeon/cypress_dpm.c +++ b/drivers/gpu/drm/radeon/cypress_dpm.c @@ -1551,7 +1551,7 @@ int cypress_populate_smc_voltage_tables(struct radeon_device *rdev, table->voltageMaskTable.highMask[RV770_SMC_VOLTAGEMASK_VDDCI] = 0; table->voltageMaskTable.lowMask[RV770_SMC_VOLTAGEMASK_VDDCI] = - cpu_to_be32(eg_pi->vddc_voltage_table.mask_low); + cpu_to_be32(eg_pi->vddci_voltage_table.mask_low); } return 0; diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c index e2f605224e8c..15e4f28015e1 100644 --- a/drivers/gpu/drm/radeon/evergreen.c +++ b/drivers/gpu/drm/radeon/evergreen.c @@ -189,7 +189,7 @@ static const u32 evergreen_golden_registers[] = 0x8c1c, 0xffffffff, 0x00001010, 0x28350, 0xffffffff, 0x00000000, 0xa008, 0xffffffff, 0x00010000, - 0x5cc, 0xffffffff, 0x00000001, + 0x5c4, 0xffffffff, 0x00000001, 0x9508, 0xffffffff, 0x00000002, 0x913c, 0x0000000f, 0x0000000a }; @@ -476,7 +476,7 @@ static const u32 cedar_golden_registers[] = 0x8c1c, 0xffffffff, 0x00001010, 0x28350, 0xffffffff, 0x00000000, 0xa008, 0xffffffff, 0x00010000, - 0x5cc, 0xffffffff, 0x00000001, + 0x5c4, 0xffffffff, 0x00000001, 0x9508, 0xffffffff, 0x00000002 }; @@ -635,7 +635,7 @@ static const u32 juniper_mgcg_init[] = static const u32 supersumo_golden_registers[] = { 0x5eb4, 0xffffffff, 0x00000002, - 0x5cc, 0xffffffff, 0x00000001, + 0x5c4, 0xffffffff, 0x00000001, 0x7030, 0xffffffff, 0x00000011, 0x7c30, 0xffffffff, 0x00000011, 0x6104, 0x01000300, 0x00000000, @@ -719,7 +719,7 @@ static const u32 sumo_golden_registers[] = static const u32 wrestler_golden_registers[] = { 0x5eb4, 0xffffffff, 0x00000002, - 0x5cc, 0xffffffff, 0x00000001, + 0x5c4, 0xffffffff, 0x00000001, 0x7030, 0xffffffff, 0x00000011, 0x7c30, 0xffffffff, 0x00000011, 0x6104, 0x01000300, 0x00000000, @@ -2642,8 +2642,9 @@ void evergreen_mc_resume(struct radeon_device *rdev, struct evergreen_mc_save *s for (i = 0; i < rdev->num_crtc; i++) { if (save->crtc_enabled[i]) { tmp = RREG32(EVERGREEN_MASTER_UPDATE_MODE + crtc_offsets[i]); - if ((tmp & 0x3) != 0) { - tmp &= ~0x3; + if ((tmp & 0x7) != 3) { + tmp &= ~0x7; + tmp |= 0x3; WREG32(EVERGREEN_MASTER_UPDATE_MODE + crtc_offsets[i], tmp); } tmp = RREG32(EVERGREEN_GRPH_UPDATE + crtc_offsets[i]); @@ -4755,6 +4756,7 @@ static u32 evergreen_get_ih_wptr(struct radeon_device *rdev) tmp = RREG32(IH_RB_CNTL); tmp |= IH_WPTR_OVERFLOW_CLEAR; WREG32(IH_RB_CNTL, tmp); + wptr &= ~RB_OVERFLOW; } return (wptr & rdev->ih.ptr_mask); } @@ -5066,14 +5068,16 @@ restart_ih: case 147: addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); + /* reset addr and status */ + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); + if (addr == 0x0 && status == 0x0) + break; dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", addr); dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", status); cayman_vm_decode_fault(rdev, status, addr); - /* reset addr and status */ - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); break; case 176: /* CP_INT in ring buffer */ case 177: /* CP_INT in IB1 */ diff --git a/drivers/gpu/drm/radeon/evergreen_reg.h b/drivers/gpu/drm/radeon/evergreen_reg.h index 
333d143fca2c..23bff590fb6e 100644 --- a/drivers/gpu/drm/radeon/evergreen_reg.h +++ b/drivers/gpu/drm/radeon/evergreen_reg.h @@ -239,7 +239,6 @@ # define EVERGREEN_CRTC_V_BLANK (1 << 0) #define EVERGREEN_CRTC_STATUS_POSITION 0x6e90 #define EVERGREEN_CRTC_STATUS_HV_COUNT 0x6ea0 -#define EVERGREEN_MASTER_UPDATE_MODE 0x6ef8 #define EVERGREEN_CRTC_UPDATE_LOCK 0x6ed4 #define EVERGREEN_MASTER_UPDATE_LOCK 0x6ef4 #define EVERGREEN_MASTER_UPDATE_MODE 0x6ef8 diff --git a/drivers/gpu/drm/radeon/kv_dpm.c b/drivers/gpu/drm/radeon/kv_dpm.c index 3f6e817d97ee..9ef8c38f2d66 100644 --- a/drivers/gpu/drm/radeon/kv_dpm.c +++ b/drivers/gpu/drm/radeon/kv_dpm.c @@ -2726,7 +2726,7 @@ int kv_dpm_init(struct radeon_device *rdev) pi->caps_sclk_ds = true; pi->enable_auto_thermal_throttling = true; pi->disable_nb_ps3_in_battery = false; - pi->bapm_enable = false; + pi->bapm_enable = true; pi->voltage_drop_t = 0; pi->caps_sclk_throttle_low_notification = false; pi->caps_fps = false; /* true? */ diff --git a/drivers/gpu/drm/radeon/ni_dpm.c b/drivers/gpu/drm/radeon/ni_dpm.c index 004c931606c4..01fc4888e6fe 100644 --- a/drivers/gpu/drm/radeon/ni_dpm.c +++ b/drivers/gpu/drm/radeon/ni_dpm.c @@ -1315,7 +1315,7 @@ static void ni_populate_smc_voltage_tables(struct radeon_device *rdev, table->voltageMaskTable.highMask[NISLANDS_SMC_VOLTAGEMASK_VDDCI] = 0; table->voltageMaskTable.lowMask[NISLANDS_SMC_VOLTAGEMASK_VDDCI] = - cpu_to_be32(eg_pi->vddc_voltage_table.mask_low); + cpu_to_be32(eg_pi->vddci_voltage_table.mask_low); } } diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c index c66952d4b00c..3c69f58e46ef 100644 --- a/drivers/gpu/drm/radeon/r600.c +++ b/drivers/gpu/drm/radeon/r600.c @@ -3795,6 +3795,7 @@ static u32 r600_get_ih_wptr(struct radeon_device *rdev) tmp = RREG32(IH_RB_CNTL); tmp |= IH_WPTR_OVERFLOW_CLEAR; WREG32(IH_RB_CNTL, tmp); + wptr &= ~RB_OVERFLOW; } return (wptr & rdev->ih.ptr_mask); } diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h index 4b0bbf88d5c0..60c47f829122 100644 --- a/drivers/gpu/drm/radeon/radeon.h +++ b/drivers/gpu/drm/radeon/radeon.h @@ -102,6 +102,7 @@ extern int radeon_runtime_pm; extern int radeon_hard_reset; extern int radeon_vm_size; extern int radeon_vm_block_size; +extern int radeon_deep_color; /* * Copy from radeon_drv.h so we don't have to include both and have conflicting @@ -448,6 +449,7 @@ struct radeon_bo_va { /* protected by vm mutex */ struct list_head vm_list; + struct list_head vm_status; /* constant after initialization */ struct radeon_vm *vm; @@ -683,10 +685,9 @@ struct radeon_flip_work { struct work_struct unpin_work; struct radeon_device *rdev; int crtc_id; - struct drm_framebuffer *fb; + uint64_t base; struct drm_pending_vblank_event *event; struct radeon_bo *old_rbo; - struct radeon_bo *new_rbo; struct radeon_fence *fence; }; @@ -749,10 +750,6 @@ union radeon_irq_stat_regs { struct cik_irq_stat_regs cik; }; -#define RADEON_MAX_HPD_PINS 7 -#define RADEON_MAX_CRTCS 6 -#define RADEON_MAX_AFMT_BLOCKS 7 - struct radeon_irq { bool installed; spinlock_t lock; @@ -871,6 +868,9 @@ struct radeon_vm { struct list_head va; unsigned id; + /* BOs freed, but not yet updated in the PT */ + struct list_head freed; + /* contains the page directory */ struct radeon_bo *page_directory; uint64_t pd_gpu_addr; @@ -879,6 +879,8 @@ struct radeon_vm { /* array of page tables, one for each page directory entry */ struct radeon_vm_pt *page_tables; + struct radeon_bo_va *ib_bo_va; + struct mutex mutex; /* last fence for cs using this vm */ struct 
radeon_fence *fence; @@ -2836,9 +2838,10 @@ void radeon_vm_fence(struct radeon_device *rdev, uint64_t radeon_vm_map_gart(struct radeon_device *rdev, uint64_t addr); int radeon_vm_update_page_directory(struct radeon_device *rdev, struct radeon_vm *vm); +int radeon_vm_clear_freed(struct radeon_device *rdev, + struct radeon_vm *vm); int radeon_vm_bo_update(struct radeon_device *rdev, - struct radeon_vm *vm, - struct radeon_bo *bo, + struct radeon_bo_va *bo_va, struct ttm_mem_reg *mem); void radeon_vm_bo_invalidate(struct radeon_device *rdev, struct radeon_bo *bo); @@ -2851,8 +2854,8 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev, struct radeon_bo_va *bo_va, uint64_t offset, uint32_t flags); -int radeon_vm_bo_rmv(struct radeon_device *rdev, - struct radeon_bo_va *bo_va); +void radeon_vm_bo_rmv(struct radeon_device *rdev, + struct radeon_bo_va *bo_va); /* audio */ void r600_audio_update_hdmi(struct work_struct *work); diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c index 30844814c25a..173f378428a9 100644 --- a/drivers/gpu/drm/radeon/radeon_atombios.c +++ b/drivers/gpu/drm/radeon/radeon_atombios.c @@ -1227,11 +1227,19 @@ bool radeon_atom_get_clock_info(struct drm_device *dev) rdev->clock.default_dispclk = le32_to_cpu(firmware_info->info_21.ulDefaultDispEngineClkFreq); if (rdev->clock.default_dispclk == 0) { - if (ASIC_IS_DCE5(rdev)) + if (ASIC_IS_DCE6(rdev)) + rdev->clock.default_dispclk = 60000; /* 600 Mhz */ + else if (ASIC_IS_DCE5(rdev)) rdev->clock.default_dispclk = 54000; /* 540 Mhz */ else rdev->clock.default_dispclk = 60000; /* 600 Mhz */ } + /* set a reasonable default for DP */ + if (ASIC_IS_DCE6(rdev) && (rdev->clock.default_dispclk < 53900)) { + DRM_INFO("Changing default dispclk from %dMhz to 600Mhz\n", + rdev->clock.default_dispclk / 100); + rdev->clock.default_dispclk = 60000; + } rdev->clock.dp_extclk = le16_to_cpu(firmware_info->info_21.usUniphyDPModeExtClkFreq); rdev->clock.current_dispclk = rdev->clock.default_dispclk; diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c index 1b9177ed181f..44831197e82e 100644 --- a/drivers/gpu/drm/radeon/radeon_connectors.c +++ b/drivers/gpu/drm/radeon/radeon_connectors.c @@ -199,6 +199,9 @@ int radeon_get_monitor_bpc(struct drm_connector *connector) } } + if ((radeon_deep_color == 0) && (bpc > 8)) + bpc = 8; + DRM_DEBUG("%s: Display bpc=%d, returned bpc=%d\n", connector->name, connector->display_info.bpc, bpc); diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c index 71a143461478..ae763f60c8a0 100644 --- a/drivers/gpu/drm/radeon/radeon_cs.c +++ b/drivers/gpu/drm/radeon/radeon_cs.c @@ -461,13 +461,23 @@ static int radeon_bo_vm_update_pte(struct radeon_cs_parser *p, struct radeon_vm *vm) { struct radeon_device *rdev = p->rdev; + struct radeon_bo_va *bo_va; int i, r; r = radeon_vm_update_page_directory(rdev, vm); if (r) return r; - r = radeon_vm_bo_update(rdev, vm, rdev->ring_tmp_bo.bo, + r = radeon_vm_clear_freed(rdev, vm); + if (r) + return r; + + if (vm->ib_bo_va == NULL) { + DRM_ERROR("Tmp BO not in VM!\n"); + return -EINVAL; + } + + r = radeon_vm_bo_update(rdev, vm->ib_bo_va, &rdev->ring_tmp_bo.bo->tbo.mem); if (r) return r; @@ -480,7 +490,13 @@ static int radeon_bo_vm_update_pte(struct radeon_cs_parser *p, continue; bo = p->relocs[i].robj; - r = radeon_vm_bo_update(rdev, vm, bo, &bo->tbo.mem); + bo_va = radeon_vm_bo_find(vm, bo); + if (bo_va == NULL) { + dev_err(rdev->dev, "bo %p not in vm %p\n", bo, 
vm); + return -EINVAL; + } + + r = radeon_vm_bo_update(rdev, bo_va, &bo->tbo.mem); if (r) return r; } diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c index 03686fab842d..697add2cd4e3 100644 --- a/drivers/gpu/drm/radeon/radeon_device.c +++ b/drivers/gpu/drm/radeon/radeon_device.c @@ -1056,36 +1056,36 @@ static void radeon_check_arguments(struct radeon_device *rdev) if (!radeon_check_pot_argument(radeon_vm_size)) { dev_warn(rdev->dev, "VM size (%d) must be a power of 2\n", radeon_vm_size); - radeon_vm_size = 4096; + radeon_vm_size = 4; } - if (radeon_vm_size < 4) { - dev_warn(rdev->dev, "VM size (%d) to small, min is 4MB\n", + if (radeon_vm_size < 1) { + dev_warn(rdev->dev, "VM size (%d) to small, min is 1GB\n", radeon_vm_size); - radeon_vm_size = 4096; + radeon_vm_size = 4; } /* * Max GPUVM size for Cayman, SI and CI are 40 bits. */ - if (radeon_vm_size > 1024*1024) { - dev_warn(rdev->dev, "VM size (%d) to large, max is 1TB\n", + if (radeon_vm_size > 1024) { + dev_warn(rdev->dev, "VM size (%d) too large, max is 1TB\n", radeon_vm_size); - radeon_vm_size = 4096; + radeon_vm_size = 4; } /* defines number of bits in page table versus page directory, * a page is 4KB so we have 12 bits offset, minimum 9 bits in the * page table and the remaining bits are in the page directory */ if (radeon_vm_block_size < 9) { - dev_warn(rdev->dev, "VM page table size (%d) to small\n", + dev_warn(rdev->dev, "VM page table size (%d) too small\n", radeon_vm_block_size); radeon_vm_block_size = 9; } if (radeon_vm_block_size > 24 || - radeon_vm_size < (1ull << radeon_vm_block_size)) { - dev_warn(rdev->dev, "VM page table size (%d) to large\n", + (radeon_vm_size * 1024) < (1ull << radeon_vm_block_size)) { + dev_warn(rdev->dev, "VM page table size (%d) too large\n", radeon_vm_block_size); radeon_vm_block_size = 9; } @@ -1238,7 +1238,7 @@ int radeon_device_init(struct radeon_device *rdev, /* Adjust VM size here. * Max GPUVM size for cayman+ is 40 bits. 
*/ - rdev->vm_manager.max_pfn = radeon_vm_size << 8; + rdev->vm_manager.max_pfn = radeon_vm_size << 18; /* Set asic functions */ r = radeon_asic_init(rdev); diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c index 8fc362aa6a1a..bf25061c8ac4 100644 --- a/drivers/gpu/drm/radeon/radeon_display.c +++ b/drivers/gpu/drm/radeon/radeon_display.c @@ -285,7 +285,6 @@ static void radeon_unpin_work_func(struct work_struct *__work) void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id) { struct radeon_crtc *radeon_crtc = rdev->mode_info.crtcs[crtc_id]; - struct radeon_flip_work *work; unsigned long flags; u32 update_pending; int vpos, hpos; @@ -295,8 +294,11 @@ void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id) return; spin_lock_irqsave(&rdev->ddev->event_lock, flags); - work = radeon_crtc->flip_work; - if (work == NULL) { + if (radeon_crtc->flip_status != RADEON_FLIP_SUBMITTED) { + DRM_DEBUG_DRIVER("radeon_crtc->flip_status = %d != " + "RADEON_FLIP_SUBMITTED(%d)\n", + radeon_crtc->flip_status, + RADEON_FLIP_SUBMITTED); spin_unlock_irqrestore(&rdev->ddev->event_lock, flags); return; } @@ -344,12 +346,17 @@ void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id) spin_lock_irqsave(&rdev->ddev->event_lock, flags); work = radeon_crtc->flip_work; - if (work == NULL) { + if (radeon_crtc->flip_status != RADEON_FLIP_SUBMITTED) { + DRM_DEBUG_DRIVER("radeon_crtc->flip_status = %d != " + "RADEON_FLIP_SUBMITTED(%d)\n", + radeon_crtc->flip_status, + RADEON_FLIP_SUBMITTED); spin_unlock_irqrestore(&rdev->ddev->event_lock, flags); return; } /* Pageflip completed. Clean up. */ + radeon_crtc->flip_status = RADEON_FLIP_NONE; radeon_crtc->flip_work = NULL; /* wakeup userspace */ @@ -359,7 +366,6 @@ void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id) spin_unlock_irqrestore(&rdev->ddev->event_lock, flags); drm_vblank_put(rdev->ddev, radeon_crtc->crtc_id); - radeon_fence_unref(&work->fence); radeon_irq_kms_pflip_irq_put(rdev, work->crtc_id); queue_work(radeon_crtc->flip_queue, &work->unpin_work); } @@ -379,51 +385,108 @@ static void radeon_flip_work_func(struct work_struct *__work) struct radeon_crtc *radeon_crtc = rdev->mode_info.crtcs[work->crtc_id]; struct drm_crtc *crtc = &radeon_crtc->base; - struct drm_framebuffer *fb = work->fb; - - uint32_t tiling_flags, pitch_pixels; - uint64_t base; - unsigned long flags; int r; down_read(&rdev->exclusive_lock); - while (work->fence) { + if (work->fence) { r = radeon_fence_wait(work->fence, false); if (r == -EDEADLK) { up_read(&rdev->exclusive_lock); r = radeon_gpu_reset(rdev); down_read(&rdev->exclusive_lock); } + if (r) + DRM_ERROR("failed to wait on page flip fence (%d)!\n", r); - if (r) { - DRM_ERROR("failed to wait on page flip fence (%d)!\n", - r); - goto cleanup; - } else - radeon_fence_unref(&work->fence); + /* We continue with the page flip even if we failed to wait on + * the fence, otherwise the DRM core and userspace will be + * confused about which BO the CRTC is scanning out + */ + + radeon_fence_unref(&work->fence); } + /* We borrow the event spin lock for protecting flip_status */ + spin_lock_irqsave(&crtc->dev->event_lock, flags); + + /* set the proper interrupt */ + radeon_irq_kms_pflip_irq_get(rdev, radeon_crtc->crtc_id); + + /* do the flip (mmio) */ + radeon_page_flip(rdev, radeon_crtc->crtc_id, work->base); + + radeon_crtc->flip_status = RADEON_FLIP_SUBMITTED; + spin_unlock_irqrestore(&crtc->dev->event_lock, flags); + up_read(&rdev->exclusive_lock); 
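The radeon_device.c hunk a little further above changes the radeon_vm_size parameter from megabytes to gigabytes, so the defaults and clamps shrink from 4096 to 4 and the conversion to a page-frame count becomes a shift by 18 (1 GiB at a 4 KiB page size is 2^18 pages). A small self-contained sketch of that argument checking and conversion, with invented names and the per-case warnings folded into one check:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

static bool is_pot(unsigned int v)
{
	return v && !(v & (v - 1));
}

/* vm_size_gb is the requested GPU VM size in GiB (the new unit of the
 * radeon_vm_size parameter); returns the value actually used. */
static unsigned int check_vm_size(unsigned int vm_size_gb)
{
	if (!is_pot(vm_size_gb) || vm_size_gb < 1 || vm_size_gb > 1024) {
		fprintf(stderr, "VM size (%u) invalid, falling back to 4GB\n",
			vm_size_gb);
		vm_size_gb = 4;
	}
	return vm_size_gb;
}

int main(void)
{
	unsigned int gb = check_vm_size(4);

	/* 1 GiB = 2^30 bytes, one page = 4 KiB = 2^12 bytes,
	 * so pages per GiB = 2^18 -> max_pfn = size_in_GiB << 18 */
	uint64_t max_pfn = (uint64_t)gb << 18;

	printf("%u GiB -> %llu pages\n", gb, (unsigned long long)max_pfn);
	return 0;
}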
+} + +static int radeon_crtc_page_flip(struct drm_crtc *crtc, + struct drm_framebuffer *fb, + struct drm_pending_vblank_event *event, + uint32_t page_flip_flags) +{ + struct drm_device *dev = crtc->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); + struct radeon_framebuffer *old_radeon_fb; + struct radeon_framebuffer *new_radeon_fb; + struct drm_gem_object *obj; + struct radeon_flip_work *work; + struct radeon_bo *new_rbo; + uint32_t tiling_flags, pitch_pixels; + uint64_t base; + unsigned long flags; + int r; + + work = kzalloc(sizeof *work, GFP_KERNEL); + if (work == NULL) + return -ENOMEM; + + INIT_WORK(&work->flip_work, radeon_flip_work_func); + INIT_WORK(&work->unpin_work, radeon_unpin_work_func); + + work->rdev = rdev; + work->crtc_id = radeon_crtc->crtc_id; + work->event = event; + + /* schedule unpin of the old buffer */ + old_radeon_fb = to_radeon_framebuffer(crtc->primary->fb); + obj = old_radeon_fb->obj; + + /* take a reference to the old object */ + drm_gem_object_reference(obj); + work->old_rbo = gem_to_radeon_bo(obj); + + new_radeon_fb = to_radeon_framebuffer(fb); + obj = new_radeon_fb->obj; + new_rbo = gem_to_radeon_bo(obj); + + spin_lock(&new_rbo->tbo.bdev->fence_lock); + if (new_rbo->tbo.sync_obj) + work->fence = radeon_fence_ref(new_rbo->tbo.sync_obj); + spin_unlock(&new_rbo->tbo.bdev->fence_lock); + /* pin the new buffer */ - DRM_DEBUG_DRIVER("flip-ioctl() cur_fbo = %p, cur_bbo = %p\n", - work->old_rbo, work->new_rbo); + DRM_DEBUG_DRIVER("flip-ioctl() cur_rbo = %p, new_rbo = %p\n", + work->old_rbo, new_rbo); - r = radeon_bo_reserve(work->new_rbo, false); + r = radeon_bo_reserve(new_rbo, false); if (unlikely(r != 0)) { DRM_ERROR("failed to reserve new rbo buffer before flip\n"); goto cleanup; } /* Only 27 bit offset for legacy CRTC */ - r = radeon_bo_pin_restricted(work->new_rbo, RADEON_GEM_DOMAIN_VRAM, + r = radeon_bo_pin_restricted(new_rbo, RADEON_GEM_DOMAIN_VRAM, ASIC_IS_AVIVO(rdev) ? 
0 : 1 << 27, &base); if (unlikely(r != 0)) { - radeon_bo_unreserve(work->new_rbo); + radeon_bo_unreserve(new_rbo); r = -EINVAL; DRM_ERROR("failed to pin new rbo buffer before flip\n"); goto cleanup; } - radeon_bo_get_tiling_flags(work->new_rbo, &tiling_flags, NULL); - radeon_bo_unreserve(work->new_rbo); + radeon_bo_get_tiling_flags(new_rbo, &tiling_flags, NULL); + radeon_bo_unreserve(new_rbo); if (!ASIC_IS_AVIVO(rdev)) { /* crtc offset is from display base addr not FB location */ @@ -460,6 +523,7 @@ static void radeon_flip_work_func(struct work_struct *__work) } base &= ~7; } + work->base = base; r = drm_vblank_get(crtc->dev, radeon_crtc->crtc_id); if (r) { @@ -470,98 +534,42 @@ static void radeon_flip_work_func(struct work_struct *__work) /* We borrow the event spin lock for protecting flip_work */ spin_lock_irqsave(&crtc->dev->event_lock, flags); - /* set the proper interrupt */ - radeon_irq_kms_pflip_irq_get(rdev, radeon_crtc->crtc_id); + if (radeon_crtc->flip_status != RADEON_FLIP_NONE) { + DRM_DEBUG_DRIVER("flip queue: crtc already busy\n"); + spin_unlock_irqrestore(&crtc->dev->event_lock, flags); + r = -EBUSY; + goto vblank_cleanup; + } + radeon_crtc->flip_status = RADEON_FLIP_PENDING; + radeon_crtc->flip_work = work; - /* do the flip (mmio) */ - radeon_page_flip(rdev, radeon_crtc->crtc_id, base); + /* update crtc fb */ + crtc->primary->fb = fb; spin_unlock_irqrestore(&crtc->dev->event_lock, flags); - up_read(&rdev->exclusive_lock); - return; + queue_work(radeon_crtc->flip_queue, &work->flip_work); + return 0; + +vblank_cleanup: + drm_vblank_put(crtc->dev, radeon_crtc->crtc_id); pflip_cleanup: - if (unlikely(radeon_bo_reserve(work->new_rbo, false) != 0)) { + if (unlikely(radeon_bo_reserve(new_rbo, false) != 0)) { DRM_ERROR("failed to reserve new rbo in error path\n"); goto cleanup; } - if (unlikely(radeon_bo_unpin(work->new_rbo) != 0)) { + if (unlikely(radeon_bo_unpin(new_rbo) != 0)) { DRM_ERROR("failed to unpin new rbo in error path\n"); } - radeon_bo_unreserve(work->new_rbo); + radeon_bo_unreserve(new_rbo); cleanup: drm_gem_object_unreference_unlocked(&work->old_rbo->gem_base); radeon_fence_unref(&work->fence); kfree(work); - up_read(&rdev->exclusive_lock); -} - -static int radeon_crtc_page_flip(struct drm_crtc *crtc, - struct drm_framebuffer *fb, - struct drm_pending_vblank_event *event, - uint32_t page_flip_flags) -{ - struct drm_device *dev = crtc->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); - struct radeon_framebuffer *old_radeon_fb; - struct radeon_framebuffer *new_radeon_fb; - struct drm_gem_object *obj; - struct radeon_flip_work *work; - unsigned long flags; - - work = kzalloc(sizeof *work, GFP_KERNEL); - if (work == NULL) - return -ENOMEM; - - INIT_WORK(&work->flip_work, radeon_flip_work_func); - INIT_WORK(&work->unpin_work, radeon_unpin_work_func); - - work->rdev = rdev; - work->crtc_id = radeon_crtc->crtc_id; - work->fb = fb; - work->event = event; - /* schedule unpin of the old buffer */ - old_radeon_fb = to_radeon_framebuffer(crtc->primary->fb); - obj = old_radeon_fb->obj; - - /* take a reference to the old object */ - drm_gem_object_reference(obj); - work->old_rbo = gem_to_radeon_bo(obj); - - new_radeon_fb = to_radeon_framebuffer(fb); - obj = new_radeon_fb->obj; - work->new_rbo = gem_to_radeon_bo(obj); - - spin_lock(&work->new_rbo->tbo.bdev->fence_lock); - if (work->new_rbo->tbo.sync_obj) - work->fence = radeon_fence_ref(work->new_rbo->tbo.sync_obj); - spin_unlock(&work->new_rbo->tbo.bdev->fence_lock); 
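The radeon_display.c rework above splits the page flip into an ioctl path that only records a PENDING flip, a worker that programs the hardware and marks it SUBMITTED, and interrupt handlers that complete only flips in the SUBMITTED state, all guarded by the event lock. A minimal userspace sketch of that state machine follows; a pthread mutex stands in for the event spinlock and all names are invented.

#include <pthread.h>
#include <stdio.h>

/* same three states the radeon_mode.h hunk introduces */
enum flip_status { FLIP_NONE, FLIP_PENDING, FLIP_SUBMITTED };

struct fake_crtc {
	pthread_mutex_t lock;     /* stands in for dev->event_lock */
	enum flip_status status;
};

/* ioctl path: only queue a flip if none is already in flight */
static int queue_flip(struct fake_crtc *c)
{
	int ret = 0;

	pthread_mutex_lock(&c->lock);
	if (c->status != FLIP_NONE)
		ret = -16;            /* -EBUSY: crtc already busy */
	else
		c->status = FLIP_PENDING;
	pthread_mutex_unlock(&c->lock);
	return ret;
}

/* worker: program the hardware, then mark the flip as submitted */
static void flip_worker(struct fake_crtc *c)
{
	pthread_mutex_lock(&c->lock);
	/* ... MMIO page flip would happen here ... */
	c->status = FLIP_SUBMITTED;
	pthread_mutex_unlock(&c->lock);
}

/* vblank handler: complete only flips the hardware actually latched */
static void handle_vblank(struct fake_crtc *c)
{
	pthread_mutex_lock(&c->lock);
	if (c->status != FLIP_SUBMITTED) {
		pthread_mutex_unlock(&c->lock);
		return;               /* nothing submitted yet, ignore */
	}
	c->status = FLIP_NONE;        /* flip done, crtc is free again */
	pthread_mutex_unlock(&c->lock);
	printf("flip completed\n");
}

int main(void)
{
	struct fake_crtc c = { PTHREAD_MUTEX_INITIALIZER, FLIP_NONE };

	if (!queue_flip(&c)) {
		flip_worker(&c);
		handle_vblank(&c);
	}
	return 0;
}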
- - /* We borrow the event spin lock for protecting flip_work */ - spin_lock_irqsave(&crtc->dev->event_lock, flags); - - if (radeon_crtc->flip_work) { - DRM_DEBUG_DRIVER("flip queue: crtc already busy\n"); - spin_unlock_irqrestore(&crtc->dev->event_lock, flags); - drm_gem_object_unreference_unlocked(&work->old_rbo->gem_base); - radeon_fence_unref(&work->fence); - kfree(work); - return -EBUSY; - } - radeon_crtc->flip_work = work; - - /* update crtc fb */ - crtc->primary->fb = fb; - - spin_unlock_irqrestore(&crtc->dev->event_lock, flags); - - queue_work(radeon_crtc->flip_queue, &work->flip_work); - - return 0; + return r; } static int @@ -821,6 +829,10 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector) struct radeon_device *rdev = dev->dev_private; int ret = 0; + /* don't leak the edid if we already fetched it in detect() */ + if (radeon_connector->edid) + goto got_edid; + /* on hw with routers, select right port */ if (radeon_connector->router.ddc_valid) radeon_router_select_ddc_port(radeon_connector); @@ -859,6 +871,7 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector) radeon_connector->edid = radeon_bios_get_hardcoded_edid(rdev); } if (radeon_connector->edid) { +got_edid: drm_mode_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid); ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid); drm_edid_to_eld(&radeon_connector->base, radeon_connector->edid); diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c index 6e3017413386..e9e361084249 100644 --- a/drivers/gpu/drm/radeon/radeon_drv.c +++ b/drivers/gpu/drm/radeon/radeon_drv.c @@ -173,8 +173,9 @@ int radeon_dpm = -1; int radeon_aspm = -1; int radeon_runtime_pm = -1; int radeon_hard_reset = 0; -int radeon_vm_size = 4096; +int radeon_vm_size = 4; int radeon_vm_block_size = 9; +int radeon_deep_color = 0; MODULE_PARM_DESC(no_wb, "Disable AGP writeback for scratch registers"); module_param_named(no_wb, radeon_no_wb, int, 0444); @@ -242,12 +243,15 @@ module_param_named(runpm, radeon_runtime_pm, int, 0444); MODULE_PARM_DESC(hard_reset, "PCI config reset (1 = force enable, 0 = disable (default))"); module_param_named(hard_reset, radeon_hard_reset, int, 0444); -MODULE_PARM_DESC(vm_size, "VM address space size in megabytes (default 4GB)"); +MODULE_PARM_DESC(vm_size, "VM address space size in gigabytes (default 4GB)"); module_param_named(vm_size, radeon_vm_size, int, 0444); MODULE_PARM_DESC(vm_block_size, "VM page table size in bits (default 9)"); module_param_named(vm_block_size, radeon_vm_block_size, int, 0444); +MODULE_PARM_DESC(deep_color, "Deep Color support (1 = enable, 0 = disable (default))"); +module_param_named(deep_color, radeon_deep_color, int, 0444); + static struct pci_device_id pciidlist[] = { radeon_PCI_IDS }; diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c index 35d931881b4b..d25ae6acfd5a 100644 --- a/drivers/gpu/drm/radeon/radeon_kms.c +++ b/drivers/gpu/drm/radeon/radeon_kms.c @@ -579,7 +579,7 @@ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv) /* new gpu have virtual address space support */ if (rdev->family >= CHIP_CAYMAN) { struct radeon_fpriv *fpriv; - struct radeon_bo_va *bo_va; + struct radeon_vm *vm; int r; fpriv = kzalloc(sizeof(*fpriv), GFP_KERNEL); @@ -587,7 +587,8 @@ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv) return -ENOMEM; } - r = radeon_vm_init(rdev, &fpriv->vm); + vm = &fpriv->vm; + r = 
radeon_vm_init(rdev, vm); if (r) { kfree(fpriv); return r; @@ -596,22 +597,23 @@ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv) if (rdev->accel_working) { r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false); if (r) { - radeon_vm_fini(rdev, &fpriv->vm); + radeon_vm_fini(rdev, vm); kfree(fpriv); return r; } /* map the ib pool buffer read only into * virtual address space */ - bo_va = radeon_vm_bo_add(rdev, &fpriv->vm, - rdev->ring_tmp_bo.bo); - r = radeon_vm_bo_set_addr(rdev, bo_va, RADEON_VA_IB_OFFSET, + vm->ib_bo_va = radeon_vm_bo_add(rdev, vm, + rdev->ring_tmp_bo.bo); + r = radeon_vm_bo_set_addr(rdev, vm->ib_bo_va, + RADEON_VA_IB_OFFSET, RADEON_VM_PAGE_READABLE | RADEON_VM_PAGE_SNOOPED); radeon_bo_unreserve(rdev->ring_tmp_bo.bo); if (r) { - radeon_vm_fini(rdev, &fpriv->vm); + radeon_vm_fini(rdev, vm); kfree(fpriv); return r; } @@ -640,21 +642,19 @@ void radeon_driver_postclose_kms(struct drm_device *dev, /* new gpu have virtual address space support */ if (rdev->family >= CHIP_CAYMAN && file_priv->driver_priv) { struct radeon_fpriv *fpriv = file_priv->driver_priv; - struct radeon_bo_va *bo_va; + struct radeon_vm *vm = &fpriv->vm; int r; if (rdev->accel_working) { r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false); if (!r) { - bo_va = radeon_vm_bo_find(&fpriv->vm, - rdev->ring_tmp_bo.bo); - if (bo_va) - radeon_vm_bo_rmv(rdev, bo_va); + if (vm->ib_bo_va) + radeon_vm_bo_rmv(rdev, vm->ib_bo_va); radeon_bo_unreserve(rdev->ring_tmp_bo.bo); } } - radeon_vm_fini(rdev, &fpriv->vm); + radeon_vm_fini(rdev, vm); kfree(fpriv); file_priv->driver_priv = NULL; } diff --git a/drivers/gpu/drm/radeon/radeon_mode.h b/drivers/gpu/drm/radeon/radeon_mode.h index ad0e4b8cc7e3..0592ddb0904b 100644 --- a/drivers/gpu/drm/radeon/radeon_mode.h +++ b/drivers/gpu/drm/radeon/radeon_mode.h @@ -46,6 +46,10 @@ struct radeon_device; #define to_radeon_encoder(x) container_of(x, struct radeon_encoder, base) #define to_radeon_framebuffer(x) container_of(x, struct radeon_framebuffer, base) +#define RADEON_MAX_HPD_PINS 7 +#define RADEON_MAX_CRTCS 6 +#define RADEON_MAX_AFMT_BLOCKS 7 + enum radeon_rmx_type { RMX_OFF, RMX_FULL, @@ -233,8 +237,8 @@ struct radeon_mode_info { struct card_info *atom_card_info; enum radeon_connector_table connector_table; bool mode_config_initialized; - struct radeon_crtc *crtcs[6]; - struct radeon_afmt *afmt[7]; + struct radeon_crtc *crtcs[RADEON_MAX_CRTCS]; + struct radeon_afmt *afmt[RADEON_MAX_AFMT_BLOCKS]; /* DVI-I properties */ struct drm_property *coherent_mode_property; /* DAC enable load detect */ @@ -302,6 +306,12 @@ struct radeon_atom_ss { uint16_t amount; }; +enum radeon_flip_status { + RADEON_FLIP_NONE, + RADEON_FLIP_PENDING, + RADEON_FLIP_SUBMITTED +}; + struct radeon_crtc { struct drm_crtc base; int crtc_id; @@ -327,6 +337,7 @@ struct radeon_crtc { /* page flipping */ struct workqueue_struct *flip_queue; struct radeon_flip_work *flip_work; + enum radeon_flip_status flip_status; /* pll sharing */ struct radeon_atom_ss ss; bool ss_enabled; diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c index 12c663e86ca1..e447e390d09a 100644 --- a/drivers/gpu/drm/radeon/radeon_pm.c +++ b/drivers/gpu/drm/radeon/radeon_pm.c @@ -73,8 +73,10 @@ void radeon_pm_acpi_event_handler(struct radeon_device *rdev) rdev->pm.dpm.ac_power = true; else rdev->pm.dpm.ac_power = false; - if (rdev->asic->dpm.enable_bapm) - radeon_dpm_enable_bapm(rdev, rdev->pm.dpm.ac_power); + if (rdev->family == CHIP_ARUBA) { + if (rdev->asic->dpm.enable_bapm) + 
radeon_dpm_enable_bapm(rdev, rdev->pm.dpm.ac_power); + } mutex_unlock(&rdev->pm.mutex); } else if (rdev->pm.pm_method == PM_METHOD_PROFILE) { if (rdev->pm.profile == PM_PROFILE_AUTO) { diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c index 899d9126cad6..725d3669014f 100644 --- a/drivers/gpu/drm/radeon/radeon_vm.c +++ b/drivers/gpu/drm/radeon/radeon_vm.c @@ -332,6 +332,7 @@ struct radeon_bo_va *radeon_vm_bo_add(struct radeon_device *rdev, bo_va->ref_count = 1; INIT_LIST_HEAD(&bo_va->bo_list); INIT_LIST_HEAD(&bo_va->vm_list); + INIT_LIST_HEAD(&bo_va->vm_status); mutex_lock(&vm->mutex); list_add(&bo_va->vm_list, &vm->va); @@ -468,6 +469,19 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev, head = &tmp->vm_list; } + if (bo_va->soffset) { + /* add a clone of the bo_va to clear the old address */ + tmp = kzalloc(sizeof(struct radeon_bo_va), GFP_KERNEL); + if (!tmp) { + mutex_unlock(&vm->mutex); + return -ENOMEM; + } + tmp->soffset = bo_va->soffset; + tmp->eoffset = bo_va->eoffset; + tmp->vm = vm; + list_add(&tmp->vm_status, &vm->freed); + } + bo_va->soffset = soffset; bo_va->eoffset = eoffset; bo_va->flags = flags; @@ -495,7 +509,7 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev, mutex_unlock(&vm->mutex); r = radeon_bo_create(rdev, RADEON_VM_PTE_COUNT * 8, - RADEON_GPU_PAGE_SIZE, false, + RADEON_GPU_PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM, NULL, &pt); if (r) return r; @@ -823,25 +837,19 @@ static void radeon_vm_update_ptes(struct radeon_device *rdev, * Object have to be reserved and mutex must be locked! */ int radeon_vm_bo_update(struct radeon_device *rdev, - struct radeon_vm *vm, - struct radeon_bo *bo, + struct radeon_bo_va *bo_va, struct ttm_mem_reg *mem) { + struct radeon_vm *vm = bo_va->vm; struct radeon_ib ib; - struct radeon_bo_va *bo_va; unsigned nptes, ndw; uint64_t addr; int r; - bo_va = radeon_vm_bo_find(vm, bo); - if (bo_va == NULL) { - dev_err(rdev->dev, "bo %p not in vm %p\n", bo, vm); - return -EINVAL; - } if (!bo_va->soffset) { dev_err(rdev->dev, "bo %p don't has a mapping in vm %p\n", - bo, vm); + bo_va->bo, vm); return -EINVAL; } @@ -868,7 +876,7 @@ int radeon_vm_bo_update(struct radeon_device *rdev, trace_radeon_vm_bo_update(bo_va); - nptes = radeon_bo_ngpu_pages(bo); + nptes = (bo_va->eoffset - bo_va->soffset) / RADEON_GPU_PAGE_SIZE; /* padding, etc. */ ndw = 64; @@ -911,33 +919,61 @@ int radeon_vm_bo_update(struct radeon_device *rdev, } /** + * radeon_vm_clear_freed - clear freed BOs in the PT + * + * @rdev: radeon_device pointer + * @vm: requested vm + * + * Make sure all freed BOs are cleared in the PT. + * Returns 0 for success. + * + * PTs have to be reserved and mutex must be locked! + */ +int radeon_vm_clear_freed(struct radeon_device *rdev, + struct radeon_vm *vm) +{ + struct radeon_bo_va *bo_va, *tmp; + int r; + + list_for_each_entry_safe(bo_va, tmp, &vm->freed, vm_status) { + list_del(&bo_va->vm_status); + r = radeon_vm_bo_update(rdev, bo_va, NULL); + kfree(bo_va); + if (r) + return r; + } + return 0; + +} + +/** * radeon_vm_bo_rmv - remove a bo to a specific vm * * @rdev: radeon_device pointer * @bo_va: requested bo_va * * Remove @bo_va->bo from the requested vm (cayman+). - * Remove @bo_va->bo from the list of bos associated with the bo_va->vm and - * remove the ptes for @bo_va in the page table. - * Returns 0 for success. * * Object have to be reserved! 
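The radeon_vm.c hunks above stop clearing page-table entries synchronously in radeon_vm_bo_rmv: a mapping that is still live is parked on a per-VM freed list, and radeon_vm_clear_freed() later sweeps that list with the page tables reserved. A toy sketch of the same defer-then-sweep idea, with simplified types and no locking:

#include <stdlib.h>
#include <stdio.h>

/* toy stand-in for a VA mapping; only what the sketch needs */
struct mapping {
	unsigned long start, end;
	struct mapping *next;
};

struct vm {
	struct mapping *freed;    /* removed mappings not yet cleared in the PT */
};

/* removal path: if the mapping was live, defer the page-table clear */
static void mapping_remove(struct vm *vm, struct mapping *m)
{
	if (m->start) {
		m->next = vm->freed;  /* queue for a later clear pass */
		vm->freed = m;
	} else {
		free(m);              /* never mapped, nothing to undo */
	}
}

/* later, with the page tables reserved: clear and free everything queued */
static int vm_clear_freed(struct vm *vm)
{
	struct mapping *m, *next;

	for (m = vm->freed; m; m = next) {
		next = m->next;
		printf("clearing PTEs %lx-%lx\n", m->start, m->end);
		free(m);
	}
	vm->freed = NULL;
	return 0;
}

int main(void)
{
	struct vm vm = { NULL };
	struct mapping *m = calloc(1, sizeof(*m));

	if (!m)
		return 1;
	m->start = 0x1000;
	m->end = 0x2000;                      /* pretend this range was mapped */
	mapping_remove(&vm, m);               /* deferred, not cleared yet */
	return vm_clear_freed(&vm);           /* sweep happens here */
}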
*/ -int radeon_vm_bo_rmv(struct radeon_device *rdev, - struct radeon_bo_va *bo_va) +void radeon_vm_bo_rmv(struct radeon_device *rdev, + struct radeon_bo_va *bo_va) { - int r = 0; + struct radeon_vm *vm = bo_va->vm; - mutex_lock(&bo_va->vm->mutex); - if (bo_va->soffset) - r = radeon_vm_bo_update(rdev, bo_va->vm, bo_va->bo, NULL); + list_del(&bo_va->bo_list); + mutex_lock(&vm->mutex); list_del(&bo_va->vm_list); - mutex_unlock(&bo_va->vm->mutex); - list_del(&bo_va->bo_list); - kfree(bo_va); - return r; + if (bo_va->soffset) { + bo_va->bo = NULL; + list_add(&bo_va->vm_status, &vm->freed); + } else { + kfree(bo_va); + } + + mutex_unlock(&vm->mutex); } /** @@ -975,11 +1011,13 @@ int radeon_vm_init(struct radeon_device *rdev, struct radeon_vm *vm) int r; vm->id = 0; + vm->ib_bo_va = NULL; vm->fence = NULL; vm->last_flush = NULL; vm->last_id_use = NULL; mutex_init(&vm->mutex); INIT_LIST_HEAD(&vm->va); + INIT_LIST_HEAD(&vm->freed); pd_size = radeon_vm_directory_size(rdev); pd_entries = radeon_vm_num_pdes(rdev); @@ -992,7 +1030,7 @@ int radeon_vm_init(struct radeon_device *rdev, struct radeon_vm *vm) return -ENOMEM; } - r = radeon_bo_create(rdev, pd_size, align, false, + r = radeon_bo_create(rdev, pd_size, align, true, RADEON_GEM_DOMAIN_VRAM, NULL, &vm->page_directory); if (r) @@ -1034,7 +1072,8 @@ void radeon_vm_fini(struct radeon_device *rdev, struct radeon_vm *vm) kfree(bo_va); } } - + list_for_each_entry_safe(bo_va, tmp, &vm->freed, vm_status) + kfree(bo_va); for (i = 0; i < radeon_vm_num_pdes(rdev); i++) radeon_bo_unref(&vm->page_tables[i].bo); diff --git a/drivers/gpu/drm/radeon/rv515.c b/drivers/gpu/drm/radeon/rv515.c index 237dd29d9f1c..3e21e869015f 100644 --- a/drivers/gpu/drm/radeon/rv515.c +++ b/drivers/gpu/drm/radeon/rv515.c @@ -406,8 +406,9 @@ void rv515_mc_resume(struct radeon_device *rdev, struct rv515_mc_save *save) for (i = 0; i < rdev->num_crtc; i++) { if (save->crtc_enabled[i]) { tmp = RREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + crtc_offsets[i]); - if ((tmp & 0x3) != 0) { - tmp &= ~0x3; + if ((tmp & 0x7) != 3) { + tmp &= ~0x7; + tmp |= 0x3; WREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + crtc_offsets[i], tmp); } tmp = RREG32(AVIVO_D1GRPH_UPDATE + crtc_offsets[i]); diff --git a/drivers/gpu/drm/radeon/rv770_dpm.c b/drivers/gpu/drm/radeon/rv770_dpm.c index da041a43d82e..3c76e1dcdf04 100644 --- a/drivers/gpu/drm/radeon/rv770_dpm.c +++ b/drivers/gpu/drm/radeon/rv770_dpm.c @@ -2329,12 +2329,6 @@ void rv770_get_engine_memory_ss(struct radeon_device *rdev) pi->mclk_ss = radeon_atombios_get_asic_ss_info(rdev, &ss, ASIC_INTERNAL_MEMORY_SS, 0); - /* disable ss, causes hangs on some cayman boards */ - if (rdev->family == CHIP_CAYMAN) { - pi->sclk_ss = false; - pi->mclk_ss = false; - } - if (pi->sclk_ss || pi->mclk_ss) pi->dynamic_ss = true; else diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c index 730cee2c34cf..9e854fd016da 100644 --- a/drivers/gpu/drm/radeon/si.c +++ b/drivers/gpu/drm/radeon/si.c @@ -6103,6 +6103,7 @@ static inline u32 si_get_ih_wptr(struct radeon_device *rdev) tmp = RREG32(IH_RB_CNTL); tmp |= IH_WPTR_OVERFLOW_CLEAR; WREG32(IH_RB_CNTL, tmp); + wptr &= ~RB_OVERFLOW; } return (wptr & rdev->ih.ptr_mask); } @@ -6376,14 +6377,16 @@ restart_ih: case 147: addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); + /* reset addr and status */ + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); + if (addr == 0x0 && status == 0x0) + break; dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); dev_err(rdev->dev, " 
VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", addr); dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", status); si_vm_decode_fault(rdev, status, addr); - /* reset addr and status */ - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); break; case 176: /* RINGID0 CP_INT */ radeon_fence_process(rdev, RADEON_RING_TYPE_GFX_INDEX); diff --git a/drivers/gpu/drm/radeon/trinity_dpm.c b/drivers/gpu/drm/radeon/trinity_dpm.c index 2a2822c03329..32e50be9c4ac 100644 --- a/drivers/gpu/drm/radeon/trinity_dpm.c +++ b/drivers/gpu/drm/radeon/trinity_dpm.c @@ -1874,7 +1874,16 @@ int trinity_dpm_init(struct radeon_device *rdev) for (i = 0; i < SUMO_MAX_HARDWARE_POWERLEVELS; i++) pi->at[i] = TRINITY_AT_DFLT; - pi->enable_bapm = false; + /* There are stability issues reported on with + * bapm enabled when switching between AC and battery + * power. At the same time, some MSI boards hang + * if it's not enabled and dpm is enabled. Just enable + * it for MSI boards right now. + */ + if (rdev->pdev->subsystem_vendor == 0x1462) + pi->enable_bapm = true; + else + pi->enable_bapm = false; pi->enable_nbps_policy = true; pi->enable_sclk_ds = true; pi->enable_gfx_power_gating = true; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c index a89ad938eacf..b031b48dbb3c 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c @@ -179,7 +179,6 @@ static int vmw_fb_set_par(struct fb_info *info) vmw_write(vmw_priv, SVGA_REG_DISPLAY_POSITION_Y, info->var.yoffset); vmw_write(vmw_priv, SVGA_REG_DISPLAY_WIDTH, info->var.xres); vmw_write(vmw_priv, SVGA_REG_DISPLAY_HEIGHT, info->var.yres); - vmw_write(vmw_priv, SVGA_REG_BYTES_PER_LINE, info->fix.line_length); vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, SVGA_ID_INVALID); } diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig index 800c8b60f7a2..5e79c6ad914f 100644 --- a/drivers/hid/Kconfig +++ b/drivers/hid/Kconfig @@ -810,7 +810,7 @@ config HID_ZYDACRON config HID_SENSOR_HUB tristate "HID Sensors framework support" - depends on HID + depends on HID && HAS_IOMEM select MFD_CORE default n ---help--- diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index 6d00bb9366fa..48b66bbffc94 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -323,6 +323,7 @@ #define USB_VENDOR_ID_ETURBOTOUCH 0x22b9 #define USB_DEVICE_ID_ETURBOTOUCH 0x0006 +#define USB_DEVICE_ID_ETURBOTOUCH_2968 0x2968 #define USB_VENDOR_ID_EZKEY 0x0518 #define USB_DEVICE_ID_BTC_8193 0x0002 @@ -715,6 +716,8 @@ #define USB_VENDOR_ID_PENMOUNT 0x14e1 #define USB_DEVICE_ID_PENMOUNT_PCI 0x3500 +#define USB_DEVICE_ID_PENMOUNT_1610 0x1610 +#define USB_DEVICE_ID_PENMOUNT_1640 0x1640 #define USB_VENDOR_ID_PETALYNX 0x18b1 #define USB_DEVICE_ID_PETALYNX_MAXTER_REMOTE 0x0037 diff --git a/drivers/hid/hid-rmi.c b/drivers/hid/hid-rmi.c index 2451c7e5febd..578bbe65902b 100644 --- a/drivers/hid/hid-rmi.c +++ b/drivers/hid/hid-rmi.c @@ -428,6 +428,7 @@ static int rmi_raw_event(struct hid_device *hdev, return 0; } +#ifdef CONFIG_PM static int rmi_post_reset(struct hid_device *hdev) { return rmi_set_mode(hdev, RMI_MODE_ATTN_REPORTS); @@ -437,6 +438,7 @@ static int rmi_post_resume(struct hid_device *hdev) { return rmi_set_mode(hdev, RMI_MODE_ATTN_REPORTS); } +#endif /* CONFIG_PM */ #define RMI4_MAX_PAGE 0xff #define RMI4_PAGE_SIZE 0x0100 diff --git a/drivers/hid/hid-sensor-hub.c b/drivers/hid/hid-sensor-hub.c index a8d5c8faf8cf..e244e449cbba 100644 --- a/drivers/hid/hid-sensor-hub.c +++ b/drivers/hid/hid-sensor-hub.c @@ -159,17 +159,18 @@ int 
sensor_hub_register_callback(struct hid_sensor_hub_device *hsdev, { struct hid_sensor_hub_callbacks_list *callback; struct sensor_hub_data *pdata = hid_get_drvdata(hsdev->hdev); + unsigned long flags; - spin_lock(&pdata->dyn_callback_lock); + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); list_for_each_entry(callback, &pdata->dyn_callback_list, list) if (callback->usage_id == usage_id && callback->hsdev == hsdev) { - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return -EINVAL; } callback = kzalloc(sizeof(*callback), GFP_ATOMIC); if (!callback) { - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return -ENOMEM; } callback->hsdev = hsdev; @@ -177,7 +178,7 @@ int sensor_hub_register_callback(struct hid_sensor_hub_device *hsdev, callback->usage_id = usage_id; callback->priv = NULL; list_add_tail(&callback->list, &pdata->dyn_callback_list); - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return 0; } @@ -188,8 +189,9 @@ int sensor_hub_remove_callback(struct hid_sensor_hub_device *hsdev, { struct hid_sensor_hub_callbacks_list *callback; struct sensor_hub_data *pdata = hid_get_drvdata(hsdev->hdev); + unsigned long flags; - spin_lock(&pdata->dyn_callback_lock); + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); list_for_each_entry(callback, &pdata->dyn_callback_list, list) if (callback->usage_id == usage_id && callback->hsdev == hsdev) { @@ -197,7 +199,7 @@ int sensor_hub_remove_callback(struct hid_sensor_hub_device *hsdev, kfree(callback); break; } - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return 0; } @@ -378,15 +380,16 @@ static int sensor_hub_suspend(struct hid_device *hdev, pm_message_t message) { struct sensor_hub_data *pdata = hid_get_drvdata(hdev); struct hid_sensor_hub_callbacks_list *callback; + unsigned long flags; hid_dbg(hdev, " sensor_hub_suspend\n"); - spin_lock(&pdata->dyn_callback_lock); + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); list_for_each_entry(callback, &pdata->dyn_callback_list, list) { if (callback->usage_callback->suspend) callback->usage_callback->suspend( callback->hsdev, callback->priv); } - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return 0; } @@ -395,15 +398,16 @@ static int sensor_hub_resume(struct hid_device *hdev) { struct sensor_hub_data *pdata = hid_get_drvdata(hdev); struct hid_sensor_hub_callbacks_list *callback; + unsigned long flags; hid_dbg(hdev, " sensor_hub_resume\n"); - spin_lock(&pdata->dyn_callback_lock); + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); list_for_each_entry(callback, &pdata->dyn_callback_list, list) { if (callback->usage_callback->resume) callback->usage_callback->resume( callback->hsdev, callback->priv); } - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return 0; } @@ -632,6 +636,7 @@ static int sensor_hub_probe(struct hid_device *hdev, if (name == NULL) { hid_err(hdev, "Failed MFD device name\n"); ret = -ENOMEM; + kfree(hsdev); goto err_no_mem; } sd->hid_sensor_hub_client_devs[ diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c index 59badc10a08c..31e6727cd009 100644 --- a/drivers/hid/usbhid/hid-quirks.c +++ b/drivers/hid/usbhid/hid-quirks.c @@ -49,6 +49,7 @@ static const struct hid_blacklist { { USB_VENDOR_ID_EMS, USB_DEVICE_ID_EMS_TRIO_LINKER_PLUS_II, 
HID_QUIRK_MULTI_INPUT }, { USB_VENDOR_ID_ETURBOTOUCH, USB_DEVICE_ID_ETURBOTOUCH, HID_QUIRK_MULTI_INPUT }, + { USB_VENDOR_ID_ETURBOTOUCH, USB_DEVICE_ID_ETURBOTOUCH_2968, HID_QUIRK_MULTI_INPUT }, { USB_VENDOR_ID_GREENASIA, USB_DEVICE_ID_GREENASIA_DUAL_USB_JOYPAD, HID_QUIRK_MULTI_INPUT }, { USB_VENDOR_ID_PANTHERLORD, USB_DEVICE_ID_PANTHERLORD_TWIN_USB_JOYSTICK, HID_QUIRK_MULTI_INPUT | HID_QUIRK_SKIP_OUTPUT_REPORTS }, { USB_VENDOR_ID_PLAYDOTCOM, USB_DEVICE_ID_PLAYDOTCOM_EMS_USBII, HID_QUIRK_MULTI_INPUT }, @@ -76,6 +77,8 @@ static const struct hid_blacklist { { USB_VENDOR_ID_MSI, USB_DEVICE_ID_MSI_GX680R_LED_PANEL, HID_QUIRK_NO_INIT_REPORTS }, { USB_VENDOR_ID_NEXIO, USB_DEVICE_ID_NEXIO_MULTITOUCH_PTI0750, HID_QUIRK_NO_INIT_REPORTS }, { USB_VENDOR_ID_NOVATEK, USB_DEVICE_ID_NOVATEK_MOUSE, HID_QUIRK_NO_INIT_REPORTS }, + { USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1610, HID_QUIRK_NOGET }, + { USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1640, HID_QUIRK_NOGET }, { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NO_INIT_REPORTS }, { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN1, HID_QUIRK_NO_INIT_REPORTS }, { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN2, HID_QUIRK_NO_INIT_REPORTS }, diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c index e84f4526eb36..ae22e3c1fc4c 100644 --- a/drivers/hv/connection.c +++ b/drivers/hv/connection.c @@ -339,9 +339,13 @@ static void process_chn_event(u32 relid) */ do { - hv_begin_read(&channel->inbound); + if (read_state) + hv_begin_read(&channel->inbound); channel->onchannel_callback(arg); - bytes_to_read = hv_end_read(&channel->inbound); + if (read_state) + bytes_to_read = hv_end_read(&channel->inbound); + else + bytes_to_read = 0; } while (read_state && (bytes_to_read != 0)); } else { pr_err("no channel callback for relid - %u\n", relid); diff --git a/drivers/hv/hv_fcopy.c b/drivers/hv/hv_fcopy.c index eaaa3d843b80..23b2ce294c4c 100644 --- a/drivers/hv/hv_fcopy.c +++ b/drivers/hv/hv_fcopy.c @@ -246,8 +246,8 @@ void hv_fcopy_onchannelcallback(void *context) /* * Send the information to the user-level daemon. 
*/ - fcopy_send_data(); schedule_delayed_work(&fcopy_work, 5*HZ); + fcopy_send_data(); return; } icmsghdr->icflags = ICMSGHDRFLAG_TRANSACTION | ICMSGHDRFLAG_RESPONSE; diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c index ea852537307e..521c14625b3a 100644 --- a/drivers/hv/hv_kvp.c +++ b/drivers/hv/hv_kvp.c @@ -127,6 +127,17 @@ kvp_work_func(struct work_struct *dummy) kvp_respond_to_host(NULL, HV_E_FAIL); } +static void poll_channel(struct vmbus_channel *channel) +{ + if (channel->target_cpu != smp_processor_id()) + smp_call_function_single(channel->target_cpu, + hv_kvp_onchannelcallback, + channel, true); + else + hv_kvp_onchannelcallback(channel); +} + + static int kvp_handle_handshake(struct hv_kvp_msg *msg) { int ret = 1; @@ -155,7 +166,7 @@ static int kvp_handle_handshake(struct hv_kvp_msg *msg) kvp_register(dm_reg_value); kvp_transaction.active = false; if (kvp_transaction.kvp_context) - hv_kvp_onchannelcallback(kvp_transaction.kvp_context); + poll_channel(kvp_transaction.kvp_context); } return ret; } @@ -568,7 +579,7 @@ response_done: vmbus_sendpacket(channel, recv_buffer, buf_len, req_id, VM_PKT_DATA_INBAND, 0); - + poll_channel(channel); } /* @@ -603,7 +614,7 @@ void hv_kvp_onchannelcallback(void *context) return; } - vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 2, &recvlen, + vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 4, &recvlen, &requestid); if (recvlen > 0) { diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c index dd761806f0e8..3b9c9ef0deb8 100644 --- a/drivers/hv/hv_util.c +++ b/drivers/hv/hv_util.c @@ -319,7 +319,7 @@ static int util_probe(struct hv_device *dev, (struct hv_util_service *)dev_id->driver_data; int ret; - srv->recv_buffer = kmalloc(PAGE_SIZE * 2, GFP_KERNEL); + srv->recv_buffer = kmalloc(PAGE_SIZE * 4, GFP_KERNEL); if (!srv->recv_buffer) return -ENOMEM; if (srv->util_init) { diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig index 08531a128f53..02d3d85829f3 100644 --- a/drivers/hwmon/Kconfig +++ b/drivers/hwmon/Kconfig @@ -1052,7 +1052,7 @@ config SENSORS_PC87427 will be called pc87427. config SENSORS_NTC_THERMISTOR - tristate "NTC thermistor support" + tristate "NTC thermistor support from Murata" depends on !OF || IIO=n || IIO help This driver supports NTC thermistors sensor reading and its @@ -1060,7 +1060,8 @@ config SENSORS_NTC_THERMISTOR send notifications about the temperature. Currently, this driver supports - NCP15WB473, NCP18WB473, NCP21WB473, NCP03WB473, and NCP15WL333. + NCP15WB473, NCP18WB473, NCP21WB473, NCP03WB473, and NCP15WL333 + from Murata. This driver can also be built as a module. If so, the module will be called ntc-thermistor. @@ -1176,6 +1177,7 @@ config SENSORS_DME1737 config SENSORS_EMC1403 tristate "SMSC EMC1403/23 thermal sensor" depends on I2C + select REGMAP_I2C help If you say yes here you get support for the SMSC EMC1403/23 temperature monitoring chip. 
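The hid-sensor-hub.c hunks above convert every user of dyn_callback_lock from plain spin_lock()/spin_unlock() to the irqsave variants and thread a local flags word through each caller, presumably because the same lock is also taken from the raw-event path, which can run in hard-IRQ context. Below is a minimal sketch of that locking pattern, not the driver's code: my_dev and my_cb are hypothetical stand-ins for sensor_hub_data and its callbacks list.

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct my_cb {
	struct list_head list;
	u32 usage_id;
};

struct my_dev {
	spinlock_t lock;		/* also taken from IRQ context */
	struct list_head cb_list;
};

static void my_dev_init(struct my_dev *dev)
{
	spin_lock_init(&dev->lock);
	INIT_LIST_HEAD(&dev->cb_list);
}

static int my_dev_add_cb(struct my_dev *dev, u32 usage_id)
{
	struct my_cb *cb;
	unsigned long flags;

	/* Disable local interrupts while holding the lock so an IRQ
	 * handler taking the same lock cannot deadlock against us. */
	spin_lock_irqsave(&dev->lock, flags);

	/* GFP_ATOMIC: sleeping is not allowed under a spinlock. */
	cb = kzalloc(sizeof(*cb), GFP_ATOMIC);
	if (!cb) {
		spin_unlock_irqrestore(&dev->lock, flags);
		return -ENOMEM;
	}

	cb->usage_id = usage_id;
	list_add_tail(&cb->list, &dev->cb_list);

	spin_unlock_irqrestore(&dev->lock, flags);
	return 0;
}

Plain spin_lock() is only safe while the lock is never taken from interrupt context; once any path can take it from an IRQ, every holder must disable local interrupts around the critical section, which is exactly the flags-threading these hunks add. The amd_iommu_v2.c hunk further down applies the same discipline when it folds mmu_notifier_count under the existing pasid_state spinlock.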
diff --git a/drivers/hwmon/adc128d818.c b/drivers/hwmon/adc128d818.c index 5ffd81f19d01..0625e50d7a6e 100644 --- a/drivers/hwmon/adc128d818.c +++ b/drivers/hwmon/adc128d818.c @@ -239,50 +239,50 @@ static ssize_t adc128_show_alarm(struct device *dev, return sprintf(buf, "%u\n", !!(alarms & mask)); } -static SENSOR_DEVICE_ATTR_2(in0_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 0, 0); +static SENSOR_DEVICE_ATTR_2(in0_input, S_IRUGO, + adc128_show_in, NULL, 0, 0); static SENSOR_DEVICE_ATTR_2(in0_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 0, 1); static SENSOR_DEVICE_ATTR_2(in0_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 0, 2); -static SENSOR_DEVICE_ATTR_2(in1_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 1, 0); +static SENSOR_DEVICE_ATTR_2(in1_input, S_IRUGO, + adc128_show_in, NULL, 1, 0); static SENSOR_DEVICE_ATTR_2(in1_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 1, 1); static SENSOR_DEVICE_ATTR_2(in1_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 1, 2); -static SENSOR_DEVICE_ATTR_2(in2_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 2, 0); +static SENSOR_DEVICE_ATTR_2(in2_input, S_IRUGO, + adc128_show_in, NULL, 2, 0); static SENSOR_DEVICE_ATTR_2(in2_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 2, 1); static SENSOR_DEVICE_ATTR_2(in2_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 2, 2); -static SENSOR_DEVICE_ATTR_2(in3_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 3, 0); +static SENSOR_DEVICE_ATTR_2(in3_input, S_IRUGO, + adc128_show_in, NULL, 3, 0); static SENSOR_DEVICE_ATTR_2(in3_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 3, 1); static SENSOR_DEVICE_ATTR_2(in3_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 3, 2); -static SENSOR_DEVICE_ATTR_2(in4_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 4, 0); +static SENSOR_DEVICE_ATTR_2(in4_input, S_IRUGO, + adc128_show_in, NULL, 4, 0); static SENSOR_DEVICE_ATTR_2(in4_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 4, 1); static SENSOR_DEVICE_ATTR_2(in4_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 4, 2); -static SENSOR_DEVICE_ATTR_2(in5_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 5, 0); +static SENSOR_DEVICE_ATTR_2(in5_input, S_IRUGO, + adc128_show_in, NULL, 5, 0); static SENSOR_DEVICE_ATTR_2(in5_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 5, 1); static SENSOR_DEVICE_ATTR_2(in5_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 5, 2); -static SENSOR_DEVICE_ATTR_2(in6_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 6, 0); +static SENSOR_DEVICE_ATTR_2(in6_input, S_IRUGO, + adc128_show_in, NULL, 6, 0); static SENSOR_DEVICE_ATTR_2(in6_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 6, 1); static SENSOR_DEVICE_ATTR_2(in6_max, S_IWUSR | S_IRUGO, diff --git a/drivers/hwmon/adm1021.c b/drivers/hwmon/adm1021.c index 3eb4281689b5..d74241bb278c 100644 --- a/drivers/hwmon/adm1021.c +++ b/drivers/hwmon/adm1021.c @@ -185,7 +185,7 @@ static ssize_t set_temp_max(struct device *dev, struct adm1021_data *data = dev_get_drvdata(dev); struct i2c_client *client = data->client; long temp; - int err; + int reg_val, err; err = kstrtol(buf, 10, &temp); if (err) @@ -193,10 +193,11 @@ static ssize_t set_temp_max(struct device *dev, temp /= 1000; mutex_lock(&data->update_lock); - data->temp_max[index] = clamp_val(temp, -128, 127); + reg_val = clamp_val(temp, -128, 127); + data->temp_max[index] = reg_val * 1000; if (!read_only) i2c_smbus_write_byte_data(client, 
ADM1021_REG_TOS_W(index), - data->temp_max[index]); + reg_val); mutex_unlock(&data->update_lock); return count; @@ -210,7 +211,7 @@ static ssize_t set_temp_min(struct device *dev, struct adm1021_data *data = dev_get_drvdata(dev); struct i2c_client *client = data->client; long temp; - int err; + int reg_val, err; err = kstrtol(buf, 10, &temp); if (err) @@ -218,10 +219,11 @@ static ssize_t set_temp_min(struct device *dev, temp /= 1000; mutex_lock(&data->update_lock); - data->temp_min[index] = clamp_val(temp, -128, 127); + reg_val = clamp_val(temp, -128, 127); + data->temp_min[index] = reg_val * 1000; if (!read_only) i2c_smbus_write_byte_data(client, ADM1021_REG_THYST_W(index), - data->temp_min[index]); + reg_val); mutex_unlock(&data->update_lock); return count; diff --git a/drivers/hwmon/adm1029.c b/drivers/hwmon/adm1029.c index 78339e880bd6..2804571b269e 100644 --- a/drivers/hwmon/adm1029.c +++ b/drivers/hwmon/adm1029.c @@ -232,6 +232,9 @@ static ssize_t set_fan_div(struct device *dev, /* Update the value */ reg = (reg & 0x3F) | (val << 6); + /* Update the cache */ + data->fan_div[attr->index] = reg; + /* Write value */ i2c_smbus_write_byte_data(client, ADM1029_REG_FAN_DIV[attr->index], reg); diff --git a/drivers/hwmon/adm1031.c b/drivers/hwmon/adm1031.c index a8a540ca8c34..51c1a5a165ab 100644 --- a/drivers/hwmon/adm1031.c +++ b/drivers/hwmon/adm1031.c @@ -365,6 +365,7 @@ set_auto_temp_min(struct device *dev, struct device_attribute *attr, if (ret) return ret; + val = clamp_val(val, 0, 127000); mutex_lock(&data->update_lock); data->auto_temp[nr] = AUTO_TEMP_MIN_TO_REG(val, data->auto_temp[nr]); adm1031_write_value(client, ADM1031_REG_AUTO_TEMP(nr), @@ -394,6 +395,7 @@ set_auto_temp_max(struct device *dev, struct device_attribute *attr, if (ret) return ret; + val = clamp_val(val, 0, 127000); mutex_lock(&data->update_lock); data->temp_max[nr] = AUTO_TEMP_MAX_TO_REG(val, data->auto_temp[nr], data->pwm[nr]); @@ -696,7 +698,7 @@ static ssize_t set_temp_min(struct device *dev, struct device_attribute *attr, if (ret) return ret; - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); + val = clamp_val(val, -55000, 127000); mutex_lock(&data->update_lock); data->temp_min[nr] = TEMP_TO_REG(val); adm1031_write_value(client, ADM1031_REG_TEMP_MIN(nr), @@ -717,7 +719,7 @@ static ssize_t set_temp_max(struct device *dev, struct device_attribute *attr, if (ret) return ret; - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); + val = clamp_val(val, -55000, 127000); mutex_lock(&data->update_lock); data->temp_max[nr] = TEMP_TO_REG(val); adm1031_write_value(client, ADM1031_REG_TEMP_MAX(nr), @@ -738,7 +740,7 @@ static ssize_t set_temp_crit(struct device *dev, struct device_attribute *attr, if (ret) return ret; - val = clamp_val(val, -55000, nr == 0 ? 
127750 : 127875); + val = clamp_val(val, -55000, 127000); mutex_lock(&data->update_lock); data->temp_crit[nr] = TEMP_TO_REG(val); adm1031_write_value(client, ADM1031_REG_TEMP_CRIT(nr), diff --git a/drivers/hwmon/adt7470.c b/drivers/hwmon/adt7470.c index 0f4dea5ccf17..9ee3913850d6 100644 --- a/drivers/hwmon/adt7470.c +++ b/drivers/hwmon/adt7470.c @@ -515,7 +515,7 @@ static ssize_t set_temp_min(struct device *dev, return -EINVAL; temp = DIV_ROUND_CLOSEST(temp, 1000); - temp = clamp_val(temp, 0, 255); + temp = clamp_val(temp, -128, 127); mutex_lock(&data->lock); data->temp_min[attr->index] = temp; @@ -549,7 +549,7 @@ static ssize_t set_temp_max(struct device *dev, return -EINVAL; temp = DIV_ROUND_CLOSEST(temp, 1000); - temp = clamp_val(temp, 0, 255); + temp = clamp_val(temp, -128, 127); mutex_lock(&data->lock); data->temp_max[attr->index] = temp; @@ -826,7 +826,7 @@ static ssize_t set_pwm_tmin(struct device *dev, return -EINVAL; temp = DIV_ROUND_CLOSEST(temp, 1000); - temp = clamp_val(temp, 0, 255); + temp = clamp_val(temp, -128, 127); mutex_lock(&data->lock); data->pwm_tmin[attr->index] = temp; diff --git a/drivers/hwmon/amc6821.c b/drivers/hwmon/amc6821.c index eea817296513..9f2be3dd28f3 100644 --- a/drivers/hwmon/amc6821.c +++ b/drivers/hwmon/amc6821.c @@ -704,7 +704,7 @@ static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, get_temp_alarm, NULL, IDX_TEMP1_MAX); static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO, get_temp_alarm, NULL, IDX_TEMP1_CRIT); -static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO | S_IWUSR, +static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, get_temp, NULL, IDX_TEMP2_INPUT); static SENSOR_DEVICE_ATTR(temp2_min, S_IRUGO | S_IWUSR, get_temp, set_temp, IDX_TEMP2_MIN); diff --git a/drivers/hwmon/da9052-hwmon.c b/drivers/hwmon/da9052-hwmon.c index afd31042b452..d14ab3c45daa 100644 --- a/drivers/hwmon/da9052-hwmon.c +++ b/drivers/hwmon/da9052-hwmon.c @@ -194,7 +194,7 @@ static ssize_t da9052_hwmon_show_name(struct device *dev, struct device_attribute *devattr, char *buf) { - return sprintf(buf, "da9052-hwmon\n"); + return sprintf(buf, "da9052\n"); } static ssize_t show_label(struct device *dev, diff --git a/drivers/hwmon/da9055-hwmon.c b/drivers/hwmon/da9055-hwmon.c index 73b3865f1207..35eb7738d711 100644 --- a/drivers/hwmon/da9055-hwmon.c +++ b/drivers/hwmon/da9055-hwmon.c @@ -204,7 +204,7 @@ static ssize_t da9055_hwmon_show_name(struct device *dev, struct device_attribute *devattr, char *buf) { - return sprintf(buf, "da9055-hwmon\n"); + return sprintf(buf, "da9055\n"); } static ssize_t show_label(struct device *dev, diff --git a/drivers/hwmon/emc2103.c b/drivers/hwmon/emc2103.c index fd892dd48e4c..78002de46cb6 100644 --- a/drivers/hwmon/emc2103.c +++ b/drivers/hwmon/emc2103.c @@ -250,9 +250,7 @@ static ssize_t set_temp_min(struct device *dev, struct device_attribute *da, if (result < 0) return result; - val = DIV_ROUND_CLOSEST(val, 1000); - if ((val < -63) || (val > 127)) - return -EINVAL; + val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -63, 127); mutex_lock(&data->update_lock); data->temp_min[nr] = val; @@ -274,9 +272,7 @@ static ssize_t set_temp_max(struct device *dev, struct device_attribute *da, if (result < 0) return result; - val = DIV_ROUND_CLOSEST(val, 1000); - if ((val < -63) || (val > 127)) - return -EINVAL; + val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -63, 127); mutex_lock(&data->update_lock); data->temp_max[nr] = val; @@ -390,15 +386,14 @@ static ssize_t set_fan_target(struct device *dev, struct device_attribute *da, { struct emc2103_data *data = 
emc2103_update_device(dev); struct i2c_client *client = to_i2c_client(dev); - long rpm_target; + unsigned long rpm_target; - int result = kstrtol(buf, 10, &rpm_target); + int result = kstrtoul(buf, 10, &rpm_target); if (result < 0) return result; /* Datasheet states 16384 as maximum RPM target (table 3.2) */ - if ((rpm_target < 0) || (rpm_target > 16384)) - return -EINVAL; + rpm_target = clamp_val(rpm_target, 0, 16384); mutex_lock(&data->update_lock); diff --git a/drivers/hwmon/gpio-fan.c b/drivers/hwmon/gpio-fan.c index ba35e4d530b5..2566c43dd1e9 100644 --- a/drivers/hwmon/gpio-fan.c +++ b/drivers/hwmon/gpio-fan.c @@ -538,7 +538,7 @@ static int gpio_fan_probe(struct platform_device *pdev) /* Make this driver part of hwmon class. */ fan_data->hwmon_dev = hwmon_device_register_with_groups(&pdev->dev, - "gpio-fan", fan_data, + "gpio_fan", fan_data, gpio_fan_groups); if (IS_ERR(fan_data->hwmon_dev)) return PTR_ERR(fan_data->hwmon_dev); diff --git a/drivers/hwmon/ntc_thermistor.c b/drivers/hwmon/ntc_thermistor.c index e76feb86a1d4..ae66f42c4d6d 100644 --- a/drivers/hwmon/ntc_thermistor.c +++ b/drivers/hwmon/ntc_thermistor.c @@ -163,6 +163,18 @@ static int ntc_adc_iio_read(struct ntc_thermistor_platform_data *pdata) } static const struct of_device_id ntc_match[] = { + { .compatible = "murata,ncp15wb473", + .data = &ntc_thermistor_id[0] }, + { .compatible = "murata,ncp18wb473", + .data = &ntc_thermistor_id[1] }, + { .compatible = "murata,ncp21wb473", + .data = &ntc_thermistor_id[2] }, + { .compatible = "murata,ncp03wb473", + .data = &ntc_thermistor_id[3] }, + { .compatible = "murata,ncp15wl333", + .data = &ntc_thermistor_id[4] }, + + /* Usage of vendor name "ntc" is deprecated */ { .compatible = "ntc,ncp15wb473", .data = &ntc_thermistor_id[0] }, { .compatible = "ntc,ncp18wb473", @@ -500,7 +512,7 @@ static int ntc_thermistor_probe(struct platform_device *pdev) } dev_info(&pdev->dev, "Thermistor type: %s successfully probed.\n", - pdev->name); + pdev_id->name); return 0; err_after_sysfs: @@ -534,7 +546,7 @@ static struct platform_driver ntc_thermistor_driver = { module_platform_driver(ntc_thermistor_driver); -MODULE_DESCRIPTION("NTC Thermistor Driver"); +MODULE_DESCRIPTION("NTC Thermistor Driver from Murata"); MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>"); MODULE_LICENSE("GPL"); MODULE_ALIAS("platform:ntc-thermistor"); diff --git a/drivers/hwmon/smsc47m192.c b/drivers/hwmon/smsc47m192.c index efee4c59239f..34b9a601ad07 100644 --- a/drivers/hwmon/smsc47m192.c +++ b/drivers/hwmon/smsc47m192.c @@ -86,7 +86,7 @@ static inline u8 IN_TO_REG(unsigned long val, int n) */ static inline s8 TEMP_TO_REG(int val) { - return clamp_val(SCALE(val, 1, 1000), -128000, 127000); + return SCALE(clamp_val(val, -128000, 127000), 1, 1000); } static inline int TEMP_FROM_REG(s8 val) @@ -384,6 +384,8 @@ static ssize_t set_vrm(struct device *dev, struct device_attribute *attr, err = kstrtoul(buf, 10, &val); if (err) return err; + if (val > 255) + return -EINVAL; data->vrm = val; return count; diff --git a/drivers/hwmon/w83l786ng.c b/drivers/hwmon/w83l786ng.c index 6ed76ceb9270..32487c19cbfc 100644 --- a/drivers/hwmon/w83l786ng.c +++ b/drivers/hwmon/w83l786ng.c @@ -249,7 +249,7 @@ static ssize_t show_##reg(struct device *dev, struct device_attribute *attr, \ int nr = to_sensor_dev_attr(attr)->index; \ struct w83l786ng_data *data = w83l786ng_update_device(dev); \ return sprintf(buf, "%d\n", \ - FAN_FROM_REG(data->fan[nr], DIV_FROM_REG(data->fan_div[nr]))); \ + FAN_FROM_REG(data->reg[nr], 
DIV_FROM_REG(data->fan_div[nr]))); \ } show_fan_reg(fan); diff --git a/drivers/i2c/busses/i2c-sun6i-p2wi.c b/drivers/i2c/busses/i2c-sun6i-p2wi.c index 09de4fd12d57..4d75d4759709 100644 --- a/drivers/i2c/busses/i2c-sun6i-p2wi.c +++ b/drivers/i2c/busses/i2c-sun6i-p2wi.c @@ -22,7 +22,6 @@ * */ #include <linux/clk.h> -#include <linux/module.h> #include <linux/i2c.h> #include <linux/io.h> #include <linux/interrupt.h> diff --git a/drivers/i2c/muxes/Kconfig b/drivers/i2c/muxes/Kconfig index f7f9865b8b89..f6d313e528de 100644 --- a/drivers/i2c/muxes/Kconfig +++ b/drivers/i2c/muxes/Kconfig @@ -40,6 +40,7 @@ config I2C_MUX_PCA9541 config I2C_MUX_PCA954x tristate "Philips PCA954x I2C Mux/switches" + depends on GPIOLIB help If you say yes here you get support for the Philips PCA954x I2C mux/switch devices. diff --git a/drivers/ide/Kconfig b/drivers/ide/Kconfig index 8fb46aab2d87..a04c49f2a011 100644 --- a/drivers/ide/Kconfig +++ b/drivers/ide/Kconfig @@ -416,6 +416,7 @@ config BLK_DEV_CY82C693 config BLK_DEV_CS5520 tristate "Cyrix CS5510/20 MediaGX chipset support (VERY EXPERIMENTAL)" + depends on X86_32 || COMPILE_TEST select BLK_DEV_IDEDMA_PCI help Include support for PIO tuning and virtual DMA on the Cyrix MediaGX @@ -426,6 +427,7 @@ config BLK_DEV_CS5520 config BLK_DEV_CS5530 tristate "Cyrix/National Semiconductor CS5530 MediaGX chipset support" + depends on X86_32 || COMPILE_TEST select BLK_DEV_IDEDMA_PCI help Include support for UDMA on the Cyrix MediaGX 5530 chipset. This @@ -435,7 +437,7 @@ config BLK_DEV_CS5530 config BLK_DEV_CS5535 tristate "AMD CS5535 chipset support" - depends on X86 && !X86_64 + depends on X86_32 select BLK_DEV_IDEDMA_PCI help Include support for UDMA on the NSC/AMD CS5535 companion chipset. @@ -486,6 +488,7 @@ config BLK_DEV_JMICRON config BLK_DEV_SC1200 tristate "National SCx200 chipset support" + depends on X86_32 || COMPILE_TEST select BLK_DEV_IDEDMA_PCI help This driver adds support for the on-board IDE controller on the diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c index 2a744a91370e..a3d3b1733c49 100644 --- a/drivers/ide/ide-probe.c +++ b/drivers/ide/ide-probe.c @@ -853,8 +853,9 @@ static int init_irq (ide_hwif_t *hwif) if (irq_handler == NULL) irq_handler = ide_intr; - if (request_irq(hwif->irq, irq_handler, sa, hwif->name, hwif)) - goto out_up; + if (!host->get_lock) + if (request_irq(hwif->irq, irq_handler, sa, hwif->name, hwif)) + goto out_up; #if !defined(__mc68000__) printk(KERN_INFO "%s at 0x%03lx-0x%03lx,0x%03lx on irq %d", hwif->name, @@ -1533,7 +1534,8 @@ static void ide_unregister(ide_hwif_t *hwif) ide_proc_unregister_port(hwif); - free_irq(hwif->irq, hwif); + if (!hwif->host->get_lock) + free_irq(hwif->irq, hwif); device_unregister(hwif->portdev); device_unregister(&hwif->gendev); diff --git a/drivers/iio/accel/hid-sensor-accel-3d.c b/drivers/iio/accel/hid-sensor-accel-3d.c index 69abf9163df7..54e464e4bb72 100644 --- a/drivers/iio/accel/hid-sensor-accel-3d.c +++ b/drivers/iio/accel/hid-sensor-accel-3d.c @@ -110,7 +110,6 @@ static int accel_3d_read_raw(struct iio_dev *indio_dev, struct accel_3d_state *accel_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -151,14 +150,12 @@ static int accel_3d_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &accel_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case 
IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &accel_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c index 17aeea170566..2a5fa9a436e5 100644 --- a/drivers/iio/accel/mma8452.c +++ b/drivers/iio/accel/mma8452.c @@ -111,8 +111,14 @@ static const int mma8452_samp_freq[8][2] = { {6, 250000}, {1, 560000} }; +/* + * Hardware has fullscale of -2G, -4G, -8G corresponding to raw value -2048 + * The userspace interface uses m/s^2 and we declare micro units + * So scale factor is given by: + * g * N * 1000000 / 2048 for N = 2, 4, 8 and g=9.80665 + */ static const int mma8452_scales[3][2] = { - {0, 977}, {0, 1953}, {0, 3906} + {0, 9577}, {0, 19154}, {0, 38307} }; static ssize_t mma8452_show_samp_freq_avail(struct device *dev, diff --git a/drivers/iio/adc/ad799x.c b/drivers/iio/adc/ad799x.c index 39b4cb48d738..6eba301ee03d 100644 --- a/drivers/iio/adc/ad799x.c +++ b/drivers/iio/adc/ad799x.c @@ -427,9 +427,12 @@ static int ad799x_write_event_value(struct iio_dev *indio_dev, int ret; struct ad799x_state *st = iio_priv(indio_dev); + if (val < 0 || val > RES_MASK(chan->scan_type.realbits)) + return -EINVAL; + mutex_lock(&indio_dev->mlock); ret = ad799x_i2c_write16(st, ad799x_threshold_reg(chan, dir, info), - val); + val << chan->scan_type.shift); mutex_unlock(&indio_dev->mlock); return ret; @@ -452,7 +455,8 @@ static int ad799x_read_event_value(struct iio_dev *indio_dev, mutex_unlock(&indio_dev->mlock); if (ret < 0) return ret; - *val = valin; + *val = (valin >> chan->scan_type.shift) & + RES_MASK(chan->scan_type.realbits); return IIO_VAL_INT; } diff --git a/drivers/iio/adc/ti_am335x_adc.c b/drivers/iio/adc/ti_am335x_adc.c index a4db3026bec6..d5dc4c6ce86c 100644 --- a/drivers/iio/adc/ti_am335x_adc.c +++ b/drivers/iio/adc/ti_am335x_adc.c @@ -374,7 +374,7 @@ static int tiadc_read_raw(struct iio_dev *indio_dev, return -EAGAIN; } } - map_val = chan->channel + TOTAL_CHANNELS; + map_val = adc_dev->channel_step[chan->scan_index]; /* * We check the complete FIFO. 
We programmed just one entry but in case diff --git a/drivers/iio/gyro/hid-sensor-gyro-3d.c b/drivers/iio/gyro/hid-sensor-gyro-3d.c index 40f4e4935d0d..fa034a3dad78 100644 --- a/drivers/iio/gyro/hid-sensor-gyro-3d.c +++ b/drivers/iio/gyro/hid-sensor-gyro-3d.c @@ -110,7 +110,6 @@ static int gyro_3d_read_raw(struct iio_dev *indio_dev, struct gyro_3d_state *gyro_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -151,14 +150,12 @@ static int gyro_3d_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &gyro_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &gyro_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/iio/industrialio-event.c b/drivers/iio/industrialio-event.c index 258a973a1fb8..bfbf4d419f41 100644 --- a/drivers/iio/industrialio-event.c +++ b/drivers/iio/industrialio-event.c @@ -345,6 +345,9 @@ static int iio_device_add_event(struct iio_dev *indio_dev, &indio_dev->event_interface->dev_attr_list); kfree(postfix); + if ((ret == -EBUSY) && (shared_by != IIO_SEPARATE)) + continue; + if (ret) return ret; diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c index d833d55052ea..c7497009d60a 100644 --- a/drivers/iio/inkern.c +++ b/drivers/iio/inkern.c @@ -183,7 +183,7 @@ static struct iio_channel *of_iio_channel_get_by_name(struct device_node *np, else if (name && index >= 0) { pr_err("ERROR: could not get IIO channel %s:%s(%i)\n", np->full_name, name ? 
name : "", index); - return chan; + return NULL; } /* @@ -193,8 +193,9 @@ static struct iio_channel *of_iio_channel_get_by_name(struct device_node *np, */ np = np->parent; if (np && !of_get_property(np, "io-channel-ranges", NULL)) - break; + return NULL; } + return chan; } @@ -317,6 +318,7 @@ struct iio_channel *iio_channel_get(struct device *dev, if (channel != NULL) return channel; } + return iio_channel_get_sys(name, channel_name); } EXPORT_SYMBOL_GPL(iio_channel_get); diff --git a/drivers/iio/light/hid-sensor-als.c b/drivers/iio/light/hid-sensor-als.c index f34c94380b41..96e71e103ea7 100644 --- a/drivers/iio/light/hid-sensor-als.c +++ b/drivers/iio/light/hid-sensor-als.c @@ -79,7 +79,6 @@ static int als_read_raw(struct iio_dev *indio_dev, struct als_state *als_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -129,14 +128,12 @@ static int als_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &als_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &als_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/iio/light/hid-sensor-prox.c b/drivers/iio/light/hid-sensor-prox.c index d203ef4d892f..412bae86d6ae 100644 --- a/drivers/iio/light/hid-sensor-prox.c +++ b/drivers/iio/light/hid-sensor-prox.c @@ -74,7 +74,6 @@ static int prox_read_raw(struct iio_dev *indio_dev, struct prox_state *prox_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -125,14 +124,12 @@ static int prox_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &prox_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &prox_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/iio/light/tcs3472.c b/drivers/iio/light/tcs3472.c index fe063a0a21cd..752569985d1d 100644 --- a/drivers/iio/light/tcs3472.c +++ b/drivers/iio/light/tcs3472.c @@ -52,6 +52,7 @@ struct tcs3472_data { struct i2c_client *client; + struct mutex lock; u8 enable; u8 control; u8 atime; @@ -116,10 +117,17 @@ static int tcs3472_read_raw(struct iio_dev *indio_dev, switch (mask) { case IIO_CHAN_INFO_RAW: + if (iio_buffer_enabled(indio_dev)) + return -EBUSY; + + mutex_lock(&data->lock); ret = tcs3472_req_data(data); - if (ret < 0) + if (ret < 0) { + mutex_unlock(&data->lock); return ret; + } ret = i2c_smbus_read_word_data(data->client, chan->address); + mutex_unlock(&data->lock); if (ret < 0) return ret; *val = ret; @@ -255,6 +263,7 @@ static int tcs3472_probe(struct i2c_client *client, data = iio_priv(indio_dev); i2c_set_clientdata(client, indio_dev); data->client = client; + mutex_init(&data->lock); indio_dev->dev.parent = &client->dev; indio_dev->info = &tcs3472_info; diff --git a/drivers/iio/magnetometer/hid-sensor-magn-3d.c b/drivers/iio/magnetometer/hid-sensor-magn-3d.c index 41cf29e2a371..b2b0937d5133 100644 --- 
a/drivers/iio/magnetometer/hid-sensor-magn-3d.c +++ b/drivers/iio/magnetometer/hid-sensor-magn-3d.c @@ -110,7 +110,6 @@ static int magn_3d_read_raw(struct iio_dev *indio_dev, struct magn_3d_state *magn_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -153,14 +152,12 @@ static int magn_3d_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &magn_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &magn_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/iio/pressure/hid-sensor-press.c b/drivers/iio/pressure/hid-sensor-press.c index 1cd190c73788..2c0d2a4fed8c 100644 --- a/drivers/iio/pressure/hid-sensor-press.c +++ b/drivers/iio/pressure/hid-sensor-press.c @@ -78,7 +78,6 @@ static int press_read_raw(struct iio_dev *indio_dev, struct press_state *press_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -128,14 +127,12 @@ static int press_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &press_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &press_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c index 5e153f6d4b48..768a0fb67dd6 100644 --- a/drivers/infiniband/hw/cxgb4/cm.c +++ b/drivers/infiniband/hw/cxgb4/cm.c @@ -432,8 +432,17 @@ static void arp_failure_discard(void *handle, struct sk_buff *skb) */ static void act_open_req_arp_failure(void *handle, struct sk_buff *skb) { + struct c4iw_ep *ep = handle; + printk(KERN_ERR MOD "ARP failure duing connect\n"); kfree_skb(skb); + connect_reply_upcall(ep, -EHOSTUNREACH); + state_set(&ep->com, DEAD); + remove_handle(ep->com.dev, &ep->com.dev->atid_idr, ep->atid); + cxgb4_free_atid(ep->com.dev->rdev.lldi.tids, ep->atid); + dst_release(ep->dst); + cxgb4_l2t_release(ep->l2t); + c4iw_put_ep(&ep->com); } /* @@ -658,7 +667,7 @@ static int send_connect(struct c4iw_ep *ep) opt2 |= T5_OPT_2_VALID; opt2 |= V_CONG_CNTRL(CONG_ALG_TAHOE); } - t4_set_arp_err_handler(skb, NULL, act_open_req_arp_failure); + t4_set_arp_err_handler(skb, ep, act_open_req_arp_failure); if (is_t4(ep->com.dev->rdev.lldi.adapter_type)) { if (ep->com.remote_addr.ss_family == AF_INET) { @@ -2180,7 +2189,6 @@ static void reject_cr(struct c4iw_dev *dev, u32 hwtid, struct sk_buff *skb) PDBG("%s c4iw_dev %p tid %u\n", __func__, dev, hwtid); BUG_ON(skb_cloned(skb)); skb_trim(skb, sizeof(struct cpl_tid_release)); - skb_get(skb); release_tid(&dev->rdev, hwtid, skb); return; } @@ -3917,7 +3925,7 @@ int __init c4iw_cm_init(void) return 0; } -void __exit c4iw_cm_term(void) +void c4iw_cm_term(void) { WARN_ON(!list_empty(&timeout_list)); flush_workqueue(workq); diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c index dd93aadc996e..7db82b24302b 100644 --- 
a/drivers/infiniband/hw/cxgb4/device.c +++ b/drivers/infiniband/hw/cxgb4/device.c @@ -696,6 +696,7 @@ static int c4iw_rdev_open(struct c4iw_rdev *rdev) pr_err(MOD "error allocating status page\n"); goto err4; } + rdev->status_page->db_off = 0; return 0; err4: c4iw_rqtpool_destroy(rdev); @@ -729,7 +730,6 @@ static void c4iw_dealloc(struct uld_ctx *ctx) if (ctx->dev->rdev.oc_mw_kva) iounmap(ctx->dev->rdev.oc_mw_kva); ib_dealloc_device(&ctx->dev->ibdev); - iwpm_exit(RDMA_NL_C4IW); ctx->dev = NULL; } @@ -826,12 +826,6 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop) setup_debugfs(devp); } - ret = iwpm_init(RDMA_NL_C4IW); - if (ret) { - pr_err("port mapper initialization failed with %d\n", ret); - ib_dealloc_device(&devp->ibdev); - return ERR_PTR(ret); - } return devp; } @@ -1332,6 +1326,15 @@ static int __init c4iw_init_module(void) pr_err("%s[%u]: Failed to add netlink callback\n" , __func__, __LINE__); + err = iwpm_init(RDMA_NL_C4IW); + if (err) { + pr_err("port mapper initialization failed with %d\n", err); + ibnl_remove_client(RDMA_NL_C4IW); + c4iw_cm_term(); + debugfs_remove_recursive(c4iw_debugfs_root); + return err; + } + cxgb4_register_uld(CXGB4_ULD_RDMA, &c4iw_uld_info); return 0; @@ -1349,6 +1352,7 @@ static void __exit c4iw_exit_module(void) } mutex_unlock(&dev_mutex); cxgb4_unregister_uld(CXGB4_ULD_RDMA); + iwpm_exit(RDMA_NL_C4IW); ibnl_remove_client(RDMA_NL_C4IW); c4iw_cm_term(); debugfs_remove_recursive(c4iw_debugfs_root); diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h index 125bc5d1e175..361fff7a0742 100644 --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h @@ -908,7 +908,7 @@ int c4iw_destroy_ctrl_qp(struct c4iw_rdev *rdev); int c4iw_register_device(struct c4iw_dev *dev); void c4iw_unregister_device(struct c4iw_dev *dev); int __init c4iw_cm_init(void); -void __exit c4iw_cm_term(void); +void c4iw_cm_term(void); void c4iw_release_dev_ucontext(struct c4iw_rdev *rdev, struct c4iw_dev_ucontext *uctx); void c4iw_init_dev_ucontext(struct c4iw_rdev *rdev, diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index d13ddf1c0033..bbbcf389272c 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -675,7 +675,7 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev, int err; uuari = &dev->mdev.priv.uuari; - if (init_attr->create_flags & ~IB_QP_CREATE_SIGNATURE_EN) + if (init_attr->create_flags & ~(IB_QP_CREATE_SIGNATURE_EN | IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK)) return -EINVAL; if (init_attr->qp_type == MLX5_IB_QPT_REG_UMR) diff --git a/drivers/input/input.c b/drivers/input/input.c index 1c4c0db05550..29ca0bb4f561 100644 --- a/drivers/input/input.c +++ b/drivers/input/input.c @@ -257,9 +257,10 @@ static int input_handle_abs_event(struct input_dev *dev, } static int input_get_disposition(struct input_dev *dev, - unsigned int type, unsigned int code, int value) + unsigned int type, unsigned int code, int *pval) { int disposition = INPUT_IGNORE_EVENT; + int value = *pval; switch (type) { @@ -357,6 +358,7 @@ static int input_get_disposition(struct input_dev *dev, break; } + *pval = value; return disposition; } @@ -365,7 +367,7 @@ static void input_handle_event(struct input_dev *dev, { int disposition; - disposition = input_get_disposition(dev, type, code, value); + disposition = input_get_disposition(dev, type, code, &value); if ((disposition & INPUT_PASS_TO_DEVICE) && dev->event) dev->event(dev, type, code, value); diff --git 
a/drivers/input/keyboard/st-keyscan.c b/drivers/input/keyboard/st-keyscan.c index 758b48731415..de7be4f03d91 100644 --- a/drivers/input/keyboard/st-keyscan.c +++ b/drivers/input/keyboard/st-keyscan.c @@ -215,6 +215,7 @@ static int keyscan_probe(struct platform_device *pdev) return 0; } +#ifdef CONFIG_PM_SLEEP static int keyscan_suspend(struct device *dev) { struct platform_device *pdev = to_platform_device(dev); @@ -249,6 +250,7 @@ static int keyscan_resume(struct device *dev) mutex_unlock(&input->mutex); return retval; } +#endif static SIMPLE_DEV_PM_OPS(keyscan_dev_pm_ops, keyscan_suspend, keyscan_resume); diff --git a/drivers/input/misc/sirfsoc-onkey.c b/drivers/input/misc/sirfsoc-onkey.c index e4104f9b2e6d..fed5102e1802 100644 --- a/drivers/input/misc/sirfsoc-onkey.c +++ b/drivers/input/misc/sirfsoc-onkey.c @@ -213,7 +213,7 @@ static struct platform_driver sirfsoc_pwrc_driver = { module_platform_driver(sirfsoc_pwrc_driver); -MODULE_LICENSE("GPLv2"); +MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Binghua Duan <Binghua.Duan@csr.com>, Xianglong Du <Xianglong.Du@csr.com>"); MODULE_DESCRIPTION("CSR Prima2 PWRC Driver"); MODULE_ALIAS("platform:sirfsoc-pwrc"); diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c index ec772d962f06..ef9e0b8a9aa7 100644 --- a/drivers/input/mouse/synaptics.c +++ b/drivers/input/mouse/synaptics.c @@ -132,7 +132,8 @@ static const struct min_max_quirk min_max_pnpid_table[] = { 1232, 5710, 1156, 4696 }, { - (const char * const []){"LEN0034", "LEN0036", "LEN2004", NULL}, + (const char * const []){"LEN0034", "LEN0036", "LEN2002", + "LEN2004", NULL}, 1024, 5112, 2024, 4832 }, { @@ -168,7 +169,7 @@ static const char * const topbuttonpad_pnp_ids[] = { "LEN0049", "LEN2000", "LEN2001", /* Edge E431 */ - "LEN2002", + "LEN2002", /* Edge E531 */ "LEN2003", "LEN2004", /* L440 */ "LEN2005", diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h index 381b20d4c561..136b7b204f56 100644 --- a/drivers/input/serio/i8042-x86ia64io.h +++ b/drivers/input/serio/i8042-x86ia64io.h @@ -402,6 +402,13 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = { }, }, { + /* Acer Aspire 5710 */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), + DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5710"), + }, + }, + { /* Gericom Bellagio */ .matches = { DMI_MATCH(DMI_SYS_VENDOR, "Gericom"), diff --git a/drivers/input/tablet/wacom_wac.c b/drivers/input/tablet/wacom_wac.c index 977d05cd9e2e..e73cf2c71f35 100644 --- a/drivers/input/tablet/wacom_wac.c +++ b/drivers/input/tablet/wacom_wac.c @@ -1217,9 +1217,9 @@ static void wacom_bpt3_touch_msg(struct wacom_wac *wacom, unsigned char *data) * a=(pi*r^2)/C. 
*/ int a = data[5]; - int x_res = input_abs_get_res(input, ABS_X); - int y_res = input_abs_get_res(input, ABS_Y); - width = 2 * int_sqrt(a * WACOM_CONTACT_AREA_SCALE); + int x_res = input_abs_get_res(input, ABS_MT_POSITION_X); + int y_res = input_abs_get_res(input, ABS_MT_POSITION_Y); + width = 2 * int_sqrt(a * WACOM_CONTACT_AREA_SCALE); height = width * y_res / x_res; } @@ -1587,7 +1587,7 @@ static void wacom_abs_set_axis(struct input_dev *input_dev, input_abs_set_res(input_dev, ABS_X, features->x_resolution); input_abs_set_res(input_dev, ABS_Y, features->y_resolution); } else { - if (features->touch_max <= 2) { + if (features->touch_max == 1) { input_set_abs_params(input_dev, ABS_X, 0, features->x_max, features->x_fuzz, 0); input_set_abs_params(input_dev, ABS_Y, 0, @@ -1815,14 +1815,8 @@ int wacom_setup_input_capabilities(struct input_dev *input_dev, case MTTPC: case MTTPC_B: case TABLETPC2FG: - if (features->device_type == BTN_TOOL_FINGER) { - unsigned int flags = INPUT_MT_DIRECT; - - if (wacom_wac->features.type == TABLETPC2FG) - flags = 0; - - input_mt_init_slots(input_dev, features->touch_max, flags); - } + if (features->device_type == BTN_TOOL_FINGER && features->touch_max > 1) + input_mt_init_slots(input_dev, features->touch_max, INPUT_MT_DIRECT); /* fall through */ case TABLETPC: @@ -1883,10 +1877,6 @@ int wacom_setup_input_capabilities(struct input_dev *input_dev, __set_bit(BTN_RIGHT, input_dev->keybit); if (features->touch_max) { - /* touch interface */ - unsigned int flags = INPUT_MT_POINTER; - - __set_bit(INPUT_PROP_POINTER, input_dev->propbit); if (features->pktlen == WACOM_PKGLEN_BBTOUCH3) { input_set_abs_params(input_dev, ABS_MT_TOUCH_MAJOR, @@ -1894,12 +1884,8 @@ int wacom_setup_input_capabilities(struct input_dev *input_dev, input_set_abs_params(input_dev, ABS_MT_TOUCH_MINOR, 0, features->y_max, 0, 0); - } else { - __set_bit(BTN_TOOL_FINGER, input_dev->keybit); - __set_bit(BTN_TOOL_DOUBLETAP, input_dev->keybit); - flags = 0; } - input_mt_init_slots(input_dev, features->touch_max, flags); + input_mt_init_slots(input_dev, features->touch_max, INPUT_MT_POINTER); } else { /* buttons/keys only interface */ __clear_bit(ABS_X, input_dev->absbit); diff --git a/drivers/input/touchscreen/ti_am335x_tsc.c b/drivers/input/touchscreen/ti_am335x_tsc.c index 4e793a17361f..2ce649520fe0 100644 --- a/drivers/input/touchscreen/ti_am335x_tsc.c +++ b/drivers/input/touchscreen/ti_am335x_tsc.c @@ -359,9 +359,12 @@ static int titsc_parse_dt(struct platform_device *pdev, */ err = of_property_read_u32(node, "ti,coordinate-readouts", &ts_dev->coordinate_readouts); - if (err < 0) + if (err < 0) { + dev_warn(&pdev->dev, "please use 'ti,coordinate-readouts' instead\n"); err = of_property_read_u32(node, "ti,coordiante-readouts", &ts_dev->coordinate_readouts); + } + if (err < 0) return err; diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c index d4daa05efe60..499b4366a98d 100644 --- a/drivers/iommu/amd_iommu_v2.c +++ b/drivers/iommu/amd_iommu_v2.c @@ -45,7 +45,7 @@ struct pri_queue { struct pasid_state { struct list_head list; /* For global state-list */ atomic_t count; /* Reference count */ - atomic_t mmu_notifier_count; /* Counting nested mmu_notifier + unsigned mmu_notifier_count; /* Counting nested mmu_notifier calls */ struct task_struct *task; /* Task bound to this PASID */ struct mm_struct *mm; /* mm_struct for the faults */ @@ -53,7 +53,8 @@ struct pasid_state { struct pri_queue pri[PRI_QUEUE_SIZE]; /* PRI tag states */ struct device_state *device_state; /* Link to our 
device_state */ int pasid; /* PASID index */ - spinlock_t lock; /* Protect pri_queues */ + spinlock_t lock; /* Protect pri_queues and + mmu_notifer_count */ wait_queue_head_t wq; /* To wait for count == 0 */ }; @@ -431,15 +432,19 @@ static void mn_invalidate_range_start(struct mmu_notifier *mn, { struct pasid_state *pasid_state; struct device_state *dev_state; + unsigned long flags; pasid_state = mn_to_state(mn); dev_state = pasid_state->device_state; - if (atomic_add_return(1, &pasid_state->mmu_notifier_count) == 1) { + spin_lock_irqsave(&pasid_state->lock, flags); + if (pasid_state->mmu_notifier_count == 0) { amd_iommu_domain_set_gcr3(dev_state->domain, pasid_state->pasid, __pa(empty_page_table)); } + pasid_state->mmu_notifier_count += 1; + spin_unlock_irqrestore(&pasid_state->lock, flags); } static void mn_invalidate_range_end(struct mmu_notifier *mn, @@ -448,15 +453,19 @@ static void mn_invalidate_range_end(struct mmu_notifier *mn, { struct pasid_state *pasid_state; struct device_state *dev_state; + unsigned long flags; pasid_state = mn_to_state(mn); dev_state = pasid_state->device_state; - if (atomic_dec_and_test(&pasid_state->mmu_notifier_count)) { + spin_lock_irqsave(&pasid_state->lock, flags); + pasid_state->mmu_notifier_count -= 1; + if (pasid_state->mmu_notifier_count == 0) { amd_iommu_domain_set_gcr3(dev_state->domain, pasid_state->pasid, __pa(pasid_state->mm->pgd)); } + spin_unlock_irqrestore(&pasid_state->lock, flags); } static void mn_release(struct mmu_notifier *mn, struct mm_struct *mm) @@ -650,7 +659,6 @@ int amd_iommu_bind_pasid(struct pci_dev *pdev, int pasid, goto out; atomic_set(&pasid_state->count, 1); - atomic_set(&pasid_state->mmu_notifier_count, 0); init_waitqueue_head(&pasid_state->wq); spin_lock_init(&pasid_state->lock); diff --git a/drivers/iommu/fsl_pamu.c b/drivers/iommu/fsl_pamu.c index b99dd88e31b9..bb446d742a2d 100644 --- a/drivers/iommu/fsl_pamu.c +++ b/drivers/iommu/fsl_pamu.c @@ -170,10 +170,10 @@ int pamu_disable_liodn(int liodn) static unsigned int map_addrspace_size_to_wse(phys_addr_t addrspace_size) { /* Bug if not a power of 2 */ - BUG_ON(!is_power_of_2(addrspace_size)); + BUG_ON((addrspace_size & (addrspace_size - 1))); /* window size is 2^(WSE+1) bytes */ - return __ffs(addrspace_size) - 1; + return fls64(addrspace_size) - 2; } /* Derive the PAACE window count encoding for the subwindow count */ @@ -351,7 +351,7 @@ int pamu_config_ppaace(int liodn, phys_addr_t win_addr, phys_addr_t win_size, struct paace *ppaace; unsigned long fspi; - if (!is_power_of_2(win_size) || win_size < PAMU_PAGE_SIZE) { + if ((win_size & (win_size - 1)) || win_size < PAMU_PAGE_SIZE) { pr_debug("window size too small or not a power of two %llx\n", win_size); return -EINVAL; } @@ -464,7 +464,7 @@ int pamu_config_spaace(int liodn, u32 subwin_cnt, u32 subwin, return -ENOENT; } - if (!is_power_of_2(subwin_size) || subwin_size < PAMU_PAGE_SIZE) { + if ((subwin_size & (subwin_size - 1)) || subwin_size < PAMU_PAGE_SIZE) { pr_debug("subwindow size out of range, or not a power of 2\n"); return -EINVAL; } diff --git a/drivers/iommu/fsl_pamu_domain.c b/drivers/iommu/fsl_pamu_domain.c index 93072ba44b1d..af47648301a9 100644 --- a/drivers/iommu/fsl_pamu_domain.c +++ b/drivers/iommu/fsl_pamu_domain.c @@ -301,7 +301,7 @@ static int check_size(u64 size, dma_addr_t iova) * Size must be a power of two and at least be equal * to PAMU page size. 
*/ - if (!is_power_of_2(size) || size < PAMU_PAGE_SIZE) { + if ((size & (size - 1)) || size < PAMU_PAGE_SIZE) { pr_debug("%s: size too small or not a power of two\n", __func__); return -EINVAL; } @@ -335,11 +335,6 @@ static struct fsl_dma_domain *iommu_alloc_dma_domain(void) return domain; } -static inline struct device_domain_info *find_domain(struct device *dev) -{ - return dev->archdata.iommu_domain; -} - static void remove_device_ref(struct device_domain_info *info, u32 win_cnt) { unsigned long flags; @@ -380,7 +375,7 @@ static void attach_device(struct fsl_dma_domain *dma_domain, int liodn, struct d * Check here if the device is already attached to domain or not. * If the device is already attached to a domain detach it. */ - old_domain_info = find_domain(dev); + old_domain_info = dev->archdata.iommu_domain; if (old_domain_info && old_domain_info->domain != dma_domain) { spin_unlock_irqrestore(&device_domain_lock, flags); detach_device(dev, old_domain_info->domain); @@ -399,7 +394,7 @@ static void attach_device(struct fsl_dma_domain *dma_domain, int liodn, struct d * the info for the first LIODN as all * LIODNs share the same domain */ - if (!old_domain_info) + if (!dev->archdata.iommu_domain) dev->archdata.iommu_domain = info; spin_unlock_irqrestore(&device_domain_lock, flags); @@ -1042,12 +1037,15 @@ root_bus: group = get_shared_pci_device_group(pdev); } + if (!group) + group = ERR_PTR(-ENODEV); + return group; } static int fsl_pamu_add_device(struct device *dev) { - struct iommu_group *group = NULL; + struct iommu_group *group = ERR_PTR(-ENODEV); struct pci_dev *pdev; const u32 *prop; int ret, len; @@ -1070,7 +1068,7 @@ static int fsl_pamu_add_device(struct device *dev) group = get_device_iommu_group(dev); } - if (!group || IS_ERR(group)) + if (IS_ERR(group)) return PTR_ERR(group); ret = iommu_group_add_device(group, dev); diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c index 6bb32773c3ac..51b6b77dc3e5 100644 --- a/drivers/iommu/intel-iommu.c +++ b/drivers/iommu/intel-iommu.c @@ -3816,14 +3816,11 @@ int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info) ((void *)rmrr) + rmrr->header.length, rmrr->segment, rmrru->devices, rmrru->devices_cnt); - if (ret > 0) - break; - else if(ret < 0) + if(ret < 0) return ret; } else if (info->event == BUS_NOTIFY_DEL_DEVICE) { - if (dmar_remove_dev_scope(info, rmrr->segment, - rmrru->devices, rmrru->devices_cnt)) - break; + dmar_remove_dev_scope(info, rmrr->segment, + rmrru->devices, rmrru->devices_cnt); } } diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c index c887e6eebc41..574aba0eba4e 100644 --- a/drivers/irqchip/irq-armada-370-xp.c +++ b/drivers/irqchip/irq-armada-370-xp.c @@ -334,6 +334,15 @@ static void armada_mpic_send_doorbell(const struct cpumask *mask, static void armada_xp_mpic_smp_cpu_init(void) { + u32 control; + int nr_irqs, i; + + control = readl(main_int_base + ARMADA_370_XP_INT_CONTROL); + nr_irqs = (control >> 2) & 0x3ff; + + for (i = 0; i < nr_irqs; i++) + writel(i, per_cpu_int_base + ARMADA_370_XP_INT_SET_MASK_OFFS); + /* Clear pending IPIs */ writel(0, per_cpu_int_base + ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS); @@ -474,7 +483,7 @@ static int __init armada_370_xp_mpic_of_init(struct device_node *node, struct device_node *parent) { struct resource main_int_res, per_cpu_int_res; - int parent_irq; + int parent_irq, nr_irqs, i; u32 control; BUG_ON(of_address_to_resource(node, 0, &main_int_res)); @@ -496,9 +505,13 @@ static int __init 
armada_370_xp_mpic_of_init(struct device_node *node, BUG_ON(!per_cpu_int_base); control = readl(main_int_base + ARMADA_370_XP_INT_CONTROL); + nr_irqs = (control >> 2) & 0x3ff; + + for (i = 0; i < nr_irqs; i++) + writel(i, main_int_base + ARMADA_370_XP_INT_CLEAR_ENABLE_OFFS); armada_370_xp_mpic_domain = - irq_domain_add_linear(node, (control >> 2) & 0x3ff, + irq_domain_add_linear(node, nr_irqs, &armada_370_xp_mpic_irq_ops, NULL); BUG_ON(!armada_370_xp_mpic_domain); diff --git a/drivers/irqchip/irq-brcmstb-l2.c b/drivers/irqchip/irq-brcmstb-l2.c index 8ee2a36d5840..c15c840987d2 100644 --- a/drivers/irqchip/irq-brcmstb-l2.c +++ b/drivers/irqchip/irq-brcmstb-l2.c @@ -150,7 +150,7 @@ int __init brcmstb_l2_intc_of_init(struct device_node *np, /* Allocate a single Generic IRQ chip for this node */ ret = irq_alloc_domain_generic_chips(data->domain, 32, 1, - np->full_name, handle_level_irq, clr, 0, 0); + np->full_name, handle_edge_irq, clr, 0, 0); if (ret) { pr_err("failed to allocate generic irq chip\n"); goto out_free_domain; diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c index 7e11c9d6ae8c..7c131cf7cc13 100644 --- a/drivers/irqchip/irq-gic.c +++ b/drivers/irqchip/irq-gic.c @@ -42,6 +42,7 @@ #include <linux/irqchip/chained_irq.h> #include <linux/irqchip/arm-gic.h> +#include <asm/cputype.h> #include <asm/irq.h> #include <asm/exception.h> #include <asm/smp_plat.h> @@ -954,7 +955,9 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start, } for_each_possible_cpu(cpu) { - unsigned long offset = percpu_offset * cpu_logical_map(cpu); + u32 mpidr = cpu_logical_map(cpu); + u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); + unsigned long offset = percpu_offset * core_id; *per_cpu_ptr(gic->dist_base.percpu_base, cpu) = dist_base + offset; *per_cpu_ptr(gic->cpu_base.percpu_base, cpu) = cpu_base + offset; } @@ -1071,8 +1074,10 @@ gic_of_init(struct device_node *node, struct device_node *parent) gic_cnt++; return 0; } +IRQCHIP_DECLARE(gic_400, "arm,gic-400", gic_of_init); IRQCHIP_DECLARE(cortex_a15_gic, "arm,cortex-a15-gic", gic_of_init); IRQCHIP_DECLARE(cortex_a9_gic, "arm,cortex-a9-gic", gic_of_init); +IRQCHIP_DECLARE(cortex_a7_gic, "arm,cortex-a7-gic", gic_of_init); IRQCHIP_DECLARE(msm_8660_qgic, "qcom,msm-8660-qgic", gic_of_init); IRQCHIP_DECLARE(msm_qgic2, "qcom,msm-qgic2", gic_of_init); diff --git a/drivers/irqchip/spear-shirq.c b/drivers/irqchip/spear-shirq.c index 3fdda3a40269..6ce6bd3441bf 100644 --- a/drivers/irqchip/spear-shirq.c +++ b/drivers/irqchip/spear-shirq.c @@ -125,7 +125,7 @@ static struct spear_shirq spear320_shirq_ras2 = { }; static struct spear_shirq spear320_shirq_ras3 = { - .irq_nr = 3, + .irq_nr = 7, .irq_bit_off = 0, .invalid_irq = 1, .regs = { diff --git a/drivers/isdn/hisax/Kconfig b/drivers/isdn/hisax/Kconfig index d9edcc94c2a8..97465ac5a2d5 100644 --- a/drivers/isdn/hisax/Kconfig +++ b/drivers/isdn/hisax/Kconfig @@ -16,7 +16,7 @@ config ISDN_DRV_HISAX also to the configuration option of the driver for your particular card, below. -if ISDN_DRV_HISAX!=n +if ISDN_DRV_HISAX comment "D-channel protocol features" @@ -348,10 +348,6 @@ config HISAX_ENTERNOW_PCI This enables HiSax support for the Formula-n enter:now PCI ISDN card. 
-endif - -if ISDN_DRV_HISAX - config HISAX_DEBUG bool "HiSax debugging" help @@ -420,11 +416,6 @@ config HISAX_FRITZ_PCIPNP (the latter also needs you to select "ISA Plug and Play support" from the menu "Plug and Play configuration") -config HISAX_AVM_A1_PCMCIA - bool - depends on HISAX_AVM_A1_CS - default y - endif endmenu diff --git a/drivers/isdn/hisax/l3ni1.c b/drivers/isdn/hisax/l3ni1.c index 0df6691d045c..8dc791bfaa6f 100644 --- a/drivers/isdn/hisax/l3ni1.c +++ b/drivers/isdn/hisax/l3ni1.c @@ -2059,13 +2059,17 @@ static int l3ni1_cmd_global(struct PStack *st, isdn_ctrl *ic) memcpy(p, ic->parm.ni1_io.data, ic->parm.ni1_io.datalen); /* copy data */ l = (p - temp) + ic->parm.ni1_io.datalen; /* total length */ - if (ic->parm.ni1_io.timeout > 0) - if (!(pc = ni1_new_l3_process(st, -1))) - { free_invoke_id(st, id); + if (ic->parm.ni1_io.timeout > 0) { + pc = ni1_new_l3_process(st, -1); + if (!pc) { + free_invoke_id(st, id); return (-2); } - pc->prot.ni1.ll_id = ic->parm.ni1_io.ll_id; /* remember id */ - pc->prot.ni1.proc = ic->parm.ni1_io.proc; /* and procedure */ + /* remember id */ + pc->prot.ni1.ll_id = ic->parm.ni1_io.ll_id; + /* and procedure */ + pc->prot.ni1.proc = ic->parm.ni1_io.proc; + } if (!(skb = l3_alloc_skb(l))) { free_invoke_id(st, id); diff --git a/drivers/isdn/i4l/isdn_ppp.c b/drivers/isdn/i4l/isdn_ppp.c index 61ac63237446..62f0688d45a5 100644 --- a/drivers/isdn/i4l/isdn_ppp.c +++ b/drivers/isdn/i4l/isdn_ppp.c @@ -442,7 +442,7 @@ static int get_filter(void __user *arg, struct sock_filter **p) { struct sock_fprog uprog; struct sock_filter *code = NULL; - int len, err; + int len; if (copy_from_user(&uprog, arg, sizeof(uprog))) return -EFAULT; @@ -458,12 +458,6 @@ static int get_filter(void __user *arg, struct sock_filter **p) if (IS_ERR(code)) return PTR_ERR(code); - err = sk_chk_filter(code, uprog.len); - if (err) { - kfree(code); - return err; - } - *p = code; return uprog.len; } @@ -644,9 +638,15 @@ isdn_ppp_ioctl(int min, struct file *file, unsigned int cmd, unsigned long arg) fprog.len = len; fprog.filter = code; - if (is->pass_filter) + if (is->pass_filter) { sk_unattached_filter_destroy(is->pass_filter); - err = sk_unattached_filter_create(&is->pass_filter, &fprog); + is->pass_filter = NULL; + } + if (fprog.filter != NULL) + err = sk_unattached_filter_create(&is->pass_filter, + &fprog); + else + err = 0; kfree(code); return err; @@ -663,9 +663,15 @@ isdn_ppp_ioctl(int min, struct file *file, unsigned int cmd, unsigned long arg) fprog.len = len; fprog.filter = code; - if (is->active_filter) + if (is->active_filter) { sk_unattached_filter_destroy(is->active_filter); - err = sk_unattached_filter_create(&is->active_filter, &fprog); + is->active_filter = NULL; + } + if (fprog.filter != NULL) + err = sk_unattached_filter_create(&is->active_filter, + &fprog); + else + err = 0; kfree(code); return err; diff --git a/drivers/macintosh/smu.c b/drivers/macintosh/smu.c index 23b4a3b28dbc..4eab93aa570b 100644 --- a/drivers/macintosh/smu.c +++ b/drivers/macintosh/smu.c @@ -1257,7 +1257,8 @@ static unsigned int smu_fpoll(struct file *file, poll_table *wait) if (pp->busy && pp->cmd.status != 1) mask |= POLLIN; spin_unlock_irqrestore(&pp->lock, flags); - } if (pp->mode == smu_file_events) { + } + if (pp->mode == smu_file_events) { /* Not yet implemented */ } return mask; diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c index 4ead4ba60656..d2899e7eb3aa 100644 --- a/drivers/md/dm-cache-metadata.c +++ b/drivers/md/dm-cache-metadata.c @@ -425,6 +425,15 @@ 
static int __open_metadata(struct dm_cache_metadata *cmd) disk_super = dm_block_data(sblock); + /* Verify the data block size hasn't changed */ + if (le32_to_cpu(disk_super->data_block_size) != cmd->data_block_size) { + DMERR("changing the data block size (from %u to %llu) is not supported", + le32_to_cpu(disk_super->data_block_size), + (unsigned long long)cmd->data_block_size); + r = -EINVAL; + goto bad; + } + r = __check_incompat_features(disk_super, cmd); if (r < 0) goto bad; diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index 53b213226c01..4cba2d808afb 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -1,5 +1,5 @@ /* - * Copyright (C) 2003 Christophe Saout <christophe@saout.de> + * Copyright (C) 2003 Jana Saout <jana@saout.de> * Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org> * Copyright (C) 2006-2009 Red Hat, Inc. All rights reserved. * Copyright (C) 2013 Milan Broz <gmazyland@gmail.com> @@ -1996,6 +1996,6 @@ static void __exit dm_crypt_exit(void) module_init(dm_crypt_init); module_exit(dm_crypt_exit); -MODULE_AUTHOR("Christophe Saout <christophe@saout.de>"); +MODULE_AUTHOR("Jana Saout <jana@saout.de>"); MODULE_DESCRIPTION(DM_NAME " target for transparent encryption / decryption"); MODULE_LICENSE("GPL"); diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c index 3842ac738f98..db404a0f7e2c 100644 --- a/drivers/md/dm-io.c +++ b/drivers/md/dm-io.c @@ -10,6 +10,7 @@ #include <linux/device-mapper.h> #include <linux/bio.h> +#include <linux/completion.h> #include <linux/mempool.h> #include <linux/module.h> #include <linux/sched.h> @@ -32,7 +33,7 @@ struct dm_io_client { struct io { unsigned long error_bits; atomic_t count; - struct task_struct *sleeper; + struct completion *wait; struct dm_io_client *client; io_notify_fn callback; void *context; @@ -121,8 +122,8 @@ static void dec_count(struct io *io, unsigned int region, int error) invalidate_kernel_vmap_range(io->vma_invalidate_address, io->vma_invalidate_size); - if (io->sleeper) - wake_up_process(io->sleeper); + if (io->wait) + complete(io->wait); else { unsigned long r = io->error_bits; @@ -387,6 +388,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions, */ volatile char io_[sizeof(struct io) + __alignof__(struct io) - 1]; struct io *io = (struct io *)PTR_ALIGN(&io_, __alignof__(struct io)); + DECLARE_COMPLETION_ONSTACK(wait); if (num_regions > 1 && (rw & RW_MASK) != WRITE) { WARN_ON(1); @@ -395,7 +397,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions, io->error_bits = 0; atomic_set(&io->count, 1); /* see dispatch_io() */ - io->sleeper = current; + io->wait = &wait; io->client = client; io->vma_invalidate_address = dp->vma_invalidate_address; @@ -403,15 +405,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions, dispatch_io(rw, num_regions, where, dp, io, 1); - while (1) { - set_current_state(TASK_UNINTERRUPTIBLE); - - if (!atomic_read(&io->count)) - break; - - io_schedule(); - } - set_current_state(TASK_RUNNING); + wait_for_completion_io(&wait); if (error_bits) *error_bits = io->error_bits; @@ -434,7 +428,7 @@ static int async_io(struct dm_io_client *client, unsigned int num_regions, io = mempool_alloc(client->pool, GFP_NOIO); io->error_bits = 0; atomic_set(&io->count, 1); /* see dispatch_io() */ - io->sleeper = NULL; + io->wait = NULL; io->client = client; io->callback = fn; io->context = context; diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c index 3f6fd9d33ba3..f4167b013d99 100644 --- 
a/drivers/md/dm-mpath.c +++ b/drivers/md/dm-mpath.c @@ -1611,8 +1611,9 @@ static int multipath_busy(struct dm_target *ti) spin_lock_irqsave(&m->lock, flags); - /* pg_init in progress, requeue until done */ - if (!pg_ready(m)) { + /* pg_init in progress or no paths available */ + if (m->pg_init_in_progress || + (!m->nr_valid_paths && m->queue_if_no_path)) { busy = 1; goto out; } diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c index b086a945edcb..e9d33ad59df5 100644 --- a/drivers/md/dm-thin-metadata.c +++ b/drivers/md/dm-thin-metadata.c @@ -613,6 +613,15 @@ static int __open_metadata(struct dm_pool_metadata *pmd) disk_super = dm_block_data(sblock); + /* Verify the data block size hasn't changed */ + if (le32_to_cpu(disk_super->data_block_size) != pmd->data_block_size) { + DMERR("changing the data block size (from %u to %llu) is not supported", + le32_to_cpu(disk_super->data_block_size), + (unsigned long long)pmd->data_block_size); + r = -EINVAL; + goto bad_unlock_sblock; + } + r = __check_incompat_features(disk_super, pmd); if (r < 0) goto bad_unlock_sblock; diff --git a/drivers/md/dm-zero.c b/drivers/md/dm-zero.c index c99003e0d47a..b9a64bbce304 100644 --- a/drivers/md/dm-zero.c +++ b/drivers/md/dm-zero.c @@ -1,5 +1,5 @@ /* - * Copyright (C) 2003 Christophe Saout <christophe@saout.de> + * Copyright (C) 2003 Jana Saout <jana@saout.de> * * This file is released under the GPL. */ @@ -79,6 +79,6 @@ static void __exit dm_zero_exit(void) module_init(dm_zero_init) module_exit(dm_zero_exit) -MODULE_AUTHOR("Christophe Saout <christophe@saout.de>"); +MODULE_AUTHOR("Jana Saout <jana@saout.de>"); MODULE_DESCRIPTION(DM_NAME " dummy target returning zeros"); MODULE_LICENSE("GPL"); diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 437d99045ef2..32b958dbc499 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -54,6 +54,8 @@ static void do_deferred_remove(struct work_struct *w); static DECLARE_WORK(deferred_remove_work, do_deferred_remove); +static struct workqueue_struct *deferred_remove_workqueue; + /* * For bio-based dm. * One of these is allocated per bio. 
@@ -276,16 +278,24 @@ static int __init local_init(void) if (r) goto out_free_rq_tio_cache; + deferred_remove_workqueue = alloc_workqueue("kdmremove", WQ_UNBOUND, 1); + if (!deferred_remove_workqueue) { + r = -ENOMEM; + goto out_uevent_exit; + } + _major = major; r = register_blkdev(_major, _name); if (r < 0) - goto out_uevent_exit; + goto out_free_workqueue; if (!_major) _major = r; return 0; +out_free_workqueue: + destroy_workqueue(deferred_remove_workqueue); out_uevent_exit: dm_uevent_exit(); out_free_rq_tio_cache: @@ -299,6 +309,7 @@ out_free_io_cache: static void local_exit(void) { flush_scheduled_work(); + destroy_workqueue(deferred_remove_workqueue); kmem_cache_destroy(_rq_tio_cache); kmem_cache_destroy(_io_cache); @@ -407,7 +418,7 @@ static void dm_blk_close(struct gendisk *disk, fmode_t mode) if (atomic_dec_and_test(&md->open_count) && (test_bit(DMF_DEFERRED_REMOVE, &md->flags))) - schedule_work(&deferred_remove_work); + queue_work(deferred_remove_workqueue, &deferred_remove_work); dm_put(md); diff --git a/drivers/md/md.c b/drivers/md/md.c index 34846856dbc6..32fc19c540d4 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -5599,7 +5599,7 @@ static int get_array_info(struct mddev * mddev, void __user * arg) if (mddev->in_sync) info.state = (1<<MD_SB_CLEAN); if (mddev->bitmap && mddev->bitmap_info.offset) - info.state = (1<<MD_SB_BITMAP_PRESENT); + info.state |= (1<<MD_SB_BITMAP_PRESENT); info.active_disks = insync; info.working_disks = working; info.failed_disks = failed; @@ -7501,6 +7501,19 @@ void md_do_sync(struct md_thread *thread) rdev->recovery_offset < j) j = rdev->recovery_offset; rcu_read_unlock(); + + /* If there is a bitmap, we need to make sure all + * writes that started before we added a spare + * complete before we start doing a recovery. + * Otherwise the write might complete and (via + * bitmap_endwrite) set a bit in the bitmap after the + * recovery has checked that bit and skipped that + * region. 
+ */ + if (mddev->bitmap) { + mddev->pers->quiesce(mddev, 1); + mddev->pers->quiesce(mddev, 0); + } } printk(KERN_INFO "md: %s of RAID array %s\n", desc, mdname(mddev)); diff --git a/drivers/media/dvb-frontends/si2168.c b/drivers/media/dvb-frontends/si2168.c index 8637d2ed7623..2e3cdcfa0a67 100644 --- a/drivers/media/dvb-frontends/si2168.c +++ b/drivers/media/dvb-frontends/si2168.c @@ -60,7 +60,7 @@ static int si2168_cmd_execute(struct si2168 *s, struct si2168_cmd *cmd) jiffies_to_msecs(jiffies) - (jiffies_to_msecs(timeout) - TIMEOUT)); - if (!(cmd->args[0] >> 7) & 0x01) { + if (!((cmd->args[0] >> 7) & 0x01)) { ret = -ETIMEDOUT; goto err_mutex_unlock; } @@ -485,20 +485,6 @@ static int si2168_init(struct dvb_frontend *fe) if (ret) goto err; - cmd.args[0] = 0x05; - cmd.args[1] = 0x00; - cmd.args[2] = 0xaa; - cmd.args[3] = 0x4d; - cmd.args[4] = 0x56; - cmd.args[5] = 0x40; - cmd.args[6] = 0x00; - cmd.args[7] = 0x00; - cmd.wlen = 8; - cmd.rlen = 1; - ret = si2168_cmd_execute(s, &cmd); - if (ret) - goto err; - /* cold state - try to download firmware */ dev_info(&s->client->dev, "%s: found a '%s' in cold state\n", KBUILD_MODNAME, si2168_ops.info.name); diff --git a/drivers/media/dvb-frontends/si2168_priv.h b/drivers/media/dvb-frontends/si2168_priv.h index 2a343e896f40..53f7f06ae343 100644 --- a/drivers/media/dvb-frontends/si2168_priv.h +++ b/drivers/media/dvb-frontends/si2168_priv.h @@ -22,7 +22,7 @@ #include <linux/firmware.h> #include <linux/i2c-mux.h> -#define SI2168_FIRMWARE "dvb-demod-si2168-01.fw" +#define SI2168_FIRMWARE "dvb-demod-si2168-02.fw" /* state struct */ struct si2168 { diff --git a/drivers/media/dvb-frontends/tda10071.c b/drivers/media/dvb-frontends/tda10071.c index 522fe00f5eee..9619be5d4827 100644 --- a/drivers/media/dvb-frontends/tda10071.c +++ b/drivers/media/dvb-frontends/tda10071.c @@ -668,6 +668,7 @@ static int tda10071_set_frontend(struct dvb_frontend *fe) struct dtv_frontend_properties *c = &fe->dtv_property_cache; int ret, i; u8 mode, rolloff, pilot, inversion, div; + fe_modulation_t modulation; dev_dbg(&priv->i2c->dev, "%s: delivery_system=%d modulation=%d frequency=%d symbol_rate=%d inversion=%d pilot=%d rolloff=%d\n", @@ -702,10 +703,13 @@ static int tda10071_set_frontend(struct dvb_frontend *fe) switch (c->delivery_system) { case SYS_DVBS: + modulation = QPSK; rolloff = 0; pilot = 2; break; case SYS_DVBS2: + modulation = c->modulation; + switch (c->rolloff) { case ROLLOFF_20: rolloff = 2; @@ -750,7 +754,7 @@ static int tda10071_set_frontend(struct dvb_frontend *fe) for (i = 0, mode = 0xff; i < ARRAY_SIZE(TDA10071_MODCOD); i++) { if (c->delivery_system == TDA10071_MODCOD[i].delivery_system && - c->modulation == TDA10071_MODCOD[i].modulation && + modulation == TDA10071_MODCOD[i].modulation && c->fec_inner == TDA10071_MODCOD[i].fec) { mode = TDA10071_MODCOD[i].val; dev_dbg(&priv->i2c->dev, "%s: mode found=%02x\n", @@ -834,10 +838,10 @@ static int tda10071_get_frontend(struct dvb_frontend *fe) switch ((buf[1] >> 0) & 0x01) { case 0: - c->inversion = INVERSION_OFF; + c->inversion = INVERSION_ON; break; case 1: - c->inversion = INVERSION_ON; + c->inversion = INVERSION_OFF; break; } @@ -856,7 +860,7 @@ static int tda10071_get_frontend(struct dvb_frontend *fe) if (ret) goto error; - c->symbol_rate = (buf[0] << 16) | (buf[1] << 8) | (buf[2] << 0); + c->symbol_rate = ((buf[0] << 16) | (buf[1] << 8) | (buf[2] << 0)) * 1000; return ret; error: diff --git a/drivers/media/dvb-frontends/tda10071_priv.h b/drivers/media/dvb-frontends/tda10071_priv.h index 
4baf14bfb65a..420486192736 100644 --- a/drivers/media/dvb-frontends/tda10071_priv.h +++ b/drivers/media/dvb-frontends/tda10071_priv.h @@ -55,6 +55,7 @@ static struct tda10071_modcod { { SYS_DVBS2, QPSK, FEC_8_9, 0x0a }, { SYS_DVBS2, QPSK, FEC_9_10, 0x0b }, /* 8PSK */ + { SYS_DVBS2, PSK_8, FEC_AUTO, 0x00 }, { SYS_DVBS2, PSK_8, FEC_3_5, 0x0c }, { SYS_DVBS2, PSK_8, FEC_2_3, 0x0d }, { SYS_DVBS2, PSK_8, FEC_3_4, 0x0e }, diff --git a/drivers/media/pci/saa7134/saa7134-empress.c b/drivers/media/pci/saa7134/saa7134-empress.c index e65c760e4e8b..0006d6bf8c18 100644 --- a/drivers/media/pci/saa7134/saa7134-empress.c +++ b/drivers/media/pci/saa7134/saa7134-empress.c @@ -179,7 +179,7 @@ static const struct v4l2_file_operations ts_fops = .read = vb2_fop_read, .poll = vb2_fop_poll, .mmap = vb2_fop_mmap, - .ioctl = video_ioctl2, + .unlocked_ioctl = video_ioctl2, }; static const struct v4l2_ioctl_ops ts_ioctl_ops = { diff --git a/drivers/media/platform/davinci/vpif_capture.c b/drivers/media/platform/davinci/vpif_capture.c index a7ed16497903..1e4ec697fb10 100644 --- a/drivers/media/platform/davinci/vpif_capture.c +++ b/drivers/media/platform/davinci/vpif_capture.c @@ -269,6 +269,7 @@ err: list_del(&buf->list); vb2_buffer_done(&buf->vb, VB2_BUF_STATE_QUEUED); } + spin_unlock_irqrestore(&common->irqlock, flags); return ret; } diff --git a/drivers/media/platform/davinci/vpif_display.c b/drivers/media/platform/davinci/vpif_display.c index 5bb085b19bcb..b431b58f39e3 100644 --- a/drivers/media/platform/davinci/vpif_display.c +++ b/drivers/media/platform/davinci/vpif_display.c @@ -233,6 +233,7 @@ err: list_del(&buf->list); vb2_buffer_done(&buf->vb, VB2_BUF_STATE_QUEUED); } + spin_unlock_irqrestore(&common->irqlock, flags); return ret; } diff --git a/drivers/media/tuners/si2157.c b/drivers/media/tuners/si2157.c index 271a752cee54..fa4cc7b880aa 100644 --- a/drivers/media/tuners/si2157.c +++ b/drivers/media/tuners/si2157.c @@ -57,7 +57,7 @@ static int si2157_cmd_execute(struct si2157 *s, struct si2157_cmd *cmd) jiffies_to_msecs(jiffies) - (jiffies_to_msecs(timeout) - TIMEOUT)); - if (!(buf[0] >> 7) & 0x01) { + if (!((buf[0] >> 7) & 0x01)) { ret = -ETIMEDOUT; goto err_mutex_unlock; } else { diff --git a/drivers/media/usb/dvb-usb-v2/af9035.c b/drivers/media/usb/dvb-usb-v2/af9035.c index 021e4d35e4d7..7b9b75f60774 100644 --- a/drivers/media/usb/dvb-usb-v2/af9035.c +++ b/drivers/media/usb/dvb-usb-v2/af9035.c @@ -704,15 +704,41 @@ static int af9035_read_config(struct dvb_usb_device *d) if (ret < 0) goto err; - if (tmp == 0x00) - dev_dbg(&d->udev->dev, - "%s: [%d]tuner not set, using default\n", - __func__, i); - else + dev_dbg(&d->udev->dev, "%s: [%d]tuner=%02x\n", + __func__, i, tmp); + + /* tuner sanity check */ + if (state->chip_type == 0x9135) { + if (state->chip_version == 0x02) { + /* IT9135 BX (v2) */ + switch (tmp) { + case AF9033_TUNER_IT9135_60: + case AF9033_TUNER_IT9135_61: + case AF9033_TUNER_IT9135_62: + state->af9033_config[i].tuner = tmp; + break; + } + } else { + /* IT9135 AX (v1) */ + switch (tmp) { + case AF9033_TUNER_IT9135_38: + case AF9033_TUNER_IT9135_51: + case AF9033_TUNER_IT9135_52: + state->af9033_config[i].tuner = tmp; + break; + } + } + } else { + /* AF9035 */ state->af9033_config[i].tuner = tmp; + } - dev_dbg(&d->udev->dev, "%s: [%d]tuner=%02x\n", - __func__, i, state->af9033_config[i].tuner); + if (state->af9033_config[i].tuner != tmp) { + dev_info(&d->udev->dev, + "%s: [%d] overriding tuner from %02x to %02x\n", + KBUILD_MODNAME, i, tmp, + state->af9033_config[i].tuner); + } switch 
(state->af9033_config[i].tuner) { case AF9033_TUNER_TUA9001: diff --git a/drivers/media/usb/gspca/pac7302.c b/drivers/media/usb/gspca/pac7302.c index 2fd1c5e31a0f..339adce7c7a5 100644 --- a/drivers/media/usb/gspca/pac7302.c +++ b/drivers/media/usb/gspca/pac7302.c @@ -928,6 +928,7 @@ static const struct usb_device_id device_table[] = { {USB_DEVICE(0x093a, 0x2620)}, {USB_DEVICE(0x093a, 0x2621)}, {USB_DEVICE(0x093a, 0x2622), .driver_info = FL_VFLIP}, + {USB_DEVICE(0x093a, 0x2623), .driver_info = FL_VFLIP}, {USB_DEVICE(0x093a, 0x2624), .driver_info = FL_VFLIP}, {USB_DEVICE(0x093a, 0x2625)}, {USB_DEVICE(0x093a, 0x2626)}, diff --git a/drivers/media/usb/hdpvr/hdpvr-video.c b/drivers/media/usb/hdpvr/hdpvr-video.c index 0500c4175d5f..6bce01a674f9 100644 --- a/drivers/media/usb/hdpvr/hdpvr-video.c +++ b/drivers/media/usb/hdpvr/hdpvr-video.c @@ -82,7 +82,7 @@ static void hdpvr_read_bulk_callback(struct urb *urb) } /*=========================================================================*/ -/* bufffer bits */ +/* buffer bits */ /* function expects dev->io_mutex to be hold by caller */ int hdpvr_cancel_queue(struct hdpvr_device *dev) @@ -926,7 +926,7 @@ static int hdpvr_s_ctrl(struct v4l2_ctrl *ctrl) case V4L2_CID_MPEG_AUDIO_ENCODING: if (dev->flags & HDPVR_FLAG_AC3_CAP) { opt->audio_codec = ctrl->val; - return hdpvr_set_audio(dev, opt->audio_input, + return hdpvr_set_audio(dev, opt->audio_input + 1, opt->audio_codec); } return 0; @@ -1198,7 +1198,7 @@ int hdpvr_register_videodev(struct hdpvr_device *dev, struct device *parent, v4l2_ctrl_new_std_menu(hdl, &hdpvr_ctrl_ops, V4L2_CID_MPEG_AUDIO_ENCODING, ac3 ? V4L2_MPEG_AUDIO_ENCODING_AC3 : V4L2_MPEG_AUDIO_ENCODING_AAC, - 0x7, V4L2_MPEG_AUDIO_ENCODING_AAC); + 0x7, ac3 ? dev->options.audio_codec : V4L2_MPEG_AUDIO_ENCODING_AAC); v4l2_ctrl_new_std_menu(hdl, &hdpvr_ctrl_ops, V4L2_CID_MPEG_VIDEO_ENCODING, V4L2_MPEG_VIDEO_ENCODING_MPEG_4_AVC, 0x3, diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c index 4ae54caadd03..ce1c9f5d9dee 100644 --- a/drivers/media/v4l2-core/v4l2-dv-timings.c +++ b/drivers/media/v4l2-core/v4l2-dv-timings.c @@ -610,10 +610,10 @@ struct v4l2_fract v4l2_calc_aspect_ratio(u8 hor_landscape, u8 vert_portrait) aspect.denominator = 9; } else if (ratio == 34) { aspect.numerator = 4; - aspect.numerator = 3; + aspect.denominator = 3; } else if (ratio == 68) { aspect.numerator = 15; - aspect.numerator = 9; + aspect.denominator = 9; } else { aspect.numerator = hor_landscape + 99; aspect.denominator = 100; diff --git a/drivers/memstick/host/rtsx_pci_ms.c b/drivers/memstick/host/rtsx_pci_ms.c index 2a635b6fdaf7..c880ba685754 100644 --- a/drivers/memstick/host/rtsx_pci_ms.c +++ b/drivers/memstick/host/rtsx_pci_ms.c @@ -601,6 +601,7 @@ static int rtsx_pci_ms_drv_remove(struct platform_device *pdev) pcr->slots[RTSX_MS_CARD].card_event = NULL; msh = host->msh; host->eject = true; + cancel_work_sync(&host->handle_req); mutex_lock(&host->host_mutex); if (host->req) { diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig index ee8204cc31e9..6cc4b6acc22a 100644 --- a/drivers/mfd/Kconfig +++ b/drivers/mfd/Kconfig @@ -760,6 +760,7 @@ config MFD_SYSCON config MFD_DAVINCI_VOICECODEC tristate select MFD_CORE + select REGMAP_MMIO config MFD_TI_AM335X_TSCADC tristate "TI ADC / Touch Screen chip support" @@ -1225,7 +1226,7 @@ config MFD_WM8994 functionaltiy of the device other drivers must be enabled. 
config MFD_STW481X - bool "Support for ST Microelectronics STw481x" + tristate "Support for ST Microelectronics STw481x" depends on I2C && ARCH_NOMADIK select REGMAP_I2C select MFD_CORE @@ -1248,7 +1249,7 @@ config MCP_SA11X0 # Chip drivers config MCP_UCB1200 - bool "Support for UCB1200 / UCB1300" + tristate "Support for UCB1200 / UCB1300" depends on MCP_SA11X0 select MCP diff --git a/drivers/mfd/ab8500-core.c b/drivers/mfd/ab8500-core.c index a8ee4a36a1d8..cf2e6a198c6b 100644 --- a/drivers/mfd/ab8500-core.c +++ b/drivers/mfd/ab8500-core.c @@ -591,7 +591,7 @@ static int ab8500_irq_init(struct ab8500 *ab8500, struct device_node *np) num_irqs = AB8500_NR_IRQS; /* If ->irq_base is zero this will give a linear mapping */ - ab8500->domain = irq_domain_add_simple(NULL, + ab8500->domain = irq_domain_add_simple(ab8500->dev->of_node, num_irqs, 0, &ab8500_irq_ops, ab8500); diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig index a43d0c467274..ee9402324a23 100644 --- a/drivers/misc/Kconfig +++ b/drivers/misc/Kconfig @@ -54,7 +54,7 @@ config AD525X_DPOT_SPI config ATMEL_PWM tristate "Atmel AT32/AT91 PWM support" depends on HAVE_CLK - depends on AVR32 || AT91SAM9263 || AT91SAM9RL || AT91SAM9G45 + depends on AVR32 || ARCH_AT91SAM9263 || ARCH_AT91SAM9RL || ARCH_AT91SAM9G45 help This option enables device driver support for the PWM channels on certain Atmel processors. Pulse Width Modulation is used for diff --git a/drivers/mtd/chips/cfi_cmdset_0001.c b/drivers/mtd/chips/cfi_cmdset_0001.c index e4ec355704a6..a7543ba3e190 100644 --- a/drivers/mtd/chips/cfi_cmdset_0001.c +++ b/drivers/mtd/chips/cfi_cmdset_0001.c @@ -52,6 +52,11 @@ /* Atmel chips */ #define AT49BV640D 0x02de #define AT49BV640DT 0x02db +/* Sharp chips */ +#define LH28F640BFHE_PTTL90 0x00b0 +#define LH28F640BFHE_PBTL90 0x00b1 +#define LH28F640BFHE_PTTL70A 0x00b2 +#define LH28F640BFHE_PBTL70A 0x00b3 static int cfi_intelext_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *); static int cfi_intelext_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *); @@ -258,6 +263,36 @@ static void fixup_st_m28w320cb(struct mtd_info *mtd) (cfi->cfiq->EraseRegionInfo[1] & 0xffff0000) | 0x3e; }; +static int is_LH28F640BF(struct cfi_private *cfi) +{ + /* Sharp LH28F640BF Family */ + if (cfi->mfr == CFI_MFR_SHARP && ( + cfi->id == LH28F640BFHE_PTTL90 || cfi->id == LH28F640BFHE_PBTL90 || + cfi->id == LH28F640BFHE_PTTL70A || cfi->id == LH28F640BFHE_PBTL70A)) + return 1; + return 0; +} + +static void fixup_LH28F640BF(struct mtd_info *mtd) +{ + struct map_info *map = mtd->priv; + struct cfi_private *cfi = map->fldrv_priv; + struct cfi_pri_intelext *extp = cfi->cmdset_priv; + + /* Reset the Partition Configuration Register on LH28F640BF + * to a single partition (PCR = 0x000): PCR is embedded into A0-A15. */ + if (is_LH28F640BF(cfi)) { + printk(KERN_INFO "Reset Partition Config. 
Register: 1 Partition of 4 planes\n"); + map_write(map, CMD(0x60), 0); + map_write(map, CMD(0x04), 0); + + /* We have set one single partition thus + * Simultaneous Operations are not allowed */ + printk(KERN_INFO "cfi_cmdset_0001: Simultaneous Operations disabled\n"); + extp->FeatureSupport &= ~512; + } +} + static void fixup_use_point(struct mtd_info *mtd) { struct map_info *map = mtd->priv; @@ -309,6 +344,8 @@ static struct cfi_fixup cfi_fixup_table[] = { { CFI_MFR_ST, 0x00ba, /* M28W320CT */ fixup_st_m28w320ct }, { CFI_MFR_ST, 0x00bb, /* M28W320CB */ fixup_st_m28w320cb }, { CFI_MFR_INTEL, CFI_ID_ANY, fixup_unlock_powerup_lock }, + { CFI_MFR_SHARP, CFI_ID_ANY, fixup_unlock_powerup_lock }, + { CFI_MFR_SHARP, CFI_ID_ANY, fixup_LH28F640BF }, { 0, 0, NULL } }; @@ -1649,6 +1686,12 @@ static int __xipram do_write_buffer(struct map_info *map, struct flchip *chip, initial_adr = adr; cmd_adr = adr & ~(wbufsize-1); + /* Sharp LH28F640BF chips need the first address for the + * Page Buffer Program command. See Table 5 of + * LH28F320BF, LH28F640BF, LH28F128BF Series (Appendix FUM00701) */ + if (is_LH28F640BF(cfi)) + cmd_adr = adr; + /* Let's determine this according to the interleave only once */ write_cmd = (cfi->cfiq->P_ID != P_ID_INTEL_PERFORMANCE) ? CMD(0xe8) : CMD(0xe9); diff --git a/drivers/mtd/devices/elm.c b/drivers/mtd/devices/elm.c index 7df86948e6d4..b4f61c7fc161 100644 --- a/drivers/mtd/devices/elm.c +++ b/drivers/mtd/devices/elm.c @@ -475,6 +475,7 @@ static int elm_context_save(struct elm_info *info) ELM_SYNDROME_FRAGMENT_1 + offset); regs->elm_syndrome_fragment_0[i] = elm_read_reg(info, ELM_SYNDROME_FRAGMENT_0 + offset); + break; default: return -EINVAL; } @@ -520,6 +521,7 @@ static int elm_context_restore(struct elm_info *info) regs->elm_syndrome_fragment_1[i]); elm_write_reg(info, ELM_SYNDROME_FRAGMENT_0 + offset, regs->elm_syndrome_fragment_0[i]); + break; default: return -EINVAL; } diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c index 41167e9e991e..4f3e80c68a26 100644 --- a/drivers/mtd/nand/nand_base.c +++ b/drivers/mtd/nand/nand_base.c @@ -4047,8 +4047,10 @@ int nand_scan_tail(struct mtd_info *mtd) ecc->layout->oobavail += ecc->layout->oobfree[i].length; mtd->oobavail = ecc->layout->oobavail; - /* ECC sanity check: warn noisily if it's too weak */ - WARN_ON(!nand_ecc_strength_good(mtd)); + /* ECC sanity check: warn if it's too weak */ + if (!nand_ecc_strength_good(mtd)) + pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n", + mtd->name); /* * Set the number of read / write steps for one page depending on ECC diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c index b04e7d059888..0431b46d9fd9 100644 --- a/drivers/mtd/ubi/fastmap.c +++ b/drivers/mtd/ubi/fastmap.c @@ -125,7 +125,7 @@ static struct ubi_ainf_volume *add_vol(struct ubi_attach_info *ai, int vol_id, parent = *p; av = rb_entry(parent, struct ubi_ainf_volume, rb); - if (vol_id < av->vol_id) + if (vol_id > av->vol_id) p = &(*p)->rb_left; else p = &(*p)->rb_right; @@ -423,7 +423,7 @@ static int scan_pool(struct ubi_device *ubi, struct ubi_attach_info *ai, pnum, err); ret = err > 0 ? 
UBI_BAD_FASTMAP : err; goto out; - } else if (ret == UBI_IO_BITFLIPS) + } else if (err == UBI_IO_BITFLIPS) scrub = 1; /* diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index 04f35f960cb8..701f86cd5993 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -1025,10 +1025,14 @@ static netdev_features_t bond_fix_features(struct net_device *dev, NETIF_F_FRAGLIST | NETIF_F_ALL_TSO | \ NETIF_F_HIGHDMA | NETIF_F_LRO) +#define BOND_ENC_FEATURES (NETIF_F_ALL_CSUM | NETIF_F_SG | NETIF_F_RXCSUM |\ + NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL) + static void bond_compute_features(struct bonding *bond) { unsigned int flags, dst_release_flag = IFF_XMIT_DST_RELEASE; netdev_features_t vlan_features = BOND_VLAN_FEATURES; + netdev_features_t enc_features = BOND_ENC_FEATURES; struct net_device *bond_dev = bond->dev; struct list_head *iter; struct slave *slave; @@ -1044,6 +1048,9 @@ static void bond_compute_features(struct bonding *bond) vlan_features = netdev_increment_features(vlan_features, slave->dev->vlan_features, BOND_VLAN_FEATURES); + enc_features = netdev_increment_features(enc_features, + slave->dev->hw_enc_features, + BOND_ENC_FEATURES); dst_release_flag &= slave->dev->priv_flags; if (slave->dev->hard_header_len > max_hard_header_len) max_hard_header_len = slave->dev->hard_header_len; @@ -1054,6 +1061,7 @@ static void bond_compute_features(struct bonding *bond) done: bond_dev->vlan_features = vlan_features; + bond_dev->hw_enc_features = enc_features; bond_dev->hard_header_len = max_hard_header_len; bond_dev->gso_max_segs = gso_max_segs; netif_set_gso_max_size(bond_dev, gso_max_size); @@ -3975,6 +3983,7 @@ void bond_setup(struct net_device *bond_dev) NETIF_F_HW_VLAN_CTAG_FILTER; bond_dev->hw_features &= ~(NETIF_F_ALL_CSUM & ~NETIF_F_HW_CSUM); + bond_dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL; bond_dev->features |= bond_dev->hw_features; } @@ -4059,7 +4068,7 @@ static int bond_check_params(struct bond_params *params) } if (ad_select) { - bond_opt_initstr(&newval, lacp_rate); + bond_opt_initstr(&newval, ad_select); valptr = bond_opt_parse(bond_opt_get(BOND_OPT_AD_SELECT), &newval); if (!valptr) { diff --git a/drivers/net/can/slcan.c b/drivers/net/can/slcan.c index dcf9196f6316..ea4d4f1a6411 100644 --- a/drivers/net/can/slcan.c +++ b/drivers/net/can/slcan.c @@ -52,6 +52,7 @@ #include <linux/delay.h> #include <linux/init.h> #include <linux/kernel.h> +#include <linux/workqueue.h> #include <linux/can.h> #include <linux/can/skb.h> @@ -85,6 +86,7 @@ struct slcan { struct tty_struct *tty; /* ptr to TTY structure */ struct net_device *dev; /* easy for intr handling */ spinlock_t lock; + struct work_struct tx_work; /* Flushes transmit buffer */ /* These are pointers to the malloc()ed frame buffers. */ unsigned char rbuff[SLC_MTU]; /* receiver buffer */ @@ -309,36 +311,46 @@ static void slc_encaps(struct slcan *sl, struct can_frame *cf) sl->dev->stats.tx_bytes += cf->can_dlc; } -/* - * Called by the driver when there's room for more data. If we have - * more packets to send, we send them here. - */ -static void slcan_write_wakeup(struct tty_struct *tty) +/* Write out any remaining transmit buffer. Scheduled when tty is writable */ +static void slcan_transmit(struct work_struct *work) { + struct slcan *sl = container_of(work, struct slcan, tx_work); int actual; - struct slcan *sl = (struct slcan *) tty->disc_data; + spin_lock_bh(&sl->lock); /* First make sure we're connected. 
*/ - if (!sl || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev)) + if (!sl->tty || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev)) { + spin_unlock_bh(&sl->lock); return; + } - spin_lock_bh(&sl->lock); if (sl->xleft <= 0) { /* Now serial buffer is almost free & we can start * transmission of another packet */ sl->dev->stats.tx_packets++; - clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags); + clear_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags); spin_unlock_bh(&sl->lock); netif_wake_queue(sl->dev); return; } - actual = tty->ops->write(tty, sl->xhead, sl->xleft); + actual = sl->tty->ops->write(sl->tty, sl->xhead, sl->xleft); sl->xleft -= actual; sl->xhead += actual; spin_unlock_bh(&sl->lock); } +/* + * Called by the driver when there's room for more data. + * Schedule the transmit. + */ +static void slcan_write_wakeup(struct tty_struct *tty) +{ + struct slcan *sl = tty->disc_data; + + schedule_work(&sl->tx_work); +} + /* Send a can_frame to a TTY queue. */ static netdev_tx_t slc_xmit(struct sk_buff *skb, struct net_device *dev) { @@ -528,6 +540,7 @@ static struct slcan *slc_alloc(dev_t line) sl->magic = SLCAN_MAGIC; sl->dev = dev; spin_lock_init(&sl->lock); + INIT_WORK(&sl->tx_work, slcan_transmit); slcan_devs[i] = dev; return sl; @@ -626,8 +639,12 @@ static void slcan_close(struct tty_struct *tty) if (!sl || sl->magic != SLCAN_MAGIC || sl->tty != tty) return; + spin_lock_bh(&sl->lock); tty->disc_data = NULL; sl->tty = NULL; + spin_unlock_bh(&sl->lock); + + flush_work(&sl->tx_work); /* Flush network side */ unregister_netdev(sl->dev); diff --git a/drivers/net/ethernet/allwinner/sun4i-emac.c b/drivers/net/ethernet/allwinner/sun4i-emac.c index 28460676b8ca..d81e7167a8b5 100644 --- a/drivers/net/ethernet/allwinner/sun4i-emac.c +++ b/drivers/net/ethernet/allwinner/sun4i-emac.c @@ -736,6 +736,7 @@ static int emac_open(struct net_device *dev) ret = emac_mdio_probe(dev); if (ret < 0) { + free_irq(dev->irq, dev); netdev_err(dev, "cannot probe MDIO bus\n"); return ret; } diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c index 141160ef249a..5776e503e4c5 100644 --- a/drivers/net/ethernet/broadcom/bcmsysport.c +++ b/drivers/net/ethernet/broadcom/bcmsysport.c @@ -654,13 +654,13 @@ static int bcm_sysport_tx_poll(struct napi_struct *napi, int budget) work_done = bcm_sysport_tx_reclaim(ring->priv, ring); - if (work_done < budget) { + if (work_done == 0) { napi_complete(napi); /* re-enable TX interrupt */ intrl2_1_mask_clear(ring->priv, BIT(ring->index)); } - return work_done; + return 0; } static void bcm_sysport_tx_reclaim_all(struct bcm_sysport_priv *priv) @@ -1254,28 +1254,17 @@ static inline void umac_enable_set(struct bcm_sysport_priv *priv, usleep_range(1000, 2000); } -static inline int umac_reset(struct bcm_sysport_priv *priv) +static inline void umac_reset(struct bcm_sysport_priv *priv) { - unsigned int timeout = 0; u32 reg; - int ret = 0; - - umac_writel(priv, 0, UMAC_CMD); - while (timeout++ < 1000) { - reg = umac_readl(priv, UMAC_CMD); - if (!(reg & CMD_SW_RESET)) - break; - - udelay(1); - } - - if (timeout == 1000) { - dev_err(&priv->pdev->dev, - "timeout waiting for MAC to come out of reset\n"); - ret = -ETIMEDOUT; - } - return ret; + reg = umac_readl(priv, UMAC_CMD); + reg |= CMD_SW_RESET; + umac_writel(priv, reg, UMAC_CMD); + udelay(10); + reg = umac_readl(priv, UMAC_CMD); + reg &= ~CMD_SW_RESET; + umac_writel(priv, reg, UMAC_CMD); } static void umac_set_hw_addr(struct bcm_sysport_priv *priv, @@ -1303,11 +1292,7 @@ static int 
bcm_sysport_open(struct net_device *dev) int ret; /* Reset UniMAC */ - ret = umac_reset(priv); - if (ret) { - netdev_err(dev, "UniMAC reset failed\n"); - return ret; - } + umac_reset(priv); /* Flush TX and RX FIFOs at TOPCTRL level */ topctrl_flush(priv); @@ -1589,12 +1574,6 @@ static int bcm_sysport_probe(struct platform_device *pdev) BUILD_BUG_ON(sizeof(struct bcm_tsb) != 8); dev->needed_headroom += sizeof(struct bcm_tsb); - /* We are interfaced to a switch which handles the multicast - * filtering for us, so we do not support programming any - * multicast hash table in this Ethernet MAC. - */ - dev->flags &= ~IFF_MULTICAST; - /* libphy will adjust the link state accordingly */ netif_carrier_off(dev); diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c index 47c5814114e1..4b875da1c7ed 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c @@ -797,7 +797,8 @@ static void bnx2x_tpa_stop(struct bnx2x *bp, struct bnx2x_fastpath *fp, return; } - bnx2x_frag_free(fp, new_data); + if (new_data) + bnx2x_frag_free(fp, new_data); drop: /* drop the packet and keep the buffer in the bin */ DP(NETIF_MSG_RX_STATUS, diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c index 2887034523e0..6a8b1453a1b9 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c @@ -12937,7 +12937,7 @@ static int bnx2x_get_num_non_def_sbs(struct pci_dev *pdev, int cnic_cnt) * without the default SB. * For VFs there is no default SB, then we return (index+1). */ - pci_read_config_word(pdev, pdev->msix_cap + PCI_MSI_FLAGS, &control); + pci_read_config_word(pdev, pdev->msix_cap + PCI_MSIX_FLAGS, &control); index = control & PCI_MSIX_FLAGS_QSIZE; diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c index 5ba1cfbd60da..16281ad2da12 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c @@ -1408,13 +1408,6 @@ static int bcmgenet_alloc_rx_buffers(struct bcmgenet_priv *priv) if (cb->skb) continue; - /* set the DMA descriptor length once and for all - * it will only change if we support dynamically sizing - * priv->rx_buf_len, but we do not - */ - dmadesc_set_length_status(priv, priv->rx_bd_assign_ptr, - priv->rx_buf_len << DMA_BUFLENGTH_SHIFT); - ret = bcmgenet_rx_refill(priv, cb); if (ret) break; @@ -2535,14 +2528,17 @@ static int bcmgenet_probe(struct platform_device *pdev) netif_set_real_num_tx_queues(priv->dev, priv->hw_params->tx_queues + 1); netif_set_real_num_rx_queues(priv->dev, priv->hw_params->rx_queues + 1); - err = register_netdev(dev); - if (err) - goto err_clk_disable; + /* libphy will determine the link state */ + netif_carrier_off(dev); /* Turn off the main clock, WOL clock is handled separately */ if (!IS_ERR(priv->clk)) clk_disable_unprepare(priv->clk); + err = register_netdev(dev); + if (err) + goto err; + return err; err_clk_disable: diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h index 0f117105fed1..e23c993b1362 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h @@ -331,9 +331,9 @@ struct bcmgenet_mib_counters { #define EXT_ENERGY_DET_MASK (1 << 12) #define EXT_RGMII_OOB_CTRL 0x0C -#define RGMII_MODE_EN (1 << 0) #define RGMII_LINK (1 << 4) 
#define OOB_DISABLE (1 << 5) +#define RGMII_MODE_EN (1 << 6) #define ID_MODE_DIS (1 << 16) #define EXT_GPHY_CTRL 0x1C diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c index df2792d8383d..8afa579e7c40 100644 --- a/drivers/net/ethernet/broadcom/tg3.c +++ b/drivers/net/ethernet/broadcom/tg3.c @@ -3224,7 +3224,7 @@ static int tg3_nvram_read_using_eeprom(struct tg3 *tp, return 0; } -#define NVRAM_CMD_TIMEOUT 100 +#define NVRAM_CMD_TIMEOUT 5000 static int tg3_nvram_exec_cmd(struct tg3 *tp, u32 nvram_cmd) { @@ -3232,7 +3232,7 @@ static int tg3_nvram_exec_cmd(struct tg3 *tp, u32 nvram_cmd) tw32(NVRAM_CMD, nvram_cmd); for (i = 0; i < NVRAM_CMD_TIMEOUT; i++) { - udelay(10); + usleep_range(10, 40); if (tr32(NVRAM_CMD) & NVRAM_CMD_DONE) { udelay(10); break; @@ -7854,8 +7854,8 @@ static int tg3_tso_bug(struct tg3 *tp, struct sk_buff *skb) netif_wake_queue(tp->dev); } - segs = skb_gso_segment(skb, tp->dev->features & ~NETIF_F_TSO); - if (IS_ERR(segs)) + segs = skb_gso_segment(skb, tp->dev->features & ~(NETIF_F_TSO | NETIF_F_TSO6)); + if (IS_ERR(segs) || !segs) goto tg3_tso_bug_end; do { diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c index 2f8d6b910383..a83271cf17c3 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c @@ -4057,22 +4057,19 @@ int cxgb4_unregister_uld(enum cxgb4_uld type) EXPORT_SYMBOL(cxgb4_unregister_uld); /* Check if netdev on which event is occured belongs to us or not. Return - * suceess (1) if it belongs otherwise failure (0). + * success (true) if it belongs otherwise failure (false). + * Called with rcu_read_lock() held. */ -static int cxgb4_netdev(struct net_device *netdev) +static bool cxgb4_netdev(const struct net_device *netdev) { struct adapter *adap; int i; - spin_lock(&adap_rcu_lock); list_for_each_entry_rcu(adap, &adap_rcu_list, rcu_node) for (i = 0; i < MAX_NPORTS; i++) - if (adap->port[i] == netdev) { - spin_unlock(&adap_rcu_lock); - return 1; - } - spin_unlock(&adap_rcu_lock); - return 0; + if (adap->port[i] == netdev) + return true; + return false; } static int clip_add(struct net_device *event_dev, struct inet6_ifaddr *ifa, @@ -6396,6 +6393,7 @@ static void remove_one(struct pci_dev *pdev) adapter->flags &= ~DEV_ENABLED; } pci_release_regions(pdev); + synchronize_rcu(); kfree(adapter); } else pci_release_regions(pdev); diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c index bba67681aeaa..931478e7bd28 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c +++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c @@ -3962,6 +3962,7 @@ int t4_port_init(struct adapter *adap, int mbox, int pf, int vf) p->lport = j; p->rss_size = rss_size; memcpy(adap->port[i]->dev_addr, addr, ETH_ALEN); + adap->port[i]->dev_port = j; ret = ntohl(c.u.info.lstatus_to_modtype); p->mdio_addr = (ret & FW_PORT_CMD_MDIOCAP) ? 
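A short aside on the cxgb4_netdev() rework above: the patch drops the adap_rcu_lock spinlock around the adapter-list walk, documents that the helper is called with rcu_read_lock() held, and has remove_one() issue synchronize_rcu() before freeing the adapter so no reader can still be traversing it. The following is only a rough sketch of that reader/updater pattern (demo_adapter, demo_list, demo_lock and the helper names are made-up illustrations, not the driver's real identifiers):

#include <linux/rculist.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct demo_adapter {                   /* stand-in for the driver's adapter struct */
	struct list_head node;
	void *port;
};

static LIST_HEAD(demo_list);
static DEFINE_SPINLOCK(demo_lock);      /* writers still serialize among themselves */

/* Reader side: caller holds rcu_read_lock(); no spinlock taken here. */
static bool demo_owns_port(const void *port)
{
	struct demo_adapter *a;

	list_for_each_entry_rcu(a, &demo_list, node)
		if (a->port == port)
			return true;
	return false;
}

/* Writer side: unlink under the lock, wait for readers, then free. */
static void demo_remove(struct demo_adapter *a)
{
	spin_lock(&demo_lock);
	list_del_rcu(&a->node);
	spin_unlock(&demo_lock);

	synchronize_rcu();              /* after this, no reader can still see 'a' */
	kfree(a);
}

The same ordering is what the cxgb4 hunk relies on: readers never block, and the writer's synchronize_rcu() guarantees the kfree() cannot race with a concurrent list walk.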
diff --git a/drivers/net/ethernet/dec/tulip/timer.c b/drivers/net/ethernet/dec/tulip/timer.c index 768379b8aee9..523d9dde50a2 100644 --- a/drivers/net/ethernet/dec/tulip/timer.c +++ b/drivers/net/ethernet/dec/tulip/timer.c @@ -158,7 +158,7 @@ void comet_timer(unsigned long data) { struct net_device *dev = (struct net_device *)data; struct tulip_private *tp = netdev_priv(dev); - int next_tick = 60*HZ; + int next_tick = 2*HZ; if (tulip_debug > 1) netdev_dbg(dev, "Comet link status %04x partner capability %04x\n", diff --git a/drivers/net/ethernet/emulex/benet/be.h b/drivers/net/ethernet/emulex/benet/be.h index 2e7c5553955e..c2f5d2d3b932 100644 --- a/drivers/net/ethernet/emulex/benet/be.h +++ b/drivers/net/ethernet/emulex/benet/be.h @@ -557,9 +557,7 @@ static inline u16 be_max_qs(struct be_adapter *adapter) #define be_pvid_tagging_enabled(adapter) (adapter->pvid) /* Is BE in QNQ multi-channel mode */ -#define be_is_qnq_mode(adapter) (adapter->mc_type == FLEX10 || \ - adapter->mc_type == vNIC1 || \ - adapter->mc_type == UFP) +#define be_is_qnq_mode(adapter) (adapter->function_mode & QNQ_MODE) #define lancer_chip(adapter) (adapter->pdev->device == OC_DEVICE_ID3 || \ adapter->pdev->device == OC_DEVICE_ID4) diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.h b/drivers/net/ethernet/emulex/benet/be_cmds.h index 3e0a6b243806..59b3c056f329 100644 --- a/drivers/net/ethernet/emulex/benet/be_cmds.h +++ b/drivers/net/ethernet/emulex/benet/be_cmds.h @@ -1091,7 +1091,7 @@ struct be_cmd_resp_modify_eq_delay { * based on the skew/IPL. */ #define RDMA_ENABLED 0x4 -#define FLEX10_MODE 0x400 +#define QNQ_MODE 0x400 #define VNIC_MODE 0x20000 #define UMC_ENABLED 0x1000000 struct be_cmd_req_query_fw_cfg { diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c index 6822b3d76d85..1e187fb760f8 100644 --- a/drivers/net/ethernet/emulex/benet/be_main.c +++ b/drivers/net/ethernet/emulex/benet/be_main.c @@ -2902,7 +2902,7 @@ static int be_open(struct net_device *netdev) for_all_evt_queues(adapter, eqo, i) { napi_enable(&eqo->napi); be_enable_busy_poll(eqo); - be_eq_notify(adapter, eqo->q.id, true, false, 0); + be_eq_notify(adapter, eqo->q.id, true, true, 0); } adapter->flags |= BE_FLAGS_NAPI_ENABLED; @@ -3254,9 +3254,9 @@ err: static u8 be_convert_mc_type(u32 function_mode) { - if (function_mode & VNIC_MODE && function_mode & FLEX10_MODE) + if (function_mode & VNIC_MODE && function_mode & QNQ_MODE) return vNIC1; - else if (function_mode & FLEX10_MODE) + else if (function_mode & QNQ_MODE) return FLEX10; else if (function_mode & VNIC_MODE) return vNIC2; diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c index 38d9d276ab8b..77037fd377b8 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -320,6 +320,11 @@ static void *swap_buffer(void *bufaddr, int len) return bufaddr; } +static inline bool is_ipv4_pkt(struct sk_buff *skb) +{ + return skb->protocol == htons(ETH_P_IP) && ip_hdr(skb)->version == 4; +} + static int fec_enet_clear_csum(struct sk_buff *skb, struct net_device *ndev) { @@ -330,7 +335,8 @@ fec_enet_clear_csum(struct sk_buff *skb, struct net_device *ndev) if (unlikely(skb_cow_head(skb, 0))) return -1; - ip_hdr(skb)->check = 0; + if (is_ipv4_pkt(skb)) + ip_hdr(skb)->check = 0; *(__sum16 *)(skb->head + skb->csum_start + skb->csum_offset) = 0; return 0; diff --git a/drivers/net/ethernet/freescale/ucc_geth.c b/drivers/net/ethernet/freescale/ucc_geth.c index 
fab39e295441..36fc429298e3 100644 --- a/drivers/net/ethernet/freescale/ucc_geth.c +++ b/drivers/net/ethernet/freescale/ucc_geth.c @@ -2990,11 +2990,11 @@ static int ucc_geth_startup(struct ucc_geth_private *ugeth) if (ug_info->rxExtendedFiltering) { size += THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING; if (ug_info->largestexternallookupkeysize == - QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES) + QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_8_BYTES) size += THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING_8; if (ug_info->largestexternallookupkeysize == - QE_FLTR_TABLE_LOOKUP_KEY_SIZE_16_BYTES) + QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_16_BYTES) size += THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING_16; } diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.c b/drivers/net/ethernet/intel/igb/e1000_82575.c index a2db388cc31e..ee74f9536b31 100644 --- a/drivers/net/ethernet/intel/igb/e1000_82575.c +++ b/drivers/net/ethernet/intel/igb/e1000_82575.c @@ -1481,6 +1481,13 @@ static s32 igb_init_hw_82575(struct e1000_hw *hw) s32 ret_val; u16 i, rar_count = mac->rar_entry_count; + if ((hw->mac.type >= e1000_i210) && + !(igb_get_flash_presence_i210(hw))) { + ret_val = igb_pll_workaround_i210(hw); + if (ret_val) + return ret_val; + } + /* Initialize identification LED */ ret_val = igb_id_led_init(hw); if (ret_val) { diff --git a/drivers/net/ethernet/intel/igb/e1000_defines.h b/drivers/net/ethernet/intel/igb/e1000_defines.h index 2a8bb35c2df2..217f8138851b 100644 --- a/drivers/net/ethernet/intel/igb/e1000_defines.h +++ b/drivers/net/ethernet/intel/igb/e1000_defines.h @@ -46,14 +46,15 @@ #define E1000_CTRL_EXT_SDP3_DIR 0x00000800 /* SDP3 Data direction */ /* Physical Func Reset Done Indication */ -#define E1000_CTRL_EXT_PFRSTD 0x00004000 -#define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000 -#define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000 -#define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX 0x00400000 -#define E1000_CTRL_EXT_LINK_MODE_SGMII 0x00800000 -#define E1000_CTRL_EXT_LINK_MODE_GMII 0x00000000 -#define E1000_CTRL_EXT_EIAME 0x01000000 -#define E1000_CTRL_EXT_IRCA 0x00000001 +#define E1000_CTRL_EXT_PFRSTD 0x00004000 +#define E1000_CTRL_EXT_SDLPE 0X00040000 /* SerDes Low Power Enable */ +#define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000 +#define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000 +#define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX 0x00400000 +#define E1000_CTRL_EXT_LINK_MODE_SGMII 0x00800000 +#define E1000_CTRL_EXT_LINK_MODE_GMII 0x00000000 +#define E1000_CTRL_EXT_EIAME 0x01000000 +#define E1000_CTRL_EXT_IRCA 0x00000001 /* Interrupt delay cancellation */ /* Driver loaded bit for FW */ #define E1000_CTRL_EXT_DRV_LOAD 0x10000000 @@ -62,6 +63,7 @@ /* packet buffer parity error detection enabled */ /* descriptor FIFO parity error detection enable */ #define E1000_CTRL_EXT_PBA_CLR 0x80000000 /* PBA Clear */ +#define E1000_CTRL_EXT_PHYPDEN 0x00100000 #define E1000_I2CCMD_REG_ADDR_SHIFT 16 #define E1000_I2CCMD_PHY_ADDR_SHIFT 24 #define E1000_I2CCMD_OPCODE_READ 0x08000000 diff --git a/drivers/net/ethernet/intel/igb/e1000_hw.h b/drivers/net/ethernet/intel/igb/e1000_hw.h index 89925e405849..ce55ea5d750c 100644 --- a/drivers/net/ethernet/intel/igb/e1000_hw.h +++ b/drivers/net/ethernet/intel/igb/e1000_hw.h @@ -567,4 +567,7 @@ struct net_device *igb_get_hw_dev(struct e1000_hw *hw); /* These functions must be implemented by drivers */ s32 igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value); s32 igb_write_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value); + +void 
igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); +void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); #endif /* _E1000_HW_H_ */ diff --git a/drivers/net/ethernet/intel/igb/e1000_i210.c b/drivers/net/ethernet/intel/igb/e1000_i210.c index 337161f440dd..65d931669f81 100644 --- a/drivers/net/ethernet/intel/igb/e1000_i210.c +++ b/drivers/net/ethernet/intel/igb/e1000_i210.c @@ -834,3 +834,69 @@ s32 igb_init_nvm_params_i210(struct e1000_hw *hw) } return ret_val; } + +/** + * igb_pll_workaround_i210 + * @hw: pointer to the HW structure + * + * Works around an errata in the PLL circuit where it occasionally + * provides the wrong clock frequency after power up. + **/ +s32 igb_pll_workaround_i210(struct e1000_hw *hw) +{ + s32 ret_val; + u32 wuc, mdicnfg, ctrl, ctrl_ext, reg_val; + u16 nvm_word, phy_word, pci_word, tmp_nvm; + int i; + + /* Get and set needed register values */ + wuc = rd32(E1000_WUC); + mdicnfg = rd32(E1000_MDICNFG); + reg_val = mdicnfg & ~E1000_MDICNFG_EXT_MDIO; + wr32(E1000_MDICNFG, reg_val); + + /* Get data from NVM, or set default */ + ret_val = igb_read_invm_word_i210(hw, E1000_INVM_AUTOLOAD, + &nvm_word); + if (ret_val) + nvm_word = E1000_INVM_DEFAULT_AL; + tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL; + for (i = 0; i < E1000_MAX_PLL_TRIES; i++) { + /* check current state directly from internal PHY */ + igb_read_phy_reg_gs40g(hw, (E1000_PHY_PLL_FREQ_PAGE | + E1000_PHY_PLL_FREQ_REG), &phy_word); + if ((phy_word & E1000_PHY_PLL_UNCONF) + != E1000_PHY_PLL_UNCONF) { + ret_val = 0; + break; + } else { + ret_val = -E1000_ERR_PHY; + } + /* directly reset the internal PHY */ + ctrl = rd32(E1000_CTRL); + wr32(E1000_CTRL, ctrl|E1000_CTRL_PHY_RST); + + ctrl_ext = rd32(E1000_CTRL_EXT); + ctrl_ext |= (E1000_CTRL_EXT_PHYPDEN | E1000_CTRL_EXT_SDLPE); + wr32(E1000_CTRL_EXT, ctrl_ext); + + wr32(E1000_WUC, 0); + reg_val = (E1000_INVM_AUTOLOAD << 4) | (tmp_nvm << 16); + wr32(E1000_EEARBC_I210, reg_val); + + igb_read_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); + pci_word |= E1000_PCI_PMCSR_D3; + igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); + usleep_range(1000, 2000); + pci_word &= ~E1000_PCI_PMCSR_D3; + igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); + reg_val = (E1000_INVM_AUTOLOAD << 4) | (nvm_word << 16); + wr32(E1000_EEARBC_I210, reg_val); + + /* restore WUC register */ + wr32(E1000_WUC, wuc); + } + /* restore MDICNFG setting */ + wr32(E1000_MDICNFG, mdicnfg); + return ret_val; +} diff --git a/drivers/net/ethernet/intel/igb/e1000_i210.h b/drivers/net/ethernet/intel/igb/e1000_i210.h index 9f34976687ba..3442b6357d01 100644 --- a/drivers/net/ethernet/intel/igb/e1000_i210.h +++ b/drivers/net/ethernet/intel/igb/e1000_i210.h @@ -33,6 +33,7 @@ s32 igb_read_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 *data); s32 igb_write_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 data); s32 igb_init_nvm_params_i210(struct e1000_hw *hw); bool igb_get_flash_presence_i210(struct e1000_hw *hw); +s32 igb_pll_workaround_i210(struct e1000_hw *hw); #define E1000_STM_OPCODE 0xDB00 #define E1000_EEPROM_FLASH_SIZE_WORD 0x11 @@ -78,4 +79,15 @@ enum E1000_INVM_STRUCTURE_TYPE { #define NVM_LED_1_CFG_DEFAULT_I211 0x0184 #define NVM_LED_0_2_CFG_DEFAULT_I211 0x200C +/* PLL Defines */ +#define E1000_PCI_PMCSR 0x44 +#define E1000_PCI_PMCSR_D3 0x03 +#define E1000_MAX_PLL_TRIES 5 +#define E1000_PHY_PLL_UNCONF 0xFF +#define E1000_PHY_PLL_FREQ_PAGE 0xFC0000 +#define E1000_PHY_PLL_FREQ_REG 0x000E +#define E1000_INVM_DEFAULT_AL 0x202F +#define E1000_INVM_AUTOLOAD 0x0A +#define 
E1000_INVM_PLL_WO_VAL 0x0010 + #endif diff --git a/drivers/net/ethernet/intel/igb/e1000_regs.h b/drivers/net/ethernet/intel/igb/e1000_regs.h index 1cc4b1a7e597..f5ba4e4eafb9 100644 --- a/drivers/net/ethernet/intel/igb/e1000_regs.h +++ b/drivers/net/ethernet/intel/igb/e1000_regs.h @@ -66,6 +66,7 @@ #define E1000_PBA 0x01000 /* Packet Buffer Allocation - RW */ #define E1000_PBS 0x01008 /* Packet Buffer Size */ #define E1000_EEMNGCTL 0x01010 /* MNG EEprom Control */ +#define E1000_EEARBC_I210 0x12024 /* EEPROM Auto Read Bus Control */ #define E1000_EEWR 0x0102C /* EEPROM Write Register - RW */ #define E1000_I2CCMD 0x01028 /* SFPI2C Command Register - RW */ #define E1000_FRTIMER 0x01048 /* Free Running Timer - RW */ diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index f145adbb55ac..a9537ba7a5a0 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -7215,6 +7215,20 @@ static int igb_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) } } +void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value) +{ + struct igb_adapter *adapter = hw->back; + + pci_read_config_word(adapter->pdev, reg, value); +} + +void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value) +{ + struct igb_adapter *adapter = hw->back; + + pci_write_config_word(adapter->pdev, reg, *value); +} + s32 igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value) { struct igb_adapter *adapter = hw->back; @@ -7578,6 +7592,8 @@ static int igb_sriov_reinit(struct pci_dev *dev) if (netif_running(netdev)) igb_close(netdev); + else + igb_reset(adapter); igb_clear_interrupt_scheme(adapter); diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index 45beca17fa50..dadd9a5f6323 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -1207,7 +1207,7 @@ static u32 mvneta_txq_desc_csum(int l3_offs, int l3_proto, command = l3_offs << MVNETA_TX_L3_OFF_SHIFT; command |= ip_hdr_len << MVNETA_TX_IP_HLEN_SHIFT; - if (l3_proto == swab16(ETH_P_IP)) + if (l3_proto == htons(ETH_P_IP)) command |= MVNETA_TXD_IP_CSUM; else command |= MVNETA_TX_L3_IP6; @@ -2529,7 +2529,7 @@ static void mvneta_adjust_link(struct net_device *ndev) if (phydev->speed == SPEED_1000) val |= MVNETA_GMAC_CONFIG_GMII_SPEED; - else + else if (phydev->speed == SPEED_100) val |= MVNETA_GMAC_CONFIG_MII_SPEED; mvreg_write(pp, MVNETA_GMAC_AUTONEG_CONFIG, val); diff --git a/drivers/net/ethernet/marvell/skge.c b/drivers/net/ethernet/marvell/skge.c index 7f81ae66cc89..e912b6887d40 100644 --- a/drivers/net/ethernet/marvell/skge.c +++ b/drivers/net/ethernet/marvell/skge.c @@ -4199,6 +4199,13 @@ static struct dmi_system_id skge_32bit_dma_boards[] = { DMI_MATCH(DMI_BOARD_NAME, "P5NSLI") }, }, + { + .ident = "FUJITSU SIEMENS A8NE-FM", + .matches = { + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTek Computer INC."), + DMI_MATCH(DMI_BOARD_NAME, "A8NE-FM") + }, + }, {} }; diff --git a/drivers/net/ethernet/mellanox/mlx4/cq.c b/drivers/net/ethernet/mellanox/mlx4/cq.c index 80f725228f5b..56022d647837 100644 --- a/drivers/net/ethernet/mellanox/mlx4/cq.c +++ b/drivers/net/ethernet/mellanox/mlx4/cq.c @@ -294,8 +294,6 @@ int mlx4_cq_alloc(struct mlx4_dev *dev, int nent, init_completion(&cq->free); cq->irq = priv->eq_table.eq[cq->vector].irq; - cq->irq_affinity_change = false; - return 0; err_radix: diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c index 
4b2130760eed..82322b1c8411 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c @@ -128,11 +128,16 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq, mlx4_warn(mdev, "Failed assigning an EQ to %s, falling back to legacy EQ's\n", name); } + } } else { cq->vector = (cq->ring + 1 + priv->port) % mdev->dev->caps.num_comp_vectors; } + + cq->irq_desc = + irq_to_desc(mlx4_eq_get_irq(mdev->dev, + cq->vector)); } else { /* For TX we use the same irq per ring we assigned for the RX */ @@ -187,8 +192,6 @@ void mlx4_en_destroy_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq **pcq) mlx4_en_unmap_buffer(&cq->wqres.buf); mlx4_free_hwq_res(mdev->dev, &cq->wqres, cq->buf_size); if (priv->mdev->dev->caps.comp_pool && cq->vector) { - if (!cq->is_tx) - irq_set_affinity_hint(cq->mcq.irq, NULL); mlx4_release_eq(priv->mdev->dev, cq->vector); } cq->vector = 0; @@ -204,6 +207,7 @@ void mlx4_en_deactivate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq) if (!cq->is_tx) { napi_hash_del(&cq->napi); synchronize_rcu(); + irq_set_affinity_hint(cq->mcq.irq, NULL); } netif_napi_del(&cq->napi); diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c index fa1a069e14e6..68d763d2d030 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c @@ -417,6 +417,8 @@ static int mlx4_en_get_coalesce(struct net_device *dev, coal->tx_coalesce_usecs = priv->tx_usecs; coal->tx_max_coalesced_frames = priv->tx_frames; + coal->tx_max_coalesced_frames_irq = priv->tx_work_limit; + coal->rx_coalesce_usecs = priv->rx_usecs; coal->rx_max_coalesced_frames = priv->rx_frames; @@ -426,6 +428,7 @@ static int mlx4_en_get_coalesce(struct net_device *dev, coal->rx_coalesce_usecs_high = priv->rx_usecs_high; coal->rate_sample_interval = priv->sample_interval; coal->use_adaptive_rx_coalesce = priv->adaptive_rx_coal; + return 0; } @@ -434,6 +437,9 @@ static int mlx4_en_set_coalesce(struct net_device *dev, { struct mlx4_en_priv *priv = netdev_priv(dev); + if (!coal->tx_max_coalesced_frames_irq) + return -EINVAL; + priv->rx_frames = (coal->rx_max_coalesced_frames == MLX4_EN_AUTO_CONF) ? 
MLX4_EN_RX_COAL_TARGET : @@ -457,6 +463,7 @@ static int mlx4_en_set_coalesce(struct net_device *dev, priv->rx_usecs_high = coal->rx_coalesce_usecs_high; priv->sample_interval = coal->rate_sample_interval; priv->adaptive_rx_coal = coal->use_adaptive_rx_coalesce; + priv->tx_work_limit = coal->tx_max_coalesced_frames_irq; return mlx4_en_moderation_update(priv); } diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c index 7d4fb7bf2593..7345c43b019e 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c @@ -2336,7 +2336,7 @@ static void mlx4_en_add_vxlan_port(struct net_device *dev, struct mlx4_en_priv *priv = netdev_priv(dev); __be16 current_port; - if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_VXLAN_OFFLOADS)) + if (priv->mdev->dev->caps.tunnel_offload_mode != MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) return; if (sa_family == AF_INET6) @@ -2473,6 +2473,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port, MLX4_WQE_CTRL_SOLICITED); priv->num_tx_rings_p_up = mdev->profile.num_tx_rings_p_up; priv->tx_ring_num = prof->tx_ring_num; + priv->tx_work_limit = MLX4_EN_DEFAULT_TX_WORK; priv->tx_ring = kzalloc(sizeof(struct mlx4_en_tx_ring *) * MAX_TX_RINGS, GFP_KERNEL); diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c index d2d415732d99..5535862f27cc 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c @@ -40,6 +40,7 @@ #include <linux/if_ether.h> #include <linux/if_vlan.h> #include <linux/vmalloc.h> +#include <linux/irq.h> #include "mlx4_en.h" @@ -782,6 +783,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud PKT_HASH_TYPE_L3); skb_record_rx_queue(gro_skb, cq->ring); + skb_mark_napi_id(gro_skb, &cq->napi); if (ring->hwtstamp_rx_filter == HWTSTAMP_FILTER_ALL) { timestamp = mlx4_en_get_cqe_ts(cqe); @@ -896,16 +898,25 @@ int mlx4_en_poll_rx_cq(struct napi_struct *napi, int budget) /* If we used up all the quota - we're probably not done yet... */ if (done == budget) { + int cpu_curr; + const struct cpumask *aff; + INC_PERF_COUNTER(priv->pstats.napi_quota); - if (unlikely(cq->mcq.irq_affinity_change)) { - cq->mcq.irq_affinity_change = false; + + cpu_curr = smp_processor_id(); + aff = irq_desc_get_irq_data(cq->irq_desc)->affinity; + + if (unlikely(!cpumask_test_cpu(cpu_curr, aff))) { + /* Current cpu is not according to smp_irq_affinity - + * probably affinity changed. 
need to stop this NAPI + * poll, and restart it on the right CPU + */ napi_complete(napi); mlx4_en_arm_cq(priv, cq); return 0; } } else { /* Done for now */ - cq->mcq.irq_affinity_change = false; napi_complete(napi); mlx4_en_arm_cq(priv, cq); } diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c index 8be7483f8236..5045bab59633 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c @@ -351,9 +351,8 @@ int mlx4_en_free_tx_buf(struct net_device *dev, struct mlx4_en_tx_ring *ring) return cnt; } -static int mlx4_en_process_tx_cq(struct net_device *dev, - struct mlx4_en_cq *cq, - int budget) +static bool mlx4_en_process_tx_cq(struct net_device *dev, + struct mlx4_en_cq *cq) { struct mlx4_en_priv *priv = netdev_priv(dev); struct mlx4_cq *mcq = &cq->mcq; @@ -372,9 +371,10 @@ static int mlx4_en_process_tx_cq(struct net_device *dev, int factor = priv->cqe_factor; u64 timestamp = 0; int done = 0; + int budget = priv->tx_work_limit; if (!priv->port_up) - return 0; + return true; index = cons_index & size_mask; cqe = &buf[(index << factor) + factor]; @@ -447,7 +447,7 @@ static int mlx4_en_process_tx_cq(struct net_device *dev, netif_tx_wake_queue(ring->tx_queue); ring->wake_queue++; } - return done; + return done < budget; } void mlx4_en_tx_irq(struct mlx4_cq *mcq) @@ -467,24 +467,16 @@ int mlx4_en_poll_tx_cq(struct napi_struct *napi, int budget) struct mlx4_en_cq *cq = container_of(napi, struct mlx4_en_cq, napi); struct net_device *dev = cq->dev; struct mlx4_en_priv *priv = netdev_priv(dev); - int done; + int clean_complete; - done = mlx4_en_process_tx_cq(dev, cq, budget); + clean_complete = mlx4_en_process_tx_cq(dev, cq); + if (!clean_complete) + return budget; - /* If we used up all the quota - we're probably not done yet... 
*/ - if (done < budget) { - /* Done for now */ - cq->mcq.irq_affinity_change = false; - napi_complete(napi); - mlx4_en_arm_cq(priv, cq); - return done; - } else if (unlikely(cq->mcq.irq_affinity_change)) { - cq->mcq.irq_affinity_change = false; - napi_complete(napi); - mlx4_en_arm_cq(priv, cq); - return 0; - } - return budget; + napi_complete(napi); + mlx4_en_arm_cq(priv, cq); + + return 0; } static struct mlx4_en_tx_desc *mlx4_en_bounce_to_desc(struct mlx4_en_priv *priv, diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c index d954ec1eac17..2a004b347e1d 100644 --- a/drivers/net/ethernet/mellanox/mlx4/eq.c +++ b/drivers/net/ethernet/mellanox/mlx4/eq.c @@ -53,11 +53,6 @@ enum { MLX4_EQ_ENTRY_SIZE = 0x20 }; -struct mlx4_irq_notify { - void *arg; - struct irq_affinity_notify notify; -}; - #define MLX4_EQ_STATUS_OK ( 0 << 28) #define MLX4_EQ_STATUS_WRITE_FAIL (10 << 28) #define MLX4_EQ_OWNER_SW ( 0 << 24) @@ -1088,57 +1083,6 @@ static void mlx4_unmap_clr_int(struct mlx4_dev *dev) iounmap(priv->clr_base); } -static void mlx4_irq_notifier_notify(struct irq_affinity_notify *notify, - const cpumask_t *mask) -{ - struct mlx4_irq_notify *n = container_of(notify, - struct mlx4_irq_notify, - notify); - struct mlx4_priv *priv = (struct mlx4_priv *)n->arg; - struct radix_tree_iter iter; - void **slot; - - radix_tree_for_each_slot(slot, &priv->cq_table.tree, &iter, 0) { - struct mlx4_cq *cq = (struct mlx4_cq *)(*slot); - - if (cq->irq == notify->irq) - cq->irq_affinity_change = true; - } -} - -static void mlx4_release_irq_notifier(struct kref *ref) -{ - struct mlx4_irq_notify *n = container_of(ref, struct mlx4_irq_notify, - notify.kref); - kfree(n); -} - -static void mlx4_assign_irq_notifier(struct mlx4_priv *priv, - struct mlx4_dev *dev, int irq) -{ - struct mlx4_irq_notify *irq_notifier = NULL; - int err = 0; - - irq_notifier = kzalloc(sizeof(*irq_notifier), GFP_KERNEL); - if (!irq_notifier) { - mlx4_warn(dev, "Failed to allocate irq notifier. irq %d\n", - irq); - return; - } - - irq_notifier->notify.irq = irq; - irq_notifier->notify.notify = mlx4_irq_notifier_notify; - irq_notifier->notify.release = mlx4_release_irq_notifier; - irq_notifier->arg = priv; - err = irq_set_affinity_notifier(irq, &irq_notifier->notify); - if (err) { - kfree(irq_notifier); - irq_notifier = NULL; - mlx4_warn(dev, "Failed to set irq notifier. 
irq %d\n", irq); - } -} - - int mlx4_alloc_eq_table(struct mlx4_dev *dev) { struct mlx4_priv *priv = mlx4_priv(dev); @@ -1409,8 +1353,6 @@ int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap, continue; /*we dont want to break here*/ } - mlx4_assign_irq_notifier(priv, dev, - priv->eq_table.eq[vec].irq); eq_set_ci(&priv->eq_table.eq[vec], 1); } @@ -1427,6 +1369,14 @@ int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap, } EXPORT_SYMBOL(mlx4_assign_eq); +int mlx4_eq_get_irq(struct mlx4_dev *dev, int vec) +{ + struct mlx4_priv *priv = mlx4_priv(dev); + + return priv->eq_table.eq[vec].irq; +} +EXPORT_SYMBOL(mlx4_eq_get_irq); + void mlx4_release_eq(struct mlx4_dev *dev, int vec) { struct mlx4_priv *priv = mlx4_priv(dev); @@ -1438,9 +1388,6 @@ void mlx4_release_eq(struct mlx4_dev *dev, int vec) Belonging to a legacy EQ*/ mutex_lock(&priv->msix_ctl.pool_lock); if (priv->msix_ctl.pool_bm & 1ULL << i) { - irq_set_affinity_notifier( - priv->eq_table.eq[vec].irq, - NULL); free_irq(priv->eq_table.eq[vec].irq, &priv->eq_table.eq[vec]); priv->msix_ctl.pool_bm &= ~(1ULL << i); diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c index 5f42f6d6e4c6..82ab427290c3 100644 --- a/drivers/net/ethernet/mellanox/mlx4/main.c +++ b/drivers/net/ethernet/mellanox/mlx4/main.c @@ -2439,7 +2439,8 @@ slave_start: (num_vfs_argc > 1 || probe_vfs_argc > 1)) { mlx4_err(dev, "Invalid syntax of num_vfs/probe_vfs with IB port - single port VFs syntax is only supported when all ports are configured as ethernet\n"); - goto err_close; + err = -EINVAL; + goto err_master_mfunc; } for (i = 0; i < sizeof(nvfs)/sizeof(nvfs[0]); i++) { unsigned j; diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h index 0e15295bedd6..d72a5a894fc6 100644 --- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h @@ -126,6 +126,8 @@ enum { #define MAX_TX_RINGS (MLX4_EN_MAX_TX_RING_P_UP * \ MLX4_EN_NUM_UP) +#define MLX4_EN_DEFAULT_TX_WORK 256 + /* Target number of packets to coalesce with interrupt moderation */ #define MLX4_EN_RX_COAL_TARGET 44 #define MLX4_EN_RX_COAL_TIME 0x10 @@ -343,6 +345,7 @@ struct mlx4_en_cq { #define CQ_USER_PEND (MLX4_EN_CQ_STATE_POLL | MLX4_EN_CQ_STATE_POLL_YIELD) spinlock_t poll_lock; /* protects from LLS/napi conflicts */ #endif /* CONFIG_NET_RX_BUSY_POLL */ + struct irq_desc *irq_desc; }; struct mlx4_en_port_profile { @@ -542,6 +545,7 @@ struct mlx4_en_priv { __be32 ctrl_flags; u32 flags; u8 num_tx_rings_p_up; + u32 tx_work_limit; u32 tx_ring_num; u32 rx_ring_num; u32 rx_skb_size; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mr.c b/drivers/net/ethernet/mellanox/mlx5/core/mr.c index ba0401d4af50..184c3615f479 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/mr.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/mr.c @@ -94,6 +94,11 @@ int mlx5_core_create_mkey(struct mlx5_core_dev *dev, struct mlx5_core_mr *mr, write_lock_irq(&table->lock); err = radix_tree_insert(&table->tree, mlx5_base_mkey(mr->key), mr); write_unlock_irq(&table->lock); + if (err) { + mlx5_core_warn(dev, "failed radix tree insert of mr 0x%x, %d\n", + mlx5_base_mkey(mr->key), err); + mlx5_core_destroy_mkey(dev, mr); + } return err; } @@ -104,12 +109,22 @@ int mlx5_core_destroy_mkey(struct mlx5_core_dev *dev, struct mlx5_core_mr *mr) struct mlx5_mr_table *table = &dev->priv.mr_table; struct mlx5_destroy_mkey_mbox_in in; struct mlx5_destroy_mkey_mbox_out out; + struct 
mlx5_core_mr *deleted_mr; unsigned long flags; int err; memset(&in, 0, sizeof(in)); memset(&out, 0, sizeof(out)); + write_lock_irqsave(&table->lock, flags); + deleted_mr = radix_tree_delete(&table->tree, mlx5_base_mkey(mr->key)); + write_unlock_irqrestore(&table->lock, flags); + if (!deleted_mr) { + mlx5_core_warn(dev, "failed radix tree delete of mr 0x%x\n", + mlx5_base_mkey(mr->key)); + return -ENOENT; + } + in.hdr.opcode = cpu_to_be16(MLX5_CMD_OP_DESTROY_MKEY); in.mkey = cpu_to_be32(mlx5_mkey_to_idx(mr->key)); err = mlx5_cmd_exec(dev, &in, sizeof(in), &out, sizeof(out)); @@ -119,10 +134,6 @@ int mlx5_core_destroy_mkey(struct mlx5_core_dev *dev, struct mlx5_core_mr *mr) if (out.hdr.status) return mlx5_cmd_status_to_err(&out.hdr); - write_lock_irqsave(&table->lock, flags); - radix_tree_delete(&table->tree, mlx5_base_mkey(mr->key)); - write_unlock_irqrestore(&table->lock, flags); - return err; } EXPORT_SYMBOL(mlx5_core_destroy_mkey); diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c index be425ad5e824..61623e9af574 100644 --- a/drivers/net/ethernet/realtek/r8169.c +++ b/drivers/net/ethernet/realtek/r8169.c @@ -538,6 +538,7 @@ enum rtl_register_content { MagicPacket = (1 << 5), /* Wake up when receives a Magic Packet */ LinkUp = (1 << 4), /* Wake up when the cable connection is re-established */ Jumbo_En0 = (1 << 2), /* 8168 only. Reserved in the 8168b */ + Rdy_to_L23 = (1 << 1), /* L23 Enable */ Beacon_en = (1 << 0), /* 8168 only. Reserved in the 8168b */ /* Config4 register */ @@ -4239,6 +4240,8 @@ static void rtl_init_rxcfg(struct rtl8169_private *tp) RTL_W32(RxConfig, RX128_INT_EN | RX_MULTI_EN | RX_DMA_BURST); break; case RTL_GIGA_MAC_VER_40: + RTL_W32(RxConfig, RX128_INT_EN | RX_MULTI_EN | RX_DMA_BURST | RX_EARLY_OFF); + break; case RTL_GIGA_MAC_VER_41: case RTL_GIGA_MAC_VER_42: case RTL_GIGA_MAC_VER_43: @@ -4897,6 +4900,21 @@ static void rtl_enable_clock_request(struct pci_dev *pdev) PCI_EXP_LNKCTL_CLKREQ_EN); } +static void rtl_pcie_state_l2l3_enable(struct rtl8169_private *tp, bool enable) +{ + void __iomem *ioaddr = tp->mmio_addr; + u8 data; + + data = RTL_R8(Config3); + + if (enable) + data |= Rdy_to_L23; + else + data &= ~Rdy_to_L23; + + RTL_W8(Config3, data); +} + #define R8168_CPCMD_QUIRK_MASK (\ EnableBist | \ Mac_dbgo_oe | \ @@ -5246,6 +5264,7 @@ static void rtl_hw_start_8411(struct rtl8169_private *tp) }; rtl_hw_start_8168f(tp); + rtl_pcie_state_l2l3_enable(tp, false); rtl_ephy_init(tp, e_info_8168f_1, ARRAY_SIZE(e_info_8168f_1)); @@ -5284,6 +5303,8 @@ static void rtl_hw_start_8168g_1(struct rtl8169_private *tp) rtl_w1w0_eri(tp, 0x2fc, ERIAR_MASK_0001, 0x01, 0x06, ERIAR_EXGMAC); rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_0011, 0x0000, 0x1000, ERIAR_EXGMAC); + + rtl_pcie_state_l2l3_enable(tp, false); } static void rtl_hw_start_8168g_2(struct rtl8169_private *tp) @@ -5536,6 +5557,8 @@ static void rtl_hw_start_8105e_1(struct rtl8169_private *tp) RTL_W8(DLLPR, RTL_R8(DLLPR) | PFM_EN); rtl_ephy_init(tp, e_info_8105e_1, ARRAY_SIZE(e_info_8105e_1)); + + rtl_pcie_state_l2l3_enable(tp, false); } static void rtl_hw_start_8105e_2(struct rtl8169_private *tp) @@ -5571,6 +5594,8 @@ static void rtl_hw_start_8402(struct rtl8169_private *tp) rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0e00, 0xff00, ERIAR_EXGMAC); + + rtl_pcie_state_l2l3_enable(tp, false); } static void rtl_hw_start_8106(struct rtl8169_private *tp) @@ -5583,6 +5608,8 @@ 
static void rtl_hw_start_8106(struct rtl8169_private *tp) RTL_W32(MISC, (RTL_R32(MISC) | DISABLE_LAN_EN) & ~EARLY_TALLY_EN); RTL_W8(MCU, RTL_R8(MCU) | EN_NDP | EN_OOB_RESET); RTL_W8(DLLPR, RTL_R8(DLLPR) & ~PFM_EN); + + rtl_pcie_state_l2l3_enable(tp, false); } static void rtl_hw_start_8101(struct net_device *dev) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c index b3e148ef5683..9d3748361a1e 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c @@ -320,11 +320,8 @@ static void dwmac1000_set_eee_timer(void __iomem *ioaddr, int ls, int tw) static void dwmac1000_ctrl_ane(void __iomem *ioaddr, bool restart) { - u32 value; - - value = readl(ioaddr + GMAC_AN_CTRL); /* auto negotiation enable and External Loopback enable */ - value = GMAC_AN_CTRL_ANE | GMAC_AN_CTRL_ELE; + u32 value = GMAC_AN_CTRL_ANE | GMAC_AN_CTRL_ELE; if (restart) value |= GMAC_AN_CTRL_RAN; diff --git a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c index 7e6628a91514..1e2bcf5f89e1 100644 --- a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c +++ b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c @@ -145,7 +145,7 @@ static void enh_desc_get_ext_status(void *data, struct stmmac_extra_stats *x, x->rx_msg_type_delay_req++; else if (p->des4.erx.msg_type == RDES_EXT_DELAY_RESP) x->rx_msg_type_delay_resp++; - else if (p->des4.erx.msg_type == RDES_EXT_DELAY_REQ) + else if (p->des4.erx.msg_type == RDES_EXT_PDELAY_REQ) x->rx_msg_type_pdelay_req++; else if (p->des4.erx.msg_type == RDES_EXT_PDELAY_RESP) x->rx_msg_type_pdelay_resp++; diff --git a/drivers/net/ethernet/sun/sunvnet.c b/drivers/net/ethernet/sun/sunvnet.c index 1c24a8f368bd..fd411d6e19a2 100644 --- a/drivers/net/ethernet/sun/sunvnet.c +++ b/drivers/net/ethernet/sun/sunvnet.c @@ -1083,6 +1083,24 @@ static struct vnet *vnet_find_or_create(const u64 *local_mac) return vp; } +static void vnet_cleanup(void) +{ + struct vnet *vp; + struct net_device *dev; + + mutex_lock(&vnet_list_mutex); + while (!list_empty(&vnet_list)) { + vp = list_first_entry(&vnet_list, struct vnet, list); + list_del(&vp->list); + dev = vp->dev; + /* vio_unregister_driver() should have cleaned up port_list */ + BUG_ON(!list_empty(&vp->port_list)); + unregister_netdev(dev); + free_netdev(dev); + } + mutex_unlock(&vnet_list_mutex); +} + static const char *local_mac_prop = "local-mac-address"; static struct vnet *vnet_find_parent(struct mdesc_handle *hp, @@ -1240,7 +1258,6 @@ static int vnet_port_remove(struct vio_dev *vdev) kfree(port); - unregister_netdev(vp->dev); } return 0; } @@ -1268,6 +1285,7 @@ static int __init vnet_init(void) static void __exit vnet_exit(void) { vio_unregister_driver(&vnet_port_driver); + vnet_cleanup(); } module_init(vnet_init); diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c index ff380dac6629..b988d16cd34e 100644 --- a/drivers/net/ethernet/ti/cpsw.c +++ b/drivers/net/ethernet/ti/cpsw.c @@ -1212,7 +1212,12 @@ static int cpsw_ndo_open(struct net_device *ndev) for_each_slave(priv, cpsw_slave_open, priv); /* Add default VLAN */ - cpsw_add_default_vlan(priv); + if (!priv->data.dual_emac) + cpsw_add_default_vlan(priv); + else + cpsw_ale_add_vlan(priv->ale, priv->data.default_vlan, + ALE_ALL_PORTS << priv->host_port, + ALE_ALL_PORTS << priv->host_port, 0, 0); if (!cpsw_common_res_usage_state(priv)) { /* setup tx dma to fixed prio and zero offset */ diff --git 
a/drivers/net/ethernet/tile/tilegx.c b/drivers/net/ethernet/tile/tilegx.c index 14389f841d43..4c70360967c2 100644 --- a/drivers/net/ethernet/tile/tilegx.c +++ b/drivers/net/ethernet/tile/tilegx.c @@ -2191,7 +2191,6 @@ static void tile_net_setup(struct net_device *dev) static void tile_net_dev_init(const char *name, const uint8_t *mac) { int ret; - int i; struct net_device *dev; struct tile_net_priv *priv; diff --git a/drivers/net/fddi/defxx.c b/drivers/net/fddi/defxx.c index eb78203cd58e..2aa57270838f 100644 --- a/drivers/net/fddi/defxx.c +++ b/drivers/net/fddi/defxx.c @@ -291,7 +291,11 @@ static int dfx_hw_dma_uninit(DFX_board_t *bp, PI_UINT32 type); static int dfx_rcv_init(DFX_board_t *bp, int get_buffers); static void dfx_rcv_queue_process(DFX_board_t *bp); +#ifdef DYNAMIC_BUFFERS static void dfx_rcv_flush(DFX_board_t *bp); +#else +static inline void dfx_rcv_flush(DFX_board_t *bp) {} +#endif static netdev_tx_t dfx_xmt_queue_pkt(struct sk_buff *skb, struct net_device *dev); @@ -2849,7 +2853,7 @@ static int dfx_hw_dma_uninit(DFX_board_t *bp, PI_UINT32 type) * Align an sk_buff to a boundary power of 2 * */ - +#ifdef DYNAMIC_BUFFERS static void my_skb_align(struct sk_buff *skb, int n) { unsigned long x = (unsigned long)skb->data; @@ -2859,7 +2863,7 @@ static void my_skb_align(struct sk_buff *skb, int n) skb_reserve(skb, v - x); } - +#endif /* * ================ @@ -3074,10 +3078,7 @@ static void dfx_rcv_queue_process( break; } else { -#ifndef DYNAMIC_BUFFERS - if (! rx_in_place) -#endif - { + if (!rx_in_place) { /* Receive buffer allocated, pass receive packet up */ skb_copy_to_linear_data(skb, @@ -3453,10 +3454,6 @@ static void dfx_rcv_flush( DFX_board_t *bp ) } } -#else -static inline void dfx_rcv_flush( DFX_board_t *bp ) -{ -} #endif /* DYNAMIC_BUFFERS */ /* diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c index c041f63a6d30..4ed38eaecea8 100644 --- a/drivers/net/hyperv/netvsc.c +++ b/drivers/net/hyperv/netvsc.c @@ -189,7 +189,7 @@ static int netvsc_destroy_buf(struct netvsc_device *net_device) "unable to teardown send buffer's gpadl\n"); return ret; } - net_device->recv_buf_gpadl_handle = 0; + net_device->send_buf_gpadl_handle = 0; } if (net_device->send_buf) { /* Free up the receive buffer */ diff --git a/drivers/net/ieee802154/at86rf230.c b/drivers/net/ieee802154/at86rf230.c index 4517b149ed07..50899416f668 100644 --- a/drivers/net/ieee802154/at86rf230.c +++ b/drivers/net/ieee802154/at86rf230.c @@ -1137,6 +1137,8 @@ static int at86rf230_probe(struct spi_device *spi) dev->flags = IEEE802154_HW_OMIT_CKSUM | IEEE802154_HW_AACK; irq_type = irq_get_trigger_type(spi->irq); + if (!irq_type) + irq_type = IRQF_TRIGGER_RISING; if (irq_type & (IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING)) { irq_worker = at86rf230_irqwork; irq_handler = at86rf230_isr; @@ -1168,7 +1170,8 @@ static int at86rf230_probe(struct spi_device *spi) if (rc) goto err_hw_init; - rc = devm_request_irq(&spi->dev, spi->irq, irq_handler, IRQF_SHARED, + rc = devm_request_irq(&spi->dev, spi->irq, irq_handler, + IRQF_SHARED | irq_type, dev_name(&spi->dev), lp); if (rc) goto err_hw_init; diff --git a/drivers/net/phy/at803x.c b/drivers/net/phy/at803x.c index 6c622aedbae1..fdc1b418fa6a 100644 --- a/drivers/net/phy/at803x.c +++ b/drivers/net/phy/at803x.c @@ -16,9 +16,13 @@ #include <linux/string.h> #include <linux/netdevice.h> #include <linux/etherdevice.h> +#include <linux/of_gpio.h> +#include <linux/gpio/consumer.h> #define AT803X_INTR_ENABLE 0x12 #define AT803X_INTR_STATUS 0x13 +#define AT803X_SMART_SPEED 0x14 
+#define AT803X_LED_CONTROL 0x18 #define AT803X_WOL_ENABLE 0x01 #define AT803X_DEVICE_ADDR 0x03 #define AT803X_LOC_MAC_ADDR_0_15_OFFSET 0x804C @@ -35,10 +39,52 @@ #define AT803X_DEBUG_SYSTEM_MODE_CTRL 0x05 #define AT803X_DEBUG_RGMII_TX_CLK_DLY BIT(8) +#define ATH8030_PHY_ID 0x004dd076 +#define ATH8031_PHY_ID 0x004dd074 +#define ATH8035_PHY_ID 0x004dd072 + MODULE_DESCRIPTION("Atheros 803x PHY driver"); MODULE_AUTHOR("Matus Ujhelyi"); MODULE_LICENSE("GPL"); +struct at803x_priv { + bool phy_reset:1; + struct gpio_desc *gpiod_reset; +}; + +struct at803x_context { + u16 bmcr; + u16 advertise; + u16 control1000; + u16 int_enable; + u16 smart_speed; + u16 led_control; +}; + +/* save relevant PHY registers to private copy */ +static void at803x_context_save(struct phy_device *phydev, + struct at803x_context *context) +{ + context->bmcr = phy_read(phydev, MII_BMCR); + context->advertise = phy_read(phydev, MII_ADVERTISE); + context->control1000 = phy_read(phydev, MII_CTRL1000); + context->int_enable = phy_read(phydev, AT803X_INTR_ENABLE); + context->smart_speed = phy_read(phydev, AT803X_SMART_SPEED); + context->led_control = phy_read(phydev, AT803X_LED_CONTROL); +} + +/* restore relevant PHY registers from private copy */ +static void at803x_context_restore(struct phy_device *phydev, + const struct at803x_context *context) +{ + phy_write(phydev, MII_BMCR, context->bmcr); + phy_write(phydev, MII_ADVERTISE, context->advertise); + phy_write(phydev, MII_CTRL1000, context->control1000); + phy_write(phydev, AT803X_INTR_ENABLE, context->int_enable); + phy_write(phydev, AT803X_SMART_SPEED, context->smart_speed); + phy_write(phydev, AT803X_LED_CONTROL, context->led_control); +} + static int at803x_set_wol(struct phy_device *phydev, struct ethtool_wolinfo *wol) { @@ -142,6 +188,26 @@ static int at803x_resume(struct phy_device *phydev) return 0; } +static int at803x_probe(struct phy_device *phydev) +{ + struct device *dev = &phydev->dev; + struct at803x_priv *priv; + + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + priv->gpiod_reset = devm_gpiod_get(dev, "reset"); + if (IS_ERR(priv->gpiod_reset)) + priv->gpiod_reset = NULL; + else + gpiod_direction_output(priv->gpiod_reset, 1); + + phydev->priv = priv; + + return 0; +} + static int at803x_config_init(struct phy_device *phydev) { int ret; @@ -189,58 +255,99 @@ static int at803x_config_intr(struct phy_device *phydev) return err; } +static void at803x_link_change_notify(struct phy_device *phydev) +{ + struct at803x_priv *priv = phydev->priv; + + /* + * Conduct a hardware reset for AT8030 every time a link loss is + * signalled. This is necessary to circumvent a hardware bug that + * occurs when the cable is unplugged while TX packets are pending + * in the FIFO. In such cases, the FIFO enters an error mode it + * cannot recover from by software. 
+ */ + if (phydev->drv->phy_id == ATH8030_PHY_ID) { + if (phydev->state == PHY_NOLINK) { + if (priv->gpiod_reset && !priv->phy_reset) { + struct at803x_context context; + + at803x_context_save(phydev, &context); + + gpiod_set_value(priv->gpiod_reset, 0); + msleep(1); + gpiod_set_value(priv->gpiod_reset, 1); + msleep(1); + + at803x_context_restore(phydev, &context); + + dev_dbg(&phydev->dev, "%s(): phy was reset\n", + __func__); + priv->phy_reset = true; + } + } else { + priv->phy_reset = false; + } + } +} + static struct phy_driver at803x_driver[] = { { /* ATHEROS 8035 */ - .phy_id = 0x004dd072, - .name = "Atheros 8035 ethernet", - .phy_id_mask = 0xffffffef, - .config_init = at803x_config_init, - .set_wol = at803x_set_wol, - .get_wol = at803x_get_wol, - .suspend = at803x_suspend, - .resume = at803x_resume, - .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, - .config_aneg = genphy_config_aneg, - .read_status = genphy_read_status, - .driver = { + .phy_id = ATH8035_PHY_ID, + .name = "Atheros 8035 ethernet", + .phy_id_mask = 0xffffffef, + .probe = at803x_probe, + .config_init = at803x_config_init, + .link_change_notify = at803x_link_change_notify, + .set_wol = at803x_set_wol, + .get_wol = at803x_get_wol, + .suspend = at803x_suspend, + .resume = at803x_resume, + .features = PHY_GBIT_FEATURES, + .flags = PHY_HAS_INTERRUPT, + .config_aneg = genphy_config_aneg, + .read_status = genphy_read_status, + .driver = { .owner = THIS_MODULE, }, }, { /* ATHEROS 8030 */ - .phy_id = 0x004dd076, - .name = "Atheros 8030 ethernet", - .phy_id_mask = 0xffffffef, - .config_init = at803x_config_init, - .set_wol = at803x_set_wol, - .get_wol = at803x_get_wol, - .suspend = at803x_suspend, - .resume = at803x_resume, - .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, - .config_aneg = genphy_config_aneg, - .read_status = genphy_read_status, - .driver = { + .phy_id = ATH8030_PHY_ID, + .name = "Atheros 8030 ethernet", + .phy_id_mask = 0xffffffef, + .probe = at803x_probe, + .config_init = at803x_config_init, + .link_change_notify = at803x_link_change_notify, + .set_wol = at803x_set_wol, + .get_wol = at803x_get_wol, + .suspend = at803x_suspend, + .resume = at803x_resume, + .features = PHY_GBIT_FEATURES, + .flags = PHY_HAS_INTERRUPT, + .config_aneg = genphy_config_aneg, + .read_status = genphy_read_status, + .driver = { .owner = THIS_MODULE, }, }, { /* ATHEROS 8031 */ - .phy_id = 0x004dd074, - .name = "Atheros 8031 ethernet", - .phy_id_mask = 0xffffffef, - .config_init = at803x_config_init, - .set_wol = at803x_set_wol, - .get_wol = at803x_get_wol, - .suspend = at803x_suspend, - .resume = at803x_resume, - .features = PHY_GBIT_FEATURES, - .flags = PHY_HAS_INTERRUPT, - .config_aneg = genphy_config_aneg, - .read_status = genphy_read_status, - .ack_interrupt = &at803x_ack_interrupt, - .config_intr = &at803x_config_intr, - .driver = { + .phy_id = ATH8031_PHY_ID, + .name = "Atheros 8031 ethernet", + .phy_id_mask = 0xffffffef, + .probe = at803x_probe, + .config_init = at803x_config_init, + .link_change_notify = at803x_link_change_notify, + .set_wol = at803x_set_wol, + .get_wol = at803x_get_wol, + .suspend = at803x_suspend, + .resume = at803x_resume, + .features = PHY_GBIT_FEATURES, + .flags = PHY_HAS_INTERRUPT, + .config_aneg = genphy_config_aneg, + .read_status = genphy_read_status, + .ack_interrupt = &at803x_ack_interrupt, + .config_intr = &at803x_config_intr, + .driver = { .owner = THIS_MODULE, }, } }; @@ -260,9 +367,9 @@ module_init(atheros_init); module_exit(atheros_exit); static struct mdio_device_id 
__maybe_unused atheros_tbl[] = { - { 0x004dd076, 0xffffffef }, - { 0x004dd074, 0xffffffef }, - { 0x004dd072, 0xffffffef }, + { ATH8030_PHY_ID, 0xffffffef }, + { ATH8031_PHY_ID, 0xffffffef }, + { ATH8035_PHY_ID, 0xffffffef }, { } }; diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c index 6a999e6814a0..9408157a246c 100644 --- a/drivers/net/phy/dp83640.c +++ b/drivers/net/phy/dp83640.c @@ -1323,15 +1323,15 @@ static bool dp83640_rxtstamp(struct phy_device *phydev, { struct dp83640_private *dp83640 = phydev->priv; - if (!dp83640->hwts_rx_en) - return false; - if (is_status_frame(skb, type)) { decode_status_frame(dp83640, skb); kfree_skb(skb); return true; } + if (!dp83640->hwts_rx_en) + return false; + SKB_PTP_TYPE(skb) = type; skb_queue_tail(&dp83640->rx_queue, skb); schedule_work(&dp83640->ts_work); diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c index 2e58aa54484c..4eaadcfcb0fe 100644 --- a/drivers/net/phy/mdio_bus.c +++ b/drivers/net/phy/mdio_bus.c @@ -187,6 +187,50 @@ struct mii_bus *of_mdio_find_bus(struct device_node *mdio_bus_np) return d ? to_mii_bus(d) : NULL; } EXPORT_SYMBOL(of_mdio_find_bus); + +/* Walk the list of subnodes of a mdio bus and look for a node that matches the + * phy's address with its 'reg' property. If found, set the of_node pointer for + * the phy. This allows auto-probed pyh devices to be supplied with information + * passed in via DT. + */ +static void of_mdiobus_link_phydev(struct mii_bus *mdio, + struct phy_device *phydev) +{ + struct device *dev = &phydev->dev; + struct device_node *child; + + if (dev->of_node || !mdio->dev.of_node) + return; + + for_each_available_child_of_node(mdio->dev.of_node, child) { + int addr; + int ret; + + ret = of_property_read_u32(child, "reg", &addr); + if (ret < 0) { + dev_err(dev, "%s has invalid PHY address\n", + child->full_name); + continue; + } + + /* A PHY must have a reg property in the range [0-31] */ + if (addr >= PHY_MAX_ADDR) { + dev_err(dev, "%s PHY address %i is too large\n", + child->full_name, addr); + continue; + } + + if (addr == phydev->addr) { + dev->of_node = child; + return; + } + } +} +#else /* !IS_ENABLED(CONFIG_OF_MDIO) */ +static inline void of_mdiobus_link_phydev(struct mii_bus *mdio, + struct phy_device *phydev) +{ +} #endif /** diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c index 3bc079a67a3d..f7c61812ea4a 100644 --- a/drivers/net/phy/phy.c +++ b/drivers/net/phy/phy.c @@ -720,6 +720,9 @@ void phy_state_machine(struct work_struct *work) mutex_lock(&phydev->lock); + if (phydev->drv->link_change_notify) + phydev->drv->link_change_notify(phydev); + switch (phydev->state) { case PHY_DOWN: case PHY_STARTING: diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c index 91d6c1272fcf..d5b77ef3a210 100644 --- a/drivers/net/ppp/ppp_generic.c +++ b/drivers/net/ppp/ppp_generic.c @@ -539,7 +539,7 @@ static int get_filter(void __user *arg, struct sock_filter **p) { struct sock_fprog uprog; struct sock_filter *code = NULL; - int len, err; + int len; if (copy_from_user(&uprog, arg, sizeof(uprog))) return -EFAULT; @@ -554,12 +554,6 @@ static int get_filter(void __user *arg, struct sock_filter **p) if (IS_ERR(code)) return PTR_ERR(code); - err = sk_chk_filter(code, uprog.len); - if (err) { - kfree(code); - return err; - } - *p = code; return uprog.len; } @@ -763,10 +757,15 @@ static long ppp_ioctl(struct file *file, unsigned int cmd, unsigned long arg) }; ppp_lock(ppp); - if (ppp->pass_filter) + if (ppp->pass_filter) { 
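/*
 * The ppp_ioctl() rework unfolding in this hunk boils down to a simple
 * pattern: destroy any previously installed unattached filter, then create a
 * new one only when a program was actually supplied -- which, as far as one
 * can tell from the code, also lets userspace clear a filter by handing in an
 * empty program. The separate sk_chk_filter() call in get_filter() goes away
 * because the create path apparently runs the checker itself. A condensed
 * sketch of that pattern follows; the helper name ppp_replace_filter() is
 * hypothetical, not part of the patch, and it assumes the 3.16-era
 * sk_unattached_filter_*() API and <linux/filter.h> types used in this hunk.
 */
static int ppp_replace_filter(struct sk_filter **filterp,
			      struct sock_fprog_kern *fprog)
{
	/* Drop whatever filter was installed before, if any. */
	if (*filterp) {
		sk_unattached_filter_destroy(*filterp);
		*filterp = NULL;
	}

	/* An empty program means "no filter": nothing more to do. */
	if (!fprog->filter)
		return 0;

	/* The create path validates the program internally. */
	return sk_unattached_filter_create(filterp, fprog);
}
/* Both PPPIOCSPASS and PPPIOCSACTIVE would call such a helper under ppp_lock(). */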
sk_unattached_filter_destroy(ppp->pass_filter); - err = sk_unattached_filter_create(&ppp->pass_filter, - &fprog); + ppp->pass_filter = NULL; + } + if (fprog.filter != NULL) + err = sk_unattached_filter_create(&ppp->pass_filter, + &fprog); + else + err = 0; kfree(code); ppp_unlock(ppp); } @@ -784,10 +783,15 @@ static long ppp_ioctl(struct file *file, unsigned int cmd, unsigned long arg) }; ppp_lock(ppp); - if (ppp->active_filter) + if (ppp->active_filter) { sk_unattached_filter_destroy(ppp->active_filter); - err = sk_unattached_filter_create(&ppp->active_filter, - &fprog); + ppp->active_filter = NULL; + } + if (fprog.filter != NULL) + err = sk_unattached_filter_create(&ppp->active_filter, + &fprog); + else + err = 0; kfree(code); ppp_unlock(ppp); } diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c index 2ea7efd11857..6c9c16d76935 100644 --- a/drivers/net/ppp/pppoe.c +++ b/drivers/net/ppp/pppoe.c @@ -675,7 +675,7 @@ static int pppoe_connect(struct socket *sock, struct sockaddr *uservaddr, po->chan.hdrlen = (sizeof(struct pppoe_hdr) + dev->hard_header_len); - po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr); + po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr) - 2; po->chan.private = sk; po->chan.ops = &pppoe_chan_ops; diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c index ad4a94e9ff57..87526443841f 100644 --- a/drivers/net/slip/slip.c +++ b/drivers/net/slip/slip.c @@ -83,6 +83,7 @@ #include <linux/delay.h> #include <linux/init.h> #include <linux/slab.h> +#include <linux/workqueue.h> #include "slip.h" #ifdef CONFIG_INET #include <linux/ip.h> @@ -416,36 +417,46 @@ static void sl_encaps(struct slip *sl, unsigned char *icp, int len) #endif } -/* - * Called by the driver when there's room for more data. If we have - * more packets to send, we send them here. - */ -static void slip_write_wakeup(struct tty_struct *tty) +/* Write out any remaining transmit buffer. Scheduled when tty is writable */ +static void slip_transmit(struct work_struct *work) { + struct slip *sl = container_of(work, struct slip, tx_work); int actual; - struct slip *sl = tty->disc_data; + spin_lock_bh(&sl->lock); /* First make sure we're connected. */ - if (!sl || sl->magic != SLIP_MAGIC || !netif_running(sl->dev)) + if (!sl->tty || sl->magic != SLIP_MAGIC || !netif_running(sl->dev)) { + spin_unlock_bh(&sl->lock); return; + } - spin_lock_bh(&sl->lock); if (sl->xleft <= 0) { /* Now serial buffer is almost free & we can start * transmission of another packet */ sl->dev->stats.tx_packets++; - clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags); + clear_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags); spin_unlock_bh(&sl->lock); sl_unlock(sl); return; } - actual = tty->ops->write(tty, sl->xhead, sl->xleft); + actual = sl->tty->ops->write(sl->tty, sl->xhead, sl->xleft); sl->xleft -= actual; sl->xhead += actual; spin_unlock_bh(&sl->lock); } +/* + * Called by the driver when there's room for more data. + * Schedule the transmit. 
+ */ +static void slip_write_wakeup(struct tty_struct *tty) +{ + struct slip *sl = tty->disc_data; + + schedule_work(&sl->tx_work); +} + static void sl_tx_timeout(struct net_device *dev) { struct slip *sl = netdev_priv(dev); @@ -749,6 +760,7 @@ static struct slip *sl_alloc(dev_t line) sl->magic = SLIP_MAGIC; sl->dev = dev; spin_lock_init(&sl->lock); + INIT_WORK(&sl->tx_work, slip_transmit); sl->mode = SL_MODE_DEFAULT; #ifdef CONFIG_SLIP_SMART /* initialize timer_list struct */ @@ -872,8 +884,12 @@ static void slip_close(struct tty_struct *tty) if (!sl || sl->magic != SLIP_MAGIC || sl->tty != tty) return; + spin_lock_bh(&sl->lock); tty->disc_data = NULL; sl->tty = NULL; + spin_unlock_bh(&sl->lock); + + flush_work(&sl->tx_work); /* VSV = very important to remove timers */ #ifdef CONFIG_SLIP_SMART diff --git a/drivers/net/slip/slip.h b/drivers/net/slip/slip.h index 67673cf1266b..cf32aadf508f 100644 --- a/drivers/net/slip/slip.h +++ b/drivers/net/slip/slip.h @@ -53,6 +53,7 @@ struct slip { struct tty_struct *tty; /* ptr to TTY structure */ struct net_device *dev; /* easy for intr handling */ spinlock_t lock; + struct work_struct tx_work; /* Flushes transmit buffer */ #ifdef SL_INCLUDE_CSLIP struct slcompress *slcomp; /* for header compression */ diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c index a3a05869309d..a4272ed62da8 100644 --- a/drivers/net/usb/hso.c +++ b/drivers/net/usb/hso.c @@ -258,10 +258,8 @@ struct hso_serial { * so as not to drop characters on the floor. */ int curr_rx_urb_idx; - u16 curr_rx_urb_offset; u8 rx_urb_filled[MAX_RX_URBS]; struct tasklet_struct unthrottle_tasklet; - struct work_struct retry_unthrottle_workqueue; }; struct hso_device { @@ -1252,14 +1250,6 @@ static void hso_unthrottle(struct tty_struct *tty) tasklet_hi_schedule(&serial->unthrottle_tasklet); } -static void hso_unthrottle_workfunc(struct work_struct *work) -{ - struct hso_serial *serial = - container_of(work, struct hso_serial, - retry_unthrottle_workqueue); - hso_unthrottle_tasklet(serial); -} - /* open the requested serial port */ static int hso_serial_open(struct tty_struct *tty, struct file *filp) { @@ -1295,8 +1285,6 @@ static int hso_serial_open(struct tty_struct *tty, struct file *filp) tasklet_init(&serial->unthrottle_tasklet, (void (*)(unsigned long))hso_unthrottle_tasklet, (unsigned long)serial); - INIT_WORK(&serial->retry_unthrottle_workqueue, - hso_unthrottle_workfunc); result = hso_start_serial_device(serial->parent, GFP_KERNEL); if (result) { hso_stop_serial_device(serial->parent); @@ -1345,7 +1333,6 @@ static void hso_serial_close(struct tty_struct *tty, struct file *filp) if (!usb_gone) hso_stop_serial_device(serial->parent); tasklet_kill(&serial->unthrottle_tasklet); - cancel_work_sync(&serial->retry_unthrottle_workqueue); } if (!usb_gone) @@ -2013,8 +2000,7 @@ static void ctrl_callback(struct urb *urb) static int put_rxbuf_data(struct urb *urb, struct hso_serial *serial) { struct tty_struct *tty; - int write_length_remaining = 0; - int curr_write_len; + int count; /* Sanity check */ if (urb == NULL || serial == NULL) { @@ -2024,29 +2010,28 @@ static int put_rxbuf_data(struct urb *urb, struct hso_serial *serial) tty = tty_port_tty_get(&serial->port); + if (tty && test_bit(TTY_THROTTLED, &tty->flags)) { + tty_kref_put(tty); + return -1; + } + /* Push data to tty */ - write_length_remaining = urb->actual_length - - serial->curr_rx_urb_offset; D1("data to push to tty"); - while (write_length_remaining) { - if (tty && test_bit(TTY_THROTTLED, &tty->flags)) { - tty_kref_put(tty); 
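/*
 * The slip change a little further up illustrates a recurring pattern: the
 * tty write_wakeup() callback no longer writes to the tty directly but only
 * schedules sl->tx_work, and slip_close() clears sl->tty under the lock and
 * then flush_work()s the item so no transmit can still be running once the
 * tty is detached. Presumably this keeps the wakeup callback trivial and
 * moves the real write into process context. A generic sketch of that shape,
 * with purely illustrative names (deferred_tx*), assuming
 * <linux/workqueue.h>, <linux/tty.h> and INIT_WORK() done at allocation time,
 * as sl_alloc() does above:
 */
struct deferred_tx {
	struct tty_struct *tty;
	spinlock_t lock;
	struct work_struct tx_work;	/* INIT_WORK(&dt->tx_work, deferred_tx_work) */
};

static void deferred_tx_work(struct work_struct *work)
{
	struct deferred_tx *dt = container_of(work, struct deferred_tx, tx_work);

	spin_lock_bh(&dt->lock);
	if (!dt->tty) {			/* torn down while the work was queued */
		spin_unlock_bh(&dt->lock);
		return;
	}
	/* ... push pending bytes with dt->tty->ops->write() here ... */
	spin_unlock_bh(&dt->lock);
}

static void deferred_tx_wakeup(struct tty_struct *tty)
{
	struct deferred_tx *dt = tty->disc_data;

	schedule_work(&dt->tx_work);	/* real work happens in process context */
}

static void deferred_tx_close(struct deferred_tx *dt, struct tty_struct *tty)
{
	spin_lock_bh(&dt->lock);
	tty->disc_data = NULL;
	dt->tty = NULL;
	spin_unlock_bh(&dt->lock);
	flush_work(&dt->tx_work);	/* no write can be in flight afterwards */
}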
- return -1; - } - curr_write_len = tty_insert_flip_string(&serial->port, - urb->transfer_buffer + serial->curr_rx_urb_offset, - write_length_remaining); - serial->curr_rx_urb_offset += curr_write_len; - write_length_remaining -= curr_write_len; + count = tty_buffer_request_room(&serial->port, urb->actual_length); + if (count >= urb->actual_length) { + tty_insert_flip_string(&serial->port, urb->transfer_buffer, + urb->actual_length); tty_flip_buffer_push(&serial->port); + } else { + dev_warn(&serial->parent->usb->dev, + "dropping data, %d bytes lost\n", urb->actual_length); } + tty_kref_put(tty); - if (write_length_remaining == 0) { - serial->curr_rx_urb_offset = 0; - serial->rx_urb_filled[hso_urb_to_index(serial, urb)] = 0; - } - return write_length_remaining; + serial->rx_urb_filled[hso_urb_to_index(serial, urb)] = 0; + + return 0; } @@ -2217,7 +2202,6 @@ static int hso_stop_serial_device(struct hso_device *hso_dev) } } serial->curr_rx_urb_idx = 0; - serial->curr_rx_urb_offset = 0; if (serial->tx_urb) usb_kill_urb(serial->tx_urb); diff --git a/drivers/net/usb/huawei_cdc_ncm.c b/drivers/net/usb/huawei_cdc_ncm.c index f9822bc75425..735f7dadb9a0 100644 --- a/drivers/net/usb/huawei_cdc_ncm.c +++ b/drivers/net/usb/huawei_cdc_ncm.c @@ -84,12 +84,13 @@ static int huawei_cdc_ncm_bind(struct usbnet *usbnet_dev, ctx = drvstate->ctx; if (usbnet_dev->status) - /* CDC-WMC r1.1 requires wMaxCommand to be "at least 256 - * decimal (0x100)" + /* The wMaxCommand buffer must be big enough to hold + * any message from the modem. Experience has shown + * that some replies are more than 256 bytes long */ subdriver = usb_cdc_wdm_register(ctx->control, &usbnet_dev->status->desc, - 256, /* wMaxCommand */ + 1024, /* wMaxCommand */ huawei_cdc_ncm_wdm_manage_power); if (IS_ERR(subdriver)) { ret = PTR_ERR(subdriver); @@ -193,6 +194,9 @@ static const struct usb_device_id huawei_cdc_ncm_devs[] = { { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x76), .driver_info = (unsigned long)&huawei_cdc_ncm_info, }, + { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x03, 0x16), + .driver_info = (unsigned long)&huawei_cdc_ncm_info, + }, /* Terminating entry */ { diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c index cf62d7e8329f..22756db53dca 100644 --- a/drivers/net/usb/qmi_wwan.c +++ b/drivers/net/usb/qmi_wwan.c @@ -667,6 +667,7 @@ static const struct usb_device_id products[] = { {QMI_FIXED_INTF(0x05c6, 0x9084, 4)}, {QMI_FIXED_INTF(0x05c6, 0x920d, 0)}, {QMI_FIXED_INTF(0x05c6, 0x920d, 5)}, + {QMI_FIXED_INTF(0x0846, 0x68a2, 8)}, {QMI_FIXED_INTF(0x12d1, 0x140c, 1)}, /* Huawei E173 */ {QMI_FIXED_INTF(0x12d1, 0x14ac, 1)}, /* Huawei E1820 */ {QMI_FIXED_INTF(0x16d8, 0x6003, 0)}, /* CMOTech 6003 */ @@ -741,6 +742,7 @@ static const struct usb_device_id products[] = { {QMI_FIXED_INTF(0x19d2, 0x1424, 2)}, {QMI_FIXED_INTF(0x19d2, 0x1425, 2)}, {QMI_FIXED_INTF(0x19d2, 0x1426, 2)}, /* ZTE MF91 */ + {QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */ {QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */ {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ @@ -756,6 +758,7 @@ static const struct usb_device_id products[] = { {QMI_FIXED_INTF(0x1199, 0x9054, 8)}, /* Sierra Wireless Modem */ {QMI_FIXED_INTF(0x1199, 0x9055, 8)}, /* Netgear AirCard 341U */ {QMI_FIXED_INTF(0x1199, 0x9056, 8)}, /* Sierra Wireless Modem */ + {QMI_FIXED_INTF(0x1199, 0x9057, 8)}, {QMI_FIXED_INTF(0x1199, 0x9061, 8)}, /* Sierra Wireless Modem */ 
{QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */ {QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */ diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index 25431965a625..7bad2d316637 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -1359,7 +1359,7 @@ static void r8152_csum_workaround(struct r8152 *tp, struct sk_buff *skb, struct sk_buff_head seg_list; struct sk_buff *segs, *nskb; - features &= ~(NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO); + features &= ~(NETIF_F_SG | NETIF_F_IPV6_CSUM | NETIF_F_TSO6); segs = skb_gso_segment(skb, features); if (IS_ERR(segs) || !segs) goto drop; @@ -3204,8 +3204,13 @@ static void rtl8152_get_ethtool_stats(struct net_device *dev, struct r8152 *tp = netdev_priv(dev); struct tally_counter tally; + if (usb_autopm_get_interface(tp->intf) < 0) + return; + generic_ocp_read(tp, PLA_TALLYCNT, sizeof(tally), &tally, MCU_TYPE_PLA); + usb_autopm_put_interface(tp->intf); + data[0] = le64_to_cpu(tally.tx_packets); data[1] = le64_to_cpu(tally.rx_packets); data[2] = le64_to_cpu(tally.tx_errors); diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c index 424db65e4396..d07bf4cb893f 100644 --- a/drivers/net/usb/smsc95xx.c +++ b/drivers/net/usb/smsc95xx.c @@ -1714,6 +1714,18 @@ static int smsc95xx_resume(struct usb_interface *intf) return ret; } +static int smsc95xx_reset_resume(struct usb_interface *intf) +{ + struct usbnet *dev = usb_get_intfdata(intf); + int ret; + + ret = smsc95xx_reset(dev); + if (ret < 0) + return ret; + + return smsc95xx_resume(intf); +} + static void smsc95xx_rx_csum_offload(struct sk_buff *skb) { skb->csum = *(u16 *)(skb_tail_pointer(skb) - 2); @@ -2004,7 +2016,7 @@ static struct usb_driver smsc95xx_driver = { .probe = usbnet_probe, .suspend = smsc95xx_suspend, .resume = smsc95xx_resume, - .reset_resume = smsc95xx_resume, + .reset_resume = smsc95xx_reset_resume, .disconnect = usbnet_disconnect, .disable_hub_initiated_lpm = 1, .supports_autosuspend = 1, diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index 97394345e5dd..b76f7dcde0db 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -2589,8 +2589,8 @@ vmxnet3_open(struct net_device *netdev) for (i = 0; i < adapter->num_tx_queues; i++) spin_lock_init(&adapter->tx_queue[i].tx_lock); - err = vmxnet3_create_queues(adapter, VMXNET3_DEF_TX_RING_SIZE, - VMXNET3_DEF_RX_RING_SIZE, + err = vmxnet3_create_queues(adapter, adapter->tx_ring_size, + adapter->rx_ring_size, VMXNET3_DEF_RX_RING_SIZE); if (err) goto queue_err; @@ -2968,6 +2968,9 @@ vmxnet3_probe_device(struct pci_dev *pdev, adapter->netdev = netdev; adapter->pdev = pdev; + adapter->tx_ring_size = VMXNET3_DEF_TX_RING_SIZE; + adapter->rx_ring_size = VMXNET3_DEF_RX_RING_SIZE; + spin_lock_init(&adapter->cmd_lock); adapter->adapter_pa = dma_map_single(&adapter->pdev->dev, adapter, sizeof(struct vmxnet3_adapter), diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c index 40c1c7b0d9e0..b725fd9e7803 100644 --- a/drivers/net/vmxnet3/vmxnet3_ethtool.c +++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c @@ -449,8 +449,8 @@ vmxnet3_get_ringparam(struct net_device *netdev, param->rx_mini_max_pending = 0; param->rx_jumbo_max_pending = 0; - param->rx_pending = adapter->rx_queue[0].rx_ring[0].size; - param->tx_pending = adapter->tx_queue[0].tx_ring.size; + param->rx_pending = adapter->rx_ring_size; + param->tx_pending = adapter->tx_ring_size; 
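/*
 * The vmxnet3 hunks around this point cache the ring sizes in the adapter
 * (tx_ring_size/rx_ring_size, initialised to the defaults at probe time) and
 * have vmxnet3_open(), get_ringparam and set_ringparam use those cached
 * values instead of peeking at the live ring structures -- presumably so
 * ethtool reports something sensible even when the queues are not currently
 * allocated, and so a user-chosen size sticks across close/open. A minimal
 * sketch of that bookkeeping is below; the demo_* names and the 4096 limits
 * are placeholders, and the real set_ringparam (shown in the next hunk) also
 * tears down and recreates the queues rather than just caching the request.
 */
struct demo_adapter {
	u32 tx_ring_size;	/* what the next activation will use */
	u32 rx_ring_size;
};

static void demo_get_ringparam(struct net_device *netdev,
			       struct ethtool_ringparam *param)
{
	struct demo_adapter *adapter = netdev_priv(netdev);

	/* Report the cached sizes, valid whether or not rings exist right now. */
	param->tx_pending = adapter->tx_ring_size;
	param->rx_pending = adapter->rx_ring_size;
	param->tx_max_pending = 4096;	/* placeholder limits */
	param->rx_max_pending = 4096;
}

static int demo_set_ringparam(struct net_device *netdev,
			      struct ethtool_ringparam *param)
{
	struct demo_adapter *adapter = netdev_priv(netdev);

	if (!param->tx_pending || !param->rx_pending)
		return -EINVAL;

	/* Remember the request; the queue (re)create path reads these fields. */
	adapter->tx_ring_size = param->tx_pending;
	adapter->rx_ring_size = param->rx_pending;
	return 0;
}

static const struct ethtool_ops demo_ethtool_ops = {
	.get_ringparam = demo_get_ringparam,
	.set_ringparam = demo_set_ringparam,
};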
param->rx_mini_pending = 0; param->rx_jumbo_pending = 0; } @@ -529,9 +529,11 @@ vmxnet3_set_ringparam(struct net_device *netdev, * size */ netdev_err(netdev, "failed to apply new sizes, " "try the default ones\n"); + new_rx_ring_size = VMXNET3_DEF_RX_RING_SIZE; + new_tx_ring_size = VMXNET3_DEF_TX_RING_SIZE; err = vmxnet3_create_queues(adapter, - VMXNET3_DEF_TX_RING_SIZE, - VMXNET3_DEF_RX_RING_SIZE, + new_tx_ring_size, + new_rx_ring_size, VMXNET3_DEF_RX_RING_SIZE); if (err) { netdev_err(netdev, "failed to create queues " @@ -545,6 +547,8 @@ vmxnet3_set_ringparam(struct net_device *netdev, netdev_err(netdev, "failed to re-activate, error %d." " Closing it\n", err); } + adapter->tx_ring_size = new_tx_ring_size; + adapter->rx_ring_size = new_rx_ring_size; out: clear_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state); diff --git a/drivers/net/vmxnet3/vmxnet3_int.h b/drivers/net/vmxnet3/vmxnet3_int.h index 190569d02450..29ee77f2c97f 100644 --- a/drivers/net/vmxnet3/vmxnet3_int.h +++ b/drivers/net/vmxnet3/vmxnet3_int.h @@ -349,6 +349,11 @@ struct vmxnet3_adapter { u32 link_speed; /* in mbps */ u64 tx_timeout_count; + + /* Ring sizes */ + u32 tx_ring_size; + u32 rx_ring_size; + struct work_struct work; unsigned long state; /* VMXNET3_STATE_BIT_xxx */ diff --git a/drivers/net/wan/farsync.c b/drivers/net/wan/farsync.c index 93ace042d0aa..1f041271f7fe 100644 --- a/drivers/net/wan/farsync.c +++ b/drivers/net/wan/farsync.c @@ -2363,7 +2363,7 @@ static char *type_strings[] = { "FarSync TE1" }; -static void +static int fst_init_card(struct fst_card_info *card) { int i; @@ -2374,24 +2374,21 @@ fst_init_card(struct fst_card_info *card) * we'll have to revise it in some way then. */ for (i = 0; i < card->nports; i++) { - err = register_hdlc_device(card->ports[i].dev); - if (err < 0) { - int j; + err = register_hdlc_device(card->ports[i].dev); + if (err < 0) { pr_err("Cannot register HDLC device for port %d (errno %d)\n", - i, -err); - for (j = i; j < card->nports; j++) { - free_netdev(card->ports[j].dev); - card->ports[j].dev = NULL; - } - card->nports = i; - break; - } + i, -err); + while (i--) + unregister_hdlc_device(card->ports[i].dev); + return err; + } } pr_info("%s-%s: %s IRQ%d, %d ports\n", port_to_dev(&card->ports[0])->name, port_to_dev(&card->ports[card->nports - 1])->name, type_strings[card->type], card->irq, card->nports); + return 0; } static const struct net_device_ops fst_ops = { @@ -2447,15 +2444,12 @@ fst_add_one(struct pci_dev *pdev, const struct pci_device_id *ent) /* Try to enable the device */ if ((err = pci_enable_device(pdev)) != 0) { pr_err("Failed to enable card. Err %d\n", -err); - kfree(card); - return err; + goto enable_fail; } if ((err = pci_request_regions(pdev, "FarSync")) !=0) { pr_err("Failed to allocate regions. 
Err %d\n", -err); - pci_disable_device(pdev); - kfree(card); - return err; + goto regions_fail; } /* Get virtual addresses of memory regions */ @@ -2464,30 +2458,21 @@ fst_add_one(struct pci_dev *pdev, const struct pci_device_id *ent) card->phys_ctlmem = pci_resource_start(pdev, 3); if ((card->mem = ioremap(card->phys_mem, FST_MEMSIZE)) == NULL) { pr_err("Physical memory remap failed\n"); - pci_release_regions(pdev); - pci_disable_device(pdev); - kfree(card); - return -ENODEV; + err = -ENODEV; + goto ioremap_physmem_fail; } if ((card->ctlmem = ioremap(card->phys_ctlmem, 0x10)) == NULL) { pr_err("Control memory remap failed\n"); - pci_release_regions(pdev); - pci_disable_device(pdev); - iounmap(card->mem); - kfree(card); - return -ENODEV; + err = -ENODEV; + goto ioremap_ctlmem_fail; } dbg(DBG_PCI, "kernel mem %p, ctlmem %p\n", card->mem, card->ctlmem); /* Register the interrupt handler */ if (request_irq(pdev->irq, fst_intr, IRQF_SHARED, FST_DEV_NAME, card)) { pr_err("Unable to register interrupt %d\n", card->irq); - pci_release_regions(pdev); - pci_disable_device(pdev); - iounmap(card->ctlmem); - iounmap(card->mem); - kfree(card); - return -ENODEV; + err = -ENODEV; + goto irq_fail; } /* Record info we need */ @@ -2513,13 +2498,8 @@ fst_add_one(struct pci_dev *pdev, const struct pci_device_id *ent) while (i--) free_netdev(card->ports[i].dev); pr_err("FarSync: out of memory\n"); - free_irq(card->irq, card); - pci_release_regions(pdev); - pci_disable_device(pdev); - iounmap(card->ctlmem); - iounmap(card->mem); - kfree(card); - return -ENODEV; + err = -ENOMEM; + goto hdlcdev_fail; } card->ports[i].dev = dev; card->ports[i].card = card; @@ -2565,9 +2545,16 @@ fst_add_one(struct pci_dev *pdev, const struct pci_device_id *ent) pci_set_drvdata(pdev, card); /* Remainder of card setup */ + if (no_of_cards_added >= FST_MAX_CARDS) { + pr_err("FarSync: too many cards\n"); + err = -ENOMEM; + goto card_array_fail; + } fst_card_array[no_of_cards_added] = card; card->card_no = no_of_cards_added++; /* Record instance and bump it */ - fst_init_card(card); + err = fst_init_card(card); + if (err) + goto init_card_fail; if (card->family == FST_FAMILY_TXU) { /* * Allocate a dma buffer for transmit and receives @@ -2577,29 +2564,46 @@ fst_add_one(struct pci_dev *pdev, const struct pci_device_id *ent) &card->rx_dma_handle_card); if (card->rx_dma_handle_host == NULL) { pr_err("Could not allocate rx dma buffer\n"); - fst_disable_intr(card); - pci_release_regions(pdev); - pci_disable_device(pdev); - iounmap(card->ctlmem); - iounmap(card->mem); - kfree(card); - return -ENOMEM; + err = -ENOMEM; + goto rx_dma_fail; } card->tx_dma_handle_host = pci_alloc_consistent(card->device, FST_MAX_MTU, &card->tx_dma_handle_card); if (card->tx_dma_handle_host == NULL) { pr_err("Could not allocate tx dma buffer\n"); - fst_disable_intr(card); - pci_release_regions(pdev); - pci_disable_device(pdev); - iounmap(card->ctlmem); - iounmap(card->mem); - kfree(card); - return -ENOMEM; + err = -ENOMEM; + goto tx_dma_fail; } } return 0; /* Success */ + +tx_dma_fail: + pci_free_consistent(card->device, FST_MAX_MTU, + card->rx_dma_handle_host, + card->rx_dma_handle_card); +rx_dma_fail: + fst_disable_intr(card); + for (i = 0 ; i < card->nports ; i++) + unregister_hdlc_device(card->ports[i].dev); +init_card_fail: + fst_card_array[card->card_no] = NULL; +card_array_fail: + for (i = 0 ; i < card->nports ; i++) + free_netdev(card->ports[i].dev); +hdlcdev_fail: + free_irq(card->irq, card); +irq_fail: + iounmap(card->ctlmem); +ioremap_ctlmem_fail: + 
iounmap(card->mem); +ioremap_physmem_fail: + pci_release_regions(pdev); +regions_fail: + pci_disable_device(pdev); +enable_fail: + kfree(card); + return err; } /* diff --git a/drivers/net/wan/x25_asy.c b/drivers/net/wan/x25_asy.c index 5895f1978691..fa9fdfa128c1 100644 --- a/drivers/net/wan/x25_asy.c +++ b/drivers/net/wan/x25_asy.c @@ -122,8 +122,12 @@ static int x25_asy_change_mtu(struct net_device *dev, int newmtu) { struct x25_asy *sl = netdev_priv(dev); unsigned char *xbuff, *rbuff; - int len = 2 * newmtu; + int len; + if (newmtu > 65534) + return -EINVAL; + + len = 2 * newmtu; xbuff = kmalloc(len + 4, GFP_ATOMIC); rbuff = kmalloc(len + 4, GFP_ATOMIC); diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c index 82017f56e661..e6c56c5bb0f6 100644 --- a/drivers/net/wireless/ath/ath10k/core.c +++ b/drivers/net/wireless/ath/ath10k/core.c @@ -795,7 +795,11 @@ int ath10k_core_start(struct ath10k *ar) if (status) goto err_htc_stop; - ar->free_vdev_map = (1 << TARGET_NUM_VDEVS) - 1; + if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features)) + ar->free_vdev_map = (1 << TARGET_10X_NUM_VDEVS) - 1; + else + ar->free_vdev_map = (1 << TARGET_NUM_VDEVS) - 1; + INIT_LIST_HEAD(&ar->arvifs); if (!test_bit(ATH10K_FLAG_FIRST_BOOT_DONE, &ar->dev_flags)) diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c index 6c102b1312ff..eebc860c3655 100644 --- a/drivers/net/wireless/ath/ath10k/htt_rx.c +++ b/drivers/net/wireless/ath/ath10k/htt_rx.c @@ -312,7 +312,6 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt, int msdu_len, msdu_chaining = 0; struct sk_buff *msdu; struct htt_rx_desc *rx_desc; - bool corrupted = false; lockdep_assert_held(&htt->rx_ring.lock); @@ -439,9 +438,6 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt, last_msdu = __le32_to_cpu(rx_desc->msdu_end.info0) & RX_MSDU_END_INFO0_LAST_MSDU; - if (msdu_chaining && !last_msdu) - corrupted = true; - if (last_msdu) { msdu->next = NULL; break; @@ -457,20 +453,6 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt, msdu_chaining = -1; /* - * Apparently FW sometimes reports weird chained MSDU sequences with - * more than one rx descriptor. This seems like a bug but needs more - * analyzing. For the time being fix it by dropping such sequences to - * avoid blowing up the host system. - */ - if (corrupted) { - ath10k_warn("failed to pop chained msdus, dropping\n"); - ath10k_htt_rx_free_msdu_chain(*head_msdu); - *head_msdu = NULL; - *tail_msdu = NULL; - msdu_chaining = -EINVAL; - } - - /* * Don't refill the ring yet. 
* * First, the elements popped here are still in use - it is not diff --git a/drivers/net/wireless/b43/Kconfig b/drivers/net/wireless/b43/Kconfig index e3f67b8d3f80..40fd9b7b1426 100644 --- a/drivers/net/wireless/b43/Kconfig +++ b/drivers/net/wireless/b43/Kconfig @@ -36,7 +36,7 @@ config B43_SSB choice prompt "Supported bus types" depends on B43 - default B43_BCMA_AND_SSB + default B43_BUSES_BCMA_AND_SSB config B43_BUSES_BCMA_AND_SSB bool "BCMA and SSB" diff --git a/drivers/net/wireless/b43/main.c b/drivers/net/wireless/b43/main.c index 32538ac5f7e4..0d6a0bb1f876 100644 --- a/drivers/net/wireless/b43/main.c +++ b/drivers/net/wireless/b43/main.c @@ -5221,6 +5221,7 @@ static int b43_wireless_core_attach(struct b43_wldev *dev) /* We don't support 5 GHz on some PHYs yet */ switch (dev->phy.type) { case B43_PHYTYPE_A: + case B43_PHYTYPE_G: case B43_PHYTYPE_N: case B43_PHYTYPE_LP: case B43_PHYTYPE_HT: diff --git a/drivers/net/wireless/b43/xmit.c b/drivers/net/wireless/b43/xmit.c index 4f38f19b8e3d..6e6ef3fc2247 100644 --- a/drivers/net/wireless/b43/xmit.c +++ b/drivers/net/wireless/b43/xmit.c @@ -811,9 +811,13 @@ void b43_rx(struct b43_wldev *dev, struct sk_buff *skb, const void *_rxhdr) break; case B43_PHYTYPE_G: status.band = IEEE80211_BAND_2GHZ; - /* chanid is the radio channel cookie value as used - * to tune the radio. */ - status.freq = chanid + 2400; + /* Somewhere between 478.104 and 508.1084 firmware for G-PHY + * has been modified to be compatible with N-PHY and others. + */ + if (dev->fw.rev >= 508) + status.freq = ieee80211_channel_to_frequency(chanid, status.band); + else + status.freq = chanid + 2400; break; case B43_PHYTYPE_N: case B43_PHYTYPE_LP: diff --git a/drivers/net/wireless/brcm80211/brcmfmac/usb.c b/drivers/net/wireless/brcm80211/brcmfmac/usb.c index 6db51a666f61..d06fcb05adf2 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/usb.c +++ b/drivers/net/wireless/brcm80211/brcmfmac/usb.c @@ -1184,8 +1184,6 @@ static int brcmf_usb_probe_cb(struct brcmf_usbdev_info *devinfo) bus->bus_priv.usb = bus_pub; dev_set_drvdata(dev, bus); bus->ops = &brcmf_usb_bus_ops; - bus->chip = bus_pub->devid; - bus->chiprev = bus_pub->chiprev; bus->proto_type = BRCMF_PROTO_BCDC; bus->always_use_fws_queue = true; @@ -1194,6 +1192,9 @@ static int brcmf_usb_probe_cb(struct brcmf_usbdev_info *devinfo) if (ret) goto fail; } + bus->chip = bus_pub->devid; + bus->chiprev = bus_pub->chiprev; + /* request firmware here */ brcmf_fw_get_firmwares(dev, 0, brcmf_usb_get_fwname(devinfo), NULL, brcmf_usb_probe_phase2); diff --git a/drivers/net/wireless/iwlwifi/dvm/rxon.c b/drivers/net/wireless/iwlwifi/dvm/rxon.c index ed50de6362ed..6dc5dd3ced44 100644 --- a/drivers/net/wireless/iwlwifi/dvm/rxon.c +++ b/drivers/net/wireless/iwlwifi/dvm/rxon.c @@ -1068,13 +1068,6 @@ int iwlagn_commit_rxon(struct iwl_priv *priv, struct iwl_rxon_context *ctx) /* recalculate basic rates */ iwl_calc_basic_rates(priv, ctx); - /* - * force CTS-to-self frames protection if RTS-CTS is not preferred - * one aggregation protection method - */ - if (!priv->hw_params.use_rts_for_aggregation) - ctx->staging.flags |= RXON_FLG_SELF_CTS_EN; - if ((ctx->vif && ctx->vif->bss_conf.use_short_slot) || !(ctx->staging.flags & RXON_FLG_BAND_24G_MSK)) ctx->staging.flags |= RXON_FLG_SHORT_SLOT_MSK; @@ -1480,11 +1473,6 @@ void iwlagn_bss_info_changed(struct ieee80211_hw *hw, else ctx->staging.flags &= ~RXON_FLG_TGG_PROTECT_MSK; - if (bss_conf->use_cts_prot) - ctx->staging.flags |= RXON_FLG_SELF_CTS_EN; - else - ctx->staging.flags &= ~RXON_FLG_SELF_CTS_EN; - 
memcpy(ctx->staging.bssid_addr, bss_conf->bssid, ETH_ALEN); if (vif->type == NL80211_IFTYPE_AP || diff --git a/drivers/net/wireless/iwlwifi/iwl-fw.h b/drivers/net/wireless/iwlwifi/iwl-fw.h index 0aa7c0085c9f..b1a33322b9ba 100644 --- a/drivers/net/wireless/iwlwifi/iwl-fw.h +++ b/drivers/net/wireless/iwlwifi/iwl-fw.h @@ -88,6 +88,7 @@ * P2P client interfaces simultaneously if they are in different bindings. * @IWL_UCODE_TLV_FLAGS_P2P_BSS_PS_SCM: support power save on BSS station and * P2P client interfaces simultaneously if they are in same bindings. + * @IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT: General support for uAPSD * @IWL_UCODE_TLV_FLAGS_P2P_PS_UAPSD: P2P client supports uAPSD power save * @IWL_UCODE_TLV_FLAGS_BCAST_FILTERING: uCode supports broadcast filtering. * @IWL_UCODE_TLV_FLAGS_GO_UAPSD: AP/GO interfaces support uAPSD clients diff --git a/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c index 8b5302777632..725ba49576bf 100644 --- a/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c +++ b/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c @@ -667,10 +667,9 @@ static void iwl_mvm_mac_ctxt_cmd_common(struct iwl_mvm *mvm, if (vif->bss_conf.qos) cmd->qos_flags |= cpu_to_le32(MAC_QOS_FLG_UPDATE_EDCA); - if (vif->bss_conf.use_cts_prot) { + if (vif->bss_conf.use_cts_prot) cmd->protection_flags |= cpu_to_le32(MAC_PROT_FLG_TGG_PROTECT); - cmd->protection_flags |= cpu_to_le32(MAC_PROT_FLG_SELF_CTS_EN); - } + IWL_DEBUG_RATE(mvm, "use_cts_prot %d, ht_operation_mode %d\n", vif->bss_conf.use_cts_prot, vif->bss_conf.ht_operation_mode); diff --git a/drivers/net/wireless/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/iwlwifi/mvm/mac80211.c index 7215f5980186..9bfb90680cdc 100644 --- a/drivers/net/wireless/iwlwifi/mvm/mac80211.c +++ b/drivers/net/wireless/iwlwifi/mvm/mac80211.c @@ -303,6 +303,13 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm) hw->uapsd_max_sp_len = IWL_UAPSD_MAX_SP; } + if (mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT && + !iwlwifi_mod_params.uapsd_disable) { + hw->flags |= IEEE80211_HW_SUPPORTS_UAPSD; + hw->uapsd_queues = IWL_UAPSD_AC_INFO; + hw->uapsd_max_sp_len = IWL_UAPSD_MAX_SP; + } + hw->sta_data_size = sizeof(struct iwl_mvm_sta); hw->vif_data_size = sizeof(struct iwl_mvm_vif); hw->chanctx_data_size = sizeof(u16); @@ -1159,8 +1166,12 @@ static void iwl_mvm_bcast_filter_iterator(void *_data, u8 *mac, bcast_mac = &cmd->macs[mvmvif->id]; - /* enable filtering only for associated stations */ - if (vif->type != NL80211_IFTYPE_STATION || !vif->bss_conf.assoc) + /* + * enable filtering only for associated stations, but not for P2P + * Clients + */ + if (vif->type != NL80211_IFTYPE_STATION || vif->p2p || + !vif->bss_conf.assoc) return; bcast_mac->default_discard = 1; @@ -1237,10 +1248,6 @@ static int iwl_mvm_configure_bcast_filter(struct iwl_mvm *mvm, if (!(mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_BCAST_FILTERING)) return 0; - /* bcast filtering isn't supported for P2P client */ - if (vif->p2p) - return 0; - if (!iwl_mvm_bcast_filter_build_cmd(mvm, &cmd)) return 0; diff --git a/drivers/net/wireless/iwlwifi/mvm/scan.c b/drivers/net/wireless/iwlwifi/mvm/scan.c index 4b6c7d4bd199..eac2b424f6a0 100644 --- a/drivers/net/wireless/iwlwifi/mvm/scan.c +++ b/drivers/net/wireless/iwlwifi/mvm/scan.c @@ -588,9 +588,7 @@ static void iwl_build_scan_cmd(struct iwl_mvm *mvm, struct iwl_scan_offload_cmd *scan, struct iwl_mvm_scan_params *params) { - scan->channel_count = - mvm->nvm_data->bands[IEEE80211_BAND_2GHZ].n_channels + - 
mvm->nvm_data->bands[IEEE80211_BAND_5GHZ].n_channels; + scan->channel_count = req->n_channels; scan->quiet_time = cpu_to_le16(IWL_ACTIVE_QUIET_TIME); scan->quiet_plcp_th = cpu_to_le16(IWL_PLCP_QUIET_THRESH); scan->good_CRC_th = IWL_GOOD_CRC_TH_DEFAULT; @@ -669,61 +667,37 @@ static void iwl_build_channel_cfg(struct iwl_mvm *mvm, struct cfg80211_sched_scan_request *req, struct iwl_scan_channel_cfg *channels, enum ieee80211_band band, - int *head, int *tail, + int *head, u32 ssid_bitmap, struct iwl_mvm_scan_params *params) { - struct ieee80211_supported_band *s_band; - int n_channels = req->n_channels; - int i, j, index = 0; - bool partial; + int i, index = 0; - /* - * We have to configure all supported channels, even if we don't want to - * scan on them, but we have to send channels in the order that we want - * to scan. So add requested channels to head of the list and others to - * the end. - */ - s_band = &mvm->nvm_data->bands[band]; - - for (i = 0; i < s_band->n_channels && *head <= *tail; i++) { - partial = false; - for (j = 0; j < n_channels; j++) - if (s_band->channels[i].center_freq == - req->channels[j]->center_freq) { - index = *head; - (*head)++; - /* - * Channels that came with the request will be - * in partial scan . - */ - partial = true; - break; - } - if (!partial) { - index = *tail; - (*tail)--; - } - channels->channel_number[index] = - cpu_to_le16(ieee80211_frequency_to_channel( - s_band->channels[i].center_freq)); + for (i = 0; i < req->n_channels; i++) { + struct ieee80211_channel *chan = req->channels[i]; + + if (chan->band != band) + continue; + + index = *head; + (*head)++; + + channels->channel_number[index] = cpu_to_le16(chan->hw_value); channels->dwell_time[index][0] = params->dwell[band].active; channels->dwell_time[index][1] = params->dwell[band].passive; channels->iter_count[index] = cpu_to_le16(1); channels->iter_interval[index] = 0; - if (!(s_band->channels[i].flags & IEEE80211_CHAN_NO_IR)) + if (!(chan->flags & IEEE80211_CHAN_NO_IR)) channels->type[index] |= cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_ACTIVE); channels->type[index] |= - cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_FULL); - if (partial) - channels->type[index] |= - cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_PARTIAL); + cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_FULL | + IWL_SCAN_OFFLOAD_CHANNEL_PARTIAL); - if (s_band->channels[i].flags & IEEE80211_CHAN_NO_HT40) + if (chan->flags & IEEE80211_CHAN_NO_HT40) channels->type[index] |= cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_NARROW); @@ -740,7 +714,6 @@ int iwl_mvm_config_sched_scan(struct iwl_mvm *mvm, int band_2ghz = mvm->nvm_data->bands[IEEE80211_BAND_2GHZ].n_channels; int band_5ghz = mvm->nvm_data->bands[IEEE80211_BAND_5GHZ].n_channels; int head = 0; - int tail = band_2ghz + band_5ghz - 1; u32 ssid_bitmap; int cmd_len; int ret; @@ -772,7 +745,7 @@ int iwl_mvm_config_sched_scan(struct iwl_mvm *mvm, &scan_cfg->scan_cmd.tx_cmd[0], scan_cfg->data); iwl_build_channel_cfg(mvm, req, &scan_cfg->channel_cfg, - IEEE80211_BAND_2GHZ, &head, &tail, + IEEE80211_BAND_2GHZ, &head, ssid_bitmap, ¶ms); } if (band_5ghz) { @@ -782,7 +755,7 @@ int iwl_mvm_config_sched_scan(struct iwl_mvm *mvm, scan_cfg->data + SCAN_OFFLOAD_PROBE_REQ_SIZE); iwl_build_channel_cfg(mvm, req, &scan_cfg->channel_cfg, - IEEE80211_BAND_5GHZ, &head, &tail, + IEEE80211_BAND_5GHZ, &head, ssid_bitmap, ¶ms); } diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c index 7091a18d5a72..98950e45c7b0 100644 --- a/drivers/net/wireless/iwlwifi/pcie/drv.c +++ 
b/drivers/net/wireless/iwlwifi/pcie/drv.c @@ -367,6 +367,7 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { {IWL_PCI_DEVICE(0x095A, 0x5012, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5412, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x5510, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5400, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5000, iwl7265_2n_cfg)}, @@ -380,7 +381,7 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { {IWL_PCI_DEVICE(0x095A, 0x9110, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x9112, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)}, - {IWL_PCI_DEVICE(0x095A, 0x9200, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095B, 0x9200, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x9510, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x9310, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x9410, iwl7265_2ac_cfg)}, diff --git a/drivers/net/wireless/mwifiex/11n_aggr.c b/drivers/net/wireless/mwifiex/11n_aggr.c index 5b32106182f8..fe0f66f73507 100644 --- a/drivers/net/wireless/mwifiex/11n_aggr.c +++ b/drivers/net/wireless/mwifiex/11n_aggr.c @@ -185,6 +185,7 @@ mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv, skb_reserve(skb_aggr, headroom + sizeof(struct txpd)); tx_info_aggr = MWIFIEX_SKB_TXCB(skb_aggr); + memset(tx_info_aggr, 0, sizeof(*tx_info_aggr)); tx_info_aggr->bss_type = tx_info_src->bss_type; tx_info_aggr->bss_num = tx_info_src->bss_num; diff --git a/drivers/net/wireless/mwifiex/cfg80211.c b/drivers/net/wireless/mwifiex/cfg80211.c index e95dec91a561..b511613bba2d 100644 --- a/drivers/net/wireless/mwifiex/cfg80211.c +++ b/drivers/net/wireless/mwifiex/cfg80211.c @@ -220,6 +220,7 @@ mwifiex_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev, } tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; tx_info->pkt_len = pkt_len; diff --git a/drivers/net/wireless/mwifiex/cmdevt.c b/drivers/net/wireless/mwifiex/cmdevt.c index 8dee6c86f4f1..c161141f6c39 100644 --- a/drivers/net/wireless/mwifiex/cmdevt.c +++ b/drivers/net/wireless/mwifiex/cmdevt.c @@ -453,6 +453,7 @@ int mwifiex_process_event(struct mwifiex_adapter *adapter) if (skb) { rx_info = MWIFIEX_SKB_RXCB(skb); + memset(rx_info, 0, sizeof(*rx_info)); rx_info->bss_num = priv->bss_num; rx_info->bss_type = priv->bss_type; } diff --git a/drivers/net/wireless/mwifiex/main.c b/drivers/net/wireless/mwifiex/main.c index cbabc12fbda3..e91cd0fa5ca8 100644 --- a/drivers/net/wireless/mwifiex/main.c +++ b/drivers/net/wireless/mwifiex/main.c @@ -645,6 +645,7 @@ mwifiex_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) } tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; tx_info->pkt_len = skb->len; diff --git a/drivers/net/wireless/mwifiex/pcie.c b/drivers/net/wireless/mwifiex/pcie.c index 574d4b597468..2cc9b6fca490 100644 --- a/drivers/net/wireless/mwifiex/pcie.c +++ b/drivers/net/wireless/mwifiex/pcie.c @@ -50,7 +50,7 @@ mwifiex_map_pci_memory(struct mwifiex_adapter *adapter, struct sk_buff *skb, return -1; } mapping.len = size; - memcpy(skb->cb, &mapping, sizeof(mapping)); + mwifiex_store_mapping(skb, &mapping); return 0; } @@ -60,7 +60,7 @@ static void mwifiex_unmap_pci_memory(struct mwifiex_adapter *adapter, struct pcie_service_card *card = adapter->card; struct mwifiex_dma_mapping mapping; - MWIFIEX_SKB_PACB(skb, 
&mapping); + mwifiex_get_mapping(skb, &mapping); pci_unmap_single(card->dev, mapping.addr, mapping.len, flags); } diff --git a/drivers/net/wireless/mwifiex/sta_tx.c b/drivers/net/wireless/mwifiex/sta_tx.c index 5fce7e78a36e..70eb863c7249 100644 --- a/drivers/net/wireless/mwifiex/sta_tx.c +++ b/drivers/net/wireless/mwifiex/sta_tx.c @@ -150,6 +150,7 @@ int mwifiex_send_null_packet(struct mwifiex_private *priv, u8 flags) return -1; tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; tx_info->pkt_len = data_len - (sizeof(struct txpd) + INTF_HEADER_LEN); diff --git a/drivers/net/wireless/mwifiex/tdls.c b/drivers/net/wireless/mwifiex/tdls.c index e73034fbbde9..0e88364e0c67 100644 --- a/drivers/net/wireless/mwifiex/tdls.c +++ b/drivers/net/wireless/mwifiex/tdls.c @@ -605,6 +605,7 @@ int mwifiex_send_tdls_data_frame(struct mwifiex_private *priv, const u8 *peer, } tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; @@ -760,6 +761,7 @@ int mwifiex_send_tdls_action_frame(struct mwifiex_private *priv, const u8 *peer, skb->priority = MWIFIEX_PRIO_VI; tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; tx_info->flags |= MWIFIEX_BUF_FLAG_TDLS_PKT; diff --git a/drivers/net/wireless/mwifiex/txrx.c b/drivers/net/wireless/mwifiex/txrx.c index 37f26afd4314..fd7e5b9b4581 100644 --- a/drivers/net/wireless/mwifiex/txrx.c +++ b/drivers/net/wireless/mwifiex/txrx.c @@ -55,6 +55,7 @@ int mwifiex_handle_rx_packet(struct mwifiex_adapter *adapter, return -1; } + memset(rx_info, 0, sizeof(*rx_info)); rx_info->bss_num = priv->bss_num; rx_info->bss_type = priv->bss_type; diff --git a/drivers/net/wireless/mwifiex/uap_txrx.c b/drivers/net/wireless/mwifiex/uap_txrx.c index 9a56bc61cb1d..b0601b91cc4f 100644 --- a/drivers/net/wireless/mwifiex/uap_txrx.c +++ b/drivers/net/wireless/mwifiex/uap_txrx.c @@ -175,6 +175,7 @@ static void mwifiex_uap_queue_bridged_pkt(struct mwifiex_private *priv, } tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; tx_info->flags |= MWIFIEX_BUF_FLAG_BRIDGED_PKT; diff --git a/drivers/net/wireless/mwifiex/util.h b/drivers/net/wireless/mwifiex/util.h index ddae57021397..caadb3737b9e 100644 --- a/drivers/net/wireless/mwifiex/util.h +++ b/drivers/net/wireless/mwifiex/util.h @@ -20,32 +20,55 @@ #ifndef _MWIFIEX_UTIL_H_ #define _MWIFIEX_UTIL_H_ +struct mwifiex_dma_mapping { + dma_addr_t addr; + size_t len; +}; + +struct mwifiex_cb { + struct mwifiex_dma_mapping dma_mapping; + union { + struct mwifiex_rxinfo rx_info; + struct mwifiex_txinfo tx_info; + }; +}; + static inline struct mwifiex_rxinfo *MWIFIEX_SKB_RXCB(struct sk_buff *skb) { - return (struct mwifiex_rxinfo *)(skb->cb + sizeof(dma_addr_t)); + struct mwifiex_cb *cb = (struct mwifiex_cb *)skb->cb; + + BUILD_BUG_ON(sizeof(struct mwifiex_cb) > sizeof(skb->cb)); + return &cb->rx_info; } static inline struct mwifiex_txinfo *MWIFIEX_SKB_TXCB(struct sk_buff *skb) { - return (struct mwifiex_txinfo *)(skb->cb + sizeof(dma_addr_t)); + struct mwifiex_cb *cb = (struct mwifiex_cb *)skb->cb; + + return &cb->tx_info; } -struct mwifiex_dma_mapping { - dma_addr_t addr; - size_t len; -}; +static inline void mwifiex_store_mapping(struct sk_buff *skb, + struct mwifiex_dma_mapping *mapping) +{ + struct mwifiex_cb 
*cb = (struct mwifiex_cb *)skb->cb; + + memcpy(&cb->dma_mapping, mapping, sizeof(*mapping)); +} -static inline void MWIFIEX_SKB_PACB(struct sk_buff *skb, - struct mwifiex_dma_mapping *mapping) +static inline void mwifiex_get_mapping(struct sk_buff *skb, + struct mwifiex_dma_mapping *mapping) { - memcpy(mapping, skb->cb, sizeof(*mapping)); + struct mwifiex_cb *cb = (struct mwifiex_cb *)skb->cb; + + memcpy(mapping, &cb->dma_mapping, sizeof(*mapping)); } static inline dma_addr_t MWIFIEX_SKB_DMA_ADDR(struct sk_buff *skb) { struct mwifiex_dma_mapping mapping; - MWIFIEX_SKB_PACB(skb, &mapping); + mwifiex_get_mapping(skb, &mapping); return mapping.addr; } diff --git a/drivers/net/wireless/rt2x00/rt2500pci.c b/drivers/net/wireless/rt2x00/rt2500pci.c index 2f1cd929c6f6..a511cccc9f01 100644 --- a/drivers/net/wireless/rt2x00/rt2500pci.c +++ b/drivers/net/wireless/rt2x00/rt2500pci.c @@ -1681,8 +1681,13 @@ static int rt2500pci_init_eeprom(struct rt2x00_dev *rt2x00dev) /* * Detect if this device has an hardware controlled radio. */ - if (rt2x00_get_field16(eeprom, EEPROM_ANTENNA_HARDWARE_RADIO)) + if (rt2x00_get_field16(eeprom, EEPROM_ANTENNA_HARDWARE_RADIO)) { __set_bit(CAPABILITY_HW_BUTTON, &rt2x00dev->cap_flags); + /* + * On this device RFKILL initialized during probe does not work. + */ + __set_bit(REQUIRE_DELAYED_RFKILL, &rt2x00dev->cap_flags); + } /* * Check if the BBP tuning should be enabled. diff --git a/drivers/net/wireless/rt2x00/rt2800usb.c b/drivers/net/wireless/rt2x00/rt2800usb.c index a49c3d73ea2c..832006b5aab1 100644 --- a/drivers/net/wireless/rt2x00/rt2800usb.c +++ b/drivers/net/wireless/rt2x00/rt2800usb.c @@ -229,6 +229,31 @@ static enum hrtimer_restart rt2800usb_tx_sta_fifo_timeout(struct hrtimer *timer) /* * Firmware functions */ +static int rt2800usb_autorun_detect(struct rt2x00_dev *rt2x00dev) +{ + __le32 *reg; + u32 fw_mode; + + reg = kmalloc(sizeof(*reg), GFP_KERNEL); + if (reg == NULL) + return -ENOMEM; + /* cannot use rt2x00usb_register_read here as it uses different + * mode (MULTI_READ vs. DEVICE_MODE) and does not pass the + * magic value USB_MODE_AUTORUN (0x11) to the device, thus the + * returned value would be invalid. + */ + rt2x00usb_vendor_request(rt2x00dev, USB_DEVICE_MODE, + USB_VENDOR_REQUEST_IN, 0, USB_MODE_AUTORUN, + reg, sizeof(*reg), REGISTER_TIMEOUT_FIRMWARE); + fw_mode = le32_to_cpu(*reg); + kfree(reg); + + if ((fw_mode & 0x00000003) == 2) + return 1; + + return 0; +} + static char *rt2800usb_get_firmware_name(struct rt2x00_dev *rt2x00dev) { return FIRMWARE_RT2870; @@ -240,6 +265,7 @@ static int rt2800usb_write_firmware(struct rt2x00_dev *rt2x00dev, int status; u32 offset; u32 length; + int retval; /* * Check which section of the firmware we need. @@ -257,8 +283,16 @@ static int rt2800usb_write_firmware(struct rt2x00_dev *rt2x00dev, /* * Write firmware to device. */ - rt2x00usb_register_multiwrite(rt2x00dev, FIRMWARE_IMAGE_BASE, - data + offset, length); + retval = rt2800usb_autorun_detect(rt2x00dev); + if (retval < 0) + return retval; + if (retval) { + rt2x00_info(rt2x00dev, + "Firmware loading not required - NIC in AutoRun mode\n"); + } else { + rt2x00usb_register_multiwrite(rt2x00dev, FIRMWARE_IMAGE_BASE, + data + offset, length); + } rt2x00usb_register_write(rt2x00dev, H2M_MAILBOX_CID, ~0); rt2x00usb_register_write(rt2x00dev, H2M_MAILBOX_STATUS, ~0); @@ -735,11 +769,26 @@ static void rt2800usb_fill_rxdone(struct queue_entry *entry, /* * Device probe functions. 
*/ +static int rt2800usb_efuse_detect(struct rt2x00_dev *rt2x00dev) +{ + int retval; + + retval = rt2800usb_autorun_detect(rt2x00dev); + if (retval < 0) + return retval; + if (retval) + return 1; + return rt2800_efuse_detect(rt2x00dev); +} + static int rt2800usb_read_eeprom(struct rt2x00_dev *rt2x00dev) { int retval; - if (rt2800_efuse_detect(rt2x00dev)) + retval = rt2800usb_efuse_detect(rt2x00dev); + if (retval < 0) + return retval; + if (retval) retval = rt2800_read_eeprom_efuse(rt2x00dev); else retval = rt2x00usb_eeprom_read(rt2x00dev, rt2x00dev->eeprom, diff --git a/drivers/net/wireless/rt2x00/rt2x00.h b/drivers/net/wireless/rt2x00/rt2x00.h index 010b76505243..d13f25cd70d5 100644 --- a/drivers/net/wireless/rt2x00/rt2x00.h +++ b/drivers/net/wireless/rt2x00/rt2x00.h @@ -693,6 +693,7 @@ enum rt2x00_capability_flags { REQUIRE_SW_SEQNO, REQUIRE_HT_TX_DESC, REQUIRE_PS_AUTOWAKE, + REQUIRE_DELAYED_RFKILL, /* * Capabilities diff --git a/drivers/net/wireless/rt2x00/rt2x00dev.c b/drivers/net/wireless/rt2x00/rt2x00dev.c index 2bde6729f5e6..4fa43a2eeb73 100644 --- a/drivers/net/wireless/rt2x00/rt2x00dev.c +++ b/drivers/net/wireless/rt2x00/rt2x00dev.c @@ -1126,9 +1126,10 @@ static void rt2x00lib_uninitialize(struct rt2x00_dev *rt2x00dev) return; /* - * Unregister extra components. + * Stop rfkill polling. */ - rt2x00rfkill_unregister(rt2x00dev); + if (test_bit(REQUIRE_DELAYED_RFKILL, &rt2x00dev->cap_flags)) + rt2x00rfkill_unregister(rt2x00dev); /* * Allow the HW to uninitialize. @@ -1166,6 +1167,12 @@ static int rt2x00lib_initialize(struct rt2x00_dev *rt2x00dev) set_bit(DEVICE_STATE_INITIALIZED, &rt2x00dev->flags); + /* + * Start rfkill polling. + */ + if (test_bit(REQUIRE_DELAYED_RFKILL, &rt2x00dev->cap_flags)) + rt2x00rfkill_register(rt2x00dev); + return 0; } @@ -1375,7 +1382,12 @@ int rt2x00lib_probe_dev(struct rt2x00_dev *rt2x00dev) rt2x00link_register(rt2x00dev); rt2x00leds_register(rt2x00dev); rt2x00debug_register(rt2x00dev); - rt2x00rfkill_register(rt2x00dev); + + /* + * Start rfkill polling. + */ + if (!test_bit(REQUIRE_DELAYED_RFKILL, &rt2x00dev->cap_flags)) + rt2x00rfkill_register(rt2x00dev); return 0; @@ -1391,6 +1403,12 @@ void rt2x00lib_remove_dev(struct rt2x00_dev *rt2x00dev) clear_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags); /* + * Stop rfkill polling. + */ + if (!test_bit(REQUIRE_DELAYED_RFKILL, &rt2x00dev->cap_flags)) + rt2x00rfkill_unregister(rt2x00dev); + + /* * Disable radio. 
*/ rt2x00lib_disable_radio(rt2x00dev); diff --git a/drivers/net/wireless/rt2x00/rt2x00mac.c b/drivers/net/wireless/rt2x00/rt2x00mac.c index 212ac4842c16..004dff9b962d 100644 --- a/drivers/net/wireless/rt2x00/rt2x00mac.c +++ b/drivers/net/wireless/rt2x00/rt2x00mac.c @@ -487,6 +487,8 @@ int rt2x00mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, crypto.cipher = rt2x00crypto_key_to_cipher(key); if (crypto.cipher == CIPHER_NONE) return -EOPNOTSUPP; + if (crypto.cipher == CIPHER_TKIP && rt2x00_is_usb(rt2x00dev)) + return -EOPNOTSUPP; crypto.cmd = cmd; diff --git a/drivers/net/wireless/rt2x00/rt2x00usb.h b/drivers/net/wireless/rt2x00/rt2x00usb.h index e7bcf62347d5..831b65f93feb 100644 --- a/drivers/net/wireless/rt2x00/rt2x00usb.h +++ b/drivers/net/wireless/rt2x00/rt2x00usb.h @@ -93,6 +93,7 @@ enum rt2x00usb_mode_offset { USB_MODE_SLEEP = 7, /* RT73USB */ USB_MODE_FIRMWARE = 8, /* RT73USB */ USB_MODE_WAKEUP = 9, /* RT73USB */ + USB_MODE_AUTORUN = 17, /* RT2800USB */ }; /** diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h index 4dd7c4a1923b..2532ce85d718 100644 --- a/drivers/net/xen-netback/common.h +++ b/drivers/net/xen-netback/common.h @@ -222,6 +222,7 @@ struct xenvif { /* Queues */ struct xenvif_queue *queues; + unsigned int num_queues; /* active queues, resource allocated */ /* Miscellaneous private stuff. */ struct net_device *dev; diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c index 852da34b8961..9e97c7ca0ddd 100644 --- a/drivers/net/xen-netback/interface.c +++ b/drivers/net/xen-netback/interface.c @@ -137,32 +137,11 @@ static void xenvif_wake_queue_callback(unsigned long data) } } -static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb, - void *accel_priv, select_queue_fallback_t fallback) -{ - unsigned int num_queues = dev->real_num_tx_queues; - u32 hash; - u16 queue_index; - - /* First, check if there is only one queue to optimise the - * single-queue or old frontend scenario. 
- */ - if (num_queues == 1) { - queue_index = 0; - } else { - /* Use skb_get_hash to obtain an L4 hash if available */ - hash = skb_get_hash(skb); - queue_index = hash % num_queues; - } - - return queue_index; -} - static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev) { struct xenvif *vif = netdev_priv(dev); struct xenvif_queue *queue = NULL; - unsigned int num_queues = dev->real_num_tx_queues; + unsigned int num_queues = vif->num_queues; u16 index; int min_slots_needed; @@ -225,7 +204,7 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev) { struct xenvif *vif = netdev_priv(dev); struct xenvif_queue *queue = NULL; - unsigned int num_queues = dev->real_num_tx_queues; + unsigned int num_queues = vif->num_queues; unsigned long rx_bytes = 0; unsigned long rx_packets = 0; unsigned long tx_bytes = 0; @@ -256,7 +235,7 @@ out: static void xenvif_up(struct xenvif *vif) { struct xenvif_queue *queue = NULL; - unsigned int num_queues = vif->dev->real_num_tx_queues; + unsigned int num_queues = vif->num_queues; unsigned int queue_index; for (queue_index = 0; queue_index < num_queues; ++queue_index) { @@ -272,7 +251,7 @@ static void xenvif_up(struct xenvif *vif) static void xenvif_down(struct xenvif *vif) { struct xenvif_queue *queue = NULL; - unsigned int num_queues = vif->dev->real_num_tx_queues; + unsigned int num_queues = vif->num_queues; unsigned int queue_index; for (queue_index = 0; queue_index < num_queues; ++queue_index) { @@ -379,7 +358,7 @@ static void xenvif_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *stats, u64 * data) { struct xenvif *vif = netdev_priv(dev); - unsigned int num_queues = dev->real_num_tx_queues; + unsigned int num_queues = vif->num_queues; int i; unsigned int queue_index; struct xenvif_stats *vif_stats; @@ -424,7 +403,6 @@ static const struct net_device_ops xenvif_netdev_ops = { .ndo_fix_features = xenvif_fix_features, .ndo_set_mac_address = eth_mac_addr, .ndo_validate_addr = eth_validate_addr, - .ndo_select_queue = xenvif_select_queue, }; struct xenvif *xenvif_alloc(struct device *parent, domid_t domid, @@ -438,7 +416,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid, snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle); /* Allocate a netdev with the max. supported number of queues. * When the guest selects the desired number, it will be updated - * via netif_set_real_num_tx_queues(). + * via netif_set_real_num_*_queues(). */ dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, xenvif_max_queues); @@ -458,11 +436,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid, vif->dev = dev; vif->disabled = false; - /* Start out with no queues. The call below does not require - * rtnl_lock() as it happens before register_netdev(). - */ + /* Start out with no queues. 
*/ vif->queues = NULL; - netif_set_real_num_tx_queues(dev, 0); + vif->num_queues = 0; dev->netdev_ops = &xenvif_netdev_ops; dev->hw_features = NETIF_F_SG | @@ -677,7 +653,7 @@ static void xenvif_wait_unmap_timeout(struct xenvif_queue *queue, void xenvif_disconnect(struct xenvif *vif) { struct xenvif_queue *queue = NULL; - unsigned int num_queues = vif->dev->real_num_tx_queues; + unsigned int num_queues = vif->num_queues; unsigned int queue_index; if (netif_carrier_ok(vif->dev)) @@ -724,7 +700,7 @@ void xenvif_deinit_queue(struct xenvif_queue *queue) void xenvif_free(struct xenvif *vif) { struct xenvif_queue *queue = NULL; - unsigned int num_queues = vif->dev->real_num_tx_queues; + unsigned int num_queues = vif->num_queues; unsigned int queue_index; /* Here we want to avoid timeout messages if an skb can be legitimately * stuck somewhere else. Realistically this could be an another vif's @@ -748,12 +724,9 @@ void xenvif_free(struct xenvif *vif) xenvif_deinit_queue(queue); } - /* Free the array of queues. The call below does not require - * rtnl_lock() because it happens after unregister_netdev(). - */ - netif_set_real_num_tx_queues(vif->dev, 0); vfree(vif->queues); vif->queues = NULL; + vif->num_queues = 0; free_netdev(vif->dev); diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c index 1844a47636b6..c65b636bcab9 100644 --- a/drivers/net/xen-netback/netback.c +++ b/drivers/net/xen-netback/netback.c @@ -1030,14 +1030,21 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue, { struct gnttab_map_grant_ref *gop_map = *gopp_map; u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx; + /* This always points to the shinfo of the skb being checked, which + * could be either the first or the one on the frag_list + */ struct skb_shared_info *shinfo = skb_shinfo(skb); + /* If this is non-NULL, we are currently checking the frag_list skb, and + * this points to the shinfo of the first one + */ + struct skb_shared_info *first_shinfo = NULL; int nr_frags = shinfo->nr_frags; + const bool sharedslot = nr_frags && + frag_get_pending_idx(&shinfo->frags[0]) == pending_idx; int i, err; - struct sk_buff *first_skb = NULL; /* Check status of header. */ err = (*gopp_copy)->status; - (*gopp_copy)++; if (unlikely(err)) { if (net_ratelimit()) netdev_dbg(queue->vif->dev, @@ -1045,8 +1052,12 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue, (*gopp_copy)->status, pending_idx, (*gopp_copy)->source.u.ref); - xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR); + /* The first frag might still have this slot mapped */ + if (!sharedslot) + xenvif_idx_release(queue, pending_idx, + XEN_NETIF_RSP_ERROR); } + (*gopp_copy)++; check_frags: for (i = 0; i < nr_frags; i++, gop_map++) { @@ -1062,8 +1073,19 @@ check_frags: pending_idx, gop_map->handle); /* Had a previous error? Invalidate this fragment. */ - if (unlikely(err)) + if (unlikely(err)) { xenvif_idx_unmap(queue, pending_idx); + /* If the mapping of the first frag was OK, but + * the header's copy failed, and they are + * sharing a slot, send an error + */ + if (i == 0 && sharedslot) + xenvif_idx_release(queue, pending_idx, + XEN_NETIF_RSP_ERROR); + else + xenvif_idx_release(queue, pending_idx, + XEN_NETIF_RSP_OKAY); + } continue; } @@ -1075,42 +1097,53 @@ check_frags: gop_map->status, pending_idx, gop_map->ref); + xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR); /* Not the first error? Preceding frags already invalidated. */ if (err) continue; - /* First error: invalidate preceding fragments. 
*/ + + /* First error: if the header haven't shared a slot with the + * first frag, release it as well. + */ + if (!sharedslot) + xenvif_idx_release(queue, + XENVIF_TX_CB(skb)->pending_idx, + XEN_NETIF_RSP_OKAY); + + /* Invalidate preceding fragments of this skb. */ for (j = 0; j < i; j++) { pending_idx = frag_get_pending_idx(&shinfo->frags[j]); xenvif_idx_unmap(queue, pending_idx); + xenvif_idx_release(queue, pending_idx, + XEN_NETIF_RSP_OKAY); + } + + /* And if we found the error while checking the frag_list, unmap + * the first skb's frags + */ + if (first_shinfo) { + for (j = 0; j < first_shinfo->nr_frags; j++) { + pending_idx = frag_get_pending_idx(&first_shinfo->frags[j]); + xenvif_idx_unmap(queue, pending_idx); + xenvif_idx_release(queue, pending_idx, + XEN_NETIF_RSP_OKAY); + } } /* Remember the error: invalidate all subsequent fragments. */ err = newerr; } - if (skb_has_frag_list(skb)) { - first_skb = skb; - skb = shinfo->frag_list; - shinfo = skb_shinfo(skb); + if (skb_has_frag_list(skb) && !first_shinfo) { + first_shinfo = skb_shinfo(skb); + shinfo = skb_shinfo(skb_shinfo(skb)->frag_list); nr_frags = shinfo->nr_frags; goto check_frags; } - /* There was a mapping error in the frag_list skb. We have to unmap - * the first skb's frags - */ - if (first_skb && err) { - int j; - shinfo = skb_shinfo(first_skb); - for (j = 0; j < shinfo->nr_frags; j++) { - pending_idx = frag_get_pending_idx(&shinfo->frags[j]); - xenvif_idx_unmap(queue, pending_idx); - } - } - *gopp_map = gop_map; return err; } @@ -1518,7 +1551,16 @@ static int xenvif_tx_submit(struct xenvif_queue *queue) /* Check the remap error code. */ if (unlikely(xenvif_tx_check_gop(queue, skb, &gop_map, &gop_copy))) { + /* If there was an error, xenvif_tx_check_gop is + * expected to release all the frags which were mapped, + * so kfree_skb shouldn't do it again + */ skb_shinfo(skb)->nr_frags = 0; + if (skb_has_frag_list(skb)) { + struct sk_buff *nskb = + skb_shinfo(skb)->frag_list; + skb_shinfo(nskb)->nr_frags = 0; + } kfree_skb(skb); continue; } @@ -1822,8 +1864,6 @@ void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx) tx_unmap_op.status); BUG(); } - - xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY); } static inline int rx_work_todo(struct xenvif_queue *queue) diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c index 96c63dc2509e..3d85acd84bad 100644 --- a/drivers/net/xen-netback/xenbus.c +++ b/drivers/net/xen-netback/xenbus.c @@ -527,9 +527,7 @@ static void connect(struct backend_info *be) /* Use the number of queues requested by the frontend */ be->vif->queues = vzalloc(requested_num_queues * sizeof(struct xenvif_queue)); - rtnl_lock(); - netif_set_real_num_tx_queues(be->vif->dev, requested_num_queues); - rtnl_unlock(); + be->vif->num_queues = requested_num_queues; for (queue_index = 0; queue_index < requested_num_queues; ++queue_index) { queue = &be->vif->queues[queue_index]; @@ -546,9 +544,7 @@ static void connect(struct backend_info *be) * earlier queues can be destroyed using the regular * disconnect logic. */ - rtnl_lock(); - netif_set_real_num_tx_queues(be->vif->dev, queue_index); - rtnl_unlock(); + be->vif->num_queues = queue_index; goto err; } @@ -561,13 +557,19 @@ static void connect(struct backend_info *be) * and also clean up any previously initialised queues. 
*/ xenvif_deinit_queue(queue); - rtnl_lock(); - netif_set_real_num_tx_queues(be->vif->dev, queue_index); - rtnl_unlock(); + be->vif->num_queues = queue_index; goto err; } } + /* Initialisation completed, tell core driver the number of + * active queues. + */ + rtnl_lock(); + netif_set_real_num_tx_queues(be->vif->dev, requested_num_queues); + netif_set_real_num_rx_queues(be->vif->dev, requested_num_queues); + rtnl_unlock(); + xenvif_carrier_on(be->vif); unregister_hotplug_status_watch(be); @@ -582,13 +584,11 @@ static void connect(struct backend_info *be) return; err: - if (be->vif->dev->real_num_tx_queues > 0) + if (be->vif->num_queues > 0) xenvif_disconnect(be->vif); /* Clean up existing queues */ vfree(be->vif->queues); be->vif->queues = NULL; - rtnl_lock(); - netif_set_real_num_tx_queues(be->vif->dev, 0); - rtnl_unlock(); + be->vif->num_queues = 0; return; } @@ -596,7 +596,7 @@ err: static int connect_rings(struct backend_info *be, struct xenvif_queue *queue) { struct xenbus_device *dev = be->dev; - unsigned int num_queues = queue->vif->dev->real_num_tx_queues; + unsigned int num_queues = queue->vif->num_queues; unsigned long tx_ring_ref, rx_ring_ref; unsigned int tx_evtchn, rx_evtchn; int err; diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index 5a7872ac3566..055222bae6e4 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -1287,7 +1287,7 @@ static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id) if (likely(netif_carrier_ok(dev) && RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))) - napi_schedule(&queue->napi); + napi_schedule(&queue->napi); return IRQ_HANDLED; } @@ -1437,16 +1437,12 @@ static void xennet_end_access(int ref, void *page) static void xennet_disconnect_backend(struct netfront_info *info) { unsigned int i = 0; - struct netfront_queue *queue = NULL; unsigned int num_queues = info->netdev->real_num_tx_queues; + netif_carrier_off(info->netdev); + for (i = 0; i < num_queues; ++i) { - /* Stop old i/f to prevent errors whilst we rebuild the state. 
*/ - spin_lock_bh(&queue->rx_lock); - spin_lock_irq(&queue->tx_lock); - netif_carrier_off(queue->info->netdev); - spin_unlock_irq(&queue->tx_lock); - spin_unlock_bh(&queue->rx_lock); + struct netfront_queue *queue = &info->queues[i]; if (queue->tx_irq && (queue->tx_irq == queue->rx_irq)) unbind_from_irqhandler(queue->tx_irq, queue); @@ -1457,6 +1453,8 @@ static void xennet_disconnect_backend(struct netfront_info *info) queue->tx_evtchn = queue->rx_evtchn = 0; queue->tx_irq = queue->rx_irq = 0; + napi_synchronize(&queue->napi); + /* End access and free the pages */ xennet_end_access(queue->tx_ring_ref, queue->tx.sring); xennet_end_access(queue->rx_ring_ref, queue->rx.sring); @@ -1698,8 +1696,6 @@ static int xennet_init_queue(struct netfront_queue *queue) goto exit_free_tx; } - netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64); - return 0; exit_free_tx: @@ -1790,6 +1786,70 @@ error: return err; } +static void xennet_destroy_queues(struct netfront_info *info) +{ + unsigned int i; + + rtnl_lock(); + + for (i = 0; i < info->netdev->real_num_tx_queues; i++) { + struct netfront_queue *queue = &info->queues[i]; + + if (netif_running(info->netdev)) + napi_disable(&queue->napi); + netif_napi_del(&queue->napi); + } + + rtnl_unlock(); + + kfree(info->queues); + info->queues = NULL; +} + +static int xennet_create_queues(struct netfront_info *info, + unsigned int num_queues) +{ + unsigned int i; + int ret; + + info->queues = kcalloc(num_queues, sizeof(struct netfront_queue), + GFP_KERNEL); + if (!info->queues) + return -ENOMEM; + + rtnl_lock(); + + for (i = 0; i < num_queues; i++) { + struct netfront_queue *queue = &info->queues[i]; + + queue->id = i; + queue->info = info; + + ret = xennet_init_queue(queue); + if (ret < 0) { + dev_warn(&info->netdev->dev, "only created %d queues\n", + num_queues); + num_queues = i; + break; + } + + netif_napi_add(queue->info->netdev, &queue->napi, + xennet_poll, 64); + if (netif_running(info->netdev)) + napi_enable(&queue->napi); + } + + netif_set_real_num_tx_queues(info->netdev, num_queues); + + rtnl_unlock(); + + if (num_queues == 0) { + dev_err(&info->netdev->dev, "no queues\n"); + return -EINVAL; + } + return 0; +} + /* Common code used when first setting up, and when resuming. */ static int talk_to_netback(struct xenbus_device *dev, struct netfront_info *info) @@ -1826,42 +1886,20 @@ static int talk_to_netback(struct xenbus_device *dev, goto out; } - /* Allocate array of queues */ - info->queues = kcalloc(num_queues, sizeof(struct netfront_queue), GFP_KERNEL); - if (!info->queues) { - err = -ENOMEM; - goto out; - } - rtnl_lock(); - netif_set_real_num_tx_queues(info->netdev, num_queues); - rtnl_unlock(); + if (info->queues) + xennet_destroy_queues(info); + + err = xennet_create_queues(info, num_queues); + if (err < 0) + goto destroy_ring; /* Create shared ring, alloc event channel -- for each queue */ for (i = 0; i < num_queues; ++i) { queue = &info->queues[i]; - queue->id = i; - queue->info = info; - err = xennet_init_queue(queue); - if (err) { - /* xennet_init_queue() cleans up after itself on failure, - * but we still have to clean up any previously initialised - * queues. If i > 0, set num_queues to i, then goto - * destroy_ring, which calls xennet_disconnect_backend() - * to tidy up. 
- */ - if (i > 0) { - rtnl_lock(); - netif_set_real_num_tx_queues(info->netdev, i); - rtnl_unlock(); - goto destroy_ring; - } else { - goto out; - } - } err = setup_netfront(dev, queue, feature_split_evtchn); if (err) { - /* As for xennet_init_queue(), setup_netfront() will tidy - * up the current queue on error, but we need to clean up + /* setup_netfront() will tidy up the current + * queue on error, but we need to clean up * those already allocated. */ if (i > 0) { @@ -2005,13 +2043,15 @@ static int xennet_connect(struct net_device *dev) /* By now, the queue structures have been set up */ for (j = 0; j < num_queues; ++j) { queue = &np->queues[j]; - spin_lock_bh(&queue->rx_lock); - spin_lock_irq(&queue->tx_lock); /* Step 1: Discard all pending TX packet fragments. */ + spin_lock_irq(&queue->tx_lock); xennet_release_tx_bufs(queue); + spin_unlock_irq(&queue->tx_lock); /* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */ + spin_lock_bh(&queue->rx_lock); + for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) { skb_frag_t *frag; const struct page *page; @@ -2035,6 +2075,8 @@ static int xennet_connect(struct net_device *dev) } queue->rx.req_prod_pvt = requeue_idx; + + spin_unlock_bh(&queue->rx_lock); } /* @@ -2046,13 +2088,17 @@ static int xennet_connect(struct net_device *dev) netif_carrier_on(np->netdev); for (j = 0; j < num_queues; ++j) { queue = &np->queues[j]; + notify_remote_via_irq(queue->tx_irq); if (queue->tx_irq != queue->rx_irq) notify_remote_via_irq(queue->rx_irq); - xennet_tx_buf_gc(queue); - xennet_alloc_rx_buffers(queue); + spin_lock_irq(&queue->tx_lock); + xennet_tx_buf_gc(queue); spin_unlock_irq(&queue->tx_lock); + + spin_lock_bh(&queue->rx_lock); + xennet_alloc_rx_buffers(queue); spin_unlock_bh(&queue->rx_lock); } diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c index c4cddf0cd96d..b777d8f46bd5 100644 --- a/drivers/of/fdt.c +++ b/drivers/of/fdt.c @@ -880,6 +880,21 @@ void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size) const u64 phys_offset = __pa(PAGE_OFFSET); base &= PAGE_MASK; size &= PAGE_MASK; + + if (sizeof(phys_addr_t) < sizeof(u64)) { + if (base > ULONG_MAX) { + pr_warning("Ignoring memory block 0x%llx - 0x%llx\n", + base, base + size); + return; + } + + if (base + size > ULONG_MAX) { + pr_warning("Ignoring memory range 0x%lx - 0x%llx\n", + ULONG_MAX, base + size); + size = ULONG_MAX - base; + } + } + if (base + size < phys_offset) { pr_warning("Ignoring memory block 0x%llx - 0x%llx\n", base, base + size); diff --git a/drivers/of/of_mdio.c b/drivers/of/of_mdio.c index fb4a59830648..401b2453da45 100644 --- a/drivers/of/of_mdio.c +++ b/drivers/of/of_mdio.c @@ -182,40 +182,6 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np) } EXPORT_SYMBOL(of_mdiobus_register); -/** - * of_mdiobus_link_phydev - Find a device node for a phy - * @mdio: pointer to mii_bus structure - * @phydev: phydev for which the of_node pointer should be set - * - * Walk the list of subnodes of a mdio bus and look for a node that matches the - * phy's address with its 'reg' property. If found, set the of_node pointer for - * the phy. This allows auto-probed pyh devices to be supplied with information - * passed in via DT. 
- */ -void of_mdiobus_link_phydev(struct mii_bus *mdio, - struct phy_device *phydev) -{ - struct device *dev = &phydev->dev; - struct device_node *child; - - if (dev->of_node || !mdio->dev.of_node) - return; - - for_each_available_child_of_node(mdio->dev.of_node, child) { - int addr; - - addr = of_mdio_parse_addr(&mdio->dev, child); - if (addr < 0) - continue; - - if (addr == phydev->addr) { - dev->of_node = child; - return; - } - } -} -EXPORT_SYMBOL(of_mdiobus_link_phydev); - /* Helper function for of_phy_find_device */ static int of_phy_match(struct device *dev, void *phy_np) { @@ -323,11 +289,13 @@ int of_phy_register_fixed_link(struct device_node *np) fixed_link_node = of_get_child_by_name(np, "fixed-link"); if (fixed_link_node) { status.link = 1; - status.duplex = of_property_read_bool(np, "full-duplex"); + status.duplex = of_property_read_bool(fixed_link_node, + "full-duplex"); if (of_property_read_u32(fixed_link_node, "speed", &status.speed)) return -EINVAL; - status.pause = of_property_read_bool(np, "pause"); - status.asym_pause = of_property_read_bool(np, "asym-pause"); + status.pause = of_property_read_bool(fixed_link_node, "pause"); + status.asym_pause = of_property_read_bool(fixed_link_node, + "asym-pause"); of_node_put(fixed_link_node); return fixed_phy_register(PHY_POLL, &status, np); } diff --git a/drivers/parport/Kconfig b/drivers/parport/Kconfig index 2872ece81f35..44333bd8f908 100644 --- a/drivers/parport/Kconfig +++ b/drivers/parport/Kconfig @@ -5,6 +5,12 @@ # Parport configuration. # +config ARCH_MIGHT_HAVE_PC_PARPORT + bool + help + Select this config option from the architecture Kconfig if + the architecture might have PC parallel port hardware. + menuconfig PARPORT tristate "Parallel port support" depends on HAS_IOMEM @@ -31,12 +37,6 @@ menuconfig PARPORT If unsure, say Y. -config ARCH_MIGHT_HAVE_PC_PARPORT - bool - help - Select this config option from the architecture Kconfig if - the architecture might have PC parallel port hardware. - if PARPORT config PARPORT_PC diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 63a54a340863..1c8592b0e146 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -3135,8 +3135,13 @@ static int pci_af_flr(struct pci_dev *dev, int probe) if (probe) return 0; - /* Wait for Transaction Pending bit clean */ - if (pci_wait_for_pending(dev, pos + PCI_AF_STATUS, PCI_AF_STATUS_TP)) + /* + * Wait for Transaction Pending bit to clear. A word-aligned test + * is used, so we use the control offset rather than status and shift + * the test bit to match. 
+ */ + if (pci_wait_for_pending(dev, pos + PCI_AF_CTRL, + PCI_AF_STATUS_TP << 8)) goto clear; dev_err(&dev->dev, "transaction is not cleared; proceeding with reset anyway\n"); diff --git a/drivers/phy/Kconfig b/drivers/phy/Kconfig index 16a2f067c242..64b98d242ea6 100644 --- a/drivers/phy/Kconfig +++ b/drivers/phy/Kconfig @@ -112,6 +112,7 @@ config PHY_EXYNOS5250_SATA config PHY_SUN4I_USB tristate "Allwinner sunxi SoC USB PHY driver" depends on ARCH_SUNXI && HAS_IOMEM && OF + depends on RESET_CONTROLLER select GENERIC_PHY help Enable this to support the transceiver that is part of Allwinner @@ -122,6 +123,7 @@ config PHY_SUN4I_USB config PHY_SAMSUNG_USB2 tristate "Samsung USB 2.0 PHY driver" + depends on HAS_IOMEM select GENERIC_PHY select MFD_SYSCON help diff --git a/drivers/phy/phy-core.c b/drivers/phy/phy-core.c index c64a2f3b2d62..49c446530101 100644 --- a/drivers/phy/phy-core.c +++ b/drivers/phy/phy-core.c @@ -614,8 +614,9 @@ struct phy *phy_create(struct device *dev, const struct phy_ops *ops, return phy; put_dev: - put_device(&phy->dev); - ida_remove(&phy_ida, phy->id); + put_device(&phy->dev); /* calls phy_release() which frees resources */ + return ERR_PTR(ret); + free_phy: kfree(phy); return ERR_PTR(ret); @@ -799,7 +800,7 @@ static void phy_release(struct device *dev) phy = to_phy(dev); dev_vdbg(dev, "releasing '%s'\n", dev_name(dev)); - ida_remove(&phy_ida, phy->id); + ida_simple_remove(&phy_ida, phy->id); kfree(phy); } diff --git a/drivers/phy/phy-omap-usb2.c b/drivers/phy/phy-omap-usb2.c index 7007c11fe07d..34b396146c8a 100644 --- a/drivers/phy/phy-omap-usb2.c +++ b/drivers/phy/phy-omap-usb2.c @@ -233,8 +233,8 @@ static int omap_usb2_probe(struct platform_device *pdev) if (phy_data->flags & OMAP_USB2_CALIBRATE_FALSE_DISCONNECT) { res = platform_get_resource(pdev, IORESOURCE_MEM, 0); phy->phy_base = devm_ioremap_resource(&pdev->dev, res); - if (!phy->phy_base) - return -ENOMEM; + if (IS_ERR(phy->phy_base)) + return PTR_ERR(phy->phy_base); phy->flags |= OMAP_USB2_CALIBRATE_FALSE_DISCONNECT; } @@ -262,7 +262,6 @@ static int omap_usb2_probe(struct platform_device *pdev) otg->phy = &phy->phy; platform_set_drvdata(pdev, phy); - pm_runtime_enable(phy->dev); generic_phy = devm_phy_create(phy->dev, &ops, NULL); if (IS_ERR(generic_phy)) @@ -270,10 +269,13 @@ static int omap_usb2_probe(struct platform_device *pdev) phy_set_drvdata(generic_phy, phy); + pm_runtime_enable(phy->dev); phy_provider = devm_of_phy_provider_register(phy->dev, of_phy_simple_xlate); - if (IS_ERR(phy_provider)) + if (IS_ERR(phy_provider)) { + pm_runtime_disable(phy->dev); return PTR_ERR(phy_provider); + } phy->wkupclk = devm_clk_get(phy->dev, "wkupclk"); if (IS_ERR(phy->wkupclk)) { @@ -317,6 +319,7 @@ static int omap_usb2_remove(struct platform_device *pdev) if (!IS_ERR(phy->optclk)) clk_unprepare(phy->optclk); usb_remove_phy(&phy->phy); + pm_runtime_disable(phy->dev); return 0; } diff --git a/drivers/phy/phy-samsung-usb2.c b/drivers/phy/phy-samsung-usb2.c index 8a8c6bc8709a..1e69a32c221d 100644 --- a/drivers/phy/phy-samsung-usb2.c +++ b/drivers/phy/phy-samsung-usb2.c @@ -107,6 +107,7 @@ static const struct of_device_id samsung_usb2_phy_of_match[] = { #endif { }, }; +MODULE_DEVICE_TABLE(of, samsung_usb2_phy_of_match); static int samsung_usb2_phy_probe(struct platform_device *pdev) { diff --git a/drivers/pinctrl/berlin/berlin.c b/drivers/pinctrl/berlin/berlin.c index edf5d2fd2b22..86db2235ab00 100644 --- a/drivers/pinctrl/berlin/berlin.c +++ b/drivers/pinctrl/berlin/berlin.c @@ -320,7 +320,7 @@ int 
berlin_pinctrl_probe(struct platform_device *pdev, regmap = dev_get_regmap(&pdev->dev, NULL); if (!regmap) - return PTR_ERR(regmap); + return -ENODEV; pctrl = devm_kzalloc(dev, sizeof(*pctrl), GFP_KERNEL); if (!pctrl) diff --git a/drivers/pinctrl/pinctrl-st.c b/drivers/pinctrl/pinctrl-st.c index 1bd6363bc95e..9f43916637ca 100644 --- a/drivers/pinctrl/pinctrl-st.c +++ b/drivers/pinctrl/pinctrl-st.c @@ -1431,7 +1431,7 @@ static void st_gpio_irqmux_handler(unsigned irq, struct irq_desc *desc) status = readl(info->irqmux_base); - for_each_set_bit(n, &status, ST_GPIO_PINS_PER_BANK) + for_each_set_bit(n, &status, info->nbanks) __gpio_irq_handler(&info->banks[n]); chained_irq_exit(chip, desc); diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c index f1ca75e6d7b1..5f38c7f67834 100644 --- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c +++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c @@ -211,6 +211,10 @@ static int sunxi_pctrl_dt_node_to_map(struct pinctrl_dev *pctldev, configlen++; pinconfig = kzalloc(configlen * sizeof(*pinconfig), GFP_KERNEL); + if (!pinconfig) { + kfree(*map); + return -ENOMEM; + } if (!of_property_read_u32(node, "allwinner,drive", &val)) { u16 strength = (val + 1) * 10; diff --git a/drivers/ptp/Kconfig b/drivers/ptp/Kconfig index 6aea373547f6..ee3de3421f2d 100644 --- a/drivers/ptp/Kconfig +++ b/drivers/ptp/Kconfig @@ -74,7 +74,7 @@ config DP83640_PHY config PTP_1588_CLOCK_PCH tristate "Intel PCH EG20T as PTP clock" - depends on X86 || COMPILE_TEST + depends on X86_32 || COMPILE_TEST depends on HAS_IOMEM && NET select PTP_1588_CLOCK help diff --git a/drivers/regulator/bcm590xx-regulator.c b/drivers/regulator/bcm590xx-regulator.c index 57544e254a78..58ece59367ae 100644 --- a/drivers/regulator/bcm590xx-regulator.c +++ b/drivers/regulator/bcm590xx-regulator.c @@ -119,6 +119,10 @@ static const unsigned int ldo_c_table[] = { 2900000, 3000000, 3300000, }; +static const unsigned int ldo_vbus[] = { + 5000000, +}; + /* DCDC group CSR: supported voltages in microvolts */ static const struct regulator_linear_range dcdc_csr_ranges[] = { REGULATOR_LINEAR_RANGE(860000, 2, 50, 10000), @@ -192,6 +196,7 @@ static struct bcm590xx_info bcm590xx_regs[] = { BCM590XX_REG_TABLE(gpldo4, ldo_a_table), BCM590XX_REG_TABLE(gpldo5, ldo_a_table), BCM590XX_REG_TABLE(gpldo6, ldo_a_table), + BCM590XX_REG_TABLE(vbus, ldo_vbus), }; struct bcm590xx_reg { diff --git a/drivers/regulator/palmas-regulator.c b/drivers/regulator/palmas-regulator.c index b982f0ff4e01..93b4ad842901 100644 --- a/drivers/regulator/palmas-regulator.c +++ b/drivers/regulator/palmas-regulator.c @@ -325,6 +325,10 @@ static int palmas_set_mode_smps(struct regulator_dev *dev, unsigned int mode) if (rail_enable) palmas_smps_write(pmic->palmas, palmas_regs_info[id].ctrl_addr, reg); + + /* Switch the enable value to ensure this is used for enable */ + pmic->desc[id].enable_val = pmic->current_reg_mode[id]; + return 0; } @@ -964,6 +968,14 @@ static int palmas_regulators_probe(struct platform_device *pdev) return ret; pmic->current_reg_mode[id] = reg & PALMAS_SMPS12_CTRL_MODE_ACTIVE_MASK; + + pmic->desc[id].enable_reg = + PALMAS_BASE_TO_REG(PALMAS_SMPS_BASE, + palmas_regs_info[id].ctrl_addr); + pmic->desc[id].enable_mask = + PALMAS_SMPS12_CTRL_MODE_ACTIVE_MASK; + /* set_mode overrides this value */ + pmic->desc[id].enable_val = SMPS_CTRL_MODE_ON; } pmic->desc[id].type = REGULATOR_VOLTAGE; diff --git a/drivers/regulator/tps65218-regulator.c b/drivers/regulator/tps65218-regulator.c index 69b4b7750410..9effe48c605e 100644 --- 
a/drivers/regulator/tps65218-regulator.c +++ b/drivers/regulator/tps65218-regulator.c @@ -209,7 +209,7 @@ static const struct regulator_desc regulators[] = { 1, -1, -1, TPS65218_REG_ENABLE1, TPS65218_ENABLE1_DC6_EN, NULL, NULL, 0, 0), TPS65218_REGULATOR("LDO1", TPS65218_LDO_1, tps65218_ldo1_dcdc34_ops, 64, - TPS65218_REG_CONTROL_DCDC4, + TPS65218_REG_CONTROL_LDO1, TPS65218_CONTROL_LDO1_MASK, TPS65218_REG_ENABLE2, TPS65218_ENABLE2_LDO1_EN, NULL, ldo1_dcdc3_ranges, 2, 0), @@ -240,6 +240,7 @@ static int tps65218_regulator_probe(struct platform_device *pdev) config.init_data = init_data; config.driver_data = tps; config.regmap = tps->regmap; + config.of_node = pdev->dev.of_node; rdev = devm_regulator_register(&pdev->dev, &regulators[id], &config); if (IS_ERR(rdev)) { diff --git a/drivers/s390/char/raw3270.c b/drivers/s390/char/raw3270.c index 15b3459f8656..220acb4cbee5 100644 --- a/drivers/s390/char/raw3270.c +++ b/drivers/s390/char/raw3270.c @@ -633,7 +633,6 @@ raw3270_reset_device_cb(struct raw3270_request *rq, void *data) } else raw3270_writesf_readpart(rp); memset(&rp->init_reset, 0, sizeof(rp->init_reset)); - memset(&rp->init_data, 0, sizeof(rp->init_data)); } static int diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c index 69ef4f8cfac8..4038437ff033 100644 --- a/drivers/s390/crypto/ap_bus.c +++ b/drivers/s390/crypto/ap_bus.c @@ -901,10 +901,15 @@ static int ap_device_probe(struct device *dev) int rc; ap_dev->drv = ap_drv; + + spin_lock_bh(&ap_device_list_lock); + list_add(&ap_dev->list, &ap_device_list); + spin_unlock_bh(&ap_device_list_lock); + rc = ap_drv->probe ? ap_drv->probe(ap_dev) : -ENODEV; - if (!rc) { + if (rc) { spin_lock_bh(&ap_device_list_lock); - list_add(&ap_dev->list, &ap_device_list); + list_del_init(&ap_dev->list); spin_unlock_bh(&ap_device_list_lock); } return rc; } diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c index 554349029628..56467df3d6de 100644 --- a/drivers/scsi/be2iscsi/be_main.c +++ b/drivers/scsi/be2iscsi/be_main.c @@ -4198,6 +4198,8 @@ static int hba_setup_cid_tbls(struct beiscsi_hba *phba) kfree(phba->ep_array); phba->ep_array = NULL; ret = -ENOMEM; + + goto free_memory; } for (i = 0; i < phba->params.cxns_per_ctrl; i++) { diff --git a/drivers/scsi/be2iscsi/be_mgmt.c b/drivers/scsi/be2iscsi/be_mgmt.c index 6045aa78986a..07934b0b9ee1 100644 --- a/drivers/scsi/be2iscsi/be_mgmt.c +++ b/drivers/scsi/be2iscsi/be_mgmt.c @@ -1008,10 +1008,8 @@ int mgmt_set_ip(struct beiscsi_hba *phba, BE2_IPV6 : BE2_IPV4 ; rc = mgmt_get_if_info(phba, ip_type, &if_info); - if (rc) { - kfree(if_info); + if (rc) return rc; - } if (boot_proto == ISCSI_BOOTPROTO_DHCP) { if (if_info->dhcp_state) { diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c index f54843023466..785d0d71781e 100644 --- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c +++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c @@ -516,23 +516,17 @@ static void bnx2fc_recv_frame(struct sk_buff *skb) skb_pull(skb, sizeof(struct fcoe_hdr)); fr_len = skb->len - sizeof(struct fcoe_crc_eof); - stats = per_cpu_ptr(lport->stats, get_cpu()); - stats->RxFrames++; - stats->RxWords += fr_len / FCOE_WORD_TO_BYTE; - fp = (struct fc_frame *)skb; fc_frame_init(fp); fr_dev(fp) = lport; fr_sof(fp) = hp->fcoe_sof; if (skb_copy_bits(skb, fr_len, &crc_eof, sizeof(crc_eof))) { - put_cpu(); kfree_skb(skb); return; } fr_eof(fp) = crc_eof.fcoe_eof; fr_crc(fp) = crc_eof.fcoe_crc32; if (pskb_trim(skb, fr_len)) { - put_cpu(); kfree_skb(skb); return; } @@ -544,7 +538,6 @@ static void 
bnx2fc_recv_frame(struct sk_buff *skb) port = lport_priv(vn_port); if (!ether_addr_equal(port->data_src_addr, dest_mac)) { BNX2FC_HBA_DBG(lport, "fpma mismatch\n"); - put_cpu(); kfree_skb(skb); return; } @@ -552,7 +545,6 @@ static void bnx2fc_recv_frame(struct sk_buff *skb) if (fh->fh_r_ctl == FC_RCTL_DD_SOL_DATA && fh->fh_type == FC_TYPE_FCP) { /* Drop FCP data. We dont this in L2 path */ - put_cpu(); kfree_skb(skb); return; } @@ -562,7 +554,6 @@ static void bnx2fc_recv_frame(struct sk_buff *skb) case ELS_LOGO: if (ntoh24(fh->fh_s_id) == FC_FID_FLOGI) { /* drop non-FIP LOGO */ - put_cpu(); kfree_skb(skb); return; } @@ -572,22 +563,23 @@ static void bnx2fc_recv_frame(struct sk_buff *skb) if (fh->fh_r_ctl == FC_RCTL_BA_ABTS) { /* Drop incoming ABTS */ - put_cpu(); kfree_skb(skb); return; } + stats = per_cpu_ptr(lport->stats, smp_processor_id()); + stats->RxFrames++; + stats->RxWords += fr_len / FCOE_WORD_TO_BYTE; + if (le32_to_cpu(fr_crc(fp)) != ~crc32(~0, skb->data, fr_len)) { if (stats->InvalidCRCCount < 5) printk(KERN_WARNING PFX "dropping frame with " "CRC error\n"); stats->InvalidCRCCount++; - put_cpu(); kfree_skb(skb); return; } - put_cpu(); fc_exch_recv(lport, fp); } diff --git a/drivers/scsi/bnx2fc/bnx2fc_io.c b/drivers/scsi/bnx2fc/bnx2fc_io.c index 32a5e0a2a669..7bc47fc7c686 100644 --- a/drivers/scsi/bnx2fc/bnx2fc_io.c +++ b/drivers/scsi/bnx2fc/bnx2fc_io.c @@ -282,6 +282,8 @@ struct bnx2fc_cmd_mgr *bnx2fc_cmd_mgr_alloc(struct bnx2fc_hba *hba) arr_sz, GFP_KERNEL); if (!cmgr->free_list_lock) { printk(KERN_ERR PFX "failed to alloc free_list_lock\n"); + kfree(cmgr->free_list); + cmgr->free_list = NULL; goto mem_err; } diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c index 2ebfb2bb0f42..7b23f21f22f1 100644 --- a/drivers/scsi/ibmvscsi/ibmvscsi.c +++ b/drivers/scsi/ibmvscsi/ibmvscsi.c @@ -185,6 +185,11 @@ static struct viosrp_crq *crq_queue_next_crq(struct crq_queue *queue) if (crq->valid & 0x80) { if (++queue->cur == queue->size) queue->cur = 0; + + /* Ensure the read of the valid bit occurs before reading any + * other bits of the CRQ entry + */ + rmb(); } else crq = NULL; spin_unlock_irqrestore(&queue->lock, flags); @@ -203,6 +208,11 @@ static int ibmvscsi_send_crq(struct ibmvscsi_host_data *hostdata, { struct vio_dev *vdev = to_vio_dev(hostdata->dev); + /* + * Ensure the command buffer is flushed to memory before handing it + * over to the VIOS to prevent it from fetching any stale data. + */ + mb(); return plpar_hcall_norets(H_SEND_CRQ, vdev->unit_address, word1, word2); } @@ -797,7 +807,8 @@ static void purge_requests(struct ibmvscsi_host_data *hostdata, int error_code) evt->hostdata->dev); if (evt->cmnd_done) evt->cmnd_done(evt->cmnd); - } else if (evt->done) + } else if (evt->done && evt->crq.format != VIOSRP_MAD_FORMAT && + evt->iu.srp.login_req.opcode != SRP_LOGIN_REQ) evt->done(evt); free_event_struct(&evt->hostdata->pool, evt); spin_lock_irqsave(hostdata->host->host_lock, flags); diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c index c4f31b21feb8..e90c89f1d480 100644 --- a/drivers/scsi/pm8001/pm8001_init.c +++ b/drivers/scsi/pm8001/pm8001_init.c @@ -677,7 +677,7 @@ static void pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha) * pm8001_get_phy_settings_info : Read phy setting values. * @pm8001_ha : our hba. 
*/ -void pm8001_get_phy_settings_info(struct pm8001_hba_info *pm8001_ha) +static int pm8001_get_phy_settings_info(struct pm8001_hba_info *pm8001_ha) { #ifdef PM8001_READ_VPD @@ -691,11 +691,15 @@ void pm8001_get_phy_settings_info(struct pm8001_hba_info *pm8001_ha) payload.offset = 0; payload.length = 4096; payload.func_specific = kzalloc(4096, GFP_KERNEL); + if (!payload.func_specific) + return -ENOMEM; /* Read phy setting values from flash */ PM8001_CHIP_DISP->get_nvmd_req(pm8001_ha, &payload); wait_for_completion(&completion); pm8001_set_phy_profile(pm8001_ha, sizeof(u8), payload.func_specific); + kfree(payload.func_specific); #endif + return 0; } #ifdef PM8001_USE_MSIX @@ -879,8 +883,11 @@ static int pm8001_pci_probe(struct pci_dev *pdev, pm8001_init_sas_add(pm8001_ha); /* phy setting support for motherboard controller */ if (pdev->subsystem_vendor != PCI_VENDOR_ID_ADAPTEC2 && - pdev->subsystem_vendor != 0) - pm8001_get_phy_settings_info(pm8001_ha); + pdev->subsystem_vendor != 0) { + rc = pm8001_get_phy_settings_info(pm8001_ha); + if (rc) + goto err_out_shost; + } pm8001_post_sas_ha_init(shost, chip); rc = sas_register_ha(SHOST_TO_SAS_HA(shost)); if (rc) diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c index 4b188b0164e9..e632e14180cf 100644 --- a/drivers/scsi/qla2xxx/qla_target.c +++ b/drivers/scsi/qla2xxx/qla_target.c @@ -1128,7 +1128,7 @@ static void qlt_24xx_retry_term_exchange(struct scsi_qla_host *vha, ctio->u.status1.flags = __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | CTIO7_FLAGS_TERMINATE); - ctio->u.status1.ox_id = entry->fcp_hdr_le.ox_id; + ctio->u.status1.ox_id = cpu_to_le16(entry->fcp_hdr_le.ox_id); qla2x00_start_iocbs(vha, vha->req); @@ -1262,6 +1262,7 @@ static void qlt_24xx_send_task_mgmt_ctio(struct scsi_qla_host *ha, { struct atio_from_isp *atio = &mcmd->orig_iocb.atio; struct ctio7_to_24xx *ctio; + uint16_t temp; ql_dbg(ql_dbg_tgt, ha, 0xe008, "Sending task mgmt CTIO7 (ha=%p, atio=%p, resp_code=%x\n", @@ -1292,7 +1293,8 @@ static void qlt_24xx_send_task_mgmt_ctio(struct scsi_qla_host *ha, ctio->u.status1.flags = (atio->u.isp24.attr << 9) | __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | CTIO7_FLAGS_SEND_STATUS); - ctio->u.status1.ox_id = swab16(atio->u.isp24.fcp_hdr.ox_id); + temp = be16_to_cpu(atio->u.isp24.fcp_hdr.ox_id); + ctio->u.status1.ox_id = cpu_to_le16(temp); ctio->u.status1.scsi_status = __constant_cpu_to_le16(SS_RESPONSE_INFO_LEN_VALID); ctio->u.status1.response_len = __constant_cpu_to_le16(8); @@ -1513,6 +1515,7 @@ static int qlt_24xx_build_ctio_pkt(struct qla_tgt_prm *prm, struct ctio7_to_24xx *pkt; struct qla_hw_data *ha = vha->hw; struct atio_from_isp *atio = &prm->cmd->atio; + uint16_t temp; pkt = (struct ctio7_to_24xx *)vha->req->ring_ptr; prm->pkt = pkt; @@ -1541,13 +1544,13 @@ static int qlt_24xx_build_ctio_pkt(struct qla_tgt_prm *prm, pkt->initiator_id[2] = atio->u.isp24.fcp_hdr.s_id[0]; pkt->exchange_addr = atio->u.isp24.exchange_addr; pkt->u.status0.flags |= (atio->u.isp24.attr << 9); - pkt->u.status0.ox_id = swab16(atio->u.isp24.fcp_hdr.ox_id); + temp = be16_to_cpu(atio->u.isp24.fcp_hdr.ox_id); + pkt->u.status0.ox_id = cpu_to_le16(temp); pkt->u.status0.relative_offset = cpu_to_le32(prm->cmd->offset); ql_dbg(ql_dbg_tgt, vha, 0xe00c, "qla_target(%d): handle(cmd) -> %08x, timeout %d, ox_id %#x\n", - vha->vp_idx, pkt->handle, QLA_TGT_TIMEOUT, - le16_to_cpu(pkt->u.status0.ox_id)); + vha->vp_idx, pkt->handle, QLA_TGT_TIMEOUT, temp); return 0; } @@ -2619,6 +2622,7 @@ static int __qlt_send_term_exchange(struct 
scsi_qla_host *vha, struct qla_hw_data *ha = vha->hw; request_t *pkt; int ret = 0; + uint16_t temp; ql_dbg(ql_dbg_tgt, vha, 0xe01c, "Sending TERM EXCH CTIO (ha=%p)\n", ha); @@ -2655,7 +2659,8 @@ static int __qlt_send_term_exchange(struct scsi_qla_host *vha, ctio24->u.status1.flags = (atio->u.isp24.attr << 9) | __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | CTIO7_FLAGS_TERMINATE); - ctio24->u.status1.ox_id = swab16(atio->u.isp24.fcp_hdr.ox_id); + temp = be16_to_cpu(atio->u.isp24.fcp_hdr.ox_id); + ctio24->u.status1.ox_id = cpu_to_le16(temp); /* Most likely, it isn't needed */ ctio24->u.status1.residual = get_unaligned((uint32_t *) diff --git a/drivers/scsi/qla2xxx/qla_target.h b/drivers/scsi/qla2xxx/qla_target.h index e0a58fd13f66..d1d24fb0160a 100644 --- a/drivers/scsi/qla2xxx/qla_target.h +++ b/drivers/scsi/qla2xxx/qla_target.h @@ -443,7 +443,7 @@ struct ctio7_to_24xx { uint16_t reserved1; __le16 flags; uint32_t residual; - uint16_t ox_id; + __le16 ox_id; uint16_t scsi_status; uint32_t relative_offset; uint32_t reserved2; @@ -458,7 +458,7 @@ struct ctio7_to_24xx { uint16_t sense_length; uint16_t flags; uint32_t residual; - uint16_t ox_id; + __le16 ox_id; uint16_t scsi_status; uint16_t response_len; uint16_t reserved; diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c index cbe38e5e7955..7e957918f33f 100644 --- a/drivers/scsi/scsi_error.c +++ b/drivers/scsi/scsi_error.c @@ -131,7 +131,7 @@ scmd_eh_abort_handler(struct work_struct *work) "aborting command %p\n", scmd)); rtn = scsi_try_to_abort_cmd(sdev->host->hostt, scmd); if (rtn == SUCCESS) { - scmd->result |= DID_TIME_OUT << 16; + set_host_byte(scmd, DID_TIME_OUT); if (scsi_host_eh_past_deadline(sdev->host)) { SCSI_LOG_ERROR_RECOVERY(3, scmd_printk(KERN_INFO, scmd, @@ -167,7 +167,7 @@ scmd_eh_abort_handler(struct work_struct *work) scmd_printk(KERN_WARNING, scmd, "scmd %p terminate " "aborted command\n", scmd)); - scmd->result |= DID_TIME_OUT << 16; + set_host_byte(scmd, DID_TIME_OUT); scsi_finish_command(scmd); } } @@ -287,15 +287,15 @@ enum blk_eh_timer_return scsi_times_out(struct request *req) else if (host->hostt->eh_timed_out) rtn = host->hostt->eh_timed_out(scmd); - if (rtn == BLK_EH_NOT_HANDLED && !host->hostt->no_async_abort) - if (scsi_abort_command(scmd) == SUCCESS) + if (rtn == BLK_EH_NOT_HANDLED) { + if (!host->hostt->no_async_abort && + scsi_abort_command(scmd) == SUCCESS) return BLK_EH_NOT_HANDLED; - scmd->result |= DID_TIME_OUT << 16; - - if (unlikely(rtn == BLK_EH_NOT_HANDLED && - !scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD))) - rtn = BLK_EH_HANDLED; + set_host_byte(scmd, DID_TIME_OUT); + if (!scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD)) + rtn = BLK_EH_HANDLED; + } return rtn; } @@ -1777,7 +1777,7 @@ int scsi_decide_disposition(struct scsi_cmnd *scmd) break; case DID_ABORT: if (scmd->eh_eflags & SCSI_EH_ABORT_SCHEDULED) { - scmd->result |= DID_TIME_OUT << 16; + set_host_byte(scmd, DID_TIME_OUT); return SUCCESS; } case DID_NO_CONNECT: diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c index f80908f74ca9..521f5838594b 100644 --- a/drivers/scsi/scsi_transport_fc.c +++ b/drivers/scsi/scsi_transport_fc.c @@ -2549,6 +2549,7 @@ fc_rport_final_delete(struct work_struct *work) fc_flush_devloss(shost); if (!cancel_delayed_work(&rport->dev_loss_work)) fc_flush_devloss(shost); + cancel_work_sync(&rport->scan_work); spin_lock_irqsave(shost->host_lock, flags); rport->flags &= ~FC_RPORT_DEVLOSS_PENDING; } diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index e9689d57ccb6..6825eda1114a 
100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -2441,7 +2441,10 @@ sd_read_cache_type(struct scsi_disk *sdkp, unsigned char *buffer) } sdkp->DPOFUA = (data.device_specific & 0x10) != 0; - if (sdkp->DPOFUA && !sdkp->device->use_10_for_rw) { + if (sdp->broken_fua) { + sd_first_printk(KERN_NOTICE, sdkp, "Disabling FUA\n"); + sdkp->DPOFUA = 0; + } else if (sdkp->DPOFUA && !sdkp->device->use_10_for_rw) { sd_first_printk(KERN_NOTICE, sdkp, "Uses READ/WRITE(6), disabling FUA\n"); sdkp->DPOFUA = 0; diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c index 89ee5929eb6d..308256b5e4cb 100644 --- a/drivers/scsi/virtio_scsi.c +++ b/drivers/scsi/virtio_scsi.c @@ -237,6 +237,16 @@ static void virtscsi_req_done(struct virtqueue *vq) virtscsi_vq_done(vscsi, req_vq, virtscsi_complete_cmd); }; +static void virtscsi_poll_requests(struct virtio_scsi *vscsi) +{ + int i, num_vqs; + + num_vqs = vscsi->num_queues; + for (i = 0; i < num_vqs; i++) + virtscsi_vq_done(vscsi, &vscsi->req_vqs[i], + virtscsi_complete_cmd); +} + static void virtscsi_complete_free(struct virtio_scsi *vscsi, void *buf) { struct virtio_scsi_cmd *cmd = buf; @@ -253,6 +263,8 @@ static void virtscsi_ctrl_done(struct virtqueue *vq) virtscsi_vq_done(vscsi, &vscsi->ctrl_vq, virtscsi_complete_free); }; +static void virtscsi_handle_event(struct work_struct *work); + static int virtscsi_kick_event(struct virtio_scsi *vscsi, struct virtio_scsi_event_node *event_node) { @@ -260,6 +272,7 @@ static int virtscsi_kick_event(struct virtio_scsi *vscsi, struct scatterlist sg; unsigned long flags; + INIT_WORK(&event_node->work, virtscsi_handle_event); sg_init_one(&sg, &event_node->event, sizeof(struct virtio_scsi_event)); spin_lock_irqsave(&vscsi->event_vq.vq_lock, flags); @@ -377,7 +390,6 @@ static void virtscsi_complete_event(struct virtio_scsi *vscsi, void *buf) { struct virtio_scsi_event_node *event_node = buf; - INIT_WORK(&event_node->work, virtscsi_handle_event); schedule_work(&event_node->work); } @@ -589,6 +601,18 @@ static int virtscsi_tmf(struct virtio_scsi *vscsi, struct virtio_scsi_cmd *cmd) cmd->resp.tmf.response == VIRTIO_SCSI_S_FUNCTION_SUCCEEDED) ret = SUCCESS; + /* + * The spec guarantees that all requests related to the TMF have + * been completed, but the callback might not have run yet if + * we're using independent interrupts (e.g. MSI). Poll the + * virtqueues once. + * + * In the abort case, sc->scsi_done will do nothing, because + * the block layer must have detected a timeout and as a result + * REQ_ATOM_COMPLETE has been set. 
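The comment above captures the race that the new virtscsi_poll_requests() closes: the TMF is known to be finished, but with per-queue interrupts the completion callbacks for the affected commands may not have run yet, so the driver sweeps every request queue once itself. A toy userspace model of that "reap all queues after the synchronous command" pattern (queue layout and helpers are invented for illustration):

#include <stdio.h>

#define NUM_QUEUES 4

struct fake_queue {
	int pending;		/* completions the interrupt handler has not reaped yet */
};

static void reap(struct fake_queue *q, int idx)
{
	while (q->pending) {
		q->pending--;
		printf("queue %d: completed one request\n", idx);
	}
}

/* Called once the management command is known to have finished. */
static void poll_all_queues(struct fake_queue *qs, int n)
{
	for (int i = 0; i < n; i++)
		reap(&qs[i], i);
}

int main(void)
{
	struct fake_queue qs[NUM_QUEUES] = { {0}, {2}, {0}, {1} };

	/* ... wait_for_completion() on the TMF would happen here ... */
	poll_all_queues(qs, NUM_QUEUES);
	return 0;
}
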
+ */ + virtscsi_poll_requests(vscsi); + out: mempool_free(cmd, virtscsi_cmd_pool); return ret; diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c index a98df7eeb42d..fe792106bdc5 100644 --- a/drivers/spi/spi-pxa2xx.c +++ b/drivers/spi/spi-pxa2xx.c @@ -118,6 +118,7 @@ static void lpss_ssp_setup(struct driver_data *drv_data) */ orig = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL); + /* Test SPI_CS_CONTROL_SW_MODE bit enabling */ value = orig | SPI_CS_CONTROL_SW_MODE; writel(value, drv_data->ioaddr + offset + SPI_CS_CONTROL); value = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL); @@ -126,10 +127,13 @@ static void lpss_ssp_setup(struct driver_data *drv_data) goto detection_done; } - value &= ~SPI_CS_CONTROL_SW_MODE; + orig = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL); + + /* Test SPI_CS_CONTROL_SW_MODE bit disabling */ + value = orig & ~SPI_CS_CONTROL_SW_MODE; writel(value, drv_data->ioaddr + offset + SPI_CS_CONTROL); value = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL); - if (value != orig) { + if (value != (orig & ~SPI_CS_CONTROL_SW_MODE)) { offset = 0x800; goto detection_done; } diff --git a/drivers/spi/spi-qup.c b/drivers/spi/spi-qup.c index fc1de86d3c8a..c08da380cb23 100644 --- a/drivers/spi/spi-qup.c +++ b/drivers/spi/spi-qup.c @@ -424,31 +424,6 @@ static int spi_qup_io_config(struct spi_device *spi, struct spi_transfer *xfer) return 0; } -static void spi_qup_set_cs(struct spi_device *spi, bool enable) -{ - struct spi_qup *controller = spi_master_get_devdata(spi->master); - - u32 iocontol, mask; - - iocontol = readl_relaxed(controller->base + SPI_IO_CONTROL); - - /* Disable auto CS toggle and use manual */ - iocontol &= ~SPI_IO_C_MX_CS_MODE; - iocontol |= SPI_IO_C_FORCE_CS; - - iocontol &= ~SPI_IO_C_CS_SELECT_MASK; - iocontol |= SPI_IO_C_CS_SELECT(spi->chip_select); - - mask = SPI_IO_C_CS_N_POLARITY_0 << spi->chip_select; - - if (enable) - iocontol |= mask; - else - iocontol &= ~mask; - - writel_relaxed(iocontol, controller->base + SPI_IO_CONTROL); -} - static int spi_qup_transfer_one(struct spi_master *master, struct spi_device *spi, struct spi_transfer *xfer) @@ -571,12 +546,16 @@ static int spi_qup_probe(struct platform_device *pdev) return -ENOMEM; } + /* use num-cs unless not present or out of range */ + if (of_property_read_u16(dev->of_node, "num-cs", + &master->num_chipselect) || + (master->num_chipselect > SPI_NUM_CHIPSELECTS)) + master->num_chipselect = SPI_NUM_CHIPSELECTS; + master->bus_num = pdev->id; master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP; - master->num_chipselect = SPI_NUM_CHIPSELECTS; master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); master->max_speed_hz = max_freq; - master->set_cs = spi_qup_set_cs; master->transfer_one = spi_qup_transfer_one; master->dev.of_node = pdev->dev.of_node; master->auto_runtime_pm = true; @@ -640,16 +619,19 @@ static int spi_qup_probe(struct platform_device *pdev) if (ret) goto error; - ret = devm_spi_register_master(dev, master); - if (ret) - goto error; - pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC); pm_runtime_use_autosuspend(dev); pm_runtime_set_active(dev); pm_runtime_enable(dev); + + ret = devm_spi_register_master(dev, master); + if (ret) + goto disable_pm; + return 0; +disable_pm: + pm_runtime_disable(&pdev->dev); error: clk_disable_unprepare(cclk); clk_disable_unprepare(iclk); diff --git a/drivers/spi/spi-sh-sci.c b/drivers/spi/spi-sh-sci.c index 1f56ef651d1a..b83dd733684c 100644 --- a/drivers/spi/spi-sh-sci.c +++ b/drivers/spi/spi-sh-sci.c @@ -175,9 +175,9 @@ static 
int sh_sci_spi_remove(struct platform_device *dev) { struct sh_sci_spi *sp = platform_get_drvdata(dev); - iounmap(sp->membase); - setbits(sp, PIN_INIT, 0); spi_bitbang_stop(&sp->bitbang); + setbits(sp, PIN_INIT, 0); + iounmap(sp->membase); spi_master_put(sp->bitbang.master); return 0; } diff --git a/drivers/staging/iio/adc/ad7291.c b/drivers/staging/iio/adc/ad7291.c index 357cef2a6f4c..7194bd138762 100644 --- a/drivers/staging/iio/adc/ad7291.c +++ b/drivers/staging/iio/adc/ad7291.c @@ -465,7 +465,7 @@ static int ad7291_probe(struct i2c_client *client, struct ad7291_platform_data *pdata = client->dev.platform_data; struct ad7291_chip_info *chip; struct iio_dev *indio_dev; - int ret = 0; + int ret; indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*chip)); if (!indio_dev) @@ -475,7 +475,7 @@ static int ad7291_probe(struct i2c_client *client, if (pdata && pdata->use_external_ref) { chip->reg = devm_regulator_get(&client->dev, "vref"); if (IS_ERR(chip->reg)) - return ret; + return PTR_ERR(chip->reg); ret = regulator_enable(chip->reg); if (ret) diff --git a/drivers/staging/media/omap4iss/Kconfig b/drivers/staging/media/omap4iss/Kconfig index 78b0fba7047e..8afc6fee40c5 100644 --- a/drivers/staging/media/omap4iss/Kconfig +++ b/drivers/staging/media/omap4iss/Kconfig @@ -1,6 +1,6 @@ config VIDEO_OMAP4 bool "OMAP 4 Camera support" - depends on VIDEO_V4L2 && VIDEO_V4L2_SUBDEV_API && I2C && ARCH_OMAP4 + depends on VIDEO_V4L2=y && VIDEO_V4L2_SUBDEV_API && I2C=y && ARCH_OMAP4 select VIDEOBUF2_DMA_CONTIG ---help--- Driver for an OMAP 4 ISS controller. diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c index 8945b4e3a2a6..cb50120ed7b5 100644 --- a/drivers/staging/tidspbridge/core/tiomap3430.c +++ b/drivers/staging/tidspbridge/core/tiomap3430.c @@ -280,8 +280,10 @@ static int bridge_brd_monitor(struct bridge_dev_context *dev_ctxt) OMAP3430_IVA2_MOD, OMAP2_CM_CLKSTCTRL); /* Wait until the state has moved to ON */ - while (*pdata->dsp_prm_read(OMAP3430_IVA2_MOD, OMAP2_PM_PWSTST)& - OMAP_INTRANSITION_MASK); + while ((*pdata->dsp_prm_read)(OMAP3430_IVA2_MOD, + OMAP2_PM_PWSTST) & + OMAP_INTRANSITION_MASK) + ; /* Disable Automatic transition */ (*pdata->dsp_cm_write)(OMAP34XX_CLKSTCTRL_DISABLE_AUTO, OMAP3430_IVA2_MOD, OMAP2_CM_CLKSTCTRL); diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c index 5663f4d19d02..1f4c794f5fcc 100644 --- a/drivers/target/iscsi/iscsi_target.c +++ b/drivers/target/iscsi/iscsi_target.c @@ -1309,7 +1309,7 @@ iscsit_check_dataout_hdr(struct iscsi_conn *conn, unsigned char *buf, if (cmd->data_direction != DMA_TO_DEVICE) { pr_err("Command ITT: 0x%08x received DataOUT for a" " NON-WRITE command.\n", cmd->init_task_tag); - return iscsit_reject_cmd(cmd, ISCSI_REASON_PROTOCOL_ERROR, buf); + return iscsit_dump_data_payload(conn, payload_length, 1); } se_cmd = &cmd->se_cmd; iscsit_mod_dataout_timer(cmd); diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c index 19b842c3e0b3..ab4915c0d933 100644 --- a/drivers/target/iscsi/iscsi_target_auth.c +++ b/drivers/target/iscsi/iscsi_target_auth.c @@ -174,7 +174,6 @@ static int chap_server_compute_md5( char *nr_out_ptr, unsigned int *nr_out_len) { - char *endptr; unsigned long id; unsigned char id_as_uchar; unsigned char digest[MD5_SIGNATURE_SIZE]; @@ -320,9 +319,14 @@ static int chap_server_compute_md5( } if (type == HEX) - id = simple_strtoul(&identifier[2], &endptr, 0); + ret = kstrtoul(&identifier[2], 0, &id); else 
- id = simple_strtoul(identifier, &endptr, 0); + ret = kstrtoul(identifier, 0, &id); + + if (ret < 0) { + pr_err("kstrtoul() failed for CHAP identifier: %d\n", ret); + goto out; + } if (id > 255) { pr_err("chap identifier: %lu greater than 255\n", id); goto out; @@ -351,6 +355,10 @@ static int chap_server_compute_md5( pr_err("Unable to convert incoming challenge\n"); goto out; } + if (challenge_len > 1024) { + pr_err("CHAP_C exceeds maximum binary size of 1024 bytes\n"); + goto out; + } /* * During mutual authentication, the CHAP_C generated by the * initiator must not match the original CHAP_C generated by diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c index fecb69535a15..5e71ac609418 100644 --- a/drivers/target/iscsi/iscsi_target_login.c +++ b/drivers/target/iscsi/iscsi_target_login.c @@ -1216,7 +1216,7 @@ old_sess_out: static int __iscsi_target_login_thread(struct iscsi_np *np) { u8 *buffer, zero_tsih = 0; - int ret = 0, rc, stop; + int ret = 0, rc; struct iscsi_conn *conn = NULL; struct iscsi_login *login; struct iscsi_portal_group *tpg = NULL; @@ -1230,6 +1230,9 @@ static int __iscsi_target_login_thread(struct iscsi_np *np) if (np->np_thread_state == ISCSI_NP_THREAD_RESET) { np->np_thread_state = ISCSI_NP_THREAD_ACTIVE; complete(&np->np_restart_comp); + } else if (np->np_thread_state == ISCSI_NP_THREAD_SHUTDOWN) { + spin_unlock_bh(&np->np_thread_lock); + goto exit; } else { np->np_thread_state = ISCSI_NP_THREAD_ACTIVE; } @@ -1422,10 +1425,8 @@ old_sess_out: } out: - stop = kthread_should_stop(); - /* Wait for another socket.. */ - if (!stop) - return 1; + return 1; + exit: iscsi_stop_login_thread_timer(np); spin_lock_bh(&np->np_thread_lock); @@ -1442,7 +1443,7 @@ int iscsi_target_login_thread(void *arg) allow_signal(SIGINT); - while (!kthread_should_stop()) { + while (1) { ret = __iscsi_target_login_thread(np); /* * We break and exit here unless another sock_accept() call diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c index 53e157cb8c54..fd90b28f1d94 100644 --- a/drivers/target/iscsi/iscsi_target_util.c +++ b/drivers/target/iscsi/iscsi_target_util.c @@ -1295,6 +1295,8 @@ int iscsit_tx_login_rsp(struct iscsi_conn *conn, u8 status_class, u8 status_deta login->login_failed = 1; iscsit_collect_login_stats(conn, status_class, status_detail); + memset(&login->rsp[0], 0, ISCSI_HDR_LEN); + hdr = (struct iscsi_login_rsp *)&login->rsp[0]; hdr->opcode = ISCSI_OP_LOGIN_RSP; hdr->status_class = status_class; diff --git a/drivers/target/loopback/tcm_loop.c b/drivers/target/loopback/tcm_loop.c index 6d2f37578b29..8c64b8776a96 100644 --- a/drivers/target/loopback/tcm_loop.c +++ b/drivers/target/loopback/tcm_loop.c @@ -239,6 +239,7 @@ static void tcm_loop_submission_work(struct work_struct *work) return; out_done: + kmem_cache_free(tcm_loop_cmd_cache, tl_cmd); sc->scsi_done(sc); return; } diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c index 11d26fe65bfb..98da90167159 100644 --- a/drivers/target/target_core_device.c +++ b/drivers/target/target_core_device.c @@ -616,6 +616,7 @@ void core_dev_unexport( dev->export_count--; spin_unlock(&hba->device_lock); + lun->lun_sep = NULL; lun->lun_se_dev = NULL; } diff --git a/drivers/tc/tc.c b/drivers/tc/tc.c index a8aaf6ac2ae2..946562389ca8 100644 --- a/drivers/tc/tc.c +++ b/drivers/tc/tc.c @@ -129,7 +129,10 @@ static void __init tc_bus_add_devices(struct tc_bus *tbus) tc_device_get_irq(tdev); - device_register(&tdev->dev); 
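The CHAP identifier parsing above moves from simple_strtoul(), which cannot report bad input, to kstrtoul(), which returns an error that the caller now checks before using the value. The same idea in portable C, using strtoul() plus errno as a stand-in (parse_id() is an invented helper, not a kernel function):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int parse_id(const char *s, unsigned long *out)
{
	char *end;

	errno = 0;
	*out = strtoul(s, &end, 0);
	if (errno || end == s || *end != '\0')
		return -EINVAL;		/* reject junk instead of silently using 0 */
	if (*out > 255)
		return -ERANGE;		/* a CHAP identifier is a single byte */
	return 0;
}

int main(void)
{
	unsigned long id;
	int ret = parse_id("0x2a", &id);

	printf("\"0x2a\"  -> ret=%d id=%lu\n", ret, id);
	printf("\"bogus\" -> ret=%d\n", parse_id("bogus", &id));
	return 0;
}
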
+ if (device_register(&tdev->dev)) { + put_device(&tdev->dev); + goto out_err; + } list_add_tail(&tdev->node, &tbus->devices); out_err: @@ -148,7 +151,10 @@ static int __init tc_init(void) INIT_LIST_HEAD(&tc_bus.devices); dev_set_name(&tc_bus.dev, "tc"); - device_register(&tc_bus.dev); + if (device_register(&tc_bus.dev)) { + put_device(&tc_bus.dev); + return 0; + } if (tc_bus.info.slot_size) { unsigned int tc_clock = tc_get_speed(&tc_bus) / 100000; diff --git a/drivers/thermal/imx_thermal.c b/drivers/thermal/imx_thermal.c index a99c63152b8d..2c516f2eebed 100644 --- a/drivers/thermal/imx_thermal.c +++ b/drivers/thermal/imx_thermal.c @@ -306,7 +306,7 @@ static int imx_get_sensor_data(struct platform_device *pdev) { struct imx_thermal_data *data = platform_get_drvdata(pdev); struct regmap *map; - int t1, t2, n1, n2; + int t1, n1; int ret; u32 val; u64 temp64; @@ -333,14 +333,10 @@ static int imx_get_sensor_data(struct platform_device *pdev) /* * Sensor data layout: * [31:20] - sensor value @ 25C - * [19:8] - sensor value of hot - * [7:0] - hot temperature value * Use universal formula now and only need sensor value @ 25C * slope = 0.4297157 - (0.0015976 * 25C fuse) */ n1 = val >> 20; - n2 = (val & 0xfff00) >> 8; - t2 = val & 0xff; t1 = 25; /* t1 always 25C */ /* @@ -366,16 +362,16 @@ static int imx_get_sensor_data(struct platform_device *pdev) data->c2 = n1 * data->c1 + 1000 * t1; /* - * Set the default passive cooling trip point to 20 °C below the - * maximum die temperature. Can be changed from userspace. + * Set the default passive cooling trip point, + * can be changed from userspace. */ - data->temp_passive = 1000 * (t2 - 20); + data->temp_passive = IMX_TEMP_PASSIVE; /* - * The maximum die temperature is t2, let's give 5 °C cushion - * for noise and possible temperature rise between measurements. + * The maximum die temperature set to 20 C higher than + * IMX_TEMP_PASSIVE. 
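The TURBOchannel changes above stop ignoring device_register() failures, and notably recover with put_device() rather than a direct free: once the device has been handed to the driver core, its reference count owns the object. A toy refcount model of that rule (the types and helpers below are invented, not driver-core API):

#include <stdio.h>
#include <stdlib.h>

struct toy_device {
	int refcount;
	const char *name;
};

static void put_device_toy(struct toy_device *dev)
{
	if (--dev->refcount == 0) {
		printf("releasing %s\n", dev->name);
		free(dev);
	}
}

static int register_device_toy(struct toy_device *dev)
{
	(void)dev;
	return -1;	/* pretend registration failed part-way through */
}

int main(void)
{
	struct toy_device *dev = malloc(sizeof(*dev));

	if (!dev)
		return 1;
	dev->refcount = 1;		/* the caller's initial reference */
	dev->name = "tc0";

	if (register_device_toy(dev)) {
		put_device_toy(dev);	/* not free(): the refcount decides when to release */
		return 1;
	}
	return 0;
}
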
*/ - data->temp_critical = 1000 * (t2 - 5); + data->temp_critical = 1000 * 20 + data->temp_passive; return 0; } diff --git a/drivers/thermal/of-thermal.c b/drivers/thermal/of-thermal.c index 04b1be7fa018..4b2b999b7611 100644 --- a/drivers/thermal/of-thermal.c +++ b/drivers/thermal/of-thermal.c @@ -156,8 +156,8 @@ static int of_thermal_bind(struct thermal_zone_device *thermal, ret = thermal_zone_bind_cooling_device(thermal, tbp->trip_id, cdev, - tbp->min, - tbp->max); + tbp->max, + tbp->min); if (ret) return ret; } @@ -712,11 +712,12 @@ thermal_of_build_thermal_zone(struct device_node *np) } i = 0; - for_each_child_of_node(child, gchild) + for_each_child_of_node(child, gchild) { ret = thermal_of_populate_bind_params(gchild, &tz->tbps[i++], tz->trips, tz->ntrips); if (ret) goto free_tbps; + } finish: of_node_put(child); diff --git a/drivers/thermal/thermal_hwmon.c b/drivers/thermal/thermal_hwmon.c index fdb07199d9c2..1967bee4f076 100644 --- a/drivers/thermal/thermal_hwmon.c +++ b/drivers/thermal/thermal_hwmon.c @@ -140,6 +140,12 @@ thermal_hwmon_lookup_temp(const struct thermal_hwmon_device *hwmon, return NULL; } +static bool thermal_zone_crit_temp_valid(struct thermal_zone_device *tz) +{ + unsigned long temp; + return tz->ops->get_crit_temp && !tz->ops->get_crit_temp(tz, &temp); +} + int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz) { struct thermal_hwmon_device *hwmon; @@ -189,21 +195,18 @@ int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz) if (result) goto free_temp_mem; - if (tz->ops->get_crit_temp) { - unsigned long temperature; - if (!tz->ops->get_crit_temp(tz, &temperature)) { - snprintf(temp->temp_crit.name, - sizeof(temp->temp_crit.name), + if (thermal_zone_crit_temp_valid(tz)) { + snprintf(temp->temp_crit.name, + sizeof(temp->temp_crit.name), "temp%d_crit", hwmon->count); - temp->temp_crit.attr.attr.name = temp->temp_crit.name; - temp->temp_crit.attr.attr.mode = 0444; - temp->temp_crit.attr.show = temp_crit_show; - sysfs_attr_init(&temp->temp_crit.attr.attr); - result = device_create_file(hwmon->device, - &temp->temp_crit.attr); - if (result) - goto unregister_input; - } + temp->temp_crit.attr.attr.name = temp->temp_crit.name; + temp->temp_crit.attr.attr.mode = 0444; + temp->temp_crit.attr.show = temp_crit_show; + sysfs_attr_init(&temp->temp_crit.attr.attr); + result = device_create_file(hwmon->device, + &temp->temp_crit.attr); + if (result) + goto unregister_input; } mutex_lock(&thermal_hwmon_list_lock); @@ -250,7 +253,7 @@ void thermal_remove_hwmon_sysfs(struct thermal_zone_device *tz) } device_remove_file(hwmon->device, &temp->temp_input.attr); - if (tz->ops->get_crit_temp) + if (thermal_zone_crit_temp_valid(tz)) device_remove_file(hwmon->device, &temp->temp_crit.attr); mutex_lock(&thermal_hwmon_list_lock); diff --git a/drivers/thermal/ti-soc-thermal/ti-bandgap.c b/drivers/thermal/ti-soc-thermal/ti-bandgap.c index a1271b55103a..634b6ce0e63a 100644 --- a/drivers/thermal/ti-soc-thermal/ti-bandgap.c +++ b/drivers/thermal/ti-soc-thermal/ti-bandgap.c @@ -1155,7 +1155,7 @@ static struct ti_bandgap *ti_bandgap_build(struct platform_device *pdev) /* register shadow for context save and restore */ bgp->regval = devm_kzalloc(&pdev->dev, sizeof(*bgp->regval) * bgp->conf->sensor_count, GFP_KERNEL); - if (!bgp) { + if (!bgp->regval) { dev_err(&pdev->dev, "Unable to allocate mem for driver ref\n"); return ERR_PTR(-ENOMEM); } diff --git a/drivers/tty/serial/arc_uart.c b/drivers/tty/serial/arc_uart.c index c9f5c9dcc15c..008c223eaf26 100644 --- a/drivers/tty/serial/arc_uart.c 
+++ b/drivers/tty/serial/arc_uart.c @@ -177,7 +177,7 @@ static void arc_serial_tx_chars(struct arc_uart_port *uart) uart->port.icount.tx++; uart->port.x_char = 0; sent = 1; - } else if (xmit->tail != xmit->head) { /* TODO: uart_circ_empty */ + } else if (!uart_circ_empty(xmit)) { ch = xmit->buf[xmit->tail]; xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); uart->port.icount.tx++; diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c index e2f93874989b..044e86d528ae 100644 --- a/drivers/tty/serial/imx.c +++ b/drivers/tty/serial/imx.c @@ -567,6 +567,9 @@ static void imx_start_tx(struct uart_port *port) struct imx_port *sport = (struct imx_port *)port; unsigned long temp; + if (uart_circ_empty(&port->state->xmit)) + return; + if (USE_IRDA(sport)) { /* half duplex in IrDA mode; have to disable receive mode */ temp = readl(sport->port.membase + UCR4); diff --git a/drivers/tty/serial/ip22zilog.c b/drivers/tty/serial/ip22zilog.c index 1efd4c36ba0c..99b7b8697861 100644 --- a/drivers/tty/serial/ip22zilog.c +++ b/drivers/tty/serial/ip22zilog.c @@ -603,6 +603,8 @@ static void ip22zilog_start_tx(struct uart_port *port) } else { struct circ_buf *xmit = &port->state->xmit; + if (uart_circ_empty(xmit)) + return; writeb(xmit->buf[xmit->tail], &channel->data); ZSDELAY(); ZS_WSYNC(channel); diff --git a/drivers/tty/serial/m32r_sio.c b/drivers/tty/serial/m32r_sio.c index 68f2c53e0b54..5702828fb62e 100644 --- a/drivers/tty/serial/m32r_sio.c +++ b/drivers/tty/serial/m32r_sio.c @@ -266,9 +266,11 @@ static void m32r_sio_start_tx(struct uart_port *port) if (!(up->ier & UART_IER_THRI)) { up->ier |= UART_IER_THRI; serial_out(up, UART_IER, up->ier); - serial_out(up, UART_TX, xmit->buf[xmit->tail]); - xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); - up->port.icount.tx++; + if (!uart_circ_empty(xmit)) { + serial_out(up, UART_TX, xmit->buf[xmit->tail]); + xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); + up->port.icount.tx++; + } } while((serial_in(up, UART_LSR) & UART_EMPTY) != UART_EMPTY); #else diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c index c41aca4dfc43..72000a6d5af0 100644 --- a/drivers/tty/serial/msm_serial.c +++ b/drivers/tty/serial/msm_serial.c @@ -991,7 +991,7 @@ static const struct of_device_id msm_uartdm_table[] = { { } }; -static int __init msm_serial_probe(struct platform_device *pdev) +static int msm_serial_probe(struct platform_device *pdev) { struct msm_port *msm_port; struct resource *resource; diff --git a/drivers/tty/serial/pmac_zilog.c b/drivers/tty/serial/pmac_zilog.c index 8193635103ee..f7ad5b903055 100644 --- a/drivers/tty/serial/pmac_zilog.c +++ b/drivers/tty/serial/pmac_zilog.c @@ -653,6 +653,8 @@ static void pmz_start_tx(struct uart_port *port) } else { struct circ_buf *xmit = &port->state->xmit; + if (uart_circ_empty(xmit)) + goto out; write_zsdata(uap, xmit->buf[xmit->tail]); zssync(uap); xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); @@ -661,6 +663,7 @@ static void pmz_start_tx(struct uart_port *port) if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) uart_write_wakeup(&uap->port); } + out: pmz_debug("pmz: start_tx() done.\n"); } diff --git a/drivers/tty/serial/sunsab.c b/drivers/tty/serial/sunsab.c index 80a58eca785b..2f57df9a71d9 100644 --- a/drivers/tty/serial/sunsab.c +++ b/drivers/tty/serial/sunsab.c @@ -427,6 +427,9 @@ static void sunsab_start_tx(struct uart_port *port) struct circ_buf *xmit = &up->port.state->xmit; int i; + if (uart_circ_empty(xmit)) + return; + up->interrupt_mask1 &= 
~(SAB82532_IMR1_ALLS|SAB82532_IMR1_XPR); writeb(up->interrupt_mask1, &up->regs->w.imr1); diff --git a/drivers/tty/serial/sunzilog.c b/drivers/tty/serial/sunzilog.c index a85db8b87156..02df3940b95e 100644 --- a/drivers/tty/serial/sunzilog.c +++ b/drivers/tty/serial/sunzilog.c @@ -703,6 +703,8 @@ static void sunzilog_start_tx(struct uart_port *port) } else { struct circ_buf *xmit = &port->state->xmit; + if (uart_circ_empty(xmit)) + return; writeb(xmit->buf[xmit->tail], &channel->data); ZSDELAY(); ZS_WSYNC(channel); diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c index 69425b3cb6b7..b8125aa64ad8 100644 --- a/drivers/usb/chipidea/udc.c +++ b/drivers/usb/chipidea/udc.c @@ -1169,8 +1169,8 @@ static int ep_enable(struct usb_ep *ep, if (hwep->type == USB_ENDPOINT_XFER_CONTROL) cap |= QH_IOS; - if (hwep->num) - cap |= QH_ZLT; + + cap |= QH_ZLT; cap |= (hwep->ep.maxpacket << __ffs(QH_MAX_PKT)) & QH_MAX_PKT; /* * For ISO-TX, we set mult at QH as the largest value, and use @@ -1321,6 +1321,7 @@ static int ep_dequeue(struct usb_ep *ep, struct usb_request *req) struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep); struct ci_hw_req *hwreq = container_of(req, struct ci_hw_req, req); unsigned long flags; + struct td_node *node, *tmpnode; if (ep == NULL || req == NULL || hwreq->req.status != -EALREADY || hwep->ep.desc == NULL || list_empty(&hwreq->queue) || @@ -1331,6 +1332,12 @@ static int ep_dequeue(struct usb_ep *ep, struct usb_request *req) hw_ep_flush(hwep->ci, hwep->num, hwep->dir); + list_for_each_entry_safe(node, tmpnode, &hwreq->tds, td) { + dma_pool_free(hwep->td_pool, node->ptr, node->dma); + list_del(&node->td); + kfree(node); + } + /* pop request */ list_del_init(&hwreq->queue); diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c index 21b99b4b4082..0e950ad8cb25 100644 --- a/drivers/usb/core/hub.c +++ b/drivers/usb/core/hub.c @@ -889,6 +889,25 @@ static int hub_usb3_port_disable(struct usb_hub *hub, int port1) if (!hub_is_superspeed(hub->hdev)) return -EINVAL; + ret = hub_port_status(hub, port1, &portstatus, &portchange); + if (ret < 0) + return ret; + + /* + * USB controller Advanced Micro Devices, Inc. [AMD] FCH USB XHCI + * Controller [1022:7814] will have spurious result making the following + * usb 3.0 device hotplugging route to the 2.0 root hub and recognized + * as high-speed device if we set the usb 3.0 port link state to + * Disabled. Since it's already in USB_SS_PORT_LS_RX_DETECT state, we + * check the state here to avoid the bug. 
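Several start_tx() implementations above gain the same guard: with a power-of-two ring, uart_circ_empty() is just head == tail, and writing buf[tail] without that check transmits stale data when the ring is empty. A compact standalone model of the ring and the guard (buffer size and names are illustrative):

#include <stdio.h>

#define XMIT_SIZE 8	/* must be a power of two */

struct circ {
	char buf[XMIT_SIZE];
	unsigned head;	/* producer index */
	unsigned tail;	/* consumer index */
};

static int circ_empty(const struct circ *c)
{
	return c->head == c->tail;
}

static void start_tx(struct circ *c)
{
	if (circ_empty(c))
		return;			/* this is the guard the patches add */
	printf("tx '%c'\n", c->buf[c->tail]);
	c->tail = (c->tail + 1) & (XMIT_SIZE - 1);
}

int main(void)
{
	struct circ c = { .head = 0, .tail = 0 };

	start_tx(&c);			/* empty: correctly does nothing */
	c.buf[c.head] = 'A';
	c.head = (c.head + 1) & (XMIT_SIZE - 1);
	start_tx(&c);			/* prints 'A' */
	return 0;
}
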
+ */ + if ((portstatus & USB_PORT_STAT_LINK_STATE) == + USB_SS_PORT_LS_RX_DETECT) { + dev_dbg(&hub->ports[port1 - 1]->dev, + "Not disabling port; link state is RxDetect\n"); + return ret; + } + ret = hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_SS_DISABLED); if (ret) return ret; diff --git a/drivers/usb/dwc3/Kconfig b/drivers/usb/dwc3/Kconfig index 8eb996e4f058..261c3b428220 100644 --- a/drivers/usb/dwc3/Kconfig +++ b/drivers/usb/dwc3/Kconfig @@ -45,6 +45,7 @@ comment "Platform Glue Driver Support" config USB_DWC3_OMAP tristate "Texas Instruments OMAP5 and similar Platforms" depends on EXTCON && (ARCH_OMAP2PLUS || COMPILE_TEST) + depends on OF default USB_DWC3 help Some platforms from Texas Instruments like OMAP5, DRA7xxx and diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c index 4af4c3567656..07a736acd0f2 100644 --- a/drivers/usb/dwc3/dwc3-omap.c +++ b/drivers/usb/dwc3/dwc3-omap.c @@ -322,7 +322,7 @@ static int dwc3_omap_remove_core(struct device *dev, void *c) { struct platform_device *pdev = to_platform_device(dev); - platform_device_unregister(pdev); + of_device_unregister(pdev); return 0; } @@ -599,7 +599,7 @@ static int dwc3_omap_prepare(struct device *dev) { struct dwc3_omap *omap = dev_get_drvdata(dev); - dwc3_omap_disable_irqs(omap); + dwc3_omap_write_irqmisc_set(omap, 0x00); return 0; } @@ -607,8 +607,19 @@ static int dwc3_omap_prepare(struct device *dev) static void dwc3_omap_complete(struct device *dev) { struct dwc3_omap *omap = dev_get_drvdata(dev); + u32 reg; - dwc3_omap_enable_irqs(omap); + reg = (USBOTGSS_IRQMISC_OEVT | + USBOTGSS_IRQMISC_DRVVBUS_RISE | + USBOTGSS_IRQMISC_CHRGVBUS_RISE | + USBOTGSS_IRQMISC_DISCHRGVBUS_RISE | + USBOTGSS_IRQMISC_IDPULLUP_RISE | + USBOTGSS_IRQMISC_DRVVBUS_FALL | + USBOTGSS_IRQMISC_CHRGVBUS_FALL | + USBOTGSS_IRQMISC_DISCHRGVBUS_FALL | + USBOTGSS_IRQMISC_IDPULLUP_FALL); + + dwc3_omap_write_irqmisc_set(omap, reg); } static int dwc3_omap_suspend(struct device *dev) diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index 9d64dd02c57e..dab7927d1009 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -828,10 +828,6 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep, length, last ? " last" : "", chain ? 
" chain" : ""); - /* Skip the LINK-TRB on ISOC */ - if (((dep->free_slot & DWC3_TRB_MASK) == DWC3_TRB_NUM - 1) && - usb_endpoint_xfer_isoc(dep->endpoint.desc)) - dep->free_slot++; trb = &dep->trb_pool[dep->free_slot & DWC3_TRB_MASK]; @@ -843,6 +839,10 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep, } dep->free_slot++; + /* Skip the LINK-TRB on ISOC */ + if (((dep->free_slot & DWC3_TRB_MASK) == DWC3_TRB_NUM - 1) && + usb_endpoint_xfer_isoc(dep->endpoint.desc)) + dep->free_slot++; trb->size = DWC3_TRB_SIZE_LENGTH(length); trb->bpl = lower_32_bits(dma); diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c index 2ddcd635ca2a..97142146eead 100644 --- a/drivers/usb/gadget/configfs.c +++ b/drivers/usb/gadget/configfs.c @@ -1145,15 +1145,15 @@ static struct configfs_item_operations interf_item_ops = { .store_attribute = usb_os_desc_attr_store, }; -static ssize_t rndis_grp_compatible_id_show(struct usb_os_desc *desc, - char *page) +static ssize_t interf_grp_compatible_id_show(struct usb_os_desc *desc, + char *page) { memcpy(page, desc->ext_compat_id, 8); return 8; } -static ssize_t rndis_grp_compatible_id_store(struct usb_os_desc *desc, - const char *page, size_t len) +static ssize_t interf_grp_compatible_id_store(struct usb_os_desc *desc, + const char *page, size_t len) { int l; @@ -1171,20 +1171,20 @@ static ssize_t rndis_grp_compatible_id_store(struct usb_os_desc *desc, return len; } -static struct usb_os_desc_attribute rndis_grp_attr_compatible_id = +static struct usb_os_desc_attribute interf_grp_attr_compatible_id = __CONFIGFS_ATTR(compatible_id, S_IRUGO | S_IWUSR, - rndis_grp_compatible_id_show, - rndis_grp_compatible_id_store); + interf_grp_compatible_id_show, + interf_grp_compatible_id_store); -static ssize_t rndis_grp_sub_compatible_id_show(struct usb_os_desc *desc, - char *page) +static ssize_t interf_grp_sub_compatible_id_show(struct usb_os_desc *desc, + char *page) { memcpy(page, desc->ext_compat_id + 8, 8); return 8; } -static ssize_t rndis_grp_sub_compatible_id_store(struct usb_os_desc *desc, - const char *page, size_t len) +static ssize_t interf_grp_sub_compatible_id_store(struct usb_os_desc *desc, + const char *page, size_t len) { int l; @@ -1202,20 +1202,21 @@ static ssize_t rndis_grp_sub_compatible_id_store(struct usb_os_desc *desc, return len; } -static struct usb_os_desc_attribute rndis_grp_attr_sub_compatible_id = +static struct usb_os_desc_attribute interf_grp_attr_sub_compatible_id = __CONFIGFS_ATTR(sub_compatible_id, S_IRUGO | S_IWUSR, - rndis_grp_sub_compatible_id_show, - rndis_grp_sub_compatible_id_store); + interf_grp_sub_compatible_id_show, + interf_grp_sub_compatible_id_store); static struct configfs_attribute *interf_grp_attrs[] = { - &rndis_grp_attr_compatible_id.attr, - &rndis_grp_attr_sub_compatible_id.attr, + &interf_grp_attr_compatible_id.attr, + &interf_grp_attr_sub_compatible_id.attr, NULL }; int usb_os_desc_prepare_interf_dir(struct config_group *parent, int n_interf, struct usb_os_desc **desc, + char **names, struct module *owner) { struct config_group **f_default_groups, *os_desc_group, @@ -1257,8 +1258,8 @@ int usb_os_desc_prepare_interf_dir(struct config_group *parent, d = desc[n_interf]; d->owner = owner; config_group_init_type_name(&d->group, "", interface_type); - config_item_set_name(&d->group.cg_item, "interface.%d", - n_interf); + config_item_set_name(&d->group.cg_item, "interface.%s", + names[n_interf]); interface_groups[n_interf] = &d->group; } diff --git a/drivers/usb/gadget/configfs.h b/drivers/usb/gadget/configfs.h 
index a14ac792c698..36c468c4f5e9 100644 --- a/drivers/usb/gadget/configfs.h +++ b/drivers/usb/gadget/configfs.h @@ -8,6 +8,7 @@ void unregister_gadget_item(struct config_item *item); int usb_os_desc_prepare_interf_dir(struct config_group *parent, int n_interf, struct usb_os_desc **desc, + char **names, struct module *owner); static inline struct usb_os_desc *to_usb_os_desc(struct config_item *item) diff --git a/drivers/usb/gadget/f_fs.c b/drivers/usb/gadget/f_fs.c index 74202d67f911..8598c27c7d43 100644 --- a/drivers/usb/gadget/f_fs.c +++ b/drivers/usb/gadget/f_fs.c @@ -1483,11 +1483,13 @@ static int functionfs_bind(struct ffs_data *ffs, struct usb_composite_dev *cdev) ffs->ep0req->context = ffs; lang = ffs->stringtabs; - for (lang = ffs->stringtabs; *lang; ++lang) { - struct usb_string *str = (*lang)->strings; - int id = first_id; - for (; str->s; ++id, ++str) - str->id = id; + if (lang) { + for (; *lang; ++lang) { + struct usb_string *str = (*lang)->strings; + int id = first_id; + for (; str->s; ++id, ++str) + str->id = id; + } } ffs->gadget = cdev->gadget; diff --git a/drivers/usb/gadget/f_rndis.c b/drivers/usb/gadget/f_rndis.c index eed3ad878047..9c41e9515b8e 100644 --- a/drivers/usb/gadget/f_rndis.c +++ b/drivers/usb/gadget/f_rndis.c @@ -687,7 +687,7 @@ rndis_bind(struct usb_configuration *c, struct usb_function *f) f->os_desc_table = kzalloc(sizeof(*f->os_desc_table), GFP_KERNEL); if (!f->os_desc_table) - return PTR_ERR(f->os_desc_table); + return -ENOMEM; f->os_desc_n = 1; f->os_desc_table[0].os_desc = &rndis_opts->rndis_os_desc; } @@ -905,6 +905,7 @@ static struct usb_function_instance *rndis_alloc_inst(void) { struct f_rndis_opts *opts; struct usb_os_desc *descs[1]; + char *names[1]; opts = kzalloc(sizeof(*opts), GFP_KERNEL); if (!opts) @@ -922,8 +923,9 @@ static struct usb_function_instance *rndis_alloc_inst(void) INIT_LIST_HEAD(&opts->rndis_os_desc.ext_prop); descs[0] = &opts->rndis_os_desc; + names[0] = "rndis"; usb_os_desc_prepare_interf_dir(&opts->func_inst.group, 1, descs, - THIS_MODULE); + names, THIS_MODULE); config_group_init_type_name(&opts->func_inst.group, "", &rndis_func_type); diff --git a/drivers/usb/gadget/gr_udc.c b/drivers/usb/gadget/gr_udc.c index 99a37ed03e27..c7004ee89c90 100644 --- a/drivers/usb/gadget/gr_udc.c +++ b/drivers/usb/gadget/gr_udc.c @@ -1532,8 +1532,9 @@ static int gr_ep_enable(struct usb_ep *_ep, "%s mode: multiple trans./microframe not valid\n", (mode == 2 ? 
"Bulk" : "Control")); return -EINVAL; - } else if (nt == 0x11) { - dev_err(dev->dev, "Invalid value for trans./microframe\n"); + } else if (nt == 0x3) { + dev_err(dev->dev, + "Invalid value 0x3 for additional trans./microframe\n"); return -EINVAL; } else if ((nt + 1) * max > buffer_size) { dev_err(dev->dev, "Hw buffer size %d < max payload %d * %d\n", diff --git a/drivers/usb/gadget/inode.c b/drivers/usb/gadget/inode.c index ee6c16416c30..2e4ce7704908 100644 --- a/drivers/usb/gadget/inode.c +++ b/drivers/usb/gadget/inode.c @@ -1264,8 +1264,13 @@ dev_release (struct inode *inode, struct file *fd) kfree (dev->buf); dev->buf = NULL; - put_dev (dev); + /* other endpoints were all decoupled from this device */ + spin_lock_irq(&dev->lock); + dev->state = STATE_DEV_DISABLED; + spin_unlock_irq(&dev->lock); + + put_dev (dev); return 0; } diff --git a/drivers/usb/gadget/u_ether.c b/drivers/usb/gadget/u_ether.c index 3d78a8844e43..97b027724ee7 100644 --- a/drivers/usb/gadget/u_ether.c +++ b/drivers/usb/gadget/u_ether.c @@ -1120,7 +1120,10 @@ void gether_disconnect(struct gether *link) DBG(dev, "%s\n", __func__); + netif_tx_lock(dev->net); netif_stop_queue(dev->net); + netif_tx_unlock(dev->net); + netif_carrier_off(dev->net); /* disable endpoints, forcing (synchronous) completion diff --git a/drivers/usb/host/Kconfig b/drivers/usb/host/Kconfig index 61b7817bd66b..03314f861bee 100644 --- a/drivers/usb/host/Kconfig +++ b/drivers/usb/host/Kconfig @@ -176,7 +176,7 @@ config USB_EHCI_HCD_AT91 config USB_EHCI_MSM tristate "Support for Qualcomm QSD/MSM on-chip EHCI USB controller" - depends on ARCH_MSM + depends on ARCH_MSM || ARCH_QCOM select USB_EHCI_ROOT_HUB_TT ---help--- Enables support for the USB Host controller present on the diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c index 2b998c60faf2..aa79e8749040 100644 --- a/drivers/usb/host/xhci-hub.c +++ b/drivers/usb/host/xhci-hub.c @@ -22,6 +22,7 @@ #include <linux/slab.h> +#include <linux/device.h> #include <asm/unaligned.h> #include "xhci.h" @@ -1139,7 +1140,9 @@ int xhci_bus_suspend(struct usb_hcd *hcd) * including the USB 3.0 roothub, but only if CONFIG_PM_RUNTIME * is enabled, so also enable remote wake here. */ - if (hcd->self.root_hub->do_remote_wakeup) { + if (hcd->self.root_hub->do_remote_wakeup + && device_may_wakeup(hcd->self.controller)) { + if (t1 & PORT_CONNECT) { t2 |= PORT_WKOC_E | PORT_WKDISC_E; t2 &= ~PORT_WKCONN_E; diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index d67ff71209f5..749fc68eb5c1 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -1433,8 +1433,11 @@ static void handle_cmd_completion(struct xhci_hcd *xhci, xhci_handle_cmd_reset_ep(xhci, slot_id, cmd_trb, cmd_comp_code); break; case TRB_RESET_DEV: - WARN_ON(slot_id != TRB_TO_SLOT_ID( - le32_to_cpu(cmd_trb->generic.field[3]))); + /* SLOT_ID field in reset device cmd completion event TRB is 0. 
+ * Use the SLOT_ID from the command TRB instead (xhci 4.6.11) + */ + slot_id = TRB_TO_SLOT_ID( + le32_to_cpu(cmd_trb->generic.field[3])); xhci_handle_cmd_reset_dev(xhci, slot_id, event); break; case TRB_NEC_GET_FW: @@ -3534,7 +3537,7 @@ static unsigned int xhci_get_burst_count(struct xhci_hcd *xhci, return 0; max_burst = urb->ep->ss_ep_comp.bMaxBurst; - return roundup(total_packet_count, max_burst + 1) - 1; + return DIV_ROUND_UP(total_packet_count, max_burst + 1) - 1; } /* diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index 2b8d9a24af09..7436d5f5e67a 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -936,7 +936,7 @@ int xhci_suspend(struct xhci_hcd *xhci) */ int xhci_resume(struct xhci_hcd *xhci, bool hibernated) { - u32 command, temp = 0; + u32 command, temp = 0, status; struct usb_hcd *hcd = xhci_to_hcd(xhci); struct usb_hcd *secondary_hcd; int retval = 0; @@ -1054,8 +1054,12 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated) done: if (retval == 0) { - usb_hcd_resume_root_hub(hcd); - usb_hcd_resume_root_hub(xhci->shared_hcd); + /* Resume root hubs only when have pending events. */ + status = readl(&xhci->op_regs->status); + if (status & STS_EINT) { + usb_hcd_resume_root_hub(hcd); + usb_hcd_resume_root_hub(xhci->shared_hcd); + } } /* diff --git a/drivers/usb/musb/musb_am335x.c b/drivers/usb/musb/musb_am335x.c index d2353781bd2d..1e58ed2361cc 100644 --- a/drivers/usb/musb/musb_am335x.c +++ b/drivers/usb/musb/musb_am335x.c @@ -19,21 +19,6 @@ err: return ret; } -static int of_remove_populated_child(struct device *dev, void *d) -{ - struct platform_device *pdev = to_platform_device(dev); - - of_device_unregister(pdev); - return 0; -} - -static int am335x_child_remove(struct platform_device *pdev) -{ - device_for_each_child(&pdev->dev, NULL, of_remove_populated_child); - pm_runtime_disable(&pdev->dev); - return 0; -} - static const struct of_device_id am335x_child_of_match[] = { { .compatible = "ti,am33xx-usb" }, { }, @@ -42,13 +27,17 @@ MODULE_DEVICE_TABLE(of, am335x_child_of_match); static struct platform_driver am335x_child_driver = { .probe = am335x_child_probe, - .remove = am335x_child_remove, .driver = { .name = "am335x-usb-childs", .of_match_table = am335x_child_of_match, }, }; -module_platform_driver(am335x_child_driver); +static int __init am335x_child_init(void) +{ + return platform_driver_register(&am335x_child_driver); +} +module_init(am335x_child_init); + MODULE_DESCRIPTION("AM33xx child devices"); MODULE_LICENSE("GPL v2"); diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c index 61da471b7aed..eff3c5cf84f4 100644 --- a/drivers/usb/musb/musb_core.c +++ b/drivers/usb/musb/musb_core.c @@ -849,7 +849,7 @@ b_host: } /* handle babble condition */ - if (int_usb & MUSB_INTR_BABBLE) + if (int_usb & MUSB_INTR_BABBLE && is_host_active(musb)) schedule_work(&musb->recover_work); #if 0 diff --git a/drivers/usb/musb/musb_cppi41.c b/drivers/usb/musb/musb_cppi41.c index 7b8bbf53127e..5341bb223b7c 100644 --- a/drivers/usb/musb/musb_cppi41.c +++ b/drivers/usb/musb/musb_cppi41.c @@ -318,7 +318,7 @@ static void cppi41_dma_callback(void *private_data) } list_add_tail(&cppi41_channel->tx_check, &controller->early_tx_list); - if (!hrtimer_active(&controller->early_tx)) { + if (!hrtimer_is_queued(&controller->early_tx)) { hrtimer_start_range_ns(&controller->early_tx, ktime_set(0, 140 * NSEC_PER_USEC), 40 * NSEC_PER_USEC, diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c index 51beb13c7e1a..09529f94e72d 100644 --- 
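xhci_get_burst_count() above switches from roundup() to DIV_ROUND_UP(): the field wants the number of bursts (minus one), while roundup() returns the packet count rounded up to a burst multiple. A quick arithmetic check makes the difference obvious (the macro definitions mirror the common kernel ones):

#include <stdio.h>

#define ROUNDUP(x, y)      ((((x) + (y) - 1) / (y)) * (y))
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned total_packets = 11, max_burst = 3;	/* bursts of max_burst + 1 packets */

	printf("roundup      -> %u\n", ROUNDUP(total_packets, max_burst + 1) - 1);	/* 11 */
	printf("DIV_ROUND_UP -> %u\n", DIV_ROUND_UP(total_packets, max_burst + 1) - 1);	/* 2  */
	return 0;
}
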
a/drivers/usb/musb/musb_dsps.c +++ b/drivers/usb/musb/musb_dsps.c @@ -494,10 +494,9 @@ static int dsps_musb_set_mode(struct musb *musb, u8 mode) struct dsps_glue *glue = dev_get_drvdata(dev->parent); const struct dsps_musb_wrapper *wrp = glue->wrp; void __iomem *ctrl_base = musb->ctrl_base; - void __iomem *base = musb->mregs; u32 reg; - reg = dsps_readl(base, wrp->mode); + reg = dsps_readl(ctrl_base, wrp->mode); switch (mode) { case MUSB_HOST: @@ -510,7 +509,7 @@ static int dsps_musb_set_mode(struct musb *musb, u8 mode) */ reg |= (1 << wrp->iddig_mux); - dsps_writel(base, wrp->mode, reg); + dsps_writel(ctrl_base, wrp->mode, reg); dsps_writel(ctrl_base, wrp->phy_utmi, 0x02); break; case MUSB_PERIPHERAL: @@ -523,10 +522,10 @@ static int dsps_musb_set_mode(struct musb *musb, u8 mode) */ reg |= (1 << wrp->iddig_mux); - dsps_writel(base, wrp->mode, reg); + dsps_writel(ctrl_base, wrp->mode, reg); break; case MUSB_OTG: - dsps_writel(base, wrp->phy_utmi, 0x02); + dsps_writel(ctrl_base, wrp->phy_utmi, 0x02); break; default: dev_err(glue->dev, "unsupported mode %d\n", mode); diff --git a/drivers/usb/musb/ux500.c b/drivers/usb/musb/ux500.c index c2e45e632723..f202e5088461 100644 --- a/drivers/usb/musb/ux500.c +++ b/drivers/usb/musb/ux500.c @@ -274,7 +274,6 @@ static int ux500_probe(struct platform_device *pdev) musb->dev.parent = &pdev->dev; musb->dev.dma_mask = &pdev->dev.coherent_dma_mask; musb->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask; - musb->dev.of_node = pdev->dev.of_node; glue->dev = &pdev->dev; glue->musb = musb; diff --git a/drivers/usb/phy/phy-msm-usb.c b/drivers/usb/phy/phy-msm-usb.c index ced34f39bdd4..c929370cdaa6 100644 --- a/drivers/usb/phy/phy-msm-usb.c +++ b/drivers/usb/phy/phy-msm-usb.c @@ -1229,7 +1229,9 @@ static void msm_otg_sm_work(struct work_struct *w) motg->chg_state = USB_CHG_STATE_UNDEFINED; motg->chg_type = USB_INVALID_CHARGER; } - pm_runtime_put_sync(otg->phy->dev); + + if (otg->phy->state == OTG_STATE_B_IDLE) + pm_runtime_put_sync(otg->phy->dev); break; case OTG_STATE_B_PERIPHERAL: dev_dbg(otg->phy->dev, "OTG_STATE_B_PERIPHERAL state\n"); diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c index d49f9c326035..4fd36530bfa3 100644 --- a/drivers/usb/renesas_usbhs/fifo.c +++ b/drivers/usb/renesas_usbhs/fifo.c @@ -681,6 +681,14 @@ usbhs_fifo_read_end: usbhs_pipe_number(pipe), pkt->length, pkt->actual, *is_done, pkt->zero); + /* + * Transmission end + */ + if (*is_done) { + if (usbhs_pipe_is_dcp(pipe)) + usbhs_dcp_control_transfer_done(pipe); + } + usbhs_fifo_read_busy: usbhsf_fifo_unselect(pipe, fifo); diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c index 762e4a5f5ae9..330df5ce435b 100644 --- a/drivers/usb/serial/cp210x.c +++ b/drivers/usb/serial/cp210x.c @@ -153,6 +153,7 @@ static const struct usb_device_id id_table[] = { { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */ { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */ { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */ + { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */ { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */ { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */ { USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */ diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c index edf3b124583c..8a3813be1b28 100644 --- a/drivers/usb/serial/ftdi_sio.c +++ b/drivers/usb/serial/ftdi_sio.c @@ -720,7 +720,8 @@ static const struct usb_device_id id_table_combined[] = { { 
USB_DEVICE(FTDI_VID, FTDI_ACG_HFDUAL_PID) }, { USB_DEVICE(FTDI_VID, FTDI_YEI_SERVOCENTER31_PID) }, { USB_DEVICE(FTDI_VID, FTDI_THORLABS_PID) }, - { USB_DEVICE(TESTO_VID, TESTO_USB_INTERFACE_PID) }, + { USB_DEVICE(TESTO_VID, TESTO_1_PID) }, + { USB_DEVICE(TESTO_VID, TESTO_3_PID) }, { USB_DEVICE(FTDI_VID, FTDI_GAMMA_SCOUT_PID) }, { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13M_PID) }, { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13S_PID) }, @@ -944,6 +945,8 @@ static const struct usb_device_id id_table_combined[] = { { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_2_PID) }, { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_3_PID) }, { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_4_PID) }, + /* Infineon Devices */ + { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) }, { } /* Terminating entry */ }; @@ -1566,14 +1569,17 @@ static void ftdi_set_max_packet_size(struct usb_serial_port *port) struct usb_device *udev = serial->dev; struct usb_interface *interface = serial->interface; - struct usb_endpoint_descriptor *ep_desc = &interface->cur_altsetting->endpoint[1].desc; + struct usb_endpoint_descriptor *ep_desc; unsigned num_endpoints; - int i; + unsigned i; num_endpoints = interface->cur_altsetting->desc.bNumEndpoints; dev_info(&udev->dev, "Number of endpoints %d\n", num_endpoints); + if (!num_endpoints) + return; + /* NOTE: some customers have programmed FT232R/FT245R devices * with an endpoint size of 0 - not good. In this case, we * want to override the endpoint descriptor setting and use a diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h index 500474c48f4b..c4777bc6aee0 100644 --- a/drivers/usb/serial/ftdi_sio_ids.h +++ b/drivers/usb/serial/ftdi_sio_ids.h @@ -584,6 +584,12 @@ #define RATOC_PRODUCT_ID_USB60F 0xb020 /* + * Infineon Technologies + */ +#define INFINEON_VID 0x058b +#define INFINEON_TRIBOARD_PID 0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */ + +/* * Acton Research Corp. */ #define ACTON_VID 0x0647 /* Vendor ID */ @@ -798,7 +804,8 @@ * Submitted by Colin Leroy */ #define TESTO_VID 0x128D -#define TESTO_USB_INTERFACE_PID 0x0001 +#define TESTO_1_PID 0x0001 +#define TESTO_3_PID 0x0003 /* * Mobility Electronics products. 
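ftdi_set_max_packet_size() above now refuses to touch the endpoint array until it knows the interface actually has endpoints; the old code dereferenced endpoint[1] unconditionally. A minimal model of that "validate the count before indexing" fix (the struct layout is invented):

#include <stdio.h>

struct ep_desc { unsigned short wMaxPacketSize; };

struct altsetting {
	unsigned num_endpoints;
	struct ep_desc eps[4];
};

static void set_max_packet_size(const struct altsetting *alt)
{
	if (alt->num_endpoints < 2) {	/* the kernel patch only guards against zero */
		printf("too few endpoints, using defaults\n");
		return;
	}
	printf("wMaxPacketSize = %u\n", alt->eps[1].wMaxPacketSize);
}

int main(void)
{
	struct altsetting broken = { .num_endpoints = 0 };
	struct altsetting ok = { .num_endpoints = 2, .eps = { {64}, {512} } };

	set_max_packet_size(&broken);
	set_max_packet_size(&ok);
	return 0;
}
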
diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index 59c3108cc136..a9688940543d 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -352,6 +352,9 @@ static void option_instat_callback(struct urb *urb); /* Zoom */ #define ZOOM_PRODUCT_4597 0x9607 +/* SpeedUp SU9800 usb 3g modem */ +#define SPEEDUP_PRODUCT_SU9800 0x9800 + /* Haier products */ #define HAIER_VENDOR_ID 0x201e #define HAIER_PRODUCT_CE100 0x2009 @@ -372,8 +375,12 @@ static void option_instat_callback(struct urb *urb); /* Olivetti products */ #define OLIVETTI_VENDOR_ID 0x0b3c #define OLIVETTI_PRODUCT_OLICARD100 0xc000 +#define OLIVETTI_PRODUCT_OLICARD120 0xc001 +#define OLIVETTI_PRODUCT_OLICARD140 0xc002 #define OLIVETTI_PRODUCT_OLICARD145 0xc003 +#define OLIVETTI_PRODUCT_OLICARD155 0xc004 #define OLIVETTI_PRODUCT_OLICARD200 0xc005 +#define OLIVETTI_PRODUCT_OLICARD160 0xc00a #define OLIVETTI_PRODUCT_OLICARD500 0xc00b /* Celot products */ @@ -1480,6 +1487,8 @@ static const struct usb_device_id option_ids[] = { .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1426, 0xff, 0xff, 0xff), /* ZTE MF91 */ .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1428, 0xff, 0xff, 0xff), /* Telewell TW-LTE 4G v2 */ + .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) }, @@ -1577,6 +1586,7 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14), .driver_info = (kernel_ulong_t)&four_g_w14_blacklist }, + { USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, SPEEDUP_PRODUCT_SU9800, 0xff) }, { USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) }, { USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) }, { USB_DEVICE(HAIER_VENDOR_ID, HAIER_PRODUCT_CE100) }, @@ -1611,15 +1621,21 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDMNET) }, { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, /* HC28 enumerates with Siemens or Cinterion VID depending on FW revision */ { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) }, - - { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) }, + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100), + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD120), + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD140), + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD145) }, + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD155), + .driver_info = (kernel_ulong_t)&net_intf6_blacklist }, { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200), - .driver_info = (kernel_ulong_t)&net_intf6_blacklist - }, + .driver_info = (kernel_ulong_t)&net_intf6_blacklist }, + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD160), + .driver_info = (kernel_ulong_t)&net_intf6_blacklist }, { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD500), - .driver_info = (kernel_ulong_t)&net_intf4_blacklist - }, + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, { USB_DEVICE(CELOT_VENDOR_ID, 
CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */ { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730 LTE USB modem.*/ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM600) }, diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c index 9d38ddc8da49..866b5df36ed1 100644 --- a/drivers/usb/storage/scsiglue.c +++ b/drivers/usb/storage/scsiglue.c @@ -256,6 +256,10 @@ static int slave_configure(struct scsi_device *sdev) if (us->fflags & US_FL_WRITE_CACHE) sdev->wce_default_on = 1; + /* A few buggy USB-ATA bridges don't understand FUA */ + if (us->fflags & US_FL_BROKEN_FUA) + sdev->broken_fua = 1; + } else { /* Non-disk-type devices don't need to blacklist any pages diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h index 174a447868cd..80a5b366255f 100644 --- a/drivers/usb/storage/unusual_devs.h +++ b/drivers/usb/storage/unusual_devs.h @@ -1936,6 +1936,13 @@ UNUSUAL_DEV( 0x14cd, 0x6600, 0x0201, 0x0201, USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_IGNORE_RESIDUE ), +/* Reported by Michael Büsch <m@bues.ch> */ +UNUSUAL_DEV( 0x152d, 0x0567, 0x0114, 0x0114, + "JMicron", + "USB to ATA/ATAPI Bridge", + USB_SC_DEVICE, USB_PR_DEVICE, NULL, + US_FL_BROKEN_FUA ), + /* Reported by Alexandre Oliva <oliva@lsd.ic.unicamp.br> * JMicron responds to USN and several other SCSI ioctls with a * residue that causes subsequent I/O requests to fail. */ diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 971a760af4a1..8dae2f724a35 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -700,14 +700,6 @@ static void handle_rx_net(struct vhost_work *work) handle_rx(net); } -static void vhost_net_free(void *addr) -{ - if (is_vmalloc_addr(addr)) - vfree(addr); - else - kfree(addr); -} - static int vhost_net_open(struct inode *inode, struct file *f) { struct vhost_net *n; @@ -723,7 +715,7 @@ static int vhost_net_open(struct inode *inode, struct file *f) } vqs = kmalloc(VHOST_NET_VQ_MAX * sizeof(*vqs), GFP_KERNEL); if (!vqs) { - vhost_net_free(n); + kvfree(n); return -ENOMEM; } @@ -840,7 +832,7 @@ static int vhost_net_release(struct inode *inode, struct file *f) * since jobs can re-queue themselves. */ vhost_net_flush(n); kfree(n->dev.vqs); - vhost_net_free(n); + kvfree(n); return 0; } diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 4f4ffa4c604e..69906cacd04f 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -1503,14 +1503,6 @@ static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features) return 0; } -static void vhost_scsi_free(struct vhost_scsi *vs) -{ - if (is_vmalloc_addr(vs)) - vfree(vs); - else - kfree(vs); -} - static int vhost_scsi_open(struct inode *inode, struct file *f) { struct vhost_scsi *vs; @@ -1550,7 +1542,7 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) return 0; err_vqs: - vhost_scsi_free(vs); + kvfree(vs); err_vs: return r; } @@ -1569,7 +1561,7 @@ static int vhost_scsi_release(struct inode *inode, struct file *f) /* Jobs can re-queue themselves in evt kick handler. Do extra flush. 
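The US_FL_BROKEN_FUA plumbing above is a standard quirk-flag flow: an unusual_devs entry tags the device, slave_configure() copies the flag onto the SCSI device, and sd_read_cache_type() (earlier in this series) disables FUA when it is set. A condensed standalone sketch of that flow (the table and helpers are invented; only the JMicron IDs come from the patch):

#include <stdio.h>

#define FL_BROKEN_FUA (1u << 0)

struct quirk { unsigned short vid, pid; unsigned flags; };

static const struct quirk quirks[] = {
	{ 0x152d, 0x0567, FL_BROKEN_FUA },	/* JMicron bridge from the patch */
};

static unsigned lookup_quirks(unsigned short vid, unsigned short pid)
{
	for (unsigned i = 0; i < sizeof(quirks) / sizeof(quirks[0]); i++)
		if (quirks[i].vid == vid && quirks[i].pid == pid)
			return quirks[i].flags;
	return 0;
}

int main(void)
{
	unsigned flags = lookup_quirks(0x152d, 0x0567);
	int dpofua = 1;				/* what the mode page reported */

	if (flags & FL_BROKEN_FUA) {
		printf("Disabling FUA\n");	/* mirrors sd_read_cache_type() */
		dpofua = 0;
	}
	printf("DPOFUA = %d\n", dpofua);
	return 0;
}
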
*/ vhost_scsi_flush(vs); kfree(vs->dev.vqs); - vhost_scsi_free(vs); + kvfree(vs); return 0; } diff --git a/drivers/video/fbdev/atmel_lcdfb.c b/drivers/video/fbdev/atmel_lcdfb.c index e683b6ef9594..d36e830d6fc6 100644 --- a/drivers/video/fbdev/atmel_lcdfb.c +++ b/drivers/video/fbdev/atmel_lcdfb.c @@ -1057,6 +1057,7 @@ static int atmel_lcdfb_of_init(struct atmel_lcdfb_info *sinfo) goto put_display_node; } + INIT_LIST_HEAD(&pdata->pwr_gpios); ret = -ENOMEM; for (i = 0; i < of_gpio_named_count(display_np, "atmel,power-control-gpio"); i++) { gpio = of_get_named_gpio_flags(display_np, "atmel,power-control-gpio", @@ -1082,6 +1083,7 @@ static int atmel_lcdfb_of_init(struct atmel_lcdfb_info *sinfo) dev_err(dev, "set direction output gpio %d failed\n", gpio); goto put_display_node; } + list_add(&og->list, &pdata->pwr_gpios); } if (is_gpio_power) diff --git a/drivers/video/fbdev/bfin_adv7393fb.c b/drivers/video/fbdev/bfin_adv7393fb.c index a54f7f7d763b..8fe41caac38e 100644 --- a/drivers/video/fbdev/bfin_adv7393fb.c +++ b/drivers/video/fbdev/bfin_adv7393fb.c @@ -408,7 +408,7 @@ static int bfin_adv7393_fb_probe(struct i2c_client *client, /* Workaround "PPI Does Not Start Properly In Specific Mode" */ if (ANOMALY_05000400) { ret = gpio_request_one(P_IDENT(P_PPI0_FS3), GPIOF_OUT_INIT_LOW, - "PPI0_FS3") + "PPI0_FS3"); if (ret) { dev_err(&client->dev, "PPI0_FS3 GPIO request failed\n"); ret = -EBUSY; diff --git a/drivers/video/fbdev/omap2/dss/omapdss-boot-init.c b/drivers/video/fbdev/omap2/dss/omapdss-boot-init.c index 99af9e88b2d8..2f0822ee3ff9 100644 --- a/drivers/video/fbdev/omap2/dss/omapdss-boot-init.c +++ b/drivers/video/fbdev/omap2/dss/omapdss-boot-init.c @@ -121,9 +121,11 @@ static void __init omapdss_add_to_list(struct device_node *node, bool root) { struct dss_conv_node *n = kmalloc(sizeof(struct dss_conv_node), GFP_KERNEL); - n->node = node; - n->root = root; - list_add(&n->list, &dss_conv_list); + if (n) { + n->node = node; + n->root = root; + list_add(&n->list, &dss_conv_list); + } } static bool __init omapdss_list_contains(const struct device_node *node) diff --git a/drivers/video/fbdev/vt8500lcdfb.c b/drivers/video/fbdev/vt8500lcdfb.c index a8f2b280f796..a1134c3f6c11 100644 --- a/drivers/video/fbdev/vt8500lcdfb.c +++ b/drivers/video/fbdev/vt8500lcdfb.c @@ -474,8 +474,6 @@ static int vt8500lcd_remove(struct platform_device *pdev) res = platform_get_resource(pdev, IORESOURCE_MEM, 0); release_mem_region(res->start, resource_size(res)); - kfree(fbi); - return 0; } diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c index b7a506f2bb14..5c660c77f03b 100644 --- a/drivers/xen/balloon.c +++ b/drivers/xen/balloon.c @@ -426,20 +426,18 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp) * p2m are consistent. 
*/ if (!xen_feature(XENFEAT_auto_translated_physmap)) { - unsigned long p; - struct page *scratch_page = get_balloon_scratch_page(); - if (!PageHighMem(page)) { + struct page *scratch_page = get_balloon_scratch_page(); + ret = HYPERVISOR_update_va_mapping( (unsigned long)__va(pfn << PAGE_SHIFT), pfn_pte(page_to_pfn(scratch_page), PAGE_KERNEL_RO), 0); BUG_ON(ret); - } - p = page_to_pfn(scratch_page); - __set_phys_to_machine(pfn, pfn_to_mfn(p)); - put_balloon_scratch_page(); + put_balloon_scratch_page(); + } + __set_phys_to_machine(pfn, INVALID_P2M_ENTRY); } #endif diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c index c3667b202f2f..5f1e1f3cd186 100644 --- a/drivers/xen/manage.c +++ b/drivers/xen/manage.c @@ -88,7 +88,6 @@ static int xen_suspend(void *data) if (!si->cancelled) { xen_irq_resume(); - xen_console_resume(); xen_timer_resume(); } @@ -135,6 +134,10 @@ static void do_suspend(void) err = stop_machine(xen_suspend, &si, cpumask_of(0)); + /* Resume console as early as possible. */ + if (!si.cancelled) + xen_console_resume(); + raw_notifier_call_chain(&xen_resume_notifier, 0, NULL); dpm_resume_start(si.cancelled ? PMSG_THAW : PMSG_RESTORE); diff --git a/firmware/Makefile b/firmware/Makefile index 5747417069ca..0862d34cf7d1 100644 --- a/firmware/Makefile +++ b/firmware/Makefile @@ -219,6 +219,12 @@ $(obj)/%.fw: $(obj)/%.H16 $(ihex2fw_dep) obj-y += $(patsubst %,%.gen.o, $(fw-external-y)) obj-$(CONFIG_FIRMWARE_IN_KERNEL) += $(patsubst %,%.gen.o, $(fw-shipped-y)) +ifeq ($(KBUILD_SRC),) +# Makefile.build only creates subdirectories for O= builds, but external +# firmware might live outside the kernel source tree +_dummy := $(foreach d,$(addprefix $(obj)/,$(dir $(fw-external-y))), $(shell [ -d $(d) ] || mkdir -p $(d))) +endif + # Remove .S files and binaries created from ihex # (during 'make clean' .config isn't included so they're all in $(fw-shipped-)) targets := $(fw-shipped-) $(patsubst $(obj)/%,%, \ @@ -830,16 +830,20 @@ void exit_aio(struct mm_struct *mm) static void put_reqs_available(struct kioctx *ctx, unsigned nr) { struct kioctx_cpu *kcpu; + unsigned long flags; preempt_disable(); kcpu = this_cpu_ptr(ctx->cpu); + local_irq_save(flags); kcpu->reqs_available += nr; + while (kcpu->reqs_available >= ctx->req_batch * 2) { kcpu->reqs_available -= ctx->req_batch; atomic_add(ctx->req_batch, &ctx->reqs_available); } + local_irq_restore(flags); preempt_enable(); } @@ -847,10 +851,12 @@ static bool get_reqs_available(struct kioctx *ctx) { struct kioctx_cpu *kcpu; bool ret = false; + unsigned long flags; preempt_disable(); kcpu = this_cpu_ptr(ctx->cpu); + local_irq_save(flags); if (!kcpu->reqs_available) { int old, avail = atomic_read(&ctx->reqs_available); @@ -869,6 +875,7 @@ static bool get_reqs_available(struct kioctx *ctx) ret = true; kcpu->reqs_available--; out: + local_irq_restore(flags); preempt_enable(); return ret; } @@ -1021,6 +1028,7 @@ void aio_complete(struct kiocb *iocb, long res, long res2) /* everything turned out well, dispose of the aiocb. 
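The aio changes above wrap the per-cpu request accounting in local_irq_save()/local_irq_restore() because put_reqs_available() can now also run from the completion path, potentially in interrupt context; the accounting itself moves requests between a per-cpu cache and the global pool in req_batch chunks. A userspace model of just that arithmetic, with the IRQ protection indicated only by comments:

#include <stdio.h>

static int global_avail = 128;	/* ctx->reqs_available */
static int cpu_avail;		/* kcpu->reqs_available */
static const int req_batch = 16;

static void put_reqs_available(int nr)
{
	/* local_irq_save(flags); */
	cpu_avail += nr;
	while (cpu_avail >= req_batch * 2) {
		cpu_avail -= req_batch;
		global_avail += req_batch;	/* give a whole batch back to the pool */
	}
	/* local_irq_restore(flags); */
}

static int get_reqs_available(void)
{
	/* local_irq_save(flags); */
	if (!cpu_avail) {
		if (global_avail < req_batch) {
			/* local_irq_restore(flags); */
			return 0;		/* no whole batch left to take */
		}
		global_avail -= req_batch;
		cpu_avail += req_batch;		/* move one batch to this cpu */
	}
	cpu_avail--;
	/* local_irq_restore(flags); */
	return 1;
}

int main(void)
{
	for (int i = 0; i < 5; i++)
		printf("get=%d cpu=%d global=%d\n",
		       get_reqs_available(), cpu_avail, global_avail);
	put_reqs_available(40);
	printf("after put: cpu=%d global=%d\n", cpu_avail, global_avail);
	return 0;
}
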
*/ kiocb_free(iocb); + put_reqs_available(ctx, 1); /* * We have to order our ring_info tail store above and test @@ -1062,6 +1070,9 @@ static long aio_read_events_ring(struct kioctx *ctx, if (head == tail) goto out; + head %= ctx->nr_events; + tail %= ctx->nr_events; + while (ret < nr) { long avail; struct io_event *ev; @@ -1100,8 +1111,6 @@ static long aio_read_events_ring(struct kioctx *ctx, flush_dcache_page(ctx->ring_pages[0]); pr_debug("%li h%u t%u\n", ret, head, tail); - - put_reqs_available(ctx, ret); out: mutex_unlock(&ctx->ring_lock); diff --git a/fs/autofs4/inode.c b/fs/autofs4/inode.c index d7bd395ab586..1c55388ae633 100644 --- a/fs/autofs4/inode.c +++ b/fs/autofs4/inode.c @@ -210,7 +210,7 @@ int autofs4_fill_super(struct super_block *s, void *data, int silent) int pipefd; struct autofs_sb_info *sbi; struct autofs_info *ino; - int pgrp; + int pgrp = 0; bool pgrp_set = false; int ret = -EINVAL; diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index 92371c414228..1daea0b47187 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -821,7 +821,7 @@ static void free_workspace(int type, struct list_head *workspace) spin_lock(workspace_lock); if (*num_workspace < num_online_cpus()) { - list_add_tail(workspace, idle_workspace); + list_add(workspace, idle_workspace); (*num_workspace)++; spin_unlock(workspace_lock); goto wake; diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c index 2af6e66fe788..eea26e1b2fda 100644 --- a/fs/btrfs/dev-replace.c +++ b/fs/btrfs/dev-replace.c @@ -36,6 +36,7 @@ #include "check-integrity.h" #include "rcu-string.h" #include "dev-replace.h" +#include "sysfs.h" static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info, int scrub_ret); @@ -562,6 +563,10 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info, fs_info->fs_devices->latest_bdev = tgt_device->bdev; list_add(&tgt_device->dev_alloc_list, &fs_info->fs_devices->alloc_list); + /* replace the sysfs entry */ + btrfs_kobj_rm_device(fs_info, src_device); + btrfs_kobj_add_device(fs_info, tgt_device); + btrfs_rm_dev_replace_blocked(fs_info); btrfs_rm_dev_replace_srcdev(fs_info, src_device); diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 8bb4aa19898f..08e65e9cf2aa 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -369,7 +369,8 @@ static int verify_parent_transid(struct extent_io_tree *io_tree, out: unlock_extent_cached(io_tree, eb->start, eb->start + eb->len - 1, &cached_state, GFP_NOFS); - btrfs_tree_read_unlock_blocking(eb); + if (need_lock) + btrfs_tree_read_unlock_blocking(eb); return ret; } @@ -2904,7 +2905,9 @@ retry_root_backup: if (ret) goto fail_qgroup; + mutex_lock(&fs_info->cleaner_mutex); ret = btrfs_recover_relocation(tree_root); + mutex_unlock(&fs_info->cleaner_mutex); if (ret < 0) { printk(KERN_WARNING "BTRFS: failed to recover relocation\n"); diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c index 99c253918208..813537f362f9 100644 --- a/fs/btrfs/extent-tree.c +++ b/fs/btrfs/extent-tree.c @@ -5678,7 +5678,6 @@ void btrfs_prepare_extent_commit(struct btrfs_trans_handle *trans, struct btrfs_caching_control *next; struct btrfs_caching_control *caching_ctl; struct btrfs_block_group_cache *cache; - struct btrfs_space_info *space_info; down_write(&fs_info->commit_root_sem); @@ -5701,9 +5700,6 @@ void btrfs_prepare_extent_commit(struct btrfs_trans_handle *trans, up_write(&fs_info->commit_root_sem); - list_for_each_entry_rcu(space_info, &fs_info->space_info, list) - 
percpu_counter_set(&space_info->total_bytes_pinned, 0); - update_global_block_rsv(fs_info); } @@ -5741,6 +5737,7 @@ static int unpin_extent_range(struct btrfs_root *root, u64 start, u64 end) spin_lock(&cache->lock); cache->pinned -= len; space_info->bytes_pinned -= len; + percpu_counter_add(&space_info->total_bytes_pinned, -len); if (cache->ro) { space_info->bytes_readonly += len; readonly = true; diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c index 0d321c23069a..47aceb494d1d 100644 --- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -136,19 +136,22 @@ static unsigned int btrfs_flags_to_ioctl(unsigned int flags) void btrfs_update_iflags(struct inode *inode) { struct btrfs_inode *ip = BTRFS_I(inode); - - inode->i_flags &= ~(S_SYNC|S_APPEND|S_IMMUTABLE|S_NOATIME|S_DIRSYNC); + unsigned int new_fl = 0; if (ip->flags & BTRFS_INODE_SYNC) - inode->i_flags |= S_SYNC; + new_fl |= S_SYNC; if (ip->flags & BTRFS_INODE_IMMUTABLE) - inode->i_flags |= S_IMMUTABLE; + new_fl |= S_IMMUTABLE; if (ip->flags & BTRFS_INODE_APPEND) - inode->i_flags |= S_APPEND; + new_fl |= S_APPEND; if (ip->flags & BTRFS_INODE_NOATIME) - inode->i_flags |= S_NOATIME; + new_fl |= S_NOATIME; if (ip->flags & BTRFS_INODE_DIRSYNC) - inode->i_flags |= S_DIRSYNC; + new_fl |= S_DIRSYNC; + + set_mask_bits(&inode->i_flags, + S_SYNC | S_APPEND | S_IMMUTABLE | S_NOATIME | S_DIRSYNC, + new_fl); } /* @@ -3139,7 +3142,6 @@ out: static void clone_update_extent_map(struct inode *inode, const struct btrfs_trans_handle *trans, const struct btrfs_path *path, - struct btrfs_file_extent_item *fi, const u64 hole_offset, const u64 hole_len) { @@ -3154,7 +3156,11 @@ static void clone_update_extent_map(struct inode *inode, return; } - if (fi) { + if (path) { + struct btrfs_file_extent_item *fi; + + fi = btrfs_item_ptr(path->nodes[0], path->slots[0], + struct btrfs_file_extent_item); btrfs_extent_item_to_extent_map(inode, path, fi, false, em); em->generation = -1; if (btrfs_file_extent_type(path->nodes[0], fi) == @@ -3508,18 +3514,15 @@ process_slot: btrfs_item_ptr_offset(leaf, slot), size); inode_add_bytes(inode, datal); - extent = btrfs_item_ptr(leaf, slot, - struct btrfs_file_extent_item); } /* If we have an implicit hole (NO_HOLES feature). 
*/ if (drop_start < new_key.offset) clone_update_extent_map(inode, trans, - path, NULL, drop_start, + NULL, drop_start, new_key.offset - drop_start); - clone_update_extent_map(inode, trans, path, - extent, 0, 0); + clone_update_extent_map(inode, trans, path, 0, 0); btrfs_mark_buffer_dirty(leaf); btrfs_release_path(path); @@ -3562,12 +3565,10 @@ process_slot: btrfs_end_transaction(trans, root); goto out; } + clone_update_extent_map(inode, trans, NULL, last_dest_end, + destoff + len - last_dest_end); ret = clone_finish_inode_update(trans, inode, destoff + len, destoff, olen); - if (ret) - goto out; - clone_update_extent_map(inode, trans, path, NULL, last_dest_end, - destoff + len - last_dest_end); } out: diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c index e12441c7cf1d..7187b14faa6c 100644 --- a/fs/btrfs/ordered-data.c +++ b/fs/btrfs/ordered-data.c @@ -484,8 +484,19 @@ void btrfs_wait_logged_extents(struct btrfs_root *log, u64 transid) log_list); list_del_init(&ordered->log_list); spin_unlock_irq(&log->log_extents_lock[index]); + + if (!test_bit(BTRFS_ORDERED_IO_DONE, &ordered->flags) && + !test_bit(BTRFS_ORDERED_DIRECT, &ordered->flags)) { + struct inode *inode = ordered->inode; + u64 start = ordered->file_offset; + u64 end = ordered->file_offset + ordered->len - 1; + + WARN_ON(!inode); + filemap_fdatawrite_range(inode->i_mapping, start, end); + } wait_event(ordered->wait, test_bit(BTRFS_ORDERED_IO_DONE, &ordered->flags)); + btrfs_put_ordered_extent(ordered); spin_lock_irq(&log->log_extents_lock[index]); } diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c index 6efd70d3b64f..9626b4ad3b9a 100644 --- a/fs/btrfs/print-tree.c +++ b/fs/btrfs/print-tree.c @@ -54,7 +54,7 @@ static void print_extent_data_ref(struct extent_buffer *eb, btrfs_extent_data_ref_count(eb, ref)); } -static void print_extent_item(struct extent_buffer *eb, int slot) +static void print_extent_item(struct extent_buffer *eb, int slot, int type) { struct btrfs_extent_item *ei; struct btrfs_extent_inline_ref *iref; @@ -63,7 +63,6 @@ static void print_extent_item(struct extent_buffer *eb, int slot) struct btrfs_disk_key key; unsigned long end; unsigned long ptr; - int type; u32 item_size = btrfs_item_size_nr(eb, slot); u64 flags; u64 offset; @@ -88,7 +87,8 @@ static void print_extent_item(struct extent_buffer *eb, int slot) btrfs_extent_refs(eb, ei), btrfs_extent_generation(eb, ei), flags); - if (flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) { + if ((type == BTRFS_EXTENT_ITEM_KEY) && + flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) { struct btrfs_tree_block_info *info; info = (struct btrfs_tree_block_info *)(ei + 1); btrfs_tree_block_key(eb, info, &key); @@ -223,7 +223,8 @@ void btrfs_print_leaf(struct btrfs_root *root, struct extent_buffer *l) btrfs_disk_root_refs(l, ri)); break; case BTRFS_EXTENT_ITEM_KEY: - print_extent_item(l, i); + case BTRFS_METADATA_ITEM_KEY: + print_extent_item(l, i, type); break; case BTRFS_TREE_BLOCK_REF_KEY: printk(KERN_INFO "\t\ttree block backref\n"); diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c index 4055291a523e..4a88f073fdd7 100644 --- a/fs/btrfs/raid56.c +++ b/fs/btrfs/raid56.c @@ -1956,9 +1956,10 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio) * pages are going to be uptodate. 
*/ for (stripe = 0; stripe < bbio->num_stripes; stripe++) { - if (rbio->faila == stripe || - rbio->failb == stripe) + if (rbio->faila == stripe || rbio->failb == stripe) { + atomic_inc(&rbio->bbio->error); continue; + } for (pagenr = 0; pagenr < nr_pages; pagenr++) { struct page *p; diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index 4662d92a4b73..8e16bca69c56 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -522,9 +522,10 @@ int btrfs_parse_options(struct btrfs_root *root, char *options) case Opt_ssd_spread: btrfs_set_and_info(root, SSD_SPREAD, "use spread ssd allocation scheme"); + btrfs_set_opt(info->mount_opt, SSD); break; case Opt_nossd: - btrfs_clear_and_info(root, NOSSD, + btrfs_set_and_info(root, NOSSD, "not using ssd allocation scheme"); btrfs_clear_opt(info->mount_opt, SSD); break; @@ -1467,7 +1468,9 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data) goto restore; /* recover relocation */ + mutex_lock(&fs_info->cleaner_mutex); ret = btrfs_recover_relocation(root); + mutex_unlock(&fs_info->cleaner_mutex); if (ret) goto restore; @@ -1808,6 +1811,8 @@ static int btrfs_show_devname(struct seq_file *m, struct dentry *root) list_for_each_entry(dev, head, dev_list) { if (dev->missing) continue; + if (!dev->name) + continue; if (!first_dev || dev->devid < first_dev->devid) first_dev = dev; } diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c index df39458f1487..78699364f537 100644 --- a/fs/btrfs/sysfs.c +++ b/fs/btrfs/sysfs.c @@ -605,14 +605,37 @@ static void init_feature_attrs(void) } } -static int add_device_membership(struct btrfs_fs_info *fs_info) +int btrfs_kobj_rm_device(struct btrfs_fs_info *fs_info, + struct btrfs_device *one_device) +{ + struct hd_struct *disk; + struct kobject *disk_kobj; + + if (!fs_info->device_dir_kobj) + return -EINVAL; + + if (one_device) { + disk = one_device->bdev->bd_part; + disk_kobj = &part_to_dev(disk)->kobj; + + sysfs_remove_link(fs_info->device_dir_kobj, + disk_kobj->name); + } + + return 0; +} + +int btrfs_kobj_add_device(struct btrfs_fs_info *fs_info, + struct btrfs_device *one_device) { int error = 0; struct btrfs_fs_devices *fs_devices = fs_info->fs_devices; struct btrfs_device *dev; - fs_info->device_dir_kobj = kobject_create_and_add("devices", + if (!fs_info->device_dir_kobj) + fs_info->device_dir_kobj = kobject_create_and_add("devices", &fs_info->super_kobj); + if (!fs_info->device_dir_kobj) return -ENOMEM; @@ -623,6 +646,9 @@ static int add_device_membership(struct btrfs_fs_info *fs_info) if (!dev->bdev) continue; + if (one_device && one_device != dev) + continue; + disk = dev->bdev->bd_part; disk_kobj = &part_to_dev(disk)->kobj; @@ -666,7 +692,7 @@ int btrfs_sysfs_add_one(struct btrfs_fs_info *fs_info) if (error) goto failure; - error = add_device_membership(fs_info); + error = btrfs_kobj_add_device(fs_info, NULL); if (error) goto failure; diff --git a/fs/btrfs/sysfs.h b/fs/btrfs/sysfs.h index 9ab576318a84..ac46df37504c 100644 --- a/fs/btrfs/sysfs.h +++ b/fs/btrfs/sysfs.h @@ -66,4 +66,8 @@ char *btrfs_printable_features(enum btrfs_feature_set set, u64 flags); extern const char * const btrfs_feature_set_names[3]; extern struct kobj_type space_info_ktype; extern struct kobj_type btrfs_raid_ktype; +int btrfs_kobj_add_device(struct btrfs_fs_info *fs_info, + struct btrfs_device *one_device); +int btrfs_kobj_rm_device(struct btrfs_fs_info *fs_info, + struct btrfs_device *one_device); #endif /* _BTRFS_SYSFS_H_ */ diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c index 511839c04f11..5f379affdf23 100644 
--- a/fs/btrfs/transaction.c +++ b/fs/btrfs/transaction.c @@ -386,11 +386,13 @@ start_transaction(struct btrfs_root *root, u64 num_items, unsigned int type, bool reloc_reserved = false; int ret; + /* Send isn't supposed to start transactions. */ + ASSERT(current->journal_info != (void *)BTRFS_SEND_TRANS_STUB); + if (test_bit(BTRFS_FS_STATE_ERROR, &root->fs_info->fs_state)) return ERR_PTR(-EROFS); - if (current->journal_info && - current->journal_info != (void *)BTRFS_SEND_TRANS_STUB) { + if (current->journal_info) { WARN_ON(type & TRANS_EXTWRITERS); h = current->journal_info; h->use_count++; @@ -491,6 +493,7 @@ again: smp_mb(); if (cur_trans->state >= TRANS_STATE_BLOCKED && may_wait_transaction(root, type)) { + current->journal_info = h; btrfs_commit_transaction(h, root); goto again; } @@ -1615,11 +1618,6 @@ static int btrfs_flush_all_pending_stuffs(struct btrfs_trans_handle *trans, int ret; ret = btrfs_run_delayed_items(trans, root); - /* - * running the delayed items may have added new refs. account - * them now so that they hinder processing of more delayed refs - * as little as possible. - */ if (ret) return ret; diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index c83b24251e53..6cb82f62cb7c 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -40,6 +40,7 @@ #include "rcu-string.h" #include "math.h" #include "dev-replace.h" +#include "sysfs.h" static int init_first_rw_device(struct btrfs_trans_handle *trans, struct btrfs_root *root, @@ -554,12 +555,14 @@ static struct btrfs_fs_devices *clone_fs_devices(struct btrfs_fs_devices *orig) * This is ok to do without rcu read locked because we hold the * uuid mutex so nothing we touch in here is going to disappear. */ - name = rcu_string_strdup(orig_dev->name->str, GFP_NOFS); - if (!name) { - kfree(device); - goto error; + if (orig_dev->name) { + name = rcu_string_strdup(orig_dev->name->str, GFP_NOFS); + if (!name) { + kfree(device); + goto error; + } + rcu_assign_pointer(device->name, name); } - rcu_assign_pointer(device->name, name); list_add(&device->dev_list, &fs_devices->devices); device->fs_devices = fs_devices; @@ -1677,8 +1680,11 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path) if (device->bdev == root->fs_info->fs_devices->latest_bdev) root->fs_info->fs_devices->latest_bdev = next_device->bdev; - if (device->bdev) + if (device->bdev) { device->fs_devices->open_devices--; + /* remove sysfs entry */ + btrfs_kobj_rm_device(root->fs_info, device); + } call_rcu(&device->rcu, free_device); @@ -2143,9 +2149,14 @@ int btrfs_init_new_device(struct btrfs_root *root, char *device_path) total_bytes = btrfs_super_num_devices(root->fs_info->super_copy); btrfs_set_super_num_devices(root->fs_info->super_copy, total_bytes + 1); + + /* add sysfs device entry */ + btrfs_kobj_add_device(root->fs_info, device); + mutex_unlock(&root->fs_info->fs_devices->device_list_mutex); if (seeding_dev) { + char fsid_buf[BTRFS_UUID_UNPARSED_SIZE]; ret = init_first_rw_device(trans, root, device); if (ret) { btrfs_abort_transaction(trans, root, ret); @@ -2156,6 +2167,14 @@ int btrfs_init_new_device(struct btrfs_root *root, char *device_path) btrfs_abort_transaction(trans, root, ret); goto error_trans; } + + /* Sprouting would change fsid of the mounted root, + * so rename the fsid on the sysfs + */ + snprintf(fsid_buf, BTRFS_UUID_UNPARSED_SIZE, "%pU", + root->fs_info->fsid); + if (kobject_rename(&root->fs_info->super_kobj, fsid_buf)) + goto error_trans; } else { ret = btrfs_add_device(trans, root, device); if (ret) { @@ -2205,6 +2224,7 @@ 
error_trans: unlock_chunks(root); btrfs_end_transaction(trans, root); rcu_string_free(device->name); + btrfs_kobj_rm_device(root->fs_info, device); kfree(device); error: blkdev_put(bdev, FMODE_EXCL); diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c index 4f196314c0c1..b67d8fc81277 100644 --- a/fs/btrfs/zlib.c +++ b/fs/btrfs/zlib.c @@ -136,7 +136,7 @@ static int zlib_compress_pages(struct list_head *ws, if (workspace->def_strm.total_in > 8192 && workspace->def_strm.total_in < workspace->def_strm.total_out) { - ret = -EIO; + ret = -E2BIG; goto out; } /* we need another page for writing out. Test this diff --git a/fs/cifs/cifs_unicode.c b/fs/cifs/cifs_unicode.c index 0227b45ef00a..15e9505aa35f 100644 --- a/fs/cifs/cifs_unicode.c +++ b/fs/cifs/cifs_unicode.c @@ -290,7 +290,8 @@ int cifsConvertToUTF16(__le16 *target, const char *source, int srclen, const struct nls_table *cp, int mapChars) { - int i, j, charlen; + int i, charlen; + int j = 0; char src_char; __le16 dst_char; wchar_t tmp; @@ -298,12 +299,11 @@ cifsConvertToUTF16(__le16 *target, const char *source, int srclen, if (!mapChars) return cifs_strtoUTF16(target, source, PATH_MAX, cp); - for (i = 0, j = 0; i < srclen; j++) { + for (i = 0; i < srclen; j++) { src_char = source[i]; charlen = 1; switch (src_char) { case 0: - put_unaligned(0, &target[j]); goto ctoUTF16_out; case ':': dst_char = cpu_to_le16(UNI_COLON); @@ -350,6 +350,7 @@ cifsConvertToUTF16(__le16 *target, const char *source, int srclen, } ctoUTF16_out: + put_unaligned(0, &target[j]); /* Null terminate target unicode string */ return j; } diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c index 2c90d07c0b3a..888398067420 100644 --- a/fs/cifs/cifsfs.c +++ b/fs/cifs/cifsfs.c @@ -725,6 +725,19 @@ out_nls: goto out; } +static ssize_t +cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter) +{ + ssize_t rc; + struct inode *inode = file_inode(iocb->ki_filp); + + rc = cifs_revalidate_mapping(inode); + if (rc) + return rc; + + return generic_file_read_iter(iocb, iter); +} + static ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) { struct inode *inode = file_inode(iocb->ki_filp); @@ -881,7 +894,7 @@ const struct inode_operations cifs_symlink_inode_ops = { const struct file_operations cifs_file_ops = { .read = new_sync_read, .write = new_sync_write, - .read_iter = generic_file_read_iter, + .read_iter = cifs_loose_read_iter, .write_iter = cifs_file_write_iter, .open = cifs_open, .release = cifs_close, @@ -939,7 +952,7 @@ const struct file_operations cifs_file_direct_ops = { const struct file_operations cifs_file_nobrl_ops = { .read = new_sync_read, .write = new_sync_write, - .read_iter = generic_file_read_iter, + .read_iter = cifs_loose_read_iter, .write_iter = cifs_file_write_iter, .open = cifs_open, .release = cifs_close, diff --git a/fs/cifs/link.c b/fs/cifs/link.c index 264ece71bdb2..68559fd557fb 100644 --- a/fs/cifs/link.c +++ b/fs/cifs/link.c @@ -374,7 +374,7 @@ cifs_create_mf_symlink(unsigned int xid, struct cifs_tcon *tcon, oparms.cifs_sb = cifs_sb; oparms.desired_access = GENERIC_WRITE; oparms.create_options = create_options; - oparms.disposition = FILE_OPEN; + oparms.disposition = FILE_CREATE; oparms.path = path; oparms.fid = &fid; oparms.reconnect = false; diff --git a/fs/coredump.c b/fs/coredump.c index 0b2528fb640e..a93f7e6ea4cf 100644 --- a/fs/coredump.c +++ b/fs/coredump.c @@ -306,7 +306,7 @@ static int zap_threads(struct task_struct *tsk, struct mm_struct *mm, if (unlikely(nr < 0)) return nr; - tsk->flags = PF_DUMPCORE; + tsk->flags |= 
PF_DUMPCORE; if (atomic_read(&mm->mm_users) == nr + 1) goto done; /* diff --git a/fs/direct-io.c b/fs/direct-io.c index 98040ba388ac..194d0d122cae 100644 --- a/fs/direct-io.c +++ b/fs/direct-io.c @@ -198,9 +198,8 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio) * L1 cache. */ static inline struct page *dio_get_page(struct dio *dio, - struct dio_submit *sdio, size_t *from, size_t *to) + struct dio_submit *sdio) { - int n; if (dio_pages_present(sdio) == 0) { int ret; @@ -209,10 +208,7 @@ static inline struct page *dio_get_page(struct dio *dio, return ERR_PTR(ret); BUG_ON(dio_pages_present(sdio) == 0); } - n = sdio->head++; - *from = n ? 0 : sdio->from; - *to = (n == sdio->tail - 1) ? sdio->to : PAGE_SIZE; - return dio->pages[n]; + return dio->pages[sdio->head]; } /** @@ -911,11 +907,15 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio, while (sdio->block_in_file < sdio->final_block_in_request) { struct page *page; size_t from, to; - page = dio_get_page(dio, sdio, &from, &to); + + page = dio_get_page(dio, sdio); if (IS_ERR(page)) { ret = PTR_ERR(page); goto out; } + from = sdio->head ? 0 : sdio->from; + to = (sdio->head == sdio->tail - 1) ? sdio->to : PAGE_SIZE; + sdio->head++; while (from < to) { unsigned this_chunk_bytes; /* # of bytes mapped */ diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c index 0762d143e252..fca382037ddd 100644 --- a/fs/ext4/balloc.c +++ b/fs/ext4/balloc.c @@ -194,7 +194,16 @@ static void ext4_init_block_bitmap(struct super_block *sb, if (!ext4_group_desc_csum_verify(sb, block_group, gdp)) { ext4_error(sb, "Checksum bad for group %u", block_group); grp = ext4_get_group_info(sb, block_group); + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) + percpu_counter_sub(&sbi->s_freeclusters_counter, + grp->bb_free); set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state); + if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { + int count; + count = ext4_free_inodes_count(sb, gdp); + percpu_counter_sub(&sbi->s_freeinodes_counter, + count); + } set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state); return; } @@ -359,6 +368,7 @@ static void ext4_validate_block_bitmap(struct super_block *sb, { ext4_fsblk_t blk; struct ext4_group_info *grp = ext4_get_group_info(sb, block_group); + struct ext4_sb_info *sbi = EXT4_SB(sb); if (buffer_verified(bh)) return; @@ -369,6 +379,9 @@ static void ext4_validate_block_bitmap(struct super_block *sb, ext4_unlock_group(sb, block_group); ext4_error(sb, "bg %u: block %llu: invalid block bitmap", block_group, blk); + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) + percpu_counter_sub(&sbi->s_freeclusters_counter, + grp->bb_free); set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state); return; } @@ -376,6 +389,9 @@ static void ext4_validate_block_bitmap(struct super_block *sb, desc, bh))) { ext4_unlock_group(sb, block_group); ext4_error(sb, "bg %u: bad block bitmap checksum", block_group); + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) + percpu_counter_sub(&sbi->s_freeclusters_counter, + grp->bb_free); set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state); return; } diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c index 3f5c188953a4..0b7e28e7eaa4 100644 --- a/fs/ext4/extents_status.c +++ b/fs/ext4/extents_status.c @@ -966,10 +966,10 @@ retry: continue; } - if (ei->i_es_lru_nr == 0 || ei == locked_ei) + if (ei->i_es_lru_nr == 0 || ei == locked_ei || + !write_trylock(&ei->i_es_lock)) continue; - write_lock(&ei->i_es_lock); shrunk = __es_try_to_reclaim_extents(ei, nr_to_scan); if (ei->i_es_lru_nr == 0) 
list_del_init(&ei->i_es_lru); diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c index 0ee59a6644e2..5b87fc36aab8 100644 --- a/fs/ext4/ialloc.c +++ b/fs/ext4/ialloc.c @@ -71,6 +71,7 @@ static unsigned ext4_init_inode_bitmap(struct super_block *sb, struct ext4_group_desc *gdp) { struct ext4_group_info *grp; + struct ext4_sb_info *sbi = EXT4_SB(sb); J_ASSERT_BH(bh, buffer_locked(bh)); /* If checksum is bad mark all blocks and inodes use to prevent @@ -78,7 +79,16 @@ static unsigned ext4_init_inode_bitmap(struct super_block *sb, if (!ext4_group_desc_csum_verify(sb, block_group, gdp)) { ext4_error(sb, "Checksum bad for group %u", block_group); grp = ext4_get_group_info(sb, block_group); + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) + percpu_counter_sub(&sbi->s_freeclusters_counter, + grp->bb_free); set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state); + if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { + int count; + count = ext4_free_inodes_count(sb, gdp); + percpu_counter_sub(&sbi->s_freeinodes_counter, + count); + } set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state); return 0; } @@ -116,6 +126,7 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group) struct buffer_head *bh = NULL; ext4_fsblk_t bitmap_blk; struct ext4_group_info *grp; + struct ext4_sb_info *sbi = EXT4_SB(sb); desc = ext4_get_group_desc(sb, block_group, NULL); if (!desc) @@ -185,6 +196,12 @@ verify: ext4_error(sb, "Corrupt inode bitmap - block_group = %u, " "inode_bitmap = %llu", block_group, bitmap_blk); grp = ext4_get_group_info(sb, block_group); + if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { + int count; + count = ext4_free_inodes_count(sb, desc); + percpu_counter_sub(&sbi->s_freeinodes_counter, + count); + } set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state); return NULL; } @@ -321,6 +338,12 @@ out: fatal = err; } else { ext4_error(sb, "bit already cleared for inode %lu", ino); + if (gdp && !EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { + int count; + count = ext4_free_inodes_count(sb, gdp); + percpu_counter_sub(&sbi->s_freeinodes_counter, + count); + } set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state); } @@ -851,6 +874,13 @@ got: goto out; } + BUFFER_TRACE(group_desc_bh, "get_write_access"); + err = ext4_journal_get_write_access(handle, group_desc_bh); + if (err) { + ext4_std_error(sb, err); + goto out; + } + /* We may have to initialize the block bitmap if it isn't already */ if (ext4_has_group_desc_csum(sb) && gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) { @@ -887,13 +917,6 @@ got: } } - BUFFER_TRACE(group_desc_bh, "get_write_access"); - err = ext4_journal_get_write_access(handle, group_desc_bh); - if (err) { - ext4_std_error(sb, err); - goto out; - } - /* Update the relevant bg descriptor fields */ if (ext4_has_group_desc_csum(sb)) { int free; diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c index 8a57e9fcd1b9..fd69da194826 100644 --- a/fs/ext4/indirect.c +++ b/fs/ext4/indirect.c @@ -389,7 +389,13 @@ static int ext4_alloc_branch(handle_t *handle, struct inode *inode, return 0; failed: for (; i >= 0; i--) { - if (i != indirect_blks && branch[i].bh) + /* + * We want to ext4_forget() only freshly allocated indirect + * blocks. Buffer for new_blocks[i-1] is at branch[i].bh and + * buffer at branch[0].bh is indirect block / inode already + * existing before ext4_alloc_branch() was called. 
+ */ + if (i > 0 && i != indirect_blks && branch[i].bh) ext4_forget(handle, 1, inode, branch[i].bh, branch[i].bh->b_blocknr); ext4_free_blocks(handle, inode, NULL, new_blocks[i], @@ -1310,16 +1316,24 @@ static int free_hole_blocks(handle_t *handle, struct inode *inode, blk = *i_data; if (level > 0) { ext4_lblk_t first2; + ext4_lblk_t count2; + bh = sb_bread(inode->i_sb, le32_to_cpu(blk)); if (!bh) { EXT4_ERROR_INODE_BLOCK(inode, le32_to_cpu(blk), "Read failure"); return -EIO; } - first2 = (first > offset) ? first - offset : 0; + if (first > offset) { + first2 = first - offset; + count2 = count; + } else { + first2 = 0; + count2 = count - (offset - first); + } ret = free_hole_blocks(handle, inode, bh, (__le32 *)bh->b_data, level - 1, - first2, count - offset, + first2, count2, inode->i_sb->s_blocksize >> 2); if (ret) { brelse(bh); @@ -1329,8 +1343,8 @@ static int free_hole_blocks(handle_t *handle, struct inode *inode, if (level == 0 || (bh && all_zeroes((__le32 *)bh->b_data, (__le32 *)bh->b_data + addr_per_block))) { - ext4_free_data(handle, inode, parent_bh, &blk, &blk+1); - *i_data = 0; + ext4_free_data(handle, inode, parent_bh, + i_data, i_data + 1); } brelse(bh); bh = NULL; diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c index 59e31622cc6e..2dcb936be90e 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -722,6 +722,7 @@ void ext4_mb_generate_buddy(struct super_block *sb, void *buddy, void *bitmap, ext4_group_t group) { struct ext4_group_info *grp = ext4_get_group_info(sb, group); + struct ext4_sb_info *sbi = EXT4_SB(sb); ext4_grpblk_t max = EXT4_CLUSTERS_PER_GROUP(sb); ext4_grpblk_t i = 0; ext4_grpblk_t first; @@ -751,14 +752,17 @@ void ext4_mb_generate_buddy(struct super_block *sb, if (free != grp->bb_free) { ext4_grp_locked_error(sb, group, 0, 0, - "%u clusters in bitmap, %u in gd; " - "block bitmap corrupt.", + "block bitmap and bg descriptor " + "inconsistent: %u vs %u free clusters", free, grp->bb_free); /* * If we intend to continue, we consider group descriptor * corrupt and update bb_free using bitmap value */ grp->bb_free = free; + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) + percpu_counter_sub(&sbi->s_freeclusters_counter, + grp->bb_free); set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state); } mb_set_largest_free_order(sb, grp); @@ -1431,6 +1435,7 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b, right_is_free = !mb_test_bit(last + 1, e4b->bd_bitmap); if (unlikely(block != -1)) { + struct ext4_sb_info *sbi = EXT4_SB(sb); ext4_fsblk_t blocknr; blocknr = ext4_group_first_block_no(sb, e4b->bd_group); @@ -1441,6 +1446,9 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b, "freeing already freed block " "(bit %u); block bitmap corrupt.", block); + if (!EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info)) + percpu_counter_sub(&sbi->s_freeclusters_counter, + e4b->bd_info->bb_free); /* Mark the block group as corrupt. 
*/ set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &e4b->bd_info->bb_state); diff --git a/fs/ext4/super.c b/fs/ext4/super.c index b9b9aabfb4d2..6df7bc611dbd 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1525,8 +1525,6 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token, arg = JBD2_DEFAULT_MAX_COMMIT_AGE; sbi->s_commit_interval = HZ * arg; } else if (token == Opt_max_batch_time) { - if (arg == 0) - arg = EXT4_DEF_MAX_BATCH_TIME; sbi->s_max_batch_time = arg; } else if (token == Opt_min_batch_time) { sbi->s_min_batch_time = arg; @@ -2809,10 +2807,11 @@ static void print_daily_error_info(unsigned long arg) es = sbi->s_es; if (es->s_error_count) - ext4_msg(sb, KERN_NOTICE, "error count: %u", + /* fsck newer than v1.41.13 is needed to clean this condition. */ + ext4_msg(sb, KERN_NOTICE, "error count since last fsck: %u", le32_to_cpu(es->s_error_count)); if (es->s_first_error_time) { - printk(KERN_NOTICE "EXT4-fs (%s): initial error at %u: %.*s:%d", + printk(KERN_NOTICE "EXT4-fs (%s): initial error at time %u: %.*s:%d", sb->s_id, le32_to_cpu(es->s_first_error_time), (int) sizeof(es->s_first_error_func), es->s_first_error_func, @@ -2826,7 +2825,7 @@ static void print_daily_error_info(unsigned long arg) printk("\n"); } if (es->s_last_error_time) { - printk(KERN_NOTICE "EXT4-fs (%s): last error at %u: %.*s:%d", + printk(KERN_NOTICE "EXT4-fs (%s): last error at time %u: %.*s:%d", sb->s_id, le32_to_cpu(es->s_last_error_time), (int) sizeof(es->s_last_error_func), es->s_last_error_func, @@ -3880,38 +3879,19 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) goto failed_mount2; } } - - /* - * set up enough so that it can read an inode, - * and create new inode for buddy allocator - */ - sbi->s_gdb_count = db_count; - if (!test_opt(sb, NOLOAD) && - EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) - sb->s_op = &ext4_sops; - else - sb->s_op = &ext4_nojournal_sops; - - ext4_ext_init(sb); - err = ext4_mb_init(sb); - if (err) { - ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)", - err); - goto failed_mount2; - } - if (!ext4_check_descriptors(sb, &first_not_zeroed)) { ext4_msg(sb, KERN_ERR, "group descriptors corrupted!"); - goto failed_mount2a; + goto failed_mount2; } if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG)) if (!ext4_fill_flex_info(sb)) { ext4_msg(sb, KERN_ERR, "unable to initialize " "flex_bg meta info!"); - goto failed_mount2a; + goto failed_mount2; } + sbi->s_gdb_count = db_count; get_random_bytes(&sbi->s_next_generation, sizeof(u32)); spin_lock_init(&sbi->s_next_gen_lock); @@ -3946,6 +3926,14 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) sbi->s_stripe = ext4_get_stripe_size(sbi); sbi->s_extent_max_zeroout_kb = 32; + /* + * set up enough so that it can read an inode + */ + if (!test_opt(sb, NOLOAD) && + EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) + sb->s_op = &ext4_sops; + else + sb->s_op = &ext4_nojournal_sops; sb->s_export_op = &ext4_export_ops; sb->s_xattr = ext4_xattr_handlers; #ifdef CONFIG_QUOTA @@ -4135,13 +4123,21 @@ no_journal: if (err) { ext4_msg(sb, KERN_ERR, "failed to reserve %llu clusters for " "reserved pool", ext4_calculate_resv_clusters(sb)); - goto failed_mount5; + goto failed_mount4a; } err = ext4_setup_system_zone(sb); if (err) { ext4_msg(sb, KERN_ERR, "failed to initialize system " "zone (%d)", err); + goto failed_mount4a; + } + + ext4_ext_init(sb); + err = ext4_mb_init(sb); + if (err) { + ext4_msg(sb, KERN_ERR, "failed to initialize 
mballoc (%d)", + err); goto failed_mount5; } @@ -4218,8 +4214,11 @@ failed_mount8: failed_mount7: ext4_unregister_li_request(sb); failed_mount6: - ext4_release_system_zone(sb); + ext4_mb_release(sb); failed_mount5: + ext4_ext_release(sb); + ext4_release_system_zone(sb); +failed_mount4a: dput(sb->s_root); sb->s_root = NULL; failed_mount4: @@ -4243,14 +4242,11 @@ failed_mount3: percpu_counter_destroy(&sbi->s_extent_cache_cnt); if (sbi->s_mmp_tsk) kthread_stop(sbi->s_mmp_tsk); -failed_mount2a: - ext4_mb_release(sb); failed_mount2: for (i = 0; i < db_count; i++) brelse(sbi->s_group_desc[i]); ext4_kvfree(sbi->s_group_desc); failed_mount: - ext4_ext_release(sb); if (sbi->s_chksum_driver) crypto_free_shash(sbi->s_chksum_driver); if (sbi->s_proc) { diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index 0924521306b4..f8cf619edb5f 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -608,8 +608,8 @@ static int __allocate_data_block(struct dnode_of_data *dn) * b. do not use extent cache for better performance * c. give the block addresses to blockdev */ -static int get_data_block(struct inode *inode, sector_t iblock, - struct buffer_head *bh_result, int create) +static int __get_data_block(struct inode *inode, sector_t iblock, + struct buffer_head *bh_result, int create, bool fiemap) { struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb); unsigned int blkbits = inode->i_sb->s_blocksize_bits; @@ -637,7 +637,7 @@ static int get_data_block(struct inode *inode, sector_t iblock, err = 0; goto unlock_out; } - if (dn.data_blkaddr == NEW_ADDR) + if (dn.data_blkaddr == NEW_ADDR && !fiemap) goto put_out; if (dn.data_blkaddr != NULL_ADDR) { @@ -671,7 +671,7 @@ get_next: err = 0; goto unlock_out; } - if (dn.data_blkaddr == NEW_ADDR) + if (dn.data_blkaddr == NEW_ADDR && !fiemap) goto put_out; end_offset = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode)); @@ -708,10 +708,23 @@ out: return err; } +static int get_data_block(struct inode *inode, sector_t iblock, + struct buffer_head *bh_result, int create) +{ + return __get_data_block(inode, iblock, bh_result, create, false); +} + +static int get_data_block_fiemap(struct inode *inode, sector_t iblock, + struct buffer_head *bh_result, int create) +{ + return __get_data_block(inode, iblock, bh_result, create, true); +} + int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, u64 start, u64 len) { - return generic_block_fiemap(inode, fieinfo, start, len, get_data_block); + return generic_block_fiemap(inode, fieinfo, + start, len, get_data_block_fiemap); } static int f2fs_read_data_page(struct file *file, struct page *page) diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c index 966acb039e3b..a4addd72ebbd 100644 --- a/fs/f2fs/dir.c +++ b/fs/f2fs/dir.c @@ -376,11 +376,11 @@ static struct page *init_inode_metadata(struct inode *inode, put_error: f2fs_put_page(page, 1); +error: /* once the failed inode becomes a bad inode, i_mode is S_IFREG */ truncate_inode_pages(&inode->i_data, 0); truncate_blocks(inode, 0); remove_dirty_dir_inode(inode); -error: remove_inode_page(inode); return ERR_PTR(err); } diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index e51c732b0dd9..58df97e174d0 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -342,9 +342,6 @@ struct f2fs_sm_info { struct dirty_seglist_info *dirty_info; /* dirty segment information */ struct curseg_info *curseg_array; /* active segment information */ - struct list_head wblist_head; /* list of under-writeback pages */ - spinlock_t wblist_lock; /* lock for checkpoint */ - block_t seg0_blkaddr; /* block address of 0'th segment */ 
block_t main_blkaddr; /* start block address of main area */ block_t ssa_blkaddr; /* start block address of SSA area */ @@ -644,7 +641,8 @@ static inline void f2fs_unlock_all(struct f2fs_sb_info *sbi) */ static inline int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid) { - WARN_ON((nid >= NM_I(sbi)->max_nid)); + if (unlikely(nid < F2FS_ROOT_INO(sbi))) + return -EINVAL; if (unlikely(nid >= NM_I(sbi)->max_nid)) return -EINVAL; return 0; diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index c58e33075719..7d8b96275092 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -659,16 +659,19 @@ static int expand_inode_data(struct inode *inode, loff_t offset, off_start = offset & (PAGE_CACHE_SIZE - 1); off_end = (offset + len) & (PAGE_CACHE_SIZE - 1); + f2fs_lock_op(sbi); + for (index = pg_start; index <= pg_end; index++) { struct dnode_of_data dn; - f2fs_lock_op(sbi); + if (index == pg_end && !off_end) + goto noalloc; + set_new_dnode(&dn, inode, NULL, NULL, 0); ret = f2fs_reserve_block(&dn, index); - f2fs_unlock_op(sbi); if (ret) break; - +noalloc: if (pg_start == pg_end) new_size = offset + len; else if (index == pg_start && off_start) @@ -683,8 +686,9 @@ static int expand_inode_data(struct inode *inode, loff_t offset, i_size_read(inode) < new_size) { i_size_write(inode, new_size); mark_inode_dirty(inode); - f2fs_write_inode(inode, NULL); + update_inode_page(inode); } + f2fs_unlock_op(sbi); return ret; } diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index adc622c6bdce..2cf6962f6cc8 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -78,6 +78,7 @@ static int do_read_inode(struct inode *inode) if (check_nid_range(sbi, inode->i_ino)) { f2fs_msg(inode->i_sb, KERN_ERR, "bad inode number: %lu", (unsigned long) inode->i_ino); + WARN_ON(1); return -EINVAL; } diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c index 9138c32aa698..a6bdddc33ce2 100644 --- a/fs/f2fs/namei.c +++ b/fs/f2fs/namei.c @@ -417,9 +417,6 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry, } f2fs_set_link(new_dir, new_entry, new_page, old_inode); - down_write(&F2FS_I(old_inode)->i_sem); - F2FS_I(old_inode)->i_pino = new_dir->i_ino; - up_write(&F2FS_I(old_inode)->i_sem); new_inode->i_ctime = CURRENT_TIME; down_write(&F2FS_I(new_inode)->i_sem); @@ -448,6 +445,10 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry, } } + down_write(&F2FS_I(old_inode)->i_sem); + file_lost_pino(old_inode); + up_write(&F2FS_I(old_inode)->i_sem); + old_inode->i_ctime = CURRENT_TIME; mark_inode_dirty(old_inode); @@ -457,9 +458,6 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry, if (old_dir != new_dir) { f2fs_set_link(old_inode, old_dir_entry, old_dir_page, new_dir); - down_write(&F2FS_I(old_inode)->i_sem); - F2FS_I(old_inode)->i_pino = new_dir->i_ino; - up_write(&F2FS_I(old_inode)->i_sem); update_inode_page(old_inode); } else { kunmap(old_dir_page); @@ -474,7 +472,8 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry, return 0; put_out_dir: - f2fs_put_page(new_page, 1); + kunmap(new_page); + f2fs_put_page(new_page, 0); out_dir: if (old_dir_entry) { kunmap(old_dir_page); diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c index 9dfb9a042fd2..4b697ccc9b0c 100644 --- a/fs/f2fs/node.c +++ b/fs/f2fs/node.c @@ -42,6 +42,8 @@ bool available_free_memory(struct f2fs_sb_info *sbi, int type) mem_size = (nm_i->nat_cnt * sizeof(struct nat_entry)) >> 12; res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 2); } else if (type == DIRTY_DENTS) { + if 
(sbi->sb->s_bdi->dirty_exceeded) + return false; mem_size = get_pages(sbi, F2FS_DIRTY_DENTS); res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 1); } diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index f25f0e07e26f..d04613df710a 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -272,14 +272,15 @@ int create_flush_cmd_control(struct f2fs_sb_info *sbi) return -ENOMEM; spin_lock_init(&fcc->issue_lock); init_waitqueue_head(&fcc->flush_wait_queue); + sbi->sm_info->cmd_control_info = fcc; fcc->f2fs_issue_flush = kthread_run(issue_flush_thread, sbi, "f2fs_flush-%u:%u", MAJOR(dev), MINOR(dev)); if (IS_ERR(fcc->f2fs_issue_flush)) { err = PTR_ERR(fcc->f2fs_issue_flush); kfree(fcc); + sbi->sm_info->cmd_control_info = NULL; return err; } - sbi->sm_info->cmd_control_info = fcc; return err; } @@ -1885,8 +1886,6 @@ int build_segment_manager(struct f2fs_sb_info *sbi) /* init sm info */ sbi->sm_info = sm_info; - INIT_LIST_HEAD(&sm_info->wblist_head); - spin_lock_init(&sm_info->wblist_lock); sm_info->seg0_blkaddr = le32_to_cpu(raw_super->segment0_blkaddr); sm_info->main_blkaddr = le32_to_cpu(raw_super->main_blkaddr); sm_info->segment_count = le32_to_cpu(raw_super->segment_count); diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index b2b18637cb9e..8f96d9372ade 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -689,9 +689,7 @@ static struct inode *f2fs_nfs_get_inode(struct super_block *sb, struct f2fs_sb_info *sbi = F2FS_SB(sb); struct inode *inode; - if (unlikely(ino < F2FS_ROOT_INO(sbi))) - return ERR_PTR(-ESTALE); - if (unlikely(ino >= NM_I(sbi)->max_nid)) + if (check_nid_range(sbi, ino)) return ERR_PTR(-ESTALE); /* diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c index 098f97bdcf1b..ca887314aba9 100644 --- a/fs/fuse/dev.c +++ b/fs/fuse/dev.c @@ -643,9 +643,8 @@ struct fuse_copy_state { unsigned long seglen; unsigned long addr; struct page *pg; - void *mapaddr; - void *buf; unsigned len; + unsigned offset; unsigned move_pages:1; }; @@ -666,23 +665,17 @@ static void fuse_copy_finish(struct fuse_copy_state *cs) if (cs->currbuf) { struct pipe_buffer *buf = cs->currbuf; - if (!cs->write) { - kunmap_atomic(cs->mapaddr); - } else { - kunmap_atomic(cs->mapaddr); + if (cs->write) buf->len = PAGE_SIZE - cs->len; - } cs->currbuf = NULL; - cs->mapaddr = NULL; - } else if (cs->mapaddr) { - kunmap_atomic(cs->mapaddr); + } else if (cs->pg) { if (cs->write) { flush_dcache_page(cs->pg); set_page_dirty_lock(cs->pg); } put_page(cs->pg); - cs->mapaddr = NULL; } + cs->pg = NULL; } /* @@ -691,7 +684,7 @@ static void fuse_copy_finish(struct fuse_copy_state *cs) */ static int fuse_copy_fill(struct fuse_copy_state *cs) { - unsigned long offset; + struct page *page; int err; unlock_request(cs->fc, cs->req); @@ -706,14 +699,12 @@ static int fuse_copy_fill(struct fuse_copy_state *cs) BUG_ON(!cs->nr_segs); cs->currbuf = buf; - cs->mapaddr = kmap_atomic(buf->page); + cs->pg = buf->page; + cs->offset = buf->offset; cs->len = buf->len; - cs->buf = cs->mapaddr + buf->offset; cs->pipebufs++; cs->nr_segs--; } else { - struct page *page; - if (cs->nr_segs == cs->pipe->buffers) return -EIO; @@ -726,8 +717,8 @@ static int fuse_copy_fill(struct fuse_copy_state *cs) buf->len = 0; cs->currbuf = buf; - cs->mapaddr = kmap_atomic(page); - cs->buf = cs->mapaddr; + cs->pg = page; + cs->offset = 0; cs->len = PAGE_SIZE; cs->pipebufs++; cs->nr_segs++; @@ -740,14 +731,13 @@ static int fuse_copy_fill(struct fuse_copy_state *cs) cs->iov++; cs->nr_segs--; } - err = get_user_pages_fast(cs->addr, 1, cs->write, &cs->pg); + err = 
get_user_pages_fast(cs->addr, 1, cs->write, &page); if (err < 0) return err; BUG_ON(err != 1); - offset = cs->addr % PAGE_SIZE; - cs->mapaddr = kmap_atomic(cs->pg); - cs->buf = cs->mapaddr + offset; - cs->len = min(PAGE_SIZE - offset, cs->seglen); + cs->pg = page; + cs->offset = cs->addr % PAGE_SIZE; + cs->len = min(PAGE_SIZE - cs->offset, cs->seglen); cs->seglen -= cs->len; cs->addr += cs->len; } @@ -760,15 +750,20 @@ static int fuse_copy_do(struct fuse_copy_state *cs, void **val, unsigned *size) { unsigned ncpy = min(*size, cs->len); if (val) { + void *pgaddr = kmap_atomic(cs->pg); + void *buf = pgaddr + cs->offset; + if (cs->write) - memcpy(cs->buf, *val, ncpy); + memcpy(buf, *val, ncpy); else - memcpy(*val, cs->buf, ncpy); + memcpy(*val, buf, ncpy); + + kunmap_atomic(pgaddr); *val += ncpy; } *size -= ncpy; cs->len -= ncpy; - cs->buf += ncpy; + cs->offset += ncpy; return ncpy; } @@ -874,8 +869,8 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep) out_fallback_unlock: unlock_page(newpage); out_fallback: - cs->mapaddr = kmap_atomic(buf->page); - cs->buf = cs->mapaddr + buf->offset; + cs->pg = buf->page; + cs->offset = buf->offset; err = lock_request(cs->fc, cs->req); if (err) diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c index 42198359fa1b..0c6048247a34 100644 --- a/fs/fuse/dir.c +++ b/fs/fuse/dir.c @@ -198,7 +198,8 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags) inode = ACCESS_ONCE(entry->d_inode); if (inode && is_bad_inode(inode)) goto invalid; - else if (fuse_dentry_time(entry) < get_jiffies_64()) { + else if (time_before64(fuse_dentry_time(entry), get_jiffies_64()) || + (flags & LOOKUP_REVAL)) { int err; struct fuse_entry_out outarg; struct fuse_req *req; @@ -814,13 +815,6 @@ static int fuse_rename_common(struct inode *olddir, struct dentry *oldent, return err; } -static int fuse_rename(struct inode *olddir, struct dentry *oldent, - struct inode *newdir, struct dentry *newent) -{ - return fuse_rename_common(olddir, oldent, newdir, newent, 0, - FUSE_RENAME, sizeof(struct fuse_rename_in)); -} - static int fuse_rename2(struct inode *olddir, struct dentry *oldent, struct inode *newdir, struct dentry *newent, unsigned int flags) @@ -831,17 +825,30 @@ static int fuse_rename2(struct inode *olddir, struct dentry *oldent, if (flags & ~(RENAME_NOREPLACE | RENAME_EXCHANGE)) return -EINVAL; - if (fc->no_rename2 || fc->minor < 23) - return -EINVAL; + if (flags) { + if (fc->no_rename2 || fc->minor < 23) + return -EINVAL; - err = fuse_rename_common(olddir, oldent, newdir, newent, flags, - FUSE_RENAME2, sizeof(struct fuse_rename2_in)); - if (err == -ENOSYS) { - fc->no_rename2 = 1; - err = -EINVAL; + err = fuse_rename_common(olddir, oldent, newdir, newent, flags, + FUSE_RENAME2, + sizeof(struct fuse_rename2_in)); + if (err == -ENOSYS) { + fc->no_rename2 = 1; + err = -EINVAL; + } + } else { + err = fuse_rename_common(olddir, oldent, newdir, newent, 0, + FUSE_RENAME, + sizeof(struct fuse_rename_in)); } + return err; +} +static int fuse_rename(struct inode *olddir, struct dentry *oldent, + struct inode *newdir, struct dentry *newent) +{ + return fuse_rename2(olddir, oldent, newdir, newent, 0); } static int fuse_link(struct dentry *entry, struct inode *newdir, @@ -985,7 +992,7 @@ int fuse_update_attributes(struct inode *inode, struct kstat *stat, int err; bool r; - if (fi->i_time < get_jiffies_64()) { + if (time_before64(fi->i_time, get_jiffies_64())) { r = true; err = fuse_do_getattr(inode, stat, file); } else { @@ -1171,7 +1178,7 @@ static int 
fuse_permission(struct inode *inode, int mask) ((mask & MAY_EXEC) && S_ISREG(inode->i_mode))) { struct fuse_inode *fi = get_fuse_inode(inode); - if (fi->i_time < get_jiffies_64()) { + if (time_before64(fi->i_time, get_jiffies_64())) { refreshed = true; err = fuse_perm_getattr(inode, mask); diff --git a/fs/fuse/file.c b/fs/fuse/file.c index 6e16dad13e9b..40ac2628ddcf 100644 --- a/fs/fuse/file.c +++ b/fs/fuse/file.c @@ -1687,7 +1687,7 @@ static int fuse_writepage_locked(struct page *page) error = -EIO; req->ff = fuse_write_file_get(fc, fi); if (!req->ff) - goto err_free; + goto err_nofile; fuse_write_fill(req, req->ff, page_offset(page), 0); @@ -1715,6 +1715,8 @@ static int fuse_writepage_locked(struct page *page) return 0; +err_nofile: + __free_page(tmp_page); err_free: fuse_request_free(req); err: @@ -1955,8 +1957,8 @@ static int fuse_writepages(struct address_space *mapping, data.ff = NULL; err = -ENOMEM; - data.orig_pages = kzalloc(sizeof(struct page *) * - FUSE_MAX_PAGES_PER_REQ, + data.orig_pages = kcalloc(FUSE_MAX_PAGES_PER_REQ, + sizeof(struct page *), GFP_NOFS); if (!data.orig_pages) goto out; diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c index 754dcf23de8a..03246cd9d47a 100644 --- a/fs/fuse/inode.c +++ b/fs/fuse/inode.c @@ -478,6 +478,17 @@ static const match_table_t tokens = { {OPT_ERR, NULL} }; +static int fuse_match_uint(substring_t *s, unsigned int *res) +{ + int err = -ENOMEM; + char *buf = match_strdup(s); + if (buf) { + err = kstrtouint(buf, 10, res); + kfree(buf); + } + return err; +} + static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) { char *p; @@ -488,6 +499,7 @@ static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) while ((p = strsep(&opt, ",")) != NULL) { int token; int value; + unsigned uv; substring_t args[MAX_OPT_ARGS]; if (!*p) continue; @@ -511,18 +523,18 @@ static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) break; case OPT_USER_ID: - if (match_int(&args[0], &value)) + if (fuse_match_uint(&args[0], &uv)) return 0; - d->user_id = make_kuid(current_user_ns(), value); + d->user_id = make_kuid(current_user_ns(), uv); if (!uid_valid(d->user_id)) return 0; d->user_id_present = 1; break; case OPT_GROUP_ID: - if (match_int(&args[0], &value)) + if (fuse_match_uint(&args[0], &uv)) return 0; - d->group_id = make_kgid(current_user_ns(), value); + d->group_id = make_kgid(current_user_ns(), uv); if (!gid_valid(d->group_id)) return 0; d->group_id_present = 1; @@ -895,9 +907,6 @@ static void process_init_reply(struct fuse_conn *fc, struct fuse_req *req) fc->writeback_cache = 1; if (arg->time_gran && arg->time_gran <= 1000000000) fc->sb->s_time_gran = arg->time_gran; - else - fc->sb->s_time_gran = 1000000000; - } else { ra_pages = fc->max_read / PAGE_CACHE_SIZE; fc->no_lock = 1; @@ -926,7 +935,7 @@ static void fuse_send_init(struct fuse_conn *fc, struct fuse_req *req) FUSE_SPLICE_WRITE | FUSE_SPLICE_MOVE | FUSE_SPLICE_READ | FUSE_FLOCK_LOCKS | FUSE_IOCTL_DIR | FUSE_AUTO_INVAL_DATA | FUSE_DO_READDIRPLUS | FUSE_READDIRPLUS_AUTO | FUSE_ASYNC_DIO | - FUSE_WRITEBACK_CACHE; + FUSE_WRITEBACK_CACHE | FUSE_NO_OPEN_SUPPORT; req->in.h.opcode = FUSE_INIT; req->in.numargs = 1; req->in.args[0].size = sizeof(*arg); @@ -1006,7 +1015,7 @@ static int fuse_fill_super(struct super_block *sb, void *data, int silent) sb->s_flags &= ~(MS_NOSEC | MS_I_VERSION); - if (!parse_fuse_opt((char *) data, &d, is_bdev)) + if (!parse_fuse_opt(data, &d, is_bdev)) goto err; if (is_bdev) { diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c index 
4fc3a3046174..26b3f952e6b1 100644 --- a/fs/gfs2/file.c +++ b/fs/gfs2/file.c @@ -981,7 +981,7 @@ static int do_flock(struct file *file, int cmd, struct file_lock *fl) int error = 0; state = (fl->fl_type == F_WRLCK) ? LM_ST_EXCLUSIVE : LM_ST_SHARED; - flags = (IS_SETLKW(cmd) ? 0 : LM_FLAG_TRY) | GL_EXACT | GL_NOCACHE; + flags = (IS_SETLKW(cmd) ? 0 : LM_FLAG_TRY) | GL_EXACT; mutex_lock(&fp->f_fl_mutex); @@ -991,7 +991,7 @@ static int do_flock(struct file *file, int cmd, struct file_lock *fl) goto out; flock_lock_file_wait(file, &(struct file_lock){.fl_type = F_UNLCK}); - gfs2_glock_dq_wait(fl_gh); + gfs2_glock_dq(fl_gh); gfs2_holder_reinit(state, flags, fl_gh); } else { error = gfs2_glock_get(GFS2_SB(&ip->i_inode), ip->i_no_addr, diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c index c355f7320e44..ee4e04fe60fc 100644 --- a/fs/gfs2/glock.c +++ b/fs/gfs2/glock.c @@ -731,14 +731,14 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number, cachep = gfs2_glock_aspace_cachep; else cachep = gfs2_glock_cachep; - gl = kmem_cache_alloc(cachep, GFP_KERNEL); + gl = kmem_cache_alloc(cachep, GFP_NOFS); if (!gl) return -ENOMEM; memset(&gl->gl_lksb, 0, sizeof(struct dlm_lksb)); if (glops->go_flags & GLOF_LVB) { - gl->gl_lksb.sb_lvbptr = kzalloc(GFS2_MIN_LVB_SIZE, GFP_KERNEL); + gl->gl_lksb.sb_lvbptr = kzalloc(GFS2_MIN_LVB_SIZE, GFP_NOFS); if (!gl->gl_lksb.sb_lvbptr) { kmem_cache_free(cachep, gl); return -ENOMEM; @@ -1404,12 +1404,16 @@ __acquires(&lru_lock) gl = list_entry(list->next, struct gfs2_glock, gl_lru); list_del_init(&gl->gl_lru); if (!spin_trylock(&gl->gl_spin)) { +add_back_to_lru: list_add(&gl->gl_lru, &lru_list); atomic_inc(&lru_count); continue; } + if (test_and_set_bit(GLF_LOCK, &gl->gl_flags)) { + spin_unlock(&gl->gl_spin); + goto add_back_to_lru; + } clear_bit(GLF_LRU, &gl->gl_flags); - spin_unlock(&lru_lock); gl->gl_lockref.count++; if (demote_ok(gl)) handle_callback(gl, LM_ST_UNLOCKED, 0, false); @@ -1417,7 +1421,7 @@ __acquires(&lru_lock) if (queue_delayed_work(glock_workqueue, &gl->gl_work, 0) == 0) gl->gl_lockref.count--; spin_unlock(&gl->gl_spin); - spin_lock(&lru_lock); + cond_resched_lock(&lru_lock); } } @@ -1442,7 +1446,7 @@ static long gfs2_scan_glock_lru(int nr) gl = list_entry(lru_list.next, struct gfs2_glock, gl_lru); /* Test for being demotable */ - if (!test_and_set_bit(GLF_LOCK, &gl->gl_flags)) { + if (!test_bit(GLF_LOCK, &gl->gl_flags)) { list_move(&gl->gl_lru, &dispose); atomic_dec(&lru_count); freed++; diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c index fc1100781bbc..2ffc67dce87f 100644 --- a/fs/gfs2/glops.c +++ b/fs/gfs2/glops.c @@ -234,8 +234,8 @@ static void inode_go_sync(struct gfs2_glock *gl) * inode_go_inval - prepare a inode glock to be released * @gl: the glock * @flags: - * - * Normally we invlidate everything, but if we are moving into + * + * Normally we invalidate everything, but if we are moving into * LM_ST_DEFERRED from LM_ST_SHARED or LM_ST_EXCLUSIVE then we * can keep hold of the metadata, since it won't have changed. 
* diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c index 91f274de1246..4fafea1c9ecf 100644 --- a/fs/gfs2/lock_dlm.c +++ b/fs/gfs2/lock_dlm.c @@ -1036,8 +1036,8 @@ static int set_recover_size(struct gfs2_sbd *sdp, struct dlm_slot *slots, new_size = old_size + RECOVER_SIZE_INC; - submit = kzalloc(new_size * sizeof(uint32_t), GFP_NOFS); - result = kzalloc(new_size * sizeof(uint32_t), GFP_NOFS); + submit = kcalloc(new_size, sizeof(uint32_t), GFP_NOFS); + result = kcalloc(new_size, sizeof(uint32_t), GFP_NOFS); if (!submit || !result) { kfree(submit); kfree(result); diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c index db629d1bd1bd..f4cb9c0d6bbd 100644 --- a/fs/gfs2/rgrp.c +++ b/fs/gfs2/rgrp.c @@ -337,7 +337,7 @@ static bool gfs2_unaligned_extlen(struct gfs2_rbm *rbm, u32 n_unaligned, u32 *le /** * gfs2_free_extlen - Return extent length of free blocks - * @rbm: Starting position + * @rrbm: Starting position * @len: Max length to check * * Starting at the block specified by the rbm, see how many free blocks @@ -2522,7 +2522,7 @@ void gfs2_rlist_alloc(struct gfs2_rgrp_list *rlist, unsigned int state) /** * gfs2_rlist_free - free a resource group list - * @list: the list of resource groups + * @rlist: the list of resource groups * */ diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c index 38cfcf5f6fce..6f0f590cc5a3 100644 --- a/fs/jbd2/transaction.c +++ b/fs/jbd2/transaction.c @@ -1588,9 +1588,12 @@ int jbd2_journal_stop(handle_t *handle) * to perform a synchronous write. We do this to detect the * case where a single process is doing a stream of sync * writes. No point in waiting for joiners in that case. + * + * Setting max_batch_time to 0 disables this completely. */ pid = current->pid; - if (handle->h_sync && journal->j_last_sync_writer != pid) { + if (handle->h_sync && journal->j_last_sync_writer != pid && + journal->j_max_batch_time) { u64 commit_time, trans_time; journal->j_last_sync_writer = pid; diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c index e3d37f607f97..d895b4b7b661 100644 --- a/fs/kernfs/file.c +++ b/fs/kernfs/file.c @@ -39,6 +39,19 @@ struct kernfs_open_node { struct list_head files; /* goes through kernfs_open_file.list */ }; +/* + * kernfs_notify() may be called from any context and bounces notifications + * through a work item. To minimize space overhead in kernfs_node, the + * pending queue is implemented as a singly linked list of kernfs_nodes. + * The list is terminated with the self pointer so that whether a + * kernfs_node is on the list or not can be determined by testing the next + * pointer for NULL. + */ +#define KERNFS_NOTIFY_EOL ((void *)&kernfs_notify_list) + +static DEFINE_SPINLOCK(kernfs_notify_lock); +static struct kernfs_node *kernfs_notify_list = KERNFS_NOTIFY_EOL; + static struct kernfs_open_file *kernfs_of(struct file *file) { return ((struct seq_file *)file->private_data)->private; @@ -783,24 +796,25 @@ static unsigned int kernfs_fop_poll(struct file *filp, poll_table *wait) return DEFAULT_POLLMASK|POLLERR|POLLPRI; } -/** - * kernfs_notify - notify a kernfs file - * @kn: file to notify - * - * Notify @kn such that poll(2) on @kn wakes up. 
- */ -void kernfs_notify(struct kernfs_node *kn) +static void kernfs_notify_workfn(struct work_struct *work) { - struct kernfs_root *root = kernfs_root(kn); + struct kernfs_node *kn; struct kernfs_open_node *on; struct kernfs_super_info *info; - unsigned long flags; - - if (WARN_ON(kernfs_type(kn) != KERNFS_FILE)) +repeat: + /* pop one off the notify_list */ + spin_lock_irq(&kernfs_notify_lock); + kn = kernfs_notify_list; + if (kn == KERNFS_NOTIFY_EOL) { + spin_unlock_irq(&kernfs_notify_lock); return; + } + kernfs_notify_list = kn->attr.notify_next; + kn->attr.notify_next = NULL; + spin_unlock_irq(&kernfs_notify_lock); /* kick poll */ - spin_lock_irqsave(&kernfs_open_node_lock, flags); + spin_lock_irq(&kernfs_open_node_lock); on = kn->attr.open; if (on) { @@ -808,12 +822,12 @@ void kernfs_notify(struct kernfs_node *kn) wake_up_interruptible(&on->poll); } - spin_unlock_irqrestore(&kernfs_open_node_lock, flags); + spin_unlock_irq(&kernfs_open_node_lock); /* kick fsnotify */ mutex_lock(&kernfs_mutex); - list_for_each_entry(info, &root->supers, node) { + list_for_each_entry(info, &kernfs_root(kn)->supers, node) { struct inode *inode; struct dentry *dentry; @@ -833,6 +847,33 @@ void kernfs_notify(struct kernfs_node *kn) } mutex_unlock(&kernfs_mutex); + kernfs_put(kn); + goto repeat; +} + +/** + * kernfs_notify - notify a kernfs file + * @kn: file to notify + * + * Notify @kn such that poll(2) on @kn wakes up. Maybe be called from any + * context. + */ +void kernfs_notify(struct kernfs_node *kn) +{ + static DECLARE_WORK(kernfs_notify_work, kernfs_notify_workfn); + unsigned long flags; + + if (WARN_ON(kernfs_type(kn) != KERNFS_FILE)) + return; + + spin_lock_irqsave(&kernfs_notify_lock, flags); + if (!kn->attr.notify_next) { + kernfs_get(kn); + kn->attr.notify_next = kernfs_notify_list; + kernfs_notify_list = kn; + schedule_work(&kernfs_notify_work); + } + spin_unlock_irqrestore(&kernfs_notify_lock, flags); } EXPORT_SYMBOL_GPL(kernfs_notify); diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c index d171b98a6cdd..f973ae9b05f1 100644 --- a/fs/kernfs/mount.c +++ b/fs/kernfs/mount.c @@ -211,6 +211,36 @@ void kernfs_kill_sb(struct super_block *sb) kernfs_put(root_kn); } +/** + * kernfs_pin_sb: try to pin the superblock associated with a kernfs_root + * @kernfs_root: the kernfs_root in question + * @ns: the namespace tag + * + * Pin the superblock so the superblock won't be destroyed in subsequent + * operations. This can be used to block ->kill_sb() which may be useful + * for kernfs users which dynamically manage superblocks. + * + * Returns NULL if there's no superblock associated to this kernfs_root, or + * -EINVAL if the superblock is being freed. + */ +struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns) +{ + struct kernfs_super_info *info; + struct super_block *sb = NULL; + + mutex_lock(&kernfs_mutex); + list_for_each_entry(info, &root->supers, node) { + if (info->ns == ns) { + sb = info->sb; + if (!atomic_inc_not_zero(&info->sb->s_active)) + sb = ERR_PTR(-EINVAL); + break; + } + } + mutex_unlock(&kernfs_mutex); + return sb; +} + void __init kernfs_init(void) { kernfs_node_cache = kmem_cache_create("kernfs_node_cache", diff --git a/fs/mbcache.c b/fs/mbcache.c index bf166e388f0d..187477ded6b3 100644 --- a/fs/mbcache.c +++ b/fs/mbcache.c @@ -73,6 +73,7 @@ #include <linux/mbcache.h> #include <linux/init.h> #include <linux/blockgroup_lock.h> +#include <linux/log2.h> #ifdef MB_CACHE_DEBUG # define mb_debug(f...) 
do { \ @@ -93,7 +94,7 @@ #define MB_CACHE_WRITER ((unsigned short)~0U >> 1) -#define MB_CACHE_ENTRY_LOCK_BITS __builtin_log2(NR_BG_LOCKS) +#define MB_CACHE_ENTRY_LOCK_BITS ilog2(NR_BG_LOCKS) #define MB_CACHE_ENTRY_LOCK_INDEX(ce) \ (hash_long((unsigned long)ce, MB_CACHE_ENTRY_LOCK_BITS)) diff --git a/fs/namei.c b/fs/namei.c index 985c6f368485..9eb787e5c167 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -2256,9 +2256,10 @@ done: goto out; } path->dentry = dentry; - path->mnt = mntget(nd->path.mnt); + path->mnt = nd->path.mnt; if (should_follow_link(dentry, nd->flags & LOOKUP_FOLLOW)) return 1; + mntget(path->mnt); follow_mount(path); error = 0; out: diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c index 8f98138cbc43..f11b9eed0de1 100644 --- a/fs/nfs/direct.c +++ b/fs/nfs/direct.c @@ -756,7 +756,6 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr) spin_unlock(&dreq->lock); while (!list_empty(&hdr->pages)) { - bool do_destroy = true; req = nfs_list_entry(hdr->pages.next); nfs_list_remove_request(req); @@ -765,7 +764,6 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr) case NFS_IOHDR_NEED_COMMIT: kref_get(&req->wb_kref); nfs_mark_request_commit(req, hdr->lseg, &cinfo); - do_destroy = false; } nfs_unlock_and_release_request(req); } diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c index c496f8a74639..9927913c97c2 100644 --- a/fs/nfs/inode.c +++ b/fs/nfs/inode.c @@ -147,6 +147,17 @@ int nfs_sync_mapping(struct address_space *mapping) return ret; } +static void nfs_set_cache_invalid(struct inode *inode, unsigned long flags) +{ + struct nfs_inode *nfsi = NFS_I(inode); + + if (inode->i_mapping->nrpages == 0) + flags &= ~NFS_INO_INVALID_DATA; + nfsi->cache_validity |= flags; + if (flags & NFS_INO_INVALID_DATA) + nfs_fscache_invalidate(inode); +} + /* * Invalidate the local caches */ @@ -162,17 +173,16 @@ static void nfs_zap_caches_locked(struct inode *inode) memset(NFS_I(inode)->cookieverf, 0, sizeof(NFS_I(inode)->cookieverf)); if (S_ISREG(mode) || S_ISDIR(mode) || S_ISLNK(mode)) { - nfs_fscache_invalidate(inode); - nfsi->cache_validity |= NFS_INO_INVALID_ATTR + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATTR | NFS_INO_INVALID_DATA | NFS_INO_INVALID_ACCESS | NFS_INO_INVALID_ACL - | NFS_INO_REVAL_PAGECACHE; + | NFS_INO_REVAL_PAGECACHE); } else - nfsi->cache_validity |= NFS_INO_INVALID_ATTR + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATTR | NFS_INO_INVALID_ACCESS | NFS_INO_INVALID_ACL - | NFS_INO_REVAL_PAGECACHE; + | NFS_INO_REVAL_PAGECACHE); nfs_zap_label_cache_locked(nfsi); } @@ -187,8 +197,7 @@ void nfs_zap_mapping(struct inode *inode, struct address_space *mapping) { if (mapping->nrpages != 0) { spin_lock(&inode->i_lock); - NFS_I(inode)->cache_validity |= NFS_INO_INVALID_DATA; - nfs_fscache_invalidate(inode); + nfs_set_cache_invalid(inode, NFS_INO_INVALID_DATA); spin_unlock(&inode->i_lock); } } @@ -209,7 +218,7 @@ EXPORT_SYMBOL_GPL(nfs_zap_acl_cache); void nfs_invalidate_atime(struct inode *inode) { spin_lock(&inode->i_lock); - NFS_I(inode)->cache_validity |= NFS_INO_INVALID_ATIME; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATIME); spin_unlock(&inode->i_lock); } EXPORT_SYMBOL_GPL(nfs_invalidate_atime); @@ -369,7 +378,7 @@ nfs_fhget(struct super_block *sb, struct nfs_fh *fh, struct nfs_fattr *fattr, st inode->i_mode = fattr->mode; if ((fattr->valid & NFS_ATTR_FATTR_MODE) == 0 && nfs_server_capable(inode, NFS_CAP_MODE)) - nfsi->cache_validity |= NFS_INO_INVALID_ATTR; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATTR); /* Why so? 
Because we want revalidate for devices/FIFOs, and * that's precisely what we have in nfs_file_inode_operations. */ @@ -415,36 +424,36 @@ nfs_fhget(struct super_block *sb, struct nfs_fh *fh, struct nfs_fattr *fattr, st if (fattr->valid & NFS_ATTR_FATTR_ATIME) inode->i_atime = fattr->atime; else if (nfs_server_capable(inode, NFS_CAP_ATIME)) - nfsi->cache_validity |= NFS_INO_INVALID_ATTR; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATTR); if (fattr->valid & NFS_ATTR_FATTR_MTIME) inode->i_mtime = fattr->mtime; else if (nfs_server_capable(inode, NFS_CAP_MTIME)) - nfsi->cache_validity |= NFS_INO_INVALID_ATTR; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATTR); if (fattr->valid & NFS_ATTR_FATTR_CTIME) inode->i_ctime = fattr->ctime; else if (nfs_server_capable(inode, NFS_CAP_CTIME)) - nfsi->cache_validity |= NFS_INO_INVALID_ATTR; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATTR); if (fattr->valid & NFS_ATTR_FATTR_CHANGE) inode->i_version = fattr->change_attr; else if (nfs_server_capable(inode, NFS_CAP_CHANGE_ATTR)) - nfsi->cache_validity |= NFS_INO_INVALID_ATTR; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATTR); if (fattr->valid & NFS_ATTR_FATTR_SIZE) inode->i_size = nfs_size_to_loff_t(fattr->size); else - nfsi->cache_validity |= NFS_INO_INVALID_ATTR - | NFS_INO_REVAL_PAGECACHE; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATTR + | NFS_INO_REVAL_PAGECACHE); if (fattr->valid & NFS_ATTR_FATTR_NLINK) set_nlink(inode, fattr->nlink); else if (nfs_server_capable(inode, NFS_CAP_NLINK)) - nfsi->cache_validity |= NFS_INO_INVALID_ATTR; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATTR); if (fattr->valid & NFS_ATTR_FATTR_OWNER) inode->i_uid = fattr->uid; else if (nfs_server_capable(inode, NFS_CAP_OWNER)) - nfsi->cache_validity |= NFS_INO_INVALID_ATTR; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATTR); if (fattr->valid & NFS_ATTR_FATTR_GROUP) inode->i_gid = fattr->gid; else if (nfs_server_capable(inode, NFS_CAP_OWNER_GROUP)) - nfsi->cache_validity |= NFS_INO_INVALID_ATTR; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ATTR); if (fattr->valid & NFS_ATTR_FATTR_BLOCKS_USED) inode->i_blocks = fattr->du.nfs2.blocks; if (fattr->valid & NFS_ATTR_FATTR_SPACE_USED) { @@ -550,6 +559,9 @@ static int nfs_vmtruncate(struct inode * inode, loff_t offset) spin_lock(&inode->i_lock); i_size_write(inode, offset); + /* Optimisation */ + if (offset == 0) + NFS_I(inode)->cache_validity &= ~NFS_INO_INVALID_DATA; spin_unlock(&inode->i_lock); truncate_pagecache(inode, offset); @@ -578,7 +590,8 @@ void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr) inode->i_uid = attr->ia_uid; if ((attr->ia_valid & ATTR_GID) != 0) inode->i_gid = attr->ia_gid; - NFS_I(inode)->cache_validity |= NFS_INO_INVALID_ACCESS|NFS_INO_INVALID_ACL; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_ACCESS + | NFS_INO_INVALID_ACL); spin_unlock(&inode->i_lock); } if ((attr->ia_valid & ATTR_SIZE) != 0) { @@ -1101,7 +1114,7 @@ static unsigned long nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr && inode->i_version == fattr->pre_change_attr) { inode->i_version = fattr->change_attr; if (S_ISDIR(inode->i_mode)) - nfsi->cache_validity |= NFS_INO_INVALID_DATA; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_DATA); ret |= NFS_INO_INVALID_ATTR; } /* If we have atomic WCC data, we may update some attributes */ @@ -1117,7 +1130,7 @@ static unsigned long nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr && timespec_equal(&inode->i_mtime, &fattr->pre_mtime)) { memcpy(&inode->i_mtime, &fattr->mtime, sizeof(inode->i_mtime)); if 
(S_ISDIR(inode->i_mode)) - nfsi->cache_validity |= NFS_INO_INVALID_DATA; + nfs_set_cache_invalid(inode, NFS_INO_INVALID_DATA); ret |= NFS_INO_INVALID_ATTR; } if ((fattr->valid & NFS_ATTR_FATTR_PRESIZE) @@ -1128,9 +1141,6 @@ static unsigned long nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr ret |= NFS_INO_INVALID_ATTR; } - if (nfsi->cache_validity & NFS_INO_INVALID_DATA) - nfs_fscache_invalidate(inode); - return ret; } @@ -1189,7 +1199,7 @@ static int nfs_check_inode_attributes(struct inode *inode, struct nfs_fattr *fat invalid |= NFS_INO_INVALID_ATIME; if (invalid != 0) - nfsi->cache_validity |= invalid; + nfs_set_cache_invalid(inode, invalid); nfsi->read_cache_jiffies = fattr->time_start; return 0; @@ -1402,13 +1412,11 @@ EXPORT_SYMBOL_GPL(nfs_refresh_inode); static int nfs_post_op_update_inode_locked(struct inode *inode, struct nfs_fattr *fattr) { - struct nfs_inode *nfsi = NFS_I(inode); + unsigned long invalid = NFS_INO_INVALID_ATTR|NFS_INO_REVAL_PAGECACHE; - nfsi->cache_validity |= NFS_INO_INVALID_ATTR|NFS_INO_REVAL_PAGECACHE; - if (S_ISDIR(inode->i_mode)) { - nfsi->cache_validity |= NFS_INO_INVALID_DATA; - nfs_fscache_invalidate(inode); - } + if (S_ISDIR(inode->i_mode)) + invalid |= NFS_INO_INVALID_DATA; + nfs_set_cache_invalid(inode, invalid); if ((fattr->valid & NFS_ATTR_FATTR) == 0) return 0; return nfs_refresh_inode_locked(inode, fattr); @@ -1601,6 +1609,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr) if ((nfsi->npages == 0) || new_isize > cur_isize) { i_size_write(inode, new_isize); invalid |= NFS_INO_INVALID_ATTR|NFS_INO_INVALID_DATA; + invalid &= ~NFS_INO_REVAL_PAGECACHE; } dprintk("NFS: isize change on server for file %s/%ld " "(%Ld to %Ld)\n", @@ -1702,10 +1711,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr) invalid &= ~NFS_INO_INVALID_DATA; if (!NFS_PROTO(inode)->have_delegation(inode, FMODE_READ) || (save_cache_validity & NFS_INO_REVAL_FORCED)) - nfsi->cache_validity |= invalid; - - if (invalid & NFS_INO_INVALID_DATA) - nfs_fscache_invalidate(inode); + nfs_set_cache_invalid(inode, invalid); return 0; out_err: diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h index 82ddbf46660e..f415cbf9f6c3 100644 --- a/fs/nfs/internal.h +++ b/fs/nfs/internal.h @@ -244,6 +244,7 @@ void nfs_pgio_data_release(struct nfs_pgio_data *); int nfs_generic_pgio(struct nfs_pageio_descriptor *, struct nfs_pgio_header *); int nfs_initiate_pgio(struct rpc_clnt *, struct nfs_pgio_data *, const struct rpc_call_ops *, int, int); +void nfs_free_request(struct nfs_page *req); static inline void nfs_iocounter_init(struct nfs_io_counter *c) { diff --git a/fs/nfs/nfs3acl.c b/fs/nfs/nfs3acl.c index 871d6eda8dba..8f854dde4150 100644 --- a/fs/nfs/nfs3acl.c +++ b/fs/nfs/nfs3acl.c @@ -247,3 +247,46 @@ const struct xattr_handler *nfs3_xattr_handlers[] = { &posix_acl_default_xattr_handler, NULL, }; + +static int +nfs3_list_one_acl(struct inode *inode, int type, const char *name, void *data, + size_t size, ssize_t *result) +{ + struct posix_acl *acl; + char *p = data + *result; + + acl = get_acl(inode, type); + if (!acl) + return 0; + + posix_acl_release(acl); + + *result += strlen(name); + *result += 1; + if (!size) + return 0; + if (*result > size) + return -ERANGE; + + strcpy(p, name); + return 0; +} + +ssize_t +nfs3_listxattr(struct dentry *dentry, char *data, size_t size) +{ + struct inode *inode = dentry->d_inode; + ssize_t result = 0; + int error; + + error = nfs3_list_one_acl(inode, ACL_TYPE_ACCESS, + POSIX_ACL_XATTR_ACCESS, data, size, 
&result); + if (error) + return error; + + error = nfs3_list_one_acl(inode, ACL_TYPE_DEFAULT, + POSIX_ACL_XATTR_DEFAULT, data, size, &result); + if (error) + return error; + return result; +} diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c index e7daa42bbc86..f0afa291fd58 100644 --- a/fs/nfs/nfs3proc.c +++ b/fs/nfs/nfs3proc.c @@ -885,7 +885,7 @@ static const struct inode_operations nfs3_dir_inode_operations = { .getattr = nfs_getattr, .setattr = nfs_setattr, #ifdef CONFIG_NFS_V3_ACL - .listxattr = generic_listxattr, + .listxattr = nfs3_listxattr, .getxattr = generic_getxattr, .setxattr = generic_setxattr, .removexattr = generic_removexattr, @@ -899,7 +899,7 @@ static const struct inode_operations nfs3_file_inode_operations = { .getattr = nfs_getattr, .setattr = nfs_setattr, #ifdef CONFIG_NFS_V3_ACL - .listxattr = generic_listxattr, + .listxattr = nfs3_listxattr, .getxattr = generic_getxattr, .setxattr = generic_setxattr, .removexattr = generic_removexattr, diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h index f63cb87cd730..ba2affa51941 100644 --- a/fs/nfs/nfs4_fs.h +++ b/fs/nfs/nfs4_fs.h @@ -230,7 +230,7 @@ int nfs_atomic_open(struct inode *, struct dentry *, struct file *, extern struct file_system_type nfs4_fs_type; /* nfs4namespace.c */ -struct rpc_clnt *nfs4_create_sec_client(struct rpc_clnt *, struct inode *, struct qstr *); +struct rpc_clnt *nfs4_negotiate_security(struct rpc_clnt *, struct inode *, struct qstr *); struct vfsmount *nfs4_submount(struct nfs_server *, struct dentry *, struct nfs_fh *, struct nfs_fattr *); int nfs4_replace_transport(struct nfs_server *server, diff --git a/fs/nfs/nfs4namespace.c b/fs/nfs/nfs4namespace.c index 3d5dbf80d46a..3d83cb1fdc70 100644 --- a/fs/nfs/nfs4namespace.c +++ b/fs/nfs/nfs4namespace.c @@ -139,16 +139,22 @@ static size_t nfs_parse_server_name(char *string, size_t len, * @server: NFS server struct * @flavors: List of security tuples returned by SECINFO procedure * - * Return the pseudoflavor of the first security mechanism in - * "flavors" that is locally supported. Return RPC_AUTH_UNIX if - * no matching flavor is found in the array. The "flavors" array + * Return an rpc client that uses the first security mechanism in + * "flavors" that is locally supported. The "flavors" array * is searched in the order returned from the server, per RFC 3530 - * recommendation. + * recommendation and each flavor is checked for membership in the + * sec= mount option list if it exists. + * + * Return -EPERM if no matching flavor is found in the array. + * + * Please call rpc_shutdown_client() when you are done with this rpc client. + * */ -static rpc_authflavor_t nfs_find_best_sec(struct nfs_server *server, +static struct rpc_clnt *nfs_find_best_sec(struct rpc_clnt *clnt, + struct nfs_server *server, struct nfs4_secinfo_flavors *flavors) { - rpc_authflavor_t pseudoflavor; + rpc_authflavor_t pflavor; struct nfs4_secinfo4 *secinfo; unsigned int i; @@ -159,62 +165,73 @@ static rpc_authflavor_t nfs_find_best_sec(struct nfs_server *server, case RPC_AUTH_NULL: case RPC_AUTH_UNIX: case RPC_AUTH_GSS: - pseudoflavor = rpcauth_get_pseudoflavor(secinfo->flavor, + pflavor = rpcauth_get_pseudoflavor(secinfo->flavor, &secinfo->flavor_info); - /* make sure pseudoflavor matches sec= mount opt */ - if (pseudoflavor != RPC_AUTH_MAXFLAVOR && - nfs_auth_info_match(&server->auth_info, - pseudoflavor)) - return pseudoflavor; - break; + /* does the pseudoflavor match a sec= mount opt? 
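The new nfs3_listxattr() above implements the standard listxattr contract with only the ACL names that are actually present: called with size 0 it reports the bytes required, and it returns -ERANGE when the supplied buffer is too small. A small userspace sketch of how a caller usually consumes that contract (the path is only an example):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/xattr.h>

int main(void)
{
        const char *path = "/mnt/nfs/somefile";   /* illustrative path */
        char *buf;
        ssize_t len, off;

        len = listxattr(path, NULL, 0);           /* probe required size */
        if (len <= 0)
                return 1;

        buf = malloc(len);
        if (!buf)
                return 1;

        len = listxattr(path, buf, len);          /* fetch NUL-separated names */
        if (len < 0) {
                free(buf);
                return 1;
        }

        for (off = 0; off < len; off += strlen(buf + off) + 1)
                printf("%s\n", buf + off);

        free(buf);
        return 0;
}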
*/ + if (pflavor != RPC_AUTH_MAXFLAVOR && + nfs_auth_info_match(&server->auth_info, pflavor)) { + struct rpc_clnt *new; + struct rpc_cred *cred; + + /* Cloning creates an rpc_auth for the flavor */ + new = rpc_clone_client_set_auth(clnt, pflavor); + if (IS_ERR(new)) + continue; + /** + * Check that the user actually can use the + * flavor. This is mostly for RPC_AUTH_GSS + * where cr_init obtains a gss context + */ + cred = rpcauth_lookupcred(new->cl_auth, 0); + if (IS_ERR(cred)) { + rpc_shutdown_client(new); + continue; + } + put_rpccred(cred); + return new; + } } } - - /* if there were any sec= options then nothing matched */ - if (server->auth_info.flavor_len > 0) - return -EPERM; - - return RPC_AUTH_UNIX; + return ERR_PTR(-EPERM); } -static rpc_authflavor_t nfs4_negotiate_security(struct inode *inode, struct qstr *name) +/** + * nfs4_negotiate_security - in response to an NFS4ERR_WRONGSEC on lookup, + * return an rpc_clnt that uses the best available security flavor with + * respect to the secinfo flavor list and the sec= mount options. + * + * @clnt: RPC client to clone + * @inode: directory inode + * @name: lookup name + * + * Please call rpc_shutdown_client() when you are done with this rpc client. + */ +struct rpc_clnt * +nfs4_negotiate_security(struct rpc_clnt *clnt, struct inode *inode, + struct qstr *name) { struct page *page; struct nfs4_secinfo_flavors *flavors; - rpc_authflavor_t flavor; + struct rpc_clnt *new; int err; page = alloc_page(GFP_KERNEL); if (!page) - return -ENOMEM; + return ERR_PTR(-ENOMEM); + flavors = page_address(page); err = nfs4_proc_secinfo(inode, name, flavors); if (err < 0) { - flavor = err; + new = ERR_PTR(err); goto out; } - flavor = nfs_find_best_sec(NFS_SERVER(inode), flavors); + new = nfs_find_best_sec(clnt, NFS_SERVER(inode), flavors); out: put_page(page); - return flavor; -} - -/* - * Please call rpc_shutdown_client() when you are done with this client. 
- */ -struct rpc_clnt *nfs4_create_sec_client(struct rpc_clnt *clnt, struct inode *inode, - struct qstr *name) -{ - rpc_authflavor_t flavor; - - flavor = nfs4_negotiate_security(inode, name); - if ((int)flavor < 0) - return ERR_PTR((int)flavor); - - return rpc_clone_client_set_auth(clnt, flavor); + return new; } static struct vfsmount *try_location(struct nfs_clone_mount *mountdata, @@ -397,11 +414,6 @@ struct vfsmount *nfs4_submount(struct nfs_server *server, struct dentry *dentry, if (client->cl_auth->au_flavor != flavor) flavor = client->cl_auth->au_flavor; - else { - rpc_authflavor_t new = nfs4_negotiate_security(dir, name); - if ((int)new >= 0) - flavor = new; - } mnt = nfs_do_submount(dentry, fh, fattr, flavor); out: rpc_shutdown_client(client); diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 285ad5334018..4bf3d97cc5a0 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -3247,7 +3247,7 @@ static int nfs4_proc_lookup_common(struct rpc_clnt **clnt, struct inode *dir, err = -EPERM; if (client != *clnt) goto out; - client = nfs4_create_sec_client(client, dir, name); + client = nfs4_negotiate_security(client, dir, name); if (IS_ERR(client)) return PTR_ERR(client); diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c index b6ee3a6ee96d..17fab89f6358 100644 --- a/fs/nfs/pagelist.c +++ b/fs/nfs/pagelist.c @@ -29,8 +29,6 @@ static struct kmem_cache *nfs_page_cachep; static const struct rpc_call_ops nfs_pgio_common_ops; -static void nfs_free_request(struct nfs_page *); - static bool nfs_pgarray_set(struct nfs_page_array *p, unsigned int pagecount) { p->npages = pagecount; @@ -239,20 +237,28 @@ nfs_page_group_init(struct nfs_page *req, struct nfs_page *prev) WARN_ON_ONCE(prev == req); if (!prev) { + /* a head request */ req->wb_head = req; req->wb_this_page = req; } else { + /* a subrequest */ WARN_ON_ONCE(prev->wb_this_page != prev->wb_head); WARN_ON_ONCE(!test_bit(PG_HEADLOCK, &prev->wb_head->wb_flags)); req->wb_head = prev->wb_head; req->wb_this_page = prev->wb_this_page; prev->wb_this_page = req; + /* All subrequests take a ref on the head request until + * nfs_page_group_destroy is called */ + kref_get(&req->wb_head->wb_kref); + /* grab extra ref if head request has extra ref from * the write/commit path to handle handoff between write * and commit lists */ - if (test_bit(PG_INODE_REF, &prev->wb_head->wb_flags)) + if (test_bit(PG_INODE_REF, &prev->wb_head->wb_flags)) { + set_bit(PG_INODE_REF, &req->wb_flags); kref_get(&req->wb_kref); + } } } @@ -269,6 +275,10 @@ nfs_page_group_destroy(struct kref *kref) struct nfs_page *req = container_of(kref, struct nfs_page, wb_kref); struct nfs_page *tmp, *next; + /* subrequests must release the ref on the head request */ + if (req->wb_head != req) + nfs_release_request(req->wb_head); + if (!nfs_page_group_sync_on_bit(req, PG_TEARDOWN)) return; @@ -394,7 +404,7 @@ static void nfs_clear_request(struct nfs_page *req) * * Note: Should never be called with the spinlock held! 
*/ -static void nfs_free_request(struct nfs_page *req) +void nfs_free_request(struct nfs_page *req) { WARN_ON_ONCE(req->wb_this_page != req); @@ -925,7 +935,6 @@ static int __nfs_pageio_add_request(struct nfs_pageio_descriptor *desc, nfs_pageio_doio(desc); if (desc->pg_error < 0) return 0; - desc->pg_moreio = 0; if (desc->pg_recoalesce) return 0; /* retry add_request for this subreq */ @@ -972,6 +981,7 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc) desc->pg_count = 0; desc->pg_base = 0; desc->pg_recoalesce = 0; + desc->pg_moreio = 0; while (!list_empty(&head)) { struct nfs_page *req; diff --git a/fs/nfs/write.c b/fs/nfs/write.c index 3ee5af4e738e..5e2f10304548 100644 --- a/fs/nfs/write.c +++ b/fs/nfs/write.c @@ -46,6 +46,7 @@ static const struct rpc_call_ops nfs_commit_ops; static const struct nfs_pgio_completion_ops nfs_async_write_completion_ops; static const struct nfs_commit_completion_ops nfs_commit_completion_ops; static const struct nfs_rw_ops nfs_rw_write_ops; +static void nfs_clear_request_commit(struct nfs_page *req); static struct kmem_cache *nfs_wdata_cachep; static mempool_t *nfs_wdata_mempool; @@ -91,8 +92,15 @@ static void nfs_context_set_write_error(struct nfs_open_context *ctx, int error) set_bit(NFS_CONTEXT_ERROR_WRITE, &ctx->flags); } +/* + * nfs_page_find_head_request_locked - find head request associated with @page + * + * must be called while holding the inode lock. + * + * returns matching head request with reference held, or NULL if not found. + */ static struct nfs_page * -nfs_page_find_request_locked(struct nfs_inode *nfsi, struct page *page) +nfs_page_find_head_request_locked(struct nfs_inode *nfsi, struct page *page) { struct nfs_page *req = NULL; @@ -104,25 +112,33 @@ nfs_page_find_request_locked(struct nfs_inode *nfsi, struct page *page) /* Linearly search the commit list for the correct req */ list_for_each_entry_safe(freq, t, &nfsi->commit_info.list, wb_list) { if (freq->wb_page == page) { - req = freq; + req = freq->wb_head; break; } } } - if (req) + if (req) { + WARN_ON_ONCE(req->wb_head != req); + kref_get(&req->wb_kref); + } return req; } -static struct nfs_page *nfs_page_find_request(struct page *page) +/* + * nfs_page_find_head_request - find head request associated with @page + * + * returns matching head request with reference held, or NULL if not found. 
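The nfs_page group changes above follow the usual kref discipline: each subrequest now takes kref_get() on its head request when it joins the group, and the group-destroy path drops that reference, so the head outlives every subrequest. A generic sketch of that reference pattern, assuming kernel context; the demo_* names are illustrative:

#include <linux/kref.h>
#include <linux/slab.h>

struct demo_obj {
        struct kref ref;
        /* ... payload ... */
};

static void demo_release(struct kref *kref)
{
        struct demo_obj *obj = container_of(kref, struct demo_obj, ref);

        kfree(obj);
}

/* The creator starts with a count of one. */
static struct demo_obj *demo_alloc(void)
{
        struct demo_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

        if (obj)
                kref_init(&obj->ref);
        return obj;
}

/* Every extra holder (e.g. a subrequest pinning its head) takes a ref... */
static void demo_get(struct demo_obj *obj)
{
        kref_get(&obj->ref);
}

/* ...and the final put is the only place the object is freed. */
static void demo_put(struct demo_obj *obj)
{
        kref_put(&obj->ref, demo_release);
}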
+ */ +static struct nfs_page *nfs_page_find_head_request(struct page *page) { struct inode *inode = page_file_mapping(page)->host; struct nfs_page *req = NULL; spin_lock(&inode->i_lock); - req = nfs_page_find_request_locked(NFS_I(inode), page); + req = nfs_page_find_head_request_locked(NFS_I(inode), page); spin_unlock(&inode->i_lock); return req; } @@ -274,36 +290,246 @@ static void nfs_end_page_writeback(struct nfs_page *req) clear_bdi_congested(&nfss->backing_dev_info, BLK_RW_ASYNC); } -static struct nfs_page *nfs_find_and_lock_request(struct page *page, bool nonblock) + +/* nfs_page_group_clear_bits + * @req - an nfs request + * clears all page group related bits from @req + */ +static void +nfs_page_group_clear_bits(struct nfs_page *req) +{ + clear_bit(PG_TEARDOWN, &req->wb_flags); + clear_bit(PG_UNLOCKPAGE, &req->wb_flags); + clear_bit(PG_UPTODATE, &req->wb_flags); + clear_bit(PG_WB_END, &req->wb_flags); + clear_bit(PG_REMOVE, &req->wb_flags); +} + + +/* + * nfs_unroll_locks_and_wait - unlock all newly locked reqs and wait on @req + * + * this is a helper function for nfs_lock_and_join_requests + * + * @inode - inode associated with request page group, must be holding inode lock + * @head - head request of page group, must be holding head lock + * @req - request that couldn't lock and needs to wait on the req bit lock + * @nonblock - if true, don't actually wait + * + * NOTE: this must be called holding page_group bit lock and inode spin lock + * and BOTH will be released before returning. + * + * returns 0 on success, < 0 on error. + */ +static int +nfs_unroll_locks_and_wait(struct inode *inode, struct nfs_page *head, + struct nfs_page *req, bool nonblock) + __releases(&inode->i_lock) +{ + struct nfs_page *tmp; + int ret; + + /* relinquish all the locks successfully grabbed this run */ + for (tmp = head ; tmp != req; tmp = tmp->wb_this_page) + nfs_unlock_request(tmp); + + WARN_ON_ONCE(test_bit(PG_TEARDOWN, &req->wb_flags)); + + /* grab a ref on the request that will be waited on */ + kref_get(&req->wb_kref); + + nfs_page_group_unlock(head); + spin_unlock(&inode->i_lock); + + /* release ref from nfs_page_find_head_request_locked */ + nfs_release_request(head); + + if (!nonblock) + ret = nfs_wait_on_request(req); + else + ret = -EAGAIN; + nfs_release_request(req); + + return ret; +} + +/* + * nfs_destroy_unlinked_subrequests - destroy recently unlinked subrequests + * + * @destroy_list - request list (using wb_this_page) terminated by @old_head + * @old_head - the old head of the list + * + * All subrequests must be locked and removed from all lists, so at this point + * they are only "active" in this function, and possibly in nfs_wait_on_request + * with a reference held by some other context. + */ +static void +nfs_destroy_unlinked_subrequests(struct nfs_page *destroy_list, + struct nfs_page *old_head) +{ + while (destroy_list) { + struct nfs_page *subreq = destroy_list; + + destroy_list = (subreq->wb_this_page == old_head) ? + NULL : subreq->wb_this_page; + + WARN_ON_ONCE(old_head != subreq->wb_head); + + /* make sure old group is not used */ + subreq->wb_head = subreq; + subreq->wb_this_page = subreq; + + nfs_clear_request_commit(subreq); + + /* subreq is now totally disconnected from page group or any + * write / commit lists. 
last chance to wake any waiters */ + nfs_unlock_request(subreq); + + if (!test_bit(PG_TEARDOWN, &subreq->wb_flags)) { + /* release ref on old head request */ + nfs_release_request(old_head); + + nfs_page_group_clear_bits(subreq); + + /* release the PG_INODE_REF reference */ + if (test_and_clear_bit(PG_INODE_REF, &subreq->wb_flags)) + nfs_release_request(subreq); + else + WARN_ON_ONCE(1); + } else { + WARN_ON_ONCE(test_bit(PG_CLEAN, &subreq->wb_flags)); + /* zombie requests have already released the last + * reference and were waiting on the rest of the + * group to complete. Since it's no longer part of a + * group, simply free the request */ + nfs_page_group_clear_bits(subreq); + nfs_free_request(subreq); + } + } +} + +/* + * nfs_lock_and_join_requests - join all subreqs to the head req and return + * a locked reference, cancelling any pending + * operations for this page. + * + * @page - the page used to lookup the "page group" of nfs_page structures + * @nonblock - if true, don't block waiting for request locks + * + * This function joins all sub requests to the head request by first + * locking all requests in the group, cancelling any pending operations + * and finally updating the head request to cover the whole range covered by + * the (former) group. All subrequests are removed from any write or commit + * lists, unlinked from the group and destroyed. + * + * Returns a locked, referenced pointer to the head request - which after + * this call is guaranteed to be the only request associated with the page. + * Returns NULL if no requests are found for @page, or a ERR_PTR if an + * error was encountered. + */ +static struct nfs_page * +nfs_lock_and_join_requests(struct page *page, bool nonblock) { struct inode *inode = page_file_mapping(page)->host; - struct nfs_page *req; + struct nfs_page *head, *subreq; + struct nfs_page *destroy_list = NULL; + unsigned int total_bytes; int ret; +try_again: + total_bytes = 0; + + WARN_ON_ONCE(destroy_list); + spin_lock(&inode->i_lock); - for (;;) { - req = nfs_page_find_request_locked(NFS_I(inode), page); - if (req == NULL) - break; - if (nfs_lock_request(req)) - break; - /* Note: If we hold the page lock, as is the case in nfs_writepage, - * then the call to nfs_lock_request() will always - * succeed provided that someone hasn't already marked the - * request as dirty (in which case we don't care). - */ + + /* + * A reference is taken only on the head request which acts as a + * reference to the whole page group - the group will not be destroyed + * until the head reference is released. + */ + head = nfs_page_find_head_request_locked(NFS_I(inode), page); + + if (!head) { spin_unlock(&inode->i_lock); - if (!nonblock) - ret = nfs_wait_on_request(req); - else - ret = -EAGAIN; - nfs_release_request(req); - if (ret != 0) + return NULL; + } + + /* lock each request in the page group */ + nfs_page_group_lock(head); + subreq = head; + do { + /* + * Subrequests are always contiguous, non overlapping + * and in order. If not, it's a programming error. 
+ */ + WARN_ON_ONCE(subreq->wb_offset != + (head->wb_offset + total_bytes)); + + /* keep track of how many bytes this group covers */ + total_bytes += subreq->wb_bytes; + + if (!nfs_lock_request(subreq)) { + /* releases page group bit lock and + * inode spin lock and all references */ + ret = nfs_unroll_locks_and_wait(inode, head, + subreq, nonblock); + + if (ret == 0) + goto try_again; + return ERR_PTR(ret); - spin_lock(&inode->i_lock); + } + + subreq = subreq->wb_this_page; + } while (subreq != head); + + /* Now that all requests are locked, make sure they aren't on any list. + * Commit list removal accounting is done after locks are dropped */ + subreq = head; + do { + nfs_list_remove_request(subreq); + subreq = subreq->wb_this_page; + } while (subreq != head); + + /* unlink subrequests from head, destroy them later */ + if (head->wb_this_page != head) { + /* destroy list will be terminated by head */ + destroy_list = head->wb_this_page; + head->wb_this_page = head; + + /* change head request to cover whole range that + * the former page group covered */ + head->wb_bytes = total_bytes; } + + /* + * prepare head request to be added to new pgio descriptor + */ + nfs_page_group_clear_bits(head); + + /* + * some part of the group was still on the inode list - otherwise + * the group wouldn't be involved in async write. + * grab a reference for the head request, iff it needs one. + */ + if (!test_and_set_bit(PG_INODE_REF, &head->wb_flags)) + kref_get(&head->wb_kref); + + nfs_page_group_unlock(head); + + /* drop lock to clear_request_commit the head req and clean up + * requests on destroy list */ spin_unlock(&inode->i_lock); - return req; + + nfs_destroy_unlinked_subrequests(destroy_list, head); + + /* clean up commit list state */ + nfs_clear_request_commit(head); + + /* still holds ref on head from nfs_page_find_head_request_locked + * and still has lock on head from lock loop */ + return head; } /* @@ -316,7 +542,7 @@ static int nfs_page_async_flush(struct nfs_pageio_descriptor *pgio, struct nfs_page *req; int ret = 0; - req = nfs_find_and_lock_request(page, nonblock); + req = nfs_lock_and_join_requests(page, nonblock); if (!req) goto out; ret = PTR_ERR(req); @@ -448,7 +674,9 @@ static void nfs_inode_add_request(struct inode *inode, struct nfs_page *req) set_page_private(req->wb_page, (unsigned long)req); } nfsi->npages++; - set_bit(PG_INODE_REF, &req->wb_flags); + /* this a head request for a page group - mark it as having an + * extra reference so sub groups can follow suit */ + WARN_ON(test_and_set_bit(PG_INODE_REF, &req->wb_flags)); kref_get(&req->wb_kref); spin_unlock(&inode->i_lock); } @@ -474,7 +702,9 @@ static void nfs_inode_remove_request(struct nfs_page *req) nfsi->npages--; spin_unlock(&inode->i_lock); } - nfs_release_request(req); + + if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags)) + nfs_release_request(req); } static void @@ -638,7 +868,6 @@ static void nfs_write_completion(struct nfs_pgio_header *hdr) { struct nfs_commit_info cinfo; unsigned long bytes = 0; - bool do_destroy; if (test_bit(NFS_IOHDR_REDO, &hdr->flags)) goto out; @@ -668,7 +897,6 @@ remove_req: next: nfs_unlock_request(req); nfs_end_page_writeback(req); - do_destroy = !test_bit(NFS_IOHDR_NEED_COMMIT, &hdr->flags); nfs_release_request(req); } out: @@ -769,7 +997,7 @@ static struct nfs_page *nfs_try_to_update_request(struct inode *inode, spin_lock(&inode->i_lock); for (;;) { - req = nfs_page_find_request_locked(NFS_I(inode), page); + req = nfs_page_find_head_request_locked(NFS_I(inode), page); if (req == 
NULL) goto out_unlock; @@ -877,7 +1105,7 @@ int nfs_flush_incompatible(struct file *file, struct page *page) * dropped page. */ do { - req = nfs_page_find_request(page); + req = nfs_page_find_head_request(page); if (req == NULL) return 0; l_ctx = req->wb_lock_context; @@ -934,12 +1162,14 @@ static bool nfs_write_pageuptodate(struct page *page, struct inode *inode) if (nfs_have_delegated_attributes(inode)) goto out; - if (nfsi->cache_validity & (NFS_INO_INVALID_DATA|NFS_INO_REVAL_PAGECACHE)) + if (nfsi->cache_validity & NFS_INO_REVAL_PAGECACHE) return false; smp_rmb(); if (test_bit(NFS_INO_INVALIDATING, &nfsi->flags)) return false; out: + if (nfsi->cache_validity & NFS_INO_INVALID_DATA) + return false; return PageUptodate(page) != 0; } @@ -1567,27 +1797,28 @@ int nfs_wb_page_cancel(struct inode *inode, struct page *page) struct nfs_page *req; int ret = 0; - for (;;) { - wait_on_page_writeback(page); - req = nfs_page_find_request(page); - if (req == NULL) - break; - if (nfs_lock_request(req)) { - nfs_clear_request_commit(req); - nfs_inode_remove_request(req); - /* - * In case nfs_inode_remove_request has marked the - * page as being dirty - */ - cancel_dirty_page(page, PAGE_CACHE_SIZE); - nfs_unlock_and_release_request(req); - break; - } - ret = nfs_wait_on_request(req); - nfs_release_request(req); - if (ret < 0) - break; + wait_on_page_writeback(page); + + /* blocking call to cancel all requests and join to a single (head) + * request */ + req = nfs_lock_and_join_requests(page, false); + + if (IS_ERR(req)) { + ret = PTR_ERR(req); + } else if (req) { + /* all requests from this page have been cancelled by + * nfs_lock_and_join_requests, so just remove the head + * request from the inode / page_private pointer and + * release it */ + nfs_inode_remove_request(req); + /* + * In case nfs_inode_remove_request has marked the + * page as being dirty + */ + cancel_dirty_page(page, PAGE_CACHE_SIZE); + nfs_unlock_and_release_request(req); } + return ret; } diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c index 6851b003f2a4..8f029db5d271 100644 --- a/fs/nfsd/nfs4proc.c +++ b/fs/nfsd/nfs4proc.c @@ -617,15 +617,6 @@ nfsd4_create(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate, switch (create->cr_type) { case NF4LNK: - /* ugh! we have to null-terminate the linktext, or - * vfs_symlink() will choke. it is always safe to - * null-terminate by brute force, since at worst we - * will overwrite the first byte of the create namelen - * in the XDR buffer, which has already been extracted - * during XDR decode. 
- */ - create->cr_linkname[create->cr_linklen] = 0; - status = nfsd_symlink(rqstp, &cstate->current_fh, create->cr_name, create->cr_namelen, create->cr_linkname, create->cr_linklen, diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c index 83baf2bfe9e9..944275c8f56d 100644 --- a/fs/nfsd/nfs4xdr.c +++ b/fs/nfsd/nfs4xdr.c @@ -600,7 +600,18 @@ nfsd4_decode_create(struct nfsd4_compoundargs *argp, struct nfsd4_create *create READ_BUF(4); create->cr_linklen = be32_to_cpup(p++); READ_BUF(create->cr_linklen); - SAVEMEM(create->cr_linkname, create->cr_linklen); + /* + * The VFS will want a null-terminated string, and + * null-terminating in place isn't safe since this might + * end on a page boundary: + */ + create->cr_linkname = + kmalloc(create->cr_linklen + 1, GFP_KERNEL); + if (!create->cr_linkname) + return nfserr_jukebox; + memcpy(create->cr_linkname, p, create->cr_linklen); + create->cr_linkname[create->cr_linklen] = '\0'; + defer_free(argp, kfree, create->cr_linkname); break; case NF4BLK: case NF4CHR: @@ -2630,7 +2641,7 @@ nfsd4_encode_rdattr_error(struct xdr_stream *xdr, __be32 nfserr) { __be32 *p; - p = xdr_reserve_space(xdr, 6); + p = xdr_reserve_space(xdr, 20); if (!p) return NULL; *p++ = htonl(2); @@ -2868,6 +2879,7 @@ again: * return the conflicting open: */ if (conf->len) { + kfree(conf->data); conf->len = 0; conf->data = NULL; goto again; @@ -2880,6 +2892,7 @@ again: if (conf->len) { p = xdr_encode_opaque_fixed(p, &ld->ld_clientid, 8); p = xdr_encode_opaque(p, conf->data, conf->len); + kfree(conf->data); } else { /* non - nfsv4 lock in conflict, no clientid nor owner */ p = xdr_encode_hyper(p, (u64)0); /* clientid */ *p++ = cpu_to_be32(0); /* length of owner name */ @@ -2896,7 +2909,7 @@ nfsd4_encode_lock(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_lo nfserr = nfsd4_encode_stateid(xdr, &lock->lk_resp_stateid); else if (nfserr == nfserr_denied) nfserr = nfsd4_encode_lock_denied(xdr, &lock->lk_denied); - kfree(lock->lk_denied.ld_owner.data); + return nfserr; } @@ -3267,7 +3280,7 @@ nfsd4_encode_readlink(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd wire_count = htonl(maxcount); write_bytes_to_xdr_buf(xdr->buf, length_offset, &wire_count, 4); - xdr_truncate_encode(xdr, length_offset + 4 + maxcount); + xdr_truncate_encode(xdr, length_offset + 4 + ALIGN(maxcount, 4)); if (maxcount & 3) write_bytes_to_xdr_buf(xdr->buf, length_offset + 4 + maxcount, &zero, 4 - (maxcount&3)); diff --git a/fs/ocfs2/dlm/dlmcommon.h b/fs/ocfs2/dlm/dlmcommon.h index a106b3f2b22a..fae17c640df3 100644 --- a/fs/ocfs2/dlm/dlmcommon.h +++ b/fs/ocfs2/dlm/dlmcommon.h @@ -331,6 +331,7 @@ struct dlm_lock_resource u16 state; char lvb[DLM_LVB_LEN]; unsigned int inflight_locks; + unsigned int inflight_assert_workers; unsigned long refmap[BITS_TO_LONGS(O2NM_MAX_NODES)]; }; @@ -910,6 +911,9 @@ void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, struct dlm_lock_resource *res); +void __dlm_lockres_grab_inflight_worker(struct dlm_ctxt *dlm, + struct dlm_lock_resource *res); + void dlm_queue_ast(struct dlm_ctxt *dlm, struct dlm_lock *lock); void dlm_queue_bast(struct dlm_ctxt *dlm, struct dlm_lock *lock); void __dlm_queue_ast(struct dlm_ctxt *dlm, struct dlm_lock *lock); diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c index 3087a21d32f9..82abf0cc9a12 100644 --- a/fs/ocfs2/dlm/dlmmaster.c +++ b/fs/ocfs2/dlm/dlmmaster.c @@ -581,6 +581,7 @@ static void dlm_init_lockres(struct dlm_ctxt *dlm, atomic_set(&res->asts_reserved, 0); 
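The nfs4xdr.c change above stops NUL-terminating the symlink text in place, because the decoded data may end exactly on a page boundary, and instead copies it into a freshly allocated buffer that is terminated there and freed later. The same "copy, then terminate" idiom in a hedged userspace form:

#include <stdlib.h>
#include <string.h>

/*
 * Copy "len" bytes that carry no terminator into a private buffer and
 * terminate the copy, rather than writing one byte past the source
 * (which may sit at the very end of a page or buffer).
 */
static char *copy_and_terminate(const void *src, size_t len)
{
        char *dst = malloc(len + 1);

        if (!dst)
                return NULL;
        memcpy(dst, src, len);
        dst[len] = '\0';
        return dst;
}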
res->migration_pending = 0; res->inflight_locks = 0; + res->inflight_assert_workers = 0; res->dlm = dlm; @@ -683,6 +684,43 @@ void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, wake_up(&res->wq); } +void __dlm_lockres_grab_inflight_worker(struct dlm_ctxt *dlm, + struct dlm_lock_resource *res) +{ + assert_spin_locked(&res->spinlock); + res->inflight_assert_workers++; + mlog(0, "%s:%.*s: inflight assert worker++: now %u\n", + dlm->name, res->lockname.len, res->lockname.name, + res->inflight_assert_workers); +} + +static void dlm_lockres_grab_inflight_worker(struct dlm_ctxt *dlm, + struct dlm_lock_resource *res) +{ + spin_lock(&res->spinlock); + __dlm_lockres_grab_inflight_worker(dlm, res); + spin_unlock(&res->spinlock); +} + +static void __dlm_lockres_drop_inflight_worker(struct dlm_ctxt *dlm, + struct dlm_lock_resource *res) +{ + assert_spin_locked(&res->spinlock); + BUG_ON(res->inflight_assert_workers == 0); + res->inflight_assert_workers--; + mlog(0, "%s:%.*s: inflight assert worker--: now %u\n", + dlm->name, res->lockname.len, res->lockname.name, + res->inflight_assert_workers); +} + +static void dlm_lockres_drop_inflight_worker(struct dlm_ctxt *dlm, + struct dlm_lock_resource *res) +{ + spin_lock(&res->spinlock); + __dlm_lockres_drop_inflight_worker(dlm, res); + spin_unlock(&res->spinlock); +} + /* * lookup a lock resource by name. * may already exist in the hashtable. @@ -1603,7 +1641,8 @@ send_response: mlog(ML_ERROR, "failed to dispatch assert master work\n"); response = DLM_MASTER_RESP_ERROR; dlm_lockres_put(res); - } + } else + dlm_lockres_grab_inflight_worker(dlm, res); } else { if (res) dlm_lockres_put(res); @@ -2118,6 +2157,8 @@ static void dlm_assert_master_worker(struct dlm_work_item *item, void *data) dlm_lockres_release_ast(dlm, res); put: + dlm_lockres_drop_inflight_worker(dlm, res); + dlm_lockres_put(res); mlog(0, "finished with dlm_assert_master_worker\n"); @@ -3088,11 +3129,15 @@ static int dlm_add_migration_mle(struct dlm_ctxt *dlm, /* remove it so that only one mle will be found */ __dlm_unlink_mle(dlm, tmp); __dlm_mle_detach_hb_events(dlm, tmp); - ret = DLM_MIGRATE_RESPONSE_MASTERY_REF; - mlog(0, "%s:%.*s: master=%u, newmaster=%u, " - "telling master to get ref for cleared out mle " - "during migration\n", dlm->name, namelen, name, - master, new_master); + if (tmp->type == DLM_MLE_MASTER) { + ret = DLM_MIGRATE_RESPONSE_MASTERY_REF; + mlog(0, "%s:%.*s: master=%u, newmaster=%u, " + "telling master to get ref " + "for cleared out mle during " + "migration\n", dlm->name, + namelen, name, master, + new_master); + } } spin_unlock(&tmp->spinlock); } diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c index 5de019437ea5..45067faf5695 100644 --- a/fs/ocfs2/dlm/dlmrecovery.c +++ b/fs/ocfs2/dlm/dlmrecovery.c @@ -1708,7 +1708,8 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data, mlog_errno(-ENOMEM); /* retry!? */ BUG(); - } + } else + __dlm_lockres_grab_inflight_worker(dlm, res); } else /* put.. incase we are not the master */ dlm_lockres_put(res); spin_unlock(&res->spinlock); diff --git a/fs/ocfs2/dlm/dlmthread.c b/fs/ocfs2/dlm/dlmthread.c index 9db869de829d..69aac6f088ad 100644 --- a/fs/ocfs2/dlm/dlmthread.c +++ b/fs/ocfs2/dlm/dlmthread.c @@ -259,12 +259,15 @@ static void dlm_run_purge_list(struct dlm_ctxt *dlm, * refs on it. 
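The dlmmaster.c hunks above account for queued assert-master workers per lock resource: the counter is raised under the resource spinlock before the work is dispatched, dropped when the worker completes, and the purge path (see the dlmthread.c hunk further down) skips resources whose counter is still nonzero. A generic sketch of that bookkeeping, assuming kernel context; names are illustrative:

struct demo_res {
        spinlock_t lock;
        unsigned int inflight_workers;
};

static void demo_grab_inflight_worker(struct demo_res *res)
{
        spin_lock(&res->lock);
        res->inflight_workers++;
        spin_unlock(&res->lock);
}

static void demo_drop_inflight_worker(struct demo_res *res)
{
        spin_lock(&res->lock);
        WARN_ON(res->inflight_workers == 0);
        res->inflight_workers--;
        spin_unlock(&res->lock);
}

/* Teardown/purge must leave the object alone while workers are in flight. */
static bool demo_can_purge(struct demo_res *res)
{
        bool idle;

        spin_lock(&res->lock);
        idle = (res->inflight_workers == 0);
        spin_unlock(&res->lock);
        return idle;
}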
*/ unused = __dlm_lockres_unused(lockres); if (!unused || - (lockres->state & DLM_LOCK_RES_MIGRATING)) { + (lockres->state & DLM_LOCK_RES_MIGRATING) || + (lockres->inflight_assert_workers != 0)) { mlog(0, "%s: res %.*s is in use or being remastered, " - "used %d, state %d\n", dlm->name, - lockres->lockname.len, lockres->lockname.name, - !unused, lockres->state); - list_move_tail(&dlm->purge_list, &lockres->purge); + "used %d, state %d, assert master workers %u\n", + dlm->name, lockres->lockname.len, + lockres->lockname.name, + !unused, lockres->state, + lockres->inflight_assert_workers); + list_move_tail(&lockres->purge, &dlm->purge_list); spin_unlock(&lockres->spinlock); continue; } diff --git a/fs/ocfs2/dlm/dlmunlock.c b/fs/ocfs2/dlm/dlmunlock.c index 5698b52cf5c9..2e3c9dbab68c 100644 --- a/fs/ocfs2/dlm/dlmunlock.c +++ b/fs/ocfs2/dlm/dlmunlock.c @@ -191,7 +191,9 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm, DLM_UNLOCK_CLEAR_CONVERT_TYPE); } else if (status == DLM_RECOVERING || status == DLM_MIGRATING || - status == DLM_FORWARD) { + status == DLM_FORWARD || + status == DLM_NOLOCKMGR + ) { /* must clear the actions because this unlock * is about to be retried. cannot free or do * any list manipulation. */ @@ -200,7 +202,8 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm, res->lockname.name, status==DLM_RECOVERING?"recovering": (status==DLM_MIGRATING?"migrating": - "forward")); + (status == DLM_FORWARD ? "forward" : + "nolockmanager"))); actions = 0; } if (flags & LKM_CANCEL) @@ -364,7 +367,10 @@ static enum dlm_status dlm_send_remote_unlock_request(struct dlm_ctxt *dlm, * updated state to the recovery master. this thread * just needs to finish out the operation and call * the unlockast. */ - ret = DLM_NORMAL; + if (dlm_is_node_dead(dlm, owner)) + ret = DLM_NORMAL; + else + ret = DLM_NOLOCKMGR; } else { /* something bad. this will BUG in ocfs2 */ ret = dlm_err_to_dlm_status(tmpret); @@ -638,7 +644,9 @@ retry: if (status == DLM_RECOVERING || status == DLM_MIGRATING || - status == DLM_FORWARD) { + status == DLM_FORWARD || + status == DLM_NOLOCKMGR) { + /* We want to go away for a tiny bit to allow recovery * / migration to complete on this resource. 
I don't * know of any wait queue we could sleep on as this @@ -650,7 +658,7 @@ retry: msleep(50); mlog(0, "retrying unlock due to pending recovery/" - "migration/in-progress\n"); + "migration/in-progress/reconnect\n"); goto retry; } diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c index 2060fc398445..8add6f1030d7 100644 --- a/fs/ocfs2/namei.c +++ b/fs/ocfs2/namei.c @@ -205,6 +205,21 @@ static struct inode *ocfs2_get_init_inode(struct inode *dir, umode_t mode) return inode; } +static void ocfs2_cleanup_add_entry_failure(struct ocfs2_super *osb, + struct dentry *dentry, struct inode *inode) +{ + struct ocfs2_dentry_lock *dl = dentry->d_fsdata; + + ocfs2_simple_drop_lockres(osb, &dl->dl_lockres); + ocfs2_lock_res_free(&dl->dl_lockres); + BUG_ON(dl->dl_count != 1); + spin_lock(&dentry_attach_lock); + dentry->d_fsdata = NULL; + spin_unlock(&dentry_attach_lock); + kfree(dl); + iput(inode); +} + static int ocfs2_mknod(struct inode *dir, struct dentry *dentry, umode_t mode, @@ -231,6 +246,7 @@ static int ocfs2_mknod(struct inode *dir, sigset_t oldset; int did_block_signals = 0; struct posix_acl *default_acl = NULL, *acl = NULL; + struct ocfs2_dentry_lock *dl = NULL; trace_ocfs2_mknod(dir, dentry, dentry->d_name.len, dentry->d_name.name, (unsigned long long)OCFS2_I(dir)->ip_blkno, @@ -423,6 +439,8 @@ static int ocfs2_mknod(struct inode *dir, goto leave; } + dl = dentry->d_fsdata; + status = ocfs2_add_entry(handle, dentry, inode, OCFS2_I(inode)->ip_blkno, parent_fe_bh, &lookup); @@ -469,6 +487,9 @@ leave: * ocfs2_delete_inode will mutex_lock again. */ if ((status < 0) && inode) { + if (dl) + ocfs2_cleanup_add_entry_failure(osb, dentry, inode); + OCFS2_I(inode)->ip_flags |= OCFS2_INODE_SKIP_ORPHAN_DIR; clear_nlink(inode); iput(inode); @@ -991,6 +1012,65 @@ leave: return status; } +static int ocfs2_check_if_ancestor(struct ocfs2_super *osb, + u64 src_inode_no, u64 dest_inode_no) +{ + int ret = 0, i = 0; + u64 parent_inode_no = 0; + u64 child_inode_no = src_inode_no; + struct inode *child_inode; + +#define MAX_LOOKUP_TIMES 32 + while (1) { + child_inode = ocfs2_iget(osb, child_inode_no, 0, 0); + if (IS_ERR(child_inode)) { + ret = PTR_ERR(child_inode); + break; + } + + ret = ocfs2_inode_lock(child_inode, NULL, 0); + if (ret < 0) { + iput(child_inode); + if (ret != -ENOENT) + mlog_errno(ret); + break; + } + + ret = ocfs2_lookup_ino_from_name(child_inode, "..", 2, + &parent_inode_no); + ocfs2_inode_unlock(child_inode, 0); + iput(child_inode); + if (ret < 0) { + ret = -ENOENT; + break; + } + + if (parent_inode_no == dest_inode_no) { + ret = 1; + break; + } + + if (parent_inode_no == osb->root_inode->i_ino) { + ret = 0; + break; + } + + child_inode_no = parent_inode_no; + + if (++i >= MAX_LOOKUP_TIMES) { + mlog(ML_NOTICE, "max lookup times reached, filesystem " + "may have nested directories, " + "src inode: %llu, dest inode: %llu.\n", + (unsigned long long)src_inode_no, + (unsigned long long)dest_inode_no); + ret = 0; + break; + } + } + + return ret; +} + /* * The only place this should be used is rename! * if they have the same id, then the 1st one is the only one locked. 
@@ -1002,6 +1082,7 @@ static int ocfs2_double_lock(struct ocfs2_super *osb, struct inode *inode2) { int status; + int inode1_is_ancestor, inode2_is_ancestor; struct ocfs2_inode_info *oi1 = OCFS2_I(inode1); struct ocfs2_inode_info *oi2 = OCFS2_I(inode2); struct buffer_head **tmpbh; @@ -1015,9 +1096,26 @@ static int ocfs2_double_lock(struct ocfs2_super *osb, if (*bh2) *bh2 = NULL; - /* we always want to lock the one with the lower lockid first. */ + /* we always want to lock the one with the lower lockid first. + * and if they are nested, we lock ancestor first */ if (oi1->ip_blkno != oi2->ip_blkno) { - if (oi1->ip_blkno < oi2->ip_blkno) { + inode1_is_ancestor = ocfs2_check_if_ancestor(osb, oi2->ip_blkno, + oi1->ip_blkno); + if (inode1_is_ancestor < 0) { + status = inode1_is_ancestor; + goto bail; + } + + inode2_is_ancestor = ocfs2_check_if_ancestor(osb, oi1->ip_blkno, + oi2->ip_blkno); + if (inode2_is_ancestor < 0) { + status = inode2_is_ancestor; + goto bail; + } + + if ((inode1_is_ancestor == 1) || + (oi1->ip_blkno < oi2->ip_blkno && + inode2_is_ancestor == 0)) { /* switch id1 and id2 around */ tmpbh = bh2; bh2 = bh1; @@ -1098,6 +1196,7 @@ static int ocfs2_rename(struct inode *old_dir, struct ocfs2_dir_lookup_result old_entry_lookup = { NULL, }; struct ocfs2_dir_lookup_result orphan_insert = { NULL, }; struct ocfs2_dir_lookup_result target_insert = { NULL, }; + bool should_add_orphan = false; /* At some point it might be nice to break this function up a * bit. */ @@ -1134,6 +1233,21 @@ static int ocfs2_rename(struct inode *old_dir, goto bail; } rename_lock = 1; + + /* here we cannot guarantee the inodes haven't just been + * changed, so check if they are nested again */ + status = ocfs2_check_if_ancestor(osb, new_dir->i_ino, + old_inode->i_ino); + if (status < 0) { + mlog_errno(status); + goto bail; + } else if (status == 1) { + status = -EPERM; + trace_ocfs2_rename_not_permitted( + (unsigned long long)old_inode->i_ino, + (unsigned long long)new_dir->i_ino); + goto bail; + } } /* if old and new are the same, this'll just do one lock. 
*/ @@ -1304,6 +1418,7 @@ static int ocfs2_rename(struct inode *old_dir, mlog_errno(status); goto bail; } + should_add_orphan = true; } } else { BUG_ON(new_dentry->d_parent->d_inode != new_dir); @@ -1348,17 +1463,6 @@ static int ocfs2_rename(struct inode *old_dir, goto bail; } - if (S_ISDIR(new_inode->i_mode) || - (ocfs2_read_links_count(newfe) == 1)) { - status = ocfs2_orphan_add(osb, handle, new_inode, - newfe_bh, orphan_name, - &orphan_insert, orphan_dir); - if (status < 0) { - mlog_errno(status); - goto bail; - } - } - /* change the dirent to point to the correct inode */ status = ocfs2_update_entry(new_dir, handle, &target_lookup_res, old_inode); @@ -1373,6 +1477,15 @@ static int ocfs2_rename(struct inode *old_dir, else ocfs2_add_links_count(newfe, -1); ocfs2_journal_dirty(handle, newfe_bh); + if (should_add_orphan) { + status = ocfs2_orphan_add(osb, handle, new_inode, + newfe_bh, orphan_name, + &orphan_insert, orphan_dir); + if (status < 0) { + mlog_errno(status); + goto bail; + } + } } else { /* if the name was not found in new_dir, add it now */ status = ocfs2_add_entry(handle, new_dentry, old_inode, @@ -1642,6 +1755,7 @@ static int ocfs2_symlink(struct inode *dir, struct ocfs2_dir_lookup_result lookup = { NULL, }; sigset_t oldset; int did_block_signals = 0; + struct ocfs2_dentry_lock *dl = NULL; trace_ocfs2_symlink_begin(dir, dentry, symname, dentry->d_name.len, dentry->d_name.name); @@ -1830,6 +1944,8 @@ static int ocfs2_symlink(struct inode *dir, goto bail; } + dl = dentry->d_fsdata; + status = ocfs2_add_entry(handle, dentry, inode, le64_to_cpu(fe->i_blkno), parent_fe_bh, &lookup); @@ -1864,6 +1980,9 @@ bail: if (xattr_ac) ocfs2_free_alloc_context(xattr_ac); if ((status < 0) && inode) { + if (dl) + ocfs2_cleanup_add_entry_failure(osb, dentry, inode); + OCFS2_I(inode)->ip_flags |= OCFS2_INODE_SKIP_ORPHAN_DIR; clear_nlink(inode); iput(inode); diff --git a/fs/ocfs2/ocfs2_trace.h b/fs/ocfs2/ocfs2_trace.h index 1b60c62aa9d6..6cb019b7c6a8 100644 --- a/fs/ocfs2/ocfs2_trace.h +++ b/fs/ocfs2/ocfs2_trace.h @@ -2292,6 +2292,8 @@ TRACE_EVENT(ocfs2_rename, __entry->new_len, __get_str(new_name)) ); +DEFINE_OCFS2_ULL_ULL_EVENT(ocfs2_rename_not_permitted); + TRACE_EVENT(ocfs2_rename_target_exists, TP_PROTO(int new_len, const char *new_name), TP_ARGS(new_len, new_name), diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c index 714e53b9cc66..636aab69ead5 100644 --- a/fs/ocfs2/refcounttree.c +++ b/fs/ocfs2/refcounttree.c @@ -4288,9 +4288,16 @@ static int ocfs2_reflink(struct dentry *old_dentry, struct inode *dir, goto out; } + error = ocfs2_rw_lock(inode, 1); + if (error) { + mlog_errno(error); + goto out; + } + error = ocfs2_inode_lock(inode, &old_bh, 1); if (error) { mlog_errno(error); + ocfs2_rw_unlock(inode, 1); goto out; } @@ -4302,6 +4309,7 @@ static int ocfs2_reflink(struct dentry *old_dentry, struct inode *dir, up_write(&OCFS2_I(inode)->ip_xattr_sem); ocfs2_inode_unlock(inode, 1); + ocfs2_rw_unlock(inode, 1); brelse(old_bh); if (error) { diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c index c7a89cea5c5d..ddb662b32447 100644 --- a/fs/ocfs2/super.c +++ b/fs/ocfs2/super.c @@ -1925,15 +1925,11 @@ static void ocfs2_dismount_volume(struct super_block *sb, int mnt_err) ocfs2_shutdown_local_alloc(osb); + ocfs2_truncate_log_shutdown(osb); + /* This will disable recovery and flush any recovery work. */ ocfs2_recovery_exit(osb); - /* - * During dismount, when it recovers another node it will call - * ocfs2_recover_orphans and queue delayed work osb_truncate_log_wq. 
- */ - ocfs2_truncate_log_shutdown(osb); - ocfs2_journal_shutdown(osb); ocfs2_sync_blockdev(sb); diff --git a/fs/proc/stat.c b/fs/proc/stat.c index 9d231e9e5f0e..bf2d03f8fd3e 100644 --- a/fs/proc/stat.c +++ b/fs/proc/stat.c @@ -184,29 +184,11 @@ static int show_stat(struct seq_file *p, void *v) static int stat_open(struct inode *inode, struct file *file) { - size_t size = 1024 + 128 * num_possible_cpus(); - char *buf; - struct seq_file *m; - int res; + size_t size = 1024 + 128 * num_online_cpus(); /* minimum size to display an interrupt count : 2 bytes */ size += 2 * nr_irqs; - - /* don't ask for more than the kmalloc() max size */ - if (size > KMALLOC_MAX_SIZE) - size = KMALLOC_MAX_SIZE; - buf = kmalloc(size, GFP_KERNEL); - if (!buf) - return -ENOMEM; - - res = single_open(file, show_stat, NULL); - if (!res) { - m = file->private_data; - m->buf = buf; - m->size = ksize(buf); - } else - kfree(buf); - return res; + return single_open_size(file, show_stat, NULL, size); } static const struct file_operations proc_stat_operations = { diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c index 9cd5f63715c0..7f30bdc57d13 100644 --- a/fs/quota/dquot.c +++ b/fs/quota/dquot.c @@ -702,6 +702,7 @@ dqcache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) struct dquot *dquot; unsigned long freed = 0; + spin_lock(&dq_list_lock); head = free_dquots.prev; while (head != &free_dquots && sc->nr_to_scan) { dquot = list_entry(head, struct dquot, dq_free); @@ -713,6 +714,7 @@ dqcache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) freed++; head = free_dquots.prev; } + spin_unlock(&dq_list_lock); return freed; } diff --git a/fs/seq_file.c b/fs/seq_file.c index 1d641bb108d2..3857b720cb1b 100644 --- a/fs/seq_file.c +++ b/fs/seq_file.c @@ -8,8 +8,10 @@ #include <linux/fs.h> #include <linux/export.h> #include <linux/seq_file.h> +#include <linux/vmalloc.h> #include <linux/slab.h> #include <linux/cred.h> +#include <linux/mm.h> #include <asm/uaccess.h> #include <asm/page.h> @@ -30,6 +32,16 @@ static void seq_set_overflow(struct seq_file *m) m->count = m->size; } +static void *seq_buf_alloc(unsigned long size) +{ + void *buf; + + buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN); + if (!buf && size > PAGE_SIZE) + buf = vmalloc(size); + return buf; +} + /** * seq_open - initialize sequential file * @file: file we initialize @@ -96,7 +108,7 @@ static int traverse(struct seq_file *m, loff_t offset) return 0; } if (!m->buf) { - m->buf = kmalloc(m->size = PAGE_SIZE, GFP_KERNEL); + m->buf = seq_buf_alloc(m->size = PAGE_SIZE); if (!m->buf) return -ENOMEM; } @@ -135,9 +147,9 @@ static int traverse(struct seq_file *m, loff_t offset) Eoverflow: m->op->stop(m, p); - kfree(m->buf); + kvfree(m->buf); m->count = 0; - m->buf = kmalloc(m->size <<= 1, GFP_KERNEL); + m->buf = seq_buf_alloc(m->size <<= 1); return !m->buf ? 
-ENOMEM : -EAGAIN; } @@ -192,7 +204,7 @@ ssize_t seq_read(struct file *file, char __user *buf, size_t size, loff_t *ppos) /* grab buffer if we didn't have one */ if (!m->buf) { - m->buf = kmalloc(m->size = PAGE_SIZE, GFP_KERNEL); + m->buf = seq_buf_alloc(m->size = PAGE_SIZE); if (!m->buf) goto Enomem; } @@ -232,9 +244,9 @@ ssize_t seq_read(struct file *file, char __user *buf, size_t size, loff_t *ppos) if (m->count < m->size) goto Fill; m->op->stop(m, p); - kfree(m->buf); + kvfree(m->buf); m->count = 0; - m->buf = kmalloc(m->size <<= 1, GFP_KERNEL); + m->buf = seq_buf_alloc(m->size <<= 1); if (!m->buf) goto Enomem; m->version = 0; @@ -350,7 +362,7 @@ EXPORT_SYMBOL(seq_lseek); int seq_release(struct inode *inode, struct file *file) { struct seq_file *m = file->private_data; - kfree(m->buf); + kvfree(m->buf); kfree(m); return 0; } @@ -605,13 +617,13 @@ EXPORT_SYMBOL(single_open); int single_open_size(struct file *file, int (*show)(struct seq_file *, void *), void *data, size_t size) { - char *buf = kmalloc(size, GFP_KERNEL); + char *buf = seq_buf_alloc(size); int ret; if (!buf) return -ENOMEM; ret = single_open(file, show, data); if (ret) { - kfree(buf); + kvfree(buf); return ret; } ((struct seq_file *)file->private_data)->buf = buf; diff --git a/fs/xattr.c b/fs/xattr.c index 3377dff18404..c69e6d43a0d2 100644 --- a/fs/xattr.c +++ b/fs/xattr.c @@ -843,7 +843,7 @@ struct simple_xattr *simple_xattr_alloc(const void *value, size_t size) /* wrap around? */ len = sizeof(*new_xattr) + size; - if (len <= sizeof(*new_xattr)) + if (len < sizeof(*new_xattr)) return NULL; new_xattr = kmalloc(len, GFP_KERNEL); diff --git a/fs/xfs/xfs_bmap.c b/fs/xfs/xfs_bmap.c index 96175df211b1..75c3fe5f3d9d 100644 --- a/fs/xfs/xfs_bmap.c +++ b/fs/xfs/xfs_bmap.c @@ -4298,8 +4298,8 @@ xfs_bmapi_delay( } -int -__xfs_bmapi_allocate( +static int +xfs_bmapi_allocate( struct xfs_bmalloca *bma) { struct xfs_mount *mp = bma->ip->i_mount; @@ -4578,9 +4578,6 @@ xfs_bmapi_write( bma.flist = flist; bma.firstblock = firstblock; - if (flags & XFS_BMAPI_STACK_SWITCH) - bma.stack_switch = 1; - while (bno < end && n < *nmap) { inhole = eof || bma.got.br_startoff > bno; wasdelay = !inhole && isnullstartblock(bma.got.br_startblock); diff --git a/fs/xfs/xfs_bmap.h b/fs/xfs/xfs_bmap.h index 38ba36e9b2f0..b879ca56a64c 100644 --- a/fs/xfs/xfs_bmap.h +++ b/fs/xfs/xfs_bmap.h @@ -77,7 +77,6 @@ typedef struct xfs_bmap_free * from written to unwritten, otherwise convert from unwritten to written. 
*/ #define XFS_BMAPI_CONVERT 0x040 -#define XFS_BMAPI_STACK_SWITCH 0x080 #define XFS_BMAPI_FLAGS \ { XFS_BMAPI_ENTIRE, "ENTIRE" }, \ @@ -86,8 +85,7 @@ typedef struct xfs_bmap_free { XFS_BMAPI_PREALLOC, "PREALLOC" }, \ { XFS_BMAPI_IGSTATE, "IGSTATE" }, \ { XFS_BMAPI_CONTIG, "CONTIG" }, \ - { XFS_BMAPI_CONVERT, "CONVERT" }, \ - { XFS_BMAPI_STACK_SWITCH, "STACK_SWITCH" } + { XFS_BMAPI_CONVERT, "CONVERT" } static inline int xfs_bmapi_aflag(int w) diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c index 703b3ec1796c..64731ef3324d 100644 --- a/fs/xfs/xfs_bmap_util.c +++ b/fs/xfs/xfs_bmap_util.c @@ -249,59 +249,6 @@ xfs_bmap_rtalloc( } /* - * Stack switching interfaces for allocation - */ -static void -xfs_bmapi_allocate_worker( - struct work_struct *work) -{ - struct xfs_bmalloca *args = container_of(work, - struct xfs_bmalloca, work); - unsigned long pflags; - unsigned long new_pflags = PF_FSTRANS; - - /* - * we are in a transaction context here, but may also be doing work - * in kswapd context, and hence we may need to inherit that state - * temporarily to ensure that we don't block waiting for memory reclaim - * in any way. - */ - if (args->kswapd) - new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD; - - current_set_flags_nested(&pflags, new_pflags); - - args->result = __xfs_bmapi_allocate(args); - complete(args->done); - - current_restore_flags_nested(&pflags, new_pflags); -} - -/* - * Some allocation requests often come in with little stack to work on. Push - * them off to a worker thread so there is lots of stack to use. Otherwise just - * call directly to avoid the context switch overhead here. - */ -int -xfs_bmapi_allocate( - struct xfs_bmalloca *args) -{ - DECLARE_COMPLETION_ONSTACK(done); - - if (!args->stack_switch) - return __xfs_bmapi_allocate(args); - - - args->done = &done; - args->kswapd = current_is_kswapd(); - INIT_WORK_ONSTACK(&args->work, xfs_bmapi_allocate_worker); - queue_work(xfs_alloc_wq, &args->work); - wait_for_completion(&done); - destroy_work_on_stack(&args->work); - return args->result; -} - -/* * Check if the endoff is outside the last extent. If so the caller will grow * the allocation to a stripe unit boundary. All offsets are considered outside * the end of file for an empty fork, so 1 is returned in *eof in that case. diff --git a/fs/xfs/xfs_bmap_util.h b/fs/xfs/xfs_bmap_util.h index 075f72232a64..2fdb72d2c908 100644 --- a/fs/xfs/xfs_bmap_util.h +++ b/fs/xfs/xfs_bmap_util.h @@ -55,8 +55,6 @@ struct xfs_bmalloca { bool userdata;/* set if is user data */ bool aeof; /* allocated space at eof */ bool conv; /* overwriting unwritten extents */ - bool stack_switch; - bool kswapd; /* allocation in kswapd context */ int flags; struct completion *done; struct work_struct work; @@ -66,8 +64,6 @@ struct xfs_bmalloca { int xfs_bmap_finish(struct xfs_trans **tp, struct xfs_bmap_free *flist, int *committed); int xfs_bmap_rtalloc(struct xfs_bmalloca *ap); -int xfs_bmapi_allocate(struct xfs_bmalloca *args); -int __xfs_bmapi_allocate(struct xfs_bmalloca *args); int xfs_bmap_eof(struct xfs_inode *ip, xfs_fileoff_t endoff, int whichfork, int *eof); int xfs_bmap_count_blocks(struct xfs_trans *tp, struct xfs_inode *ip, diff --git a/fs/xfs/xfs_btree.c b/fs/xfs/xfs_btree.c index bf810c6baf2b..cf893bc1e373 100644 --- a/fs/xfs/xfs_btree.c +++ b/fs/xfs/xfs_btree.c @@ -33,6 +33,7 @@ #include "xfs_error.h" #include "xfs_trace.h" #include "xfs_cksum.h" +#include "xfs_alloc.h" /* * Cursor allocation zone. @@ -2323,7 +2324,7 @@ error1: * record (to be inserted into parent). 
*/ STATIC int /* error */ -xfs_btree_split( +__xfs_btree_split( struct xfs_btree_cur *cur, int level, union xfs_btree_ptr *ptrp, @@ -2503,6 +2504,85 @@ error0: return error; } +struct xfs_btree_split_args { + struct xfs_btree_cur *cur; + int level; + union xfs_btree_ptr *ptrp; + union xfs_btree_key *key; + struct xfs_btree_cur **curp; + int *stat; /* success/failure */ + int result; + bool kswapd; /* allocation in kswapd context */ + struct completion *done; + struct work_struct work; +}; + +/* + * Stack switching interfaces for allocation + */ +static void +xfs_btree_split_worker( + struct work_struct *work) +{ + struct xfs_btree_split_args *args = container_of(work, + struct xfs_btree_split_args, work); + unsigned long pflags; + unsigned long new_pflags = PF_FSTRANS; + + /* + * we are in a transaction context here, but may also be doing work + * in kswapd context, and hence we may need to inherit that state + * temporarily to ensure that we don't block waiting for memory reclaim + * in any way. + */ + if (args->kswapd) + new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD; + + current_set_flags_nested(&pflags, new_pflags); + + args->result = __xfs_btree_split(args->cur, args->level, args->ptrp, + args->key, args->curp, args->stat); + complete(args->done); + + current_restore_flags_nested(&pflags, new_pflags); +} + +/* + * BMBT split requests often come in with little stack to work on. Push + * them off to a worker thread so there is lots of stack to use. For the other + * btree types, just call directly to avoid the context switch overhead here. + */ +STATIC int /* error */ +xfs_btree_split( + struct xfs_btree_cur *cur, + int level, + union xfs_btree_ptr *ptrp, + union xfs_btree_key *key, + struct xfs_btree_cur **curp, + int *stat) /* success/failure */ +{ + struct xfs_btree_split_args args; + DECLARE_COMPLETION_ONSTACK(done); + + if (cur->bc_btnum != XFS_BTNUM_BMAP) + return __xfs_btree_split(cur, level, ptrp, key, curp, stat); + + args.cur = cur; + args.level = level; + args.ptrp = ptrp; + args.key = key; + args.curp = curp; + args.stat = stat; + args.done = &done; + args.kswapd = current_is_kswapd(); + INIT_WORK_ONSTACK(&args.work, xfs_btree_split_worker); + queue_work(xfs_alloc_wq, &args.work); + wait_for_completion(&done); + destroy_work_on_stack(&args.work); + return args.result; +} + + /* * Copy the old inode root contents into a real block and make the * broot point to it. diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index 6c5eb4c551e3..6d3ec2b6ee29 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -749,8 +749,7 @@ xfs_iomap_write_allocate( * pointer that the caller gave to us. */ error = xfs_bmapi_write(tp, ip, map_start_fsb, - count_fsb, - XFS_BMAPI_STACK_SWITCH, + count_fsb, 0, &first_block, 1, imap, &nimaps, &free_list); if (error) diff --git a/fs/xfs/xfs_sb.c b/fs/xfs/xfs_sb.c index c3453b11f563..7703fa6770ff 100644 --- a/fs/xfs/xfs_sb.c +++ b/fs/xfs/xfs_sb.c @@ -483,10 +483,16 @@ xfs_sb_quota_to_disk( } /* - * GQUOTINO and PQUOTINO cannot be used together in versions - * of superblock that do not have pquotino. from->sb_flags - * tells us which quota is active and should be copied to - * disk. + * GQUOTINO and PQUOTINO cannot be used together in versions of + * superblock that do not have pquotino. from->sb_flags tells us which + * quota is active and should be copied to disk. If neither are active, + * make sure we write NULLFSINO to the sb_gquotino field as a quota + * inode value of "0" is invalid when the XFS_SB_VERSION_QUOTA feature + * bit is set. 
+ * + * Note that we don't need to handle the sb_uquotino or sb_pquotino here + * as they do not require any translation. Hence the main sb field loop + * will write them appropriately from the in-core superblock. */ if ((*fields & XFS_SB_GQUOTINO) && (from->sb_qflags & XFS_GQUOTA_ACCT)) @@ -494,6 +500,17 @@ xfs_sb_quota_to_disk( else if ((*fields & XFS_SB_PQUOTINO) && (from->sb_qflags & XFS_PQUOTA_ACCT)) to->sb_gquotino = cpu_to_be64(from->sb_pquotino); + else { + /* + * We can't rely on just the fields being logged to tell us + * that it is safe to write NULLFSINO - we should only do that + * if quotas are not actually enabled. Hence only write + * NULLFSINO if both in-core quota inodes are NULL. + */ + if (from->sb_gquotino == NULLFSINO && + from->sb_pquotino == NULLFSINO) + to->sb_gquotino = cpu_to_be64(NULLFSINO); + } *fields &= ~(XFS_SB_PQUOTINO | XFS_SB_GQUOTINO); } diff --git a/include/acpi/video.h b/include/acpi/video.h index ea4c7bbded4d..843ef1adfbfa 100644 --- a/include/acpi/video.h +++ b/include/acpi/video.h @@ -22,6 +22,7 @@ extern void acpi_video_unregister(void); extern void acpi_video_unregister_backlight(void); extern int acpi_video_get_edid(struct acpi_device *device, int type, int device_id, void **edid); +extern bool acpi_video_verify_backlight_support(void); #else static inline int acpi_video_register(void) { return 0; } static inline void acpi_video_unregister(void) { return; } @@ -31,6 +32,7 @@ static inline int acpi_video_get_edid(struct acpi_device *device, int type, { return -ENODEV; } +static inline bool acpi_video_verify_backlight_support(void) { return false; } #endif #endif diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h index 471ba48c7ae4..c1c0b0cf39b4 100644 --- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -693,7 +693,7 @@ . = ALIGN(PAGE_SIZE); \ *(.data..percpu..page_aligned) \ . = ALIGN(cacheline); \ - *(.data..percpu..readmostly) \ + *(.data..percpu..read_mostly) \ . 
= ALIGN(cacheline); \ *(.data..percpu) \ *(.data..percpu..shared_aligned) \ diff --git a/include/drm/i915_pciids.h b/include/drm/i915_pciids.h index 0572035673f3..a70d45647898 100644 --- a/include/drm/i915_pciids.h +++ b/include/drm/i915_pciids.h @@ -237,13 +237,21 @@ #define INTEL_BDW_GT3D_IDS(info) \ _INTEL_BDW_D_IDS(3, info) +#define INTEL_BDW_RSVDM_IDS(info) \ + _INTEL_BDW_M_IDS(4, info) + +#define INTEL_BDW_RSVDD_IDS(info) \ + _INTEL_BDW_D_IDS(4, info) + #define INTEL_BDW_M_IDS(info) \ INTEL_BDW_GT12M_IDS(info), \ - INTEL_BDW_GT3M_IDS(info) + INTEL_BDW_GT3M_IDS(info), \ + INTEL_BDW_RSVDM_IDS(info) #define INTEL_BDW_D_IDS(info) \ INTEL_BDW_GT12D_IDS(info), \ - INTEL_BDW_GT3D_IDS(info) + INTEL_BDW_GT3D_IDS(info), \ + INTEL_BDW_RSVDD_IDS(info) #define INTEL_CHV_IDS(info) \ INTEL_VGA_DEVICE(0x22b0, info), \ diff --git a/include/drm/i915_powerwell.h b/include/drm/i915_powerwell.h index 2baba9996094..baa6f11b1837 100644 --- a/include/drm/i915_powerwell.h +++ b/include/drm/i915_powerwell.h @@ -32,5 +32,6 @@ /* For use by hda_i915 driver */ extern int i915_request_power_well(void); extern int i915_release_power_well(void); +extern int i915_get_cdclk_freq(void); #endif /* _I915_POWERWELL_H_ */ diff --git a/include/dt-bindings/clock/exynos5420.h b/include/dt-bindings/clock/exynos5420.h index 97dcb89d37d3..21d51ae1d242 100644 --- a/include/dt-bindings/clock/exynos5420.h +++ b/include/dt-bindings/clock/exynos5420.h @@ -63,7 +63,6 @@ #define CLK_SCLK_MPHY_IXTAL24 161 /* gate clocks */ -#define CLK_ACLK66_PERIC 256 #define CLK_UART0 257 #define CLK_UART1 258 #define CLK_UART2 259 @@ -203,6 +202,8 @@ #define CLK_MOUT_G3D 641 #define CLK_MOUT_VPLL 642 #define CLK_MOUT_MAUDIO0 643 +#define CLK_MOUT_USER_ACLK333 644 +#define CLK_MOUT_SW_ACLK333 645 /* divider clocks */ #define CLK_DOUT_PIXEL 768 diff --git a/include/dt-bindings/clock/imx6sl-clock.h b/include/dt-bindings/clock/imx6sl-clock.h index 7cf5c9969336..b91dd462ba85 100644 --- a/include/dt-bindings/clock/imx6sl-clock.h +++ b/include/dt-bindings/clock/imx6sl-clock.h @@ -145,6 +145,7 @@ #define IMX6SL_CLK_USDHC4 132 #define IMX6SL_CLK_PLL4_AUDIO_DIV 133 #define IMX6SL_CLK_SPBA 134 -#define IMX6SL_CLK_END 135 +#define IMX6SL_CLK_ENET 135 +#define IMX6SL_CLK_END 136 #endif /* __DT_BINDINGS_CLOCK_IMX6SL_H */ diff --git a/include/dt-bindings/clock/stih415-clks.h b/include/dt-bindings/clock/stih415-clks.h index 0d2c7397e028..d80caa68aebd 100644 --- a/include/dt-bindings/clock/stih415-clks.h +++ b/include/dt-bindings/clock/stih415-clks.h @@ -10,6 +10,7 @@ #define CLK_ETH1_PHY 4 /* CLOCKGEN A1 */ +#define CLK_ICN_IF_2 0 #define CLK_GMAC0_PHY 3 #endif diff --git a/include/dt-bindings/clock/stih416-clks.h b/include/dt-bindings/clock/stih416-clks.h index 552c779eb6af..f9bdbd13568d 100644 --- a/include/dt-bindings/clock/stih416-clks.h +++ b/include/dt-bindings/clock/stih416-clks.h @@ -10,6 +10,7 @@ #define CLK_ETH1_PHY 4 /* CLOCKGEN A1 */ +#define CLK_ICN_IF_2 0 #define CLK_GMAC0_PHY 3 #endif diff --git a/include/linux/bio.h b/include/linux/bio.h index 5a645769f020..d2633ee099d9 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -186,6 +186,15 @@ static inline void *bio_data(struct bio *bio) #define BIOVEC_SEG_BOUNDARY(q, b1, b2) \ __BIO_SEG_BOUNDARY(bvec_to_phys((b1)), bvec_to_phys((b2)) + (b2)->bv_len, queue_segment_boundary((q))) +/* + * Check if adding a bio_vec after bprv with offset would create a gap in + * the SG list. Most drivers don't care about this, but some do. 
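A minimal illustrative sketch (not taken from this diff, assuming 4 KiB pages) of the rule the helper below encodes: a gap is reported unless the new segment starts at page offset 0 and the previous segment ends exactly on a page boundary.

	struct bio_vec prev = { .bv_offset = 0, .bv_len = PAGE_SIZE };

	bvec_gap_to_prev(&prev, 0);	/* false: both ends are page aligned */
	prev.bv_len = 2048;
	bvec_gap_to_prev(&prev, 0);	/* true: previous segment ends mid-page */
	bvec_gap_to_prev(&prev, 512);	/* true: next segment starts mid-page */

Queues that cannot tolerate such gaps advertise it with the QUEUE_FLAG_SG_GAPS flag added to blkdev.h later in this diff.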
+ */ +static inline bool bvec_gap_to_prev(struct bio_vec *bprv, unsigned int offset) +{ + return offset || ((bprv->bv_offset + bprv->bv_len) & (PAGE_SIZE - 1)); +} + #define bio_io_error(bio) bio_endio((bio), -EIO) /* @@ -644,10 +653,6 @@ struct biovec_slab { #if defined(CONFIG_BLK_DEV_INTEGRITY) - - -#define bip_vec_idx(bip, idx) (&(bip->bip_vec[(idx)])) - #define bip_for_each_vec(bvl, bip, iter) \ for_each_bvec(bvl, (bip)->bip_vec, iter, (bip)->bip_iter) diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 713f8b62b435..8699bcf5f099 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -512,6 +512,7 @@ struct request_queue { #define QUEUE_FLAG_DEAD 19 /* queue tear-down finished */ #define QUEUE_FLAG_INIT_DONE 20 /* queue is initialized */ #define QUEUE_FLAG_NO_SG_MERGE 21 /* don't attempt to merge SG segments*/ +#define QUEUE_FLAG_SG_GAPS 22 /* queue doesn't support SG gaps */ #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ (1 << QUEUE_FLAG_STACKABLE) | \ diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index ec4112d257bc..8f8ae95c6e27 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -482,8 +482,8 @@ extern struct cpufreq_governor cpufreq_gov_conservative; *********************************************************************/ /* Special Values of .frequency field */ -#define CPUFREQ_ENTRY_INVALID ~0 -#define CPUFREQ_TABLE_END ~1 +#define CPUFREQ_ENTRY_INVALID ~0u +#define CPUFREQ_TABLE_END ~1u /* Special Values of .flags field */ #define CPUFREQ_BOOST_FREQ (1 << 0) diff --git a/include/linux/elevator.h b/include/linux/elevator.h index e2a6bd7fb133..45a91474487d 100644 --- a/include/linux/elevator.h +++ b/include/linux/elevator.h @@ -143,7 +143,7 @@ extern void elv_drain_elevator(struct request_queue *); * io scheduler registration */ extern void __init load_default_elevator_module(void); -extern int __init elv_register(struct elevator_type *); +extern int elv_register(struct elevator_type *); extern void elv_unregister(struct elevator_type *); /* diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h index 17aa1cce6f8e..30faf797c2c3 100644 --- a/include/linux/kernfs.h +++ b/include/linux/kernfs.h @@ -91,6 +91,7 @@ struct kernfs_elem_attr { const struct kernfs_ops *ops; struct kernfs_open_node *open; loff_t size; + struct kernfs_node *notify_next; /* for kernfs_notify() */ }; /* @@ -304,6 +305,7 @@ struct dentry *kernfs_mount_ns(struct file_system_type *fs_type, int flags, struct kernfs_root *root, unsigned long magic, bool *new_sb_created, const void *ns); void kernfs_kill_sb(struct super_block *sb); +struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns); void kernfs_init(void); diff --git a/include/linux/libata.h b/include/linux/libata.h index 5ab4e3a76721..92abb497ab14 100644 --- a/include/linux/libata.h +++ b/include/linux/libata.h @@ -593,6 +593,7 @@ struct ata_host { struct device *dev; void __iomem * const *iomap; unsigned int n_ports; + unsigned int n_tags; /* nr of NCQ tags */ void *private_data; struct ata_port_operations *ops; unsigned long flags; diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h index b12f4bbd064c..35b51e7af886 100644 --- a/include/linux/mlx4/device.h +++ b/include/linux/mlx4/device.h @@ -578,8 +578,6 @@ struct mlx4_cq { u32 cons_index; u16 irq; - bool irq_affinity_change; - __be32 *set_ci_db; __be32 *arm_db; int arm_sn; @@ -1167,6 +1165,8 @@ int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap, int *vector); void 
mlx4_release_eq(struct mlx4_dev *dev, int vec); +int mlx4_eq_get_irq(struct mlx4_dev *dev, int vec); + int mlx4_get_phys_port_id(struct mlx4_dev *dev); int mlx4_wol_read(struct mlx4_dev *dev, u64 *config, int port); int mlx4_wol_write(struct mlx4_dev *dev, u64 config, int port); diff --git a/include/linux/mutex.h b/include/linux/mutex.h index 11692dea18aa..42aa9b9ecd5f 100644 --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -17,6 +17,7 @@ #include <linux/lockdep.h> #include <linux/atomic.h> #include <asm/processor.h> +#include <linux/osq_lock.h> /* * Simple, straightforward mutexes with strict semantics: @@ -46,7 +47,6 @@ * - detects multi-task circular deadlocks and prints out all affected * locks and tasks (and only those tasks) */ -struct optimistic_spin_queue; struct mutex { /* 1: unlocked, 0: locked, negative: locked, possible waiters */ atomic_t count; @@ -56,7 +56,7 @@ struct mutex { struct task_struct *owner; #endif #ifdef CONFIG_MUTEX_SPIN_ON_OWNER - struct optimistic_spin_queue *osq; /* Spinner MCS lock */ + struct optimistic_spin_queue osq; /* Spinner MCS lock */ #endif #ifdef CONFIG_DEBUG_MUTEXES const char *name; diff --git a/include/linux/nmi.h b/include/linux/nmi.h index 6a45fb583ff1..447775ee2c4b 100644 --- a/include/linux/nmi.h +++ b/include/linux/nmi.h @@ -32,15 +32,24 @@ static inline void touch_nmi_watchdog(void) #ifdef arch_trigger_all_cpu_backtrace static inline bool trigger_all_cpu_backtrace(void) { - arch_trigger_all_cpu_backtrace(); + arch_trigger_all_cpu_backtrace(true); return true; } +static inline bool trigger_allbutself_cpu_backtrace(void) +{ + arch_trigger_all_cpu_backtrace(false); + return true; +} #else static inline bool trigger_all_cpu_backtrace(void) { return false; } +static inline bool trigger_allbutself_cpu_backtrace(void) +{ + return false; +} #endif #ifdef CONFIG_LOCKUP_DETECTOR @@ -48,6 +57,7 @@ int hw_nmi_is_cpu_stuck(struct pt_regs *); u64 hw_nmi_get_sample_period(int watchdog_thresh); extern int watchdog_user_enabled; extern int watchdog_thresh; +extern int sysctl_softlockup_all_cpu_backtrace; struct ctl_table; extern int proc_dowatchdog(struct ctl_table *, int , void __user *, size_t *, loff_t *); diff --git a/include/linux/of_mdio.h b/include/linux/of_mdio.h index a70c9493d55a..d449018d0726 100644 --- a/include/linux/of_mdio.h +++ b/include/linux/of_mdio.h @@ -25,9 +25,6 @@ struct phy_device *of_phy_attach(struct net_device *dev, extern struct mii_bus *of_mdio_find_bus(struct device_node *mdio_np); -extern void of_mdiobus_link_phydev(struct mii_bus *mdio, - struct phy_device *phydev); - #else /* CONFIG_OF */ static inline int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np) { @@ -63,11 +60,6 @@ static inline struct mii_bus *of_mdio_find_bus(struct device_node *mdio_np) { return NULL; } - -static inline void of_mdiobus_link_phydev(struct mii_bus *mdio, - struct phy_device *phydev) -{ -} #endif /* CONFIG_OF */ #if defined(CONFIG_OF) && defined(CONFIG_FIXED_PHY) diff --git a/include/linux/osq_lock.h b/include/linux/osq_lock.h new file mode 100644 index 000000000000..90230d5811c5 --- /dev/null +++ b/include/linux/osq_lock.h @@ -0,0 +1,27 @@ +#ifndef __LINUX_OSQ_LOCK_H +#define __LINUX_OSQ_LOCK_H + +/* + * An MCS like lock especially tailored for optimistic spinning for sleeping + * lock implementations (mutex, rwsem, etc). + */ + +#define OSQ_UNLOCKED_VAL (0) + +struct optimistic_spin_queue { + /* + * Stores an encoded value of the CPU # of the tail node in the queue. 
+ * If the queue is empty, then it's set to OSQ_UNLOCKED_VAL. + */ + atomic_t tail; +}; + +/* Init macro and function. */ +#define OSQ_LOCK_UNLOCKED { ATOMIC_INIT(OSQ_UNLOCKED_VAL) } + +static inline void osq_lock_init(struct optimistic_spin_queue *lock) +{ + atomic_set(&lock->tail, OSQ_UNLOCKED_VAL); +} + +#endif diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 3c545b48aeab..8304959ad336 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -360,6 +360,9 @@ static inline void ClearPageCompound(struct page *page) ClearPageHead(page); } #endif + +#define PG_head_mask ((1L << PG_head)) + #else /* * Reduce page flag use as much as possible by overlapping diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 0a97b583ee8d..e1474ae18c88 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -399,6 +399,18 @@ static inline struct page *read_mapping_page(struct address_space *mapping, } /* + * Get the offset in PAGE_SIZE. + * (TODO: hugepage should have ->index in PAGE_SIZE) + */ +static inline pgoff_t page_to_pgoff(struct page *page) +{ + if (unlikely(PageHeadHuge(page))) + return page->index << compound_order(page); + else + return page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT); +} + +/* * Return byte-offset into filesystem object for page. */ static inline loff_t page_offset(struct page *page) diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h index a5fc7d01aad6..dec01d6c3f80 100644 --- a/include/linux/percpu-defs.h +++ b/include/linux/percpu-defs.h @@ -146,10 +146,10 @@ * Declaration/definition used for per-CPU variables that must be read mostly. */ #define DECLARE_PER_CPU_READ_MOSTLY(type, name) \ - DECLARE_PER_CPU_SECTION(type, name, "..readmostly") + DECLARE_PER_CPU_SECTION(type, name, "..read_mostly") #define DEFINE_PER_CPU_READ_MOSTLY(type, name) \ - DEFINE_PER_CPU_SECTION(type, name, "..readmostly") + DEFINE_PER_CPU_SECTION(type, name, "..read_mostly") /* * Intermodule exports for per-CPU variables. sparse forgets about diff --git a/include/linux/phy.h b/include/linux/phy.h index 864ddafad8cc..68041446c450 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -536,6 +536,15 @@ struct phy_driver { /* See set_wol, but for checking whether Wake on LAN is enabled. */ void (*get_wol)(struct phy_device *dev, struct ethtool_wolinfo *wol); + /* + * Called to inform a PHY device driver when the core is about to + * change the link state. This callback is supposed to be used as + * fixup hook for drivers that need to take action when the link + * state changes. Drivers are by no means allowed to mess with the + * PHY device structure in their implementations. + */ + void (*link_change_notify)(struct phy_device *dev); + struct device_driver driver; }; #define to_phy_driver(d) container_of(d, struct phy_driver, driver) diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h index 077904c8b70d..cc79eff4a1ad 100644 --- a/include/linux/ptrace.h +++ b/include/linux/ptrace.h @@ -334,6 +334,9 @@ static inline void user_single_step_siginfo(struct task_struct *tsk, * calling arch_ptrace_stop() when it would be superfluous. For example, * if the thread has not been back to user mode since the last stop, the * thread state might indicate that nothing needs to be done. + * + * This is guaranteed to be invoked once before a task stops for ptrace and + * may include arch-specific operations necessary prior to a ptrace stop. 
*/ #define arch_ptrace_stop_needed(code, info) (0) #endif diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index 5a75d19aa661..6a94cc8b1ca0 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -44,7 +44,6 @@ #include <linux/debugobjects.h> #include <linux/bug.h> #include <linux/compiler.h> -#include <linux/percpu.h> #include <asm/barrier.h> extern int rcu_expedited; /* for sysctl */ @@ -300,41 +299,6 @@ bool __rcu_is_watching(void); #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */ /* - * Hooks for cond_resched() and friends to avoid RCU CPU stall warnings. - */ - -#define RCU_COND_RESCHED_LIM 256 /* ms vs. 100s of ms. */ -DECLARE_PER_CPU(int, rcu_cond_resched_count); -void rcu_resched(void); - -/* - * Is it time to report RCU quiescent states? - * - * Note unsynchronized access to rcu_cond_resched_count. Yes, we might - * increment some random CPU's count, and possibly also load the result from - * yet another CPU's count. We might even clobber some other CPU's attempt - * to zero its counter. This is all OK because the goal is not precision, - * but rather reasonable amortization of rcu_note_context_switch() overhead - * and extremely high probability of avoiding RCU CPU stall warnings. - * Note that this function has to be preempted in just the wrong place, - * many thousands of times in a row, for anything bad to happen. - */ -static inline bool rcu_should_resched(void) -{ - return raw_cpu_inc_return(rcu_cond_resched_count) >= - RCU_COND_RESCHED_LIM; -} - -/* - * Report quiscent states to RCU if it is time to do so. - */ -static inline void rcu_cond_resched(void) -{ - if (unlikely(rcu_should_resched())) - rcu_resched(); -} - -/* * Infrastructure to implement the synchronize_() primitives in * TREE_RCU and rcu_barrier_() primitives in TINY_RCU. */ @@ -358,9 +322,19 @@ void wait_rcu_gp(call_rcu_func_t crf); * initialization. 
*/ #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD +void init_rcu_head(struct rcu_head *head); +void destroy_rcu_head(struct rcu_head *head); void init_rcu_head_on_stack(struct rcu_head *head); void destroy_rcu_head_on_stack(struct rcu_head *head); #else /* !CONFIG_DEBUG_OBJECTS_RCU_HEAD */ +static inline void init_rcu_head(struct rcu_head *head) +{ +} + +static inline void destroy_rcu_head(struct rcu_head *head) +{ +} + static inline void init_rcu_head_on_stack(struct rcu_head *head) { } diff --git a/include/linux/rwsem-spinlock.h b/include/linux/rwsem-spinlock.h index d5b13bc07a0b..561e8615528d 100644 --- a/include/linux/rwsem-spinlock.h +++ b/include/linux/rwsem-spinlock.h @@ -15,13 +15,13 @@ #ifdef __KERNEL__ /* * the rw-semaphore definition - * - if activity is 0 then there are no active readers or writers - * - if activity is +ve then that is the number of active readers - * - if activity is -1 then there is one active writer + * - if count is 0 then there are no active readers or writers + * - if count is +ve then that is the number of active readers + * - if count is -1 then there is one active writer * - if wait_list is not empty, then there are processes waiting for the semaphore */ struct rw_semaphore { - __s32 activity; + __s32 count; raw_spinlock_t wait_lock; struct list_head wait_list; #ifdef CONFIG_DEBUG_LOCK_ALLOC diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h index 8d79708146aa..035d3c57fc8a 100644 --- a/include/linux/rwsem.h +++ b/include/linux/rwsem.h @@ -13,10 +13,11 @@ #include <linux/kernel.h> #include <linux/list.h> #include <linux/spinlock.h> - #include <linux/atomic.h> +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER +#include <linux/osq_lock.h> +#endif -struct optimistic_spin_queue; struct rw_semaphore; #ifdef CONFIG_RWSEM_GENERIC_SPINLOCK @@ -25,15 +26,15 @@ struct rw_semaphore; /* All arch specific implementations share the same struct */ struct rw_semaphore { long count; - raw_spinlock_t wait_lock; struct list_head wait_list; -#ifdef CONFIG_SMP + raw_spinlock_t wait_lock; +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER + struct optimistic_spin_queue osq; /* spinner MCS lock */ /* * Write owner. Used as a speculative check to see * if the owner is running on the cpu. 
*/ struct task_struct *owner; - struct optimistic_spin_queue *osq; /* spinner MCS lock */ #endif #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; @@ -64,22 +65,19 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem) # define __RWSEM_DEP_MAP_INIT(lockname) #endif -#if defined(CONFIG_SMP) && !defined(CONFIG_RWSEM_GENERIC_SPINLOCK) -#define __RWSEM_INITIALIZER(name) \ - { RWSEM_UNLOCKED_VALUE, \ - __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \ - LIST_HEAD_INIT((name).wait_list), \ - NULL, /* owner */ \ - NULL /* mcs lock */ \ - __RWSEM_DEP_MAP_INIT(name) } +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER +#define __RWSEM_OPT_INIT(lockname) , .osq = OSQ_LOCK_UNLOCKED, .owner = NULL #else -#define __RWSEM_INITIALIZER(name) \ - { RWSEM_UNLOCKED_VALUE, \ - __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \ - LIST_HEAD_INIT((name).wait_list) \ - __RWSEM_DEP_MAP_INIT(name) } +#define __RWSEM_OPT_INIT(lockname) #endif +#define __RWSEM_INITIALIZER(name) \ + { .count = RWSEM_UNLOCKED_VALUE, \ + .wait_list = LIST_HEAD_INIT((name).wait_list), \ + .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock) \ + __RWSEM_OPT_INIT(name) \ + __RWSEM_DEP_MAP_INIT(name) } + #define DECLARE_RWSEM(name) \ struct rw_semaphore name = __RWSEM_INITIALIZER(name) diff --git a/include/linux/sched.h b/include/linux/sched.h index 306f4f0c987a..0376b054a0d0 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -872,21 +872,21 @@ enum cpu_idle_type { #define SD_NUMA 0x4000 /* cross-node balancing */ #ifdef CONFIG_SCHED_SMT -static inline const int cpu_smt_flags(void) +static inline int cpu_smt_flags(void) { return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; } #endif #ifdef CONFIG_SCHED_MC -static inline const int cpu_core_flags(void) +static inline int cpu_core_flags(void) { return SD_SHARE_PKG_RESOURCES; } #endif #ifdef CONFIG_NUMA -static inline const int cpu_numa_flags(void) +static inline int cpu_numa_flags(void) { return SD_NUMA; } @@ -999,7 +999,7 @@ void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms); bool cpus_share_cache(int this_cpu, int that_cpu); typedef const struct cpumask *(*sched_domain_mask_f)(int cpu); -typedef const int (*sched_domain_flags_f)(void); +typedef int (*sched_domain_flags_f)(void); #define SDTL_OVERLAP 0x01 diff --git a/include/linux/socket.h b/include/linux/socket.h index 8e98297f1388..ec538fc287a6 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -305,8 +305,6 @@ struct ucred { /* IPX options */ #define IPX_TYPE 1 -extern int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov, - int offset, int len); extern int csum_partial_copy_fromiovecend(unsigned char *kdata, struct iovec *iov, int offset, @@ -315,8 +313,6 @@ extern unsigned long iov_pages(const struct iovec *iov, int offset, unsigned long nr_segs); extern int verify_iovec(struct msghdr *m, struct iovec *iov, struct sockaddr_storage *address, int mode); -extern int memcpy_toiovecend(const struct iovec *v, unsigned char *kdata, - int offset, int len); extern int move_addr_to_kernel(void __user *uaddr, int ulen, struct sockaddr_storage *kaddr); extern int put_cmsg(struct msghdr*, int level, int type, int len, void *data); diff --git a/include/linux/uio.h b/include/linux/uio.h index e2231e47cec1..09a7cffc224e 100644 --- a/include/linux/uio.h +++ b/include/linux/uio.h @@ -94,8 +94,20 @@ static inline size_t iov_iter_count(struct iov_iter *i) return i->count; } -static inline void iov_iter_truncate(struct iov_iter *i, size_t count) +/* + * Cap the iov_iter by given limit; note that the 
second argument is + * *not* the new size - it's upper limit for such. Passing it a value + * greater than the amount of data in iov_iter is fine - it'll just do + * nothing in that case. + */ +static inline void iov_iter_truncate(struct iov_iter *i, u64 count) { + /* + * count doesn't have to fit in size_t - comparison extends both + * operands to u64 here and any value that would be truncated by + * conversion in assignement is by definition greater than all + * values of size_t, including old i->count. + */ if (i->count > count) i->count = count; } @@ -111,6 +123,9 @@ static inline void iov_iter_reexpand(struct iov_iter *i, size_t count) int memcpy_fromiovec(unsigned char *kdata, struct iovec *iov, int len); int memcpy_toiovec(struct iovec *iov, unsigned char *kdata, int len); - +int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov, + int offset, int len); +int memcpy_toiovecend(const struct iovec *v, unsigned char *kdata, + int offset, int len); #endif diff --git a/include/linux/usb_usual.h b/include/linux/usb_usual.h index 1a64b26046ed..9b7de1b46437 100644 --- a/include/linux/usb_usual.h +++ b/include/linux/usb_usual.h @@ -70,7 +70,9 @@ US_FLAG(NEEDS_CAP16, 0x00400000) \ /* cannot handle READ_CAPACITY_10 */ \ US_FLAG(IGNORE_UAS, 0x00800000) \ - /* Device advertises UAS but it is broken */ + /* Device advertises UAS but it is broken */ \ + US_FLAG(BROKEN_FUA, 0x01000000) \ + /* Cannot handle FUA in WRITE or READ CDBs */ \ #define US_FLAG(name, value) US_FL_##name = value , enum { US_DO_ALL_FLAGS }; diff --git a/include/net/neighbour.h b/include/net/neighbour.h index 7277caf3743d..47f425464f84 100644 --- a/include/net/neighbour.h +++ b/include/net/neighbour.h @@ -203,7 +203,6 @@ struct neigh_table { void (*proxy_redo)(struct sk_buff *skb); char *id; struct neigh_parms parms; - /* HACK. gc_* should follow parms without a gap! 
*/ int gc_interval; int gc_thresh1; int gc_thresh2; diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h index 7ee6ce6564ae..c4d86198d3d6 100644 --- a/include/net/netfilter/nf_tables.h +++ b/include/net/netfilter/nf_tables.h @@ -6,6 +6,7 @@ #include <linux/netfilter/nfnetlink.h> #include <linux/netfilter/x_tables.h> #include <linux/netfilter/nf_tables.h> +#include <linux/u64_stats_sync.h> #include <net/netlink.h> #define NFT_JUMP_STACK_SIZE 16 @@ -503,9 +504,9 @@ enum nft_chain_flags { * @net: net namespace that this chain belongs to * @table: table that this chain belongs to * @handle: chain handle - * @flags: bitmask of enum nft_chain_flags * @use: number of jump references to this chain * @level: length of longest path to this chain + * @flags: bitmask of enum nft_chain_flags * @name: name of the chain */ struct nft_chain { @@ -514,9 +515,9 @@ struct nft_chain { struct net *net; struct nft_table *table; u64 handle; - u8 flags; - u16 use; + u32 use; u16 level; + u8 flags; char name[NFT_CHAIN_MAXNAMELEN]; }; @@ -528,8 +529,9 @@ enum nft_chain_type { }; struct nft_stats { - u64 bytes; - u64 pkts; + u64 bytes; + u64 pkts; + struct u64_stats_sync syncp; }; #define NFT_HOOK_OPS_MAX 2 diff --git a/include/net/netns/ieee802154_6lowpan.h b/include/net/netns/ieee802154_6lowpan.h index 079030c853d8..e2070960bac0 100644 --- a/include/net/netns/ieee802154_6lowpan.h +++ b/include/net/netns/ieee802154_6lowpan.h @@ -16,7 +16,7 @@ struct netns_sysctl_lowpan { struct netns_ieee802154_lowpan { struct netns_sysctl_lowpan sysctl; struct netns_frags frags; - u16 max_dsize; + int max_dsize; }; #endif diff --git a/include/net/netns/nftables.h b/include/net/netns/nftables.h index 26a394cb91a8..eee608b12cc9 100644 --- a/include/net/netns/nftables.h +++ b/include/net/netns/nftables.h @@ -13,8 +13,8 @@ struct netns_nftables { struct nft_af_info *inet; struct nft_af_info *arp; struct nft_af_info *bridge; + unsigned int base_seq; u8 gencursor; - u8 genctr; }; #endif diff --git a/include/net/sock.h b/include/net/sock.h index 07b7fcd60d80..156350745700 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -1730,8 +1730,8 @@ sk_dst_get(struct sock *sk) rcu_read_lock(); dst = rcu_dereference(sk->sk_dst_cache); - if (dst) - dst_hold(dst); + if (dst && !atomic_inc_not_zero(&dst->__refcnt)) + dst = NULL; rcu_read_unlock(); return dst; } @@ -1768,9 +1768,11 @@ __sk_dst_set(struct sock *sk, struct dst_entry *dst) static inline void sk_dst_set(struct sock *sk, struct dst_entry *dst) { - spin_lock(&sk->sk_dst_lock); - __sk_dst_set(sk, dst); - spin_unlock(&sk->sk_dst_lock); + struct dst_entry *old_dst; + + sk_tx_queue_clear(sk); + old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst); + dst_release(old_dst); } static inline void @@ -1782,9 +1784,7 @@ __sk_dst_reset(struct sock *sk) static inline void sk_dst_reset(struct sock *sk) { - spin_lock(&sk->sk_dst_lock); - __sk_dst_reset(sk); - spin_unlock(&sk->sk_dst_lock); + sk_dst_set(sk, NULL); } struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie); diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h index 42ed789ebafc..e0ae71098144 100644 --- a/include/scsi/scsi_cmnd.h +++ b/include/scsi/scsi_cmnd.h @@ -318,7 +318,7 @@ static inline void set_driver_byte(struct scsi_cmnd *cmd, char status) static inline unsigned scsi_transfer_length(struct scsi_cmnd *scmd) { - unsigned int xfer_len = blk_rq_bytes(scmd->request); + unsigned int xfer_len = scsi_out(scmd)->length; unsigned int prot_op = scsi_get_prot_op(scmd); 
unsigned int sector_size = scmd->device->sector_size; diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h index 5853c913d2b0..27ab31017f09 100644 --- a/include/scsi/scsi_device.h +++ b/include/scsi/scsi_device.h @@ -173,6 +173,7 @@ struct scsi_device { unsigned is_visible:1; /* is the device visible in sysfs */ unsigned wce_default_on:1; /* Cache is ON by default */ unsigned no_dif:1; /* T10 PI (DIF) should be disabled */ + unsigned broken_fua:1; /* Don't set FUA bit */ atomic_t disk_events_disable_depth; /* disable depth for disk events */ diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h index 0fd06fef9fac..26b4f2e13275 100644 --- a/include/trace/ftrace.h +++ b/include/trace/ftrace.h @@ -44,6 +44,12 @@ #undef __field_ext #define __field_ext(type, item, filter_type) type item; +#undef __field_struct +#define __field_struct(type, item) type item; + +#undef __field_struct_ext +#define __field_struct_ext(type, item, filter_type) type item; + #undef __array #define __array(type, item, len) type item[len]; @@ -122,6 +128,12 @@ #undef __field_ext #define __field_ext(type, item, filter_type) +#undef __field_struct +#define __field_struct(type, item) + +#undef __field_struct_ext +#define __field_struct_ext(type, item, filter_type) + #undef __array #define __array(type, item, len) @@ -315,9 +327,21 @@ static struct trace_event_functions ftrace_event_type_funcs_##call = { \ if (ret) \ return ret; +#undef __field_struct_ext +#define __field_struct_ext(type, item, filter_type) \ + ret = trace_define_field(event_call, #type, #item, \ + offsetof(typeof(field), item), \ + sizeof(field.item), \ + 0, filter_type); \ + if (ret) \ + return ret; + #undef __field #define __field(type, item) __field_ext(type, item, FILTER_OTHER) +#undef __field_struct +#define __field_struct(type, item) __field_struct_ext(type, item, FILTER_OTHER) + #undef __array #define __array(type, item, len) \ do { \ @@ -379,6 +403,12 @@ ftrace_define_fields_##call(struct ftrace_event_call *event_call) \ #undef __field_ext #define __field_ext(type, item, filter_type) +#undef __field_struct +#define __field_struct(type, item) + +#undef __field_struct_ext +#define __field_struct_ext(type, item, filter_type) + #undef __array #define __array(type, item, len) @@ -550,6 +580,9 @@ static inline notrace int ftrace_get_offsets_##call( \ #undef __field #define __field(type, item) +#undef __field_struct +#define __field_struct(type, item) + #undef __array #define __array(type, item, len) diff --git a/include/trace/syscall.h b/include/trace/syscall.h index fed853f3d7aa..9674145e2f6a 100644 --- a/include/trace/syscall.h +++ b/include/trace/syscall.h @@ -4,6 +4,7 @@ #include <linux/tracepoint.h> #include <linux/unistd.h> #include <linux/ftrace_event.h> +#include <linux/thread_info.h> #include <asm/ptrace.h> @@ -32,4 +33,18 @@ struct syscall_metadata { struct ftrace_event_call *exit_event; }; +#if defined(CONFIG_TRACEPOINTS) && defined(CONFIG_HAVE_SYSCALL_TRACEPOINTS) +static inline void syscall_tracepoint_update(struct task_struct *p) +{ + if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) + set_tsk_thread_flag(p, TIF_SYSCALL_TRACEPOINT); + else + clear_tsk_thread_flag(p, TIF_SYSCALL_TRACEPOINT); +} +#else +static inline void syscall_tracepoint_update(struct task_struct *p) +{ +} +#endif + #endif /* _TRACE_SYSCALL_H */ diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h index 6f9c38ce45c7..2f47824e7a36 100644 --- a/include/uapi/linux/btrfs.h +++ b/include/uapi/linux/btrfs.h @@ -38,6 +38,7 @@ struct 
btrfs_ioctl_vol_args { #define BTRFS_SUBVOL_QGROUP_INHERIT (1ULL << 2) #define BTRFS_FSID_SIZE 16 #define BTRFS_UUID_SIZE 16 +#define BTRFS_UUID_UNPARSED_SIZE 37 #define BTRFS_QGROUP_INHERIT_SET_LIMITS (1ULL << 0) diff --git a/include/uapi/linux/fuse.h b/include/uapi/linux/fuse.h index 40b5ca8a1b1f..25084a052a1e 100644 --- a/include/uapi/linux/fuse.h +++ b/include/uapi/linux/fuse.h @@ -101,6 +101,7 @@ * - add FATTR_CTIME * - add ctime and ctimensec to fuse_setattr_in * - add FUSE_RENAME2 request + * - add FUSE_NO_OPEN_SUPPORT flag */ #ifndef _LINUX_FUSE_H @@ -229,6 +230,7 @@ struct fuse_file_lock { * FUSE_READDIRPLUS_AUTO: adaptive readdirplus * FUSE_ASYNC_DIO: asynchronous direct I/O submission * FUSE_WRITEBACK_CACHE: use writeback cache for buffered writes + * FUSE_NO_OPEN_SUPPORT: kernel supports zero-message opens */ #define FUSE_ASYNC_READ (1 << 0) #define FUSE_POSIX_LOCKS (1 << 1) @@ -247,6 +249,7 @@ struct fuse_file_lock { #define FUSE_READDIRPLUS_AUTO (1 << 14) #define FUSE_ASYNC_DIO (1 << 15) #define FUSE_WRITEBACK_CACHE (1 << 16) +#define FUSE_NO_OPEN_SUPPORT (1 << 17) /** * CUSE INIT request/reply flags diff --git a/include/uapi/linux/usb/functionfs.h b/include/uapi/linux/usb/functionfs.h index 2a4b4a72a4f9..24b68c59dcf8 100644 --- a/include/uapi/linux/usb/functionfs.h +++ b/include/uapi/linux/usb/functionfs.h @@ -33,6 +33,13 @@ struct usb_endpoint_descriptor_no_audio { __u8 bInterval; } __attribute__((packed)); +/* Legacy format, deprecated as of 3.14. */ +struct usb_functionfs_descs_head { + __le32 magic; + __le32 length; + __le32 fs_count; + __le32 hs_count; +} __attribute__((packed, deprecated)); /* * Descriptors format: diff --git a/include/uapi/sound/compress_offload.h b/include/uapi/sound/compress_offload.h index 21eed488783f..1964026b5e09 100644 --- a/include/uapi/sound/compress_offload.h +++ b/include/uapi/sound/compress_offload.h @@ -39,7 +39,7 @@ struct snd_compressed_buffer { __u32 fragment_size; __u32 fragments; -}; +} __attribute__((packed, aligned(4))); /** * struct snd_compr_params: compressed stream params @@ -51,7 +51,7 @@ struct snd_compr_params { struct snd_compressed_buffer buffer; struct snd_codec codec; __u8 no_wake_mode; -}; +} __attribute__((packed, aligned(4))); /** * struct snd_compr_tstamp: timestamp descriptor @@ -70,7 +70,7 @@ struct snd_compr_tstamp { __u32 pcm_frames; __u32 pcm_io_frames; __u32 sampling_rate; -}; +} __attribute__((packed, aligned(4))); /** * struct snd_compr_avail: avail descriptor @@ -80,7 +80,7 @@ struct snd_compr_tstamp { struct snd_compr_avail { __u64 avail; struct snd_compr_tstamp tstamp; -} __attribute__((packed)); +} __attribute__((packed, aligned(4))); enum snd_compr_direction { SND_COMPRESS_PLAYBACK = 0, @@ -107,7 +107,7 @@ struct snd_compr_caps { __u32 max_fragments; __u32 codecs[MAX_NUM_CODECS]; __u32 reserved[11]; -}; +} __attribute__((packed, aligned(4))); /** * struct snd_compr_codec_caps: query capability of codec @@ -119,7 +119,7 @@ struct snd_compr_codec_caps { __u32 codec; __u32 num_descriptors; struct snd_codec_desc descriptor[MAX_NUM_CODEC_DESCRIPTORS]; -}; +} __attribute__((packed, aligned(4))); /** * @SNDRV_COMPRESS_ENCODER_PADDING: no of samples appended by the encoder at the @@ -140,7 +140,7 @@ enum { struct snd_compr_metadata { __u32 key; __u32 value[8]; -}; +} __attribute__((packed, aligned(4))); /** * compress path ioctl definitions diff --git a/include/uapi/sound/compress_params.h b/include/uapi/sound/compress_params.h index 165e7059de75..d9bd9ca0d5b0 100644 --- a/include/uapi/sound/compress_params.h 
+++ b/include/uapi/sound/compress_params.h @@ -268,7 +268,7 @@ struct snd_enc_vorbis { __u32 max_bit_rate; __u32 min_bit_rate; __u32 downmix; -}; +} __attribute__((packed, aligned(4))); /** @@ -284,7 +284,7 @@ struct snd_enc_real { __u32 quant_bits; __u32 start_region; __u32 num_regions; -}; +} __attribute__((packed, aligned(4))); /** * struct snd_enc_flac @@ -308,12 +308,12 @@ struct snd_enc_real { struct snd_enc_flac { __u32 num; __u32 gain; -}; +} __attribute__((packed, aligned(4))); struct snd_enc_generic { __u32 bw; /* encoder bandwidth */ __s32 reserved[15]; -}; +} __attribute__((packed, aligned(4))); union snd_codec_options { struct snd_enc_wma wma; @@ -321,7 +321,7 @@ union snd_codec_options { struct snd_enc_real real; struct snd_enc_flac flac; struct snd_enc_generic generic; -}; +} __attribute__((packed, aligned(4))); /** struct snd_codec_desc - description of codec capabilities * @max_ch: Maximum number of audio channels @@ -358,7 +358,7 @@ struct snd_codec_desc { __u32 formats; __u32 min_buffer; __u32 reserved[15]; -}; +} __attribute__((packed, aligned(4))); /** struct snd_codec * @id: Identifies the supported audio encoder/decoder. @@ -399,6 +399,6 @@ struct snd_codec { __u32 align; union snd_codec_options options; __u32 reserved[3]; -}; +} __attribute__((packed, aligned(4))); #endif diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks index 35536d9c0964..76768ee812b2 100644 --- a/kernel/Kconfig.locks +++ b/kernel/Kconfig.locks @@ -220,9 +220,16 @@ config INLINE_WRITE_UNLOCK_IRQRESTORE endif +config ARCH_SUPPORTS_ATOMIC_RMW + bool + config MUTEX_SPIN_ON_OWNER def_bool y - depends on SMP && !DEBUG_MUTEXES + depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW + +config RWSEM_SPIN_ON_OWNER + def_bool y + depends on SMP && RWSEM_XCHGADD_ALGORITHM && ARCH_SUPPORTS_ATOMIC_RMW config ARCH_USE_QUEUE_RWLOCK bool diff --git a/kernel/cgroup.c b/kernel/cgroup.c index 7868fc3c0bc5..70776aec2562 100644 --- a/kernel/cgroup.c +++ b/kernel/cgroup.c @@ -1648,10 +1648,13 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type, int flags, const char *unused_dev_name, void *data) { + struct super_block *pinned_sb = NULL; + struct cgroup_subsys *ss; struct cgroup_root *root; struct cgroup_sb_opts opts; struct dentry *dentry; int ret; + int i; bool new_sb; /* @@ -1677,6 +1680,27 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type, goto out_unlock; } + /* + * Destruction of cgroup root is asynchronous, so subsystems may + * still be dying after the previous unmount. Let's drain the + * dying subsystems. We just need to ensure that the ones + * unmounted previously finish dying and don't care about new ones + * starting. Testing ref liveliness is good enough. + */ + for_each_subsys(ss, i) { + if (!(opts.subsys_mask & (1 << i)) || + ss->root == &cgrp_dfl_root) + continue; + + if (!percpu_ref_tryget_live(&ss->root->cgrp.self.refcnt)) { + mutex_unlock(&cgroup_mutex); + msleep(10); + ret = restart_syscall(); + goto out_free; + } + cgroup_put(&ss->root->cgrp); + } + for_each_root(root) { bool name_match = false; @@ -1717,15 +1741,23 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type, } /* - * A root's lifetime is governed by its root cgroup. - * tryget_live failure indicate that the root is being - * destroyed. Wait for destruction to complete so that the - * subsystems are free. We can use wait_queue for the wait - * but this path is super cold. Let's just sleep for a bit - * and retry. 
+ * We want to reuse @root whose lifetime is governed by its + * ->cgrp. Let's check whether @root is alive and keep it + * that way. As cgroup_kill_sb() can happen anytime, we + * want to block it by pinning the sb so that @root doesn't + * get killed before mount is complete. + * + * With the sb pinned, tryget_live can reliably indicate + * whether @root can be reused. If it's being killed, + * drain it. We can use wait_queue for the wait but this + * path is super cold. Let's just sleep a bit and retry. */ - if (!percpu_ref_tryget_live(&root->cgrp.self.refcnt)) { + pinned_sb = kernfs_pin_sb(root->kf_root, NULL); + if (IS_ERR(pinned_sb) || + !percpu_ref_tryget_live(&root->cgrp.self.refcnt)) { mutex_unlock(&cgroup_mutex); + if (!IS_ERR_OR_NULL(pinned_sb)) + deactivate_super(pinned_sb); msleep(10); ret = restart_syscall(); goto out_free; @@ -1770,6 +1802,16 @@ out_free: CGROUP_SUPER_MAGIC, &new_sb); if (IS_ERR(dentry) || !new_sb) cgroup_put(&root->cgrp); + + /* + * If @pinned_sb, we're reusing an existing root and holding an + * extra ref on its sb. Mount is complete. Put the extra ref. + */ + if (pinned_sb) { + WARN_ON(new_sb); + deactivate_super(pinned_sb); + } + return dentry; } @@ -3328,7 +3370,7 @@ bool css_has_online_children(struct cgroup_subsys_state *css) rcu_read_lock(); css_for_each_child(child, css) { - if (css->flags & CSS_ONLINE) { + if (child->flags & CSS_ONLINE) { ret = true; break; } diff --git a/kernel/cpuset.c b/kernel/cpuset.c index f6b33c696224..116a4164720a 100644 --- a/kernel/cpuset.c +++ b/kernel/cpuset.c @@ -1181,7 +1181,13 @@ done: int current_cpuset_is_being_rebound(void) { - return task_cs(current) == cpuset_being_rebound; + int ret; + + rcu_read_lock(); + ret = task_cs(current) == cpuset_being_rebound; + rcu_read_unlock(); + + return ret; } static int update_relax_domain_level(struct cpuset *cs, s64 val) @@ -1617,7 +1623,17 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of, * resources, wait for the previously scheduled operations before * proceeding, so that we don't end up keep removing tasks added * after execution capability is restored. + * + * cpuset_hotplug_work calls back into cgroup core via + * cgroup_transfer_tasks() and waiting for it from a cgroupfs + * operation like this one can lead to a deadlock through kernfs + * active_ref protection. Let's break the protection. Losing the + * protection is okay as we check whether @cs is online after + * grabbing cpuset_mutex anyway. This only happens on the legacy + * hierarchies. */ + css_get(&cs->css); + kernfs_break_active_protection(of->kn); flush_work(&cpuset_hotplug_work); mutex_lock(&cpuset_mutex); @@ -1645,6 +1661,8 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of, free_trial_cpuset(trialcs); out_unlock: mutex_unlock(&cpuset_mutex); + kernfs_unbreak_active_protection(of->kn); + css_put(&cs->css); return retval ?: nbytes; } diff --git a/kernel/events/core.c b/kernel/events/core.c index 47996766e3da..1cf24b3e42ec 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -2320,7 +2320,7 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn, next_parent = rcu_dereference(next_ctx->parent_ctx); /* If neither context have a parent context; they cannot be clones. 
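An illustrative gloss (not from the patch) on the one-character fix just below: the optimized context switch may only swap cloned contexts, and a context without a parent cannot be a clone, so the test has to bail out when either parent pointer is NULL rather than only when both are.

	/* the corrected condition is equivalent to: proceed only if (parent && next_parent) */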
*/ - if (!parent && !next_parent) + if (!parent || !next_parent) goto unlock; if (next_parent == ctx || next_ctx == parent || next_parent == parent) { @@ -7464,7 +7464,19 @@ __perf_event_exit_task(struct perf_event *child_event, struct perf_event_context *child_ctx, struct task_struct *child) { - perf_remove_from_context(child_event, true); + /* + * Do not destroy the 'original' grouping; because of the context + * switch optimization the original events could've ended up in a + * random child task. + * + * If we were to destroy the original group, all group related + * operations would cease to function properly after this random + * child dies. + * + * Do destroy all inherited groups, we don't care about those + * and being thorough is better. + */ + perf_remove_from_context(child_event, !!child_event->parent); /* * It can happen that the parent exits first, and has events @@ -7480,7 +7492,7 @@ __perf_event_exit_task(struct perf_event *child_event, static void perf_event_exit_task_context(struct task_struct *child, int ctxn) { struct perf_event *child_event, *next; - struct perf_event_context *child_ctx; + struct perf_event_context *child_ctx, *parent_ctx; unsigned long flags; if (likely(!child->perf_event_ctxp[ctxn])) { @@ -7505,6 +7517,15 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn) raw_spin_lock(&child_ctx->lock); task_ctx_sched_out(child_ctx); child->perf_event_ctxp[ctxn] = NULL; + + /* + * In order to avoid freeing: child_ctx->parent_ctx->task + * under perf_event_context::lock, grab another reference. + */ + parent_ctx = child_ctx->parent_ctx; + if (parent_ctx) + get_ctx(parent_ctx); + /* * If this context is a clone; unclone it so it can't get * swapped to another process while we're removing all @@ -7515,6 +7536,13 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn) raw_spin_unlock_irqrestore(&child_ctx->lock, flags); /* + * Now that we no longer hold perf_event_context::lock, drop + * our extra child_ctx->parent_ctx reference. + */ + if (parent_ctx) + put_ctx(parent_ctx); + + /* * Report the task dead after unscheduling the events so that we * won't get any samples after PERF_RECORD_EXIT. We can however still * get a few PERF_RECORD_READ events. diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index c445e392e93f..6f3254e8c137 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -846,7 +846,7 @@ static void __uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *u { int err; - if (!consumer_del(uprobe, uc)) /* WARN? 
*/ + if (WARN_ON(!consumer_del(uprobe, uc))) return; err = register_for_each_vma(uprobe, NULL); @@ -927,7 +927,7 @@ int uprobe_apply(struct inode *inode, loff_t offset, int ret = -ENOENT; uprobe = find_uprobe(inode, offset); - if (!uprobe) + if (WARN_ON(!uprobe)) return ret; down_write(&uprobe->register_rwsem); @@ -952,7 +952,7 @@ void uprobe_unregister(struct inode *inode, loff_t offset, struct uprobe_consume struct uprobe *uprobe; uprobe = find_uprobe(inode, offset); - if (!uprobe) + if (WARN_ON(!uprobe)) return; down_write(&uprobe->register_rwsem); diff --git a/kernel/fork.c b/kernel/fork.c index d2799d1fc952..6a13c46cd87d 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -1487,7 +1487,9 @@ static struct task_struct *copy_process(unsigned long clone_flags, total_forks++; spin_unlock(¤t->sighand->siglock); + syscall_tracepoint_update(p); write_unlock_irq(&tasklist_lock); + proc_fork_connector(p); cgroup_post_fork(p); if (clone_flags & CLONE_THREAD) diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c index 7339e42a85ab..1487a123db5c 100644 --- a/kernel/irq/irqdesc.c +++ b/kernel/irq/irqdesc.c @@ -455,9 +455,9 @@ EXPORT_SYMBOL_GPL(irq_alloc_hwirqs); */ void irq_free_hwirqs(unsigned int from, int cnt) { - int i; + int i, j; - for (i = from; cnt > 0; i++, cnt--) { + for (i = from, j = cnt; j > 0; i++, j--) { irq_set_status_flags(i, _IRQ_NOREQUEST | _IRQ_NOPROBE); arch_teardown_hwirq(i); } diff --git a/kernel/kexec.c b/kernel/kexec.c index 6748688813d0..369f41a94124 100644 --- a/kernel/kexec.c +++ b/kernel/kexec.c @@ -1617,6 +1617,7 @@ static int __init crash_save_vmcoreinfo_init(void) #ifdef CONFIG_MEMORY_FAILURE VMCOREINFO_NUMBER(PG_hwpoison); #endif + VMCOREINFO_NUMBER(PG_head_mask); VMCOREINFO_NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE); arch_crash_save_vmcoreinfo(); diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 3214289df5a7..734e9a7d280b 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -2037,19 +2037,23 @@ static int __init populate_kprobe_blacklist(unsigned long *start, { unsigned long *iter; struct kprobe_blacklist_entry *ent; - unsigned long offset = 0, size = 0; + unsigned long entry, offset = 0, size = 0; for (iter = start; iter < end; iter++) { - if (!kallsyms_lookup_size_offset(*iter, &size, &offset)) { - pr_err("Failed to find blacklist %p\n", (void *)*iter); + entry = arch_deref_entry_point((void *)*iter); + + if (!kernel_text_address(entry) || + !kallsyms_lookup_size_offset(entry, &size, &offset)) { + pr_err("Failed to find blacklist at %p\n", + (void *)entry); continue; } ent = kmalloc(sizeof(*ent), GFP_KERNEL); if (!ent) return -ENOMEM; - ent->start_addr = *iter; - ent->end_addr = *iter + size; + ent->start_addr = entry; + ent->end_addr = entry + size; INIT_LIST_HEAD(&ent->list); list_add_tail(&ent->list, &kprobe_blacklist); } diff --git a/kernel/locking/mcs_spinlock.c b/kernel/locking/mcs_spinlock.c index 838dc9e00669..be9ee1559fca 100644 --- a/kernel/locking/mcs_spinlock.c +++ b/kernel/locking/mcs_spinlock.c @@ -14,21 +14,47 @@ * called from interrupt context and we have preemption disabled while * spinning. */ -static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_queue, osq_node); +static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_node, osq_node); + +/* + * We use the value 0 to represent "no CPU", thus the encoded value + * will be the CPU number incremented by 1. 
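A minimal sketch (illustrative only, not part of the commit) of the CPU-number encoding used by the helpers defined just after this comment: CPU n is stored as n + 1, so OSQ_UNLOCKED_VAL (0) unambiguously means "queue empty".

	struct optimistic_spin_queue lock = OSQ_LOCK_UNLOCKED;
	int curr = encode_cpu(smp_processor_id());		/* CPU 0 -> 1, CPU 1 -> 2, ... */
	struct optimistic_spin_node *node = decode_cpu(curr);	/* this CPU's per-cpu osq_node */

	bool queue_empty = atomic_read(&lock.tail) == OSQ_UNLOCKED_VAL;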
+ */ +static inline int encode_cpu(int cpu_nr) +{ + return cpu_nr + 1; +} + +static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val) +{ + int cpu_nr = encoded_cpu_val - 1; + + return per_cpu_ptr(&osq_node, cpu_nr); +} /* * Get a stable @node->next pointer, either for unlock() or unqueue() purposes. * Can return NULL in case we were the last queued and we updated @lock instead. */ -static inline struct optimistic_spin_queue * -osq_wait_next(struct optimistic_spin_queue **lock, - struct optimistic_spin_queue *node, - struct optimistic_spin_queue *prev) +static inline struct optimistic_spin_node * +osq_wait_next(struct optimistic_spin_queue *lock, + struct optimistic_spin_node *node, + struct optimistic_spin_node *prev) { - struct optimistic_spin_queue *next = NULL; + struct optimistic_spin_node *next = NULL; + int curr = encode_cpu(smp_processor_id()); + int old; + + /* + * If there is a prev node in queue, then the 'old' value will be + * the prev node's CPU #, else it's set to OSQ_UNLOCKED_VAL since if + * we're currently last in queue, then the queue will then become empty. + */ + old = prev ? prev->cpu : OSQ_UNLOCKED_VAL; for (;;) { - if (*lock == node && cmpxchg(lock, node, prev) == node) { + if (atomic_read(&lock->tail) == curr && + atomic_cmpxchg(&lock->tail, curr, old) == curr) { /* * We were the last queued, we moved @lock back. @prev * will now observe @lock and will complete its @@ -59,18 +85,23 @@ osq_wait_next(struct optimistic_spin_queue **lock, return next; } -bool osq_lock(struct optimistic_spin_queue **lock) +bool osq_lock(struct optimistic_spin_queue *lock) { - struct optimistic_spin_queue *node = this_cpu_ptr(&osq_node); - struct optimistic_spin_queue *prev, *next; + struct optimistic_spin_node *node = this_cpu_ptr(&osq_node); + struct optimistic_spin_node *prev, *next; + int curr = encode_cpu(smp_processor_id()); + int old; node->locked = 0; node->next = NULL; + node->cpu = curr; - node->prev = prev = xchg(lock, node); - if (likely(prev == NULL)) + old = atomic_xchg(&lock->tail, curr); + if (old == OSQ_UNLOCKED_VAL) return true; + prev = decode_cpu(old); + node->prev = prev; ACCESS_ONCE(prev->next) = node; /* @@ -149,20 +180,21 @@ unqueue: return false; } -void osq_unlock(struct optimistic_spin_queue **lock) +void osq_unlock(struct optimistic_spin_queue *lock) { - struct optimistic_spin_queue *node = this_cpu_ptr(&osq_node); - struct optimistic_spin_queue *next; + struct optimistic_spin_node *node, *next; + int curr = encode_cpu(smp_processor_id()); /* * Fast path for the uncontended case. */ - if (likely(cmpxchg(lock, node, NULL) == node)) + if (likely(atomic_cmpxchg(&lock->tail, curr, OSQ_UNLOCKED_VAL) == curr)) return; /* * Second most likely case. */ + node = this_cpu_ptr(&osq_node); next = xchg(&node->next, NULL); if (next) { ACCESS_ONCE(next->locked) = 1; diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h index a2dbac4aca6b..74356dc0ce29 100644 --- a/kernel/locking/mcs_spinlock.h +++ b/kernel/locking/mcs_spinlock.h @@ -118,12 +118,13 @@ void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node) * mutex_lock()/rwsem_down_{read,write}() etc. 
*/ -struct optimistic_spin_queue { - struct optimistic_spin_queue *next, *prev; +struct optimistic_spin_node { + struct optimistic_spin_node *next, *prev; int locked; /* 1 if lock acquired */ + int cpu; /* encoded CPU # value */ }; -extern bool osq_lock(struct optimistic_spin_queue **lock); -extern void osq_unlock(struct optimistic_spin_queue **lock); +extern bool osq_lock(struct optimistic_spin_queue *lock); +extern void osq_unlock(struct optimistic_spin_queue *lock); #endif /* __LINUX_MCS_SPINLOCK_H */ diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index bc73d33c6760..acca2c1a3c5e 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -60,7 +60,7 @@ __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key) INIT_LIST_HEAD(&lock->wait_list); mutex_clear_owner(lock); #ifdef CONFIG_MUTEX_SPIN_ON_OWNER - lock->osq = NULL; + osq_lock_init(&lock->osq); #endif debug_mutex_init(lock, name, key); diff --git a/kernel/locking/rwsem-spinlock.c b/kernel/locking/rwsem-spinlock.c index 9be8a9144978..2c93571162cb 100644 --- a/kernel/locking/rwsem-spinlock.c +++ b/kernel/locking/rwsem-spinlock.c @@ -26,7 +26,7 @@ int rwsem_is_locked(struct rw_semaphore *sem) unsigned long flags; if (raw_spin_trylock_irqsave(&sem->wait_lock, flags)) { - ret = (sem->activity != 0); + ret = (sem->count != 0); raw_spin_unlock_irqrestore(&sem->wait_lock, flags); } return ret; @@ -46,7 +46,7 @@ void __init_rwsem(struct rw_semaphore *sem, const char *name, debug_check_no_locks_freed((void *)sem, sizeof(*sem)); lockdep_init_map(&sem->dep_map, name, key, 0); #endif - sem->activity = 0; + sem->count = 0; raw_spin_lock_init(&sem->wait_lock); INIT_LIST_HEAD(&sem->wait_list); } @@ -95,7 +95,7 @@ __rwsem_do_wake(struct rw_semaphore *sem, int wakewrite) waiter = list_entry(next, struct rwsem_waiter, list); } while (waiter->type != RWSEM_WAITING_FOR_WRITE); - sem->activity += woken; + sem->count += woken; out: return sem; @@ -126,9 +126,9 @@ void __sched __down_read(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - if (sem->activity >= 0 && list_empty(&sem->wait_list)) { + if (sem->count >= 0 && list_empty(&sem->wait_list)) { /* granted */ - sem->activity++; + sem->count++; raw_spin_unlock_irqrestore(&sem->wait_lock, flags); goto out; } @@ -170,9 +170,9 @@ int __down_read_trylock(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - if (sem->activity >= 0 && list_empty(&sem->wait_list)) { + if (sem->count >= 0 && list_empty(&sem->wait_list)) { /* granted */ - sem->activity++; + sem->count++; ret = 1; } @@ -206,7 +206,7 @@ void __sched __down_write_nested(struct rw_semaphore *sem, int subclass) * itself into sleep and waiting for system woke it or someone * else in the head of the wait list up. 
*/ - if (sem->activity == 0) + if (sem->count == 0) break; set_task_state(tsk, TASK_UNINTERRUPTIBLE); raw_spin_unlock_irqrestore(&sem->wait_lock, flags); @@ -214,7 +214,7 @@ void __sched __down_write_nested(struct rw_semaphore *sem, int subclass) raw_spin_lock_irqsave(&sem->wait_lock, flags); } /* got the lock */ - sem->activity = -1; + sem->count = -1; list_del(&waiter.list); raw_spin_unlock_irqrestore(&sem->wait_lock, flags); @@ -235,9 +235,9 @@ int __down_write_trylock(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - if (sem->activity == 0) { + if (sem->count == 0) { /* got the lock */ - sem->activity = -1; + sem->count = -1; ret = 1; } @@ -255,7 +255,7 @@ void __up_read(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - if (--sem->activity == 0 && !list_empty(&sem->wait_list)) + if (--sem->count == 0 && !list_empty(&sem->wait_list)) sem = __rwsem_wake_one_writer(sem); raw_spin_unlock_irqrestore(&sem->wait_lock, flags); @@ -270,7 +270,7 @@ void __up_write(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - sem->activity = 0; + sem->count = 0; if (!list_empty(&sem->wait_list)) sem = __rwsem_do_wake(sem, 1); @@ -287,7 +287,7 @@ void __downgrade_write(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - sem->activity = 1; + sem->count = 1; if (!list_empty(&sem->wait_list)) sem = __rwsem_do_wake(sem, 0); diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c index dacc32142fcc..a2391ac135c8 100644 --- a/kernel/locking/rwsem-xadd.c +++ b/kernel/locking/rwsem-xadd.c @@ -82,9 +82,9 @@ void __init_rwsem(struct rw_semaphore *sem, const char *name, sem->count = RWSEM_UNLOCKED_VALUE; raw_spin_lock_init(&sem->wait_lock); INIT_LIST_HEAD(&sem->wait_list); -#ifdef CONFIG_SMP +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER sem->owner = NULL; - sem->osq = NULL; + osq_lock_init(&sem->osq); #endif } @@ -262,7 +262,7 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem) return false; } -#ifdef CONFIG_SMP +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER /* * Try to acquire write lock before the writer has been put on wait queue. */ @@ -285,10 +285,10 @@ static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem) static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem) { struct task_struct *owner; - bool on_cpu = true; + bool on_cpu = false; if (need_resched()) - return 0; + return false; rcu_read_lock(); owner = ACCESS_ONCE(sem->owner); @@ -297,9 +297,9 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem) rcu_read_unlock(); /* - * If sem->owner is not set, the rwsem owner may have - * just acquired it and not set the owner yet or the rwsem - * has been released. + * If sem->owner is not set, yet we have just recently entered the + * slowpath, then there is a possibility reader(s) may have the lock. + * To be safe, avoid spinning in these situations. 
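The rwsem-spinlock hunks above rename sem->activity to sem->count without changing what the value means: positive counts the readers, zero is free, -1 marks a writer. A small userspace model of that convention (the wait-list handling is deliberately left out, and the names are made up):

#include <assert.h>

/*
 * count > 0  : that many readers hold the lock
 * count == 0 : lock is free
 * count == -1: a writer holds the lock
 */
struct rwsem_model { int count; };

static int try_down_read(struct rwsem_model *s)
{
	if (s->count >= 0) {		/* no writer present */
		s->count++;
		return 1;
	}
	return 0;
}

static int try_down_write(struct rwsem_model *s)
{
	if (s->count == 0) {		/* completely idle */
		s->count = -1;
		return 1;
	}
	return 0;
}

static void up_read_model(struct rwsem_model *s)  { s->count--; }
static void up_write_model(struct rwsem_model *s) { s->count = 0; }

int main(void)
{
	struct rwsem_model s = { 0 };

	assert(try_down_read(&s) && try_down_read(&s));	/* two readers */
	assert(!try_down_write(&s));			/* writer must wait */
	up_read_model(&s);
	up_read_model(&s);
	assert(try_down_write(&s));			/* now it gets in */
	up_write_model(&s);
	return 0;
}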
*/ return on_cpu; } diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c index 42f806de49d4..e2d3bc7f03b4 100644 --- a/kernel/locking/rwsem.c +++ b/kernel/locking/rwsem.c @@ -12,7 +12,7 @@ #include <linux/atomic.h> -#if defined(CONFIG_SMP) && defined(CONFIG_RWSEM_XCHGADD_ALGORITHM) +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER static inline void rwsem_set_owner(struct rw_semaphore *sem) { sem->owner = current; diff --git a/kernel/power/process.c b/kernel/power/process.c index 0ca8d83e2369..4ee194eb524b 100644 --- a/kernel/power/process.c +++ b/kernel/power/process.c @@ -186,6 +186,7 @@ void thaw_processes(void) printk("Restarting tasks ... "); + __usermodehelper_set_disable_depth(UMH_FREEZING); thaw_workqueues(); read_lock(&tasklist_lock); diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c index 4dd8822f732a..ed35a4790afe 100644 --- a/kernel/power/suspend.c +++ b/kernel/power/suspend.c @@ -306,7 +306,7 @@ int suspend_devices_and_enter(suspend_state_t state) error = suspend_ops->begin(state); if (error) goto Close; - } else if (state == PM_SUSPEND_FREEZE && freeze_ops->begin) { + } else if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->begin) { error = freeze_ops->begin(); if (error) goto Close; @@ -335,7 +335,7 @@ int suspend_devices_and_enter(suspend_state_t state) Close: if (need_suspend_ops(state) && suspend_ops->end) suspend_ops->end(); - else if (state == PM_SUSPEND_FREEZE && freeze_ops->end) + else if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->end) freeze_ops->end(); return error; diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index ea2d5f6962ed..13e839dbca07 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -1416,9 +1416,10 @@ static int have_callable_console(void) /* * Can we actually use the console at this time on this cpu? * - * Console drivers may assume that per-cpu resources have been allocated. So - * unless they're explicitly marked as being able to cope (CON_ANYTIME) don't - * call them until this CPU is officially up. + * Console drivers may assume that per-cpu resources have + * been allocated. So unless they're explicitly marked as + * being able to cope (CON_ANYTIME) don't call them until + * this CPU is officially up. */ static inline int can_use_console(unsigned int cpu) { @@ -1431,10 +1432,8 @@ static inline int can_use_console(unsigned int cpu) * console_lock held, and 'console_locked' set) if it * is successful, false otherwise. */ -static int console_trylock_for_printk(void) +static int console_trylock_for_printk(unsigned int cpu) { - unsigned int cpu = smp_processor_id(); - if (!console_trylock()) return 0; /* @@ -1609,8 +1608,7 @@ asmlinkage int vprintk_emit(int facility, int level, */ if (!oops_in_progress && !lockdep_recursing(current)) { recursion_bug = 1; - local_irq_restore(flags); - return 0; + goto out_restore_irqs; } zap_locks(); } @@ -1718,27 +1716,21 @@ asmlinkage int vprintk_emit(int facility, int level, logbuf_cpu = UINT_MAX; raw_spin_unlock(&logbuf_lock); - lockdep_on(); - local_irq_restore(flags); /* If called from the scheduler, we can not call up(). */ - if (in_sched) - return printed_len; - - /* - * Disable preemption to avoid being preempted while holding - * console_sem which would prevent anyone from printing to console - */ - preempt_disable(); - /* - * Try to acquire and then immediately release the console semaphore. - * The release will print out buffers and wake up /dev/kmsg and syslog() - * users. 
- */ - if (console_trylock_for_printk()) - console_unlock(); - preempt_enable(); + if (!in_sched) { + /* + * Try to acquire and then immediately release the console + * semaphore. The release will print out buffers and wake up + * /dev/kmsg and syslog() users. + */ + if (console_trylock_for_printk(this_cpu)) + console_unlock(); + } + lockdep_on(); +out_restore_irqs: + local_irq_restore(flags); return printed_len; } EXPORT_SYMBOL(vprintk_emit); diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index f1ba77363fbb..625d0b0cd75a 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -206,6 +206,70 @@ void rcu_bh_qs(int cpu) rdp->passed_quiesce = 1; } +static DEFINE_PER_CPU(int, rcu_sched_qs_mask); + +static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = { + .dynticks_nesting = DYNTICK_TASK_EXIT_IDLE, + .dynticks = ATOMIC_INIT(1), +#ifdef CONFIG_NO_HZ_FULL_SYSIDLE + .dynticks_idle_nesting = DYNTICK_TASK_NEST_VALUE, + .dynticks_idle = ATOMIC_INIT(1), +#endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */ +}; + +/* + * Let the RCU core know that this CPU has gone through the scheduler, + * which is a quiescent state. This is called when the need for a + * quiescent state is urgent, so we burn an atomic operation and full + * memory barriers to let the RCU core know about it, regardless of what + * this CPU might (or might not) do in the near future. + * + * We inform the RCU core by emulating a zero-duration dyntick-idle + * period, which we in turn do by incrementing the ->dynticks counter + * by two. + */ +static void rcu_momentary_dyntick_idle(void) +{ + unsigned long flags; + struct rcu_data *rdp; + struct rcu_dynticks *rdtp; + int resched_mask; + struct rcu_state *rsp; + + local_irq_save(flags); + + /* + * Yes, we can lose flag-setting operations. This is OK, because + * the flag will be set again after some delay. + */ + resched_mask = raw_cpu_read(rcu_sched_qs_mask); + raw_cpu_write(rcu_sched_qs_mask, 0); + + /* Find the flavor that needs a quiescent state. */ + for_each_rcu_flavor(rsp) { + rdp = raw_cpu_ptr(rsp->rda); + if (!(resched_mask & rsp->flavor_mask)) + continue; + smp_mb(); /* rcu_sched_qs_mask before cond_resched_completed. */ + if (ACCESS_ONCE(rdp->mynode->completed) != + ACCESS_ONCE(rdp->cond_resched_completed)) + continue; + + /* + * Pretend to be momentarily idle for the quiescent state. + * This allows the grace-period kthread to record the + * quiescent state, with no need for this CPU to do anything + * further. + */ + rdtp = this_cpu_ptr(&rcu_dynticks); + smp_mb__before_atomic(); /* Earlier stuff before QS. */ + atomic_add(2, &rdtp->dynticks); /* QS. */ + smp_mb__after_atomic(); /* Later stuff after QS. */ + break; + } + local_irq_restore(flags); +} + /* * Note a context switch. This is a quiescent state for RCU-sched, * and requires special handling for preemptible RCU. 
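rcu_momentary_dyntick_idle() above reports a quiescent state by adding 2 to the per-CPU ->dynticks counter. Under the usual dyntick convention the counter is odd while the CPU is non-idle and even while it is idle, so adding two keeps the parity yet shows the grace-period scan that the counter has moved on. A userspace model of that observation; the snapshot comparison below is a paraphrase of the kernel's check and should be read as an assumption:

#include <assert.h>
#include <stdatomic.h>

/*
 * Model: odd while the CPU runs in the kernel, even while idle. Adding 2
 * keeps the parity but moves the counter, which counts as a quiescent state.
 */
static atomic_uint dynticks = 1;	/* odd: CPU busy, no QS seen yet */

static unsigned int take_snapshot(void)
{
	return atomic_load(&dynticks);
}

static int counter_reports_qs(unsigned int snap)
{
	unsigned int curr = atomic_load(&dynticks);

	return ((curr & 1) == 0) || (curr - snap >= 2);
}

int main(void)
{
	unsigned int snap = take_snapshot();

	assert(!counter_reports_qs(snap));	/* CPU still looks busy */
	atomic_fetch_add(&dynticks, 2);		/* momentary "dyntick idle" */
	assert(counter_reports_qs(snap));	/* now counted as a QS */
	return 0;
}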
@@ -216,19 +280,12 @@ void rcu_note_context_switch(int cpu) trace_rcu_utilization(TPS("Start context switch")); rcu_sched_qs(cpu); rcu_preempt_note_context_switch(cpu); + if (unlikely(raw_cpu_read(rcu_sched_qs_mask))) + rcu_momentary_dyntick_idle(); trace_rcu_utilization(TPS("End context switch")); } EXPORT_SYMBOL_GPL(rcu_note_context_switch); -static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = { - .dynticks_nesting = DYNTICK_TASK_EXIT_IDLE, - .dynticks = ATOMIC_INIT(1), -#ifdef CONFIG_NO_HZ_FULL_SYSIDLE - .dynticks_idle_nesting = DYNTICK_TASK_NEST_VALUE, - .dynticks_idle = ATOMIC_INIT(1), -#endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */ -}; - static long blimit = 10; /* Maximum callbacks per rcu_do_batch. */ static long qhimark = 10000; /* If this many pending, ignore blimit. */ static long qlowmark = 100; /* Once only this many pending, use blimit. */ @@ -243,6 +300,13 @@ static ulong jiffies_till_next_fqs = ULONG_MAX; module_param(jiffies_till_first_fqs, ulong, 0644); module_param(jiffies_till_next_fqs, ulong, 0644); +/* + * How long the grace period must be before we start recruiting + * quiescent-state help from rcu_note_context_switch(). + */ +static ulong jiffies_till_sched_qs = HZ / 20; +module_param(jiffies_till_sched_qs, ulong, 0644); + static bool rcu_start_gp_advanced(struct rcu_state *rsp, struct rcu_node *rnp, struct rcu_data *rdp); static void force_qs_rnp(struct rcu_state *rsp, @@ -853,6 +917,7 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp, bool *isidle, unsigned long *maxj) { unsigned int curr; + int *rcrmp; unsigned int snap; curr = (unsigned int)atomic_add_return(0, &rdp->dynticks->dynticks); @@ -893,27 +958,43 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp, } /* - * There is a possibility that a CPU in adaptive-ticks state - * might run in the kernel with the scheduling-clock tick disabled - * for an extended time period. Invoke rcu_kick_nohz_cpu() to - * force the CPU to restart the scheduling-clock tick in this - * CPU is in this state. - */ - rcu_kick_nohz_cpu(rdp->cpu); - - /* - * Alternatively, the CPU might be running in the kernel - * for an extended period of time without a quiescent state. - * Attempt to force the CPU through the scheduler to gain the - * needed quiescent state, but only if the grace period has gone - * on for an uncommonly long time. If there are many stuck CPUs, - * we will beat on the first one until it gets unstuck, then move - * to the next. Only do this for the primary flavor of RCU. + * A CPU running for an extended time within the kernel can + * delay RCU grace periods. When the CPU is in NO_HZ_FULL mode, + * even context-switching back and forth between a pair of + * in-kernel CPU-bound tasks cannot advance grace periods. + * So if the grace period is old enough, make the CPU pay attention. + * Note that the unsynchronized assignments to the per-CPU + * rcu_sched_qs_mask variable are safe. Yes, setting of + * bits can be lost, but they will be set again on the next + * force-quiescent-state pass. So lost bit sets do not result + * in incorrect behavior, merely in a grace period lasting + * a few jiffies longer than it might otherwise. Because + * there are at most four threads involved, and because the + * updates are only once every few jiffies, the probability of + * lossage (and thus of slight grace-period extension) is + * quite low. + * + * Note that if the jiffies_till_sched_qs boot/sysfs parameter + * is set too high, we override with half of the RCU CPU stall + * warning delay. 
*/ - if (rdp->rsp == rcu_state_p && + rcrmp = &per_cpu(rcu_sched_qs_mask, rdp->cpu); + if (ULONG_CMP_GE(jiffies, + rdp->rsp->gp_start + jiffies_till_sched_qs) || ULONG_CMP_GE(jiffies, rdp->rsp->jiffies_resched)) { - rdp->rsp->jiffies_resched += 5; - resched_cpu(rdp->cpu); + if (!(ACCESS_ONCE(*rcrmp) & rdp->rsp->flavor_mask)) { + ACCESS_ONCE(rdp->cond_resched_completed) = + ACCESS_ONCE(rdp->mynode->completed); + smp_mb(); /* ->cond_resched_completed before *rcrmp. */ + ACCESS_ONCE(*rcrmp) = + ACCESS_ONCE(*rcrmp) + rdp->rsp->flavor_mask; + resched_cpu(rdp->cpu); /* Force CPU into scheduler. */ + rdp->rsp->jiffies_resched += 5; /* Enable beating. */ + } else if (ULONG_CMP_GE(jiffies, rdp->rsp->jiffies_resched)) { + /* Time to beat on that CPU again! */ + resched_cpu(rdp->cpu); /* Force CPU into scheduler. */ + rdp->rsp->jiffies_resched += 5; /* Re-enable beating. */ + } } return 0; @@ -3491,6 +3572,7 @@ static void __init rcu_init_one(struct rcu_state *rsp, "rcu_node_fqs_1", "rcu_node_fqs_2", "rcu_node_fqs_3" }; /* Match MAX_RCU_LVLS */ + static u8 fl_mask = 0x1; int cpustride = 1; int i; int j; @@ -3509,6 +3591,8 @@ static void __init rcu_init_one(struct rcu_state *rsp, for (i = 1; i < rcu_num_lvls; i++) rsp->level[i] = rsp->level[i - 1] + rsp->levelcnt[i - 1]; rcu_init_levelspread(rsp); + rsp->flavor_mask = fl_mask; + fl_mask <<= 1; /* Initialize the elements themselves, starting from the leaves. */ diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index bf2c1e669691..0f69a79c5b7d 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -307,6 +307,9 @@ struct rcu_data { /* 4) reasons this CPU needed to be kicked by force_quiescent_state */ unsigned long dynticks_fqs; /* Kicked due to dynticks idle. */ unsigned long offline_fqs; /* Kicked due to being offline. */ + unsigned long cond_resched_completed; + /* Grace period that needs help */ + /* from cond_resched(). */ /* 5) __rcu_pending() statistics. */ unsigned long n_rcu_pending; /* rcu_pending() calls since boot. */ @@ -392,6 +395,7 @@ struct rcu_state { struct rcu_node *level[RCU_NUM_LVLS]; /* Hierarchy levels. */ u32 levelcnt[MAX_RCU_LVLS + 1]; /* # nodes in each level. */ u8 levelspread[RCU_NUM_LVLS]; /* kids/node in each level. */ + u8 flavor_mask; /* bit in flavor mask. */ struct rcu_data __percpu *rda; /* pointer of percu rcu_data. */ void (*call)(struct rcu_head *head, /* call_rcu() flavor. */ void (*func)(struct rcu_head *head)); @@ -563,7 +567,7 @@ static bool rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp); static void do_nocb_deferred_wakeup(struct rcu_data *rdp); static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp); static void rcu_spawn_nocb_kthreads(struct rcu_state *rsp); -static void rcu_kick_nohz_cpu(int cpu); +static void __maybe_unused rcu_kick_nohz_cpu(int cpu); static bool init_nocb_callback_list(struct rcu_data *rdp); static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq); static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq); diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index cbc2c45265e2..02ac0fb186b8 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -2404,7 +2404,7 @@ static bool init_nocb_callback_list(struct rcu_data *rdp) * if an adaptive-ticks CPU is failing to respond to the current grace * period and has not be idle from an RCU perspective, kick it. 
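The grace-period age tests above go through ULONG_CMP_GE() so that jiffies wrapping around zero cannot confuse the "has enough time passed" decision. A standalone demonstration of the idea; the macro body here is a sketch of the usual kernel definition rather than something taken from this patch:

#include <assert.h>
#include <limits.h>

/* wraparound-safe "a is at or after b" for free-running counters */
#define CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))

int main(void)
{
	unsigned long start = ULONG_MAX - 5;	/* counter about to wrap */
	unsigned long now = 10;			/* 16 ticks later, after the wrap */

	assert(CMP_GE(now, start));		/* correctly says now >= start */
	assert(!CMP_GE(start, now + 100));	/* and not the other way around */
	return 0;
}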
*/ -static void rcu_kick_nohz_cpu(int cpu) +static void __maybe_unused rcu_kick_nohz_cpu(int cpu) { #ifdef CONFIG_NO_HZ_FULL if (tick_nohz_full_cpu(cpu)) diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c index a2aeb4df0f60..bc7883570530 100644 --- a/kernel/rcu/update.c +++ b/kernel/rcu/update.c @@ -200,12 +200,12 @@ void wait_rcu_gp(call_rcu_func_t crf) EXPORT_SYMBOL_GPL(wait_rcu_gp); #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD -static inline void debug_init_rcu_head(struct rcu_head *head) +void init_rcu_head(struct rcu_head *head) { debug_object_init(head, &rcuhead_debug_descr); } -static inline void debug_rcu_head_free(struct rcu_head *head) +void destroy_rcu_head(struct rcu_head *head) { debug_object_free(head, &rcuhead_debug_descr); } @@ -350,21 +350,3 @@ static int __init check_cpu_stall_init(void) early_initcall(check_cpu_stall_init); #endif /* #ifdef CONFIG_RCU_STALL_COMMON */ - -/* - * Hooks for cond_resched() and friends to avoid RCU CPU stall warnings. - */ - -DEFINE_PER_CPU(int, rcu_cond_resched_count); - -/* - * Report a set of RCU quiescent states, for use by cond_resched() - * and friends. Out of line due to being called infrequently. - */ -void rcu_resched(void) -{ - preempt_disable(); - __this_cpu_write(rcu_cond_resched_count, 0); - rcu_note_context_switch(smp_processor_id()); - preempt_enable(); -} diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 3bdf01b494fe..bc1638b33449 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -4147,7 +4147,6 @@ static void __cond_resched(void) int __sched _cond_resched(void) { - rcu_cond_resched(); if (should_resched()) { __cond_resched(); return 1; @@ -4166,18 +4165,15 @@ EXPORT_SYMBOL(_cond_resched); */ int __cond_resched_lock(spinlock_t *lock) { - bool need_rcu_resched = rcu_should_resched(); int resched = should_resched(); int ret = 0; lockdep_assert_held(lock); - if (spin_needbreak(lock) || resched || need_rcu_resched) { + if (spin_needbreak(lock) || resched) { spin_unlock(lock); if (resched) __cond_resched(); - else if (unlikely(need_rcu_resched)) - rcu_resched(); else cpu_relax(); ret = 1; @@ -4191,7 +4187,6 @@ int __sched __cond_resched_softirq(void) { BUG_ON(!in_softirq()); - rcu_cond_resched(); /* BH disabled OK, just recording QSes. */ if (should_resched()) { local_bh_enable(); __cond_resched(); diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c index 695f9773bb60..627b3c34b821 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -608,7 +608,7 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m) avg_atom = p->se.sum_exec_runtime; if (nr_switches) - do_div(avg_atom, nr_switches); + avg_atom = div64_ul(avg_atom, nr_switches); else avg_atom = -1LL; diff --git a/kernel/smp.c b/kernel/smp.c index 306f8180b0d5..80c33f8de14f 100644 --- a/kernel/smp.c +++ b/kernel/smp.c @@ -29,6 +29,8 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_function_data, cfd_data); static DEFINE_PER_CPU_SHARED_ALIGNED(struct llist_head, call_single_queue); +static void flush_smp_call_function_queue(bool warn_cpu_offline); + static int hotplug_cfd(struct notifier_block *nfb, unsigned long action, void *hcpu) { @@ -51,12 +53,27 @@ hotplug_cfd(struct notifier_block *nfb, unsigned long action, void *hcpu) #ifdef CONFIG_HOTPLUG_CPU case CPU_UP_CANCELED: case CPU_UP_CANCELED_FROZEN: + /* Fall-through to the CPU_DEAD[_FROZEN] case. 
*/ case CPU_DEAD: case CPU_DEAD_FROZEN: free_cpumask_var(cfd->cpumask); free_percpu(cfd->csd); break; + + case CPU_DYING: + case CPU_DYING_FROZEN: + /* + * The IPIs for the smp-call-function callbacks queued by other + * CPUs might arrive late, either due to hardware latencies or + * because this CPU disabled interrupts (inside stop-machine) + * before the IPIs were sent. So flush out any pending callbacks + * explicitly (without waiting for the IPIs to arrive), to + * ensure that the outgoing CPU doesn't go offline with work + * still pending. + */ + flush_smp_call_function_queue(false); + break; #endif }; @@ -177,23 +194,47 @@ static int generic_exec_single(int cpu, struct call_single_data *csd, return 0; } -/* - * Invoked by arch to handle an IPI for call function single. Must be - * called from the arch with interrupts disabled. +/** + * generic_smp_call_function_single_interrupt - Execute SMP IPI callbacks + * + * Invoked by arch to handle an IPI for call function single. + * Must be called with interrupts disabled. */ void generic_smp_call_function_single_interrupt(void) { + flush_smp_call_function_queue(true); +} + +/** + * flush_smp_call_function_queue - Flush pending smp-call-function callbacks + * + * @warn_cpu_offline: If set to 'true', warn if callbacks were queued on an + * offline CPU. Skip this check if set to 'false'. + * + * Flush any pending smp-call-function callbacks queued on this CPU. This is + * invoked by the generic IPI handler, as well as by a CPU about to go offline, + * to ensure that all pending IPI callbacks are run before it goes completely + * offline. + * + * Loop through the call_single_queue and run all the queued callbacks. + * Must be called with interrupts disabled. + */ +static void flush_smp_call_function_queue(bool warn_cpu_offline) +{ + struct llist_head *head; struct llist_node *entry; struct call_single_data *csd, *csd_next; static bool warned; - entry = llist_del_all(&__get_cpu_var(call_single_queue)); + WARN_ON(!irqs_disabled()); + + head = &__get_cpu_var(call_single_queue); + entry = llist_del_all(head); entry = llist_reverse_order(entry); - /* - * Shouldn't receive this interrupt on a cpu that is not yet online. - */ - if (unlikely(!cpu_online(smp_processor_id()) && !warned)) { + /* There shouldn't be any pending callbacks on an offline CPU. 
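flush_smp_call_function_queue() above empties the per-CPU list with one llist_del_all() and then calls llist_reverse_order(), because llist pushes new entries at the head while the callbacks are meant to run in the order they were queued. A userspace model of that grab-and-reverse step, using plain pointers instead of the kernel's lock-free llist (the atomicity is assumed away):

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct csd { int id; struct csd *next; };	/* stand-in for call_single_data */

static void push(struct csd **head, struct csd *n)	/* like llist_add() */
{
	n->next = *head;
	*head = n;
}

static struct csd *del_all(struct csd **head)		/* like llist_del_all() */
{
	struct csd *e = *head;

	*head = NULL;
	return e;
}

static struct csd *reverse(struct csd *e)	/* like llist_reverse_order() */
{
	struct csd *out = NULL;

	while (e) {
		struct csd *next = e->next;

		e->next = out;
		out = e;
		e = next;
	}
	return out;
}

int main(void)
{
	struct csd a = { 1 }, b = { 2 }, c = { 3 };
	struct csd *queue = NULL, *entry;

	push(&queue, &a);
	push(&queue, &b);
	push(&queue, &c);
	entry = reverse(del_all(&queue));
	for (; entry; entry = entry->next)
		printf("running callback %d\n", entry->id);	/* 1, 2, 3 */
	assert(queue == NULL);
	return 0;
}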
*/ + if (unlikely(warn_cpu_offline && !cpu_online(smp_processor_id()) && + !warned && !llist_empty(head))) { warned = true; WARN(1, "IPI on offline CPU %d\n", smp_processor_id()); diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 7de6555cfea0..75b22e22a72c 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -136,7 +136,6 @@ static unsigned long dirty_bytes_min = 2 * PAGE_SIZE; /* this is needed for the proc_dointvec_minmax for [fs_]overflow UID and GID */ static int maxolduid = 65535; static int minolduid; -static int min_percpu_pagelist_fract = 8; static int ngroups_max = NGROUPS_MAX; static const int cap_last_cap = CAP_LAST_CAP; @@ -861,6 +860,17 @@ static struct ctl_table kern_table[] = { .extra1 = &zero, .extra2 = &one, }, +#ifdef CONFIG_SMP + { + .procname = "softlockup_all_cpu_backtrace", + .data = &sysctl_softlockup_all_cpu_backtrace, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec_minmax, + .extra1 = &zero, + .extra2 = &one, + }, +#endif /* CONFIG_SMP */ { .procname = "nmi_watchdog", .data = &watchdog_user_enabled, @@ -1317,7 +1327,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(percpu_pagelist_fraction), .mode = 0644, .proc_handler = percpu_pagelist_fraction_sysctl_handler, - .extra1 = &min_percpu_pagelist_fract, + .extra1 = &zero, }, #ifdef CONFIG_MMU { diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c index 88c9c65a430d..fe75444ae7ec 100644 --- a/kernel/time/alarmtimer.c +++ b/kernel/time/alarmtimer.c @@ -585,9 +585,14 @@ static int alarm_timer_set(struct k_itimer *timr, int flags, struct itimerspec *new_setting, struct itimerspec *old_setting) { + ktime_t exp; + if (!rtcdev) return -ENOTSUPP; + if (flags & ~TIMER_ABSTIME) + return -EINVAL; + if (old_setting) alarm_timer_get(timr, old_setting); @@ -597,8 +602,16 @@ static int alarm_timer_set(struct k_itimer *timr, int flags, /* start the timer */ timr->it.alarm.interval = timespec_to_ktime(new_setting->it_interval); - alarm_start(&timr->it.alarm.alarmtimer, - timespec_to_ktime(new_setting->it_value)); + exp = timespec_to_ktime(new_setting->it_value); + /* Convert (if necessary) to absolute time */ + if (flags != TIMER_ABSTIME) { + ktime_t now; + + now = alarm_bases[timr->it.alarm.alarmtimer.type].gettime(); + exp = ktime_add(now, exp); + } + + alarm_start(&timr->it.alarm.alarmtimer, exp); return 0; } @@ -730,6 +743,9 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags, if (!alarmtimer_get_rtcdev()) return -ENOTSUPP; + if (flags & ~TIMER_ABSTIME) + return -EINVAL; + if (!capable(CAP_WAKE_ALARM)) return -EPERM; diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 5b372e3ed675..ac9d1dad630b 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -265,12 +265,12 @@ static void update_ftrace_function(void) func = ftrace_ops_list_func; } + update_function_graph_func(); + /* If there's no change, then do nothing more here */ if (ftrace_trace_function == func) return; - update_function_graph_func(); - /* * If we are using the list function, it doesn't care * about the function_trace_ops. 
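The alarmtimer hunks above do two things: flag words carrying bits other than TIMER_ABSTIME are rejected with -EINVAL, and an expiry that was not given as absolute time is converted by adding the current time of the alarm base. A small userspace sketch of the same two steps (the flag value used here is illustrative; the real constant lives in the uapi headers):

#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define TIMER_ABSTIME	0x01	/* illustrative value */

/* returns the absolute expiry, or a negative errno-style value */
static int64_t arm_timer(int flags, int64_t value, int64_t now)
{
	if (flags & ~TIMER_ABSTIME)	/* reject bits we do not understand */
		return -EINVAL;

	if (flags != TIMER_ABSTIME)	/* relative request: make it absolute */
		value += now;

	return value;
}

int main(void)
{
	assert(arm_timer(0, 50, 1000) == 1050);			/* relative */
	assert(arm_timer(TIMER_ABSTIME, 50, 1000) == 50);	/* absolute */
	assert(arm_timer(0x80, 50, 1000) == -EINVAL);		/* bad flag */
	return 0;
}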
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c index 7c56c3d06943..ff7027199a9a 100644 --- a/kernel/trace/ring_buffer.c +++ b/kernel/trace/ring_buffer.c @@ -616,10 +616,6 @@ int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu, struct ring_buffer_per_cpu *cpu_buffer; struct rb_irq_work *work; - if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) || - (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu))) - return POLLIN | POLLRDNORM; - if (cpu == RING_BUFFER_ALL_CPUS) work = &buffer->irq_work; else { diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 384ede311717..291397e66669 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -466,6 +466,12 @@ int __trace_puts(unsigned long ip, const char *str, int size) struct print_entry *entry; unsigned long irq_flags; int alloc; + int pc; + + if (!(trace_flags & TRACE_ITER_PRINTK)) + return 0; + + pc = preempt_count(); if (unlikely(tracing_selftest_running || tracing_disabled)) return 0; @@ -475,7 +481,7 @@ int __trace_puts(unsigned long ip, const char *str, int size) local_save_flags(irq_flags); buffer = global_trace.trace_buffer.buffer; event = trace_buffer_lock_reserve(buffer, TRACE_PRINT, alloc, - irq_flags, preempt_count()); + irq_flags, pc); if (!event) return 0; @@ -492,6 +498,7 @@ int __trace_puts(unsigned long ip, const char *str, int size) entry->buf[size] = '\0'; __buffer_unlock_commit(buffer, event); + ftrace_trace_stack(buffer, irq_flags, 4, pc); return size; } @@ -509,6 +516,12 @@ int __trace_bputs(unsigned long ip, const char *str) struct bputs_entry *entry; unsigned long irq_flags; int size = sizeof(struct bputs_entry); + int pc; + + if (!(trace_flags & TRACE_ITER_PRINTK)) + return 0; + + pc = preempt_count(); if (unlikely(tracing_selftest_running || tracing_disabled)) return 0; @@ -516,7 +529,7 @@ int __trace_bputs(unsigned long ip, const char *str) local_save_flags(irq_flags); buffer = global_trace.trace_buffer.buffer; event = trace_buffer_lock_reserve(buffer, TRACE_BPUTS, size, - irq_flags, preempt_count()); + irq_flags, pc); if (!event) return 0; @@ -525,6 +538,7 @@ int __trace_bputs(unsigned long ip, const char *str) entry->str = str; __buffer_unlock_commit(buffer, event); + ftrace_trace_stack(buffer, irq_flags, 4, pc); return 1; } @@ -809,7 +823,7 @@ static struct { { trace_clock_local, "local", 1 }, { trace_clock_global, "global", 1 }, { trace_clock_counter, "counter", 0 }, - { trace_clock_jiffies, "uptime", 1 }, + { trace_clock_jiffies, "uptime", 0 }, { trace_clock, "perf", 1 }, ARCH_TRACE_CLOCKS }; @@ -1396,7 +1410,6 @@ void tracing_start(void) arch_spin_unlock(&global_trace.max_lock); - ftrace_start(); out: raw_spin_unlock_irqrestore(&global_trace.start_lock, flags); } @@ -1443,7 +1456,6 @@ void tracing_stop(void) struct ring_buffer *buffer; unsigned long flags; - ftrace_stop(); raw_spin_lock_irqsave(&global_trace.start_lock, flags); if (global_trace.stop_count++) goto out; diff --git a/kernel/trace/trace_clock.c b/kernel/trace/trace_clock.c index 26dc348332b7..57b67b1f24d1 100644 --- a/kernel/trace/trace_clock.c +++ b/kernel/trace/trace_clock.c @@ -59,13 +59,14 @@ u64 notrace trace_clock(void) /* * trace_jiffy_clock(): Simply use jiffies as a clock counter. + * Note that this use of jiffies_64 is not completely safe on + * 32-bit systems. But the window is tiny, and the effect if + * we are affected is that we will have an obviously bogus + * timestamp on a trace event - i.e. not life threatening. 
*/ u64 notrace trace_clock_jiffies(void) { - u64 jiffy = jiffies - INITIAL_JIFFIES; - - /* Return nsecs */ - return (u64)jiffies_to_usecs(jiffy) * 1000ULL; + return jiffies_64_to_clock_t(jiffies_64 - INITIAL_JIFFIES); } /* diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c index f99e0b3bca8c..2de53628689f 100644 --- a/kernel/trace/trace_events.c +++ b/kernel/trace/trace_events.c @@ -470,6 +470,7 @@ static void remove_event_file_dir(struct ftrace_event_file *file) list_del(&file->list); remove_subsystem(file->system); + free_event_filter(file->filter); kmem_cache_free(file_cachep, file); } diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c index 04fdb5de823c..3c9b97e6b1f4 100644 --- a/kernel/trace/trace_uprobe.c +++ b/kernel/trace/trace_uprobe.c @@ -893,6 +893,9 @@ probe_event_enable(struct trace_uprobe *tu, struct ftrace_event_file *file, int ret; if (file) { + if (tu->tp.flags & TP_FLAG_PROFILE) + return -EINTR; + link = kmalloc(sizeof(*link), GFP_KERNEL); if (!link) return -ENOMEM; @@ -901,29 +904,40 @@ probe_event_enable(struct trace_uprobe *tu, struct ftrace_event_file *file, list_add_tail_rcu(&link->list, &tu->tp.files); tu->tp.flags |= TP_FLAG_TRACE; - } else - tu->tp.flags |= TP_FLAG_PROFILE; + } else { + if (tu->tp.flags & TP_FLAG_TRACE) + return -EINTR; - ret = uprobe_buffer_enable(); - if (ret < 0) - return ret; + tu->tp.flags |= TP_FLAG_PROFILE; + } WARN_ON(!uprobe_filter_is_empty(&tu->filter)); if (enabled) return 0; + ret = uprobe_buffer_enable(); + if (ret) + goto err_flags; + tu->consumer.filter = filter; ret = uprobe_register(tu->inode, tu->offset, &tu->consumer); - if (ret) { - if (file) { - list_del(&link->list); - kfree(link); - tu->tp.flags &= ~TP_FLAG_TRACE; - } else - tu->tp.flags &= ~TP_FLAG_PROFILE; - } + if (ret) + goto err_buffer; + return 0; + + err_buffer: + uprobe_buffer_disable(); + + err_flags: + if (file) { + list_del(&link->list); + kfree(link); + tu->tp.flags &= ~TP_FLAG_TRACE; + } else { + tu->tp.flags &= ~TP_FLAG_PROFILE; + } return ret; } @@ -1201,12 +1215,6 @@ static int uprobe_dispatcher(struct uprobe_consumer *con, struct pt_regs *regs) current->utask->vaddr = (unsigned long) &udd; -#ifdef CONFIG_PERF_EVENTS - if ((tu->tp.flags & TP_FLAG_TRACE) == 0 && - !uprobe_perf_filter(&tu->consumer, 0, current->mm)) - return UPROBE_HANDLER_REMOVE; -#endif - if (WARN_ON_ONCE(!uprobe_cpu_buffer)) return 0; diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c index 33cbd8c203f8..3490407dc7b7 100644 --- a/kernel/tracepoint.c +++ b/kernel/tracepoint.c @@ -492,33 +492,29 @@ static int sys_tracepoint_refcount; void syscall_regfunc(void) { - unsigned long flags; - struct task_struct *g, *t; + struct task_struct *p, *t; if (!sys_tracepoint_refcount) { - read_lock_irqsave(&tasklist_lock, flags); - do_each_thread(g, t) { - /* Skip kernel threads. 
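probe_event_enable() above is reshaped into the usual kernel error-unwind ladder: each resource that was successfully acquired gets an err_* label that releases it, and a later failure jumps to the deepest label that is still safe. A generic sketch of the pattern with made-up resources (the demo also releases them on success so it does not leak):

#include <stdio.h>
#include <stdlib.h>

static int enable_thing(int fail_late)
{
	char *a, *b;
	int ret;

	a = malloc(16);			/* first resource */
	if (!a)
		return -1;

	b = malloc(16);			/* second resource */
	if (!b) {
		ret = -1;
		goto err_a;		/* undo only what succeeded */
	}

	if (fail_late) {		/* imagine the final registration failing */
		ret = -1;
		goto err_b;
	}

	free(b);			/* demo only; real code would keep them */
	free(a);
	return 0;

err_b:
	free(b);			/* unwind in reverse order of acquisition */
err_a:
	free(a);
	return ret;
}

int main(void)
{
	return (enable_thing(0) == 0 && enable_thing(1) == -1) ? 0 : 1;
}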
*/ - if (t->mm) - set_tsk_thread_flag(t, TIF_SYSCALL_TRACEPOINT); - } while_each_thread(g, t); - read_unlock_irqrestore(&tasklist_lock, flags); + read_lock(&tasklist_lock); + for_each_process_thread(p, t) { + set_tsk_thread_flag(t, TIF_SYSCALL_TRACEPOINT); + } + read_unlock(&tasklist_lock); } sys_tracepoint_refcount++; } void syscall_unregfunc(void) { - unsigned long flags; - struct task_struct *g, *t; + struct task_struct *p, *t; sys_tracepoint_refcount--; if (!sys_tracepoint_refcount) { - read_lock_irqsave(&tasklist_lock, flags); - do_each_thread(g, t) { + read_lock(&tasklist_lock); + for_each_process_thread(p, t) { clear_tsk_thread_flag(t, TIF_SYSCALL_TRACEPOINT); - } while_each_thread(g, t); - read_unlock_irqrestore(&tasklist_lock, flags); + } + read_unlock(&tasklist_lock); } } #endif diff --git a/kernel/watchdog.c b/kernel/watchdog.c index 516203e665fc..c3319bd1b040 100644 --- a/kernel/watchdog.c +++ b/kernel/watchdog.c @@ -31,6 +31,12 @@ int watchdog_user_enabled = 1; int __read_mostly watchdog_thresh = 10; +#ifdef CONFIG_SMP +int __read_mostly sysctl_softlockup_all_cpu_backtrace; +#else +#define sysctl_softlockup_all_cpu_backtrace 0 +#endif + static int __read_mostly watchdog_running; static u64 __read_mostly sample_period; @@ -47,6 +53,7 @@ static DEFINE_PER_CPU(bool, watchdog_nmi_touch); static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved); static DEFINE_PER_CPU(struct perf_event *, watchdog_ev); #endif +static unsigned long soft_lockup_nmi_warn; /* boot commands */ /* @@ -95,6 +102,15 @@ static int __init nosoftlockup_setup(char *str) } __setup("nosoftlockup", nosoftlockup_setup); /* */ +#ifdef CONFIG_SMP +static int __init softlockup_all_cpu_backtrace_setup(char *str) +{ + sysctl_softlockup_all_cpu_backtrace = + !!simple_strtol(str, NULL, 0); + return 1; +} +__setup("softlockup_all_cpu_backtrace=", softlockup_all_cpu_backtrace_setup); +#endif /* * Hard-lockup warnings should be triggered after just a few seconds. Soft- @@ -271,6 +287,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) unsigned long touch_ts = __this_cpu_read(watchdog_touch_ts); struct pt_regs *regs = get_irq_regs(); int duration; + int softlockup_all_cpu_backtrace = sysctl_softlockup_all_cpu_backtrace; /* kick the hardlockup detector */ watchdog_interrupt_count(); @@ -317,6 +334,17 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) if (__this_cpu_read(soft_watchdog_warn) == true) return HRTIMER_RESTART; + if (softlockup_all_cpu_backtrace) { + /* Prevent multiple soft-lockup reports if one cpu is already + * engaged in dumping cpu back traces + */ + if (test_and_set_bit(0, &soft_lockup_nmi_warn)) { + /* Someone else will report us. Let's give up */ + __this_cpu_write(soft_watchdog_warn, true); + return HRTIMER_RESTART; + } + } + printk(KERN_EMERG "BUG: soft lockup - CPU#%d stuck for %us! 
[%s:%d]\n", smp_processor_id(), duration, current->comm, task_pid_nr(current)); @@ -327,6 +355,17 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) else dump_stack(); + if (softlockup_all_cpu_backtrace) { + /* Avoid generating two back traces for current + * given that one is already made above + */ + trigger_allbutself_cpu_backtrace(); + + clear_bit(0, &soft_lockup_nmi_warn); + /* Barrier to sync with other cpus */ + smp_mb__after_atomic(); + } + if (softlockup_panic) panic("softlockup: hung tasks"); __this_cpu_write(soft_watchdog_warn, true); @@ -527,10 +566,8 @@ static void update_timers_all_cpus(void) int cpu; get_online_cpus(); - preempt_disable(); for_each_online_cpu(cpu) update_timers(cpu); - preempt_enable(); put_online_cpus(); } diff --git a/kernel/workqueue.c b/kernel/workqueue.c index 6203d2900877..35974ac69600 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -3284,6 +3284,7 @@ int workqueue_sysfs_register(struct workqueue_struct *wq) } } + dev_set_uevent_suppress(&wq_dev->dev, false); kobject_uevent(&wq_dev->dev.kobj, KOBJ_ADD); return 0; } @@ -4879,7 +4880,7 @@ static void __init wq_numa_init(void) BUG_ON(!tbl); for_each_node(node) - BUG_ON(!alloc_cpumask_var_node(&tbl[node], GFP_KERNEL, + BUG_ON(!zalloc_cpumask_var_node(&tbl[node], GFP_KERNEL, node_online(node) ? node : NUMA_NO_NODE)); for_each_possible_cpu(cpu) { diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 7cfcc1b8e101..7a638aa3545b 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -930,7 +930,7 @@ config LOCKDEP bool depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT select STACKTRACE - select FRAME_POINTER if !MIPS && !PPC && !ARM_UNWIND && !S390 && !MICROBLAZE && !ARC + select FRAME_POINTER if !MIPS && !PPC && !ARM_UNWIND && !S390 && !MICROBLAZE && !ARC && !SCORE select KALLSYMS select KALLSYMS_ALL @@ -1408,7 +1408,7 @@ config FAULT_INJECTION_STACKTRACE_FILTER depends on FAULT_INJECTION_DEBUG_FS && STACKTRACE_SUPPORT depends on !X86_64 select STACKTRACE - select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM_UNWIND && !ARC + select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM_UNWIND && !ARC && !SCORE help Provide stacktrace filter for fault-injection capabilities diff --git a/lib/cpumask.c b/lib/cpumask.c index c101230658eb..b6513a9f2892 100644 --- a/lib/cpumask.c +++ b/lib/cpumask.c @@ -191,7 +191,7 @@ int cpumask_set_cpu_local_first(int i, int numa_node, cpumask_t *dstp) i %= num_online_cpus(); - if (!cpumask_of_node(numa_node)) { + if (numa_node == -1 || !cpumask_of_node(numa_node)) { /* Use all online cpu's for non numa aware system */ cpumask_copy(mask, cpu_online_mask); } else { diff --git a/lib/iovec.c b/lib/iovec.c index 454baa88bf27..7a7c2da4cddf 100644 --- a/lib/iovec.c +++ b/lib/iovec.c @@ -51,3 +51,58 @@ int memcpy_toiovec(struct iovec *iov, unsigned char *kdata, int len) return 0; } EXPORT_SYMBOL(memcpy_toiovec); + +/* + * Copy kernel to iovec. Returns -EFAULT on error. 
+ */ + +int memcpy_toiovecend(const struct iovec *iov, unsigned char *kdata, + int offset, int len) +{ + int copy; + for (; len > 0; ++iov) { + /* Skip over the finished iovecs */ + if (unlikely(offset >= iov->iov_len)) { + offset -= iov->iov_len; + continue; + } + copy = min_t(unsigned int, iov->iov_len - offset, len); + if (copy_to_user(iov->iov_base + offset, kdata, copy)) + return -EFAULT; + offset = 0; + kdata += copy; + len -= copy; + } + + return 0; +} +EXPORT_SYMBOL(memcpy_toiovecend); + +/* + * Copy iovec to kernel. Returns -EFAULT on error. + */ + +int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov, + int offset, int len) +{ + /* Skip over the finished iovecs */ + while (offset >= iov->iov_len) { + offset -= iov->iov_len; + iov++; + } + + while (len > 0) { + u8 __user *base = iov->iov_base + offset; + int copy = min_t(unsigned int, len, iov->iov_len - offset); + + offset = 0; + if (copy_from_user(kdata, base, copy)) + return -EFAULT; + len -= copy; + kdata += copy; + iov++; + } + + return 0; +} +EXPORT_SYMBOL(memcpy_fromiovecend); diff --git a/lib/lz4/lz4_decompress.c b/lib/lz4/lz4_decompress.c index df6839e3ce08..7a85967060a5 100644 --- a/lib/lz4/lz4_decompress.c +++ b/lib/lz4/lz4_decompress.c @@ -72,6 +72,8 @@ static int lz4_uncompress(const char *source, char *dest, int osize) len = *ip++; for (; len == 255; length += 255) len = *ip++; + if (unlikely(length > (size_t)(length + len))) + goto _output_error; length += len; } @@ -106,6 +108,8 @@ static int lz4_uncompress(const char *source, char *dest, int osize) if (length == ML_MASK) { for (; *ip == 255; length += 255) ip++; + if (unlikely(length > (size_t)(length + *ip))) + goto _output_error; length += *ip++; } @@ -155,7 +159,7 @@ static int lz4_uncompress(const char *source, char *dest, int osize) /* write overflow error detected */ _output_error: - return (int) (-(((char *)ip) - source)); + return -1; } static int lz4_uncompress_unknownoutputsize(const char *source, char *dest, @@ -188,6 +192,8 @@ static int lz4_uncompress_unknownoutputsize(const char *source, char *dest, int s = 255; while ((ip < iend) && (s == 255)) { s = *ip++; + if (unlikely(length > (size_t)(length + s))) + goto _output_error; length += s; } } @@ -228,6 +234,8 @@ static int lz4_uncompress_unknownoutputsize(const char *source, char *dest, if (length == ML_MASK) { while (ip < iend) { int s = *ip++; + if (unlikely(length > (size_t)(length + s))) + goto _output_error; length += s; if (s == 255) continue; @@ -280,7 +288,7 @@ static int lz4_uncompress_unknownoutputsize(const char *source, char *dest, /* write overflow error detected */ _output_error: - return (int) (-(((char *) ip) - source)); + return -1; } int lz4_decompress(const unsigned char *src, size_t *src_len, diff --git a/lib/lzo/lzo1x_decompress_safe.c b/lib/lzo/lzo1x_decompress_safe.c index 569985d522d5..8563081e8da3 100644 --- a/lib/lzo/lzo1x_decompress_safe.c +++ b/lib/lzo/lzo1x_decompress_safe.c @@ -19,11 +19,31 @@ #include <linux/lzo.h> #include "lzodefs.h" -#define HAVE_IP(x) ((size_t)(ip_end - ip) >= (size_t)(x)) -#define HAVE_OP(x) ((size_t)(op_end - op) >= (size_t)(x)) -#define NEED_IP(x) if (!HAVE_IP(x)) goto input_overrun -#define NEED_OP(x) if (!HAVE_OP(x)) goto output_overrun -#define TEST_LB(m_pos) if ((m_pos) < out) goto lookbehind_overrun +#define HAVE_IP(t, x) \ + (((size_t)(ip_end - ip) >= (size_t)(t + x)) && \ + (((t + x) >= t) && ((t + x) >= x))) + +#define HAVE_OP(t, x) \ + (((size_t)(op_end - op) >= (size_t)(t + x)) && \ + (((t + x) >= t) && ((t + x) >= x))) 
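Both decompressors above now guard their length accumulation with the classic unsigned-overflow test: if adding the next chunk wrapped the counter, the sum comes out smaller than the original value and the input is treated as corrupt. A standalone demonstration of that check:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* true if a + b would wrap, mirroring "length > (size_t)(length + len)" */
static int add_would_overflow(size_t a, size_t b)
{
	return a > (size_t)(a + b);
}

int main(void)
{
	assert(!add_would_overflow(100, 255));
	assert(add_would_overflow(SIZE_MAX - 10, 255));
	return 0;
}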
+ +#define NEED_IP(t, x) \ + do { \ + if (!HAVE_IP(t, x)) \ + goto input_overrun; \ + } while (0) + +#define NEED_OP(t, x) \ + do { \ + if (!HAVE_OP(t, x)) \ + goto output_overrun; \ + } while (0) + +#define TEST_LB(m_pos) \ + do { \ + if ((m_pos) < out) \ + goto lookbehind_overrun; \ + } while (0) int lzo1x_decompress_safe(const unsigned char *in, size_t in_len, unsigned char *out, size_t *out_len) @@ -58,14 +78,14 @@ int lzo1x_decompress_safe(const unsigned char *in, size_t in_len, while (unlikely(*ip == 0)) { t += 255; ip++; - NEED_IP(1); + NEED_IP(1, 0); } t += 15 + *ip++; } t += 3; copy_literal_run: #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) - if (likely(HAVE_IP(t + 15) && HAVE_OP(t + 15))) { + if (likely(HAVE_IP(t, 15) && HAVE_OP(t, 15))) { const unsigned char *ie = ip + t; unsigned char *oe = op + t; do { @@ -81,8 +101,8 @@ copy_literal_run: } else #endif { - NEED_OP(t); - NEED_IP(t + 3); + NEED_OP(t, 0); + NEED_IP(t, 3); do { *op++ = *ip++; } while (--t > 0); @@ -95,7 +115,7 @@ copy_literal_run: m_pos -= t >> 2; m_pos -= *ip++ << 2; TEST_LB(m_pos); - NEED_OP(2); + NEED_OP(2, 0); op[0] = m_pos[0]; op[1] = m_pos[1]; op += 2; @@ -119,10 +139,10 @@ copy_literal_run: while (unlikely(*ip == 0)) { t += 255; ip++; - NEED_IP(1); + NEED_IP(1, 0); } t += 31 + *ip++; - NEED_IP(2); + NEED_IP(2, 0); } m_pos = op - 1; next = get_unaligned_le16(ip); @@ -137,10 +157,10 @@ copy_literal_run: while (unlikely(*ip == 0)) { t += 255; ip++; - NEED_IP(1); + NEED_IP(1, 0); } t += 7 + *ip++; - NEED_IP(2); + NEED_IP(2, 0); } next = get_unaligned_le16(ip); ip += 2; @@ -154,7 +174,7 @@ copy_literal_run: #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) if (op - m_pos >= 8) { unsigned char *oe = op + t; - if (likely(HAVE_OP(t + 15))) { + if (likely(HAVE_OP(t, 15))) { do { COPY8(op, m_pos); op += 8; @@ -164,7 +184,7 @@ copy_literal_run: m_pos += 8; } while (op < oe); op = oe; - if (HAVE_IP(6)) { + if (HAVE_IP(6, 0)) { state = next; COPY4(op, ip); op += next; @@ -172,7 +192,7 @@ copy_literal_run: continue; } } else { - NEED_OP(t); + NEED_OP(t, 0); do { *op++ = *m_pos++; } while (op < oe); @@ -181,7 +201,7 @@ copy_literal_run: #endif { unsigned char *oe = op + t; - NEED_OP(t); + NEED_OP(t, 0); op[0] = m_pos[0]; op[1] = m_pos[1]; op += 2; @@ -194,15 +214,15 @@ match_next: state = next; t = next; #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) - if (likely(HAVE_IP(6) && HAVE_OP(4))) { + if (likely(HAVE_IP(6, 0) && HAVE_OP(4, 0))) { COPY4(op, ip); op += t; ip += t; } else #endif { - NEED_IP(t + 3); - NEED_OP(t); + NEED_IP(t, 3); + NEED_OP(t, 0); while (t > 0) { *op++ = *ip++; t--; diff --git a/lib/swiotlb.c b/lib/swiotlb.c index 649d097853a1..4abda074ea45 100644 --- a/lib/swiotlb.c +++ b/lib/swiotlb.c @@ -86,6 +86,7 @@ static unsigned int io_tlb_index; * We need to save away the original address corresponding to a mapped entry * for the sync operations. 
*/ +#define INVALID_PHYS_ADDR (~(phys_addr_t)0) static phys_addr_t *io_tlb_orig_addr; /* @@ -188,12 +189,14 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) io_tlb_list = memblock_virt_alloc( PAGE_ALIGN(io_tlb_nslabs * sizeof(int)), PAGE_SIZE); - for (i = 0; i < io_tlb_nslabs; i++) - io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); - io_tlb_index = 0; io_tlb_orig_addr = memblock_virt_alloc( PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)), PAGE_SIZE); + for (i = 0; i < io_tlb_nslabs; i++) { + io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); + io_tlb_orig_addr[i] = INVALID_PHYS_ADDR; + } + io_tlb_index = 0; if (verbose) swiotlb_print_info(); @@ -313,10 +316,6 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) if (!io_tlb_list) goto cleanup3; - for (i = 0; i < io_tlb_nslabs; i++) - io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); - io_tlb_index = 0; - io_tlb_orig_addr = (phys_addr_t *) __get_free_pages(GFP_KERNEL, get_order(io_tlb_nslabs * @@ -324,7 +323,11 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) if (!io_tlb_orig_addr) goto cleanup4; - memset(io_tlb_orig_addr, 0, io_tlb_nslabs * sizeof(phys_addr_t)); + for (i = 0; i < io_tlb_nslabs; i++) { + io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); + io_tlb_orig_addr[i] = INVALID_PHYS_ADDR; + } + io_tlb_index = 0; swiotlb_print_info(); @@ -556,7 +559,8 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, /* * First, sync the memory before unmapping the entry */ - if (orig_addr && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL))) + if (orig_addr != INVALID_PHYS_ADDR && + ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL))) swiotlb_bounce(orig_addr, tlb_addr, size, DMA_FROM_DEVICE); /* @@ -573,8 +577,10 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, * Step 1: return the slots to the free list, merging the * slots with superceeding slots */ - for (i = index + nslots - 1; i >= index; i--) + for (i = index + nslots - 1; i >= index; i--) { io_tlb_list[i] = ++count; + io_tlb_orig_addr[i] = INVALID_PHYS_ADDR; + } /* * Step 2: merge the returned slots with the preceding slots, * if available (non zero) @@ -593,6 +599,8 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr, int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT; phys_addr_t orig_addr = io_tlb_orig_addr[index]; + if (orig_addr == INVALID_PHYS_ADDR) + return; orig_addr += (unsigned long)tlb_addr & ((1 << IO_TLB_SHIFT) - 1); switch (target) { diff --git a/mm/huge_memory.c b/mm/huge_memory.c index e60837dc785c..33514d88fef9 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -941,6 +941,37 @@ unlock: spin_unlock(ptl); } +/* + * Save CONFIG_DEBUG_PAGEALLOC from faulting falsely on tail pages + * during copy_user_huge_page()'s copy_page_rep(): in the case when + * the source page gets split and a tail freed before copy completes. + * Called under pmd_lock of checked pmd, so safe from splitting itself. 
+ */ +static void get_user_huge_page(struct page *page) +{ + if (IS_ENABLED(CONFIG_DEBUG_PAGEALLOC)) { + struct page *endpage = page + HPAGE_PMD_NR; + + atomic_add(HPAGE_PMD_NR, &page->_count); + while (++page < endpage) + get_huge_page_tail(page); + } else { + get_page(page); + } +} + +static void put_user_huge_page(struct page *page) +{ + if (IS_ENABLED(CONFIG_DEBUG_PAGEALLOC)) { + struct page *endpage = page + HPAGE_PMD_NR; + + while (page < endpage) + put_page(page++); + } else { + put_page(page); + } +} + static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long address, @@ -1074,7 +1105,7 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma, ret |= VM_FAULT_WRITE; goto out_unlock; } - get_page(page); + get_user_huge_page(page); spin_unlock(ptl); alloc: if (transparent_hugepage_enabled(vma) && @@ -1095,7 +1126,7 @@ alloc: split_huge_page(page); ret |= VM_FAULT_FALLBACK; } - put_page(page); + put_user_huge_page(page); } count_vm_event(THP_FAULT_FALLBACK); goto out; @@ -1105,7 +1136,7 @@ alloc: put_page(new_page); if (page) { split_huge_page(page); - put_page(page); + put_user_huge_page(page); } else split_huge_page_pmd(vma, address, pmd); ret |= VM_FAULT_FALLBACK; @@ -1127,7 +1158,7 @@ alloc: spin_lock(ptl); if (page) - put_page(page); + put_user_huge_page(page); if (unlikely(!pmd_same(*pmd, orig_pmd))) { spin_unlock(ptl); mem_cgroup_uncharge_page(new_page); @@ -2392,8 +2423,6 @@ static void collapse_huge_page(struct mm_struct *mm, pmd = mm_find_pmd(mm, address); if (!pmd) goto out; - if (pmd_trans_huge(*pmd)) - goto out; anon_vma_lock_write(vma->anon_vma); @@ -2492,8 +2521,6 @@ static int khugepaged_scan_pmd(struct mm_struct *mm, pmd = mm_find_pmd(mm, address); if (!pmd) goto out; - if (pmd_trans_huge(*pmd)) - goto out; memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load)); pte = pte_offset_map_lock(mm, pmd, address, &ptl); @@ -2846,12 +2873,22 @@ void split_huge_page_pmd_mm(struct mm_struct *mm, unsigned long address, static void split_huge_page_address(struct mm_struct *mm, unsigned long address) { + pgd_t *pgd; + pud_t *pud; pmd_t *pmd; VM_BUG_ON(!(address & ~HPAGE_PMD_MASK)); - pmd = mm_find_pmd(mm, address); - if (!pmd) + pgd = pgd_offset(mm, address); + if (!pgd_present(*pgd)) + return; + + pud = pud_offset(pgd, address); + if (!pud_present(*pud)) + return; + + pmd = pmd_offset(pud, address); + if (!pmd_present(*pmd)) return; /* * Caller holds the mmap_sem write mode, so a huge pmd cannot diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 226910cb7c9b..9221c02ed9e2 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -2520,6 +2520,31 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma, update_mmu_cache(vma, address, ptep); } +static int is_hugetlb_entry_migration(pte_t pte) +{ + swp_entry_t swp; + + if (huge_pte_none(pte) || pte_present(pte)) + return 0; + swp = pte_to_swp_entry(pte); + if (non_swap_entry(swp) && is_migration_entry(swp)) + return 1; + else + return 0; +} + +static int is_hugetlb_entry_hwpoisoned(pte_t pte) +{ + swp_entry_t swp; + + if (huge_pte_none(pte) || pte_present(pte)) + return 0; + swp = pte_to_swp_entry(pte); + if (non_swap_entry(swp) && is_hwpoison_entry(swp)) + return 1; + else + return 0; +} int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, struct vm_area_struct *vma) @@ -2559,7 +2584,24 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, dst_ptl = huge_pte_lock(h, dst, dst_pte); src_ptl = huge_pte_lockptr(h, src, src_pte); 
spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING); - if (!huge_pte_none(huge_ptep_get(src_pte))) { + entry = huge_ptep_get(src_pte); + if (huge_pte_none(entry)) { /* skip none entry */ + ; + } else if (unlikely(is_hugetlb_entry_migration(entry) || + is_hugetlb_entry_hwpoisoned(entry))) { + swp_entry_t swp_entry = pte_to_swp_entry(entry); + + if (is_write_migration_entry(swp_entry) && cow) { + /* + * COW mappings require pages in both + * parent and child to be set to read. + */ + make_migration_entry_read(&swp_entry); + entry = swp_entry_to_pte(swp_entry); + set_huge_pte_at(src, addr, src_pte, entry); + } + set_huge_pte_at(dst, addr, dst_pte, entry); + } else { if (cow) huge_ptep_set_wrprotect(src, addr, src_pte); entry = huge_ptep_get(src_pte); @@ -2578,32 +2620,6 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, return ret; } -static int is_hugetlb_entry_migration(pte_t pte) -{ - swp_entry_t swp; - - if (huge_pte_none(pte) || pte_present(pte)) - return 0; - swp = pte_to_swp_entry(pte); - if (non_swap_entry(swp) && is_migration_entry(swp)) - return 1; - else - return 0; -} - -static int is_hugetlb_entry_hwpoisoned(pte_t pte) -{ - swp_entry_t swp; - - if (huge_pte_none(pte) || pte_present(pte)) - return 0; - swp = pte_to_swp_entry(pte); - if (non_swap_entry(swp) && is_hwpoison_entry(swp)) - return 1; - else - return 0; -} - void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma, unsigned long start, unsigned long end, struct page *ref_page) @@ -945,7 +945,6 @@ static int replace_page(struct vm_area_struct *vma, struct page *page, pmd = mm_find_pmd(mm, addr); if (!pmd) goto out; - BUG_ON(pmd_trans_huge(*pmd)); mmun_start = addr; mmun_end = addr + PAGE_SIZE; diff --git a/mm/memory-failure.c b/mm/memory-failure.c index cd8989c1027e..7211a73ba14d 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -435,7 +435,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill, if (av == NULL) /* Not actually mapped anymore */ return; - pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT); + pgoff = page_to_pgoff(page); read_lock(&tasklist_lock); for_each_process (tsk) { struct anon_vma_chain *vmac; @@ -469,7 +469,7 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill, mutex_lock(&mapping->i_mmap_mutex); read_lock(&tasklist_lock); for_each_process(tsk) { - pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT); + pgoff_t pgoff = page_to_pgoff(page); struct task_struct *t = task_early_kill(tsk, force_early); if (!t) @@ -895,7 +895,7 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn, struct page *hpage = *hpagep; struct page *ppage; - if (PageReserved(p) || PageSlab(p)) + if (PageReserved(p) || PageSlab(p) || !PageLRU(p)) return SWAP_SUCCESS; /* @@ -1159,9 +1159,6 @@ int memory_failure(unsigned long pfn, int trapno, int flags) action_result(pfn, "free buddy, 2nd try", DELAYED); return 0; } - action_result(pfn, "non LRU", IGNORED); - put_page(p); - return -EBUSY; } } @@ -1194,6 +1191,9 @@ int memory_failure(unsigned long pfn, int trapno, int flags) return 0; } + if (!PageHuge(p) && !PageTransTail(p) && !PageLRU(p)) + goto identify_page_state; + /* * For error on the tail page, we should set PG_hwpoison * on the head page to show that the hugepage is hwpoisoned @@ -1243,6 +1243,7 @@ int memory_failure(unsigned long pfn, int trapno, int flags) goto out; } +identify_page_state: res = -EBUSY; /* * The first check uses the current page flags which may not have any diff --git 
a/mm/memory.c b/mm/memory.c index d67fd9fcf1f2..7e8d8205b610 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2882,7 +2882,8 @@ static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma, * if page by the offset is not ready to be mapped (cold cache or * something). */ - if (vma->vm_ops->map_pages && fault_around_pages() > 1) { + if (vma->vm_ops->map_pages && !(flags & FAULT_FLAG_NONLINEAR) && + fault_around_pages() > 1) { pte = pte_offset_map_lock(mm, pmd, address, &ptl); do_fault_around(vma, address, pte, pgoff, flags); if (!pte_same(*pte, orig_pte)) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 284974230459..8f5330d74f47 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -656,19 +656,18 @@ static unsigned long change_prot_numa(struct vm_area_struct *vma, * @nodes and @flags,) it's isolated and queued to the pagelist which is * passed via @private.) */ -static struct vm_area_struct * +static int queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end, const nodemask_t *nodes, unsigned long flags, void *private) { - int err; - struct vm_area_struct *first, *vma, *prev; - + int err = 0; + struct vm_area_struct *vma, *prev; - first = find_vma(mm, start); - if (!first) - return ERR_PTR(-EFAULT); + vma = find_vma(mm, start); + if (!vma) + return -EFAULT; prev = NULL; - for (vma = first; vma && vma->vm_start < end; vma = vma->vm_next) { + for (; vma && vma->vm_start < end; vma = vma->vm_next) { unsigned long endvma = vma->vm_end; if (endvma > end) @@ -678,9 +677,9 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end, if (!(flags & MPOL_MF_DISCONTIG_OK)) { if (!vma->vm_next && vma->vm_end < end) - return ERR_PTR(-EFAULT); + return -EFAULT; if (prev && prev->vm_end < vma->vm_start) - return ERR_PTR(-EFAULT); + return -EFAULT; } if (flags & MPOL_MF_LAZY) { @@ -694,15 +693,13 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end, err = queue_pages_pgd_range(vma, start, endvma, nodes, flags, private); - if (err) { - first = ERR_PTR(err); + if (err) break; - } } next: prev = vma; } - return first; + return err; } /* @@ -1156,16 +1153,17 @@ out: /* * Allocate a new page for page migration based on vma policy. - * Start assuming that page is mapped by vma pointed to by @private. + * Start by assuming the page is mapped by the same vma as contains @start. * Search forward from there, if not. N.B., this assumes that the * list of pages handed to migrate_pages()--which is how we get here-- * is in virtual address order. 
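The queue_pages_range() rework above drops the ERR_PTR-encoded vma return in favour of a plain int, so callers no longer have to decode errors out of a pointer. For readers unfamiliar with that idiom, here is a small userspace re-creation of the pattern; the helpers below are stand-ins for illustration, not the kernel's <linux/err.h> versions.

	#include <stdio.h>
	#include <errno.h>
	#include <stdint.h>

	/* Userspace stand-ins for the kernel's ERR_PTR()/IS_ERR()/PTR_ERR(). */
	#define MAX_ERRNO 4095

	static inline void *ERR_PTR(long error)  { return (void *)error; }
	static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
	static inline int IS_ERR(const void *ptr)
	{
		return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
	}

	/* Old style: a pointer that may really be a negative errno in disguise. */
	static void *find_thing(int fail)
	{
		static int thing;

		return fail ? ERR_PTR(-EFAULT) : &thing;
	}

	/* New style (what the patch moves to): return the errno directly. */
	static int find_thing_err(int fail)
	{
		return fail ? -EFAULT : 0;
	}

	int main(void)
	{
		void *p = find_thing(1);

		if (IS_ERR(p))
			printf("old style: error %ld\n", PTR_ERR(p));
		printf("new style: error %d\n", find_thing_err(1));
		return 0;
	}

Returning the error code directly is what lets do_mbind() below simply test "if (!err)" instead of mixing IS_ERR() checks with a separately tracked return value.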
*/ -static struct page *new_vma_page(struct page *page, unsigned long private, int **x) +static struct page *new_page(struct page *page, unsigned long start, int **x) { - struct vm_area_struct *vma = (struct vm_area_struct *)private; + struct vm_area_struct *vma; unsigned long uninitialized_var(address); + vma = find_vma(current->mm, start); while (vma) { address = page_address_in_vma(page, vma); if (address != -EFAULT) @@ -1195,7 +1193,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, return -ENOSYS; } -static struct page *new_vma_page(struct page *page, unsigned long private, int **x) +static struct page *new_page(struct page *page, unsigned long start, int **x) { return NULL; } @@ -1205,7 +1203,6 @@ static long do_mbind(unsigned long start, unsigned long len, unsigned short mode, unsigned short mode_flags, nodemask_t *nmask, unsigned long flags) { - struct vm_area_struct *vma; struct mm_struct *mm = current->mm; struct mempolicy *new; unsigned long end; @@ -1271,11 +1268,9 @@ static long do_mbind(unsigned long start, unsigned long len, if (err) goto mpol_out; - vma = queue_pages_range(mm, start, end, nmask, + err = queue_pages_range(mm, start, end, nmask, flags | MPOL_MF_INVERT, &pagelist); - - err = PTR_ERR(vma); /* maybe ... */ - if (!IS_ERR(vma)) + if (!err) err = mbind_range(mm, start, end, new); if (!err) { @@ -1283,9 +1278,8 @@ static long do_mbind(unsigned long start, unsigned long len, if (!list_empty(&pagelist)) { WARN_ON_ONCE(flags & MPOL_MF_LAZY); - nr_failed = migrate_pages(&pagelist, new_vma_page, - NULL, (unsigned long)vma, - MIGRATE_SYNC, MR_MEMPOLICY_MBIND); + nr_failed = migrate_pages(&pagelist, new_page, NULL, + start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND); if (nr_failed) putback_movable_pages(&pagelist); } @@ -2145,7 +2139,6 @@ struct mempolicy *__mpol_dup(struct mempolicy *old) } else *new = *old; - rcu_read_lock(); if (current_cpuset_is_being_rebound()) { nodemask_t mems = cpuset_mems_allowed(current); if (new->flags & MPOL_F_REBINDING) @@ -2153,7 +2146,6 @@ struct mempolicy *__mpol_dup(struct mempolicy *old) else mpol_rebind_policy(new, &mems, MPOL_REBIND_ONCE); } - rcu_read_unlock(); atomic_set(&new->refcnt, 1); return new; } diff --git a/mm/migrate.c b/mm/migrate.c index 63f0cd559999..be6dbf995c0c 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -120,8 +120,6 @@ static int remove_migration_pte(struct page *new, struct vm_area_struct *vma, pmd = mm_find_pmd(mm, addr); if (!pmd) goto out; - if (pmd_trans_huge(*pmd)) - goto out; ptep = pte_offset_map(pmd, addr); @@ -990,9 +988,10 @@ out: * it. Otherwise, putback_lru_page() will drop the reference grabbed * during isolation. 
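The do_mbind() changes above keep the same user-visible entry point, mbind(2); only the internal queueing and the page-allocation callback change. A minimal userspace sketch of driving that path, assuming libnuma's <numaif.h> wrapper is installed (build with -lnuma) and that node 0 exists; on kernels built without NUMA support the call simply fails.

	#include <numaif.h>      /* mbind(), MPOL_BIND, MPOL_MF_MOVE (libnuma) */
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 4 * 1024 * 1024;
		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		memset(p, 0xaa, len);          /* fault the pages in first */

		unsigned long nodemask = 1UL;  /* bit 0 => node 0 */
		/* Bind the range to node 0 and migrate already-allocated
		 * pages; this is the syscall that lands in do_mbind(). */
		if (mbind(p, len, MPOL_BIND, &nodemask,
			  8 * sizeof(nodemask), MPOL_MF_MOVE) != 0)
			perror("mbind");
		else
			printf("range bound to node 0\n");

		munmap(p, len);
		return 0;
	}

On a single-node machine this is effectively a no-op, but it still exercises the queue_pages_range()/migrate_pages() sequence touched by the hunks above.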
*/ - if (rc != MIGRATEPAGE_SUCCESS && put_new_page) + if (rc != MIGRATEPAGE_SUCCESS && put_new_page) { + ClearPageSwapBacked(newpage); put_new_page(newpage, private); - else + } else putback_lru_page(newpage); if (result) { diff --git a/mm/msync.c b/mm/msync.c index a5c673669ca6..992a1673d488 100644 --- a/mm/msync.c +++ b/mm/msync.c @@ -78,7 +78,8 @@ SYSCALL_DEFINE3(msync, unsigned long, start, size_t, len, int, flags) goto out_unlock; } file = vma->vm_file; - fstart = start + ((loff_t)vma->vm_pgoff << PAGE_SHIFT); + fstart = (start - vma->vm_start) + + ((loff_t)vma->vm_pgoff << PAGE_SHIFT); fend = fstart + (min(end, vma->vm_end) - start) - 1; start = vma->vm_end; if ((flags & MS_SYNC) && file && diff --git a/mm/nommu.c b/mm/nommu.c index b78e3a8f5ee7..4a852f6c5709 100644 --- a/mm/nommu.c +++ b/mm/nommu.c @@ -786,7 +786,7 @@ static void delete_vma_from_mm(struct vm_area_struct *vma) for (i = 0; i < VMACACHE_SIZE; i++) { /* if the vma is cached, invalidate the entire cache */ if (curr->vmacache[i] == vma) { - vmacache_invalidate(curr->mm); + vmacache_invalidate(mm); break; } } diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 4f59fa29eda8..0ea758b898fd 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -69,6 +69,7 @@ /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */ static DEFINE_MUTEX(pcp_batch_high_lock); +#define MIN_PERCPU_PAGELIST_FRACTION (8) #ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID DEFINE_PER_CPU(int, numa_node); @@ -815,9 +816,21 @@ void __init init_cma_reserved_pageblock(struct page *page) set_page_count(p, 0); } while (++p, --i); - set_page_refcounted(page); set_pageblock_migratetype(page, MIGRATE_CMA); - __free_pages(page, pageblock_order); + + if (pageblock_order >= MAX_ORDER) { + i = pageblock_nr_pages; + p = page; + do { + set_page_refcounted(p); + __free_pages(p, MAX_ORDER - 1); + p += MAX_ORDER_NR_PAGES; + } while (i -= MAX_ORDER_NR_PAGES); + } else { + set_page_refcounted(page); + __free_pages(page, pageblock_order); + } + adjust_managed_page_count(page, pageblock_nr_pages); } #endif @@ -4145,7 +4158,7 @@ static void __meminit zone_init_free_lists(struct zone *zone) memmap_init_zone((size), (nid), (zone), (start_pfn), MEMMAP_EARLY) #endif -static int __meminit zone_batchsize(struct zone *zone) +static int zone_batchsize(struct zone *zone) { #ifdef CONFIG_MMU int batch; @@ -4261,8 +4274,8 @@ static void pageset_set_high(struct per_cpu_pageset *p, pageset_update(&p->pcp, high, batch); } -static void __meminit pageset_set_high_and_batch(struct zone *zone, - struct per_cpu_pageset *pcp) +static void pageset_set_high_and_batch(struct zone *zone, + struct per_cpu_pageset *pcp) { if (percpu_pagelist_fraction) pageset_set_high(pcp, @@ -5881,23 +5894,38 @@ int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos) { struct zone *zone; - unsigned int cpu; + int old_percpu_pagelist_fraction; int ret; + mutex_lock(&pcp_batch_high_lock); + old_percpu_pagelist_fraction = percpu_pagelist_fraction; + ret = proc_dointvec_minmax(table, write, buffer, length, ppos); - if (!write || (ret < 0)) - return ret; + if (!write || ret < 0) + goto out; + + /* Sanity checking to avoid pcp imbalance */ + if (percpu_pagelist_fraction && + percpu_pagelist_fraction < MIN_PERCPU_PAGELIST_FRACTION) { + percpu_pagelist_fraction = old_percpu_pagelist_fraction; + ret = -EINVAL; + goto out; + } + + /* No change? 
*/ + if (percpu_pagelist_fraction == old_percpu_pagelist_fraction) + goto out; - mutex_lock(&pcp_batch_high_lock); for_each_populated_zone(zone) { - unsigned long high; - high = zone->managed_pages / percpu_pagelist_fraction; + unsigned int cpu; + for_each_possible_cpu(cpu) - pageset_set_high(per_cpu_ptr(zone->pageset, cpu), - high); + pageset_set_high_and_batch(zone, + per_cpu_ptr(zone->pageset, cpu)); } +out: mutex_unlock(&pcp_batch_high_lock); - return 0; + return ret; } int hashdist = HASHDIST_DEFAULT; diff --git a/mm/rmap.c b/mm/rmap.c index bf05fc872ae8..22a4a7699cdb 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -517,11 +517,7 @@ void page_unlock_anon_vma_read(struct anon_vma *anon_vma) static inline unsigned long __vma_address(struct page *page, struct vm_area_struct *vma) { - pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT); - - if (unlikely(is_vm_hugetlb_page(vma))) - pgoff = page->index << huge_page_order(page_hstate(page)); - + pgoff_t pgoff = page_to_pgoff(page); return vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT); } @@ -569,6 +565,7 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address) pgd_t *pgd; pud_t *pud; pmd_t *pmd = NULL; + pmd_t pmde; pgd = pgd_offset(mm, address); if (!pgd_present(*pgd)) @@ -579,7 +576,13 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address) goto out; pmd = pmd_offset(pud, address); - if (!pmd_present(*pmd)) + /* + * Some THP functions use the sequence pmdp_clear_flush(), set_pmd_at() + * without holding anon_vma lock for write. So when looking for a + * genuine pmde (in which to find pte), test present and !THP together. + */ + pmde = ACCESS_ONCE(*pmd); + if (!pmd_present(pmde) || pmd_trans_huge(pmde)) pmd = NULL; out: return pmd; @@ -615,9 +618,6 @@ pte_t *__page_check_address(struct page *page, struct mm_struct *mm, if (!pmd) return NULL; - if (pmd_trans_huge(*pmd)) - return NULL; - pte = pte_offset_map(pmd, address); /* Make a quick check before getting the lock */ if (!sync && !pte_present(*pte)) { @@ -1635,7 +1635,7 @@ static struct anon_vma *rmap_walk_anon_lock(struct page *page, static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc) { struct anon_vma *anon_vma; - pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT); + pgoff_t pgoff = page_to_pgoff(page); struct anon_vma_chain *avc; int ret = SWAP_AGAIN; @@ -1676,7 +1676,7 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc) static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc) { struct address_space *mapping = page->mapping; - pgoff_t pgoff = page->index << compound_order(page); + pgoff_t pgoff = page_to_pgoff(page); struct vm_area_struct *vma; int ret = SWAP_AGAIN; diff --git a/mm/shmem.c b/mm/shmem.c index f484c276e994..af68b15a8fc1 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -80,11 +80,12 @@ static struct vfsmount *shm_mnt; #define SHORT_SYMLINK_LEN 128 /* - * shmem_fallocate and shmem_writepage communicate via inode->i_private - * (with i_mutex making sure that it has only one user at a time): - * we would prefer not to enlarge the shmem inode just for that. + * shmem_fallocate communicates with shmem_fault or shmem_writepage via + * inode->i_private (with i_mutex making sure that it has only one user at + * a time): we would prefer not to enlarge the shmem inode just for that. 
*/ struct shmem_falloc { + wait_queue_head_t *waitq; /* faults into hole wait for punch to end */ pgoff_t start; /* start of range currently being fallocated */ pgoff_t next; /* the next page offset to be fallocated */ pgoff_t nr_falloced; /* how many new pages have been fallocated */ @@ -467,23 +468,20 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, return; index = start; - for ( ; ; ) { + while (index < end) { cond_resched(); pvec.nr = find_get_entries(mapping, index, min(end - index, (pgoff_t)PAGEVEC_SIZE), pvec.pages, indices); if (!pvec.nr) { - if (index == start || unfalloc) + /* If all gone or hole-punch or unfalloc, we're done */ + if (index == start || end != -1) break; + /* But if truncating, restart to make sure all gone */ index = start; continue; } - if ((index == start || unfalloc) && indices[0] >= end) { - pagevec_remove_exceptionals(&pvec); - pagevec_release(&pvec); - break; - } mem_cgroup_uncharge_start(); for (i = 0; i < pagevec_count(&pvec); i++) { struct page *page = pvec.pages[i]; @@ -495,8 +493,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, if (radix_tree_exceptional_entry(page)) { if (unfalloc) continue; - nr_swaps_freed += !shmem_free_swap(mapping, - index, page); + if (shmem_free_swap(mapping, index, page)) { + /* Swap was replaced by page: retry */ + index--; + break; + } + nr_swaps_freed++; continue; } @@ -505,6 +507,11 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, if (page->mapping == mapping) { VM_BUG_ON_PAGE(PageWriteback(page), page); truncate_inode_page(mapping, page); + } else { + /* Page was replaced by swap: retry */ + unlock_page(page); + index--; + break; } } unlock_page(page); @@ -759,6 +766,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc) spin_lock(&inode->i_lock); shmem_falloc = inode->i_private; if (shmem_falloc && + !shmem_falloc->waitq && index >= shmem_falloc->start && index < shmem_falloc->next) shmem_falloc->nr_unswapped++; @@ -1027,6 +1035,9 @@ repeat: goto failed; } + if (page && sgp == SGP_WRITE) + mark_page_accessed(page); + /* fallocated page? */ if (page && !PageUptodate(page)) { if (sgp != SGP_READ) @@ -1108,6 +1119,9 @@ repeat: shmem_recalc_inode(inode); spin_unlock(&info->lock); + if (sgp == SGP_WRITE) + mark_page_accessed(page); + delete_from_swap_cache(page); set_page_dirty(page); swap_free(swap); @@ -1134,6 +1148,9 @@ repeat: __SetPageSwapBacked(page); __set_page_locked(page); + if (sgp == SGP_WRITE) + init_page_accessed(page); + error = mem_cgroup_charge_file(page, current->mm, gfp & GFP_RECLAIM_MASK); if (error) @@ -1233,6 +1250,64 @@ static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) int error; int ret = VM_FAULT_LOCKED; + /* + * Trinity finds that probing a hole which tmpfs is punching can + * prevent the hole-punch from ever completing: which in turn + * locks writers out with its hold on i_mutex. So refrain from + * faulting pages into the hole while it's being punched. Although + * shmem_undo_range() does remove the additions, it may be unable to + * keep up, as each new page needs its own unmap_mapping_range() call, + * and the i_mmap tree grows ever slower to scan if new vmas are added. + * + * It does not matter if we sometimes reach this check just before the + * hole-punch begins, so that one fault then races with the punch: + * we just need to make racing faults a rare case. 
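The operation that faults race against here is an ordinary hole punch on a tmpfs file. A minimal userspace sketch of that operation, assuming memfd_create() is available (glibc 2.27+, or substitute any file on a tmpfs mount):

	#define _GNU_SOURCE
	#include <fcntl.h>        /* fallocate(), FALLOC_FL_* */
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>     /* memfd_create(), mmap() */
	#include <unistd.h>

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		int fd = memfd_create("punch-demo", 0);   /* tmpfs-backed file */
		if (fd < 0) {
			perror("memfd_create");
			return 1;
		}
		if (ftruncate(fd, 4 * page) < 0) {
			perror("ftruncate");
			return 1;
		}

		char *map = mmap(NULL, 4 * page, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, 0);
		if (map == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		memset(map, 'x', 4 * page);               /* populate all pages */

		/* Punch out the middle two pages; concurrent faults into this
		 * range are exactly what the comment above is guarding. */
		if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			      page, 2 * page) < 0) {
			perror("fallocate");
			return 1;
		}

		/* Touching the hole again faults in fresh zero pages. */
		printf("byte in hole after punch: %d\n", map[page]);

		munmap(map, 4 * page);
		close(fd);
		return 0;
	}

A second thread looping over reads of map[page..3*page) while the fallocate() runs is the Trinity-style workload the waitq added below is meant to keep from livelocking the punch.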
+ * + * The implementation below would be much simpler if we just used a + * standard mutex or completion: but we cannot take i_mutex in fault, + * and bloating every shmem inode for this unlikely case would be sad. + */ + if (unlikely(inode->i_private)) { + struct shmem_falloc *shmem_falloc; + + spin_lock(&inode->i_lock); + shmem_falloc = inode->i_private; + if (shmem_falloc && + shmem_falloc->waitq && + vmf->pgoff >= shmem_falloc->start && + vmf->pgoff < shmem_falloc->next) { + wait_queue_head_t *shmem_falloc_waitq; + DEFINE_WAIT(shmem_fault_wait); + + ret = VM_FAULT_NOPAGE; + if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) && + !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) { + /* It's polite to up mmap_sem if we can */ + up_read(&vma->vm_mm->mmap_sem); + ret = VM_FAULT_RETRY; + } + + shmem_falloc_waitq = shmem_falloc->waitq; + prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait, + TASK_UNINTERRUPTIBLE); + spin_unlock(&inode->i_lock); + schedule(); + + /* + * shmem_falloc_waitq points into the shmem_fallocate() + * stack of the hole-punching task: shmem_falloc_waitq + * is usually invalid by the time we reach here, but + * finish_wait() does not dereference it in that case; + * though i_lock needed lest racing with wake_up_all(). + */ + spin_lock(&inode->i_lock); + finish_wait(shmem_falloc_waitq, &shmem_fault_wait); + spin_unlock(&inode->i_lock); + return ret; + } + spin_unlock(&inode->i_lock); + } + error = shmem_getpage(inode, vmf->pgoff, &vmf->page, SGP_CACHE, &ret); if (error) return ((error == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS); @@ -1372,13 +1447,9 @@ shmem_write_begin(struct file *file, struct address_space *mapping, loff_t pos, unsigned len, unsigned flags, struct page **pagep, void **fsdata) { - int ret; struct inode *inode = mapping->host; pgoff_t index = pos >> PAGE_CACHE_SHIFT; - ret = shmem_getpage(inode, index, pagep, SGP_WRITE, NULL); - if (ret == 0 && *pagep) - init_page_accessed(*pagep); - return ret; + return shmem_getpage(inode, index, pagep, SGP_WRITE, NULL); } static int @@ -1724,18 +1795,34 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, pgoff_t start, index, end; int error; + if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE)) + return -EOPNOTSUPP; + mutex_lock(&inode->i_mutex); if (mode & FALLOC_FL_PUNCH_HOLE) { struct address_space *mapping = file->f_mapping; loff_t unmap_start = round_up(offset, PAGE_SIZE); loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1; + DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq); + + shmem_falloc.waitq = &shmem_falloc_waitq; + shmem_falloc.start = unmap_start >> PAGE_SHIFT; + shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT; + spin_lock(&inode->i_lock); + inode->i_private = &shmem_falloc; + spin_unlock(&inode->i_lock); if ((u64)unmap_end > (u64)unmap_start) unmap_mapping_range(mapping, unmap_start, 1 + unmap_end - unmap_start, 0); shmem_truncate_range(inode, offset, offset + len - 1); /* No need to unmap again: hole-punching leaves COWed pages */ + + spin_lock(&inode->i_lock); + inode->i_private = NULL; + wake_up_all(&shmem_falloc_waitq); + spin_unlock(&inode->i_lock); error = 0; goto out; } @@ -1753,6 +1840,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, goto out; } + shmem_falloc.waitq = NULL; shmem_falloc.start = start; shmem_falloc.next = start; shmem_falloc.nr_falloced = 0; diff --git a/mm/slab.c b/mm/slab.c index 9ca3b87edabc..3070b929a1bf 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -386,6 +386,39 @@ static void **dbg_userword(struct kmem_cache *cachep, void 
*objp) #endif +#define OBJECT_FREE (0) +#define OBJECT_ACTIVE (1) + +#ifdef CONFIG_DEBUG_SLAB_LEAK + +static void set_obj_status(struct page *page, int idx, int val) +{ + int freelist_size; + char *status; + struct kmem_cache *cachep = page->slab_cache; + + freelist_size = cachep->num * sizeof(freelist_idx_t); + status = (char *)page->freelist + freelist_size; + status[idx] = val; +} + +static inline unsigned int get_obj_status(struct page *page, int idx) +{ + int freelist_size; + char *status; + struct kmem_cache *cachep = page->slab_cache; + + freelist_size = cachep->num * sizeof(freelist_idx_t); + status = (char *)page->freelist + freelist_size; + + return status[idx]; +} + +#else +static inline void set_obj_status(struct page *page, int idx, int val) {} + +#endif + /* * Do not go above this order unless 0 objects fit into the slab or * overridden on the command line. @@ -576,12 +609,30 @@ static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep) return cachep->array[smp_processor_id()]; } +static size_t calculate_freelist_size(int nr_objs, size_t align) +{ + size_t freelist_size; + + freelist_size = nr_objs * sizeof(freelist_idx_t); + if (IS_ENABLED(CONFIG_DEBUG_SLAB_LEAK)) + freelist_size += nr_objs * sizeof(char); + + if (align) + freelist_size = ALIGN(freelist_size, align); + + return freelist_size; +} + static int calculate_nr_objs(size_t slab_size, size_t buffer_size, size_t idx_size, size_t align) { int nr_objs; + size_t remained_size; size_t freelist_size; + int extra_space = 0; + if (IS_ENABLED(CONFIG_DEBUG_SLAB_LEAK)) + extra_space = sizeof(char); /* * Ignore padding for the initial guess. The padding * is at most @align-1 bytes, and @buffer_size is at @@ -590,14 +641,15 @@ static int calculate_nr_objs(size_t slab_size, size_t buffer_size, * into the memory allocation when taking the padding * into account. */ - nr_objs = slab_size / (buffer_size + idx_size); + nr_objs = slab_size / (buffer_size + idx_size + extra_space); /* * This calculated number will be either the right * amount, or one greater than what we want. */ - freelist_size = slab_size - nr_objs * buffer_size; - if (freelist_size < ALIGN(nr_objs * idx_size, align)) + remained_size = slab_size - nr_objs * buffer_size; + freelist_size = calculate_freelist_size(nr_objs, align); + if (remained_size < freelist_size) nr_objs--; return nr_objs; @@ -635,7 +687,7 @@ static void cache_estimate(unsigned long gfporder, size_t buffer_size, } else { nr_objs = calculate_nr_objs(slab_size, buffer_size, sizeof(freelist_idx_t), align); - mgmt_size = ALIGN(nr_objs * sizeof(freelist_idx_t), align); + mgmt_size = calculate_freelist_size(nr_objs, align); } *num = nr_objs; *left_over = slab_size - nr_objs*buffer_size - mgmt_size; @@ -2041,13 +2093,16 @@ static size_t calculate_slab_order(struct kmem_cache *cachep, break; if (flags & CFLGS_OFF_SLAB) { + size_t freelist_size_per_obj = sizeof(freelist_idx_t); /* * Max number of objs-per-slab for caches which * use off-slab slabs. Needed to avoid a possible * looping condition in cache_grow(). 
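calculate_freelist_size() above sizes the slab freelist as one index per object plus, when CONFIG_DEBUG_SLAB_LEAK is enabled, one status byte per object, rounded up to the requested alignment. The arithmetic in isolation, with the index width and alignment as illustrative parameters rather than the cache's real values:

	#include <stdio.h>
	#include <stddef.h>

	/* Round x up to the next multiple of a (a must be a power of two). */
	#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

	/* Freelist bytes for nr_objs objects: one index each, plus an optional
	 * per-object status byte (the CONFIG_DEBUG_SLAB_LEAK case), aligned up. */
	static size_t freelist_size(int nr_objs, size_t idx_size, int debug_leak,
				    size_t align)
	{
		size_t size = (size_t)nr_objs * idx_size;

		if (debug_leak)
			size += (size_t)nr_objs * sizeof(char);
		if (align)
			size = ALIGN_UP(size, align);
		return size;
	}

	int main(void)
	{
		/* 60 objects, 1-byte indices, 8-byte alignment. */
		printf("no debug: %zu bytes\n", freelist_size(60, 1, 0, 8));
		printf("debug:    %zu bytes\n", freelist_size(60, 1, 1, 8));
		return 0;
	}

The extra per-object byte is why calculate_nr_objs() and the off-slab limit below also have to account for sizeof(char) when the leak debugging option is enabled.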
*/ + if (IS_ENABLED(CONFIG_DEBUG_SLAB_LEAK)) + freelist_size_per_obj += sizeof(char); offslab_limit = size; - offslab_limit /= sizeof(freelist_idx_t); + offslab_limit /= freelist_size_per_obj; if (num > offslab_limit) break; @@ -2294,8 +2349,7 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags) if (!cachep->num) return -E2BIG; - freelist_size = - ALIGN(cachep->num * sizeof(freelist_idx_t), cachep->align); + freelist_size = calculate_freelist_size(cachep->num, cachep->align); /* * If the slab has been placed off-slab, and we have enough space then @@ -2308,7 +2362,7 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags) if (flags & CFLGS_OFF_SLAB) { /* really off slab. No need for manual alignment */ - freelist_size = cachep->num * sizeof(freelist_idx_t); + freelist_size = calculate_freelist_size(cachep->num, 0); #ifdef CONFIG_PAGE_POISONING /* If we're going to use the generic kernel_map_pages() @@ -2612,6 +2666,7 @@ static void cache_init_objs(struct kmem_cache *cachep, if (cachep->ctor) cachep->ctor(objp); #endif + set_obj_status(page, i, OBJECT_FREE); set_free_obj(page, i, i); } } @@ -2820,6 +2875,7 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp, BUG_ON(objnr >= cachep->num); BUG_ON(objp != index_to_obj(cachep, page, objnr)); + set_obj_status(page, objnr, OBJECT_FREE); if (cachep->flags & SLAB_POISON) { #ifdef CONFIG_DEBUG_PAGEALLOC if ((cachep->size % PAGE_SIZE)==0 && OFF_SLAB(cachep)) { @@ -2953,6 +3009,8 @@ static inline void cache_alloc_debugcheck_before(struct kmem_cache *cachep, static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep, gfp_t flags, void *objp, unsigned long caller) { + struct page *page; + if (!objp) return objp; if (cachep->flags & SLAB_POISON) { @@ -2983,6 +3041,9 @@ static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep, *dbg_redzone1(cachep, objp) = RED_ACTIVE; *dbg_redzone2(cachep, objp) = RED_ACTIVE; } + + page = virt_to_head_page(objp); + set_obj_status(page, obj_to_index(cachep, page, objp), OBJECT_ACTIVE); objp += obj_offset(cachep); if (cachep->ctor && cachep->flags & SLAB_POISON) cachep->ctor(objp); @@ -4219,21 +4280,12 @@ static void handle_slab(unsigned long *n, struct kmem_cache *c, struct page *page) { void *p; - int i, j; + int i; if (n[0] == n[1]) return; for (i = 0, p = page->s_mem; i < c->num; i++, p += c->size) { - bool active = true; - - for (j = page->active; j < c->num; j++) { - /* Skip freed item */ - if (get_free_obj(page, j) == i) { - active = false; - break; - } - } - if (!active) + if (get_obj_status(page, i) != OBJECT_ACTIVE) continue; if (!add_caller(n, (unsigned long)*dbg_userword(c, p))) diff --git a/mm/slab_common.c b/mm/slab_common.c index 735e01a0db6f..d31c4bacc6a2 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -55,7 +55,7 @@ static int kmem_cache_sanity_check(const char *name, size_t size) continue; } -#if !defined(CONFIG_SLUB) || !defined(CONFIG_SLUB_DEBUG_ON) +#if !defined(CONFIG_SLUB) if (!strcmp(s->name, name)) { pr_err("%s (%s): Cache name already exists.\n", __func__, name); diff --git a/mm/slub.c b/mm/slub.c index b2b047327d76..73004808537e 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1881,7 +1881,7 @@ redo: new.frozen = 0; - if (!new.inuse && n->nr_partial > s->min_partial) + if (!new.inuse && n->nr_partial >= s->min_partial) m = M_FREE; else if (new.freelist) { m = M_PARTIAL; @@ -1992,7 +1992,7 @@ static void unfreeze_partials(struct kmem_cache *s, new.freelist, new.counters, "unfreezing slab")); - if (unlikely(!new.inuse && 
n->nr_partial > s->min_partial)) { + if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) { page->next = discard_page; discard_page = page; } else { @@ -2620,7 +2620,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, return; } - if (unlikely(!new.inuse && n->nr_partial > s->min_partial)) + if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) goto slab_empty; /* diff --git a/mm/truncate.c b/mm/truncate.c index 6a78c814bebf..eda247307164 100644 --- a/mm/truncate.c +++ b/mm/truncate.c @@ -355,14 +355,16 @@ void truncate_inode_pages_range(struct address_space *mapping, for ( ; ; ) { cond_resched(); if (!pagevec_lookup_entries(&pvec, mapping, index, - min(end - index, (pgoff_t)PAGEVEC_SIZE), - indices)) { + min(end - index, (pgoff_t)PAGEVEC_SIZE), indices)) { + /* If all gone from start onwards, we're done */ if (index == start) break; + /* Otherwise restart to make sure all gone */ index = start; continue; } if (index == start && indices[0] >= end) { + /* All gone out of hole to be punched, we're done */ pagevec_remove_exceptionals(&pvec); pagevec_release(&pvec); break; @@ -373,8 +375,11 @@ void truncate_inode_pages_range(struct address_space *mapping, /* We rely upon deletion not changing page->index */ index = indices[i]; - if (index >= end) + if (index >= end) { + /* Restart punch to make sure all gone */ + index = start - 1; break; + } if (radix_tree_exceptional_entry(page)) { clear_exceptional_entry(mapping, index, page); diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c index 9012b1c922b6..75d427763992 100644 --- a/net/8021q/vlan_core.c +++ b/net/8021q/vlan_core.c @@ -114,8 +114,11 @@ EXPORT_SYMBOL(vlan_dev_vlan_proto); static struct sk_buff *vlan_reorder_header(struct sk_buff *skb) { - if (skb_cow(skb, skb_headroom(skb)) < 0) + if (skb_cow(skb, skb_headroom(skb)) < 0) { + kfree_skb(skb); return NULL; + } + memmove(skb->data - ETH_HLEN, skb->data - VLAN_ETH_HLEN, 2 * ETH_ALEN); skb->mac_header += VLAN_HLEN; return skb; diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c index ad2ac3c00398..dd11f612e03e 100644 --- a/net/8021q/vlan_dev.c +++ b/net/8021q/vlan_dev.c @@ -627,8 +627,6 @@ static void vlan_dev_uninit(struct net_device *dev) struct vlan_dev_priv *vlan = vlan_dev_priv(dev); int i; - free_percpu(vlan->vlan_pcpu_stats); - vlan->vlan_pcpu_stats = NULL; for (i = 0; i < ARRAY_SIZE(vlan->egress_priority_map); i++) { while ((pm = vlan->egress_priority_map[i]) != NULL) { vlan->egress_priority_map[i] = pm->next; @@ -785,6 +783,15 @@ static const struct net_device_ops vlan_netdev_ops = { .ndo_get_lock_subclass = vlan_dev_get_lock_subclass, }; +static void vlan_dev_free(struct net_device *dev) +{ + struct vlan_dev_priv *vlan = vlan_dev_priv(dev); + + free_percpu(vlan->vlan_pcpu_stats); + vlan->vlan_pcpu_stats = NULL; + free_netdev(dev); +} + void vlan_setup(struct net_device *dev) { ether_setup(dev); @@ -794,7 +801,7 @@ void vlan_setup(struct net_device *dev) dev->tx_queue_len = 0; dev->netdev_ops = &vlan_netdev_ops; - dev->destructor = free_netdev; + dev->destructor = vlan_dev_free; dev->ethtool_ops = &vlan_ethtool_ops; memset(dev->broadcast, 0, ETH_ALEN); diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c index 01a1082e02b3..bfcf6be1d665 100644 --- a/net/appletalk/ddp.c +++ b/net/appletalk/ddp.c @@ -1489,8 +1489,6 @@ static int atalk_rcv(struct sk_buff *skb, struct net_device *dev, goto drop; /* Queue packet (standard) */ - skb->sk = sock; - if (sock_queue_rcv_skb(sock, skb) < 0) goto drop; @@ -1644,7 +1642,6 @@ static int 
atalk_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr if (!skb) goto out; - skb->sk = sk; skb_reserve(skb, ddp_dl->header_length); skb_reserve(skb, dev->hard_header_len); skb->dev = dev; diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c index 6f0d9ec37950..a957c8140721 100644 --- a/net/batman-adv/bridge_loop_avoidance.c +++ b/net/batman-adv/bridge_loop_avoidance.c @@ -800,11 +800,6 @@ static int batadv_check_claim_group(struct batadv_priv *bat_priv, bla_dst = (struct batadv_bla_claim_dst *)hw_dst; bla_dst_own = &bat_priv->bla.claim_dest; - /* check if it is a claim packet in general */ - if (memcmp(bla_dst->magic, bla_dst_own->magic, - sizeof(bla_dst->magic)) != 0) - return 0; - /* if announcement packet, use the source, * otherwise assume it is in the hw_src */ @@ -866,12 +861,13 @@ static int batadv_bla_process_claim(struct batadv_priv *bat_priv, struct batadv_hard_iface *primary_if, struct sk_buff *skb) { - struct batadv_bla_claim_dst *bla_dst; + struct batadv_bla_claim_dst *bla_dst, *bla_dst_own; uint8_t *hw_src, *hw_dst; - struct vlan_ethhdr *vhdr; + struct vlan_hdr *vhdr, vhdr_buf; struct ethhdr *ethhdr; struct arphdr *arphdr; unsigned short vid; + int vlan_depth = 0; __be16 proto; int headlen; int ret; @@ -882,9 +878,24 @@ static int batadv_bla_process_claim(struct batadv_priv *bat_priv, proto = ethhdr->h_proto; headlen = ETH_HLEN; if (vid & BATADV_VLAN_HAS_TAG) { - vhdr = vlan_eth_hdr(skb); - proto = vhdr->h_vlan_encapsulated_proto; - headlen += VLAN_HLEN; + /* Traverse the VLAN/Ethertypes. + * + * At this point it is known that the first protocol is a VLAN + * header, so start checking at the encapsulated protocol. + * + * The depth of the VLAN headers is recorded to drop BLA claim + * frames encapsulated into multiple VLAN headers (QinQ). + */ + do { + vhdr = skb_header_pointer(skb, headlen, VLAN_HLEN, + &vhdr_buf); + if (!vhdr) + return 0; + + proto = vhdr->h_vlan_encapsulated_proto; + headlen += VLAN_HLEN; + vlan_depth++; + } while (proto == htons(ETH_P_8021Q)); } if (proto != htons(ETH_P_ARP)) @@ -914,6 +925,19 @@ static int batadv_bla_process_claim(struct batadv_priv *bat_priv, hw_src = (uint8_t *)arphdr + sizeof(struct arphdr); hw_dst = hw_src + ETH_ALEN + 4; bla_dst = (struct batadv_bla_claim_dst *)hw_dst; + bla_dst_own = &bat_priv->bla.claim_dest; + + /* check if it is a claim frame in general */ + if (memcmp(bla_dst->magic, bla_dst_own->magic, + sizeof(bla_dst->magic)) != 0) + return 0; + + /* check if there is a claim frame encapsulated deeper in (QinQ) and + * drop that, as this is not supported by BLA but should also not be + * sent via the mesh. + */ + if (vlan_depth > 1) + return 1; /* check if it is a claim frame. 
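The loop added above walks nested 802.1Q headers and records their depth so claim frames buried under QinQ encapsulation can be rejected. The same walk over a raw frame buffer in userspace, as a sketch; the constants mirror the usual Ethernet/VLAN layout rather than batman-adv's internal helpers:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <arpa/inet.h>   /* ntohs() */

	#define ETH_HLEN    14
	#define VLAN_HLEN   4
	#define ETH_P_8021Q 0x8100
	#define ETH_P_ARP   0x0806

	/* Walk VLAN headers after the Ethernet header; return how many were
	 * stacked and leave *proto at the first non-VLAN ethertype. */
	static int vlan_depth(const uint8_t *frame, size_t len, uint16_t *proto)
	{
		size_t off = ETH_HLEN - 2;   /* offset of the ethertype field */
		int depth = 0;

		memcpy(proto, frame + off, 2);
		*proto = ntohs(*proto);
		off += 2;

		while (*proto == ETH_P_8021Q && off + VLAN_HLEN <= len) {
			/* VLAN header: 2 bytes TCI, 2 bytes inner ethertype */
			memcpy(proto, frame + off + 2, 2);
			*proto = ntohs(*proto);
			off += VLAN_HLEN;
			depth++;
		}
		return depth;
	}

	int main(void)
	{
		/* dst/src MACs, then 0x8100, TCI, 0x8100, TCI, 0x0806 (QinQ ARP) */
		uint8_t frame[ETH_HLEN + 2 * VLAN_HLEN + 28] = {
			[12] = 0x81, [13] = 0x00, [14] = 0x00, [15] = 0x0a,
			[16] = 0x81, [17] = 0x00, [18] = 0x00, [19] = 0x14,
			[20] = 0x08, [21] = 0x06,
		};
		uint16_t proto;
		int depth = vlan_depth(frame, sizeof(frame), &proto);

		printf("vlan depth %d, inner proto 0x%04x\n", depth, proto);
		if (proto == ETH_P_ARP && depth > 1)
			printf("QinQ ARP: not a valid claim, would be dropped\n");
		return 0;
	}

The kernel version uses skb_header_pointer() for the same reads so it works even when the headers are not linear in the skb.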
*/ ret = batadv_check_claim_group(bat_priv, primary_if, hw_src, hw_dst, diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c index e7ee65dc20bf..cbd677f48c00 100644 --- a/net/batman-adv/soft-interface.c +++ b/net/batman-adv/soft-interface.c @@ -448,10 +448,15 @@ out: * possibly free it * @softif_vlan: the vlan object to release */ -void batadv_softif_vlan_free_ref(struct batadv_softif_vlan *softif_vlan) +void batadv_softif_vlan_free_ref(struct batadv_softif_vlan *vlan) { - if (atomic_dec_and_test(&softif_vlan->refcount)) - kfree_rcu(softif_vlan, rcu); + if (atomic_dec_and_test(&vlan->refcount)) { + spin_lock_bh(&vlan->bat_priv->softif_vlan_list_lock); + hlist_del_rcu(&vlan->list); + spin_unlock_bh(&vlan->bat_priv->softif_vlan_list_lock); + + kfree_rcu(vlan, rcu); + } } /** @@ -505,6 +510,7 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid) if (!vlan) return -ENOMEM; + vlan->bat_priv = bat_priv; vlan->vid = vid; atomic_set(&vlan->refcount, 1); @@ -516,6 +522,10 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid) return err; } + spin_lock_bh(&bat_priv->softif_vlan_list_lock); + hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list); + spin_unlock_bh(&bat_priv->softif_vlan_list_lock); + /* add a new TT local entry. This one will be marked with the NOPURGE * flag */ @@ -523,10 +533,6 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid) bat_priv->soft_iface->dev_addr, vid, BATADV_NULL_IFINDEX, BATADV_NO_MARK); - spin_lock_bh(&bat_priv->softif_vlan_list_lock); - hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list); - spin_unlock_bh(&bat_priv->softif_vlan_list_lock); - return 0; } @@ -538,18 +544,13 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid) static void batadv_softif_destroy_vlan(struct batadv_priv *bat_priv, struct batadv_softif_vlan *vlan) { - spin_lock_bh(&bat_priv->softif_vlan_list_lock); - hlist_del_rcu(&vlan->list); - spin_unlock_bh(&bat_priv->softif_vlan_list_lock); - - batadv_sysfs_del_vlan(bat_priv, vlan); - /* explicitly remove the associated TT local entry because it is marked * with the NOPURGE flag */ batadv_tt_local_remove(bat_priv, bat_priv->soft_iface->dev_addr, vlan->vid, "vlan interface destroyed", false); + batadv_sysfs_del_vlan(bat_priv, vlan); batadv_softif_vlan_free_ref(vlan); } @@ -567,6 +568,8 @@ static int batadv_interface_add_vid(struct net_device *dev, __be16 proto, unsigned short vid) { struct batadv_priv *bat_priv = netdev_priv(dev); + struct batadv_softif_vlan *vlan; + int ret; /* only 802.1Q vlans are supported. * batman-adv does not know how to handle other types @@ -576,7 +579,36 @@ static int batadv_interface_add_vid(struct net_device *dev, __be16 proto, vid |= BATADV_VLAN_HAS_TAG; - return batadv_softif_create_vlan(bat_priv, vid); + /* if a new vlan is getting created and it already exists, it means that + * it was not deleted yet. batadv_softif_vlan_get() increases the + * refcount in order to revive the object. + * + * if it does not exist then create it. + */ + vlan = batadv_softif_vlan_get(bat_priv, vid); + if (!vlan) + return batadv_softif_create_vlan(bat_priv, vid); + + /* recreate the sysfs object if it was already destroyed (and it should + * be since we received a kill_vid() for this vlan + */ + if (!vlan->kobj) { + ret = batadv_sysfs_add_vlan(bat_priv->soft_iface, vlan); + if (ret) { + batadv_softif_vlan_free_ref(vlan); + return ret; + } + } + + /* add a new TT local entry. 
This one will be marked with the NOPURGE + * flag. This must be added again, even if the vlan object already + * exists, because the entry was deleted by kill_vid() + */ + batadv_tt_local_add(bat_priv->soft_iface, + bat_priv->soft_iface->dev_addr, vid, + BATADV_NULL_IFINDEX, BATADV_NO_MARK); + + return 0; } /** diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c index d636bde72c9a..5f59e7f899a0 100644 --- a/net/batman-adv/translation-table.c +++ b/net/batman-adv/translation-table.c @@ -511,6 +511,7 @@ bool batadv_tt_local_add(struct net_device *soft_iface, const uint8_t *addr, struct batadv_priv *bat_priv = netdev_priv(soft_iface); struct batadv_tt_local_entry *tt_local; struct batadv_tt_global_entry *tt_global = NULL; + struct batadv_softif_vlan *vlan; struct net_device *in_dev = NULL; struct hlist_head *head; struct batadv_tt_orig_list_entry *orig_entry; @@ -572,6 +573,9 @@ bool batadv_tt_local_add(struct net_device *soft_iface, const uint8_t *addr, if (!tt_local) goto out; + /* increase the refcounter of the related vlan */ + vlan = batadv_softif_vlan_get(bat_priv, vid); + batadv_dbg(BATADV_DBG_TT, bat_priv, "Creating new local tt entry: %pM (vid: %d, ttvn: %d)\n", addr, BATADV_PRINT_VID(vid), @@ -604,6 +608,7 @@ bool batadv_tt_local_add(struct net_device *soft_iface, const uint8_t *addr, if (unlikely(hash_added != 0)) { /* remove the reference for the hash */ batadv_tt_local_entry_free_ref(tt_local); + batadv_softif_vlan_free_ref(vlan); goto out; } @@ -1009,6 +1014,7 @@ uint16_t batadv_tt_local_remove(struct batadv_priv *bat_priv, { struct batadv_tt_local_entry *tt_local_entry; uint16_t flags, curr_flags = BATADV_NO_FLAGS; + struct batadv_softif_vlan *vlan; tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr, vid); if (!tt_local_entry) @@ -1039,6 +1045,11 @@ uint16_t batadv_tt_local_remove(struct batadv_priv *bat_priv, hlist_del_rcu(&tt_local_entry->common.hash_entry); batadv_tt_local_entry_free_ref(tt_local_entry); + /* decrease the reference held for this vlan */ + vlan = batadv_softif_vlan_get(bat_priv, vid); + batadv_softif_vlan_free_ref(vlan); + batadv_softif_vlan_free_ref(vlan); + out: if (tt_local_entry) batadv_tt_local_entry_free_ref(tt_local_entry); @@ -1111,6 +1122,7 @@ static void batadv_tt_local_table_free(struct batadv_priv *bat_priv) spinlock_t *list_lock; /* protects write access to the hash lists */ struct batadv_tt_common_entry *tt_common_entry; struct batadv_tt_local_entry *tt_local; + struct batadv_softif_vlan *vlan; struct hlist_node *node_tmp; struct hlist_head *head; uint32_t i; @@ -1131,6 +1143,13 @@ static void batadv_tt_local_table_free(struct batadv_priv *bat_priv) tt_local = container_of(tt_common_entry, struct batadv_tt_local_entry, common); + + /* decrease the reference held for this vlan */ + vlan = batadv_softif_vlan_get(bat_priv, + tt_common_entry->vid); + batadv_softif_vlan_free_ref(vlan); + batadv_softif_vlan_free_ref(vlan); + batadv_tt_local_entry_free_ref(tt_local); } spin_unlock_bh(list_lock); @@ -3139,6 +3158,7 @@ static void batadv_tt_local_purge_pending_clients(struct batadv_priv *bat_priv) struct batadv_hashtable *hash = bat_priv->tt.local_hash; struct batadv_tt_common_entry *tt_common; struct batadv_tt_local_entry *tt_local; + struct batadv_softif_vlan *vlan; struct hlist_node *node_tmp; struct hlist_head *head; spinlock_t *list_lock; /* protects write access to the hash lists */ @@ -3167,6 +3187,12 @@ static void batadv_tt_local_purge_pending_clients(struct batadv_priv *bat_priv) tt_local = 
container_of(tt_common, struct batadv_tt_local_entry, common); + + /* decrease the reference held for this vlan */ + vlan = batadv_softif_vlan_get(bat_priv, tt_common->vid); + batadv_softif_vlan_free_ref(vlan); + batadv_softif_vlan_free_ref(vlan); + batadv_tt_local_entry_free_ref(tt_local); } spin_unlock_bh(list_lock); diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h index 34891a56773f..8854c05622a9 100644 --- a/net/batman-adv/types.h +++ b/net/batman-adv/types.h @@ -687,6 +687,7 @@ struct batadv_priv_nc { /** * struct batadv_softif_vlan - per VLAN attributes set + * @bat_priv: pointer to the mesh object * @vid: VLAN identifier * @kobj: kobject for sysfs vlan subdirectory * @ap_isolation: AP isolation state @@ -696,6 +697,7 @@ struct batadv_priv_nc { * @rcu: struct used for freeing in a RCU-safe manner */ struct batadv_softif_vlan { + struct batadv_priv *bat_priv; unsigned short vid; struct kobject *kobj; atomic_t ap_isolation; /* boolean */ diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c index 8671bc79a35b..a7a27bc2c0b1 100644 --- a/net/bluetooth/hci_conn.c +++ b/net/bluetooth/hci_conn.c @@ -289,10 +289,20 @@ static void hci_conn_timeout(struct work_struct *work) { struct hci_conn *conn = container_of(work, struct hci_conn, disc_work.work); + int refcnt = atomic_read(&conn->refcnt); BT_DBG("hcon %p state %s", conn, state_to_string(conn->state)); - if (atomic_read(&conn->refcnt)) + WARN_ON(refcnt < 0); + + /* FIXME: It was observed that in pairing failed scenario, refcnt + * drops below 0. Probably this is because l2cap_conn_del calls + * l2cap_chan_del for each channel, and inside l2cap_chan_del conn is + * dropped. After that loop hci_chan_del is called which also drops + * conn. For now make sure that ACL is alive if refcnt is higher then 0, + * otherwise drop it. + */ + if (refcnt > 0) return; switch (conn->state) { @@ -610,11 +620,6 @@ static void hci_req_add_le_create_conn(struct hci_request *req, if (hci_update_random_address(req, false, &own_addr_type)) return; - /* Save the address type used for this connnection attempt so we able - * to retrieve this information if we need it. - */ - conn->src_type = own_addr_type; - cp.scan_interval = cpu_to_le16(hdev->le_scan_interval); cp.scan_window = cpu_to_le16(hdev->le_scan_window); bacpy(&cp.peer_addr, &conn->dst); @@ -894,7 +899,7 @@ static int hci_conn_auth(struct hci_conn *conn, __u8 sec_level, __u8 auth_type) /* If we're already encrypted set the REAUTH_PEND flag, * otherwise set the ENCRYPT_PEND. */ - if (conn->key_type != 0xff) + if (conn->link_mode & HCI_LM_ENCRYPT) set_bit(HCI_CONN_REAUTH_PEND, &conn->flags); else set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags); diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index 21e5913d12e0..640c54ec1bd2 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -48,6 +48,10 @@ static void hci_cc_inquiry_cancel(struct hci_dev *hdev, struct sk_buff *skb) smp_mb__after_atomic(); /* wake_up_bit advises about this barrier */ wake_up_bit(&hdev->flags, HCI_INQUIRY); + hci_dev_lock(hdev); + hci_discovery_set_state(hdev, DISCOVERY_STOPPED); + hci_dev_unlock(hdev); + hci_conn_check_pending(hdev); } @@ -3537,7 +3541,11 @@ static void hci_io_capa_request_evt(struct hci_dev *hdev, struct sk_buff *skb) cp.authentication = conn->auth_type; /* Request MITM protection if our IO caps allow it - * except for the no-bonding case + * except for the no-bonding case. 
+ * conn->auth_type is not updated here since + * that might cause the user confirmation to be + * rejected in case the remote doesn't have the + * IO capabilities for MITM. */ if (conn->io_capability != HCI_IO_NO_INPUT_OUTPUT && cp.authentication != HCI_AT_NO_BONDING) @@ -3628,8 +3636,11 @@ static void hci_user_confirm_request_evt(struct hci_dev *hdev, /* If we're not the initiators request authorization to * proceed from user space (mgmt_user_confirm with - * confirm_hint set to 1). */ - if (!test_bit(HCI_CONN_AUTH_PEND, &conn->flags)) { + * confirm_hint set to 1). The exception is if neither + * side had MITM in which case we do auto-accept. + */ + if (!test_bit(HCI_CONN_AUTH_PEND, &conn->flags) && + (loc_mitm || rem_mitm)) { BT_DBG("Confirming auto-accept as acceptor"); confirm_hint = 1; goto confirm; diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index 6eabbe05fe54..323f23cd2c37 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -1663,7 +1663,13 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err) kfree_skb(conn->rx_skb); skb_queue_purge(&conn->pending_rx); - flush_work(&conn->pending_rx_work); + + /* We can not call flush_work(&conn->pending_rx_work) here since we + * might block if we are running on a worker from the same workqueue + * pending_rx_work is waiting on. + */ + if (work_pending(&conn->pending_rx_work)) + cancel_work_sync(&conn->pending_rx_work); l2cap_unregister_all_users(conn); diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c index ade3fb4c23bc..e1378693cc90 100644 --- a/net/bluetooth/l2cap_sock.c +++ b/net/bluetooth/l2cap_sock.c @@ -787,11 +787,6 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname, /*change security for LE channels */ if (chan->scid == L2CAP_CID_ATT) { - if (!conn->hcon->out) { - err = -EINVAL; - break; - } - if (smp_conn_security(conn->hcon, sec.level)) break; sk->sk_state = BT_CONFIG; diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c index 0fce54412ffd..af8e0a6243b7 100644 --- a/net/bluetooth/mgmt.c +++ b/net/bluetooth/mgmt.c @@ -1047,6 +1047,43 @@ static void clean_up_hci_complete(struct hci_dev *hdev, u8 status) } } +static void hci_stop_discovery(struct hci_request *req) +{ + struct hci_dev *hdev = req->hdev; + struct hci_cp_remote_name_req_cancel cp; + struct inquiry_entry *e; + + switch (hdev->discovery.state) { + case DISCOVERY_FINDING: + if (test_bit(HCI_INQUIRY, &hdev->flags)) { + hci_req_add(req, HCI_OP_INQUIRY_CANCEL, 0, NULL); + } else { + cancel_delayed_work(&hdev->le_scan_disable); + hci_req_add_le_scan_disable(req); + } + + break; + + case DISCOVERY_RESOLVING: + e = hci_inquiry_cache_lookup_resolve(hdev, BDADDR_ANY, + NAME_PENDING); + if (!e) + return; + + bacpy(&cp.bdaddr, &e->data.bdaddr); + hci_req_add(req, HCI_OP_REMOTE_NAME_REQ_CANCEL, sizeof(cp), + &cp); + + break; + + default: + /* Passive scanning */ + if (test_bit(HCI_LE_SCAN, &hdev->dev_flags)) + hci_req_add_le_scan_disable(req); + break; + } +} + static int clean_up_hci_state(struct hci_dev *hdev) { struct hci_request req; @@ -1063,9 +1100,7 @@ static int clean_up_hci_state(struct hci_dev *hdev) if (test_bit(HCI_ADVERTISING, &hdev->dev_flags)) disable_advertising(&req); - if (test_bit(HCI_LE_SCAN, &hdev->dev_flags)) { - hci_req_add_le_scan_disable(&req); - } + hci_stop_discovery(&req); list_for_each_entry(conn, &hdev->conn_hash.list, list) { struct hci_cp_disconnect dc; @@ -2996,8 +3031,13 @@ static int user_pairing_resp(struct sock *sk, struct hci_dev *hdev, } if 
(addr->type == BDADDR_LE_PUBLIC || addr->type == BDADDR_LE_RANDOM) { - /* Continue with pairing via SMP */ + /* Continue with pairing via SMP. The hdev lock must be + * released as SMP may try to recquire it for crypto + * purposes. + */ + hci_dev_unlock(hdev); err = smp_user_confirm_reply(conn, mgmt_op, passkey); + hci_dev_lock(hdev); if (!err) err = cmd_complete(sk, hdev->id, mgmt_op, @@ -3574,8 +3614,6 @@ static int stop_discovery(struct sock *sk, struct hci_dev *hdev, void *data, { struct mgmt_cp_stop_discovery *mgmt_cp = data; struct pending_cmd *cmd; - struct hci_cp_remote_name_req_cancel cp; - struct inquiry_entry *e; struct hci_request req; int err; @@ -3605,52 +3643,22 @@ static int stop_discovery(struct sock *sk, struct hci_dev *hdev, void *data, hci_req_init(&req, hdev); - switch (hdev->discovery.state) { - case DISCOVERY_FINDING: - if (test_bit(HCI_INQUIRY, &hdev->flags)) { - hci_req_add(&req, HCI_OP_INQUIRY_CANCEL, 0, NULL); - } else { - cancel_delayed_work(&hdev->le_scan_disable); - - hci_req_add_le_scan_disable(&req); - } - - break; + hci_stop_discovery(&req); - case DISCOVERY_RESOLVING: - e = hci_inquiry_cache_lookup_resolve(hdev, BDADDR_ANY, - NAME_PENDING); - if (!e) { - mgmt_pending_remove(cmd); - err = cmd_complete(sk, hdev->id, - MGMT_OP_STOP_DISCOVERY, 0, - &mgmt_cp->type, - sizeof(mgmt_cp->type)); - hci_discovery_set_state(hdev, DISCOVERY_STOPPED); - goto unlock; - } - - bacpy(&cp.bdaddr, &e->data.bdaddr); - hci_req_add(&req, HCI_OP_REMOTE_NAME_REQ_CANCEL, sizeof(cp), - &cp); - - break; - - default: - BT_DBG("unknown discovery state %u", hdev->discovery.state); - - mgmt_pending_remove(cmd); - err = cmd_complete(sk, hdev->id, MGMT_OP_STOP_DISCOVERY, - MGMT_STATUS_FAILED, &mgmt_cp->type, - sizeof(mgmt_cp->type)); + err = hci_req_run(&req, stop_discovery_complete); + if (!err) { + hci_discovery_set_state(hdev, DISCOVERY_STOPPING); goto unlock; } - err = hci_req_run(&req, stop_discovery_complete); - if (err < 0) - mgmt_pending_remove(cmd); - else - hci_discovery_set_state(hdev, DISCOVERY_STOPPING); + mgmt_pending_remove(cmd); + + /* If no HCI commands were sent we're done */ + if (err == -ENODATA) { + err = cmd_complete(sk, hdev->id, MGMT_OP_STOP_DISCOVERY, 0, + &mgmt_cp->type, sizeof(mgmt_cp->type)); + hci_discovery_set_state(hdev, DISCOVERY_STOPPED); + } unlock: hci_dev_unlock(hdev); diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c index 3d1cc164557d..e33a982161c1 100644 --- a/net/bluetooth/smp.c +++ b/net/bluetooth/smp.c @@ -385,6 +385,16 @@ static const u8 gen_method[5][5] = { { CFM_PASSKEY, CFM_PASSKEY, REQ_PASSKEY, JUST_WORKS, OVERLAP }, }; +static u8 get_auth_method(struct smp_chan *smp, u8 local_io, u8 remote_io) +{ + /* If either side has unknown io_caps, use JUST WORKS */ + if (local_io > SMP_IO_KEYBOARD_DISPLAY || + remote_io > SMP_IO_KEYBOARD_DISPLAY) + return JUST_WORKS; + + return gen_method[remote_io][local_io]; +} + static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth, u8 local_io, u8 remote_io) { @@ -401,14 +411,11 @@ static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth, BT_DBG("tk_request: auth:%d lcl:%d rem:%d", auth, local_io, remote_io); /* If neither side wants MITM, use JUST WORKS */ - /* If either side has unknown io_caps, use JUST WORKS */ /* Otherwise, look up method from the table */ - if (!(auth & SMP_AUTH_MITM) || - local_io > SMP_IO_KEYBOARD_DISPLAY || - remote_io > SMP_IO_KEYBOARD_DISPLAY) + if (!(auth & SMP_AUTH_MITM)) method = JUST_WORKS; else - method = gen_method[remote_io][local_io]; + method = 
get_auth_method(smp, local_io, remote_io); /* If not bonding, don't ask user to confirm a Zero TK */ if (!(auth & SMP_AUTH_BONDING) && method == JUST_CFM) @@ -544,7 +551,7 @@ static u8 smp_random(struct smp_chan *smp) hci_le_start_enc(hcon, ediv, rand, stk); hcon->enc_key_size = smp->enc_key_size; } else { - u8 stk[16]; + u8 stk[16], auth; __le64 rand = 0; __le16 ediv = 0; @@ -556,8 +563,13 @@ static u8 smp_random(struct smp_chan *smp) memset(stk + smp->enc_key_size, 0, SMP_MAX_ENC_KEY_SIZE - smp->enc_key_size); + if (hcon->pending_sec_level == BT_SECURITY_HIGH) + auth = 1; + else + auth = 0; + hci_add_ltk(hcon->hdev, &hcon->dst, hcon->dst_type, - HCI_SMP_STK_SLAVE, 0, stk, smp->enc_key_size, + HCI_SMP_STK_SLAVE, auth, stk, smp->enc_key_size, ediv, rand); } @@ -664,7 +676,7 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb) { struct smp_cmd_pairing rsp, *req = (void *) skb->data; struct smp_chan *smp; - u8 key_size, auth; + u8 key_size, auth, sec_level; int ret; BT_DBG("conn %p", conn); @@ -690,7 +702,19 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb) /* We didn't start the pairing, so match remote */ auth = req->auth_req; - conn->hcon->pending_sec_level = authreq_to_seclevel(auth); + sec_level = authreq_to_seclevel(auth); + if (sec_level > conn->hcon->pending_sec_level) + conn->hcon->pending_sec_level = sec_level; + + /* If we need MITM check that it can be acheived */ + if (conn->hcon->pending_sec_level >= BT_SECURITY_HIGH) { + u8 method; + + method = get_auth_method(smp, conn->hcon->io_capability, + req->io_capability); + if (method == JUST_WORKS || method == JUST_CFM) + return SMP_AUTH_REQUIREMENTS; + } build_pairing_cmd(conn, req, &rsp, auth); @@ -738,6 +762,16 @@ static u8 smp_cmd_pairing_rsp(struct l2cap_conn *conn, struct sk_buff *skb) if (check_enc_key_size(conn, key_size)) return SMP_ENC_KEY_SIZE; + /* If we need MITM check that it can be acheived */ + if (conn->hcon->pending_sec_level >= BT_SECURITY_HIGH) { + u8 method; + + method = get_auth_method(smp, req->io_capability, + rsp->io_capability); + if (method == JUST_WORKS || method == JUST_CFM) + return SMP_AUTH_REQUIREMENTS; + } + get_random_bytes(smp->prnd, sizeof(smp->prnd)); smp->prsp[0] = SMP_CMD_PAIRING_RSP; @@ -833,6 +867,7 @@ static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb) struct smp_cmd_pairing cp; struct hci_conn *hcon = conn->hcon; struct smp_chan *smp; + u8 sec_level; BT_DBG("conn %p", conn); @@ -842,7 +877,9 @@ static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb) if (!(conn->hcon->link_mode & HCI_LM_MASTER)) return SMP_CMD_NOTSUPP; - hcon->pending_sec_level = authreq_to_seclevel(rp->auth_req); + sec_level = authreq_to_seclevel(rp->auth_req); + if (sec_level > hcon->pending_sec_level) + hcon->pending_sec_level = sec_level; if (smp_ltk_encrypt(conn, hcon->pending_sec_level)) return 0; @@ -896,9 +933,12 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level) if (smp_sufficient_security(hcon, sec_level)) return 1; + if (sec_level > hcon->pending_sec_level) + hcon->pending_sec_level = sec_level; + if (hcon->link_mode & HCI_LM_MASTER) - if (smp_ltk_encrypt(conn, sec_level)) - goto done; + if (smp_ltk_encrypt(conn, hcon->pending_sec_level)) + return 0; if (test_and_set_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags)) return 0; @@ -913,7 +953,7 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level) * requires it. 
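get_auth_method() above clamps the table lookup so that an IO capability outside the known range falls back to "just works" instead of indexing past gen_method[][]. A compilable stand-in showing the same guard; only the one table row visible in the hunk above is filled in, the rest default to JUST_WORKS here and are not the real SMP pairing matrix:

	#include <stdio.h>

	enum method { JUST_WORKS, JUST_CFM, REQ_PASSKEY, CFM_PASSKEY, OVERLAP };

	#define IO_KEYBOARD_DISPLAY 4   /* highest valid IO capability index */

	static const char *names[] = {
		"JUST_WORKS", "JUST_CFM", "REQ_PASSKEY", "CFM_PASSKEY", "OVERLAP"
	};

	/* Placeholder matrix: only the remote = KeyboardDisplay row is taken
	 * from the hunk above; unlisted entries default to JUST_WORKS. */
	static const enum method gen_method[5][5] = {
		[IO_KEYBOARD_DISPLAY] = { CFM_PASSKEY, CFM_PASSKEY, REQ_PASSKEY,
					  JUST_WORKS, OVERLAP },
	};

	static enum method get_auth_method(int local_io, int remote_io)
	{
		/* Unknown IO capability on either side: never index the table,
		 * just fall back to "just works". */
		if (local_io > IO_KEYBOARD_DISPLAY ||
		    remote_io > IO_KEYBOARD_DISPLAY)
			return JUST_WORKS;

		return gen_method[remote_io][local_io];
	}

	int main(void)
	{
		/* 0 = DisplayOnly, 4 = KeyboardDisplay in the SMP numbering. */
		printf("known caps:  %s\n", names[get_auth_method(0, 4)]);
		printf("bogus remote: %s\n", names[get_auth_method(0, 7)]);
		return 0;
	}

The callers added above then reject the pairing outright (SMP_AUTH_REQUIREMENTS) when high security is required but the selected method cannot provide MITM protection.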
*/ if (hcon->io_capability != HCI_IO_NO_INPUT_OUTPUT || - sec_level > BT_SECURITY_MEDIUM) + hcon->pending_sec_level > BT_SECURITY_MEDIUM) authreq |= SMP_AUTH_MITM; if (hcon->link_mode & HCI_LM_MASTER) { @@ -932,9 +972,6 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level) set_bit(SMP_FLAG_INITIATOR, &smp->flags); -done: - hcon->pending_sec_level = sec_level; - return 0; } diff --git a/net/core/dev.c b/net/core/dev.c index 30eedf677913..367a586d0c8a 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -148,6 +148,9 @@ struct list_head ptype_all __read_mostly; /* Taps */ static struct list_head offload_base __read_mostly; static int netif_rx_internal(struct sk_buff *skb); +static int call_netdevice_notifiers_info(unsigned long val, + struct net_device *dev, + struct netdev_notifier_info *info); /* * The @dev_base_head list is protected by @dev_base_lock and the rtnl @@ -1207,7 +1210,11 @@ EXPORT_SYMBOL(netdev_features_change); void netdev_state_change(struct net_device *dev) { if (dev->flags & IFF_UP) { - call_netdevice_notifiers(NETDEV_CHANGE, dev); + struct netdev_notifier_change_info change_info; + + change_info.flags_changed = 0; + call_netdevice_notifiers_info(NETDEV_CHANGE, dev, + &change_info.info); rtmsg_ifinfo(RTM_NEWLINK, dev, 0, GFP_KERNEL); } } @@ -4089,6 +4096,8 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb) skb->vlan_tci = 0; skb->dev = napi->dev; skb->skb_iif = 0; + skb->encapsulation = 0; + skb_shinfo(skb)->gso_type = 0; skb->truesize = SKB_TRUESIZE(skb_end_offset(skb)); napi->skb = skb; @@ -4227,9 +4236,8 @@ static int process_backlog(struct napi_struct *napi, int quota) #endif napi->weight = weight_p; local_irq_disable(); - while (work < quota) { + while (1) { struct sk_buff *skb; - unsigned int qlen; while ((skb = __skb_dequeue(&sd->process_queue))) { local_irq_enable(); @@ -4243,24 +4251,24 @@ static int process_backlog(struct napi_struct *napi, int quota) } rps_lock(sd); - qlen = skb_queue_len(&sd->input_pkt_queue); - if (qlen) - skb_queue_splice_tail_init(&sd->input_pkt_queue, - &sd->process_queue); - - if (qlen < quota - work) { + if (skb_queue_empty(&sd->input_pkt_queue)) { /* * Inline a custom version of __napi_complete(). * only current cpu owns and manipulates this napi, - * and NAPI_STATE_SCHED is the only possible flag set on backlog. - * we can use a plain write instead of clear_bit(), + * and NAPI_STATE_SCHED is the only possible flag set + * on backlog. + * We can use a plain write instead of clear_bit(), * and we dont need an smp_mb() memory barrier. 
*/ list_del(&napi->poll_list); napi->state = 0; + rps_unlock(sd); - quota = work + qlen; + break; } + + skb_queue_splice_tail_init(&sd->input_pkt_queue, + &sd->process_queue); rps_unlock(sd); } local_irq_enable(); diff --git a/net/core/dst.c b/net/core/dst.c index 80d6286c8b62..a028409ee438 100644 --- a/net/core/dst.c +++ b/net/core/dst.c @@ -269,6 +269,15 @@ again: } EXPORT_SYMBOL(dst_destroy); +static void dst_destroy_rcu(struct rcu_head *head) +{ + struct dst_entry *dst = container_of(head, struct dst_entry, rcu_head); + + dst = dst_destroy(dst); + if (dst) + __dst_free(dst); +} + void dst_release(struct dst_entry *dst) { if (dst) { @@ -276,11 +285,8 @@ void dst_release(struct dst_entry *dst) newrefcnt = atomic_dec_return(&dst->__refcnt); WARN_ON(newrefcnt < 0); - if (unlikely(dst->flags & DST_NOCACHE) && !newrefcnt) { - dst = dst_destroy(dst); - if (dst) - __dst_free(dst); - } + if (unlikely(dst->flags & DST_NOCACHE) && !newrefcnt) + call_rcu(&dst->rcu_head, dst_destroy_rcu); } } EXPORT_SYMBOL(dst_release); diff --git a/net/core/filter.c b/net/core/filter.c index 735fad897496..1dbf6462f766 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -840,11 +840,11 @@ int sk_convert_filter(struct sock_filter *prog, int len, BUILD_BUG_ON(BPF_MEMWORDS * sizeof(u32) > MAX_BPF_STACK); BUILD_BUG_ON(BPF_REG_FP + 1 != MAX_BPF_REG); - if (len <= 0 || len >= BPF_MAXINSNS) + if (len <= 0 || len > BPF_MAXINSNS) return -EINVAL; if (new_prog) { - addrs = kzalloc(len * sizeof(*addrs), GFP_KERNEL); + addrs = kcalloc(len, sizeof(*addrs), GFP_KERNEL); if (!addrs) return -ENOMEM; } @@ -1101,7 +1101,7 @@ static int check_load_and_stores(struct sock_filter *filter, int flen) BUILD_BUG_ON(BPF_MEMWORDS > 16); - masks = kmalloc(flen * sizeof(*masks), GFP_KERNEL); + masks = kmalloc_array(flen, sizeof(*masks), GFP_KERNEL); if (!masks) return -ENOMEM; @@ -1382,7 +1382,7 @@ static struct sk_filter *__sk_migrate_realloc(struct sk_filter *fp, fp_new = sock_kmalloc(sk, len, GFP_KERNEL); if (fp_new) { *fp_new = *fp; - /* As we're kepping orig_prog in fp_new along, + /* As we're keeping orig_prog in fp_new along, * we need to make sure we're not evicting it * from the old fp. */ @@ -1524,8 +1524,8 @@ static struct sk_filter *__sk_prepare_filter(struct sk_filter *fp, /** * sk_unattached_filter_create - create an unattached filter - * @fprog: the filter program * @pfp: the unattached filter that is created + * @fprog: the filter program * * Create a filter independent of any socket. We first run some * sanity checks on it to make sure it does not explode on us later. diff --git a/net/core/iovec.c b/net/core/iovec.c index b61869429f4c..827dd6beb49c 100644 --- a/net/core/iovec.c +++ b/net/core/iovec.c @@ -75,61 +75,6 @@ int verify_iovec(struct msghdr *m, struct iovec *iov, struct sockaddr_storage *a } /* - * Copy kernel to iovec. Returns -EFAULT on error. - */ - -int memcpy_toiovecend(const struct iovec *iov, unsigned char *kdata, - int offset, int len) -{ - int copy; - for (; len > 0; ++iov) { - /* Skip over the finished iovecs */ - if (unlikely(offset >= iov->iov_len)) { - offset -= iov->iov_len; - continue; - } - copy = min_t(unsigned int, iov->iov_len - offset, len); - if (copy_to_user(iov->iov_base + offset, kdata, copy)) - return -EFAULT; - offset = 0; - kdata += copy; - len -= copy; - } - - return 0; -} -EXPORT_SYMBOL(memcpy_toiovecend); - -/* - * Copy iovec to kernel. Returns -EFAULT on error. 
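The hunk that follows removes net/core's local copies of the iovec copy helpers; the walk they implement, skip fully consumed segments, then copy from an offset, is generic. A standalone userspace version of the same walk, with plain memcpy standing in for the kernel's copy_from_user():

	#include <stdio.h>
	#include <string.h>
	#include <sys/uio.h>     /* struct iovec */

	/* Copy len bytes out of an iovec array, starting offset bytes in.
	 * Mirrors the shape of memcpy_fromiovecend(); in userspace the copy
	 * itself cannot fault, so this always returns 0. */
	static int copy_from_iovec_end(unsigned char *kdata,
				       const struct iovec *iov,
				       size_t offset, size_t len)
	{
		/* Skip over iovecs that the offset consumes entirely. */
		while (offset >= iov->iov_len) {
			offset -= iov->iov_len;
			iov++;
		}

		while (len > 0) {
			size_t chunk = iov->iov_len - offset;

			if (chunk > len)
				chunk = len;
			memcpy(kdata,
			       (unsigned char *)iov->iov_base + offset, chunk);
			offset = 0;
			kdata += chunk;
			len -= chunk;
			iov++;
		}
		return 0;
	}

	int main(void)
	{
		char a[] = "hello ", b[] = "iovec world";
		struct iovec iov[2] = {
			{ .iov_base = a, .iov_len = 6 },
			{ .iov_base = b, .iov_len = sizeof(b) - 1 },
		};
		char out[16] = { 0 };

		/* Start 6 bytes in (second segment), copy 11 bytes. */
		copy_from_iovec_end((unsigned char *)out, iov, 6, 11);
		printf("%s\n", out);   /* prints "iovec world" */
		return 0;
	}
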
- */ - -int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov, - int offset, int len) -{ - /* Skip over the finished iovecs */ - while (offset >= iov->iov_len) { - offset -= iov->iov_len; - iov++; - } - - while (len > 0) { - u8 __user *base = iov->iov_base + offset; - int copy = min_t(unsigned int, len, iov->iov_len - offset); - - offset = 0; - if (copy_from_user(kdata, base, copy)) - return -EFAULT; - len -= copy; - kdata += copy; - iov++; - } - - return 0; -} -EXPORT_SYMBOL(memcpy_fromiovecend); - -/* * And now for the all-in-one: copy and checksum from a user iovec * directly to a datagram * Calls to csum_partial but the last must be in 32 bit chunks diff --git a/net/core/neighbour.c b/net/core/neighbour.c index 32d872eec7f5..559890b0f0a2 100644 --- a/net/core/neighbour.c +++ b/net/core/neighbour.c @@ -3059,11 +3059,12 @@ int neigh_sysctl_register(struct net_device *dev, struct neigh_parms *p, memset(&t->neigh_vars[NEIGH_VAR_GC_INTERVAL], 0, sizeof(t->neigh_vars[NEIGH_VAR_GC_INTERVAL])); } else { + struct neigh_table *tbl = p->tbl; dev_name_source = "default"; - t->neigh_vars[NEIGH_VAR_GC_INTERVAL].data = (int *)(p + 1); - t->neigh_vars[NEIGH_VAR_GC_THRESH1].data = (int *)(p + 1) + 1; - t->neigh_vars[NEIGH_VAR_GC_THRESH2].data = (int *)(p + 1) + 2; - t->neigh_vars[NEIGH_VAR_GC_THRESH3].data = (int *)(p + 1) + 3; + t->neigh_vars[NEIGH_VAR_GC_INTERVAL].data = &tbl->gc_interval; + t->neigh_vars[NEIGH_VAR_GC_THRESH1].data = &tbl->gc_thresh1; + t->neigh_vars[NEIGH_VAR_GC_THRESH2].data = &tbl->gc_thresh2; + t->neigh_vars[NEIGH_VAR_GC_THRESH3].data = &tbl->gc_thresh3; } if (handler) { diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 9cd5344fad73..c1a33033cbe2 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -2993,7 +2993,7 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb, skb_put(nskb, len), len, 0); SKB_GSO_CB(nskb)->csum_start = - skb_headroom(nskb) + offset; + skb_headroom(nskb) + doffset; continue; } diff --git a/net/dns_resolver/dns_query.c b/net/dns_resolver/dns_query.c index 9acec61f5433..dd8696a3dbec 100644 --- a/net/dns_resolver/dns_query.c +++ b/net/dns_resolver/dns_query.c @@ -150,7 +150,7 @@ int dns_query(const char *type, const char *name, size_t namelen, goto put; memcpy(*_result, upayload->data, len); - *_result[len] = '\0'; + (*_result)[len] = '\0'; if (_expiry) *_expiry = rkey->expiry; diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index d5e6836cf772..d156b3c5f363 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -1429,6 +1429,9 @@ static int inet_gro_complete(struct sk_buff *skb, int nhoff) int proto = iph->protocol; int err = -ENOSYS; + if (skb->encapsulation) + skb_set_inner_network_header(skb, nhoff); + csum_replace2(&iph->check, iph->tot_len, newlen); iph->tot_len = newlen; diff --git a/net/ipv4/gre_demux.c b/net/ipv4/gre_demux.c index 4e9619bca732..0485bf7f8f03 100644 --- a/net/ipv4/gre_demux.c +++ b/net/ipv4/gre_demux.c @@ -68,6 +68,7 @@ void gre_build_header(struct sk_buff *skb, const struct tnl_ptk_info *tpi, skb_push(skb, hdr_len); + skb_reset_transport_header(skb); greh = (struct gre_base_hdr *)skb->data; greh->flags = tnl_flags_to_gre_flags(tpi->flags); greh->protocol = tpi->proto; diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c index eb92deb12666..f0bdd47bbbcb 100644 --- a/net/ipv4/gre_offload.c +++ b/net/ipv4/gre_offload.c @@ -263,6 +263,9 @@ static int gre_gro_complete(struct sk_buff *skb, int nhoff) int err = -ENOENT; __be16 type; + skb->encapsulation = 1; + skb_shinfo(skb)->gso_type = 
SKB_GSO_GRE; + type = greh->protocol; if (greh->flags & GRE_KEY) grehlen += GRE_HEADER_SECTION; diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index 79c3d947a481..42b7bcf8045b 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -739,8 +739,6 @@ static void icmp_unreach(struct sk_buff *skb) /* fall through */ case 0: info = ntohs(icmph->un.frag.mtu); - if (!info) - goto out; } break; case ICMP_SR_FAILED: diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c index 6748d420f714..db710b059bab 100644 --- a/net/ipv4/igmp.c +++ b/net/ipv4/igmp.c @@ -1944,6 +1944,10 @@ int ip_mc_leave_group(struct sock *sk, struct ip_mreqn *imr) rtnl_lock(); in_dev = ip_mc_find_dev(net, imr); + if (!in_dev) { + ret = -ENODEV; + goto out; + } ifindex = imr->imr_ifindex; for (imlp = &inet->mc_list; (iml = rtnl_dereference(*imlp)) != NULL; @@ -1961,16 +1965,14 @@ int ip_mc_leave_group(struct sock *sk, struct ip_mreqn *imr) *imlp = iml->next_rcu; - if (in_dev) - ip_mc_dec_group(in_dev, group); + ip_mc_dec_group(in_dev, group); rtnl_unlock(); /* decrease mem now to avoid the memleak warning */ atomic_sub(sizeof(*iml), &sk->sk_omem_alloc); kfree_rcu(iml, rcu); return 0; } - if (!in_dev) - ret = -ENODEV; +out: rtnl_unlock(); return ret; } diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c index 5e7aecea05cd..ad382499bace 100644 --- a/net/ipv4/ip_options.c +++ b/net/ipv4/ip_options.c @@ -288,6 +288,10 @@ int ip_options_compile(struct net *net, optptr++; continue; } + if (unlikely(l < 2)) { + pp_ptr = optptr; + goto error; + } optlen = optptr[1]; if (optlen < 2 || optlen > l) { pp_ptr = optptr; diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c index 097b3e7c1e8f..6f9de61dce5f 100644 --- a/net/ipv4/ip_tunnel.c +++ b/net/ipv4/ip_tunnel.c @@ -73,12 +73,7 @@ static void __tunnel_dst_set(struct ip_tunnel_dst *idst, { struct dst_entry *old_dst; - if (dst) { - if (dst->flags & DST_NOCACHE) - dst = NULL; - else - dst_clone(dst); - } + dst_clone(dst); old_dst = xchg((__force struct dst_entry **)&idst->dst, dst); dst_release(old_dst); } @@ -108,13 +103,14 @@ static struct rtable *tunnel_rtable_get(struct ip_tunnel *t, u32 cookie) rcu_read_lock(); dst = rcu_dereference(this_cpu_ptr(t->dst_cache)->dst); + if (dst && !atomic_inc_not_zero(&dst->__refcnt)) + dst = NULL; if (dst) { if (dst->obsolete && dst->ops->check(dst, cookie) == NULL) { - rcu_read_unlock(); tunnel_dst_reset(t); - return NULL; + dst_release(dst); + dst = NULL; } - dst_hold(dst); } rcu_read_unlock(); return (struct rtable *)dst; @@ -173,6 +169,7 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, hlist_for_each_entry_rcu(t, head, hash_node) { if (remote != t->parms.iph.daddr || + t->parms.iph.saddr != 0 || !(t->dev->flags & IFF_UP)) continue; @@ -189,10 +186,11 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, head = &itn->tunnels[hash]; hlist_for_each_entry_rcu(t, head, hash_node) { - if ((local != t->parms.iph.saddr && - (local != t->parms.iph.daddr || - !ipv4_is_multicast(local))) || - !(t->dev->flags & IFF_UP)) + if ((local != t->parms.iph.saddr || t->parms.iph.daddr != 0) && + (local != t->parms.iph.daddr || !ipv4_is_multicast(local))) + continue; + + if (!(t->dev->flags & IFF_UP)) continue; if (!ip_tunnel_key_match(&t->parms, flags, key)) @@ -209,6 +207,8 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, hlist_for_each_entry_rcu(t, head, hash_node) { if (t->parms.i_key != key || + t->parms.iph.saddr != 0 || + t->parms.iph.daddr != 0 || !(t->dev->flags & IFF_UP)) continue; diff --git a/net/ipv4/route.c 
b/net/ipv4/route.c index 082239ffe34a..3162ea923ded 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -1010,7 +1010,7 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) const struct iphdr *iph = (const struct iphdr *) skb->data; struct flowi4 fl4; struct rtable *rt; - struct dst_entry *dst; + struct dst_entry *odst = NULL; bool new = false; bh_lock_sock(sk); @@ -1018,16 +1018,17 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) if (!ip_sk_accept_pmtu(sk)) goto out; - rt = (struct rtable *) __sk_dst_get(sk); + odst = sk_dst_get(sk); - if (sock_owned_by_user(sk) || !rt) { + if (sock_owned_by_user(sk) || !odst) { __ipv4_sk_update_pmtu(skb, sk, mtu); goto out; } __build_flow_key(&fl4, sk, iph, 0, 0, 0, 0, 0); - if (!__sk_dst_check(sk, 0)) { + rt = (struct rtable *)odst; + if (odst->obsolete && odst->ops->check(odst, 0) == NULL) { rt = ip_route_output_flow(sock_net(sk), &fl4, sk); if (IS_ERR(rt)) goto out; @@ -1037,8 +1038,7 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) __ip_rt_update_pmtu((struct rtable *) rt->dst.path, &fl4, mtu); - dst = dst_check(&rt->dst, 0); - if (!dst) { + if (!dst_check(&rt->dst, 0)) { if (new) dst_release(&rt->dst); @@ -1050,10 +1050,11 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) } if (new) - __sk_dst_set(sk, &rt->dst); + sk_dst_set(sk, &rt->dst); out: bh_unlock_sock(sk); + dst_release(odst); } EXPORT_SYMBOL_GPL(ipv4_sk_update_pmtu); diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index eb1dde37e678..9d2118e5fbc7 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1108,7 +1108,7 @@ int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, if (unlikely(tp->repair)) { if (tp->repair_queue == TCP_RECV_QUEUE) { copied = tcp_send_rcvq(sk, msg, size); - goto out; + goto out_nopush; } err = -EINVAL; @@ -1282,6 +1282,7 @@ wait_for_memory: out: if (copied) tcp_push(sk, flags, mss_now, tp->nonagle, size_goal); +out_nopush: release_sock(sk); return copied + copied_syn; diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c index 62e48cf84e60..9771563ab564 100644 --- a/net/ipv4/tcp_fastopen.c +++ b/net/ipv4/tcp_fastopen.c @@ -131,7 +131,7 @@ static bool tcp_fastopen_create_child(struct sock *sk, struct dst_entry *dst, struct request_sock *req) { - struct tcp_sock *tp = tcp_sk(sk); + struct tcp_sock *tp; struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue; struct sock *child; diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 40661fc1e233..40639c288dc2 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -1106,7 +1106,7 @@ static bool tcp_check_dsack(struct sock *sk, const struct sk_buff *ack_skb, } /* D-SACK for already forgotten data... Do dumb counting. */ - if (dup_sack && tp->undo_marker && tp->undo_retrans && + if (dup_sack && tp->undo_marker && tp->undo_retrans > 0 && !after(end_seq_0, prior_snd_una) && after(end_seq_0, tp->undo_marker)) tp->undo_retrans--; @@ -1162,7 +1162,7 @@ static int tcp_match_skb_to_sack(struct sock *sk, struct sk_buff *skb, unsigned int new_len = (pkt_len / mss) * mss; if (!in_sack && new_len < pkt_len) { new_len += mss; - if (new_len > skb->len) + if (new_len >= skb->len) return 0; } pkt_len = new_len; @@ -1187,7 +1187,7 @@ static u8 tcp_sacktag_one(struct sock *sk, /* Account D-SACK for retransmitted packet. 
*/ if (dup_sack && (sacked & TCPCB_RETRANS)) { - if (tp->undo_marker && tp->undo_retrans && + if (tp->undo_marker && tp->undo_retrans > 0 && after(end_seq, tp->undo_marker)) tp->undo_retrans--; if (sacked & TCPCB_SACKED_ACKED) @@ -1893,7 +1893,7 @@ static void tcp_clear_retrans_partial(struct tcp_sock *tp) tp->lost_out = 0; tp->undo_marker = 0; - tp->undo_retrans = 0; + tp->undo_retrans = -1; } void tcp_clear_retrans(struct tcp_sock *tp) @@ -2665,7 +2665,7 @@ static void tcp_enter_recovery(struct sock *sk, bool ece_ack) tp->prior_ssthresh = 0; tp->undo_marker = tp->snd_una; - tp->undo_retrans = tp->retrans_out; + tp->undo_retrans = tp->retrans_out ? : -1; if (inet_csk(sk)->icsk_ca_state < TCP_CA_CWR) { if (!ece_ack) diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c index 4e86c59ec7f7..55046ecd083e 100644 --- a/net/ipv4/tcp_offload.c +++ b/net/ipv4/tcp_offload.c @@ -309,7 +309,7 @@ static int tcp4_gro_complete(struct sk_buff *skb, int thoff) th->check = ~tcp_v4_check(skb->len - thoff, iph->saddr, iph->daddr, 0); - skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4; + skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV4; return tcp_gro_complete(skb); } diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index d92bce0ea24e..179b51e6bda3 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2525,8 +2525,6 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb) if (!tp->retrans_stamp) tp->retrans_stamp = TCP_SKB_CB(skb)->when; - tp->undo_retrans += tcp_skb_pcount(skb); - /* snd_nxt is stored to detect loss of retransmitted segment, * see tcp_input.c tcp_sacktag_write_queue(). */ @@ -2534,6 +2532,10 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb) } else if (err != -EBUSY) { NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL); } + + if (tp->undo_retrans < 0) + tp->undo_retrans = 0; + tp->undo_retrans += tcp_skb_pcount(skb); return err; } diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index d92f94b7e402..7d5a8661df76 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -1588,8 +1588,11 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) goto csum_error; - if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) + if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) { + UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_RCVBUFERRORS, + is_udplite); goto drop; + } rc = 0; diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 08b367c6b9cf..617f0958e164 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -1301,8 +1301,17 @@ int igmp6_event_query(struct sk_buff *skb) len = ntohs(ipv6_hdr(skb)->payload_len) + sizeof(struct ipv6hdr); len -= skb_network_header_len(skb); - /* Drop queries with not link local source */ - if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) + /* RFC3810 6.2 + * Upon reception of an MLD message that contains a Query, the node + * checks if the source address of the message is a valid link-local + * address, if the Hop Limit is set to 1, and if the Router Alert + * option is present in the Hop-By-Hop Options header of the IPv6 + * packet. If any of these checks fails, the packet is dropped. 
+ */ + if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL) || + ipv6_hdr(skb)->hop_limit != 1 || + !(IP6CB(skb)->flags & IP6SKB_ROUTERALERT) || + IP6CB(skb)->ra != htons(IPV6_OPT_ROUTERALERT_MLD)) return -EINVAL; idev = __in6_dev_get(skb->dev); diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c index 8517d3cd1aed..01b0ff9a0c2c 100644 --- a/net/ipv6/tcpv6_offload.c +++ b/net/ipv6/tcpv6_offload.c @@ -73,7 +73,7 @@ static int tcp6_gro_complete(struct sk_buff *skb, int thoff) th->check = ~tcp_v6_check(skb->len - thoff, &iph->saddr, &iph->daddr, 0); - skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6; + skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV6; return tcp_gro_complete(skb); } diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c index 95c834799288..7092ff78fd84 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -674,8 +674,11 @@ int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) goto csum_error; } - if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) + if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) { + UDP6_INC_STATS_BH(sock_net(sk), + UDP_MIB_RCVBUFERRORS, is_udplite); goto drop; + } skb_dst_drop(skb); @@ -690,6 +693,7 @@ int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) bh_unlock_sock(sk); return rc; + csum_error: UDP6_INC_STATS_BH(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite); drop: diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c index 950909f04ee6..13752d96275e 100644 --- a/net/l2tp/l2tp_ppp.c +++ b/net/l2tp/l2tp_ppp.c @@ -1365,7 +1365,7 @@ static int pppol2tp_setsockopt(struct socket *sock, int level, int optname, int err; if (level != SOL_PPPOL2TP) - return udp_prot.setsockopt(sk, level, optname, optval, optlen); + return -EINVAL; if (optlen < sizeof(int)) return -EINVAL; @@ -1491,7 +1491,7 @@ static int pppol2tp_getsockopt(struct socket *sock, int level, int optname, struct pppol2tp_session *ps; if (level != SOL_PPPOL2TP) - return udp_prot.getsockopt(sk, level, optname, optval, optlen); + return -EINVAL; if (get_user(len, optlen)) return -EFAULT; diff --git a/net/mac80211/util.c b/net/mac80211/util.c index 6886601afe1c..a6cda52ed920 100644 --- a/net/mac80211/util.c +++ b/net/mac80211/util.c @@ -1096,11 +1096,12 @@ void ieee80211_send_auth(struct ieee80211_sub_if_data *sdata, int err; /* 24 + 6 = header + auth_algo + auth_transaction + status_code */ - skb = dev_alloc_skb(local->hw.extra_tx_headroom + 24 + 6 + extra_len); + skb = dev_alloc_skb(local->hw.extra_tx_headroom + IEEE80211_WEP_IV_LEN + + 24 + 6 + extra_len + IEEE80211_WEP_ICV_LEN); if (!skb) return; - skb_reserve(skb, local->hw.extra_tx_headroom); + skb_reserve(skb, local->hw.extra_tx_headroom + IEEE80211_WEP_IV_LEN); mgmt = (struct ieee80211_mgmt *) skb_put(skb, 24 + 6); memset(mgmt, 0, 24 + 6); diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c index c42e83d2751c..581a6584ed0c 100644 --- a/net/netfilter/ipvs/ip_vs_ctl.c +++ b/net/netfilter/ipvs/ip_vs_ctl.c @@ -3778,6 +3778,7 @@ static void __net_exit ip_vs_control_net_cleanup_sysctl(struct net *net) cancel_delayed_work_sync(&ipvs->defense_work); cancel_work_sync(&ipvs->defense_work.work); unregister_net_sysctl_table(ipvs->sysctl_hdr); + ip_vs_stop_estimator(net, &ipvs->tot_stats); } #else @@ -3840,7 +3841,6 @@ void __net_exit ip_vs_control_net_cleanup(struct net *net) struct netns_ipvs *ipvs = net_ipvs(net); ip_vs_trash_cleanup(net); - ip_vs_stop_estimator(net, &ipvs->tot_stats); ip_vs_control_net_cleanup_sysctl(net); remove_proc_entry("ip_vs_stats_percpu", net->proc_net); remove_proc_entry("ip_vs_stats", 
net->proc_net); } diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c index 58579634427d..300ed1eec729 100644 --- a/net/netfilter/nf_conntrack_netlink.c +++ b/net/netfilter/nf_conntrack_netlink.c @@ -597,6 +597,9 @@ ctnetlink_nlmsg_size(const struct nf_conn *ct) #ifdef CONFIG_NF_CONNTRACK_MARK + nla_total_size(sizeof(u_int32_t)) /* CTA_MARK */ #endif +#ifdef CONFIG_NF_CONNTRACK_ZONES + + nla_total_size(sizeof(u_int16_t)) /* CTA_ZONE */ +#endif + ctnetlink_proto_size(ct) + ctnetlink_label_size(ct) ; @@ -1150,7 +1153,7 @@ static int ctnetlink_done_list(struct netlink_callback *cb) static int ctnetlink_dump_list(struct sk_buff *skb, struct netlink_callback *cb, bool dying) { - struct nf_conn *ct, *last = NULL; + struct nf_conn *ct, *last; struct nf_conntrack_tuple_hash *h; struct hlist_nulls_node *n; struct nfgenmsg *nfmsg = nlmsg_data(cb->nlh); @@ -1163,8 +1166,7 @@ ctnetlink_dump_list(struct sk_buff *skb, struct netlink_callback *cb, bool dying if (cb->args[2]) return 0; - if (cb->args[0] == nr_cpu_ids) - return 0; + last = (struct nf_conn *)cb->args[1]; for (cpu = cb->args[0]; cpu < nr_cpu_ids; cpu++) { struct ct_pcpu *pcpu; @@ -1174,7 +1176,6 @@ ctnetlink_dump_list(struct sk_buff *skb, struct netlink_callback *cb, bool dying pcpu = per_cpu_ptr(net->ct.pcpu_lists, cpu); spin_lock_bh(&pcpu->lock); - last = (struct nf_conn *)cb->args[1]; list = dying ? &pcpu->dying : &pcpu->unconfirmed; restart: hlist_nulls_for_each_entry(h, n, list, hnnode) { @@ -1193,7 +1194,9 @@ restart: ct); rcu_read_unlock(); if (res < 0) { - nf_conntrack_get(&ct->ct_general); + if (!atomic_inc_not_zero(&ct->ct_general.use)) + continue; + cb->args[0] = cpu; cb->args[1] = (unsigned long)ct; spin_unlock_bh(&pcpu->lock); goto out; @@ -1202,10 +1205,10 @@ restart: if (cb->args[1]) { cb->args[1] = 0; goto restart; - } else - cb->args[2] = 1; + } spin_unlock_bh(&pcpu->lock); } + cb->args[2] = 1; out: if (last) nf_ct_put(last); @@ -2040,6 +2043,9 @@ ctnetlink_nfqueue_build_size(const struct nf_conn *ct) #ifdef CONFIG_NF_CONNTRACK_MARK + nla_total_size(sizeof(u_int32_t)) /* CTA_MARK */ #endif +#ifdef CONFIG_NF_CONNTRACK_ZONES + + nla_total_size(sizeof(u_int16_t)) /* CTA_ZONE */ +#endif + ctnetlink_proto_size(ct) ; } diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c index 09096a670c45..a49907b1dabc 100644 --- a/net/netfilter/nf_nat_core.c +++ b/net/netfilter/nf_nat_core.c @@ -525,6 +525,39 @@ static int nf_nat_proto_remove(struct nf_conn *i, void *data) return i->status & IPS_NAT_MASK ? 1 : 0; } +static int nf_nat_proto_clean(struct nf_conn *ct, void *data) +{ + struct nf_conn_nat *nat = nfct_nat(ct); + + if (nf_nat_proto_remove(ct, data)) + return 1; + + if (!nat || !nat->ct) + return 0; + + /* This netns is being destroyed, and conntrack has nat null binding. + * Remove it from bysource hash, as the table will be freed soon. + * + * Else, when the conntrack is destroyed, nf_nat_cleanup_conntrack() + * will delete entry from already-freed table. + */ + if (!del_timer(&ct->timeout)) + return 1; + + spin_lock_bh(&nf_nat_lock); + hlist_del_rcu(&nat->bysource); + ct->status &= ~IPS_NAT_DONE_MASK; + nat->ct = NULL; + spin_unlock_bh(&nf_nat_lock); + + add_timer(&ct->timeout); + + /* don't delete conntrack. Although that would make things a lot + * simpler, we'd end up flushing all conntracks on nat rmmod. 
+ */ + return 0; +} + static void nf_nat_l4proto_clean(u8 l3proto, u8 l4proto) { struct nf_nat_proto_clean clean = { @@ -795,7 +828,7 @@ static void __net_exit nf_nat_net_exit(struct net *net) { struct nf_nat_proto_clean clean = {}; - nf_ct_iterate_cleanup(net, &nf_nat_proto_remove, &clean, 0, 0); + nf_ct_iterate_cleanup(net, nf_nat_proto_clean, &clean, 0, 0); synchronize_rcu(); nf_ct_free_hashtable(net->ct.nat_bysource, net->ct.nat_htable_size); } diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 624e083125b9..8746ff9a8357 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -35,7 +35,7 @@ int nft_register_afinfo(struct net *net, struct nft_af_info *afi) { INIT_LIST_HEAD(&afi->tables); nfnl_lock(NFNL_SUBSYS_NFTABLES); - list_add_tail(&afi->list, &net->nft.af_info); + list_add_tail_rcu(&afi->list, &net->nft.af_info); nfnl_unlock(NFNL_SUBSYS_NFTABLES); return 0; } @@ -51,7 +51,7 @@ EXPORT_SYMBOL_GPL(nft_register_afinfo); void nft_unregister_afinfo(struct nft_af_info *afi) { nfnl_lock(NFNL_SUBSYS_NFTABLES); - list_del(&afi->list); + list_del_rcu(&afi->list); nfnl_unlock(NFNL_SUBSYS_NFTABLES); } EXPORT_SYMBOL_GPL(nft_unregister_afinfo); @@ -277,11 +277,14 @@ static int nf_tables_dump_tables(struct sk_buff *skb, struct net *net = sock_net(skb->sk); int family = nfmsg->nfgen_family; - list_for_each_entry(afi, &net->nft.af_info, list) { + rcu_read_lock(); + cb->seq = net->nft.base_seq; + + list_for_each_entry_rcu(afi, &net->nft.af_info, list) { if (family != NFPROTO_UNSPEC && family != afi->family) continue; - list_for_each_entry(table, &afi->tables, list) { + list_for_each_entry_rcu(table, &afi->tables, list) { if (idx < s_idx) goto cont; if (idx > s_idx) @@ -294,11 +297,14 @@ static int nf_tables_dump_tables(struct sk_buff *skb, NLM_F_MULTI, afi->family, table) < 0) goto done; + + nl_dump_check_consistent(cb, nlmsg_hdr(skb)); cont: idx++; } } done: + rcu_read_unlock(); cb->args[0] = idx; return skb->len; } @@ -407,6 +413,9 @@ static int nf_tables_updtable(struct nft_ctx *ctx) if (flags & ~NFT_TABLE_F_DORMANT) return -EINVAL; + if (flags == ctx->table->flags) + return 0; + trans = nft_trans_alloc(ctx, NFT_MSG_NEWTABLE, sizeof(struct nft_trans_table)); if (trans == NULL) @@ -514,7 +523,7 @@ static int nf_tables_newtable(struct sock *nlsk, struct sk_buff *skb, module_put(afi->owner); return err; } - list_add_tail(&table->list, &afi->tables); + list_add_tail_rcu(&table->list, &afi->tables); return 0; } @@ -546,7 +555,7 @@ static int nf_tables_deltable(struct sock *nlsk, struct sk_buff *skb, if (err < 0) return err; - list_del(&table->list); + list_del_rcu(&table->list); return 0; } @@ -635,13 +644,20 @@ static int nft_dump_stats(struct sk_buff *skb, struct nft_stats __percpu *stats) { struct nft_stats *cpu_stats, total; struct nlattr *nest; + unsigned int seq; + u64 pkts, bytes; int cpu; memset(&total, 0, sizeof(total)); for_each_possible_cpu(cpu) { cpu_stats = per_cpu_ptr(stats, cpu); - total.pkts += cpu_stats->pkts; - total.bytes += cpu_stats->bytes; + do { + seq = u64_stats_fetch_begin_irq(&cpu_stats->syncp); + pkts = cpu_stats->pkts; + bytes = cpu_stats->bytes; + } while (u64_stats_fetch_retry_irq(&cpu_stats->syncp, seq)); + total.pkts += pkts; + total.bytes += bytes; } nest = nla_nest_start(skb, NFTA_CHAIN_COUNTERS); if (nest == NULL) @@ -761,12 +777,15 @@ static int nf_tables_dump_chains(struct sk_buff *skb, struct net *net = sock_net(skb->sk); int family = nfmsg->nfgen_family; - list_for_each_entry(afi, &net->nft.af_info, list) { + 
rcu_read_lock(); + cb->seq = net->nft.base_seq; + + list_for_each_entry_rcu(afi, &net->nft.af_info, list) { if (family != NFPROTO_UNSPEC && family != afi->family) continue; - list_for_each_entry(table, &afi->tables, list) { - list_for_each_entry(chain, &table->chains, list) { + list_for_each_entry_rcu(table, &afi->tables, list) { + list_for_each_entry_rcu(chain, &table->chains, list) { if (idx < s_idx) goto cont; if (idx > s_idx) @@ -778,17 +797,19 @@ static int nf_tables_dump_chains(struct sk_buff *skb, NLM_F_MULTI, afi->family, table, chain) < 0) goto done; + + nl_dump_check_consistent(cb, nlmsg_hdr(skb)); cont: idx++; } } } done: + rcu_read_unlock(); cb->args[0] = idx; return skb->len; } - static int nf_tables_getchain(struct sock *nlsk, struct sk_buff *skb, const struct nlmsghdr *nlh, const struct nlattr * const nla[]) @@ -861,7 +882,7 @@ static struct nft_stats __percpu *nft_stats_alloc(const struct nlattr *attr) if (!tb[NFTA_COUNTER_BYTES] || !tb[NFTA_COUNTER_PACKETS]) return ERR_PTR(-EINVAL); - newstats = alloc_percpu(struct nft_stats); + newstats = netdev_alloc_pcpu_stats(struct nft_stats); if (newstats == NULL) return ERR_PTR(-ENOMEM); @@ -1077,7 +1098,7 @@ static int nf_tables_newchain(struct sock *nlsk, struct sk_buff *skb, } basechain->stats = stats; } else { - stats = alloc_percpu(struct nft_stats); + stats = netdev_alloc_pcpu_stats(struct nft_stats); if (IS_ERR(stats)) { module_put(type->owner); kfree(basechain); @@ -1130,7 +1151,7 @@ static int nf_tables_newchain(struct sock *nlsk, struct sk_buff *skb, goto err2; table->use++; - list_add_tail(&chain->list, &table->chains); + list_add_tail_rcu(&chain->list, &table->chains); return 0; err2: if (!(table->flags & NFT_TABLE_F_DORMANT) && @@ -1180,7 +1201,7 @@ static int nf_tables_delchain(struct sock *nlsk, struct sk_buff *skb, return err; table->use--; - list_del(&chain->list); + list_del_rcu(&chain->list); return 0; } @@ -1199,9 +1220,9 @@ int nft_register_expr(struct nft_expr_type *type) { nfnl_lock(NFNL_SUBSYS_NFTABLES); if (type->family == NFPROTO_UNSPEC) - list_add_tail(&type->list, &nf_tables_expressions); + list_add_tail_rcu(&type->list, &nf_tables_expressions); else - list_add(&type->list, &nf_tables_expressions); + list_add_rcu(&type->list, &nf_tables_expressions); nfnl_unlock(NFNL_SUBSYS_NFTABLES); return 0; } @@ -1216,7 +1237,7 @@ EXPORT_SYMBOL_GPL(nft_register_expr); void nft_unregister_expr(struct nft_expr_type *type) { nfnl_lock(NFNL_SUBSYS_NFTABLES); - list_del(&type->list); + list_del_rcu(&type->list); nfnl_unlock(NFNL_SUBSYS_NFTABLES); } EXPORT_SYMBOL_GPL(nft_unregister_expr); @@ -1549,16 +1570,17 @@ static int nf_tables_dump_rules(struct sk_buff *skb, unsigned int idx = 0, s_idx = cb->args[0]; struct net *net = sock_net(skb->sk); int family = nfmsg->nfgen_family; - u8 genctr = ACCESS_ONCE(net->nft.genctr); - u8 gencursor = ACCESS_ONCE(net->nft.gencursor); - list_for_each_entry(afi, &net->nft.af_info, list) { + rcu_read_lock(); + cb->seq = net->nft.base_seq; + + list_for_each_entry_rcu(afi, &net->nft.af_info, list) { if (family != NFPROTO_UNSPEC && family != afi->family) continue; - list_for_each_entry(table, &afi->tables, list) { - list_for_each_entry(chain, &table->chains, list) { - list_for_each_entry(rule, &chain->rules, list) { + list_for_each_entry_rcu(table, &afi->tables, list) { + list_for_each_entry_rcu(chain, &table->chains, list) { + list_for_each_entry_rcu(rule, &chain->rules, list) { if (!nft_rule_is_active(net, rule)) goto cont; if (idx < s_idx) @@ -1572,6 +1594,8 @@ static int 
nf_tables_dump_rules(struct sk_buff *skb, NLM_F_MULTI | NLM_F_APPEND, afi->family, table, chain, rule) < 0) goto done; + + nl_dump_check_consistent(cb, nlmsg_hdr(skb)); cont: idx++; } @@ -1579,9 +1603,7 @@ cont: } } done: - /* Invalidate this dump, a transition to the new generation happened */ - if (gencursor != net->nft.gencursor || genctr != net->nft.genctr) - return -EBUSY; + rcu_read_unlock(); cb->args[0] = idx; return skb->len; @@ -1730,6 +1752,9 @@ static int nf_tables_newrule(struct sock *nlsk, struct sk_buff *skb, if (!create || nlh->nlmsg_flags & NLM_F_REPLACE) return -EINVAL; handle = nf_tables_alloc_handle(table); + + if (chain->use == UINT_MAX) + return -EOVERFLOW; } if (nla[NFTA_RULE_POSITION]) { @@ -1789,14 +1814,15 @@ static int nf_tables_newrule(struct sock *nlsk, struct sk_buff *skb, if (nlh->nlmsg_flags & NLM_F_REPLACE) { if (nft_rule_is_active_next(net, old_rule)) { - trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, + trans = nft_trans_rule_add(&ctx, NFT_MSG_DELRULE, old_rule); if (trans == NULL) { err = -ENOMEM; goto err2; } nft_rule_disactivate_next(net, old_rule); - list_add_tail(&rule->list, &old_rule->list); + chain->use--; + list_add_tail_rcu(&rule->list, &old_rule->list); } else { err = -ENOENT; goto err2; @@ -1826,6 +1852,7 @@ err3: list_del_rcu(&nft_trans_rule(trans)->list); nft_rule_clear(net, nft_trans_rule(trans)); nft_trans_destroy(trans); + chain->use++; } err2: nf_tables_rule_destroy(&ctx, rule); @@ -1927,7 +1954,7 @@ static LIST_HEAD(nf_tables_set_ops); int nft_register_set(struct nft_set_ops *ops) { nfnl_lock(NFNL_SUBSYS_NFTABLES); - list_add_tail(&ops->list, &nf_tables_set_ops); + list_add_tail_rcu(&ops->list, &nf_tables_set_ops); nfnl_unlock(NFNL_SUBSYS_NFTABLES); return 0; } @@ -1936,7 +1963,7 @@ EXPORT_SYMBOL_GPL(nft_register_set); void nft_unregister_set(struct nft_set_ops *ops) { nfnl_lock(NFNL_SUBSYS_NFTABLES); - list_del(&ops->list); + list_del_rcu(&ops->list); nfnl_unlock(NFNL_SUBSYS_NFTABLES); } EXPORT_SYMBOL_GPL(nft_unregister_set); @@ -2229,7 +2256,10 @@ static int nf_tables_dump_sets_table(struct nft_ctx *ctx, struct sk_buff *skb, if (cb->args[1]) return skb->len; - list_for_each_entry(set, &ctx->table->sets, list) { + rcu_read_lock(); + cb->seq = ctx->net->nft.base_seq; + + list_for_each_entry_rcu(set, &ctx->table->sets, list) { if (idx < s_idx) goto cont; if (nf_tables_fill_set(skb, ctx, set, NFT_MSG_NEWSET, @@ -2237,11 +2267,13 @@ static int nf_tables_dump_sets_table(struct nft_ctx *ctx, struct sk_buff *skb, cb->args[0] = idx; goto done; } + nl_dump_check_consistent(cb, nlmsg_hdr(skb)); cont: idx++; } cb->args[1] = 1; done: + rcu_read_unlock(); return skb->len; } @@ -2255,7 +2287,10 @@ static int nf_tables_dump_sets_family(struct nft_ctx *ctx, struct sk_buff *skb, if (cb->args[1]) return skb->len; - list_for_each_entry(table, &ctx->afi->tables, list) { + rcu_read_lock(); + cb->seq = ctx->net->nft.base_seq; + + list_for_each_entry_rcu(table, &ctx->afi->tables, list) { if (cur_table) { if (cur_table != table) continue; @@ -2264,7 +2299,7 @@ static int nf_tables_dump_sets_family(struct nft_ctx *ctx, struct sk_buff *skb, } ctx->table = table; idx = 0; - list_for_each_entry(set, &ctx->table->sets, list) { + list_for_each_entry_rcu(set, &ctx->table->sets, list) { if (idx < s_idx) goto cont; if (nf_tables_fill_set(skb, ctx, set, NFT_MSG_NEWSET, @@ -2273,12 +2308,14 @@ static int nf_tables_dump_sets_family(struct nft_ctx *ctx, struct sk_buff *skb, cb->args[2] = (unsigned long) table; goto done; } + nl_dump_check_consistent(cb, nlmsg_hdr(skb)); 
cont: idx++; } } cb->args[1] = 1; done: + rcu_read_unlock(); return skb->len; } @@ -2295,7 +2332,10 @@ static int nf_tables_dump_sets_all(struct nft_ctx *ctx, struct sk_buff *skb, if (cb->args[1]) return skb->len; - list_for_each_entry(afi, &net->nft.af_info, list) { + rcu_read_lock(); + cb->seq = net->nft.base_seq; + + list_for_each_entry_rcu(afi, &net->nft.af_info, list) { if (cur_family) { if (afi->family != cur_family) continue; @@ -2303,7 +2343,7 @@ static int nf_tables_dump_sets_all(struct nft_ctx *ctx, struct sk_buff *skb, cur_family = 0; } - list_for_each_entry(table, &afi->tables, list) { + list_for_each_entry_rcu(table, &afi->tables, list) { if (cur_table) { if (cur_table != table) continue; @@ -2314,7 +2354,7 @@ static int nf_tables_dump_sets_all(struct nft_ctx *ctx, struct sk_buff *skb, ctx->table = table; ctx->afi = afi; idx = 0; - list_for_each_entry(set, &ctx->table->sets, list) { + list_for_each_entry_rcu(set, &ctx->table->sets, list) { if (idx < s_idx) goto cont; if (nf_tables_fill_set(skb, ctx, set, @@ -2325,6 +2365,7 @@ static int nf_tables_dump_sets_all(struct nft_ctx *ctx, struct sk_buff *skb, cb->args[3] = afi->family; goto done; } + nl_dump_check_consistent(cb, nlmsg_hdr(skb)); cont: idx++; } @@ -2334,6 +2375,7 @@ cont: } cb->args[1] = 1; done: + rcu_read_unlock(); return skb->len; } @@ -2592,7 +2634,7 @@ static int nf_tables_newset(struct sock *nlsk, struct sk_buff *skb, if (err < 0) goto err2; - list_add_tail(&set->list, &table->sets); + list_add_tail_rcu(&set->list, &table->sets); table->use++; return 0; @@ -2612,7 +2654,7 @@ static void nft_set_destroy(struct nft_set *set) static void nf_tables_set_destroy(const struct nft_ctx *ctx, struct nft_set *set) { - list_del(&set->list); + list_del_rcu(&set->list); nf_tables_set_notify(ctx, set, NFT_MSG_DELSET, GFP_ATOMIC); nft_set_destroy(set); } @@ -2647,7 +2689,7 @@ static int nf_tables_delset(struct sock *nlsk, struct sk_buff *skb, if (err < 0) return err; - list_del(&set->list); + list_del_rcu(&set->list); ctx.table->use--; return 0; } @@ -2699,14 +2741,14 @@ int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set, } bind: binding->chain = ctx->chain; - list_add_tail(&binding->list, &set->bindings); + list_add_tail_rcu(&binding->list, &set->bindings); return 0; } void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set, struct nft_set_binding *binding) { - list_del(&binding->list); + list_del_rcu(&binding->list); if (list_empty(&set->bindings) && set->flags & NFT_SET_ANONYMOUS && !(set->flags & NFT_SET_INACTIVE)) @@ -2845,7 +2887,7 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb) goto nla_put_failure; nfmsg = nlmsg_data(nlh); - nfmsg->nfgen_family = NFPROTO_UNSPEC; + nfmsg->nfgen_family = ctx.afi->family; nfmsg->version = NFNETLINK_V0; nfmsg->res_id = 0; @@ -3341,7 +3383,7 @@ static int nf_tables_commit(struct sk_buff *skb) struct nft_set *set; /* Bump generation counter, invalidate any dump in progress */ - net->nft.genctr++; + while (++net->nft.base_seq == 0); /* A new generation has just started */ net->nft.gencursor = gencursor_next(net); @@ -3486,12 +3528,12 @@ static int nf_tables_abort(struct sk_buff *skb) } nft_trans_destroy(trans); } else { - list_del(&trans->ctx.table->list); + list_del_rcu(&trans->ctx.table->list); } break; case NFT_MSG_DELTABLE: - list_add_tail(&trans->ctx.table->list, - &trans->ctx.afi->tables); + list_add_tail_rcu(&trans->ctx.table->list, + &trans->ctx.afi->tables); nft_trans_destroy(trans); break; case NFT_MSG_NEWCHAIN: @@ 
-3502,7 +3544,7 @@ static int nf_tables_abort(struct sk_buff *skb) nft_trans_destroy(trans); } else { trans->ctx.table->use--; - list_del(&trans->ctx.chain->list); + list_del_rcu(&trans->ctx.chain->list); if (!(trans->ctx.table->flags & NFT_TABLE_F_DORMANT) && trans->ctx.chain->flags & NFT_BASE_CHAIN) { nf_unregister_hooks(nft_base_chain(trans->ctx.chain)->ops, @@ -3512,8 +3554,8 @@ static int nf_tables_abort(struct sk_buff *skb) break; case NFT_MSG_DELCHAIN: trans->ctx.table->use++; - list_add_tail(&trans->ctx.chain->list, - &trans->ctx.table->chains); + list_add_tail_rcu(&trans->ctx.chain->list, + &trans->ctx.table->chains); nft_trans_destroy(trans); break; case NFT_MSG_NEWRULE: @@ -3527,12 +3569,12 @@ static int nf_tables_abort(struct sk_buff *skb) break; case NFT_MSG_NEWSET: trans->ctx.table->use--; - list_del(&nft_trans_set(trans)->list); + list_del_rcu(&nft_trans_set(trans)->list); break; case NFT_MSG_DELSET: trans->ctx.table->use++; - list_add_tail(&nft_trans_set(trans)->list, - &trans->ctx.table->sets); + list_add_tail_rcu(&nft_trans_set(trans)->list, + &trans->ctx.table->sets); nft_trans_destroy(trans); break; case NFT_MSG_NEWSETELEM: @@ -3946,6 +3988,7 @@ static int nf_tables_init_net(struct net *net) { INIT_LIST_HEAD(&net->nft.af_info); INIT_LIST_HEAD(&net->nft.commit_list); + net->nft.base_seq = 1; return 0; } diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c index 345acfb1720b..3b90eb2b2c55 100644 --- a/net/netfilter/nf_tables_core.c +++ b/net/netfilter/nf_tables_core.c @@ -109,7 +109,7 @@ nft_do_chain(struct nft_pktinfo *pkt, const struct nf_hook_ops *ops) struct nft_data data[NFT_REG_MAX + 1]; unsigned int stackptr = 0; struct nft_jumpstack jumpstack[NFT_JUMP_STACK_SIZE]; - struct nft_stats __percpu *stats; + struct nft_stats *stats; int rulenum; /* * Cache cursor to avoid problems in case that the cursor is updated @@ -205,9 +205,11 @@ next_rule: nft_trace_packet(pkt, basechain, -1, NFT_TRACE_POLICY); rcu_read_lock_bh(); - stats = rcu_dereference(nft_base_chain(basechain)->stats); - __this_cpu_inc(stats->pkts); - __this_cpu_add(stats->bytes, pkt->skb->len); + stats = this_cpu_ptr(rcu_dereference(nft_base_chain(basechain)->stats)); + u64_stats_update_begin(&stats->syncp); + stats->pkts++; + stats->bytes += pkt->skb->len; + u64_stats_update_end(&stats->syncp); rcu_read_unlock_bh(); return nft_base_chain(basechain)->policy; diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c index 8a779be832fb..1840989092ed 100644 --- a/net/netfilter/nft_compat.c +++ b/net/netfilter/nft_compat.c @@ -195,6 +195,15 @@ static void nft_target_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr) { struct xt_target *target = expr->ops->data; + void *info = nft_expr_priv(expr); + struct xt_tgdtor_param par; + + par.net = ctx->net; + par.target = target; + par.targinfo = info; + par.family = ctx->afi->family; + if (par.target->destroy != NULL) + par.target->destroy(&par); module_put(target->me); } @@ -382,6 +391,15 @@ static void nft_match_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr) { struct xt_match *match = expr->ops->data; + void *info = nft_expr_priv(expr); + struct xt_mtdtor_param par; + + par.net = ctx->net; + par.match = match; + par.matchinfo = info; + par.family = ctx->afi->family; + if (par.match->destroy != NULL) + par.match->destroy(&par); module_put(match->me); } diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c index a0195d28bcfc..79ff58cd36dc 100644 --- a/net/netfilter/nft_nat.c +++ 
b/net/netfilter/nft_nat.c @@ -175,12 +175,14 @@ static int nft_nat_dump(struct sk_buff *skb, const struct nft_expr *expr) if (nla_put_be32(skb, NFTA_NAT_REG_ADDR_MAX, htonl(priv->sreg_addr_max))) goto nla_put_failure; - if (nla_put_be32(skb, - NFTA_NAT_REG_PROTO_MIN, htonl(priv->sreg_proto_min))) - goto nla_put_failure; - if (nla_put_be32(skb, - NFTA_NAT_REG_PROTO_MAX, htonl(priv->sreg_proto_max))) - goto nla_put_failure; + if (priv->sreg_proto_min) { + if (nla_put_be32(skb, NFTA_NAT_REG_PROTO_MIN, + htonl(priv->sreg_proto_min))) + goto nla_put_failure; + if (nla_put_be32(skb, NFTA_NAT_REG_PROTO_MAX, + htonl(priv->sreg_proto_max))) + goto nla_put_failure; + } return 0; nla_put_failure: diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c index 15c731f03fa6..e6fac7e3db52 100644 --- a/net/netlink/af_netlink.c +++ b/net/netlink/af_netlink.c @@ -636,7 +636,7 @@ static unsigned int netlink_poll(struct file *file, struct socket *sock, while (nlk->cb_running && netlink_dump_space(nlk)) { err = netlink_dump(sk); if (err < 0) { - sk->sk_err = err; + sk->sk_err = -err; sk->sk_error_report(sk); break; } @@ -2483,7 +2483,7 @@ static int netlink_recvmsg(struct kiocb *kiocb, struct socket *sock, atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) { ret = netlink_dump(sk); if (ret) { - sk->sk_err = ret; + sk->sk_err = -ret; sk->sk_error_report(sk); } } diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c index c36856a457ca..e70d8b18e962 100644 --- a/net/openvswitch/actions.c +++ b/net/openvswitch/actions.c @@ -551,6 +551,8 @@ static int do_execute_actions(struct datapath *dp, struct sk_buff *skb, case OVS_ACTION_ATTR_SAMPLE: err = sample(dp, skb, a); + if (unlikely(err)) /* skb already freed. */ + return err; break; } diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c index 0d407bca81e3..9db4bf6740d1 100644 --- a/net/openvswitch/datapath.c +++ b/net/openvswitch/datapath.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2013 Nicira, Inc. + * Copyright (c) 2007-2014 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public @@ -276,7 +276,7 @@ void ovs_dp_process_received_packet(struct vport *p, struct sk_buff *skb) OVS_CB(skb)->flow = flow; OVS_CB(skb)->pkt_key = &key; - ovs_flow_stats_update(OVS_CB(skb)->flow, skb); + ovs_flow_stats_update(OVS_CB(skb)->flow, key.tp.flags, skb); ovs_execute_actions(dp, skb); stats_counter = &stats->n_hit; @@ -889,8 +889,11 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info) } /* The unmasked key has to be the same for flow updates. */ if (unlikely(!ovs_flow_cmp_unmasked_key(flow, &match))) { - error = -EEXIST; - goto err_unlock_ovs; + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); + if (!flow) { + error = -ENOENT; + goto err_unlock_ovs; + } } /* Update actions. */ old_acts = ovsl_dereference(flow->sf_acts); @@ -981,16 +984,12 @@ static int ovs_flow_cmd_set(struct sk_buff *skb, struct genl_info *info) goto err_unlock_ovs; } /* Check that the flow exists. */ - flow = ovs_flow_tbl_lookup(&dp->table, &key); + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); if (unlikely(!flow)) { error = -ENOENT; goto err_unlock_ovs; } - /* The unmasked key has to be the same for flow updates. */ - if (unlikely(!ovs_flow_cmp_unmasked_key(flow, &match))) { - error = -EEXIST; - goto err_unlock_ovs; - } + /* Update actions, if present. 
*/ if (likely(acts)) { old_acts = ovsl_dereference(flow->sf_acts); @@ -1063,8 +1062,8 @@ static int ovs_flow_cmd_get(struct sk_buff *skb, struct genl_info *info) goto unlock; } - flow = ovs_flow_tbl_lookup(&dp->table, &key); - if (!flow || !ovs_flow_cmp_unmasked_key(flow, &match)) { + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); + if (!flow) { err = -ENOENT; goto unlock; } @@ -1113,8 +1112,8 @@ static int ovs_flow_cmd_del(struct sk_buff *skb, struct genl_info *info) goto unlock; } - flow = ovs_flow_tbl_lookup(&dp->table, &key); - if (unlikely(!flow || !ovs_flow_cmp_unmasked_key(flow, &match))) { + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); + if (unlikely(!flow)) { err = -ENOENT; goto unlock; } diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c index 334751cb1528..d07ab538fc9d 100644 --- a/net/openvswitch/flow.c +++ b/net/openvswitch/flow.c @@ -61,10 +61,10 @@ u64 ovs_flow_used_time(unsigned long flow_jiffies) #define TCP_FLAGS_BE16(tp) (*(__be16 *)&tcp_flag_word(tp) & htons(0x0FFF)) -void ovs_flow_stats_update(struct sw_flow *flow, struct sk_buff *skb) +void ovs_flow_stats_update(struct sw_flow *flow, __be16 tcp_flags, + struct sk_buff *skb) { struct flow_stats *stats; - __be16 tcp_flags = flow->key.tp.flags; int node = numa_node_id(); stats = rcu_dereference(flow->stats[node]); diff --git a/net/openvswitch/flow.h b/net/openvswitch/flow.h index ac395d2cd821..5e5aaed3a85b 100644 --- a/net/openvswitch/flow.h +++ b/net/openvswitch/flow.h @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2013 Nicira, Inc. + * Copyright (c) 2007-2014 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public @@ -180,7 +180,8 @@ struct arp_eth_header { unsigned char ar_tip[4]; /* target IP address */ } __packed; -void ovs_flow_stats_update(struct sw_flow *, struct sk_buff *); +void ovs_flow_stats_update(struct sw_flow *, __be16 tcp_flags, + struct sk_buff *); void ovs_flow_stats_get(const struct sw_flow *, struct ovs_flow_stats *, unsigned long *used, __be16 *tcp_flags); void ovs_flow_stats_clear(struct sw_flow *); diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c index 574c3abc9b30..cf2d853646f0 100644 --- a/net/openvswitch/flow_table.c +++ b/net/openvswitch/flow_table.c @@ -456,6 +456,22 @@ struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *tbl, return ovs_flow_tbl_lookup_stats(tbl, key, &n_mask_hit); } +struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl, + struct sw_flow_match *match) +{ + struct table_instance *ti = rcu_dereference_ovsl(tbl->ti); + struct sw_flow_mask *mask; + struct sw_flow *flow; + + /* Always called under ovs-mutex. 
*/ + list_for_each_entry(mask, &tbl->mask_list, list) { + flow = masked_flow_lookup(ti, match->key, mask); + if (flow && ovs_flow_cmp_unmasked_key(flow, match)) /* Found */ + return flow; + } + return NULL; +} + int ovs_flow_tbl_num_masks(const struct flow_table *table) { struct sw_flow_mask *mask; diff --git a/net/openvswitch/flow_table.h b/net/openvswitch/flow_table.h index ca8a5820f615..5918bff7f3f6 100644 --- a/net/openvswitch/flow_table.h +++ b/net/openvswitch/flow_table.h @@ -76,7 +76,8 @@ struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *, u32 *n_mask_hit); struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *, const struct sw_flow_key *); - +struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl, + struct sw_flow_match *match); bool ovs_flow_cmp_unmasked_key(const struct sw_flow *flow, struct sw_flow_match *match); diff --git a/net/openvswitch/vport-gre.c b/net/openvswitch/vport-gre.c index 35ec4fed09e2..f49148a07da2 100644 --- a/net/openvswitch/vport-gre.c +++ b/net/openvswitch/vport-gre.c @@ -110,6 +110,22 @@ static int gre_rcv(struct sk_buff *skb, return PACKET_RCVD; } +/* Called with rcu_read_lock and BH disabled. */ +static int gre_err(struct sk_buff *skb, u32 info, + const struct tnl_ptk_info *tpi) +{ + struct ovs_net *ovs_net; + struct vport *vport; + + ovs_net = net_generic(dev_net(skb->dev), ovs_net_id); + vport = rcu_dereference(ovs_net->vport_net.gre_vport); + + if (unlikely(!vport)) + return PACKET_REJECT; + else + return PACKET_RCVD; +} + static int gre_tnl_send(struct vport *vport, struct sk_buff *skb) { struct net *net = ovs_dp_get_net(vport->dp); @@ -186,6 +202,7 @@ error: static struct gre_cisco_protocol gre_protocol = { .handler = gre_rcv, + .err_handler = gre_err, .priority = 1, }; diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c index c39b583ace32..70c0be8d0121 100644 --- a/net/sched/cls_u32.c +++ b/net/sched/cls_u32.c @@ -38,6 +38,7 @@ #include <linux/errno.h> #include <linux/rtnetlink.h> #include <linux/skbuff.h> +#include <linux/bitmap.h> #include <net/netlink.h> #include <net/act_api.h> #include <net/pkt_cls.h> @@ -460,17 +461,25 @@ static int u32_delete(struct tcf_proto *tp, unsigned long arg) return 0; } +#define NR_U32_NODE (1<<12) static u32 gen_new_kid(struct tc_u_hnode *ht, u32 handle) { struct tc_u_knode *n; - unsigned int i = 0x7FF; + unsigned long i; + unsigned long *bitmap = kzalloc(BITS_TO_LONGS(NR_U32_NODE) * sizeof(unsigned long), + GFP_KERNEL); + if (!bitmap) + return handle | 0xFFF; for (n = ht->ht[TC_U32_HASH(handle)]; n; n = n->next) - if (i < TC_U32_NODE(n->handle)) - i = TC_U32_NODE(n->handle); - i++; + set_bit(TC_U32_NODE(n->handle), bitmap); - return handle | (i > 0xFFF ? 0xFFF : i); + i = find_next_zero_bit(bitmap, NR_U32_NODE, 0x800); + if (i >= NR_U32_NODE) + i = find_next_zero_bit(bitmap, NR_U32_NODE, 1); + + kfree(bitmap); + return handle | (i >= NR_U32_NODE ? 
0xFFF : i); } static const struct nla_policy u32_policy[TCA_U32_MAX + 1] = { diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c index dcb19592761e..12c7e01c2677 100644 --- a/net/sctp/sysctl.c +++ b/net/sctp/sysctl.c @@ -321,41 +321,40 @@ static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, int write, loff_t *ppos) { struct net *net = current->nsproxy->net_ns; - char tmp[8]; struct ctl_table tbl; - int ret; - int changed = 0; + bool changed = false; char *none = "none"; + char tmp[8]; + int ret; memset(&tbl, 0, sizeof(struct ctl_table)); if (write) { tbl.data = tmp; - tbl.maxlen = 8; + tbl.maxlen = sizeof(tmp); } else { tbl.data = net->sctp.sctp_hmac_alg ? : none; tbl.maxlen = strlen(tbl.data); } - ret = proc_dostring(&tbl, write, buffer, lenp, ppos); - if (write) { + ret = proc_dostring(&tbl, write, buffer, lenp, ppos); + if (write && ret == 0) { #ifdef CONFIG_CRYPTO_MD5 if (!strncmp(tmp, "md5", 3)) { net->sctp.sctp_hmac_alg = "md5"; - changed = 1; + changed = true; } #endif #ifdef CONFIG_CRYPTO_SHA1 if (!strncmp(tmp, "sha1", 4)) { net->sctp.sctp_hmac_alg = "sha1"; - changed = 1; + changed = true; } #endif if (!strncmp(tmp, "none", 4)) { net->sctp.sctp_hmac_alg = NULL; - changed = 1; + changed = true; } - if (!changed) ret = -EINVAL; } @@ -368,11 +367,10 @@ static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write, loff_t *ppos) { struct net *net = current->nsproxy->net_ns; - int new_value; - struct ctl_table tbl; unsigned int min = *(unsigned int *) ctl->extra1; unsigned int max = *(unsigned int *) ctl->extra2; - int ret; + struct ctl_table tbl; + int ret, new_value; memset(&tbl, 0, sizeof(struct ctl_table)); tbl.maxlen = sizeof(unsigned int); @@ -381,12 +379,15 @@ static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write, tbl.data = &new_value; else tbl.data = &net->sctp.rto_min; + ret = proc_dointvec(&tbl, write, buffer, lenp, ppos); - if (write) { - if (ret || new_value > max || new_value < min) + if (write && ret == 0) { + if (new_value > max || new_value < min) return -EINVAL; + net->sctp.rto_min = new_value; } + return ret; } @@ -395,11 +396,10 @@ static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write, loff_t *ppos) { struct net *net = current->nsproxy->net_ns; - int new_value; - struct ctl_table tbl; unsigned int min = *(unsigned int *) ctl->extra1; unsigned int max = *(unsigned int *) ctl->extra2; - int ret; + struct ctl_table tbl; + int ret, new_value; memset(&tbl, 0, sizeof(struct ctl_table)); tbl.maxlen = sizeof(unsigned int); @@ -408,12 +408,15 @@ static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write, tbl.data = &new_value; else tbl.data = &net->sctp.rto_max; + ret = proc_dointvec(&tbl, write, buffer, lenp, ppos); - if (write) { - if (ret || new_value > max || new_value < min) + if (write && ret == 0) { + if (new_value > max || new_value < min) return -EINVAL; + net->sctp.rto_max = new_value; } + return ret; } @@ -444,8 +447,7 @@ static int proc_sctp_do_auth(struct ctl_table *ctl, int write, tbl.data = &net->sctp.auth_enable; ret = proc_dointvec(&tbl, write, buffer, lenp, ppos); - - if (write) { + if (write && ret == 0) { struct sock *sk = net->sctp.ctl_sock; net->sctp.auth_enable = new_value; diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c index 85c64658bd0b..b6842fdb53d4 100644 --- a/net/sctp/ulpevent.c +++ b/net/sctp/ulpevent.c @@ -366,9 +366,10 @@ fail: * specification [SCTP] and any extensions for a list of possible * error formats. 
*/ -struct sctp_ulpevent *sctp_ulpevent_make_remote_error( - const struct sctp_association *asoc, struct sctp_chunk *chunk, - __u16 flags, gfp_t gfp) +struct sctp_ulpevent * +sctp_ulpevent_make_remote_error(const struct sctp_association *asoc, + struct sctp_chunk *chunk, __u16 flags, + gfp_t gfp) { struct sctp_ulpevent *event; struct sctp_remote_error *sre; @@ -387,8 +388,7 @@ struct sctp_ulpevent *sctp_ulpevent_make_remote_error( /* Copy the skb to a new skb with room for us to prepend * notification with. */ - skb = skb_copy_expand(chunk->skb, sizeof(struct sctp_remote_error), - 0, gfp); + skb = skb_copy_expand(chunk->skb, sizeof(*sre), 0, gfp); /* Pull off the rest of the cause TLV from the chunk. */ skb_pull(chunk->skb, elen); @@ -399,62 +399,21 @@ struct sctp_ulpevent *sctp_ulpevent_make_remote_error( event = sctp_skb2event(skb); sctp_ulpevent_init(event, MSG_NOTIFICATION, skb->truesize); - sre = (struct sctp_remote_error *) - skb_push(skb, sizeof(struct sctp_remote_error)); + sre = (struct sctp_remote_error *) skb_push(skb, sizeof(*sre)); /* Trim the buffer to the right length. */ - skb_trim(skb, sizeof(struct sctp_remote_error) + elen); + skb_trim(skb, sizeof(*sre) + elen); - /* Socket Extensions for SCTP - * 5.3.1.3 SCTP_REMOTE_ERROR - * - * sre_type: - * It should be SCTP_REMOTE_ERROR. - */ + /* RFC6458, Section 6.1.3. SCTP_REMOTE_ERROR */ + memset(sre, 0, sizeof(*sre)); sre->sre_type = SCTP_REMOTE_ERROR; - - /* - * Socket Extensions for SCTP - * 5.3.1.3 SCTP_REMOTE_ERROR - * - * sre_flags: 16 bits (unsigned integer) - * Currently unused. - */ sre->sre_flags = 0; - - /* Socket Extensions for SCTP - * 5.3.1.3 SCTP_REMOTE_ERROR - * - * sre_length: sizeof (__u32) - * - * This field is the total length of the notification data, - * including the notification header. - */ sre->sre_length = skb->len; - - /* Socket Extensions for SCTP - * 5.3.1.3 SCTP_REMOTE_ERROR - * - * sre_error: 16 bits (unsigned integer) - * This value represents one of the Operational Error causes defined in - * the SCTP specification, in network byte order. - */ sre->sre_error = cause; - - /* Socket Extensions for SCTP - * 5.3.1.3 SCTP_REMOTE_ERROR - * - * sre_assoc_id: sizeof (sctp_assoc_t) - * - * The association id field, holds the identifier for the association. - * All notifications for a given association have the same association - * identifier. For TCP style socket, this field is ignored. - */ sctp_ulpevent_set_owner(event, asoc); sre->sre_assoc_id = sctp_assoc2id(asoc); return event; - fail: return NULL; } @@ -899,7 +858,9 @@ __u16 sctp_ulpevent_get_notification_type(const struct sctp_ulpevent *event) return notification->sn_header.sn_type; } -/* Copy out the sndrcvinfo into a msghdr. */ +/* RFC6458, Section 5.3.2. SCTP Header Information Structure + * (SCTP_SNDRCV, DEPRECATED) + */ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, struct msghdr *msghdr) { @@ -908,74 +869,21 @@ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, if (sctp_ulpevent_is_notification(event)) return; - /* Sockets API Extensions for SCTP - * Section 5.2.2 SCTP Header Information Structure (SCTP_SNDRCV) - * - * sinfo_stream: 16 bits (unsigned integer) - * - * For recvmsg() the SCTP stack places the message's stream number in - * this value. - */ + memset(&sinfo, 0, sizeof(sinfo)); sinfo.sinfo_stream = event->stream; - /* sinfo_ssn: 16 bits (unsigned integer) - * - * For recvmsg() this value contains the stream sequence number that - * the remote endpoint placed in the DATA chunk. 
For fragmented - * messages this is the same number for all deliveries of the message - * (if more than one recvmsg() is needed to read the message). - */ sinfo.sinfo_ssn = event->ssn; - /* sinfo_ppid: 32 bits (unsigned integer) - * - * In recvmsg() this value is - * the same information that was passed by the upper layer in the peer - * application. Please note that byte order issues are NOT accounted - * for and this information is passed opaquely by the SCTP stack from - * one end to the other. - */ sinfo.sinfo_ppid = event->ppid; - /* sinfo_flags: 16 bits (unsigned integer) - * - * This field may contain any of the following flags and is composed of - * a bitwise OR of these values. - * - * recvmsg() flags: - * - * SCTP_UNORDERED - This flag is present when the message was sent - * non-ordered. - */ sinfo.sinfo_flags = event->flags; - /* sinfo_tsn: 32 bit (unsigned integer) - * - * For the receiving side, this field holds a TSN that was - * assigned to one of the SCTP Data Chunks. - */ sinfo.sinfo_tsn = event->tsn; - /* sinfo_cumtsn: 32 bit (unsigned integer) - * - * This field will hold the current cumulative TSN as - * known by the underlying SCTP layer. Note this field is - * ignored when sending and only valid for a receive - * operation when sinfo_flags are set to SCTP_UNORDERED. - */ sinfo.sinfo_cumtsn = event->cumtsn; - /* sinfo_assoc_id: sizeof (sctp_assoc_t) - * - * The association handle field, sinfo_assoc_id, holds the identifier - * for the association announced in the COMMUNICATION_UP notification. - * All notifications for a given association have the same identifier. - * Ignored for one-to-one style sockets. - */ sinfo.sinfo_assoc_id = sctp_assoc2id(event->asoc); - - /* context value that is set via SCTP_CONTEXT socket option. */ + /* Context value that is set via SCTP_CONTEXT socket option. */ sinfo.sinfo_context = event->asoc->default_rcv_context; - /* These fields are not used while receiving. */ sinfo.sinfo_timetolive = 0; put_cmsg(msghdr, IPPROTO_SCTP, SCTP_SNDRCV, - sizeof(struct sctp_sndrcvinfo), (void *)&sinfo); + sizeof(sinfo), &sinfo); } /* Do accounting for bytes received and hold a reference to the association diff --git a/net/sunrpc/auth.c b/net/sunrpc/auth.c index 247e973544bf..f77366717420 100644 --- a/net/sunrpc/auth.c +++ b/net/sunrpc/auth.c @@ -592,6 +592,7 @@ rpcauth_lookupcred(struct rpc_auth *auth, int flags) put_group_info(acred.group_info); return ret; } +EXPORT_SYMBOL_GPL(rpcauth_lookupcred); void rpcauth_init_cred(struct rpc_cred *cred, const struct auth_cred *acred, diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c index 26631679a1fa..55c6c9d3e1ce 100644 --- a/net/tipc/bcast.c +++ b/net/tipc/bcast.c @@ -559,6 +559,7 @@ receive: buf = node->bclink.deferred_head; node->bclink.deferred_head = buf->next; + buf->next = NULL; node->bclink.deferred_size--; goto receive; } diff --git a/net/tipc/msg.c b/net/tipc/msg.c index 8be6e94a1ca9..0a37a472c29f 100644 --- a/net/tipc/msg.c +++ b/net/tipc/msg.c @@ -101,9 +101,11 @@ int tipc_msg_build(struct tipc_msg *hdr, struct iovec const *msg_sect, } /* tipc_buf_append(): Append a buffer to the fragment list of another buffer - * Let first buffer become head buffer - * Returns 1 and sets *buf to headbuf if chain is complete, otherwise 0 - * Leaves headbuf pointer at NULL if failure + * @*headbuf: in: NULL for first frag, otherwise value returned from prev call + * out: set when successful non-complete reassembly, otherwise NULL + * @*buf: in: the buffer to append. 
Always defined + * out: head buf after sucessful complete reassembly, otherwise NULL + * Returns 1 when reassembly complete, otherwise 0 */ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) { @@ -122,6 +124,7 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) goto out_free; head = *headbuf = frag; skb_frag_list_init(head); + *buf = NULL; return 0; } if (!head) @@ -150,5 +153,7 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) out_free: pr_warn_ratelimited("Unable to build fragment list\n"); kfree_skb(*buf); + kfree_skb(*headbuf); + *buf = *headbuf = NULL; return 0; } diff --git a/net/wireless/core.h b/net/wireless/core.h index e9afbf10e756..7e3a3cef7df9 100644 --- a/net/wireless/core.h +++ b/net/wireless/core.h @@ -424,7 +424,7 @@ static inline unsigned int elapsed_jiffies_msecs(unsigned long start) if (end >= start) return jiffies_to_msecs(end - start); - return jiffies_to_msecs(end + (MAX_JIFFY_OFFSET - start) + 1); + return jiffies_to_msecs(end + (ULONG_MAX - start) + 1); } void diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c index ba4f1723c83a..6668daf69326 100644 --- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -1497,18 +1497,17 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *rdev, } CMD(start_p2p_device, START_P2P_DEVICE); CMD(set_mcast_rate, SET_MCAST_RATE); +#ifdef CONFIG_NL80211_TESTMODE + CMD(testmode_cmd, TESTMODE); +#endif if (state->split) { CMD(crit_proto_start, CRIT_PROTOCOL_START); CMD(crit_proto_stop, CRIT_PROTOCOL_STOP); if (rdev->wiphy.flags & WIPHY_FLAG_HAS_CHANNEL_SWITCH) CMD(channel_switch, CHANNEL_SWITCH); + CMD(set_qos_map, SET_QOS_MAP); } - CMD(set_qos_map, SET_QOS_MAP); - -#ifdef CONFIG_NL80211_TESTMODE - CMD(testmode_cmd, TESTMODE); -#endif - + /* add into the if now */ #undef CMD if (rdev->ops->connect || rdev->ops->auth) { diff --git a/net/wireless/reg.c b/net/wireless/reg.c index 558b0e3a02d8..1afdf45db38f 100644 --- a/net/wireless/reg.c +++ b/net/wireless/reg.c @@ -935,7 +935,7 @@ freq_reg_info_regd(struct wiphy *wiphy, u32 center_freq, if (!band_rule_found) band_rule_found = freq_in_rule_band(fr, center_freq); - bw_fits = reg_does_bw_fit(fr, center_freq, MHZ_TO_KHZ(5)); + bw_fits = reg_does_bw_fit(fr, center_freq, MHZ_TO_KHZ(20)); if (band_rule_found && bw_fits) return rr; @@ -1019,10 +1019,10 @@ static void chan_reg_rule_print_dbg(const struct ieee80211_regdomain *regd, } #endif -/* Find an ieee80211_reg_rule such that a 5MHz channel with frequency - * chan->center_freq fits there. - * If there is no such reg_rule, disable the channel, otherwise set the - * flags corresponding to the bandwidths allowed in the particular reg_rule +/* + * Note that right now we assume the desired channel bandwidth + * is always 20 MHz for each individual channel (HT40 uses 20 MHz + * per channel, the primary and the extension channel). 
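In the net/wireless/core.h hunk above, the wrap-around term changes from MAX_JIFFY_OFFSET to ULONG_MAX because jiffies wraps modulo ULONG_MAX + 1; with that modulus, end + (ULONG_MAX - start) + 1 is exactly the modular difference end - start. A standalone check of the arithmetic (userspace, illustrative only):

/* Standalone illustration of the wrap-around arithmetic used above. */
#include <assert.h>
#include <limits.h>

int main(void)
{
	unsigned long start = ULONG_MAX - 5;	/* just before the wrap */
	unsigned long end   = 10;		/* just after the wrap  */

	/* Formula from elapsed_jiffies_msecs() for the end < start case. */
	unsigned long elapsed = end + (ULONG_MAX - start) + 1;

	/* Unsigned subtraction wraps the same way, so both agree: 16 ticks. */
	assert(elapsed == 16);
	assert(elapsed == end - start);
	return 0;
}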
*/ static void handle_channel(struct wiphy *wiphy, enum nl80211_reg_initiator initiator, @@ -1083,12 +1083,8 @@ static void handle_channel(struct wiphy *wiphy, if (reg_rule->flags & NL80211_RRF_AUTO_BW) max_bandwidth_khz = reg_get_max_bandwidth(regd, reg_rule); - if (max_bandwidth_khz < MHZ_TO_KHZ(10)) - bw_flags = IEEE80211_CHAN_NO_10MHZ; - if (max_bandwidth_khz < MHZ_TO_KHZ(20)) - bw_flags |= IEEE80211_CHAN_NO_20MHZ; if (max_bandwidth_khz < MHZ_TO_KHZ(40)) - bw_flags |= IEEE80211_CHAN_NO_HT40; + bw_flags = IEEE80211_CHAN_NO_HT40; if (max_bandwidth_khz < MHZ_TO_KHZ(80)) bw_flags |= IEEE80211_CHAN_NO_80MHZ; if (max_bandwidth_khz < MHZ_TO_KHZ(160)) @@ -1522,12 +1518,8 @@ static void handle_channel_custom(struct wiphy *wiphy, if (reg_rule->flags & NL80211_RRF_AUTO_BW) max_bandwidth_khz = reg_get_max_bandwidth(regd, reg_rule); - if (max_bandwidth_khz < MHZ_TO_KHZ(10)) - bw_flags = IEEE80211_CHAN_NO_10MHZ; - if (max_bandwidth_khz < MHZ_TO_KHZ(20)) - bw_flags |= IEEE80211_CHAN_NO_20MHZ; if (max_bandwidth_khz < MHZ_TO_KHZ(40)) - bw_flags |= IEEE80211_CHAN_NO_HT40; + bw_flags = IEEE80211_CHAN_NO_HT40; if (max_bandwidth_khz < MHZ_TO_KHZ(80)) bw_flags |= IEEE80211_CHAN_NO_80MHZ; if (max_bandwidth_khz < MHZ_TO_KHZ(160)) diff --git a/samples/trace_events/trace-events-sample.h b/samples/trace_events/trace-events-sample.h index 6af373236d73..4b0113f73ee9 100644 --- a/samples/trace_events/trace-events-sample.h +++ b/samples/trace_events/trace-events-sample.h @@ -56,7 +56,8 @@ * struct: This defines the way the data will be stored in the ring buffer. * There are currently two types of elements. __field and __array. * a __field is broken up into (type, name). Where type can be any - * type but an array. + * primitive type (integer, long or pointer). __field_struct() can + * be any static complex data value (struct, union, but not an array). * For an array. there are three fields. (type, name, size). The * type of elements in the array, the name of the field and the size * of the array. diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl index 010b18ef4ea0..182be0f12407 100755 --- a/scripts/checkpatch.pl +++ b/scripts/checkpatch.pl @@ -3476,12 +3476,17 @@ sub process { } } -# unnecessary return in a void function? (a single leading tab, then return;) - if ($sline =~ /^\+\treturn\s*;\s*$/ && - $prevline =~ /^\+/) { +# unnecessary return in a void function +# at end-of-function, with the previous line a single leading tab, then return; +# and the line before that not a goto label target like "out:" + if ($sline =~ /^[ \+]}\s*$/ && + $prevline =~ /^\+\treturn\s*;\s*$/ && + $linenr >= 3 && + $lines[$linenr - 3] =~ /^[ +]/ && + $lines[$linenr - 3] !~ /^[ +]\s*$Ident\s*:/) { WARN("RETURN_VOID", - "void function return statements are not generally useful\n" . $herecurr); - } + "void function return statements are not generally useful\n" . 
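The trace-events-sample.h hunk above distinguishes __field() (primitive types only) from __field_struct() (a struct or union stored by value). A minimal, purely illustrative TRACE_EVENT using both, plus __array() (all names below are made up):

/* Illustrative sketch only: event and struct names are invented. */
TRACE_EVENT(sample_io_done,
	TP_PROTO(int cpu, struct io_stats *st, const char *dev),
	TP_ARGS(cpu, st, dev),
	TP_STRUCT__entry(
		__field(int, cpu)			/* primitive type        */
		__field_struct(struct io_stats, st)	/* whole struct by value */
		__array(char, dev, 16)			/* fixed-size array      */
	),
	TP_fast_assign(
		__entry->cpu = cpu;
		__entry->st  = *st;
		strlcpy(__entry->dev, dev, 16);
	),
	TP_printk("cpu=%d dev=%s", __entry->cpu, __entry->dev)
);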
$hereprev); + } # if statements using unnecessary parentheses - ie: if ((foo == bar)) if ($^V && $^V ge 5.10.0 && diff --git a/scripts/kernel-doc b/scripts/kernel-doc index da058da413e7..16a07cfa4d34 100755 --- a/scripts/kernel-doc +++ b/scripts/kernel-doc @@ -2073,6 +2073,7 @@ sub check_return_section { sub dump_function($$) { my $prototype = shift; my $file = shift; + my $noret = 0; $prototype =~ s/^static +//; $prototype =~ s/^extern +//; @@ -2086,7 +2087,7 @@ sub dump_function($$) { $prototype =~ s/__init_or_module +//; $prototype =~ s/__must_check +//; $prototype =~ s/__weak +//; - $prototype =~ s/^#\s*define\s+//; #ak added + my $define = $prototype =~ s/^#\s*define\s+//; #ak added $prototype =~ s/__attribute__\s*\(\([a-z,]*\)\)//; # Yes, this truly is vile. We are looking for: @@ -2105,7 +2106,15 @@ sub dump_function($$) { # - atomic_set (macro) # - pci_match_device, __copy_to_user (long return type) - if ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || + if ($define && $prototype =~ m/^()([a-zA-Z0-9_~:]+)\s+/) { + # This is an object-like macro, it has no return type and no parameter + # list. + # Function-like macros are not allowed to have spaces between + # declaration_name and opening parenthesis (notice the \s+). + $return_type = $1; + $declaration_name = $2; + $noret = 1; + } elsif ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || $prototype =~ m/^(\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || $prototype =~ m/^(\w+\s*\*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || $prototype =~ m/^(\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || @@ -2140,7 +2149,7 @@ sub dump_function($$) { # of warnings goes sufficiently down, the check is only performed in # verbose mode. # TODO: always perform the check. - if ($verbose) { + if ($verbose && !$noret) { check_return_section($file, $declaration_name, $return_type); } diff --git a/scripts/recordmcount.h b/scripts/recordmcount.h index 9d1421e63ff8..49b582a225b0 100644 --- a/scripts/recordmcount.h +++ b/scripts/recordmcount.h @@ -163,11 +163,11 @@ static int mcount_adjust = 0; static int MIPS_is_fake_mcount(Elf_Rel const *rp) { - static Elf_Addr old_r_offset; + static Elf_Addr old_r_offset = ~(Elf_Addr)0; Elf_Addr current_r_offset = _w(rp->r_offset); int is_fake; - is_fake = old_r_offset && + is_fake = (old_r_offset != ~(Elf_Addr)0) && (current_r_offset - old_r_offset == MIPS_FAKEMCOUNT_OFFSET); old_r_offset = current_r_offset; diff --git a/sound/firewire/bebob/bebob_maudio.c b/sound/firewire/bebob/bebob_maudio.c index 6af50eb80ea7..70faa3a32526 100644 --- a/sound/firewire/bebob/bebob_maudio.c +++ b/sound/firewire/bebob/bebob_maudio.c @@ -379,11 +379,11 @@ static int special_clk_ctl_put(struct snd_kcontrol *kctl, struct special_params *params = bebob->maudio_special_quirk; int err, id; - mutex_lock(&bebob->mutex); - id = uval->value.enumerated.item[0]; if (id >= ARRAY_SIZE(special_clk_labels)) - return 0; + return -EINVAL; + + mutex_lock(&bebob->mutex); err = avc_maudio_set_special_clk(bebob, id, params->dig_in_fmt, @@ -391,7 +391,10 @@ static int special_clk_ctl_put(struct snd_kcontrol *kctl, params->clk_lock); mutex_unlock(&bebob->mutex); - return err >= 0; + if (err >= 0) + err = 1; + + return err; } static struct snd_kcontrol_new special_clk_ctl = { .name = "Clock Source", @@ -434,8 +437,8 @@ static struct snd_kcontrol_new special_sync_ctl = { .get = special_sync_ctl_get, }; -/* Digital interface control for special firmware */ -static char *const special_dig_iface_labels[] = { +/* Digital input interface control for special 
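The kernel-doc change above lets dump_function() accept object-like macros: when whitespace (rather than an opening parenthesis) follows the #define name, the macro is recorded with no return type and the Return: section check is skipped. A doc comment that now parses cleanly might look like this (macro name is hypothetical):

/**
 * SAMPLE_BUF_SIZE - default size of the sample buffer, in bytes
 */
#define SAMPLE_BUF_SIZE	4096
/*
 * Because whitespace, not '(', follows the name, kernel-doc now treats this
 * as an object-like macro: no return type is recorded and no "Return:"
 * section is demanded, even in verbose mode.
 */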
firmware */ +static char *const special_dig_in_iface_labels[] = { "S/PDIF Optical", "S/PDIF Coaxial", "ADAT Optical" }; static int special_dig_in_iface_ctl_info(struct snd_kcontrol *kctl, @@ -443,13 +446,13 @@ static int special_dig_in_iface_ctl_info(struct snd_kcontrol *kctl, { einf->type = SNDRV_CTL_ELEM_TYPE_ENUMERATED; einf->count = 1; - einf->value.enumerated.items = ARRAY_SIZE(special_dig_iface_labels); + einf->value.enumerated.items = ARRAY_SIZE(special_dig_in_iface_labels); if (einf->value.enumerated.item >= einf->value.enumerated.items) einf->value.enumerated.item = einf->value.enumerated.items - 1; strcpy(einf->value.enumerated.name, - special_dig_iface_labels[einf->value.enumerated.item]); + special_dig_in_iface_labels[einf->value.enumerated.item]); return 0; } @@ -491,26 +494,36 @@ static int special_dig_in_iface_ctl_set(struct snd_kcontrol *kctl, unsigned int id, dig_in_fmt, dig_in_iface; int err; - mutex_lock(&bebob->mutex); - id = uval->value.enumerated.item[0]; + if (id >= ARRAY_SIZE(special_dig_in_iface_labels)) + return -EINVAL; /* decode user value */ dig_in_fmt = (id >> 1) & 0x01; dig_in_iface = id & 0x01; + mutex_lock(&bebob->mutex); + err = avc_maudio_set_special_clk(bebob, params->clk_src, dig_in_fmt, params->dig_out_fmt, params->clk_lock); - if ((err < 0) || (params->dig_in_fmt > 0)) /* ADAT */ + if (err < 0) + goto end; + + /* For ADAT, optical interface is only available. */ + if (params->dig_in_fmt > 0) { + err = 1; goto end; + } + /* For S/PDIF, optical/coaxial interfaces are selectable. */ err = avc_audio_set_selector(bebob->unit, 0x00, 0x04, dig_in_iface); if (err < 0) dev_err(&bebob->unit->device, "fail to set digital input interface: %d\n", err); + err = 1; end: special_stream_formation_set(bebob); mutex_unlock(&bebob->mutex); @@ -525,18 +538,22 @@ static struct snd_kcontrol_new special_dig_in_iface_ctl = { .put = special_dig_in_iface_ctl_set }; +/* Digital output interface control for special firmware */ +static char *const special_dig_out_iface_labels[] = { + "S/PDIF Optical and Coaxial", "ADAT Optical" +}; static int special_dig_out_iface_ctl_info(struct snd_kcontrol *kctl, struct snd_ctl_elem_info *einf) { einf->type = SNDRV_CTL_ELEM_TYPE_ENUMERATED; einf->count = 1; - einf->value.enumerated.items = ARRAY_SIZE(special_dig_iface_labels) - 1; + einf->value.enumerated.items = ARRAY_SIZE(special_dig_out_iface_labels); if (einf->value.enumerated.item >= einf->value.enumerated.items) einf->value.enumerated.item = einf->value.enumerated.items - 1; strcpy(einf->value.enumerated.name, - special_dig_iface_labels[einf->value.enumerated.item + 1]); + special_dig_out_iface_labels[einf->value.enumerated.item]); return 0; } @@ -558,16 +575,20 @@ static int special_dig_out_iface_ctl_set(struct snd_kcontrol *kctl, unsigned int id; int err; - mutex_lock(&bebob->mutex); - id = uval->value.enumerated.item[0]; + if (id >= ARRAY_SIZE(special_dig_out_iface_labels)) + return -EINVAL; + + mutex_lock(&bebob->mutex); err = avc_maudio_set_special_clk(bebob, params->clk_src, params->dig_in_fmt, id, params->clk_lock); - if (err >= 0) + if (err >= 0) { special_stream_formation_set(bebob); + err = 1; + } mutex_unlock(&bebob->mutex); return err; diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c index b684c6e4f301..dabe41975a9d 100644 --- a/sound/pci/hda/hda_auto_parser.c +++ b/sound/pci/hda/hda_auto_parser.c @@ -898,6 +898,7 @@ void snd_hda_pick_fixup(struct hda_codec *codec, if (!strcmp(codec->modelname, models->name)) { codec->fixup_id = models->id; 
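The bebob_maudio changes above and below bring the .put callbacks in line with the usual ALSA control convention: validate the user-supplied item before taking any locks, return -EINVAL for out-of-range values, propagate negative errors, and return 1 once the value has actually been applied so the core emits a change notification. A minimal skeleton of that shape (device struct and helper names are hypothetical):

/* Hypothetical skeleton of an enumerated-control .put following the
 * convention used above (0 = unchanged, 1 = changed, <0 = error). */
struct example_device {
	struct mutex mutex;
};
static const char *const example_labels[] = { "Option A", "Option B" };
static int example_apply_setting(struct example_device *dev, unsigned int id);

static int example_enum_ctl_put(struct snd_kcontrol *kctl,
				struct snd_ctl_elem_value *uval)
{
	struct example_device *dev = snd_kcontrol_chip(kctl);
	unsigned int id = uval->value.enumerated.item[0];
	int err;

	if (id >= ARRAY_SIZE(example_labels))	/* range-check before locking */
		return -EINVAL;

	mutex_lock(&dev->mutex);
	err = example_apply_setting(dev, id);	/* hypothetical helper */
	mutex_unlock(&dev->mutex);

	return err < 0 ? err : 1;	/* 1 tells the core "value changed" */
}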
codec->fixup_name = models->name; + codec->fixup_list = fixlist; codec->fixup_forced = 1; return; } diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c index 480bbddbd801..6df04d91c93c 100644 --- a/sound/pci/hda/hda_controller.c +++ b/sound/pci/hda/hda_controller.c @@ -193,7 +193,8 @@ azx_assign_device(struct azx *chip, struct snd_pcm_substream *substream) dsp_unlock(azx_dev); return azx_dev; } - if (!res) + if (!res || + (chip->driver_caps & AZX_DCAPS_REVERSE_ASSIGN)) res = azx_dev; } dsp_unlock(azx_dev); diff --git a/sound/pci/hda/hda_i915.c b/sound/pci/hda/hda_i915.c index e9e8a4a4a9a1..8b4940ba33d6 100644 --- a/sound/pci/hda/hda_i915.c +++ b/sound/pci/hda/hda_i915.c @@ -20,10 +20,20 @@ #include <linux/module.h> #include <sound/core.h> #include <drm/i915_powerwell.h> +#include "hda_priv.h" #include "hda_i915.h" +/* Intel HSW/BDW display HDA controller Extended Mode registers. + * EM4 (M value) and EM5 (N Value) are used to convert CDClk (Core Display + * Clock) to 24MHz BCLK: BCLK = CDCLK * M / N + * The values will be lost when the display power well is disabled. + */ +#define ICH6_REG_EM4 0x100c +#define ICH6_REG_EM5 0x1010 + static int (*get_power)(void); static int (*put_power)(void); +static int (*get_cdclk)(void); int hda_display_power(bool enable) { @@ -38,6 +48,43 @@ int hda_display_power(bool enable) return put_power(); } +void haswell_set_bclk(struct azx *chip) +{ + int cdclk_freq; + unsigned int bclk_m, bclk_n; + + if (!get_cdclk) + return; + + cdclk_freq = get_cdclk(); + switch (cdclk_freq) { + case 337500: + bclk_m = 16; + bclk_n = 225; + break; + + case 450000: + default: /* default CDCLK 450MHz */ + bclk_m = 4; + bclk_n = 75; + break; + + case 540000: + bclk_m = 4; + bclk_n = 90; + break; + + case 675000: + bclk_m = 8; + bclk_n = 225; + break; + } + + azx_writew(chip, EM4, bclk_m); + azx_writew(chip, EM5, bclk_n); +} + + int hda_i915_init(void) { int err = 0; @@ -55,6 +102,10 @@ int hda_i915_init(void) return -ENODEV; } + get_cdclk = symbol_request(i915_get_cdclk_freq); + if (!get_cdclk) /* may have abnormal BCLK and audio playback rate */ + pr_warn("hda-i915: get_cdclk symbol get fail\n"); + pr_debug("HDA driver get symbol successfully from i915 module\n"); return err; @@ -70,6 +121,10 @@ int hda_i915_exit(void) symbol_put(i915_release_power_well); put_power = NULL; } + if (get_cdclk) { + symbol_put(i915_get_cdclk_freq); + get_cdclk = NULL; + } return 0; } diff --git a/sound/pci/hda/hda_i915.h b/sound/pci/hda/hda_i915.h index bfd835f8f1aa..e6072c627583 100644 --- a/sound/pci/hda/hda_i915.h +++ b/sound/pci/hda/hda_i915.h @@ -18,10 +18,12 @@ #ifdef CONFIG_SND_HDA_I915 int hda_display_power(bool enable); +void haswell_set_bclk(struct azx *chip); int hda_i915_init(void); int hda_i915_exit(void); #else static inline int hda_display_power(bool enable) { return 0; } +static inline void haswell_set_bclk(struct azx *chip) { return; } static inline int hda_i915_init(void) { return -ENODEV; diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c index 23fd6b9aecca..83cd19017cf3 100644 --- a/sound/pci/hda/hda_intel.c +++ b/sound/pci/hda/hda_intel.c @@ -62,9 +62,9 @@ #include <linux/vga_switcheroo.h> #include <linux/firmware.h> #include "hda_codec.h" -#include "hda_i915.h" #include "hda_controller.h" #include "hda_priv.h" +#include "hda_i915.h" static int index[SNDRV_CARDS] = SNDRV_DEFAULT_IDX; @@ -227,7 +227,7 @@ enum { /* quirks for Intel PCH */ #define AZX_DCAPS_INTEL_PCH_NOPM \ (AZX_DCAPS_SCH_SNOOP | AZX_DCAPS_BUFSIZE | \ - 
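The hda_i915.c hunk above programs the display controller's EM4/EM5 (M/N) dividers so that BCLK = CDCLK * M / N always comes out at 24 MHz regardless of the current CDCLK. The table can be sanity-checked with a few lines of standalone arithmetic (values in kHz, illustrative only):

/* Standalone check of the CDCLK -> 24 MHz BCLK divider table above. */
#include <assert.h>

int main(void)
{
	/* {CDCLK in kHz, M, N} as written to EM4/EM5 in haswell_set_bclk() */
	static const unsigned int tbl[][3] = {
		{ 337500, 16, 225 },
		{ 450000,  4,  75 },
		{ 540000,  4,  90 },
		{ 675000,  8, 225 },
	};
	for (int i = 0; i < 4; i++)
		assert(tbl[i][0] * tbl[i][1] / tbl[i][2] == 24000); /* 24 MHz */
	return 0;
}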
AZX_DCAPS_COUNT_LPIB_DELAY) + AZX_DCAPS_COUNT_LPIB_DELAY | AZX_DCAPS_REVERSE_ASSIGN) #define AZX_DCAPS_INTEL_PCH \ (AZX_DCAPS_INTEL_PCH_NOPM | AZX_DCAPS_PM_RUNTIME) @@ -288,6 +288,11 @@ static char *driver_short_names[] = { [AZX_DRIVER_GENERIC] = "HD-Audio Generic", }; +struct hda_intel { + struct azx chip; +}; + + #ifdef CONFIG_X86 static void __mark_pages_wc(struct azx *chip, struct snd_dma_buffer *dmab, bool on) { @@ -591,7 +596,7 @@ static int azx_suspend(struct device *dev) struct azx *chip = card->private_data; struct azx_pcm *p; - if (chip->disabled) + if (chip->disabled || chip->init_failed) return 0; snd_power_change_state(card, SNDRV_CTL_POWER_D3hot); @@ -606,6 +611,7 @@ static int azx_suspend(struct device *dev) free_irq(chip->irq, chip); chip->irq = -1; } + if (chip->msi) pci_disable_msi(chip->pci); pci_disable_device(pci); @@ -622,11 +628,13 @@ static int azx_resume(struct device *dev) struct snd_card *card = dev_get_drvdata(dev); struct azx *chip = card->private_data; - if (chip->disabled) + if (chip->disabled || chip->init_failed) return 0; - if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) + if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) { hda_display_power(true); + haswell_set_bclk(chip); + } pci_set_power_state(pci, PCI_D0); pci_restore_state(pci); if (pci_enable_device(pci) < 0) { @@ -657,7 +665,7 @@ static int azx_runtime_suspend(struct device *dev) struct snd_card *card = dev_get_drvdata(dev); struct azx *chip = card->private_data; - if (chip->disabled) + if (chip->disabled || chip->init_failed) return 0; if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME)) @@ -672,6 +680,7 @@ static int azx_runtime_suspend(struct device *dev) azx_clear_irq_pending(chip); if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) hda_display_power(false); + return 0; } @@ -683,14 +692,16 @@ static int azx_runtime_resume(struct device *dev) struct hda_codec *codec; int status; - if (chip->disabled) + if (chip->disabled || chip->init_failed) return 0; if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME)) return 0; - if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) + if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) { hda_display_power(true); + haswell_set_bclk(chip); + } /* Read STATESTS before controller reset */ status = azx_readw(chip, STATESTS); @@ -718,7 +729,7 @@ static int azx_runtime_idle(struct device *dev) struct snd_card *card = dev_get_drvdata(dev); struct azx *chip = card->private_data; - if (chip->disabled) + if (chip->disabled || chip->init_failed) return 0; if (!power_save_controller || @@ -883,6 +894,8 @@ static int register_vga_switcheroo(struct azx *chip) static int azx_free(struct azx *chip) { struct pci_dev *pci = chip->pci; + struct hda_intel *hda = container_of(chip, struct hda_intel, chip); + int i; if ((chip->driver_caps & AZX_DCAPS_PM_RUNTIME) @@ -930,7 +943,7 @@ static int azx_free(struct azx *chip) hda_display_power(false); hda_i915_exit(); } - kfree(chip); + kfree(hda); return 0; } @@ -1174,6 +1187,7 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci, static struct snd_device_ops ops = { .dev_free = azx_dev_free, }; + struct hda_intel *hda; struct azx *chip; int err; @@ -1183,13 +1197,14 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci, if (err < 0) return err; - chip = kzalloc(sizeof(*chip), GFP_KERNEL); - if (!chip) { - dev_err(card->dev, "Cannot allocate chip\n"); + hda = kzalloc(sizeof(*hda), GFP_KERNEL); + if (!hda) { + dev_err(card->dev, "Cannot allocate hda\n"); pci_disable_device(pci); return -ENOMEM; } + chip = &hda->chip; 
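hda_intel.c above starts embedding struct azx in a driver-private struct hda_intel: the rest of the driver keeps passing the inner &hda->chip pointer around, and azx_free() recovers the outer allocation with container_of() so the whole object is released at once. The same embed-and-recover pattern in isolation (hypothetical userspace names):

/* The embed-and-recover pattern used above, with made-up names. */
#include <stddef.h>
#include <stdlib.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct chip    { int irq; };
struct wrapper { struct chip chip; int private_stuff; };

int main(void)
{
	struct wrapper *w = calloc(1, sizeof(*w));
	struct chip *c = &w->chip;	/* only the inner pointer travels */

	/* ... later, on teardown, recover the outer allocation ... */
	struct wrapper *back = container_of(c, struct wrapper, chip);
	free(back);			/* frees the whole wrapper */
	return 0;
}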
spin_lock_init(&chip->reg_lock); mutex_init(&chip->open_mutex); chip->card = card; @@ -1375,6 +1390,10 @@ static int azx_first_init(struct azx *chip) /* initialize chip */ azx_init_pci(chip); + + if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) + haswell_set_bclk(chip); + azx_init_chip(chip, (probe_only[dev] & 2) == 0); /* codec detection */ diff --git a/sound/pci/hda/hda_local.h b/sound/pci/hda/hda_local.h index ebd1fa6f015c..4e2d4863daa1 100644 --- a/sound/pci/hda/hda_local.h +++ b/sound/pci/hda/hda_local.h @@ -417,6 +417,27 @@ struct snd_hda_pin_quirk { int value; /* quirk value */ }; +#ifdef CONFIG_SND_DEBUG_VERBOSE + +#define SND_HDA_PIN_QUIRK(_codec, _subvendor, _name, _value, _pins...) \ + { .codec = _codec,\ + .subvendor = _subvendor,\ + .name = _name,\ + .value = _value,\ + .pins = (const struct hda_pintbl[]) { _pins } \ + } +#else + +#define SND_HDA_PIN_QUIRK(_codec, _subvendor, _name, _value, _pins...) \ + { .codec = _codec,\ + .subvendor = _subvendor,\ + .value = _value,\ + .pins = (const struct hda_pintbl[]) { _pins } \ + } + +#endif + + /* fixup types */ enum { HDA_FIXUP_INVALID, diff --git a/sound/pci/hda/hda_priv.h b/sound/pci/hda/hda_priv.h index 4a7cb01fa912..e9d1a5762a55 100644 --- a/sound/pci/hda/hda_priv.h +++ b/sound/pci/hda/hda_priv.h @@ -186,6 +186,7 @@ enum { SDI0, SDI1, SDI2, SDI3, SDO0, SDO1, SDO2, SDO3 }; #define AZX_DCAPS_BUFSIZE (1 << 21) /* no buffer size alignment */ #define AZX_DCAPS_ALIGN_BUFSIZE (1 << 22) /* buffer size alignment */ #define AZX_DCAPS_4K_BDLE_BOUNDARY (1 << 23) /* BDLE in 4k boundary */ +#define AZX_DCAPS_REVERSE_ASSIGN (1 << 24) /* Assign devices in reverse order */ #define AZX_DCAPS_COUNT_LPIB_DELAY (1 << 25) /* Take LPIB as delay */ #define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */ #define AZX_DCAPS_I915_POWERWELL (1 << 27) /* HSW i915 powerwell support */ diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c index a366ba9293a8..358414da6418 100644 --- a/sound/pci/hda/hda_tegra.c +++ b/sound/pci/hda/hda_tegra.c @@ -236,6 +236,7 @@ disable_hda: return rc; } +#ifdef CONFIG_PM_SLEEP static void hda_tegra_disable_clocks(struct hda_tegra *data) { clk_disable_unprepare(data->hda2hdmi_clk); @@ -243,7 +244,6 @@ static void hda_tegra_disable_clocks(struct hda_tegra *data) clk_disable_unprepare(data->hda_clk); } -#ifdef CONFIG_PM_SLEEP /* * power management */ diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c index 3e4417b0ddbe..ba4ca52072ff 100644 --- a/sound/pci/hda/patch_hdmi.c +++ b/sound/pci/hda/patch_hdmi.c @@ -2204,7 +2204,7 @@ static int generic_hdmi_resume(struct hda_codec *codec) struct hdmi_spec *spec = codec->spec; int pin_idx; - generic_hdmi_init(codec); + codec->patch_ops.init(codec); snd_hda_codec_resume_amp(codec); snd_hda_codec_resume_cache(codec); @@ -3337,6 +3337,7 @@ static const struct hda_codec_preset snd_hda_preset_hdmi[] = { { .id = 0x10de0051, .name = "GPU 51 HDMI/DP", .patch = patch_nvhdmi }, { .id = 0x10de0060, .name = "GPU 60 HDMI/DP", .patch = patch_nvhdmi }, { .id = 0x10de0067, .name = "MCP67 HDMI", .patch = patch_nvhdmi_2ch }, +{ .id = 0x10de0070, .name = "GPU 70 HDMI/DP", .patch = patch_nvhdmi }, { .id = 0x10de0071, .name = "GPU 71 HDMI/DP", .patch = patch_nvhdmi }, { .id = 0x10de8001, .name = "MCP73 HDMI", .patch = patch_nvhdmi_2ch }, { .id = 0x11069f80, .name = "VX900 HDMI/DP", .patch = patch_via_hdmi }, @@ -3394,6 +3395,7 @@ MODULE_ALIAS("snd-hda-codec-id:10de0044"); MODULE_ALIAS("snd-hda-codec-id:10de0051"); MODULE_ALIAS("snd-hda-codec-id:10de0060"); 
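The SND_HDA_PIN_QUIRK() macro introduced in the hda_local.h hunk above collapses the verbose struct snd_hda_pin_quirk initializers that are converted in the tables below; the .name field is only compiled in under CONFIG_SND_DEBUG_VERBOSE. In a verbose build, the first converted Dell entry expands roughly to:

/* Approximate expansion of one SND_HDA_PIN_QUIRK() entry (verbose build). */
{
	.codec     = 0x10ec0255,
	.subvendor = 0x1028,
	.name      = "Dell",
	.value     = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
	.pins      = (const struct hda_pintbl[]) {
		{0x12, 0x90a60140}, {0x14, 0x90170110}, /* ... */ {0x21, 0x02211020},
	},
},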
MODULE_ALIAS("snd-hda-codec-id:10de0067"); +MODULE_ALIAS("snd-hda-codec-id:10de0070"); MODULE_ALIAS("snd-hda-codec-id:10de0071"); MODULE_ALIAS("snd-hda-codec-id:10de8001"); MODULE_ALIAS("snd-hda-codec-id:11069f80"); diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index af76995fa966..b60824e90408 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -4880,6 +4880,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x17aa, 0x2208, "Thinkpad T431s", ALC269_FIXUP_LENOVO_DOCK), SND_PCI_QUIRK(0x17aa, 0x220c, "Thinkpad T440s", ALC292_FIXUP_TPT440_DOCK), SND_PCI_QUIRK(0x17aa, 0x220e, "Thinkpad T440p", ALC292_FIXUP_TPT440_DOCK), + SND_PCI_QUIRK(0x17aa, 0x2210, "Thinkpad T540p", ALC292_FIXUP_TPT440_DOCK), SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), @@ -4962,228 +4963,141 @@ static const struct hda_model_fixup alc269_fixup_models[] = { }; static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = { - { - .codec = 0x10ec0255, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x90a60140}, - {0x14, 0x90170110}, - {0x17, 0x40000000}, - {0x18, 0x411111f0}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x40700001}, - {0x1e, 0x411111f0}, - {0x21, 0x02211020}, - }, - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, - }, - { - .codec = 0x10ec0255, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x90a60160}, - {0x14, 0x90170120}, - {0x17, 0x40000000}, - {0x18, 0x411111f0}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x40700001}, - {0x1e, 0x411111f0}, - {0x21, 0x02211030}, - }, - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, - }, - { - .codec = 0x10ec0255, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x90a60160}, - {0x14, 0x90170120}, - {0x17, 0x90170140}, - {0x18, 0x40000000}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x41163b05}, - {0x1e, 0x411111f0}, - {0x21, 0x0321102f}, - }, - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, - }, - { - .codec = 0x10ec0255, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x90a60160}, - {0x14, 0x90170130}, - {0x17, 0x40000000}, - {0x18, 0x411111f0}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x40700001}, - {0x1e, 0x411111f0}, - {0x21, 0x02211040}, - }, - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, - }, - { - .codec = 0x10ec0255, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x90a60160}, - {0x14, 0x90170140}, - {0x17, 0x40000000}, - {0x18, 0x411111f0}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x40700001}, - {0x1e, 0x411111f0}, - {0x21, 0x02211050}, - }, - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, - }, - { - .codec = 0x10ec0255, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x90a60170}, - {0x14, 0x90170120}, - {0x17, 0x40000000}, - {0x18, 0x411111f0}, - {0x19, 
0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x40700001}, - {0x1e, 0x411111f0}, - {0x21, 0x02211030}, - }, - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, - }, - { - .codec = 0x10ec0255, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x90a60170}, - {0x14, 0x90170130}, - {0x17, 0x40000000}, - {0x18, 0x411111f0}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x40700001}, - {0x1e, 0x411111f0}, - {0x21, 0x02211040}, - }, - .value = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, - }, - { - .codec = 0x10ec0283, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x90a60130}, - {0x14, 0x90170110}, - {0x17, 0x40020008}, - {0x18, 0x411111f0}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x40e00001}, - {0x1e, 0x411111f0}, - {0x21, 0x0321101f}, - }, - .value = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, - }, - { - .codec = 0x10ec0283, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x90a60160}, - {0x14, 0x90170120}, - {0x17, 0x40000000}, - {0x18, 0x411111f0}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x40700001}, - {0x1e, 0x411111f0}, - {0x21, 0x02211030}, - }, - .value = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, - }, - { - .codec = 0x10ec0292, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x90a60140}, - {0x13, 0x411111f0}, - {0x14, 0x90170110}, - {0x15, 0x0221401f}, - {0x16, 0x411111f0}, - {0x18, 0x411111f0}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x40700001}, - {0x1e, 0x411111f0}, - }, - .value = ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, - }, - { - .codec = 0x10ec0293, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x40000000}, - {0x13, 0x90a60140}, - {0x14, 0x90170110}, - {0x15, 0x0221401f}, - {0x16, 0x21014020}, - {0x18, 0x411111f0}, - {0x19, 0x21a19030}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x40700001}, - {0x1e, 0x411111f0}, - }, - .value = ALC293_FIXUP_DELL1_MIC_NO_PRESENCE, - }, + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x90a60140}, + {0x14, 0x90170110}, + {0x17, 0x40000000}, + {0x18, 0x411111f0}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40700001}, + {0x1e, 0x411111f0}, + {0x21, 0x02211020}), + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x90a60160}, + {0x14, 0x90170120}, + {0x17, 0x40000000}, + {0x18, 0x411111f0}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40700001}, + {0x1e, 0x411111f0}, + {0x21, 0x02211030}), + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x90a60160}, + {0x14, 0x90170120}, + {0x17, 0x90170140}, + {0x18, 0x40000000}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x41163b05}, + {0x1e, 0x411111f0}, + {0x21, 0x0321102f}), + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x90a60160}, + {0x14, 0x90170130}, + {0x17, 0x40000000}, + {0x18, 0x411111f0}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40700001}, + {0x1e, 0x411111f0}, + 
{0x21, 0x02211040}), + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x90a60160}, + {0x14, 0x90170140}, + {0x17, 0x40000000}, + {0x18, 0x411111f0}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40700001}, + {0x1e, 0x411111f0}, + {0x21, 0x02211050}), + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x90a60170}, + {0x14, 0x90170120}, + {0x17, 0x40000000}, + {0x18, 0x411111f0}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40700001}, + {0x1e, 0x411111f0}, + {0x21, 0x02211030}), + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x90a60170}, + {0x14, 0x90170130}, + {0x17, 0x40000000}, + {0x18, 0x411111f0}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40700001}, + {0x1e, 0x411111f0}, + {0x21, 0x02211040}), + SND_HDA_PIN_QUIRK(0x10ec0283, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x90a60130}, + {0x14, 0x90170110}, + {0x17, 0x40020008}, + {0x18, 0x411111f0}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40e00001}, + {0x1e, 0x411111f0}, + {0x21, 0x0321101f}), + SND_HDA_PIN_QUIRK(0x10ec0283, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x90a60160}, + {0x14, 0x90170120}, + {0x17, 0x40000000}, + {0x18, 0x411111f0}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40700001}, + {0x1e, 0x411111f0}, + {0x21, 0x02211030}), + SND_HDA_PIN_QUIRK(0x10ec0292, 0x1028, "Dell", ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, + {0x12, 0x90a60140}, + {0x13, 0x411111f0}, + {0x14, 0x90170110}, + {0x15, 0x0221401f}, + {0x16, 0x411111f0}, + {0x18, 0x411111f0}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40700001}, + {0x1e, 0x411111f0}), + SND_HDA_PIN_QUIRK(0x10ec0293, 0x1028, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x40000000}, + {0x13, 0x90a60140}, + {0x14, 0x90170110}, + {0x15, 0x0221401f}, + {0x16, 0x21014020}, + {0x18, 0x411111f0}, + {0x19, 0x21a19030}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40700001}, + {0x1e, 0x411111f0}), + SND_HDA_PIN_QUIRK(0x10ec0293, 0x1028, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x40000000}, + {0x13, 0x90a60140}, + {0x14, 0x90170110}, + {0x15, 0x0221401f}, + {0x16, 0x411111f0}, + {0x18, 0x411111f0}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40700001}, + {0x1e, 0x411111f0}), {} }; @@ -6039,90 +5953,66 @@ static const struct hda_model_fixup alc662_fixup_models[] = { }; static const struct snd_hda_pin_quirk alc662_pin_fixup_tbl[] = { - { - .codec = 0x10ec0668, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x99a30130}, - {0x14, 0x90170110}, - {0x15, 0x0321101f}, - {0x16, 0x03011020}, - {0x18, 0x40000008}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x41000001}, - {0x1e, 0x411111f0}, - {0x1f, 0x411111f0}, - }, - .value = ALC668_FIXUP_AUTO_MUTE, - }, - { - .codec = 0x10ec0668, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x99a30140}, - {0x14, 0x90170110}, - {0x15, 0x0321101f}, - {0x16, 0x03011020}, - {0x18, 0x40000008}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x41000001}, - {0x1e, 0x411111f0}, - {0x1f, 0x411111f0}, - }, - .value = ALC668_FIXUP_AUTO_MUTE, 
- }, - { - .codec = 0x10ec0668, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x99a30150}, - {0x14, 0x90170110}, - {0x15, 0x0321101f}, - {0x16, 0x03011020}, - {0x18, 0x40000008}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x41000001}, - {0x1e, 0x411111f0}, - {0x1f, 0x411111f0}, - }, - .value = ALC668_FIXUP_AUTO_MUTE, - }, - { - .codec = 0x10ec0668, - .subvendor = 0x1028, -#ifdef CONFIG_SND_DEBUG_VERBOSE - .name = "Dell", -#endif - .pins = (const struct hda_pintbl[]) { - {0x12, 0x411111f0}, - {0x14, 0x90170110}, - {0x15, 0x0321101f}, - {0x16, 0x03011020}, - {0x18, 0x40000008}, - {0x19, 0x411111f0}, - {0x1a, 0x411111f0}, - {0x1b, 0x411111f0}, - {0x1d, 0x41000001}, - {0x1e, 0x411111f0}, - {0x1f, 0x411111f0}, - }, - .value = ALC668_FIXUP_AUTO_MUTE, - }, + SND_HDA_PIN_QUIRK(0x10ec0668, 0x1028, "Dell", ALC668_FIXUP_AUTO_MUTE, + {0x12, 0x99a30130}, + {0x14, 0x90170110}, + {0x15, 0x0321101f}, + {0x16, 0x03011020}, + {0x18, 0x40000008}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x41000001}, + {0x1e, 0x411111f0}, + {0x1f, 0x411111f0}), + SND_HDA_PIN_QUIRK(0x10ec0668, 0x1028, "Dell", ALC668_FIXUP_AUTO_MUTE, + {0x12, 0x99a30140}, + {0x14, 0x90170110}, + {0x15, 0x0321101f}, + {0x16, 0x03011020}, + {0x18, 0x40000008}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x41000001}, + {0x1e, 0x411111f0}, + {0x1f, 0x411111f0}), + SND_HDA_PIN_QUIRK(0x10ec0668, 0x1028, "Dell", ALC668_FIXUP_AUTO_MUTE, + {0x12, 0x99a30150}, + {0x14, 0x90170110}, + {0x15, 0x0321101f}, + {0x16, 0x03011020}, + {0x18, 0x40000008}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x41000001}, + {0x1e, 0x411111f0}, + {0x1f, 0x411111f0}), + SND_HDA_PIN_QUIRK(0x10ec0668, 0x1028, "Dell", ALC668_FIXUP_AUTO_MUTE, + {0x12, 0x411111f0}, + {0x14, 0x90170110}, + {0x15, 0x0321101f}, + {0x16, 0x03011020}, + {0x18, 0x40000008}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x41000001}, + {0x1e, 0x411111f0}, + {0x1f, 0x411111f0}), + SND_HDA_PIN_QUIRK(0x10ec0668, 0x1028, "Dell XPS 15", ALC668_FIXUP_AUTO_MUTE, + {0x12, 0x90a60130}, + {0x14, 0x90170110}, + {0x15, 0x0321101f}, + {0x16, 0x40000000}, + {0x18, 0x411111f0}, + {0x19, 0x411111f0}, + {0x1a, 0x411111f0}, + {0x1b, 0x411111f0}, + {0x1d, 0x40d6832d}, + {0x1e, 0x411111f0}, + {0x1f, 0x411111f0}), {} }; diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c index 7f40a150899c..3744ea4e843d 100644 --- a/sound/pci/hda/patch_sigmatel.c +++ b/sound/pci/hda/patch_sigmatel.c @@ -122,6 +122,12 @@ enum { }; enum { + STAC_92HD95_HP_LED, + STAC_92HD95_HP_BASS, + STAC_92HD95_MODELS +}; + +enum { STAC_925x_REF, STAC_M1, STAC_M1_2, @@ -4128,6 +4134,48 @@ static const struct snd_pci_quirk stac9205_fixup_tbl[] = { {} /* terminator */ }; +static void stac92hd95_fixup_hp_led(struct hda_codec *codec, + const struct hda_fixup *fix, int action) +{ + struct sigmatel_spec *spec = codec->spec; + + if (action != HDA_FIXUP_ACT_PRE_PROBE) + return; + + if (find_mute_led_cfg(codec, spec->default_polarity)) + codec_dbg(codec, "mute LED gpio %d polarity %d\n", + spec->gpio_led, + spec->gpio_led_polarity); +} + +static const struct hda_fixup stac92hd95_fixups[] = { + [STAC_92HD95_HP_LED] = { + .type = HDA_FIXUP_FUNC, + .v.func = stac92hd95_fixup_hp_led, + }, + [STAC_92HD95_HP_BASS] = { + .type = HDA_FIXUP_VERBS, + .v.verbs = (const struct hda_verb[]) { + {0x1a, 0x795, 0x00}, /* 
HPF to 100Hz */ + {} + }, + .chained = true, + .chain_id = STAC_92HD95_HP_LED, + }, +}; + +static const struct snd_pci_quirk stac92hd95_fixup_tbl[] = { + SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x1911, "HP Spectre 13", STAC_92HD95_HP_BASS), + {} /* terminator */ +}; + +static const struct hda_model_fixup stac92hd95_models[] = { + { .id = STAC_92HD95_HP_LED, .name = "hp-led" }, + { .id = STAC_92HD95_HP_BASS, .name = "hp-bass" }, + {} +}; + + static int stac_parse_auto_config(struct hda_codec *codec) { struct sigmatel_spec *spec = codec->spec; @@ -4580,10 +4628,16 @@ static int patch_stac92hd95(struct hda_codec *codec) spec->gen.beep_nid = 0x19; /* digital beep */ spec->pwr_nids = stac92hd95_pwr_nids; spec->num_pwrs = ARRAY_SIZE(stac92hd95_pwr_nids); - spec->default_polarity = -1; /* no default cfg */ + spec->default_polarity = 0; codec->patch_ops = stac_patch_ops; + snd_hda_pick_fixup(codec, stac92hd95_models, stac92hd95_fixup_tbl, + stac92hd95_fixups); + snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_PRE_PROBE); + + stac_setup_gpio(codec); + err = stac_parse_auto_config(codec); if (err < 0) { stac_free(codec); @@ -4592,6 +4646,8 @@ static int patch_stac92hd95(struct hda_codec *codec) codec->proc_widget_hook = stac92hd_proc_hook; + snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_PROBE); + return 0; } diff --git a/sound/soc/fsl/imx-pcm-dma.c b/sound/soc/fsl/imx-pcm-dma.c index 0849b7b83f0a..0db94f492e97 100644 --- a/sound/soc/fsl/imx-pcm-dma.c +++ b/sound/soc/fsl/imx-pcm-dma.c @@ -59,7 +59,6 @@ int imx_pcm_dma_init(struct platform_device *pdev) { return devm_snd_dmaengine_pcm_register(&pdev->dev, &imx_dmaengine_pcm_config, - SND_DMAENGINE_PCM_FLAG_NO_RESIDUE | SND_DMAENGINE_PCM_FLAG_COMPAT); } EXPORT_SYMBOL_GPL(imx_pcm_dma_init); diff --git a/sound/usb/card.c b/sound/usb/card.c index c3b5b7dca1c3..a09e5f3519e3 100644 --- a/sound/usb/card.c +++ b/sound/usb/card.c @@ -307,6 +307,11 @@ static int snd_usb_create_streams(struct snd_usb_audio *chip, int ctrlif) static int snd_usb_audio_free(struct snd_usb_audio *chip) { + struct list_head *p, *n; + + list_for_each_safe(p, n, &chip->ep_list) + snd_usb_endpoint_free(p); + mutex_destroy(&chip->mutex); kfree(chip); return 0; @@ -585,7 +590,7 @@ static void snd_usb_audio_disconnect(struct usb_device *dev, struct snd_usb_audio *chip) { struct snd_card *card; - struct list_head *p, *n; + struct list_head *p; if (chip == (void *)-1L) return; @@ -598,14 +603,16 @@ static void snd_usb_audio_disconnect(struct usb_device *dev, mutex_lock(®ister_mutex); chip->num_interfaces--; if (chip->num_interfaces <= 0) { + struct snd_usb_endpoint *ep; + snd_card_disconnect(card); /* release the pcm resources */ list_for_each(p, &chip->pcm_list) { snd_usb_stream_disconnect(p); } /* release the endpoint resources */ - list_for_each_safe(p, n, &chip->ep_list) { - snd_usb_endpoint_free(p); + list_for_each_entry(ep, &chip->ep_list, list) { + snd_usb_endpoint_release(ep); } /* release the midi resources */ list_for_each(p, &chip->midi_list) { diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c index 289f582c9130..114e3e7ff511 100644 --- a/sound/usb/endpoint.c +++ b/sound/usb/endpoint.c @@ -987,19 +987,30 @@ void snd_usb_endpoint_deactivate(struct snd_usb_endpoint *ep) } /** + * snd_usb_endpoint_release: Tear down an snd_usb_endpoint + * + * @ep: the endpoint to release + * + * This function does not care for the endpoint's use count but will tear + * down all the streaming URBs immediately. 
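The sound/usb hunks above split endpoint teardown in two: snd_usb_endpoint_release() kills the streaming URBs as soon as the device is disconnected, while snd_usb_endpoint_free() merely frees the memory when the whole card goes away, so pointers still held between disconnect and final free never reference released storage. The same two-phase shape as a generic, hypothetical sketch:

/* Hypothetical two-phase teardown mirroring the split above: I/O resources
 * go away at disconnect, the object itself only when the card is freed. */
#include <stdlib.h>

struct endpoint { void *urbs; };

static void endpoint_release(struct endpoint *ep)	/* at disconnect */
{
	free(ep->urbs);					/* tear down I/O now */
	ep->urbs = NULL;
}

static void endpoint_free(struct endpoint *ep)		/* at card free */
{
	/* release() already ran at disconnect; only the memory is left */
	free(ep);
}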
+ */ +void snd_usb_endpoint_release(struct snd_usb_endpoint *ep) +{ + release_urbs(ep, 1); +} + +/** * snd_usb_endpoint_free: Free the resources of an snd_usb_endpoint * * @ep: the list header of the endpoint to free * - * This function does not care for the endpoint's use count but will tear - * down all the streaming URBs immediately and free all resources. + * This free all resources of the given ep. */ void snd_usb_endpoint_free(struct list_head *head) { struct snd_usb_endpoint *ep; ep = list_entry(head, struct snd_usb_endpoint, list); - release_urbs(ep, 1); kfree(ep); } diff --git a/sound/usb/endpoint.h b/sound/usb/endpoint.h index 1c7e8ee48abc..e61ee5c356a3 100644 --- a/sound/usb/endpoint.h +++ b/sound/usb/endpoint.h @@ -23,6 +23,7 @@ void snd_usb_endpoint_stop(struct snd_usb_endpoint *ep); void snd_usb_endpoint_sync_pending_stop(struct snd_usb_endpoint *ep); int snd_usb_endpoint_activate(struct snd_usb_endpoint *ep); void snd_usb_endpoint_deactivate(struct snd_usb_endpoint *ep); +void snd_usb_endpoint_release(struct snd_usb_endpoint *ep); void snd_usb_endpoint_free(struct list_head *head); int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep); diff --git a/tools/lib/lockdep/include/liblockdep/mutex.h b/tools/lib/lockdep/include/liblockdep/mutex.h index c342f7087147..ee53a42818ca 100644 --- a/tools/lib/lockdep/include/liblockdep/mutex.h +++ b/tools/lib/lockdep/include/liblockdep/mutex.h @@ -35,7 +35,7 @@ static inline int __mutex_init(liblockdep_pthread_mutex_t *lock, static inline int liblockdep_pthread_mutex_lock(liblockdep_pthread_mutex_t *lock) { - lock_acquire(&lock->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); return pthread_mutex_lock(&lock->mutex); } @@ -47,7 +47,7 @@ static inline int liblockdep_pthread_mutex_unlock(liblockdep_pthread_mutex_t *lo static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *lock) { - lock_acquire(&lock->dep_map, 0, 1, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); return pthread_mutex_trylock(&lock->mutex) == 0 ? 1 : 0; } diff --git a/tools/lib/lockdep/include/liblockdep/rwlock.h b/tools/lib/lockdep/include/liblockdep/rwlock.h index a680ab8c2e36..4ec03f861551 100644 --- a/tools/lib/lockdep/include/liblockdep/rwlock.h +++ b/tools/lib/lockdep/include/liblockdep/rwlock.h @@ -36,7 +36,7 @@ static inline int __rwlock_init(liblockdep_pthread_rwlock_t *lock, static inline int liblockdep_pthread_rwlock_rdlock(liblockdep_pthread_rwlock_t *lock) { - lock_acquire(&lock->dep_map, 0, 0, 2, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 0, 2, 1, NULL, (unsigned long)_RET_IP_); return pthread_rwlock_rdlock(&lock->rwlock); } @@ -49,19 +49,19 @@ static inline int liblockdep_pthread_rwlock_unlock(liblockdep_pthread_rwlock_t * static inline int liblockdep_pthread_rwlock_wrlock(liblockdep_pthread_rwlock_t *lock) { - lock_acquire(&lock->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); return pthread_rwlock_wrlock(&lock->rwlock); } static inline int liblockdep_pthread_rwlock_tryrdlock(liblockdep_pthread_rwlock_t *lock) { - lock_acquire(&lock->dep_map, 0, 1, 2, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 1, 2, 1, NULL, (unsigned long)_RET_IP_); return pthread_rwlock_tryrdlock(&lock->rwlock) == 0 ? 
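The liblockdep updates above and below only touch the fifth lock_acquire() argument, the check flag, which lockdep now treats as a boolean (0 = no validation, 1 = validate), so the old value 2 no longer carries extra meaning. An annotated copy of one wrapper, naming each argument for reference:

/* Annotated copy of a wrapper from the hunk above, labelling the
 * lock_acquire() arguments (lock, subclass, trylock, read, check, nest, ip). */
static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *lock)
{
	lock_acquire(&lock->dep_map,
		     0,				/* subclass            */
		     1,				/* trylock             */
		     0,				/* read (0 = exclusive) */
		     1,				/* check: now a boolean */
		     NULL,			/* nest_lock           */
		     (unsigned long)_RET_IP_);	/* caller ip           */
	return pthread_mutex_trylock(&lock->mutex) == 0 ? 1 : 0;
}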
1 : 0; } static inline int liblockdep_pthread_rwlock_trywlock(liblockdep_pthread_rwlock_t *lock) { - lock_acquire(&lock->dep_map, 0, 1, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0; } diff --git a/tools/lib/lockdep/preload.c b/tools/lib/lockdep/preload.c index 23bd69cb5ade..6f803609e498 100644 --- a/tools/lib/lockdep/preload.c +++ b/tools/lib/lockdep/preload.c @@ -92,7 +92,7 @@ enum { none, prepare, done, } __init_state; static void init_preload(void); static void try_init_preload(void) { - if (!__init_state != done) + if (__init_state != done) init_preload(); } @@ -252,7 +252,7 @@ int pthread_mutex_lock(pthread_mutex_t *mutex) try_init_preload(); - lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 2, NULL, + lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); /* * Here's the thing with pthread mutexes: unlike the kernel variant, @@ -281,7 +281,7 @@ int pthread_mutex_trylock(pthread_mutex_t *mutex) try_init_preload(); - lock_acquire(&__get_lock(mutex)->dep_map, 0, 1, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(mutex)->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); r = ll_pthread_mutex_trylock(mutex); if (r) lock_release(&__get_lock(mutex)->dep_map, 0, (unsigned long)_RET_IP_); @@ -303,7 +303,7 @@ int pthread_mutex_unlock(pthread_mutex_t *mutex) */ r = ll_pthread_mutex_unlock(mutex); if (r) - lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); return r; } @@ -352,7 +352,7 @@ int pthread_rwlock_rdlock(pthread_rwlock_t *rwlock) init_preload(); - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 2, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 2, 1, NULL, (unsigned long)_RET_IP_); r = ll_pthread_rwlock_rdlock(rwlock); if (r) lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); @@ -366,7 +366,7 @@ int pthread_rwlock_tryrdlock(pthread_rwlock_t *rwlock) init_preload(); - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1, 2, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1, 2, 1, NULL, (unsigned long)_RET_IP_); r = ll_pthread_rwlock_tryrdlock(rwlock); if (r) lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); @@ -380,7 +380,7 @@ int pthread_rwlock_trywrlock(pthread_rwlock_t *rwlock) init_preload(); - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); r = ll_pthread_rwlock_trywrlock(rwlock); if (r) lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); @@ -394,7 +394,7 @@ int pthread_rwlock_wrlock(pthread_rwlock_t *rwlock) init_preload(); - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); r = ll_pthread_rwlock_wrlock(rwlock); if (r) lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); @@ -411,7 +411,7 @@ int pthread_rwlock_unlock(pthread_rwlock_t *rwlock) lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); r = ll_pthread_rwlock_unlock(rwlock); if (r) - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(rwlock)->dep_map, 
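The preload.c fix above is an operator-precedence bug: '!' binds tighter than '!=', so '!__init_state != done' compared a 0/1 value against the enum constant done (2); the test stayed true even after initialization had finished, and init_preload() re-ran needlessly. A standalone illustration:

/* Standalone illustration of the precedence bug fixed above. */
#include <assert.h>

enum { none, prepare, done } state = done;

int main(void)
{
	/* Old test: '!state' is 0 or 1, never equal to done (2),
	 * so the condition is always true. */
	assert((!state != done) == 1);

	/* Fixed test: false once initialization has completed. */
	assert((state != done) == 0);
	return 0;
}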
0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); return r; } @@ -439,8 +439,6 @@ __attribute__((constructor)) static void init_preload(void) ll_pthread_rwlock_unlock = dlsym(RTLD_NEXT, "pthread_rwlock_unlock"); #endif - printf("%p\n", ll_pthread_mutex_trylock);fflush(stdout); - lockdep_init(); __init_state = done; diff --git a/tools/testing/selftests/cpu-hotplug/Makefile b/tools/testing/selftests/cpu-hotplug/Makefile index ae5faf9aade2..790c23a9db44 100644 --- a/tools/testing/selftests/cpu-hotplug/Makefile +++ b/tools/testing/selftests/cpu-hotplug/Makefile @@ -1,6 +1,6 @@ all: run_tests: - @/bin/sh ./on-off-test.sh || echo "cpu-hotplug selftests: [FAIL]" + @/bin/bash ./on-off-test.sh || echo "cpu-hotplug selftests: [FAIL]" clean: diff --git a/tools/testing/selftests/ipc/msgque.c b/tools/testing/selftests/ipc/msgque.c index aa290c0de6f5..552f0810bffb 100644 --- a/tools/testing/selftests/ipc/msgque.c +++ b/tools/testing/selftests/ipc/msgque.c @@ -193,6 +193,11 @@ int main(int argc, char **argv) int msg, pid, err; struct msgque_data msgque; + if (getuid() != 0) { + printf("Please run the test as root - Exiting.\n"); + exit(1); + } + msgque.key = ftok(argv[0], 822155650); if (msgque.key == -1) { printf("Can't make key\n"); diff --git a/tools/testing/selftests/memory-hotplug/Makefile b/tools/testing/selftests/memory-hotplug/Makefile index 350bfeda3aa8..058c76f5d102 100644 --- a/tools/testing/selftests/memory-hotplug/Makefile +++ b/tools/testing/selftests/memory-hotplug/Makefile @@ -1,6 +1,6 @@ all: run_tests: - @/bin/sh ./on-off-test.sh || echo "memory-hotplug selftests: [FAIL]" + @/bin/bash ./on-off-test.sh || echo "memory-hotplug selftests: [FAIL]" clean: diff --git a/tools/testing/selftests/powerpc/tm/Makefile b/tools/testing/selftests/powerpc/tm/Makefile index 51267f4184a6..2cede239a074 100644 --- a/tools/testing/selftests/powerpc/tm/Makefile +++ b/tools/testing/selftests/powerpc/tm/Makefile @@ -2,7 +2,7 @@ PROGS := tm-resched-dscr all: $(PROGS) -$(PROGS): +$(PROGS): ../harness.c run_tests: all @-for PROG in $(PROGS); do \ diff --git a/tools/testing/selftests/powerpc/tm/tm-resched-dscr.c b/tools/testing/selftests/powerpc/tm/tm-resched-dscr.c index ee98e3886af2..42d4c8caad81 100644 --- a/tools/testing/selftests/powerpc/tm/tm-resched-dscr.c +++ b/tools/testing/selftests/powerpc/tm/tm-resched-dscr.c @@ -28,6 +28,8 @@ #include <assert.h> #include <asm/tm.h> +#include "utils.h" + #define TBEGIN ".long 0x7C00051D ;" #define TEND ".long 0x7C00055D ;" #define TCHECK ".long 0x7C00059C ;" @@ -36,7 +38,8 @@ #define SPRN_TEXASR 0x82 #define SPRN_DSCR 0x03 -int main(void) { +int test_body(void) +{ uint64_t rv, dscr1 = 1, dscr2, texasr; printf("Check DSCR TM context switch: "); @@ -81,10 +84,15 @@ int main(void) { } if (dscr2 != dscr1) { printf(" FAIL\n"); - exit(EXIT_FAILURE); + return 1; } else { printf(" OK\n"); - exit(EXIT_SUCCESS); + return 0; } } } + +int main(void) +{ + return test_harness(test_body, "tm_resched_dscr"); +} diff --git a/tools/thermal/tmon/Makefile b/tools/thermal/tmon/Makefile index 447321104ec0..e775adcbd29f 100644 --- a/tools/thermal/tmon/Makefile +++ b/tools/thermal/tmon/Makefile @@ -21,7 +21,7 @@ OBJS = tmon.o tui.o sysfs.o pid.o OBJS += tmon: $(OBJS) Makefile tmon.h - $(CC) ${CFLAGS} $(LDFLAGS) $(OBJS) -o $(TARGET) -lm -lpanel -lncursesw -lpthread + $(CC) ${CFLAGS} $(LDFLAGS) $(OBJS) -o $(TARGET) -lm -lpanel -lncursesw -ltinfo -lpthread valgrind: tmon sudo valgrind -v --track-origins=yes --tool=memcheck --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes 
./$(TARGET) 1> /dev/null diff --git a/tools/thermal/tmon/tmon.c b/tools/thermal/tmon/tmon.c index b30f531173e4..09b7c3218334 100644 --- a/tools/thermal/tmon/tmon.c +++ b/tools/thermal/tmon/tmon.c @@ -142,6 +142,7 @@ static void start_syslog(void) static void prepare_logging(void) { int i; + struct stat logstat; if (!logging) return; @@ -152,6 +153,29 @@ static void prepare_logging(void) return; } + if (lstat(TMON_LOG_FILE, &logstat) < 0) { + syslog(LOG_ERR, "Unable to stat log file %s\n", TMON_LOG_FILE); + fclose(tmon_log); + tmon_log = NULL; + return; + } + + /* The log file must be a regular file owned by us */ + if (S_ISLNK(logstat.st_mode)) { + syslog(LOG_ERR, "Log file is a symlink. Will not log\n"); + fclose(tmon_log); + tmon_log = NULL; + return; + } + + if (logstat.st_uid != getuid()) { + syslog(LOG_ERR, "We don't own the log file. Not logging\n"); + fclose(tmon_log); + tmon_log = NULL; + return; + } + + fprintf(tmon_log, "#----------- THERMAL SYSTEM CONFIG -------------\n"); for (i = 0; i < ptdata.nr_tz_sensor; i++) { char binding_str[33]; /* size of long + 1 */ @@ -331,7 +355,7 @@ static void start_daemon_mode() disable_tui(); /* change the file mode mask */ - umask(0); + umask(S_IWGRP | S_IWOTH); /* new SID for the daemon process */ sid = setsid(); diff --git a/tools/usb/ffs-test.c b/tools/usb/ffs-test.c index fe1e66b6ef40..a87e99f37c52 100644 --- a/tools/usb/ffs-test.c +++ b/tools/usb/ffs-test.c @@ -116,8 +116,8 @@ static const struct { .header = { .magic = cpu_to_le32(FUNCTIONFS_DESCRIPTORS_MAGIC), .length = cpu_to_le32(sizeof descriptors), - .fs_count = 3, - .hs_count = 3, + .fs_count = cpu_to_le32(3), + .hs_count = cpu_to_le32(3), }, .fs_descs = { .intf = { |
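The ffs-test.c hunk above wraps the descriptor counts in cpu_to_le32() because the FunctionFS descriptors header is little-endian on the wire; a bare 3 would be stored byte-swapped on big-endian hosts. The equivalent conversion in portable userspace code (hypothetical sketch using the glibc <endian.h> helpers):

/* Hypothetical sketch: storing a little-endian field portably. */
#include <assert.h>
#include <endian.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	uint32_t wire = htole32(3);	/* what cpu_to_le32(3) produces */
	unsigned char bytes[4];

	memcpy(bytes, &wire, sizeof(wire));
	/* Regardless of host endianness, the wire format is 03 00 00 00. */
	assert(bytes[0] == 3 && bytes[1] == 0 && bytes[2] == 0 && bytes[3] == 0);
	return 0;
}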