author     Greg Kroah-Hartman <gregkh@linuxfoundation.org>   2014-07-21 10:07:25 -0700
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>   2014-07-21 10:07:25 -0700
commit     90125edbc49ec58b9101c4271bb1d87b04a4620f
tree       068353d4234435b17ade687b50a5d5015bdf1ad9
parent     0a8adf584759cbcbce5d88d419db01a8d0373abf
parent     9a3c4145af32125c5ee39c0272662b47307a8323
download   linux-90125edbc49ec58b9101c4271bb1d87b04a4620f.tar.bz2
Merge 3.16-rc6 into driver-core-next
We want the platform changes in here as well.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
441 files changed, 3865 insertions(+), 2190 deletions(-)
diff --git a/Documentation/Changes b/Documentation/Changes index 2254db0f00a5..227bec88021e 100644 --- a/Documentation/Changes +++ b/Documentation/Changes @@ -280,12 +280,9 @@ that is possible. mcelog ------ -In Linux 2.6.31+ the i386 kernel needs to run the mcelog utility -as a regular cronjob similar to the x86-64 kernel to process and log -machine check events when CONFIG_X86_NEW_MCE is enabled. Machine check -events are errors reported by the CPU. Processing them is strongly encouraged. -All x86-64 kernels since 2.6.4 require the mcelog utility to -process machine checks. +On x86 kernels the mcelog utility is needed to process and log machine check +events when CONFIG_X86_MCE is enabled. Machine check events are errors reported +by the CPU. Processing them is strongly encouraged. Getting updated software ======================== diff --git a/Documentation/DocBook/gadget.tmpl b/Documentation/DocBook/gadget.tmpl index 4017f147ba2f..2c425d70f7e2 100644 --- a/Documentation/DocBook/gadget.tmpl +++ b/Documentation/DocBook/gadget.tmpl @@ -708,7 +708,7 @@ hardware level details could be very different. <para>Systems need specialized hardware support to implement OTG, notably including a special <emphasis>Mini-AB</emphasis> jack -and associated transciever to support <emphasis>Dual-Role</emphasis> +and associated transceiver to support <emphasis>Dual-Role</emphasis> operation: they can act either as a host, using the standard Linux-USB host side driver stack, diff --git a/Documentation/DocBook/genericirq.tmpl b/Documentation/DocBook/genericirq.tmpl index 46347f603353..59fb5c077541 100644 --- a/Documentation/DocBook/genericirq.tmpl +++ b/Documentation/DocBook/genericirq.tmpl @@ -182,7 +182,7 @@ <para> Each interrupt is described by an interrupt descriptor structure irq_desc. The interrupt is referenced by an 'unsigned int' numeric - value which selects the corresponding interrupt decription structure + value which selects the corresponding interrupt description structure in the descriptor structures array. The descriptor structure contains status information and pointers to the interrupt flow method and the interrupt chip structure @@ -470,7 +470,7 @@ if (desc->irq_data.chip->irq_eoi) <para> To avoid copies of identical implementations of IRQ chips the core provides a configurable generic interrupt chip - implementation. Developers should check carefuly whether the + implementation. Developers should check carefully whether the generic chip fits their needs before implementing the same functionality slightly differently themselves. </para> diff --git a/Documentation/DocBook/kernel-locking.tmpl b/Documentation/DocBook/kernel-locking.tmpl index 19f2a5a5a5b4..e584ee12a1e7 100644 --- a/Documentation/DocBook/kernel-locking.tmpl +++ b/Documentation/DocBook/kernel-locking.tmpl @@ -1760,7 +1760,7 @@ as it would be on UP. </para> <para> -There is a furthur optimization possible here: remember our original +There is a further optimization possible here: remember our original cache code, where there were no reference counts and the caller simply held the lock whenever using the object? This is still possible: if you hold the lock, no one can delete the object, so you don't need to diff --git a/Documentation/DocBook/libata.tmpl b/Documentation/DocBook/libata.tmpl index deb71baed328..d7fcdc5a4379 100644 --- a/Documentation/DocBook/libata.tmpl +++ b/Documentation/DocBook/libata.tmpl @@ -677,7 +677,7 @@ and other resources, etc. <listitem> <para> - ATA_QCFLAG_ACTIVE is clared from qc->flags. 
+ ATA_QCFLAG_ACTIVE is cleared from qc->flags. </para> </listitem> @@ -708,7 +708,7 @@ and other resources, etc. <listitem> <para> - qc->waiting is claread & completed (in that order). + qc->waiting is cleared & completed (in that order). </para> </listitem> @@ -1163,7 +1163,7 @@ and other resources, etc. <para> Once sense data is acquired, this type of errors can be - handled similary to other SCSI errors. Note that sense data + handled similarly to other SCSI errors. Note that sense data may indicate ATA bus error (e.g. Sense Key 04h HARDWARE ERROR && ASC/ASCQ 47h/00h SCSI PARITY ERROR). In such cases, the error should be considered as an ATA bus error and diff --git a/Documentation/DocBook/media_api.tmpl b/Documentation/DocBook/media_api.tmpl index 4decb46bfa76..03f9a1f8d413 100644 --- a/Documentation/DocBook/media_api.tmpl +++ b/Documentation/DocBook/media_api.tmpl @@ -68,7 +68,7 @@ several digital tv standards. While it is called as DVB API, in fact it covers several different video standards including DVB-T, DVB-S, DVB-C and ATSC. The API is currently being updated - to documment support also for DVB-S2, ISDB-T and ISDB-S.</para> + to document support also for DVB-S2, ISDB-T and ISDB-S.</para> <para>The third part covers the Remote Controller API.</para> <para>The fourth part covers the Media Controller API.</para> <para>For additional information and for the latest development code, diff --git a/Documentation/DocBook/mtdnand.tmpl b/Documentation/DocBook/mtdnand.tmpl index cd11926e07c7..7da8f0402af5 100644 --- a/Documentation/DocBook/mtdnand.tmpl +++ b/Documentation/DocBook/mtdnand.tmpl @@ -91,7 +91,7 @@ <listitem><para> [MTD Interface]</para><para> These functions provide the interface to the MTD kernel API. - They are not replacable and provide functionality + They are not replaceable and provide functionality which is complete hardware independent. </para></listitem> <listitem><para> @@ -100,14 +100,14 @@ </para></listitem> <listitem><para> [GENERIC]</para><para> - Generic functions are not replacable and provide functionality + Generic functions are not replaceable and provide functionality which is complete hardware independent. </para></listitem> <listitem><para> [DEFAULT]</para><para> Default functions provide hardware related functionality which is suitable for most of the implementations. These functions can be replaced by the - board driver if neccecary. Those functions are called via pointers in the + board driver if necessary. Those functions are called via pointers in the NAND chip description structure. The board driver can set the functions which should be replaced by board dependent functions before calling nand_scan(). If the function pointer is NULL on entry to nand_scan() then the pointer @@ -264,7 +264,7 @@ static void board_hwcontrol(struct mtd_info *mtd, int cmd) is set up nand_scan() is called. This function tries to detect and identify then chip. If a chip is found all the internal data fields are initialized accordingly. - The structure(s) have to be zeroed out first and then filled with the neccecary + The structure(s) have to be zeroed out first and then filled with the necessary information about the device. </para> <programlisting> @@ -327,7 +327,7 @@ module_init(board_init); <sect1 id="Exit_function"> <title>Exit function</title> <para> - The exit function is only neccecary if the driver is + The exit function is only necessary if the driver is compiled as a module. 
It releases all resources which are held by the chip driver and unregisters the partitions in the MTD layer. @@ -494,7 +494,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip) in this case. See rts_from4.c and diskonchip.c for implementation reference. In those cases we must also use bad block tables on FLASH, because the ECC layout is - interferring with the bad block marker positions. + interfering with the bad block marker positions. See bad block table support for details. </para> </sect2> @@ -542,7 +542,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip) <para> nand_scan() calls the function nand_default_bbt(). nand_default_bbt() selects appropriate default - bad block table desriptors depending on the chip information + bad block table descriptors depending on the chip information which was retrieved by nand_scan(). </para> <para> @@ -554,7 +554,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip) <sect2 id="Flash_based_tables"> <title>Flash based tables</title> <para> - It may be desired or neccecary to keep a bad block table in FLASH. + It may be desired or necessary to keep a bad block table in FLASH. For AG-AND chips this is mandatory, as they have no factory marked bad blocks. They have factory marked good blocks. The marker pattern is erased when the block is erased to be reused. So in case of @@ -565,10 +565,10 @@ static void board_select_chip (struct mtd_info *mtd, int chip) of the blocks. </para> <para> - The blocks in which the tables are stored are procteted against + The blocks in which the tables are stored are protected against accidental access by marking them bad in the memory bad block table. The bad block table management functions are allowed - to circumvernt this protection. + to circumvent this protection. </para> <para> The simplest way to activate the FLASH based bad block table support @@ -592,7 +592,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip) User defined tables are created by filling out a nand_bbt_descr structure and storing the pointer in the nand_chip structure member bbt_td before calling nand_scan(). - If a mirror table is neccecary a second structure must be + If a mirror table is necessary a second structure must be created and a pointer to this structure must be stored in bbt_md inside the nand_chip structure. If the bbt_md member is set to NULL then only the main table is used @@ -666,7 +666,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip) <para> For automatic placement some blocks must be reserved for bad block table storage. The number of reserved blocks is defined - in the maxblocks member of the babd block table description structure. + in the maxblocks member of the bad block table description structure. Reserving 4 blocks for mirrored tables should be a reasonable number. This also limits the number of blocks which are scanned for the bad block table ident pattern. @@ -1068,11 +1068,11 @@ in this page</entry> <chapter id="filesystems"> <title>Filesystem support</title> <para> - The NAND driver provides all neccecary functions for a + The NAND driver provides all necessary functions for a filesystem via the MTD interface. </para> <para> - Filesystems must be aware of the NAND pecularities and + Filesystems must be aware of the NAND peculiarities and restrictions. One major restrictions of NAND Flash is, that you cannot write as often as you want to a page. 
The consecutive writes to a page, before erasing it again, are restricted to 1-3 writes, depending on the @@ -1222,7 +1222,7 @@ in this page</entry> #define NAND_BBT_VERSION 0x00000100 /* Create a bbt if none axists */ #define NAND_BBT_CREATE 0x00000200 -/* Write bbt if neccecary */ +/* Write bbt if necessary */ #define NAND_BBT_WRITE 0x00001000 /* Read and write back block contents when writing bbt */ #define NAND_BBT_SAVECONTENT 0x00002000 diff --git a/Documentation/DocBook/regulator.tmpl b/Documentation/DocBook/regulator.tmpl index 346e552fa2cc..3b08a085d2c7 100644 --- a/Documentation/DocBook/regulator.tmpl +++ b/Documentation/DocBook/regulator.tmpl @@ -155,7 +155,7 @@ release regulators. Functions are provided to <link linkend='API-regulator-enable'>enable</link> and <link linkend='API-regulator-disable'>disable</link> the - reguator and to get and set the runtime parameters of the + regulator and to get and set the runtime parameters of the regulator. </para> <para> diff --git a/Documentation/DocBook/uio-howto.tmpl b/Documentation/DocBook/uio-howto.tmpl index 95618159e29b..bbe9c1fd5cef 100644 --- a/Documentation/DocBook/uio-howto.tmpl +++ b/Documentation/DocBook/uio-howto.tmpl @@ -766,10 +766,10 @@ framework to set up sysfs files for this region. Simply leave it alone. <para> The dynamic memory regions will be allocated when the UIO device file, <varname>/dev/uioX</varname> is opened. - Simiar to static memory resources, the memory region information for + Similar to static memory resources, the memory region information for dynamic regions is then visible via sysfs at <varname>/sys/class/uio/uioX/maps/mapY/*</varname>. - The dynmaic memory regions will be freed when the UIO device file is + The dynamic memory regions will be freed when the UIO device file is closed. When no processes are holding the device file open, the address returned to userspace is ~0. </para> diff --git a/Documentation/DocBook/usb.tmpl b/Documentation/DocBook/usb.tmpl index 8d57c1888dca..85fc0e28576f 100644 --- a/Documentation/DocBook/usb.tmpl +++ b/Documentation/DocBook/usb.tmpl @@ -153,7 +153,7 @@ <listitem><para>The Linux USB API supports synchronous calls for control and bulk messages. - It also supports asynchnous calls for all kinds of data transfer, + It also supports asynchronous calls for all kinds of data transfer, using request structures called "URBs" (USB Request Blocks). </para></listitem> diff --git a/Documentation/DocBook/writing-an-alsa-driver.tmpl b/Documentation/DocBook/writing-an-alsa-driver.tmpl index d0056a4e9c53..6f639d9530b5 100644 --- a/Documentation/DocBook/writing-an-alsa-driver.tmpl +++ b/Documentation/DocBook/writing-an-alsa-driver.tmpl @@ -5696,7 +5696,7 @@ struct _snd_pcm_runtime { suspending the PCM operations via <function>snd_pcm_suspend_all()</function> or <function>snd_pcm_suspend()</function>. It means that the PCM - streams are already stoppped when the register snapshot is + streams are already stopped when the register snapshot is taken. But, remember that you don't have to restart the PCM stream in the resume callback. 
It'll be restarted via trigger call with <constant>SNDRV_PCM_TRIGGER_RESUME</constant> diff --git a/Documentation/acpi/enumeration.txt b/Documentation/acpi/enumeration.txt index fd786ea13a1f..e182be5e3c83 100644 --- a/Documentation/acpi/enumeration.txt +++ b/Documentation/acpi/enumeration.txt @@ -60,12 +60,6 @@ If the driver needs to perform more complex initialization like getting and configuring GPIOs it can get its ACPI handle and extract this information from ACPI tables. -Currently the kernel is not able to automatically determine from which ACPI -device it should make the corresponding platform device so we need to add -the ACPI device explicitly to acpi_platform_device_ids list defined in -drivers/acpi/acpi_platform.c. This limitation is only for the platform -devices, SPI and I2C devices are created automatically as described below. - DMA support ~~~~~~~~~~~ DMA controllers enumerated via ACPI should be registered in the system to diff --git a/Documentation/cpu-freq/intel-pstate.txt b/Documentation/cpu-freq/intel-pstate.txt index e742d21dbd96..a69ffe1d54d5 100644 --- a/Documentation/cpu-freq/intel-pstate.txt +++ b/Documentation/cpu-freq/intel-pstate.txt @@ -15,10 +15,13 @@ New sysfs files for controlling P state selection have been added to /sys/devices/system/cpu/intel_pstate/ max_perf_pct: limits the maximum P state that will be requested by - the driver stated as a percentage of the available performance. + the driver stated as a percentage of the available performance. The + available (P states) performance may be reduced by the no_turbo + setting described below. min_perf_pct: limits the minimum P state that will be requested by - the driver stated as a percentage of the available performance. + the driver stated as a percentage of the max (non-turbo) + performance level. no_turbo: limits the driver to selecting P states below the turbo frequency range. diff --git a/Documentation/devicetree/bindings/arm/exynos/power_domain.txt b/Documentation/devicetree/bindings/arm/exynos/power_domain.txt index 5216b419016a..8b4f7b7fe88b 100644 --- a/Documentation/devicetree/bindings/arm/exynos/power_domain.txt +++ b/Documentation/devicetree/bindings/arm/exynos/power_domain.txt @@ -9,6 +9,18 @@ Required Properties: - reg: physical base address of the controller and length of memory mapped region. +Optional Properties: +- clocks: List of clock handles. The parent clocks of the input clocks to the + devices in this power domain are set to oscclk before power gating + and restored back after powering on a domain. This is required for + all domains which are powered on and off and not required for unused + domains. +- clock-names: The following clocks can be specified: + - oscclk: Oscillator clock. + - pclkN, clkN: Pairs of parent of input clock and input clock to the + devices in this power domain. Maximum of 4 pairs (N = 0 to 3) + are supported currently. + Node of a device using power domains must have a samsung,power-domain property defined with a phandle to respective power domain. 
@@ -19,6 +31,14 @@ Example: reg = <0x10023C00 0x10>; }; + mfc_pd: power-domain@10044060 { + compatible = "samsung,exynos4210-pd"; + reg = <0x10044060 0x20>; + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>, + <&clock CLK_MOUT_USER_ACLK333>; + clock-names = "oscclk", "pclk0", "clk0"; + }; + Example of the node using power domain: node { diff --git a/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt b/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt index f055515d2b62..366690cb86a3 100644 --- a/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt +++ b/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt @@ -8,10 +8,12 @@ Both required and optional properties listed below must be defined under node /cpus/cpu@0. Required properties: -- operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt - for details +- None Optional properties: +- operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt for + details. OPPs *must* be supplied either via DT, i.e. this property, or + populated at runtime. - clock-latency: Specify the possible maximum transition latency for clock, in unit of nanoseconds. - voltage-tolerance: Specify the CPU voltage tolerance in percentage. diff --git a/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt b/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt index 64fd7dec1bbc..b3556609a06f 100644 --- a/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt +++ b/Documentation/devicetree/bindings/serial/renesas,sci-serial.txt @@ -4,6 +4,13 @@ Required properties: - compatible: Must contain one of the following: + - "renesas,scifa-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFA compatible UART. + - "renesas,scifb-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFB compatible UART. + - "renesas,scifa-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFA compatible UART. + - "renesas,scifb-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFB compatible UART. + - "renesas,scifa-r8a7740" for R8A7740 (R-Mobile A1) SCIFA compatible UART. + - "renesas,scifb-r8a7740" for R8A7740 (R-Mobile A1) SCIFB compatible UART. + - "renesas,scif-r8a7778" for R8A7778 (R-Car M1) SCIF compatible UART. - "renesas,scif-r8a7779" for R8A7779 (R-Car H1) SCIF compatible UART. - "renesas,scif-r8a7790" for R8A7790 (R-Car H2) SCIF compatible UART. - "renesas,scifa-r8a7790" for R8A7790 (R-Car H2) SCIFA compatible UART. diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt index c1b9aa8c5a52..b7fa2f599459 100644 --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -2790,6 +2790,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted. leaf rcu_node structure. Useful for very large systems. + rcutree.jiffies_till_sched_qs= [KNL] + Set required age in jiffies for a + given grace period before RCU starts + soliciting quiescent-state help from + rcu_note_context_switch(). + rcutree.jiffies_till_first_fqs= [KNL] Set delay from grace-period initialization to first attempt to force quiescent states. @@ -3526,7 +3532,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted. the allocated input device; If set to 0, video driver will only send out the event without touching backlight brightness level. - default: 0 + default: 1 virtio_mmio.device= [VMMIO] Memory mapped virtio (platform) device. 
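The intel_pstate limits documented earlier in this patch (Documentation/cpu-freq/intel-pstate.txt) are ordinary sysfs attributes. As a rough illustration only (not part of the patch; the paths follow that document, the percentage values are arbitrary examples, and root privileges are assumed), a userspace helper could set them like this:

#include <stdio.h>
#include <stdlib.h>

/* Write a percentage value to one of the intel_pstate sysfs attributes. */
static int write_pct(const char *attr, int pct)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/intel_pstate/%s", attr);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%d\n", pct);
	return fclose(f);
}

int main(void)
{
	/* Request at most 80% and at least 20% of the available
	 * performance range; see the note above on how no_turbo
	 * changes what "available" means for max_perf_pct. */
	if (write_pct("max_perf_pct", 80) || write_pct("min_perf_pct", 20)) {
		perror("intel_pstate sysfs");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}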
diff --git a/Documentation/laptops/00-INDEX b/Documentation/laptops/00-INDEX index d13b9a9a9e00..d399ae1fc724 100644 --- a/Documentation/laptops/00-INDEX +++ b/Documentation/laptops/00-INDEX @@ -8,8 +8,8 @@ disk-shock-protection.txt - information on hard disk shock protection. dslm.c - Simple Disk Sleep Monitor program -hpfall.c - - (HP) laptop accelerometer program for disk protection. +freefall.c + - (HP/DELL) laptop accelerometer program for disk protection. laptop-mode.txt - how to conserve battery power using laptop-mode. sony-laptop.txt diff --git a/Documentation/laptops/hpfall.c b/Documentation/laptops/freefall.c index b85dbbac0499..aab2ff09e868 100644 --- a/Documentation/laptops/hpfall.c +++ b/Documentation/laptops/freefall.c @@ -1,7 +1,9 @@ -/* Disk protection for HP machines. +/* Disk protection for HP/DELL machines. * * Copyright 2008 Eric Piel * Copyright 2009 Pavel Machek <pavel@ucw.cz> + * Copyright 2012 Sonal Santan + * Copyright 2014 Pali Rohár <pali.rohar@gmail.com> * * GPLv2. */ @@ -18,24 +20,31 @@ #include <signal.h> #include <sys/mman.h> #include <sched.h> +#include <syslog.h> -char unload_heads_path[64]; +static int noled; +static char unload_heads_path[64]; +static char device_path[32]; +static const char app_name[] = "FREE FALL"; -int set_unload_heads_path(char *device) +static int set_unload_heads_path(char *device) { char devname[64]; if (strlen(device) <= 5 || strncmp(device, "/dev/", 5) != 0) return -EINVAL; - strncpy(devname, device + 5, sizeof(devname)); + strncpy(devname, device + 5, sizeof(devname) - 1); + strncpy(device_path, device, sizeof(device_path) - 1); snprintf(unload_heads_path, sizeof(unload_heads_path) - 1, "/sys/block/%s/device/unload_heads", devname); return 0; } -int valid_disk(void) + +static int valid_disk(void) { int fd = open(unload_heads_path, O_RDONLY); + if (fd < 0) { perror(unload_heads_path); return 0; @@ -45,43 +54,54 @@ int valid_disk(void) return 1; } -void write_int(char *path, int i) +static void write_int(char *path, int i) { char buf[1024]; int fd = open(path, O_RDWR); + if (fd < 0) { perror("open"); exit(1); } + sprintf(buf, "%d", i); + if (write(fd, buf, strlen(buf)) != strlen(buf)) { perror("write"); exit(1); } + close(fd); } -void set_led(int on) +static void set_led(int on) { + if (noled) + return; write_int("/sys/class/leds/hp::hddprotect/brightness", on); } -void protect(int seconds) +static void protect(int seconds) { + const char *str = (seconds == 0) ? 
"Unparked" : "Parked"; + write_int(unload_heads_path, seconds*1000); + syslog(LOG_INFO, "%s %s disk head\n", str, device_path); } -int on_ac(void) +static int on_ac(void) { -// /sys/class/power_supply/AC0/online + /* /sys/class/power_supply/AC0/online */ + return 1; } -int lid_open(void) +static int lid_open(void) { -// /proc/acpi/button/lid/LID/state + /* /proc/acpi/button/lid/LID/state */ + return 1; } -void ignore_me(void) +static void ignore_me(int signum) { protect(0); set_led(0); @@ -90,6 +110,7 @@ void ignore_me(void) int main(int argc, char **argv) { int fd, ret; + struct stat st; struct sched_param param; if (argc == 1) @@ -111,7 +132,16 @@ int main(int argc, char **argv) return EXIT_FAILURE; } - daemon(0, 0); + if (stat("/sys/class/leds/hp::hddprotect/brightness", &st)) + noled = 1; + + if (daemon(0, 0) != 0) { + perror("daemon"); + return EXIT_FAILURE; + } + + openlog(app_name, LOG_CONS | LOG_PID | LOG_NDELAY, LOG_LOCAL1); + param.sched_priority = sched_get_priority_max(SCHED_FIFO); sched_setscheduler(0, SCHED_FIFO, ¶m); mlockall(MCL_CURRENT|MCL_FUTURE); @@ -141,6 +171,7 @@ int main(int argc, char **argv) alarm(20); } + closelog(); close(fd); return EXIT_SUCCESS; } diff --git a/MAINTAINERS b/MAINTAINERS index 7c97777dd1b3..11665a898498 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -156,7 +156,6 @@ F: drivers/net/hamradio/6pack.c 8169 10/100/1000 GIGABIT ETHERNET DRIVER M: Realtek linux nic maintainers <nic_swsd@realtek.com> -M: Francois Romieu <romieu@fr.zoreil.com> L: netdev@vger.kernel.org S: Maintained F: drivers/net/ethernet/realtek/r8169.c @@ -1314,6 +1313,20 @@ W: http://oss.renesas.com Q: http://patchwork.kernel.org/project/linux-sh/list/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas.git next S: Supported +F: arch/arm/boot/dts/emev2* +F: arch/arm/boot/dts/r7s* +F: arch/arm/boot/dts/r8a* +F: arch/arm/boot/dts/sh* +F: arch/arm/configs/ape6evm_defconfig +F: arch/arm/configs/armadillo800eva_defconfig +F: arch/arm/configs/bockw_defconfig +F: arch/arm/configs/genmai_defconfig +F: arch/arm/configs/koelsch_defconfig +F: arch/arm/configs/kzm9g_defconfig +F: arch/arm/configs/lager_defconfig +F: arch/arm/configs/mackerel_defconfig +F: arch/arm/configs/marzen_defconfig +F: arch/arm/configs/shmobile_defconfig F: arch/arm/mach-shmobile/ F: drivers/sh/ @@ -4497,8 +4510,7 @@ S: Supported F: drivers/idle/i7300_idle.c IEEE 802.15.4 SUBSYSTEM -M: Alexander Smirnov <alex.bluesman.smirnov@gmail.com> -M: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> +M: Alexander Aring <alex.aring@gmail.com> L: linux-zigbee-devel@lists.sourceforge.net (moderated for non-subscribers) W: http://apps.sourceforge.net/trac/linux-zigbee T: git git://git.kernel.org/pub/scm/linux/kernel/git/lowpan/lowpan.git @@ -6787,7 +6799,7 @@ F: arch/x86/kernel/quirks.c PCI DRIVER FOR IMX6 M: Richard Zhu <r65037@freescale.com> -M: Shawn Guo <shawn.guo@linaro.org> +M: Shawn Guo <shawn.guo@freescale.com> L: linux-pci@vger.kernel.org L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained @@ -8984,7 +8996,7 @@ F: drivers/media/radio/radio-raremono.c THERMAL M: Zhang Rui <rui.zhang@intel.com> -M: Eduardo Valentin <eduardo.valentin@ti.com> +M: Eduardo Valentin <edubezval@gmail.com> L: linux-pm@vger.kernel.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux-soc-thermal.git @@ -9011,7 +9023,7 @@ S: Maintained F: drivers/platform/x86/thinkpad_acpi.c TI BANDGAP AND THERMAL DRIVER -M: Eduardo Valentin 
<eduardo.valentin@ti.com> +M: Eduardo Valentin <edubezval@gmail.com> L: linux-pm@vger.kernel.org S: Supported F: drivers/thermal/ti-soc-thermal/ @@ -1,7 +1,7 @@ VERSION = 3 PATCHLEVEL = 16 SUBLEVEL = 0 -EXTRAVERSION = -rc4 +EXTRAVERSION = -rc6 NAME = Shuffling Zombie Juror # *DOCUMENTATION* @@ -41,6 +41,29 @@ unexport GREP_OPTIONS # descending is started. They are now explicitly listed as the # prepare rule. +# Beautify output +# --------------------------------------------------------------------------- +# +# Normally, we echo the whole command before executing it. By making +# that echo $($(quiet)$(cmd)), we now have the possibility to set +# $(quiet) to choose other forms of output instead, e.g. +# +# quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@ +# cmd_cc_o_c = $(CC) $(c_flags) -c -o $@ $< +# +# If $(quiet) is empty, the whole command will be printed. +# If it is set to "quiet_", only the short version will be printed. +# If it is set to "silent_", nothing will be printed at all, since +# the variable $(silent_cmd_cc_o_c) doesn't exist. +# +# A simple variant is to prefix commands with $(Q) - that's useful +# for commands that shall be hidden in non-verbose mode. +# +# $(Q)ln $@ :< +# +# If KBUILD_VERBOSE equals 0 then the above command will be hidden. +# If KBUILD_VERBOSE equals 1 then the above command is displayed. +# # To put more focus on warnings, be less verbose as default # Use 'make V=1' to see the full commands @@ -51,6 +74,29 @@ ifndef KBUILD_VERBOSE KBUILD_VERBOSE = 0 endif +ifeq ($(KBUILD_VERBOSE),1) + quiet = + Q = +else + quiet=quiet_ + Q = @ +endif + +# If the user is running make -s (silent mode), suppress echoing of +# commands + +ifneq ($(filter 4.%,$(MAKE_VERSION)),) # make-4 +ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),) + quiet=silent_ +endif +else # make-3.8x +ifneq ($(filter s% -s%,$(MAKEFLAGS)),) + quiet=silent_ +endif +endif + +export quiet Q KBUILD_VERBOSE + # Call a source code checker (by default, "sparse") as part of the # C compilation. # @@ -128,8 +174,11 @@ $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make # Fake the "Entering directory" message once, so that IDEs/editors are # able to understand relative filenames. + echodir := @echo + quiet_echodir := @echo +silent_echodir := @: sub-make: FORCE - @echo "make[1]: Entering directory \`$(KBUILD_OUTPUT)'" + $($(quiet)echodir) "make[1]: Entering directory \`$(KBUILD_OUTPUT)'" $(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \ KBUILD_SRC=$(CURDIR) \ KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile \ @@ -292,52 +341,6 @@ endif export KBUILD_MODULES KBUILD_BUILTIN export KBUILD_CHECKSRC KBUILD_SRC KBUILD_EXTMOD -# Beautify output -# --------------------------------------------------------------------------- -# -# Normally, we echo the whole command before executing it. By making -# that echo $($(quiet)$(cmd)), we now have the possibility to set -# $(quiet) to choose other forms of output instead, e.g. -# -# quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@ -# cmd_cc_o_c = $(CC) $(c_flags) -c -o $@ $< -# -# If $(quiet) is empty, the whole command will be printed. -# If it is set to "quiet_", only the short version will be printed. -# If it is set to "silent_", nothing will be printed at all, since -# the variable $(silent_cmd_cc_o_c) doesn't exist. -# -# A simple variant is to prefix commands with $(Q) - that's useful -# for commands that shall be hidden in non-verbose mode. -# -# $(Q)ln $@ :< -# -# If KBUILD_VERBOSE equals 0 then the above command will be hidden. 
-# If KBUILD_VERBOSE equals 1 then the above command is displayed. - -ifeq ($(KBUILD_VERBOSE),1) - quiet = - Q = -else - quiet=quiet_ - Q = @ -endif - -# If the user is running make -s (silent mode), suppress echoing of -# commands - -ifneq ($(filter 4.%,$(MAKE_VERSION)),) # make-4 -ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),) - quiet=silent_ -endif -else # make-3.8x -ifneq ($(filter s% -s%,$(MAKEFLAGS)),) - quiet=silent_ -endif -endif - -export quiet Q KBUILD_VERBOSE - ifneq ($(CC),) ifeq ($(shell $(CC) -v 2>&1 | grep -c "clang version"), 1) COMPILER := clang @@ -1173,7 +1176,7 @@ distclean: mrproper # Packaging of the kernel to various formats # --------------------------------------------------------------------------- # rpm target kept for backward compatibility -package-dir := $(srctree)/scripts/package +package-dir := scripts/package %src-pkg: FORCE $(Q)$(MAKE) $(build)=$(package-dir) $@ diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 245058b3b0ef..88acf8bc1490 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -6,6 +6,7 @@ config ARM select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_HAVE_CUSTOM_GPIO_H select ARCH_MIGHT_HAVE_PC_PARPORT + select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_USE_BUILTIN_BSWAP select ARCH_USE_CMPXCHG_LOCKREF select ARCH_WANT_IPC_PARSE_VERSION diff --git a/arch/arm/boot/dts/am335x-evm.dts b/arch/arm/boot/dts/am335x-evm.dts index ecb267767cf5..e2156a583de7 100644 --- a/arch/arm/boot/dts/am335x-evm.dts +++ b/arch/arm/boot/dts/am335x-evm.dts @@ -529,8 +529,8 @@ serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */ 0 0 1 2 >; - tx-num-evt = <1>; - rx-num-evt = <1>; + tx-num-evt = <32>; + rx-num-evt = <32>; }; &tps { diff --git a/arch/arm/boot/dts/am335x-evmsk.dts b/arch/arm/boot/dts/am335x-evmsk.dts index ab9a34ce524c..80a3b215e7d6 100644 --- a/arch/arm/boot/dts/am335x-evmsk.dts +++ b/arch/arm/boot/dts/am335x-evmsk.dts @@ -560,8 +560,8 @@ serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */ 0 0 1 2 >; - tx-num-evt = <1>; - rx-num-evt = <1>; + tx-num-evt = <32>; + rx-num-evt = <32>; }; &tscadc { diff --git a/arch/arm/boot/dts/am335x-igep0033.dtsi b/arch/arm/boot/dts/am335x-igep0033.dtsi index 8a0a72dc7dd7..a1a0cc5eb35c 100644 --- a/arch/arm/boot/dts/am335x-igep0033.dtsi +++ b/arch/arm/boot/dts/am335x-igep0033.dtsi @@ -105,10 +105,16 @@ &cpsw_emac0 { phy_id = <&davinci_mdio>, <0>; + phy-mode = "rmii"; }; &cpsw_emac1 { phy_id = <&davinci_mdio>, <1>; + phy-mode = "rmii"; +}; + +&phy_sel { + rmii-clock-ext; }; &elm { diff --git a/arch/arm/boot/dts/at91sam9n12.dtsi b/arch/arm/boot/dts/at91sam9n12.dtsi index 287795985e32..b84bac5bada4 100644 --- a/arch/arm/boot/dts/at91sam9n12.dtsi +++ b/arch/arm/boot/dts/at91sam9n12.dtsi @@ -925,7 +925,7 @@ compatible = "atmel,at91rm9200-ohci", "usb-ohci"; reg = <0x00500000 0x00100000>; interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; - clocks = <&usb>, <&uhphs_clk>, <&udphs_clk>, + clocks = <&usb>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>; clock-names = "usb_clk", "ohci_clk", "hclk", "uhpck"; status = "disabled"; diff --git a/arch/arm/boot/dts/at91sam9x5.dtsi b/arch/arm/boot/dts/at91sam9x5.dtsi index d6133f497207..2c0d6ea3ab41 100644 --- a/arch/arm/boot/dts/at91sam9x5.dtsi +++ b/arch/arm/boot/dts/at91sam9x5.dtsi @@ -1045,6 +1045,8 @@ reg = <0x00500000 0x80000 0xf803c000 0x400>; interrupts = <23 IRQ_TYPE_LEVEL_HIGH 0>; + clocks = <&usb>, <&udphs_clk>; + clock-names = "hclk", "pclk"; status = "disabled"; ep0 { @@ -1122,6 +1124,7 @@ compatible = "atmel,at91sam9rl-pwm"; reg = <0xf8034000 0x300>; interrupts = <18 
IRQ_TYPE_LEVEL_HIGH 4>; + clocks = <&pwm_clk>; #pwm-cells = <3>; status = "disabled"; }; @@ -1153,8 +1156,7 @@ compatible = "atmel,at91rm9200-ohci", "usb-ohci"; reg = <0x00600000 0x100000>; interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; - clocks = <&usb>, <&uhphs_clk>, <&udphs_clk>, - <&uhpck>; + clocks = <&usb>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>; clock-names = "usb_clk", "ohci_clk", "hclk", "uhpck"; status = "disabled"; }; diff --git a/arch/arm/boot/dts/dra7-evm.dts b/arch/arm/boot/dts/dra7-evm.dts index 4adc28039c30..83089540e324 100644 --- a/arch/arm/boot/dts/dra7-evm.dts +++ b/arch/arm/boot/dts/dra7-evm.dts @@ -240,6 +240,7 @@ regulator-name = "ldo3"; regulator-min-microvolt = <1800000>; regulator-max-microvolt = <1800000>; + regulator-always-on; regulator-boot-on; }; diff --git a/arch/arm/boot/dts/dra7xx-clocks.dtsi b/arch/arm/boot/dts/dra7xx-clocks.dtsi index c90c76de84d6..dc7a292fe939 100644 --- a/arch/arm/boot/dts/dra7xx-clocks.dtsi +++ b/arch/arm/boot/dts/dra7xx-clocks.dtsi @@ -673,10 +673,12 @@ l3_iclk_div: l3_iclk_div { #clock-cells = <0>; - compatible = "fixed-factor-clock"; + compatible = "ti,divider-clock"; + ti,max-div = <2>; + ti,bit-shift = <4>; + reg = <0x0100>; clocks = <&dpll_core_h12x2_ck>; - clock-mult = <1>; - clock-div = <1>; + ti,index-power-of-two; }; l4_root_clk_div: l4_root_clk_div { @@ -684,7 +686,7 @@ compatible = "fixed-factor-clock"; clocks = <&l3_iclk_div>; clock-mult = <1>; - clock-div = <1>; + clock-div = <2>; }; video1_clk2_div: video1_clk2_div { diff --git a/arch/arm/boot/dts/exynos4.dtsi b/arch/arm/boot/dts/exynos4.dtsi index fbaf426d2daa..17b22e9cc2aa 100644 --- a/arch/arm/boot/dts/exynos4.dtsi +++ b/arch/arm/boot/dts/exynos4.dtsi @@ -554,7 +554,7 @@ interrupts = <0 37 0>, <0 38 0>, <0 39 0>, <0 40 0>, <0 41 0>; clocks = <&clock CLK_PWM>; clock-names = "timers"; - #pwm-cells = <2>; + #pwm-cells = <3>; status = "disabled"; }; diff --git a/arch/arm/boot/dts/exynos5420.dtsi b/arch/arm/boot/dts/exynos5420.dtsi index e38532271ef9..15957227ffda 100644 --- a/arch/arm/boot/dts/exynos5420.dtsi +++ b/arch/arm/boot/dts/exynos5420.dtsi @@ -167,7 +167,7 @@ compatible = "samsung,exynos5420-audss-clock"; reg = <0x03810000 0x0C>; #clock-cells = <1>; - clocks = <&clock CLK_FIN_PLL>, <&clock CLK_FOUT_EPLL>, + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MAU_EPLL>, <&clock CLK_SCLK_MAUDIO0>, <&clock CLK_SCLK_MAUPCM0>; clock-names = "pll_ref", "pll_in", "sclk_audio", "sclk_pcm_in"; }; @@ -260,6 +260,9 @@ mfc_pd: power-domain@10044060 { compatible = "samsung,exynos4210-pd"; reg = <0x10044060 0x20>; + clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>, + <&clock CLK_MOUT_USER_ACLK333>; + clock-names = "oscclk", "pclk0", "clk0"; }; disp_pd: power-domain@100440C0 { diff --git a/arch/arm/kernel/kprobes-test-arm.c b/arch/arm/kernel/kprobes-test-arm.c index 9db4b659d03e..cb1424240ff6 100644 --- a/arch/arm/kernel/kprobes-test-arm.c +++ b/arch/arm/kernel/kprobes-test-arm.c @@ -74,8 +74,6 @@ void kprobe_arm_test_cases(void) TEST_RRR( op "lt" s " r11, r",11,VAL1,", r",14,N(val),", asr r",7, 6,"")\ TEST_RR( op "gt" s " r12, r13" ", r",14,val, ", ror r",14,7,"")\ TEST_RR( op "le" s " r14, r",0, val, ", r13" ", lsl r",14,8,"")\ - TEST_RR( op s " r12, pc" ", r",14,val, ", ror r",14,7,"")\ - TEST_RR( op s " r14, r",0, val, ", pc" ", lsl r",14,8,"")\ TEST_R( op "eq" s " r0, r",11,VAL1,", #0xf5") \ TEST_R( op "ne" s " r11, r",0, VAL1,", #0xf5000000") \ TEST_R( op s " r7, r",8, VAL2,", #0x000af000") \ @@ -103,8 +101,6 @@ void kprobe_arm_test_cases(void) TEST_RRR( op "ge r",11,VAL1,", 
r",14,N(val),", asr r",7, 6,"") \ TEST_RR( op "le r13" ", r",14,val, ", ror r",14,7,"") \ TEST_RR( op "gt r",0, val, ", r13" ", lsl r",14,8,"") \ - TEST_RR( op " pc" ", r",14,val, ", ror r",14,7,"") \ - TEST_RR( op " r",0, val, ", pc" ", lsl r",14,8,"") \ TEST_R( op "eq r",11,VAL1,", #0xf5") \ TEST_R( op "ne r",0, VAL1,", #0xf5000000") \ TEST_R( op " r",8, VAL2,", #0x000af000") @@ -125,7 +121,6 @@ void kprobe_arm_test_cases(void) TEST_RR( op "ge" s " r11, r",11,N(val),", asr r",7, 6,"") \ TEST_RR( op "lt" s " r12, r",11,val, ", ror r",14,7,"") \ TEST_R( op "gt" s " r14, r13" ", lsl r",14,8,"") \ - TEST_R( op "le" s " r14, pc" ", lsl r",14,8,"") \ TEST( op "eq" s " r0, #0xf5") \ TEST( op "ne" s " r11, #0xf5000000") \ TEST( op s " r7, #0x000af000") \ @@ -159,12 +154,19 @@ void kprobe_arm_test_cases(void) TEST_SUPPORTED("cmp pc, #0x1000"); TEST_SUPPORTED("cmp sp, #0x1000"); - /* Data-processing with PC as shift*/ + /* Data-processing with PC and a shift count in a register */ TEST_UNSUPPORTED(__inst_arm(0xe15c0f1e) " @ cmp r12, r14, asl pc") TEST_UNSUPPORTED(__inst_arm(0xe1a0cf1e) " @ mov r12, r14, asl pc") TEST_UNSUPPORTED(__inst_arm(0xe08caf1e) " @ add r10, r12, r14, asl pc") - - /* Data-processing with PC as shift*/ + TEST_UNSUPPORTED(__inst_arm(0xe151021f) " @ cmp r1, pc, lsl r2") + TEST_UNSUPPORTED(__inst_arm(0xe17f0211) " @ cmn pc, r1, lsl r2") + TEST_UNSUPPORTED(__inst_arm(0xe1a0121f) " @ mov r1, pc, lsl r2") + TEST_UNSUPPORTED(__inst_arm(0xe1a0f211) " @ mov pc, r1, lsl r2") + TEST_UNSUPPORTED(__inst_arm(0xe042131f) " @ sub r1, r2, pc, lsl r3") + TEST_UNSUPPORTED(__inst_arm(0xe1cf1312) " @ bic r1, pc, r2, lsl r3") + TEST_UNSUPPORTED(__inst_arm(0xe081f312) " @ add pc, r1, r2, lsl r3") + + /* Data-processing with PC as a target and status registers updated */ TEST_UNSUPPORTED("movs pc, r1") TEST_UNSUPPORTED("movs pc, r1, lsl r2") TEST_UNSUPPORTED("movs pc, #0x10000") @@ -187,14 +189,14 @@ void kprobe_arm_test_cases(void) TEST_BF_R ("add pc, pc, r",14,2f-1f-8,"") TEST_BF_R ("add pc, r",14,2f-1f-8,", pc") TEST_BF_R ("mov pc, r",0,2f,"") - TEST_BF_RR("mov pc, r",0,2f,", asl r",1,0,"") + TEST_BF_R ("add pc, pc, r",14,(2f-1f-8)*2,", asr #1") TEST_BB( "sub pc, pc, #1b-2b+8") #if __LINUX_ARM_ARCH__ == 6 && !defined(CONFIG_CPU_V7) TEST_BB( "sub pc, pc, #1b-2b+8-2") /* UNPREDICTABLE before and after ARMv6 */ #endif TEST_BB_R( "sub pc, pc, r",14, 1f-2f+8,"") TEST_BB_R( "rsb pc, r",14,1f-2f+8,", pc") - TEST_RR( "add pc, pc, r",10,-2,", asl r",11,1,"") + TEST_R( "add pc, pc, r",10,-2,", asl #1") #ifdef CONFIG_THUMB2_KERNEL TEST_ARM_TO_THUMB_INTERWORK_R("add pc, pc, r",0,3f-1f-8+1,"") TEST_ARM_TO_THUMB_INTERWORK_R("sub pc, r",0,3f+8+1,", #8") @@ -216,6 +218,7 @@ void kprobe_arm_test_cases(void) TEST_BB_R("bx r",7,2f,"") TEST_BF_R("bxeq r",14,2f,"") +#if __LINUX_ARM_ARCH__ >= 5 TEST_R("clz r0, r",0, 0x0,"") TEST_R("clzeq r7, r",14,0x1,"") TEST_R("clz lr, r",7, 0xffffffff,"") @@ -337,6 +340,7 @@ void kprobe_arm_test_cases(void) TEST_UNSUPPORTED(__inst_arm(0xe16f02e1) " @ smultt pc, r1, r2") TEST_UNSUPPORTED(__inst_arm(0xe16002ef) " @ smultt r0, pc, r2") TEST_UNSUPPORTED(__inst_arm(0xe1600fe1) " @ smultt r0, r1, pc") +#endif TEST_GROUP("Multiply and multiply-accumulate") @@ -559,6 +563,7 @@ void kprobe_arm_test_cases(void) TEST_UNSUPPORTED("ldrsht r1, [r2], #48") #endif +#if __LINUX_ARM_ARCH__ >= 5 TEST_RPR( "strd r",0, VAL1,", [r",1, 48,", -r",2,24,"]") TEST_RPR( "strccd r",8, VAL2,", [r",13,0, ", r",12,48,"]") TEST_RPR( "strd r",4, VAL1,", [r",2, 24,", r",3, 48,"]!") @@ -595,6 +600,7 @@ void 
kprobe_arm_test_cases(void) TEST_UNSUPPORTED(__inst_arm(0xe1efc3d0) " @ ldrd r12, [pc, #48]!") TEST_UNSUPPORTED(__inst_arm(0xe0c9f3d0) " @ ldrd pc, [r9], #48") TEST_UNSUPPORTED(__inst_arm(0xe0c9e3d0) " @ ldrd lr, [r9], #48") +#endif TEST_GROUP("Miscellaneous") @@ -1227,7 +1233,9 @@ void kprobe_arm_test_cases(void) TEST_COPROCESSOR( "mrc"two" 0, 0, r0, cr0, cr0, 0") COPROCESSOR_INSTRUCTIONS_ST_LD("",e) +#if __LINUX_ARM_ARCH__ >= 5 COPROCESSOR_INSTRUCTIONS_MC_MR("",e) +#endif TEST_UNSUPPORTED("svc 0") TEST_UNSUPPORTED("svc 0xffffff") @@ -1287,7 +1295,9 @@ void kprobe_arm_test_cases(void) TEST( "blx __dummy_thumb_subroutine_odd") #endif /* __LINUX_ARM_ARCH__ >= 6 */ +#if __LINUX_ARM_ARCH__ >= 5 COPROCESSOR_INSTRUCTIONS_ST_LD("2",f) +#endif #if __LINUX_ARM_ARCH__ >= 6 COPROCESSOR_INSTRUCTIONS_MC_MR("2",f) #endif diff --git a/arch/arm/kernel/kprobes-test.c b/arch/arm/kernel/kprobes-test.c index 379639998d5a..08d731294bcd 100644 --- a/arch/arm/kernel/kprobes-test.c +++ b/arch/arm/kernel/kprobes-test.c @@ -225,6 +225,7 @@ static int pre_handler_called; static int post_handler_called; static int jprobe_func_called; static int kretprobe_handler_called; +static int tests_failed; #define FUNC_ARG1 0x12345678 #define FUNC_ARG2 0xabcdef @@ -461,6 +462,13 @@ static int run_api_tests(long (*func)(long, long)) pr_info(" jprobe\n"); ret = test_jprobe(func); +#if defined(CONFIG_THUMB2_KERNEL) && !defined(MODULE) + if (ret == -EINVAL) { + pr_err("FAIL: Known longtime bug with jprobe on Thumb kernels\n"); + tests_failed = ret; + ret = 0; + } +#endif if (ret < 0) return ret; @@ -1672,6 +1680,8 @@ static int __init run_all_tests(void) out: if (ret == 0) + ret = tests_failed; + if (ret == 0) pr_info("Finished kprobe tests OK\n"); else pr_err("kprobe tests failed\n"); diff --git a/arch/arm/kernel/probes-arm.c b/arch/arm/kernel/probes-arm.c index 51a13a027989..8eaef81d8344 100644 --- a/arch/arm/kernel/probes-arm.c +++ b/arch/arm/kernel/probes-arm.c @@ -341,12 +341,12 @@ static const union decode_item arm_cccc_000x_table[] = { /* CMP (reg-shift reg) cccc 0001 0101 xxxx xxxx xxxx 0xx1 xxxx */ /* CMN (reg-shift reg) cccc 0001 0111 xxxx xxxx xxxx 0xx1 xxxx */ DECODE_EMULATEX (0x0f900090, 0x01100010, PROBES_DATA_PROCESSING_REG, - REGS(ANY, 0, NOPC, 0, ANY)), + REGS(NOPC, 0, NOPC, 0, NOPC)), /* MOV (reg-shift reg) cccc 0001 101x xxxx xxxx xxxx 0xx1 xxxx */ /* MVN (reg-shift reg) cccc 0001 111x xxxx xxxx xxxx 0xx1 xxxx */ DECODE_EMULATEX (0x0fa00090, 0x01a00010, PROBES_DATA_PROCESSING_REG, - REGS(0, ANY, NOPC, 0, ANY)), + REGS(0, NOPC, NOPC, 0, NOPC)), /* AND (reg-shift reg) cccc 0000 000x xxxx xxxx xxxx 0xx1 xxxx */ /* EOR (reg-shift reg) cccc 0000 001x xxxx xxxx xxxx 0xx1 xxxx */ @@ -359,7 +359,7 @@ static const union decode_item arm_cccc_000x_table[] = { /* ORR (reg-shift reg) cccc 0001 100x xxxx xxxx xxxx 0xx1 xxxx */ /* BIC (reg-shift reg) cccc 0001 110x xxxx xxxx xxxx 0xx1 xxxx */ DECODE_EMULATEX (0x0e000090, 0x00000010, PROBES_DATA_PROCESSING_REG, - REGS(ANY, ANY, NOPC, 0, ANY)), + REGS(NOPC, NOPC, NOPC, 0, NOPC)), DECODE_END }; diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c index 9d853189028b..e35d880f9773 100644 --- a/arch/arm/kernel/topology.c +++ b/arch/arm/kernel/topology.c @@ -275,7 +275,7 @@ void store_cpu_topology(unsigned int cpuid) cpu_topology[cpuid].socket_id, mpidr); } -static inline const int cpu_corepower_flags(void) +static inline int cpu_corepower_flags(void) { return SD_SHARE_PKG_RESOURCES | SD_SHARE_POWERDOMAIN; } diff --git a/arch/arm/mach-exynos/exynos.c 
b/arch/arm/mach-exynos/exynos.c index f38cf7c110cc..46d893fcbe85 100644 --- a/arch/arm/mach-exynos/exynos.c +++ b/arch/arm/mach-exynos/exynos.c @@ -173,10 +173,8 @@ static struct platform_device exynos_cpuidle = { void __init exynos_cpuidle_init(void) { - if (soc_is_exynos5440()) - return; - - platform_device_register(&exynos_cpuidle); + if (soc_is_exynos4210() || soc_is_exynos5250()) + platform_device_register(&exynos_cpuidle); } void __init exynos_cpufreq_init(void) @@ -297,7 +295,7 @@ static void __init exynos_dt_machine_init(void) * This is called from smp_prepare_cpus if we've built for SMP, but * we still need to set it up for PM and firmware ops if not. */ - if (!IS_ENABLED(SMP)) + if (!IS_ENABLED(CONFIG_SMP)) exynos_sysram_init(); exynos_cpuidle_init(); diff --git a/arch/arm/mach-exynos/firmware.c b/arch/arm/mach-exynos/firmware.c index eb91d2350f8c..e8797bb78871 100644 --- a/arch/arm/mach-exynos/firmware.c +++ b/arch/arm/mach-exynos/firmware.c @@ -57,8 +57,13 @@ static int exynos_set_cpu_boot_addr(int cpu, unsigned long boot_addr) boot_reg = sysram_ns_base_addr + 0x1c; - if (!soc_is_exynos4212() && !soc_is_exynos3250()) - boot_reg += 4*cpu; + /* + * Almost all Exynos-series of SoCs that run in secure mode don't need + * additional offset for every CPU, with Exynos4412 being the only + * exception. + */ + if (soc_is_exynos4412()) + boot_reg += 4 * cpu; __raw_writel(boot_addr, boot_reg); return 0; diff --git a/arch/arm/mach-exynos/hotplug.c b/arch/arm/mach-exynos/hotplug.c index 8a134d019cb3..920a4baa53cd 100644 --- a/arch/arm/mach-exynos/hotplug.c +++ b/arch/arm/mach-exynos/hotplug.c @@ -40,15 +40,17 @@ static inline void cpu_leave_lowpower(void) static inline void platform_do_lowpower(unsigned int cpu, int *spurious) { + u32 mpidr = cpu_logical_map(cpu); + u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); + for (;;) { - /* make cpu1 to be turned off at next WFI command */ - if (cpu == 1) - exynos_cpu_power_down(cpu); + /* Turn the CPU off on next WFI instruction. */ + exynos_cpu_power_down(core_id); wfi(); - if (pen_release == cpu_logical_map(cpu)) { + if (pen_release == core_id) { /* * OK, proper wakeup, we're done */ diff --git a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c index 1c8d31e39520..50b9aad5e27b 100644 --- a/arch/arm/mach-exynos/platsmp.c +++ b/arch/arm/mach-exynos/platsmp.c @@ -90,7 +90,8 @@ static void exynos_secondary_init(unsigned int cpu) static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) { unsigned long timeout; - unsigned long phys_cpu = cpu_logical_map(cpu); + u32 mpidr = cpu_logical_map(cpu); + u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); int ret = -ENOSYS; /* @@ -104,17 +105,18 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) * the holding pen - release it, then wait for it to flag * that it has been released by resetting pen_release. * - * Note that "pen_release" is the hardware CPU ID, whereas + * Note that "pen_release" is the hardware CPU core ID, whereas * "cpu" is Linux's internal ID. 
*/ - write_pen_release(phys_cpu); + write_pen_release(core_id); - if (!exynos_cpu_power_state(cpu)) { - exynos_cpu_power_up(cpu); + if (!exynos_cpu_power_state(core_id)) { + exynos_cpu_power_up(core_id); timeout = 10; /* wait max 10 ms until cpu1 is on */ - while (exynos_cpu_power_state(cpu) != S5P_CORE_LOCAL_PWR_EN) { + while (exynos_cpu_power_state(core_id) + != S5P_CORE_LOCAL_PWR_EN) { if (timeout-- == 0) break; @@ -145,20 +147,20 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) * Try to set boot address using firmware first * and fall back to boot register if it fails. */ - ret = call_firmware_op(set_cpu_boot_addr, phys_cpu, boot_addr); + ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr); if (ret && ret != -ENOSYS) goto fail; if (ret == -ENOSYS) { - void __iomem *boot_reg = cpu_boot_reg(phys_cpu); + void __iomem *boot_reg = cpu_boot_reg(core_id); if (IS_ERR(boot_reg)) { ret = PTR_ERR(boot_reg); goto fail; } - __raw_writel(boot_addr, cpu_boot_reg(phys_cpu)); + __raw_writel(boot_addr, cpu_boot_reg(core_id)); } - call_firmware_op(cpu_boot, phys_cpu); + call_firmware_op(cpu_boot, core_id); arch_send_wakeup_ipi_mask(cpumask_of(cpu)); @@ -227,22 +229,24 @@ static void __init exynos_smp_prepare_cpus(unsigned int max_cpus) * boot register if it fails. */ for (i = 1; i < max_cpus; ++i) { - unsigned long phys_cpu; unsigned long boot_addr; + u32 mpidr; + u32 core_id; int ret; - phys_cpu = cpu_logical_map(i); + mpidr = cpu_logical_map(i); + core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); boot_addr = virt_to_phys(exynos4_secondary_startup); - ret = call_firmware_op(set_cpu_boot_addr, phys_cpu, boot_addr); + ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr); if (ret && ret != -ENOSYS) break; if (ret == -ENOSYS) { - void __iomem *boot_reg = cpu_boot_reg(phys_cpu); + void __iomem *boot_reg = cpu_boot_reg(core_id); if (IS_ERR(boot_reg)) break; - __raw_writel(boot_addr, cpu_boot_reg(phys_cpu)); + __raw_writel(boot_addr, cpu_boot_reg(core_id)); } } } diff --git a/arch/arm/mach-exynos/pm_domains.c b/arch/arm/mach-exynos/pm_domains.c index fe6570ebbdde..797cb134bfff 100644 --- a/arch/arm/mach-exynos/pm_domains.c +++ b/arch/arm/mach-exynos/pm_domains.c @@ -17,6 +17,7 @@ #include <linux/err.h> #include <linux/slab.h> #include <linux/pm_domain.h> +#include <linux/clk.h> #include <linux/delay.h> #include <linux/of_address.h> #include <linux/of_platform.h> @@ -24,6 +25,8 @@ #include "regs-pmu.h" +#define MAX_CLK_PER_DOMAIN 4 + /* * Exynos specific wrapper around the generic power domain */ @@ -32,6 +35,9 @@ struct exynos_pm_domain { char const *name; bool is_off; struct generic_pm_domain pd; + struct clk *oscclk; + struct clk *clk[MAX_CLK_PER_DOMAIN]; + struct clk *pclk[MAX_CLK_PER_DOMAIN]; }; static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on) @@ -44,6 +50,19 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on) pd = container_of(domain, struct exynos_pm_domain, pd); base = pd->base; + /* Set oscclk before powering off a domain*/ + if (!power_on) { + int i; + + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { + if (IS_ERR(pd->clk[i])) + break; + if (clk_set_parent(pd->clk[i], pd->oscclk)) + pr_err("%s: error setting oscclk as parent to clock %d\n", + pd->name, i); + } + } + pwr = power_on ? 
S5P_INT_LOCAL_PWR_EN : 0; __raw_writel(pwr, base); @@ -60,6 +79,20 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on) cpu_relax(); usleep_range(80, 100); } + + /* Restore clocks after powering on a domain*/ + if (power_on) { + int i; + + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { + if (IS_ERR(pd->clk[i])) + break; + if (clk_set_parent(pd->clk[i], pd->pclk[i])) + pr_err("%s: error setting parent to clock%d\n", + pd->name, i); + } + } + return 0; } @@ -152,9 +185,11 @@ static __init int exynos4_pm_init_power_domain(void) for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") { struct exynos_pm_domain *pd; - int on; + int on, i; + struct device *dev; pdev = of_find_device_by_node(np); + dev = &pdev->dev; pd = kzalloc(sizeof(*pd), GFP_KERNEL); if (!pd) { @@ -170,6 +205,30 @@ static __init int exynos4_pm_init_power_domain(void) pd->pd.power_on = exynos_pd_power_on; pd->pd.of_node = np; + pd->oscclk = clk_get(dev, "oscclk"); + if (IS_ERR(pd->oscclk)) + goto no_clk; + + for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) { + char clk_name[8]; + + snprintf(clk_name, sizeof(clk_name), "clk%d", i); + pd->clk[i] = clk_get(dev, clk_name); + if (IS_ERR(pd->clk[i])) + break; + snprintf(clk_name, sizeof(clk_name), "pclk%d", i); + pd->pclk[i] = clk_get(dev, clk_name); + if (IS_ERR(pd->pclk[i])) { + clk_put(pd->clk[i]); + pd->clk[i] = ERR_PTR(-EINVAL); + break; + } + } + + if (IS_ERR(pd->clk[0])) + clk_put(pd->oscclk); + +no_clk: platform_set_drvdata(pdev, pd); on = __raw_readl(pd->base + 0x4) & S5P_INT_LOCAL_PWR_EN; diff --git a/arch/arm/mach-imx/clk-gate2.c b/arch/arm/mach-imx/clk-gate2.c index 4ba587da89d2..84acdfd1d715 100644 --- a/arch/arm/mach-imx/clk-gate2.c +++ b/arch/arm/mach-imx/clk-gate2.c @@ -67,8 +67,12 @@ static void clk_gate2_disable(struct clk_hw *hw) spin_lock_irqsave(gate->lock, flags); - if (gate->share_count && --(*gate->share_count) > 0) - goto out; + if (gate->share_count) { + if (WARN_ON(*gate->share_count == 0)) + goto out; + else if (--(*gate->share_count) > 0) + goto out; + } reg = readl(gate->reg); reg &= ~(3 << gate->bit_idx); @@ -78,19 +82,26 @@ out: spin_unlock_irqrestore(gate->lock, flags); } -static int clk_gate2_is_enabled(struct clk_hw *hw) +static int clk_gate2_reg_is_enabled(void __iomem *reg, u8 bit_idx) { - u32 reg; - struct clk_gate2 *gate = to_clk_gate2(hw); + u32 val = readl(reg); - reg = readl(gate->reg); - - if (((reg >> gate->bit_idx) & 1) == 1) + if (((val >> bit_idx) & 1) == 1) return 1; return 0; } +static int clk_gate2_is_enabled(struct clk_hw *hw) +{ + struct clk_gate2 *gate = to_clk_gate2(hw); + + if (gate->share_count) + return !!(*gate->share_count); + else + return clk_gate2_reg_is_enabled(gate->reg, gate->bit_idx); +} + static struct clk_ops clk_gate2_ops = { .enable = clk_gate2_enable, .disable = clk_gate2_disable, @@ -116,6 +127,10 @@ struct clk *clk_register_gate2(struct device *dev, const char *name, gate->bit_idx = bit_idx; gate->flags = clk_gate2_flags; gate->lock = lock; + + /* Initialize share_count per hardware state */ + if (share_count) + *share_count = clk_gate2_reg_is_enabled(reg, bit_idx) ? 
1 : 0; gate->share_count = share_count; init.name = name; diff --git a/arch/arm/mach-imx/clk-imx6q.c b/arch/arm/mach-imx/clk-imx6q.c index 8e795dea02ec..8556c787e59c 100644 --- a/arch/arm/mach-imx/clk-imx6q.c +++ b/arch/arm/mach-imx/clk-imx6q.c @@ -70,7 +70,7 @@ static const char *cko_sels[] = { "cko1", "cko2", }; static const char *lvds_sels[] = { "dummy", "dummy", "dummy", "dummy", "dummy", "dummy", "pll4_audio", "pll5_video", "pll8_mlb", "enet_ref", - "pcie_ref", "sata_ref", + "pcie_ref_125m", "sata_ref_100m", }; enum mx6q_clks { @@ -491,7 +491,7 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node) /* All existing boards with PCIe use LVDS1 */ if (IS_ENABLED(CONFIG_PCI_IMX6)) - clk_set_parent(clk[lvds1_sel], clk[sata_ref]); + clk_set_parent(clk[lvds1_sel], clk[sata_ref_100m]); /* Set initial power mode */ imx6q_set_lpm(WAIT_CLOCKED); diff --git a/arch/arm/mach-mvebu/coherency.c b/arch/arm/mach-mvebu/coherency.c index 477202fd39cc..2bdc3233abe2 100644 --- a/arch/arm/mach-mvebu/coherency.c +++ b/arch/arm/mach-mvebu/coherency.c @@ -292,6 +292,10 @@ static struct notifier_block mvebu_hwcc_nb = { .notifier_call = mvebu_hwcc_notifier, }; +static struct notifier_block mvebu_hwcc_pci_nb = { + .notifier_call = mvebu_hwcc_notifier, +}; + static void __init armada_370_coherency_init(struct device_node *np) { struct resource res; @@ -427,7 +431,7 @@ static int __init coherency_pci_init(void) { if (coherency_available()) bus_register_notifier(&pci_bus_type, - &mvebu_hwcc_nb); + &mvebu_hwcc_pci_nb); return 0; } diff --git a/arch/arm/mach-mvebu/headsmp-a9.S b/arch/arm/mach-mvebu/headsmp-a9.S index 5925366bc03c..da5bb292b91c 100644 --- a/arch/arm/mach-mvebu/headsmp-a9.S +++ b/arch/arm/mach-mvebu/headsmp-a9.S @@ -15,6 +15,8 @@ #include <linux/linkage.h> #include <linux/init.h> +#include <asm/assembler.h> + __CPUINIT #define CPU_RESUME_ADDR_REG 0xf10182d4 @@ -22,13 +24,18 @@ .global armada_375_smp_cpu1_enable_code_end armada_375_smp_cpu1_enable_code_start: - ldr r0, [pc, #4] +ARM_BE8(setend be) + adr r0, 1f + ldr r0, [r0] ldr r1, [r0] +ARM_BE8(rev r1, r1) mov pc, r1 +1: .word CPU_RESUME_ADDR_REG armada_375_smp_cpu1_enable_code_end: ENTRY(mvebu_cortex_a9_secondary_startup) +ARM_BE8(setend be) bl v7_invalidate_l1 b secondary_startup ENDPROC(mvebu_cortex_a9_secondary_startup) diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c index a1d407c0febe..25aa8237d668 100644 --- a/arch/arm/mach-mvebu/pmsu.c +++ b/arch/arm/mach-mvebu/pmsu.c @@ -201,12 +201,12 @@ static noinline int do_armada_370_xp_cpu_suspend(unsigned long deepidle) /* Test the CR_C bit and set it if it was cleared */ asm volatile( - "mrc p15, 0, %0, c1, c0, 0 \n\t" - "tst %0, #(1 << 2) \n\t" - "orreq %0, %0, #(1 << 2) \n\t" - "mcreq p15, 0, %0, c1, c0, 0 \n\t" + "mrc p15, 0, r0, c1, c0, 0 \n\t" + "tst r0, #(1 << 2) \n\t" + "orreq r0, r0, #(1 << 2) \n\t" + "mcreq p15, 0, r0, c1, c0, 0 \n\t" "isb " - : : "r" (0)); + : : : "r0"); pr_warn("Failed to suspend the system\n"); diff --git a/arch/arm/mach-omap2/clkt_dpll.c b/arch/arm/mach-omap2/clkt_dpll.c index 332af927f4d3..67fd26a18441 100644 --- a/arch/arm/mach-omap2/clkt_dpll.c +++ b/arch/arm/mach-omap2/clkt_dpll.c @@ -76,7 +76,7 @@ * (assuming that it is counting N upwards), or -2 if the enclosing loop * should skip to the next iteration (again assuming N is increasing). 
*/ -static int _dpll_test_fint(struct clk_hw_omap *clk, u8 n) +static int _dpll_test_fint(struct clk_hw_omap *clk, unsigned int n) { struct dpll_data *dd; long fint, fint_min, fint_max; diff --git a/arch/arm/mach-omap2/cm-regbits-34xx.h b/arch/arm/mach-omap2/cm-regbits-34xx.h index 04dab2fcf862..ee6c784cd6b7 100644 --- a/arch/arm/mach-omap2/cm-regbits-34xx.h +++ b/arch/arm/mach-omap2/cm-regbits-34xx.h @@ -26,11 +26,14 @@ #define OMAP3430_EN_WDT3_SHIFT 12 #define OMAP3430_CM_FCLKEN_IVA2_EN_IVA2_MASK (1 << 0) #define OMAP3430_CM_FCLKEN_IVA2_EN_IVA2_SHIFT 0 +#define OMAP3430_IVA2_DPLL_FREQSEL_SHIFT 4 #define OMAP3430_IVA2_DPLL_FREQSEL_MASK (0xf << 4) #define OMAP3430_EN_IVA2_DPLL_DRIFTGUARD_SHIFT 3 +#define OMAP3430_EN_IVA2_DPLL_SHIFT 0 #define OMAP3430_EN_IVA2_DPLL_MASK (0x7 << 0) #define OMAP3430_ST_IVA2_SHIFT 0 #define OMAP3430_ST_IVA2_CLK_MASK (1 << 0) +#define OMAP3430_AUTO_IVA2_DPLL_SHIFT 0 #define OMAP3430_AUTO_IVA2_DPLL_MASK (0x7 << 0) #define OMAP3430_IVA2_CLK_SRC_SHIFT 19 #define OMAP3430_IVA2_CLK_SRC_WIDTH 3 diff --git a/arch/arm/mach-omap2/common.h b/arch/arm/mach-omap2/common.h index b2d252bf4a53..dc571f1d3b8a 100644 --- a/arch/arm/mach-omap2/common.h +++ b/arch/arm/mach-omap2/common.h @@ -162,7 +162,8 @@ static inline void omap3xxx_restart(enum reboot_mode mode, const char *cmd) } #endif -#if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) +#if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ + defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM43XX) void omap44xx_restart(enum reboot_mode mode, const char *cmd); #else static inline void omap44xx_restart(enum reboot_mode mode, const char *cmd) diff --git a/arch/arm/mach-omap2/devices.c b/arch/arm/mach-omap2/devices.c index 592ba0a0ecf3..b6f8f348296e 100644 --- a/arch/arm/mach-omap2/devices.c +++ b/arch/arm/mach-omap2/devices.c @@ -297,33 +297,6 @@ static void omap_init_audio(void) static inline void omap_init_audio(void) {} #endif -#if defined(CONFIG_SND_OMAP_SOC_OMAP_HDMI) || \ - defined(CONFIG_SND_OMAP_SOC_OMAP_HDMI_MODULE) - -static struct platform_device omap_hdmi_audio = { - .name = "omap-hdmi-audio", - .id = -1, -}; - -static void __init omap_init_hdmi_audio(void) -{ - struct omap_hwmod *oh; - struct platform_device *pdev; - - oh = omap_hwmod_lookup("dss_hdmi"); - if (!oh) - return; - - pdev = omap_device_build("omap-hdmi-audio-dai", -1, oh, NULL, 0); - WARN(IS_ERR(pdev), - "Can't build omap_device for omap-hdmi-audio-dai.\n"); - - platform_device_register(&omap_hdmi_audio); -} -#else -static inline void omap_init_hdmi_audio(void) {} -#endif - #if defined(CONFIG_SPI_OMAP24XX) || defined(CONFIG_SPI_OMAP24XX_MODULE) #include <linux/platform_data/spi-omap2-mcspi.h> @@ -459,7 +432,6 @@ static int __init omap2_init_devices(void) */ omap_init_audio(); omap_init_camera(); - omap_init_hdmi_audio(); omap_init_mbox(); /* If dtb is there, the devices will be created dynamically */ if (!of_have_populated_dt()) { diff --git a/arch/arm/mach-omap2/dsp.c b/arch/arm/mach-omap2/dsp.c index b8208b4b1bd9..f7492df1cbba 100644 --- a/arch/arm/mach-omap2/dsp.c +++ b/arch/arm/mach-omap2/dsp.c @@ -29,6 +29,7 @@ #ifdef CONFIG_TIDSPBRIDGE_DVFS #include "omap-pm.h" #endif +#include "soc.h" #include <linux/platform_data/dsp-omap.h> @@ -59,6 +60,9 @@ void __init omap_dsp_reserve_sdram_memblock(void) phys_addr_t size = CONFIG_TIDSPBRIDGE_MEMPOOL_SIZE; phys_addr_t paddr; + if (!cpu_is_omap34xx()) + return; + if (!size) return; @@ -83,6 +87,9 @@ static int __init omap_dsp_init(void) int err = -ENOMEM; struct omap_dsp_platform_data *pdata 
= &omap_dsp_pdata; + if (!cpu_is_omap34xx()) + return 0; + pdata->phys_mempool_base = omap_dsp_get_mempool_base(); if (pdata->phys_mempool_base) { @@ -115,6 +122,9 @@ module_init(omap_dsp_init); static void __exit omap_dsp_exit(void) { + if (!cpu_is_omap34xx()) + return; + platform_device_unregister(omap_dsp_pdev); } module_exit(omap_dsp_exit); diff --git a/arch/arm/mach-omap2/gpmc.c b/arch/arm/mach-omap2/gpmc.c index 2c0c2816900f..8bc13380f0a0 100644 --- a/arch/arm/mach-omap2/gpmc.c +++ b/arch/arm/mach-omap2/gpmc.c @@ -1615,7 +1615,7 @@ static int gpmc_probe_dt(struct platform_device *pdev) return ret; } - for_each_child_of_node(pdev->dev.of_node, child) { + for_each_available_child_of_node(pdev->dev.of_node, child) { if (!child->name) continue; diff --git a/arch/arm/mach-omap2/omap_hwmod_7xx_data.c b/arch/arm/mach-omap2/omap_hwmod_7xx_data.c index 20b4398cec05..284324f2b98a 100644 --- a/arch/arm/mach-omap2/omap_hwmod_7xx_data.c +++ b/arch/arm/mach-omap2/omap_hwmod_7xx_data.c @@ -1268,9 +1268,6 @@ static struct omap_hwmod_class dra7xx_sata_hwmod_class = { }; /* sata */ -static struct omap_hwmod_opt_clk sata_opt_clks[] = { - { .role = "ref_clk", .clk = "sata_ref_clk" }, -}; static struct omap_hwmod dra7xx_sata_hwmod = { .name = "sata", @@ -1278,6 +1275,7 @@ static struct omap_hwmod dra7xx_sata_hwmod = { .clkdm_name = "l3init_clkdm", .flags = HWMOD_SWSUP_SIDLE | HWMOD_SWSUP_MSTANDBY, .main_clk = "func_48m_fclk", + .mpu_rt_idx = 1, .prcm = { .omap4 = { .clkctrl_offs = DRA7XX_CM_L3INIT_SATA_CLKCTRL_OFFSET, @@ -1285,8 +1283,6 @@ static struct omap_hwmod dra7xx_sata_hwmod = { .modulemode = MODULEMODE_SWCTRL, }, }, - .opt_clks = sata_opt_clks, - .opt_clks_cnt = ARRAY_SIZE(sata_opt_clks), }; /* @@ -1731,8 +1727,20 @@ static struct omap_hwmod dra7xx_uart6_hwmod = { * */ +static struct omap_hwmod_class_sysconfig dra7xx_usb_otg_ss_sysc = { + .rev_offs = 0x0000, + .sysc_offs = 0x0010, + .sysc_flags = (SYSC_HAS_DMADISABLE | SYSC_HAS_MIDLEMODE | + SYSC_HAS_SIDLEMODE), + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART | + SIDLE_SMART_WKUP | MSTANDBY_FORCE | MSTANDBY_NO | + MSTANDBY_SMART | MSTANDBY_SMART_WKUP), + .sysc_fields = &omap_hwmod_sysc_type2, +}; + static struct omap_hwmod_class dra7xx_usb_otg_ss_hwmod_class = { .name = "usb_otg_ss", + .sysc = &dra7xx_usb_otg_ss_sysc, }; /* usb_otg_ss1 */ diff --git a/arch/arm/mach-omap2/prm-regbits-34xx.h b/arch/arm/mach-omap2/prm-regbits-34xx.h index 106132db532b..cbefbd7cfdb5 100644 --- a/arch/arm/mach-omap2/prm-regbits-34xx.h +++ b/arch/arm/mach-omap2/prm-regbits-34xx.h @@ -35,6 +35,8 @@ #define OMAP3430_LOGICSTATEST_MASK (1 << 2) #define OMAP3430_LASTLOGICSTATEENTERED_MASK (1 << 2) #define OMAP3430_LASTPOWERSTATEENTERED_MASK (0x3 << 0) +#define OMAP3430_GRPSEL_MCBSP5_MASK (1 << 10) +#define OMAP3430_GRPSEL_MCBSP1_MASK (1 << 9) #define OMAP3630_GRPSEL_UART4_MASK (1 << 18) #define OMAP3430_GRPSEL_GPIO6_MASK (1 << 17) #define OMAP3430_GRPSEL_GPIO5_MASK (1 << 16) @@ -42,6 +44,10 @@ #define OMAP3430_GRPSEL_GPIO3_MASK (1 << 14) #define OMAP3430_GRPSEL_GPIO2_MASK (1 << 13) #define OMAP3430_GRPSEL_UART3_MASK (1 << 11) +#define OMAP3430_GRPSEL_GPT8_MASK (1 << 9) +#define OMAP3430_GRPSEL_GPT7_MASK (1 << 8) +#define OMAP3430_GRPSEL_GPT6_MASK (1 << 7) +#define OMAP3430_GRPSEL_GPT5_MASK (1 << 6) #define OMAP3430_GRPSEL_MCBSP4_MASK (1 << 2) #define OMAP3430_GRPSEL_MCBSP3_MASK (1 << 1) #define OMAP3430_GRPSEL_MCBSP2_MASK (1 << 0) diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c index 076172b69422..7c3fb41a462e 100644 --- a/arch/arm/mm/cache-l2x0.c +++ 
b/arch/arm/mm/cache-l2x0.c @@ -664,7 +664,7 @@ static int l2c310_cpu_enable_flz(struct notifier_block *nb, unsigned long act, v static void __init l2c310_enable(void __iomem *base, u32 aux, unsigned num_lock) { - unsigned rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_PART_MASK; + unsigned rev = readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_RTL_MASK; bool cortex_a9 = read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9; if (rev >= L310_CACHE_ID_RTL_R2P0) { diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index a474de346be6..839f48c26ef0 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -4,6 +4,7 @@ config ARM64 select ARCH_HAS_OPP select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_USE_CMPXCHG_LOCKREF + select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_WANT_OPTIONAL_GPIOLIB select ARCH_WANT_COMPAT_IPC_PARSE_VERSION select ARCH_WANT_FRAME_POINTERS diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index 993bce527b85..902eb708804a 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -56,6 +56,8 @@ #define TASK_SIZE_32 UL(0x100000000) #define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \ TASK_SIZE_32 : TASK_SIZE_64) +#define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? \ + TASK_SIZE_32 : TASK_SIZE_64) #else #define TASK_SIZE TASK_SIZE_64 #endif /* CONFIG_COMPAT */ diff --git a/arch/arm64/kernel/efi-stub.c b/arch/arm64/kernel/efi-stub.c index 60e98a639ac5..e786e6cdc400 100644 --- a/arch/arm64/kernel/efi-stub.c +++ b/arch/arm64/kernel/efi-stub.c @@ -12,8 +12,6 @@ #include <linux/efi.h> #include <linux/libfdt.h> #include <asm/sections.h> -#include <generated/compile.h> -#include <generated/utsrelease.h> /* * AArch64 requires the DTB to be 8-byte aligned in the first 512MiB from diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c index 9aecbace4128..13bbc3be6f5a 100644 --- a/arch/arm64/mm/copypage.c +++ b/arch/arm64/mm/copypage.c @@ -27,8 +27,10 @@ void __cpu_copy_user_page(void *kto, const void *kfrom, unsigned long vaddr) copy_page(kto, kfrom); __flush_dcache_area(kto, PAGE_SIZE); } +EXPORT_SYMBOL_GPL(__cpu_copy_user_page); void __cpu_clear_user_page(void *kaddr, unsigned long vaddr) { clear_page(kaddr); } +EXPORT_SYMBOL_GPL(__cpu_clear_user_page); diff --git a/arch/m68k/kernel/head.S b/arch/m68k/kernel/head.S index dbb118e1a4e0..a54788458ca3 100644 --- a/arch/m68k/kernel/head.S +++ b/arch/m68k/kernel/head.S @@ -921,7 +921,8 @@ L(nocon): jls 1f lsrl #1,%d1 1: - movel %d1,m68k_init_mapped_size + lea %pc@(m68k_init_mapped_size),%a0 + movel %d1,%a0@ mmu_map #PAGE_OFFSET,%pc@(L(phys_kernel_start)),%d1,\ %pc@(m68k_supervisor_cachemode) diff --git a/arch/m68k/kernel/time.c b/arch/m68k/kernel/time.c index 958f1adb9d0c..3857737e3958 100644 --- a/arch/m68k/kernel/time.c +++ b/arch/m68k/kernel/time.c @@ -11,6 +11,7 @@ */ #include <linux/errno.h> +#include <linux/export.h> #include <linux/module.h> #include <linux/sched.h> #include <linux/kernel.h> @@ -30,6 +31,7 @@ unsigned long (*mach_random_get_entropy)(void); +EXPORT_SYMBOL_GPL(mach_random_get_entropy); /* diff --git a/arch/parisc/kernel/hardware.c b/arch/parisc/kernel/hardware.c index 608716f8496b..af3bc359dc70 100644 --- a/arch/parisc/kernel/hardware.c +++ b/arch/parisc/kernel/hardware.c @@ -1210,7 +1210,8 @@ static struct hp_hardware hp_hardware_list[] = { {HPHW_FIO, 0x004, 0x00320, 0x0, "Metheus Frame Buffer"}, {HPHW_FIO, 0x004, 0x00340, 0x0, "BARCO CX4500 VME Grphx Cnsl"}, {HPHW_FIO, 0x004, 0x00360, 0x0, "Hughes TOG 
VME FDDI"}, - {HPHW_FIO, 0x076, 0x000AD, 0x00, "Crestone Peak RS-232"}, + {HPHW_FIO, 0x076, 0x000AD, 0x0, "Crestone Peak Core RS-232"}, + {HPHW_FIO, 0x077, 0x000AD, 0x0, "Crestone Peak Fast? Core RS-232"}, {HPHW_IOA, 0x185, 0x0000B, 0x00, "Java BC Summit Port"}, {HPHW_IOA, 0x1FF, 0x0000B, 0x00, "Hitachi Ghostview Summit Port"}, {HPHW_IOA, 0x580, 0x0000B, 0x10, "U2-IOA BC Runway Port"}, diff --git a/arch/parisc/kernel/sys_parisc32.c b/arch/parisc/kernel/sys_parisc32.c index bb9f3b64de55..93c1963d76fe 100644 --- a/arch/parisc/kernel/sys_parisc32.c +++ b/arch/parisc/kernel/sys_parisc32.c @@ -4,6 +4,7 @@ * Copyright (C) 2000-2001 Hewlett Packard Company * Copyright (C) 2000 John Marvin * Copyright (C) 2001 Matthew Wilcox + * Copyright (C) 2014 Helge Deller <deller@gmx.de> * * These routines maintain argument size conversion between 32bit and 64bit * environment. Based heavily on sys_ia32.c and sys_sparc32.c. @@ -11,44 +12,8 @@ #include <linux/compat.h> #include <linux/kernel.h> -#include <linux/sched.h> -#include <linux/fs.h> -#include <linux/mm.h> -#include <linux/file.h> -#include <linux/signal.h> -#include <linux/resource.h> -#include <linux/times.h> -#include <linux/time.h> -#include <linux/smp.h> -#include <linux/sem.h> -#include <linux/shm.h> -#include <linux/slab.h> -#include <linux/uio.h> -#include <linux/ncp_fs.h> -#include <linux/poll.h> -#include <linux/personality.h> -#include <linux/stat.h> -#include <linux/highmem.h> -#include <linux/highuid.h> -#include <linux/mman.h> -#include <linux/binfmts.h> -#include <linux/namei.h> -#include <linux/vfs.h> -#include <linux/ptrace.h> -#include <linux/swap.h> #include <linux/syscalls.h> -#include <asm/types.h> -#include <asm/uaccess.h> -#include <asm/mmu_context.h> - -#undef DEBUG - -#ifdef DEBUG -#define DBG(x) printk x -#else -#define DBG(x) -#endif asmlinkage long sys32_unimplemented(int r26, int r25, int r24, int r23, int r22, int r21, int r20) @@ -57,3 +22,12 @@ asmlinkage long sys32_unimplemented(int r26, int r25, int r24, int r23, current->comm, current->pid, r20); return -ENOSYS; } + +asmlinkage long sys32_fanotify_mark(compat_int_t fanotify_fd, compat_uint_t flags, + compat_uint_t mask0, compat_uint_t mask1, compat_int_t dfd, + const char __user * pathname) +{ + return sys_fanotify_mark(fanotify_fd, flags, + ((__u64)mask1 << 32) | mask0, + dfd, pathname); +} diff --git a/arch/parisc/kernel/syscall_table.S b/arch/parisc/kernel/syscall_table.S index c5fa7a697fba..84c5d3a58fa1 100644 --- a/arch/parisc/kernel/syscall_table.S +++ b/arch/parisc/kernel/syscall_table.S @@ -418,7 +418,7 @@ ENTRY_SAME(accept4) /* 320 */ ENTRY_SAME(prlimit64) ENTRY_SAME(fanotify_init) - ENTRY_COMP(fanotify_mark) + ENTRY_DIFF(fanotify_mark) ENTRY_COMP(clock_adjtime) ENTRY_SAME(name_to_handle_at) /* 325 */ ENTRY_COMP(open_by_handle_at) diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index bd6dd6ed3a9f..80b94b0add1f 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -145,6 +145,7 @@ config PPC select HAVE_IRQ_EXIT_ON_IRQ_STACK select ARCH_USE_CMPXCHG_LOCKREF if PPC64 select HAVE_ARCH_AUDITSYSCALL + select ARCH_SUPPORTS_ATOMIC_RMW config GENERIC_CSUM def_bool CPU_LITTLE_ENDIAN @@ -414,7 +415,7 @@ config KEXEC config CRASH_DUMP bool "Build a kdump crash kernel" depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP) - select RELOCATABLE if PPC64 || 44x || FSL_BOOKE + select RELOCATABLE if (PPC64 && !COMPILE_TEST) || 44x || FSL_BOOKE help Build a kernel suitable for use as a kdump capture kernel. 
The same kernel binary can be used as production kernel and dump @@ -1017,6 +1018,7 @@ endmenu if PPC64 config RELOCATABLE bool "Build a relocatable kernel" + depends on !COMPILE_TEST select NONSTATIC_KERNEL help This builds a kernel image that is capable of running anywhere diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h index f8d1d6dcf7db..e61f24ed4e65 100644 --- a/arch/powerpc/include/asm/mmu.h +++ b/arch/powerpc/include/asm/mmu.h @@ -19,8 +19,7 @@ #define MMU_FTR_TYPE_40x ASM_CONST(0x00000004) #define MMU_FTR_TYPE_44x ASM_CONST(0x00000008) #define MMU_FTR_TYPE_FSL_E ASM_CONST(0x00000010) -#define MMU_FTR_TYPE_3E ASM_CONST(0x00000020) -#define MMU_FTR_TYPE_47x ASM_CONST(0x00000040) +#define MMU_FTR_TYPE_47x ASM_CONST(0x00000020) /* * This is individual features @@ -106,13 +105,6 @@ MMU_FTR_CI_LARGE_PAGE #define MMU_FTRS_PA6T MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \ MMU_FTR_CI_LARGE_PAGE | MMU_FTR_NO_SLBIE_B -#define MMU_FTRS_A2 MMU_FTR_TYPE_3E | MMU_FTR_USE_TLBILX | \ - MMU_FTR_USE_TLBIVAX_BCAST | \ - MMU_FTR_LOCK_BCAST_INVAL | \ - MMU_FTR_USE_TLBRSRV | \ - MMU_FTR_USE_PAIRED_MAS | \ - MMU_FTR_TLBIEL | \ - MMU_FTR_16M_PAGE #ifndef __ASSEMBLY__ #include <asm/cputable.h> diff --git a/arch/powerpc/include/asm/perf_event_server.h b/arch/powerpc/include/asm/perf_event_server.h index 9ed737146dbb..b3e936027b26 100644 --- a/arch/powerpc/include/asm/perf_event_server.h +++ b/arch/powerpc/include/asm/perf_event_server.h @@ -61,8 +61,7 @@ struct power_pmu { #define PPMU_SIAR_VALID 0x00000010 /* Processor has SIAR Valid bit */ #define PPMU_HAS_SSLOT 0x00000020 /* Has sampled slot in MMCRA */ #define PPMU_HAS_SIER 0x00000040 /* Has SIER */ -#define PPMU_BHRB 0x00000080 /* has BHRB feature enabled */ -#define PPMU_EBB 0x00000100 /* supports event based branch */ +#define PPMU_ARCH_207S 0x00000080 /* PMC is architecture v2.07S */ /* * Values for flags to get_alternatives() diff --git a/arch/powerpc/kernel/idle_power7.S b/arch/powerpc/kernel/idle_power7.S index 2480256272d4..5cf3d367190d 100644 --- a/arch/powerpc/kernel/idle_power7.S +++ b/arch/powerpc/kernel/idle_power7.S @@ -131,7 +131,7 @@ _GLOBAL(power7_nap) _GLOBAL(power7_sleep) li r3,1 - li r4,0 + li r4,1 b power7_powersave_common /* No return */ diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 51a3ff78838a..1007fb802e6b 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -747,7 +747,7 @@ int setup_profiling_timer(unsigned int multiplier) #ifdef CONFIG_SCHED_SMT /* cpumask of CPUs with asymetric SMT dependancy */ -static const int powerpc_smt_flags(void) +static int powerpc_smt_flags(void) { int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; diff --git a/arch/powerpc/kvm/book3s_hv_interrupts.S b/arch/powerpc/kvm/book3s_hv_interrupts.S index 8c86422a1e37..731be7478b27 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupts.S +++ b/arch/powerpc/kvm/book3s_hv_interrupts.S @@ -127,11 +127,6 @@ BEGIN_FTR_SECTION stw r10, HSTATE_PMC + 24(r13) stw r11, HSTATE_PMC + 28(r13) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201) -BEGIN_FTR_SECTION - mfspr r9, SPRN_SIER - std r8, HSTATE_MMCR + 40(r13) - std r9, HSTATE_MMCR + 48(r13) -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 31: /* diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c index af3d78e19302..928ebe79668b 100644 --- a/arch/powerpc/mm/mmu_context_nohash.c +++ b/arch/powerpc/mm/mmu_context_nohash.c @@ -410,17 +410,7 @@ void __init mmu_context_init(void) } else if (mmu_has_feature(MMU_FTR_TYPE_47x)) { 
first_context = 1; last_context = 65535; - } else -#ifdef CONFIG_PPC_BOOK3E_MMU - if (mmu_has_feature(MMU_FTR_TYPE_3E)) { - u32 mmucfg = mfspr(SPRN_MMUCFG); - u32 pid_bits = (mmucfg & MMUCFG_PIDSIZE_MASK) - >> MMUCFG_PIDSIZE_SHIFT; - first_context = 1; - last_context = (1UL << (pid_bits + 1)) - 1; - } else -#endif - { + } else { first_context = 1; last_context = 255; } diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c index 6dcdadefd8d0..82e82cadcde5 100644 --- a/arch/powerpc/net/bpf_jit_comp.c +++ b/arch/powerpc/net/bpf_jit_comp.c @@ -390,12 +390,16 @@ static int bpf_jit_build_body(struct sk_filter *fp, u32 *image, case BPF_ANC | SKF_AD_VLAN_TAG: case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, vlan_tci) != 2); + BUILD_BUG_ON(VLAN_TAG_PRESENT != 0x1000); + PPC_LHZ_OFFS(r_A, r_skb, offsetof(struct sk_buff, vlan_tci)); - if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) - PPC_ANDI(r_A, r_A, VLAN_VID_MASK); - else + if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) { + PPC_ANDI(r_A, r_A, ~VLAN_TAG_PRESENT); + } else { PPC_ANDI(r_A, r_A, VLAN_TAG_PRESENT); + PPC_SRWI(r_A, r_A, 12); + } break; case BPF_ANC | SKF_AD_QUEUE: BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c index 4520c9356b54..6b0641c3f03f 100644 --- a/arch/powerpc/perf/core-book3s.c +++ b/arch/powerpc/perf/core-book3s.c @@ -485,7 +485,7 @@ static bool is_ebb_event(struct perf_event *event) * check that the PMU supports EBB, meaning those that don't can still * use bit 63 of the event code for something else if they wish. */ - return (ppmu->flags & PPMU_EBB) && + return (ppmu->flags & PPMU_ARCH_207S) && ((event->attr.config >> PERF_EVENT_CONFIG_EBB_SHIFT) & 1); } @@ -777,7 +777,7 @@ void perf_event_print_debug(void) if (ppmu->flags & PPMU_HAS_SIER) sier = mfspr(SPRN_SIER); - if (ppmu->flags & PPMU_EBB) { + if (ppmu->flags & PPMU_ARCH_207S) { pr_info("MMCR2: %016lx EBBHR: %016lx\n", mfspr(SPRN_MMCR2), mfspr(SPRN_EBBHR)); pr_info("EBBRR: %016lx BESCR: %016lx\n", @@ -996,7 +996,22 @@ static void power_pmu_read(struct perf_event *event) } while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev); local64_add(delta, &event->count); - local64_sub(delta, &event->hw.period_left); + + /* + * A number of places program the PMC with (0x80000000 - period_left). + * We never want period_left to be less than 1 because we will program + * the PMC with a value >= 0x800000000 and an edge detected PMC will + * roll around to 0 before taking an exception. We have seen this + * on POWER8. + * + * To fix this, clamp the minimum value of period_left to 1. 
+ */ + do { + prev = local64_read(&event->hw.period_left); + val = prev - delta; + if (val < 1) + val = 1; + } while (local64_cmpxchg(&event->hw.period_left, prev, val) != prev); } /* @@ -1300,6 +1315,9 @@ static void power_pmu_enable(struct pmu *pmu) write_mmcr0(cpuhw, mmcr0); + if (ppmu->flags & PPMU_ARCH_207S) + mtspr(SPRN_MMCR2, 0); + /* * Enable instruction sampling if necessary */ @@ -1696,7 +1714,7 @@ static int power_pmu_event_init(struct perf_event *event) if (has_branch_stack(event)) { /* PMU has BHRB enabled */ - if (!(ppmu->flags & PPMU_BHRB)) + if (!(ppmu->flags & PPMU_ARCH_207S)) return -EOPNOTSUPP; } diff --git a/arch/powerpc/perf/power8-pmu.c b/arch/powerpc/perf/power8-pmu.c index fe2763b6e039..639cd9156585 100644 --- a/arch/powerpc/perf/power8-pmu.c +++ b/arch/powerpc/perf/power8-pmu.c @@ -792,7 +792,7 @@ static struct power_pmu power8_pmu = { .get_constraint = power8_get_constraint, .get_alternatives = power8_get_alternatives, .disable_pmc = power8_disable_pmc, - .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_BHRB | PPMU_EBB, + .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_ARCH_207S, .n_generic = ARRAY_SIZE(power8_generic_events), .generic_events = power8_generic_events, .cache_events = &power8_cache_events, diff --git a/arch/powerpc/platforms/cell/spu_syscalls.c b/arch/powerpc/platforms/cell/spu_syscalls.c index 38e0a1a5cec3..5e6e0bad6db6 100644 --- a/arch/powerpc/platforms/cell/spu_syscalls.c +++ b/arch/powerpc/platforms/cell/spu_syscalls.c @@ -111,6 +111,7 @@ asmlinkage long sys_spu_run(int fd, __u32 __user *unpc, __u32 __user *ustatus) return ret; } +#ifdef CONFIG_COREDUMP int elf_coredump_extra_notes_size(void) { struct spufs_calls *calls; @@ -142,6 +143,7 @@ int elf_coredump_extra_notes_write(struct coredump_params *cprm) return ret; } +#endif void notify_spus_active(void) { diff --git a/arch/powerpc/platforms/cell/spufs/Makefile b/arch/powerpc/platforms/cell/spufs/Makefile index b9d5d678aa44..52a7d2596d30 100644 --- a/arch/powerpc/platforms/cell/spufs/Makefile +++ b/arch/powerpc/platforms/cell/spufs/Makefile @@ -1,8 +1,9 @@ obj-$(CONFIG_SPU_FS) += spufs.o -spufs-y += inode.o file.o context.o syscalls.o coredump.o +spufs-y += inode.o file.o context.o syscalls.o spufs-y += sched.o backing_ops.o hw_ops.o run.o gang.o spufs-y += switch.o fault.o lscsa_alloc.o +spufs-$(CONFIG_COREDUMP) += coredump.o # magic for the trace events CFLAGS_sched.o := -I$(src) diff --git a/arch/powerpc/platforms/cell/spufs/syscalls.c b/arch/powerpc/platforms/cell/spufs/syscalls.c index b045fdda4845..a87200a535fa 100644 --- a/arch/powerpc/platforms/cell/spufs/syscalls.c +++ b/arch/powerpc/platforms/cell/spufs/syscalls.c @@ -79,8 +79,10 @@ static long do_spu_create(const char __user *pathname, unsigned int flags, struct spufs_calls spufs_calls = { .create_thread = do_spu_create, .spu_run = do_spu_run, - .coredump_extra_notes_size = spufs_coredump_extra_notes_size, - .coredump_extra_notes_write = spufs_coredump_extra_notes_write, .notify_spus_active = do_notify_spus_active, .owner = THIS_MODULE, +#ifdef CONFIG_COREDUMP + .coredump_extra_notes_size = spufs_coredump_extra_notes_size, + .coredump_extra_notes_write = spufs_coredump_extra_notes_write, +#endif }; diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig index 29f2e988c56a..407c87d9879a 100644 --- a/arch/sparc/Kconfig +++ b/arch/sparc/Kconfig @@ -78,6 +78,7 @@ config SPARC64 select HAVE_C_RECORDMCOUNT select NO_BOOTMEM select HAVE_ARCH_AUDITSYSCALL + select ARCH_SUPPORTS_ATOMIC_RMW config ARCH_DEFCONFIG string diff --git 
a/arch/um/kernel/tlb.c b/arch/um/kernel/tlb.c index 9472079471bb..f1b3eb14b855 100644 --- a/arch/um/kernel/tlb.c +++ b/arch/um/kernel/tlb.c @@ -12,6 +12,7 @@ #include <mem_user.h> #include <os.h> #include <skas.h> +#include <kern_util.h> struct host_vm_change { struct host_vm_op { @@ -124,6 +125,9 @@ static int add_munmap(unsigned long addr, unsigned long len, struct host_vm_op *last; int ret = 0; + if ((addr >= STUB_START) && (addr < STUB_END)) + return -EINVAL; + if (hvc->index != 0) { last = &hvc->ops[hvc->index - 1]; if ((last->type == MUNMAP) && @@ -283,8 +287,11 @@ void fix_range_common(struct mm_struct *mm, unsigned long start_addr, /* This is not an else because ret is modified above */ if (ret) { printk(KERN_ERR "fix_range_common: failed, killing current " - "process\n"); + "process: %d\n", task_tgid_vnr(current)); + /* We are under mmap_sem, release it such that current can terminate */ + up_write(¤t->mm->mmap_sem); force_sig(SIGKILL, current); + do_signal(); } } diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c index 974b87474a99..5678c3571e7c 100644 --- a/arch/um/kernel/trap.c +++ b/arch/um/kernel/trap.c @@ -206,7 +206,7 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user, int is_write = FAULT_WRITE(fi); unsigned long address = FAULT_ADDRESS(fi); - if (regs) + if (!is_user && regs) current->thread.segv_regs = container_of(regs, struct pt_regs, regs); if (!is_user && (address >= start_vm) && (address < end_vm)) { diff --git a/arch/um/os-Linux/skas/process.c b/arch/um/os-Linux/skas/process.c index d531879a4617..908579f2b0ab 100644 --- a/arch/um/os-Linux/skas/process.c +++ b/arch/um/os-Linux/skas/process.c @@ -54,7 +54,7 @@ static int ptrace_dump_regs(int pid) void wait_stub_done(int pid) { - int n, status, err, bad_stop = 0; + int n, status, err; while (1) { CATCH_EINTR(n = waitpid(pid, &status, WUNTRACED | __WALL)); @@ -74,8 +74,6 @@ void wait_stub_done(int pid) if (((1 << WSTOPSIG(status)) & STUB_DONE_MASK) != 0) return; - else - bad_stop = 1; bad_wait: err = ptrace_dump_regs(pid); @@ -85,10 +83,7 @@ bad_wait: printk(UM_KERN_ERR "wait_stub_done : failed to wait for SIGTRAP, " "pid = %d, n = %d, errno = %d, status = 0x%x\n", pid, n, errno, status); - if (bad_stop) - kill(pid, SIGKILL); - else - fatal_sigsegv(); + fatal_sigsegv(); } extern unsigned long current_stub_stack(void); diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index a8f749ef0fdc..d24887b645dc 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -131,6 +131,7 @@ config X86 select HAVE_CC_STACKPROTECTOR select GENERIC_CPU_AUTOPROBE select HAVE_ARCH_AUDITSYSCALL + select ARCH_SUPPORTS_ATOMIC_RMW config INSTRUCTION_DECODER def_bool y diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S index 84c223479e3c..7a6d43a554d7 100644 --- a/arch/x86/boot/header.S +++ b/arch/x86/boot/header.S @@ -91,10 +91,9 @@ bs_die: .section ".bsdata", "a" bugger_off_msg: - .ascii "Direct floppy boot is not supported. 
" - .ascii "Use a boot loader program instead.\r\n" + .ascii "Use a boot loader.\r\n" .ascii "\n" - .ascii "Remove disk and press any key to reboot ...\r\n" + .ascii "Remove disk and press any key to reboot...\r\n" .byte 0 #ifdef CONFIG_EFI_STUB @@ -108,7 +107,7 @@ coff_header: #else .word 0x8664 # x86-64 #endif - .word 3 # nr_sections + .word 4 # nr_sections .long 0 # TimeDateStamp .long 0 # PointerToSymbolTable .long 1 # NumberOfSymbols @@ -250,6 +249,25 @@ section_table: .word 0 # NumberOfLineNumbers .long 0x60500020 # Characteristics (section flags) + # + # The offset & size fields are filled in by build.c. + # + .ascii ".bss" + .byte 0 + .byte 0 + .byte 0 + .byte 0 + .long 0 + .long 0x0 + .long 0 # Size of initialized data + # on disk + .long 0x0 + .long 0 # PointerToRelocations + .long 0 # PointerToLineNumbers + .word 0 # NumberOfRelocations + .word 0 # NumberOfLineNumbers + .long 0xc8000080 # Characteristics (section flags) + #endif /* CONFIG_EFI_STUB */ # Kernel attributes; used by setup. This is part 1 of the diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c index 1a2f2121cada..a7661c430cd9 100644 --- a/arch/x86/boot/tools/build.c +++ b/arch/x86/boot/tools/build.c @@ -143,7 +143,7 @@ static void usage(void) #ifdef CONFIG_EFI_STUB -static void update_pecoff_section_header(char *section_name, u32 offset, u32 size) +static void update_pecoff_section_header_fields(char *section_name, u32 vma, u32 size, u32 datasz, u32 offset) { unsigned int pe_header; unsigned short num_sections; @@ -164,10 +164,10 @@ static void update_pecoff_section_header(char *section_name, u32 offset, u32 siz put_unaligned_le32(size, section + 0x8); /* section header vma field */ - put_unaligned_le32(offset, section + 0xc); + put_unaligned_le32(vma, section + 0xc); /* section header 'size of initialised data' field */ - put_unaligned_le32(size, section + 0x10); + put_unaligned_le32(datasz, section + 0x10); /* section header 'file offset' field */ put_unaligned_le32(offset, section + 0x14); @@ -179,6 +179,11 @@ static void update_pecoff_section_header(char *section_name, u32 offset, u32 siz } } +static void update_pecoff_section_header(char *section_name, u32 offset, u32 size) +{ + update_pecoff_section_header_fields(section_name, offset, size, size, offset); +} + static void update_pecoff_setup_and_reloc(unsigned int size) { u32 setup_offset = 0x200; @@ -203,9 +208,6 @@ static void update_pecoff_text(unsigned int text_start, unsigned int file_sz) pe_header = get_unaligned_le32(&buf[0x3c]); - /* Size of image */ - put_unaligned_le32(file_sz, &buf[pe_header + 0x50]); - /* * Size of code: Subtract the size of the first sector (512 bytes) * which includes the header. 
@@ -220,6 +222,22 @@ static void update_pecoff_text(unsigned int text_start, unsigned int file_sz) update_pecoff_section_header(".text", text_start, text_sz); } +static void update_pecoff_bss(unsigned int file_sz, unsigned int init_sz) +{ + unsigned int pe_header; + unsigned int bss_sz = init_sz - file_sz; + + pe_header = get_unaligned_le32(&buf[0x3c]); + + /* Size of uninitialized data */ + put_unaligned_le32(bss_sz, &buf[pe_header + 0x24]); + + /* Size of image */ + put_unaligned_le32(init_sz, &buf[pe_header + 0x50]); + + update_pecoff_section_header_fields(".bss", file_sz, bss_sz, 0, 0); +} + static int reserve_pecoff_reloc_section(int c) { /* Reserve 0x20 bytes for .reloc section */ @@ -259,6 +277,8 @@ static void efi_stub_entry_update(void) static inline void update_pecoff_setup_and_reloc(unsigned int size) {} static inline void update_pecoff_text(unsigned int text_start, unsigned int file_sz) {} +static inline void update_pecoff_bss(unsigned int file_sz, + unsigned int init_sz) {} static inline void efi_stub_defaults(void) {} static inline void efi_stub_entry_update(void) {} @@ -310,7 +330,7 @@ static void parse_zoffset(char *fname) int main(int argc, char ** argv) { - unsigned int i, sz, setup_sectors; + unsigned int i, sz, setup_sectors, init_sz; int c; u32 sys_size; struct stat sb; @@ -376,7 +396,9 @@ int main(int argc, char ** argv) buf[0x1f1] = setup_sectors-1; put_unaligned_le32(sys_size, &buf[0x1f4]); - update_pecoff_text(setup_sectors * 512, sz + i + ((sys_size * 16) - sz)); + update_pecoff_text(setup_sectors * 512, i + (sys_size * 16)); + init_sz = get_unaligned_le32(&buf[0x260]); + update_pecoff_bss(i + (sys_size * 16), init_sz); efi_stub_entry_update(); diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c index f30cd10293f0..8626b03e83b7 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -141,7 +141,7 @@ static int sha512_ssse3_final(struct shash_desc *desc, u8 *out) /* save number of bits */ bits[1] = cpu_to_be64(sctx->count[0] << 3); - bits[0] = cpu_to_be64(sctx->count[1] << 3) | sctx->count[0] >> 61; + bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61); /* Pad out to 112 mod 128 and append length */ index = sctx->count[0] & 0x7f; diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c index f3a1f04ed4cb..584874451414 100644 --- a/arch/x86/kernel/apm_32.c +++ b/arch/x86/kernel/apm_32.c @@ -841,7 +841,6 @@ static int apm_do_idle(void) u32 eax; u8 ret = 0; int idled = 0; - int polling; int err = 0; if (!need_resched()) { diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c index adb02aa62af5..07846d738bdb 100644 --- a/arch/x86/kernel/cpu/perf_event_intel.c +++ b/arch/x86/kernel/cpu/perf_event_intel.c @@ -1382,6 +1382,15 @@ again: intel_pmu_lbr_read(); /* + * CondChgd bit 63 doesn't mean any overflow status. Ignore + * and clear the bit. 
+ */ + if (__test_and_clear_bit(63, (unsigned long *)&status)) { + if (!status) + goto done; + } + + /* * PEBS overflow sets bit 62 in the global status register */ if (__test_and_clear_bit(62, (unsigned long *)&status)) { diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c index 6afbb16e9b79..94d857fb1033 100644 --- a/arch/x86/kernel/espfix_64.c +++ b/arch/x86/kernel/espfix_64.c @@ -175,7 +175,7 @@ void init_espfix_ap(void) if (!pud_present(pud)) { pmd_p = (pmd_t *)__get_free_page(PGALLOC_GFP); pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask)); - paravirt_alloc_pud(&init_mm, __pa(pmd_p) >> PAGE_SHIFT); + paravirt_alloc_pmd(&init_mm, __pa(pmd_p) >> PAGE_SHIFT); for (n = 0; n < ESPFIX_PUD_CLONES; n++) set_pud(&pud_p[n], pud); } @@ -185,7 +185,7 @@ void init_espfix_ap(void) if (!pmd_present(pmd)) { pte_p = (pte_t *)__get_free_page(PGALLOC_GFP); pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask)); - paravirt_alloc_pmd(&init_mm, __pa(pte_p) >> PAGE_SHIFT); + paravirt_alloc_pte(&init_mm, __pa(pte_p) >> PAGE_SHIFT); for (n = 0; n < ESPFIX_PMD_CLONES; n++) set_pmd(&pmd_p[n], pmd); } @@ -193,7 +193,6 @@ void init_espfix_ap(void) pte_p = pte_offset_kernel(&pmd, addr); stack_page = (void *)__get_free_page(GFP_KERNEL); pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask)); - paravirt_alloc_pte(&init_mm, __pa(stack_page) >> PAGE_SHIFT); for (n = 0; n < ESPFIX_PTE_CLONES; n++) set_pte(&pte_p[n*PTE_STRIDE], pte); diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c index 57e5ce126d5a..ea030319b321 100644 --- a/arch/x86/kernel/tsc.c +++ b/arch/x86/kernel/tsc.c @@ -920,9 +920,9 @@ static int time_cpufreq_notifier(struct notifier_block *nb, unsigned long val, tsc_khz = cpufreq_scale(tsc_khz_ref, ref_freq, freq->new); if (!(freq->flags & CPUFREQ_CONST_LOOPS)) mark_tsc_unstable("cpufreq changes"); - } - set_cyc2ns_scale(tsc_khz, freq->cpu); + set_cyc2ns_scale(tsc_khz, freq->cpu); + } return 0; } diff --git a/arch/x86/vdso/vdso2c.h b/arch/x86/vdso/vdso2c.h index df95a2fdff73..11b65d4f9414 100644 --- a/arch/x86/vdso/vdso2c.h +++ b/arch/x86/vdso/vdso2c.h @@ -93,6 +93,9 @@ static void BITSFUNC(copy_section)(struct BITSFUNC(fake_sections) *out, uint64_t flags = GET_LE(&in->sh_flags); bool copy = flags & SHF_ALLOC && + (GET_LE(&in->sh_size) || + (GET_LE(&in->sh_type) != SHT_RELA && + GET_LE(&in->sh_type) != SHT_REL)) && strcmp(name, ".altinstructions") && strcmp(name, ".altinstr_replacement"); diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c index e1513c47872a..5a5176de8d0a 100644 --- a/arch/x86/vdso/vma.c +++ b/arch/x86/vdso/vma.c @@ -62,6 +62,9 @@ struct linux_binprm; Only used for the 64-bit and x32 vdsos. 
*/ static unsigned long vdso_addr(unsigned long start, unsigned len) { +#ifdef CONFIG_X86_32 + return 0; +#else unsigned long addr, end; unsigned offset; end = (start + PMD_SIZE - 1) & PMD_MASK; @@ -83,6 +86,7 @@ static unsigned long vdso_addr(unsigned long start, unsigned len) addr = align_vdso_addr(addr); return addr; +#endif } static int map_vdso(const struct vdso_image *image, bool calculate_addr) diff --git a/drivers/acpi/ac.c b/drivers/acpi/ac.c index c67f6f5ad611..36b0e61f9c09 100644 --- a/drivers/acpi/ac.c +++ b/drivers/acpi/ac.c @@ -30,6 +30,10 @@ #include <linux/types.h> #include <linux/dmi.h> #include <linux/delay.h> +#ifdef CONFIG_ACPI_PROCFS_POWER +#include <linux/proc_fs.h> +#include <linux/seq_file.h> +#endif #include <linux/platform_device.h> #include <linux/power_supply.h> #include <linux/acpi.h> @@ -52,6 +56,7 @@ MODULE_AUTHOR("Paul Diefenbaugh"); MODULE_DESCRIPTION("ACPI AC Adapter Driver"); MODULE_LICENSE("GPL"); + static int acpi_ac_add(struct acpi_device *device); static int acpi_ac_remove(struct acpi_device *device); static void acpi_ac_notify(struct acpi_device *device, u32 event); @@ -67,6 +72,13 @@ static int acpi_ac_resume(struct device *dev); #endif static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume); +#ifdef CONFIG_ACPI_PROCFS_POWER +extern struct proc_dir_entry *acpi_lock_ac_dir(void); +extern void *acpi_unlock_ac_dir(struct proc_dir_entry *acpi_ac_dir); +static int acpi_ac_open_fs(struct inode *inode, struct file *file); +#endif + + static int ac_sleep_before_get_state_ms; static struct acpi_driver acpi_ac_driver = { @@ -91,6 +103,16 @@ struct acpi_ac { #define to_acpi_ac(x) container_of(x, struct acpi_ac, charger) +#ifdef CONFIG_ACPI_PROCFS_POWER +static const struct file_operations acpi_ac_fops = { + .owner = THIS_MODULE, + .open = acpi_ac_open_fs, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; +#endif + /* -------------------------------------------------------------------------- AC Adapter Management -------------------------------------------------------------------------- */ @@ -143,6 +165,83 @@ static enum power_supply_property ac_props[] = { POWER_SUPPLY_PROP_ONLINE, }; +#ifdef CONFIG_ACPI_PROCFS_POWER +/* -------------------------------------------------------------------------- + FS Interface (/proc) + -------------------------------------------------------------------------- */ + +static struct proc_dir_entry *acpi_ac_dir; + +static int acpi_ac_seq_show(struct seq_file *seq, void *offset) +{ + struct acpi_ac *ac = seq->private; + + + if (!ac) + return 0; + + if (acpi_ac_get_state(ac)) { + seq_puts(seq, "ERROR: Unable to read AC Adapter state\n"); + return 0; + } + + seq_puts(seq, "state: "); + switch (ac->state) { + case ACPI_AC_STATUS_OFFLINE: + seq_puts(seq, "off-line\n"); + break; + case ACPI_AC_STATUS_ONLINE: + seq_puts(seq, "on-line\n"); + break; + default: + seq_puts(seq, "unknown\n"); + break; + } + + return 0; +} + +static int acpi_ac_open_fs(struct inode *inode, struct file *file) +{ + return single_open(file, acpi_ac_seq_show, PDE_DATA(inode)); +} + +static int acpi_ac_add_fs(struct acpi_ac *ac) +{ + struct proc_dir_entry *entry = NULL; + + printk(KERN_WARNING PREFIX "Deprecated procfs I/F for AC is loaded," + " please retry with CONFIG_ACPI_PROCFS_POWER cleared\n"); + if (!acpi_device_dir(ac->device)) { + acpi_device_dir(ac->device) = + proc_mkdir(acpi_device_bid(ac->device), acpi_ac_dir); + if (!acpi_device_dir(ac->device)) + return -ENODEV; + } + + /* 'state' [R] */ + entry = 
proc_create_data(ACPI_AC_FILE_STATE, + S_IRUGO, acpi_device_dir(ac->device), + &acpi_ac_fops, ac); + if (!entry) + return -ENODEV; + return 0; +} + +static int acpi_ac_remove_fs(struct acpi_ac *ac) +{ + + if (acpi_device_dir(ac->device)) { + remove_proc_entry(ACPI_AC_FILE_STATE, + acpi_device_dir(ac->device)); + remove_proc_entry(acpi_device_bid(ac->device), acpi_ac_dir); + acpi_device_dir(ac->device) = NULL; + } + + return 0; +} +#endif + /* -------------------------------------------------------------------------- Driver Model -------------------------------------------------------------------------- */ @@ -243,6 +342,11 @@ static int acpi_ac_add(struct acpi_device *device) goto end; ac->charger.name = acpi_device_bid(device); +#ifdef CONFIG_ACPI_PROCFS_POWER + result = acpi_ac_add_fs(ac); + if (result) + goto end; +#endif ac->charger.type = POWER_SUPPLY_TYPE_MAINS; ac->charger.properties = ac_props; ac->charger.num_properties = ARRAY_SIZE(ac_props); @@ -258,8 +362,12 @@ static int acpi_ac_add(struct acpi_device *device) ac->battery_nb.notifier_call = acpi_ac_battery_notify; register_acpi_notifier(&ac->battery_nb); end: - if (result) + if (result) { +#ifdef CONFIG_ACPI_PROCFS_POWER + acpi_ac_remove_fs(ac); +#endif kfree(ac); + } dmi_check_system(ac_dmi_table); return result; @@ -303,6 +411,10 @@ static int acpi_ac_remove(struct acpi_device *device) power_supply_unregister(&ac->charger); unregister_acpi_notifier(&ac->battery_nb); +#ifdef CONFIG_ACPI_PROCFS_POWER + acpi_ac_remove_fs(ac); +#endif + kfree(ac); return 0; @@ -315,9 +427,20 @@ static int __init acpi_ac_init(void) if (acpi_disabled) return -ENODEV; +#ifdef CONFIG_ACPI_PROCFS_POWER + acpi_ac_dir = acpi_lock_ac_dir(); + if (!acpi_ac_dir) + return -ENODEV; +#endif + + result = acpi_bus_register_driver(&acpi_ac_driver); - if (result < 0) + if (result < 0) { +#ifdef CONFIG_ACPI_PROCFS_POWER + acpi_unlock_ac_dir(acpi_ac_dir); +#endif return -ENODEV; + } return 0; } @@ -325,6 +448,9 @@ static int __init acpi_ac_init(void) static void __exit acpi_ac_exit(void) { acpi_bus_unregister_driver(&acpi_ac_driver); +#ifdef CONFIG_ACPI_PROCFS_POWER + acpi_unlock_ac_dir(acpi_ac_dir); +#endif } module_init(acpi_ac_init); module_exit(acpi_ac_exit); diff --git a/drivers/acpi/acpi_pnp.c b/drivers/acpi/acpi_pnp.c index 6703c1fd993a..4ddb0dca56f6 100644 --- a/drivers/acpi/acpi_pnp.c +++ b/drivers/acpi/acpi_pnp.c @@ -14,6 +14,8 @@ #include <linux/module.h> static const struct acpi_device_id acpi_pnp_device_ids[] = { + /* soc_button_array */ + {"PNP0C40"}, /* pata_isapnp */ {"PNP0600"}, /* Generic ESDI/IDE/ATA compatible hard disk controller */ /* floppy */ diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c index 0d7116f34b95..130f513e08c9 100644 --- a/drivers/acpi/battery.c +++ b/drivers/acpi/battery.c @@ -35,6 +35,7 @@ #include <linux/delay.h> #include <linux/slab.h> #include <linux/suspend.h> +#include <linux/delay.h> #include <asm/unaligned.h> #ifdef CONFIG_ACPI_PROCFS_POWER @@ -534,6 +535,20 @@ static int acpi_battery_get_state(struct acpi_battery *battery) " invalid.\n"); } + /* + * When fully charged, some batteries wrongly report + * capacity_now = design_capacity instead of = full_charge_capacity + */ + if (battery->capacity_now > battery->full_charge_capacity + && battery->full_charge_capacity != ACPI_BATTERY_VALUE_UNKNOWN) { + battery->capacity_now = battery->full_charge_capacity; + if (battery->capacity_now != battery->design_capacity) + printk_once(KERN_WARNING FW_BUG + "battery: reported current charge level (%d) " + "is higher 
than reported maximum charge level (%d).\n",
+ battery->capacity_now, battery->full_charge_capacity);
+ }
+
 if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags) &&
 battery->capacity_now >= 0 && battery->capacity_now <= 100)
 battery->capacity_now = (battery->capacity_now *
@@ -1151,6 +1166,28 @@ static struct dmi_system_id bat_dmi_table[] = {
 {},
 };
 
+/*
+ * Some machines' (e.g. Lenovo Z480) ECs are not stable during boot-up,
+ * which sometimes causes the battery driver to fail to probe because
+ * battery information cannot be read from the EC. After several retries
+ * the operation may succeed, so retry here with a 20ms sleep between
+ * attempts.
+ */
+static int acpi_battery_update_retry(struct acpi_battery *battery)
+{
+ int retry, ret;
+
+ for (retry = 5; retry; retry--) {
+ ret = acpi_battery_update(battery, false);
+ if (!ret)
+ break;
+
+ msleep(20);
+ }
+ return ret;
+}
+
 static int acpi_battery_add(struct acpi_device *device)
 {
 int result = 0;
@@ -1169,9 +1206,11 @@ static int acpi_battery_add(struct acpi_device *device)
 mutex_init(&battery->sysfs_lock);
 if (acpi_has_method(battery->device->handle, "_BIX"))
 set_bit(ACPI_BATTERY_XINFO_PRESENT, &battery->flags);
- result = acpi_battery_update(battery, false);
+
+ result = acpi_battery_update_retry(battery);
 if (result)
 goto fail;
+
 #ifdef CONFIG_ACPI_PROCFS_POWER
 result = acpi_battery_add_fs(device);
 #endif
diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index ad11ba4a412d..a66ab658abbc 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -1,11 +1,14 @@
 /*
- * ec.c - ACPI Embedded Controller Driver (v2.1)
+ * ec.c - ACPI Embedded Controller Driver (v2.2)
 *
- * Copyright (C) 2006-2008 Alexey Starikovskiy <astarikovskiy@suse.de>
- * Copyright (C) 2006 Denis Sadykov <denis.m.sadykov@intel.com>
- * Copyright (C) 2004 Luming Yu <luming.yu@intel.com>
- * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>
- * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
+ * Copyright (C) 2001-2014 Intel Corporation
+ * Author: 2014 Lv Zheng <lv.zheng@intel.com>
+ * 2006, 2007 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com>
+ * 2006 Denis Sadykov <denis.m.sadykov@intel.com>
+ * 2004 Luming Yu <luming.yu@intel.com>
+ * 2001, 2002 Andy Grover <andrew.grover@intel.com>
+ * 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
+ * Copyright (C) 2008 Alexey Starikovskiy <astarikovskiy@suse.de>
 *
 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 *
@@ -52,6 +55,7 @@
 /* EC status register */
 #define ACPI_EC_FLAG_OBF 0x01 /* Output buffer full */
 #define ACPI_EC_FLAG_IBF 0x02 /* Input buffer full */
+#define ACPI_EC_FLAG_CMD 0x08 /* Input buffer contains a command */
 #define ACPI_EC_FLAG_BURST 0x10 /* burst mode */
 #define ACPI_EC_FLAG_SCI 0x20 /* EC-SCI occurred */
@@ -78,6 +82,9 @@ enum {
 EC_FLAGS_BLOCKED, /* Transactions are blocked */
 };
 
+#define ACPI_EC_COMMAND_POLL 0x01 /* Available for command byte */
+#define ACPI_EC_COMMAND_COMPLETE 0x02 /* Completed last byte */
+
 /* ec.c is compiled in acpi namespace so this shows up as acpi.ec_delay param */
 static unsigned int ec_delay __read_mostly = ACPI_EC_DELAY;
 module_param(ec_delay, uint, 0644);
@@ -109,7 +116,7 @@ struct transaction {
 u8 ri;
 u8 wlen;
 u8 rlen;
- bool done;
+ u8 flags;
 };
 
 struct acpi_ec *boot_ec, *first_ec;
@@ -127,83 +134,104 @@ static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
 
 static inline u8 acpi_ec_read_status(struct acpi_ec *ec)
 {
 u8 x = 
inb(ec->command_addr); - pr_debug("---> status = 0x%2.2x\n", x); + pr_debug("EC_SC(R) = 0x%2.2x " + "SCI_EVT=%d BURST=%d CMD=%d IBF=%d OBF=%d\n", + x, + !!(x & ACPI_EC_FLAG_SCI), + !!(x & ACPI_EC_FLAG_BURST), + !!(x & ACPI_EC_FLAG_CMD), + !!(x & ACPI_EC_FLAG_IBF), + !!(x & ACPI_EC_FLAG_OBF)); return x; } static inline u8 acpi_ec_read_data(struct acpi_ec *ec) { u8 x = inb(ec->data_addr); - pr_debug("---> data = 0x%2.2x\n", x); + pr_debug("EC_DATA(R) = 0x%2.2x\n", x); return x; } static inline void acpi_ec_write_cmd(struct acpi_ec *ec, u8 command) { - pr_debug("<--- command = 0x%2.2x\n", command); + pr_debug("EC_SC(W) = 0x%2.2x\n", command); outb(command, ec->command_addr); } static inline void acpi_ec_write_data(struct acpi_ec *ec, u8 data) { - pr_debug("<--- data = 0x%2.2x\n", data); + pr_debug("EC_DATA(W) = 0x%2.2x\n", data); outb(data, ec->data_addr); } -static int ec_transaction_done(struct acpi_ec *ec) +static int ec_transaction_completed(struct acpi_ec *ec) { unsigned long flags; int ret = 0; spin_lock_irqsave(&ec->lock, flags); - if (!ec->curr || ec->curr->done) + if (ec->curr && (ec->curr->flags & ACPI_EC_COMMAND_COMPLETE)) ret = 1; spin_unlock_irqrestore(&ec->lock, flags); return ret; } -static void start_transaction(struct acpi_ec *ec) +static bool advance_transaction(struct acpi_ec *ec) { - ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0; - ec->curr->done = false; - acpi_ec_write_cmd(ec, ec->curr->command); -} - -static void advance_transaction(struct acpi_ec *ec, u8 status) -{ - unsigned long flags; struct transaction *t; + u8 status; + bool wakeup = false; - spin_lock_irqsave(&ec->lock, flags); + pr_debug("===== %s =====\n", in_interrupt() ? "IRQ" : "TASK"); + status = acpi_ec_read_status(ec); t = ec->curr; if (!t) - goto unlock; - if (t->wlen > t->wi) { - if ((status & ACPI_EC_FLAG_IBF) == 0) - acpi_ec_write_data(ec, - t->wdata[t->wi++]); - else - goto err; - } else if (t->rlen > t->ri) { - if ((status & ACPI_EC_FLAG_OBF) == 1) { - t->rdata[t->ri++] = acpi_ec_read_data(ec); - if (t->rlen == t->ri) - t->done = true; + goto err; + if (t->flags & ACPI_EC_COMMAND_POLL) { + if (t->wlen > t->wi) { + if ((status & ACPI_EC_FLAG_IBF) == 0) + acpi_ec_write_data(ec, t->wdata[t->wi++]); + else + goto err; + } else if (t->rlen > t->ri) { + if ((status & ACPI_EC_FLAG_OBF) == 1) { + t->rdata[t->ri++] = acpi_ec_read_data(ec); + if (t->rlen == t->ri) { + t->flags |= ACPI_EC_COMMAND_COMPLETE; + wakeup = true; + } + } else + goto err; + } else if (t->wlen == t->wi && + (status & ACPI_EC_FLAG_IBF) == 0) { + t->flags |= ACPI_EC_COMMAND_COMPLETE; + wakeup = true; + } + return wakeup; + } else { + if ((status & ACPI_EC_FLAG_IBF) == 0) { + acpi_ec_write_cmd(ec, t->command); + t->flags |= ACPI_EC_COMMAND_POLL; } else goto err; - } else if (t->wlen == t->wi && - (status & ACPI_EC_FLAG_IBF) == 0) - t->done = true; - goto unlock; + return wakeup; + } err: /* * If SCI bit is set, then don't think it's a false IRQ * otherwise will take a not handled IRQ as a false one. 
*/ - if (in_interrupt() && !(status & ACPI_EC_FLAG_SCI)) - ++t->irq_count; + if (!(status & ACPI_EC_FLAG_SCI)) { + if (in_interrupt() && t) + ++t->irq_count; + } + return wakeup; +} -unlock: - spin_unlock_irqrestore(&ec->lock, flags); +static void start_transaction(struct acpi_ec *ec) +{ + ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0; + ec->curr->flags = 0; + (void)advance_transaction(ec); } static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data); @@ -228,15 +256,17 @@ static int ec_poll(struct acpi_ec *ec) /* don't sleep with disabled interrupts */ if (EC_FLAGS_MSI || irqs_disabled()) { udelay(ACPI_EC_MSI_UDELAY); - if (ec_transaction_done(ec)) + if (ec_transaction_completed(ec)) return 0; } else { if (wait_event_timeout(ec->wait, - ec_transaction_done(ec), + ec_transaction_completed(ec), msecs_to_jiffies(1))) return 0; } - advance_transaction(ec, acpi_ec_read_status(ec)); + spin_lock_irqsave(&ec->lock, flags); + (void)advance_transaction(ec); + spin_unlock_irqrestore(&ec->lock, flags); } while (time_before(jiffies, delay)); pr_debug("controller reset, restart transaction\n"); spin_lock_irqsave(&ec->lock, flags); @@ -268,23 +298,6 @@ static int acpi_ec_transaction_unlocked(struct acpi_ec *ec, return ret; } -static int ec_check_ibf0(struct acpi_ec *ec) -{ - u8 status = acpi_ec_read_status(ec); - return (status & ACPI_EC_FLAG_IBF) == 0; -} - -static int ec_wait_ibf0(struct acpi_ec *ec) -{ - unsigned long delay = jiffies + msecs_to_jiffies(ec_delay); - /* interrupt wait manually if GPE mode is not active */ - while (time_before(jiffies, delay)) - if (wait_event_timeout(ec->wait, ec_check_ibf0(ec), - msecs_to_jiffies(1))) - return 0; - return -ETIME; -} - static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t) { int status; @@ -305,12 +318,6 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t) goto unlock; } } - if (ec_wait_ibf0(ec)) { - pr_err("input buffer is not empty, " - "aborting transaction\n"); - status = -ETIME; - goto end; - } pr_debug("transaction start (cmd=0x%02x, addr=0x%02x)\n", t->command, t->wdata ? 
t->wdata[0] : 0); /* disable GPE during transaction if storm is detected */ @@ -334,7 +341,6 @@ static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t) set_bit(EC_FLAGS_GPE_STORM, &ec->flags); } pr_debug("transaction end\n"); -end: if (ec->global_lock) acpi_release_global_lock(glk); unlock: @@ -634,17 +640,14 @@ static int ec_check_sci(struct acpi_ec *ec, u8 state) static u32 acpi_ec_gpe_handler(acpi_handle gpe_device, u32 gpe_number, void *data) { + unsigned long flags; struct acpi_ec *ec = data; - u8 status = acpi_ec_read_status(ec); - pr_debug("~~~> interrupt, status:0x%02x\n", status); - - advance_transaction(ec, status); - if (ec_transaction_done(ec) && - (acpi_ec_read_status(ec) & ACPI_EC_FLAG_IBF) == 0) { + spin_lock_irqsave(&ec->lock, flags); + if (advance_transaction(ec)) wake_up(&ec->wait); - ec_check_sci(ec, acpi_ec_read_status(ec)); - } + spin_unlock_irqrestore(&ec->lock, flags); + ec_check_sci(ec, acpi_ec_read_status(ec)); return ACPI_INTERRUPT_HANDLED | ACPI_REENABLE_GPE; } @@ -1066,8 +1069,10 @@ int __init acpi_ec_ecdt_probe(void) /* fall through */ } - if (EC_FLAGS_SKIP_DSDT_SCAN) + if (EC_FLAGS_SKIP_DSDT_SCAN) { + kfree(saved_ec); return -ENODEV; + } /* This workaround is needed only on some broken machines, * which require early EC, but fail to provide ECDT */ @@ -1105,6 +1110,7 @@ install: } error: kfree(boot_ec); + kfree(saved_ec); boot_ec = NULL; return -ENODEV; } diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c index 0bdacc5e26a3..2ba8f02ced36 100644 --- a/drivers/acpi/resource.c +++ b/drivers/acpi/resource.c @@ -77,7 +77,7 @@ bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res) switch (ares->type) { case ACPI_RESOURCE_TYPE_MEMORY24: memory24 = &ares->data.memory24; - if (!memory24->address_length) + if (!memory24->minimum && !memory24->address_length) return false; acpi_dev_get_memresource(res, memory24->minimum, memory24->address_length, @@ -85,7 +85,7 @@ bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res) break; case ACPI_RESOURCE_TYPE_MEMORY32: memory32 = &ares->data.memory32; - if (!memory32->address_length) + if (!memory32->minimum && !memory32->address_length) return false; acpi_dev_get_memresource(res, memory32->minimum, memory32->address_length, @@ -93,7 +93,7 @@ bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res) break; case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: fixed_memory32 = &ares->data.fixed_memory32; - if (!fixed_memory32->address_length) + if (!fixed_memory32->address && !fixed_memory32->address_length) return false; acpi_dev_get_memresource(res, fixed_memory32->address, fixed_memory32->address_length, @@ -150,7 +150,7 @@ bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res) switch (ares->type) { case ACPI_RESOURCE_TYPE_IO: io = &ares->data.io; - if (!io->address_length) + if (!io->minimum && !io->address_length) return false; acpi_dev_get_ioresource(res, io->minimum, io->address_length, @@ -158,7 +158,7 @@ bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res) break; case ACPI_RESOURCE_TYPE_FIXED_IO: fixed_io = &ares->data.fixed_io; - if (!fixed_io->address_length) + if (!fixed_io->address && !fixed_io->address_length) return false; acpi_dev_get_ioresource(res, fixed_io->address, fixed_io->address_length, diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c index fb9ffe9adc64..350d52a8f781 100644 --- a/drivers/acpi/video.c +++ b/drivers/acpi/video.c @@ -68,7 +68,7 @@ MODULE_AUTHOR("Bruno 
Ducrot"); MODULE_DESCRIPTION("ACPI Video Driver"); MODULE_LICENSE("GPL"); -static bool brightness_switch_enabled; +static bool brightness_switch_enabled = 1; module_param(brightness_switch_enabled, bool, 0644); /* @@ -241,13 +241,14 @@ static bool acpi_video_use_native_backlight(void) return use_native_backlight_dmi; } -static bool acpi_video_verify_backlight_support(void) +bool acpi_video_verify_backlight_support(void) { if (acpi_osi_is_win8() && acpi_video_use_native_backlight() && backlight_device_registered(BACKLIGHT_RAW)) return false; return acpi_video_backlight_support(); } +EXPORT_SYMBOL_GPL(acpi_video_verify_backlight_support); /* backlight device sysfs support */ static int acpi_video_get_brightness(struct backlight_device *bd) @@ -563,6 +564,14 @@ static struct dmi_system_id video_dmi_table[] __initdata = { }, }, { + .callback = video_set_use_native_backlight, + .ident = "Acer TravelMate B113", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), + DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate B113"), + }, + }, + { .callback = video_set_use_native_backlight, .ident = "HP ProBook 4340s", .matches = { @@ -572,6 +581,14 @@ static struct dmi_system_id video_dmi_table[] __initdata = { }, { .callback = video_set_use_native_backlight, + .ident = "HP ProBook 4540s", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), + DMI_MATCH(DMI_PRODUCT_VERSION, "HP ProBook 4540s"), + }, + }, + { + .callback = video_set_use_native_backlight, .ident = "HP ProBook 2013 models", .matches = { DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c index 33e3db548a29..c42feb2bacd0 100644 --- a/drivers/acpi/video_detect.c +++ b/drivers/acpi/video_detect.c @@ -166,6 +166,14 @@ static struct dmi_system_id video_detect_dmi_table[] = { DMI_MATCH(DMI_PRODUCT_NAME, "UL30A"), }, }, + { + .callback = video_detect_force_vendor, + .ident = "Dell Inspiron 5737", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), + DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 5737"), + }, + }, { }, }; diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h index 05882e4445a6..5513296e5e2e 100644 --- a/drivers/ata/ahci.h +++ b/drivers/ata/ahci.h @@ -371,7 +371,9 @@ int ahci_do_softreset(struct ata_link *link, unsigned int *class, int pmp, unsigned long deadline, int (*check_ready)(struct ata_link *link)); +unsigned int ahci_qc_issue(struct ata_queued_cmd *qc); int ahci_stop_engine(struct ata_port *ap); +void ahci_start_fis_rx(struct ata_port *ap); void ahci_start_engine(struct ata_port *ap); int ahci_check_ready(struct ata_link *link); int ahci_kick_engine(struct ata_port *ap); diff --git a/drivers/ata/ahci_imx.c b/drivers/ata/ahci_imx.c index 3a901520c62b..cac4360f272a 100644 --- a/drivers/ata/ahci_imx.c +++ b/drivers/ata/ahci_imx.c @@ -58,6 +58,8 @@ enum ahci_imx_type { struct imx_ahci_priv { struct platform_device *ahci_pdev; enum ahci_imx_type type; + struct clk *sata_clk; + struct clk *sata_ref_clk; struct clk *ahb_clk; struct regmap *gpr; bool no_device; @@ -224,7 +226,7 @@ static int imx_sata_enable(struct ahci_host_priv *hpriv) return ret; } - ret = ahci_platform_enable_clks(hpriv); + ret = clk_prepare_enable(imxpriv->sata_ref_clk); if (ret < 0) goto disable_regulator; @@ -291,7 +293,7 @@ static void imx_sata_disable(struct ahci_host_priv *hpriv) !IMX6Q_GPR13_SATA_MPLL_CLK_EN); } - ahci_platform_disable_clks(hpriv); + clk_disable_unprepare(imxpriv->sata_ref_clk); if (hpriv->target_pwr) regulator_disable(hpriv->target_pwr); @@ -324,6 +326,9 @@ static void 
ahci_imx_error_handler(struct ata_port *ap) writel(reg_val | IMX_P0PHYCR_TEST_PDDQ, mmio + IMX_P0PHYCR); imx_sata_disable(hpriv); imxpriv->no_device = true; + + dev_info(ap->dev, "no device found, disabling link.\n"); + dev_info(ap->dev, "pass " MODULE_PARAM_PREFIX ".hotplug=1 to enable hotplug\n"); } static int ahci_imx_softreset(struct ata_link *link, unsigned int *class, @@ -385,6 +390,19 @@ static int imx_ahci_probe(struct platform_device *pdev) imxpriv->no_device = false; imxpriv->first_time = true; imxpriv->type = (enum ahci_imx_type)of_id->data; + + imxpriv->sata_clk = devm_clk_get(dev, "sata"); + if (IS_ERR(imxpriv->sata_clk)) { + dev_err(dev, "can't get sata clock.\n"); + return PTR_ERR(imxpriv->sata_clk); + } + + imxpriv->sata_ref_clk = devm_clk_get(dev, "sata_ref"); + if (IS_ERR(imxpriv->sata_ref_clk)) { + dev_err(dev, "can't get sata_ref clock.\n"); + return PTR_ERR(imxpriv->sata_ref_clk); + } + imxpriv->ahb_clk = devm_clk_get(dev, "ahb"); if (IS_ERR(imxpriv->ahb_clk)) { dev_err(dev, "can't get ahb clock.\n"); @@ -407,10 +425,14 @@ static int imx_ahci_probe(struct platform_device *pdev) hpriv->plat_data = imxpriv; - ret = imx_sata_enable(hpriv); + ret = clk_prepare_enable(imxpriv->sata_clk); if (ret) return ret; + ret = imx_sata_enable(hpriv); + if (ret) + goto disable_clk; + /* * Configure the HWINIT bits of the HOST_CAP and HOST_PORTS_IMPL, * and IP vendor specific register IMX_TIMER1MS. @@ -435,16 +457,24 @@ static int imx_ahci_probe(struct platform_device *pdev) ret = ahci_platform_init_host(pdev, hpriv, &ahci_imx_port_info, 0, 0, 0); if (ret) - imx_sata_disable(hpriv); + goto disable_sata; + return 0; + +disable_sata: + imx_sata_disable(hpriv); +disable_clk: + clk_disable_unprepare(imxpriv->sata_clk); return ret; } static void ahci_imx_host_stop(struct ata_host *host) { struct ahci_host_priv *hpriv = host->private_data; + struct imx_ahci_priv *imxpriv = hpriv->plat_data; imx_sata_disable(hpriv); + clk_disable_unprepare(imxpriv->sata_clk); } #ifdef CONFIG_PM_SLEEP diff --git a/drivers/ata/ahci_platform.c b/drivers/ata/ahci_platform.c index ebe505c17763..b10d81ddb528 100644 --- a/drivers/ata/ahci_platform.c +++ b/drivers/ata/ahci_platform.c @@ -58,7 +58,7 @@ static int ahci_probe(struct platform_device *pdev) } if (of_device_is_compatible(dev->of_node, "hisilicon,hisi-ahci")) - hflags |= AHCI_HFLAG_NO_FBS; + hflags |= AHCI_HFLAG_NO_FBS | AHCI_HFLAG_NO_NCQ; rc = ahci_platform_init_host(pdev, hpriv, &ahci_port_info, hflags, 0, 0); diff --git a/drivers/ata/ahci_xgene.c b/drivers/ata/ahci_xgene.c index 042a9bb45c86..ee3a3659bd9e 100644 --- a/drivers/ata/ahci_xgene.c +++ b/drivers/ata/ahci_xgene.c @@ -78,6 +78,7 @@ struct xgene_ahci_context { struct ahci_host_priv *hpriv; struct device *dev; + u8 last_cmd[MAX_AHCI_CHN_PERCTR]; /* tracking the last command issued*/ void __iomem *csr_core; /* Core CSR address of IP */ void __iomem *csr_diag; /* Diag CSR address of IP */ void __iomem *csr_axi; /* AXI CSR address of IP */ @@ -98,20 +99,62 @@ static int xgene_ahci_init_memram(struct xgene_ahci_context *ctx) } /** + * xgene_ahci_restart_engine - Restart the dma engine. + * @ap : ATA port of interest + * + * Restarts the dma engine inside the controller. 
+ */ +static int xgene_ahci_restart_engine(struct ata_port *ap) +{ + struct ahci_host_priv *hpriv = ap->host->private_data; + + ahci_stop_engine(ap); + ahci_start_fis_rx(ap); + hpriv->start_engine(ap); + + return 0; +} + +/** + * xgene_ahci_qc_issue - Issue commands to the device + * @qc: Command to issue + * + * Due to Hardware errata for IDENTIFY DEVICE command, the controller cannot + * clear the BSY bit after receiving the PIO setup FIS. This results in the dma + * state machine goes into the CMFatalErrorUpdate state and locks up. By + * restarting the dma engine, it removes the controller out of lock up state. + */ +static unsigned int xgene_ahci_qc_issue(struct ata_queued_cmd *qc) +{ + struct ata_port *ap = qc->ap; + struct ahci_host_priv *hpriv = ap->host->private_data; + struct xgene_ahci_context *ctx = hpriv->plat_data; + int rc = 0; + + if (unlikely(ctx->last_cmd[ap->port_no] == ATA_CMD_ID_ATA)) + xgene_ahci_restart_engine(ap); + + rc = ahci_qc_issue(qc); + + /* Save the last command issued */ + ctx->last_cmd[ap->port_no] = qc->tf.command; + + return rc; +} + +/** * xgene_ahci_read_id - Read ID data from the specified device * @dev: device * @tf: proposed taskfile * @id: data buffer * * This custom read ID function is required due to the fact that the HW - * does not support DEVSLP and the controller state machine may get stuck - * after processing the ID query command. + * does not support DEVSLP. */ static unsigned int xgene_ahci_read_id(struct ata_device *dev, struct ata_taskfile *tf, u16 *id) { u32 err_mask; - void __iomem *port_mmio = ahci_port_base(dev->link->ap); err_mask = ata_do_dev_read_id(dev, tf, id); if (err_mask) @@ -133,16 +176,6 @@ static unsigned int xgene_ahci_read_id(struct ata_device *dev, */ id[ATA_ID_FEATURE_SUPP] &= ~(1 << 8); - /* - * Due to HW errata, restart the port if no other command active. - * Otherwise the controller may get stuck. 
- */ - if (!readl(port_mmio + PORT_CMD_ISSUE)) { - writel(PORT_CMD_FIS_RX, port_mmio + PORT_CMD); - readl(port_mmio + PORT_CMD); /* Force a barrier */ - writel(PORT_CMD_FIS_RX | PORT_CMD_START, port_mmio + PORT_CMD); - readl(port_mmio + PORT_CMD); /* Force a barrier */ - } return 0; } @@ -300,6 +333,7 @@ static struct ata_port_operations xgene_ahci_ops = { .host_stop = xgene_ahci_host_stop, .hardreset = xgene_ahci_hardreset, .read_id = xgene_ahci_read_id, + .qc_issue = xgene_ahci_qc_issue, }; static const struct ata_port_info xgene_ahci_port_info = { diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c index 40ea583d3610..d72ce0470309 100644 --- a/drivers/ata/libahci.c +++ b/drivers/ata/libahci.c @@ -68,7 +68,6 @@ static ssize_t ahci_transmit_led_message(struct ata_port *ap, u32 state, static int ahci_scr_read(struct ata_link *link, unsigned int sc_reg, u32 *val); static int ahci_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val); -static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc); static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc); static int ahci_port_start(struct ata_port *ap); static void ahci_port_stop(struct ata_port *ap); @@ -620,7 +619,7 @@ int ahci_stop_engine(struct ata_port *ap) } EXPORT_SYMBOL_GPL(ahci_stop_engine); -static void ahci_start_fis_rx(struct ata_port *ap) +void ahci_start_fis_rx(struct ata_port *ap) { void __iomem *port_mmio = ahci_port_base(ap); struct ahci_host_priv *hpriv = ap->host->private_data; @@ -646,6 +645,7 @@ static void ahci_start_fis_rx(struct ata_port *ap) /* flush */ readl(port_mmio + PORT_CMD); } +EXPORT_SYMBOL_GPL(ahci_start_fis_rx); static int ahci_stop_fis_rx(struct ata_port *ap) { @@ -1945,7 +1945,7 @@ irqreturn_t ahci_interrupt(int irq, void *dev_instance) } EXPORT_SYMBOL_GPL(ahci_interrupt); -static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) +unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) { struct ata_port *ap = qc->ap; void __iomem *port_mmio = ahci_port_base(ap); @@ -1974,6 +1974,7 @@ static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) return 0; } +EXPORT_SYMBOL_GPL(ahci_qc_issue); static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc) { diff --git a/drivers/ata/libahci_platform.c b/drivers/ata/libahci_platform.c index 3a5b4ed25a4f..b0077589f065 100644 --- a/drivers/ata/libahci_platform.c +++ b/drivers/ata/libahci_platform.c @@ -250,8 +250,13 @@ struct ahci_host_priv *ahci_platform_get_resources(struct platform_device *pdev) if (IS_ERR(hpriv->phy)) { rc = PTR_ERR(hpriv->phy); switch (rc) { - case -ENODEV: case -ENOSYS: + /* No PHY support. Check if PHY is required. 
*/ + if (of_find_property(dev->of_node, "phys", NULL)) { + dev_err(dev, "couldn't get sata-phy: ENOSYS\n"); + goto err_out; + } + case -ENODEV: /* continue normally */ hpriv->phy = NULL; break; diff --git a/drivers/base/platform.c b/drivers/base/platform.c index 148f66a1d49a..7cc7440b014d 100644 --- a/drivers/base/platform.c +++ b/drivers/base/platform.c @@ -90,8 +90,13 @@ int platform_get_irq(struct platform_device *dev, unsigned int num) return dev->archdata.irqs[num]; #else struct resource *r; - if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) - return of_irq_get(dev->dev.of_node, num); + if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) { + int ret; + + ret = of_irq_get(dev->dev.of_node, num); + if (ret >= 0 || ret == -EPROBE_DEFER) + return ret; + } r = platform_get_resource(dev, IORESOURCE_IRQ, num); @@ -134,8 +139,13 @@ int platform_get_irq_byname(struct platform_device *dev, const char *name) { struct resource *r; - if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) - return of_irq_get_byname(dev->dev.of_node, name); + if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) { + int ret; + + ret = of_irq_get_byname(dev->dev.of_node, name); + if (ret >= 0 || ret == -EPROBE_DEFER) + return ret; + } r = platform_get_resource_byname(dev, IORESOURCE_IRQ, name); return r ? r->start : -ENXIO; diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c index f98380648cb3..f50dffc0374f 100644 --- a/drivers/bluetooth/ath3k.c +++ b/drivers/bluetooth/ath3k.c @@ -90,7 +90,6 @@ static const struct usb_device_id ath3k_table[] = { { USB_DEVICE(0x0b05, 0x17d0) }, { USB_DEVICE(0x0CF3, 0x0036) }, { USB_DEVICE(0x0CF3, 0x3004) }, - { USB_DEVICE(0x0CF3, 0x3005) }, { USB_DEVICE(0x0CF3, 0x3008) }, { USB_DEVICE(0x0CF3, 0x311D) }, { USB_DEVICE(0x0CF3, 0x311E) }, @@ -140,7 +139,6 @@ static const struct usb_device_id ath3k_blist_tbl[] = { { USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0CF3, 0x0036), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, - { USB_DEVICE(0x0cf3, 0x3005), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x311E), .driver_info = BTUSB_ATH3012 }, diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index a1c80b0c7663..6250fc2fb93a 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -162,7 +162,6 @@ static const struct usb_device_id blacklist_table[] = { { USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, - { USB_DEVICE(0x0cf3, 0x3005), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x311e), .driver_info = BTUSB_ATH3012 }, diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c index 04680ead9275..fede8ca7147c 100644 --- a/drivers/bluetooth/hci_h5.c +++ b/drivers/bluetooth/hci_h5.c @@ -406,6 +406,7 @@ static int h5_rx_3wire_hdr(struct hci_uart *hu, unsigned char c) H5_HDR_PKT_TYPE(hdr) != HCI_3WIRE_LINK_PKT) { BT_ERR("Non-link packet received in non-active state"); h5_reset_rx(h5); + return 0; } h5->rx_func = h5_rx_payload; diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c index 334601cc81cf..c4419ea1ab07 100644 --- a/drivers/char/hw_random/core.c +++ 
b/drivers/char/hw_random/core.c @@ -55,16 +55,41 @@ static DEFINE_MUTEX(rng_mutex); static int data_avail; static u8 *rng_buffer; +static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size, + int wait); + static size_t rng_buffer_size(void) { return SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES; } +static void add_early_randomness(struct hwrng *rng) +{ + unsigned char bytes[16]; + int bytes_read; + + /* + * Currently only virtio-rng cannot return data during device + * probe, and that's handled in virtio-rng.c itself. If there + * are more such devices, this call to rng_get_data can be + * made conditional here instead of doing it per-device. + */ + bytes_read = rng_get_data(rng, bytes, sizeof(bytes), 1); + if (bytes_read > 0) + add_device_randomness(bytes, bytes_read); +} + static inline int hwrng_init(struct hwrng *rng) { - if (!rng->init) - return 0; - return rng->init(rng); + if (rng->init) { + int ret; + + ret = rng->init(rng); + if (ret) + return ret; + } + add_early_randomness(rng); + return 0; } static inline void hwrng_cleanup(struct hwrng *rng) @@ -304,8 +329,6 @@ int hwrng_register(struct hwrng *rng) { int err = -EINVAL; struct hwrng *old_rng, *tmp; - unsigned char bytes[16]; - int bytes_read; if (rng->name == NULL || (rng->data_read == NULL && rng->read == NULL)) @@ -347,9 +370,17 @@ int hwrng_register(struct hwrng *rng) INIT_LIST_HEAD(&rng->list); list_add_tail(&rng->list, &rng_list); - bytes_read = rng_get_data(rng, bytes, sizeof(bytes), 1); - if (bytes_read > 0) - add_device_randomness(bytes, bytes_read); + if (old_rng && !rng->init) { + /* + * Use a new device's input to add some randomness to + * the system. If this rng device isn't going to be + * used right away, its init function hasn't been + * called yet; so only use the randomness from devices + * that don't need an init callback. + */ + add_early_randomness(rng); + } + out_unlock: mutex_unlock(&rng_mutex); out: diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c index f3e71501de54..e9b15bc18b4d 100644 --- a/drivers/char/hw_random/virtio-rng.c +++ b/drivers/char/hw_random/virtio-rng.c @@ -38,6 +38,8 @@ struct virtrng_info { int index; }; +static bool probe_done; + static void random_recv_done(struct virtqueue *vq) { struct virtrng_info *vi = vq->vdev->priv; @@ -67,6 +69,13 @@ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait) int ret; struct virtrng_info *vi = (struct virtrng_info *)rng->priv; + /* + * Don't ask host for data till we're setup. This call can + * happen during hwrng_register(), after commit d9e7972619. 
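With the drivers/base/platform.c hunk above, platform_get_irq() and platform_get_irq_byname() now propagate -EPROBE_DEFER from of_irq_get() instead of silently falling back to a static IRQ resource that may not be resolved yet. A minimal consumer sketch, assuming an ordinary platform driver; the foo_* names are illustrative only and not part of the patch:

#include <linux/interrupt.h>
#include <linux/platform_device.h>

static irqreturn_t foo_isr(int irq, void *dev_id)
{
	/* acknowledge and handle the device interrupt here */
	return IRQ_HANDLED;
}

static int foo_probe(struct platform_device *pdev)
{
	int irq = platform_get_irq(pdev, 0);

	if (irq < 0)
		return irq;	/* may be -EPROBE_DEFER; the driver core retries probe later */

	return devm_request_irq(&pdev->dev, irq, foo_isr, 0,
				dev_name(&pdev->dev), pdev);
}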
+ */ + if (unlikely(!probe_done)) + return 0; + if (!vi->busy) { vi->busy = true; init_completion(&vi->have_data); @@ -137,6 +146,7 @@ static int probe_common(struct virtio_device *vdev) return err; } + probe_done = true; return 0; } diff --git a/drivers/char/i8k.c b/drivers/char/i8k.c index d915707d2ba1..93dcad0c1cbe 100644 --- a/drivers/char/i8k.c +++ b/drivers/char/i8k.c @@ -138,7 +138,9 @@ static int i8k_smm(struct smm_regs *regs) if (!alloc_cpumask_var(&old_mask, GFP_KERNEL)) return -ENOMEM; cpumask_copy(old_mask, ¤t->cpus_allowed); - set_cpus_allowed_ptr(current, cpumask_of(0)); + rc = set_cpus_allowed_ptr(current, cpumask_of(0)); + if (rc) + goto out; if (smp_processor_id() != 0) { rc = -EBUSY; goto out; diff --git a/drivers/char/random.c b/drivers/char/random.c index 0a7ac0a7b252..71529e196b84 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -641,7 +641,7 @@ retry: } while (unlikely(entropy_count < pool_size-2 && pnfrac)); } - if (entropy_count < 0) { + if (unlikely(entropy_count < 0)) { pr_warn("random: negative entropy/overflow: pool %s count %d\n", r->name, entropy_count); WARN_ON(1); @@ -981,7 +981,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min, int reserved) { int entropy_count, orig; - size_t ibytes; + size_t ibytes, nfrac; BUG_ON(r->entropy_count > r->poolinfo->poolfracbits); @@ -999,7 +999,17 @@ retry: } if (ibytes < min) ibytes = 0; - if ((entropy_count -= ibytes << (ENTROPY_SHIFT + 3)) < 0) + + if (unlikely(entropy_count < 0)) { + pr_warn("random: negative entropy count: pool %s count %d\n", + r->name, entropy_count); + WARN_ON(1); + entropy_count = 0; + } + nfrac = ibytes << (ENTROPY_SHIFT + 3); + if ((size_t) entropy_count > nfrac) + entropy_count -= nfrac; + else entropy_count = 0; if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) @@ -1376,6 +1386,7 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) "with %d bits of entropy available\n", current->comm, nonblocking_pool.entropy_total); + nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3)); ret = extract_entropy_user(&nonblocking_pool, buf, nbytes); trace_urandom_read(8 * nbytes, ENTROPY_BITS(&nonblocking_pool), diff --git a/drivers/clk/clk-s2mps11.c b/drivers/clk/clk-s2mps11.c index 9b7b5859a420..3757e9e72d37 100644 --- a/drivers/clk/clk-s2mps11.c +++ b/drivers/clk/clk-s2mps11.c @@ -230,16 +230,13 @@ static int s2mps11_clk_probe(struct platform_device *pdev) goto err_reg; } - s2mps11_clk->lookup = devm_kzalloc(&pdev->dev, - sizeof(struct clk_lookup), GFP_KERNEL); + s2mps11_clk->lookup = clkdev_alloc(s2mps11_clk->clk, + s2mps11_name(s2mps11_clk), NULL); if (!s2mps11_clk->lookup) { ret = -ENOMEM; goto err_lup; } - s2mps11_clk->lookup->con_id = s2mps11_name(s2mps11_clk); - s2mps11_clk->lookup->clk = s2mps11_clk->clk; - clkdev_add(s2mps11_clk->lookup); } diff --git a/drivers/clk/qcom/mmcc-msm8960.c b/drivers/clk/qcom/mmcc-msm8960.c index 12f3c0b64fcd..4c449b3170f6 100644 --- a/drivers/clk/qcom/mmcc-msm8960.c +++ b/drivers/clk/qcom/mmcc-msm8960.c @@ -1209,7 +1209,7 @@ static struct clk_branch rot_clk = { static u8 mmcc_pxo_hdmi_map[] = { [P_PXO] = 0, - [P_HDMI_PLL] = 2, + [P_HDMI_PLL] = 3, }; static const char *mmcc_pxo_hdmi[] = { diff --git a/drivers/clk/samsung/clk-exynos4.c b/drivers/clk/samsung/clk-exynos4.c index 4f150c9dd38c..7f4a473a7ad7 100644 --- a/drivers/clk/samsung/clk-exynos4.c +++ b/drivers/clk/samsung/clk-exynos4.c @@ -925,21 +925,13 @@ static struct samsung_gate_clock exynos4x12_gate_clks[] __initdata = { 
GATE(CLK_RTC, "rtc", "aclk100", E4X12_GATE_IP_PERIR, 15, 0, 0), GATE(CLK_KEYIF, "keyif", "aclk100", E4X12_GATE_IP_PERIR, 16, 0, 0), - GATE(CLK_SCLK_PWM_ISP, "sclk_pwm_isp", "div_pwm_isp", - E4X12_SRC_MASK_ISP, 0, CLK_SET_RATE_PARENT, 0), - GATE(CLK_SCLK_SPI0_ISP, "sclk_spi0_isp", "div_spi0_isp_pre", - E4X12_SRC_MASK_ISP, 4, CLK_SET_RATE_PARENT, 0), - GATE(CLK_SCLK_SPI1_ISP, "sclk_spi1_isp", "div_spi1_isp_pre", - E4X12_SRC_MASK_ISP, 8, CLK_SET_RATE_PARENT, 0), - GATE(CLK_SCLK_UART_ISP, "sclk_uart_isp", "div_uart_isp", - E4X12_SRC_MASK_ISP, 12, CLK_SET_RATE_PARENT, 0), - GATE(CLK_PWM_ISP_SCLK, "pwm_isp_sclk", "sclk_pwm_isp", + GATE(CLK_PWM_ISP_SCLK, "pwm_isp_sclk", "div_pwm_isp", E4X12_GATE_IP_ISP, 0, 0, 0), - GATE(CLK_SPI0_ISP_SCLK, "spi0_isp_sclk", "sclk_spi0_isp", + GATE(CLK_SPI0_ISP_SCLK, "spi0_isp_sclk", "div_spi0_isp_pre", E4X12_GATE_IP_ISP, 1, 0, 0), - GATE(CLK_SPI1_ISP_SCLK, "spi1_isp_sclk", "sclk_spi1_isp", + GATE(CLK_SPI1_ISP_SCLK, "spi1_isp_sclk", "div_spi1_isp_pre", E4X12_GATE_IP_ISP, 2, 0, 0), - GATE(CLK_UART_ISP_SCLK, "uart_isp_sclk", "sclk_uart_isp", + GATE(CLK_UART_ISP_SCLK, "uart_isp_sclk", "div_uart_isp", E4X12_GATE_IP_ISP, 3, 0, 0), GATE(CLK_WDT, "watchdog", "aclk100", E4X12_GATE_IP_PERIR, 14, 0, 0), GATE(CLK_PCM0, "pcm0", "aclk100", E4X12_GATE_IP_MAUDIO, 2, diff --git a/drivers/clk/samsung/clk-exynos5250.c b/drivers/clk/samsung/clk-exynos5250.c index 1fad4c5e3f5d..184f64293b26 100644 --- a/drivers/clk/samsung/clk-exynos5250.c +++ b/drivers/clk/samsung/clk-exynos5250.c @@ -661,7 +661,7 @@ static struct samsung_gate_clock exynos5250_gate_clks[] __initdata = { GATE(CLK_RTC, "rtc", "div_aclk66", GATE_IP_PERIS, 20, 0, 0), GATE(CLK_TMU, "tmu", "div_aclk66", GATE_IP_PERIS, 21, 0, 0), GATE(CLK_SMMU_TV, "smmu_tv", "mout_aclk200_disp1_sub", - GATE_IP_DISP1, 2, 0, 0), + GATE_IP_DISP1, 9, 0, 0), GATE(CLK_SMMU_FIMD1, "smmu_fimd1", "mout_aclk200_disp1_sub", GATE_IP_DISP1, 8, 0, 0), GATE(CLK_SMMU_2D, "smmu_2d", "div_aclk200", GATE_IP_ACP, 7, 0, 0), diff --git a/drivers/clk/samsung/clk-exynos5420.c b/drivers/clk/samsung/clk-exynos5420.c index 9d7d7eed03fd..a4e6cc782e5c 100644 --- a/drivers/clk/samsung/clk-exynos5420.c +++ b/drivers/clk/samsung/clk-exynos5420.c @@ -631,7 +631,8 @@ static struct samsung_mux_clock exynos5x_mux_clks[] __initdata = { SRC_TOP4, 16, 1), MUX(0, "mout_user_aclk266", mout_user_aclk266_p, SRC_TOP4, 20, 1), MUX(0, "mout_user_aclk166", mout_user_aclk166_p, SRC_TOP4, 24, 1), - MUX(0, "mout_user_aclk333", mout_user_aclk333_p, SRC_TOP4, 28, 1), + MUX(CLK_MOUT_USER_ACLK333, "mout_user_aclk333", mout_user_aclk333_p, + SRC_TOP4, 28, 1), MUX(0, "mout_user_aclk400_disp1", mout_user_aclk400_disp1_p, SRC_TOP5, 0, 1), @@ -684,7 +685,8 @@ static struct samsung_mux_clock exynos5x_mux_clks[] __initdata = { SRC_TOP11, 12, 1), MUX(0, "mout_sw_aclk266", mout_sw_aclk266_p, SRC_TOP11, 20, 1), MUX(0, "mout_sw_aclk166", mout_sw_aclk166_p, SRC_TOP11, 24, 1), - MUX(0, "mout_sw_aclk333", mout_sw_aclk333_p, SRC_TOP11, 28, 1), + MUX(CLK_MOUT_SW_ACLK333, "mout_sw_aclk333", mout_sw_aclk333_p, + SRC_TOP11, 28, 1), MUX(0, "mout_sw_aclk400_disp1", mout_sw_aclk400_disp1_p, SRC_TOP12, 4, 1), @@ -890,8 +892,6 @@ static struct samsung_gate_clock exynos5x_gate_clks[] __initdata = { GATE_BUS_TOP, 9, CLK_IGNORE_UNUSED, 0), GATE(0, "aclk66_psgen", "mout_user_aclk66_psgen", GATE_BUS_TOP, 10, CLK_IGNORE_UNUSED, 0), - GATE(CLK_ACLK66_PERIC, "aclk66_peric", "mout_user_aclk66_peric", - GATE_BUS_TOP, 11, CLK_IGNORE_UNUSED, 0), GATE(0, "aclk266_isp", "mout_user_aclk266_isp", GATE_BUS_TOP, 13, 0, 0), GATE(0, 
"aclk166", "mout_user_aclk166", @@ -994,34 +994,61 @@ static struct samsung_gate_clock exynos5x_gate_clks[] __initdata = { SRC_MASK_FSYS, 24, CLK_SET_RATE_PARENT, 0), /* PERIC Block */ - GATE(CLK_UART0, "uart0", "aclk66_peric", GATE_IP_PERIC, 0, 0, 0), - GATE(CLK_UART1, "uart1", "aclk66_peric", GATE_IP_PERIC, 1, 0, 0), - GATE(CLK_UART2, "uart2", "aclk66_peric", GATE_IP_PERIC, 2, 0, 0), - GATE(CLK_UART3, "uart3", "aclk66_peric", GATE_IP_PERIC, 3, 0, 0), - GATE(CLK_I2C0, "i2c0", "aclk66_peric", GATE_IP_PERIC, 6, 0, 0), - GATE(CLK_I2C1, "i2c1", "aclk66_peric", GATE_IP_PERIC, 7, 0, 0), - GATE(CLK_I2C2, "i2c2", "aclk66_peric", GATE_IP_PERIC, 8, 0, 0), - GATE(CLK_I2C3, "i2c3", "aclk66_peric", GATE_IP_PERIC, 9, 0, 0), - GATE(CLK_USI0, "usi0", "aclk66_peric", GATE_IP_PERIC, 10, 0, 0), - GATE(CLK_USI1, "usi1", "aclk66_peric", GATE_IP_PERIC, 11, 0, 0), - GATE(CLK_USI2, "usi2", "aclk66_peric", GATE_IP_PERIC, 12, 0, 0), - GATE(CLK_USI3, "usi3", "aclk66_peric", GATE_IP_PERIC, 13, 0, 0), - GATE(CLK_I2C_HDMI, "i2c_hdmi", "aclk66_peric", GATE_IP_PERIC, 14, 0, 0), - GATE(CLK_TSADC, "tsadc", "aclk66_peric", GATE_IP_PERIC, 15, 0, 0), - GATE(CLK_SPI0, "spi0", "aclk66_peric", GATE_IP_PERIC, 16, 0, 0), - GATE(CLK_SPI1, "spi1", "aclk66_peric", GATE_IP_PERIC, 17, 0, 0), - GATE(CLK_SPI2, "spi2", "aclk66_peric", GATE_IP_PERIC, 18, 0, 0), - GATE(CLK_I2S1, "i2s1", "aclk66_peric", GATE_IP_PERIC, 20, 0, 0), - GATE(CLK_I2S2, "i2s2", "aclk66_peric", GATE_IP_PERIC, 21, 0, 0), - GATE(CLK_PCM1, "pcm1", "aclk66_peric", GATE_IP_PERIC, 22, 0, 0), - GATE(CLK_PCM2, "pcm2", "aclk66_peric", GATE_IP_PERIC, 23, 0, 0), - GATE(CLK_PWM, "pwm", "aclk66_peric", GATE_IP_PERIC, 24, 0, 0), - GATE(CLK_SPDIF, "spdif", "aclk66_peric", GATE_IP_PERIC, 26, 0, 0), - GATE(CLK_USI4, "usi4", "aclk66_peric", GATE_IP_PERIC, 28, 0, 0), - GATE(CLK_USI5, "usi5", "aclk66_peric", GATE_IP_PERIC, 30, 0, 0), - GATE(CLK_USI6, "usi6", "aclk66_peric", GATE_IP_PERIC, 31, 0, 0), - - GATE(CLK_KEYIF, "keyif", "aclk66_peric", GATE_BUS_PERIC, 22, 0, 0), + GATE(CLK_UART0, "uart0", "mout_user_aclk66_peric", + GATE_IP_PERIC, 0, 0, 0), + GATE(CLK_UART1, "uart1", "mout_user_aclk66_peric", + GATE_IP_PERIC, 1, 0, 0), + GATE(CLK_UART2, "uart2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 2, 0, 0), + GATE(CLK_UART3, "uart3", "mout_user_aclk66_peric", + GATE_IP_PERIC, 3, 0, 0), + GATE(CLK_I2C0, "i2c0", "mout_user_aclk66_peric", + GATE_IP_PERIC, 6, 0, 0), + GATE(CLK_I2C1, "i2c1", "mout_user_aclk66_peric", + GATE_IP_PERIC, 7, 0, 0), + GATE(CLK_I2C2, "i2c2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 8, 0, 0), + GATE(CLK_I2C3, "i2c3", "mout_user_aclk66_peric", + GATE_IP_PERIC, 9, 0, 0), + GATE(CLK_USI0, "usi0", "mout_user_aclk66_peric", + GATE_IP_PERIC, 10, 0, 0), + GATE(CLK_USI1, "usi1", "mout_user_aclk66_peric", + GATE_IP_PERIC, 11, 0, 0), + GATE(CLK_USI2, "usi2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 12, 0, 0), + GATE(CLK_USI3, "usi3", "mout_user_aclk66_peric", + GATE_IP_PERIC, 13, 0, 0), + GATE(CLK_I2C_HDMI, "i2c_hdmi", "mout_user_aclk66_peric", + GATE_IP_PERIC, 14, 0, 0), + GATE(CLK_TSADC, "tsadc", "mout_user_aclk66_peric", + GATE_IP_PERIC, 15, 0, 0), + GATE(CLK_SPI0, "spi0", "mout_user_aclk66_peric", + GATE_IP_PERIC, 16, 0, 0), + GATE(CLK_SPI1, "spi1", "mout_user_aclk66_peric", + GATE_IP_PERIC, 17, 0, 0), + GATE(CLK_SPI2, "spi2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 18, 0, 0), + GATE(CLK_I2S1, "i2s1", "mout_user_aclk66_peric", + GATE_IP_PERIC, 20, 0, 0), + GATE(CLK_I2S2, "i2s2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 21, 0, 0), + GATE(CLK_PCM1, "pcm1", 
"mout_user_aclk66_peric", + GATE_IP_PERIC, 22, 0, 0), + GATE(CLK_PCM2, "pcm2", "mout_user_aclk66_peric", + GATE_IP_PERIC, 23, 0, 0), + GATE(CLK_PWM, "pwm", "mout_user_aclk66_peric", + GATE_IP_PERIC, 24, 0, 0), + GATE(CLK_SPDIF, "spdif", "mout_user_aclk66_peric", + GATE_IP_PERIC, 26, 0, 0), + GATE(CLK_USI4, "usi4", "mout_user_aclk66_peric", + GATE_IP_PERIC, 28, 0, 0), + GATE(CLK_USI5, "usi5", "mout_user_aclk66_peric", + GATE_IP_PERIC, 30, 0, 0), + GATE(CLK_USI6, "usi6", "mout_user_aclk66_peric", + GATE_IP_PERIC, 31, 0, 0), + + GATE(CLK_KEYIF, "keyif", "mout_user_aclk66_peric", + GATE_BUS_PERIC, 22, 0, 0), /* PERIS Block */ GATE(CLK_CHIPID, "chipid", "aclk66_psgen", diff --git a/drivers/clk/samsung/clk-s3c2410.c b/drivers/clk/samsung/clk-s3c2410.c index ba0716801db2..140f4733c02e 100644 --- a/drivers/clk/samsung/clk-s3c2410.c +++ b/drivers/clk/samsung/clk-s3c2410.c @@ -152,6 +152,11 @@ struct samsung_clock_alias s3c2410_common_aliases[] __initdata = { ALIAS(HCLK, NULL, "hclk"), ALIAS(MPLL, NULL, "mpll"), ALIAS(FCLK, NULL, "fclk"), + ALIAS(PCLK, NULL, "watchdog"), + ALIAS(PCLK_SDI, NULL, "sdi"), + ALIAS(HCLK_NAND, NULL, "nand"), + ALIAS(PCLK_I2S, NULL, "iis"), + ALIAS(PCLK_I2C, NULL, "i2c"), }; /* S3C2410 specific clocks */ @@ -378,7 +383,7 @@ void __init s3c2410_common_clk_init(struct device_node *np, unsigned long xti_f, if (!np) s3c2410_common_clk_register_fixed_ext(ctx, xti_f); - if (current_soc == 2410) { + if (current_soc == S3C2410) { if (_get_rate("xti") == 12 * MHZ) { s3c2410_plls[mpll].rate_table = pll_s3c2410_12mhz_tbl; s3c2410_plls[upll].rate_table = pll_s3c2410_12mhz_tbl; @@ -432,7 +437,7 @@ void __init s3c2410_common_clk_init(struct device_node *np, unsigned long xti_f, samsung_clk_register_fixed_factor(ctx, s3c2410_ffactor, ARRAY_SIZE(s3c2410_ffactor)); samsung_clk_register_alias(ctx, s3c2410_aliases, - ARRAY_SIZE(s3c2410_common_aliases)); + ARRAY_SIZE(s3c2410_aliases)); break; case S3C2440: samsung_clk_register_mux(ctx, s3c2440_muxes, diff --git a/drivers/clk/samsung/clk-s3c64xx.c b/drivers/clk/samsung/clk-s3c64xx.c index efa16ee592c8..8889ff1c10fc 100644 --- a/drivers/clk/samsung/clk-s3c64xx.c +++ b/drivers/clk/samsung/clk-s3c64xx.c @@ -418,8 +418,10 @@ static struct samsung_clock_alias s3c64xx_clock_aliases[] = { ALIAS(SCLK_MMC2, "s3c-sdhci.2", "mmc_busclk.2"), ALIAS(SCLK_MMC1, "s3c-sdhci.1", "mmc_busclk.2"), ALIAS(SCLK_MMC0, "s3c-sdhci.0", "mmc_busclk.2"), - ALIAS(SCLK_SPI1, "s3c6410-spi.1", "spi-bus"), - ALIAS(SCLK_SPI0, "s3c6410-spi.0", "spi-bus"), + ALIAS(PCLK_SPI1, "s3c6410-spi.1", "spi_busclk0"), + ALIAS(SCLK_SPI1, "s3c6410-spi.1", "spi_busclk2"), + ALIAS(PCLK_SPI0, "s3c6410-spi.0", "spi_busclk0"), + ALIAS(SCLK_SPI0, "s3c6410-spi.0", "spi_busclk2"), ALIAS(SCLK_AUDIO1, "samsung-pcm.1", "audio-bus"), ALIAS(SCLK_AUDIO1, "samsung-i2s.1", "audio-bus"), ALIAS(SCLK_AUDIO0, "samsung-pcm.0", "audio-bus"), diff --git a/drivers/clk/spear/spear3xx_clock.c b/drivers/clk/spear/spear3xx_clock.c index c2d204315546..bb5f387774e2 100644 --- a/drivers/clk/spear/spear3xx_clock.c +++ b/drivers/clk/spear/spear3xx_clock.c @@ -211,7 +211,7 @@ static inline void spear310_clk_init(void) { } /* array of all spear 320 clock lookups */ #ifdef CONFIG_MACH_SPEAR320 -#define SPEAR320_CONTROL_REG (soc_config_base + 0x0000) +#define SPEAR320_CONTROL_REG (soc_config_base + 0x0010) #define SPEAR320_EXT_CTRL_REG (soc_config_base + 0x0018) #define SPEAR320_UARTX_PCLK_MASK 0x1 @@ -245,7 +245,8 @@ static const char *smii0_parents[] = { "smii_125m_pad", "ras_pll2_clk", "ras_syn0_gclk", }; static const char 
*uartx_parents[] = { "ras_syn1_gclk", "ras_apb_clk", }; -static void __init spear320_clk_init(void __iomem *soc_config_base) +static void __init spear320_clk_init(void __iomem *soc_config_base, + struct clk *ras_apb_clk) { struct clk *clk; @@ -342,6 +343,8 @@ static void __init spear320_clk_init(void __iomem *soc_config_base) SPEAR320_CONTROL_REG, UART1_PCLK_SHIFT, UART1_PCLK_MASK, 0, &_lock); clk_register_clkdev(clk, NULL, "a3000000.serial"); + /* Enforce ras_apb_clk */ + clk_set_parent(clk, ras_apb_clk); clk = clk_register_mux(NULL, "uart2_clk", uartx_parents, ARRAY_SIZE(uartx_parents), @@ -349,6 +352,8 @@ static void __init spear320_clk_init(void __iomem *soc_config_base) SPEAR320_EXT_CTRL_REG, SPEAR320_UART2_PCLK_SHIFT, SPEAR320_UARTX_PCLK_MASK, 0, &_lock); clk_register_clkdev(clk, NULL, "a4000000.serial"); + /* Enforce ras_apb_clk */ + clk_set_parent(clk, ras_apb_clk); clk = clk_register_mux(NULL, "uart3_clk", uartx_parents, ARRAY_SIZE(uartx_parents), @@ -379,12 +384,12 @@ static void __init spear320_clk_init(void __iomem *soc_config_base) clk_register_clkdev(clk, NULL, "60100000.serial"); } #else -static inline void spear320_clk_init(void __iomem *soc_config_base) { } +static inline void spear320_clk_init(void __iomem *sb, struct clk *rc) { } #endif void __init spear3xx_clk_init(void __iomem *misc_base, void __iomem *soc_config_base) { - struct clk *clk, *clk1; + struct clk *clk, *clk1, *ras_apb_clk; clk = clk_register_fixed_rate(NULL, "osc_32k_clk", NULL, CLK_IS_ROOT, 32000); @@ -613,6 +618,7 @@ void __init spear3xx_clk_init(void __iomem *misc_base, void __iomem *soc_config_ clk = clk_register_gate(NULL, "ras_apb_clk", "apb_clk", 0, RAS_CLK_ENB, RAS_APB_CLK_ENB, 0, &_lock); clk_register_clkdev(clk, "ras_apb_clk", NULL); + ras_apb_clk = clk; clk = clk_register_gate(NULL, "ras_32k_clk", "osc_32k_clk", 0, RAS_CLK_ENB, RAS_32K_CLK_ENB, 0, &_lock); @@ -659,5 +665,5 @@ void __init spear3xx_clk_init(void __iomem *misc_base, void __iomem *soc_config_ else if (of_machine_is_compatible("st,spear310")) spear310_clk_init(); else if (of_machine_is_compatible("st,spear320")) - spear320_clk_init(soc_config_base); + spear320_clk_init(soc_config_base, ras_apb_clk); } diff --git a/drivers/clk/sunxi/clk-sun6i-apb0-gates.c b/drivers/clk/sunxi/clk-sun6i-apb0-gates.c index 44cd27c5c401..670f90d629d7 100644 --- a/drivers/clk/sunxi/clk-sun6i-apb0-gates.c +++ b/drivers/clk/sunxi/clk-sun6i-apb0-gates.c @@ -29,7 +29,7 @@ static int sun6i_a31_apb0_gates_clk_probe(struct platform_device *pdev) r = platform_get_resource(pdev, IORESOURCE_MEM, 0); reg = devm_ioremap_resource(&pdev->dev, r); - if (!reg) + if (IS_ERR(reg)) return PTR_ERR(reg); clk_parent = of_clk_get_parent_name(np, 0); diff --git a/drivers/clk/ti/apll.c b/drivers/clk/ti/apll.c index 5428c9c547cd..72d97279eae1 100644 --- a/drivers/clk/ti/apll.c +++ b/drivers/clk/ti/apll.c @@ -77,13 +77,11 @@ static int dra7_apll_enable(struct clk_hw *hw) if (i == MAX_APLL_WAIT_TRIES) { pr_warn("clock: %s failed transition to '%s'\n", clk_name, (state) ? "locked" : "bypassed"); - } else { + r = -EBUSY; + } else pr_debug("clock: %s transition to '%s' in %d loops\n", clk_name, (state) ? 
"locked" : "bypassed", i); - r = 0; - } - return r; } @@ -338,7 +336,7 @@ static void __init of_omap2_apll_setup(struct device_node *node) const char *parent_name; u32 val; - ad = kzalloc(sizeof(*clk_hw), GFP_KERNEL); + ad = kzalloc(sizeof(*ad), GFP_KERNEL); clk_hw = kzalloc(sizeof(*clk_hw), GFP_KERNEL); init = kzalloc(sizeof(*init), GFP_KERNEL); diff --git a/drivers/clk/ti/dpll.c b/drivers/clk/ti/dpll.c index abd956d5f838..79791e1bf282 100644 --- a/drivers/clk/ti/dpll.c +++ b/drivers/clk/ti/dpll.c @@ -161,7 +161,8 @@ cleanup: } #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ - defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM33XX) + defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM33XX) || \ + defined(CONFIG_SOC_AM43XX) /** * ti_clk_register_dpll_x2 - Registers a DPLLx2 clock * @node: device node for this clock @@ -322,7 +323,7 @@ CLK_OF_DECLARE(ti_omap4_dpll_x2_clock, "ti,omap4-dpll-x2-clock", of_ti_omap4_dpll_x2_setup); #endif -#ifdef CONFIG_SOC_AM33XX +#if defined(CONFIG_SOC_AM33XX) || defined(CONFIG_SOC_AM43XX) static void __init of_ti_am3_dpll_x2_setup(struct device_node *node) { ti_clk_register_dpll_x2(node, &dpll_x2_ck_ops, NULL); diff --git a/drivers/clk/ti/mux.c b/drivers/clk/ti/mux.c index 0197a478720c..e9d650e51287 100644 --- a/drivers/clk/ti/mux.c +++ b/drivers/clk/ti/mux.c @@ -160,7 +160,7 @@ static void of_mux_clk_setup(struct device_node *node) u8 clk_mux_flags = 0; u32 mask = 0; u32 shift = 0; - u32 flags = 0; + u32 flags = CLK_SET_RATE_NO_REPARENT; num_parents = of_clk_get_parent_count(node); if (num_parents < 2) { diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c index f71d55f5e6e5..ab51bf20a3ed 100644 --- a/drivers/clocksource/exynos_mct.c +++ b/drivers/clocksource/exynos_mct.c @@ -162,7 +162,7 @@ static void exynos4_mct_frc_start(void) exynos4_mct_write(reg, EXYNOS4_MCT_G_TCON); } -static cycle_t exynos4_frc_read(struct clocksource *cs) +static cycle_t notrace _exynos4_frc_read(void) { unsigned int lo, hi; u32 hi2 = __raw_readl(reg_base + EXYNOS4_MCT_G_CNT_U); @@ -176,6 +176,11 @@ static cycle_t exynos4_frc_read(struct clocksource *cs) return ((cycle_t)hi << 32) | lo; } +static cycle_t exynos4_frc_read(struct clocksource *cs) +{ + return _exynos4_frc_read(); +} + static void exynos4_frc_resume(struct clocksource *cs) { exynos4_mct_frc_start(); @@ -192,13 +197,24 @@ struct clocksource mct_frc = { static u64 notrace exynos4_read_sched_clock(void) { - return exynos4_frc_read(&mct_frc); + return _exynos4_frc_read(); +} + +static struct delay_timer exynos4_delay_timer; + +static cycles_t exynos4_read_current_timer(void) +{ + return _exynos4_frc_read(); } static void __init exynos4_clocksource_init(void) { exynos4_mct_frc_start(); + exynos4_delay_timer.read_current_timer = &exynos4_read_current_timer; + exynos4_delay_timer.freq = clk_rate; + register_current_timer_delay(&exynos4_delay_timer); + if (clocksource_register_hz(&mct_frc, clk_rate)) panic("%s: can't register clocksource\n", mct_frc.name); diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm index ebac67115009..7364a538e056 100644 --- a/drivers/cpufreq/Kconfig.arm +++ b/drivers/cpufreq/Kconfig.arm @@ -104,6 +104,7 @@ config ARM_IMX6Q_CPUFREQ tristate "Freescale i.MX6 cpufreq support" depends on ARCH_MXC depends on REGULATOR_ANATOP + select PM_OPP help This adds cpufreq driver support for Freescale i.MX6 series SoCs. @@ -118,7 +119,7 @@ config ARM_INTEGRATOR If in doubt, say Y. 
config ARM_KIRKWOOD_CPUFREQ - def_bool MACH_KIRKWOOD + def_bool ARCH_KIRKWOOD || MACH_KIRKWOOD help This adds the CPUFreq driver for Marvell Kirkwood SoCs. diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile index 738c8b7b17dc..db6d9a2fea4d 100644 --- a/drivers/cpufreq/Makefile +++ b/drivers/cpufreq/Makefile @@ -49,7 +49,7 @@ obj-$(CONFIG_ARM_BIG_LITTLE_CPUFREQ) += arm_big_little.o # LITTLE drivers, so that it is probed last. obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o -obj-$(CONFIG_ARCH_DAVINCI_DA850) += davinci-cpufreq.o +obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o obj-$(CONFIG_ARM_EXYNOS_CPUFREQ) += exynos-cpufreq.o obj-$(CONFIG_ARM_EXYNOS4210_CPUFREQ) += exynos4210-cpufreq.o diff --git a/drivers/cpufreq/cpufreq-cpu0.c b/drivers/cpufreq/cpufreq-cpu0.c index ee1ae303a07c..86beda9f950b 100644 --- a/drivers/cpufreq/cpufreq-cpu0.c +++ b/drivers/cpufreq/cpufreq-cpu0.c @@ -152,11 +152,8 @@ static int cpu0_cpufreq_probe(struct platform_device *pdev) goto out_put_reg; } - ret = of_init_opp_table(cpu_dev); - if (ret) { - pr_err("failed to init OPP table: %d\n", ret); - goto out_put_clk; - } + /* OPPs might be populated at runtime, don't check for error here */ + of_init_opp_table(cpu_dev); ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table); if (ret) { diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 62259d27f03e..6f024852c6fb 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1153,10 +1153,12 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif) * the creation of a brand new one. So we need to perform this update * by invoking update_policy_cpu(). */ - if (recover_policy && cpu != policy->cpu) + if (recover_policy && cpu != policy->cpu) { update_policy_cpu(policy, cpu); - else + WARN_ON(kobject_move(&policy->kobj, &dev->kobj)); + } else { policy->cpu = cpu; + } cpumask_copy(policy->cpus, cpumask_of(cpu)); diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c index 924bb2d42b1c..86631cb6f7de 100644 --- a/drivers/cpufreq/intel_pstate.c +++ b/drivers/cpufreq/intel_pstate.c @@ -128,6 +128,7 @@ static struct pstate_funcs pstate_funcs; struct perf_limits { int no_turbo; + int turbo_disabled; int max_perf_pct; int min_perf_pct; int32_t max_perf; @@ -287,7 +288,10 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b, if (ret != 1) return -EINVAL; limits.no_turbo = clamp_t(int, input, 0 , 1); - + if (limits.turbo_disabled) { + pr_warn("Turbo disabled by BIOS or unavailable on processor\n"); + limits.no_turbo = limits.turbo_disabled; + } return count; } @@ -357,21 +361,21 @@ static int byt_get_min_pstate(void) { u64 value; rdmsrl(BYT_RATIOS, value); - return (value >> 8) & 0x3F; + return (value >> 8) & 0x7F; } static int byt_get_max_pstate(void) { u64 value; rdmsrl(BYT_RATIOS, value); - return (value >> 16) & 0x3F; + return (value >> 16) & 0x7F; } static int byt_get_turbo_pstate(void) { u64 value; rdmsrl(BYT_TURBO_RATIOS, value); - return value & 0x3F; + return value & 0x7F; } static void byt_set_pstate(struct cpudata *cpudata, int pstate) @@ -381,7 +385,7 @@ static void byt_set_pstate(struct cpudata *cpudata, int pstate) u32 vid; val = pstate << 8; - if (limits.no_turbo) + if (limits.no_turbo && !limits.turbo_disabled) val |= (u64)1 << 32; vid_fp = cpudata->vid.min + mul_fp( @@ -405,8 +409,8 @@ static void byt_get_vid(struct cpudata *cpudata) rdmsrl(BYT_VIDS, value); - cpudata->vid.min = int_tofp((value 
>> 8) & 0x3f); - cpudata->vid.max = int_tofp((value >> 16) & 0x3f); + cpudata->vid.min = int_tofp((value >> 8) & 0x7f); + cpudata->vid.max = int_tofp((value >> 16) & 0x7f); cpudata->vid.ratio = div_fp( cpudata->vid.max - cpudata->vid.min, int_tofp(cpudata->pstate.max_pstate - @@ -448,7 +452,7 @@ static void core_set_pstate(struct cpudata *cpudata, int pstate) u64 val; val = pstate << 8; - if (limits.no_turbo) + if (limits.no_turbo && !limits.turbo_disabled) val |= (u64)1 << 32; wrmsrl_on_cpu(cpudata->cpu, MSR_IA32_PERF_CTL, val); @@ -696,9 +700,8 @@ static int intel_pstate_init_cpu(unsigned int cpunum) cpu = all_cpu_data[cpunum]; - intel_pstate_get_cpu_pstates(cpu); - cpu->cpu = cpunum; + intel_pstate_get_cpu_pstates(cpu); init_timer_deferrable(&cpu->timer); cpu->timer.function = intel_pstate_timer_func; @@ -741,7 +744,7 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy) limits.min_perf = int_tofp(1); limits.max_perf_pct = 100; limits.max_perf = int_tofp(1); - limits.no_turbo = 0; + limits.no_turbo = limits.turbo_disabled; return 0; } limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq; @@ -784,6 +787,7 @@ static int intel_pstate_cpu_init(struct cpufreq_policy *policy) { struct cpudata *cpu; int rc; + u64 misc_en; rc = intel_pstate_init_cpu(policy->cpu); if (rc) @@ -791,8 +795,13 @@ static int intel_pstate_cpu_init(struct cpufreq_policy *policy) cpu = all_cpu_data[policy->cpu]; - if (!limits.no_turbo && - limits.min_perf_pct == 100 && limits.max_perf_pct == 100) + rdmsrl(MSR_IA32_MISC_ENABLE, misc_en); + if (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE || + cpu->pstate.max_pstate == cpu->pstate.turbo_pstate) { + limits.turbo_disabled = 1; + limits.no_turbo = 1; + } + if (limits.min_perf_pct == 100 && limits.max_perf_pct == 100) policy->policy = CPUFREQ_POLICY_PERFORMANCE; else policy->policy = CPUFREQ_POLICY_POWERSAVE; diff --git a/drivers/cpufreq/sa1110-cpufreq.c b/drivers/cpufreq/sa1110-cpufreq.c index 546376719d8f..b5befc211172 100644 --- a/drivers/cpufreq/sa1110-cpufreq.c +++ b/drivers/cpufreq/sa1110-cpufreq.c @@ -349,7 +349,7 @@ static int __init sa1110_clk_init(void) name = "K4S641632D"; if (machine_is_h3100()) name = "KM416S4030CT"; - if (machine_is_jornada720()) + if (machine_is_jornada720() || machine_is_h3600()) name = "K4S281632B-1H"; if (machine_is_nanoengine()) name = "MT48LC8M16A2TG-75"; diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c index 1d80bd3636c5..b512a4ba7569 100644 --- a/drivers/crypto/caam/jr.c +++ b/drivers/crypto/caam/jr.c @@ -453,8 +453,8 @@ static int caam_jr_probe(struct platform_device *pdev) int error; jrdev = &pdev->dev; - jrpriv = kmalloc(sizeof(struct caam_drv_private_jr), - GFP_KERNEL); + jrpriv = devm_kmalloc(jrdev, sizeof(struct caam_drv_private_jr), + GFP_KERNEL); if (!jrpriv) return -ENOMEM; @@ -487,10 +487,8 @@ static int caam_jr_probe(struct platform_device *pdev) /* Now do the platform independent part */ error = caam_jr_init(jrdev); /* now turn on hardware */ - if (error) { - kfree(jrpriv); + if (error) return error; - } jrpriv->dev = jrdev; spin_lock(&driver_data.jr_alloc_lock); diff --git a/drivers/dma/cppi41.c b/drivers/dma/cppi41.c index d028f36ae655..8f8b0b608875 100644 --- a/drivers/dma/cppi41.c +++ b/drivers/dma/cppi41.c @@ -86,6 +86,9 @@ #define USBSS_IRQ_PD_COMP (1 << 2) +/* Packet Descriptor */ +#define PD2_ZERO_LENGTH (1 << 19) + struct cppi41_channel { struct dma_chan chan; struct dma_async_tx_descriptor txd; @@ -307,7 +310,7 @@ static irqreturn_t cppi41_irq(int irq, void *data) 
__iormb(); while (val) { - u32 desc; + u32 desc, len; q_num = __fls(val); val &= ~(1 << q_num); @@ -319,9 +322,13 @@ static irqreturn_t cppi41_irq(int irq, void *data) q_num, desc); continue; } - c->residue = pd_trans_len(c->desc->pd6) - - pd_trans_len(c->desc->pd0); + if (c->desc->pd2 & PD2_ZERO_LENGTH) + len = 0; + else + len = pd_trans_len(c->desc->pd0); + + c->residue = pd_trans_len(c->desc->pd6) - len; dma_cookie_complete(&c->txd); c->txd.callback(c->txd.callback_param); } diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c index 128714622bf5..14867e3ac8ff 100644 --- a/drivers/dma/imx-sdma.c +++ b/drivers/dma/imx-sdma.c @@ -255,6 +255,7 @@ struct sdma_channel { enum dma_slave_buswidth word_size; unsigned int buf_tail; unsigned int num_bd; + unsigned int period_len; struct sdma_buffer_descriptor *bd; dma_addr_t bd_phys; unsigned int pc_from_device, pc_to_device; @@ -593,6 +594,12 @@ static void sdma_event_disable(struct sdma_channel *sdmac, unsigned int event) static void sdma_handle_channel_loop(struct sdma_channel *sdmac) { + if (sdmac->desc.callback) + sdmac->desc.callback(sdmac->desc.callback_param); +} + +static void sdma_update_channel_loop(struct sdma_channel *sdmac) +{ struct sdma_buffer_descriptor *bd; /* @@ -611,9 +618,6 @@ static void sdma_handle_channel_loop(struct sdma_channel *sdmac) bd->mode.status |= BD_DONE; sdmac->buf_tail++; sdmac->buf_tail %= sdmac->num_bd; - - if (sdmac->desc.callback) - sdmac->desc.callback(sdmac->desc.callback_param); } } @@ -669,6 +673,9 @@ static irqreturn_t sdma_int_handler(int irq, void *dev_id) int channel = fls(stat) - 1; struct sdma_channel *sdmac = &sdma->channel[channel]; + if (sdmac->flags & IMX_DMA_SG_LOOP) + sdma_update_channel_loop(sdmac); + tasklet_schedule(&sdmac->tasklet); __clear_bit(channel, &stat); @@ -1129,6 +1136,7 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic( sdmac->status = DMA_IN_PROGRESS; sdmac->buf_tail = 0; + sdmac->period_len = period_len; sdmac->flags |= IMX_DMA_SG_LOOP; sdmac->direction = direction; @@ -1225,9 +1233,15 @@ static enum dma_status sdma_tx_status(struct dma_chan *chan, struct dma_tx_state *txstate) { struct sdma_channel *sdmac = to_sdma_chan(chan); + u32 residue; + + if (sdmac->flags & IMX_DMA_SG_LOOP) + residue = (sdmac->num_bd - sdmac->buf_tail) * sdmac->period_len; + else + residue = sdmac->chn_count - sdmac->chn_real_count; dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie, - sdmac->chn_count - sdmac->chn_real_count); + residue); return sdmac->status; } diff --git a/drivers/firewire/Kconfig b/drivers/firewire/Kconfig index 4199849e3758..145974f9662b 100644 --- a/drivers/firewire/Kconfig +++ b/drivers/firewire/Kconfig @@ -1,4 +1,5 @@ menu "IEEE 1394 (FireWire) support" + depends on HAS_DMA depends on PCI || COMPILE_TEST # firewire-core does not depend on PCI but is # not useful without PCI controller driver diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c index eff1a2f22f09..dc79346689e6 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c @@ -346,6 +346,7 @@ static __initdata struct { struct param_info { int verbose; + int found; void *params; }; @@ -362,16 +363,12 @@ static int __init fdt_find_uefi_params(unsigned long node, const char *uname, (strcmp(uname, "chosen") != 0 && strcmp(uname, "chosen@0") != 0)) return 0; - pr_info("Getting parameters from FDT:\n"); - for (i = 0; i < ARRAY_SIZE(dt_params); i++) { prop = of_get_flat_dt_prop(node, dt_params[i].propname, &len); - if (!prop) { - pr_err("Can't find %s in device 
tree!\n", - dt_params[i].name); + if (!prop) return 0; - } dest = info->params + dt_params[i].offset; + info->found++; val = of_read_number(prop, len / sizeof(u32)); @@ -390,10 +387,21 @@ static int __init fdt_find_uefi_params(unsigned long node, const char *uname, int __init efi_get_fdt_params(struct efi_fdt_params *params, int verbose) { struct param_info info; + int ret; + + pr_info("Getting EFI parameters from FDT:\n"); info.verbose = verbose; + info.found = 0; info.params = params; - return of_scan_flat_dt(fdt_find_uefi_params, &info); + ret = of_scan_flat_dt(fdt_find_uefi_params, &info); + if (!info.found) + pr_info("UEFI not found.\n"); + else if (!ret) + pr_err("Can't find '%s' in device tree!\n", + dt_params[info.found].name); + + return ret; } #endif /* CONFIG_EFI_PARAMS_FROM_FDT */ diff --git a/drivers/firmware/efi/fdt.c b/drivers/firmware/efi/fdt.c index 82d774161cc9..507a3df46a5d 100644 --- a/drivers/firmware/efi/fdt.c +++ b/drivers/firmware/efi/fdt.c @@ -23,16 +23,6 @@ static efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt, u32 fdt_val32; u64 fdt_val64; - /* - * Copy definition of linux_banner here. Since this code is - * built as part of the decompressor for ARM v7, pulling - * in version.c where linux_banner is defined for the - * kernel brings other kernel dependencies with it. - */ - const char linux_banner[] = - "Linux version " UTS_RELEASE " (" LINUX_COMPILE_BY "@" - LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION "\n"; - /* Do some checks on provided FDT, if it exists*/ if (orig_fdt) { if (fdt_check_header(orig_fdt)) { diff --git a/drivers/gpio/gpio-mcp23s08.c b/drivers/gpio/gpio-mcp23s08.c index fe7c0e211f9a..57adbc90fdad 100644 --- a/drivers/gpio/gpio-mcp23s08.c +++ b/drivers/gpio/gpio-mcp23s08.c @@ -900,8 +900,6 @@ static int mcp23s08_probe(struct spi_device *spi) if (spi_present_mask & (1 << addr)) chips++; } - if (!chips) - return -ENODEV; } else { type = spi_get_device_id(spi)->driver_data; pdata = dev_get_platdata(&spi->dev); @@ -940,10 +938,6 @@ static int mcp23s08_probe(struct spi_device *spi) if (!(spi_present_mask & (1 << addr))) continue; chips--; - if (chips < 0) { - dev_err(&spi->dev, "FATAL: invalid negative chip id\n"); - goto fail; - } data->mcp[addr] = &data->chip[chips]; status = mcp23s08_probe_one(data->mcp[addr], &spi->dev, spi, 0x40 | (addr << 1), type, base, diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c index 6c656392d67d..d44344140627 100644 --- a/drivers/gpu/drm/i915/i915_dma.c +++ b/drivers/gpu/drm/i915/i915_dma.c @@ -1464,12 +1464,13 @@ static int i915_kick_out_vgacon(struct drm_i915_private *dev_priv) #else static int i915_kick_out_vgacon(struct drm_i915_private *dev_priv) { - int ret; + int ret = 0; DRM_INFO("Replacing VGA console driver\n"); console_lock(); - ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1); + if (con_is_bound(&vga_con)) + ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1); if (ret == 0) { ret = do_unregister_con_driver(&vga_con); diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index a47fbf60b781..374f964323ad 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -656,6 +656,7 @@ enum intel_sbi_destination { #define QUIRK_PIPEA_FORCE (1<<0) #define QUIRK_LVDS_SSC_DISABLE (1<<1) #define QUIRK_INVERT_BRIGHTNESS (1<<2) +#define QUIRK_BACKLIGHT_PRESENT (1<<3) struct intel_fbdev; struct intel_fbc_work; diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c 
b/drivers/gpu/drm/i915/i915_gem_stolen.c index 62ef55ba061c..7465ab0fd396 100644 --- a/drivers/gpu/drm/i915/i915_gem_stolen.c +++ b/drivers/gpu/drm/i915/i915_gem_stolen.c @@ -74,6 +74,50 @@ static unsigned long i915_stolen_to_physical(struct drm_device *dev) if (base == 0) return 0; + /* make sure we don't clobber the GTT if it's within stolen memory */ + if (INTEL_INFO(dev)->gen <= 4 && !IS_G33(dev) && !IS_G4X(dev)) { + struct { + u32 start, end; + } stolen[2] = { + { .start = base, .end = base + dev_priv->gtt.stolen_size, }, + { .start = base, .end = base + dev_priv->gtt.stolen_size, }, + }; + u64 gtt_start, gtt_end; + + gtt_start = I915_READ(PGTBL_CTL); + if (IS_GEN4(dev)) + gtt_start = (gtt_start & PGTBL_ADDRESS_LO_MASK) | + (gtt_start & PGTBL_ADDRESS_HI_MASK) << 28; + else + gtt_start &= PGTBL_ADDRESS_LO_MASK; + gtt_end = gtt_start + gtt_total_entries(dev_priv->gtt) * 4; + + if (gtt_start >= stolen[0].start && gtt_start < stolen[0].end) + stolen[0].end = gtt_start; + if (gtt_end > stolen[1].start && gtt_end <= stolen[1].end) + stolen[1].start = gtt_end; + + /* pick the larger of the two chunks */ + if (stolen[0].end - stolen[0].start > + stolen[1].end - stolen[1].start) { + base = stolen[0].start; + dev_priv->gtt.stolen_size = stolen[0].end - stolen[0].start; + } else { + base = stolen[1].start; + dev_priv->gtt.stolen_size = stolen[1].end - stolen[1].start; + } + + if (stolen[0].start != stolen[1].start || + stolen[0].end != stolen[1].end) { + DRM_DEBUG_KMS("GTT within stolen memory at 0x%llx-0x%llx\n", + (unsigned long long) gtt_start, + (unsigned long long) gtt_end - 1); + DRM_DEBUG_KMS("Stolen memory adjusted to 0x%x-0x%x\n", + base, base + (u32) dev_priv->gtt.stolen_size - 1); + } + } + + /* Verify that nothing else uses this physical address. Stolen * memory should be reserved by the BIOS and hidden from the * kernel. 
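A worked example of the chunk selection added to i915_stolen_to_physical() above, with made-up numbers (the addresses are illustrative, not taken from real hardware):

  base = 0x20000000, stolen_size = 64 MiB, so stolen spans 0x20000000..0x24000000
  GTT table in the last 1 MiB:  gtt_start = 0x23f00000, gtt_end = 0x24000000
  stolen[0] = 0x20000000..0x23f00000   (63 MiB; end clipped to gtt_start)
  stolen[1] = 0x24000000..0x24000000   (empty; start moved up to gtt_end)
  The larger chunk wins, so base stays 0x20000000 and stolen_size shrinks to 63 MiB,
  keeping the GTT table out of the range handed to the stolen-memory allocator.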
So if the region is already marked as busy, something diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index e691b30b2817..a5bab61bfc00 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -942,6 +942,9 @@ enum punit_power_well { /* * Instruction and interrupt control regs */ +#define PGTBL_CTL 0x02020 +#define PGTBL_ADDRESS_LO_MASK 0xfffff000 /* bits [31:12] */ +#define PGTBL_ADDRESS_HI_MASK 0x000000f0 /* bits [35:32] (gen4) */ #define PGTBL_ER 0x02024 #define RENDER_RING_BASE 0x02000 #define BSD_RING_BASE 0x04000 diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c index 556c916dbf9d..f0be855ddf45 100644 --- a/drivers/gpu/drm/i915/intel_display.c +++ b/drivers/gpu/drm/i915/intel_display.c @@ -11591,6 +11591,14 @@ static void quirk_invert_brightness(struct drm_device *dev) DRM_INFO("applying inverted panel brightness quirk\n"); } +/* Some VBT's incorrectly indicate no backlight is present */ +static void quirk_backlight_present(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + dev_priv->quirks |= QUIRK_BACKLIGHT_PRESENT; + DRM_INFO("applying backlight present quirk\n"); +} + struct intel_quirk { int device; int subsystem_vendor; @@ -11659,6 +11667,15 @@ static struct intel_quirk intel_quirks[] = { /* Acer Aspire 5336 */ { 0x2a42, 0x1025, 0x048a, quirk_invert_brightness }, + + /* Acer C720 and C720P Chromebooks (Celeron 2955U) have backlights */ + { 0x0a06, 0x1025, 0x0a11, quirk_backlight_present }, + + /* Toshiba CB35 Chromebook (Celeron 2955U) */ + { 0x0a06, 0x1179, 0x0a88, quirk_backlight_present }, + + /* HP Chromebook 14 (Celeron 2955U) */ + { 0x0a06, 0x103c, 0x21ed, quirk_backlight_present }, }; static void intel_init_quirks(struct drm_device *dev) @@ -11897,6 +11914,7 @@ static void intel_sanitize_crtc(struct intel_crtc *crtc) * ... 
*/ plane = crtc->plane; crtc->plane = !plane; + crtc->primary_enabled = true; dev_priv->display.crtc_disable(&crtc->base); crtc->plane = plane; diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c index 52fda950fd2a..8a1a4fbc06ac 100644 --- a/drivers/gpu/drm/i915/intel_dp.c +++ b/drivers/gpu/drm/i915/intel_dp.c @@ -28,6 +28,8 @@ #include <linux/i2c.h> #include <linux/slab.h> #include <linux/export.h> +#include <linux/notifier.h> +#include <linux/reboot.h> #include <drm/drmP.h> #include <drm/drm_crtc.h> #include <drm/drm_crtc_helper.h> @@ -336,6 +338,37 @@ static u32 _pp_stat_reg(struct intel_dp *intel_dp) return VLV_PIPE_PP_STATUS(vlv_power_sequencer_pipe(intel_dp)); } +/* Reboot notifier handler to shutdown panel power to guarantee T12 timing + This function only applicable when panel PM state is not to be tracked */ +static int edp_notify_handler(struct notifier_block *this, unsigned long code, + void *unused) +{ + struct intel_dp *intel_dp = container_of(this, typeof(* intel_dp), + edp_notifier); + struct drm_device *dev = intel_dp_to_dev(intel_dp); + struct drm_i915_private *dev_priv = dev->dev_private; + u32 pp_div; + u32 pp_ctrl_reg, pp_div_reg; + enum pipe pipe = vlv_power_sequencer_pipe(intel_dp); + + if (!is_edp(intel_dp) || code != SYS_RESTART) + return 0; + + if (IS_VALLEYVIEW(dev)) { + pp_ctrl_reg = VLV_PIPE_PP_CONTROL(pipe); + pp_div_reg = VLV_PIPE_PP_DIVISOR(pipe); + pp_div = I915_READ(pp_div_reg); + pp_div &= PP_REFERENCE_DIVIDER_MASK; + + /* 0x1F write to PP_DIV_REG sets max cycle delay */ + I915_WRITE(pp_div_reg, pp_div | 0x1F); + I915_WRITE(pp_ctrl_reg, PANEL_UNLOCK_REGS | PANEL_POWER_OFF); + msleep(intel_dp->panel_power_cycle_delay); + } + + return 0; +} + static bool edp_have_panel_power(struct intel_dp *intel_dp) { struct drm_device *dev = intel_dp_to_dev(intel_dp); @@ -873,8 +906,8 @@ intel_dp_compute_config(struct intel_encoder *encoder, mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock, bpp); - for (lane_count = min_lane_count; lane_count <= max_lane_count; lane_count <<= 1) { - for (clock = min_clock; clock <= max_clock; clock++) { + for (clock = min_clock; clock <= max_clock; clock++) { + for (lane_count = min_lane_count; lane_count <= max_lane_count; lane_count <<= 1) { link_clock = drm_dp_bw_code_to_link_rate(bws[clock]); link_avail = intel_dp_max_data_rate(link_clock, lane_count); @@ -3707,6 +3740,10 @@ void intel_dp_encoder_destroy(struct drm_encoder *encoder) drm_modeset_lock(&dev->mode_config.connection_mutex, NULL); edp_panel_vdd_off_sync(intel_dp); drm_modeset_unlock(&dev->mode_config.connection_mutex); + if (intel_dp->edp_notifier.notifier_call) { + unregister_reboot_notifier(&intel_dp->edp_notifier); + intel_dp->edp_notifier.notifier_call = NULL; + } } kfree(intel_dig_port); } @@ -4184,6 +4221,11 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp, } mutex_unlock(&dev->mode_config.mutex); + if (IS_VALLEYVIEW(dev)) { + intel_dp->edp_notifier.notifier_call = edp_notify_handler; + register_reboot_notifier(&intel_dp->edp_notifier); + } + intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode); intel_panel_setup_backlight(connector); diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h index eaa27ee9e367..f67340ed2c12 100644 --- a/drivers/gpu/drm/i915/intel_drv.h +++ b/drivers/gpu/drm/i915/intel_drv.h @@ -538,6 +538,8 @@ struct intel_dp { unsigned long last_power_on; unsigned long last_backlight_off; bool psr_setup_done; + struct notifier_block edp_notifier; + bool 
use_tps3; struct intel_connector *attached_connector; diff --git a/drivers/gpu/drm/i915/intel_dsi.c b/drivers/gpu/drm/i915/intel_dsi.c index 02f99d768d49..3fd082933c87 100644 --- a/drivers/gpu/drm/i915/intel_dsi.c +++ b/drivers/gpu/drm/i915/intel_dsi.c @@ -117,17 +117,18 @@ static void intel_dsi_device_ready(struct intel_encoder *encoder) /* bandgap reset is needed after everytime we do power gate */ band_gap_reset(dev_priv); + I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); + usleep_range(2500, 3000); + val = I915_READ(MIPI_PORT_CTRL(pipe)); I915_WRITE(MIPI_PORT_CTRL(pipe), val | LP_OUTPUT_HOLD); usleep_range(1000, 1500); - I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_EXIT); - usleep_range(2000, 2500); - I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY); - usleep_range(2000, 2500); - I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00); - usleep_range(2000, 2500); + + I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_EXIT); + usleep_range(2500, 3000); + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY); - usleep_range(2000, 2500); + usleep_range(2500, 3000); } static void intel_dsi_enable(struct intel_encoder *encoder) @@ -271,23 +272,23 @@ static void intel_dsi_clear_device_ready(struct intel_encoder *encoder) DRM_DEBUG_KMS("\n"); - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_ENTER); usleep_range(2000, 2500); - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_EXIT); + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_EXIT); usleep_range(2000, 2500); - I915_WRITE(MIPI_DEVICE_READY(pipe), ULPS_STATE_ENTER); + I915_WRITE(MIPI_DEVICE_READY(pipe), DEVICE_READY | ULPS_STATE_ENTER); usleep_range(2000, 2500); - val = I915_READ(MIPI_PORT_CTRL(pipe)); - I915_WRITE(MIPI_PORT_CTRL(pipe), val & ~LP_OUTPUT_HOLD); - usleep_range(1000, 1500); - if (wait_for(((I915_READ(MIPI_PORT_CTRL(pipe)) & AFE_LATCHOUT) == 0x00000), 30)) DRM_ERROR("DSI LP not going Low\n"); + val = I915_READ(MIPI_PORT_CTRL(pipe)); + I915_WRITE(MIPI_PORT_CTRL(pipe), val & ~LP_OUTPUT_HOLD); + usleep_range(1000, 1500); + I915_WRITE(MIPI_DEVICE_READY(pipe), 0x00); usleep_range(2000, 2500); diff --git a/drivers/gpu/drm/i915/intel_dsi_cmd.c b/drivers/gpu/drm/i915/intel_dsi_cmd.c index 3eeb21b9fddf..933c86305237 100644 --- a/drivers/gpu/drm/i915/intel_dsi_cmd.c +++ b/drivers/gpu/drm/i915/intel_dsi_cmd.c @@ -404,12 +404,6 @@ int dpi_send_cmd(struct intel_dsi *intel_dsi, u32 cmd, bool hs) else cmd |= DPI_LP_MODE; - /* DPI virtual channel?! 
*/ - - mask = DPI_FIFO_EMPTY; - if (wait_for((I915_READ(MIPI_GEN_FIFO_STAT(pipe)) & mask) == mask, 50)) - DRM_ERROR("Timeout waiting for DPI FIFO empty.\n"); - /* clear bit */ I915_WRITE(MIPI_INTR_STAT(pipe), SPL_PKT_SENT_INTERRUPT); diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c index 23126023aeba..5e5a72fca5fb 100644 --- a/drivers/gpu/drm/i915/intel_lvds.c +++ b/drivers/gpu/drm/i915/intel_lvds.c @@ -111,6 +111,13 @@ static void intel_lvds_get_config(struct intel_encoder *encoder, pipe_config->adjusted_mode.flags |= flags; + /* gen2/3 store dither state in pfit control, needs to match */ + if (INTEL_INFO(dev)->gen < 4) { + tmp = I915_READ(PFIT_CONTROL); + + pipe_config->gmch_pfit.control |= tmp & PANEL_8TO6_DITHER_ENABLE; + } + dotclock = pipe_config->port_clock; if (HAS_PCH_SPLIT(dev_priv->dev)) diff --git a/drivers/gpu/drm/i915/intel_opregion.c b/drivers/gpu/drm/i915/intel_opregion.c index 2e2c71fcc9ed..4f6b53998d79 100644 --- a/drivers/gpu/drm/i915/intel_opregion.c +++ b/drivers/gpu/drm/i915/intel_opregion.c @@ -403,6 +403,15 @@ static u32 asle_set_backlight(struct drm_device *dev, u32 bclp) DRM_DEBUG_DRIVER("bclp = 0x%08x\n", bclp); + /* + * If the acpi_video interface is not supposed to be used, don't + * bother processing backlight level change requests from firmware. + */ + if (!acpi_video_verify_backlight_support()) { + DRM_DEBUG_KMS("opregion backlight request ignored\n"); + return 0; + } + if (!(bclp & ASLE_BCLP_VALID)) return ASLC_BACKLIGHT_FAILED; diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c index 38a98570d10c..12b02fe1d0ae 100644 --- a/drivers/gpu/drm/i915/intel_panel.c +++ b/drivers/gpu/drm/i915/intel_panel.c @@ -361,16 +361,16 @@ void intel_gmch_panel_fitting(struct intel_crtc *intel_crtc, pfit_control |= ((intel_crtc->pipe << PFIT_PIPE_SHIFT) | PFIT_FILTER_FUZZY); - /* Make sure pre-965 set dither correctly for 18bpp panels. */ - if (INTEL_INFO(dev)->gen < 4 && pipe_config->pipe_bpp == 18) - pfit_control |= PANEL_8TO6_DITHER_ENABLE; - out: if ((pfit_control & PFIT_ENABLE) == 0) { pfit_control = 0; pfit_pgm_ratios = 0; } + /* Make sure pre-965 set dither correctly for 18bpp panels. 
*/ + if (INTEL_INFO(dev)->gen < 4 && pipe_config->pipe_bpp == 18) + pfit_control |= PANEL_8TO6_DITHER_ENABLE; + pipe_config->gmch_pfit.control = pfit_control; pipe_config->gmch_pfit.pgm_ratios = pfit_pgm_ratios; pipe_config->gmch_pfit.lvds_border_bits = border; @@ -1118,8 +1118,12 @@ int intel_panel_setup_backlight(struct drm_connector *connector) int ret; if (!dev_priv->vbt.backlight.present) { - DRM_DEBUG_KMS("native backlight control not available per VBT\n"); - return 0; + if (dev_priv->quirks & QUIRK_BACKLIGHT_PRESENT) { + DRM_DEBUG_KMS("no backlight present per VBT, but present per quirk\n"); + } else { + DRM_DEBUG_KMS("no backlight present per VBT\n"); + return 0; + } } /* set level and max in panel struct */ diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c b/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c index 26e962b7e702..2283c442a10d 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c @@ -1516,11 +1516,11 @@ nv50_disp_intr_unk20_2(struct nv50_disp_priv *priv, int head) } switch ((ctrl & 0x000f0000) >> 16) { - case 6: datarate = pclk * 30 / 8; break; - case 5: datarate = pclk * 24 / 8; break; + case 6: datarate = pclk * 30; break; + case 5: datarate = pclk * 24; break; case 2: default: - datarate = pclk * 18 / 8; + datarate = pclk * 18; break; } diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c b/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c index 48aa38a87e3f..fa30d8196f35 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/nvd0.c @@ -1159,11 +1159,11 @@ nvd0_disp_intr_unk2_2(struct nv50_disp_priv *priv, int head) if (outp->info.type == DCB_OUTPUT_DP) { u32 sync = nv_rd32(priv, 0x660404 + (head * 0x300)); switch ((sync & 0x000003c0) >> 6) { - case 6: pclk = pclk * 30 / 8; break; - case 5: pclk = pclk * 24 / 8; break; + case 6: pclk = pclk * 30; break; + case 5: pclk = pclk * 24; break; case 2: default: - pclk = pclk * 18 / 8; + pclk = pclk * 18; break; } diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/outpdp.c b/drivers/gpu/drm/nouveau/core/engine/disp/outpdp.c index 52c299c3d300..eb2d7789555d 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/outpdp.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/outpdp.c @@ -34,7 +34,7 @@ nvkm_output_dp_train(struct nvkm_output *base, u32 datarate, bool wait) struct nvkm_output_dp *outp = (void *)base; bool retrain = true; u8 link[2], stat[3]; - u32 rate; + u32 linkrate; int ret, i; /* check that the link is trained at a high enough rate */ @@ -44,8 +44,10 @@ nvkm_output_dp_train(struct nvkm_output *base, u32 datarate, bool wait) goto done; } - rate = link[0] * 27000 * (link[1] & DPCD_LC01_LANE_COUNT_SET); - if (rate < ((datarate / 8) * 10)) { + linkrate = link[0] * 27000 * (link[1] & DPCD_LC01_LANE_COUNT_SET); + linkrate = (linkrate * 8) / 10; /* 8B/10B coding overhead */ + datarate = (datarate + 9) / 10; /* -> decakilobits */ + if (linkrate < datarate) { DBG("link not trained at sufficient rate\n"); goto done; } diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c b/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c index e1832778e8b6..7a1ebdfa9e1b 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c @@ -87,6 +87,7 @@ nv50_sor_mthd(struct nouveau_object *object, u32 mthd, void *args, u32 size) struct nvkm_output_dp *outpdp = (void *)outp; switch (data) { case NV94_DISP_SOR_DP_PWR_STATE_OFF: + 
nouveau_event_put(outpdp->irq); ((struct nvkm_output_dp_impl *)nv_oclass(outp)) ->lnk_pwr(outpdp, 0); atomic_set(&outpdp->lt.done, 0); diff --git a/drivers/gpu/drm/nouveau/core/subdev/fb/ramfuc.h b/drivers/gpu/drm/nouveau/core/subdev/fb/ramfuc.h index 0f57fcfe0bbf..2af9cfd2c60f 100644 --- a/drivers/gpu/drm/nouveau/core/subdev/fb/ramfuc.h +++ b/drivers/gpu/drm/nouveau/core/subdev/fb/ramfuc.h @@ -26,7 +26,7 @@ ramfuc_reg2(u32 addr1, u32 addr2) }; } -static inline struct ramfuc_reg +static noinline struct ramfuc_reg ramfuc_reg(u32 addr) { return ramfuc_reg2(addr, addr); @@ -107,7 +107,7 @@ ramfuc_nsec(struct ramfuc *ram, u32 nsec) #define ram_init(s,p) ramfuc_init(&(s)->base, (p)) #define ram_exec(s,e) ramfuc_exec(&(s)->base, (e)) -#define ram_have(s,r) ((s)->r_##r.addr != 0x000000) +#define ram_have(s,r) ((s)->r_##r.addr[0] != 0x000000) #define ram_rd32(s,r) ramfuc_rd32(&(s)->base, &(s)->r_##r) #define ram_wr32(s,r,d) ramfuc_wr32(&(s)->base, &(s)->r_##r, (d)) #define ram_nuke(s,r) ramfuc_nuke(&(s)->base, &(s)->r_##r) diff --git a/drivers/gpu/drm/nouveau/core/subdev/fb/ramnve0.c b/drivers/gpu/drm/nouveau/core/subdev/fb/ramnve0.c index 1ad3ea503133..c5b46e302319 100644 --- a/drivers/gpu/drm/nouveau/core/subdev/fb/ramnve0.c +++ b/drivers/gpu/drm/nouveau/core/subdev/fb/ramnve0.c @@ -200,6 +200,7 @@ r1373f4_init(struct nve0_ramfuc *fuc) /* (re)program mempll, if required */ if (ram->mode == 2) { ram_mask(fuc, 0x1373f4, 0x00010000, 0x00000000); + ram_mask(fuc, 0x132000, 0x80000000, 0x80000000); ram_mask(fuc, 0x132000, 0x00000001, 0x00000000); ram_mask(fuc, 0x132004, 0x103fffff, mcoef); ram_mask(fuc, 0x132000, 0x00000001, 0x00000001); diff --git a/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c b/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c index cfde9eb44ad0..6212537b90c5 100644 --- a/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c +++ b/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c @@ -192,11 +192,11 @@ alarm_timer_callback(struct nouveau_alarm *alarm) nouveau_therm_threshold_hyst_polling(therm, &sensor->thrs_shutdown, NOUVEAU_THERM_THRS_SHUTDOWN); + spin_unlock_irqrestore(&priv->sensor.alarm_program_lock, flags); + /* schedule the next poll in one second */ if (therm->temp_get(therm) >= 0 && list_empty(&alarm->head)) - ptimer->alarm(ptimer, 1000 * 1000 * 1000, alarm); - - spin_unlock_irqrestore(&priv->sensor.alarm_program_lock, flags); + ptimer->alarm(ptimer, 1000000000ULL, alarm); } void diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c index e8ae68a9aaf1..c9428c943afb 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drm.c +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c @@ -652,12 +652,12 @@ int nouveau_pmops_resume(struct device *dev) ret = nouveau_do_resume(drm_dev); if (ret) return ret; - if (drm_dev->mode_config.num_crtc) - nouveau_fbcon_set_suspend(drm_dev, 0); - nouveau_fbcon_zfill_all(drm_dev); - if (drm_dev->mode_config.num_crtc) + if (drm_dev->mode_config.num_crtc) { nouveau_display_resume(drm_dev); + nouveau_fbcon_set_suspend(drm_dev, 0); + } + return 0; } @@ -683,11 +683,12 @@ static int nouveau_pmops_thaw(struct device *dev) ret = nouveau_do_resume(drm_dev); if (ret) return ret; - if (drm_dev->mode_config.num_crtc) - nouveau_fbcon_set_suspend(drm_dev, 0); - nouveau_fbcon_zfill_all(drm_dev); - if (drm_dev->mode_config.num_crtc) + + if (drm_dev->mode_config.num_crtc) { nouveau_display_resume(drm_dev); + nouveau_fbcon_set_suspend(drm_dev, 0); + } + return 0; } diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c 
b/drivers/gpu/drm/nouveau/nouveau_fbcon.c index 64a42cfd3717..191665ee7f52 100644 --- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c +++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c @@ -531,17 +531,10 @@ nouveau_fbcon_set_suspend(struct drm_device *dev, int state) if (state == 1) nouveau_fbcon_save_disable_accel(dev); fb_set_suspend(drm->fbcon->helper.fbdev, state); - if (state == 0) + if (state == 0) { nouveau_fbcon_restore_accel(dev); + nouveau_fbcon_zfill(dev, drm->fbcon); + } console_unlock(); } } - -void -nouveau_fbcon_zfill_all(struct drm_device *dev) -{ - struct nouveau_drm *drm = nouveau_drm(dev); - if (drm->fbcon) { - nouveau_fbcon_zfill(dev, drm->fbcon); - } -} diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.h b/drivers/gpu/drm/nouveau/nouveau_fbcon.h index fdfc0c94fbcc..fcff797d2084 100644 --- a/drivers/gpu/drm/nouveau/nouveau_fbcon.h +++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.h @@ -61,7 +61,6 @@ void nouveau_fbcon_gpu_lockup(struct fb_info *info); int nouveau_fbcon_init(struct drm_device *dev); void nouveau_fbcon_fini(struct drm_device *dev); void nouveau_fbcon_set_suspend(struct drm_device *dev, int state); -void nouveau_fbcon_zfill_all(struct drm_device *dev); void nouveau_fbcon_save_disable_accel(struct drm_device *dev); void nouveau_fbcon_restore_accel(struct drm_device *dev); diff --git a/drivers/gpu/drm/nouveau/nv50_display.c b/drivers/gpu/drm/nouveau/nv50_display.c index afdf607df3e6..4c534b7b04da 100644 --- a/drivers/gpu/drm/nouveau/nv50_display.c +++ b/drivers/gpu/drm/nouveau/nv50_display.c @@ -1741,7 +1741,8 @@ nv50_sor_dpms(struct drm_encoder *encoder, int mode) } } - mthd = (ffs(nv_encoder->dcb->sorconf.link) - 1) << 2; + mthd = (ffs(nv_encoder->dcb->heads) - 1) << 3; + mthd |= (ffs(nv_encoder->dcb->sorconf.link) - 1) << 2; mthd |= nv_encoder->or; if (nv_encoder->dcb->type == DCB_OUTPUT_DP) { diff --git a/drivers/gpu/drm/qxl/qxl_irq.c b/drivers/gpu/drm/qxl/qxl_irq.c index 34d6a85e9023..0bf1e20c6e44 100644 --- a/drivers/gpu/drm/qxl/qxl_irq.c +++ b/drivers/gpu/drm/qxl/qxl_irq.c @@ -33,6 +33,9 @@ irqreturn_t qxl_irq_handler(int irq, void *arg) pending = xchg(&qdev->ram_header->int_pending, 0); + if (!pending) + return IRQ_NONE; + atomic_inc(&qdev->irq_received); if (pending & QXL_INTERRUPT_DISPLAY) { diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c index a03c73411a56..30d242b25078 100644 --- a/drivers/gpu/drm/radeon/atombios_crtc.c +++ b/drivers/gpu/drm/radeon/atombios_crtc.c @@ -1414,8 +1414,8 @@ static int dce4_crtc_do_set_base(struct drm_crtc *crtc, tmp &= ~EVERGREEN_GRPH_SURFACE_UPDATE_H_RETRACE_EN; WREG32(EVERGREEN_GRPH_FLIP_CONTROL + radeon_crtc->crtc_offset, tmp); - /* set pageflip to happen anywhere in vblank interval */ - WREG32(EVERGREEN_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 0); + /* set pageflip to happen only at start of vblank interval (front porch) */ + WREG32(EVERGREEN_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 3); if (!atomic && fb && fb != crtc->primary->fb) { radeon_fb = to_radeon_framebuffer(fb); @@ -1614,8 +1614,8 @@ static int avivo_crtc_do_set_base(struct drm_crtc *crtc, tmp &= ~AVIVO_D1GRPH_SURFACE_UPDATE_H_RETRACE_EN; WREG32(AVIVO_D1GRPH_FLIP_CONTROL + radeon_crtc->crtc_offset, tmp); - /* set pageflip to happen anywhere in vblank interval */ - WREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 0); + /* set pageflip to happen only at start of vblank interval (front porch) */ + WREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 3); if (!atomic && fb && fb != 
crtc->primary->fb) { radeon_fb = to_radeon_framebuffer(fb); diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c index 35f4182c63b6..b1e11f8434e2 100644 --- a/drivers/gpu/drm/radeon/atombios_dp.c +++ b/drivers/gpu/drm/radeon/atombios_dp.c @@ -127,7 +127,7 @@ static int radeon_process_aux_ch(struct radeon_i2c_chan *chan, /* flags not zero */ if (args.v1.ucReplyStatus == 2) { DRM_DEBUG_KMS("dp_aux_ch flags not zero\n"); - r = -EBUSY; + r = -EIO; goto done; } diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c index 2b2908440644..7d68203a3737 100644 --- a/drivers/gpu/drm/radeon/atombios_encoders.c +++ b/drivers/gpu/drm/radeon/atombios_encoders.c @@ -183,7 +183,6 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder, struct backlight_properties props; struct radeon_backlight_privdata *pdata; struct radeon_encoder_atom_dig *dig; - u8 backlight_level; char bl_name[16]; /* Mac laptops with multiple GPUs use the gmux driver for backlight @@ -222,12 +221,17 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder, pdata->encoder = radeon_encoder; - backlight_level = radeon_atom_get_backlight_level_from_reg(rdev); - dig = radeon_encoder->enc_priv; dig->bl_dev = bd; bd->props.brightness = radeon_atom_backlight_get_brightness(bd); + /* Set a reasonable default here if the level is 0 otherwise + * fbdev will attempt to turn the backlight on after console + * unblanking and it will try and restore 0 which turns the backlight + * off again. + */ + if (bd->props.brightness == 0) + bd->props.brightness = RADEON_MAX_BL_LEVEL; bd->props.power = FB_BLANK_UNBLANK; backlight_update_status(bd); diff --git a/drivers/gpu/drm/radeon/ci_dpm.c b/drivers/gpu/drm/radeon/ci_dpm.c index 10dae4106c08..584090ac3eb9 100644 --- a/drivers/gpu/drm/radeon/ci_dpm.c +++ b/drivers/gpu/drm/radeon/ci_dpm.c @@ -1179,7 +1179,7 @@ static int ci_stop_dpm(struct radeon_device *rdev) tmp &= ~GLOBAL_PWRMGT_EN; WREG32_SMC(GENERAL_PWRMGT, tmp); - tmp = RREG32(SCLK_PWRMGT_CNTL); + tmp = RREG32_SMC(SCLK_PWRMGT_CNTL); tmp &= ~DYNAMIC_PM_EN; WREG32_SMC(SCLK_PWRMGT_CNTL, tmp); diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c index dcd4518a9b08..0b2471107137 100644 --- a/drivers/gpu/drm/radeon/cik.c +++ b/drivers/gpu/drm/radeon/cik.c @@ -7676,14 +7676,16 @@ restart_ih: addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); mc_client = RREG32(VM_CONTEXT1_PROTECTION_FAULT_MCCLIENT); + /* reset addr and status */ + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); + if (addr == 0x0 && status == 0x0) + break; dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", addr); dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", status); cik_vm_decode_fault(rdev, status, addr, mc_client); - /* reset addr and status */ - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); break; case 167: /* VCE */ DRM_DEBUG("IH: VCE int: 0x%08x\n", src_data); diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c index e2f605224e8c..250bac3935a4 100644 --- a/drivers/gpu/drm/radeon/evergreen.c +++ b/drivers/gpu/drm/radeon/evergreen.c @@ -189,7 +189,7 @@ static const u32 evergreen_golden_registers[] = 0x8c1c, 0xffffffff, 0x00001010, 0x28350, 0xffffffff, 0x00000000, 0xa008, 0xffffffff, 0x00010000, - 0x5cc, 0xffffffff, 0x00000001, + 0x5c4, 0xffffffff, 0x00000001, 0x9508, 0xffffffff, 
0x00000002, 0x913c, 0x0000000f, 0x0000000a }; @@ -476,7 +476,7 @@ static const u32 cedar_golden_registers[] = 0x8c1c, 0xffffffff, 0x00001010, 0x28350, 0xffffffff, 0x00000000, 0xa008, 0xffffffff, 0x00010000, - 0x5cc, 0xffffffff, 0x00000001, + 0x5c4, 0xffffffff, 0x00000001, 0x9508, 0xffffffff, 0x00000002 }; @@ -635,7 +635,7 @@ static const u32 juniper_mgcg_init[] = static const u32 supersumo_golden_registers[] = { 0x5eb4, 0xffffffff, 0x00000002, - 0x5cc, 0xffffffff, 0x00000001, + 0x5c4, 0xffffffff, 0x00000001, 0x7030, 0xffffffff, 0x00000011, 0x7c30, 0xffffffff, 0x00000011, 0x6104, 0x01000300, 0x00000000, @@ -719,7 +719,7 @@ static const u32 sumo_golden_registers[] = static const u32 wrestler_golden_registers[] = { 0x5eb4, 0xffffffff, 0x00000002, - 0x5cc, 0xffffffff, 0x00000001, + 0x5c4, 0xffffffff, 0x00000001, 0x7030, 0xffffffff, 0x00000011, 0x7c30, 0xffffffff, 0x00000011, 0x6104, 0x01000300, 0x00000000, @@ -2642,8 +2642,9 @@ void evergreen_mc_resume(struct radeon_device *rdev, struct evergreen_mc_save *s for (i = 0; i < rdev->num_crtc; i++) { if (save->crtc_enabled[i]) { tmp = RREG32(EVERGREEN_MASTER_UPDATE_MODE + crtc_offsets[i]); - if ((tmp & 0x3) != 0) { - tmp &= ~0x3; + if ((tmp & 0x7) != 3) { + tmp &= ~0x7; + tmp |= 0x3; WREG32(EVERGREEN_MASTER_UPDATE_MODE + crtc_offsets[i], tmp); } tmp = RREG32(EVERGREEN_GRPH_UPDATE + crtc_offsets[i]); @@ -5066,14 +5067,16 @@ restart_ih: case 147: addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); + /* reset addr and status */ + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); + if (addr == 0x0 && status == 0x0) + break; dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", addr); dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", status); cayman_vm_decode_fault(rdev, status, addr); - /* reset addr and status */ - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); break; case 176: /* CP_INT in ring buffer */ case 177: /* CP_INT in IB1 */ diff --git a/drivers/gpu/drm/radeon/evergreen_reg.h b/drivers/gpu/drm/radeon/evergreen_reg.h index 333d143fca2c..23bff590fb6e 100644 --- a/drivers/gpu/drm/radeon/evergreen_reg.h +++ b/drivers/gpu/drm/radeon/evergreen_reg.h @@ -239,7 +239,6 @@ # define EVERGREEN_CRTC_V_BLANK (1 << 0) #define EVERGREEN_CRTC_STATUS_POSITION 0x6e90 #define EVERGREEN_CRTC_STATUS_HV_COUNT 0x6ea0 -#define EVERGREEN_MASTER_UPDATE_MODE 0x6ef8 #define EVERGREEN_CRTC_UPDATE_LOCK 0x6ed4 #define EVERGREEN_MASTER_UPDATE_LOCK 0x6ef4 #define EVERGREEN_MASTER_UPDATE_MODE 0x6ef8 diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h index 29d9cc04c04e..b7204500a9a6 100644 --- a/drivers/gpu/drm/radeon/radeon.h +++ b/drivers/gpu/drm/radeon/radeon.h @@ -684,10 +684,9 @@ struct radeon_flip_work { struct work_struct unpin_work; struct radeon_device *rdev; int crtc_id; - struct drm_framebuffer *fb; + uint64_t base; struct drm_pending_vblank_event *event; struct radeon_bo *old_rbo; - struct radeon_bo *new_rbo; struct radeon_fence *fence; }; diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c index 13896edcf0b6..bf25061c8ac4 100644 --- a/drivers/gpu/drm/radeon/radeon_display.c +++ b/drivers/gpu/drm/radeon/radeon_display.c @@ -366,7 +366,6 @@ void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id) spin_unlock_irqrestore(&rdev->ddev->event_lock, flags); drm_vblank_put(rdev->ddev, radeon_crtc->crtc_id); - radeon_fence_unref(&work->fence); 
radeon_irq_kms_pflip_irq_put(rdev, work->crtc_id); queue_work(radeon_crtc->flip_queue, &work->unpin_work); } @@ -386,51 +385,108 @@ static void radeon_flip_work_func(struct work_struct *__work) struct radeon_crtc *radeon_crtc = rdev->mode_info.crtcs[work->crtc_id]; struct drm_crtc *crtc = &radeon_crtc->base; - struct drm_framebuffer *fb = work->fb; - - uint32_t tiling_flags, pitch_pixels; - uint64_t base; - unsigned long flags; int r; down_read(&rdev->exclusive_lock); - while (work->fence) { + if (work->fence) { r = radeon_fence_wait(work->fence, false); if (r == -EDEADLK) { up_read(&rdev->exclusive_lock); r = radeon_gpu_reset(rdev); down_read(&rdev->exclusive_lock); } + if (r) + DRM_ERROR("failed to wait on page flip fence (%d)!\n", r); - if (r) { - DRM_ERROR("failed to wait on page flip fence (%d)!\n", - r); - goto cleanup; - } else - radeon_fence_unref(&work->fence); + /* We continue with the page flip even if we failed to wait on + * the fence, otherwise the DRM core and userspace will be + * confused about which BO the CRTC is scanning out + */ + + radeon_fence_unref(&work->fence); } + /* We borrow the event spin lock for protecting flip_status */ + spin_lock_irqsave(&crtc->dev->event_lock, flags); + + /* set the proper interrupt */ + radeon_irq_kms_pflip_irq_get(rdev, radeon_crtc->crtc_id); + + /* do the flip (mmio) */ + radeon_page_flip(rdev, radeon_crtc->crtc_id, work->base); + + radeon_crtc->flip_status = RADEON_FLIP_SUBMITTED; + spin_unlock_irqrestore(&crtc->dev->event_lock, flags); + up_read(&rdev->exclusive_lock); +} + +static int radeon_crtc_page_flip(struct drm_crtc *crtc, + struct drm_framebuffer *fb, + struct drm_pending_vblank_event *event, + uint32_t page_flip_flags) +{ + struct drm_device *dev = crtc->dev; + struct radeon_device *rdev = dev->dev_private; + struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); + struct radeon_framebuffer *old_radeon_fb; + struct radeon_framebuffer *new_radeon_fb; + struct drm_gem_object *obj; + struct radeon_flip_work *work; + struct radeon_bo *new_rbo; + uint32_t tiling_flags, pitch_pixels; + uint64_t base; + unsigned long flags; + int r; + + work = kzalloc(sizeof *work, GFP_KERNEL); + if (work == NULL) + return -ENOMEM; + + INIT_WORK(&work->flip_work, radeon_flip_work_func); + INIT_WORK(&work->unpin_work, radeon_unpin_work_func); + + work->rdev = rdev; + work->crtc_id = radeon_crtc->crtc_id; + work->event = event; + + /* schedule unpin of the old buffer */ + old_radeon_fb = to_radeon_framebuffer(crtc->primary->fb); + obj = old_radeon_fb->obj; + + /* take a reference to the old object */ + drm_gem_object_reference(obj); + work->old_rbo = gem_to_radeon_bo(obj); + + new_radeon_fb = to_radeon_framebuffer(fb); + obj = new_radeon_fb->obj; + new_rbo = gem_to_radeon_bo(obj); + + spin_lock(&new_rbo->tbo.bdev->fence_lock); + if (new_rbo->tbo.sync_obj) + work->fence = radeon_fence_ref(new_rbo->tbo.sync_obj); + spin_unlock(&new_rbo->tbo.bdev->fence_lock); + /* pin the new buffer */ - DRM_DEBUG_DRIVER("flip-ioctl() cur_fbo = %p, cur_bbo = %p\n", - work->old_rbo, work->new_rbo); + DRM_DEBUG_DRIVER("flip-ioctl() cur_rbo = %p, new_rbo = %p\n", + work->old_rbo, new_rbo); - r = radeon_bo_reserve(work->new_rbo, false); + r = radeon_bo_reserve(new_rbo, false); if (unlikely(r != 0)) { DRM_ERROR("failed to reserve new rbo buffer before flip\n"); goto cleanup; } /* Only 27 bit offset for legacy CRTC */ - r = radeon_bo_pin_restricted(work->new_rbo, RADEON_GEM_DOMAIN_VRAM, + r = radeon_bo_pin_restricted(new_rbo, RADEON_GEM_DOMAIN_VRAM, ASIC_IS_AVIVO(rdev) ? 
0 : 1 << 27, &base); if (unlikely(r != 0)) { - radeon_bo_unreserve(work->new_rbo); + radeon_bo_unreserve(new_rbo); r = -EINVAL; DRM_ERROR("failed to pin new rbo buffer before flip\n"); goto cleanup; } - radeon_bo_get_tiling_flags(work->new_rbo, &tiling_flags, NULL); - radeon_bo_unreserve(work->new_rbo); + radeon_bo_get_tiling_flags(new_rbo, &tiling_flags, NULL); + radeon_bo_unreserve(new_rbo); if (!ASIC_IS_AVIVO(rdev)) { /* crtc offset is from display base addr not FB location */ @@ -467,6 +523,7 @@ static void radeon_flip_work_func(struct work_struct *__work) } base &= ~7; } + work->base = base; r = drm_vblank_get(crtc->dev, radeon_crtc->crtc_id); if (r) { @@ -477,100 +534,42 @@ static void radeon_flip_work_func(struct work_struct *__work) /* We borrow the event spin lock for protecting flip_work */ spin_lock_irqsave(&crtc->dev->event_lock, flags); - /* set the proper interrupt */ - radeon_irq_kms_pflip_irq_get(rdev, radeon_crtc->crtc_id); + if (radeon_crtc->flip_status != RADEON_FLIP_NONE) { + DRM_DEBUG_DRIVER("flip queue: crtc already busy\n"); + spin_unlock_irqrestore(&crtc->dev->event_lock, flags); + r = -EBUSY; + goto vblank_cleanup; + } + radeon_crtc->flip_status = RADEON_FLIP_PENDING; + radeon_crtc->flip_work = work; - /* do the flip (mmio) */ - radeon_page_flip(rdev, radeon_crtc->crtc_id, base); + /* update crtc fb */ + crtc->primary->fb = fb; - radeon_crtc->flip_status = RADEON_FLIP_SUBMITTED; spin_unlock_irqrestore(&crtc->dev->event_lock, flags); - up_read(&rdev->exclusive_lock); - return; + queue_work(radeon_crtc->flip_queue, &work->flip_work); + return 0; + +vblank_cleanup: + drm_vblank_put(crtc->dev, radeon_crtc->crtc_id); pflip_cleanup: - if (unlikely(radeon_bo_reserve(work->new_rbo, false) != 0)) { + if (unlikely(radeon_bo_reserve(new_rbo, false) != 0)) { DRM_ERROR("failed to reserve new rbo in error path\n"); goto cleanup; } - if (unlikely(radeon_bo_unpin(work->new_rbo) != 0)) { + if (unlikely(radeon_bo_unpin(new_rbo) != 0)) { DRM_ERROR("failed to unpin new rbo in error path\n"); } - radeon_bo_unreserve(work->new_rbo); + radeon_bo_unreserve(new_rbo); cleanup: drm_gem_object_unreference_unlocked(&work->old_rbo->gem_base); radeon_fence_unref(&work->fence); kfree(work); - up_read(&rdev->exclusive_lock); -} - -static int radeon_crtc_page_flip(struct drm_crtc *crtc, - struct drm_framebuffer *fb, - struct drm_pending_vblank_event *event, - uint32_t page_flip_flags) -{ - struct drm_device *dev = crtc->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); - struct radeon_framebuffer *old_radeon_fb; - struct radeon_framebuffer *new_radeon_fb; - struct drm_gem_object *obj; - struct radeon_flip_work *work; - unsigned long flags; - - work = kzalloc(sizeof *work, GFP_KERNEL); - if (work == NULL) - return -ENOMEM; - - INIT_WORK(&work->flip_work, radeon_flip_work_func); - INIT_WORK(&work->unpin_work, radeon_unpin_work_func); - - work->rdev = rdev; - work->crtc_id = radeon_crtc->crtc_id; - work->fb = fb; - work->event = event; - - /* schedule unpin of the old buffer */ - old_radeon_fb = to_radeon_framebuffer(crtc->primary->fb); - obj = old_radeon_fb->obj; - - /* take a reference to the old object */ - drm_gem_object_reference(obj); - work->old_rbo = gem_to_radeon_bo(obj); - - new_radeon_fb = to_radeon_framebuffer(fb); - obj = new_radeon_fb->obj; - work->new_rbo = gem_to_radeon_bo(obj); - - spin_lock(&work->new_rbo->tbo.bdev->fence_lock); - if (work->new_rbo->tbo.sync_obj) - work->fence = 
radeon_fence_ref(work->new_rbo->tbo.sync_obj); - spin_unlock(&work->new_rbo->tbo.bdev->fence_lock); - - /* We borrow the event spin lock for protecting flip_work */ - spin_lock_irqsave(&crtc->dev->event_lock, flags); - if (radeon_crtc->flip_status != RADEON_FLIP_NONE) { - DRM_DEBUG_DRIVER("flip queue: crtc already busy\n"); - spin_unlock_irqrestore(&crtc->dev->event_lock, flags); - drm_gem_object_unreference_unlocked(&work->old_rbo->gem_base); - radeon_fence_unref(&work->fence); - kfree(work); - return -EBUSY; - } - radeon_crtc->flip_status = RADEON_FLIP_PENDING; - radeon_crtc->flip_work = work; - - /* update crtc fb */ - crtc->primary->fb = fb; - - spin_unlock_irqrestore(&crtc->dev->event_lock, flags); - - queue_work(radeon_crtc->flip_queue, &work->flip_work); - - return 0; + return r; } static int @@ -830,6 +829,10 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector) struct radeon_device *rdev = dev->dev_private; int ret = 0; + /* don't leak the edid if we already fetched it in detect() */ + if (radeon_connector->edid) + goto got_edid; + /* on hw with routers, select right port */ if (radeon_connector->router.ddc_valid) radeon_router_select_ddc_port(radeon_connector); @@ -868,6 +871,7 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector) radeon_connector->edid = radeon_bios_get_hardcoded_edid(rdev); } if (radeon_connector->edid) { +got_edid: drm_mode_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid); ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid); drm_edid_to_eld(&radeon_connector->base, radeon_connector->edid); diff --git a/drivers/gpu/drm/radeon/rv515.c b/drivers/gpu/drm/radeon/rv515.c index 237dd29d9f1c..3e21e869015f 100644 --- a/drivers/gpu/drm/radeon/rv515.c +++ b/drivers/gpu/drm/radeon/rv515.c @@ -406,8 +406,9 @@ void rv515_mc_resume(struct radeon_device *rdev, struct rv515_mc_save *save) for (i = 0; i < rdev->num_crtc; i++) { if (save->crtc_enabled[i]) { tmp = RREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + crtc_offsets[i]); - if ((tmp & 0x3) != 0) { - tmp &= ~0x3; + if ((tmp & 0x7) != 3) { + tmp &= ~0x7; + tmp |= 0x3; WREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + crtc_offsets[i], tmp); } tmp = RREG32(AVIVO_D1GRPH_UPDATE + crtc_offsets[i]); diff --git a/drivers/gpu/drm/radeon/rv770_dpm.c b/drivers/gpu/drm/radeon/rv770_dpm.c index da041a43d82e..3c76e1dcdf04 100644 --- a/drivers/gpu/drm/radeon/rv770_dpm.c +++ b/drivers/gpu/drm/radeon/rv770_dpm.c @@ -2329,12 +2329,6 @@ void rv770_get_engine_memory_ss(struct radeon_device *rdev) pi->mclk_ss = radeon_atombios_get_asic_ss_info(rdev, &ss, ASIC_INTERNAL_MEMORY_SS, 0); - /* disable ss, causes hangs on some cayman boards */ - if (rdev->family == CHIP_CAYMAN) { - pi->sclk_ss = false; - pi->mclk_ss = false; - } - if (pi->sclk_ss || pi->mclk_ss) pi->dynamic_ss = true; else diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c index 730cee2c34cf..eba0225259a4 100644 --- a/drivers/gpu/drm/radeon/si.c +++ b/drivers/gpu/drm/radeon/si.c @@ -6376,14 +6376,16 @@ restart_ih: case 147: addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR); status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS); + /* reset addr and status */ + WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); + if (addr == 0x0 && status == 0x0) + break; dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data); dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n", addr); dev_err(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", status); si_vm_decode_fault(rdev, status, addr); 
- /* reset addr and status */ - WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1); break; case 176: /* RINGID0 CP_INT */ radeon_fence_process(rdev, RADEON_RING_TYPE_GFX_INDEX); diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig index 800c8b60f7a2..5e79c6ad914f 100644 --- a/drivers/hid/Kconfig +++ b/drivers/hid/Kconfig @@ -810,7 +810,7 @@ config HID_ZYDACRON config HID_SENSOR_HUB tristate "HID Sensors framework support" - depends on HID + depends on HID && HAS_IOMEM select MFD_CORE default n ---help--- diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index 6d00bb9366fa..48b66bbffc94 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -323,6 +323,7 @@ #define USB_VENDOR_ID_ETURBOTOUCH 0x22b9 #define USB_DEVICE_ID_ETURBOTOUCH 0x0006 +#define USB_DEVICE_ID_ETURBOTOUCH_2968 0x2968 #define USB_VENDOR_ID_EZKEY 0x0518 #define USB_DEVICE_ID_BTC_8193 0x0002 @@ -715,6 +716,8 @@ #define USB_VENDOR_ID_PENMOUNT 0x14e1 #define USB_DEVICE_ID_PENMOUNT_PCI 0x3500 +#define USB_DEVICE_ID_PENMOUNT_1610 0x1610 +#define USB_DEVICE_ID_PENMOUNT_1640 0x1640 #define USB_VENDOR_ID_PETALYNX 0x18b1 #define USB_DEVICE_ID_PETALYNX_MAXTER_REMOTE 0x0037 diff --git a/drivers/hid/hid-rmi.c b/drivers/hid/hid-rmi.c index 2451c7e5febd..578bbe65902b 100644 --- a/drivers/hid/hid-rmi.c +++ b/drivers/hid/hid-rmi.c @@ -428,6 +428,7 @@ static int rmi_raw_event(struct hid_device *hdev, return 0; } +#ifdef CONFIG_PM static int rmi_post_reset(struct hid_device *hdev) { return rmi_set_mode(hdev, RMI_MODE_ATTN_REPORTS); @@ -437,6 +438,7 @@ static int rmi_post_resume(struct hid_device *hdev) { return rmi_set_mode(hdev, RMI_MODE_ATTN_REPORTS); } +#endif /* CONFIG_PM */ #define RMI4_MAX_PAGE 0xff #define RMI4_PAGE_SIZE 0x0100 diff --git a/drivers/hid/hid-sensor-hub.c b/drivers/hid/hid-sensor-hub.c index a8d5c8faf8cf..e244e449cbba 100644 --- a/drivers/hid/hid-sensor-hub.c +++ b/drivers/hid/hid-sensor-hub.c @@ -159,17 +159,18 @@ int sensor_hub_register_callback(struct hid_sensor_hub_device *hsdev, { struct hid_sensor_hub_callbacks_list *callback; struct sensor_hub_data *pdata = hid_get_drvdata(hsdev->hdev); + unsigned long flags; - spin_lock(&pdata->dyn_callback_lock); + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); list_for_each_entry(callback, &pdata->dyn_callback_list, list) if (callback->usage_id == usage_id && callback->hsdev == hsdev) { - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return -EINVAL; } callback = kzalloc(sizeof(*callback), GFP_ATOMIC); if (!callback) { - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return -ENOMEM; } callback->hsdev = hsdev; @@ -177,7 +178,7 @@ int sensor_hub_register_callback(struct hid_sensor_hub_device *hsdev, callback->usage_id = usage_id; callback->priv = NULL; list_add_tail(&callback->list, &pdata->dyn_callback_list); - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return 0; } @@ -188,8 +189,9 @@ int sensor_hub_remove_callback(struct hid_sensor_hub_device *hsdev, { struct hid_sensor_hub_callbacks_list *callback; struct sensor_hub_data *pdata = hid_get_drvdata(hsdev->hdev); + unsigned long flags; - spin_lock(&pdata->dyn_callback_lock); + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); list_for_each_entry(callback, &pdata->dyn_callback_list, list) if (callback->usage_id == usage_id && callback->hsdev == hsdev) { @@ -197,7 +199,7 @@ int sensor_hub_remove_callback(struct hid_sensor_hub_device *hsdev, kfree(callback); 
break; } - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return 0; } @@ -378,15 +380,16 @@ static int sensor_hub_suspend(struct hid_device *hdev, pm_message_t message) { struct sensor_hub_data *pdata = hid_get_drvdata(hdev); struct hid_sensor_hub_callbacks_list *callback; + unsigned long flags; hid_dbg(hdev, " sensor_hub_suspend\n"); - spin_lock(&pdata->dyn_callback_lock); + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); list_for_each_entry(callback, &pdata->dyn_callback_list, list) { if (callback->usage_callback->suspend) callback->usage_callback->suspend( callback->hsdev, callback->priv); } - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return 0; } @@ -395,15 +398,16 @@ static int sensor_hub_resume(struct hid_device *hdev) { struct sensor_hub_data *pdata = hid_get_drvdata(hdev); struct hid_sensor_hub_callbacks_list *callback; + unsigned long flags; hid_dbg(hdev, " sensor_hub_resume\n"); - spin_lock(&pdata->dyn_callback_lock); + spin_lock_irqsave(&pdata->dyn_callback_lock, flags); list_for_each_entry(callback, &pdata->dyn_callback_list, list) { if (callback->usage_callback->resume) callback->usage_callback->resume( callback->hsdev, callback->priv); } - spin_unlock(&pdata->dyn_callback_lock); + spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags); return 0; } @@ -632,6 +636,7 @@ static int sensor_hub_probe(struct hid_device *hdev, if (name == NULL) { hid_err(hdev, "Failed MFD device name\n"); ret = -ENOMEM; + kfree(hsdev); goto err_no_mem; } sd->hid_sensor_hub_client_devs[ diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c index 59badc10a08c..31e6727cd009 100644 --- a/drivers/hid/usbhid/hid-quirks.c +++ b/drivers/hid/usbhid/hid-quirks.c @@ -49,6 +49,7 @@ static const struct hid_blacklist { { USB_VENDOR_ID_EMS, USB_DEVICE_ID_EMS_TRIO_LINKER_PLUS_II, HID_QUIRK_MULTI_INPUT }, { USB_VENDOR_ID_ETURBOTOUCH, USB_DEVICE_ID_ETURBOTOUCH, HID_QUIRK_MULTI_INPUT }, + { USB_VENDOR_ID_ETURBOTOUCH, USB_DEVICE_ID_ETURBOTOUCH_2968, HID_QUIRK_MULTI_INPUT }, { USB_VENDOR_ID_GREENASIA, USB_DEVICE_ID_GREENASIA_DUAL_USB_JOYPAD, HID_QUIRK_MULTI_INPUT }, { USB_VENDOR_ID_PANTHERLORD, USB_DEVICE_ID_PANTHERLORD_TWIN_USB_JOYSTICK, HID_QUIRK_MULTI_INPUT | HID_QUIRK_SKIP_OUTPUT_REPORTS }, { USB_VENDOR_ID_PLAYDOTCOM, USB_DEVICE_ID_PLAYDOTCOM_EMS_USBII, HID_QUIRK_MULTI_INPUT }, @@ -76,6 +77,8 @@ static const struct hid_blacklist { { USB_VENDOR_ID_MSI, USB_DEVICE_ID_MSI_GX680R_LED_PANEL, HID_QUIRK_NO_INIT_REPORTS }, { USB_VENDOR_ID_NEXIO, USB_DEVICE_ID_NEXIO_MULTITOUCH_PTI0750, HID_QUIRK_NO_INIT_REPORTS }, { USB_VENDOR_ID_NOVATEK, USB_DEVICE_ID_NOVATEK_MOUSE, HID_QUIRK_NO_INIT_REPORTS }, + { USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1610, HID_QUIRK_NOGET }, + { USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1640, HID_QUIRK_NOGET }, { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NO_INIT_REPORTS }, { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN1, HID_QUIRK_NO_INIT_REPORTS }, { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN2, HID_QUIRK_NO_INIT_REPORTS }, diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c index e84f4526eb36..ae22e3c1fc4c 100644 --- a/drivers/hv/connection.c +++ b/drivers/hv/connection.c @@ -339,9 +339,13 @@ static void process_chn_event(u32 relid) */ do { - hv_begin_read(&channel->inbound); + if (read_state) + hv_begin_read(&channel->inbound); channel->onchannel_callback(arg); - 
bytes_to_read = hv_end_read(&channel->inbound); + if (read_state) + bytes_to_read = hv_end_read(&channel->inbound); + else + bytes_to_read = 0; } while (read_state && (bytes_to_read != 0)); } else { pr_err("no channel callback for relid - %u\n", relid); diff --git a/drivers/hv/hv_fcopy.c b/drivers/hv/hv_fcopy.c index eaaa3d843b80..23b2ce294c4c 100644 --- a/drivers/hv/hv_fcopy.c +++ b/drivers/hv/hv_fcopy.c @@ -246,8 +246,8 @@ void hv_fcopy_onchannelcallback(void *context) /* * Send the information to the user-level daemon. */ - fcopy_send_data(); schedule_delayed_work(&fcopy_work, 5*HZ); + fcopy_send_data(); return; } icmsghdr->icflags = ICMSGHDRFLAG_TRANSACTION | ICMSGHDRFLAG_RESPONSE; diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c index ea852537307e..521c14625b3a 100644 --- a/drivers/hv/hv_kvp.c +++ b/drivers/hv/hv_kvp.c @@ -127,6 +127,17 @@ kvp_work_func(struct work_struct *dummy) kvp_respond_to_host(NULL, HV_E_FAIL); } +static void poll_channel(struct vmbus_channel *channel) +{ + if (channel->target_cpu != smp_processor_id()) + smp_call_function_single(channel->target_cpu, + hv_kvp_onchannelcallback, + channel, true); + else + hv_kvp_onchannelcallback(channel); +} + + static int kvp_handle_handshake(struct hv_kvp_msg *msg) { int ret = 1; @@ -155,7 +166,7 @@ static int kvp_handle_handshake(struct hv_kvp_msg *msg) kvp_register(dm_reg_value); kvp_transaction.active = false; if (kvp_transaction.kvp_context) - hv_kvp_onchannelcallback(kvp_transaction.kvp_context); + poll_channel(kvp_transaction.kvp_context); } return ret; } @@ -568,7 +579,7 @@ response_done: vmbus_sendpacket(channel, recv_buffer, buf_len, req_id, VM_PKT_DATA_INBAND, 0); - + poll_channel(channel); } /* @@ -603,7 +614,7 @@ void hv_kvp_onchannelcallback(void *context) return; } - vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 2, &recvlen, + vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 4, &recvlen, &requestid); if (recvlen > 0) { diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c index dd761806f0e8..3b9c9ef0deb8 100644 --- a/drivers/hv/hv_util.c +++ b/drivers/hv/hv_util.c @@ -319,7 +319,7 @@ static int util_probe(struct hv_device *dev, (struct hv_util_service *)dev_id->driver_data; int ret; - srv->recv_buffer = kmalloc(PAGE_SIZE * 2, GFP_KERNEL); + srv->recv_buffer = kmalloc(PAGE_SIZE * 4, GFP_KERNEL); if (!srv->recv_buffer) return -ENOMEM; if (srv->util_init) { diff --git a/drivers/hwmon/adc128d818.c b/drivers/hwmon/adc128d818.c index 5ffd81f19d01..0625e50d7a6e 100644 --- a/drivers/hwmon/adc128d818.c +++ b/drivers/hwmon/adc128d818.c @@ -239,50 +239,50 @@ static ssize_t adc128_show_alarm(struct device *dev, return sprintf(buf, "%u\n", !!(alarms & mask)); } -static SENSOR_DEVICE_ATTR_2(in0_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 0, 0); +static SENSOR_DEVICE_ATTR_2(in0_input, S_IRUGO, + adc128_show_in, NULL, 0, 0); static SENSOR_DEVICE_ATTR_2(in0_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 0, 1); static SENSOR_DEVICE_ATTR_2(in0_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 0, 2); -static SENSOR_DEVICE_ATTR_2(in1_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 1, 0); +static SENSOR_DEVICE_ATTR_2(in1_input, S_IRUGO, + adc128_show_in, NULL, 1, 0); static SENSOR_DEVICE_ATTR_2(in1_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 1, 1); static SENSOR_DEVICE_ATTR_2(in1_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 1, 2); -static SENSOR_DEVICE_ATTR_2(in2_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 2, 0); +static 
SENSOR_DEVICE_ATTR_2(in2_input, S_IRUGO, + adc128_show_in, NULL, 2, 0); static SENSOR_DEVICE_ATTR_2(in2_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 2, 1); static SENSOR_DEVICE_ATTR_2(in2_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 2, 2); -static SENSOR_DEVICE_ATTR_2(in3_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 3, 0); +static SENSOR_DEVICE_ATTR_2(in3_input, S_IRUGO, + adc128_show_in, NULL, 3, 0); static SENSOR_DEVICE_ATTR_2(in3_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 3, 1); static SENSOR_DEVICE_ATTR_2(in3_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 3, 2); -static SENSOR_DEVICE_ATTR_2(in4_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 4, 0); +static SENSOR_DEVICE_ATTR_2(in4_input, S_IRUGO, + adc128_show_in, NULL, 4, 0); static SENSOR_DEVICE_ATTR_2(in4_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 4, 1); static SENSOR_DEVICE_ATTR_2(in4_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 4, 2); -static SENSOR_DEVICE_ATTR_2(in5_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 5, 0); +static SENSOR_DEVICE_ATTR_2(in5_input, S_IRUGO, + adc128_show_in, NULL, 5, 0); static SENSOR_DEVICE_ATTR_2(in5_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 5, 1); static SENSOR_DEVICE_ATTR_2(in5_max, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 5, 2); -static SENSOR_DEVICE_ATTR_2(in6_input, S_IWUSR | S_IRUGO, - adc128_show_in, adc128_set_in, 6, 0); +static SENSOR_DEVICE_ATTR_2(in6_input, S_IRUGO, + adc128_show_in, NULL, 6, 0); static SENSOR_DEVICE_ATTR_2(in6_min, S_IWUSR | S_IRUGO, adc128_show_in, adc128_set_in, 6, 1); static SENSOR_DEVICE_ATTR_2(in6_max, S_IWUSR | S_IRUGO, diff --git a/drivers/hwmon/adm1021.c b/drivers/hwmon/adm1021.c index 3eb4281689b5..d74241bb278c 100644 --- a/drivers/hwmon/adm1021.c +++ b/drivers/hwmon/adm1021.c @@ -185,7 +185,7 @@ static ssize_t set_temp_max(struct device *dev, struct adm1021_data *data = dev_get_drvdata(dev); struct i2c_client *client = data->client; long temp; - int err; + int reg_val, err; err = kstrtol(buf, 10, &temp); if (err) @@ -193,10 +193,11 @@ static ssize_t set_temp_max(struct device *dev, temp /= 1000; mutex_lock(&data->update_lock); - data->temp_max[index] = clamp_val(temp, -128, 127); + reg_val = clamp_val(temp, -128, 127); + data->temp_max[index] = reg_val * 1000; if (!read_only) i2c_smbus_write_byte_data(client, ADM1021_REG_TOS_W(index), - data->temp_max[index]); + reg_val); mutex_unlock(&data->update_lock); return count; @@ -210,7 +211,7 @@ static ssize_t set_temp_min(struct device *dev, struct adm1021_data *data = dev_get_drvdata(dev); struct i2c_client *client = data->client; long temp; - int err; + int reg_val, err; err = kstrtol(buf, 10, &temp); if (err) @@ -218,10 +219,11 @@ static ssize_t set_temp_min(struct device *dev, temp /= 1000; mutex_lock(&data->update_lock); - data->temp_min[index] = clamp_val(temp, -128, 127); + reg_val = clamp_val(temp, -128, 127); + data->temp_min[index] = reg_val * 1000; if (!read_only) i2c_smbus_write_byte_data(client, ADM1021_REG_THYST_W(index), - data->temp_min[index]); + reg_val); mutex_unlock(&data->update_lock); return count; diff --git a/drivers/hwmon/adm1029.c b/drivers/hwmon/adm1029.c index 78339e880bd6..2804571b269e 100644 --- a/drivers/hwmon/adm1029.c +++ b/drivers/hwmon/adm1029.c @@ -232,6 +232,9 @@ static ssize_t set_fan_div(struct device *dev, /* Update the value */ reg = (reg & 0x3F) | (val << 6); + /* Update the cache */ + data->fan_div[attr->index] = reg; + /* Write value */ 
i2c_smbus_write_byte_data(client, ADM1029_REG_FAN_DIV[attr->index], reg); diff --git a/drivers/hwmon/adm1031.c b/drivers/hwmon/adm1031.c index a8a540ca8c34..51c1a5a165ab 100644 --- a/drivers/hwmon/adm1031.c +++ b/drivers/hwmon/adm1031.c @@ -365,6 +365,7 @@ set_auto_temp_min(struct device *dev, struct device_attribute *attr, if (ret) return ret; + val = clamp_val(val, 0, 127000); mutex_lock(&data->update_lock); data->auto_temp[nr] = AUTO_TEMP_MIN_TO_REG(val, data->auto_temp[nr]); adm1031_write_value(client, ADM1031_REG_AUTO_TEMP(nr), @@ -394,6 +395,7 @@ set_auto_temp_max(struct device *dev, struct device_attribute *attr, if (ret) return ret; + val = clamp_val(val, 0, 127000); mutex_lock(&data->update_lock); data->temp_max[nr] = AUTO_TEMP_MAX_TO_REG(val, data->auto_temp[nr], data->pwm[nr]); @@ -696,7 +698,7 @@ static ssize_t set_temp_min(struct device *dev, struct device_attribute *attr, if (ret) return ret; - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); + val = clamp_val(val, -55000, 127000); mutex_lock(&data->update_lock); data->temp_min[nr] = TEMP_TO_REG(val); adm1031_write_value(client, ADM1031_REG_TEMP_MIN(nr), @@ -717,7 +719,7 @@ static ssize_t set_temp_max(struct device *dev, struct device_attribute *attr, if (ret) return ret; - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); + val = clamp_val(val, -55000, 127000); mutex_lock(&data->update_lock); data->temp_max[nr] = TEMP_TO_REG(val); adm1031_write_value(client, ADM1031_REG_TEMP_MAX(nr), @@ -738,7 +740,7 @@ static ssize_t set_temp_crit(struct device *dev, struct device_attribute *attr, if (ret) return ret; - val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875); + val = clamp_val(val, -55000, 127000); mutex_lock(&data->update_lock); data->temp_crit[nr] = TEMP_TO_REG(val); adm1031_write_value(client, ADM1031_REG_TEMP_CRIT(nr), diff --git a/drivers/hwmon/adt7470.c b/drivers/hwmon/adt7470.c index 0f4dea5ccf17..9ee3913850d6 100644 --- a/drivers/hwmon/adt7470.c +++ b/drivers/hwmon/adt7470.c @@ -515,7 +515,7 @@ static ssize_t set_temp_min(struct device *dev, return -EINVAL; temp = DIV_ROUND_CLOSEST(temp, 1000); - temp = clamp_val(temp, 0, 255); + temp = clamp_val(temp, -128, 127); mutex_lock(&data->lock); data->temp_min[attr->index] = temp; @@ -549,7 +549,7 @@ static ssize_t set_temp_max(struct device *dev, return -EINVAL; temp = DIV_ROUND_CLOSEST(temp, 1000); - temp = clamp_val(temp, 0, 255); + temp = clamp_val(temp, -128, 127); mutex_lock(&data->lock); data->temp_max[attr->index] = temp; @@ -826,7 +826,7 @@ static ssize_t set_pwm_tmin(struct device *dev, return -EINVAL; temp = DIV_ROUND_CLOSEST(temp, 1000); - temp = clamp_val(temp, 0, 255); + temp = clamp_val(temp, -128, 127); mutex_lock(&data->lock); data->pwm_tmin[attr->index] = temp; diff --git a/drivers/hwmon/amc6821.c b/drivers/hwmon/amc6821.c index eea817296513..9f2be3dd28f3 100644 --- a/drivers/hwmon/amc6821.c +++ b/drivers/hwmon/amc6821.c @@ -704,7 +704,7 @@ static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, get_temp_alarm, NULL, IDX_TEMP1_MAX); static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO, get_temp_alarm, NULL, IDX_TEMP1_CRIT); -static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO | S_IWUSR, +static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, get_temp, NULL, IDX_TEMP2_INPUT); static SENSOR_DEVICE_ATTR(temp2_min, S_IRUGO | S_IWUSR, get_temp, set_temp, IDX_TEMP2_MIN); diff --git a/drivers/hwmon/da9052-hwmon.c b/drivers/hwmon/da9052-hwmon.c index afd31042b452..d14ab3c45daa 100644 --- a/drivers/hwmon/da9052-hwmon.c +++ b/drivers/hwmon/da9052-hwmon.c @@ -194,7 
+194,7 @@ static ssize_t da9052_hwmon_show_name(struct device *dev, struct device_attribute *devattr, char *buf) { - return sprintf(buf, "da9052-hwmon\n"); + return sprintf(buf, "da9052\n"); } static ssize_t show_label(struct device *dev, diff --git a/drivers/hwmon/da9055-hwmon.c b/drivers/hwmon/da9055-hwmon.c index 73b3865f1207..35eb7738d711 100644 --- a/drivers/hwmon/da9055-hwmon.c +++ b/drivers/hwmon/da9055-hwmon.c @@ -204,7 +204,7 @@ static ssize_t da9055_hwmon_show_name(struct device *dev, struct device_attribute *devattr, char *buf) { - return sprintf(buf, "da9055-hwmon\n"); + return sprintf(buf, "da9055\n"); } static ssize_t show_label(struct device *dev, diff --git a/drivers/hwmon/emc2103.c b/drivers/hwmon/emc2103.c index fd892dd48e4c..78002de46cb6 100644 --- a/drivers/hwmon/emc2103.c +++ b/drivers/hwmon/emc2103.c @@ -250,9 +250,7 @@ static ssize_t set_temp_min(struct device *dev, struct device_attribute *da, if (result < 0) return result; - val = DIV_ROUND_CLOSEST(val, 1000); - if ((val < -63) || (val > 127)) - return -EINVAL; + val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -63, 127); mutex_lock(&data->update_lock); data->temp_min[nr] = val; @@ -274,9 +272,7 @@ static ssize_t set_temp_max(struct device *dev, struct device_attribute *da, if (result < 0) return result; - val = DIV_ROUND_CLOSEST(val, 1000); - if ((val < -63) || (val > 127)) - return -EINVAL; + val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -63, 127); mutex_lock(&data->update_lock); data->temp_max[nr] = val; @@ -390,15 +386,14 @@ static ssize_t set_fan_target(struct device *dev, struct device_attribute *da, { struct emc2103_data *data = emc2103_update_device(dev); struct i2c_client *client = to_i2c_client(dev); - long rpm_target; + unsigned long rpm_target; - int result = kstrtol(buf, 10, &rpm_target); + int result = kstrtoul(buf, 10, &rpm_target); if (result < 0) return result; /* Datasheet states 16384 as maximum RPM target (table 3.2) */ - if ((rpm_target < 0) || (rpm_target > 16384)) - return -EINVAL; + rpm_target = clamp_val(rpm_target, 0, 16384); mutex_lock(&data->update_lock); diff --git a/drivers/hwmon/ntc_thermistor.c b/drivers/hwmon/ntc_thermistor.c index bdfbe9114889..ae66f42c4d6d 100644 --- a/drivers/hwmon/ntc_thermistor.c +++ b/drivers/hwmon/ntc_thermistor.c @@ -512,7 +512,7 @@ static int ntc_thermistor_probe(struct platform_device *pdev) } dev_info(&pdev->dev, "Thermistor type: %s successfully probed.\n", - pdev->name); + pdev_id->name); return 0; err_after_sysfs: diff --git a/drivers/i2c/busses/i2c-sun6i-p2wi.c b/drivers/i2c/busses/i2c-sun6i-p2wi.c index 09de4fd12d57..4d75d4759709 100644 --- a/drivers/i2c/busses/i2c-sun6i-p2wi.c +++ b/drivers/i2c/busses/i2c-sun6i-p2wi.c @@ -22,7 +22,6 @@ * */ #include <linux/clk.h> -#include <linux/module.h> #include <linux/i2c.h> #include <linux/io.h> #include <linux/interrupt.h> diff --git a/drivers/i2c/muxes/Kconfig b/drivers/i2c/muxes/Kconfig index f7f9865b8b89..f6d313e528de 100644 --- a/drivers/i2c/muxes/Kconfig +++ b/drivers/i2c/muxes/Kconfig @@ -40,6 +40,7 @@ config I2C_MUX_PCA9541 config I2C_MUX_PCA954x tristate "Philips PCA954x I2C Mux/switches" + depends on GPIOLIB help If you say yes here you get support for the Philips PCA954x I2C mux/switch devices. 
diff --git a/drivers/iio/accel/hid-sensor-accel-3d.c b/drivers/iio/accel/hid-sensor-accel-3d.c index 69abf9163df7..54e464e4bb72 100644 --- a/drivers/iio/accel/hid-sensor-accel-3d.c +++ b/drivers/iio/accel/hid-sensor-accel-3d.c @@ -110,7 +110,6 @@ static int accel_3d_read_raw(struct iio_dev *indio_dev, struct accel_3d_state *accel_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -151,14 +150,12 @@ static int accel_3d_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &accel_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &accel_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c index 17aeea170566..2a5fa9a436e5 100644 --- a/drivers/iio/accel/mma8452.c +++ b/drivers/iio/accel/mma8452.c @@ -111,8 +111,14 @@ static const int mma8452_samp_freq[8][2] = { {6, 250000}, {1, 560000} }; +/* + * Hardware has fullscale of -2G, -4G, -8G corresponding to raw value -2048 + * The userspace interface uses m/s^2 and we declare micro units + * So scale factor is given by: + * g * N * 1000000 / 2048 for N = 2, 4, 8 and g=9.80665 + */ static const int mma8452_scales[3][2] = { - {0, 977}, {0, 1953}, {0, 3906} + {0, 9577}, {0, 19154}, {0, 38307} }; static ssize_t mma8452_show_samp_freq_avail(struct device *dev, diff --git a/drivers/iio/adc/ti_am335x_adc.c b/drivers/iio/adc/ti_am335x_adc.c index a4db3026bec6..d5dc4c6ce86c 100644 --- a/drivers/iio/adc/ti_am335x_adc.c +++ b/drivers/iio/adc/ti_am335x_adc.c @@ -374,7 +374,7 @@ static int tiadc_read_raw(struct iio_dev *indio_dev, return -EAGAIN; } } - map_val = chan->channel + TOTAL_CHANNELS; + map_val = adc_dev->channel_step[chan->scan_index]; /* * We check the complete FIFO. 
We programmed just one entry but in case diff --git a/drivers/iio/gyro/hid-sensor-gyro-3d.c b/drivers/iio/gyro/hid-sensor-gyro-3d.c index 40f4e4935d0d..fa034a3dad78 100644 --- a/drivers/iio/gyro/hid-sensor-gyro-3d.c +++ b/drivers/iio/gyro/hid-sensor-gyro-3d.c @@ -110,7 +110,6 @@ static int gyro_3d_read_raw(struct iio_dev *indio_dev, struct gyro_3d_state *gyro_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -151,14 +150,12 @@ static int gyro_3d_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &gyro_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &gyro_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/iio/industrialio-event.c b/drivers/iio/industrialio-event.c index 258a973a1fb8..bfbf4d419f41 100644 --- a/drivers/iio/industrialio-event.c +++ b/drivers/iio/industrialio-event.c @@ -345,6 +345,9 @@ static int iio_device_add_event(struct iio_dev *indio_dev, &indio_dev->event_interface->dev_attr_list); kfree(postfix); + if ((ret == -EBUSY) && (shared_by != IIO_SEPARATE)) + continue; + if (ret) return ret; diff --git a/drivers/iio/light/hid-sensor-als.c b/drivers/iio/light/hid-sensor-als.c index f34c94380b41..96e71e103ea7 100644 --- a/drivers/iio/light/hid-sensor-als.c +++ b/drivers/iio/light/hid-sensor-als.c @@ -79,7 +79,6 @@ static int als_read_raw(struct iio_dev *indio_dev, struct als_state *als_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -129,14 +128,12 @@ static int als_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &als_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &als_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/iio/light/hid-sensor-prox.c b/drivers/iio/light/hid-sensor-prox.c index d203ef4d892f..412bae86d6ae 100644 --- a/drivers/iio/light/hid-sensor-prox.c +++ b/drivers/iio/light/hid-sensor-prox.c @@ -74,7 +74,6 @@ static int prox_read_raw(struct iio_dev *indio_dev, struct prox_state *prox_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -125,14 +124,12 @@ static int prox_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &prox_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &prox_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/iio/light/tcs3472.c b/drivers/iio/light/tcs3472.c index fe063a0a21cd..752569985d1d 100644 --- a/drivers/iio/light/tcs3472.c +++ b/drivers/iio/light/tcs3472.c @@ -52,6 +52,7 @@ struct tcs3472_data { 
struct i2c_client *client; + struct mutex lock; u8 enable; u8 control; u8 atime; @@ -116,10 +117,17 @@ static int tcs3472_read_raw(struct iio_dev *indio_dev, switch (mask) { case IIO_CHAN_INFO_RAW: + if (iio_buffer_enabled(indio_dev)) + return -EBUSY; + + mutex_lock(&data->lock); ret = tcs3472_req_data(data); - if (ret < 0) + if (ret < 0) { + mutex_unlock(&data->lock); return ret; + } ret = i2c_smbus_read_word_data(data->client, chan->address); + mutex_unlock(&data->lock); if (ret < 0) return ret; *val = ret; @@ -255,6 +263,7 @@ static int tcs3472_probe(struct i2c_client *client, data = iio_priv(indio_dev); i2c_set_clientdata(client, indio_dev); data->client = client; + mutex_init(&data->lock); indio_dev->dev.parent = &client->dev; indio_dev->info = &tcs3472_info; diff --git a/drivers/iio/magnetometer/hid-sensor-magn-3d.c b/drivers/iio/magnetometer/hid-sensor-magn-3d.c index 41cf29e2a371..b2b0937d5133 100644 --- a/drivers/iio/magnetometer/hid-sensor-magn-3d.c +++ b/drivers/iio/magnetometer/hid-sensor-magn-3d.c @@ -110,7 +110,6 @@ static int magn_3d_read_raw(struct iio_dev *indio_dev, struct magn_3d_state *magn_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -153,14 +152,12 @@ static int magn_3d_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &magn_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &magn_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/iio/pressure/hid-sensor-press.c b/drivers/iio/pressure/hid-sensor-press.c index 1cd190c73788..2c0d2a4fed8c 100644 --- a/drivers/iio/pressure/hid-sensor-press.c +++ b/drivers/iio/pressure/hid-sensor-press.c @@ -78,7 +78,6 @@ static int press_read_raw(struct iio_dev *indio_dev, struct press_state *press_state = iio_priv(indio_dev); int report_id = -1; u32 address; - int ret; int ret_type; s32 poll_value; @@ -128,14 +127,12 @@ static int press_read_raw(struct iio_dev *indio_dev, ret_type = IIO_VAL_INT; break; case IIO_CHAN_INFO_SAMP_FREQ: - ret = hid_sensor_read_samp_freq_value( + ret_type = hid_sensor_read_samp_freq_value( &press_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; case IIO_CHAN_INFO_HYSTERESIS: - ret = hid_sensor_read_raw_hyst_value( + ret_type = hid_sensor_read_raw_hyst_value( &press_state->common_attributes, val, val2); - ret_type = IIO_VAL_INT_PLUS_MICRO; break; default: ret_type = -EINVAL; diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c index 5e153f6d4b48..768a0fb67dd6 100644 --- a/drivers/infiniband/hw/cxgb4/cm.c +++ b/drivers/infiniband/hw/cxgb4/cm.c @@ -432,8 +432,17 @@ static void arp_failure_discard(void *handle, struct sk_buff *skb) */ static void act_open_req_arp_failure(void *handle, struct sk_buff *skb) { + struct c4iw_ep *ep = handle; + printk(KERN_ERR MOD "ARP failure duing connect\n"); kfree_skb(skb); + connect_reply_upcall(ep, -EHOSTUNREACH); + state_set(&ep->com, DEAD); + remove_handle(ep->com.dev, &ep->com.dev->atid_idr, ep->atid); + cxgb4_free_atid(ep->com.dev->rdev.lldi.tids, ep->atid); + dst_release(ep->dst); + cxgb4_l2t_release(ep->l2t); + c4iw_put_ep(&ep->com); } /* @@ -658,7 +667,7 @@ static int send_connect(struct c4iw_ep 
*ep) opt2 |= T5_OPT_2_VALID; opt2 |= V_CONG_CNTRL(CONG_ALG_TAHOE); } - t4_set_arp_err_handler(skb, NULL, act_open_req_arp_failure); + t4_set_arp_err_handler(skb, ep, act_open_req_arp_failure); if (is_t4(ep->com.dev->rdev.lldi.adapter_type)) { if (ep->com.remote_addr.ss_family == AF_INET) { @@ -2180,7 +2189,6 @@ static void reject_cr(struct c4iw_dev *dev, u32 hwtid, struct sk_buff *skb) PDBG("%s c4iw_dev %p tid %u\n", __func__, dev, hwtid); BUG_ON(skb_cloned(skb)); skb_trim(skb, sizeof(struct cpl_tid_release)); - skb_get(skb); release_tid(&dev->rdev, hwtid, skb); return; } @@ -3917,7 +3925,7 @@ int __init c4iw_cm_init(void) return 0; } -void __exit c4iw_cm_term(void) +void c4iw_cm_term(void) { WARN_ON(!list_empty(&timeout_list)); flush_workqueue(workq); diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c index dd93aadc996e..7db82b24302b 100644 --- a/drivers/infiniband/hw/cxgb4/device.c +++ b/drivers/infiniband/hw/cxgb4/device.c @@ -696,6 +696,7 @@ static int c4iw_rdev_open(struct c4iw_rdev *rdev) pr_err(MOD "error allocating status page\n"); goto err4; } + rdev->status_page->db_off = 0; return 0; err4: c4iw_rqtpool_destroy(rdev); @@ -729,7 +730,6 @@ static void c4iw_dealloc(struct uld_ctx *ctx) if (ctx->dev->rdev.oc_mw_kva) iounmap(ctx->dev->rdev.oc_mw_kva); ib_dealloc_device(&ctx->dev->ibdev); - iwpm_exit(RDMA_NL_C4IW); ctx->dev = NULL; } @@ -826,12 +826,6 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop) setup_debugfs(devp); } - ret = iwpm_init(RDMA_NL_C4IW); - if (ret) { - pr_err("port mapper initialization failed with %d\n", ret); - ib_dealloc_device(&devp->ibdev); - return ERR_PTR(ret); - } return devp; } @@ -1332,6 +1326,15 @@ static int __init c4iw_init_module(void) pr_err("%s[%u]: Failed to add netlink callback\n" , __func__, __LINE__); + err = iwpm_init(RDMA_NL_C4IW); + if (err) { + pr_err("port mapper initialization failed with %d\n", err); + ibnl_remove_client(RDMA_NL_C4IW); + c4iw_cm_term(); + debugfs_remove_recursive(c4iw_debugfs_root); + return err; + } + cxgb4_register_uld(CXGB4_ULD_RDMA, &c4iw_uld_info); return 0; @@ -1349,6 +1352,7 @@ static void __exit c4iw_exit_module(void) } mutex_unlock(&dev_mutex); cxgb4_unregister_uld(CXGB4_ULD_RDMA); + iwpm_exit(RDMA_NL_C4IW); ibnl_remove_client(RDMA_NL_C4IW); c4iw_cm_term(); debugfs_remove_recursive(c4iw_debugfs_root); diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h index 125bc5d1e175..361fff7a0742 100644 --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h @@ -908,7 +908,7 @@ int c4iw_destroy_ctrl_qp(struct c4iw_rdev *rdev); int c4iw_register_device(struct c4iw_dev *dev); void c4iw_unregister_device(struct c4iw_dev *dev); int __init c4iw_cm_init(void); -void __exit c4iw_cm_term(void); +void c4iw_cm_term(void); void c4iw_release_dev_ucontext(struct c4iw_rdev *rdev, struct c4iw_dev_ucontext *uctx); void c4iw_init_dev_ucontext(struct c4iw_rdev *rdev, diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index d13ddf1c0033..bbbcf389272c 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -675,7 +675,7 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev, int err; uuari = &dev->mdev.priv.uuari; - if (init_attr->create_flags & ~IB_QP_CREATE_SIGNATURE_EN) + if (init_attr->create_flags & ~(IB_QP_CREATE_SIGNATURE_EN | IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK)) return -EINVAL; if (init_attr->qp_type == MLX5_IB_QPT_REG_UMR) diff --git 
a/drivers/iommu/fsl_pamu.c b/drivers/iommu/fsl_pamu.c index b99dd88e31b9..bb446d742a2d 100644 --- a/drivers/iommu/fsl_pamu.c +++ b/drivers/iommu/fsl_pamu.c @@ -170,10 +170,10 @@ int pamu_disable_liodn(int liodn) static unsigned int map_addrspace_size_to_wse(phys_addr_t addrspace_size) { /* Bug if not a power of 2 */ - BUG_ON(!is_power_of_2(addrspace_size)); + BUG_ON((addrspace_size & (addrspace_size - 1))); /* window size is 2^(WSE+1) bytes */ - return __ffs(addrspace_size) - 1; + return fls64(addrspace_size) - 2; } /* Derive the PAACE window count encoding for the subwindow count */ @@ -351,7 +351,7 @@ int pamu_config_ppaace(int liodn, phys_addr_t win_addr, phys_addr_t win_size, struct paace *ppaace; unsigned long fspi; - if (!is_power_of_2(win_size) || win_size < PAMU_PAGE_SIZE) { + if ((win_size & (win_size - 1)) || win_size < PAMU_PAGE_SIZE) { pr_debug("window size too small or not a power of two %llx\n", win_size); return -EINVAL; } @@ -464,7 +464,7 @@ int pamu_config_spaace(int liodn, u32 subwin_cnt, u32 subwin, return -ENOENT; } - if (!is_power_of_2(subwin_size) || subwin_size < PAMU_PAGE_SIZE) { + if ((subwin_size & (subwin_size - 1)) || subwin_size < PAMU_PAGE_SIZE) { pr_debug("subwindow size out of range, or not a power of 2\n"); return -EINVAL; } diff --git a/drivers/iommu/fsl_pamu_domain.c b/drivers/iommu/fsl_pamu_domain.c index 93072ba44b1d..af47648301a9 100644 --- a/drivers/iommu/fsl_pamu_domain.c +++ b/drivers/iommu/fsl_pamu_domain.c @@ -301,7 +301,7 @@ static int check_size(u64 size, dma_addr_t iova) * Size must be a power of two and at least be equal * to PAMU page size. */ - if (!is_power_of_2(size) || size < PAMU_PAGE_SIZE) { + if ((size & (size - 1)) || size < PAMU_PAGE_SIZE) { pr_debug("%s: size too small or not a power of two\n", __func__); return -EINVAL; } @@ -335,11 +335,6 @@ static struct fsl_dma_domain *iommu_alloc_dma_domain(void) return domain; } -static inline struct device_domain_info *find_domain(struct device *dev) -{ - return dev->archdata.iommu_domain; -} - static void remove_device_ref(struct device_domain_info *info, u32 win_cnt) { unsigned long flags; @@ -380,7 +375,7 @@ static void attach_device(struct fsl_dma_domain *dma_domain, int liodn, struct d * Check here if the device is already attached to domain or not. * If the device is already attached to a domain detach it. 
*/ - old_domain_info = find_domain(dev); + old_domain_info = dev->archdata.iommu_domain; if (old_domain_info && old_domain_info->domain != dma_domain) { spin_unlock_irqrestore(&device_domain_lock, flags); detach_device(dev, old_domain_info->domain); @@ -399,7 +394,7 @@ static void attach_device(struct fsl_dma_domain *dma_domain, int liodn, struct d * the info for the first LIODN as all * LIODNs share the same domain */ - if (!old_domain_info) + if (!dev->archdata.iommu_domain) dev->archdata.iommu_domain = info; spin_unlock_irqrestore(&device_domain_lock, flags); @@ -1042,12 +1037,15 @@ root_bus: group = get_shared_pci_device_group(pdev); } + if (!group) + group = ERR_PTR(-ENODEV); + return group; } static int fsl_pamu_add_device(struct device *dev) { - struct iommu_group *group = NULL; + struct iommu_group *group = ERR_PTR(-ENODEV); struct pci_dev *pdev; const u32 *prop; int ret, len; @@ -1070,7 +1068,7 @@ static int fsl_pamu_add_device(struct device *dev) group = get_device_iommu_group(dev); } - if (!group || IS_ERR(group)) + if (IS_ERR(group)) return PTR_ERR(group); ret = iommu_group_add_device(group, dev); diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c index 7e11c9d6ae8c..7c131cf7cc13 100644 --- a/drivers/irqchip/irq-gic.c +++ b/drivers/irqchip/irq-gic.c @@ -42,6 +42,7 @@ #include <linux/irqchip/chained_irq.h> #include <linux/irqchip/arm-gic.h> +#include <asm/cputype.h> #include <asm/irq.h> #include <asm/exception.h> #include <asm/smp_plat.h> @@ -954,7 +955,9 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start, } for_each_possible_cpu(cpu) { - unsigned long offset = percpu_offset * cpu_logical_map(cpu); + u32 mpidr = cpu_logical_map(cpu); + u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); + unsigned long offset = percpu_offset * core_id; *per_cpu_ptr(gic->dist_base.percpu_base, cpu) = dist_base + offset; *per_cpu_ptr(gic->cpu_base.percpu_base, cpu) = cpu_base + offset; } @@ -1071,8 +1074,10 @@ gic_of_init(struct device_node *node, struct device_node *parent) gic_cnt++; return 0; } +IRQCHIP_DECLARE(gic_400, "arm,gic-400", gic_of_init); IRQCHIP_DECLARE(cortex_a15_gic, "arm,cortex-a15-gic", gic_of_init); IRQCHIP_DECLARE(cortex_a9_gic, "arm,cortex-a9-gic", gic_of_init); +IRQCHIP_DECLARE(cortex_a7_gic, "arm,cortex-a7-gic", gic_of_init); IRQCHIP_DECLARE(msm_8660_qgic, "qcom,msm-8660-qgic", gic_of_init); IRQCHIP_DECLARE(msm_qgic2, "qcom,msm-qgic2", gic_of_init); diff --git a/drivers/isdn/hisax/l3ni1.c b/drivers/isdn/hisax/l3ni1.c index 0df6691d045c..8dc791bfaa6f 100644 --- a/drivers/isdn/hisax/l3ni1.c +++ b/drivers/isdn/hisax/l3ni1.c @@ -2059,13 +2059,17 @@ static int l3ni1_cmd_global(struct PStack *st, isdn_ctrl *ic) memcpy(p, ic->parm.ni1_io.data, ic->parm.ni1_io.datalen); /* copy data */ l = (p - temp) + ic->parm.ni1_io.datalen; /* total length */ - if (ic->parm.ni1_io.timeout > 0) - if (!(pc = ni1_new_l3_process(st, -1))) - { free_invoke_id(st, id); + if (ic->parm.ni1_io.timeout > 0) { + pc = ni1_new_l3_process(st, -1); + if (!pc) { + free_invoke_id(st, id); return (-2); } - pc->prot.ni1.ll_id = ic->parm.ni1_io.ll_id; /* remember id */ - pc->prot.ni1.proc = ic->parm.ni1_io.proc; /* and procedure */ + /* remember id */ + pc->prot.ni1.ll_id = ic->parm.ni1_io.ll_id; + /* and procedure */ + pc->prot.ni1.proc = ic->parm.ni1_io.proc; + } if (!(skb = l3_alloc_skb(l))) { free_invoke_id(st, id); diff --git a/drivers/isdn/i4l/isdn_ppp.c b/drivers/isdn/i4l/isdn_ppp.c index 61ac63237446..a333b7f798d1 100644 --- a/drivers/isdn/i4l/isdn_ppp.c +++ 
b/drivers/isdn/i4l/isdn_ppp.c @@ -442,7 +442,7 @@ static int get_filter(void __user *arg, struct sock_filter **p) { struct sock_fprog uprog; struct sock_filter *code = NULL; - int len, err; + int len; if (copy_from_user(&uprog, arg, sizeof(uprog))) return -EFAULT; @@ -458,12 +458,6 @@ static int get_filter(void __user *arg, struct sock_filter **p) if (IS_ERR(code)) return PTR_ERR(code); - err = sk_chk_filter(code, uprog.len); - if (err) { - kfree(code); - return err; - } - *p = code; return uprog.len; } diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c index 4ead4ba60656..d2899e7eb3aa 100644 --- a/drivers/md/dm-cache-metadata.c +++ b/drivers/md/dm-cache-metadata.c @@ -425,6 +425,15 @@ static int __open_metadata(struct dm_cache_metadata *cmd) disk_super = dm_block_data(sblock); + /* Verify the data block size hasn't changed */ + if (le32_to_cpu(disk_super->data_block_size) != cmd->data_block_size) { + DMERR("changing the data block size (from %u to %llu) is not supported", + le32_to_cpu(disk_super->data_block_size), + (unsigned long long)cmd->data_block_size); + r = -EINVAL; + goto bad; + } + r = __check_incompat_features(disk_super, cmd); if (r < 0) goto bad; diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index 53b213226c01..4cba2d808afb 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -1,5 +1,5 @@ /* - * Copyright (C) 2003 Christophe Saout <christophe@saout.de> + * Copyright (C) 2003 Jana Saout <jana@saout.de> * Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org> * Copyright (C) 2006-2009 Red Hat, Inc. All rights reserved. * Copyright (C) 2013 Milan Broz <gmazyland@gmail.com> @@ -1996,6 +1996,6 @@ static void __exit dm_crypt_exit(void) module_init(dm_crypt_init); module_exit(dm_crypt_exit); -MODULE_AUTHOR("Christophe Saout <christophe@saout.de>"); +MODULE_AUTHOR("Jana Saout <jana@saout.de>"); MODULE_DESCRIPTION(DM_NAME " target for transparent encryption / decryption"); MODULE_LICENSE("GPL"); diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c index 3842ac738f98..db404a0f7e2c 100644 --- a/drivers/md/dm-io.c +++ b/drivers/md/dm-io.c @@ -10,6 +10,7 @@ #include <linux/device-mapper.h> #include <linux/bio.h> +#include <linux/completion.h> #include <linux/mempool.h> #include <linux/module.h> #include <linux/sched.h> @@ -32,7 +33,7 @@ struct dm_io_client { struct io { unsigned long error_bits; atomic_t count; - struct task_struct *sleeper; + struct completion *wait; struct dm_io_client *client; io_notify_fn callback; void *context; @@ -121,8 +122,8 @@ static void dec_count(struct io *io, unsigned int region, int error) invalidate_kernel_vmap_range(io->vma_invalidate_address, io->vma_invalidate_size); - if (io->sleeper) - wake_up_process(io->sleeper); + if (io->wait) + complete(io->wait); else { unsigned long r = io->error_bits; @@ -387,6 +388,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions, */ volatile char io_[sizeof(struct io) + __alignof__(struct io) - 1]; struct io *io = (struct io *)PTR_ALIGN(&io_, __alignof__(struct io)); + DECLARE_COMPLETION_ONSTACK(wait); if (num_regions > 1 && (rw & RW_MASK) != WRITE) { WARN_ON(1); @@ -395,7 +397,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions, io->error_bits = 0; atomic_set(&io->count, 1); /* see dispatch_io() */ - io->sleeper = current; + io->wait = &wait; io->client = client; io->vma_invalidate_address = dp->vma_invalidate_address; @@ -403,15 +405,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions, 
dispatch_io(rw, num_regions, where, dp, io, 1); - while (1) { - set_current_state(TASK_UNINTERRUPTIBLE); - - if (!atomic_read(&io->count)) - break; - - io_schedule(); - } - set_current_state(TASK_RUNNING); + wait_for_completion_io(&wait); if (error_bits) *error_bits = io->error_bits; @@ -434,7 +428,7 @@ static int async_io(struct dm_io_client *client, unsigned int num_regions, io = mempool_alloc(client->pool, GFP_NOIO); io->error_bits = 0; atomic_set(&io->count, 1); /* see dispatch_io() */ - io->sleeper = NULL; + io->wait = NULL; io->client = client; io->callback = fn; io->context = context; diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c index 3f6fd9d33ba3..f4167b013d99 100644 --- a/drivers/md/dm-mpath.c +++ b/drivers/md/dm-mpath.c @@ -1611,8 +1611,9 @@ static int multipath_busy(struct dm_target *ti) spin_lock_irqsave(&m->lock, flags); - /* pg_init in progress, requeue until done */ - if (!pg_ready(m)) { + /* pg_init in progress or no paths available */ + if (m->pg_init_in_progress || + (!m->nr_valid_paths && m->queue_if_no_path)) { busy = 1; goto out; } diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c index b086a945edcb..e9d33ad59df5 100644 --- a/drivers/md/dm-thin-metadata.c +++ b/drivers/md/dm-thin-metadata.c @@ -613,6 +613,15 @@ static int __open_metadata(struct dm_pool_metadata *pmd) disk_super = dm_block_data(sblock); + /* Verify the data block size hasn't changed */ + if (le32_to_cpu(disk_super->data_block_size) != pmd->data_block_size) { + DMERR("changing the data block size (from %u to %llu) is not supported", + le32_to_cpu(disk_super->data_block_size), + (unsigned long long)pmd->data_block_size); + r = -EINVAL; + goto bad_unlock_sblock; + } + r = __check_incompat_features(disk_super, pmd); if (r < 0) goto bad_unlock_sblock; diff --git a/drivers/md/dm-zero.c b/drivers/md/dm-zero.c index c99003e0d47a..b9a64bbce304 100644 --- a/drivers/md/dm-zero.c +++ b/drivers/md/dm-zero.c @@ -1,5 +1,5 @@ /* - * Copyright (C) 2003 Christophe Saout <christophe@saout.de> + * Copyright (C) 2003 Jana Saout <jana@saout.de> * * This file is released under the GPL. */ @@ -79,6 +79,6 @@ static void __exit dm_zero_exit(void) module_init(dm_zero_init) module_exit(dm_zero_exit) -MODULE_AUTHOR("Christophe Saout <christophe@saout.de>"); +MODULE_AUTHOR("Jana Saout <jana@saout.de>"); MODULE_DESCRIPTION(DM_NAME " dummy target returning zeros"); MODULE_LICENSE("GPL"); diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 437d99045ef2..32b958dbc499 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -54,6 +54,8 @@ static void do_deferred_remove(struct work_struct *w); static DECLARE_WORK(deferred_remove_work, do_deferred_remove); +static struct workqueue_struct *deferred_remove_workqueue; + /* * For bio-based dm. * One of these is allocated per bio. 
@@ -276,16 +278,24 @@ static int __init local_init(void) if (r) goto out_free_rq_tio_cache; + deferred_remove_workqueue = alloc_workqueue("kdmremove", WQ_UNBOUND, 1); + if (!deferred_remove_workqueue) { + r = -ENOMEM; + goto out_uevent_exit; + } + _major = major; r = register_blkdev(_major, _name); if (r < 0) - goto out_uevent_exit; + goto out_free_workqueue; if (!_major) _major = r; return 0; +out_free_workqueue: + destroy_workqueue(deferred_remove_workqueue); out_uevent_exit: dm_uevent_exit(); out_free_rq_tio_cache: @@ -299,6 +309,7 @@ out_free_io_cache: static void local_exit(void) { flush_scheduled_work(); + destroy_workqueue(deferred_remove_workqueue); kmem_cache_destroy(_rq_tio_cache); kmem_cache_destroy(_io_cache); @@ -407,7 +418,7 @@ static void dm_blk_close(struct gendisk *disk, fmode_t mode) if (atomic_dec_and_test(&md->open_count) && (test_bit(DMF_DEFERRED_REMOVE, &md->flags))) - schedule_work(&deferred_remove_work); + queue_work(deferred_remove_workqueue, &deferred_remove_work); dm_put(md); diff --git a/drivers/mtd/chips/cfi_cmdset_0001.c b/drivers/mtd/chips/cfi_cmdset_0001.c index e4ec355704a6..a7543ba3e190 100644 --- a/drivers/mtd/chips/cfi_cmdset_0001.c +++ b/drivers/mtd/chips/cfi_cmdset_0001.c @@ -52,6 +52,11 @@ /* Atmel chips */ #define AT49BV640D 0x02de #define AT49BV640DT 0x02db +/* Sharp chips */ +#define LH28F640BFHE_PTTL90 0x00b0 +#define LH28F640BFHE_PBTL90 0x00b1 +#define LH28F640BFHE_PTTL70A 0x00b2 +#define LH28F640BFHE_PBTL70A 0x00b3 static int cfi_intelext_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *); static int cfi_intelext_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *); @@ -258,6 +263,36 @@ static void fixup_st_m28w320cb(struct mtd_info *mtd) (cfi->cfiq->EraseRegionInfo[1] & 0xffff0000) | 0x3e; }; +static int is_LH28F640BF(struct cfi_private *cfi) +{ + /* Sharp LH28F640BF Family */ + if (cfi->mfr == CFI_MFR_SHARP && ( + cfi->id == LH28F640BFHE_PTTL90 || cfi->id == LH28F640BFHE_PBTL90 || + cfi->id == LH28F640BFHE_PTTL70A || cfi->id == LH28F640BFHE_PBTL70A)) + return 1; + return 0; +} + +static void fixup_LH28F640BF(struct mtd_info *mtd) +{ + struct map_info *map = mtd->priv; + struct cfi_private *cfi = map->fldrv_priv; + struct cfi_pri_intelext *extp = cfi->cmdset_priv; + + /* Reset the Partition Configuration Register on LH28F640BF + * to a single partition (PCR = 0x000): PCR is embedded into A0-A15. */ + if (is_LH28F640BF(cfi)) { + printk(KERN_INFO "Reset Partition Config. Register: 1 Partition of 4 planes\n"); + map_write(map, CMD(0x60), 0); + map_write(map, CMD(0x04), 0); + + /* We have set one single partition thus + * Simultaneous Operations are not allowed */ + printk(KERN_INFO "cfi_cmdset_0001: Simultaneous Operations disabled\n"); + extp->FeatureSupport &= ~512; + } +} + static void fixup_use_point(struct mtd_info *mtd) { struct map_info *map = mtd->priv; @@ -309,6 +344,8 @@ static struct cfi_fixup cfi_fixup_table[] = { { CFI_MFR_ST, 0x00ba, /* M28W320CT */ fixup_st_m28w320ct }, { CFI_MFR_ST, 0x00bb, /* M28W320CB */ fixup_st_m28w320cb }, { CFI_MFR_INTEL, CFI_ID_ANY, fixup_unlock_powerup_lock }, + { CFI_MFR_SHARP, CFI_ID_ANY, fixup_unlock_powerup_lock }, + { CFI_MFR_SHARP, CFI_ID_ANY, fixup_LH28F640BF }, { 0, 0, NULL } }; @@ -1649,6 +1686,12 @@ static int __xipram do_write_buffer(struct map_info *map, struct flchip *chip, initial_adr = adr; cmd_adr = adr & ~(wbufsize-1); + /* Sharp LH28F640BF chips need the first address for the + * Page Buffer Program command. 
See Table 5 of + * LH28F320BF, LH28F640BF, LH28F128BF Series (Appendix FUM00701) */ + if (is_LH28F640BF(cfi)) + cmd_adr = adr; + /* Let's determine this according to the interleave only once */ write_cmd = (cfi->cfiq->P_ID != P_ID_INTEL_PERFORMANCE) ? CMD(0xe8) : CMD(0xe9); diff --git a/drivers/mtd/devices/elm.c b/drivers/mtd/devices/elm.c index 7df86948e6d4..b4f61c7fc161 100644 --- a/drivers/mtd/devices/elm.c +++ b/drivers/mtd/devices/elm.c @@ -475,6 +475,7 @@ static int elm_context_save(struct elm_info *info) ELM_SYNDROME_FRAGMENT_1 + offset); regs->elm_syndrome_fragment_0[i] = elm_read_reg(info, ELM_SYNDROME_FRAGMENT_0 + offset); + break; default: return -EINVAL; } @@ -520,6 +521,7 @@ static int elm_context_restore(struct elm_info *info) regs->elm_syndrome_fragment_1[i]); elm_write_reg(info, ELM_SYNDROME_FRAGMENT_0 + offset, regs->elm_syndrome_fragment_0[i]); + break; default: return -EINVAL; } diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c index 41167e9e991e..4f3e80c68a26 100644 --- a/drivers/mtd/nand/nand_base.c +++ b/drivers/mtd/nand/nand_base.c @@ -4047,8 +4047,10 @@ int nand_scan_tail(struct mtd_info *mtd) ecc->layout->oobavail += ecc->layout->oobfree[i].length; mtd->oobavail = ecc->layout->oobavail; - /* ECC sanity check: warn noisily if it's too weak */ - WARN_ON(!nand_ecc_strength_good(mtd)); + /* ECC sanity check: warn if it's too weak */ + if (!nand_ecc_strength_good(mtd)) + pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n", + mtd->name); /* * Set the number of read / write steps for one page depending on ECC diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c index b04e7d059888..0431b46d9fd9 100644 --- a/drivers/mtd/ubi/fastmap.c +++ b/drivers/mtd/ubi/fastmap.c @@ -125,7 +125,7 @@ static struct ubi_ainf_volume *add_vol(struct ubi_attach_info *ai, int vol_id, parent = *p; av = rb_entry(parent, struct ubi_ainf_volume, rb); - if (vol_id < av->vol_id) + if (vol_id > av->vol_id) p = &(*p)->rb_left; else p = &(*p)->rb_right; @@ -423,7 +423,7 @@ static int scan_pool(struct ubi_device *ubi, struct ubi_attach_info *ai, pnum, err); ret = err > 0 ? 
UBI_BAD_FASTMAP : err; goto out; - } else if (ret == UBI_IO_BITFLIPS) + } else if (err == UBI_IO_BITFLIPS) scrub = 1; /* diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index 3a451b6cd3d5..701f86cd5993 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -4068,7 +4068,7 @@ static int bond_check_params(struct bond_params *params) } if (ad_select) { - bond_opt_initstr(&newval, lacp_rate); + bond_opt_initstr(&newval, ad_select); valptr = bond_opt_parse(bond_opt_get(BOND_OPT_AD_SELECT), &newval); if (!valptr) { diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c index 141160ef249a..5776e503e4c5 100644 --- a/drivers/net/ethernet/broadcom/bcmsysport.c +++ b/drivers/net/ethernet/broadcom/bcmsysport.c @@ -654,13 +654,13 @@ static int bcm_sysport_tx_poll(struct napi_struct *napi, int budget) work_done = bcm_sysport_tx_reclaim(ring->priv, ring); - if (work_done < budget) { + if (work_done == 0) { napi_complete(napi); /* re-enable TX interrupt */ intrl2_1_mask_clear(ring->priv, BIT(ring->index)); } - return work_done; + return 0; } static void bcm_sysport_tx_reclaim_all(struct bcm_sysport_priv *priv) @@ -1254,28 +1254,17 @@ static inline void umac_enable_set(struct bcm_sysport_priv *priv, usleep_range(1000, 2000); } -static inline int umac_reset(struct bcm_sysport_priv *priv) +static inline void umac_reset(struct bcm_sysport_priv *priv) { - unsigned int timeout = 0; u32 reg; - int ret = 0; - - umac_writel(priv, 0, UMAC_CMD); - while (timeout++ < 1000) { - reg = umac_readl(priv, UMAC_CMD); - if (!(reg & CMD_SW_RESET)) - break; - - udelay(1); - } - - if (timeout == 1000) { - dev_err(&priv->pdev->dev, - "timeout waiting for MAC to come out of reset\n"); - ret = -ETIMEDOUT; - } - return ret; + reg = umac_readl(priv, UMAC_CMD); + reg |= CMD_SW_RESET; + umac_writel(priv, reg, UMAC_CMD); + udelay(10); + reg = umac_readl(priv, UMAC_CMD); + reg &= ~CMD_SW_RESET; + umac_writel(priv, reg, UMAC_CMD); } static void umac_set_hw_addr(struct bcm_sysport_priv *priv, @@ -1303,11 +1292,7 @@ static int bcm_sysport_open(struct net_device *dev) int ret; /* Reset UniMAC */ - ret = umac_reset(priv); - if (ret) { - netdev_err(dev, "UniMAC reset failed\n"); - return ret; - } + umac_reset(priv); /* Flush TX and RX FIFOs at TOPCTRL level */ topctrl_flush(priv); @@ -1589,12 +1574,6 @@ static int bcm_sysport_probe(struct platform_device *pdev) BUILD_BUG_ON(sizeof(struct bcm_tsb) != 8); dev->needed_headroom += sizeof(struct bcm_tsb); - /* We are interfaced to a switch which handles the multicast - * filtering for us, so we do not support programming any - * multicast hash table in this Ethernet MAC. 
- */ - dev->flags &= ~IFF_MULTICAST; - /* libphy will adjust the link state accordingly */ netif_carrier_off(dev); diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c index 47c5814114e1..4b875da1c7ed 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c @@ -797,7 +797,8 @@ static void bnx2x_tpa_stop(struct bnx2x *bp, struct bnx2x_fastpath *fp, return; } - bnx2x_frag_free(fp, new_data); + if (new_data) + bnx2x_frag_free(fp, new_data); drop: /* drop the packet and keep the buffer in the bin */ DP(NETIF_MSG_RX_STATUS, diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c index 2887034523e0..6a8b1453a1b9 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c @@ -12937,7 +12937,7 @@ static int bnx2x_get_num_non_def_sbs(struct pci_dev *pdev, int cnic_cnt) * without the default SB. * For VFs there is no default SB, then we return (index+1). */ - pci_read_config_word(pdev, pdev->msix_cap + PCI_MSI_FLAGS, &control); + pci_read_config_word(pdev, pdev->msix_cap + PCI_MSIX_FLAGS, &control); index = control & PCI_MSIX_FLAGS_QSIZE; diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c index 5ba1cfbd60da..16281ad2da12 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c @@ -1408,13 +1408,6 @@ static int bcmgenet_alloc_rx_buffers(struct bcmgenet_priv *priv) if (cb->skb) continue; - /* set the DMA descriptor length once and for all - * it will only change if we support dynamically sizing - * priv->rx_buf_len, but we do not - */ - dmadesc_set_length_status(priv, priv->rx_bd_assign_ptr, - priv->rx_buf_len << DMA_BUFLENGTH_SHIFT); - ret = bcmgenet_rx_refill(priv, cb); if (ret) break; @@ -2535,14 +2528,17 @@ static int bcmgenet_probe(struct platform_device *pdev) netif_set_real_num_tx_queues(priv->dev, priv->hw_params->tx_queues + 1); netif_set_real_num_rx_queues(priv->dev, priv->hw_params->rx_queues + 1); - err = register_netdev(dev); - if (err) - goto err_clk_disable; + /* libphy will determine the link state */ + netif_carrier_off(dev); /* Turn off the main clock, WOL clock is handled separately */ if (!IS_ERR(priv->clk)) clk_disable_unprepare(priv->clk); + err = register_netdev(dev); + if (err) + goto err; + return err; err_clk_disable: diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h index 0f117105fed1..e23c993b1362 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h @@ -331,9 +331,9 @@ struct bcmgenet_mib_counters { #define EXT_ENERGY_DET_MASK (1 << 12) #define EXT_RGMII_OOB_CTRL 0x0C -#define RGMII_MODE_EN (1 << 0) #define RGMII_LINK (1 << 4) #define OOB_DISABLE (1 << 5) +#define RGMII_MODE_EN (1 << 6) #define ID_MODE_DIS (1 << 16) #define EXT_GPHY_CTRL 0x1C diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c index 34a26e42f19d..1e187fb760f8 100644 --- a/drivers/net/ethernet/emulex/benet/be_main.c +++ b/drivers/net/ethernet/emulex/benet/be_main.c @@ -2902,7 +2902,7 @@ static int be_open(struct net_device *netdev) for_all_evt_queues(adapter, eqo, i) { napi_enable(&eqo->napi); be_enable_busy_poll(eqo); - be_eq_notify(adapter, eqo->q.id, true, false, 0); + be_eq_notify(adapter, eqo->q.id, 
true, true, 0); } adapter->flags |= BE_FLAGS_NAPI_ENABLED; diff --git a/drivers/net/ethernet/freescale/ucc_geth.c b/drivers/net/ethernet/freescale/ucc_geth.c index fab39e295441..36fc429298e3 100644 --- a/drivers/net/ethernet/freescale/ucc_geth.c +++ b/drivers/net/ethernet/freescale/ucc_geth.c @@ -2990,11 +2990,11 @@ static int ucc_geth_startup(struct ucc_geth_private *ugeth) if (ug_info->rxExtendedFiltering) { size += THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING; if (ug_info->largestexternallookupkeysize == - QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES) + QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_8_BYTES) size += THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING_8; if (ug_info->largestexternallookupkeysize == - QE_FLTR_TABLE_LOOKUP_KEY_SIZE_16_BYTES) + QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_16_BYTES) size += THREAD_RX_PRAM_ADDITIONAL_FOR_EXTENDED_FILTERING_16; } diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.c b/drivers/net/ethernet/intel/igb/e1000_82575.c index a2db388cc31e..ee74f9536b31 100644 --- a/drivers/net/ethernet/intel/igb/e1000_82575.c +++ b/drivers/net/ethernet/intel/igb/e1000_82575.c @@ -1481,6 +1481,13 @@ static s32 igb_init_hw_82575(struct e1000_hw *hw) s32 ret_val; u16 i, rar_count = mac->rar_entry_count; + if ((hw->mac.type >= e1000_i210) && + !(igb_get_flash_presence_i210(hw))) { + ret_val = igb_pll_workaround_i210(hw); + if (ret_val) + return ret_val; + } + /* Initialize identification LED */ ret_val = igb_id_led_init(hw); if (ret_val) { diff --git a/drivers/net/ethernet/intel/igb/e1000_defines.h b/drivers/net/ethernet/intel/igb/e1000_defines.h index 2a8bb35c2df2..217f8138851b 100644 --- a/drivers/net/ethernet/intel/igb/e1000_defines.h +++ b/drivers/net/ethernet/intel/igb/e1000_defines.h @@ -46,14 +46,15 @@ #define E1000_CTRL_EXT_SDP3_DIR 0x00000800 /* SDP3 Data direction */ /* Physical Func Reset Done Indication */ -#define E1000_CTRL_EXT_PFRSTD 0x00004000 -#define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000 -#define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000 -#define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX 0x00400000 -#define E1000_CTRL_EXT_LINK_MODE_SGMII 0x00800000 -#define E1000_CTRL_EXT_LINK_MODE_GMII 0x00000000 -#define E1000_CTRL_EXT_EIAME 0x01000000 -#define E1000_CTRL_EXT_IRCA 0x00000001 +#define E1000_CTRL_EXT_PFRSTD 0x00004000 +#define E1000_CTRL_EXT_SDLPE 0X00040000 /* SerDes Low Power Enable */ +#define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000 +#define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000 +#define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX 0x00400000 +#define E1000_CTRL_EXT_LINK_MODE_SGMII 0x00800000 +#define E1000_CTRL_EXT_LINK_MODE_GMII 0x00000000 +#define E1000_CTRL_EXT_EIAME 0x01000000 +#define E1000_CTRL_EXT_IRCA 0x00000001 /* Interrupt delay cancellation */ /* Driver loaded bit for FW */ #define E1000_CTRL_EXT_DRV_LOAD 0x10000000 @@ -62,6 +63,7 @@ /* packet buffer parity error detection enabled */ /* descriptor FIFO parity error detection enable */ #define E1000_CTRL_EXT_PBA_CLR 0x80000000 /* PBA Clear */ +#define E1000_CTRL_EXT_PHYPDEN 0x00100000 #define E1000_I2CCMD_REG_ADDR_SHIFT 16 #define E1000_I2CCMD_PHY_ADDR_SHIFT 24 #define E1000_I2CCMD_OPCODE_READ 0x08000000 diff --git a/drivers/net/ethernet/intel/igb/e1000_hw.h b/drivers/net/ethernet/intel/igb/e1000_hw.h index 89925e405849..ce55ea5d750c 100644 --- a/drivers/net/ethernet/intel/igb/e1000_hw.h +++ b/drivers/net/ethernet/intel/igb/e1000_hw.h @@ -567,4 +567,7 @@ struct net_device *igb_get_hw_dev(struct e1000_hw *hw); /* These functions must be implemented by drivers */ s32 
igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value); s32 igb_write_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value); + +void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); +void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); #endif /* _E1000_HW_H_ */ diff --git a/drivers/net/ethernet/intel/igb/e1000_i210.c b/drivers/net/ethernet/intel/igb/e1000_i210.c index 337161f440dd..65d931669f81 100644 --- a/drivers/net/ethernet/intel/igb/e1000_i210.c +++ b/drivers/net/ethernet/intel/igb/e1000_i210.c @@ -834,3 +834,69 @@ s32 igb_init_nvm_params_i210(struct e1000_hw *hw) } return ret_val; } + +/** + * igb_pll_workaround_i210 + * @hw: pointer to the HW structure + * + * Works around an errata in the PLL circuit where it occasionally + * provides the wrong clock frequency after power up. + **/ +s32 igb_pll_workaround_i210(struct e1000_hw *hw) +{ + s32 ret_val; + u32 wuc, mdicnfg, ctrl, ctrl_ext, reg_val; + u16 nvm_word, phy_word, pci_word, tmp_nvm; + int i; + + /* Get and set needed register values */ + wuc = rd32(E1000_WUC); + mdicnfg = rd32(E1000_MDICNFG); + reg_val = mdicnfg & ~E1000_MDICNFG_EXT_MDIO; + wr32(E1000_MDICNFG, reg_val); + + /* Get data from NVM, or set default */ + ret_val = igb_read_invm_word_i210(hw, E1000_INVM_AUTOLOAD, + &nvm_word); + if (ret_val) + nvm_word = E1000_INVM_DEFAULT_AL; + tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL; + for (i = 0; i < E1000_MAX_PLL_TRIES; i++) { + /* check current state directly from internal PHY */ + igb_read_phy_reg_gs40g(hw, (E1000_PHY_PLL_FREQ_PAGE | + E1000_PHY_PLL_FREQ_REG), &phy_word); + if ((phy_word & E1000_PHY_PLL_UNCONF) + != E1000_PHY_PLL_UNCONF) { + ret_val = 0; + break; + } else { + ret_val = -E1000_ERR_PHY; + } + /* directly reset the internal PHY */ + ctrl = rd32(E1000_CTRL); + wr32(E1000_CTRL, ctrl|E1000_CTRL_PHY_RST); + + ctrl_ext = rd32(E1000_CTRL_EXT); + ctrl_ext |= (E1000_CTRL_EXT_PHYPDEN | E1000_CTRL_EXT_SDLPE); + wr32(E1000_CTRL_EXT, ctrl_ext); + + wr32(E1000_WUC, 0); + reg_val = (E1000_INVM_AUTOLOAD << 4) | (tmp_nvm << 16); + wr32(E1000_EEARBC_I210, reg_val); + + igb_read_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); + pci_word |= E1000_PCI_PMCSR_D3; + igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); + usleep_range(1000, 2000); + pci_word &= ~E1000_PCI_PMCSR_D3; + igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); + reg_val = (E1000_INVM_AUTOLOAD << 4) | (nvm_word << 16); + wr32(E1000_EEARBC_I210, reg_val); + + /* restore WUC register */ + wr32(E1000_WUC, wuc); + } + /* restore MDICNFG setting */ + wr32(E1000_MDICNFG, mdicnfg); + return ret_val; +} diff --git a/drivers/net/ethernet/intel/igb/e1000_i210.h b/drivers/net/ethernet/intel/igb/e1000_i210.h index 9f34976687ba..3442b6357d01 100644 --- a/drivers/net/ethernet/intel/igb/e1000_i210.h +++ b/drivers/net/ethernet/intel/igb/e1000_i210.h @@ -33,6 +33,7 @@ s32 igb_read_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 *data); s32 igb_write_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 data); s32 igb_init_nvm_params_i210(struct e1000_hw *hw); bool igb_get_flash_presence_i210(struct e1000_hw *hw); +s32 igb_pll_workaround_i210(struct e1000_hw *hw); #define E1000_STM_OPCODE 0xDB00 #define E1000_EEPROM_FLASH_SIZE_WORD 0x11 @@ -78,4 +79,15 @@ enum E1000_INVM_STRUCTURE_TYPE { #define NVM_LED_1_CFG_DEFAULT_I211 0x0184 #define NVM_LED_0_2_CFG_DEFAULT_I211 0x200C +/* PLL Defines */ +#define E1000_PCI_PMCSR 0x44 +#define E1000_PCI_PMCSR_D3 0x03 +#define E1000_MAX_PLL_TRIES 5 +#define E1000_PHY_PLL_UNCONF 0xFF +#define 
E1000_PHY_PLL_FREQ_PAGE 0xFC0000 +#define E1000_PHY_PLL_FREQ_REG 0x000E +#define E1000_INVM_DEFAULT_AL 0x202F +#define E1000_INVM_AUTOLOAD 0x0A +#define E1000_INVM_PLL_WO_VAL 0x0010 + #endif diff --git a/drivers/net/ethernet/intel/igb/e1000_regs.h b/drivers/net/ethernet/intel/igb/e1000_regs.h index 1cc4b1a7e597..f5ba4e4eafb9 100644 --- a/drivers/net/ethernet/intel/igb/e1000_regs.h +++ b/drivers/net/ethernet/intel/igb/e1000_regs.h @@ -66,6 +66,7 @@ #define E1000_PBA 0x01000 /* Packet Buffer Allocation - RW */ #define E1000_PBS 0x01008 /* Packet Buffer Size */ #define E1000_EEMNGCTL 0x01010 /* MNG EEprom Control */ +#define E1000_EEARBC_I210 0x12024 /* EEPROM Auto Read Bus Control */ #define E1000_EEWR 0x0102C /* EEPROM Write Register - RW */ #define E1000_I2CCMD 0x01028 /* SFPI2C Command Register - RW */ #define E1000_FRTIMER 0x01048 /* Free Running Timer - RW */ diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index f145adbb55ac..a9537ba7a5a0 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -7215,6 +7215,20 @@ static int igb_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) } } +void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value) +{ + struct igb_adapter *adapter = hw->back; + + pci_read_config_word(adapter->pdev, reg, value); +} + +void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value) +{ + struct igb_adapter *adapter = hw->back; + + pci_write_config_word(adapter->pdev, reg, *value); +} + s32 igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value) { struct igb_adapter *adapter = hw->back; @@ -7578,6 +7592,8 @@ static int igb_sriov_reinit(struct pci_dev *dev) if (netif_running(netdev)) igb_close(netdev); + else + igb_reset(adapter); igb_clear_interrupt_scheme(adapter); diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index 45beca17fa50..dadd9a5f6323 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -1207,7 +1207,7 @@ static u32 mvneta_txq_desc_csum(int l3_offs, int l3_proto, command = l3_offs << MVNETA_TX_L3_OFF_SHIFT; command |= ip_hdr_len << MVNETA_TX_IP_HLEN_SHIFT; - if (l3_proto == swab16(ETH_P_IP)) + if (l3_proto == htons(ETH_P_IP)) command |= MVNETA_TXD_IP_CSUM; else command |= MVNETA_TX_L3_IP6; @@ -2529,7 +2529,7 @@ static void mvneta_adjust_link(struct net_device *ndev) if (phydev->speed == SPEED_1000) val |= MVNETA_GMAC_CONFIG_GMII_SPEED; - else + else if (phydev->speed == SPEED_100) val |= MVNETA_GMAC_CONFIG_MII_SPEED; mvreg_write(pp, MVNETA_GMAC_AUTONEG_CONFIG, val); diff --git a/drivers/net/ethernet/mellanox/mlx4/cq.c b/drivers/net/ethernet/mellanox/mlx4/cq.c index 80f725228f5b..56022d647837 100644 --- a/drivers/net/ethernet/mellanox/mlx4/cq.c +++ b/drivers/net/ethernet/mellanox/mlx4/cq.c @@ -294,8 +294,6 @@ int mlx4_cq_alloc(struct mlx4_dev *dev, int nent, init_completion(&cq->free); cq->irq = priv->eq_table.eq[cq->vector].irq; - cq->irq_affinity_change = false; - return 0; err_radix: diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c index 4b2130760eed..14c00048bbec 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c @@ -128,6 +128,10 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq, mlx4_warn(mdev, "Failed assigning an EQ to %s, falling back to legacy EQ's\n", name); } + + cq->irq_desc = + 
irq_to_desc(mlx4_eq_get_irq(mdev->dev, + cq->vector)); } } else { cq->vector = (cq->ring + 1 + priv->port) % @@ -187,8 +191,6 @@ void mlx4_en_destroy_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq **pcq) mlx4_en_unmap_buffer(&cq->wqres.buf); mlx4_free_hwq_res(mdev->dev, &cq->wqres, cq->buf_size); if (priv->mdev->dev->caps.comp_pool && cq->vector) { - if (!cq->is_tx) - irq_set_affinity_hint(cq->mcq.irq, NULL); mlx4_release_eq(priv->mdev->dev, cq->vector); } cq->vector = 0; @@ -204,6 +206,7 @@ void mlx4_en_deactivate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq) if (!cq->is_tx) { napi_hash_del(&cq->napi); synchronize_rcu(); + irq_set_affinity_hint(cq->mcq.irq, NULL); } netif_napi_del(&cq->napi); diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c index fa1a069e14e6..68d763d2d030 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c @@ -417,6 +417,8 @@ static int mlx4_en_get_coalesce(struct net_device *dev, coal->tx_coalesce_usecs = priv->tx_usecs; coal->tx_max_coalesced_frames = priv->tx_frames; + coal->tx_max_coalesced_frames_irq = priv->tx_work_limit; + coal->rx_coalesce_usecs = priv->rx_usecs; coal->rx_max_coalesced_frames = priv->rx_frames; @@ -426,6 +428,7 @@ static int mlx4_en_get_coalesce(struct net_device *dev, coal->rx_coalesce_usecs_high = priv->rx_usecs_high; coal->rate_sample_interval = priv->sample_interval; coal->use_adaptive_rx_coalesce = priv->adaptive_rx_coal; + return 0; } @@ -434,6 +437,9 @@ static int mlx4_en_set_coalesce(struct net_device *dev, { struct mlx4_en_priv *priv = netdev_priv(dev); + if (!coal->tx_max_coalesced_frames_irq) + return -EINVAL; + priv->rx_frames = (coal->rx_max_coalesced_frames == MLX4_EN_AUTO_CONF) ? 
MLX4_EN_RX_COAL_TARGET : @@ -457,6 +463,7 @@ static int mlx4_en_set_coalesce(struct net_device *dev, priv->rx_usecs_high = coal->rx_coalesce_usecs_high; priv->sample_interval = coal->rate_sample_interval; priv->adaptive_rx_coal = coal->use_adaptive_rx_coalesce; + priv->tx_work_limit = coal->tx_max_coalesced_frames_irq; return mlx4_en_moderation_update(priv); } diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c index 7d4fb7bf2593..7345c43b019e 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c @@ -2336,7 +2336,7 @@ static void mlx4_en_add_vxlan_port(struct net_device *dev, struct mlx4_en_priv *priv = netdev_priv(dev); __be16 current_port; - if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_VXLAN_OFFLOADS)) + if (priv->mdev->dev->caps.tunnel_offload_mode != MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) return; if (sa_family == AF_INET6) @@ -2473,6 +2473,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port, MLX4_WQE_CTRL_SOLICITED); priv->num_tx_rings_p_up = mdev->profile.num_tx_rings_p_up; priv->tx_ring_num = prof->tx_ring_num; + priv->tx_work_limit = MLX4_EN_DEFAULT_TX_WORK; priv->tx_ring = kzalloc(sizeof(struct mlx4_en_tx_ring *) * MAX_TX_RINGS, GFP_KERNEL); diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c index d2d415732d99..5535862f27cc 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c @@ -40,6 +40,7 @@ #include <linux/if_ether.h> #include <linux/if_vlan.h> #include <linux/vmalloc.h> +#include <linux/irq.h> #include "mlx4_en.h" @@ -782,6 +783,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud PKT_HASH_TYPE_L3); skb_record_rx_queue(gro_skb, cq->ring); + skb_mark_napi_id(gro_skb, &cq->napi); if (ring->hwtstamp_rx_filter == HWTSTAMP_FILTER_ALL) { timestamp = mlx4_en_get_cqe_ts(cqe); @@ -896,16 +898,25 @@ int mlx4_en_poll_rx_cq(struct napi_struct *napi, int budget) /* If we used up all the quota - we're probably not done yet... */ if (done == budget) { + int cpu_curr; + const struct cpumask *aff; + INC_PERF_COUNTER(priv->pstats.napi_quota); - if (unlikely(cq->mcq.irq_affinity_change)) { - cq->mcq.irq_affinity_change = false; + + cpu_curr = smp_processor_id(); + aff = irq_desc_get_irq_data(cq->irq_desc)->affinity; + + if (unlikely(!cpumask_test_cpu(cpu_curr, aff))) { + /* Current cpu is not according to smp_irq_affinity - + * probably affinity changed. 
need to stop this NAPI + * poll, and restart it on the right CPU + */ napi_complete(napi); mlx4_en_arm_cq(priv, cq); return 0; } } else { /* Done for now */ - cq->mcq.irq_affinity_change = false; napi_complete(napi); mlx4_en_arm_cq(priv, cq); } diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c index 8be7483f8236..5045bab59633 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c @@ -351,9 +351,8 @@ int mlx4_en_free_tx_buf(struct net_device *dev, struct mlx4_en_tx_ring *ring) return cnt; } -static int mlx4_en_process_tx_cq(struct net_device *dev, - struct mlx4_en_cq *cq, - int budget) +static bool mlx4_en_process_tx_cq(struct net_device *dev, + struct mlx4_en_cq *cq) { struct mlx4_en_priv *priv = netdev_priv(dev); struct mlx4_cq *mcq = &cq->mcq; @@ -372,9 +371,10 @@ static int mlx4_en_process_tx_cq(struct net_device *dev, int factor = priv->cqe_factor; u64 timestamp = 0; int done = 0; + int budget = priv->tx_work_limit; if (!priv->port_up) - return 0; + return true; index = cons_index & size_mask; cqe = &buf[(index << factor) + factor]; @@ -447,7 +447,7 @@ static int mlx4_en_process_tx_cq(struct net_device *dev, netif_tx_wake_queue(ring->tx_queue); ring->wake_queue++; } - return done; + return done < budget; } void mlx4_en_tx_irq(struct mlx4_cq *mcq) @@ -467,24 +467,16 @@ int mlx4_en_poll_tx_cq(struct napi_struct *napi, int budget) struct mlx4_en_cq *cq = container_of(napi, struct mlx4_en_cq, napi); struct net_device *dev = cq->dev; struct mlx4_en_priv *priv = netdev_priv(dev); - int done; + int clean_complete; - done = mlx4_en_process_tx_cq(dev, cq, budget); + clean_complete = mlx4_en_process_tx_cq(dev, cq); + if (!clean_complete) + return budget; - /* If we used up all the quota - we're probably not done yet... 
*/ - if (done < budget) { - /* Done for now */ - cq->mcq.irq_affinity_change = false; - napi_complete(napi); - mlx4_en_arm_cq(priv, cq); - return done; - } else if (unlikely(cq->mcq.irq_affinity_change)) { - cq->mcq.irq_affinity_change = false; - napi_complete(napi); - mlx4_en_arm_cq(priv, cq); - return 0; - } - return budget; + napi_complete(napi); + mlx4_en_arm_cq(priv, cq); + + return 0; } static struct mlx4_en_tx_desc *mlx4_en_bounce_to_desc(struct mlx4_en_priv *priv, diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c index d954ec1eac17..2a004b347e1d 100644 --- a/drivers/net/ethernet/mellanox/mlx4/eq.c +++ b/drivers/net/ethernet/mellanox/mlx4/eq.c @@ -53,11 +53,6 @@ enum { MLX4_EQ_ENTRY_SIZE = 0x20 }; -struct mlx4_irq_notify { - void *arg; - struct irq_affinity_notify notify; -}; - #define MLX4_EQ_STATUS_OK ( 0 << 28) #define MLX4_EQ_STATUS_WRITE_FAIL (10 << 28) #define MLX4_EQ_OWNER_SW ( 0 << 24) @@ -1088,57 +1083,6 @@ static void mlx4_unmap_clr_int(struct mlx4_dev *dev) iounmap(priv->clr_base); } -static void mlx4_irq_notifier_notify(struct irq_affinity_notify *notify, - const cpumask_t *mask) -{ - struct mlx4_irq_notify *n = container_of(notify, - struct mlx4_irq_notify, - notify); - struct mlx4_priv *priv = (struct mlx4_priv *)n->arg; - struct radix_tree_iter iter; - void **slot; - - radix_tree_for_each_slot(slot, &priv->cq_table.tree, &iter, 0) { - struct mlx4_cq *cq = (struct mlx4_cq *)(*slot); - - if (cq->irq == notify->irq) - cq->irq_affinity_change = true; - } -} - -static void mlx4_release_irq_notifier(struct kref *ref) -{ - struct mlx4_irq_notify *n = container_of(ref, struct mlx4_irq_notify, - notify.kref); - kfree(n); -} - -static void mlx4_assign_irq_notifier(struct mlx4_priv *priv, - struct mlx4_dev *dev, int irq) -{ - struct mlx4_irq_notify *irq_notifier = NULL; - int err = 0; - - irq_notifier = kzalloc(sizeof(*irq_notifier), GFP_KERNEL); - if (!irq_notifier) { - mlx4_warn(dev, "Failed to allocate irq notifier. irq %d\n", - irq); - return; - } - - irq_notifier->notify.irq = irq; - irq_notifier->notify.notify = mlx4_irq_notifier_notify; - irq_notifier->notify.release = mlx4_release_irq_notifier; - irq_notifier->arg = priv; - err = irq_set_affinity_notifier(irq, &irq_notifier->notify); - if (err) { - kfree(irq_notifier); - irq_notifier = NULL; - mlx4_warn(dev, "Failed to set irq notifier. 
irq %d\n", irq); - } -} - - int mlx4_alloc_eq_table(struct mlx4_dev *dev) { struct mlx4_priv *priv = mlx4_priv(dev); @@ -1409,8 +1353,6 @@ int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap, continue; /*we dont want to break here*/ } - mlx4_assign_irq_notifier(priv, dev, - priv->eq_table.eq[vec].irq); eq_set_ci(&priv->eq_table.eq[vec], 1); } @@ -1427,6 +1369,14 @@ int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap, } EXPORT_SYMBOL(mlx4_assign_eq); +int mlx4_eq_get_irq(struct mlx4_dev *dev, int vec) +{ + struct mlx4_priv *priv = mlx4_priv(dev); + + return priv->eq_table.eq[vec].irq; +} +EXPORT_SYMBOL(mlx4_eq_get_irq); + void mlx4_release_eq(struct mlx4_dev *dev, int vec) { struct mlx4_priv *priv = mlx4_priv(dev); @@ -1438,9 +1388,6 @@ void mlx4_release_eq(struct mlx4_dev *dev, int vec) Belonging to a legacy EQ*/ mutex_lock(&priv->msix_ctl.pool_lock); if (priv->msix_ctl.pool_bm & 1ULL << i) { - irq_set_affinity_notifier( - priv->eq_table.eq[vec].irq, - NULL); free_irq(priv->eq_table.eq[vec].irq, &priv->eq_table.eq[vec]); priv->msix_ctl.pool_bm &= ~(1ULL << i); diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h index 0e15295bedd6..d72a5a894fc6 100644 --- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h @@ -126,6 +126,8 @@ enum { #define MAX_TX_RINGS (MLX4_EN_MAX_TX_RING_P_UP * \ MLX4_EN_NUM_UP) +#define MLX4_EN_DEFAULT_TX_WORK 256 + /* Target number of packets to coalesce with interrupt moderation */ #define MLX4_EN_RX_COAL_TARGET 44 #define MLX4_EN_RX_COAL_TIME 0x10 @@ -343,6 +345,7 @@ struct mlx4_en_cq { #define CQ_USER_PEND (MLX4_EN_CQ_STATE_POLL | MLX4_EN_CQ_STATE_POLL_YIELD) spinlock_t poll_lock; /* protects from LLS/napi conflicts */ #endif /* CONFIG_NET_RX_BUSY_POLL */ + struct irq_desc *irq_desc; }; struct mlx4_en_port_profile { @@ -542,6 +545,7 @@ struct mlx4_en_priv { __be32 ctrl_flags; u32 flags; u8 num_tx_rings_p_up; + u32 tx_work_limit; u32 tx_ring_num; u32 rx_ring_num; u32 rx_skb_size; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mr.c b/drivers/net/ethernet/mellanox/mlx5/core/mr.c index ba0401d4af50..184c3615f479 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/mr.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/mr.c @@ -94,6 +94,11 @@ int mlx5_core_create_mkey(struct mlx5_core_dev *dev, struct mlx5_core_mr *mr, write_lock_irq(&table->lock); err = radix_tree_insert(&table->tree, mlx5_base_mkey(mr->key), mr); write_unlock_irq(&table->lock); + if (err) { + mlx5_core_warn(dev, "failed radix tree insert of mr 0x%x, %d\n", + mlx5_base_mkey(mr->key), err); + mlx5_core_destroy_mkey(dev, mr); + } return err; } @@ -104,12 +109,22 @@ int mlx5_core_destroy_mkey(struct mlx5_core_dev *dev, struct mlx5_core_mr *mr) struct mlx5_mr_table *table = &dev->priv.mr_table; struct mlx5_destroy_mkey_mbox_in in; struct mlx5_destroy_mkey_mbox_out out; + struct mlx5_core_mr *deleted_mr; unsigned long flags; int err; memset(&in, 0, sizeof(in)); memset(&out, 0, sizeof(out)); + write_lock_irqsave(&table->lock, flags); + deleted_mr = radix_tree_delete(&table->tree, mlx5_base_mkey(mr->key)); + write_unlock_irqrestore(&table->lock, flags); + if (!deleted_mr) { + mlx5_core_warn(dev, "failed radix tree delete of mr 0x%x\n", + mlx5_base_mkey(mr->key)); + return -ENOENT; + } + in.hdr.opcode = cpu_to_be16(MLX5_CMD_OP_DESTROY_MKEY); in.mkey = cpu_to_be32(mlx5_mkey_to_idx(mr->key)); err = mlx5_cmd_exec(dev, &in, sizeof(in), &out, sizeof(out)); @@ -119,10 
+134,6 @@ int mlx5_core_destroy_mkey(struct mlx5_core_dev *dev, struct mlx5_core_mr *mr) if (out.hdr.status) return mlx5_cmd_status_to_err(&out.hdr); - write_lock_irqsave(&table->lock, flags); - radix_tree_delete(&table->tree, mlx5_base_mkey(mr->key)); - write_unlock_irqrestore(&table->lock, flags); - return err; } EXPORT_SYMBOL(mlx5_core_destroy_mkey); diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c index be425ad5e824..06bdc31a828d 100644 --- a/drivers/net/ethernet/realtek/r8169.c +++ b/drivers/net/ethernet/realtek/r8169.c @@ -538,6 +538,7 @@ enum rtl_register_content { MagicPacket = (1 << 5), /* Wake up when receives a Magic Packet */ LinkUp = (1 << 4), /* Wake up when the cable connection is re-established */ Jumbo_En0 = (1 << 2), /* 8168 only. Reserved in the 8168b */ + Rdy_to_L23 = (1 << 1), /* L23 Enable */ Beacon_en = (1 << 0), /* 8168 only. Reserved in the 8168b */ /* Config4 register */ @@ -4897,6 +4898,21 @@ static void rtl_enable_clock_request(struct pci_dev *pdev) PCI_EXP_LNKCTL_CLKREQ_EN); } +static void rtl_pcie_state_l2l3_enable(struct rtl8169_private *tp, bool enable) +{ + void __iomem *ioaddr = tp->mmio_addr; + u8 data; + + data = RTL_R8(Config3); + + if (enable) + data |= Rdy_to_L23; + else + data &= ~Rdy_to_L23; + + RTL_W8(Config3, data); +} + #define R8168_CPCMD_QUIRK_MASK (\ EnableBist | \ Mac_dbgo_oe | \ @@ -5246,6 +5262,7 @@ static void rtl_hw_start_8411(struct rtl8169_private *tp) }; rtl_hw_start_8168f(tp); + rtl_pcie_state_l2l3_enable(tp, false); rtl_ephy_init(tp, e_info_8168f_1, ARRAY_SIZE(e_info_8168f_1)); @@ -5284,6 +5301,8 @@ static void rtl_hw_start_8168g_1(struct rtl8169_private *tp) rtl_w1w0_eri(tp, 0x2fc, ERIAR_MASK_0001, 0x01, 0x06, ERIAR_EXGMAC); rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_0011, 0x0000, 0x1000, ERIAR_EXGMAC); + + rtl_pcie_state_l2l3_enable(tp, false); } static void rtl_hw_start_8168g_2(struct rtl8169_private *tp) @@ -5536,6 +5555,8 @@ static void rtl_hw_start_8105e_1(struct rtl8169_private *tp) RTL_W8(DLLPR, RTL_R8(DLLPR) | PFM_EN); rtl_ephy_init(tp, e_info_8105e_1, ARRAY_SIZE(e_info_8105e_1)); + + rtl_pcie_state_l2l3_enable(tp, false); } static void rtl_hw_start_8105e_2(struct rtl8169_private *tp) @@ -5571,6 +5592,8 @@ static void rtl_hw_start_8402(struct rtl8169_private *tp) rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC); rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0e00, 0xff00, ERIAR_EXGMAC); + + rtl_pcie_state_l2l3_enable(tp, false); } static void rtl_hw_start_8106(struct rtl8169_private *tp) @@ -5583,6 +5606,8 @@ static void rtl_hw_start_8106(struct rtl8169_private *tp) RTL_W32(MISC, (RTL_R32(MISC) | DISABLE_LAN_EN) & ~EARLY_TALLY_EN); RTL_W8(MCU, RTL_R8(MCU) | EN_NDP | EN_OOB_RESET); RTL_W8(DLLPR, RTL_R8(DLLPR) & ~PFM_EN); + + rtl_pcie_state_l2l3_enable(tp, false); } static void rtl_hw_start_8101(struct net_device *dev) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c index b3e148ef5683..9d3748361a1e 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c @@ -320,11 +320,8 @@ static void dwmac1000_set_eee_timer(void __iomem *ioaddr, int ls, int tw) static void dwmac1000_ctrl_ane(void __iomem *ioaddr, bool restart) { - u32 value; - - value = readl(ioaddr + GMAC_AN_CTRL); /* auto negotiation enable and External Loopback enable */ - value = GMAC_AN_CTRL_ANE | GMAC_AN_CTRL_ELE; + u32 value 
= GMAC_AN_CTRL_ANE | GMAC_AN_CTRL_ELE; if (restart) value |= GMAC_AN_CTRL_RAN; diff --git a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c index 7e6628a91514..1e2bcf5f89e1 100644 --- a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c +++ b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c @@ -145,7 +145,7 @@ static void enh_desc_get_ext_status(void *data, struct stmmac_extra_stats *x, x->rx_msg_type_delay_req++; else if (p->des4.erx.msg_type == RDES_EXT_DELAY_RESP) x->rx_msg_type_delay_resp++; - else if (p->des4.erx.msg_type == RDES_EXT_DELAY_REQ) + else if (p->des4.erx.msg_type == RDES_EXT_PDELAY_REQ) x->rx_msg_type_pdelay_req++; else if (p->des4.erx.msg_type == RDES_EXT_PDELAY_RESP) x->rx_msg_type_pdelay_resp++; diff --git a/drivers/net/fddi/defxx.c b/drivers/net/fddi/defxx.c index eb78203cd58e..2aa57270838f 100644 --- a/drivers/net/fddi/defxx.c +++ b/drivers/net/fddi/defxx.c @@ -291,7 +291,11 @@ static int dfx_hw_dma_uninit(DFX_board_t *bp, PI_UINT32 type); static int dfx_rcv_init(DFX_board_t *bp, int get_buffers); static void dfx_rcv_queue_process(DFX_board_t *bp); +#ifdef DYNAMIC_BUFFERS static void dfx_rcv_flush(DFX_board_t *bp); +#else +static inline void dfx_rcv_flush(DFX_board_t *bp) {} +#endif static netdev_tx_t dfx_xmt_queue_pkt(struct sk_buff *skb, struct net_device *dev); @@ -2849,7 +2853,7 @@ static int dfx_hw_dma_uninit(DFX_board_t *bp, PI_UINT32 type) * Align an sk_buff to a boundary power of 2 * */ - +#ifdef DYNAMIC_BUFFERS static void my_skb_align(struct sk_buff *skb, int n) { unsigned long x = (unsigned long)skb->data; @@ -2859,7 +2863,7 @@ static void my_skb_align(struct sk_buff *skb, int n) skb_reserve(skb, v - x); } - +#endif /* * ================ @@ -3074,10 +3078,7 @@ static void dfx_rcv_queue_process( break; } else { -#ifndef DYNAMIC_BUFFERS - if (! rx_in_place) -#endif - { + if (!rx_in_place) { /* Receive buffer allocated, pass receive packet up */ skb_copy_to_linear_data(skb, @@ -3453,10 +3454,6 @@ static void dfx_rcv_flush( DFX_board_t *bp ) } } -#else -static inline void dfx_rcv_flush( DFX_board_t *bp ) -{ -} #endif /* DYNAMIC_BUFFERS */ /* diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c index 6a999e6814a0..9408157a246c 100644 --- a/drivers/net/phy/dp83640.c +++ b/drivers/net/phy/dp83640.c @@ -1323,15 +1323,15 @@ static bool dp83640_rxtstamp(struct phy_device *phydev, { struct dp83640_private *dp83640 = phydev->priv; - if (!dp83640->hwts_rx_en) - return false; - if (is_status_frame(skb, type)) { decode_status_frame(dp83640, skb); kfree_skb(skb); return true; } + if (!dp83640->hwts_rx_en) + return false; + SKB_PTP_TYPE(skb) = type; skb_queue_tail(&dp83640->rx_queue, skb); schedule_work(&dp83640->ts_work); diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c index 2e58aa54484c..4eaadcfcb0fe 100644 --- a/drivers/net/phy/mdio_bus.c +++ b/drivers/net/phy/mdio_bus.c @@ -187,6 +187,50 @@ struct mii_bus *of_mdio_find_bus(struct device_node *mdio_bus_np) return d ? to_mii_bus(d) : NULL; } EXPORT_SYMBOL(of_mdio_find_bus); + +/* Walk the list of subnodes of a mdio bus and look for a node that matches the + * phy's address with its 'reg' property. If found, set the of_node pointer for + * the phy. This allows auto-probed pyh devices to be supplied with information + * passed in via DT. 
+ */ +static void of_mdiobus_link_phydev(struct mii_bus *mdio, + struct phy_device *phydev) +{ + struct device *dev = &phydev->dev; + struct device_node *child; + + if (dev->of_node || !mdio->dev.of_node) + return; + + for_each_available_child_of_node(mdio->dev.of_node, child) { + int addr; + int ret; + + ret = of_property_read_u32(child, "reg", &addr); + if (ret < 0) { + dev_err(dev, "%s has invalid PHY address\n", + child->full_name); + continue; + } + + /* A PHY must have a reg property in the range [0-31] */ + if (addr >= PHY_MAX_ADDR) { + dev_err(dev, "%s PHY address %i is too large\n", + child->full_name, addr); + continue; + } + + if (addr == phydev->addr) { + dev->of_node = child; + return; + } + } +} +#else /* !IS_ENABLED(CONFIG_OF_MDIO) */ +static inline void of_mdiobus_link_phydev(struct mii_bus *mdio, + struct phy_device *phydev) +{ +} #endif /** diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c index 91d6c1272fcf..e2f20f807de8 100644 --- a/drivers/net/ppp/ppp_generic.c +++ b/drivers/net/ppp/ppp_generic.c @@ -539,7 +539,7 @@ static int get_filter(void __user *arg, struct sock_filter **p) { struct sock_fprog uprog; struct sock_filter *code = NULL; - int len, err; + int len; if (copy_from_user(&uprog, arg, sizeof(uprog))) return -EFAULT; @@ -554,12 +554,6 @@ static int get_filter(void __user *arg, struct sock_filter **p) if (IS_ERR(code)) return PTR_ERR(code); - err = sk_chk_filter(code, uprog.len); - if (err) { - kfree(code); - return err; - } - *p = code; return uprog.len; } diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c index 2ea7efd11857..6c9c16d76935 100644 --- a/drivers/net/ppp/pppoe.c +++ b/drivers/net/ppp/pppoe.c @@ -675,7 +675,7 @@ static int pppoe_connect(struct socket *sock, struct sockaddr *uservaddr, po->chan.hdrlen = (sizeof(struct pppoe_hdr) + dev->hard_header_len); - po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr); + po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr) - 2; po->chan.private = sk; po->chan.ops = &pppoe_chan_ops; diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c index a3a05869309d..a4272ed62da8 100644 --- a/drivers/net/usb/hso.c +++ b/drivers/net/usb/hso.c @@ -258,10 +258,8 @@ struct hso_serial { * so as not to drop characters on the floor. 
*/ int curr_rx_urb_idx; - u16 curr_rx_urb_offset; u8 rx_urb_filled[MAX_RX_URBS]; struct tasklet_struct unthrottle_tasklet; - struct work_struct retry_unthrottle_workqueue; }; struct hso_device { @@ -1252,14 +1250,6 @@ static void hso_unthrottle(struct tty_struct *tty) tasklet_hi_schedule(&serial->unthrottle_tasklet); } -static void hso_unthrottle_workfunc(struct work_struct *work) -{ - struct hso_serial *serial = - container_of(work, struct hso_serial, - retry_unthrottle_workqueue); - hso_unthrottle_tasklet(serial); -} - /* open the requested serial port */ static int hso_serial_open(struct tty_struct *tty, struct file *filp) { @@ -1295,8 +1285,6 @@ static int hso_serial_open(struct tty_struct *tty, struct file *filp) tasklet_init(&serial->unthrottle_tasklet, (void (*)(unsigned long))hso_unthrottle_tasklet, (unsigned long)serial); - INIT_WORK(&serial->retry_unthrottle_workqueue, - hso_unthrottle_workfunc); result = hso_start_serial_device(serial->parent, GFP_KERNEL); if (result) { hso_stop_serial_device(serial->parent); @@ -1345,7 +1333,6 @@ static void hso_serial_close(struct tty_struct *tty, struct file *filp) if (!usb_gone) hso_stop_serial_device(serial->parent); tasklet_kill(&serial->unthrottle_tasklet); - cancel_work_sync(&serial->retry_unthrottle_workqueue); } if (!usb_gone) @@ -2013,8 +2000,7 @@ static void ctrl_callback(struct urb *urb) static int put_rxbuf_data(struct urb *urb, struct hso_serial *serial) { struct tty_struct *tty; - int write_length_remaining = 0; - int curr_write_len; + int count; /* Sanity check */ if (urb == NULL || serial == NULL) { @@ -2024,29 +2010,28 @@ static int put_rxbuf_data(struct urb *urb, struct hso_serial *serial) tty = tty_port_tty_get(&serial->port); + if (tty && test_bit(TTY_THROTTLED, &tty->flags)) { + tty_kref_put(tty); + return -1; + } + /* Push data to tty */ - write_length_remaining = urb->actual_length - - serial->curr_rx_urb_offset; D1("data to push to tty"); - while (write_length_remaining) { - if (tty && test_bit(TTY_THROTTLED, &tty->flags)) { - tty_kref_put(tty); - return -1; - } - curr_write_len = tty_insert_flip_string(&serial->port, - urb->transfer_buffer + serial->curr_rx_urb_offset, - write_length_remaining); - serial->curr_rx_urb_offset += curr_write_len; - write_length_remaining -= curr_write_len; + count = tty_buffer_request_room(&serial->port, urb->actual_length); + if (count >= urb->actual_length) { + tty_insert_flip_string(&serial->port, urb->transfer_buffer, + urb->actual_length); tty_flip_buffer_push(&serial->port); + } else { + dev_warn(&serial->parent->usb->dev, + "dropping data, %d bytes lost\n", urb->actual_length); } + tty_kref_put(tty); - if (write_length_remaining == 0) { - serial->curr_rx_urb_offset = 0; - serial->rx_urb_filled[hso_urb_to_index(serial, urb)] = 0; - } - return write_length_remaining; + serial->rx_urb_filled[hso_urb_to_index(serial, urb)] = 0; + + return 0; } @@ -2217,7 +2202,6 @@ static int hso_stop_serial_device(struct hso_device *hso_dev) } } serial->curr_rx_urb_idx = 0; - serial->curr_rx_urb_offset = 0; if (serial->tx_urb) usb_kill_urb(serial->tx_urb); diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c index cf62d7e8329f..c4638c67f6b9 100644 --- a/drivers/net/usb/qmi_wwan.c +++ b/drivers/net/usb/qmi_wwan.c @@ -741,6 +741,7 @@ static const struct usb_device_id products[] = { {QMI_FIXED_INTF(0x19d2, 0x1424, 2)}, {QMI_FIXED_INTF(0x19d2, 0x1425, 2)}, {QMI_FIXED_INTF(0x19d2, 0x1426, 2)}, /* ZTE MF91 */ + {QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */ 
{QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */ {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index 25431965a625..7bad2d316637 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -1359,7 +1359,7 @@ static void r8152_csum_workaround(struct r8152 *tp, struct sk_buff *skb, struct sk_buff_head seg_list; struct sk_buff *segs, *nskb; - features &= ~(NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO); + features &= ~(NETIF_F_SG | NETIF_F_IPV6_CSUM | NETIF_F_TSO6); segs = skb_gso_segment(skb, features); if (IS_ERR(segs) || !segs) goto drop; @@ -3204,8 +3204,13 @@ static void rtl8152_get_ethtool_stats(struct net_device *dev, struct r8152 *tp = netdev_priv(dev); struct tally_counter tally; + if (usb_autopm_get_interface(tp->intf) < 0) + return; + generic_ocp_read(tp, PLA_TALLYCNT, sizeof(tally), &tally, MCU_TYPE_PLA); + usb_autopm_put_interface(tp->intf); + data[0] = le64_to_cpu(tally.tx_packets); data[1] = le64_to_cpu(tally.rx_packets); data[2] = le64_to_cpu(tally.tx_errors); diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c index 424db65e4396..d07bf4cb893f 100644 --- a/drivers/net/usb/smsc95xx.c +++ b/drivers/net/usb/smsc95xx.c @@ -1714,6 +1714,18 @@ static int smsc95xx_resume(struct usb_interface *intf) return ret; } +static int smsc95xx_reset_resume(struct usb_interface *intf) +{ + struct usbnet *dev = usb_get_intfdata(intf); + int ret; + + ret = smsc95xx_reset(dev); + if (ret < 0) + return ret; + + return smsc95xx_resume(intf); +} + static void smsc95xx_rx_csum_offload(struct sk_buff *skb) { skb->csum = *(u16 *)(skb_tail_pointer(skb) - 2); @@ -2004,7 +2016,7 @@ static struct usb_driver smsc95xx_driver = { .probe = usbnet_probe, .suspend = smsc95xx_suspend, .resume = smsc95xx_resume, - .reset_resume = smsc95xx_resume, + .reset_resume = smsc95xx_reset_resume, .disconnect = usbnet_disconnect, .disable_hub_initiated_lpm = 1, .supports_autosuspend = 1, diff --git a/drivers/net/wan/farsync.c b/drivers/net/wan/farsync.c index 93ace042d0aa..1f041271f7fe 100644 --- a/drivers/net/wan/farsync.c +++ b/drivers/net/wan/farsync.c @@ -2363,7 +2363,7 @@ static char *type_strings[] = { "FarSync TE1" }; -static void +static int fst_init_card(struct fst_card_info *card) { int i; @@ -2374,24 +2374,21 @@ fst_init_card(struct fst_card_info *card) * we'll have to revise it in some way then. */ for (i = 0; i < card->nports; i++) { - err = register_hdlc_device(card->ports[i].dev); - if (err < 0) { - int j; + err = register_hdlc_device(card->ports[i].dev); + if (err < 0) { pr_err("Cannot register HDLC device for port %d (errno %d)\n", - i, -err); - for (j = i; j < card->nports; j++) { - free_netdev(card->ports[j].dev); - card->ports[j].dev = NULL; - } - card->nports = i; - break; - } + i, -err); + while (i--) + unregister_hdlc_device(card->ports[i].dev); + return err; + } } pr_info("%s-%s: %s IRQ%d, %d ports\n", port_to_dev(&card->ports[0])->name, port_to_dev(&card->ports[card->nports - 1])->name, type_strings[card->type], card->irq, card->nports); + return 0; } static const struct net_device_ops fst_ops = { @@ -2447,15 +2444,12 @@ fst_add_one(struct pci_dev *pdev, const struct pci_device_id *ent) /* Try to enable the device */ if ((err = pci_enable_device(pdev)) != 0) { pr_err("Failed to enable card. 
Err %d\n", -err); - kfree(card); - return err; + goto enable_fail; } if ((err = pci_request_regions(pdev, "FarSync")) !=0) { pr_err("Failed to allocate regions. Err %d\n", -err); - pci_disable_device(pdev); - kfree(card); - return err; + goto regions_fail; } /* Get virtual addresses of memory regions */ @@ -2464,30 +2458,21 @@ fst_add_one(struct pci_dev *pdev, const struct pci_device_id *ent) card->phys_ctlmem = pci_resource_start(pdev, 3); if ((card->mem = ioremap(card->phys_mem, FST_MEMSIZE)) == NULL) { pr_err("Physical memory remap failed\n"); - pci_release_regions(pdev); - pci_disable_device(pdev); - kfree(card); - return -ENODEV; + err = -ENODEV; + goto ioremap_physmem_fail; } if ((card->ctlmem = ioremap(card->phys_ctlmem, 0x10)) == NULL) { pr_err("Control memory remap failed\n"); - pci_release_regions(pdev); - pci_disable_device(pdev); - iounmap(card->mem); - kfree(card); - return -ENODEV; + err = -ENODEV; + goto ioremap_ctlmem_fail; } dbg(DBG_PCI, "kernel mem %p, ctlmem %p\n", card->mem, card->ctlmem); /* Register the interrupt handler */ if (request_irq(pdev->irq, fst_intr, IRQF_SHARED, FST_DEV_NAME, card)) { pr_err("Unable to register interrupt %d\n", card->irq); - pci_release_regions(pdev); - pci_disable_device(pdev); - iounmap(card->ctlmem); - iounmap(card->mem); - kfree(card); - return -ENODEV; + err = -ENODEV; + goto irq_fail; } /* Record info we need */ @@ -2513,13 +2498,8 @@ fst_add_one(struct pci_dev *pdev, const struct pci_device_id *ent) while (i--) free_netdev(card->ports[i].dev); pr_err("FarSync: out of memory\n"); - free_irq(card->irq, card); - pci_release_regions(pdev); - pci_disable_device(pdev); - iounmap(card->ctlmem); - iounmap(card->mem); - kfree(card); - return -ENODEV; + err = -ENOMEM; + goto hdlcdev_fail; } card->ports[i].dev = dev; card->ports[i].card = card; @@ -2565,9 +2545,16 @@ fst_add_one(struct pci_dev *pdev, const struct pci_device_id *ent) pci_set_drvdata(pdev, card); /* Remainder of card setup */ + if (no_of_cards_added >= FST_MAX_CARDS) { + pr_err("FarSync: too many cards\n"); + err = -ENOMEM; + goto card_array_fail; + } fst_card_array[no_of_cards_added] = card; card->card_no = no_of_cards_added++; /* Record instance and bump it */ - fst_init_card(card); + err = fst_init_card(card); + if (err) + goto init_card_fail; if (card->family == FST_FAMILY_TXU) { /* * Allocate a dma buffer for transmit and receives @@ -2577,29 +2564,46 @@ fst_add_one(struct pci_dev *pdev, const struct pci_device_id *ent) &card->rx_dma_handle_card); if (card->rx_dma_handle_host == NULL) { pr_err("Could not allocate rx dma buffer\n"); - fst_disable_intr(card); - pci_release_regions(pdev); - pci_disable_device(pdev); - iounmap(card->ctlmem); - iounmap(card->mem); - kfree(card); - return -ENOMEM; + err = -ENOMEM; + goto rx_dma_fail; } card->tx_dma_handle_host = pci_alloc_consistent(card->device, FST_MAX_MTU, &card->tx_dma_handle_card); if (card->tx_dma_handle_host == NULL) { pr_err("Could not allocate tx dma buffer\n"); - fst_disable_intr(card); - pci_release_regions(pdev); - pci_disable_device(pdev); - iounmap(card->ctlmem); - iounmap(card->mem); - kfree(card); - return -ENOMEM; + err = -ENOMEM; + goto tx_dma_fail; } } return 0; /* Success */ + +tx_dma_fail: + pci_free_consistent(card->device, FST_MAX_MTU, + card->rx_dma_handle_host, + card->rx_dma_handle_card); +rx_dma_fail: + fst_disable_intr(card); + for (i = 0 ; i < card->nports ; i++) + unregister_hdlc_device(card->ports[i].dev); +init_card_fail: + fst_card_array[card->card_no] = NULL; +card_array_fail: + for (i = 0 ; i < 
card->nports ; i++) + free_netdev(card->ports[i].dev); +hdlcdev_fail: + free_irq(card->irq, card); +irq_fail: + iounmap(card->ctlmem); +ioremap_ctlmem_fail: + iounmap(card->mem); +ioremap_physmem_fail: + pci_release_regions(pdev); +regions_fail: + pci_disable_device(pdev); +enable_fail: + kfree(card); + return err; } /* diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c index 82017f56e661..e6c56c5bb0f6 100644 --- a/drivers/net/wireless/ath/ath10k/core.c +++ b/drivers/net/wireless/ath/ath10k/core.c @@ -795,7 +795,11 @@ int ath10k_core_start(struct ath10k *ar) if (status) goto err_htc_stop; - ar->free_vdev_map = (1 << TARGET_NUM_VDEVS) - 1; + if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features)) + ar->free_vdev_map = (1 << TARGET_10X_NUM_VDEVS) - 1; + else + ar->free_vdev_map = (1 << TARGET_NUM_VDEVS) - 1; + INIT_LIST_HEAD(&ar->arvifs); if (!test_bit(ATH10K_FLAG_FIRST_BOOT_DONE, &ar->dev_flags)) diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c index 6c102b1312ff..eebc860c3655 100644 --- a/drivers/net/wireless/ath/ath10k/htt_rx.c +++ b/drivers/net/wireless/ath/ath10k/htt_rx.c @@ -312,7 +312,6 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt, int msdu_len, msdu_chaining = 0; struct sk_buff *msdu; struct htt_rx_desc *rx_desc; - bool corrupted = false; lockdep_assert_held(&htt->rx_ring.lock); @@ -439,9 +438,6 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt, last_msdu = __le32_to_cpu(rx_desc->msdu_end.info0) & RX_MSDU_END_INFO0_LAST_MSDU; - if (msdu_chaining && !last_msdu) - corrupted = true; - if (last_msdu) { msdu->next = NULL; break; @@ -457,20 +453,6 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt, msdu_chaining = -1; /* - * Apparently FW sometimes reports weird chained MSDU sequences with - * more than one rx descriptor. This seems like a bug but needs more - * analyzing. For the time being fix it by dropping such sequences to - * avoid blowing up the host system. - */ - if (corrupted) { - ath10k_warn("failed to pop chained msdus, dropping\n"); - ath10k_htt_rx_free_msdu_chain(*head_msdu); - *head_msdu = NULL; - *tail_msdu = NULL; - msdu_chaining = -EINVAL; - } - - /* * Don't refill the ring yet. 
* * First, the elements popped here are still in use - it is not diff --git a/drivers/net/wireless/brcm80211/brcmfmac/usb.c b/drivers/net/wireless/brcm80211/brcmfmac/usb.c index 6db51a666f61..d06fcb05adf2 100644 --- a/drivers/net/wireless/brcm80211/brcmfmac/usb.c +++ b/drivers/net/wireless/brcm80211/brcmfmac/usb.c @@ -1184,8 +1184,6 @@ static int brcmf_usb_probe_cb(struct brcmf_usbdev_info *devinfo) bus->bus_priv.usb = bus_pub; dev_set_drvdata(dev, bus); bus->ops = &brcmf_usb_bus_ops; - bus->chip = bus_pub->devid; - bus->chiprev = bus_pub->chiprev; bus->proto_type = BRCMF_PROTO_BCDC; bus->always_use_fws_queue = true; @@ -1194,6 +1192,9 @@ static int brcmf_usb_probe_cb(struct brcmf_usbdev_info *devinfo) if (ret) goto fail; } + bus->chip = bus_pub->devid; + bus->chiprev = bus_pub->chiprev; + /* request firmware here */ brcmf_fw_get_firmwares(dev, 0, brcmf_usb_get_fwname(devinfo), NULL, brcmf_usb_probe_phase2); diff --git a/drivers/net/wireless/iwlwifi/dvm/rxon.c b/drivers/net/wireless/iwlwifi/dvm/rxon.c index ed50de6362ed..6dc5dd3ced44 100644 --- a/drivers/net/wireless/iwlwifi/dvm/rxon.c +++ b/drivers/net/wireless/iwlwifi/dvm/rxon.c @@ -1068,13 +1068,6 @@ int iwlagn_commit_rxon(struct iwl_priv *priv, struct iwl_rxon_context *ctx) /* recalculate basic rates */ iwl_calc_basic_rates(priv, ctx); - /* - * force CTS-to-self frames protection if RTS-CTS is not preferred - * one aggregation protection method - */ - if (!priv->hw_params.use_rts_for_aggregation) - ctx->staging.flags |= RXON_FLG_SELF_CTS_EN; - if ((ctx->vif && ctx->vif->bss_conf.use_short_slot) || !(ctx->staging.flags & RXON_FLG_BAND_24G_MSK)) ctx->staging.flags |= RXON_FLG_SHORT_SLOT_MSK; @@ -1480,11 +1473,6 @@ void iwlagn_bss_info_changed(struct ieee80211_hw *hw, else ctx->staging.flags &= ~RXON_FLG_TGG_PROTECT_MSK; - if (bss_conf->use_cts_prot) - ctx->staging.flags |= RXON_FLG_SELF_CTS_EN; - else - ctx->staging.flags &= ~RXON_FLG_SELF_CTS_EN; - memcpy(ctx->staging.bssid_addr, bss_conf->bssid, ETH_ALEN); if (vif->type == NL80211_IFTYPE_AP || diff --git a/drivers/net/wireless/iwlwifi/iwl-fw.h b/drivers/net/wireless/iwlwifi/iwl-fw.h index 0aa7c0085c9f..b1a33322b9ba 100644 --- a/drivers/net/wireless/iwlwifi/iwl-fw.h +++ b/drivers/net/wireless/iwlwifi/iwl-fw.h @@ -88,6 +88,7 @@ * P2P client interfaces simultaneously if they are in different bindings. * @IWL_UCODE_TLV_FLAGS_P2P_BSS_PS_SCM: support power save on BSS station and * P2P client interfaces simultaneously if they are in same bindings. + * @IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT: General support for uAPSD * @IWL_UCODE_TLV_FLAGS_P2P_PS_UAPSD: P2P client supports uAPSD power save * @IWL_UCODE_TLV_FLAGS_BCAST_FILTERING: uCode supports broadcast filtering. 
* @IWL_UCODE_TLV_FLAGS_GO_UAPSD: AP/GO interfaces support uAPSD clients diff --git a/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c index 8b5302777632..725ba49576bf 100644 --- a/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c +++ b/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c @@ -667,10 +667,9 @@ static void iwl_mvm_mac_ctxt_cmd_common(struct iwl_mvm *mvm, if (vif->bss_conf.qos) cmd->qos_flags |= cpu_to_le32(MAC_QOS_FLG_UPDATE_EDCA); - if (vif->bss_conf.use_cts_prot) { + if (vif->bss_conf.use_cts_prot) cmd->protection_flags |= cpu_to_le32(MAC_PROT_FLG_TGG_PROTECT); - cmd->protection_flags |= cpu_to_le32(MAC_PROT_FLG_SELF_CTS_EN); - } + IWL_DEBUG_RATE(mvm, "use_cts_prot %d, ht_operation_mode %d\n", vif->bss_conf.use_cts_prot, vif->bss_conf.ht_operation_mode); diff --git a/drivers/net/wireless/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/iwlwifi/mvm/mac80211.c index 7215f5980186..9bfb90680cdc 100644 --- a/drivers/net/wireless/iwlwifi/mvm/mac80211.c +++ b/drivers/net/wireless/iwlwifi/mvm/mac80211.c @@ -303,6 +303,13 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm) hw->uapsd_max_sp_len = IWL_UAPSD_MAX_SP; } + if (mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT && + !iwlwifi_mod_params.uapsd_disable) { + hw->flags |= IEEE80211_HW_SUPPORTS_UAPSD; + hw->uapsd_queues = IWL_UAPSD_AC_INFO; + hw->uapsd_max_sp_len = IWL_UAPSD_MAX_SP; + } + hw->sta_data_size = sizeof(struct iwl_mvm_sta); hw->vif_data_size = sizeof(struct iwl_mvm_vif); hw->chanctx_data_size = sizeof(u16); @@ -1159,8 +1166,12 @@ static void iwl_mvm_bcast_filter_iterator(void *_data, u8 *mac, bcast_mac = &cmd->macs[mvmvif->id]; - /* enable filtering only for associated stations */ - if (vif->type != NL80211_IFTYPE_STATION || !vif->bss_conf.assoc) + /* + * enable filtering only for associated stations, but not for P2P + * Clients + */ + if (vif->type != NL80211_IFTYPE_STATION || vif->p2p || + !vif->bss_conf.assoc) return; bcast_mac->default_discard = 1; @@ -1237,10 +1248,6 @@ static int iwl_mvm_configure_bcast_filter(struct iwl_mvm *mvm, if (!(mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_BCAST_FILTERING)) return 0; - /* bcast filtering isn't supported for P2P client */ - if (vif->p2p) - return 0; - if (!iwl_mvm_bcast_filter_build_cmd(mvm, &cmd)) return 0; diff --git a/drivers/net/wireless/iwlwifi/mvm/scan.c b/drivers/net/wireless/iwlwifi/mvm/scan.c index 4b6c7d4bd199..eac2b424f6a0 100644 --- a/drivers/net/wireless/iwlwifi/mvm/scan.c +++ b/drivers/net/wireless/iwlwifi/mvm/scan.c @@ -588,9 +588,7 @@ static void iwl_build_scan_cmd(struct iwl_mvm *mvm, struct iwl_scan_offload_cmd *scan, struct iwl_mvm_scan_params *params) { - scan->channel_count = - mvm->nvm_data->bands[IEEE80211_BAND_2GHZ].n_channels + - mvm->nvm_data->bands[IEEE80211_BAND_5GHZ].n_channels; + scan->channel_count = req->n_channels; scan->quiet_time = cpu_to_le16(IWL_ACTIVE_QUIET_TIME); scan->quiet_plcp_th = cpu_to_le16(IWL_PLCP_QUIET_THRESH); scan->good_CRC_th = IWL_GOOD_CRC_TH_DEFAULT; @@ -669,61 +667,37 @@ static void iwl_build_channel_cfg(struct iwl_mvm *mvm, struct cfg80211_sched_scan_request *req, struct iwl_scan_channel_cfg *channels, enum ieee80211_band band, - int *head, int *tail, + int *head, u32 ssid_bitmap, struct iwl_mvm_scan_params *params) { - struct ieee80211_supported_band *s_band; - int n_channels = req->n_channels; - int i, j, index = 0; - bool partial; + int i, index = 0; - /* - * We have to configure all supported channels, even if we don't want to - * scan on them, but we have to send channels 
in the order that we want - * to scan. So add requested channels to head of the list and others to - * the end. - */ - s_band = &mvm->nvm_data->bands[band]; - - for (i = 0; i < s_band->n_channels && *head <= *tail; i++) { - partial = false; - for (j = 0; j < n_channels; j++) - if (s_band->channels[i].center_freq == - req->channels[j]->center_freq) { - index = *head; - (*head)++; - /* - * Channels that came with the request will be - * in partial scan . - */ - partial = true; - break; - } - if (!partial) { - index = *tail; - (*tail)--; - } - channels->channel_number[index] = - cpu_to_le16(ieee80211_frequency_to_channel( - s_band->channels[i].center_freq)); + for (i = 0; i < req->n_channels; i++) { + struct ieee80211_channel *chan = req->channels[i]; + + if (chan->band != band) + continue; + + index = *head; + (*head)++; + + channels->channel_number[index] = cpu_to_le16(chan->hw_value); channels->dwell_time[index][0] = params->dwell[band].active; channels->dwell_time[index][1] = params->dwell[band].passive; channels->iter_count[index] = cpu_to_le16(1); channels->iter_interval[index] = 0; - if (!(s_band->channels[i].flags & IEEE80211_CHAN_NO_IR)) + if (!(chan->flags & IEEE80211_CHAN_NO_IR)) channels->type[index] |= cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_ACTIVE); channels->type[index] |= - cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_FULL); - if (partial) - channels->type[index] |= - cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_PARTIAL); + cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_FULL | + IWL_SCAN_OFFLOAD_CHANNEL_PARTIAL); - if (s_band->channels[i].flags & IEEE80211_CHAN_NO_HT40) + if (chan->flags & IEEE80211_CHAN_NO_HT40) channels->type[index] |= cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_NARROW); @@ -740,7 +714,6 @@ int iwl_mvm_config_sched_scan(struct iwl_mvm *mvm, int band_2ghz = mvm->nvm_data->bands[IEEE80211_BAND_2GHZ].n_channels; int band_5ghz = mvm->nvm_data->bands[IEEE80211_BAND_5GHZ].n_channels; int head = 0; - int tail = band_2ghz + band_5ghz - 1; u32 ssid_bitmap; int cmd_len; int ret; @@ -772,7 +745,7 @@ int iwl_mvm_config_sched_scan(struct iwl_mvm *mvm, &scan_cfg->scan_cmd.tx_cmd[0], scan_cfg->data); iwl_build_channel_cfg(mvm, req, &scan_cfg->channel_cfg, - IEEE80211_BAND_2GHZ, &head, &tail, + IEEE80211_BAND_2GHZ, &head, ssid_bitmap, ¶ms); } if (band_5ghz) { @@ -782,7 +755,7 @@ int iwl_mvm_config_sched_scan(struct iwl_mvm *mvm, scan_cfg->data + SCAN_OFFLOAD_PROBE_REQ_SIZE); iwl_build_channel_cfg(mvm, req, &scan_cfg->channel_cfg, - IEEE80211_BAND_5GHZ, &head, &tail, + IEEE80211_BAND_5GHZ, &head, ssid_bitmap, ¶ms); } diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c index 7091a18d5a72..98950e45c7b0 100644 --- a/drivers/net/wireless/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/iwlwifi/pcie/drv.c @@ -367,6 +367,7 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { {IWL_PCI_DEVICE(0x095A, 0x5012, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5412, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x5510, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5400, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x5000, iwl7265_2n_cfg)}, @@ -380,7 +381,7 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { {IWL_PCI_DEVICE(0x095A, 0x9110, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x9112, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)}, - {IWL_PCI_DEVICE(0x095A, 0x9200, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095B, 0x9200, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 
0x9510, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x9310, iwl7265_2ac_cfg)}, {IWL_PCI_DEVICE(0x095A, 0x9410, iwl7265_2ac_cfg)}, diff --git a/drivers/net/wireless/mwifiex/11n_aggr.c b/drivers/net/wireless/mwifiex/11n_aggr.c index 5b32106182f8..fe0f66f73507 100644 --- a/drivers/net/wireless/mwifiex/11n_aggr.c +++ b/drivers/net/wireless/mwifiex/11n_aggr.c @@ -185,6 +185,7 @@ mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv, skb_reserve(skb_aggr, headroom + sizeof(struct txpd)); tx_info_aggr = MWIFIEX_SKB_TXCB(skb_aggr); + memset(tx_info_aggr, 0, sizeof(*tx_info_aggr)); tx_info_aggr->bss_type = tx_info_src->bss_type; tx_info_aggr->bss_num = tx_info_src->bss_num; diff --git a/drivers/net/wireless/mwifiex/cfg80211.c b/drivers/net/wireless/mwifiex/cfg80211.c index e95dec91a561..b511613bba2d 100644 --- a/drivers/net/wireless/mwifiex/cfg80211.c +++ b/drivers/net/wireless/mwifiex/cfg80211.c @@ -220,6 +220,7 @@ mwifiex_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev, } tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; tx_info->pkt_len = pkt_len; diff --git a/drivers/net/wireless/mwifiex/cmdevt.c b/drivers/net/wireless/mwifiex/cmdevt.c index 8dee6c86f4f1..c161141f6c39 100644 --- a/drivers/net/wireless/mwifiex/cmdevt.c +++ b/drivers/net/wireless/mwifiex/cmdevt.c @@ -453,6 +453,7 @@ int mwifiex_process_event(struct mwifiex_adapter *adapter) if (skb) { rx_info = MWIFIEX_SKB_RXCB(skb); + memset(rx_info, 0, sizeof(*rx_info)); rx_info->bss_num = priv->bss_num; rx_info->bss_type = priv->bss_type; } diff --git a/drivers/net/wireless/mwifiex/main.c b/drivers/net/wireless/mwifiex/main.c index cbabc12fbda3..e91cd0fa5ca8 100644 --- a/drivers/net/wireless/mwifiex/main.c +++ b/drivers/net/wireless/mwifiex/main.c @@ -645,6 +645,7 @@ mwifiex_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) } tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; tx_info->pkt_len = skb->len; diff --git a/drivers/net/wireless/mwifiex/sta_tx.c b/drivers/net/wireless/mwifiex/sta_tx.c index 5fce7e78a36e..70eb863c7249 100644 --- a/drivers/net/wireless/mwifiex/sta_tx.c +++ b/drivers/net/wireless/mwifiex/sta_tx.c @@ -150,6 +150,7 @@ int mwifiex_send_null_packet(struct mwifiex_private *priv, u8 flags) return -1; tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; tx_info->pkt_len = data_len - (sizeof(struct txpd) + INTF_HEADER_LEN); diff --git a/drivers/net/wireless/mwifiex/tdls.c b/drivers/net/wireless/mwifiex/tdls.c index e73034fbbde9..0e88364e0c67 100644 --- a/drivers/net/wireless/mwifiex/tdls.c +++ b/drivers/net/wireless/mwifiex/tdls.c @@ -605,6 +605,7 @@ int mwifiex_send_tdls_data_frame(struct mwifiex_private *priv, const u8 *peer, } tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; @@ -760,6 +761,7 @@ int mwifiex_send_tdls_action_frame(struct mwifiex_private *priv, const u8 *peer, skb->priority = MWIFIEX_PRIO_VI; tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; tx_info->flags |= MWIFIEX_BUF_FLAG_TDLS_PKT; diff --git a/drivers/net/wireless/mwifiex/txrx.c b/drivers/net/wireless/mwifiex/txrx.c index 37f26afd4314..fd7e5b9b4581 100644 --- 
a/drivers/net/wireless/mwifiex/txrx.c +++ b/drivers/net/wireless/mwifiex/txrx.c @@ -55,6 +55,7 @@ int mwifiex_handle_rx_packet(struct mwifiex_adapter *adapter, return -1; } + memset(rx_info, 0, sizeof(*rx_info)); rx_info->bss_num = priv->bss_num; rx_info->bss_type = priv->bss_type; diff --git a/drivers/net/wireless/mwifiex/uap_txrx.c b/drivers/net/wireless/mwifiex/uap_txrx.c index 9a56bc61cb1d..b0601b91cc4f 100644 --- a/drivers/net/wireless/mwifiex/uap_txrx.c +++ b/drivers/net/wireless/mwifiex/uap_txrx.c @@ -175,6 +175,7 @@ static void mwifiex_uap_queue_bridged_pkt(struct mwifiex_private *priv, } tx_info = MWIFIEX_SKB_TXCB(skb); + memset(tx_info, 0, sizeof(*tx_info)); tx_info->bss_num = priv->bss_num; tx_info->bss_type = priv->bss_type; tx_info->flags |= MWIFIEX_BUF_FLAG_BRIDGED_PKT; diff --git a/drivers/net/wireless/rt2x00/rt2800usb.c b/drivers/net/wireless/rt2x00/rt2800usb.c index e11dab2216c6..832006b5aab1 100644 --- a/drivers/net/wireless/rt2x00/rt2800usb.c +++ b/drivers/net/wireless/rt2x00/rt2800usb.c @@ -231,9 +231,12 @@ static enum hrtimer_restart rt2800usb_tx_sta_fifo_timeout(struct hrtimer *timer) */ static int rt2800usb_autorun_detect(struct rt2x00_dev *rt2x00dev) { - __le32 reg; + __le32 *reg; u32 fw_mode; + reg = kmalloc(sizeof(*reg), GFP_KERNEL); + if (reg == NULL) + return -ENOMEM; /* cannot use rt2x00usb_register_read here as it uses different * mode (MULTI_READ vs. DEVICE_MODE) and does not pass the * magic value USB_MODE_AUTORUN (0x11) to the device, thus the @@ -241,8 +244,9 @@ static int rt2800usb_autorun_detect(struct rt2x00_dev *rt2x00dev) */ rt2x00usb_vendor_request(rt2x00dev, USB_DEVICE_MODE, USB_VENDOR_REQUEST_IN, 0, USB_MODE_AUTORUN, - ®, sizeof(reg), REGISTER_TIMEOUT_FIRMWARE); - fw_mode = le32_to_cpu(reg); + reg, sizeof(*reg), REGISTER_TIMEOUT_FIRMWARE); + fw_mode = le32_to_cpu(*reg); + kfree(reg); if ((fw_mode & 0x00000003) == 2) return 1; @@ -261,6 +265,7 @@ static int rt2800usb_write_firmware(struct rt2x00_dev *rt2x00dev, int status; u32 offset; u32 length; + int retval; /* * Check which section of the firmware we need. @@ -278,7 +283,10 @@ static int rt2800usb_write_firmware(struct rt2x00_dev *rt2x00dev, /* * Write firmware to device. 
*/ - if (rt2800usb_autorun_detect(rt2x00dev)) { + retval = rt2800usb_autorun_detect(rt2x00dev); + if (retval < 0) + return retval; + if (retval) { rt2x00_info(rt2x00dev, "Firmware loading not required - NIC in AutoRun mode\n"); } else { @@ -763,7 +771,12 @@ static void rt2800usb_fill_rxdone(struct queue_entry *entry, */ static int rt2800usb_efuse_detect(struct rt2x00_dev *rt2x00dev) { - if (rt2800usb_autorun_detect(rt2x00dev)) + int retval; + + retval = rt2800usb_autorun_detect(rt2x00dev); + if (retval < 0) + return retval; + if (retval) return 1; return rt2800_efuse_detect(rt2x00dev); } @@ -772,7 +785,10 @@ static int rt2800usb_read_eeprom(struct rt2x00_dev *rt2x00dev) { int retval; - if (rt2800usb_efuse_detect(rt2x00dev)) + retval = rt2800usb_efuse_detect(rt2x00dev); + if (retval < 0) + return retval; + if (retval) retval = rt2800_read_eeprom_efuse(rt2x00dev); else retval = rt2x00usb_eeprom_read(rt2x00dev, rt2x00dev->eeprom, diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index 2ccb4a02368b..055222bae6e4 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -1439,16 +1439,11 @@ static void xennet_disconnect_backend(struct netfront_info *info) unsigned int i = 0; unsigned int num_queues = info->netdev->real_num_tx_queues; + netif_carrier_off(info->netdev); + for (i = 0; i < num_queues; ++i) { struct netfront_queue *queue = &info->queues[i]; - /* Stop old i/f to prevent errors whilst we rebuild the state. */ - spin_lock_bh(&queue->rx_lock); - spin_lock_irq(&queue->tx_lock); - netif_carrier_off(queue->info->netdev); - spin_unlock_irq(&queue->tx_lock); - spin_unlock_bh(&queue->rx_lock); - if (queue->tx_irq && (queue->tx_irq == queue->rx_irq)) unbind_from_irqhandler(queue->tx_irq, queue); if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) { @@ -1458,6 +1453,8 @@ static void xennet_disconnect_backend(struct netfront_info *info) queue->tx_evtchn = queue->rx_evtchn = 0; queue->tx_irq = queue->rx_irq = 0; + napi_synchronize(&queue->napi); + /* End access and free the pages */ xennet_end_access(queue->tx_ring_ref, queue->tx.sring); xennet_end_access(queue->rx_ring_ref, queue->rx.sring); @@ -2046,13 +2043,15 @@ static int xennet_connect(struct net_device *dev) /* By now, the queue structures have been set up */ for (j = 0; j < num_queues; ++j) { queue = &np->queues[j]; - spin_lock_bh(&queue->rx_lock); - spin_lock_irq(&queue->tx_lock); /* Step 1: Discard all pending TX packet fragments. */ + spin_lock_irq(&queue->tx_lock); xennet_release_tx_bufs(queue); + spin_unlock_irq(&queue->tx_lock); /* Step 2: Rebuild the RX buffer freelist and the RX ring itself. 
*/ + spin_lock_bh(&queue->rx_lock); + for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) { skb_frag_t *frag; const struct page *page; @@ -2076,6 +2075,8 @@ static int xennet_connect(struct net_device *dev) } queue->rx.req_prod_pvt = requeue_idx; + + spin_unlock_bh(&queue->rx_lock); } /* @@ -2087,13 +2088,17 @@ static int xennet_connect(struct net_device *dev) netif_carrier_on(np->netdev); for (j = 0; j < num_queues; ++j) { queue = &np->queues[j]; + notify_remote_via_irq(queue->tx_irq); if (queue->tx_irq != queue->rx_irq) notify_remote_via_irq(queue->rx_irq); - xennet_tx_buf_gc(queue); - xennet_alloc_rx_buffers(queue); + spin_lock_irq(&queue->tx_lock); + xennet_tx_buf_gc(queue); spin_unlock_irq(&queue->tx_lock); + + spin_lock_bh(&queue->rx_lock); + xennet_alloc_rx_buffers(queue); spin_unlock_bh(&queue->rx_lock); } diff --git a/drivers/of/of_mdio.c b/drivers/of/of_mdio.c index a3bf2122a8d5..401b2453da45 100644 --- a/drivers/of/of_mdio.c +++ b/drivers/of/of_mdio.c @@ -182,40 +182,6 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np) } EXPORT_SYMBOL(of_mdiobus_register); -/** - * of_mdiobus_link_phydev - Find a device node for a phy - * @mdio: pointer to mii_bus structure - * @phydev: phydev for which the of_node pointer should be set - * - * Walk the list of subnodes of a mdio bus and look for a node that matches the - * phy's address with its 'reg' property. If found, set the of_node pointer for - * the phy. This allows auto-probed pyh devices to be supplied with information - * passed in via DT. - */ -void of_mdiobus_link_phydev(struct mii_bus *mdio, - struct phy_device *phydev) -{ - struct device *dev = &phydev->dev; - struct device_node *child; - - if (dev->of_node || !mdio->dev.of_node) - return; - - for_each_available_child_of_node(mdio->dev.of_node, child) { - int addr; - - addr = of_mdio_parse_addr(&mdio->dev, child); - if (addr < 0) - continue; - - if (addr == phydev->addr) { - dev->of_node = child; - return; - } - } -} -EXPORT_SYMBOL(of_mdiobus_link_phydev); - /* Helper function for of_phy_find_device */ static int of_phy_match(struct device *dev, void *phy_np) { diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 63a54a340863..1c8592b0e146 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -3135,8 +3135,13 @@ static int pci_af_flr(struct pci_dev *dev, int probe) if (probe) return 0; - /* Wait for Transaction Pending bit clean */ - if (pci_wait_for_pending(dev, pos + PCI_AF_STATUS, PCI_AF_STATUS_TP)) + /* + * Wait for Transaction Pending bit to clear. A word-aligned test + * is used, so we use the conrol offset rather than status and shift + * the test bit to match. 
+ */ + if (pci_wait_for_pending(dev, pos + PCI_AF_CTRL, + PCI_AF_STATUS_TP << 8)) goto clear; dev_err(&dev->dev, "transaction is not cleared; proceeding with reset anyway\n"); diff --git a/drivers/phy/Kconfig b/drivers/phy/Kconfig index 16a2f067c242..64b98d242ea6 100644 --- a/drivers/phy/Kconfig +++ b/drivers/phy/Kconfig @@ -112,6 +112,7 @@ config PHY_EXYNOS5250_SATA config PHY_SUN4I_USB tristate "Allwinner sunxi SoC USB PHY driver" depends on ARCH_SUNXI && HAS_IOMEM && OF + depends on RESET_CONTROLLER select GENERIC_PHY help Enable this to support the transceiver that is part of Allwinner @@ -122,6 +123,7 @@ config PHY_SUN4I_USB config PHY_SAMSUNG_USB2 tristate "Samsung USB 2.0 PHY driver" + depends on HAS_IOMEM select GENERIC_PHY select MFD_SYSCON help diff --git a/drivers/phy/phy-core.c b/drivers/phy/phy-core.c index c64a2f3b2d62..49c446530101 100644 --- a/drivers/phy/phy-core.c +++ b/drivers/phy/phy-core.c @@ -614,8 +614,9 @@ struct phy *phy_create(struct device *dev, const struct phy_ops *ops, return phy; put_dev: - put_device(&phy->dev); - ida_remove(&phy_ida, phy->id); + put_device(&phy->dev); /* calls phy_release() which frees resources */ + return ERR_PTR(ret); + free_phy: kfree(phy); return ERR_PTR(ret); @@ -799,7 +800,7 @@ static void phy_release(struct device *dev) phy = to_phy(dev); dev_vdbg(dev, "releasing '%s'\n", dev_name(dev)); - ida_remove(&phy_ida, phy->id); + ida_simple_remove(&phy_ida, phy->id); kfree(phy); } diff --git a/drivers/phy/phy-omap-usb2.c b/drivers/phy/phy-omap-usb2.c index 7007c11fe07d..34b396146c8a 100644 --- a/drivers/phy/phy-omap-usb2.c +++ b/drivers/phy/phy-omap-usb2.c @@ -233,8 +233,8 @@ static int omap_usb2_probe(struct platform_device *pdev) if (phy_data->flags & OMAP_USB2_CALIBRATE_FALSE_DISCONNECT) { res = platform_get_resource(pdev, IORESOURCE_MEM, 0); phy->phy_base = devm_ioremap_resource(&pdev->dev, res); - if (!phy->phy_base) - return -ENOMEM; + if (IS_ERR(phy->phy_base)) + return PTR_ERR(phy->phy_base); phy->flags |= OMAP_USB2_CALIBRATE_FALSE_DISCONNECT; } @@ -262,7 +262,6 @@ static int omap_usb2_probe(struct platform_device *pdev) otg->phy = &phy->phy; platform_set_drvdata(pdev, phy); - pm_runtime_enable(phy->dev); generic_phy = devm_phy_create(phy->dev, &ops, NULL); if (IS_ERR(generic_phy)) @@ -270,10 +269,13 @@ static int omap_usb2_probe(struct platform_device *pdev) phy_set_drvdata(generic_phy, phy); + pm_runtime_enable(phy->dev); phy_provider = devm_of_phy_provider_register(phy->dev, of_phy_simple_xlate); - if (IS_ERR(phy_provider)) + if (IS_ERR(phy_provider)) { + pm_runtime_disable(phy->dev); return PTR_ERR(phy_provider); + } phy->wkupclk = devm_clk_get(phy->dev, "wkupclk"); if (IS_ERR(phy->wkupclk)) { @@ -317,6 +319,7 @@ static int omap_usb2_remove(struct platform_device *pdev) if (!IS_ERR(phy->optclk)) clk_unprepare(phy->optclk); usb_remove_phy(&phy->phy); + pm_runtime_disable(phy->dev); return 0; } diff --git a/drivers/phy/phy-samsung-usb2.c b/drivers/phy/phy-samsung-usb2.c index 8a8c6bc8709a..1e69a32c221d 100644 --- a/drivers/phy/phy-samsung-usb2.c +++ b/drivers/phy/phy-samsung-usb2.c @@ -107,6 +107,7 @@ static const struct of_device_id samsung_usb2_phy_of_match[] = { #endif { }, }; +MODULE_DEVICE_TABLE(of, samsung_usb2_phy_of_match); static int samsung_usb2_phy_probe(struct platform_device *pdev) { diff --git a/drivers/pinctrl/berlin/berlin.c b/drivers/pinctrl/berlin/berlin.c index edf5d2fd2b22..86db2235ab00 100644 --- a/drivers/pinctrl/berlin/berlin.c +++ b/drivers/pinctrl/berlin/berlin.c @@ -320,7 +320,7 @@ int 
berlin_pinctrl_probe(struct platform_device *pdev, regmap = dev_get_regmap(&pdev->dev, NULL); if (!regmap) - return PTR_ERR(regmap); + return -ENODEV; pctrl = devm_kzalloc(dev, sizeof(*pctrl), GFP_KERNEL); if (!pctrl) diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c index f1ca75e6d7b1..5f38c7f67834 100644 --- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c +++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c @@ -211,6 +211,10 @@ static int sunxi_pctrl_dt_node_to_map(struct pinctrl_dev *pctldev, configlen++; pinconfig = kzalloc(configlen * sizeof(*pinconfig), GFP_KERNEL); + if (!pinconfig) { + kfree(*map); + return -ENOMEM; + } if (!of_property_read_u32(node, "allwinner,drive", &val)) { u16 strength = (val + 1) * 10; diff --git a/drivers/thermal/imx_thermal.c b/drivers/thermal/imx_thermal.c index a99c63152b8d..2c516f2eebed 100644 --- a/drivers/thermal/imx_thermal.c +++ b/drivers/thermal/imx_thermal.c @@ -306,7 +306,7 @@ static int imx_get_sensor_data(struct platform_device *pdev) { struct imx_thermal_data *data = platform_get_drvdata(pdev); struct regmap *map; - int t1, t2, n1, n2; + int t1, n1; int ret; u32 val; u64 temp64; @@ -333,14 +333,10 @@ static int imx_get_sensor_data(struct platform_device *pdev) /* * Sensor data layout: * [31:20] - sensor value @ 25C - * [19:8] - sensor value of hot - * [7:0] - hot temperature value * Use universal formula now and only need sensor value @ 25C * slope = 0.4297157 - (0.0015976 * 25C fuse) */ n1 = val >> 20; - n2 = (val & 0xfff00) >> 8; - t2 = val & 0xff; t1 = 25; /* t1 always 25C */ /* @@ -366,16 +362,16 @@ static int imx_get_sensor_data(struct platform_device *pdev) data->c2 = n1 * data->c1 + 1000 * t1; /* - * Set the default passive cooling trip point to 20 °C below the - * maximum die temperature. Can be changed from userspace. + * Set the default passive cooling trip point, + * can be changed from userspace. */ - data->temp_passive = 1000 * (t2 - 20); + data->temp_passive = IMX_TEMP_PASSIVE; /* - * The maximum die temperature is t2, let's give 5 °C cushion - * for noise and possible temperature rise between measurements. + * The maximum die temperature set to 20 C higher than + * IMX_TEMP_PASSIVE. 
*/ - data->temp_critical = 1000 * (t2 - 5); + data->temp_critical = 1000 * 20 + data->temp_passive; return 0; } diff --git a/drivers/thermal/of-thermal.c b/drivers/thermal/of-thermal.c index 04b1be7fa018..4b2b999b7611 100644 --- a/drivers/thermal/of-thermal.c +++ b/drivers/thermal/of-thermal.c @@ -156,8 +156,8 @@ static int of_thermal_bind(struct thermal_zone_device *thermal, ret = thermal_zone_bind_cooling_device(thermal, tbp->trip_id, cdev, - tbp->min, - tbp->max); + tbp->max, + tbp->min); if (ret) return ret; } @@ -712,11 +712,12 @@ thermal_of_build_thermal_zone(struct device_node *np) } i = 0; - for_each_child_of_node(child, gchild) + for_each_child_of_node(child, gchild) { ret = thermal_of_populate_bind_params(gchild, &tz->tbps[i++], tz->trips, tz->ntrips); if (ret) goto free_tbps; + } finish: of_node_put(child); diff --git a/drivers/thermal/thermal_hwmon.c b/drivers/thermal/thermal_hwmon.c index fdb07199d9c2..1967bee4f076 100644 --- a/drivers/thermal/thermal_hwmon.c +++ b/drivers/thermal/thermal_hwmon.c @@ -140,6 +140,12 @@ thermal_hwmon_lookup_temp(const struct thermal_hwmon_device *hwmon, return NULL; } +static bool thermal_zone_crit_temp_valid(struct thermal_zone_device *tz) +{ + unsigned long temp; + return tz->ops->get_crit_temp && !tz->ops->get_crit_temp(tz, &temp); +} + int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz) { struct thermal_hwmon_device *hwmon; @@ -189,21 +195,18 @@ int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz) if (result) goto free_temp_mem; - if (tz->ops->get_crit_temp) { - unsigned long temperature; - if (!tz->ops->get_crit_temp(tz, &temperature)) { - snprintf(temp->temp_crit.name, - sizeof(temp->temp_crit.name), + if (thermal_zone_crit_temp_valid(tz)) { + snprintf(temp->temp_crit.name, + sizeof(temp->temp_crit.name), "temp%d_crit", hwmon->count); - temp->temp_crit.attr.attr.name = temp->temp_crit.name; - temp->temp_crit.attr.attr.mode = 0444; - temp->temp_crit.attr.show = temp_crit_show; - sysfs_attr_init(&temp->temp_crit.attr.attr); - result = device_create_file(hwmon->device, - &temp->temp_crit.attr); - if (result) - goto unregister_input; - } + temp->temp_crit.attr.attr.name = temp->temp_crit.name; + temp->temp_crit.attr.attr.mode = 0444; + temp->temp_crit.attr.show = temp_crit_show; + sysfs_attr_init(&temp->temp_crit.attr.attr); + result = device_create_file(hwmon->device, + &temp->temp_crit.attr); + if (result) + goto unregister_input; } mutex_lock(&thermal_hwmon_list_lock); @@ -250,7 +253,7 @@ void thermal_remove_hwmon_sysfs(struct thermal_zone_device *tz) } device_remove_file(hwmon->device, &temp->temp_input.attr); - if (tz->ops->get_crit_temp) + if (thermal_zone_crit_temp_valid(tz)) device_remove_file(hwmon->device, &temp->temp_crit.attr); mutex_lock(&thermal_hwmon_list_lock); diff --git a/drivers/thermal/ti-soc-thermal/ti-bandgap.c b/drivers/thermal/ti-soc-thermal/ti-bandgap.c index a1271b55103a..634b6ce0e63a 100644 --- a/drivers/thermal/ti-soc-thermal/ti-bandgap.c +++ b/drivers/thermal/ti-soc-thermal/ti-bandgap.c @@ -1155,7 +1155,7 @@ static struct ti_bandgap *ti_bandgap_build(struct platform_device *pdev) /* register shadow for context save and restore */ bgp->regval = devm_kzalloc(&pdev->dev, sizeof(*bgp->regval) * bgp->conf->sensor_count, GFP_KERNEL); - if (!bgp) { + if (!bgp->regval) { dev_err(&pdev->dev, "Unable to allocate mem for driver ref\n"); return ERR_PTR(-ENOMEM); } diff --git a/drivers/tty/serial/arc_uart.c b/drivers/tty/serial/arc_uart.c index c9f5c9dcc15c..008c223eaf26 100644 --- a/drivers/tty/serial/arc_uart.c 
+++ b/drivers/tty/serial/arc_uart.c @@ -177,7 +177,7 @@ static void arc_serial_tx_chars(struct arc_uart_port *uart) uart->port.icount.tx++; uart->port.x_char = 0; sent = 1; - } else if (xmit->tail != xmit->head) { /* TODO: uart_circ_empty */ + } else if (!uart_circ_empty(xmit)) { ch = xmit->buf[xmit->tail]; xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); uart->port.icount.tx++; diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c index e2f93874989b..044e86d528ae 100644 --- a/drivers/tty/serial/imx.c +++ b/drivers/tty/serial/imx.c @@ -567,6 +567,9 @@ static void imx_start_tx(struct uart_port *port) struct imx_port *sport = (struct imx_port *)port; unsigned long temp; + if (uart_circ_empty(&port->state->xmit)) + return; + if (USE_IRDA(sport)) { /* half duplex in IrDA mode; have to disable receive mode */ temp = readl(sport->port.membase + UCR4); diff --git a/drivers/tty/serial/ip22zilog.c b/drivers/tty/serial/ip22zilog.c index 1efd4c36ba0c..99b7b8697861 100644 --- a/drivers/tty/serial/ip22zilog.c +++ b/drivers/tty/serial/ip22zilog.c @@ -603,6 +603,8 @@ static void ip22zilog_start_tx(struct uart_port *port) } else { struct circ_buf *xmit = &port->state->xmit; + if (uart_circ_empty(xmit)) + return; writeb(xmit->buf[xmit->tail], &channel->data); ZSDELAY(); ZS_WSYNC(channel); diff --git a/drivers/tty/serial/m32r_sio.c b/drivers/tty/serial/m32r_sio.c index 68f2c53e0b54..5702828fb62e 100644 --- a/drivers/tty/serial/m32r_sio.c +++ b/drivers/tty/serial/m32r_sio.c @@ -266,9 +266,11 @@ static void m32r_sio_start_tx(struct uart_port *port) if (!(up->ier & UART_IER_THRI)) { up->ier |= UART_IER_THRI; serial_out(up, UART_IER, up->ier); - serial_out(up, UART_TX, xmit->buf[xmit->tail]); - xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); - up->port.icount.tx++; + if (!uart_circ_empty(xmit)) { + serial_out(up, UART_TX, xmit->buf[xmit->tail]); + xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); + up->port.icount.tx++; + } } while((serial_in(up, UART_LSR) & UART_EMPTY) != UART_EMPTY); #else diff --git a/drivers/tty/serial/pmac_zilog.c b/drivers/tty/serial/pmac_zilog.c index 8193635103ee..f7ad5b903055 100644 --- a/drivers/tty/serial/pmac_zilog.c +++ b/drivers/tty/serial/pmac_zilog.c @@ -653,6 +653,8 @@ static void pmz_start_tx(struct uart_port *port) } else { struct circ_buf *xmit = &port->state->xmit; + if (uart_circ_empty(xmit)) + goto out; write_zsdata(uap, xmit->buf[xmit->tail]); zssync(uap); xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); @@ -661,6 +663,7 @@ static void pmz_start_tx(struct uart_port *port) if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) uart_write_wakeup(&uap->port); } + out: pmz_debug("pmz: start_tx() done.\n"); } diff --git a/drivers/tty/serial/sunsab.c b/drivers/tty/serial/sunsab.c index 80a58eca785b..2f57df9a71d9 100644 --- a/drivers/tty/serial/sunsab.c +++ b/drivers/tty/serial/sunsab.c @@ -427,6 +427,9 @@ static void sunsab_start_tx(struct uart_port *port) struct circ_buf *xmit = &up->port.state->xmit; int i; + if (uart_circ_empty(xmit)) + return; + up->interrupt_mask1 &= ~(SAB82532_IMR1_ALLS|SAB82532_IMR1_XPR); writeb(up->interrupt_mask1, &up->regs->w.imr1); diff --git a/drivers/tty/serial/sunzilog.c b/drivers/tty/serial/sunzilog.c index a85db8b87156..02df3940b95e 100644 --- a/drivers/tty/serial/sunzilog.c +++ b/drivers/tty/serial/sunzilog.c @@ -703,6 +703,8 @@ static void sunzilog_start_tx(struct uart_port *port) } else { struct circ_buf *xmit = &port->state->xmit; + if (uart_circ_empty(xmit)) + return; writeb(xmit->buf[xmit->tail], &channel->data); 
ZSDELAY(); ZS_WSYNC(channel); diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c index 9d2b673f90e3..b8125aa64ad8 100644 --- a/drivers/usb/chipidea/udc.c +++ b/drivers/usb/chipidea/udc.c @@ -1169,8 +1169,8 @@ static int ep_enable(struct usb_ep *ep, if (hwep->type == USB_ENDPOINT_XFER_CONTROL) cap |= QH_IOS; - if (hwep->num) - cap |= QH_ZLT; + + cap |= QH_ZLT; cap |= (hwep->ep.maxpacket << __ffs(QH_MAX_PKT)) & QH_MAX_PKT; /* * For ISO-TX, we set mult at QH as the largest value, and use diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c index 21b99b4b4082..0e950ad8cb25 100644 --- a/drivers/usb/core/hub.c +++ b/drivers/usb/core/hub.c @@ -889,6 +889,25 @@ static int hub_usb3_port_disable(struct usb_hub *hub, int port1) if (!hub_is_superspeed(hub->hdev)) return -EINVAL; + ret = hub_port_status(hub, port1, &portstatus, &portchange); + if (ret < 0) + return ret; + + /* + * USB controller Advanced Micro Devices, Inc. [AMD] FCH USB XHCI + * Controller [1022:7814] will have spurious result making the following + * usb 3.0 device hotplugging route to the 2.0 root hub and recognized + * as high-speed device if we set the usb 3.0 port link state to + * Disabled. Since it's already in USB_SS_PORT_LS_RX_DETECT state, we + * check the state here to avoid the bug. + */ + if ((portstatus & USB_PORT_STAT_LINK_STATE) == + USB_SS_PORT_LS_RX_DETECT) { + dev_dbg(&hub->ports[port1 - 1]->dev, + "Not disabling port; link state is RxDetect\n"); + return ret; + } + ret = hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_SS_DISABLED); if (ret) return ret; diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c index 762e4a5f5ae9..330df5ce435b 100644 --- a/drivers/usb/serial/cp210x.c +++ b/drivers/usb/serial/cp210x.c @@ -153,6 +153,7 @@ static const struct usb_device_id id_table[] = { { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */ { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */ { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */ + { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */ { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */ { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */ { USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */ diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c index 115662c16dcc..8a3813be1b28 100644 --- a/drivers/usb/serial/ftdi_sio.c +++ b/drivers/usb/serial/ftdi_sio.c @@ -720,7 +720,8 @@ static const struct usb_device_id id_table_combined[] = { { USB_DEVICE(FTDI_VID, FTDI_ACG_HFDUAL_PID) }, { USB_DEVICE(FTDI_VID, FTDI_YEI_SERVOCENTER31_PID) }, { USB_DEVICE(FTDI_VID, FTDI_THORLABS_PID) }, - { USB_DEVICE(TESTO_VID, TESTO_USB_INTERFACE_PID) }, + { USB_DEVICE(TESTO_VID, TESTO_1_PID) }, + { USB_DEVICE(TESTO_VID, TESTO_3_PID) }, { USB_DEVICE(FTDI_VID, FTDI_GAMMA_SCOUT_PID) }, { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13M_PID) }, { USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13S_PID) }, @@ -944,6 +945,8 @@ static const struct usb_device_id id_table_combined[] = { { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_2_PID) }, { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_3_PID) }, { USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_4_PID) }, + /* Infineon Devices */ + { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) }, { } /* Terminating entry */ }; diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h index 500474c48f4b..c4777bc6aee0 100644 --- a/drivers/usb/serial/ftdi_sio_ids.h +++ b/drivers/usb/serial/ftdi_sio_ids.h @@ 
-584,6 +584,12 @@ #define RATOC_PRODUCT_ID_USB60F 0xb020 /* + * Infineon Technologies + */ +#define INFINEON_VID 0x058b +#define INFINEON_TRIBOARD_PID 0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */ + +/* * Acton Research Corp. */ #define ACTON_VID 0x0647 /* Vendor ID */ @@ -798,7 +804,8 @@ * Submitted by Colin Leroy */ #define TESTO_VID 0x128D -#define TESTO_USB_INTERFACE_PID 0x0001 +#define TESTO_1_PID 0x0001 +#define TESTO_3_PID 0x0003 /* * Mobility Electronics products. diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index ac73f49cd9f0..a9688940543d 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -1487,6 +1487,8 @@ static const struct usb_device_id option_ids[] = { .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1426, 0xff, 0xff, 0xff), /* ZTE MF91 */ .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1428, 0xff, 0xff, 0xff), /* Telewell TW-LTE 4G v2 */ + .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) }, diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c index b7a506f2bb14..5c660c77f03b 100644 --- a/drivers/xen/balloon.c +++ b/drivers/xen/balloon.c @@ -426,20 +426,18 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp) * p2m are consistent. */ if (!xen_feature(XENFEAT_auto_translated_physmap)) { - unsigned long p; - struct page *scratch_page = get_balloon_scratch_page(); - if (!PageHighMem(page)) { + struct page *scratch_page = get_balloon_scratch_page(); + ret = HYPERVISOR_update_va_mapping( (unsigned long)__va(pfn << PAGE_SHIFT), pfn_pte(page_to_pfn(scratch_page), PAGE_KERNEL_RO), 0); BUG_ON(ret); - } - p = page_to_pfn(scratch_page); - __set_phys_to_machine(pfn, pfn_to_mfn(p)); - put_balloon_scratch_page(); + put_balloon_scratch_page(); + } + __set_phys_to_machine(pfn, INVALID_P2M_ENTRY); } #endif diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c index c3667b202f2f..5f1e1f3cd186 100644 --- a/drivers/xen/manage.c +++ b/drivers/xen/manage.c @@ -88,7 +88,6 @@ static int xen_suspend(void *data) if (!si->cancelled) { xen_irq_resume(); - xen_console_resume(); xen_timer_resume(); } @@ -135,6 +134,10 @@ static void do_suspend(void) err = stop_machine(xen_suspend, &si, cpumask_of(0)); + /* Resume console as early as possible. */ + if (!si.cancelled) + xen_console_resume(); + raw_notifier_call_chain(&xen_resume_notifier, 0, NULL); dpm_resume_start(si.cancelled ? 
PMSG_THAW : PMSG_RESTORE); diff --git a/firmware/Makefile b/firmware/Makefile index 5747417069ca..0862d34cf7d1 100644 --- a/firmware/Makefile +++ b/firmware/Makefile @@ -219,6 +219,12 @@ $(obj)/%.fw: $(obj)/%.H16 $(ihex2fw_dep) obj-y += $(patsubst %,%.gen.o, $(fw-external-y)) obj-$(CONFIG_FIRMWARE_IN_KERNEL) += $(patsubst %,%.gen.o, $(fw-shipped-y)) +ifeq ($(KBUILD_SRC),) +# Makefile.build only creates subdirectories for O= builds, but external +# firmware might live outside the kernel source tree +_dummy := $(foreach d,$(addprefix $(obj)/,$(dir $(fw-external-y))), $(shell [ -d $(d) ] || mkdir -p $(d))) +endif + # Remove .S files and binaries created from ihex # (during 'make clean' .config isn't included so they're all in $(fw-shipped-)) targets := $(fw-shipped-) $(patsubst $(obj)/%,%, \ @@ -830,16 +830,20 @@ void exit_aio(struct mm_struct *mm) static void put_reqs_available(struct kioctx *ctx, unsigned nr) { struct kioctx_cpu *kcpu; + unsigned long flags; preempt_disable(); kcpu = this_cpu_ptr(ctx->cpu); + local_irq_save(flags); kcpu->reqs_available += nr; + while (kcpu->reqs_available >= ctx->req_batch * 2) { kcpu->reqs_available -= ctx->req_batch; atomic_add(ctx->req_batch, &ctx->reqs_available); } + local_irq_restore(flags); preempt_enable(); } @@ -847,10 +851,12 @@ static bool get_reqs_available(struct kioctx *ctx) { struct kioctx_cpu *kcpu; bool ret = false; + unsigned long flags; preempt_disable(); kcpu = this_cpu_ptr(ctx->cpu); + local_irq_save(flags); if (!kcpu->reqs_available) { int old, avail = atomic_read(&ctx->reqs_available); @@ -869,6 +875,7 @@ static bool get_reqs_available(struct kioctx *ctx) ret = true; kcpu->reqs_available--; out: + local_irq_restore(flags); preempt_enable(); return ret; } diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c index e12441c7cf1d..7187b14faa6c 100644 --- a/fs/btrfs/ordered-data.c +++ b/fs/btrfs/ordered-data.c @@ -484,8 +484,19 @@ void btrfs_wait_logged_extents(struct btrfs_root *log, u64 transid) log_list); list_del_init(&ordered->log_list); spin_unlock_irq(&log->log_extents_lock[index]); + + if (!test_bit(BTRFS_ORDERED_IO_DONE, &ordered->flags) && + !test_bit(BTRFS_ORDERED_DIRECT, &ordered->flags)) { + struct inode *inode = ordered->inode; + u64 start = ordered->file_offset; + u64 end = ordered->file_offset + ordered->len - 1; + + WARN_ON(!inode); + filemap_fdatawrite_range(inode->i_mapping, start, end); + } wait_event(ordered->wait, test_bit(BTRFS_ORDERED_IO_DONE, &ordered->flags)); + btrfs_put_ordered_extent(ordered); spin_lock_irq(&log->log_extents_lock[index]); } diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index 6104676857f5..6cb82f62cb7c 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -1680,11 +1680,11 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path) if (device->bdev == root->fs_info->fs_devices->latest_bdev) root->fs_info->fs_devices->latest_bdev = next_device->bdev; - if (device->bdev) + if (device->bdev) { device->fs_devices->open_devices--; - - /* remove sysfs entry */ - btrfs_kobj_rm_device(root->fs_info, device); + /* remove sysfs entry */ + btrfs_kobj_rm_device(root->fs_info, device); + } call_rcu(&device->rcu, free_device); diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c index 3f5c188953a4..0b7e28e7eaa4 100644 --- a/fs/ext4/extents_status.c +++ b/fs/ext4/extents_status.c @@ -966,10 +966,10 @@ retry: continue; } - if (ei->i_es_lru_nr == 0 || ei == locked_ei) + if (ei->i_es_lru_nr == 0 || ei == locked_ei || + !write_trylock(&ei->i_es_lock)) continue; - 
write_lock(&ei->i_es_lock); shrunk = __es_try_to_reclaim_extents(ei, nr_to_scan); if (ei->i_es_lru_nr == 0) list_del_init(&ei->i_es_lru); diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c index a87455df38bc..5b87fc36aab8 100644 --- a/fs/ext4/ialloc.c +++ b/fs/ext4/ialloc.c @@ -338,7 +338,7 @@ out: fatal = err; } else { ext4_error(sb, "bit already cleared for inode %lu", ino); - if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { + if (gdp && !EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) { int count; count = ext4_free_inodes_count(sb, gdp); percpu_counter_sub(&sbi->s_freeinodes_counter, @@ -874,6 +874,13 @@ got: goto out; } + BUFFER_TRACE(group_desc_bh, "get_write_access"); + err = ext4_journal_get_write_access(handle, group_desc_bh); + if (err) { + ext4_std_error(sb, err); + goto out; + } + /* We may have to initialize the block bitmap if it isn't already */ if (ext4_has_group_desc_csum(sb) && gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) { @@ -910,13 +917,6 @@ got: } } - BUFFER_TRACE(group_desc_bh, "get_write_access"); - err = ext4_journal_get_write_access(handle, group_desc_bh); - if (err) { - ext4_std_error(sb, err); - goto out; - } - /* Update the relevant bg descriptor fields */ if (ext4_has_group_desc_csum(sb)) { int free; diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c index 7f72f50a8fa7..2dcb936be90e 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -752,8 +752,8 @@ void ext4_mb_generate_buddy(struct super_block *sb, if (free != grp->bb_free) { ext4_grp_locked_error(sb, group, 0, 0, - "%u clusters in bitmap, %u in gd; " - "block bitmap corrupt.", + "block bitmap and bg descriptor " + "inconsistent: %u vs %u free clusters", free, grp->bb_free); /* * If we intend to continue, we consider group descriptor diff --git a/fs/ext4/super.c b/fs/ext4/super.c index b9b9aabfb4d2..6df7bc611dbd 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1525,8 +1525,6 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token, arg = JBD2_DEFAULT_MAX_COMMIT_AGE; sbi->s_commit_interval = HZ * arg; } else if (token == Opt_max_batch_time) { - if (arg == 0) - arg = EXT4_DEF_MAX_BATCH_TIME; sbi->s_max_batch_time = arg; } else if (token == Opt_min_batch_time) { sbi->s_min_batch_time = arg; @@ -2809,10 +2807,11 @@ static void print_daily_error_info(unsigned long arg) es = sbi->s_es; if (es->s_error_count) - ext4_msg(sb, KERN_NOTICE, "error count: %u", + /* fsck newer than v1.41.13 is needed to clean this condition. 
*/ + ext4_msg(sb, KERN_NOTICE, "error count since last fsck: %u", le32_to_cpu(es->s_error_count)); if (es->s_first_error_time) { - printk(KERN_NOTICE "EXT4-fs (%s): initial error at %u: %.*s:%d", + printk(KERN_NOTICE "EXT4-fs (%s): initial error at time %u: %.*s:%d", sb->s_id, le32_to_cpu(es->s_first_error_time), (int) sizeof(es->s_first_error_func), es->s_first_error_func, @@ -2826,7 +2825,7 @@ static void print_daily_error_info(unsigned long arg) printk("\n"); } if (es->s_last_error_time) { - printk(KERN_NOTICE "EXT4-fs (%s): last error at %u: %.*s:%d", + printk(KERN_NOTICE "EXT4-fs (%s): last error at time %u: %.*s:%d", sb->s_id, le32_to_cpu(es->s_last_error_time), (int) sizeof(es->s_last_error_func), es->s_last_error_func, @@ -3880,38 +3879,19 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) goto failed_mount2; } } - - /* - * set up enough so that it can read an inode, - * and create new inode for buddy allocator - */ - sbi->s_gdb_count = db_count; - if (!test_opt(sb, NOLOAD) && - EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) - sb->s_op = &ext4_sops; - else - sb->s_op = &ext4_nojournal_sops; - - ext4_ext_init(sb); - err = ext4_mb_init(sb); - if (err) { - ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)", - err); - goto failed_mount2; - } - if (!ext4_check_descriptors(sb, &first_not_zeroed)) { ext4_msg(sb, KERN_ERR, "group descriptors corrupted!"); - goto failed_mount2a; + goto failed_mount2; } if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG)) if (!ext4_fill_flex_info(sb)) { ext4_msg(sb, KERN_ERR, "unable to initialize " "flex_bg meta info!"); - goto failed_mount2a; + goto failed_mount2; } + sbi->s_gdb_count = db_count; get_random_bytes(&sbi->s_next_generation, sizeof(u32)); spin_lock_init(&sbi->s_next_gen_lock); @@ -3946,6 +3926,14 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) sbi->s_stripe = ext4_get_stripe_size(sbi); sbi->s_extent_max_zeroout_kb = 32; + /* + * set up enough so that it can read an inode + */ + if (!test_opt(sb, NOLOAD) && + EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) + sb->s_op = &ext4_sops; + else + sb->s_op = &ext4_nojournal_sops; sb->s_export_op = &ext4_export_ops; sb->s_xattr = ext4_xattr_handlers; #ifdef CONFIG_QUOTA @@ -4135,13 +4123,21 @@ no_journal: if (err) { ext4_msg(sb, KERN_ERR, "failed to reserve %llu clusters for " "reserved pool", ext4_calculate_resv_clusters(sb)); - goto failed_mount5; + goto failed_mount4a; } err = ext4_setup_system_zone(sb); if (err) { ext4_msg(sb, KERN_ERR, "failed to initialize system " "zone (%d)", err); + goto failed_mount4a; + } + + ext4_ext_init(sb); + err = ext4_mb_init(sb); + if (err) { + ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)", + err); goto failed_mount5; } @@ -4218,8 +4214,11 @@ failed_mount8: failed_mount7: ext4_unregister_li_request(sb); failed_mount6: - ext4_release_system_zone(sb); + ext4_mb_release(sb); failed_mount5: + ext4_ext_release(sb); + ext4_release_system_zone(sb); +failed_mount4a: dput(sb->s_root); sb->s_root = NULL; failed_mount4: @@ -4243,14 +4242,11 @@ failed_mount3: percpu_counter_destroy(&sbi->s_extent_cache_cnt); if (sbi->s_mmp_tsk) kthread_stop(sbi->s_mmp_tsk); -failed_mount2a: - ext4_mb_release(sb); failed_mount2: for (i = 0; i < db_count; i++) brelse(sbi->s_group_desc[i]); ext4_kvfree(sbi->s_group_desc); failed_mount: - ext4_ext_release(sb); if (sbi->s_chksum_driver) crypto_free_shash(sbi->s_chksum_driver); if (sbi->s_proc) { diff --git a/fs/f2fs/data.c 
b/fs/f2fs/data.c index 0924521306b4..f8cf619edb5f 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -608,8 +608,8 @@ static int __allocate_data_block(struct dnode_of_data *dn) * b. do not use extent cache for better performance * c. give the block addresses to blockdev */ -static int get_data_block(struct inode *inode, sector_t iblock, - struct buffer_head *bh_result, int create) +static int __get_data_block(struct inode *inode, sector_t iblock, + struct buffer_head *bh_result, int create, bool fiemap) { struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb); unsigned int blkbits = inode->i_sb->s_blocksize_bits; @@ -637,7 +637,7 @@ static int get_data_block(struct inode *inode, sector_t iblock, err = 0; goto unlock_out; } - if (dn.data_blkaddr == NEW_ADDR) + if (dn.data_blkaddr == NEW_ADDR && !fiemap) goto put_out; if (dn.data_blkaddr != NULL_ADDR) { @@ -671,7 +671,7 @@ get_next: err = 0; goto unlock_out; } - if (dn.data_blkaddr == NEW_ADDR) + if (dn.data_blkaddr == NEW_ADDR && !fiemap) goto put_out; end_offset = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode)); @@ -708,10 +708,23 @@ out: return err; } +static int get_data_block(struct inode *inode, sector_t iblock, + struct buffer_head *bh_result, int create) +{ + return __get_data_block(inode, iblock, bh_result, create, false); +} + +static int get_data_block_fiemap(struct inode *inode, sector_t iblock, + struct buffer_head *bh_result, int create) +{ + return __get_data_block(inode, iblock, bh_result, create, true); +} + int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, u64 start, u64 len) { - return generic_block_fiemap(inode, fieinfo, start, len, get_data_block); + return generic_block_fiemap(inode, fieinfo, + start, len, get_data_block_fiemap); } static int f2fs_read_data_page(struct file *file, struct page *page) diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c index 966acb039e3b..a4addd72ebbd 100644 --- a/fs/f2fs/dir.c +++ b/fs/f2fs/dir.c @@ -376,11 +376,11 @@ static struct page *init_inode_metadata(struct inode *inode, put_error: f2fs_put_page(page, 1); +error: /* once the failed inode becomes a bad inode, i_mode is S_IFREG */ truncate_inode_pages(&inode->i_data, 0); truncate_blocks(inode, 0); remove_dirty_dir_inode(inode); -error: remove_inode_page(inode); return ERR_PTR(err); } diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index e51c732b0dd9..58df97e174d0 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -342,9 +342,6 @@ struct f2fs_sm_info { struct dirty_seglist_info *dirty_info; /* dirty segment information */ struct curseg_info *curseg_array; /* active segment information */ - struct list_head wblist_head; /* list of under-writeback pages */ - spinlock_t wblist_lock; /* lock for checkpoint */ - block_t seg0_blkaddr; /* block address of 0'th segment */ block_t main_blkaddr; /* start block address of main area */ block_t ssa_blkaddr; /* start block address of SSA area */ @@ -644,7 +641,8 @@ static inline void f2fs_unlock_all(struct f2fs_sb_info *sbi) */ static inline int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid) { - WARN_ON((nid >= NM_I(sbi)->max_nid)); + if (unlikely(nid < F2FS_ROOT_INO(sbi))) + return -EINVAL; if (unlikely(nid >= NM_I(sbi)->max_nid)) return -EINVAL; return 0; diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index c58e33075719..7d8b96275092 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -659,16 +659,19 @@ static int expand_inode_data(struct inode *inode, loff_t offset, off_start = offset & (PAGE_CACHE_SIZE - 1); off_end = (offset + len) & (PAGE_CACHE_SIZE - 1); + f2fs_lock_op(sbi); + for 
(index = pg_start; index <= pg_end; index++) { struct dnode_of_data dn; - f2fs_lock_op(sbi); + if (index == pg_end && !off_end) + goto noalloc; + set_new_dnode(&dn, inode, NULL, NULL, 0); ret = f2fs_reserve_block(&dn, index); - f2fs_unlock_op(sbi); if (ret) break; - +noalloc: if (pg_start == pg_end) new_size = offset + len; else if (index == pg_start && off_start) @@ -683,8 +686,9 @@ static int expand_inode_data(struct inode *inode, loff_t offset, i_size_read(inode) < new_size) { i_size_write(inode, new_size); mark_inode_dirty(inode); - f2fs_write_inode(inode, NULL); + update_inode_page(inode); } + f2fs_unlock_op(sbi); return ret; } diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index adc622c6bdce..2cf6962f6cc8 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -78,6 +78,7 @@ static int do_read_inode(struct inode *inode) if (check_nid_range(sbi, inode->i_ino)) { f2fs_msg(inode->i_sb, KERN_ERR, "bad inode number: %lu", (unsigned long) inode->i_ino); + WARN_ON(1); return -EINVAL; } diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c index 9138c32aa698..a6bdddc33ce2 100644 --- a/fs/f2fs/namei.c +++ b/fs/f2fs/namei.c @@ -417,9 +417,6 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry, } f2fs_set_link(new_dir, new_entry, new_page, old_inode); - down_write(&F2FS_I(old_inode)->i_sem); - F2FS_I(old_inode)->i_pino = new_dir->i_ino; - up_write(&F2FS_I(old_inode)->i_sem); new_inode->i_ctime = CURRENT_TIME; down_write(&F2FS_I(new_inode)->i_sem); @@ -448,6 +445,10 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry, } } + down_write(&F2FS_I(old_inode)->i_sem); + file_lost_pino(old_inode); + up_write(&F2FS_I(old_inode)->i_sem); + old_inode->i_ctime = CURRENT_TIME; mark_inode_dirty(old_inode); @@ -457,9 +458,6 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry, if (old_dir != new_dir) { f2fs_set_link(old_inode, old_dir_entry, old_dir_page, new_dir); - down_write(&F2FS_I(old_inode)->i_sem); - F2FS_I(old_inode)->i_pino = new_dir->i_ino; - up_write(&F2FS_I(old_inode)->i_sem); update_inode_page(old_inode); } else { kunmap(old_dir_page); @@ -474,7 +472,8 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry, return 0; put_out_dir: - f2fs_put_page(new_page, 1); + kunmap(new_page); + f2fs_put_page(new_page, 0); out_dir: if (old_dir_entry) { kunmap(old_dir_page); diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c index 9dfb9a042fd2..4b697ccc9b0c 100644 --- a/fs/f2fs/node.c +++ b/fs/f2fs/node.c @@ -42,6 +42,8 @@ bool available_free_memory(struct f2fs_sb_info *sbi, int type) mem_size = (nm_i->nat_cnt * sizeof(struct nat_entry)) >> 12; res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 2); } else if (type == DIRTY_DENTS) { + if (sbi->sb->s_bdi->dirty_exceeded) + return false; mem_size = get_pages(sbi, F2FS_DIRTY_DENTS); res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 1); } diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index f25f0e07e26f..d04613df710a 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -272,14 +272,15 @@ int create_flush_cmd_control(struct f2fs_sb_info *sbi) return -ENOMEM; spin_lock_init(&fcc->issue_lock); init_waitqueue_head(&fcc->flush_wait_queue); + sbi->sm_info->cmd_control_info = fcc; fcc->f2fs_issue_flush = kthread_run(issue_flush_thread, sbi, "f2fs_flush-%u:%u", MAJOR(dev), MINOR(dev)); if (IS_ERR(fcc->f2fs_issue_flush)) { err = PTR_ERR(fcc->f2fs_issue_flush); kfree(fcc); + sbi->sm_info->cmd_control_info = NULL; return err; } - sbi->sm_info->cmd_control_info = fcc; return 
err; } @@ -1885,8 +1886,6 @@ int build_segment_manager(struct f2fs_sb_info *sbi) /* init sm info */ sbi->sm_info = sm_info; - INIT_LIST_HEAD(&sm_info->wblist_head); - spin_lock_init(&sm_info->wblist_lock); sm_info->seg0_blkaddr = le32_to_cpu(raw_super->segment0_blkaddr); sm_info->main_blkaddr = le32_to_cpu(raw_super->main_blkaddr); sm_info->segment_count = le32_to_cpu(raw_super->segment_count); diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index b2b18637cb9e..8f96d9372ade 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -689,9 +689,7 @@ static struct inode *f2fs_nfs_get_inode(struct super_block *sb, struct f2fs_sb_info *sbi = F2FS_SB(sb); struct inode *inode; - if (unlikely(ino < F2FS_ROOT_INO(sbi))) - return ERR_PTR(-ESTALE); - if (unlikely(ino >= NM_I(sbi)->max_nid)) + if (check_nid_range(sbi, ino)) return ERR_PTR(-ESTALE); /* diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c index 098f97bdcf1b..ca887314aba9 100644 --- a/fs/fuse/dev.c +++ b/fs/fuse/dev.c @@ -643,9 +643,8 @@ struct fuse_copy_state { unsigned long seglen; unsigned long addr; struct page *pg; - void *mapaddr; - void *buf; unsigned len; + unsigned offset; unsigned move_pages:1; }; @@ -666,23 +665,17 @@ static void fuse_copy_finish(struct fuse_copy_state *cs) if (cs->currbuf) { struct pipe_buffer *buf = cs->currbuf; - if (!cs->write) { - kunmap_atomic(cs->mapaddr); - } else { - kunmap_atomic(cs->mapaddr); + if (cs->write) buf->len = PAGE_SIZE - cs->len; - } cs->currbuf = NULL; - cs->mapaddr = NULL; - } else if (cs->mapaddr) { - kunmap_atomic(cs->mapaddr); + } else if (cs->pg) { if (cs->write) { flush_dcache_page(cs->pg); set_page_dirty_lock(cs->pg); } put_page(cs->pg); - cs->mapaddr = NULL; } + cs->pg = NULL; } /* @@ -691,7 +684,7 @@ static void fuse_copy_finish(struct fuse_copy_state *cs) */ static int fuse_copy_fill(struct fuse_copy_state *cs) { - unsigned long offset; + struct page *page; int err; unlock_request(cs->fc, cs->req); @@ -706,14 +699,12 @@ static int fuse_copy_fill(struct fuse_copy_state *cs) BUG_ON(!cs->nr_segs); cs->currbuf = buf; - cs->mapaddr = kmap_atomic(buf->page); + cs->pg = buf->page; + cs->offset = buf->offset; cs->len = buf->len; - cs->buf = cs->mapaddr + buf->offset; cs->pipebufs++; cs->nr_segs--; } else { - struct page *page; - if (cs->nr_segs == cs->pipe->buffers) return -EIO; @@ -726,8 +717,8 @@ static int fuse_copy_fill(struct fuse_copy_state *cs) buf->len = 0; cs->currbuf = buf; - cs->mapaddr = kmap_atomic(page); - cs->buf = cs->mapaddr; + cs->pg = page; + cs->offset = 0; cs->len = PAGE_SIZE; cs->pipebufs++; cs->nr_segs++; @@ -740,14 +731,13 @@ static int fuse_copy_fill(struct fuse_copy_state *cs) cs->iov++; cs->nr_segs--; } - err = get_user_pages_fast(cs->addr, 1, cs->write, &cs->pg); + err = get_user_pages_fast(cs->addr, 1, cs->write, &page); if (err < 0) return err; BUG_ON(err != 1); - offset = cs->addr % PAGE_SIZE; - cs->mapaddr = kmap_atomic(cs->pg); - cs->buf = cs->mapaddr + offset; - cs->len = min(PAGE_SIZE - offset, cs->seglen); + cs->pg = page; + cs->offset = cs->addr % PAGE_SIZE; + cs->len = min(PAGE_SIZE - cs->offset, cs->seglen); cs->seglen -= cs->len; cs->addr += cs->len; } @@ -760,15 +750,20 @@ static int fuse_copy_do(struct fuse_copy_state *cs, void **val, unsigned *size) { unsigned ncpy = min(*size, cs->len); if (val) { + void *pgaddr = kmap_atomic(cs->pg); + void *buf = pgaddr + cs->offset; + if (cs->write) - memcpy(cs->buf, *val, ncpy); + memcpy(buf, *val, ncpy); else - memcpy(*val, cs->buf, ncpy); + memcpy(*val, buf, ncpy); + + kunmap_atomic(pgaddr); *val += ncpy; } *size -= 
ncpy; cs->len -= ncpy; - cs->buf += ncpy; + cs->offset += ncpy; return ncpy; } @@ -874,8 +869,8 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep) out_fallback_unlock: unlock_page(newpage); out_fallback: - cs->mapaddr = kmap_atomic(buf->page); - cs->buf = cs->mapaddr + buf->offset; + cs->pg = buf->page; + cs->offset = buf->offset; err = lock_request(cs->fc, cs->req); if (err) diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c index 42198359fa1b..0c6048247a34 100644 --- a/fs/fuse/dir.c +++ b/fs/fuse/dir.c @@ -198,7 +198,8 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags) inode = ACCESS_ONCE(entry->d_inode); if (inode && is_bad_inode(inode)) goto invalid; - else if (fuse_dentry_time(entry) < get_jiffies_64()) { + else if (time_before64(fuse_dentry_time(entry), get_jiffies_64()) || + (flags & LOOKUP_REVAL)) { int err; struct fuse_entry_out outarg; struct fuse_req *req; @@ -814,13 +815,6 @@ static int fuse_rename_common(struct inode *olddir, struct dentry *oldent, return err; } -static int fuse_rename(struct inode *olddir, struct dentry *oldent, - struct inode *newdir, struct dentry *newent) -{ - return fuse_rename_common(olddir, oldent, newdir, newent, 0, - FUSE_RENAME, sizeof(struct fuse_rename_in)); -} - static int fuse_rename2(struct inode *olddir, struct dentry *oldent, struct inode *newdir, struct dentry *newent, unsigned int flags) @@ -831,17 +825,30 @@ static int fuse_rename2(struct inode *olddir, struct dentry *oldent, if (flags & ~(RENAME_NOREPLACE | RENAME_EXCHANGE)) return -EINVAL; - if (fc->no_rename2 || fc->minor < 23) - return -EINVAL; + if (flags) { + if (fc->no_rename2 || fc->minor < 23) + return -EINVAL; - err = fuse_rename_common(olddir, oldent, newdir, newent, flags, - FUSE_RENAME2, sizeof(struct fuse_rename2_in)); - if (err == -ENOSYS) { - fc->no_rename2 = 1; - err = -EINVAL; + err = fuse_rename_common(olddir, oldent, newdir, newent, flags, + FUSE_RENAME2, + sizeof(struct fuse_rename2_in)); + if (err == -ENOSYS) { + fc->no_rename2 = 1; + err = -EINVAL; + } + } else { + err = fuse_rename_common(olddir, oldent, newdir, newent, 0, + FUSE_RENAME, + sizeof(struct fuse_rename_in)); } + return err; +} +static int fuse_rename(struct inode *olddir, struct dentry *oldent, + struct inode *newdir, struct dentry *newent) +{ + return fuse_rename2(olddir, oldent, newdir, newent, 0); } static int fuse_link(struct dentry *entry, struct inode *newdir, @@ -985,7 +992,7 @@ int fuse_update_attributes(struct inode *inode, struct kstat *stat, int err; bool r; - if (fi->i_time < get_jiffies_64()) { + if (time_before64(fi->i_time, get_jiffies_64())) { r = true; err = fuse_do_getattr(inode, stat, file); } else { @@ -1171,7 +1178,7 @@ static int fuse_permission(struct inode *inode, int mask) ((mask & MAY_EXEC) && S_ISREG(inode->i_mode))) { struct fuse_inode *fi = get_fuse_inode(inode); - if (fi->i_time < get_jiffies_64()) { + if (time_before64(fi->i_time, get_jiffies_64())) { refreshed = true; err = fuse_perm_getattr(inode, mask); diff --git a/fs/fuse/file.c b/fs/fuse/file.c index 6e16dad13e9b..40ac2628ddcf 100644 --- a/fs/fuse/file.c +++ b/fs/fuse/file.c @@ -1687,7 +1687,7 @@ static int fuse_writepage_locked(struct page *page) error = -EIO; req->ff = fuse_write_file_get(fc, fi); if (!req->ff) - goto err_free; + goto err_nofile; fuse_write_fill(req, req->ff, page_offset(page), 0); @@ -1715,6 +1715,8 @@ static int fuse_writepage_locked(struct page *page) return 0; +err_nofile: + __free_page(tmp_page); err_free: fuse_request_free(req); err: @@ 
-1955,8 +1957,8 @@ static int fuse_writepages(struct address_space *mapping, data.ff = NULL; err = -ENOMEM; - data.orig_pages = kzalloc(sizeof(struct page *) * - FUSE_MAX_PAGES_PER_REQ, + data.orig_pages = kcalloc(FUSE_MAX_PAGES_PER_REQ, + sizeof(struct page *), GFP_NOFS); if (!data.orig_pages) goto out; diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c index 754dcf23de8a..8474028d7848 100644 --- a/fs/fuse/inode.c +++ b/fs/fuse/inode.c @@ -478,6 +478,17 @@ static const match_table_t tokens = { {OPT_ERR, NULL} }; +static int fuse_match_uint(substring_t *s, unsigned int *res) +{ + int err = -ENOMEM; + char *buf = match_strdup(s); + if (buf) { + err = kstrtouint(buf, 10, res); + kfree(buf); + } + return err; +} + static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) { char *p; @@ -488,6 +499,7 @@ static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) while ((p = strsep(&opt, ",")) != NULL) { int token; int value; + unsigned uv; substring_t args[MAX_OPT_ARGS]; if (!*p) continue; @@ -511,18 +523,18 @@ static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) break; case OPT_USER_ID: - if (match_int(&args[0], &value)) + if (fuse_match_uint(&args[0], &uv)) return 0; - d->user_id = make_kuid(current_user_ns(), value); + d->user_id = make_kuid(current_user_ns(), uv); if (!uid_valid(d->user_id)) return 0; d->user_id_present = 1; break; case OPT_GROUP_ID: - if (match_int(&args[0], &value)) + if (fuse_match_uint(&args[0], &uv)) return 0; - d->group_id = make_kgid(current_user_ns(), value); + d->group_id = make_kgid(current_user_ns(), uv); if (!gid_valid(d->group_id)) return 0; d->group_id_present = 1; @@ -1006,7 +1018,7 @@ static int fuse_fill_super(struct super_block *sb, void *data, int silent) sb->s_flags &= ~(MS_NOSEC | MS_I_VERSION); - if (!parse_fuse_opt((char *) data, &d, is_bdev)) + if (!parse_fuse_opt(data, &d, is_bdev)) goto err; if (is_bdev) { diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c index 4fc3a3046174..26b3f952e6b1 100644 --- a/fs/gfs2/file.c +++ b/fs/gfs2/file.c @@ -981,7 +981,7 @@ static int do_flock(struct file *file, int cmd, struct file_lock *fl) int error = 0; state = (fl->fl_type == F_WRLCK) ? LM_ST_EXCLUSIVE : LM_ST_SHARED; - flags = (IS_SETLKW(cmd) ? 0 : LM_FLAG_TRY) | GL_EXACT | GL_NOCACHE; + flags = (IS_SETLKW(cmd) ? 
0 : LM_FLAG_TRY) | GL_EXACT; mutex_lock(&fp->f_fl_mutex); @@ -991,7 +991,7 @@ static int do_flock(struct file *file, int cmd, struct file_lock *fl) goto out; flock_lock_file_wait(file, &(struct file_lock){.fl_type = F_UNLCK}); - gfs2_glock_dq_wait(fl_gh); + gfs2_glock_dq(fl_gh); gfs2_holder_reinit(state, flags, fl_gh); } else { error = gfs2_glock_get(GFS2_SB(&ip->i_inode), ip->i_no_addr, diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c index c355f7320e44..ee4e04fe60fc 100644 --- a/fs/gfs2/glock.c +++ b/fs/gfs2/glock.c @@ -731,14 +731,14 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number, cachep = gfs2_glock_aspace_cachep; else cachep = gfs2_glock_cachep; - gl = kmem_cache_alloc(cachep, GFP_KERNEL); + gl = kmem_cache_alloc(cachep, GFP_NOFS); if (!gl) return -ENOMEM; memset(&gl->gl_lksb, 0, sizeof(struct dlm_lksb)); if (glops->go_flags & GLOF_LVB) { - gl->gl_lksb.sb_lvbptr = kzalloc(GFS2_MIN_LVB_SIZE, GFP_KERNEL); + gl->gl_lksb.sb_lvbptr = kzalloc(GFS2_MIN_LVB_SIZE, GFP_NOFS); if (!gl->gl_lksb.sb_lvbptr) { kmem_cache_free(cachep, gl); return -ENOMEM; @@ -1404,12 +1404,16 @@ __acquires(&lru_lock) gl = list_entry(list->next, struct gfs2_glock, gl_lru); list_del_init(&gl->gl_lru); if (!spin_trylock(&gl->gl_spin)) { +add_back_to_lru: list_add(&gl->gl_lru, &lru_list); atomic_inc(&lru_count); continue; } + if (test_and_set_bit(GLF_LOCK, &gl->gl_flags)) { + spin_unlock(&gl->gl_spin); + goto add_back_to_lru; + } clear_bit(GLF_LRU, &gl->gl_flags); - spin_unlock(&lru_lock); gl->gl_lockref.count++; if (demote_ok(gl)) handle_callback(gl, LM_ST_UNLOCKED, 0, false); @@ -1417,7 +1421,7 @@ __acquires(&lru_lock) if (queue_delayed_work(glock_workqueue, &gl->gl_work, 0) == 0) gl->gl_lockref.count--; spin_unlock(&gl->gl_spin); - spin_lock(&lru_lock); + cond_resched_lock(&lru_lock); } } @@ -1442,7 +1446,7 @@ static long gfs2_scan_glock_lru(int nr) gl = list_entry(lru_list.next, struct gfs2_glock, gl_lru); /* Test for being demotable */ - if (!test_and_set_bit(GLF_LOCK, &gl->gl_flags)) { + if (!test_bit(GLF_LOCK, &gl->gl_flags)) { list_move(&gl->gl_lru, &dispose); atomic_dec(&lru_count); freed++; diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c index fc1100781bbc..2ffc67dce87f 100644 --- a/fs/gfs2/glops.c +++ b/fs/gfs2/glops.c @@ -234,8 +234,8 @@ static void inode_go_sync(struct gfs2_glock *gl) * inode_go_inval - prepare a inode glock to be released * @gl: the glock * @flags: - * - * Normally we invlidate everything, but if we are moving into + * + * Normally we invalidate everything, but if we are moving into * LM_ST_DEFERRED from LM_ST_SHARED or LM_ST_EXCLUSIVE then we * can keep hold of the metadata, since it won't have changed. 
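The gfs2_dispose_glock_lru() change above is a textbook shape for pruning a long, spinlock-protected LRU: pull each victim off a private dispose list, trylock it rather than sleeping, hand contended entries back to the global LRU, and call cond_resched_lock() so the list lock is not monopolized. A minimal sketch of that shape, with every name (my_item, my_lru_lock, my_dispose_one) invented for illustration rather than taken from gfs2:

#include <linux/list.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

struct my_item {
        struct list_head lru;
        spinlock_t lock;
};

static LIST_HEAD(my_lru_list);
static DEFINE_SPINLOCK(my_lru_lock);

/* Hypothetical teardown of one item; called with the item lock held. */
void my_dispose_one(struct my_item *it);

/* Dispose of a private list of victims culled from my_lru_list earlier. */
static void my_dispose_list(struct list_head *dispose)
{
        struct my_item *it;

        spin_lock(&my_lru_lock);
        while (!list_empty(dispose)) {
                it = list_first_entry(dispose, struct my_item, lru);
                list_del_init(&it->lru);

                /* never sleep on a busy item while holding the LRU lock;
                 * hand it back to the global LRU instead */
                if (!spin_trylock(&it->lock)) {
                        list_add(&it->lru, &my_lru_list);
                        continue;
                }

                my_dispose_one(it);
                spin_unlock(&it->lock);

                /* give other CPUs a chance at my_lru_lock between items */
                cond_resched_lock(&my_lru_lock);
        }
        spin_unlock(&my_lru_lock);
}
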
* diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c index 91f274de1246..4fafea1c9ecf 100644 --- a/fs/gfs2/lock_dlm.c +++ b/fs/gfs2/lock_dlm.c @@ -1036,8 +1036,8 @@ static int set_recover_size(struct gfs2_sbd *sdp, struct dlm_slot *slots, new_size = old_size + RECOVER_SIZE_INC; - submit = kzalloc(new_size * sizeof(uint32_t), GFP_NOFS); - result = kzalloc(new_size * sizeof(uint32_t), GFP_NOFS); + submit = kcalloc(new_size, sizeof(uint32_t), GFP_NOFS); + result = kcalloc(new_size, sizeof(uint32_t), GFP_NOFS); if (!submit || !result) { kfree(submit); kfree(result); diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c index db629d1bd1bd..f4cb9c0d6bbd 100644 --- a/fs/gfs2/rgrp.c +++ b/fs/gfs2/rgrp.c @@ -337,7 +337,7 @@ static bool gfs2_unaligned_extlen(struct gfs2_rbm *rbm, u32 n_unaligned, u32 *le /** * gfs2_free_extlen - Return extent length of free blocks - * @rbm: Starting position + * @rrbm: Starting position * @len: Max length to check * * Starting at the block specified by the rbm, see how many free blocks @@ -2522,7 +2522,7 @@ void gfs2_rlist_alloc(struct gfs2_rgrp_list *rlist, unsigned int state) /** * gfs2_rlist_free - free a resource group list - * @list: the list of resource groups + * @rlist: the list of resource groups * */ diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c index 38cfcf5f6fce..6f0f590cc5a3 100644 --- a/fs/jbd2/transaction.c +++ b/fs/jbd2/transaction.c @@ -1588,9 +1588,12 @@ int jbd2_journal_stop(handle_t *handle) * to perform a synchronous write. We do this to detect the * case where a single process is doing a stream of sync * writes. No point in waiting for joiners in that case. + * + * Setting max_batch_time to 0 disables this completely. */ pid = current->pid; - if (handle->h_sync && journal->j_last_sync_writer != pid) { + if (handle->h_sync && journal->j_last_sync_writer != pid && + journal->j_max_batch_time) { u64 commit_time, trans_time; journal->j_last_sync_writer = pid; diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c index d171b98a6cdd..f973ae9b05f1 100644 --- a/fs/kernfs/mount.c +++ b/fs/kernfs/mount.c @@ -211,6 +211,36 @@ void kernfs_kill_sb(struct super_block *sb) kernfs_put(root_kn); } +/** + * kernfs_pin_sb: try to pin the superblock associated with a kernfs_root + * @kernfs_root: the kernfs_root in question + * @ns: the namespace tag + * + * Pin the superblock so the superblock won't be destroyed in subsequent + * operations. This can be used to block ->kill_sb() which may be useful + * for kernfs users which dynamically manage superblocks. + * + * Returns NULL if there's no superblock associated to this kernfs_root, or + * -EINVAL if the superblock is being freed. 
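kernfs_pin_sb(), described in the kernel-doc above and implemented just below, pins the superblock by taking an s_active reference with atomic_inc_not_zero(), so a caller that receives a non-NULL, non-error pointer must later drop that reference with deactivate_super(). A hedged sketch of caller-side usage; the -ENODEV choice and do_something_with() are assumptions for illustration, not part of this patch:

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/kernfs.h>

/* Hypothetical caller-side work done against the pinned hierarchy. */
void do_something_with(struct super_block *sb);

static int use_pinned_hierarchy(struct kernfs_root *root, const void *ns)
{
        struct super_block *sb;

        sb = kernfs_pin_sb(root, ns);
        if (IS_ERR(sb))
                return PTR_ERR(sb);     /* superblock is being torn down */
        if (!sb)
                return -ENODEV;         /* nothing mounted for this ns (assumed policy) */

        /* the superblock cannot be killed until the reference is dropped */
        do_something_with(sb);

        deactivate_super(sb);           /* release the pinned s_active reference */
        return 0;
}
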
+ */ +struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns) +{ + struct kernfs_super_info *info; + struct super_block *sb = NULL; + + mutex_lock(&kernfs_mutex); + list_for_each_entry(info, &root->supers, node) { + if (info->ns == ns) { + sb = info->sb; + if (!atomic_inc_not_zero(&info->sb->s_active)) + sb = ERR_PTR(-EINVAL); + break; + } + } + mutex_unlock(&kernfs_mutex); + return sb; +} + void __init kernfs_init(void) { kernfs_node_cache = kmem_cache_create("kernfs_node_cache", diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c index 8f98138cbc43..f11b9eed0de1 100644 --- a/fs/nfs/direct.c +++ b/fs/nfs/direct.c @@ -756,7 +756,6 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr) spin_unlock(&dreq->lock); while (!list_empty(&hdr->pages)) { - bool do_destroy = true; req = nfs_list_entry(hdr->pages.next); nfs_list_remove_request(req); @@ -765,7 +764,6 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr) case NFS_IOHDR_NEED_COMMIT: kref_get(&req->wb_kref); nfs_mark_request_commit(req, hdr->lseg, &cinfo); - do_destroy = false; } nfs_unlock_and_release_request(req); } diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h index 82ddbf46660e..f415cbf9f6c3 100644 --- a/fs/nfs/internal.h +++ b/fs/nfs/internal.h @@ -244,6 +244,7 @@ void nfs_pgio_data_release(struct nfs_pgio_data *); int nfs_generic_pgio(struct nfs_pageio_descriptor *, struct nfs_pgio_header *); int nfs_initiate_pgio(struct rpc_clnt *, struct nfs_pgio_data *, const struct rpc_call_ops *, int, int); +void nfs_free_request(struct nfs_page *req); static inline void nfs_iocounter_init(struct nfs_io_counter *c) { diff --git a/fs/nfs/nfs3acl.c b/fs/nfs/nfs3acl.c index 871d6eda8dba..8f854dde4150 100644 --- a/fs/nfs/nfs3acl.c +++ b/fs/nfs/nfs3acl.c @@ -247,3 +247,46 @@ const struct xattr_handler *nfs3_xattr_handlers[] = { &posix_acl_default_xattr_handler, NULL, }; + +static int +nfs3_list_one_acl(struct inode *inode, int type, const char *name, void *data, + size_t size, ssize_t *result) +{ + struct posix_acl *acl; + char *p = data + *result; + + acl = get_acl(inode, type); + if (!acl) + return 0; + + posix_acl_release(acl); + + *result += strlen(name); + *result += 1; + if (!size) + return 0; + if (*result > size) + return -ERANGE; + + strcpy(p, name); + return 0; +} + +ssize_t +nfs3_listxattr(struct dentry *dentry, char *data, size_t size) +{ + struct inode *inode = dentry->d_inode; + ssize_t result = 0; + int error; + + error = nfs3_list_one_acl(inode, ACL_TYPE_ACCESS, + POSIX_ACL_XATTR_ACCESS, data, size, &result); + if (error) + return error; + + error = nfs3_list_one_acl(inode, ACL_TYPE_DEFAULT, + POSIX_ACL_XATTR_DEFAULT, data, size, &result); + if (error) + return error; + return result; +} diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c index e7daa42bbc86..f0afa291fd58 100644 --- a/fs/nfs/nfs3proc.c +++ b/fs/nfs/nfs3proc.c @@ -885,7 +885,7 @@ static const struct inode_operations nfs3_dir_inode_operations = { .getattr = nfs_getattr, .setattr = nfs_setattr, #ifdef CONFIG_NFS_V3_ACL - .listxattr = generic_listxattr, + .listxattr = nfs3_listxattr, .getxattr = generic_getxattr, .setxattr = generic_setxattr, .removexattr = generic_removexattr, @@ -899,7 +899,7 @@ static const struct inode_operations nfs3_file_inode_operations = { .getattr = nfs_getattr, .setattr = nfs_setattr, #ifdef CONFIG_NFS_V3_ACL - .listxattr = generic_listxattr, + .listxattr = nfs3_listxattr, .getxattr = generic_getxattr, .setxattr = generic_setxattr, .removexattr = generic_removexattr, diff --git 
a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c index b6ee3a6ee96d..17fab89f6358 100644 --- a/fs/nfs/pagelist.c +++ b/fs/nfs/pagelist.c @@ -29,8 +29,6 @@ static struct kmem_cache *nfs_page_cachep; static const struct rpc_call_ops nfs_pgio_common_ops; -static void nfs_free_request(struct nfs_page *); - static bool nfs_pgarray_set(struct nfs_page_array *p, unsigned int pagecount) { p->npages = pagecount; @@ -239,20 +237,28 @@ nfs_page_group_init(struct nfs_page *req, struct nfs_page *prev) WARN_ON_ONCE(prev == req); if (!prev) { + /* a head request */ req->wb_head = req; req->wb_this_page = req; } else { + /* a subrequest */ WARN_ON_ONCE(prev->wb_this_page != prev->wb_head); WARN_ON_ONCE(!test_bit(PG_HEADLOCK, &prev->wb_head->wb_flags)); req->wb_head = prev->wb_head; req->wb_this_page = prev->wb_this_page; prev->wb_this_page = req; + /* All subrequests take a ref on the head request until + * nfs_page_group_destroy is called */ + kref_get(&req->wb_head->wb_kref); + /* grab extra ref if head request has extra ref from * the write/commit path to handle handoff between write * and commit lists */ - if (test_bit(PG_INODE_REF, &prev->wb_head->wb_flags)) + if (test_bit(PG_INODE_REF, &prev->wb_head->wb_flags)) { + set_bit(PG_INODE_REF, &req->wb_flags); kref_get(&req->wb_kref); + } } } @@ -269,6 +275,10 @@ nfs_page_group_destroy(struct kref *kref) struct nfs_page *req = container_of(kref, struct nfs_page, wb_kref); struct nfs_page *tmp, *next; + /* subrequests must release the ref on the head request */ + if (req->wb_head != req) + nfs_release_request(req->wb_head); + if (!nfs_page_group_sync_on_bit(req, PG_TEARDOWN)) return; @@ -394,7 +404,7 @@ static void nfs_clear_request(struct nfs_page *req) * * Note: Should never be called with the spinlock held! */ -static void nfs_free_request(struct nfs_page *req) +void nfs_free_request(struct nfs_page *req) { WARN_ON_ONCE(req->wb_this_page != req); @@ -925,7 +935,6 @@ static int __nfs_pageio_add_request(struct nfs_pageio_descriptor *desc, nfs_pageio_doio(desc); if (desc->pg_error < 0) return 0; - desc->pg_moreio = 0; if (desc->pg_recoalesce) return 0; /* retry add_request for this subreq */ @@ -972,6 +981,7 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc) desc->pg_count = 0; desc->pg_base = 0; desc->pg_recoalesce = 0; + desc->pg_moreio = 0; while (!list_empty(&head)) { struct nfs_page *req; diff --git a/fs/nfs/write.c b/fs/nfs/write.c index 98ff061ccaf3..5e2f10304548 100644 --- a/fs/nfs/write.c +++ b/fs/nfs/write.c @@ -46,6 +46,7 @@ static const struct rpc_call_ops nfs_commit_ops; static const struct nfs_pgio_completion_ops nfs_async_write_completion_ops; static const struct nfs_commit_completion_ops nfs_commit_completion_ops; static const struct nfs_rw_ops nfs_rw_write_ops; +static void nfs_clear_request_commit(struct nfs_page *req); static struct kmem_cache *nfs_wdata_cachep; static mempool_t *nfs_wdata_mempool; @@ -91,8 +92,15 @@ static void nfs_context_set_write_error(struct nfs_open_context *ctx, int error) set_bit(NFS_CONTEXT_ERROR_WRITE, &ctx->flags); } +/* + * nfs_page_find_head_request_locked - find head request associated with @page + * + * must be called while holding the inode lock. + * + * returns matching head request with reference held, or NULL if not found. 
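The nfs_page_find_head_request_locked()/nfs_page_find_head_request() pair documented above (bodies follow below) uses the standard discipline for lock-protected lookups: the _locked variant walks the list and takes a kref before the caller ever drops the inode lock, and the plain variant is just spin_lock, lookup, spin_unlock. A generic sketch of that split, with obj, cache and the field names made up for illustration:

#include <linux/kref.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct obj {
        struct list_head link;
        struct kref ref;
        unsigned long key;
};

struct cache {
        spinlock_t lock;
        struct list_head items;
};

static struct obj *obj_find_locked(struct cache *c, unsigned long key)
{
        struct obj *o;

        lockdep_assert_held(&c->lock);
        list_for_each_entry(o, &c->items, link) {
                if (o->key == key) {
                        kref_get(&o->ref);      /* pin before the lock drops */
                        return o;
                }
        }
        return NULL;
}

static struct obj *obj_find(struct cache *c, unsigned long key)
{
        struct obj *o;

        spin_lock(&c->lock);
        o = obj_find_locked(c, key);
        spin_unlock(&c->lock);
        return o;                       /* caller must kref_put() when done */
}
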
+ */ static struct nfs_page * -nfs_page_find_request_locked(struct nfs_inode *nfsi, struct page *page) +nfs_page_find_head_request_locked(struct nfs_inode *nfsi, struct page *page) { struct nfs_page *req = NULL; @@ -104,25 +112,33 @@ nfs_page_find_request_locked(struct nfs_inode *nfsi, struct page *page) /* Linearly search the commit list for the correct req */ list_for_each_entry_safe(freq, t, &nfsi->commit_info.list, wb_list) { if (freq->wb_page == page) { - req = freq; + req = freq->wb_head; break; } } } - if (req) + if (req) { + WARN_ON_ONCE(req->wb_head != req); + kref_get(&req->wb_kref); + } return req; } -static struct nfs_page *nfs_page_find_request(struct page *page) +/* + * nfs_page_find_head_request - find head request associated with @page + * + * returns matching head request with reference held, or NULL if not found. + */ +static struct nfs_page *nfs_page_find_head_request(struct page *page) { struct inode *inode = page_file_mapping(page)->host; struct nfs_page *req = NULL; spin_lock(&inode->i_lock); - req = nfs_page_find_request_locked(NFS_I(inode), page); + req = nfs_page_find_head_request_locked(NFS_I(inode), page); spin_unlock(&inode->i_lock); return req; } @@ -274,36 +290,246 @@ static void nfs_end_page_writeback(struct nfs_page *req) clear_bdi_congested(&nfss->backing_dev_info, BLK_RW_ASYNC); } -static struct nfs_page *nfs_find_and_lock_request(struct page *page, bool nonblock) + +/* nfs_page_group_clear_bits + * @req - an nfs request + * clears all page group related bits from @req + */ +static void +nfs_page_group_clear_bits(struct nfs_page *req) +{ + clear_bit(PG_TEARDOWN, &req->wb_flags); + clear_bit(PG_UNLOCKPAGE, &req->wb_flags); + clear_bit(PG_UPTODATE, &req->wb_flags); + clear_bit(PG_WB_END, &req->wb_flags); + clear_bit(PG_REMOVE, &req->wb_flags); +} + + +/* + * nfs_unroll_locks_and_wait - unlock all newly locked reqs and wait on @req + * + * this is a helper function for nfs_lock_and_join_requests + * + * @inode - inode associated with request page group, must be holding inode lock + * @head - head request of page group, must be holding head lock + * @req - request that couldn't lock and needs to wait on the req bit lock + * @nonblock - if true, don't actually wait + * + * NOTE: this must be called holding page_group bit lock and inode spin lock + * and BOTH will be released before returning. + * + * returns 0 on success, < 0 on error. 
+ */ +static int +nfs_unroll_locks_and_wait(struct inode *inode, struct nfs_page *head, + struct nfs_page *req, bool nonblock) + __releases(&inode->i_lock) +{ + struct nfs_page *tmp; + int ret; + + /* relinquish all the locks successfully grabbed this run */ + for (tmp = head ; tmp != req; tmp = tmp->wb_this_page) + nfs_unlock_request(tmp); + + WARN_ON_ONCE(test_bit(PG_TEARDOWN, &req->wb_flags)); + + /* grab a ref on the request that will be waited on */ + kref_get(&req->wb_kref); + + nfs_page_group_unlock(head); + spin_unlock(&inode->i_lock); + + /* release ref from nfs_page_find_head_request_locked */ + nfs_release_request(head); + + if (!nonblock) + ret = nfs_wait_on_request(req); + else + ret = -EAGAIN; + nfs_release_request(req); + + return ret; +} + +/* + * nfs_destroy_unlinked_subrequests - destroy recently unlinked subrequests + * + * @destroy_list - request list (using wb_this_page) terminated by @old_head + * @old_head - the old head of the list + * + * All subrequests must be locked and removed from all lists, so at this point + * they are only "active" in this function, and possibly in nfs_wait_on_request + * with a reference held by some other context. + */ +static void +nfs_destroy_unlinked_subrequests(struct nfs_page *destroy_list, + struct nfs_page *old_head) +{ + while (destroy_list) { + struct nfs_page *subreq = destroy_list; + + destroy_list = (subreq->wb_this_page == old_head) ? + NULL : subreq->wb_this_page; + + WARN_ON_ONCE(old_head != subreq->wb_head); + + /* make sure old group is not used */ + subreq->wb_head = subreq; + subreq->wb_this_page = subreq; + + nfs_clear_request_commit(subreq); + + /* subreq is now totally disconnected from page group or any + * write / commit lists. last chance to wake any waiters */ + nfs_unlock_request(subreq); + + if (!test_bit(PG_TEARDOWN, &subreq->wb_flags)) { + /* release ref on old head request */ + nfs_release_request(old_head); + + nfs_page_group_clear_bits(subreq); + + /* release the PG_INODE_REF reference */ + if (test_and_clear_bit(PG_INODE_REF, &subreq->wb_flags)) + nfs_release_request(subreq); + else + WARN_ON_ONCE(1); + } else { + WARN_ON_ONCE(test_bit(PG_CLEAN, &subreq->wb_flags)); + /* zombie requests have already released the last + * reference and were waiting on the rest of the + * group to complete. Since it's no longer part of a + * group, simply free the request */ + nfs_page_group_clear_bits(subreq); + nfs_free_request(subreq); + } + } +} + +/* + * nfs_lock_and_join_requests - join all subreqs to the head req and return + * a locked reference, cancelling any pending + * operations for this page. + * + * @page - the page used to lookup the "page group" of nfs_page structures + * @nonblock - if true, don't block waiting for request locks + * + * This function joins all sub requests to the head request by first + * locking all requests in the group, cancelling any pending operations + * and finally updating the head request to cover the whole range covered by + * the (former) group. All subrequests are removed from any write or commit + * lists, unlinked from the group and destroyed. + * + * Returns a locked, referenced pointer to the head request - which after + * this call is guaranteed to be the only request associated with the page. + * Returns NULL if no requests are found for @page, or a ERR_PTR if an + * error was encountered. 
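nfs_lock_and_join_requests(), described above and implemented below, is an instance of the classic lock-all-or-unroll pattern: take non-blocking locks across the whole group, and on the first contended member release everything acquired so far, wait for the busy member with no locks held, then retry from the top. A rough sketch of just that control flow; the grp and member types and the try_lock_member()/wait_for_member() helpers are invented, this is not the NFS implementation:

#include <linux/errno.h>
#include <linux/list.h>
#include <linux/types.h>

struct member {
        struct list_head link;
        /* per-member lock/state would live here in a real implementation */
};

struct grp {
        struct list_head members;
};

/* The three helpers below are hypothetical stand-ins. */
bool try_lock_member(struct member *m);
void unlock_member(struct member *m);
void wait_for_member(struct member *m);

static int lock_whole_group(struct grp *g, bool nonblock)
{
        struct member *m, *failed;

retry:
        list_for_each_entry(m, &g->members, link) {
                if (try_lock_member(m))
                        continue;

                failed = m;
                /* unroll: release every lock taken on this pass */
                list_for_each_entry(m, &g->members, link) {
                        if (m == failed)
                                break;
                        unlock_member(m);
                }
                if (nonblock)
                        return -EAGAIN;
                wait_for_member(failed);        /* sleep with no locks held */
                goto retry;
        }
        return 0;                               /* whole group is now locked */
}
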
+ */ +static struct nfs_page * +nfs_lock_and_join_requests(struct page *page, bool nonblock) { struct inode *inode = page_file_mapping(page)->host; - struct nfs_page *req; + struct nfs_page *head, *subreq; + struct nfs_page *destroy_list = NULL; + unsigned int total_bytes; int ret; +try_again: + total_bytes = 0; + + WARN_ON_ONCE(destroy_list); + spin_lock(&inode->i_lock); - for (;;) { - req = nfs_page_find_request_locked(NFS_I(inode), page); - if (req == NULL) - break; - if (nfs_lock_request(req)) - break; - /* Note: If we hold the page lock, as is the case in nfs_writepage, - * then the call to nfs_lock_request() will always - * succeed provided that someone hasn't already marked the - * request as dirty (in which case we don't care). - */ + + /* + * A reference is taken only on the head request which acts as a + * reference to the whole page group - the group will not be destroyed + * until the head reference is released. + */ + head = nfs_page_find_head_request_locked(NFS_I(inode), page); + + if (!head) { spin_unlock(&inode->i_lock); - if (!nonblock) - ret = nfs_wait_on_request(req); - else - ret = -EAGAIN; - nfs_release_request(req); - if (ret != 0) + return NULL; + } + + /* lock each request in the page group */ + nfs_page_group_lock(head); + subreq = head; + do { + /* + * Subrequests are always contiguous, non overlapping + * and in order. If not, it's a programming error. + */ + WARN_ON_ONCE(subreq->wb_offset != + (head->wb_offset + total_bytes)); + + /* keep track of how many bytes this group covers */ + total_bytes += subreq->wb_bytes; + + if (!nfs_lock_request(subreq)) { + /* releases page group bit lock and + * inode spin lock and all references */ + ret = nfs_unroll_locks_and_wait(inode, head, + subreq, nonblock); + + if (ret == 0) + goto try_again; + return ERR_PTR(ret); - spin_lock(&inode->i_lock); + } + + subreq = subreq->wb_this_page; + } while (subreq != head); + + /* Now that all requests are locked, make sure they aren't on any list. + * Commit list removal accounting is done after locks are dropped */ + subreq = head; + do { + nfs_list_remove_request(subreq); + subreq = subreq->wb_this_page; + } while (subreq != head); + + /* unlink subrequests from head, destroy them later */ + if (head->wb_this_page != head) { + /* destroy list will be terminated by head */ + destroy_list = head->wb_this_page; + head->wb_this_page = head; + + /* change head request to cover whole range that + * the former page group covered */ + head->wb_bytes = total_bytes; } + + /* + * prepare head request to be added to new pgio descriptor + */ + nfs_page_group_clear_bits(head); + + /* + * some part of the group was still on the inode list - otherwise + * the group wouldn't be involved in async write. + * grab a reference for the head request, iff it needs one. 
+ */ + if (!test_and_set_bit(PG_INODE_REF, &head->wb_flags)) + kref_get(&head->wb_kref); + + nfs_page_group_unlock(head); + + /* drop lock to clear_request_commit the head req and clean up + * requests on destroy list */ spin_unlock(&inode->i_lock); - return req; + + nfs_destroy_unlinked_subrequests(destroy_list, head); + + /* clean up commit list state */ + nfs_clear_request_commit(head); + + /* still holds ref on head from nfs_page_find_head_request_locked + * and still has lock on head from lock loop */ + return head; } /* @@ -316,7 +542,7 @@ static int nfs_page_async_flush(struct nfs_pageio_descriptor *pgio, struct nfs_page *req; int ret = 0; - req = nfs_find_and_lock_request(page, nonblock); + req = nfs_lock_and_join_requests(page, nonblock); if (!req) goto out; ret = PTR_ERR(req); @@ -448,7 +674,9 @@ static void nfs_inode_add_request(struct inode *inode, struct nfs_page *req) set_page_private(req->wb_page, (unsigned long)req); } nfsi->npages++; - set_bit(PG_INODE_REF, &req->wb_flags); + /* this a head request for a page group - mark it as having an + * extra reference so sub groups can follow suit */ + WARN_ON(test_and_set_bit(PG_INODE_REF, &req->wb_flags)); kref_get(&req->wb_kref); spin_unlock(&inode->i_lock); } @@ -474,7 +702,9 @@ static void nfs_inode_remove_request(struct nfs_page *req) nfsi->npages--; spin_unlock(&inode->i_lock); } - nfs_release_request(req); + + if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags)) + nfs_release_request(req); } static void @@ -638,7 +868,6 @@ static void nfs_write_completion(struct nfs_pgio_header *hdr) { struct nfs_commit_info cinfo; unsigned long bytes = 0; - bool do_destroy; if (test_bit(NFS_IOHDR_REDO, &hdr->flags)) goto out; @@ -668,7 +897,6 @@ remove_req: next: nfs_unlock_request(req); nfs_end_page_writeback(req); - do_destroy = !test_bit(NFS_IOHDR_NEED_COMMIT, &hdr->flags); nfs_release_request(req); } out: @@ -769,7 +997,7 @@ static struct nfs_page *nfs_try_to_update_request(struct inode *inode, spin_lock(&inode->i_lock); for (;;) { - req = nfs_page_find_request_locked(NFS_I(inode), page); + req = nfs_page_find_head_request_locked(NFS_I(inode), page); if (req == NULL) goto out_unlock; @@ -877,7 +1105,7 @@ int nfs_flush_incompatible(struct file *file, struct page *page) * dropped page. 
*/ do { - req = nfs_page_find_request(page); + req = nfs_page_find_head_request(page); if (req == NULL) return 0; l_ctx = req->wb_lock_context; @@ -1569,27 +1797,28 @@ int nfs_wb_page_cancel(struct inode *inode, struct page *page) struct nfs_page *req; int ret = 0; - for (;;) { - wait_on_page_writeback(page); - req = nfs_page_find_request(page); - if (req == NULL) - break; - if (nfs_lock_request(req)) { - nfs_clear_request_commit(req); - nfs_inode_remove_request(req); - /* - * In case nfs_inode_remove_request has marked the - * page as being dirty - */ - cancel_dirty_page(page, PAGE_CACHE_SIZE); - nfs_unlock_and_release_request(req); - break; - } - ret = nfs_wait_on_request(req); - nfs_release_request(req); - if (ret < 0) - break; + wait_on_page_writeback(page); + + /* blocking call to cancel all requests and join to a single (head) + * request */ + req = nfs_lock_and_join_requests(page, false); + + if (IS_ERR(req)) { + ret = PTR_ERR(req); + } else if (req) { + /* all requests from this page have been cancelled by + * nfs_lock_and_join_requests, so just remove the head + * request from the inode / page_private pointer and + * release it */ + nfs_inode_remove_request(req); + /* + * In case nfs_inode_remove_request has marked the + * page as being dirty + */ + cancel_dirty_page(page, PAGE_CACHE_SIZE); + nfs_unlock_and_release_request(req); } + return ret; } diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c index 2fc7abebeb9b..b56b1cc02718 100644 --- a/fs/nfsd/nfs4xdr.c +++ b/fs/nfsd/nfs4xdr.c @@ -2641,7 +2641,7 @@ nfsd4_encode_rdattr_error(struct xdr_stream *xdr, __be32 nfserr) { __be32 *p; - p = xdr_reserve_space(xdr, 6); + p = xdr_reserve_space(xdr, 20); if (!p) return NULL; *p++ = htonl(2); diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c index 9cd5f63715c0..7f30bdc57d13 100644 --- a/fs/quota/dquot.c +++ b/fs/quota/dquot.c @@ -702,6 +702,7 @@ dqcache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) struct dquot *dquot; unsigned long freed = 0; + spin_lock(&dq_list_lock); head = free_dquots.prev; while (head != &free_dquots && sc->nr_to_scan) { dquot = list_entry(head, struct dquot, dq_free); @@ -713,6 +714,7 @@ dqcache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) freed++; head = free_dquots.prev; } + spin_unlock(&dq_list_lock); return freed; } diff --git a/fs/xfs/xfs_bmap.c b/fs/xfs/xfs_bmap.c index 96175df211b1..75c3fe5f3d9d 100644 --- a/fs/xfs/xfs_bmap.c +++ b/fs/xfs/xfs_bmap.c @@ -4298,8 +4298,8 @@ xfs_bmapi_delay( } -int -__xfs_bmapi_allocate( +static int +xfs_bmapi_allocate( struct xfs_bmalloca *bma) { struct xfs_mount *mp = bma->ip->i_mount; @@ -4578,9 +4578,6 @@ xfs_bmapi_write( bma.flist = flist; bma.firstblock = firstblock; - if (flags & XFS_BMAPI_STACK_SWITCH) - bma.stack_switch = 1; - while (bno < end && n < *nmap) { inhole = eof || bma.got.br_startoff > bno; wasdelay = !inhole && isnullstartblock(bma.got.br_startblock); diff --git a/fs/xfs/xfs_bmap.h b/fs/xfs/xfs_bmap.h index 38ba36e9b2f0..b879ca56a64c 100644 --- a/fs/xfs/xfs_bmap.h +++ b/fs/xfs/xfs_bmap.h @@ -77,7 +77,6 @@ typedef struct xfs_bmap_free * from written to unwritten, otherwise convert from unwritten to written. 
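The dqcache_shrink_scan() hunk above adds the missing dq_list_lock around the walk of the global free_dquots list; any shrinker scan callback that prunes a shared list needs the same protection because it runs concurrently with normal allocation and free paths. A generic sketch of such a callback under stated assumptions (my_lock, my_free_list, my_obj and prune_one() are placeholders, and prune_one() must not sleep here):

#include <linux/list.h>
#include <linux/shrinker.h>
#include <linux/spinlock.h>

struct my_obj {
        struct list_head free;
};

static LIST_HEAD(my_free_list);
static DEFINE_SPINLOCK(my_lock);

/* Hypothetical per-object teardown; must not sleep under my_lock. */
void prune_one(struct my_obj *o);

static unsigned long my_shrink_scan(struct shrinker *shrink,
                                    struct shrink_control *sc)
{
        unsigned long freed = 0;

        spin_lock(&my_lock);            /* the free list is shared */
        while (!list_empty(&my_free_list) && sc->nr_to_scan--) {
                struct my_obj *o = list_last_entry(&my_free_list,
                                                   struct my_obj, free);

                list_del_init(&o->free);
                prune_one(o);
                freed++;
        }
        spin_unlock(&my_lock);

        return freed;
}
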
*/ #define XFS_BMAPI_CONVERT 0x040 -#define XFS_BMAPI_STACK_SWITCH 0x080 #define XFS_BMAPI_FLAGS \ { XFS_BMAPI_ENTIRE, "ENTIRE" }, \ @@ -86,8 +85,7 @@ typedef struct xfs_bmap_free { XFS_BMAPI_PREALLOC, "PREALLOC" }, \ { XFS_BMAPI_IGSTATE, "IGSTATE" }, \ { XFS_BMAPI_CONTIG, "CONTIG" }, \ - { XFS_BMAPI_CONVERT, "CONVERT" }, \ - { XFS_BMAPI_STACK_SWITCH, "STACK_SWITCH" } + { XFS_BMAPI_CONVERT, "CONVERT" } static inline int xfs_bmapi_aflag(int w) diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c index 703b3ec1796c..64731ef3324d 100644 --- a/fs/xfs/xfs_bmap_util.c +++ b/fs/xfs/xfs_bmap_util.c @@ -249,59 +249,6 @@ xfs_bmap_rtalloc( } /* - * Stack switching interfaces for allocation - */ -static void -xfs_bmapi_allocate_worker( - struct work_struct *work) -{ - struct xfs_bmalloca *args = container_of(work, - struct xfs_bmalloca, work); - unsigned long pflags; - unsigned long new_pflags = PF_FSTRANS; - - /* - * we are in a transaction context here, but may also be doing work - * in kswapd context, and hence we may need to inherit that state - * temporarily to ensure that we don't block waiting for memory reclaim - * in any way. - */ - if (args->kswapd) - new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD; - - current_set_flags_nested(&pflags, new_pflags); - - args->result = __xfs_bmapi_allocate(args); - complete(args->done); - - current_restore_flags_nested(&pflags, new_pflags); -} - -/* - * Some allocation requests often come in with little stack to work on. Push - * them off to a worker thread so there is lots of stack to use. Otherwise just - * call directly to avoid the context switch overhead here. - */ -int -xfs_bmapi_allocate( - struct xfs_bmalloca *args) -{ - DECLARE_COMPLETION_ONSTACK(done); - - if (!args->stack_switch) - return __xfs_bmapi_allocate(args); - - - args->done = &done; - args->kswapd = current_is_kswapd(); - INIT_WORK_ONSTACK(&args->work, xfs_bmapi_allocate_worker); - queue_work(xfs_alloc_wq, &args->work); - wait_for_completion(&done); - destroy_work_on_stack(&args->work); - return args->result; -} - -/* * Check if the endoff is outside the last extent. If so the caller will grow * the allocation to a stripe unit boundary. All offsets are considered outside * the end of file for an empty fork, so 1 is returned in *eof in that case. diff --git a/fs/xfs/xfs_bmap_util.h b/fs/xfs/xfs_bmap_util.h index 075f72232a64..2fdb72d2c908 100644 --- a/fs/xfs/xfs_bmap_util.h +++ b/fs/xfs/xfs_bmap_util.h @@ -55,8 +55,6 @@ struct xfs_bmalloca { bool userdata;/* set if is user data */ bool aeof; /* allocated space at eof */ bool conv; /* overwriting unwritten extents */ - bool stack_switch; - bool kswapd; /* allocation in kswapd context */ int flags; struct completion *done; struct work_struct work; @@ -66,8 +64,6 @@ struct xfs_bmalloca { int xfs_bmap_finish(struct xfs_trans **tp, struct xfs_bmap_free *flist, int *committed); int xfs_bmap_rtalloc(struct xfs_bmalloca *ap); -int xfs_bmapi_allocate(struct xfs_bmalloca *args); -int __xfs_bmapi_allocate(struct xfs_bmalloca *args); int xfs_bmap_eof(struct xfs_inode *ip, xfs_fileoff_t endoff, int whichfork, int *eof); int xfs_bmap_count_blocks(struct xfs_trans *tp, struct xfs_inode *ip, diff --git a/fs/xfs/xfs_btree.c b/fs/xfs/xfs_btree.c index bf810c6baf2b..cf893bc1e373 100644 --- a/fs/xfs/xfs_btree.c +++ b/fs/xfs/xfs_btree.c @@ -33,6 +33,7 @@ #include "xfs_error.h" #include "xfs_trace.h" #include "xfs_cksum.h" +#include "xfs_alloc.h" /* * Cursor allocation zone. @@ -2323,7 +2324,7 @@ error1: * record (to be inserted into parent). 
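The xfs_bmap_util.c hunk above deletes the old xfs_bmapi_allocate_worker() stack-switching wrapper, and the same hand-off reappears below at the BMBT split level as xfs_btree_split_worker(). The underlying pattern is generic: pack the arguments plus an on-stack work item and completion into a struct, queue it on a workqueue so the body runs on a worker thread with a fresh stack, and sleep until it completes. A minimal sketch, assuming a placeholder deep_stack_fn() and using system_wq where xfs uses its own xfs_alloc_wq:

#include <linux/completion.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

/* Hypothetical function that needs a lot of stack to run safely. */
int deep_stack_fn(int a, int b);

struct deep_args {
        int a, b;
        int result;
        struct completion *done;
        struct work_struct work;
};

static void deep_worker(struct work_struct *work)
{
        struct deep_args *args = container_of(work, struct deep_args, work);

        args->result = deep_stack_fn(args->a, args->b);
        complete(args->done);
}

static int run_with_fresh_stack(int a, int b)
{
        struct deep_args args = { .a = a, .b = b };
        DECLARE_COMPLETION_ONSTACK(done);

        args.done = &done;
        INIT_WORK_ONSTACK(&args.work, deep_worker);
        queue_work(system_wq, &args.work);      /* run on a worker thread */
        wait_for_completion(&done);             /* sleep until it finishes */
        destroy_work_on_stack(&args.work);
        return args.result;
}
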
*/ STATIC int /* error */ -xfs_btree_split( +__xfs_btree_split( struct xfs_btree_cur *cur, int level, union xfs_btree_ptr *ptrp, @@ -2503,6 +2504,85 @@ error0: return error; } +struct xfs_btree_split_args { + struct xfs_btree_cur *cur; + int level; + union xfs_btree_ptr *ptrp; + union xfs_btree_key *key; + struct xfs_btree_cur **curp; + int *stat; /* success/failure */ + int result; + bool kswapd; /* allocation in kswapd context */ + struct completion *done; + struct work_struct work; +}; + +/* + * Stack switching interfaces for allocation + */ +static void +xfs_btree_split_worker( + struct work_struct *work) +{ + struct xfs_btree_split_args *args = container_of(work, + struct xfs_btree_split_args, work); + unsigned long pflags; + unsigned long new_pflags = PF_FSTRANS; + + /* + * we are in a transaction context here, but may also be doing work + * in kswapd context, and hence we may need to inherit that state + * temporarily to ensure that we don't block waiting for memory reclaim + * in any way. + */ + if (args->kswapd) + new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD; + + current_set_flags_nested(&pflags, new_pflags); + + args->result = __xfs_btree_split(args->cur, args->level, args->ptrp, + args->key, args->curp, args->stat); + complete(args->done); + + current_restore_flags_nested(&pflags, new_pflags); +} + +/* + * BMBT split requests often come in with little stack to work on. Push + * them off to a worker thread so there is lots of stack to use. For the other + * btree types, just call directly to avoid the context switch overhead here. + */ +STATIC int /* error */ +xfs_btree_split( + struct xfs_btree_cur *cur, + int level, + union xfs_btree_ptr *ptrp, + union xfs_btree_key *key, + struct xfs_btree_cur **curp, + int *stat) /* success/failure */ +{ + struct xfs_btree_split_args args; + DECLARE_COMPLETION_ONSTACK(done); + + if (cur->bc_btnum != XFS_BTNUM_BMAP) + return __xfs_btree_split(cur, level, ptrp, key, curp, stat); + + args.cur = cur; + args.level = level; + args.ptrp = ptrp; + args.key = key; + args.curp = curp; + args.stat = stat; + args.done = &done; + args.kswapd = current_is_kswapd(); + INIT_WORK_ONSTACK(&args.work, xfs_btree_split_worker); + queue_work(xfs_alloc_wq, &args.work); + wait_for_completion(&done); + destroy_work_on_stack(&args.work); + return args.result; +} + + /* * Copy the old inode root contents into a real block and make the * broot point to it. diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index 6c5eb4c551e3..6d3ec2b6ee29 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -749,8 +749,7 @@ xfs_iomap_write_allocate( * pointer that the caller gave to us. */ error = xfs_bmapi_write(tp, ip, map_start_fsb, - count_fsb, - XFS_BMAPI_STACK_SWITCH, + count_fsb, 0, &first_block, 1, imap, &nimaps, &free_list); if (error) diff --git a/fs/xfs/xfs_sb.c b/fs/xfs/xfs_sb.c index c3453b11f563..7703fa6770ff 100644 --- a/fs/xfs/xfs_sb.c +++ b/fs/xfs/xfs_sb.c @@ -483,10 +483,16 @@ xfs_sb_quota_to_disk( } /* - * GQUOTINO and PQUOTINO cannot be used together in versions - * of superblock that do not have pquotino. from->sb_flags - * tells us which quota is active and should be copied to - * disk. + * GQUOTINO and PQUOTINO cannot be used together in versions of + * superblock that do not have pquotino. from->sb_flags tells us which + * quota is active and should be copied to disk. If neither are active, + * make sure we write NULLFSINO to the sb_gquotino field as a quota + * inode value of "0" is invalid when the XFS_SB_VERSION_QUOTA feature + * bit is set. 
+ * + * Note that we don't need to handle the sb_uquotino or sb_pquotino here + * as they do not require any translation. Hence the main sb field loop + * will write them appropriately from the in-core superblock. */ if ((*fields & XFS_SB_GQUOTINO) && (from->sb_qflags & XFS_GQUOTA_ACCT)) @@ -494,6 +500,17 @@ xfs_sb_quota_to_disk( else if ((*fields & XFS_SB_PQUOTINO) && (from->sb_qflags & XFS_PQUOTA_ACCT)) to->sb_gquotino = cpu_to_be64(from->sb_pquotino); + else { + /* + * We can't rely on just the fields being logged to tell us + * that it is safe to write NULLFSINO - we should only do that + * if quotas are not actually enabled. Hence only write + * NULLFSINO if both in-core quota inodes are NULL. + */ + if (from->sb_gquotino == NULLFSINO && + from->sb_pquotino == NULLFSINO) + to->sb_gquotino = cpu_to_be64(NULLFSINO); + } *fields &= ~(XFS_SB_PQUOTINO | XFS_SB_GQUOTINO); } diff --git a/include/acpi/video.h b/include/acpi/video.h index ea4c7bbded4d..843ef1adfbfa 100644 --- a/include/acpi/video.h +++ b/include/acpi/video.h @@ -22,6 +22,7 @@ extern void acpi_video_unregister(void); extern void acpi_video_unregister_backlight(void); extern int acpi_video_get_edid(struct acpi_device *device, int type, int device_id, void **edid); +extern bool acpi_video_verify_backlight_support(void); #else static inline int acpi_video_register(void) { return 0; } static inline void acpi_video_unregister(void) { return; } @@ -31,6 +32,7 @@ static inline int acpi_video_get_edid(struct acpi_device *device, int type, { return -ENODEV; } +static inline bool acpi_video_verify_backlight_support(void) { return false; } #endif #endif diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h index 471ba48c7ae4..c1c0b0cf39b4 100644 --- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -693,7 +693,7 @@ . = ALIGN(PAGE_SIZE); \ *(.data..percpu..page_aligned) \ . = ALIGN(cacheline); \ - *(.data..percpu..readmostly) \ + *(.data..percpu..read_mostly) \ . 
= ALIGN(cacheline); \ *(.data..percpu) \ *(.data..percpu..shared_aligned) \ diff --git a/include/dt-bindings/clock/exynos5420.h b/include/dt-bindings/clock/exynos5420.h index 97dcb89d37d3..21d51ae1d242 100644 --- a/include/dt-bindings/clock/exynos5420.h +++ b/include/dt-bindings/clock/exynos5420.h @@ -63,7 +63,6 @@ #define CLK_SCLK_MPHY_IXTAL24 161 /* gate clocks */ -#define CLK_ACLK66_PERIC 256 #define CLK_UART0 257 #define CLK_UART1 258 #define CLK_UART2 259 @@ -203,6 +202,8 @@ #define CLK_MOUT_G3D 641 #define CLK_MOUT_VPLL 642 #define CLK_MOUT_MAUDIO0 643 +#define CLK_MOUT_USER_ACLK333 644 +#define CLK_MOUT_SW_ACLK333 645 /* divider clocks */ #define CLK_DOUT_PIXEL 768 diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index ec4112d257bc..8f8ae95c6e27 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -482,8 +482,8 @@ extern struct cpufreq_governor cpufreq_gov_conservative; *********************************************************************/ /* Special Values of .frequency field */ -#define CPUFREQ_ENTRY_INVALID ~0 -#define CPUFREQ_TABLE_END ~1 +#define CPUFREQ_ENTRY_INVALID ~0u +#define CPUFREQ_TABLE_END ~1u /* Special Values of .flags field */ #define CPUFREQ_BOOST_FREQ (1 << 0) diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h index 145375ea0bd9..30faf797c2c3 100644 --- a/include/linux/kernfs.h +++ b/include/linux/kernfs.h @@ -305,6 +305,7 @@ struct dentry *kernfs_mount_ns(struct file_system_type *fs_type, int flags, struct kernfs_root *root, unsigned long magic, bool *new_sb_created, const void *ns); void kernfs_kill_sb(struct super_block *sb); +struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns); void kernfs_init(void); diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h index b12f4bbd064c..35b51e7af886 100644 --- a/include/linux/mlx4/device.h +++ b/include/linux/mlx4/device.h @@ -578,8 +578,6 @@ struct mlx4_cq { u32 cons_index; u16 irq; - bool irq_affinity_change; - __be32 *set_ci_db; __be32 *arm_db; int arm_sn; @@ -1167,6 +1165,8 @@ int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap, int *vector); void mlx4_release_eq(struct mlx4_dev *dev, int vec); +int mlx4_eq_get_irq(struct mlx4_dev *dev, int vec); + int mlx4_get_phys_port_id(struct mlx4_dev *dev); int mlx4_wol_read(struct mlx4_dev *dev, u64 *config, int port); int mlx4_wol_write(struct mlx4_dev *dev, u64 config, int port); diff --git a/include/linux/mutex.h b/include/linux/mutex.h index 11692dea18aa..42aa9b9ecd5f 100644 --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -17,6 +17,7 @@ #include <linux/lockdep.h> #include <linux/atomic.h> #include <asm/processor.h> +#include <linux/osq_lock.h> /* * Simple, straightforward mutexes with strict semantics: @@ -46,7 +47,6 @@ * - detects multi-task circular deadlocks and prints out all affected * locks and tasks (and only those tasks) */ -struct optimistic_spin_queue; struct mutex { /* 1: unlocked, 0: locked, negative: locked, possible waiters */ atomic_t count; @@ -56,7 +56,7 @@ struct mutex { struct task_struct *owner; #endif #ifdef CONFIG_MUTEX_SPIN_ON_OWNER - struct optimistic_spin_queue *osq; /* Spinner MCS lock */ + struct optimistic_spin_queue osq; /* Spinner MCS lock */ #endif #ifdef CONFIG_DEBUG_MUTEXES const char *name; diff --git a/include/linux/of_mdio.h b/include/linux/of_mdio.h index a70c9493d55a..d449018d0726 100644 --- a/include/linux/of_mdio.h +++ b/include/linux/of_mdio.h @@ -25,9 +25,6 @@ struct phy_device *of_phy_attach(struct net_device *dev, 
extern struct mii_bus *of_mdio_find_bus(struct device_node *mdio_np); -extern void of_mdiobus_link_phydev(struct mii_bus *mdio, - struct phy_device *phydev); - #else /* CONFIG_OF */ static inline int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np) { @@ -63,11 +60,6 @@ static inline struct mii_bus *of_mdio_find_bus(struct device_node *mdio_np) { return NULL; } - -static inline void of_mdiobus_link_phydev(struct mii_bus *mdio, - struct phy_device *phydev) -{ -} #endif /* CONFIG_OF */ #if defined(CONFIG_OF) && defined(CONFIG_FIXED_PHY) diff --git a/include/linux/osq_lock.h b/include/linux/osq_lock.h new file mode 100644 index 000000000000..90230d5811c5 --- /dev/null +++ b/include/linux/osq_lock.h @@ -0,0 +1,27 @@ +#ifndef __LINUX_OSQ_LOCK_H +#define __LINUX_OSQ_LOCK_H + +/* + * An MCS like lock especially tailored for optimistic spinning for sleeping + * lock implementations (mutex, rwsem, etc). + */ + +#define OSQ_UNLOCKED_VAL (0) + +struct optimistic_spin_queue { + /* + * Stores an encoded value of the CPU # of the tail node in the queue. + * If the queue is empty, then it's set to OSQ_UNLOCKED_VAL. + */ + atomic_t tail; +}; + +/* Init macro and function. */ +#define OSQ_LOCK_UNLOCKED { ATOMIC_INIT(OSQ_UNLOCKED_VAL) } + +static inline void osq_lock_init(struct optimistic_spin_queue *lock) +{ + atomic_set(&lock->tail, OSQ_UNLOCKED_VAL); +} + +#endif diff --git a/include/linux/percpu-defs.h b/include/linux/percpu-defs.h index a5fc7d01aad6..dec01d6c3f80 100644 --- a/include/linux/percpu-defs.h +++ b/include/linux/percpu-defs.h @@ -146,10 +146,10 @@ * Declaration/definition used for per-CPU variables that must be read mostly. */ #define DECLARE_PER_CPU_READ_MOSTLY(type, name) \ - DECLARE_PER_CPU_SECTION(type, name, "..readmostly") + DECLARE_PER_CPU_SECTION(type, name, "..read_mostly") #define DEFINE_PER_CPU_READ_MOSTLY(type, name) \ - DEFINE_PER_CPU_SECTION(type, name, "..readmostly") + DEFINE_PER_CPU_SECTION(type, name, "..read_mostly") /* * Intermodule exports for per-CPU variables. sparse forgets about diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index 5a75d19aa661..6a94cc8b1ca0 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -44,7 +44,6 @@ #include <linux/debugobjects.h> #include <linux/bug.h> #include <linux/compiler.h> -#include <linux/percpu.h> #include <asm/barrier.h> extern int rcu_expedited; /* for sysctl */ @@ -300,41 +299,6 @@ bool __rcu_is_watching(void); #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */ /* - * Hooks for cond_resched() and friends to avoid RCU CPU stall warnings. - */ - -#define RCU_COND_RESCHED_LIM 256 /* ms vs. 100s of ms. */ -DECLARE_PER_CPU(int, rcu_cond_resched_count); -void rcu_resched(void); - -/* - * Is it time to report RCU quiescent states? - * - * Note unsynchronized access to rcu_cond_resched_count. Yes, we might - * increment some random CPU's count, and possibly also load the result from - * yet another CPU's count. We might even clobber some other CPU's attempt - * to zero its counter. This is all OK because the goal is not precision, - * but rather reasonable amortization of rcu_note_context_switch() overhead - * and extremely high probability of avoiding RCU CPU stall warnings. - * Note that this function has to be preempted in just the wrong place, - * many thousands of times in a row, for anything bad to happen. 
- */ -static inline bool rcu_should_resched(void) -{ - return raw_cpu_inc_return(rcu_cond_resched_count) >= - RCU_COND_RESCHED_LIM; -} - -/* - * Report quiscent states to RCU if it is time to do so. - */ -static inline void rcu_cond_resched(void) -{ - if (unlikely(rcu_should_resched())) - rcu_resched(); -} - -/* * Infrastructure to implement the synchronize_() primitives in * TREE_RCU and rcu_barrier_() primitives in TINY_RCU. */ @@ -358,9 +322,19 @@ void wait_rcu_gp(call_rcu_func_t crf); * initialization. */ #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD +void init_rcu_head(struct rcu_head *head); +void destroy_rcu_head(struct rcu_head *head); void init_rcu_head_on_stack(struct rcu_head *head); void destroy_rcu_head_on_stack(struct rcu_head *head); #else /* !CONFIG_DEBUG_OBJECTS_RCU_HEAD */ +static inline void init_rcu_head(struct rcu_head *head) +{ +} + +static inline void destroy_rcu_head(struct rcu_head *head) +{ +} + static inline void init_rcu_head_on_stack(struct rcu_head *head) { } diff --git a/include/linux/rwsem-spinlock.h b/include/linux/rwsem-spinlock.h index d5b13bc07a0b..561e8615528d 100644 --- a/include/linux/rwsem-spinlock.h +++ b/include/linux/rwsem-spinlock.h @@ -15,13 +15,13 @@ #ifdef __KERNEL__ /* * the rw-semaphore definition - * - if activity is 0 then there are no active readers or writers - * - if activity is +ve then that is the number of active readers - * - if activity is -1 then there is one active writer + * - if count is 0 then there are no active readers or writers + * - if count is +ve then that is the number of active readers + * - if count is -1 then there is one active writer * - if wait_list is not empty, then there are processes waiting for the semaphore */ struct rw_semaphore { - __s32 activity; + __s32 count; raw_spinlock_t wait_lock; struct list_head wait_list; #ifdef CONFIG_DEBUG_LOCK_ALLOC diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h index 8d79708146aa..035d3c57fc8a 100644 --- a/include/linux/rwsem.h +++ b/include/linux/rwsem.h @@ -13,10 +13,11 @@ #include <linux/kernel.h> #include <linux/list.h> #include <linux/spinlock.h> - #include <linux/atomic.h> +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER +#include <linux/osq_lock.h> +#endif -struct optimistic_spin_queue; struct rw_semaphore; #ifdef CONFIG_RWSEM_GENERIC_SPINLOCK @@ -25,15 +26,15 @@ struct rw_semaphore; /* All arch specific implementations share the same struct */ struct rw_semaphore { long count; - raw_spinlock_t wait_lock; struct list_head wait_list; -#ifdef CONFIG_SMP + raw_spinlock_t wait_lock; +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER + struct optimistic_spin_queue osq; /* spinner MCS lock */ /* * Write owner. Used as a speculative check to see * if the owner is running on the cpu. 
*/ struct task_struct *owner; - struct optimistic_spin_queue *osq; /* spinner MCS lock */ #endif #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; @@ -64,22 +65,19 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem) # define __RWSEM_DEP_MAP_INIT(lockname) #endif -#if defined(CONFIG_SMP) && !defined(CONFIG_RWSEM_GENERIC_SPINLOCK) -#define __RWSEM_INITIALIZER(name) \ - { RWSEM_UNLOCKED_VALUE, \ - __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \ - LIST_HEAD_INIT((name).wait_list), \ - NULL, /* owner */ \ - NULL /* mcs lock */ \ - __RWSEM_DEP_MAP_INIT(name) } +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER +#define __RWSEM_OPT_INIT(lockname) , .osq = OSQ_LOCK_UNLOCKED, .owner = NULL #else -#define __RWSEM_INITIALIZER(name) \ - { RWSEM_UNLOCKED_VALUE, \ - __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \ - LIST_HEAD_INIT((name).wait_list) \ - __RWSEM_DEP_MAP_INIT(name) } +#define __RWSEM_OPT_INIT(lockname) #endif +#define __RWSEM_INITIALIZER(name) \ + { .count = RWSEM_UNLOCKED_VALUE, \ + .wait_list = LIST_HEAD_INIT((name).wait_list), \ + .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock) \ + __RWSEM_OPT_INIT(name) \ + __RWSEM_DEP_MAP_INIT(name) } + #define DECLARE_RWSEM(name) \ struct rw_semaphore name = __RWSEM_INITIALIZER(name) diff --git a/include/linux/sched.h b/include/linux/sched.h index 306f4f0c987a..0376b054a0d0 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -872,21 +872,21 @@ enum cpu_idle_type { #define SD_NUMA 0x4000 /* cross-node balancing */ #ifdef CONFIG_SCHED_SMT -static inline const int cpu_smt_flags(void) +static inline int cpu_smt_flags(void) { return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES; } #endif #ifdef CONFIG_SCHED_MC -static inline const int cpu_core_flags(void) +static inline int cpu_core_flags(void) { return SD_SHARE_PKG_RESOURCES; } #endif #ifdef CONFIG_NUMA -static inline const int cpu_numa_flags(void) +static inline int cpu_numa_flags(void) { return SD_NUMA; } @@ -999,7 +999,7 @@ void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms); bool cpus_share_cache(int this_cpu, int that_cpu); typedef const struct cpumask *(*sched_domain_mask_f)(int cpu); -typedef const int (*sched_domain_flags_f)(void); +typedef int (*sched_domain_flags_f)(void); #define SDTL_OVERLAP 0x01 diff --git a/include/net/neighbour.h b/include/net/neighbour.h index 7277caf3743d..47f425464f84 100644 --- a/include/net/neighbour.h +++ b/include/net/neighbour.h @@ -203,7 +203,6 @@ struct neigh_table { void (*proxy_redo)(struct sk_buff *skb); char *id; struct neigh_parms parms; - /* HACK. gc_* should follow parms without a gap! 
*/ int gc_interval; int gc_thresh1; int gc_thresh2; diff --git a/include/net/netns/ieee802154_6lowpan.h b/include/net/netns/ieee802154_6lowpan.h index 079030c853d8..e2070960bac0 100644 --- a/include/net/netns/ieee802154_6lowpan.h +++ b/include/net/netns/ieee802154_6lowpan.h @@ -16,7 +16,7 @@ struct netns_sysctl_lowpan { struct netns_ieee802154_lowpan { struct netns_sysctl_lowpan sysctl; struct netns_frags frags; - u16 max_dsize; + int max_dsize; }; #endif diff --git a/include/net/sock.h b/include/net/sock.h index 173cae485de1..156350745700 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -1768,9 +1768,11 @@ __sk_dst_set(struct sock *sk, struct dst_entry *dst) static inline void sk_dst_set(struct sock *sk, struct dst_entry *dst) { - spin_lock(&sk->sk_dst_lock); - __sk_dst_set(sk, dst); - spin_unlock(&sk->sk_dst_lock); + struct dst_entry *old_dst; + + sk_tx_queue_clear(sk); + old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst); + dst_release(old_dst); } static inline void @@ -1782,9 +1784,7 @@ __sk_dst_reset(struct sock *sk) static inline void sk_dst_reset(struct sock *sk) { - spin_lock(&sk->sk_dst_lock); - __sk_dst_reset(sk); - spin_unlock(&sk->sk_dst_lock); + sk_dst_set(sk, NULL); } struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie); diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks index 35536d9c0964..76768ee812b2 100644 --- a/kernel/Kconfig.locks +++ b/kernel/Kconfig.locks @@ -220,9 +220,16 @@ config INLINE_WRITE_UNLOCK_IRQRESTORE endif +config ARCH_SUPPORTS_ATOMIC_RMW + bool + config MUTEX_SPIN_ON_OWNER def_bool y - depends on SMP && !DEBUG_MUTEXES + depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW + +config RWSEM_SPIN_ON_OWNER + def_bool y + depends on SMP && RWSEM_XCHGADD_ALGORITHM && ARCH_SUPPORTS_ATOMIC_RMW config ARCH_USE_QUEUE_RWLOCK bool diff --git a/kernel/cgroup.c b/kernel/cgroup.c index 7868fc3c0bc5..70776aec2562 100644 --- a/kernel/cgroup.c +++ b/kernel/cgroup.c @@ -1648,10 +1648,13 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type, int flags, const char *unused_dev_name, void *data) { + struct super_block *pinned_sb = NULL; + struct cgroup_subsys *ss; struct cgroup_root *root; struct cgroup_sb_opts opts; struct dentry *dentry; int ret; + int i; bool new_sb; /* @@ -1677,6 +1680,27 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type, goto out_unlock; } + /* + * Destruction of cgroup root is asynchronous, so subsystems may + * still be dying after the previous unmount. Let's drain the + * dying subsystems. We just need to ensure that the ones + * unmounted previously finish dying and don't care about new ones + * starting. Testing ref liveliness is good enough. + */ + for_each_subsys(ss, i) { + if (!(opts.subsys_mask & (1 << i)) || + ss->root == &cgrp_dfl_root) + continue; + + if (!percpu_ref_tryget_live(&ss->root->cgrp.self.refcnt)) { + mutex_unlock(&cgroup_mutex); + msleep(10); + ret = restart_syscall(); + goto out_free; + } + cgroup_put(&ss->root->cgrp); + } + for_each_root(root) { bool name_match = false; @@ -1717,15 +1741,23 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type, } /* - * A root's lifetime is governed by its root cgroup. - * tryget_live failure indicate that the root is being - * destroyed. Wait for destruction to complete so that the - * subsystems are free. We can use wait_queue for the wait - * but this path is super cold. Let's just sleep for a bit - * and retry. + * We want to reuse @root whose lifetime is governed by its + * ->cgrp. 
Let's check whether @root is alive and keep it + * that way. As cgroup_kill_sb() can happen anytime, we + * want to block it by pinning the sb so that @root doesn't + * get killed before mount is complete. + * + * With the sb pinned, tryget_live can reliably indicate + * whether @root can be reused. If it's being killed, + * drain it. We can use wait_queue for the wait but this + * path is super cold. Let's just sleep a bit and retry. */ - if (!percpu_ref_tryget_live(&root->cgrp.self.refcnt)) { + pinned_sb = kernfs_pin_sb(root->kf_root, NULL); + if (IS_ERR(pinned_sb) || + !percpu_ref_tryget_live(&root->cgrp.self.refcnt)) { mutex_unlock(&cgroup_mutex); + if (!IS_ERR_OR_NULL(pinned_sb)) + deactivate_super(pinned_sb); msleep(10); ret = restart_syscall(); goto out_free; @@ -1770,6 +1802,16 @@ out_free: CGROUP_SUPER_MAGIC, &new_sb); if (IS_ERR(dentry) || !new_sb) cgroup_put(&root->cgrp); + + /* + * If @pinned_sb, we're reusing an existing root and holding an + * extra ref on its sb. Mount is complete. Put the extra ref. + */ + if (pinned_sb) { + WARN_ON(new_sb); + deactivate_super(pinned_sb); + } + return dentry; } @@ -3328,7 +3370,7 @@ bool css_has_online_children(struct cgroup_subsys_state *css) rcu_read_lock(); css_for_each_child(child, css) { - if (css->flags & CSS_ONLINE) { + if (child->flags & CSS_ONLINE) { ret = true; break; } diff --git a/kernel/cpuset.c b/kernel/cpuset.c index f6b33c696224..116a4164720a 100644 --- a/kernel/cpuset.c +++ b/kernel/cpuset.c @@ -1181,7 +1181,13 @@ done: int current_cpuset_is_being_rebound(void) { - return task_cs(current) == cpuset_being_rebound; + int ret; + + rcu_read_lock(); + ret = task_cs(current) == cpuset_being_rebound; + rcu_read_unlock(); + + return ret; } static int update_relax_domain_level(struct cpuset *cs, s64 val) @@ -1617,7 +1623,17 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of, * resources, wait for the previously scheduled operations before * proceeding, so that we don't end up keep removing tasks added * after execution capability is restored. + * + * cpuset_hotplug_work calls back into cgroup core via + * cgroup_transfer_tasks() and waiting for it from a cgroupfs + * operation like this one can lead to a deadlock through kernfs + * active_ref protection. Let's break the protection. Losing the + * protection is okay as we check whether @cs is online after + * grabbing cpuset_mutex anyway. This only happens on the legacy + * hierarchies. */ + css_get(&cs->css); + kernfs_break_active_protection(of->kn); flush_work(&cpuset_hotplug_work); mutex_lock(&cpuset_mutex); @@ -1645,6 +1661,8 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of, free_trial_cpuset(trialcs); out_unlock: mutex_unlock(&cpuset_mutex); + kernfs_unbreak_active_protection(of->kn); + css_put(&cs->css); return retval ?: nbytes; } diff --git a/kernel/events/core.c b/kernel/events/core.c index a33d9a2bcbd7..b0c95f0f06fd 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -2320,7 +2320,7 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn, next_parent = rcu_dereference(next_ctx->parent_ctx); /* If neither context have a parent context; they cannot be clones. 
*/ - if (!parent && !next_parent) + if (!parent || !next_parent) goto unlock; if (next_parent == ctx || next_ctx == parent || next_parent == parent) { diff --git a/kernel/locking/mcs_spinlock.c b/kernel/locking/mcs_spinlock.c index 838dc9e00669..be9ee1559fca 100644 --- a/kernel/locking/mcs_spinlock.c +++ b/kernel/locking/mcs_spinlock.c @@ -14,21 +14,47 @@ * called from interrupt context and we have preemption disabled while * spinning. */ -static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_queue, osq_node); +static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_node, osq_node); + +/* + * We use the value 0 to represent "no CPU", thus the encoded value + * will be the CPU number incremented by 1. + */ +static inline int encode_cpu(int cpu_nr) +{ + return cpu_nr + 1; +} + +static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val) +{ + int cpu_nr = encoded_cpu_val - 1; + + return per_cpu_ptr(&osq_node, cpu_nr); +} /* * Get a stable @node->next pointer, either for unlock() or unqueue() purposes. * Can return NULL in case we were the last queued and we updated @lock instead. */ -static inline struct optimistic_spin_queue * -osq_wait_next(struct optimistic_spin_queue **lock, - struct optimistic_spin_queue *node, - struct optimistic_spin_queue *prev) +static inline struct optimistic_spin_node * +osq_wait_next(struct optimistic_spin_queue *lock, + struct optimistic_spin_node *node, + struct optimistic_spin_node *prev) { - struct optimistic_spin_queue *next = NULL; + struct optimistic_spin_node *next = NULL; + int curr = encode_cpu(smp_processor_id()); + int old; + + /* + * If there is a prev node in queue, then the 'old' value will be + * the prev node's CPU #, else it's set to OSQ_UNLOCKED_VAL since if + * we're currently last in queue, then the queue will then become empty. + */ + old = prev ? prev->cpu : OSQ_UNLOCKED_VAL; for (;;) { - if (*lock == node && cmpxchg(lock, node, prev) == node) { + if (atomic_read(&lock->tail) == curr && + atomic_cmpxchg(&lock->tail, curr, old) == curr) { /* * We were the last queued, we moved @lock back. @prev * will now observe @lock and will complete its @@ -59,18 +85,23 @@ osq_wait_next(struct optimistic_spin_queue **lock, return next; } -bool osq_lock(struct optimistic_spin_queue **lock) +bool osq_lock(struct optimistic_spin_queue *lock) { - struct optimistic_spin_queue *node = this_cpu_ptr(&osq_node); - struct optimistic_spin_queue *prev, *next; + struct optimistic_spin_node *node = this_cpu_ptr(&osq_node); + struct optimistic_spin_node *prev, *next; + int curr = encode_cpu(smp_processor_id()); + int old; node->locked = 0; node->next = NULL; + node->cpu = curr; - node->prev = prev = xchg(lock, node); - if (likely(prev == NULL)) + old = atomic_xchg(&lock->tail, curr); + if (old == OSQ_UNLOCKED_VAL) return true; + prev = decode_cpu(old); + node->prev = prev; ACCESS_ONCE(prev->next) = node; /* @@ -149,20 +180,21 @@ unqueue: return false; } -void osq_unlock(struct optimistic_spin_queue **lock) +void osq_unlock(struct optimistic_spin_queue *lock) { - struct optimistic_spin_queue *node = this_cpu_ptr(&osq_node); - struct optimistic_spin_queue *next; + struct optimistic_spin_node *node, *next; + int curr = encode_cpu(smp_processor_id()); /* * Fast path for the uncontended case. */ - if (likely(cmpxchg(lock, node, NULL) == node)) + if (likely(atomic_cmpxchg(&lock->tail, curr, OSQ_UNLOCKED_VAL) == curr)) return; /* * Second most likely case. 
*/ + node = this_cpu_ptr(&osq_node); next = xchg(&node->next, NULL); if (next) { ACCESS_ONCE(next->locked) = 1; diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h index a2dbac4aca6b..74356dc0ce29 100644 --- a/kernel/locking/mcs_spinlock.h +++ b/kernel/locking/mcs_spinlock.h @@ -118,12 +118,13 @@ void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node) * mutex_lock()/rwsem_down_{read,write}() etc. */ -struct optimistic_spin_queue { - struct optimistic_spin_queue *next, *prev; +struct optimistic_spin_node { + struct optimistic_spin_node *next, *prev; int locked; /* 1 if lock acquired */ + int cpu; /* encoded CPU # value */ }; -extern bool osq_lock(struct optimistic_spin_queue **lock); -extern void osq_unlock(struct optimistic_spin_queue **lock); +extern bool osq_lock(struct optimistic_spin_queue *lock); +extern void osq_unlock(struct optimistic_spin_queue *lock); #endif /* __LINUX_MCS_SPINLOCK_H */ diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index bc73d33c6760..acca2c1a3c5e 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -60,7 +60,7 @@ __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key) INIT_LIST_HEAD(&lock->wait_list); mutex_clear_owner(lock); #ifdef CONFIG_MUTEX_SPIN_ON_OWNER - lock->osq = NULL; + osq_lock_init(&lock->osq); #endif debug_mutex_init(lock, name, key); diff --git a/kernel/locking/rwsem-spinlock.c b/kernel/locking/rwsem-spinlock.c index 9be8a9144978..2c93571162cb 100644 --- a/kernel/locking/rwsem-spinlock.c +++ b/kernel/locking/rwsem-spinlock.c @@ -26,7 +26,7 @@ int rwsem_is_locked(struct rw_semaphore *sem) unsigned long flags; if (raw_spin_trylock_irqsave(&sem->wait_lock, flags)) { - ret = (sem->activity != 0); + ret = (sem->count != 0); raw_spin_unlock_irqrestore(&sem->wait_lock, flags); } return ret; @@ -46,7 +46,7 @@ void __init_rwsem(struct rw_semaphore *sem, const char *name, debug_check_no_locks_freed((void *)sem, sizeof(*sem)); lockdep_init_map(&sem->dep_map, name, key, 0); #endif - sem->activity = 0; + sem->count = 0; raw_spin_lock_init(&sem->wait_lock); INIT_LIST_HEAD(&sem->wait_list); } @@ -95,7 +95,7 @@ __rwsem_do_wake(struct rw_semaphore *sem, int wakewrite) waiter = list_entry(next, struct rwsem_waiter, list); } while (waiter->type != RWSEM_WAITING_FOR_WRITE); - sem->activity += woken; + sem->count += woken; out: return sem; @@ -126,9 +126,9 @@ void __sched __down_read(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - if (sem->activity >= 0 && list_empty(&sem->wait_list)) { + if (sem->count >= 0 && list_empty(&sem->wait_list)) { /* granted */ - sem->activity++; + sem->count++; raw_spin_unlock_irqrestore(&sem->wait_lock, flags); goto out; } @@ -170,9 +170,9 @@ int __down_read_trylock(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - if (sem->activity >= 0 && list_empty(&sem->wait_list)) { + if (sem->count >= 0 && list_empty(&sem->wait_list)) { /* granted */ - sem->activity++; + sem->count++; ret = 1; } @@ -206,7 +206,7 @@ void __sched __down_write_nested(struct rw_semaphore *sem, int subclass) * itself into sleep and waiting for system woke it or someone * else in the head of the wait list up. 
*/ - if (sem->activity == 0) + if (sem->count == 0) break; set_task_state(tsk, TASK_UNINTERRUPTIBLE); raw_spin_unlock_irqrestore(&sem->wait_lock, flags); @@ -214,7 +214,7 @@ void __sched __down_write_nested(struct rw_semaphore *sem, int subclass) raw_spin_lock_irqsave(&sem->wait_lock, flags); } /* got the lock */ - sem->activity = -1; + sem->count = -1; list_del(&waiter.list); raw_spin_unlock_irqrestore(&sem->wait_lock, flags); @@ -235,9 +235,9 @@ int __down_write_trylock(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - if (sem->activity == 0) { + if (sem->count == 0) { /* got the lock */ - sem->activity = -1; + sem->count = -1; ret = 1; } @@ -255,7 +255,7 @@ void __up_read(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - if (--sem->activity == 0 && !list_empty(&sem->wait_list)) + if (--sem->count == 0 && !list_empty(&sem->wait_list)) sem = __rwsem_wake_one_writer(sem); raw_spin_unlock_irqrestore(&sem->wait_lock, flags); @@ -270,7 +270,7 @@ void __up_write(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - sem->activity = 0; + sem->count = 0; if (!list_empty(&sem->wait_list)) sem = __rwsem_do_wake(sem, 1); @@ -287,7 +287,7 @@ void __downgrade_write(struct rw_semaphore *sem) raw_spin_lock_irqsave(&sem->wait_lock, flags); - sem->activity = 1; + sem->count = 1; if (!list_empty(&sem->wait_list)) sem = __rwsem_do_wake(sem, 0); diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c index dacc32142fcc..a2391ac135c8 100644 --- a/kernel/locking/rwsem-xadd.c +++ b/kernel/locking/rwsem-xadd.c @@ -82,9 +82,9 @@ void __init_rwsem(struct rw_semaphore *sem, const char *name, sem->count = RWSEM_UNLOCKED_VALUE; raw_spin_lock_init(&sem->wait_lock); INIT_LIST_HEAD(&sem->wait_list); -#ifdef CONFIG_SMP +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER sem->owner = NULL; - sem->osq = NULL; + osq_lock_init(&sem->osq); #endif } @@ -262,7 +262,7 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem) return false; } -#ifdef CONFIG_SMP +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER /* * Try to acquire write lock before the writer has been put on wait queue. */ @@ -285,10 +285,10 @@ static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem) static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem) { struct task_struct *owner; - bool on_cpu = true; + bool on_cpu = false; if (need_resched()) - return 0; + return false; rcu_read_lock(); owner = ACCESS_ONCE(sem->owner); @@ -297,9 +297,9 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem) rcu_read_unlock(); /* - * If sem->owner is not set, the rwsem owner may have - * just acquired it and not set the owner yet or the rwsem - * has been released. + * If sem->owner is not set, yet we have just recently entered the + * slowpath, then there is a possibility reader(s) may have the lock. + * To be safe, avoid spinning in these situations. 
*/ return on_cpu; } diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c index 42f806de49d4..e2d3bc7f03b4 100644 --- a/kernel/locking/rwsem.c +++ b/kernel/locking/rwsem.c @@ -12,7 +12,7 @@ #include <linux/atomic.h> -#if defined(CONFIG_SMP) && defined(CONFIG_RWSEM_XCHGADD_ALGORITHM) +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER static inline void rwsem_set_owner(struct rw_semaphore *sem) { sem->owner = current; diff --git a/kernel/power/process.c b/kernel/power/process.c index 0ca8d83e2369..4ee194eb524b 100644 --- a/kernel/power/process.c +++ b/kernel/power/process.c @@ -186,6 +186,7 @@ void thaw_processes(void) printk("Restarting tasks ... "); + __usermodehelper_set_disable_depth(UMH_FREEZING); thaw_workqueues(); read_lock(&tasklist_lock); diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c index 4dd8822f732a..ed35a4790afe 100644 --- a/kernel/power/suspend.c +++ b/kernel/power/suspend.c @@ -306,7 +306,7 @@ int suspend_devices_and_enter(suspend_state_t state) error = suspend_ops->begin(state); if (error) goto Close; - } else if (state == PM_SUSPEND_FREEZE && freeze_ops->begin) { + } else if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->begin) { error = freeze_ops->begin(); if (error) goto Close; @@ -335,7 +335,7 @@ int suspend_devices_and_enter(suspend_state_t state) Close: if (need_suspend_ops(state) && suspend_ops->end) suspend_ops->end(); - else if (state == PM_SUSPEND_FREEZE && freeze_ops->end) + else if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->end) freeze_ops->end(); return error; diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index f1ba77363fbb..625d0b0cd75a 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -206,6 +206,70 @@ void rcu_bh_qs(int cpu) rdp->passed_quiesce = 1; } +static DEFINE_PER_CPU(int, rcu_sched_qs_mask); + +static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = { + .dynticks_nesting = DYNTICK_TASK_EXIT_IDLE, + .dynticks = ATOMIC_INIT(1), +#ifdef CONFIG_NO_HZ_FULL_SYSIDLE + .dynticks_idle_nesting = DYNTICK_TASK_NEST_VALUE, + .dynticks_idle = ATOMIC_INIT(1), +#endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */ +}; + +/* + * Let the RCU core know that this CPU has gone through the scheduler, + * which is a quiescent state. This is called when the need for a + * quiescent state is urgent, so we burn an atomic operation and full + * memory barriers to let the RCU core know about it, regardless of what + * this CPU might (or might not) do in the near future. + * + * We inform the RCU core by emulating a zero-duration dyntick-idle + * period, which we in turn do by incrementing the ->dynticks counter + * by two. + */ +static void rcu_momentary_dyntick_idle(void) +{ + unsigned long flags; + struct rcu_data *rdp; + struct rcu_dynticks *rdtp; + int resched_mask; + struct rcu_state *rsp; + + local_irq_save(flags); + + /* + * Yes, we can lose flag-setting operations. This is OK, because + * the flag will be set again after some delay. + */ + resched_mask = raw_cpu_read(rcu_sched_qs_mask); + raw_cpu_write(rcu_sched_qs_mask, 0); + + /* Find the flavor that needs a quiescent state. */ + for_each_rcu_flavor(rsp) { + rdp = raw_cpu_ptr(rsp->rda); + if (!(resched_mask & rsp->flavor_mask)) + continue; + smp_mb(); /* rcu_sched_qs_mask before cond_resched_completed. */ + if (ACCESS_ONCE(rdp->mynode->completed) != + ACCESS_ONCE(rdp->cond_resched_completed)) + continue; + + /* + * Pretend to be momentarily idle for the quiescent state. 
+ * This allows the grace-period kthread to record the + * quiescent state, with no need for this CPU to do anything + * further. + */ + rdtp = this_cpu_ptr(&rcu_dynticks); + smp_mb__before_atomic(); /* Earlier stuff before QS. */ + atomic_add(2, &rdtp->dynticks); /* QS. */ + smp_mb__after_atomic(); /* Later stuff after QS. */ + break; + } + local_irq_restore(flags); +} + /* * Note a context switch. This is a quiescent state for RCU-sched, * and requires special handling for preemptible RCU. @@ -216,19 +280,12 @@ void rcu_note_context_switch(int cpu) trace_rcu_utilization(TPS("Start context switch")); rcu_sched_qs(cpu); rcu_preempt_note_context_switch(cpu); + if (unlikely(raw_cpu_read(rcu_sched_qs_mask))) + rcu_momentary_dyntick_idle(); trace_rcu_utilization(TPS("End context switch")); } EXPORT_SYMBOL_GPL(rcu_note_context_switch); -static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = { - .dynticks_nesting = DYNTICK_TASK_EXIT_IDLE, - .dynticks = ATOMIC_INIT(1), -#ifdef CONFIG_NO_HZ_FULL_SYSIDLE - .dynticks_idle_nesting = DYNTICK_TASK_NEST_VALUE, - .dynticks_idle = ATOMIC_INIT(1), -#endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */ -}; - static long blimit = 10; /* Maximum callbacks per rcu_do_batch. */ static long qhimark = 10000; /* If this many pending, ignore blimit. */ static long qlowmark = 100; /* Once only this many pending, use blimit. */ @@ -243,6 +300,13 @@ static ulong jiffies_till_next_fqs = ULONG_MAX; module_param(jiffies_till_first_fqs, ulong, 0644); module_param(jiffies_till_next_fqs, ulong, 0644); +/* + * How long the grace period must be before we start recruiting + * quiescent-state help from rcu_note_context_switch(). + */ +static ulong jiffies_till_sched_qs = HZ / 20; +module_param(jiffies_till_sched_qs, ulong, 0644); + static bool rcu_start_gp_advanced(struct rcu_state *rsp, struct rcu_node *rnp, struct rcu_data *rdp); static void force_qs_rnp(struct rcu_state *rsp, @@ -853,6 +917,7 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp, bool *isidle, unsigned long *maxj) { unsigned int curr; + int *rcrmp; unsigned int snap; curr = (unsigned int)atomic_add_return(0, &rdp->dynticks->dynticks); @@ -893,27 +958,43 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp, } /* - * There is a possibility that a CPU in adaptive-ticks state - * might run in the kernel with the scheduling-clock tick disabled - * for an extended time period. Invoke rcu_kick_nohz_cpu() to - * force the CPU to restart the scheduling-clock tick in this - * CPU is in this state. - */ - rcu_kick_nohz_cpu(rdp->cpu); - - /* - * Alternatively, the CPU might be running in the kernel - * for an extended period of time without a quiescent state. - * Attempt to force the CPU through the scheduler to gain the - * needed quiescent state, but only if the grace period has gone - * on for an uncommonly long time. If there are many stuck CPUs, - * we will beat on the first one until it gets unstuck, then move - * to the next. Only do this for the primary flavor of RCU. + * A CPU running for an extended time within the kernel can + * delay RCU grace periods. When the CPU is in NO_HZ_FULL mode, + * even context-switching back and forth between a pair of + * in-kernel CPU-bound tasks cannot advance grace periods. + * So if the grace period is old enough, make the CPU pay attention. + * Note that the unsynchronized assignments to the per-CPU + * rcu_sched_qs_mask variable are safe. Yes, setting of + * bits can be lost, but they will be set again on the next + * force-quiescent-state pass. 
So lost bit sets do not result + * in incorrect behavior, merely in a grace period lasting + * a few jiffies longer than it might otherwise. Because + * there are at most four threads involved, and because the + * updates are only once every few jiffies, the probability of + * lossage (and thus of slight grace-period extension) is + * quite low. + * + * Note that if the jiffies_till_sched_qs boot/sysfs parameter + * is set too high, we override with half of the RCU CPU stall + * warning delay. */ - if (rdp->rsp == rcu_state_p && + rcrmp = &per_cpu(rcu_sched_qs_mask, rdp->cpu); + if (ULONG_CMP_GE(jiffies, + rdp->rsp->gp_start + jiffies_till_sched_qs) || ULONG_CMP_GE(jiffies, rdp->rsp->jiffies_resched)) { - rdp->rsp->jiffies_resched += 5; - resched_cpu(rdp->cpu); + if (!(ACCESS_ONCE(*rcrmp) & rdp->rsp->flavor_mask)) { + ACCESS_ONCE(rdp->cond_resched_completed) = + ACCESS_ONCE(rdp->mynode->completed); + smp_mb(); /* ->cond_resched_completed before *rcrmp. */ + ACCESS_ONCE(*rcrmp) = + ACCESS_ONCE(*rcrmp) + rdp->rsp->flavor_mask; + resched_cpu(rdp->cpu); /* Force CPU into scheduler. */ + rdp->rsp->jiffies_resched += 5; /* Enable beating. */ + } else if (ULONG_CMP_GE(jiffies, rdp->rsp->jiffies_resched)) { + /* Time to beat on that CPU again! */ + resched_cpu(rdp->cpu); /* Force CPU into scheduler. */ + rdp->rsp->jiffies_resched += 5; /* Re-enable beating. */ + } } return 0; @@ -3491,6 +3572,7 @@ static void __init rcu_init_one(struct rcu_state *rsp, "rcu_node_fqs_1", "rcu_node_fqs_2", "rcu_node_fqs_3" }; /* Match MAX_RCU_LVLS */ + static u8 fl_mask = 0x1; int cpustride = 1; int i; int j; @@ -3509,6 +3591,8 @@ static void __init rcu_init_one(struct rcu_state *rsp, for (i = 1; i < rcu_num_lvls; i++) rsp->level[i] = rsp->level[i - 1] + rsp->levelcnt[i - 1]; rcu_init_levelspread(rsp); + rsp->flavor_mask = fl_mask; + fl_mask <<= 1; /* Initialize the elements themselves, starting from the leaves. */ diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index bf2c1e669691..0f69a79c5b7d 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -307,6 +307,9 @@ struct rcu_data { /* 4) reasons this CPU needed to be kicked by force_quiescent_state */ unsigned long dynticks_fqs; /* Kicked due to dynticks idle. */ unsigned long offline_fqs; /* Kicked due to being offline. */ + unsigned long cond_resched_completed; + /* Grace period that needs help */ + /* from cond_resched(). */ /* 5) __rcu_pending() statistics. */ unsigned long n_rcu_pending; /* rcu_pending() calls since boot. */ @@ -392,6 +395,7 @@ struct rcu_state { struct rcu_node *level[RCU_NUM_LVLS]; /* Hierarchy levels. */ u32 levelcnt[MAX_RCU_LVLS + 1]; /* # nodes in each level. */ u8 levelspread[RCU_NUM_LVLS]; /* kids/node in each level. */ + u8 flavor_mask; /* bit in flavor mask. */ struct rcu_data __percpu *rda; /* pointer of percu rcu_data. */ void (*call)(struct rcu_head *head, /* call_rcu() flavor. 
*/ void (*func)(struct rcu_head *head)); @@ -563,7 +567,7 @@ static bool rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp); static void do_nocb_deferred_wakeup(struct rcu_data *rdp); static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp); static void rcu_spawn_nocb_kthreads(struct rcu_state *rsp); -static void rcu_kick_nohz_cpu(int cpu); +static void __maybe_unused rcu_kick_nohz_cpu(int cpu); static bool init_nocb_callback_list(struct rcu_data *rdp); static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq); static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq); diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index cbc2c45265e2..02ac0fb186b8 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -2404,7 +2404,7 @@ static bool init_nocb_callback_list(struct rcu_data *rdp) * if an adaptive-ticks CPU is failing to respond to the current grace * period and has not be idle from an RCU perspective, kick it. */ -static void rcu_kick_nohz_cpu(int cpu) +static void __maybe_unused rcu_kick_nohz_cpu(int cpu) { #ifdef CONFIG_NO_HZ_FULL if (tick_nohz_full_cpu(cpu)) diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c index a2aeb4df0f60..bc7883570530 100644 --- a/kernel/rcu/update.c +++ b/kernel/rcu/update.c @@ -200,12 +200,12 @@ void wait_rcu_gp(call_rcu_func_t crf) EXPORT_SYMBOL_GPL(wait_rcu_gp); #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD -static inline void debug_init_rcu_head(struct rcu_head *head) +void init_rcu_head(struct rcu_head *head) { debug_object_init(head, &rcuhead_debug_descr); } -static inline void debug_rcu_head_free(struct rcu_head *head) +void destroy_rcu_head(struct rcu_head *head) { debug_object_free(head, &rcuhead_debug_descr); } @@ -350,21 +350,3 @@ static int __init check_cpu_stall_init(void) early_initcall(check_cpu_stall_init); #endif /* #ifdef CONFIG_RCU_STALL_COMMON */ - -/* - * Hooks for cond_resched() and friends to avoid RCU CPU stall warnings. - */ - -DEFINE_PER_CPU(int, rcu_cond_resched_count); - -/* - * Report a set of RCU quiescent states, for use by cond_resched() - * and friends. Out of line due to being called infrequently. - */ -void rcu_resched(void) -{ - preempt_disable(); - __this_cpu_write(rcu_cond_resched_count, 0); - rcu_note_context_switch(smp_processor_id()); - preempt_enable(); -} diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 3bdf01b494fe..bc1638b33449 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -4147,7 +4147,6 @@ static void __cond_resched(void) int __sched _cond_resched(void) { - rcu_cond_resched(); if (should_resched()) { __cond_resched(); return 1; @@ -4166,18 +4165,15 @@ EXPORT_SYMBOL(_cond_resched); */ int __cond_resched_lock(spinlock_t *lock) { - bool need_rcu_resched = rcu_should_resched(); int resched = should_resched(); int ret = 0; lockdep_assert_held(lock); - if (spin_needbreak(lock) || resched || need_rcu_resched) { + if (spin_needbreak(lock) || resched) { spin_unlock(lock); if (resched) __cond_resched(); - else if (unlikely(need_rcu_resched)) - rcu_resched(); else cpu_relax(); ret = 1; @@ -4191,7 +4187,6 @@ int __sched __cond_resched_softirq(void) { BUG_ON(!in_softirq()); - rcu_cond_resched(); /* BH disabled OK, just recording QSes. 
*/ if (should_resched()) { local_bh_enable(); __cond_resched(); diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c index 695f9773bb60..627b3c34b821 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -608,7 +608,7 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m) avg_atom = p->se.sum_exec_runtime; if (nr_switches) - do_div(avg_atom, nr_switches); + avg_atom = div64_ul(avg_atom, nr_switches); else avg_atom = -1LL; diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c index 88c9c65a430d..fe75444ae7ec 100644 --- a/kernel/time/alarmtimer.c +++ b/kernel/time/alarmtimer.c @@ -585,9 +585,14 @@ static int alarm_timer_set(struct k_itimer *timr, int flags, struct itimerspec *new_setting, struct itimerspec *old_setting) { + ktime_t exp; + if (!rtcdev) return -ENOTSUPP; + if (flags & ~TIMER_ABSTIME) + return -EINVAL; + if (old_setting) alarm_timer_get(timr, old_setting); @@ -597,8 +602,16 @@ static int alarm_timer_set(struct k_itimer *timr, int flags, /* start the timer */ timr->it.alarm.interval = timespec_to_ktime(new_setting->it_interval); - alarm_start(&timr->it.alarm.alarmtimer, - timespec_to_ktime(new_setting->it_value)); + exp = timespec_to_ktime(new_setting->it_value); + /* Convert (if necessary) to absolute time */ + if (flags != TIMER_ABSTIME) { + ktime_t now; + + now = alarm_bases[timr->it.alarm.alarmtimer.type].gettime(); + exp = ktime_add(now, exp); + } + + alarm_start(&timr->it.alarm.alarmtimer, exp); return 0; } @@ -730,6 +743,9 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags, if (!alarmtimer_get_rtcdev()) return -ENOTSUPP; + if (flags & ~TIMER_ABSTIME) + return -EINVAL; + if (!capable(CAP_WAKE_ALARM)) return -EPERM; diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 5b372e3ed675..ac9d1dad630b 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -265,12 +265,12 @@ static void update_ftrace_function(void) func = ftrace_ops_list_func; } + update_function_graph_func(); + /* If there's no change, then do nothing more here */ if (ftrace_trace_function == func) return; - update_function_graph_func(); - /* * If we are using the list function, it doesn't care * about the function_trace_ops. 
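The kernel/time/alarmtimer.c change above brings alarm_timer_set() and alarm_timer_nsleep() in line with the other POSIX clocks: unknown flag bits are rejected with -EINVAL, and a value supplied without TIMER_ABSTIME is treated as relative and converted to absolute by adding the current clock reading before the alarm is armed. A minimal user-space sketch of that same contract (hypothetical, not part of the patch; it uses CLOCK_REALTIME rather than the alarm clocks so it runs without CAP_WAKE_ALARM, and the helper name is illustrative only) might look like this:

/* Relative vs. absolute arming of a POSIX timer; link with -lrt on older glibc. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static void expired(union sigval sv)
{
	(void)sv;
	write(STDOUT_FILENO, "timer fired\n", 12);
}

int main(void)
{
	struct sigevent sev;
	struct itimerspec its;
	struct timespec now;
	timer_t tid;

	memset(&sev, 0, sizeof(sev));
	sev.sigev_notify = SIGEV_THREAD;
	sev.sigev_notify_function = expired;

	if (timer_create(CLOCK_REALTIME, &sev, &tid))
		return 1;

	/* Relative arming: no TIMER_ABSTIME, so "2 seconds from now". */
	memset(&its, 0, sizeof(its));
	its.it_value.tv_sec = 2;
	timer_settime(tid, 0, &its, NULL);
	sleep(3);	/* first expiry */

	/* Absolute arming: the caller adds "now" itself, which is what
	 * alarm_timer_set() now does internally for the relative case. */
	clock_gettime(CLOCK_REALTIME, &now);
	its.it_value.tv_sec = now.tv_sec + 2;
	its.it_value.tv_nsec = now.tv_nsec;
	timer_settime(tid, TIMER_ABSTIME, &its, NULL);
	sleep(3);	/* second expiry */

	timer_delete(tid);
	return 0;
}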
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c index 7c56c3d06943..ff7027199a9a 100644 --- a/kernel/trace/ring_buffer.c +++ b/kernel/trace/ring_buffer.c @@ -616,10 +616,6 @@ int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu, struct ring_buffer_per_cpu *cpu_buffer; struct rb_irq_work *work; - if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) || - (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu))) - return POLLIN | POLLRDNORM; - if (cpu == RING_BUFFER_ALL_CPUS) work = &buffer->irq_work; else { diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index f243444a3772..bda9621638cc 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -466,6 +466,12 @@ int __trace_puts(unsigned long ip, const char *str, int size) struct print_entry *entry; unsigned long irq_flags; int alloc; + int pc; + + if (!(trace_flags & TRACE_ITER_PRINTK)) + return 0; + + pc = preempt_count(); if (unlikely(tracing_selftest_running || tracing_disabled)) return 0; @@ -475,7 +481,7 @@ int __trace_puts(unsigned long ip, const char *str, int size) local_save_flags(irq_flags); buffer = global_trace.trace_buffer.buffer; event = trace_buffer_lock_reserve(buffer, TRACE_PRINT, alloc, - irq_flags, preempt_count()); + irq_flags, pc); if (!event) return 0; @@ -492,6 +498,7 @@ int __trace_puts(unsigned long ip, const char *str, int size) entry->buf[size] = '\0'; __buffer_unlock_commit(buffer, event); + ftrace_trace_stack(buffer, irq_flags, 4, pc); return size; } @@ -509,6 +516,12 @@ int __trace_bputs(unsigned long ip, const char *str) struct bputs_entry *entry; unsigned long irq_flags; int size = sizeof(struct bputs_entry); + int pc; + + if (!(trace_flags & TRACE_ITER_PRINTK)) + return 0; + + pc = preempt_count(); if (unlikely(tracing_selftest_running || tracing_disabled)) return 0; @@ -516,7 +529,7 @@ int __trace_bputs(unsigned long ip, const char *str) local_save_flags(irq_flags); buffer = global_trace.trace_buffer.buffer; event = trace_buffer_lock_reserve(buffer, TRACE_BPUTS, size, - irq_flags, preempt_count()); + irq_flags, pc); if (!event) return 0; @@ -525,6 +538,7 @@ int __trace_bputs(unsigned long ip, const char *str) entry->str = str; __buffer_unlock_commit(buffer, event); + ftrace_trace_stack(buffer, irq_flags, 4, pc); return 1; } diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c index f99e0b3bca8c..2de53628689f 100644 --- a/kernel/trace/trace_events.c +++ b/kernel/trace/trace_events.c @@ -470,6 +470,7 @@ static void remove_event_file_dir(struct ftrace_event_file *file) list_del(&file->list); remove_subsystem(file->system); + free_event_filter(file->filter); kmem_cache_free(file_cachep, file); } diff --git a/kernel/workqueue.c b/kernel/workqueue.c index 6203d2900877..35974ac69600 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -3284,6 +3284,7 @@ int workqueue_sysfs_register(struct workqueue_struct *wq) } } + dev_set_uevent_suppress(&wq_dev->dev, false); kobject_uevent(&wq_dev->dev.kobj, KOBJ_ADD); return 0; } @@ -4879,7 +4880,7 @@ static void __init wq_numa_init(void) BUG_ON(!tbl); for_each_node(node) - BUG_ON(!alloc_cpumask_var_node(&tbl[node], GFP_KERNEL, + BUG_ON(!zalloc_cpumask_var_node(&tbl[node], GFP_KERNEL, node_online(node) ? 
node : NUMA_NO_NODE)); for_each_possible_cpu(cpu) { diff --git a/lib/cpumask.c b/lib/cpumask.c index c101230658eb..b6513a9f2892 100644 --- a/lib/cpumask.c +++ b/lib/cpumask.c @@ -191,7 +191,7 @@ int cpumask_set_cpu_local_first(int i, int numa_node, cpumask_t *dstp) i %= num_online_cpus(); - if (!cpumask_of_node(numa_node)) { + if (numa_node == -1 || !cpumask_of_node(numa_node)) { /* Use all online cpu's for non numa aware system */ cpumask_copy(mask, cpu_online_mask); } else { diff --git a/mm/mempolicy.c b/mm/mempolicy.c index eb58de19f815..8f5330d74f47 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2139,7 +2139,6 @@ struct mempolicy *__mpol_dup(struct mempolicy *old) } else *new = *old; - rcu_read_lock(); if (current_cpuset_is_being_rebound()) { nodemask_t mems = cpuset_mems_allowed(current); if (new->flags & MPOL_F_REBINDING) @@ -2147,7 +2146,6 @@ struct mempolicy *__mpol_dup(struct mempolicy *old) else mpol_rebind_policy(new, &mems, MPOL_REBIND_ONCE); } - rcu_read_unlock(); atomic_set(&new->refcnt, 1); return new; } diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c index ad2ac3c00398..dd11f612e03e 100644 --- a/net/8021q/vlan_dev.c +++ b/net/8021q/vlan_dev.c @@ -627,8 +627,6 @@ static void vlan_dev_uninit(struct net_device *dev) struct vlan_dev_priv *vlan = vlan_dev_priv(dev); int i; - free_percpu(vlan->vlan_pcpu_stats); - vlan->vlan_pcpu_stats = NULL; for (i = 0; i < ARRAY_SIZE(vlan->egress_priority_map); i++) { while ((pm = vlan->egress_priority_map[i]) != NULL) { vlan->egress_priority_map[i] = pm->next; @@ -785,6 +783,15 @@ static const struct net_device_ops vlan_netdev_ops = { .ndo_get_lock_subclass = vlan_dev_get_lock_subclass, }; +static void vlan_dev_free(struct net_device *dev) +{ + struct vlan_dev_priv *vlan = vlan_dev_priv(dev); + + free_percpu(vlan->vlan_pcpu_stats); + vlan->vlan_pcpu_stats = NULL; + free_netdev(dev); +} + void vlan_setup(struct net_device *dev) { ether_setup(dev); @@ -794,7 +801,7 @@ void vlan_setup(struct net_device *dev) dev->tx_queue_len = 0; dev->netdev_ops = &vlan_netdev_ops; - dev->destructor = free_netdev; + dev->destructor = vlan_dev_free; dev->ethtool_ops = &vlan_ethtool_ops; memset(dev->broadcast, 0, ETH_ALEN); diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c index 01a1082e02b3..bfcf6be1d665 100644 --- a/net/appletalk/ddp.c +++ b/net/appletalk/ddp.c @@ -1489,8 +1489,6 @@ static int atalk_rcv(struct sk_buff *skb, struct net_device *dev, goto drop; /* Queue packet (standard) */ - skb->sk = sock; - if (sock_queue_rcv_skb(sock, skb) < 0) goto drop; @@ -1644,7 +1642,6 @@ static int atalk_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr if (!skb) goto out; - skb->sk = sk; skb_reserve(skb, ddp_dl->header_length); skb_reserve(skb, dev->hard_header_len); skb->dev = dev; diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c index ca01d1861854..a7a27bc2c0b1 100644 --- a/net/bluetooth/hci_conn.c +++ b/net/bluetooth/hci_conn.c @@ -289,10 +289,20 @@ static void hci_conn_timeout(struct work_struct *work) { struct hci_conn *conn = container_of(work, struct hci_conn, disc_work.work); + int refcnt = atomic_read(&conn->refcnt); BT_DBG("hcon %p state %s", conn, state_to_string(conn->state)); - if (atomic_read(&conn->refcnt)) + WARN_ON(refcnt < 0); + + /* FIXME: It was observed that in pairing failed scenario, refcnt + * drops below 0. Probably this is because l2cap_conn_del calls + * l2cap_chan_del for each channel, and inside l2cap_chan_del conn is + * dropped. 
After that loop hci_chan_del is called which also drops + * conn. For now make sure that ACL is alive if refcnt is higher then 0, + * otherwise drop it. + */ + if (refcnt > 0) return; switch (conn->state) { diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c index f2829a7932e2..e33a982161c1 100644 --- a/net/bluetooth/smp.c +++ b/net/bluetooth/smp.c @@ -385,6 +385,16 @@ static const u8 gen_method[5][5] = { { CFM_PASSKEY, CFM_PASSKEY, REQ_PASSKEY, JUST_WORKS, OVERLAP }, }; +static u8 get_auth_method(struct smp_chan *smp, u8 local_io, u8 remote_io) +{ + /* If either side has unknown io_caps, use JUST WORKS */ + if (local_io > SMP_IO_KEYBOARD_DISPLAY || + remote_io > SMP_IO_KEYBOARD_DISPLAY) + return JUST_WORKS; + + return gen_method[remote_io][local_io]; +} + static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth, u8 local_io, u8 remote_io) { @@ -401,14 +411,11 @@ static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth, BT_DBG("tk_request: auth:%d lcl:%d rem:%d", auth, local_io, remote_io); /* If neither side wants MITM, use JUST WORKS */ - /* If either side has unknown io_caps, use JUST WORKS */ /* Otherwise, look up method from the table */ - if (!(auth & SMP_AUTH_MITM) || - local_io > SMP_IO_KEYBOARD_DISPLAY || - remote_io > SMP_IO_KEYBOARD_DISPLAY) + if (!(auth & SMP_AUTH_MITM)) method = JUST_WORKS; else - method = gen_method[remote_io][local_io]; + method = get_auth_method(smp, local_io, remote_io); /* If not bonding, don't ask user to confirm a Zero TK */ if (!(auth & SMP_AUTH_BONDING) && method == JUST_CFM) @@ -669,7 +676,7 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb) { struct smp_cmd_pairing rsp, *req = (void *) skb->data; struct smp_chan *smp; - u8 key_size, auth; + u8 key_size, auth, sec_level; int ret; BT_DBG("conn %p", conn); @@ -695,7 +702,19 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb) /* We didn't start the pairing, so match remote */ auth = req->auth_req; - conn->hcon->pending_sec_level = authreq_to_seclevel(auth); + sec_level = authreq_to_seclevel(auth); + if (sec_level > conn->hcon->pending_sec_level) + conn->hcon->pending_sec_level = sec_level; + + /* If we need MITM check that it can be acheived */ + if (conn->hcon->pending_sec_level >= BT_SECURITY_HIGH) { + u8 method; + + method = get_auth_method(smp, conn->hcon->io_capability, + req->io_capability); + if (method == JUST_WORKS || method == JUST_CFM) + return SMP_AUTH_REQUIREMENTS; + } build_pairing_cmd(conn, req, &rsp, auth); @@ -743,6 +762,16 @@ static u8 smp_cmd_pairing_rsp(struct l2cap_conn *conn, struct sk_buff *skb) if (check_enc_key_size(conn, key_size)) return SMP_ENC_KEY_SIZE; + /* If we need MITM check that it can be acheived */ + if (conn->hcon->pending_sec_level >= BT_SECURITY_HIGH) { + u8 method; + + method = get_auth_method(smp, req->io_capability, + rsp->io_capability); + if (method == JUST_WORKS || method == JUST_CFM) + return SMP_AUTH_REQUIREMENTS; + } + get_random_bytes(smp->prnd, sizeof(smp->prnd)); smp->prsp[0] = SMP_CMD_PAIRING_RSP; @@ -838,6 +867,7 @@ static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb) struct smp_cmd_pairing cp; struct hci_conn *hcon = conn->hcon; struct smp_chan *smp; + u8 sec_level; BT_DBG("conn %p", conn); @@ -847,7 +877,9 @@ static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb) if (!(conn->hcon->link_mode & HCI_LM_MASTER)) return SMP_CMD_NOTSUPP; - hcon->pending_sec_level = authreq_to_seclevel(rp->auth_req); + sec_level = 
authreq_to_seclevel(rp->auth_req); + if (sec_level > hcon->pending_sec_level) + hcon->pending_sec_level = sec_level; if (smp_ltk_encrypt(conn, hcon->pending_sec_level)) return 0; @@ -901,9 +933,12 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level) if (smp_sufficient_security(hcon, sec_level)) return 1; + if (sec_level > hcon->pending_sec_level) + hcon->pending_sec_level = sec_level; + if (hcon->link_mode & HCI_LM_MASTER) - if (smp_ltk_encrypt(conn, sec_level)) - goto done; + if (smp_ltk_encrypt(conn, hcon->pending_sec_level)) + return 0; if (test_and_set_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags)) return 0; @@ -918,7 +953,7 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level) * requires it. */ if (hcon->io_capability != HCI_IO_NO_INPUT_OUTPUT || - sec_level > BT_SECURITY_MEDIUM) + hcon->pending_sec_level > BT_SECURITY_MEDIUM) authreq |= SMP_AUTH_MITM; if (hcon->link_mode & HCI_LM_MASTER) { @@ -937,9 +972,6 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level) set_bit(SMP_FLAG_INITIATOR, &smp->flags); -done: - hcon->pending_sec_level = sec_level; - return 0; } diff --git a/net/core/dev.c b/net/core/dev.c index 30eedf677913..7990984ca364 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -148,6 +148,9 @@ struct list_head ptype_all __read_mostly; /* Taps */ static struct list_head offload_base __read_mostly; static int netif_rx_internal(struct sk_buff *skb); +static int call_netdevice_notifiers_info(unsigned long val, + struct net_device *dev, + struct netdev_notifier_info *info); /* * The @dev_base_head list is protected by @dev_base_lock and the rtnl @@ -1207,7 +1210,11 @@ EXPORT_SYMBOL(netdev_features_change); void netdev_state_change(struct net_device *dev) { if (dev->flags & IFF_UP) { - call_netdevice_notifiers(NETDEV_CHANGE, dev); + struct netdev_notifier_change_info change_info; + + change_info.flags_changed = 0; + call_netdevice_notifiers_info(NETDEV_CHANGE, dev, + &change_info.info); rtmsg_ifinfo(RTM_NEWLINK, dev, 0, GFP_KERNEL); } } @@ -4227,9 +4234,8 @@ static int process_backlog(struct napi_struct *napi, int quota) #endif napi->weight = weight_p; local_irq_disable(); - while (work < quota) { + while (1) { struct sk_buff *skb; - unsigned int qlen; while ((skb = __skb_dequeue(&sd->process_queue))) { local_irq_enable(); @@ -4243,24 +4249,24 @@ static int process_backlog(struct napi_struct *napi, int quota) } rps_lock(sd); - qlen = skb_queue_len(&sd->input_pkt_queue); - if (qlen) - skb_queue_splice_tail_init(&sd->input_pkt_queue, - &sd->process_queue); - - if (qlen < quota - work) { + if (skb_queue_empty(&sd->input_pkt_queue)) { /* * Inline a custom version of __napi_complete(). * only current cpu owns and manipulates this napi, - * and NAPI_STATE_SCHED is the only possible flag set on backlog. - * we can use a plain write instead of clear_bit(), + * and NAPI_STATE_SCHED is the only possible flag set + * on backlog. + * We can use a plain write instead of clear_bit(), * and we dont need an smp_mb() memory barrier. 
*/ list_del(&napi->poll_list); napi->state = 0; + rps_unlock(sd); - quota = work + qlen; + break; } + + skb_queue_splice_tail_init(&sd->input_pkt_queue, + &sd->process_queue); rps_unlock(sd); } local_irq_enable(); diff --git a/net/core/neighbour.c b/net/core/neighbour.c index 32d872eec7f5..559890b0f0a2 100644 --- a/net/core/neighbour.c +++ b/net/core/neighbour.c @@ -3059,11 +3059,12 @@ int neigh_sysctl_register(struct net_device *dev, struct neigh_parms *p, memset(&t->neigh_vars[NEIGH_VAR_GC_INTERVAL], 0, sizeof(t->neigh_vars[NEIGH_VAR_GC_INTERVAL])); } else { + struct neigh_table *tbl = p->tbl; dev_name_source = "default"; - t->neigh_vars[NEIGH_VAR_GC_INTERVAL].data = (int *)(p + 1); - t->neigh_vars[NEIGH_VAR_GC_THRESH1].data = (int *)(p + 1) + 1; - t->neigh_vars[NEIGH_VAR_GC_THRESH2].data = (int *)(p + 1) + 2; - t->neigh_vars[NEIGH_VAR_GC_THRESH3].data = (int *)(p + 1) + 3; + t->neigh_vars[NEIGH_VAR_GC_INTERVAL].data = &tbl->gc_interval; + t->neigh_vars[NEIGH_VAR_GC_THRESH1].data = &tbl->gc_thresh1; + t->neigh_vars[NEIGH_VAR_GC_THRESH2].data = &tbl->gc_thresh2; + t->neigh_vars[NEIGH_VAR_GC_THRESH3].data = &tbl->gc_thresh3; } if (handler) { diff --git a/net/ipv4/gre_demux.c b/net/ipv4/gre_demux.c index 4e9619bca732..0485bf7f8f03 100644 --- a/net/ipv4/gre_demux.c +++ b/net/ipv4/gre_demux.c @@ -68,6 +68,7 @@ void gre_build_header(struct sk_buff *skb, const struct tnl_ptk_info *tpi, skb_push(skb, hdr_len); + skb_reset_transport_header(skb); greh = (struct gre_base_hdr *)skb->data; greh->flags = tnl_flags_to_gre_flags(tpi->flags); greh->protocol = tpi->proto; diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index 79c3d947a481..42b7bcf8045b 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -739,8 +739,6 @@ static void icmp_unreach(struct sk_buff *skb) /* fall through */ case 0: info = ntohs(icmph->un.frag.mtu); - if (!info) - goto out; } break; case ICMP_SR_FAILED: diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c index 6748d420f714..db710b059bab 100644 --- a/net/ipv4/igmp.c +++ b/net/ipv4/igmp.c @@ -1944,6 +1944,10 @@ int ip_mc_leave_group(struct sock *sk, struct ip_mreqn *imr) rtnl_lock(); in_dev = ip_mc_find_dev(net, imr); + if (!in_dev) { + ret = -ENODEV; + goto out; + } ifindex = imr->imr_ifindex; for (imlp = &inet->mc_list; (iml = rtnl_dereference(*imlp)) != NULL; @@ -1961,16 +1965,14 @@ int ip_mc_leave_group(struct sock *sk, struct ip_mreqn *imr) *imlp = iml->next_rcu; - if (in_dev) - ip_mc_dec_group(in_dev, group); + ip_mc_dec_group(in_dev, group); rtnl_unlock(); /* decrease mem now to avoid the memleak warning */ atomic_sub(sizeof(*iml), &sk->sk_omem_alloc); kfree_rcu(iml, rcu); return 0; } - if (!in_dev) - ret = -ENODEV; +out: rtnl_unlock(); return ret; } diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c index 54b6731dab55..6f9de61dce5f 100644 --- a/net/ipv4/ip_tunnel.c +++ b/net/ipv4/ip_tunnel.c @@ -169,6 +169,7 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, hlist_for_each_entry_rcu(t, head, hash_node) { if (remote != t->parms.iph.daddr || + t->parms.iph.saddr != 0 || !(t->dev->flags & IFF_UP)) continue; @@ -185,10 +186,11 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, head = &itn->tunnels[hash]; hlist_for_each_entry_rcu(t, head, hash_node) { - if ((local != t->parms.iph.saddr && - (local != t->parms.iph.daddr || - !ipv4_is_multicast(local))) || - !(t->dev->flags & IFF_UP)) + if ((local != t->parms.iph.saddr || t->parms.iph.daddr != 0) && + (local != t->parms.iph.daddr || !ipv4_is_multicast(local))) + continue; + + if (!(t->dev->flags & 
IFF_UP)) continue; if (!ip_tunnel_key_match(&t->parms, flags, key)) @@ -205,6 +207,8 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, hlist_for_each_entry_rcu(t, head, hash_node) { if (t->parms.i_key != key || + t->parms.iph.saddr != 0 || + t->parms.iph.daddr != 0 || !(t->dev->flags & IFF_UP)) continue; diff --git a/net/ipv4/route.c b/net/ipv4/route.c index 082239ffe34a..3162ea923ded 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -1010,7 +1010,7 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) const struct iphdr *iph = (const struct iphdr *) skb->data; struct flowi4 fl4; struct rtable *rt; - struct dst_entry *dst; + struct dst_entry *odst = NULL; bool new = false; bh_lock_sock(sk); @@ -1018,16 +1018,17 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) if (!ip_sk_accept_pmtu(sk)) goto out; - rt = (struct rtable *) __sk_dst_get(sk); + odst = sk_dst_get(sk); - if (sock_owned_by_user(sk) || !rt) { + if (sock_owned_by_user(sk) || !odst) { __ipv4_sk_update_pmtu(skb, sk, mtu); goto out; } __build_flow_key(&fl4, sk, iph, 0, 0, 0, 0, 0); - if (!__sk_dst_check(sk, 0)) { + rt = (struct rtable *)odst; + if (odst->obsolete && odst->ops->check(odst, 0) == NULL) { rt = ip_route_output_flow(sock_net(sk), &fl4, sk); if (IS_ERR(rt)) goto out; @@ -1037,8 +1038,7 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) __ip_rt_update_pmtu((struct rtable *) rt->dst.path, &fl4, mtu); - dst = dst_check(&rt->dst, 0); - if (!dst) { + if (!dst_check(&rt->dst, 0)) { if (new) dst_release(&rt->dst); @@ -1050,10 +1050,11 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) } if (new) - __sk_dst_set(sk, &rt->dst); + sk_dst_set(sk, &rt->dst); out: bh_unlock_sock(sk); + dst_release(odst); } EXPORT_SYMBOL_GPL(ipv4_sk_update_pmtu); diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index eb1dde37e678..9d2118e5fbc7 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1108,7 +1108,7 @@ int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, if (unlikely(tp->repair)) { if (tp->repair_queue == TCP_RECV_QUEUE) { copied = tcp_send_rcvq(sk, msg, size); - goto out; + goto out_nopush; } err = -EINVAL; @@ -1282,6 +1282,7 @@ wait_for_memory: out: if (copied) tcp_push(sk, flags, mss_now, tp->nonagle, size_goal); +out_nopush: release_sock(sk); return copied + copied_syn; diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index b5c23756965a..40639c288dc2 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -1106,7 +1106,7 @@ static bool tcp_check_dsack(struct sock *sk, const struct sk_buff *ack_skb, } /* D-SACK for already forgotten data... Do dumb counting. */ - if (dup_sack && tp->undo_marker && tp->undo_retrans && + if (dup_sack && tp->undo_marker && tp->undo_retrans > 0 && !after(end_seq_0, prior_snd_una) && after(end_seq_0, tp->undo_marker)) tp->undo_retrans--; @@ -1187,7 +1187,7 @@ static u8 tcp_sacktag_one(struct sock *sk, /* Account D-SACK for retransmitted packet. 
*/ if (dup_sack && (sacked & TCPCB_RETRANS)) { - if (tp->undo_marker && tp->undo_retrans && + if (tp->undo_marker && tp->undo_retrans > 0 && after(end_seq, tp->undo_marker)) tp->undo_retrans--; if (sacked & TCPCB_SACKED_ACKED) @@ -1893,7 +1893,7 @@ static void tcp_clear_retrans_partial(struct tcp_sock *tp) tp->lost_out = 0; tp->undo_marker = 0; - tp->undo_retrans = 0; + tp->undo_retrans = -1; } void tcp_clear_retrans(struct tcp_sock *tp) @@ -2665,7 +2665,7 @@ static void tcp_enter_recovery(struct sock *sk, bool ece_ack) tp->prior_ssthresh = 0; tp->undo_marker = tp->snd_una; - tp->undo_retrans = tp->retrans_out; + tp->undo_retrans = tp->retrans_out ? : -1; if (inet_csk(sk)->icsk_ca_state < TCP_CA_CWR) { if (!ece_ack) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index d92bce0ea24e..179b51e6bda3 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2525,8 +2525,6 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb) if (!tp->retrans_stamp) tp->retrans_stamp = TCP_SKB_CB(skb)->when; - tp->undo_retrans += tcp_skb_pcount(skb); - /* snd_nxt is stored to detect loss of retransmitted segment, * see tcp_input.c tcp_sacktag_write_queue(). */ @@ -2534,6 +2532,10 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb) } else if (err != -EBUSY) { NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL); } + + if (tp->undo_retrans < 0) + tp->undo_retrans = 0; + tp->undo_retrans += tcp_skb_pcount(skb); return err; } diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index d92f94b7e402..7d5a8661df76 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -1588,8 +1588,11 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) goto csum_error; - if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) + if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) { + UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_RCVBUFERRORS, + is_udplite); goto drop; + } rc = 0; diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 08b367c6b9cf..617f0958e164 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -1301,8 +1301,17 @@ int igmp6_event_query(struct sk_buff *skb) len = ntohs(ipv6_hdr(skb)->payload_len) + sizeof(struct ipv6hdr); len -= skb_network_header_len(skb); - /* Drop queries with not link local source */ - if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) + /* RFC3810 6.2 + * Upon reception of an MLD message that contains a Query, the node + * checks if the source address of the message is a valid link-local + * address, if the Hop Limit is set to 1, and if the Router Alert + * option is present in the Hop-By-Hop Options header of the IPv6 + * packet. If any of these checks fails, the packet is dropped. 
+ */ + if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL) || + ipv6_hdr(skb)->hop_limit != 1 || + !(IP6CB(skb)->flags & IP6SKB_ROUTERALERT) || + IP6CB(skb)->ra != htons(IPV6_OPT_ROUTERALERT_MLD)) return -EINVAL; idev = __in6_dev_get(skb->dev); diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c index 95c834799288..7092ff78fd84 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -674,8 +674,11 @@ int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) goto csum_error; } - if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) + if (sk_rcvqueues_full(sk, skb, sk->sk_rcvbuf)) { + UDP6_INC_STATS_BH(sock_net(sk), + UDP_MIB_RCVBUFERRORS, is_udplite); goto drop; + } skb_dst_drop(skb); @@ -690,6 +693,7 @@ int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) bh_unlock_sock(sk); return rc; + csum_error: UDP6_INC_STATS_BH(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite); drop: diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c index 950909f04ee6..13752d96275e 100644 --- a/net/l2tp/l2tp_ppp.c +++ b/net/l2tp/l2tp_ppp.c @@ -1365,7 +1365,7 @@ static int pppol2tp_setsockopt(struct socket *sock, int level, int optname, int err; if (level != SOL_PPPOL2TP) - return udp_prot.setsockopt(sk, level, optname, optval, optlen); + return -EINVAL; if (optlen < sizeof(int)) return -EINVAL; @@ -1491,7 +1491,7 @@ static int pppol2tp_getsockopt(struct socket *sock, int level, int optname, struct pppol2tp_session *ps; if (level != SOL_PPPOL2TP) - return udp_prot.getsockopt(sk, level, optname, optval, optlen); + return -EINVAL; if (get_user(len, optlen)) return -EFAULT; diff --git a/net/mac80211/util.c b/net/mac80211/util.c index 6886601afe1c..a6cda52ed920 100644 --- a/net/mac80211/util.c +++ b/net/mac80211/util.c @@ -1096,11 +1096,12 @@ void ieee80211_send_auth(struct ieee80211_sub_if_data *sdata, int err; /* 24 + 6 = header + auth_algo + auth_transaction + status_code */ - skb = dev_alloc_skb(local->hw.extra_tx_headroom + 24 + 6 + extra_len); + skb = dev_alloc_skb(local->hw.extra_tx_headroom + IEEE80211_WEP_IV_LEN + + 24 + 6 + extra_len + IEEE80211_WEP_ICV_LEN); if (!skb) return; - skb_reserve(skb, local->hw.extra_tx_headroom); + skb_reserve(skb, local->hw.extra_tx_headroom + IEEE80211_WEP_IV_LEN); mgmt = (struct ieee80211_mgmt *) skb_put(skb, 24 + 6); memset(mgmt, 0, 24 + 6); diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c index 15c731f03fa6..e6fac7e3db52 100644 --- a/net/netlink/af_netlink.c +++ b/net/netlink/af_netlink.c @@ -636,7 +636,7 @@ static unsigned int netlink_poll(struct file *file, struct socket *sock, while (nlk->cb_running && netlink_dump_space(nlk)) { err = netlink_dump(sk); if (err < 0) { - sk->sk_err = err; + sk->sk_err = -err; sk->sk_error_report(sk); break; } @@ -2483,7 +2483,7 @@ static int netlink_recvmsg(struct kiocb *kiocb, struct socket *sock, atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) { ret = netlink_dump(sk); if (ret) { - sk->sk_err = ret; + sk->sk_err = -ret; sk->sk_error_report(sk); } } diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c index c36856a457ca..e70d8b18e962 100644 --- a/net/openvswitch/actions.c +++ b/net/openvswitch/actions.c @@ -551,6 +551,8 @@ static int do_execute_actions(struct datapath *dp, struct sk_buff *skb, case OVS_ACTION_ATTR_SAMPLE: err = sample(dp, skb, a); + if (unlikely(err)) /* skb already freed. 
*/ + return err; break; } diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c index 0d407bca81e3..9db4bf6740d1 100644 --- a/net/openvswitch/datapath.c +++ b/net/openvswitch/datapath.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2013 Nicira, Inc. + * Copyright (c) 2007-2014 Nicira, Inc. * * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public @@ -276,7 +276,7 @@ void ovs_dp_process_received_packet(struct vport *p, struct sk_buff *skb) OVS_CB(skb)->flow = flow; OVS_CB(skb)->pkt_key = &key; - ovs_flow_stats_update(OVS_CB(skb)->flow, skb); + ovs_flow_stats_update(OVS_CB(skb)->flow, key.tp.flags, skb); ovs_execute_actions(dp, skb); stats_counter = &stats->n_hit; @@ -889,8 +889,11 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info) } /* The unmasked key has to be the same for flow updates. */ if (unlikely(!ovs_flow_cmp_unmasked_key(flow, &match))) { - error = -EEXIST; - goto err_unlock_ovs; + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); + if (!flow) { + error = -ENOENT; + goto err_unlock_ovs; + } } /* Update actions. */ old_acts = ovsl_dereference(flow->sf_acts); @@ -981,16 +984,12 @@ static int ovs_flow_cmd_set(struct sk_buff *skb, struct genl_info *info) goto err_unlock_ovs; } /* Check that the flow exists. */ - flow = ovs_flow_tbl_lookup(&dp->table, &key); + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); if (unlikely(!flow)) { error = -ENOENT; goto err_unlock_ovs; } - /* The unmasked key has to be the same for flow updates. */ - if (unlikely(!ovs_flow_cmp_unmasked_key(flow, &match))) { - error = -EEXIST; - goto err_unlock_ovs; - } + /* Update actions, if present. */ if (likely(acts)) { old_acts = ovsl_dereference(flow->sf_acts); @@ -1063,8 +1062,8 @@ static int ovs_flow_cmd_get(struct sk_buff *skb, struct genl_info *info) goto unlock; } - flow = ovs_flow_tbl_lookup(&dp->table, &key); - if (!flow || !ovs_flow_cmp_unmasked_key(flow, &match)) { + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); + if (!flow) { err = -ENOENT; goto unlock; } @@ -1113,8 +1112,8 @@ static int ovs_flow_cmd_del(struct sk_buff *skb, struct genl_info *info) goto unlock; } - flow = ovs_flow_tbl_lookup(&dp->table, &key); - if (unlikely(!flow || !ovs_flow_cmp_unmasked_key(flow, &match))) { + flow = ovs_flow_tbl_lookup_exact(&dp->table, &match); + if (unlikely(!flow)) { err = -ENOENT; goto unlock; } diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c index 334751cb1528..d07ab538fc9d 100644 --- a/net/openvswitch/flow.c +++ b/net/openvswitch/flow.c @@ -61,10 +61,10 @@ u64 ovs_flow_used_time(unsigned long flow_jiffies) #define TCP_FLAGS_BE16(tp) (*(__be16 *)&tcp_flag_word(tp) & htons(0x0FFF)) -void ovs_flow_stats_update(struct sw_flow *flow, struct sk_buff *skb) +void ovs_flow_stats_update(struct sw_flow *flow, __be16 tcp_flags, + struct sk_buff *skb) { struct flow_stats *stats; - __be16 tcp_flags = flow->key.tp.flags; int node = numa_node_id(); stats = rcu_dereference(flow->stats[node]); diff --git a/net/openvswitch/flow.h b/net/openvswitch/flow.h index ac395d2cd821..5e5aaed3a85b 100644 --- a/net/openvswitch/flow.h +++ b/net/openvswitch/flow.h @@ -1,5 +1,5 @@ /* - * Copyright (c) 2007-2013 Nicira, Inc. + * Copyright (c) 2007-2014 Nicira, Inc. 
* * This program is free software; you can redistribute it and/or * modify it under the terms of version 2 of the GNU General Public @@ -180,7 +180,8 @@ struct arp_eth_header { unsigned char ar_tip[4]; /* target IP address */ } __packed; -void ovs_flow_stats_update(struct sw_flow *, struct sk_buff *); +void ovs_flow_stats_update(struct sw_flow *, __be16 tcp_flags, + struct sk_buff *); void ovs_flow_stats_get(const struct sw_flow *, struct ovs_flow_stats *, unsigned long *used, __be16 *tcp_flags); void ovs_flow_stats_clear(struct sw_flow *); diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c index 574c3abc9b30..cf2d853646f0 100644 --- a/net/openvswitch/flow_table.c +++ b/net/openvswitch/flow_table.c @@ -456,6 +456,22 @@ struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *tbl, return ovs_flow_tbl_lookup_stats(tbl, key, &n_mask_hit); } +struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl, + struct sw_flow_match *match) +{ + struct table_instance *ti = rcu_dereference_ovsl(tbl->ti); + struct sw_flow_mask *mask; + struct sw_flow *flow; + + /* Always called under ovs-mutex. */ + list_for_each_entry(mask, &tbl->mask_list, list) { + flow = masked_flow_lookup(ti, match->key, mask); + if (flow && ovs_flow_cmp_unmasked_key(flow, match)) /* Found */ + return flow; + } + return NULL; +} + int ovs_flow_tbl_num_masks(const struct flow_table *table) { struct sw_flow_mask *mask; diff --git a/net/openvswitch/flow_table.h b/net/openvswitch/flow_table.h index ca8a5820f615..5918bff7f3f6 100644 --- a/net/openvswitch/flow_table.h +++ b/net/openvswitch/flow_table.h @@ -76,7 +76,8 @@ struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *, u32 *n_mask_hit); struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *, const struct sw_flow_key *); - +struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl, + struct sw_flow_match *match); bool ovs_flow_cmp_unmasked_key(const struct sw_flow *flow, struct sw_flow_match *match); diff --git a/net/openvswitch/vport-gre.c b/net/openvswitch/vport-gre.c index 35ec4fed09e2..f49148a07da2 100644 --- a/net/openvswitch/vport-gre.c +++ b/net/openvswitch/vport-gre.c @@ -110,6 +110,22 @@ static int gre_rcv(struct sk_buff *skb, return PACKET_RCVD; } +/* Called with rcu_read_lock and BH disabled. */ +static int gre_err(struct sk_buff *skb, u32 info, + const struct tnl_ptk_info *tpi) +{ + struct ovs_net *ovs_net; + struct vport *vport; + + ovs_net = net_generic(dev_net(skb->dev), ovs_net_id); + vport = rcu_dereference(ovs_net->vport_net.gre_vport); + + if (unlikely(!vport)) + return PACKET_REJECT; + else + return PACKET_RCVD; +} + static int gre_tnl_send(struct vport *vport, struct sk_buff *skb) { struct net *net = ovs_dp_get_net(vport->dp); @@ -186,6 +202,7 @@ error: static struct gre_cisco_protocol gre_protocol = { .handler = gre_rcv, + .err_handler = gre_err, .priority = 1, }; diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c index 85c64658bd0b..b6842fdb53d4 100644 --- a/net/sctp/ulpevent.c +++ b/net/sctp/ulpevent.c @@ -366,9 +366,10 @@ fail: * specification [SCTP] and any extensions for a list of possible * error formats. 
*/ -struct sctp_ulpevent *sctp_ulpevent_make_remote_error( - const struct sctp_association *asoc, struct sctp_chunk *chunk, - __u16 flags, gfp_t gfp) +struct sctp_ulpevent * +sctp_ulpevent_make_remote_error(const struct sctp_association *asoc, + struct sctp_chunk *chunk, __u16 flags, + gfp_t gfp) { struct sctp_ulpevent *event; struct sctp_remote_error *sre; @@ -387,8 +388,7 @@ struct sctp_ulpevent *sctp_ulpevent_make_remote_error( /* Copy the skb to a new skb with room for us to prepend * notification with. */ - skb = skb_copy_expand(chunk->skb, sizeof(struct sctp_remote_error), - 0, gfp); + skb = skb_copy_expand(chunk->skb, sizeof(*sre), 0, gfp); /* Pull off the rest of the cause TLV from the chunk. */ skb_pull(chunk->skb, elen); @@ -399,62 +399,21 @@ struct sctp_ulpevent *sctp_ulpevent_make_remote_error( event = sctp_skb2event(skb); sctp_ulpevent_init(event, MSG_NOTIFICATION, skb->truesize); - sre = (struct sctp_remote_error *) - skb_push(skb, sizeof(struct sctp_remote_error)); + sre = (struct sctp_remote_error *) skb_push(skb, sizeof(*sre)); /* Trim the buffer to the right length. */ - skb_trim(skb, sizeof(struct sctp_remote_error) + elen); + skb_trim(skb, sizeof(*sre) + elen); - /* Socket Extensions for SCTP - * 5.3.1.3 SCTP_REMOTE_ERROR - * - * sre_type: - * It should be SCTP_REMOTE_ERROR. - */ + /* RFC6458, Section 6.1.3. SCTP_REMOTE_ERROR */ + memset(sre, 0, sizeof(*sre)); sre->sre_type = SCTP_REMOTE_ERROR; - - /* - * Socket Extensions for SCTP - * 5.3.1.3 SCTP_REMOTE_ERROR - * - * sre_flags: 16 bits (unsigned integer) - * Currently unused. - */ sre->sre_flags = 0; - - /* Socket Extensions for SCTP - * 5.3.1.3 SCTP_REMOTE_ERROR - * - * sre_length: sizeof (__u32) - * - * This field is the total length of the notification data, - * including the notification header. - */ sre->sre_length = skb->len; - - /* Socket Extensions for SCTP - * 5.3.1.3 SCTP_REMOTE_ERROR - * - * sre_error: 16 bits (unsigned integer) - * This value represents one of the Operational Error causes defined in - * the SCTP specification, in network byte order. - */ sre->sre_error = cause; - - /* Socket Extensions for SCTP - * 5.3.1.3 SCTP_REMOTE_ERROR - * - * sre_assoc_id: sizeof (sctp_assoc_t) - * - * The association id field, holds the identifier for the association. - * All notifications for a given association have the same association - * identifier. For TCP style socket, this field is ignored. - */ sctp_ulpevent_set_owner(event, asoc); sre->sre_assoc_id = sctp_assoc2id(asoc); return event; - fail: return NULL; } @@ -899,7 +858,9 @@ __u16 sctp_ulpevent_get_notification_type(const struct sctp_ulpevent *event) return notification->sn_header.sn_type; } -/* Copy out the sndrcvinfo into a msghdr. */ +/* RFC6458, Section 5.3.2. SCTP Header Information Structure + * (SCTP_SNDRCV, DEPRECATED) + */ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, struct msghdr *msghdr) { @@ -908,74 +869,21 @@ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, if (sctp_ulpevent_is_notification(event)) return; - /* Sockets API Extensions for SCTP - * Section 5.2.2 SCTP Header Information Structure (SCTP_SNDRCV) - * - * sinfo_stream: 16 bits (unsigned integer) - * - * For recvmsg() the SCTP stack places the message's stream number in - * this value. - */ + memset(&sinfo, 0, sizeof(sinfo)); sinfo.sinfo_stream = event->stream; - /* sinfo_ssn: 16 bits (unsigned integer) - * - * For recvmsg() this value contains the stream sequence number that - * the remote endpoint placed in the DATA chunk. 
For fragmented - * messages this is the same number for all deliveries of the message - * (if more than one recvmsg() is needed to read the message). - */ sinfo.sinfo_ssn = event->ssn; - /* sinfo_ppid: 32 bits (unsigned integer) - * - * In recvmsg() this value is - * the same information that was passed by the upper layer in the peer - * application. Please note that byte order issues are NOT accounted - * for and this information is passed opaquely by the SCTP stack from - * one end to the other. - */ sinfo.sinfo_ppid = event->ppid; - /* sinfo_flags: 16 bits (unsigned integer) - * - * This field may contain any of the following flags and is composed of - * a bitwise OR of these values. - * - * recvmsg() flags: - * - * SCTP_UNORDERED - This flag is present when the message was sent - * non-ordered. - */ sinfo.sinfo_flags = event->flags; - /* sinfo_tsn: 32 bit (unsigned integer) - * - * For the receiving side, this field holds a TSN that was - * assigned to one of the SCTP Data Chunks. - */ sinfo.sinfo_tsn = event->tsn; - /* sinfo_cumtsn: 32 bit (unsigned integer) - * - * This field will hold the current cumulative TSN as - * known by the underlying SCTP layer. Note this field is - * ignored when sending and only valid for a receive - * operation when sinfo_flags are set to SCTP_UNORDERED. - */ sinfo.sinfo_cumtsn = event->cumtsn; - /* sinfo_assoc_id: sizeof (sctp_assoc_t) - * - * The association handle field, sinfo_assoc_id, holds the identifier - * for the association announced in the COMMUNICATION_UP notification. - * All notifications for a given association have the same identifier. - * Ignored for one-to-one style sockets. - */ sinfo.sinfo_assoc_id = sctp_assoc2id(event->asoc); - - /* context value that is set via SCTP_CONTEXT socket option. */ + /* Context value that is set via SCTP_CONTEXT socket option. */ sinfo.sinfo_context = event->asoc->default_rcv_context; - /* These fields are not used while receiving. */ sinfo.sinfo_timetolive = 0; put_cmsg(msghdr, IPPROTO_SCTP, SCTP_SNDRCV, - sizeof(struct sctp_sndrcvinfo), (void *)&sinfo); + sizeof(sinfo), &sinfo); } /* Do accounting for bytes received and hold a reference to the association diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c index 26631679a1fa..55c6c9d3e1ce 100644 --- a/net/tipc/bcast.c +++ b/net/tipc/bcast.c @@ -559,6 +559,7 @@ receive: buf = node->bclink.deferred_head; node->bclink.deferred_head = buf->next; + buf->next = NULL; node->bclink.deferred_size--; goto receive; } diff --git a/net/tipc/msg.c b/net/tipc/msg.c index 8be6e94a1ca9..0a37a472c29f 100644 --- a/net/tipc/msg.c +++ b/net/tipc/msg.c @@ -101,9 +101,11 @@ int tipc_msg_build(struct tipc_msg *hdr, struct iovec const *msg_sect, } /* tipc_buf_append(): Append a buffer to the fragment list of another buffer - * Let first buffer become head buffer - * Returns 1 and sets *buf to headbuf if chain is complete, otherwise 0 - * Leaves headbuf pointer at NULL if failure + * @*headbuf: in: NULL for first frag, otherwise value returned from prev call + * out: set when successful non-complete reassembly, otherwise NULL + * @*buf: in: the buffer to append. 
Always defined + * out: head buf after successful complete reassembly, otherwise NULL + * Returns 1 when reassembly complete, otherwise 0 */ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) { @@ -122,6 +124,7 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) goto out_free; head = *headbuf = frag; skb_frag_list_init(head); + *buf = NULL; return 0; } if (!head) @@ -150,5 +153,7 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) out_free: pr_warn_ratelimited("Unable to build fragment list\n"); kfree_skb(*buf); + kfree_skb(*headbuf); + *buf = *headbuf = NULL; return 0; } diff --git a/net/wireless/core.h b/net/wireless/core.h index e9afbf10e756..7e3a3cef7df9 100644 --- a/net/wireless/core.h +++ b/net/wireless/core.h @@ -424,7 +424,7 @@ static inline unsigned int elapsed_jiffies_msecs(unsigned long start) if (end >= start) return jiffies_to_msecs(end - start); - return jiffies_to_msecs(end + (MAX_JIFFY_OFFSET - start) + 1); + return jiffies_to_msecs(end + (ULONG_MAX - start) + 1); } void diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c index ba4f1723c83a..6668daf69326 100644 --- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -1497,18 +1497,17 @@ static int nl80211_send_wiphy(struct cfg80211_registered_device *rdev, } CMD(start_p2p_device, START_P2P_DEVICE); CMD(set_mcast_rate, SET_MCAST_RATE); +#ifdef CONFIG_NL80211_TESTMODE + CMD(testmode_cmd, TESTMODE); +#endif if (state->split) { CMD(crit_proto_start, CRIT_PROTOCOL_START); CMD(crit_proto_stop, CRIT_PROTOCOL_STOP); if (rdev->wiphy.flags & WIPHY_FLAG_HAS_CHANNEL_SWITCH) CMD(channel_switch, CHANNEL_SWITCH); + CMD(set_qos_map, SET_QOS_MAP); } - CMD(set_qos_map, SET_QOS_MAP); - -#ifdef CONFIG_NL80211_TESTMODE - CMD(testmode_cmd, TESTMODE); -#endif - + /* add into the if now */ #undef CMD if (rdev->ops->connect || rdev->ops->auth) { diff --git a/net/wireless/reg.c b/net/wireless/reg.c index 558b0e3a02d8..1afdf45db38f 100644 --- a/net/wireless/reg.c +++ b/net/wireless/reg.c @@ -935,7 +935,7 @@ freq_reg_info_regd(struct wiphy *wiphy, u32 center_freq, if (!band_rule_found) band_rule_found = freq_in_rule_band(fr, center_freq); - bw_fits = reg_does_bw_fit(fr, center_freq, MHZ_TO_KHZ(5)); + bw_fits = reg_does_bw_fit(fr, center_freq, MHZ_TO_KHZ(20)); if (band_rule_found && bw_fits) return rr; @@ -1019,10 +1019,10 @@ static void chan_reg_rule_print_dbg(const struct ieee80211_regdomain *regd, } #endif -/* Find an ieee80211_reg_rule such that a 5MHz channel with frequency - * chan->center_freq fits there. - * If there is no such reg_rule, disable the channel, otherwise set the - * flags corresponding to the bandwidths allowed in the particular reg_rule +/* + * Note that right now we assume the desired channel bandwidth + * is always 20 MHz for each individual channel (HT40 uses 20 MHz + * per channel, the primary and the extension channel). 
*/ static void handle_channel(struct wiphy *wiphy, enum nl80211_reg_initiator initiator, @@ -1083,12 +1083,8 @@ static void handle_channel(struct wiphy *wiphy, if (reg_rule->flags & NL80211_RRF_AUTO_BW) max_bandwidth_khz = reg_get_max_bandwidth(regd, reg_rule); - if (max_bandwidth_khz < MHZ_TO_KHZ(10)) - bw_flags = IEEE80211_CHAN_NO_10MHZ; - if (max_bandwidth_khz < MHZ_TO_KHZ(20)) - bw_flags |= IEEE80211_CHAN_NO_20MHZ; if (max_bandwidth_khz < MHZ_TO_KHZ(40)) - bw_flags |= IEEE80211_CHAN_NO_HT40; + bw_flags = IEEE80211_CHAN_NO_HT40; if (max_bandwidth_khz < MHZ_TO_KHZ(80)) bw_flags |= IEEE80211_CHAN_NO_80MHZ; if (max_bandwidth_khz < MHZ_TO_KHZ(160)) @@ -1522,12 +1518,8 @@ static void handle_channel_custom(struct wiphy *wiphy, if (reg_rule->flags & NL80211_RRF_AUTO_BW) max_bandwidth_khz = reg_get_max_bandwidth(regd, reg_rule); - if (max_bandwidth_khz < MHZ_TO_KHZ(10)) - bw_flags = IEEE80211_CHAN_NO_10MHZ; - if (max_bandwidth_khz < MHZ_TO_KHZ(20)) - bw_flags |= IEEE80211_CHAN_NO_20MHZ; if (max_bandwidth_khz < MHZ_TO_KHZ(40)) - bw_flags |= IEEE80211_CHAN_NO_HT40; + bw_flags = IEEE80211_CHAN_NO_HT40; if (max_bandwidth_khz < MHZ_TO_KHZ(80)) bw_flags |= IEEE80211_CHAN_NO_80MHZ; if (max_bandwidth_khz < MHZ_TO_KHZ(160)) diff --git a/scripts/kernel-doc b/scripts/kernel-doc index da058da413e7..16a07cfa4d34 100755 --- a/scripts/kernel-doc +++ b/scripts/kernel-doc @@ -2073,6 +2073,7 @@ sub check_return_section { sub dump_function($$) { my $prototype = shift; my $file = shift; + my $noret = 0; $prototype =~ s/^static +//; $prototype =~ s/^extern +//; @@ -2086,7 +2087,7 @@ sub dump_function($$) { $prototype =~ s/__init_or_module +//; $prototype =~ s/__must_check +//; $prototype =~ s/__weak +//; - $prototype =~ s/^#\s*define\s+//; #ak added + my $define = $prototype =~ s/^#\s*define\s+//; #ak added $prototype =~ s/__attribute__\s*\(\([a-z,]*\)\)//; # Yes, this truly is vile. We are looking for: @@ -2105,7 +2106,15 @@ sub dump_function($$) { # - atomic_set (macro) # - pci_match_device, __copy_to_user (long return type) - if ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || + if ($define && $prototype =~ m/^()([a-zA-Z0-9_~:]+)\s+/) { + # This is an object-like macro, it has no return type and no parameter + # list. + # Function-like macros are not allowed to have spaces between + # declaration_name and opening parenthesis (notice the \s+). + $return_type = $1; + $declaration_name = $2; + $noret = 1; + } elsif ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || $prototype =~ m/^(\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || $prototype =~ m/^(\w+\s*\*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || $prototype =~ m/^(\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ || @@ -2140,7 +2149,7 @@ sub dump_function($$) { # of warnings goes sufficiently down, the check is only performed in # verbose mode. # TODO: always perform the check. 
- if ($verbose) { + if ($verbose && !$noret) { check_return_section($file, $declaration_name, $return_type); } diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c index 480bbddbd801..6df04d91c93c 100644 --- a/sound/pci/hda/hda_controller.c +++ b/sound/pci/hda/hda_controller.c @@ -193,7 +193,8 @@ azx_assign_device(struct azx *chip, struct snd_pcm_substream *substream) dsp_unlock(azx_dev); return azx_dev; } - if (!res) + if (!res || + (chip->driver_caps & AZX_DCAPS_REVERSE_ASSIGN)) res = azx_dev; } dsp_unlock(azx_dev); diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c index b6b4e71a0b0b..83cd19017cf3 100644 --- a/sound/pci/hda/hda_intel.c +++ b/sound/pci/hda/hda_intel.c @@ -227,7 +227,7 @@ enum { /* quirks for Intel PCH */ #define AZX_DCAPS_INTEL_PCH_NOPM \ (AZX_DCAPS_SCH_SNOOP | AZX_DCAPS_BUFSIZE | \ - AZX_DCAPS_COUNT_LPIB_DELAY) + AZX_DCAPS_COUNT_LPIB_DELAY | AZX_DCAPS_REVERSE_ASSIGN) #define AZX_DCAPS_INTEL_PCH \ (AZX_DCAPS_INTEL_PCH_NOPM | AZX_DCAPS_PM_RUNTIME) @@ -596,7 +596,7 @@ static int azx_suspend(struct device *dev) struct azx *chip = card->private_data; struct azx_pcm *p; - if (chip->disabled) + if (chip->disabled || chip->init_failed) return 0; snd_power_change_state(card, SNDRV_CTL_POWER_D3hot); @@ -628,7 +628,7 @@ static int azx_resume(struct device *dev) struct snd_card *card = dev_get_drvdata(dev); struct azx *chip = card->private_data; - if (chip->disabled) + if (chip->disabled || chip->init_failed) return 0; if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) { @@ -665,7 +665,7 @@ static int azx_runtime_suspend(struct device *dev) struct snd_card *card = dev_get_drvdata(dev); struct azx *chip = card->private_data; - if (chip->disabled) + if (chip->disabled || chip->init_failed) return 0; if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME)) @@ -692,7 +692,7 @@ static int azx_runtime_resume(struct device *dev) struct hda_codec *codec; int status; - if (chip->disabled) + if (chip->disabled || chip->init_failed) return 0; if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME)) @@ -729,7 +729,7 @@ static int azx_runtime_idle(struct device *dev) struct snd_card *card = dev_get_drvdata(dev); struct azx *chip = card->private_data; - if (chip->disabled) + if (chip->disabled || chip->init_failed) return 0; if (!power_save_controller || diff --git a/sound/pci/hda/hda_priv.h b/sound/pci/hda/hda_priv.h index 4a7cb01fa912..e9d1a5762a55 100644 --- a/sound/pci/hda/hda_priv.h +++ b/sound/pci/hda/hda_priv.h @@ -186,6 +186,7 @@ enum { SDI0, SDI1, SDI2, SDI3, SDO0, SDO1, SDO2, SDO3 }; #define AZX_DCAPS_BUFSIZE (1 << 21) /* no buffer size alignment */ #define AZX_DCAPS_ALIGN_BUFSIZE (1 << 22) /* buffer size alignment */ #define AZX_DCAPS_4K_BDLE_BOUNDARY (1 << 23) /* BDLE in 4k boundary */ +#define AZX_DCAPS_REVERSE_ASSIGN (1 << 24) /* Assign devices in reverse order */ #define AZX_DCAPS_COUNT_LPIB_DELAY (1 << 25) /* Take LPIB as delay */ #define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */ #define AZX_DCAPS_I915_POWERWELL (1 << 27) /* HSW i915 powerwell support */ diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c index a366ba9293a8..358414da6418 100644 --- a/sound/pci/hda/hda_tegra.c +++ b/sound/pci/hda/hda_tegra.c @@ -236,6 +236,7 @@ disable_hda: return rc; } +#ifdef CONFIG_PM_SLEEP static void hda_tegra_disable_clocks(struct hda_tegra *data) { clk_disable_unprepare(data->hda2hdmi_clk); @@ -243,7 +244,6 @@ static void hda_tegra_disable_clocks(struct hda_tegra *data) clk_disable_unprepare(data->hda_clk); } -#ifdef CONFIG_PM_SLEEP /* * 
power management */ diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c index 4fe876b65fda..ba4ca52072ff 100644 --- a/sound/pci/hda/patch_hdmi.c +++ b/sound/pci/hda/patch_hdmi.c @@ -3337,6 +3337,7 @@ static const struct hda_codec_preset snd_hda_preset_hdmi[] = { { .id = 0x10de0051, .name = "GPU 51 HDMI/DP", .patch = patch_nvhdmi }, { .id = 0x10de0060, .name = "GPU 60 HDMI/DP", .patch = patch_nvhdmi }, { .id = 0x10de0067, .name = "MCP67 HDMI", .patch = patch_nvhdmi_2ch }, +{ .id = 0x10de0070, .name = "GPU 70 HDMI/DP", .patch = patch_nvhdmi }, { .id = 0x10de0071, .name = "GPU 71 HDMI/DP", .patch = patch_nvhdmi }, { .id = 0x10de8001, .name = "MCP73 HDMI", .patch = patch_nvhdmi_2ch }, { .id = 0x11069f80, .name = "VX900 HDMI/DP", .patch = patch_via_hdmi }, @@ -3394,6 +3395,7 @@ MODULE_ALIAS("snd-hda-codec-id:10de0044"); MODULE_ALIAS("snd-hda-codec-id:10de0051"); MODULE_ALIAS("snd-hda-codec-id:10de0060"); MODULE_ALIAS("snd-hda-codec-id:10de0067"); +MODULE_ALIAS("snd-hda-codec-id:10de0070"); MODULE_ALIAS("snd-hda-codec-id:10de0071"); MODULE_ALIAS("snd-hda-codec-id:10de8001"); MODULE_ALIAS("snd-hda-codec-id:11069f80"); diff --git a/sound/soc/fsl/imx-pcm-dma.c b/sound/soc/fsl/imx-pcm-dma.c index 0849b7b83f0a..0db94f492e97 100644 --- a/sound/soc/fsl/imx-pcm-dma.c +++ b/sound/soc/fsl/imx-pcm-dma.c @@ -59,7 +59,6 @@ int imx_pcm_dma_init(struct platform_device *pdev) { return devm_snd_dmaengine_pcm_register(&pdev->dev, &imx_dmaengine_pcm_config, - SND_DMAENGINE_PCM_FLAG_NO_RESIDUE | SND_DMAENGINE_PCM_FLAG_COMPAT); } EXPORT_SYMBOL_GPL(imx_pcm_dma_init); diff --git a/tools/lib/lockdep/include/liblockdep/mutex.h b/tools/lib/lockdep/include/liblockdep/mutex.h index c342f7087147..ee53a42818ca 100644 --- a/tools/lib/lockdep/include/liblockdep/mutex.h +++ b/tools/lib/lockdep/include/liblockdep/mutex.h @@ -35,7 +35,7 @@ static inline int __mutex_init(liblockdep_pthread_mutex_t *lock, static inline int liblockdep_pthread_mutex_lock(liblockdep_pthread_mutex_t *lock) { - lock_acquire(&lock->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); return pthread_mutex_lock(&lock->mutex); } @@ -47,7 +47,7 @@ static inline int liblockdep_pthread_mutex_unlock(liblockdep_pthread_mutex_t *lo static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *lock) { - lock_acquire(&lock->dep_map, 0, 1, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); return pthread_mutex_trylock(&lock->mutex) == 0 ? 
1 : 0; } diff --git a/tools/lib/lockdep/include/liblockdep/rwlock.h b/tools/lib/lockdep/include/liblockdep/rwlock.h index a680ab8c2e36..4ec03f861551 100644 --- a/tools/lib/lockdep/include/liblockdep/rwlock.h +++ b/tools/lib/lockdep/include/liblockdep/rwlock.h @@ -36,7 +36,7 @@ static inline int __rwlock_init(liblockdep_pthread_rwlock_t *lock, static inline int liblockdep_pthread_rwlock_rdlock(liblockdep_pthread_rwlock_t *lock) { - lock_acquire(&lock->dep_map, 0, 0, 2, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 0, 2, 1, NULL, (unsigned long)_RET_IP_); return pthread_rwlock_rdlock(&lock->rwlock); } @@ -49,19 +49,19 @@ static inline int liblockdep_pthread_rwlock_unlock(liblockdep_pthread_rwlock_t * static inline int liblockdep_pthread_rwlock_wrlock(liblockdep_pthread_rwlock_t *lock) { - lock_acquire(&lock->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); return pthread_rwlock_wrlock(&lock->rwlock); } static inline int liblockdep_pthread_rwlock_tryrdlock(liblockdep_pthread_rwlock_t *lock) { - lock_acquire(&lock->dep_map, 0, 1, 2, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 1, 2, 1, NULL, (unsigned long)_RET_IP_); return pthread_rwlock_tryrdlock(&lock->rwlock) == 0 ? 1 : 0; } static inline int liblockdep_pthread_rwlock_trywlock(liblockdep_pthread_rwlock_t *lock) { - lock_acquire(&lock->dep_map, 0, 1, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&lock->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); return pthread_rwlock_trywlock(&lock->rwlock) == 0 ? 1 : 0; } diff --git a/tools/lib/lockdep/preload.c b/tools/lib/lockdep/preload.c index 23bd69cb5ade..6f803609e498 100644 --- a/tools/lib/lockdep/preload.c +++ b/tools/lib/lockdep/preload.c @@ -92,7 +92,7 @@ enum { none, prepare, done, } __init_state; static void init_preload(void); static void try_init_preload(void) { - if (!__init_state != done) + if (__init_state != done) init_preload(); } @@ -252,7 +252,7 @@ int pthread_mutex_lock(pthread_mutex_t *mutex) try_init_preload(); - lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 2, NULL, + lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); /* * Here's the thing with pthread mutexes: unlike the kernel variant, @@ -281,7 +281,7 @@ int pthread_mutex_trylock(pthread_mutex_t *mutex) try_init_preload(); - lock_acquire(&__get_lock(mutex)->dep_map, 0, 1, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(mutex)->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); r = ll_pthread_mutex_trylock(mutex); if (r) lock_release(&__get_lock(mutex)->dep_map, 0, (unsigned long)_RET_IP_); @@ -303,7 +303,7 @@ int pthread_mutex_unlock(pthread_mutex_t *mutex) */ r = ll_pthread_mutex_unlock(mutex); if (r) - lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(mutex)->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); return r; } @@ -352,7 +352,7 @@ int pthread_rwlock_rdlock(pthread_rwlock_t *rwlock) init_preload(); - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 2, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 2, 1, NULL, (unsigned long)_RET_IP_); r = ll_pthread_rwlock_rdlock(rwlock); if (r) lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); @@ -366,7 +366,7 @@ int pthread_rwlock_tryrdlock(pthread_rwlock_t *rwlock) init_preload(); - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1, 2, 2, NULL, (unsigned 
long)_RET_IP_); + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1, 2, 1, NULL, (unsigned long)_RET_IP_); r = ll_pthread_rwlock_tryrdlock(rwlock); if (r) lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); @@ -380,7 +380,7 @@ int pthread_rwlock_trywrlock(pthread_rwlock_t *rwlock) init_preload(); - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 1, 0, 1, NULL, (unsigned long)_RET_IP_); r = ll_pthread_rwlock_trywrlock(rwlock); if (r) lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); @@ -394,7 +394,7 @@ int pthread_rwlock_wrlock(pthread_rwlock_t *rwlock) init_preload(); - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); r = ll_pthread_rwlock_wrlock(rwlock); if (r) lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); @@ -411,7 +411,7 @@ int pthread_rwlock_unlock(pthread_rwlock_t *rwlock) lock_release(&__get_lock(rwlock)->dep_map, 0, (unsigned long)_RET_IP_); r = ll_pthread_rwlock_unlock(rwlock); if (r) - lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 0, 2, NULL, (unsigned long)_RET_IP_); + lock_acquire(&__get_lock(rwlock)->dep_map, 0, 0, 0, 1, NULL, (unsigned long)_RET_IP_); return r; } @@ -439,8 +439,6 @@ __attribute__((constructor)) static void init_preload(void) ll_pthread_rwlock_unlock = dlsym(RTLD_NEXT, "pthread_rwlock_unlock"); #endif - printf("%p\n", ll_pthread_mutex_trylock);fflush(stdout); - lockdep_init(); __init_state = done; diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c index 52c03fbbba17..04a229aa5c0f 100644 --- a/tools/perf/ui/browsers/hists.c +++ b/tools/perf/ui/browsers/hists.c @@ -17,6 +17,7 @@ #include "../util.h" #include "../ui.h" #include "map.h" +#include "annotate.h" struct hist_browser { struct ui_browser b; @@ -1593,13 +1594,18 @@ static int perf_evsel__hists_browse(struct perf_evsel *evsel, int nr_events, bi->to.sym->name) > 0) annotate_t = nr_options++; } else { - if (browser->selection != NULL && browser->selection->sym != NULL && - !browser->selection->map->dso->annotate_warned && - asprintf(&options[nr_options], "Annotate %s", - browser->selection->sym->name) > 0) - annotate = nr_options++; + if (browser->selection != NULL && + browser->selection->sym != NULL && + !browser->selection->map->dso->annotate_warned) { + struct annotation *notes; + + notes = symbol__annotation(browser->selection->sym); + + if (notes->src && + asprintf(&options[nr_options], "Annotate %s", + browser->selection->sym->name) > 0) + annotate = nr_options++; + } } if (thread != NULL && @@ -1656,6 +1662,7 @@ retry_popup_menu: if (choice == annotate || choice == annotate_t || choice == annotate_f) { struct hist_entry *he; + struct annotation *notes; int err; do_annotate: if (!objdump_path && perf_session_env__lookup_objdump(env)) @@ -1679,6 +1686,10 @@ do_annotate: he->ms.map = he->branch_info->to.map; } + notes = symbol__annotation(he->ms.sym); + if (!notes->src) + continue; + /* * Don't let this be freed, say, by hists__decay_entry. 
*/ diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index 0e5fea95d596..c73e1fc12e53 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -496,18 +496,6 @@ struct process_args { u64 start; }; -static int symbol__in_kernel(void *arg, const char *name, - char type __maybe_unused, u64 start) -{ - struct process_args *args = arg; - - if (strchr(name, '[')) - return 0; - - args->start = start; - return 1; -} - static void machine__get_kallsyms_filename(struct machine *machine, char *buf, size_t bufsz) { @@ -517,27 +505,41 @@ static void machine__get_kallsyms_filename(struct machine *machine, char *buf, scnprintf(buf, bufsz, "%s/proc/kallsyms", machine->root_dir); } -/* Figure out the start address of kernel map from /proc/kallsyms */ -static u64 machine__get_kernel_start_addr(struct machine *machine) +const char *ref_reloc_sym_names[] = {"_text", "_stext", NULL}; + +/* Figure out the start address of kernel map from /proc/kallsyms. + * Returns the name of the start symbol in *symbol_name. Pass in NULL as + * symbol_name if it's not that important. + */ +static u64 machine__get_kernel_start_addr(struct machine *machine, + const char **symbol_name) { char filename[PATH_MAX]; - struct process_args args; + int i; + const char *name; + u64 addr = 0; machine__get_kallsyms_filename(machine, filename, PATH_MAX); if (symbol__restricted_filename(filename, "/proc/kallsyms")) return 0; - if (kallsyms__parse(filename, &args, symbol__in_kernel) <= 0) - return 0; + for (i = 0; (name = ref_reloc_sym_names[i]) != NULL; i++) { + addr = kallsyms__get_function_start(filename, name); + if (addr) + break; + } + + if (symbol_name) + *symbol_name = name; - return args.start; + return addr; } int __machine__create_kernel_maps(struct machine *machine, struct dso *kernel) { enum map_type type; - u64 start = machine__get_kernel_start_addr(machine); + u64 start = machine__get_kernel_start_addr(machine, NULL); for (type = 0; type < MAP__NR_TYPES; ++type) { struct kmap *kmap; @@ -852,23 +854,11 @@ static int machine__create_modules(struct machine *machine) return 0; } -const char *ref_reloc_sym_names[] = {"_text", "_stext", NULL}; - int machine__create_kernel_maps(struct machine *machine) { struct dso *kernel = machine__get_kernel(machine); - char filename[PATH_MAX]; const char *name; - u64 addr = 0; - int i; - - machine__get_kallsyms_filename(machine, filename, PATH_MAX); - - for (i = 0; (name = ref_reloc_sym_names[i]) != NULL; i++) { - addr = kallsyms__get_function_start(filename, name); - if (addr) - break; - } + u64 addr = machine__get_kernel_start_addr(machine, &name); if (!addr) return -1; diff --git a/tools/thermal/tmon/Makefile b/tools/thermal/tmon/Makefile index 447321104ec0..e775adcbd29f 100644 --- a/tools/thermal/tmon/Makefile +++ b/tools/thermal/tmon/Makefile @@ -21,7 +21,7 @@ OBJS = tmon.o tui.o sysfs.o pid.o OBJS += tmon: $(OBJS) Makefile tmon.h - $(CC) ${CFLAGS} $(LDFLAGS) $(OBJS) -o $(TARGET) -lm -lpanel -lncursesw -lpthread + $(CC) ${CFLAGS} $(LDFLAGS) $(OBJS) -o $(TARGET) -lm -lpanel -lncursesw -ltinfo -lpthread valgrind: tmon sudo valgrind -v --track-origins=yes --tool=memcheck --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes ./$(TARGET) 1> /dev/null diff --git a/tools/thermal/tmon/tmon.c b/tools/thermal/tmon/tmon.c index b30f531173e4..09b7c3218334 100644 --- a/tools/thermal/tmon/tmon.c +++ b/tools/thermal/tmon/tmon.c @@ -142,6 +142,7 @@ static void start_syslog(void) static void prepare_logging(void) { int i; + struct stat logstat; if (!logging) 
return; @@ -152,6 +153,29 @@ static void prepare_logging(void) return; } + if (lstat(TMON_LOG_FILE, &logstat) < 0) { + syslog(LOG_ERR, "Unable to stat log file %s\n", TMON_LOG_FILE); + fclose(tmon_log); + tmon_log = NULL; + return; + } + + /* The log file must be a regular file owned by us */ + if (S_ISLNK(logstat.st_mode)) { + syslog(LOG_ERR, "Log file is a symlink. Will not log\n"); + fclose(tmon_log); + tmon_log = NULL; + return; + } + + if (logstat.st_uid != getuid()) { + syslog(LOG_ERR, "We don't own the log file. Not logging\n"); + fclose(tmon_log); + tmon_log = NULL; + return; + } + + fprintf(tmon_log, "#----------- THERMAL SYSTEM CONFIG -------------\n"); for (i = 0; i < ptdata.nr_tz_sensor; i++) { char binding_str[33]; /* size of long + 1 */ @@ -331,7 +355,7 @@ static void start_daemon_mode() disable_tui(); /* change the file mode mask */ - umask(0); + umask(S_IWGRP | S_IWOTH); /* new SID for the daemon process */ sid = setsid(); |