commit a2ab9187600ddca13da9e5c20e3abb92ea885ddd
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Fri Jan 16 07:00:22 2015 -0800

    Linux 3.14.29

commit 1bec714a0ee181769111903f01dad4a2cb875fff
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Jan 11 11:33:57 2015 -0800

    mm: Don't count the stack guard page towards RLIMIT_STACK
    
    commit 690eac53daff34169a4d74fc7bfbd388c4896abb upstream.
    
    Commit fee7e49d4514 ("mm: propagate error from stack expansion even for
    guard page") made sure that we return the error properly for stack
    growth conditions.  It also theorized that counting the guard page
    towards the stack limit might break something, but also said "Let's see
    if anybody notices".
    
    Somebody did notice.  Apparently android-x86 sets the stack limit very
    close to the limit indeed, and including the guard page in the rlimit
    check causes the android 'zygote' process problems.
    
    So this adds the (fairly trivial) code to make the stack rlimit check be
    against the actual real stack size, rather than the size of the vma that
    includes the guard page.
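
    A minimal sketch of the idea (variable and helper names are assumptions,
    not the exact upstream diff): subtract one page from the size being
    checked when the vma has a guard page, then compare against the rlimit.

        /* sketch only: check the real stack size, not the guard page */
        actual_size = size;
        if (size && (vma->vm_flags & (VM_GROWSUP | VM_GROWSDOWN)))
                actual_size -= PAGE_SIZE;
        if (actual_size > rlimit(RLIMIT_STACK))
                return -ENOMEM;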
    
    Reported-and-tested-by: Chih-Wei Huang <cwhuang@android-x86.org>
    Cc: Jay Foad <jay.foad@gmail.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 11e4f3bfdfd2d0f4a1104f0cbf19764b387ba4aa
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Tue Jan 6 13:00:05 2015 -0800

    mm: propagate error from stack expansion even for guard page
    
    commit fee7e49d45149fba60156f5b59014f764d3e3728 upstream.
    
    Jay Foad reports that the address sanitizer test (asan) sometimes gets
    confused by a stack pointer that ends up being outside the stack vma
    that is reported by /proc/maps.
    
    This happens due to an interaction between RLIMIT_STACK and the guard
    page: when we do the guard page check, we ignore the potential error
    from the stack expansion, which effectively results in a missing guard
    page, since the expected stack expansion won't have been done.
    
    And since /proc/maps explicitly ignores the guard page (commit
    d7824370e263: "mm: fix up some user-visible effects of the stack guard
    page"), the stack pointer ends up being outside the reported stack area.
    
    This is the minimal patch: it just propagates the error.  It also
    effectively makes the guard page part of the stack limit, which in turn
    means that the actual real stack is one page less than the stack limit.
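
    In before/after form the change amounts to something like this (a
    simplified sketch of the guard-page check in the fault path, not the
    literal diff):

    -       expand_downwards(vma, address - PAGE_SIZE);
    +       return expand_downwards(vma, address - PAGE_SIZE);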
    
    Let's see if anybody notices.  We could teach acct_stack_growth() to
    allow an extra page for a grow-up/grow-down stack in the rlimit test,
    but I don't want to add more complexity if it isn't needed.
    
    Reported-and-tested-by: Jay Foad <jay.foad@gmail.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 18d9304b892c9f090e42dca40d041586519951be
Author: Vlastimil Babka <vbabka@suse.cz>
Date:   Thu Jan 8 14:32:40 2015 -0800

    mm, vmscan: prevent kswapd livelock due to pfmemalloc-throttled process being killed
    
    commit 9e5e3661727eaf960d3480213f8e87c8d67b6956 upstream.
    
    Charles Shirron and Paul Cassella from Cray Inc have reported kswapd
    stuck in a busy loop with nothing left to balance, but
    kswapd_try_to_sleep() failing to sleep.  Their analysis found the cause
    to be a combination of several factors:
    
    1. A process is waiting in throttle_direct_reclaim() on pgdat->pfmemalloc_wait
    
    2. The process has been killed (by OOM in this case), but has not yet been
       scheduled to remove itself from the waitqueue and die.
    
    3. kswapd checks for throttled processes in prepare_kswapd_sleep():
    
            if (waitqueue_active(&pgdat->pfmemalloc_wait)) {
                    wake_up(&pgdat->pfmemalloc_wait);
                    return false; // kswapd will not go to sleep
            }
    
       However, for a process that was already killed, wake_up() does not remove
       the process from the waitqueue, since try_to_wake_up() checks its state
       first and returns false when the process is no longer waiting.
    
    4. kswapd is running on the same CPU as the only CPU that the process is
       allowed to run on (through cpus_allowed, or possibly single-cpu system).
    
    5. CONFIG_PREEMPT_NONE=y kernel is used. If there's nothing to balance, kswapd
       encounters no voluntary preemption points and repeatedly fails
       prepare_kswapd_sleep(), blocking the process from running and removing
       itself from the waitqueue, which would let kswapd sleep.
    
    So, the source of the problem is that we prevent kswapd from going to
    sleep as long as there are processes waiting on the pfmemalloc_wait queue,
    and a process waiting on a queue is guaranteed to be removed from the
    queue only when it gets scheduled.  This was done to make sure that no
    process is left sleeping on pfmemalloc_wait when kswapd itself goes to
    sleep.
    
    However, it isn't necessary to postpone kswapd sleep until the
    pfmemalloc_wait queue actually empties.  To prevent processes from being
    left sleeping, it's actually enough to guarantee that all processes
    waiting on pfmemalloc_wait queue have been woken up by the time we put
    kswapd to sleep.
    
    This patch therefore fixes this issue by substituting 'wake_up' with
    'wake_up_all' and removing 'return false' in the code snippet from
    prepare_kswapd_sleep() above.  Note that if any process puts itself in
    the queue after this waitqueue_active() check, or after the wake up
    itself, it means that the process will also wake up kswapd - and since
    we are under prepare_to_wait(), the wake up won't be missed.  Also we
    update the comment in prepare_kswapd_sleep() to hopefully more clearly
    describe the races it is preventing.
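
    Expressed against the snippet quoted in point 3 above, the change is
    roughly:

    -           wake_up(&pgdat->pfmemalloc_wait);
    -           return false;
    +           wake_up_all(&pgdat->pfmemalloc_wait);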
    
    Fixes: 5515061d22f0 ("mm: throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage")
    Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
    Cc: Mel Gorman <mgorman@suse.de>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Acked-by: Michal Hocko <mhocko@suse.cz>
    Acked-by: Rik van Riel <riel@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b36cd20d358da24c18a1110b20e23b31b15b1419
Author: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Date:   Mon Jan 5 10:50:15 2015 +0100

    mmc: sdhci: Fix sleep in atomic after inserting SD card
    
    commit 2836766a9d0bd02c66073f8dd44796e6cc23848d upstream.
    
    Sleep in atomic context happened on Trats2 board after inserting or
    removing SD card because mmc_gpio_get_cd() was called under spin lock.
    
    Fix this by moving card detection earlier, before acquiring spin lock.
    The mmc_gpio_get_cd() call does not have to be protected by spin lock
    because it does not access any sdhci internal data.
    The sdhci_do_get_cd() call accesses host flags (SDHCI_DEVICE_DEAD). After
    moving it outside of the spin lock it could theoretically race with driver
    removal, but there is still no actual protection against manual card
    eject.
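
    A sketch of the reordering in sdhci_request() (simplified, variable
    names are assumptions):

        present = sdhci_do_get_cd(host);  /* may sleep: call before taking the lock */
        spin_lock_irqsave(&host->lock, flags);
        /* ... use 'present' instead of calling mmc_gpio_get_cd() under the lock ... */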
    
    Dmesg after inserting SD card:
    [   41.663414] BUG: sleeping function called from invalid context at drivers/gpio/gpiolib.c:1511
    [   41.670469] in_atomic(): 1, irqs_disabled(): 128, pid: 30, name: kworker/u8:1
    [   41.677580] INFO: lockdep is turned off.
    [   41.681486] irq event stamp: 61972
    [   41.684872] hardirqs last  enabled at (61971): [<c0490ee0>] _raw_spin_unlock_irq+0x24/0x5c
    [   41.693118] hardirqs last disabled at (61972): [<c04907ac>] _raw_spin_lock_irq+0x18/0x54
    [   41.701190] softirqs last  enabled at (61648): [<c0026fd4>] __do_softirq+0x234/0x2c8
    [   41.708914] softirqs last disabled at (61631): [<c00273a0>] irq_exit+0xd0/0x114
    [   41.716206] Preemption disabled at:[<  (null)>]   (null)
    [   41.721500]
    [   41.722985] CPU: 3 PID: 30 Comm: kworker/u8:1 Tainted: G        W      3.18.0-rc5-next-20141121 #883
    [   41.732111] Workqueue: kmmcd mmc_rescan
    [   41.735945] [<c0014d2c>] (unwind_backtrace) from [<c0011c80>] (show_stack+0x10/0x14)
    [   41.743661] [<c0011c80>] (show_stack) from [<c0489d14>] (dump_stack+0x70/0xbc)
    [   41.750867] [<c0489d14>] (dump_stack) from [<c0228b74>] (gpiod_get_raw_value_cansleep+0x18/0x30)
    [   41.759628] [<c0228b74>] (gpiod_get_raw_value_cansleep) from [<c03646e8>] (mmc_gpio_get_cd+0x38/0x58)
    [   41.768821] [<c03646e8>] (mmc_gpio_get_cd) from [<c036d378>] (sdhci_request+0x50/0x1a4)
    [   41.776808] [<c036d378>] (sdhci_request) from [<c0357934>] (mmc_start_request+0x138/0x268)
    [   41.785051] [<c0357934>] (mmc_start_request) from [<c0357cc8>] (mmc_wait_for_req+0x58/0x1a0)
    [   41.793469] [<c0357cc8>] (mmc_wait_for_req) from [<c0357e68>] (mmc_wait_for_cmd+0x58/0x78)
    [   41.801714] [<c0357e68>] (mmc_wait_for_cmd) from [<c0361c00>] (mmc_io_rw_direct_host+0x98/0x124)
    [   41.810480] [<c0361c00>] (mmc_io_rw_direct_host) from [<c03620f8>] (sdio_reset+0x2c/0x64)
    [   41.818641] [<c03620f8>] (sdio_reset) from [<c035a3d8>] (mmc_rescan+0x254/0x2e4)
    [   41.826028] [<c035a3d8>] (mmc_rescan) from [<c003a0e0>] (process_one_work+0x180/0x3f4)
    [   41.833920] [<c003a0e0>] (process_one_work) from [<c003a3bc>] (worker_thread+0x34/0x4b0)
    [   41.841991] [<c003a3bc>] (worker_thread) from [<c003fed8>] (kthread+0xe4/0x104)
    [   41.849285] [<c003fed8>] (kthread) from [<c000f268>] (ret_from_fork+0x14/0x2c)
    [   42.038276] mmc0: new high speed SDHC card at address 1234
    
    Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
    Fixes: 94144a465dd0 ("mmc: sdhci: add get_cd() implementation")
    Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2937d5ac11c421a02f276295f388994027bbe607
Author: Stefan Roese <sr@denx.de>
Date:   Fri Jan 31 13:44:59 2014 +0100

    spi: fsl: Fix problem with multi message transfers
    
    commit 4302a59629f7a0bd70fd1605d2b558597517372a upstream.
    
    When used via spidev with more than one message to transfer via
    SPI_IOC_MESSAGE the current implementation would return with
    -EINVAL, since bits_per_word and speed_hz are set in all
    transfer structs. And in the 2nd loop status will stay at
    -EINVAL as it is not overwritten again via fsl_spi_setup_transfer().

    This patch changes this behaviour by first checking if one of
    the messages uses different settings. If this is the case
    the function will return with -EINVAL. If not, the messages
    are transferred correctly.
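
    A sketch of the added check (simplified, hypothetical variable names):
    every transfer's settings are compared against the first one before any
    setup is done.

        first = list_first_entry(&m->transfers, struct spi_transfer, transfer_list);
        list_for_each_entry(t, &m->transfers, transfer_list) {
                if (first->bits_per_word != t->bits_per_word ||
                    first->speed_hz != t->speed_hz)
                        return -EINVAL; /* mixed per-transfer settings not supported */
        }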
    
    Signed-off-by: Stefan Roese <sr@denx.de>
    Signed-off-by: Mark Brown <broonie@linaro.org>
    Cc: Esben Haabendal <esbenhaabendal@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ceaefcdf2a9f24835691d58d45c5cc50d596a7d0
Author: Jiri Olsa <jolsa@kernel.org>
Date:   Wed Nov 26 16:39:31 2014 +0100

    perf session: Do not fail on processing out of order event
    
    commit f61ff6c06dc8f32c7036013ad802c899ec590607 upstream.
    
    Linus reported perf report command being interrupted due to processing
    of 'out of order' event, with following error:
    
      Timestamp below last timeslice flush
      0x5733a8 [0x28]: failed to process type: 3
    
    I could reproduce the issue and in my case it was caused by one CPU
    (mmap) being behind during record and userspace mmap reader seeing the
    data after other CPUs data were already stored.
    
    This is expected under some circumstances because we need to limit the
    number of events that we queue for reordering when we receive a
    PERF_RECORD_FINISHED_ROUND or when we force flush due to memory
    pressure.
    
    Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Jiri Olsa <jolsa@kernel.org>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
    Cc: David Ahern <dsahern@gmail.com>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Matt Fleming <matt.fleming@intel.com>
    Cc: Namhyung Kim <namhyung@kernel.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Stephane Eranian <eranian@google.com>
    Link: http://lkml.kernel.org/r/1417016371-30249-1-git-send-email-jolsa@kernel.org
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    [zhangzhiqiang: backport to 3.10:
     - adjust context
     - in commit f61ff6c06d struct events_stats is defined in tools/perf/util/event.h,
       while in 3.10 stable it is defined in tools/perf/util/hist.h.
     - in 3.10 stable there is no pr_oe_time(), which is used for debug output.
     - after the above adjustments, the result is the same as the original patch:
       https://github.com/torvalds/linux/commit/f61ff6c06dc8f32c7036013ad802c899ec590607
    ]
    Signed-off-by: Zhiqiang Zhang <zhangzhiqiang.zhang@huawei.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit c8af8989238dd365808ddc3606dc458b089597d3
Author: Jiri Olsa <jolsa@kernel.org>
Date:   Wed Dec 10 21:23:51 2014 +0100

    perf: Fix events installation during moving group
    
    commit 9fc81d87420d0d3fd62d5e5529972c0ad9eab9cc upstream.
    
    We allow the PMU driver to change the cpu on which the event
    should be installed. This happened in patch:
    
      e2d37cd213dc ("perf: Allow the PMU driver to choose the CPU on which to install events")
    
    That patch also forces all the group members to follow
    the currently opened event's cpu if the group happened
    to be moved.
    
    This and the change of event->cpu in perf_install_in_context()
    function introduced in:
    
      0cda4c023132 ("perf: Introduce perf_pmu_migrate_context()")
    
    forces group members to change their event->cpu,
    if the currently-opened-event's PMU changed the cpu
    and there is a group move.
    
    The above behaviour causes a problem for breakpoint events,
    which use event->cpu to touch cpu specific data for
    breakpoint accounting. By changing event->cpu, some
    breakpoint slots were wrongly accounted for a given
    cpu.
    
    Vince's perf fuzzer hit this issue and caused the following
    WARN on my setup:
    
       WARNING: CPU: 0 PID: 20214 at arch/x86/kernel/hw_breakpoint.c:119 arch_install_hw_breakpoint+0x142/0x150()
       Can't find any breakpoint slot
       [...]
    
    This patch changes the group moving code to keep the event's
    original cpu.
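
    One plausible shape of the change (an assumption based on the
    description above, not the verified diff): install the leader and each
    sibling on the cpu they already carry instead of the new event's cpu.

    -       perf_install_in_context(ctx, group_leader, event->cpu);
    +       perf_install_in_context(ctx, group_leader, group_leader->cpu);
    ...
    -       perf_install_in_context(ctx, sibling, event->cpu);
    +       perf_install_in_context(ctx, sibling, sibling->cpu);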
    
    Reported-by: Vince Weaver <vince@deater.net>
    Signed-off-by: Jiri Olsa <jolsa@redhat.com>
    Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: Vince Weaver <vince@deater.net>
    Cc: Yan, Zheng <zheng.z.yan@intel.com>
    Link: http://lkml.kernel.org/r/1418243031-20367-3-git-send-email-jolsa@kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ec9c772a1cb7436c278da5eb0aadb2589a62b485
Author: Jiri Olsa <jolsa@kernel.org>
Date:   Wed Dec 10 21:23:50 2014 +0100

    perf/x86/intel/uncore: Make sure only uncore events are collected
    
    commit af91568e762d04931dcbdd6bef4655433d8b9418 upstream.
    
    The uncore_collect_events() function assumes that an event group
    contains only uncore events, which is wrong, because it
    might contain any type of events.

    This bug leads to the uncore framework touching non-uncore events,
    which could end up in all sorts of bugs.
    
    One was triggered by Vince's perf fuzzer, when the uncore code
    touched breakpoint event private event space as if it was uncore
    event and caused BUG:
    
       BUG: unable to handle kernel paging request at ffffffff82822068
       IP: [<ffffffff81020338>] uncore_assign_events+0x188/0x250
       ...
    
    The code in uncore_assign_events() function was looking for
    event->hw.idx data while the event was initialized as a
    breakpoint with different members in event->hw union.
    
    This patch forces uncore_collect_events() to collect only uncore
    events.
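
    One plausible shape of the filter (an assumption, not the verified
    diff; is_uncore_event() is a hypothetical helper that checks whether
    event->pmu is the uncore pmu): skip group members that are not uncore
    events.

        list_for_each_entry(event, &leader->sibling_list, group_entry) {
                if (!is_uncore_event(event))    /* e.g. a breakpoint in the group */
                        continue;
                ...
        }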
    
    Reported-by: Vince Weaver <vince@deater.net>
    Signed-off-by: Jiri Olsa <jolsa@redhat.com>
    Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: Yan, Zheng <zheng.z.yan@intel.com>
    Link: http://lkml.kernel.org/r/1418243031-20367-2-git-send-email-jolsa@kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3341738696e065a56ad26026056664a88fc5039d
Author: Chris Mason <clm@fb.com>
Date:   Wed Dec 31 12:18:29 2014 -0500

    Btrfs: don't delay inode ref updates during log replay
    
    commit 6f8960541b1eb6054a642da48daae2320fddba93 upstream.
    
    Commit 1d52c78afbb (Btrfs: try not to ENOSPC on log replay) added a
    check to skip delayed inode updates during log replay because it
    confuses the enospc code.  But the delayed processing will end up
    ignoring delayed refs from log replay because the inode itself wasn't
    put through the delayed code.
    
    This can end up triggering a warning at commit time:
    
    WARNING: CPU: 2 PID: 778 at fs/btrfs/delayed-inode.c:1410 btrfs_assert_delayed_root_empty+0x32/0x34()
    
    Which is repeated for each commit because we never process the delayed
    inode ref update.
    
    The fix used here is to change btrfs_delayed_delete_inode_ref to return
    an error if we're currently in log replay.  The caller will do the ref
    deletion immediately and everything will work properly.
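
    A sketch of the described change (simplified; the flag name is an
    assumption): refuse to queue the update while log replay is running, so
    the caller falls back to the immediate path.

        int btrfs_delayed_delete_inode_ref(struct inode *inode)
        {
                /* during log replay, do the ref deletion immediately instead */
                if (BTRFS_I(inode)->root->fs_info->log_root_recovering)
                        return -EAGAIN;
                ...
        }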
    
    Signed-off-by: Chris Mason <clm@fb.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 852cacf6bd83dd335d4138746dccb0497adfd3aa
Author: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Date:   Fri Dec 19 17:03:47 2014 +0000

    arm64: kernel: fix __cpu_suspend mm switch on warm-boot
    
    commit f43c27188a49111b58e9611afa2f0365b0b55625 upstream.
    
    On arm64 the TTBR0_EL1 register is set either to the reserved TTBR0
    page tables on boot or to the active_mm mappings belonging to user space
    processes; it must never be set to the swapper_pg_dir page table mappings.
    
    When a CPU is booted its active_mm is set to init_mm even though its
    TTBR0_EL1 points at the reserved TTBR0 page mappings. This implies
    that when __cpu_suspend is triggered the active_mm can point at
    init_mm even if the current TTBR0_EL1 register contains the reserved
    TTBR0_EL1 mappings.
    
    Therefore, the mm save and restore executed in __cpu_suspend might
    turn out to be erroneous in that, if the current->active_mm corresponds
    to init_mm, on resume from low power it ends up restoring in the
    TTBR0_EL1 the init_mm mappings that are global and can cause speculation
    of TLB entries which end up being propagated to user space.
    
    This patch fixes the issue by checking the active_mm pointer before
    restoring the TTBR0 mappings. If the current active_mm == &init_mm,
    the code sets the TTBR0_EL1 to the reserved TTBR0 mapping instead of
    switching back to the active_mm, which is the expected behaviour
    corresponding to the TTBR0_EL1 settings when __cpu_suspend was entered.
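
    A sketch of the check on the resume path (simplified, based on the
    description above):

        if (mm == &init_mm)
                cpu_set_reserved_ttbr0();       /* never install init_mm's global mappings */
        else
                cpu_switch_mm(mm->pgd, mm);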
    
    Fixes: 95322526ef62 ("arm64: kernel: cpu_{suspend/resume} implementation")
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 219591c5764eff2e378e810f3bafbaf2732f9d87
Author: Laura Abbott <lauraa@codeaurora.org>
Date:   Fri Nov 21 21:50:40 2014 +0000

    arm64: Move cpu_resume into the text section
    
    commit c3684fbb446501b48dec6677a6a9f61c215053de upstream.
    
    The function cpu_resume currently lives in the .data section.
    There's no reason for it to be there since we can use relative
    instructions without a problem. Move a few cpu_resume data
    structures out of the assembly file so the .data annotation
    can be dropped completely and cpu_resume ends up in the read
    only text section.
    
    Reviewed-by: Kees Cook <keescook@chromium.org>
    Reviewed-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Tested-by: Mark Rutland <mark.rutland@arm.com>
    Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Tested-by: Kees Cook <keescook@chromium.org>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 0e42d84ba218ee0d3dd2e8f367785c8bc71a9c14
Author: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Date:   Thu Aug 7 14:54:50 2014 +0100

    arm64: kernel: refactor the CPU suspend API for retention states
    
    commit 714f59925595b9c2ea9c22b107b340d38e3b3bc9 upstream.
    
    CPU suspend is the standard kernel interface to be used to enter
    low-power states on ARM64 systems. The current cpu_suspend implementation
    by default assumes that all low power states lose the CPU context,
    so the CPU registers must be saved and cleaned to DRAM upon state
    entry. Furthermore, the current cpu_suspend() implementation assumes
    that if the CPU suspend back-end method returns when called, this has
    to be considered an error regardless of the return code (which can be
    successful), since the CPU was not expected to return from a code path that
    is different from the cpu_resume code path - eg returning from the reset vector.
    
    All in all this means that the current API does not cope well with low-power
    states that preserve the CPU context when entered (ie retention states),
    since first of all the context is saved for nothing on state entry for
    those states and a successful state entry can return as a normal function
    return, which is considered an error by the current CPU suspend
    implementation.
    
    This patch refactors the cpu_suspend() API so that it can be split in
    two separate functionalities. The arm64 cpu_suspend API just provides
    a wrapper around CPU suspend operation hook. A new function is
    introduced (for architecture code use only) for states that require
    context saving upon entry:
    
    __cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
    
    __cpu_suspend() saves the context on function entry and calls the
    so called suspend finisher (ie fn) to complete the suspend operation.
    The finisher is not expected to return, unless it fails in which case
    the error is propagated back to the __cpu_suspend caller.
    
    The API refactoring results in the following pseudo code call sequence for a
    suspending CPU, when triggered from a kernel subsystem:
    
    /*
     * int cpu_suspend(unsigned long idx)
     * @idx: idle state index
     */
    {
    -> cpu_suspend(idx)
    	|---> CPU operations suspend hook called, if present
    		|--> if (retention_state)
    			|--> direct suspend back-end call (eg PSCI suspend)
    		     else
    			|--> __cpu_suspend(idx, &back_end_finisher);
    }
    
    By refactoring the cpu_suspend API this way, the CPU operations back-end
    has a chance to detect whether idle states require state saving or not
    and can call the required suspend operations accordingly either through
    simple function call or indirectly through __cpu_suspend() which carries out
    state saving and suspend finisher dispatching to complete idle state entry.
    
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 5ef30fef01e8c36fb0112637ef1d4b25c8f960c1
Author: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Date:   Thu Jul 17 18:19:20 2014 +0100

    arm64: kernel: add missing __init section marker to cpu_suspend_init
    
    commit 18ab7db6b749ac27aac08d572afbbd2f4d937934 upstream.
    
    The suspend init function must be marked as __init, since it is not needed
    after the kernel has booted. This patch moves the cpu_suspend_init()
    function to the __init section.
    
    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 47abb28e7183cea1402564f5cbccd5a863f30432
Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Date:   Thu Jan 1 23:38:28 2015 +0100

    ACPI / PM: Fix PM initialization for devices that are not present
    
    commit 1b1f3e1699a9886f1070f94171097ab4ccdbfc95 upstream.
    
    If an ACPI device object whose _STA returns 0 (not present and not
    functional) has _PR0 or _PS0, its power_manageable flag will be set
    and acpi_bus_init_power() will return 0 for it.  Consequently, if
    such a device object is passed to the ACPI device PM functions, they
    will attempt to carry out the requested operation on the device,
    although they should not do that for devices that are not present.
    
    To fix that problem make acpi_bus_init_power() return an error code
    for devices that are not present which will cause power_manageable to
    be cleared for them as appropriate in acpi_bus_get_power_flags().
    However, the lists of power resources should not be freed for the
    device in that case, so modify acpi_bus_get_power_flags() to keep
    those lists even if acpi_bus_init_power() returns an error.
    Accordingly, when deciding whether or not the lists of power
    resources need to be freed, acpi_free_power_resources_lists()
    should check the power.flags.power_resources flag instead of
    flags.power_manageable, so make that change too.
    
    Furthermore, if acpi_bus_attach() sees that flags.initialized is
    unset for the given device, it should reset the power management
    settings of the device and re-initialize them from scratch instead
    of relying on the previous settings (the device may have appeared
    after being not present previously, for example), so make it use
    the 'valid' flag of the D0 power state as the initial value of
    flags.power_manageable for it and call acpi_bus_init_power() to
    discover its current power state.
    
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8d33f5146f12be8bffa916cf414772d593807f9f
Author: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Date:   Thu Nov 13 10:38:57 2014 +0100

    ARM: mvebu: disable I/O coherency on non-SMP situations on Armada 370/375/38x/XP
    
    commit e55355453600a33bb5ca4f71f2d7214875f3b061 upstream.
    
    Enabling the hardware I/O coherency on Armada 370, Armada 375, Armada
    38x and Armada XP requires a certain number of conditions:
    
     - On Armada 370, the cache policy must be set to write-allocate.
    
     - On Armada 375, 38x and XP, the cache policy must be set to
       write-allocate, the pages must be mapped with the shareable
       attribute, and the SMP bit must be set
    
    Currently, on Armada XP, when CONFIG_SMP is enabled, those conditions
    are met. However, when Armada XP is used in a !CONFIG_SMP kernel, none
    of these conditions are met. With Armada 370, the situation is worse:
    since the processor is single core, regardless of whether CONFIG_SMP
    or !CONFIG_SMP is used, the cache policy will be set to write-back by
    the kernel and not write-allocate.
    
    Since solving this problem turns out to be quite complicated, and we
    don't want to leave users of a mainline kernel exposed to infrequent
    but real data corruption, this commit proposes to simply disable
    hardware I/O coherency in situations where it is known not to work.
    
    And basically, the is_smp() function of the kernel tells us whether it
    is OK to enable hardware I/O coherency or not, so this commit slightly
    refactors the coherency_type() function to return
    COHERENCY_FABRIC_TYPE_NONE when is_smp() is false, or the appropriate
    type of the coherency fabric in the other case.
    
    Thanks to this, the I/O coherency fabric will no longer be used at all
    in !CONFIG_SMP configurations. It will continue to be used in
    CONFIG_SMP configurations on Armada XP, Armada 375 and Armada 38x
    (which are multiple cores processors), but will no longer be used on
    Armada 370 (which is a single core processor).
    
    In the process, it simplifies the implementation of the
    coherency_type() function, and adds a missing call to of_node_put().
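
    A sketch of the refactored check (simplified):

        static int coherency_type(void)
        {
                /* hardware I/O coherency is only safe when the SMP conditions hold */
                if (!is_smp())
                        return COHERENCY_FABRIC_TYPE_NONE;
                ...
        }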
    
    Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
    Fixes: e60304f8cb7bb545e79fe62d9b9762460c254ec2 ("arm: mvebu: Add hardware I/O Coherency support")
    Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
    Link: https://lkml.kernel.org/r/1415871540-20302-3-git-send-email-thomas.petazzoni@free-electrons.com
    Signed-off-by: Jason Cooper <jason@lakedaemon.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3f4ddf1a9297a18b388463f89cee0bd60a064aca
Author: Pavel Machek <pavel@ucw.cz>
Date:   Sun Jan 4 20:01:23 2015 +0100

    Revert "ARM: 7830/1: delay: don't bother reporting bogomips in /proc/cpuinfo"
    
    commit 4bf9636c39ac70da091d5a2e28d3448eaa7f115c upstream.
    
    Commit 9fc2105aeaaf ("ARM: 7830/1: delay: don't bother reporting
    bogomips in /proc/cpuinfo") breaks audio in python, and probably
    elsewhere, with message
    
      FATAL: cannot locate cpu MHz in /proc/cpuinfo
    
    I'm not the first one to hit it, see for example
    
      https://theredblacktree.wordpress.com/2014/08/10/fatal-cannot-locate-cpu-mhz-in-proccpuinfo/
      https://devtalk.nvidia.com/default/topic/765800/workaround-for-fatal-cannot-locate-cpu-mhz-in-proc-cpuinf/?offset=1
    
    Reading original changelog, I have to say "Stop breaking working setups.
    You know who you are!".
    
    Signed-off-by: Pavel Machek <pavel@ucw.cz>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit dab35042ecb771edefc13e1620470919128d7900
Author: Nishanth Menon <nm@ti.com>
Date:   Tue Oct 21 15:22:28 2014 -0500

    ARM: OMAP4: PM: Only do static dependency configuration in omap4_init_static_deps
    
    commit 9008d83fe9dc2e0f19b8ba17a423b3759d8e0fd7 upstream.
    
    Commit 705814b5ea6f ("ARM: OMAP4+: PM: Consolidate OMAP4 PM code to
    re-use it for OMAP5") moved logic generic for OMAP5+ into the init
    routine by introducing omap4_pm_init. However, the patch left the
    powerdomain initial setup, an unused omap4430 es1.0 check and a
    spurious log "Power Management for TI OMAP4." in the original code.
    
    Remove the duplicate code which is already present in omap4_pm_init from
    omap4_init_static_deps.
    
    As part of this change, also move the u-boot version print out of the
    static dependency function to the omap4_pm_init function.
    
    Fixes: 705814b5ea6f ("ARM: OMAP4+: PM: Consolidate OMAP4 PM code to re-use it for OMAP5")
    Signed-off-by: Nishanth Menon <nm@ti.com>
    Signed-off-by: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8be474538333eba411aae430bedc7b7e4e4f882e
Author: Tomasz Figa <tomasz.figa@gmail.com>
Date:   Wed Sep 24 00:14:29 2014 +0900

    ARM: dts: Enable PWM node by default for s3c64xx
    
    commit 5e794de514f56de1e78e979ca09c56a91aa2e9f1 upstream.
    
    The PWM block is required as the system clock source so it must always
    be enabled. This patch fixes boot issues on SMDK6410, which did not have
    the node enabled explicitly for other purposes.
    
    Fixes: eeb93d02 ("clocksource: of: Respect device tree node status")
    
    Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
    Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9feeb8f3887bd398b9afb676b67f9f35b0878179
Author: Lokesh Vutla <lokeshvutla@ti.com>
Date:   Wed Nov 12 10:54:15 2014 +0530

    ARM: dts: DRA7: wdt: Fix compatible property for watchdog node
    
    commit be6688350a4470e417aaeca54d162652aab40ac5 upstream.
    
    The OMAP wdt driver supports only the ti,omap3-wdt compatible. In the DRA7
    dts the wdt compatible property is defined as ti,omap4-wdt by mistake
    instead of ti,omap3-wdt. Correct the typo.
    
    Fixes: 6e58b8f1daaf1a ("ARM: dts: DRA7: Add the dts files for dra7 SoC and dra7-evm board")
    Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
    Signed-off-by: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit cae817ad60ef1a615dcdb0dd7dc505aefe63c88c
Author: Luca Abeni <luca.abeni@unitn.it>
Date:   Wed Dec 17 11:50:32 2014 +0100

    sched/deadline: Avoid double-accounting in case of missed deadlines
    
    commit 269ad8015a6b2bb1cf9e684da4921eb6fa0a0c88 upstream.
    
    The dl_runtime_exceeded() function is supposed to check if
    a SCHED_DEADLINE task must be throttled, by checking if its
    current runtime is <= 0. However, it also checks if the
    scheduling deadline has been missed (the current time is
    larger than the current scheduling deadline), further
    decreasing the runtime if this happens.
    This "double accounting" is wrong:
    
    - In case of partitioned scheduling (or single CPU), this
      happens if task_tick_dl() has been called later than expected
      (due to small HZ values). In this case, the current runtime is
      also negative, and replenish_dl_entity() can take care of the
      deadline miss by recharging the current runtime to a value smaller
      than dl_runtime
    
    - In case of global scheduling on multiple CPUs, scheduling
      deadlines can be missed even if the task did not consume more
      runtime than expected, hence penalizing the task is wrong
    
    This patch fixes this problem by throttling a SCHED_DEADLINE task
    only when its runtime becomes negative, without modifying the runtime
    in that case.
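
    A sketch of the resulting check (simplified): the task is throttled
    purely on its remaining runtime, and the deadline-miss case no longer
    subtracts anything.

        int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
        {
                return (dl_se->runtime <= 0);
        }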
    
    Signed-off-by: Luca Abeni <luca.abeni@unitn.it>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Juri Lelli <juri.lelli@gmail.com>
    Cc: Dario Faggioli <raistlin@linux.it>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Link: http://lkml.kernel.org/r/1418813432-20797-3-git-send-email-luca.abeni@unitn.it
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 678c8bb7aa75fb956b311b42ad4b7262bcc24b5e
Author: Luca Abeni <luca.abeni@unitn.it>
Date:   Wed Dec 17 11:50:31 2014 +0100

    sched/deadline: Fix migration of SCHED_DEADLINE tasks
    
    commit 6a503c3be937d275113b702e0421e5b0720abe8a upstream.
    
    According to global EDF, tasks should be migrated between runqueues
    without checking if their scheduling deadlines and runtimes are valid.
    However, SCHED_DEADLINE currently performs such a check:
    a migration happens by doing:
    
    	deactivate_task(rq, next_task, 0);
    	set_task_cpu(next_task, later_rq->cpu);
    	activate_task(later_rq, next_task, 0);
    
    which ends up calling dequeue_task_dl(), setting the new CPU, and then
    calling enqueue_task_dl().
    
    enqueue_task_dl() then calls enqueue_dl_entity(), which calls
    update_dl_entity(), which can modify scheduling deadline and runtime,
    breaking global EDF scheduling.
    
    As a result, some of the properties of global EDF are not respected:
    for example, a taskset {(30, 80), (40, 80), (120, 170)} scheduled on
    two cores can have unbounded response times for the third task even
    if 30/80+40/80+120/170 = 1.5809 < 2
    
    This can be fixed by invoking update_dl_entity() only in case of
    wakeup, or if this is a new SCHED_DEADLINE task.
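
    A sketch of the described condition in enqueue_dl_entity() (the exact
    flags are an assumption, not the verified diff):

        if (dl_se->dl_new || (flags & ENQUEUE_WAKEUP))
                update_dl_entity(dl_se, pi_se);
        else if (flags & ENQUEUE_REPLENISH)
                replenish_dl_entity(dl_se, pi_se);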
    
    Signed-off-by: Luca Abeni <luca.abeni@unitn.it>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Juri Lelli <juri.lelli@gmail.com>
    Cc: Dario Faggioli <raistlin@linux.it>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Link: http://lkml.kernel.org/r/1418813432-20797-2-git-send-email-luca.abeni@unitn.it
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 11d1b5db26bdb35063850dfb662b2e7de159a4be
Author: Johannes Berg <johannes.berg@intel.com>
Date:   Wed Dec 10 15:41:28 2014 -0800

    scripts/kernel-doc: don't eat struct members with __aligned
    
    commit 7b990789a4c3420fa57596b368733158e432d444 upstream.
    
    The change from \d+ to .+ inside __aligned() means that the following
    structure:
    
      struct test {
            u8 a __aligned(2);
            u8 b __aligned(2);
      };
    
    essentially gets modified to
    
      struct test {
            u8 a;
      };
    
    for purposes of kernel-doc, thus dropping a struct member, which in
    turns causes warnings and invalid kernel-doc generation.
    
    Fix this by replacing the catch-all (".") with anything that's not a
    semicolon ("[^;]").
    
    Fixes: 9dc30918b23f ("scripts/kernel-doc: handle struct member __aligned without numbers")
    Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    Cc: Nishanth Menon <nm@ti.com>
    Cc: Randy Dunlap <rdunlap@infradead.org>
    Cc: Michal Marek <mmarek@suse.cz>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 1cde12540bf232899dbf162427083b4982e4bf54
Author: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Date:   Wed Dec 10 15:54:34 2014 -0800

    nilfs2: fix the nilfs_iget() vs. nilfs_new_inode() races
    
    commit 705304a863cc41585508c0f476f6d3ec28cf7e00 upstream.
    
    Same story as in commit 41080b5a2401 ("nfsd race fixes: ext2") (similar
    ext2 fix) except that nilfs2 needs to use insert_inode_locked4() instead
    of insert_inode_locked() and a bug of a check for dead inodes needs to
    be fixed.
    
    If nilfs_iget() is called from nfsd after nilfs_new_inode() calls
    insert_inode_locked4(), nilfs_iget() will wait for unlock_new_inode() at
    the end of nilfs_mkdir()/nilfs_create()/etc to unlock the inode.
    
    If nilfs_iget() is called before nilfs_new_inode() calls
    insert_inode_locked4(), it will create an in-core inode and read its
    data from the on-disk inode.  But, nilfs_iget() will find i_nlink equals
    zero and fail at nilfs_read_inode_common(), which will lead it to call
    iget_failed() and cleanly fail.
    
    However, this sanity check doesn't work as expected for reused on-disk
    inodes because they leave a non-zero value in i_mode field and it
    hinders the test of i_nlink.  This patch also fixes the issue by
    removing the test on i_mode that nilfs2 doesn't need.
    
    Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b9c2571e0db86a7eae3e11bd4b02117a74a3688f
Author: Brian Norris <computersforpeace@gmail.com>
Date:   Fri Nov 21 10:24:29 2014 -0800

    mtd: tests: abort torturetest on erase errors
    
    commit 68f29815034e9dc9ed53cad85946c32b07adc8cc upstream.
    
    The torture test should quit once it actually induces an error in the
    flash. This step was accidentally removed during refactoring.
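
    A sketch of the missing abort (hypothetical placement; the erase helper
    comes from the mtd_test helpers this test was converted to, see the
    Fixes tag below):

        err = mtdtest_erase_eraseblock(mtd, i);
        if (err)
                goto out;       /* stop torturing once the flash actually fails */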
    
    Without this fix, the torturetest just continues infinitely, or until
    the maximum cycle count is reached. e.g.:
    
       ...
       [ 7619.218171] mtd_test: error -5 while erasing EB 100
       [ 7619.297981] mtd_test: error -5 while erasing EB 100
       [ 7619.377953] mtd_test: error -5 while erasing EB 100
       [ 7619.457998] mtd_test: error -5 while erasing EB 100
       [ 7619.537990] mtd_test: error -5 while erasing EB 100
       ...
    
    Fixes: 6cf78358c94f ("mtd: mtd_torturetest: use mtd_test helpers")
    Signed-off-by: Brian Norris <computersforpeace@gmail.com>
    Cc: Akinobu Mita <akinobu.mita@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 92a34c870562539755aa2d8eb4b8d2b9cdc3bbe4
Author: Yan, Zheng <zheng.z.yan@intel.com>
Date:   Mon Mar 24 09:56:43 2014 +0800

    ceph: fix null pointer dereference in discard_cap_releases()
    
    commit 00bd8edb861eb41d274938cfc0338999d9c593a3 upstream.
    
    send_mds_reconnect() may call discard_cap_releases() after all
    release messages have been dropped by cleanup_cap_releases()
    
    Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
    Reviewed-by: Sage Weil <sage@inktank.com>
    Cc: Markus Blank-Burian <burian@muenster.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit e3de52b760cbea4a07171d41ff6f85176f8692a3
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Fri Nov 28 11:33:34 2014 +0300

    ceph: do_sync is never initialized
    
    commit 021b77bee210843bed1ea91b5cad58235ff9c8e5 upstream.
    
    Probably this code was syncing a lot more often than intended because
    the do_sync variable wasn't set to zero.
    
    Fixes: c62988ec0910 ('ceph: avoid meaningless calling ceph_caps_revoking if sync_mode == WB_SYNC_ALL.')
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 0cd81f595375b4eaa0d3dfb779daecc82691044c
Author: Benjamin Coddington <bcodding@redhat.com>
Date:   Sun Dec 7 16:05:47 2014 -0500

    nfsd4: fix xdr4 inclusion of escaped char
    
    commit 5a64e56976f1ba98743e1678c0029a98e9034c81 upstream.
    
    Fix a bug where nfsd4_encode_components_esc() includes the esc_end char as
    an additional string encoding.
    
    Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
    Fixes: e7a0444aef4a "nfsd: add IPv6 addr escaping to fs_location hosts"
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d7d13fde8bc716a2cdc83964cbefcb1ba6af13f8
Author: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Date:   Fri Dec 5 16:40:07 2014 +0100

    fs: nfsd: Fix signedness bug in compare_blob
    
    commit ef17af2a817db97d42dd2ec0a425231748e23dbc upstream.
    
    Bugs similar to the one in acbbe6fbb240 (kcmp: fix standard comparison
    bug) are in rich supply.
    
    In this variant, the problem is that struct xdr_netobj::len has type
    unsigned int, so the expression o1->len - o2->len _also_ has type
    unsigned int; it has completely well-defined semantics, and the result
    is some non-negative integer, which is always representable in a long
    long. But this means that if the conditional triggers, we are
    guaranteed to return a positive value from compare_blob.
    
    In this case it could be fixed by
    
    -       res = o1->len - o2->len;
    +       res = (long long)o1->len - (long long)o2->len;
    
    but I'd rather eliminate the usually broken 'return a - b;' idiom.
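
    A self-contained user-space illustration of the bug class (hypothetical
    names; compiles with any C compiler):

        #include <stdio.h>

        struct blob { unsigned int len; };

        /* buggy: the subtraction is done in unsigned int and can never be negative */
        static long long cmp_buggy(struct blob *a, struct blob *b)
        {
                return a->len - b->len;
        }

        /* safe: compare instead of subtracting */
        static long long cmp_fixed(struct blob *a, struct blob *b)
        {
                if (a->len != b->len)
                        return a->len < b->len ? -1 : 1;
                return 0;
        }

        int main(void)
        {
                struct blob small = { 1 }, big = { 2 };
                /* prints a huge positive number for the buggy version, -1 for the fixed one */
                printf("buggy: %lld, fixed: %lld\n",
                       cmp_buggy(&small, &big), cmp_fixed(&small, &big));
                return 0;
        }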
    
    Reviewed-by: Jeff Layton <jlayton@primarydata.com>
    Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8a8c38c8e681f36170d6d565c8c50539f7845667
Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Date:   Tue Nov 4 13:40:11 2014 +0100

    Drivers: hv: vmbus: Fix a race condition when unregistering a device
    
    commit 04a258c162a85c0f4ae56be67634dc43c9a4fa9b upstream.
    
    When build with Debug the following crash is sometimes observed:
    Call Trace:
     [<ffffffff812b9600>] string+0x40/0x100
     [<ffffffff812bb038>] vsnprintf+0x218/0x5e0
     [<ffffffff810baf7d>] ? trace_hardirqs_off+0xd/0x10
     [<ffffffff812bb4c1>] vscnprintf+0x11/0x30
     [<ffffffff8107a2f0>] vprintk+0xd0/0x5c0
     [<ffffffffa0051ea0>] ? vmbus_process_rescind_offer+0x0/0x110 [hv_vmbus]
     [<ffffffff8155c71c>] printk+0x41/0x45
     [<ffffffffa004ebac>] vmbus_device_unregister+0x2c/0x40 [hv_vmbus]
     [<ffffffffa0051ecb>] vmbus_process_rescind_offer+0x2b/0x110 [hv_vmbus]
    ...
    
    This happens due to the following race: between the 'if (channel->device_obj)' check
    in vmbus_process_rescind_offer() and the pr_debug() in vmbus_device_unregister() the
    device can disappear. Fix the issue by taking an additional reference to the
    device before proceeding to vmbus_device_unregister().
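
    A sketch of the described fix in vmbus_process_rescind_offer()
    (simplified):

        if (channel->device_obj) {
                /* pin the device so it cannot vanish under us */
                dev = get_device(&channel->device_obj->device);
                if (dev) {
                        vmbus_device_unregister(channel->device_obj);
                        put_device(dev);
                }
        }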
    
    Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
    Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 7810e6d8f1390a96e634e66072064c69a6f7d08d
Author: Christian Riesch <christian.riesch@omicron.at>
Date:   Thu Nov 13 05:53:26 2014 +0100

    n_tty: Fix read_buf race condition, increment read_head after pushing data
    
    commit 8bfbe2de769afda051c56aba5450391670e769fc upstream.
    
    Commit 19e2ad6a09f0c06dbca19c98e5f4584269d913dd ("n_tty: Remove overflow
    tests from receive_buf() path") moved the increment of read_head into
    the arguments list of read_buf_addr(). Function calls represent a
    sequence point in C. Therefore read_head is incremented before the
    character c is placed in the buffer. Since the circular read buffer is
    a lock-less design since commit 6d76bd2618535c581f1673047b8341fd291abc67
    ("n_tty: Make N_TTY ldisc receive path lockless"), this creates a race
    condition that leads to communication errors.
    
    This patch modifies the code to increment read_head _after_ the data
    is placed in the buffer and thus fixes the race for non-SMP machines.
    To fix the problem for SMP machines, memory barriers must be added in
    a separate patch.
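
    In before/after form the change is roughly (a sketch based on the
    description above):

    -       *read_buf_addr(ldata, ldata->read_head++) = c;
    +       *read_buf_addr(ldata, ldata->read_head) = c;
    +       ldata->read_head++;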
    
    Signed-off-by: Christian Riesch <christian.riesch@omicron.at>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 24003a22ef28a69dd07fa5277ce7ef0c944e8a0e
Author: Robert Baldyga <r.baldyga@samsung.com>
Date:   Mon Nov 24 07:56:21 2014 +0100

    serial: samsung: wait for transfer completion before clock disable
    
    commit 1ff383a4c3eda8893ec61b02831826e1b1f46b41 upstream.
    
    This patch adds waiting until the transmit buffer and shifter are empty
    before disabling the clock.

    Without this fix it is possible for the clock to be disabled while data
    has not been transmitted yet, which leaves the TX line in an improper
    state and causes problems in following data transfers.
    
    Signed-off-by: Robert Baldyga <r.baldyga@samsung.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 37b2a5a7c849b7242322688ce380d0a1cc1b1d0f
Author: Steven Rostedt (Red Hat) <rostedt@goodmis.org>
Date:   Wed Dec 10 17:31:07 2014 -0500

    tracing/sched: Check preempt_count() for current when reading task->state
    
    commit aee4e5f3d3abb7a2239dd02f6d8fb173413fd02f upstream.
    
    When recording the state of a task for the sched_switch tracepoint a check of
    task_preempt_count() is performed to see if PREEMPT_ACTIVE is set. This is
    because, technically, a task being preempted is really in the TASK_RUNNING
    state, and that is what should be recorded when tracing a sched_switch,
    even if the task put itself into another state (it hasn't scheduled out
    in that state yet).
    
    But with the change to use per_cpu preempt counts, the
    task_thread_info(p)->preempt_count is no longer used, and instead
    task_preempt_count(p) is used.
    
    The problem is that this does not use the current preempt count but a stale
    one from a previous sched_switch. The task_preempt_count(p) uses
    saved_preempt_count and not preempt_count(). But for tracing sched_switch,
    if p is current, we really want preempt_count().
    
    I hit this bug when I was tracing sleep and the call from do_nanosleep()
    scheduled out in the "RUNNING" state.
    
               sleep-4290  [000] 537272.259992: sched_switch:         sleep:4290 [120] R ==> swapper/0:0 [120]
               sleep-4290  [000] 537272.260015: kernel_stack:         <stack trace>
    => __schedule (ffffffff8150864a)
    => schedule (ffffffff815089f8)
    => do_nanosleep (ffffffff8150b76c)
    => hrtimer_nanosleep (ffffffff8108d66b)
    => SyS_nanosleep (ffffffff8108d750)
    => return_to_handler (ffffffff8150e8e5)
    => tracesys_phase2 (ffffffff8150c844)
    
    After a bit of hair pulling, I found that the state was really
    TASK_INTERRUPTIBLE, but the saved_preempt_count had an old PREEMPT_ACTIVE
    set and caused the sched_switch tracepoint to show it as RUNNING.
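
    A sketch of the idea (not the exact patch): when the traced task is
    current, consult the live preempt_count() instead of the stale saved
    value.

        long state = p->state;

        /* a task preempted with PREEMPT_ACTIVE is still running for tracing purposes */
        if (p == current && (preempt_count() & PREEMPT_ACTIVE))
                state = TASK_RUNNING;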
    
    Link: http://lkml.kernel.org/r/20141210174428.3cb7542a@gandalf.local.home
    
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Fixes: 01028747559a "sched: Create more preempt_count accessors"
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit adfd114937f829e354a8d6d3bd13d02bec769715
Author: Tejun Heo <tj@kernel.org>
Date:   Fri Oct 24 15:38:21 2014 -0400

    writeback: fix a subtle race condition in I_DIRTY clearing
    
    commit 9c6ac78eb3521c5937b2dd8a7d1b300f41092f45 upstream.
    
    After invoking ->dirty_inode(), __mark_inode_dirty() does smp_mb() and
    tests inode->i_state locklessly to see whether it already has all the
    necessary I_DIRTY bits set.  The comment above the barrier doesn't
    contain any useful information - memory barriers can't ensure "changes
    are seen by all cpus" by itself.
    
    And it sure enough was broken.  Please consider the following
    scenario.
    
     CPU 0					CPU 1
     -------------------------------------------------------------------------------
    
    					enters __writeback_single_inode()
    					grabs inode->i_lock
    					tests PAGECACHE_TAG_DIRTY which is clear
     enters __set_page_dirty()
     grabs mapping->tree_lock
     sets PAGECACHE_TAG_DIRTY
     releases mapping->tree_lock
     leaves __set_page_dirty()
    
     enters __mark_inode_dirty()
     smp_mb()
     sees I_DIRTY_PAGES set
     leaves __mark_inode_dirty()
    					clears I_DIRTY_PAGES
    					releases inode->i_lock
    
    Now @inode has dirty pages w/ I_DIRTY_PAGES clear.  This doesn't seem
    to lead to an immediately critical problem because requeue_inode()
    later checks PAGECACHE_TAG_DIRTY instead of I_DIRTY_PAGES when
    deciding whether the inode needs to be requeued for IO and there are
    enough unintentional memory barriers inbetween, so while the inode
    ends up with inconsistent I_DIRTY_PAGES flag, it doesn't fall off the
    IO list.
    
    The lack of explicit barrier may also theoretically affect the other
    I_DIRTY bits which deal with metadata dirtiness.  There is no
    guarantee that a strong enough barrier exists between
    I_DIRTY_[DATA]SYNC clearing and write_inode() writing out the dirtied
    inode.  Filesystem inode writeout path likely has enough stuff which
    can behave as full barrier but it's theoretically possible that the
    writeout may not see all the updates from ->dirty_inode().
    
    Fix it by adding an explicit smp_mb() after I_DIRTY clearing.  Note
    that I_DIRTY_PAGES needs a special treatment as it always needs to be
    cleared to be interlocked with the lockless test on
    __mark_inode_dirty() side.  It's cleared unconditionally and
    reinstated after smp_mb() if the mapping still has dirty pages.
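
    A sketch of the resulting ordering in __writeback_single_inode()
    (simplified):

        inode->i_state &= ~I_DIRTY;

        smp_mb();       /* pairs with the smp_mb() in __mark_inode_dirty() */

        if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
                inode->i_state |= I_DIRTY_PAGES;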
    
    Also add comments explaining how and why the barriers are paired.
    
    Lightly tested.
    
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Cc: Jan Kara <jack@suse.cz>
    Cc: Mikulas Patocka <mpatocka@redhat.com>
    Cc: Jens Axboe <axboe@kernel.dk>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Reviewed-by: Jan Kara <jack@suse.cz>
    Signed-off-by: Jens Axboe <axboe@fb.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 7756c314710ceee0336824b3812261c1ae49106f
Author: Oliver Neukum <oneukum@suse.de>
Date:   Thu Nov 20 14:54:35 2014 +0100

    cdc-acm: memory leak in error case
    
    commit d908f8478a8d18e66c80a12adb27764920c1f1ca upstream.
    
    If probe() fails not only the attributes need to be removed
    but also the memory freed.
    
    Reported-by: Ahmed Tamrawi <ahmedtamrawi@gmail.com>
    Signed-off-by: Oliver Neukum <oneukum@suse.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 0fa799921d850b6e3321cf6bd54c48894c4af927
Author: Jens Axboe <axboe@fb.com>
Date:   Wed Nov 19 13:06:22 2014 -0700

    genhd: check for int overflow in disk_expand_part_tbl()
    
    commit 5fabcb4c33fe11c7e3afdf805fde26c1a54d0953 upstream.
    
    We can get here from blkdev_ioctl() -> blkpg_ioctl() -> add_partition()
    with a user passed in partno value. If we pass in 0x7fffffff, the
    new target in disk_expand_part_tbl() overflows the 'int' and we
    access beyond the end of ptbl->part[] and even write to it when we
    do the rcu_assign_pointer() to assign the new partition.
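
    A sketch of the guard (variable names are assumptions): the new table
    size is computed as partno + 1 in an int, so a wrapped result must be
    rejected.

        target = partno + 1;
        if (target < 0)         /* partno was 0x7fffffff: the addition wrapped */
                return -EINVAL;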
    
    Reported-by: David Ramos <daramos@stanford.edu>
    Signed-off-by: Jens Axboe <axboe@fb.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 25f9cb00a7862643fe3209e3a8ab3387af2a784d
Author: Steev Klimaszewski <threeway@gmail.com>
Date:   Tue Dec 30 00:55:48 2014 -0600

    Add USB_EHCI_EXYNOS to multi_v7_defconfig
    
    commit 007487f1fd43d84f26cda926081ca219a24ecbc4 upstream.
    
    Currently we enable Exynos devices in the multi v7 defconfig; however, when
    testing on my ODROID-U3, I noticed that USB was not working.  Enabling this
    option makes USB work, which enables networking support as well since the
    ODROID-U3 has networking on the USB bus.
    
    [arnd] Support for odroid-u3 was added in 3.10, so it would be nice to
    backport this fix at least that far.
    
    Signed-off-by: Steev Klimaszewski <steev@gentoo.org>
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 77e2e4877b2000379a5e525d51e79e6705ce242d
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Fri Nov 7 08:48:15 2014 -0800

    USB: cdc-acm: check for valid interfaces
    
    commit 403dff4e2c94f275e24fd85f40b2732ffec268a1 upstream.
    
    We need to check that we have both a valid data and control interface for both
    types of headers (union and non-union).
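
    A sketch of the added check (simplified):

        if (!control_interface || !data_interface) {
                dev_dbg(&intf->dev, "no interfaces\n");
                return -ENODEV;
        }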
    
    References: https://bugzilla.kernel.org/show_bug.cgi?id=83551
    Reported-by: Simon Schubert <2+kernel@0x2c.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 628b776115aa817dddee83c29fde3ba7fe806a96
Author: Takashi Iwai <tiwai@suse.de>
Date:   Mon Jan 5 13:27:33 2015 +0100

    ALSA: hda - Fix wrong gpio_dir & gpio_mask hint setups for IDT/STAC codecs
    
    commit c507de88f6a336bd7296c9ec0073b2d4af8b4f5e upstream.
    
    stac_store_hints() gets the masking of the gpio_dir and gpio_data
    values utterly wrong, likely due to copy&paste errors.  Fortunately,
    this feature is used very rarely, so the impact must be really small.
    
    Reported-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b9de348047106afccfb35e6c424a35b496f4bb31
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Thu Nov 27 01:34:43 2014 +0300

    ALSA: hda - using uninitialized data
    
    commit 69eba10e606a80665f8573221fec589430d9d1cb upstream.
    
    In olden times the snd_hda_param_read() function always set "*start_id",
    but in 2007 we introduced a new early return, which causes uninitialized
    data bugs in a couple of the callers: print_codec_info() and
    hdmi_parse_codec().
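
    A minimal userspace illustration of the caller-side hazard (not the HDA
    code itself): if the read helper can return early without writing through
    its out-parameter, a caller that ignores the return value ends up using
    whatever happened to be on the stack.

      #include <stdio.h>

      /* may fail without touching *start_id, like the 2007 early return */
      static int param_read(int fail, unsigned int *start_id)
      {
              if (fail)
                      return -1;
              *start_id = 4;
              return 0;
      }

      int main(void)
      {
              unsigned int start_id;                  /* deliberately not set */

              /* buggy pattern: return value ignored, start_id may be garbage */
              param_read(1, &start_id);

              /* fixed pattern: only use start_id when the read succeeded */
              if (param_read(1, &start_id) < 0)
                      return 1;
              printf("%u\n", start_id);
              return 0;
      }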
    
    Fixes: e8a7f136f5ed ('[ALSA] hda-intel - Improve HD-audio codec probing robustness')
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit af769e768312266eab883668a1f386313d41100b
Author: Jiri Jaburek <jjaburek@redhat.com>
Date:   Thu Dec 18 02:03:19 2014 +0100

    ALSA: usb-audio: extend KEF X300A FU 10 tweak to Arcam rPAC
    
    commit d70a1b9893f820fdbcdffac408c909c50f2e6b43 upstream.
    
    The Arcam rPAC seems to have the same problem - whenever anything
    (alsamixer, udevd, 3.9+ kernel from 60af3d037eb8c, ..) attempts to
    access the mixer / control interface of the card, the firmware "locks up"
    the entire device, resulting in
      SNDRV_PCM_IOCTL_HW_PARAMS failed (-5): Input/output error
    from alsa-lib.
    
    Other operating systems can somehow read the mixer (there seems to be
    playback volume/mute), but any manipulation is ignored by the device
    (which has hardware volume controls).
    
    Signed-off-by: Jiri Jaburek <jjaburek@redhat.com>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 7de4a0e7fe4390dfc3b42c1fdcd53104758cc33c
Author: Ian Abbott <abbotti@mev.co.uk>
Date:   Thu Nov 6 16:23:39 2014 +0000

    misc: genwqe: check for error from get_user_pages_fast()
    
    commit cf35d6e0475982667b0d2d318fb27be4b8849827 upstream.
    
    `genwqe_user_vmap()` calls `get_user_pages_fast()` and if the return
    value is less than the number of pages requested, it frees the pages and
    returns an error (`-EFAULT`).  However, it fails to consider a negative
    error return value from `get_user_pages_fast()`.  In that case, the test
    `if (rc < m->nr_pages)` will be false (due to promotion of `rc` to a
    large `unsigned int`) and the code will continue on to call
    `genwqe_map_pages()` with an invalid list of page pointers.  Fix it by
    bailing out if `get_user_pages_fast()` returns a negative error value.
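
    The promotion pitfall is easy to reproduce in plain C (illustrative, not
    the genwqe code): a negative int compared against an unsigned int is
    converted to a huge unsigned value, so the "too few pages" test never
    fires.

      #include <stdio.h>

      int main(void)
      {
              int rc = -14;                 /* e.g. -EFAULT from a helper */
              unsigned int nr_pages = 16;

              /* rc is converted to a huge unsigned value here, so this is
               * false even though rc is negative */
              if (rc < nr_pages)
                      printf("error path taken\n");
              else
                      printf("error missed, bogus page list would be used\n");

              /* fix: test for a negative error before the unsigned compare */
              if (rc < 0 || (unsigned int)rc < nr_pages)
                      printf("error path taken after the fix\n");
              return 0;
      }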
    
    Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit e9820ed9a247ea5b018d71bf0af750acf0c02451
Author: Alex Williamson <alex.williamson@redhat.com>
Date:   Fri Oct 31 11:13:07 2014 -0600

    driver core: Fix unbalanced device reference in drivers_probe
    
    commit bb34cb6bbd287b57e955bc5cfd42fcde6aaca279 upstream.
    
    bus_find_device_by_name() acquires a device reference which is never
    released.  This results in an object leak, which on older kernels
    results in failure to release all resources of PCI devices.  libvirt
    uses drivers_probe to re-attach devices to the host after assignment
    and is therefore a common trigger for this leak.
    
    Example:
    
    # cd /sys/bus/pci/
    # dmesg -C
    # echo 1 > devices/0000\:01\:00.0/sriov_numvfs
    # echo 0 > devices/0000\:01\:00.0/sriov_numvfs
    # dmesg | grep 01:10
     pci 0000:01:10.0: [8086:10ca] type 00 class 0x020000
     kobject: '0000:01:10.0' (ffff8801d79cd0a8): kobject_add_internal: parent: '0000:00:01.0', set: 'devices'
     kobject: '0000:01:10.0' (ffff8801d79cd0a8): kobject_uevent_env
     kobject: '0000:01:10.0' (ffff8801d79cd0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
     kobject: '0000:01:10.0' (ffff8801d79cd0a8): kobject_uevent_env
     kobject: '0000:01:10.0' (ffff8801d79cd0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
     kobject: '0000:01:10.0' (ffff8801d79cd0a8): kobject_uevent_env
     kobject: '0000:01:10.0' (ffff8801d79cd0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
     kobject: '0000:01:10.0' (ffff8801d79cd0a8): kobject_cleanup, parent           (null)
     kobject: '0000:01:10.0' (ffff8801d79cd0a8): calling ktype release
     kobject: '0000:01:10.0': free name
    
    [kobject freed as expected]
    
    # dmesg -C
    # echo 1 > devices/0000\:01\:00.0/sriov_numvfs
    # echo 0000:01:10.0 > drivers_probe
    # echo 0 > devices/0000\:01\:00.0/sriov_numvfs
    # dmesg | grep 01:10
     pci 0000:01:10.0: [8086:10ca] type 00 class 0x020000
     kobject: '0000:01:10.0' (ffff8801d79ce0a8): kobject_add_internal: parent: '0000:00:01.0', set: 'devices'
     kobject: '0000:01:10.0' (ffff8801d79ce0a8): kobject_uevent_env
     kobject: '0000:01:10.0' (ffff8801d79ce0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
     kobject: '0000:01:10.0' (ffff8801d79ce0a8): kobject_uevent_env
     kobject: '0000:01:10.0' (ffff8801d79ce0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
     kobject: '0000:01:10.0' (ffff8801d79ce0a8): kobject_uevent_env
     kobject: '0000:01:10.0' (ffff8801d79ce0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
    
    [no free]
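
    The shape of the bug, in a self-contained refcount sketch (not the driver
    core code): every lookup that returns with a reference held must be paired
    with a put_device()-style release, otherwise the count never reaches zero
    and the object is never freed.

      #include <stdio.h>
      #include <stdlib.h>

      struct obj { int refcount; };

      static struct obj *obj_get(struct obj *o) { o->refcount++; return o; }

      static void obj_put(struct obj *o)
      {
              if (--o->refcount == 0) {
                      printf("released\n");
                      free(o);
              }
      }

      /* analogue of bus_find_device_by_name(): returns with a reference held */
      static struct obj *find_obj(struct obj *o) { return obj_get(o); }

      int main(void)
      {
              struct obj *o = calloc(1, sizeof(*o));
              struct obj *found;

              o->refcount = 1;              /* creator's reference */

              found = find_obj(o);          /* lookup takes a reference */
              /* ... use found ... */
              obj_put(found);               /* the put that was missing */

              obj_put(o);                   /* drop the creator's reference */
              return 0;
      }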
    
    Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 99c8619590d4d44fe51b5ef0542c0edbd844b0b5
Author: Andy Lutomirski <luto@amacapital.net>
Date:   Sun Dec 21 08:57:46 2014 -0800

    x86, vdso: Use asm volatile in __getcpu
    
    commit 1ddf0b1b11aa8a90cef6706e935fc31c75c406ba upstream.
    
    In Linux 3.18 and below, GCC hoists the lsl instructions in the
    pvclock code all the way to the beginning of __vdso_clock_gettime,
    slowing the non-paravirt case significantly.  For unknown reasons,
    presumably related to the removal of a branch, the performance issue
    is gone as of
    
    e76b027e6408 x86,vdso: Use LSL unconditionally for vgetcpu
    
    but I don't trust GCC enough to expect the problem to stay fixed.
    
    There should be no correctness issue, because the __getcpu calls in
    __vdso_clock_gettime were never necessary in the first place.
    
    Note to stable maintainers: In 3.18 and below, depending on
    configuration, gcc 4.9.2 generates code like this:
    
         9c3:       44 0f 03 e8             lsl    %ax,%r13d
         9c7:       45 89 eb                mov    %r13d,%r11d
         9ca:       0f 03 d8                lsl    %ax,%ebx
    
    This patch won't apply as is to any released kernel, but I'll send a
    trivial backported version if needed.
    
    [
     Backported by Andy Lutomirski.  Should apply to all affected
     versions.  This fixes a functionality bug as well as a performance
     bug: buggy kernels can infinite loop in __vdso_clock_gettime on
     affected compilers.  See, for example:
    
     https://bugzilla.redhat.com/show_bug.cgi?id=1178975
    ]
    
    Fixes: 51c19b4f5927 ("x86: vdso: pvclock gettime support")
    Cc: Marcelo Tosatti <mtosatti@redhat.com>
    Acked-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Andy Lutomirski <luto@amacapital.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 67ff8e53620c9aa941a7e4abbbfd921b0c4f97f0
Author: Andy Lutomirski <luto@amacapital.net>
Date:   Fri Dec 19 16:04:11 2014 -0800

    x86_64, vdso: Fix the vdso address randomization algorithm
    
    commit 394f56fe480140877304d342dec46d50dc823d46 upstream.
    
    The theory behind vdso randomization is that it's mapped at a random
    offset above the top of the stack.  To avoid wasting a page of
    memory for an extra page table, the vdso isn't supposed to extend
    past the lowest PMD into which it can fit.  Other than that, the
    address should be a uniformly distributed address that meets all of
    the alignment requirements.
    
    The current algorithm is buggy: the vdso has about a 50% probability
    of being at the very end of a PMD.  The current algorithm also has a
    decent chance of failing outright due to incorrect handling of the
    case where the top of the stack is near the top of its PMD.
    
    This fixes the implementation.  The paxtest estimate of vdso
    "randomisation" improves from 11 bits to 18 bits.  (Disclaimer: I
    don't know what the paxtest code is actually calculating.)
    
    It's worth noting that this algorithm is inherently biased: the vdso
    is more likely to end up near the end of its PMD than near the
    beginning.  Ideally we would either nix the PMD sharing requirement
    or jointly randomize the vdso and the stack to reduce the bias.
    
    In the meantime, this is a considerable improvement with basically
    no risk of compatibility issues, since the allowed outputs of the
    algorithm are unchanged.
    
    As an easy test, doing this:
    
    for i in `seq 10000`
      do grep -P vdso /proc/self/maps |cut -d- -f1
    done |sort |uniq -d
    
    used to produce lots of output (1445 lines on my most recent run).
    A tiny subset looks like this:
    
    7fffdfffe000
    7fffe01fe000
    7fffe05fe000
    7fffe07fe000
    7fffe09fe000
    7fffe0bfe000
    7fffe0dfe000
    
    Note the suspicious fe000 endings.  With the fix, I get a much more
    palatable 76 repeated addresses.
    
    Reviewed-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Andy Lutomirski <luto@amacapital.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 858788b7ba6dcfdd93c2fb8c9615e1aa11465ed6
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Mon Dec 22 10:43:39 2014 +0100

    kvm: x86: drop severity of "generation wraparound" message
    
    commit a629df7eadffb03e6ce4a8616e62ea29fdf69b6b upstream.
    
    Since most virtual machines raise this message once, it is a bit annoying.
    Make it KERN_DEBUG severity.
    
    Fixes: 7a2e8aaf0f6873b47bc2347f216ea5b0e4c258ab
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 52c05f86313cd1ef400de43587e5038aba1c4e5c
Author: Giedrius Statkevičius <giedrius.statkevicius@gmail.com>
Date:   Sat Dec 27 00:28:30 2014 +0200

    HID: Add a new id 0x501a for Genius MousePen i608X
    
    commit 2bacedada682d5485424f5227f27a3d5d6eb551c upstream.
    
    New Genius MousePen i608X devices have a new id 0x501a instead of the
    old 0x5011, so add a new #define with "_2" appended and change the
    required places.
    
    The remaining two checkpatch warnings about line length
    being over 80 characters are present in the original files too and this
    patch was made in the same style (no line break).
    
    Just adding a new id and changing the required places should make the
    new device work without any issues, according to the bug report at the
    following URL.
    
    This patch was made according to and fixes:
    https://bugzilla.kernel.org/show_bug.cgi?id=67111
    
    Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@gmail.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit eeffa922af58d9ab07e0a0e6f054cfa3da9a120f
Author: Karl Relton <karllinuxtest.relton@ntlworld.com>
Date:   Tue Dec 16 15:37:22 2014 +0000

    HID: add battery quirk for USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ISO keyboard
    
    commit da940db41dcf8c04166f711646df2f35376010aa upstream.
    
    The Apple Bluetooth wireless keyboard (sold in the UK) has always reported
    zero battery strength no matter what condition the batteries are actually
    in.  With this patch applied (applying the same quirk as for other Apple
    keyboards), the battery strength is now correctly reported.
    
    Signed-off-by: Karl Relton <karllinuxtest.relton@ntlworld.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d9cb0b304d0213b96c72ad0314f0d7918b6ac8f2
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Fri Jan 9 15:32:31 2015 +0300

    HID: roccat: potential out of bounds in pyra_sysfs_write_settings()
    
    commit 606185b20caf4c57d7e41e5a5ea4aff460aef2ab upstream.
    
    This is a static checker fix.  We write some binary settings to the
    sysfs file.  One of the settings is the "->startup_profile".  There
    isn't any checking to make sure it fits into the
    pyra->profile_settings[] array in the profile_activated() function.
    
    I added a check to pyra_sysfs_write_settings() in both places because
    I wasn't positive that the other callers were correct.
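
    A hedged sketch of the kind of check added (the array size and field
    names here are illustrative, not the roccat driver's): validate the
    profile index coming from user space before it is used to index the
    settings array.

      #include <errno.h>
      #include <stdio.h>

      #define NUM_PROFILES 5

      struct settings { unsigned char startup_profile; };

      static int profile_data[NUM_PROFILES];

      static int write_settings(const struct settings *s)
      {
              /* reject out-of-range indexes before touching the array */
              if (s->startup_profile >= NUM_PROFILES)
                      return -EINVAL;

              printf("profile %d selected: %d\n",
                     s->startup_profile, profile_data[s->startup_profile]);
              return 0;
      }

      int main(void)
      {
              struct settings bogus = { .startup_profile = 200 };
              return write_settings(&bogus) == -EINVAL ? 0 : 1;
      }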
    
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6d7f4a3b2a4537caef39fcbbd7b297d37a7932b5
Author: Gwendal Grignou <gwendal@chromium.org>
Date:   Thu Dec 11 16:02:45 2014 -0800

    HID: i2c-hid: prevent buffer overflow in early IRQ
    
    commit d1c7e29e8d276c669e8790bb8be9f505ddc48888 upstream.
    
    Before ->start() is called, bufsize is set to HID_MIN_BUFFER_SIZE,
    64 bytes.  While processing the IRQ, we were asking to receive up to
    wMaxInputLength bytes, which can be bigger than 64 bytes.
    
    Later, when ->start is run, a proper bufsize will be calculated.
    
    Given that wMaxInputLength is said to be unreliable in other parts of the
    code, receive only as much as the buffer can hold, even if that results
    in truncated reports.
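
    In effect the IRQ path must never ask the device for more than the
    current buffer can hold; a minimal sketch of the clamp (names and values
    illustrative, not the i2c-hid code):

      #include <stdio.h>

      static unsigned int min_u(unsigned int a, unsigned int b)
      {
              return a < b ? a : b;
      }

      int main(void)
      {
              unsigned int bufsize = 64;            /* HID_MIN_BUFFER_SIZE */
              unsigned int wMaxInputLength = 256;   /* claimed by the device */

              /* buggy: request wMaxInputLength bytes into a 64-byte buffer;
               * fixed: request at most what the buffer can hold */
              unsigned int to_read = min_u(wMaxInputLength, bufsize);

              printf("reading %u bytes (report may be truncated)\n", to_read);
              return 0;
      }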
    
    Signed-off-by: Gwendal Grignou <gwendal@chromium.org>
    Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 199eb5ada239253fb48ba51e7bc4612c54b651b6
Author: Jean-Baptiste Maneyrol <jmaneyrol@invensense.com>
Date:   Thu Nov 20 00:46:37 2014 +0800

    HID: i2c-hid: fix race condition reading reports
    
    commit 6296f4a8eb86f9abcc370fb7a1a116b8441c17fd upstream.
    
    The current driver uses a common buffer for reading reports, both
    synchronously in i2c_hid_get_raw_report() and asynchronously in
    the interrupt handler.
    There is a race condition if an interrupt arrives immediately after
    the report is received in i2c_hid_get_raw_report(); the common
    buffer is overwritten by the interrupt handler with the new report,
    and i2c_hid_get_raw_report() then proceeds with the wrong data.

    Fix it by using a separate buffer for synchronous reports.
    
    Signed-off-by: Jean-Baptiste Maneyrol <jmaneyrol@invensense.com>
    [Antonio Borneo: cleanup, rebase to v3.17, submit mainline]
    Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
    Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit bdc69c2acca3f51e7f572c1e2a823894b16572e9
Author: Jens Axboe <axboe@fb.com>
Date:   Mon Nov 24 15:02:42 2014 -0700

    blk-mq: use 'nr_cpu_ids' as highest CPU ID count for hwq <-> cpu map
    
    commit a33c1ba2913802b6fb23e974bb2f6a4e73c8b7ce upstream.
    
    We currently use num_possible_cpus(), but that breaks on sparc64 where
    the CPU ID space is discontiguous. Use nr_cpu_ids as the highest CPU ID
    instead, so we don't end up reading from invalid memory.
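
    The difference matters as soon as CPU IDs are sparse; a small sketch with
    illustrative values (not the real sparc64 topology): with IDs {0, 8, 9}
    there are three possible CPUs but valid IDs go up to 9, so a map indexed
    by CPU ID needs nr_cpu_ids entries, not num_possible_cpus() entries.

      #include <stdio.h>

      #define NR_CPU_IDS 10                     /* highest CPU ID + 1 */

      int main(void)
      {
              /* discontiguous CPU IDs, as can happen on sparc64 */
              const int cpu_ids[] = { 0, 8, 9 };
              const int num_possible = 3;       /* num_possible_cpus() */
              int map[NR_CPU_IDS] = { 0 };      /* sized by ID, not by count */
              int i;

              for (i = 0; i < num_possible; i++)
                      map[cpu_ids[i]] = 1;      /* CPU 9 is still in bounds */

              printf("map has %d slots for %d possible CPUs\n",
                     NR_CPU_IDS, num_possible);
              return 0;
      }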
    
    Signed-off-by: Jens Axboe <axboe@fb.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 4ce9fbccfb9ee64bec6ac125150e2beac5f0c42f
Author: Jiang Liu <jiang.liu@linux.intel.com>
Date:   Wed Nov 26 09:42:10 2014 +0800

    iommu/vt-d: Fix an off-by-one bug in __domain_mapping()
    
    commit cc4f14aa170d895c9a43bdb56f62070c8a6da908 upstream.
    
    There's an off-by-one bug in function __domain_mapping(), which may
    trigger the BUG_ON(nr_pages < lvl_pages) when
    	(nr_pages + 1) & superpage_mask == 0
    
    The issue was introduced by commit 9051aa0268dc "intel-iommu: Combine
    domain_pfn_mapping() and domain_sg_mapping()", which sets sg_res to
    "nr_pages + 1" to avoid some of the 'sg_res==0' code paths.
    
    It's safe to remove extra "+1" because sg_res is only used to calculate
    page size now.
    
    Reported-And-Tested-by: Sudeep Dutt <sudeep.dutt@intel.com>
    Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
    Acked-By: David Woodhouse <David.Woodhouse@intel.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit e69001acba6725963d0bc5476ec380f56bd65932
Author: Richard Weinberger <richard@nod.at>
Date:   Thu Nov 6 16:47:49 2014 +0100

    UBI: Fix double free after do_sync_erase()
    
    commit aa5ad3b6eb8feb2399a5d26c8fb0060561bb9534 upstream.
    
    If the erase worker is unable to erase a PEB it will
    free the ubi_wl_entry itself.
    The failing ubi_wl_entry must not be free()'d again after
    do_sync_erase() returns.
    
    Signed-off-by: Richard Weinberger <richard@nod.at>
    Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 03b5def086df33c93fec035768ea864488c395b0
Author: Richard Weinberger <richard@nod.at>
Date:   Mon Oct 27 00:46:11 2014 +0100

    UBI: Fix invalid vfree()
    
    commit f38aed975c0c3645bbdfc5ebe35726e64caaf588 upstream.
    
    The logic of vfree()'ing vol->upd_buf is tied to vol->updating.
    In ubi_start_update() vol->updating is set long before vmalloc()'ing
    vol->upd_buf. If we encounter a write failure in ubi_start_update()
    before vmalloc(), the UBI device release function will try to vfree()
    vol->upd_buf because vol->updating is set.
    Fix this by allocating vol->upd_buf directly after setting vol->updating.
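
    The ordering problem in miniature (a userspace analogue with malloc/free,
    not the UBI code): if the "updating" flag can be observed set while the
    buffer does not yet exist, a release path keyed on that flag frees a
    pointer that was never allocated. The fix keeps the flag and the
    allocation adjacent so no failure point sits between them; the analogue
    below closes the same window by setting the flag only once the
    allocation has succeeded.

      #include <stdlib.h>

      struct vol { int updating; char *upd_buf; };

      static void release(struct vol *v)
      {
              /* release path frees the buffer whenever 'updating' is set */
              if (v->updating) {
                      free(v->upd_buf);
                      v->upd_buf = NULL;
                      v->updating = 0;
              }
      }

      static int start_update(struct vol *v, int fail_early)
      {
              char *buf = malloc(4096);
              if (!buf)
                      return -1;
              if (fail_early) {          /* e.g. a write failure */
                      free(buf);
                      return -1;
              }
              v->upd_buf = buf;
              v->updating = 1;           /* only now may release() free it */
              return 0;
      }

      int main(void)
      {
              struct vol v = { 0, NULL };

              start_update(&v, 1);       /* failure before the flag is set */
              release(&v);               /* no bogus free of an unset buffer */
              return 0;
      }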
    
    Fixes:
    [   31.559338] UBI warning: vol_cdev_release: update of volume 2 not finished, volume is damaged
    [   31.559340] ------------[ cut here ]------------
    [   31.559343] WARNING: CPU: 1 PID: 2747 at mm/vmalloc.c:1446 __vunmap+0xe3/0x110()
    [   31.559344] Trying to vfree() nonexistent vm area (ffffc90001f2b000)
    [   31.559345] Modules linked in:
    [   31.565620]  0000000000000bba ffff88002a0cbdb0 ffffffff818f0497 ffff88003b9ba148
    [   31.566347]  ffff88002a0cbde0 ffffffff8156f515 ffff88003b9ba148 0000000000000bba
    [   31.567073]  0000000000000000 0000000000000000 ffff88002a0cbe88 ffffffff8156c10a
    [   31.567793] Call Trace:
    [   31.568034]  [<ffffffff818f0497>] dump_stack+0x4e/0x7a
    [   31.568510]  [<ffffffff8156f515>] ubi_io_write_vid_hdr+0x155/0x160
    [   31.569084]  [<ffffffff8156c10a>] ubi_eba_write_leb+0x23a/0x870
    [   31.569628]  [<ffffffff81569b36>] vol_cdev_write+0x226/0x380
    [   31.570155]  [<ffffffff81179265>] vfs_write+0xb5/0x1f0
    [   31.570627]  [<ffffffff81179f8a>] SyS_pwrite64+0x6a/0xa0
    [   31.571123]  [<ffffffff818fde12>] system_call_fastpath+0x16/0x1b
    
    Signed-off-by: Richard Weinberger <richard@nod.at>
    Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2e5df19fc0a36c8e1164f82762c2262140d11cb1
Author: Tony Lindgren <tony@atomide.com>
Date:   Tue Sep 16 13:50:01 2014 -0700

    pstore-ram: Allow optional mapping with pgprot_noncached
    
    commit 027bc8b08242c59e19356b4b2c189f2d849ab660 upstream.
    
    On some ARMs the memory can be mapped pgprot_noncached() and still
    work for atomic operations. As pointed out by Colin Cross
    <ccross@android.com>, in some cases you do want to use
    pgprot_noncached(), if the SoC supports it, to see a debug printk
    just before a write that hangs the system.
    
    On ARMs, the atomic operations on strongly ordered memory are
    implementation defined. So let's provide an optional kernel parameter
    for configuring pgprot_noncached(), and use pgprot_writecombine() by
    default.
    
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Rob Herring <robherring2@gmail.com>
    Cc: Randy Dunlap <rdunlap@infradead.org>
    Cc: Anton Vorontsov <anton@enomsg.org>
    Cc: Colin Cross <ccross@android.com>
    Cc: Olof Johansson <olof@lixom.net>
    Cc: Russell King <linux@arm.linux.org.uk>
    Acked-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit e931caa1f7ea228e7f2bb930a7deb6d012ef8b06
Author: Rob Herring <robherring2@gmail.com>
Date:   Fri Sep 12 11:32:24 2014 -0700

    pstore-ram: Fix hangs by using write-combine mappings
    
    commit 7ae9cb81933515dc7db1aa3c47ef7653717e3090 upstream.
    
    Currently, trying to use pstore on at least ARMs can hang as we're
    mapping the persistent RAM with pgprot_noncached().
    
    On ARMs, pgprot_noncached() will actually make the memory strongly
    ordered, and as the atomic operations pstore uses are implementation
    defined for strongly ordered memory, they may not work. So basically
    atomic operations have undefined behavior on ARM for device or strongly
    ordered memory types.
    
    Let's fix the issue by using write-combine variants for mappings. This
    corresponds to normal, non-cacheable memory on ARM. For many other
    architectures, this change does not change the mapping type as by
    default we have:
    
    #define pgprot_writecombine pgprot_noncached
    
    The reason why pgprot_noncached() was originally used for pstore
    is that Colin Cross <ccross@android.com> had observed lost
    debug prints right before a device-hanging write operation on some
    systems. For the platforms supporting pgprot_noncached(), we can
    add an optional configuration option to support that. But let's
    get pstore working first before adding new features.
    
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Anton Vorontsov <cbouatmailru@gmail.com>
    Cc: Colin Cross <ccross@android.com>
    Cc: Olof Johansson <olof@lixom.net>
    Cc: linux-kernel@vger.kernel.org
    Acked-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Rob Herring <rob.herring@calxeda.com>
    [tony@atomide.com: updated description]
    Signed-off-by: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8a5db4152bd90ac0bdd6fa52fd5990685f692194
Author: Myron Stowe <myron.stowe@redhat.com>
Date:   Thu Oct 30 11:54:37 2014 -0600

    PCI: Restore detection of read-only BARs
    
    commit 36e8164882ca6d3c41cb91e6f09a3ed236841f80 upstream.
    
    Commit 6ac665c63dca ("PCI: rewrite PCI BAR reading code") masked off
    low-order bits from 'l', but not from 'sz'.  Both are passed to pci_size(),
    which compares 'base == maxbase' to check for read-only BARs.  The masking
    of 'l' means that comparison will never be 'true', so the check for
    read-only BARs no longer works.
    
    Resolve this by also masking off the low-order bits of 'sz' before passing
    it into pci_size() as 'maxbase'.  With this change, pci_size() will once
    again catch the problems that have been encountered to date:
    
      - AGP aperture BAR of AMD-7xx host bridges: if the AGP window is
        disabled, this BAR is read-only and read as 0x00000008 [1]
    
      - BARs 0-4 of ALi IDE controllers can be non-zero and read-only [1]
    
      - Intel Sandy Bridge - Thermal Management Controller [8086:0103];
        BAR 0 returning 0xfed98004 [2]
    
      - Intel Xeon E5 v3/Core i7 Power Control Unit [8086:2fc0];
        BAR 0 returning 0x00001a [3]
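
    The masking asymmetry is plain C (illustrative values, not the probe
    code): once the flag bits are stripped from one operand but not the
    other, the equality test that detects a read-only BAR can never succeed.

      #include <stdio.h>

      #define PCI_BASE_ADDRESS_MEM_MASK (~0x0fU)   /* strip the low flag bits */

      int main(void)
      {
              unsigned int l  = 0x00000008;   /* value read back from the BAR */
              unsigned int sz = 0x00000008;   /* value read back after writing ~0 */

              unsigned int base    = l  & PCI_BASE_ADDRESS_MEM_MASK;
              unsigned int maxbase = sz;                       /* old, unmasked */

              printf("unmasked compare: %s\n",
                     base == maxbase ? "read-only BAR" : "missed");

              maxbase = sz & PCI_BASE_ADDRESS_MEM_MASK;        /* the fix */
              printf("masked compare:   %s\n",
                     base == maxbase ? "read-only BAR" : "missed");
              return 0;
      }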
    
    Link: [1] https://git.kernel.org/cgit/linux/kernel/git/tglx/history.git/commit/drivers/pci/probe.c?id=1307ef6621991f1c4bc3cec1b5a4ebd6fd3d66b9 ("PCI: probing read-only BARs" (pre-git))
    Link: [2] https://bugzilla.kernel.org/show_bug.cgi?id=43331
    Link: [3] https://bugzilla.kernel.org/show_bug.cgi?id=85991
    Reported-by: William Unruh <unruh@physics.ubc.ca>
    Reported-by: Martin Lucina <martin@lucina.net>
    Signed-off-by: Myron Stowe <myron.stowe@redhat.com>
    Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
    CC: Matthew Wilcox <willy@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3b81a0747a2d44faf3a3926d27110a2d5c79604a
Author: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Date:   Fri Dec 5 10:01:15 2014 +0530

    powerpc/book3s: Fix partial invalidation of TLBs in MCE code.
    
    commit 682e77c861c4c60f79ffbeae5e1938ffed24a575 upstream.
    
    The existing MCE code calls the flush_tlb hook with IS=0 (single page),
    resulting in partial invalidation of TLBs, which is not right. This patch
    fixes that by passing IS=0xc00 to invalidate the whole TLB for successful
    recovery from TLB and ERAT errors.
    
    Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3d352acf5b50f58c1c0f5b2d616790f450747436
Author: Anton Blanchard <anton@samba.org>
Date:   Tue Nov 11 09:12:28 2014 +1100

    powerpc: Fix bad NULL pointer check in udbg_uart_getc_poll()
    
    commit cd32e2dcc9de6c27ecbbfc0e2079fb64b42bad5f upstream.
    
    We have some code in udbg_uart_getc_poll() that tries to protect
    against a NULL udbg_uart_in, but gets it all wrong.
    
    Found with the LLVM static analyzer (scan-build).
    
    Fixes: 309257484cc1 ("powerpc: Cleanup udbg_16550 and add support for LPC PIO-only UARTs")
    Signed-off-by: Anton Blanchard <anton@samba.org>
    [mpe: Add some newlines for readability while we're here]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 10fedcacab0652f4391ab865f525de5dcaba44f9
Author: Andrew Jackson <Andrew.Jackson@arm.com>
Date:   Fri Dec 19 16:18:05 2014 +0000

    ASoC: dwc: Ensure FIFOs are flushed to prevent channel swap
    
    commit 3475c3d034d7f276a474c8bd53f44b48c8bf669d upstream.
    
    Flush the FIFOs when the stream is prepared for use.  This avoids
    an inadvertent swapping of the left/right channels if the FIFOs are
    not empty at startup.
    
    Signed-off-by: Andrew Jackson <Andrew.Jackson@arm.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ca8e3d0a0978f00710b00a56d0b737ec1a2a650c
Author: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Date:   Mon Nov 24 15:32:36 2014 +0200

    ASoC: max98090: Fix ill-defined sidetone route
    
    commit 48826ee590da03e9882922edf96d8d27bdfe9552 upstream.
    
    Commit 5fe5b767dc6f ("ASoC: dapm: Do not pretend to support controls for non
    mixer/mux widgets") revealed an ill-defined control in a route between
    "STENL Mux" and the DACs in max98090.c:
    
    max98090 i2c-193C9890:00: Control not supported for path STENL Mux -> [NULL] -> DACL
    max98090 i2c-193C9890:00: ASoC: no dapm match for STENL Mux --> NULL --> DACL
    max98090 i2c-193C9890:00: ASoC: Failed to add route STENL Mux -> NULL -> DACL
    max98090 i2c-193C9890:00: Control not supported for path STENL Mux -> [NULL] -> DACR
    max98090 i2c-193C9890:00: ASoC: no dapm match for STENL Mux --> NULL --> DACR
    max98090 i2c-193C9890:00: ASoC: Failed to add route STENL Mux -> NULL -> DACR
    
    Since there is no control between "STENL Mux" and the DACs, the control
    name must be NULL, not "NULL".
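
    In DAPM route triples the middle entry is the control name; a sketch of
    the difference, using a simplified stand-in for the route struct (not the
    real snd_soc_dapm_route definition):

      /* simplified stand-in: { sink, control, source } */
      struct route { const char *sink; const char *control; const char *source; };

      /* wrong: "NULL" names a (nonexistent) control called N-U-L-L */
      static const struct route bad  = { "DACL", "NULL", "STENL Mux" };

      /* right: a NULL pointer means "no control on this path" */
      static const struct route good = { "DACL", NULL,   "STENL Mux" };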
    
    Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b46be7b2d374114b2a045f6a4986ffc99b23b895
Author: Lars-Peter Clausen <lars@metafoo.de>
Date:   Wed Nov 19 18:29:02 2014 +0100

    ASoC: sigmadsp: Refuse to load firmware files with a non-supported version
    
    commit 50c0f21b42dd4cd02b51f82274f66912d9a7fa32 upstream.
    
    Check the version field of the firmware header to make sure we do not
    accidentally try to parse a firmware file with a different layout.
    Trying to do so can result in loading invalid firmware code to the device.
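
    Roughly, the loader should bail out before parsing anything when the
    header advertises an unknown layout (a generic sketch, not the sigmadsp
    file format itself; the magic and version values are made up):

      #include <errno.h>
      #include <stdio.h>

      #define SUPPORTED_VERSION 1

      struct fw_header { unsigned char magic[8]; unsigned char version; };

      static int load_firmware(const struct fw_header *hdr)
      {
              /* refuse anything with a different layout up front */
              if (hdr->version != SUPPORTED_VERSION) {
                      fprintf(stderr, "unsupported firmware version %d\n",
                              hdr->version);
                      return -EINVAL;
              }
              /* ... only now parse the rest of the file ... */
              return 0;
      }

      int main(void)
      {
              struct fw_header bad = { "FWMAGIC", 2 };
              return load_firmware(&bad) == -EINVAL ? 0 : 1;
      }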
    
    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 27ce0db6786f25ca4d3923d7e514d198b3317294
Author: Felix Fietkau <nbd@openwrt.org>
Date:   Sun Nov 30 21:52:57 2014 +0100

    ath5k: fix hardware queue index assignment
    
    commit 9e4982f6a51a2442f1bb588fee42521b44b4531c upstream.
    
    Like with ath9k, ath5k queues also need to be ordered by priority.
    queue_info->tqi_subtype already contains the correct index, so use it
    instead of relying on the order of ath5k_hw_setup_tx_queue calls.
    
    Signed-off-by: Felix Fietkau <nbd@openwrt.org>
    Signed-off-by: John W. Linville <linville@tuxdriver.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f27eaf361fddd262301dcc572e0a956e421c1de8
Author: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Date:   Mon Dec 1 16:44:09 2014 +0200

    iwlwifi: mvm: update values for Smart Fifo
    
    commit b4c82adcba8cb4b23068a6b800ca98da3bee6888 upstream.
    
    Interoperability issues were identified and root-caused to the Smart
    Fifo watermarks. These issues arose with the NetGear R7000. Fix this.
    
    Fixes: 1f3b0ff8ecce ("iwlwifi: mvm: Add Smart FIFO support")
    Reviewed-by: Johannes Berg <johannes.berg@intel.com>
    Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3394691d34fc9baaaff3637b858b9911cd2b327d
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Fri Nov 21 16:56:12 2014 +0000

    swiotlb-xen: pass dev_addr to swiotlb_tbl_unmap_single
    
    commit 2c3fc8d26dd09b9d7069687eead849ee81c78e46 upstream.
    
    We need to pass the pointer within the swiotlb internal buffer to the
    swiotlb library; in the case of xen_unmap_single that is dev_addr, not
    paddr.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f4862f0e93b9095e8e75d1f8663cec197d34fb58
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Fri Nov 21 16:55:12 2014 +0000

    swiotlb-xen: call xen_dma_sync_single_for_device when appropriate
    
    commit 9490c6c67e2f41760de8ece4e4f56f75f84ceb9e upstream.
    
    In xen_swiotlb_sync_single we always call xen_dma_sync_single_for_cpu,
    even when we should call xen_dma_sync_single_for_device. Fix that.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 5e8ad2ed959094a151ecf635b8ea3d59caedb095
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Fri Nov 21 11:10:39 2014 +0000

    swiotlb-xen: remove BUG_ON in xen_bus_to_phys
    
    commit c884227eaae9936f8ecbde6e1387bccdab5f4e90 upstream.
    
    On x86 truncation cannot occur because config XEN depends on X86_64 ||
    (X86_32 && X86_PAE).
    
    On ARM truncation can occur without CONFIG_ARM_LPAE, when the dma
    operation involves foreign grants. However in that case the physical
    address returned by xen_bus_to_phys is actually invalid (there is no mfn
    to pfn tracking for foreign grants on ARM) and it is not used.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 56da36633891eea7fae2afa1ce8cbde0aabcd5b8
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Fri Nov 21 11:09:39 2014 +0000

    swiotlb-xen: pass dev_addr to xen_dma_unmap_page and xen_dma_sync_single_for_cpu
    
    commit d6883e6f32e07ef2cc974753ba00927de099e6d7 upstream.
    
    xen_dma_unmap_page and xen_dma_sync_single_for_cpu take a dma_addr_t
    handle as argument, not a physical address.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 349dec7ebed3bef97e64609a98149964c1d24d71
Author: Stephane Grosjean <s.grosjean@peak-system.com>
Date:   Fri Nov 28 14:08:48 2014 +0100

    can: peak_usb: fix memset() usage
    
    commit dc50ddcd4c58a5a0226038307d6ef884bec9f8c2 upstream.
    
    This patch fixes a misplaced call to memset() that fills the request
    buffer with 0. The problem showed up when sending PCAN_USBPRO_REQ_FCT
    requests: the content set by the caller was lost.
    
    With this patch, the memory area is zeroed only when requesting info
    from the device.
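
    The placement issue is ordinary C (a toy model, not the peak_usb code):
    zeroing the buffer after the caller has filled it wipes the request, so
    the memset() has to happen before the caller's data goes in, and only on
    the paths that actually want a cleared buffer.

      #include <stdio.h>
      #include <string.h>

      static char req[16];

      int main(void)
      {
              /* fixed order: zero first (and only when needed)... */
              memset(req, 0, sizeof(req));

              /* ...then let the caller fill in the request content */
              strcpy(req, "FCT payload");

              /* a memset() here, after the copy, would wipe the payload */
              printf("sending: %s\n", req);
              return 0;
      }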
    
    Signed-off-by: Stephane Grosjean <s.grosjean@peak-system.com>
    Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a9690a5a68838883bebadd334354cf51ea6be5d9
Author: Stephane Grosjean <s.grosjean@peak-system.com>
Date:   Fri Nov 28 13:49:10 2014 +0100

    can: peak_usb: fix cleanup sequence order in case of error during init
    
    commit af35d0f1cce7a990286e2b94c260a2c2d2a0e4b0 upstream.
    
    This patch makes the cleanup instructions run in the correct reverse
    order when any failure occurs during the initialization steps.
    It also adds the missing unregistration call for the CAN device if the
    failure appears after it has been registered.
    
    Signed-off-by: Stephane Grosjean <s.grosjean@peak-system.com>
    Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit cc0d0d5e64a140ad26bb6f37bf7b224e6fde3312
Author: Felix Fietkau <nbd@openwrt.org>
Date:   Sun Nov 30 20:38:41 2014 +0100

    ath9k: fix BE/BK queue order
    
    commit 78063d81d353e10cbdd279c490593113b8fdae1c upstream.
    
    Hardware queues are ordered by priority. Use queue index 0 for BK, which
    has lower priority than BE.
    
    Signed-off-by: Felix Fietkau <nbd@openwrt.org>
    Signed-off-by: John W. Linville <linville@tuxdriver.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 51efefe49457ec85520f32f5797ce8520474bf6a
Author: Felix Fietkau <nbd@openwrt.org>
Date:   Sun Nov 30 20:38:40 2014 +0100

    ath9k_hw: fix hardware queue allocation
    
    commit ad8fdccf9c197a89e2d2fa78c453283dcc2c343f upstream.
    
    The driver passes the desired hardware queue index for a WMM data queue
    in qinfo->tqi_subtype. This was ignored in ath9k_hw_setuptxqueue, which
    instead relied on the order in which the function is called.
    
    Reported-by: Hubert Feurstein <h.feurstein@gmail.com>
    Signed-off-by: Felix Fietkau <nbd@openwrt.org>
    Signed-off-by: John W. Linville <linville@tuxdriver.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 173b04ae6c52c5d38ed2cec2cdd631bd38b77875
Author: Xue jiufei <xuejiufei@huawei.com>
Date:   Thu Jan 8 14:32:23 2015 -0800

    ocfs2: fix the wrong directory passed to ocfs2_lookup_ino_from_name() when link file
    
    commit 53dc20b9a3d928b0744dad5aee65b610de1cc85d upstream.
    
    In ocfs2_link(), the parent directory inode passed to function
    ocfs2_lookup_ino_from_name() is wrong.  Parameter dir is the parent of
    new_dentry, not old_dentry.  We should get old_dir from old_dentry and
    look up old_dentry in old_dir in case another node removes the old dentry.
    
    With this change, hard linking works again, when paths are relative with
    at least one subdirectory.  This is how the problem was reproducible:
    
      # mkdir a
      # mkdir b
      # touch a/test
      # ln a/test b/test
      ln: failed to create hard link `b/test' => `a/test': No such file or  directory
    
    However when creating links in the same dir, it worked well.
    
    Now the link gets created.
    
    Fixes: 0e048316ff57 ("ocfs2: check existence of old dentry in ocfs2_link()")
    Signed-off-by: joyce.xue <xuejiufei@huawei.com>
    Reported-by: Szabo Aron - UBIT <aron@ubit.hu>
    Cc: Mark Fasheh <mfasheh@suse.com>
    Cc: Joel Becker <jlbec@evilplan.org>
    Tested-by: Aron Szabo <aron@ubit.hu>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8d3892789193b9c835d3d1f53bf478536960fd2c
Author: Junxiao Bi <junxiao.bi@oracle.com>
Date:   Thu Dec 18 16:17:37 2014 -0800

    ocfs2: fix journal commit deadlock
    
    commit 136f49b9171074872f2a14ad0ab10486d1ba13ca upstream.
    
    For buffered writes, the page lock is taken in write_begin and released in
    write_end.  In ocfs2_write_end_nolock(), before it unlocks the page in
    ocfs2_free_write_ctxt(), it calls ocfs2_run_deallocs(), which asks for
    the read lock of journal->j_trans_barrier.  Holding the page lock while
    asking for journal->j_trans_barrier breaks the locking order.

    This will cause a deadlock with the journal commit threads: ocfs2cmt
    takes the write lock of journal->j_trans_barrier first, then wakes up
    kjournald2 to do the commit work, and finally waits until it is done.
    To commit the journal, kjournald2 needs to flush data first, which
    requires taking the page cache lock.

    Since some ocfs2 cluster locks are held by the writing process, this
    deadlock may hang the whole cluster.

    Unlock the pages before ocfs2_run_deallocs() to fix the locking order,
    and also move the unlock before ocfs2_commit_trans() so that the page
    lock is released before j_trans_barrier, preserving the unlocking order.
    
    Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
    Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com>
    Reviewed-by: Mark Fasheh <mfasheh@suse.de>
    Cc: Joel Becker <jlbec@evilplan.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ecbd0b75d88fcd36464a2395bc90d4db0a5e5cc4
Author: Arnaud Ebalard <arno@natisbad.org>
Date:   Wed Dec 10 15:54:02 2014 -0800

    drivers/rtc/rtc-isl12057.c: fix masking of register values
    
    commit 5945b2880363ed7648e62aabba770ec57ff2a316 upstream.
    
    When Intersil ISL12057 support was added by commit 70e123373c05 ("rtc: Add
    support for Intersil ISL12057 I2C RTC chip"), two masks for time register
    values imported from the device were either wrong or omitted, allowing
    additional bits from those registers to affect the read values:

     - the mask for the hour register value when reading it in AM/PM mode. As
       AM/PM mode is not the usual mode used by the driver, this error
       would only have an impact on an externally configured RTC hour
       later read by the driver.
     - the mask for the month value. The lack of masking would provide an
       erroneous value if the century bit is set.
    
    This patch fixes those two masks.
    
    Fixes: 70e123373c05 ("rtc: Add support for Intersil ISL12057 I2C RTC chip")
    Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Alessandro Zummo <a.zummo@towertech.it>
    Cc: Peter Huewe <peter.huewe@infineon.com>
    Cc: Linus Walleij <linus.walleij@linaro.org>
    Cc: Thierry Reding <treding@nvidia.com>
    Cc: Mark Brown <broonie@kernel.org>
    Cc: Grant Likely <grant.likely@linaro.org>
    Acked-by: Uwe Kleine-König <uwe@kleine-koenig.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit caa2e48c4da0dd32c5abd8271075c666d2db866a
Author: Guo Zeng <guo.zeng@csr.com>
Date:   Wed Dec 10 15:52:24 2014 -0800

    drivers/rtc/rtc-sirfsoc.c: move hardware initilization earlier in probe
    
    commit 0e95325525c4383565cea4f402f15a3113162d05 upstream.
    
    Move RTC registration to after hardware initialization.  The reason
    is that devm_rtc_device_register() will do a read_time(), which is a
    callback accessing the hardware.  This sometimes causes a hang in the
    hardware-related callback.
    
    Signed-off-by: Guo Zeng <guo.zeng@csr.com>
    Signed-off-by: Barry Song <Baohua.Song@csr.com>
    Cc: Alessandro Zummo <a.zummo@towertech.it>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>