commit 02d86837352952eb7ca4e9370fef944a16f69206
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Sat May 20 14:50:04 2017 +0200

    Linux 4.11.2

commit bbc105f3878b9e69479bfc0a758b4fec64c71cd0
Author: Kees Cook <keescook@chromium.org>
Date:   Mon Mar 6 12:42:12 2017 -0800

    pstore: Shut down worker when unregistering
    
    commit 6330d5534786d5315d56d558aa6d20740f97d80a upstream.
    
    When built as a module and running with update_ms >= 0, pstore will Oops
    during module unload since the work timer is still running. This makes sure
    the worker is stopped before unloading.
    
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ed8834ea4f43cf7854de62f8985b01c088ba8a75
Author: Kees Cook <keescook@chromium.org>
Date:   Sun Mar 5 22:08:58 2017 -0800

    pstore: Use dynamic spinlock initializer
    
    commit e9a330c4289f2ba1ca4bf98c2b430ab165a8931b upstream.
    
    The per-prz spinlock should be using the dynamic initializer so that
    lockdep can correctly track it. Without this, under lockdep, we get a
    warning at boot that the lock is in non-static memory.
    
    Fixes: 109704492ef6 ("pstore: Make spinlock per zone instead of global")
    Fixes: 76d5692a5803 ("pstore: Correctly initialize spinlock and flags")
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f25c78c879fac44e549918f3b7201c757201cc9d
Author: Ankit Kumar <ankit@linux.vnet.ibm.com>
Date:   Thu Apr 27 17:03:13 2017 +0530

    pstore: Fix flags to enable dumps on powerpc
    
    commit 041939c1ec54208b42f5cd819209173d52a29d34 upstream.
    
    After commit c950fd6f201a, the kernel registers the pstore write backend
    based on the flags set. Pstore write is broken on powerpc because the
    PSTORE_FLAGS_DMESG flag is not set for the powerpc architecture. On panic,
    the kernel doesn't write the message to /fs/pstore/dmesg* (the entry
    doesn't get created at all).
    
    This patch enables pstore write for powerpc architecture by setting
    PSTORE_FLAGS_DMESG flag.
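The fix itself is a one-line flag assignment in the powerpc nvram backend. A minimal sketch of the pattern (struct, function name, and the flag's numeric value are illustrative, not the kernel's actual definitions):

```c
#include <assert.h>

/* Illustrative flag bit modeled on pstore's PSTORE_FLAGS_DMESG; the
 * real kernel value may differ. */
#define PSTORE_FLAGS_DMESG (1 << 0)

struct pstore_info_sketch {
	unsigned int flags;
};

/* Before the fix, the powerpc backend left .flags at 0, so the dmesg
 * front end was never registered; the fix sets the bit before the
 * backend is registered with pstore. */
static void nvram_pstore_init_sketch(struct pstore_info_sketch *psinfo)
{
	psinfo->flags = PSTORE_FLAGS_DMESG;
}
```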
    
    Fixes: c950fd6f201a ("pstore: Split pstore fragile flags")
    Signed-off-by: Ankit Kumar <ankit@linux.vnet.ibm.com>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit af0eb80e8eff79a6c70a1b2322e41141bed13a80
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Thu May 4 19:54:42 2017 -0700

    libnvdimm, pfn: fix 'npfns' vs section alignment
    
    commit d5483feda85a8f39ee2e940e279547c686aac30c upstream.
    
    Fix failures to create namespaces due to the vmem_altmap not advertising
    enough free space to store the memmap.
    
     WARNING: CPU: 15 PID: 8022 at arch/x86/mm/init_64.c:656 arch_add_memory+0xde/0xf0
     [..]
     Call Trace:
      dump_stack+0x63/0x83
      __warn+0xcb/0xf0
      warn_slowpath_null+0x1d/0x20
      arch_add_memory+0xde/0xf0
      devm_memremap_pages+0x244/0x440
      pmem_attach_disk+0x37e/0x490 [nd_pmem]
      nd_pmem_probe+0x7e/0xa0 [nd_pmem]
      nvdimm_bus_probe+0x71/0x120 [libnvdimm]
      driver_probe_device+0x2bb/0x460
      bind_store+0x114/0x160
      drv_attr_store+0x25/0x30
    
    In commit 658922e57b84 "libnvdimm, pfn: fix memmap reservation sizing"
    we arranged for the capacity to be allocated, but failed to also update
    the 'npfns' parameter. This leads to cases where there is enough
    capacity reserved to hold all the allocated sections, but
    vmemmap_populate_hugepages() still encounters -ENOMEM from
    altmap_alloc_block_buf().
    
    This fix is a stop-gap until we can teach the core memory hotplug
    implementation to permit sub-section hotplug.
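The underlying arithmetic is rounding the pfn count up to a full memory section so that the memmap reservation covers every section that will be hotplugged. A hedged sketch of that rounding (the 128 MB section size and 4 KB page size are x86-64 defaults; names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* x86-64 sparsemem section size is 128 MB; with 4 KB pages that gives
 * the number of pfns per section. */
#define SECTION_SIZE (128ULL << 20)
#define PAGE_SIZE_4K 4096ULL
#define PFNS_PER_SECTION (SECTION_SIZE / PAGE_SIZE_4K)

/* Round a pfn count up to a whole section, mirroring the idea that
 * 'npfns' must match the section-aligned capacity that was reserved. */
static uint64_t section_align_npfns(uint64_t npfns)
{
	return ((npfns + PFNS_PER_SECTION - 1) / PFNS_PER_SECTION)
		* PFNS_PER_SECTION;
}
```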
    
    Fixes: 658922e57b84 ("libnvdimm, pfn: fix memmap reservation sizing")
    Reported-by: Anisha Allada <anisha.allada@intel.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a3ff3ebdf32a96dbb5873b72971e1148c8e260c5
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Fri Apr 28 22:05:14 2017 -0700

    libnvdimm: fix nvdimm_bus_lock() vs device_lock() ordering
    
    commit 452bae0aede774f87bf56c28b6dd50b72c78986c upstream.
    
    A debug patch to turn the standard device_lock() into something that
    lockdep can analyze yielded the following:
    
     ======================================================
     [ INFO: possible circular locking dependency detected ]
     4.11.0-rc4+ #106 Tainted: G           O
     -------------------------------------------------------
     lt-libndctl/1898 is trying to acquire lock:
      (&dev->nvdimm_mutex/3){+.+.+.}, at: [<ffffffffc023c948>] nd_attach_ndns+0x178/0x1b0 [libnvdimm]
    
     but task is already holding lock:
      (&nvdimm_bus->reconfig_mutex){+.+.+.}, at: [<ffffffffc022e0b1>] nvdimm_bus_lock+0x21/0x30 [libnvdimm]
    
     which lock already depends on the new lock.
    
     the existing dependency chain (in reverse order) is:
    
     -> #1 (&nvdimm_bus->reconfig_mutex){+.+.+.}:
            lock_acquire+0xf6/0x1f0
            __mutex_lock+0x88/0x980
            mutex_lock_nested+0x1b/0x20
            nvdimm_bus_lock+0x21/0x30 [libnvdimm]
            nvdimm_namespace_capacity+0x1b/0x40 [libnvdimm]
            nvdimm_namespace_common_probe+0x230/0x510 [libnvdimm]
            nd_pmem_probe+0x14/0x180 [nd_pmem]
            nvdimm_bus_probe+0xa9/0x260 [libnvdimm]
    
     -> #0 (&dev->nvdimm_mutex/3){+.+.+.}:
            __lock_acquire+0x1107/0x1280
            lock_acquire+0xf6/0x1f0
            __mutex_lock+0x88/0x980
            mutex_lock_nested+0x1b/0x20
            nd_attach_ndns+0x178/0x1b0 [libnvdimm]
            nd_namespace_store+0x308/0x3c0 [libnvdimm]
            namespace_store+0x87/0x220 [libnvdimm]
    
    In this case '&dev->nvdimm_mutex/3' mirrors '&dev->mutex'.
    
    Fix this by replacing the use of device_lock() with nvdimm_bus_lock() to protect
    nd_{attach,detach}_ndns() operations.
    
    Fixes: 8c2f7e8658df ("libnvdimm: infrastructure for btt devices")
    Reported-by: Yi Zhang <yizhan@redhat.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit de21b800a6455b6745158facdf5417e3670e2cee
Author: Toshi Kani <toshi.kani@hpe.com>
Date:   Tue Apr 25 17:04:13 2017 -0600

    libnvdimm, pmem: fix a NULL pointer BUG in nd_pmem_notify
    
    commit b2518c78ce76896f0f8f7940bf02104b227e1709 upstream.
    
    The following BUG was observed when nd_pmem_notify() was called
    for a BTT device.  The use of a pmem_device pointer is not valid
    with BTT.
    
     BUG: unable to handle kernel NULL pointer dereference at 0000000000000030
     IP: nd_pmem_notify+0x30/0xf0 [nd_pmem]
     Call Trace:
      nd_device_notify+0x40/0x50
      child_notify+0x10/0x20
      device_for_each_child+0x50/0x90
      nd_region_notify+0x20/0x30
      nd_device_notify+0x40/0x50
      nvdimm_region_notify+0x27/0x30
      acpi_nfit_scrub+0x341/0x590 [nfit]
      process_one_work+0x197/0x450
      worker_thread+0x4e/0x4a0
      kthread+0x109/0x140
    
    Fix nd_pmem_notify() by setting nd_region and badblocks pointers
    properly for BTT.
    
    Cc: Vishal Verma <vishal.l.verma@intel.com>
    Fixes: 719994660c24 ("libnvdimm: async notification support")
    Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d2572f5b0104a07040e086a46c0e431bd3b06b9f
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon Apr 24 15:43:05 2017 -0700

    libnvdimm, region: fix flush hint detection crash
    
    commit bc042fdfbb92b5b13421316b4548e2d6e98eed37 upstream.
    
    In the case where a dimm does not have any associated flush hints, the
    ndrd->flush_wpq array may be uninitialized, leading to crashes with the
    following signature:
    
     BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
     IP: region_visible+0x10f/0x160 [libnvdimm]
    
     Call Trace:
      internal_create_group+0xbe/0x2f0
      sysfs_create_groups+0x40/0x80
      device_add+0x2d8/0x650
      nd_async_device_register+0x12/0x40 [libnvdimm]
      async_run_entry_fn+0x39/0x170
      process_one_work+0x212/0x6c0
      ? process_one_work+0x197/0x6c0
      worker_thread+0x4e/0x4a0
      kthread+0x10c/0x140
      ? process_one_work+0x6c0/0x6c0
      ? kthread_create_on_node+0x60/0x60
      ret_from_fork+0x31/0x40
    
    Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
    Fixes: f284a4f23752 ("libnvdimm: introduce nvdimm_flush() and nvdimm_has_flush()")
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d517be513311fd5800ed05d1e5f47b2337fbab9c
Author: Joeseph Chang <joechang@codeaurora.org>
Date:   Mon Mar 27 20:22:09 2017 -0600

    ipmi: Fix kernel panic at ipmi_ssif_thread()
    
    commit 6de65fcfdb51835789b245203d1bfc8d14cb1e06 upstream.
    
    msg_written_handler() may set ssif_info->multi_data to NULL
    when using ipmitool to write fru data.
    
    Before setting ssif_info->multi_data to NULL, add a new local
    pointer "data_to_send" and store the correct i2c data pointer in it,
    fixing the NULL pointer kernel panic and the incorrect
    ssif_info->multi_pos.
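The shape of the fix is a common one: capture the pointer in a local variable before the struct field is cleared. A simplified, hypothetical sketch (the struct and function are heavily reduced from the driver):

```c
#include <assert.h>
#include <stddef.h>

struct ssif_sketch {
	const unsigned char *multi_data; /* may be cleared mid-handler */
	size_t multi_pos;
};

/* Buggy shape: dereferencing info->multi_data after it was set to NULL
 * oopses. Fixed shape: take a local copy first, then clear the field. */
static const unsigned char *next_chunk(struct ssif_sketch *info, int last)
{
	const unsigned char *data_to_send = info->multi_data + info->multi_pos;

	if (last)
		info->multi_data = NULL; /* safe: pointer already saved */
	return data_to_send;
}
```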
    
    Signed-off-by: Joeseph Chang <joechang@codeaurora.org>
    Signed-off-by: Corey Minyard <cminyard@mvista.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8f0fde5bc9a58d5b0a0d8a3857fbc19b6d859eda
Author: Christoph Hellwig <hch@lst.de>
Date:   Tue Apr 25 13:39:54 2017 +0200

    libata: reject passthrough WRITE SAME requests
    
    commit c6ade20f5e50e188d20b711a618b20dd1d50457e upstream.
    
    The WRITE SAME to TRIM translation rewrites the DATA OUT buffer.  While
    the SCSI code accommodates this by passing a read-writable buffer,
    userspace applications don't cater for this behavior.  In fact it can
    be used to rewrite e.g. a read-only file through mmap and should be
    considered a security fix.
    
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a5938c093a1827a7262afc925a6a1032842af096
Author: Tejun Heo <tj@kernel.org>
Date:   Fri Apr 28 15:14:55 2017 -0400

    cgroup: fix spurious warnings on cgroup_is_dead() from cgroup_sk_alloc()
    
    commit a590b90d472f2c176c140576ee3ab44df7f67839 upstream.
    
    cgroup_get() is expected to be called only on live cgroups and triggers
    a warning on a dead cgroup; however, cgroup_sk_alloc() may be called
    while cloning a socket which is left in an empty and removed cgroup
    and thus may legitimately duplicate its reference on a dead cgroup.
    This currently triggers the following warning spuriously.
    
     WARNING: CPU: 14 PID: 0 at kernel/cgroup.c:490 cgroup_get+0x55/0x60
     ...
      [<ffffffff8107e123>] __warn+0xd3/0xf0
      [<ffffffff8107e20e>] warn_slowpath_null+0x1e/0x20
      [<ffffffff810ff465>] cgroup_get+0x55/0x60
      [<ffffffff81106061>] cgroup_sk_alloc+0x51/0xe0
      [<ffffffff81761beb>] sk_clone_lock+0x2db/0x390
      [<ffffffff817cce06>] inet_csk_clone_lock+0x16/0xc0
      [<ffffffff817e8173>] tcp_create_openreq_child+0x23/0x4b0
      [<ffffffff818601a1>] tcp_v6_syn_recv_sock+0x91/0x670
      [<ffffffff817e8b16>] tcp_check_req+0x3a6/0x4e0
      [<ffffffff81861ba3>] tcp_v6_rcv+0x693/0xa00
      [<ffffffff81837429>] ip6_input_finish+0x59/0x3e0
      [<ffffffff81837cb2>] ip6_input+0x32/0xb0
      [<ffffffff81837387>] ip6_rcv_finish+0x57/0xa0
      [<ffffffff81837ac8>] ipv6_rcv+0x318/0x4d0
      [<ffffffff817778c7>] __netif_receive_skb_core+0x2d7/0x9a0
      [<ffffffff81777fa6>] __netif_receive_skb+0x16/0x70
      [<ffffffff81778023>] netif_receive_skb_internal+0x23/0x80
      [<ffffffff817787d8>] napi_gro_frags+0x208/0x270
      [<ffffffff8168a9ec>] mlx4_en_process_rx_cq+0x74c/0xf40
      [<ffffffff8168b270>] mlx4_en_poll_rx_cq+0x30/0x90
      [<ffffffff81778b30>] net_rx_action+0x210/0x350
      [<ffffffff8188c426>] __do_softirq+0x106/0x2c7
      [<ffffffff81082bad>] irq_exit+0x9d/0xa0
      [<ffffffff8188c0e4>] do_IRQ+0x54/0xd0
      [<ffffffff8188a63f>] common_interrupt+0x7f/0x7f <EOI>
      [<ffffffff8173d7e7>] cpuidle_enter+0x17/0x20
      [<ffffffff810bdfd9>] cpu_startup_entry+0x2a9/0x2f0
      [<ffffffff8103edd1>] start_secondary+0xf1/0x100
    
    This patch renames the existing cgroup_get() with the dead cgroup
    warning to cgroup_get_live() after cgroup_kn_lock_live() and
    introduces the new cgroup_get() which doesn't check whether the cgroup
    is live or dead.
    
    All existing cgroup_get() users except for cgroup_sk_alloc() are
    converted to use cgroup_get_live().
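The split can be sketched in plain C: the renamed helper keeps the dead-cgroup warning, while the new plain getter only bumps the reference count. This is an illustrative stand-in (simplified struct, a flag standing in for WARN_ON), not the kernel code:

```c
#include <assert.h>

struct cgroup_sketch {
	int refcnt;
	int dead;
};

static int warned; /* stands in for a WARN_ON() firing */

/* After the patch: the plain getter just takes a reference, which is
 * legitimate even on a dead cgroup (e.g. from cgroup_sk_alloc())... */
static void cgroup_get(struct cgroup_sketch *cgrp)
{
	cgrp->refcnt++;
}

/* ...while the _live variant keeps the old sanity check for callers
 * that really do require a live cgroup. */
static void cgroup_get_live(struct cgroup_sketch *cgrp)
{
	if (cgrp->dead)
		warned = 1;
	cgrp->refcnt++;
}
```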
    
    Fixes: d979a39d7242 ("cgroup: duplicate cgroup reference when cloning sockets")
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Reported-by: Chris Mason <clm@fb.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 740f485dae696753d0c776abd1d5f6f58b4fb738
Author: Johan Hovold <johan@kernel.org>
Date:   Wed Mar 29 18:15:28 2017 +0200

    Bluetooth: hci_intel: add missing tty-device sanity check
    
    commit dcb9cfaa5ea9aa0ec08aeb92582ccfe3e4c719a9 upstream.
    
    Make sure to check the tty-device pointer before looking up the sibling
    platform device to avoid dereferencing a NULL-pointer when the tty is
    one end of a Unix98 pty.
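The fix boils down to a NULL check on the tty's device pointer before walking to its parent. A minimal sketch of the pattern (structs reduced to the two pointers involved; names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct device_sketch {
	struct device_sketch *parent;
};

struct tty_sketch {
	struct device_sketch *dev; /* NULL for one end of a Unix98 pty */
};

/* Mirrors the fix: bail out before dereferencing tty->dev->parent when
 * the tty has no underlying device. */
static struct device_sketch *get_sibling_dev(struct tty_sketch *tty)
{
	if (!tty->dev)
		return NULL;
	return tty->dev->parent;
}
```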
    
    Fixes: 74cdad37cd24 ("Bluetooth: hci_intel: Add runtime PM support")
    Fixes: 1ab1f239bf17 ("Bluetooth: hci_intel: Add support for platform driver")
    Cc: Loic Poulain <loic.poulain@intel.com>
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a86af7983d81ad4ff2de3bc1a2d05da8ce960d8c
Author: Johan Hovold <johan@kernel.org>
Date:   Wed Mar 29 18:15:27 2017 +0200

    Bluetooth: hci_bcm: add missing tty-device sanity check
    
    commit 95065a61e9bf25fb85295127fba893200c2bbbd8 upstream.
    
    Make sure to check the tty-device pointer before looking up the sibling
    platform device to avoid dereferencing a NULL-pointer when the tty is
    one end of a Unix98 pty.
    
    Fixes: 0395ffc1ee05 ("Bluetooth: hci_bcm: Add PM for BCM devices")
    Cc: Frederic Danis <frederic.danis@linux.intel.com>
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ded39dd9f00affd2170cf7eeb737f681c0f88b4f
Author: Szymon Janc <szymon.janc@codecoup.pl>
Date:   Mon Apr 24 18:25:04 2017 -0700

    Bluetooth: Fix user channel for 32bit userspace on 64bit kernel
    
    commit ab89f0bdd63a3721f7cd3f064f39fc4ac7ca14d4 upstream.
    
    Running 32-bit userspace on a 64-bit kernel results in MSG_CMSG_COMPAT
    (0x80000000) being set in the sendmsg flags, causing sendmsg to fail.
    Fix this by accounting for MSG_CMSG_COMPAT in the flags check in
    hci_sock_sendmsg.
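The pattern is to include the kernel-internal compat bit in the set of accepted flags rather than rejecting it as unknown. A hedged userspace sketch (MSG_CMSG_COMPAT is not exposed to userspace; these constant values and the exact flag set are assumptions, not the real hci_sock_sendmsg mask):

```c
#include <assert.h>

/* Assumed values for illustration */
#define MSG_DONTWAIT_SK    0x40U
#define MSG_NOSIGNAL_SK    0x4000U
#define MSG_CMSG_COMPAT_SK 0x80000000U

/* Returns 0 if the flags are acceptable, -1 (EINVAL-style) otherwise.
 * Accounting for the compat bit lets 32-bit userspace on a 64-bit
 * kernel through instead of failing spuriously. */
static int check_sendmsg_flags(unsigned int flags)
{
	if (flags & ~(MSG_DONTWAIT_SK | MSG_NOSIGNAL_SK | MSG_CMSG_COMPAT_SK))
		return -1;
	return 0;
}
```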
    
    Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
    Signed-off-by: Marko Kiiskila <marko@runtime.io>
    Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 01f9853326b0ffd56ad25fec21064eef637e1ec0
Author: Timur Tabi <timur@codeaurora.org>
Date:   Thu Apr 13 08:55:08 2017 -0500

    tty: pl011: use "qdf2400_e44" as the earlycon name for QDF2400 E44
    
    commit 5a0722b898f851b9ef108ea7babc529e4efc773d upstream.
    
    Define a new early console name for Qualcomm Datacenter Technologies
    QDF2400 SOCs affected by erratum 44, instead of piggy-backing on "pl011".
    Previously, to enable traditional (non-SPCR) earlycon, the documentation
    said to specify "earlycon=pl011,<address>,qdf2400_e44", but the code was
    broken and this didn't actually work.
    
    So instead, the method for specifying the E44 work-around with traditional
    earlycon is "earlycon=qdf2400_e44,<address>".  Both methods of earlycon
    are now enabled with the same function.
    
    Fixes: e53e597fd4c4 ("tty: pl011: fix earlycon work-around for QDF2400 erratum 44")
    Signed-off-by: Timur Tabi <timur@codeaurora.org>
    Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 34e01f920739beb9e5c2161d874cdd018343aa3a
Author: Wang YanQing <udknight@gmail.com>
Date:   Wed Feb 22 19:37:08 2017 +0800

    tty: pty: Fix ldisc flush after userspace become aware of the data already
    
    commit 77dae6134440420bac334581a3ccee94cee1c054 upstream.
    
    While using emacs, cat, or other commands in konsole with recent
    kernels, I have many times seen CTRL-C freeze konsole. After konsole
    freezes I can't type anything, so I have to open a new one, which is
    very annoying.
    
    See bug report:
    https://bugs.kde.org/show_bug.cgi?id=175283
    
    The platform in that bug report is Solaris, but the pty in Linux now
    has the same problem, i.e. the same behavior as Solaris :)
    
    The problem can be triggered with high probability by following the
    steps below:
    Note: In my test, BigFile is a text file whose size is bigger than 1G
    1:open konsole
    1:cat BigFile
    2:CTRL-C
    
    After some digging, I find out the reason is that commit 1d1d14da12e7
    ("pty: Fix buffer flush deadlock") changes the behavior of pty_flush_buffer.
    
    Thread A                                 Thread B
    --------                                 --------
    1:n_tty_poll return POLLIN
                                             2:CTRL-C trigger pty_flush_buffer
                                                 tty_buffer_flush
                                                   n_tty_flush_buffer
    3:attempt to check count of chars:
      ioctl(fd, TIOCINQ, &available)
      available is equal to 0
    
    4:read(fd, buffer, available)
      return 0
    
    5:konsole close fd
    
    Yes, I know we could use the same patch included in the BUG report as
    a workaround for the Linux platform too. But I think the data in the
    ldisc belongs to the application on the other side, and we shouldn't
    clear it when we want to flush the write buffer of this side in
    pty_flush_buffer. So I think it is better to disable the ldisc flush
    in pty_flush_buffer, because its new behavior brings no benefit except
    that it messes up the behavior between POLLIN, and TIOCINQ or FIONREAD.
    
    Also, I find that no flush_buffer function in any other tty driver has
    the same behavior as the current pty_flush_buffer.
    
    Fixes: 1d1d14da12e7 ("pty: Fix buffer flush deadlock")
    Signed-off-by: Wang YanQing <udknight@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit aad32798dc881faec3f64c6b7d6c19f3329546a9
Author: Johan Hovold <johan@kernel.org>
Date:   Mon Apr 10 11:21:39 2017 +0200

    serial: omap: suspend device on probe errors
    
    commit 77e6fe7fd2b7cba0bf2f2dc8cde51d7b9a35bf74 upstream.
    
    Make sure to actually suspend the device before returning after a failed
    (or deferred) probe.
    
    Note that autosuspend must be disabled before runtime pm is disabled in
    order to balance the usage count due to a negative autosuspend delay as
    well as to make the final put suspend the device synchronously.
    
    Fixes: 388bc2622680 ("omap-serial: Fix the error handling in the omap_serial probe")
    Cc: Shubhrajyoti D <shubhrajyoti@ti.com>
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Acked-by: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9e85b5c73a64d0b4ac54afcdad690b66efa5c52f
Author: Johan Hovold <johan@kernel.org>
Date:   Mon Apr 10 11:21:38 2017 +0200

    serial: omap: fix runtime-pm handling on unbind
    
    commit 099bd73dc17ed77aa8c98323e043613b6e8f54fc upstream.
    
    An unbalanced and misplaced synchronous put was used to suspend the
    device on driver unbind, something which with a likewise misplaced
    pm_runtime_disable leads to external aborts when an open port is being
    removed.
    
    Unhandled fault: external abort on non-linefetch (0x1028) at 0xfa024010
    ...
    [<c046e760>] (serial_omap_set_mctrl) from [<c046a064>] (uart_update_mctrl+0x50/0x60)
    [<c046a064>] (uart_update_mctrl) from [<c046a400>] (uart_shutdown+0xbc/0x138)
    [<c046a400>] (uart_shutdown) from [<c046bd2c>] (uart_hangup+0x94/0x190)
    [<c046bd2c>] (uart_hangup) from [<c045b760>] (__tty_hangup+0x404/0x41c)
    [<c045b760>] (__tty_hangup) from [<c045b794>] (tty_vhangup+0x1c/0x20)
    [<c045b794>] (tty_vhangup) from [<c046ccc8>] (uart_remove_one_port+0xec/0x260)
    [<c046ccc8>] (uart_remove_one_port) from [<c046ef4c>] (serial_omap_remove+0x40/0x60)
    [<c046ef4c>] (serial_omap_remove) from [<c04845e8>] (platform_drv_remove+0x34/0x4c)
    
    Fix this up by resuming the device before deregistering the port and by
    suspending and disabling runtime pm only after the port has been
    removed.
    
    Also make sure to disable autosuspend before disabling runtime pm so
    that the usage count is balanced and the device is actually suspended
    before returning.
    
    Note that due to a negative autosuspend delay being set in probe, the
    unbalanced put would actually suspend the device on first driver unbind,
    while rebinding and again unbinding would result in a negative
    power.usage_count.
    
    Fixes: 7e9c8e7dbf3b ("serial: omap: make sure to suspend device before remove")
    Cc: Felipe Balbi <balbi@kernel.org>
    Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Acked-by: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit cd823aa0bc50184ae104bac88904ab843763c27c
Author: Marek Szyprowski <m.szyprowski@samsung.com>
Date:   Mon Apr 3 08:21:00 2017 +0200

    serial: samsung: Add missing checks for dma_map_single failure
    
    commit 500fcc08a32bfd54f11951ba81530775df15c474 upstream.
    
    This patch adds missing checks for dma_map_single() failure and proper
    error reporting. Although this issue was harmless on the ARM
    architecture, it is always good to use the DMA mapping API in a proper
    way. This patch fixes the following DMA API debug warning:
    
    WARNING: CPU: 1 PID: 3785 at lib/dma-debug.c:1171 check_unmap+0x8a0/0xf28
    dma-pl330 121a0000.pdma: DMA-API: device driver failed to check map error[device address=0x000000006e0f9000] [size=4096 bytes] [mapped as single]
    Modules linked in:
    CPU: 1 PID: 3785 Comm: (agetty) Tainted: G        W       4.11.0-rc1-00137-g07ca963-dirty #59
    Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
    [<c011aaa4>] (unwind_backtrace) from [<c01127c0>] (show_stack+0x20/0x24)
    [<c01127c0>] (show_stack) from [<c06ba5d8>] (dump_stack+0x84/0xa0)
    [<c06ba5d8>] (dump_stack) from [<c0139528>] (__warn+0x14c/0x180)
    [<c0139528>] (__warn) from [<c01395a4>] (warn_slowpath_fmt+0x48/0x50)
    [<c01395a4>] (warn_slowpath_fmt) from [<c072a114>] (check_unmap+0x8a0/0xf28)
    [<c072a114>] (check_unmap) from [<c072a834>] (debug_dma_unmap_page+0x98/0xc8)
    [<c072a834>] (debug_dma_unmap_page) from [<c0803874>] (s3c24xx_serial_shutdown+0x314/0x52c)
    [<c0803874>] (s3c24xx_serial_shutdown) from [<c07f5124>] (uart_port_shutdown+0x54/0x88)
    [<c07f5124>] (uart_port_shutdown) from [<c07f522c>] (uart_shutdown+0xd4/0x110)
    [<c07f522c>] (uart_shutdown) from [<c07f6a8c>] (uart_hangup+0x9c/0x208)
    [<c07f6a8c>] (uart_hangup) from [<c07c426c>] (__tty_hangup+0x49c/0x634)
    [<c07c426c>] (__tty_hangup) from [<c07c78ac>] (tty_ioctl+0xc88/0x16e4)
    [<c07c78ac>] (tty_ioctl) from [<c03b5f2c>] (do_vfs_ioctl+0xc4/0xd10)
    [<c03b5f2c>] (do_vfs_ioctl) from [<c03b6bf4>] (SyS_ioctl+0x7c/0x8c)
    [<c03b6bf4>] (SyS_ioctl) from [<c010b4a0>] (ret_fast_syscall+0x0/0x3c)
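The pattern being added is the standard "check the mapping before using it" rule of the DMA API. A hedged userspace sketch of that shape (the error cookie and function names are stand-ins, not the kernel's dma_map_single()/dma_mapping_error()):

```c
#include <assert.h>
#include <stdint.h>

#define DMA_MAPPING_ERROR_SKETCH ((uint64_t)-1) /* illustrative cookie */

/* Stand-in for dma_map_single(): fails for a NULL buffer. */
static uint64_t fake_dma_map_single(const void *buf)
{
	return buf ? (uint64_t)(uintptr_t)buf : DMA_MAPPING_ERROR_SKETCH;
}

/* The fix: test the returned handle and report the error instead of
 * handing a bogus address to the DMA engine. */
static int start_dma_tx(const void *buf, uint64_t *handle)
{
	*handle = fake_dma_map_single(buf);
	if (*handle == DMA_MAPPING_ERROR_SKETCH)
		return -1;
	return 0;
}
```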
    
    Reported-by: Seung-Woo Kim <sw0312.kim@samsung.com>
    Fixes: 62c37eedb74c8 ("serial: samsung: add dma reqest/release functions")
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
    Reviewed-by: Shuah Khan <shuahkh@osg.samsung.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 43a1c1ff12c0662769b051ea06a19bdba165fccf
Author: Marek Szyprowski <m.szyprowski@samsung.com>
Date:   Mon Apr 3 08:20:59 2017 +0200

    serial: samsung: Use right device for DMA-mapping calls
    
    commit 768d64f491a530062ddad50e016fb27125f8bd7c upstream.
    
    The driver should provide its own struct device for all DMA-mapping
    calls instead of extracting the device pointer from the DMA engine
    channel. Although this is harmless from the driver operation
    perspective on the ARM architecture, it is always good to use the DMA
    mapping API in a proper way. This patch fixes the following DMA API
    debug warning:
    
    WARNING: CPU: 0 PID: 0 at lib/dma-debug.c:1241 check_sync+0x520/0x9f4
    samsung-uart 12c20000.serial: DMA-API: device driver tries to sync DMA memory it has not allocated [device address=0x000000006df0f580] [size=64 bytes]
    Modules linked in:
    CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.11.0-rc1-00137-g07ca963 #51
    Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
    [<c011aaa4>] (unwind_backtrace) from [<c01127c0>] (show_stack+0x20/0x24)
    [<c01127c0>] (show_stack) from [<c06ba5d8>] (dump_stack+0x84/0xa0)
    [<c06ba5d8>] (dump_stack) from [<c0139528>] (__warn+0x14c/0x180)
    [<c0139528>] (__warn) from [<c01395a4>] (warn_slowpath_fmt+0x48/0x50)
    [<c01395a4>] (warn_slowpath_fmt) from [<c0729058>] (check_sync+0x520/0x9f4)
    [<c0729058>] (check_sync) from [<c072967c>] (debug_dma_sync_single_for_device+0x88/0xc8)
    [<c072967c>] (debug_dma_sync_single_for_device) from [<c0803c10>] (s3c24xx_serial_start_tx_dma+0x100/0x2f8)
    [<c0803c10>] (s3c24xx_serial_start_tx_dma) from [<c0804338>] (s3c24xx_serial_tx_chars+0x198/0x33c)
    
    Reported-by: Seung-Woo Kim <sw0312.kim@samsung.com>
    Fixes: 62c37eedb74c8 ("serial: samsung: add dma reqest/release functions")
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
    Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
    Reviewed-by: Shuah Khan <shuahkh@osg.samsung.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ab62f4118ba1d010a994d79f55ea8bd0b48f5b98
Author: Eric Biggers <ebiggers@google.com>
Date:   Mon Apr 24 10:00:09 2017 -0700

    fscrypt: avoid collisions when presenting long encrypted filenames
    
    commit 6b06cdee81d68a8a829ad8e8d0f31d6836744af9 upstream.
    
    When accessing an encrypted directory without the key, userspace must
    operate on filenames derived from the ciphertext names, which contain
    arbitrary bytes.  Since we must support filenames as long as NAME_MAX,
    we can't always just base64-encode the ciphertext, since that may make
    it too long.  Currently, this is solved by presenting long names in an
    abbreviated form containing any needed filesystem-specific hashes (e.g.
    to identify a directory block), then the last 16 bytes of ciphertext.
    This needs to be sufficient to identify the actual name on lookup.
    
    However, there is a bug.  It seems to have been assumed that due to the
    use of a CBC (cipher block chaining)-based encryption mode, the last
    16 bytes (i.e. the AES block size) of ciphertext would depend on the
    full plaintext, preventing collisions.  However, we actually use CBC
    with ciphertext stealing (CTS), which handles the last two blocks
    specially, causing them to appear "flipped".  Thus, it's actually the
    second-to-last block which depends on the full plaintext.
    
    This caused long filenames that differ only near the end of their
    plaintexts to, when observed without the key, point to the wrong inode
    and be undeletable.  For example, with ext4:
    
        # echo pass | e4crypt add_key -p 16 edir/
        # seq -f "edir/abcdefghijklmnopqrstuvwxyz012345%.0f" 100000 | xargs touch
        # find edir/ -type f | xargs stat -c %i | sort | uniq | wc -l
        100000
        # sync
        # echo 3 > /proc/sys/vm/drop_caches
        # keyctl new_session
        # find edir/ -type f | xargs stat -c %i | sort | uniq | wc -l
        2004
        # rm -rf edir/
        rm: cannot remove 'edir/_A7nNFi3rhkEQlJ6P,hdzluhODKOeWx5V': Structure needs cleaning
        ...
    
    To fix this, when presenting long encrypted filenames, encode the
    second-to-last block of ciphertext rather than the last 16 bytes.
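The offset change can be illustrated with simple block arithmetic. This is purely illustrative (not the actual fscrypt code) and assumes, for simplicity, that the ciphertext is a whole number of 16-byte blocks:

```c
#include <assert.h>
#include <stddef.h>

#define FS_CRYPTO_BLOCK_SIZE 16 /* AES block size */

/* With CBC-CTS the final two blocks are processed specially, so the
 * block that depends on the entire plaintext is the second-to-last one.
 * The buggy code effectively presented the bytes at
 * ciphertext_len - FS_CRYPTO_BLOCK_SIZE; the fix steps back one more
 * full block. */
static size_t presented_block_offset(size_t ciphertext_len)
{
	return ciphertext_len - 2 * FS_CRYPTO_BLOCK_SIZE;
}
```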
    
    Although it would be nice to solve this without depending on a specific
    encryption mode, that would mean doing a cryptographic hash like SHA-256
    which would be much less efficient.  This way is sufficient for now, and
    it's still compatible with encryption modes like HEH which are strong
    pseudorandom permutations.  Also, changing the presented names is still
    allowed at any time because they are only provided to allow applications
    to do things like delete encrypted directories.  They're not designed to
    be used to persistently identify files --- which would be hard to do
    anyway, given that they're encrypted after all.
    
    For ease of backports, this patch only makes the minimal fix to both
    ext4 and f2fs.  It leaves ubifs as-is, since ubifs doesn't compare the
    ciphertext block yet.  Follow-on patches will clean things up properly
    and make the filesystems use a shared helper function.
    
    Fixes: 5de0b4d0cd15 ("ext4 crypto: simplify and speed up filename encryption")
    Reported-by: Gwendal Grignou <gwendal@chromium.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit eb86b4c68b8c59eec5e44fa7e1ea98c0a63eaa24
Author: Eric Biggers <ebiggers@google.com>
Date:   Fri Apr 7 10:58:37 2017 -0700

    fscrypt: fix context consistency check when key(s) unavailable
    
    commit 272f98f6846277378e1758a49a49d7bf39343c02 upstream.
    
    To mitigate some types of offline attacks, filesystem encryption is
    designed to enforce that all files in an encrypted directory tree use
    the same encryption policy (i.e. the same encryption context excluding
    the nonce).  However, the fscrypt_has_permitted_context() function which
    enforces this relies on comparing struct fscrypt_info's, which are only
    available when we have the encryption keys.  This can cause two
    incorrect behaviors:
    
    1. If we have the parent directory's key but not the child's key, or
       vice versa, then fscrypt_has_permitted_context() returned false,
       causing applications to see EPERM or ENOKEY.  This is incorrect if
       the encryption contexts are in fact consistent.  Although we'd
       normally have either both keys or neither key in that case since the
       master_key_descriptors would be the same, this is not guaranteed
       because keys can be added or removed from keyrings at any time.
    
    2. If we have neither the parent's key nor the child's key, then
       fscrypt_has_permitted_context() returned true, causing applications
       to see no error (or else an error for some other reason).  This is
       incorrect if the encryption contexts are in fact inconsistent, since
       in that case we should deny access.
    
    To fix this, retrieve and compare the fscrypt_contexts if we are unable
    to set up both fscrypt_infos.
    
    While this slightly hurts performance when accessing an encrypted
    directory tree without the key, this isn't a case we really need to be
    optimizing for; access *with* the key is much more important.
    Furthermore, the performance hit is barely noticeable given that we are
    already retrieving the fscrypt_context and doing two keyring searches in
    fscrypt_get_encryption_info().  If we ever actually wanted to optimize
    this case we might start by caching the fscrypt_contexts.
    
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 602f8911c7dc9e54aaef1c3cd3ef31855dc270e3
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sat May 6 10:27:13 2017 -0700

    initramfs: avoid "label at end of compound statement" error
    
    commit 394e4f5d5834b610ee032cceb20a1b1f45b01d28 upstream.
    
    Commit 17a9be317475 ("initramfs: Always do fput() and load modules after
    rootfs populate") introduced an error for the
    
        CONFIG_BLK_DEV_RAM=y
    
    case, because even though the code looks fine, the compiler really wants
    a statement after a label, or you'll get complaints:
    
      init/initramfs.c: In function 'populate_rootfs':
      init/initramfs.c:644:2: error: label at end of compound statement
    
    That commit moved the subsequent statements to outside the compound
    statement, leaving the label without any associated statements.
    
    Reported-by: Jörg Otte <jrg.otte@gmail.com>
    Fixes: 17a9be317475 ("initramfs: Always do fput() and load modules after rootfs populate")
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Stafford Horne <shorne@gmail.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit e54af78e1a74dc3d77a0da1d5a2889ab621cfc49
Author: Stafford Horne <shorne@gmail.com>
Date:   Thu May 4 21:15:56 2017 +0900

    initramfs: Always do fput() and load modules after rootfs populate
    
    commit 17a9be31747535184f2af156b1f080ec4c92a952 upstream.
    
    In OpenRISC we do not have a bootloader passed initrd, but the built in
    initramfs does contain the /init and other binaries, including modules.
    The previous commit 08865514805d2 ("initramfs: finish fput() before
    accessing any binary from initramfs") made a change to only call fput()
    if the bootloader initrd was available, which caused intermittent
    crashes for OpenRISC.
    
    This patch changes the fput() to happen unconditionally if any rootfs is
    loaded. Also, I added some comments to make it a bit more clear why we
    call unpack_to_rootfs() multiple times.
    
    Fixes: 08865514805d2 ("initramfs: finish fput() before accessing any binary from initramfs")
    Cc: Lokesh Vutla <lokeshvutla@ti.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Acked-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Stafford Horne <shorne@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9b3be66cb479de75baafbf8ae2d3fcf979c109c2
Author: Jan Kara <jack@suse.cz>
Date:   Tue May 2 17:03:47 2017 +0200

    f2fs: Make flush bios explicitly sync
    
    commit 3adc5fcb7edf5f8dfe8d37dcb50ba6b30077c905 upstream.
    
    Commit b685d3d65ac7 "block: treat REQ_FUA and REQ_PREFLUSH as
    synchronous" removed REQ_SYNC flag from WRITE_{FUA|PREFLUSH|...}
    definitions.  generic_make_request_checks() however strips REQ_FUA and
    REQ_PREFLUSH flags from a bio when the storage doesn't report volatile
    write cache and thus write effectively becomes asynchronous which can
    lead to performance regressions.
    
    Fix the problem by making sure all bios which are synchronous are
    properly marked with REQ_SYNC.
    
    Fixes: b685d3d65ac791406e0dfd8779cc9b3707fea5a3
    CC: Jaegeuk Kim <jaegeuk@kernel.org>
    CC: linux-f2fs-devel@lists.sourceforge.net
    Signed-off-by: Jan Kara <jack@suse.cz>
    Acked-by: Chao Yu <yuchao0@huawei.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit bd3dfe5049498ee72c54341a12c5e84a6d61796a
Author: Jaegeuk Kim <jaegeuk@kernel.org>
Date:   Mon Apr 24 10:00:08 2017 -0700

    f2fs: check entire encrypted bigname when finding a dentry
    
    commit 6332cd32c8290a80e929fc044dc5bdba77396e33 upstream.
    
    If a user has no key under an encrypted dir, fscrypt gives digested dentries.
    Previously, when looking up a dentry, f2fs only checked its hash value
    against the first 4 bytes of the digested dentry, which didn't fully
    handle hash collisions.  This patch enhances the lookup to check the
    entire dentry bytes, as ext4 does.
    
    Eric reported how to reproduce this issue by:
    
     # seq -f "edir/abcdefghijklmnopqrstuvwxyz012345%.0f" 100000 | xargs touch
     # find edir -type f | xargs stat -c %i | sort | uniq | wc -l
    100000
     # sync
     # echo 3 > /proc/sys/vm/drop_caches
     # keyctl new_session
     # find edir -type f | xargs stat -c %i | sort | uniq | wc -l
    99999
    
    Reported-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    (fixed f2fs_dentry_hash() to work even when the hash is 0)
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3a625468bd9e6a53d1078ba0a2e9c17117147183
Author: Sheng Yong <shengyong1@huawei.com>
Date:   Sat Apr 22 10:39:20 2017 +0800

    f2fs: fix multiple f2fs_add_link() having same name for inline dentry
    
    commit d3bb910c15d75ee3340311c64a1c05985bb663a3 upstream.
    
    Commit 88c5c13a5027 (f2fs: fix multiple f2fs_add_link() calls having
    same name) does not cover the scenario where inline dentry is enabled.
    In that case, F2FS_I(dir)->task will be NULL, and __f2fs_add_link will
    look up dentries one more time.
    
    This patch fixes it by moving the assignment of the current task to an
    upper level to cover both normal and inline dentries.
    
    Fixes: 88c5c13a5027 (f2fs: fix multiple f2fs_add_link() calls having same name)
    Signed-off-by: Sheng Yong <shengyong1@huawei.com>
    Reviewed-by: Chao Yu <yuchao0@huawei.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit e71f099677c18c7c7d9363dc9e89d27fe4a33b06
Author: Jaegeuk Kim <jaegeuk@kernel.org>
Date:   Tue Apr 11 19:01:26 2017 -0700

    f2fs: fix fs corruption due to zero inode page
    
    commit 9bb02c3627f46e50246bf7ab957b56ffbef623cb upstream.
    
    This patch fixes the following scenario.
    
    - f2fs_create/f2fs_mkdir             - write_checkpoint
     - f2fs_mark_inode_dirty_sync         - block_operations
                                           - f2fs_lock_all
                                           - f2fs_sync_inode_meta
                                            - f2fs_unlock_all
                                            - sync_inode_metadata
     - f2fs_lock_op
                                             - f2fs_write_inode
                                              - update_inode_page
                                               - get_node_page
                                                 return -ENOENT
     - new_inode_page
      - fill_node_footer
     - f2fs_mark_inode_dirty_sync
     - ...
     - f2fs_unlock_op
                                              - f2fs_inode_synced
                                           - f2fs_lock_all
                                           - do_checkpoint
    
    With this checkpoint, we can end up with an inode page that contains
    only zeros apart from a valid node footer.
    
    Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ed4d26a1e45b1cd0d2290cb50e9ab38ef74e0579
Author: Jaegeuk Kim <jaegeuk@kernel.org>
Date:   Tue Apr 4 16:45:30 2017 -0700

    Revert "f2fs: put allocate_segment after refresh_sit_entry"
    
    commit c6f82fe90d7458e5fa190a6820bfc24f96b0de4e upstream.
    
    This reverts commit 3436c4bdb30de421d46f58c9174669fbcfd40ce0.
    
    The reverted commit leaks dirty segment registrations. I reproduced the
    issue with a modified postmark that injects a lot of file
    create/delete/update operations and finally triggers a huge number of
    SSR allocations.
    
    [Jaegeuk Kim: Change missing incorrect comment]
    Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 33cbcc2556b36a94b7387a995756e7c5f07e05d9
Author: Jaegeuk Kim <jaegeuk@kernel.org>
Date:   Sat Mar 25 00:03:02 2017 -0700

    f2fs: fix wrong max cost initialization
    
    commit c541a51b8ce81d003b02ed67ad3604a2e6220e3e upstream.
    
    This patch fixes the max cost initialization, which was not increased
    when we raised the cost of data segments in the greedy algorithm.
    
    Fixes: b9cd20619 "f2fs: node segment is prior to data segment selected victim"
    Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 03606c8ecc11642f2906c17349994f7b4edf0f31
Author: Ross Zwisler <ross.zwisler@linux.intel.com>
Date:   Fri May 12 15:47:00 2017 -0700

    dax: fix PMD data corruption when fault races with write
    
    commit 876f29460cbd4086b43475890c1bf2488fa11d40 upstream.
    
    This is based on a patch from Jan Kara that fixed the equivalent race in
    the DAX PTE fault path.
    
    Currently DAX PMD read fault can race with write(2) in the following
    way:
    
    CPU1 - write(2)                 CPU2 - read fault
                                    dax_iomap_pmd_fault()
                                      ->iomap_begin() - sees hole
    
    dax_iomap_rw()
      iomap_apply()
        ->iomap_begin - allocates blocks
        dax_iomap_actor()
          invalidate_inode_pages2_range()
            - there's nothing to invalidate
    
                                      grab_mapping_entry()
                                      - we add huge zero page to the radix tree
                                        and map it to page tables
    
    The result is that hole page is mapped into page tables (and thus zeros
    are seen in mmap) while file has data written in that place.
    
    Fix the problem by locking exception entry before mapping blocks for the
    fault.  That way we are sure invalidate_inode_pages2_range() call for
    racing write will either block on entry lock waiting for the fault to
    finish (and unmap stale page tables after that) or read fault will see
    already allocated blocks by write(2).
    
    Fixes: 9f141d6ef6258 ("dax: Call ->iomap_begin without entry lock during dax fault")
    Link: http://lkml.kernel.org/r/20170510172700.18991-1-ross.zwisler@linux.intel.com
    Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
    Reviewed-by: Jan Kara <jack@suse.cz>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 5a3651b4a92cbc5230d67d2ce87fb3f7373c7665
Author: Jan Kara <jack@suse.cz>
Date:   Fri May 12 15:46:54 2017 -0700

    ext4: return to starting transaction in ext4_dax_huge_fault()
    
    commit fb26a1cbed8c90025572d48bc9eabe59f7571e88 upstream.
    
    DAX will return to locking exceptional entry before mapping blocks for a
    page fault to fix possible races with concurrent writes.  To avoid lock
    inversion between exceptional entry lock and transaction start, start
    the transaction already in ext4_dax_huge_fault().
    
    Fixes: 9f141d6ef6258a3a37a045842d9ba7e68f368956
    Link: http://lkml.kernel.org/r/20170510085419.27601-4-jack@suse.cz
    Signed-off-by: Jan Kara <jack@suse.cz>
    Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 85f353dbd0a3113c5dd23940572e51c4750ea513
Author: Jan Kara <jack@suse.cz>
Date:   Fri May 12 15:46:50 2017 -0700

    mm: fix data corruption due to stale mmap reads
    
    commit cd656375f94632d7b5af57bf67b7b5c0270c591c upstream.
    
    Currently, we don't invalidate page tables during
    invalidate_inode_pages2() for DAX.  That can result in e.g. a 2MiB zero
    page being mapped into page tables while the underlying blocks were
    already allocated, so data seen through mmap differs from data seen by
    read(2).
    The following sequence reproduces the problem:
    
     - open an mmap over a 2MiB hole
    
     - read from a 2MiB hole, faulting in a 2MiB zero page
    
     - write to the hole with write(3p). The write succeeds but we
       incorrectly leave the 2MiB zero page mapping intact.
    
     - via the mmap, read the data that was just written. Since the zero
       page mapping is still intact we read back zeroes instead of the new
       data.
    
    Fix the problem by unconditionally calling invalidate_inode_pages2_range()
    in dax_iomap_actor() for new block allocations and by properly
    invalidating page tables in invalidate_inode_pages2_range() for DAX
    mappings.
    
    Fixes: c6dcf52c23d2d3fb5235cec42d7dd3f786b87d55
    Link: http://lkml.kernel.org/r/20170510085419.27601-3-jack@suse.cz
    Signed-off-by: Jan Kara <jack@suse.cz>
    Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 06305f51ef219cb5dbbfe1bab5e71f23f1a06fef
Author: Ross Zwisler <ross.zwisler@linux.intel.com>
Date:   Fri May 12 15:46:47 2017 -0700

    dax: prevent invalidation of mapped DAX entries
    
    commit 4636e70bb0a8b871998b6841a2e4b205cf2bc863 upstream.
    
    Patch series "mm,dax: Fix data corruption due to mmap inconsistency",
    v4.
    
    This series fixes data corruption that can happen for DAX mounts when
    page faults race with write(2) and as a result page tables get out of
    sync with block mappings in the filesystem and thus data seen through
    mmap is different from data seen through read(2).
    
    The series passes testing with t_mmap_stale test program from Ross and
    also other mmap related tests on DAX filesystem.
    
    This patch (of 4):
    
    dax_invalidate_mapping_entry() currently removes DAX exceptional entries
    only if they are clean and unlocked.  This is done via:
    
      invalidate_mapping_pages()
        invalidate_exceptional_entry()
          dax_invalidate_mapping_entry()
    
    However, for page cache pages removed in invalidate_mapping_pages()
    there is an additional criteria which is that the page must not be
    mapped.  This is noted in the comments above invalidate_mapping_pages()
    and is checked in invalidate_inode_page().
    
    For DAX entries this means that we can end up in a situation where a
    DAX exceptional entry, either a huge zero page or a regular DAX entry,
    could end up mapped but without an associated radix tree entry.  This is
    inconsistent with the rest of the DAX code and with what happens in the
    page cache case.
    
    We aren't able to unmap the DAX exceptional entry because according to
    its comments invalidate_mapping_pages() isn't allowed to block, and
    unmap_mapping_range() takes a write lock on the mapping->i_mmap_rwsem.
    
    Since we essentially never have unmapped DAX entries to evict from the
    radix tree, just remove dax_invalidate_mapping_entry().
    
    Fixes: c6dcf52c23d2 ("mm: Invalidate DAX radix tree entries only if appropriate")
    Link: http://lkml.kernel.org/r/20170510085419.27601-2-jack@suse.cz
    Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
    Signed-off-by: Jan Kara <jack@suse.cz>
    Reported-by: Jan Kara <jack@suse.cz>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 55dbfd7dcd7172681f1c2397b6f6c5276fc4763a
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Sun Apr 30 06:57:01 2017 -0700

    device-dax: fix sysfs attribute deadlock
    
    commit 565851c972b50612f3a4542e26879ffb3e906fc2 upstream.
    
    Usage of device_lock() for dax_region attributes is unnecessary and
    deadlock prone. It's unnecessary because the order of registration /
    un-registration guarantees that drvdata is always valid. It's deadlock
    prone because it sets up this situation:
    
     ndctl           D    0  2170   2082 0x00000000
     Call Trace:
      __schedule+0x31f/0x980
      schedule+0x3d/0x90
      schedule_preempt_disabled+0x15/0x20
      __mutex_lock+0x402/0x980
      ? __mutex_lock+0x158/0x980
      ? align_show+0x2b/0x80 [dax]
      ? kernfs_seq_start+0x2f/0x90
      mutex_lock_nested+0x1b/0x20
      align_show+0x2b/0x80 [dax]
      dev_attr_show+0x20/0x50
    
     ndctl           D    0  2186   2079 0x00000000
     Call Trace:
      __schedule+0x31f/0x980
      schedule+0x3d/0x90
      __kernfs_remove+0x1f6/0x340
      ? kernfs_remove_by_name_ns+0x45/0xa0
      ? remove_wait_queue+0x70/0x70
      kernfs_remove_by_name_ns+0x45/0xa0
      remove_files.isra.1+0x35/0x70
      sysfs_remove_group+0x44/0x90
      sysfs_remove_groups+0x2e/0x50
      dax_region_unregister+0x25/0x40 [dax]
      devm_action_release+0xf/0x20
      release_nodes+0x16d/0x2b0
      devres_release_all+0x3c/0x60
      device_release_driver_internal+0x17d/0x220
      device_release_driver+0x12/0x20
      unbind_store+0x112/0x160
    
    ndctl/2170 is trying to acquire the device_lock() to read an attribute,
    and ndctl/2186 is holding the device_lock() while trying to drain all
    active attribute readers.
    
    Thanks to Yi Zhang for the reproduction script.
    
    Fixes: d7fe1a67f658 ("dax: add region 'id', 'size', and 'align' attributes")
    Reported-by: Yi Zhang <yizhan@redhat.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8931477790743c6c2593948b8dddb452171eb3a8
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Fri Mar 17 12:48:09 2017 -0600

    device-dax: fix cdev leak
    
    commit ed01e50acdd3e4a640cf9ebd28a7e810c3ceca97 upstream.
    
    If device_add() fails, clean up the cdev.  Otherwise, we leak a
    kobj_map() with a stale device number.
    
    As Jason points out, there is a small possibility that userspace has
    opened and mapped the device in the time between cdev_add() and the
    device_add() failure. We need a new kill_dax_dev() helper to invalidate
    any established mappings.
    
    Fixes: ba09c01d2fa8 ("dax: convert to the cdev api")
    Reported-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
    Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9bcb9cc96d7396430d9f978cce5085f8c9928970
Author: NeilBrown <neilb@suse.com>
Date:   Thu Apr 6 12:06:37 2017 +1000

    md/raid1: avoid reusing a resync bio after error handling.
    
    commit 0c9d5b127f695818c2c5a3868c1f28ca2969e905 upstream.
    
    fix_sync_read_error() modifies a bio on a newly faulty
    device by setting bi_end_io to end_sync_write.
    This ensures that put_buf() will still call rdev_dec_pending()
    as required, but makes sure that subsequent code in
    fix_sync_read_error() doesn't try to read from the device.
    
    Unfortunately this interacts badly with sync_request_write()
    which assumes that any bio with bi_end_io set to non-NULL
    other than end_sync_read is safe to write to.
    
    As the device is now faulty it doesn't make sense to write.
    As the bio was recently used for a read, it is "dirty"
    and not suitable for immediate submission.
    In particular, ->bi_next might be non-NULL, which will cause
    generic_make_request() to complain.
    
    Break this interaction by refusing to write to devices
    which are marked as Faulty.
    
    Reported-and-tested-by: Michael Wang <yun.wang@profitbricks.com>
    Fixes: 2e52d449bcec ("md/raid1: add failfast handling for reads.")
    Signed-off-by: NeilBrown <neilb@suse.com>
    Signed-off-by: Shaohua Li <shli@fb.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 7d708085476d8183c4753d3a1f8ca813fbb0d945
Author: Jason A. Donenfeld <Jason@zx2c4.com>
Date:   Fri Apr 7 02:33:30 2017 +0200

    padata: free correct variable
    
    commit 07a77929ba672d93642a56dc2255dd21e6e2290b upstream.
    
    The author meant to free the variable that was just allocated, instead
    of the one that failed to be allocated, but made a simple typo. This
    patch rectifies that.
    
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f33f17b28d312ac2cc7c06e1a96aaa93a803be1d
Author: Amir Goldstein <amir73il@gmail.com>
Date:   Mon Apr 24 22:26:40 2017 +0300

    ovl: do not set overlay.opaque on non-dir create
    
    commit 4a99f3c83dc493c8ea84693d78cd792839c8aa64 upstream.
    
    The optimization for opaque dir create was wrongly being applied
    also to non-dir create.
    
    Fixes: 97c684cc9110 ("ovl: create directories inside merged parent opaque")
    Signed-off-by: Amir Goldstein <amir73il@gmail.com>
    Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6a1baa3a163c8a802c008eeac65ab659ecdd0e0a
Author: Björn Jacke <bj@sernet.de>
Date:   Fri May 5 04:36:16 2017 +0200

    CIFS: add missing SFM mapping for doublequote
    
    commit 85435d7a15294f9f7ef23469e6aaf7c5dfcc54f0 upstream.
    
    SFM maps the double quote character to 0xF020.
    
    Without this patch, creating file names containing a double quote fails
    against Windows/Mac servers.
    
    Signed-off-by: Bjoern Jacke <bjacke@samba.org>
    Signed-off-by: Steve French <smfrench@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit c4f35cc8dc593ac6633f24459e2bc445c9d50bd6
Author: David Disseldorp <ddiss@suse.de>
Date:   Thu May 4 00:41:13 2017 +0200

    cifs: fix CIFS_IOC_GET_MNT_INFO oops
    
    commit d8a6e505d6bba2250852fbc1c1c86fe68aaf9af3 upstream.
    
    An open directory may have a NULL private_data pointer prior to readdir.
    
    Fixes: 0de1f4c6f6c0 ("Add way to query server fs info for smb3")
    Signed-off-by: David Disseldorp <ddiss@suse.de>
    Signed-off-by: Steve French <smfrench@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 33fa011644900f7bf551c29b26eebaddb1eec020
Author: Rabin Vincent <rabinv@axis.com>
Date:   Wed May 3 17:54:01 2017 +0200

    CIFS: fix oplock break deadlocks
    
    commit 3998e6b87d4258a70df358296d6f1c7234012bfe upstream.
    
    When the final cifsFileInfo_put() is called from cifsiod and an oplock
    break work is queued, lockdep complains loudly:
    
     =============================================
     [ INFO: possible recursive locking detected ]
     4.11.0+ #21 Not tainted
     ---------------------------------------------
     kworker/0:2/78 is trying to acquire lock:
      ("cifsiod"){++++.+}, at: flush_work+0x215/0x350
    
     but task is already holding lock:
      ("cifsiod"){++++.+}, at: process_one_work+0x255/0x8e0
    
     other info that might help us debug this:
      Possible unsafe locking scenario:
    
            CPU0
            ----
       lock("cifsiod");
       lock("cifsiod");
    
      *** DEADLOCK ***
    
      May be due to missing lock nesting notation
    
     2 locks held by kworker/0:2/78:
      #0:  ("cifsiod"){++++.+}, at: process_one_work+0x255/0x8e0
      #1:  ((&wdata->work)){+.+...}, at: process_one_work+0x255/0x8e0
    
     stack backtrace:
     CPU: 0 PID: 78 Comm: kworker/0:2 Not tainted 4.11.0+ #21
     Workqueue: cifsiod cifs_writev_complete
     Call Trace:
      dump_stack+0x85/0xc2
      __lock_acquire+0x17dd/0x2260
      ? match_held_lock+0x20/0x2b0
      ? trace_hardirqs_off_caller+0x86/0x130
      ? mark_lock+0xa6/0x920
      lock_acquire+0xcc/0x260
      ? lock_acquire+0xcc/0x260
      ? flush_work+0x215/0x350
      flush_work+0x236/0x350
      ? flush_work+0x215/0x350
      ? destroy_worker+0x170/0x170
      __cancel_work_timer+0x17d/0x210
      ? ___preempt_schedule+0x16/0x18
      cancel_work_sync+0x10/0x20
      cifsFileInfo_put+0x338/0x7f0
      cifs_writedata_release+0x2a/0x40
      ? cifs_writedata_release+0x2a/0x40
      cifs_writev_complete+0x29d/0x850
      ? preempt_count_sub+0x18/0xd0
      process_one_work+0x304/0x8e0
      worker_thread+0x9b/0x6a0
      kthread+0x1b2/0x200
      ? process_one_work+0x8e0/0x8e0
      ? kthread_create_on_node+0x40/0x40
      ret_from_fork+0x31/0x40
    
    This is a real warning.  Since the oplock is queued on the same
    workqueue this can deadlock if there is only one worker thread active
    for the workqueue (which will be the case during memory pressure when
    the rescuer thread is handling it).
    
    Furthermore, there is at least one other kind of hang possible due to
    the oplock break handling if there is only one worker.  (This can be
    reproduced without introducing memory pressure by passing 1 for the
    max_active parameter of cifsiod.)  cifs_oplock_break() can wait
    indefinitely in filemap_fdatawait() while the cifs_writev_complete()
    work is blocked:
    
     sysrq: SysRq : Show Blocked State
       task                        PC stack   pid father
     kworker/0:1     D    0    16      2 0x00000000
     Workqueue: cifsiod cifs_oplock_break
     Call Trace:
      __schedule+0x562/0xf40
      ? mark_held_locks+0x4a/0xb0
      schedule+0x57/0xe0
      io_schedule+0x21/0x50
      wait_on_page_bit+0x143/0x190
      ? add_to_page_cache_lru+0x150/0x150
      __filemap_fdatawait_range+0x134/0x190
      ? do_writepages+0x51/0x70
      filemap_fdatawait_range+0x14/0x30
      filemap_fdatawait+0x3b/0x40
      cifs_oplock_break+0x651/0x710
      ? preempt_count_sub+0x18/0xd0
      process_one_work+0x304/0x8e0
      worker_thread+0x9b/0x6a0
      kthread+0x1b2/0x200
      ? process_one_work+0x8e0/0x8e0
      ? kthread_create_on_node+0x40/0x40
      ret_from_fork+0x31/0x40
     dd              D    0   683    171 0x00000000
     Call Trace:
      __schedule+0x562/0xf40
      ? mark_held_locks+0x29/0xb0
      schedule+0x57/0xe0
      io_schedule+0x21/0x50
      wait_on_page_bit+0x143/0x190
      ? add_to_page_cache_lru+0x150/0x150
      __filemap_fdatawait_range+0x134/0x190
      ? do_writepages+0x51/0x70
      filemap_fdatawait_range+0x14/0x30
      filemap_fdatawait+0x3b/0x40
      filemap_write_and_wait+0x4e/0x70
      cifs_flush+0x6a/0xb0
      filp_close+0x52/0xa0
      __close_fd+0xdc/0x150
      SyS_close+0x33/0x60
      entry_SYSCALL_64_fastpath+0x1f/0xbe
    
     Showing all locks held in the system:
     2 locks held by kworker/0:1/16:
      #0:  ("cifsiod"){.+.+.+}, at: process_one_work+0x255/0x8e0
      #1:  ((&cfile->oplock_break)){+.+.+.}, at: process_one_work+0x255/0x8e0
    
     Showing busy workqueues and worker pools:
     workqueue cifsiod: flags=0xc
       pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/1
         in-flight: 16:cifs_oplock_break
         delayed: cifs_writev_complete, cifs_echo_request
     pool 0: cpus=0 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 750 3
    
    Fix these problems by creating a new workqueue (with a rescuer) for
    the oplock break work.
    
    Signed-off-by: Rabin Vincent <rabinv@axis.com>
    Signed-off-by: Steve French <smfrench@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 076c404c128dd1c2c3ef6971fd95fbfb4b7df411
Author: David Disseldorp <ddiss@suse.de>
Date:   Wed May 3 17:39:08 2017 +0200

    cifs: fix CIFS_ENUMERATE_SNAPSHOTS oops
    
    commit 6026685de33b0db5b2b6b0e9b41b3a1a3261033c upstream.
    
    As with 618763958b22, an open directory may have a NULL private_data
    pointer prior to readdir. CIFS_ENUMERATE_SNAPSHOTS must check for this
    before dereference.
    
    Fixes: 834170c85978 ("Enable previous version support")
    Signed-off-by: David Disseldorp <ddiss@suse.de>
    Signed-off-by: Steve French <smfrench@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 63d86cd9703cc367bfe635b20596fbca43e3e0c9
Author: David Disseldorp <ddiss@suse.de>
Date:   Wed May 3 17:39:09 2017 +0200

    cifs: fix leak in FSCTL_ENUM_SNAPS response handling
    
    commit 0e5c795592930d51fd30d53a2e7b73cba022a29b upstream.
    
    The server may respond with success, and an output buffer less than
    sizeof(struct smb_snapshot_array) in length. Do not leak the output
    buffer in this case.
    
    Fixes: 834170c85978 ("Enable previous version support")
    Signed-off-by: David Disseldorp <ddiss@suse.de>
    Signed-off-by: Steve French <smfrench@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 160239dfbf8925cc5ab11b39852ec37ab9df437a
Author: Björn Jacke <bj@sernet.de>
Date:   Wed May 3 23:47:44 2017 +0200

    CIFS: fix mapping of SFM_SPACE and SFM_PERIOD
    
    commit b704e70b7cf48f9b67c07d585168e102dfa30bb4 upstream.
    
    - trailing space maps to 0xF028
    - trailing period maps to 0xF029
    
    This fix corrects the mapping of file names which have a trailing character
    that would otherwise be illegal (period or space) but is allowed by POSIX.
    
    Signed-off-by: Bjoern Jacke <bjacke@samba.org>
    Signed-off-by: Steve French <smfrench@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6f95f1925abbced7445385d5b06c7cfc558dec36
Author: Steve French <smfrench@gmail.com>
Date:   Wed May 3 21:12:20 2017 -0500

    SMB3: Work around mount failure when using SMB3 dialect to Macs
    
    commit 7db0a6efdc3e990cdfd4b24820d010e9eb7890ad upstream.
    
    Macs send the maximum buffer size in the response to the ioctl that
    validates negotiate security information, which causes us to fail the
    mount as the response buffer is larger than the expected response.
    
    Changed ioctl response processing to allow for padding of validate
    negotiate ioctl response and limit the maximum response size to
    maximum buffer size.
    
    Signed-off-by: Steve French <steve.french@primarydata.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 49f70d15b5b9780269a9397ac28c63fe20563ae0
Author: Steve French <smfrench@gmail.com>
Date:   Tue May 2 13:35:20 2017 -0500

    Set unicode flag on cifs echo request to avoid Mac error
    
    commit 26c9cb668c7fbf9830516b75d8bee70b699ed449 upstream.
    
    Macs require the unicode flag to be set for cifs, even for the smb
    echo request (which doesn't have strings).
    
    Without this, Macs reject the periodic echo requests (when mounting
    with cifs) that we use to check if the server is down.
    
    Signed-off-by: Steve French <smfrench@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit fdf150ee558cca25e9ec0fd1c1d010366ef59ffc
Author: Sachin Prabhu <sprabhu@redhat.com>
Date:   Wed Apr 26 17:10:17 2017 +0100

    Do not return number of bytes written for ioctl CIFS_IOC_COPYCHUNK_FILE
    
    commit 7d0c234fd2e1c9ca3fa032696c0c58b1b74a9e0b upstream.
    
    commit 620d8745b35d ("Introduce cifs_copy_file_range()") changes the
    behaviour of the cifs ioctl call CIFS_IOC_COPYCHUNK_FILE. In case of
    successful writes, it now returns the number of bytes written. This
    return value is treated as an error by the xfstest cifs/001. Depending
    on the errno set at that time, this may or may not result in the test
    failing.
    
    The patch fixes this by setting the return value to 0 in case of
    successful writes.
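    A minimal userspace model of the fixed return convention (names are
    illustrative stand-ins, not the driver's API):

```c
/* Model of the fix: propagate a negative errno, but collapse a
 * successful byte count to 0 so the ioctl return value is never
 * mistaken for an error by callers expecting 0-or-negative. */
static long copychunk_ioctl_rc(long bytes_written)
{
    if (bytes_written < 0)
        return bytes_written;   /* negative errno passes through */
    return 0;                   /* success: hide the byte count */
}
```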
    
    Fixes: commit 620d8745b35d ("Introduce cifs_copy_file_range()")
    Reported-by: Eryu Guan <eguan@redhat.com>
    Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
    Acked-by: Pavel Shilovsky <pshilov@microsoft.com>
    Signed-off-by: Steve French <smfrench@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ecf9bb0fd7ab18f166077c673b42b72c9ad5f91b
Author: Sachin Prabhu <sprabhu@redhat.com>
Date:   Wed Apr 26 14:05:46 2017 +0100

    Fix match_prepath()
    
    commit cd8c42968ee651b69e00f8661caff32b0086e82d upstream.
    
    An incorrect return value for shares not using the prefix path means
    that we will never match superblocks for these shares.
    
    Fixes: commit c1d8b24d1819 ("Compare prepaths when comparing superblocks")
    Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
    Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
    Signed-off-by: Steve French <smfrench@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d322fd67ca66cb7da82b2c348a1485a409b07ce2
Author: Vlastimil Babka <vbabka@suse.cz>
Date:   Mon May 8 15:59:46 2017 -0700

    mm: prevent potential recursive reclaim due to clearing PF_MEMALLOC
    
    commit 62be1511b1db8066220b18b7d4da2e6b9fdc69fb upstream.
    
    Patch series "more robust PF_MEMALLOC handling"
    
    This series aims to unify the setting and clearing of PF_MEMALLOC, which
    prevents recursive reclaim.  There are some places that clear the flag
    unconditionally from current->flags, which may result in clearing a
    pre-existing flag.  This already resulted in a bug report that Patch 1
    fixes (without the new helpers, to make backporting easier).  Patch 2
    introduces the new helpers, modelled after existing memalloc_noio_* and
    memalloc_nofs_* helpers, and converts mm core to use them.  Patches 3
    and 4 convert non-mm code.
    
    This patch (of 4):
    
    __alloc_pages_direct_compact() sets PF_MEMALLOC to prevent deadlock
    during page migration by lock_page() (see the comment in
    __unmap_and_move()).  Then it unconditionally clears the flag, which can
    clear a pre-existing PF_MEMALLOC flag and result in recursive reclaim.
    This was not a problem until commit a8161d1ed609 ("mm, page_alloc:
    restructure direct compaction handling in slowpath"), because direct
    compaction was called only after direct reclaim, which was skipped when
    PF_MEMALLOC flag was set.
    
    Even now it's only a theoretical issue, as the new callsite of
    __alloc_pages_direct_compact() is reached only for costly orders and
    when gfp_pfmemalloc_allowed() is true, which means either
    __GFP_NOMEMALLOC is in gfp_flags or in_interrupt() is true.  There is no
    such known context, but let's play it safe and make
    __alloc_pages_direct_compact() robust for cases where PF_MEMALLOC is
    already set.
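    The shape of the robustness fix can be sketched in userspace (a plain
    variable stands in for current->flags; names are illustrative):

```c
#define PF_MEMALLOC 0x00000800u

/* Model of the fix: remember whether PF_MEMALLOC was already set and
 * restore that state afterwards, rather than clearing the bit
 * unconditionally (which could clear a pre-existing flag and allow
 * recursive reclaim). */
static unsigned int task_flags;

static void direct_compact_model(void)
{
    unsigned int noreclaim_flag = task_flags & PF_MEMALLOC;

    task_flags |= PF_MEMALLOC;
    /* ... compaction/migration work would run here ... */
    task_flags = (task_flags & ~PF_MEMALLOC) | noreclaim_flag;
}
```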
    
    Fixes: a8161d1ed609 ("mm, page_alloc: restructure direct compaction handling in slowpath")
    Link: http://lkml.kernel.org/r/20170405074700.29871-2-vbabka@suse.cz
    Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    Reported-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Boris Brezillon <boris.brezillon@free-electrons.com>
    Cc: Chris Leech <cleech@redhat.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Eric Dumazet <edumazet@google.com>
    Cc: Josef Bacik <jbacik@fb.com>
    Cc: Lee Duncan <lduncan@suse.com>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Richard Weinberger <richard@nod.at>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 41a68ab851822e3588952ae9f2709a61a88338f7
Author: Johannes Weiner <hannes@cmpxchg.org>
Date:   Wed May 3 14:55:03 2017 -0700

    mm: vmscan: fix IO/refault regression in cache workingset transition
    
    commit 2a2e48854d704214dac7546e87ae0e4daa0e61a0 upstream.
    
    Since commit 59dc76b0d4df ("mm: vmscan: reduce size of inactive file
    list") we noticed bigger IO spikes during changes in cache access
    patterns.
    
    The patch in question shrunk the inactive list size to leave more room
    for the current workingset in the presence of streaming IO.  However,
    workingset transitions that previously happened on the inactive list are
    now pushed out of memory and incur more refaults to complete.
    
    This patch disables active list protection when refaults are being
    observed.  This accelerates workingset transitions, and allows more of
    the new set to establish itself from memory, without eating into the
    ability to protect the established workingset during stable periods.
    
    The workloads that were measurably affected for us were hit pretty bad
    by it, with refault/majfault rates doubling and tripling during cache
    transitions, and the machines sustaining half-hour periods of 100% IO
    utilization, where they'd previously have sub-minute peaks at 60-90%.
    
    Stateful services that handle user data tend to be more conservative
    with kernel upgrades.  As a result we hit most page cache issues with
    some delay, as was the case here.
    
    The severity seemed to warrant a stable tag.
    
    Fixes: 59dc76b0d4df ("mm: vmscan: reduce size of inactive file list")
    Link: http://lkml.kernel.org/r/20170404220052.27593-1-hannes@cmpxchg.org
    Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Mel Gorman <mgorman@suse.de>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f7bccf3f166eee5afce01aead5c0b37fbc2defe0
Author: Andrey Ryabinin <aryabinin@virtuozzo.com>
Date:   Wed May 3 14:56:02 2017 -0700

    fs/block_dev: always invalidate cleancache in invalidate_bdev()
    
    commit a5f6a6a9c72eac38a7fadd1a038532bc8516337c upstream.
    
    invalidate_bdev() calls cleancache_invalidate_inode() iff ->nrpages != 0,
    which doesn't make any sense.
    
    Make sure that invalidate_bdev() always calls cleancache_invalidate_inode()
    regardless of mapping->nrpages value.
    
    Fixes: c515e1fd361c ("mm/fs: add hooks to support cleancache")
    Link: http://lkml.kernel.org/r/20170424164135.22350-3-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
    Reviewed-by: Jan Kara <jack@suse.cz>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Cc: Alexander Viro <viro@zeniv.linux.org.uk>
    Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
    Cc: Jens Axboe <axboe@kernel.dk>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Alexey Kuznetsov <kuznet@virtuozzo.com>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Nikolay Borisov <n.borisov.lkml@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 601462864dcceedd9afc2bbc14439675d48d179c
Author: Andrey Ryabinin <aryabinin@virtuozzo.com>
Date:   Wed May 3 14:55:59 2017 -0700

    fs: fix data invalidation in the cleancache during direct IO
    
    commit 55635ba76ef91f26b418702ace5e6287eb727f6a upstream.
    
    Patch series "Properly invalidate data in the cleancache", v2.
    
    We've noticed that after direct IO write, buffered read sometimes gets
    stale data which is coming from the cleancache.  The reason for this is
    that some direct write hooks call invalidate_inode_pages2[_range]()
    conditionally iff mapping->nrpages is not zero, so we may not invalidate
    data in the cleancache.
    
    Another odd thing is that we check only for ->nrpages and don't check
    for ->nrexceptional, even though invalidate_inode_pages2[_range]
    invalidates exceptional entries as well.  So we invalidate exceptional
    entries only if ->nrpages != 0? This doesn't feel right.
    
     - Patch 1 fixes direct IO writes by removing ->nrpages check.
     - Patch 2 fixes similar case in invalidate_bdev().
         Note: I only fixed conditional cleancache_invalidate_inode() here.
           Do we also need to add ->nrexceptional check in into invalidate_bdev()?
    
     - Patches 3-4: some optimizations.
    
    This patch (of 4):
    
    Some direct IO write fs hooks call invalidate_inode_pages2[_range]()
    conditionally iff mapping->nrpages is not zero.  This can't be right,
    because invalidate_inode_pages2[_range]() also invalidates data in the
    cleancache via cleancache_invalidate_inode() call.  So if page cache is
    empty but there is some data in the cleancache, buffered read after
    direct IO write would get stale data from the cleancache.
    
    Also it doesn't feel right to check only for ->nrpages because
    invalidate_inode_pages2[_range] invalidates exceptional entries as well.
    
    Fix this by calling invalidate_inode_pages2[_range]() regardless of
    nrpages state.
    
    Note: nfs, cifs, and 9p don't need a similar fix because they never call
    cleancache_get_page() (neither directly nor via mpage_readpage[s]()), so
    they are not affected by this bug.
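    A tiny userspace model of the fixed call pattern (types and helper
    names are illustrative, not the kernel's):

```c
/* Model of the fix: the invalidation (which also drops cleancache
 * entries) runs even when the page cache is empty, so a buffered read
 * after a direct IO write cannot see stale cleancache data. */
struct mapping_model {
    unsigned long nrpages;
    int cleancache_dropped;
};

static void invalidate_pages2_model(struct mapping_model *m)
{
    /* stands in for invalidate_inode_pages2[_range](), which calls
     * cleancache_invalidate_inode() regardless of nrpages */
    m->cleancache_dropped = 1;
}

static void dio_write_done_model(struct mapping_model *m)
{
    /* fixed: no "if (m->nrpages)" guard around the call */
    invalidate_pages2_model(m);
}
```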
    
    Fixes: c515e1fd361c ("mm/fs: add hooks to support cleancache")
    Link: http://lkml.kernel.org/r/20170424164135.22350-2-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
    Reviewed-by: Jan Kara <jack@suse.cz>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Cc: Alexander Viro <viro@zeniv.linux.org.uk>
    Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
    Cc: Jens Axboe <axboe@kernel.dk>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Alexey Kuznetsov <kuznet@virtuozzo.com>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Nikolay Borisov <n.borisov.lkml@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a4c34c20fbbc505dc3b202d299400ed57fed370b
Author: Luis Henriques <lhenriques@suse.com>
Date:   Fri Apr 28 11:14:04 2017 +0100

    ceph: fix memory leak in __ceph_setxattr()
    
    commit eeca958dce0a9231d1969f86196653eb50fcc9b3 upstream.
    
    The ceph_inode_xattr needs to be released when removing an xattr.  Easily
    reproducible running the 'generic/020' test from xfstests or simply by
    doing:
    
      attr -s attr0 -V 0 /mnt/test && attr -r attr0 /mnt/test
    
    While there, also fix the error path.
    
    Here's the kmemleak splat:
    
    unreferenced object 0xffff88001f86fbc0 (size 64):
      comm "attr", pid 244, jiffies 4294904246 (age 98.464s)
      hex dump (first 32 bytes):
        40 fa 86 1f 00 88 ff ff 80 32 38 1f 00 88 ff ff  @........28.....
        00 01 00 00 00 00 ad de 00 02 00 00 00 00 ad de  ................
      backtrace:
        [<ffffffff81560199>] kmemleak_alloc+0x49/0xa0
        [<ffffffff810f3e5b>] kmem_cache_alloc+0x9b/0xf0
        [<ffffffff812b157e>] __ceph_setxattr+0x17e/0x820
        [<ffffffff812b1c57>] ceph_set_xattr_handler+0x37/0x40
        [<ffffffff8111fb4b>] __vfs_removexattr+0x4b/0x60
        [<ffffffff8111fd37>] vfs_removexattr+0x77/0xd0
        [<ffffffff8111fdd1>] removexattr+0x41/0x60
        [<ffffffff8111fe65>] path_removexattr+0x75/0xa0
        [<ffffffff81120aeb>] SyS_lremovexattr+0xb/0x10
        [<ffffffff81564b20>] entry_SYSCALL_64_fastpath+0x13/0x94
        [<ffffffffffffffff>] 0xffffffffffffffff
    
    Signed-off-by: Luis Henriques <lhenriques@suse.com>
    Reviewed-by: "Yan, Zheng" <zyan@redhat.com>
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2cfc74406969c5a7ced9bcd51af5792970713b22
Author: Michal Hocko <mhocko@suse.com>
Date:   Mon May 8 15:57:24 2017 -0700

    fs/xattr.c: zero out memory copied to userspace in getxattr
    
    commit 81be3dee96346fbe08c31be5ef74f03f6b63cf68 upstream.
    
    getxattr uses vmalloc to allocate memory if kzalloc fails.  This buffer
    is filled by vfs_getxattr and then copied to userspace.  vmalloc,
    however, doesn't zero out the memory, so if the specific implementation
    of the xattr handler is sloppy we could theoretically expose kernel
    memory.  There is no real sign this is actually the case, but let's
    make sure it cannot happen and use vzalloc instead.
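    The allocation pattern after the fix can be modeled in userspace
    (calloc stands in for both kzalloc and vzalloc; the helper name is
    hypothetical):

```c
#include <stdlib.h>

/* Model of the fix: when the primary allocation fails, fall back to a
 * *zeroed* allocation (vzalloc in the kernel, instead of vmalloc), so
 * a short fill by a sloppy handler cannot leak stale memory. */
static void *getxattr_buf_model(size_t size, int primary_fails)
{
    void *p = primary_fails ? NULL : calloc(1, size); /* kzalloc */
    if (!p)
        p = calloc(1, size);    /* vzalloc: zeroed, unlike vmalloc */
    return p;
}
```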
    
    Fixes: 779302e67835 ("fs/xattr.c:getxattr(): improve handling of allocation failures")
    Link: http://lkml.kernel.org/r/20170306103327.2766-1-mhocko@kernel.org
    Acked-by: Kees Cook <keescook@chromium.org>
    Reported-by: Vlastimil Babka <vbabka@suse.cz>
    Signed-off-by: Michal Hocko <mhocko@suse.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8bd83223086df14ed99f52950e3c064985c94fd6
Author: Martin Brandenburg <martin@omnibond.com>
Date:   Tue Apr 25 15:38:04 2017 -0400

    orangefs: do not check possibly stale size on truncate
    
    commit 53950ef541675df48c219a8d665111a0e68dfc2f upstream.
    
    Let the server figure this out because our size might be out of date or
    not present.
    
    The bug was that
    
            xfs_io -f -t -c "pread -v 0 100" /mnt/foo
            echo "Test" > /mnt/foo
            xfs_io -f -t -c "pread -v 0 100" /mnt/foo
    
    fails because the second truncate did not happen if nothing had
    requested the size after the write in echo.  Thus i_size was zero (not
    present), and orangefs_setattr thought i_size was zero and that there
    was nothing to do.
    
    Signed-off-by: Martin Brandenburg <martin@omnibond.com>
    Signed-off-by: Mike Marshall <hubcap@omnibond.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 66f1885beae71ccb4b49427cd37c57862b818a26
Author: Martin Brandenburg <martin@omnibond.com>
Date:   Tue Apr 25 15:37:58 2017 -0400

    orangefs: do not set getattr_time on orangefs_lookup
    
    commit 17930b252cd6f31163c259eaa99dd8aa630fb9ba upstream.
    
    Since orangefs_lookup calls orangefs_iget which calls
    orangefs_inode_getattr, getattr_time will get set.
    
    Signed-off-by: Martin Brandenburg <martin@omnibond.com>
    Signed-off-by: Mike Marshall <hubcap@omnibond.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a6d3c33255f899afb233fca2439702099055baf9
Author: Martin Brandenburg <martin@omnibond.com>
Date:   Tue Apr 25 15:37:57 2017 -0400

    orangefs: clean up oversize xattr validation
    
    commit e675c5ec51fe2554719a7b6bcdbef0a770f2c19b upstream.
    
    Also don't check flags as this has been validated by the VFS already.
    
    Fix an off-by-one error in the max size checking.
    
    Stop logging just because userspace wants to write attributes which do
    not fit.
    
    This and the previous commit fix xfstests generic/020.
    
    Signed-off-by: Martin Brandenburg <martin@omnibond.com>
    Signed-off-by: Mike Marshall <hubcap@omnibond.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 7adb15036a05c76687706d40f0dda030f8607b0b
Author: Martin Brandenburg <martin@omnibond.com>
Date:   Tue Apr 25 15:37:56 2017 -0400

    orangefs: fix bounds check for listxattr
    
    commit a956af337b9ff25822d9ce1a59c6ed0c09fc14b9 upstream.
    
    Signed-off-by: Martin Brandenburg <martin@omnibond.com>
    Signed-off-by: Mike Marshall <hubcap@omnibond.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 1567c170633295b10e5821a34db5b341aefe1156
Author: Eric Biggers <ebiggers@google.com>
Date:   Sun Apr 30 00:10:50 2017 -0400

    ext4: evict inline data when writing to memory map
    
    commit 7b4cc9787fe35b3ee2dfb1c35e22eafc32e00c33 upstream.
    
    Currently the case of writing via mmap to a file with inline data is not
    handled.  This may be a rare case, since it requires a writable memory
    map of a very small file, but it is trivial to trigger on an
    inline_data filesystem, and it causes the
    'BUG_ON(ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA));' in
    ext4_writepages() to be hit:
    
        mkfs.ext4 -O inline_data /dev/vdb
        mount /dev/vdb /mnt
        xfs_io -f /mnt/file \
            -c 'pwrite 0 1' \
            -c 'mmap -w 0 1m' \
            -c 'mwrite 0 1' \
            -c 'fsync'
    
            kernel BUG at fs/ext4/inode.c:2723!
            invalid opcode: 0000 [#1] SMP
            CPU: 1 PID: 2532 Comm: xfs_io Not tainted 4.11.0-rc1-xfstests-00301-g071d9acf3d1f #633
            Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-20170228_101828-anatol 04/01/2014
            task: ffff88003d3a8040 task.stack: ffffc90000300000
            RIP: 0010:ext4_writepages+0xc89/0xf8a
            RSP: 0018:ffffc90000303ca0 EFLAGS: 00010283
            RAX: 0000028410000000 RBX: ffff8800383fa3b0 RCX: ffffffff812afcdc
            RDX: 00000a9d00000246 RSI: ffffffff81e660e0 RDI: 0000000000000246
            RBP: ffffc90000303dc0 R08: 0000000000000002 R09: 869618e8f99b4fa5
            R10: 00000000852287a2 R11: 00000000a03b49f4 R12: ffff88003808e698
            R13: 0000000000000000 R14: 7fffffffffffffff R15: 7fffffffffffffff
            FS:  00007fd3e53094c0(0000) GS:ffff88003e400000(0000) knlGS:0000000000000000
            CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
            CR2: 00007fd3e4c51000 CR3: 000000003d554000 CR4: 00000000003406e0
            Call Trace:
             ? _raw_spin_unlock+0x27/0x2a
             ? kvm_clock_read+0x1e/0x20
             do_writepages+0x23/0x2c
             ? do_writepages+0x23/0x2c
             __filemap_fdatawrite_range+0x80/0x87
             filemap_write_and_wait_range+0x67/0x8c
             ext4_sync_file+0x20e/0x472
             vfs_fsync_range+0x8e/0x9f
             ? syscall_trace_enter+0x25b/0x2d0
             vfs_fsync+0x1c/0x1e
             do_fsync+0x31/0x4a
             SyS_fsync+0x10/0x14
             do_syscall_64+0x69/0x131
             entry_SYSCALL64_slow_path+0x25/0x25
    
    We could try to be smart and keep the inline data in this case, or at
    least support delayed allocation when allocating the block, but these
    solutions would be more complicated and don't seem worthwhile given how
    rare this case seems to be.  So just fix the bug by calling
    ext4_convert_inline_data() when we're asked to make a page writable, so
    that any inline data gets evicted, with the block allocated immediately.
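    The shape of the fix, sketched as a simplified userspace model (the
    struct and helpers are illustrative stand-ins, not ext4's API):

```c
/* Model of the fix: on the make-writable path, evict any inline data
 * (allocating a real block immediately) before proceeding, so the
 * writeback path never sees an inode with inline data. */
struct inode_model {
    int has_inline_data;
    int blocks;
};

static int convert_inline_data_model(struct inode_model *inode)
{
    /* stands in for ext4_convert_inline_data() */
    if (inode->has_inline_data) {
        inode->has_inline_data = 0;
        inode->blocks = 1;  /* block allocated immediately */
    }
    return 0;
}

static int page_mkwrite_model(struct inode_model *inode)
{
    int err = convert_inline_data_model(inode);
    if (err)
        return err;
    /* ... normal block mapping and writeback setup continues ... */
    return 0;
}
```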
    
    Reported-by: Nick Alcock <nick.alcock@oracle.com>
    Reviewed-by: Andreas Dilger <adilger@dilger.ca>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 7621cb7966b423cb09343b7d8a7bab87efbdd51a
Author: Jan Kara <jack@suse.cz>
Date:   Sat Apr 29 21:07:30 2017 -0400

    jbd2: fix dbench4 performance regression for 'nobarrier' mounts
    
    commit 5052b069acf73866d00077d8bc49983c3ee903e5 upstream.
    
    Commit b685d3d65ac7 "block: treat REQ_FUA and REQ_PREFLUSH as
    synchronous" removed REQ_SYNC flag from WRITE_FUA implementation. Since
    JBD2 strips REQ_FUA and REQ_FLUSH flags from submitted IO when the
    filesystem is mounted with nobarrier mount option, journal superblock
    writes ended up being async writes after this patch and that caused
    heavy performance regression for dbench4 benchmark with high number of
    processes. In my test setup with HP RAID array with non-volatile write
    cache and 32 GB ram, dbench4 runs with 8 processes regressed by ~25%.
    
    Fix the problem by making sure journal superblock writes are always
    treated as synchronous since they generally block progress of the
    journalling machinery and thus the whole filesystem.
    
    Fixes: b685d3d65ac7 ("block: treat REQ_FUA and REQ_PREFLUSH as synchronous")
    Signed-off-by: Jan Kara <jack@suse.cz>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 7a17cbb4b8dec4a608869138a7f668314d70437b
Author: Christian Borntraeger <borntraeger@de.ibm.com>
Date:   Thu Apr 6 09:51:52 2017 +0200

    perf annotate s390: Implement jump types for perf annotate
    
    commit d9f8dfa9baf9b6ae1f2f84f887176558ecde5268 upstream.
    
    Implement simple detection for all kind of jumps and branches.
    
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Cc: Andreas Krebbel <krebbel@linux.vnet.ibm.com>
    Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: linux-s390 <linux-s390@vger.kernel.org>
    Link: http://lkml.kernel.org/r/1491465112-45819-3-git-send-email-borntraeger@de.ibm.com
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b09eace76ed399b92a4ede8d2854bb2347a9cbab
Author: Christian Borntraeger <borntraeger@de.ibm.com>
Date:   Thu Apr 6 09:51:51 2017 +0200

    perf annotate s390: Fix perf annotate error -95 (4.10 regression)
    
    commit e77852b32d6d4430c68c38aaf73efe5650fa25af upstream.
    
    Since 4.10, perf annotate exits on s390 with an "unknown error -95".
    Turns out that commit 786c1b51844d ("perf annotate: Start supporting
    cross arch annotation") added a hard requirement for architecture
    support when objdump is used, but only provided x86 and arm support.
    Meanwhile power support was added, so let's add s390 as well.
    
    While at it, make sure to implement the branch and jump types.
    
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Cc: Andreas Krebbel <krebbel@linux.vnet.ibm.com>
    Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: linux-s390 <linux-s390@vger.kernel.org>
    Fixes: 786c1b51844 "perf annotate: Start supporting cross arch annotation"
    Link: http://lkml.kernel.org/r/1491465112-45819-2-git-send-email-borntraeger@de.ibm.com
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 08b0b5bd543a9f8fee67283e980cd7db51165bf9
Author: Adrian Hunter <adrian.hunter@intel.com>
Date:   Fri Mar 24 14:15:52 2017 +0200

    perf auxtrace: Fix no_size logic in addr_filter__resolve_kernel_syms()
    
    commit c3a0bbc7ad7598dec5a204868bdf8a2b1b51df14 upstream.
    
    Address filtering with kernel symbols incorrectly resulted in the error
    "Cannot determine size of symbol" because the no_size logic was the wrong
    way around.
    
    Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
    Tested-by: Andi Kleen <ak@linux.intel.com>
    Link: http://lkml.kernel.org/r/1490357752-27942-1-git-send-email-adrian.hunter@intel.com
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2d5845224145b04699bf157d0acbe817bd0dbadc
Author: Mike Marciniszyn <mike.marciniszyn@intel.com>
Date:   Sun Apr 9 10:16:35 2017 -0700

    IB/hfi1: Prevent kernel QP post send hard lockups
    
    commit b6eac931b9bb2bce4db7032c35b41e5e34ec22a5 upstream.
    
    The driver progress routines can call cond_resched() when
    a timeslice is exhausted and irqs are enabled.
    
    If the ULP had been holding a spin lock without disabling irqs and
    the post send directly called the progress routine, the cond_resched()
    could yield allowing another thread from the same ULP to deadlock
    on that same lock.
    
    Correct this by replacing the current hfi1_do_send() calldown with a
    unique one for post send, and adding an argument to hfi1_do_send() to
    indicate that the send engine is running in a thread.  If the routine
    is not running in a thread, avoid calling cond_resched().
    
    Fixes: 831464ce4b74 ("IB/hfi1: Don't call cond_resched in atomic mode when sending packets")
    Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
    Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
    Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
    Signed-off-by: Doug Ledford <dledford@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 7a48c89a3c88745f0bfb16c3ad8d7be26fc3482c
Author: Jack Morgenstein <jackm@dev.mellanox.co.il>
Date:   Tue Mar 21 12:57:06 2017 +0200

    IB/mlx4: Reduce SRIOV multicast cleanup warning message to debug level
    
    commit fb7a91746af18b2ebf596778b38a709cdbc488d3 upstream.
    
    A warning message during SRIOV multicast cleanup should have actually been
    a debug level message. The condition generating the warning does no harm
    and can fill the message log.
    
    In some cases, during testing, some tests were so intense as to swamp the
    message log with these warning messages, causing a stall in the console
    message log output task. This stall caused an NMI to be sent to all CPUs
    (so that they all dumped their stacks into the message log).
    Aside from the message flood causing an NMI, the tests all passed.
    
    Once the message flood which caused the NMI is removed (by reducing the
    warning message to debug level), the NMI no longer occurs.
    
    Sample message log (console log) output illustrating the flood and
    resultant NMI (snippets with comments and modified with ... instead
    of hex digits, to satisfy checkpatch.pl):
    
     <mlx4_ib> _mlx4_ib_mcg_port_cleanup: ... WARNING: group refcount 1!!!...
     *** About 4000 almost identical lines in less than one second ***
     <mlx4_ib> _mlx4_ib_mcg_port_cleanup: ... WARNING: group refcount 1!!!...
     INFO: rcu_sched detected stalls on CPUs/tasks: { 17} (...)
     *** { 17} above indicates that CPU 17 was the one that stalled ***
     sending NMI to all CPUs:
     ...
     NMI backtrace for cpu 17
     CPU: 17 PID: 45909 Comm: kworker/17:2
     Hardware name: HP ProLiant DL360p Gen8, BIOS P71 09/08/2013
     Workqueue: events fb_flashcursor
     task: ffff880478...... ti: ffff88064e...... task.ti: ffff88064e......
     RIP: 0010:[ffffffff81......]  [ffffffff81......] io_serial_in+0x15/0x20
     RSP: 0018:ffff88064e257cb0  EFLAGS: 00000002
     RAX: 0000000000...... RBX: ffffffff81...... RCX: 0000000000......
     RDX: 0000000000...... RSI: 0000000000...... RDI: ffffffff81......
     RBP: ffff88064e...... R08: ffffffff81...... R09: 0000000000......
     R10: 0000000000...... R11: ffff88064e...... R12: 0000000000......
     R13: 0000000000...... R14: ffffffff81...... R15: 0000000000......
     FS:  0000000000......(0000) GS:ffff8804af......(0000) knlGS:000000000000
     CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080......
     CR2: 00007f2a2f...... CR3: 0000000001...... CR4: 0000000000......
     DR0: 0000000000...... DR1: 0000000000...... DR2: 0000000000......
     DR3: 0000000000...... DR6: 00000000ff...... DR7: 0000000000......
     Stack:
     ffff88064e...... ffffffff81...... ffffffff81...... 0000000000......
     ffffffff81...... ffff88064e...... ffffffff81...... ffffffff81......
     ffffffff81...... ffff88064e...... ffffffff81...... 0000000000......
     Call Trace:
    [<ffffffff813d099b>] wait_for_xmitr+0x3b/0xa0
    [<ffffffff813d0b5c>] serial8250_console_putchar+0x1c/0x30
    [<ffffffff813d0b40>] ? serial8250_console_write+0x140/0x140
    [<ffffffff813cb5fa>] uart_console_write+0x3a/0x80
    [<ffffffff813d0aae>] serial8250_console_write+0xae/0x140
    [<ffffffff8107c4d1>] call_console_drivers.constprop.15+0x91/0xf0
    [<ffffffff8107d6cf>] console_unlock+0x3bf/0x400
    [<ffffffff813503cd>] fb_flashcursor+0x5d/0x140
    [<ffffffff81355c30>] ? bit_clear+0x120/0x120
    [<ffffffff8109d5fb>] process_one_work+0x17b/0x470
    [<ffffffff8109e3cb>] worker_thread+0x11b/0x400
    [<ffffffff8109e2b0>] ? rescuer_thread+0x400/0x400
    [<ffffffff810a5aef>] kthread+0xcf/0xe0
    [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
    [<ffffffff81645858>] ret_from_fork+0x58/0x90
    [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
    Code: 48 89 e5 d3 e6 48 63 f6 48 03 77 10 8b 06 5d c3 66 0f 1f 44 00 00 66 66 66 6
    
    As indicated in the stack trace above, the console output task got swamped.
    
    Fixes: b9c5d6a64358 ("IB/mlx4: Add multicast group (MCG) paravirtualization for SR-IOV")
    Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>
    Signed-off-by: Doug Ledford <dledford@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 295a7aab354d14f0303867cf7fd71af26336f4d8
Author: Jack Morgenstein <jackm@dev.mellanox.co.il>
Date:   Tue Mar 21 12:57:05 2017 +0200

    IB/mlx4: Fix ib device initialization error flow
    
    commit 99e68909d5aba1861897fe7afc3306c3c81b6de0 upstream.
    
    In mlx4_ib_add, procedure mlx4_ib_alloc_eqs is called to allocate EQs.
    
    However, in the mlx4_ib_add error flow, procedure mlx4_ib_free_eqs is not
    called to free the allocated EQs.
    
    Fixes: e605b743f33d ("IB/mlx4: Increase the number of vectors (EQs) available for ULPs")
    Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>
    Signed-off-by: Doug Ledford <dledford@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f98af463f8bb0fd0d7b2ffaec54ac6854f1b49b6
Author: Shamir Rabinovitch <shamir.rabinovitch@oracle.com>
Date:   Wed Mar 29 06:21:59 2017 -0400

    IB/IPoIB: ibX: failed to create mcg debug file
    
    commit 771a52584096c45e4565e8aabb596eece9d73d61 upstream.
    
    When udev renames the netdev devices, the ipoib debugfs entries do not
    get renamed. As a result, if a subsequent probe of an ipoib device
    reuses the name, then creating a debugfs entry for the new device fails.
    
    Also, move ipoib_create_debug_files and ipoib_delete_debug_files into
    the ipoib event handling in order to avoid any race condition between
    them.
    
    Fixes: 1732b0ef3b3a ("[IPoIB] add path record information in debugfs")
    Signed-off-by: Vijay Kumar <vijay.ac.kumar@oracle.com>
    Signed-off-by: Shamir Rabinovitch <shamir.rabinovitch@oracle.com>
    Reviewed-by: Mark Bloch <markb@mellanox.com>
    Signed-off-by: Doug Ledford <dledford@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 88ded1f18b4ee3cef7d08883cf51d0a226d51c72
Author: Michael J. Ruhl <michael.j.ruhl@intel.com>
Date:   Sun Apr 9 10:15:51 2017 -0700

    IB/core: For multicast functions, verify that LIDs are multicast LIDs
    
    commit 8561eae60ff9417a50fa1fb2b83ae950dc5c1e21 upstream.
    
    The InfiniBand spec states that "A multicast address is defined by a
    MGID and a MLID" (section 10.5).  Currently the MLID value is not
    validated.
    
    Add a check to verify that the MLID value is in the correct address
    range.
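    The multicast LID range check described above can be modeled as a
    small predicate. This is an illustrative sketch, not the kernel's
    helper; the function name and the standalone form are assumptions,
    while the range itself (0xC000 through 0xFFFE, with 0xFFFF being the
    permissive LID) comes from the InfiniBand spec:

```c
#include <stdbool.h>
#include <stdint.h>

/* IBA multicast LID range: 0xC000 through 0xFFFE inclusive.
 * 0xFFFF is the permissive LID and is not a valid MLID. */
static bool lid_is_multicast(uint16_t lid)
{
    return lid >= 0xc000 && lid <= 0xfffe;
}
```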
    
    Fixes: 0c33aeedb2cf ("[IB] Add checks to multicast attach and detach")
    Reviewed-by: Ira Weiny <ira.weiny@intel.com>
    Reviewed-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
    Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
    Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
    Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
    Signed-off-by: Doug Ledford <dledford@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 82f0fecceef090493b8fe6faea75dad2a8473d46
Author: Parav Pandit <parav@mellanox.com>
Date:   Sun Mar 19 10:55:55 2017 +0200

    IB/core: Fix kernel crash during fail to initialize device
    
    commit 4be3a4fa51f432ef045546d16f25c68a1ab525b9 upstream.
    
    This patch fixes the kernel crash that occurs during
    ib_dealloc_device() when a provider driver fails with an error after
    ib_alloc_device() and before it can register using
    ib_register_device().
    
    This crash, seen in the lab as shown below, can occur with any IB
    device which fails to perform its device initialization before
    invoking ib_register_device().
    
    This patch avoids touching the cache and port immutable structures if
    the device is not yet initialized.  It also releases the related
    memory when cache and port immutable data structure initialization
    fails during the register_device() state.
    
    [81416.561946] BUG: unable to handle kernel NULL pointer dereference at (null)
    [81416.570340] IP: ib_cache_release_one+0x29/0x80 [ib_core]
    [81416.576222] PGD 78da66067
    [81416.576223] PUD 7f2d7c067
    [81416.579484] PMD 0
    [81416.582720]
    [81416.587242] Oops: 0000 [#1] SMP
    [81416.722395] task: ffff8807887515c0 task.stack: ffffc900062c0000
    [81416.729148] RIP: 0010:ib_cache_release_one+0x29/0x80 [ib_core]
    [81416.735793] RSP: 0018:ffffc900062c3a90 EFLAGS: 00010202
    [81416.741823] RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000000
    [81416.749785] RDX: 0000000000000000 RSI: 0000000000000282 RDI: ffff880859fec000
    [81416.757757] RBP: ffffc900062c3aa0 R08: ffff8808536e5ac0 R09: ffff880859fec5b0
    [81416.765708] R10: 00000000536e5c01 R11: ffff8808536e5ac0 R12: ffff880859fec000
    [81416.773672] R13: 0000000000000000 R14: ffff8808536e5ac0 R15: ffff88084ebc0060
    [81416.781621] FS:  00007fd879fab740(0000) GS:ffff88085fac0000(0000) knlGS:0000000000000000
    [81416.790522] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [81416.797094] CR2: 0000000000000000 CR3: 00000007eb215000 CR4: 00000000003406e0
    [81416.805051] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [81416.812997] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    [81416.820950] Call Trace:
    [81416.824226]  ib_device_release+0x1e/0x40 [ib_core]
    [81416.829858]  device_release+0x32/0xa0
    [81416.834370]  kobject_cleanup+0x63/0x170
    [81416.839058]  kobject_put+0x25/0x50
    [81416.843319]  ib_dealloc_device+0x25/0x40 [ib_core]
    [81416.848986]  mlx5_ib_add+0x163/0x1990 [mlx5_ib]
    [81416.854414]  mlx5_add_device+0x5a/0x160 [mlx5_core]
    [81416.860191]  mlx5_register_interface+0x8d/0xc0 [mlx5_core]
    [81416.866587]  ? 0xffffffffa09e9000
    [81416.870816]  mlx5_ib_init+0x15/0x17 [mlx5_ib]
    [81416.876094]  do_one_initcall+0x51/0x1b0
    [81416.880861]  ? __vunmap+0x85/0xd0
    [81416.885113]  ? kmem_cache_alloc_trace+0x14b/0x1b0
    [81416.890768]  ? vfree+0x2e/0x70
    [81416.894762]  do_init_module+0x60/0x1fa
    [81416.899441]  load_module+0x15f6/0x1af0
    [81416.904114]  ? __symbol_put+0x60/0x60
    [81416.908709]  ? ima_post_read_file+0x3d/0x80
    [81416.913828]  ? security_kernel_post_read_file+0x6b/0x80
    [81416.920006]  SYSC_finit_module+0xa6/0xf0
    [81416.924888]  SyS_finit_module+0xe/0x10
    [81416.929568]  entry_SYSCALL_64_fastpath+0x1a/0xa9
    [81416.935089] RIP: 0033:0x7fd879494949
    [81416.939543] RSP: 002b:00007ffdbc1b4e58 EFLAGS: 00000202 ORIG_RAX: 0000000000000139
    [81416.947982] RAX: ffffffffffffffda RBX: 0000000001b66f00 RCX: 00007fd879494949
    [81416.955965] RDX: 0000000000000000 RSI: 000000000041a13c RDI: 0000000000000003
    [81416.963926] RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000001b652a0
    [81416.971861] R10: 0000000000000003 R11: 0000000000000202 R12: 00007ffdbc1b3e70
    [81416.979763] R13: 00007ffdbc1b3e50 R14: 0000000000000005 R15: 0000000000000000
    [81417.008005] RIP: ib_cache_release_one+0x29/0x80 [ib_core] RSP: ffffc900062c3a90
    [81417.016045] CR2: 0000000000000000
    
    Fixes: 55aeed0654 ("IB/core: Make ib_alloc_device init the kobject")
    Fixes: 7738613e7c ("IB/core: Add per port immutable struct to ib_device")
    Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
    Signed-off-by: Parav Pandit <parav@mellanox.com>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>
    Signed-off-by: Doug Ledford <dledford@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 5c7f1dfa7b512c7f60f55a6273aae5a09779b76d
Author: Jack Morgenstein <jackm@dev.mellanox.co.il>
Date:   Sun Mar 19 10:55:57 2017 +0200

    IB/core: Fix sysfs registration error flow
    
    commit b312be3d87e4c80872cbea869e569175c5eb0f9a upstream.
    
    The kernel commit cited below restructured ib device management
    so that the device kobject is initialized in ib_alloc_device.
    
    As part of the restructuring, the kobject is now initialized in
    procedure ib_alloc_device, and is later added to the device hierarchy
    in the ib_register_device call stack, in procedure
    ib_device_register_sysfs (which calls device_add).
    
    However, in the ib_device_register_sysfs error flow, if an error
    occurs following the call to device_add, the cleanup procedure
    device_unregister is called. This call results in the device object
    being deleted -- which results in various use-after-free crashes.
    
    The correct cleanup call is device_del -- which undoes device_add
    without deleting the device object.
    
    The device object will then (correctly) be deleted in the
    ib_register_device caller's error cleanup flow, when the caller invokes
    ib_dealloc_device.
    
    Fixes: 55aeed06544f6 ("IB/core: Make ib_alloc_device init the kobject")
    Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>
    Signed-off-by: Doug Ledford <dledford@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 09944c26606e5a1e6e7b76e58130ffed647c9d3f
Author: Ding Tianhong <dingtianhong@huawei.com>
Date:   Sat Apr 29 10:38:48 2017 +0800

    iov_iter: don't revert iov buffer if csum error
    
    commit a6a5993243550b09f620941dea741b7421fdf79c upstream.
    
    Commit 327868212381 ("make skb_copy_datagram_msg() et al. preserve
    ->msg_iter on error") reverts the iov buffer if the copy to the iter
    fails, but no datagram has been copied when skb_checksum_complete()
    fails, so there is no need to revert any data at this point.
    
    v2: Sabrina noticed that returning -EFAULT on a checksum error is not
        correct here; it would confuse the caller about the return value,
        so fix it.
    
    Fixes: 327868212381 ("make skb_copy_datagram_msg() et.al. preserve ->msg_iter on error")
    Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
    Acked-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 1e0b0a9ef081bbe75646401db313aa84636b08a0
Author: Alex Williamson <alex.williamson@redhat.com>
Date:   Thu Apr 13 14:10:15 2017 -0600

    vfio/type1: Remove locked page accounting workqueue
    
    commit 0cfef2b7410b64d7a430947e0b533314c4f97153 upstream.
    
    If the mmap_sem is contended then the vfio type1 IOMMU backend will
    defer locked page accounting updates to a workqueue task.  This has a
    few problems and depending on which side the user tries to play, they
    might be over-penalized for unmaps that haven't yet been accounted or
    race the workqueue to enter more mappings than they're allowed.  The
    original intent of this workqueue mechanism seems to be focused on
    reducing latency through the ioctl, but we cannot do so at the cost
    of correctness.  Remove this workqueue mechanism and update the
    callers to allow for failure.  We can also now recheck the limit under
    write lock to make sure we don't exceed it.
    
    vfio_pin_pages_remote() also now necessarily includes an unwind path
    which we can jump to directly if the consecutive page pinning finds
    that we're exceeding the user's memory limits.  This avoids the
    current lazy approach which does accounting and mapping up to the
    fault, only to return an error on the next iteration to unwind the
    entire vfio_dma.
    
    Reviewed-by: Peter Xu <peterx@redhat.com>
    Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com>
    Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 34224e0e1c3141c11c8176be7c14a0360ee9bfe1
Author: Dennis Yang <dennisyang@qnap.com>
Date:   Tue Apr 18 15:27:06 2017 +0800

    dm thin: fix a memory leak when passing discard bio down
    
    commit 948f581a53b704b984aa20df009f0a2b4cf7f907 upstream.
    
    dm-thin does not free the discard_parent bio after all chained sub-bios
    have finished. The following kmemleak report could be observed after a
    pool with the discard_passdown option processes discard bios on
    linux v4.11-rc7. To fix this, drop the discard_parent bio reference
    when its endio (passdown_endio) is called.
    
    unreferenced object 0xffff8803d6b29700 (size 256):
      comm "kworker/u8:0", pid 30349, jiffies 4379504020 (age 143002.776s)
      hex dump (first 32 bytes):
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
        01 00 00 00 00 00 00 f0 00 00 00 00 00 00 00 00  ................
      backtrace:
        [<ffffffff81a5efd9>] kmemleak_alloc+0x49/0xa0
        [<ffffffff8114ec34>] kmem_cache_alloc+0xb4/0x100
        [<ffffffff8110eec0>] mempool_alloc_slab+0x10/0x20
        [<ffffffff8110efa5>] mempool_alloc+0x55/0x150
        [<ffffffff81374939>] bio_alloc_bioset+0xb9/0x260
        [<ffffffffa018fd20>] process_prepared_discard_passdown_pt1+0x40/0x1c0 [dm_thin_pool]
        [<ffffffffa018b409>] break_up_discard_bio+0x1a9/0x200 [dm_thin_pool]
        [<ffffffffa018b484>] process_discard_cell_passdown+0x24/0x40 [dm_thin_pool]
        [<ffffffffa018b24d>] process_discard_bio+0xdd/0xf0 [dm_thin_pool]
        [<ffffffffa018ecf6>] do_worker+0xa76/0xd50 [dm_thin_pool]
        [<ffffffff81086239>] process_one_work+0x139/0x370
        [<ffffffff810867b1>] worker_thread+0x61/0x450
        [<ffffffff8108b316>] kthread+0xd6/0xf0
        [<ffffffff81a6cd1f>] ret_from_fork+0x3f/0x70
        [<ffffffffffffffff>] 0xffffffffffffffff
    
    Signed-off-by: Dennis Yang <dennisyang@qnap.com>
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 899750e7d9e811102787bec025cf33b23e49c532
Author: Bart Van Assche <bart.vanassche@sandisk.com>
Date:   Thu Apr 27 10:11:19 2017 -0700

    dm rq: check blk_mq_register_dev() return value in dm_mq_init_request_queue()
    
    commit 23a601248958fa4142d49294352fe8d1fdf3e509 upstream.
    
    Otherwise the request-based DM blk-mq request_queue will be put into
    service without being properly exported via sysfs.
    
    Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
    Reviewed-by: Hannes Reinecke <hare@suse.com>
    Cc: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit ee68aa113f7341f850c71156316913df2db580fd
Author: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com>
Date:   Fri Apr 7 12:14:55 2017 -0700

    dm era: save spacemap metadata root after the pre-commit
    
    commit 117aceb030307dcd431fdcff87ce988d3016c34a upstream.
    
    When committing era metadata to disk, the latest spacemap metadata
    root is not always saved in the superblock.  Due to this, the metadata
    sometimes gets corrupted when the device is reopened.  The correct
    order of update is: pre-commit (shadows the spacemap root), save the
    spacemap root (the newly shadowed block) to the in-core superblock,
    and then the final commit.
    
    Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com>
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f38faf569e923b121d666e59c61ebfe14e9c51cb
Author: Ondrej Kozina <okozina@redhat.com>
Date:   Mon Apr 24 14:21:53 2017 +0200

    dm crypt: rewrite (wipe) key in crypto layer using random data
    
    commit c82feeec9a014b72c4ffea36648cfb6f81cc1b73 upstream.
    
    The "key wipe" message used to wipe the real key stored in the crypto
    layer by rewriting it with zeroes.  Since commit 28856a9 ("crypto: xts -
    consolidate sanity check for keys") this no longer works in FIPS mode
    for XTS, because in FIPS mode the crypto key part has to differ from
    the tweak key.
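    Assuming the consolidated XTS sanity check rejects, in FIPS mode, any
    key whose cipher half equals its tweak half (which is what the message
    above implies), a standalone model shows why an all-zero wipe pattern
    trips it. The function name and the plain `fips` parameter are
    illustrative, not the kernel's actual interface:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* An XTS key is two concatenated sub-keys: the cipher key and the
 * tweak key.  In FIPS mode the two halves must differ, so a key
 * wiped with zeroes (identical halves) is rejected by setkey. */
static bool xts_key_ok(const unsigned char *key, size_t keylen, bool fips)
{
    if (keylen % 2)
        return false;
    if (fips && memcmp(key, key + keylen / 2, keylen / 2) == 0)
        return false;
    return true;
}
```

    Wiping with random data makes the two halves differ with overwhelming
    probability, which is why the fix rewrites the key with random bytes
    instead of zeroes.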
    
    Fixes: 28856a9 ("crypto: xts - consolidate sanity check for keys")
    Signed-off-by: Ondrej Kozina <okozina@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6ab60d2b26dd94a7918bd1da902c2b9bea18ba10
Author: Gary R Hook <gary.hook@amd.com>
Date:   Fri Apr 21 10:50:14 2017 -0500

    crypto: ccp - Change ISR handler method for a v5 CCP
    
    commit 6263b51eb3190d30351360fd168959af7e3a49a9 upstream.
    
    The CCP has the ability to perform several operations simultaneously,
    but has only one interrupt.  When implemented as a PCI device and using
    MSI-X/MSI interrupts, use a tasklet model to service interrupts. By
    disabling and enabling interrupts from the CCP, coupled with the
    queuing that tasklets provide, we can ensure that all events
    (occurring on the device) are recognized and serviced.
    
    This change fixes a problem wherein 2 or more busy queues can cause
    notification bits to change state while a (CCP) interrupt is being
    serviced, but after the queue state has been evaluated. This results
    in the event being 'lost' and the queue hanging, waiting to be
    serviced. Since the status bits are never fully de-asserted, the
    CCP never generates another interrupt (all bits zero -> one or more
    bits one), and no further CCP operations will be executed.
    
    Signed-off-by: Gary R Hook <gary.hook@amd.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 747cbf47532650c5883ffa598fd290ea048acca9
Author: Gary R Hook <gary.hook@amd.com>
Date:   Fri Apr 21 10:50:05 2017 -0500

    crypto: ccp - Change ISR handler method for a v3 CCP
    
    commit 7b537b24e76a1e8e6d7ea91483a45d5b1426809b upstream.
    
    The CCP has the ability to perform several operations simultaneously,
    but has only one interrupt.  When implemented as a PCI device and using
    MSI-X/MSI interrupts, use a tasklet model to service interrupts. By
    disabling and enabling interrupts from the CCP, coupled with the
    queuing that tasklets provide, we can ensure that all events
    (occurring on the device) are recognized and serviced.
    
    This change fixes a problem wherein 2 or more busy queues can cause
    notification bits to change state while a (CCP) interrupt is being
    serviced, but after the queue state has been evaluated. This results
    in the event being 'lost' and the queue hanging, waiting to be
    serviced. Since the status bits are never fully de-asserted, the
    CCP never generates another interrupt (all bits zero -> one or more
    bits one), and no further CCP operations will be executed.
    
    Signed-off-by: Gary R Hook <gary.hook@amd.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 511b79fb949e8eacfbfd0c750700e814c41e4438
Author: Gary R Hook <ghook@amd.com>
Date:   Thu Apr 20 15:24:22 2017 -0500

    crypto: ccp - Disable interrupts early on unload
    
    commit 116591fe3eef11c6f06b662c9176385f13891183 upstream.
    
    Ensure that we disable interrupts first when shutting down
    the driver.
    
    Signed-off-by: Gary R Hook <ghook@amd.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit c133daf731d936e9861c99fcc8d303e1cd11645f
Author: Gary R Hook <gary.hook@amd.com>
Date:   Thu Apr 20 15:24:09 2017 -0500

    crypto: ccp - Use only the relevant interrupt bits
    
    commit 56467cb11cf8ae4db9003f54b3d3425b5f07a10a upstream.
    
    Each CCP queue can produce interrupts for 4 conditions:
    operation complete, queue empty, error, and queue stopped.
    This driver only works with completion and error events.
    
    Signed-off-by: Gary R Hook <gary.hook@amd.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b576fed7c831d4aeaf29029770066a8d69bef230
Author: Stephan Mueller <smueller@chronox.de>
Date:   Mon Apr 24 11:15:23 2017 +0200

    crypto: algif_aead - Require setkey before accept(2)
    
    commit 2a2a251f110576b1d89efbd0662677d7e7db21a8 upstream.
    
    Some cipher implementations will crash if you try to use them
    without calling setkey first.  This patch adds a check so that
    the accept(2) call will fail with -ENOKEY if setkey hasn't been
    done on the socket yet.
    
    Fixes: 400c40cf78da ("crypto: algif - add AEAD support")
    Signed-off-by: Stephan Mueller <smueller@chronox.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 72a03cf8e35fbff333ecf2edaf33febd5d6fe60d
Author: Krzysztof Kozlowski <krzk@kernel.org>
Date:   Fri Mar 17 16:49:19 2017 +0200

    crypto: s5p-sss - Close possible race for completed requests
    
    commit 42d5c176b76e190a4a3e0dfeffdae661755955b6 upstream.
    
    The driver is capable of handling only one request at a time, and it
    stores that request in its state container, struct s5p_aes_dev.  The
    stored request must be protected between concurrent invocations
    (e.g. completing the current request and scheduling a new one).  A
    combination of a lock and a "busy" field is used for that purpose.
    
    When the "busy" field is true, the driver will not accept a new
    request and thus will not overwrite the data currently being handled.
    
    However, commit 28b62b145868 ("crypto: s5p-sss - Fix spinlock recursion
    on LRW(AES)") moved some of the writes to the "busy" field out of the
    lock-protected critical section.  This might lead to a race between
    completing the current request and scheduling a new one.  Effectively,
    the request completion might try to operate on the new crypto request.
    
    Fixes: 28b62b145868 ("crypto: s5p-sss - Fix spinlock recursion on LRW(AES)")
    Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
    Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d9ae27b661af72b7140dd2b700c937f5e273cd2c
Author: Mike Snitzer <snitzer@redhat.com>
Date:   Sat Apr 22 17:22:09 2017 -0400

    block: fix blk_integrity_register to use template's interval_exp if not 0
    
    commit 2859323e35ab5fc42f351fbda23ab544eaa85945 upstream.
    
    When registering an integrity profile: if the template's interval_exp is
    not 0 use it, otherwise use the ilog2() of logical block size of the
    provided gendisk.
    
    This fixes a long-standing DM linear target bug where it cannot pass
    integrity data to the underlying device if its logical block size
    conflicts with the underlying device's logical block size.
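    The selection logic described above reduces to a one-line fallback. A
    minimal sketch with a local ilog2 for power-of-two block sizes; the
    function names are illustrative, not the block layer's:

```c
#include <stdint.h>

/* ilog2 for a power-of-two block size (e.g. 512 -> 9, 4096 -> 12). */
static unsigned int ilog2_u32(uint32_t v)
{
    unsigned int r = 0;
    while (v >>= 1)
        r++;
    return r;
}

/* Use the template's interval_exp when it is non-zero; otherwise
 * derive it from the gendisk's logical block size. */
static unsigned int pick_interval_exp(unsigned int template_exp,
                                      uint32_t logical_block_size)
{
    return template_exp ? template_exp : ilog2_u32(logical_block_size);
}
```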
    
    Reported-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Jens Axboe <axboe@fb.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d97830327509462d6013128ab84b05036e939c1b
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Thu Apr 27 19:06:48 2017 +0100

    arm64: KVM: Fix decoding of Rt/Rt2 when trapping AArch32 CP accesses
    
    commit c667186f1c01ca8970c785888868b7ffd74e51ee upstream.
    
    Our 32-bit CP14/15 handling inherited some of the ARMv7 code for
    handling the trapped system registers, completely missing the fact
    that the fields for Rt and Rt2 are now 5 bits wide, not 4...
    
    Let's fix it, and provide an accessor for the most common Rt case.
    
    Reviewed-by: Christoffer Dall <cdall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <cdall@linaro.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 95121cc98ca0f9935c1d358180ecac1e4b161b64
Author: Andrew Jones <drjones@redhat.com>
Date:   Tue Apr 18 17:59:58 2017 +0200

    KVM: arm/arm64: fix races in kvm_psci_vcpu_on
    
    commit 6c7a5dce22b3f3cc44be098e2837fa6797edb8b8 upstream.
    
    Fix potential races in kvm_psci_vcpu_on() by taking the kvm->lock
    mutex.  In general, it's a bad idea to allow more than one PSCI_CPU_ON
    to process the same target VCPU at the same time.  One such problem
    that may arise is that one PSCI_CPU_ON could be resetting the target
    vcpu, which fills the entire sys_regs array with a temporary value
    including the MPIDR register, while another looks up the VCPU based
    on the MPIDR value, resulting in no target VCPU found.  Resolves both
    races found with the kvm-unit-tests/arm/psci unit test.
    
    Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Christoffer Dall <cdall@linaro.org>
    Reported-by: Levente Kurusa <lkurusa@redhat.com>
    Suggested-by: Christoffer Dall <cdall@linaro.org>
    Signed-off-by: Andrew Jones <drjones@redhat.com>
    Signed-off-by: Christoffer Dall <cdall@linaro.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d1daa545bbc0945f10e420d859d849252de7affb
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Tue May 2 16:20:18 2017 +0200

    Revert "KVM: Support vCPU-based gfn->hva cache"
    
    commit 4e335d9e7ddbcf83d03e7fbe65797ebed2272c18 upstream.
    
    This reverts commit bbd6411513aa8ef3ea02abab61318daf87c1af1e.
    
    I've been sitting on this revert for too long and it unfortunately
    missed 4.11.  It's also the reason why I haven't merged ring-based
    dirty tracking for 4.12.
    
    Using kvm_vcpu_memslots in kvm_gfn_to_hva_cache_init and
    kvm_vcpu_write_guest_offset_cached means that the MSR value can
    now be used to access SMRAM, simply by making it point to an SMRAM
    physical address.  This is problematic because it lets the guest
    OS overwrite memory that it shouldn't be able to touch.
    
    Fixes: bbd6411513aa ("KVM: Support vCPU-based gfn->hva cache")
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2d271c9508474367a7526f636a672ea3eadba58e
Author: David Hildenbrand <david@redhat.com>
Date:   Thu Mar 23 11:46:03 2017 +0100

    KVM: x86: fix user triggerable warning in kvm_apic_accept_events()
    
    commit 28bf28887976d8881a3a59491896c718fade7355 upstream.
    
    If we already entered/are about to enter SMM, don't allow switching to
    INIT/SIPI_RECEIVED, otherwise the next call to kvm_apic_accept_events()
    will report a warning.
    
    Same applies if we are already in MP state INIT_RECEIVED and SMM is
    requested to be turned on. Refuse to set the VCPU events in this case.
    
    Fixes: cd7764fe9f73 ("KVM: x86: latch INITs while in system management mode")
    Reported-by: Dmitry Vyukov <dvyukov@google.com>
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit d85d4c871a00afc100d0c62a0faf86103b5c095f
Author: Vince Weaver <vincent.weaver@maine.edu>
Date:   Tue May 2 14:08:50 2017 -0400

    perf/x86: Fix Broadwell-EP DRAM RAPL events
    
    commit 33b88e708e7dfa58dc896da2a98f5719d2eb315c upstream.
    
    It appears as though the Broadwell-EP DRAM units share the special
    units quirk with Haswell-EP/KNL.
    
    Without this patch, you get really high results (a single DRAM using 20W
    of power).
    
    The powercap driver in drivers/powercap/intel_rapl.c already has this
    change.
    
    Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
    Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
    Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
    Cc: Jiri Olsa <jolsa@redhat.com>
    Cc: Kan Liang <kan.liang@intel.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Stephane Eranian <eranian@gmail.com>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9e748196781e904272e9d0e266deccca92ba8215
Author: Richard Weinberger <richard@nod.at>
Date:   Sat Apr 1 00:41:57 2017 +0200

    um: Fix PTRACE_POKEUSER on x86_64
    
    commit 9abc74a22d85ab29cef9896a2582a530da7e79bf upstream.
    
    This has been broken since forever, but sadly nobody noticed.
    Recent versions of GDB set DR_CONTROL unconditionally and
    UML dies due to a heap corruption. It turns out that
    the PTRACE_POKEUSER code was copied and pasted from i386 and
    assumes that addresses are 4 bytes long.
    
    Fix that by using 8 as address size in the calculation.
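    The bug amounts to dividing a user-struct offset by the i386 word
    size instead of the x86_64 one. A standalone model of the index
    calculation (the base offset and function name are illustrative, not
    UML's actual struct layout):

```c
/* Each slot in the user struct's debug-register area is word_size
 * bytes wide.  Dividing by 4 (the i386 word size) on x86_64, where
 * slots are 8 bytes, yields indices past the end of the array and
 * corrupts adjacent memory. */
static long dr_index(long addr, long dr_base, int word_size)
{
    return (addr - dr_base) / word_size;
}
```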
    
    Reported-by: jie cao <cj3054@gmail.com>
    Signed-off-by: Richard Weinberger <richard@nod.at>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f25c69bd5ed57231bdf47aa88837a74141b4dc81
Author: Ben Hutchings <ben.hutchings@codethink.co.uk>
Date:   Tue May 9 18:00:43 2017 +0100

    x86, pmem: Fix cache flushing for iovec write < 8 bytes
    
    commit 8376efd31d3d7c44bd05be337adde023cc531fa1 upstream.
    
    Commit 11e63f6d920d added cache flushing for unaligned writes from an
    iovec, covering the first and last cache line of a >= 8 byte write and
    the first cache line of a < 8 byte write.  But an unaligned write of
    2-7 bytes can still cover two cache lines, so make sure we flush both
    in that case.
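    Whether an unaligned write touches one or two cache lines depends on
    its start and end addresses, not on its length alone. A sketch of the
    arithmetic, assuming 64-byte cache lines (the helper name is
    illustrative):

```c
#include <stddef.h>
#include <stdint.h>

#define CACHELINE 64

/* Number of distinct cache lines covered by [addr, addr + len).
 * Even a 2-7 byte write can straddle a line boundary, in which
 * case both lines must be flushed. */
static unsigned int lines_covered(uintptr_t addr, size_t len)
{
    uintptr_t first = addr / CACHELINE;
    uintptr_t last = (addr + len - 1) / CACHELINE;
    return (unsigned int)(last - first + 1);
}
```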
    
    Fixes: 11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...")
    Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 20fd61dbb15479b8940661f3be8aaebd478b5844
Author: Andy Lutomirski <luto@kernel.org>
Date:   Wed Mar 22 14:32:29 2017 -0700

    selftests/x86/ldt_gdt_32: Work around a glibc sigaction() bug
    
    commit 65973dd3fd31151823f4b8c289eebbb3fb7e6bc0 upstream.
    
    i386 glibc is buggy and calls the sigaction syscall incorrectly.
    
    This is asymptomatic for normal programs, but it blows up on
    programs that do evil things with segmentation.  The ldt_gdt
    self-test is an example of such an evil program.
    
    This doesn't appear to be a regression -- I think I just got lucky
    with the uninitialized memory that glibc threw at the kernel when I
    wrote the test.
    
    This hackish fix manually issues sigaction(2) syscalls to undo the
    damage.  Without the fix, ldt_gdt_32 segfaults; with the fix, it
    passes for me.
    
    See: https://sourceware.org/bugzilla/show_bug.cgi?id=21269
    
    Signed-off-by: Andy Lutomirski <luto@kernel.org>
    Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Juergen Gross <jgross@suse.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Garnier <thgarnie@google.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/aaab0f9f93c9af25396f01232608c163a760a668.1490218061.git.luto@kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 0dc0e26fee698efdba2260e83ecf26dffef5fe09
Author: Ashish Kalra <ashish@bluestacks.com>
Date:   Wed Apr 19 20:50:15 2017 +0530

    x86/boot: Fix BSS corruption/overwrite bug in early x86 kernel startup
    
    commit d594aa0277e541bb997aef0bc0a55172d8138340 upstream.
    
    The minimum size (512 bytes) of the new stack set up for arch/x86/boot
    components, when the bootloader does not set up or provide a stack for
    the early boot components, is not "enough".
    
    The setup code executing as part of early kernel startup uses the
    stack beyond 512 bytes and accidentally overwrites and corrupts part
    of the BSS section. This is exposed mostly in the early video setup
    code, where it corrupted BSS variables like force_x and force_y, which
    in turn affected kernel parameters such as screen_info
    (screen_info.orig_video_cols) and later caused an exception/panic in
    console_init().
    
    Most recent boot loaders set up the stack for the early boot
    components, so this issue of the stack overwriting into the BSS
    section has not been exposed.
    
    Signed-off-by: Ashish Kalra <ashish@bluestacks.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/20170419152015.10011-1-ashishkalra@Ashishs-MacBook-Pro.local
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 12001f3a456a7714e71ef324d12baf520a7c524d
Author: Guenter Roeck <linux@roeck-us.net>
Date:   Mon Mar 20 14:30:50 2017 -0700

    usb: hub: Do not attempt to autosuspend disconnected devices
    
    commit f5cccf49428447dfbc9edb7a04bb8fc316269781 upstream.
    
    While running a bind/unbind stress test with the dwc3 usb driver on rk3399,
    the following crash was observed.
    
    Unable to handle kernel NULL pointer dereference at virtual address 00000218
    pgd = ffffffc00165f000
    [00000218] *pgd=000000000174f003, *pud=000000000174f003,
                                    *pmd=0000000001750003, *pte=00e8000001751713
    Internal error: Oops: 96000005 [#1] PREEMPT SMP
    Modules linked in: uinput uvcvideo videobuf2_vmalloc cmac
    ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat rfcomm
    xt_mark fuse bridge stp llc zram btusb btrtl btbcm btintel bluetooth
    ip6table_filter mwifiex_pcie mwifiex cfg80211 cdc_ether usbnet r8152 mii joydev
    snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq snd_seq_device ppp_async
    ppp_generic slhc tun
    CPU: 1 PID: 29814 Comm: kworker/1:1 Not tainted 4.4.52 #507
    Hardware name: Google Kevin (DT)
    Workqueue: pm pm_runtime_work
    task: ffffffc0ac540000 ti: ffffffc0af4d4000 task.ti: ffffffc0af4d4000
    PC is at autosuspend_check+0x74/0x174
    LR is at autosuspend_check+0x70/0x174
    ...
    Call trace:
    [<ffffffc00080dcc0>] autosuspend_check+0x74/0x174
    [<ffffffc000810500>] usb_runtime_idle+0x20/0x40
    [<ffffffc000785ae0>] __rpm_callback+0x48/0x7c
    [<ffffffc000786af0>] rpm_idle+0x1e8/0x498
    [<ffffffc000787cdc>] pm_runtime_work+0x88/0xcc
    [<ffffffc000249bb8>] process_one_work+0x390/0x6b8
    [<ffffffc00024abcc>] worker_thread+0x480/0x610
    [<ffffffc000251a80>] kthread+0x164/0x178
    [<ffffffc0002045d0>] ret_from_fork+0x10/0x40
    
    Source:
    
    (gdb) l *0xffffffc00080dcc0
    0xffffffc00080dcc0 is in autosuspend_check
    (drivers/usb/core/driver.c:1778).
    1773            /* We don't need to check interfaces that are
    1774             * disabled for runtime PM.  Either they are unbound
    1775             * or else their drivers don't support autosuspend
    1776             * and so they are permanently active.
    1777             */
    1778            if (intf->dev.power.disable_depth)
    1779                    continue;
    1780            if (atomic_read(&intf->dev.power.usage_count) > 0)
    1781                    return -EBUSY;
    1782            w |= intf->needs_remote_wakeup;
    
    Code analysis shows that intf is set to NULL in usb_disable_device() prior
    to setting actconfig to NULL. At the same time, usb_runtime_idle() does not
    lock the usb device, and neither does any of the functions in the
    traceback. This means that there is no protection against a race condition
    where usb_disable_device() is removing dev->actconfig->interface[] pointers
    while those are being accessed from autosuspend_check().
    
    To solve the problem, synchronize and validate device state between
    autosuspend_check() and usb_disconnect().
    
    Acked-by: Alan Stern <stern@rowland.harvard.edu>
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 90d4e2a6592da98bfa99941bddd0d83ae789a565
Author: Guenter Roeck <linux@roeck-us.net>
Date:   Mon Mar 20 11:16:11 2017 -0700

    usb: hub: Fix error loop seen after hub communication errors
    
    commit 245b2eecee2aac6fdc77dcafaa73c33f9644c3c7 upstream.
    
    While stress testing a usb controller using a bind/unbind loop, the
    following error loop was observed.
    
    usb 7-1.2: new low-speed USB device number 3 using xhci-hcd
    usb 7-1.2: hub failed to enable device, error -108
    usb 7-1-port2: cannot disable (err = -22)
    usb 7-1-port2: couldn't allocate usb_device
    usb 7-1-port2: cannot disable (err = -22)
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: activate --> -22
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: activate --> -22
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: activate --> -22
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: activate --> -22
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: activate --> -22
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: activate --> -22
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: activate --> -22
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: activate --> -22
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    ** 57 printk messages dropped ** hub 7-1:1.0: activate --> -22
    ** 82 printk messages dropped ** hub 7-1:1.0: hub_ext_port_status failed (err = -22)
    
    This continues forever. After adding tracebacks into the code,
    the call sequence leading to this is found to be as follows.
    
    [<ffffffc0007fc8e0>] hub_activate+0x368/0x7b8
    [<ffffffc0007fceb4>] hub_resume+0x2c/0x3c
    [<ffffffc00080b3b8>] usb_resume_interface.isra.6+0x128/0x158
    [<ffffffc00080b5d0>] usb_suspend_both+0x1e8/0x288
    [<ffffffc00080c9c4>] usb_runtime_suspend+0x3c/0x98
    [<ffffffc0007820a0>] __rpm_callback+0x48/0x7c
    [<ffffffc00078217c>] rpm_callback+0xa8/0xd4
    [<ffffffc000786234>] rpm_suspend+0x84/0x758
    [<ffffffc000786ca4>] rpm_idle+0x2c8/0x498
    [<ffffffc000786ed4>] __pm_runtime_idle+0x60/0xac
    [<ffffffc00080eba8>] usb_autopm_put_interface+0x6c/0x7c
    [<ffffffc000803798>] hub_event+0x10ac/0x12ac
    [<ffffffc000249bb8>] process_one_work+0x390/0x6b8
    [<ffffffc00024abcc>] worker_thread+0x480/0x610
    [<ffffffc000251a80>] kthread+0x164/0x178
    [<ffffffc0002045d0>] ret_from_fork+0x10/0x40
    
    kick_hub_wq() is called from hub_activate() even after failures to
    communicate with the hub. This results in an endless sequence of
    hub event -> hub activate -> wq trigger -> hub event -> ...
    
    Provide two solutions for the problem.
    
    - Only trigger the hub event queue if communication with the hub
      is successful.
    - After a suspend failure, only resume already suspended interfaces
      if the communication with the device is still possible.
    
    Each of the changes fixes the observed problem. Use both to improve
    robustness.
    
    Acked-by: Alan Stern <stern@rowland.harvard.edu>
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8e3b0f66f4c6387e83aef9541fea6d804edc891b
Author: Alexey Brodkin <Alexey.Brodkin@synopsys.com>
Date:   Thu Apr 13 15:33:34 2017 +0300

    usb: Make sure usb/phy/of gets built-in
    
    commit 3d6159640da9c9175d1ca42f151fc1a14caded59 upstream.
    
    The DWC3 driver uses of_usb_get_phy_mode(), which is implemented in
    drivers/usb/phy/of.c and, in a bare minimal configuration, might not
    be pulled into the kernel binary.
    
    In the case of ARC or ARM this can easily be reproduced with
    "allnodefconfig" +CONFIG_USB=m +CONFIG_USB_DWC3=m.
    
    The build then ends up with:
    ---------------------->8------------------
      Kernel: arch/arm/boot/Image is ready
      Kernel: arch/arm/boot/zImage is ready
      Building modules, stage 2.
      MODPOST 5 modules
    ERROR: "of_usb_get_phy_mode" [drivers/usb/dwc3/dwc3.ko] undefined!
    make[1]: *** [__modpost] Error 1
    make: *** [modules] Error 2
    ---------------------->8------------------
    
    Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
    Cc: Geert Uytterhoeven <geert+renesas@glider.be>
    Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Felipe Balbi <balbi@kernel.org>
    Cc: Felix Fietkau <nbd@nbd.name>
    Cc: Jeremy Kerr <jk@ozlabs.org>
    Cc: linux-snps-arc@lists.infradead.org
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6a9fc06acfb9934fb4aa9a6690a7f5998f3e1151
Author: Romain Izard <romain.izard.pro@gmail.com>
Date:   Fri Mar 10 14:11:41 2017 +0100

    usb: gadget: legacy gadgets are optional
    
    commit 6e253d0fbc665b36192b8ed3cecdbb65b413a1eb upstream.
    
    With commit bc49d1d17dcf ("usb: gadget: don't couple configfs to legacy
    gadgets"), it is possible to build a modular kernel with both built-in
    configfs support and modular legacy gadget drivers.
    
    But when building a kernel without modules, it is also necessary to be
    able to build with configfs but without any legacy gadget driver. This
    was a possible configuration when USB_CONFIGFS was part of the choice
    options, but not anymore.
    
    Marking the choice for legacy gadget drivers as optional restores this.
    
    Fixes: bc49d1d17dcf ("usb: gadget: don't couple configfs to legacy gadgets")
    Signed-off-by: Romain Izard <romain.izard.pro@gmail.com>
    Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 84759debb0e598c2948f833ef6b6c88712ce3459
Author: Gustavo A. R. Silva <garsilva@embeddedor.com>
Date:   Mon Apr 3 22:48:40 2017 -0500

    usb: misc: add missing continue in switch
    
    commit 2c930e3d0aed1505e86e0928d323df5027817740 upstream.
    
    Add missing continue in switch.
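    The commit message is terse, so here is a hedged sketch of the bug class
    it fixes (illustrative code only, not the usb/misc driver's actual code):
    inside a loop, a switch case that is meant to skip the current iteration
    must end with `continue`; if it merely falls out of the switch, the
    post-switch processing runs for a case it was never meant to handle.

```c
#include <assert.h>

/* Illustrative sketch only -- not the driver's actual code. */
enum item_kind { KIND_SKIP, KIND_PROCESS };

static int count_processed(const enum item_kind *items, int n)
{
	int processed = 0;
	int i;

	for (i = 0; i < n; i++) {
		switch (items[i]) {
		case KIND_SKIP:
			continue;	/* the missing `continue` was the bug */
		case KIND_PROCESS:
			break;
		}
		processed++;		/* post-switch work; must not run for KIND_SKIP */
	}
	return processed;
}

/* Small self-check helper so the sketch can be exercised directly. */
static int demo_processed_count(void)
{
	enum item_kind items[] = { KIND_SKIP, KIND_PROCESS, KIND_SKIP, KIND_PROCESS };

	return count_processed(items, 4);
}
```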
    
    Addresses-Coverity-ID: 1248733
    Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
    Acked-by: Alan Stern <stern@rowland.harvard.edu>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 69f8945b4f38799b7b46de744acae8c9f447f8d7
Author: Ian Abbott <abbotti@mev.co.uk>
Date:   Fri Feb 17 11:09:09 2017 +0000

    staging: comedi: jr3_pci: cope with jiffies wraparound
    
    commit 8ec04a491825e08068e92bed0bba7821893b6433 upstream.
    
    The timer expiry routine `jr3_pci_poll_dev()` checks for expiry by
    checking whether the absolute value of `jiffies` (stored in local
    variable `now`) is greater than the expected expiry time in jiffy units.
    This will fail when `jiffies` wraps around.  Also, it seems to make
    sense to handle the expiry one jiffy earlier than the current test.  Use
    `time_after_eq()` to check for expiry.
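    A simplified model of why `time_after_eq()` survives wraparound (an
    illustrative function name, not the real include/linux/jiffies.h macro):
    the comparison is performed on the signed difference of the two values,
    which remains correct when the unsigned counter wraps.

```c
#include <assert.h>
#include <limits.h>

/* Simplified model of the kernel's time_after_eq(): comparing the
 * signed difference instead of the raw values keeps the expiry test
 * correct across jiffies wraparound. */
static int jiffies_after_eq(unsigned long a, unsigned long b)
{
	return (long)(a - b) >= 0;
}
```

    Near the wrap point a naive `a >= b` fails while the signed-difference
    form still reports expiry correctly.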
    
    Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6e4c973ac6292fc72b9d3818afb0221e3202f738
Author: Ian Abbott <abbotti@mev.co.uk>
Date:   Fri Feb 17 11:09:08 2017 +0000

    staging: comedi: jr3_pci: fix possible null pointer dereference
    
    commit 45292be0b3db0b7f8286683b376e2d9f949d11f9 upstream.
    
    For some reason, the driver does not consider allocation of the
    subdevice private data to be a fatal error when attaching the COMEDI
    device.  It tests the subdevice private data pointer for validity at
    certain points, but omits some crucial tests.  In particular,
    `jr3_pci_auto_attach()` calls `jr3_pci_alloc_spriv()` to allocate and
    initialize the subdevice private data, but the same function
    subsequently dereferences the pointer to access the `next_time_min` and
    `next_time_max` members without checking it first.  The other missing
    test is in the timer expiry routine `jr3_pci_poll_dev()`, but it will
    crash before it gets that far.
    
    Fix the bug by returning `-ENOMEM` from `jr3_pci_auto_attach()` as soon
    as one of the calls to `jr3_pci_alloc_spriv()` returns `NULL`.  The
    COMEDI core will subsequently call `jr3_pci_detach()` to clean up.
    
    Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 99185235783a7479e6c28f508fbba79500e06fbd
Author: Sean Young <sean@mess.org>
Date:   Sat Mar 25 07:31:57 2017 -0300

    staging: sir: fill in missing fields and fix probe
    
    commit cf9ed9aa5b0c196b796d2728218e3c06b0f42d90 upstream.
    
    Some fields are left blank.
    
    Signed-off-by: Sean Young <sean@mess.org>
    Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 89cb8fcceab18f32ef637234b909a49649f475d9
Author: Aditya Shankar <aditya.shankar@microchip.com>
Date:   Fri Apr 7 17:24:58 2017 +0530

    staging: wilc1000: Fix problem with wrong vif index
    
    commit 0e490657c7214cce33fbca3d88227298c5c968ae upstream.
    
    The vif->idx value is always 0 for two interfaces.
    
    wl->vif_num = 0;
    
    loop {
         ...
    
         vif->idx = wl->vif_num;
         ...
         wl->vif_num = i;
          ....
         i++;
         ...
    }
    
    At present, vif->idx is assigned the value of wl->vif_num
    at the beginning of this block and device is initialized
    based on this index value.
    In the next iteration, wl->vif_num is still 0, as it is only updated
    later, yet it is assigned to vif->idx at the beginning. This causes
    problems later when we try to reference a particular interface and also
    while configuring the firmware.
    
    This patch moves the assignment to vif->idx from the beginning of the
    block to after wl->vif_num is updated with the latest value of i.
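    A minimal model of the ordering fix (simplified names, not the driver's
    actual structures): assigning idx before vif_num is updated leaves every
    interface with idx 0, while updating vif_num first yields 0, 1, ...

```c
#include <assert.h>

/* Illustrative sketch of the corrected loop ordering. */
struct vif_sketch { int idx; };

static void init_vifs_fixed(struct vif_sketch *vifs, int n)
{
	int vif_num = 0;
	int i;

	for (i = 0; i < n; i++) {
		/* ... per-interface setup ... */
		vif_num = i;		/* update first, as the patch does */
		vifs[i].idx = vif_num;	/* then assign the index */
	}
}

static int vif_idx_after_init(int which)
{
	struct vif_sketch vifs[2];

	init_vifs_fixed(vifs, 2);
	return vifs[which].idx;
}
```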
    
    Fixes: 735bb39ca3be ("staging: wilc1000: simplify vif[i]->ndev accesses")
    Signed-off-by: Aditya Shankar <aditya.shankar@microchip.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 22d8767df90a1d4dd8a955e579e59ead617adcb7
Author: Johan Hovold <johan@kernel.org>
Date:   Wed Apr 26 12:23:04 2017 +0200

    staging: gdm724x: gdm_mux: fix use-after-free on module unload
    
    commit b58f45c8fc301fe83ee28cad3e64686c19e78f1c upstream.
    
    Make sure to deregister the USB driver before releasing the tty driver
    to avoid use-after-free in the USB disconnect callback where the tty
    devices are deregistered.
    
    Fixes: 61e121047645 ("staging: gdm7240: adding LTE USB driver")
    Cc: Won Kang <wkang77@gmail.com>
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit af685eefa2769a171074344c1e1c2457ed5a347b
Author: Malcolm Priestley <tvboxspy@gmail.com>
Date:   Sat Apr 22 11:14:57 2017 +0100

    staging: vt6656: use off stack for out buffer USB transfers.
    
    commit 12ecd24ef93277e4e5feaf27b0b18f2d3828bc5e upstream.
    
    Since 4.9 mandated that USB buffers be heap allocated, this causes the
    driver to fail.
    
    Since there is a wide range of buffer sizes, use kmemdup() to create
    the allocated buffer.
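    A user-space analogue of what kmemdup() does here (malloc + memcpy
    standing in for kmalloc): the outgoing data is duplicated into a heap
    buffer so the USB core never performs DMA from stack memory.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* User-space sketch of a kmemdup()-style helper. */
static void *memdup_sketch(const void *src, size_t len)
{
	void *p = malloc(len);

	if (p)
		memcpy(p, src, len);
	return p;
}

static int memdup_roundtrip(void)
{
	const char msg[] = "usb-out-buffer";	/* stands in for a stack buffer */
	char *dup = memdup_sketch(msg, sizeof(msg));
	int ok = dup && memcmp(dup, msg, sizeof(msg)) == 0;

	free(dup);
	return ok;
}
```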
    
    Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 5ba4fd334f607369cff4201107b58f0404e2f67e
Author: Malcolm Priestley <tvboxspy@gmail.com>
Date:   Sat Apr 22 11:14:58 2017 +0100

    staging: vt6656: use off stack for in buffer USB transfers.
    
    commit 05c0cf88bec588a7cb34de569acd871ceef26760 upstream.
    
    Since 4.9 mandated that USB buffers be heap allocated, this causes the
    driver to fail.
    
    Create a buffer for USB transfers.
    
    Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f49d36b7b3004a36cbe59da2b80917526c5ded2a
Author: Bjørn Mork <bjorn@mork.no>
Date:   Fri Apr 21 10:01:29 2017 +0200

    USB: Revert "cdc-wdm: fix "out-of-sync" due to missing notifications"
    
    commit 19445816996d1a89682c37685fe95959631d9f32 upstream.
    
    This reverts commit 833415a3e781 ("cdc-wdm: fix "out-of-sync" due to
    missing notifications")
    
    There have been several reports of wdm_read returning unexpected EIO
    errors with QMI devices using the qmi_wwan driver. The reporters
    confirm that reverting prevents these errors. I have been unable to
    reproduce the bug myself, and have no explanation to offer either. But
    reverting is the safe choice here, given that the commit was an
    attempt to work around a firmware problem.  Living with a firmware
    problem is still better than adding driver bugs.
    
    Reported-by: Kasper Holtze <kasper@holtze.dk>
    Reported-by: Aleksander Morgado <aleksander@aleksander.es>
    Reported-by: Daniele Palmas <dnlplm@gmail.com>
    Fixes: 833415a3e781 ("cdc-wdm: fix "out-of-sync" due to missing notifications")
    Signed-off-by: Bjørn Mork <bjorn@mork.no>
    Acked-by: Oliver Neukum <oneukum@suse.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2c33341c96be3e35dc40b0a5324a214327f8ab8b
Author: Ajay Kaher <ajay.kaher@samsung.com>
Date:   Tue Mar 28 08:09:32 2017 -0400

    USB: Proper handling of Race Condition when two USB class drivers try to call init_usb_class simultaneously
    
    commit 2f86a96be0ccb1302b7eee7855dbee5ce4dc5dfb upstream.
    
    There is a race condition when two USB class drivers try to call
    init_usb_class() at the same time, which leads to a crash.
    code path: probe->usb_register_dev->init_usb_class
    
    To solve this, mutex locking has been added in init_usb_class() and
    destroy_usb_class().
    
    As pointed out by Alan, the "if (usb_class)" test was removed from
    destroy_usb_class() because usb_class can never be NULL there.
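    A user-space sketch of the locking scheme (pthreads standing in for the
    kernel mutex; all names illustrative): the lock serializes creation and
    teardown of the shared class object, so two drivers registering at once
    can no longer both observe NULL and initialize it twice.

```c
#include <pthread.h>
#include <stdlib.h>

/* Illustrative model, not the real drivers/usb/core/file.c code. */
struct class_sketch { int refcount; };

static struct class_sketch *usb_class_sketch;
static pthread_mutex_t class_lock = PTHREAD_MUTEX_INITIALIZER;

static struct class_sketch *init_class_sketch(void)
{
	pthread_mutex_lock(&class_lock);
	if (usb_class_sketch) {
		usb_class_sketch->refcount++;
	} else {
		usb_class_sketch = calloc(1, sizeof(*usb_class_sketch));
		if (usb_class_sketch)
			usb_class_sketch->refcount = 1;
	}
	pthread_mutex_unlock(&class_lock);
	return usb_class_sketch;
}

static void destroy_class_sketch(void)
{
	pthread_mutex_lock(&class_lock);
	/* no NULL test needed: the refcount guarantees a live object here */
	if (--usb_class_sketch->refcount == 0) {
		free(usb_class_sketch);
		usb_class_sketch = NULL;
	}
	pthread_mutex_unlock(&class_lock);
}
```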
    
    Signed-off-by: Ajay Kaher <ajay.kaher@samsung.com>
    Acked-by: Alan Stern <stern@rowland.harvard.edu>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 2436236ee3e7a951db1bc665bf9bac3098d92f91
Author: Marek Vasut <marex@denx.de>
Date:   Tue Apr 18 20:07:56 2017 +0200

    USB: serial: ftdi_sio: add device ID for Microsemi/Arrow SF2PLUS Dev Kit
    
    commit 31c5d1922b90ddc1da6a6ddecef7cd31f17aa32b upstream.
    
    This development kit has an FT4232 on it with a custom USB VID/PID.
    The FT4232 provides four UARTs, but only two are used. The UART 0
    is used by the FlashPro5 programmer and UART 2 is connected to the
    SmartFusion2 CortexM3 SoC UART port.
    
    Note that the USB VID is registered to Actel according to Linux USB
    VID database, but that was acquired by Microsemi.
    
    Signed-off-by: Marek Vasut <marex@denx.de>
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f34b103cd5a48ece7b85917b87dac2a15a27de0c
Author: Peter Chen <peter.chen@nxp.com>
Date:   Wed Apr 19 16:55:52 2017 +0300

    usb: host: xhci: print correct command ring address
    
    commit 6fc091fb0459ade939a795bfdcaf645385b951d4 upstream.
    
    Print correct command ring address using 'val_64'.
    
    Signed-off-by: Peter Chen <peter.chen@nxp.com>
    Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit c5379cf67f1f2de699166e4275e43df39d21c575
Author: Roger Quadros <rogerq@ti.com>
Date:   Fri Apr 7 17:57:12 2017 +0300

    usb: xhci: bInterval quirk for TI TUSB73x0
    
    commit 69307ccb9ad7ccb653e332de68effdeaaab6907d upstream.
    
    As per [1] issue #4,
    "The periodic EP scheduler always tries to schedule the EPs
    that have large intervals (interval equal to or greater than
    128 microframes) into different microframes. So it maintains
    an internal counter and increments for each large interval
    EP added. When the counter is greater than 128, the scheduler
    rejects the new EP. So when the hub re-enumerated 128 times,
    it triggers this condition."
    
    This results in Bandwidth error when devices with periodic
    endpoints (ISO/INT) having bInterval > 7 are plugged and
    unplugged several times on a TUSB73x0 XHCI host.
    
    Work around this issue by limiting the bInterval to 7
    (i.e. interval to 6) for High-speed or faster periodic endpoints.
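    The arithmetic behind the clamp, as a hedged sketch (illustrative helper
    names, not the xhci driver's code): a periodic endpoint's interval is
    2^(bInterval-1) microframes, so bInterval >= 8 means 128 or more
    microframes, the range the TUSB73x0 erratum trips over; clamping
    bInterval to 7 caps the interval at 2^6 = 64 microframes.

```c
#include <assert.h>

/* Illustrative helpers modeling the quirk's interval clamp. */
static unsigned int quirked_binterval(unsigned int bInterval)
{
	return bInterval > 7 ? 7 : bInterval;
}

static unsigned int interval_microframes(unsigned int bInterval)
{
	return 1u << (bInterval - 1);
}
```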
    
    [1] - http://www.ti.com/lit/er/sllz076/sllz076.pdf
    
    Signed-off-by: Roger Quadros <rogerq@ti.com>
    Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit f9a45058a31257edf502638498c3a98602addd76
Author: Nicholas Bellinger <nab@linux-iscsi.org>
Date:   Tue Apr 25 10:55:12 2017 -0700

    iscsi-target: Set session_fall_back_to_erl0 when forcing reinstatement
    
    commit 197b806ae5db60c6f609d74da04ddb62ea5e1b00 upstream.
    
    While testing modification of per se_node_acl queue_depth forcing
    session reinstatement via lio_target_nacl_cmdsn_depth_store() ->
    core_tpg_set_initiator_node_queue_depth(), a hung task bug triggered
    when changing cmdsn_depth invoked session reinstatement while an iscsi
    login was already waiting for session reinstatement to complete.
    
    This can happen when an outstanding se_cmd descriptor is taking a
    long time to complete, and session reinstatement from iscsi login
    or cmdsn_depth change occurs concurrently.
    
    To address this bug, explicitly set session_fall_back_to_erl0 = 1
    when forcing session reinstatement, so session reinstatement is
    not attempted if an active session is already being shutdown.
    
    This patch has been tested with two scenarios.  The first is when an
    iscsi login is blocked waiting for iscsi session reinstatement to
    complete, followed by a queue_depth change via configfs; the second is
    when a queue_depth change via configfs is blocked, followed by an iscsi
    login driven session reinstatement.
    
    Note this patch depends on commit d36ad77f702 to handle multiple
    sessions per se_node_acl when changing cmdsn_depth; for pre-v4.5
    kernels that commit will need to be included in stable as well.
    
    Reported-by: Gary Guo <ghg@datera.io>
    Tested-by: Gary Guo <ghg@datera.io>
    Cc: Gary Guo <ghg@datera.io>
    Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 14a890020f947089c7d4c0ee04833cece9de5a58
Author: Bart Van Assche <bart.vanassche@sandisk.com>
Date:   Thu May 4 15:50:47 2017 -0700

    target/fileio: Fix zero-length READ and WRITE handling
    
    commit 59ac9c078141b8fd0186c0b18660a1b2c24e724e upstream.
    
    This patch fixes zero-length READ and WRITE handling in target/FILEIO,
    which was broken a long time back by:
    
      commit d81cb44726f050d7cf1be4afd9cb45d153b52066
      Author: Paolo Bonzini <pbonzini@redhat.com>
      Date:   Mon Sep 17 16:36:11 2012 -0700
    
          target: go through normal processing for all zero-length commands
    
    which moved zero-length READ and WRITE completion out of target-core
    and into the backend driver code.
    
    To address this, go ahead and invoke target_complete_cmd() for any
    non-negative return value in fd_do_rw().
    
    Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
    Reviewed-by: Hannes Reinecke <hare@suse.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Cc: Andy Grover <agrover@redhat.com>
    Cc: David Disseldorp <ddiss@suse.de>
    Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 25ed85889dcd29c19b02d12156c34fb065ba676c
Author: Nicholas Bellinger <nab@linux-iscsi.org>
Date:   Tue Apr 11 16:24:16 2017 -0700

    target: Fix compare_and_write_callback handling for non GOOD status
    
    commit a71a5dc7f833943998e97ca8fa6a4c708a0ed1a9 upstream.
    
    Following the bugfix for handling non SAM_STAT_GOOD COMPARE_AND_WRITE
    status during COMMIT phase in commit 9b2792c3da1, the same bug exists
    for the READ phase as well.
    
    This would manifest first as a lost SCSI response, and eventual
    hung task during fabric driver logout or re-login, as existing
    shutdown logic waited for the COMPARE_AND_WRITE se_cmd->cmd_kref
    to reach zero.
    
    To address this bug, compare_and_write_callback() has been changed
    to set post_ret = 1 and return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE
    as necessary to signal failure status.
    
    Reported-by: Bill Borsari <wgb@datera.io>
    Cc: Bill Borsari <wgb@datera.io>
    Tested-by: Gary Guo <ghg@datera.io>
    Cc: Gary Guo <ghg@datera.io>
    Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 1ec6f0815aa4cfc11f4be64cb532d0fccdd57fce
Author: Juergen Gross <jgross@suse.com>
Date:   Wed May 10 06:08:44 2017 +0200

    xen: adjust early dom0 p2m handling to xen hypervisor behavior
    
    commit 69861e0a52f8733355ce246f0db15e1b240ad667 upstream.
    
    When booted as a pv-guest, the p2m list presented by Xen is already
    mapped to virtual addresses. In the dom0 case the hypervisor might make use
    of 2M- or 1G-pages for this mapping. Unfortunately while being properly
    aligned in virtual and machine address space, those pages might not be
    aligned properly in guest physical address space.
    
    So when trying to obtain the guest physical address of such a page,
    pud_pfn() and pmd_pfn() must be avoided, as those will mask away guest
    physical address bits that are not zero in this special case.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>