commit 2ac51e21d8c50ca37fc9b5b9a9b4937c810b0d0a
Author: Sasha Levin <alexander.levin@verizon.com>
Date:   Thu Jun 29 10:55:48 2017 -0400

    Linux 4.1.42
    
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit dcda279dede75d5cb4e6af18ba90eb4ca1e813ee
Author: Hugh Dickins <hughd@google.com>
Date:   Tue Jun 20 02:10:44 2017 -0700

    mm: fix new crash in unmapped_area_topdown()
    
    [ Upstream commit f4cb767d76cf7ee72f97dd76f6cfa6c76a5edc89 ]
    
    Trinity gets kernel BUG at mm/mmap.c:1963! in about 3 minutes of
    mmap testing.  That's the VM_BUG_ON(gap_end < gap_start) at the
    end of unmapped_area_topdown().  Linus points out how MAP_FIXED
    (which does not have to respect our stack guard gap intentions)
    could result in gap_end below gap_start there.  Fix that, and
    the similar case in its alternative, unmapped_area().
    
    Cc: stable@vger.kernel.org
    Fixes: 1be7107fbe18 ("mm: larger stack guard gap, between vmas")
    Reported-by: Dave Jones <davej@codemonkey.org.uk>
    Debugged-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Hugh Dickins <hughd@google.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 8b18c6b2a0dde5186ed83a60c4915c0909cbeb0a
Author: Sasha Levin <alexander.levin@verizon.com>
Date:   Wed Jun 28 18:57:07 2017 -0400

    mm: larger stack guard gap, between vmas
    
    [ Upstream commit 1be7107fbe18eed3e319a6c3e83c78254b693acb ]
    
    The stack guard page is a useful feature to reduce the risk of the stack
    smashing into a different mapping. We have been using a single page gap,
    which is sufficient to prevent the stack from being adjacent to a
    different mapping. But this seems to be insufficient in light of the
    stack usage in userspace. E.g. glibc uses alloca() allocations as large
    as 64kB in many commonly used functions. Others use constructs like
    gid_t buffer[NGROUPS_MAX], which is 256kB, or stack strings with
    MAX_ARG_STRLEN.
    
    This becomes especially dangerous for suid binaries with the default
    unlimited stack size limit, because those applications can be tricked
    into consuming a large portion of the stack, and a single glibc call
    could then jump over the guard page. These attacks are not theoretical,
    unfortunately.
    
    Make those attacks less probable by increasing the stack guard gap
    to 1MB (on systems with 4k pages; but make it depend on the page size,
    because systems with larger base pages might cap stack allocations in
    PAGE_SIZE units), which should cover larger alloca() and VLA stack
    allocations. It is obviously not a full fix because the problem is
    somewhat inherent, but it should reduce the attack space a lot.
    
    One could argue that the gap size should be configurable from userspace,
    but that can be done later when somebody finds that the new 1MB is wrong
    for some special case applications.  For now, add a kernel command line
    option (stack_guard_gap) to specify the stack gap size (in page units).
    
    Implementation-wise, first delete all the old code for the stack guard
    page, because although we could get away with accounting one extra page in a
    stack vma, accounting a larger gap can break userspace - case in point,
    a program run with "ulimit -S -v 20000" failed when the 1MB gap was
    counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
    and strict non-overcommit mode.
    
    Instead of keeping the gap inside the stack vma, maintain the stack guard
    gap as a gap between vmas: using vm_start_gap() in place of vm_start
    (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
    places which need to respect the gap - mainly arch_get_unmapped_area(),
    and the vma tree's subtree_gap support for that.
    
    Original-patch-by: Oleg Nesterov <oleg@redhat.com>
    Original-patch-by: Michal Hocko <mhocko@suse.com>
    Signed-off-by: Hugh Dickins <hughd@google.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Tested-by: Helge Deller <deller@gmx.de> # parisc
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
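
    A minimal sketch of the vm_start_gap() helper described above, as added
    by the upstream patch (simplified; the VM_GROWSUP counterpart is
    analogous):

        /* The guard gap is applied below a VM_GROWSDOWN stack, clamped so
         * the subtraction cannot wrap past address zero. */
        extern unsigned long stack_guard_gap;

        static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
        {
                unsigned long vm_start = vma->vm_start;

                if (vma->vm_flags & VM_GROWSDOWN) {
                        vm_start -= stack_guard_gap;
                        if (vm_start > vma->vm_start)   /* underflowed */
                                vm_start = 0;
                }
                return vm_start;
        }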

commit 55e6060ddd5fffa6f9baedd28e847afab84b5a2d
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Tue May 30 23:15:35 2017 +0200

    alarmtimer: Rate limit periodic intervals
    
    [ Upstream commit ff86bf0c65f14346bf2440534f9ba5ac232c39a0 ]
    
    The alarmtimer code has another source of potentially rearming itself too
    fast. Interval timers with a very small interval have a similar CPU hog
    effect as the previously fixed overflow issue.
    
    The reason is that alarmtimers do not implement the normal protection
    against this kind of problem which the other posix timers use:
    
      timer expires -> queue signal -> deliver signal -> rearm timer
    
    This scheme brings the rearming under scheduler control and prevents
    permanently firing timers which hog the CPU.
    
    Bringing this scheme to the alarm timer code is a major overhaul because it
    lacks all the necessary mechanisms completely.
    
    So for a quick fix, limit the interval to one jiffy. This is not
    problematic in practice as alarmtimers are usually backed by an RTC for
    suspend, which has 1 second resolution. It could therefore be argued that
    the resolution of this clock should be set to 1 second in general, but
    that's outside the scope of this fix.
    
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Kostya Serebryany <kcc@google.com>
    Cc: syzkaller <syzkaller@googlegroups.com>
    Cc: John Stultz <john.stultz@linaro.org>
    Cc: Dmitry Vyukov <dvyukov@google.com>
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/20170530211655.896767100@linutronix.de
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
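
    A rough sketch of the rate limit described above (field names follow the
    alarmtimer/posix-timer code and are approximate):

        /* Clamp a non-zero periodic interval to at least one jiffy before
         * rearming, so the timer cannot fire faster than the tick. */
        if (ktime_to_ns(timr->it.alarm.interval) &&
            ktime_to_ns(timr->it.alarm.interval) < TICK_NSEC)
                timr->it.alarm.interval = ktime_set(0, TICK_NSEC);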

commit cedbfb3dc38c128f7541a5e6050862ee038e7583
Author: Paul Burton <paul.burton@imgtec.com>
Date:   Fri Jun 2 11:35:01 2017 -0700

    MIPS: Fix bnezc/jialc return address calculation
    
    [ Upstream commit 1a73d9310e093fc3adffba4d0a67b9fab2ee3f63 ]
    
    The code handling the pop76 opcode (ie. bnezc & jialc instructions) in
    __compute_return_epc_for_insn() needs to set the value of $31 in the
    jialc case, which is encoded with rs = 0. However its check to
    differentiate bnezc (rs != 0) from jialc (rs = 0) was unfortunately
    backwards, meaning that if we emulate a bnezc instruction we clobber $31
    & if we emulate a jialc instruction it actually behaves like a jic
    instruction.
    
    Fix this by inverting the check of rs to match the way the instructions
    are actually encoded.
    
    Signed-off-by: Paul Burton <paul.burton@imgtec.com>
    Fixes: 28d6f93d201d ("MIPS: Emulate the new MIPS R6 BNEZC and JIALC instructions")
    Cc: stable <stable@vger.kernel.org> # v4.0+
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/16178/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit d490b0caf87fa4d743b6cc2c00a16aa1f84d2c74
Author: Shuah Khan <shuahkh@osg.samsung.com>
Date:   Tue Jan 10 16:05:28 2017 -0700

    usb: dwc3: exynos fix axius clock error path to do cleanup
    
    [ Upstream commit 8ae584d1951f241efd45499f8774fd7066f22823 ]
    
    The Axius clock error path returns without disabling the clock and the
    suspend clock. Fix it to disable them before returning the error.
    
    Reviewed-by: Javier Martinez Canillas <javier@osg.samsung.com>
    Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
    Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
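
    A sketch of the fixed error path (clock field names taken from the
    dwc3-exynos probe code; the exact cleanup structure may differ):

        ret = clk_prepare_enable(exynos->axius_clk);
        if (ret) {
                dev_err(dev, "Failed to enable axius clock\n");
                /* previously missing cleanup before returning the error */
                clk_disable_unprepare(exynos->susp_clk);
                clk_disable_unprepare(exynos->clk);
                return ret;
        }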

commit 61e04a644bd8f083e8c6354dae187a672b0e8f90
Author: Heiner Kallweit <hkallweit1@gmail.com>
Date:   Sun Jun 11 00:38:36 2017 +0200

    genirq: Release resources in __setup_irq() error path
    
    [ Upstream commit fa07ab72cbb0d843429e61bf179308aed6cbe0dd ]
    
    In case __irq_set_trigger() fails the resources requested via
    irq_request_resources() are not released.
    
    Add the missing release call into the error handling path.
    
    Fixes: c1bacbae8192 ("genirq: Provide irq_request/release_resources chip callbacks")
    Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/655538f5-cb20-a892-ff15-fbd2dd1fa4ec@gmail.com
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
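
    A simplified sketch of the change in __setup_irq() (the
    __irq_set_trigger() signature differs between kernel versions):

        ret = __irq_set_trigger(desc, irq, new->flags & IRQF_TRIGGER_MASK);
        if (ret) {
                irq_release_resources(desc);    /* previously missing release */
                goto out_mask;
        }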

commit ec8376b633c9cef45790bf5b2cee7fe6d319748a
Author: Yu Zhao <yuzhao@google.com>
Date:   Fri Jun 16 14:02:31 2017 -0700

    swap: cond_resched in swap_cgroup_prepare()
    
    [ Upstream commit ef70762948dde012146926720b70e79736336764 ]
    
    I saw need_resched() warnings when swapping on a large swapfile (TBs)
    because continuously allocating many pages in swap_cgroup_prepare() took
    too long.
    
    We already cond_resched when freeing pages in swap_cgroup_swapoff().  Do
    the same for the page allocation.
    
    Link: http://lkml.kernel.org/r/20170604200109.17606-1-yuzhao@google.com
    Signed-off-by: Yu Zhao <yuzhao@google.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
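
    A sketch of the allocation loop with the added reschedule point
    (variable names approximate those in mm/swap_cgroup.c):

        for (idx = 0; idx < ctrl->length; idx++) {
                page = alloc_page(GFP_KERNEL | __GFP_ZERO);
                if (!page)
                        goto not_enough_page;
                ctrl->map[idx] = page;

                if (!(idx % SWAP_CLUSTER_MAX))
                        cond_resched();         /* the added schedule point */
        }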

commit 331720703ebb8efe1e697f2672f9a6f942cc7fd0
Author: James Morse <james.morse@arm.com>
Date:   Fri Jun 16 14:02:29 2017 -0700

    mm/memory-failure.c: use compound_head() flags for huge pages
    
    [ Upstream commit 7258ae5c5a2ce2f5969e8b18b881be40ab55433d ]
    
    memory_failure() chooses a recovery action function based on the page
    flags.  For huge pages it uses the tail page flags which don't have
    anything interesting set, resulting in:
    
    > Memory failure: 0x9be3b4: Unknown page state
    > Memory failure: 0x9be3b4: recovery action for unknown page: Failed
    
    Instead, save a copy of the head page's flags if this is a huge page;
    this means that if there are no relevant flags for this tail page, we use
    the head page's flags instead.  This results in the me_huge_page() recovery
    action being called:
    
    > Memory failure: 0x9b7969: recovery action for huge page: Delayed
    
    For hugepages that have not yet been allocated, this allows the hugepage
    to be dequeued.
    
    Fixes: 524fca1e7356 ("HWPOISON: fix misjudgement of page_action() for errors on mlocked pages")
    Link: http://lkml.kernel.org/r/20170524130204.21845-1-james.morse@arm.com
    Signed-off-by: James Morse <james.morse@arm.com>
    Tested-by: Punit Agrawal <punit.agrawal@arm.com>
    Acked-by: Punit Agrawal <punit.agrawal@arm.com>
    Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
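
    A sketch of the approach in memory_failure(), assuming a local
    page_flags variable as described above:

        /* Snapshot the flags to base the recovery action on: for a tail
         * page of a huge page the interesting flags live on the head page. */
        if (PageHuge(p))
                page_flags = hpage->flags;
        else
                page_flags = p->flags;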

commit e2884056685388d261108765303fb3a291f05b2c
Author: Alan Stern <stern@rowland.harvard.edu>
Date:   Tue Jun 13 15:23:42 2017 -0400

    USB: gadgetfs, dummy-hcd, net2280: fix locking for callbacks
    
    [ Upstream commit f16443a034c7aa359ddf6f0f9bc40d01ca31faea ]
    
    Using the syzkaller kernel fuzzer, Andrey Konovalov generated the
    following error in gadgetfs:
    
    > BUG: KASAN: use-after-free in __lock_acquire+0x3069/0x3690
    > kernel/locking/lockdep.c:3246
    > Read of size 8 at addr ffff88003a2bdaf8 by task kworker/3:1/903
    >
    > CPU: 3 PID: 903 Comm: kworker/3:1 Not tainted 4.12.0-rc4+ #35
    > Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
    > Workqueue: usb_hub_wq hub_event
    > Call Trace:
    >  __dump_stack lib/dump_stack.c:16 [inline]
    >  dump_stack+0x292/0x395 lib/dump_stack.c:52
    >  print_address_description+0x78/0x280 mm/kasan/report.c:252
    >  kasan_report_error mm/kasan/report.c:351 [inline]
    >  kasan_report+0x230/0x340 mm/kasan/report.c:408
    >  __asan_report_load8_noabort+0x19/0x20 mm/kasan/report.c:429
    >  __lock_acquire+0x3069/0x3690 kernel/locking/lockdep.c:3246
    >  lock_acquire+0x22d/0x560 kernel/locking/lockdep.c:3855
    >  __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
    >  _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:151
    >  spin_lock include/linux/spinlock.h:299 [inline]
    >  gadgetfs_suspend+0x89/0x130 drivers/usb/gadget/legacy/inode.c:1682
    >  set_link_state+0x88e/0xae0 drivers/usb/gadget/udc/dummy_hcd.c:455
    >  dummy_hub_control+0xd7e/0x1fb0 drivers/usb/gadget/udc/dummy_hcd.c:2074
    >  rh_call_control drivers/usb/core/hcd.c:689 [inline]
    >  rh_urb_enqueue drivers/usb/core/hcd.c:846 [inline]
    >  usb_hcd_submit_urb+0x92f/0x20b0 drivers/usb/core/hcd.c:1650
    >  usb_submit_urb+0x8b2/0x12c0 drivers/usb/core/urb.c:542
    >  usb_start_wait_urb+0x148/0x5b0 drivers/usb/core/message.c:56
    >  usb_internal_control_msg drivers/usb/core/message.c:100 [inline]
    >  usb_control_msg+0x341/0x4d0 drivers/usb/core/message.c:151
    >  usb_clear_port_feature+0x74/0xa0 drivers/usb/core/hub.c:412
    >  hub_port_disable+0x123/0x510 drivers/usb/core/hub.c:4177
    >  hub_port_init+0x1ed/0x2940 drivers/usb/core/hub.c:4648
    >  hub_port_connect drivers/usb/core/hub.c:4826 [inline]
    >  hub_port_connect_change drivers/usb/core/hub.c:4999 [inline]
    >  port_event drivers/usb/core/hub.c:5105 [inline]
    >  hub_event+0x1ae1/0x3d40 drivers/usb/core/hub.c:5185
    >  process_one_work+0xc08/0x1bd0 kernel/workqueue.c:2097
    >  process_scheduled_works kernel/workqueue.c:2157 [inline]
    >  worker_thread+0xb2b/0x1860 kernel/workqueue.c:2233
    >  kthread+0x363/0x440 kernel/kthread.c:231
    >  ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:424
    >
    > Allocated by task 9958:
    >  save_stack_trace+0x1b/0x20 arch/x86/kernel/stacktrace.c:59
    >  save_stack+0x43/0xd0 mm/kasan/kasan.c:513
    >  set_track mm/kasan/kasan.c:525 [inline]
    >  kasan_kmalloc+0xad/0xe0 mm/kasan/kasan.c:617
    >  kmem_cache_alloc_trace+0x87/0x280 mm/slub.c:2745
    >  kmalloc include/linux/slab.h:492 [inline]
    >  kzalloc include/linux/slab.h:665 [inline]
    >  dev_new drivers/usb/gadget/legacy/inode.c:170 [inline]
    >  gadgetfs_fill_super+0x24f/0x540 drivers/usb/gadget/legacy/inode.c:1993
    >  mount_single+0xf6/0x160 fs/super.c:1192
    >  gadgetfs_mount+0x31/0x40 drivers/usb/gadget/legacy/inode.c:2019
    >  mount_fs+0x9c/0x2d0 fs/super.c:1223
    >  vfs_kern_mount.part.25+0xcb/0x490 fs/namespace.c:976
    >  vfs_kern_mount fs/namespace.c:2509 [inline]
    >  do_new_mount fs/namespace.c:2512 [inline]
    >  do_mount+0x41b/0x2d90 fs/namespace.c:2834
    >  SYSC_mount fs/namespace.c:3050 [inline]
    >  SyS_mount+0xb0/0x120 fs/namespace.c:3027
    >  entry_SYSCALL_64_fastpath+0x1f/0xbe
    >
    > Freed by task 9960:
    >  save_stack_trace+0x1b/0x20 arch/x86/kernel/stacktrace.c:59
    >  save_stack+0x43/0xd0 mm/kasan/kasan.c:513
    >  set_track mm/kasan/kasan.c:525 [inline]
    >  kasan_slab_free+0x72/0xc0 mm/kasan/kasan.c:590
    >  slab_free_hook mm/slub.c:1357 [inline]
    >  slab_free_freelist_hook mm/slub.c:1379 [inline]
    >  slab_free mm/slub.c:2961 [inline]
    >  kfree+0xed/0x2b0 mm/slub.c:3882
    >  put_dev+0x124/0x160 drivers/usb/gadget/legacy/inode.c:163
    >  gadgetfs_kill_sb+0x33/0x60 drivers/usb/gadget/legacy/inode.c:2027
    >  deactivate_locked_super+0x8d/0xd0 fs/super.c:309
    >  deactivate_super+0x21e/0x310 fs/super.c:340
    >  cleanup_mnt+0xb7/0x150 fs/namespace.c:1112
    >  __cleanup_mnt+0x1b/0x20 fs/namespace.c:1119
    >  task_work_run+0x1a0/0x280 kernel/task_work.c:116
    >  exit_task_work include/linux/task_work.h:21 [inline]
    >  do_exit+0x18a8/0x2820 kernel/exit.c:878
    >  do_group_exit+0x14e/0x420 kernel/exit.c:982
    >  get_signal+0x784/0x1780 kernel/signal.c:2318
    >  do_signal+0xd7/0x2130 arch/x86/kernel/signal.c:808
    >  exit_to_usermode_loop+0x1ac/0x240 arch/x86/entry/common.c:157
    >  prepare_exit_to_usermode arch/x86/entry/common.c:194 [inline]
    >  syscall_return_slowpath+0x3ba/0x410 arch/x86/entry/common.c:263
    >  entry_SYSCALL_64_fastpath+0xbc/0xbe
    >
    > The buggy address belongs to the object at ffff88003a2bdae0
    >  which belongs to the cache kmalloc-1024 of size 1024
    > The buggy address is located 24 bytes inside of
    >  1024-byte region [ffff88003a2bdae0, ffff88003a2bdee0)
    > The buggy address belongs to the page:
    > page:ffffea0000e8ae00 count:1 mapcount:0 mapping:          (null)
    > index:0x0 compound_mapcount: 0
    > flags: 0x100000000008100(slab|head)
    > raw: 0100000000008100 0000000000000000 0000000000000000 0000000100170017
    > raw: ffffea0000ed3020 ffffea0000f5f820 ffff88003e80efc0 0000000000000000
    > page dumped because: kasan: bad access detected
    >
    > Memory state around the buggy address:
    >  ffff88003a2bd980: fb fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
    >  ffff88003a2bda00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
    > >ffff88003a2bda80: fc fc fc fc fc fc fc fc fc fc fc fc fb fb fb fb
    >                                                                 ^
    >  ffff88003a2bdb00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
    >  ffff88003a2bdb80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
    > ==================================================================
    
    What this means is that the gadgetfs_suspend() routine was trying to
    access dev->lock after it had been deallocated.  The root cause is a
    race in the dummy_hcd driver; the dummy_udc_stop() routine can race
    with the rest of the driver because it contains no locking.  And even
    when proper locking is added, it can still race with the
    set_link_state() function because that function incorrectly drops the
    private spinlock before invoking any gadget driver callbacks.
    
    The result of this race, as seen above, is that set_link_state() can
    invoke a callback in gadgetfs even after gadgetfs has been unbound
    from dummy_hcd's UDC and its private data structures have been
    deallocated.
    
    include/linux/usb/gadget.h documents that the ->reset, ->disconnect,
    ->suspend, and ->resume callbacks may be invoked in interrupt context.
    In general this is necessary, to prevent races with gadget driver
    removal.  This patch fixes dummy_hcd to retain the spinlock across
    these calls, and it adds a spinlock acquisition to dummy_udc_stop() to
    prevent the race.
    
    The net2280 driver makes the same mistake of dropping the private
    spinlock for its ->disconnect and ->reset callback invocations.  The
    patch fixes it too.
    
    Lastly, since gadgetfs_suspend() may be invoked in interrupt context,
    it cannot assume that interrupts are enabled when it runs.  It must
    use spin_lock_irqsave() instead of spin_lock_irq().  The patch fixes
    that bug as well.
    
    Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
    Reported-and-tested-by: Andrey Konovalov <andreyknvl@google.com>
    CC: <stable@vger.kernel.org>
    Acked-by: Felipe Balbi <felipe.balbi@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
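
    A sketch of the gadgetfs_suspend() part of the fix (the per-state
    handling is elided; only the locking pattern is shown):

        static void gadgetfs_suspend(struct usb_gadget *gadget)
        {
                struct dev_data *dev = get_gadget_data(gadget);
                unsigned long flags;

                /* May run in interrupt context, so IRQs cannot be assumed
                 * enabled: use irqsave instead of spin_lock_irq(). */
                spin_lock_irqsave(&dev->lock, flags);
                /* ... existing per-state suspend handling ... */
                spin_unlock_irqrestore(&dev->lock, flags);
        }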

commit 7ed474c302c1a4ad9c9967442aac1fb633c15f27
Author: Corentin Labbe <clabbe.montjoie@gmail.com>
Date:   Fri Jun 9 14:48:41 2017 +0300

    usb: xhci: ASMedia ASM1042A chipset needs short TX quirk
    
    [ Upstream commit d2f48f05cd2a2a0a708fbfa45f1a00a87660d937 ]
    
    When plugging in a USB webcam I see the following message:
    [106385.615559] xhci_hcd 0000:04:00.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk?
    [106390.583860] handle_tx_event: 913 callbacks suppressed
    
    With this patch applied, I no longer see this message.
    
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
    Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
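
    A sketch of the quirk hookup in xhci-pci (the ASM1042A PCI device ID
    constant shown here is an assumption):

        if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
            pdev->device == 0x1142)                     /* ASM1042A */
                xhci->quirks |= XHCI_TRUST_TX_LENGTH;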

commit 235efbf2e58ce12f6204d61feb78e7858b1250f3
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Mon May 8 15:55:17 2017 -0700

    drivers/misc/c2port/c2port-duramar2150.c: checking for NULL instead of IS_ERR()
    
    [ Upstream commit 8128a31eaadbcdfa37774bbd28f3f00bac69996a ]
    
    c2port_device_register() never returns NULL, it uses error pointers.
    
    Link: http://lkml.kernel.org/r/20170412083321.GC3250@mwanda
    Fixes: 65131cd52b9e ("c2port: add c2port support for Eurotech Duramar 2150")
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Acked-by: Rodolfo Giometti <giometti@linux.it>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
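
    A sketch of the corrected check (registration arguments abbreviated):

        duramar2150_c2port_dev = c2port_device_register("uc",
                                        &duramar2150_c2port_ops, NULL);
        if (IS_ERR(duramar2150_c2port_dev)) {   /* was: if (!...) */
                ret = PTR_ERR(duramar2150_c2port_dev);
                goto free_gpio;
        }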

commit d5db08763ef2a66383193d7f38031918be0dce1b
Author: Chris Brandt <chris.brandt@renesas.com>
Date:   Thu Apr 27 12:12:49 2017 -0700

    usb: r8a66597-hcd: decrease timeout
    
    [ Upstream commit dd14a3e9b92ac6f0918054f9e3477438760a4fa6 ]
    
    The timeout for BULK packets was 300ms which is a long time if other
    endpoints or devices are waiting for their turn. Changing it to 50ms
    greatly increased the overall performance for multi-endpoint devices.
    
    Fixes: 5d3043586db4 ("usb: r8a66597-hcd: host controller driver for R8A6659")
    Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 152c8dcf311c392f0cd2c3446f26a749a99ad532
Author: Chris Brandt <chris.brandt@renesas.com>
Date:   Thu Apr 27 12:12:02 2017 -0700

    usb: r8a66597-hcd: select a different endpoint on timeout
    
    [ Upstream commit 1f873d857b6c2fefb4dada952674aa01bcfb92bd ]
    
    If multiple endpoints on a single device have pending IN URBs and one
    endpoint times out due to NAKs (perfectly legal), select a different
    endpoint URB to try.
    The existing code only checked whether another device address has pending
    URBs and ignored other IN endpoints on the current device address. This
    leads to endpoints never getting serviced if one endpoint is using NAK as
    a flow control method.
    
    Fixes: 5d3043586db4 ("usb: r8a66597-hcd: host controller driver for R8A6659")
    Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 08e1f98694894badf1cb749eb974d97547386ba5
Author: Johan Hovold <johan@kernel.org>
Date:   Wed May 10 18:18:25 2017 +0200

    USB: gadget: dummy_hcd: fix hub-descriptor removable fields
    
    [ Upstream commit d81182ce30dbd497a1e7047d7fda2af040347790 ]
    
    Flag the first and only port as removable while also leaving the
    remaining bits (including the reserved bit zero) unset in accordance
    with the specifications:
    
            "Within a byte, if no port exists for a given location, the bit
            field representing the port characteristics shall be 0."
    
    Also add a comment marking the legacy PortPwrCtrlMask field.
    
    Fixes: 1cd8fd2887e1 ("usb: gadget: dummy_hcd: add SuperSpeed support")
    Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
    Cc: Tatyana Brokhman <tlinder@codeaurora.org>
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Acked-by: Alan Stern <stern@rowland.harvard.edu>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 0758e6a95a00d3d51c75bcc47b40e0e0f220dcd2
Author: Arnd Bergmann <arnd@arndb.de>
Date:   Thu Feb 2 12:53:04 2017 -0200

    [media] pvrusb2: reduce stack usage pvr2_eeprom_analyze()
    
    [ Upstream commit 6830733d53a4517588e56227b9c8538633f0c496 ]
    
    The driver uses a relatively large data structure on the stack, which
    showed up on my radar as we get a warning with the "latent entropy"
    GCC plugin:
    
    drivers/media/usb/pvrusb2/pvrusb2-eeprom.c:153:1: error: the frame size of 1376 bytes is larger than 1152 bytes [-Werror=frame-larger-than=]
    
    The warning is usually hidden as we raise the warning limit to 2048
    when the plugin is enabled, but I'd like to lower that again in the
    future, and making this function smaller helps to do that without
    build regressions.
    
    Further analysis shows that putting an 'i2c_client' structure on
    the stack is not really supported, as the embedded 'struct device'
    is not initialized here, and we are only saved by the fact that
    the function that is called here does not use the pointer at all.
    
    Fixes: d855497edbfb ("V4L/DVB (4228a): pvrusb2 to kernel 2.6.18")
    
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
    Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit bdc69cc86e019bff77be00fbdcb28273dc369428
Author: Anton Bondarenko <anton.bondarenko.sama@gmail.com>
Date:   Sun May 7 01:53:46 2017 +0200

    usb: core: fix potential memory leak in error path during hcd creation
    
    [ Upstream commit 1a744d2eb76aaafb997fda004ae3ae62a1538f85 ]
    
    Free memory allocated for address0_mutex if allocation of bandwidth_mutex
    failed.
    
    Fixes: feb26ac31a2a ("usb: core: hub: hub_port_init lock controller instead of bus")
    
    Signed-off-by: Anton Bondarenko <anton.bondarenko.sama@gmail.com>
    Acked-by: Alan Stern <stern@rowland.harvard.edu>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
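
    A sketch of the fixed error path in usb_create_shared_hcd() (surrounding
    code elided):

        hcd->bandwidth_mutex = kmalloc(sizeof(*hcd->bandwidth_mutex),
                                       GFP_KERNEL);
        if (!hcd->bandwidth_mutex) {
                kfree(hcd->address0_mutex);     /* previously leaked */
                kfree(hcd);
                dev_dbg(dev, "hcd bandwidth mutex alloc failed\n");
                return NULL;
        }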

commit 11f00c7e1194329d35e3605df18c43bc081bdece
Author: Johan Hovold <johan@kernel.org>
Date:   Wed May 10 18:18:29 2017 +0200

    USB: hub: fix SS max number of ports
    
    [ Upstream commit 93491ced3c87c94b12220dbac0527e1356702179 ]
    
    Add define for the maximum number of ports on a SuperSpeed hub as per
    USB 3.1 spec Table 10-5, and use it when verifying the retrieved hub
    descriptor.
    
    This specifically avoids benign attempts to update the DeviceRemovable
    mask for non-existing ports (should we get that far).
    
    Fixes: dbe79bbe9dcb ("USB 3.0 Hub Changes")
    Acked-by: Alan Stern <stern@rowland.harvard.edu>
    Signed-off-by: Johan Hovold <johan@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit e507356624f1ccf8a4bf86b04fa840014316a9de
Author: Matt Ranostay <matt.ranostay@konsulko.com>
Date:   Fri Apr 14 16:38:19 2017 -0700

    iio: proximity: as3935: recalibrate RCO after resume
    
    [ Upstream commit 6272c0de13abf1480f701d38288f28a11b4301c4 ]
    
    According to the datasheet the RCO must be recalibrated
    on every power-on-reset. Also remove mutex locking in the
    calibration function since callers other than the probe
    function (which doesn't need it) will have a lock.
    
    Fixes: 24ddb0e4bba4 ("iio: Add AS3935 lightning sensor support")
    Cc: George McCollister <george.mccollister@gmail.com>
    Signed-off-by: Matt Ranostay <matt.ranostay@konsulko.com>
    Signed-off-by: Jonathan Cameron <jic23@kernel.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit fe947490181055dcd0cc61ad26c84302aadb8645
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Sat Apr 22 13:47:23 2017 +0300

    staging: rtl8188eu: prevent an underflow in rtw_check_beacon_data()
    
    [ Upstream commit 784047eb2d3405a35087af70cba46170c5576b25 ]
    
    The "len" could be as low as -14 so we should check for negatives.
    
    Fixes: 9a7fe54ddc3a ("staging: r8188eu: Add source files for new driver - part 1")
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
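
    A sketch of the added check at the top of rtw_check_beacon_data() (the
    upper bound macro is an assumption based on the driver's IE size limit):

        if (len < 0 || len > MAX_IE_SZ)
                return _FAIL;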

commit 94bfe4f31f46435ecb6f834cd090241860dd0d0c
Author: Tony Lindgren <tony@atomide.com>
Date:   Sat Apr 15 10:05:08 2017 -0700

    mfd: omap-usb-tll: Fix inverted bit use for USB TLL mode
    
    [ Upstream commit 8b8a84c54aff4256d592dc18346c65ecf6811b45 ]
    
    Commit 16fa3dc75c22 ("mfd: omap-usb-tll: HOST TLL platform driver")
    added support for USB TLL, but uses the OMAP_TLL_CHANNEL_CONF_ULPINOBITSTUFF
    bit the wrong way. The comments in the code are correct, but the inverted
    use of OMAP_TLL_CHANNEL_CONF_ULPINOBITSTUFF causes the register to be
    enabled instead of disabled, unlike what the comments say.
    
    Without this change the Wrigley 3G LTE modem on the droid 4 EHCI bus can
    only be pinged a few times before it stops responding.
    
    Fixes: 16fa3dc75c22 ("mfd: omap-usb-tll: HOST TLL platform driver")
    Signed-off-by: Tony Lindgren <tony@atomide.com>
    Acked-by: Roger Quadros <rogerq@ti.com>
    Signed-off-by: Lee Jones <lee.jones@linaro.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 49919278f4ce90fe58a0c135ea8f3280b3e1e66e
Author: Laura Abbott <labbott@redhat.com>
Date:   Mon May 8 14:23:16 2017 -0700

    x86/mm/32: Set the '__vmalloc_start_set' flag in initmem_init()
    
    [ Upstream commit 861ce4a3244c21b0af64f880d5bfe5e6e2fb9e4a ]
    
    '__vmalloc_start_set' currently only gets set in initmem_init() when
    !CONFIG_NEED_MULTIPLE_NODES. This breaks detection of vmalloc address
    with virt_addr_valid() with CONFIG_NEED_MULTIPLE_NODES=y, causing
    a kernel crash:
    
      [mm/usercopy] 517e1fbeb6: kernel BUG at arch/x86/mm/physaddr.c:78!
    
    Set '__vmalloc_start_set' appropriately for that case as well.
    
    Reported-by: kbuild test robot <fengguang.wu@intel.com>
    Signed-off-by: Laura Abbott <labbott@redhat.com>
    Reviewed-by: Kees Cook <keescook@chromium.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Fixes: dc16ecf7fd1f ("x86-32: use specific __vmalloc_start_set flag in __virt_addr_valid")
    Link: http://lkml.kernel.org/r/1494278596-30373-1-git-send-email-labbott@redhat.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit daccc774edf2011ce0dfd6083b71f6eb87535d48
Author: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Date:   Fri May 12 16:35:45 2017 +0200

    serial: efm32: Fix parity management in 'efm32_uart_console_get_options()'
    
    [ Upstream commit be40597a1bc173bf9dadccdf5388b956f620ae8f ]
    
    UARTn_FRAME_PARITY_ODD is 0x0300
    UARTn_FRAME_PARITY_EVEN is 0x0200
    So if the UART is configured for EVEN parity, it would be reported as ODD.
    Fix it by correctly testing if the 2 bits are set.
    
    Fixes: 3afbd89c9639 ("serial/efm32: add new driver")
    Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
    Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
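
    A sketch of the corrected test (the driver may define a dedicated mask
    macro; the OR of both parity values is used here for illustration):

        switch (frame & (UARTn_FRAME_PARITY_ODD | UARTn_FRAME_PARITY_EVEN)) {
        case UARTn_FRAME_PARITY_ODD:
                *parity = 'o';
                break;
        case UARTn_FRAME_PARITY_EVEN:
                *parity = 'e';
                break;
        default:
                *parity = 'n';
                break;
        }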

commit 394dc0f7c2ae373272d95bf19932d9f41ac806ee
Author: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Date:   Sat Jun 10 13:52:45 2017 +0300

    mac80211: don't send SMPS action frame in AP mode when not needed
    
    [ Upstream commit b3dd8279659f14f3624bb32559782d699fa6f7d1 ]
    
    mac80211 allows modifying the SMPS state of an AP both
    when it is started and after it has been started. Such a
    change will trigger an action frame to all the peers that
    are currently connected, and will be remembered so that
    new peers will get notified as soon as they connect (since
    the SMPS setting in the beacon may not be the right one).
    
    This means that we need to remember the SMPS state
    currently requested as well as the SMPS state that was
    configured initially (and advertised in the beacon).
    The former is bss->req_smps and the latter is
    sdata->smps_mode.
    
    Initially, the AP interface could only be started with
    SMPS_OFF, which means that sdata->smps_mode was SMPS_OFF
    always. Later, a nl80211 API was added to be able to start
    an AP with a different SMPS mode. That code forgot to update
    bss->req_smps and because of that, if the AP interface was
    started with SMPS_DYNAMIC, we had:
       sdata->smps_mode = SMPS_DYNAMIC
       bss->req_smps = SMPS_OFF
    
    That configuration made mac80211 think it needs to fire off
    an action frame to any new station connecting to the AP in
    order to let it know that the actual SMPS configuration is
    SMPS_OFF.
    
    Fix that by properly setting bss->req_smps in
    ieee80211_start_ap.
    
    Fixes: f69931748730 ("mac80211: set smps_mode according to ap params")
    Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
    Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
    Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
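
    A one-line sketch of the fix in ieee80211_start_ap() (the exact field
    path for the BSS data is approximate):

        /* Keep the requested SMPS state in sync with the mode the AP was
         * started with, so no spurious SMPS action frames are sent later. */
        sdata->u.ap.req_smps = sdata->smps_mode;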

commit 8373afb6cb125bf716b22997a2da802cbfc2d833
Author: Johannes Berg <johannes.berg@intel.com>
Date:   Thu Apr 27 13:19:04 2017 +0200

    mac80211: fix IBSS presp allocation size
    
    [ Upstream commit f1f3e9e2a50a70de908f9dfe0d870e9cdc67e042 ]
    
    When VHT IBSS support was added, the size of the extra elements
    wasn't considered in ieee80211_ibss_build_presp(), which makes
    it possible that it would overrun the allocated buffer. Fix it
    by allocating the necessary space.
    
    Fixes: abcff6ef01f9 ("mac80211: add VHT support for IBSS")
    Reported-by: Shaul Triebitz <shaul.triebitz@intel.com>
    Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 26e7f9d274cfc3fb06dc99ff0d7173e06529f217
Author: Koen Vandeputte <koen.vandeputte@ncentric.com>
Date:   Wed Feb 8 15:32:05 2017 +0100

    mac80211: fix CSA in IBSS mode
    
    [ Upstream commit f181d6a3bcc35633facf5f3925699021c13492c5 ]
    
    Add the missing IBSS capability flag during capability init as it needs
    to be inserted into the generated beacon in order for CSA to work.
    
    Fixes: cd7760e62c2ac ("mac80211: add support for CSA in IBSS mode")
    Signed-off-by: Piotr Gawlowicz <gawlowicz@tkn.tu-berlin.de>
    Signed-off-by: Mikołaj Chwalisz <chwalisz@tkn.tu-berlin.de>
    Tested-by: Koen Vandeputte <koen.vandeputte@ncentric.com>
    Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 5125e4a412999c513cc12ae848d5d5f1ab72fac3
Author: Jason A. Donenfeld <Jason@zx2c4.com>
Date:   Sat Jun 10 04:59:12 2017 +0200

    mac80211/wpa: use constant time memory comparison for MACs
    
    [ Upstream commit 98c67d187db7808b1f3c95f2110dd4392d034182 ]
    
    Otherwise, we enable all sorts of forgeries via timing attack.
    
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    Cc: Johannes Berg <johannes@sipsolutions.net>
    Cc: linux-wireless@vger.kernel.org
    Cc: stable@vger.kernel.org
    Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 1a8dacfbbbe76969e766e20f8d2bfe9810471781
Author: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Date:   Thu Jun 8 14:00:49 2017 +0300

    mac80211: don't look at the PM bit of BAR frames
    
    [ Upstream commit 769dc04db3ed8484798aceb015b94deacc2ba557 ]
    
    When a peer sends a BAR frame with PM bit clear, we should
    not modify its PM state, as mandated by the spec in
    802.11-2012 10.2.1.2.
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
    Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 61df07cce8eba07e3b40c09d16a51e2204f7536f
Author: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Date:   Fri Apr 28 01:51:40 2017 -0300

    [media] vb2: Fix an off by one error in 'vb2_plane_vaddr'
    
    [ Upstream commit 5ebb6dd36c9f5fb37b1077b393c254d70a14cb46 ]
    
    We should ensure that 'plane_no' is '< vb->num_planes' as done in
    'vb2_plane_cookie' just a few lines below.
    
    Fixes: e23ccc0ad925 ("[media] v4l: add videobuf2 Video for Linux 2 driver framework")
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
    Reviewed-by: Sakari Ailus <sakari.ailus@linux.intel.com>
    Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
    Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
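
    A sketch of the fixed check (the off-by-one was a '>' where '>=' is
    needed; the memop helper name may differ by kernel version):

        void *vb2_plane_vaddr(struct vb2_buffer *vb, unsigned int plane_no)
        {
                if (plane_no >= vb->num_planes || !vb->planes[plane_no].mem_priv)
                        return NULL;

                return call_ptr_memop(vb, vaddr, vb->planes[plane_no].mem_priv);
        }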

commit 6ea9210c046f2a3505e2b6691dfdddb22ed0c008
Author: Marc Kleine-Budde <mkl@pengutronix.de>
Date:   Sun Jun 4 14:03:42 2017 +0200

    can: gs_usb: fix memory leak in gs_cmd_reset()
    
    [ Upstream commit 5cda3ee5138e91ac369ed9d0b55eab0dab077686 ]
    
    This patch adds the missing kfree() in gs_cmd_reset() to free the
    memory that is not used anymore after usb_control_msg().
    
    Cc: linux-stable <stable@vger.kernel.org>
    Cc: Maximilian Schneider <max@schneidersoft.net>
    Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
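
    A condensed sketch of the fixed gs_cmd_reset() tail (control-message
    arguments abbreviated and approximate):

        rc = usb_control_msg(interface_to_usbdev(intf),
                             usb_sndctrlpipe(interface_to_usbdev(intf), 0),
                             GS_USB_BREQ_MODE,
                             USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_INTERFACE,
                             gsdev->channel, 0, dm, sizeof(*dm), 1000);

        kfree(dm);      /* the previously missing free of the mode buffer */

        return rc;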

commit d30248c41364c655966a9f72f5409477e57f9f9b
Author: Nicholas Bellinger <nab@linux-iscsi.org>
Date:   Thu Jun 8 04:51:54 2017 +0000

    configfs: Fix race between create_link and configfs_rmdir
    
    [ Upstream commit ba80aa909c99802c428682c352b0ee0baac0acd3 ]
    
    This patch closes a long standing race in configfs between
    the creation of a new symlink in create_link(), while the
    symlink target's config_item is being concurrently removed
    via configfs_rmdir().
    
    This can happen because the symlink target's reference
    is obtained by config_item_get() in create_link() before
    the CONFIGFS_USET_DROPPING bit set by configfs_detach_prep()
    during configfs_rmdir() shutdown is actually checked.
    
    This originally manifested itself on ppc64 on v4.8.y under
    heavy load using ibmvscsi target ports with Novalink API:
    
    [ 7877.289863] rpadlpar_io: slot U8247.22L.212A91A-V1-C8 added
    [ 7879.893760] ------------[ cut here ]------------
    [ 7879.893768] WARNING: CPU: 15 PID: 17585 at ./include/linux/kref.h:46 config_item_get+0x7c/0x90 [configfs]
    [ 7879.893811] CPU: 15 PID: 17585 Comm: targetcli Tainted: G           O 4.8.17-customv2.22 #12
    [ 7879.893812] task: c00000018a0d3400 task.stack: c0000001f3b40000
    [ 7879.893813] NIP: d000000002c664ec LR: d000000002c60980 CTR: c000000000b70870
    [ 7879.893814] REGS: c0000001f3b43810 TRAP: 0700   Tainted: G O     (4.8.17-customv2.22)
    [ 7879.893815] MSR: 8000000000029033 <SF,EE,ME,IR,DR,RI,LE>  CR: 28222242  XER: 00000000
    [ 7879.893820] CFAR: d000000002c664bc SOFTE: 1
                    GPR00: d000000002c60980 c0000001f3b43a90 d000000002c70908 c0000000fbc06820
                    GPR04: c0000001ef1bd900 0000000000000004 0000000000000001 0000000000000000
                    GPR08: 0000000000000000 0000000000000001 d000000002c69560 d000000002c66d80
                    GPR12: c000000000b70870 c00000000e798700 c0000001f3b43ca0 c0000001d4949d40
                    GPR16: c00000014637e1c0 0000000000000000 0000000000000000 c0000000f2392940
                    GPR20: c0000001f3b43b98 0000000000000041 0000000000600000 0000000000000000
                    GPR24: fffffffffffff000 0000000000000000 d000000002c60be0 c0000001f1dac490
                    GPR28: 0000000000000004 0000000000000000 c0000001ef1bd900 c0000000f2392940
    [ 7879.893839] NIP [d000000002c664ec] config_item_get+0x7c/0x90 [configfs]
    [ 7879.893841] LR [d000000002c60980] check_perm+0x80/0x2e0 [configfs]
    [ 7879.893842] Call Trace:
    [ 7879.893844] [c0000001f3b43ac0] [d000000002c60980] check_perm+0x80/0x2e0 [configfs]
    [ 7879.893847] [c0000001f3b43b10] [c000000000329770] do_dentry_open+0x2c0/0x460
    [ 7879.893849] [c0000001f3b43b70] [c000000000344480] path_openat+0x210/0x1490
    [ 7879.893851] [c0000001f3b43c80] [c00000000034708c] do_filp_open+0xfc/0x170
    [ 7879.893853] [c0000001f3b43db0] [c00000000032b5bc] do_sys_open+0x1cc/0x390
    [ 7879.893856] [c0000001f3b43e30] [c000000000009584] system_call+0x38/0xec
    [ 7879.893856] Instruction dump:
    [ 7879.893858] 409d0014 38210030 e8010010 7c0803a6 4e800020 3d220000 e94981e0 892a0000
    [ 7879.893861] 2f890000 409effe0 39200001 992a0000 <0fe00000> 4bffffd0 60000000 60000000
    [ 7879.893866] ---[ end trace 14078f0b3b5ad0aa ]---
    
    To close this race, go ahead and obtain the symlink's target
    config_item reference only after the existing CONFIGFS_USET_DROPPING
    check succeeds.
    
    This way, if configfs_rmdir() wins, create_link() will return -ENOENT,
    and if create_link() wins, configfs_rmdir() will return -EBUSY.
    
    Reported-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com>
    Tested-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com>
    Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Cc: stable@vger.kernel.org
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 9307fb9f34e1e26fe5ad2de164043557f7ac445d
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Fri Nov 25 14:03:55 2016 +0300

    sparc64: make string buffers large enough
    
    [ Upstream commit b5c3206190f1fddd100b3060eb15f0d775ffeab8 ]
    
    My static checker complains that if "lvl" is ULONG_MAX (this is 64 bit)
    then some of the strings will overflow.  I don't know if that's possible
    but it seems simple enough to make the buffers slightly larger.
    
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit b865f707d7c61799b8808b060d0eae476a2b92f3
Author: Max Filippov <jcmvbkbc@gmail.com>
Date:   Mon Jun 5 02:43:51 2017 -0700

    xtensa: don't use linux IRQ #0
    
    [ Upstream commit e5c86679d5e864947a52fb31e45a425dea3e7fa9 ]
    
    Linux IRQ #0 is reserved for error reporting and may not be used.
    Increase NR_IRQS by one additional slot and increase
    irq_domain_add_legacy parameter first_irq value to 1, so that linux
    IRQ #0 is not associated with hardware IRQ #0 in legacy IRQ domains.
    Introduce macro XTENSA_PIC_LINUX_IRQ for static translation of xtensa
    PIC hardware IRQ # to linux IRQ #. Use this macro in XTFPGA platform
    data definitions.
    
    This fixes the inability to use hardware IRQ #0 in configurations that don't
    use device tree and allows for non-identity mapping between linux IRQ #
    and hardware IRQ #.
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit ddda59580e105f8886c17b20856cdeb53eb67e42
Author: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Date:   Tue Jan 24 13:00:47 2017 +0100

    tipc: ignore requests when the connection state is not CONNECTED
    
    [ Upstream commit 4c887aa65d38633885010277f3482400681be719 ]
    
    In tipc_conn_sendmsg(), we first queue the request to the outqueue
    followed by the connection state check. If the connection is not
    connected, we should not queue this message.
    
    In this commit, we reject the messages if the connection state is
    not CF_CONNECTED.
    
    Acked-by: Ying Xue <ying.xue@windriver.com>
    Acked-by: Jon Maloy <jon.maloy@ericsson.com>
    Tested-by: John Thompson <thompa.atl@gmail.com>
    Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 1b98bab153c71520982c0f4d7460efc16f6b49a7
Author: Eric Dumazet <edumazet@google.com>
Date:   Tue Jan 24 15:18:07 2017 -0800

    proc: add a schedule point in proc_pid_readdir()
    
    [ Upstream commit 3ba4bceef23206349d4130ddf140819b365de7c8 ]
    
    We have seen proc_pid_readdir() invocations holding the CPU for more than 50
    ms.  Add a cond_resched() to be gentle with other tasks.
    
    [akpm@linux-foundation.org: coding style fix]
    Link: http://lkml.kernel.org/r/1484238380.15816.42.camel@edumazet-glaptop3.roam.corp.google.com
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
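
    A sketch of the added schedule point inside the task iteration loop of
    proc_pid_readdir() (loop body elided):

        for (iter = next_tgid(ns, iter);
             iter.task;
             iter.tgid += 1, iter = next_tgid(ns, iter)) {
                cond_resched();         /* the added schedule point */
                /* ... emit the dirent for this task as before ... */
        }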

commit a20b7cab8735f9d2752fec5971d3405e3c0634c1
Author: Coly Li <colyli@suse.de>
Date:   Tue Jan 24 15:18:46 2017 -0800

    romfs: use different way to generate fsid for BLOCK or MTD
    
    [ Upstream commit f598f82e204ec0b17797caaf1b0311c52d43fb9a ]
    
    Commit 8a59f5d25265 ("fs/romfs: return f_fsid for statfs(2)") generates
    a 64bit id from sb->s_bdev->bd_dev.  This is only correct when romfs is
    defined with CONFIG_ROMFS_ON_BLOCK.  If romfs is only defined with
    CONFIG_ROMFS_ON_MTD, sb->s_bdev is NULL, and referencing sb->s_bdev->bd_dev
    will trigger an oops.
    
    Richard Weinberger points out that when CONFIG_ROMFS_BACKED_BY_BOTH=y,
    both CONFIG_ROMFS_ON_BLOCK and CONFIG_ROMFS_ON_MTD are defined.
    Therefore when calling huge_encode_dev() to generate a 64bit id, I use
    the following order to choose the parameter:
    
    - CONFIG_ROMFS_ON_BLOCK defined
      use sb->s_bdev->bd_dev
    - CONFIG_ROMFS_ON_BLOCK undefined and CONFIG_ROMFS_ON_MTD defined
      use sb->s_dev
    - both CONFIG_ROMFS_ON_BLOCK and CONFIG_ROMFS_ON_MTD undefined
      leave id as 0
    
    When CONFIG_ROMFS_ON_MTD is defined and sb->s_mtd is not NULL, sb->s_dev
    is set to a device ID generated by MTD_BLOCK_MAJOR and mtd index,
    otherwise sb->s_dev is 0.
    
    This is a best-effort attempt to generate a unique file system ID; if none
    of the above conditions are met, the f_fsid of this romfs instance will be
    0. Generally only one romfs can be built on a single MTD block device, so
    this method is enough to identify multiple romfs instances in a computer.
    
    Link: http://lkml.kernel.org/r/1482928596-115155-1-git-send-email-colyli@suse.de
    Signed-off-by: Coly Li <colyli@suse.de>
    Reported-by: Nong Li <nongli1031@gmail.com>
    Tested-by: Nong Li <nongli1031@gmail.com>
    Cc: Richard Weinberger <richard.weinberger@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
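
    A sketch of the selection order described above (statfs plumbing
    abbreviated; huge_encode_dev() is the helper named in the message):

        u64 id = 0;

        /* s_bdev is only set for a block-device mount, so testing it at
         * run time also covers the ROMFS_BACKED_BY_BOTH configuration. */
        if (sb->s_bdev)
                id = huge_encode_dev(sb->s_bdev->bd_dev);
        else if (sb->s_dev)
                id = huge_encode_dev(sb->s_dev);

        buf->f_fsid.val[0] = (u32)id;
        buf->f_fsid.val[1] = (u32)(id >> 32);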

commit 3e335922279b32412845d912d2226e4330e731ed
Author: Randy Dunlap <rdunlap@infradead.org>
Date:   Tue Jan 24 15:18:49 2017 -0800

    mn10300: fix build error of missing fpu_save()
    
    [ Upstream commit 3705ccfdd1e8b539225ce20e3925a945cc788d67 ]
    
    When CONFIG_FPU is not enabled on arch/mn10300, <asm/switch_to.h> causes
    a build error with a call to fpu_save():
    
      kernel/built-in.o: In function `.L410':
      core.c:(.sched.text+0x28a): undefined reference to `fpu_save'
    
    Fix this by including <asm/fpu.h> in <asm/switch_to.h> so that an empty
    static inline fpu_save() is defined.
    
    Link: http://lkml.kernel.org/r/dc421c4f-4842-4429-1b99-92865c2f24b6@infradead.org
    Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
    Reported-by: kbuild test robot <fengguang.wu@intel.com>
    Reviewed-by: David Howells <dhowells@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 86e9b2ee9cc6f327bdfd4f2a5630eefaeacebcb7
Author: Xin Long <lucien.xin@gmail.com>
Date:   Tue Jan 24 14:01:53 2017 +0800

    sctp: sctp_addr_id2transport should verify the addr before looking up assoc
    
    [ Upstream commit 6f29a130613191d3c6335169febe002cba00edf5 ]
    
    sctp_addr_id2transport is a function for sockopt to look up assoc by
    address. As the address is from userspace, it can be a v4-mapped v6
    address. But in sctp protocol stack, it always handles a v4-mapped
    v6 address as a v4 address. So it's necessary to convert it to a v4
    address before looking up assoc by address.
    
    This patch is to fix it by calling sctp_verify_addr in which it can do
    this conversion before calling sctp_endpoint_lookup_assoc, just like
    what sctp_sendmsg and __sctp_connect do for the address from users.
    
    Signed-off-by: Xin Long <lucien.xin@gmail.com>
    Acked-by: Neil Horman <nhorman@tuxdriver.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 5fbc861ad7eba34a9d27408797c912d104f3da1f
Author: hayeswang <hayeswang@realtek.com>
Date:   Thu Jan 26 09:38:33 2017 +0800

    r8152: re-schedule napi for tx
    
    [ Upstream commit 248b213ad908b88db15941202ef7cb7eb137c1a0 ]
    
    Re-schedule napi after napi_complete() for tx, if it is necessary.
    
    In r8152_poll(), if the tx is completed after tx_bottom() and before
    napi_complete(), the scheduling of napi would be lost. Then, no
    one handles the next tx until the next napi_schedule() is called.
    
    Signed-off-by: Hayes Wang <hayeswang@realtek.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 41e0083c7ddbfe9137471ddabdbd72c2ef4d27bc
Author: Y.C. Chen <yc_chen@aspeedtech.com>
Date:   Thu Jan 26 09:45:40 2017 +0800

    drm/ast: Fix system hang if P2A is disabled
    
    [ Upstream commit 6c971c09f38704513c426ba6515f22fb3d6c87d5 ]
    
    The original ast driver will access some BMC configuration through the P2A
    bridge, which can be disabled on AST2300 and later.
    This will cause the system to hang if the P2A bridge is disabled.
    Here is the update to fix it.
    
    Signed-off-by: Y.C. Chen <yc_chen@aspeedtech.com>
    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 9b50bb2bc343fe52178ae484f545a558e6e72b76
Author: Lyude Paul <lyude@redhat.com>
Date:   Wed Jan 11 21:25:23 2017 -0500

    drm/nouveau: Don't enable polling twice on runtime resume
    
    [ Upstream commit cae9ff036eea577856d5b12860b4c79c5e71db4a ]
    
    As it turns out, on cards that actually have CRTCs on them we're already
    calling drm_kms_helper_poll_enable(drm_dev) from
    nouveau_display_resume() before we call it in
    nouveau_pmops_runtime_resume(). This leads us to accidentally trying to
    enable polling twice, which results in a potential deadlock between the
    RPM locks and drm_dev->mode_config.mutex if we end up trying to enable
    polling the second time while output_poll_execute is running and holding
    the mode_config lock. As such, make sure we only enable polling in
    nouveau_pmops_runtime_resume() if we need to.
    
    This fixes hangs observed on the ThinkPad W541.
    
    Signed-off-by: Lyude <lyude@redhat.com>
    Cc: Hans de Goede <hdegoede@redhat.com>
    Cc: Kilian Singer <kilian.singer@quantumtechnology.info>
    Cc: Lukas Wunner <lukas@wunner.de>
    Cc: David Airlie <airlied@redhat.com>
    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit c29b8f7d2d1e225d8997b825aeb77e26ec517588
Author: Helge Deller <deller@gmx.de>
Date:   Tue Jan 3 22:55:50 2017 +0100

    parisc, parport_gsc: Fixes for printk continuation lines
    
    [ Upstream commit 83b5d1e3d3013dbf90645a5d07179d018c8243fa ]
    
    Signed-off-by: Helge Deller <deller@gmx.de>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 8cc5799710863ddf0429428f3cb4480b526c5597
Author: Alexey Khoroshilov <khoroshilov@ispras.ru>
Date:   Sat Jan 28 01:07:30 2017 +0300

    net: adaptec: starfire: add checks for dma mapping errors
    
    [ Upstream commit d1156b489fa734d1af763d6a07b1637c01bb0aed ]
    
    init_ring(), refill_rx_ring() and start_tx() don't check
    whether DMA memory mapping succeeded.
    The patch adds the checks and failure handling.
    
    Found by Linux Driver Verification project (linuxtesting.org).
    
    Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 6d43352435ba022174d62c4a1b6252ec56ab0fbb
Author: Jack Morgenstein <jackm@dev.mellanox.co.il>
Date:   Mon Jan 30 15:11:45 2017 +0200

    net/mlx4_core: Avoid command timeouts during VF driver device shutdown
    
    [ Upstream commit d585df1c5ccf995fcee910705ad7a9cdd11d4152 ]
    
    Some Hypervisors detach VFs from VMs by instantly causing an FLR event
    to be generated for a VF.
    
    In the mlx4 case, this will cause that VF's comm channel to be disabled
    before the VM has an opportunity to invoke the VF device's "shutdown"
    method.
    
    The result is that the VF driver on the VM will experience a command
    timeout during the shutdown process when the Hypervisor does not deliver
    a command-completion event to the VM.
    
    To avoid FW command timeouts on the VM when the driver's shutdown method
    is invoked, we detect the absence of the VF's comm channel at the very
    start of the shutdown process. If the comm-channel has already been
    disabled, we cause all FW commands during the device shutdown process to
    immediately return success (and thus avoid all command timeouts).
    
    Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
    Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 295a19f594e889d19bbcc0229c1cd0ef6b06863f
Author: Ben Skeggs <bskeggs@redhat.com>
Date:   Tue May 23 21:54:09 2017 -0400

    drm/nouveau/fence/g84-: protect against concurrent access to semaphore buffers
    
    [ Upstream commit 96692b097ba76d0c637ae8af47b29c73da33c9d0 ]
    
    Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 510c29634e35e9f81012d88f93f06709d6f176b1
Author: David Howells <dhowells@redhat.com>
Date:   Tue May 23 21:54:05 2017 -0400

    fscache: Clear outstanding writes when disabling a cookie
    
    [ Upstream commit 6bdded59c8933940ac7e5b416448276ac89d1144 ]
    
    fscache_disable_cookie() needs to clear the outstanding writes on the
    cookie it's disabling because they cannot be completed afterwards.
    
    Without this, fscache_nfs_open_file() gets stuck because it disables the
    cookie when the file is opened for writing but can't uncache the pages till
    afterwards - otherwise there's a race between the open routine and anyone
    who already has it open R/O and is still reading from it.
    
    Looking in /proc/pid/stack of the offending process shows:
    
    [<ffffffffa0142883>] __fscache_wait_on_page_write+0x82/0x9b [fscache]
    [<ffffffffa014336e>] __fscache_uncache_all_inode_pages+0x91/0xe1 [fscache]
    [<ffffffffa01740fa>] nfs_fscache_open_file+0x59/0x9e [nfs]
    [<ffffffffa01ccf41>] nfs4_file_open+0x17f/0x1b8 [nfsv4]
    [<ffffffff8117350e>] do_dentry_open+0x16d/0x2b7
    [<ffffffff811743ac>] vfs_open+0x5c/0x65
    [<ffffffff81184185>] path_openat+0x785/0x8fb
    [<ffffffff81184343>] do_filp_open+0x48/0x9e
    [<ffffffff81174710>] do_sys_open+0x13b/0x1cb
    [<ffffffff811747b9>] SyS_open+0x19/0x1b
    [<ffffffff81001c44>] do_syscall_64+0x80/0x17a
    [<ffffffff8165c2da>] return_from_SYSCALL_64+0x0/0x7a
    [<ffffffffffffffff>] 0xffffffffffffffff
    
    Reported-by: Jianhong Yin <jiyin@redhat.com>
    Signed-off-by: David Howells <dhowells@redhat.com>
    Acked-by: Jeff Layton <jlayton@redhat.com>
    Acked-by: Steve Dickson <steved@redhat.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 42c32ac3cec6a637b33697fd882a6ff20acedd14
Author: Stanislaw Gruszka <sgruszka@redhat.com>
Date:   Tue May 23 21:53:59 2017 -0400

    ethtool: do not vzalloc(0) on registers dump
    
    [ Upstream commit 3808d34838184fd29088d6b3a364ba2f1c018fb6 ]
    
    If the ->get_regs_len() callback returns 0, we allocate 0 bytes of memory,
    which prints an ugly warning in dmesg; it can be found further below.
    
    This happens on mac80211 devices, where ieee80211_get_regs_len() just
    returns 0 and the driver only fills the ethtool_regs structure without
    actually providing any dump. However, I assume this can also happen with
    other drivers, i.e. when a driver provides a regs dump for some devices
    but not for others. Hence preventing the warning in the ethtool code
    seems reasonable.
    
    ethtool: vmalloc: allocation failure: 0 bytes, mode:0x24080c2(GFP_KERNEL|__GFP_HIGHMEM|__GFP_ZERO)
    <snip>
    Call Trace:
    [<ffffffff813bde47>] dump_stack+0x63/0x8c
    [<ffffffff811b0a1f>] warn_alloc+0x13f/0x170
    [<ffffffff811f0476>] __vmalloc_node_range+0x1e6/0x2c0
    [<ffffffff811f0874>] vzalloc+0x54/0x60
    [<ffffffff8169986c>] dev_ethtool+0xb4c/0x1b30
    [<ffffffff816adbb1>] dev_ioctl+0x181/0x520
    [<ffffffff816714d2>] sock_do_ioctl+0x42/0x50
    <snip>
    Mem-Info:
    active_anon:435809 inactive_anon:173951 isolated_anon:0
     active_file:835822 inactive_file:196932 isolated_file:0
     unevictable:0 dirty:8 writeback:0 unstable:0
     slab_reclaimable:157732 slab_unreclaimable:10022
     mapped:83042 shmem:306356 pagetables:9507 bounce:0
     free:130041 free_pcp:1080 free_cma:0
    Node 0 active_anon:1743236kB inactive_anon:695804kB active_file:3343288kB inactive_file:787728kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:332168kB dirty:32kB writeback:0kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 1225424kB writeback_tmp:0kB unstable:0kB pages_scanned:0 all_unreclaimable? no
    Node 0 DMA free:15900kB min:136kB low:168kB high:200kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15984kB managed:15900kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
    lowmem_reserve[]: 0 3187 7643 7643
    Node 0 DMA32 free:419732kB min:28124kB low:35152kB high:42180kB active_anon:541180kB inactive_anon:248988kB active_file:1466388kB inactive_file:389632kB unevictable:0kB writepending:0kB present:3370280kB managed:3290932kB mlocked:0kB slab_reclaimable:217184kB slab_unreclaimable:4180kB kernel_stack:160kB pagetables:984kB bounce:0kB free_pcp:2236kB local_pcp:660kB free_cma:0kB
    lowmem_reserve[]: 0 0 4456 4456
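    
    A minimal sketch of the described guard (simplified, with assumed
    surrounding context from ethtool_get_regs(); not the exact patch):
    
        reglen = ops->get_regs_len(dev);
        if (reglen > 0) {
                regbuf = vzalloc(reglen);
                if (!regbuf)
                        return -ENOMEM;
        }
        /* regbuf stays NULL when there is nothing to dump, so the
         * zero-sized vzalloc() warning is never triggered */
        ops->get_regs(dev, &regs, regbuf);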
    
    Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit eaabe4b740957fd759617c0bee9145e57900cf20
Author: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Date:   Thu Feb 2 18:05:26 2017 +0000

    log2: make order_base_2() behave correctly on const input value zero
    
    [ Upstream commit 29905b52fad0854351f57bab867647e4982285bf ]
    
    The function order_base_2() is defined (according to the comment block)
    as returning zero on input zero, but subsequently passes the input into
    roundup_pow_of_two(), which is explicitly undefined for input zero.
    
    This has gone unnoticed until now, but optimization passes in GCC 7 may
    produce constant folded function instances where a constant value of
    zero is passed into order_base_2(), resulting in link errors against the
    deliberately undefined '____ilog2_NaN'.
    
    So update order_base_2() to adhere to its own documented interface.
    
    [ See
    
         http://marc.info/?l=linux-kernel&m=147672952517795&w=2
    
      and follow-up discussion for more background. The gcc "optimization
      pass" is really just broken, but now the GCC trunk problem seems to
      have escaped out of just specially built daily images, so we need to
      work around it in mainline.    - Linus ]
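    
    As a userspace illustration of the documented contract (a sketch, not the
    kernel implementation), the zero and one cases can be short-circuited so
    that the rounding helper is never evaluated for 0:
    
        #include <stdio.h>
    
        static unsigned int ilog2_u(unsigned long n)
        {
                unsigned int r = 0;
    
                while (n >>= 1)
                        r++;
                return r;
        }
    
        static unsigned int order_base_2_sketch(unsigned long n)
        {
                return n > 1 ? ilog2_u(n - 1) + 1 : 0;
        }
    
        int main(void)
        {
                /* prints "0 0 2 3": zero and one both map to order 0 */
                printf("%u %u %u %u\n", order_base_2_sketch(0),
                       order_base_2_sketch(1), order_base_2_sketch(4),
                       order_base_2_sketch(5));
                return 0;
        }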
    
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 8bc30cf03ca194f237e56dbad89b21fddf2c8ae0
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Tue May 23 21:53:57 2017 -0400

    kasan: respect /proc/sys/kernel/traceoff_on_warning
    
    [ Upstream commit 4f40c6e5627ea73b4e7c615c59631f38cc880885 ]
    
    After much waiting I finally reproduced a KASAN issue, only to find my
    trace-buffer empty of useful information because it got spooled out :/
    
    Make kasan_report honour the /proc/sys/kernel/traceoff_on_warning
    interface.
    
    Link: http://lkml.kernel.org/r/20170125164106.3514-1-aryabinin@virtuozzo.com
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
    Acked-by: Alexander Potapenko <glider@google.com>
    Cc: Dmitry Vyukov <dvyukov@google.com>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit acd666657821b1440dcb483cb4cde659017ee513
Author: David Lin <dtwlin@google.com>
Date:   Tue May 23 21:53:55 2017 -0400

    jump label: pass kbuild_cflags when checking for asm goto support
    
    [ Upstream commit 35f860f9ba6aac56cc38e8b18916d833a83f1157 ]
    
    Some versions of the ARM GCC compiler, such as the Android toolchain,
    throw in a '-fpic' flag by default.  This causes the gcc-goto check
    script to fail even though some configs have the '-fno-pic' flag in
    KBUILD_CFLAGS.
    
    This patch passes the KBUILD_CFLAGS to the check script so that the
    script does not rely on the default config from different compilers.
    
    Link: http://lkml.kernel.org/r/20170120234329.78868-1-dtwlin@google.com
    Signed-off-by: David Lin <dtwlin@google.com>
    Acked-by: Steven Rostedt <rostedt@goodmis.org>
    Cc: Michal Marek <mmarek@suse.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit cb2098ab876eda3053057c9be82cb68772016751
Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Date:   Tue May 23 21:53:54 2017 -0400

    PM / runtime: Avoid false-positive warnings from might_sleep_if()
    
    [ Upstream commit a9306a63631493afc75893a4ac405d4e1cbae6aa ]
    
    The might_sleep_if() assertions in __pm_runtime_idle(),
    __pm_runtime_suspend() and __pm_runtime_resume() may generate
    false-positive warnings in some situations.  For example, that
    happens if a nested pm_runtime_get_sync()/pm_runtime_put() pair
    is executed with disabled interrupts within an outer
    pm_runtime_get_sync()/pm_runtime_put() section for the same device.
    [Generally, pm_runtime_get_sync() may sleep, so it should not be
    called with disabled interrupts, but in this particular case the
    previous pm_runtime_get_sync() guarantees that the device will not
    be suspended, so the inner pm_runtime_get_sync() will return
    immediately after incrementing the device's usage counter.]
    
    That started to happen in the i915 driver in 4.10-rc, leading to
    the following splat:
    
     BUG: sleeping function called from invalid context at drivers/base/power/runtime.c:1032
     in_atomic(): 1, irqs_disabled(): 0, pid: 1500, name: Xorg
     1 lock held by Xorg/1500:
      #0:  (&dev->struct_mutex){+.+.+.}, at:
      [<ffffffffa0680c13>] i915_mutex_lock_interruptible+0x43/0x140 [i915]
     CPU: 0 PID: 1500 Comm: Xorg Not tainted
     Call Trace:
      dump_stack+0x85/0xc2
      ___might_sleep+0x196/0x260
      __might_sleep+0x53/0xb0
      __pm_runtime_resume+0x7a/0x90
      intel_runtime_pm_get+0x25/0x90 [i915]
      aliasing_gtt_bind_vma+0xaa/0xf0 [i915]
      i915_vma_bind+0xaf/0x1e0 [i915]
      i915_gem_execbuffer_relocate_entry+0x513/0x6f0 [i915]
      i915_gem_execbuffer_relocate_vma.isra.34+0x188/0x250 [i915]
      ? trace_hardirqs_on+0xd/0x10
      ? i915_gem_execbuffer_reserve_vma.isra.31+0x152/0x1f0 [i915]
      ? i915_gem_execbuffer_reserve.isra.32+0x372/0x3a0 [i915]
      i915_gem_do_execbuffer.isra.38+0xa70/0x1a40 [i915]
      ? __might_fault+0x4e/0xb0
      i915_gem_execbuffer2+0xc5/0x260 [i915]
      ? __might_fault+0x4e/0xb0
      drm_ioctl+0x206/0x450 [drm]
      ? i915_gem_execbuffer+0x340/0x340 [i915]
      ? __fget+0x5/0x200
      do_vfs_ioctl+0x91/0x6f0
      ? __fget+0x111/0x200
      ? __fget+0x5/0x200
      SyS_ioctl+0x79/0x90
      entry_SYSCALL_64_fastpath+0x23/0xc6
    
    even though the code triggering it is correct.
    
    Unfortunately, the might_sleep_if() assertions in question are
    too coarse-grained to cover such cases correctly, so make them
    a bit less sensitive in order to avoid the false-positives.
    
    Reported-and-tested-by: Sedat Dilek <sedat.dilek@gmail.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit d3121ad14562a320bc82195fd1034dca7a089610
Author: Arnd Bergmann <arnd@arndb.de>
Date:   Tue May 23 21:53:53 2017 -0400

    ARM: defconfigs: make NF_CT_PROTO_SCTP and NF_CT_PROTO_UDPLITE built-in
    
    [ Upstream commit 5aff1d245e8cc1ab5c4517d916edaed9e3f7f973 ]
    
    The symbols can no longer be used as loadable modules, leading to a harmless Kconfig
    warning:
    
    arch/arm/configs/imote2_defconfig:60:warning: symbol value 'm' invalid for NF_CT_PROTO_UDPLITE
    arch/arm/configs/imote2_defconfig:59:warning: symbol value 'm' invalid for NF_CT_PROTO_SCTP
    arch/arm/configs/ezx_defconfig:68:warning: symbol value 'm' invalid for NF_CT_PROTO_UDPLITE
    arch/arm/configs/ezx_defconfig:67:warning: symbol value 'm' invalid for NF_CT_PROTO_SCTP
    
    Let's make them built-in.
    
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 4c8eb6278488d2f6813e54fc64685de8ad193575
Author: Linus Lüssing <linus.luessing@c0d3.blue>
Date:   Tue May 23 21:53:52 2017 -0400

    ipv6: Fix IPv6 packet loss in scenarios involving roaming + snooping switches
    
    [ Upstream commit a088d1d73a4bcfd7bc482f8d08375b9b665dc3e5 ]
    
    When for instance a mobile Linux device roams from one access point to
    another with both APs sharing the same broadcast domain and a
    multicast snooping switch in between:
    
    1)    (c) <~~~> (AP1) <--[SSW]--> (AP2)
    
    2)              (AP1) <--[SSW]--> (AP2) <~~~> (c)
    
    Then currently IPv6 multicast packets will get lost for (c) until an
    MLD Querier sends its next query message. The packet loss occurs
    because upon roaming the Linux host so far stayed silent regarding
    MLD and the snooping switch will therefore be unaware of the
    multicast topology change for a while.
    
    This patch fixes this by always resending MLD reports when an interface
    change happens, for instance from NO-CARRIER to CARRIER state.
    
    Signed-off-by: Linus Lüssing <linus.luessing@c0d3.blue>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 0def8e45d25f24be3a02be7994aa3a58b404b585
Author: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Date:   Tue May 23 21:53:43 2017 -0400

    sierra_net: Add support for IPv6 and Dual-Stack Link Sense Indications
    
    [ Upstream commit 5a70348e1187c5bf1cbd0ec51843f36befed1c2d ]
    
    If a context is configured as dualstack ("IPv4v6"), the modem indicates
    the context activation with a slightly different indication message.
    The dual-stack indication omits the link_type (IPv4/v6) and adds
    additional address fields.
    IPv6 LSIs are identical to IPv4 LSIs, but have a different link type.
    
    Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
    Reviewed-by: Bjørn Mork <bjorn@mork.no>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 0c2950fa861d777b65f6e67d993b3769f0b73618
Author: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Date:   Tue May 23 21:53:42 2017 -0400

    sierra_net: Skip validating irrelevant fields for IDLE LSIs
    
    [ Upstream commit 764895d3039e903dac3a70f219949efe43d036a0 ]
    
    When the context is deactivated, the link_type is set to 0xff, which
    triggers a warning message, and results in a wrong link status, as
    the LSI is ignored.
    
    Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit a9cbb7cd1868bda2733c3d484118f52654813233
Author: Ralf Baechle <ralf@linux-mips.org>
Date:   Tue May 23 21:53:40 2017 -0400

    NET: mkiss: Fix panic
    
    [ Upstream commit 7ba1b689038726d34e3244c1ac9e2e18c2ea4787 ]
    
    If a USB-to-serial adapter is unplugged, the driver re-initializes, with
    dev->hard_header_len and dev->addr_len set to zero, instead of the correct
    values.  If a packet is then sent through the half-dead interface, the
    kernel will run out of headroom in the skb when pushing the AX.25
    headers, resulting in this panic:
    
    [<c0595468>] (skb_panic) from [<c0401f70>] (skb_push+0x4c/0x50)
    [<c0401f70>] (skb_push) from [<bf0bdad4>] (ax25_hard_header+0x34/0xf4 [ax25])
    [<bf0bdad4>] (ax25_hard_header [ax25]) from [<bf0d05d4>] (ax_header+0x38/0x40 [mkiss])
    [<bf0d05d4>] (ax_header [mkiss]) from [<c041b584>] (neigh_compat_output+0x8c/0xd8)
    [<c041b584>] (neigh_compat_output) from [<c043e7a8>] (ip_finish_output+0x2a0/0x914)
    [<c043e7a8>] (ip_finish_output) from [<c043f948>] (ip_output+0xd8/0xf0)
    [<c043f948>] (ip_output) from [<c043f04c>] (ip_local_out_sk+0x44/0x48)
    
    This patch makes mkiss behave like the 6pack driver. 6pack does not
    panic.  In 6pack.c sp_setup() (same function name here) the values for
    dev->hard_header_len and dev->addr_len are set to the same values as in
    my mkiss patch.
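    
    A minimal sketch of the initialization this adds to the mkiss setup
    routine (the same values 6pack uses; shown for illustration only):
    
        /* leave enough headroom for the AX.25 header pushed in ax_header() */
        dev->hard_header_len = AX25_MAX_HEADER_LEN;
        dev->addr_len        = AX25_ADDR_LEN;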
    
    [ralf@linux-mips.org: Massages original submission to conform to the usual
    standards for patch submissions.]
    
    Signed-off-by: Thomas Osterried <thomas@osterried.de>
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit d914dc3b811de7579a954a8c84f916c25bf73e05
Author: Ralf Baechle <ralf@linux-mips.org>
Date:   Tue May 23 21:53:37 2017 -0400

    NET: Fix /proc/net/arp for AX.25
    
    [ Upstream commit 4872e57c812dd312bf8193b5933fa60585cda42f ]
    
    When sending ARP requests over AX.25 links, the hardware address in the
    neighbour cache does not get initialized.  For such an incomplete ARP
    entry ax2asc2 will generate an empty string, resulting in /proc/net/arp
    output like the following:
    
    $ cat /proc/net/arp
    IP address       HW type     Flags       HW address            Mask     Device
    192.168.122.1    0x1         0x2         52:54:00:00:5d:5f     *        ens3
    172.20.1.99      0x3         0x0              *        bpq0
    
    The missing field will confuse the procfs parsing of arp(8) resulting in
    incorrect output for the device such as the following:
    
    $ arp
    Address                  HWtype  HWaddress           Flags Mask            Iface
    gateway                  ether   52:54:00:00:5d:5f   C                     ens3
    172.20.1.99                      (incomplete)                              ens3
    
    This changes the content of /proc/net/arp to:
    
    $ cat /proc/net/arp
    IP address       HW type     Flags       HW address            Mask     Device
    172.20.1.99      0x3         0x0         *                     *        bpq0
    192.168.122.1    0x1         0x2         52:54:00:00:5d:5f     *        ens3
    
    To do so, it changes ax2asc to put the string "*" in buf for a NULL
    address argument.  Finally, the HW address field is left-aligned in a
    17-character field (the length of an Ethernet HW address in the usual hex
    notation) for readability.
    
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 68978d69ea6f9c493bcf826b61b9b49607bff260
Author: Jonathan T. Leighton <jtleight@udel.edu>
Date:   Tue May 23 21:53:34 2017 -0400

    ipv6: Inhibit IPv4-mapped src address on the wire.
    
    [ Upstream commit ec5e3b0a1d41fbda0cc33a45bc9e54e91d9d12c7 ]
    
    This patch adds a check for the problematic case of an IPv4-mapped IPv6
    source address and a destination address that is neither an IPv4-mapped
    IPv6 address nor in6addr_any, and returns an appropriate error. The
    check is done before returning from looking up the route.
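    
    A sketch of the kind of check described (the helpers are the usual
    in-kernel ones; this is illustrative, not the exact hunk, and -EINVAL is
    only an assumed choice of error):
    
        /* an IPv4-mapped source may only talk to an IPv4-mapped or
         * unspecified destination */
        if (ipv6_addr_v4mapped(&src) &&
            !(ipv6_addr_v4mapped(&dst) || ipv6_addr_any(&dst)))
                return -EINVAL;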
    
    Signed-off-by: Jonathan T. Leighton <jtleight@udel.edu>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 197082364320eb5b9f9ca7c7f167b98dd0c330ad
Author: Jonathan T. Leighton <jtleight@udel.edu>
Date:   Tue May 23 21:53:33 2017 -0400

    ipv6: Handle IPv4-mapped src to in6addr_any dst.
    
    [ Upstream commit 052d2369d1b479cdbbe020fdd6d057d3c342db74 ]
    
    This patch adds a check on the type of the source address for the case
    where the destination address is in6addr_any. If the source is an
    IPv4-mapped IPv6 source address, the destination is changed to
    ::ffff:127.0.0.1, and otherwise the destination is changed to ::1. This
    is done in three locations to handle UDP calls to either connect() or
    sendmsg() and TCP calls to connect(). Note that udpv6_sendmsg() delays
    handling an in6addr_any destination until very late, so the patch only
    needs to handle the case where the source is an IPv4-mapped IPv6
    address.
    
    Signed-off-by: Jonathan T. Leighton <jtleight@udel.edu>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit dd4d061cf1f65e763a33aa2d1c6e1f9d0ec50639
Author: Anssi Hannula <anssi.hannula@bitwise.fi>
Date:   Tue May 23 21:53:29 2017 -0400

    net: xilinx_emaclite: fix receive buffer overflow
    
    [ Upstream commit cd224553641848dd17800fe559e4ff5d208553e8 ]
    
    xilinx_emaclite looks at the received data to try to determine the
    Ethernet packet length but does not properly clamp it if
    proto_type == ETH_P_IP or 1500 < proto_type <= 1518, causing a buffer
    overflow and a panic via skb_panic() as the length exceeds the allocated
    skb size.
    
    Fix those cases.
    
    Also add an additional unconditional check with WARN_ON() at the end.
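    
    A sketch of the final safety net described above (illustrative only; the
    real change also clamps the lengths derived from the packet contents):
    
        /* unconditional check before copying into the allocated skb */
        if (WARN_ON(length > maxlen))
                length = maxlen;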
    
    Signed-off-by: Anssi Hannula <anssi.hannula@bitwise.fi>
    Fixes: bb81b2ddfa19 ("net: add Xilinx emac lite device driver")
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 742e7978eabaa89532822009d5eb689f7a33c8d5
Author: Anssi Hannula <anssi.hannula@bitwise.fi>
Date:   Tue May 23 21:53:28 2017 -0400

    net: xilinx_emaclite: fix freezes due to unordered I/O
    
    [ Upstream commit acf138f1b00bdd1b7cd9894562ed0c2a1670888e ]
    
    The xilinx_emaclite uses __raw_writel and __raw_readl for register
    accesses. Those functions do not imply any kind of memory barriers and
    they may be reordered.
    
    The driver does not seem to take that into account, though, and the
    driver does not satisfy the ordering requirements of the hardware.
    For clear examples, see xemaclite_mdio_write() and xemaclite_mdio_read()
    which try to set MDIO address before initiating the transaction.
    
    I'm seeing system freezes with the driver with GCC 5.4 and current
    Linux kernels on Zynq-7000 SoC immediately when trying to use the
    interface.
    
    In commit 123c1407af87 ("net: emaclite: Do not use microblaze and ppc
    IO functions") the driver was switched from non-generic
    in_be32/out_be32 (memory barriers, big endian) to
    __raw_readl/__raw_writel (no memory barriers, native endian), so
    apparently the device follows system endianness and the driver was
    originally written with the assumption of memory barriers.
    
    Rather than try to hunt for each case of missing barrier, just switch
    the driver to use iowrite32/ioread32/iowrite32be/ioread32be depending
    on endianness instead.
    
    Tested on little-endian Zynq-7000 ARM SoC FPGA.
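    
    A sketch of the accessor switch (helper names assumed here; the point is
    to use ordered MMIO accessors instead of the unordered __raw_* ones):
    
        #ifdef __BIG_ENDIAN
        #define xemaclite_readl(addr)           ioread32be(addr)
        #define xemaclite_writel(val, addr)     iowrite32be((val), (addr))
        #else
        #define xemaclite_readl(addr)           ioread32(addr)
        #define xemaclite_writel(val, addr)     iowrite32((val), (addr))
        #endif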
    
    Signed-off-by: Anssi Hannula <anssi.hannula@bitwise.fi>
    Fixes: 123c1407af87 ("net: emaclite: Do not use microblaze and ppc IO functions")
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit afae1d9da32e2b8fec14884276fc0748f6019811
Author: Richard <richard@aaazen.com>
Date:   Sun May 21 12:27:00 2017 -0700

    partitions/msdos: FreeBSD UFS2 file systems are not recognized
    
    [ Upstream commit 223220356d5ebc05ead9a8d697abb0c0a906fc81 ]
    
    The code in block/partitions/msdos.c recognizes FreeBSD, OpenBSD
    and NetBSD partitions and does a reasonable job picking out OpenBSD
    and NetBSD UFS subpartitions.
    
    But for FreeBSD the subpartitions are always "bad".
    
        Kernel: <bsd:bad subpartition - ignored
    
    Though all 3 of these BSD systems use UFS as a file system, only
    FreeBSD uses relative start addresses in the subpartition
    declarations.
    
    The following patch fixes this for FreeBSD partitions and leaves
    the code for OpenBSD and NetBSD intact:
    
    Signed-off-by: Richard Narron <comet.berkeley@gmail.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Jens Axboe <axboe@fb.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 7f6abe4c05600dae70b4a0037be77b752f57a077
Author: Imre Deak <imre.deak@intel.com>
Date:   Tue May 23 14:18:17 2017 -0500

    PCI/PM: Add needs_resume flag to avoid suspend complete optimization
    
    [ Upstream commit 4d071c3238987325b9e50e33051a40d1cce311cc ]
    
    Some drivers - like i915 - may not support the system suspend direct
    complete optimization due to differences in their runtime and system
    suspend sequence.  Add a flag that when set resumes the device before
    calling the driver's system suspend handlers which effectively disables
    the optimization.
    
    Needed by a future patch fixing suspend/resume on i915.
    
    Suggested by Rafael.
    
    Signed-off-by: Imre Deak <imre.deak@intel.com>
    Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
    Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit cd1c4f855f68a251587340f63f3b5e2e38d2c97e
Author: Kees Cook <keescook@chromium.org>
Date:   Mon Feb 13 11:25:26 2017 -0800

    usercopy: Adjust tests to deal with SMAP/PAN
    
    [ Upstream commit f5f893c57e37ca730808cb2eee3820abd05e7507 ]
    
    Under SMAP/PAN/etc, we cannot write directly to userspace memory, so
    this rearranges the test bytes to get written through copy_to_user().
    Additionally, this drops the bad copy_from_user() test that would trigger a
    memcpy() against userspace on failure.
    
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 9da808668b5883c4d6b147f5ec7a89a8971508fa
Author: Kristina Martsenko <kristina.martsenko@arm.com>
Date:   Tue Jun 6 20:14:10 2017 +0100

    arm64: entry: improve data abort handling of tagged pointers
    
    [ Upstream commit 276e93279a630657fff4b086ba14c95955912dfa ]
    
    When handling a data abort from EL0, we currently zero the top byte of
    the faulting address, as we assume the address is a TTBR0 address, which
    may contain a non-zero address tag. However, the address may be a TTBR1
    address, in which case we should not zero the top byte. This patch fixes
    that. The effect is that the full TTBR1 address is passed to the task's
    signal handler (or printed out in the kernel log).
    
    When handling a data abort from EL1, we leave the faulting address
    intact, as we assume it's either a TTBR1 address or a TTBR0 address with
    tag 0x00. This is true as far as I'm aware, we don't seem to access a
    tagged TTBR0 address anywhere in the kernel. Regardless, it's easy to
    forget about address tags, and code added in the future may not always
    remember to remove tags from addresses before accessing them. So add tag
    handling to the EL1 data abort handler as well. This also makes it
    consistent with the EL0 data abort handler.
    
    Fixes: d50240a5f6ce ("arm64: mm: permit use of tagged pointers at EL0")
    Cc: <stable@vger.kernel.org> # 3.12.x-
    Reviewed-by: Dave Martin <Dave.Martin@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 47e49f2d1eda8c7a52a647239e41e625c1d8dfd8
Author: Julius Werner <jwerner@chromium.org>
Date:   Fri Jun 2 15:36:39 2017 -0700

    drivers: char: mem: Fix wraparound check to allow mappings up to the end
    
    [ Upstream commit 32829da54d9368103a2f03269a5120aa9ee4d5da ]
    
    A recent fix to /dev/mem prevents mappings from wrapping around the end
    of physical address space. However, the check was written in a way that
    also prevents a mapping reaching just up to the end of physical address
    space, which may be a valid use case (especially on 32-bit systems).
    This patch fixes it by checking the last mapped address (instead of the
    first address behind that) for overflow.
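    
    The arithmetic can be illustrated in plain C (a sketch of the described
    check, not the driver code itself):
    
        #include <stdint.h>
        #include <stdio.h>
    
        /* reject only true wraparound; a mapping that ends exactly at the
         * top of the address space is allowed */
        static int mapping_wraps(uint64_t offset, uint64_t size)
        {
                return size && offset + size - 1 < offset;
        }
    
        int main(void)
        {
                uint64_t top = UINT64_MAX;
    
                printf("%d\n", mapping_wraps(top - 4095, 4096)); /* 0: OK */
                printf("%d\n", mapping_wraps(top - 4095, 8192)); /* 1: wraps */
                return 0;
        }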
    
    Fixes: b299cde245 ("drivers: char: mem: Check for address space wraparound with mmap()")
    Cc: <stable@vger.kernel.org>
    Reported-by: Nico Huber <nico.h@gmx.de>
    Signed-off-by: Julius Werner <jwerner@chromium.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit bb3556c1d1554dbbc5a3b01619362e0261064b72
Author: Takashi Iwai <tiwai@suse.de>
Date:   Wed May 24 10:19:45 2017 +0200

    ASoC: Fix use-after-free at card unregistration
    
    [ Upstream commit 4efda5f2130da033aeedc5b3205569893b910de2 ]
    
    soc_cleanup_card_resources() calls snd_card_free() at the very end of its
    procedure.  This turned out to lead to a use-after-free:
    the PCM runtimes have already been removed via soc_remove_pcm_runtimes(),
    while they are dereferenced later in soc_pcm_free() called via
    snd_card_free().
    
    The fix is simple: just move the snd_card_free() call to the beginning
    of the whole procedure.  This also gives another benefit: it
    guarantees that all operations have been shut down before actually
    releasing the resources, which was racy until now.
    
    Reported-and-tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 88c41586db86544cd649d8959e68438c02e6faf6
Author: Takashi Iwai <tiwai@suse.de>
Date:   Fri Jun 2 17:26:56 2017 +0200

    ALSA: timer: Fix missing queue indices reset at SNDRV_TIMER_IOCTL_SELECT
    
    [ Upstream commit ba3021b2c79b2fa9114f92790a99deb27a65b728 ]
    
    snd_timer_user_tselect() reallocates the queue buffer dynamically, but
    it forgot to reset its indices.  Since the read may happen
    concurrently with ioctl and snd_timer_user_tselect() allocates the
    buffer via kmalloc(), this may lead to the leak of uninitialized
    kernel-space data, as spotted via KMSAN:
    
      BUG: KMSAN: use of unitialized memory in snd_timer_user_read+0x6c4/0xa10
      CPU: 0 PID: 1037 Comm: probe Not tainted 4.11.0-rc5+ #2739
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
      Call Trace:
       __dump_stack lib/dump_stack.c:16
       dump_stack+0x143/0x1b0 lib/dump_stack.c:52
       kmsan_report+0x12a/0x180 mm/kmsan/kmsan.c:1007
       kmsan_check_memory+0xc2/0x140 mm/kmsan/kmsan.c:1086
       copy_to_user ./arch/x86/include/asm/uaccess.h:725
       snd_timer_user_read+0x6c4/0xa10 sound/core/timer.c:2004
       do_loop_readv_writev fs/read_write.c:716
       __do_readv_writev+0x94c/0x1380 fs/read_write.c:864
       do_readv_writev fs/read_write.c:894
       vfs_readv fs/read_write.c:908
       do_readv+0x52a/0x5d0 fs/read_write.c:934
       SYSC_readv+0xb6/0xd0 fs/read_write.c:1021
       SyS_readv+0x87/0xb0 fs/read_write.c:1018
    
    This patch adds the missing reset of queue indices.  Together with the
    previous fix for the ioctl/read race, we cover the whole problem.
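    
    The missing reset is essentially the following (field names as in
    sound/core/timer.c; shown here only as a sketch):
    
        /* the queue was just reallocated, so any stored indices are stale */
        tu->qhead = tu->qtail = tu->qused = 0;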
    
    Reported-by: Alexander Potapenko <glider@google.com>
    Tested-by: Alexander Potapenko <glider@google.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 5d28ba6eecdeb3bbd0f78948ca3470918aad13fa
Author: Takashi Iwai <tiwai@suse.de>
Date:   Fri Jun 2 15:03:38 2017 +0200

    ALSA: timer: Fix race between read and ioctl
    
    [ Upstream commit d11662f4f798b50d8c8743f433842c3e40fe3378 ]
    
    The read from the ALSA timer device, the function snd_timer_user_tread(),
    may access uninitialized struct snd_timer_user fields when the read is
    performed concurrently with an ioctl such as
    snd_timer_user_tselect().  We have already fixed the races
    among ioctls via a mutex, but we seem to have forgotten the race
    between read and ioctl.
    
    This patch simply applies (more exactly extends the already applied
    range of) tu->ioctl_lock in snd_timer_user_tread() for closing the
    race window.
    
    Reported-by: Alexander Potapenko <glider@google.com>
    Tested-by: Alexander Potapenko <glider@google.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 29837be8e9222ab4e4bcd9a46eb57306b2f9d976
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Thu Apr 27 12:12:08 2017 +0300

    drm/vmwgfx: Handle vmalloc() failure in vmw_local_fifo_reserve()
    
    [ Upstream commit f0c62e9878024300319ba2438adc7b06c6b9c448 ]
    
    If vmalloc() fails then we need to do a bit of cleanup before returning.
    
    Cc: <stable@vger.kernel.org>
    Fixes: fb1d9738ca05 ("drm/vmwgfx: Add DRM driver for VMware Virtual GPU")
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Reviewed-by: Sinclair Yeh <syeh@vmware.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit d6f90404eaa0cbea9ffc9871c87db57356c428c7
Author: Jin Yao <yao.jin@linux.intel.com>
Date:   Thu May 25 18:09:07 2017 +0800

    perf/core: Drop kernel samples even though :u is specified
    
    [ Upstream commit cc1582c231ea041fbc68861dfaf957eaf902b829 ]
    
    When doing sampling, for example:
    
      perf record -e cycles:u ...
    
    On workloads that do a lot of kernel entry/exits we see kernel
    samples, even though :u is specified. This is due to sampling skid.
    
    This might be a security issue because it can leak kernel addresses even
    though kernel sampling support is disabled.
    
    The patch drops the kernel samples if exclude_kernel is specified.
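    
    A sketch of the idea (helper name assumed; the real patch adds an
    equivalent check in the overflow path):
    
        static bool sample_is_allowed(struct perf_event *event,
                                      struct pt_regs *regs)
        {
                /* :u was requested but skid landed us in the kernel */
                if (event->attr.exclude_kernel && !user_mode(regs))
                        return false;
                return true;
        }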
    
    For example, test on Haswell desktop:
    
      perf record -e cycles:u <mgen>
      perf report --stdio
    
    Before patch applied:
    
        99.77%  mgen     mgen              [.] buf_read
         0.20%  mgen     mgen              [.] rand_buf_init
         0.01%  mgen     [kernel.vmlinux]  [k] apic_timer_interrupt
         0.00%  mgen     mgen              [.] last_free_elem
         0.00%  mgen     libc-2.23.so      [.] __random_r
         0.00%  mgen     libc-2.23.so      [.] _int_malloc
         0.00%  mgen     mgen              [.] rand_array_init
         0.00%  mgen     [kernel.vmlinux]  [k] page_fault
         0.00%  mgen     libc-2.23.so      [.] __random
         0.00%  mgen     libc-2.23.so      [.] __strcasestr
         0.00%  mgen     ld-2.23.so        [.] strcmp
         0.00%  mgen     ld-2.23.so        [.] _dl_start
         0.00%  mgen     libc-2.23.so      [.] sched_setaffinity@@GLIBC_2.3.4
         0.00%  mgen     ld-2.23.so        [.] _start
    
    We can see kernel symbols apic_timer_interrupt and page_fault.
    
    After patch applied:
    
        99.79%  mgen     mgen           [.] buf_read
         0.19%  mgen     mgen           [.] rand_buf_init
         0.00%  mgen     libc-2.23.so   [.] __random_r
         0.00%  mgen     mgen           [.] rand_array_init
         0.00%  mgen     mgen           [.] last_free_elem
         0.00%  mgen     libc-2.23.so   [.] vfprintf
         0.00%  mgen     libc-2.23.so   [.] rand
         0.00%  mgen     libc-2.23.so   [.] __random
         0.00%  mgen     libc-2.23.so   [.] _int_malloc
         0.00%  mgen     libc-2.23.so   [.] _IO_doallocbuf
         0.00%  mgen     ld-2.23.so     [.] do_lookup_x
         0.00%  mgen     ld-2.23.so     [.] open_verify.constprop.7
         0.00%  mgen     ld-2.23.so     [.] _dl_important_hwcaps
         0.00%  mgen     libc-2.23.so   [.] sched_setaffinity@@GLIBC_2.3.4
         0.00%  mgen     ld-2.23.so     [.] _start
    
    There are only userspace symbols.
    
    Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: <stable@vger.kernel.org>
    Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
    Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
    Cc: Jiri Olsa <jolsa@redhat.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Namhyung Kim <namhyung@kernel.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Vince Weaver <vincent.weaver@maine.edu>
    Cc: acme@kernel.org
    Cc: jolsa@kernel.org
    Cc: kan.liang@intel.com
    Cc: mark.rutland@arm.com
    Cc: will.deacon@arm.com
    Cc: yao.jin@intel.com
    Link: http://lkml.kernel.org/r/1495706947-3744-1-git-send-email-yao.jin@linux.intel.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit f44556278b7971b13a396e7f35656b691c37682c
Author: Michael Bringmann <mwb@linux.vnet.ibm.com>
Date:   Mon May 22 15:44:37 2017 -0500

    powerpc/hotplug-mem: Fix missing endian conversion of aa_index
    
    [ Upstream commit dc421b200f91930c9c6a9586810ff8c232cf10fc ]
    
    When adding or removing memory, the aa_index (affinity value) for the
    memblock must also be converted to match the endianness of the rest
    of the 'ibm,dynamic-memory' property.  Otherwise, subsequent retrieval
    of the attribute will likely lead to non-existent nodes, followed by
    using the default node in the code inappropriately.
    
    Fixes: 5f97b2a0d176 ("powerpc/pseries: Implement memory hotplug add in the kernel")
    Cc: stable@vger.kernel.org # v4.1+
    Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 7ee9689e6b68dd129ab05afb3baf28973114c6db
Author: Michael Ellerman <mpe@ellerman.id.au>
Date:   Tue Jun 6 20:23:57 2017 +1000

    powerpc/numa: Fix percpu allocations to be NUMA aware
    
    [ Upstream commit ba4a648f12f4cd0a8003dd229b6ca8a53348ee4b ]
    
    In commit 8c272261194d ("powerpc/numa: Enable USE_PERCPU_NUMA_NODE_ID"), we
    switched to the generic implementation of cpu_to_node(), which uses a percpu
    variable to hold the NUMA node for each CPU.
    
    Unfortunately we neglected to notice that we use cpu_to_node() in the allocation
    of our percpu areas, leading to a chicken and egg problem. In practice what
    happens is when we are setting up the percpu areas, cpu_to_node() reports that
    all CPUs are on node 0, so we allocate all percpu areas on node 0.
    
    This is visible in the dmesg output, as all pcpu allocs being in group 0:
    
      pcpu-alloc: [0] 00 01 02 03 [0] 04 05 06 07
      pcpu-alloc: [0] 08 09 10 11 [0] 12 13 14 15
      pcpu-alloc: [0] 16 17 18 19 [0] 20 21 22 23
      pcpu-alloc: [0] 24 25 26 27 [0] 28 29 30 31
      pcpu-alloc: [0] 32 33 34 35 [0] 36 37 38 39
      pcpu-alloc: [0] 40 41 42 43 [0] 44 45 46 47
    
    To fix it we need an early_cpu_to_node() which can run prior to percpu being
    setup. We already have the numa_cpu_lookup_table we can use, so just plumb it
    in. With the patch dmesg output shows two groups, 0 and 1:
    
      pcpu-alloc: [0] 00 01 02 03 [0] 04 05 06 07
      pcpu-alloc: [0] 08 09 10 11 [0] 12 13 14 15
      pcpu-alloc: [0] 16 17 18 19 [0] 20 21 22 23
      pcpu-alloc: [1] 24 25 26 27 [1] 28 29 30 31
      pcpu-alloc: [1] 32 33 34 35 [1] 36 37 38 39
      pcpu-alloc: [1] 40 41 42 43 [1] 44 45 46 47
    
    We can also check the data_offset in the paca of various CPUs, with the fix we
    see:
    
      CPU 0:  data_offset = 0x0ffe8b0000
      CPU 24: data_offset = 0x1ffe5b0000
    
    And we can see from dmesg that CPU 24 has an allocation on node 1:
    
      node   0: [mem 0x0000000000000000-0x0000000fffffffff]
      node   1: [mem 0x0000001000000000-0x0000001fffffffff]
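    
    A sketch of the early helper described above (illustrative; it simply
    consults the firmware-populated table before percpu is initialized):
    
        static int early_cpu_to_node(int cpu)
        {
                int nid = numa_cpu_lookup_table[cpu];
    
                /* the table may not have an entry for this cpu yet */
                if (nid < 0 || nid >= MAX_NUMNODES)
                        nid = first_online_node;
    
                return nid;
        }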
    
    Cc: stable@vger.kernel.org # v3.16+
    Fixes: 8c272261194d ("powerpc/numa: Enable USE_PERCPU_NUMA_NODE_ID")
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit eecbbd835e2e23aec0eef051da9d84d7d237e086
Author: Johannes Thumshirn <jthumshirn@suse.de>
Date:   Tue May 23 16:50:47 2017 +0200

    scsi: qla2xxx: don't disable a not previously enabled PCI device
    
    [ Upstream commit ddff7ed45edce4a4c92949d3c61cd25d229c4a14 ]
    
    When pci_enable_device() or pci_enable_device_mem() fail in
    qla2x00_probe_one() we bail out but do a call to
    pci_disable_device(). This causes the dev_WARN_ON() in
    pci_disable_device() to trigger, as the device wasn't enabled
    previously.
    
    So instead of taking the 'probe_out' error path we can directly return
    *iff* one of the pci_enable_device() calls fails.
    
    Additionally rename the 'probe_out' goto label's name to the more
    descriptive 'disable_device'.
    
    Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
    Fixes: e315cd28b9ef ("[SCSI] qla2xxx: Code changes for qla data structure refactoring")
    Cc: <stable@vger.kernel.org>
    Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
    Reviewed-by: Giridhar Malavali <giridhar.malavali@cavium.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 4a213a0fe0b3b35dec5ed7bb69224b58e40dc5cd
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Mon Jun 5 19:17:18 2017 +0100

    KVM: arm/arm64: Handle possible NULL stage2 pud when ageing pages
    
    [ Upstream commit d6dbdd3c8558cad3b6d74cc357b408622d122331 ]
    
    Under memory pressure, we start ageing pages, which amounts to parsing
    the page tables. Since we don't want to allocate any extra level,
    we pass NULL for our private allocation cache, which means that
    stage2_get_pud() is allowed to fail. This results in the following
    splat:
    
    [ 1520.409577] Unable to handle kernel NULL pointer dereference at virtual address 00000008
    [ 1520.417741] pgd = ffff810f52fef000
    [ 1520.421201] [00000008] *pgd=0000010f636c5003, *pud=0000010f56f48003, *pmd=0000000000000000
    [ 1520.429546] Internal error: Oops: 96000006 [#1] PREEMPT SMP
    [ 1520.435156] Modules linked in:
    [ 1520.438246] CPU: 15 PID: 53550 Comm: qemu-system-aar Tainted: G        W       4.12.0-rc4-00027-g1885c397eaec #7205
    [ 1520.448705] Hardware name: FOXCONN R2-1221R-A4/C2U4N_MB, BIOS G31FB12A 10/26/2016
    [ 1520.463726] task: ffff800ac5fb4e00 task.stack: ffff800ce04e0000
    [ 1520.469666] PC is at stage2_get_pmd+0x34/0x110
    [ 1520.474119] LR is at kvm_age_hva_handler+0x44/0xf0
    [ 1520.478917] pc : [<ffff0000080b137c>] lr : [<ffff0000080b149c>] pstate: 40000145
    [ 1520.486325] sp : ffff800ce04e33d0
    [ 1520.489644] x29: ffff800ce04e33d0 x28: 0000000ffff40064
    [ 1520.494967] x27: 0000ffff27e00000 x26: 0000000000000000
    [ 1520.500289] x25: ffff81051ba65008 x24: 0000ffff40065000
    [ 1520.505618] x23: 0000ffff40064000 x22: 0000000000000000
    [ 1520.510947] x21: ffff810f52b20000 x20: 0000000000000000
    [ 1520.516274] x19: 0000000058264000 x18: 0000000000000000
    [ 1520.521603] x17: 0000ffffa6fe7438 x16: ffff000008278b70
    [ 1520.526940] x15: 000028ccd8000000 x14: 0000000000000008
    [ 1520.532264] x13: ffff7e0018298000 x12: 0000000000000002
    [ 1520.537582] x11: ffff000009241b93 x10: 0000000000000940
    [ 1520.542908] x9 : ffff0000092ef800 x8 : 0000000000000200
    [ 1520.548229] x7 : ffff800ce04e36a8 x6 : 0000000000000000
    [ 1520.553552] x5 : 0000000000000001 x4 : 0000000000000000
    [ 1520.558873] x3 : 0000000000000000 x2 : 0000000000000008
    [ 1520.571696] x1 : ffff000008fd5000 x0 : ffff0000080b149c
    [ 1520.577039] Process qemu-system-aar (pid: 53550, stack limit = 0xffff800ce04e0000)
    [...]
    [ 1521.510735] [<ffff0000080b137c>] stage2_get_pmd+0x34/0x110
    [ 1521.516221] [<ffff0000080b149c>] kvm_age_hva_handler+0x44/0xf0
    [ 1521.522054] [<ffff0000080b0610>] handle_hva_to_gpa+0xb8/0xe8
    [ 1521.527716] [<ffff0000080b3434>] kvm_age_hva+0x44/0xf0
    [ 1521.532854] [<ffff0000080a58b0>] kvm_mmu_notifier_clear_flush_young+0x70/0xc0
    [ 1521.539992] [<ffff000008238378>] __mmu_notifier_clear_flush_young+0x88/0xd0
    [ 1521.546958] [<ffff00000821eca0>] page_referenced_one+0xf0/0x188
    [ 1521.552881] [<ffff00000821f36c>] rmap_walk_anon+0xec/0x250
    [ 1521.558370] [<ffff000008220f78>] rmap_walk+0x78/0xa0
    [ 1521.563337] [<ffff000008221104>] page_referenced+0x164/0x180
    [ 1521.569002] [<ffff0000081f1af0>] shrink_active_list+0x178/0x3b8
    [ 1521.574922] [<ffff0000081f2058>] shrink_node_memcg+0x328/0x600
    [ 1521.580758] [<ffff0000081f23f4>] shrink_node+0xc4/0x328
    [ 1521.585986] [<ffff0000081f2718>] do_try_to_free_pages+0xc0/0x340
    [ 1521.592000] [<ffff0000081f2a64>] try_to_free_pages+0xcc/0x240
    [...]
    
    The trivial fix is to handle this NULL pud value early, rather than
    dereferencing it blindly.
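    
    A sketch of that early exit in stage2_get_pmd() (illustrative fragment,
    not the exact hunk):
    
        pud = stage2_get_pud(kvm, cache, addr);
        if (!pud)
                return NULL;    /* nothing mapped and no cache to allocate from */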
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Reviewed-by: Christoffer Dall <cdall@linaro.org>
    Signed-off-by: Christoffer Dall <cdall@linaro.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 951269f95603208493aa6749c4554add9ab37372
Author: Jeff Mahoney <jeffm@suse.com>
Date:   Wed May 17 09:49:37 2017 -0400

    btrfs: fix memory leak in update_space_info failure path
    
    [ Upstream commit 896533a7da929136d0432713f02a3edffece2826 ]
    
    If we fail to add the space_info kobject, we'll leak the memory
    for the percpu counter.
    
    Fixes: 6ab0a2029c (btrfs: publish allocation data in sysfs)
    Cc: <stable@vger.kernel.org> # v3.14+
    Signed-off-by: Jeff Mahoney <jeffm@suse.com>
    Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit d42014c8d4ce52834418402295283380b1e7c55f
Author: David Sterba <dsterba@suse.com>
Date:   Fri May 12 01:03:52 2017 +0200

    btrfs: use correct types for page indices in btrfs_page_exists_in_range
    
    [ Upstream commit cc2b702c52094b637a351d7491ac5200331d0445 ]
    
    Variables start_idx and end_idx are supposed to hold a page index
    derived from the file offsets. The int type is not the right one though;
    offsets larger than 1 << 44 (which is 16TiB) will silently get their high
    bits trimmed off.
    
    What can go wrong, if start is below the boundary and end gets trimmed:
    - if there's a page after start, we'll find it (radix_tree_gang_lookup_slot)
    - the final check "if (page->index <= end_idx)" will unexpectedly fail
    
    The function will return false, ie. "there's no page in the range",
    although there is at least one.
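    
    The truncation is easy to demonstrate in plain C (a userspace sketch,
    assuming 4KiB pages and a 64-bit unsigned long):
    
        #include <stdint.h>
        #include <stdio.h>
    
        int main(void)
        {
                uint64_t end = (1ULL << 44) + 8192;     /* just past 16TiB */
                int bad_idx = end >> 12;                /* high bits lost */
                unsigned long good_idx = end >> 12;     /* pgoff_t-sized */
    
                printf("int: %d, pgoff_t-like: %lu\n", bad_idx, good_idx);
                return 0;
        }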
    
    btrfs_page_exists_in_range is used to prevent races in:
    
    * in hole punching, where we make sure there are not pages in the
      truncated range, otherwise we'll wait for them to finish and redo
      truncation, but we're going to replace the pages with holes anyway so
      the only problem is the intermediate state
    
    * lock_extent_direct: we want to make sure there are no pages before we
      lock and start DIO, to prevent stale data reads
    
    For a practical occurrence of the bug, there are several constraints.  The
    file must be quite large, the affected range must cross the 16TiB
    boundary and the internal state of the file pages and pending operations
    must match.  Also, we must not have started any ordered data in the
    range, otherwise we don't even reach the buggy function check.
    
    DIO locking tries hard in several places to avoid deadlocks with
    buffered IO and avoids waiting for ranges. The worst consequence seems
    to be stale data read.
    
    CC: Liu Bo <bo.li.liu@oracle.com>
    CC: stable@vger.kernel.org      # 3.16+
    Fixes: fc4adbff823f7 ("btrfs: Drop EXTENT_UPTODATE check in hole punching and direct locking")
    Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit cc558c203ce18331fc4b3c2ca4b94b11546070d1
Author: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Date:   Tue Jun 6 11:43:41 2017 +0200

    cxl: Fix error path on bad ioctl
    
    [ Upstream commit cec422c11caeeccae709e9942058b6b644ce434c ]
    
    Fix error path if we can't copy user structure on CXL_IOCTL_START_WORK
    ioctl. We shouldn't unlock the context status mutex as it was not
    locked (yet).
    
    Fixes: 0712dc7e73e5 ("cxl: Fix issues when unmapping contexts")
    Cc: stable@vger.kernel.org # v3.19+
    Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
    Reviewed-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
    Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit c58e11d1da359ae5c4a277af18beacfb2e0bffa5
Author: Al Viro <viro@zeniv.linux.org.uk>
Date:   Thu Jun 8 21:15:45 2017 -0400

    ufs: set correct ->s_maxsize
    
    [ Upstream commit 6b0d144fa758869bdd652c50aa41aaf601232550 ]
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 7ba100d53ebce07b612520d06f7432be8a5126dc
Author: Al Viro <viro@zeniv.linux.org.uk>
Date:   Thu Jun 8 18:15:18 2017 -0400

    fix ufs_isblockset()
    
    [ Upstream commit 414cf7186dbec29bd946c138d6b5c09da5955a08 ]
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 7f8053503ed2887e0b699da07eba36f8dcd4e714
Author: Tejun Heo <tj@kernel.org>
Date:   Wed May 24 12:03:48 2017 -0400

    cpuset: consider dying css as offline
    
    [ Upstream commit 41c25707d21716826e3c1f60967f5550610ec1c9 ]
    
    In most cases, a cgroup controller doesn't care about the lifetimes of
    cgroups.  For the controller, a css becomes online when ->css_online()
    is called on it and offline when ->css_offline() is called.
    
    However, cpuset is special in that the user interface it exposes cares
    whether certain cgroups exist or not.  Combined with the RCU delay
    between cgroup removal and css offlining, this can lead to user
    visible behavior oddities where operations which should succeed after
    cgroup removals fail for some time period.  The effects of cgroup
    removals are delayed when seen from userland.
    
    This patch adds css_is_dying() which tests whether offline is pending
    and updates is_cpuset_online() so that the function returns false also
    while offline is pending.  This gets rid of the userland visible
    delays.
    
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Reported-by: Daniel Jordan <daniel.m.jordan@oracle.com>
    Link: http://lkml.kernel.org/r/327ca1f5-7957-fbb9-9e5f-9ba149d40ba2@oracle.com
    Cc: stable@vger.kernel.org
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 51037ec2ee8e57b520c8c3787794cd3568abb587
Author: Matt Ranostay <matt.ranostay@konsulko.com>
Date:   Thu Apr 27 00:52:32 2017 -0700

    iio: proximity: as3935: fix AS3935_INT mask
    
    [ Upstream commit 275292d3a3d62670b1b13484707b74e5239b4bb0 ]
    
    The AS3935 interrupt mask has been incorrect, so valid lightning events
    would never trigger a buffer event.  Also, the noise interrupt should be
    BIT(0).
    
    Fixes: 24ddb0e4bba4 ("iio: Add AS3935 lightning sensor support")
    CC: stable@vger.kernel.org
    Signed-off-by: Matt Ranostay <matt.ranostay@konsulko.com>
    Signed-off-by: Jonathan Cameron <jic23@kernel.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 60e9d774dc8e75f8fd471c118011a86e8f754f72
Author: Oleg Drokin <green@linuxhacker.ru>
Date:   Fri May 26 23:40:33 2017 -0400

    staging/lustre/lov: remove set_fs() call from lov_getstripe()
    
    [ Upstream commit 0a33252e060e97ed3fbdcec9517672f1e91aaef3 ]
    
    lov_getstripe() calls set_fs(KERNEL_DS) so that it can handle a struct
    lov_user_md pointer from user- or kernel-space.  This changes the
    behavior of copy_from_user() on SPARC and may result in a misaligned
    access exception which in turn oopses the kernel.  In fact
    lov_getstripe() is never called with a kernel-space pointer for the
    relevant argument, so changing the address limits is unnecessary, and we
    therefore remove the calls to save, set, and restore the address limits.
    
    Signed-off-by: John L. Hammond <john.hammond@intel.com>
    Reviewed-on: http://review.whamcloud.com/6150
    Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-3221
    Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
    Reviewed-by: Li Wei <wei.g.li@intel.com>
    Signed-off-by: Oleg Drokin <green@linuxhacker.ru>
    Cc: stable <stable@vger.kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 6f4f7e81b18e465efa1c6a7b3973da7bffe332d6
Author: Michael Thalmeier <michael.thalmeier@hale.at>
Date:   Thu May 18 16:14:14 2017 +0200

    usb: chipidea: debug: check before accessing ci_role
    
    [ Upstream commit 0340ff83cd4475261e7474033a381bc125b45244 ]
    
    ci_role BUGs when the role is >= CI_ROLE_END.
    
    Cc: stable@vger.kernel.org  #v3.10+
    Signed-off-by: Michael Thalmeier <michael.thalmeier@hale.at>
    Signed-off-by: Peter Chen <peter.chen@nxp.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 9738b3df00b190fbac813e5a5b76dabe439c84e3
Author: Jisheng Zhang <jszhang@marvell.com>
Date:   Mon Apr 24 12:35:51 2017 +0000

    usb: chipidea: udc: fix NULL pointer dereference if udc_start failed
    
    [ Upstream commit aa1f058d7d9244423b8c5a75b9484b1115df7f02 ]
    
    Fix the NULL pointer dereference below.  We set ci->roles[CI_ROLE_GADGET]
    too early in ci_hdrc_gadget_init(); if udc_start() fails for some reason,
    the ci->roles[CI_ROLE_GADGET] check in ci_hdrc_gadget_destroy() can't
    protect us.
    
    We fix this issue by only setting ci->roles[CI_ROLE_GADGET] if
    udc_start() succeeds.
    
    [    1.398550] Unable to handle kernel NULL pointer dereference at
    virtual address 00000000
    ...
    [    1.448600] PC is at dma_pool_free+0xb8/0xf0
    [    1.453012] LR is at dma_pool_free+0x28/0xf0
    [    2.113369] [<ffffff80081817d8>] dma_pool_free+0xb8/0xf0
    [    2.118857] [<ffffff800841209c>] destroy_eps+0x4c/0x68
    [    2.124165] [<ffffff8008413770>] ci_hdrc_gadget_destroy+0x28/0x50
    [    2.130461] [<ffffff800840fa30>] ci_hdrc_probe+0x588/0x7e8
    [    2.136129] [<ffffff8008380fb8>] platform_drv_probe+0x50/0xb8
    [    2.142066] [<ffffff800837f494>] driver_probe_device+0x1fc/0x2a8
    [    2.148270] [<ffffff800837f68c>] __device_attach_driver+0x9c/0xf8
    [    2.154563] [<ffffff800837d570>] bus_for_each_drv+0x58/0x98
    [    2.160317] [<ffffff800837f174>] __device_attach+0xc4/0x138
    [    2.166072] [<ffffff800837f738>] device_initial_probe+0x10/0x18
    [    2.172185] [<ffffff800837e58c>] bus_probe_device+0x94/0xa0
    [    2.177940] [<ffffff800837c560>] device_add+0x3f0/0x560
    [    2.183337] [<ffffff8008380d20>] platform_device_add+0x180/0x240
    [    2.189541] [<ffffff800840f0e8>] ci_hdrc_add_device+0x440/0x4f8
    [    2.195654] [<ffffff8008414194>] ci_hdrc_usb2_probe+0x13c/0x2d8
    [    2.201769] [<ffffff8008380fb8>] platform_drv_probe+0x50/0xb8
    [    2.207705] [<ffffff800837f494>] driver_probe_device+0x1fc/0x2a8
    [    2.213910] [<ffffff800837f5ec>] __driver_attach+0xac/0xb0
    [    2.219575] [<ffffff800837d4b0>] bus_for_each_dev+0x60/0xa0
    [    2.225329] [<ffffff800837ec80>] driver_attach+0x20/0x28
    [    2.230816] [<ffffff800837e880>] bus_add_driver+0x1d0/0x238
    [    2.236571] [<ffffff800837fdb0>] driver_register+0x60/0xf8
    [    2.242237] [<ffffff8008380ef4>] __platform_driver_register+0x44/0x50
    [    2.248891] [<ffffff80086fd440>] ci_hdrc_usb2_driver_init+0x18/0x20
    [    2.255365] [<ffffff8008082950>] do_one_initcall+0x38/0x128
    [    2.261121] [<ffffff80086e0d00>] kernel_init_freeable+0x1ac/0x250
    [    2.267414] [<ffffff800852f0b8>] kernel_init+0x10/0x100
    [    2.272810] [<ffffff8008082680>] ret_from_fork+0x10/0x50
    
    Cc: stable <stable@vger.kernel.org>
    Fixes: 3f124d233e97 ("usb: chipidea: add role init and destroy APIs")
    Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
    Signed-off-by: Peter Chen <peter.chen@nxp.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit db87e41d61aa8fa80c11147ea93fa69f634ba364
Author: Thinh Nguyen <Thinh.Nguyen@synopsys.com>
Date:   Thu May 11 17:26:48 2017 -0700

    usb: gadget: f_mass_storage: Serialize wake and sleep execution
    
    [ Upstream commit dc9217b69dd6089dcfeb86ed4b3c671504326087 ]
    
    f_mass_storage has a memory barrier issue with the sleep and wake
    functions that can cause a deadlock. This results in intermittent hangs
    during MSC file transfer. The host will reset the device after receiving
    no response to resume the transfer. This issue is seen when dwc3 is
    processing 2 transfer-in-progress events at the same time, invoking
    completion handlers for CSW and CBW. Also this issue occurs depending on
    the system timing and latency.
    
    To increase the chance to hit this issue, you can force dwc3 driver to
    wait and process those 2 events at once by adding a small delay (~100us)
    in dwc3_check_event_buf() whenever the request is for CSW and read the
    event count again. Avoid debugging with printk and ftrace as extra
    delays and memory barrier will mask this issue.
    
    Scenario which can lead to failure:
    -----------------------------------
    1) The main thread sleeps and waits for the next command in
       get_next_command().
    2) bulk_in_complete() wakes up main thread for CSW.
    3) bulk_out_complete() tries to wake up the running main thread for CBW.
    4) thread_wakeup_needed is not loaded with correct value in
       sleep_thread().
    5) Main thread goes to sleep again.
    
    The pattern is shown below. Note the 2 critical variables.
     * common->thread_wakeup_needed
     * bh->state
    
            CPU 0 (sleep_thread)            CPU 1 (wakeup_thread)
            ==============================  ===============================
    
                                            bh->state = BH_STATE_FULL;
                                            smp_wmb();
            thread_wakeup_needed = 0;       thread_wakeup_needed = 1;
            smp_rmb();
            if (bh->state != BH_STATE_FULL)
                    sleep again ...
    
    As pointed out by Alan Stern, this is an R-pattern issue. The issue can
    be seen when there are two wakeups in quick succession. The
    thread_wakeup_needed can be overwritten in sleep_thread, and the read of
    bh->state may be reordered before the write to thread_wakeup_needed.
    
    This patch applies a full memory barrier, smp_mb(), in both sleep_thread()
    and wakeup_thread() to ensure the order in which thread_wakeup_needed
    and bh->state are written and loaded.
    
    However, a better solution in the future would be to use the wait_queue
    method, which takes care of managing the memory barriers between waker
    and waiter.
    
    Cc: <stable@vger.kernel.org>
    Acked-by: Alan Stern <stern@rowland.harvard.edu>
    Signed-off-by: Thinh Nguyen <thinhn@synopsys.com>
    Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 9262957933640067921a5e0858a804d38b62a604
Author: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Date:   Sun May 21 22:36:23 2017 -0400

    ext4: keep existing extra fields when inode expands
    
    [ Upstream commit 887a9730614727c4fff7cb756711b190593fc1df ]
    
    ext4_expand_extra_isize() should clear only the space between the old and
    new size.
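
    A hedged sketch of the idea (not necessarily the literal diff): start the
    clear at the end of the extra space that is already in use and wipe only
    the newly added bytes.

        memset((void *)raw_inode + EXT4_GOOD_OLD_INODE_SIZE +
               EXT4_I(inode)->i_extra_isize, 0,
               new_extra_isize - EXT4_I(inode)->i_extra_isize);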
    
    Fixes: 6dd4ee7cab7e # v2.6.23
    Cc: stable@vger.kernel.org
    Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 4d1adc2ada19b8c059eb4f18cb107cb3104ae49d
Author: Jan Kara <jack@suse.cz>
Date:   Sun May 21 22:33:23 2017 -0400

    ext4: fix SEEK_HOLE
    
    [ Upstream commit 7d95eddf313c88b24f99d4ca9c2411a4b82fef33 ]
    
    Currently, the SEEK_HOLE implementation in ext4 may both report a hole at
    an offset even though that offset already has data, and skip some holes
    during a search for the next hole. The first problem is demonstrated by:
    
    xfs_io -c "falloc 0 256k" -c "pwrite 0 56k" -c "seek -h 0" file
    wrote 57344/57344 bytes at offset 0
    56 KiB, 14 ops; 0.0000 sec (2.054 GiB/sec and 538461.5385 ops/sec)
    Whence  Result
    HOLE    0
    
    Where we can see that SEEK_HOLE wrongly returned offset 0 as containing
    a hole although we have written data there. The second problem can be
    demonstrated by:
    
    xfs_io -c "falloc 0 256k" -c "pwrite 0 56k" -c "pwrite 128k 8k"
           -c "seek -h 0" file
    
    wrote 57344/57344 bytes at offset 0
    56 KiB, 14 ops; 0.0000 sec (1.978 GiB/sec and 518518.5185 ops/sec)
    wrote 8192/8192 bytes at offset 131072
    8 KiB, 2 ops; 0.0000 sec (2 GiB/sec and 500000.0000 ops/sec)
    Whence  Result
    HOLE    139264
    
    Where we can see that the hole at offsets 56k..128k has been ignored by the
    SEEK_HOLE call.
    
    The underlying problem is in the ext4_find_unwritten_pgoff() which is
    just buggy. In some cases it fails to update returned offset when it
    finds a hole (when no pages are found or when the first found page has
    higher index than expected), in some cases conditions for detecting hole
    are just missing (we fail to detect a situation where indices of
    returned pages are not contiguous).
    
    Fix ext4_find_unwritten_pgoff() to properly detect non-contiguous page
    indices and to handle all cases where we get fewer pages than expected in
    one place, handling them properly there.
    
    CC: stable@vger.kernel.org
    Fixes: c8c0df241cc2719b1262e627f999638411934f60
    CC: Zheng Liu <wenqing.lz@taobao.com>
    Signed-off-by: Jan Kara <jack@suse.cz>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 8406f302e98593884ffd3a4d89dbe1273e39e511
Author: Wanpeng Li <wanpeng.li@hotmail.com>
Date:   Thu Jun 8 20:13:40 2017 -0700

    KVM: async_pf: avoid async pf injection when in guest mode
    
    [ Upstream commit 9bc1f09f6fa76fdf31eb7d6a4a4df43574725f93 ]
    
     INFO: task gnome-terminal-:1734 blocked for more than 120 seconds.
           Not tainted 4.12.0-rc4+ #8
     "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
     gnome-terminal- D    0  1734   1015 0x00000000
     Call Trace:
      __schedule+0x3cd/0xb30
      schedule+0x40/0x90
      kvm_async_pf_task_wait+0x1cc/0x270
      ? __vfs_read+0x37/0x150
      ? prepare_to_swait+0x22/0x70
      do_async_page_fault+0x77/0xb0
      ? do_async_page_fault+0x77/0xb0
      async_page_fault+0x28/0x30
    
    This is triggered by running both win7 and win2016 on L1 KVM simultaneously
    and then putting memory pressure on L1; I can observe this hang on L1 when
    at least ~70% of the swap area is occupied on L0.

    This is due to an async pf being injected into L2 that should have been
    injected into L1: the L2 guest starts receiving page faults with a bogus
    %cr2 (actually the apf token from the host), and the L1 guest starts
    accumulating tasks stuck in D state in kvm_async_pf_task_wait() because
    the PAGE_READY async_pfs never arrive.

    This patch fixes the hang by only doing async pf while executing the L1 guest.
    
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Radim Krčmář <rkrcmar@redhat.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit fdb67b2a3a1621f30352e250ced6a859f781b708
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Tue Jun 6 19:08:35 2017 +0100

    arm: KVM: Allow unaligned accesses at HYP
    
    [ Upstream commit 33b5c38852b29736f3b472dd095c9a18ec22746f ]
    
    We currently have the HSCTLR.A bit set, trapping unaligned accesses
    at HYP, but we're not really prepared to deal with it.
    
    Since the rest of the kernel is pretty happy about that, let's follow
    its example and set HSCTLR.A to zero. Modern CPUs don't really care.
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Christoffer Dall <cdall@linaro.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 1e8dabb6aa14294819156b5bd2b08d95e2059891
Author: Wanpeng Li <wanpeng.li@hotmail.com>
Date:   Thu Jun 8 01:22:07 2017 -0700

    KVM: cpuid: Fix read/write out-of-bounds vulnerability in cpuid emulation
    
    [ Upstream commit a3641631d14571242eec0d30c9faa786cbf52d44 ]
    
    If "i" is the last element in the vcpu->arch.cpuid_entries[] array, the
    flaw can potentially be exploited; it results in an out-of-bounds read
    and write.  Luckily, the effect is small:
    
            /* when no next entry is found, the current entry[i] is reselected */
            for (j = i + 1; ; j = (j + 1) % nent) {
                    struct kvm_cpuid_entry2 *ej = &vcpu->arch.cpuid_entries[j];
                    if (ej->function == e->function) {
    
    It reads ej->maxphyaddr, which is user controlled.  However...
    
                            ej->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;
    
    After cpuid_entries there is
    
            int maxphyaddr;
            struct x86_emulate_ctxt emulate_ctxt;  /* 16-byte aligned */
    
    So we have:
    
    - cpuid_entries at offset 1B50 (6992)
    - maxphyaddr at offset 27D0 (6992 + 3200 = 10192)
    - padding at 27D4...27DF
    - emulate_ctxt at 27E0
    
    And it writes in the padding.  Pfew, writing the ops field of emulate_ctxt
    would have been much worse.
    
    This patch fixes it by modding the index to avoid the out-of-bounds
    access. In the worst case, i == j and ej->function == e->function, so
    the loop can bail out.
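
    A sketch of the modded-index loop (names follow the snippet quoted above;
    this is not necessarily the exact upstream diff):

        struct kvm_cpuid_entry2 *ej;
        int j = i;

        /* wrap inside the array instead of reading entries[nent] */
        do {
                j = (j + 1) % nent;
                ej = &vcpu->arch.cpuid_entries[j];
        } while (ej->function != e->function);

        ej->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;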
    
    Reported-by: Moguofang <moguofang@huawei.com>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Radim Krčmář <rkrcmar@redhat.com>
    Cc: Guofang Mo <moguofang@huawei.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 702eb8d270f2edd8b18c2afbb360bc2bf9eca33e
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Wed Apr 26 16:56:26 2017 +0200

    kvm: async_pf: fix rcu_irq_enter() with irqs enabled
    
    [ Upstream commit bbaf0e2b1c1b4f88abd6ef49576f0efb1734eae5 ]
    
    native_safe_halt enables interrupts, and you just shouldn't
    call rcu_irq_enter() with interrupts enabled.  Reorder the
    call with the following local_irq_disable() to respect the
    invariant.
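
    A sketch of the corrected ordering, per the description above (surrounding
    code in kvm_async_pf_task_wait() elided):

        native_safe_halt();     /* re-enables interrupts while halted */
        local_irq_disable();    /* disable them again ...              */
        rcu_irq_enter();        /* ... before telling RCU we are back  */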
    
    Reported-by: Ross Zwisler <ross.zwisler@linux.intel.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Tested-by: Wanpeng Li <wanpeng.li@hotmail.com>
    Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 4b1bf4b008ca1495bbd12b0a3f8a3319cfddad75
Author: J. Bruce Fields <bfields@redhat.com>
Date:   Tue May 23 12:24:40 2017 -0400

    nfsd4: fix null dereference on replay
    
    [ Upstream commit 9a307403d374b993061f5992a6e260c944920d0b ]
    
    If we receive a compound such that:
    
            - the sessionid, slot, and sequence number in the SEQUENCE op
              match a cached successful reply with N ops, and
            - the Nth operation of the compound is a PUTFH, PUTPUBFH,
              PUTROOTFH, or RESTOREFH,
    
    then nfsd4_sequence will return 0 and set cstate->status to
    nfserr_replay_cache.  The current filehandle will not be set.  This will
    cause us to call check_nfsd_access with first argument NULL.
    
    To nfsd4_compound it looks like we just successfully executed an
    operation that set a filehandle, but the current filehandle is not set.
    
    Fix this by moving the nfserr_replay_cache check earlier.  There was never
    any reason to have it after the encode_op label, since the only case where
    we hit that is when opdesc->op_func sets it.
    
    Note that there are two ways we could hit this case:
    
            - a client is resending a previously sent compound that ended
              with one of the four PUTFH-like operations, or
            - a client is sending a *new* compound that (incorrectly) shares
              sessionid, slot, and sequence number with a previously sent
              compound, and the length of the previously sent compound
              happens to match the position of a PUTFH-like operation in the
              new compound.
    
    The second is obviously incorrect client behavior.  The first is also
    very strange--the only purpose of a PUTFH-like operation is to set the
    current filehandle to be used by the following operation, so there's no
    point in having it as the last in a compound.
    
    So it's likely this requires a buggy or malicious client to reproduce.
    
    Reported-by: Scott Mayhew <smayhew@redhat.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 026ed759f4f4f30326e4c5edf76c9fe73bde3994
Author: Gilad Ben-Yossef <gilad@benyossef.com>
Date:   Thu May 18 16:29:25 2017 +0300

    crypto: gcm - wait for crypto op not signal safe
    
    [ Upstream commit f3ad587070d6bd961ab942b3fd7a85d00dfc934b ]
    
    crypto_gcm_setkey() was using wait_for_completion_interruptible() to
    wait for the completion of an async crypto op, but if a signal occurs it
    may return before the DMA ops of the HW crypto provider finish, thus
    corrupting the data buffer that is kfree'ed in this case.
    
    Resolve this by using wait_for_completion() instead.
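
    A sketch of the change (the completion plumbing in crypto_gcm_setkey() is
    assumed):

        if (err == -EINPROGRESS || err == -EBUSY) {
                /* was wait_for_completion_interruptible(): a signal could
                 * end the wait while the HW provider's DMA into the buffer
                 * is still in flight */
                wait_for_completion(&data->result.completion);
                err = data->result.err;
        }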
    
    Reported-by: Eric Biggers <ebiggers3@gmail.com>
    Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
    CC: stable@vger.kernel.org
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit e02ed52dd2d34fdf369ea45caffb78ca24ee8625
Author: Eric Biggers <ebiggers@google.com>
Date:   Thu Jun 8 14:48:47 2017 +0100

    KEYS: fix freeing uninitialized memory in key_update()
    
    [ Upstream commit 63a0b0509e700717a59f049ec6e4e04e903c7fe2 ]
    
    key_update() freed the key_preparsed_payload even if it was not
    initialized first.  This would cause a crash if userspace called
    keyctl_update() on a key with type like "asymmetric" that has a
    ->preparse() method but not an ->update() method.  Possibly it could
    even be triggered for other key types by racing with keyctl_setperm() to
    make the KEY_NEED_WRITE check fail (the permission was already checked,
    so normally it wouldn't fail there).
    
    Reproducer with key type "asymmetric", given a valid cert.der:
    
    keyctl new_session
    keyid=$(keyctl padd asymmetric desc @s < cert.der)
    keyctl setperm $keyid 0x3f000000
    keyctl update $keyid data
    
    [  150.686666] BUG: unable to handle kernel NULL pointer dereference at 0000000000000001
    [  150.687601] IP: asymmetric_key_free_kids+0x12/0x30
    [  150.688139] PGD 38a3d067
    [  150.688141] PUD 3b3de067
    [  150.688447] PMD 0
    [  150.688745]
    [  150.689160] Oops: 0000 [#1] SMP
    [  150.689455] Modules linked in:
    [  150.689769] CPU: 1 PID: 2478 Comm: keyctl Not tainted 4.11.0-rc4-xfstests-00187-ga9f6b6b8cd2f #742
    [  150.690916] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-20170228_101828-anatol 04/01/2014
    [  150.692199] task: ffff88003b30c480 task.stack: ffffc90000350000
    [  150.692952] RIP: 0010:asymmetric_key_free_kids+0x12/0x30
    [  150.693556] RSP: 0018:ffffc90000353e58 EFLAGS: 00010202
    [  150.694142] RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000004
    [  150.694845] RDX: ffffffff81ee3920 RSI: ffff88003d4b0700 RDI: 0000000000000001
    [  150.697569] RBP: ffffc90000353e60 R08: ffff88003d5d2140 R09: 0000000000000000
    [  150.702483] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
    [  150.707393] R13: 0000000000000004 R14: ffff880038a4d2d8 R15: 000000000040411f
    [  150.709720] FS:  00007fcbcee35700(0000) GS:ffff88003fd00000(0000) knlGS:0000000000000000
    [  150.711504] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [  150.712733] CR2: 0000000000000001 CR3: 0000000039eab000 CR4: 00000000003406e0
    [  150.714487] Call Trace:
    [  150.714975]  asymmetric_key_free_preparse+0x2f/0x40
    [  150.715907]  key_update+0xf7/0x140
    [  150.716560]  ? key_default_cmp+0x20/0x20
    [  150.717319]  keyctl_update_key+0xb0/0xe0
    [  150.718066]  SyS_keyctl+0x109/0x130
    [  150.718663]  entry_SYSCALL_64_fastpath+0x1f/0xc2
    [  150.719440] RIP: 0033:0x7fcbce75ff19
    [  150.719926] RSP: 002b:00007ffd5d167088 EFLAGS: 00000206 ORIG_RAX: 00000000000000fa
    [  150.720918] RAX: ffffffffffffffda RBX: 0000000000404d80 RCX: 00007fcbce75ff19
    [  150.721874] RDX: 00007ffd5d16785e RSI: 000000002866cd36 RDI: 0000000000000002
    [  150.722827] RBP: 0000000000000006 R08: 000000002866cd36 R09: 00007ffd5d16785e
    [  150.723781] R10: 0000000000000004 R11: 0000000000000206 R12: 0000000000404d80
    [  150.724650] R13: 00007ffd5d16784d R14: 00007ffd5d167238 R15: 000000000040411f
    [  150.725447] Code: 83 c4 08 31 c0 5b 41 5c 41 5d 41 5e 41 5f 5d c3 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 85 ff 74 23 55 48 89 e5 53 48 89 fb <48> 8b 3f e8 06 21 c5 ff 48 8b 7b 08 e8 fd 20 c5 ff 48 89 df e8
    [  150.727489] RIP: asymmetric_key_free_kids+0x12/0x30 RSP: ffffc90000353e58
    [  150.728117] CR2: 0000000000000001
    [  150.728430] ---[ end trace f7f8fe1da2d5ae8d ]---
    
    Fixes: 4d8c0250b841 ("KEYS: Call ->free_preparse() even after ->preparse() returns an error")
    Cc: stable@vger.kernel.org # 3.17+
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: David Howells <dhowells@redhat.com>
    Signed-off-by: James Morris <james.l.morris@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit a38f69cb4a22982881a8821cc161c1b5a6151fa5
Author: Eric W. Biederman <ebiederm@xmission.com>
Date:   Mon May 22 15:40:12 2017 -0500

    ptrace: Properly initialize ptracer_cred on fork
    
    [ Upstream commit c70d9d809fdeecedb96972457ee45c49a232d97f ]
    
    When I introduced ptracer_cred I failed to consider the weirdness of
    fork where the task_struct copies the old value by default.  This
    winds up leaving ptracer_cred set even when a process forks and
    the child process does not wind up being ptraced.
    
    Because ptracer_cred is not set on non-ptraced processes whose
    parents were ptraced this has broken the ability of the enlightenment
    window manager to start setuid children.
    
    Fix this by properly initializing ptracer_cred in ptrace_init_task.
    
    This must be done with a little bit of care to preserve the current value
    of ptracer_cred when ptrace carries through fork.  Re-reading the
    ptracer_cred from the ptracing process at this point is inconsistent
    with how PT_PTRACE_CAP has been maintained all of these years.
    
    Tested-by: Takashi Iwai <tiwai@suse.de>
    Fixes: 64b875f7ac8a ("ptrace: Capture the ptracer's creds not PT_PTRACE_CAP")
    Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 94d53c5028a392e3aafb00871e867776b0fc868c
Author: Jane Chu <jane.chu@oracle.com>
Date:   Tue Jun 6 14:32:29 2017 -0600

    arch/sparc: support NR_CPUS = 4096
    
    [ Upstream commit c79a13734d104b5b147d7cb0870276ccdd660dae ]
    
    Linux SPARC64 limits NR_CPUS to 4064 because init_cpu_send_mondo_info()
    only allocates a single page for NR_CPUS mondo entries. Thus we cannot
    use all 4096 CPUs on some SPARC platforms.
    
    To fix, allocate (2^order) pages where order is set according to the size
    of cpu_list for possible cpus. Since cpu_list_pa and cpu_mondo_block_pa
    are not used in asm code, there are no imm13 offsets from the base PA
    that would break; such offsets can only reach one page.
    
    Orabug: 25505750
    
    Signed-off-by: Jane Chu <jane.chu@oracle.com>
    
    Reviewed-by: Bob Picco <bob.picco@oracle.com>
    Reviewed-by: Atish Patra <atish.patra@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 252bf31f5d919ead8cb638cbf1228a400b78dec4
Author: Pavel Tatashin <pasha.tatashin@oracle.com>
Date:   Wed May 31 11:25:25 2017 -0400

    sparc64: delete old wrap code
    
    [ Upstream commit 0197e41ce70511dc3b71f7fefa1a676e2b5cd60b ]
    
    The old method that used xcall and softint to get a new context id is
    deleted, as it is replaced by a method that uses per_cpu_secondary_mm
    without xcall to perform the context wrap.
    
    Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
    Reviewed-by: Bob Picco <bob.picco@oracle.com>
    Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 0837a0481106cbd8d723978f38272ab93877ad89
Author: Pavel Tatashin <pasha.tatashin@oracle.com>
Date:   Wed May 31 11:25:24 2017 -0400

    sparc64: new context wrap
    
    [ Upstream commit a0582f26ec9dfd5360ea2f35dd9a1b026f8adda0 ]
    
    The current wrap implementation has a race issue: it is called outside of
    the ctx_alloc_lock, and also does not wait for all CPUs to complete the
    wrap.  This means that a thread can get a new context with a new version
    and another thread might still be running with the same context. The
    problem is especially severe on CPUs with shared TLBs, like sun4v. I used
    the following test to very quickly reproduce the problem:
    - start over 8K processes (must be more than context IDs)
    - write and read values at a memory location in every process.
    
    Very quickly memory corruptions start happening, and what we read back
    does not equal what we wrote.
    
    Several approaches were explored before settling on this one:
    
    Approach 1:
    Move smp_new_mmu_context_version() inside ctx_alloc_lock, and wait for
    every process to complete the wrap. (Note: every CPU must WAIT before
    leaving smp_new_mmu_context_version_client() until every one arrives).
    
    This approach ends up with deadlocks, as some threads own locks which other
    threads are waiting for, and they never receive softint until these threads
    exit smp_new_mmu_context_version_client(). Since we do not allow the exit,
    deadlock happens.
    
    Approach 2:
    Handle wrap right during mondo interrupt. Use etrap/rtrap to enter into
    C code, and issue new versions to every CPU.
    This approach adds some overhead to runtime: in switch_mm() we must add
    some checks to make sure that versions have not changed due to wrap while
    we were loading the new secondary context. (This could be protected by
    PSTATE_IE, but that degrades performance on M7 and older CPUs, as it takes
    50 cycles for each access.) Also, we still need a global per-cpu array of
    MMs to know where we need to load new contexts, otherwise we can change
    context to a thread that is going away (if we received a mondo between
    switch_mm() and switch_to() time). Finally, there are some issues with
    window registers in rtrap() when context IDs are changed during CPU mondo
    time.
    
    The approach in this patch is the simplest and has almost no impact on
    runtime.  We use the array with mm's where last secondary contexts were
    loaded onto CPUs and bump their versions to the new generation without
    changing context IDs. If a new process comes in to get a context ID, it
    will go through get_new_mmu_context() because of version mismatch. But the
    running processes do not need to be interrupted. And wrap is quicker as we
    do not need to xcall and wait for everyone to receive and complete wrap.
    
    Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
    Reviewed-by: Bob Picco <bob.picco@oracle.com>
    Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 169dc5fd241d61ab37294f51be54a3b3a43d2f1f
Author: Pavel Tatashin <pasha.tatashin@oracle.com>
Date:   Wed May 31 11:25:23 2017 -0400

    sparc64: add per-cpu mm of secondary contexts
    
    [ Upstream commit 7a5b4bbf49fe86ce77488a70c5dccfe2d50d7a2d ]
    
    The new wrap is going to use information from this array to figure out
    mm's that currently have valid secondary contexts setup.
    
    Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
    Reviewed-by: Bob Picco <bob.picco@oracle.com>
    Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit ccadb4e680e9d117fc4ac258d432c6fbb2b0fa67
Author: Pavel Tatashin <pasha.tatashin@oracle.com>
Date:   Wed May 31 11:25:22 2017 -0400

    sparc64: redefine first version
    
    [ Upstream commit c4415235b2be0cc791572e8e7f7466ab8f73a2bf ]
    
    CTX_FIRST_VERSION defines the first context version, but it also defines
    the first context. This patch redefines it to only include the first
    context version.
    
    Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
    Reviewed-by: Bob Picco <bob.picco@oracle.com>
    Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 5203c6c927244f3f512961165b639958e227c867
Author: Pavel Tatashin <pasha.tatashin@oracle.com>
Date:   Wed May 31 11:25:21 2017 -0400

    sparc64: combine activate_mm and switch_mm
    
    [ Upstream commit 14d0334c6748ff2aedb3f2f7fdc51ee90a9b54e7 ]
    
    The only difference between these two functions is that in activate_mm we
    unconditionally flush context. However, there is no need to keep this
    difference after fixing a bug where cpumask was not reset on a wrap. So, in
    this patch we combine these.
    
    Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
    Reviewed-by: Bob Picco <bob.picco@oracle.com>
    Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 317a444875fde20e9f4de2fece2776cd62bcb198
Author: Pavel Tatashin <pasha.tatashin@oracle.com>
Date:   Wed May 31 11:25:20 2017 -0400

    sparc64: reset mm cpumask after wrap
    
    [ Upstream commit 588974857359861891f478a070b1dc7ae04a3880 ]
    
    After a wrap (getting a new context version) a process must get a new
    context id, which means that we would need to flush the context id from
    the TLB before running for the first time with this ID on every CPU. But,
    we use mm_cpumask to determine if this process has been running on this CPU
    before, and this mask is not reset after a wrap. So, there are two possible
    fixes for this issue:
    
    1. Clear mm cpumask whenever mm gets a new context id
    2. Unconditionally flush context every time process is running on a CPU
    
    This patch implements the first solution
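
    A hedged sketch of solution 1; the real change sits in the sparc64 context
    allocation path (get_new_mmu_context()):

        /* the mm is crossing into a new context version: no CPU has the
         * new context id in its TLB yet, so forget where it ran before */
        if (unlikely(new_version))
                cpumask_clear(mm_cpumask(mm));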
    
    Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
    Reviewed-by: Bob Picco <bob.picco@oracle.com>
    Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit a2334e23c296582a7b1047869a383c6639f00884
Author: James Clarke <jrtc27@jrtc27.com>
Date:   Mon May 29 20:17:56 2017 +0100

    sparc: Machine description indices can vary
    
    [ Upstream commit c982aa9c304bf0b9a7522fd118fed4afa5a0263c ]
    
    VIO devices were being looked up by their index in the machine
    description node block, but this often varies over time as devices are
    added and removed. Instead, store the ID and look up using the type,
    config handle and ID.
    
    Signed-off-by: James Clarke <jrtc27@jrtc27.com>
    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=112541
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 8ee93884863e28e35fabae755c023361f367a6f6
Author: Mike Kravetz <mike.kravetz@oracle.com>
Date:   Fri Jun 2 14:51:12 2017 -0700

    sparc64: mm: fix copy_tsb to correctly copy huge page TSBs
    
    [ Upstream commit 654f4807624a657f364417c2a7454f0df9961734 ]
    
    When a TSB grows beyond its current capacity, a new TSB is allocated
    and copy_tsb is called to copy entries from the old TSB to the new.
    A hash shift based on page size is used to calculate the index of an
    entry in the TSB.  copy_tsb has hard coded PAGE_SHIFT in these
    calculations.  However, for huge page TSBs the value REAL_HPAGE_SHIFT
    should be used.  As a result, when copy_tsb is called for a huge page
    TSB the entries are placed at the incorrect index in the newly
    allocated TSB.  When doing hardware table walk, the MMU does not
    match these entries and we end up in the TSB miss handling code.
    This code will then create and write an entry to the correct index
    in the TSB.  We take a performance hit for the table walk miss and
    recreation of these entries.
    
    Pass a new parameter to copy_tsb that is the page size shift to be
    used when copying the TSB.
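
    A hedged sketch of the caller side in tsb_grow() (argument names assumed):

        /* pass the shift matching this TSB's page size instead of letting
         * copy_tsb assume PAGE_SHIFT for every TSB */
        copy_tsb(old_tsb_base, old_size, new_tsb_base, new_size,
                 tsb_index == MM_TSB_BASE ? PAGE_SHIFT : REAL_HPAGE_SHIFT);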
    
    Suggested-by: Anthony Yznaga <anthony.yznaga@oracle.com>
    Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 246fa51013e3b9c8247158978dd17b0ca674e7ac
Author: Max Filippov <jcmvbkbc@gmail.com>
Date:   Mon Jun 5 18:31:16 2017 -0700

    net: ethoc: enable NAPI before poll may be scheduled
    
    [ Upstream commit d220b942a4b6a0640aee78841608f4aa5e8e185e ]
    
    ethoc_reset enables device interrupts, and ethoc_interrupt may schedule a
    NAPI poll before NAPI is enabled in ethoc_open, which results in the device
    being unable to send or receive anything until it is closed and reopened.
    If the device is flooded with ingress packets it may be unable to recover
    at all.
    Move napi_enable above ethoc_reset in ethoc_open to fix that.
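
    A sketch of the reordering in ethoc_open() (other setup elided):

        napi_enable(&priv->napi);   /* NAPI must be ready before IRQs can fire */
        ethoc_reset(priv);          /* unmasks device interrupts */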
    
    Fixes: a1702857724f ("net: Add support for the OpenCores 10/100 Mbps Ethernet MAC.")
    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
    Reviewed-by: Tobias Klauser <tklauser@distanz.ch>
    Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 169a7e245c7fa97b0fcf1527bfa1d45a40b02405
Author: Eric Dumazet <edumazet@google.com>
Date:   Sat Jun 3 09:29:25 2017 -0700

    net: ping: do not abuse udp_poll()
    
    [ Upstream commit 77d4b1d36926a9b8387c6b53eeba42bcaaffcea3 ]
    
    Alexander reported various KASAN messages triggered in recent kernels.
    
    The problem is that ping sockets should not use udp_poll() in the first
    place, and recent changes in UDP stack finally exposed this old bug.
    
    Fixes: c319b4d76b9e ("net: ipv4: add IPPROTO_ICMP socket kind")
    Fixes: 6d0bfe226116 ("net: ipv6: Add IPv6 support to the ping socket.")
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Reported-by: Sasha Levin <alexander.levin@verizon.com>
    Cc: Solar Designer <solar@openwall.com>
    Cc: Vasiliy Kulikov <segoon@openwall.com>
    Cc: Lorenzo Colitti <lorenzo@google.com>
    Acked-By: Lorenzo Colitti <lorenzo@google.com>
    Tested-By: Lorenzo Colitti <lorenzo@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 59dc08f8f5e7318a4a7ea20799159b1369138123
Author: David S. Miller <davem@davemloft.net>
Date:   Sun Jun 4 21:41:10 2017 -0400

    ipv6: Fix leak in ipv6_gso_segment().
    
    [ Upstream commit e3e86b5119f81e5e2499bea7ea1ebe8ac6aab789 ]
    
    If ip6_find_1stfragopt() fails and we return an error we have to free
    up 'segs' because nobody else is going to.
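
    A sketch of the fixed error path in ipv6_gso_segment() (variable names
    assumed):

        err = ip6_find_1stfragopt(skb, &prevhdr);
        if (err < 0) {
                kfree_skb_list(segs);   /* nobody else will free the segments */
                return ERR_PTR(err);
        }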
    
    Fixes: 2423496af35d ("ipv6: Prevent overrun when parsing v6 header options")
    Reported-by: Ben Hutchings <ben@decadent.org.uk>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit f257e5d318a56966859c8479098aa1917f71c867
Author: Yuchung Cheng <ycheng@google.com>
Date:   Wed May 31 11:21:27 2017 -0700

    tcp: disallow cwnd undo when switching congestion control
    
    [ Upstream commit 44abafc4cc094214a99f860f778c48ecb23422fc ]
    
    When the sender switches its congestion control during loss
    recovery, if the recovery is spurious then it may incorrectly
    revert cwnd and ssthresh to the older values set by a previous
    congestion control. Consider a congestion control (like BBR)
    that does not use ssthresh and keeps it infinite: the connection
    may incorrectly revert cwnd to an infinite value when switching
    from BBR to another congestion control.
    
    This patch fixes it by disallowing such cwnd undo operation
    upon switching congestion control.  Note that undo_marker
    is not reset, so that the packets that were incorrectly marked
    lost will be corrected. We only avoid undoing the cwnd in
    tcp_undo_cwnd_reduction().
    
    Signed-off-by: Yuchung Cheng <ycheng@google.com>
    Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
    Signed-off-by: Neal Cardwell <ncardwell@google.com>
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit fa95ca65fb86b350aff6f0ce37c39a9ad839a739
Author: Ganesh Goudar <ganeshgr@chelsio.com>
Date:   Wed May 31 18:26:28 2017 +0530

    cxgb4: avoid enabling napi twice to the same queue
    
    [ Upstream commit e7519f9926f1d0d11c776eb0475eb098c7760f68 ]
    
    Take uld mutex to avoid race between cxgb_up() and
    cxgb4_register_uld() to enable napi for the same uld
    queue.
    
    Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit cd276bb4fe54b464e325942dfc49f1d3923ae738
Author: Ben Hutchings <ben@decadent.org.uk>
Date:   Wed May 31 13:15:41 2017 +0100

    ipv6: xfrm: Handle errors reported by xfrm6_find_1stfragopt()
    
    [ Upstream commit 6e80ac5cc992ab6256c3dae87f7e57db15e1a58c ]
    
    xfrm6_find_1stfragopt() may now return an error code and we must
    not treat it as a length.
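
    A hedged sketch of the callers' check (the real call sites go through the
    mode/type hooks; variable names assumed):

        hdr_len = xfrm6_find_1stfragopt(x, skb, &prevhdr);
        if (hdr_len < 0)
                return hdr_len;     /* propagate the error, don't use it as a length */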
    
    Fixes: 2423496af35d ("ipv6: Prevent overrun when parsing v6 header options")
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    Acked-by: Craig Gallek <kraig@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit e616f6da095ef1bcc497f623fc263d527f2ea2aa
Author: Mintz, Yuval <Yuval.Mintz@cavium.com>
Date:   Thu Jun 1 15:57:56 2017 +0300

    bnx2x: Fix Multi-Cos
    
    [ Upstream commit 3968d38917eb9bd0cd391265f6c9c538d9b33ffa ]
    
    Apparently multi-cos hasn't been working for bnx2x for quite some time -
    the driver implements ndo_select_queue() to allow queue selection
    for FCoE, but the regular L2 flow would cause it to modulo the
    fallback's result by the number of queues.
    The fallback would return a queue matching the needed tc
    [via __skb_tx_hash()], but since the modulo is by the number of TSS
    queues, which does not account for the number of TCs, transmission would
    always be done by a queue configured to use TC0.
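
    A sketch of the fixed modulo in bnx2x_select_queue() (constant names per
    the bnx2x driver; hedged):

        /* account for the traffic classes, not just the TSS queues */
        return fallback(dev, skb) % (BNX2X_NUM_ETH_QUEUES(bp) * bp->max_cos);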
    
    Fixes: ada7c19e6d27 ("bnx2x: use XPS if possible for bnx2x_select_queue instead of pure hash")
    Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit d600ccd7fd214ddc08afcda70b8462022e4c0149
Author: Eric Sandeen <sandeen@sandeen.net>
Date:   Mon May 22 19:54:10 2017 -0700

    xfs: fix unaligned access in xfs_btree_visit_blocks
    
    [ Upstream commit a4d768e702de224cc85e0c8eac9311763403b368 ]
    
    This structure copy was throwing unaligned access warnings on sparc64:
    
    Kernel unaligned access at TPC[1043c088] xfs_btree_visit_blocks+0x88/0xe0 [xfs]
    
    xfs_btree_copy_ptrs does a memcpy, which avoids it.
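
    A sketch of the change in xfs_btree_visit_blocks() (hedged):

        /* was: lptr = *ptr;  -- a struct copy the compiler may perform with
         * doubleword loads that fault on unaligned addresses on sparc64 */
        xfs_btree_copy_ptrs(cur, &lptr, ptr, 1);    /* memcpy-based copy */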
    
    Signed-off-by: Eric Sandeen <sandeen@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 20d07bb1567a58b3f619e5f3d25e5a29ed903720
Author: Zorro Lang <zlang@redhat.com>
Date:   Mon May 15 08:40:02 2017 -0700

    xfs: bad assertion for delalloc an extent that start at i_size
    
    [ Upstream commit 892d2a5f705723b2cb488bfb38bcbdcf83273184 ]
    
    By running fsstress long enough in RHEL-7, I found an assertion failure
    (it is harder to reproduce on linux-4.11, but the problem is still there):

      XFS: Assertion failed: (iflags & BMV_IF_DELALLOC) != 0, file: fs/xfs/xfs_bmap_util.c

    The assertion is in the xfs_getbmap() function:
    
      if (map[i].br_startblock == DELAYSTARTBLOCK &&
    -->   map[i].br_startoff <= XFS_B_TO_FSB(mp, XFS_ISIZE(ip)))
              ASSERT((iflags & BMV_IF_DELALLOC) != 0);
    
    When map[i].br_startoff == XFS_B_TO_FSB(mp, XFS_ISIZE(ip)), the
    startoff is just at EOF. But we only need to check delalloc
    extents that are within EOF, not including EOF.
    
    Signed-off-by: Zorro Lang <zlang@redhat.com>
    Reviewed-by: Brian Foster <bfoster@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 45ed7e2f2f82d47113aa5169851af5d592c4fc57
Author: Brian Foster <bfoster@redhat.com>
Date:   Fri May 12 10:44:08 2017 -0700

    xfs: fix indlen accounting error on partial delalloc conversion
    
    [ Upstream commit 0daaecacb83bc6b656a56393ab77a31c28139bc7 ]
    
    The delalloc -> real block conversion path uses an incorrect
    calculation in the case where the middle part of a delalloc extent
    is being converted. This is documented as a rare situation because
    XFS generally attempts to maximize contiguity by converting as much
    of a delalloc extent as possible.
    
    If this situation does occur, the indlen reservation for the two new
    delalloc extents left behind by the conversion of the middle range
    is calculated and compared with the original reservation. If more
    blocks are required, the delta is allocated from the global block
    pool. This delta value can be characterized as the difference
    between the new total requirement (temp + temp2) and the currently
    available reservation minus those blocks that have already been
    allocated (startblockval(PREV.br_startblock) - allocated).
    
    The problem is that the current code does not account for previously
    allocated blocks correctly. It subtracts the current allocation
    count from the (new - old) delta rather than the old indlen
    reservation. This means that more indlen blocks than have been
    allocated end up stashed in the remaining extents and free space
    accounting is broken as a result.
    
    Fix up the calculation to subtract the allocated block count from
    the original extent indlen and thus correctly allocate the
    reservation delta based on the difference between the new total
    requirement and the unused blocks from the original reservation.
    Also remove a bogus assert that contradicts the fact that the new
    indlen reservation can be larger than the original indlen
    reservation.
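
    In other words, a sketch with the names used above:

        /* reservation still held by the original delalloc extent */
        da_old = startblockval(PREV.br_startblock);
        /* indlen needed by the two extents left behind by the conversion */
        da_new = temp + temp2;

        /* buggy: subtracted 'allocated' from the (new - old) delta:
         *   delta = (da_new - da_old) - allocated;
         * fixed: subtract it from the original reservation instead: */
        da_old -= allocated;
        delta   = da_new > da_old ? da_new - da_old : 0;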
    
    Signed-off-by: Brian Foster <bfoster@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 1a229fd5ae97647b70341190539072d33f43f8d3
Author: Brian Foster <bfoster@redhat.com>
Date:   Wed Apr 26 08:30:40 2017 -0700

    xfs: wait on new inodes during quotaoff dquot release
    
    [ Upstream commit e20c8a517f259cb4d258e10b0cd5d4b30d4167a0 ]
    
    The quotaoff operation has a race with inode allocation that results
    in a livelock. An inode allocation that occurs before the quota
    status flags are updated acquires the appropriate dquots for the
    inode via xfs_qm_vop_dqalloc(). It then inserts the XFS_INEW inode
    into the perag radix tree, sometime later attaches the dquots to the
    inode and finally clears the XFS_INEW flag. Quotaoff expects to
    release the dquots from all inodes in the filesystem via
    xfs_qm_dqrele_all_inodes(). This invokes the AG inode iterator,
    which skips inodes in the XFS_INEW state because they are not fully
    constructed. If the scan occurs after dquots have been attached to
    an inode, but before XFS_INEW is cleared, the newly allocated inode
    will continue to hold a reference to the applicable dquots. When
    quotaoff invokes xfs_qm_dqpurge_all(), the reference count of those
    dquot(s) remains elevated and the dqpurge scan spins indefinitely.
    
    To address this problem, update the xfs_qm_dqrele_all_inodes() scan
    to wait on inodes marked on the XFS_INEW state. We wait on the
    inodes explicitly rather than skip and retry to avoid continuous
    retry loops due to a parallel inode allocation workload. Since
    quotaoff updates the quota state flags and uses a synchronous
    transaction before the dqrele scan, and dquots are attached to
    inodes after radix tree insertion iff quota is enabled, one INEW
    waiting pass through the AG guarantees that the scan has processed
    all inodes that could possibly hold dquot references.
    
    Reported-by: Eryu Guan <eguan@redhat.com>
    Signed-off-by: Brian Foster <bfoster@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit b822f03516c9e367e7614823145213c1adfc1691
Author: Brian Foster <bfoster@redhat.com>
Date:   Wed Apr 26 08:30:39 2017 -0700

    xfs: update ag iterator to support wait on new inodes
    
    [ Upstream commit ae2c4ac2dd39b23a87ddb14ceddc3f2872c6aef5 ]
    
    The AG inode iterator currently skips new inodes as such inodes are
    inserted into the inode radix tree before they are fully
    constructed. Certain contexts require the ability to wait on the
    construction of new inodes, however. The fs-wide dquot release from
    the quotaoff sequence is an example of this.
    
    Update the AG inode iterator to support the ability to wait on
    inodes flagged with XFS_INEW upon request. Create a new
    xfs_inode_ag_iterator_flags() interface and support a set of
    iteration flags to modify the iteration behavior. When the
    XFS_AGITER_INEW_WAIT flag is set, include XFS_INEW flags in the
    radix tree inode lookup and wait on them before the callback is
    executed.
    
    Signed-off-by: Brian Foster <bfoster@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 821afaaed81df25a1404589f0b683a7ad49f7e63
Author: Brian Foster <bfoster@redhat.com>
Date:   Wed Apr 26 08:30:39 2017 -0700

    xfs: support ability to wait on new inodes
    
    [ Upstream commit 756baca27fff3ecaeab9dbc7a5ee35a1d7bc0c7f ]
    
    Inodes that are inserted into the perag tree but still under
    construction are flagged with the XFS_INEW bit. Most contexts either
    skip such inodes when they are encountered or have the ability to
    handle them.
    
    The runtime quotaoff sequence introduces a context that must wait
    for construction of such inodes to correctly ensure that all dquots
    in the fs are released. In anticipation of this, support the ability
    to wait on new inodes. Wake the appropriate bit when XFS_INEW is
    cleared.
    
    Signed-off-by: Brian Foster <bfoster@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 9b1260c216af9a531faf1bf09193b0dd3f3e7e9e
Author: Brian Foster <bfoster@redhat.com>
Date:   Fri Apr 21 12:40:44 2017 -0700

    xfs: fix up quotacheck buffer list error handling
    
    [ Upstream commit 20e8a063786050083fe05b4f45be338c60b49126 ]
    
    The quotacheck error handling of the delwri buffer list assumes the
    resident buffers are locked and doesn't clear the _XBF_DELWRI_Q flag
    on the buffers that are dequeued. This can lead to assert failures
    on buffer release and possibly other locking problems.
    
    Move this code to a delwri queue cancel helper function to
    encapsulate the logic required to properly release buffers from a
    delwri queue. Update the helper to clear the delwri queue flag and
    call it from quotacheck.
    
    Signed-off-by: Brian Foster <bfoster@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 022e9b0e554b1024bc9a810df20b4be5932b3bfc
Author: Brian Foster <bfoster@redhat.com>
Date:   Thu Apr 20 08:06:47 2017 -0700

    xfs: prevent multi-fsb dir readahead from reading random blocks
    
    [ Upstream commit cb52ee334a45ae6c78a3999e4b473c43ddc528f4 ]
    
    Directory block readahead uses a complex iteration mechanism to map
    between high-level directory blocks and underlying physical extents.
    This mechanism attempts to traverse the higher-level dir blocks in a
    manner that handles multi-fsb directory blocks and simultaneously
    maintains a reference to the corresponding physical blocks.
    
    This logic doesn't handle certain (discontiguous) physical extent
    layouts correctly with multi-fsb directory blocks. For example,
    consider the case of a 4k FSB filesystem with a 2 FSB (8k) directory
    block size and a directory with the following extent layout:
    
     EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL
       0: [0..7]:          88..95            0 (88..95)             8
       1: [8..15]:         80..87            0 (80..87)             8
       2: [16..39]:        168..191          0 (168..191)          24
       3: [40..63]:        5242952..5242975  1 (72..95)            24
    
    Directory block 0 spans physical extents 0 and 1, dirblk 1 lies
    entirely within extent 2 and dirblk 2 spans extents 2 and 3. Because
    extent 2 is larger than the directory block size, the readahead code
    erroneously assumes the block is contiguous and issues a readahead
    based on the physical mapping of the first fsb of the dirblk. This
    results in read verifier failure and a spurious corruption or crc
    failure, depending on the filesystem format.
    
    Further, the subsequent readahead code responsible for walking
    through the physical table doesn't correctly advance the physical
    block reference for dirblk 2. Instead of advancing two physical
    filesystem blocks, the first iteration of the loop advances 1 block
    (correctly), but the subsequent iteration advances 2 more physical
    blocks because the next physical extent (extent 3, above) happens to
    cover more than dirblk 2. At this point, the higher-level directory
    block walking is completely off the rails of the actual physical
    layout of the directory for the respective mapping table.
    
    Update the contiguous dirblock logic to consider the current offset
    in the physical extent to avoid issuing directory readahead to
    unrelated blocks. Also, update the mapping table advancing code to
    consider the current offset within the current dirblock to avoid
    advancing the mapping reference too far beyond the dirblock.
    
    Signed-off-by: Brian Foster <bfoster@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 641967d1f9035b30f35faf8969116cd904ab5824
Author: Eric Sandeen <sandeen@redhat.com>
Date:   Thu Apr 13 15:15:47 2017 -0700

    xfs: handle array index overrun in xfs_dir2_leaf_readbuf()
    
    [ Upstream commit 023cc840b40fad95c6fe26fff1d380a8c9d45939 ]
    
    Carlos had a case where "find" seemed to start spinning
    forever and never return.
    
    This was on a filesystem with non-default multi-fsb (8k)
    directory blocks, and a fragmented directory with extents
    like this:
    
    0:[0,133646,2,0]
    1:[2,195888,1,0]
    2:[3,195890,1,0]
    3:[4,195892,1,0]
    4:[5,195894,1,0]
    5:[6,195896,1,0]
    6:[7,195898,1,0]
    7:[8,195900,1,0]
    8:[9,195902,1,0]
    9:[10,195908,1,0]
    10:[11,195910,1,0]
    11:[12,195912,1,0]
    12:[13,195914,1,0]
    ...
    
    i.e. the first extent is a contiguous 2-fsb dir block, but
    after that it is fragmented into 1 block extents.
    
    At the top of the readdir path, we allocate a mapping array
    which (for this filesystem geometry) can hold 10 extents; see
    the assignment to map_info->map_size.  During readdir, we are
    therefore able to map extents 0 through 9 above into the array
    for readahead purposes.  If we count by 2, we see that the last
    mapped index (9) is the first block of a 2-fsb directory block.
    
    At the end of xfs_dir2_leaf_readbuf() we have 2 loops to fill
    more readahead; the outer loop assumes one full dir block is
    processed each loop iteration, and an inner loop that ensures
    that this is so by advancing to the next extent until a full
    directory block is mapped.
    
    The problem is that this inner loop may step past the last
    extent in the mapping array as it tries to reach the end of
    the directory block.  This will read garbage for the extent
    length, and as a result the loop control variable 'j' may
    become corrupted and never fail the loop conditional.
    
    The number of valid mappings we have in our array is stored
    in map->map_valid, so stop this inner loop based on that limit.
    
    There is an ASSERT at the top of the outer loop for this
    same condition, but we never made it out of the inner loop,
    so the ASSERT never fired.
    
    Huge appreciation to Carlos for debugging and isolating
    the problem.
    
    Debugged-and-analyzed-by: Carlos Maiolino <cmaiolino@redhat.com>
    Signed-off-by: Eric Sandeen <sandeen@redhat.com>
    Tested-by: Carlos Maiolino <cmaiolino@redhat.com>
    Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
    Reviewed-by: Bill O'Donnell <billodo@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 17d031b4add771e425ce90b0b1c4f315a2b04a01
Author: Darrick J. Wong <darrick.wong@oracle.com>
Date:   Mon Apr 3 15:17:57 2017 -0700

    xfs: fix over-copying of getbmap parameters from userspace
    
    [ Upstream commit be6324c00c4d1e0e665f03ed1fc18863a88da119 ]
    
    In xfs_ioc_getbmap, we should only copy the fields of struct getbmap
    from userspace, or else we end up copying random stack contents into the
    kernel.  struct getbmap is a strict subset of getbmapx, so a partial
    structure copy should work fine.
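
    A sketch of the fix in xfs_ioc_getbmap() (struct names per the text above):

        struct getbmapx bmx;

        /* copy only the getbmap-sized prefix; struct getbmap is a strict
         * subset of struct getbmapx, so the partial copy is well defined */
        if (copy_from_user(&bmx, arg, sizeof(struct getbmap)))
                return -EFAULT;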
    
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 74d27999c51e353fd0482c6750f98a7d3af2703d
Author: Brian Foster <bfoster@redhat.com>
Date:   Tue Mar 28 14:51:44 2017 -0700

    xfs: use dedicated log worker wq to avoid deadlock with cil wq
    
    [ Upstream commit 696a562072e3c14bcd13ae5acc19cdf27679e865 ]
    
    The log covering background task used to be part of the xfssyncd
    workqueue. That workqueue was removed as of commit 5889608df ("xfs:
    syncd workqueue is no more") and the associated work item scheduled
    to the xfs-log wq. The latter is used for log buffer I/O completion.
    
    Since xfs_log_worker() can invoke a log flush, a deadlock is
    possible between the xfs-log and xfs-cil workqueues. Consider the
    following codepath from xfs_log_worker():
    
    xfs_log_worker()
      xfs_log_force()
        _xfs_log_force()
          xlog_cil_force()
            xlog_cil_force_lsn()
              xlog_cil_push_now()
                flush_work()
    
    The above is in xfs-log wq context and blocked waiting on the
    completion of an xfs-cil work item. Concurrently, the cil push in
    progress can end up blocked here:
    
    xlog_cil_push_work()
      xlog_cil_push()
        xlog_write()
          xlog_state_get_iclog_space()
            xlog_wait(&log->l_flush_wait, ...)
    
    The above is in xfs-cil context waiting on log buffer I/O
    completion, which executes in xfs-log wq context. In this scenario
    both workqueues are deadlocked waiting on each other.
    
    Add a new workqueue specifically for the high level log covering and
    ail pushing worker, as was the case prior to commit 5889608df.
    
    Diagnosed-by: David Jeffery <djeffery@redhat.com>
    Signed-off-by: Brian Foster <bfoster@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit ddf2f45b3344be6f220229c500957078af21148f
Author: Darrick J. Wong <darrick.wong@oracle.com>
Date:   Mon Apr 3 12:22:39 2017 -0700

    xfs: fix kernel memory exposure problems
    
    [ Upstream commit bf9216f922612d2db7666aae01e65064da2ffb3a ]
    
    Fix a memory exposure problem in inumbers where we allocate an array of
    structures with holes, fail to zero the holes, then blindly copy the
    kernel memory contents (junk and all) into userspace.
    
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 9bf638a08ad2f238db464bad0e8cf6b3c7ef0c52
Author: Punit Agrawal <punit.agrawal@arm.com>
Date:   Fri Jun 2 14:46:40 2017 -0700

    mm/migrate: fix refcount handling when !hugepage_migration_supported()
    
    [ Upstream commit 30809f559a0d348c2dfd7ab05e9a451e2384962e ]
    
    On failing to migrate a page, soft_offline_huge_page() performs the
    necessary update to the hugepage ref-count.
    
    But when !hugepage_migration_supported(), unmap_and_move_hugepage()
    also decrements the page ref-count for the hugepage.  The combined
    behaviour leaves the ref-count in an inconsistent state.
    
    This leads to soft lockups when running the overcommitted hugepage test
    from mce-tests suite.
    
      Soft offlining pfn 0x83ed600 at process virtual address 0x400000000000
      soft offline: 0x83ed600: migration failed 1, type 1fffc00000008008 (uptodate|head)
      INFO: rcu_preempt detected stalls on CPUs/tasks:
       Tasks blocked on level-0 rcu_node (CPUs 0-7): P2715
        (detected by 7, t=5254 jiffies, g=963, c=962, q=321)
        thugetlb_overco R  running task        0  2715   2685 0x00000008
        Call trace:
          dump_backtrace+0x0/0x268
          show_stack+0x24/0x30
          sched_show_task+0x134/0x180
          rcu_print_detail_task_stall_rnp+0x54/0x7c
          rcu_check_callbacks+0xa74/0xb08
          update_process_times+0x34/0x60
          tick_sched_handle.isra.7+0x38/0x70
          tick_sched_timer+0x4c/0x98
          __hrtimer_run_queues+0xc0/0x300
          hrtimer_interrupt+0xac/0x228
          arch_timer_handler_phys+0x3c/0x50
          handle_percpu_devid_irq+0x8c/0x290
          generic_handle_irq+0x34/0x50
          __handle_domain_irq+0x68/0xc0
          gic_handle_irq+0x5c/0xb0
    
    Address this by changing the putback_active_hugepage() in
    soft_offline_huge_page() to putback_movable_pages().
    
    This only triggers on systems that enable memory failure handling
    (ARCH_SUPPORTS_MEMORY_FAILURE) but not hugepage migration
    (!ARCH_ENABLE_HUGEPAGE_MIGRATION).
    
    I imagine this wasn't triggered as there aren't many systems running
    this configuration.
    
    [akpm@linux-foundation.org: remove dead comment, per Naoya]
    Link: http://lkml.kernel.org/r/20170525135146.32011-1-punit.agrawal@arm.com
    Reported-by: Manoj Iyer <manoj.iyer@canonical.com>
    Tested-by: Manoj Iyer <manoj.iyer@canonical.com>
    Suggested-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
    Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
    Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: Wanpeng Li <wanpeng.li@hotmail.com>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: <stable@vger.kernel.org>    [3.14+]
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit c7dbf874d6b55ba723afcd745e3186a5140c3bff
Author: Alex Deucher <alexander.deucher@amd.com>
Date:   Thu May 11 13:14:14 2017 -0400

    drm/radeon/ci: disable mclk switching for high refresh rates (v2)
    
    [ Upstream commit 58d7e3e427db1bd68f33025519a9468140280a75 ]
    
    Even if the vblank period would allow it, it still seems to
    be problematic on some cards.
    
    v2: fix logic inversion (Nils)
    
    bug: https://bugs.freedesktop.org/show_bug.cgi?id=96868
    
    Cc: stable@vger.kernel.org
    Acked-by: Christian König <christian.koenig@amd.com>
    Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 9fcaaa87e08c0f0917318a6cd96d591926f2304e
Author: Richard Narron <comet.berkeley@gmail.com>
Date:   Sun Jun 4 16:23:18 2017 -0700

    fs/ufs: Set UFS default maximum bytes per file
    
    [ Upstream commit 239e250e4acbc0104d514307029c0839e834a51a ]
    
    This fixes a problem with reading files larger than 2GB from a UFS-2
    file system:
    
        https://bugzilla.kernel.org/show_bug.cgi?id=195721
    
    The incorrect UFS s_maxsize limit became a problem as of commit
    c2a9737f45e2 ("vfs,mm: fix a dead loop in truncate_inode_pages_range()")
    which started using s_maxbytes to avoid a page index overflow in
    do_generic_file_read().
    
    That caused files to be truncated on UFS-2 file systems because the
    default maximum file size is 2GB (MAX_NON_LFS) and UFS didn't update it.
    
    Here I simply increase the default to a common value used by other file
    systems.
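
    In practice that amounts to a one-liner in ufs_fill_super() (sketch;
    MAX_LFS_FILESIZE is the value commonly used by other filesystems and
    is assumed here):

        sb->s_maxbytes = MAX_LFS_FILESIZE;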
    
    Signed-off-by: Richard Narron <comet.berkeley@gmail.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Will B <will.brokenbourgh2877@gmail.com>
    Cc: Theodore Ts'o <tytso@mit.edu>
    Cc: <stable@vger.kernel.org> # v4.9 and backports of c2a9737f45e2
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit b36188f229d672beed4c95321f490b7f2b45d08f
Author: Orlando Arias <oarias@knights.ucf.edu>
Date:   Tue May 16 15:34:00 2017 -0400

    sparc: Fix -Wstringop-overflow warning
    
    [ Upstream commit deba804c90642c8ed0f15ac1083663976d578f54 ]
    
    Greetings,
    
    GCC 7 introduced the -Wstringop-overflow flag to detect buffer overflows
    in calls to string handling functions [1][2]. Due to the way
    ``empty_zero_page'' is declared in arch/sparc/include/setup.h, this
    causes a warning to trigger at compile time in the function mem_init(),
    which is subsequently converted to an error. The ensuing patch fixes
    this issue and aligns the declaration of empty_zero_page to that of
    other architectures. Thank you.
    
    Cheers,
    Orlando.
    
    [1] https://gcc.gnu.org/ml/gcc-patches/2016-10/msg02308.html
    [2] https://gcc.gnu.org/gcc-7/changes.html
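
    To illustrate the pattern GCC 7 objects to (illustration only, with
    simplified before/after declarations; the real code lives in the
    sparc setup header and mm init code):

        /* before: a single unsigned long, cleared as a whole page */
        extern unsigned long empty_zero_page;
        memset(&empty_zero_page, 0, PAGE_SIZE);   /* -Wstringop-overflow:
                                                      clears far more than
                                                      sizeof(unsigned long) */

        /* after: a page-sized array, as on other architectures */
        extern char empty_zero_page[PAGE_SIZE];
        memset(empty_zero_page, 0, PAGE_SIZE);    /* no warning */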
    
    Signed-off-by: Orlando Arias <oarias@knights.ucf.edu>
    
    Signed-off-by: David S. Miller <davem@davemloft.net>
    
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 80f68f7daeb3bc2c41cd876d9abdf3a99e9e571b
Author: Davide Caratti <dcaratti@redhat.com>
Date:   Thu May 25 19:14:56 2017 +0200

    sctp: fix ICMP processing if skb is non-linear
    
    [ Upstream commit 804ec7ebe8ea003999ca8d1bfc499edc6a9e07df ]
    
    Sometimes ICMP replies to INIT chunks are ignored by the client, even if
    the encapsulated SCTP headers match an open socket. This happens when the
    ICMP packet is carried by a paged skb: use skb_header_pointer() to read
    packet contents beyond the SCTP header, so that chunk header and initiate
    tag are validated correctly.
    
    v2:
    - don't use skb_header_pointer() to read the transport header, since
      icmp_socket_deliver() already puts these 8 bytes in the linear area.
    - change commit message to make specific reference to INIT chunks.
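
    For reference, the skb_header_pointer() pattern relied on here looks
    roughly like this (names are illustrative, not the exact sctp code):

        struct sctp_chunkhdr _ch, *ch;

        /* works for both linear and paged skbs: either returns a pointer
         * into the linear area, or copies the bytes into _ch */
        ch = skb_header_pointer(skb, offset, sizeof(_ch), &_ch);
        if (!ch)
                return;         /* packet too short */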
    
    Signed-off-by: Davide Caratti <dcaratti@redhat.com>
    Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
    Acked-by: Vlad Yasevich <vyasevich@gmail.com>
    Reviewed-by: Xin Long <lucien.xin@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 01426eb3503d764ddc10690b1133bee3f3873bf6
Author: Wei Wang <weiwan@google.com>
Date:   Wed May 24 09:59:31 2017 -0700

    tcp: avoid fastopen API to be used on AF_UNSPEC
    
    [ Upstream commit ba615f675281d76fd19aa03558777f81fb6b6084 ]
    
    Fastopen API should be used to perform fastopen operations on the TCP
    socket. It does not make sense to use fastopen API to perform disconnect
    by calling it with AF_UNSPEC. The fastopen data path is also prone to
    race conditions and bugs when used with AF_UNSPEC.
    
    One issue reported and analyzed by Vegard Nossum is as follows:
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    Thread A:                            Thread B:
    ------------------------------------------------------------------------
    sendto()
     - tcp_sendmsg()
         - sk_stream_memory_free() = 0
             - goto wait_for_sndbuf
                 - sk_stream_wait_memory()
                    - sk_wait_event() // sleep
              |                          sendto(flags=MSG_FASTOPEN, dest_addr=AF_UNSPEC)
              |                           - tcp_sendmsg()
              |                              - tcp_sendmsg_fastopen()
              |                                 - __inet_stream_connect()
              |                                    - tcp_disconnect() //because of AF_UNSPEC
              |                                       - tcp_transmit_skb()// send RST
              |                                    - return 0; // no reconnect!
              |                           - sk_stream_wait_connect()
              |                                 - sock_error()
              |                                    - xchg(&sk->sk_err, 0)
              |                                    - return -ECONNRESET
            - ... // wake up, see sk->sk_err == 0
        - skb_entail() on TCP_CLOSE socket
    
    If the connection is reopened then we will send a brand new SYN packet
    after thread A has already queued a buffer. At this point I think the
    socket internal state (sequence numbers etc.) becomes messed up.
    
    When the new connection is closed, the FIN-ACK is rejected because the
    sequence number is outside the window. The other side tries to
    retransmit, but __tcp_retransmit_skb() calls tcp_trim_head() on an
    empty skb, which corrupts the skb data length and hits a BUG() in
    copy_and_csum_bits().
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    
    Hence, this patch adds a check for AF_UNSPEC in the fastopen data path
    and returns EOPNOTSUPP to the user when that happens.
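
    The shape of the added check, as a sketch of tcp_sendmsg_fastopen()
    (the exact hunk may differ in this tree):

        struct sockaddr *uaddr = msg->msg_name;

        if (!(sysctl_tcp_fastopen & TFO_CLIENT_ENABLE) ||
            (uaddr && msg->msg_namelen >= sizeof(uaddr->sa_family) &&
             uaddr->sa_family == AF_UNSPEC))
                return -EOPNOTSUPP;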
    
    Fixes: cf60af03ca4e7 ("tcp: Fast Open client - sendmsg(MSG_FASTOPEN)")
    Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
    Signed-off-by: Wei Wang <weiwan@google.com>
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit a10f1d6ad744e6f8b6a2a60a325bebe421bdbe14
Author: Vlad Yasevich <vyasevich@gmail.com>
Date:   Tue May 23 13:38:43 2017 -0400

    virtio-net: enable TSO/checksum offloads for Q-in-Q vlans
    
    [ Upstream commit 2836b4f224d4fd7d1a2b23c3eecaf0f0ae199a74 ]
    
    Since virtio does not provide its own ndo_features_check handler,
    TSO, and now checksum offload, are disabled for stacked vlans.
    Re-enable the support and let the host take care of it.  This
    restores/improves Guest-to-Guest performance over Q-in-Q vlans.
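
    The change essentially wires the generic pass-through helper into the
    driver's net_device_ops (sketch):

        /* in virtnet_netdev (struct net_device_ops) */
        .ndo_features_check = passthru_features_check,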
    
    Acked-by: Jason Wang <jasowang@redhat.com>
    Acked-by: Michael S. Tsirkin <mst@redhat.com>
    Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit a05aec67cd6bd55200c2536916156eeeff06d2d9
Author: Vlad Yasevich <vyasevich@gmail.com>
Date:   Tue May 23 13:38:42 2017 -0400

    be2net: Fix offload features for Q-in-Q packets
    
    [ Upstream commit cc6e9de62a7f84c9293a2ea41bc412b55bb46e85 ]
    
    At least some of the be2net cards do not seem to be capable
    of performing checksum offload computations on Q-in-Q packets.
    In these cases, the received checksum on the remote end is invalid
    and TCP SYN packets are dropped.

    This patch adds a call to check and disable acceleration features
    on Q-in-Q tagged traffic.
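
    A sketch of the kind of change described, assumed to live in the
    driver's ndo_features_check handler (not the literal hunk):

        /* in be_features_check(): let the generic vlan helper strip the
         * checksum/TSO bits for multi-tagged (Q-in-Q) skbs */
        features = vlan_features_check(skb, features);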
    
    CC: Sathya Perla <sathya.perla@broadcom.com>
    CC: Ajit Khaparde <ajit.khaparde@broadcom.com>
    CC: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
    CC: Somnath Kotur <somnath.kotur@broadcom.com>
    Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit b7b05a3b40e8e7a5ef1ca9e814506a22d44e7c00
Author: Vlad Yasevich <vyasevich@gmail.com>
Date:   Tue May 23 13:38:41 2017 -0400

    vlan: Fix tcp checksum offloads in Q-in-Q vlans
    
    [ Upstream commit 35d2f80b07bbe03fb358afb0bdeff7437a7d67ff ]
    
    It appears that TCP checksum offloading has been broken for
    Q-in-Q vlans.  The behavior was exacerbated by the series
        commit afb0bc972b52 ("Merge branch 'stacked_vlan_tso'")
    that enabled acceleration features on stacked vlans.

    However, even without that series, it is possible to trigger
    this issue.  It just requires a much more specialized configuration.
    
    The root cause is the interaction between how
    netdev_intersect_features() works, the features actually set on
    the vlan devices and HW having the ability to run checksum with
    longer headers.
    
    The issue starts when netdev_intersect_features() replaces
    NETIF_F_HW_CSUM with a combination of NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM,
    if the HW advertises IP|IPV6 specific checksums.  This happens
    for tagged and multi-tagged packets.  However, HW that enables
    IP|IPV6 checksum offloading doesn't guarantee that packets with
    arbitrarily long headers can be checksummed.
    
    This patch disables IP|IPV6 checksum offloads for multi-tagged
    packets.
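
    The fix lives in vlan_features_check(); roughly (sketch, the exact
    feature mask may differ):

        if (skb_vlan_tagged_multi(skb))
                /* use a fixed mask instead of netdev_intersect_features(),
                 * so only NETIF_F_HW_CSUM-capable devices keep checksum
                 * offload for multi-tagged packets */
                features &= NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST |
                            NETIF_F_HW_CSUM | NETIF_F_HW_VLAN_CTAG_TX |
                            NETIF_F_HW_VLAN_STAG_TX;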
    
    CC: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
    CC: Michal Kubecek <mkubecek@suse.cz>
    Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
    Acked-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit d78ddec4e7fb482e27514466ae8a738ef61a9f53
Author: Eric Dumazet <edumazet@google.com>
Date:   Fri May 19 14:17:48 2017 -0700

    ipv6: fix out of bound writes in __ip6_append_data()
    
    [ Upstream commit 232cd35d0804cc241eb887bb8d4d9b3b9881c64a ]
    
    Andrey Konovalov and idaifish@gmail.com reported crashes caused by
    one skb shared_info being overwritten from __ip6_append_data()
    
    Andrey's program led to the following state:
    
    copy -4200 datalen 2000 fraglen 2040
    maxfraglen 2040 alloclen 2048 transhdrlen 0 offset 0 fraggap 6200
    
    The call skb_copy_and_csum_bits(skb_prev, maxfraglen, data + transhdrlen,
    fraggap, 0); overwrites skb->head and the skb_shared_info area.

    Since we apparently detect this rare condition too late, move the
    code earlier so that we avoid even allocating the skb and risking crashes.
    
    Once again, many thanks to Andrey and syzkaller team.
    
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Reported-by: Andrey Konovalov <andreyknvl@google.com>
    Tested-by: Andrey Konovalov <andreyknvl@google.com>
    Reported-by: <idaifish@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit acf388f77791df71d7cd0830fbd128f24545eb08
Author: Bjørn Mork <bjorn@mork.no>
Date:   Wed May 17 16:31:41 2017 +0200

    qmi_wwan: add another Lenovo EM74xx device ID
    
    [ Upstream commit 486181bcb3248e2f1977f4e69387a898234a4e1e ]
    
    In their infinite wisdom, and never ending quest for end user frustration,
    Lenovo has decided to use a new USB device ID for the wwan modules in
    their 2017 laptops.  The actual hardware is still the Sierra Wireless
    EM7455 or EM7430, depending on region.
    
    Signed-off-by: Bjørn Mork <bjorn@mork.no>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 7144c12e891dc6e0d51e9ef004761a35b5e0eaf6
Author: David S. Miller <davem@davemloft.net>
Date:   Wed May 17 22:54:11 2017 -0400

    ipv6: Check ip6_find_1stfragopt() return value properly.
    
    [ Upstream commit 7dd7eb9513bd02184d45f000ab69d78cb1fa1531 ]
    
    Do not use unsigned variables to see if it returns a negative
    error or not.
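
    The general shape at each affected call site (sketch):

        int err;
        unsigned int hlen;

        err = ip6_find_1stfragopt(skb, &prevhdr);
        if (err < 0)
                goto fail;      /* a negative error would be lost if it
                                   were stored straight into an unsigned */
        hlen = err;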
    
    Fixes: 2423496af35d ("ipv6: Prevent overrun when parsing v6 header options")
    Reported-by: Julia Lawall <julia.lawall@lip6.fr>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit e7f05ff30b0cd72b00c8ca7be3cd48fedf96550f
Author: Craig Gallek <kraig@google.com>
Date:   Tue May 16 14:36:23 2017 -0400

    ipv6: Prevent overrun when parsing v6 header options
    
    [ Upstream commit 2423496af35d94a87156b063ea5cedffc10a70a1 ]
    
    The KASAN warning reported below was discovered with a syzkaller
    program.  The reproducer is basically:
      int s = socket(AF_INET6, SOCK_RAW, NEXTHDR_HOP);
      send(s, &one_byte_of_data, 1, MSG_MORE);
      send(s, &more_than_mtu_bytes_data, 2000, 0);
    
    The socket() call sets the nexthdr field of the v6 header to
    NEXTHDR_HOP, the first send call primes the payload with a non-zero
    byte of data, and the second send call triggers the fragmentation path.
    
    The fragmentation code tries to parse the header options in order
    to figure out where to insert the fragment option.  Since nexthdr points
    to an invalid option, the calculated size of the network header
    can be made much larger than the linear section of the skb, and data
    is read outside of it.

    This fix makes ip6_find_1stfragopt() return an error if it detects
    that it is running out of bounds.
    
    [   42.361487] ==================================================================
    [   42.364412] BUG: KASAN: slab-out-of-bounds in ip6_fragment+0x11c8/0x3730
    [   42.365471] Read of size 840 at addr ffff88000969e798 by task ip6_fragment-oo/3789
    [   42.366469]
    [   42.366696] CPU: 1 PID: 3789 Comm: ip6_fragment-oo Not tainted 4.11.0+ #41
    [   42.367628] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.1-1ubuntu1 04/01/2014
    [   42.368824] Call Trace:
    [   42.369183]  dump_stack+0xb3/0x10b
    [   42.369664]  print_address_description+0x73/0x290
    [   42.370325]  kasan_report+0x252/0x370
    [   42.370839]  ? ip6_fragment+0x11c8/0x3730
    [   42.371396]  check_memory_region+0x13c/0x1a0
    [   42.371978]  memcpy+0x23/0x50
    [   42.372395]  ip6_fragment+0x11c8/0x3730
    [   42.372920]  ? nf_ct_expect_unregister_notifier+0x110/0x110
    [   42.373681]  ? ip6_copy_metadata+0x7f0/0x7f0
    [   42.374263]  ? ip6_forward+0x2e30/0x2e30
    [   42.374803]  ip6_finish_output+0x584/0x990
    [   42.375350]  ip6_output+0x1b7/0x690
    [   42.375836]  ? ip6_finish_output+0x990/0x990
    [   42.376411]  ? ip6_fragment+0x3730/0x3730
    [   42.376968]  ip6_local_out+0x95/0x160
    [   42.377471]  ip6_send_skb+0xa1/0x330
    [   42.377969]  ip6_push_pending_frames+0xb3/0xe0
    [   42.378589]  rawv6_sendmsg+0x2051/0x2db0
    [   42.379129]  ? rawv6_bind+0x8b0/0x8b0
    [   42.379633]  ? _copy_from_user+0x84/0xe0
    [   42.380193]  ? debug_check_no_locks_freed+0x290/0x290
    [   42.380878]  ? ___sys_sendmsg+0x162/0x930
    [   42.381427]  ? rcu_read_lock_sched_held+0xa3/0x120
    [   42.382074]  ? sock_has_perm+0x1f6/0x290
    [   42.382614]  ? ___sys_sendmsg+0x167/0x930
    [   42.383173]  ? lock_downgrade+0x660/0x660
    [   42.383727]  inet_sendmsg+0x123/0x500
    [   42.384226]  ? inet_sendmsg+0x123/0x500
    [   42.384748]  ? inet_recvmsg+0x540/0x540
    [   42.385263]  sock_sendmsg+0xca/0x110
    [   42.385758]  SYSC_sendto+0x217/0x380
    [   42.386249]  ? SYSC_connect+0x310/0x310
    [   42.386783]  ? __might_fault+0x110/0x1d0
    [   42.387324]  ? lock_downgrade+0x660/0x660
    [   42.387880]  ? __fget_light+0xa1/0x1f0
    [   42.388403]  ? __fdget+0x18/0x20
    [   42.388851]  ? sock_common_setsockopt+0x95/0xd0
    [   42.389472]  ? SyS_setsockopt+0x17f/0x260
    [   42.390021]  ? entry_SYSCALL_64_fastpath+0x5/0xbe
    [   42.390650]  SyS_sendto+0x40/0x50
    [   42.391103]  entry_SYSCALL_64_fastpath+0x1f/0xbe
    [   42.391731] RIP: 0033:0x7fbbb711e383
    [   42.392217] RSP: 002b:00007ffff4d34f28 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
    [   42.393235] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fbbb711e383
    [   42.394195] RDX: 0000000000001000 RSI: 00007ffff4d34f60 RDI: 0000000000000003
    [   42.395145] RBP: 0000000000000046 R08: 00007ffff4d34f40 R09: 0000000000000018
    [   42.396056] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000400aad
    [   42.396598] R13: 0000000000000066 R14: 00007ffff4d34ee0 R15: 00007fbbb717af00
    [   42.397257]
    [   42.397411] Allocated by task 3789:
    [   42.397702]  save_stack_trace+0x16/0x20
    [   42.398005]  save_stack+0x46/0xd0
    [   42.398267]  kasan_kmalloc+0xad/0xe0
    [   42.398548]  kasan_slab_alloc+0x12/0x20
    [   42.398848]  __kmalloc_node_track_caller+0xcb/0x380
    [   42.399224]  __kmalloc_reserve.isra.32+0x41/0xe0
    [   42.399654]  __alloc_skb+0xf8/0x580
    [   42.400003]  sock_wmalloc+0xab/0xf0
    [   42.400346]  __ip6_append_data.isra.41+0x2472/0x33d0
    [   42.400813]  ip6_append_data+0x1a8/0x2f0
    [   42.401122]  rawv6_sendmsg+0x11ee/0x2db0
    [   42.401505]  inet_sendmsg+0x123/0x500
    [   42.401860]  sock_sendmsg+0xca/0x110
    [   42.402209]  ___sys_sendmsg+0x7cb/0x930
    [   42.402582]  __sys_sendmsg+0xd9/0x190
    [   42.402941]  SyS_sendmsg+0x2d/0x50
    [   42.403273]  entry_SYSCALL_64_fastpath+0x1f/0xbe
    [   42.403718]
    [   42.403871] Freed by task 1794:
    [   42.404146]  save_stack_trace+0x16/0x20
    [   42.404515]  save_stack+0x46/0xd0
    [   42.404827]  kasan_slab_free+0x72/0xc0
    [   42.405167]  kfree+0xe8/0x2b0
    [   42.405462]  skb_free_head+0x74/0xb0
    [   42.405806]  skb_release_data+0x30e/0x3a0
    [   42.406198]  skb_release_all+0x4a/0x60
    [   42.406563]  consume_skb+0x113/0x2e0
    [   42.406910]  skb_free_datagram+0x1a/0xe0
    [   42.407288]  netlink_recvmsg+0x60d/0xe40
    [   42.407667]  sock_recvmsg+0xd7/0x110
    [   42.408022]  ___sys_recvmsg+0x25c/0x580
    [   42.408395]  __sys_recvmsg+0xd6/0x190
    [   42.408753]  SyS_recvmsg+0x2d/0x50
    [   42.409086]  entry_SYSCALL_64_fastpath+0x1f/0xbe
    [   42.409513]
    [   42.409665] The buggy address belongs to the object at ffff88000969e780
    [   42.409665]  which belongs to the cache kmalloc-512 of size 512
    [   42.410846] The buggy address is located 24 bytes inside of
    [   42.410846]  512-byte region [ffff88000969e780, ffff88000969e980)
    [   42.411941] The buggy address belongs to the page:
    [   42.412405] page:ffffea000025a780 count:1 mapcount:0 mapping:          (null) index:0x0 compound_mapcount: 0
    [   42.413298] flags: 0x100000000008100(slab|head)
    [   42.413729] raw: 0100000000008100 0000000000000000 0000000000000000 00000001800c000c
    [   42.414387] raw: ffffea00002a9500 0000000900000007 ffff88000c401280 0000000000000000
    [   42.415074] page dumped because: kasan: bad access detected
    [   42.415604]
    [   42.415757] Memory state around the buggy address:
    [   42.416222]  ffff88000969e880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    [   42.416904]  ffff88000969e900: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    [   42.417591] >ffff88000969e980: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
    [   42.418273]                    ^
    [   42.418588]  ffff88000969ea00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
    [   42.419273]  ffff88000969ea80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
    [   42.419882] ==================================================================
    
    Reported-by: Andrey Konovalov <andreyknvl@google.com>
    Signed-off-by: Craig Gallek <kraig@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 3e674773cb5ea4d75b7bc2e5e05860dae1e16675
Author: Soheil Hassas Yeganeh <soheil@google.com>
Date:   Mon May 15 17:05:47 2017 -0400

    tcp: eliminate negative reordering in tcp_clean_rtx_queue
    
    [ Upstream commit bafbb9c73241760023d8981191ddd30bb1c6dbac ]
    
    tcp_ack() can call tcp_fragment() which may deduct the
    value tp->fackets_out when MSS changes. When prior_fackets
    is larger than tp->fackets_out, tcp_clean_rtx_queue() can
    invoke tcp_update_reordering() with negative values. This
    results in absurd tp->reordering values higher than
    sysctl_tcp_max_reordering.

    Note that tcp_update_reordering() indeed sets tp->reordering
    to min(sysctl_tcp_max_reordering, metric), but because
    the comparison is signed, a negative metric always wins.
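
    As a plain-C illustration of that pitfall (not kernel code; the value
    300 stands in for sysctl_tcp_max_reordering):

        int metric = -3;                  /* negative, per the bug above   */
        int cap = 300;                    /* sysctl_tcp_max_reordering     */
        unsigned int reordering;          /* mirrors tp->reordering        */

        reordering = cap < metric ? cap : metric;  /* signed min() -> -3   */
        /* stored unsigned, -3 wraps to a huge value, far above the cap */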
    
    Fixes: c7caf8d3ed7a ("[TCP]: Fix reord detection due to snd_una covered holes")
    Reported-by: Rebecca Isaacs <risaacs@google.com>
    Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
    Signed-off-by: Neal Cardwell <ncardwell@google.com>
    Signed-off-by: Yuchung Cheng <ycheng@google.com>
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit e7b4f3d39f46530f340af6eabbc60fbad9f94f05
Author: Eric Dumazet <edumazet@google.com>
Date:   Wed May 17 07:16:40 2017 -0700

    sctp: do not inherit ipv6_{mc|ac|fl}_list from parent
    
    [ Upstream commit fdcee2cbb8438702ea1b328fb6e0ac5e9a40c7f8 ]
    
    SCTP needs fixes similar to 83eaddab4378 ("ipv6/dccp: do not inherit
    ipv6_mc_list from parent"), otherwise bad things can happen.
    
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Reported-by: Andrey Konovalov <andreyknvl@google.com>
    Tested-by: Andrey Konovalov <andreyknvl@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 61d62ee79ceccd727dc3c47d149b17373a450d39
Author: Yuchung Cheng <ycheng@google.com>
Date:   Wed May 10 17:01:27 2017 -0700

    tcp: avoid fragmenting peculiar skbs in SACK
    
    [ Upstream commit b451e5d24ba6687c6f0e7319c727a709a1846c06 ]
    
    This patch fixes a bug in splitting an SKB during SACK
    processing. Specifically, if an skb contains multiple
    packets and is only partially sacked in the higher sequences,
    tcp_match_skb_to_sack() splits the skb and marks the second fragment
    as SACKed.

    The current code further attempts rounding up the first fragment
    to MSS boundaries. But it misses a boundary condition when the
    rounded-up fragment size (pkt_len) is exactly the skb size.  Splitting
    such an skb is pointless and causes a kernel warning and aborts
    the SACK processing. This patch universally checks for such an over-split
    before calling tcp_fragment() to prevent these unnecessary warnings.
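
    The added guard amounts to (sketch; placed just before the
    tcp_fragment() call in the SACK matching code):

        if (pkt_len >= skb->len && !in_sack)
                return 0;       /* the rounded-up fragment would be the
                                   whole skb: nothing useful to split */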
    
    Fixes: adb92db857ee ("tcp: Make SACK code to split only at mss boundaries")
    Signed-off-by: Yuchung Cheng <ycheng@google.com>
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
    Acked-by: Neal Cardwell <ncardwell@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 93dcd4929d18e947dd5746b1e281c4a14c316a22
Author: Eric Dumazet <edumazet@google.com>
Date:   Tue May 16 13:27:53 2017 -0700

    net: fix compile error in skb_orphan_partial()
    
    [ Upstream commit 9142e9007f2d7ab58a587a1e1d921b0064a339aa ]
    
    If CONFIG_INET is not set, net/core/sock.c cannot compile:
    
    net/core/sock.c: In function ‘skb_orphan_partial’:
    net/core/sock.c:1810:2: error: implicit declaration of function
    ‘skb_is_tcp_pure_ack’ [-Werror=implicit-function-declaration]
      if (skb_is_tcp_pure_ack(skb))
      ^
    
    Fix this by always including <net/tcp.h>
    
    Fixes: f6ba8d33cfbb ("netem: fix skb_orphan_partial()")
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    Reported-by: Randy Dunlap <rdunlap@infradead.org>
    Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 7a230cfdf208b079dd9c5d71e5704811e2b267d9
Author: Eric Dumazet <edumazet@google.com>
Date:   Thu May 11 15:24:41 2017 -0700

    netem: fix skb_orphan_partial()
    
    [ Upstream commit f6ba8d33cfbb46df569972e64dbb5bb7e929bfd9 ]
    
    I should have known that lowering skb->truesize was dangerous :/
    
    In case packets are not leaving the host via a standard Ethernet device,
    but looped back to local sockets, bad things can happen, as reported
    by Michael Madsen ( https://bugzilla.kernel.org/show_bug.cgi?id=195713 )
    
    So instead of tweaking skb->truesize, let's change skb->destructor
    and keep a reference on the owner socket via its sk_refcnt.
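
    A sketch of the new approach (assumed shape, not the literal hunk):

        struct sock *sk = skb->sk;

        if (atomic_inc_not_zero(&sk->sk_refcnt)) {
                /* move the wmem charge off the socket but keep the socket
                 * pinned; sock_efree() drops the reference when the skb
                 * is finally freed */
                atomic_sub(skb->truesize, &sk->sk_wmem_alloc);
                skb->destructor = sock_efree;
        }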
    
    Fixes: f2f872f9272a ("netem: Introduce skb_orphan_partial() helper")
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Reported-by: Michael Madsen <mkm@nabto.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 8404b686a33c3e8a557df5b80b04517cb90d96f5
Author: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Date:   Wed May 10 19:07:53 2017 +0200

    s390/qeth: avoid null pointer dereference on OSN
    
    [ Upstream commit 25e2c341e7818a394da9abc403716278ee646014 ]
    
    Access card->dev only after checking whether it's valid.
    
    Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
    Reviewed-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 89b9ca1dd7d051ab9cef9df7d0bb32e01e7d4106
Author: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Date:   Wed May 10 19:07:52 2017 +0200

    s390/qeth: unbreak OSM and OSN support
    
    [ Upstream commit 2d2ebb3ed0c6acfb014f98e427298673a5d07b82 ]
    
    commit b4d72c08b358 ("qeth: bridgeport support - basic control")
    broke the support for OSM and OSN devices as follows:
    
    As OSM and OSN are L2 only, qeth_core_probe_device() does an early
    setup by loading the l2 discipline and calling qeth_l2_probe_device().
    In this context, adding the l2-specific bridgeport sysfs attributes
    via qeth_l2_create_device_attributes() hits a BUG_ON in fs/sysfs/group.c,
    since the basic sysfs infrastructure for the device hasn't been
    established yet.
    
    Note that OSN actually has its own unique sysfs attributes
    (qeth_osn_devtype), so the additional attributes shouldn't be created
    at all.
    For OSM, add a new qeth_l2_devtype that contains all the common
    and l2-specific sysfs attributes.
    When qeth_core_probe_device() does early setup for OSM or OSN, assign
    the corresponding devtype so that the ccwgroup probe code creates the
    full set of sysfs attributes.
    This allows us to skip qeth_l2_create_device_attributes() in case
    of an early setup.
    
    Any device that can't do early setup will initially have only the
    generic sysfs attributes, and when it's probed later
    qeth_l2_probe_device() adds the l2-specific attributes.
    
    If an early-setup device is removed (by calling ccwgroup_ungroup()),
    device_unregister() will - using the devtype - delete the
    l2-specific attributes before qeth_l2_remove_device() is called.
    So make sure to not remove them twice.
    
    What complicates the issue is that qeth_l2_probe_device() and
    qeth_l2_remove_device() are also called on a device when its
    layer2 attribute changes (i.e. its layer mode is switched).
    For early-setup devices this wouldn't work properly - we wouldn't
    remove the l2-specific attributes when switching to L3.
    But switching the layer mode doesn't actually make any sense;
    we already decided that the device can only operate in L2!
    So just refuse to switch the layer mode on such devices. Note that
    OSN doesn't have a layer2 attribute, so we only need to special-case
    OSM.
    
    Based on an initial patch by Ursula Braun.
    
    Fixes: b4d72c08b358 ("qeth: bridgeport support - basic control")
    Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 0b651772fed7c9a186ef699818b8ad5c540e3746
Author: Ursula Braun <ubraun@linux.vnet.ibm.com>
Date:   Wed May 10 19:07:51 2017 +0200

    s390/qeth: handle sysfs error during initialization
    
    [ Upstream commit 9111e7880ccf419548c7b0887df020b08eadb075 ]
    
    When setting up the device from within the layer discipline's
    probe routine, creating the layer-specific sysfs attributes can fail.
    Report this error back to the caller, and handle it by
    releasing the layer discipline.
    
    Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
    [jwi: updated commit msg, moved an OSN change to a subsequent patch]
    Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>

commit 4e0ecb773276404a4a0788e1398bee22719336b4
Author: Eric Dumazet <edumazet@google.com>
Date:   Tue May 9 06:29:19 2017 -0700

    dccp/tcp: do not inherit mc_list from parent
    
    [ Upstream commit 657831ffc38e30092a2d5f03d385d710eb88b09a ]
    
    syzkaller found a way to trigger double frees from ip_mc_drop_socket()
    
    It turns out that we leave a copy of the parent's mc_list at accept()
    time, which is very bad.
    
    Very similar to commit 8b485ce69876 ("tcp: do not inherit
    fastopen_req from parent")
    
    Initial report from Pray3r, completed by Andrey.
    Thanks a lot to them!
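
    The fix itself boils down to clearing the field when the child socket
    is cloned (sketch; the exact location in the accept/clone path is
    assumed):

        /* do not share the parent's multicast list with the child */
        inet_sk(newsk)->mc_list = NULL;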
    
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Reported-by: Pray3r <pray3r.z@gmail.com>
    Reported-by: Andrey Konovalov <andreyknvl@google.com>
    Tested-by: Andrey Konovalov <andreyknvl@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>