commit c6a15d151e35facd89d1cfcb4d734d452ade1cbb
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Wed Aug 26 10:29:07 2020 +0200

    Linux 4.9.234
    
    Tested-by: Jon Hunter <jonathanh@nvidia.com>
    Tested-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit c0ca97bcfc0bbb4b965450b0952b7b045ed37f2a
Author: Will Deacon <will@kernel.org>
Date:   Mon Aug 24 12:29:40 2020 +0100

    KVM: arm/arm64: Don't reschedule in unmap_stage2_range()
    
    Upstream commits fdfe7cbd5880 ("KVM: Pass MMU notifier range flags to
    kvm_unmap_hva_range()") and b5331379bc62 ("KVM: arm64: Only reschedule
    if MMU_NOTIFIER_RANGE_BLOCKABLE is not set") fix a "sleeping from invalid
    context" BUG caused by unmap_stage2_range() attempting to reschedule when
    called on the OOM path.
    
    Unfortunately, these patches rely on the MMU notifier callback being
    passed knowledge about whether or not blocking is permitted, which was
    introduced in 4.19. Rather than backport this considerable amount of
    infrastructure just for KVM on arm, instead just remove the conditional
    reschedule.
    
    Cc: <stable@vger.kernel.org> # v4.9 only
    Cc: Marc Zyngier <maz@kernel.org>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: James Morse <james.morse@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>
    Acked-by: Marc Zyngier <maz@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 606c6eb9f8a908757c082bc49cd75d167b15f2e7
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Aug 20 08:59:08 2020 +0200

    xen: don't reschedule in preemption off sections
    
    To support long-running hypercalls, xen_maybe_preempt_hcall() calls
    cond_resched() in case a hypercall marked as preemptible has been
    interrupted.
    
    Normally this is no problem, as only hypercalls issued via some
    ioctl()s are marked as preemptible. In rare cases, however, when an
    interrupt occurs during such a preemptible hypercall and a softirq
    action is started from irq_exit(), a further hypercall issued by the
    softirq handler will be regarded as preemptible, too. This can lead
    to rescheduling in spite of the softirq handler potentially having
    called preempt_disable(), leading to splats like:
    
    BUG: sleeping function called from invalid context at drivers/xen/preempt.c:37
    in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 20775, name: xl
    INFO: lockdep is turned off.
    CPU: 1 PID: 20775 Comm: xl Tainted: G D W 5.4.46-1_prgmr_debug.el7.x86_64 #1
    Call Trace:
    <IRQ>
    dump_stack+0x8f/0xd0
    ___might_sleep.cold.76+0xb2/0x103
    xen_maybe_preempt_hcall+0x48/0x70
    xen_do_hypervisor_callback+0x37/0x40
    RIP: e030:xen_hypercall_xen_version+0xa/0x20
    Code: ...
    RSP: e02b:ffffc900400dcc30 EFLAGS: 00000246
    RAX: 000000000004000d RBX: 0000000000000200 RCX: ffffffff8100122a
    RDX: ffff88812e788000 RSI: 0000000000000000 RDI: 0000000000000000
    RBP: ffffffff83ee3ad0 R08: 0000000000000001 R09: 0000000000000001
    R10: 0000000000000000 R11: 0000000000000246 R12: ffff8881824aa0b0
    R13: 0000000865496000 R14: 0000000865496000 R15: ffff88815d040000
    ? xen_hypercall_xen_version+0xa/0x20
    ? xen_force_evtchn_callback+0x9/0x10
    ? check_events+0x12/0x20
    ? xen_restore_fl_direct+0x1f/0x20
    ? _raw_spin_unlock_irqrestore+0x53/0x60
    ? debug_dma_sync_single_for_cpu+0x91/0xc0
    ? _raw_spin_unlock_irqrestore+0x53/0x60
    ? xen_swiotlb_sync_single_for_cpu+0x3d/0x140
    ? mlx4_en_process_rx_cq+0x6b6/0x1110 [mlx4_en]
    ? mlx4_en_poll_rx_cq+0x64/0x100 [mlx4_en]
    ? net_rx_action+0x151/0x4a0
    ? __do_softirq+0xed/0x55b
    ? irq_exit+0xea/0x100
    ? xen_evtchn_do_upcall+0x2c/0x40
    ? xen_do_hypervisor_callback+0x29/0x40
    </IRQ>
    ? xen_hypercall_domctl+0xa/0x20
    ? xen_hypercall_domctl+0x8/0x20
    ? privcmd_ioctl+0x221/0x990 [xen_privcmd]
    ? do_vfs_ioctl+0xa5/0x6f0
    ? ksys_ioctl+0x60/0x90
    ? trace_hardirqs_off_thunk+0x1a/0x20
    ? __x64_sys_ioctl+0x16/0x20
    ? do_syscall_64+0x62/0x250
    ? entry_SYSCALL_64_after_hwframe+0x49/0xbe
    
    Fix that by testing preempt_count() before calling cond_resched().
    
    In kernel 5.8 this can't happen any more due to the entry code rework
    (more than 100 patches, so not a candidate for backporting).
    
    The issue was introduced in kernel 4.3, so this patch should go into
    all stable kernels in [4.3 ... 5.7].
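
    A minimal sketch of the change, assuming the 4.9-era layout of
    drivers/xen/preempt.c (the new condition is the !preempt_count()
    test):

        asmlinkage __visible void xen_maybe_preempt_hcall(void)
        {
                if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)
                             && need_resched()
                             && !preempt_count())) {
                        /*
                         * Clear flag as we may be rescheduled onto a
                         * different cpu.
                         */
                        __this_cpu_write(xen_in_preemptible_hcall, false);
                        local_irq_enable();
                        cond_resched();
                        local_irq_disable();
                        __this_cpu_write(xen_in_preemptible_hcall, true);
                }
        }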
    
    Reported-by: Sarah Newman <srn@prgmr.com>
    Fixes: 0fa2f5cb2b0ecd8 ("sched/preempt, xen: Use need_resched() instead of should_resched()")
    Cc: Sarah Newman <srn@prgmr.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Tested-by: Chris Brannon <cmb@prgmr.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit fe5f83b16307b1772a85496f2079309e9d933823
Author: Peter Xu <peterx@redhat.com>
Date:   Thu Aug 6 23:26:11 2020 -0700

    mm/hugetlb: fix calculation of adjust_range_if_pmd_sharing_possible
    
    commit 75802ca66354a39ab8e35822747cd08b3384a99a upstream.
    
    This is found by code observation only.
    
    Firstly, the worst case scenario should assume the whole range was covered
    by pmd sharing.  The old algorithm might not work as expected for ranges
    like (1g-2m, 1g+2m), where the adjusted range should be (0, 1g+2m) but the
    expected range should be (0, 2g).
    
    While at it, remove the loop, since it should not be required. With
    that, the new code should also be faster when the invalidated range
    is huge.
    
    Mike said:
    
    : With range (1g-2m, 1g+2m) within a vma (0, 2g) the existing code will only
    : adjust to (0, 1g+2m) which is incorrect.
    :
    : We should cc stable.  The original reason for adjusting the range was to
    : prevent data corruption (getting wrong page).  Since the range is not
    : always adjusted correctly, the potential for corruption still exists.
    :
    : However, I am fairly confident that adjust_range_if_pmd_sharing_possible
    : is only going to be called in two cases:
    :
    : 1) for a single page
    : 2) for range == entire vma
    :
    : In those cases, the current code should produce the correct results.
    :
    : To be safe, let's just cc stable.
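
    A sketch of the new calculation (mirroring the upstream hunk; PUD_SIZE
    is the pmd-sharing granularity):

        unsigned long v_start = ALIGN(vma->vm_start, PUD_SIZE),
                      v_end = ALIGN_DOWN(vma->vm_end, PUD_SIZE);

        /* vma must span at least one aligned PUD, and the range must
         * at least partially overlap it */
        if (!(vma->vm_flags & VM_MAYSHARE) || !(v_end > v_start) ||
            (*end <= v_start) || (*start >= v_end))
                return;

        /* Extend the range to be PUD aligned for a worst case scenario */
        if (*start > v_start)
                *start = ALIGN_DOWN(*start, PUD_SIZE);
        if (*end < v_end)
                *end = ALIGN(*end, PUD_SIZE);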
    
    Fixes: 017b1660df89 ("mm: migration: fix migration of huge PMD shared pages")
    Signed-off-by: Peter Xu <peterx@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: <stable@vger.kernel.org>
    Link: http://lkml.kernel.org/r/20200730201636.74778-1-peterx@redhat.com
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b3ce6ca929dc677f7e443eb3012dfc7a433b1161
Author: Al Viro <viro@zeniv.linux.org.uk>
Date:   Sat Aug 22 18:25:52 2020 -0400

    do_epoll_ctl(): clean the failure exits up a bit
    
    commit 52c479697c9b73f628140dcdfcd39ea302d05482 upstream.
    
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 9bbd20326fefd709dc6e7cbf7442ea640bb5f601
Author: Marc Zyngier <maz@kernel.org>
Date:   Wed Aug 19 17:12:17 2020 +0100

    epoll: Keep a reference on files added to the check list
    
    commit a9ed4a6560b8562b7e2e2bed9527e88001f7b682 upstream.
    
    When adding a new fd to an epoll, and this new fd is itself an
    epoll fd, we recursively scan the fds attached to it
    to detect cycles, and add non-epoll files to a "check list"
    that gets subsequently parsed.
    
    However, this check list isn't completely safe when deletions
    can happen concurrently. To sidestep the issue, make sure that
    a struct file placed on the check list sees its f_count increased,
    ensuring that a concurrent deletion won't result in the file
    disappearing from under our feet.
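
    A sketch of the idea (per the upstream hunk in ep_loop_check_proc();
    the matching fput() happens when the check list is cleared):

        /* take a reference before queueing the file on the check list */
        get_file(epi->ffd.file);
        list_add(&epi->ffd.file->f_tfile_llink, &tfile_check_list);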
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit a7fef53a41dafc2a5b5a185b29c6e94e83f8b516
Author: Michael Ellerman <mpe@ellerman.id.au>
Date:   Fri Jul 24 19:25:25 2020 +1000

    powerpc: Allow 4224 bytes of stack expansion for the signal frame
    
    commit 63dee5df43a31f3844efabc58972f0a206ca4534 upstream.
    
    We have powerpc specific logic in our page fault handling to decide if
    an access to an unmapped address below the stack pointer should expand
    the stack VMA.
    
    The code was originally added in 2004 "ported from 2.4". The rough
    logic is that the stack is allowed to grow to 1MB with no extra
    checking. Over 1MB the access must be within 2048 bytes of the stack
    pointer, or be from a user instruction that updates the stack pointer.
    
    The 2048 byte allowance below the stack pointer is there to cover the
    288 byte "red zone" as well as the "about 1.5kB" needed by the signal
    delivery code.
    
    Unfortunately since then the signal frame has expanded, and is now
    4224 bytes on 64-bit kernels with transactional memory enabled. This
    means if a process has consumed more than 1MB of stack, and its stack
    pointer lies less than 4224 bytes from the next page boundary, signal
    delivery will fault when trying to expand the stack and the process
    will see a SEGV.
    
    The total size of the signal frame is the size of struct rt_sigframe
    (which includes the red zone) plus __SIGNAL_FRAMESIZE (128 bytes on
    64-bit).
    
    The 2048 byte allowance was correct until 2008 as the signal frame
    was:
    
    struct rt_sigframe {
            struct ucontext    uc;                           /*     0  1440 */
            /* --- cacheline 11 boundary (1408 bytes) was 32 bytes ago --- */
            long unsigned int          _unused[2];           /*  1440    16 */
            unsigned int               tramp[6];             /*  1456    24 */
            struct siginfo *           pinfo;                /*  1480     8 */
            void *                     puc;                  /*  1488     8 */
            struct siginfo     info;                         /*  1496   128 */
            /* --- cacheline 12 boundary (1536 bytes) was 88 bytes ago --- */
            char                       abigap[288];          /*  1624   288 */
    
            /* size: 1920, cachelines: 15, members: 7 */
            /* padding: 8 */
    };
    
    1920 + 128 = 2048
    
    Then in commit ce48b2100785 ("powerpc: Add VSX context save/restore,
    ptrace and signal support") (Jul 2008) the signal frame expanded to
    2304 bytes:
    
    struct rt_sigframe {
            struct ucontext    uc;                           /*     0  1696 */      <--
            /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
            long unsigned int          _unused[2];           /*  1696    16 */
            unsigned int               tramp[6];             /*  1712    24 */
            struct siginfo *           pinfo;                /*  1736     8 */
            void *                     puc;                  /*  1744     8 */
            struct siginfo     info;                         /*  1752   128 */
            /* --- cacheline 14 boundary (1792 bytes) was 88 bytes ago --- */
            char                       abigap[288];          /*  1880   288 */
    
            /* size: 2176, cachelines: 17, members: 7 */
            /* padding: 8 */
    };
    
    2176 + 128 = 2304
    
    At this point we should have been exposed to the bug, though as far as
    I know it was never reported. I no longer have a system old enough to
    easily test on.
    
    Then in 2010 commit 320b2b8de126 ("mm: keep a guard page below a
    grow-down stack segment") caused our stack expansion code to never
    trigger, as there was always a VMA found for a write up to PAGE_SIZE
    below r1.
    
    That meant the bug was hidden as we continued to expand the signal
    frame in commit 2b0a576d15e0 ("powerpc: Add new transactional memory
    state to the signal context") (Feb 2013):
    
    struct rt_sigframe {
            struct ucontext    uc;                           /*     0  1696 */
            /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
            struct ucontext    uc_transact;                  /*  1696  1696 */      <--
            /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */
            long unsigned int          _unused[2];           /*  3392    16 */
            unsigned int               tramp[6];             /*  3408    24 */
            struct siginfo *           pinfo;                /*  3432     8 */
            void *                     puc;                  /*  3440     8 */
            struct siginfo     info;                         /*  3448   128 */
            /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */
            char                       abigap[288];          /*  3576   288 */
    
            /* size: 3872, cachelines: 31, members: 8 */
            /* padding: 8 */
            /* last cacheline: 32 bytes */
    };
    
    3872 + 128 = 4000
    
    And commit 573ebfa6601f ("powerpc: Increase stack redzone for 64-bit
    userspace to 512 bytes") (Feb 2014):
    
    struct rt_sigframe {
            struct ucontext    uc;                           /*     0  1696 */
            /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
            struct ucontext    uc_transact;                  /*  1696  1696 */
            /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */
            long unsigned int          _unused[2];           /*  3392    16 */
            unsigned int               tramp[6];             /*  3408    24 */
            struct siginfo *           pinfo;                /*  3432     8 */
            void *                     puc;                  /*  3440     8 */
            struct siginfo     info;                         /*  3448   128 */
            /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */
            char                       abigap[512];          /*  3576   512 */      <--
    
            /* size: 4096, cachelines: 32, members: 8 */
            /* padding: 8 */
    };
    
    4096 + 128 = 4224
    
    Then finally in 2017, commit 1be7107fbe18 ("mm: larger stack guard
    gap, between vmas") exposed us to the existing bug, because it changed
    the stack VMA to be the correct/real size, meaning our stack expansion
    code is now triggered.
    
    Fix it by increasing the allowance to 4224 bytes.
    
    Hard-coding 4224 is obviously unsafe against future expansions of the
    signal frame in the same way as the existing code. We can't easily use
    sizeof() because the signal frame structure is not in a header. We
    will either fix that, or rip out all the custom stack expansion
    checking logic entirely.
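
    Roughly, in the arch/powerpc/mm/fault.c stack-expansion check, only
    the constant changes (a sketch, not the exact hunk):

        /* The signal delivery code writes a bit over 4kB below r1 */
        if (address + 4224 < uregs->gpr[1] &&
            (!user_mode(regs) || !store_updates_sp(regs)))
                goto bad_area;  /* was: address + 2048 */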
    
    Fixes: ce48b2100785 ("powerpc: Add VSX context save/restore, ptrace and signal support")
    Cc: stable@vger.kernel.org # v2.6.27+
    Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
    Tested-by: Daniel Axtens <dja@axtens.net>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20200724092528.1578671-2-mpe@ellerman.id.au
    Signed-off-by: Daniel Axtens <dja@axtens.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 13ad432444c10cc72f185d345de00e8b21dfc70c
Author: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Date:   Thu Aug 20 11:48:44 2020 +0530

    powerpc/pseries: Do not initiate shutdown when system is running on UPS
    
    commit 90a9b102eddf6a3f987d15f4454e26a2532c1c98 upstream.
    
    As per PAPR we have to look for both EPOW sensor value and event
    modifier to identify the type of event and take appropriate action.
    
    In LoPAPR v1.1 section 10.2.2 includes table 136 "EPOW Action Codes":
    
      SYSTEM_SHUTDOWN 3
    
      The system must be shut down. An EPOW-aware OS logs the EPOW error
      log information, then schedules the system to be shut down to begin
      after an OS defined delay interval (default is 10 minutes).
    
    Then in section 10.3.2.2.8 there is table 146 "Platform Event Log
    Format, Version 6, EPOW Section", which includes the "EPOW Event
    Modifier":
    
      For EPOW sensor value = 3
      0x01 = Normal system shutdown with no additional delay
      0x02 = Loss of utility power, system is running on UPS/Battery
      0x03 = Loss of system critical functions, system should be shutdown
      0x04 = Ambient temperature too high
      All other values = reserved
    
    We have a user space tool (rtas_errd) on the LPAR to monitor for
    EPOW_SHUTDOWN_ON_UPS. Once it gets such an event it initiates a
    shutdown after a predefined time. It also starts monitoring for any
    new EPOW events. If it receives a "Power restored" event before the
    predefined time it will cancel the shutdown. Otherwise, after the
    predefined time, it will shut down the system.
    
    Commit 79872e35469b ("powerpc/pseries: All events of
    EPOW_SYSTEM_SHUTDOWN must initiate shutdown") changed our handling of
    the "on UPS/Battery" case, to immediately shutdown the system. This
    breaks existing setups that rely on the userspace tool to delay
    shutdown and let the system run on the UPS.
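
    A sketch of the restored handling in the pseries EPOW code (per the
    upstream hunk, which drops the immediate poweroff for this case):

        case EPOW_SHUTDOWN_ON_UPS:
                pr_emerg("Loss of system power detected. System is "
                         "running on UPS/battery. Check RTAS error "
                         "log for details");
                break;  /* no orderly_poweroff(); rtas_errd decides */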
    
    Fixes: 79872e35469b ("powerpc/pseries: All events of EPOW_SYSTEM_SHUTDOWN must initiate shutdown")
    Cc: stable@vger.kernel.org # v4.0+
    Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
    [mpe: Massage change log and add PAPR references]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20200820061844.306460-1-hegdevasant@linux.vnet.ibm.com
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit b1fea03057ef8cd2f2cdac54701b0d92d39564b8
Author: Tom Rix <trix@redhat.com>
Date:   Fri Aug 21 06:56:00 2020 -0700

    net: dsa: b53: check for timeout
    
    [ Upstream commit 774d977abfd024e6f73484544b9abe5a5cd62de7 ]
    
    clang static analysis reports this problem
    
    b53_common.c:1583:13: warning: The left expression of the compound
      assignment is an uninitialized value. The computed value will
      also be garbage
            ent.port &= ~BIT(port);
            ~~~~~~~~ ^
    
    ent is set by a successful call to b53_arl_read().  Unsuccessful
    calls are caught by a switch statement handling specific returns.
    b53_arl_read() calls b53_arl_op_wait(), which can fail with the
    unhandled -ETIMEDOUT.
    
    So add -ETIMEDOUT to the switch statement.  Because
    b53_arl_op_wait() already prints out a message, do not add another
    one.
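
    A sketch of the added case (the remaining cases are unchanged):

        ret = b53_arl_read(dev, mac, vid, &ent, &idx, &is_valid);
        switch (ret) {
        case -ETIMEDOUT:
                return ret;     /* b53_arl_op_wait() already logged it */
        /* -ENOSPC, 0, and the default case handled as before */
        }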
    
    Fixes: 1da6df85c6fb ("net: dsa: b53: Implement ARL add/del/dump operations")
    Signed-off-by: Tom Rix <trix@redhat.com>
    Acked-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit f46eec9705309e53389e5ea9659876409c8d8d5b
Author: Dinghao Liu <dinghao.liu@zju.edu.cn>
Date:   Thu Aug 13 16:41:10 2020 +0800

    ASoC: intel: Fix memleak in sst_media_open
    
    [ Upstream commit 062fa09f44f4fb3776a23184d5d296b0c8872eb9 ]
    
    When power_up_sst() fails, stream needs to be freed
    just like when try_module_get() fails. However, the current
    code returns directly and ends up leaking memory.
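
    A sketch of the fix in sst_media_open() (label name per the upstream
    patch):

        ret_val = power_up_sst(stream);
        if (ret_val < 0)
                goto out_power_up;      /* was: return ret_val; */
        /* ... */

        out_power_up:
                kfree(stream);
                return ret_val;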
    
    Fixes: 0121327c1a68b ("ASoC: Intel: mfld-pcm: add control for powering up/down dsp")
    Signed-off-by: Dinghao Liu <dinghao.liu@zju.edu.cn>
    Acked-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
    Link: https://lore.kernel.org/r/20200813084112.26205-1-dinghao.liu@zju.edu.cn
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 3f9f6b032d23c2668b816dc3d69af486aca56dba
Author: Fugang Duan <fugang.duan@nxp.com>
Date:   Thu Aug 13 15:13:14 2020 +0800

    net: fec: correct the error path for regulator disable in probe
    
    [ Upstream commit c6165cf0dbb82ded90163dce3ac183fc7a913dc4 ]
    
    Correct the error path for regulator disable.
    
    Fixes: 9269e5560b26 ("net: fec: add phy-reset-gpios PROBE_DEFER check")
    Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit f3926733dcdcee8e26d65f2f5ae891c3fdea71c5
Author: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com>
Date:   Thu Aug 6 13:40:59 2020 +0000

    i40e: Set RX_ONLY mode for unicast promiscuous on VLAN
    
    [ Upstream commit 4bd5e02a2ed1575c2f65bd3c557a077dd399f0e8 ]
    
    A trusted VF with unicast promiscuous mode set could listen to the
    TX traffic of other VFs.
    Set unicast promiscuous mode to RX-only traffic if the VSI has a
    port VLAN configured. Rename the misleading
    I40E_AQC_SET_VSI_PROMISC_TX bit to I40E_AQC_SET_VSI_PROMISC_RX_ONLY.
    Align unicast promiscuous with VLAN to the one without VLAN.
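
    A sketch of the resulting flag handling in
    i40e_aq_set_vsi_unicast_promiscuous() (firmware API version checks
    elided):

        if (set) {
                flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST;
                if (rx_only_promisc)
                        flags |= I40E_AQC_SET_VSI_PROMISC_RX_ONLY;
        }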
    
    Fixes: 6c41a7606967 ("i40e: Add promiscuous on VLAN support")
    Fixes: 3b1200891b7f ("i40e: When in promisc mode apply promisc mode to Tx Traffic as well")
    Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com>
    Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
    Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
    Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
    Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 539ae3e03875dacaa9c388aff141ccbb4ef4ecb5
Author: Eric Sandeen <sandeen@redhat.com>
Date:   Wed Jun 17 14:19:04 2020 -0500

    ext4: fix potential negative array index in do_split()
    
    [ Upstream commit 5872331b3d91820e14716632ebb56b1399b34fe1 ]
    
    If for any reason a directory passed to do_split() does not have enough
    active entries to exceed half the size of the block, we can end up
    iterating over all "count" entries without finding a split point.
    
    In this case, count == move, and split will be zero, and we will
    attempt a negative index into map[].
    
    Guard against this by detecting this case, and falling back to
    split-to-half-of-count instead; in this case we will still have
    plenty of space (> half blocksize) in each split block.
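
    A sketch of the guard (following the upstream hunk, where the
    preceding loop ends with i > 0 once the accumulated size exceeds
    half the block):

        /* map index at which we will split */
        if (i > 0)
                split = count - move;
        else
                split = count / 2;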
    
    Fixes: ef2b02d3e617 ("ext34: ensure do_split leaves enough free space in both blocks")
    Signed-off-by: Eric Sandeen <sandeen@redhat.com>
    Reviewed-by: Andreas Dilger <adilger@dilger.ca>
    Reviewed-by: Jan Kara <jack@suse.cz>
    Link: https://lore.kernel.org/r/f53e246b-647c-64bb-16ec-135383c70ad7@redhat.com
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit cc0c6b17f948bd40862ea8db34e0e46741d995fb
Author: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Date:   Tue Aug 11 18:33:54 2020 -0700

    alpha: fix annotation of io{read,write}{16,32}be()
    
    [ Upstream commit bd72866b8da499e60633ff28f8a4f6e09ca78efe ]
    
    These accessors must be used to read/write a big-endian bus.  The value
    returned or written is native-endian.
    
    However, these accessors are defined using be{16,32}_to_cpu() or
    cpu_to_be{16,32}() to make the endian conversion but these expect a
    __be{16,32} when none is present.  Keeping them would need a force cast
    that would solve nothing at all.
    
    So, do the conversion using swab{16,32}, like done in asm-generic for
    similar situations.
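
    The resulting definitions (a sketch, mirroring the asm-generic
    pattern mentioned above):

        #define ioread16be(p) swab16(ioread16(p))
        #define ioread32be(p) swab32(ioread32(p))
        #define iowrite16be(v, p) iowrite16(swab16(v), (p))
        #define iowrite32be(v, p) iowrite32(swab32(v), (p))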
    
    Reported-by: kernel test robot <lkp@intel.com>
    Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Cc: Richard Henderson <rth@twiddle.net>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Stephen Boyd <sboyd@kernel.org>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Link: http://lkml.kernel.org/r/20200622114232.80039-1-luc.vanoostenryck@gmail.com
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit a7631e087f1e0eac2fa06a54674e8fdea794105c
Author: Eiichi Tsukata <devel@etsukata.com>
Date:   Thu Aug 6 15:18:48 2020 -0700

    xfs: Fix UBSAN null-ptr-deref in xfs_sysfs_init
    
    [ Upstream commit 96cf2a2c75567ff56195fe3126d497a2e7e4379f ]
    
    If xfs_sysfs_init is called with parent_kobj == NULL, UBSAN
    shows the following warning:
    
      UBSAN: null-ptr-deref in ./fs/xfs/xfs_sysfs.h:37:23
      member access within null pointer of type 'struct xfs_kobj'
      Call Trace:
       dump_stack+0x10e/0x195
       ubsan_type_mismatch_common+0x241/0x280
       __ubsan_handle_type_mismatch_v1+0x32/0x40
       init_xfs_fs+0x12b/0x28f
       do_one_initcall+0xdd/0x1d0
       do_initcall_level+0x151/0x1b6
       do_initcalls+0x50/0x8f
       do_basic_setup+0x29/0x2b
       kernel_init_freeable+0x19f/0x20b
       kernel_init+0x11/0x1e0
       ret_from_fork+0x22/0x30
    
    Fix it by checking parent_kobj before the code accesses its member.
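
    A sketch of the fixed helper in fs/xfs/xfs_sysfs.h:

        static inline int
        xfs_sysfs_init(struct xfs_kobj *kobj, struct kobj_type *ktype,
                       struct xfs_kobj *parent_kobj, const char *name)
        {
                struct kobject *parent;

                parent = parent_kobj ? &parent_kobj->kobject : NULL;
                init_completion(&kobj->complete);
                return kobject_init_and_add(&kobj->kobject, ktype,
                                            parent, "%s", name);
        }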
    
    Signed-off-by: Eiichi Tsukata <devel@etsukata.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    [darrick: minor whitespace edits]
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 057069c26fd3b5157c59b02d24f0ef5217e195ef
Author: Mao Wenan <wenan.mao@linux.alibaba.com>
Date:   Sun Aug 2 15:44:09 2020 +0800

    virtio_ring: Avoid loop when vq is broken in virtqueue_poll
    
    [ Upstream commit 481a0d7422db26fb63e2d64f0652667a5c6d0f3e ]
    
    A loop may occur if vq->broken is true:
    virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split
    will return NULL, so virtnet_poll will reschedule napi to
    receive packets, driving cpu usage (si) to 100%.
    
    call trace as below:
    virtnet_poll
            virtnet_receive
                    virtqueue_get_buf_ctx
                            virtqueue_get_buf_ctx_packed
                            virtqueue_get_buf_ctx_split
            virtqueue_napi_complete
                    virtqueue_poll           //return true
                    virtqueue_napi_schedule //it will reschedule napi
    
    To fix this, return false if vq is broken in virtqueue_poll().
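
    A sketch of the check at the top of virtqueue_poll() (4.9 has only
    the split ring, so the tail of the function is the split-ring test):

        bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
        {
                struct vring_virtqueue *vq = to_vvq(_vq);

                if (unlikely(vq->broken))
                        return false;

                virtio_mb(vq->weak_barriers);
                return (u16)last_used_idx !=
                       virtio16_to_cpu(_vq->vdev, vq->vring.used->idx);
        }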
    
    Signed-off-by: Mao Wenan <wenan.mao@linux.alibaba.com>
    Acked-by: Michael S. Tsirkin <mst@redhat.com>
    Link: https://lore.kernel.org/r/1596354249-96204-1-git-send-email-wenan.mao@linux.alibaba.com
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    Acked-by: Jason Wang <jasowang@redhat.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 958f6e406c1c82030d6bb3d070e5adf12e06f57a
Author: Javed Hasan <jhasan@marvell.com>
Date:   Wed Jul 29 01:18:23 2020 -0700

    scsi: libfc: Free skb in fc_disc_gpn_id_resp() for valid cases
    
    [ Upstream commit ec007ef40abb6a164d148b0dc19789a7a2de2cc8 ]
    
    In fc_disc_gpn_id_resp(), the skb is supposed to be freed in all
    cases except for the PTR_ERR case. However, on some paths it wasn't.
    
    Fix this by calling fc_frame_free(fp) before the function returns.
    
    Link: https://lore.kernel.org/r/20200729081824.30996-2-jhasan@marvell.com
    Reviewed-by: Girish Basrur <gbasrur@marvell.com>
    Reviewed-by: Santosh Vernekar <svernekar@marvell.com>
    Reviewed-by: Saurav Kashyap <skashyap@marvell.com>
    Reviewed-by: Shyam Sundar <ssundar@marvell.com>
    Signed-off-by: Javed Hasan <jhasan@marvell.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 4afde5c2320a3277caadda6326d340f6b30e360e
Author: Zhe Li <lizhe67@huawei.com>
Date:   Fri Jun 19 17:06:35 2020 +0800

    jffs2: fix UAF problem
    
    [ Upstream commit 798b7347e4f29553db4b996393caf12f5b233daf ]
    
    The log of UAF problem is listed below.
    BUG: KASAN: use-after-free in jffs2_rmdir+0xa4/0x1cc [jffs2] at addr c1f165fc
    Read of size 4 by task rm/8283
    =============================================================================
    BUG kmalloc-32 (Tainted: P    B      O   ): kasan: bad access detected
    -----------------------------------------------------------------------------
    
    INFO: Allocated in 0xbbbbbbbb age=3054364 cpu=0 pid=0
            0xb0bba6ef
            jffs2_write_dirent+0x11c/0x9c8 [jffs2]
            __slab_alloc.isra.21.constprop.25+0x2c/0x44
            __kmalloc+0x1dc/0x370
            jffs2_write_dirent+0x11c/0x9c8 [jffs2]
            jffs2_do_unlink+0x328/0x5fc [jffs2]
            jffs2_rmdir+0x110/0x1cc [jffs2]
            vfs_rmdir+0x180/0x268
            do_rmdir+0x2cc/0x300
            ret_from_syscall+0x0/0x3c
    INFO: Freed in 0x205b age=3054364 cpu=0 pid=0
            0x2e9173
            jffs2_add_fd_to_list+0x138/0x1dc [jffs2]
            jffs2_add_fd_to_list+0x138/0x1dc [jffs2]
            jffs2_garbage_collect_dirent.isra.3+0x21c/0x288 [jffs2]
            jffs2_garbage_collect_live+0x16bc/0x1800 [jffs2]
            jffs2_garbage_collect_pass+0x678/0x11d4 [jffs2]
            jffs2_garbage_collect_thread+0x1e8/0x3b0 [jffs2]
            kthread+0x1a8/0x1b0
            ret_from_kernel_thread+0x5c/0x64
    Call Trace:
    [c17ddd20] [c02452d4] kasan_report.part.0+0x298/0x72c (unreliable)
    [c17ddda0] [d2509680] jffs2_rmdir+0xa4/0x1cc [jffs2]
    [c17dddd0] [c026da04] vfs_rmdir+0x180/0x268
    [c17dde00] [c026f4e4] do_rmdir+0x2cc/0x300
    [c17ddf40] [c001a658] ret_from_syscall+0x0/0x3c
    
    The root cause is that we don't take "jffs2_inode_info.sem" before
    we scan the list "jffs2_inode_info.dents" in jffs2_rmdir().
    This patch takes "jffs2_inode_info.sem" before scanning
    "jffs2_inode_info.dents" to solve the UAF problem.
    
    Signed-off-by: Zhe Li <lizhe67@huawei.com>
    Reviewed-by: Hou Tao <houtao1@huawei.com>
    Signed-off-by: Richard Weinberger <richard@nod.at>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 00d495ebd489255747e93c7162cd12b2338f3664
Author: Darrick J. Wong <darrick.wong@oracle.com>
Date:   Tue Jul 14 10:36:09 2020 -0700

    xfs: fix inode quota reservation checks
    
    [ Upstream commit f959b5d037e71a4d69b5bf71faffa065d9269b4a ]
    
    xfs_trans_dqresv is the function that we use to make reservations
    against resource quotas.  Each resource contains two counters: the
    q_core counter, which tracks resources allocated on disk; and the dquot
    reservation counter, which tracks how much of that resource has either
    been allocated or reserved by threads that are working on metadata
    updates.
    
    For disk blocks, we compare the proposed reservation counter against the
    hard and soft limits to decide if we're going to fail the operation.
    However, for inodes we inexplicably compare against the q_core counter,
    not the incore reservation count.
    
    Since the q_core counter is always lower than the reservation count and
    we unlock the dquot between reservation and transaction commit, this
    means that multiple threads can reserve the last inode count before we
    hit the hard limit, and when they commit, we'll be well over the hard
    limit.
    
    Fix this by checking against the incore inode reservation counter, since
    we would appear to maintain that correctly (and that's what we report in
    GETQUOTA).
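
    A sketch of the change in xfs_trans_dqresv() (field names per the
    pre-5.9 dquot layout):

        total_count = dqp->q_res_icount + ninos;
        /* was: total_count = be64_to_cpu(dqp->q_core.d_icount) + ninos; */
        if (hardlimit && total_count > hardlimit) {
                xfs_quota_warn(mp, dqp, QUOTA_NL_IHARDWARN);
                goto error_return;
        }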
    
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    Reviewed-by: Allison Collins <allison.henderson@oracle.com>
    Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit aac5d7539f28fe4ae5a78632299fed270adcb1b3
Author: Greg Ungerer <gerg@linux-m68k.org>
Date:   Sat Jun 13 17:17:52 2020 +1000

    m68knommu: fix overwriting of bits in ColdFire V3 cache control
    
    [ Upstream commit bdee0e793cea10c516ff48bf3ebb4ef1820a116b ]
    
    The Cache Control Register (CACR) of the ColdFire V3 has bits that
    control high level caching functions, and also enable/disable the use
    of the alternate stack pointer register (the EUSP bit) to provide
    separate supervisor and user stack pointer registers. The code as
    it is today will blindly clear the EUSP bit on cache actions like
    invalidation. So it is broken for this case - and that will result
    in failed booting (interrupt entry and exit processing will be
    completely hosed).
    
    This only affects ColdFire V3 parts that support the alternate stack
    register (like the 5329 for example) - generally speaking new parts do,
    older parts don't. It has no impact on ColdFire V3 parts with the single
    stack pointer, like the 5307 for example.
    
    Fix the cache bit defines used, so they maintain the EUSP bit when
    carrying out cache actions through the CACR register.
    
    Signed-off-by: Greg Ungerer <gerg@linux-m68k.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 609e9302f163f4c516d1cf722006aa46bea5d147
Author: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Date:   Tue Jul 21 22:24:07 2020 -0700

    Input: psmouse - add a newline when printing 'proto' by sysfs
    
    [ Upstream commit 4aec14de3a15cf9789a0e19c847f164776f49473 ]
    
    When I cat the parameter 'proto' via sysfs, it displays as follows.
    It's better to add a newline for easier reading.
    
    root@syzkaller:~# cat /sys/module/psmouse/parameters/proto
    autoroot@syzkaller:~#
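
    A sketch of the one-line fix (function per the upstream patch):

        static int psmouse_get_maxproto(char *buffer,
                                        const struct kernel_param *kp)
        {
                int type = *((unsigned int *)kp->arg);

                return sprintf(buffer, "%s\n",
                               psmouse_protocol_by_type(type)->name);
        }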
    
    Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
    Link: https://lore.kernel.org/r/20200720073846.120724-1-wangxiongfeng2@huawei.com
    Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 0bd77f37daf707706ebe5c98f1744c317e71540a
Author: Evgeny Novikov <novikov@ispras.ru>
Date:   Fri Jul 10 11:02:23 2020 +0200

    media: vpss: clean up resources in init
    
    [ Upstream commit 9c487b0b0ea7ff22127fe99a7f67657d8730ff94 ]
    
    If platform_driver_register() fails within vpss_init() resources are not
    cleaned up. The patch fixes this issue by introducing the corresponding
    error handling.
    
    Found by Linux Driver Verification project (linuxtesting.org).
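
    A sketch of the introduced unwind (labels per the upstream patch):

        ret = platform_driver_register(&vpss_driver);
        if (ret)
                goto err_pd_register;
        return 0;

        err_pd_register:
                iounmap(oper_cfg.vpss_regs_base2);
        err_ioremap:
                release_mem_region(VPSS_CLK_CTRL, 4);
                return ret;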
    
    Signed-off-by: Evgeny Novikov <novikov@ispras.ru>
    Signed-off-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
    Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 3264112eb59b242aac2e3202464478452b6cc942
Author: Chuhong Yuan <hslester96@gmail.com>
Date:   Fri Jun 5 18:17:28 2020 +0200

    media: budget-core: Improve exception handling in budget_register()
    
    [ Upstream commit fc0456458df8b3421dba2a5508cd817fbc20ea71 ]
    
    budget_register() performs no cleanup when it fails partway through.
    Add the missing undo calls for error handling to fix it.
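
    A sketch of the shape of the fix (per the upstream hunk; the later
    registration steps jump to an unwind label instead of returning):

        ret = dvbdemux->dmx.add_frontend(&dvbdemux->dmx,
                                         &budget->hw_frontend);
        if (ret < 0)
                goto err_release_dmx;
        /* ... */

        err_release_dmx:
                dvb_dmx_release(&budget->demux);
                return ret;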
    
    Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
    Signed-off-by: Sean Young <sean@mess.org>
    Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit e42d52cafb1a8e7887ce135f99c1be889b47b5d2
Author: Stanley Chu <stanley.chu@mediatek.com>
Date:   Fri Jun 12 09:26:24 2020 +0800

    scsi: ufs: Add DELAY_BEFORE_LPM quirk for Micron devices
    
    [ Upstream commit c0a18ee0ce78d7957ec1a53be35b1b3beba80668 ]
    
    It is confirmed that Micron devices need the DELAY_BEFORE_LPM quirk
    to have a delay before VCC is powered off. Add the Micron vendor ID
    and this quirk for Micron devices.
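
    A sketch of the quirk entry (vendor ID per the upstream patch):

        #define UFS_VENDOR_MICRON      0x12C

        /* in the ufs_fixups[] deviations table: */
        UFS_FIX(UFS_VENDOR_MICRON, UFS_ANY_MODEL,
                UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM),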
    
    Link: https://lore.kernel.org/r/20200612012625.6615-2-stanley.chu@mediatek.com
    Reviewed-by: Bean Huo <beanhuo@micron.com>
    Reviewed-by: Alim Akhtar <alim.akhtar@samsung.com>
    Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit f6f3fdf5052bd0a1bb40188feceaa027c40ebc29
Author: Jan Kara <jack@suse.cz>
Date:   Fri Jul 31 18:21:35 2020 +0200

    ext4: fix checking of directory entry validity for inline directories
    
    [ Upstream commit 7303cb5bfe845f7d43cd9b2dbd37dbb266efda9b ]
    
    ext4_search_dir() and ext4_generic_delete_entry() can be called both
    for standard directory blocks and for inline directories stored
    inside the inode or inline xattr space. For the second case we
    didn't call ext4_check_dir_entry() with proper constraints, which
    could result in accepting a corrupted directory entry as well as
    false positive filesystem errors like:
    
    EXT4-fs error (device dm-0): ext4_search_dir:1395: inode #28320400:
    block 113246792: comm dockerd: bad entry in directory: directory entry too
    close to block end - offset=0, inode=28320403, rec_len=32, name_len=8,
    size=4096
    
    Fix the arguments passed to ext4_check_dir_entry().
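
    A sketch of the corrected call in ext4_search_dir() (per the
    upstream hunk; search_buf/buf_size describe the buffer actually
    being walked, which for inline directories is not bh->b_data /
    bh->b_size):

        if (ext4_check_dir_entry(dir, NULL, de, bh, search_buf,
                                 buf_size, offset))
                return -1;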
    
    Fixes: 109ba779d6cc ("ext4: check for directory entries too close to block end")
    CC: stable@vger.kernel.org
    Signed-off-by: Jan Kara <jack@suse.cz>
    Link: https://lore.kernel.org/r/20200731162135.8080-1-jack@suse.cz
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit b522f43beefc90961d9fccf83e7f4c1dba724bc4
Author: Eric Biggers <ebiggers@google.com>
Date:   Mon Apr 24 10:00:13 2017 -0700

    ext4: clean up ext4_match() and callers
    
    [ Upstream commit d9b9f8d5a88cb7881d9f1c2b7e9de9a3fe1dc9e2 ]
    
    When ext4 encryption was originally merged, we were encrypting the
    user-specified filename in ext4_match(), introducing a lot of additional
    complexity into ext4_match() and its callers.  This has since been
    changed to encrypt the filename earlier, so we can remove the gunk
    that's no longer needed.  This more or less reverts ext4_search_dir()
    and ext4_find_dest_de() to the way they were in the v4.0 kernel.
    
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 1a4029e931dc7d9dd0ba88cb8165b2d8737d52c5
Author: Charan Teja Reddy <charante@codeaurora.org>
Date:   Thu Aug 20 17:42:27 2020 -0700

    mm, page_alloc: fix core hung in free_pcppages_bulk()
    
    commit 88e8ac11d2ea3acc003cf01bb5a38c8aa76c3cfd upstream.
    
    The following race is observed with the repeated online, offline and a
    delay between two successive online of memory blocks of movable zone.
    
    P1                                              P2
    
    Online the first memory block in
    the movable zone. The pcp struct
    values are initialized to default
    values,i.e., pcp->high = 0 &
    pcp->batch = 1.
    
                                            Allocate the pages from the
                                            movable zone.
    
    Try to Online the second memory
    block in the movable zone thus it
    entered the online_pages() but yet
    to call zone_pcp_update().
                                            This process is entered into
                                            the exit path thus it tries
                                            to release the order-0 pages
                                            to pcp lists through
                                            free_unref_page_commit().
                                            As pcp->high = 0, pcp->count = 1
                                            proceed to call the function
                                            free_pcppages_bulk().
    Update the pcp values thus the
    new pcp values are like, say,
    pcp->high = 378, pcp->batch = 63.
                                            Read the pcp's batch value using
                                            READ_ONCE() and pass the same to
                                            free_pcppages_bulk(), pcp values
                                            passed here are, batch = 63,
                                            count = 1.
    
                                            Since the number of pages in
                                            the pcp lists is less than
                                            ->batch, it will get stuck
                                            in the while(list_empty(list))
                                            loop with interrupts disabled,
                                            thus a core hang.
    
    Avoid this by ensuring free_pcppages_bulk() is called with proper count of
    pcp list pages.
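
    A sketch of the fix at the top of free_pcppages_bulk() (comment from
    the upstream patch):

        /*
         * Ensure proper count is passed which otherwise would stuck in
         * the below while (list_empty(list)) loop.
         */
        count = min(pcp->count, count);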
    
    The mentioned race is somewhat easily reproducible without [1]
    because the pcp's are not updated for the first memory block
    onlined, and thus there is a wide enough race window for P2 between
    alloc+free and the pcp struct values update through the onlining of
    the second memory block.
    
    With [1], the race still exists but it is very narrow as we update the pcp
    struct values for the first memory block online itself.
    
    This is not limited to the movable zone, it could also happen in cases
    with the normal zone (e.g., hotplug to a node that only has DMA memory, or
    no other memory yet).
    
    [1]: https://patchwork.kernel.org/patch/11696389/
    
    Fixes: 5f8dcc21211a ("page-allocator: split per-cpu list into one-list-per-migrate-type")
    Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Acked-by: David Hildenbrand <david@redhat.com>
    Acked-by: David Rientjes <rientjes@google.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Vinayak Menon <vinmenon@codeaurora.org>
    Cc: <stable@vger.kernel.org> [2.6+]
    Link: http://lkml.kernel.org/r/1597150703-19003-1-git-send-email-charante@codeaurora.org
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8c6a0bcb20a86f7419e6777d2a18aa55be974d22
Author: Doug Berger <opendmb@gmail.com>
Date:   Thu Aug 20 17:42:24 2020 -0700

    mm: include CMA pages in lowmem_reserve at boot
    
    commit e08d3fdfe2dafa0331843f70ce1ff6c1c4900bf4 upstream.
    
    The lowmem_reserve arrays provide a means of applying pressure against
    allocations from lower zones that were targeted at higher zones.  Its
    values are a function of the number of pages managed by higher zones and
    are assigned by a call to the setup_per_zone_lowmem_reserve() function.
    
    The function is initially called at boot time by the function
    init_per_zone_wmark_min() and may be called later by accesses of the
    /proc/sys/vm/lowmem_reserve_ratio sysctl file.
    
    The function init_per_zone_wmark_min() was moved up from a module_init to
    a core_initcall to resolve a sequencing issue with khugepaged.
    Unfortunately this created a sequencing issue with CMA page accounting.
    
    The CMA pages are added to the managed page count of a zone when
    cma_init_reserved_areas() is called at boot also as a core_initcall.  This
    makes it uncertain whether the CMA pages will be added to the managed page
    counts of their zones before or after the call to
    init_per_zone_wmark_min() as it becomes dependent on link order.  With the
    current link order the pages are added to the managed count after the
    lowmem_reserve arrays are initialized at boot.
    
    This means the lowmem_reserve values at boot may be lower than the values
    used later if /proc/sys/vm/lowmem_reserve_ratio is accessed even if the
    ratio values are unchanged.
    
    In many cases the difference is not significant, but for example
    an ARM platform with 1GB of memory and the following memory layout
    
      cma: Reserved 256 MiB at 0x0000000030000000
      Zone ranges:
        DMA      [mem 0x0000000000000000-0x000000002fffffff]
        Normal   empty
        HighMem  [mem 0x0000000030000000-0x000000003fffffff]
    
    would result in 0 lowmem_reserve for the DMA zone.  This would allow
    userspace to deplete the DMA zone easily.
    
    Funnily enough
    
      $ cat /proc/sys/vm/lowmem_reserve_ratio
    
    would fix up the situation because as a side effect it forces
    setup_per_zone_lowmem_reserve.
    
    This commit breaks the link order dependency by invoking
    init_per_zone_wmark_min() as a postcore_initcall so that the CMA pages
    have the chance to be properly accounted in their zone(s) and allowing
    the lowmem_reserve arrays to receive consistent values.
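
    The change itself is a one-liner in mm/page_alloc.c:

        postcore_initcall(init_per_zone_wmark_min)   /* was core_initcall */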
    
    Fixes: bc22af74f271 ("mm: update min_free_kbytes from khugepaged after core initialization")
    Signed-off-by: Doug Berger <opendmb@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Cc: Jason Baron <jbaron@akamai.com>
    Cc: David Rientjes <rientjes@google.com>
    Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
    Cc: <stable@vger.kernel.org>
    Link: http://lkml.kernel.org/r/1597423766-27849-1-git-send-email-opendmb@gmail.com
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6662601e579096e3bd9d00b0847ff013073f4a25
Author: Wei Yongjun <weiyongjun1@huawei.com>
Date:   Thu Aug 20 17:42:14 2020 -0700

    kernel/relay.c: fix memleak on destroy relay channel
    
    commit 71e843295c680898959b22dc877ae3839cc22470 upstream.
    
    kmemleak reports a memory leak as follows:
    
      unreferenced object 0x607ee4e5f948 (size 8):
      comm "syz-executor.1", pid 2098, jiffies 4295031601 (age 288.468s)
      hex dump (first 8 bytes):
      00 00 00 00 00 00 00 00 ........
      backtrace:
         relay_open kernel/relay.c:583 [inline]
         relay_open+0xb6/0x970 kernel/relay.c:563
         do_blk_trace_setup+0x4a8/0xb20 kernel/trace/blktrace.c:557
         __blk_trace_setup+0xb6/0x150 kernel/trace/blktrace.c:597
         blk_trace_ioctl+0x146/0x280 kernel/trace/blktrace.c:738
         blkdev_ioctl+0xb2/0x6a0 block/ioctl.c:613
         block_ioctl+0xe5/0x120 fs/block_dev.c:1871
         vfs_ioctl fs/ioctl.c:48 [inline]
         __do_sys_ioctl fs/ioctl.c:753 [inline]
         __se_sys_ioctl fs/ioctl.c:739 [inline]
         __x64_sys_ioctl+0x170/0x1ce fs/ioctl.c:739
         do_syscall_64+0x33/0x40 arch/x86/entry/common.c:46
         entry_SYSCALL_64_after_hwframe+0x44/0xa9
    
    'chan->buf' is allocated in relay_open() by alloc_percpu() but not
    freed when the relay channel is destroyed.  Fix it by adding
    free_percpu() before returning from relay_destroy_channel().
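
    A sketch of the fixed function (per the upstream hunk):

        static void relay_destroy_channel(struct kref *kref)
        {
                struct rchan *chan = container_of(kref, struct rchan, kref);

                free_percpu(chan->buf);
                kfree(chan);
        }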
    
    Fixes: 017c59c042d0 ("relay: Use per CPU constructs for the relay channel buffer pointers")
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Michel Lespinasse <walken@google.com>
    Cc: Daniel Axtens <dja@axtens.net>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Akash Goel <akash.goel@intel.com>
    Cc: <stable@vger.kernel.org>
    Link: http://lkml.kernel.org/r/20200817122826.48518-1-weiyongjun1@huawei.com
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 6d26d08216475e5a40e4f6ade397c181a19dc524
Author: Jann Horn <jannh@google.com>
Date:   Thu Aug 20 17:42:11 2020 -0700

    romfs: fix uninitialized memory leak in romfs_dev_read()
    
    commit bcf85fcedfdd17911982a3e3564fcfec7b01eebd upstream.
    
    romfs has a superblock field that limits the size of the filesystem; data
    beyond that limit is never accessed.
    
    romfs_dev_read() fetches a caller-supplied number of bytes from the
    backing device.  It returns 0 on success or an error code on failure;
    therefore, its API can't represent short reads, it's all-or-nothing.
    
    However, when romfs_dev_read() detects that the requested operation would
    cross the filesystem size limit, it currently silently truncates the
    requested number of bytes.  This means, for example, that when the
    content of a file with size 0x1000 starts one byte before the
    filesystem size limit, ->readpage() will only fill a single byte of
    the supplied page while leaving the rest uninitialized, leaking that
    uninitialized memory to userspace.
    
    Fix it by returning an error code instead of truncating the read when the
    requested read operation would go beyond the end of the filesystem.
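
    A sketch of the new bounds check in romfs_dev_read() (per the
    upstream hunk):

        size_t limit = romfs_maxsize(sb);

        if (pos >= limit || buflen > limit - pos)
                return -EIO;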
    
    Fixes: da4458bda237 ("NOMMU: Make it possible for RomFS to use MTD devices directly")
    Signed-off-by: Jann Horn <jannh@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: David Howells <dhowells@redhat.com>
    Cc: <stable@vger.kernel.org>
    Link: http://lkml.kernel.org/r/20200818013202.2246365-1-jannh@google.com
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit daea4542b1330738281392ffd16f7b44a43fe1e2
Author: Josef Bacik <josef@toxicpanda.com>
Date:   Wed Jul 22 11:12:46 2020 -0400

    btrfs: don't show full path of bind mounts in subvol=
    
    [ Upstream commit 3ef3959b29c4a5bd65526ab310a1a18ae533172a ]
    
    Chris Murphy reported a problem where rpm ostree will bind mount a bunch
    of things for whatever voodoo it's doing.  But when it does this
    /proc/mounts shows something like
    
      /dev/sda /mnt/test btrfs rw,relatime,subvolid=256,subvol=/foo 0 0
      /dev/sda /mnt/test/baz btrfs rw,relatime,subvolid=256,subvol=/foo/bar 0 0
    
    Despite subvolid=256 being subvol=/foo.  This is because we're just
    spitting out the dentry of the mount point, which in the case of bind
    mounts is the source path for the mountpoint.  Instead we should spit
    out the path to the actual subvol.  Fix this by looking up the name for
    the subvolid we have mounted.  With this fix the same test looks like
    this
    
      /dev/sda /mnt/test btrfs rw,relatime,subvolid=256,subvol=/foo 0 0
      /dev/sda /mnt/test/baz btrfs rw,relatime,subvolid=256,subvol=/foo 0 0
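
    A sketch of the show_options change (using the helper exported by
    the preparatory patch below):

        subvol_name = btrfs_get_subvol_name_from_objectid(info,
                        BTRFS_I(d_inode(dentry))->root->root_key.objectid);
        if (!IS_ERR(subvol_name)) {
                seq_puts(seq, ",subvol=");
                seq_escape(seq, subvol_name, " \t\n\\");
                kfree(subvol_name);
        }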
    
    Reported-by: Chris Murphy <chris@colorremedies.com>
    CC: stable@vger.kernel.org # 4.4+
    Signed-off-by: Josef Bacik <josef@toxicpanda.com>
    Reviewed-by: David Sterba <dsterba@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit ba33ed7e3d43c6723b89c5105da86b0f29952b2c
Author: Marcos Paulo de Souza <mpdesouza@suse.com>
Date:   Fri Feb 21 14:56:12 2020 +0100

    btrfs: export helpers for subvolume name/id resolution
    
    [ Upstream commit c0c907a47dccf2cf26251a8fb4a8e7a3bf79ce84 ]
    
    The functions will be used outside of export.c and super.c to allow
    resolving subvolume name from a given id, eg. for subvolume deletion by
    id ioctl.
    
    Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
    Reviewed-by: David Sterba <dsterba@suse.com>
    [ split from the next patch ]
    Signed-off-by: David Sterba <dsterba@suse.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit cdb3f8b6c5b7dd3d38f0b847164c9d727c9ea6d0
Author: Hugh Dickins <hughd@google.com>
Date:   Thu Aug 20 17:42:02 2020 -0700

    khugepaged: adjust VM_BUG_ON_MM() in __khugepaged_enter()
    
    [ Upstream commit f3f99d63a8156c7a4a6b20aac22b53c5579c7dc1 ]
    
    syzbot crashes on the VM_BUG_ON_MM(khugepaged_test_exit(mm), mm) in
    __khugepaged_enter(): yes, when one thread is about to dump core, has set
    core_state, and is waiting for others, another might do something calling
    __khugepaged_enter(), which now crashes because I lumped the core_state
    test (known as "mmget_still_valid") into khugepaged_test_exit().  I still
    think it's best to lump them together, so just in this exceptional case,
    check mm->mm_users directly instead of khugepaged_test_exit().
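
    A sketch of the adjusted assertion (per the upstream hunk):

        /* __khugepaged_enter() must not run on an mm with no users */
        VM_BUG_ON_MM(atomic_read(&mm->mm_users) == 0, mm);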
    
    Fixes: bbe98f9cadff ("khugepaged: khugepaged_test_exit() check mmget_still_valid()")
    Reported-by: syzbot <syzkaller@googlegroups.com>
    Signed-off-by: Hugh Dickins <hughd@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Acked-by: Yang Shi <shy828301@gmail.com>
    Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Song Liu <songliubraving@fb.com>
    Cc: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: Eric Dumazet <edumazet@google.com>
    Cc: <stable@vger.kernel.org>    [4.8+]
    Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2008141503370.18085@eggly.anvils
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit db63d1862181e0c16cee82cec162c6c20e503da0
Author: Hugh Dickins <hughd@google.com>
Date:   Thu Aug 6 23:26:25 2020 -0700

    khugepaged: khugepaged_test_exit() check mmget_still_valid()
    
    [ Upstream commit bbe98f9cadff58cdd6a4acaeba0efa8565dabe65 ]
    
    Move collapse_huge_page()'s mmget_still_valid() check into
    khugepaged_test_exit() itself.  collapse_huge_page() is used for anon THP
    only, and earned its mmget_still_valid() check because it inserts a huge
    pmd entry in place of the page table's pmd entry; whereas
    collapse_file()'s retract_page_tables() or collapse_pte_mapped_thp()
    merely clears the page table's pmd entry.  But core dumping without mmap
    lock must have been as open to mistaking a racily cleared pmd entry for a
    page table at physical page 0, as exit_mmap() was.  And we certainly have
    no interest in mapping as a THP once dumping core.
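
    A sketch of the combined test (per the upstream hunk):

        static inline int khugepaged_test_exit(struct mm_struct *mm)
        {
                return atomic_read(&mm->mm_users) == 0 ||
                       !mmget_still_valid(mm);
        }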
    
    Fixes: 59ea6d06cfa9 ("coredump: fix race condition between collapse_huge_page() and core dumping")
    Signed-off-by: Hugh Dickins <hughd@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Song Liu <songliubraving@fb.com>
    Cc: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: <stable@vger.kernel.org>    [4.8+]
    Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2008021217020.27773@eggly.anvils
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 854cbc3db30382717af6dd6cd3d69aced5fc6681
Author: Kevin Hao <haokexin@gmail.com>
Date:   Thu Jul 30 16:23:18 2020 +0800

    tracing/hwlat: Honor the tracing_cpumask
    
    [ Upstream commit 96b4833b6827a62c295b149213c68b559514c929 ]
    
    In the calculation of the cpu mask for the hwlat kernel thread, the
    wrong cpu mask is used instead of the tracing_cpumask. This renders
    tracing/tracing_cpumask useless for the hwlat tracer. Fix it.
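
    A sketch of the change (per the upstream hunk, in both
    move_to_next_cpu() and the kthread start path):

        cpumask_and(current_mask, cpu_online_mask, tr->tracing_cpumask);
        /* was: cpumask_and(current_mask, cpu_online_mask,
                            tracing_buffer_mask); */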
    
    Link: https://lkml.kernel.org/r/20200730082318.42584-2-haokexin@gmail.com
    
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: stable@vger.kernel.org
    Fixes: 0330f7aa8ee6 ("tracing: Have hwlat trace migrate across tracing_cpumask CPUs")
    Signed-off-by: Kevin Hao <haokexin@gmail.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit ee458aa7a03a8bdbc1f70e2b11d8d08c624f313f
Author: Steven Rostedt (VMware) <rostedt@goodmis.org>
Date:   Tue Jan 31 16:48:23 2017 -0500

    tracing: Clean up the hwlat binding code
    
    [ Upstream commit f447c196fe7a3a92c6396f7628020cb8d564be15 ]
    
    Instead of initializing the affinity of the hwlat kthread in the thread
    itself, simply set up the initial affinity at thread creation. This
    simplifies the code.
    
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit b907dd1d2ddb3665ffc3c920350e7baa6772dfb2
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Fri Jul 10 22:11:23 2020 +0900

    perf probe: Fix memory leakage when the probe point is not found
    
    [ Upstream commit 12d572e785b15bc764e956caaa8a4c846fd15694 ]
    
    Fix the memory leakage in debuginfo__find_trace_events() when the probe
    point is not found in the debuginfo. If there is no probe point found in
    the debuginfo, debuginfo__find_probes() will NOT return -ENOENT, but 0.
    
    Thus the caller of debuginfo__find_probes() must check the tf.ntevs and
    release the allocated memory for the array of struct probe_trace_event.
    
    The current code releases the memory only if debuginfo__find_probes()
    hits an error, but it does not check tf.ntevs. As a result, the
    memory allocated for *tevs is not released if tf.ntevs == 0.
    
    This fixes the memory leakage by checking tf.ntevs == 0 in addition to
    ret < 0.
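
    A sketch of the adjusted cleanup (per the upstream hunk):

        if (ret < 0 || tf.ntevs == 0) {
                for (i = 0; i < tf.ntevs; i++)
                        clear_probe_trace_event(&tf.tevs[i]);
                zfree(tevs);    /* release the (possibly empty) array */
        }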
    
    Fixes: ff741783506c ("perf probe: Introduce debuginfo to encapsulate dwarf information")
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: stable@vger.kernel.org
    Link: http://lore.kernel.org/lkml/159438668346.62703.10887420400718492503.stgit@devnote2
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 71e7ac9ae8f0571fcd151cb35b6757ab1bfc109b
Author: Liu Ying <victor.liu@nxp.com>
Date:   Thu Jul 9 10:28:52 2020 +0800

    drm/imx: imx-ldb: Disable both channels for split mode in enc->disable()
    
    [ Upstream commit 3b2a999582c467d1883716b37ffcc00178a13713 ]
    
    Both of the two LVDS channels should be disabled for split mode
    in the encoder's ->disable() callback, because they are enabled
    in the encoder's ->enable() callback.
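
    A sketch of the disable path (per the upstream hunk; the second test
    was previously an "else if", so in dual/split mode only one channel
    was disabled; dual is the split-mode flag):

        if (imx_ldb_ch == &ldb->channel[0] || dual)
                ldb->ldb_ctrl &= ~LDB_CH0_MODE_EN_MASK;
        if (imx_ldb_ch == &ldb->channel[1] || dual)
                ldb->ldb_ctrl &= ~LDB_CH1_MODE_EN_MASK;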
    
    Fixes: 6556f7f82b9c ("drm: imx: Move imx-drm driver out of staging")
    Cc: Philipp Zabel <p.zabel@pengutronix.de>
    Cc: Sascha Hauer <s.hauer@pengutronix.de>
    Cc: Pengutronix Kernel Team <kernel@pengutronix.de>
    Cc: NXP Linux Team <linux-imx@nxp.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Liu Ying <victor.liu@nxp.com>
    Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

commit 7ee66a8a71cf68a44ecfdf2f58e6847cf654ff88
Author: Jan Beulich <JBeulich@suse.com>
Date:   Mon Feb 26 04:11:51 2018 -0700

    x86/asm: Add instruction suffixes to bitops
    
    commit 22636f8c9511245cb3c8412039f1dd95afb3aa59 upstream.
    
    Omitting suffixes from instructions in AT&T mode is bad practice when
    operand size cannot be determined by the assembler from register
    operands, and is likely going to be warned about by upstream gas in the
    future (mine does already). Add the missing suffixes here. Note that for
    64-bit this means some operations change from being 32-bit to 64-bit.
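
    An illustrative example of the ambiguity (not the exact kernel hunk;
    addr and mask are hypothetical):

        /* ambiguous: a memory destination and an immediate source leave
         * the operand size undetermined for gas */
        asm volatile(LOCK_PREFIX "or %1,%0"
                     : "+m" (*(volatile u8 *)addr)
                     : "iq" ((u8)mask) : "memory");

        /* explicit suffix makes it a byte operation */
        asm volatile(LOCK_PREFIX "orb %1,%0"
                     : "+m" (*(volatile u8 *)addr)
                     : "iq" ((u8)mask) : "memory");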
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lkml.kernel.org/r/5A93F98702000078001ABACC@prv-mh.provo.novell.com
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3f2bea782ec1a4a3f30881f1e966057ed410f4e7
Author: Uros Bizjak <ubizjak@gmail.com>
Date:   Wed Sep 6 17:18:08 2017 +0200

    x86/asm: Remove unnecessary \n\t in front of CC_SET() from asm templates
    
    commit 3c52b5c64326d9dcfee4e10611c53ec1b1b20675 upstream.
    
    There is no need for \n\t in front of CC_SET(), as the macro already includes these two.
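
    For reference, both definitions of CC_SET() in
    arch/x86/include/asm/asm.h already begin with the newline/tab:

        #ifdef __GCC_ASM_FLAG_OUTPUTS__
        # define CC_SET(c) "\n\t/* output condition code " #c "*/\n"
        #else
        # define CC_SET(c) "\n\tset" #c " %[_cc_" #c "]\n"
        #endif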
    
    Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/20170906151808.5634-1-ubizjak@gmail.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>