linux-2.6-block.git
5 years ago arm64: entry: remove unused register aliases
Mark Rutland [Thu, 3 Jan 2019 13:23:10 +0000 (13:23 +0000)]
arm64: entry: remove unused register aliases

In commit:

  3b7142752e4bee15 ("arm64: convert native/compat syscall entry to C")

... we moved the syscall invocation code from assembly to C, but left
behind a number of register aliases which are now unused.

Let's remove them before they confuse someone.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: smp: Fix compilation error
Shaokun Zhang [Sat, 29 Dec 2018 01:43:17 +0000 (09:43 +0800)]
arm64: smp: Fix compilation error

In the arm64 updates for 4.21, there is a compilation error:
arch/arm64/kernel/head.S: Assembler messages:
arch/arm64/kernel/head.S:824: Error: missing ')'
arch/arm64/kernel/head.S:824: Error: missing ')'
arch/arm64/kernel/head.S:824: Error: missing ')'
arch/arm64/kernel/head.S:824: Error: unexpected characters following instruction at operand 2 -- `mov x2,#(2)|(2U<<(8))'
scripts/Makefile.build:391: recipe for target 'arch/arm64/kernel/head.o' failed
make[1]: *** [arch/arm64/kernel/head.o] Error 1
GCC version is gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609

Let's fix it using the UL() macro.
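
For reference, a simplified sketch of why UL() helps here (definitions
along the lines of <uapi/linux/const.h>; the "U"/"UL" suffixes that GAS
rejects never reach the assembler):

  #ifdef __ASSEMBLY__
  #define _AC(X,Y)  X              /* assembly: drop the suffix entirely */
  #else
  #define _AC(X,Y)  (X##Y)         /* C: paste it, keeping unsigned long */
  #endif
  #define _UL(x)    (_AC(x, UL))
  #define UL(x)     (_UL(x))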

Fixes: 66f16a24512f ("arm64: smp: Rework early feature mismatched detection")
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Tested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
[will: consistent use of UL() for all shifts in asm constants]
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: kaslr: print PHYS_OFFSET in dump_kernel_offset()
Miles Chen [Wed, 12 Dec 2018 10:56:49 +0000 (18:56 +0800)]
arm64: kaslr: print PHYS_OFFSET in dump_kernel_offset()

When debugging with kaslr, it is sometimes necessary to have PHYS_OFFSET
in order to perform linear virtual-to-physical address translation.
Sometimes we're debugging with only limited information, such as a kernel
log and a symbol file, so print PHYS_OFFSET in dump_kernel_offset() for
that case.
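
A sketch of the print this adds, assuming the same pr_emerg() style as
the existing "Kernel Offset" line:

  pr_emerg("PHYS_OFFSET: 0x%llx\n", PHYS_OFFSET);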

Tested by:
echo c > /proc/sysrq-trigger
[   11.996161] SMP: stopping secondary CPUs
[   11.996732] Kernel Offset: 0x2522200000 from 0xffffff8008000000
[   11.996881] PHYS_OFFSET: 0xffffffeb40000000

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: sysreg: Use _BITUL() when defining register bits
Will Deacon [Tue, 11 Dec 2018 16:42:31 +0000 (16:42 +0000)]
arm64: sysreg: Use _BITUL() when defining register bits

Using shifts directly is error-prone and can cause inadvertent sign
extensions or build problems with older versions of binutils.

Consistent use of the _BITUL() macro makes these problems disappear.
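
For illustration, _BITUL() (defined in <uapi/linux/const.h>) forces the
shift to happen on an unsigned long; the register field below is a
hypothetical example:

  #define _BITUL(x)       (_UL(1) << (x))

  /* (1 << 31) is a negative int and sign-extends when widened to
   * 64 bits; _BITUL(31) stays 0x0000000080000000UL */
  #define REG_EXAMPLE_EN  _BITUL(31)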

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches
Will Deacon [Wed, 12 Dec 2018 15:53:54 +0000 (15:53 +0000)]
arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches

Open-coding the pointer-auth HWCAPs is a mess and can be avoided by
reusing the multi-cap logic from the CPU errata framework.

Move the multi_entry_cap_matches code to cpufeature.h and reuse it for
the pointer auth HWCAPs.

Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4
Will Deacon [Wed, 12 Dec 2018 15:52:02 +0000 (15:52 +0000)]
arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4

We can easily avoid defining the two meta-capabilities for the address
and generic keys, so remove them and instead just check both of the
architected and impdef capabilities when determining the level of system
support.

Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: docs: document pointer authentication
Mark Rutland [Fri, 7 Dec 2018 18:39:31 +0000 (18:39 +0000)]
arm64: docs: document pointer authentication

Now that we've added code to support pointer authentication, add some
documentation so that people can figure out if/how to use it.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Reviewed-by: Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
Cc: Andrew Jones <drjones@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: ptr auth: Move per-thread keys from thread_info to thread_struct
Will Deacon [Thu, 13 Dec 2018 13:14:06 +0000 (13:14 +0000)]
arm64: ptr auth: Move per-thread keys from thread_info to thread_struct

We don't need to get at the per-thread keys from assembly at all, so
they can live alongside the rest of the per-thread register state in
thread_struct instead of thread_info.

This will also allow straightforward whitelisting of the keys for
hardened usercopy should we expose them via a ptrace request later on.

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: enable pointer authentication
Mark Rutland [Fri, 7 Dec 2018 18:39:30 +0000 (18:39 +0000)]
arm64: enable pointer authentication

Now that all the necessary bits are in place for userspace, add the
necessary Kconfig logic to allow this to be enabled.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: add prctl control for resetting ptrauth keys
Kristina Martsenko [Fri, 7 Dec 2018 18:39:28 +0000 (18:39 +0000)]
arm64: add prctl control for resetting ptrauth keys

Add an arm64-specific prctl to allow a thread to reinitialize its
pointer authentication keys to random values. This can be useful when
exec() is not used for starting new processes, to ensure that different
processes still have different keys.
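
As an illustration, a thread could use the new call like this (a
userspace sketch; the PR_PAC_* constants come from the updated
<linux/prctl.h>, and a key mask of zero means "all keys"):

  #include <sys/prctl.h>

  /* reset only the instruction keys A and B */
  prctl(PR_PAC_RESET_KEYS, PR_PAC_APIAKEY | PR_PAC_APIBKEY, 0, 0, 0);

  /* reset all of this thread's keys */
  prctl(PR_PAC_RESET_KEYS, 0, 0, 0, 0);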

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: perf: strip PAC when unwinding userspace
Mark Rutland [Fri, 7 Dec 2018 18:39:27 +0000 (18:39 +0000)]
arm64: perf: strip PAC when unwinding userspace

When the kernel is unwinding userspace callchains, we can't expect that
the userspace consumer of these callchains has the data necessary to
strip the PAC from the stored LR.

This patch has the kernel strip the PAC from user stackframes when the
in-kernel unwinder is used. This only affects the LR value, and not the
FP.

This only affects the in-kernel unwinder. When userspace performs
unwinding, it is up to userspace to strip PACs as necessary (which can
be determined from DWARF information).
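
Conceptually, stripping replaces the PAC field with copies of bit 55; a
hedged C sketch (strip_pac and pac_mask are illustrative names; the
in-kernel helper derives the mask from the configured VA size):

  static inline u64 strip_pac(u64 ptr, u64 pac_mask)
  {
          /* bit 55 selects the address half: extend it over the PAC bits */
          if (ptr & BIT(55))
                  return ptr | pac_mask;
          return ptr & ~pac_mask;
  }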

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: expose user PAC bit positions via ptrace
Mark Rutland [Fri, 7 Dec 2018 18:39:26 +0000 (18:39 +0000)]
arm64: expose user PAC bit positions via ptrace

When pointer authentication is in use, data/instruction pointers have a
number of PAC bits inserted into them. The number and position of these
bits depends on the configured TCR_ELx.TxSZ and whether tagging is
enabled. ARMv8.3 allows tagging to differ for instruction and data
pointers.

For userspace debuggers to unwind the stack and/or to follow pointer
chains, they need to be able to remove the PAC bits before attempting to
use a pointer.

This patch adds a new structure with masks describing the location of
the PAC bits in userspace instruction and data pointers (i.e. those
addressable via TTBR0), which userspace can query via PTRACE_GETREGSET.
By clearing these bits from pointers (and replacing them with the value
of bit 55), userspace can acquire the PAC-less versions.

This new regset is exposed when the kernel is built with (user) pointer
authentication support, and the address authentication feature is
enabled. Otherwise, the regset is hidden.
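
A debugger might fetch the masks roughly as follows (a sketch;
NT_ARM_PAC_MASK and struct user_pac_mask are the regset type and layout
added by this patch):

  #include <sys/ptrace.h>
  #include <sys/uio.h>
  #include <linux/elf.h>
  #include <asm/ptrace.h>

  struct user_pac_mask masks;     /* { data_mask; insn_mask; } */
  struct iovec iov = { .iov_base = &masks, .iov_len = sizeof(masks) };

  ptrace(PTRACE_GETREGSET, pid, NT_ARM_PAC_MASK, &iov);
  /* PAC-less pointer: clear the mask bits, then propagate bit 55 */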

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[will: Fix to use vabits_user instead of VA_BITS and rename macro]
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: add basic pointer authentication support
Mark Rutland [Fri, 7 Dec 2018 18:39:25 +0000 (18:39 +0000)]
arm64: add basic pointer authentication support

This patch adds basic support for pointer authentication, allowing
userspace to make use of APIAKey, APIBKey, APDAKey, APDBKey, and
APGAKey. The kernel maintains key values for each process (shared by all
threads within), which are initialised to random values at exec() time.

The ID_AA64ISAR1_EL1.{APA,API,GPA,GPI} fields are exposed to userspace,
to describe that pointer authentication instructions are available and
that the kernel is managing the keys. Two new hwcaps are added for the
same reason: PACA (for address authentication) and PACG (for generic
authentication).

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Tested-by: Adam Wallis <awallis@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[will: Fix sizeof() usage and unroll address key initialisation]
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64/cpufeature: detect pointer authentication
Mark Rutland [Fri, 7 Dec 2018 18:39:24 +0000 (18:39 +0000)]
arm64/cpufeature: detect pointer authentication

So that we can dynamically handle the presence of pointer authentication
functionality, wire up probing code in cpufeature.c.

From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now
has four fields describing the presence of pointer authentication
functionality:

* APA - address authentication present, using an architected algorithm
* API - address authentication present, using an IMP DEF algorithm
* GPA - generic authentication present, using an architected algorithm
* GPI - generic authentication present, using an IMP DEF algorithm

This patch checks for both address and generic authentication,
separately. It is assumed that if all CPUs support an IMP DEF algorithm,
the same algorithm is used across all CPUs.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: Don't trap host pointer auth use to EL2
Mark Rutland [Fri, 7 Dec 2018 18:39:23 +0000 (18:39 +0000)]
arm64: Don't trap host pointer auth use to EL2

To allow EL0 (and/or EL1) to use pointer authentication functionality,
we must ensure that pointer authentication instructions and accesses to
pointer authentication keys are not trapped to EL2.

This patch ensures that HCR_EL2 is configured appropriately when the
kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
ensuring that EL1 can access the keys and that EL0 use of the
instructions is permitted. For VHE kernels, host EL0 (TGE && E2H) is
unaffected by these settings, and it doesn't matter how we configure
HCR_EL2.{API,APK}, so we don't bother setting them.
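
For reference, the two controls involved (bit positions per the ARMv8.3
definition of HCR_EL2; setting them disables the corresponding traps):

  #define HCR_API   (UL(1) << 41)  /* don't trap ptrauth instructions */
  #define HCR_APK   (UL(1) << 40)  /* don't trap accesses to the key regs */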

This does not enable support for KVM guests, since KVM manages HCR_EL2
itself when running VMs.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64/kvm: hide ptrauth from guests
Mark Rutland [Fri, 7 Dec 2018 18:39:22 +0000 (18:39 +0000)]
arm64/kvm: hide ptrauth from guests

In subsequent patches we're going to expose ptrauth to the host kernel
and userspace, but things are a bit trickier for guest kernels. For the
time being, let's hide ptrauth from KVM guests.

Regardless of how well-behaved the guest kernel is, guest userspace
could attempt to use ptrauth instructions, triggering a trap to EL2,
resulting in noise from kvm_handle_unknown_ec(). So let's write up a
handler for the PAC trap, which silently injects an UNDEF into the
guest, as if the feature were really missing.
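
The handler itself can be minimal; a sketch of the shape described
above:

  static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
  {
          /* make it look like the feature is simply not implemented */
          kvm_inject_undefined(vcpu);
          return 1;
  }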

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64/kvm: consistently handle host HCR_EL2 flags
Mark Rutland [Fri, 7 Dec 2018 18:39:21 +0000 (18:39 +0000)]
arm64/kvm: consistently handle host HCR_EL2 flags

In KVM we define the configuration of HCR_EL2 for a VHE host in
HCR_HOST_VHE_FLAGS, but we don't have a similar definition for the
non-VHE host flags, and open-code HCR_RW. Further, in head.S we
open-code the flags for VHE and non-VHE configurations.

In future, we're going to want to configure more flags for the host, so
let's add a HCR_HOST_NVHE_FLAGS definition, and consistently use both
HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the KVM code and head.S.

We now use mov_q to generate the HCR_EL2 value, as we do when
configuring other registers in head.S.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: add pointer authentication register bits
Mark Rutland [Fri, 7 Dec 2018 18:39:20 +0000 (18:39 +0000)]
arm64: add pointer authentication register bits

The ARMv8.3 pointer authentication extension adds:

* New fields in ID_AA64ISAR1 to report the presence of pointer
  authentication functionality.

* New control bits in SCTLR_ELx to enable this functionality.

* New system registers to hold the keys necessary for this
  functionality.

* A new ESR_ELx.EC code used when the new instructions are affected by
  configurable traps.

This patch adds the relevant definitions to <asm/sysreg.h> and
<asm/esr.h> for these, to be used by subsequent patches.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: add comments about EC exception levels
Kristina Martsenko [Fri, 7 Dec 2018 18:39:19 +0000 (18:39 +0000)]
arm64: add comments about EC exception levels

To make it clear which exceptions can't be taken to EL1 or EL2, add
comments next to the ESR_ELx_EC_* macro definitions.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: perf: Treat EXCLUDE_EL* bit definitions as unsigned
Will Deacon [Thu, 13 Dec 2018 15:34:44 +0000 (15:34 +0000)]
arm64: perf: Treat EXCLUDE_EL* bit definitions as unsigned

Although the upper 32 bits of the PMEVTYPER<n>_EL0 registers are RES0,
we should treat the EXCLUDE_EL* bit definitions as unsigned so that we
avoid accidentally sign-extending the privilege filtering bit (bit 31)
into the upper half of the register.

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: kpti: Whitelist Cortex-A CPUs that don't implement the CSV3 field
Will Deacon [Thu, 13 Dec 2018 13:47:38 +0000 (13:47 +0000)]
arm64: kpti: Whitelist Cortex-A CPUs that don't implement the CSV3 field

While the CSV3 field of the ID_AA64PFR0 CPU ID register can be checked
to see if a CPU is susceptible to Meltdown and therefore requires kpti
to be enabled, existing CPUs do not implement this field.

We therefore whitelist all unaffected Cortex-A CPUs that do not implement
the CSV3 field.

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago Merge branch 'for-next/perf' into aarch64/for-next/core
Will Deacon [Wed, 12 Dec 2018 18:59:39 +0000 (18:59 +0000)]
Merge branch 'for-next/perf' into aarch64/for-next/core

Merge in arm64 perf and PMU driver updates, including support for the
system/uncore PMU in the ThunderX2 platform.

5 years ago arm64: enable per-task stack canaries
Ard Biesheuvel [Wed, 12 Dec 2018 12:08:44 +0000 (13:08 +0100)]
arm64: enable per-task stack canaries

This enables the use of per-task stack canary values if GCC has
support for emitting the stack canary reference relative to the
value of sp_el0, which holds the task struct pointer in the arm64
kernel.

The $(eval) extends KBUILD_CFLAGS at the moment the make rule is
applied, which means asm-offsets.o (which we rely on for the offset
value) is built without the arguments, and everything built afterwards
has the options set.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: Add memory hotplug support
Robin Murphy [Tue, 11 Dec 2018 18:48:48 +0000 (18:48 +0000)]
arm64: Add memory hotplug support

Wire up the basic support for hot-adding memory. Since memory hotplug
is fairly tightly coupled to sparsemem, we tweak pfn_valid() to also
cross-check the presence of a section in the manner of the generic
implementation, before falling back to memblock to check for no-map
regions within a present section as before. By having arch_add_memory()
create the linear mapping first, everything then works in the way that
__add_section() expects.
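
A simplified sketch of the resulting pfn_valid() shape (section
cross-check first, then the existing memblock test):

  int pfn_valid(unsigned long pfn)
  {
  #ifdef CONFIG_SPARSEMEM
          if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                  return 0;
          if (!valid_section(__nr_to_section(pfn_to_section_nr(pfn))))
                  return 0;
  #endif
          return memblock_is_map_memory((phys_addr_t)pfn << PAGE_SHIFT);
  }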

We expect hotplug to be ACPI-driven, so the swapper_pg_dir updates
should be safe from races by virtue of the global device hotplug lock.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: percpu: Fix LSE implementation of value-returning pcpu atomics
Will Deacon [Wed, 12 Dec 2018 14:17:20 +0000 (14:17 +0000)]
arm64: percpu: Fix LSE implementation of value-returning pcpu atomics

Commit 959bf2fd03b5 ("arm64: percpu: Rewrite per-cpu ops to allow use of
LSE atomics") introduced alternative code sequences for the arm64 percpu
atomics, so that the LSE instructions can be patched in at runtime if
they are supported by the CPU.

Unfortunately, when patching in the LSE sequence for a value-returning
pcpu atomic, the argument registers are the wrong way round. The
implementation of this_cpu_add_return() therefore ends up adding
uninitialised stack to the percpu variable and returning garbage.

As it turns out, there aren't very many users of the value-returning
percpu atomics in mainline and we only spotted this due to a failure in
the kprobes selftests. In this case, when attempting to single-step over
the out-of-line instruction slot, the debug monitors would not be
enabled because calling this_cpu_inc_return() on the kernel debug
monitor refcount would fail to detect the transition from 0. We would
consequently execute past the slot and take an undefined instruction
exception from the kernel, resulting in a BUG:

 | kernel BUG at arch/arm64/kernel/traps.c:421!
 | PREEMPT SMP
 | pc : do_undefinstr+0x268/0x278
 | lr : do_undefinstr+0x124/0x278
 | Process swapper/0 (pid: 1, stack limit = 0x(____ptrval____))
 | Call trace:
 |  do_undefinstr+0x268/0x278
 |  el1_undef+0x10/0x78
 |  0xffff00000803c004
 |  init_kprobes+0x150/0x180
 |  do_one_initcall+0x74/0x178
 |  kernel_init_freeable+0x188/0x224
 |  kernel_init+0x10/0x100
 |  ret_from_fork+0x10/0x1c

Fix the argument order to get the value-returning pcpu atomics working
correctly when implemented using the LSE instructions.
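
For context, a sketch of the debug-monitors pattern that exposed the
bug; the return value has to reflect the increment for the transition
test to work:

  /* enable the monitors only on the 0 -> 1 refcount transition */
  if (this_cpu_inc_return(mde_ref_count) == 1)
          enable = DBG_MDSCR_MDE;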

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: add <asm/asm-prototypes.h>
Mark Rutland [Wed, 12 Dec 2018 12:22:19 +0000 (12:22 +0000)]
arm64: add <asm/asm-prototypes.h>

While we can export symbols from assembly files, CONFIG_MODVERSIONS
requires C declarations of anything that's exported.

Let's account for this as other architectures do by placing these declarations
in <asm/asm-prototypes.h>, which kbuild will automatically use to generate
modversion information for assembly files.

Since we already define most prototypes in existing headers, we simply need to
include those headers in <asm/asm-prototypes.h>, and don't need to duplicate
these.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: mm: Introduce MAX_USER_VA_BITS definition
Will Deacon [Wed, 12 Dec 2018 11:51:40 +0000 (11:51 +0000)]
arm64: mm: Introduce MAX_USER_VA_BITS definition

With the introduction of 52-bit virtual addressing for userspace, we are
now in a position where the virtual addressing capability of userspace
may exceed that of the kernel. Consequently, the VA_BITS definition
cannot be used blindly, since it reflects only the size of kernel
virtual addresses.

This patch introduces MAX_USER_VA_BITS which is either VA_BITS or 52
depending on whether 52-bit virtual addressing has been configured at
build time, removing a few places where the 52 is open-coded based on
explicit CONFIG_ guards.
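
As introduced here, the definition reads roughly:

  #ifdef CONFIG_ARM64_USER_VA_BITS_52
  #define MAX_USER_VA_BITS        52
  #else
  #define MAX_USER_VA_BITS        VA_BITS
  #endif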

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: fix ARM64_USER_VA_BITS_52 builds
Arnd Bergmann [Tue, 11 Dec 2018 14:08:10 +0000 (15:08 +0100)]
arm64: fix ARM64_USER_VA_BITS_52 builds

In some randconfig builds, the new CONFIG_ARM64_USER_VA_BITS_52
triggered a build failure:

arch/arm64/mm/proc.S:287: Error: immediate out of range

As it turns out, we were incorrectly setting PGTABLE_LEVELS here,
lacking any other default value.
This fixes the calculation of CONFIG_PGTABLE_LEVELS to consider
all combinations again.

Fixes: 68d23da4373a ("arm64: Kconfig: Re-jig CONFIG options for 52-bit VA")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: preempt: Fix big-endian when checking preempt count in assembly
Will Deacon [Tue, 11 Dec 2018 13:41:32 +0000 (13:41 +0000)]
arm64: preempt: Fix big-endian when checking preempt count in assembly

Commit 396244692232 ("arm64: preempt: Provide our own implementation of
asm/preempt.h") extended the preempt count field in struct thread_info
to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
indicating whether or not the current task needs rescheduling.

Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
to this new field, the assembly usage was left untouched meaning that a
32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
the reschedule flag instead of the count.

Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
we're actually better off reworking the two assembly users so that they
operate on the whole 64-bit value in favour of inspecting the thread
flags separately in order to determine whether a reschedule is needed.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Tested-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: kexec_file: include linux/vmalloc.h
Arnd Bergmann [Tue, 11 Dec 2018 10:05:46 +0000 (11:05 +0100)]
arm64: kexec_file: include linux/vmalloc.h

This is needed for compilation in some configurations that don't
include it implicitly:

arch/arm64/kernel/machine_kexec_file.c: In function 'arch_kimage_file_post_load_cleanup':
arch/arm64/kernel/machine_kexec_file.c:37:2: error: implicit declaration of function 'vfree'; did you mean 'kvfree'? [-Werror=implicit-function-declaration]

Fixes: 52b2a8af7436 ("arm64: kexec_file: load initrd and device-tree")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: mm: EXPORT vabits_user to modules
Will Deacon [Mon, 10 Dec 2018 19:20:23 +0000 (19:20 +0000)]
arm64: mm: EXPORT vabits_user to modules

TASK_SIZE is defined using the vabits_user variable for 64-bit tasks,
so ensure that this variable is exported to modules to avoid the
following build breakage with allmodconfig:

 | ERROR: "vabits_user" [lib/test_user_copy.ko] undefined!
 | ERROR: "vabits_user" [drivers/misc/lkdtm/lkdtm.ko] undefined!
 | ERROR: "vabits_user" [drivers/infiniband/hw/mlx5/mlx5_ib.ko] undefined!

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago Merge branch 'for-next/kexec' into aarch64/for-next/core
Will Deacon [Mon, 10 Dec 2018 18:57:17 +0000 (18:57 +0000)]
Merge branch 'for-next/kexec' into aarch64/for-next/core

Merge in kexec_file_load() support from Akashi Takahiro.

5 years ago Merge branch 'kvm/cortex-a76-erratum-1165522' into aarch64/for-next/core
Will Deacon [Mon, 10 Dec 2018 18:53:03 +0000 (18:53 +0000)]
Merge branch 'kvm/cortex-a76-erratum-1165522' into aarch64/for-next/core

Pull in KVM workaround for A76 erratum #1165522.

Conflicts:
arch/arm64/include/asm/cpucaps.h

5 years ago arm64: smp: Handle errors reported by the firmware
Suzuki K Poulose [Mon, 10 Dec 2018 18:07:33 +0000 (18:07 +0000)]
arm64: smp: Handle errors reported by the firmware

The __cpu_up() routine ignores the errors reported by the firmware
for a CPU bringup operation and looks for the error status set by the
booting CPU. If the CPU never entered the kernel, we could end up
assuming a stale error status, which otherwise would have been
set/cleared appropriately by the booting CPU.

Reported-by: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: smp: Rework early feature mismatched detection
Will Deacon [Mon, 10 Dec 2018 14:21:13 +0000 (14:21 +0000)]
arm64: smp: Rework early feature mismatched detection

Rather than add additional variables to detect specific early feature
mismatches with secondary CPUs, we can instead dedicate the upper bits
of the CPU boot status word to flag specific mismatches.

This allows us to communicate both granule and VA-size mismatches back
to the primary CPU without the need for additional book-keeping.
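
A sketch of the resulting encoding (constants as they read after the
UL() build fix above):

  #define CPU_STUCK_IN_KERNEL             UL(2)
  /* the upper bits say why the CPU is stuck */
  #define CPU_STUCK_REASON_SHIFT          (8)
  #define CPU_STUCK_REASON_52_BIT_VA      (UL(1) << CPU_STUCK_REASON_SHIFT)
  #define CPU_STUCK_REASON_NO_GRAN        (UL(2) << CPU_STUCK_REASON_SHIFT)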

Tested-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: Kconfig: Re-jig CONFIG options for 52-bit VA
Will Deacon [Mon, 10 Dec 2018 14:15:15 +0000 (14:15 +0000)]
arm64: Kconfig: Re-jig CONFIG options for 52-bit VA

Enabling 52-bit VAs for userspace is pretty confusing, since it requires
you to select "48-bit" virtual addressing in the Kconfig.

Rework the logic so that 52-bit user virtual addressing is advertised in
the "Virtual address space size" choice, along with some help text to
describe its interaction with Pointer Authentication. The EXPERT-only
option to force all user mappings to the 52-bit range is then made
available immediately below the VA size selection.

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: mm: Allow forcing all userspace addresses to 52-bit
Steve Capper [Thu, 6 Dec 2018 22:50:42 +0000 (22:50 +0000)]
arm64: mm: Allow forcing all userspace addresses to 52-bit

On arm64 52-bit VAs are provided to userspace when a hint is supplied to
mmap. This helps maintain compatibility with software that expects at
most 48-bit VAs to be returned.

In order to help identify software that has 48-bit VA assumptions, this
patch allows one to compile a kernel where 52-bit VAs are returned by
default on HW that supports it.

This feature is intended to be for development systems only.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: mm: introduce 52-bit userspace support
Steve Capper [Thu, 6 Dec 2018 22:50:41 +0000 (22:50 +0000)]
arm64: mm: introduce 52-bit userspace support

On arm64 there is optional support for a 52-bit virtual address space.
To exploit this one has to be running with a 64KB page size and be
running on hardware that supports this.

For an arm64 kernel supporting a 48 bit VA with a 64KB page size,
some changes are needed to support a 52-bit userspace:
 * TCR_EL1.T0SZ needs to be 12 instead of 16,
 * TASK_SIZE needs to reflect the new size.

This patch implements the above when the support for 52-bit VAs is
detected at early boot time.

On arm64, userspace address translation is controlled by TTBR0_EL1. As
well as userspace, TTBR0_EL1 controls:
 * The identity mapping,
 * EFI runtime code.

It is possible to run a kernel with an identity mapping that has a
larger VA size than userspace (and for this case __cpu_set_tcr_t0sz()
would set TCR_EL1.T0SZ as appropriate). However, when the conditions for
52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at
12. Thus in this patch, the TCR_EL1.T0SZ size-changing logic is
disabled.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: mm: Prevent mismatched 52-bit VA support
Steve Capper [Thu, 6 Dec 2018 22:50:40 +0000 (22:50 +0000)]
arm64: mm: Prevent mismatched 52-bit VA support

For cases where there is a mismatch in ARMv8.2-LVA support between CPUs
we have to be careful in allowing secondary CPUs to boot if 52-bit
virtual addresses have already been enabled on the boot CPU.

This patch adds code to the secondary startup path. If the boot CPU has
enabled 52-bit VAs then ID_AA64MMFR2_EL1 is checked to see if the
secondary can also enable 52-bit support. If not, the secondary is
prevented from booting and an error message is displayed indicating why.

Technically this patch could be implemented using the cpufeature code
when considering 52-bit userspace support. However, we employ low level
checks here as the cpufeature code won't be able to run if we have
mismatched 52-bit kernel va support.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: mm: Offset TTBR1 to allow 52-bit PTRS_PER_PGD
Steve Capper [Thu, 6 Dec 2018 22:50:39 +0000 (22:50 +0000)]
arm64: mm: Offset TTBR1 to allow 52-bit PTRS_PER_PGD

Enabling 52-bit VAs on arm64 requires that the PGD table expands from 64
entries (for the 48-bit case) to 1024 entries. This quantity,
PTRS_PER_PGD is used as follows to compute which PGD entry corresponds
to a given virtual address, addr:

pgd_index(addr) -> (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)

Userspace addresses are prefixed by 0's, so for a 48-bit userspace
address, uva, the following is true:
((uva >> PGDIR_SHIFT) & (1024 - 1)) == ((uva >> PGDIR_SHIFT) & (64 - 1))

In other words, a 48-bit userspace address will have the same pgd_index
when using PTRS_PER_PGD = 64 and 1024.

Kernel addresses are prefixed by 1's so, given a 48-bit kernel address,
kva, we have the following inequality:
((kva >> PGDIR_SHIFT) & (1024 - 1)) != ((kva >> PGDIR_SHIFT) & (64 - 1))

In other words a 48-bit kernel virtual address will have a different
pgd_index when using PTRS_PER_PGD = 64 and 1024.

If, however, we note that:
kva = (0xFFFF << 48) + lower (where lower[63:48] == 0)
and, PGDIR_SHIFT = 42 (as we are dealing with 64KB PAGE_SIZE)

We can consider:
((kva >> PGDIR_SHIFT) & (1024 - 1)) - ((kva >> PGDIR_SHIFT) & (64 - 1))
 = ((0xFFFF << 6) & 0x3FF) - ((0xFFFF << 6) & 0x3F) // "lower" cancels out
 = 0x3C0

In other words, one can switch PTRS_PER_PGD to the 52-bit value globally
provided that they increment ttbr1_el1 by 0x3C0 * 8 = 0x1E00 bytes when
running with 48-bit kernel VAs (TCR_EL1.T1SZ = 16).

For kernel configuration where 52-bit userspace VAs are possible, this
patch offsets ttbr1_el1 and sets PTRS_PER_PGD corresponding to the
52-bit value.
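
Mirroring the arithmetic above, the offset can be written as follows (a
sketch with the 64KB-page entry counts hard-coded; the kernel derives
them from PGDIR_SHIFT instead):

  /* skip (1024 - 64) = 0x3C0 PGD entries of 8 bytes each: 0x1E00 */
  #define TTBR1_BADDR_4852_OFFSET (((UL(1) << 10) - (UL(1) << 6)) << 3)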

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
[will: added comment to TTBR1_BADDR_4852_OFFSET calculation]
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: mm: Define arch_get_mmap_end, arch_get_mmap_base
Steve Capper [Thu, 6 Dec 2018 22:50:38 +0000 (22:50 +0000)]
arm64: mm: Define arch_get_mmap_end, arch_get_mmap_base

Now that we have DEFAULT_MAP_WINDOW defined, we can define the
arch_get_mmap_end and arch_get_mmap_base helpers to allow for high
addresses in mmap.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: mm: Introduce DEFAULT_MAP_WINDOW
Steve Capper [Thu, 6 Dec 2018 22:50:37 +0000 (22:50 +0000)]
arm64: mm: Introduce DEFAULT_MAP_WINDOW

We wish to introduce a 52-bit virtual address space for userspace but
maintain compatibility with software that assumes the maximum VA space
size is 48 bit.

In order to achieve this, on 52-bit VA systems, we make mmap behave as
if it were running on a 48-bit VA system (unless userspace explicitly
requests a VA where addr[51:48] != 0).

On a system running a 52-bit userspace we need TASK_SIZE to represent
the 52-bit limit as it is used in various places to distinguish between
kernelspace and userspace addresses.

Thus we need a new limit for mmap, stack, ELF loader and EFI (which uses
TTBR0) to represent the non-extended VA space.

This patch introduces DEFAULT_MAP_WINDOW and DEFAULT_MAP_WINDOW_64 and
switches the appropriate logic to use that instead of TASK_SIZE.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago mm: mmap: Allow for "high" userspace addresses
Steve Capper [Thu, 6 Dec 2018 22:50:36 +0000 (22:50 +0000)]
mm: mmap: Allow for "high" userspace addresses

This patch adds support for "high" userspace addresses that are
optionally supported on the system and have to be requested via a hint
mechanism ("high" addr parameter to mmap).

Architectures such as powerpc and x86 achieve this by making changes to
their architectural versions of arch_get_unmapped_* functions. However,
on arm64 we use the generic versions of these functions.

Rather than duplicate the generic arch_get_unmapped_* implementations
for arm64, this patch instead introduces two architectural helper macros
and applies them to arch_get_unmapped_*:
 arch_get_mmap_end(addr) - get mmap upper limit depending on addr hint
 arch_get_mmap_base(addr, base) - get mmap_base depending on addr hint

If these macros are not defined in architectural code then they default
to (TASK_SIZE) and (base) so should not introduce any behavioural
changes to architectures that do not define them.
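
A sketch of those generic-side defaults:

  #ifndef arch_get_mmap_end
  #define arch_get_mmap_end(addr)         (TASK_SIZE)
  #endif

  #ifndef arch_get_mmap_base
  #define arch_get_mmap_base(addr, base)  (base)
  #endif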

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: kasan: Increase stack size for KASAN_EXTRA
Qian Cai [Fri, 7 Dec 2018 22:34:49 +0000 (17:34 -0500)]
arm64: kasan: Increase stack size for KASAN_EXTRA

If the kernel is configured with KASAN_EXTRA, the stack size is
increased significantly due to setting the GCC -fstack-reuse option to
"none" [1]. As a result, it can trigger a stack overrun quite often with
a 32k stack size when compiled using GCC 8. For example, this reproducer

  https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise06.c

can trigger a "corrupted stack end detected inside scheduler" very
reliably with CONFIG_SCHED_STACK_END_CHECK enabled. There are other
reports at:

  https://lore.kernel.org/lkml/1542144497.12945.29.camel@gmx.us/
  https://lore.kernel.org/lkml/721E7B42-2D55-4866-9C1A-3E8D64F33F9C@gmx.us/

There are just too many functions that could have a large stack with
KASAN_EXTRA due to large local variables, and they are called over and
over again without being able to reuse their stacks. Some noticeable
ones are:

size function
7536 shrink_inactive_list
7440 shrink_page_list
6560 fscache_stats_show
3920 jbd2_journal_commit_transaction
3216 try_to_unmap_one
3072 migrate_page_move_mapping
3584 migrate_misplaced_transhuge_page
3920 ip_vs_lblcr_schedule
4304 lpfc_nvme_info_show
3888 lpfc_debugfs_nvmestat_data.constprop

There are another 49 functions over 2k in size when compiling the kernel
with "-Wframe-larger-than=" on this machine. Hence, it is too much work
to change the Makefile for each object so that it is compiled without
-fsanitize-address-use-after-scope individually.

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715#c23
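
The change itself is small; a sketch of the <asm/memory.h> side, where
THREAD_SHIFT is 14 + KASAN_THREAD_SHIFT, so this doubles the stack again
to 64k:

  #ifdef CONFIG_KASAN_EXTRA
  #define KASAN_THREAD_SHIFT      2
  #else
  #define KASAN_THREAD_SHIFT      1
  #endif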

Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: Fix minor issues with the dcache_by_line_op macro
Will Deacon [Mon, 10 Dec 2018 13:39:48 +0000 (13:39 +0000)]
arm64: Fix minor issues with the dcache_by_line_op macro

The dcache_by_line_op macro suffers from a couple of small problems:

First, the GAS directives that are currently being used rely on
assembler behavior that is not documented, and probably not guaranteed
to produce the correct behavior going forward. As a result, we end up
with some undefined symbols in cache.o:

$ nm arch/arm64/mm/cache.o
         ...
         U civac
         ...
         U cvac
         U cvap
         U cvau

This is due to the fact that the comparisons used to select the
operation type in the dcache_by_line_op macro are comparing symbols
not strings, and even though it seems that GAS is doing the right
thing here (undefined symbols by the same name are equal to each
other), it seems unwise to rely on this.

Second, when patching in a DC CVAP instruction on CPUs that support it,
the fallback path consists of a DC CVAU instruction which may be
affected by CPU errata that require ARM64_WORKAROUND_CLEAN_CACHE.

Solve these issues by unrolling the various maintenance routines and
using the conditional directives that are documented as operating on
strings. To avoid the complexity of nested alternatives, we move the
DC CVAP patching to __clean_dcache_area_pop, falling back to a branch
to __clean_dcache_area_poc if DCPOP is not supported by the CPU.

Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: Add configuration/documentation for Cortex-A76 erratum 1165522
Marc Zyngier [Thu, 6 Dec 2018 17:31:26 +0000 (17:31 +0000)]
arm64: Add configuration/documentation for Cortex-A76 erratum 1165522

Now that the infrastructure to handle erratum 1165522 is in place,
let's make it a selectable option and add the required documentation.

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: KVM: Handle ARM erratum 1165522 in TLB invalidation
Marc Zyngier [Thu, 6 Dec 2018 17:31:25 +0000 (17:31 +0000)]
arm64: KVM: Handle ARM erratum 1165522 in TLB invalidation

In order to avoid TLB corruption whilst invalidating TLBs on CPUs
affected by erratum 1165522, we need to prevent S1 page tables
from being usable.

For this, we set the EL1 S1 MMU on, and also disable the page table
walker (by setting the TCR_EL1.EPD* bits to 1).

This ensures that once we switch to the EL1/EL0 translation regime,
speculated AT instructions won't be able to parse the page tables.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: KVM: Add synchronization on translation regime change for erratum 1165522
Marc Zyngier [Thu, 6 Dec 2018 17:31:24 +0000 (17:31 +0000)]
arm64: KVM: Add synchronization on translation regime change for erratum 1165522

In order to ensure that flipping HCR_EL2.TGE is done at the right
time when switching translation regime, let's insert the required ISBs,
which will be patched in when erratum 1165522 is detected.

Take this opportunity to add the missing include of asm/alternative.h
which was getting there by pure luck.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: KVM: Force VHE for systems affected by erratum 1165522
Marc Zyngier [Thu, 6 Dec 2018 17:31:23 +0000 (17:31 +0000)]
arm64: KVM: Force VHE for systems affected by erratum 1165522

In order to easily mitigate ARM erratum 1165522, we need to force
affected CPUs to run in VHE mode if using KVM.

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: Add TCR_EPD{0,1} definitions
Marc Zyngier [Thu, 6 Dec 2018 17:31:22 +0000 (17:31 +0000)]
arm64: Add TCR_EPD{0,1} definitions

We are soon going to play with TCR_EL1.EPD{0,1}, so let's add the
relevant definitions.

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: KVM: Install stage-2 translation before enabling traps
Marc Zyngier [Thu, 6 Dec 2018 17:31:21 +0000 (17:31 +0000)]
arm64: KVM: Install stage-2 translation before enabling traps

It is a bit odd that we only install stage-2 translation after having
cleared HCR_EL2.TGE, which means that there is a window during which
AT requests could fail as stage-2 is not configured yet.

Let's move stage-2 configuration before we clear TGE, making the
guest entry sequence clearer: we first configure all the guest stuff,
then only switch to the guest translation regime.

While we're at it, do the same thing for !VHE. It doesn't hurt,
and keeps things symmetric.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago KVM: arm64: Rework detection of SVE, !VHE systems
Marc Zyngier [Thu, 6 Dec 2018 17:31:20 +0000 (17:31 +0000)]
KVM: arm64: Rework detection of SVE, !VHE systems

An SVE system is so far the only case where we mandate VHE. As we're
starting to grow this set of requirements, let's slightly rework the way
we deal with that situation, allowing for easy extension of this check.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: KVM: Make VHE Stage-2 TLB invalidation operations non-interruptible
Marc Zyngier [Thu, 6 Dec 2018 17:31:19 +0000 (17:31 +0000)]
arm64: KVM: Make VHE Stage-2 TLB invalidation operations non-interruptible

Contrary to the non-VHE version of the TLB invalidation helpers, the VHE
code has interrupts enabled, meaning that we can take an interrupt in
the middle of such a sequence, and start running something else with
HCR_EL2.TGE cleared.

That's really not a good idea.

Take the heavy-handed option and disable interrupts in
__tlb_switch_to_guest_vhe, restoring them in __tlb_switch_to_host_vhe.
The latter also gains an ISB in order to make sure that TGE really has
taken effect.

Cc: stable@vger.kernel.org
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: remove arm64ksyms.c
Mark Rutland [Fri, 7 Dec 2018 18:08:23 +0000 (18:08 +0000)]
arm64: remove arm64ksyms.c

Now that arm64ksyms.c has been reduced to a stub, let's remove it
entirely. New exports should be associated with their function
definition.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: ftrace: use asm EXPORT_SYMBOL()
Mark Rutland [Fri, 7 Dec 2018 18:08:22 +0000 (18:08 +0000)]
arm64: ftrace: use asm EXPORT_SYMBOL()

For a while now it's been possible to use EXPORT_SYMBOL() in assembly
files, which allows us to place exports immediately after assembly
functions, as we do for C functions.

As a step towards removing arm64ksyms.c, let's move the ftrace exports
to the assembly files the functions are defined in.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: string: use asm EXPORT_SYMBOL()
Mark Rutland [Fri, 7 Dec 2018 18:08:21 +0000 (18:08 +0000)]
arm64: string: use asm EXPORT_SYMBOL()

For a while now it's been possible to use EXPORT_SYMBOL() in assembly
files, which allows us to place exports immediately after assembly
functions, as we do for C functions.

As a step towards removing arm64ksyms.c, let's move the string routine
exports to the assembly files the functions are defined in. Routines
which should only be exported for !KASAN builds are exported using the
EXPORT_SYMBOL_NOKASAN() helper.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: uaccess: use asm EXPORT_SYMBOL()
Mark Rutland [Fri, 7 Dec 2018 18:08:20 +0000 (18:08 +0000)]
arm64: uaccess: use asm EXPORT_SYMBOL()

For a while now it's been possible to use EXPORT_SYMBOL() in assembly
files, which allows us to place exports immediately after assembly
functions, as we do for C functions.

As a step towards removing arm64ksyms.c, let's move the uaccess exports
to the assembly files the functions are defined in.  As we have to
include <asm/assembler.h>, the existing includes are fixed to follow the
usual ordering conventions.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: page: use asm EXPORT_SYMBOL()
Mark Rutland [Fri, 7 Dec 2018 18:08:19 +0000 (18:08 +0000)]
arm64: page: use asm EXPORT_SYMBOL()

For a while now it's been possible to use EXPORT_SYMBOL() in assembly
files, which allows us to place exports immediately after assembly
functions, as we do for C functions.

As a step towards removing arm64ksyms.c, let's move the copy_page and
clear_page exports to the assembly files the functions are defined in.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: smccc: use asm EXPORT_SYMBOL()
Mark Rutland [Fri, 7 Dec 2018 18:08:18 +0000 (18:08 +0000)]
arm64: smccc: use asm EXPORT_SYMBOL()

For a while now it's been possible to use EXPORT_SYMBOL() in assembly
files, which allows us to place exports immediately after assembly
functions, as we do for C functions.

As a step towards removing arm64ksyms.c, let's move the SMCCC exports to
the assembly file the functions are defined in.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: tishift: use asm EXPORT_SYMBOL()
Mark Rutland [Fri, 7 Dec 2018 18:08:17 +0000 (18:08 +0000)]
arm64: tishift: use asm EXPORT_SYMBOL()

For a while now it's been possible to use EXPORT_SYMBOL() in assembly
files, which allows us to place exports immediately after assembly
functions, as we do for C functions.

As a step towards removing arm64ksyms.c, let's move the tishift exports
to the assembly file the functions are defined in.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: add EXPORT_SYMBOL_NOKASAN()
Mark Rutland [Fri, 7 Dec 2018 18:08:16 +0000 (18:08 +0000)]
arm64: add EXPORT_SYMBOL_NOKASAN()

So that we can export symbols directly from assembly files, let's make
use of the generic <asm/export.h>. We have a few symbols that we'll want
to conditionally export for !KASAN kernel builds, so we add a helper for
that in <asm/assembler.h>.
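
A sketch of the helper: with KASAN enabled the export is simply dropped,
so the instrumented C wrappers get exported instead:

  #ifdef CONFIG_KASAN
  #define EXPORT_SYMBOL_NOKASAN(name)
  #else
  #define EXPORT_SYMBOL_NOKASAN(name)     EXPORT_SYMBOL(name)
  #endif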

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: move memstart_addr export inline
Mark Rutland [Fri, 7 Dec 2018 18:08:15 +0000 (18:08 +0000)]
arm64: move memstart_addr export inline

Since we define memstart_addr in a C file, we can have the export
immediately after the definition of the symbol, as we do elsewhere.

As a step towards removing arm64ksyms.c, move the export of
memstart_addr to init.c, where the symbol is defined.
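
After the move, the definition and its export sit together (sketch):

  /* arch/arm64/mm/init.c */
  s64 memstart_addr __ro_after_init = -1;
  EXPORT_SYMBOL(memstart_addr);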

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: remove bitop exports
Mark Rutland [Fri, 7 Dec 2018 18:08:14 +0000 (18:08 +0000)]
arm64: remove bitop exports

Now that the arm64 bitops are inlines built atop of the regular atomics,
we don't need to export anything.

Remove the redundant exports.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: cmpxchg: Use "K" instead of "L" for ll/sc immediate constraint
Will Deacon [Tue, 18 Sep 2018 08:39:55 +0000 (09:39 +0100)]
arm64: cmpxchg: Use "K" instead of "L" for ll/sc immediate constraint

The "L" AArch64 machine constraint, which we use for the "old" value in
an LL/SC cmpxchg(), generates an immediate that is suitable for a 64-bit
logical instruction. However, for cmpxchg() operations on types smaller
than 64 bits, this constraint can result in an invalid instruction which
is correctly rejected by GAS, such as EOR W1, W1, #0xffffffff.

Whilst we could special-case the constraint based on the cmpxchg size,
it's far easier to change the constraint to "K" and put up with using
a register for large 64-bit immediates. For out-of-line LL/SC atomics,
this is all moot anyway.

Reported-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: percpu: Rewrite per-cpu ops to allow use of LSE atomics
Will Deacon [Thu, 13 Sep 2018 14:56:16 +0000 (15:56 +0100)]
arm64: percpu: Rewrite per-cpu ops to allow use of LSE atomics

Our percpu code is a bit of an inconsistent mess:

  * It rolls its own xchg(), but reuses cmpxchg_local()
  * It uses various different flavours of preempt_{enable,disable}()
  * It returns values even for the non-returning RmW operations
  * It makes no use of LSE atomics outside of the cmpxchg() ops
  * There are individual macros for different sizes of access, but these
    are all funneled through a switch statement rather than dispatched
    directly to the relevant case

This patch rewrites the per-cpu operations to address these shortcomings.
Whilst the new code is a lot cleaner, the big advantage is that we can
use the non-returning ST- atomic instructions when we have LSE.

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: Avoid masking "old" for LSE cmpxchg() implementation
Will Deacon [Thu, 13 Sep 2018 13:28:33 +0000 (14:28 +0100)]
arm64: Avoid masking "old" for LSE cmpxchg() implementation

The CAS instructions implicitly access only the relevant bits of the "old"
argument, so there is no need for explicit masking via type-casting as
there is in the LL/SC implementation.

Move the casting into the LL/SC code and remove it altogether for the LSE
implementation.

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: Avoid redundant type conversions in xchg() and cmpxchg()
Will Deacon [Thu, 13 Sep 2018 12:30:45 +0000 (13:30 +0100)]
arm64: Avoid redundant type conversions in xchg() and cmpxchg()

Our atomic instructions (either LSE atomics or LDXR/STXR sequences)
natively support byte, half-word, word and double-word memory accesses
so there is no need to mask the data register prior to being stored.

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: kexec_file: forbid kdump via kexec_file_load()
James Morse [Fri, 7 Dec 2018 10:14:39 +0000 (10:14 +0000)]
arm64: kexec_file: forbid kdump via kexec_file_load()

Now that kexec_walk_memblock() can do the crash-kernel placement itself,
architectures that don't support kdump via kexec_file_load() need to
explicitly forbid it.

We don't support this on arm64 until the kernel can add the elfcorehdr
and usable-memory-range fields to the DT. Without these the crash-kernel
overwrites the previous kernel's memory during startup.

Add a check to refuse crash image loading.

Reviewed-by: Bhupesh Sharma <bhsharma@redhat.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: preempt: Provide our own implementation of asm/preempt.h
Will Deacon [Thu, 20 Sep 2018 09:26:40 +0000 (10:26 +0100)]
arm64: preempt: Provide our own implementation of asm/preempt.h

The asm-generic/preempt.h implementation doesn't make use of the
PREEMPT_NEED_RESCHED flag, since this can interact badly with load/store
architectures which rely on the preempt_count word being unchanged across
an interrupt.

However, since we're a 64-bit architecture and the preempt count is
only 32 bits wide, we can simply pack it next to the resched flag and
load the whole thing in one go, so that a dec-and-test operation doesn't
need to load twice.
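
A sketch of the packed thread_info layout this relies on; the field
order flips on big-endian so that the count stays in the lower half:

  union {
          u64     preempt_count;  /* read in one go with a 64-bit load */
          struct {
  #ifdef CONFIG_CPU_BIG_ENDIAN
                  u32     need_resched;
                  u32     count;
  #else
                  u32     count;
                  u32     need_resched;
  #endif
          } preempt;
  };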

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago preempt: Move PREEMPT_NEED_RESCHED definition into arch code
Will Deacon [Wed, 19 Sep 2018 12:39:26 +0000 (13:39 +0100)]
preempt: Move PREEMPT_NEED_RESCHED definition into arch code

PREEMPT_NEED_RESCHED is never used directly, so move it into the arch
code where it can potentially be implemented using either a different
bit in the preempt count or as an entirely separate entity.

Cc: Robert Love <rml@tech9.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years ago arm64: hugetlb: Register hugepages during arch init
Allen Pais [Tue, 23 Oct 2018 01:06:57 +0000 (06:36 +0530)]
arm64: hugetlb: Register hugepages during arch init

Add an hstate for each supported hugepage size using an arch initcall;
a sketch follows the list below.

* no hugepage parameters

  Without hugepage parameters, only a default hugepage size is
  available for dynamic allocation. This differs from, for example,
  x86_64 and sparc64, where all supported hugepage sizes are available.

* only default_hugepagesz= is specified, set to something other than HPAGE_SIZE

  Even though default_hugepagesz= is set to a valid hugepage size,
  it is treated as unsupported and reverted to HPAGE_SIZE. This
  behaviour also differs from x86_64 and sparc64.
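
A sketch of the initcall for a 4K translation granule (the exact set
of sizes registered depends on the configured page size):

    static int __init hugetlbpage_init(void)
    {
            hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);     /* 1G */
            hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);     /* 2M */
            return 0;
    }
    arch_initcall(hugetlbpage_init);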

Acked-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Tom Saeger <tom.saeger@oracle.com>
Signed-off-by: Dmitry Klochkov <dmitry.klochkov@oracle.com>
Signed-off-by: Allen Pais <allen.pais@oracle.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: crypto: add NEON accelerated XOR implementation
Jackie Liu [Tue, 4 Dec 2018 01:43:23 +0000 (09:43 +0800)]
arm64: crypto: add NEON accelerated XOR implementation

This is a NEON acceleration method that can improve
performance by approximately 20%. I obtained the following
data with CentOS 7.5 on Huawei's HISI1616 chip:

[ 93.837726] xor: measuring software checksum speed
[ 93.874039]   8regs  : 7123.200 MB/sec
[ 93.914038]   32regs : 7180.300 MB/sec
[ 93.954043]   arm64_neon: 9856.000 MB/sec
[ 93.954047] xor: using function: arm64_neon (9856.000 MB/sec)

I believe this code can bring some optimization to all arm64
platforms. Thanks to Ard Biesheuvel for his suggestions.

Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64/neon: add workaround for ambiguous C99 stdint.h types
Jackie Liu [Tue, 4 Dec 2018 01:43:22 +0000 (09:43 +0800)]
arm64/neon: add workaround for ambiguous C99 stdint.h types

In a way similar to ARM commit 09096f6a0ee2 ("ARM: 7822/1: add workaround
for ambiguous C99 stdint.h types"), this patch redefines the macros that
are used in stdint.h so its definitions of uint64_t and int64_t are
compatible with those of the kernel.

This patch comes from: https://patchwork.kernel.org/patch/3540001/
Written by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

We mark this file as private so that we don't have to override
asm/types.h.
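
A sketch of the override (mirroring the ARM version referenced above;
treat it as illustrative rather than the exact hunk):

    /* GCC types int64_t as "long" on AArch64, but the kernel's
     * s64/u64 are "long long"; override the builtin macros before
     * arm_neon.h is pulled in so the two definitions agree. */
    #ifdef __INT64_TYPE__
    #undef __INT64_TYPE__
    #define __INT64_TYPE__          long long
    #endif

    #ifdef __UINT64_TYPE__
    #undef __UINT64_TYPE__
    #define __UINT64_TYPE__         unsigned long long
    #endif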

Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: entry: Remove confusing comment
Will Deacon [Tue, 19 Jun 2018 13:08:24 +0000 (14:08 +0100)]
arm64: entry: Remove confusing comment

The comment about SYS_MEMBARRIER_SYNC_CORE relying on ERET being
context-synchronizing is confusing and misplaced with kpti. Given that
this is already documented under Documentation/ (see arch-support.txt
for membarrier), remove the comment altogether.

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: entry: Place an SB sequence following an ERET instruction
Will Deacon [Thu, 14 Jun 2018 10:23:38 +0000 (11:23 +0100)]
arm64: entry: Place an SB sequence following an ERET instruction

Some CPUs can speculate past an ERET instruction and potentially perform
speculative accesses to memory before processing the exception return.
Since the register state is often controlled by a lower privilege level
at the point of an ERET, this could potentially be used as part of a
side-channel attack.

This patch emits an SB sequence after each ERET so that speculation is
held up on exception return.

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: Add support for SB barrier and patch in over DSB; ISB sequences
Will Deacon [Thu, 14 Jun 2018 10:21:34 +0000 (11:21 +0100)]
arm64: Add support for SB barrier and patch in over DSB; ISB sequences

We currently use a DSB; ISB sequence to inhibit speculation in set_fs().
Whilst this works for current CPUs, future CPUs may implement a new SB
barrier instruction which acts as an architected speculation barrier.

On CPUs that support it, patch in an SB; NOP sequence over the DSB; ISB
sequence and advertise the presence of the new instruction to userspace.
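
As a sketch of the idea (the encoding is the architected SB opcode;
the macro, barrier flavour and capability names here are assumptions):

    /* Older assemblers have no SB mnemonic, so emit the raw encoding
     * and patch it in over DSB; ISB where the CPU advertises SB. */
    #define SB_BARRIER_INSN         ".inst 0xd50330ff\n"

    #define spec_bar()                                              \
            asm volatile(ALTERNATIVE("dsb nsh\nisb\n",              \
                                     SB_BARRIER_INSN "nop\n",       \
                                     ARM64_HAS_SB) : : : "memory")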

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: kexec_file: Refactor setup_dtb() to consolidate error checking
Will Deacon [Thu, 6 Dec 2018 15:02:45 +0000 (15:02 +0000)]
arm64: kexec_file: Refactor setup_dtb() to consolidate error checking

setup_dtb() is a little difficult to read. This is largely because it
duplicates the FDT -> Linux errno conversion for every intermediate
return value, but also because of silly cosmetic things like naming
and formatting.

Given that this is all brand new, refactor the function to get us off on
the right foot.
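
A sketch of the consolidated shape (the properties touched and the
kimage fields are placeholders, not the full function):

    static int setup_dtb(struct kimage *image, void *dtb)
    {
            int off = fdt_path_offset(dtb, "/chosen");
            int ret;

            if (off < 0)
                    return -EINVAL;

            ret = fdt_setprop_string(dtb, off, "bootargs", image->cmdline_buf);
            if (ret)
                    goto out;

            ret = fdt_setprop_u64(dtb, off, "linux,initrd-start",
                                  image->arch.initrd_mem);
            if (ret)
                    goto out;
    out:
            /* one FDT -> Linux errno conversion instead of one per call */
            if (ret)
                    return (ret == -FDT_ERR_NOSPACE) ? -ENOMEM : -EINVAL;
            return 0;
    }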

Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: kexec_file: add kaslr support
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:55 +0000 (14:52 +0900)]
arm64: kexec_file: add kaslr support

Adding "kaslr-seed" to dtb enables triggering kaslr, or kernel virtual
address randomization, at secondary kernel boot. We always do this as
it will have no harm on kaslr-incapable kernel.

We don't have any "switch" to turn off this feature directly, but still
can suppress it by passing "nokaslr" as a kernel boot argument.
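
A sketch of the property setup (rng_is_initialized(),
get_random_bytes() and fdt_setprop_u64() are real interfaces; the
surrounding shape is assumed):

    u64 seed;

    if (rng_is_initialized()) {
            get_random_bytes(&seed, sizeof(seed));
            ret = fdt_setprop_u64(dtb, chosen_off, "kaslr-seed", seed);
            if (ret)
                    return -EINVAL;
    }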

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[will: Use rng_is_initialized()]
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: kexec_file: add kernel signature verification support
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:54 +0000 (14:52 +0900)]
arm64: kexec_file: add kernel signature verification support

With this patch, kernel verification can be done without the IMA
security subsystem enabled; turn on CONFIG_KEXEC_VERIFY_SIG instead.

On x86, a signature is embedded into the PE (Microsoft's format)
header of the binary. Since arm64's "Image" can also be seen as a PE
file when CONFIG_EFI is enabled, we adopt this format for kernel
signing.

You can create a signed kernel image with:
    $ sbsign --key ${KEY} --cert ${CERT} Image

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
[will: removed useless pr_debug()]
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: capabilities: Batch cpu_enable callbacks
Suzuki K Poulose [Fri, 30 Nov 2018 17:18:06 +0000 (17:18 +0000)]
arm64: capabilities: Batch cpu_enable callbacks

We use a stop_machine() call for each available capability to
enable it on all the CPUs available at boot time. Instead, batch
the cpu_enable callbacks into a single stop_machine() call to
save some time.
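
A sketch of the batching (helper and array names assumed):

    /* One machine-wide rendezvous enables every detected capability,
     * instead of one stop_machine() per capability. */
    static int enable_all_cpu_capabilities(void *unused)
    {
            int i;

            for (i = 0; i < ARM64_NCAPS; i++) {
                    const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[i];

                    if (cap && cpus_have_cap(cap->capability) && cap->cpu_enable)
                            cap->cpu_enable(cap);
            }
            return 0;
    }

    stop_machine(enable_all_cpu_capabilities, NULL, cpu_online_mask);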

Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: capabilities: Use linear array for detection and verification
Suzuki K Poulose [Fri, 30 Nov 2018 17:18:05 +0000 (17:18 +0000)]
arm64: capabilities: Use linear array for detection and verification

Use the sorted list of capability entries for detection and
verification.

Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: capabilities: Optimize this_cpu_has_cap
Suzuki K Poulose [Fri, 30 Nov 2018 17:18:04 +0000 (17:18 +0000)]
arm64: capabilities: Optimize this_cpu_has_cap

Make use of the sorted capability list to access the capability
entry in this_cpu_has_cap() to avoid iterating over the two
tables.
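
A sketch of the resulting lookup (the pointer array comes from the
"Speed up capability lookup" patch below; names assumed):

    bool this_cpu_has_cap(unsigned int n)
    {
            const struct arm64_cpu_capabilities *cap;

            if (n >= ARM64_NCAPS)
                    return false;

            /* one indexed load replaces a scan of both tables */
            cap = cpu_hwcaps_ptrs[n];
            return cap && cap->matches(cap, SCOPE_LOCAL_CPU);
    }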

Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: capabilities: Speed up capability lookup
Suzuki K Poulose [Fri, 30 Nov 2018 17:18:03 +0000 (17:18 +0000)]
arm64: capabilities: Speed up capability lookup

We maintain two separate tables of capabilities, errata and features,
which decide the system capabilities. We iterate over each of these
tables for various operations (e.g. detection, verification, etc.).
We do not have a way to map a system "capability" to its entry
(i.e., cap -> struct arm64_cpu_capabilities), which is needed for
this_cpu_has_cap(). So we iterate over the table one entry at a time
to find the entry and then do the operation. Also, this prevents
us from optimizing the way we "enable" the capabilities on the
CPUs, where we now issue a stop_machine() for each available
capability.

One solution is to merge the two tables into a single table,
sorted by the capability. But this has the following
disadvantages:
  - We lose the "classification" of an erratum vs. a feature
  - It is quite easy to make a mistake when adding an entry,
    unless we sort the table at runtime.

So we maintain a separate array of pointers to the capability entries,
sorted by the "cap number", initialized at boot time (sketched below).
The only restriction is that we can have only one "entry" per
capability. While at it, remove the duplicate declaration of the
arm64_errata table.
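
A sketch of building that array (names assumed):

    static const struct arm64_cpu_capabilities *cpu_hwcaps_ptrs[ARM64_NCAPS];

    static void __init init_cpu_hwcaps_indirect_list(
                    const struct arm64_cpu_capabilities *caps)
    {
            /* one slot per cap number; errata and features share it */
            for (; caps->matches; caps++)
                    cpu_hwcaps_ptrs[caps->capability] = caps;
    }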

Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoinclude: pe.h: remove message[] from mz header definition
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:53 +0000 (14:52 +0900)]
include: pe.h: remove message[] from mz header definition

The message[] field is no longer part of the mz header definition.

This change is crucial for enabling kexec_file_load on arm64 because
arm64's "Image" binary, viewed as a PE file, doesn't have any data for
this field, so the following check in pefile_parse_binary() would fail:

chkaddr(cursor, mz->peaddr, sizeof(*pe));

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: kexec_file: invoke the kernel without purgatory
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:52 +0000 (14:52 +0900)]
arm64: kexec_file: invoke the kernel without purgatory

On arm64, purgatory would do almost nothing, so just invoke the
secondary kernel directly by jumping into its entry code.

In this case, cpu_soft_restart() must be called with the dtb address
in its fifth argument, but the behaviour stays compatible with the
kexec_load case as long as that argument is null.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: kexec_file: allow for loading Image-format kernel
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:50 +0000 (14:52 +0900)]
arm64: kexec_file: allow for loading Image-format kernel

This patch provides kexec_file_ops for an "Image"-format kernel. In
this implementation, a binary is always loaded at a fixed offset,
identified by the text_offset field of its header.

Regarding signature verification for trusted boot, this patch doesn't
contain CONFIG_KEXEC_VERIFY_SIG support, which is to be added later
in this series, but file-attribute-based verification is still a
viable option by enabling the IMA security subsystem.

You can sign (label) a to-be-kexec'ed kernel image on the target file
system with:
    $ evmctl ima_sign --key /path/to/private_key.pem Image

On a live system, you must have IMA enforced with, at least, the
following security policy:
    "appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig"

See more details about IMA here:
    https://sourceforge.net/p/linux-ima/wiki/Home/

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: kexec_file: load initrd and device-tree
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:49 +0000 (14:52 +0900)]
arm64: kexec_file: load initrd and device-tree

load_other_segments() is expected to allocate and place all the
necessary memory segments other than the kernel, including the initrd
and the device-tree blob (and the elf core header for crash). While
most of the code was borrowed from its kexec-tools counterpart, users
are not allowed to specify a dtb explicitly; instead, the dtb presented
by the original boot loader is reused.

arch_kimage_kernel_post_load_cleanup() is responsible for freeing the
arm64-specific data allocated in load_other_segments().

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: enable KEXEC_FILE config
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:48 +0000 (14:52 +0900)]
arm64: enable KEXEC_FILE config

Modify arm64/Kconfig to enable kexec_file_load support.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: cpufeature: add MMFR0 helper functions
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:47 +0000 (14:52 +0900)]
arm64: cpufeature: add MMFR0 helper functions

These helper functions for the MMFR0 register will be used later by
the kexec_file loader.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: add image head flag definitions
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:46 +0000 (14:52 +0900)]
arm64: add image head flag definitions

These image-header flags will be used later by the kexec_file loader.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agokexec_file: kexec_walk_memblock() only walks a dedicated region at kdump
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:44 +0000 (14:52 +0900)]
kexec_file: kexec_walk_memblock() only walks a dedicated region at kdump

In the kdump case, there exists only one dedicated memblock region as
usable memory (crashk_res). With this patch, kexec_walk_memblock() runs
a given callback function on this region.

Cosmetic change: 0 becomes MEMBLOCK_NONE in for_each_free_mem_range*().
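
A sketch of the kdump path (overall shape assumed):

    static int kexec_walk_memblock(struct kexec_buf *kbuf,
                                   int (*func)(struct resource *, void *))
    {
            /* kdump: the only usable memory is the dedicated region */
            if (kbuf->image->type == KEXEC_TYPE_CRASH)
                    return func(&crashk_res, kbuf);

            /* otherwise, walk the free memblock ranges as before
             * (omitted from this sketch) */
            return 0;
    }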

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Dave Young <dyoung@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agopowerpc, kexec_file: factor out memblock-based arch_kexec_walk_mem()
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:43 +0000 (14:52 +0900)]
powerpc, kexec_file: factor out memblock-based arch_kexec_walk_mem()

The memblock list is another source for the usable system memory
layout. Move powerpc's arch_kexec_walk_mem() to common code so that
other memblock-based architectures, particularly arm64, can also
utilise it. The moved function is renamed to kexec_walk_memblock() and
integrated into kexec_locate_mem_hole(), which is now usable for all
architectures with no need to override arch_kexec_walk_mem().

With this change, arch_kexec_walk_mem() no longer needs to be a weak
function and is renamed to kexec_walk_resources().

Since powerpc doesn't support kdump in its kexec_file_load(), the
current kexec_walk_memblock() won't work for kdump either in this form;
this will be fixed in the next patch.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Acked-by: Dave Young <dyoung@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agos390, kexec_file: drop arch_kexec_mem_walk()
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:42 +0000 (14:52 +0900)]
s390, kexec_file: drop arch_kexec_mem_walk()

Since s390 already knows where to locate buffers, calling
arch_kexec_mem_walk() makes no sense, so we can just drop it; kbuf->mem
indicates this, while all other architectures set it to 0 initially.

This change is preparatory work for the next patch, where all the
variant memory walks, whether over system resources or memblock, will
be put in one common place so that it satisfies all the architectures'
needs.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Philipp Rudo <prudo@linux.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agokexec_file: make kexec_image_post_load_cleanup_default() global
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:41 +0000 (14:52 +0900)]
kexec_file: make kexec_image_post_load_cleanup_default() global

Change this function from static to global so that arm64 can implement
its own arch_kimage_file_post_load_cleanup() later using
kexec_image_post_load_cleanup_default().

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Dave Young <dyoung@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoasm-generic: add kexec_file_load system call to unistd.h
AKASHI Takahiro [Thu, 15 Nov 2018 05:52:40 +0000 (14:52 +0900)]
asm-generic: add kexec_file_load system call to unistd.h

The initial user of this system call number is arm64.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agodrivers/perf: Add Cavium ThunderX2 SoC UNCORE PMU driver
Kulkarni, Ganapatrao [Thu, 6 Dec 2018 11:51:31 +0000 (11:51 +0000)]
drivers/perf: Add Cavium ThunderX2 SoC UNCORE PMU driver

This patch adds a perf driver for the PMU UNCORE devices: the DDR4
Memory Controller (DMC) and the Level 3 Cache (L3C). Each PMU supports
up to 4 counters. All counters lack an overflow interrupt and are
sampled periodically.
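
A sketch of the periodic sampling via hrtimer (structure and helper
names assumed):

    static enum hrtimer_restart tx2_hrtimer_handler(struct hrtimer *t)
    {
            struct tx2_uncore_pmu *pmu =
                    container_of(t, struct tx2_uncore_pmu, hrtimer);

            /* fold the hardware deltas into the perf event counts
             * before the 32-bit counters can wrap */
            tx2_uncore_update_all(pmu);

            hrtimer_forward_now(t, pmu->hrtimer_interval);
            return HRTIMER_RESTART;
    }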

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
[will: consistent enum cpuhp_state naming]
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoDocumentation: perf: Add documentation for ThunderX2 PMU uncore driver
Kulkarni, Ganapatrao [Thu, 6 Dec 2018 11:51:27 +0000 (11:51 +0000)]
Documentation: perf: Add documentation for ThunderX2 PMU uncore driver

The SoC has PMU support in its L3 cache controller (L3C) and in the
DDR4 Memory Controller (DMC).

Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
[will: minor spelling and format fixes, dropped events list]
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: capabilities: Merge duplicate entries for Qualcomm erratum 1003
Suzuki K Poulose [Fri, 30 Nov 2018 17:18:02 +0000 (17:18 +0000)]
arm64: capabilities: Merge duplicate entries for Qualcomm erratum 1003

Remove duplicate entries for Qualcomm erratum 1003. Since the entries
are not purely based on generic MIDR checks, use the multi_cap_entry
type to merge the entries.

Cc: Christopher Covington <cov@codeaurora.org>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: capabilities: Merge duplicate Cavium erratum entries
Suzuki K Poulose [Fri, 30 Nov 2018 17:18:01 +0000 (17:18 +0000)]
arm64: capabilities: Merge duplicate Cavium erratum entries

Merge duplicate entries for a single capability using the midr
range list for Cavium errata 30115 and 27456.

Cc: Andrew Pinski <apinski@cavium.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
5 years agoarm64: capabilities: Merge entries for ARM64_WORKAROUND_CLEAN_CACHE
Suzuki K Poulose [Fri, 30 Nov 2018 17:18:00 +0000 (17:18 +0000)]
arm64: capabilities: Merge entries for ARM64_WORKAROUND_CLEAN_CACHE

We have two entries for the ARM64_WORKAROUND_CLEAN_CACHE capability:

1) ARM Errata 826319, 827319, 824069, 819472 on A53 r0p[012]
2) ARM Errata 819472 on A53 r0p[01]

Both have the same workaround. Merge these entries to avoid
duplicate entries for a single capability. Add a new Kconfig
entry to control the "capability" entry, to make it easier
to handle combinations of the CONFIGs.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>