The AArch64 SPSR_EL1 register is architecturally mandated to
be mapped to the AArch32 SPSR_svc register. This means its
state should live in QEMU's env->banked_spsr[1] field.
Correct the various places in the code that incorrectly
put it in banked_spsr[0].
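As a sketch, the bank-index mapping this implies looks like the
following (the helper shown is illustrative and assumes QEMU's usual
AArch32 bank numbering, where SVC is bank 1):

    static inline unsigned int aarch64_banked_spsr_index(unsigned int el)
    {
        /* Map an AArch64 exception level to the banked_spsr[] slot
         * that architecturally holds the same state: SPSR_EL1 aliases
         * SPSR_svc, so EL1 must map to bank 1, not bank 0.
         */
        static const unsigned int map[4] = {
            [1] = 1, /* EL1: SPSR_svc */
            [2] = 6, /* EL2: SPSR_hyp */
            [3] = 7, /* EL3: SPSR_mon */
        };
        assert(el >= 1 && el <= 3);
        return map[el];
    }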
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For the ARM M-profile cores, exception return pops various registers
including the PC from the stack. The architecture defines that if the
lowest bit in the new PC value is set (ie the PC is not halfword
aligned) then behaviour is UNPREDICTABLE. In practice hardware
implementations seem to simply ignore the low bit, and some buggy
RTOSes incorrectly rely on this. QEMU's behaviour was architecturally
permitted, but bringing QEMU into line with the hardware behaviour
allows more guest code to run. We log the situation as a guest error.
This was reported as LP:1428657.
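The shape of the fix in the exception-return path is roughly this
(a sketch; the exact log message is illustrative):

    if (env->regs[15] & 1) {
        qemu_log_mask(LOG_GUEST_ERROR,
                      "M profile return from interrupt with misaligned "
                      "PC is UNPREDICTABLE\n");
        /* Hardware appears to ignore the low bit, and some RTOSes
         * incorrectly push a Thumb-style "bit 0 set" PC, so clear the
         * bit rather than branching to a misaligned address.
         */
        env->regs[15] &= ~1U;
    }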
Reported-by: Anders Esbensen <anders@lyes.dk>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch makes the following changes to the determination of
whether an address is executable, when translating addresses
using LPAE.
1. No longer assumes that PL0 can't execute when it can't read;
   in AArch64, unlike AArch32, it can.
2. Use va_size == 64 to determine whether we're in AArch64, rather
   than arm_feature(env, ARM_FEATURE_V8), which is insufficient.
3. Add additional XN determinants
- NS && is_secure && (SCR & SCR_SIF)
- WXN && (prot & PAGE_WRITE)
- AArch64: (prot_PL0 & PAGE_WRITE)
- AArch32: UWXN && (prot_PL0 & PAGE_WRITE)
- XN determination should also work in secure mode (untested)
- XN may even work in EL2 (currently impossible to test)
4. Cleans up the bloated PAGE_EXEC condition - by removing it.
The helper get_S1prot is introduced. It may even work in EL2,
when support for that comes, but, as the function name implies,
it only works for stage 1 translations.
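A condensed sketch of the helper (the signature matches the patch; the
body summarizes the determinants above, and the regime_sctlr() accessor
for the regime's controlling SCTLR is an assumption):

    static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
                          int ap, int ns, int xn, int pxn)
    {
        bool is_user = regime_is_user(env, mmu_idx);
        uint32_t sctlr = regime_sctlr(env, mmu_idx);
        int user_rw = simple_ap_to_rw_prot_is_user(ap, true);
        int prot_rw = is_user ? user_rw
                              : simple_ap_to_rw_prot_is_user(ap, false);

        /* Secure instruction fetch of non-secure memory (SCR.SIF) */
        if (ns && arm_is_secure(env) && (env->cp15.scr_el3 & SCR_SIF)) {
            return prot_rw;
        }
        /* Explicit XN, and WXN (writable implies never-execute) */
        if (xn || ((sctlr & SCTLR_WXN) && (prot_rw & PAGE_WRITE))) {
            return prot_rw;
        }
        /* Privileged execution: PXN, and "PL0-writable implies
         * PL1-never-execute" (unconditional in AArch64, under UWXN
         * in AArch32)
         */
        if (!is_user &&
            (pxn || (is_aa64 ? (user_rw & PAGE_WRITE)
                             : ((sctlr & SCTLR_UWXN) &&
                                (user_rw & PAGE_WRITE))))) {
            return prot_rw;
        }
        return prot_rw | PAGE_EXEC;
    }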
Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-id: 1426099139-14463-4-git-send-email-drjones@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce simple_ap_to_rw_prot(), which has the same behavior as
ap_to_rw_prot(), but takes the 2-bit simple AP[2:1] instead of
the 3-bit AP[2:0]. Use this in get_phys_addr_v6 when SCTLR_AFE
is set, as that bit indicates we should be using the simple AP
format.
It's unlikely this path has been getting exercised: I don't see CR_AFE
being set by Linux, so probably not. If it had been, then the check
would have been wrong for all but AP[2:1] = 0b11. Anyway, this fixes
it up in case the path ever does get used.
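The simple AP format has only four encodings, so the helper reduces to
a short table; a sketch (shown as an is_user-parameterized variant,
which is an assumption about the final shape):

    static inline int simple_ap_to_rw_prot_is_user(int ap, bool is_user)
    {
        switch (ap) {
        case 0: /* privileged read/write only */
            return is_user ? 0 : PAGE_READ | PAGE_WRITE;
        case 1: /* read/write at any privilege */
            return PAGE_READ | PAGE_WRITE;
        case 2: /* privileged read-only */
            return is_user ? 0 : PAGE_READ;
        case 3: /* read-only at any privilege */
            return PAGE_READ;
        default:
            g_assert_not_reached();
        }
    }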
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1426099139-14463-3-git-send-email-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of mixing access-permission checking with the translation of
access permissions into page protection flags, just do the translation
and leave it to the caller to check the resulting protection flags
against the access type. Also rename the function to ap_to_rw_prot to
better describe the new behavior.
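The caller-side pattern then becomes roughly the following (a sketch;
access_type 0/1/2 = read/write/fetch lines up with
PAGE_READ/PAGE_WRITE/PAGE_EXEC via the shift):

    *prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
    if (!(*prot & (1 << access_type))) {
        /* Access permission fault. */
        goto do_fault;
    }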
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1426099139-14463-2-git-send-email-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The f64 exponent in HELPER(recpe_f64) should be compared to 2045 rather
than 1023 (see FPRecipEstimate in the ARMv8 spec). This fixes incorrect
underflow handling when flushing denormals to zero in the FRECPE
instructions operating on 64-bit values.
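A sketch of the corrected check, with the arithmetic behind the 2045
bound (variable names assumed from the helper):

    /* |x| ~ 2^(e - 1023) for biased exponent e, so 1/|x| ~ 2^(1023 - e).
     * The smallest normal double is 2^-1022, so for e >= 2045 the
     * estimate is denormal and must be flushed when FZ is set.
     * Comparing e against 1023 (the bias) instead flushed the result
     * for any input with magnitude >= 1.0.
     */
    if (f64_exp >= 2045 && fpst->flush_to_zero) {
        float_raise(float_flag_underflow, fpst);
        return float64_set_sign(float64_zero, float64_is_neg(f64));
    }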
Signed-off-by: Ildar Isaev <ild@inbox.ru>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
A few of the oldest parts of the page-table-walk code have broken
indentation (either hardcoded tabs or two-space indents). Reindent
these sections.
For ease of review, this patch does not touch the brace style and
so is a whitespace-only change.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Now we have the mmu_idx in get_phys_addr(), use it correctly to
determine the behaviour of virtual to physical address translations,
rather than using just an is_user flag and the current CPU state.
Some TODO comments have been added to indicate where changes will
need to be made to add EL2 and 64-bit EL3 support.
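For example, whether a walk is treated as unprivileged now derives from
the index itself rather than from an is_user flag (a sketch using the
MMU index names from this series):

    static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
    {
        switch (mmu_idx) {
        case ARMMMUIdx_S1SE0:
        case ARMMMUIdx_S1NSE0:
            return true;
        default:
            return false;
        case ARMMMUIdx_S12NSE0:
        case ARMMMUIdx_S12NSE1:
            /* callers must resolve these to a stage 1 index first */
            g_assert_not_reached();
        }
    }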
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Make all the callers of get_phys_addr() pass it the correct
mmu_idx rather than just a simple "is_user" flag. This includes
properly decoding the AT/ATS system instructions; we include the
logic for handling all the opc1/opc2 cases because we'll need
them later for supporting EL2/EL3, even if we don't have the
regdef stanzas yet.
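For the AArch32 ops the decode inside the write handler boils down to
something like this (a sketch; opcode-to-operation mapping per the ARM
ARM, MMU index names as in this series):

    int access_type = ri->opc2 & 1;  /* ATS1C*R vs ATS1C*W */
    bool secure = arm_is_secure(env);
    ARMMMUIdx mmu_idx;

    switch (ri->opc2 & 6) {
    case 0: /* ATS1CPR/ATS1CPW: current state, privileged */
        mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_S1NSE1;
        break;
    case 2: /* ATS1CUR/ATS1CUW: current state, unprivileged */
        mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_S1NSE0;
        break;
    case 4: /* ATS12NSOPR/ATS12NSOPW: NS EL1&0, privileged */
        mmu_idx = ARMMMUIdx_S12NSE1;
        break;
    case 6: /* ATS12NSOUR/ATS12NSOUW: NS EL1&0, unprivileged */
        mmu_idx = ARMMMUIdx_S12NSE0;
        break;
    default:
        g_assert_not_reached();
    }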
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Instead of simply reusing ats_write() as the handler for both AArch32
and AArch64 address translation operations, use a different function
for each with the common code in a third function. This is necessary
because the semantics for selecting the right translation regime are
different; we are only getting away with sharing currently because
we don't support EL2 and only support EL3 in AArch32.
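Structurally the split looks like this (a skeleton: the PAR encoding is
grossly simplified, the two decode helpers are hypothetical, and the
get_phys_addr() signature is the one from earlier in this series):

    static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
                                 int access_type, ARMMMUIdx mmu_idx)
    {
        hwaddr phys_addr;
        target_ulong page_size;
        int prot;
        int ret = get_phys_addr(env, value, access_type, mmu_idx,
                                &phys_addr, &prot, &page_size);
        /* the real code builds short- or long-format PAR by regime */
        return ret ? 1 : (phys_addr & ~0xfffULL);
    }

    static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri,
                          uint64_t value)
    {
        /* AArch32: regime follows current security state and opcode */
        ARMMMUIdx mmu_idx = ats_decode_mmu_idx_aa32(env, ri); /* hypothetical */
        uint64_t par64 = do_ats_write(env, value, ri->opc2 & 1, mmu_idx);
        A32_BANKED_CURRENT_REG_SET(env, par, par64);
    }

    static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
                            uint64_t value)
    {
        /* AArch64: the regime is encoded in the insn (AT S1E1R etc) */
        ARMMMUIdx mmu_idx = ats_decode_mmu_idx_aa64(env, ri); /* hypothetical */
        env->cp15.par_el[1] = do_ats_write(env, value, ri->opc2 & 1, mmu_idx);
    }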
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
We currently claim that for ARM the mmu_idx should simply be the current
exception level. However this isn't actually correct -- secure EL0 and EL1
should have separate indexes from non-secure EL0 and EL1 since their
VA->PA mappings may differ. We also will want an index for stage 2
translations when we properly support EL2.
Define and document all seven mmu index values that we require, and
pass the mmu index in the TB flags rather than exception level or
priv/user bit.
This change doesn't update the get_phys_addr() code, so our page
table walking still assumes a simplistic "user or priv?" model for
the moment.
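A sketch of the seven indexes (comments abridged):

    typedef enum ARMMMUIdx {
        ARMMMUIdx_S12NSE0 = 0, /* NS EL0, stage 1+2 */
        ARMMMUIdx_S12NSE1 = 1, /* NS EL1, stage 1+2 */
        ARMMMUIdx_S1E2    = 2, /* NS EL2 */
        ARMMMUIdx_S1E3    = 3, /* EL3 / Secure Monitor */
        ARMMMUIdx_S1SE0   = 4, /* Secure EL0 */
        ARMMMUIdx_S1SE1   = 5, /* Secure EL1 */
        ARMMMUIdx_S2NS    = 6, /* NS stage 2, for when EL2 arrives */
    } ARMMMUIdx;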
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
---
This leaves some odd gaps in the TB flags usage. I will circle
back and clean this up later (including moving the other common
flags like the singlestep ones to the top of the flags word),
but I didn't want to bloat this patch series further.
Add assertion checking when cpreg structures are registered that they
either forbid raw-access attempts or at least make an attempt at
handling them. Also add an assert in the raw-accessor-of-last-resort,
to avoid silently doing a read or write from offset zero, which is
actually AArch32 CPU register r0.
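For the last-resort accessor, the check amounts to insisting that a
field offset exists before dereferencing it (a sketch, assuming the
usual CPREG_FIELD64/CPREG_FIELD32 accessors):

    static uint64_t raw_read(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        /* fieldoffset 0 would silently read env->regs[0] (AArch32 r0) */
        assert(ri->fieldoffset);
        if (cpreg_field_is_64bit(ri)) {
            return CPREG_FIELD64(env, ri);
        } else {
            return CPREG_FIELD32(env, ri);
        }
    }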
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1422282372-13735-3-git-send-email-peter.maydell@linaro.org
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
We currently mark ARM coprocessor/system register definitions with
the flag ARM_CP_NO_MIGRATE for two different reasons:
1) register is an alias onto state that's also visible via
some other register, and that other register is the one
responsible for migrating the state
2) register is not actually state at all (for instance the TLB
or cache maintenance operation "registers") and it makes no
sense to attempt to migrate it or otherwise access the raw state
This works fine for identifying which registers should be ignored
when performing migration, but we also use the same functions for
synchronizing system register state between QEMU and the kernel
when using KVM. In this case we don't want to try to sync state
into registers in category 2, but we do want to sync into registers
in category 1, because the kernel might have picked a different
one of the aliases as its choice for which one to expose for
migration. (In particular, on 32 bit hosts the kernel will
expose the state in the AArch32 version of the register, but
TCG's convention is to mark the AArch64 version as the version
to migrate, even if the CPU being emulated happens to be 32 bit,
so almost all system registers will hit this issue now that we've
added AArch64 system emulation.)
Fix this by splitting the NO_MIGRATE flag in two (ALIAS and NO_RAW)
corresponding to the two different reasons we might not want to
migrate a register. When setting up the TCG list of registers to
migrate we honour both flags; when populating the list from KVM,
only ignore registers which are NO_RAW.
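The two consumers then filter differently; as a sketch (the predicate
helpers here are hypothetical, only the flag tests matter):

    static bool cpreg_migratable_tcg(const ARMCPRegInfo *ri)
    {
        /* an ALIAS migrates via its partner; NO_RAW has no raw state */
        return !(ri->type & (ARM_CP_ALIAS | ARM_CP_NO_RAW));
    }

    static bool cpreg_syncable_kvm(const ARMCPRegInfo *ri)
    {
        /* aliases must still sync: the kernel may have chosen the
         * other alias as its migratable one
         */
        return !(ri->type & ARM_CP_NO_RAW);
    }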
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1422282372-13735-2-git-send-email-peter.maydell@linaro.org
[PMM: changed ARM_CP_NO_MIGRATE to ARM_CP_ALIAS on new SP_EL1 and
SP_EL2 reginfo stanzas since there was a (semantic) merge conflict
with the patchset that added those]
Merge of the v8_el3_cp_reginfo and el3_cp_reginfo ARMCPRegInfo lists.
Previously, some EL3 registers were restricted to the ARMv8 list under the
impression that they were not needed on ARMv7. However, this is not the case
as the ARMv7/32-bit variants rely on the ARMv8/64-bit variants to handle
migration and reset. For this reason they must always exist.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Message-id: 1418406450-14961-1-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adds secure and non-secure banked register support for TTBR0 and TTBR1.
Changes include adding secure and non-secure instances of ttbr0 and ttbr1 as
well as a CP register definition for TTBR0_EL3. Added a union containing
both EL-based array fields and secure and non-secure fields mapped onto them.
Updated accesses to use the A32_BANKED_CURRENT_REG_GET macro.
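The union layout looks roughly like this (a sketch for ttbr0; padding
members keep the banked names aligned with the EL-indexed slots):

    union { /* translation table base 0 */
        struct {
            uint64_t _unused_ttbr0_0;
            uint64_t ttbr0_ns;   /* overlays ttbr0_el[1] */
            uint64_t _unused_ttbr0_1;
            uint64_t ttbr0_s;    /* overlays ttbr0_el[3] (TTBR0_EL3) */
        };
        uint64_t ttbr0_el[4];
    };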
Signed-off-by: Fabian Aggeler <aggelerf@ethz.ch>
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1416242878-876-17-git-send-email-greg.bellows@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ARMv8 address translation system defines that a page table walk
starts at a level which depends on the translation granule size
and the number of bits of virtual address that need to be resolved.
Where the translation granule is 64KB and the guest sets the
TCR.TxSZ field to between 35 and 39, it's actually possible to
start at level 3 (the final level). QEMU's implementation failed
to handle this case, and so we would set level to 2 and behave
incorrectly (including invoking the C undefined behaviour of
shifting left by a negative number). Correct the code that
determines the starting level to deal with the start-at-3 case,
by replacing the if-else ladder with an expression derived from
the ARM ARM pseudocode version.
This error was detected by the Coverity scan, which spotted
the potential shift by a negative number.
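The replacement expression, with the pseudocode identities spelled out
(a sketch; variable names assumed from the walk code):

    /* Pseudocode: level = 4 - RoundUp((inputsize - grainsize) / stride)
     * where inputsize = va_size - tsz, grainsize = granule_sz + 3 and
     * stride = granule_sz; this folds to the line below.  E.g. a 64KB
     * granule (granule_sz == 13) with TxSZ == 37 gives
     * 4 - (64 - 37 - 4) / 13 == 3.
     */
    level = 4 - (va_size - tsz - 4) / granule_sz;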
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1415890569-7454-1-git-send-email-peter.maydell@linaro.org
The DZP bit in the DCZID system register should be set if
the control bits which prohibit use of the DC ZVA instruction
have been set (it stands for Data Zero Prohibited). However
we had the sense of the test inverted; fix this so that the
bit reads correctly.
To avoid this regressing the behaviour of the user-mode
emulator, we must set the DZE bit in the SCTLR for that
config so that userspace continues to see DZP as zero (it
was getting the correct result by accident previously).
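A sketch of the corrected read, reusing the existing access-check
function (DZP is bit 4 of DCZID):

    static uint64_t aa64_dczid_read(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        ARMCPU *cpu = arm_env_get_cpu(env);
        unsigned int dzp_bit = 1 << 4;

        /* DZP reads as 0 only when DC ZVA accesses are permitted */
        if (aa64_zva_access(env, ri) == CP_ACCESS_OK) {
            dzp_bit = 0;
        }
        return cpu->dcz_blocksize | dzp_bit;
    }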
Reported-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Christopher Covington <cov@codeaurora.org>
Message-id: 1412959792-20708-1-git-send-email-peter.maydell@linaro.org
Add support for handling PSCI calls in system emulation. Both versions
0.1 and 0.2 of the PSCI spec are supported. Platforms can enable support
by setting the "psci-conduit" QOM property on the cpus to SMC or HVC
emulation and having a PSCI binding in their dtb.
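Board code then opts in roughly like this (a sketch modelled on the
virt board; the property-setter signature is the one from this era):

    /* treat HVC calls from the guest as PSCI calls */
    object_property_set_int(cpuobj, QEMU_PSCI_CONDUIT_HVC,
                            "psci-conduit", NULL);

    /* ...and advertise the binding to the guest via the dtb */
    qemu_fdt_add_subnode(fdt, "/psci");
    qemu_fdt_setprop_string(fdt, "/psci", "compatible", "arm,psci-0.2");
    qemu_fdt_setprop_string(fdt, "/psci", "method", "hvc");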
Signed-off-by: Rob Herring <rob.herring@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1412865028-17725-7-git-send-email-peter.maydell@linaro.org
[PMM: made system reset/off PSCI functions power down the CPU so
we obey the PSCI API requirement never to return from them;
rearranged how the code is plumbed into the exception system,
so that we split "is this a valid call?" from "do the call"]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>