Merge pull request #1126 from robertovargas-arm/psci-v1.1
Update PSCI to v1.1
diff --git a/docs/xlat-tables-lib-v2-design.rst b/docs/xlat-tables-lib-v2-design.rst
index 3006ce7..07cbf86 100644
--- a/docs/xlat-tables-lib-v2-design.rst
+++ b/docs/xlat-tables-lib-v2-design.rst
@@ -66,7 +66,8 @@
- its physical base address;
- its virtual base address;
- its size;
-- its attributes.
+- its attributes;
+- its mapping granularity (optional).
See the ``struct mmap_region`` type in `xlat\_tables\_v2.h`_.
@@ -76,9 +77,37 @@
The region attributes specify the type of memory (for example device or cached
normal memory) as well as the memory access permissions (read-only or
-read-write, executable or not, secure or non-secure, and so on). See the
-``mmap_attr_t`` enumeration type in `xlat\_tables\_v2.h`_.
+read-write, executable or not, secure or non-secure, and so on). In the case of
+the EL1&0 translation regime, the attributes also specify whether the region is
+a User region (EL0) or Privileged region (EL1). See the ``mmap_attr_t``
+enumeration type in `xlat\_tables\_v2.h`_. Note that for the EL1&0 translation
+regime the Execute Never attribute is set simultaneously for both EL1 and EL0.
+
+The granularity controls the translation table level to go down to when mapping
+the region. For example, assuming the MMU has been configured to use a 4KB
+granule size, the library might map a 2MB memory region using either of the two
+following options:
+
+- using a single level-2 translation table entry;
+- using a level-2 intermediate entry to a level-3 translation table (which
+ contains 512 entries, each mapping 4KB).
+
+The first solution potentially requires fewer translation tables, hence
+potentially less memory. However, if part of this 2MB region is later remapped
+with different memory attributes, the library might need to split the existing
+page tables to refine the mappings. If a single level-2 entry has been used
+here, a level-3 table will need to be allocated on the fly and the level-2
+entry modified to point to this new level-3 table. This has a performance cost
+at run-time.
+
+If the user knows upfront that such a remapping operation is likely to happen
+then they might enforce a 4KB mapping granularity for this 2MB region from the
+beginning; remapping some of these 4KB pages on the fly then becomes a
+lightweight operation.
+
+The region's granularity is an optional field; if it is not specified the
+library will choose the mapping granularity for this region as it sees fit (more
+details can be found in `The memory mapping algorithm`_ section below).
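As an illustrative sketch (not part of this patch), the 2MB scenario above could
be expressed with the mapping macros described later in this document. The
addresses and attributes are made up, and ``PAGE_SIZE`` is assumed to be the
usual 4KB definition:

    /* Let the library choose the granularity: typically one level-2 block. */
    MAP_REGION(0x80000000, 0x80000000, 0x200000, MT_MEMORY | MT_RW | MT_NS),

    /* Enforce 4KB pages up front because parts of the region will be remapped. */
    MAP_REGION2(0x80000000, 0x80000000, 0x200000,
                MT_MEMORY | MT_RW | MT_NS, PAGE_SIZE),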
Translation Context
~~~~~~~~~~~~~~~~~~~
@@ -190,6 +219,11 @@
compatibility breaks, should the ``mmap_region`` structure type evolve in the
future.
+The ``MAP_REGION()`` and ``MAP_REGION_FLAT()`` macros do not allow specifying a
+mapping granularity, which leaves the library implementation free to choose
+it. However, in cases where a specific granularity is required, the
+``MAP_REGION2()`` macro may be used instead.
+
As explained earlier in this document, when the dynamic mapping feature is
disabled, there is no notion of dynamic regions. Conceptually, there are only
static regions. For this reason (and to retain backward compatibility with the
@@ -265,6 +299,9 @@
Core module
~~~~~~~~~~~
+From mmap regions to translation tables
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
All the APIs in this module work on a translation context. The translation
context contains the list of ``mmap_region``, which holds the information of all
the regions that are mapped at any given time. Whenever there is a request to
@@ -288,14 +325,18 @@
be added. Changes to the translation tables (as well as the mmap regions list)
will take effect immediately.
+The memory mapping algorithm
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
The mapping function is implemented as a recursive algorithm. It is however
bound by the level of depth of the translation tables (the ARMv8-A architecture
allows up to 4 lookup levels).
-By default, the algorithm will attempt to minimize the number of translation
-tables created to satisfy the user's request. It will favour mapping a region
-using the biggest possible blocks, only creating a sub-table if it is strictly
-necessary. This is to reduce the memory footprint of the firmware.
+By default [#granularity-ref]_, the algorithm will attempt to minimize the
+number of translation tables created to satisfy the user's request. It will
+favour mapping a region using the biggest possible blocks, only creating a
+sub-table if it is strictly necessary. This is to reduce the memory footprint of
+the firmware.
The most common reason for needing a sub-table is when a specific mapping
requires a finer granularity. Misaligned regions also require a finer
@@ -322,6 +363,12 @@
refer to the comments in the source code of the core module for more details
about the sorting algorithm in use.
+.. [#granularity-ref] That is, when mmap regions do not enforce their mapping
+ granularity.
+
+TLB maintenance operations
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
The library takes care of performing TLB maintenance operations when required.
For example, when the user requests removing a dynamic region, the library
invalidates all TLB entries associated to that region to ensure that these
diff --git a/include/lib/xlat_tables/xlat_tables_defs.h b/include/lib/xlat_tables/xlat_tables_defs.h
index b0f5a04..7cb9d37 100644
--- a/include/lib/xlat_tables/xlat_tables_defs.h
+++ b/include/lib/xlat_tables/xlat_tables_defs.h
@@ -89,9 +89,22 @@
* AP[1] bit is ignored by hardware and is
* treated as if it is One in EL2/EL3
*/
-#define AP_RO (U(0x1) << 5)
-#define AP_RW (U(0x0) << 5)
+#define AP2_SHIFT U(0x7)
+#define AP2_RO U(0x1)
+#define AP2_RW U(0x0)
+
+#define AP1_SHIFT U(0x6)
+#define AP1_ACCESS_UNPRIVILEGED U(0x1)
+#define AP1_NO_ACCESS_UNPRIVILEGED U(0x0)
+/*
+ * The following definitions must all be passed to the LOWER_ATTRS() macro to
+ * get the right bitmask.
+ */
+#define AP_RO (AP2_RO << 5)
+#define AP_RW (AP2_RW << 5)
+#define AP_ACCESS_UNPRIVILEGED (AP1_ACCESS_UNPRIVILEGED << 4)
+#define AP_NO_ACCESS_UNPRIVILEGED (AP1_NO_ACCESS_UNPRIVILEGED << 4)
#define NS (U(0x1) << 3)
#define ATTR_NON_CACHEABLE_INDEX U(0x2)
#define ATTR_DEVICE_INDEX U(0x1)
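A self-contained sketch of how these access-permission fields are meant to be
combined (assuming, as in TF-A, that LOWER_ATTRS() shifts its argument into
descriptor bits [11:2]; the local macro copies are for illustration only):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative copies; the real definitions live in xlat_tables_defs.h. */
    #define LOWER_ATTRS(x)         (((x) & 0xfff) << 2)  /* assumed placement */
    #define AP_RO                  (0x1 << 5)            /* AP[2] = 1 */
    #define AP_ACCESS_UNPRIVILEGED (0x1 << 4)            /* AP[1] = 1 */
    #define NS                     (0x1 << 3)

    int main(void)
    {
        /* Lower attributes for a read-only, EL0-accessible, non-secure page:
         * AP[2:1] land in descriptor bits [7:6] and NS in bit 5. */
        uint64_t lower = LOWER_ATTRS(AP_RO | AP_ACCESS_UNPRIVILEGED | NS);

        printf("lower attributes = 0x%llx\n", (unsigned long long)lower);
        return 0;
    }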
diff --git a/include/lib/xlat_tables/xlat_tables_v2.h b/include/lib/xlat_tables/xlat_tables_v2.h
index 59f0955..1a55fba 100644
--- a/include/lib/xlat_tables/xlat_tables_v2.h
+++ b/include/lib/xlat_tables/xlat_tables_v2.h
@@ -15,20 +15,36 @@
#include <xlat_mmu_helpers.h>
#include <xlat_tables_v2_helpers.h>
-/* Helper macro to define entries for mmap_region_t. It creates
- * identity mappings for each region.
+/*
+ * Default granularity size for an mmap_region_t.
+ * Useful when no specific granularity is required.
+ *
+ * By default, choose the biggest possible block size allowed by the
+ * architectural state and granule size in order to minimize the number of page
+ * tables required for the mapping.
*/
-#define MAP_REGION_FLAT(adr, sz, attr) MAP_REGION(adr, adr, sz, attr)
+#define REGION_DEFAULT_GRANULARITY XLAT_BLOCK_SIZE(MIN_LVL_BLOCK_DESC)
-/* Helper macro to define entries for mmap_region_t. It allows to
- * re-map address mappings from 'pa' to 'va' for each region.
+/* Helper macro to define an mmap_region_t. */
+#define MAP_REGION(_pa, _va, _sz, _attr) \
+ _MAP_REGION_FULL_SPEC(_pa, _va, _sz, _attr, REGION_DEFAULT_GRANULARITY)
+
+/* Helper macro to define an mmap_region_t with an identity mapping. */
+#define MAP_REGION_FLAT(_adr, _sz, _attr) \
+ MAP_REGION(_adr, _adr, _sz, _attr)
+
+/*
+ * Helper macro to define an mmap_region_t to map with the desired granularity
+ * of translation tables.
+ *
+ * The granularity value passed to this macro must be a valid block or page
+ * size. When using a 4KB translation granule, this might be 4KB, 2MB or 1GB.
+ * Passing REGION_DEFAULT_GRANULARITY is also allowed and means that the library
+ * is free to choose the granularity for this region. In this case, it is
+ * equivalent to the MAP_REGION() macro.
*/
-#define MAP_REGION(_pa, _va, _sz, _attr) { \
- .base_pa = (_pa), \
- .base_va = (_va), \
- .size = (_sz), \
- .attr = (_attr), \
- }
+#define MAP_REGION2(_pa, _va, _sz, _attr, _gr) \
+ _MAP_REGION_FULL_SPEC(_pa, _va, _sz, _attr, _gr)
/*
* Shifts and masks to access fields of an mmap_attr_t
@@ -41,6 +57,11 @@
#define MT_SEC_SHIFT U(4)
/* Access permissions for instruction execution (EXECUTE/EXECUTE_NEVER) */
#define MT_EXECUTE_SHIFT U(5)
+/*
+ * In the EL1&0 translation regime, mark the region as User (EL0) or
+ * Privileged (EL1). In the EL3 translation regime this has no effect.
+ */
+#define MT_USER_SHIFT U(6)
/* All other bits are reserved */
/*
@@ -73,10 +94,20 @@
*/
MT_EXECUTE = U(0) << MT_EXECUTE_SHIFT,
MT_EXECUTE_NEVER = U(1) << MT_EXECUTE_SHIFT,
+
+ /*
+ * When mapping a region at EL0 or EL1, this attribute will be used to
+ * determine if a User mapping (EL0) will be created or a Privileged
+ * mapping (EL1).
+ */
+ MT_USER = U(1) << MT_USER_SHIFT,
+ MT_PRIVILEGED = U(0) << MT_USER_SHIFT,
} mmap_attr_t;
+/* Compound attributes for most common usages */
#define MT_CODE (MT_MEMORY | MT_RO | MT_EXECUTE)
#define MT_RO_DATA (MT_MEMORY | MT_RO | MT_EXECUTE_NEVER)
+#define MT_RW_DATA (MT_MEMORY | MT_RW | MT_EXECUTE_NEVER)
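A hedged usage sketch of how the new MT_USER attribute combines with these
compound attributes when building regions for an EL1&0 context (addresses and
sizes are hypothetical):

    /* Privileged (EL1-only) read-write data; MT_PRIVILEGED is the default. */
    MAP_REGION_FLAT(0x0a000000, 0x4000, MT_RW_DATA | MT_NS),

    /* Read-write buffer that EL0 code must also be able to access. */
    MAP_REGION_FLAT(0x0a004000, 0x1000, MT_RW_DATA | MT_NS | MT_USER),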
/*
* Structure for specifying a single region of memory.
@@ -86,9 +117,19 @@
uintptr_t base_va;
size_t size;
mmap_attr_t attr;
+ /* Desired granularity. See the MAP_REGION2() macro for more details. */
+ size_t granularity;
} mmap_region_t;
/*
+ * Translation regimes supported by this library.
+ */
+typedef enum xlat_regime {
+ EL1_EL0_REGIME,
+ EL3_REGIME,
+} xlat_regime_t;
+
+/*
* Declare the translation context type.
* Its definition is private.
*/
@@ -123,8 +164,25 @@
*/
#define REGISTER_XLAT_CONTEXT(_ctx_name, _mmap_count, _xlat_tables_count, \
_virt_addr_space_size, _phy_addr_space_size) \
- _REGISTER_XLAT_CONTEXT(_ctx_name, _mmap_count, _xlat_tables_count, \
- _virt_addr_space_size, _phy_addr_space_size)
+ _REGISTER_XLAT_CONTEXT_FULL_SPEC(_ctx_name, _mmap_count, \
+ _xlat_tables_count, \
+ _virt_addr_space_size, \
+ _phy_addr_space_size, \
+ IMAGE_XLAT_DEFAULT_REGIME)
+
+/*
+ * Same as REGISTER_XLAT_CONTEXT plus the additional parameter _xlat_regime to
+ * specify the translation regime managed by this xlat_ctx_t instance. The
+ * values are the ones from the xlat_regime_t enumeration.
+ */
+#define REGISTER_XLAT_CONTEXT2(_ctx_name, _mmap_count, _xlat_tables_count, \
+ _virt_addr_space_size, _phy_addr_space_size, \
+ _xlat_regime) \
+ _REGISTER_XLAT_CONTEXT_FULL_SPEC(_ctx_name, _mmap_count, \
+ _xlat_tables_count, \
+ _virt_addr_space_size, \
+ _phy_addr_space_size, \
+ _xlat_regime)
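A hedged example of registering a context for the EL1&0 translation regime; the
context name, table counts and address-space sizes below are illustrative only:

    /* 8 mmap regions, 4 translation tables, 4GB VA/PA spaces, EL1&0 regime. */
    REGISTER_XLAT_CONTEXT2(sp_xlat_ctx, 8, 4,
                           (1ULL << 32), (1ULL << 32),
                           EL1_EL0_REGIME);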
/******************************************************************************
* Generic translation table APIs.
diff --git a/include/lib/xlat_tables/xlat_tables_v2_helpers.h b/include/lib/xlat_tables/xlat_tables_v2_helpers.h
index f5e3100..0ebdc93 100644
--- a/include/lib/xlat_tables/xlat_tables_v2_helpers.h
+++ b/include/lib/xlat_tables/xlat_tables_v2_helpers.h
@@ -27,6 +27,20 @@
/* Forward declaration */
struct mmap_region;
+/*
+ * Helper macro to define an mmap_region_t. This macro allows all the fields of
+ * the structure to be specified, but its parameter list is not guaranteed to
+ * remain stable as members are added to mmap_region_t.
+ */
+#define _MAP_REGION_FULL_SPEC(_pa, _va, _sz, _attr, _gr) \
+ { \
+ .base_pa = (_pa), \
+ .base_va = (_va), \
+ .size = (_sz), \
+ .attr = (_attr), \
+ .granularity = (_gr), \
+ }
+
/* Struct that holds all information about the translation tables. */
struct xlat_ctx {
/*
@@ -85,11 +99,12 @@
unsigned int initialized;
/*
- * Bit mask that has to be ORed to the rest of a translation table
- * descriptor in order to prohibit execution of code at the exception
- * level of this translation context.
+ * Translation regime managed by this xlat_ctx_t. It takes the values of
+ * the enumeration xlat_regime_t. The type is "int" to avoid a circular
+ * dependency on xlat_tables_v2.h, but this member must be treated as
+ * xlat_regime_t.
*/
- uint64_t execute_never_mask;
+ int xlat_regime;
};
#if PLAT_XLAT_TABLES_DYNAMIC
@@ -106,9 +121,9 @@
/* do nothing */
#endif /* PLAT_XLAT_TABLES_DYNAMIC */
-
-#define _REGISTER_XLAT_CONTEXT(_ctx_name, _mmap_count, _xlat_tables_count, \
- _virt_addr_space_size, _phy_addr_space_size) \
+#define _REGISTER_XLAT_CONTEXT_FULL_SPEC(_ctx_name, _mmap_count, _xlat_tables_count, \
+ _virt_addr_space_size, _phy_addr_space_size, \
+ _xlat_regime) \
CASSERT(CHECK_VIRT_ADDR_SPACE_SIZE(_virt_addr_space_size), \
assert_invalid_virtual_addr_space_size_for_##_ctx_name); \
\
@@ -140,12 +155,23 @@
.tables = _ctx_name##_xlat_tables, \
.tables_num = _xlat_tables_count, \
_REGISTER_DYNMAP_STRUCT(_ctx_name) \
+ .xlat_regime = (_xlat_regime), \
.max_pa = 0, \
.max_va = 0, \
.next_table = 0, \
.initialized = 0, \
}
+
+/* This IMAGE_EL macro must not be used outside the library */
+#if IMAGE_BL1 || IMAGE_BL31
+# define IMAGE_EL 3
+# define IMAGE_XLAT_DEFAULT_REGIME EL3_REGIME
+#else
+# define IMAGE_EL 1
+# define IMAGE_XLAT_DEFAULT_REGIME EL1_EL0_REGIME
+#endif
+
#endif /*__ASSEMBLY__*/
#endif /* __XLAT_TABLES_V2_HELPERS_H__ */
diff --git a/lib/xlat_tables_v2/aarch32/xlat_tables_arch.c b/lib/xlat_tables_v2/aarch32/xlat_tables_arch.c
index e66b927..cbc8685 100644
--- a/lib/xlat_tables_v2/aarch32/xlat_tables_arch.c
+++ b/lib/xlat_tables_v2/aarch32/xlat_tables_arch.c
@@ -22,7 +22,7 @@
}
#endif /* ENABLE_ASSERTIONS*/
-int is_mmu_enabled(void)
+int is_mmu_enabled_ctx(const xlat_ctx_t *ctx __unused)
{
return (read_sctlr() & SCTLR_M_BIT) != 0;
}
@@ -40,6 +40,17 @@
tlbimvaais(TLBI_ADDR(va));
}
+void xlat_arch_tlbi_va_regime(uintptr_t va, xlat_regime_t xlat_regime __unused)
+{
+ /*
+ * Ensure the translation table write has drained into memory before
+ * invalidating the TLB entry.
+ */
+ dsbishst();
+
+ tlbimvaais(TLBI_ADDR(va));
+}
+
void xlat_arch_tlbi_va_sync(void)
{
/* Invalidate all entries from branch predictors. */
@@ -77,11 +88,6 @@
return 3;
}
-uint64_t xlat_arch_get_xn_desc(int el __unused)
-{
- return UPPER_ATTRS(XN);
-}
-
/*******************************************************************************
* Function for enabling the MMU in Secure PL1, assuming that the page tables
* have already been created.
diff --git a/lib/xlat_tables_v2/aarch32/xlat_tables_arch_private.h b/lib/xlat_tables_v2/aarch32/xlat_tables_arch_private.h
new file mode 100644
index 0000000..509395d
--- /dev/null
+++ b/lib/xlat_tables_v2/aarch32/xlat_tables_arch_private.h
@@ -0,0 +1,22 @@
+/*
+ * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#ifndef __XLAT_TABLES_ARCH_PRIVATE_H__
+#define __XLAT_TABLES_ARCH_PRIVATE_H__
+
+#include <xlat_tables_defs.h>
+#include <xlat_tables_v2.h>
+
+/*
+ * Return the execute-never mask that will prevent instruction fetch in the
+ * given translation regime.
+ */
+static inline uint64_t xlat_arch_regime_get_xn_desc(xlat_regime_t regime __unused)
+{
+ return UPPER_ATTRS(XN);
+}
+
+#endif /* __XLAT_TABLES_ARCH_PRIVATE_H__ */
diff --git a/lib/xlat_tables_v2/aarch64/xlat_tables_arch.c b/lib/xlat_tables_v2/aarch64/xlat_tables_arch.c
index 760db92..eda38d3 100644
--- a/lib/xlat_tables_v2/aarch64/xlat_tables_arch.c
+++ b/lib/xlat_tables_v2/aarch64/xlat_tables_arch.c
@@ -10,19 +10,12 @@
#include <bl_common.h>
#include <cassert.h>
#include <common_def.h>
-#include <platform_def.h>
#include <sys/types.h>
#include <utils.h>
#include <utils_def.h>
#include <xlat_tables_v2.h>
#include "../xlat_tables_private.h"
-#if defined(IMAGE_BL1) || defined(IMAGE_BL31)
-# define IMAGE_EL 3
-#else
-# define IMAGE_EL 1
-#endif
-
static unsigned long long calc_physical_addr_size_bits(
unsigned long long max_addr)
{
@@ -71,20 +64,31 @@
}
#endif /* ENABLE_ASSERTIONS*/
-int is_mmu_enabled(void)
+int is_mmu_enabled_ctx(const xlat_ctx_t *ctx)
+{
+ if (ctx->xlat_regime == EL1_EL0_REGIME) {
+ assert(xlat_arch_current_el() >= 1);
+ return (read_sctlr_el1() & SCTLR_M_BIT) != 0;
+ } else {
+ assert(ctx->xlat_regime == EL3_REGIME);
+ assert(xlat_arch_current_el() >= 3);
+ return (read_sctlr_el3() & SCTLR_M_BIT) != 0;
+ }
+}
+
+
+void xlat_arch_tlbi_va(uintptr_t va)
{
#if IMAGE_EL == 1
assert(IS_IN_EL(1));
- return (read_sctlr_el1() & SCTLR_M_BIT) != 0;
+ xlat_arch_tlbi_va_regime(va, EL1_EL0_REGIME);
#elif IMAGE_EL == 3
assert(IS_IN_EL(3));
- return (read_sctlr_el3() & SCTLR_M_BIT) != 0;
+ xlat_arch_tlbi_va_regime(va, EL3_REGIME);
#endif
}
-#if PLAT_XLAT_TABLES_DYNAMIC
-
-void xlat_arch_tlbi_va(uintptr_t va)
+void xlat_arch_tlbi_va_regime(uintptr_t va, xlat_regime_t xlat_regime)
{
/*
* Ensure the translation table write has drained into memory before
@@ -92,13 +96,21 @@
*/
dsbishst();
-#if IMAGE_EL == 1
- assert(IS_IN_EL(1));
- tlbivaae1is(TLBI_ADDR(va));
-#elif IMAGE_EL == 3
- assert(IS_IN_EL(3));
- tlbivae3is(TLBI_ADDR(va));
-#endif
+ /*
+ * This function only supports invalidation of TLB entries for the EL3
+ * and EL1&0 translation regimes.
+ *
+ * Also, it is architecturally UNDEFINED to invalidate TLBs of a higher
+ * exception level (see section D4.9.2 of the ARM ARM rev B.a).
+ */
+ if (xlat_regime == EL1_EL0_REGIME) {
+ assert(xlat_arch_current_el() >= 1);
+ tlbivaae1is(TLBI_ADDR(va));
+ } else {
+ assert(xlat_regime == EL3_REGIME);
+ assert(xlat_arch_current_el() >= 3);
+ tlbivae3is(TLBI_ADDR(va));
+ }
}
void xlat_arch_tlbi_va_sync(void)
@@ -124,8 +136,6 @@
isb();
}
-#endif /* PLAT_XLAT_TABLES_DYNAMIC */
-
int xlat_arch_current_el(void)
{
int el = GET_EL(read_CurrentEl());
@@ -135,16 +145,6 @@
return el;
}
-uint64_t xlat_arch_get_xn_desc(int el)
-{
- if (el == 3) {
- return UPPER_ATTRS(XN);
- } else {
- assert(el == 1);
- return UPPER_ATTRS(PXN);
- }
-}
-
/*******************************************************************************
* Macro generating the code for the function enabling the MMU in the given
* exception level, assuming that the pagetables have already been created.
diff --git a/lib/xlat_tables_v2/aarch64/xlat_tables_arch_private.h b/lib/xlat_tables_v2/aarch64/xlat_tables_arch_private.h
new file mode 100644
index 0000000..d201590
--- /dev/null
+++ b/lib/xlat_tables_v2/aarch64/xlat_tables_arch_private.h
@@ -0,0 +1,28 @@
+/*
+ * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#ifndef __XLAT_TABLES_ARCH_PRIVATE_H__
+#define __XLAT_TABLES_ARCH_PRIVATE_H__
+
+#include <assert.h>
+#include <xlat_tables_defs.h>
+#include <xlat_tables_v2.h>
+
+/*
+ * Return the execute-never mask that will prevent instruction fetch at all ELs
+ * that are part of the given translation regime.
+ */
+static inline uint64_t xlat_arch_regime_get_xn_desc(xlat_regime_t regime)
+{
+ if (regime == EL1_EL0_REGIME) {
+ return UPPER_ATTRS(UXN) | UPPER_ATTRS(PXN);
+ } else {
+ assert(regime == EL3_REGIME);
+ return UPPER_ATTRS(XN);
+ }
+}
+
+#endif /* __XLAT_TABLES_ARCH_PRIVATE_H__ */
diff --git a/lib/xlat_tables_v2/xlat_tables.mk b/lib/xlat_tables_v2/xlat_tables.mk
index b94ce5d..06dd844 100644
--- a/lib/xlat_tables_v2/xlat_tables.mk
+++ b/lib/xlat_tables_v2/xlat_tables.mk
@@ -7,3 +7,5 @@
XLAT_TABLES_LIB_SRCS := $(addprefix lib/xlat_tables_v2/, \
${ARCH}/xlat_tables_arch.c \
xlat_tables_internal.c)
+
+INCLUDES += -Ilib/xlat_tables_v2/${ARCH}
diff --git a/lib/xlat_tables_v2/xlat_tables_internal.c b/lib/xlat_tables_v2/xlat_tables_internal.c
index da658b1..9faeb7e 100644
--- a/lib/xlat_tables_v2/xlat_tables_internal.c
+++ b/lib/xlat_tables_v2/xlat_tables_internal.c
@@ -14,7 +14,7 @@
#include <string.h>
#include <types.h>
#include <utils.h>
-#include <xlat_tables_arch.h>
+#include <xlat_tables_arch_private.h>
#include <xlat_tables_defs.h>
#include <xlat_tables_v2.h>
@@ -112,9 +112,11 @@
#endif /* PLAT_XLAT_TABLES_DYNAMIC */
-/* Returns a block/page table descriptor for the given level and attributes. */
-static uint64_t xlat_desc(mmap_attr_t attr, unsigned long long addr_pa,
- int level, uint64_t execute_never_mask)
+/*
+ * Returns a block/page table descriptor for the given level and attributes.
+ */
+uint64_t xlat_desc(const xlat_ctx_t *ctx, mmap_attr_t attr,
+ unsigned long long addr_pa, int level)
{
uint64_t desc;
int mem_type;
@@ -133,11 +135,30 @@
* Deduce other fields of the descriptor based on the MT_NS and MT_RW
* memory region attributes.
*/
+ desc |= LOWER_ATTRS(ACCESS_FLAG);
+
desc |= (attr & MT_NS) ? LOWER_ATTRS(NS) : 0;
desc |= (attr & MT_RW) ? LOWER_ATTRS(AP_RW) : LOWER_ATTRS(AP_RO);
- desc |= LOWER_ATTRS(ACCESS_FLAG);
/*
+ * Do not allow unprivileged access when the mapping is for a privileged
+ * EL. For translation regimes without unprivileged (lower-EL) accesses,
+ * set AP[1] to AP_NO_ACCESS_UNPRIVILEGED.
+ */
+ if (ctx->xlat_regime == EL1_EL0_REGIME) {
+ if (attr & MT_USER) {
+ /* EL0 mapping requested, so we give User access */
+ desc |= LOWER_ATTRS(AP_ACCESS_UNPRIVILEGED);
+ } else {
+ /* EL1 mapping requested, no User access granted */
+ desc |= LOWER_ATTRS(AP_NO_ACCESS_UNPRIVILEGED);
+ }
+ } else {
+ assert(ctx->xlat_regime == EL3_REGIME);
+ desc |= LOWER_ATTRS(AP_NO_ACCESS_UNPRIVILEGED);
+ }
+
+ /*
* Deduce shareability domain and executability of the memory region
* from the memory type of the attributes (MT_TYPE).
*
@@ -156,7 +177,7 @@
* fetch, which could be an issue if this memory region
* corresponds to a read-sensitive peripheral.
*/
- desc |= execute_never_mask;
+ desc |= xlat_arch_regime_get_xn_desc(ctx->xlat_regime);
} else { /* Normal memory */
/*
@@ -171,10 +192,13 @@
* translation table.
*
* For read-only memory, rely on the MT_EXECUTE/MT_EXECUTE_NEVER
- * attribute to figure out the value of the XN bit.
+ * attribute to figure out the value of the XN bit. The actual
+ * XN bit(s) to set in the descriptor depends on the context's
+ * translation regime and the policy applied in
+ * xlat_arch_regime_get_xn_desc().
*/
if ((attr & MT_RW) || (attr & MT_EXECUTE_NEVER)) {
- desc |= execute_never_mask;
+ desc |= xlat_arch_regime_get_xn_desc(ctx->xlat_regime);
}
if (mem_type == MT_MEMORY) {
@@ -314,7 +338,7 @@
if (action == ACTION_WRITE_BLOCK_ENTRY) {
table_base[table_idx] = INVALID_DESC;
- xlat_arch_tlbi_va(table_idx_va);
+ xlat_arch_tlbi_va_regime(table_idx_va, ctx->xlat_regime);
} else if (action == ACTION_RECURSE_INTO_TABLE) {
@@ -330,7 +354,8 @@
*/
if (xlat_table_is_empty(ctx, subtable)) {
table_base[table_idx] = INVALID_DESC;
- xlat_arch_tlbi_va(table_idx_va);
+ xlat_arch_tlbi_va_regime(table_idx_va,
+ ctx->xlat_regime);
}
} else {
@@ -417,7 +442,8 @@
* descriptors. If not, create a table instead.
*/
if ((dest_pa & XLAT_BLOCK_MASK(level)) ||
- (level < MIN_LVL_BLOCK_DESC))
+ (level < MIN_LVL_BLOCK_DESC) ||
+ (mm->granularity < XLAT_BLOCK_SIZE(level)))
return ACTION_CREATE_NEW_TABLE;
else
return ACTION_WRITE_BLOCK_ENTRY;
@@ -535,8 +561,7 @@
if (action == ACTION_WRITE_BLOCK_ENTRY) {
table_base[table_idx] =
- xlat_desc(mm->attr, table_idx_pa, level,
- ctx->execute_never_mask);
+ xlat_desc(ctx, mm->attr, table_idx_pa, level);
} else if (action == ACTION_CREATE_NEW_TABLE) {
@@ -590,9 +615,10 @@
mmap_region_t *mm = mmap;
while (mm->size) {
- tf_printf(" VA:%p PA:0x%llx size:0x%zx attr:0x%x\n",
+ tf_printf(" VA:%p PA:0x%llx size:0x%zx attr:0x%x",
(void *)mm->base_va, mm->base_pa,
mm->size, mm->attr);
+ tf_printf(" granularity:0x%zx\n", mm->granularity);
++mm;
};
tf_printf("\n");
@@ -613,7 +639,7 @@
unsigned long long base_pa = mm->base_pa;
uintptr_t base_va = mm->base_va;
size_t size = mm->size;
- mmap_attr_t attr = mm->attr;
+ size_t granularity = mm->granularity;
unsigned long long end_pa = base_pa + size - 1;
uintptr_t end_va = base_va + size - 1;
@@ -622,6 +648,12 @@
!IS_PAGE_ALIGNED(size))
return -EINVAL;
+ if ((granularity != XLAT_BLOCK_SIZE(1)) &&
+ (granularity != XLAT_BLOCK_SIZE(2)) &&
+ (granularity != XLAT_BLOCK_SIZE(3))) {
+ return -EINVAL;
+ }
+
/* Check for overflows */
if ((base_pa > end_pa) || (base_va > end_va))
return -ERANGE;
@@ -663,11 +695,9 @@
if (fully_overlapped_va) {
#if PLAT_XLAT_TABLES_DYNAMIC
- if ((attr & MT_DYNAMIC) ||
+ if ((mm->attr & MT_DYNAMIC) ||
(mm_cursor->attr & MT_DYNAMIC))
return -EPERM;
-#else
- (void)attr;
#endif /* PLAT_XLAT_TABLES_DYNAMIC */
if ((mm_cursor->base_va - mm_cursor->base_pa) !=
(base_va - base_pa))
@@ -876,9 +906,8 @@
.size = end_va - mm->base_va,
.attr = 0
};
- xlat_tables_unmap_region(ctx,
- &unmap_mm, 0, ctx->base_table,
- ctx->base_table_entries, ctx->base_level);
+ xlat_tables_unmap_region(ctx, &unmap_mm, 0, ctx->base_table,
+ ctx->base_table_entries, ctx->base_level);
return -ENOMEM;
}
@@ -993,9 +1022,10 @@
#if LOG_LEVEL >= LOG_LEVEL_VERBOSE
/* Print the attributes of the specified block descriptor. */
-static void xlat_desc_print(uint64_t desc, uint64_t execute_never_mask)
+static void xlat_desc_print(xlat_ctx_t *ctx, uint64_t desc)
{
int mem_type_index = ATTR_INDEX_GET(desc);
+ xlat_regime_t xlat_regime = ctx->xlat_regime;
if (mem_type_index == ATTR_IWBWA_OWBWA_NTR_INDEX) {
tf_printf("MEM");
@@ -1006,9 +1036,49 @@
tf_printf("DEV");
}
+ const char *priv_str = "(PRIV)";
+ const char *user_str = "(USER)";
+
- tf_printf(LOWER_ATTRS(AP_RO) & desc ? "-RO" : "-RW");
+ /*
+ * Showing Privileged vs Unprivileged only makes sense for EL1&0
+ * mappings
+ */
+ const char *ro_str = "-RO";
+ const char *rw_str = "-RW";
+ const char *no_access_str = "-NOACCESS";
+
+ if (xlat_regime == EL3_REGIME) {
+ /* For EL3, the AP[2] bit is all that matters */
+ tf_printf((desc & LOWER_ATTRS(AP_RO)) ? ro_str : rw_str);
+ } else {
+ const char *ap_str = (desc & LOWER_ATTRS(AP_RO)) ? ro_str : rw_str;
+ tf_printf(ap_str);
+ tf_printf(priv_str);
+ /*
+ * EL0 can only have the same permissions as EL1 or no
+ * permissions at all.
+ */
+ tf_printf((desc & LOWER_ATTRS(AP_ACCESS_UNPRIVILEGED))
+ ? ap_str : no_access_str);
+ tf_printf(user_str);
+ }
+
+ const char *xn_str = "-XN";
+ const char *exec_str = "-EXEC";
+
+ if (xlat_regime == EL3_REGIME) {
+ /* For EL3, the XN bit is all that matters */
+ tf_printf(LOWER_ATTRS(XN) & desc ? xn_str : exec_str);
+ } else {
+ /* For EL0 and EL1, we need to know who has which rights */
+ tf_printf(LOWER_ATTRS(PXN) & desc ? xn_str : exec_str);
+ tf_printf(priv_str);
+
+ tf_printf(LOWER_ATTRS(UXN) & desc ? xn_str : exec_str);
+ tf_printf(user_str);
+ }
+
tf_printf(LOWER_ATTRS(NS) & desc ? "-NS" : "-S");
- tf_printf(execute_never_mask & desc ? "-XN" : "-EXEC");
}
static const char * const level_spacers[] = {
@@ -1025,9 +1095,10 @@
* Recursive function that reads the translation tables passed as an argument
* and prints their status.
*/
-static void xlat_tables_print_internal(const uintptr_t table_base_va,
+static void xlat_tables_print_internal(xlat_ctx_t *ctx,
+ const uintptr_t table_base_va,
uint64_t *const table_base, const int table_entries,
- const unsigned int level, const uint64_t execute_never_mask)
+ const unsigned int level)
{
assert(level <= XLAT_TABLE_LEVEL_MAX);
@@ -1086,17 +1157,16 @@
uintptr_t addr_inner = desc & TABLE_ADDR_MASK;
- xlat_tables_print_internal(table_idx_va,
+ xlat_tables_print_internal(ctx, table_idx_va,
(uint64_t *)addr_inner,
- XLAT_TABLE_ENTRIES, level+1,
- execute_never_mask);
+ XLAT_TABLE_ENTRIES, level + 1);
} else {
tf_printf("%sVA:%p PA:0x%llx size:0x%zx ",
level_spacers[level],
(void *)table_idx_va,
(unsigned long long)(desc & TABLE_ADDR_MASK),
level_size);
- xlat_desc_print(desc, execute_never_mask);
+ xlat_desc_print(ctx, desc);
tf_printf("\n");
}
}
@@ -1116,7 +1186,15 @@
void xlat_tables_print(xlat_ctx_t *ctx)
{
#if LOG_LEVEL >= LOG_LEVEL_VERBOSE
+ const char *xlat_regime_str;
+ if (ctx->xlat_regime == EL1_EL0_REGIME) {
+ xlat_regime_str = "1&0";
+ } else {
+ assert(ctx->xlat_regime == EL3_REGIME);
+ xlat_regime_str = "3";
+ }
VERBOSE("Translation tables state:\n");
+ VERBOSE(" Xlat regime: EL%s\n", xlat_regime_str);
VERBOSE(" Max allowed PA: 0x%llx\n", ctx->pa_max_address);
VERBOSE(" Max allowed VA: %p\n", (void *) ctx->va_max_address);
VERBOSE(" Max mapped PA: 0x%llx\n", ctx->max_pa);
@@ -1140,22 +1218,21 @@
used_page_tables, ctx->tables_num,
ctx->tables_num - used_page_tables);
- xlat_tables_print_internal(0, ctx->base_table, ctx->base_table_entries,
- ctx->base_level, ctx->execute_never_mask);
+ xlat_tables_print_internal(ctx, 0, ctx->base_table,
+ ctx->base_table_entries, ctx->base_level);
#endif /* LOG_LEVEL >= LOG_LEVEL_VERBOSE */
}
void init_xlat_tables_ctx(xlat_ctx_t *ctx)
{
- mmap_region_t *mm = ctx->mmap;
-
- assert(!is_mmu_enabled());
+ assert(ctx != NULL);
assert(!ctx->initialized);
+ assert(ctx->xlat_regime == EL3_REGIME || ctx->xlat_regime == EL1_EL0_REGIME);
+ assert(!is_mmu_enabled_ctx(ctx));
- print_mmap(mm);
+ mmap_region_t *mm = ctx->mmap;
- ctx->execute_never_mask =
- xlat_arch_get_xn_desc(xlat_arch_current_el());
+ print_mmap(mm);
/* All tables must be zeroed before mapping any region. */
diff --git a/lib/xlat_tables_v2/xlat_tables_private.h b/lib/xlat_tables_v2/xlat_tables_private.h
index d352583..79efbeb 100644
--- a/lib/xlat_tables_v2/xlat_tables_private.h
+++ b/lib/xlat_tables_v2/xlat_tables_private.h
@@ -34,12 +34,24 @@
MT_DYNAMIC = 1 << MT_DYN_SHIFT
} mmap_priv_attr_t;
+#endif /* PLAT_XLAT_TABLES_DYNAMIC */
+
/*
- * Function used to invalidate all levels of the translation walk for a given
- * virtual address. It must be called for every translation table entry that is
- * modified.
+ * Invalidate all TLB entries that match the given virtual address. This
+ * operation applies to all PEs in the same Inner Shareable domain as the PE
+ * that executes this function. It must be called for every
+ * translation table entry that is modified.
+ *
+ * xlat_arch_tlbi_va() applies the invalidation to the exception level of the
+ * current translation regime, whereas xlat_arch_tlbi_va_regime() applies it to
+ * the given translation regime.
+ *
+ * Note, however, that it is architecturally UNDEFINED to invalidate TLB entries
+ * pertaining to a higher exception level, e.g. invalidating EL3 entries from
+ * S-EL1.
*/
void xlat_arch_tlbi_va(uintptr_t va);
+void xlat_arch_tlbi_va_regime(uintptr_t va, xlat_regime_t xlat_regime);
/*
* This function has to be called at the end of any code that uses the function
@@ -47,8 +59,6 @@
*/
void xlat_arch_tlbi_va_sync(void);
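A sketch of the expected maintenance sequence when a live translation table
entry is modified, mirroring how the core module uses these hooks in this
patch (table_base, table_idx, table_idx_va and ctx come from the caller):

    /* Remove the mapping, then invalidate any cached copy for this regime. */
    table_base[table_idx] = INVALID_DESC;
    xlat_arch_tlbi_va_regime(table_idx_va, ctx->xlat_regime);

    /* ... further entries may be modified and invalidated here ... */

    /* Once all modifications are done, wait for the invalidations to finish. */
    xlat_arch_tlbi_va_sync();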
-#endif /* PLAT_XLAT_TABLES_DYNAMIC */
-
/* Print VA, PA, size and attributes of all regions in the mmap array. */
void print_mmap(mmap_region_t *const mmap);
@@ -66,13 +76,6 @@
int xlat_arch_current_el(void);
/*
- * Returns the bit mask that has to be ORed to the rest of a translation table
- * descriptor so that execution of code is prohibited at the given Exception
- * Level.
- */
-uint64_t xlat_arch_get_xn_desc(int el);
-
-/*
* Return the maximum physical address supported by the hardware.
* This value depends on the execution state (AArch32/AArch64).
*/
@@ -82,7 +85,10 @@
void enable_mmu_arch(unsigned int flags, uint64_t *base_table,
unsigned long long pa, uintptr_t max_va);
-/* Return 1 if the MMU of this Exception Level is enabled, 0 otherwise. */
-int is_mmu_enabled(void);
+/*
+ * Return 1 if the MMU of the translation regime managed by the given xlat_ctx_t
+ * is enabled, 0 otherwise.
+ */
+int is_mmu_enabled_ctx(const xlat_ctx_t *ctx);
#endif /* __XLAT_TABLES_PRIVATE_H__ */
diff --git a/services/spd/trusty/trusty.c b/services/spd/trusty/trusty.c
index e386f71..ecbcfae 100644
--- a/services/spd/trusty/trusty.c
+++ b/services/spd/trusty/trusty.c
@@ -99,6 +99,16 @@
ret.r1 = r1;
ret.r0 = r0;
+ /*
+ * To avoid additional overhead in the PSCI flow, skip FP context
+ * saving/restoring in the case of CPU suspend and resume, assuming
+ * that, when it is needed, the PSCI caller has preserved the FP
+ * context before reaching here.
+ */
+#if CTX_INCLUDE_FPREGS
+ if (r0 != SMC_FC_CPU_SUSPEND && r0 != SMC_FC_CPU_RESUME)
+ fpregs_context_save(get_fpregs_ctx(cm_get_context(security_state)));
+#endif
cm_el1_sysregs_context_save(security_state);
ctx->saved_security_state = security_state;
@@ -107,6 +117,11 @@
assert(ctx->saved_security_state == !security_state);
cm_el1_sysregs_context_restore(security_state);
+#if CTX_INCLUDE_FPREGS
+ if (r0 != SMC_FC_CPU_SUSPEND && r0 != SMC_FC_CPU_RESUME)
+ fpregs_context_restore(get_fpregs_ctx(cm_get_context(security_state)));
+#endif
+
cm_set_next_eret_context(security_state);
return ret;
diff --git a/tools/cert_create/src/cert.c b/tools/cert_create/src/cert.c
index 1b84e36..3f0b4d3 100644
--- a/tools/cert_create/src/cert.c
+++ b/tools/cert_create/src/cert.c
@@ -90,7 +90,7 @@
X509_NAME *name;
ASN1_INTEGER *sno;
int i, num, rc = 0;
- EVP_MD_CTX mdCtx;
+ EVP_MD_CTX *mdCtx;
EVP_PKEY_CTX *pKeyCtx = NULL;
/* Create the certificate structure */
@@ -111,10 +111,14 @@
issuer = x;
}
- EVP_MD_CTX_init(&mdCtx);
+ mdCtx = EVP_MD_CTX_create();
+ if (mdCtx == NULL) {
+ ERR_print_errors_fp(stdout);
+ goto END;
+ }
/* Sign the certificate with the issuer key */
- if (!EVP_DigestSignInit(&mdCtx, &pKeyCtx, EVP_sha256(), NULL, ikey)) {
+ if (!EVP_DigestSignInit(mdCtx, &pKeyCtx, EVP_sha256(), NULL, ikey)) {
ERR_print_errors_fp(stdout);
goto END;
}
@@ -184,7 +188,7 @@
}
}
- if (!X509_sign_ctx(x, &mdCtx)) {
+ if (!X509_sign_ctx(x, mdCtx)) {
ERR_print_errors_fp(stdout);
goto END;
}
@@ -194,7 +198,7 @@
cert->x = x;
END:
- EVP_MD_CTX_cleanup(&mdCtx);
+ EVP_MD_CTX_destroy(mdCtx);
return rc;
}
diff --git a/tools/cert_create/src/ext.c b/tools/cert_create/src/ext.c
index 8ae6640..055ddbf 100644
--- a/tools/cert_create/src/ext.c
+++ b/tools/cert_create/src/ext.c
@@ -166,7 +166,7 @@
int sz;
/* OBJECT_IDENTIFIER with hash algorithm */
- algorithm = OBJ_nid2obj(md->type);
+ algorithm = OBJ_nid2obj(EVP_MD_type(md));
if (algorithm == NULL) {
return NULL;
}
diff --git a/tools/cert_create/src/key.c b/tools/cert_create/src/key.c
index e8257e9..871f9ee 100644
--- a/tools/cert_create/src/key.c
+++ b/tools/cert_create/src/key.c
@@ -43,13 +43,31 @@
static int key_create_rsa(key_t *key)
{
- RSA *rsa;
+ BIGNUM *e;
+ RSA *rsa = NULL;
- rsa = RSA_generate_key(RSA_KEY_BITS, RSA_F4, NULL, NULL);
+ e = BN_new();
+ if (e == NULL) {
+ printf("Cannot create RSA exponent\n");
+ goto err;
+ }
+
+ if (!BN_set_word(e, RSA_F4)) {
+ printf("Cannot assign RSA exponent\n");
+ goto err;
+ }
+
+ rsa = RSA_new();
if (rsa == NULL) {
printf("Cannot create RSA key\n");
goto err;
}
+
+ if (!RSA_generate_key_ex(rsa, RSA_KEY_BITS, e, NULL)) {
+ printf("Cannot generate RSA key\n");
+ goto err;
+ }
+
if (!EVP_PKEY_assign_RSA(key->key, rsa)) {
printf("Cannot assign RSA key\n");
goto err;
@@ -58,6 +76,7 @@
return 1;
err:
RSA_free(rsa);
+ BN_free(e);
return 0;
}
diff --git a/tools/cert_create/src/main.c b/tools/cert_create/src/main.c
index df59961..741242f 100644
--- a/tools/cert_create/src/main.c
+++ b/tools/cert_create/src/main.c
@@ -244,7 +244,7 @@
int main(int argc, char *argv[])
{
STACK_OF(X509_EXTENSION) * sk;
- X509_EXTENSION *cert_ext;
+ X509_EXTENSION *cert_ext = NULL;
ext_t *ext;
key_t *key;
cert_t *cert;
diff --git a/tools/fiptool/fiptool.c b/tools/fiptool/fiptool.c
index 02223d9..1dcb7e8 100644
--- a/tools/fiptool/fiptool.c
+++ b/tools/fiptool/fiptool.c
@@ -9,18 +9,12 @@
#include <assert.h>
#include <errno.h>
-#include <getopt.h>
#include <limits.h>
#include <stdarg.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
-#include <unistd.h>
-
-#include <openssl/sha.h>
-
-#include <firmware_image_package.h>
#include "fiptool.h"
#include "tbbr_config.h"
@@ -161,7 +155,7 @@
{
assert(desc != NULL);
- if (desc->action_arg != DO_UNSPEC)
+ if (desc->action_arg != (char *)DO_UNSPEC)
free(desc->action_arg);
desc->action = action;
desc->action_arg = NULL;
@@ -278,7 +272,7 @@
static int parse_fip(const char *filename, fip_toc_header_t *toc_header_out)
{
- struct stat st;
+ struct BLD_PLAT_STAT st;
FILE *fp;
char *buf, *bufend;
fip_toc_header_t *toc_header;
@@ -370,11 +364,12 @@
static image_t *read_image_from_file(const uuid_t *uuid, const char *filename)
{
- struct stat st;
+ struct BLD_PLAT_STAT st;
image_t *image;
FILE *fp;
assert(uuid != NULL);
+ assert(filename != NULL);
fp = fopen(filename, "rb");
if (fp == NULL)
@@ -469,6 +464,7 @@
(unsigned long long)image->toc_e.offset_address,
(unsigned long long)image->toc_e.size,
desc->cmdline_name);
+#ifndef _MSC_VER /* We don't have SHA256 for Visual Studio. */
if (verbose) {
unsigned char md[SHA256_DIGEST_LENGTH];
@@ -476,6 +472,7 @@
printf(", sha256=");
md_print(md, sizeof(md));
}
+#endif
putchar('\n');
}
diff --git a/tools/fiptool/fiptool.h b/tools/fiptool/fiptool.h
index 4b5cdd9..d8a5d2c 100644
--- a/tools/fiptool/fiptool.h
+++ b/tools/fiptool/fiptool.h
@@ -13,6 +13,8 @@
#include <firmware_image_package.h>
#include <uuid.h>
+#include "fiptool_platform.h"
+
#define NELEM(x) (sizeof (x) / sizeof *(x))
enum {
diff --git a/tools/fiptool/fiptool_platform.h b/tools/fiptool/fiptool_platform.h
new file mode 100644
index 0000000..bfdd1ef
--- /dev/null
+++ b/tools/fiptool/fiptool_platform.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright (c) 2016-2017, ARM Limited and Contributors. All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Build-platform-specific handling.
+ * This allows for builds on non-POSIX platforms,
+ * e.g. Visual Studio on Windows.
+ */
+
+#ifndef __FIPTOOL_PLATFORM_H__
+# define __FIPTOOL_PLATFORM_H__
+
+# ifndef _MSC_VER
+
+ /* Not Visual Studio, so include POSIX headers. */
+# include <getopt.h>
+# include <openssl/sha.h>
+# include <unistd.h>
+
+# define BLD_PLAT_STAT stat
+
+# else
+
+ /* Visual Studio. */
+
+# endif
+
+#endif /* __FIPTOOL_PLATFORM_H__ */