After calling fdt_node_offset_by_compatible() we must check its return
value and not an unrelated value.
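For illustration, the intended pattern looks like this (the compatible
string and variable names below are placeholders, not the exact code):

  offset = fdt_node_offset_by_compatible(fdt, -1, "vendor,example");
  if (offset < 0)
          return 0;       /* node not found: nothing to do */
  /* ... use 'offset' from here on, not an earlier error value ... */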
Addresses-Coverity-ID: 1584993 ("Logically dead code")
Fixes: 67ce5a763c ("platform: generic: Add support for specify coldboot harts in DT")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Add support for the generic platform to specify the set of coldboot
harts in the DT. If not specified in the DT, all harts are allowed to
coldboot as before.
The functions related to sbi_hartmask are not available before
coldboot, so a bitmap is used instead, and a new bitmap_test() function
is added to test whether a given bit of the bitmap is set.
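A minimal sketch of the helper, assuming the usual unsigned-long-array
bitmap layout (illustrative, not necessarily the exact implementation):

  static inline int bitmap_test(unsigned long *bmap, int bit)
  {
          return (bmap[bit / BITS_PER_LONG] >>
                  (bit % BITS_PER_LONG)) & 1UL;
  }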
Signed-off-by: Cheng Yang <yangcheng.work@foxmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Add a Kconfig option to control PMU fixup, so the next
stage software can dump the PMU node including event
mapping information for debugging purposes.
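Guarding the fixup could then look like this (both the Kconfig symbol
and the fixup function name below are assumptions for illustration):

  #ifndef CONFIG_FDT_FIXUPS_PRESERVE_PMU_NODE     /* assumed symbol */
          fdt_pmu_fixup(fdt);     /* trim the PMU node as before */
  #endif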
Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
This reverts commit 6966ad0abe ("platform/lib: Allow the OS to map the
regions that are protected by PMP").
It was thought at the time of this commit that allowing the kernel to map
PMP-protected regions was safe, but it is actually not: for example, the
hibernation process will try to access any linear mapping page and will
then fault on such mapped PMP regions [1]. Another issue is that the
device tree specification [2] states that a region without no-map must be
declared as EfiBootServicesData/Code in the EFI memory map, which would
make the PMP-protected regions reclaimable by the kernel. To circumvent
this, RISC-V edk2 diverges from the DT specification and declares those
regions as EfiReserved.
The no-map attribute was removed to allow the kernel to map the linear
mapping with hugepages larger than 2MB and thereby improve performance,
but a recent talk from Mike Rapoport [3] stated that the performance
benefit is marginal.
For all those reasons, let's mark all the PMP-protected regions as "no-map".
[1] https://lore.kernel.org/linux-riscv/CAAYs2=gQvkhTeioMmqRDVGjdtNF_vhB+vm_1dHJxPNi75YDQ_Q@mail.gmail.com/
[2] "3.5.4 /reserved-memory and UEFI" https://github.com/devicetree-org/devicetree-specification/releases/download/v0.4-rc1/devicetree-specification-v0.4-rc1.pdf
[3] https://lwn.net/Articles/931406/
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Xiang W <wxjstz@126.com>
fdt_reserved_memory_fixup() uses an array filtered_order[PMP_COUNT].
The index used to write into this array must never reach PMP_COUNT.
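The fix amounts to bounding the write index, conceptually (names other
than filtered_order/PMP_COUNT are illustrative):

  if (filtered_count < PMP_COUNT)
          filtered_order[filtered_count++] = reg_order;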
Fixes: 199189bd1c ("lib: utils: Mark only the largest region as reserved in FDT")
Addresses-Coverity-ID: 1536994 ("Out-of-bounds write")
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
When building with GCC 10 or older versions, the compiler throws the
following error:
CC-DEP platform/generic/lib/utils/fdt/fdt_fixup.dep
CC platform/generic/lib/utils/fdt/fdt_fixup.o
lib/utils/fdt/fdt_fixup.c: In function 'fdt_reserved_memory_fixup':
lib/utils/fdt/fdt_fixup.c:376:2: error: label at end of compound statement
376 | next_entry:
| ^~~~~~~~~~
Remove the goto statement.
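For illustration, the change amounts to this kind of transformation
(helper names are hypothetical, not the actual code):

  /* Before: rejected by GCC 10 and older because the label ends the
   * compound statement. */
  for (i = 0; i < count; i++) {
          if (region_already_covered(i))
                  goto next_entry;
          add_reserved_entry(i);
  next_entry:
  }

  /* After: 'continue' gives the same control flow without the label. */
  for (i = 0; i < count; i++) {
          if (region_already_covered(i))
                  continue;
          add_reserved_entry(i);
  }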
Resolves: https://github.com/riscv-software-src/opensbi/issues/288
Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Signed-off-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
In commit 230278dcf, RX and RW regions were marked separately.
When the RW region grows (e.g. with more harts) and its size is not a
power of two, sbi_domain_memregion_init() will upgrade the region to
the next power of two. This makes RX and RW both start at the same
base address, like so (with 64 harts):
Domain0 Region01 : 0x0000000080000000-0x000000008001ffff M: (R,X) S/U: ()
Domain0 Region02 : 0x0000000080000000-0x00000000800fffff M: (R,W) S/U: ()
This doesn't break permission enforcement, thanks to the static
priorities in PMP, but it makes the kernel complain about the regions
overlapping each other, like so:
[ 0.000000] OF: reserved mem: OVERLAP DETECTED!
[ 0.000000] mmode_resv0@80000000 (0x0000000080000000--0x0000000080020000) \
overlaps with mmode_resv1@80000000 (0x0000000080000000--0x0000000080100000)
To fix this warning, among the multiple regions having the same base
address but different sizes, add only the largest region as a reserved
region during the FDT fixup.
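A simplified sketch of the selection (array and index names are
illustrative):

  /* Among candidate regions, keep only the largest per base address. */
  for (i = 0; i < count; i++) {
          for (j = 0; j < nkept; j++) {
                  if (base[i] == base[kept[j]]) {
                          if (order[i] > order[kept[j]])
                                  kept[j] = i;  /* larger region wins */
                          break;
                  }
          }
          if (j == nkept)
                  kept[nkept++] = i;    /* first region with this base */
  }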
Fixes: 230278dcf ("lib: sbi: Add separate entries for firmware RX and RW regions")
Signed-off-by: Himanshu Chauhan <hchauhan@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Since the availability and latency properties of CPU idle states depend
on the specific SBI HSM implementation, it is appropriate that the idle
states are added to the devicetree at runtime by that implementation.
This helper function adds a platform-provided array of idle states to
the devicetree, following the SBI idle state binding.
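Typical usage from platform code might look like this (the field names
and the empty-entry terminator are assumptions based on the binding,
not necessarily the exact API):

  static const struct sbi_cpu_idle_state idle_states[] = {
          {
                  .name              = "cpu-retentive",
                  .suspend_param     = 0x00000000,
                  .entry_latency_us  = 20,
                  .exit_latency_us   = 40,
                  .min_residency_us  = 80,
          },
          { }     /* empty entry terminates the array */
  };

  fdt_add_cpu_idle_states(fdt, idle_states);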
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Samuel Holland <samuel@sholland.org>
Commit 9e0ba090 introduced more fine-grained permissions for memory
regions but did not update the fdt_reserved_memory_fixup() function. As
a result, fdt_reserved_memory_fixup() continued to use the older coarse
permission flags, which caused the reserved memory node not to be
inserted into the DT.
To fix the above issue, we correct the flags used for memory region
permission checks in the fdt_reserved_memory_fixup() function.
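Conceptually the check becomes (the flag names below are illustrative
of the fine-grained M/SU split, not copied from the header):

  /* Only regions with no S/U-mode access are reported as reserved;
   * matching on the old coarse READABLE/WRITABLE flags no longer
   * selects any region after the fine-grained split. */
  if (reg->flags & (SU_READABLE | SU_WRITABLE | SU_EXECUTABLE))
          continue;       /* accessible to the next stage: skip */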
Fixes: 9e0ba090 ("include: sbi: Fine grain the permissions for M and SU modes")
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Add a helper function, fdt_fixup_node(), which fixes up a node based on
its compatible string. This avoids code duplication for every new node
fixup that gets added.
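A sketch of the shared helper (signature simplified; disabling the node
is just one example of a per-node fixup):

  static void fdt_fixup_node(void *fdt, const char *compatible)
  {
          int noff = -1;

          while ((noff = fdt_node_offset_by_compatible(fdt, noff,
                                                  compatible)) >= 0) {
                  /* apply the common fixup to each matching node, e.g.
                   * fdt_setprop_string(fdt, noff, "status", "disabled");
                   */
          }
  }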
Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Xiang W <wxjstz@126.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
In fdt_fixups(), we should disable APLIC DT nodes that are not
accessible to the next booting stage based on the currently assigned
domain.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
In fdt_fixups(), we should disable IMSIC DT nodes that are not
accessible to the next booting stage based on the currently assigned
domain.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
The PMU DT node bindings are defined in docs/pmu_support.md.
Add a few FDT helper functions to parse the DT node and update the
event-counter mapping tables.
Reviewed-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
The current fdt_plic_fixup() only performs the necessary fixup on the
legacy "riscv,plic0" node. The upstream Linux kernel defines official
DT bindings that use "sifive,plic-1.0.0" as the compatible string; we
should check for that first and, if it is not present, fall back to the
legacy string.
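The lookup order then becomes (sketch):

  noff = fdt_node_offset_by_compatible(fdt, -1, "sifive,plic-1.0.0");
  if (noff < 0)   /* official binding not found, try the legacy one */
          noff = fdt_node_offset_by_compatible(fdt, -1, "riscv,plic0");
  if (noff < 0)
          return; /* no PLIC node, nothing to fix up */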
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Anup Patel <anup.patel@wdc.com>
At present fdt_plic_fixup() accepts a 'compat' parameter for the PLIC
compatible string. In preparation for supporting the new DT bindings,
drop this parameter and use "riscv,plic0" directly in fdt_plic_fixup().
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Anup Patel <anup.patel@wdc.com>
The fdt_cpu_fixup() should disable a HART in the DT if the HART is not
assigned to the current HART's domain. This patch updates
fdt_cpu_fixup() accordingly.
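Sketch of the per-CPU-node check (the domain-membership helper is an
assumed name, not the actual API):

  /* If this cpu node's hart is not assigned to the current domain,
   * mark it disabled so the next stage ignores it. */
  if (!hart_assigned_to_current_domain(hartid))
          fdt_setprop_string(fdt, cpu_offset, "status", "disabled");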
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Now that each HART is mapped to a domain having a set of memory
regions, we update fdt_reserved_memory_fixup() to use domain memory
regions for adding reserved memory nodes to the device tree.
We also change the reserved memory node name prefix from "mmode_pmp"
to "mmode_resv" because domain memory regions can affect hardware
protection mechanisms other than PMP (such as IOPMP).
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
The fdt_cpu_fixup() should work fine even if HARTs without an MMU are
not marked invalid by the platform support code.
In the future, we plan to treat HARTs without an MMU as valid in the
generic platform support so that we can hold these HARTs in the
HSM STOPPED state.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
The SBI_HART_HAS_PMP feature is redundant because we already have the
number of PMP regions returned by sbi_hart_pmp_count(). Checking
whether PMP is supported on a HART can be done simply by checking for a
non-zero value returned by sbi_hart_pmp_count().
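The check then simply becomes (sketch):

  if (!sbi_hart_pmp_count(scratch))
          return; /* no PMP implemented on this hart */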
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Currently 256 bytes is used for the FDT expand size when fixing up the
reserved memory node. Increase it to 1024 bytes, based on an estimated
64 bytes per PMP memory region times 16 regions in total.
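The sizing works out as follows (the macro name is illustrative):

  /* ~64 bytes of FDT data per reserved-memory child node, and at most
   * 16 PMP-backed regions: 64 * 16 = 1024 extra bytes. */
  #define FDT_RESV_MEM_EXTRA_SIZE         (64 * 16)

  err = fdt_open_into(fdt, fdt,
                      fdt_totalsize(fdt) + FDT_RESV_MEM_EXTRA_SIZE);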
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
This is achieved by removing the 'no-map' property from the
'reserved-memory' node when PMP is present; otherwise we keep it, as it
offers a small amount of protection if the OS does not map this region
at all.
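In simplified form (sketch, not the exact code):

  /* Add "no-map" to the reserved-memory child node only when there is
   * no PMP to protect the region from S-mode accesses. */
  if (!sbi_hart_pmp_count(scratch))
          fdt_setprop_empty(fdt, subnode, "no-map");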
A new callback in platform_override is introduced that allows platforms
to fix up the device tree. It is used here to override this new default
behaviour on the SiFive FU540 platform, which has an erratum that
prevents S-mode software from accessing a PMP-protected region through
a 1GB page table mapping.
If PMP is present, telling the OS not to map the reserved regions does
not add much protection, since it only prevents access to regions that
are already protected by PMP. But by not allowing the OS to map those
regions, it creates holes in the OS system memory map and prevents the
use of hugepages, which would otherwise provide, among other benefits,
fewer TLB misses.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Anup Patel <anup.patel@wdc.com>
It is not mandatory for a RISC-V system to implement all PMP regions,
so we have to check all PMPADDRx CSRs to determine the exact number of
supported PMP regions.
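The detection is conceptually (the probe helper below is hypothetical;
the real code uses trap-safe CSR accessors):

  count = 0;
  for (i = 0; i < PMP_COUNT; i++) {
          if (!pmpaddr_csr_implemented(i))        /* hypothetical probe */
                  break;
          count++;
  }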
Signed-off-by: Anup Patel <anup.patel@wdc.com>
It makes more sense to have the scratch pointer as a parameter in the
HART feature APIs because:
1. We already have the scratch pointer at the places where these APIs
are used.
2. This is consistent with a lot of other APIs in sbi_hart.h.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
PMP & performance counters belong to a hart rather than a platform.
In addition to that, these features enable reading/writing from a
particular csr. Thus, they can be detected and set at runtime rather
than compile time.
Move PMP/MCOUNTEREN/SCOUNTEREN features to hart and detect them at runtime.
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Tested-by: Jonathan Balkind <jbalkind@cs.princeton.edu>
Reviewed-by: Anup Patel <anup.patel@wdc.com>
As per the RISC-V ISA, PMP is not mandatory. Currently, we add the
reserved memory node to the DT only if PMP is present. On non-PMP
platforms, this lets the supervisor access the memory where OpenSBI
continues to reside without realizing it, which may result in
corrupting OpenSBI. That is why OpenSBI should at least let the
supervisor know where it continues to reside, even without PMP.
This is a best-effort service provided by OpenSBI, expecting that the
supervisor software is not buggy and properly sets up its memory after
parsing the reserved-memory device tree node.
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Anup Patel <anup.patel@wdc.com>
Parsing the HART id from a CPU DT node is a common requirement for
RISC-V systems. The newly added fdt_parse_hart_id() also helps reduce
duplicate code between the fdt_cpu_fixup() and fdt_parse_hart_count()
functions.
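Typical usage (sketch; the argument order is as assumed):

  if (fdt_parse_hart_id(fdt, cpu_offset, &hartid))
          continue;       /* malformed or non-CPU node: skip it */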
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Currently, the fdt_cpu_fixup() implementation assumes:
1. We have one CPU DT node for each HART under the /cpus DT node
2. The CPU DT nodes are named sequentially (i.e. cpu@0,
cpu@1, ...), which is not true for discontinuous and
sparse HART ids (e.g. cpu@0, cpu@4, cpu@5). Generally,
CPU DT nodes are named based on the HART id and not the
HART index.
If any of the above assumptions is violated then
fdt_cpu_fixup() will not work.
This patch improves the fdt_cpu_fixup() implementation and makes
it independent of the above assumptions.
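In simplified form, the fixup now walks every subnode of /cpus and
derives the HART id from the node's "reg" property instead of
constructing "cpu@<index>" names (sketch):

  cpus_offset = fdt_path_offset(fdt, "/cpus");
  if (cpus_offset < 0)
          return;

  fdt_for_each_subnode(cpu_offset, fdt, cpus_offset) {
          if (fdt_parse_hart_id(fdt, cpu_offset, &hartid))
                  continue;       /* not a usable cpu node */
          /* ... fix up this hart based on 'hartid' ... */
  }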
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
The FDT helper file contains both FDT fixup and parsing functions.
Split the fixup-related functions into a separate file for better code
organization.
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Anup Patel <anup.patel@wdc.com>