UefiCpuPkg/PiSmmCpuDxeSmm: Check whether the PDE entry exists before use

Before commits 701b5797 and 4ceefd6d, a 2MB-page table covering [0, 4G] was
created by default when SmmProfile was enabled, and InitPaging() then walked
that table to split the 2MB pages into 4KB pages during the page table update.
In that case it was safe for RestorePageTableBelow4G() to assert that the PDE
entry exists.

After those commits, however, the PageTableMap API is used to create and
update the page table: 1GB pages become the default mapping granularity, and
the table only covers a limited address range. Ranges that are not covered
are marked non-present at the 1GB-page level. As a result, the 2MB-page entry
might not exist, so it is incorrect for RestorePageTableBelow4G() to assert
that the PDE entry exists.

The correct behavior is to check whether the PDE entry exists; if it does
not, a PDE should be allocated and assigned to the PDPTE, as illustrated by
the sketch below.
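The shape of that fix is the standard check-and-allocate step of a software
page-table walk. The following is a minimal stand-alone sketch in C; the
names, attribute masks, and allocator are hypothetical stand-ins, not the
actual PiSmmCpuDxeSmm definitions (the real code also ORs in mAddressEncMask
for memory encryption):

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  #define PG_P             0x1ULL                  /* stand-in for IA32_PG_P */
  #define PG_ATTRS         0x3ULL                  /* stand-in for PAGE_ATTRIBUTE_BITS */
  #define PHYS_ADDR_MASK   0x000FFFFFFFFFF000ULL   /* stand-in for PHYSICAL_ADDRESS_MASK */

  /* Hypothetical allocator: return the address of a zeroed, 4K-aligned page. */
  static uint64_t
  AllocPageSketch (void)
  {
    void  *Page = aligned_alloc (4096, 4096);
    memset (Page, 0, 4096);
    return (uint64_t)(uintptr_t)Page;
  }

  /* If the PDPTE at Index is not present, allocate a page directory for it,
     then return a pointer to that page directory (assumes identity mapping). */
  static uint64_t *
  EnsurePdeExists (uint64_t *Pdpt, unsigned Index)
  {
    if ((Pdpt[Index] & PG_P) == 0) {
      Pdpt[Index] = AllocPageSketch () | PG_ATTRS;
    }
    return (uint64_t *)(uintptr_t)(Pdpt[Index] & PHYS_ADDR_MASK);
  }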

Note:
RestorePageTableBelow4G() does not use 1GB page-size entries when creating
new pages, maintaining consistency with the behavior of the original code.

The purpose of this patch is to ensure that a Page Directory Entry
(PDE) exists prior to its usage.

Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Author: Jiaxin Wu <jiaxin.wu@intel.com>, 2024-07-15 10:16:12 +08:00 (committed by mergify[bot])
Parent: 9d8a5fbd0c
Commit: f73b97fe7f
3 changed files with 48 additions and 5 deletions


@@ -1,7 +1,7 @@
 /** @file
 Page table manipulation functions for IA-32 processors
-Copyright (c) 2009 - 2023, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2009 - 2024, Intel Corporation. All rights reserved.<BR>
 Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
 SPDX-License-Identifier: BSD-2-Clause-Patent
@@ -66,6 +66,22 @@ SmmInitPageTable (
   return GenSmmPageTable (PagingPae, mPhysicalAddressBits);
 }
+
+/**
+  Allocate free Page for PageFault handler use.
+
+  @return Page address.
+
+**/
+UINT64
+AllocPage (
+  VOID
+  )
+{
+  CpuDeadLoop ();
+  return 0;
+}
+
 /**
   Page Fault handler for SMM use.
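The IA32 stub above simply dead-loops: as the commit's own reasoning goes, a
32-bit SMM build creates a full 0-4G map up front, so the page-fault path
should never need a fresh page-table page, and reaching the stub indicates a
bug. On X64 the allocation draws from the page pool mentioned in the diff
below. A minimal free-list sketch of that general idea, in stand-alone C with
hypothetical names and pool size, not the actual EDK II implementation:

  #include <stdint.h>
  #include <string.h>

  #define PAGE_SIZE   4096u
  #define POOL_PAGES  8u

  /* Hypothetical pool: a few page-aligned pages plus a bump index. */
  static _Alignas (4096) uint8_t  mPagePool[POOL_PAGES][PAGE_SIZE];
  static unsigned                 mNextFreePage;

  /* Return the address of a zeroed 4K page, or 0 when the pool is exhausted
     (a real allocator might reclaim an in-use page here instead of failing). */
  static uint64_t
  AllocPoolPageSketch (void)
  {
    if (mNextFreePage >= POOL_PAGES) {
      return 0;
    }
    memset (mPagePool[mNextFreePage], 0, PAGE_SIZE);
    return (uint64_t)(uintptr_t)mPagePool[mNextFreePage++];
  }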


@@ -1202,6 +1202,21 @@ RestorePageTableBelow4G (
   // PDPTE
   //
   PTIndex = (UINTN)BitFieldRead64 (PFAddress, 30, 38);
+  if ((PageTable[PTIndex] & IA32_PG_P) == 0) {
+    //
+    // In the 32-bit case, a full map page table for 0-4G is created by default,
+    // and the PDPTE must be a non-leaf entry, so the PDPTE is always present.
+    // Therefore, ASSERT that this must be the 64-bit case running here.
+    //
+    ASSERT (sizeof (UINT64) == sizeof (UINTN));
+
+    //
+    // If the entry is not present, allocate one page from the page pool for it.
+    //
+    PageTable[PTIndex] = AllocPage () | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
+  }
+
   ASSERT (PageTable[PTIndex] != 0);
   PageTable = (UINT64 *)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
@@ -1209,9 +1224,9 @@
   // PD
   //
   PTIndex = (UINTN)BitFieldRead64 (PFAddress, 21, 29);
-  if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
+  if ((PageTable[PTIndex] & IA32_PG_P) == 0) {
     //
-    // Large page
+    // A 2M page size will be used directly when the 2M entry is marked as non-present.
     //
     //
@@ -1238,7 +1253,8 @@
     }
   } else {
     //
-    // Small page
+    // If the 2M entry is marked as present, a 4K page size will be utilized.
+    // In this scenario, the 2M entry must be a non-leaf entry.
     //
     ASSERT (PageTable[PTIndex] != 0);
     PageTable = (UINT64 *)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
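A side note on the index math above: with 4-level/PAE-style paging, bits
30-38 of the faulting address select the PDPTE and bits 21-29 select the PDE
(for 32-bit addresses the high bits of the first field are simply zero).
BitFieldRead64 is the BaseLib helper; the stand-alone sketch below reproduces
the same computation in plain C, with everything else hypothetical:

  #include <stdint.h>
  #include <stdio.h>

  /* Mirror of BaseLib's BitFieldRead64 (): read bits [StartBit, EndBit] of Value. */
  static uint64_t
  BitFieldRead64Sketch (uint64_t Value, unsigned StartBit, unsigned EndBit)
  {
    return (Value >> StartBit) & ((1ULL << (EndBit - StartBit + 1u)) - 1u);
  }

  int
  main (void)
  {
    uint64_t  PFAddress = 0x12345678ULL;  /* example faulting address */

    /* 512 entries per table: each index is 9 bits wide. */
    printf ("PDPTE index: %llu\n",
            (unsigned long long)BitFieldRead64Sketch (PFAddress, 30, 38));
    printf ("PDE   index: %llu\n",
            (unsigned long long)BitFieldRead64Sketch (PFAddress, 21, 29));
    return 0;
  }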


@@ -1,7 +1,7 @@
 /** @file
 SMM profile internal header file.
-Copyright (c) 2012 - 2018, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2012 - 2024, Intel Corporation. All rights reserved.<BR>
 Copyright (c) 2020, AMD Incorporated. All rights reserved.<BR>
 SPDX-License-Identifier: BSD-2-Clause-Patent
@@ -142,6 +142,17 @@ IsAddressValid (
   IN BOOLEAN  *Nx
   );
+
+/**
+  Allocate free Page for PageFault handler use.
+
+  @return Page address.
+
+**/
+UINT64
+AllocPage (
+  VOID
+  );
+
 /**
   Page Fault handler for SMM use.