Macros

#define MMU_PDE_COUNT (1024)
    Number of entries inside the page directory.

#define MMU_PTE_COUNT (1024)
    Number of entries inside a single page table.

#define MMU_PDE_KERNEL_FIRST (767)
    First PDE corresponding to the kernel pages.

#define MMU_PDE_KERNEL_COUNT (MMU_PDE_COUNT - MMU_PDE_KERNEL_FIRST)
    Number of PDEs corresponding to kernel pages.

#define MMU_RECURSIVE_PAGE_DIRECTORY_ADDRESS ((page_directory_t)FROM_PFN(0xFFFFF))
    The virtual address of the page directory when using recursive paging.
Functions

ALIGNED (PAGE_SIZE)
    Flush the translation lookaside buffer (mmu_flush_tlb). More...

void mmu_load (paddr_t page_directory)
    Replace the current page directory. More...

paddr_t mmu_new (void)
    Initialize a new page directory. More...

void mmu_destroy (paddr_t mmu)
    Release the MMU's page directory. More...

void mmu_clone (paddr_t destination)
    Clone the current MMU inside another one.

bool mmu_map (vaddr_t virtual, paddr_t pageframe, int prot)
    Map a virtual address to a physical one. More...

bool mmu_map_range (vaddr_t virtual, paddr_t physical, size_t size, int prot)
    Map a range of virtual addresses to physical ones. More...

static paddr_t __duplicate_cow_page (void *orig)
    Duplicate the content of a CoW page.

paddr_t mmu_unmap (vaddr_t virtual)
    Unmap a virtual address. More...

void mmu_unmap_range (vaddr_t start, vaddr_t end)
    Unmap a range of virtual addresses. More...

void mmu_identity_map (paddr_t start, paddr_t end, int prot)
    Perform identity mapping inside a given virtual address range. More...

paddr_t mmu_find_physical (vaddr_t virtual)
    Find the physical mapping of a virtual address. More...

error_t mmu_copy_on_write (vaddr_t addr)
    Try to remap a potential copy-on-write mapping.

bool mmu_init (void)
    Initialize the MMU's paging system. More...
Data Fields

u8 mmu_pde::present : 1
    Whether this entry is present.

u8 mmu_pde::writable : 1
    Read/Write.

u8 mmu_pde::user : 1
    User/Supervisor.

u8 mmu_pde::pwt : 1
    Page-level write-through.

u8 mmu_pde::pcd : 1
    Page-level cache disabled.

u8 mmu_pde::accessed : 1
    Whether this entry has been used for translation.

u32 mmu_pde::page_table : 20
    Pageframe number of the referenced page table.

u8 mmu_pte::present : 1
    Whether this entry is present.

u8 mmu_pte::writable : 1
    Read/Write.

u8 mmu_pte::user : 1
    User/Supervisor.

u8 mmu_pte::pwt : 1
    Page-level write-through.

u8 mmu_pte::pcd : 1
    Page-level cache disabled.

u8 mmu_pte::accessed : 1
    Whether the software has accessed this page.

u8 mmu_pte::dirty : 1
    Whether software has written to this page.

u32 mmu_pte::page_frame : 20
    Pageframe number of the referenced page frame.
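Taken together, the documented bits suggest declarations along the following lines. This is a sketch only: the actual header is not shown on this page, and the bits that are not listed above (six in a PDE, five in a PTE) are assumed here to be reserved/unused.

    /* Sketch of the entry layouts, reconstructed from the fields above.
     * u8/u32 are the kernel's fixed-width integer types. */
    struct mmu_pde {
        u8 present : 1;      /* Whether this entry is present */
        u8 writable : 1;     /* Read/Write */
        u8 user : 1;         /* User/Supervisor */
        u8 pwt : 1;          /* Page-level write-through */
        u8 pcd : 1;          /* Page-level cache disabled */
        u8 accessed : 1;     /* Used for translation */
        u8 _unused : 6;      /* Assumption: bits not documented on this page */
        u32 page_table : 20; /* Pageframe number of the referenced page table */
    } __attribute__((packed));

    struct mmu_pte {
        u8 present : 1;      /* Whether this entry is present */
        u8 writable : 1;     /* Read/Write */
        u8 user : 1;         /* User/Supervisor */
        u8 pwt : 1;          /* Page-level write-through */
        u8 pcd : 1;          /* Page-level cache disabled */
        u8 accessed : 1;     /* Whether software has accessed this page */
        u8 dirty : 1;        /* Whether software has written to this page */
        u8 _unused : 5;      /* Assumption: bits not documented on this page */
        u32 page_frame : 20; /* Pageframe number of the referenced page frame */
    } __attribute__((packed));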
MMU - x86
Architecture-dependent implementation of the MMU interface.
Design
The x86 CPU family uses page tables to translate virtual addresses into physical ones.
Each page table contains 1024 entries (PTEs), each corresponding to a single page in the virtual address space. Each PTE contains the physical address it is mapped to, as well as metadata for this page. This metadata can be used to modify the CPU's behavior when accessing an address inside this page (security restrictions, cache policy, ...).
There exist different levels of page tables (PTs): page tables can also contain links to other page tables, allowing access to a larger virtual address space.
- Note
- In our current implementation we only use one PT level, i.e. a 32-bit virtual address space.
At the root of all this is the Page Directory (PD). Similarly to PTs, the PD holds the addresses of page tables, as well as similar metadata. The CPU keeps the address of the PD of the running process inside the CR3 register.
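For reference, the two-level lookup splits a 32-bit virtual address into a 10-bit PD index, a 10-bit PT index and a 12-bit page offset. A minimal sketch of that decomposition (the helper names are ours, not part of this interface):

    /* Split a 32-bit virtual address per the standard x86 non-PAE layout. */
    static inline u32 pde_index(u32 vaddr)   { return vaddr >> 22; }           /* top 10 bits */
    static inline u32 pte_index(u32 vaddr)   { return (vaddr >> 12) & 0x3FF; } /* middle 10 bits */
    static inline u32 page_offset(u32 vaddr) { return vaddr & 0xFFF; }         /* low 12 bits */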
Implementation
Recursive Mapping
To let us easily edit the content of page table entries, even once paging has been switched on, we map the last entry of the page directory to the page directory itself. This allows us to compute the virtual address of any given page table and to edit its content directly, without adding an otherwise necessary page table entry, which would itself require mapping another PTE (chicken and egg). A sketch of the resulting address computation follows the link below.
- See also
- https://medium.com/@connorstack/recursive-page-tables-ad1e03b20a85
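As an illustration, once PDE 1023 points back at the page directory, the CPU's two-level walk resolves the top 4 MiB of the address space to the paging structures themselves. A sketch of the resulting address computation (assuming FROM_PFN(pfn) expands to pfn << 12):

    /* Virtual address of the page table backing PDE `index`:
     * walk = PD[1023] -> PD (acting as a page table) -> PD[index] -> PT. */
    static inline vaddr_t recursive_page_table(u32 index)
    {
        return 0xFFC00000u + index * 0x1000u;
    }

    /* The page directory itself resolves as PD[1023] -> PD -> PD[1023] -> PD,
     * i.e. pageframe 0xFFFFF, matching MMU_RECURSIVE_PAGE_DIRECTORY_ADDRESS. */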
Copy on Write (CoW)
When duplicating an entire MMU, the copy is delayed until it becomes necessary. To do this we mark all writable pages as read-only, so that a page fault is triggered the next time one of them is modified. The duplication of the PDEs/PTEs is done while handling that page fault.
This mechanism avoids duplicating pages that will never be modified, which is a common occurrence when a process performs a combination of fork & exec.
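A sketch of the fault-handling side, with hypothetical glue code (only mmu_copy_on_write and __duplicate_cow_page belong to this interface, and E_SUCCESS is an assumed error_t value):

    /* Hypothetical page-fault glue: on a write fault, first ask the MMU
     * whether this is a CoW page it can duplicate and remap read/write. */
    void on_page_fault(vaddr_t addr, bool is_write)
    {
        if (is_write && mmu_copy_on_write(addr) == E_SUCCESS)
            return; /* page duplicated (via __duplicate_cow_page); retry the access */

        /* otherwise this is a genuine fault: handle it or kill the process */
    }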
◆ ALIGNED()
For efficiency, the results of translations are cached by the CPU. This means that we need to invalidate this cache whenever we want modifications made to the PD to be taken into account.
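On x86 this is classically done with invlpg for a single page, or by reloading CR3 to flush the whole TLB. A sketch of the mechanism (the actual body of mmu_flush_tlb is not shown on this page):

    /* Flush the whole TLB by rewriting CR3 with its current value. */
    static inline void flush_tlb_all(void)
    {
        u32 cr3;
        asm volatile("mov %%cr3, %0" : "=r"(cr3));
        asm volatile("mov %0, %%cr3" :: "r"(cr3) : "memory");
    }

    /* Invalidate the cached translation of a single page. */
    static inline void flush_tlb_page(vaddr_t vaddr)
    {
        asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory");
    }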
◆ mmu_destroy()
void mmu_destroy (paddr_t mmu)
- Note
- This function does not release all the memory that was potentially mapped by the MMU. This should be done separately by the caller.
◆ mmu_find_physical()
paddr_t mmu_find_physical (vaddr_t virtual)
- Returns
- -E_INVAL on error, the matching physical address otherwise.
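A usage sketch based on this return convention (the error is assumed to be stored negated inside the paddr_t):

    /* Resolve a virtual address to its physical mapping. */
    paddr_t phys = mmu_find_physical(virt);
    if (phys == (paddr_t)-E_INVAL) {
        /* `virt` is not mapped */
    }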
◆ mmu_identity_map()
void mmu_identity_map (paddr_t start, paddr_t end, int prot)
Identity mapping is the process of mapping a virtual address to the same physical address.
Both the start and end addresses are included in the range.
- Parameters
| start | The starting address of the range |
| end | The ending address of the range |
| prot | Protection rules in use for this range; a combination of mmu_prot flags. |
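A usage sketch identity-mapping the VGA text buffer so it stays addressable once paging is enabled (MMU_PROT_READ/MMU_PROT_WRITE are hypothetical flag names; only the existence of mmu_prot flags is documented here):

    /* Both bounds are included, so this covers exactly one page. */
    mmu_identity_map(0xB8000, 0xB8FFF, MMU_PROT_READ | MMU_PROT_WRITE);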
◆ mmu_init()
This function is responsible for setting any required bits inside the CPU's registers.
It is also responsible for remapping the kernel's code and addresses before enabling paging.
- Warning
- After calling this function, each and every address will automatically be translated into its physical equivalent using the paging mechanism. Be sure to remap known addresses to avoid raising exceptions.
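The actual body is not shown on this page; the general x86 mechanism it has to end with looks like this sketch (load CR3, then set the PG bit of CR0):

    /* Classic x86 paging-enable sequence. The real mmu_init must have
     * remapped the kernel beforehand, as the warning above explains. */
    static void enable_paging(paddr_t page_directory)
    {
        u32 cr0;
        asm volatile("mov %0, %%cr3" :: "r"(page_directory));
        asm volatile("mov %%cr0, %0" : "=r"(cr0));
        cr0 |= 0x80000000u; /* CR0.PG */
        asm volatile("mov %0, %%cr0" :: "r"(cr0));
    }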
◆ mmu_load()
void mmu_load (paddr_t page_directory)
- Parameters
| page_directory | The physical address of the page directory |
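A usage sketch combining mmu_new and mmu_load, following mmu_new's documented 0-on-error return:

    /* Create a fresh address space and make it the active one. */
    paddr_t pd = mmu_new();
    if (pd != 0)
        mmu_load(pd); /* from now on translations go through `pd` */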
◆ mmu_map()
bool mmu_map (vaddr_t virt, paddr_t physical, int prot)
- Parameters
| virt | The virtual address |
| physical | Its physical equivalent |
| prot | Protection rules in use for this page; a combination of mmu_prot flags. |
- Returns
- False if the address was already mapped beforehand, true otherwise.
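A usage sketch (MMU_PROT_WRITE is a hypothetical flag name):

    /* Map a single page and detect a pre-existing mapping. */
    if (!mmu_map(virt, pageframe, MMU_PROT_WRITE)) {
        /* `virt` was already mapped: decide whether this is an error */
    }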
◆ mmu_map_range()
bool mmu_map_range (vaddr_t virt, paddr_t physical, size_t size, int prot)
- Parameters
| virt | The start of the virtual address range |
| physical | Its physical equivalent |
| size | The size of the region to map |
| prot | Protection rules in use for this range; a combination of mmu_prot flags. |
- Returns
- False if an address in the range was already mapped beforehand, true otherwise.
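A usage sketch mapping a physically contiguous 16 KiB buffer in one call (MMU_PROT_WRITE is a hypothetical flag name, and `size` is assumed to be in bytes):

    /* Equivalent to four consecutive mmu_map calls. */
    if (!mmu_map_range(virt, physical, 4 * PAGE_SIZE, MMU_PROT_WRITE)) {
        /* part of the range was already mapped */
    }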
◆ mmu_new()
Allocate and initialize a new page directory.
- Returns
- The physical address of the new page directory, or 0 on error.
◆ mmu_unmap()
paddr_t mmu_unmap (vaddr_t virt)
- Warning
- After calling this, referencing the given virtual address may cause the CPU to raise an exception.
- Parameters
| virt | The virtual address to unmap |
- Returns
- The physical pageframe associated with the unmapped address
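A usage sketch; the frame-allocator function is hypothetical:

    /* Tear down a mapping and hand the backing frame back to the
     * (hypothetical) pageframe allocator. */
    paddr_t frame = mmu_unmap(virt);
    pageframe_free(frame);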
◆ mmu_unmap_range()
void mmu_unmap_range (vaddr_t start, vaddr_t end)
- Parameters
| start | The start of the virtual address range |
| end | The end of the virtual address range |