.. SPDX-License-Identifier: GPL-2.0

=================
KVM Lock Overview
=================

1. Acquisition Orders
---------------------

The acquisition orders for mutexes are as follows:

- kvm->lock is taken outside vcpu->mutex

- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare.

On x86, vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock.

Everything else is a leaf: no other lock is taken inside the critical
sections.

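For instance, a path that needs both VM-wide state and a single vCPU must
honour the first rule above. The following is a minimal sketch (the structure
and function names are illustrative stand-ins, not actual KVM code) of what
respecting that ordering looks like::

  #include <linux/mutex.h>

  /* Simplified stand-ins for struct kvm and struct kvm_vcpu. */
  struct example_kvm {
          struct mutex lock;
  };

  struct example_vcpu {
          struct mutex mutex;
  };

  static int example_vm_and_vcpu_op(struct example_kvm *kvm,
                                    struct example_vcpu *vcpu)
  {
          mutex_lock(&kvm->lock);         /* outer lock, taken first */
          mutex_lock(&vcpu->mutex);       /* inner lock, taken second */

          /* ... work that needs both the VM and the vCPU ... */

          mutex_unlock(&vcpu->mutex);     /* release in reverse order */
          mutex_unlock(&kvm->lock);
          return 0;
  }
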
2. Exception
------------

Fast page fault:

Fast page fault is the fast path which fixes the guest page fault out of
the mmu-lock on x86. Currently, the page fault can be fast in one of the
following two cases:

1. Access Tracking: The SPTE is not present, but it is marked for access
   tracking, i.e. the SPTE_SPECIAL_MASK is set. That means we need to
   restore the saved R/X bits. This is described in more detail below.

2. Write-Protection: The SPTE is present and the fault is caused by
   write-protection. That means we just need to change the W bit of the
   spte.

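As a rough illustration (the names and bit positions below are invented for
the example and are not the real KVM masks), the eligibility test for the
fast path amounts to::

  #include <stdbool.h>
  #include <stdint.h>

  /* Illustrative bit positions; the real masks live in the KVM sources. */
  #define EX_SPTE_PRESENT      (1ull << 0)
  #define EX_SPTE_SPECIAL_MASK (1ull << 62)

  /*
   * Case 1: a not-present spte that is marked for access tracking.
   * Case 2: a present spte hit by a write while it is write-protected.
   */
  static bool ex_is_fast_fault_candidate(uint64_t spte, bool write_fault,
                                         bool spte_writable)
  {
          if (!(spte & EX_SPTE_PRESENT))
                  return spte & EX_SPTE_SPECIAL_MASK;

          return write_fault && !spte_writable;
  }
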
What we use to avoid all the races is the SPTE_HOST_WRITEABLE bit and the
SPTE_MMU_WRITEABLE bit on the spte:

- SPTE_HOST_WRITEABLE means the gfn is writable on the host.
- SPTE_MMU_WRITEABLE means the gfn is writable in the MMU. The bit is set
  when the gfn is writable in the guest MMU and it is not write-protected
  by shadow page write-protection.

On the fast page fault path, we will use cmpxchg to atomically set the spte
W bit if spte.SPTE_HOST_WRITEABLE = 1 and spte.SPTE_MMU_WRITEABLE = 1, or
restore the saved R/X bits if the VMX_EPT_TRACK_ACCESS mask is set, or both.
This is safe because any change to these bits is detected by the cmpxchg.

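The essence of the write-protection case can be modelled with plain C11
atomics. This is a minimal user-space sketch of the idea, not KVM code; the
bit positions are invented for the example::

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>

  /* Illustrative bit positions; the real masks live in the KVM sources. */
  #define EX_SPTE_W              (1ull << 1)
  #define EX_SPTE_HOST_WRITEABLE (1ull << 57)
  #define EX_SPTE_MMU_WRITEABLE  (1ull << 58)

  /*
   * Set the W bit only if both writability bits were observed, and only
   * if the spte has not changed since it was read; otherwise the
   * compare-and-exchange fails and the caller falls back to the slow
   * path under mmu-lock.
   */
  static bool ex_fast_fix_write_fault(_Atomic uint64_t *sptep)
  {
          uint64_t old_spte = atomic_load(sptep);

          if (!(old_spte & EX_SPTE_HOST_WRITEABLE) ||
              !(old_spte & EX_SPTE_MMU_WRITEABLE))
                  return false;

          return atomic_compare_exchange_strong(sptep, &old_spte,
                                                old_spte | EX_SPTE_W);
  }
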
But we need to carefully check these cases:

1) The mapping from gfn to pfn

The mapping from gfn to pfn may change, since we can only ensure that the
pfn is not changed during the cmpxchg. This is an ABA problem; for example,
the following case can happen:

+------------------------------------------------------------------------+
| At the beginning::                                                      |
|                                                                         |
|       gpte = gfn1                                                       |
|       gfn1 is mapped to pfn1 on host                                    |
|       spte is the shadow page table entry corresponding with gpte and   |
|       spte = pfn1                                                       |
+------------------------------------------------------------------------+
| On fast page fault path:                                                |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|  old_spte = *spte;                 |                                   |
+------------------------------------+-----------------------------------+
|                                    | pfn1 is swapped out::             |
|                                    |                                   |
|                                    |     spte = 0;                     |
|                                    |                                   |
|                                    | pfn1 is re-alloced for gfn2.      |
|                                    |                                   |
|                                    | gpte is changed to point to       |
|                                    | gfn2 by the guest::               |
|                                    |                                   |
|                                    |     spte = pfn1;                  |
+------------------------------------+-----------------------------------+
| ::                                                                      |
|                                                                         |
|   if (cmpxchg(spte, old_spte, old_spte+W))                              |
|       mark_page_dirty(vcpu->kvm, gfn1)                                  |
|            OOPS!!!                                                      |
+------------------------------------------------------------------------+

We dirty-log for gfn1; that means gfn2 is lost in the dirty bitmap.

For a direct sp, we can easily avoid it since the spte of the direct sp is
fixed to the gfn. For an indirect sp, we disable fast page fault for
simplicity.

A solution for an indirect sp could be to pin the gfn, for example via
kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg. After the pinning:

- We have held the refcount of the pfn, which means the pfn can not be
  freed and reused for another gfn.
- The pfn is writable and therefore it can not be shared between different
  gfns by KSM.

Then, we can ensure the dirty bitmap is correctly set for the gfn.

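A sketch of how such a pinned fast path could look, using the existing
kvm_vcpu_gfn_to_pfn_atomic()/kvm_release_pfn_clean() helpers (this is a
hypothetical illustration, not code that exists in KVM, and error handling
is reduced to the minimum)::

  static bool ex_fast_fix_with_pinned_gfn(struct kvm_vcpu *vcpu, gfn_t gfn,
                                          u64 *sptep, u64 old_spte,
                                          u64 new_spte)
  {
          kvm_pfn_t pfn;
          bool ret;

          /* Take a reference on the pfn backing gfn; it cannot be reused now. */
          pfn = kvm_vcpu_gfn_to_pfn_atomic(vcpu, gfn);
          if (is_error_pfn(pfn))
                  return false;

          /* The cmpxchg still fails if the spte changed since old_spte was read. */
          ret = cmpxchg64(sptep, old_spte, new_spte) == old_spte;
          if (ret)
                  mark_page_dirty(vcpu->kvm, gfn);

          kvm_release_pfn_clean(pfn);
          return ret;
  }
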
2) Dirty bit tracking

In the original code, the spte can be fast updated (non-atomically) if the
spte is read-only and the Accessed bit has already been set, since the
Accessed bit and Dirty bit can not be lost.

But it is not true after fast page fault, since the spte can be marked
writable between reading the spte and updating it, as in the following case:

+------------------------------------------------------------------------+
| At the beginning::                                                      |
|                                                                         |
|       spte.W = 0                                                        |
|       spte.Accessed = 1                                                 |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| In mmu_spte_clear_track_bits()::   |                                   |
|                                    |                                   |
|  old_spte = *spte;                 |                                   |
|                                    |                                   |
|                                    |                                   |
|  /* 'if' condition is satisfied. */|                                   |
|  if (old_spte.Accessed == 1 &&     |                                   |
|      old_spte.W == 0)              |                                   |
|     spte = 0ull;                   |                                   |
+------------------------------------+-----------------------------------+
|                                    | on fast page fault path::         |
|                                    |                                   |
|                                    |    spte.W = 1                     |
|                                    |                                   |
|                                    | memory write on the spte::        |
|                                    |                                   |
|                                    |    spte.Dirty = 1                 |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|  else                              |                                   |
|     old_spte = xchg(spte, 0ull)    |                                   |
|  if (old_spte.Accessed == 1)       |                                   |
|     kvm_set_pfn_accessed(spte.pfn);|                                   |
|  if (old_spte.Dirty == 1)          |                                   |
|     kvm_set_pfn_dirty(spte.pfn);   |                                   |
|     OOPS!!!                        |                                   |
+------------------------------------+-----------------------------------+

The Dirty bit is lost in this case.

In order to avoid this kind of issue, we always treat the spte as "volatile"
if it can be updated out of mmu-lock; see spte_has_volatile_bits(). This
means the spte is always atomically updated in this case.

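The safe pattern can again be modelled with C11 atomics. This is a
user-space sketch of the idea behind the atomic update, with invented names
and bit positions, not the actual mmu_spte_clear_track_bits()::

  #include <stdatomic.h>
  #include <stdint.h>

  /* Illustrative bit positions; the real masks live in the KVM sources. */
  #define EX_SPTE_ACCESSED (1ull << 5)
  #define EX_SPTE_DIRTY    (1ull << 6)

  static void ex_set_pfn_accessed(uint64_t spte) { (void)spte; /* ... */ }
  static void ex_set_pfn_dirty(uint64_t spte)    { (void)spte; /* ... */ }

  /*
   * Because the spte may gain the W and Dirty bits concurrently (out of
   * mmu-lock), it is cleared with an atomic exchange, and the A/D state
   * is taken from the value that was actually present at that moment,
   * not from a stale earlier read.
   */
  static void ex_clear_spte_track_bits(_Atomic uint64_t *sptep)
  {
          uint64_t old_spte = atomic_exchange(sptep, 0);

          if (old_spte & EX_SPTE_ACCESSED)
                  ex_set_pfn_accessed(old_spte);
          if (old_spte & EX_SPTE_DIRTY)
                  ex_set_pfn_dirty(old_spte);
  }
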
3) Flush TLBs due to spte updates

If the spte is updated from writable to read-only, we should flush all TLBs,
otherwise rmap_write_protect will find a read-only spte, even though the
writable spte might be cached on a CPU's TLB.

As mentioned before, the spte can be updated to writable out of mmu-lock on
the fast page fault path. In order to easily audit the path, we check in
mmu_spte_update() whether TLBs need to be flushed for this reason, since
this is a common function to update the spte (present -> present).

Since the spte is "volatile" if it can be updated out of mmu-lock, we always
atomically update the spte, and the race caused by fast page fault can be
avoided. See the comments in spte_has_volatile_bits() and mmu_spte_update().

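The core of that check is small; a sketch of the decision (illustrative
only, not the real mmu_spte_update()) is::

  #include <stdbool.h>
  #include <stdint.h>

  /* Illustrative bit position; the real mask lives in the KVM sources. */
  #define EX_SPTE_W (1ull << 1)

  /*
   * If the old spte was writable and the new one is not, a CPU may still
   * hold the stale writable translation in its TLB, so the caller must
   * flush the TLBs of all vCPUs.
   */
  static bool ex_spte_update_needs_flush(uint64_t old_spte, uint64_t new_spte)
  {
          return (old_spte & EX_SPTE_W) && !(new_spte & EX_SPTE_W);
  }
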
Lockless Access Tracking:

This is used for Intel CPUs that are using EPT but do not support the EPT A/D
bits. In this case, when the KVM MMU notifier is called to track accesses to a
page (via kvm_mmu_notifier_clear_flush_young), it marks the PTE as not-present
by clearing the RWX bits in the PTE and storing the original R & X bits in
some unused/ignored bits. In addition, the SPTE_SPECIAL_MASK is also set on the
PTE (using the ignored bit 62). When the VM tries to access the page later on,
a fault is generated and the fast page fault mechanism described above is used
to atomically restore the PTE to a Present state. The W bit is not saved when
the PTE is marked for access tracking and, during restoration to the Present
state, the W bit is set depending on whether or not it was a write access. If
it wasn't, then the W bit will remain clear until a write access happens, at
which time it will be set using the Dirty tracking mechanism described above.

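A rough model of the mark/restore steps described above, with an invented
bit layout (the real masks and shift are defined in the KVM sources), is::

  #include <stdbool.h>
  #include <stdint.h>

  /* Invented EPT-like layout: R/W/X in bits 0-2, marker in ignored bit 62. */
  #define EX_RWX_MASK      0x7ull
  #define EX_RX_MASK       0x5ull                 /* R (bit 0) and X (bit 2) */
  #define EX_SPECIAL_MASK  (1ull << 62)           /* "access tracked" marker */
  #define EX_SAVED_SHIFT   52                     /* stash R/X in ignored bits */
  #define EX_SAVED_MASK    (EX_RX_MASK << EX_SAVED_SHIFT)

  /* Mark a PTE for access tracking: clear RWX, save the old R and X bits. */
  static uint64_t ex_mark_for_access_tracking(uint64_t pte)
  {
          uint64_t saved_rx = pte & EX_RX_MASK;

          pte &= ~(EX_RWX_MASK | EX_SAVED_MASK);
          pte |= saved_rx << EX_SAVED_SHIFT;
          pte |= EX_SPECIAL_MASK;
          return pte;
  }

  /* Restore the PTE on a fault; W is only granted for a write access. */
  static uint64_t ex_restore_access_tracked(uint64_t pte, bool write_fault)
  {
          uint64_t saved_rx = (pte & EX_SAVED_MASK) >> EX_SAVED_SHIFT;

          pte &= ~(EX_SPECIAL_MASK | EX_SAVED_MASK);
          pte |= saved_rx;                        /* restore R and X */
          if (write_fault)
                  pte |= 0x2ull;                  /* set the W bit   */
          return pte;
  }
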
3. Reference
------------

:Name:     kvm_lock
:Type:     mutex
:Arch:     any
:Protects: - vm_list

:Name:     kvm_count_lock
:Type:     raw_spinlock_t
:Arch:     any
:Protects: - hardware virtualization enable/disable
:Comment:  'raw' because hardware enabling/disabling must be atomic w.r.t.
           migration.

:Name:     kvm_arch::tsc_write_lock
:Type:     raw_spinlock_t
:Arch:     x86
:Protects: - kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
           - tsc offset in vmcb
:Comment:  'raw' because updating the tsc offsets must not be preempted.

:Name:     kvm->mmu_lock
:Type:     spinlock_t
:Arch:     any
:Protects: - shadow page/shadow tlb entry
:Comment:  it is a spinlock since it is used in the mmu notifier.

:Name:     kvm->srcu
:Type:     srcu lock
:Arch:     any
:Protects: - kvm->memslots
           - kvm->buses
:Comment:  The srcu read lock must be held while accessing memslots (e.g.
           when using gfn_to_* functions) and while accessing in-kernel
           MMIO/PIO address->device structure mapping (kvm->buses).
           The srcu index can be stored in kvm_vcpu->srcu_idx per vcpu
           if it is needed by multiple functions.

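For example, a minimal sketch (an illustrative function using the generic
srcu_read_lock()/gfn_to_memslot() helpers) of a path that consults the
memslots would be::

  static void ex_touch_memslot(struct kvm *kvm, gfn_t gfn)
  {
          struct kvm_memory_slot *slot;
          int idx;

          idx = srcu_read_lock(&kvm->srcu);
          slot = gfn_to_memslot(kvm, gfn);
          if (slot) {
                  /* ... use the slot while still inside the SRCU section ... */
          }
          srcu_read_unlock(&kvm->srcu, idx);
  }
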
:Name:     blocked_vcpu_on_cpu_lock
:Type:     spinlock_t
:Arch:     x86
:Protects: - blocked_vcpu_on_cpu
:Comment:  This is a per-CPU lock and it is used for VT-d posted-interrupts.
           When VT-d posted-interrupts are supported and the VM has assigned
           devices, we put the blocked vCPU on the list blocked_vcpu_on_cpu
           protected by blocked_vcpu_on_cpu_lock. When VT-d hardware issues
           a wakeup notification event because external interrupts from the
           assigned devices arrive, we find the vCPU on the list and wake
           it up.