=========
Livepatch
=========

This document outlines basic information about kernel livepatching.

Table of Contents:

1. Motivation
2. Kprobes, Ftrace, Livepatching
3. Consistency model
   3.1. Adding consistency model support to new architectures
4. Livepatch module
   4.1. New functions
   4.2. Metadata
   4.3. Livepatch module handling
5. Livepatch life-cycle
   5.1. Registration
   5.2. Enabling
   5.3. Disabling
   5.4. Unregistration
6. Sysfs
7. Limitations


1. Motivation
=============

There are many situations where users are reluctant to reboot a system. It may
be because their system is performing complex scientific computations or is
under heavy load during peak usage. In addition to keeping systems up and
running, users also want a stable and secure system. Livepatching gives users
both by allowing function calls to be redirected, thus fixing critical
functions without a system reboot.


2. Kprobes, Ftrace, Livepatching
================================

There are multiple mechanisms in the Linux kernel that are directly related
to redirection of code execution; namely: kernel probes, function tracing,
and livepatching:

  + Kernel probes are the most generic. The code can be redirected by
    placing a breakpoint instruction in place of almost any instruction.

  + The function tracer calls the code from a predefined location that is
    close to the function entry point. This location is generated by the
    compiler using the '-pg' gcc option.

  + Livepatching typically needs to redirect the code at the very beginning
    of the function entry, before the function parameters or the stack
    are in any way modified.

All three approaches need to modify the existing code at runtime. Therefore
they need to be aware of each other and not step on each other's toes.
Most of these problems are solved by using the dynamic ftrace framework as
a base. A kprobe is registered as a ftrace handler when the function entry
is probed, see CONFIG_KPROBES_ON_FTRACE. Also an alternative function from
a live patch is called with the help of a custom ftrace handler. But there are
some limitations, see below.


3. Consistency model
====================

Functions are there for a reason. They take some input parameters, acquire or
release locks, read, process, and even write some data in a defined way, and
have return values. In other words, each function has defined semantics.

Many fixes do not change the semantics of the modified functions. For
example, they add a NULL pointer or a boundary check, fix a race by adding
a missing memory barrier, or add some locking around a critical section.
Most of these changes are self contained and the function presents itself
the same way to the rest of the system. In this case, the functions might
be updated independently one by one. (This can be done by setting the
'immediate' flag in the klp_patch struct.)

But there are more complex fixes. For example, a patch might change the
ordering of locking in multiple functions at the same time. Or a patch
might exchange the meaning of some temporary structures and update
all the relevant functions. In this case, the affected unit
(thread, whole kernel) needs to start using all the new versions of
the functions at the same time. Also the switch must happen only
when it is safe to do so, e.g. when the affected locks are released
or no data are stored in the modified structures at the moment.

The theory about how to apply functions in a safe way is rather complex.
The aim is to define a so-called consistency model. It attempts to define
the conditions under which the new implementation could be used so that the
system stays consistent.

Livepatch has a consistency model which is a hybrid of kGraft and
kpatch: it uses kGraft's per-task consistency and syscall barrier
switching combined with kpatch's stack trace switching. There are also
a number of fallback options which make it quite flexible.

Patches are applied on a per-task basis, when the task is deemed safe to
switch over. When a patch is enabled, livepatch enters into a
transition state where tasks are converging to the patched state.
Usually this transition state can complete in a few seconds. The same
sequence occurs when a patch is disabled, except the tasks converge from
the patched state to the unpatched state.

An interrupt handler inherits the patched state of the task it
interrupts. The same is true for forked tasks: the child inherits the
patched state of the parent.

Livepatch uses several complementary approaches to determine when it's
safe to patch tasks:

1. The first and most effective approach is stack checking of sleeping
   tasks. If no affected functions are on the stack of a given task,
   the task is patched. In most cases this will patch most or all of
   the tasks on the first try. Otherwise it'll keep trying
   periodically. This option is only available if the architecture has
   reliable stacks (HAVE_RELIABLE_STACKTRACE).

2. The second approach, if needed, is kernel exit switching. A
   task is switched when it returns to user space from a system call, a
   user space IRQ, or a signal. It's useful in the following cases:

   a) Patching I/O-bound user tasks which are sleeping on an affected
      function. In this case you have to send SIGSTOP and SIGCONT to
      force the task to exit the kernel and be patched.
   b) Patching CPU-bound user tasks. If the task is highly CPU-bound
      then it will get patched the next time it gets interrupted by an
      IRQ.
   c) In the future it could be useful for applying patches for
      architectures which don't yet have HAVE_RELIABLE_STACKTRACE. In
      this case you would have to signal most of the tasks on the
      system. However this isn't supported yet because there's
      currently no way to patch kthreads without
      HAVE_RELIABLE_STACKTRACE.

3. For idle "swapper" tasks, since they don't ever exit the kernel, they
   instead have a klp_update_patch_state() call in the idle loop which
   allows them to be patched before the CPU enters the idle state.

   (Note there's not yet such an approach for kthreads.)

All the above approaches may be skipped by setting the 'immediate' flag
in the 'klp_patch' struct, which will disable per-task consistency and
patch all tasks immediately. This can be useful if the patch doesn't
change any function or data semantics. Note that, even with this flag
set, it's possible that some tasks may still be running with an old
version of the function, until that function returns.
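
For illustration, such a patch could set the flag directly in its
klp_patch definition. This is only a sketch; 'objs' stands for the
patched-object array described in the "Metadata" section below:

```c
static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,		/* patched objects, defined elsewhere */
	.immediate = true,	/* skip per-task consistency entirely */
};
```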

There's also an 'immediate' flag in the 'klp_func' struct which allows
you to specify that certain functions in the patch can be applied
without per-task consistency. This might be useful if you want to patch
a common function like schedule(), and the function change doesn't need
consistency but the rest of the patch does.

For architectures which don't have HAVE_RELIABLE_STACKTRACE, the user
must set patch->immediate, which causes all tasks to be patched
immediately. This option should be used with care, only when the patch
doesn't change any function or data semantics.

In the future, architectures which don't have HAVE_RELIABLE_STACKTRACE
may be allowed to use per-task consistency if we can come up with
another way to patch kthreads.

The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
is in transition. Only a single patch (the topmost patch on the stack)
can be in transition at a given time. A patch can remain in transition
indefinitely, if any of the tasks are stuck in the initial patch state.

A transition can be reversed and effectively canceled by writing the
opposite value to the /sys/kernel/livepatch/<patch>/enabled file while
the transition is in progress. Then all the tasks will attempt to
converge back to the original patch state.

There's also a /proc/<pid>/patch_state file which can be used to
determine which tasks are blocking completion of a patching operation.
If a patch is in transition, this file shows 0 to indicate the task is
unpatched and 1 to indicate it's patched. Otherwise, if no patch is in
transition, it shows -1. Any tasks which are blocking the transition
can be signaled with SIGSTOP and SIGCONT to force them to change their
patched state. This may be harmful to the system though. The
/sys/kernel/livepatch/<patch>/signal attribute provides a better
alternative. Writing 1 to the attribute sends a fake signal to all
remaining blocking tasks. No proper signal is actually delivered (there
is no data in signal pending structures). Tasks are interrupted or woken
up, and forced to change their patched state.

3.1 Adding consistency model support to new architectures
---------------------------------------------------------

For adding consistency model support to new architectures, there are a
few options:

1) Add CONFIG_HAVE_RELIABLE_STACKTRACE. This means porting objtool, and
   for non-DWARF unwinders, also making sure there's a way for the stack
   tracing code to detect interrupts on the stack.

2) Alternatively, ensure that every kthread has a call to
   klp_update_patch_state() in a safe location. Kthreads are typically
   in an infinite loop which does some action repeatedly. The safe
   location to switch the kthread's patch state would be at a designated
   point in the loop where there are no locks taken and all data
   structures are in a well-defined state.

   The location is clear when using workqueues or the kthread worker
   API. These kthreads process independent actions in a generic loop.

   It's much more complicated with kthreads which have a custom loop.
   There the safe location must be carefully selected on a case-by-case
   basis.

   In that case, arches without HAVE_RELIABLE_STACKTRACE would still be
   able to use the non-stack-checking parts of the consistency model:

   a) patching user tasks when they cross the kernel/user space
      boundary; and

   b) patching kthreads and idle tasks at their designated patch points.

   This option isn't as good as option 1 because it requires signaling
   user tasks and waking kthreads to patch them. But it could still be
   a good backup option for those architectures which don't have
   reliable stack traces yet.

In the meantime, patches for such architectures can bypass the
consistency model by setting klp_patch.immediate to true. This option
is perfectly fine for patches which don't change the semantics of the
patched functions. In practice, this is usable for ~90% of security
fixes. Use of this option also means the patch can't be unloaded after
it has been disabled.
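
A custom kthread main loop with such a designated patch point might look
like this. This is only a sketch; do_one_action() is a hypothetical
helper standing in for the kthread's real work, not a kernel API:

```c
#include <linux/kthread.h>
#include <linux/livepatch.h>
#include <linux/sched.h>

static int my_kthread_fn(void *data)
{
	while (!kthread_should_stop()) {
		do_one_action(data);	/* hypothetical unit of work */

		/*
		 * Designated patch point: no locks are held and all
		 * data structures are in a well-defined state here.
		 */
		klp_update_patch_state(current);

		schedule_timeout_interruptible(HZ);
	}
	return 0;
}
```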


4. Livepatch module
===================

Livepatches are distributed using kernel modules, see
samples/livepatch/livepatch-sample.c.

The module includes a new implementation of the functions that we want
to replace. In addition, it defines some structures describing the
relation between the original and the new implementation. Then there
is code that makes the kernel start using the new code when the livepatch
module is loaded. Also there is code that cleans up before the
livepatch module is removed. All this is explained in more detail in
the next sections.


4.1. New functions
------------------

New versions of functions are typically just copied from the original
sources. A good practice is to add a prefix to their names so that they
can be distinguished from the original ones, e.g. in a backtrace. Also
they can be declared as static because they are not called directly
and do not need global visibility.

The patch contains only the functions that are really modified. But these
might need to access functions or data from the original source file
that may only be locally accessible. This can be solved by a special
relocation section in the generated livepatch module, see
Documentation/livepatch/module-elf-format.txt for more details.
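
For example, samples/livepatch/livepatch-sample.c replaces
cmdline_proc_show() with a static, prefixed copy along these lines
(shown here slightly abridged):

```c
#include <linux/seq_file.h>

static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", "this has been live patched");
	return 0;
}
```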


4.2. Metadata
-------------

The patch is described by several structures that split the information
into three levels:

  + struct klp_func is defined for each patched function. It describes
    the relation between the original and the new implementation of a
    particular function.

    The structure includes the name, as a string, of the original function.
    The function address is found via kallsyms at runtime.

    Then it includes the address of the new function. It is defined
    directly by assigning the function pointer. Note that the new
    function is typically defined in the same source file.

    As an optional parameter, the symbol position in the kallsyms database
    can be used to disambiguate functions of the same name. This is not
    the absolute position in the database, but rather the order in which
    the symbol occurs within a particular object (vmlinux or a kernel
    module). Note that kallsyms allows for searching symbols according to
    the object name.

    There's also an 'immediate' flag which, when set, patches the
    function immediately, bypassing the consistency model safety checks.

  + struct klp_object defines an array of patched functions (struct
    klp_func) in the same object, where the object is either vmlinux
    (NULL) or a module name.

    The structure helps to group and handle functions for each object
    together. Note that patched modules might be loaded later than
    the patch itself and the relevant functions might be patched
    only when they become available.

  + struct klp_patch defines an array of patched objects (struct
    klp_object).

    This structure handles all patched functions consistently and,
    eventually, synchronously. The whole patch is applied only when all
    patched symbols are found. The only exceptions are symbols from
    objects (kernel modules) that have not been loaded yet.

    Setting the 'immediate' flag applies the patch to all tasks
    immediately, bypassing the consistency model safety checks.

    For more details on how the patch is applied on a per-task basis,
    see the "Consistency model" section.
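
Putting the three levels together, the metadata in
samples/livepatch/livepatch-sample.c looks roughly like this:

```c
static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* name being NULL means vmlinux */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};
```

Note that both arrays are terminated by an empty entry.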


4.3. Livepatch module handling
------------------------------

The usual behavior is that the new functions will get used when
the livepatch module is loaded. For this, the module init() function
has to register the patch (struct klp_patch) and enable it. See the
section "Livepatch life-cycle" below for more details about these
two operations.
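
The sample module's init() and exit() callbacks follow this pattern
(abridged from samples/livepatch/livepatch-sample.c):

```c
static int livepatch_init(void)
{
	int ret;

	ret = klp_register_patch(&patch);
	if (ret)
		return ret;

	ret = klp_enable_patch(&patch);
	if (ret) {
		WARN_ON(klp_unregister_patch(&patch));
		return ret;
	}

	return 0;
}

static void livepatch_exit(void)
{
	WARN_ON(klp_unregister_patch(&patch));
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");
```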

Module removal is only safe when there are no users of the underlying
functions. The immediate consistency model is not able to detect this. The
code just redirects the functions at the very beginning and it does not
check whether the functions are in use. In other words, it knows when the
functions get called but it does not know when the functions return.
Therefore it cannot be decided when the livepatch module can be safely
removed. This is solved by the hybrid consistency model. When the system is
transitioned to a new patch state (patched/unpatched) it is guaranteed that
no task sleeps or runs in the old code.


5. Livepatch life-cycle
=======================

Livepatching defines four basic operations that define the life cycle of each
live patch: registration, enabling, disabling and unregistration. There are
several reasons why it is done this way.

First, the patch is applied only when all patched symbols for already
loaded objects are found. The error handling is much easier if this
check is done before particular functions get redirected.

Second, the immediate consistency model does not guarantee that no task is
sleeping in the new code after the patch is reverted. This means that the new
code needs to stay around "forever". If the code is there, one could apply it
again. Therefore it makes sense to separate the operations that might be done
once from those that need to be repeated when the patch is enabled (applied)
again.

Third, it might take some time until the entire system is migrated
when a more complex consistency model is used. The patch revert might
block the livepatch module removal for too long. Therefore it is useful
to revert the patch using a separate operation that might be called
explicitly. But it does not make sense to remove all information
until the livepatch module is really removed.


5.1. Registration
-----------------

Each patch first has to be registered using klp_register_patch(). This makes
the patch known to the livepatch framework. It also performs some preliminary
computations and checks.

In particular, the patch is added into the list of known patches. The
addresses of the patched functions are found according to their names.
The special relocations, mentioned in the section "New functions", are
applied. The relevant entries are created under
/sys/kernel/livepatch/<name>. The patch is rejected when any operation
fails.


5.2. Enabling
-------------

Registered patches might be enabled either by calling klp_enable_patch() or
by writing '1' to /sys/kernel/livepatch/<name>/enabled. The system will
start using the new implementation of the patched functions at this stage.

When a patch is enabled, livepatch enters into a transition state where
tasks are converging to the patched state. This is indicated by a value
of '1' in /sys/kernel/livepatch/<name>/transition. Once all tasks have
been patched, the 'transition' value changes to '0'. For more
information about this process, see the "Consistency model" section.

If an original function is patched for the first time, a function
specific struct klp_ops is created and a universal ftrace handler is
registered.

Functions might be patched multiple times. The ftrace handler is registered
only once for a given function. Further patches just add an entry to the
list (see field `func_stack`) of the struct klp_ops. The last added
entry is chosen by the ftrace handler and becomes the active function
replacement.

Note that the patches might be enabled in a different order than they were
registered.


5.3. Disabling
--------------

Enabled patches might get disabled either by calling klp_disable_patch() or
by writing '0' to /sys/kernel/livepatch/<name>/enabled. At this stage
either the code from the previously enabled patch or even the original
code gets used.

When a patch is disabled, livepatch enters into a transition state where
tasks are converging to the unpatched state. This is indicated by a
value of '1' in /sys/kernel/livepatch/<name>/transition. Once all tasks
have been unpatched, the 'transition' value changes to '0'. For more
information about this process, see the "Consistency model" section.

Here all the functions (struct klp_func) associated with the to-be-disabled
patch are removed from the corresponding struct klp_ops. The ftrace handler
is unregistered and the struct klp_ops is freed when the func_stack list
becomes empty.

Patches must be disabled in exactly the reverse order in which they were
enabled. This makes the problem and the implementation much simpler.


5.4. Unregistration
-------------------

Disabled patches might be unregistered by calling klp_unregister_patch().
This can be done only when the patch is disabled and the code is no longer
used. It must be called before the livepatch module gets unloaded.

At this stage, all the relevant sysfs entries are removed and the patch
is removed from the list of known patches.


6. Sysfs
========

Information about the registered patches can be found under
/sys/kernel/livepatch. The patches can be enabled and disabled
by writing to the entries there.

The /sys/kernel/livepatch/<patch>/signal attribute allows an administrator
to affect an ongoing patching operation.
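
A typical session might look like this. This is only a sketch;
'livepatch_sample' is the name the sample patch module registers under
and is an assumption here:

```shell
# Load the patch module; its init() registers and enables the patch.
insmod samples/livepatch/livepatch-sample.ko

# Inspect the patch and its state.
ls /sys/kernel/livepatch/
cat /sys/kernel/livepatch/livepatch_sample/enabled

# Disable the patch, then unload the module.
echo 0 > /sys/kernel/livepatch/livepatch_sample/enabled
rmmod livepatch_sample
```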

See Documentation/ABI/testing/sysfs-kernel-livepatch for more details.


7. Limitations
==============

The current Livepatch implementation has several limitations:

  + The patch must not change the semantics of the patched functions.

    The current implementation guarantees only that either the old
    or the new function is called. The functions are patched one
    by one. It means that the patch must _not_ change the semantics
    of the function.

  + Data structures can not be patched.

    There is no support for versioning data structures or migrating
    one structure into another. Also the simple consistency model does
    not allow switching more functions atomically.

    Once there is a more complex consistency model, it will be possible
    to use some workarounds. For example, it will be possible to use a
    hole for a new member because the data structure is aligned. Or it
    will be possible to use an existing member for something else.

    There are no plans to add more generic support for modified structures
    at the moment.

  + Only functions that can be traced can be patched.

    Livepatch is based on dynamic ftrace. In particular, functions
    implementing ftrace itself or the livepatch ftrace handler cannot
    be patched. Otherwise, the code would end up in an infinite loop. A
    potential mistake is prevented by marking the problematic functions
    with "notrace".

  + Livepatch works reliably only when the dynamic ftrace location is at
    the very beginning of the function.

    The function needs to be redirected before the stack or the function
    parameters are modified in any way. For example, livepatch requires
    using the -mfentry gcc compiler option on x86_64.

    One exception is the PPC port. It uses relative addressing and TOC.
    Each function has to handle TOC and save LR before it can call
    the ftrace handler. This operation has to be reverted on return.
    Fortunately, the generic ftrace code has the same problem and all
    this is handled on the ftrace level.

  + Kretprobes using the ftrace framework conflict with the patched
    functions.

    Both kretprobes and livepatches use a ftrace handler that modifies
    the return address. The first user wins. Either the probe or the patch
    is rejected when the handler is already in use by the other.

  + Kprobes in the original function are ignored when the code is
    redirected to the new implementation.

    There is work in progress to add warnings about this situation.