================================================
Completions - "wait for completion" barrier APIs
================================================

Introduction:
-------------

If you have one or more threads that must wait for some kernel activity
to have reached a point or a specific state, completions can provide a
race-free solution to this problem. Semantically they are somewhat like a
pthread_barrier() and have similar use-cases.

Completions are a code synchronization mechanism which is preferable to any
misuse of locks/semaphores and busy-loops. Any time you think of using
yield() or some quirky msleep(1) loop to allow something else to proceed,
you probably want to look into using one of the wait_for_completion*()
calls and complete() instead.

The advantage of using completions is that they have a well defined, focused
purpose which makes it very easy to see the intent of the code, but they
also result in more efficient code as all threads can continue execution
until the result is actually needed, and both the waiting and the signalling
are highly efficient, using the low-level scheduler sleep/wakeup facilities.

Completions are built on top of the waitqueue and wakeup infrastructure of
the Linux scheduler. The event the threads on the waitqueue are waiting for
is reduced to a simple flag in 'struct completion', appropriately called "done".

As completions are scheduling related, the code can be found in
kernel/sched/completion.c.


Usage:
------

There are three main parts to using completions:

 - the initialization of the 'struct completion' synchronization object
 - the waiting part through a call to one of the variants of wait_for_completion(),
 - the signaling side through a call to complete() or complete_all().

There are also some helper functions for checking the state of completions.
Note that while initialization must happen first, the waiting and signaling
parts can happen in any order. I.e. it's entirely normal for a thread
to have marked a completion as 'done' before another thread checks whether
it has to wait for it.

To use completions you need to #include <linux/completion.h> and
create a static or dynamic variable of type 'struct completion',
which has only two fields::

	struct completion {
		unsigned int done;
		wait_queue_head_t wait;
	};

This provides the ->wait waitqueue to place tasks on for waiting (if any), and
the ->done completion flag for indicating whether it's completed or not.

Completions should be named to refer to the event that is being synchronized on.
A good example is::

	wait_for_completion(&early_console_added);

	complete(&early_console_added);

Good, intuitive naming (as always) helps code readability. Naming a completion
'complete' is not helpful unless the purpose is super obvious...


Initializing completions:
-------------------------

Dynamically allocated completion objects should preferably be embedded in data
structures that are assured to be alive for the life-time of the function/driver,
to prevent races with asynchronous complete() calls from occurring.

Particular care should be taken when using the _timeout() or _killable()/_interruptible()
variants of wait_for_completion(), as it must be assured that memory de-allocation
does not happen until all related activities (complete() or reinit_completion())
have taken place, even if these wait functions return prematurely due to a timeout
or a signal triggering.

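As an illustration only (the structure, helper thread and names below are
hypothetical, not an existing driver), such embedding might look like the
following sketch, with the completion living inside an object that outlives
both the waiter and the signaller::

	struct foo_device {
		struct completion	setup_done;	/* embedded in a long-lived object */
		/* ... other driver state ... */
	};

	static int foo_setup_thread(void *data)
	{
		struct foo_device *foo = data;

		/* ... perform setup ... */
		complete(&foo->setup_done);
		return 0;
	}

	static int foo_init(struct foo_device *foo)
	{
		init_completion(&foo->setup_done);
		kthread_run(foo_setup_thread, foo, "foo-setup");

		wait_for_completion(&foo->setup_done);	/* foo outlives both sides */
		return 0;
	}
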
Initializing of dynamically allocated completion objects is done via a call to
init_completion()::

	init_completion(&dynamic_object->done);

In this call we initialize the waitqueue and set ->done to 0, i.e. "not completed"
or "not done".

The re-initialization function, reinit_completion(), simply resets the
->done field to 0 ("not done"), without touching the waitqueue.
Callers of this function must make sure that there are no racy
wait_for_completion() calls going on in parallel.

Calling init_completion() on the same completion object twice is
most likely a bug as it re-initializes the queue to an empty queue and
enqueued tasks could get "lost" - use reinit_completion() in that case,
but be aware of other races.

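A minimal sketch of such reuse, assuming the caller already serializes the
requests so no wait_for_completion() can be in flight at this point (the
foo_hw_submit() helper and the fields are illustrative only)::

	/* Caller serializes requests, so no waiter can race with this reset. */
	reinit_completion(&foo->cmd_done);

	foo_hw_submit(foo);		/* hypothetical: eventually calls complete(&foo->cmd_done) */
	wait_for_completion(&foo->cmd_done);
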
For static declaration and initialization, macros are available.

For static (or global) declarations in file scope you can use
DECLARE_COMPLETION()::

	static DECLARE_COMPLETION(setup_done);
	DECLARE_COMPLETION(setup_done);

Note that in this case the completion is boot time (or module load time)
initialized to 'not done' and doesn't require an init_completion() call.

When a completion is declared as a local variable within a function,
the initialization should always use DECLARE_COMPLETION_ONSTACK()
explicitly, not just to make lockdep happy, but also to make it clear
that limited scope has been considered and is intentional::

	DECLARE_COMPLETION_ONSTACK(setup_done)

Note that when using completion objects as local variables you must be
acutely aware of the short life time of the function stack: the function
must not return to a calling context until all activities (such as waiting
threads) have ceased and the completion object is completely unused.

To emphasise this again: in particular when using some of the waiting API variants
with more complex outcomes, such as the timeout or signalling (_timeout(),
_killable() and _interruptible()) variants, the wait might complete
prematurely while the object might still be in use by another thread - and a return
from the wait_for_completion*() caller function will deallocate the function
stack and cause subtle data corruption if a complete() is done in some
other thread. Simple testing might not trigger these kinds of races.

If unsure, use dynamically allocated completion objects, preferably embedded
in some other long lived object that has a boringly long life time which
exceeds the life time of any helper threads using the completion object,
or has a lock or other synchronization mechanism to make sure complete()
is not called on a freed object.

A naive DECLARE_COMPLETION() on the stack triggers a lockdep warning.
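
For illustration, a safe on-stack pattern might look like the sketch below
(foo_start_transfer() is a hypothetical helper that arranges for exactly one
complete() call); the unconditional, uninterruptible wait is what makes it
safe for the object to live on the stack::

	static int foo_do_transfer(struct foo_device *foo)
	{
		DECLARE_COMPLETION_ONSTACK(xfer_done);

		foo_start_transfer(foo, &xfer_done);	/* will call complete(&xfer_done) */

		/* We cannot return before the signaller is done with the object. */
		wait_for_completion(&xfer_done);
		return 0;
	}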

Waiting for completions:
------------------------

For a thread to wait for some concurrent activity to finish, it
calls wait_for_completion() on the initialized completion structure::

	void wait_for_completion(struct completion *done)

A typical usage scenario is::

	CPU#1					CPU#2

	struct completion setup_done;

	init_completion(&setup_done);
	initialize_work(...,&setup_done,...);

	/* run non-dependent code */		/* do setup */

	wait_for_completion(&setup_done);	complete(&setup_done);

This does not imply any particular order between wait_for_completion() and
the call to complete() - if the call to complete() happened before the call
to wait_for_completion() then the waiting side simply will continue
immediately as all dependencies are satisfied; if not, it will block until
completion is signaled by complete().

Note that wait_for_completion() calls spin_lock_irq()/spin_unlock_irq(),
so it can only be called safely when you know that interrupts are enabled.
Calling it from IRQs-off atomic contexts will result in hard-to-detect
spurious enabling of interrupts.

The default behavior is to wait without a timeout and to mark the task as
uninterruptible. wait_for_completion() and its variants are only safe
in process context (as they can sleep) but not in atomic context,
interrupt context, with IRQs disabled, or with preemption disabled - see also
try_wait_for_completion() below for handling completion in atomic/interrupt
context.

As all variants of wait_for_completion() can (obviously) block for a long
time depending on the nature of the activity they are waiting for, in
most cases you probably don't want to call this with held mutexes.


wait_for_completion*() variants available:
------------------------------------------

The below variants all return status and this status should be checked in
most(/all) cases - in cases where the status is deliberately not checked you
probably want to make a note explaining this (e.g. see
arch/arm/kernel/smp.c:__cpu_up()).

A common problem that occurs is to have unclean assignment of return types,
so take care to assign return-values to variables of the proper type.

Checking for the specific meaning of return values also has been found
to be quite error-prone, e.g. constructs like::

	if (!wait_for_completion_interruptible_timeout(...))

... would execute the same code path for successful completion and for the
interrupted case - which is probably not what you want::

	int wait_for_completion_interruptible(struct completion *done)

This function marks the task TASK_INTERRUPTIBLE while it is waiting.
If a signal was received while waiting it will return -ERESTARTSYS; 0 otherwise::

	unsigned long wait_for_completion_timeout(struct completion *done, unsigned long timeout)

The task is marked as TASK_UNINTERRUPTIBLE and will wait at most 'timeout'
jiffies. If a timeout occurs it returns 0, else the remaining time in
jiffies (but at least 1).

Timeouts are preferably calculated with msecs_to_jiffies() or usecs_to_jiffies(),
to make the code largely HZ-invariant.
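
A minimal sketch of checking the return value, using an illustrative 100ms
deadline and a hypothetical foo->cmd_done completion::

	unsigned long timeout;

	timeout = wait_for_completion_timeout(&foo->cmd_done,
					      msecs_to_jiffies(100));
	if (!timeout)
		return -ETIMEDOUT;	/* no completion within 100ms */

	/* completed; 'timeout' holds the remaining jiffies (at least 1) */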

If the returned timeout value is deliberately ignored a comment should probably explain
why (e.g. see drivers/mfd/wm8350-core.c wm8350_read_auxadc())::

	long wait_for_completion_interruptible_timeout(struct completion *done, unsigned long timeout)

This function passes a timeout in jiffies and marks the task as
TASK_INTERRUPTIBLE. If a signal was received it will return -ERESTARTSYS;
otherwise it returns 0 if the completion timed out, or the remaining time in
jiffies if completion occurred.
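
Because this variant can return a negative error, zero, or a positive
remainder, the three cases are best kept apart explicitly, as in this
sketch (the names are illustrative; note the properly typed 'long')::

	long ret;

	ret = wait_for_completion_interruptible_timeout(&foo->cmd_done,
							msecs_to_jiffies(100));
	if (ret < 0)
		return ret;		/* -ERESTARTSYS: interrupted by a signal */
	if (ret == 0)
		return -ETIMEDOUT;	/* timed out */

	/* ret > 0: completed with 'ret' jiffies to spare */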

Further variants include _killable which uses TASK_KILLABLE as the
designated task state and will return -ERESTARTSYS if it is interrupted,
or 0 if completion was achieved. There is a _timeout variant as well::

	long wait_for_completion_killable(struct completion *done)
	long wait_for_completion_killable_timeout(struct completion *done, unsigned long timeout)

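A sketch of the killable variant, which only a fatal signal can abort
(illustrative names again); it is often a good fit when a plain interruptible
wait would let arbitrary signals abort the operation::

	long ret;

	ret = wait_for_completion_killable(&foo->fw_loaded);
	if (ret)
		return ret;		/* -ERESTARTSYS: a fatal signal arrived first */

	/* 0: the completion was signalled */
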
The _io variants, e.g. wait_for_completion_io(), behave the same as the non-_io
variants, except that waiting time is accounted as 'waiting on IO', which has
an impact on how the task is accounted in scheduling/IO stats::

	void wait_for_completion_io(struct completion *done)
	unsigned long wait_for_completion_io_timeout(struct completion *done, unsigned long timeout)


Signaling completions:
----------------------

A thread that wants to signal that the conditions for continuation have been
achieved calls complete() to signal exactly one of the waiters that it can
continue::

	void complete(struct completion *done)

... or calls complete_all() to signal all current and future waiters::

	void complete_all(struct completion *done)

The signaling will work as expected even if completions are signaled before
a thread starts waiting. This is achieved by the waiter "consuming"
(decrementing) the done field of 'struct completion'. Waiting threads are
woken up in the same order in which they were enqueued (FIFO order).

If complete() is called multiple times then this will allow for that number
of waiters to continue - each call to complete() will simply increment the
done field. Calling complete_all() multiple times is a bug though. Both
complete() and complete_all() can be called in IRQ/atomic context safely.
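
As a sketch of IRQ-context signaling (the handler, device structure and
status check below are illustrative only), the interrupt handler posts the
completion and the process-context waiter picks it up::

	static irqreturn_t foo_irq(int irq, void *data)
	{
		struct foo_device *foo = data;

		if (!foo_irq_is_ours(foo))	/* hypothetical status check */
			return IRQ_NONE;

		complete(&foo->cmd_done);	/* safe here: never sleeps */
		return IRQ_HANDLED;
	}

	/* ... process context elsewhere ... */
	wait_for_completion(&foo->cmd_done);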

There can only be one thread calling complete() or complete_all() on a
particular 'struct completion' at any time - serialized through the wait
queue spinlock. Any such concurrent calls to complete() or complete_all()
are probably a design bug.

Signaling completion from IRQ context is fine as it will appropriately
lock with spin_lock_irqsave()/spin_unlock_irqrestore() and it will never
sleep.


try_wait_for_completion()/completion_done():
--------------------------------------------

The try_wait_for_completion() function will not put the thread on the wait
queue but rather returns false if it would need to enqueue (block) the thread,
else it consumes one posted completion and returns true::

	bool try_wait_for_completion(struct completion *done)

Finally, to check the state of a completion without changing it in any way,
call completion_done(), which returns false if there are no posted
completions that were not yet consumed by waiters (implying that there are
waiters) and true otherwise::

	bool completion_done(struct completion *done)

Both try_wait_for_completion() and completion_done() are safe to be called in
IRQ or atomic context.
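
A sketch of polling from atomic context, for instance from a timer callback
where blocking is not allowed (the timer wiring and helpers are illustrative
only)::

	static void foo_poll(struct timer_list *t)
	{
		struct foo_device *foo = from_timer(foo, t, poll_timer);

		if (try_wait_for_completion(&foo->cmd_done)) {
			foo_finish_cmd(foo);	/* one posted completion consumed */
		} else {
			/* nothing posted yet - try again later instead of blocking */
			mod_timer(&foo->poll_timer, jiffies + msecs_to_jiffies(10));
		}
	}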