.. _array_rcu_doc:

Using RCU to Protect Read-Mostly Arrays
=======================================

Although RCU is more commonly used to protect linked lists, it can
also be used to protect arrays. Three situations are as follows:

1. :ref:`Hash Tables <hash_tables>`

2. :ref:`Static Arrays <static_arrays>`

3. :ref:`Resizable Arrays <resizable_arrays>`

Each of these three situations involves an RCU-protected pointer to an
array that is separately indexed. It might be tempting to consider use
of RCU to instead protect the index into an array; however, this use
case is **not** supported. The problem with RCU-protected indexes into
arrays is that compilers can play way too many optimization games with
integers, which means that the rules governing handling of these indexes
are far more trouble than they are worth. If RCU-protected indexes into
arrays prove to be particularly valuable (which they have not thus far),
explicit cooperation from the compiler will be required to permit them
to be safely used.
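
For illustration only, here is a minimal sketch of the supported
pattern; the struct arr type, cur_arr pointer, and arr_read() function
are made-up names rather than code from any kernel subsystem. The
RCU-protected object is the pointer to the array, which readers pick
up with rcu_dereference(); the index is an ordinary local integer.
An updater would publish a replacement array with rcu_assign_pointer(),
much as the grow_ary() example later in this document does::

	struct arr {
		int size;
		int data[];		/* flexible array of values */
	};

	static struct arr __rcu *cur_arr;

	/* Reader: protect the pointer to the array, never the index. */
	static int arr_read(int idx)
	{
		struct arr *a;
		int ret = -1;		/* default for missing entries */

		rcu_read_lock();
		a = rcu_dereference(cur_arr);
		if (a && idx >= 0 && idx < a->size)
			ret = a->data[idx];
		rcu_read_unlock();
		return ret;
	}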

That aside, each of the three RCU-protected pointer situations is
described in the following sections.

.. _hash_tables:

Situation 1: Hash Tables
------------------------

Hash tables are often implemented as an array, where each array entry
has a linked-list hash chain. Each hash chain can be protected by RCU
as described in the listRCU.rst document. This approach also applies
to other array-of-list situations, such as radix trees.

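As a rough sketch only, a reader might walk one hash chain as shown
below; the struct ht_elem type, ht_buckets[] array, and ht_lookup()
function are hypothetical names, not taken from any existing kernel
hash table. Updaters would add entries with hlist_add_head_rcu() under
a lock and defer freeing with call_rcu() or kfree_rcu(), as described
in the list-RCU documentation::

	#define HT_BITS 6

	struct ht_elem {
		struct hlist_node node;
		int key;
		int value;
	};

	/* Statically zeroed hlist_heads are valid empty buckets. */
	static struct hlist_head ht_buckets[1 << HT_BITS];

	/* Reader: return the value for key, or -1 if it is not present. */
	static int ht_lookup(int key)
	{
		struct ht_elem *p;
		int ret = -1;

		rcu_read_lock();
		hlist_for_each_entry_rcu(p, &ht_buckets[hash_32(key, HT_BITS)],
					 node)
			if (p->key == key) {
				ret = p->value;
				break;
			}
		rcu_read_unlock();
		return ret;
	}
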
.. _static_arrays:

Situation 2: Static Arrays
--------------------------

Static arrays, where the data (rather than a pointer to the data) is
located in each array element, and where the array is never resized,
have not been used with RCU. Rik van Riel recommends using seqlock in
this situation, which would also have minimal read-side overhead as long
as updates are rare.
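
As a rough sketch of the seqlock approach (the struct se_elem type,
se_array[], and the functions below are hypothetical, not actual kernel
code), readers simply retry if they raced with an update, and writers
are serialized by the seqlock's internal spinlock::

	#define SE_NR_ENTRIES 16

	struct se_elem {
		int a;
		int b;
	};

	static struct se_elem se_array[SE_NR_ENTRIES];
	static DEFINE_SEQLOCK(se_lock);

	/* Reader: retry the copy if a writer intervened. */
	static struct se_elem se_read(int idx)
	{
		struct se_elem val;
		unsigned int seq;

		do {
			seq = read_seqbegin(&se_lock);
			val = se_array[idx];
		} while (read_seqretry(&se_lock, seq));
		return val;
	}

	/* Writer: updates are serialized by the seqlock's spinlock. */
	static void se_write(int idx, const struct se_elem *new)
	{
		write_seqlock(&se_lock);
		se_array[idx] = *new;
		write_sequnlock(&se_lock);
	}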

Quick Quiz:
	Why is it so important that updates be rare when using seqlock?

:ref:`Answer to Quick Quiz <answer_quick_quiz_seqlock>`

.. _resizable_arrays:

Situation 3: Resizable Arrays
-----------------------------

Use of RCU for resizable arrays is demonstrated by the grow_ary()
function formerly used by the System V IPC code. The array is used
to map from semaphore, message-queue, and shared-memory IDs to the data
structure that represents the corresponding IPC construct. The grow_ary()
function does not acquire any locks; instead its caller must hold the
ids->sem semaphore.

The grow_ary() function, shown below, does some limit checks, allocates
a new ipc_id_ary, copies the contents of the old array into the beginning
of the new one, initializes the remainder of the new array, updates the
ids->entries pointer to point to the new array, and invokes ipc_rcu_putref()
to free up the old array. Note that rcu_assign_pointer() is used to update
the ids->entries pointer, which includes any memory barriers required on
whatever architecture you are running on::

	static int grow_ary(struct ipc_ids *ids, int newsize)
	{
		struct ipc_id_ary *new;
		struct ipc_id_ary *old;
		int i;
		int size = ids->entries->size;

		if (newsize > IPCMNI)
			newsize = IPCMNI;
		if (newsize <= size)
			return newsize;

		new = ipc_rcu_alloc(sizeof(struct kern_ipc_perm *) * newsize +
				    sizeof(struct ipc_id_ary));
		if (new == NULL)
			return size;
		new->size = newsize;
		memcpy(new->p, ids->entries->p,
		       sizeof(struct kern_ipc_perm *) * size +
		       sizeof(struct ipc_id_ary));
		for (i = size; i < newsize; i++)
			new->p[i] = NULL;
		old = ids->entries;

		/*
		 * Use rcu_assign_pointer() to make sure the memcpyed
		 * contents of the new array are visible before the new
		 * array becomes visible.
		 */
		rcu_assign_pointer(ids->entries, new);

		ipc_rcu_putref(old);
		return newsize;
	}

The ipc_rcu_putref() function decrements the array's reference count
and then, if the reference count has dropped to zero, uses call_rcu()
to free the array after a grace period has elapsed.

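The underlying mechanism can be sketched as follows; the names below
are illustrative only, and the real IPC helpers additionally wrap this
pattern in a reference count (current kernels could also use kfree_rcu()
rather than an open-coded callback)::

	struct old_array {
		struct rcu_head rcu;
		int size;
		struct kern_ipc_perm *p[];
	};

	/* RCU callback: runs only after all pre-existing readers finish. */
	static void old_array_free_rcu(struct rcu_head *head)
	{
		struct old_array *a = container_of(head, struct old_array, rcu);

		kfree(a);
	}

	/* Updater: defer freeing of the old array until a grace period. */
	static void old_array_put(struct old_array *old)
	{
		call_rcu(&old->rcu, old_array_free_rcu);
	}
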
The array is traversed by the ipc_lock() function. This function
indexes into the array under the protection of rcu_read_lock(),
using rcu_dereference() to pick up the pointer to the array so
that it may later safely be dereferenced (memory barriers are
required on the Alpha CPU). Since the size of the array is stored
with the array itself, there can be no array-size mismatches, so
a simple check suffices. The pointer to the structure corresponding
to the desired IPC object is placed in "out", with NULL indicating
a non-existent entry. After acquiring "out->lock", the "out->deleted"
flag indicates whether the IPC object is in the process of being
deleted, and, if not, the pointer is returned. Note that, on success,
the function returns without exiting the RCU read-side critical section
and with "out->lock" still held; the caller is responsible for releasing
both once it is done with the object::

	struct kern_ipc_perm *ipc_lock(struct ipc_ids *ids, int id)
	{
		struct kern_ipc_perm *out;
		int lid = id % SEQ_MULTIPLIER;
		struct ipc_id_ary *entries;

		rcu_read_lock();
		entries = rcu_dereference(ids->entries);
		if (lid >= entries->size) {
			rcu_read_unlock();
			return NULL;
		}
		out = entries->p[lid];
		if (out == NULL) {
			rcu_read_unlock();
			return NULL;
		}
		spin_lock(&out->lock);

		/*
		 * ipc_rmid() may have already freed the ID while ipc_lock()
		 * was spinning: here verify that the structure is still valid.
		 */
		if (out->deleted) {
			spin_unlock(&out->lock);
			rcu_read_unlock();
			return NULL;
		}
		return out;
	}

.. _answer_quick_quiz_seqlock:

Answer to Quick Quiz:
	Why is it so important that updates be rare when using seqlock?

	The reason that it is important that updates be rare when
	using seqlock is that frequent updates can livelock readers.
	One way to avoid this problem is to assign a seqlock for
	each array entry rather than to the entire array.
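
	For example, reusing struct se_elem from the earlier seqlock
	sketch, a per-entry seqlock layout might look like the following
	(again with hypothetical names); an update to one entry then
	forces retries only on readers of that same entry::

		struct pe_entry {
			seqlock_t lock;		/* one seqlock per array entry */
			struct se_elem data;
		};

		static struct pe_entry pe_entries[SE_NR_ENTRIES];

		/* Run once at init time: seqlock_init() each per-entry lock. */
		static void pe_init(void)
		{
			int i;

			for (i = 0; i < SE_NR_ENTRIES; i++)
				seqlock_init(&pe_entries[i].lock);
		}

		/* Reader: retries only if this particular entry was updated. */
		static struct se_elem pe_read(int idx)
		{
			struct pe_entry *e = &pe_entries[idx];
			struct se_elem val;
			unsigned int seq;

			do {
				seq = read_seqbegin(&e->lock);
				val = e->data;
			} while (read_seqretry(&e->lock, seq));
			return val;
		}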