===========================================================================
Proper Locking Under a Preemptible Kernel: Keeping Kernel Code Preempt-Safe
===========================================================================

:Author: Robert Love <rml@tech9.net>
:Last Updated: 28 Aug 2002


Introduction
============


A preemptible kernel creates new locking issues.  The issues are the same as
those under SMP: concurrency and reentrancy.  Thankfully, the Linux preemptible
kernel model leverages existing SMP locking mechanisms.  Thus, the kernel
requires explicit additional locking for very few additional situations.

This document is for all kernel hackers.  Developing code in the kernel
requires protecting against these situations.


RULE #1: Per-CPU data structures need explicit protection
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


Two similar problems arise.  An example code snippet::

    struct this_needs_locking tux[NR_CPUS];
    tux[smp_processor_id()] = some_value;
    /* task is preempted here... */
    something = tux[smp_processor_id()];

First, since the data is per-CPU, it may not be protected by explicit SMP
locking, yet it still requires protection once the kernel is preemptible.
Second, when a preempted task is finally rescheduled, the previous value of
smp_processor_id() may not equal the current one.  You must protect these
situations by disabling preemption around them.

You can also use get_cpu() and put_cpu(), which disable and re-enable
preemption, respectively.
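
As an illustrative sketch only (reusing the tux[] array from the snippet
above; this is not code from the kernel itself), the same access can be
written with get_cpu()/put_cpu() so that preemption stays disabled for the
whole sequence::

    int cpu;

    cpu = get_cpu();        /* disables preemption and returns this CPU's id */
    tux[cpu] = some_value;
    something = tux[cpu];   /* still on the same CPU; no preemption in between */
    put_cpu();              /* re-enables preemption */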
39 | ||
40 | ||
41 | RULE #2: CPU state must be protected. | |
9cc07df4 | 42 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
1da177e4 LT |
43 | |
44 | ||


Under preemption, the state of the CPU must be protected.  This is
arch-dependent, but includes CPU structures and state not preserved over a
context switch.  For example, on x86, entering and exiting FPU mode is now a
critical section that must occur while preemption is disabled.  Think what
would happen if the kernel is executing a floating-point instruction and is
then preempted.  Remember, the kernel does not save FPU state except for user
tasks.  Therefore, upon preemption, the FPU registers will be sold to the
lowest bidder.  Thus, preemption must be disabled around such regions.

Note, some FPU functions are already explicitly preempt safe.  For example,
kernel_fpu_begin() and kernel_fpu_end() will disable and enable preemption.
However, fpu__restore() must be called with preemption disabled.
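
As a minimal sketch (not from this document; the exact header providing these
helpers varies by architecture and kernel version), kernel code that touches
FPU/SIMD registers brackets the region with these helpers, which disable and
re-enable preemption internally::

    kernel_fpu_begin();     /* disables preemption, prepares the FPU for kernel use */
    /* ... use FPU/SSE instructions here; do not sleep in this region ... */
    kernel_fpu_end();       /* re-enables preemption */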


RULE #3: Lock acquire and release must be performed by same task
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


A lock acquired in one task must be released by the same task.  This
means you can't do oddball things like acquire a lock and go off to
play while another task releases it.  If you want to do something
like this, acquire and release the lock in the same code path and
have the caller wait on an event signalled by the other task.
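
For illustration (a minimal sketch, not from this document; it assumes a
struct completion from <linux/completion.h> is used as the event, and
my_mutex, work_done and do_work() are hypothetical names), the lock is
acquired and released by the worker task while the caller only waits on the
event::

    static DEFINE_MUTEX(my_mutex);
    static DECLARE_COMPLETION(work_done);

    /* worker task: acquires and releases the lock itself */
    static void worker(void)
    {
            mutex_lock(&my_mutex);
            do_work();
            mutex_unlock(&my_mutex);
            complete(&work_done);           /* signal the event */
    }

    /* caller: never touches the lock, just waits on the event */
    static void caller(void)
    {
            wait_for_completion(&work_done);
    }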
68 | ||
69 | ||
9cc07df4 MCC |
70 | Solution |
71 | ======== | |
1da177e4 LT |
72 | |
73 | ||
74 | Data protection under preemption is achieved by disabling preemption for the | |
75 | duration of the critical region. | |
76 | ||

::

  preempt_enable()              decrement the preempt counter
  preempt_disable()             increment the preempt counter
  preempt_enable_no_resched()   decrement, but do not immediately preempt
  preempt_check_resched()       if needed, reschedule
  preempt_count()               return the preempt counter

The functions are nestable.  In other words, you can call preempt_disable()
n times in a code path, and preemption will not be reenabled until the n-th
call to preempt_enable().  The preempt statements define away to nothing if
kernel preemption is not configured.
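
To make the nesting concrete (a minimal sketch, not from this document), every
disable must be paired with an enable, and preemption becomes possible again
only after the outermost preempt_enable()::

    preempt_disable();      /* count 0 -> 1, preemption disabled */
    preempt_disable();      /* count 1 -> 2, still disabled */
    /* ... critical work on per-CPU data ... */
    preempt_enable();       /* count 2 -> 1, still disabled */
    preempt_enable();       /* count 1 -> 0, a pending reschedule may run now */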
89 | ||
90 | Note that you do not need to explicitly prevent preemption if you are holding | |
91 | any locks or interrupts are disabled, since preemption is implicitly disabled | |
92 | in those cases. | |
93 | ||
94 | But keep in mind that 'irqs disabled' is a fundamentally unsafe way of | |
95 | disabling preemption - any spin_unlock() decreasing the preemption count | |
96 | to 0 might trigger a reschedule. A simple printk() might trigger a reschedule. | |
97 | So use this implicit preemption-disabling property only if you know that the | |
98 | affected codepath does not do any of this. Best policy is to use this only for | |
99 | small, atomic code that you wrote and which calls no complex functions. | |
100 | ||

Example::

    cpucache_t *cc; /* this is per-CPU */
    preempt_disable();
    cc = cc_data(searchp);
    if (cc && cc->avail) {
            __free_block(searchp, cc_entry(cc), cc->avail);
            cc->avail = 0;
    }
    preempt_enable();
    return 0;

Notice how the preemption statements must encompass every reference of the
critical variables.  Another example::

    int buf[NR_CPUS];
    set_cpu_val(buf);
    if (buf[smp_processor_id()] == -1) printk(KERN_INFO "wee!\n");
    spin_lock(&buf_lock);
    /* ... */

This code is not preempt-safe, but see how easily we can fix it by simply
moving the spin_lock up two lines.
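
A sketch of the fixed version the text describes (spin_lock() moved above the
per-CPU accesses, so the lock's implicit preemption disabling covers them;
buf_lock and set_cpu_val() are the hypothetical names from the example above)::

    int buf[NR_CPUS];
    spin_lock(&buf_lock);
    set_cpu_val(buf);
    if (buf[smp_processor_id()] == -1) printk(KERN_INFO "wee!\n");
    /* ... */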
124 | ||
125 | ||
9cc07df4 MCC |
126 | Preventing preemption using interrupt disabling |
127 | =============================================== | |
1da177e4 LT |
128 | |
129 | ||
130 | It is possible to prevent a preemption event using local_irq_disable and | |
131 | local_irq_save. Note, when doing so, you must be very careful to not cause | |
132 | an event that would set need_resched and result in a preemption check. When | |
133 | in doubt, rely on locking or explicit preemption disabling. | |
134 | ||
135 | Note in 2.5 interrupt disabling is now only per-CPU (e.g. local). | |
136 | ||
137 | An additional concern is proper usage of local_irq_disable and local_irq_save. | |
138 | These may be used to protect from preemption, however, on exit, if preemption | |
139 | may be enabled, a test to see if preemption is required should be done. If | |
140 | these are called from the spin_lock and read/write lock macros, the right thing | |
141 | is done. They may also be called within a spin-lock protected region, however, | |
142 | if they are ever called outside of this context, a test for preemption should | |
143 | be made. Do note that calls from interrupt context or bottom half/ tasklets | |
144 | are also protected by preemption locks and so may use the versions which do | |
145 | not check preemption. |
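
As an illustrative sketch only (not from this document; it assumes the code
runs outside spin-lock and interrupt context, where the exit check is needed,
and it uses the preempt_check_resched() helper listed above), the pattern
looks like::

    unsigned long flags;

    local_irq_save(flags);      /* interrupts off, so no preemption check can fire */
    /* ... access the protected per-CPU state ... */
    local_irq_restore(flags);
    preempt_check_resched();    /* run any reschedule that became pending */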