mirror of https://github.com/torvalds/linux.git
As with the previous commit, which described why we need to add a barrier to arch_spin_is_locked(), we have a similar problem with spin_unlock_wait(). We need a barrier on entry to ensure that any spinlock we have previously taken is visibly locked prior to the load of lock->slock.

It's also not clear whether spin_unlock_wait() is intended to have ACQUIRE semantics. For now, be conservative and add a barrier on exit to give it ACQUIRE semantics.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| File |
|---|
| Makefile |
| alloc.c |
| checksum_32.S |
| checksum_64.S |
| checksum_wrappers_64.c |
| code-patching.c |
| copy_32.S |
| copypage_64.S |
| copypage_power7.S |
| copyuser_64.S |
| copyuser_power7.S |
| crtsavres.S |
| devres.c |
| div64.S |
| feature-fixups-test.S |
| feature-fixups.c |
| hweight_64.S |
| ldstfp.S |
| locks.c |
| mem_64.S |
| memcpy_64.S |
| memcpy_power7.S |
| rheap.c |
| sstep.c |
| string.S |
| string_64.S |
| usercopy_64.c |
| vmx-helper.c |
| xor_vmx.c |