This is an argument for transactional memory vs. explicit locking that I have not seen before. Ingo compares it to garbage collection: it’s slower than manual control, but so much easier on the programmer’s brain than the alternative that it’s sure to win in the long run.
Originally shared by Ingo Molnar
Technology: Transactional Memory
TM is clearly gaining steam, in the sense that it is getting mainstream hardware implementations.
In a nutshell, TM is a glorified retry loop: it treats reads and writes to a (smallish) region of memory as a transaction, giving lockless access and updates to complex data structures (and good scalability) in the best case, and offering rollback and exception handling (fallback to locking) in the worst case. Key bits of this are implemented in hardware, sometimes pushed down to the memory-controller level.
One aspect I have not seen stressed is that there’s a psychological advantage to TM: traditional locking has to be correct, while with TM most programming mistakes in concurrency management will slow the code down – but not break it.
In a more abstract sense this is the main virtue of Java as well (trading performance for pointer safety, amongst other things) – and Java and its offspring are an uncontested success story.
This was the main virtue of x86 and CISC as well. (I should add MESI and HTML to the list too.)
What marks all of these technologies is that "technically superior" alternatives to them were available in their early days, yet those alternatives eventually died.
Often the reason behind this seemingly imperfect and “unjust” selection process is that programming is to a large degree psychology:
Good programmers know and are able to use technology very well.
The very best, “out of this world” programmers I know are those who (beyond having chosen the right parents to get the right brain structure) also know themselves and their own limitations very well and are able to control that aspect.
Thus good hw and sw design has to consider the human condition and psychology.
“Easy to use” alone is not enough – it is often even counter-productive in programming, because the world around us is complex; hiding natural complexity while trying to map it just kicks the can down the road, at a cost.
“Being forgiving to human mistakes” is the key trait IMO.
This is the driving principle behind lockdep (the kernel’s locking correctness validator) as well, and lockdep is by all means a success story in the Linux kernel universe.
(Btw., and somewhat paradoxically, that’s the reason why I still don’t see much advantage in using transactional memory in the Linux kernel itself, if both TM and basic locking instructions are implemented similarly fast in hw: the combination of locking + RCU with strong validators is working well right now.)
It’s still a bit early to tell, but TM might have gotten the human psychology aspect right, so I’d not be surprised if it gained traction in a similar fashion to how Java gained traction.
Inspired by Paul E. McKenney’s recent post about TM: