x64 vs.

x64 vs. x86 is a mixed bag. On the plus side, x64 has many more registers available, faster math with big numbers, and (obviously) works better with huge apps that need more than 2 GB of memory. On the downside, doubling the size of every pointer bloats RAM requirements and means less data fits in cache.

Today I read an article about a newly proposed ABI that tries to pick the best of both. They call it “x32,” which is a nice mix of names. It uses all of the non-legacy goodness of x64 but keeps 32-bit pointers to save RAM. That’s a fine sacrifice for most everyday apps (it’s going to be a while before “/bin/ls” needs more than 2 GB!). It sounds complicated, but fascinating to me.

This superficially reminds me of the new compressed pointer feature in JDK7. This feature (http://download.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html) assumes that the top half of a 64-bit pointer will almost always be zero, so why bother storing it in the heap? Instead, the virtual machine puts those zeros back when you actually want to dereference the pointer, but they don’t take up cache/stack/heap space while the pointer is inert. And if you need more than 4 GB but less than 32 GB of heap, it can also drop the bottom 3 bits, which are always zero thanks to the 8-byte memory alignment requirement. And since most code is cache-limited anyway, the cost of the extra math is smaller than the win from fewer pipeline stalls. Now THAT is cool engineering.

https://lwn.net/Articles/456731/

One reply on “x64 vs.”

  1. I just learned that compressed pointers were actually introduced in 1.6.0_14 and were enabled by default in 1.6.0_23 (and fixed in 1.6.0_25!). It’s zero-based compressed pointers that were introduced in JDK7. My bad.
