16 September 2011

Objects in mirror are closer than they appear

I first heard this phrase in Thomas Jahn’s "Knockin’ on Heaven’s Door". The Russian dub renders it roughly as "reflected objects are closer than they appear", which doesn’t really distort the meaning, does it? Well, distortion is exactly what I ran into when I benchmarked hash code calculation.


IntelliJ IDEA can generate a hashCode implementation that looks like this:

@Override
public final int hashCode() {
    int result = code.hashCode();
    result = 31 * result + volume;
    final long temp = Double.doubleToLongBits(price);
    result = 31 * result + (int) (temp ^ (temp >>> 32));
    return result;
}

Recently I came across some interesting thoughts about the "price of brackets" and decided to check how much the order of the narrowing cast really matters. My benchmark had no warm-up cycles, so it was no wonder that I got approximately the same numbers for both cases: a) when the XORed long is cast down to int, and b) when two already-cast ints are XORed, like this:

@Override
public final int hashCode() {
    int result = name.hashCode();
    result = (result << 5) - result + volume;
    long temp = Double.doubleToLongBits(price);
    result = (result << 5) - result + ((int) temp ^ (int) (temp >>> 32));
    return result;
}

Subjectively, I would even say that variant a) performed slightly better. It was within the margin of error, but more often than not it was quicker.
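For context, a naive timing loop of the sort I am talking about can be sketched as follows. The Quote class, its field values and the iteration count are invented for illustration; a serious measurement would add warm-up iterations and repeated runs.

// A naive timing sketch, only to illustrate the kind of comparison discussed
// above. The Quote class and the constants are made up; without warm-up the
// JIT may not even have compiled hashCode() by the time the loop is measured.
public class HashCodeBench {

    static final class Quote {
        final String code;
        final int volume;
        final double price;

        Quote(String code, int volume, double price) {
            this.code = code;
            this.volume = volume;
            this.price = price;
        }

        @Override
        public int hashCode() {
            // variant a): XOR the long halves first, then narrow the result to int
            int result = code.hashCode();
            result = 31 * result + volume;
            final long temp = Double.doubleToLongBits(price);
            result = 31 * result + (int) (temp ^ (temp >>> 32));
            return result;
        }
    }

    public static void main(String[] args) {
        final Quote quote = new Quote("GAZP", 100, 187.44);
        int blackhole = 0;                      // keep the result alive
        final long start = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            blackhole += quote.hashCode();      // the method under test
        }
        final long elapsed = System.nanoTime() - start;
        System.out.println(elapsed + " ns (ignore: " + blackhole + ")");
    }
}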

But that was not the interesting part. I do not know why, but I decided to compare this result with a plain getter returning a precomputed hash code. If the hash of an object is requested more often than the object changes, or the object is immutable, it makes sense to pre-calculate the hash once and return the cached value from a trivial getter, roughly as in the sketch below. However… the result was even worse.
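A minimal sketch of what I mean, with made-up class and field names: the hash is computed once in the constructor, so hashCode() degenerates into a plain accessor.

// A "precomputed hash": calculated once, up front, for an immutable object.
// equals() and getters are omitted for brevity.
public final class CachedQuote {

    private final String code;
    private final int volume;
    private final double price;
    private final int hash;                     // computed once in the constructor

    public CachedQuote(String code, int volume, double price) {
        this.code = code;
        this.volume = volume;
        this.price = price;

        int result = code.hashCode();
        result = 31 * result + volume;
        final long temp = Double.doubleToLongBits(price);
        result = 31 * result + (int) (temp ^ (temp >>> 32));
        this.hash = result;
    }

    @Override
    public int hashCode() {
        return hash;                            // just a trivial getter
    }
}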

That looked really silly, so I started digging. The compilation traces showed that the getter was simply ignored by the JIT. That explains why it was so slow, but it certainly doesn’t explain what was going on there! In desperation I even created a small project to make switching between environments easier, and got a chance to try it on WindowZ. (Once again I felt the pain of C developers while trying in vain to plug in the disassembler: I simply failed to build it for x64 Windows. But that is another story, and it has nothing to do with this one.)

Now I have to admit that originally I was using the latest JDK 1.6, update 26, under Mac OS. I had no chance to try another JDK there, but on another box I ran JDK 1.7, and that helped me preserve the remains of my mind: the result was as expected, and the precomputed hash was almost twice as fast.

The devil is in the details: apart from a few minor things, the default settings of those two JDKs differ in one respect that explains the whole mystery:

JDK 1.6: UseFastAccessorMethods = true
JDK 1.7: UseFastAccessorMethods = false

Apparently, with fast accessor methods enabled, HotSpot applies a special optimization to the hash code getter that prevents the JIT from optimizing it any further. And indeed, with HotSpot in compiled mode (i.e. -Xcomp) the precomputed hash code was quicker.
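For what it’s worth, one way to check which default a particular JVM ships with, and to level the playing field between the two JDKs, is to print the final flag values (available on recent 1.6 updates and on 1.7) and to flip the flag explicitly. HashCodeBench here is just the hypothetical benchmark class sketched above:

java -XX:+PrintFlagsFinal -version | grep UseFastAccessorMethods
java -XX:-UseFastAccessorMethods HashCodeBench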
