
I used AtomicLong.incrementAndGet to compare performance between JDK 7 and JDK 8. The test data shows that JDK 7 performs better than JDK 8. Why is JDK 7 faster here? What causes the poorer performance in JDK 8?

Test Report:

<table border="1">
      <thead>
        <tr>
          <th>Number of threads</th>
          <th>JDK 7 (milliseconds)</th>
          <th>JDK 8 (milliseconds)</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>1</td>
          <td>441351</td>
          <td>444246</td>
        </tr>
        <tr>
          <td>4</td>
          <td>245872</td>
          <td>248655</td>
        </tr>
        <tr>
          <td>8</td>
          <td>240513</td>
          <td>245395</td>
        </tr>
        <tr>
          <td>16</td>
          <td>275445</td>
          <td>279481</td>
        </tr>
      </tbody>
    </table>

System Environment:

CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (two processors)

Memory: 8.00 GB

System: Windows Server 2008 R2 Standard

JDK Version Information:

JDK 7: "1.7.0_75"

JDK 8: "1.8.0_45"

JVM parameters:

JDK 7:

set JVM_OPT=-Xms1024m -Xmx1024m -Xmn256m -XX:SurvivorRatio=8 -XX:PermSize=128m -XX:MaxPermSize=256m -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
set JVM_OPT=%JVM_OPT% -XX:+DisableExplicitGC
set JVM_OPT=%JVM_OPT% -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0
set JVM_OPT=%JVM_OPT% -Dthread_count=16
set JVM_OPT=%JVM_OPT% -Dsize=5
set JVM_OPT=%JVM_OPT% -Dmax=300000000

JDK 8:

set JVM_OPT=-Xms1024m -Xmx1024m -Xmn256m -XX:SurvivorRatio=8 -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
set JVM_OPT=%JVM_OPT% -XX:+DisableExplicitGC
set JVM_OPT=%JVM_OPT% -XX:+UseFastAccessorMethods -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled
set JVM_OPT=%JVM_OPT% -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=62
set JVM_OPT=%JVM_OPT% -Dthread_count=16
set JVM_OPT=%JVM_OPT% -Dsize=5
set JVM_OPT=%JVM_OPT% -Dmax=300000000

Test code:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class Main {
  private static final int K_SIZE = 1024;
  private static final long MAX = 300_000_000L;

  public static void main(String[] args) {
    int threadCount = Integer.getInteger("thread_count", 4);
    final int size = Integer.getInteger("size", 5);
    final long max = Long.getLong("max", MAX);

    final AtomicLong count = new AtomicLong();

    final CountDownLatch beginLatch = new CountDownLatch(1);
    final CountDownLatch endLatch = new CountDownLatch(threadCount);

    ExecutorService executor = Executors.newFixedThreadPool(threadCount);

    for (int i = 0; i < threadCount; i++) {
      executor.execute(new Runnable() {
        // Note: each task has its own map instance, so map traffic is per-thread.
        ConcurrentMap<Long, Long> map = new ConcurrentHashMap<Long, Long>();

        @Override
        public void run() {
          try {
            beginLatch.await();

            byte[] data = null;
            while (!Thread.currentThread().isInterrupted()) {
              data = new byte[size * K_SIZE];
              long current = count.incrementAndGet();
              map.put(current, current);
              data[0] = (byte) current;
              if (current >= max) {
                endLatch.countDown();
                break;
              } else if ((current % 1000) == 0) {
                map.clear();
              }
            }
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        }
      });
    }

    long startTime = System.currentTimeMillis();
    beginLatch.countDown();
    try {
      endLatch.await();
      long endTime = System.currentTimeMillis();
      System.out.println(endTime - startTime);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }

    executor.shutdown();
  }
}
Comments:

"Why does it matter? What does it prove? Is this truly a critical path in your application? Because I find that difficult to believe." – Elliott Frisch

"I suggest you remove all the other redundant and really slow operations before trying to look at the behaviour of a small operation, otherwise it is highly likely that the big operations you are performing are the real cause of the problem." – Peter Lawrey

1 Answer


The difference in performance you are demonstrating is less than 1%. That is small enough to be irrelevant for all but a very small subset of applications. Even with high-resolution profiling tools, it is often difficult to establish a definitive cause for an insignificant performance change such as this. With so many new features between 1.7 and 1.8, it could be caused by any number of things.

To speculate for a moment: there are hundreds of bug fixes between 1.7 and 1.8, and additional error checking to cope with edge conditions could easily cause a minor performance degradation.
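As the comments on the question point out, the allocation and map traffic in the test dwarfs the cost of the increment itself. A minimal sketch that isolates AtomicLong.incrementAndGet might look like the following (the class name IncrementOnly is hypothetical; the thread_count property name is borrowed from the original test, and an anonymous Runnable is used so it compiles on JDK 7 as well):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;

public class IncrementOnly {

    // Runs `threads` workers that do nothing but incrementAndGet until the
    // shared counter reaches `max`; prints elapsed time and returns the
    // final counter value.
    public static long run(int threads, final long max) throws InterruptedException {
        final AtomicLong count = new AtomicLong();
        final CountDownLatch start = new CountDownLatch(1);
        final CountDownLatch done = new CountDownLatch(threads);

        for (int i = 0; i < threads; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        start.await();
                        // The loop body is only the operation under test:
                        // no allocation, no map access.
                        while (count.incrementAndGet() < max) {
                            // busy increment
                        }
                    } catch (InterruptedException ignored) {
                        // fall through to countDown
                    } finally {
                        done.countDown();
                    }
                }
            }).start();
        }

        long t0 = System.nanoTime();
        start.countDown();
        done.await();
        System.out.println((System.nanoTime() - t0) / 1_000_000 + " ms");
        return count.get();
    }

    public static void main(String[] args) throws InterruptedException {
        run(Integer.getInteger("thread_count", 4), 300_000_000L);
    }
}
```

Even then, a single wall-clock run is noisy: warm-up, JIT compilation, and GC all shift the numbers, which is why a harness such as JMH is usually preferred for measuring operations at this scale.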