3 votes
2013-11-26T10:19:30.011+0800: [GC [ParNew: 2432484K->19997K(2696640K), 0.0378270 secs] 5560240K->3155822K(7691712K), 0.0398670 secs] [Times: user=0.18 sys=0.00, real=0.04 secs] 
2013-11-26T10:19:36.277+0800: [GC [ParNew: 2417093K->15777K(2696640K), 0.0372550 secs] 5552917K->3151795K(7691712K), 0.0388490 secs] [Times: user=0.28 sys=0.00, real=0.04 secs] 
**2013-11-26T10:19:36.325+0800: [GC [ParNew: 20441K->16452K(2696640K), 0.0186420 secs] 3156459K->3153092K(7691712K), 0.0200280 secs] [Times: user=0.17 sys=0.00, real=0.02 secs]** 
2013-11-26T10:19:41.464+0800: [GC [ParNew: 2413508K->22811K(2696640K), 0.0426440 secs] 5550148K->3160128K(7691712K), 0.0444710 secs] [Times: user=0.25 sys=0.00, real=0.04 secs] 

As the log shows, a minor GC normally happens every 5 or 6 seconds.

After the collection at 2013-11-26T10:19:36.277, there is another minor GC at 2013-11-26T10:19:36.325 with only 20441K of the young generation used!

Why did that minor GC happen (the bold line above)? Does anyone know?

More details:

This phenomenon happens no more than 10 times per 24 hours.

Here are some more details about our server:

  • CPU count: 12
  • CPU speed: 2400 MHz
  • Total memory: 16322984 KB

The JVM args (HotSpot 1.6.0_26) are:

-Xms7804M -Xmx7804M -Xmn2926M -XX:PermSize=256M -XX:MaxPermSize=256M -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=70 -XX:MaxTenuringThreshold=10 -server -Xss256k -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseConcMarkSweepGC -XX:+DisableExplicitGC -XX:SoftRefLRUPolicyMSPerMB=0 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:ParallelGCThreads=12 -XX:+CMSScavengeBeforeRemark -XX:+CMSClassUnloadingEnabled

@Alexey Ragozin more logs:

2013-11-27T23:34:47.352+0800: [GC [ParNew: 2496458K->81521K(2696640K), 0.0381510 secs]     5104803K->2690529K(7691712K), 0.0406260 secs] [Times: user=0.28 sys=0.00, real=0.04 secs] 
2013-11-27T23:34:51.745+0800: [GC [ParNew: 2478577K->57599K(2696640K), 0.0535680 secs] 5087585K->2716762K(7691712K), 0.0554400 secs] [Times: user=0.28 sys=0.01, real=0.06 secs] 
2013-11-27T23:34:55.728+0800: [GC [ParNew: 2454747K->19841K(2696640K), 0.0300150 secs] 5113910K->2679701K(7691712K), 0.0320030 secs] [Times: user=0.18 sys=0.00, real=0.03 secs] 
**2013-11-27T23:34:55.769+0800: [GC [ParNew: 21438K->16389K(2696640K), 0.0171200 secs] 2681299K->2676788K(7691712K), 0.0187850 secs] [Times: user=0.13 sys=0.00, real=0.02 secs]** 
2013-11-27T23:34:59.888+0800: [GC [ParNew: 2413445K->16017K(2696640K), 0.0251400 secs] 5073844K->2676744K(7691712K), 0.0271570 secs] [Times: user=0.15 sys=0.00, real=0.02 secs] 
2013-11-27T23:35:04.374+0800: [GC [ParNew: 2413073K->16059K(2696640K), 0.0240360 secs] 5073800K->2677460K(7691712K), 0.0259960 secs] [Times: user=0.15 sys=0.00, real=0.03 secs] 
... ...
2013-11-28T02:56:57.719+0800: [GC [ParNew: 2449927K->45500K(2696640K), 0.0360740 secs] 6195838K->3791742K(7691712K), 0.0379370 secs] [Times: user=0.23 sys=0.00, real=0.03 secs] 
2013-11-28T02:57:37.987+0800: [GC [ParNew: 2442556K->54097K(2696640K), 0.0383490 secs] 6188798K->3800678K(7691712K), 0.0402170 secs] [Times: user=0.23 sys=0.00, real=0.04 secs] 
2013-11-28T02:57:38.036+0800: [GC [1 CMS-initial-mark: 3746580K(4995072K)] 3801777K(7691712K), 0.0694940 secs] [Times: user=0.06 sys=0.00, real=0.07 secs] 
2013-11-28T02:57:38.770+0800: [CMS-concurrent-mark: 0.661/0.662 secs] [Times: user=2.15 sys=0.00, real=0.66 secs] 
2013-11-28T02:57:38.806+0800: [CMS-concurrent-preclean: 0.035/0.035 secs] [Times: user=0.06 sys=0.01, real=0.04 secs] 
 CMS: abort preclean due to time 2013-11-28T02:57:43.862+0800: [CMS-concurrent-abortable-preclean: 5.016/5.056 secs] [Times: user=6.82 sys=0.19, real=5.05 secs] 
**2013-11-28T02:57:43.872+0800: [GC[YG occupancy: 407766 K (2696640 K)]2013-11-28T02:57:43.872+0800: [GC [ParNew: 407766K->35780K(2696640K), 0.0236470 secs] 4154346K->3782623K(7691712K), 0.0256460 secs] [Times: user=0.20 sys=0.00, real=0.03 secs]** 
[Rescan (parallel) , 0.0119390 secs][weak refs processing, 0.8478360 secs][class unloading, 0.0661530 secs][scrub symbol & string tables, 0.0146780 secs] [1 CMS-remark: 3746843K(4995072K)] 3782623K(7691712K), 1.1138910 secs] [Times: user=1.41 sys=0.00, real=1.12 secs] 
2013-11-28T02:57:48.965+0800: [CMS-concurrent-sweep: 3.893/3.977 secs] [Times: user=5.65 sys=0.15, real=3.97 secs] 
2013-11-28T02:57:48.977+0800: [CMS-concurrent-reset: 0.012/0.012 secs] [Times: user=0.02 sys=0.00, real=0.02 secs]
What language/environment is this? There are many different garbage collectors in the world. – Damien_The_Unbeliever
It's Java. The JVM is HotSpot 1.6.0_26. – yongfei gao

3 Answers

3 votes

-XX:+CMSScavengeBeforeRemark is the reason for your weird minor collection.

CMS has to do two stop-the-world pauses to complete an old-space collection cycle:

  • initial mark
  • remark

Both of them need to scan the whole young space. Right after a young collection the number of objects in the young space is pretty small, so it is beneficial to schedule the old-space collection pauses right after a young collection.

The -XX:+CMSScavengeBeforeRemark option forces a young collection as soon as CMS is ready for the remark pause, even if Eden is not full yet.

You will see this "phenomenon" on every old-space collection cycle.
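
If you want to reproduce the pattern in isolation, a minimal sketch along these lines should work, assuming flags similar to yours (the class name, sizes and heap settings below are purely illustrative, not taken from your setup):

```java
import java.util.ArrayList;
import java.util.List;

// Run with something like (illustrative sizes, plus your CMS flags):
//   java -Xmn256m -Xmx1g -XX:+UseConcMarkSweepGC -XX:+CMSScavengeBeforeRemark
//        -XX:+PrintGCDetails -XX:+PrintGCDateStamps GcChurn
public class GcChurn {
    public static void main(String[] args) throws InterruptedException {
        List<byte[]> retained = new ArrayList<byte[]>();
        byte[][] scratch = new byte[16][];
        while (true) {
            // Short-lived garbage keeps ParNew on its normal "Eden full" schedule.
            for (int i = 0; i < 10000; i++) {
                scratch[i % 16] = new byte[1024];
            }
            // A slowly growing retained set eventually pushes old-gen occupancy past
            // CMSInitiatingOccupancyFraction and starts a CMS cycle; with
            // -XX:+CMSScavengeBeforeRemark the log then shows an extra ParNew entry
            // right before CMS-remark, even though Eden is far from full.
            retained.add(new byte[64 * 1024]);
            Thread.sleep(5);
        }
    }
}
```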

More details about CMS mechanics - Understanding GC pauses in JVM, HotSpot's CMS collector
More about JVM GC options - HotSpot JVM garbage collection options cheat sheet

UPDATE
Technically speaking, there is another reason which may cause a premature young GC. If the application tries to allocate a large array which cannot fit in Eden, the JVM may start a collection to free space. That is possible, but it is more likely that the JVM will choose to allocate such an array directly in the old space.
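
A tiny illustration of that case, with made-up heap sizes much smaller than yours just to make the effect easy to trigger (BigArrayAlloc is a hypothetical name):

```java
// Run with e.g.: java -Xmn64m -Xmx512m -verbose:gc BigArrayAlloc
public class BigArrayAlloc {
    public static void main(String[] args) {
        // ~100 MB cannot fit in a 64 MB young generation; the JVM either triggers
        // a collection to free space or allocates the array directly in the old gen.
        byte[] huge = new byte[100 * 1024 * 1024];
        System.out.println("Allocated " + huge.length + " bytes");
    }
}
```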

0 votes

I doubt that anyone, apart from the people who wrote and/or maintain that particular GC, could explain that phenomenon properly / correctly.

However, I would observe that your application is creating an extraordinary amount of garbage: roughly 2400000K ≈ 2.4 GB of garbage every 6 seconds. Either you have a pathological application, or this is a very unrealistic garbage collector benchmark ... that won't tell you anything meaningful about how the GC behaves for real applications.
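
As a rough back-of-envelope check, using only the numbers from the first two ParNew lines you posted (a sketch; nothing else is assumed):

```java
public class AllocRate {
    public static void main(String[] args) {
        long youngBeforeK = 2417093;          // young occupancy before the 10:19:36 GC
        long survivedK    = 19997;            // left in young after the 10:19:30 GC
        double intervalSec = 36.277 - 30.011; // gap between the two collections
        double allocatedGiB = (youngBeforeK - survivedK) / (1024.0 * 1024.0);
        System.out.printf("~%.1f GiB allocated in %.1f s (~%.0f MiB/s)%n",
                allocatedGiB, intervalSec, allocatedGiB * 1024 / intervalSec);
    }
}
```

That works out to several hundred megabytes of garbage per second, which is why your young generation fills every few seconds.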


If you asked this Question merely to satisfy your curiosity, I recommend that you investigate the phenomenon yourself ... and then report back your results as a self-Answer.

0 votes

Minor GCs happen when the young generation fills up. It follows that applications that create new objects at a high rate will experience minor garbage collections more often. The very same behavior can be seen in the GC logs you posted: your application creates objects at a high rate and thus triggers minor GCs often.
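
To see that relationship directly, a trivial allocation loop run with GC logging (class name and constants are made up) shows minor collections arriving more frequently as the allocation rate goes up:

```java
// Run with e.g.: java -Xmn64m -verbose:gc MinorGcDemo
// Doubling ALLOCS_PER_ITERATION roughly doubles how often "[GC ...]" lines appear.
public class MinorGcDemo {
    private static final int ALLOCS_PER_ITERATION = 1000;

    public static void main(String[] args) throws InterruptedException {
        byte[][] scratch = new byte[8][];
        while (true) {
            for (int i = 0; i < ALLOCS_PER_ITERATION; i++) {
                scratch[i % 8] = new byte[4096]; // short-lived objects filling Eden
            }
            Thread.sleep(10);
        }
    }
}
```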

For more information concerning GC in general you may have a look at this resource: http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/gc01/index.html