I'm trying to use the cassandra-stress tool for the first time today. The tool runs, but the output contains a lot of "Failed to connect over JMX; not collecting these stats" messages.
Command
cassandra-stress user \
profile=./stress_write.yaml ops\(insert=1\) \
n=1000000 \
-log file=./stress_write.log \
-node node1,node2,node3,node4,node5,node6
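For reference, stress_write.yaml follows the standard cassandra-stress user-profile layout. The sketch below only illustrates that layout; the keyspace, table, and column definitions are placeholders, not my actual profile:

keyspace: stress_ks
keyspace_definition: |
  CREATE KEYSPACE stress_ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
table: stress_table
table_definition: |
  CREATE TABLE stress_table (
    key text,
    value blob,
    PRIMARY KEY (key)
  );
columnspec:
  - name: key
    size: uniform(1..32)
    population: uniform(1..1M)
  - name: value
    size: fixed(128)
insert:
  partitions: fixed(1)    # matches the [1..1] partitions per batch reported in the output below
  batchtype: UNLOGGED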
Output
WARN 19:44:25 Found host with 0.0.0.0 as rpc_address, using listen_address (/node5) to contact it instead. If this is incorrect you should avoid the use of 0.0.0.0 server side.
WARN 19:44:25 Found host with 0.0.0.0 as rpc_address, using listen_address (/node1) to contact it instead. If this is incorrect you should avoid the use of 0.0.0.0 server side.
WARN 19:44:25 Found host with 0.0.0.0 as rpc_address, using listen_address (/node2) to contact it instead. If this is incorrect you should avoid the use of 0.0.0.0 server side.
WARN 19:44:25 Found host with 0.0.0.0 as rpc_address, using listen_address (/node4) to contact it instead. If this is incorrect you should avoid the use of 0.0.0.0 server side.
WARN 19:44:25 Found host with 0.0.0.0 as rpc_address, using listen_address (/node3) to contact it instead. If this is incorrect you should avoid the use of 0.0.0.0 server side.
WARN 19:44:26 Found host with 0.0.0.0 as rpc_address, using listen_address (/node5) to contact it instead. If this is incorrect you should avoid the use of 0.0.0.0 server side.
WARN 19:44:26 Found host with 0.0.0.0 as rpc_address, using listen_address (/node1) to contact it instead. If this is incorrect you should avoid the use of 0.0.0.0 server side.
WARN 19:44:26 Found host with 0.0.0.0 as rpc_address, using listen_address (/node2) to contact it instead. If this is incorrect you should avoid the use of 0.0.0.0 server side.
WARN 19:44:26 Found host with 0.0.0.0 as rpc_address, using listen_address (/node4) to contact it instead. If this is incorrect you should avoid the use of 0.0.0.0 server side.
WARN 19:44:26 Found host with 0.0.0.0 as rpc_address, using listen_address (/node3) to contact it instead. If this is incorrect you should avoid the use of 0.0.0.0 server side.
INFO 19:44:26 Using data-center name 'DC2' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
INFO 19:44:26 New Cassandra host /node2:9042 added
INFO 19:44:26 New Cassandra host /node5:9042 added
Connected to cluster: MyCluster
INFO 19:44:26 New Cassandra host /node4:9042 added
INFO 19:44:26 New Cassandra host /node1:9042 added
INFO 19:44:26 New Cassandra host /node6:9042 added
Datatacenter: DC2; Host: /node4; Rack: rack1
Datatacenter: DC2; Host: /node3; Rack: rack1
Datatacenter: DC2; Host: /node6; Rack: rack1
Datatacenter: DC2; Host: /node5; Rack: rack1
Datatacenter: DC2; Host: /node1; Rack: rack1
Datatacenter: DC2; Host: /node2; Rack: rack1
INFO 19:44:26 New Cassandra host /node3:9042 added
Created schema. Sleeping 6s for propagation.
Failed to connect over JMX; not collecting these stats
Generating batches with [1..1] partitions and [1..1] rows (of [1..1] total rows in the partitions)
Failed to connect over JMX; not collecting these stats
Failed to connect over JMX; not collecting these stats
Improvement over 4 threadCount: 36%
Failed to connect over JMX; not collecting these stats
Improvement over 8 threadCount: 138%
Failed to connect over JMX; not collecting these stats
Improvement over 16 threadCount: 48%
Failed to connect over JMX; not collecting these stats
Improvement over 24 threadCount: 33%
Failed to connect over JMX; not collecting these stats
Improvement over 36 threadCount: 27%
Failed to connect over JMX; not collecting these stats
Improvement over 54 threadCount: 39%
Failed to connect over JMX; not collecting these stats
Improvement over 81 threadCount: 37%
Failed to connect over JMX; not collecting these stats
Improvement over 121 threadCount: 16%
Failed to connect over JMX; not collecting these stats
Improvement over 181 threadCount: 1%
Failed to connect over JMX; not collecting these stats
Improvement over 271 threadCount: 15%
Failed to connect over JMX; not collecting these stats
Improvement over 406 threadCount: 3%
Failed to connect over JMX; not collecting these stats
Improvement over 609 threadCount: -3%
Is there any command-line or file-based configuration parameter that I need to specify for JMX? I have tested and confirmed that connectivity between the stress machine and my nodes is not the issue, because I was able to establish a connection between them via jmxsh.
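For what it's worth, the connectivity check was along these lines, run from the stress machine (the jar name, host, and the -h/-p host and port options are my assumptions about a typical jmxsh invocation; 7199 is Cassandra's default JMX port):

# Open an interactive JMX session against one of the nodes from the stress machine
java -jar jmxsh-R5.jar -h node1 -p 7199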
Another issue with the output, which may or may not be related to the JMX error, is that it is missing some key parts. I'm quoting the sample output from this Datastax documentation page to show the parts that are missing from what I got:
WARNING: uncertainty mode (err<) results in uneven workload between thread runs, so should be used for high level analysis only
Running with 4 threadCount
Running WRITE with 4 threads until stderr of mean < 0.02
total ops , adj row/s, op/s, pk/s, row/s, mean, med, .95, .99, .999, max, time, stderr, gc: #, max ms, sum ms, sdv ms, mb
2552 , 2553, 2553, 2553, 2553, 1.5, 1.4, 2.5, 6.0, 12.6, 18.0, 1.0, 0.00000, 0, 0, 0, 0, 0
5173 , 2634, 2613, 2613, 2613, 1.5, 1.5, 1.8, 2.6, 8.6, 9.2, 2.0, 0.00000, 0, 0, 0, 0, 0
...
Results:
op rate : 3954
partition rate : 3954
row rate : 3954
latency mean : 1.0
latency median : 0.8
latency 95th percentile : 1.5
latency 99th percentile : 1.8
latency 99.9th percentile : 2.2
latency max : 73.6
total gc count : 25
total gc mb : 1826
total gc time (s) : 1
avg gc time(ms) : 37
stdev gc time(ms) : 10
Total operation time : 00:00:59
Sleeping for 15s
Running with 4 threadCount
Notes
- My cluster is running DSE 4.6.1 (Cassandra 2.0.12)
- I'm running the stress tool from a different machine
- The stress tool version is from DSC 2.1 (Cassandra 2.1)