I'm trying to write an application that controls a swarm of robots via WiFi and the MQTT protocol. I have run some tests to measure whether it will be fast enough for my purposes. I would like a control loop (a message going from a PC to a robot and back) to take no more than 25-30 ms on average.
I have written an application using the Paho Java client that runs on two machines. When one machine receives a message on topic1, it publishes to topic2. The second machine subscribes to topic2 and in turn publishes to topic1:
      topic1            topic1
M1 ----------> broker ----------> M2
      topic2            topic2
M1 <---------- broker <---------- M2
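For concreteness, here is a minimal sketch of the M2 side of that loop, assuming the plain Paho MqttClient API (the broker URL, client ID, and QoS value are placeholders; M1 is the mirror image with the topics swapped):

import org.eclipse.paho.client.mqttv3.*;

public class EchoClient implements MqttCallback {

    private final MqttClient client;

    public EchoClient(String brokerUrl) throws MqttException {
        client = new MqttClient(brokerUrl, "M2");
        client.setCallback(this);
        client.connect();
        client.subscribe("topic1", 0);   // QoS varied per test run
    }

    @Override
    public void messageArrived(String topic, MqttMessage message) throws Exception {
        // Bounce the payload straight back on the return topic.
        client.publish("topic2", message.getPayload(), 0, false);
    }

    @Override public void connectionLost(Throwable cause) { }

    @Override public void deliveryComplete(IMqttDeliveryToken token) { }
}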
With all publishing and subscribing done at QoS 0, the loop time averaged around 12 ms. However, I would like to use QoS 1 to guarantee that the commands sent to the robots always reach their destination. When I tested the loop with QoS 1, it averaged around 250 ms.
What causes such a large increase in time? From my understanding, if there are no transmission errors, the number of exchanged packets merely doubles with QoS 1 (a PUBACK is returned for every PUBLISH on each leg, see http://www.hivemq.com/mqtt-essentials-part-6-mqtt-quality-of-service-levels/).
Can I somehow reduce this time? I have tried the Mosquitto and Apache Apollo brokers; both produced the same results.
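One client-side factor that may be worth ruling out (an assumption, since the client construction code isn't shown above): Paho's MqttClient defaults to file-based persistence, so every in-flight QoS 1/2 message is written to disk before the PUBLISH goes out, which can dominate a millisecond-scale loop. A memory-backed store is a one-line change:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.persistence.MemoryPersistence;

// Keep in-flight QoS 1/2 state in RAM instead of on disk. The
// trade-off: unacknowledged messages are lost if the client crashes.
MqttClient client = new MqttClient(url, clientId, new MemoryPersistence());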
Edit:
I have changed the testing procedure a bit. Now I have two MQTT client instances running on the same machine, one as a publisher and one as a subscriber. The publisher sends 1000 messages at 10 ms intervals, like this:
Client publisher = new Client(url, clientId + "pub", cleanSession, quietMode, userName, password);
Client subscriber = new Client(url, clientId + "sub", cleanSession, quietMode, userName, password);
subscriber.subscribe(pubTopic, qos);

while (counter < 1000) {
    Thread.sleep(10, 0);
    // Send the current wall-clock time as the payload so the
    // subscriber can compute the one-way latency on arrival.
    String time = new Timestamp(System.currentTimeMillis()).toString();
    publisher.publish(pubTopic, qos, time.getBytes());
    counter++;
}
The subscriber just waits for messages and measures the elapsed time:
public void messageArrived(String topic, MqttMessage message) throws MqttException {
    // Called when a message arrives from the server that matches any
    // subscription made by the client.
    Timestamp tRec = new Timestamp(System.currentTimeMillis());
    String timeSent = new String(message.getPayload());
    Timestamp tSent = Timestamp.valueOf(timeSent);
    long diff = tRec.getTime() - tSent.getTime();
    sum += diff;
    counter++;
    if (counter == 1000) {
        double avg = sum / 1000.0;
        System.out.println("avg time: " + avg);
    }
}
The broker (Mosquitto with the default config) runs on a separate machine in the same local network. The results I got are even more bizarre than before. Now it takes approximately 8-9 ms for a QoS 1 message to reach the subscriber, and around 20 ms with QoS 2. However, with QoS 0 I get average times from 100 ms up to 250 ms! I guess the error is somewhere in my test method, but I can't see where.
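As a sanity check on the measurement itself, here is a sketch of an alternative timing scheme (it reuses the publisher, pubTopic, qos, and message names from the snippets above): since both clients run in the same JVM in this test, the monotonic System.nanoTime() clock can be carried in the payload, avoiding Timestamp parsing and the coarse granularity of System.currentTimeMillis():

// Publisher side: encode the monotonic clock reading in the payload.
publisher.publish(pubTopic, qos, Long.toString(System.nanoTime()).getBytes());

// Subscriber side, inside messageArrived: decode and diff in milliseconds.
long sentNanos = Long.parseLong(new String(message.getPayload()));
double diffMillis = (System.nanoTime() - sentNanos) / 1_000_000.0;

Note that this only works while publisher and subscriber share a JVM; nanoTime() values are not comparable across machines.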
Where is counter defined? (If both the subscriber and the publisher are in the same class, as they appear to be from your two code snippets, are they incrementing the exact same counter variable?) Your results are way too inconsistent and too high, which is not normal. – kha
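If both snippets do live in one class, a sketch of the separation kha is hinting at, with one counter per role (the field names here are made up for illustration):

import java.util.concurrent.atomic.AtomicInteger;

// One counter per role: the publisher loop and the subscriber
// callback run on different threads, so a single shared plain int
// would both conflate the two counts and race between threads.
private final AtomicInteger sentCount = new AtomicInteger();
private final AtomicInteger receivedCount = new AtomicInteger();

// Publisher loop:   while (sentCount.get() < 1000) { ...; sentCount.incrementAndGet(); }
// messageArrived:   if (receivedCount.incrementAndGet() == 1000) { /* print avg */ }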