3
votes

I read binary data in portions (e.g. 100 bytes) from a file using QDataStream and then process it. Basically QDataStream stream(&file), with file being a QFile.
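
For context, the reading loop looks roughly like this (the file name and the processing step are placeholders, not from an actual project):

```cpp
#include <QFile>
#include <QDataStream>
#include <QByteArray>

int main()
{
    QFile file("data.bin");                      // placeholder file name
    if (!file.open(QIODevice::ReadOnly))
        return 1;

    QDataStream stream(&file);                   // stream reads directly from the QFile
    while (!stream.atEnd()) {
        QByteArray chunk(100, 0);                // one 100-byte portion
        int n = stream.readRawData(chunk.data(), chunk.size());
        if (n <= 0)
            break;
        chunk.truncate(n);
        // ... process the chunk ...
    }
    return 0;
}
```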

Everything works fine so far. But I guess that, in general, processing is faster when small portions of data are not read from the file one by one, but from a buffer that is filled from the file in larger chunks. So here are my questions:

  1. Is such buffering already done internally when using QDataStream, so that a manually implemented buffer would not speed up processing any further? That is, does Qt internally read more than the 100 bytes from the file?

  2. If not, what is the best way to do such buffering manually? QBuffer?

Thanks for your answers and experiences,

Chris


1 Answer

4
votes

QDataStream itself doesn't perform any buffering (unlike e.g. QTextStream). But QFile provides some buffering by default, unless you've opened it with the QIODevice::Unbuffered flag. There is no documentation on how that buffering is performed, and I don't know whether it can be accelerated with manual buffering. But sequential reading is a common task, and I think it will work fast by default.
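
A minimal sketch of the difference between the two open modes (the file name is a placeholder):

```cpp
#include <QFile>

QFile buffered("data.bin");
buffered.open(QIODevice::ReadOnly);                            // QFile buffers reads by default

QFile unbuffered("data.bin");
unbuffered.open(QIODevice::ReadOnly | QIODevice::Unbuffered);  // every read goes straight to the file
```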

QBuffer provides an IO interface for QByteArray. If your data chunks are fixed-size and you can be sure that any e.g. 100-byte fragment of the file can be parsed separately with QDataStream, then the solution is easy: read a QByteArray from the QFile and use QDataStream on that QByteArray (a QBuffer will be used internally), as in the sketch below. But if that's not your case, you need to remove parsed data from the buffer and append new data when required, which is a more complicated task.
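
For the fixed-size case, a rough sketch could look like this (the chunk size and the quint32 record format are assumptions for illustration; the chunk size must be a multiple of the record size so each block can be parsed on its own):

```cpp
#include <QFile>
#include <QDataStream>
#include <QByteArray>

void processFile(QFile &file)                     // file is assumed to be open for reading
{
    const qint64 chunkSize = 100 * 1024;          // e.g. read ~100 KiB at a time
    while (!file.atEnd()) {
        QByteArray block = file.read(chunkSize);  // one large read from the file
        QDataStream in(block);                    // read-only stream over the QByteArray
        while (!in.atEnd()) {
            quint32 value;                        // hypothetical record field
            in >> value;
            // ... process value ...
        }
    }
}
```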