2 votes

I'm developing an application that needs to copy lots of files from one folder to another, using Qt (5.6.1).

For this I've been using the QFile::copy() method. It works, except for one thing: it is extremely slow. It takes more than twice as long as the same copy operation in Windows Explorer.

Wondering why, I dug into the Qt source code and found this in qfile.cpp, which looks relevant:

char block[4096];
qint64 totalRead = 0;
while(!atEnd()) {
    qint64 in = read(block, sizeof(block));
    if (in <= 0)
        break;
    totalRead += in;
    if(in != out.write(block, in)) {
        close();
        d->setError(QFile::CopyError, tr("Failure to write block"));
        error = true;
        break;
    }
}

So, from what I understand, the copy operation uses a 4096-byte buffer. That is very small for a copy operation and could well be the cause of the issue. So I changed the size of the buffer to:

char block[4194304]; // 4MB buffer

Then I rebuilt the entire Qt library to include this change. However, the modification just broke the method completely. Now when my application invokes QFile::copy(), the operation is interrupted immediately (the method doesn't even start to run; it stops before the first line according to Qt Creator's debugger). The debugger tells me:

The inferior stopped because it received a signal from the Operating System.

Signal name: SIGSEGV
Signal meaning: Segmentation fault

My C++ is a bit rusty, but I don't understand how just changing the allocation size of an array can completely break a method. Can anyone help by either:

1) Telling me why QFile::copy() is so slow (am I missing something? It's not just on my PC; I tested this on several different machines). And is the culprit actually the code I posted above, or something else entirely?

2) Telling me why that one change completely breaks QFile?

Comments:

There's a benchmark in qtbase (tests/benchmarks/corelib/io/qfile) which tries to read a file on Win32 using different block sizes. I'm not sure why 4K was universally selected. Perhaps it depends on the hard disk technology? Could you try running the benchmark (the readBigFile_Win32 test function) and check? – peppe

On Windows, your best bet will be to use CopyFileEx; see this complete example with progress indication :) – Kuba hasn't forgotten Monica

3 Answers

6 votes

The reason your change broke QFile is that a 4 MB buffer won't fit on the stack (the default stack size is typically around 1 MB). The oversized local array overflows the stack as soon as the function is entered, which is why the debugger reports a SIGSEGV before the first line runs. A quick fix would be:

#include <vector>

std::vector<char> vec(4 * 1024 * 1024); // 4 MB buffer, allocated on the heap
char *block = vec.data();               // equivalent to &vec.front()

The vector allocates the big buffer on the heap (and takes care of deallocating it when you are done), and you just point block at the vector's storage.

I think your analysis of why copy is slow is spot on.
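
For reference, here is a minimal, self-contained sketch of the same idea using only public QFile APIs. The function name bufferedCopy is mine; this illustrates the heap-buffer fix rather than the actual Qt internals:

#include <QFile>
#include <QString>
#include <vector>

// Sketch: copy src to dest through a 4 MB heap-allocated buffer.
// Returns true on success; error handling is kept minimal.
bool bufferedCopy(const QString &src, const QString &dest)
{
    QFile in(src);
    QFile out(dest);
    if (!in.open(QIODevice::ReadOnly) || !out.open(QIODevice::WriteOnly))
        return false;

    std::vector<char> block(4 * 1024 * 1024); // on the heap, not the stack
    while (!in.atEnd()) {
        const qint64 n = in.read(block.data(), static_cast<qint64>(block.size()));
        if (n < 0)
            return false;   // read error
        if (n == 0)
            break;          // nothing left to read
        if (out.write(block.data(), n) != n)
            return false;   // short write
    }
    return true;
}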

2 votes

Well, changing the buffer size did no good, since that loop is apparently just a fallback used when the platform file engine's engine()->copy() fails. I don't know exactly how that function works, nor did I want to waste time modifying core Qt engine classes to make this work.

In the end, since my project was only supposed to run on Windows, I ended up using the native Win32 copy function. So I replaced my call to:

QFile::copy(src, dest);

with:

CopyFileExW((LPCWSTR)src.utf16(), (LPCWSTR)dest.utf16(), 0, this, 0, 0); // no progress routine; the lpData argument (this) is unused here

Note that you must #include <windows.h> for this invocation to work.
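
If you also want the progress indication mentioned in the comments, a sketch along these lines should work. The callback and wrapper names are my own; the LPPROGRESS_ROUTINE signature and PROGRESS_CONTINUE come from the Win32 API:

#include <windows.h>
#include <QDebug>
#include <QString>

// Win32 progress routine: invoked periodically while the copy runs.
static DWORD CALLBACK copyProgress(LARGE_INTEGER totalSize, LARGE_INTEGER transferred,
                                   LARGE_INTEGER, LARGE_INTEGER, DWORD, DWORD,
                                   HANDLE, HANDLE, LPVOID)
{
    if (totalSize.QuadPart > 0)
        qDebug() << "copied" << (transferred.QuadPart * 100 / totalSize.QuadPart) << "%";
    return PROGRESS_CONTINUE; // keep copying
}

bool nativeCopy(const QString &src, const QString &dest)
{
    return CopyFileExW(reinterpret_cast<LPCWSTR>(src.utf16()),
                       reinterpret_cast<LPCWSTR>(dest.utf16()),
                       copyProgress, nullptr, nullptr, 0) != 0;
}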

2 votes

This seems to no longer be an issue with newer versions of Qt (I am using 5.9.2). Have a look at QFileSystemEngine::copyFile() in https://code.woboq.org/qt5/qtbase/src/corelib/io/qfilesystemengine_win.cpp.html. The code now uses the native CopyFile2 function. My own testing also confirmed that QFile::copy() is on par with the native implementation on Windows, so Qt seems to have made progress in this area.
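
If you want to verify this on your own setup, a quick timing sketch like the following can measure the copy (the file paths are placeholders; QElapsedTimer and QFile::copy are standard Qt APIs):

#include <QCoreApplication>
#include <QElapsedTimer>
#include <QFile>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    const QString src = "C:/temp/bigfile.bin";   // placeholder path
    const QString dest = "C:/temp/bigfile.copy"; // placeholder path

    QFile::remove(dest); // QFile::copy() fails if the destination exists

    QElapsedTimer timer;
    timer.start();
    const bool ok = QFile::copy(src, dest);
    qDebug() << "QFile::copy succeeded:" << ok
             << "elapsed:" << timer.elapsed() << "ms";
    return 0;
}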