1 vote

I am using QFile as both the file reader and the file writer to copy files to USB from inside my application. I have been trying to figure out why my copies to USB (with a progress bar) take so long, and I finally found that when I close the QFile object used for writing, the close() call can take longer than the actual write loop. These are very large files; I read and write blocks of 16384 bytes and then send a signal to the GUI to advance the progress bar shown to the user.

Assuming the delay came from the output stream not yet having been written to disk, I added a call to flush() after each write, but that made no difference: closing the outgoing QFile still takes much longer than the write loop itself (I timed before and after the copy and around each QFile::close() call; the timing code has been removed here for readability, and I also confirmed the behavior in the debugger). Of course, simply not calling close() doesn't help, since destroying the QFile object calls it anyway.

My code is as follows (minus error checking, destination space checking, etc.):

void FileCopy::run()
{
    QByteArray bytes;
    qint64 totalBytesWritten = 0; // qint64, since the running total can exceed 2 GB
    int inListSize = inList.size();

    for (int i=0; !canceled && i<inListSize; i++)
    {
        QString inPath = inList.at(i).inPath;
        QString outPath = inList.at(i).outPath;
        QFile inFile(inPath);
        QFile outFile(outPath);
        qint64 filesize = inFile.size(); // QFile::size() returns qint64; int overflows on files over 2 GB
        qint64 bytesWritten = 0;

        if (!inFile.open(QIODevice::ReadOnly))
        {
            return;
        }

        if (!outFile.open(QIODevice::WriteOnly))
        {
            inFile.close();
            return;
        }

        // copy the FCS file with progress
        while (!canceled && bytesWritten < filesize)
        {
            bytes = inFile.read(MAXBYTES);
            qint64 outsize = outFile.write(bytes);
            outFile.flush(); // flushes Qt's buffer to the OS; did not reduce the close() time
            if (outsize != bytes.size())
            {
                break;
            }
            bytesWritten += outsize;
            totalBytesWritten += outsize;
            Q_EMIT signalBytesCopied(totalBytesWritten, i+1, inListSize);
            QThread::usleep(100); // allow time for detecting a cancel
        }

        inFile.close();
        outFile.close();
    }

    // Other error checking done here
}

Can anyone see a way to get past this? I would actually prefer the progress bar to move more slowly and accurately reflect the real state of the copy, rather than read 100% in less than half the time it takes for the copy and close to actually complete.

I have also tried using QSaveFile instead of QFile for the output, but QSaveFile::commit() has exactly the same problem, taking more time to commit than the whole copy loop takes to finish. I assume this is because, underneath, it uses the same machinery as QFile, derived from QIODevice.

I have considered moving to standard streams, but I would like to keep file handling consistent across this application. It remains a possibility, though, if QFile::close() is going to take this long. Or would a standard stream have the same issue?

I am working on a Win7 32-bit box with VS2010, using Qt 5.1.1 and the Qt 1.2.2 VS add-in. Thanks for any suggestions.


1 Answer

1 vote

While you are writing, the OS probably just caches the writes in memory (fast). But when you close the file, it has to flush all of that data to the device (slow, especially if it has not actually written any of it yet). So closing the file waits for the OS to actually put the data onto the disk (the USB stick), and at that point that may well be all of the data at once.
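
If the goal is a progress bar that tracks what is actually on the stick, one option is to push each block all the way through the OS cache as you go. Below is a minimal sketch under two assumptions: the Windows platform from the question, and a hypothetical helper named writeBlockThrough(). The key point is that QFile::flush() only moves Qt's internal buffer down to the OS (which is why it made no difference), whereas the Win32 call FlushFileBuffers() blocks until the OS cache has reached the device:

#include <QByteArray>
#include <QFile>
#include <io.h>        // _get_osfhandle()
#include <windows.h>   // FlushFileBuffers(), HANDLE

// Write one block and force it onto the device before returning.
bool writeBlockThrough(QFile &outFile, const QByteArray &bytes)
{
    // Qt buffer -> OS (this is all QFile::flush() ever does)
    if (outFile.write(bytes) != bytes.size())
        return false;
    outFile.flush();

    // OS cache -> device; blocks until the data is really on the USB stick
    HANDLE h = reinterpret_cast<HANDLE>(_get_osfhandle(outFile.handle()));
    return h != INVALID_HANDLE_VALUE && FlushFileBuffers(h) != 0;
}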

Operating systems do this, of course, to speed up writes: they can often get away with flushing the data to disk in the background while nothing else is going on, so the real cost goes unnoticed because it is amortized over idle time. But if you write everything and then close immediately, you will notice it. Note that the alternative is simply slower write calls; you would still spend the same total time.
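
In the loop from the question, the write()/flush() pair would then collapse into a single call to the sketch above. The total copy time stays about the same (or slightly worse, given the per-block sync overhead), but the progress bar advances at the real device speed and the final close() returns almost immediately, because nothing is left to flush:

// inside the copy loop, replacing write() + flush()
if (!writeBlockThrough(outFile, bytes))
    break;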