3 votes

I am implementing a method in my class that writes data from a TableView object out to a CSV file. However, when the program runs, it writes the data to a file on a USB drive very slowly (3 or 4 seconds), while it works fine with the system's internal drive. Is this because I have not used flush() or close() after writing the file?

Here is my code

bool ThicknessCalibrationDataDisplay::WriteCSVFileChanges()
{
    QModelIndex tableViewModelindex = tableViewModel_->index(0,0);

    QFile file(CSVFileName_);
    if(!file.exists())
        return false;

    if(!file.open(QIODevice::WriteOnly))
        return false;

    for(int i = 0; i < totalRows_; i++)
    {
        for(int j = 0; j < totalColumns_; j++)
        {
            tableViewModelindex = tableViewModel_->index(i,j);
            qDebug() << tableViewModelindex.data();
            QString text = tableViewModelindex.data().toString();
            QTextStream OutputStream(&file);

            OutputStream << text;
            if(j == totalColumns_ - 1)
                OutputStream << "\r\n";
            else
                OutputStream << ',';
        }
    }
    return true;
}

This was my earlier code, and I now plan to close the file stream for a graceful exit. The Qt documentation for QFile::close() says:

Calls QFile::flush() and closes the file. Errors from flush are ignored.

So should I just call close(), or is it better to call flush(), log any errors, and then call close()?

Is there any other modification I should make to improve the write operation?

That sounds like horrible design, ignoring errors. If that's really what happens, then yes, you have to call flush first and process the errors. (But in any reasonable design, close should propagate any errors from flush up.) – James Kanze

True. I am surprised close() will not return any boolean error status... – Barath Ravikumar

Just curious, but what does QIODevice or whatever buy you over the standard streams? – James Kanze

Throughout the codebase, Qt file stream classes are used, so I just have to do the same to maintain project integrity... – Barath Ravikumar

If you're not working alone, yes. You have to maintain some sort of coherence in the project. (Although in the case of IO, it's not rare to use several different solutions, depending. The requirements for transaction processing and logging are radically different, and an IO subsystem which meets one will rarely meet the other. But from the little you're showing here, I don't think that there's any justification for breaking with existing practice in the application.) – James Kanze

2 Answers

8 votes
  1. The flush() vs. close() question is a red herring; it doesn't affect your performance at all.

  2. Destruction of a QTextStream forces a flush of the file. Flushing the file is slow. You're destroying your text stream every time you iterate through the loop! Create the stream outside of the loop!

    Here's the source, from Qt 5.1.1:

    QTextStream::~QTextStream()
    {
        if (!d->writeBuffer.isEmpty())
            d->flushWriteBuffer();
    }
    

    On Qt 5.1 or newer, you should be using QSaveFile if you want to ensure that the disk buffers are flushed once the file is closed. Only this gives you assurance that once you think you're done saving, the data is in fact on the disk.
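
    To make the QSaveFile pattern concrete, here is a minimal sketch (writeReport() is a made-up helper, not part of the question's code). QSaveFile writes into a temporary file and only replaces the target when commit() succeeds:

    #include <QSaveFile>
    #include <QTextStream>

    bool writeReport(const QString &fileName, const QString &contents)
    {
        QSaveFile file(fileName);
        if (!file.open(QIODevice::WriteOnly))
            return false;

        QTextStream out(&file);   // one stream for the whole write
        out << contents;
        out.flush();              // flush the stream's own buffer first

        if (out.status() != QTextStream::Ok) {
            file.cancelWriting(); // discard the temporary file
            return false;
        }
        return file.commit();     // flushes, closes, and renames over the target
    }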

  3. QTextStream has its own buffers. So, if you want to catch errors while flushing, you need to use the flush() method on the stream itself, not on the QFile.
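
    For example, to log flush errors as the question asks, a fragment along these lines could end the write, using the question's OutputStream and file (with the stream created once, outside the loops; qWarning() comes from <QDebug>):

    OutputStream.flush();                         // push the stream's buffer into the QFile
    if (OutputStream.status() != QTextStream::Ok)
        qWarning() << "Flushing the CSV data failed for" << file.fileName();
    file.close();                                 // QFile::close() ignores flush errors, per the docs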

  4. It's a very bad idea to do any file access from the GUI thread. Any time anything blocks, your UI thread blocks as well.

    Now the question is: how do you make your model safe to access from another thread? If you use a custom model, you probably only need to switch the model into a read-only mode for the duration of writing the file. The read-only mode would be a custom property of the model, such that all setData calls would fail. You'd of course need to indicate this to the user. The views of the model would be read-only, but that's much more friendly than blocking the entire GUI.

    If you think that merely preventing concurrent access to the model with a QMutex would be enough, think again. Any modifications made to the model may potentially change its structure, so your writer would need to properly handle all signals emitted by the model. This would make the writer much more complex. A temporarily read-only model gives you a responsive GUI with only a temporary inconvenience to the user, at a minimal increase in code complexity. A sketch of how such a writer could be invoked from a worker thread follows the reworked code below.

    class MyModel : public QStandardItemModel /*  or whatever other class you derive from */ {
        typedef QStandardItemModel inherited;
        Q_OBJECT
        Q_PROPERTY(bool writable READ isWritable WRITE setWritable NOTIFY writableChanged)
        bool m_writable;
    public:
        explicit MyModel(QObject * parent) : inherited(parent), m_writable(true) {}
        Q_SLOT void setReadOnly() {
            if (!m_writable) return;
            m_writable = false; 
            emit writableChanged(m_writable);
        }
        Q_SLOT void setReadWrite(){
            if (m_writable) return;
            m_writable = true; 
            emit writableChanged(m_writable);
        }
        Q_SIGNAL void writableChanged(bool);
        bool isWritable() const { return m_writable; }
        void setWritable(bool writable) {
            if (m_writable == writable) return;
            m_writable = writable; 
            emit writableChanged(m_writable);
        }
        bool setData(const QModelIndex & index, const QVariant & val, int role = Qt::EditRole) {
            if (! m_writable) return false;
            return inherited::setData(index, val, role);
        }
    };
    
  5. Some USB drives are abysmally slow.

Your reworked code should, at the minimum, look as follows:

bool ThicknessCalibrationDataDisplay::WriteCSVFileChanges()
{
    if(!QFile::exists(CSVFileName_)) // QSaveFile has no exists(), so use the static check
        return false;

#if QT_VERSION < QT_VERSION_CHECK(5,1,0)
    QFile file(CSVFileName_);
#else
    QSaveFile file(CSVFileName_); // does a disk commit at the end
#endif
    if(!file.open(QIODevice::WriteOnly))
        return false;

    QTextStream os(&file);
    for(int i = 0; i < totalRows_; i++)
    {
        for(int j = 0; j < totalColumns_; j++)
        {
            QModelIndex index = tableViewModel_->index(i,j);
            qDebug() << index.data();
            os << index.data().toString(); // QTextStream has no operator<< for QVariant
            if(j == totalColumns_ - 1)
                os << "\r\n";
            else
                os << ',';
        }
    }
    os.flush();
#if QT_VERSION >= QT_VERSION_CHECK(5,1,0)
    return os.status() == QTextStream::Ok && file.commit();
#else
    return os.status() == QTextStream::Ok;
#endif
}
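
To keep the GUI responsive (point 4), the write can then be kicked off on a worker thread while the model is read-only. A minimal sketch, assuming ThicknessCalibrationDataDisplay is a QObject, the MyModel instance is reachable as model_, and WriteCSVFileChanges() only ever reads from the model (SaveCSVInBackground() is a made-up method name):

#include <QtConcurrent/QtConcurrentRun>
#include <QFutureWatcher>
#include <QDebug>

void ThicknessCalibrationDataDisplay::SaveCSVInBackground()
{
    model_->setReadOnly();   // views stay visible, but edits are rejected for the duration

    auto watcher = new QFutureWatcher<bool>(this);
    QObject::connect(watcher, &QFutureWatcher<bool>::finished, this, [this, watcher] {
        model_->setReadWrite();            // back on the GUI thread: re-enable editing
        if (!watcher->result())
            qWarning() << "Writing the CSV file failed";
        watcher->deleteLater();
    });
    watcher->setFuture(QtConcurrent::run([this] {
        return WriteCSVFileChanges();      // runs in a thread-pool thread, off the GUI
    }));
}
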
7 votes

I can tell you that QFile::close() is implicitly called at the end of the function, because the QFile destructor runs when the file variable goes out of scope. From the QFile documentation:

QFile::~QFile ()
Destroys the file object, closing it if necessary.

What you probably want to experiment with is the underlying sync facility of the operating system. AFAIK Qt does not provide any API for this, so it wouldn't be portable. When QFile::flush() is called, what is likely happening (at least on Linux) is that the data is flushed from the userspace buffer to the underlying system cache, not necessarily to disk. On Linux/Unix, you need the fsync function to ensure the file is actually written to disk. The code would look something like:

file.flush();              // flush Qt's buffer into the OS
fsync( file.handle() );    // ask the kernel to write the data to the device (needs <unistd.h>)
file.close();

Note that fsync() is POSIX-only; on Windows the closest equivalents are _commit() on the file descriptor or FlushFileBuffers() on the native handle.
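
If portability matters, a sketch of a small helper could look like the following (syncToDisk() is a made-up name, and it assumes QFile::handle() yields a CRT file descriptor on Windows):

#include <QFile>

#ifdef Q_OS_WIN
#include <io.h>        // _commit()
#else
#include <unistd.h>    // fsync()
#endif

// Flush Qt's own buffer, then ask the OS to push the data to the device.
bool syncToDisk(QFile &file)
{
    if (!file.flush())
        return false;
#ifdef Q_OS_WIN
    return _commit(file.handle()) == 0;
#else
    return fsync(file.handle()) == 0;
#endif
}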

For more on fflush vs. fsync, see Difference between fflush and fsync.