0
votes

I need images to be displayed at a constant frame rate, so I use two threads: one for rendering with VSYNC on, and one for computing with CUDA, which may take a long time. I want the computing thread to run during the rendering thread's idle interval (after the buffer swap, before the next frame starts rendering).

I have two problems here:

  1. How can I know when the image is actually drawn on the screen, so I can wake the rendering thread? After glutSwapBuffers(), the image may not yet be displayed on screen, and I have not found an API that notifies me when display completes.
  2. How can I stop the computing thread when it is time to render? I have tried this_thread::yield(), but the computing thread often keeps running. I am not familiar with multithreaded programming.

I use C++11 threads, CUDA for computing, and OpenGL for rendering.

Update:

Since computing takes a long time but rendering must run at 60 Hz, I have to separate the two into different threads.

I have just resolved this problem by using a condition_variable; it is similar to the producer-consumer problem. There is also no need to know when the image is actually drawn on screen: you can just let the compute thread run all the time, and the CUDA computing thread does not seem to interrupt the OpenGL rendering thread on a single GPU; they run in parallel.

Here is the code:

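Both threads share a few globals that the snippets below rely on. A minimal sketch of those declarations (the type of d_x is an assumption, whatever runSolver actually fills; updateFlag is declared atomic here because display() below checks it without taking the lock):

#include <mutex>
#include <condition_variable>
#include <atomic>
using namespace std;

mutex buffer_mutex;              // guards the image-buffer handoff
condition_variable buffCond;     // compute/render handshake
atomic<bool> updateFlag{false};  // true when freshly computed images are ready
float* d_x = nullptr;            // device buffer filled by runSolver (type assumed)
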
Compute thread:

void update(){
    while(1){
        unique_lock<mutex> locker(buffer_mutex);
        // Sleep until the rendering thread has consumed the previous batch.
        buffCond.wait(locker, []{ return !updateFlag; });
        runSolver(d_x);      // compute the next several images; d_x points to the image buffer
        updateFlag = true;   // mark the new images as ready for display
        locker.unlock();
        buffCond.notify_one();
    }
}

Rendering thread:

void render(){
    initGL();
    glutMainLoop();
}

void display(){
    if(updateFlag){ // new images ready? updateFlag should be atomic so this unlocked check is not a data race
        unique_lock<mutex> locker(buffer_mutex);
        updateBuffer(d_x);   // copy the freshly computed images to the GL side
        updateFlag = false;
        locker.unlock();
        buffCond.notify_one(); // let the compute thread start on the next batch
    }
    .../* OpenGL rendering */
    glutSwapBuffers();
}
Why are you using threads for this? Generally speaking, if you want to start operation X after operation Y has finished, you do this synchronously. I don't see where the parallelism is coming from in this operation. – Nicol Bolas
@NicolBolas it sounds as though his compute stage does not directly generate rendering instructions, but generates a model to be translated into rendering, probably some form of simulation. So he perhaps wants to have the step calculations running at the same time as the rendering translation of the previous step. – kfsone

1 Answer

1
votes

Graphics rendering is generally done from a single thread, or at least from a single thread at a time. I believe the finer details depend on your software stack; for example, from Xlib:

Threaded applications: While Xlib does attempt to support multithreading, the API makes this difficult and error-prone.

More information from a rather opinionated but informed article:

[...] However, most real-life programs access Xlib through higher-level libraries, and the libraries do not initialize Xlib threading on their behalf. Today, most programs with multiple X11 connections and multiple threads are buggy.

That said, multi-threaded CUDA should still be an option: one thread can step into CUDA once it is done with OpenGL. That way CUDA can still make progress while another thread renders the last frame, and during the critical period that delays the next frame, no thread sits idle waiting. Of course, if you truly have to re-join before and after each frame is rendered, you are just paying the cost of context switches without the benefit of concurrency; in that case there is no need for threading.
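As a sketch of that overlap (computing frame N+1 on a worker while the main thread draws frame N), assuming hypothetical computeFrame()/drawFrame() helpers, a placeholder Buffer type, and a hand-rolled render loop instead of glutMainLoop for simplicity:

#include <future>
#include <utility>
#include <atomic>

struct Buffer { float* d_data = nullptr; };           // stand-in for the real image buffer

void computeFrame(Buffer*) { /* CUDA step for the next frame */ }
void drawFrame(const Buffer&) { /* OpenGL draw + buffer swap (blocks on VSYNC) */ }

std::atomic<bool> running{true};

void pipelineLoop(){
    Buffer front, back;
    auto pending = std::async(std::launch::async, computeFrame, &back);
    while(running){
        drawFrame(front);            // render frame N; the compute of frame N+1 overlaps this
        pending.get();               // join only after frame N has already been submitted
        std::swap(front, back);      // the finished result becomes the next frame to draw
        pending = std::async(std::launch::async, computeFrame, &back);
    }
}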

You may also benefit from reading the documentation or some examples of <atomic>, especially std::atomic_bool, for shared flags between threads. Used correctly, atomics can signal state between threads safely and at very little cost.
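
A minimal sketch of that kind of flag, assuming the same one-producer/one-consumer handoff as in the question (the names frameReady, computeLoop and onDisplay are illustrative):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic_bool frameReady{false};

void computeLoop(){
    while(true){
        // ... produce the next frame with CUDA ...
        frameReady.store(true, std::memory_order_release);   // publish the result
        while(frameReady.load(std::memory_order_acquire))    // wait until it has been consumed
            std::this_thread::sleep_for(std::chrono::microseconds(100));
    }
}

void onDisplay(){  // called once per VSYNC by the rendering loop
    if(frameReady.load(std::memory_order_acquire)){
        // ... copy/bind the freshly computed data ...
        frameReady.store(false, std::memory_order_release);  // hand the buffer back
    }
    // ... draw and swap buffers ...
}

The release/acquire pairing is what makes the otherwise unsynchronised buffer accesses safe: whichever thread observes the flag value it was waiting for is also guaranteed to see the other thread's earlier writes.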