5 votes

I'm working on a design that uses a gatekeeper task to access a shared resource. The basic design I have right now is a single queue that the gatekeeper task receives from, with multiple tasks putting requests into it.

This is a memory limited system, and I'm using FreeRTOS (Cortex M3 port).

The problem is as follows: handling these requests asynchronously is fairly simple. The requesting task queues its request and goes about its business, polling, processing, or waiting for other events. To handle these requests synchronously, I need a mechanism for the requesting task to block on, so that once the request has been handled, the gatekeeper can wake the task that made the request.

The easiest design I can think of would be to include a semaphore in each request, but given the memory limitations and the rather large size of a semaphore in FreeRTOS, this isn't practical.

What I've come up with is using the task suspend and task resume feature to manually block the task, passing a handle to the gatekeeper with which it can resume the task when the request is completed. There are some issues with suspend/resume, though, and I'd really like to avoid them. A single resume call will wake a task no matter how many times it has been suspended by other calls, and this can create undesired behavior.

Some simple pseudo-C to demonstrate the suspend/resume method.

void gatekeeper_blocking_request(void)
{
     put_request_in_queue(request);
     task_suspend(this_task);
}

void gatekeeper_request_complete_callback(request)
{
     task_resume(request->task);
}
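
For concreteness, here is roughly how that pseudo-C could map onto the real FreeRTOS API. The Request_t structure, its xRequestingTask field, and the xGatekeeperQueue handle are assumptions made for this sketch, not part of the original design:

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

/* Hypothetical request structure; the requesting task stores its own handle
   so the gatekeeper knows which task to resume. */
typedef struct
{
     TaskHandle_t xRequestingTask;
     /* ...request payload... */
} Request_t;

/* Assumed: a queue created elsewhere with item size sizeof(Request_t *). */
extern QueueHandle_t xGatekeeperQueue;

void gatekeeper_blocking_request(Request_t *pxRequest)
{
     pxRequest->xRequestingTask = xTaskGetCurrentTaskHandle();
     xQueueSend(xGatekeeperQueue, &pxRequest, portMAX_DELAY);

     /* Block until the gatekeeper resumes this task. Note that if the
        gatekeeper finishes the request before this line executes, the resume
        is lost and the task stays suspended, which is another mark against
        suspend/resume for this purpose. */
     vTaskSuspend(NULL);
}

void gatekeeper_request_complete_callback(Request_t *pxRequest)
{
     vTaskResume(pxRequest->xRequestingTask);
}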

A workaround that I plan to use in the meantime is to rely on the asynchronous calls and implement the blocking entirely in each requesting task. The gatekeeper will execute a supplied callback when the operation completes, and that callback can then post to the task's main queue, a specific semaphore, or whatever is needed. Having blocking request calls is essentially a convenience feature, so that each requesting task doesn't need to implement this itself.

Pseudo-C to demonstrate the task-specific blocking, but this needs to be implemented in each task.

void requesting_task(void)
{
     while(1)
     {
         gatekeeper_async_request(callback);
         pend_on_semaphore(sem);
     }
}

void callback(request)
{
     post_to_semaphore(sem);
}
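
As a sketch of what that per-task blocking could look like in practice, here is one possible shape using a binary semaphore owned by the requesting task. The gatekeeper_async_request() signature with a context argument is an assumption made for this illustration:

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

/* Assumed async API: the gatekeeper invokes the callback, with the supplied
   context pointer, once the request has been handled. */
extern void gatekeeper_async_request(void (*callback)(void *), void *context);

static void request_done_callback(void *context)
{
     /* The context is the requesting task's own semaphore. */
     xSemaphoreGive((SemaphoreHandle_t)context);
}

void requesting_task(void *pvParameters)
{
     /* One small binary semaphore per task, created once and reused for every request. */
     SemaphoreHandle_t request_done = xSemaphoreCreateBinary();

     while(1)
     {
         gatekeeper_async_request(request_done_callback, request_done);
         xSemaphoreTake(request_done, portMAX_DELAY);   /* block until the callback fires */
     }
}

This keeps the gatekeeper API purely asynchronous and leaves the choice of signaling mechanism (a semaphore here, but a queue post or event would work just as well) to each requesting task.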

Maybe the best solution is just not to implement blocking in the gatekeeper API at all, and force each task to handle it. That will increase the complexity of each task's flow, though, and I was hoping I could avoid it. In most cases, a caller will want to block until the operation is finished.

Is there some construct that I'm missing, or even just a better term for this type of problem that I can google? I haven't come across anything like this in my searches.

Additional remarks - Two reasons for the gatekeeper task:

  1. Large stack space required. Rather than adding this requirement to each task, the gatekeeper can have a single stack with all the memory required.

  2. The resource is not always accessible from this CPU. The gatekeeper synchronizes not only tasks on this CPU, but tasks outside the CPU as well.

2
Using a semaphore would be the traditional approach for the two tasks to synchronize. Do I understand correctly from your text that your (only) objection to the semaphore is its size in FreeRTOS? I haven't used FreeRTOS very recently, but I don't remember semaphores being very costly. And it looks like your proposed solution (with the callback) uses semaphores anyway? I'm just a little confused, sorry if I'm being dense here. Also, I'd wrap the async post / pend sequence into a tiny function. - Dan
Dan, FreeRTOS implements semaphores as zero-length queues. A queue has four pointer members (32 bits each), five portBASE_TYPE members (which the port defines as 32 bits), and two xLists, which are 20 bytes each. The callback solution would allow each task to use its own signaling mechanism, rather than having a semaphore for each request. I now have a design that uses a mutex and retains a separate task for stack savings, but I can't find a good way to implement non-blocking requests. - rjp

2 Answers

2 votes

Use a mutex and make the gatekeeper a subroutine instead of a task.
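
A minimal sketch of what this answer suggests, assuming the resource access can be wrapped in a single function; all names here are placeholders:

#include "FreeRTOS.h"
#include "semphr.h"

extern void access_shared_resource(const void *request);   /* placeholder for the real work */

static SemaphoreHandle_t resource_mutex;

void gatekeeper_init(void)
{
     resource_mutex = xSemaphoreCreateMutex();
}

/* Any task calls this directly; the mutex serializes access, so no gatekeeper
   task or request queue is needed, and the call naturally blocks until the
   resource is free and the work is done. */
void gatekeeper_request(const void *request)
{
     xSemaphoreTake(resource_mutex, portMAX_DELAY);
     access_shared_resource(request);
     xSemaphoreGive(resource_mutex);
}

Note that the resource access then runs on each caller's stack, which trades against the stack-space reason given in the question for having a dedicated gatekeeper task.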

2 votes

It's been six years since I posted this question, and I struggled with getting the synchronization to work the way I needed. The result involved some terrible abuses of OS constructs. Even though the code works, I've considered updating it to be less abusive, and so I've looked at more elegant ways to handle this. FreeRTOS has also added a number of features in the last six years, one of which I believe provides a lightweight way to accomplish the same thing.

Direct-to-Task Notifications

Revisiting my original proposed method:

void gatekeeper_blocking_request(void)
{
     put_request_in_queue(request);
     task_suspend(this_task);
}

void gatekeeper_request_complete_callback(request)
{
     task_resume(request->task);
}

The reason this method was avoided is that the FreeRTOS task suspend/resume calls do not keep count, so several suspend calls will be negated by a single resume call. At the time, the suspend/resume feature was already being used elsewhere in the application, so this was a real possibility.

Beginning with FreeRTOS 8.2.0, direct-to-task notifications essentially provide a lightweight binary semaphore built into each task. When a notification is sent to a task, a notification value may also be set. The notification lies dormant until the notified task calls some variant of xTaskNotifyWait(); if the task has already made such a call, it is woken immediately.

The above code can be reworked slightly into the following:

 void gatekeeper_blocking_request(void)
 {
      put_request_in_queue(request);
      xTaskNotifyWait( ... );
 }

 void gatekeeper_request_complete_callback(request)
 {
      xTaskNotify( ... );
 }

This is still not an ideal method: if task notifications are used elsewhere, you may run into the same problem as with suspend/resume, where the task is woken by a source other than the one it is expecting. Since notifications were a new feature that nothing else in my code was using, though, this may work out in the revised code.
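
For reference, here is a sketch of what those calls might look like with the arguments filled in. The specific parameter values, the Request_t structure, and xGatekeeperQueue are assumptions for illustration, not taken from the original code:

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

typedef struct
{
     TaskHandle_t xRequestingTask;   /* assumed field holding the requester's handle */
     /* ...request payload... */
} Request_t;

extern QueueHandle_t xGatekeeperQueue;   /* assumed gatekeeper request queue */

void gatekeeper_blocking_request(Request_t *pxRequest)
{
     pxRequest->xRequestingTask = xTaskGetCurrentTaskHandle();
     xQueueSend(xGatekeeperQueue, &pxRequest, portMAX_DELAY);

     /* Wait for the gatekeeper's notification. No bits are cleared and the
        notification value is ignored, so this behaves like taking a binary
        semaphore. */
     xTaskNotifyWait(0, 0, NULL, portMAX_DELAY);
}

void gatekeeper_request_complete_callback(Request_t *pxRequest)
{
     /* Wake the requesting task without touching its notification value. */
     xTaskNotify(pxRequest->xRequestingTask, 0, eNoAction);
}

Unlike suspend/resume, a notification sent before the task reaches xTaskNotifyWait() is latched as pending, so the wake-up is not lost if the gatekeeper happens to finish first.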