Lightweight Cooperative Multitasking with Boost.Context

I’m writing this in response to Kenny Kerr’s article Lightweight Cooperative Multitasking with C++ in the August 2012 issue of MSDN Magazine, in which Kenny describes a method of implementing cooperative multitasking in C++.

What is Cooperative Multitasking?

This is a method of achieving concurrency in an application without using multiple threads. The basic idea is that you can have multiple tasks running on just a single thread. Now of course only one of these tasks can be running at any one time, so each task must perform some work and then explicitly yield to another task.

Ideally whatever method is used for beginning tasks and yielding should automatically provide a separate context for each task. That is, each task should have its own stack and register data.

Why Kenny’s Article is Cool

Kenny’s article is all about how you can implement your own cooperative multitasking in C++ without having to resort to specific API calls or a separate library. Yes, it’s based on a previously known technique by Tom Duff, but it demonstrates that an awful lot can be done with a relatively small amount of code. This is why I like C++; it gives you the power to do things like this.

As cool as it is, I don’t think I’ll be adding it to my production code any time soon. Besides the problems I’d have getting it past code review, I found a better solution…

Enter Boost.Context

At the end of the article, Kenny reveals his hopes (and dreams?) of a day when he can implement cooperative multitasking in C++ without having to resort to unmaintainable macro hacks.

As luck would have it, on the 20th August 2012 version 1.51 of the Boost C++ Libraries was released, containing the new Boost.Context library by Oliver Kowalke. It’s a fairly low-level library, but it does provide basic cooperative multitasking with a separate context for each task.
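
To get a feel for the API before diving into the full example, here is about the smallest Boost.Context program I could come up with. Treat it as a sketch: it is assembled from the same boost::ctx calls used in the code below rather than lifted from the library’s documentation. main gives a single task its own stack, jumps into it, and the task jumps straight back.

#include <boost/context/all.hpp>
#include <stdio.h>

namespace ctx = boost::ctx;

ctx::fcontext_t fc_main, fc_hello;

// The task: print a message, then jump back to the main context.
void hello_task(intptr_t)
{
	printf("hello from the task\n");
	ctx::jump_fcontext(&fc_hello, &fc_main, 0);
}

int main()
{
	// Give the task its own stack, then bind the context to the task function.
	ctx::stack_allocator alloc;
	fc_hello.fc_stack.base = alloc.allocate(ctx::minimum_stacksize());
	ctx::make_fcontext(&fc_hello, hello_task);

	// Switch to the task; when it yields, execution resumes here.
	ctx::jump_fcontext(&fc_main, &fc_hello, 0);
	printf("back in main\n");
	return 0;
}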

Average Task, Boost.Context Style

In his article, Kenny implements a basic application to take numeric input from the command line and produce some statistics. It’s not what I’d call a classic concurrency problem, but it serves the purpose of demonstrating the tasks yielding control only when appropriate. I’ve taken the code and re-implemented it using Boost.Context. Enjoy!

#include <boost/context/all.hpp>
#include <stdio.h>

namespace ctx = boost::ctx;

ctx::fcontext_t fc_avg, fc_input, fc_main;
bool b_quit = false;

// --- The Average Task ---
// Computes the average of the input data and 
// then yields execution back to the main task
//-------------------------
// struct average_args - data for the task
// average_yield() - wrapper for yielding execution back to the main task
// average_task() - the task logic
// ------------------------
struct average_args
{
	int * source;
	int sum;
	int count;
	int average;
	int task_;
};
void average_yield() { ctx::jump_fcontext(&fc_avg, &fc_main, 0); }
void average_task(intptr_t p)
{
	average_args* args = (average_args*)p;
	args->sum = 0;
	args->count = 0;
	args->average = 0;
	while (true)
	{
		args->sum += *args->source;
		++args->count;
		args->average = args->sum / args->count;
		average_yield();
	}

	printf("ERROR: should not reach the end of average function\n");
}

// --- The Input Task ---
// Reads a number as input from the console and 
// then yields execution back to the main task
// ----------------------
// struct input_args - data for the task
// input_yield() - wrapper for yielding execution back to the main task
// input_task() - the task logic
// ----------------------
struct input_args
{
	average_args* aa;
	int * target;
	int task_;
};
void input_yield() { ctx::jump_fcontext(&fc_input, &fc_main, 0); }
void input_task(intptr_t p)
{
	input_args* pia = (input_args*)p;

	while (true)
	{
		printf("number: ");
		if (!scanf_s("%d", pia->target))
		{
			b_quit = true;
			return;
		}
    
		input_yield();
	}

	printf("ERROR: should not reach the end of input function\n");
}

int main()
{
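	// 'share' is written by the input task and read by the average task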
	int share = -1;
	average_args aa = {&share};
	input_args ia = {&aa, &share};
	ctx::stack_allocator alloc;

	// construct the input task
	fc_input.fc_stack.base = alloc.allocate(ctx::minimum_stacksize());
	ctx::make_fcontext(&fc_input, input_task);

	// construct the average task
	fc_avg.fc_stack.base = alloc.allocate(ctx::minimum_stacksize());
	ctx::make_fcontext(&fc_avg, average_task);

	while (!b_quit)
	{
		ctx::jump_fcontext( &fc_main, &fc_input, (intptr_t) &ia);
		ctx::jump_fcontext( &fc_main, &fc_avg, (intptr_t) &aa);
		printf("sum=%d count=%d average=%d\n", aa.sum, aa.count, aa.average);
	}
	
	printf("main: done\n");
}

Footnotes

The Boost.Context library has been described as a good candidate for providing low-level cooperative multitasking support to the developing Boost.Coroutine library. Coroutine will provide a higher-level, more abstract way of applying these concurrency techniques in your code.

I’m looking forward to Kenny’s next article, in which he has promised to show what the Visual C++ team is working on: what must be some kind of cooperative multitasking that he’s called ‘resumable functions’.

5 thoughts on “Lightweight Cooperative Multitasking with Boost.Context”

  1. What would be the benefit of using cooperative multitasking over linear execution, considering all the overhead of creation and maintenance of each of the contexts?

  2. Hi Disha. Briefly, the advantages as I see them are below (most important at the top):

    1. Deterministic program flow; I know that my context will only switch when I tell it to yield. In multithreading, the switch may happen at any point.
    2. Better performance; switching a coop m-tasking context will be done entirely in my program, but switching between threads typically involves the o/s kernel.
    3. Fewer synchronisation primitives; you’ll generally need more synchronisation primitives or lock-free data structures in a typical multithreaded program, as you don’t know when the threads will be switched (or even whether they’ll be running in parallel).

    Of course, we need to consider the disadvantages as well:

    1. More logic and code needed to manage the contexts. The complexity of the logic will increase greatly as you increase the number of contexts.
    2. You need to be a bit more aware of the architecture that your program will be running on. Using coop m-tasking for everything will stop your program from taking advantage of a multi-core CPU.

    So is coop m-tasking better than m-threading? As always, it depends…

    HTH

  3. Nice article, thank you.
    One question I have about the implementation: how will it run with multiple independent tasks sharing the CPU? In your code the tasks yield to main all the time. If I want to write code in which a yield() call would transfer control to some other waiting task, will I have to modify the use of ctx::jump_fcontext() and, for instance, select the context of the next task from a queue?
    I’ve looked at Boost.Coroutine, but it too, it seems, can only give control back to its caller.

  4. Hi ilya. I think you have two options: either write a scheduler/controller as you suggest, or have each task jump directly to the next task, if that’s possible. Assuming it is possible, my preference would still be for a scheduler/controller to dictate the order the tasks are switched to, mainly because you then have one place in your code that describes the overall task-switching model you are using (there’s a rough sketch of such a controller after these comments). If the tasks were to switch to other tasks, trying to understand the order of execution would involve tracing the jumps through the code.

    boost::coroutine is built on top of boost::context, so it’s not surprising that you see the same behaviour. coroutine provides a better abstraction for real-world problems than context; it has support for strongly-typed inputs and outputs that context lacks, so should be preferred.
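
As a rough illustration of the scheduler/controller idea from the reply above, here is a sketch of a round-robin controller built from the same boost::ctx calls as the main example. The task struct, run_queue and yield_to_scheduler() are names I have made up for the sketch, and a real controller would also remove finished tasks from the queue; treat it as a starting point rather than a recommended design.

#include <boost/context/all.hpp>
#include <stdio.h>
#include <vector>

namespace ctx = boost::ctx;

ctx::fcontext_t fc_sched;        // the controller's own context

struct task
{
	ctx::fcontext_t fc;          // the task's context
	const char*     name;
};

std::vector<task*> run_queue;    // tasks waiting for their next turn

// Yield from a task back to the controller.
void yield_to_scheduler(task* t) { ctx::jump_fcontext(&t->fc, &fc_sched, 0); }

// A trivial task body: announce itself a few times, yielding between steps.
void task_body(intptr_t p)
{
	task* t = (task*)p;
	for (int i = 0; i < 3; ++i)
	{
		printf("%s: step %d\n", t->name, i);
		yield_to_scheduler(t);
	}
	// A real controller would mark the task as finished here; this sketch
	// just keeps yielding so the round-robin loop below stays simple.
	while (true) yield_to_scheduler(t);
}

int main()
{
	ctx::stack_allocator alloc;

	task a = {}; a.name = "task A";
	task b = {}; b.name = "task B";
	task* tasks[] = { &a, &b };

	// Give every task its own stack and context, then queue it.
	for (int i = 0; i < 2; ++i)
	{
		tasks[i]->fc.fc_stack.base = alloc.allocate(ctx::minimum_stacksize());
		ctx::make_fcontext(&tasks[i]->fc, task_body);
		run_queue.push_back(tasks[i]);
	}

	// Round-robin: give each queued task a turn, ten turns in total.
	for (int turn = 0; turn < 10; ++turn)
	{
		task* next = run_queue[turn % run_queue.size()];
		ctx::jump_fcontext(&fc_sched, &next->fc, (intptr_t)next);
	}

	printf("scheduler: done\n");
	return 0;
}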
