9
votes

I'm designing the kernel (which I'm going to call the "core" just to be different, but it's basically the same thing) for an OS I'm working on. The specifics of the OS itself are irrelevant if I can't get multi-tasking, memory management, and other basic things up and running, so I need to work on that first. I have some questions about designing a malloc routine.

I figure that malloc() is either going to be part of the kernel itself (I'm leaning towards this) or part of the program, but I'm going to have to write my own implementation of the C standard library either way, so I get to write a malloc. My question in this regard is actually rather simple: how does C (or C++) manage its heap?

What I've always been taught in theory classes is that the heap is an ever-expanding piece of memory, starting at a specified address and in a lot of senses behaving like a stack. I know that variables declared in global scope are at the beginning, that more variables are "pushed" onto the heap as they are declared in their respective scopes, and that variables which go out of scope are simply left in the memory space, with that space marked as free so the heap can expand further if it needs to.

What I need to know is, how on earth does C actually handle a dynamically expanding heap in this manner? Does a compiled C program make its own calls to a malloc routine and handle its own heap, or do I need to provide it with an automatically expanding space? Also, how does the C program know where the heap begins?

Oh, and I know that the same concepts apply to other languages, but I would like any examples to be in C/C++ because those are the languages I'm most comfortable with. I'd also rather not worry about other things such as the stack; I think I can handle that on my own.

So I suppose my real question is: other than malloc/free (which handle getting and releasing pages for themselves, etc.), does a program need the OS to provide anything else?

Thanks!

EDIT I'm more interested in how C uses malloc in relation to the heap than in the actual workings of the malloc routine itself. If it helps, I'm doing this on x86, but C is cross-platform so it shouldn't matter. ^_^

EDIT FURTHER: I understand that I may be getting my terms confused. I was taught that the "heap" was where the program stored things like global/local variables. I'm used to dealing with a "stack" in assembly programming, and I just realized that I probably mean that instead. A little research on my part shows that "heap" is more commonly used to refer to the total memory that a program has allocated for itself, or the total number (and order) of pages of memory the OS has provided.

So, with that in mind, how do I deal with an ever-expanding stack? (It does appear that my C theory class was mildly... flawed.)


7 Answers

16
votes

malloc is generally implemented in the C runtime in userspace, relying on specific OS system calls to map in pages of virtual memory. The job of malloc and free is to manage those pages of memory, which are fixed in size (typically 4 KB, but sometimes bigger), and to slice and dice them into pieces that applications can use.

See, for example, the GNU libc implementation.

For a much simpler implementation, check out the MIT operating systems class from last year. Specifically, see the final lab handout and take a look at lib/malloc.c. This code runs on JOS, the operating system developed in the class. The way it works is that it reads through the page tables (provided read-only by the OS), looking for unmapped virtual address ranges. It then uses the sys_page_alloc and sys_page_unmap system calls to map and unmap pages into the current process.
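To give a feel for the overall shape of an allocator like that, here is a very rough sketch of a page-backed bump allocator. The find_unmapped_range() and sys_page_alloc() calls are hypothetical stand-ins (loosely inspired by the JOS-style calls mentioned above), not anyone's real API, and a real malloc would of course track freed blocks as well.

/* Sketch only: a trivial allocator that maps pages on demand and bump-allocates
 * out of them. find_unmapped_range() and sys_page_alloc() are assumed to be
 * provided by the OS; their names and signatures are made up for this example. */
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

extern void *find_unmapped_range(size_t npages);  /* scan page tables for a free VA range */
extern int   sys_page_alloc(void *va);            /* ask the kernel to map a page at va   */

static uint8_t *heap_cur;   /* next free byte in the currently mapped run */
static uint8_t *heap_end;   /* end of the currently mapped run            */

void *simple_malloc(size_t nbytes)
{
    nbytes = (nbytes + 15u) & ~(size_t)15u;       /* keep returned pointers 16-byte aligned */

    if (heap_cur == NULL || (size_t)(heap_end - heap_cur) < nbytes) {
        /* Not enough mapped space left: find a free virtual range and map pages into it. */
        size_t npages = (nbytes + PAGE_SIZE - 1) / PAGE_SIZE;
        uint8_t *va = find_unmapped_range(npages);
        if (va == NULL)
            return NULL;
        for (size_t i = 0; i < npages; i++)
            if (sys_page_alloc(va + i * PAGE_SIZE) < 0)
                return NULL;
        heap_cur = va;
        heap_end = va + npages * PAGE_SIZE;
    }

    void *p = heap_cur;                           /* hand out the next chunk of the run */
    heap_cur += nbytes;
    return p;
}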

14
votes

There are multiple ways to tackle the problem.

Most often, C programs have their own malloc/free implementation, which handles the small objects. Initially (and whenever that memory is exhausted) the memory manager asks the OS for more memory. Traditional ways to do this are mmap and sbrk on the Unix variants (GlobalAlloc / LocalAlloc on Win32).

I suggest you take a look at the Doug Lea memory allocator (google: dlmalloc) from a memory provider's (e.g. the OS's) point of view. That allocator is top notch and has hooks for all the major operating systems. If you want to know what a high-performance allocator expects from an OS, this code is your first choice.
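As a rough illustration of what such a hook can look like, here is a minimal sbrk-style routine built on top of a hypothetical core_grow_heap() kernel call; the kernel interface is invented for this sketch, but dlmalloc really can be pointed at a routine like this via its MORECORE macro.

/* Sketch only: an sbrk-like hook an allocator such as dlmalloc could use.
 * core_grow_heap() is a hypothetical kernel call that maps or unmaps pages so
 * that the heap arena ends at the requested address. */
#include <stddef.h>
#include <stdint.h>

extern int core_grow_heap(void *new_break);   /* hypothetical kernel interface */

static uint8_t *heap_break;                    /* current end of the heap arena; assume it
                                                  was initialised to the arena's start    */

void *my_sbrk(ptrdiff_t increment)
{
    uint8_t *old_break = heap_break;
    if (core_grow_heap(heap_break + increment) < 0)
        return (void *)-1;                     /* sbrk's traditional failure value */
    heap_break += increment;
    return old_break;
}

/* dlmalloc could then be built with something like
 *   #define MORECORE my_sbrk
 * so all of its "give me more memory" requests funnel through this hook. */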

4
votes

Are you confusing the heap and the stack?

I ask because you mention "an ever expanding piece of memory", scope and pushing variables on the heap as they are declared. That sure sounds like you are actually talking about the stack.

In the most common C implementations, declarations of automatic variables like

int i;

are generally going to result in i being allocated on the stack. In general malloc won't get involved unless you explicitly invoke it, or some library call you make invokes it.
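To make the distinction concrete, here is a tiny sketch-level example: the automatic variable lives on the stack and disappears when the function returns, while the malloc'd block lives on the heap until it is explicitly freed.

#include <stdlib.h>

void example(void)
{
    int i = 42;                    /* automatic: storage on the stack  */
    int *p = malloc(sizeof *p);    /* dynamic: storage on the heap     */

    if (p != NULL) {
        *p = i;
        /* ... p could be passed around and outlive this call if we didn't free it ... */
        free(p);                   /* heap memory must be released explicitly */
    }
}                                  /* i's stack slot is reclaimed automatically here */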

I'd recommend looking at "Expert C Programming" by Peter van der Linden for background on how C programs typically work with the stack and the heap.

1
votes

Compulsory reading: Knuth, The Art of Computer Programming, Volume 1, Chapter 2, Section 2.5 (Dynamic Storage Allocation). Otherwise, you could read Kernighan & Ritchie's "The C Programming Language" to see one implementation, or Plauger's "The Standard C Library" to see another.
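For a flavour of what you will find there, here is a compressed sketch in the spirit of the K&R free-list allocator (not the book's exact code). The core_more_memory() call is a hypothetical stand-in for whatever sbrk/mmap-like primitive your core ends up providing.

/* Sketch of a K&R-style first-fit allocator: a circular, address-ordered free
 * list of blocks, each preceded by a small header. core_more_memory() is an
 * assumed OS hook that returns a chunk of raw, suitably aligned memory. */
#include <stddef.h>

typedef union header {            /* block header, aligned via the union       */
    struct {
        union header *next;       /* next block on the circular free list      */
        size_t        units;      /* size of this block, in header-sized units */
    } s;
    long double align;            /* forces worst-case alignment               */
} Header;

static Header  base;              /* degenerate list head                      */
static Header *freep = NULL;      /* start of the free list                    */

extern void *core_more_memory(size_t bytes);   /* hypothetical OS hook         */
void kr_free(void *ap);

static Header *more_core(size_t nunits)
{
    /* A real allocator would round this request up to amortise the OS calls. */
    Header *up = core_more_memory(nunits * sizeof(Header));
    if (up == NULL)
        return NULL;
    up->s.units = nunits;
    kr_free((void *)(up + 1));    /* insert the fresh block into the free list */
    return freep;
}

void *kr_malloc(size_t nbytes)
{
    /* Round up to whole header-sized units, plus one unit for the header itself. */
    size_t nunits = (nbytes + sizeof(Header) - 1) / sizeof(Header) + 1;
    Header *prevp = freep;

    if (prevp == NULL) {                       /* first call: build an empty list  */
        base.s.next = freep = prevp = &base;
        base.s.units = 0;
    }
    for (Header *p = prevp->s.next; ; prevp = p, p = p->s.next) {
        if (p->s.units >= nunits) {            /* big enough                       */
            if (p->s.units == nunits) {        /* exact fit: unlink the block      */
                prevp->s.next = p->s.next;
            } else {                           /* otherwise split off the tail end */
                p->s.units -= nunits;
                p += p->s.units;
                p->s.units = nunits;
            }
            freep = prevp;
            return (void *)(p + 1);            /* user memory starts past the header */
        }
        if (p == freep)                        /* wrapped around: ask for more     */
            if ((p = more_core(nunits)) == NULL)
                return NULL;
    }
}

void kr_free(void *ap)
{
    Header *bp = (Header *)ap - 1;             /* step back to the block header    */
    Header *p;

    /* Walk the circular list to find where this block belongs (address order).   */
    for (p = freep; !(bp > p && bp < p->s.next); p = p->s.next)
        if (p >= p->s.next && (bp > p || bp < p->s.next))
            break;                             /* block goes at the start or end   */

    if (bp + bp->s.units == p->s.next) {       /* coalesce with the upper neighbour */
        bp->s.units += p->s.next->s.units;
        bp->s.next = p->s.next->s.next;
    } else {
        bp->s.next = p->s.next;
    }
    if (p + p->s.units == bp) {                /* coalesce with the lower neighbour */
        p->s.units += bp->s.units;
        p->s.next = bp->s.next;
    } else {
        p->s.next = bp;
    }
    freep = p;
}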

I believe that what you need to do inside your core will be somewhat different from what the programs outside the core see. In particular, the in-core memory allocation for programs will be dealing with virtual memory, etc., whereas the programs outside the core simply see the results of what the core has provided.

1
votes

Read about virtual memory management (paging). It's highly CPU-specific, and every OS implements VM management separately for each supported CPU. If you're writing your OS for x86/amd64, read the respective Intel and AMD manuals.
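As a taste of what those manuals describe, here is a rough sketch of mapping a single 4 KB page under classic 32-bit (non-PAE) x86 paging. The page_directory array, the alloc_phys_page() helper, and the assumption that page-table memory is identity-mapped are all inventions for this example; the real bit layout is in the Intel/AMD manuals.

/* Sketch only: map one 4 KB page at virtual address 'virt' to physical 'phys'.
 * Assumes 32-bit non-PAE paging and that page-table memory is identity-mapped,
 * so physical addresses can be dereferenced directly. */
#include <stdint.h>

#define PAGE_PRESENT 0x1u      /* bit 0: entry is valid         */
#define PAGE_WRITE   0x2u      /* bit 1: writable               */
#define PAGE_USER    0x4u      /* bit 2: accessible from ring 3 */

extern uint32_t  page_directory[1024];   /* hypothetical: the 4 KB-aligned page directory      */
extern uint32_t *alloc_phys_page(void);  /* hypothetical: returns a zeroed, 4 KB-aligned page  */

void map_page(uint32_t virt, uint32_t phys, uint32_t flags)
{
    uint32_t pd_index = virt >> 22;             /* top 10 bits pick the page-directory entry */
    uint32_t pt_index = (virt >> 12) & 0x3FFu;  /* next 10 bits pick the page-table entry    */
    uint32_t *page_table;

    if (!(page_directory[pd_index] & PAGE_PRESENT)) {
        /* No page table covers this 4 MB region yet, so allocate one. */
        page_table = alloc_phys_page();
        page_directory[pd_index] = (uint32_t)page_table | PAGE_PRESENT | PAGE_WRITE | flags;
    } else {
        page_table = (uint32_t *)(page_directory[pd_index] & ~0xFFFu);
    }

    page_table[pt_index] = (phys & ~0xFFFu) | PAGE_PRESENT | flags;

    /* Invalidate the TLB entry for this address so the CPU sees the new mapping. */
    __asm__ volatile ("invlpg (%0)" :: "r"(virt) : "memory");
}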

0
votes

Generally, the C library handles the implementation of malloc, requesting memory from the OS (either via anonymous mmap or, in older systems, sbrk) as necessary. So your kernel side of things should handle allocating whole pages via something like one of those means.
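For instance, a user-space allocator on a Unix-like system might grab its raw pages with something like the sketch below (anonymous mmap; the kernel rounds the length up to whole pages).

#include <stddef.h>
#include <sys/mman.h>

/* Sketch: get a run of zeroed, page-aligned memory straight from the kernel. */
static void *get_pages_from_os(size_t bytes)
{
    void *p = mmap(NULL, bytes,
                   PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS,   /* not backed by any file */
                   -1, 0);
    return (p == MAP_FAILED) ? NULL : p;
}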

Then it's up to malloc to dole out memory in a way that doesn't fragment the free memory too much. I'm not too au fait with the details of this, though; the term arena comes to mind. If I can hunt down a reference, I'll update this post.

0
votes

Danger Danger!! If you're even considering attempting kernel development, you should be very aware of the cost of your resources and their relatively limited availability...

One thing about recursion is that it's very expensive (at least in kernel land); you're not going to see many functions written to simply recurse unabated, or else your kernel will panic.

To underscore my point here (at stackoverflow.com, heh), check out this post from the NT Debugging blog about kernel stack overflows. Specifically:

· On x86-based platforms, the kernel-mode stack is 12K.

· On x64-based platforms, the kernel-mode stack is 24K. (x64-based platforms include systems with processors using the AMD64 architecture and processors using the Intel EM64T architecture).

· On Itanium-based platforms, the kernel-mode stack is 32K with a 32K backing store.

That's really not a whole lot;

The Usual Suspects


1. Using the stack liberally (see the sketch after this list).

2. Calling functions recursively.
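To make the first of those concrete, here is a small illustrative sketch; kcore_alloc/kcore_free are hypothetical kernel-heap routines invented for this example.

/* Sketch only: why large locals hurt on a 12K kernel stack, and the usual fix. */
#include <stddef.h>

extern void *kcore_alloc(size_t bytes);   /* hypothetical kernel-heap allocator */
extern void  kcore_free(void *p);

/* Bad: 8 KB of locals burns most of a 12K x86 kernel-mode stack. */
int parse_request_stack_hungry(void)
{
    char buffer[8192];        /* dangerously large automatic array */
    (void)buffer;
    return 0;
}

/* Better: large, short-lived buffers come from the kernel heap instead. */
int parse_request(void)
{
    char *buffer = kcore_alloc(8192);
    if (buffer == NULL)
        return -1;
    /* ... use buffer ... */
    kcore_free(buffer);
    return 0;
}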

If you read over the blog a bit, you will see how hard kernel development can be, with a rather unique set of issues. Your theory class was not wrong; it was simply, simple. ;)

Going from theory -> kernel development is about as significant a context switch as possible (perhaps save some hypervisor interaction in the mix!!).

Anyhow, never assume; validate and test your expectations.