> I'm trying to disable dynamic memory allocation altogether in a low-resource application.
This is unusual. In general, people limit the dynamic memory used by their (hosted) application (how to do that is a different question, often operating-system specific). Why do you want to disable it entirely? As explained below, dynamic memory is very likely to be used internally by your C standard library implementation.
Read the C11 standard n1570 (or the C99 one) carefully.
There are basically two "modes" or two "dialects" of C: the hosted C language and the freestanding C language. The exact wording in §4 Conformance of the standard is:

> The two forms of conforming implementation are hosted and freestanding. A conforming hosted implementation shall accept any strictly conforming program. A conforming freestanding implementation shall accept any strictly conforming program in which the use of the features specified in the library clause (clause 7) is confined to the contents of the standard headers `<float.h>`, `<iso646.h>`, `<limits.h>`, `<stdalign.h>`, `<stdarg.h>`, `<stdbool.h>`, `<stddef.h>`, `<stdint.h>`, and `<stdnoreturn.h>`.
And `malloc` is defined (declared in `<stdlib.h>`) and should be available in hosted implementations, but is usually not available in freestanding implementations (that is implementation specific).
Apparently, you are using a freestanding implementation (since you don't have the `malloc` that the standard requires of hosted implementations). GCC has the `-ffreestanding` mode for that; you should use it. Then `<stdlib.h>` is not available, and your code cannot use the standard `malloc` in that mode (unless it explicitly declares `malloc` itself).
In a hosted implementation, you usually can redefine your own `malloc` (provided it still has all the properties required by the standard). Then you might use something like the sketch below: an always failing, but still standard conforming, `malloc` implementation.
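A minimal sketch of such an always failing allocator could look like this (treat it as an illustration; whether your toolchain accepts overriding the C library's allocator this way depends on how you link):

```c
/* Sketch of an always-failing allocator: every allocation request fails,
   which is still conforming, since malloc is allowed to return NULL
   whenever it cannot provide memory. */
#include <stddef.h>
#include <errno.h>

void *malloc(size_t size)
{
    (void)size;
    errno = ENOMEM;   /* optional: many implementations set ENOMEM on failure */
    return NULL;
}

void *calloc(size_t nmemb, size_t size)
{
    (void)nmemb;
    (void)size;
    errno = ENOMEM;
    return NULL;
}

void *realloc(void *ptr, size_t size)
{
    (void)ptr;
    (void)size;
    errno = ENOMEM;
    return NULL;
}

void free(void *ptr)
{
    /* free(NULL) is a no-op; since this malloc never returns a
       non-null pointer, there is never anything to release. */
    (void)ptr;
}
```

With that, any code path that ends up calling `malloc` fails visibly at run time instead of quietly allocating.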
Finally, if you use a GNU binutils linker, you can always fail the link if your object files contain any external reference to `malloc`. That is trivial to implement, by adding some specific recipe or rule in your `Makefile` (probably using `nm`), or in any decent build automation tool (if your build automation doesn't permit such a check just before linking, switch to one that does: `make`, `ninja`, `omake` and many others); see the sketch after this paragraph.
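For illustration, such a pre-link check might look like the following GNU `make` recipe. The object file and program names are hypothetical placeholders; `nm -u` lists the undefined (external) symbols of each object file, and the build fails if any of them is `malloc` or `calloc`:

```make
# Hypothetical pre-link check (main.o, driver.o, myprog are placeholders).
# Recipe lines must start with a tab character.
OBJS := main.o driver.o

.PHONY: check-no-malloc
check-no-malloc: $(OBJS)
	@if nm -u $(OBJS) | grep -w -E 'malloc|calloc'; then \
	    echo "error: object files reference the heap allocator" >&2; \
	    exit 1; \
	fi

myprog: check-no-malloc $(OBJS)
	$(CC) -o $@ $(OBJS)
```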
If you want to detect any use of `malloc` at compile time in a hosted environment, you might write your own GCC plugin doing so (I feel that is overkill, but the choice is yours). Or, much simpler in practice, use some script (e.g. with `grep`) detecting occurrences of the `malloc` or `calloc` words in your C source code.
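A crude sketch of such a source-level check (the `src/*.c` path is a placeholder, and a plain text search will of course also flag comments or strings mentioning those names):

```sh
# Exit with status 1 if any source file mentions malloc, calloc or realloc.
! grep -n -w -E 'malloc|calloc|realloc' src/*.c
```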
Notice that in most hosted implementations, in practice, standard functions like `fprintf`, `fopen`, `printf`, `fputc` (and many others) are internally, at least sometimes, using `malloc`. Concretely, if your program (above a hosted implementation) uses `fopen`, it is very likely to use `malloc` indirectly, since inside a standard `FILE` there is generally some heap-allocated buffer that `fopen` is `malloc`-ing (and it usually gets `free`-d at `fclose` time).
> Is there a way to enforce that no dynamic memory allocation can take place and fail the build if so?
In practice, yes. Just add some script to your `Makefile` doing such a check, as sketched above: either use `grep` on your source files, or `nm` on your object files. But remember that if you use the standard `fopen` (from `<stdio.h>`) in your code, it is usually doing some `malloc` internally.
Alternatively, define your own always failing `malloc` and `calloc` and a trivial `free` (like the sketch above).
On many operating systems (the one used by your application, if any), there is a way to limit the heap memory at run time. Linux has setrlimit(2) with `RLIMIT_DATA`.
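A minimal sketch of that approach on Linux follows; the 64 KiB limit is an arbitrary example value, and note that, depending on the kernel version and the allocator, large allocations served through `mmap` may need `RLIMIT_AS` instead:

```c
/* Sketch: restrict the data segment so that heap growth fails at run time. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* 64 KiB is just an example value, not a recommendation. */
    struct rlimit rl = { .rlim_cur = 64 * 1024, .rlim_max = 64 * 1024 };

    if (setrlimit(RLIMIT_DATA, &rl) != 0) {
        perror("setrlimit");
        return EXIT_FAILURE;
    }

    /* Allocations beyond the limit now fail instead of succeeding. */
    void *p = malloc(1024 * 1024);
    printf("1 MiB malloc %s\n", p ? "succeeded" : "failed");
    free(p);
    return 0;
}
```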
If you are using some free software or open-source C standard library implementation (in a hosted environment), such as GNU glibc or musl-libc, you could study its source code and check whether `fopen` uses heap memory.