I can print with printf as a hex or octal number. Is there a format tag to print as binary, or arbitrary base?
I am running gcc.
printf("%d %x %o\n", 10, 10, 10); //prints "10 A 12\n"
print("%b\n", 10); // prints "%b\n"
Hacky but works for me:
#define BYTE_TO_BINARY_PATTERN "%c%c%c%c%c%c%c%c"
#define BYTE_TO_BINARY(byte) \
(byte & 0x80 ? '1' : '0'), \
(byte & 0x40 ? '1' : '0'), \
(byte & 0x20 ? '1' : '0'), \
(byte & 0x10 ? '1' : '0'), \
(byte & 0x08 ? '1' : '0'), \
(byte & 0x04 ? '1' : '0'), \
(byte & 0x02 ? '1' : '0'), \
(byte & 0x01 ? '1' : '0')
printf("Leading text "BYTE_TO_BINARY_PATTERN, BYTE_TO_BINARY(byte));
For multi-byte types
printf("m: "BYTE_TO_BINARY_PATTERN" "BYTE_TO_BINARY_PATTERN"\n",
BYTE_TO_BINARY(m>>8), BYTE_TO_BINARY(m));
You need all the extra quotes, unfortunately. This approach has the efficiency risks of macros (don't pass a function call as the argument to BYTE_TO_BINARY) but avoids the memory issues and multiple invocations of strcat found in some of the other proposals here.
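For reference, a complete program using these macros might look like this (a sketch; the sample values are my own, and the two-byte example prints the high byte first):
#include <stdio.h>
/* BYTE_TO_BINARY_PATTERN / BYTE_TO_BINARY definitions from above go here */
int main(void)
{
unsigned char byte = 0xA7;   /* 10100111 */
unsigned short m = 0x12C5;   /* 00010010 11000101 */
printf("byte: " BYTE_TO_BINARY_PATTERN "\n", BYTE_TO_BINARY(byte));
printf("m:    " BYTE_TO_BINARY_PATTERN " " BYTE_TO_BINARY_PATTERN "\n",
BYTE_TO_BINARY(m >> 8), BYTE_TO_BINARY(m));
return 0;
}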
Print Binary for Any Datatype
// Assumes little endian
void printBits(size_t const size, void const * const ptr)
{
unsigned char *b = (unsigned char*) ptr;
unsigned char byte;
int i, j;
for (i = size-1; i >= 0; i--) {
for (j = 7; j >= 0; j--) {
byte = (b[i] >> j) & 1;
printf("%u", byte);
}
}
puts("");
}
Test (requires <stdio.h> for printf and <limits.h> for UINT_MAX):
int main(void)
{
int i = 23;
unsigned int ui = UINT_MAX;
float f = 23.45f;
printBits(sizeof(i), &i);
printBits(sizeof(ui), &ui);
printBits(sizeof(f), &f);
return 0;
}
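On a typical little-endian machine with 32-bit int, the first two calls print:
00000000000000000000000000010111
11111111111111111111111111111111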
Here is a quick hack to demonstrate techniques to do what you want.
#include <stdio.h> /* printf */
#include <string.h> /* strcat */
#include <stdlib.h> /* strtol */
const char *byte_to_binary
(
int x
)
{
static char b[9];
b[0] = '\0';
int z;
for (z = 128; z > 0; z >>= 1)
{
strcat(b, ((x & z) == z) ? "1" : "0");
}
return b;
}
int main
(
void
)
{
{
/* binary string to int */
char *tmp;
char *b = "0101";
printf("%d\n", strtol(b, &tmp, 2));
}
{
/* byte to binary string */
printf("%s\n", byte_to_binary(5));
}
return 0;
}
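When run, this prints:
5
00000101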
There isn't a binary conversion specifier in glibc normally.
It is possible to add custom conversion types to the printf() family of functions in glibc. See register_printf_function for details. You could add a custom %b conversion for your own use, if it simplifies the application code to have it available.
Here is an example of how to implement a custom printf format in glibc.
You could use a small table to improve speed [1]. Similar techniques are useful in the embedded world, for example, to invert a byte:
const char *bit_rep[16] = {
[ 0] = "0000", [ 1] = "0001", [ 2] = "0010", [ 3] = "0011",
[ 4] = "0100", [ 5] = "0101", [ 6] = "0110", [ 7] = "0111",
[ 8] = "1000", [ 9] = "1001", [10] = "1010", [11] = "1011",
[12] = "1100", [13] = "1101", [14] = "1110", [15] = "1111",
};
void print_byte(uint8_t byte)
{
printf("%s%s", bit_rep[byte >> 4], bit_rep[byte & 0x0F]);
}
[1] I'm mostly referring to embedded applications where optimizers are not so aggressive and the speed difference is visible.
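A minimal usage sketch for the table approach (the headers are needed for printf and uint8_t):
#include <stdio.h>
#include <stdint.h>
int main(void)
{
print_byte(0xA5);   /* prints 10100101 */
printf("\n");
return 0;
}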
Print the least significant bit and shift it out on the right. Doing this until the integer becomes zero prints the binary representation without leading zeros but in reversed order. Using recursion, the order can be corrected quite easily.
#include <stdio.h>
void print_binary(unsigned int number)
{
if (number >> 1) {
print_binary(number >> 1);
}
putc((number & 1) ? '1' : '0', stdout);
}
To me, this is one of the cleanest solutions to the problem. If you like a 0b prefix and a trailing newline character, I suggest wrapping the function.
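For example, such a wrapper might look like this (a sketch; the name print_binary_ln is my own):
void print_binary_ln(unsigned int number)
{
fputs("0b", stdout);
print_binary(number);
putc('\n', stdout);
}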
Based on @William Whyte's answer, this is a macro that provides int8, int16, int32, and int64 versions, reusing the INT8 macro to avoid repetition.
/* --- PRINTF_BYTE_TO_BINARY macros --- */
#define PRINTF_BINARY_PATTERN_INT8 "%c%c%c%c%c%c%c%c"
#define PRINTF_BYTE_TO_BINARY_INT8(i) \
(((i) & 0x80ll) ? '1' : '0'), \
(((i) & 0x40ll) ? '1' : '0'), \
(((i) & 0x20ll) ? '1' : '0'), \
(((i) & 0x10ll) ? '1' : '0'), \
(((i) & 0x08ll) ? '1' : '0'), \
(((i) & 0x04ll) ? '1' : '0'), \
(((i) & 0x02ll) ? '1' : '0'), \
(((i) & 0x01ll) ? '1' : '0')
#define PRINTF_BINARY_PATTERN_INT16 \
PRINTF_BINARY_PATTERN_INT8 PRINTF_BINARY_PATTERN_INT8
#define PRINTF_BYTE_TO_BINARY_INT16(i) \
PRINTF_BYTE_TO_BINARY_INT8((i) >> 8), PRINTF_BYTE_TO_BINARY_INT8(i)
#define PRINTF_BINARY_PATTERN_INT32 \
PRINTF_BINARY_PATTERN_INT16 PRINTF_BINARY_PATTERN_INT16
#define PRINTF_BYTE_TO_BINARY_INT32(i) \
PRINTF_BYTE_TO_BINARY_INT16((i) >> 16), PRINTF_BYTE_TO_BINARY_INT16(i)
#define PRINTF_BINARY_PATTERN_INT64 \
PRINTF_BINARY_PATTERN_INT32 PRINTF_BINARY_PATTERN_INT32
#define PRINTF_BYTE_TO_BINARY_INT64(i) \
PRINTF_BYTE_TO_BINARY_INT32((i) >> 32), PRINTF_BYTE_TO_BINARY_INT32(i)
/* --- end macros --- */
#include <stdio.h>
int main() {
long long int flag = 1648646756487983144ll;
printf("My Flag "
PRINTF_BINARY_PATTERN_INT64 "\n",
PRINTF_BYTE_TO_BINARY_INT64(flag));
return 0;
}
This outputs:
My Flag 0001011011100001001010110111110101111000100100001111000000101000
For readability you may want to add a separator, e.g.:
My Flag 00010110,11100001,00101011,01111101,01111000,10010000,11110000,00101000
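One way to do that (a sketch; it simply rebuilds the 64-bit pattern with a literal separator between the byte groups, reusing the INT8 macros above):
#define PRINTF_BINARY_PATTERN_INT64_SEP \
PRINTF_BINARY_PATTERN_INT8 "," PRINTF_BINARY_PATTERN_INT8 "," \
PRINTF_BINARY_PATTERN_INT8 "," PRINTF_BINARY_PATTERN_INT8 "," \
PRINTF_BINARY_PATTERN_INT8 "," PRINTF_BINARY_PATTERN_INT8 "," \
PRINTF_BINARY_PATTERN_INT8 "," PRINTF_BINARY_PATTERN_INT8
Used as printf(PRINTF_BINARY_PATTERN_INT64_SEP "\n", PRINTF_BYTE_TO_BINARY_INT64(flag)); with the same argument macro as before. A separate answer further down builds the separator in as a macro of its own.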
Here's a version of the function that does not suffer from reentrancy issues or limits on the size/type of the argument (it needs <stdint.h> and <limits.h> for uintmax_t and CHAR_BIT):
#define FMT_BUF_SIZE (CHAR_BIT*sizeof(uintmax_t)+1)
char *binary_fmt(uintmax_t x, char buf[static FMT_BUF_SIZE])
{
char *s = buf + FMT_BUF_SIZE;
*--s = 0;
if (!x) *--s = '0';
for (; x; x /= 2) *--s = '0' + x%2;
return s;
}
Note that this code would work just as well for any base between 2 and 10 if you just replace the 2's by the desired base. Usage is:
char tmp[FMT_BUF_SIZE];
printf("%s\n", binary_fmt(x, tmp));
Where x is any integral expression.
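As a sketch of that generalization (the base_fmt name and the extra base parameter are my own; bases above 10 would need a digit lookup string, as in other answers here):
char *base_fmt(uintmax_t x, int base, char buf[static FMT_BUF_SIZE])
{
char *s = buf + FMT_BUF_SIZE;
*--s = 0;
if (!x) *--s = '0';
for (; x; x /= base) *--s = '0' + x % base;
return s;
}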
Quick and easy solution:
void printbits(my_integer_type x)
{
for(int i=sizeof(x)<<3; i; i--)
putchar('0'+((x>>(i-1))&1));
}
Works for any size type and for signed and unsigned ints. The '&1' is needed to handle signed ints as the shift may do sign extension.
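For example, instantiated for int (a sketch; the -2 shows the sign extension that the &1 guards against, assuming a two's-complement 32-bit int):
#include <stdio.h>
void printbits_int(int x)
{
for (int i = sizeof(x) << 3; i; i--)
putchar('0' + ((x >> (i - 1)) & 1));
}
int main(void)
{
printbits_int(-2);   /* 11111111111111111111111111111110 */
putchar('\n');
return 0;
}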
There are so many ways of doing this. Here's a super simple one for printing 32 bits or n bits from a signed or unsigned 32-bit type (not printing a minus sign if signed, just the actual bits) with no trailing newline. Note that i is decremented before the bit is shifted out:
#define printbits_n(x,n) for (int i=n;i;i--,putchar('0'|(x>>i)&1))
#define printbits_32(x) printbits_n(x,32)
What about returning a string with the bits, to store or print later? You can either allocate the memory and return it, which the caller must free, or return a static string, which will get clobbered if the function is called again or from another thread. Both methods are shown:
char *int_to_bitstring_alloc(int x, int count)
{
count = count<1 ? sizeof(x)*8 : count;
char *pstr = malloc(count+1);
if (!pstr) return NULL; /* allocation failed */
for(int i = 0; i<count; i++)
pstr[i] = '0' | ((x>>(count-1-i))&1);
pstr[count]=0;
return pstr;
}
#define BITSIZEOF(x) (sizeof(x)*8)
char *int_to_bitstring_static(int x, int count)
{
static char bitbuf[BITSIZEOF(x)+1];
count = (count<1 || count>BITSIZEOF(x)) ? BITSIZEOF(x) : count;
for(int i = 0; i<count; i++)
bitbuf[i] = '0' | ((x>>(count-1-i))&1);
bitbuf[count]=0;
return bitbuf;
}
Call with:
// memory allocated string returned which needs to be freed
char *pstr = int_to_bitstring_alloc(0x97e50ae6, 17);
printf("bits = 0b%s\n", pstr);
free(pstr);
// no free needed but you need to copy the string to save it somewhere else
char *pstr2 = int_to_bitstring_static(0x97e50ae6, 17);
printf("bits = 0b%s\n", pstr2);
None of the previously posted answers are exactly what I was looking for, so I wrote one. It is super simple to use %B with printf!
/*
* File: main.c
* Author: Techplex.Engineer
*
* Created on February 14, 2012, 9:16 PM
*/
#include <stdio.h>
#include <stdlib.h>
#include <printf.h>
#include <math.h>
#include <string.h>
static int printf_arginfo_M(const struct printf_info *info, size_t n, int *argtypes)
{
/* "%M" always takes one argument, a pointer to uint8_t[6]. */
if (n > 0) {
argtypes[0] = PA_POINTER;
}
return 1;
}
static int printf_output_M(FILE *stream, const struct printf_info *info, const void *const *args)
{
int value = 0;
int len;
value = *(const int *) (args[0]);
// Beginning of my code ------------------------------------------------------------
char buffer [50] = ""; // Is this bad?
char buffer2 [50] = ""; // Is this bad?
int bits = info->width;
if (bits <= 0)
bits = 8; // Default to 8 bits
int mask = pow(2, bits - 1);
while (mask > 0) {
sprintf(buffer, "%s", ((value & mask) > 0 ? "1" : "0"));
strcat(buffer2, buffer);
mask >>= 1;
}
strcat(buffer2, "\n");
// End of my code --------------------------------------------------------------
len = fprintf(stream, "%s", buffer2);
return len;
}
int main(int argc, char** argv)
{
register_printf_specifier('B', printf_output_M, printf_arginfo_M);
printf("%4B\n", 65);
return EXIT_SUCCESS;
}
Is there a printf converter to print in binary format?
The printf() family is only able to print integers in base 8, 10, and 16 using the standard specifiers directly. I suggest creating a function that converts the number to a string per code's particular needs.
To print in any base [2-36]
All other answers so far have at least one of these limitations:
- Use static memory for the return buffer. This limits the number of times the function may be used as an argument to printf().
- Allocate memory, requiring the calling code to free the pointers.
- Require the calling code to explicitly provide a suitable buffer.
- Call printf() directly. This obliges a new function for fprintf(), sprintf(), vsprintf(), etc.
- Use a reduced integer range.
The following has none of the above limitations. It requires C99 or later and use of "%s". It uses a compound literal to provide the buffer space. It has no trouble with multiple calls in a single printf().
#include <assert.h>
#include <limits.h>
#define TO_BASE_N (sizeof(unsigned)*CHAR_BIT + 1)
// v--compound literal--v
#define TO_BASE(x, b) my_to_base((char [TO_BASE_N]){""}, (x), (b))
// Tailor the details of the conversion function as needed
// This one does not display unneeded leading zeros
// Use return value, not `buf`
char *my_to_base(char buf[TO_BASE_N], unsigned i, int base) {
assert(base >= 2 && base <= 36);
char *s = &buf[TO_BASE_N - 1];
*s = '\0';
do {
s--;
*s = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"[i % base];
i /= base;
} while (i);
// Could employ memmove here to move the used buffer to the beginning
// size_t len = &buf[TO_BASE_N] - s;
// memmove(buf, s, len);
return s;
}
#include <stdio.h>
int main(void) {
int ip1 = 0x01020304;
int ip2 = 0x05060708;
printf("%s %s\n", TO_BASE(ip1, 16), TO_BASE(ip2, 16));
printf("%s %s\n", TO_BASE(ip1, 2), TO_BASE(ip2, 2));
puts(TO_BASE(ip1, 8));
puts(TO_BASE(ip1, 36));
return 0;
}
Output
1020304 5060708
1000000100000001100000100 101000001100000011100001000
100401404
A2F44
This code should handle your needs up to 64 bits.
I created two functions: pBin and pBinFill. Both do the same thing, but pBinFill fills in the leading places with the fill character provided by its last argument. The test function generates some test data, then prints it out using the pBinFill function.
#define kDisplayWidth 64
char* pBin(long int x,char *so)
{
char s[kDisplayWidth+1];
int i = kDisplayWidth;
s[i--] = 0x00; // terminate string
do { // fill in array from right to left
s[i--] = (x & 1) ? '1' : '0'; // determine bit
x >>= 1; // shift right 1 bit
} while (x > 0);
i++; // point to last valid character
sprintf(so, "%s", s+i); // stick it in the temp string string
return so;
}
char* pBinFill(long int x, char *so, char fillChar)
{
// fill in array from right to left
char s[kDisplayWidth+1];
int i = kDisplayWidth;
s[i--] = 0x00; // terminate string
do { // fill in array from right to left
s[i--] = (x & 1) ? '1' : '0';
x >>= 1; // shift right 1 bit
} while (x > 0);
while (i >= 0) s[i--] = fillChar; // fill with fillChar
sprintf(so, "%s", s);
return so;
}
void test()
{
char so[kDisplayWidth+1]; // working buffer for pBin
long int val = 1;
do {
printf("%ld =\t\t%#lx =\t\t0b%s\n", val, val, pBinFill(val, so, '0'));
val *= 11; // generate test data
} while (val < 100000000);
}
Output:
00000001 = 0x000001 = 0b00000000000000000000000000000001
00000011 = 0x00000b = 0b00000000000000000000000000001011
00000121 = 0x000079 = 0b00000000000000000000000001111001
00001331 = 0x000533 = 0b00000000000000000000010100110011
00014641 = 0x003931 = 0b00000000000000000011100100110001
00161051 = 0x02751b = 0b00000000000000100111010100011011
01771561 = 0x1b0829 = 0b00000000000110110000100000101001
19487171 = 0x12959c3 = 0b00000001001010010101100111000011
Some runtimes support "%b", although historically that was not standard; C23 has since added a %b conversion specifier.
Also see here for an interesting discussion:
http://bytes.com/forum/thread591027.html
HTH
Use (note that itoa is not standard C, but many compilers provide it):
char buffer [33];
itoa(value, buffer, 2);
printf("\nbinary: %s\n", buffer);
For more reference, see How to print binary number via printf.
This approach has the following attributes: it works for any data type, and it prints the most significant byte first on both little-endian and big-endian machines, one space-separated group per byte.
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <limits.h>
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define for_endian(size) for (int i = 0; i < size; ++i)
#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#define for_endian(size) for (int i = size - 1; i >= 0; --i)
#else
#error "Endianness not detected"
#endif
#define printb(value) \
({ \
typeof(value) _v = value; \
__printb((typeof(_v) *) &_v, sizeof(_v)); \
})
#define MSB_MASK (1 << (CHAR_BIT - 1))
void __printb(void *value, size_t size)
{
unsigned char uc;
unsigned char bits[CHAR_BIT + 1];
bits[CHAR_BIT] = '\0';
for_endian(size) {
uc = ((unsigned char *) value)[i];
memset(bits, '0', CHAR_BIT);
for (int j = 0; uc && j < CHAR_BIT; ++j) {
if (uc & MSB_MASK)
bits[j] = '1';
uc <<= 1;
}
printf("%s ", bits);
}
printf("\n");
}
int main(void)
{
uint8_t c1 = 0xff, c2 = 0x44;
uint8_t c3 = c1 + c2;
printb(c1);
printb((char) 0xff);
printb((short) 0xff);
printb(0xff);
printb(c2);
printb(0x44);
printb(0x4411ff01);
printb((uint16_t) c3);
printb('A');
printf("\n");
return 0;
}
$ ./printb
11111111
11111111
00000000 11111111
00000000 00000000 00000000 11111111
01000100
00000000 00000000 00000000 01000100
01000100 00010001 11111111 00000001
00000000 01000011
00000000 00000000 00000000 01000001
I have used another approach (bitprint.h) to fill a table with all bytes (as bit strings) and print them based on the input/index byte. It's worth taking a look.
void print_ulong_bin(const unsigned long * const var, int bits) {
int i;
#if defined(__LP64__) || defined(_LP64)
if( (bits > 64) || (bits <= 0) )
#else
if( (bits > 32) || (bits <= 0) )
#endif
return;
for(i = 0; i < bits; i++) {
printf("%lu", (*var >> (bits - 1 - i)) & 0x01);
}
}
should work - untested.
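A quick usage sketch (my own test; it prints the low 8 bits of the value):
#include <stdio.h>
int main(void)
{
unsigned long v = 0xA5UL;
print_ulong_bin(&v, 8);   /* prints 10100101 */
printf("\n");
return 0;
}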
I liked the code by paniq, the static buffer is a good idea. However it fails if you want multiple binary formats in a single printf() because it always returns the same pointer and overwrites the array.
Here's a C style drop-in that rotates pointer on a split buffer.
char *
format_binary(unsigned int x)
{
#define MAXLEN 8 // width of output format
#define MAXCNT 4 // count per printf statement
static char fmtbuf[(MAXLEN+1)*MAXCNT];
static int count = 0;
char *b;
count = (count + 1) % MAXCNT; /* cycle through MAXCNT slots without running past fmtbuf */
b = &fmtbuf[(MAXLEN+1)*count];
b[MAXLEN] = '\0';
for (int z = 0; z < MAXLEN; z++) { b[MAXLEN-1-z] = ((x>>z) & 0x1) ? '1' : '0'; }
return b;
}
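For example, this now works, because each call lands in its own slot (a sketch assuming <stdio.h>):
#include <stdio.h>
int main(void)
{
printf("%s %s %s\n",
format_binary(1), format_binary(2), format_binary(0xA5));
/* prints: 00000001 00000010 10100101 */
return 0;
}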
There is no standard and portable way.
Some implementations provide itoa(), but it's not going to be in most, and it has a somewhat crummy interface. But the code is behind the link and should let you implement your own formatter pretty easily.
A one-statement generic conversion of any integral type into its binary string representation, using the C++ standard library:
#include <bitset>
MyIntegralType num = 10;
print("%s\n",
std::bitset<sizeof(num) * 8>(num).to_string().insert(0, "0b").c_str()
); // prints "0b1010\n"
Or just: std::cout << std::bitset<sizeof(num) * 8>(num);
Based on @ideasman42's suggestion in his answer, this is a macro that provides int8, int16, int32, and int64 versions, reusing the INT8 macro to avoid repetition.
/* --- PRINTF_BYTE_TO_BINARY macros --- */
#define PRINTF_BINARY_SEPARATOR
#define PRINTF_BINARY_PATTERN_INT8 "%c%c%c%c%c%c%c%c"
#define PRINTF_BYTE_TO_BINARY_INT8(i) \
(((i) & 0x80ll) ? '1' : '0'), \
(((i) & 0x40ll) ? '1' : '0'), \
(((i) & 0x20ll) ? '1' : '0'), \
(((i) & 0x10ll) ? '1' : '0'), \
(((i) & 0x08ll) ? '1' : '0'), \
(((i) & 0x04ll) ? '1' : '0'), \
(((i) & 0x02ll) ? '1' : '0'), \
(((i) & 0x01ll) ? '1' : '0')
#define PRINTF_BINARY_PATTERN_INT16 \
PRINTF_BINARY_PATTERN_INT8 PRINTF_BINARY_SEPARATOR PRINTF_BINARY_PATTERN_INT8
#define PRINTF_BYTE_TO_BINARY_INT16(i) \
PRINTF_BYTE_TO_BINARY_INT8((i) >> 8), PRINTF_BYTE_TO_BINARY_INT8(i)
#define PRINTF_BINARY_PATTERN_INT32 \
PRINTF_BINARY_PATTERN_INT16 PRINTF_BINARY_SEPARATOR PRINTF_BINARY_PATTERN_INT16
#define PRINTF_BYTE_TO_BINARY_INT32(i) \
PRINTF_BYTE_TO_BINARY_INT16((i) >> 16), PRINTF_BYTE_TO_BINARY_INT16(i)
#define PRINTF_BINARY_PATTERN_INT64 \
PRINTF_BINARY_PATTERN_INT32 PRINTF_BINARY_SEPARATOR PRINTF_BINARY_PATTERN_INT32
#define PRINTF_BYTE_TO_BINARY_INT64(i) \
PRINTF_BYTE_TO_BINARY_INT32((i) >> 32), PRINTF_BYTE_TO_BINARY_INT32(i)
/* --- end macros --- */
#include <stdio.h>
int main() {
long long int flag = 1648646756487983144ll;
printf("My Flag "
PRINTF_BINARY_PATTERN_INT64 "\n",
PRINTF_BYTE_TO_BINARY_INT64(flag));
return 0;
}
This outputs:
My Flag 0001011011100001001010110111110101111000100100001111000000101000
For readability you can change #define PRINTF_BINARY_SEPARATOR to #define PRINTF_BINARY_SEPARATOR "," or #define PRINTF_BINARY_SEPARATOR " ".
This will output:
My Flag 00010110,11100001,00101011,01111101,01111000,10010000,11110000,00101000
or
My Flag 00010110 11100001 00101011 01111101 01111000 10010000 11110000 00101000
/* Convert an int to its binary representation */
char *int2bin(int num, int pad)
{
char *str = malloc(sizeof(char) * (pad+1));
if (str) {
str[pad]='\0';
while (--pad>=0) {
str[pad] = num & 1 ? '1' : '0';
num >>= 1;
}
} else {
return "";
}
return str;
}
/* example usage */
printf("The number 5 in binary is %s", int2bin(5, 4));
/* "The number 5 in binary is 0101" */
The following will show you the memory layout:
#include <limits>
#include <iostream>
#include <string>
using namespace std;
template<class T> string binary_text(T dec, string byte_separator = " ") {
unsigned char* pch = (unsigned char*)&dec; // unsigned so the per-byte division works for bytes with the high bit set
string res;
for (int i = 0; i < sizeof(T); i++) {
for (int j = 0; j < 8; j++) { // all 8 bits of each byte
res.append(pch[i] & 1 ? "1" : "0");
pch[i] /= 2;
}
res.append(byte_separator);
}
return res;
}
int main() {
cout << binary_text(5) << endl;
cout << binary_text(.1) << endl;
return 0;
}