
Hello world. I'm a novice in CS, studying C with the book 'C Primer Plus'. There are review questions at the end of chapter 3, and I'm confused by some of their answers. I would like to check my understanding of them. Please correct me and share some of your knowledge.

So here are the questions from the book, followed by my own questions.

6. Identify the data type (as used in declaration statements) and the printf() format specifier for each of the following constants:
     Constant       Type                Specifier
a.   12             int                 %d
b.   0X3            unsigned int        %#x
...
d.   2.34E07        double              %e
e.   '\040'         char (really int)   %c
f.   7.0            double              %f
...

7. The same as question 6, but assume a 16-bit int:
     Constant       Type                Specifier
a.   012            unsigned int        %#o
...
d.   100000         long                %ld
...
g.   0x44           unsigned int        %d

My questions:

  1. Regarding 6-a & 7-a: how does adding a 0 in front of 12 make it need unsigned int to get more positive range on a 16-bit int system? 12 is 1100 in binary. 012 is mathematically the same, but is it different in computing?

  2. Regarding 6-b & 7-g: how come the hexadecimal constants have type unsigned int? 0X3 and 0x44 are respectively 11 and 1000100 in binary, and 3 and 68 in decimal. A 16-bit integer has a possible value range from −32,768 to 32,767, yet both are unsigned int in the answers.

  3. Regarding 6-d: I read in the book that the C standard requires a float to be able to represent at least six significant figures. Here 2.34E07 has more, so double. But the book also says that, by default, the compiler assumes floating-point constants are double precision. Does this default rule apply only to constants? Do I need to be specific when assigning a variable?

  4. Regarding 6-f: how come 7.0 is a double? Is it because the compiler assumes floating-point constants are double precision? For variables, should I use float, as in float f = 7.0;?

  5. Regarding 7-d: is it the long type (which guarantees at least 32 bits) to get more room for 100,000? But we have to assume a 16-bit int system, which spans 0 to 65,535 even in an unsigned representation. How can a computer with a 16-bit int handle more than it is capable of?

Please feel free to tear me apart, shatter my understanding of data types in C and teach me.

Thank you, user3121023. An octal value makes sense, but 012 in octal is 10 in decimal, so is unsigned int necessary? And yes, 2.34E07 and 2.34E27 have three significant digits but different exponents. 2.34E07 is 23,400,000, right? And why double for 2.34E07? Isn't float enough for that value? – aom

1 Answer

  1. The two most widely used bases in computer science, besides binary, are octal and hexadecimal. In most programming languages, including C, the integer-literal prefixes are 0b for binary (careful: while gcc and clang support it under certain conditions, this is not an official prefix in C and may not work with other compilers), 0 for octal, and 0x for hexadecimal. In printf, one of the uses of the hash flag (#) is to print that base prefix: in the case of 7-a, %#o prints the value with its leading-0 octal prefix.
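
     For instance, a minimal sketch using only standard printf (expected output shown in the comments):

         #include <stdio.h>

         int main(void)
         {
             /* The same flag works for both non-decimal bases. */
             printf("%#o\n", 012);   /* prints 012  : # adds the leading 0  */
             printf("%#x\n", 0x44);  /* prints 0x44 : # adds the leading 0x */
             return 0;
         }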

  2. This is default behavior in C: the x and X conversion specifiers refer to unsigned hexadecimal integers, which is why the book pairs hexadecimal constants with unsigned int.
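
     A quick illustration of why %x pairs with unsigned int: printing a negative int through %x shows the raw bit pattern rather than a sign. This is just a sketch; the exact digits depend on the width of int on your machine.

         #include <stdio.h>

         int main(void)
         {
             int n = -1;
             /* %x expects an unsigned int, so the bit pattern of -1 is
                printed: ffff on a 16-bit int, ffffffff on a 32-bit int. */
             printf("%x\n", (unsigned int)n);
             return 0;
         }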

  3. When initializing or assigning a float variable, it is good practice to always add the f suffix. If you write float f = 0.15466789797554;, you may pick up a rounding error from double rounding: the compiler first rounds the constant to the nearest double, then rounds that double to a float for the assignment.
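
     A sketch of the two paths side by side (for many constants the results agree, since double rounding only occasionally changes the final float, but the suffix removes the risk):

         #include <stdio.h>

         int main(void)
         {
             float twice = 0.15466789797554;   /* double constant, rounded twice */
             float once  = 0.15466789797554f;  /* float constant, rounded once   */
             printf("%.12f\n", twice);
             printf("%.12f\n", once);
             return 0;
         }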

  4. In C, the compiler assumes that a floating-point constant without a suffix is of type double. To force its type to float, append the f suffix (e.g. 3.14f).
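
     You can observe the constant's type directly with sizeof. The sizes in the comments are typical values; the standard only guarantees minimum ranges, not exact widths.

         #include <stdio.h>

         int main(void)
         {
             /* An unsuffixed floating constant is a double; f makes it a float. */
             printf("%zu\n", sizeof 7.0);     /* typically 8 */
             printf("%zu\n", sizeof 7.0f);    /* typically 4 */
             printf("%zu\n", sizeof (float)); /* typically 4 */
             return 0;
         }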

  5. The C standard guarantees that long is at least 32 bits (4 bytes) wide, while int may be as narrow as 16 bits (2 bytes). As you mentioned, a 16-bit unsigned int cannot hold the value 100,000. Moving from 16 bits to 32 bits raises the maximum from 32,767 to 2,147,483,647 for a signed type and from 65,535 to 4,294,967,295 for an unsigned one. On a processor whose word size is 16 bits (2 bytes), the compiler implements long arithmetic in software: it splits each long into an upper and a lower half and keeps the two halves in separate registers or memory words.
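
     Putting 7-d together, a minimal sketch (the L suffix makes the constant's type explicit, and %ld matches the long argument):

         #include <stdio.h>

         int main(void)
         {
             /* 100000 does not fit in a 16-bit int (max 32,767), so on such a
                system the constant must be long. */
             long big = 100000L;
             printf("%ld\n", big);
             return 0;
         }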