Great question!
I know this question is 9 years old now, and I only know part of the answer you were seeking, but I just came here with a similar question, and many things have changed since the question was asked, such as the hardware and GPSes available. I work with this subject frequently in firmware, dealing with different kinds of GPSes in different kinds of applications, and have lost count of the hours (and days) I have spent working out "the best design" for the different applications I have worked on or developed.
As always, different solutions are going to provide benefits and costs, and ultimately, a "best design" is always going to be a "best fit" of the benefits and costs against system requirements. Here are some things that I have to consider when I ask the same question:
CPU Time Cost
If the CPU does not have a built-in floating-point co-processor (as is the case with many microcontrollers), then dealing with 'float', 'double', and 'long double' can be extremely costly. For example, on one 16-bit microcontroller I work with regularly, a multiplication of 'double' values costs 326 CPU clock cycles, and a division costs 1193 clock cycles. Very expensive!
Accuracy Trade-Off
At the equator, a 'float' (an IEEE-754 32-bit floating-point value) representing a signed degree value has roughly 7 "clean" significant decimal digits, so a change of one least-significant decimal digit (e.g. from 179.9999 to 180.0000) represents a distance of about 11.12 meters. This may or may not meet hard system accuracy requirements. A 'double', with about 15 "clean" significant decimal digits (a change from 179.999999999999 to 180.000000000000), represents about 0.00011 mm.
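As a rough back-of-the-envelope check (my own sketch, assuming a spherical Earth with an equatorial circumference of about 40,075 km, which lands within a hair of the figures above), you can compute what one least-significant decimal digit is worth:

#include <stdio.h>

int main(void)
{
    /* Rough meters per degree at the equator:
       40,075 km circumference / 360 degrees ~= 111,319 m per degree */
    const double dMetersPerDegree = 40075000.0 / 360.0;

    /* 'float': ~7 significant decimal digits -> steps of 0.0001 degree near 180 */
    printf("float  LSD ~= %.2f m\n", 0.0001 * dMetersPerDegree);

    /* 'double': ~15 significant decimal digits -> steps of 0.000000000001 degree */
    printf("double LSD ~= %.6f mm\n", 0.000000000001 * dMetersPerDegree * 1000.0);

    return 0;
}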
Input Accuracy Limitations
If you're dealing with input from a GPS, how many digits of real accuracy are you getting, and how many do you need to preserve?
Development Time Costs
An IEEE-754 64-bit double-precision value ('double') and a 32-bit single-precision value ('float') are VERY convenient to deal with in the C language, since math libraries for both come with virtually every C compiler and are usually very reliable. If your CPU comes with a hardware floating-point processor, this is an easy choice.
RAM and Storage Costs
If you have to keep a large number of these values in RAM (or in storage, e.g. MySQL), the available RAM (and storage space) might have an impact on the workability of the solution.
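As a quick illustration (my own numbers, not from any particular application), the difference adds up fast when you store many points:

#include <stdint.h>
#include <stdio.h>

/* One position stored as IEEE-754 doubles vs. as 1e-7-degree scaled integers */
typedef struct { double dLat, dLon; } WaypointDouble;    /* typically 16 bytes */
typedef struct { int32_t i32Lat, i32Lon; } WaypointInt;  /* typically 8 bytes  */

int main(void)
{
    const unsigned long ulCount = 100000UL;  /* e.g. a large track log */
    printf("double pairs : %lu bytes\n",
           (unsigned long)(ulCount * sizeof(WaypointDouble)));
    printf("int32_t pairs: %lu bytes\n",
           (unsigned long)(ulCount * sizeof(WaypointInt)));
    return 0;
}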
Available Data vs Required Data
One example I'm dealing with at this writing (the reason I came here to this question) is a u-blox M8 GPS which is able to give me binary GPS information (saving the CPU overhead of translating ASCII NMEA sentences). In this binary format (called the "UBX Protocol"), latitude and longitude are represented as signed 32-bit integers in units of 1e-7 degree, a representation with a resolution (at the equator) of about 1.11 cm. For example, -105.0269805 degrees longitude is represented as -1050269805 (using all 32 bits), and a one-LSb change represents about 1.11 cm of latitude anywhere, and 1.11 cm of longitude at the equator (less at higher latitudes, in proportion to the cosine of the latitude).

The application this GPS is part of performs navigation tasks with already-existing, well-tested code that requires 'double' data types. Unfortunately, converting this integer to an IEEE-754 64-bit 'double' cannot be done just by moving the base-2 bits of the integer into the internal representation bits of the 'double', because the scaling to be undone is a base-10 factor (10^7). Were it a power-of-two factor instead, the integer's bits could be moved into the bit fields of the 'double' with very little translation required. But alas, that is not the case with the signed integer I have, so the conversion costs me a multiplication on a CPU that doesn't have a hardware floating-point processor: 326 CPU clock cycles.
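/* Convert the UBX scaled integer (units of 0.0000001 degree) into degrees */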
double ldLatitude;
int32_t li32LatFromGps;
ldLatitude = (double)li32LatFromGps * 0.0000001;
Note that this multiplication was chosen over the equivalent division:
ldLatitude = (double)li32LatFromGps / 10000000.0;
because 'double' multiplication is about 3.6X faster than 'double' division on the CPU that I'm dealing with. Such is life in the microcontroller world. :-)
What would have been BRILLIANT (and may still happen if I can spare the time on weekends) is if the navigation tasks could be done directly with the 32-bit signed integer! Then no conversion would be needed.... But would it cost more to do the navigation tasks with such an integer? In CPU terms, it would probably be much more efficient. In development time? That's another question, especially with a well-tested system already in place that uses IEEE-754 64-bit 'double' values! Plus there is already-existing software that provides map data (using 'double' degree values), and that software would have to be converted to use the signed integer as well -- not an overnight task!
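For what it's worth, here is a rough sketch of the kind of integer-only arithmetic I have in mind for nearby points. The scale constants, the Q15 cosine parameter, and the names are my own (not from the UBX protocol or any existing library), and it assumes a flat-Earth approximation over short distances:

#include <stdint.h>

/* One LSb of a 1e-7-degree integer is about 11.132 mm north-south
   (40,075 km / 360 / 1e7).  East-west shrinks by cos(latitude),
   passed here as a Q15 fixed-point value (32768 == 1.0). */
#define MM_PER_LSB_NUM  11132
#define MM_PER_LSB_DEN  1000

void vLocalOffsetMm(int32_t i32Lat1, int32_t i32Lon1,
                    int32_t i32Lat2, int32_t i32Lon2,
                    int32_t i32CosLatQ15,
                    int64_t *pi64NorthMm, int64_t *pi64EastMm)
{
    int64_t i64DLat = (int64_t)i32Lat2 - (int64_t)i32Lat1;
    int64_t i64DLon = (int64_t)i32Lon2 - (int64_t)i32Lon1;

    *pi64NorthMm = i64DLat * MM_PER_LSB_NUM / MM_PER_LSB_DEN;
    *pi64EastMm  = (i64DLon * i32CosLatQ15 / 32768) * MM_PER_LSB_NUM / MM_PER_LSB_DEN;
}

The caller would compute i32CosLatQ15 once per fix or region (e.g. from a small lookup table), so the per-point cost is a handful of integer multiplies and divides instead of 'double' operations.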
One VERY interesting option is to work directly (without translation) with intersections between approximate "rectangles" (actually trapezoids, which become triangles at the poles) defined by the raw latitude/longitude integers. At the equator these rectangles would have dimensions of approximately 1.11 cm east-west by 1.11 cm north-south, whereas at the latitude of, say, London, England, the dimensions would be approximately 0.69 cm east-west by 1.11 cm north-south. That may or may not be easy to deal with, depending on what the application needs.
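A minimal sketch of that kind of intersection test on the raw integers might look like this (the type and function names are hypothetical, and longitude wrap-around at +/-180 degrees is ignored for brevity):

#include <stdbool.h>
#include <stdint.h>

/* Axis-aligned "rectangle" in raw 1e-7-degree units
   (on the ground it is really a trapezoid). */
typedef struct {
    int32_t i32LatMin, i32LatMax;  /* south and north edges */
    int32_t i32LonMin, i32LonMax;  /* west and east edges   */
} GeoRect;

/* True if the two rectangles overlap -- no floating point, no unit conversion. */
bool bGeoRectsIntersect(const GeoRect *pA, const GeoRect *pB)
{
    return pA->i32LatMin <= pB->i32LatMax && pB->i32LatMin <= pA->i32LatMax &&
           pA->i32LonMin <= pB->i32LonMax && pB->i32LonMin <= pA->i32LonMax;
}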
Anyway, I hope these thoughts and this discussion help others who are looking for "the best design" for their own system.
Kind regards,
Vic