This switch is the equivalent of the -CF command line switch. Normally, the compiler sets the precision of a floating point constant to the minimal precision required to represent it exactly.
This switch can be used to ensure that the compiler never lowers the precision below the specified value. Supported values are 32, 64 and DEFAULT. A value of 80 (Extended precision) is not supported for implementation reasons.
Note that this has nothing to do with the actual precision used in calculations: there, the type of the variable determines what precision is used. This switch only determines the precision with which a constant declaration is stored:
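For example (a sketch assuming the `{$MINFPCONSTPREC}` directive form of this switch; the constant name is illustrative):

```pascal
{$MINFPCONSTPREC 64}
Const
  MyFloat = 0.5; { 0.5 fits exactly in a single, but is stored as a double }
```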
The type of the above constant will be double, even though it can be represented exactly using single.