However, in most cases float and double seem to be interchangeable, i.e. using one or the other does not seem to affect the results. The 53 significand bits of a double give about 16 decimal digits of precision; the 24 significand bits of a float give about 7. The L length modifier specifies that a following a, A, e, E, f, F, g, or G conversion specifier applies to a long double argument. The same rules specified for fprintf apply to printf, sprintf, and similar functions.
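As a minimal sketch of those rules in plain ISO C (the exact digits printed are an assumption that depends on your platform using the usual IEEE 754 formats):

    #include <stdio.h>

    int main(void) {
        float f = 3.14159265358979323846f;
        double d = 3.14159265358979323846;
        long double ld = 3.14159265358979323846L;

        /* In variadic calls a float is promoted to double, so the
           plain %f conversion handles both float and double. */
        printf("float:       %.20f\n", f);
        printf("double:      %.20f\n", d);
        /* The L length modifier makes the f conversion read a
           long double argument instead. */
        printf("long double: %.20Lf\n", ld);
        return 0;
    }

Typically the float output diverges from pi after about 7 digits and the double output after about 16, matching the bit counts above.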
On the Java side: a Double is not an Integer, so the cast won't work. Note the difference between the Double class and the double primitive. Also note that a Double is a Number, so it has the method intValue, which you can use to get the value as a primitive int. A related question keeps coming up: I am unable to understand the difference between long double and double in C and C++.
A double has much higher precision due to its larger size. If the numbers you are using will commonly exceed the range or precision of a float, then use a double. Several other people have mentioned performance issues; that would be exactly last on my list of considerations. Correctness should be your #1 consideration. The term "double precision" is something of a misnomer, because the precision is not exactly double: a float carries a 24-bit significand and a double a 53-bit one, slightly more than twice as many bits.
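To see that correctness point concretely, here is a small sketch (again assuming the near-universal IEEE 754 binary32/binary64 formats):

    #include <stdio.h>

    int main(void) {
        /* 123456789 has 9 significant digits: within a double's
           ~16-digit precision, but beyond a float's ~7. */
        float f = 123456789.0f;
        double d = 123456789.0;

        printf("float:  %.1f\n", f);  /* typically 123456792.0 */
        printf("double: %.1f\n", d);  /* 123456789.0 exactly */
        return 0;
    }

The float silently rounds to the nearest representable value, which is off by 3; the double stores the value exactly.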
But in C++, when calling scanf and printf, the notation used to specify a double is %lf, and that stands for "long float", right? So while a float is less precise than a double, a "long float" (presumably called long float because it can be longer, holding more significant digits) has the same accuracy and is therefore essentially the same thing?
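Not quite: the l length modifier matters for scanf but not for printf. Since C99, %lf in printf is defined to mean the same as %f, because float arguments are promoted to double in variadic calls anyway; only %Lf, for long double, is genuinely different. A small sketch of the asymmetry:

    #include <stdio.h>

    int main(void) {
        float f;
        double d;

        /* scanf: the length modifier selects the stored type.
           %f writes through a float*, %lf through a double*,
           %Lf through a long double*. */
        if (scanf("%f %lf", &f, &d) == 2) {
            /* printf: the float is promoted to double, so %f and
               %lf both print a double; only %Lf differs. */
            printf("%f %f\n", f, d);
        }
        return 0;
    }

So %lf is not evidence of a separate "long float" type; it is scanf that needs the distinction, because pointers are not promoted.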