C++/floating point arithmetic
I have problems with floating point numbers. We have been preparing functional tests for OpenGL API functions. In OpenGL, a number of functions just set a state value, and their corresponding get functions retrieve those state values. To test a set function, we set a value with it, read it back with the related get function, and decide the set function works correctly by comparing the set and get values. Since a large number of functions take floating point parameters, we run into precision problems. Even when we simply multiply two numbers, a difference (error) occurs between the "perfect" value and the value the computer calculates, and for more complex calculations the error grows. So we need a base epsilon value that defines the maximum acceptable error. When we use the single-precision machine epsilon as that tolerance, our test steps fail, because a fixed six-digit tolerance is too strict.
I would like to know if there is a common method for deciding the maximum acceptable error range for floating point calculations. Is there any way to calculate the amount of error produced by a floating point calculation?
OS: Windows XP, Compiler: Ms Visual C++ 6
cem, thank you for your question.
You can find lots of tutorial information about floating point precision by searching the Web.
A simple way to determine the precision of an elementary floating point calculation is to perform it twice: once with the inputs nudged down by one unit in the last place (ulp), and once with the inputs nudged up by one ulp. Subtract the two results to get the precision range. For example, the epsilon of 1.0 is approx. 2.22 x 10**-16 for a typical 'double' datatype (about 15 decimal digits of precision), and by this measure the multiplication 1.0 x 1.0 has a precision of about 6.7 x 10**-16.