C++/floating point arithmetic in OpenGL
I have a problem with floating point numbers. We have been preparing functional tests for OpenGL API functions. In OpenGL, a number of functions simply set a state value, and their corresponding get functions retrieve it. To test a set function, we set a value with it, read it back with the related get function, and decide that the set function works by comparing the set and get values. Since a large number of these functions take floating point parameters, we run into precision problems. Even a simple multiplication of two numbers produces a difference (error) between the mathematically exact value and the one the computer calculates, and in more complex calculations the error grows. So we need a base epsilon value that defines the maximum acceptable error. When we use the floating point precision itself as the epsilon value, our test steps fail, since an epsilon at the 6-digit precision limit is too small.
I would like to know whether there is a common method for deciding the maximum acceptable error range for floating point calculations. Is there any way to calculate the amount of error produced by a floating point calculation?
OS: Windows XP, Compiler: Ms Visual C++ 6
I have always used 0.00001f as my epsilon value, and it works well enough. The check I use for detecting floating point error is as follows (pseudo-code, assuming result is supposed to equal number1 - number2):

if (fabs(result - (number1 - number2)) >= epsilon)
{
    // floating point error detected: snap result back to the expected value
    result = number1 - number2;
}

All that does is correct the drifted result by resetting it to the expected value. It works pretty well, I reckon.
On a side note, in all the projects I've worked on, floating point error was never really an issue, since a difference in the sixth decimal place is such a small number.
I hope this information was useful to you.