Number Watch Web Forum

This forum is about wrong numbers in science, politics and the media. It respects good science and good English.

Computational extravagance

http://www.beyond3d.com/articles/fastinvsqrt/

The above discussion has a wonderful piece of almost number magic. Unless you are a programmer (or a numberwatcher), the analysis is less than invigorating. It is fascinating, though.
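
For anyone who doesn't want to wade through the article, this is the routine being discussed, essentially as it appeared in the Quake III source (I've swapped the original's 'long' for int32_t so it still compiles sensibly on machines where long is 64 bits; the pointer casts are technically undefined behaviour in modern C, but they are the whole point of the trick):

    #include <stdint.h>
    #include <stdio.h>

    float Q_rsqrt(float number)
    {
        int32_t i;
        float x2, y;

        x2 = number * 0.5F;
        y  = number;
        i  = *(int32_t *)&y;            /* read the float's bits as an integer */
        i  = 0x5f3759df - (i >> 1);     /* the magic: the shift halves the exponent,
                                           the constant negates it and corrects the bias */
        y  = *(float *)&i;              /* read the bits back as a float: a rough first guess */
        y  = y * (1.5F - (x2 * y * y)); /* one Newton-Raphson step recovers several digits */
        return y;
    }

    int main(void)
    {
        /* 1/sqrt(4) = 0.5; the approximation lands within about 0.2% of that */
        printf("%f\n", Q_rsqrt(4.0f));
        return 0;
    }

The shift halves the stored exponent (roughly a square root of the bit pattern), the subtraction from the magic constant negates it (giving the inverse), and the Newton-Raphson step cleans up the result.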

More interesting is a comment made by one of the potential creators of the code. He has worked with someone else on similar code for computational fluid dynamics. The fascinating part is that this code had to be accurate to 48 significant bits (I originally read that as digits, but it works out to about 14 significant digits in the decimal system).
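
(For anyone checking the bit-to-digit arithmetic: decimal digits ≈ bits × log10(2), so 48 × 0.30103 ≈ 14.4 digits.)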

Can someone explain why it is necessary to have 14 significant digits in computational fluid dynamics?

The only thing I can think of is that, since it is an inverse sqrt, the result is < 1 as long as the operand is > 1. This is a region of trepidation in the computational world: zero isn't quite zero, and multiplications of small numbers lead to ever smaller numbers, creeping closer and closer to underflow, at which point a single flipped bit can produce a really big number completely unrelated to the process.
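
A minimal sketch of the sort of failure Brad is worried about, using IEEE 754 single precision (the values are arbitrary, chosen only to trigger underflow):

    #include <stdio.h>

    int main(void)
    {
        float tiny = 1e-20f;
        float prod = tiny * tiny;     /* 1e-40: already below the normal range (subnormal) */
        printf("%g\n", prod);         /* prints roughly 1e-40 */
        prod *= tiny;                 /* underflows all the way to zero */
        printf("%g\n", prod);         /* prints 0 */
        printf("%g\n", 1.0f / prod);  /* and a later division gives inf */
        return 0;
    }

Once the product has underflowed to zero, anything downstream that divides by it produces an infinity that has nothing to do with the physics.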

Fourteen significant digits still seems excessive, and makes me think the guy is chasing inaccuracies in the wrong places.

The programmer was increasing the speed of the CFD process, at which he succeeded masterfully (taking the process down from a week to two days). The programmer did his job well. I worry about the scientist/engineer who forgot the relevance of significant digits.

As usual, my gut is quite likely wrong.

Re: Computational extravagance

Brad,
Your gut is right at least once a day. But I think it's right in this too. The most exact theory we have (Quantum Electrodynamics) can make calculations that agree with experiment to 11 significant digits (the limit being experimental). The actual computation can be carried two or three digits further.
(http://www.lassp.cornell.edu/sethna/Cracks/QED.html)
In Computational Fluid Dynamics, I don't think you can get more than 4 or 5 significant digits. Perhaps a certain Bending Author can enlighten us on this, given the experience he had with electrohydrodynamic instability in his youth.

Best,
Jaime

Re: Computational extravagance

In answer to the specific question, "Can someone explain why it is necessary to have 14 significant digits in computational fluid dynamics?"

The basic approach in computer programming is to provide two levels of number precision: single precision and double precision. Double precision is really intended for computationally intensive applications like solving simultaneous equations or solving for eigenvalues and eigenvectors, and most mathematical modelling programs like CFD would be coded in double precision. According to IEEE 754, single precision uses 32 bit words with a 24 bit mantissa (23 bits stored, about 7 decimal digits), and double precision uses 64 bit words with a 53 bit mantissa (52 bits stored, about 15 decimal digits).

15 digits might look over the top at first, but it has to be realised that even solving something like 100 simultaneous equations (a fairly small problem) involves a few hundred thousand arithmetic (multiply, divide, add, subtract) operations. Each arithmetic operation can incur a small round off error, and the accumulated error from large numbers of operations can build up to degrade most of the 15 digits that you started with, hopefully not affecting the most significant 3 or 4 digits in the final result.
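
A small illustration of that build-up (a sketch assuming IEEE 754 arithmetic; the loop count is arbitrary):

    #include <stdio.h>

    int main(void)
    {
        /* Add 0.1 ten million times in each precision.  Every addition
           is rounded to the working precision, so the errors accumulate. */
        float  fsum = 0.0f;
        double dsum = 0.0;
        for (int i = 0; i < 10000000; i++) {
            fsum += 0.1f;
            dsum += 0.1;
        }
        printf("single: %f\n", fsum);   /* well away from the exact 1000000 */
        printf("double: %f\n", dsum);   /* close to 1000000 */
        return 0;
    }

The single precision total goes wrong after the first digit or two, while the double precision total is still good to many digits: that is the headroom that makes double precision the default for codes like CFD.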

The 48 bit mantissa (14 decimal digits) that you've seen, Brad, is probably for a Cray supercomputer, which uses a 64 bit word with a 48 bit mantissa for single precision.

Now the idea that a mantissa of about 50 bits is adequate for computationally intensive work was established in the 1960s, when people envisaged that several tens of thousands of simultaneous equations would be the biggest problem anybody would ever want to solve. Computers have got more powerful over the decades, and it isn't uncommon nowadays for people to run problems with hundreds of thousands or even millions of equations (I would suspect climate models are of this size). So there is an argument that on very large models people should be using something like a 100 bit mantissa, which would correspond to double precision on a Cray or to the not very widely available 'quadruple precision'.
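
For the curious, the precision levels a typical desktop C compiler provides can be printed directly (a sketch; note that on x86 'long double' is usually the 80 bit extended format with a 64 bit mantissa, a step towards, but not the same as, true quadruple precision):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* Mantissa width and machine epsilon at each precision level. */
        printf("float:       %2d bits, eps = %g\n",  FLT_MANT_DIG,  (double)FLT_EPSILON);
        printf("double:      %2d bits, eps = %g\n",  DBL_MANT_DIG,  DBL_EPSILON);
        printf("long double: %2d bits, eps = %Lg\n", LDBL_MANT_DIG, LDBL_EPSILON);
        return 0;
    }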

An example of somebody arguing that higher precision than the current norm ought to be used is given in this recent paper:
http://perso.ens-lyon.fr/gilles.villard/BIBLIOGRAPHIE/PDF/phys06.pdf

Re: Re: Computational extravagance

While I understand the rounding issues, I must continue to question the precept.

In my physics class in college, the professor told us that on all tests he only wanted 1 significant digit: Pi/3 = 1. This was an attempt on his part to break people of their calculator habit (he didn't allow calculators in tests). Practically speaking, though, 3.1415927/3 = 1.04719756. Let's assume that this actually represents a real unit like meters. You send the part to get machined. Where do the digits become irrelevant to the machinist? There may be some applications where you need the part manufactured down to the millimeter. Down to the micrometer, though, is unlikely, since expansion and contraction with temperature for just about any substance will exceed that tolerance. (I can't completely support this; there may be some new materials that don't expand and contract much.)
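
To make the point concrete, here is a hypothetical little helper (round_sig is my own name, nothing standard) that rounds a result to a chosen number of significant digits, applied to the professor's example:

    #include <math.h>
    #include <stdio.h>

    /* Round x to n significant decimal digits. */
    double round_sig(double x, int n)
    {
        if (x == 0.0) return 0.0;
        double scale = pow(10.0, n - 1 - (int)floor(log10(fabs(x))));
        return round(x * scale) / scale;
    }

    int main(void)
    {
        double x = 3.1415927 / 3.0;
        printf("%.8f\n", x);               /* 1.04719757, the calculator answer */
        printf("%.0f\n", round_sig(x, 1)); /* 1, the professor's answer */
        return 0;
    }

Everything past the first digit is exactly the sort of precision the machinist, and the material itself, will throw away.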

I can't say that making calculations as precise as possible is bad; isolating possible points of failure is never bad. From my experience, though, I think they are focusing on the wrong point of failure. There are very practical reasons for the lessons we receive in significant digits. They apply to gross computational methods as well as to hand calcs.

As to General Circulation Models, I can only say that anyone attempting to model a system without knowing the boundary conditions should not expect much in the way of accuracy or precision. To stand by such models and say they are accurate is downright fraudulent. The inaccuracies of such models are not due to significant digits; they are due to an incomplete understanding of the system being modeled. Once again, it is not bad to try to model the system, learn from the process, and model it better. It is only bad to misrepresent the accuracy of the models.

100 bit mantissas will do nothing to increase the accuracy of such models. Decreasing the size of the node, on the other hand, might. The only reason a larger mantissa seems useful is that the last digits 'might' represent a molecule of air. This is faulty, though, because given the size of the node (I believe they are about 100 km on a side and 1 km deep), the trailing digits are irrelevant to the magnitude of the answer.

If you really want to increase the accuracy of such models, you will have to model down to the molecular level, but that would require extraordinary resources. Modeling my office alone would require more computing power than is currently being used on GCMs.

Thanks for the explanation though.