How accurate are the trig functions? Do they use power series? If so, to how many terms?
On an unrelated note, should I use two resistors or a trimpot for a voltage divider?
On an unrelated note, should I use two resistors or a trimpot for a voltage divider?
What are you trying to do?
A pair of resistors will be more stable: there is a larger range of power dissipation, stability, tolerance, and physical size available, and they are also less sensitive to vibration and physical stress.
On the other hand, if you need to adjust something every few hours or more often, a trimpot wins hands down.
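To put a number on the tolerance point, here is a quick sketch (all values hypothetical: a 3.3 V rail, R1 = R2 = 10k, 1% parts):

#include <stdio.h>

/* Worst-case spread of a two-resistor divider built from 1% parts.
   Vout = Vin * R2 / (R1 + R2); the extremes occur when the two
   resistors err in opposite directions. */
int main(void)
{
    double vin = 3.3, r1 = 10e3, r2 = 10e3, tol = 0.01;

    double nominal = vin * r2 / (r1 + r2);
    double low  = vin * (r2 * (1 - tol)) / (r1 * (1 + tol) + r2 * (1 - tol));
    double high = vin * (r2 * (1 + tol)) / (r1 * (1 - tol) + r2 * (1 + tol));

    printf("nominal %.4f V, worst case %.4f .. %.4f V\n", nominal, low, high);
    return 0;
}

That band is fixed at manufacture; a trimpot's wiper can move with vibration and temperature as well as with your screwdriver.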
How accurate are the trig functions?
How accurate do they need to be?
I'd try to avoid accumulating their values over many calculations, e.g. I wouldn't write
absolute += sin(deltaT);
I'd tend to write
absolute = sin(t);
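Here's a quick sketch of why (numbers hypothetical; the drift is in the accumulated variable itself, before sin() is even involved):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Advance a "time" two ways: recomputed from the loop index,
       and accumulated one deltaT at a time. */
    double deltaT = 0.001;
    double accumulated = 0.0;
    double t = 0.0;

    for (long i = 0; i < 1000000; i++) {
        t = (i + 1) * deltaT;   /* one rounding error, total             */
        accumulated += deltaT;  /* a million rounding errors, compounded */
    }

    printf("t           = %.17g\n", t);
    printf("accumulated = %.17g\n", accumulated);
    printf("difference  = %.3g\n", accumulated - t);
    printf("sin diff    = %.3g\n", sin(accumulated) - sin(t));
    return 0;
}

The two 'times' drift apart, so any sin() taken of them drifts too.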
I usually assume they are better than I can write in a few days, even assuming I have the source and access to the Internet.
The source code for the math library is part of the newlib library.
As an example, part of the comment in the source of k_sin.c says:
* 3. sin(x) is approximated by a polynomial of degree 13 on [0,pi/4]:
*
*        sin(x) ~ x + S1*x^3 + ... + S6*x^13
*
*    where
*
*        |sin(x)/x - (1 + S1*x^2 + S2*x^4 + S3*x^6 + S4*x^8 + S5*x^10 + S6*x^12)| <= 2^-58
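If you want to play with that polynomial directly, here is a sketch that evaluates it with Horner's rule and compares it against the library (the function name is made up here; the S1..S6 values are copied from the fdlibm/newlib k_sin.c source; valid on [0, pi/4] only, no range reduction):

#include <stdio.h>
#include <math.h>

/* The degree-13 kernel polynomial from k_sin.c, via Horner's rule. */
static double kernel_sin(double x)
{
    static const double
        S1 = -1.66666666666666324348e-01,
        S2 =  8.33333333332248946124e-03,
        S3 = -1.98412698298579493134e-04,
        S4 =  2.75573137070700676789e-06,
        S5 = -2.50507602534068634195e-08,
        S6 =  1.58969099521155010221e-10;
    double z = x * x;
    return x + x * z * (S1 + z * (S2 + z * (S3 + z * (S4 + z * (S5 + z * S6)))));
}

int main(void)
{
    /* Integer loop counter on purpose: no accumulated float error. */
    for (int i = 0; i <= 8; i++) {
        double x = i * (M_PI / 32.0);
        printf("x=%.5f  poly=%.17g  sin=%.17g\n", x, kernel_sin(x), sin(x));
    }
    return 0;
}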
Wait, 0 to pi/4? I need from 0 to pi...
Other than that, it is a power series. It is a polynomial of degree 13, so that is plenty accurate. I read the source code you linked to and, to be honest, I didn't understand much. I'll check the values on my own and confirm them, but if they aren't close to the actual values or the necessary range isn't present, then I'll just write my own function for it.
silntknight - that's just the "kernel" sin. the version you call from your code is in s_sin.c.
recall that once you have sin and cos on [0,pi/4], various symmetries buy you the rest of the values. so having kernel sin and cos on [0,pi/4] is enough for sin and cos in general.
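for example, a rough sketch of how those symmetries compose (this is just an illustration of the identities, not how newlib actually structures the code; my_sin is a made-up name, and kernel_sin/kernel_cos stand in for [0,pi/4] kernels like the one above):

#include <math.h>

/* stand-ins for the real [0,pi/4] kernels */
static double kernel_sin(double x) { return sin(x); }
static double kernel_cos(double x) { return cos(x); }

double my_sin(double x)
{
    if (x < 0.0)
        return -my_sin(-x);                 /* sin(-x)    = -sin(x) */
    x = fmod(x, 2.0 * M_PI);                /* sin(x+2pi) =  sin(x) */
    if (x > M_PI)
        return -my_sin(x - M_PI);           /* sin(x+pi)  = -sin(x) */
    if (x > M_PI / 2.0)
        x = M_PI - x;                       /* sin(pi-x)  =  sin(x) */
    if (x > M_PI / 4.0)
        return kernel_cos(M_PI / 2.0 - x);  /* sin(x) = cos(pi/2-x) */
    return kernel_sin(x);
}

(note the fmod() here is the weak link for large x; real libraries do the reduction far more carefully.)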
also, just in case you've never tried, you should know that writing numerically stable floating point trig functions that work well across their entire range of input values is ... non-trivial. the version that comes with newlib has benefited from years of engineering and mathematical analysis, and is quite likely to be adequate to your purposes.
"what every computer scientist should know about floating-point arithmetic" is a source of good bits, in case you do end up rolling your own:
Whatever the actual implementation is, implement the math in a separate program and get test vectors. Not only should you get the input and output values, but also intermediate values. You can use Matlab or your favorite math tool with a fixed point toolbox.
Silntknight - How accurate do you need it to be?
I can think of only a few physical processes which have an accuracy and precision that exceed the limits of ordinary single precision float, let alone doubles. What are you trying to calculate?
A common cause of problems is accumulating floating point values across many iterations. If you can avoid accumulating errors then, IMHO, it's unlikely that the precision or accuracy of newlib will be a source of problems.
I strongly agree with mbolivar.
newlib is likely very solid.
My only proviso would be: only do it if you can demonstrate or prove those functions are not accurate enough in one or two days of work. I base that 'one or two days' on the assumption that unless you have the tools and familiarity with this sort of stuff to do the demonstration or proof that quickly, then implementing improved functions will be a lot of work. I'd SWAG* it at months. (Just trying to save you time.)
Another investigation might be to search the newlib mail archives for comments on inaccuracy, and what was done to fix it.
A demonstrably, provably better implementation is likely to be subtle code. Probably more subtle than newlib. I am not trying to dissuade you from doing it, but IMHO, the best bet is finding a ready-written implementation with all the tests, and proof of superiority.
Just testing a new implementation could take a long time.
Please consider larryang's suggestion to use proper tools like Matlab or Mathematica, or a very good toolbox that has been 'beaten on' for years, and test intermediate values. The intermediate values should alert you to problems, and their causes, better than only looking at final values.
Also, be careful about how values are converted to ASCII and printed out. Years ago, I used a printf function that wasn't accurate in the way I was using it. I spent quite a lot of effort chasing that down only to discover that I was using formatting wrongly.
IMHO, you'll need to be very familiar with the issues in "what every computer scientist should know about floating-point arithmetic", because it isn't necessarily enough to have a higher-degree, more precise equation. The inaccurate and non-uniform behaviour of floating point numbers will have to be taken into account if you are going to do much better than newlib.
These guys suggest using Java's arbitrary-precision fixed-point BigDecimal class:
http://www.peer.caltech.edu/Particulate/Aerosol/UltraHighPrecisionMieCalcs.pdf
As mbolivar explained, if it is the trig functions you need, 0 to pi/4 is enough to get all of the absolute values; the rest are 'reflections'. In this case fixed point is a real possibility (sorry about that pun). If you can characterise the range of calculation for other functions, then fixed point may still be a good route.
*SWAG - Smart Wild-Ass Guess, as opposed to WAG
I'll concur with gbulmer about the printf issue. Compare the raw hex of the inputs, outputs and intermediate values.
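For example, a small sketch of one way to do that in C (the helper name is made up):

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

/* Print a double as decimal text and as its raw 64-bit pattern, so
   two values can be compared without trusting printf's rounding. */
static void dump(const char *label, double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);  /* well-defined, unlike pointer puns */
    printf("%-6s % .17g  (0x%016llx)\n", label, d, (unsigned long long)bits);
}

int main(void)
{
    double x = 0.5;
    dump("input", x);
    dump("sin", sin(x));  /* printf("%a", ...) is another option in C99 */
    return 0;
}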
Depending on performance and code size considerations, there are other implementations out there: CORDIC, and more conventional look-up tables (LUT) with interpolation.
For more reading, try DSP-related websites, and dig through embedded.com. For textbooks, I've used Koren's Computer Arithmetic Algorithms. I've also heard good things about Crenshaw's Math Toolkit for Real-Time Programming.
First, I will just end up testing this. I don't doubt the accuracy; I simulated a Taylor series to 5 terms and it was plenty accurate (error less than 0.002).
My only concern is range because I'd like to keep my program simple and not have to make it more complicated to handle the 180 degree range.
Lastly, I had to keep this short because my browser likes to crash on me and I have to leave the computer soon. Also, I think I just proved that I could make a function that is more accurate.
I simulated a Taylor series to 5 terms and it was plenty accurate (error less than 0.002).
So you only need sin or cos accurate to 0.002?
Just use newlib's sin and cos, it is likely way, way, way, way, way better than that.
The terms for k_sin are given to over 20 significant digits, and term 6 is at 10^-10.
My only concern is range because I'd like to keep my program simple and not have to make it more complicated to handle the 180 degree range.
I don't understand what the concern is.
a. The absolute values in the range 0 to pi/2 are the only numbers possible for sin and cos. Everything else is a 'reflection'; there are no more numbers: sin(2*pi + x) == sin(x), etc.
b. the sin() wrapper around k_sin() (same for cos) handles the reflection anyway.
Also, I think I just proved that I could make a function that is more accurate.
What is the proof?
How accurate is newlib's sin and cos functions?
I think they are much more accurate than an error of 0.002
Ok, so if the cos() wrapper handles the reflection then I have no problem. I do understand the idea of reflections between quadrants for various angles, I just didn't want to have to deal with them.
For the proof, I can just keep adding terms to the Taylor Series and it would have to be, at some point, more accurate. Still, you are right that it would take months for me to make a formal math proof that any equation I make is always more accurate.
I had a look at sin; it first reduces the value of x to be in the range [0,pi/2], and determines which 'octant' x is in.
To do this it uses __ieee754_rem_pio2, which is prepared to use a value of pi up to 476 decimal digits long, to ensure there isn't too much loss of precision.
sin then uses either a sin kernel or a cos kernel to generate the numeric value of sin(x) in the range [0, pi/4] (it uses cos for the complementary octants) and rotates or reflects them to map to the correct octant of x.
So those 20+ significant digit terms, across a dynamic range of 30 decimal places, are being applied to angles of 0 to pi/4 (i.e. less than 1.0).
I'd SWAG the error bound as smaller than 10^-10 (or they were just picking random numbers for the terms :-), and WAG the error bound as smaller than 10^-20 across the range.
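You can see why that very long value of pi matters with a quick sketch (1e10 rad is an arbitrary big angle; the naive fmod() reduction uses only the 53-bit double value of 2*pi):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Roughly 1.6e9 periods' worth of half-ulp errors in 2*M_PI
       add up to a visible phase error in the naive reduction. */
    double x = 1.0e10;
    printf("library sin(x)  = %.17g\n", sin(x));
    printf("naive reduction = %.17g\n", sin(fmod(x, 2.0 * M_PI)));
    return 0;
}

On my understanding the two should disagree somewhere around the 7th decimal place, which is exactly the kind of loss __ieee754_rem_pio2 is there to prevent.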
Sorry our posts crossed.
For the proof, I can just keep adding terms to the Taylor Series and it would have to be, at some point, more accurate.
There is no doubt that, using enough terms with arbitrary precision arithmetic, a more precise sin or cos could be produced.
The problem I think we are all trying to communicate is that the double precision arithmetic available uses a finite number of (binary) digits. Worse, the errors build up in non-uniform ways. It is not necessarily straightforward to code that series, using those terms, and get close to the 52+1 bits of precision theoretically available.
So, if there really is a need for better than 10 significant digits, then I agree with larryang. IMHO, the best option is to use fixed point with a lot more precision than a double's 52+1 bits. That way I think you can manage the error bound more easily.
So we're dealing with nano-scale errors. I think I can deal with that!
I'm slightly confused at the complexity here. I know that two Taylor Series can be used to get the values over the entire range from 0 to 2 pi with very little complexity. With some more coding, it could even reduce the number of terms needed by using only the values [0, pi/2]. Maybe I'll write myself a small function and compare it just for fun. I'll let you all know how that goes.
Sorry, crossed again.
I think it would be fair to say that fixed point is like nano-scale. You can nail what the error bound is.
Floating point is more subtle. At the risk of being too simplistic, the error bound depends on the largest number involved, so smaller terms can become irrelevant if the function isn't coded carefully.
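A tiny demonstration of that absorption effect (values arbitrary):

#include <stdio.h>

int main(void)
{
    /* Once one term is ~2^53 times larger, the small term vanishes. */
    printf("%g\n", (1.0e16 + 1.0) - 1.0e16);  /* prints 0: the 1.0 was absorbed */
    printf("%g\n", (1.0e15 + 1.0) - 1.0e15);  /* prints 1: still within 53 bits */
    return 0;
}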
As it happens, you can keep all the arithmetic for sin and cos within pi/4, which makes it a bit easier because the dynamic range is constrained to [0, pi/4], i.e. less than 0.8.
If you need other functions too, e.g. log, ln, etc. it may be nastier.