What is the application anyway? I take it that performance isn't a high priority.
Math Features
(46 posts) (6 voices)
Posted 5 years ago # -
Well, I need to calculate the amount of air left in a cylinder based on how high the piston head is. I know the angle at which air starts being compressed; it is determined from BDC (bottom dead center).
50+50*cos(theta)
gives me the amount of air remaining as a direct percentage (of 100). Performance is a "high priority," but I don't think it is in the way you intended it. I have a maximum resolution for my output of 10us, which is remarkable for engine control, but I don't need extreme resolution elsewhere because of that constraint.

Posted 5 years ago # -
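As an aside, here is that formula as a minimal, runnable C sketch (an illustration added for clarity, not code from the thread; treating theta as the crank angle in radians measured from BDC is an assumption based on the description above):

#include <math.h>
#include <stdio.h>

/* Percentage of air remaining in the cylinder, using the formula
   50 + 50*cos(theta), with theta in radians measured from BDC. */
double air_remaining_percent(double theta)
{
    return 50.0 + 50.0 * cos(theta);
}

int main(void)
{
    /* At BDC (theta = 0) the cylinder is full; at TDC (theta = pi) it is empty. */
    printf("%.2f%%\n", air_remaining_percent(0.0));      /* 100.00 */
    printf("%.2f%%\n", air_remaining_percent(M_PI / 2)); /*  50.00 */
    printf("%.2f%%\n", air_remaining_percent(M_PI));     /*   0.00 */
    return 0;
}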
I have a maximum resolution for my output of 10us
Okay, so how fast is the engine?
If it were an ordinary car engine, it would be about 6,000rpm, and a Formula One car engine is about 18,000rpm (about as fast as very fast motorcycle engines). So those are 100rps to 300rps, or 10,000us to 3,333us per revolution respectively.
If the resolution needed is about 10us, that equates to a resolution limit of 1 part in 1,000 to 1 part in 333, or under 4 significant digits.

Posted 5 years ago # -
4000 rpm safe upper limit. 15000 us/rev. I'm not sure how 10us resolution = 1 part per thousand. Shouldn't it be 1 part per hundred? In any case, I am still under 4 significant digits.
Posted 5 years ago # -
Silntknight - As you say 4000rpm =~ 66.7 rps = 15,000us per revolution
a resolution of 10us represents 1 part in 15,000us/10us = 1 part in 1,500
(For the other calculations, e.g. 10,000us per revolution yields 10,000us/10us = 1 part in 1,000.) Yes, I agree, 1 part in 1,500 is well within 4 significant digits; it is within 11 bits.
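To make that arithmetic easy to check, here is the same calculation as a small illustrative C snippet (added for clarity; the 4,000rpm and 10us figures are the ones discussed above):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double rpm        = 4000.0;            /* safe upper engine speed            */
    double rps        = rpm / 60.0;        /* ~66.7 revolutions per second       */
    double us_per_rev = 1.0e6 / rps;       /* 15,000 microseconds per revolution */
    double parts      = us_per_rev / 10.0; /* 10us resolution -> 1 part in 1,500 */
    double bits       = log2(parts);       /* ~10.6, i.e. within 11 bits         */

    printf("%.0f us/rev, 1 part in %.0f, %.1f bits\n", us_per_rev, parts, bits);
    return 0;
}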
If you have a good math tool, I'd be interested in the error in newlib's cos. I would be very surprised if it were worse than 10 significant digits, and I'd expect better than 20 significant digits (... deleted ...). (EDIT: I must have gone a bit mad; 10 significant digits is plausible, but 20 is too ambitious. Double precision has a 52+1 bit mantissa, and 53*log10(2) is just under 16 decimal digits, so where I got 20 digits from, I can't imagine. Sorry)
I did notice that the article at http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems had what IMHO is a helpful explanation about the problems of using floating point. It also has a few examples of techniques for reducing some of the problems, and references to deeper analysis and better approaches.
Posted 5 years ago # -
I did a test to calculate sin() of 256 numbers between 0 and 2*pi on the Maple. All values were the same to the last bit when I compared to the same calculation in Matlab (calculated by the math co-processor in a Core i7).
Posted 5 years ago # -
The same to how many places? Did you use format long? I'd be interested to see an RMS error for this (just out of curiosity). Anyway, it's good to know. When I did my analysis for the Taylor Series, it diverged after about 170 degrees, but that's because I used 4 terms. When I changed it to 5 there was no visible divergence over the range. I didn't do an RMS analysis though. I might do that soon if I remember.
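For context, here is the kind of truncated Taylor series being compared, as a minimal C sketch (my own illustration; the exact terms and evaluation range used in the original analysis are assumptions):

#include <math.h>
#include <stdio.h>

/* sin(x) from the first n terms of its Taylor series about 0:
   x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - ... */
double taylor_sin(double x, int n_terms)
{
    double term = x; /* current term, starting with x */
    double sum  = x;
    for (int k = 1; k < n_terms; k++) {
        term *= -x * x / ((2.0 * k) * (2.0 * k + 1.0));
        sum  += term;
    }
    return sum;
}

int main(void)
{
    /* Compare 4-term and 5-term truncations against the library sin() near 170 degrees. */
    double x = 170.0 * M_PI / 180.0;
    printf("library sin: %.12f\n", sin(x));
    printf("4 terms    : %.12f\n", taylor_sin(x, 4));
    printf("5 terms    : %.12f\n", taylor_sin(x, 5));
    return 0;
}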
Posted 5 years ago # -
Every bit was identical. I used doubles on the Maple and printed the eight bytes in hexadecimal form. Like this:
void loop() {
    double t;
    unsigned char *b;

    // Let b point to the bytes in t
    b = (unsigned char*)&t;

    if (SerialUSB.available()) {
        SerialUSB.read();
        for (uint16 i = 0; i < 256; i++) {
            t = sin(2*M_PI/255.0 * (double)i);
            // print t in hexadecimal form
            for (int8 j = 7; j >= 0; j--) {
                if (b[j] < 16) SerialUSB.print('0');
                SerialUSB.print(b[j], HEX);
            }
            SerialUSB.println();
        }
    }
}
and got output like this (first few lines)
0000000000000000
3F993A8F3A344071
3FA9389957FE39D2
3FB2E7FFCE3B453B
3FB930C2B920DC80
3FBF759B65775EBB
3FC2DAC833E51C96

This can be read by Matlab's (or Octave's) hex2num() to convert it back to doubles.
In Matlab I compared it to

k = (0:255)';
x = sin(2*pi/255*k);
and the difference was zero everywhere. Well, this only means that the difference is less than about 2.2204e-16 (eps in Matlab). So I generated the hex numbers in Matlab with num2hex() and the generated sequence was identical to that generated by the Maple. So every bit matched.
(I first made a mistake and wrote it in another order in Matlab, like "x = sin(2*pi*k/255);" (multiplying by k before dividing by 255); then the biggest difference was about 9e-16.)
By the way, I also did some timing of the calculations of t. If I remember correctly (I don't have the Maple at hand right now), they were about 2,200 cycles each (with some variation).
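For anyone without Matlab or Octave, here is a small illustrative C snippet (my addition, assuming IEEE-754 doubles and that the integer and double byte orders match, as they do on common platforms) that turns one of the hex lines above back into a double and compares it with the host's sin():

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Second line of the dump above, i.e. sin(2*pi/255 * 1). */
    const char *hex = "3F993A8F3A344071";

    uint64_t bits = strtoull(hex, NULL, 16);
    double from_maple;
    memcpy(&from_maple, &bits, sizeof from_maple); /* reinterpret the 64 bits as a double */

    double reference = sin(2.0 * M_PI / 255.0);
    printf("Maple: %.17g\nhost : %.17g\ndiff : %g\n",
           from_maple, reference, from_maple - reference);
    return 0;
}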
Posted 5 years ago # -
sniglen - very nice piece of work.
There is an explanation which my 'naughty pixie' side is forcing me to write ...
It could be that Matlab uses the same cos()/sin() function code as newlib.
sin()/cos() were originally written in 1993 by Sun Microsystems' SunPro (which I believe was the software/compiler group). The code has an Open Source license. So it may be that Matlab's math library has exactly the same code! I am not trying to suggest that the code is anything less than superb, only that if it is the same code, and the soft floating point library on the Maple is very good, they would give the same answer.
I apologise for raising this and creating FUD, but "the pixie made me do it" ;-)
Posted 5 years ago # -
Well, if the two use different code, then the trig functions in newlib are very impressive; I would readily trust MATLAB. If they are the same, then I have no reason to doubt newlib's accuracy.
Posted 5 years ago # -
I don't think Matlab uses a software algorithm for sin(); it just calls the FSIN instruction (or similar), so the calculation is done by the FPU.
Posted 5 years ago # -
gbulmer -- looks like you may be right:
http://www.mathworks.co.uk/matlabcentral/newsreader/view_thread/53534
(a thread about trig accuracy issues in old versions of Matlab, circa 2003)
relevant quote follows, last sentence in particular is telling:
The error with sin(x) for large values of x in MATLAB 5.2 was fixed in
subsequent releases. The cause of the problem was a bug in the Microsoft
compiler used to build MATLAB. It was being overly aggressive in
optimizing certain calls to the C standard math library. We resolved this
and some similar problems by switching to a high-quality, robust,
replacement math library from Sun.

--
Steve Eddins
The MathWorks, Inc.
[Mail sent to news4@eddins.net is forwarded to my MathWorks address.]

Posted 5 years ago # -
mbolivar - thanks for finding that!
The 'naughty pixie' (on my left shoulder) is giggling very loudly in my ear :-)

I am NOT suggesting that the Maple sin/cos functions are anything less than superb.
The indication that Matlab may have been using it since 2003 does suggest that the folks from SunPro did a very good job back in 1993, and should be feeling happily smug. (I suggest they celebrate by sipping a warm mug of Green & Black's organic cocoa with added honey and brandy - liquid alcoholic dark chocolate Toblerone, yum B-)
Posted 5 years ago # -
Removed double post
Posted 5 years ago # -
No wonder it's the same result then :-)
"doc sin" in Matlab (r2007b) gives this information
Algorithm
sin uses FDLIBM, which was developed at SunSoft, a Sun Microsystems, Inc. business, by Kwok C. Ng, and others. For information about FDLIBM, see http://www.netlib.org.
which leads to this. I think you've seen it before ;-). I will have a look at the documentation a little earlier next time.
Posted 5 years ago #