Hi Crenn, I'm using floats for parksin and theta, so 7 decimal digits.
Sin/Cos are in radians so I can't use integers etc for theta.
Wait....I'm having a 'durrrr' moment, can I use an integer for theta instead of a float?....
You can, but what I'm trying to point out is that you could use sinf()/cosf() to speed things up if you only need single floating-point precision and not double.
http://www.codecogs.com/reference/c/math.h/cos.php?alias=cosf
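Something like this is all I mean (just a sketch; I'm guessing at your variable names, e.g. parkcos, going by what you wrote above):

#include <math.h>

float theta   = 1.2345f;        // angle in radians, single precision
float parksin = sinf(theta);    // single-precision sine
float parkcos = cosf(theta);    // single-precision cosine
// the plain sin()/cos() calls promote to double and back, which is
// typically slower in software floating point on a chip with no FPU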
Hi crenn, I can't get your link to work, but if I understand correctly you are talking about taking an integer number, dividing it by 10 and putting it into a float variable then doing the sin/cos on that number?
It will only have 1 digit after the decimal point so will run quicker?
If I have a minimum gradation of 0.1 radians, that corresponds to 5.73 degrees, which does not seem like enough precision. I think I'd need to work out how much precision I actually need.
My PWM frequency is 10 kHz and my maximum physical speed is 5500 rpm. The motor has 6 electrical revolutions per single physical revolution. I should be able to figure out what kind of spatial angular resolution makes sense without overkill.
Ok I think this might be the way to go, using CORDIC for the sin/cos:
http://en.wikipedia.org/wiki/CORDIC
I found a number of examples, here are some for folks that are interested:
http://www.dcs.gla.ac.uk/~jhw/cordic/index.html
http://www.emesystems.com/BS2mathC.htm
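The core of it is only shifts and adds; roughly something like this (my own rough Q16.16 sketch, good for angles within about +/- pi/2, not code taken from those pages):

#include <math.h>
#include <stdint.h>

#define CORDIC_ITERS 16
static int32_t cordic_atan[CORDIC_ITERS];      // atan(2^-i) in Q16.16, filled once
static const int32_t CORDIC_GAIN = 39797;      // ~0.607253 in Q16.16

void cordic_init(void) {
    for (int i = 0; i < CORDIC_ITERS; i++)
        cordic_atan[i] = (int32_t)(atanf(1.0f / (1 << i)) * 65536.0f);
}

// angle is radians in Q16.16; *s and *c come back as Q16.16 sin/cos
void cordic_sincos(int32_t angle, int32_t *s, int32_t *c) {
    int32_t x = CORDIC_GAIN, y = 0, z = angle;
    for (int i = 0; i < CORDIC_ITERS; i++) {
        int32_t xn;
        if (z >= 0) {                          // rotate the residual angle towards zero
            xn = x - (y >> i);  y += (x >> i);  z -= cordic_atan[i];
        } else {
            xn = x + (y >> i);  y -= (x >> i);  z += cordic_atan[i];
        }
        x = xn;
    }
    *c = x;                                    // ~cos(angle)
    *s = y;                                    // ~sin(angle)
}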
bart_dood - have you considered doing the calculations using fixed point:
crenn got this library working:
http://en.wikipedia.org/wiki/Libfixmath
and I think robodude666 got some big (more than 2x, maybe 4x?) speedups.
It is only Q16.16 precision, which might not be enough precision for your purposes.
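In outline it would be something like this (a minimal sketch, assuming the usual libfixmath entry points - fix16_from_float, fix16_sin, fix16_cos, fix16_to_float - from memory, so check the header and names against the library; this is not what crenn or robodude666 actually wrote):

#include <fix16.h>                 // from libfixmath

void park_sin_cos(float theta_rad, float *s, float *c) {
    fix16_t theta = fix16_from_float(theta_rad);   // Q16.16 angle in radians
    *s = fix16_to_float(fix16_sin(theta));         // integer-only maths inside
    *c = fix16_to_float(fix16_cos(theta));
}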
Thanks gbulmer, but I think 4x is still too slow. Currently sin and cos take 50 microseconds each, so 100 microseconds total. A 4x speedup would still leave 25 microseconds, which, added to the 55-57 microseconds for the other code, puts me over 80, and that is pushing it.
I am leaning towards a lookup table. The reason is that I have just been calculating how much angular resolution actually makes sense. Here's my rationale:
The minimum mechanical RPM the motor will turn at in practice is 1000 rpm (I have data on the motor), which equates to 6000 electrical rpm (6 electrical revolutions per mechanical revolution). At the minimum rpm there will be the greatest number of PWM periods per electrical revolution, so this is the case that requires the highest angular precision (the angle and sin/cos need to be updated on every PWM computation).
6000 electrical rpm equates to 100 electrical revolutions per second. The PWM frequency is 10 kHz, so the number of PWM periods per electrical revolution is simply 10,000/100, which is 100. So in one electrical revolution the angle and sin/cos only need to be calculated 100 times, and 360/100 gives just 3.6 degrees between each computation.
However, the angle won't always fall on exact multiples of 3.6 degrees, so it makes sense to have a lookup table with more resolution. 360 data points or so seems to make sense, i.e. increments of 1 degree. This seems perfectly manageable for a lookup table.
I'll perhaps test it out and report back..thanks for all the help
bart_dood - that is a clear analysis, thank you, I think I get it. Sorry for wasting your time with the fixed point suggestion.
I agree with you; I love lookup tables. Having 128KBytes (or even more on RET6/Native) lets me have quite precise tables. I'd be tempted to either have more resolution, and/or store values to make acceleration and deceleration easier to calculate.
I think I understand the 'worst case', but what is the other extreme?
When it is running as fast as it can go, do you need to have several different PWM values to get smooth motion? It might be worth running that estimate too, seeing how many steps it needs, and checking to see if the resolution based on that fits nicely.
Also, how smooth does acceleration and/or deceleration need to be?
A change of one part in 360 (assuming 1 degree) to accelerate from slowest to a tiny bit faster sounds very good, but what is the change of speed at the top end? Again, a 'back of an envelope' for a couple of minutes will very likely be enough to estimate these factors and give you a comfortable feeling.
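For example, with the numbers you quoted earlier (5500 rpm mechanical maximum, 6 electrical revolutions per mechanical, 10 kHz PWM), top speed is 5500 x 6 = 33000 electrical rpm, about 550 electrical revolutions per second, so only roughly 10000/550, i.e. about 18 PWM updates per electrical revolution, or about 20 degrees of electrical angle between updates. Treat that as a rough check rather than gospel, but it is the kind of figure worth comparing against your table resolution.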
HTH, and I apologise if it is clear to you already.
(I'm not asking you to spend your time creating another post, just suggesting that top speed and acceleration are worth sketching out too)
(full disclosure: I am not a member of LeafLabs staff)
Hi gbulmer, no offense taken! I am still working through all this and it's nice to talk it through; it does help clarify things in my mind!
I have made some major progress; the lookup table approach has gone great. I decided on 720 floating-point values each for sin and cos, since I can afford the memory. The other beauty of this approach is that it cleaned up all my angle and velocity calculations too: instead of floating-point radians for angles, all I need now is a simple integer that indexes the lookup table. Each integer step represents 0.5 degrees, so 0-360 degrees maps onto 0-720 in the table.
It's really the way to go for this.
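In outline it looks something like this (a simplified sketch rather than my exact code; the table build and the wrap handling here are just illustrative):

#include <math.h>

#define TABLE_SIZE 720                       // one entry per 0.5 degree
static float sinTable[TABLE_SIZE];
static float cosTable[TABLE_SIZE];

void buildTables(void) {
    for (int i = 0; i < TABLE_SIZE; i++) {
        float rad = i * 0.5f * 3.14159265f / 180.0f;
        sinTable[i] = sinf(rad);
        cosTable[i] = cosf(rad);
    }
}

// 'angle' is the integer index: 0..719 covers 0..359.5 degrees
static inline float fastSin(int angle) { return sinTable[angle % TABLE_SIZE]; }
static inline float fastCos(int angle) { return cosTable[angle % TABLE_SIZE]; }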
I just now (like 5 minutes ago) ran a test app again with my oscilloscope to see how fast everything runs, and with every function for the closed-loop control I'm down to 59 microseconds. I think I might be able to squeeze out another 1-2 microseconds. For comparison, Microchip has a 3-phase motor application running on one of their 32-bit DSPs at 45 microseconds, and that is with quite a lot of low-level code involved. So I'm happy the Maple is very close to that ballpark despite the higher-level, simpler code I am using.
Next I need to confirm all my numbers etc coming out are correct, do some more tidying and perhaps try the code out in the next couple of weeks.
I still have events that have to trigger on external interrupts (hall sensors), so they will add more time, but it should be very little.
I have other questions too, like: do micros() values work inside interrupts? I know millis() doesn't, but I need some basis for measuring time.
Thanks again everyone!
bart_dood
I have other questions too, like: do micros() values work inside interrupts? I know millis() doesn't, but I need some basis for measuring time.
All of the source of the maple libraries comes with the IDE, so you can go look.
To save you some time, here is the source of micros():
static inline uint32 micros(void) {
    uint32 ms;
    uint32 cycle_cnt;
    uint32 res;

    do {
        cycle_cnt = systick_get_count();
        ms = millis();
    } while (ms != millis());

    /* SYSTICK_RELOAD_VAL is 1 less than the number of cycles it
       actually takes to complete a SysTick reload */
    res = (ms * US_PER_MS) +
          (SYSTICK_RELOAD_VAL + 1 - cycle_cnt) / CYCLES_PER_MICROSECOND;
    return res;
}
Hmmm. I think there might be a bug.
I think
do {
    cycle_cnt = systick_get_count();
    ms = millis();
} while (ms != millis());
should be
do {
    ms = millis();
    cycle_cnt = systick_get_count();
} while (ms != millis());
Let's see what LeafLabs say?
http://forums.leaflabs.com/topic.php?id=1098
Otherwise, I believe millis() and micros() have the same problem: the SysTick interrupt will be blocked while you are executing code inside an interrupt service routine with a higher priority. One SysTick interrupt should not be lost, because the NVIC remembers it as pending, but the internal value behind millis() might fall behind by missing the second and subsequent SysTick interrupts if your higher-priority interrupt routine lasts too long.
Could you use a timer inside your interrupt routine instead?
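For example, a spare hardware timer could be set up as a free-running microsecond counter and read directly (a sketch only; the timer number is just an example, pick one you are not already using for PWM, and I'm assuming the same CYCLES_PER_MICROSECOND (72) that appears in the micros() source above):

HardwareTimer usTimer(4);   // timer number is just an example

void setupMicrosecondTimer() {
    usTimer.pause();
    usTimer.setPrescaleFactor(CYCLES_PER_MICROSECOND); // 72 MHz / 72 = 1 tick per microsecond
    usTimer.setOverflow(65535);                        // let it free-run over the full 16 bits
    usTimer.refresh();
    usTimer.resume();
}

// safe to read inside an ISR; it wraps every ~65.5 ms, so only use it for
// short relative measurements such as the time between hall edges
uint16 timerMicros() {
    return usTimer.getCount();
}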
Well, this project is certainly challenging. After making some changes and updates over the weekend my code has slowed down a lot again, and now I have to figure out why. It seems very sensitive to any changes.
Thanks gbulmer for the information. I don't think I can get away without some kind of time reference. Basically I have a timed interrupt that fires every 100 microseconds, and the code inside this interrupt is the code that must run fast; let's say it takes 60 microseconds. On top of this I have three external interrupts, so when a TTL change happens the handler updates the motor angle and then recomputes the motor speed. It currently does this by looking at the time difference between the last interrupt and the current time; it knows exactly how much the angle has changed, so it can calculate the rotational speed. This is all quite simple integer code, so it should happen very fast.
I need to update the speed regularly because inside the fast 100 microsecond interrupt the motor position is re-computed every time; I use time and speed to estimate the angular position. I know the last angular position and the speed, so it's simply a matter of calculating how far the motor has moved.
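Roughly, the idea in code is this (a sketch only: the 60-electrical-degree spacing between hall edges, the variable names and the timeNowMicros() source are assumptions for illustration, not my actual code):

// whatever microsecond time source ends up being used (micros() or a free-running timer)
extern uint16 timeNowMicros(void);

volatile int lastAngle = 0;                   // table index, 0..719 (0.5 degree steps)
volatile int speedMilliStepsPerUs = 0;        // speed x1000 to keep the maths in integers
volatile uint16 lastEdgeTime = 0;

void hallEdgeISR(void) {                      // attached to each hall sensor edge
    uint16 now = timeNowMicros();
    uint16 dt = (uint16)(now - lastEdgeTime); // unsigned subtraction handles one wrap
    lastEdgeTime = now;
    lastAngle = (lastAngle + 120) % 720;      // 60 electrical degrees per edge (assumed)
    if (dt > 0)
        speedMilliStepsPerUs = (120 * 1000) / dt;
}

// called from the 100 microsecond control interrupt to estimate where we are now
int estimateAngle(void) {
    uint16 elapsed = (uint16)(timeNowMicros() - lastEdgeTime);
    return (lastAngle + (speedMilliStepsPerUs * elapsed) / 1000) % 720;
}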
Are there any references on using a timer instead of micros() online?
thanks again
Hi bart_dood,
It's a really nice project you've got going here. I'll probably have to clone your motor control if I stumble upon an inverter and a three-phase motor. I have previously worked with 3-phase motor control, so I just spotted an option to save some memory:
The sine function is symmetrical, so you can actually get away with only storing 1/4 of the sine.
Don't know if this is stating the obvious, but here goes.
From 0 to pi/2 you use the lookup in memory.
From pi/2 to pi you can step your way back from pi/2 to 0; it's the same numbers, just reversed in order.
From pi to 3*pi/2 it's just the negative of the values from 0 to pi/2,
and from 3*pi/2 to 2*pi it's the same as stepping back from pi/2 to 0, only negated.
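Applied to the 720-step table you described, it could look something like this (a rough sketch; the folding indices are mine, so double-check the edge cases):

#include <math.h>

static float quarterSinTable[181];            // sin(0)..sin(90 deg), 0.5 deg steps

void buildQuarterTable(void) {
    for (int i = 0; i <= 180; i++)
        quarterSinTable[i] = sinf(i * 0.5f * 3.14159265f / 180.0f);
}

float foldedSin(int i) {                      // i is the 0..719 half-degree index
    i %= 720;                                 // assumes i is non-negative
    if (i <= 180) return  quarterSinTable[i];         //   0..90  deg
    if (i <= 360) return  quarterSinTable[360 - i];   //  90..180 deg, mirrored
    if (i <= 540) return -quarterSinTable[i - 360];   // 180..270 deg, negated
    return              -quarterSinTable[720 - i];    // 270..360 deg, mirrored and negated
}

float foldedCos(int i) { return foldedSin(i + 180); } // cos(x) = sin(x + 90 deg)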
Have you thought about some dead-time compensation? I haven't spotted the dead-time figure for your inverter, so I don't know how much of a problem it will be; perhaps it is already compensated in the logic?
I don't have a paper right here, but I can probably find one. I'm no expert at SVM, since I used sinusoidal PWM with 3rd harmonic injection. I don't know which of the two is the most computationally intensive, but it could be a cool study.
I will be receiving my Maple Native tomorrow, so perhaps I will see if I can run your code :)
Best of luck on your project!
hi mbk,
Thanks for the idea; it would save me memory and it's a good one. I don't think I'm limited by memory right now, though; it's more about speed and making sure the code is functioning correctly.
I did do some work on the firmware yesterday and got it running quicker again. It's interesting what kinds of things make a difference. For example, I have some IF statements in a PI loop; if I change the value they compare against from a variable to a literal number, the code slows down by about 0.25-0.5 microseconds per IF statement.
Another example: with an analogRead(), if you do the math on the same line (subtract a value, multiply by a scalar, etc.) it runs slower than putting the math on a separate line in the code.
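For illustration, this is the kind of difference I mean (the pin number and scaling here are just placeholders, not my real values):

int raw = analogRead(3);                       // read first...
float current = (raw - 2048) * 0.01f;          // ...then do the math on its own line

// versus doing it all on one line, which measured slower for me:
// float current = (analogRead(3) - 2048) * 0.01f;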
As far as dead time goes, I'm using a driver chip that automatically creates dead time based on a clock. It works perfectly, as I've checked it with the oscilloscope. The part is the IXDP630, which is the basic RC-oscillator version; the crystal version is the IXDP631. The chip is 5 volt TTL, so I have level converters to take the 3.3 volts from the Maple up to drive it.
The output from my code is not currently correct; finding out why is what I am working on next.
So I figured out I had made some stupid typos, which is why my PWM outputs weren't working as expected. They are fixed now, so I'm at the point where my code runs in 64 microseconds, which is fast enough (though I'd still like to shave some time off), and my outputs seem to make sense (although I need to check over a wider parameter range).
The next thing I need to figure out is this timer problem. If I can't use micros() at all then I need some other kind of time solution. I'm examining where I use micros() right now and seeing whether it's OK if it doesn't increment, depending on where I use it.
If anyone has any suggestions for this I'd appreciate it. If it would help I can paste my timer setups up here: I have one 100 microsecond timed interrupt and three external interrupts (which are very fast and just read digital inputs high/low and measure time).
thanks!
Regarding micros() - can't you just lower the priorities of the offending interrupts (your timers), or perhaps raise the priority of the SysTick? (For some reason I thought SysTick was already higher priority than most things.)
Another alternative is to actually read the value in a timer - they are counters, after all. If they roll over too often, you can cascade them, so that when one timer rolls over it auto-increments another timer rather than triggering an interrupt.
If you don't really care about the "system time" and you just want to know relative times between external events, or the period of a cycle or whatever (sorry! I only skimmed the thread...), then reading out the timer's counter value is almost certainly what you want.
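For instance, with a timer prescaled to tick at 1 MHz, the relative-time measurement is just this (a rough sketch against the Maple HardwareTimer API):

uint16 lastEvent = 0;

uint16 periodMicros(HardwareTimer &t) {        // t: a timer already ticking at 1 MHz
    uint16 now = t.getCount();
    uint16 period = (uint16)(now - lastEvent); // unsigned wrap handles one rollover
    lastEvent = now;
    return period;                             // microseconds, up to ~65 ms
}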