For my project I need to predict when a level will reach a certain point. I'm going to assume a linear model (y = ax + b, with y = level, x = time and b = y_0); finding the x value at which y = 0 is then trivial (x_end = -b/a). My initial thoughts are to approach the parameter estimation in one of two ways:
1. To run a recursive least squares (RLS) algorithm over the whole time the system runs (sketched just after this list)
2. To run ordinary least squares (OLS) over a FIFO of the last 'n' readings of the level (see the second sketch, after my question below)
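For concreteness, this is roughly what I have in mind for approach 1 - a textbook RLS update for the two-parameter model with regressor phi = (x, 1). It's only a sketch (all names are mine, and it's in floating point purely so the per-update operation count is easy to see; the real thing would be fixed point):

```c
/* Recursive least squares for y = a*x + b, regressor phi = (x, 1).
   Floating point for readability; a fixed-point port needs care,
   since x grows without bound if the filter runs forever. */
typedef struct {
    float a, b;           /* current parameter estimates       */
    float p00, p01, p11;  /* symmetric 2x2 covariance matrix P */
    float lambda;         /* forgetting factor; 1.0f = classic
                             RLS over all time                 */
} rls_t;

static void rls_init(rls_t *r, float lambda)
{
    r->a = r->b = 0.0f;
    r->p00 = r->p11 = 1e6f;   /* large initial covariance */
    r->p01 = 0.0f;
    r->lambda = lambda;
}

static void rls_update(rls_t *r, float x, float y)
{
    /* P*phi */
    float pp0 = r->p00 * x + r->p01;
    float pp1 = r->p01 * x + r->p11;
    /* gain k = P*phi / (lambda + phi'*P*phi) */
    float den = r->lambda + x * pp0 + pp1;
    float k0 = pp0 / den;
    float k1 = pp1 / den;
    /* innovation and parameter update */
    float e = y - (r->a * x + r->b);
    r->a += k0 * e;
    r->b += k1 * e;
    /* P = (P - k*(P*phi)') / lambda, exploiting symmetry */
    r->p00 = (r->p00 - k0 * pp0) / r->lambda;
    r->p01 = (r->p01 - k0 * pp1) / r->lambda;
    r->p11 = (r->p11 - k1 * pp1) / r->lambda;
}
```

By my count that's roughly a dozen multiply/divide operations per sample, all on quantities whose dynamic range I'd have to pin down before converting to fixed point.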
The level changes at a roughly constant rate (give or take measurement noise), so the two methods should give the same answer (to within an acceptable tolerance). My question is:
Which would give the lower computational load - assuming that any matrix/vector work is carried out in fixed point (either as precalculated expressions or using Trenki's matrix library)?
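For what it's worth, approach 2 may be cheaper than it first looks: with a fixed sample rate I can take x = 0..n-1 relative to the start of the window, so sum(x) and sum(x^2) are compile-time constants, and (if I've derived the shift correctly) sum(y) and sum(x*y) can be updated in O(1) as each new reading displaces the oldest. A sketch in integer/Q16 fixed point (N, the Q format and all names are my own choices):

```c
#include <stdint.h>

#define N 16  /* window length */

/* For x = 0..N-1 (uniform sample spacing) these are constants:
   Sx = sum(x) = N(N-1)/2,  Sxx = sum(x^2) = N(N-1)(2N-1)/6 */
#define SX   ((int64_t)N * (N - 1) / 2)
#define SXX  ((int64_t)N * (N - 1) * (2 * N - 1) / 6)

typedef struct {
    int32_t buf[N];   /* FIFO of the last N level readings     */
    int32_t sy;       /* running sum(y) over the window        */
    int32_t sxy;      /* running sum(x*y), x relative to window */
    int     head;     /* index of the oldest sample            */
    int     count;    /* samples seen so far (<= N)            */
} winfit_t;

/* Push one reading; O(1). Returns nonzero once the window is full. */
static int winfit_push(winfit_t *w, int32_t y)
{
    if (w->count < N) {                    /* still filling up */
        w->buf[w->count] = y;
        w->sxy += (int32_t)w->count * y;
        w->sy  += y;
        return ++w->count == N;
    }
    int32_t y0 = w->buf[w->head];          /* reading that drops out */
    w->buf[w->head] = y;
    w->head = (w->head + 1) % N;
    /* re-index x to 0..N-1 after the shift:
       Sxy' = Sxy - (Sy - y0) + (N-1)*y,  Sy' = Sy - y0 + y */
    w->sxy += y0 - w->sy + (int32_t)(N - 1) * y;
    w->sy  += y - y0;
    return 1;
}

/* OLS slope/intercept in Q16:
   a = (N*Sxy - Sx*Sy) / (N*Sxx - Sx^2),  b = (Sy - a*Sx) / N */
static void winfit_solve(const winfit_t *w, int32_t *a_q16, int32_t *b_q16)
{
    int64_t num = (int64_t)N * w->sxy - SX * w->sy;
    int64_t den = (int64_t)N * SXX - SX * SX;    /* constant */
    *a_q16 = (int32_t)(num * 65536 / den);       /* 65536 = Q16 scale */
    *b_q16 = (int32_t)(((int64_t)w->sy * 65536 - *a_q16 * SX) / N);
}
```

The prediction is then x_end = -b/a, in sample intervals from the start of the window. Note that re-indexing x each shift also keeps x bounded to 0..N-1, unlike the RLS case where x grows forever. If all that's right, approach 2 needs hardly any matrix work at all, which is partly why I'm asking.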
Normally I'd simulate the two different *LS approaches in MATLAB, but I only have MATLAB Student R2009a, which doesn't include the Fixed-Point Toolbox, so I seem to have hit a bit of a brick wall!
Any ideas, please? I've seen that Octave has a fixed-point toolbox, but development on it seems to have stopped a while ago - has anyone used it?