This may be too simplistic, but...

If you think the JMA filter is one of the best, why not compare the RMS
error for your filter vs. his?

Best,
Bill "notaguru" Vedder

Alex Matulich wrote:

To any math or signal processing gurus:
I have a problem figuring out exactly how to measure the quality of
fit for a low-lag smoothing filter applied to market data. I need
such a measurement for the purpose of an optimization I'm doing in
Excel.
Background: The filter I developed exhibits similar characteristics
to Mark Jurik's JMA in that it "turns on a dime" when the market
reverses, with little or no overshoot. It's weird to see a
smoothing filter turning sharp corners. I doubt it's as good as
JMA, though, in terms of keeping up with sustained bursts of speed
lasting several bars in a row. However, I am pleased that the
algorithm is FAST, using NO loops and no if/then decisions (at least
not yet), and needing only 1 bar of lookback. Because the algorithm
consists of sequential operations with no loops, it was easy to
implement in an Excel spreadsheet.
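
To make the structure concrete, here's a rough sketch in Python of the
kind of recursion I mean. This is NOT my algorithm; it's a generic
"zero-lag" style update invented only to show the shape: each bar is a
few sequential arithmetic steps on the current price plus one bar of
prior state, with no loops inside the per-bar calculation, which is why
it drops straight into a spreadsheet (one formula per row, referencing
only the row above).

    # Hypothetical stand-in only, not the actual filter.
    def lowlag_step(price, prev_price, prev_smooth, alpha, gain):
        ema = prev_smooth + alpha * (price - prev_smooth)  # basic EMA step
        return ema + gain * (price - prev_price)           # add back part of the change to cut lag

    prices = [100.0, 100.5, 101.2, 100.8, 101.5]
    smooth = [prices[0]]
    for i in range(1, len(prices)):   # the bar-by-bar pass; each step uses 1 bar of lookback
        smooth.append(lowlag_step(prices[i], prices[i - 1], smooth[-1],
                                  alpha=2.0 / (10 + 1), gain=0.5))
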
The algorithm has some "constants" in it which aren't really constant;
they are unknown functions of N (the "length" parameter). For any
given N, the constants are fixed regardless of the input data.
There are several different constants, but I already have sensible
length-dependent functions for all but a couple of them (in some
cases it's easy: an exponential decay term can use 2/(N+1), which is
the standard decay term for an exponential moving average).
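
For example (Python; the second constant and its 1/N form are made-up
placeholders for the kind of unknown function of N I'm after):

    def filter_constants(N):
        alpha = 2.0 / (N + 1.0)   # standard EMA decay for length N
        gain = 1.0 / N            # hypothetical placeholder for an unknown function of N
        return alpha, gain
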
What I need to do is optimize these constants against some measure
of quality-of-fit, and generate a list of values versus N. To these
values I would fit some function, and then use that function in my
code.
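
In other words, the workflow is roughly this (Python sketch; "fitness"
stands for whichever quality-of-fit measure comes out of the discussion
below, and the starting guesses and bounds are arbitrary):

    import numpy as np
    from scipy.optimize import minimize

    def calibrate(data, lengths, fitness):
        # For each length N, find the constants that minimize the
        # quality-of-fit measure, then fit a simple curve (here a
        # low-order polynomial in 1/N) to each constant vs. N.
        best = []
        for N in lengths:
            res = minimize(lambda c: fitness(c, data, N),
                           x0=[2.0 / (N + 1.0), 0.5],
                           bounds=[(0.0, 1.0), (0.0, 2.0)])
            best.append(res.x)
        best = np.array(best)
        inv_n = 1.0 / np.asarray(lengths, dtype=float)
        curves = [np.polyfit(inv_n, best[:, k], 2) for k in range(best.shape[1])]
        return best, curves
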
Problem: How do you measure how well a filter fits the data?
One approach is to calculate the RMS error between the filter output
and the data. The problem is that either of the constants can control
how wiggly the filter output is. If you want to filter noise, you
don't want the filter to follow every wiggle in the data, so
optimizing the constants for minimum RMS error will give the wrong
result.
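
For reference, the RMS measure I mean is just this (Python):

    import numpy as np

    def rms_error(filtered, data):
        # root-mean-square deviation of the filter output from the raw data;
        # minimizing this alone rewards a filter that chases every wiggle
        f = np.asarray(filtered, dtype=float)
        d = np.asarray(data, dtype=float)
        return float(np.sqrt(np.mean((f - d) ** 2)))
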
Another measure is the total length of the filter's track. The
smoother the filter, the shorter its track, because it isn't wasting
motion tracking minor wiggles. However, optimizing for shortness
drives the adjustable constants into their constraints and makes the
filter too smooth.
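
By "length of the track" I mean something like this, approximating the
track length by the sum of absolute bar-to-bar changes:

    import numpy as np

    def track_length(filtered):
        # smoother output -> shorter track; minimizing this alone drives
        # the constants to their bounds and over-smooths
        f = np.asarray(filtered, dtype=float)
        return float(np.sum(np.abs(np.diff(f))))
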
Still another measure is the number of times the filter reverses
direction. This has a similar problem to the previous one.
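
Counting reversals would look something like this:

    import numpy as np

    def reversal_count(filtered):
        # number of sign changes in the bar-to-bar differences,
        # ignoring flat stretches
        d = np.diff(np.asarray(filtered, dtype=float))
        s = np.sign(d)
        s = s[s != 0]
        return int(np.sum(s[1:] != s[:-1]))
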
Multiplying two or three of the above measures together didn't yield
anything fruitful. Converting them to a vector magnitude, i.e.
sqrt(measure1^2 + measure2^2), also didn't give satisfactory results.
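
The combinations I tried were along these lines (reusing the measures
sketched above; presumably part of the trouble is that the terms live
on very different scales):

    import numpy as np

    def combined_product(filtered, data):
        # product of the three measures; the +1 just keeps a
        # reversal-free run from zeroing the whole product
        return (rms_error(filtered, data)
                * track_length(filtered)
                * (1 + reversal_count(filtered)))

    def combined_magnitude(filtered, data):
        # vector-magnitude combination of two of the measures
        return float(np.sqrt(rms_error(filtered, data) ** 2
                             + track_length(filtered) ** 2))
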
I could, of course, make adjustments to the constants by eye, for
each input length, for the test data set. However, my own judgment
of what's a "good" fit may not be valid. Also, that wouldn't give me
a set of values to which I can fit some mathematical function.