Typically in statistical sampling, extreme outliers are treated as spurious samples and left out. That way the sampled data can fit nicely with more elegant statistical distributions and can be explained well by models. On the other hand, the premise of technical analysis is that "markets are inefficient and one can profit from these inefficiencies".

Technical analysis (TA) typically assumes that market price moves are a proxy for information content and price inefficiencies. So technical trading rules typically have a price and time scale parameter, such as a number of days, for smoothing data, reflecting some cycle length, and identifying market inefficiencies to profit from. So far so good.

When it comes to markets, the days with large price moves are the ones that most reflect the inefficiencies and also trigger the most emotions.

*Now a rarely asked but important question is: would your technical indicators yield better performance when you consider only information-rich, large price move days versus considering all days?*

I recently came across a paper that targets this question. The author uses volatility as a filter to screen out noise (i.e., flat days) and include only days that are rich in information. Once these flat days are filtered out, the author applies the same trading rules to the retained days and does a before-and-after comparison of trading rule performance.

*So what are filtered days?* The paper first defines a threshold and then uses this threshold to filter out some of the days in the sample. The threshold is, for example, 25% of the standard deviation of daily returns over the full sample. Using this threshold, we filter out nearly all flat days (i.e., days with a gain or loss smaller than the threshold) from the full sample. The days of interest to us here are the retained set, i.e., the non-filtered days in the sample.
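As a minimal sketch of the idea, assuming a hypothetical return series and the fixed 25%-of-sigma threshold from the paper's example (variable names are mine):

```python
import statistics

# Hypothetical daily returns (as fractions); a stand-in for SPX data.
returns = [0.012, -0.0005, 0.0003, -0.018, 0.0041, -0.0002, 0.0097, 0.0001]

# Threshold: 25% of the full-sample standard deviation of daily returns.
threshold = 0.25 * statistics.pstdev(returns)

# Retained (non-filtered) days: absolute move at or above the threshold.
retained = [r for r in returns if abs(r) >= threshold]
```

The flat days (here the four moves of a few basis points) drop out, and any trading rule is then run on `retained` only.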

**Trading Systems & Data**

To validate whether this filtering helps, the paper picks three trading rules/systems and compares the performance of these three systems on the full sample data versus their performance on the retained data (i.e., data where flat days were filtered out). The data is the SPX daily index for the last 23 years, i.e., 1990-2012. I think this is long enough.

**Short Term System: 2-Day Run Mean Reversion**

*Rules:*

* Go long 100% in SPX at the market close when the index has been down 2 days in a row.
* Go short 100% in SPX at the market close when the index has been up 2 days in a row.
* Continue with the current position (long or short) until a switching condition is met.

*Filtering schemes:*

* Scheme 1 - Apply a fixed filter, i.e., ignore days whose return is less than 25% of the SPX daily return standard deviation, calculated over the entire period.
* Scheme 2 - Filter out all days whose return is less than 20% of the SPX daily return standard deviation, calculated over a rolling window of the last 60 days.
* Scheme 3 - Filter out all days whose return is below a threshold of 22% of the current SPX index option implied volatility.

The concept applied in the short term system is basically to ignore nearly flat days and focus on market moving days to improve short term trading rule performance. This is similar to the volatility filtering systems one hears about in TA.
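The 2-day run rule on the retained series can be sketched roughly as below. This is my own reading, not the paper's code; the input is assumed to be already filtered, and position is encoded as +1 long, -1 short, 0 no position yet:

```python
def two_day_run_positions(returns):
    """2-day run mean reversion: go long after 2 consecutive down days,
    go short after 2 consecutive up days, else keep the current position."""
    position = 0          # +1 long, -1 short, 0 no position yet
    positions = []
    for prev, cur in zip(returns, returns[1:]):
        if prev < 0 and cur < 0:
            position = 1      # two down days in a row -> switch long
        elif prev > 0 and cur > 0:
            position = -1     # two up days in a row -> switch short
        positions.append(position)
    return positions

# Hypothetical retained (non-flat) daily returns.
retained = [0.012, -0.018, -0.0041, 0.0097, 0.011, -0.006]
signals = two_day_run_positions(retained)
```

Note how the position flips long only after two consecutive down days of the retained series, so flat days never break a run.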

I am not sure filtering schemes 1 & 3 would be robust. I generally prefer to stay away from absolute thresholds. Also, these two filtering schemes have a look-ahead bias.

**Intermediate Term System: Dual Moving Average Cross (DMAC)**

*Rules:*

* Go long when the short term moving average crosses above the long term moving average.
* Go short when the short term moving average crosses below the long term moving average.

*Filtering scheme:*

*Filter out all days whose absolute daily SPX return is less than 0.25% when computing the short and long moving averages.*

The concept applied indirectly here in the intermediate term system (i.e., the MA system) is to increase the MA length when there are many flat days. In other words, the simple MA becomes an adaptive MA.
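A minimal sketch of this adaptive effect, under my own assumptions (hypothetical prices, the 0.25% cut-off from the scheme above; the function name and signature are mine): the average is taken over the last n *retained* days, so the calendar look-back stretches when the market is quiet.

```python
def filtered_sma(prices, returns, n, min_abs_ret=0.0025):
    """Simple moving average over the last n retained days only.
    Days with |return| below min_abs_ret are skipped, so the
    effective calendar look-back grows when there are many flat days."""
    kept = [p for p, r in zip(prices, returns) if abs(r) >= min_abs_ret]
    window = kept[-n:]
    return sum(window) / len(window)

# Hypothetical closes and their daily returns.
prices = [100.0, 101.0, 101.1, 99.0, 99.05, 102.0]
rets = [0.010, 0.010, 0.001, -0.021, 0.0005, 0.030]
sma = filtered_sma(prices, rets, n=3)
```

With the two flat days dropped, the 3-day filtered SMA actually spans 5 calendar days of data.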

I am not fully convinced yet that filtering out nearly flat days is the way to apply this concept. Part of the reason is that the effect of most bars (unless they occur at key locations) will fizzle out within a few days, whereas the system we are talking about here is an intermediate term one. Another reason is that the equity curve looks bad over the last 3-4 years. I don't know if that is due to a change of market character since the financial crisis and the popularity of risk aversion.

**Long Term System: Price Channel Trading**

*Rules:*

* Switch to long when the close is greater than the m-day price channel high.
* Switch to short when the close is below the m-day price channel low.

*Filtering scheme:*

*Filter out all days whose absolute daily SPX return is less than 0.25% when computing the channel.*

Here I am not sure why the filtering scheme improves performance. Basically what we are saying is: when there are too many flat days, increase the channel look-back period. I would think the other way around (i.e., decreasing the channel look-back period when there are too many flat days) would be more profitable. The rationale: volatility contraction.
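For concreteness, a sketch of a channel computed over retained days only, under my own assumptions (hypothetical prices, the 0.25% cut-off from the scheme above; function and parameter names are mine):

```python
def channel_signal(closes, returns, m, min_abs_ret=0.0025):
    """Price channel over the last m retained (non-flat) days:
    +1 = breakout above the channel high, -1 = breakdown below
    the channel low, 0 = stay with the current position."""
    # Channel history excludes today's close and skips flat days.
    history = [c for c, r in zip(closes[:-1], returns[:-1])
               if abs(r) >= min_abs_ret]
    channel = history[-m:]
    if closes[-1] > max(channel):
        return 1
    if closes[-1] < min(channel):
        return -1
    return 0

# Hypothetical closes and returns; today's close is the last entry.
closes = [100.0, 103.0, 103.1, 98.0, 105.0]
rets = [0.010, 0.030, 0.001, -0.050, 0.071]
sig = channel_signal(closes, rets, m=3)
```

Because flat days are skipped, the m-day channel reaches further back in calendar time when the market is quiet, which is exactly the look-back stretching questioned above.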

**Concluding thoughts:**
I think on the whole the core concepts in this paper are good. But on the other hand, I don't feel comfortable with absolute thresholds, especially if they were calculated by looking ahead.

My main takeaway from the paper is to utilize this filtering concept, but probably in a different way, for a short term system. For intermediate and long term systems, I will probably skip this concept for now.

For anyone interested in reading the full paper, here are the details -

*Source - "Filtered Market Statistics and Technical Trading Rules", George Yang, May 2013.*

*Wish you all good health and good trading!!!*