Friday, August 24, 2012

Two Schools of Quants

I recently read this question on the Quantitative Finance site of Stack Exchange, a network of Q&A websites: "Which approach dominates? Mathematical modeling or data mining?"

Basically, it seems that there are two schools of quants. This is my possibly over-simplified and generalized interpretation of the distinction.

1. Background: mathematicians / theoretical physicists / computer scientists / academic economists
Inference: deductive
Epistemology: rationalist
Type of knowledge: a priori
Beliefs: "There exist absolute immutable truths of the markets."
"I can discover these truths by superior thinking."
"With the right theories, I can make money."

2. Background: scientists / statisticians / engineers / programmers / business economists
Inference: inductive
Epistemology: empiricist
Type of knowledge: a posteriori
Beliefs: "I cannot know if there exist any immutable truths of the markets."
"However, I can asymptotically approximate these truths through superior observation."
"With the right models, I can make money."

I think that's the essence of it, although the commenters in that post use many more words than I do. To boil it down to even simpler terms (at the risk of evaporating some meaning): theory dominates for the former, whereas data dominates for the latter.

How would you begin to identify your "school" of quant strategy? I guess the first thing to do is to ask yourself: why did I make that trade? If your answer is, "in historically similar situations, the price of this security responded this way, and I'm going to assume that this will continue to happen in the future," then you're probably in the latter camp. On the other hand, if you respond with, "there's a theory which proves that security prices move this way, given certain assumptions and axioms," then you're probably in the former. A toy sketch of the empiricist's reasoning is below.
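To make the empiricist's answer concrete, here's a minimal sketch with made-up data and a hypothetical "similar situation" signal (a drop of more than 2% the previous day); the inductive bet is just the average historical response, assumed to persist:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up daily returns for one security.
returns = rng.normal(0, 0.01, 1000)

# A hypothetical "historically similar situation": the security
# dropped more than 2% the day before. np.roll shifts yesterday's
# return into today's row (it wraps at the start; fine for a toy).
signal = np.roll(returns, 1) < -0.02

# The inductive bet: the average historical response to the signal,
# assumed to keep happening in the future.
print(f"after a >2% drop: {returns[signal].mean():+.4%}"
      f" vs baseline {returns.mean():+.4%}")
```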

However, there are problems with this answer. On the Stack Exchange post, "Quant Guy" makes a good observation about the Fama-French model:

As more complex/realistic theories are devised, there is also the concern whether the theory itself was formed after peeking at the data - i.e. devising theories to explain persistent patterns or anomalies which an earlier theory could not 'explain' away. In this context, Fama-French's model is not a theory - it spotted an empirical regularity which was not explained by CAPM, but it is not a theory in the deductive sense.

Some background: CAPM (the Capital Asset Pricing Model) explains the return of an asset as a function of a single factor: "beta", or more specifically, "market beta". As time went by, however, market participants noticed that this single factor couldn't explain all asset returns, as some low-beta stocks outperformed and some high-beta stocks underperformed. In stepped Eugene Fama and Kenneth French of the University of Chicago, who noticed that even after correcting for market beta, small-cap stocks and cheap stocks tend to outperform large-cap stocks and expensive stocks: hence the Fama–French three-factor model, which adds a small-cap factor and a value factor.
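Concretely, the three-factor model regresses a stock's excess return on the market excess return, a size factor (SMB, small minus big) and a value factor (HML, high minus low book-to-market). Here's a minimal sketch of estimating the loadings with ordinary least squares; the series are synthetic stand-ins, since the real factors come from Ken French's data library:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # days

# Synthetic stand-ins for the three factors: market excess return,
# SMB (small minus big) and HML (high minus low book-to-market).
mkt = rng.normal(0.0003, 0.01, n)
smb = rng.normal(0.0001, 0.005, n)
hml = rng.normal(0.0001, 0.005, n)

# A fake stock whose excess returns load on all three factors.
true_betas = np.array([1.1, 0.6, 0.3])
excess_ret = (np.column_stack([mkt, smb, hml]) @ true_betas
              + rng.normal(0, 0.01, n))

# OLS: excess_ret ~ alpha + b_mkt*mkt + b_smb*smb + b_hml*hml
X = np.column_stack([np.ones(n), mkt, smb, hml])
alpha, b_mkt, b_smb, b_hml = np.linalg.lstsq(X, excess_ret, rcond=None)[0]
print(f"alpha={alpha:+.5f}  betas: mkt={b_mkt:.2f} "
      f"smb={b_smb:.2f} hml={b_hml:.2f}")
```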

There is even a four-factor extension called the "Carhart four-factor model", which adds a momentum factor. We can quickly see the problem with the progression of such theories: they are not theories at all! The reason is simple: they were not conceived independently in the mind of the theoretician from logical principles and axioms, but rather were born of the data. With each new market anomaly that cannot be explained by existing models, we can explain it away with a new factor, creating an n+1-factor model.

I often give the hypothetical example of a Keynesian who believes in the Phillips curve - the inverse relationship between unemployment and inflation - being confronted by a disbeliever. The disbeliever shows the Keynesian a counterexample: the US in the 1970s, when stagflation - the co-occurrence of high unemployment and high inflation - was rampant, and exclaims triumphantly, "HA! There is no way that the Phillips curve can explain this! Now you must throw away your theories!" To which the Keynesian calmly replies, "On the contrary, this gives me a new theory, the Phillips₂ curve. It's exactly the same as the old Phillips₁ curve, and indeed the inverse relationship between unemployment and inflation still holds everywhere and always. Except with one important modification: when Nixon is president."

Of course, this is a ridiculous example, but it shows exactly how Fama-French is not a theory. In statistics, we call this overfitting. One might ask which is the better approach, theory or data, but I'm not sure there is an answer. I'm not even sure you can be strictly in one camp and not the other, as there seems to be more of a continuum than a strict dichotomy. In the real world, it's hard to really draw the distinction between theories and models, as theories are often suggested by the data. Indeed, it's impossible NOT to be influenced by the real world, unless you live in a cave with no Bloomberg terminals or something.
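To make the overfitting point concrete, here's a toy demonstration with made-up data: regress a security's returns (which are pure noise, so there is nothing to explain) on a growing set of "factors" that are also pure noise, and watch the in-sample R² creep upward with every factor added - the n+1-factor model at work:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 250  # roughly one year of daily returns

# Returns that are pure noise: there is nothing real to explain.
returns = rng.normal(0, 0.01, n)

def in_sample_r2(y, X):
    """R^2 of an OLS fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

# Add noise "factors" one at a time and refit.
factors = rng.normal(0, 1, (n, 20))
for k in [1, 3, 5, 10, 20]:
    r2 = in_sample_r2(returns, factors[:, :k])
    print(f"{k:2d} factors: in-sample R^2 = {r2:.3f}")
```

In-sample fit can only improve as factors are piled on, even though every loading here is spurious; out of sample, of course, those loadings are worthless.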

I guess ideally, you should be a quant who finds the middle way and melds the two approaches, but I'm not sure what that would look like... pragmatism?
