Friday, August 31, 2012

What do Programmers do?



As a software engineer, I'm often asked to explain what I do at work.  I myself wouldn't have been able to explain it a few years ago, back when I thought (correctly) that programmers are mega nerds and considered them glorified typists.  That being said, let me try to explain what things are really like, from my own perspective.  While reading this, keep in mind that it's all from my personal experience, though I have had the opportunity to work at both massive and tiny companies.

First: some context.  I'm working at OkCupid Labs as a software engineer, and my position is heavily focused on backend work.  We work on small(er) software projects with the goal of bringing people together.  We're pretty much a startup.

Ok then!  Let's start at a high level and work our way down.  Software teams have programs or program features they want to add to their applications, and as a software engineer I need to figure out how to turn an idea into a functioning (piece of a) program and then write it up.  I'll first do some really high-level design: is this feature technically feasible, and if so, what is a good way to structure the program and its data structures?  I'll run through designs in my head, reaching back to things I learned in college and things I've picked up since.  I might doodle on paper or a whiteboard to sketch out designs if they're more complex, or if I need to discuss them with someone else.  I may even talk out loud to myself, or try to explain my ideas to an inanimate (1) object, because having to explain things often brings out the obvious flaws and solutions.  No matter how I do it, I'm aiming for an efficient and reliable design (2).

If all went well and I've filled my notebook/whiteboard/coworker's head with my brilliant, working designs, then it's on to coding this beast up!  I grab my favorite text editor and start writing code in whatever language my group has chosen.  It's a bit like writing an essay with proper structure and syntax, in which you describe in very precise terms what the computer should do.  Inevitably I'll make some mistakes in writing this essay - maybe a construct didn't mean exactly what I thought it meant, or maybe I left a word (or a whole sentence, or an idea) out.  These will come up later as bugs, which will hopefully be caught by me when I test the feature, but may end up harassing some poor user in the distant future if not.  Bugs are inevitable - they will always be in your code; all you can do is minimize your chances of introducing them into the system, then find and fix as many as you can.

Something that is really important to mention about coding is that it's a very creative exercise.  It's not so much like writing a recipe as like writing an essay or a poem.  Just because the instructions to the computer must be precise and accurate doesn't mean there's only one way to write them.  Code has a style to it, which makes everyone's look a little different.  Spacing, naming conventions, and commenting may make one person's code look very different from another's, and yet it may do the same exact thing.  You may find one person's style much more readable than another's.  Even beyond style, there are nuances in a program that may make it slightly more or less efficient in terms of CPU cycles or memory.  It's often easy to tell how good a coder is by looking through their code: sloppy, complicated designs are the mark of a novice, while experts write code that most programmers would describe as elegant and simple.  Less code is often better code.
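As a toy illustration of the point about style (my own example, not anyone's real code), here are two functions that do exactly the same thing but read very differently:

```python
def mean(numbers):
    """Return the arithmetic mean of a list of numbers."""
    return sum(numbers) / len(numbers)

def CalcAvg(l):
    # Same behavior, very different style: terse names, manual loop, no docs.
    t=0
    for i in range(0,len(l)):
        t=t+l[i]
    return t/len(l)

print(mean([1, 2, 3]), CalcAvg([1, 2, 3]))
```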

While running through this design and implementation of a feature, I won't necessarily be working alone.  I'll be talking to other engineers about how to hook my feature up with the rest of the code, asking their opinions and perhaps working with a partner on the design and coding of the feature.  There may be meetings in which various phases of this process are discussed, and there will always be requirements (things that this program must do).

At the end of all this, I'll get to 'ship' my code, meaning that it will either be put onto a website for people to use, put up for download, or physically shipped on a DVD or other physical media.  Still, it doesn't stop there.  Unlike school projects, no software project is ever truly finished - there's maintenance and improvement to be done for as long as the software is in use.  Maintenance means fixing bugs and making sure the program still works with new technologies, while improvement might mean adding functionality, or going back to fix design mistakes made previously.

Though I haven't spoken about this very much, software development is a huge team effort.  There are product people, designers, frontend engineers, testers, technical writers, managers, etc.  Everyone contributes to a system that won't function without all of its components working together.  Each day we come in, work on a feature, fix some bugs, and discuss the future of the project.  It's an incredibly stimulating job where you're always trying to solve some new problem - and, to me, it's incredibly rewarding.

Footnotes:
1) I hear rubber ducks work best.
2)  Efficiency and reliability - what do these mean?  Reliability is the easier one to explain: the software you build should work every time, no matter what buttons you mash.  If it doesn't work every time, it should at least fail gracefully - no blue screens of death, please.  Efficiency is a bit more of a challenge.  Like most things, a computer has limited resources, including memory (RAM), disk space, CPU cycles, and network bandwidth.  An application that uses any of these poorly ends up not operating at all, operating poorly on low-end systems, or operating but taking so long that it would need 80 years to compute a solution (check out generating the Fibonacci numbers recursively - that algorithm grows at roughly O(2^n), meaning each extra number you want to generate increases the amount of work exponentially).
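To make that footnote concrete, here is a minimal Python sketch (my own illustration, not from any particular codebase) contrasting the naive recursive Fibonacci with a memoized version:

```python
from functools import lru_cache

def fib_naive(n):
    # Work roughly doubles with each extra term: exponentially many calls.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each value is computed once and cached, so this is O(n).
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# fib_naive(35) already takes noticeable time; fib_memo(300) is instant.
print(fib_naive(30), fib_memo(300))
```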

Saturday, August 25, 2012

Intro to Poker

Yayy, my first post!

I figure I'll talk a lot about poker on this blog since it relates to a lot of interesting concepts: probability/risk, expectation, and psychology.  I'll start with some probability today.

The Theory of Poker: 

If you played a million hands with someone heads up, or a million hands at a 9-person table, the only way you make money in the long run is if other people make mistakes - or rather, make more mistakes than you do.  Mistakes include folding the best hand, calling with a worse hand (without the correct odds), or even making a less positive EV bet (like calling when raising is optimal).  In the end, if you make fewer mistakes than the other person, you win.

Intro Poker Probability:

To talk about poker, we need to talk about probability first.  Say there is 12 dollars in the pot (including your opponent's bet), you face a bet of 2 on the turn (with one card left to come), you have the nut (best) flush draw, and you know your opponent has Aces (a pair of Aces): you should actually call.  There are 9 outs (cards that will win you the hand) among the unseen cards, so the odds against hitting are 37:9*, a little over 4:1, and you are getting 6:1, so your call has positive EV.  But poker is never as simple as this, since 95+% of your decisions are not on the river.
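To spell out the arithmetic, here's a quick Python sketch of that turn spot (my own illustration; it treats the $12 as the amount you stand to win by calling):

```python
# 9 flush outs among 46 unseen cards, $12 to be won, $2 to call.
outs, unseen = 9, 46
p_win = outs / unseen                      # ~0.196, odds against of 37:9
pot, to_call = 12, 2                       # treating the $12 as what you win

pot_odds = pot / to_call                   # 6:1
ev_call = p_win * pot - (1 - p_win) * to_call
print(f"P(win) = {p_win:.3f}, odds against = {unseen - outs}:{outs}")
print(f"Pot odds = {pot_odds:.0f}:1, EV of calling = ${ev_call:.2f}")  # positive
```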

Conditional Probability and Implied Odds/ Reverse Implied Odds: 

Conditional probability is hugely important in poker.  Ask yourself how likely you are to have the best hand, but also ask what can happen on future streets if you are in fact best, and what can happen if you are behind.  So here's an example.  You raise one off the button (the button is the dealer; "one off" means you're to the right of the dealer with 3 people left to act, while "three off" would mean three seats to the right of the dealer with 5 people left to act) with A3s (Ace-3 suited), and the dealer and the BB (big blind) call.  The flop is A 7 4 rainbow (no suit matches), so it's reasonable to think your pair of Aces is best - so you want to bet, right?  Well, you can bet and take down the pot most of the time, but you're only getting called by a better Ace (an Ace with a higher kicker/other card) and only a few worse hands.  Now let's think about checking (not betting).  You're likely to win the hand at showdown (after all cards are revealed) unless someone hits two pair or runner-runner something.  And if the guy behind you has a better Ace and bets, at least you have more information to make a better decision.  So in fact, checking is best here, because of conditional probability.

You have pocket fives and there's an open raise (the first raise, and the first action preflop); you call on the button (you're the dealer).  The pot is laying you just above 2:1, and the chance you flop a set (another 5 comes out) is roughly 8:1 against.  This sounds terrible, but when you do hit your set, you may win much more than what's in the pot on later streets.  You have a hidden hand, and top pair is likely to give up a lot of money in that scenario, so you have implied odds.  As a general rule, the other players need about 20x the bet you're calling pre-flop behind (meaning if the raise is to $7, they need about $140 behind) to make your call worth it and to make up for the times they fold to you and you don't get paid.
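Here's a rough back-of-the-envelope sketch of the set-mining math (my own numbers-check, not a formula from the books); the "about 20x" rule builds in the times you hit and still don't get paid:

```python
# Chance of flopping a set holding a pocket pair (two 5s left in 50 unseen cards).
p_set = 1 - (48/50) * (47/49) * (46/48)    # ~0.118, about 7.5:1 against
raise_size = 7

# Ignoring the money already in the pot, you break even when the average
# amount you win after hitting covers the raises you lose when you miss:
breakeven_winnings = raise_size * (1 - p_set) / p_set
print(f"P(flop a set) = {p_set:.3f} (~{(1 - p_set)/p_set:.1f}:1 against)")
print(f"Need to win about ${breakeven_winnings:.0f} when you hit")
# That's ~7.5x the raise just to break even; asking for ~20x behind pads it
# for the times your opponent folds and you never get paid off.
```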

Reverse implied odds are the opposite.  You open raise with AJ and a guy re-raises you preflop.  Let's say the pot odds are 2:1, and let's say he sometimes re-raises with worse hands like KQ and AT (Ace-Ten), and if he's got a pocket pair like Tens or 9s it's a coin flip - so you should call, right?  Well, no.  If the guy has AK or AQ, you're absolutely crushed and you don't know it: if an Ace comes out, you're likely to lose a lot of money.  And if the guy has Aces, Kings, or Queens and the flop comes out Jack-high, you're losing even more money.

I'll end this post here, because it's getting long.  But I promise I'll get into more interesting stuff down the road, including how poker is a good simulation for other things.  Most importantly, I can teach you how to be positive EV at the casino.

*With a 1/3 probability of winning, you need at least 2:1 odds.  Odds are listed as failures:successes, unless you say something like a 3:2 "favorite", in which case it's successes:failures.  It's not very intuitive for us math geeks, but gamblers use it because it frames the decision correctly: if there is 2 in the pot and you face a bet of 1, the pot is laying you 2:1, which lines up directly against the 2:1 odds you need.
  

Friday, August 24, 2012

Two Schools of Quants

I recently read this question on the Quantitative Finance Stack Exchange, a Q&A website: "Which approach dominates? Mathematical modeling or data mining?"

Basically, it seems that there are two schools of quants. This is my possibly over-simplified and generalized interpretation of the distinction.

1. Background: mathematicians / theoretical physicists / computer scientists / academic economists
Inference: deductive
Epistemology: rationalist
Type of knowledge: a priori
Beliefs: "There exist absolute immutable truths of the markets."
"I can discover these truths by superior thinking."
"With the right theories, I can make money."

2.  Background: scientists / statisticians / engineers / programmers / business economists
Inference: inductive
Epistemology: empiricist
Type of knowledge: a posteriori
Beliefs: "I cannot know if there exist any immutable truths of the markets."
"However, I can asymptotically approximate these truths through superior observation."
"With the right models, I can make money."

I think that's the essence of it, although the commenters in that post seem to be using many more words than I am. To boil it down to even simpler terms (at the risk of evaporating some meaning), the theoretical dominates for the former whereas the data dominates for the latter.

How would you begin to identify your "school" of quant strategy? I guess the first thing to do is to ask yourself: why did I make that trade? If you say, "in historically similar situations, the price of this security responded this way, and I'm going to assume that this will continue to happen in the future," then you're probably in the latter camp. On the other hand, if you respond with, "there's this theory which proves that security prices move this way, given certain assumptions and axioms," then you're probably in the former.

However, there are problems with this answer. On the Stack Exchange post, "Quant Guy" makes a good observation about the Fama-French model: 

As more complex/realistic theories are devised, there is also the concern whether the theory itself was formed after peeking at the data - i.e. devising theories to explain persistent patterns or anomalies which an earlier theory could not 'explain' away. In this context, Fama-French's model is not a theory - it spotted an empirical regularity which was not explained by CAPM, but it is not a theory in the deductive sense.

Some background: CAPM (Capital Asset Pricing Model) explains the return of an asset as a function of a single factor: "beta", or more specifically, "market beta". As time went by, however, market participants noticed that this single factor couldn't explain all asset returns, as some low beta stocks outperformed and high beta stocks underperformed. In stepped Eugene Fama and Kenneth French of the University of Chicago, who noticed that even after correcting for market beta, small cap stocks and cheap stocks tend to outperform large cap stocks and expensive stocks: hence the Fama–French three-factor model, which adds a small cap factor and a value factor.

There is even a four-factor extension, the Carhart four-factor model, which adds a momentum factor. We can quickly see the problem with the progression of such theories: they are not theories at all! The reason is simple: they were not conceived independently in the mind of the theoretician from logical principles and axioms, but rather were born of the data. With each new market anomaly that cannot be explained by existing models, we can explain it away with a new factor, creating an n+1 factor model.
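To make the "n+1 factor" point concrete, here is a small Python sketch of what adding factors looks like as a regression. The data is randomly generated purely for illustration - it is not real market data, and this is only a caricature of how factor loadings are estimated:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 250  # hypothetical daily observations

# Hypothetical factor returns: market, size (SMB), value (HML), momentum (MOM).
mkt, smb, hml, mom = rng.normal(0, 0.01, size=(4, T))
asset = 0.9 * mkt + 0.3 * smb + 0.2 * hml + rng.normal(0, 0.005, T)

def fit(factors):
    # Ordinary least squares of the asset's return on the chosen factors.
    X = np.column_stack([np.ones(T)] + factors)
    betas, res, *_ = np.linalg.lstsq(X, asset, rcond=None)
    return betas, 1 - res[0] / np.sum((asset - asset.mean()) ** 2)

for name, factors in [("CAPM", [mkt]),
                      ("Fama-French 3-factor", [mkt, smb, hml]),
                      ("Carhart 4-factor", [mkt, smb, hml, mom])]:
    betas, r2 = fit(factors)
    print(f"{name}: R^2 = {r2:.3f}")
# Each extra factor can only raise R^2 in-sample, which is exactly the
# overfitting worry discussed above.
```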

I often give the hypothetical example of a Keynesian who believes in the Phillips curve - the inverse relationship between unemployment and inflation - being confronted by a disbeliever. The disbeliever shows the Keynesian a counterexample, the US in the 1970s, when stagflation (the co-occurrence of high unemployment and inflation) was rampant, and exclaims triumphantly, "HA! There is no way that the Phillips curve can explain this! Now you must throw away your theories!" To which the Keynesian calmly replies, "On the contrary, this gives me a new theory, which is the Phillips-2 curve. It's exactly the same as the old Phillips-1 curve, and indeed the inverse relationship between unemployment and inflation still holds everywhere and always. Except with one important modification: when Nixon is president."

Of course, this is a ridiculous example, but it shows exactly how Fama-French is not a theory. In statistics, we call this overfitting. One might ask which is the better approach, theory or data, but I'm not sure there is an answer. I'm not even sure you can be strictly in one camp and not the other, as there seems to be more of a continuum than a strict dichotomy. In the real world, it is hard to really draw the line between theories and models, as theories are often suggested by the data. Indeed, it's impossible NOT to be influenced by the real world, unless you live in a cave with no Bloomberg terminals or something.

I guess ideally, you should be a quant who finds the middle way and melds the two approaches, but I'm not sure what that would look like... pragmatism?

Tuesday, August 21, 2012

Word of the Day

I do a word of the day with my friend Christina. No real criteria except that it has to amuse me. Anyway, here's what we've done so far:

Apocryphal [uh-pok-ruh-fuhl]
1) of doubtful authorship or authenticity
2) false, spurious 
3) in a biblical context, it means non-canonical.


Milquetoast [milk-tohst]
1) weak, ineffectual, timid, unassertive, bland person, place or thing.


If it sounds like "milk toast" that's no coincidence.

Shibboleth [shib-uh-lith, ‐leth]
1) pronunciation, custom, principle, belief, behavior, mode of dress, etc. distinguishing a particular class or group of people. 
2) common saying with little current meaning or truth. 
3) slogan, catchword.

Some examples: the pronunciation of Nevada ("a" as in "dad", rather than as in "father") and Boise ("boy-see" instead of "boy-zee") is used as a shibboleth to identify locals. In computer security, shibboleth refers to digital credentials (such as passwords) or physical credentials (such as RFID cards or fingerprint scans) used to protect computer access. Interestingly, the word "shibboleth" itself was once used as a shibboleth: in the Old Testament, Gileadites pronounced it as shibboleth, which distinguished them from Ephraimites, who pronounced it as sibboleth.

Sangfroid [sahn-frwa]
1) composure or coolness as shown in danger or under trying circumstances.

If you know French, then you know that sang is blood, froid is cold, so sangfroid literally means cold-blooded. Sometimes spelled with a hyphen, as in sang-froid.

New words for this week:

Palaver [puh-lav-er, -lah-ver]
1) noun: prolonged and idle discussion 
2) verb: to talk unnecessarily at length.

Protean [proh-tee-uhn, proh-tee-]
1) variable, readily assuming different shapes or forms.

Protean comes from Proteus, the Greek shape-shifting sea-god.

edit: added pronunciations, another definition to shibboleth, and a new word, milquetoast.

Wednesday, August 15, 2012

Automation and Labor

FT Alphaville recently had a thought experiment on how things would have gone if "the US, rather than taking advantage of cheap labour in China, had kept things at home and heavily invested in automation".

This is one of my favorite themes: the rise of automation in manufacturing and its effects on global economic balances. Over the past year or so, multiple research publications and journals (the FT, The Economist, GS, BofA, BCG, Gavekal, ISI) have featured this supposedly inevitable renaissance as robotics and 3D printing revitalize manufacturing in the US.

It's an exciting and sexy theme that I will probably cover in greater detail in the future. Today, however, I am interested in one very specific component: labor.

Luddites

Low-cost production techniques could soon become so advanced and so low cost — thanks to developments like 3D printing — that even the tiniest salaries in Africa will not make it worthwhile to employ human beings at all.

In the 1800s, the Luddites were a group of disgruntled skilled weavers who displayed their discontent by destroying the automated looms that made it possible to hire unskilled (and cheaper) labor in their place. Of course, in hindsight, these productivity-enhancing machines were a good thing: they freed up future generations of educated men and women for more interesting work, e.g. as innovators of new industries. In other words, would-be weavers became engineers, inventors, thinkers, etc. instead.

Are we falling prey to this Luddite fallacy today? It's clear that on its own, automation cannot be said to be a job-killer. Generally, as the prices of goods fall with the productivity gain made possible by automation, demand for goods increases, which results in increasing demand for labor. This gives us more employment, rather than less.

However, the current round of automation (the third industrial revolution, as The Economist likes to put it) is potentially different.


This is unlike the job destruction and creation that has taken place continuously since the beginning of the Industrial Revolution, as machines gradually replaced the muscle-power of human labourers and horses. Today, automation is having an impact not just on routine work, but on cognitive and even creative tasks as well. A tipping point seems to have been reached, at which AI-based automation threatens to supplant the brain-power of large swathes of middle-income employees.

In previous industrial revolutions, machines replaced our "hardware" - that is, our bodies and our physical labor (a bad analogy, but let's roll with it). This was fine because, as long as our ideas added value and machines couldn't automate our thought processes, we could remain employed in an intellectual capacity. However, with the industrial implementation of artificial intelligence and machine learning techniques, machines are now able to simulate many of our thought processes - and not just the purely computational ones. Machines are now replacing our "software" - that is, our minds and our intellectual labor.

Modern Luddites like to point at today's unprecedented levels of long term unemployment as evidence for this. However, if this were true, why is there continuing high demand for skilled workers?


"Nearly 60 percent of survey participants expect to increase their workforce (compared to 50 percent in the fall), but finding qualified workers to fill open positions continues to be a concern for the industry – an unusual dichotomy, given that national unemployment rate remains high," said Kurek. "The need for a skilled workforce could be one of the greatest impediments to growth for U.S. manufacturers and distributors, and makes it difficult to compete in the global market."

source

For many manufacturers, finding the best people to fill these new positions is far from assured. Respondents to ThomasNet.com's IMB lament the skilled labor shortage, and are vocal about what needs to be done to fill the gap. One noted that we need to "improve the attitude within the U.S. regarding the desirability of manufacturing for the next generation." And many respondents want to see education reform with the aim of giving America's youth the skills needed to join the manufacturing workforce.

Usually, automation draws labor demand away from the high-skilled towards the low-skilled. So what is going on here?

Perhaps it's simply a matter of perspective.

Skills and Training

In the long term, it may be true that we are on the cusp of a post-scarcity world of maximal leisure and obsolete labor thanks to the ever increasing intelligence of machines. In the short term, however, human intellectual capital is very much in demand.

[In the 1990s] the US went for massive outsourcing. However, Germany and Northern Europe in general went for automation. The reason why the latter region went for automation is that they were already fighting high labour costs as early as 1980, so they already were well on the path to automation.
At least in Northern Europe, the big automation revolution that we saw starting around 1980 seems to be coming to an end. The reason is the full exhaustion of engineers/skilled technical workers.

This is why some have proposed working-hour limits: e.g. instead of a few investment bankers working 100-hour weeks, twice as many bankers are hired to work 50-hour weeks. Of course, the policy's track record isn't that great (consider France's 35-hour week and look at their economy), and furthermore it falls victim to the lump of labor fallacy. However, it perhaps helps us understand the liberal's curious characterization of an ambitious overachiever who takes excessive overtime for himself as somehow "greedy", for his theft of his fellow man's labor hours. Compare with the conservative's view that all hard work is deserved and efficiently allocated to those who are most deserving.

However, considering the theory that we are on the cusp of a post-scarcity world, perhaps the liberal is right.

We live in a world where there is too much work for highly educated workers and not enough work for less educated ones. This at a time of general decline in education standards, as demonstrated by many PISA studies. The forces of the market would have it that wages of the educated will rise and wages of the non-educated will fall.

This is why we need cheap, effective, and widely available STEM (science, tech, engineering, mathematics) training (and retraining) opportunities in the US - both for new entrants to the labor force and for existing workers who have been displaced. A "liberal arts" education may be more intellectually pure, but for the majority of the population, it's simply irresponsible. The German education system, which emphasizes vocational schools alongside industry apprenticeships, is a good model (these are already starting to pop up in the US). This would also help students avoid taking on excessive student debt, as costs are lower than at traditional research universities, and full-time offers are often made by employers to their apprentices upon graduation. Businesses would also be more willing to hire new graduates (as opposed to experienced hires), as job-specific training has already been completed (avoiding expensive on-the-job training programs as in the US).

However, the fundamental problem remains that the exponential pace of technological progress may simply be too fast for the skills mismatch gap to close. This is why I am optimistic about online education opportunities such as Udacity, Coursera, Codecademy, and iTunes U, which meet many of the above requirements for good STEM training: cheap (it's free), effective (taught by professors, some of whom are at Ivy League universities), and widely available (anyone with internet). Recently, Udacity announced that they will be offering certification exams in Pearson testing centers in conjunction with their job placement program. Unlike inflexible bureaucratic universities, online education is subject to the free markets: courses are offered in the subjects where the demand is, and course demand goes where the opportunities are. This should go a long way towards minimizing the skills mismatch gap.

Future

Demand for unskilled labor will disappear or at least be significantly diminished. Although AI, robotics and 3D printing may have the potential to completely eliminate labor as a factor of production, as some modern Luddites claim, in the short term, the modern day equivalents of machine-operators (programmers, engineers, technicians) will be in high (and probably increasing) demand.

As the ultra-low labor costs in emerging markets no longer matter, jobs may initially come back to the developed world. However, this job reshoring may be limited in impact, as emerging markets will seek to rapidly catch up (look at the growth of India's IT talent), especially as knowledge becomes freer and more easily available (such as through online courses). Thus, the most sustainable way for developed markets to compete will probably be through superior education and training.

Monday, August 13, 2012

Betting/trading strategies- Martingale

Introduction to betting strategies
Something I have been thinking about recently is different trading/betting systems. I'll probably be doing a series of blog posts on this. Let me first define what I mean by a trading/betting strategy: a set of rules that dictate what you should do next based on your prior streak of wins/losses and your present P&L in the game (eg: a gambling game, the stock market...). The goal of this strategy may be to make money, but may also be to minimize variance (risk), to avoid losing too much on money that has already been laid on the table (ie. successfully exiting losing positions), or to exploit a certain effect (eg: mean reversion, autocorrelation).

For background, look at these 3 strategies:
Martingale is generally considered fatally flawed; daily re-balancing is a hot topic right now in finance (I'll explain more later); and the Kelly criterion is actually a mathematically proven betting strategy.

In this post I will examine Martingale and a close cousin of it (trading version) and return to other strategies in later posts.

Mathematics behind the Martingale strategy
The Martingale basically says this: bet $1. If you lose, double down. If you lose again, double down again. You are thus "guaranteed" to win $1 at the end (unless you just keep losing ad infinitum). The major problem with this is that you only have a limited bankroll. Say after you've lost 10 times in a row, you may be out of money, and the casino/brokerage firm won't let you double down anymore (because you won't be able to pay if you lose again). At this point, you are close to bankrupt and can't double down to recoup your losses anymore.

So the hidden downside is that you might go bankrupt before you win even once. The question, then, is: what is the probability that you go bankrupt? Let's say in a 50/50 game you budget 1023x the initial bet (ie. you can lose 10 times in a row and then you're bankrupt). Then the chance of going bankrupt per attempt to win $1 is around 0.1% (precisely 1/1024). Now let's look at the probability that you will go bankrupt as a function of how many times you attempt:



Note that while the curve (first graph) is bounded by 1, initially (say the first 100 attempts - second graph) your chance of bankruptcy is almost linear in your number of attempts n. This is because Pr(bankruptcy) = 1 - (1023/1024)^n = 1 - (1 - 1/1024)^n ≈ 1 - (1 - n/1024) = n/1024, using the first-order Taylor (binomial) approximation (1 - x)^n ≈ 1 - nx for small x = 1/1024.

So, for example, if you plan to attempt to win $500 (try 500 times), there is a 1 - (1023/1024)^500 = 38.6% chance that you will go bankrupt before you reach 500 (ie. lose $1023 - though you would still keep whatever you collected before going bankrupt). If you did go bankrupt before reaching 500, on average you would have gone bankrupt on attempt #230. You get this by computing sum[Pr(n) * n] / sum[Pr(n)], where Pr(n) = Pr(bankrupt on the n-th attempt) = (1023/1024)^(n-1) * (1/1024). (I just summed this over Excel - is there another way to do this?)
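One alternative to summing it in Excel: a few lines of Python reproduce both numbers. This is just a sketch of the same truncated-geometric sum, using the post's 1/1024 bust chance per attempt:

```python
p, q, N = 1 / 1024, 1023 / 1024, 500

pr_bust = 1 - q ** N                               # ~0.386
# Pr(bust on attempt n) = q^(n-1) * p, summed only over n = 1..N
pr_n = [q ** (n - 1) * p for n in range(1, N + 1)]
avg_bust_attempt = sum(n * pr for n, pr in zip(range(1, N + 1), pr_n)) / sum(pr_n)

print(f"P(bust before 500 wins) = {pr_bust:.1%}")        # ~38.6%
print(f"Average bust attempt    = {avg_bust_attempt:.0f}")  # ~230
```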

Soooo, let's see if this all makes sense.
EV = 38.65% * (-1023 + 230.2) + 61.35% * (+500) ≈ 0
(If you go bankrupt, you lose the $1023 budget but keep the roughly $230 you had already collected; otherwise you walk away with the full $500.)

Betting strategies = transform your return distribution
So, as you can see, by following the Martingale betting strategy and stopping at 500 wins, we can consider the whole strategy as one single bet of [win $500 with 61% probability or lose ~$793 with 39% probability], which is still 0 EV. In other words, we have not gone anywhere from the original single bet of [win $1 with 1023/1024 probability or lose $1023 with 1/1024 probability], which is also 0 EV. Essentially, the betting strategy has allowed us to change the distribution of returns while keeping the expected profit (EV) the same. This is an interesting topic that we will revisit later (probably when we talk about the Kelly criterion).
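A quick simulation makes the same point - the strategy reshapes the outcome distribution without moving the mean. A rough Monte Carlo sketch (my own illustration, using the same numbers as above):

```python
import random

def martingale_run(target=500):
    # Win $1 per successful attempt; each attempt independently busts the
    # $1,023 budget with probability 1/1024 (ten straight losses).
    winnings = 0
    for _ in range(target):
        if random.random() < 1 / 1024:
            return winnings - 1023   # bust: keep past wins, lose the budget
        winnings += 1
    return winnings                  # reached +$500

results = [martingale_run() for _ in range(20_000)]
p_bust = sum(r < 0 for r in results) / len(results)
print(f"P(bust) ~ {p_bust:.1%}, average result ~ ${sum(results) / len(results):.2f}")
# Roughly 39% of runs bust, ~61% finish at +$500, and the average stays near $0.
```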

The interesting thing about this is that you can "see" your P&L progress over time. Instead of one single bet where you have zero information about whether you are going to win, and which becomes 100% certain right after the dice roll, here, as you make successful attempts, you are slowly but surely moving towards the +$500 outcome (instead of the -$800 outcome); ie. the odds are slowly tilting in your favor. One significant difference between gambling and trading is that trading is more continuous: even though your trades are discrete, the market price changes rather continuously during trading hours. This is similar to what is happening here, where you can see your P&L slowly accumulate before you finally exit the trade.

In trading, I would look at this strategy as having positive theta (ie. as time passes, you start to make money). In this case, the reason positive theta exists is that you are taking extra risk on every iteration/bet/time period. Here, the risk you are taking is the 1/1024 chance that you lose $1023. When these risks are not realized, you make money. This is the case for, say, short gamma positions in trading. Short gamma positions basically mean that large price movements are bad for you (vs. long gamma positions, ie. the people betting against you are betting on large price movements). So every second/minute/hour/day (remember, trading is more continuous) in which no large price movement materializes, these mini bets on large price movements expire worthless for the long gamma guy, and you make money because you have lived to fight another day/minute/second.

Further topics (to be discussed next)
So strats like these are highly dependent on budgeting.  On a seemingly unrelated note, in poker there's a concept called chunking: say you expect 3 more rounds of betting until showdown. You size up your opponent's stack (assuming he has fewer chips than you) and figure out how to size the bets to get him all-in on the third bet. For example, with a $15 pot right now and your opponent having a $100 stack left, you would bet $15, $30, $60. The bets scale up every time because the pot has gotten larger. For a small pot, it might turn out that you need 4 bets to get all-in, in which case you need some action (ie. a raise) to be able to go all-in. The point of this discussion is that you need to do the same thing when you are budgeting for Martingale betting: if I am going to bet 1, 2, 4, 8, 16 and then give up, I need to budget $31. And that $31 hopefully lasts me through the craziest price swings ever. Let's say AAPL trades in a $10 range 60% of the time, a $30 range 90% of the time, and a $50 range 99% of the time. You should be betting, say, $1 for a $5 deviation from the median (whatever that means), $2 for a $10 deviation, $4 for a $15 deviation, $8 for a $25 deviation, and really, really hold tight before you make your $16 bet at the end. In other words, the point at which you double down is highly dependent on how rare an outcome that point is.
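Here's a tiny sketch of that chunking arithmetic (my own toy helper, using doubling bet sizes as in the $15/$30/$60 example - not a standard formula):

```python
# Split the opponent's remaining stack into bets that double each street.
def chunk_bets(stack, streets=3):
    unit = stack / (2 ** streets - 1)          # b + 2b + 4b = stack
    return [round(unit * 2 ** i) for i in range(streets)]

print(chunk_bets(100))   # [14, 29, 57] -- roughly the 15/30/60 in the text
# The same arithmetic gives the Martingale budget: 1 + 2 + 4 + 8 + 16 = 31.
```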

OK - back to Martingale and its cousin trading strategy. The cousin trading strategy I wanted to talk about is the following:

Buy the stock. Set a take-profit level at, say, +10%. If the stock trades down, sit and wait. If it keeps trading down, sit and wait more. Since variance increases with time and is not bounded, the idea is that eventually the stock will trade up to +10% and you can exit at a profit. So this is similar to the Martingale in some respects. What I mean is this:


If you look at the chart above, the range of estimates for the stock widens the further you go into the future. This is in part because the stock is expected to trade in a bigger and bigger range over longer and longer time periods. This means it becomes easier for you to reach any pre-set level as time goes on, potentially even if the drift (general trend) of the stock is downwards (ie. the opposite direction to your bet).

This post is getting pretty long, so I'm going to talk about this strat more in the next post before going on to other strats. In the meantime, some key thoughts about it:

It is also highly dependent on the return distribution. We need to compare how much the increase in variance over time contributes to the probability of hitting the target take-profit level (eg: a 50% chance of touching x in y days becomes a 50% chance of touching sqrt(2)*x in 2y days) against the "breakeven drag", ie. how much the stock would have to trend down for your chances of hitting the target level to deteriorate (for you to have negative theta).
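A small Monte Carlo sketch (my own illustration, with made-up volatility and drift numbers) shows both effects - the sqrt-time scaling of the touch probability, and how a downward drift eats into it:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_touch(target=0.10, days=60, daily_vol=0.02, daily_drift=0.0, paths=20_000):
    # Simulate daily returns and count how often the running price touches
    # the take-profit level at any point within the horizon.
    rets = rng.normal(daily_drift, daily_vol, size=(paths, days))
    prices = np.cumprod(1 + rets, axis=1)
    return np.mean(prices.max(axis=1) >= 1 + target)

print(p_touch(0.10, days=60))                      # baseline touch probability
print(p_touch(0.10 * 2 ** 0.5, days=120))          # roughly the same probability
print(p_touch(0.10, days=60, daily_drift=-0.002))  # downward drift (drag) hurts
```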



Sunday, August 12, 2012

Thoughts of the Week - Big Bank Inefficiencies

Here is a recent article by Zerohedge on big bank inefficiencies.

I usually try not to cite ZH, but this one has a lot of good links. It reminds me of an HBS article I read recently about bees and risk management. The bees' tendency towards decentralization is a risk management tool that prevents TBTF.

Take, for example, their approach toward the "too-big-to-fail" risk our financial sector famously took on. Honeybees have a failsafe preventive for that. It's: "Don't get too big." Hives grow through successive divestures or spin-offs: They swarm. When a colony gets too large, it becomes operationally unwieldy and grossly inefficient and the hive splits. Eventually, risk is spread across many hives and revenue sources in contrast to relying on one big, vulnerable "super-hive" for sustenance.

In fact, when hives divide, it is the old queen who departs, leaving the old hive to a young virgin queen. Contrast this with normal corporate M&A and spin-off behavior in which risk remains concentrated.

One classic argument in favor of TBTF is that big banks are better positioned to capitalize on economies of scale, and that breaking up large banks simply doesn't prevent bank runs. The ZH article attacks the first point through a discussion of diseconomies of scale (which I quote almost verbatim from here):
  1. atmospheric consequences due to specialization - as firms expand there will be increased specialisation, but also less commitment on the part of employees. In such firms, the employees often have a hard time understanding the purpose of corporate activities, as well as the small contribution each of them makes to the whole. Thus, alienation is more likely to occur in large firms.
  2. bureaucratic insularity - as firms increase in size, senior managers are less accountable to the lower ranks of the organisation and to shareholders. They thus become insulated from reality and will, given opportunism, strive to maximise their personal benefits rather than overall corporate performance. This problem is most acute in organisations with well-established procedures and rules and in which management is well-entrenched.
  3. incentive limits of the employment relation - the structure of incentives large firms offer employees is limited by a number of factors. First, large bonus payments may threaten senior managers. Second, performance-related bonuses may encourage less-than-optimal employee behavior in large firms. Therefore, large firms tend to base incentives on tenure and position rather than on merit. Such limitations may specially affect executive positions and product development functions, putting large firms at a disadvantage when compared with smaller enterprises in which employees are often given a direct stake in the success of the firm through bonuses, share participation, and stock options.
  4. communication distortion due to bounded rationality - Because a single manager has cognitive limits and cannot understand every aspect of a complex organisation, it is impossible to expand a firm without adding hierarchical layers. Information passed between layers inevitably becomes distorted. This reduces the ability of high-level executives to make decisions based on facts and negatively impacts their ability to strategize and respond directly to the market. Even under static conditions (no uncertainty) there is a loss of control.
The Alternative Banking working group of OWS (Occupy Wall Street) prefers TIBACO to TBTF. It stands for Too Interconnected, Big and Complex to Oversee, which I think is a more apt description of what exactly is wrong with these banks.

Things I am up to:
- I recently downloaded Anki, a flashcard program, to help me study for the CFA (Chartered Financial Analyst) exam. I've always wanted to try Anki but never got around to it. It shows flashcards at increasing time intervals based on the principle of spaced repetition. One example of this is the Leitner system, in which successive "boxes" show their cards less and less frequently, e.g. the program draws from box 1 half of the time, box 2 a quarter of the time, etc.; there's a tiny sketch of this scheduling idea after this list.
(Figure: the Leitner system of flashcard boxes.)
- Learning R. There is vibrant community support (especially in finance) for R. Some of my favorite quant finance bloggers - Timely Portfolio, Systematic Investor, Milktrader, CSS Analytics - use R. Non-finance R bloggers can be found on the excellent R-bloggers. So far, I have been able to link my R environment to Bloomberg terminal data as well as to free data sources such as Yahoo Finance and FRED (Federal Reserve Economic Data) using community packages.
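As promised above, here's a minimal sketch of the Leitner scheduling idea (my own toy version in Python - Anki's actual algorithm is more sophisticated):

```python
import random

def pick_box(n_boxes=5):
    # Box k is reviewed with probability proportional to 1/2^k:
    # box 1 about half the time, box 2 a quarter of the time, and so on.
    weights = [1 / 2 ** k for k in range(1, n_boxes + 1)]
    return random.choices(range(1, n_boxes + 1), weights=weights)[0]

def review(card_box, answered_correctly, n_boxes=5):
    # A correct answer promotes the card to a less frequently reviewed box;
    # a miss sends it back to box 1.
    return min(card_box + 1, n_boxes) if answered_correctly else 1

print([pick_box() for _ in range(10)])
```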

Books I would buy my friend if he had my exact interests and his birthday were coming up very soon (say, in 3 days):
Haidt, Jonathan - The Happiness Hypothesis
Koo, Richard - The Holy Grail of Macroeconomics
Postman, Neil - Amusing Ourselves to Death
Schelling, Thomas - Micromotives and Macrobehavior
Singer, Peter - Animal Liberation
Strauss, William & Howe, Neil - Generations
Wolfram, Stephen - A New Kind of Science

Launch of a New Blog

Proud to announce the launch of a new joint blog "50 Thoughts to the Dollar". This will be a joint venture by me and a few of my friends whose thoughts and areas of interest I consider diverse and valuable. They will introduce themselves as they post.

My primary motivation for starting this blog (which may or may not be shared by my fellow posters) is to have a space where a few knowledgeable people can join in conversation on diverse but related topics, engage each other, etc. A great example of this is The Conversation by the NYT Opinionator. The economics blogosphere is also very good at this.

A secondary motivation is that I think this would be a great way to keep in touch with friends, considering our geographic diversity (NYC, Philly, St Louis, California), career diversity (large firms, small firms, finance, technology, business school, entrepreneurs), and diversity of perspective.

We are aiming for 1-2 posts / week / blogger, so it'd work out to around 10 posts a week in aggregate. As time goes by, we'll see how realistic this is.

As for the blog name, Justin and Sean came up with it. On an unrelated thought, if someone gives you "a penny for your thoughts" for "just my two cents", isn't this just an illustration of the bid-ask spread? Seems very profitable to be a market maker in thoughts.