Category Archives: Quantitative Finance

Quantitative Finance is my professional field. I write columns for The Wilmott Magazine, a well-known periodical in the field. Here are those columns and more.

Talent and Intelligence

In the last post, I argued that how hard we work has nothing much to do with how much reward we should reap. After all, there are taxi drivers who work longer and harder, and even more unfortunate souls in the slums of India and other poor countries.

But I am treading on really thin ice when I compare, however obliquely, senior executives to cabbies and slum dogs. They are (the executives, that is) clearly a lot more talented, which brings me to the famous talent argument for bonuses. What is this talent thing? Is it intelligence and articulation? I once met a taxi driver in Bangalore who was fluent in more than a dozen languages as disparate as English and Arabic. I discovered his hidden talent by accident when he cracked up at something my father said to me — a private joke in our vernacular, something I have seldom seen a non-native speaker even attempt. I couldn’t help thinking then that, given another place and another time, this cabbie would have been a professor of linguistics or something. Talent may be a necessary condition for success (and bonuses), but it certainly is not a sufficient one. Even among slum dogs, we might find ample talent, if the Oscar-winning movie is anything to go by. True, the protagonist in the movie does make his million-dollar bonus, but that was only fiction.

In real life, however, lucky accidents of circumstances play a more critical role than talent in putting us on the right side of the income divide. To me, it seems silly to claim a right to the rewards based on any perception of talent or intelligence. Heck, intelligence itself, however we define it, is nothing but a happy genetic accident.


Hard Work

One argument for big bonuses is that the executives work hard for them and earn them fair and square. It is true that some of these executives spend enormous amounts of time at work (10 to 14 hours a day, according to the AIG executive under the spotlight here). But do long hours and hard work automatically make us “those who deserve the best in life,” as Tracy Chapman puts it?

I have met taxi drivers in Singapore who ply the streets hour after owl-shift hour before they can break even. Apparently the rentals the cabbies have to pay are quite high, and they end up working consistently longer hours than most executives. Farther beyond our moral horizon, slum dogs forage through garbage dumps for scraps they can eat or sell. Back-breaking labour, I imagine. Long hours, terrible working conditions, and hard, hard work — but no bonus.

It looks to me as though hard work has very little correlation with what one is entitled to. We have to look elsewhere to find justifications for what we consider our due.


Bonus Plans of Mice and Men

Our best-laid plans often go awry. We see it all the time at a personal level — accidents (both good and bad), deaths (both of loved ones and rich uncles), births, and lotteries all conspire to reshuffle our priorities and render our plans null and void. In fact, there is nothing like a solid misfortune to get us to put things in perspective. This opportunity may be the proverbial silver lining we are constantly advised to see. What is true at a personal level holds true also at a larger scale. The industry-wide financial meltdown has imparted a philosophical clarity to our profession — a clarity that we might have been too busy to notice, but for the dire straits we are in right now.

This philosophical clarity inspires analyses (and columns, of course) that are at times self-serving and at times soul-searching. We now worry, for instance, about the moral rectitude behind the insane bonus expectations of yesteryears. The case in point is Jake DeSantis, the AIG executive vice president who resigned rather publicly in the New York Times and donated his relatively modest bonus of a million dollars to charity. The reasons behind the resignation are interesting, and fodder for this series of posts.

Before I go any further, let me state it outright: I am going to try to shred his arguments as best I can. I am sure I would have sung a totally different tune if they had given me a million-dollar bonus, or if anybody had the temerity to suggest that I part with my own bonus, paltry as it may seem in comparison. I will keep that possibility beyond the scope of this column, ignoring the moral inconsistency others might maliciously perceive therein. I will talk only about other people’s bonuses. After all, we are at our best when dealing with other people’s money. And it is always easier to risk and sacrifice something that doesn’t belong to us.


How Much is Your Time Worth?

I recently got a crazy idea. Suppose I tell you, “I will give you a ten-million-dollar job for a month. But I will have to kill you in two months.” Of course, you will have to know that I am serious. Let’s say I am an eccentric billionaire. Will you take the ten million dollars?

I am certain that most people will not take this job offer. In fact, there is a movie with Johnny Depp and Marlon Brando (IMDb tells me that it is The Brave) where Depp’s character actually takes up such an offer. Twenty-five thousand, I believe, was the price that he agreed upon for the rest of his life. For some of us, the price may be higher, but it is possible that there is a price that we will agree upon.

To me, my price is infinite — I wouldn’t trade the rest of my life for any amount of money. What good is all the money in the world if I don’t have the time to spend it? But this stance of mine is neither consistent with what I do, nor fully devoid of hypocrisy. Hardly anything in real life is. If we say we won’t trade time for money, then how come we happily sell our time to our employers? Is it just that we don’t appreciate what we are doing? Or is it that our time is limited?

I guess the trade-off between time and money is not straightforward. It is not a linear scale. If we have no money, then our time is worth nothing. We are willing to sell it for almost nothing. The reason is clear — it takes money to keep body and soul together. Without a bare minimum of money, there is indeed no time left to sell. As we make a bit of money, a bit more than the bare minimum, we begin to value time more. But as we make more money, we realize that we can make even more by selling more time, because our time is worth more now! This implicit vicious circle may be what is driving the crazy rat race we see all around us.

Selling time is an interesting concept. We clearly do sell our time to those who pay us. Employees sell time to their employers. Entrepreneurs sell their time to their customers, and in building their businesses. But there is a fundamental difference between the two modes of selling. While employees sell their time only once, businessmen sell their time multiple times. So do authors and actors. They spend a certain amount of time doing whatever they do, but the products they create (books, businesses, movies, Windows XP, songs and so on) are sold over and over again. That is why they can make their millions and billions, while those who work for somebody else find it very difficult to get really rich.

A New Kind of Binomial Tree

We can port even more complicated problems from mathematics directly to a functional language. For an example closer to home, let us consider a binomial pricing model, illustrating that the ease and elegance with which Haskell handles factorial do indeed extend to real-life quantitative finance problems as well.

The binomial tree pricing model works by assuming that the price of an underlying asset can only move up or down by constant factors u and d during a small time interval \delta t. Stringing together many such time intervals, we make up the expiration time of the derivative instrument we are trying to price. The value of the derivative is a function of the price of the underlying at any point in time.

Figure 1. Binomial tree pricing model. On the X axis, labeled i, we have the time steps. The Y axis represents the price of the underlying, labeled j. The only difference from the standard binomial tree is that we have let j be both positive and negative, which is mathematically natural, and hence simplifies the notation in a functional language.

We can visualize the binomial tree as shown in Fig. 1. At time t = 0, we have the asset price S(0) = S_0. At t = \delta t (with the maturity T = N\delta t), we have two possible asset values S_0 u and S_0 d = S_0 / u, where we have chosen d = 1/u. In general, at time i\delta t, at the asset price node level j, we have

S_{ij} = S_0 u^j

By choosing the up and down price movements to be reciprocals of each other, we have created a recombinant binomial tree, which is why we have only i+1 distinct price nodes at time step i\delta t (the level j runs from -i to i in steps of two), rather than the 2^i nodes of a tree that does not recombine. In order to price the derivative, we have to assign risk-neutral probabilities to the up and down price movements. The risk-neutral probability for an upward movement of u is denoted by p. With these notations, we can write down the fair value of an American call option (of expiry T, underlying asset price S_0, strike price K, risk-free interest rate r, asset price volatility \sigma and number of time steps in the binomial tree N) using the binomial tree pricing model as follows:

\textrm{OptionPrice}(T, S_0, K, r, \sigma, N) = f_{00}

where f_{ij} denotes the fair value of the option at the node at time step i and price level j (referring to Fig. 1).

f_{ij} = \left\{\begin{array}{ll}\textrm{Max}(S_{ij} - K, 0) & \textrm{if } i = N \\ \textrm{Max}\left(S_{ij} - K,\; e^{-r\delta t}\left(p f_{i+1,\, j+1} + (1-p) f_{i+1,\, j-1}\right)\right) & \textrm{otherwise}\end{array}\right.

At maturity, i = N and i\delta t = T, and we exercise the option if it is in the money, which is what the first Max function denotes. The second line in the expression above represents the risk-neutral backward propagation of the option price from the time layer at (i+1)\delta t to the one at i\delta t. At each node, if the intrinsic value is greater than the propagated option price, we exercise the option, which is what the second Max function captures.

The common choice for the upward price movement depends on the volatility of the underlying asset: u = e^{\sigma\sqrt{\delta t}}, and the downward movement is chosen to be d = 1/u to ensure that we have a recombinant tree. For risk neutrality, we have the probability defined as:

p = \frac{ e^{r\delta t} - d}{u - d}

For the purpose of illustrating how it translates to the functional programming language of Haskell, let us put all these equations together once more.

\textrm{OptionPrice}(T, S_0, K, r, \sigma, N) = f_{00}
where
f_{ij} = \left\{\begin{array}{ll}\textrm{Max}(S_{ij} - K, 0) & \textrm{if } i = N \\ \textrm{Max}\left(S_{ij} - K,\; e^{-r\delta t}\left(p f_{i+1,\, j+1} + (1-p) f_{i+1,\, j-1}\right)\right) & \textrm{otherwise}\end{array}\right.
S_{ij}  = S_0 u^j
u = e^{\sigma\sqrt{\delta t}}
d  = 1/u
\delta t  = T/N
p  = \frac{ e^{r\delta t} - d}{u - d}

Now, let us look at the code in Haskell.

optionPrice t s0 k r sigma n = f 0 0
    where
      -- fair value of the option at time step i, price level j
      f i j =
          if i == n
          then max ((s i j) - k) 0  -- at expiry: the intrinsic value
          else max ((s i j) - k)    -- exercise early, or hold and take
                    (exp(-r*dt) * (p * f(i+1)(j+1) +      -- the discounted
                    (1-p) * f(i+1)(j-1)))                 -- expected value
      s i j = s0 * u**j             -- price of the underlying at node (i, j)
      u = exp(sigma * sqrt dt)      -- upward move factor
      d = 1 / u                     -- downward move factor
      dt = t / n                    -- length of one time interval
      p = (exp(r*dt)-d) / (u-d)     -- risk-neutral probability of an up move

As we can see, it is a near-verbatim rendition of the mathematical statements, nothing more. This code snippet actually runs as it is, and produces the following result.

*Main> optionPrice 1 100 110 0.05 0.3 20
10.10369526959085

Looking at the remarkable similarity between the mathematical equations and the code in Haskell, we can understand why mathematicians love the idea of functional programming. This particular implementation of the binomial pricing model may not be the most computationally efficient, but it certainly is one of great elegance and brevity.
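One way to recover the efficiency without losing much of the brevity is to price a whole time layer at once, so that no node is ever revisited. The sketch below is my own variant, not part of the original column; optionPrice' is a hypothetical name, and the code assumes the same parameters as above.

-- Backward induction over whole layers: start from the payoffs at
-- expiry and fold back one time step at a time, so each node is
-- computed exactly once (O(N^2) work instead of O(2^N)).
optionPrice' t s0 k r sigma n = head (foldr step payoffs [0 .. n-1])
  where
    -- option values at expiry, one per reachable level j = n, n-2, ..., -n
    payoffs = [max (s n j - k) 0 | j <- levels n]
    levels i = [i, i-2 .. -i]
    -- one step of backward propagation from layer i+1 (next) to layer i
    step i next = [ max (s i j - k)
                        (exp (-r*dt) * (p * up + (1-p) * down))
                  | (j, up, down) <- zip3 (levels i) next (drop 1 next) ]
    s i j = s0 * u ** fromIntegral j  -- price of the underlying at level j
    u = exp (sigma * sqrt dt)
    d = 1 / u
    dt = t / fromIntegral n
    p = (exp (r*dt) - d) / (u - d)

Called as optionPrice' 1 100 110 0.05 0.3 20, it performs the same backward propagation as the recursive version, only layer by layer.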

While a functional programming language may not be appropriate for a full-fledged implementation of a trading platform, many of its underlying principles, such as type abstractions and strict purity, may prove invaluable in the programs we use in quantitative finance, where heavy mathematics and number crunching are involved. The mathematical rigor enables us to employ complex functional manipulations at the program level. The religious adherence to the notion of statelessness in functional programming has another great benefit: it helps in parallelizing and grid-enabling the computations with almost no extra work.
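To see how little extra work, here is a minimal sketch, assuming the Haskell parallel package (Control.Parallel.Strategies) and the optionPrice function defined above; priceCurve is a hypothetical helper name of my own.

import Control.Parallel.Strategies (parMap, rdeepseq)

-- Price the option at several strikes in parallel. Because optionPrice
-- is pure, each pricing depends only on its arguments, so the runtime
-- can evaluate them on separate cores with no locks or shared state.
priceCurve :: [Double] -> [Double]
priceCurve = parMap rdeepseq (\k -> optionPrice 1 100 k 0.05 0.3 20)

Compiled with GHC's -threaded flag, parMap rdeepseq evaluates the list elements in parallel; replacing it with the ordinary map gives back the sequential program, which is the "almost no extra work" claim made concrete.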


Functional Programming

Functional programming is a programming methodology that puts great emphasis on statelessness and religiously avoids the side effects of one function in the evaluation of any other function. Functions in this methodology are like mathematical functions. The conventional programming style, on the other hand, is considered “imperative” and uses states and their changes to accomplish computing tasks.

Adopting this notion of functional programming may sound like regressing to the pre-object-oriented age and sacrificing all the advantages thereof. But there are practitioners, both in academia and in the industry, who strongly believe that functional languages are the only approach that ensures stability and robustness in financial and number-crunching applications.

Functional languages, by definition, are stateless. They do everything through functions, which return results that are, well, functions of their arguments. This statelessness immediately makes the functions behave like their mathematical counterparts. Similarly, in a functional language, variables behave like mathematical variables rather than labels for memory locations. A statement like x = x + 1 would make no sense. After all, it makes no sense in real life either.
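As a quick illustration (a toy sketch of my own, not from the column): in Haskell, x = x + 1 is not even an increment gone wrong; it is read as a recursive equation that no finite number satisfies, so evaluating such an x would simply never terminate. The functional idiom is to compute a new value instead.

-- x = x + 1   -- read as a recursive equation with no finite solution:
--             -- evaluating this x would loop forever, not increment it

-- The functional way: return a new value rather than update an old one.
increment :: Int -> Int
increment n = n + 1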

This strong mathematical underpinning makes functional programming the darling of mathematicians. A piece of code written in a functional programming language is a set of declarations, quite unlike a standard computer language such as C or C++, where the code represents a series of instructions for the computer. In other words, a functional language is declarative — its statements are mathematical declarations of facts and relationships, which is another reason why a statement like x = x + 1, read as an update, has no place in the language.

The declarative nature of the language makes it “lazy,” meaning that it computes a result only when we ask for it. (At least, that is the principle. In real life, full computational laziness may be difficult to achieve.) Computational laziness makes a functional programming language capable of handling many situations that would be impossible or exceedingly difficult for procedural languages. Users of Mathematica, a functional language for symbolic manipulation of mathematical equations, will immediately appreciate the advantages of computational laziness and other functional features such as the declarative nature. In Mathematica, we can carry out an operation like solving an equation, for instance. Once that is done, we can add a few more constraints at the bottom of our notebook, scroll up to the command that solves the original equation and re-execute it, fully expecting the later constraints to be respected. They will be, because a statement appearing later in the program listing is not an instruction to be carried out at a later point in a sequence. It is merely a declaration of a mathematical truth, no matter where it appears.
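Haskell shows the same trait. As a small illustration of my own (not from the column), the following declaration describes all the Fibonacci numbers at once; laziness means that none of them is computed until some part of the list is actually demanded.

-- An infinite list of Fibonacci numbers. The definition is a declared
-- relationship between the list and its own tail, not an instruction
-- to compute anything.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

Asking for take 10 fibs forces exactly ten elements, [0,1,1,2,3,5,8,13,21,34], and not one more.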

This affinity of functional languages toward mathematics may appeal to quants as well, who are, after all, mathematicians of the applied kind. To see where the appeal stems from, let us consider a simple example of computing the factorial of an integer. In C or C++, we can write a factorial function either using a loop or making use of recursion. In a functional language, on the other hand, we merely restate the mathematical definition, using the syntax of the language we are working with. In mathematics, we define factorial as:

n! = \left\{\begin{array}{ll}1 & n = 1 \\ n \times (n-1)! & \textrm{otherwise}\end{array}\right.

And in Haskell (a well-known functional programming language), we can write:

bang 1 = 1
bang n = n * bang (n-1)

And expect to make the call bang 12 to get the factorial of 12.
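Loading these two lines into GHCi and making that call gives:

*Main> bang 12
479001600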

This example may look artificially simple. But we can port even more complicated problems from mathematics directly to a functional language. For an example closer to home, let us consider a binomial pricing model, illustrating that the ease and elegance with which Haskell handles factorial do indeed extend to real-life quantitative finance problems as well.


Magic of Object Oriented Languages

Nowhere is the dominance of paradigms more obvious than in object oriented languages. Just take a look at the words we use to describe some of their features: polymorphism, inheritance, virtual, abstract, overloading — all of them normal (or near-normal) everyday words, but signifying notions and concepts quite far from their literal meanings. Yet, and here is the rub, their meanings in the computing context seem exquisitely appropriate. Is it a sign that we have taken these paradigms too far? Perhaps. After all, the “object” in object oriented programming is already an abstract paradigm, having nothing to do with “That Obscure Object of Desire,” for instance.

We do see the abstraction process running a bit wild in design patterns. When a pattern calls itself a visitor or a factory, it takes a geekily forgiving heart to grant the poetic license silently usurped. Design patterns, despite the liberties they take with our sensibilities, add enormous power to object oriented programming, which is already very powerful, with built-in features like polymorphism, inheritance and overloading.

To someone with an exclusive background in sequential programming, all these features of object oriented languages may seem like pure magic. But most of them are really extensions of, or variations on, their sequential programming equivalents. A class is merely a structure, and can even be declared as such in C++. When you add a method to a class, you can imagine that the compiler is secretly adding a global function with an extra argument (the reference to the object) and a unique identifier (say, a hash value of the class name). Polymorphic functions, too, can be implemented by adding a hash value of the function signature to the function names and putting them in the global scope.
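We can mimic that desugaring directly. Here is a toy sketch, kept in Haskell to stay in the one language used in these columns; the Account type and the deposit function are illustrative names of my own. The “class” becomes a plain record, and the “method” becomes an ordinary function whose first argument is the object reference the compiler would otherwise pass implicitly.

-- The "class" is just data laid out in a structure...
data Account = Account { owner :: String, balance :: Double }

-- ...and the "method" is a global function with an extra argument:
-- the object it acts on, passed explicitly instead of as a hidden `this`.
deposit :: Account -> Double -> Account
deposit acct amount = acct { balance = balance acct + amount }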

The real value of the object oriented methodology is that it encourages good design. But good programming discipline goes beyond the mere adoption of an object oriented language, which is why my first C++ teacher said, “You can write bad Fortran in C++ if you really want. Just that you have to work a little harder to do it.”

For all their magical powers, object oriented programming languages all suffer from some common weaknesses. One of their major disadvantages is, in fact, one of the basic design features of object oriented programming. Objects are memory locations containing data laid down by the programmer (and the compiler). Memory locations remember the state of the object — by design. The state an object is in determines what it does when a method is invoked. So the object oriented approach is inherently stateful, if we can agree on what “state” means in the object oriented context.

But in a user interface, where we do not have much control over the sequence in which various steps are executed, stateful programming might produce erroneous results depending on which step gets executed at what point in time. Such considerations are especially important when we work with parallel computers in complex situations. One desirable property in such cases is that the functions return values based solely on their arguments. This property, termed “purity,” is the basic design goal of most functional languages, although their architects will concede that most of them are not strictly “pure.”
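A one-line example (my own sketch) makes the property concrete: the payoff function below is pure, in that its result is fixed by its arguments alone, so it returns the same number no matter when, in what order, or on which processor it is called.

-- Pure: same arguments, same answer, on any core, in any order.
payoff :: Double -> Double -> Double
payoff s k = max (s - k) 0

-- An impure counterpart would consult hidden state (a global counter,
-- the clock, a random seed), so its answer could change between calls.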


Paradigms All the Way

Paradigms permeate almost all aspects of computing. Some of these paradigms are natural. For instance, it is natural to talk about an image or a song when we actually mean a JPEG or an MP3 file. A file is already an abstraction that evolved in the file-folder paradigm popularized by Windows systems. The underlying objects or streams are again abstractions for patterns of ones and zeros, which represent voltage levels in transistors, or spin states on a magnetic disk. There is an endless hierarchy of paradigms. Like the proverbial turtles that confounded Bertrand Russell (or was it Samuel Johnson?), it is paradigms all the way down.

Some paradigms have faded into the background, although the terminology that evolved from them lingers. The original paradigm for computer networks (and of the Internet) was a mesh of interconnections residing in the sky above. This view has been more or less replaced by that of the World Wide Web residing on the ground at our level. But we still use the original paradigm whenever we say “download” or “upload.” The World Wide Web, by the way, is represented by the acronym WWW that figures in the names of most web sites. It has the dubious distinction of being about the only acronym that takes longer to say than what it stands for. But, getting back to our topic, paradigms are powerful and useful means of guiding our interactions with unfamiliar systems and environments, especially computers, which are strange and complicated beasts to begin with.

A basic computer processor is deceptively simple. It is a string of gates. A gate is a switch (more or less) made up of a small group of transistors. A 32-bit processor has 32 switches in an array. Each switch can be either off (representing a zero) or on (representing a one). And a basic processor can do only one thing — add the contents of another array of gates (called a register) to itself. In other words, it can only “accumulate.”

In writing this last sentence, I have already started a process of abstraction. I wrote “contents,” thinking of the register as a container holding numbers. It is the power of multiple levels of abstraction, each of which is simple and obvious, but building on whatever comes before it, that makes a computer enormously powerful.

We can see abstractions, followed by the modularization of the abstracted concept, in every aspect of computing, both hardware and software. Groups of transistors become arrays of gates, and then processors, registers, cache or memory. Accumulations (additions) become all arithmetic operations, string manipulations, user interfaces, image and video editing and so on.

Another feature of computing that aids the seemingly endless march of Moore’s Law (which states, roughly, that computers double in power every 18 months) is that each advance seems to fuel further advances, generating explosive growth. The first compiler, for instance, was written in primitive assembly language. The second one was written using the first one, and so on. Even in hardware development, one generation of computers becomes the tool for designing the next generation, stoking a seemingly inexorable cycle of development.

While this positive feedback in hardware and software is a good thing, the explosive nature of the growth may take us in wrong directions, much like the runaway growth in the credit market led to the banking collapses of 2008. Many computing experts now wonder whether object oriented technology has been overplayed.


Zeros and Ones

Computers are notorious for their infuriatingly literal obedience. I am sure anyone who has ever worked with a computer has come across its lack of empathy — it follows our instructions to the letter, yet ends up accomplishing something altogether different from what we intend. We have all been bitten in the rear end by this literal adherence to logic at the expense of common sense. We can attribute at least some of the blame to our lack of understanding (yes, literal and complete understanding) of the paradigms used in computing.

Rich in paradigms, the field of computing has a strong influence on the way we think and view the world. If you don’t believe me, just look at the way we learn things these days. Do we learn anything now, or do we merely learn how to access information through browsing and searching? Even our arithmetic abilities have eroded with the advent of calculators and spreadsheets. I remember the legends of great minds like Enrico Fermi, who estimated the yield of the first nuclear blast by floating a few pieces of scrap paper in its shock wave, and Richard Feynman, who beat an abacus expert by doing binomial expansion in his head. I wonder whether the Fermis and Feynmans of our age would be able to pull off those stunts without pulling out their pocket calculators.

Procedural programming, through its unwarranted reuse of mathematical symbols and patterns, has shaped the way we interact with our computers. The paradigm that has evolved is distinctly unmathematical. Functional programming represents a counterattack, a campaign to win our minds back from the damaging influences of the mathematical monstrosities of procedural languages. The success of this battle may depend more on might and momentum than on truth and beauty. In our neck of the woods, the question is a simple one: can we find enough developers who can do functional programming? Or is it cheaper and more efficient to stick to procedural and object oriented methodologies?


House of Cards

We are in dire straits — no doubt about it. Our banks and financial edifices are collapsing. Those left standing look shaky as well. The financial industry as a whole is battling to survive. And, as its front-line warriors, we will bear the brunt of the bloodbath sure to ensue any minute now.

Ominous as it looks now, this dark hour will pass, as all the ones before it have. How can we avoid such crises in the future? We can start by examining the root causes, the structural and systemic reasons, behind the current debacle. What are they? In my series of posts this month, I went through what I thought were the lessons to learn from the financial crisis. Here is what I think will happen.

The notion of risk management is sure to change in the coming years. Risk managers will have to be compensated well enough that top talent doesn’t always drift away from risk management into risk-taking roles. Credit risk paradigms will be reviewed. Are credit limits and ratings the right tools? Will off-balance-sheet instruments stay off the balance sheet? How will we account for leverage?

Regulatory frameworks will change. They will become more intrusive, but hopefully more transparent and honest as well.

Upper management compensation schemes may change, but probably not much. Despite what the techies at the bottom think, those who reach the top are smart. They will think of some innovative ways of keeping their perks. Don’t worry; there will always be something to look forward to, as you climb the corporate ladder.

Nietzsche may be right: what doesn’t kill us may eventually make us stronger. Hoping that this unprecedented financial crisis doesn’t kill us, let’s try to learn as much from it as possible.
