Category Archives: The Wilmott Magazine

My published (or soon to be published) column pieces in The Wilmott Magazine

Functional Programming

Functional programming is a programming methodology that puts great emphasis on statelessness and religiously avoids side effects of one function in the evaluation of any other function. Functions in this methodology are like mathematical functions. The conventional programming style, on the other hand, is considered “imperative” and uses states and their changes to accomplish computing tasks.

Adopting this notion of functional programming may sound like regressing to the pre-object-oriented age, and sacrificing all the advantages thereof. But there are practitioners, both in academia and in the industry, who strongly believe that functional languages are the only approach that ensures stability and robustness in financial and number-crunching applications.

Functional languages, by definition, are stateless. They do everything through functions, which return results that are, well, functions of their arguments. This statelessness immediately makes the functions behave like their mathematical counterparts. Similarly, in a functional language, variables behave like mathematical variables rather than labels for memory locations. And a statement like x = x + 1 would make no sense. After all, it makes no sense in real life either.

This strong mathematical underpinning makes functional programming the darling of mathematicians. A piece of code written in a functional programming language is a set of declarations, quite unlike code in a standard computer language such as C or C++, which represents a series of instructions for the computer. In other words, a functional language is declarative — its statements are mathematical declarations of facts and relationships, which is another reason why a statement like x = x + 1 would be illegal.

The declarative nature of the language makes it “lazy,” meaning that it computes a result only when we ask for it. (At least, that is the principle. In real life, full computational laziness may be difficult to achieve.) Computational laziness makes a functional programming language capable of handling many situations that would be impossible or exceedingly difficult for procedural languages.

Users of Mathematica, which is a functional language for symbolic manipulation of mathematical equations, would immediately appreciate the advantages of computational laziness and other functional features such as its declarative nature. In Mathematica, we can carry out an operation such as solving an equation. Once that is done, we can add a few more constraints at the bottom of our notebook, scroll up to the command that solved the original equation and re-execute it, fully expecting the later constraints to be respected. They will be, because a statement appearing later in the program listing is not an instruction to be carried out at a later point in a sequence. It is merely a declaration of a mathematical truth, no matter where it appears.
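To see what computational laziness looks like in code, here is a minimal sketch in Haskell, a functional language we will meet again in a moment. The names are purely illustrative: the infinite lists are declarations, and nothing is actually evaluated until a specific result is demanded.

-- A minimal sketch of computational laziness (illustrative names only).
-- 'naturals' and 'squares' are declarations of infinite lists; nothing
-- is evaluated until a result is actually demanded.
naturals :: [Integer]
naturals = [1 ..]

squares :: [Integer]
squares = map (^ 2) naturals

main :: IO ()
main = print (take 5 squares)  -- only now are five elements computed: [1,4,9,16,25]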

This affinity of functional languages toward mathematics may appeal to quants as well, who are, after all, mathematicians of the applied kind. To see where the appeal stems from, let us consider a simple example of computing the factorial of an integer. In C or C++, we can write a factorial function either using a loop or making use of recursion. In a functional language, on the other hand, we merely restate the mathematical definition, using the syntax of the language we are working with. In mathematics, we define factorial as:

n! = \left\{\begin{array}{ll} 1 & n = 1 \\ n \times (n-1)! & \textrm{Otherwise} \end{array}\right.

And in Haskell (a well-known functional programming language), we can write:

bang 1 = 1
bang n = n * bang (n-1)

And expect to make the call bang 12 to get the factorial of 12.

This example may look artificially simple. But we can port even more complicated problems from mathematics directly to a functional language. For an example closer to home, let us consider a binomial pricing model, illustrating that the ease and elegance with which Haskell handles factorial do indeed extend to real-life quantitative finance problems as well.
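To give a flavor of what that looks like, here is a minimal sketch of a binomial pricer in Haskell, along the lines of a Cox-Ross-Rubinstein tree for a European call. The function and parameter names are my own illustrative choices, and the recursion is left deliberately naive (it revisits nodes, so it is exponential in the number of steps); the point is that the code reads almost exactly like the recursive definition of the tree.

-- A sketch of a binomial (CRR-style) pricer for a European call.
-- Illustrative only: the value at a node is declared to be the discounted
-- risk-neutral expectation of the two successor nodes, exactly as in the
-- mathematical definition. No memoization, so it is exponential in n.
europeanCall :: Double  -- spot price s0
             -> Double  -- strike k
             -> Double  -- risk-free rate r (continuously compounded)
             -> Double  -- volatility sigma
             -> Double  -- time to maturity t, in years
             -> Int     -- number of steps n
             -> Double
europeanCall s0 k r sigma t n = value 0 s0
  where
    dt   = t / fromIntegral n
    u    = exp (sigma * sqrt dt)            -- up factor
    d    = 1 / u                            -- down factor
    p    = (exp (r * dt) - d) / (u - d)     -- risk-neutral up probability
    disc = exp (-(r * dt))                  -- one-step discount factor
    value i s
      | i == n    = max (s - k) 0           -- payoff at maturity
      | otherwise = disc * (p       * value (i + 1) (s * u)
                          + (1 - p) * value (i + 1) (s * d))

A call like europeanCall 100 100 0.05 0.2 1 10 would then price an at-the-money one-year call on a ten-step tree. For realistic step counts one would memoize or fold backwards over the tree, but the declarative shape of the model stays the same.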


Magic of Object Oriented Languages

Nowhere is the dominance of paradigms more obvious than in object oriented languages. Just take a look at the words that we use to describe some of their features: polymorphism, inheritance, virtual, abstract, overloading — all of them normal (or near-normal) everyday words, but signifying notions and concepts quite far from their literal meaning. Yet, and here is the rub, their meaning in the computing context seems exquisitely appropriate. Is it a sign that we have taken these paradigms too far? Perhaps. After all, the “object” in object oriented programming is already an abstract paradigm, having nothing to do with “That Obscure Object of Desire,” for instance.

We do see the abstraction process running a bit wild in design patterns. When a pattern calls itself a visitor or a factory, it takes a geekily forgiving heart to grant the poetic license silently usurped. Design patterns, despite the liberties they take with our sensibilities, add enormous power to object oriented programming, which is already very powerful, with built-in features like polymorphism, inheritance, overloading and so on.

To someone with an exclusive background in sequential programming, all these features of object oriented languages may seem like pure magic. But most of the features are really extensions or variations on their sequential programming equivalents. A class is merely a structure, and can even be declared as such in C++. When you add a method to a class, you can imagine that the compiler is secretly adding a global function with an extra argument (the reference to the object) and a unique identifier (say, a hash value of the class name). Polymorphic functions can also be implemented by adding a hash value of the function signature to the function names, and putting them in the global scope.

The real value of the object oriented methodology is that it encourages good design. But good programming discipline goes beyond the mere adoption of an object oriented language, which is why my first C++ teacher said, “You can write bad Fortran in C++ if you really want. Just that you have to work a little harder to do it.”

For all their magical powers, object oriented programming languages suffer from some common weaknesses. One of their major disadvantages is, in fact, one of the basic design features of object oriented programming. Objects are memory locations containing data laid down by the programmer (and the computer). Memory locations remember the state of the object — by design. The state an object is in determines what it does when a method is invoked. So the object oriented approach is inherently stateful, if we can agree on what “state” means in the object oriented context.

But in a user interface, where we do not have much control over the sequence in which various steps are executed, we might get erroneous results from stateful programming depending on which step gets executed at what point in time. Such considerations are especially important when we work with parallel computers in complex situations. One desirable property in such cases is that functions return results based solely on their arguments. This property, termed “purity,” is the basic design goal of most functional languages, although their architects will concede that most of them are not strictly “pure.”
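A small Haskell sketch may make the contrast concrete. The names and numbers are illustrative only: the pure function depends on nothing but its arguments, while the stateful one also depends on a hidden counter, so the order in which it is called changes its answers.

import Data.IORef (IORef, newIORef, readIORef, writeIORef)

-- Pure: the result depends only on the arguments, so calls can be
-- reordered or run in parallel without changing the answer.
discount :: Double -> Double -> Double
discount rate cashflow = cashflow / (1 + rate)

-- Stateful (deliberately contrived): the result also depends on how many
-- times the function has been called before, so calling order matters.
impureDiscount :: IORef Int -> Double -> Double -> IO Double
impureDiscount counter rate cashflow = do
  n <- readIORef counter
  writeIORef counter (n + 1)
  return (cashflow / (1 + rate + fromIntegral n * 0.001))

main :: IO ()
main = do
  counter <- newIORef 0
  a <- impureDiscount counter 0.05 100   -- 95.238...
  b <- impureDiscount counter 0.05 100   -- a different answer for the same inputs
  print (discount 0.05 100, a, b)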


Paradigms All the Way

Paradigms permeate almost all aspects of computing. Some of these paradigms are natural. For instance, it is natural to talk about an image or a song when we actually mean a JPEG or an MP3 file. A file is itself an abstraction that evolved in the file-folder paradigm popularized by Windows systems. The underlying objects or streams are again abstractions for patterns of ones and zeros, which represent voltage levels in transistors, or spin states on a magnetic disk. There is an endless hierarchy of paradigms. Like the proverbial turtles that confounded Bertrand Russell (or was it Samuel Johnson?), it is paradigms all the way down.

Some paradigms have faded into the background, although the terminology that evolved from them lingers. The original paradigm for computer networks (and of the Internet) was a mesh of interconnections residing in the sky above. This view has more or less been replaced by that of the World Wide Web residing on the ground at our level. But we still use the original paradigm whenever we say “download” or “upload.” The World Wide Web, by the way, is represented by the acronym WWW that figures in the names of most web sites. It is an acronym with the dubious distinction of being about the only one that takes us longer to say than what it stands for. But, getting back to our topic, paradigms are powerful and useful means of guiding our interactions with unfamiliar systems and environments, especially in computers, which are strange and complicated beasts to begin with.

A basic computer processor is deceptively simple. It is a string of gates. A gate is a switch (more or less) made up of a small group of transistors. A 32-bit processor has 32 switches in an array. Each switch can be either off, representing a zero, or on, representing a one. And a processor can do only one thing — add the contents of another array of gates (called a register) to itself. In other words, it can only “accumulate.”

In writing this last sentence, I have already started a process of abstraction. I wrote “contents,” thinking of the register as a container holding numbers. It is the power of multiple levels of abstraction, each of which is simple and obvious, but building on whatever comes before it, that makes a computer enormously powerful.

We can see abstractions, followed by the modularization of the abstracted concept, in every aspect of computing, both hardware and software. Groups of transistors become arrays of gates, and then processors, registers, cache or memory. Accumulations (additions) become all arithmetic operations, string manipulations, user interfaces, image and video editing and so on.

Another feature of computing that aids the seemingly endless march of Moore’s Law (which states that computers will double in power roughly every 18 months) is that each advance seems to fuel further advances, generating explosive growth. The first compiler, for instance, was written in primitive assembly language. The second one was written using the first one, and so on. Even in hardware development, one generation of computers becomes the tools for designing the next generation, stoking a seemingly inexorable cycle of development.

While this positive feedback in hardware and software is a good thing, the explosive nature of the growth may take us in wrong directions, much like the strong growth in the credit market led to the banking collapses of 2008. Many computing experts now wonder whether object oriented technology has been overplayed.


Zeros and Ones

Computers are notorious for their infuriatingly literal obedience. I am sure anyone who has ever worked with a computer has come across the lack of empathy on its part — it follows our instructions to the letter, yet ends up accomplishing something altogether different from what we intend. We have all been bitten in the rear end by this literal adherence to logic at the expense of common sense. We can attribute at least some of the blame to our lack of understanding (yes, literal and complete understanding) of the paradigms used in computing.

Rich in paradigms, the field of computing has a strong influence on the way we think and view the world. If you don’t believe me, just look at the way we learn things these days. Do we learn anything now, or do we merely learn how to access information through browsing and searching? Even our arithmetic abilities have eroded with the advent of calculators and spreadsheets. I remember the legends of great minds like Enrico Fermi, who estimated the power output of the first nuclear blast by floating a few pieces of scrap paper, and Richard Feynman, who beat an abacus expert by doing a binomial expansion. I wonder if the Fermis and Feynmans of our age would be able to pull off those stunts without pulling out their pocket calculators.

Procedural programming, through its unwarranted reuse of mathematical symbols and patterns, has shaped the way we interact with our computers. The paradigm that has evolved is distinctly unmathematical. Functional programming represents a counterattack, a campaign to win our minds back from the damaging influences of the mathematical monstrosities of procedural languages. The success of this battle may depend more on might and momentum than on truth and beauty. In our neck of the woods, this statement translates to a simple question: Can we find enough developers who can do functional programming? Or is it cheaper and more efficient to stick to procedural and object oriented methodologies?


House of Cards

We are in dire straits — no doubt about it. Our banks and financial edifices are collapsing. Those left standing also look shaky. The financial industry as a whole is battling to survive. And, as its front-line warriors, we will bear the brunt of the bloodbath sure to ensue any minute now.

Ominous as it looks now, this dark hour will pass, as have all the ones before it. How can we avoid such dark crises in the future? We can start by examining the root causes, the structural and systemic reasons, behind the current debacle. What are they? In my series of posts this month, I went through what I thought were the lessons to learn from the financial crisis. Here is what I think will happen.

The notion of risk management is sure to change in the coming years. Risk managers will have to be compensated well enough that top talent doesn’t always drift away into risk-taking roles. Credit risk paradigms will be reviewed. Are credit limits and ratings the right tools? Will off-balance-sheet instruments stay off the balance sheet? How will we account for leverage?

Regulatory frameworks will change. They will become more intrusive, but hopefully more transparent and honest as well.

Upper management compensation schemes may change, but probably not much. Despite what the techies at the bottom think, those who reach the top are smart. They will think of some innovative ways of keeping their perks. Don’t worry; there will always be something to look forward to, as you climb the corporate ladder.

Nietzsche may be right: what doesn’t kill us may eventually make us stronger. Hoping that this unprecedented financial crisis doesn’t kill us, let’s try to learn as much from it as possible.


Free Market Hypocrisy

Markets are not free, despite what the textbooks tell us. In mathematics, we verify the validity of equations by considering asymptotic or limiting cases. Let’s try the same trick on the statement that markets are free.

If commodity markets were free, we would have no tariff restrictions, agricultural subsidies or other market-skewing mechanisms at play. Heck, cocaine and heroin would be freely available. After all, there are willing buyers and sellers for those drugs. Indeed, drug lords would be respectable citizens belonging in country clubs rather than gun-toting cartels.

If labor markets were free, nobody would need a visa to go and work anywhere in the world. And, “equal pay for equal work” would be a true ideal across the globe, and nobody would whine about jobs being exported to third world countries.

Capital markets, at the receiving end of all the market turmoil of late, are highly regulated with capital adequacy and other Basel II requirements.

The derivatives market, our neck of the woods, is a strange beast. It steps in and out of the capital markets as convenient and muddles everything up so that people need us quants to explain it to them. We will get back to it in future columns.

So what exactly is free about the free market economy? It is free — as long as you deal in authorized commodities and products, operate within prescribed geographies, set aside as much capital as directed, and do not employ those you are not supposed to. By such creative redefinitions of terms like “free,” we can call even a high security prison free!

Don’t get me wrong. I wouldn’t advocate making all markets totally free. After all, opening the floodgates to the formidable Indian and Chinese talent can only adversely affect my salary levels. Nor am I suggesting that we deregulate everything and hope for the best. Far from it. All I am saying is that we need to be honest about what we mean by “free” in free markets, and understand and implement its meaning in a transparent way. I don’t know if it will help avoid a future financial meltdown, but it certainly can’t hurt.


Quant Culprits

Much has been said about the sins of the quants and their inability to model and price credit derivatives, especially Collateralized Debt Obligations (CDOs) and Mortgage Backed Securities (MBSs). In my opinion, it is not so much a quant failure. After all, if you have the market data (especially default correlations), credit derivatives are not all that hard to price.

The failure was really in understanding how interrelated credit and market risks were, given that they were independently managed using totally different paradigms. I think an overhaul is called for here, not merely in modeling and pricing credit risks, but also in the paradigms and practices used in managing them.

Ultimately, we have to understand how the whole lifecycle of a trade is managed, and how various business units in a financial institution interact with each other bearing one common goal in mind. It is this fascination of mine with the “big picture” that inspired me to write The Principles of Quantitative Development, to be published by Wiley Finance in 2010.


Where Credit is Due

While the market risk managers are getting grilled for the financial debacle we are in, the credit controllers are walking around with that smug look that says, “Told you so!” But systemic reasons for the financial turmoil hide in our credit risk management practices as well.

We manage credit risk in two ways — by demanding collateral or by allocating credit limits. In the consumer credit market, these correspond to secured lending (home mortgages, for instance) and unsecured loans (say, credit lines). The latter clearly involves more credit risk, which is why you pay obscene interest on outstanding balances.

In dealing with financial counterparties, we use the same two paradigms. Collateral-based credit management is generally safe because the collateral involved cannot be used for multiple credit exposures. But when we assign each counterparty a credit limit based on its credit rating, we have a problem. While the credit rating of a bank or a financial institution may be accurate, it is almost impossible to know how much credit is already loaded against that entity (because options and derivatives are “off balance sheet” instruments). This situation is akin to a bank’s inability to check how much you have drawn against your other credit lines when it offers you an overdraft facility.

The end result is that even in good times, the leverage against the credit rating can be dangerously high without counterparties realizing it. The ensuing painful deleveraging takes place when a credit event (such as lowering of the credit rating) occurs.


Hedging Dilemma

Ever wonder why those airfares are quick to climb, but slow to land? Well, you can blame the risk managers.

When the oil price hit $147 a barrel in July ’08, with all the pundits predicting sustained $200 levels, what would you have done if you were risk managing an airline’s exposure to fuel? You would have run out and paid an arm and a leg to hedge it. Hedging would essentially fix the price for your company around the $150 level, no matter how the market moved. Now you sit back and relax, happy in the knowledge that you have saved your firm potentially millions of dollars.

Then, to your horror, the oil price nosedives, and your firm is paying $100 more than it should for each barrel of oil. (Of course, airlines don’t buy WTI, but you know what I mean.) So, thanks to the risk managers’ honest work, airlines (and even countries) are now handing over huge sums of money to energy traders. Would you rather be a trader or a risk manager?

And, yes, the airfares will come down, but not before the risk managers take their due share of flak.


Risky Business

Just as 9/11 was more of an intelligence failure than a security lapse, the subprime debacle is a risk management breakdown, not merely a regulatory shortcoming. To do anything useful with this rather obvious insight, we need to understand why risk management failed, and how to correct it.

Risk management should be our first line of defense — it is a preventive mechanism, while regulatory framework (which also needs beefing up) is a curative, reactive second line.

The first reason for the inadequacy of risk management is the lack of glamour that risk controllers in a financial institution suffer from compared to their risk-taking counterparts. (Glamour here is a euphemism for salary.) If a risk taker does his job well, he makes money. He is a profit centre. On the other hand, if a risk controller does his job well, he ensures that the losses are not disproportionate. But in order to limit the downside, the risk controller has to limit the upside as well.

In a culture based on performance incentives, and where performance is measured in terms of profit, we can see why the risk controller’s job is sadly under-appreciated and under-compensated.

This imbalance has grave implications. It is the conflict between the risk takers and risk managers that enforces the corporate risk appetite. If the gamblers are being encouraged directly or indirectly, it is an indication of where the risk appetite lies. The question then is, was the risk appetite a little too strong?

The consequences of this lack of equilibrium between the risk manager and the risk taker are equally troubling. The smarter ones in the risk management group slowly migrate to “profit generating” (read trading or Front Office) roles, thereby exacerbating the imbalance.

The talent migration and the consequent lack of control are not confined within the walls of a financial institution. Even regulatory bodies could not compete with the likes of Lehman Brothers when hunting for top talent. The net result was that when the inevitable meltdown finally began, we were left with inadequate risk management and regulatory defenses.
