I was suffering from a prolonged writer’s block, and was away from my blog for almost four months — probably the longest hiatus in my writing career. When a writer is lazy, it is writer’s block. When a normal person is lazy, he is just lazy.
This post is an edited version of my responses in a Webinar panel-discussion organized by Wiley-Finance and FinCAD. The freely available Webcast is linked in the post, and contains responses from the other participants — Paul Wilmott and Espen Haug. An expanded version of this post may later appear as an article in the Wilmott Magazine.
What is Risk?
When we use the word Risk in normal conversation, it has a negative connotation — risk of getting hit by a car, for instance; but not the risk of winning a lottery. In finance, risk is both positive and negative. At times, you want the exposure to a certain kind of risk to counterbalance some other exposure; at times, you are looking for the returns associated with a certain risk. Risk, in this context, is almost identical to the mathematical concept of probability.
But even in finance, you have one kind of risk that is always negative — it is Operational Risk. My professional interest right now is in minimizing the operational risk associated with trading and computational platforms.
How do you measure Risk?
Measuring risk ultimately boils down to estimating the probability of a loss as a function of something — typically the intensity of the loss and time. So it’s like asking — What’s the probability of losing a million dollars or two million dollars tomorrow or the day after?
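That question can be sketched numerically. Below is a toy calculation, assuming (purely for illustration) zero-drift, normally distributed portfolio returns; the portfolio size and volatility figures are invented, not taken from any real book.

```python
import math

def loss_probability(value, daily_vol, threshold, horizon_days):
    """P(loss > threshold) over the horizon, under a toy model of
    zero-drift, normally distributed returns."""
    sigma = value * daily_vol * math.sqrt(horizon_days)
    z = threshold / sigma
    # Normal upper-tail probability via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

# A hypothetical 100M portfolio with 1% daily volatility
p1 = loss_probability(100e6, 0.01, 1e6, 1)  # losing 1M tomorrow: ~16%
p2 = loss_probability(100e6, 0.01, 2e6, 2)  # losing 2M within two days: ~8%
```

Even in this toy model, the answer is a function of both the size of the loss and the time horizon, which is exactly the two-argument shape of the question above.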
The question of whether we can measure risk is another way of asking whether we can figure out this probability function. In certain cases, we believe we can — in Market Risk, for instance, we have very good models for this function. Credit Risk is a different story — although we thought we could measure it, we learned the hard way that we probably could not.
The question of how effective the measure is, is, in my view, like asking ourselves, “What do we do with a probability number?” If I do a fancy calculation and tell you that you have a 27.3% probability of losing one million tomorrow, what do you do with that piece of information? Probability has a reasonable meaning only in a statistical sense, in high-frequency events or large ensembles. Risk events, almost by definition, are low-frequency events, and a probability number may have only limited practical use. But as a pricing tool, accurate probability is great, especially when you price instruments with deep market liquidity.
Innovation in Risk Management
Innovation in Risk comes in two flavors — one is on the risk-taking side, which is in pricing, warehousing risk and so on. On this front, we do it well, or at least we think we are doing it well, and innovation in pricing and modeling is active. The flip side of it is, of course, risk management. Here, I think innovation actually lags behind catastrophic events. Once we have a financial crisis, for instance, we do a post-mortem, figure out what went wrong and try to implement safeguards. But the next failure, of course, is going to come from some other, totally unexpected angle.
What is the role of Risk Management in a bank?
Risk taking and risk management are two aspects of a bank’s day-to-day business. These two aspects seem in conflict with each other, but the conflict is no accident. It is through fine-tuning this conflict that a bank implements its risk appetite. It is like a dynamic equilibrium that can be tweaked as desired.
What is the role of vendors?
In my experience, vendors seem to influence the processes rather than the methodologies of risk management, and indeed of modeling. A vended system, however customizable it may be, comes with its own assumptions about the workflow, lifecycle management etc. The processes built around the system will have to adapt to these assumptions. This is not a bad thing. At the very least, popular vended systems serve to standardize risk management practices.
For all its pretentiousness, French cuisine is pretty amazing. Sure, I’m no degustation connoisseur, but the French really know how to eat well. It is little wonder that the finest restaurants in the world are mostly French. The most pivotal aspect of a French dish usually is its delicate sauce, along with choice cuts, and, of course, inspired presentation (AKA huge plates and minuscule servings). The chefs, those artists in their tall white hats, show off their talent primarily in the subtleties of the sauce, for which knowledgeable patrons happily hand over large sums of money in those establishments, half of which are called “Cafe de Paris” or have the word “petit” in their names.
Seriously, sauce is king (to use Bollywood lingo) in French cuisine, so I found it shocking when I saw on the BBC that more and more French chefs were resorting to factory-manufactured sauces. Even the slices of boiled eggs garnishing their overpriced salads come in a cylindrical form wrapped in plastic. How could this be? How could they use mass-produced garbage and pretend to be serving up the finest gastronomical experiences?
Sure, we can see corporate and personal greed driving the policies to cut corners and use the cheapest of ingredients. But there is a small technology success story here. A few years ago, I read in the newspaper that they found fake chicken eggs in some Chinese supermarkets. They were “fresh” eggs, with shells, yolks, whites and everything. You could even make omelets with them. Imagine that — a real chicken egg probably costs only a few cents to produce. But someone could set up a manufacturing process that could churn out fake eggs cheaper than that. You have to admire the ingenuity involved — unless, of course, you have to eat those eggs.
The trouble with our times is that this unpalatable ingenuity is all pervasive. It is the norm, not the exception. We see it in tainted paints on toys, harmful garbage processed into fast food (or even fine-dining, apparently), poison in baby food, imaginative fine-print on financial papers and “EULAs”, substandard components and shoddy workmanship in critical machinery — in every facet of our modern life. Given such a backdrop, how do we know that the “organic” produce, though we pay four times as much for it, is any different from the normal produce? To put it all down to faceless corporate greed, as most of us tend to do, is a bit simplistic. Going one step further to see our own collective greed in the corporate behavior (as I proudly did a couple of times) is also perhaps trivial. What are corporates these days, if not collections of people like you and me?
There is something deeper and more troubling in all this. I have some disjointed thoughts, and will try to write them up in an ongoing series. I suspect these thoughts of mine are going to sound similar to the luddite ones un-popularized by the infamous Unabomber. His idea was that our normal animalistic instincts of the hunter-gatherer kind are being stifled by the modern societies we have developed into. And, in his view, this unwelcome transformation and the consequent tension and stress can be countered only by an anarchical destruction of the propagators of our so-called development — namely, universities and other technology generators. Hence the bombing of innocent professors and such.
Clearly, I don’t agree with this luddite ideology, for if I did, I would have to first bomb myself! I’m nursing a far less destructive line of thought. Our technological advances and their unintended backlashes, with ever-increasing frequency and amplitude, remind me of something that fascinated my geeky mind — the phase transition between structured (laminar) and chaotic (turbulent) states in physical systems (when flow rates cross a certain threshold, for instance). Are we approaching such a threshold of phase transition in our social systems and societal structures? In my moody luddite moments, I feel certain that we are.
Featuring Paul Wilmott, Espen Haug and Manoj Thulasidas
PLEASE JOIN US FOR THIS FREE WEBINAR PRESENTED BY FINCAD AND WILEY GLOBAL FINANCE
How do you identify, measure and model risk, and more importantly, what changes need to be implemented to improve the long-term profitability and sustainability of our financial institutions? Take a unique opportunity to join globally recognised and respected experts in the field, Paul Wilmott, Espen Haug and Manoj Thulasidas in a free, one hour online roundtable discussion to debate the key issues and to find answers to questions to improve financial risk modelling.
Join our experts as they address these fundamental financial risk questions:
- What is risk?
- How do we measure and quantify risk in quantitative finance? Is this effective?
- Is it possible to model risk?
- Define innovation in risk management. Where does it take place? Where should it take place?
- How do new ideas see the light of day? How are they applied to the industry, and how should they be applied?
- How is risk management implemented in modern investment banking? Is there a better way?
Our panel of internationally respected experts includes Dr Paul Wilmott, founder of the prestigious Certificate in Quantitative Finance (CQF) and Wilmott.com, Editor-in-Chief of Wilmott Magazine, and author of highly acclaimed books including the best-selling Paul Wilmott On Quantitative Finance; Dr Espen Gaarder Haug, who has more than 20 years of experience in derivatives research and trading and is author of The Complete Guide to Option Pricing Formulas and Derivatives: Models on Models; and Dr Manoj Thulasidas, a physicist-turned-quant who works as a senior quantitative professional at Standard Chartered Bank in Singapore and is author of Principles of Quantitative Development.
This debate will be critical for all chief risk officers, credit and market risk managers, asset liability managers, financial engineers, front office traders, risk analysts, quants and academics.
Despite the richness that mathematics imparts to life, it remains a hated and difficult subject to many. I feel that the difficulty stems from the early and often permanent disconnect between math and reality. It is hard to memorize that the reciprocals of bigger numbers are smaller, while it is fun to figure out that if you had more people sharing a pizza, you get a smaller slice. Figuring out is fun, memorizing — not so much. Mathematics, being a formal representation of the patterns in reality, doesn’t put too much emphasis on the figuring out part, and it is plain lost on many. To repeat that statement with mathematical precision — math is syntactically rich and rigorous, but semantically weak. Syntax can build on itself, and often shake off its semantic riders like an unruly horse. Worse, it can metamorphose into different semantic forms that look vastly different from one another. It takes a student a few years to notice that complex numbers, vector algebra, coordinate geometry, linear algebra and trigonometry are all essentially different syntactical descriptions of Euclidean geometry. Those who excel in mathematics are, I presume, the ones who have developed their own semantic perspectives to rein in the seemingly wild syntactical beast.
Physics also can provide beautiful semantic contexts to the empty formalisms of advanced mathematics. Look at Minkowski space and Riemannian geometry, for instance, and how Einstein turned them into descriptions of our perceived reality. In addition to providing semantics to mathematical formalism, science also promotes a worldview based on critical thinking and a ferociously scrupulous scientific integrity. It is an attitude of examining one’s conclusions, assumptions and hypotheses mercilessly to convince oneself that nothing has been overlooked. Nowhere is this nitpicking obsession more evident than in experimental physics. Physicists report their measurements with two sets of errors — a statistical error representing the fact that they have made only a finite number of observations, and a systematic error that is supposed to account for the inaccuracies in methodology, assumptions etc.
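The two sets of errors can be illustrated with a small sketch. The readings and the instrument uncertainty below are invented numbers, chosen only to show how a physicist would quote a result.

```python
import math

def mean_and_stat_error(samples):
    """Statistical error: the standard error of the mean, which
    shrinks as 1/sqrt(N) as more observations are made."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var / n)

# Hypothetical repeated measurements of some physical quantity
readings = [9.78, 9.82, 9.80, 9.79, 9.81]
mean, stat_err = mean_and_stat_error(readings)
syst_err = 0.05  # assumed calibration uncertainty, quoted separately
print(f"{mean:.3f} +/- {stat_err:.3f} (stat) +/- {syst_err:.2f} (syst)")
```

The point of quoting the two errors separately is that only the first one can be beaten down by taking more data; the second survives any number of repetitions.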
We may find it interesting to look at the counterpart of this scientific integrity in our neck of the woods — quantitative finance, which decorates the syntactical edifice of stochastic calculus with dollar-and-cents semantics, of a kind that ends up in annual reports and generates performance bonuses. One might even say that it has a profound impact on the global economy as a whole. Given this impact, how do we assign errors and confidence levels to our results? To illustrate it with an example, when a trading system reports the P/L of a trade as, say, seven million, is it $7,000,000 +/- $5,000,000 or is it $7,000,000 +/- $5,000? The latter, clearly, holds more value for the financial institution and should be rewarded more than the former. We are aware of it. We estimate the errors in terms of the volatility and sensitivities of the returns and apply P/L reserves. But how do we handle other systematic errors? How do we measure the impact of our assumptions on market liquidity, information symmetry etc., and assign dollar values to the resulting errors? If we had been scrupulous about error propagation like this, perhaps the financial crisis of 2008 would not have come about.
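As a sketch of how a volatility-based error band might be attached to a reported P/L: the notional and volatility below are invented numbers, picked only to reproduce the seven-million-plus-or-minus-five-million case in the example.

```python
import math

def pl_with_error(pl, notional, daily_vol, horizon_days=1):
    """Attach a one-sigma error band to a reported P/L, treating the
    return volatility of the position as its statistical uncertainty.
    A toy view, ignoring sensitivities and systematic effects."""
    err = notional * daily_vol * math.sqrt(horizon_days)
    return pl, err

# Hypothetical trade: 500M notional with 1% daily return volatility
pl, err = pl_with_error(7e6, 500e6, 0.01)
print(f"P/L: {pl:,.0f} +/- {err:,.0f}")
```

Of course, this only captures the statistical part; the systematic errors from liquidity and information assumptions, as the paragraph above notes, have no such easy formula.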
Although mathematicians are, in general, free of such critical self-doubts as physicists — precisely because of a total disconnect between their syntactical wizardry and its semantic contexts, in my opinion — there are some who take the validity of their assumptions almost too seriously. I remember this professor of mine who taught us mathematical induction. After proving some minor theorem using it on the blackboard (yes, it was before the era of whiteboards), he asked us whether he had proved it. We said, sure, he had done it right in front of us. He then said, “Ah, but you should ask yourselves if mathematical induction is right.” If I think of him as a great mathematician, it is perhaps only because of the common romantic fancy of ours that glorifies our past teachers. But I am fairly certain that the recognition of the possible fallacy in my glorification is a direct result of the seeds he planted with his statement.
My professor may have taken this self-doubt business too far; it is perhaps not healthy or practical to question the very backdrop of our rationality and logic. What is more important is to ensure the sanity of the results we arrive at, employing the formidable syntactical machinery at our disposal. The only way to maintain an attitude of healthy self-doubt and the consequent sanity checks is to jealously guard the connection between the patterns of reality and the formalisms in mathematics. And that, in my opinion, would be the right way to develop a love for math as well.
Most kids love patterns. Math is just patterns. So is life. Math, therefore, is merely a formal way of describing life, or at least the patterns we encounter in life. If the connection between life, patterns and math can be maintained, it follows that kids should love math. And love of math should generate an analytic ability (or what I would call a mathematical ability) to understand and do most things well. For instance, I wrote of a connection “between” three things a couple of sentences ago. I know that it has to be bad English because I see three vertices of a triangle and then one connection doesn’t make sense. A good writer would probably put it better instinctively. A mathematical writer like me would realize that the word “between” is good enough in this context — the subliminal jar on your sense of grammar that it creates can be compensated for or ignored in casual writing. I wouldn’t leave it standing in a book or a published column (except this one, because I want to highlight it).
My point is that it is my love for math that lets me do a large number of things fairly well. As a writer, for instance, I have done rather well. But I attribute my success to a certain mathematical ability rather than literary talent. I would never start a book with something like, “It was the best of times, it was the worst of times.” As an opening sentence, by all the mathematical rules of writing I have formulated for myself, this one just doesn’t measure up. Yet we all know that Dickens’s opening, following no rules of mine, is perhaps the best in English literature. I will probably cook up something similar someday because I see how it summarizes the book, and highlights the disparity between the haves and the have-nots mirrored in the contrasting lead characters and so on. In other words, I see how it works and may assimilate it into my cookbook of rules (if I can ever figure out how), and the process of assimilation is mathematical in nature, especially when it is a conscious effort. Similar fuzzy rule-based approaches can help you be a reasonably clever artist, employee, manager or anything that you set your sights on, which is why I once bragged to my wife that I could learn Indian classical music despite the fact that I am practically tone-deaf.
So loving math is probably a good thing, in spite of its apparent disadvantage vis-a-vis cheerleaders. But I am yet to address my central theme — how do we actively encourage and develop a love for math among the next generation? I am not talking about making people good at math; I’m not concerned with teaching techniques per se. I think Singapore already does a good job with that. But to get people to like math the same way they like, say, their music or cars or cigarettes or football takes a bit more imagination. I think we can accomplish it by keeping the underlying patterns in the foreground. So instead of telling my children that 1/4 is bigger than 1/6 because 4 is smaller than 6, I say to them, “You order one pizza for some kids. Do you think each will get more if we had four kids or six kids sharing it?”
From my earlier example on geographic distances and degrees, I fancy my daughter will one day figure out that each degree (or about 100km — corrected by 5% and 6%) means four minutes of jet lag. She might even wonder why 60 appears in degrees and minutes and seconds, and learn something about number system bases and so on. Mathematics really does lead to a richer perspective on life. All it takes on our part is perhaps only to share the pleasure of enjoying this richness. At least, that’s my hope.
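The four-minutes-per-degree figure is just the Earth’s rotation rate, and is easy to verify (the jet-lag framing and the 90-degree trip are my own illustrative numbers):

```python
def jetlag_minutes(longitude_difference_deg):
    """Solar-time shift for a given longitude difference: the Earth
    turns 360 degrees in 24 hours, i.e. 24*60/360 = 4 min per degree."""
    return longitude_difference_deg * 24 * 60 / 360

# Each degree of longitude is four minutes of solar time
print(jetlag_minutes(1))       # 4.0
# A hypothetical trip spanning 90 degrees shifts the clock by six hours
print(jetlag_minutes(90) / 60)  # 6.0
```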
If you love math, you are a geek — with stock options in your future, but no cheerleaders. So getting a child to love mathematics is a questionable gift — are we really doing them a favor? Recently, a highly placed friend of mine asked me to look into it — not merely as getting a couple of kids interested in math, but as a general educational effort in the country. Once it becomes a general phenomenon, math whizkids might enjoy the same level of social acceptance and popularity as, say, athletes and rock stars. Wishful thinking? Maybe…
I was always among people who liked math. I remember my high school days, where one of my friends would do the long multiplication and division during physics experiments, while I would team up with another friend to look up logarithms and try to beat the first dude, who almost always won. It didn’t really matter who won; the mere fact that we would devise games like that as teenagers perhaps portended a cheerleader-less future. As it turned out, the long-multiplication guy grew up to be a highly placed banker in the Middle East, no doubt thanks to his talents of the cheerleader-phobic, math-philic kind.
When I moved to IIT, this mathematical geekiness reached a whole new level. Even among the general geekiness that permeated the IIT air, I remember a couple of guys who stood out. There was “Devious,” who also had the dubious honor of introducing me to my virgin Kingfisher, and “Pain,” who would drawl a very pained “Obviously Yaar!” when we, the lesser geeks, failed to readily follow his particular line of mathematical acrobatics.
All of us had a love for math. But where did it come from? And how in the world would I make it a general educational tool? Imparting the love of math to one kid is not too difficult; you just make it fun. The other day when I was driving around with my daughter, she described some shape (actually the bump on her grandmother’s forehead) as half-a-ball. I told her that it was actually a hemisphere. Then I highlighted to her that we were going to the southern hemisphere (New Zealand) for our vacation the next day, on the other side of the globe compared to Europe, which was why it was summer there. And finally, I told her Singapore was on the equator. My daughter likes to correct people, so she said, no, it wasn’t. I told her that we were about 0.8 degrees to the north of the equator (I hope I was right), and saw my opening. I asked her what the circumference of a circle was, told her that the radius of the earth was about 6000km, and worked out that we were about 80km to the north of the equator, which was nothing compared to the 36,000km great circle around the earth. Then we worked out that we had made a 5% approximation on the value of pi, so the correct number was about 84km. I could have told her that we had made another 6% approximation on the radius, and the number would be more like 90km. It was fun for her to work out these things. I fancy her love for math has been augmented a bit.
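The back-of-the-envelope arithmetic from that car ride checks out in a few lines. The rounder figures are the ones we used in the car; 6371km is the usual mean Earth radius.

```python
import math

def arc_km(radius_km, pi, degrees):
    """Great-circle arc length for a given central angle."""
    return 2 * pi * radius_km * degrees / 360

rough   = arc_km(6000, 3, 0.8)        # in-the-car numbers: 80 km
with_pi = arc_km(6000, math.pi, 0.8)  # undoing the 5% pi shortcut: ~84 km
better  = arc_km(6371, math.pi, 0.8)  # undoing the radius shortcut too: ~89 km
```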
We know that our universe is a bit unreal. The stars we see in the night sky, for instance, are not really there. They may have moved or even died by the time we get to see them. It takes time for light to travel from the distant stars and galaxies to reach us. We know of this delay. The sunlight we see now is already eight minutes old, which is not a big deal. If we want to know what is going on at the sun right now, all we have to do is wait for eight minutes. Nonetheless, we do have to “correct” for the delay in our perception due to the finite speed of light before we can trust what we see.
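The eight-minute figure itself is a one-line computation from the mean Earth-Sun distance and the speed of light:

```python
AU_KM = 149_597_871      # mean Earth-Sun distance in km (one AU)
C_KM_S = 299_792.458     # speed of light in km/s

# Light travel time from the Sun to the Earth
delay_min = AU_KM / C_KM_S / 60
print(f"{delay_min:.1f} minutes")  # ~8.3 minutes
```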
Now, this effect raises an interesting question — what is the “real” thing that we see? If seeing is believing, the stuff that we see should be the real thing. Then again, we know of the light travel time effect. So we should correct what we see before believing it. What then does “seeing” mean? When we say we see something, what do we really mean?
Seeing involves light, obviously. It is the finite (albeit very high) speed of light that influences and distorts the way we see things, as in the delay in seeing objects like stars. What is surprising (and seldom highlighted) is that when it comes to seeing moving objects, we cannot back-calculate the same way we take out the delay in seeing the sun. If we see a celestial body moving at an improbably high speed, we cannot figure out how fast and in what direction it is “really” moving without making further assumptions. One way of handling this difficulty is to ascribe the distortions in our perception to the fundamental properties of the arena of physics — space and time. Another course of action is to accept the disconnection between our perception and the underlying “reality” and deal with it in some way.
This disconnect between what we see and what is out there is not unknown to many philosophical schools of thought. Phenomenalism, for instance, holds the view that space and time are not objective realities. They are merely the medium of our perception. All the phenomena that happen in space and time are merely bundles of our perception. In other words, space and time are cognitive constructs arising from perception. Thus, all the physical properties that we ascribe to space and time can only apply to the phenomenal reality (the reality as we sense it). The noumenal reality (which holds the physical causes of our perception), by contrast, remains beyond our cognitive reach.
One, almost accidental, difficulty in redefining the effects of the finite speed of light as the properties of space and time is that any effect that we do understand gets instantly relegated to the realm of optical illusions. For instance, the eight-minute delay in seeing the sun, because we can readily understand it and disassociate it from our perception using simple arithmetic, is considered a mere optical illusion. However, the distortions in our perception of fast moving objects, although originating from the same source, are considered a property of space and time because they are more complex. At some point, we have to come to terms with the fact that when it comes to seeing the universe, there is no such thing as an optical illusion, which is probably what Goethe pointed out when he said, “Optical illusion is optical truth.”
The distinction (or lack thereof) between optical illusion and truth is one of the oldest debates in philosophy. After all, it is about the distinction between knowledge and reality. Knowledge is considered our view about something that, in reality, is “actually the case.” In other words, knowledge is a reflection, or a mental image of something external. In this picture, the external reality goes through a process of becoming our knowledge, which includes perception, cognitive activities, and the exercise of pure reason. This is the picture that physics has come to accept. While acknowledging that our perception may be imperfect, physics assumes that we can get closer and closer to the external reality through increasingly finer experimentation, and, more importantly, through better theorization. The Special and General Theories of Relativity are examples of brilliant applications of this view of reality where simple physical principles are relentlessly pursued using the formidable machine of pure reason to their logically inevitable conclusions.
But there is another, competing view of knowledge and reality that has been around for a long time. This is the view that regards perceived reality as an internal cognitive representation of our sensory inputs. In this view, knowledge and perceived reality are both internal cognitive constructs, although we have come to think of them as separate. What is external is not the reality as we perceive it, but an unknowable entity giving rise to the physical causes behind sensory inputs. In this school of thought, we build our reality in two, often overlapping, steps. The first step consists of the process of sensing, and the second one is that of cognitive and logical reasoning. We can apply this view of reality and knowledge to science, but in order to do so, we have to guess the nature of the absolute reality, unknowable as it is.
The ramifications of these two different philosophical stances described above are tremendous. Since modern physics has embraced a non-phenomenalistic view of space and time, it finds itself at odds with that branch of philosophy. This chasm between philosophy and physics has grown to such a degree that the Nobel prize winning physicist, Steven Weinberg, wondered (in his book “Dreams of a Final Theory”) why the contribution from philosophy to physics has been so surprisingly small. It also prompts philosophers to make statements like, “Whether ‘noumenal reality causes phenomenal reality’ or whether ‘noumenal reality is independent of our sensing it’ or whether ‘we sense noumenal reality,’ the problem remains that the concept of noumenal reality is a totally redundant concept for the analysis of science.”
From the perspective of cognitive neuroscience, everything we see, sense, feel and think is the result of the neuronal interconnections in our brain and the tiny electrical signals in them. This view must be right. What else is there? All our thoughts and worries, knowledge and beliefs, ego and reality, life and death — everything is merely neuronal firings in the one and a half kilograms of gooey, grey material that we call our brain. There is nothing else. Nothing!
In fact, this view of reality in neuroscience is an exact echo of phenomenalism, which considers everything a bundle of perception or mental constructs. Space and time are also cognitive constructs in our brain, like everything else. They are mental pictures our brains concoct out of the sensory inputs that our senses receive. Generated from our sensory perception and fabricated by our cognitive process, the space-time continuum is the arena of physics. Of all our senses, sight is by far the dominant one. The sensory input to sight is light. In a space created by the brain out of the light falling on our retinas (or on the photo sensors of the Hubble telescope), is it a surprise that nothing can travel faster than light?
This philosophical stance is the basis of my book, The Unreal Universe, which explores the common threads binding physics and philosophy. Such philosophical musings usually get a bad rap from us physicists. To physicists, philosophy is an entirely different field, another silo of knowledge, which holds no relevance to their endeavors. We need to change this belief and appreciate the overlap among different knowledge silos. It is in this overlap that we can expect to find great breakthroughs in human thought.
The twist to this story of light and reality is that we seem to have known all this for a long time. Classical philosophical schools seem to have thought along lines very similar to Einstein’s reasonings. The role of light in creating our reality or universe is at the heart of Western religious thinking. A universe devoid of light is not simply a world where you have switched off the lights. It is indeed a universe devoid of itself, a universe that doesn’t exist. It is in this context that we have to understand the wisdom behind the statement that “the earth was without form, and void” until God caused light to be, by saying “Let there be light.”
The Quran also says, “Allah is the light of the heavens and the earth,” which is mirrored in one of the ancient Hindu writings: “Lead me from darkness to light, lead me from the unreal to the real.” The role of light in taking us from the unreal void (the nothingness) to a reality was indeed understood for a long, long time. Is it possible that the ancient saints and prophets knew things that we are only now beginning to uncover with all our supposed advances in knowledge?
I know I may be rushing in where angels fear to tread, for reinterpreting the scriptures is a dangerous game. Such alien interpretations are seldom welcome in the theological circles. But I seek refuge in the fact that I am looking for concurrence in the metaphysical views of spiritual philosophies, without diminishing their mystical and theological value.
The parallels between the noumenal-phenomenal distinction in phenomenalism and the Brahman-Maya distinction in Advaita are hard to ignore. This time-tested wisdom on the nature of reality from the repertoire of spirituality is now being reinvented in modern neuroscience, which treats reality as a cognitive representation created by the brain. The brain uses the sensory inputs, memory, consciousness, and even language as ingredients in concocting our sense of reality. This view of reality, however, is something physics is yet to come to terms with. But to the extent that its arena (space and time) is a part of reality, physics is not immune to philosophy.
As we push the boundaries of our knowledge further and further, we are beginning to discover hitherto unsuspected and often surprising interconnections between different branches of human efforts. In the final analysis, how can the diverse domains of our knowledge be independent of each other when all our knowledge resides in our brain? Knowledge is a cognitive representation of our experiences. But then, so is reality; it is a cognitive representation of our sensory inputs. It is a fallacy to think that knowledge is our internal representation of an external reality, and therefore distinct from it. Knowledge and reality are both internal cognitive constructs, although we have come to think of them as separate.
Recognizing and making use of the interconnections among the different domains of human endeavor may be the catalyst for the next breakthrough in our collective wisdom that we have been waiting for.
The financial crisis was a veritable gold mine for columnists like me. I, for one, published at least five articles on the subject, including its causes, the lessons learned, and, most self-deprecating of all, our excesses that contributed to it.
Looking back at these writings of mine, I feel as though I may have been a bit unfair to us. I did try to blunt my accusations of avarice (and perhaps decadence) by pointing out that it was the general air of insatiable greed of the era we live in that spawned the obscenities and the likes of Madoff. But I did concede the existence of a higher level of greed (or, more to the point, a more sated kind of greed) among us bankers and quantitative professionals. I am not recanting my words in this piece now, but I want to point out another aspect, a justification if not an absolution.
Why would I want to defend bonuses and other excesses when another wave of public hatred is washing over the global corporations, thanks to the potentially unstoppable oil spill? Well, I guess I am a sucker for lost causes, much like Rhett Butler, for our tranquil quant way of life with its insane bonuses is now all but gone with the wind. Unlike Mr. Butler, however, I have to battle and debunk my own arguments presented here previously.
One of the arguments that I wanted to poke holes in was the fair-compensation angle. It was argued in our circles that the fat paycheck was merely adequate compensation for the long hours of hard work that people in our line of work put in. I quashed it, I think, by pointing to other thankless professions where people work harder and longer with no rewards to write home about. Hard work has no correlation with what one is entitled to. The second argument that I made fun of was the ubiquitous "talent" angle. At the height of the financial crisis, it was easy to laugh off the talent argument. Besides, there was little demand for the talent and a lot of supply, so the basic principles of economics could apply, as our cover story shows in this issue.
Of all the arguments for large compensation packages, the most convincing one was the profit-sharing one. When the top talents take huge risks and generate profit, they need to be given a fair share of the loot. Otherwise, where is the incentive to generate even more profits? This argument lost a bit of its bite when the negative profits (by which I indeed mean losses) needed to be subsidized. This whole saga reminded me of something that Scott Adams once said of risk takers. He said that risk takers, by definition, often fail. So do morons. In practice, it is hard to tell them apart. Should the morons reap handsome rewards? That is the question.
Having said all this in my previous articles, now it is time to find some arguments in our defense. I left out one important argument in my previous columns because it did not support my general thesis — that the generous bonuses were not all that justifiable. Now that I have switched allegiance to the lost cause, allow me to present it as forcefully as I can. In order to see compensation packages and performance bonuses in a different light, we first look at any traditional brick-and-mortar company. Let’s consider a hardware manufacturer, for instance. Suppose this hardware shop of ours does extremely well one year. What does it do with the profit? Sure, the shareholders take a healthy bite out of it in terms of dividends. The employees get decent bonuses, hopefully. But what do we do to ensure continued profitability?
We could perhaps see employee bonuses as an investment in future profitability. But the real investment in this case is much more physical and tangible than that. We could invest in hardware manufacturing machinery and technology, improving productivity for years to come. We could even invest in research and development, if we subscribe to a longer temporal horizon.
Looking along these lines, we might ask ourselves what the corresponding investment would be for a financial institution. How exactly do we reinvest so that we can reap benefits in the future?
We can think of better buildings, computer and software technologies, etc. But given the scale of the profits involved, and the cost and benefit of such incremental improvements, these investments don't measure up. Somehow, the impact of these tiny investments on the performance of a financial institution is far less impressive than it would be for a brick-and-mortar company. The reason behind this phenomenon is that the "hardware" we are dealing with (in the case of a financial institution) is really human resources — people — you and me. So the only sensible reinvestment option is in people.
So we come to the next question — how do we invest in people? We could use any number of euphemistic epithets, but at the end of the day, it is the bottom line that counts. We invest in people by rewarding them. Monetarily. Money talks. We can dress it up by saying that we are rewarding performance, sharing profits, retaining talent, etc. But ultimately, it all boils down to ensuring future productivity, much like our hardware shop buying a fancy new piece of equipment.
Now the last question has to be asked. Who is doing the investing? Who benefits when the productivity (whether current or future) goes up? The answer may seem too obvious at first glance — it is clearly the shareholders, the owners of the financial institution, who will benefit. But nothing is black and white in the murky world of global finance. The shareholders are not merely a bunch of people holding pieces of paper attesting to their ownership. There are institutional investors, who mostly work for other financial institutions. They are people who move large pots of money from pension funds, bank deposits, and the like. In other words, it is the common man's nest egg, whether or not explicitly linked to equities, that buys and sells the shares of large public companies. And it is the common man who benefits from the productivity improvements brought about by investments such as technology purchases or bonus payouts. At least, that is the theory.
This distributed ownership, the hallmark of capitalism, raises some interesting questions, I think. When a large oil company drills an unstoppable hole in the seabed, we find it easy to direct our ire at its executives, looking at their swanky jets and other unconscionable luxuries they allow themselves. Aren’t we conveniently forgetting the fact that all of us own a piece of the company? When the elected government of a democratic nation declares war on another country and kills a million people (speaking hypothetically, of course), should the culpa be confined to the presidents and generals, or should it percolate down to the masses that directly or indirectly delegated and entrusted their collective power?
More to the point, when a bank doles out huge bonuses, isn’t it a reflection of what all of us demand in return for our little investments? Viewed in this light, is it wrong that the taxpayers ultimately had to pick up the tab when everything went south? I rest my case.
We Singaporeans have a problem. We are graceless, they say. So we train ourselves to say the right magic words at the right times and to smile at random intervals. We still come across as a bit graceless at times.
We have to bite the bullet and face the music; we may be a bit on the rude side — when judged by the western norms of plasticky grace popularized by the media. But we don't do too badly when judged by our own mixed bag of Asian cultures, some of which consider the phrase "Thank you" so formal that it is almost an insult to utter it.
One of the Asian ways of doing things is to eat noodles like a mini vacuum cleaner. This Singaporean friend of mine was doing just that while lunching with me and our French colleague. I hardly noticed the small noises; after all, I’m from a culture where loud burps at the end of a meal are considered a compliment to the host. But our French friend found the suction action very rude and irksome, and made French comments to that effect (ignoring, of course, the fact that it is rude to exclude people by talking in a private language). I tried to explain to him that it was not rude, just the way it was done here, but to no avail.
The real question is this — do we paint a thin veneer of politeness over our natural way of doing things so that we can exude grace a la Hollywood? The thinness of this kind of grace echoes loud and clear in the standard greeting of a checkout clerk in a typical American supermarket: "How' ya doing today?" The expected response is: "Good, how are you?" to which the clerk is to say, "Good, good!" The first "Good" presumably responds to your graceful enquiry after his well-being, the second expresses satisfaction at your perfect state of bliss. I once decided to play the fool and responded to the ubiquitous "How' ya doin'?" with: "Lousy, man, my dog just died." The inevitable and unhesitating response was, "Good, good!" Do we need this kind of shallow grace?
Grace is like the grammar of an unspoken social language. Unlike its spoken counterparts, the language of social mores seems to preclude multilingualism, leading to an almost xenophobic rejection of other norms of life. We all believe that our way of doing things and our world views are the only right ones. Naturally too, otherwise we wouldn’t hold on to our beliefs, would we? But, in an increasingly flattening and globalizing world, we do feel a bit alien because our values and graces are often graded by alien standards.
Soon, a day will come when we all conform to the standards prescribed to us by the global media and entertainment networks. Our amorphous “How’ ya doin’?”s and “Good, good”s will then be indistinguishable from the prescriptions.
When I think of that inevitable day, I suffer a pang of nostalgia. I hope I can hold on to the memory of social graces judged by lesser standards — of gratitude expressed in timid smiles, affections portrayed in fleeting glances, and life’s defining bonds conveyed in unspoken gestures.
Ultimately, the collective grace of a society is to be judged, not by its polished niceties, but by how it treats its very old and very young. And I'm afraid we are beginning to find ourselves wanting on those fronts. We put our young children through a tremendous amount of stress, preparing them for an even more stressful life, and unwittingly robbing them of their childhood.
And, when I see those aunties and uncles cleaning up after us in eating houses, I see more than our lack of grace. I see myself in my twilight years, alienated in a world gone strange on me. So let's spare a smile, and nod a thank you when we see them — we may be showing grace to ourselves a few decades down the line.