To err is human, but to really foul things up, you need a computer. So states the remarkably insightful Murphy’s Law. And nowhere does this ring truer than in our financial workplace. After all, it is the financial sector that drove the rapid progress in the computing industry — which is why the first computing giant had the word “business” in its name.
The financial industry keeps up with the developments in the computer industry for one simple reason. Stronger computers and smarter programs mean more money — a concept we readily grasp. As we use the latest and greatest in computer technology and pour money into it, we fuel further developments in the computing field. In other words, not only did we start the fire, we actively fan it as well. But it is not a bad fire; the positive feedback loop that we helped set up has served both industries well.
This inter-dependency, healthy as it is, gives us nightmarish visions of perfect storms and dire consequences. Computers being the perfect tools for completely fouling things up, our troubling nightmares are more justified than we care to admit.
Models vs. Systems
Paraphrasing a deadly argument that some gun aficionados make, I will defend our addiction to information technology. Computers don’t foul things up; people do.
Mind you, I am not implying that we always mess it up when we deploy computers. But at times, we try to massage our existing processes into their computerised counterparts, creating multiple points of failure. The right approach, instead, is often to redesign the processes so that they can take advantage of the technology. But it is easier said than done. To see why, we have to look beyond systems and processes and focus on the human factors.
In a financial institution, we are in the business of making money. We fine-tune our reward structure in such a way that our core business (of making money, that is) runs as smoothly as possible. Smooth operation relies on strict adherence to processes and the underlying policies they implement. In this rigid structure, there is little room for visionary innovation.
This structural lack of incentive to innovate results in staff hurrying through a new system rollout or a process re-engineering. They have neither the luxury of time nor the freedom to slack off in the dreaded “business-as-usual” to do a thorough job of such “non-essential” things.
Besides, there is seldom any spare human resource to deploy in studying and improving processes so that they can better exploit technology. The people who do this work need multifaceted capabilities (business and computing, for instance). Being costly, they are more optimally deployed in the core business of making more money.
Think about it: when was the last time you (or someone you know) got hired to revamp a system and the associated processes? The closest you get is when someone is hired to duplicate a system that is already known to work better elsewhere.
The lack of incentive results in a dearth of thought and care invested in the optimal use of technology. Suboptimal systems (which do one thing well at the cost of everything else) abound in our workplace. In time, we will reach a point where we have to bite the bullet and redesign these systems. When redesigning a system, we have to think about all the processes involved. And we have to think about the system while designing or redesigning processes. This cyclic dependence is the theme of this article.
Systems do not figure among a quant’s immediate concerns. What concerns us more is our strongest value-add, namely mathematical modelling. In order to come up with an optimal deployment strategy for models, however, we need to pay attention to operational issues like trade workflow.
I was talking to one of our top traders the other day, and he mentioned that a quant, no matter how smart, is useless unless his work can be deployed effectively and in a timely manner. A quant typically delivers his work as a C++ program. In a rapid deployment scenario, his program will have to plug directly into a system that will manage trade booking, risk measurements, operations and settlement. The need for rapid deployment makes it essential for the quants to understand the trade lifecycle and business operations.
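To make the “plug directly into a system” idea concrete, here is a minimal sketch of the kind of narrow interface a quant’s C++ pricing code might implement so that the trading system can call it for booking, risk and settlement alike. The names (`PricingModel`, `MarketSnapshot`, `ForwardModel`) are purely illustrative assumptions, not taken from any real system.

```cpp
#include <cmath>

// Illustrative market inputs the system would supply at valuation time.
struct MarketSnapshot {
    double spot;   // underlying price
    double vol;    // implied volatility
    double rate;   // risk-free rate
};

// The plug-in contract: one valuation entry point the system invokes
// at trade booking, in end-of-day risk runs, and at settlement.
class PricingModel {
public:
    virtual ~PricingModel() = default;
    virtual double price(const MarketSnapshot& mkt) const = 0;
};

// A deliberately trivial implementation, just to show the plug-in
// shape: a forward priced as spot compounded at the risk-free rate.
class ForwardModel : public PricingModel {
public:
    explicit ForwardModel(double expiry) : expiry_(expiry) {}
    double price(const MarketSnapshot& mkt) const override {
        return mkt.spot * std::exp(mkt.rate * expiry_);
    }
private:
    double expiry_;  // time to delivery, in years
};
```

A new product then means writing one more subclass, rather than rewiring the booking, risk and settlement paths by hand.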
Life of a Trade
Once a quant figures out how to price a new product, his work is basically done. After coaxing that stochastic integral into a pricing formula (failing which, a Crank-Nicolson or Monte Carlo), the quant writes up a program and moves on to the next challenge.
It is when the trading desk picks up the pricing spreadsheet and books the first trade into the system that the fun begins. Then the trade takes on a life of its own, sneaking through various departments and systems, showing different strokes to different folks. This adventurous biography of the trade is depicted in Figure 1 in its simplified form.
At the inception stage, a trade is conceptualized by the Front Office folks (sales, structuring, trading desk – shown in yellow ovals in the figure). They study the market need and potential, and assess the trade viability. Once they see and grab a market opportunity, a trade is born.
Even with the best of quant models, a trade cannot be priced without market data: prices, volatilities, rates, correlations and so on. The validity of the market data is ensured by Product Control