Tag Archives: computing

Missing Events and Photos in iPhoto?

Let me guess – you got your new iMac. You had a recent Time Machine backup on your Time Capsule. Setting up the new iMac was ridiculously easy — just point to the backup. A few hours later, your new iMac is just like your old Mac, right down to the wallpaper and browser history. You shake your head in disbelief and say to yourself, “Man, this thing just works! This is the way it is supposed to be!”

A couple of days later, you fire up your iPhoto. It says it needs to update the database or whatever. No sweat. Just a couple of minutes — the new iMac is ridiculously fast. Hullo — what is wrong with the last four events? How come they have no photos in them? Well, actually, they do have something, you can see the thumbnails for a second, and then they disappear. The events seem to have the right number of photos. They even list the camera model and exposure data.

You scratch your head and say to yourself, “Well, maybe the Time Machine backup didn’t unpack properly or whatever. Maybe the version upgrade messed up some data. No sweat. I can use the Time Machine and find the right iPhoto Library.” You fire up the Time Machine — probably for the first time for real. You restore the last good backup of the iPhoto Library to your desktop, and launch iPhoto again. Database update again. Anxious wait. Hey, the damned events are still missing.

Panic begins to set in. Mad Google for answers. Ok, hold down the Option and Command keys, and launch iPhoto. Regenerate thumbnails. Repair the library. Rebuild the Database. Still, the ****** events refuse to come back.

How do I know all this? Because this is exactly what I did. I was lucky though. I managed to recover the events. It dawned on me that the problem was not with the restore process, nor with the version update of iPhoto. It was the Time Machine backup process — the backup was incomplete. I had the old Mac and the old iPhoto library intact. So I copied the old library over to the new iMac (directly, over the network; not from the Time Machine backup). I then started iPhoto on the new machine. After the necessary database update, all the events and photos showed up. Phew!

So what exactly went wrong? It appears that Time Machine doesn’t back up the iPhoto Library properly if iPhoto is open (according to Apple). More precisely, the recently imported photos and events may not get backed up. This bug (or “feature”) was reported earlier and discussed in detail.

I thought I would share my experience here because it is an important piece of information and might save somebody some time, and possibly some valuable photos. And I feel it is disingenuous of Apple to tout the Time Machine as the mother of all backup solutions with this glaring bug. After all, your photos are among the most precious of your data. If they are not backed up and migrated properly, why bother with Time Machine at all?

To recap:

  1. If you find your photo collection incomplete after migrating to your shiny new iMac (using a Time Machine backup), don’t panic if you still have your old Mac.
  2. Exit gracefully from iPhoto on both the machines.
  3. Copy your old iPhoto Library from the old Mac over to the new one, directly over the network (not from the Time Machine backup).
  4. Restart iPhoto on the new Mac and enjoy.
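If you want to script step 3, the copy might look like this in Python. The network-share path is purely hypothetical — it depends on how the old Mac's home folder is mounted — and you should make sure iPhoto is not running on either machine first:

```python
import shutil
from pathlib import Path

# Hypothetical mount point for the old Mac's home folder, shared over
# the network, and the standard iPhoto Library location on the new Mac.
src = Path("/Volumes/OldMacHome/Pictures/iPhoto Library")
dst = Path.home() / "Pictures" / "iPhoto Library"

def copy_library(src: Path, dst: Path) -> None:
    """Copy the whole library bundle, refusing to clobber an existing one."""
    if dst.exists():
        raise FileExistsError(f"Move the existing library aside first: {dst}")
    shutil.copytree(src, dst)  # copies every file inside the bundle
```

The refusal to overwrite is deliberate: you want to keep the broken library around until you have confirmed the copied one opens cleanly.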

How to prevent this from happening

Before the final Time Machine backup from your old Mac, ensure that iPhoto is not running. In fact, it may be worth exiting from all applications before taking the final snapshot.

If you want to be doubly sure, consider another automated backup solution just for your iPhoto Library. I use Carbon Copy Cloner.

Photo by Victor Svensson

Your Virtual Thumbdrive

I wrote about DropBox a few weeks ago, ostensibly to introduce it to my readers. My hidden agenda behind that post was to get some of you to sign up using my link so that I get more space. I was certain that all I had to do was to write about it and every one of you would want to sign up. Imagine my surprise when only two signed up, one of whom turned out to be a friend of mine. So I must have done it wrong. I probably didn’t bring out all the advantages clearly enough. Either that or not many people actually lug their data around in their thumbdrives. So here I go again (with the same, not-so-hidden agenda). Before we go any further, let me tell you clearly that DropBox is a free service. You pay nothing for 2GB of online storage. If you want to go beyond that limit, you do pay some fee.

Most people carry their thumbies around so that they can access their files from any computer they happen to find themselves in front of. If these computers are not your habitual computers (your wife’s notebook, the kids’ PC, the office computer and so on), the virtual DropBox may not totally obviate the necessity of a real thumbdrive. For random computers, virtual just doesn’t cut it. But if you are a person of habits and shuttle from one regular computer to another, DropBox is actually a lot better than a real USB drive. All you have to do is to install DropBox on all those machines, which don’t even have to be of the same kind — they can be Macs, PCs, Linux boxes etc. (In fact, DropBox can be installed on your mobile devices as well, although how you will use it is far from clear.) Once you install DropBox, you will have a special folder (or directory) where you can save stuff. This special folder/directory is, in reality, nothing but a regular one. Just that there is a background program monitoring it and syncing it magically with a server (in the cloud), and with all other computers where you have DropBox installed under your credentials. Better yet, if your computers share a local network, DropBox uses it to sync among them in practically no time.
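What that background program does is conceptually simple: it takes a snapshot of the folder, notices what changed, and ships the differences to the server and your other machines. Here is a toy sketch of just the change-detection half in Python — the real client uses filesystem notifications and binary deltas, not brute-force hashing, so treat this purely as an illustration:

```python
import hashlib
from pathlib import Path

def snapshot(folder: Path) -> dict:
    """Map each file's relative path to a hash of its contents."""
    return {
        str(p.relative_to(folder)): hashlib.md5(p.read_bytes()).hexdigest()
        for p in folder.rglob("*") if p.is_file()
    }

def changed_files(before: dict, after: dict) -> list:
    """Files that are new, or whose contents differ, since the last snapshot."""
    return [name for name, digest in after.items()
            if before.get(name) != digest]
```

A sync daemon would call something like `changed_files` periodically and upload only what came back.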

Here is a video I found on YouTube on what DropBox can do for you:

In addition to this file synchronization, DropBox is an online mirror of your synced files. So if you keep your important files in the DropBox folder, they will survive forever. This is an advantage that no real, physical thumbdrive can offer you. With real thumbdrives, I personally have lost files (despite the fact that I am fairly religious about regular copies and mirrors) due to USB drives dying on me. With DropBox, that is far less likely to happen. You have local copies on all the computers where you have DropBox running and a remote copy on a cloud server.

But you might say, “Ha, that is the problem — how can I put my personal files on some remote location where anybody can look at them?” Well, DropBox says that they use industry-standard encryption that they themselves cannot unlock without your password. I chose to trust them. After all, even if they could decrypt it, how can they trawl terabytes of data in random formats in the hope of finding your account number or whatever? Besides, if you are really worried about the security, you can always create a TrueCrypt volume in DropBox.

Another use you can put DropBox to is in keeping your application data synced between computers. This works best with Macs and symbolic links. For instance, if you have a MacBook and an iMac, you can put your address book in your DropBox directory, create a symbolic link from the normal location (in ~/Library/Application Support/AddressBook) and expect to see the same address book in both the computers. A similar trick will work with other applications as well. I have tried it with my offline blogging software (ecto) and my development environment (NetBeans).
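The symbolic-link trick boils down to two operations: move the data folder into your DropBox, then leave a link behind where the application expects it. A sketch in Python — the folder names are purely illustrative, and you should quit the application and keep a backup before trying anything like this on real data:

```python
import shutil
from pathlib import Path

def move_into_dropbox(data_dir: Path, dropbox_dir: Path) -> None:
    """Move an app's data folder into Dropbox and leave a symlink behind."""
    target = dropbox_dir / data_dir.name
    shutil.move(str(data_dir), str(target))  # relocate the real data
    data_dir.symlink_to(target)              # the app follows the link transparently
```

Run it once on each machine (on the second machine, delete the local folder and create only the link), and the application on every computer reads and writes the same synced files.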

Want more reasons to sign up? Well, you can also share files with other users. Suppose your spouse has a DropBox of her own, and you want to share some photos with her. This can be easily arranged. And I believe the photos folder in DropBox behaves like a gallery, although I haven’t tested it.

So, if these reasons convince you to have a virtual thumbdrive in addition to (or instead of) a real physical one, do sign up for DropBox via any of the million links on this page. Did I tell you that if your friends signed up using your link, you would get 250MB extra for each referral?

Photo by Debs (ò‿ó)♪

Hosting Services

In today’s world, if you don’t have a website, you don’t exist. Well, that may not be totally accurate — you may do just fine with a facebook page or a blog. But the democratic nature of the Internet inspires a lot of us to become providers of information rather than just consumers. The smarter ones, in fact, strategically position themselves between the providers and the consumers, and reap handsome rewards. Look at the aforementioned facebook, or Google, or any one of those Internet businesses that made it big. Even the small fries of the Internet, including small-time bloggers such as yours faithfully, find themselves facing technical issues of web traffic and stability. I recently moved from my shared hosting at NamesDirect.com to a virtual private host at Arvixe.com, and even more recently to InMotion. There, I have done it. I have gone and dropped technical jargon on my readers. But this post is on the technical choices budding webmasters have. (Before we proceed further, let me disclose the fact that the links to InMotion in this post are all affiliate links.)

When you start off with a small website, you typically go with what they call “shared hosting” — the economy class of web hosting solutions. You register a domain name (such as thulasidas.com) for $20 or $30 and look around for a place on the web to put your pages. You can find this kind of hosting for under $10 a month. (For instance, InMotion has a package for as low as $4 a month, with a free domain name registration thrown in.) Most of these providers advertise unlimited bandwidth, unlimited storage, unlimited databases etc. Well, don’t believe everything you see on the Internet; you get what you pay for. If you read the fine print before clicking “here” to accept the 30-page-long terms and conditions, you would see that unlimited really means limited.

For those who have played around with web development at home, shared hosting is like having XAMPP installed on your home computer with multiple users accessing it. Sure, the provider may have a mighty powerful computer, huge storage space and a large pipe to the Internet or whatever, but it is still sharing. This means that your own particular needs cannot be easily accommodated, especially if it looks as though you might hog an unfair share of the “unlimited” resources, which is what happened with my provider. I needed a “CREATE TEMPORARY TABLE” privilege for a particular application, and my host said, “No way, dude.”

Shared hosting comes in different packages, of course. Business, Pro, Ultimate etc. — they are all merely advertising buzzwords, essentially describing different sizes of the share of the resources you will get. The next upgrade is another buzzword — “Cloud Hosting.” Here, the resources are still shared. But apparently they reside on geographically dispersed data centers, optimized and scalable through some kind of grid technology. This type of hosting is considered better because, if you run out of resources, the hosting program can allocate more. For instance, if you suddenly have a traffic spike because of your funny post going viral on facebook and digg, the cloud could easily handle it. They will, of course, charge you more, but in the shared hosting scenario, they would probably lock you out temporarily. To me, cloud hosting sounds like shared hosting with some of the resource constraints removed. It is like sharing a pie, but with all the ingredients on hand, so that if you run out, they can quickly bake some more for you.

The “business class” of web hosting is VPS or Virtual Private Server. Here, you have a server (albeit a virtual one) for yourself. Since you “own” this server, you can do whatever you like with it — you have “root” access. And the advertised resources are, more or less, dedicated to you. This is like having a VirtualBox running on your home PC where you have installed XAMPP. The only downside is that you don’t know how many other VirtualBoxes are running on the computer where your VPS is running. So the share of the resources you actually get to enjoy may be different from the so-called “dedicated” ones. For root access and quasi-dedicated resources, you pay a premium. VPS costs roughly ten times as much as shared hosting. InMotion, for instance, has a VPS package for $40 a month, which is what I signed up for.

VPS hosting comes with service level agreements that typically state 99.9% uptime or availability. It is important to note that this uptime refers, not to your instance of VPS, but to the server that hosts the virtual servers. Since you are the boss of your VPS, if it crashes, it is largely your problem. Your provider may offer a “fully managed” service (InMotion does), but that usually means you can ask them to do some admin work and seek advice. In my case, my VPS started hanging (because of some FastCGI issues before I decided to move to DSO for PHP support so that APC worked — I know, lots of techie jargon, but I am laying the groundwork for my next post on server management). When I asked the support to help diagnose the problem, they said, “It is hanging because your server is spawning too many PHP processes. Anything I can help you with?” Accurate statement, I must admit, but not necessarily the kind of help you are looking for. They were saying, ultimately, the VPS server was my baby, and I would have to take care of it.

If you are a real high-flying webmaster, the type of hosting you should go for is a fully dedicated one. This is kind of like the first class or private jet kind of situation in my analogy. This hosting option will run you a considerable cost, anywhere from $200 to several thousands per month. For that kind of money, what you will get is a powerful server (well, at least for the costlier ones of these plans) housed in a datacenter with redundant power supplies and so on. Dedicated hosting, in other words, is a real private server, as opposed to a virtual one.

I have no direct experience with a hosted dedicated server, but I do have a couple of servers running at home for development purposes. I run two computers with XAMPP (one real and one on a VirtualBox on my iMac) and two with MAMP. And I presume the dedicated-server experience is going to be similar — a server at your beck and call with resources earmarked for you, running whatever it is that you would like to run.

Spanning shared and VPS hosting is what they call a reseller account. This type of hosting essentially sets you up as a small web hosting provider (presumably in a shared hosting mode, as described above) yourself. This can be interesting if you want to make a few bucks on the side. InMotion, for instance, offers you a reseller package for $20, and promises to look after end-user support themselves. Of course, when you actually resell to your potential customers, you may want to make sure your offering has something better than what they can get directly from the company, either in terms of pricing or features. Otherwise, it wouldn’t make much sense for them to come to you, would it?

So there. That is the spectrum of hosting options you have. All you need to do is to figure out where in this spectrum your needs fall, and choose accordingly. If you end up choosing InMotion (a wise choice), I would be grateful if you do so using one of my affiliate links.

We Are Moving…

Unreal Blog has moved to a more powerful server at Arvixe. [Disclosure: All the server links in this article are affiliate links.] For those interested in moving your hosting to a new server, I thought I would describe the “gotchas” involved.

This gotcha got me during a test migration of my old posts to the new server. I had over 130 posts to migrate. When I moved them to the new blog on the new server, they looked like new posts. To the unforgiving logic of a computer (that defies common sense and manages to foul up life), this pronouncement of newness is accurate, I have to admit — they were indeed new posts on the new server. So, on the 10th of January, my regular readers who had signed up for updates received over 100 email notifications about “new posts” on my blog. Needless to say, I started getting angry emails from my annoyed regulars demanding that I remove their names from my “list” (as one of them put it). If you were one of those who got the excessive emails, please accept my apologies. Rest assured that I have turned off email notifications, and I will look long and hard into the innards of my blog before turning it back on. And when I do turn it on, I will prominently provide a link in each message to subscribe or unsubscribe yourself.

As you grow your web footprint and your blog traffic, you are going to have to move to a bigger server. In my case, I decided to go with Arvixe because of the excellent reviews I found on the web. The decision of what type of hosting you need makes for an interesting topic, which will be my next post.

Cloud Computing

I first heard of “Cloud Computing” when my friend in Trivandrum started talking about it, organizing seminars and conferences on the topic. I was familiar with Grid Computing, so I thought it was something similar and left it at that. But a recent need of mine illustrated to me what cloud computing really is, and why one would want it. I thought I would share my insight with the uninitiated.

Before we go any further, I should confess that I write this post with a bit of an ulterior motive. That motive I will divulge towards the end of this post.

Let me start by saying that I am no noob when it comes to computers. I started my long love affair with computing and programming in 1983. Those late night bicycle rides to CLT and stacks of Fortran cards — those were fun-filled adventures. We would submit the stack to the IBM 370 operators early in the morning and get the output in the evening. So the turnaround time for each bug fix would be a day, which I think made us fairly careful programmers. I remember writing a program for printing out a calendar, one page per month, spaced and aligned properly. Useless really, because the printout would be on A3 size feed rolls with holes on the sides, and the font was a dirty Courier type of point size 12 in light blue-black, barely legible at normal reading distance. But it was fun. Unfortunately I made a mistake in the loop nesting and the calendar came out all messed up. Worse, the operator, who was stingy about the paper usage, interrupted the output on the fourth month and advised me to stop doing it. I knew that he could not interrupt it if I used only one Fortran PRINT statement, and rewrote the program to do it that way. I got the output, but on the January page, there was this hand-written missive: “Try it once more and I will cancel your account.” At that point I ceased and desisted.

I started using email in the late eighties on a cluster of Vaxstations that belonged to the high-energy physics group at Syracuse University. At first, we could send email only to users on the same cluster, with DECnet addresses like VAX05::MONETI. And a year later, when I could send a mail to my friend in the next building with an address like “IN%naresh@ee.syr.edu” or something (the “IN” signifying Internet), I was mighty impressed with the pace at which technology was progressing. Little did I know that a few short years later, there would be usenet, Mosaic and e-commerce. And that I would be writing books on financial computing and WordPress plugins in PHP.

Despite keeping pace with computing technology most of my life, I have begun to feel that technology is slowly breaking free and drifting away from me. I still don’t have a twitter account, and I visit my Facebook only once a month or so. More to the point of this post, I am embarrassed to admit that I had no clue what this cloud computing was all about. Until I got my MacBook Air, thanks to my dear wife who likes to play sugar mama once in a while. I always had this problem of synchronizing my documents among the four or five PCs and Macs I regularly work with. With a USB drive and extreme care, I could manage it, but the MBA was the proverbial straw that broke my camel of a back. (By the way, did you know this Iranian proverb – “Every time the camel shits, it’s not dates”?) I figured that there had to be a better way. I had been playing with Google Apps for a while by then, although I didn’t realize that it was cloud computing.

What I wanted to do was a bit more involved than office applications. I wanted to work on my hobby PHP projects from different computers. This means something like XAMPP or MAMP along with NetBeans on all the computers I work with. But how do I keep the source code sync’ed? Thumbdrives and backup/sync programs? Not elegant, and hardly seamless. Then I hit upon the perfect solution — Dropbox! This way, you store the source files on the network (using Amazon S3, apparently, but that is beside the point), and see a directory (folder for those who haven’t obeyed Steve Jobs and gone back to the Mac) that looks suspiciously local. In fact, it is a local directory — just that there is a program running in the background syncing it with your folder on the cloud.

Dropbox! gives you 2GB of network storage free, which I find quite adequate for any normal user. (That sounds like the famous last words attributed to Bill Gates, doesn’t it? “640KB of memory should be enough for anyone!”) And, you can get 250MB extra for every successful referral you make. That brings me to my ulterior motive — all the links to Dropbox! on this post are actually referral links. When you sign up and start using it by clicking on one of them, I get 250MB extra. Don’t worry, you get 250MB extra as well. So I can grow my online storage up to 8GB, which should keep me happy for a long time, unless I want to store my photos and video there, in which case I will upgrade my Dropbox! account to a paid service.

Apart from giving me extra space, there are many reasons you should really check out Dropbox!. I will write more on those reasons later, but let me list them here.
  1. Sync your (Mac) address book among your Macs.
  2. Multiple synced backups of your precious data.
  3. Transparent use for IDEs such as NetBeans.
Some of these reasons are addressed only by following some tips and tricks, which I will write about.

By the way, we Indian writers like to use expressions like ulterior motives and vested interests. Do you think it is because we always have some?

Blank Screen after Hibernate or Sleep?

Okay, the short answer: increase your virtual memory to more than the size of your physical memory.

Long version now. Recently, I had this problem where my PC wouldn’t wake up from hibernation or sleep mode properly. The PC itself would be on and churning, but the screen would switch to power save mode, staying blank. The only thing to do at that point would be to restart the computer.

Like the good netizen that I am, I trawled the Internet for a solution. But didn’t find any. Some suggested upgrading the BIOS, replacing the graphics card and so on. Then I saw a mention in a Linux group that the size of the swap file should be more than the physical memory, and decided to try the same on my Windows XP machine. And it solved the problem!

So the solution to this issue of blank screen after waking up is to set the size of the virtual memory to something larger than the memory in your system. If you need more information, here is how, in step-by-step form. These instructions apply to a Windows XP machine.

  1. Right-click on “My Computer” and hit “Properties.”
  2. Take a look at the RAM size, and click on the “Advanced” tab.
  3. Click on the “Setting” button under the “Performance” group box.
  4. In the “Performance Options” window that comes up, select the “Advanced” tab.
  5. In the “Virtual Memory” group box near the bottom, click on the “Change” button.
  6. In the “Virtual Memory” window that pops up, set the “Custom Size” to something more than your RAM size (that you saw in step 2). You can set it on any hard disk partition that you have, but if you are going through all these instructions, chances are you have only “C:”. In my case, I chose to put it on “M:”.
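The only arithmetic in step 6 is making sure the custom size exceeds your RAM. A common rule of thumb — my assumption, not an official Microsoft recommendation — is about 1.5 times the physical memory:

```python
def pagefile_size_mb(ram_mb: int, factor: float = 1.5) -> int:
    """Suggested virtual-memory size: comfortably larger than physical RAM.

    The factor of 1.5 is a rule of thumb; any value that keeps the
    pagefile bigger than RAM satisfies the fix described above.
    """
    size = int(ram_mb * factor)
    assert size > ram_mb, "must exceed physical memory for the fix to apply"
    return size
```

So for a machine with 2GB of RAM, you would type 3072MB into the “Custom Size” box.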

The Age of Spiritual Machines by Ray Kurzweil

It is not easy to review a non-fiction book without giving the gist of what the book is about. Without a synopsis, all one can do is to call it insightful and other such epithets.

The Age of Spiritual Machines is really an insightful book. It is a study of the future of computing and computational intelligence. It forces us to rethink what we mean by intelligence and consciousness, not merely at a technological level, but at a philosophical level. What do you do when your computer feels sad that you are turning it off and declares, “I cannot let you do that, Dave”?

What do we mean by intelligence? The traditional yardstick of machine intelligence is the remarkably one-sided Turing Test. It defines intelligence using comparative means — a computer is deemed intelligent if it can fool a human evaluator into believing that it is human. It is a one-sided test because a human being can never pass for a computer for long. All that an evaluator needs to do is to ask a question like, “What is tan(17.32°)?” My $4 calculator takes practically no time to answer it to better than one part in a million precision. A super intelligent human being might take about a minute before venturing a first guess.
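There is no magic in the calculator's feat, of course; any programmable machine reproduces it in a couple of lines:

```python
import math

# tan of 17.32 degrees -- the kind of question a $4 calculator
# answers instantly to six significant figures
angle_deg = 17.32
value = math.tan(math.radians(angle_deg))
print(f"tan({angle_deg} degrees) = {value:.6f}")
```

The point of the one-sidedness argument is exactly this: a human evaluator would unmask a human pretending to be a computer with a single such question.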

But the Turing Test does not define intelligence as arithmetic muscle. Intelligence is composed of “higher” cognitive abilities. After beating around the bush for a while, one comes to the conclusion that intelligence is the presence of consciousness. And the Turing Test essentially examines a computer to see if it can fake consciousness well enough to fool a trained evaluator. It would have you believe that consciousness is nothing more than answering some clever questions satisfactorily. Is it true?

Once we restate the test (and redefine intelligence) this way, our analysis can bifurcate into an inward journey or an outward one. We can ask ourselves questions like — what if everybody is an automaton (except us — you and me — of course) successfully faking intelligence? Are we faking it (and freewill) to ourselves as well? We would think perhaps not — for who are these “ourselves” that we are faking it to? The inevitable conclusion to this inward journey is that we can be sure of the presence of consciousness only in ourselves.

The outward analysis of the emergence of intelligence (a la Turing Test) brings about a whole host of interesting questions, which occupy a significant part of the book (I’m referring to the audio abridgment edition), although the book gets a bit obsessed with virtual sex at times.

One of the thought provoking questions when machines claim that they are sentient is this: Would it be murder to “kill” one of them? Before you suggest that I (or rather, Kurzweil) stop acting crazy, consider this: What if the computer is a digital backup of a real person? A backup that thinks and acts like the original? Still no? What if it is the only backup and the person is dead? Wouldn’t “killing” the machine be tantamount to killing the person?

If you grudgingly said yes to the last question, then all hell breaks loose. What if there are multiple identical backups? What if you create your own backup? Would deleting a backup capable of spiritual experiences amount to murder?

When he talks about the progression of machine intelligence, Kurzweil demonstrates his inherent optimism. He posits that ultimate intelligence yearns for nothing but knowledge. I don’t know if I accept that. To what end then is knowledge? I think an ultimate intelligence would crave continuity or immortality.

Kurzweil assumes that technology and intelligence will have met all our material needs at some point. Looking at our efforts so far, I have my doubts. We have developed no boon so far without an associated bane or two. Think of the seemingly unlimited nuclear energy, and you also see the bombs and radioactive waste management issues. Think of fossil fuel, and the scourge of global warming shows itself.

I guess I’m a Mr. Glass-is-Half-Empty kind of guy. To me, even the unlimited access to intelligence may be a dangerous thing. Remember how internet reading changed the way we learned things?

Software Nightmares

To err is human, but to really foul things up, you need a computer. So states the remarkably insightful Murphy’s Law. And nowhere else does this ring truer than in our financial workplace. After all, it is the financial sector that drove the rapid progress in the computing industry — which is why the first computing giant had the word “business” in its name.

The financial industry keeps up with the developments in the computer industry for one simple reason. Stronger computers and smarter programs mean more money — a concept we readily grasp. As we use the latest and greatest in computer technology and pour money into it, we fuel further developments in the computing field. In other words, not only did we start the fire, we actively fan it as well. But it is not a bad fire; the positive feedback loop that we helped set up has served both the industries well.

This inter-dependency, healthy as it is, gives us nightmarish visions of perfect storms and dire consequences. Computers being the perfect tools for completely fouling things up, our troubling nightmares are more justified than we care to admit.

Models vs. Systems

Paraphrasing a deadly argument that some gun aficionados make, I will defend our addiction to information technology. Computers don’t foul things up; people do.

Mind you, I am not implying that we always mess it up when we deploy computers. But at times, we try to massage our existing processes into their computerised counterparts, creating multiple points of failure. The right approach, instead, is often to redesign the processes so that they can take advantage of the technology. But it is easier said than done. To see why, we have to look beyond systems and processes and focus on the human factors.

In a financial institution, we are in the business of making money. We fine-tune our reward structure in such a way that our core business (of making money, that is) runs as smoothly as possible. Smooth operation relies on strict adherence to processes and the underlying policies they implement. In this rigid structure, there is little room for visionary innovation.

This structural lack of incentive to innovate results in staff hurrying through a new system rollout or a process re-engineering. They have neither the luxury of time nor the freedom to slack off from the dreaded “business-as-usual” to do a thorough job of such “non-essential” things.

Besides, there is seldom any unused human resource to deploy in studying and improving processes so that they can better exploit technology. People who do it need to have multi-faceted capabilities (business and computing, for instance). Being costly, they are much more optimally deployed in the core business of making more money.

Think about it, when was the last time you (or someone you know) got hired to revamp a system and the associated processes? The closest you get is when someone is hired to duplicate a system that is already known to work better elsewhere.

The lack of incentive results in a dearth of thought and care invested in the optimal use of technology. Suboptimal systems (which do one thing well at the cost of everything else) abound in our workplace. In time, we will reach a point where we have to bite the bullet and redesign these systems. When redesigning a system, we have to think about all the processes involved. And we have to think about the system while designing or redesigning processes. This cyclic dependence is the theme of this article.

Systems do not figure in a quant’s immediate concern. What concerns us more is our strongest value-add, namely mathematical modelling. In order to come up with an optimal deployment strategy for models, however, we need to pay attention to operational issues like trade workflow.

I was talking to one of our top traders the other day, and he mentioned that a quant, no matter how smart, is useless unless his work can be deployed effectively and in a timely manner. A quant typically delivers his work as a C++ program. In a rapid deployment scenario, his program will have to plug directly into a system that will manage trade booking, risk measurements, operations and settlement. The need for rapid deployment makes it essential for the quants to understand the trade lifecycle and business operations.

Life of a Trade

Once a quant figures out how to price a new product, his work is basically done. After coaxing that stochastic integral into a pricing formula (failing which, a Crank-Nicolson or Monte Carlo scheme), the quant writes up a program and moves on to the next challenge.

It is when the trading desk picks up the pricing spreadsheet and books the first trade into the system that the fun begins. Then the trade takes on a life of its own, sneaking through various departments and systems, showing different strokes to different folks. This adventurous biography of the trade is depicted in Figure 1 in its simplified form.

At the inception stage, a trade is conceptualized by the Front Office folks (sales, structuring, trading desk, shown in yellow ovals in the figure). They study the market need and potential, and assess the trade viability. Once they see and grab a market opportunity, a trade is born.

Fig. 1: Life of a Trade

Even with the best of quant models, a trade cannot be priced without market data, such as prices, volatilities, rates and correlations. The validity of the market data is ensured by Product Control or Market Risk people. The data management group also needs to work closely with Information Technology (IT) to ensure live data feeds.

The trade first goes for a counterparty credit control (the pink bubbles). The credit controllers ask questions like: if we go ahead with the deal, how much will the counterparty end up owing us? Does the counterparty have enough credit left to engage in this deal? Since the credit exposure changes during the life cycle of the trade, this is a minor quant calculation on its own.

In principle, the Front Office can do the deal only after the credit control approves of it. Credit Risk folks use historical data, internal and external credit rating systems, and their own quantitative modelling team to come up with counterparty credit limits and maximum per trade and netted exposures.

Right after the trade is booked, it goes through some control checks by the Middle Office. These fine people verify the trade details, validate the initial pricing, apply some reasonable reserves against the insane profit claims of the Front Office, and come up with a simple yea or nay to the trade as it is booked. If they say yes, the trade is considered validated and active. If not, the trade goes back to the desk for modifications.

After these inception activities, trades go through their daily processing. In addition to the daily (or intra-day) hedge rebalancing in the Front Office, the Market Risk Management folks mark their books to market. They also take care of compliance reporting to regulatory bodies, as well as risk reporting to the upper management, a process that has far-reaching consequences.

The Risk Management folks, whose work is never done as Tracy Chapman would say, also perform scenario, stress-test and historical Value at Risk (VaR) computations. In stress-tests, they apply a drastic market movement of the kind that took place in the past (like the Asian currency crisis or 9/11) to the current market data and estimate the movement in the bank’s book. In historical VaR, they apply the market movements in the immediate past (typically the last year) and figure out the 99th percentile (or some such predetermined level) worst-loss scenario. Such analysis is of enormous importance to the senior management and in regulatory and compliance reporting. In Figure 1, the activities of the Risk Management folks are depicted in blue bubbles.
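As a concrete illustration of the historical VaR computation described above, here is a minimal C++ sketch. It assumes the hard part, revaluing the current book under each historical daily market move, has already produced a vector of simulated P/L numbers; the function name and signature are mine for illustration, not those of any particular risk system.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Historical VaR sketch: given the simulated P/L of the current book
// under each historical daily market move, report the loss at the
// chosen confidence level (99% by default).
double historicalVaR(std::vector<double> simulatedPnl, double confidence = 0.99)
{
    // Sort from worst loss to best gain.
    std::sort(simulatedPnl.begin(), simulatedPnl.end());
    // Index of the (1 - confidence) quantile, i.e. the ~1% worst outcome.
    std::size_t idx = static_cast<std::size_t>(
        std::floor((1.0 - confidence) * simulatedPnl.size()));
    // VaR is conventionally quoted as a positive loss number.
    return -simulatedPnl[idx];
}
```

With a typical trading year of about 250 scenarios, the 99% historical VaR picked out this way is roughly the third-worst simulated loss.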

In their attempts to rein in the ebullient traders, the Risk Management folks come across in their adversarial worst. But we have to remind ourselves that the trading and control processes are designed that way. It is the constant conflict between the risk takers (Front Office) and the risk controllers (Risk Management) that implements the risk appetite of the bank as decided by the upper management.

Another group that crunches the trade numbers every day from a slightly different perspective are the Product Control folks, shown in green in Figure 1. They worry about the daily profit and loss (P/L) movements both at trade and portfolio level. They also modulate the profit claims by the Front Office through a reserving mechanism and come up with the so called unrealized P/L.

This P/L, unrealized as it is, has a direct impact on the compensation and incentive structure of the Front Office in the short run. Hence the perennial tussle over the reserve levels. In the long term, however, the trade gets settled and the P/L becomes realized, and nobody argues over it. Once the trade is in the maturity phase, it is Finance that worries about statistics and cash flows. Their big picture view ends up in annual reports and stakeholder meetings, and influences everything from our bonus to the CEO’s new Gulfstream.

Trades are not static entities. During the course of their life, they evolve. Their evolution is typically handled by Middle Office people (grey bubbles) who worry about trade modifications, fixings, knock-ins, knock-outs etc. The exact name given to this business unit (and indeed other units described above) depends on the financial institution we work in, but the trade flow is roughly the same.

The trade flow that I described so far should ring alarm bells in a quant heart. Where are the quants in this value chain? Well, they are hidden in a couple of places. Some of them find home in the Market Risk Management, validating pricing models. Some others may live in Credit Risk, estimating peak exposures, figuring out rating schemes and minimising capital charges.

Most important of all, they find their place before a trade is ever booked. Quants teach their home banks how to price products. A financial institution cannot warehouse the risk associated with a trade unless it knows how much the product in question is worth. It is in this crucial sense that model quants drive the business.

In a financial marketplace that is increasingly hungry for customized structures and solutions, the role of the quants has become almost unbearably vital. Along with the need for innovative models comes the imperative of robust platforms to launch them in a timely fashion to capture transient market opportunities.

In our better investment banks, such platforms are built in-house. This trend towards self-reliance is not hard to understand. If we use a generic trading platform from a vendor, it may work well for established (read vanilla) products. It may handle the established processes (read compliance, reporting, settlements, audit trails etc.) well. But what do we do when we need a hitherto unknown structure priced? We could ask the vendor to develop it. But then, they will take a long time to respond. And, when they finally do, they will sell it to all our competitors, or charge us an arm and a leg for exclusivity thereby eradicating any associated profit potential.

Once a vended solution is off the table, we are left with the more exciting option of developing an in-house system. It is when we design an in-house system that we need to appreciate the big picture. We will need to understand the whole trade flow through the different business units and processes as well as the associated trade perspectives.

Trade Perspectives

The perspective that is most common these days is trade-centric. In this view, trades are the primary objects, which is why conventional trading systems keep track of them. Put a bunch of trades together and you get a portfolio. Put a few portfolios together and you have a book. The whole of Global Markets is merely a collection of books. This paradigm has worked well and is probably the best compromise between different possible views.

But the trade-centric perspective is only a compromise. The activities of the trading floor can be viewed from different angles. Each view has its role in the bigger scheme of things in the bank. Quants, for instance, are model-centric. They try to find commonality between various products in terms of the underlying mathematics. If they can reuse their models from one product to another, potentially across asset classes, they minimize the effort required of them. Remember how Merton views the whole world as options! I listened to him in amazement once when he explained the Asian currency crisis as originating from the risk profile of compound options: the bank guarantees to corporate clients being put options, and the government guarantees to banks being put options on put options.

Unlike quants who develop pricing models, quantitative developers tend to be product-centric. To them, it doesn’t matter too much even if two different products use very similar models. They may still have to write separate code for them depending on the infrastructure, market data, conventions etc.

Traders see their world from the asset class angle. Typically attached to a particular trading desk based on an asset class, their favourite view cuts across models and products. To traders, all products and models are merely tools for making profit.

IT folks view the trading world from a completely different perspective. Theirs is a system-centric view, where the same product using the same model appearing in two different systems is basically two different beasts. This view is not particularly appreciated by traders, quants or quant developers.

One view that all of us appreciate is the view of the senior management, which is narrowly focussed on the bottom line. The big bosses can prioritise things (whether products, asset classes or systems) in terms of the money they bring to the shareholders. Models and trades are typically not visible from their view — unless, of course, rogue traders lose a lot of money on a particular product or by using a particular model. Or, somewhat less likely, they make huge profits using the same tricks.

When the trade reaches the Market Risk folks, there is a subtle change in the perspective from a trade-level view to a portfolio or book level view. Though mathematically trivial (after all, the difference is only a matter of aggregation), this change has implications in the system design. Trading systems have to maintain a robust hierarchical portfolio structure so that various dicing and slicing as required in the later stages of the trade lifecycle can be handled with natural ease.
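The hierarchical portfolio structure mentioned above is naturally expressed as a composite: a book is just a portfolio of portfolios, and any additive risk measure rolls up the tree for free. Here is a minimal C++ sketch under that assumption; all the class and member names are illustrative, not from any real trading system.

```cpp
#include <memory>
#include <string>
#include <vector>

// Composite sketch of the trade -> portfolio -> book hierarchy.
// "risk" stands in for any additive measure (P/L, delta, VaR contribution...).
struct Node {
    virtual ~Node() = default;
    virtual double risk() const = 0;
};

// A leaf of the hierarchy: a single booked trade.
struct Trade : Node {
    double riskValue;
    explicit Trade(double r) : riskValue(r) {}
    double risk() const override { return riskValue; }
};

// A Portfolio aggregates its children; a book is simply a portfolio of
// portfolios, so the same class serves every level of the hierarchy.
struct Portfolio : Node {
    std::string name;
    std::vector<std::shared_ptr<Node>> children;
    explicit Portfolio(std::string n) : name(std::move(n)) {}
    void add(std::shared_ptr<Node> child) { children.push_back(std::move(child)); }
    double risk() const override {
        double total = 0.0;
        for (const auto& c : children) total += c->risk();
        return total;
    }
};
```

The point of the design is that the dicing and slicing required later in the trade lifecycle (by desk, by asset class, by book) becomes a matter of walking the same tree from different starting nodes.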

The busy folks in the Middle Office (who take care of trade validations and modifications) are obsessed with trade queues. They have a validation queue, market operation queue etc. Again, the management of queues using status flags is something we have to keep in mind while designing an in-house system.
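The status-flag bookkeeping behind those queues can be sketched in a few lines of C++. The enum values and function name below are illustrative assumptions, not the conventions of any real Middle Office system: a "queue" is simply the set of trades whose status flag says they are still awaiting that processing step.

```cpp
#include <string>
#include <vector>

// Illustrative lifecycle states a trade moves through after booking.
enum class TradeStatus { Booked, Validated, Amended, Settled };

struct Trade {
    int id;
    TradeStatus status;
};

// The Middle Office "validation queue": trades booked by the Front
// Office but not yet validated.
std::vector<int> validationQueue(const std::vector<Trade>& trades)
{
    std::vector<int> queue;
    for (const auto& t : trades)
        if (t.status == TradeStatus::Booked)
            queue.push_back(t.id);
    return queue;
}
```

Other queues (market operations, fixings, settlements) follow the same pattern with different status predicates, which is why the status model deserves careful thought at design time.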

When it comes to Finance and their notions of cost centres, the trade is pretty much out of the booking system. Still, they manage trading desks and asset classes as cost centres. Any trading platform we design has to provide adequate hooks in the system to respond to their specific requirements as well.

Quants and the Big Picture

Most quants, especially at junior levels, despise the Big Picture. They think of it as a distraction from their real work of marrying stochastic calculus to C++. Changing that mindset to some degree is the hidden agenda behind this column.

As my trader friends will agree, the best model in the world is worthless unless it can be deployed. Deployment is the fast track to the big picture; no point denying it. Besides, in an increasingly interconnected world where a crazy Frenchman’s actions instantly affect our bonus, what is the use of denying the existence of the big picture in our neck of the woods? Instead, let’s take advantage of the big picture to empower ourselves. Let’s bite the bullet and sit through a Big Picture 101.

When we change our narrow, albeit effective, focus on the work at hand to an understanding of our role and value in the organization, we will see the potential points of failure of the systems and processes. We will be prepared with possible solutions to the nightmarish havoc that computerized processes can wreak. And we will sleep easier.