Category Archives: Topical

Includes posts on physics, philosophy, sciences, quantitative finance, economics, the environment, etc.

Quant Talent Management

The trouble with quants is that it is hard to keep them anchored to their moorings. Their talent is in high demand for a variety of reasons. The primary reason is the increasing sophistication of the banking clients, who demand increasingly more structured products with specific hedging and speculative motives. Servicing their demand calls for a small army of quants supporting the trading desks and systems.

Since structured products are a major profit engine on the trading floor of most banks, this demand represents a strong pull factor for quants from competing institutions. There is nothing much most financial institutions can do about this pull factor, except to pull them back in with offers they can’t refuse.

But we can try to eliminate the push factors that are hard to identify. These push factors are often hidden in the culture, ethics and the way things get done in institutions. They are, therefore, specific to the geographical location and the social settings where the banks operate.

Performance Appraisal: Who Needs It?

Performance appraisal is a tool for talent retention, if used wisely. But, if misused, it can become a push factor. Are there alternatives that will aid in retaining and promoting talent?

As it stands now, we go through this ordeal of performance appraisal at least once every year. Our career progression, bonus and salary depend on it. So we spend sleepless nights agonizing over it.

In addition to the appraisal, we also get our key performance indicators, or KPIs, for next year. These are the commandments we have to live by for the rest of the year. The whole experience of it is so unpleasant that we say to ourselves that life as an employee sucks.

The bosses fare hardly better though. They have to worry about their own appraisals by bigger bosses. On top of that, they have to craft the KPI commandments for us as well, a job pretty darned difficult to delegate. In all likelihood, they say to themselves that their life as a boss sucks!

Given that nobody is thrilled about the performance appraisal exercise, why do we do it? Who needs it?

The objective behind performance appraisal is noble. It strives to reward good performance and punish poor shows, the old carrot-and-stick management paradigm. This objective is easily met in a small organization without the need for a formal appraisal process. Small business owners know who to keep and who to sack. But in a big corporate body with thousands of employees, how do you design a fair and consistent compensation scheme?

The solution, of course, is to pay a small fortune to consultants who design appraisal forms and define a uniform process, too uniform, perhaps. Such verbose forms and inflexible processes come with inherent problems. One problem is that the focus shifts from the original objective (carrot and stick) to fairness and consistency (one-size-fits-all). Mind you, most bosses know who to reward and who to admonish. But the HR department wants the bosses to follow a uniform process, thereby increasing everybody’s workload.

Another, more insidious problem with this consultancy driven approach is that it is necessarily geared towards mediocrity. When you design an appraisal process to cater to everybody, the best you can hope to achieve is to improve the average performance level by a bit. Following such a process, the CERN scientist who invented the World Wide Web would have fared badly, for he did not concentrate on his KPIs and wasted all his time thinking about file transfers!

CERN is a place that consistently produces Nobel laureates. How does it do it? Certainly not by following processes that are designed to make incremental improvements at the average level. The trick is to be a center for excellence which attracts geniuses.

Of course, it is not fair to compare an average bank with CERN. But we have to realize that the verbose forms, which focus on averages and promote mediocrity, are a poor tool for innovation management, especially when we are trying to retain and encourage excellence in quant talent.

A viable alternative to standardized and regimented appraisal processes is to align employee objectives with those of the institutions and leave performance and reward management to bosses. With some luck, this approach may retain fringe geniuses and promote innovation. At the very least, it will alleviate some employee anxiety and sleepless nights.

To Know or Not To Know

One peculiar push factor in the Asian context is the lack of respect for technical knowledge. Technical knowledge is not always a good thing in the modern Asian workplace. Unless you are careful, others will take advantage of your expertise and dump their responsibilities on you. You may not mind it as long as they respect your expertise. But, they often hog the credit for your work and present their ability to evade work as people management skills.

People management is better rewarded than technical expertise. This differentiation between experts and middle-level managers in terms of rewards is a local Asian phenomenon. Here, those who present the work seem to get the credit for it, regardless of who actually performs it. We live in a place and time where articulation is often mistaken for accomplishments.

In the West, technical knowledge is more readily recognized than smooth presentations. You don’t have to look beyond Bill Gates to appreciate the heights to which technical expertise can take you in the West. Of course, Gates is more than an expert; he is a leader of great vision as well.

Leaders are different from people managers. Leaders provide inspiration and direction. They are sorely needed in all organizations, big and small.

Unlike people managers, quants and technical experts are smart cookies. They can easily see that if they want to be people managers, they can get started with a tie and a good haircut. If the pickings are rich, why wouldn’t they?

This Asian differentiation between quants and managers, therefore, makes for a strong push factor for some quants who find it worthwhile to hide their technical skills, get that haircut, grab that tie, and become a people manager. Of course, it comes down to your personal choice between fulfilment and satisfaction originating from technical authority on the one hand, and convenience and promotions arising from people skills on the other.

I wonder whether we have already made our choices, even in our personal lives. We find fathers who cannot get the hang of changing diapers and other household chores. Is it likely that men cannot figure out washing machines and microwaves although they can operate complicated machinery at work? We also find ladies who cannot balance their accounts and estimate their spending. Is it really a mathematical impairment, or a matter of convenience? At times, the lack of knowledge is as potent a weapon as its abundance.

How Much is Talent Worth?

Banks deal in money. Our profession in finance teaches us that we can put a dollar value to everything in life. Talent retention is no different. After taking care of as much of the push factors as we can, the next question is fairly simple: How much does it take to retain talent?

My city-state of Singapore suffers from a special disadvantage when it comes to talent management. We need foreign talent. It is nothing to feel bad about. It is a statistical fact of life. For every top Singaporean in any field, be it finance, science, medicine, sports or whatever, we will find about 500 professionals of equal calibre in China and India. Not because we are 500 times less talented, just that they have 500 times more people.

Coupled with overwhelming statistical supremacy, certain countries have special superiority in their chosen or accidental specializations. We expect to find more hardware experts in China, more software gurus in India, more badminton players in Indonesia, and more entrepreneurial spirit and managerial expertise in the West.

We need such experts, so we hire them. But how much should we pay them? That’s where economics comes in: demand and supply. We offer attractive expatriate packages that the talents would bite.

I was on an expatriate package when I came to Singapore as a foreign talent. It was a fairly generous package, but cleverly worded so that if I became a local talent, I would lose out quite a bit. I did become local a few years later, and my compensation diminished as a consequence. My talent did not change, just the label from foreign to local.

This experience made me think a bit about the value of talent and the value of labels. The local quant talents, too, are beginning to take note of the asymmetric compensation structure associated with labels. This asymmetry and the consequent erosion of loyalty introduce another push factor for the local quant talents, as if one was needed.

The solution to this problem is not a stricter enforcement of the confidentiality of salaries, but a more transparent compensation scheme free of anomalies that can be misconstrued as unfair practices. Otherwise, we may see an increasing number of Asian nationals using Singapore-based banks as a stepping stone to greener pastures. Worse, we may see (as indeed we do, these days) locals seeking level playing fields elsewhere.

We need to hire the much needed talent whatever it costs; but let’s not mistake labels for talent.

Handling Goodbyes

Losing talent is an inevitable part of managing it. What do you do when your key quant hands in the dreaded letter? It is your worst nightmare as a manager! Once the dust settles and the panic subsides, you should ask yourself, what next?

Because of all the pull and push factors discussed so far, quant staff retention is a challenge. New job offers are becoming increasingly irresistible. At some stage, someone you work closely with, be it your staff, your boss or a fellow team member, is going to say goodbye. Handling resignations with tact and grace is no longer merely a desirable quality, but an essential corporate skill today.

We do have some general strategies to deal with resignations. The first step is to assess the motivation behind the career choice. Is it money? If so, a counter offer is usually successful. Mind you, counter offers (both making them and taking them) are widely considered ineffective and in poor taste. At least, executive search firms insist that they are. But then, they would say that, wouldn’t they?

If the motivation behind the resignation is the nature of the current or future job and its challenges, a lateral movement or reassignment (possibly combined with a counter offer) can be effective. If everything fails, then it is time to bid goodbye, amicably.

It is vitally important to maintain this amicability, a fact often lost on bosses and HR departments. Understandably so because, by the time the counter offer negotiations fail, there is enough bitterness on both sides to sour the relationship. Brush those wounded feelings aside and smile through your pain, for your paths may cross again. You may rehire the same person. Or, you may end up working with him/her on the other side. Salvage whatever little you can for the sake of positive networking.

The level of amicability depends on corporate culture. Some organizations are so cordial with deserting employees that they almost encourage desertion. Others treat the traitors as the army used to: with the help of a firing squad.

Both these extremes come with their associated perils. If you are too cordial, your employees may treat your organization as a stepping stone, concentrating on acquiring only transferable skills. On the other extreme, if you develop a reputation for severe exit barriers in an attempt to discourage potential traitors, you may also find it hard to recruit top talent.

The right approach lies somewhere in between, like most good things in life. It is a cultural choice that an organization has to make. But regardless of where the balance is found, resignation is here to stay, and people will change jobs. Change, as the much overused cliché puts it, is the only constant.

Summing Up

In a global market that demands ever more customization and structuring, good quants are subject to an enormous pull factor. Quant talent management (acquisition and retention) is almost as challenging as developing quant skills yourself.

While powerless against the pull factor, banks and financial institutions should look into eliminating hidden push factors. Develop respect and appreciation for hard-to-replace talents. Invent innovative performance measurement metrics. Introduce fair and transparent compensation schemes.

When it all fails and the talent you so long to retain leaves, handle it with tact and grace. At some point in the future, you may have to hire them. Or worse, you may want to get hired by them!

Benford and Your Taxes

Nothing is certain but death and taxes, they say. On the death front, we are making some inroads with all our medical marvels, at least in postponing it if not actually avoiding it. But when it comes to taxes, we have no defense other than a bit of creativity in our tax returns.

Let’s say Uncle Sam thinks you owe him $75k. In your honest opinion, the fair figure is about the $50k mark. So you comb through your tax deductible receipts. After countless hours of hard work, you bring the number down to, say, $65k. As a quant, you can estimate the probability of an IRS audit. And you can put a number (an expectation value in dollars) to the pain and suffering that can result from it.

Let’s suppose that you calculate the risk of a tax audit to be about 1% and decide that it is worth the risk to get creative in your deduction claims to the tune of $15k. You send in the tax return and sit tight, smug in the knowledge that the odds of your getting audited are fairly slim. You are in for a big surprise. You will get well and truly fooled by randomness, and the IRS will almost certainly want to take a closer look at your tax return.
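To make the gamble concrete, here is a back-of-the-envelope expected-value sketch in Python. The $15k deduction and the 1% audit probability come from the scenario above; the tax rate and the penalty figure are hypothetical placeholders, not real IRS numbers.

```python
# Back-of-the-envelope expected value of the "creative deduction" gamble.
# The $15k extra deduction and 1% audit probability are from the scenario
# above; the tax rate and penalty are hypothetical placeholders.
tax_rate = 0.30               # assumed marginal tax rate
extra_deduction = 15_000      # the creative part of the claim, in dollars
audit_probability = 0.01      # the naive estimate, before Benford enters
penalty_if_caught = 100_000   # hypothetical all-in cost of a failed audit

expected_gain = tax_rate * extra_deduction
expected_loss = audit_probability * penalty_if_caught

print(f"expected gain: ${expected_gain:,.0f}")
print(f"expected loss: ${expected_loss:,.0f}")
# The gamble looks favorable under these numbers -- until Benford's law
# pushes the real audit probability far above the naive 1% estimate.
```

Under these made-up numbers the trade looks attractive, which is exactly the trap the next paragraphs describe.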

The calculated creativity in tax returns seldom pays off. Your calculations of expected pain and suffering are never consistent with the frequency with which the IRS audits you. The probability of an audit is, in fact, much higher if you try to inflate your tax deductions. You can blame Benford for this skew in probability stacked against you.


Benford presented something very counter-intuitive in his article [1] in 1938. He asked the question: What is the distribution of the first digits in any numeric, real-life data? At first glance, the answer seems obvious. All digits should have the same probability. Why would there be a preference to any one digit in random data?

Figure 1. The frequency of occurrence of the first digits in the notional amounts of financial transactions. The purple curve is the predicted distribution. Note that the slight excesses at 1 and 5 above the purple curve are expected because people tend to choose notionals like 1/5/10/50/100 million. The excess at 8 is also expected because it is considered a lucky number in Asia.

Benford showed that the first digit in a naturally occurring number is much more likely to be 1 than any other digit. In fact, each digit has a specific probability of being in the first position. The digit 1 has the highest probability; the digit 2 is about 40% less likely to be in the first position, and so on. The digit 9 has the lowest probability of all; it is about six times less likely than 1 to be in the first position.

When I first heard of this first digit phenomenon from a well-informed colleague, I thought it was weird. I would have naively expected to see roughly the same frequency of occurrence for all digits from 1 to 9. So I collected a large amount of financial data, about 65,000 numbers (as many as Excel would permit), and looked at the first digits. I found Benford to be absolutely right, as shown in Figure 1.

The probability of the first digit is pretty far from uniform, as Figure 1 shows. The distribution is, in fact, logarithmic. The probability of any digit d is given by log10(1 + 1/d), which is the purple curve in Figure 1.
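The nine probabilities are easy to tabulate. Here is a quick Python sketch (my own, not part of the original Excel exercise) of the base-10 Benford formula:

```python
import math

# Benford's law: the probability that the first digit is d equals
# log10(1 + 1/d), for d = 1 through 9.
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

for d, p in benford.items():
    print(f"digit {d}: {p:.1%}")

# Digit 1 comes out near 30.1% and digit 9 near 4.6%, roughly a factor
# of six apart; the nine probabilities sum to exactly 1.
```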

This skewed distribution is not an anomaly in the data that I happened to look at. It is the rule in any naturally occurring data. This is Benford’s law. Benford collected a large number of naturally occurring data sets (including populations, areas of rivers, physical constants, numbers from newspaper reports and so on) and showed that this empirical law is respected.


As a quantitative developer, I tend to simulate things on a computer with the hope that I may be able to see patterns that will help me understand the problem. The first question to settle in the simulation is what the probability distribution of a vague quantity like naturally occurring numbers would be. Once I have the distribution, I can generate numbers and look at the first digits to see their frequency of occurrence.

To a mathematician or a quant, there is nothing more natural than the natural logarithm. So the first candidate distribution for naturally occurring numbers is something like RV exp(RV), where RV is a uniformly distributed random variable (between zero and ten). The rationale behind this choice is the assumption that the number of digits in naturally occurring numbers is uniformly distributed between zero and an upper limit.

Indeed, you can choose other, fancier distributions for naturally occurring numbers. I tried a couple of other candidate distributions using two uniformly distributed (between zero and ten) random variables RV1 and RV2: RV1 exp(RV2) and exp(RV1+RV2). All these distributions turn out to be good guesses for naturally occurring numbers, as illustrated in Figure 2.
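For the curious, this kind of simulation is easy to replay. Here is a minimal Python sketch (an illustrative re-creation, not the original spreadsheet) of the RV1 exp(RV2) candidate:

```python
import math
import random
from collections import Counter

random.seed(42)  # fixed seed for reproducibility

def first_digit(x):
    """Leading decimal digit of a positive number."""
    return int(f"{x:.15e}"[0])  # scientific notation: first char is the digit

# One candidate model for "naturally occurring" numbers: RV1 * exp(RV2),
# with RV1 and RV2 uniform on (0, 10), as described in the text.
samples = [random.uniform(0, 10) * math.exp(random.uniform(0, 10))
           for _ in range(100_000)]

counts = Counter(first_digit(x) for x in samples)
for d in range(1, 10):
    observed = counts[d] / len(samples)
    predicted = math.log10(1 + 1 / d)
    print(f"digit {d}: observed {observed:.3f}, Benford {predicted:.3f}")
```

With 100,000 samples the observed frequencies land close to the log10(1 + 1/d) prediction, as Figure 2 shows for all three candidate distributions.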

Figure 2. The distribution of the first digits in the simulation of “naturally occurring” numbers, compared to the prediction.

The first digits of the numbers that I generated follow Benford’s law to an uncanny degree of accuracy. Why does this happen? One good thing about computer simulation is that you can dig deeper and look at intermediate results. For instance, in our first simulation with the distribution RV exp(RV), we can ask the question: What are the values of RV for which we get a certain first digit? The answer is shown in Figure 3a. Note that the ranges in RV that give the first digit 1 are much larger than those that give 9. About six times larger, in fact, as expected. Notice how the pattern repeats itself as the simulated natural numbers roll over from a first digit of 9 to 1 (like an odometer tripping over).

Figure 3a. The ranges in a uniformly distributed (between 0 and 10) random variable RV that result in different first digits in RV exp(RV). Note that the first digit of 1 occurs much more frequently than the rest, as expected.

A similar trend can be seen in our fancier simulation with two random variables. The regions in their joint distributions that give rise to various first digits in RV1 exp(RV2) are shown in Figure 3b. Notice the large swathes of deep blue (corresponding to the first digit of 1) and compare their area to the red swathes (for the first digit 9).

Figure 3b. The regions in the joint distribution of two uniformly distributed (between 0 and 10) random variables RV1 and RV2 that result in different first digits in RV1 exp(RV2).

This exercise gives me the insight I was hoping to glean from the simulation. The reason for the preponderance of smaller digits in the first position is that the distribution of naturally occurring numbers is usually a tapering one; there is usually an upper limit to the numbers, and as you get closer to the upper limit, the probability density becomes smaller and smaller. As you pass the first digit of 9 and roll over to 1, the range corresponding to 1 suddenly becomes much bigger.

While this explanation is satisfying, the surprising fact is that it doesn’t matter how the probability of natural distributions tapers off. It is almost like the central limit theorem. Of course, this little simulation is no rigorous proof. If you are looking for a rigorous proof, you can find it in Hill’s work [3].

Fraud Detection

Although our tax evasion troubles can be attributed to Benford, the first digit phenomenon was originally described in an article by Simon Newcomb [2] in the American Journal of Mathematics in 1881. It was rediscovered by Frank Benford in 1938, to whom all the glory (or the blame, depending on which side of the fence you find yourself) went. In fact, the real culprit behind our tax woes may have been Theodore Hill. He brought the obscure law to the limelight in a series of articles in the 1990s. He even presented a statistical proof [3] for the phenomenon.

In addition to causing our personal tax troubles, Benford’s law can play a crucial role in many other fraud and irregularity checks [4]. For instance, the first digit distribution in the accounting entries of a company may reveal bouts of creativity. Employee reimbursement claims, check amounts, salary figures, grocery prices: everything is subject to Benford’s law. It can even be used to detect market manipulations because the first digits of stock prices, for instance, are supposed to follow the Benford distribution. If they don’t, we have to be wary.
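As a sketch of how such a check might work in practice, here is a simple Pearson chi-squared test of first-digit counts against the Benford expectation. The data sets are toy examples of my own: powers of 2 (a classic Benford-conforming sequence) and fabricated amounts with uniformly distributed first digits.

```python
import math
from collections import Counter

def first_digit(x):
    """Leading decimal digit of a positive number."""
    return int(f"{abs(x):.15e}"[0])

def benford_chi2(amounts):
    """Pearson chi-squared statistic of the observed first-digit counts
    against the Benford expectation log10(1 + 1/d)."""
    counts = Counter(first_digit(a) for a in amounts if a != 0)
    n = sum(counts.values())
    return sum((counts[d] - n * math.log10(1 + 1 / d)) ** 2
               / (n * math.log10(1 + 1 / d)) for d in range(1, 10))

# Benford-conforming sample: the first digits of the powers of 2 are a
# classic example of a sequence obeying Benford's law.
clean = [2 ** k for k in range(1, 500)]

# Suspicious sample: fabricated amounts whose first digits are uniform,
# as a naive fraudster might generate them.
cooked = [d * 1000 + 37 for d in range(1, 10) for _ in range(55)]

print(f"clean  chi2 = {benford_chi2(clean):.1f}")   # small: conforms
print(f"cooked chi2 = {benford_chi2(cooked):.1f}")  # large: flagged
# With 8 degrees of freedom, a chi-squared value above ~20 is already
# strong evidence against Benford conformity.
```

A check like this is, as the text says, easy to implement and hard to circumvent.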


Figure 4. The joint distribution of the first and second digits in a simulation, showing correlation effects.

The moral of the story is simple: Don’t get creative in your tax returns. You will get caught. You might think that you can use this Benford distribution to generate a more realistic tax deduction pattern. But this job is harder than it sounds. Although I didn’t mention it, there is a correlation between the digits. The probability of the second digit being 2, for instance, depends on what the first digit is. Look at Figure 4, which shows the correlation structure in one of my simulations.
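The correlation in Figure 4 has a known closed form in the generalized Benford law: the probability that a number begins with the two-digit block d1 d2 is log10(1 + 1/(10·d1 + d2)). A short sketch of the resulting dependence between the first two digits:

```python
import math

# Generalized Benford: the probability that a number starts with the
# two-digit block d1 d2 (d1 = 1..9, d2 = 0..9) is log10(1 + 1/(10*d1 + d2)).
def p_pair(d1, d2):
    return math.log10(1 + 1 / (10 * d1 + d2))

def p_second_given_first(d2, d1):
    """Conditional probability of the second digit given the first."""
    return p_pair(d1, d2) / sum(p_pair(d1, k) for k in range(10))

# The second digit is not independent of the first:
print(f"P(2nd = 0 | 1st = 1) = {p_second_given_first(0, 1):.3f}")
print(f"P(2nd = 0 | 1st = 9) = {p_second_given_first(0, 9):.3f}")
# A second digit of 0 is noticeably more likely after a leading 1 than
# after a leading 9, which is the correlation visible in Figure 4.
```

So a forger who gets the first digits right but draws the second digits independently still leaves a statistical fingerprint.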

Besides, the IRS system is likely to be far more sophisticated. For instance, they could be using advanced data mining or pattern recognition systems such as neural networks or support vector machines. Remember that the IRS has labeled data (tax returns of those who unsuccessfully tried to cheat, and those of good citizens) and they can easily train classifier programs to catch budding tax evaders. If they are not using these sophisticated pattern recognition algorithms yet, trust me, they will after seeing this article. When it comes to taxes, randomness will always fool you because it is stacked against you.

But seriously, Benford’s law is a tool that we have to be aware of. It may come to our aid in unexpected ways when we find ourselves doubting the authenticity of all kinds of numeric data. A check based on the law is easy to implement and hard to circumvent. It is simple and fairly universal. So, let’s not try to beat Benford; let’s join him instead.

[1] Benford, F. “The Law of Anomalous Numbers.” Proc. Amer. Phil. Soc. 78, 551-572, 1938.
[2] Newcomb, S. “Note on the Frequency of the Use of Digits in Natural Numbers.” Amer. J. Math. 4, 39-40, 1881.
[3] Hill, T. P. “A Statistical Derivation of the Significant-Digit Law.” Stat. Sci. 10, 354-363, 1996.
[4] Nigrini, M. “I’ve Got Your Number.” J. Accountancy 187, 79-83, May 1999.



The Asian Tsunami two and a half years ago unleashed a tremendous amount of energy on the coastal regions around the Indian Ocean. What do you think would have happened to this energy if there had been no water to carry it away from the earthquake? I mean, if the earthquake (of the same kind and magnitude) had taken place on land instead of on the sea-bed as it did, presumably this energy would still have been present. How would it have manifested? As a more violent earthquake? Or a longer one?

I picture the earthquake (in cross-section) as a cantilever spring being held down and then released. The spring then transfers the energy to the tsunami in the form of potential energy, as an increase in the water level. As the tsunami radiates out, it is only the potential energy that is transferred; the water doesn’t move laterally, only vertically. As it hits the coast, the potential energy is transferred into the kinetic energy of the waves hitting the coast (water moving laterally then).

Given the magnitude of the energy transferred from the epicenter, I am speculating what would’ve happened if there was no mechanism for the transfer. Any thoughts?

Quant Life in Singapore

Singapore is a tiny city-state. Despite its diminutive size, Singapore has considerable financial muscle. It has been rated the fourth most active foreign exchange trading hub and a major wealth management center in Asia, with funds amounting to almost half a trillion dollars, according to the Monetary Authority of Singapore. This mighty financial clout has its origins in a particularly pro-business atmosphere, world-class (well, better than world-class, in fact) infrastructure, and the highly skilled, cosmopolitan workforce, all of which Singapore is rightfully proud of.

Among the highly skilled workforce are scattered a hundred or so typically timid and self-effacing souls with bulging foreheads and dreamy eyes behind thick glasses. They are the Singaporean quants, and this short article is their story.

Quants command enormous respect for their intellectual prowess and mathematical knowledge. With flattering epithets like “rocket scientists” or simply “the brain,” quants silently go about their jobs of validating pricing models, writing C++ programs and developing complicated spreadsheet solutions.

But knowledge is a tricky thing to have in Asia. If you are known for your expertise, it can backfire on you at times. Unless you are careful, others will take advantage of your expertise and dump their responsibilities on you. You may not mind it as long as they respect your expertise. But, they often hog the credit for your work and present their ability to evade work as people management skills. And people managers (who may not actually know much) do get better compensated. This paradox is a fact of quant life in Singapore. The admiration that quants enjoy does not always translate to riches here.

This disparity in compensation may be okay. Quants are not terribly interested in money for one logical reason–in order to make a lot of it, you have to work long hours. And if you work long hours, when do you get to spend the money? What does it profit a man to amass all the wealth in the world if he doesn’t have the time to spend it?

Besides, quants seem to play by a different set of rules. They are typically perfectionist by nature. At least, I am, when it comes to certain aspects of work. I remember once when I was writing my PhD thesis, I started the day at around nine in the morning and worked all the way past midnight with no break. No breakfast, lunch or dinner. I wasn’t doing ground-breaking research on that particular day, just trying to get a set of numbers (branching ratios, as they were called) and their associated errors consistent. Looking back at it now, I can see that one day of starvation was too steep a price to pay for the consistency.

Similar bouts of perfectionism might grip some of us from time to time, forcing us to invest inordinate amounts of work for incremental improvements, and propelling us to higher levels of glory. The frustrating thing from the quants’ perspective is when the glory gets hogged by a middle-level people manager. It does happen, time and again. The quants are then left with little more than their flattering epithets.

I’m not painting all people managers with the same unkindly stroke; not all of them have been seduced by the dark side of the force. But I know some of them who actively hone their ignorance as a weapon. They plead ignorance to pass their work on to other unsuspecting worker bees, including quants.

The best thing a quant can hope for is fair compensation for his hard work. Money may not be important in and of itself, but what it says about you and your station in the corporate pecking order may be of interest. Empty epithets are cheap, but when it comes to showing real appreciation, hard cash is what matters, especially in our line of work.

Besides, corporate appreciation breeds confidence and a sense of self-worth. I feel that confidence is lacking among Singaporean quants. Some of them are really among the cleverest people I have met. And I have traveled far and wide and met some very clever people indeed. (Once I was in a CERN elevator with two Nobel laureates, as I will never tire of mentioning.)

This lack of confidence, and not lack of expertise or intelligence, is the root cause behind the dearth of quality work coming out of Singapore. We seem to keep ourselves happy with fairly mundane and routine tasks of implementing models developed by superior intelligences and validating the results.

Why not take a chance and dare to be wrong? I do it all the time. For instance, I think that there is something wrong with a Basel II recipe, and I am going to write an article about it. I have published a physics article in a well-respected physics journal implying, among other things, that Einstein himself may have been slightly off the mark! See for yourself.

Asian quants are the ones closest to the Asian market. For structures and products specifically tailored to this market, how come we don’t develop our own pricing models? Why do we wait for the Mertons and Hulls of the world?

In our defense, maybe some of the confident ones who do develop pricing models move out of Asia. The CDO guru David Li is a case in point. But, on the whole, the intellectual contribution to modern quantitative finance looks disproportionately lopsided in favor of the West. This may change in the near future, when the brain banks in India and China open up and smell blood in this niche field of ours.

Another quality that is missing among us Singaporean parishioners is an appreciation of the big picture. Clichés like the “Big Picture” and the “Value Chain” have been overused by the afore-mentioned middle-level people managers on techies (a category of dubious distinction into which we quants also fall, to our constant chagrin) to devastating effect. Such phrases have rained terror on techies and quants and relegated them to demoralizing assignments with challenges far below their intellectual potential.

Maybe it is a sign of my underestimating the power of the dark side, but I feel that the big picture is something we have to pay attention to. Quants in Singapore seem to do what they are asked to do. They do it well, but they do it without questioning. We should be more aware of the implications of our work. If we recommend Monte Carlo as the pricing model for a certain option, will the risk oversight manager be in a pickle because his VaR report takes too long to run? If we suggest capping methods to renormalize divergent sensitivities of certain products due to discontinuities in their payoff functions, how will we affect the regulatory capital charges? Will our financial institution stay compliant? Quants may not be expected to know all these interconnected issues. But an awareness of such connections may add value (gasp, another managerial phrase!) to our office in the organization.

For all these reasons, we in Singapore end up importing talent. This practice opens up another can of polemic worms. Are they compensated a bit too fairly? Do we get blinded by their impressive labels, while losing sight of their real level of talent? How does the generous compensation scheme for the foreign talents affect the local talents?

But these issues may be transitory. The Indians and Chinese are waking up, not just in terms of their economies, but also by unleashing their tremendous talent pool in an increasingly globalizing labor market. They (or should I say we?) will force a rethinking of what we mean when we say talent. The trickle of talent we see now is only the tip of the iceberg. Here is an illustration of what is in store, from a BBC report citing the Royal Society of Chemistry.

China Test

National test set by Chinese education authorities for pre-entry students: As shown in the figure, in the square prism ABCD-A_1B_1C_1D_1, AB=AD=2, DC=2\sqrt{3}, AA_1=\sqrt{3}, AD\perp DC, AC\perp BD, and the foot of the perpendicular is E.

  1. Prove: BD\perp A_1C
  2. Determine the angle between the two planes A_1BD and BC_1D
  3. Determine the angle formed by the skew lines AD and BC_1
UK Test

Diagnostic test set by an English university for first-year students: In the diagram (not drawn to scale), angle ABC is a right angle, AB = 3 m and BC = 4 m.

  1. What is the length AC?
  2. What is the area of triangle ABC (above)?
  3. What is the tan of the angle ABC (above) as a fraction?

The end result of such demanding pre-selection criteria is beginning to show in the quality of the research papers coming out of China and India. This talent show is not limited to fundamental research; applied fields, including our niche of quantitative finance, are also getting a fair dose of this oriental medicine.

Singapore will only benefit from this regional infusion of talent. Our young nation has an equally young (professionally, that is) quant team. We will have to improve our skills and knowledge. And we will need to be more vocal and assertive before the world notices us and acknowledges us. We will get there. After all, we are from Singapore–an Asian tiger used to beating the odds.

Photo by hslo

Universe – Size and Age

I posted this question, which was bothering me when I read that they had found a galaxy about 13 billion light years away. My understanding of that statement is: at a distance of 13 billion light years, there was a galaxy 13 billion years ago, so that we can see the light from it now. Wouldn’t that mean that the universe is at least 26 billion years old? It must have taken the galaxy about 13 billion years to reach where it appears to be, and the light from it must take another 13 billion years to reach us.

In answering my question, Martin and swansont (who I assume are academic physicists) point out my misconceptions and essentially ask me to learn more. All shall be answered when I’m assimilated, it would appear! 🙂

This debate is published as a prelude to my post on the Big Bang theory, coming up in a day or two.

Mowgli 03-26-2007 10:14 PM

Universe – Size and Age
I was reading a post stating that they found a galaxy at about 13 billion light years away. I am trying to figure out what that statement means. To me, it means that 13 billion years ago, this galaxy was where we see it now. Isn’t that what 13b LY away means? If so, wouldn’t that mean that the universe has to be at least 26 billion years old? I mean, the whole universe started from one singular point; how could this galaxy be where it was 13 billion years ago unless it had at least 13 billion years to get there? (Ignoring the inflationary phase for the moment…) I have heard people explain that the space itself is expanding. What the heck does that mean? Isn’t it just a fancier way of saying that the speed of light was smaller some time ago?
swansont 03-27-2007 09:10 AM


Originally Posted by Mowgli
(Post 329204)
I mean, the whole universe started from one singular point; how could this galaxy be where it was 13 billion years ago unless it had at least 13 billion years to get there? (Ignoring the inflationary phase for the moment…)

Ignoring all the rest, how would this mean the universe is 26 billion years old?


Originally Posted by Mowgli
(Post 329204)
I have heard people explain that the space itself is expanding. What the heck does that mean? Isn’t it just a fancier way of saying that the speed of light was smaller some time ago?

The speed of light is an inherent part of atomic structure, in the fine structure constant (alpha). If c was changing, then the patterns of atomic spectra would have to change. There hasn’t been any confirmed data that shows that alpha has changed (there has been the occasional paper claiming it, but you need someone to repeat the measurements), and the rest is all consistent with no change.
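As a quick aside (mine, not part of the thread): the fine structure constant swansont mentions can be computed directly from the standard constants, which makes his point concrete. A minimal sketch in Python, with CODATA values typed in by hand for illustration:

```python
import math

# CODATA values in SI units, typed in here for illustration
e = 1.602176634e-19      # elementary charge (C)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
hbar = 1.054571817e-34   # reduced Planck constant (J s)
c = 299792458.0          # speed of light (m/s)

# fine structure constant: alpha = e^2 / (4 pi eps0 hbar c)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)  # alpha ≈ 0.00730, 1/alpha ≈ 137.04
```

Since c appears explicitly in this formula, any change in c would show up as a change in alpha, and hence in atomic spectra, which is exactly swansont’s argument.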

Martin 03-27-2007 11:25 AM

To confirm or reinforce what swansont said, there are speculations and some fringe or nonstandard cosmologies that involve c (or alpha) changing over time, but the changing-constants idea just gets more and more ruled out. I’ve been watching for over 5 years, and the more people look and study the evidence, the LESS likely it seems that there is any change. They rule it out more and more accurately with their data. So it is probably best to ignore the “varying speed of light” cosmologies until one is thoroughly familiar with standard mainstream cosmology. You have some misconceptions, Mowgli:

  • General Relativity (the 1915 theory) trumps Special Rel (1905)
  • They don’t actually contradict if you understand them correctly, because SR has only a very limited local applicability, like to the spaceship passing by:-)
  • Wherever GR and SR SEEM to contradict, believe GR. It is the more comprehensive theory.
  • GR does not have a speed limit on the rate at which very great distances can increase. The only speed limit is on LOCAL stuff (you can’t catch up with and pass a photon)
  • So we can and DO observe stuff that is receding from us faster than c. (It’s far away, SR does not apply.)
  • This was explained in a Sci Am article, I think last year
  • Google the authors’ names: Charles Lineweaver and Tamara Davis
  • We know about plenty of stuff that is presently more than 14 billion LY away.
  • You need to learn some cosmology so you won’t be confused by these things.
  • Also, a “singularity” does not mean a single point. That is a popular mistake because the words SOUND the same.
  • A singularity can occur over an entire region, even an infinite region.

Also the “big bang” model doesn’t look like an explosion of matter whizzing away from some point. It shouldn’t be imagined like that. The best article explaining common mistakes people have is this Lineweaver and Davis thing in Sci Am. I think it was Jan or Feb 2005 but I could be a year off. Google it. Get it from your local library or find it online. Best advice I can give.

Mowgli 03-28-2007 01:30 AM

To swansont, on why I thought 13 b LY implied an age of 26 b years: When you say that there is a galaxy at 13 b LY away, I understand it to mean that 13 billion years ago, my time, the galaxy was at the point where I see it now (which is 13 b LY away from me). Knowing that everything started from the same point, it must have taken the galaxy at least 13 b years to get where it was 13 b years ago. So 13+13. I’m sure I must be wrong.

To Martin: You are right, I need to learn quite a bit more about cosmology. But a couple of things you mentioned surprise me – how do we observe stuff that is receding from us FTL? I mean, wouldn’t the relativistic Doppler shift formula give an imaginary 1+z? And the stuff beyond 14 b LY away – is it “outside” the universe? I will certainly look up and read the authors you mentioned. Thanks.
swansont 03-28-2007 03:13 AM


Originally Posted by Mowgli
(Post 329393)
To swansont on why I thought 13 b LY implied an age of 26 b years:When you say that there is a galaxy at 13 b LY away, I understand it to mean that 13 billion years ago my time, the galaxy was at the point where I see it now (which is 13 b LY away from me). Knowing that everything started from the same point, it must have taken the galaxy at least 13 b years to get where it was 13 b years ago. So 13+13. I’m sure I must be wrong.

That would depend on how you do your calibration. Looking only at a Doppler shift and ignoring all the other factors, if you know that speed correlates with distance, you get a certain redshift and you would probably calibrate that to mean 13b LY if that was the actual distance. That light would be 13b years old.

But as Martin has pointed out, space is expanding; the cosmological redshift is different from the Doppler shift. Because the intervening space has expanded, AFAIK the light that gets to us from a galaxy 13b LY away is not as old, because it was closer when the light was emitted. I would think that all of this is taken into account in the measurements, so that when a distance is given to the galaxy, it’s the actual distance.
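An aside from me, to make Mowgli’s “imaginary 1+z” worry concrete: the special-relativistic Doppler formula for radial recession is 1+z = \sqrt{(1+\beta)/(1-\beta)} with \beta = v/c. It diverges as \beta \to 1 and is undefined beyond, which is exactly why the recession of distant galaxies cannot be read off this formula; the cosmological redshift follows a different rule, as swansont says. A small sketch (my illustration, not from the thread):

```python
import math

def doppler_redshift(beta):
    """Special-relativistic redshift z for radial recession speed beta = v/c."""
    if not 0.0 <= beta < 1.0:
        raise ValueError("formula is only defined for 0 <= beta < 1")
    return math.sqrt((1 + beta) / (1 - beta)) - 1

for beta in (0.1, 0.5, 0.9, 0.99):
    print(beta, doppler_redshift(beta))
# z grows without bound as beta -> 1; beta >= 1 gives no real answer,
# so superluminal recession cannot be described by this formula at all.
```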

Martin 03-28-2007 08:54 AM


Originally Posted by Mowgli
(Post 329393)
I will certainly look up and read the authors you mentioned.

This post has 5 or 6 links to that Sci Am article by Lineweaver and Davis…965#post142965

It is post #65 on the Astronomy links sticky thread

It turns out the article was in the March 2005 issue.

I think it’s comparatively easy to read—well written. So it should help.

When you’ve read the Sci Am article, ask more questions—your questions might be fun to try and answer:-)

Twin Paradox – Take 2

The Twin Paradox is usually explained away by arguing that the traveling twin feels the motion because of his acceleration/deceleration, and therefore ages slower.

But what will happen if the twins both accelerate symmetrically? That is, they start from rest from one space point with synchronized clocks, and get back to the same space point at rest by accelerating away from each other for some time and decelerating on the way back. By the symmetry of the problem, it seems that when the two clocks are together at the end of the journey, at the same point, and at rest with respect to each other, they have to agree.

Then again, during the whole journey, each clock is in motion (accelerated or not) with respect to the other one. In SR, every clock that is in motion with respect to an observer’s clock is supposed to run slower. Or, the observer’s clock is always the fastest. So, for each twin, the other clock must be running slower. However, when they come back together at the end of the journey, they have to agree. This can happen only if each twin sees the other’s clock running faster at some point during the journey. What does SR say will happen in this imaginary journey?

(Note that the acceleration of each twin can be made constant. Have the twins cross each other at a high speed at a constant linear deceleration. They will cross each other again at the same speed after some time. During the crossings, their clocks can be compared.)
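The symmetry argument can be checked numerically from the initial rest frame, where SR is unambiguous: each twin’s elapsed proper time is \tau = \int \sqrt{1 - v(t)^2/c^2}\, dt. A minimal sketch, with a smooth out-and-back velocity profile of my own choosing (units where c = 1):

```python
import math

def proper_time(velocity, t_total, steps=100_000):
    """Integrate d(tau) = sqrt(1 - v(t)^2) dt along a velocity profile (c = 1)."""
    dt = t_total / steps
    return sum(
        math.sqrt(1.0 - velocity((i + 0.5) * dt) ** 2) * dt for i in range(steps)
    )

def twin_a(t):
    # out and back: v(0) = v(1) = 0, and the displacement integrates to zero
    return 0.8 * math.sin(2 * math.pi * t)

def twin_b(t):
    return -twin_a(t)  # the mirror-image journey

tau_a = proper_time(twin_a, 1.0)
tau_b = proper_time(twin_b, 1.0)
print(tau_a, tau_b)  # equal by symmetry; both less than the coordinate time 1.0
```

In this frame, both twins age identically (and less than a stay-at-home observer), so their clocks do agree at the reunion. The subtlety the post raises is what each accelerating twin infers about the other mid-journey, where a naive application of the moving-clocks-run-slow rule breaks down.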

Unreal Time

Farsight wrote: Time is a velocity-dependent subjective measure of event succession rather than something fundamental – the events mark the time, the time doesn’t mark the events. This means the stuff out there is space rather than space-time, and is an “aether” veiled by subjective time.

I like your definition of time. It is close to my own view that time is “unreal.” It is possible to treat space as real and space-time as something different, as you do. This calls for some careful thought. I will outline my thinking in this post and illustrate it with an example, if my friends don’t pull me out for lunch before I can finish. :)

The first question we need to ask ourselves is why space and time seem coupled. The answer is actually too simple to spot, and it is in your definition of time. Space and time mix through our concept of velocity and our brain’s ability to sense motion. There is an even deeper connection, which is that space is a cognitive representation of the photon inputs to our eyes, but we will get to that later.

Let’s assume for a second that we had a sixth sense that operated at infinite speed. That is, if a star explodes a million light years from us, we can sense it immediately. We will see it only after a million years, but we sense it instantly. I know, it is a violation of SR, cannot happen and all that, but stay with me for a second. Now, a little bit of thinking will convince you that the space we sense using this hypothetical sixth sense is Newtonian. Here, space and time can be completely decoupled, absolute time can be defined, and so on. Starting from this space, we can actually work out how we will see it using light and our eyes, knowing that the speed of light is what it is. It will turn out, clearly, that we see events with a delay. That is a first order (or static) effect. The second order effect is the way we perceive objects in motion. It turns out that we will see a time dilation and a length contraction (for objects receding from us).
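These first and second order effects can be made concrete with a toy calculation. For purely radial motion at speed v, time-stamping the object by when its light arrives gives an apparent speed of v/(1 + v/c) when it recedes and v/(1 − v/c) when it approaches: the derivation is two lines (position x = d ± vt, arrival time t + x/c). A sketch in units where c = 1 (my illustration, not part of the original post):

```python
C = 1.0  # units where c = 1

def apparent_speed(v, receding=True):
    """Apparent radial speed inferred from light arrival times.
    A receding object at x(t) = d + v*t is seen at arrival time
    t_arr = t + x(t)/C, so dx/dt_arr = v / (1 + v/C); approaching flips the sign.
    """
    return v / (1 + v / C) if receding else v / (1 - v / C)

v = 0.8
print(apparent_speed(v, receding=True))   # 0.444...: receding objects appear slower
print(apparent_speed(v, receding=False))  # 4.0: approaching objects can appear superluminal
```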

Let me illustrate it a little further using echolocation. Assume that you are a blind bat. You sense your space using sonar pings. Can you sense a supersonic object? If it is coming towards you, by the time the reflected ping reaches you, it has gone past you. If it is going away from you, your pings can never catch up. In other words, faster than sound travel is “forbidden.” If you make one more assumption – the speed of the pings is the same for all bats regardless of their state of motion – you derive a special relativity for bats where the speed of sound is the fundamental property of space and time!

We have to dig a little deeper and appreciate that space is no more real than time. Space is a cognitive construct created out of our sensory inputs. If the sense modality (light for us, sound for bats) has a finite speed, that speed will become a fundamental property of the resultant space. And space and time will be coupled through the speed of the sense modality.

This, of course, is only my own humble interpretation of SR. I wanted to post this on a new thread, but I get the feeling that people are a little too attached to their own views in this forum to be able to listen.

Leo wrote: Minkowski spacetime is one interpretation of the Lorentz transforms, but other interpretations, the original Lorentz-Poincaré Relativity or modernized versions of it with a wave model of matter (LaFreniere or Close or many others), work in a perfectly Euclidean 3D space.

So we end up with process slowdown and matter contraction, but NO time dilation or space contraction. The transforms are the same though. So why does one interpretation lead to tensor metric while the others don’t? Or do they all? I lack the theoretical background to answer the question.

Hi Leo,

If you define LT as a velocity dependent deformation of an object in motion, then you can make the transformation a function of time. There won’t be any warping and complications of metric tensors and stuff. Actually what I did in my book is something along those lines (though not quite), as you know.

The trouble arises when the transformation matrix is a function of the vector it is transforming. So, if you define LT as a matrix operation in a 4-D space-time, you can no longer make it a function of time through acceleration, any more than you can make it a function of position (as in a velocity field, for instance). The space-time warping is a mathematical necessity. Because of it, you lose coordinates, and the tools that we learn in our undergraduate years are no longer powerful enough to handle the problem.

Of Rotation, LT and Acceleration

In the “Philosophical Implications” forum, there was an attempt to incorporate acceleration into Lorentz transformation using some clever calculus or numerical techniques. Such an attempt will not work because of a rather interesting geometric reason. I thought I would post the geometric interpretation of Lorentz transformation (or how to go from SR to GR) here.

Let me start with a couple of disclaimers. First off, what follows is my understanding of LT/SR/GR. I post it here with the honest belief that it is right. Although I have enough academic credentials to convince myself of my infallibility, who knows? People much smarter than me get proven wrong every day. And, if we had our way, we would prove even Einstein himself wrong right here in this forum, wouldn’t we? :D Secondly, what I write may be too elementary for some of the readers, perhaps even insultingly so. I request them to bear with it, considering that some other readers may find it illuminating. Thirdly, this post is not a commentary on the rightness or wrongness of the theories; it is merely a description of what the theories say. Or rather, my version of what they say. With those disclaimers out of the way, let’s get started…

LT is a rotation in the 4-D space-time. Since it is not easy to visualize 4-D space-time rotation, let’s start with a 2-D, pure space rotation. One fundamental property of a geometry (such as 2-D Euclidean space) is its metric tensor. The metric tensor defines the inner product between two vectors in the space. In normal (Euclidean or flat) spaces, it also defines the distance between two points (or the length of a vector).

Though the metric tensor has the dreaded “tensor” word in its name, once you define a coordinate system, it is only a matrix. For Euclidean 2-D space with x and y coordinates, it is the identity matrix (two 1’s along the diagonal). Let’s call it G. The inner product between vectors A and B is A.B = Trans(A) G B, which works out to be a_1b_1+a_2b_2. Distance (or length of A) can be defined as \sqrt{A.A}.

So far in the post, the metric tensor looks fairly useless, only because it is the identity matrix for Euclidean space. SR (or LT), on the other hand, uses Minkowski space, which has a metric that can be written with [-1, 1, 1, 1] along the diagonal with all other elements zero – assuming time t is the first component of the coordinate system. Let’s consider a 2-D Minkowski space for simplicity, with time (t) and distance (x) axes. (This is a bit of over-simplification because this space cannot handle circular motion, which is popular in some threads.) In units that make c = 1, you can easily see that the invariant distance using this metric tensor is \sqrt{x^2 - t^2}.
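The two metrics can be compared in a few lines of code. With the Euclidean G, the inner product is the familiar a_1b_1 + a_2b_2; with the Minkowski metric, a Lorentz boost (a hyperbolic rotation by the rapidity \phi = \tanh^{-1}\beta) leaves x^2 - t^2 unchanged, just as an ordinary rotation leaves x^2 + y^2 unchanged. A sketch in units where c = 1:

```python
import math
import numpy as np

# Euclidean 2-D: the metric is the identity matrix
G_euclid = np.eye(2)
A = np.array([1.0, 2.0])
B = np.array([3.0, 4.0])
print(A @ G_euclid @ B)  # a1*b1 + a2*b2 = 11.0

# 2-D Minkowski space, coordinates (t, x), metric diag(-1, 1)
G_mink = np.diag([-1.0, 1.0])

def boost(beta):
    """Lorentz boost as a hyperbolic rotation by rapidity atanh(beta)."""
    phi = math.atanh(beta)
    ch, sh = math.cosh(phi), math.sinh(phi)
    return np.array([[ch, -sh], [-sh, ch]])

event = np.array([2.0, 5.0])       # an event (t, x)
interval = event @ G_mink @ event  # x^2 - t^2 = 21.0
boosted = boost(0.6) @ event
print(interval, boosted @ G_mink @ boosted)  # both 21.0: the interval is invariant
```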


The Unreal Universe — Discussion with Gibran

Hi again! You raise a lot of interesting questions. Let me try to answer them one by one.

You’re saying that our observations of an object moving away from us would look identical in either an SR or Galilean context, and therefore this is not a good test for SR.

What I’m saying is slightly different. The coordinate transformation in SR is derived considering only receding objects and sensing it using radar-like round trip light travel time. It is then assumed that the transformation laws thus derived apply to all objects. Because the round trip light travel is used, the transformation works for approaching objects as well, but not for things moving in other directions. But SR assumes that the transformation is a property of space and time and asserts that it applies to all moving (inertial) frames of reference regardless of direction.

We have to go a little deeper and ask ourselves what that statement means, what it means to talk about the properties of space. We cannot think of a space independent of our perception. Physicists are typically not happy with this starting point of mine. They think of space as something that exists independent of our sensing it. And they insist that SR applies to this independently existing space. I beg to differ. I consider space as a cognitive construct based on our perceptual inputs. There is an underlying reality that is the cause of our perception of space. It may be nothing like space, but let’s assume, for the sake of argument, that the underlying reality is like Galilean space-time. How would we perceive it, given that we perceive it using light (one-way travel of light, not two-way as SR assumes)? It turns out that our perceptual space would have time dilation and length contraction and all the other effects predicted by SR. So my thesis is that the underlying reality obeys Galilean space-time and our perceptual space obeys something like SR. (It is possible that if I assume that our perception uses two-way light travel, I may get SR-like transformation. I haven’t done it because it seems obvious to me that we perceive a star, for instance, by sensing the light from it rather than flashing a light at it.)

This thesis doesn’t sit well with physicists, and indeed with most people. They mistake “perceptual effects” to be something like optical illusions. My point is more like space itself is an illusion. If you look at the night sky, you know that the stars you see are not “real” in the sense that they are not there when you are looking at them. This is simply because the information carrier, namely light, has a finite speed. If the star under observation is in motion, our perception of its motion is distorted for the same reason. SR is an attempt to formalize our perception of motion. Since motion and speed are concepts that mix space and time, SR has to operate on “space-time continuum.” Since SR is based on perceptual effects, it requires an observer and describes motion as he perceives it.

But are you actually saying that not a single experiment has been done with objects moving in any other direction than farther away? And what about experiments on time dilation where astronauts go into space and return with clocks showing less elapsed time than ones that stayed on the ground? Doesn’t this support the ideas inherent in SR?

Experiments are always interpreted in the light of a theory. It is always a model based interpretation. I know that this is not a convincing argument for you, so let me give you an example. Scientists have observed superluminal motion in certain celestial objects. They measure the angular speed of the celestial object, and they have some estimate of its distance from us, so they can estimate the speed. If we didn’t have SR, there would be nothing remarkable about this observation of superluminality. Since we do have SR, one has to find an “explanation” for this. The explanation is this: when an object approaches us at a shallow angle, it can appear to come in quite a bit faster than its real speed. Thus the “real” speed is subluminal while the “apparent” speed may be superluminal. This interpretation of the observation, in my view, breaks the philosophical grounding of SR that it is a description of the motion as it appears to the observer.
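The shallow-angle explanation mentioned above has a standard closed form: for motion at speed \beta (in units of c) at angle \theta to the line of sight, the apparent sky-plane speed is \beta_{app} = \beta \sin\theta / (1 - \beta\cos\theta), which exceeds 1 for fast, nearly head-on motion even though \beta < 1. A sketch with illustrative numbers of my own choosing:

```python
import math

def apparent_transverse_speed(beta, theta):
    """Apparent transverse speed (units of c) for motion at speed beta
    at angle theta (radians) to the line of sight."""
    return beta * math.sin(theta) / (1 - beta * math.cos(theta))

# a subluminal jet at beta = 0.95, viewed at a shallow 10-degree angle
print(apparent_transverse_speed(0.95, math.radians(10)))  # ~2.6: looks "superluminal"
```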

Now, there are other observations where almost symmetric ejecta are seen on opposing jets in symmetric celestial objects. The angular speeds may indicate superluminality in both jets if the distance of the object is sufficiently large. Since the jets are assumed to be back-to-back, if one jet is approaching us (thereby giving us the illusion of superluminality), the other jet has to be receding and can never appear superluminal, unless, of course, the underlying motion is superluminal. The interpretation of this observation is that the distance of the object is limited by the “fact” that real motion cannot be superluminal. This is what I mean by experiments being open to theory or model based interpretations.

In the case of moving clocks being slower, it is never a pure SR experiment because you cannot find space without gravity. Besides, one clock has to be accelerated or decelerated and GR applies. Otherwise, the age-old twin paradox would apply.

I know there have been some experiments done to support Einstein’s theories, like the bending of light due to gravity, but are you saying that all of them can be consistently re-interpreted according to your theory? If this is so, it’s damn surprising! I mean, no offense to you – you’re obviously a very bright individual, and you know much more about this stuff than I do, but I’d have to question how something like this slipped right through physicists’ fingers for 100 years.

These are gravity related questions and fall under GR. My “theory” doesn’t try to reinterpret GR or gravity at all. I put theory in inverted quotes because, to me, it is a rather obvious observation that there is a distinction between what we see and the underlying causes of our perception. The algebra involved is fairly simple by physics standards.

Supposing you’re right in that space and time are actually Galilean, and that the effects of SR are artifacts of our perception. How then are the results of the Michelson-Morley experiments explained? I’m sorry if you did explain it in your book, but it must have flown right over my head. Or are we leaving this as a mystery, an anomaly for future theorists to figure out?

I haven’t completely explained MMX, more or less leaving it as a mystery. I think the explanation hinges on how light is reflected off a moving mirror, which I pointed out in the book. Suppose the mirror is moving away from the light source at a speed of v in our frame of reference. Light strikes it at a speed of c-v. What is the speed of the reflected light? If the laws of reflection should hold (it’s not immediately obvious that they should), then the reflected light has to have a speed of c-v as well. This may explain why MMX gives null result. I haven’t worked out the whole thing though. I will, once I quit my day job and dedicate my life to full-time thinking. :-)

My idea is not a replacement theory for all of Einstein’s theories. It’s merely a reinterpretation of one part of SR. Since the rest of Einstein’s edifice is built on this coordinate transformation part, I’m sure there will be some reinterpretation of the rest of SR and GR also based on my idea. Again, this is a project for later. My reinterpretation is not an attempt to prove Einstein’s theories wrong; I merely want to point out that they apply to reality as we perceive it.

Overall, it was worth the $5 I paid. Thanks for the good read. Don’t take my questions as an assault on your proposal – I’m honestly in the dark about these things and I absolutely crave light (he he). If you could kindly answer them in your spare time, I’d love to share more ideas with you. It’s good to find a fellow thinker to bounce cool ideas like this off of. I’ll PM you again once I’m fully done with the book. Again, it was a very satisfying read.

Thanks! I’m glad that you like my ideas and my writing. I don’t mind criticism at all. Hope I have answered most of your questions. If not, or if you want to disagree with my answers, feel free to write back. Always a pleasure to chat about these things even if we don’t agree with each other.

– Best regards,
– Manoj

Anti-relativity and Superluminality

Leo wrote: I have some problems with the introductory part though, when you confront light travel effects and relativistic transforms. You correctly state that all perceptual illusions have been cleared away in the conception of Special Relativity, but you also say that these perceptual illusions remained as a subconscious basis for the cognitive model of Special Relativity. Do I understand what you mean or do I get it wrong?

The perceptual effects are known in physics; they are called Light Travel Time effects (LTT, to cook up an acronym). These effects are considered an optical illusion on the motion of the object under observation. Once you take out the LTT effects, you get the “real” motion of the object. This real motion is supposed to obey SR. This is the current interpretation of SR.

My argument is that the LTT effects are so similar to SR that we should think of SR as just a formalization of LTT. (In fact, a slightly erroneous formalization.) There are many reasons for this argument:
1. We cannot disentangle the “optical illusion” because many underlying configurations give rise to the same perception. In other words, going from what we see to what is causing our perception is a one-to-many problem.
2. SR coordinate transformation is partially based on LTT effects.
3. LTT effects are stronger than relativistic effects.

Probably for these reasons, what SR does is to say that what we see is what it is really like. It then tries to mathematically describe what we see. (This is what I meant by a formalization.) Later on, when we figured out that LTT effects didn’t quite match SR (as in the observation of “apparent” superluminal motion), we thought we had to “take out” the LTT effects and then say that the underlying motion (or space and time) obeyed SR. What I’m suggesting in my book and articles is that we should just guess what the underlying space and time are like and work out what our perception of it will be (because going the other way is an ill-posed one-to-many problem). My first guess, naturally, was Galilean space-time. This guess results in rather neat and simple explanations of GRBs and DRAGNs as luminal booms and their aftermath.