Category Archives: Topical

Includes posts on physics, philosophy, sciences, quantitative finance, economics, environment etc.

House of Cards

We are in dire straits — no doubt about it. Our banks and financial edifices are collapsing. Those left standing also look shaky. The financial industry as a whole is battling to survive. And, as its front-line warriors, we will bear the brunt of the bloodbath sure to ensue any minute now.

Ominous as it looks now, this dark hour will pass, as all the ones before it. How can we avoid such dark crises in the future? We can start by examining the root causes, the structural and systemic reasons, behind the current debacle. What are they? In my series of posts this month, I went through what I thought were the lessons to learn from the financial crisis. Here is what I think will happen.

The notion of risk management is sure to change in the coming years. Risk managers will have to be compensated enough so that top talent doesn’t always drift away from it into risk taking roles. Credit risk paradigms will be reviewed. Are credit limits and ratings the right tools? Will Off Balance Sheet instruments stay off the balance sheet? How will we account for leveraging?

Regulatory frameworks will change. They will become more intrusive, but hopefully more transparent and honest as well.

Upper management compensation schemes may change, but probably not much. Despite what the techies at the bottom think, those who reach the top are smart. They will think of some innovative ways of keeping their perks. Don’t worry; there will always be something to look forward to, as you climb the corporate ladder.

Nietzsche may be right: what doesn’t kill us may eventually make us stronger. Hoping that this unprecedented financial crisis doesn’t kill us, let’s try to learn as much from it as possible.

Free Market Hypocrisy

Markets are not free, despite what the textbooks tell us. In mathematics, we verify the validity of equations by considering asymptotic or limiting cases. Let’s try the same trick on the statement about the markets being free.

If commodity markets were free, we would have no tariff restrictions, agricultural subsidies or other market-skewing mechanisms at play. Heck, cocaine and heroin would be freely available. After all, there are willing buyers and sellers for those drugs. Indeed, drug lords would be respectable citizens belonging in country clubs rather than gun-toting cartels.

If labor markets were free, nobody would need a visa to go and work anywhere in the world, “equal pay for equal work” would be a true ideal across the globe, and nobody would whine about jobs being exported to third-world countries.

Capital markets, at the receiving end of all the market turmoil of late, are highly regulated with capital adequacy and other Basel II requirements.

Derivatives markets, our neck of the woods, are a strange beast. They step in and out of the capital markets as convenient and muddle up everything so thoroughly that everybody needs us quants to explain it. We will get back to them in future columns.

So what exactly is free about the free market economy? It is free — as long as you deal in authorized commodities and products, operate within prescribed geographies, set aside as much capital as directed, and do not employ those you are not supposed to. By such creative redefinitions of terms like “free,” we can call even a high security prison free!

Don’t get me wrong. I wouldn’t advocate making all markets totally free. After all, opening the flood gates to the formidable Indian and Chinese talent can only adversely affect my salary levels. Nor am I suggesting that we deregulate everything and hope for the best. Far from it. All I am saying is that we need to be honest about what we mean by “free” in free markets, and understand and implement its meaning in a transparent way. I don’t know if it will help avoid a future financial meltdown, but it certainly can’t hurt.

Quant Culprits

Much has been said about the sins of the quants in their inability to model and price credit derivatives, especially Collateralized Debt Obligations (CDOs) and Mortgage Backed Securities (MBSs). In my opinion, it is not so much a quant failure. After all, if you have the market data (especially default correlations), credit derivatives are not all that hard to price.

The failure was really in understanding how inter-related credit and market risks were, given that they were independently managed using totally different paradigms. I think an overhaul is called for here, not merely in modeling and pricing credit risks, but also in the paradigms and practices used in managing them.

Ultimately, we have to understand how the whole lifecycle of a trade is managed, and how various business units in a financial institution interact with each other bearing one common goal in mind. It is this fascination of mine with the “big picture” that inspired me to write The Principles of Quantitative Development, to be published by Wiley Finance in 2010.

Where Credit is Due

While the market risk managers are getting grilled for the financial debacle we are in, the credit controllers are walking around with that smug look that says, “Told you so!” But systemic reasons for the financial turmoil hide in our credit risk management practices as well.

We manage credit risk in two ways — by demanding collateral or by allocating credit limits. In the consumer credit market, these correspond to secured lending (home mortgages, for instance) and unsecured loans (say, credit lines). The latter clearly involves more credit risk, which is why you pay obscene interest on outstanding balances.

In dealing with financial counterparties, we use the same two paradigms. Collateral credit management is generally safe because the collateral involved cannot be used for multiple credit exposures. But when we assign each counterparty a credit limit based on their credit ratings, we have a problem. While the credit rating of a bank or a financial institution may be accurate, it is almost impossible to know how much credit is loaded against that entity (because options and derivatives are “off balance sheet” instruments). This situation is akin to a bank’s inability to check how much you have drawn against your other credit lines, when it offers you an overdraft facility.

The end result is that even in good times, the leverage against the credit rating can be dangerously high without counterparties realizing it. The ensuing painful deleveraging takes place when a credit event (such as lowering of the credit rating) occurs.

Hedging Dilemma

Ever wonder why those airfares are quick to climb, but slow to land? Well, you can blame the risk managers.

When the oil price hit $147 a barrel in July ’08, with all the pundits predicting sustained $200 levels, what would you have done if you were risk-managing an airline’s exposure to fuel? You would have run and paid an arm and a leg to hedge it. Hedging would essentially fix the price for your company around the $150 level, no matter how the market moved. Now you sit back and relax, happy in the knowledge that you saved your firm potentially millions of dollars.

Then, to your horror, the oil price nosedives, and your firm is paying $100 more than it should for each barrel of oil. (Of course, airlines don’t buy WTI, but you know what I mean.) So, thanks to the risk managers’ honest work, airlines (and even countries) are now handing over huge sums of money to energy traders. Would you rather be a trader or a risk manager?
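To make the arithmetic of this scenario concrete, here is a toy sketch comparing hedged and unhedged fuel bills as the spot price moves. The volumes and prices are hypothetical illustrations, not any airline’s actual exposure.

```python
# Toy hedging arithmetic (hypothetical numbers): an airline locks in fuel
# at $150 a barrel, then watches the spot price collapse.
def fuel_cost(barrels, spot, hedged_price=None):
    """Fuel bill for the period: a hedged buyer pays the locked-in price
    regardless of where the spot price goes."""
    price = hedged_price if hedged_price is not None else spot
    return barrels * price

barrels = 1_000_000  # illustrative annual volume
for spot in (147, 100, 47):
    unhedged = fuel_cost(barrels, spot)
    hedged = fuel_cost(barrels, spot, hedged_price=150)
    print(f"spot ${spot}: unhedged ${unhedged:,}, hedged ${hedged:,}, "
          f"hedge saves ${unhedged - hedged:,}")
```

At $147 the hedge looks prudent; at $47 the same hedge hands over $100 a barrel to the counterparty — the risk manager’s dilemma in three lines of arithmetic.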

And, yes, the airfares will come down, but not before the risk managers take their due share of flak.

Risky Business

Just as 9/11 was more an intelligence failure than a security lapse, the subprime debacle is a risk management breakdown, not merely a regulatory shortcoming. To do anything useful with this rather obvious insight, we need to understand why risk management failed, and how to correct it.

Risk management should be our first line of defense — it is a preventive mechanism, while regulatory framework (which also needs beefing up) is a curative, reactive second line.

The first reason for the inadequacy of risk management is the lack of glamour the risk controllers in a financial institution suffer from, when compared to their risk taking counterparts. (Glamour is a euphemism for salary.) If a risk taker does his job well, he makes money. He is a profit centre. On the other hand, if a risk controller does his job well, he ensures that the losses are not disproportionate. But in order to limit the downside, the risk controller has to limit the upside as well.

In a culture based on performance incentives, and where performance is measured in terms of profit, we can see why the risk controller’s job is sadly under-appreciated and under-compensated.

This imbalance has grave implications. It is the conflict between the risk takers and risk managers that enforces the corporate risk appetite. If the gamblers are being encouraged directly or indirectly, it is an indication of where the risk appetite lies. The question then is, was the risk appetite a little too strong?

The consequences of the lack of equilibrium between the risk manager and the risk taker are equally troubling. The smarter ones among the risk management group slowly migrate to “profit generating” (read trading or Front Office) roles, thereby exacerbating the imbalance.

The talent migration and the consequent lack of control are not confined merely within the walls of a financial institution. Even regulatory bodies could not compete with the likes of Lehman Brothers when hunting for top talent. The net result was that when the inevitable meltdown finally began, we were left with inadequate risk management and regulatory defenses.

Ambition vs. Greed

Growing up in a place like India, I was told early in life that ambition was a bad thing to have. It had a negative connotation closer to greed than to drive. I suspect this connotation was rather universal at some point in time. Why else would Mark Antony harp on Brutus calling Caesar ambitious?

Greed, or its euphemistic twin ambition, probably had some role to play in the pain and suffering of the current financial turmoil and the unfolding economic downturn. But it is not just the greed of Wall Street. Let’s get real. Jon Stewart may poke fun at the twenty-something commodity trader earning his thirty-million-dollar bonus by pushing virtual nothingness around, but nobody complained when they were (or thought they were) making money. Greed is not confined to those who ran fifty-billion-dollar Ponzi schemes; it is also in those who put their (and other people’s) money into such schemes expecting too-good-to-be-true rates of return. They were made of the same stuff.

Let’s be honest about it. We in the financial industry are in the business of making money, for others and for ourselves. We don’t get into this business for philanthropic or spiritual reasons. We get into it because we like the rewards. Because we know that “how to get rich quick” or “how to get even richer” is the easiest sell of all.

We hear a lot about how the CEOs and other fat cats made a lot of money while other normal folks suffered. It is true that the profits were “private” while the losses are public, which is probably why the bailout plan did not get much popular support. But with or without the public support, bailout plan or not, like it or not, the pain is going to be public.

Sure, the CEOs of financial institutions with their private jets and eye-popping bonuses were guilty of ambition, but the fat cats didn’t all work in a bank or a hedge fund. It is the legitimization of greed that fueled this debacle, and nobody is innocent of it.

The Big Bang Theory – Part II

After reading a paper by Ashtekar on quantum gravity and thinking about it, I realized what my trouble with the Big Bang theory was. It is more with the fundamental assumptions than with the details. I thought I would summarize my thoughts here, more for my own benefit than anybody else’s.

Classical theories (including SR and QM) treat space as continuous nothingness; hence the term space-time continuum. In this view, objects exist in continuous space and interact with each other in continuous time.

Although this notion of the space-time continuum is intuitively appealing, it is, at best, incomplete. Consider, for instance, a spinning body in empty space. It is expected to experience centrifugal force. Now imagine that the body is stationary and the whole space is rotating around it. Will it experience any centrifugal force?

It is hard to see why there would be any centrifugal force if space is empty nothingness.

GR introduced a paradigm shift by encoding gravity into space-time thereby making it dynamic in nature, rather than empty nothingness. Thus, mass gets enmeshed in space (and time), space becomes synonymous with the universe, and the spinning body question becomes easy to answer. Yes, it will experience centrifugal force if it is the universe that is rotating around it because it is equivalent to the body spinning. And, no, it won’t, if it is in just empty space. But “empty space” doesn’t exist. In the absence of mass, there is no space-time geometry.

So, naturally, before the Big Bang (if there was one), there couldn’t be any space, nor indeed could there be any “before.” Note, however, that the Ashtekar paper doesn’t clearly state why there had to be a big bang. The closest it gets is that the necessity of BB arises from the encoding of gravity in space-time in GR. Despite this encoding of gravity and thereby rendering space-time dynamic, GR still treats space-time as a smooth continuum — a flaw, according to Ashtekar, that QG will rectify.

Now, if we accept that the universe started out with a big bang (and from a small region), we have to account for quantum effects. Space-time has to be quantized and the only right way to do it would be through quantum gravity. Through QG, we expect to avoid the Big Bang singularity of GR, the same way QM solved the unbounded ground state energy problem in the hydrogen atom.

What I described above is what I understand to be the physical arguments behind modern cosmology. The rest is a mathematical edifice built on top of this physical (or indeed philosophical) foundation. If you have no strong views on the philosophical foundation (or if your views are consistent with it), you can accept BB with no difficulty. Unfortunately, I do have differing views.

My views revolve around the following questions.

These posts may sound like useless philosophical musings, but I do have some concrete (and in my opinion, important) results, listed below.

There is much more work to be done on this front. But for the next couple of years, with my new book contract and pressures from my quant career, I will not have enough time to study GR and cosmology with the seriousness they deserve. I hope to get back to them once the current phase of spreading myself too thin passes.

Chaos and Uncertainty

The last couple of months in the finance industry can be summarized in two words — chaos and uncertainty. The aptness of this laconic description is all too evident. The sub-prime crisis where everybody lost, the dizzying commodity price movements, the pink slip syndrome, the spectacular bank busts and the gargantuan bail-outs all vouch for it.

The financial meltdown is such a rich topic with reasons and ramifications so overarching that all self-respecting columnists will be remiss to let it slide. After all, a columnist who keeps his opinions to himself is a columnist only in his imagination. I too will share my views on causes and effects of this turmoil that is sure to affect our lives more directly than anybody else’s, but perhaps in a future column.

The chaos and uncertainty I want to talk about are of a different kind — the physics kind. The terms chaos and uncertainty have different and specific meanings in physics. How those meanings apply to the world of finance is what this column is about.

Symmetries and Patterns

Physicists are a strange bunch. They seek and find symmetries and patterns where none exist. I remember once when our brilliant professor, Lee Smolin, described to us how the Earth could be considered a living organism. Using insightful arguments and precisely modulated articulation, Lee made a compelling case that the Earth, in fact, satisfied all the conditions of being an organism. The point in Lee’s view was not so much whether or not the Earth was literally alive, but that thinking of it as an organism was a viable intellectual pattern. Once we represent the Earth in that model, we can use the patterns pertaining to organisms to draw further predictions or conclusions.

Expanding on this pattern, I recently published a column presenting global warming as a bout of fever caused by a virus (us humans) on this host organism. Don’t we plunder the raw material of our planet with the same abandon with which a virus usurps the genetic material of its host? In addition to fever, typical viral symptoms include sores and blisters as well. Looking at the cities and other eyesores that have replaced pristine forests and other natural landscapes, it is not hard to imagine that we are indeed inflicting fetid atrocities on our host Earth. Can’t we think of our city sewers and the polluted air as the stinking, oozing ulcers on its body?

While these analogies may sound farfetched, we have imported equally distant ideas from physics to mathematical finance. Why would stock prices behave anything like a random walk, unless we want to take Bush’s words (that “Wall Street got drunk”) literally? But seriously, Brownian motion has been a wildly successful model that we borrowed from physics. Again, once we accept that the pattern is similar between molecules getting bumped around and the equity price movements, the formidable mathematical machinery and physical intuitions available in one phenomenon can be brought to bear on the other.
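Since the random walk is the model actually borrowed from physics here, a minimal sketch may help. The standard form used for equity prices is geometric Brownian motion; the drift and volatility numbers below are illustrative assumptions, not calibrated to any market.

```python
# A minimal geometric Brownian motion sketch -- the "random walk" model
# imported from physics into equity pricing. Parameters are illustrative.
import math
import random

def gbm_path(s0=100.0, mu=0.05, sigma=0.2, dt=1 / 252, steps=252, seed=42):
    """Simulate one year of daily prices: each step multiplies the price by
    exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z), with Z a standard normal."""
    rng = random.Random(seed)
    s = s0
    path = [s]
    for _ in range(steps):
        z = rng.gauss(0.0, 1.0)
        s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        path.append(s)
    return path

path = gbm_path()
print(f"start {path[0]:.2f}, end {path[-1]:.2f}, low {min(path):.2f}")
```

The exponential form is what keeps prices positive — the same mathematics that describes a pollen grain being bumped by molecules, re-purposed for bumped-around equity prices.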

Looking at the chaotic financial landscape now, I wonder if physics has other insights to offer so that we can duck and dodge as needed in the future. Of the many principles from physics, chaos seems such a natural concept to apply to the current situation. Are there lessons to be learned from chaos and nonlinear dynamics that we can make use of? Maybe it is Heisenberg’s uncertainty principle that holds new insights.

Perhaps I chose these concepts as a linguistic or emotional response to the baffling problems confronting us now, but let’s look at them anyway. It is not like the powers that be have anything better to offer, is it?

Chaos Everywhere

In physics, chaos is generally described as our inability to predict the outcome of experiments with arbitrarily close initial conditions. For instance, try balancing your pencil on its tip. Clearly, you won’t be able to, and the pencil will land on your desktop. Now, note this line along which it falls, and repeat the experiment. Regardless of how closely you match the initial conditions (of how you hold and balance the pencil), the outcome (the line along which it falls) is pretty much random. Although this randomness may look natural to us — after all, we have been trying to balance pencils on their tips ever since we were four, if my son’s endeavours are anything to go by — it is indeed strange that we cannot bring the initial conditions close enough to be confident of the outcome.
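This sensitivity to initial conditions is easy to demonstrate with a standard toy chaotic system, the logistic map — chosen here because it is trivial to iterate, not because it models the pencil experiment itself.

```python
# Sensitive dependence on initial conditions, illustrated with the logistic
# map x -> r*x*(1-x) at r = 4, a textbook chaotic system.
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.300000000)
b = logistic_orbit(0.300000001)  # initial conditions differ by one part in a billion
# After a few dozen iterations the two orbits bear no resemblance to each other.
print(f"step 0 gap: {abs(a[0] - b[0]):.1e}, step 50 gap: {abs(a[50] - b[50]):.3f}")
```

A billionth of a difference at the start becomes an order-one difference within about forty iterations — the numerical analogue of the pencil falling along a different line every time.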

Even stranger is the fact that similar randomness shows up in systems that are not quite as physical as pencils or experiments. Take, for instance, the socio-economic phenomenon of globalization, which I can describe as follows, admittedly with an incredible amount of over-simplification. A long time ago, we used to barter agricultural and dairy products with our neighbours — say, a few eggs for a litre (or was it a pint?) of milk. Our self-interest ensured a certain level of honesty. We didn’t want to get beaten up for adding white paint to milk, for instance. These days, thanks to globalization, people don’t see their customers. A company buys milk from a farmer, adds god knows what, makes powder and other assorted chemicals in automated factories and ships them to New Zealand and Peru. The absence of a human face in the supply chain and in the flow of money results in increasingly unscrupulous behaviour.

Increasing chaos can be seen in the form of violently fluctuating concentrations of wealth and fortunes, increasing amplitudes and frequency of boom and bust cycles, exponential explosion in technological innovation and adaptation cycles, and the accelerated pace of paradigm shifts across all aspects of our lives.

It is one thing to say that things are getting chaotic, quite another matter to exploit that insight and do anything useful with it. I won’t pretend that I can predict the future even if (rather, especially if) I could. However, let me show you a possible approach using chaos.

One of the classic examples of chaos is the transition from a regular, laminar flow of a fluid to a chaotic, turbulent flow. For instance, when you open a faucet slowly, if you do it carefully, you can have a pretty nice continuous column of water, thicker near the top and stretched thinner near the bottom. The stretching force is gravity, and the cohesive forces are surface tension and inter-molecular forces. As you open the faucet still further, ripples begin to appear on the surface of the column which, at higher rates of flow, rip apart the column into complete chaos.

In a laminar flow, macroscopic forces tend to smooth out microscopic irregularities. Like gravity and surface tension in our faucet example, we have analogues of macroscopic forces in finance. The stretching force is probably greed, and the cohesive ones are efficient markets.

There is a rich mathematical framework available to describe chaos. Using this framework, I suspect one could predict the incidence and intensity of financial turmoil, though not its nature and causes. However, I am not sure such a prediction would be useful. Imagine if I had written two years ago that in 2008 there would be a financial crisis resulting in about a trillion dollars of losses. Even if people had believed me, would it have helped?

Usefulness is one thing, but physicists and mathematicians derive pleasure also from useless titbits of knowledge. What is interesting about the faucet-flow example is this: if you follow the progress of two water molecules starting off their careers pretty close to each other, in the laminar case, you will find that they end up pretty much next to each other. But once the flow turns turbulent, there is no telling where the molecules will end up. Similarly, in finance, suppose two banks start off roughly from the same position — say, Bear Stearns and Lehman. Under normal, laminar conditions, their stock prices would track similar patterns. But during financial turbulence, they end up in totally different recycle bins of history, as we have seen.

If whole financial institutions are tossed around into uncertain paths during chaotic times, imagine where two roughly similar employees might end up. In other words, don’t feel bad if you get a pink slip. There are forces well beyond your control at play here.

Uncertainty Principle in Quantitative Finance

The Heisenberg uncertainty principle is perhaps the second most popular theme from physics that has captured the public imagination. (The first one, of course, is Einstein’s E = mc².) It says something seemingly straightforward — you can measure two complementary properties of a system only to a certain precision. For instance, if you try to figure out where an electron is (measure its position, that is) more and more precisely, its speed becomes progressively more uncertain (or, the momentum measurement becomes imprecise).

Quantitative finance has a natural counterpart to the uncertainty principle — risks and rewards. When you try to minimize the risks, the rewards themselves go down. If you hedge out all risks, you get only risk-free returns. Since risk is the same as the uncertainty in rewards, the risk-reward relation is not quite the same as the uncertainty principle (which, as described in the box, deals with complementary variables), but it is close enough to draw some parallels.
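The risk-reward trade-off above can be illustrated with a toy portfolio that hedges a fraction h of a risky position. The rates and volatilities below are illustrative assumptions, and the linear scaling is the simplest possible model of a perfect hedge, not a market result.

```python
# Toy risk-reward trade-off: hedge a fraction h of a risky holding.
# Both the excess return and the volatility scale with the remaining
# exposure (1 - h), so hedging everything leaves only the risk-free rate.
def hedged_portfolio(h, risk_free=0.03, risky_mu=0.08, risky_sigma=0.2):
    """Return (expected return, volatility) for hedge ratio h in [0, 1].
    All parameters are illustrative assumptions."""
    exposure = 1.0 - h
    mu = risk_free + exposure * (risky_mu - risk_free)
    sigma = exposure * risky_sigma
    return mu, sigma

for h in (0.0, 0.5, 1.0):
    mu, sigma = hedged_portfolio(h)
    print(f"hedge {h:.0%}: expected return {mu:.1%}, volatility {sigma:.1%}")
```

At h = 1 the volatility is zero and the expected return collapses to the risk-free rate — the quantitative content of “if you hedge out all risks, you get only risk-free returns.”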

To link the quantum uncertainty principle to quantitative finance, let’s look at its interpretation as observation altering results. Does modelling affect how much money we can make out of a product? This is a trick question. The answer might look obvious at first glance. Of course, if we can understand and model a product perfectly, we can price it right and expect to reap healthy rewards. So, sure, modelling affects the risk-reward equation.

But, a model is only as good as its assumptions. And the most basic assumption in any model is that the market is efficient and liquid. The validity of this assumption (or lack thereof) is precisely what precipitated the current financial crisis. If our modelling effort actually changes the underlying assumptions (usually in terms of liquidity or market efficiency), we have to pay close attention to the quant equivalent of the uncertainty principle.

Look at it this way — a pyramid scheme is a perfectly valid money-making model, but it rests on one unfortunate assumption: an infinite supply of idiots at the bottom of the pyramid. (Come to think of it, the underlying assumption in the sub-prime crisis, though more sophisticated, may not have been all that different.) Similar pyramid assumptions can be seen in social security schemes as well. We know that pyramid assumptions are incorrect. But at what point do they become incorrect enough for us to change the model?
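A quick arithmetic check shows how fast the “infinite idiots” assumption breaks down. Suppose each participant must recruit six new members (a hypothetical fan-out, chosen only for illustration); the required headcount then grows geometrically and outruns the world’s population within a few levels.

```python
# Why pyramid schemes must collapse: with fan-out k, level n needs k**n new
# recruits, so cumulative participants grow geometrically.
def levels_until_collapse(k=6, population=8_000_000_000):
    """Count levels before cumulative participants exceed the population."""
    total, level, layer = 1, 0, 1
    while total <= population:
        layer *= k          # new recruits needed at the next level
        total += layer      # everyone who has joined so far
        level += 1
    return level

print(levels_until_collapse())  # with k=6, the supply of recruits runs out at level 13
```

Thirteen handshakes from the founder, the scheme needs more members than there are people — which is the point at which the model’s assumption becomes “incorrect enough.”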

There is an even more insidious assumption in using models — that we are the only ones who use them. In order to make a killing in a market, we always have to know a bit more than the rest of them. Once everybody starts using the same model, I think the returns will plummet to risk-free levels. Why else do you think we keep inventing more and more complex exotics?

Summing up…

The current financial crisis has been blamed on many things. One favourite theory has been that it was brought about by the greed in Wall Street — the so-called privatization of profits and socialization of losses. Incentive schemes skewed in such a way as to encourage risk taking and limit risk management must take at least part of the blame. A more tempered view regards the turmoil as a result of a risk management failure or a regulatory failure.

This column presents my personal view that the turmoil is the inevitable consequence of the interplay between opposing forces in financial markets — risk and reward, speculation and regulation, risk taking and risk management, and so on. To the extent that the risk appetite of a financial institution is implemented through a conflict between such opposing forces, these crises cannot be avoided. Worse, the intensity and frequency of similar meltdowns are going to increase as the volume of transactions increases. This is the inescapable conclusion from non-linear dynamics. After all, such turbulence has always existed in the real economy in the form of cyclical booms and busts. In free market economies, selfishness and the inherent conflicts between selfish interests provide the stretching and cohesive forces, setting the stage for chaotic turbulence.

Physics has always been a source of talent and ideas for quantitative finance, much like mathematics provides a rich toolkit to physics. In his book, Dreams of a Final Theory, Nobel Prize winning physicist Steven Weinberg marvels at the uncanny ability of mathematics to anticipate physics needs. Similarly, quants may marvel at the ability of physics to come up with phenomena and principles that can be directly applied to our field. To me, it looks like the repertoire of physics holds a few more gems that we can employ and exploit.

Box: Heisenberg’s Uncertainty Principle

Where does this famous principle come from? That is considered a question beyond the realm of physics. But before we can ask it, we have to examine what the principle really says. Here are a few possible interpretations:

  • Position and momentum of a particle are intrinsically interconnected. As we measure the momentum more accurately, the particle kind of “spreads out,” as George Gamow’s character, Mr. Tompkins, puts it. In other words, it is just one of those things; the way the world works.
  • When we measure the position, we disturb the momentum. Our measurement probes are “too fat,” as it were. As we increase the position accuracy (by shining light of shorter wavelengths, for instance), we disturb the momentum more and more (because shorter wavelength light has higher energy/momentum).
  • Closely related to this interpretation is a view that the uncertainty principle is a perceptual limit.
  • We can also think of the uncertainty principle as a cognitive limit if we consider that a future theory might surpass such limits.

The first view is currently popular and is related to the so-called Copenhagen interpretation of quantum mechanics. Let’s ignore it for it is not too open to discussions.

The second interpretation is generally understood as an experimental difficulty. But if the notion of the experimental setup is expanded to include the inevitable human observer, we arrive at the third view of perceptual limitation. In this view, it is actually possible to “derive” the uncertainty principle, based on how human perception works.

Let’s assume that we are using a beam of light of wavelength λ to observe the particle. The precision in the position we can hope to achieve is of the order of λ. In other words, Δx ≈ λ. In quantum mechanics, the momentum of each photon in the light beam is inversely proportional to the wavelength. At least one photon has to be reflected by the particle so that we can see it. So, by the classical conservation law, the momentum of the particle has to change by at least this amount (≈ constant/λ) from what it was before the measurement. Thus, through perceptual arguments, we get something similar to the Heisenberg uncertainty principle:

Δx · Δp ≈ constant

We can make this argument more rigorous, and get an estimate of the value of the constant. The resolution of a microscope is given by the empirical formula 0.61 λ/NA, where NA is the numerical aperture, which has a maximum value of one. Thus, the best spatial resolution is 0.61 λ. Each photon in the light beam carries a momentum 2πħ/λ, which is the uncertainty in the particle momentum. So we get Δx · Δp ≈ 4ħ, approximately an order of magnitude bigger than the quantum mechanical limit.
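The nice feature of this estimate is that the wavelength cancels out, which is easy to verify numerically (working in units of ħ):

```python
# Back-of-the-envelope check of the perceptual estimate: spatial resolution
# dx = 0.61*lam and photon momentum dp = 2*pi*hbar/lam, so the product
# dx*dp is independent of the wavelength lam.
import math

def perceptual_uncertainty_product():
    """dx * dp in units of hbar; the factors of lam cancel."""
    return 0.61 * 2 * math.pi  # (0.61*lam) * (2*pi*hbar/lam) / hbar

product = perceptual_uncertainty_product()
print(f"dx * dp ≈ {product:.2f} hbar")  # about 3.83 hbar, i.e. roughly 4 hbar
```

Compared with the quantum mechanical bound of ħ/2, this perceptual estimate is indeed about an order of magnitude larger, as the text states.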

Through more rigorous statistical arguments, related to the spatial resolution and the expected momentum transferred, it may be possible to derive the Heisenberg uncertainty principle through this line of reasoning.

If we consider the philosophical view that our reality is a cognitive model of our perceptual stimuli (which is the only view that makes sense to me), my fourth interpretation of the uncertainty principle being a cognitive limitation also holds a bit of water.

About the Author

The author is a scientist from the European Organization for Nuclear Research (CERN), who currently works as a senior quantitative professional at Standard Chartered in Singapore. More information about the author can be found at his blog: http://www.Thulasidas.com. The views expressed in this column are only his personal views, which have not been influenced by considerations of the firm’s business or client relationships.

Why the Speed of Light?

What is so special about light that its speed should figure in the basic structure of space and time and our reality? This is the question that has nagged many scientists ever since Albert Einstein published “On the Electrodynamics of Moving Bodies” about 100 years ago.

In order to understand the specialness of light in our space and time, we need to study how we perceive the world around us and how reality is created in our brains. We perceive our world using our senses. The sensory signals that our senses collect are then relayed to our brains. The brain creates a cognitive model, a representation of the sensory inputs, and presents it to our conscious awareness as reality. Our visual reality consists of space, much like our auditory world is made up of sounds.

Just as sounds are a perceptual experience rather than a fundamental property of the physical reality, space also is an experience, or a cognitive representation of the visual inputs, not a fundamental aspect of “the world” our senses are trying to sense.

Space and time together form what physics considers the basis of reality. The only way we can understand the limitations in our reality is by studying the limitations in our senses themselves.

At a fundamental level, how do our senses work? Our sense of sight operates using light, and the fundamental interaction involved in sight falls in the electromagnetic (EM) category because light (the photon) is the intermediary of EM interactions. The exclusivity of EM interactions is not limited to our long-range sense of sight; all the short-range senses (touch, taste, smell and hearing) are also EM in nature. To understand the limitations of our perception of space, we need not even invoke the EM nature of all our senses: space is, by and large, the result of our sense of sight. But it is worthwhile to keep in mind that we would have no sensing, and indeed no reality, in the absence of EM interactions.

Like our senses, all our technological extensions to our senses (such as radio telescopes, electron microscopes, redshift measurements and even gravitational lensing) use EM interactions exclusively to measure our universe. Thus, we cannot escape the basic constraints of our perception even when we use modern instruments. The Hubble telescope may see a billion light years farther than our naked eyes, but what it sees is still a billion years older than what our eyes see. Our perceived reality, whether built upon direct sensory inputs or technologically enhanced, is a subset of electromagnetic particles and interactions only. It is a projection of EM particles and interactions into our sensory and cognitive space, a possibly imperfect projection.

This statement about the exclusivity of EM interactions in our perceived reality is often met with a bit of skepticism, mainly due to a misconception that we can sense gravity directly. This confusion arises because our bodies are subject to gravity. There is a fine distinction between “being subject to” and “being able to sense” gravitational force.

This difference is illustrated by a simple thought experiment: Imagine a human subject placed in front of an object made entirely of cosmological dark matter. There is no other visible matter anywhere in his field of view. Given that the dark matter exerts a gravitational force on the subject, will he be able to sense its presence? He will be pulled toward it, but how will he know that he is being pulled or that he is moving? He could possibly design some mechanical contraption to detect the gravity of the dark matter object. But then he would be sensing the effect of gravity on some matter using EM interactions. For instance, he might be able to see his unexplained acceleration (the effect of gravity on his body, which is EM matter) with respect to reference objects such as stars. But the sensing part here (seeing the stars) involves EM interactions.

It is impossible to design any mechanical contraption to detect gravity that is devoid of EM matter. The gravity sensing in our ears again measures the effect of gravity on EM matter. In the absence of EM interaction, it is impossible to sense gravity, or anything else for that matter.

Electromagnetic interactions are responsible for our sensory inputs. Sensory perception leads to our brain’s representation that we call reality. Any limitation in this chain leads to a corresponding limitation in our sense of reality. One limitation in the chain from senses to reality is the finite speed of the photon, which is the gauge boson of our senses. The finite speed of the sense modality influences and distorts our perception of motion, space and time. Because these distortions are perceived as a part of our reality itself, the root cause of the distortion becomes a fundamental property of our reality. This is how the speed of light becomes such an important constant in our space-time. The sanctity of light is respected only in our perceived reality.

If we trust the imperfect perception and try to describe what we sense at cosmological scales, we end up with views of the world such as the big bang theory in modern cosmology and the general and special theories of relativity. These theories are not wrong, and the purpose of this book is not to prove them wrong, just to point out that they are descriptions of a perceived reality. They do not describe the physical causes behind the sensory inputs. The physical causes belong to an absolute reality beyond our senses.

The distinction between the absolute reality and our perception of it can be further developed and applied to certain specific astrophysical and cosmological phenomena. When it comes to the physics that happens well beyond our sensory ranges, we really have to take into account the role that our perception and cognition play in seeing them. The universe as we see it is only a cognitive model created out of the photons falling on our retina or on the photo sensors of the Hubble telescope. Because of the finite speed of the information carrier (namely photons), our perception is distorted in such a way as to give us the impression that space and time obey special relativity. They do, but space and time are not the absolute reality. They are only a part of the unreal universe that is our perception of an unknowable reality.

[This again is an edited excerpt from my book, The Unreal Universe.]