The atheist-theist debate boils down to a simple question — Did humans discover God? Or, did we invent Him? The difference between discovering and inventing is similar to the one between believing and knowing. Theists believe that there was a God to be discovered. Atheists “know” that we humans invented the concept of God. Belief and knowledge differ only slightly — knowledge is merely a very strong belief. A belief is considered knowledge when it fits in nicely with a larger worldview, much as a hypothesis in physics becomes a theory. While a theory (such as Quantum Mechanics, for instance) is considered to be knowledge (or the way the physical world really is), it is best not to forget its lowly origin as a mere hypothesis. My focus in this post is the possible origin of the God hypothesis.
The only recourse an atheist has against this argument based on personal experience is to say that the believer is either misrepresenting his experience or mistaken about it. I am not willing to pursue that line of argument. I know that I am undermining my own stance here, but I would like to give the theist camp some more ammunition for this particular argument, and to make it more formal.
I have a reason for delaying this post on the fifth and last argument for God by Dr. William Lane Craig. It holds more potency than is immediately obvious. While it is easy to write it off as a subjective, experiential argument, the lack of credence we attribute to subjectivity is itself a result of our similarly subjective acceptance of what we consider objective reason and rationality. I hope that this point will become clearer as you read this post and the next one.
In the previous post, we considered the cosmological argument (that the Big Bang theory is an affirmation of a God) and a teleological argument (that the highly improbable fine-tuning of the universe proves the existence of an intelligent creator). We saw that the cosmological argument is nothing more than an admission of our ignorance, although it may be presented in any number of fancy forms (that the cause of the universe is an uncaused cause, which is God, for instance). The teleological argument comes from a potentially wilful distortion of the anthropic principle. The next argument that Dr. Craig puts forward is the origin of morality, which, he contends, has no grounding if you assume that atheism is true.
Prof. William Lane Craig is far more than a deist; he is certainly a theist. In fact, he is more than that; he believes that God is as described in the scriptures of his flavor of Christianity. I am not an expert in that field, so I don’t know exactly what that flavor is. But the arguments he gave do not go much farther than deism. He gave five arguments to prove that God exists, and he invited Hitchens to refute them. Hitchens did not; at least, not in the enumerated, sequential fashion I plan to adopt here.
This post is an edited version of my responses in a webinar panel discussion organized by Wiley-Finance and FinCAD. The freely available webcast is linked in the post, and contains responses from the other participants — Paul Wilmott and Espen Haug. An expanded version of this post may later appear as an article in the Wilmott Magazine.
What is Risk?
When we use the word risk in normal conversation, it has a negative connotation — we speak of the risk of getting hit by a car, for instance, but not of the risk of winning a lottery. In finance, risk is both positive and negative. At times, you want the exposure to a certain kind of risk to counterbalance some other exposure; at times, you are looking for the returns associated with a certain risk. Risk, in this context, is almost identical to the mathematical concept of probability.
But even in finance, you have one kind of risk that is always negative — it is Operational Risk. My professional interest right now is in minimizing the operational risk associated with trading and computational platforms.
How do you measure Risk?
Measuring risk ultimately boils down to estimating the probability of a loss as a function of something — typically the intensity of the loss and time. So it’s like asking — What’s the probability of losing a million dollars or two million dollars tomorrow or the day after?
The question of whether we can measure risk is another way of asking whether we can figure out this probability function. In certain cases, we believe we can — in Market Risk, for instance, we have very good models for this function. Credit Risk is a different story — although we thought we could measure it, we learned the hard way that we probably could not.
The question of how effective the measure is, is, in my view, like asking ourselves, “What do we do with a probability number?” If I do a fancy calculation and tell you that you have a 27.3% probability of losing one million tomorrow, what do you do with that piece of information? Probability has a reasonable meaning only in a statistical sense, in high-frequency events or large ensembles. Risk events, almost by definition, are low-frequency events, and a probability number may have only limited practical use. But as a pricing tool, an accurate probability is great, especially when you price instruments with deep market liquidity.
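To make the probability-function idea concrete, here is a minimal sketch of how one might compute the probability of losing more than a given amount. The normal-loss assumption and the parameter values are purely illustrative, not a real risk model:

```python
import math

def loss_probability(threshold, mean=0.0, sigma=1.0):
    """P(loss >= threshold), assuming losses are normally distributed.

    The normality assumption and the parameters here are toy choices
    for illustration, not a recommended risk model.
    """
    z = (threshold - mean) / sigma
    # Survival function of the standard normal, via the complementary
    # error function
    return 0.5 * math.erfc(z / math.sqrt(2))

# Probability of losing at least two million, with losses in millions
# and a (hypothetical) one-million standard deviation of daily P&L
p = loss_probability(2.0, mean=0.0, sigma=1.0)
```

Whether the 2.3% or so that comes out of such a calculation is actionable is exactly the question raised above: for low-frequency events, the number is hard to interpret outside a pricing context.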
Innovation in Risk Management
Innovation in Risk comes in two flavors — one is on the risk-taking side, which is in pricing, warehousing risk and so on. On this front, we do it well, or at least we think we are doing it well, and innovation in pricing and modeling is active. The flip side of it is, of course, risk management. Here, I think innovation actually lags behind catastrophic events. Once we have a financial crisis, for instance, we do a post-mortem, figure out what went wrong and try to implement safeguards. But the next failure, of course, is going to come from some other, totally unexpected angle.
What is the role of Risk Management in a bank?
Risk taking and risk management are two aspects of a bank’s day-to-day business. These two aspects seem in conflict with each other, but the conflict is no accident. It is through fine-tuning this conflict that a bank implements its risk appetite. It is like a dynamic equilibrium that can be tweaked as desired.
What is the role of vendors?
In my experience, vendors seem to influence the processes rather than the methodologies of risk management, and indeed of modeling. A vended system, however customizable it may be, comes with its own assumptions about the workflow, lifecycle management etc. The processes built around the system will have to adapt to these assumptions. This is not a bad thing. At the very least, popular vended systems serve to standardize risk management practices.
After reading a paper by Ashtekar on quantum gravity and thinking about it, I realized what my trouble with the Big Bang theory was: it lies more in the fundamental assumptions than in the details. I thought I would summarize my thoughts here, more for my own benefit than anybody else’s.
Classical theories (including SR and QM) treat space as continuous nothingness; hence the term space-time continuum. In this view, objects exist in continuous space and interact with each other in continuous time.
Although this notion of the space-time continuum is intuitively appealing, it is, at best, incomplete. Consider, for instance, a spinning body in empty space. It is expected to experience centrifugal force. Now imagine that the body is stationary and the whole of space is rotating around it. Will it experience any centrifugal force?
It is hard to see why there would be any centrifugal force if space is empty nothingness.
GR introduced a paradigm shift by encoding gravity into space-time thereby making it dynamic in nature, rather than empty nothingness. Thus, mass gets enmeshed in space (and time), space becomes synonymous with the universe, and the spinning body question becomes easy to answer. Yes, it will experience centrifugal force if it is the universe that is rotating around it because it is equivalent to the body spinning. And, no, it won’t, if it is in just empty space. But “empty space” doesn’t exist. In the absence of mass, there is no space-time geometry.
So, naturally, before the Big Bang (if there was one), there couldn’t be any space, nor indeed could there be any “before.” Note, however, that the Ashtekar paper doesn’t clearly state why there had to be a big bang. The closest it gets is that the necessity of BB arises from the encoding of gravity in space-time in GR. Despite this encoding of gravity and thereby rendering space-time dynamic, GR still treats space-time as a smooth continuum — a flaw, according to Ashtekar, that QG will rectify.
Now, if we accept that the universe started out with a big bang (and from a small region), we have to account for quantum effects. Space-time has to be quantized and the only right way to do it would be through quantum gravity. Through QG, we expect to avoid the Big Bang singularity of GR, the same way QM solved the unbounded ground state energy problem in the hydrogen atom.
What I described above is what I understand to be the physical arguments behind modern cosmology. The rest is a mathematical edifice built on top of this physical (or indeed philosophical) foundation. If you have no strong views on the philosophical foundation (or if your views are consistent with it), you can accept BB with no difficulty. Unfortunately, I do have differing views.
My views revolve around the following questions.
- What is space?
- Why is the speed of light important in it?
- Where does the Heisenberg Uncertainty Principle come from?
These posts may sound like useless philosophical musings, but I do have some concrete (and in my opinion, important) results, listed below.
- Are GRBs and Radio Sources Luminal Booms? (An article published in IJMP-D, which became one of the “Top Accessed Articles” of the journal. :-))
- Light Travel Time Effects and Cosmological Features (Trying to get this one published.)
There is much more work to be done on this front. But for the next couple of years, with my new book contract and pressures from my quant career, I will not have enough time to study GR and cosmology with the seriousness they deserve. I hope to get back to them once the current phase of spreading myself too thin passes.
This sounds like a strange question. We all know what space is; it is all around us. When we open our eyes, we see it. If seeing is believing, then the question “What is space?” indeed is a strange one.
To be fair, we don’t actually see space. We see only objects which we assume are in space. Rather, we define space as whatever it is that holds or contains the objects. It is the arena where objects do their thing, the backdrop of our experience. In other words, experience presupposes space and time, and provides the basis for the worldview behind the currently popular interpretations of scientific theories.
Although not obvious, this definition (or assumption or understanding) of space comes with a philosophical baggage — that of realism. The realist’s view is predominant in the current understanding of Einstein’s theories as well. But Einstein himself may not have embraced realism blindly. Why else would he say:
In order to break away from the grip of realism, we have to approach the question tangentially. One way to do it is by studying the neuroscience and cognitive basis of sight, which, after all, provides the strongest evidence for the realness of space. Space, by and large, is the experience associated with sight. Another way is to examine the experiential correlates of other senses: What is sound?
When we hear something, what we hear is, naturally, sound. We experience a tone, an intensity and a time variation that tell us a lot about who is talking, what is breaking and so on. But even after stripping off all the extra richness added to the experience by our brain, the most basic experience is still a “sound.” We all know what it is, but we cannot explain it in terms more basic than that.
Now let’s look at the sensory signal responsible for hearing. As we know, these are pressure waves in the air that are created by a vibrating body making compressions and depressions in the air around it. Much like the ripples in a pond, these pressure waves propagate in almost all directions. They are picked up by our ears. By a clever mechanism, the ears perform a spectral analysis and send electric signals, which roughly correspond to the frequency spectrum of the waves, to our brain. Note that, so far, we have a vibrating body, bunching and spreading of air molecules, and an electric signal that contains information about the pattern of the air molecules. We do not have sound yet.
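The spectral-analysis step can be illustrated with a toy computation. The sketch below is my own illustration, not a model of the cochlea: it runs a naive discrete Fourier transform over a sampled pure tone and recovers the tone’s frequency from the peak of the spectrum, much as the ear extracts a frequency spectrum from pressure waves:

```python
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform magnitudes: a toy stand-in for
    the spectral analysis the ear performs on pressure waves."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

# A pure 400 Hz tone sampled at 8 kHz for 0.01 s (80 samples)
rate, freq, n = 8000, 400, 80
wave = [math.sin(2 * math.pi * freq * t / rate) for t in range(n)]
spectrum = dft_magnitudes(wave)
peak_bin = spectrum.index(max(spectrum))
# peak_bin * rate / n recovers the tone's frequency: 4 * 8000 / 80 = 400 Hz
```

The electric signal the ear sends to the brain roughly corresponds to such a spectrum; the point of the passage above is that nothing in this chain is yet “sound.”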
The experience of sound is the magic our brain performs. It translates the electrical signal encoding the air pressure wave patterns to a representation of tonality and richness of sound. Sound is not the intrinsic property of a vibrating body or a falling tree, it is the way our brain chooses to represent the vibrations or, more precisely, the electrical signal encoding the spectrum of the pressure waves.
Doesn’t it make sense to call sound an internal cognitive representation of our auditory sensory inputs? If you agree, then reality itself is our internal representation of our sensory inputs. This notion is actually much more profound than it first appears. If sound is representation, so is smell. So is space.
|Figure: Illustration of the process of brain’s representation of sensory inputs. Odors are a representation of the chemical compositions and concentration levels our nose senses. Sounds are a mapping of the air pressure waves produced by a vibrating object. In sight, our representation is space, and possibly time. However, we do not know what it is the representation of.|
We can examine it and fully understand sound because of one remarkable fact — we have a more powerful sense, namely our sight. Sight enables us to understand the sensory signals of hearing and compare them to our sensory experience. In effect, sight enables us to make a model describing what sound is.
Why is it that we do not know the physical cause behind space? After all, we know of the causes behind the experiences of smell, sound, etc. The reason for our inability to see beyond the visual reality is in the hierarchy of senses, best illustrated using an example. Let’s consider a small explosion, like a firecracker going off. When we experience this explosion, we will see the flash, hear the report, smell the burning chemicals and feel the heat, if we are close enough.
The qualia of these experiences are attributed to the same physical event — the explosion, the physics of which is well understood. Now, let’s see if we can fool the senses into having the same experiences, in the absence of a real explosion. The heat and the smell are fairly easy to reproduce. The experience of the sound can also be created using, for instance, a high-end home theater system. How do we recreate the experience of the sight of the explosion? A home theater experience is a poor reproduction of the real thing.
In principle at least, we can think of futuristic scenarios such as the holodeck in Star Trek, where the experience of the sight can be recreated. But at the point where sight is also recreated, is there a difference between the real experience of the explosion and the holodeck simulation? The blurring of the sense of reality when the sight experience is simulated indicates that sight is our most powerful sense, and that we have no access to causes beyond our visual reality.
Visual perception is the basis of our sense of reality. All other senses provide corroborating or complementing perceptions to the visual reality.
[This post has borrowed quite a bit from my book.]
The Asian Tsunami two and a half years ago unleashed a tremendous amount of energy on the coastal regions around the Indian Ocean. What do you think would have happened to this energy if there had been no water to carry it away from the earthquake? I mean, if an earthquake of the same kind and magnitude had taken place on land instead of on the seabed, presumably this energy would still have been present. How would it have manifested? As a more violent earthquake? Or a longer one?
I picture the earthquake (in cross-section) as a cantilever spring being held down and then released. The spring then transfers the energy to the tsunami in the form of potential energy, as an increase in the water level. As the tsunami radiates out, it is only the potential energy that is transferred; the water doesn’t move laterally, only vertically. As it hits the coast, the potential energy is transferred into the kinetic energy of the waves hitting the coast (water moving laterally then).
Given the magnitude of the energy transferred from the epicenter, I am speculating about what would have happened if there had been no mechanism for the transfer. Any thoughts?
I posted this question, which had been bothering me since I read that they had found a galaxy about 13 billion light years away. My understanding of that statement is: at a distance of 13 billion light years, there was a galaxy 13 billion years ago, so that we can see its light now. Wouldn’t that mean that the universe is at least 26 billion years old? It must have taken the galaxy about 13 billion years to reach where it appears to be, and the light from it must take another 13 billion years to reach us.
In answering my question, Martin and Swansont (who I assume are academic physicists) point out my misconceptions and essentially ask me to learn more. All shall be answered when I’m assimilated, it would appear!
This debate is published as a prelude to my post on the Big Bang theory, coming up in a day or two.
Universe – Size and Age
I was reading a post on http://www.space.com/ stating that they found a galaxy about 13 billion light years away. I am trying to figure out what that statement means. To me, it means that 13 billion years ago, this galaxy was where we see it now. Isn’t that what 13b LY away means? If so, wouldn’t that mean that the universe has to be at least 26 billion years old? I mean, the whole universe started from one singular point; how could this galaxy be where it was 13 billion years ago unless it had had at least 13 billion years to get there? (Ignoring the inflationary phase for the moment…) I have heard people explain that space itself is expanding. What the heck does that mean? Isn’t it just a fancier way of saying that the speed of light was smaller some time ago?
Ignoring all the rest, how would this mean the universe is 26 billion years old?
The speed of light is an inherent part of atomic structure, in the fine structure constant (alpha). If c was changing, then the patterns of atomic spectra would have to change. There hasn’t been any confirmed data that shows that alpha has changed (there has been the occasional paper claiming it, but you need someone to repeat the measurements), and the rest is all consistent with no change.
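This point can be made quantitative: the fine structure constant ties c directly to atomic structure, so a drifting c would show up in atomic spectra. A quick sketch, using standard CODATA SI values for the constants (the numbers below are the accepted values, used here only to show how c enters alpha):

```python
import math

# Standard SI (CODATA) values; any change in c would change alpha,
# and hence the patterns of atomic spectra.
e = 1.602176634e-19           # elementary charge, C
epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34        # reduced Planck constant, J*s
c = 2.99792458e8              # speed of light, m/s

# Fine structure constant: alpha = e^2 / (4 * pi * epsilon_0 * hbar * c)
alpha = e**2 / (4 * math.pi * epsilon_0 * hbar * c)
inverse_alpha = 1 / alpha  # the familiar ~137.036
```

Since alpha is dimensionless, a change in it (unlike a change in c alone) would be unambiguously observable, which is why spectral measurements constrain it so tightly.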
To confirm or reinforce what swansont said, there are speculations and some fringe or nonstandard cosmologies that involve c changing over time (or alpha changing over time), but the changing-constants thing just gets more and more ruled out. I’ve been watching for over 5 years, and the more people look and study evidence, the LESS likely it seems that there is any change. They rule it out more and more accurately with their data. So it is probably best to ignore the “varying speed of light” cosmologies until one is thoroughly familiar with standard mainstream cosmology. You have misconceptions, Mowgli.
Also the “big bang” model doesn’t look like an explosion of matter whizzing away from some point. It shouldn’t be imagined like that. The best article explaining common mistakes people have is this Lineweaver and Davis thing in Sci Am. I think it was Jan or Feb 2005 but I could be a year off. Google it. Get it from your local library or find it online. Best advice I can give.
To swansont, on why I thought 13 b LY implied an age of 26 b years: When you say that there is a galaxy 13 b LY away, I understand it to mean that 13 billion years ago (my time), the galaxy was at the point where I see it now (which is 13 b LY away from me). Knowing that everything started from the same point, it must have taken the galaxy at least 13 b years to get where it was 13 b years ago. So 13+13. I’m sure I must be wrong. To Martin: You are right, I need to learn quite a bit more about cosmology. But a couple of things you mentioned surprise me — how do we observe stuff that is receding from us FTL? I mean, wouldn’t the relativistic Doppler shift formula give an imaginary 1+z? And the stuff beyond 14 b LY away — is it “outside” the universe? I will certainly look up and read the authors you mentioned. Thanks.
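On the imaginary-1+z worry: the special-relativistic Doppler formula indeed breaks down at v = c, which is one way to see that cosmological recession cannot be an ordinary Doppler shift. A quick sketch of that formula (my own illustration of the point, not a cosmological calculation):

```python
import math

def doppler_redshift(beta):
    """Special-relativistic Doppler factor 1+z = sqrt((1+beta)/(1-beta))
    for a source receding at beta = v/c. Only defined for |beta| < 1,
    which is why naively plugging in superluminal recession speeds
    gives nonsense; cosmological redshift is a different formula."""
    if not -1 < beta < 1:
        raise ValueError("special-relativistic formula requires |v| < c")
    return math.sqrt((1 + beta) / (1 - beta))

# z grows without bound as beta -> 1
one_plus_z = doppler_redshift(0.5)  # sqrt(3), so z is about 0.73
```

Galaxies receding faster than light in the expanding-space picture simply are not described by this formula, which resolves the apparent contradiction.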
That would depend on how you do your calibration. Looking only at a Doppler shift and ignoring all the other factors, if you know that speed correlates with distance, you get a certain redshift and you would probably calibrate that to mean 13b LY if that was the actual distance. That light would be 13b years old.
But as Martin has pointed out, space is expanding; the cosmological redshift is different from the Doppler shift. Because the intervening space has expanded, AFAIK the light that gets to us from a galaxy 13b LY away is not as old, because it was closer when the light was emitted. I would think that all of this is taken into account in the measurements, so that when a distance is given to the galaxy, it’s the actual distance.
This post has 5 or 6 links to that Sci Am article by Lineweaver and Davis
It is post #65 on the Astronomy links sticky thread
It turns out the article was in the March 2005 issue.
I think it’s comparatively easy to read—well written. So it should help.
When you’ve read the Sci Am article, ask more questions—your questions might be fun to try and answer:-)