The only recourse an atheist has against this argument from personal experience is to say that the believer is either misrepresenting his experience or mistaken about it. I am not willing to pursue that line of argument. I know that I am undermining my own stance here, but I would like to give the theist camp some more ammunition for this particular argument, and to make it more formal.
I have a reason for delaying this post on the fifth and last argument for God by Dr. William Lane Craig. It holds more potency than is immediately obvious. While it is easy to write it off because it is a subjective, experiential argument, the lack of credence we attribute to subjectivity is itself the result of our similarly subjective acceptance of what we consider objective reason and rationality. I hope this point will become clearer as you read this post and the next one.
In the previous post, we considered the cosmological argument (that the Big Bang theory is an affirmation of a God) and a teleological argument (that the highly improbable fine-tuning of the universe proves the existence of intelligent creation). We saw that the cosmological argument is nothing more than an admission of our ignorance, although it may be presented in any number of fancy forms (that the cause of the universe is an uncaused cause, which is God, for instance). The teleological argument comes from a potentially wilful distortion of the anthropic principle. The next argument Dr. Craig puts forward is the origin of morality, which, he holds, has no grounding if you assume that atheism is true.
Prof. William Lane Craig is way more than a deist; he is certainly a theist. In fact, he is more than that: he believes that God is as described in the scriptures of his flavor of Christianity. I am not an expert in that field, so I don’t know exactly what that flavor is. But the arguments he gave do not go much further than deism. He gave five arguments to prove that God exists, and he invited Hitchens to refute them. Hitchens did not; at least, not in the enumerated, sequential fashion I plan to follow here.
Recently, I have been listening to some debates on atheism by Christopher Hitchens, recommended by a friend. Although I agree with almost everything Hitchens says (said, rather, because he is no longer with us), I find his tone a bit too flippant and derisive for my taste, much like The God Delusion by Richard Dawkins. I am an atheist, as those who have been following my writings may know. Given that an overwhelming majority of people do believe in some sort of supreme being, at times I feel compelled to answer the question of why I don’t believe in one.
This post is an edited version of my responses in a webinar panel discussion organized by Wiley-Finance and FinCAD. The freely available webcast is linked in the post, and contains responses from the other participants — Paul Wilmott and Espen Haug. An expanded version of this post may later appear as an article in the Wilmott Magazine.
What is Risk?
When we use the word Risk in normal conversation, it has a negative connotation — the risk of getting hit by a car, for instance, but not the risk of winning a lottery. In finance, risk is both positive and negative. At times, you want exposure to a certain kind of risk to counterbalance some other exposure; at times, you are looking for the returns associated with a certain risk. Risk, in this context, is almost identical to the mathematical concept of probability.
But even in finance, you have one kind of risk that is always negative — it is Operational Risk. My professional interest right now is in minimizing the operational risk associated with trading and computational platforms.
How do you measure Risk?
Measuring risk ultimately boils down to estimating the probability of a loss as a function of something — typically the intensity of the loss and time. So it’s like asking — What’s the probability of losing a million dollars or two million dollars tomorrow or the day after?
The question of whether we can measure risk is another way of asking whether we can figure out this probability function. In certain cases, we believe we can — in Market Risk, for instance, we have very good models for this function. Credit Risk is a different story — although we thought we could measure it, we learned the hard way that we probably could not.
The question of how effective the measure is is, in my view, like asking ourselves, “What do we do with a probability number?” If I do a fancy calculation and tell you that you have a 27.3% probability of losing one million tomorrow, what do you do with that piece of information? Probability has a reasonable meaning only in a statistical sense, in high-frequency events or large ensembles. Risk events, almost by definition, are low-frequency events, so a probability number may have only limited practical use. But as a pricing tool, accurate probability is great, especially when you price instruments with deep market liquidity.
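To make the “probability of a loss as a function of intensity and time” idea concrete, here is a minimal sketch. It assumes, purely for illustration, that daily P&L is normally distributed; the mean, volatility, and loss figures are made-up numbers, not anyone’s actual model.

```python
from math import erf, sqrt

def loss_probability(loss, mu, sigma, horizon_days=1):
    """P(P&L < -loss) over the horizon, assuming i.i.d. normal daily P&L
    with mean mu and standard deviation sigma (an illustrative assumption)."""
    mean = mu * horizon_days
    std = sigma * sqrt(horizon_days)
    z = (-loss - mean) / std
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Probability of losing at least $1M tomorrow, with zero mean
# and $600k daily P&L volatility (made-up numbers)
p = loss_probability(1_000_000, mu=0.0, sigma=600_000)
```

As the intensity of the loss grows the probability falls, and as the horizon lengthens it rises, which is exactly the two-argument shape of the probability function described above.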
Innovation in Risk Management
Innovation in Risk comes in two flavors — one is on the risk-taking side, which is in pricing, warehousing risk and so on. On this front, we do it well, or at least we think we do, and innovation in pricing and modeling is active. The flip side is, of course, risk management. Here, I think innovation actually lags behind catastrophic events. Once we have a financial crisis, for instance, we do a post-mortem, figure out what went wrong and try to implement safeguards. But the next failure, of course, is going to come from some other, totally unexpected angle.
What is the role of Risk Management in a bank?
Risk taking and risk management are two aspects of a bank’s day-to-day business. These two aspects seem in conflict with each other, but the conflict is no accident. It is through fine-tuning this conflict that a bank implements its risk appetite. It is like a dynamic equilibrium that can be tweaked as desired.
What is the role of vendors?
In my experience, vendors seem to influence the processes rather than the methodologies of risk management, and indeed of modeling. A vended system, however customizable it may be, comes with its own assumptions about the workflow, lifecycle management etc. The processes built around the system will have to adapt to these assumptions. This is not a bad thing. At the very least, popular vended systems serve to standardize risk management practices.
The Asian Tsunami two and a half years ago unleashed a tremendous amount of energy on the coastal regions around the Indian Ocean. What do you think would’ve happened to this energy if there had been no water to carry it away from the earthquake? I mean, if the earthquake (of the same kind and magnitude) had taken place on land instead of on the seabed as it did, presumably this energy would’ve been present. How would it have manifested itself? As a more violent earthquake? Or a longer one?
I picture the earthquake (in cross-section) as a cantilever spring being held down and then released. The spring then transfers the energy to the tsunami in the form of potential energy, as an increase in the water level. As the tsunami radiates out, it is only the potential energy that is transferred; the water doesn’t move laterally, only vertically. As it hits the coast, the potential energy is transferred into the kinetic energy of the waves hitting the coast (water moving laterally then).
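For a rough sense of scale, the potential energy stored in a raised sea surface follows the standard estimate E = ½ρgh²A. The height and area below are arbitrary illustrative values, not measurements from the 2004 event.

```python
# Potential energy of a uniform sea-surface rise of height h (m) over area A (m^2):
# E = 1/2 * rho * g * h^2 * A  (standard result for a displaced water surface)
RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def surface_rise_energy(h, area):
    return 0.5 * RHO_SEAWATER * G * h * h * area

# Illustrative only: a 1 m rise over a 100 km x 1000 km patch of ocean
E = surface_rise_energy(1.0, 100e3 * 1000e3)  # on the order of 5e14 joules
```

Note the h² dependence: doubling the rise quadruples the stored energy, which is why even a modest-looking increase in water level carries so much punch at the coast.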
Given the magnitude of the energy transferred from the epicenter, I am speculating about what would’ve happened if there had been no mechanism for the transfer. Any thoughts?
I posted this question, which was bothering me when I read that they had found a galaxy about 13 billion light years away. My understanding of that statement is: at a distance of 13 billion light years, there was a galaxy 13 billion years ago, so that we can see the light from it now. Wouldn’t that mean that the universe is at least 26 billion years old? It must have taken the galaxy about 13 billion years to reach where it appears to be, and the light from it must take another 13 billion years to reach us.
In answering my question, Martin and Swansont (who I assume are academic physicists) point out my misconceptions and essentially ask me to learn more. All shall be answered when I’m assimilated, it would appear!
This debate is published as a prelude to my post on the Big Bang theory, coming up in a day or two.
Universe – Size and Age
I was reading a post on http://www.space.com/ stating that they had found a galaxy about 13 billion light years away. I am trying to figure out what that statement means. To me, it means that 13 billion years ago, this galaxy was where we see it now. Isn’t that what 13b LY away means? If so, wouldn’t that mean that the universe has to be at least 26 billion years old? I mean, the whole universe started from one singular point; how could this galaxy be where it was 13 billion years ago unless it had had at least 13 billion years to get there? (Ignoring the inflationary phase for the moment…) I have heard people explain that space itself is expanding. What the heck does that mean? Isn’t it just a fancier way of saying that the speed of light was smaller some time ago?
Ignoring all the rest, how would this mean the universe is 26 billion years old?
The speed of light is an inherent part of atomic structure, in the fine structure constant (alpha). If c was changing, then the patterns of atomic spectra would have to change. There hasn’t been any confirmed data that shows that alpha has changed (there has been the occasional paper claiming it, but you need someone to repeat the measurements), and the rest is all consistent with no change.
To confirm or reinforce what swansont said, there is speculation, and there are some fringe or nonstandard cosmologies, involving c (or alpha) changing over time, but the changing-constants idea just gets more and more ruled out. I’ve been watching for over 5 years, and the more people look at and study the evidence, the LESS likely it seems that there is any change. They rule it out more and more accurately with their data. So it is probably best to ignore the “varying speed of light” cosmologies until one is thoroughly familiar with standard mainstream cosmology. You have misconceptions, Mowgli.
Also the “big bang” model doesn’t look like an explosion of matter whizzing away from some point. It shouldn’t be imagined like that. The best article explaining common mistakes people have is this Lineweaver and Davis thing in Sci Am. I think it was Jan or Feb 2005 but I could be a year off. Google it. Get it from your local library or find it online. Best advice I can give.
To swansont, on why I thought 13 b LY implied an age of 26 b years: When you say that there is a galaxy 13 b LY away, I understand it to mean that 13 billion years ago (my time), the galaxy was at the point where I see it now (which is 13 b LY away from me). Knowing that everything started from the same point, it must have taken the galaxy at least 13 b years to get to where it was 13 b years ago. So 13+13. I’m sure I must be wrong.

To Martin: You are right, I need to learn quite a bit more about cosmology. But a couple of things you mentioned surprise me — how do we observe stuff that is receding from us FTL? I mean, wouldn’t the relativistic Doppler shift formula give an imaginary 1+z? And the stuff beyond 14 b LY away — is it “outside” the universe? I will certainly look up and read the authors you mentioned. Thanks.
That would depend on how you do your calibration. Looking only at a Doppler shift and ignoring all the other factors, if you know that speed correlates with distance, you get a certain redshift and you would probably calibrate that to mean 13b LY if that was the actual distance. That light would be 13b years old.
But as Martin has pointed out, space is expanding; the cosmological redshift is different from the Doppler shift. Because the intervening space has expanded, AFAIK the light that gets to us from a galaxy 13b LY away is not as old, because it was closer when the light was emitted. I would think that all of this is taken into account in the measurements, so that when a distance is given to the galaxy, it’s the actual distance.
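To put a number on the Doppler point raised here: the special-relativistic Doppler formula is 1 + z = sqrt((1 + β)/(1 − β)) with β = v/c. It diverges as β approaches 1 and is undefined beyond it, which is one way to see that the redshifts of galaxies receding faster than light cannot be read as ordinary Doppler shifts. A quick sketch:

```python
from math import sqrt

def doppler_redshift(beta):
    """Special-relativistic Doppler redshift z for recession speed v = beta * c."""
    if beta >= 1.0:
        raise ValueError("1 + z is undefined (imaginary/divergent) for v >= c")
    return sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

# z grows without bound as the recession speed approaches c
zs = [doppler_redshift(b) for b in (0.5, 0.9, 0.99, 0.999)]
```

The cosmological redshift, by contrast, comes from the stretching of the intervening space while the light is in transit, so it is a different function of distance altogether.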
This post has 5 or 6 links to that Sci Am article by Lineweaver and Davis
It is post #65 on the Astronomy links sticky thread
It turns out the article was in the March 2005 issue.
I think it’s comparatively easy to read—well written. So it should help.
When you’ve read the Sci Am article, ask more questions — your questions might be fun to try and answer. :-)
The Twin Paradox is usually explained away by arguing that the traveling twin feels the motion because of his acceleration/deceleration, and therefore ages slower.
But what will happen if the twins both accelerate symmetrically? That is, they start from rest from one space point with synchronized clocks, and get back to the same space point at rest by accelerating away from each other for some time and decelerating on the way back. By the symmetry of the problem, it seems that when the two clocks are together at the end of the journey, at the same point, and at rest with respect to each other, they have to agree.
Then again, during the whole journey, each clock is in motion (accelerated or not) with respect to the other one. In SR, every clock that is in motion with respect to an observer’s clock is supposed to run slower. Or, the observer’s clock is always the fastest. So, for each twin, the other clock must be running slower. However, when they come back together at the end of the journey, they have to agree. This can happen only if each twin sees the other’s clock running faster at some point during the journey. What does SR say will happen on this imaginary journey?
(Note that the acceleration of each twin can be made constant. Have the twins cross each other at high speed under a constant linear deceleration. They will cross each other again at the same speed after some time. At the crossings, their clocks can be compared.)
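As a sanity check on the symmetry argument, one can integrate each twin’s proper time, τ = ∫ sqrt(1 − v²/c²) dt, in the single inertial frame where both twins start and end at rest. Since the velocities are mirror images (v and −v), the integrands are identical, so the two clocks must agree, and both show less than the elapsed coordinate time. A numerical sketch, with an arbitrary illustrative acceleration profile and units where c = 1:

```python
from math import sqrt

def proper_time(velocities, dt):
    """tau = integral of sqrt(1 - v^2/c^2) dt, with c = 1."""
    return sum(sqrt(1.0 - v * v) * dt for v in velocities)

n, T = 10_000, 4.0  # time steps and total coordinate time (arbitrary)
dt = T / n
a = 0.2             # coordinate acceleration, kept small so |v| stays well below 1

# Twin A: accelerate away for T/4, decelerate for T/2, accelerate back for T/4,
# ending at rest at the starting point. Twin B is the mirror image.
vA, v = [], 0.0
for i in range(n):
    t = i * dt
    sign = 1.0 if (t < T / 4 or t >= 3 * T / 4) else -1.0
    v += sign * a * dt
    vA.append(v)
vB = [-u for u in vA]  # mirror-image journey

tauA, tauB = proper_time(vA, dt), proper_time(vB, dt)
# tauA == tauB by symmetry, and both are (slightly) less than T
```

This confirms the clocks agree at the reunion; what the sketch does not show, and what the question above is really probing, is how each twin accounts for that agreement from his own accelerated frame.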
Farsight wrote: Time is a velocity-dependent subjective measure of event succession rather than something fundamental – the events mark the time, the time doesn’t mark the events. This means the stuff out there is space rather than space-time, and is an “aether” veiled by subjective time.
I like your definition of time. It is close to my own view that time is “unreal.” It is possible to treat space as real and space-time as something different, as you do. This calls for some careful thought. I will outline my thinking in this post and illustrate it with an example, if my friends don’t pull me out for lunch before I can finish.
The first question we need to ask ourselves is why space and time seem coupled. The answer is actually too simple to spot, and it is in your definition of time: space and time mix through our concept of velocity and our brain’s ability to sense motion. There is an even deeper connection, which is that space is a cognitive representation of the photon inputs to our eyes, but we will get to that later.
Let’s assume for a second that we had a sixth sense that operated at infinite speed. That is, if a star explodes a million light years from us, we sense it immediately. We will see it only after a million years, but we sense it instantly. I know this is a violation of SR, cannot happen, and all that, but stay with me for a second. Now, a little bit of thinking will convince you that the space we sense using this hypothetical sixth sense is Newtonian. Here, space and time can be completely decoupled, absolute time can be defined, and so on. Starting from this space, we can actually work out how we will see it using light and our eyes, knowing that the speed of light is what it is. It will turn out, clearly, that we see events with a delay. That is a first-order (or static) effect. The second-order effect is the way we perceive objects in motion. It turns out that we will see a time dilation and a length contraction (for objects receding from us).
Let me illustrate it a little further using echolocation. Assume that you are a blind bat. You sense your space using sonar pings. Can you sense a supersonic object? If it is coming towards you, by the time the reflected ping reaches you, it has gone past you. If it is going away from you, your pings can never catch up. In other words, faster than sound travel is “forbidden.” If you make one more assumption – the speed of the pings is the same for all bats regardless of their state of motion – you derive a special relativity for bats where the speed of sound is the fundamental property of space and time!
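The bat analogy can be made quantitative: replacing c with the speed of sound in the Lorentz factor gives the time-dilation factor a “bat physicist” would derive, with supersonic speeds forbidden in exactly the way superluminal speeds are in SR. A toy sketch (the cruising speed is an arbitrary illustrative number):

```python
from math import sqrt

V_SOUND = 343.0  # m/s; the speed of the pings, assumed the same for all bats

def acoustic_gamma(v):
    """Lorentz-like factor for 'bat relativity': the speed of sound plays
    the role the speed of light plays in SR."""
    if abs(v) >= V_SOUND:
        raise ValueError("supersonic motion is forbidden in the bats' kinematics")
    return 1.0 / sqrt(1.0 - (v / V_SOUND) ** 2)

# A bat echolocating another bat cruising at 300 m/s would infer a time
# dilation factor of roughly 2
g = acoustic_gamma(300.0)
```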
We have to dig a little deeper and appreciate that space is no more real than time. Space is a cognitive construct created out of our sensory inputs. If the sense modality (light for us, sound for bats) has a finite speed, that speed will become a fundamental property of the resultant space. And space and time will be coupled through the speed of the sense modality.
This, of course, is only my own humble interpretation of SR. I wanted to post this on a new thread, but I get the feeling that people are a little too attached to their own views in this forum to be able to listen.
Leo wrote: Minkowski spacetime is one interpretation of the Lorentz transforms, but other interpretations, the original Lorentz-Poincaré Relativity or modernized versions of it with a wave model of matter (LaFreniere or Close or many others), work in a perfectly euclidean 3D space.
So we end up with process slowdown and matter contraction, but NO time dilation or space contraction. The transforms are the same, though. So why does one interpretation lead to a tensor metric while the others don’t? Or do they all? I lack the theoretical background to answer the question.
If you define the LT as a velocity-dependent deformation of an object in motion, then you can make the transformation a function of time. There won’t be any warping or complications of metric tensors and such. Actually, what I did in my book is something along those lines (though not quite), as you know.
The trouble arises when the transformation matrix is a function of the vector it is transforming. So, if you define the LT as a matrix operation in a 4-D space-time, you can no longer make it a function of time through acceleration, any more than you can make it a function of position (as in a velocity field, for instance). The space-time warping is a mathematical necessity. Because of it, you lose coordinates, and the tools we learn in our undergraduate years are no longer powerful enough to handle the problem.
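For reference, here is the standard matrix form of the LT under discussion, in one space dimension. The entries depend on the relative velocity v through β and γ, which is why letting them vary with time or position takes you beyond a single linear transformation:

```latex
\begin{pmatrix} ct' \\ x' \end{pmatrix}
=
\begin{pmatrix} \gamma & -\gamma\beta \\ -\gamma\beta & \gamma \end{pmatrix}
\begin{pmatrix} ct \\ x \end{pmatrix},
\qquad
\beta = \frac{v}{c}, \quad \gamma = \frac{1}{\sqrt{1-\beta^2}}
```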