Prof. William Lane Craig is much more than a deist; he is certainly a theist. In fact, he is more than that: he believes that God is as described in the scriptures of his particular flavor of Christianity. I am not an expert in that field, so I don’t know exactly what that flavor is. But the arguments he gave do not go much beyond deism. He gave five arguments to prove that God exists, and he invited Hitchens to refute them. Hitchens did not; at least, not in the enumerated, sequential fashion I plan to adopt here.
Recently, I have been listening to some debates on atheism by Christopher Hitchens, as recommended by a friend. Although I agree with almost everything Hitchens says (said, rather, because he is no longer with us), I find his tone a bit too flippant and derisive for my taste, much like The God Delusion by Richard Dawkins. I am an atheist, as those who have been following my writings may know. Given that an overwhelming majority of people do believe in some sort of supreme being, at times I feel compelled to answer the question of why I don’t believe in one.
This post is an edited version of my responses in a webinar panel discussion organized by Wiley Finance and FinCAD. The freely available webcast is linked in the post and contains responses from the other participants — Paul Wilmott and Espen Haug. An expanded version of this post may later appear as an article in the Wilmott Magazine.
What is Risk?
When we use the word Risk in normal conversation, it has a negative connotation — risk of getting hit by a car, for instance; but not the risk of winning a lottery. In finance, risk is both positive and negative. At times, you want the exposure to a certain kind of risk to counterbalance some other exposure; at times, you are looking for the returns associated with a certain risk. Risk, in this context, is almost identical to the mathematical concept of probability.
But even in finance, you have one kind of risk that is always negative — it is Operational Risk. My professional interest right now is in minimizing the operational risk associated with trading and computational platforms.
How do you measure Risk?
Measuring risk ultimately boils down to estimating the probability of a loss as a function of something — typically the intensity of the loss and time. So it’s like asking — What’s the probability of losing a million dollars or two million dollars tomorrow or the day after?
The question of whether we can measure risk is another way of asking whether we can figure out this probability function. In certain cases, we believe we can — in Market Risk, for instance, we have very good models for this function. Credit Risk is a different story — although we thought we could measure it, we learned the hard way that we probably could not.
The question of how effective the measure is, in my view, is like asking ourselves, “What do we do with a probability number?” If I do a fancy calculation and tell you that you have a 27.3% probability of losing one million tomorrow, what do you do with that piece of information? Probability has a reasonable meaning only in a statistical sense, in high-frequency events or large ensembles. Risk events, almost by definition, are low-frequency events, and a probability number may have only limited practical use. But as a pricing tool, accurate probability is great, especially when you price instruments with deep market liquidity.
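To make the notion of “the probability of losing a million dollars tomorrow” concrete, here is a minimal sketch in Python. It is purely illustrative, not any bank’s actual methodology: it assumes daily P&L is normally distributed and uses the square-root-of-time rule to scale to longer horizons, and the dollar figures are made up.

```python
import math

def loss_probability(mu, sigma, threshold, horizon_days):
    """P(loss exceeds `threshold`) over `horizon_days`, assuming the
    daily P&L is i.i.d. normal with mean `mu` and std `sigma` (dollars)."""
    m = mu * horizon_days                    # mean P&L over the horizon
    s = sigma * math.sqrt(horizon_days)      # square-root-of-time scaling
    z = (-threshold - m) / s                 # standardized loss level
    # Normal CDF via the error function: P(P&L < -threshold)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Example: zero expected P&L, $500k daily volatility, one-day horizon.
p = loss_probability(mu=0.0, sigma=500_000, threshold=1_000_000, horizon_days=1)
# p is about 2.3%: the chance of a one-day loss beyond $1 million.
```

Turning such a number into action — limits, capital, hedges — is exactly the question raised above.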
Innovation in Risk Management.
Innovation in Risk comes in two flavors — one is on the risk-taking side, which is in pricing, warehousing risk and so on. On this front, we do it well, or at least we think we are doing it well, and innovation in pricing and modeling is active. The flip side of it is, of course, risk management. Here, I think innovation actually lags behind catastrophic events. Once we have a financial crisis, for instance, we do a post-mortem, figure out what went wrong and try to implement safeguards. But the next failure, of course, is going to come from some other, totally unexpected angle.
What is the role of Risk Management in a bank?
Risk taking and risk management are two aspects of a bank’s day-to-day business. These two aspects seem in conflict with each other, but the conflict is no accident. It is through fine-tuning this conflict that a bank implements its risk appetite. It is like a dynamic equilibrium that can be tweaked as desired.
What is the role of vendors?
In my experience, vendors seem to influence the processes rather than the methodologies of risk management, and indeed of modeling. A vended system, however customizable it may be, comes with its own assumptions about the workflow, lifecycle management etc. The processes built around the system will have to adapt to these assumptions. This is not a bad thing. At the very least, popular vended systems serve to standardize risk management practices.
The Asian Tsunami two and a half years ago unleashed a tremendous amount of energy on the coastal regions around the Indian Ocean. What do you think would have happened to this energy if there had been no water to carry it away from the earthquake? I mean, if the earthquake (of the same kind and magnitude) had taken place on land instead of the seabed as it did, presumably this energy would still have been present. How would it have manifested itself? As a more violent earthquake? Or a longer one?
I picture the earthquake (in cross-section) as a cantilever spring being held down and then released. The spring then transfers the energy to the tsunami in the form of potential energy, as an increase in the water level. As the tsunami radiates out, it is only the potential energy that is transferred; the water doesn’t move laterally, only vertically. As it hits the coast, the potential energy is transferred into the kinetic energy of the waves hitting the coast (water moving laterally then).
Given the magnitude of the energy transferred from the epicenter, I am speculating about what would have happened if there had been no mechanism for the transfer. Any thoughts?
I posted this question, which had been bothering me since I read that they had found a galaxy about 13 billion light years away. My understanding of that statement is: at a distance of 13 billion light years, there was a galaxy 13 billion years ago, so that we can see the light from it now. Wouldn’t that mean that the universe is at least 26 billion years old? It must have taken the galaxy about 13 billion years to reach where it appears to be, and the light from it must take another 13 billion years to reach us.
In answering my question, Martin and Swansont (who I assume are academic physicists) point out my misconceptions and essentially ask me to learn more. All shall be answered when I’m assimilated, it would appear!
This debate is published as a prelude to my post on the Big Bang theory, coming up in a day or two.
Universe – Size and Age
I was reading a post on http://www.space.com/ stating that they had found a galaxy about 13 billion light years away. I am trying to figure out what that statement means. To me, it means that 13 billion years ago, this galaxy was where we see it now. Isn’t that what 13b LY away means? If so, wouldn’t that mean that the universe has to be at least 26 billion years old? I mean, the whole universe started from one singular point; how could this galaxy be where it was 13 billion years ago unless it had had at least 13 billion years to get there? (Ignoring the inflationary phase for the moment…) I have heard people explain that space itself is expanding. What the heck does that mean? Isn’t it just a fancier way of saying that the speed of light was smaller some time ago?
Ignoring all the rest, how would this mean the universe is 26 billion years old?
The speed of light is an inherent part of atomic structure, in the fine structure constant (alpha). If c was changing, then the patterns of atomic spectra would have to change. There hasn’t been any confirmed data that shows that alpha has changed (there has been the occasional paper claiming it, but you need someone to repeat the measurements), and the rest is all consistent with no change.
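For reference, the dimensionless constant swansont mentions couples c to the electron charge and Planck’s constant, which is why atomic spectra would register any drift in c:

```latex
\alpha \;=\; \frac{e^2}{4\pi \varepsilon_0 \hbar c} \;\approx\; \frac{1}{137.036}
```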
To confirm or reinforce what swansont said, there are speculations and some fringe or nonstandard cosmologies that involve c changing over time (or alpha changing over time), but the changing-constants thing just gets more and more ruled out. I’ve been watching for over 5 years, and the more people look and study the evidence, the LESS likely it seems that there is any change. They rule it out more and more accurately with their data. So it is probably best to ignore the “varying speed of light” cosmologies until one is thoroughly familiar with standard mainstream cosmology. You have misconceptions, Mowgli.
Also the “big bang” model doesn’t look like an explosion of matter whizzing away from some point. It shouldn’t be imagined like that. The best article explaining common mistakes people have is this Lineweaver and Davis thing in Sci Am. I think it was Jan or Feb 2005 but I could be a year off. Google it. Get it from your local library or find it online. Best advice I can give.
To swansont, on why I thought 13 b LY implied an age of 26 b years: When you say that there is a galaxy 13 b LY away, I understand it to mean that 13 billion years ago, my time, the galaxy was at the point where I see it now (which is 13 b LY away from me). Knowing that everything started from the same point, it must have taken the galaxy at least 13 b years to get where it was 13 b years ago. So 13+13. I’m sure I must be wrong. To Martin: You are right, I need to learn quite a bit more about cosmology. But a couple of things you mentioned surprise me — how do we observe stuff that is receding from us FTL? I mean, wouldn’t the relativistic Doppler shift formula give an imaginary 1+z? And the stuff beyond 14 b LY away — is it “outside” the universe? I will certainly look up and read the authors you mentioned. Thanks.
That would depend on how you do your calibration. Looking only at a Doppler shift and ignoring all the other factors, if you know that speed correlates with distance, you get a certain redshift and you would probably calibrate that to mean 13b LY if that was the actual distance. That light would be 13b years old.
But as Martin has pointed out, space is expanding; the cosmological redshift is different from the Doppler shift. Because the intervening space has expanded, AFAIK the light that gets to us from a galaxy 13b LY away is not as old, because it was closer when the light was emitted. I would think that all of this is taken into account in the measurements, so that when a distance is given to the galaxy, it’s the actual distance.
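As an aside on Mowgli’s question about an imaginary 1+z: the special-relativistic Doppler formula can be sketched in a couple of lines. It does produce an imaginary result for v/c > 1, but that only signals that the formula does not apply there; recession redshifts in cosmology are computed from the expansion of space instead. (This snippet is my illustration, not from the thread.)

```python
import math

def doppler_z(beta):
    """Special-relativistic Doppler redshift for a source receding
    at speed beta = v/c. Only meaningful for 0 <= beta < 1."""
    if not 0.0 <= beta < 1.0:
        raise ValueError("formula applies only for 0 <= v/c < 1")
    return math.sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

z_half = doppler_z(0.5)   # recession at half the speed of light
# z grows without bound as beta approaches 1.
```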
This post has 5 or 6 links to that Sci Am article by Lineweaver and Davis
It is post #65 on the Astronomy links sticky thread
It turns out the article was in the March 2005 issue.
I think it’s comparatively easy to read—well written. So it should help.
When you’ve read the Sci Am article, ask more questions—your questions might be fun to try and answer:-)
The Twin Paradox is usually explained away by arguing that the traveling twin feels the motion because of his acceleration/deceleration, and therefore ages slower.
But what will happen if the twins both accelerate symmetrically? That is, they start from rest from one space point with synchronized clocks, and get back to the same space point at rest by accelerating away from each other for some time and decelerating on the way back. By the symmetry of the problem, it seems that when the two clocks are together at the end of the journey, at the same point, and at rest with respect to each other, they have to agree.
Then again, during the whole journey, each clock is in motion (accelerated or not) with respect to the other one. In SR, every clock that is in motion with respect to an observer’s clock is supposed to run slower. Or, the observer’s clock is always the fastest. So, for each twin, the other clock must be running slower. However, when they come back together at the end of the journey, they have to agree. This can happen only if each twin sees the other’s clock running faster at some point during the journey. What does SR say will happen on this imaginary journey?
(Note that the acceleration of each twin can be made constant. Have the twins cross each other at a high speed at a constant linear deceleration. They will cross each other again at the same speed after some time. During the crossings, their clocks can be compared.)
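The symmetric journey can be checked numerically. The sketch below is my own illustration, with an assumed triangular speed profile and units where c = 1; it integrates each twin’s proper time in the frame of the starting point. Because dτ = √(1 − v²) dt depends only on speed, the mirror-image twins accumulate identical proper times, both less than the stay-at-home coordinate time.

```python
import math

def proper_time(speed_of_t, t_total, steps=100_000):
    """Accumulate proper time d(tau) = sqrt(1 - v^2) dt (c = 1) for a
    clock whose coordinate speed at time t is speed_of_t(t)."""
    dt = t_total / steps
    tau = 0.0
    for i in range(steps):
        v = speed_of_t((i + 0.5) * dt)   # midpoint rule
        tau += math.sqrt(1.0 - v * v) * dt
    return tau

# Triangular profile: accelerate to 0.8c in unit time, decelerate back.
# One twin moves right, the other left; the speed (hence d tau) is the same.
def twin_speed(t):
    return 0.8 * t if t < 1.0 else 0.8 * (2.0 - t)

tau_a = proper_time(twin_speed, 2.0)   # right-going twin
tau_b = proper_time(twin_speed, 2.0)   # left-going twin, same speed profile
# tau_a equals tau_b, and both are less than the stay-at-home time of 2.0.
```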
Farsight wrote: Time is a velocity-dependent subjective measure of event succession rather than something fundamental – the events mark the time, the time doesn’t mark the events. This means the stuff out there is space rather than space-time, and is an “aether” veiled by subjective time.
I like your definition of time. It is close to my own view that time is “unreal.” It is possible to treat space as real and space-time as something different, as you do. This calls for some careful thought. I will outline my thinking in this post and illustrate it with an example, if my friends don’t pull me out for lunch before I can finish.
The first question we need to ask ourselves is why space and time seem coupled. The answer is actually too simple to spot, and it is in your definition of time. Space and time mix through our concept of velocity and our brain’s ability to sense motion. There is an even deeper connection, which is that space is a cognitive representation of the photon inputs to our eyes, but we will get to that later.
Let’s assume for a second that we had a sixth sense that operated at infinite speed. That is, if a star explodes a million light years from us, we can sense it immediately. We will see it only after a million years, but we sense it instantly. I know, it is a violation of SR, cannot happen, and all that, but stay with me for a second. Now, a little bit of thinking will convince you that the space we sense using this hypothetical sixth sense is Newtonian. Here, space and time can be completely decoupled, absolute time can be defined, and so on. Starting from this space, we can actually work out how we will see it using light and our eyes, knowing that the speed of light is what it is. It will turn out, clearly, that we see events with a delay. That is a first-order (or static) effect. The second-order effect is the way we perceive objects in motion. It turns out that we will see a time dilation and a length contraction (for objects receding from us).
Let me illustrate it a little further using echolocation. Assume that you are a blind bat. You sense your space using sonar pings. Can you sense a supersonic object? If it is coming towards you, by the time the reflected ping reaches you, it has gone past you. If it is going away from you, your pings can never catch up. In other words, faster than sound travel is “forbidden.” If you make one more assumption – the speed of the pings is the same for all bats regardless of their state of motion – you derive a special relativity for bats where the speed of sound is the fundamental property of space and time!
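This “acoustic relativity” can be made quantitative in a toy way: replace c with the speed of sound in the Lorentz factor. The snippet below is purely illustrative (real acoustics does not work like this for sighted observers); it just shows how a finite signal speed generates SR-like dilation factors.

```python
import math

C_SOUND = 343.0   # m/s: the invariant signal speed in the bats' world

def sonar_gamma(v):
    """Time-dilation factor for the bats, with the speed of sound
    playing the role that c plays in special relativity."""
    beta = v / C_SOUND
    if beta >= 1.0:
        raise ValueError("supersonic motion is 'forbidden' for the bats")
    return 1.0 / math.sqrt(1.0 - beta * beta)

g = sonar_gamma(171.5)   # a bat cruising at half the speed of sound
# g is about 1.155: the moving bat's clock appears to run ~15% slow.
```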
We have to dig a little deeper and appreciate that space is no more real than time. Space is a cognitive construct created out of our sensory inputs. If the sense modality (light for us, sound for bats) has a finite speed, that speed will become a fundamental property of the resultant space. And space and time will be coupled through the speed of the sense modality.
This, of course, is only my own humble interpretation of SR. I wanted to post this on a new thread, but I get the feeling that people are a little too attached to their own views in this forum to be able to listen.
Leo wrote: Minkowski spacetime is one interpretation of the Lorentz transforms, but other interpretations, the original Lorentz-Poincaré Relativity or modernized versions of it with a wave model of matter (LaFreniere or Close or many others), work in a perfectly Euclidean 3D space.
So we end up with process slowdown and matter contraction, but NO time dilation or space contraction. The transforms are the same though. So why does one interpretation lead to tensor metric while the others don’t? Or do they all? I lack the theoretical background to answer the question.
If you define LT as a velocity dependent deformation of an object in motion, then you can make the transformation a function of time. There won’t be any warping and complications of metric tensors and stuff. Actually what I did in my book is something along those lines (though not quite), as you know.
The trouble arises when the transformation matrix is a function of the vector it is transforming. So, if you define LT as a matrix operation in a 4-D space-time, you can no longer make it a function of time through acceleration, any more than you can make it a function of position (as in a velocity field, for instance). The space-time warping is a mathematical necessity. Because of it, you lose coordinates, and the tools that we learn in our undergraduate years are no longer powerful enough to handle the problem.
In the “Philosophical Implications” forum, there was an attempt to incorporate acceleration into Lorentz transformation using some clever calculus or numerical techniques. Such an attempt will not work because of a rather interesting geometric reason. I thought I would post the geometric interpretation of Lorentz transformation (or how to go from SR to GR) here.
Let me start with a couple of disclaimers. First off, what follows is my understanding of LT/SR/GR. I post it here with the honest belief that it is right. Although I have enough academic credentials to convince myself of my infallibility, who knows? People much smarter than me get proven wrong every day. And, if we had our way, we would prove even Einstein himself wrong right here in this forum, wouldn’t we? Secondly, what I write may be too elementary for some of the readers, perhaps even insultingly so. I request them to bear with it, considering that some other readers may find it illuminating. Thirdly, this post is not a commentary on the rightness or wrongness of the theories; it is merely a description of what the theories say. Or rather, my version of what they say. With those disclaimers out of the way, let’s get started…
LT is a rotation in the 4-D space-time. Since it is not easy to visualize a 4-D space-time rotation, let’s start with a 2-D, pure space rotation. One fundamental property of a geometry (such as 2-D Euclidean space) is its metric tensor. The metric tensor defines the inner product between two vectors in the space. In normal (Euclidean or flat) spaces, it also defines the distance between two points (or the length of a vector).
Though the metric tensor has the dreaded “tensor” word in its name, once you define a coordinate system, it is only a matrix. For the Euclidean 2-D space with x and y coordinates, it is the identity matrix (two 1’s along the diagonal). Let’s call it G. The inner product between vectors A and B is A.B = Trans(A) G B, which works out to be a_x b_x + a_y b_y. Distance (or the length of A) can be defined as √(A.A) = √(a_x² + a_y²).
So far in the post, the metric tensor looks fairly useless, only because it is the identity matrix for Euclidean space. SR (or LT), on the other hand, uses Minkowski space, which has a metric that can be written with [-1, 1, 1, 1] along the diagonal, with all the other elements zero – assuming time t is the first component of the coordinate system. Let’s consider a 2-D Minkowski space for simplicity, with time (t) and distance (x) axes. (This is a bit of an over-simplification because this space cannot handle circular motion, which is popular in some threads.) In units that make c = 1, you can easily see that the invariant distance using this metric tensor is √(x² − t²).
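For readers who want to see this machinery in action, here is a toy numerical version (my illustration, not part of the original post): a 2-D space-time with c = 1, the metric as a plain matrix, and a check that a Lorentz boost leaves the Minkowski inner product unchanged. The event coordinates and the boost speed 0.6 are arbitrary choices.

```python
import math

# Metric tensors as 2x2 matrices, coordinates ordered (t, x), c = 1.
EUCLIDEAN = [[1.0, 0.0], [0.0, 1.0]]
MINKOWSKI = [[-1.0, 0.0], [0.0, 1.0]]   # diag(-1, 1), time first

def inner(g, a, b):
    """Inner product Trans(a) G b for 2-component vectors."""
    return sum(g[i][j] * a[i] * b[j] for i in range(2) for j in range(2))

def boost(v, event):
    """Lorentz boost of an event (t, x) by velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    t, x = event
    return (gamma * (t - v * x), gamma * (x - v * t))

event = (3.0, 1.0)
s2 = inner(MINKOWSKI, event, event)       # -t^2 + x^2 = -8: timelike interval
moved = boost(0.6, event)
s2_after = inner(MINKOWSKI, moved, moved)
# s2 equals s2_after (up to rounding): the interval is Lorentz-invariant,
# just as Euclidean length is invariant under an ordinary rotation.
```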
Leo wrote:I have some problems with the introductory part though, when you confront light travel effects and relativistic transforms. You correctly state that all perceptual illusions have been cleared away in the conception of Special Relativity, but you also say that these perceptual illusions remained as a subconscious basis for the cognitive model of Special Relativity. Do I understand what you mean or do I get it wrong?
The perceptual effects are known in physics; they are called Light Travel Time effects (LTT, to cook up an acronym). These effects are considered an optical illusion on the motion of the object under observation. Once you take out the LTT effects, you get the “real” motion of the object. This real motion is supposed to obey SR. This is the current interpretation of SR.
My argument is that the LTT effects are so similar to SR that we should think of SR as just a formalization of LTT. (In fact, a slightly erroneous formalization.) Many reasons for this argument:
1. We cannot disentangle the “optical illusion” because many underlying configurations give rise to the same perception. In other words, going from what we see to what is causing our perception is a one-to-many problem.
2. SR coordinate transformation is partially based on LTT effects.
3. LTT effects are stronger than relativistic effects.
Probably for these reasons, what SR does is say that what we see is what it is really like. It then tries to mathematically describe what we see. (This is what I meant by a formalization.) Later on, when we figured out that the LTT effects didn’t quite match with SR (as in the observation of “apparent” superluminal motion), we thought we had to “take out” the LTT effects and then say that the underlying motion (or space and time) obeyed SR. What I’m suggesting in my book and articles is that we should just guess what the underlying space and time are like and work out what our perception of them will be (because going the other way is an ill-posed one-to-many problem). My first guess, naturally, was Galilean space-time. This guess results in rather neat and simple explanations of GRBs and DRAGNs as luminal booms and their aftermath.
On the Daily Mail forum, one participant (called “whats-in-a-name”) started talking about The Unreal Universe on July 15, 2006. The book was attacked fairly viciously on the forum. I happened to see it during a Web search and decided to step in and defend it.
15 July, 2006
Posted by: whats-in-a-name on 15/07/06 at 09:28 AM
Ah, Kek, you’ve given me a further reason to be distracted from what I should be doing - and I can tell you that this is more interesting at the moment. I’ve been trying to formulate some ideas and there’s one coming - but I’ll have to give it to you in bits. I don’t want to delve into pseudoscience or take the woo-ish road that says that you can explain everything with quantum theory, but try starting here: http://theunrealuniverse.com/phys.shtml
The “Journal Article” link at the bottom touches on some of the points that we discussed elsewhere. It goes slightly off-topic, but you might also find the “Philosophy” link at the top left interesting.
Posted by: patopreto on 15/07/06 at 06:17 PM
Regarding that web site, wian. One does not need to read past this sentence –
The theories of physics are a description of reality. Reality is created out of the readings from our senses. Knowing that our senses all work using light as an intermediary, is it a surprise that the speed of light is of fundamental importance in our reality?
to realise that this web site is complete ignorant hokum. I stopped at that point.
16 July, 2006
Posted by: whats-in-a-name on 16/07/06 at 09:04 AM
I’ve just been back to read that bit more carefully. I don’t know why the writer phrased it like that, but surely what he meant was: (i) “Our perception of what is real is created out of the readings from our senses.” I think that most physicists wouldn’t argue with that, would they? At the quantum level, reality as we understand it doesn’t exist; you can only say that particles have more of a tendency to exist in one place or state than another. (ii) The information that we pick up from optical or radio telescopes, gamma-ray detectors and the like shows the state of distant objects as they were in the past, owing to the transit time of the radiation. Delving deeper into space therefore enables us to look further back into the history of the universe. It’s an unusual way to express the point, I agree, but it doesn’t devalue the other information on there. In particular, there are links to other papers that go into rather more detail, but I wanted to start with something that offered a more general view.
I get the impression that your study of physics is rather more advanced than mine - as I’ve said previously, I’m only an amateur, though I’ve probably taken my interest a bit further than most. I’m happy to be corrected if any of my reasoning is flawed, though what I’ve said so far is quite basic stuff.
The ideas that I’m trying to express in response to Keka’s challenge are my own, and again, I’m quite prepared to have you or anyone else knock them down. I’m still formulating my thoughts, and I wanted to start by considering the model that physicists use of the nature of matter, going down to the grainy structure of spacetime at the Planck length and quantum uncertainty.
I’ll have to come back to this in a day or two, but meanwhile if you or anyone else wants to offer an opposing view, please do.
Posted by: patopreto on 16/07/06 at 10:52 AM
I don’t know why the writer phrased it like that but surely what he meant was:
I think the writer is quite clear! WIAN – you have re-written what he says to mean something different.
The writer is quite clear – “Once we accept that space and time are a part of the cognitive model created by the brain, and that special relativity applies to the cognitive model, we can ponder over the physical causes behind the model, the absolute reality itself.”
Blah Blah Blah!
The writer, Manoj Thulasidas, is an employee of OCBC bank in Singapore and a self-described “amateur philosopher”. What he writes appears to be nothing more than a religiously influenced solipsistic philosophy. Solipsism is interesting as a philosophical standpoint but quickly falls apart. If Manoj can start his arguments from such shaky grounds without explanation, then I really have no other course to take than to accept his description of himself as an “amateur”.
Maybe back to MEQUACK!