Category Archives: Topical

Includes posts on physics, philosophy, science, quantitative finance, economics, the environment and so on.

Benford and Your Taxes

Nothing is certain except death and taxes, they say. On the death front, we are making some inroads with all our medical marvels, at least in postponing it if not actually avoiding it. But when it comes to taxes, we have no defense other than a bit of creativity in our tax returns.

Let's say Uncle Sam thinks you owe him $75k. In your honest opinion, the fair figure is somewhere around the $50k mark. So you comb through your deductible receipts. After countless hours of hard work, you bring the number down to, say, $65k. As a quant, you can estimate the probability of an IRS audit. And you can put a number (an expected value in dollars) on the pain and suffering that may result from it.

Suppose you calculate the risk of a tax audit to be about 1% and decide that it is worth the risk to be creative in your deduction claims to the tune of $15k. You file the tax return and sit tight, smug in the knowledge that the odds of getting audited are fairly slim. You are in for a big surprise. You will get well and truly fooled by randomness, and the IRS will almost certainly want to take a closer look at your tax return.

Calculated creativity in tax returns seldom pays off. Your computations of expected pain and suffering never jibe with the frequency with which the IRS audits you. The probability of an audit is, in fact, much higher if you try to inflate your tax deductions. You can blame Benford for this skew in the probabilities stacked against your favor.

Skepticism

Benford presented something very counter-intuitive in his 1938 article [1]. He asked the question: What is the distribution of the first digits in any numeric, real-life data? At first glance, the answer seems obvious. All digits should be equally likely. Why would there be a preference for any particular digit in random data?

Figure 1. The frequency of occurrence of the first digits in the notional amounts of financial transactions. The purple curve is the predicted distribution. Note that the small excesses at 1 and 5 above the purple curve are expected, because people tend to choose notionals like 1/5/10/50/100 million. The excess at 8 is also expected, since it is considered a lucky number in Asia.

Benford showed that the first digit in a “naturally occurring” number is far more likely to be 1 than any other digit. In fact, each digit has a specific probability of being in the first position. The digit 1 has the highest probability; the digit 2 is about 40% less likely to be in the first position, and so on. The digit 9 has the lowest probability of all; it is about six times less likely to be in the first position.

When I first heard of this first-digit phenomenon from a well-informed colleague, I thought it was weird. I would have naively expected to see more or less the same frequency of occurrence for all digits from 1 to 9. So I collected a large amount of financial data, about 65,000 numbers (as many as Excel would let me), and looked at the first digits. I found Benford to be absolutely right, as shown in Figure 1.

The probability of the first digit is far from uniform, as Figure 1 shows. The distribution is, in fact, logarithmic. The probability of any digit d being in the first position is given by log(1 + 1/d), which is the purple curve in Figure 1.
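In code, these predicted probabilities are a one-liner. Here is a minimal Python sketch (the values it prints are what the purple curve plots):

```python
import math

# Benford's law: probability that the first significant digit is d.
for d in range(1, 10):
    print(f"digit {d}: {math.log10(1 + 1 / d):.3f}")

# digit 1: 0.301, digit 2: 0.176, ..., digit 9: 0.046;
# so 1 is roughly 6.5 times as likely as 9 in the first position.
```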

This skewed distribution is not an anomaly in the data I happened to look at. It is the rule in any “naturally occurring” data. It is Benford's law. Benford collected a large amount of naturally occurring data (including populations, areas of rivers, physical constants, numbers appearing in newspaper reports and so on) and showed that this empirical law is respected.

Simulation

As a quantitative developer, I tend to simulate things on a computer in the hope that I may be able to see patterns that will help me understand the problem. The first question to settle in the simulation is what the probability distribution of a vague quantity like “naturally occurring numbers” would be. Once I have the distribution, I can generate numbers and look at the first digits to see their frequency of occurrence.

For a mathematician or a quant, nothing is more natural than the natural logarithm. So the first candidate distribution for naturally occurring numbers is something like RV exp(RV), where RV is a uniformly distributed random variable (between zero and ten). The logic behind this choice is the assumption that the number of digits in naturally occurring numbers is uniformly distributed between zero and some upper limit.

In fact, you can choose other, fancier distributions for naturally occurring numbers. I tried a couple of other distributions using two uniformly distributed (between zero and ten) random variables RV1 and RV2: RV1 exp(RV2) and exp(RV1 RV2). All of these distributions turn out to be good guesses for naturally occurring numbers, as illustrated in Figure 2.
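Here is a minimal Python sketch of that simulation. The sample size is an arbitrary choice of mine, and the three generators are my reading of the candidate distributions described above:

```python
import math
import random
from collections import Counter

def first_digit(x):
    """First significant decimal digit of a positive number."""
    return int(f"{x:.6e}"[0])

def rv_exp_rv():            # single uniform RV in (0, 10)
    r = random.uniform(0, 10)
    return r * math.exp(r)

def rv1_exp_rv2():          # two independent uniform RVs
    return random.uniform(0, 10) * math.exp(random.uniform(0, 10))

def exp_rv1_rv2():
    return math.exp(random.uniform(0, 10) * random.uniform(0, 10))

N = 100_000                 # sample size, an arbitrary choice
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

for name, gen in [("RV exp(RV)", rv_exp_rv),
                  ("RV1 exp(RV2)", rv1_exp_rv2),
                  ("exp(RV1 RV2)", exp_rv1_rv2)]:
    counts = Counter(first_digit(gen()) for _ in range(N))
    print(name)
    for d in range(1, 10):
        print(f"  {d}: simulated {counts[d] / N:.3f}, Benford {benford[d]:.3f}")
```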

Figure 2. The distribution of the first digits in the simulation of “naturally occurring” numbers, compared with the prediction.

The first digits of the numbers I generated follow Benford's law to an uncanny degree of accuracy. Why does this happen? One nice thing about a computer simulation is that you can dig deeper and look at the intermediate results. For instance, in our first simulation using the distribution RV exp(RV), we can ask the question: What are the values of RV that give rise to a particular first digit? The answer is shown in Figure 3a. Note that the intervals in RV that give the first digit 1 are much longer than those that give 9. About six times longer, in fact, as expected. Notice how the pattern repeats itself as the simulated naturally occurring numbers “roll over” from a first digit of 9 to 1 (like a speeding odometer).
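A quick back-of-the-envelope check of those interval lengths, assuming the same RV exp(RV) form as above: scanning the RV range numerically and adding up how much of it lands on each first digit reproduces the roughly six-to-one ratio between digits 1 and 9.

```python
import math
from collections import defaultdict

def first_digit(x):
    """First significant decimal digit of a positive number."""
    return int(f"{x:.6e}"[0])

# Scan 0 < r < 10 on a fine grid and measure how much of the range
# maps to each first digit under r -> r * exp(r) (as in Figure 3a).
step = 1e-4                      # grid spacing; an arbitrary choice
lengths = defaultdict(float)
for i in range(1, 100_000):
    r = i * step
    lengths[first_digit(r * math.exp(r))] += step

total = sum(lengths.values())
for d in range(1, 10):
    print(f"digit {d}: fraction of the RV range = {lengths[d] / total:.3f}")
# The digit-1 intervals take up roughly log10(2) = 0.30 of the range,
# about six times the share of the digit-9 intervals.
```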

Figure 3a. The intervals in a uniformly distributed (between 0 and 10) random variable RV that result in different first digits of RV exp(RV). Note that the first digit 1 occurs much more frequently than the rest, as expected.

A similar pattern can be seen in our fancier simulations with two random variables. The regions in their joint distribution that give rise to the various first digits of RV1 exp(RV2) are shown in Figure 3b. Note the large areas of deep blue (corresponding to the first digit 1) and compare their area to that of the red bands (for the first digit 9).

Figure 3b. The regions in the joint distribution of two uniformly distributed (between 0 and 10) random variables RV1 and RV2 that result in different first digits of RV1 exp(RV2).

This exercise gives me the insight I was hoping to gather from the simulation. The reason for the preponderance of smaller digits in the first position is that the distribution of naturally occurring numbers is usually a tapering one; there is typically an upper limit to the numbers, and as you approach that upper limit, the probability density becomes smaller and smaller. As you pass a first digit of 9 and roll over to 1, the range suddenly becomes much larger.

While this explanation is satisfying, the surprising fact is that it does not matter how the probability of the natural distributions tapers off. It is almost like the central limit theorem. Of course, this little simulation is no rigorous proof. If you are looking for a rigorous proof, you can find it in Hill's work [3].

Fraud Detection

Although our tax-evasion troubles can be attributed to Benford, the first-digit phenomenon was originally described in an article by Simon Newcomb [2] in the American Journal of Mathematics in 1881. It was rediscovered by Frank Benford in 1938, to whom all the glory (or the blame, depending on which side of the fence you find yourself on) went. In fact, the real culprit behind our tax troubles may have been Theodore Hill. He brought the obscure law into the limelight in a series of articles in the 1990s. He even presented a statistical proof [3] of the phenomenon.

Apart from causing our personal tax troubles, Benford's law can play a crucial role in many other checks for fraud and irregularities [4]. For instance, the first-digit distribution in the accounting entries of a company can reveal bouts of creativity. Employee expense claims, check amounts, salary figures, grocery prices — everything is subject to Benford's law. It can even be used to detect market manipulation, because the first digits of stock prices, for instance, are expected to follow the Benford distribution. If they do not, we have to be wary.
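As an illustration of how simple such a screen can be, here is a hedged sketch in Python: it compares the observed first-digit frequencies of a list of amounts with the Benford probabilities using a chi-square statistic. The threshold below (the 1% critical value for 8 degrees of freedom) is my own illustrative choice, not an auditing standard from [4].

```python
import math
from collections import Counter

def benford_screen(amounts, threshold=20.09):
    """Flag a list of positive amounts whose first-digit frequencies
    deviate too much from Benford's law (chi-square statistic against
    the expected counts; 20.09 is the 1% critical value for 8 d.o.f.)."""
    digits = [int(f"{a:.6e}"[0]) for a in amounts if a > 0]
    n = len(digits)
    if n == 0:
        return 0.0, False
    counts = Counter(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (counts[d] - expected) ** 2 / expected
    return chi2, chi2 > threshold   # (statistic, flag for closer review)
```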

Moral

Figure 4. The joint distribution of the first and second digits in a simulation, showing the effects of correlation.

The moral of the story is simple: do not get creative on your tax returns. You will get caught. You might think that you can use this Benford distribution to generate a more realistic pattern of deductions. But the job is harder than it sounds. Although I did not mention it, there is a correlation between the digits. The probability of the second digit being 2, for instance, depends on what the first digit is. Look at Figure 4, which shows the correlation structure in one of my simulations.
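The correlation is easy to see in a simulation. The sketch below (again my own illustration, reusing the RV exp(RV) generator from above) tabulates the second-digit frequencies conditioned on the first digit and compares them with the generalized, two-digit form of Benford's law, in which the probability of the leading pair d1 d2 is log10(1 + 1/(10 d1 + d2)):

```python
import math
import random
from collections import Counter

def first_two_digits(x):
    """First two significant decimal digits of a positive number."""
    s = f"{x:.6e}"              # e.g. 3.141593e+04
    return int(s[0]), int(s[2])

def rv_exp_rv():                # simulated "naturally occurring" numbers
    r = random.uniform(0, 10)
    return r * math.exp(r)

N = 200_000
pairs = Counter(first_two_digits(rv_exp_rv()) for _ in range(N))

# Conditional frequency of the second digit given the first digit,
# compared with the two-digit (generalized) Benford probability.
for d1 in (1, 9):               # contrast the two extremes
    n1 = sum(pairs[(d1, d2)] for d2 in range(10))
    for d2 in range(10):
        simulated = pairs[(d1, d2)] / n1
        benford = (math.log10(1 + 1 / (10 * d1 + d2))
                   / math.log10(1 + 1 / d1))
        print(f"P(second={d2} | first={d1}): "
              f"simulated {simulated:.3f}, Benford {benford:.3f}")
```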

Besides, the IRS system is likely to be far more sophisticated. For instance, they could be using advanced data-mining or pattern-recognition systems such as neural networks or support vector machines. Remember that the IRS has labeled data (the tax returns of those who tried in vain to cheat, and those of good citizens), and they can easily train classifier programs to catch budding tax evaders. If they are not using such sophisticated pattern-recognition algorithms yet, trust me, they will after seeing this article. When it comes to taxes, randomness will always fool you because it is stacked against you.

But, on a more serious note, Benford's law is a tool we have to be aware of. It may come to our aid in unexpected ways when we find ourselves doubting the authenticity of all kinds of numeric data. A screen based on the law is easy to implement and hard to circumvent. It is fairly simple and universal. So let's not try to beat Benford; let's join him instead.

References
[1] Benford, F. “The Law of Anomalous Numbers.” Proc. Amer. Phil. Soc. 78, 551-572, 1938.
[2] Newcomb, S. “Note on the Frequency of Use of the Different Digits in Natural Numbers.” Amer. J. Math. 4, 39-40, 1881.
[3] Hill, T. P. “A Statistical Derivation of the Significant-Digit Law.” Stat. Sci. 10, 354-363, 1996.
[4] Nigrini, M. “I've Got Your Number.” J. Accountancy 187, pp. 79-83, May 1999. http://www.aicpa.org/pubs/jofa/may1999/nigrini.htm.

Photo by LendingMemo

Tsunami

The Asian Tsunami two and a half years ago unleashed a tremendous amount of energy on the coastal regions around the Indian Ocean. What do you think would have happened to this energy if there had been no water to carry it away from the earthquake? I mean, if the earthquake (of the same kind and magnitude) had taken place on land instead of the sea-bed as it did, presumably this energy would still have been present. How would it have manifested itself? As a more violent earthquake? Or a longer one?

I picture the earthquake (in cross-section) as a cantilever spring being held down and then released. The spring then transfers the energy to the tsunami in the form of potential energy, as an increase in the water level. As the tsunami radiates out, it is only the potential energy that is transferred; the water doesn’t move laterally, only vertically. As it hits the coast, the potential energy is transferred into the kinetic energy of the waves hitting the coast (water moving laterally then).

Given the magnitude of the energy transferred from the epicenter, I am speculating what would’ve happened if there was no mechanism for the transfer. Any thoughts?

Quant Life in Singapore

Singapore is a tiny city-state. Despite its diminutive size, Singapore has considerable financial muscle. It has been rated the fourth most active foreign exchange trading hub, and a major wealth management center in Asia, with funds amounting to almost half a trillion dollars, according to the Monetary Authority of Singapore. This mighty financial clout has its origins in a particularly pro-business atmosphere, world class (well, better than world class, in fact) infrastructure, and the highly skilled, cosmopolitan workforce–all of which Singapore is rightfully proud of.

Among the highly skilled workforce are scattered a hundred or so typically timid and self-effacing souls with bulging foreheads and dreamy eyes behind thick glasses. They are the Singaporean quants, and this short article is their story.

Quants command enormous respect for their intellectual prowess and mathematical knowledge. With flattering epithets like “rocket scientists” or simply “the brain,” quants silently go about their jobs of validating pricing models, writing C++ programs and developing complicated spreadsheet solutions.

But knowledge is a tricky thing to have in Asia. If you are known for your expertise, it can backfire on you at times. Unless you are careful, others will take advantage of your expertise and dump their responsibilities on you. You may not mind it as long as they respect your expertise. But, they often hog the credit for your work and present their ability to evade work as people management skills. And people managers (who may not actually know much) do get better compensated. This paradox is a fact of quant life in Singapore. The admiration that quants enjoy does not always translate to riches here.

This disparity in compensation may be okay. Quants are not terribly interested in money for one logical reason–in order to make a lot of it, you have to work long hours. And if you work long hours, when do you get to spend the money? What does it profit a man to amass all the wealth in the world if he doesn’t have the time to spend it?

Besides, quants seem to play by a different set of rules. They are typically perfectionist by nature. At least, I am, when it comes to certain aspects of work. I remember once when I was writing my PhD thesis, I started the day at around nine in the morning and worked all the way past midnight with no break. No breakfast, lunch or dinner. I wasn’t doing ground-breaking research on that particular day, just trying to get a set of numbers (branching ratios, as they were called) and their associated errors consistent. Looking back at it now, I can see that one day of starvation was too steep a price to pay for the consistency.

Similar bouts of perfectionism might grip some of us from time to time, forcing us to invest inordinate amounts of work for incremental improvements, and propelling us to higher levels of glory. The frustrating thing from the quants’ perspective is when the glory gets hogged by a middle-level people manager. It does happen, time and again. The quants are then left with little more than their flattering epithets.

I’m not painting all people managers with the same unkindly stroke; not all of them have been seduced by the dark side of the force. But I know some of them who actively hone their ignorance as a weapon. They plead ignorance to pass their work on to other unsuspecting worker bees, including quants.

The best thing a quant can hope for is fair compensation for his hard work. Money may not be important in and of itself, but what it says about you and your station in the corporate pecking order may be of interest. Empty epithets are cheap, but when it comes to showing real appreciation, hard cash is what matters, especially in our line of work.

Besides, corporate appreciation breeds confidence and a sense of self-worth. I feel that confidence is lacking among Singaporean quants. Some of them are really among the cleverest people I have met. And I have traveled far and wide and met some very clever people indeed. (Once I was in a CERN elevator with two Nobel laureates, as I will never tire of mentioning.)

This lack of confidence, and not lack of expertise or intelligence, is the root cause behind the dearth of quality work coming out of Singapore. We seem to keep ourselves happy with fairly mundane and routine tasks of implementing models developed by superior intelligences and validating the results.

Why not take a chance and dare to be wrong? I do it all the time. For instance, I think that there is something wrong with a Basel II recipe and I am going to write an article about it. I have published a physics article in a well-respected physics journal implying, among other things, that Einstein himself may have been slightly off the mark! See for yourself at http://TheUnrealUniverse.com.

Asian quants are the ones closest to the Asian market. For structures and products specifically tailored to this market, how come we don’t develop our own pricing models? Why do we wait for the Mertons and Hulls of the world?

In our defense, maybe some of the confident ones who do develop pricing models move out of Asia. The CDO guru David Li is a case in point. But, on the whole, the intellectual contribution to modern quantitative finance looks disproportionately lopsided in favor of the West. This may change in the near future, when the brain banks in India and China open up and smell blood in this niche field of ours.

Another quality that is missing among us Singaporean parishioners is an appreciation of the big picture. Clichés like the “Big Picture” and the “Value Chain” have been overused by the afore-mentioned middle-level people managers on techies (a category of dubious distinction into which we quants also fall, to our constant chagrin) to devastating effect. Such phrases have rained terror on techies and quants and relegated them to demoralizing assignments with challenges far below their intellectual potential.

Maybe it is a sign of my underestimating the power of the dark side, but I feel that the big picture is something we have to pay attention to. Quants in Singapore seem to do what they are asked to do. They do it well, but they do it without questioning. We should be more aware of the implications of our work. If we recommend Monte Carlo as the pricing model for a certain option, will the risk oversight manager be in a pickle because his VaR report takes too long to run? If we suggest capping methods to renormalize divergent sensitivities of certain products due to discontinuities in their payoff functions, how will we affect the regulatory capital charges? Will our financial institution stay compliant? Quants may not be expected to know all these interconnected issues. But an awareness of such connections may add value (gasp, another managerial phrase!) to our office in the organization.

For all these reasons, we in Singapore end up importing talent. This practice opens up another can of polemic worms. Are they compensated a bit too fairly? Do we get blinded by their impressive labels, while losing sight of their real level of talent? How does the generous compensation scheme for the foreign talents affect the local talents?

But these issues may be transitory. The Indians and Chinese are waking up, not just in terms of their economies, but also by unleashing their tremendous talent pool in an increasingly globalizing labor market. They (or should I say we?) will force a rethinking of what we mean when we say talent. The trickle of talent we see now is only the tip of the iceberg. Here is an illustration of what is in store, from a BBC report citing the Royal Society of Chemistry.

China Test
National test set by Chinese education authorities for pre-entry students: As shown in the figure, in square prism ABCD-A_1B_1C_1D_1, AB=AD=2, DC=2\sqrt{3}, AA_1=\sqrt{3}, AD\perp DC, AC\perp BD, and the foot of the perpendicular is E.

  1. Prove: BD\perp A_1C
  2. Determine the angle between the two planes A_1BD and BC_1D
  3. Determine the angle formed by lines AD and BC_1 which are in different planes.
UK Test
Diagnostic test set by an English university for first-year students: In the diagram (not drawn to scale), angle ABC is a right angle, AB = 3 m, BC = 4 m.

  1. What is the length AC?
  2. What is the area of triangle ABC (above)?
  3. What is the tan of the angle ABC (above) as a fraction?

The end result of such demanding pre-selection criteria is beginning to show in the quality of the research papers coming out of the selected ones, both in China and India. This talent show is not limited to fundamental research; applied fields, including our niche of quantitative finance, are also getting a fair dose of this oriental medicine.

Singapore will only benefit from this regional infusion of talent. Our young nation has an equally young (professionally, that is) quant team. We will have to improve our skills and knowledge. And we will need to be more vocal and assertive before the world notices us and acknowledges us. We will get there. After all, we are from Singapore–an Asian tiger used to beating the odds.

Photo by hslo

Universe – Size and Age

I posted this question that was bothering me when I read that they had found a galaxy about 13 billion light years away. My understanding of that statement is: at a distance of 13 billion light years, there was a galaxy 13 billion years ago, so that we can see the light from it now. Wouldn’t that mean that the universe is at least 26 billion years old? It must have taken the galaxy about 13 billion years to reach where it appears to be, and the light from it must take another 13 billion years to reach us.

In answering my question, Martin and swansont (who I assume are academic physicists) point out my misconceptions and essentially ask me to learn more. All shall be answered when I’m assimilated, it would appear! 🙂

This debate is published as a prelude to my post on the Big Bang theory, coming up in a day or two.

Mowgli 03-26-2007 10:14 PM

Universe – Size and Age
I was reading a post in http://www.space.com/ stating that they found a galaxy at about 13 billion light years away. I am trying to figure out what that statement means. To me, it means that 13 billion years ago, this galaxy was where we see it now. Isn’t that what 13b LY away means? If so, wouldn’t that mean that the universe has to be at least 26 billion years old? I mean, the whole universe started from one singular point; how could this galaxy be where it was 13 billion years ago unless it had at least 13 billion years to get there? (Ignoring the inflationary phase for the moment…) I have heard people explain that the space itself is expanding. What the heck does that mean? Isn’t it just a fancier way of saying that the speed of light was smaller some time ago?
swansont 03-27-2007 09:10 AM

Quote:

Originally Posted by Mowgli
(Post 329204)
I mean, the whole universe started from one singular point; how could this galaxy be where it was 13 billion years ago unless it had at least 13 billion years to get there? (Ignoring the inflationary phase for the moment…)

Ignoring all the rest, how would this mean the universe is 26 billion years old?

Quote:

Originally Posted by Mowgli
(Post 329204)
I have heard people explain that the space itself is expanding. What the heck does that mean? Isn’t it just a fancier way of saying that the speed of light was smaller some time ago?

The speed of light is an inherent part of atomic structure, in the fine structure constant (alpha). If c was changing, then the patterns of atomic spectra would have to change. There hasn’t been any confirmed data that shows that alpha has changed (there has been the occasional paper claiming it, but you need someone to repeat the measurements), and the rest is all consistent with no change.

Martin 03-27-2007 11:25 AM

To confirm or reinforce what swansont said, there are speculations and some fringe or nonstandard cosmologies that involve c changing over time (or alpha changing over time), but the changing constants thing just gets more and more ruled out. I’ve been watching for over 5 years, and the more people look and study the evidence, the LESS likely it seems that there is any change. They rule it out more and more accurately with their data. So it is probably best to ignore the “varying speed of light” cosmologies until one is thoroughly familiar with standard mainstream cosmology. You have misconceptions, Mowgli:

  • General Relativity (the 1915 theory) trumps Special Rel (1905)
  • They don’t actually contradict if you understand them correctly, because SR has only a very limited local applicability, like to the spaceship passing by:-)
  • Wherever GR and SR SEEM to contradict, believe GR. It is the more comprehensive theory.
  • GR does not have a speed limit on the rate at which very great distances can increase. The only speed limit is on LOCAL stuff (you can’t catch up with and pass a photon)
  • So we can and DO observe stuff that is receding from us faster than c. (It’s far away, SR does not apply.)
  • This was explained in a Sci Am article I think last year
  • Google the author’s name Charles Lineweaver and Tamara Davis.
  • We know about plenty of stuff that is presently more than 14 billion LY away.
  • You need to learn some cosmology so you won’t be confused by these things.
  • Also a “singularity” does not mean a single point. That is a popular mistake because the words SOUND the same.
  • A singularity can occur over an entire region, even an infinite region.

Also the “big bang” model doesn’t look like an explosion of matter whizzing away from some point. It shouldn’t be imagined like that. The best article explaining common mistakes people have is this Lineweaver and Davis thing in Sci Am. I think it was Jan or Feb 2005 but I could be a year off. Google it. Get it from your local library or find it online. Best advice I can give.

Mowgli 03-28-2007 01:30 AM

To swansont on why I thought 13 b LY implied an age of 26 b years: When you say that there is a galaxy 13 b LY away, I understand it to mean that 13 billion years ago my time, the galaxy was at the point where I see it now (which is 13 b LY away from me). Knowing that everything started from the same point, it must have taken the galaxy at least 13 b years to get where it was 13 b years ago. So 13+13. I’m sure I must be wrong. To Martin: You are right, I need to learn quite a bit more about cosmology. But a couple of things you mentioned surprise me — how do we observe stuff that is receding from us FTL? I mean, wouldn’t the relativistic Doppler shift formula give an imaginary 1+z? And the stuff beyond 14 b LY away – are they “outside” the universe? I will certainly look up and read the authors you mentioned. Thanks.
swansont 03-28-2007 03:13 AM

Quote:

Originally Posted by Mowgli
(Post 329393)
To swansont on why I thought 13 b LY implied an age of 26 b years: When you say that there is a galaxy at 13 b LY away, I understand it to mean that 13 billion years ago my time, the galaxy was at the point where I see it now (which is 13 b LY away from me). Knowing that everything started from the same point, it must have taken the galaxy at least 13 b years to get where it was 13 b years ago. So 13+13. I’m sure I must be wrong.

That would depend on how you do your calibration. Looking only at a Doppler shift and ignoring all the other factors, if you know that speed correlates with distance, you get a certain redshift and you would probably calibrate that to mean 13b LY if that was the actual distance. That light would be 13b years old.

But as Martin has pointed out, space is expanding; the cosmological redshift is different from the Doppler shift. Because the intervening space has expanded, AFAIK the light that gets to us from a galaxy 13b LY away is not as old, because it was closer when the light was emitted. I would think that all of this is taken into account in the measurements, so that when a distance is given to the galaxy, it’s the actual distance.

Martin 03-28-2007 08:54 AM

Quote:

Originally Posted by Mowgli
(Post 329393)
I will certainly look up and read the authors you mentioned.

This post has 5 or 6 links to that Sci Am article by Lineweaver and Davis

http://scienceforums.net/forum/showt…965#post142965

It is post #65 on the Astronomy links sticky thread

It turns out the article was in the March 2005 issue.

I think it’s comparatively easy to read—well written. So it should help.

When you’ve read the Sci Am article, ask more questions—your questions might be fun to try and answer:-)

Twin Paradox – Take 2

The Twin Paradox is usually explained away by arguing that the traveling twin feels the motion because of his acceleration/deceleration, and therefore ages slower.

But what will happen if the twins both accelerate symmetrically? That is, they start from rest from one space point with synchronized clocks, and get back to the same space point at rest by accelerating away from each other for some time and decelerating on the way back. By the symmetry of the problem, it seems that when the two clocks are together at the end of the journey, at the same point, and at rest with respect to each other, they have to agree.

Then again, during the whole journey, each clock is in motion (accelerated or not) with respect to the other one. In SR, every clock that is in motion with respect to an observer’s clock is supposed to run slower. Or, the observer’s clock is always the fastest. So, for each twin, the other clock must be running slower. However, when they come back together at the end of the journey, they have to agree. This can happen only if each twin sees the other’s clock running faster at some point during the journey. What does SR say will happen in this imaginary journey?

(Note that the acceleration of each twin can be made constant. Have the twins cross each other at a high speed at a constant linear deceleration. They will cross each other again at the same speed after some time. During the crossings, their clocks can be compared.)

Unreal Time

Farsight wrote: Time is a velocity-dependent subjective measure of event succession rather than something fundamental – the events mark the time, the time doesn’t mark the events. This means the stuff out there is space rather than space-time, and is an “aether” veiled by subjective time.

I like your definition of time. It is close to my own view that time is “unreal.” It is possible to treat space as real and space-time as something different, as you do. This calls for some careful thought. I will outline my thinking in this post and illustrate it with an example, if my friends don’t pull me out for lunch before I can finish. :)

The first question we need to ask ourselves is why space and time seem coupled. The answer is actually too simple to spot, and it is in your definition of time. Space and time mix through our concept of velocity and our brain’s ability to sense motion. There is an even deeper connection, which is that space is a cognitive representation of the photon inputs to our eyes, but we will get to that later.

Let’s assume for a second that we had a sixth sense that operated at an infinite speed. That is, if a star explodes a million light years from us, we can sense it immediately. We will see it only after a million years, but we sense it instantly. I know, it is a violation of SR, cannot happen and all that, but stay with me for a second. Now, a little bit of thinking will convince you that the space we sense using this hypothetical sixth sense is Newtonian. Here, space and time can be completely decoupled, absolute time can be defined, and so on. Starting from this space, we can actually work out how we would see it using light and our eyes, knowing that the speed of light is what it is. It will turn out, clearly, that we see events with a delay. That is a first order (or static) effect. The second order effect is the way we perceive objects in motion. It turns out that we will see a time dilation and a length contraction (for objects receding from us).

Let me illustrate it a little further using echolocation. Assume that you are a blind bat. You sense your space using sonar pings. Can you sense a supersonic object? If it is coming towards you, by the time the reflected ping reaches you, it has gone past you. If it is going away from you, your pings can never catch up. In other words, faster than sound travel is “forbidden.” If you make one more assumption – the speed of the pings is the same for all bats regardless of their state of motion – you derive a special relativity for bats where the speed of sound is the fundamental property of space and time!

We have to dig a little deeper and appreciate that space is no more real than time. Space is a cognitive construct created out of our sensory inputs. If the sense modality (light for us, sound for bats) has a finite speed, that speed will become a fundamental property of the resultant space. And space and time will be coupled through the speed of the sense modality.

This, of course, is only my own humble interpretation of SR. I wanted to post this on a new thread, but I get the feeling that people are a little too attached to their own views in this forum to be able to listen.

Leo wrote: Minkowski spacetime is one interpretation of the Lorentz transforms, but other interpretations, the original Lorentz-Poincaré Relativity or modernized versions of it with a wave model of matter (LaFreniere or Close or many others), work in a perfectly euclidean 3D space.

So we end up with process slowdown and matter contraction, but NO time dilation or space contraction. The transforms are the same though. So why does one interpretation lead to tensor metric while the others don’t? Or do they all? I lack the theoretical background to answer the question.

Hi Leo,

If you define LT as a velocity dependent deformation of an object in motion, then you can make the transformation a function of time. There won’t be any warping and complications of metric tensors and stuff. Actually what I did in my book is something along those lines (though not quite), as you know.

The trouble arises when the transformation matrix is a function of the vector it is transforming. So, if you define LT as a matrix operation in a 4-D space-time, you can no longer make it a function of time through acceleration any more than you can make it a function of position (as in a velocity field, for instance). The space-time warping is a mathematical necessity. Because of it, you lose coordinates, and the tools that we learn in our undergraduate years are no longer powerful enough to handle the problem.

Of Rotation, LT and Acceleration

In the “Philosophical Implications” forum, there was an attempt to incorporate acceleration into Lorentz transformation using some clever calculus or numerical techniques. Such an attempt will not work because of a rather interesting geometric reason. I thought I would post the geometric interpretation of Lorentz transformation (or how to go from SR to GR) here.

Let me start with a couple of disclaimers. First off, what follows is my understanding of LT/SR/GR. I post it here with the honest belief that it is right. Although I have enough academic credentials to convince myself of my infallibility, who knows? People much smarter than me get proven wrong every day. And, if we had our way, we would prove even Einstein himself wrong right here in this forum, wouldn’t we? :D Secondly, what I write may be too elementary for some of the readers, perhaps even insultingly so. I request them to bear with it, considering that some other readers may find it illuminating. Thirdly, this post is not a commentary on the rightness or wrongness of the theories; it is merely a description of what the theories say. Or rather, my version of what they say. With those disclaimers out of the way, let’s get started…

LT is a rotation in the 4-D space-time. Since it is not easy to visualize 4-D space-time rotation, let’s start with a 2-D, pure space rotation. One fundamental property of a geometry (such as 2-D Euclidean space) is its metric tensor. The metric tensor defines the inner product between two vectors in the space. In normal (Euclidean or flat) spaces, it also defines the distance between two points (or the length of a vector).

Though the metric tensor has the dreaded “tensor” word in its name, once you define a coordinate system, it is only a matrix. For Euclidean 2-D space with x and y coordinates, it is the identity matrix (two 1’s along the diagonal). Let’s call it G. The inner product between vectors A and B is A.B = Trans(A) G B, which works out to be a_1b_1+a_2b_2. Distance (or length of A) can be defined as \sqrt{A.A}.

So far in the post, the metric tensor looks fairly useless, only because it is the identity matrix for Euclidean space. SR (or LT), on the other hand, uses Minkowski space, which has a metric that can be written with [-1, 1, 1, 1] along the diagonal with all other elements zero – assuming time t is the first component of the coordinate system. Let’s consider a 2-D Minkowski space for simplicity, with time (t) and distance (x) axes. (This is a bit of over-simplification because this space cannot handle circular motion, which is popular in some threads.) In units that make c = 1, you can easily see that the invariant distance using this metric tensor is \sqrt{x^2 - t^2}.
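As a small numerical illustration of this (my own sketch, with c = 1 and a 2-D (t, x) slice): a Lorentz boost is a hyperbolic rotation, and the Minkowski inner product defined by G = diag(-1, 1) is left unchanged by it, just as an ordinary rotation leaves the Euclidean length unchanged.

```python
import math

def minkowski_dot(a, b):
    """Inner product Trans(A) G B with G = diag(-1, 1) in (t, x) coordinates."""
    return -a[0] * b[0] + a[1] * b[1]

def boost(v, rapidity):
    """Lorentz boost = hyperbolic rotation by the given rapidity (c = 1)."""
    ch, sh = math.cosh(rapidity), math.sinh(rapidity)
    t, x = v
    return (ch * t - sh * x, -sh * t + ch * x)

event = (2.0, 5.0)                  # (t, x), an arbitrary example point
boosted = boost(event, 0.7)         # rapidity 0.7, i.e. speed tanh(0.7) ~ 0.6c

print(minkowski_dot(event, event))      # x^2 - t^2 = 25 - 4 = 21
print(minkowski_dot(boosted, boosted))  # same value (up to rounding):
                                        # the "rotation" preserves the interval
```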

Continued…

The Unreal Universe — Discussion with Gibran

Hi again. You raise a lot of interesting questions. Let me try to answer them one by one.

You’re saying that our observations of an object moving away from us would look identical in either an SR or Galilean context, and therefore this is not a good test for SR.

What I’m saying is slightly different. The coordinate transformation in SR is derived considering only receding objects and sensing it using radar-like round trip light travel time. It is then assumed that the transformation laws thus derived apply to all objects. Because the round trip light travel is used, the transformation works for approaching objects as well, but not for things moving in other directions. But SR assumes that the transformation is a property of space and time and asserts that it applies to all moving (inertial) frames of reference regardless of direction.

We have to go a little deeper and ask ourselves what that statement means, what it means to talk about the properties of space. We cannot think of a space independent of our perception. Physicists are typically not happy with this starting point of mine. They think of space as something that exists independent of our sensing it. And they insist that SR applies to this independently existing space. I beg to differ. I consider space a cognitive construct based on our perceptual inputs. There is an underlying reality that is the cause of our perception of space. It may be nothing like space, but let’s assume, for the sake of argument, that the underlying reality is like Galilean space-time. How would we perceive it, given that we perceive it using light (one-way travel of light, not two-way as SR assumes)? It turns out that our perceptual space would have time dilation and length contraction and all the other effects predicted by SR. So my thesis is that the underlying reality obeys Galilean space-time and our perceptual space obeys something like SR. (It is possible that if I assume that our perception uses two-way light travel, I may get an SR-like transformation. I haven’t done it because it seems obvious to me that we perceive a star, for instance, by sensing the light from it rather than flashing a light at it.)

This thesis doesn’t sit well with physicists, and indeed with most people. They mistake “perceptual effects” to be something like optical illusions. My point is more like space itself is an illusion. If you look at the night sky, you know that the stars you see are not “real” in the sense that they are not there when you are looking at them. This is simply because the information carrier, namely light, has a finite speed. If the star under observation is in motion, our perception of its motion is distorted for the same reason. SR is an attempt to formalize our perception of motion. Since motion and speed are concepts that mix space and time, SR has to operate on “space-time continuum.” Since SR is based on perceptual effects, it requires an observer and describes motion as he perceives it.

But are you actually saying that not a single experiment has been done with objects moving in any other direction than farther away? And what about experiments on time dilation where astronauts go into space and return with clocks showing less elapsed time than ones that stayed on the ground? Doesn’t this support the ideas inherent in SR?

Experiments are always interpreted in the light of a theory. It is always a model based interpretation. I know that this is not a convincing argument for you, so let me give you an example. Scientists have observed superluminal motion in certain celestial objects. They measure the angular speed of the celestial object, and they have some estimate of its distance from us, so they can estimate the speed. If we didn’t have SR, there would be nothing remarkable about this observation of superluminality. Since we do have SR, one has to find an “explanation” for this. The explanation is this: when an object approaches us at a shallow angle, it can appear to come in quite a bit faster than its real speed. Thus the “real” speed is subluminal while the “apparent” speed may be superluminal. This interpretation of the observation, in my view, breaks the philosophical grounding of SR that it is a description of the motion as it appears to the observer.

Now, there are other observations where almost symmetric ejecta are seen in opposing jets of symmetric celestial objects. The angular speeds may indicate superluminality in both jets if the distance of the object is sufficiently large. Since the jets are assumed to be back-to-back, if one jet is approaching us (thereby giving us the illusion of superluminality), the other jet has to be receding and can never appear superluminal, unless, of course, the underlying motion is superluminal. The interpretation of this observation is that the distance of the object is limited by the “fact” that real motion cannot be superluminal. This is what I mean by experiments being open to theory- or model-based interpretations.

In the case of moving clocks being slower, it is never a pure SR experiment because you cannot find space without gravity. Besides, one clock has to be accelerated or decelerated and GR applies. Otherwise, the age-old twin paradox would apply.

I know there have been some experiments done to support Einstein’s theories, like the bending of light due to gravity, but are you saying that all of them can be consistently re-interpreted according to your theory? If this is so, it’s damn surprising! I mean, no offense to you – you’re obviously a very bright individual, and you know much more about this stuff than I do, but I’d have to question how something like this slipped right through physicists’ fingers for 100 years.

These are gravity related questions and fall under GR. My “theory” doesn’t try to reinterpret GR or gravity at all. I put theory in inverted quotes because, to me, it is a rather obvious observation that there is a distinction between what we see and the underlying causes of our perception. The algebra involved is fairly simple by physics standards.

Supposing you’re right in that space and time are actually Galilean, and that the effects of SR are artifacts of our perception. How then are the results of the Michelson-Morley experiments explained? I’m sorry if you did explain it in your book, but it must have flown right over my head. Or are we leaving this as a mystery, an anomaly for future theorists to figure out?

I haven’t completely explained MMX, more or less leaving it as a mystery. I think the explanation hinges on how light is reflected off a moving mirror, which I pointed out in the book. Suppose the mirror is moving away from the light source at a speed of v in our frame of reference. Light strikes it at a speed of c-v. What is the speed of the reflected light? If the laws of reflection should hold (it’s not immediately obvious that they should), then the reflected light has to have a speed of c-v as well. This may explain why MMX gives null result. I haven’t worked out the whole thing though. I will, once I quit my day job and dedicate my life to full-time thinking. :-)

My idea is not a replacement theory for all of Einstein’s theories. It’s merely a reinterpretation of one part of SR. Since the rest of Einstein’s edifice is built on this coordinate transformation part, I’m sure there will be some reinterpretation of the rest of SR and GR also based on my idea. Again, this is a project for later. My reinterpretation is not an attempt to prove Einstein’s theories wrong; I merely want to point out that they apply to reality as we perceive it.

Overall, it was worth the $5 I paid. Thanks for the good read. Don’t take my questions as an assault on your proposal – I’m honestly in the dark about these things and I absolutely crave light (he he). If you could kindly answer them in your spare time, I’d love to share more ideas with you. It’s good to find a fellow thinker to bounce cool ideas like this off of. I’ll PM you again once I’m fully done with the book. Again, it was a very satisfying read.

Thanks! I’m glad that you like my ideas and my writing. I don’t mind criticism at all. Hope I have answered most of your questions. If not, or if you want to disagree with my answers, feel free to write back. Always a pleasure to chat about these things even if we don’t agree with each other.

– Best regards,
– Manoj

Anti-relativity and Superluminality

Leo wrote: I have some problems with the introductory part though, when you confront light travel effects and relativistic transforms. You correctly state that all perceptual illusions have been cleared away in the conception of Special Relativity, but you also say that these perceptual illusions remained as a subconscious basis for the cognitive model of Special Relativity. Do I understand what you mean or do I get it wrong?

The perceptual effects are known in physics; they are called Light Travel Time effects (LTT, to cook up an acronym). These effects are considered an optical illusion on the motion of the object under observation. Once you take out the LTT effects, you get the “real” motion of the object. This real motion is supposed to obey SR. This is the current interpretation of SR.

My argument is that the LTT effects are so similar to SR that we should think of SR as just a formalization of LTT. (In fact, a slightly erroneous formalization.) Many reasons for this argument:
1. We cannot disentangle the “optical illusion” because many underlying configurations give rise to the same perception. In other words, going from what we see to what is causing our perception is a one-to-many problem.
2. SR coordinate transformation is partially based on LTT effects.
3. LTT effects are stronger than relativistic effects.

Probably for these reasons, what SR does is to say that what we see is what it is really like. It then tries to mathematically describe what we see. (This is what I meant by a formalization.) Later on, when we figured out that LTT effects didn’t quite match with SR (as in the observation of “apparent” superluminal motion), we thought we had to “take out” the LTT effects and then say that the underlying motion (or space and time) obeyed SR. What I’m suggesting in my book and articles is that we should just guess what the underlying space and time are like and work out what our perception of it will be (because going the other way is an ill-posed one-to-many problem). My first guess, naturally, was Galilean space-time. This guess results in rather neat and simple explanations of GRBs and DRAGNs as luminal booms and their aftermath.

Discussion on the Daily Mail (UK)

On the Daily Mail forum, one participant (called “whats-in-a-name”) started talking about The Unreal Universe on July 15, 2006. It was attacked fairly viciously on the forum. I happened to see it during a Web search and decided to step in and defend it.

15 July, 2006

Posted by: whats-in-a-name on 15/07/06 at 09:28 AM

Ah, Kek, you’ve given me a further reason to be distracted from what I should be doing – and I can tell you that this is more interesting at the moment. I’ve been trying to formulate some ideas and there’s one coming – but I’ll have to give it to you in bits. I don’t want to delve into pseudoscience or take the woo-ish road that says that you can explain everything with quantum theory, but try starting here: http://theunrealuniverse.com/phys.shtml

The “Journal Article” link at the bottom touches on some of the points that we discussed elsewhere. It goes slightly off-topic, but you might also find the “Philosophy” link at the top left interesting.

Posted by: patopreto on 15/07/06 at 06:17 PM

Regarding that web site wian.One does not need to ead past this sentence –

The theories of physics are a description of reality. Reality is created out of the readings from our senses. Knowing that our senses all work using light as an intermediary, is it a surprise that the speed of light is of fundamental importance in our reality?

to realise that tis web site is complete ignorant hokum. I stopped at that point.

16 July, 2006

Posted by: whats-in-a-name on 16/07/06 at 09:04 AM

I’ve just been back to read that bit more carefully. I don’t know why the writer phrased it like that, but surely what he meant was: (i) “Our perception of what is real is created out of the readings from our senses.” I think that most physicists wouldn’t argue with that, would they? At the quantum level, reality as we understand it doesn’t exist; you can only say that particles have more of a tendency to exist in one place or state than another. (ii) The information that we pick up from optical or radio telescopes, gamma-ray detectors and the like shows the state of distant objects as they were in the past, owing to the transit time of the radiation. Delving deeper into space therefore enables us to look further back into the history of the universe. It’s an unusual way to express the point, I agree, but it doesn’t devalue the other information on there. In particular there are links to other papers that go into rather more detail, but I wanted to start with something that offered a more general view.

I get the impression that your study of physics is rather more advanced than mine – as I’ve said previously, I’m only an amateur, though I’ve probably taken my interest a bit further than most. I’m happy to be corrected if any of my reasoning is flawed, though what I’ve said so far is quite basic stuff.

The ideas that I’m trying to express in response to Keka’s challenge are my own and again, I’m quite prepared to have you or anyone else knock them down. I’m still formulating my thoughts and I wanted to start by considering the model that physicists use of the nature of matter, going down to the grainy structure of spacetime at the Planck distance and quantum uncertainty.

I’ll have to come back to this in a day or two, but meanwhile if you or anyone else wants to offer an opposing view, please do.

Posted by: patopreto on 16/07/06 at 10:52 AM

I don’t know why the writer phrased it like that but surely what he meant was:

I think the write is quit clear! WIAN – you have re-written what he says to mean something different.

The writer is quite clear – “Once we accept that space and time are a part of the cognitive model created by the brain, and that special relativity applies to the cognitive model, we can ponder over the physical causes behind the model, the absolute reality itself.”

Blah Blah Blah!

The writer, Manoj Thulasidas, is an employee of OCBC bank in Singapore and self-described “amateur philosopher”. What is he writes appears to be nothing more than a religiously influenced solipsistic philosophy. Solipsism is interesting as a philosophical standpoint but quickly falls apart. If Manoj can start his arguments from such shaky grounds without explanation, then I really have no other course to take than to accept his descriptions of himself as “amateur”.

Maybe back to MEQUACK!