Metric Space

Introduction

The basic idea behind this app is to provide a visualization of a piece of space where the metric, or the mathematical idea of distance in space, is non-euclidean. This leads to spaces where the parallel postulate doesn't hold and various other weird things are true, depending on the choice of metric.

Currently I'm at a bit of a loss as to how to do this, so I give the following explanation in the interest of clarifying the current problem to those who may be able to help. I have a teleporter, a camera, a teapot, a magical machine, and a monkey. The magical machine takes 3d models and makes 2d pictures of them from particular angles. The monkey can see these pictures and understand the 3d nature of the picture which is represented. The teleporter can teleport any object to a region of space where the euclidean metric doesn't hold on a local scale. This could be outside the universe, near a black hole, anywhere you please.

Okay, so here is the fun part. I take the camera and the teapot and put them on the teleporter. I set the timer on the camera. I beam them both to a strange region of space. I beam them both back a few seconds later and develop my picture. Now I take the picture and show it to the monkey. The monkey has always known euclidean space (we keep him away from strong gravitational fields) and comes up with an idea of what shape the teapot must have been from the picture. Needless to say the monkey does not think that the teapot that I have is the teapot in the picture, but this is unimportant to us. The monkey makes a 3d model of the teapot from the one picture taken in the strange region of space. Putting this model into the magical machine yields a picture with the same shape as the photo taken in the strange space. Now I really don't have any of this stuff (not even a teapot) with the exception of the magical machine, which is OpenGL. What I need my program to do is to take a 3d model of the teapot and give me the 3d model that the monkey made after seeing the picture. This model I can throw into my magical machine and get an image which will help a person visualize the nature of such a space.

I'd like to use this visualization idea and extend it to possibly real-time simulation of general relativistic effects, or just to make pretty (if totally warped and distorted) pictures for my own amusement.

How to use

Press 1 and 2 to cycle between the sphere and the teapot. 'f' toggles wireframe on and off, 'e' selects the euclidean metric, 'r' selects the riemannian metric, and 't' selects the city block metric. What you are seeing is a shape (sphere or teapot) distorted as it would appear if the spatial metric were changed from the normal euclidean distance function to whichever one you have selected. Right-click and drag the mouse to rotate around the object. Middle-click (or scroll-wheel-click) and drag to zoom in and out. The tensor used in the riemannian metric is controlled by editing the riemann.mat file with a simple text editor such as emacs, vim, notepad, or SimpleText.
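For the curious: the standard way a constant metric tensor G (like the one in riemann.mat) induces a distance is via the quadratic form sqrt(v^T G v). The function name below is hypothetical and this may not be exactly the formula the program uses; it is just a sketch of the usual construction:

```python
import numpy as np

def riemann_distance(v, G):
    """Length of the straight segment v from the origin under a
    constant metric tensor G: sqrt(v^T G v).  (Hypothetical sketch,
    not necessarily the program's exact formula.)"""
    v = np.asarray(v, dtype=float)
    return float(np.sqrt(v @ G @ v))

# With the identity tensor this reduces to ordinary euclidean distance:
riemann_distance([1.0, 1.0, 0.0], np.eye(3))  # -> sqrt(2)
```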

Mathematics

Currently, for each vertex in the teapot or sphere model, the program calculates the vertex's distance from the origin, then uses the coordinates of the vertex to calculate its distance from the origin in the alternate metric. The vector to the point is then re-sized so that its euclidean length equals the alternate-metric distance. For example, take the point (1,1): in euclidean space its distance to the origin is sqrt(2), while in the city block metric the distance is 2. Thus (1,1) is mapped to (sqrt(2),sqrt(2)), since (1/sqrt(2))*2 = sqrt(2). This makes pretty distorted pictures, but I am almost 100% sure this is the wrong way to do it. It is worth noting only because the current version works this way.
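The re-sizing described above can be sketched in a few lines (function names are mine, not the program's):

```python
import numpy as np

def cityblock(v):
    """City block (taxicab) distance from the origin."""
    return float(np.sum(np.abs(v)))

def remap_vertex(v, metric):
    """Rescale the vector to a vertex so that its euclidean length
    equals its distance from the origin in the alternate metric."""
    v = np.asarray(v, dtype=float)
    r = np.linalg.norm(v)       # euclidean distance from the origin
    if r == 0.0:
        return v                # the origin maps to itself
    return v * (metric(v) / r)  # stretch/shrink along the same direction

# The worked example from the text: (1,1) -> (sqrt(2), sqrt(2))
remap_vertex([1.0, 1.0], cityblock)
```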

Problems

My current thinking is that the appropriate way to truly simulate an alternate geometry is to do the same transformation described above, except using the distance from the eyepoint of the camera, rather than the distance from the origin, as the reference. This brings up many more problems, such as what to do with the transformed position with respect to the eyepoint. Does the eyepoint need to be transformed before it is added back in?

The transformation (or non-transformation) of the eyepoint brings up the problem of camera control. Previously, camera control was spherical about the origin, with the user controlling phi, theta, and rho. This camera motion is only natural in euclidean space; if we want to simulate the nature of the space, the user must be embedded in it. The camera should control more like a spacecraft, with options to go forward or back and to turn in any direction. Unfortunately, geodesics in a non-euclidean space are not straight lines, and computing a geodesic analytically is not something I wish to do in real time, so we have to go with some numerical approximation. I'm assuming that repeatedly moving a short, straight distance and then re-calculating where everything is (including the direction we're pointing) would prove to be a good method; however, I have nothing but my intuition to back this up, and I have no idea how to do such a transformation on the camera's direction.
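One standard way to make the "short straight step, then re-aim" idea concrete is to integrate the geodesic equation numerically: the Christoffel symbols of the metric tensor field tell you how the velocity must bend at each step. The sketch below (my own naming; not the program's code) even computes the metric's derivatives by finite differences, so it works for any smooth tensor field g(x):

```python
import numpy as np

def christoffel(g, x, h=1e-5):
    """Christoffel symbols Gamma^k_{ij} at point x, with metric
    derivatives taken by central finite differences of g."""
    n = len(x)
    Ginv = np.linalg.inv(g(x))
    # dG[a] = partial derivative of the metric with respect to x[a]
    dG = np.empty((n, n, n))
    for a in range(n):
        e = np.zeros(n); e[a] = h
        dG[a] = (g(x + e) - g(x - e)) / (2 * h)
    Gamma = np.empty((n, n, n))
    for k in range(n):
        for i in range(n):
            for j in range(n):
                Gamma[k, i, j] = 0.5 * sum(
                    Ginv[k, l] * (dG[i][l, j] + dG[j][l, i] - dG[l][i, j])
                    for l in range(n))
    return Gamma

def geodesic_step(g, x, v, dt):
    """One Euler step of the geodesic equation
    x''^k + Gamma^k_{ij} x'^i x'^j = 0: move straight, then re-aim."""
    Gamma = christoffel(g, x)
    a = -np.einsum('kij,i,j->k', Gamma, v, v)
    return x + dt * v, v + dt * a

# With the flat euclidean metric the camera just flies in a straight line:
euclid = lambda x: np.eye(3)
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(100):
    x, v = geodesic_step(euclid, x, v, 0.01)
# x is now approximately (1, 0, 0)
```

The same `geodesic_step` applied to the camera's position and forward vector is exactly the "iterate short moves" scheme described above; a fancier integrator (e.g. RK4) would just be more accurate per step.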

Screen Shots

A sphere, transformed into the euclidean metric... which does nothing to it

A sphere, transformed into a riemannian metric. The matrix used for this shot was the default [[1 0 1] [0 1 0] [1 0 1]] specified in the riemann.mat file.

A sphere, transformed into the city block metric. In the city block metric, the distance is the sum of the absolute values of the differences of each coordinate.

Download

Currently the program requires a machine with a shader-capable graphics card running Win32, Mac OS X 10.4, or Linux. A "backwards compatibility mode" for non-shader graphics cards is also in the works.