The Universe

Before Einstein's relativity, scientists described objects in the universe by a position in `[x,y,z]` coordinates in three-dimensional Euclidean space at any given point in time `t`. Special relativity can be derived quite simply as follows, which suffices only for non-quantum-scale objects and locally, where gravity (i.e. the relative global mass distribution of objects) isn't a significant factor.
Imagine a ball is kicked by the `1st` observer standing at origin `O` towards point `O'` at velocity `v`, and the `2nd` observer simultaneously runs from `O` to `O'` at velocity `v`, such that the ball and the `2nd` observer both arrive at `O'` at the same elapsed time `Δt`.
From the `1st` observer's position the ball traveled `x = vΔt` distance, yet from the perspective of the `2nd` observer the ball never moved, as it was alongside the feet of the `2nd` observer the entire time.
For observers with different relative velocities, the elapsed time is the same but the distance the ball traveled is not, i.e. distance is not invariant.
The metric for the Euclidean 3D distance squared is provided by the Pythagorean theorem `(Δs)^2 = (Δx)^2 + (Δy)^2 + (Δz)^2`. For the `1st` observer `Δx = O'_x - O_x = x' - x`, for the `2nd` observer `Δx = 0`, and for both observers `Δy = Δz = 0`.
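The bookkeeping of this thought experiment can be sketched in a few lines of Python (the speed and elapsed-time values are illustrative, not from the text):

```python
# Pre-relativistic bookkeeping for the kicked-ball thought experiment.
import math

def euclidean_distance(p, q):
    """3D distance via the Pythagorean theorem."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

v = 5.0   # ball (and 2nd observer) speed, m/s -- illustrative value
dt = 2.0  # elapsed time agreed on by both observers, s

# 1st observer: stands at O, sees the ball travel from O to O'.
O, O_prime = (0.0, 0.0, 0.0), (v * dt, 0.0, 0.0)
dist_1st = euclidean_distance(O, O_prime)  # x = v * dt = 10 m

# 2nd observer: runs alongside the ball, so in their frame it never moves.
dist_2nd = 0.0

print(dist_1st, dist_2nd)  # same elapsed time, different distances
```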
To make 4D distances invariant, and thus enable observers with different relative velocities to agree on them, transform the Euclidean 3D space into a 4D spacetime in which the (perceived) elapsed time is allowed to differ for each observer, then normalize (i.e. add or subtract) each observer's elapsed time against the (perceived) Euclidean 3D distance, i.e. `[(Δx)^2 + (Δy)^2 + (Δz)^2] ± (Δt)^2`.
But note `(Δt)^2` has the incompatible units of time instead of distance. The elapsed time can be normalized (i.e. multiplied) by any arbitrarily chosen constant velocity (which has units `"distance"//"time"`) to convert it to a distance. Einstein chose the speed-of-light in a vacuum `c` because it is thought to represent the maximum velocity at which causal information can travel. Thus in the non-Euclidean, hyperbolic spacetime, `(Δs)^2 = (Δx)^2 + (Δy)^2 + (Δz)^2 - (cΔt)^2`.
The choice of a minus `-` sign for the last term creates a projection of the Euclidean 3D space onto a 4D hyperbolic spacetime, i.e. `(Δs)^2 = (Δ"distance")^2 - (cΔt)^2` is a hyperbola `x^2/a^2 - y^2/b^2 = 1` where `a = b = Δs`. This creates a hyperbolic light cone that extends unbounded into the past and future, as shown below. A plus `+` sign would instead produce a 4D ellipse in Euclidean geometry, limiting the past and future to the interior of the ellipse's boundary (which would not model the real world); and, unlike hyperbolic rotation, relative observations could rotate the future into the past (where a rotation around the center point of the ellipse moves `t > 0` to `t < 0`) and vice versa, which can happen in hyperbolic space only when `v > c` (as explained below).
The above 3D visualization of the hyperbolic light cone flattens the non-time 3D portion of the space to a 2D hypersurface. It illustrates light (and thus also the propagation of information via electricity, electromagnetism, or light) traveling (at speed `c`) from the observation at point `A` into the future, and from the past, expanding its reach to more of the non-time (i.e. 2D hypersurface) portion of the space over time. Any event point `B` inside the upper cone (as shown, i.e. `t > 0`) could be caused by interaction or information reaching it from `A`; or, if inside the lower cone (i.e. if `t < 0`), could cause `A` by transmitting information or interacting with it from the past.
Whereas, assuming the velocity `v` of information can't exceed the speed-of-light `c`, i.e. `v <= c`, any event point `C` outside the future and past cones couldn't be caused by nor cause `A`, because the information couldn't have traveled fast enough. If `t != 0` (as shown for `C`), then for information to travel between events `A` and `C` requires `(Δ"distance")^2 > (cΔt)^2`, i.e. `v > c`. A different observer `A'` (not shown) could have a rotated space in which the sign of `t` for `C'` is reversed, so the order of events between `A ⇔ C` and `A' ⇔ C'` would be reversed, i.e. if `C` followed `A` then `C'` could precede `A'`, or vice versa. Thus, if ever `v > c`, different observers can perceive inverted causal relationships (i.e. the effect is the cause), and the future can be rotated into the past and vice versa.
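The classification of events relative to the light cone can be sketched as a small Python helper (the `classify` function and the natural units `c = 1` are illustrative assumptions, not from the text):

```python
# Classify an event relative to the light cone of an event at the origin,
# using the interval (Δs)² = (Δx)² + (Δy)² + (Δz)² - (cΔt)².
C = 1.0  # speed of light in natural units

def interval_squared(dx, dy, dz, dt, c=C):
    return dx**2 + dy**2 + dz**2 - (c * dt)**2

def classify(dx, dy, dz, dt, c=C):
    s2 = interval_squared(dx, dy, dz, dt, c)
    if s2 < 0:
        return "timelike"   # inside the cone: causal contact possible (v < c)
    if s2 == 0:
        return "lightlike"  # on the cone: connected exactly at speed c
    return "spacelike"      # outside the cone: would require v > c

print(classify(0.5, 0, 0, 1.0))  # a point like B, inside the future cone
print(classify(2.0, 0, 0, 1.0))  # a point like C, outside both cones
```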
The prior section derived the hyperbolic geometry of spacetime that incorporates `(cΔt)^2`, which enables but doesn't enforce the invariance of the aforementioned Minkowski metric `(Δs)^2`. The invariance can be enforced by deriving the appropriate transformation between relative observations of the elapsed time and the Euclidean 3D distance. The following derivation assumes only the elapsed time is allowed to dilate in such a transform.
`(Δs)^2 = (x' - x)^2 - (ct)^2 = (vt)^2 - (ct)^2` for the `1st` observer
However, the requirement of the homogeneous and isotropic properties expected in the real world makes it necessary to derive a more generalized transform that also allows the perceived Euclidean distances to vary. The following four equations are written in matrix form, as taught in Linear Algebra.
`[(x'),(y'),(z'),(t')] = [(a_11,a_12,a_13,a_14),(a_21,a_22,a_23,a_24),(a_31,a_32,a_33,a_34),(a_41,a_42,a_43,a_44)][(x),(y),(z),(t)]`
Solving this system of equations when the observers have the same velocity on the `y` and `z` axes yields the following equations, where `γ = 1 // sqrt(1 - v^2 // c^2)`.

`x' = γ(x - vt)`

`y' = y`

`z' = z`

`t' = γ(t - (vx) // c^2)`
Or in matrix form as follows.
`[(x'),(y'),(z'),(t')] = [(γ,0,0,-γv),(0,1,0,0),(0,0,1,0),(-γv // c^2,0,0,γ)][(x),(y),(z),(t)]`
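A quick numerical check, assuming the standard Lorentz-boost form `x' = γ(x - vt)`, `t' = γ(t - vx // c^2)` in natural units `c = 1` (the event coordinates below are arbitrary samples), shows the transform leaves the Minkowski interval `(Δs)^2` invariant for any relative velocity `v < c`:

```python
import math

C = 1.0  # natural units

def boost(x, y, z, t, v, c=C):
    """Lorentz boost along the x axis with relative velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma * (x - v * t), y, z, gamma * (t - v * x / c**2))

def interval_squared(x, y, z, t, c=C):
    return x**2 + y**2 + z**2 - (c * t)**2

event = (3.0, 1.0, 2.0, 4.0)       # arbitrary sample event
s2 = interval_squared(*event)
for v in (0.1, 0.5, 0.9):
    # the interval agrees between observers to floating-point precision
    assert abs(interval_squared(*boost(*event, v)) - s2) < 1e-9
```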
Special relativity incorporated into spacetime only a macroscopic and holistic model of velocity. General relativity additionally macroscopically and holistically models gravity, acceleration, mass and energy. Macroscopic in this context means a deterministic, geometric model, as contrasted against the stochastic model in quantum mechanics.
Instead of Einstein's macroscopic derivation of general relativity, Erik Verlinde recently derived it from microscopic degrees-of-freedom in “On the Origin of Gravity and the Laws of Newton”. Verlinde's derivation of Newton's law of gravity is explained below with greater emphasis on the interpretation of the emergence of physics from degrees-of-freedom.
The degrees-of-freedom `N` is the number of independent variables of a system, e.g. a motorcycle lacks a reverse gear, thus has one less degree-of-freedom compared to a car and must turn around to go backwards. The degrees-of-freedom is equivalent to the Shannon entropy, i.e. the logarithm of the number of equiprobable configurations of a system. For example, a system that has 8 possible configurations has degrees-of-freedom (entropy) of `log_2 8 = 3` base-2 variables (a.k.a. 3 binary bits), equivalently `ln 8 ≈ 2.08` base-`e` variables. (Tangentially, `ln 8 // ln 2 = 3`.)
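The entropy arithmetic in the paragraph above can be verified directly (the function names are illustrative):

```python
import math

def entropy_bits(n_configs):
    """Shannon entropy, in bits, of n equiprobable configurations."""
    return math.log2(n_configs)

def entropy_nats(n_configs):
    """The same entropy measured in base-e units (nats)."""
    return math.log(n_configs)

print(entropy_bits(8))               # 3.0 binary degrees-of-freedom
print(round(entropy_nats(8), 2))     # 2.08 base-e "variables"
# Changing the logarithm base only rescales the same quantity:
print(entropy_nats(8) / math.log(2))
```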
For an unaccelerated mass `M`, the maximum possible (i.e. the degrees-of-freedom for the) emittable and absorbable energies are as follows.

`E_"emittable" = Mc^2`

`E_"absorbable" = (Nk_βT) / 2`

`c =` maximum possible speed in the universe
The energy `E_"emittable"` that a mass `M` can emit is given by the conservation of momentum for the emitted photons. An alternative interpretation is that any direction mass `M` could move (or in which photons could be emitted as `M` is replaced by emitted energy) is fully determined by altitude and azimuth, analogous to launching a projectile from a gun turret. Thus there are two degrees-of-freedom, requiring a quadratic of rank two, i.e. two independent variables, e.g. `v × v`, which is set to the maximum possible `c × c = c^2`. Although there are three dimensions `[x,y,z]` in the space, they are not independent possibilities for the direction, because only altitude and azimuth are required to fully determine any direction in the 3D space; i.e. transformed into spherical coordinates, the distance from the origin is irrelevant to direction.
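A minimal sketch of the altitude/azimuth argument: two angles alone determine any direction in 3D, always landing on the unit sphere (the `direction` helper and its angle convention are assumptions for illustration):

```python
import math

def direction(altitude, azimuth):
    """Unit vector in 3D from two angles: direction has only two DOF.

    Altitude is measured from the xy-plane, azimuth around the z axis.
    """
    return (math.cos(altitude) * math.cos(azimuth),
            math.cos(altitude) * math.sin(azimuth),
            math.sin(altitude))

d = direction(math.pi / 6, math.pi / 4)
# The radial coordinate never appears: the result always lies on the
# unit sphere, so the third spatial dimension adds no independent DOF.
print(math.isclose(sum(c * c for c in d), 1.0))
```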
The energy `E_"absorbable"` that matter can absorb is related to the degrees-of-freedom (entropy) `N` by the equipartition theorem. The Holographic principle requires that the maximum possible value of `N` is proportional to the surface area bounding the matter, i.e. `N = K4πR^2` for a bounding sphere of radius `R`.
The temperature is related to the acceleration of `M` because, due to the Unruh effect, an observer in an accelerated frame experiences a temperature.
`T = (ℏa) / (2πck_β)`
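Plugging Earth-surface acceleration into the Unruh formula above (CODATA constants; the helper name is illustrative) shows why the effect is negligible in everyday frames:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C    = 2.99792458e8      # speed of light, m/s
K_B  = 1.380649e-23      # Boltzmann constant, J/K

def unruh_temperature(a):
    """T = hbar*a / (2*pi*c*k_B) for an observer with proper acceleration a."""
    return HBAR * a / (2 * math.pi * C * K_B)

T = unruh_temperature(9.81)  # Earth-surface acceleration, m/s^2
print(f"{T:.2e} K")          # ~4e-20 K, far below any measurable temperature
```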
A second mass `m` is injected using Newton's equation `F = ma`. The acceleration `a` applies to `M` or `m` because it is indiscernible whether `M` is accelerating towards `m` or vice versa. Set `E_"emittable" = E_"absorbable"` as follows.
`Mc^2 = ((K4πR^2)k_β((ℏ(F // m)) / (2πck_β))) / 2 = (KR^2ℏF) / (mc)`
Reducing yields Newton's law of gravity `F = G(Mm) / R^2`, where `G = c^3 // (ℏK)` is the gravitational constant and `K` is the reciprocal of the Planck area (i.e. one over the Planck length squared).
Having the units of the reciprocal of area, `K` represents a normalized density of degrees-of-freedom per unit area.
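As a numerical sanity check, assuming `K` is interpreted as the reciprocal of the Planck area (one over the Planck length squared), the reduction reproduces the measured gravitational constant (CODATA values; the variable names are illustrative):

```python
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
C    = 2.99792458e8       # speed of light, m/s
L_P  = 1.616255e-35       # Planck length, m

K = 1.0 / L_P**2          # degrees-of-freedom per unit area (1/Planck-area)
G = C**3 / (HBAR * K)     # gravitational constant implied by the reduction

print(f"{G:.3e}")         # ~6.674e-11 m^3/(kg*s^2), the measured value of G
```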
A function of time is equivalently a function of frequency, per the Fourier transform.
Presented perhaps for the first time is a novel insight for understanding the Holographic principle: since a function can't be both time-limited and band-limited, the configuration of space over any finite interval (i.e. locality) of time is not independent of the infinite past and future. Matter must be an infinite continuum of configurations oscillating as waves. The frequency domain has two dimensions, phase (not shown above) and amplitude; thus only two functions of frequency are necessary to represent the 4D spacetime, i.e. 2D × 2 = 4D. This can be visualized in 3D as the consistent time, which corresponds to the phases in the frequency domain of the two functions, with the two independent amplitudes providing the altitude and azimuth degrees-of-freedom for the direction of movement of the configuration.
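The time-limited/band-limited tension can be illustrated with a naive stdlib-only DFT: a pulse confined to a few samples in time spreads nonzero energy across every frequency bin (the pulse length and transform size are arbitrary choices for illustration):

```python
import cmath

def dft(signal):
    """Naive O(N^2) discrete Fourier transform using only the stdlib."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A time-limited signal: a rectangular pulse, zero outside 5 samples.
N = 64
pulse = [1.0 if t < 5 else 0.0 for t in range(N)]

spectrum = [abs(x) for x in dft(pulse)]

# The pulse is confined in time, yet its spectrum is not confined in
# frequency: every one of the 64 frequency bins carries nonzero energy.
print(min(spectrum) > 1e-6)
```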
Holography inspires this amplitude-and-phase interpretation because it records three dimensions of information in the speckle pattern, stored in the opacity of a 2D surface, due to interference (on recording, and diffraction on viewing) of a phase-coherent, monochromatic reference beam with the decoherent environmental light field.
The non-uniform distribution of mass is mutually causal with oscillation. A uniform distribution of mass would have no contrast, and nothing could exist, especially not knowledge creation.
Matter's canonical definition is circular because it doesn't state what matter is. The answer to “what is the universe made of?” has been “whatever it is, it must have the attributes of the mass and volume of the things we make from it”. This definition is disunified because it excludes light and wave effects, which don't have volume. But if spacetime emerges from two dimensions of the frequency-domain continuum, then volume and mass are also just effects that emerge from the same fundamental matter as light. Since the long-sought Holy Grail of science has been to unify quantum mechanics and relativity, it would be logical to define matter as the thing from which everything else in the universe is derived. Future work is to analyze the relationship of this postulate to existing interpretations of quantum mechanics.
The derivation in the prior section also depended on Newton's second law `F = ma`, the equipartition theorem, and the Unruh effect. Newton's second law is an observed relationship which doesn't depend on the derivation of the effect. Since the Boltzmann constant `k_β` relates the internal energy to the temperature `T`, and both cancelled out in the relationship between the equipartition theorem and the Unruh effect, only the kinetic energy is pertinent. Kinetic energy derives from Newton's second law multiplied by distance. The equipartition theorem distributes the internal kinetic energy over the internal degrees-of-freedom. The Unruh effect derives, from Newton's laws (i.e. classical mechanics) and special relativity, the internal kinetic energy generated by a force per unit photon-normalized mass multiplied by the wavelength of light `λ`, as follows.
`T = (h(F // m)) / (4π^2ck_β)`
The “edge of the universe” only has meaning in the macroscopic spacetime model, which is apparently not the fundamental reality, since matter is a continuum in the frequency domain. Thus the edge of the universe is defined by how finely matter can be subdivided, i.e. the macroscopic geometric models of reality have infinite extent, which is equivalent to the infinite resolution of matter.
On this question, the implication of Russell's paradox is that every set that is populated by rule instead of explicit enumeration includes itself. Also, Turing completeness is in essence unbounded recursion. Also, Gödel's incompleteness theorems state that no sufficiently powerful formal system can be both complete and consistent, as there will always exist another truth which can't be proven by the existing theory. Unbounded recursion appears even in the mathematical conceptualization of infinite sets, which are duals of unbounded induction from a known origin, which is a disjunction of all members (and a conjunction of all operations on members), and unbounded coinduction towards an unknown end, which is a conjunction of all members (and a disjunction of all operations on members).
A universe with finite future entropy would mean it is pre-scripted deterministically and the future is known to a hypothetical observer who can see all of the finite universe. An infinite frequency domain corresponds to infinite possible configurations in the time domain.
Planck's quantization doesn't violate the postulated continuity of matter. Rather, via the uncertainty principle it defines the wavelength of radiation above which perceivable localized (i.e. within the past and future light cone) change in the spacetime domain is not subject to Shannon-Nyquist aliasing. Greater resolution is imperceptible in spacetime, thus appears as a black hole. If the speed-of-light were infinite, the time domain (and thus reality) would collapse to a single point, because all future changes in configuration would occur instantly. There is a lower bound on the product of the variances of the time and frequency domains, scaled by Planck's proportionality constant `ℏ`, the proportionality between the frequency-of-light and energy.
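The Shannon-Nyquist aliasing mentioned above can be demonstrated in a few lines: sampled below twice its frequency, a cosine is indistinguishable from a lower-frequency alias (the sampling rate and frequencies are arbitrary choices for illustration):

```python
import math

FS = 100.0                         # sampling rate, Hz
F_LOW, F_HIGH = 10.0, FS - 10.0    # 10 Hz, and 90 Hz (above Nyquist FS/2)

# Sample both cosines at FS: the 90 Hz signal produces exactly the same
# samples as the 10 Hz one -- finer resolution than Nyquist allows is
# imperceptible in the sampled domain.
samples_low  = [math.cos(2 * math.pi * F_LOW  * n / FS) for n in range(20)]
samples_high = [math.cos(2 * math.pi * F_HIGH * n / FS) for n in range(20)]

print(all(math.isclose(a, b, abs_tol=1e-9)
          for a, b in zip(samples_low, samples_high)))
```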