Simulations for Reconciling


Why simulation?

Most of our work to date has produced emergent, qualitative results, but science demands quantitative proof. Our current research focus is therefore the development of deterministic simulation software that can generate reconcilable statistics.

Types of result

We foresee many useful types of result, and therefore several modes for running the simulation. These modes rely on different interfaces for preconditions, exit conditions, and the shape of the corresponding results.
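
As a sketch of what such a mode interface might look like, assuming a Python implementation (the class and field names here are illustrative, not our actual API):

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class RunMode:
        name: str
        build_preconditions: Callable[[], Any]   # e.g. seed particles and the vacuum
        exit_condition: Callable[[Any], bool]    # e.g. event count or elapsed time
        collect_result: Callable[[Any], dict]    # the shape of the returned statistics

    def run(mode: RunMode, step: Callable[[Any], Any]) -> dict:
        """Drive one simulation run according to the chosen mode."""
        state = mode.build_preconditions()
        while not mode.exit_condition(state):
            state = step(state)
        return mode.collect_result(state)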

Basic mechanics

These will use a small set of particles and stop within a small number of events on the timeline. We will verify that the expected events occur within expected bounds, to low error margins. A typical test might set up waves with known phases and mass-energy and check whether (and when) they interact and collapse into fermion events.
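
A minimal, self-contained sketch of such a test, in Python; the toy intersection_time() stands in for our real event-detection code, and the units and tolerances are illustrative only:

    import unittest

    C = 1.0  # expansion speed in natural units (an assumption for this toy model)

    def intersection_time(separation: float) -> float:
        """Time at which two spheres expanding at speed C first touch."""
        return separation / (2 * C)

    class TestBasicMechanics(unittest.TestCase):
        def test_event_occurs_within_expected_bounds(self):
            t = intersection_time(separation=2.0)
            self.assertAlmostEqual(t, 1.0, delta=1e-12)  # low error margin

    if __name__ == "__main__":
        unittest.main()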

Probability distribution functions

These randomize the vacuum, or some other precondition, and run the simulation many times to produce a data set of equivalent events and a map of how their properties vary. Examples might include: the Compton radius of a particle, field statistics, and functions for the covariance of properties.
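
A hedged sketch of this mode, in Python: each run is randomised from a recorded seed (so every individual run remains deterministic and repeatable), and a chosen property is collected into a distribution. The run_once() body is a toy stand-in for the real simulation.

    import random
    import statistics

    def run_once(seed: int) -> float:
        rng = random.Random(seed)                     # deterministic given the seed
        vacuum_noise = [rng.gauss(0.0, 1.0) for _ in range(100)]
        return sum(vacuum_noise) / len(vacuum_noise)  # placeholder "measured property"

    samples = [run_once(seed) for seed in range(10_000)]
    print("mean:", statistics.mean(samples), "stdev:", statistics.stdev(samples))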

Interpretive phenomena

By comparing these datasets against criteria, we can describe more advanced phenomena. Examples include: the probability of decoherence, the confidence of escaping a black hole, matter/anti-matter polarization, and decay distributions.
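
As an illustration of the post-processing involved (the threshold criterion and data here are placeholders, not our actual criteria), a dataset of simulated values can be compared against a criterion to yield a probability with a simple binomial standard error:

    import math

    def probability_with_error(values: list[float], threshold: float) -> tuple[float, float]:
        n = len(values)
        k = sum(1 for v in values if v > threshold)  # events meeting the criterion
        p = k / n
        return p, math.sqrt(p * (1 - p) / n)         # standard error of the estimate

    p, err = probability_with_error([0.1, 0.7, 0.9, 0.4, 0.8], threshold=0.5)
    print(f"P = {p:.2f} +/- {err:.2f}")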

Scripted illustration

With appropriate navigation controls, the simulation state can be presented as an animated diagram that we can use to demonstrate phenomena, with emphasis placed on items of interest.

Progress

We are in the early stages of development, where the basic structures of waves, bosons and fermions are implemented, along with some event detection and a basic display of entities.

We see great potential for optimisations, especially for high-energy situations, but we are adopting a 'get it working first; optimise later' approach.

The 'deterministic' mechanism means that, for any given inputs, the outputs are the same every time the functions are run. This allows us to develop the software using unit testing and TDD.

Fig.1: TDD in action (red-green testing)
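
A minimal sketch of the kind of test this enables: because the functions are pure, the same inputs must produce identical outputs, so tests can assert exact equality. The evolve() body is a toy stand-in for a real time-step.

    import unittest

    def evolve(state: tuple, steps: int) -> tuple:
        for _ in range(steps):
            state = tuple(x * 2 % 1_000_003 for x in state)  # purely deterministic
        return state

    class TestDeterminism(unittest.TestCase):
        def test_same_inputs_give_same_outputs(self):
            self.assertEqual(evolve((1, 2, 3), 50), evolve((1, 2, 3), 50))

    if __name__ == "__main__":
        unittest.main()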

Next steps

We now have utility functions to build fundamental structures, time-evolve them, test them for interactions, and collapse bosons to create new fermions. We have yet to include mass-energy, which is essential for collapsing asynchronous networks, and we have yet to build a full chaotic simulation controller.
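
A hedged sketch of the shape of that utility layer, assuming a Python implementation (the entity type, function names, and toy bodies are illustrative only, not our actual code):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Wave:
        origin: tuple[float, float, float]
        phase: float
        mass_energy: float  # not yet present in the real code, as noted above

    def time_evolve(wave: Wave, dt: float) -> Wave:
        """Advance a wave's phase deterministically; geometry is omitted here."""
        return Wave(wave.origin, wave.phase + dt, wave.mass_energy)

    def interacts(a: Wave, b: Wave, tolerance: float = 1e-9) -> bool:
        """Toy interaction test: phases must align to within a tolerance."""
        return abs(a.phase - b.phase) < tolerance

    def collapse(a: Wave, b: Wave) -> Wave:
        """Toy collapse of two bosonic waves into a new fermion-like entity."""
        return Wave(a.origin, 0.0, a.mass_energy + b.mass_energy)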

Our next steps are:

  • An n-body simulation controller, which tests for the earliest interactions, processes them, and re-computes the network (a sketch of this loop follows the list). Early implementations may not be as efficient as possible, because we are not optimising early (we have many optimisation ideas).
  • A web interface, to show pre-computed simulation timelines.
  • Graphing component, to show simulation timelines as network graphs (like those used on this site, but not hand-drawn).
  • Graphing component, to show probability distribution functions of a simulation with parameterised unknowns. This will use an analytical approach.
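
The controller loop mentioned in the first item might take the following shape, assuming a Python implementation; the callbacks (find_interactions, process) and their signatures are hypothetical, and there is deliberately no optimisation: the whole network is rescanned after every event.

    def run_controller(entities, find_interactions, process, horizon):
        """find_interactions(entities, after) yields (time, pair) candidates later
        than the given time; process(pair, time) returns the entities it creates."""
        now = 0.0
        timeline = []
        while True:
            # No optimisation yet: after each event we rescan the whole network
            # for candidate interactions (space/phase partitioning can come later).
            candidates = list(find_interactions(entities, now))
            if not candidates:
                break
            t, pair = min(candidates, key=lambda c: c[0])  # earliest interaction first
            if t > horizon:
                break
            new_entities = process(pair, t)
            entities = [e for e in entities if e not in pair] + list(new_entities)
            timeline.append((t, pair, new_entities))
            now = t
        return timeline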

Challenges

Our main challenges are:

  • The numerical precision of hardware-supported IEEE floating-point arithmetic is not sufficient to represent phase values to the precision the simulation requires. For example, we expect to keep track of multiples of the Planck length while maintaining phase precision below 10^-30 of a Planck length. We expect to use an arbitrary-precision library such as BigNum for this, at the expense of performance (a minimal sketch follows this list).
  • To reconcile with standard physics, in terms that standard physics recognises, we will need to generate statistical results with sufficient confidence. This requires statistical methods we have not yet fully developed, as well as considerable computing power and optimisation. We expect our first results to be qualitative and illustrative, followed by low-precision statistical results that we hope will converge on the desired outcome.
  • The simulation will be sub-optimal over large space partitions. Because the mechanism is based on overlapping expanding spheres, and the environmental vacuum contains many such spheres, a large space of low-energy entities presents combinatorial challenges that space partitioning alone cannot easily resolve. Phase and mass partitioning may also be necessary, to automatically reduce the number of entity combinations that must be tested.
  • Although we expect fewer free parameters than the standard models, we are concerned that our free parameters do not yet have any basis other than the undesirable practice of tuning to match expectations. Without direct mathematical origins, those free parameters will limit the precision of our emergent results. We hope that a threshold of known geometric expression may provide a suitable auto-tuning of our parameters.
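
Returning to the first challenge, a minimal sketch of the precision work-around, using Python's arbitrary-precision Decimal as a stand-in for a BigNum-style library: a position is tracked as a multiple of the Planck length while a phase residue below 10^-30 of a Planck length survives, which a 64-bit IEEE double cannot represent. The specific numbers are illustrative.

    from decimal import Decimal, getcontext

    getcontext().prec = 60  # decimal digits of working precision

    planck_multiples = Decimal("123456789012345678901")  # whole Planck lengths (illustrative)
    phase_residue = Decimal("1e-31")                      # fraction of a Planck length

    position = planck_multiples + phase_residue
    print(position)  # the residue survives the addition
    print(float(planck_multiples) + 1e-31 == float(planck_multiples))  # True: lost in a double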