My theoretical work has produced emergent qualitative results, and I'm developing simulation software to generate quantitative proof and statistics that can be reconciled with standard physics.
To test small-scale mechanics, I create scenarios with a small set of entities and stop after a small number of events on the timeline. I then verify that the timeline events fall within expected limits, using logs or visualisations in a web browser.
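As a rough sketch of what such a scenario test might look like (none of these names come from the real code; the event fields and the step function are assumptions):

```typescript
// A minimal sketch of a scenario test, assuming a hypothetical event-based
// simulation API; the names here are illustrative, not the project's own.

interface SimEvent {
  time: number;        // position on the simulation timeline
  kind: string;        // e.g. "interaction", "collapse"
  entityIds: number[]; // entities involved in the event
}

// Run the simulation until it yields no more events or we hit the event cap.
function runScenario(step: () => SimEvent | null, maxEvents: number): SimEvent[] {
  const events: SimEvent[] = [];
  while (events.length < maxEvents) {
    const ev = step();
    if (ev === null) break;
    events.push(ev);
  }
  return events;
}

// Check each recorded event against expected limits and log the result.
function verifyWithinLimits(events: SimEvent[], maxTime: number): boolean {
  return events.every((ev) => {
    const ok = ev.time <= maxTime;
    console.log(`${ev.kind} t=${ev.time} entities=[${ev.entityIds}] ${ok ? "ok" : "out of range"}`);
    return ok;
  });
}
```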
These scenarios can also randomize the vacuum, or some other preconditions, and run the simulation many times, to produce a data set of equivalent events and a map of variant properties. Examples might include: the Compton radius of a particle, field statistics, and functions for covariance of properties.
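A sketch of such a batch run, assuming one measured property per run (the seed and measurement functions are placeholders for whatever the real preconditions and measurements turn out to be):

```typescript
// Randomize the preconditions, run the simulation many times, and collect
// one measured property per run. `randomVacuumSeed` and `simulateOnce` are
// placeholders, not real project functions.

function runBatch(
  runs: number,
  randomVacuumSeed: () => number,
  simulateOnce: (seed: number) => number // returns e.g. a measured Compton radius
): number[] {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    samples.push(simulateOnce(randomVacuumSeed()));
  }
  return samples;
}

// Summarise the data set as a map of variant properties (here, mean and variance).
function summarise(samples: number[]): { mean: number; variance: number } {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
  return { mean, variance };
}
```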
By comparing datasets against criteria, I can describe more advanced phenomena. Examples include: the probability of decoherence, the confidence of escaping a black hole, matter/anti-matter polarization, and decay distributions.
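The comparison itself can be as simple as counting the runs that satisfy a criterion; a sketch, with a hypothetical per-run `decohered` flag:

```typescript
// Derive a higher-level statistic by comparing a data set against a criterion,
// e.g. the fraction of runs in which decoherence occurred.

function probabilityOf<T>(dataset: T[], criterion: (run: T) => boolean): number {
  return dataset.filter(criterion).length / dataset.length;
}

// Example: probability of decoherence over a batch of run summaries.
const runSummaries = [{ decohered: true }, { decohered: false }, { decohered: true }];
console.log(probabilityOf(runSummaries, (r) => r.decohered)); // 0.666… (2 of 3 runs)
```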
With appropriate navigation controls, I can present the simulation state as an animated diagram that demonstrates phenomena, with emphasis on items of interest.
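A minimal sketch of the playback side, assuming the simulation state has already been recorded frame by frame and is drawn to a canvas; the `Frame` shape and `highlightIds` are illustrative only:

```typescript
// Animated playback with basic navigation (play/pause, seek), highlighting
// items of interest. Uses standard browser APIs; the data shapes are guesses.

interface Frame {
  entities: { id: number; x: number; y: number }[];
}

function animate(
  ctx: CanvasRenderingContext2D,
  frames: Frame[],
  highlightIds: Set<number> // items of interest to emphasise
) {
  let index = 0;
  let playing = true;

  function draw() {
    const frame = frames[index];
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    for (const e of frame.entities) {
      ctx.fillStyle = highlightIds.has(e.id) ? "red" : "grey";
      ctx.beginPath();
      ctx.arc(e.x, e.y, 4, 0, 2 * Math.PI);
      ctx.fill();
    }
    if (playing && index < frames.length - 1) index++;
    requestAnimationFrame(draw);
  }
  requestAnimationFrame(draw);

  // Navigation controls exposed to the page.
  return {
    toggle: () => (playing = !playing),
    seek: (i: number) => (index = Math.max(0, Math.min(frames.length - 1, i))),
  };
}
```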
Real-world applications need high numerical precision, to represent phase values of less than 10^-30 of the Planck length at the scale of classical distances. I expect to use an arbitrary-precision library like BigNum for this, at the cost of performance and implementation convenience.
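To illustrate the scale problem without committing to a particular library, here is a fixed-point sketch over JavaScript's native bigint; the unit size is arbitrary and chosen only to make the example work:

```typescript
// Lengths stored as integer multiples of a tiny unit (1e-70 m), so a phase
// offset of ~1e-30 Planck lengths (~1.6e-65 m) survives exact addition to a
// classical distance of a metre. Not a chosen design, just an illustration.

const UNITS_PER_METRE = 10n ** 70n;
const PLANCK_LENGTH = 16163n * 10n ** 31n; // ~1.6163e-35 m, in units (approximate)

const phaseOffset = PLANCK_LENGTH / 10n ** 30n;  // ~1e-30 of Planck length, in units
const classicalDistance = 1n * UNITS_PER_METRE;  // 1 metre, in units

// Exact: the tiny offset is not lost when added to the classical distance.
const total = classicalDistance + phaseOffset;
console.log(total - classicalDistance === phaseOffset); // true

// By contrast, IEEE doubles silently drop the offset:
console.log(1 + 1.6e-65 === 1); // true
```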
To reconcile with standard physics, I'd need to generate statistical results with sufficient confidence. This requires statistical methods I've not yet developed, as well as computing power and optimisations. I expect the first results will be qualitative and illustrative, followed by low-precision statistical results that I hope will converge on the desired outcome.
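One plausible shape for those low-precision results is an estimate with a standard-error-based interval that tightens as more runs are added; a sketch, not the method I've settled on:

```typescript
// Mean of a measured quantity with an approximate 95% confidence interval.
// Purely illustrative of the kind of statistic intended.

function confidenceInterval(samples: number[], z = 1.96): { mean: number; low: number; high: number } {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
  const stderr = Math.sqrt(variance / n);
  return { mean, low: mean - z * stderr, high: mean + z * stderr };
}
```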
The simulation will be sub-optimal at the macro scale, and is best suited to high-energy scenarios. Because an environmental vacuum contains many overlapping spheres, a large space of low-energy entities presents combinatorial challenges that are not easily optimised by space partitioning alone. Phase and mass partitioning might help to discard irrelevant combinations of entities.
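A sketch of what phase and mass partitioning could look like; the bin sizes and the idea that candidate pairs are drawn only from compatible buckets are placeholders, not the theory's actual conditions:

```typescript
// Bucket entities by coarse phase and mass bins, so candidate pairs are drawn
// only from compatible buckets instead of from all n^2 combinations.

interface Entity { id: number; phase: number; mass: number }

function binKey(e: Entity, phaseBin: number, massBin: number): string {
  return `${Math.floor(e.phase / phaseBin)}:${Math.floor(e.mass / massBin)}`;
}

function partition(entities: Entity[], phaseBin: number, massBin: number): Map<string, Entity[]> {
  const buckets = new Map<string, Entity[]>();
  for (const e of entities) {
    const key = binKey(e, phaseBin, massBin);
    const bucket = buckets.get(key) ?? [];
    bucket.push(e);
    buckets.set(key, bucket);
  }
  return buckets;
}
```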
The simulation has free parameters, for example the masses of entities. While I expect the reconciled simulation to have fewer free parameters than the standard models, I want to replace them with values derived from geometric expressions.
We're not optimising yet, because we haven't yet proved the mechanism works.
The system we're simulating is chaotic and deterministic. The chaotic part means we can't look far into the future without first knowing what happens in the near future, because near-future events can change the system in ways that change the far-future events. We need to process all the events in the order they happen.
The computation is like an n-body problem, in that we need to test all combinations of things to see if they interact. However, it's unlike an n-body problem, because we don't need to infinitely divide time to work out precisely how more than two particles interact. In other words, we don't need to approximate conflicting differential equations. Instead, given the current simulation state, we can calculate all possible interactions with certainty.
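To make the contrast concrete, here is a sketch in which each pair is tested directly for its next possible interaction, rather than integrated over small time slices; `interactionTime` stands in for whatever closed-form test the mechanics actually provide:

```typescript
// Enumerate all pairwise candidate interactions from the current state and
// return them ordered by event time. No time-stepping or integration.

interface Oscillator { id: number }
interface PossibleEvent { time: number; a: Oscillator; b: Oscillator }

function findPossibleEvents(
  oscillators: Oscillator[],
  interactionTime: (a: Oscillator, b: Oscillator) => number | null // null = never interact
): PossibleEvent[] {
  const events: PossibleEvent[] = [];
  for (let i = 0; i < oscillators.length; i++) {
    for (let j = i + 1; j < oscillators.length; j++) {
      const t = interactionTime(oscillators[i], oscillators[j]);
      if (t !== null) events.push({ time: t, a: oscillators[i], b: oscillators[j] });
    }
  }
  return events.sort((x, y) => x.time - y.time); // earliest event first
}
```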
We only need to process the next event, and then look at the simulation again to identify further events. This presents opportunities for optimisation. We already have an ordered list of possible events. If we edit the list of objects, then we only need to re-examine the objects that changed, against the existing set.
So, for example, if we process an interaction that collapses two oscillators, we remove all queued interactions that involved those two oscillators from the list, and test for any new events that involve the newly created oscillator. If we have 100 particles and collapse two into a shell, we only have to test around 100 new possible events, rather than 10,000. We then add the new possible events to the list, re-order it by event time, and repeat the whole process.
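A sketch of that incremental update, using the same placeholder types and `interactionTime` as the pairwise sketch above:

```typescript
// When two oscillators collapse into a new one, drop their queued events and
// test only the new oscillator against the survivors: ~n checks, not ~n^2.

interface Oscillator { id: number }
interface PossibleEvent { time: number; a: Oscillator; b: Oscillator }

function updateQueue(
  queue: PossibleEvent[],
  survivors: Oscillator[],     // current set, excluding the two collapsed oscillators
  collapsedA: Oscillator,
  collapsedB: Oscillator,
  created: Oscillator,         // the oscillator produced by the collapse
  interactionTime: (a: Oscillator, b: Oscillator) => number | null
): PossibleEvent[] {
  // Remove every queued event that involved either collapsed oscillator.
  const kept = queue.filter(
    (ev) =>
      ev.a.id !== collapsedA.id && ev.b.id !== collapsedA.id &&
      ev.a.id !== collapsedB.id && ev.b.id !== collapsedB.id
  );
  // Test only the new oscillator against the existing set.
  for (const other of survivors) {
    const t = interactionTime(created, other);
    if (t !== null) kept.push({ time: t, a: created, b: other });
  }
  // Re-order by event time and continue processing from the front.
  return kept.sort((x, y) => x.time - y.time);
}
```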