Introduced "layer" tag types, which allow layers to be distinguished in the type system. Each tag type has a property specifying the scale of that layer (meters per unit).
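As a rough Python sketch of the layer-tag idea (the names LayerTag, RealLayer, MapLayer and to_meters are all hypothetical, not the real API):

```python
# Minimal sketch of "layer" tag types: one class per layer, carrying that
# layer's scale so code can be generic over the layer type.
class LayerTag:
    """Base for layer tag types; meters_per_unit gives the layer's scale."""
    meters_per_unit = 1.0

class RealLayer(LayerTag):
    meters_per_unit = 1.0           # 1 unit = 1 meter

class MapLayer(LayerTag):
    meters_per_unit = 1_000_000.0   # 1 unit = 1000 km

def to_meters(layer, units):
    """Convert a distance in the given layer's units into meters."""
    return units * layer.meters_per_unit
```

Because the layer is a distinct type rather than a runtime value, components/systems can be parameterised on it, so one binding per layer can coexist.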
Changed some Myriad components/systems to require layer tags. This allows multiple instances to be bound, one per layer.
Discarding integrator data if the rail epoch has changed while the job/task was running
Two remaining issues to fix:
RelativePagedRailBoundingSphere calculation (part of nbody line picking) is sometimes trying to write out of bounds
Entity jumps position when a burn is scheduled/cancelled
Debugging why nbody line picking isn't working
Rewriting nbody picking to use a simpler system: a linear scan instead of a recursive search. This fixes the index-out-of-bounds issue.
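A minimal sketch of the linear-scan approach, in Python for illustration (the project itself isn't Python, and treating each rail point as a small sphere is an assumption):

```python
def ray_sphere_hit(origin, direction, center, radius):
    """Ray vs sphere: True if the normalised ray passes within radius of center."""
    oc = [c - o for c, o in zip(center, origin)]
    t = sum(a * b for a, b in zip(oc, direction))       # projection onto ray
    closest = [o + t * d for o, d in zip(origin, direction)]
    d2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return t >= 0 and d2 <= radius * radius

def pick_rail_points(origin, direction, points, radius):
    """Simple linear scan over rail points - no recursion, no index juggling."""
    return [i for i, p in enumerate(points)
            if ray_sphere_hit(origin, direction, p, radius)]
```

A flat scan trades a little throughput for code that cannot index out of bounds, which is the failure mode the recursive version was hitting.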
Ray appears to be in the wrong space (needs to be relative to the origin to move back into world space)
Debugging entity jumping
There are two jumps:
A single frame jump to a position
A persistent offset for the entire duration of the burn
Investigating single frame jump
Sampling the rail fails because two points are required (one on either side of the sample time), but the rail has been trimmed to end at the current time - so there's no point after now!
Added extrapolation to the rail sampler: if two points cannot be found, it uses the last known good point and extrapolates from there. This seems to fix the one-frame jump.
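A sketch of that fallback logic, assuming 1D positions and rail points stored as (time, position, velocity) tuples (the real data layout will differ):

```python
def sample_rail(points, t):
    """
    Sample a position at time t from rail points [(time, pos, vel), ...].
    Falls back to extrapolation from the last point when no point after t exists.
    """
    before = [p for p in points if p[0] <= t]
    after = [p for p in points if p[0] >= t]
    if before and after:
        # Normal case: linearly interpolate between the bracketing points.
        (t0, p0, _), (t1, p1, _) = before[-1], after[0]
        a = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
        return p0 + (p1 - p0) * a
    # Trimmed rail: extrapolate from the last known good point using its velocity.
    t0, p0, v0 = points[-1]
    return p0 + v0 * (t - t0)
```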
Investigating shift
Probably caused by linear interpolation of a non-linear curve - i.e. the sample time falls partway into an acceleration (changing velocity), but the sampler is linear (assuming constant velocity).
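A quick worked example of the offset: under constant acceleration, linearly interpolating between two rail points overshoots the true quadratic position at the midpoint.

```python
# Constant acceleration a = 1, starting at rest: x(t) = 0.5 * a * t^2.
# Rail points at t = 0 and t = 2; sample linearly at t = 1.
a = 1.0

def true_pos(t):
    return 0.5 * a * t * t

p0, p1 = true_pos(0.0), true_pos(2.0)   # rail points: 0.0 and 2.0
linear_mid = p0 + (p1 - p0) * 0.5       # linear sample at t = 1
error = linear_mid - true_pos(1.0)      # 0.5 units of drift at the midpoint
```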
Exported some orbital data from live sim, experimenting with it in Python
Some possible fixes
Better interpolation, sticking closer to ground truth so the skip is smaller
Run an integrator every frame (but lower precision) and re-sync with rails smoothly
Fit a curve to the rail points (e.g. bezier) and interpolate along that
"Fixup" step
Detect when the rail is modified in the section that interpolation is currently sampling and run an explicit fixup step: interpolate from the last predicted position (even though it's wrong) to the next rail point, then interpolate as usual.
Began some work prototyping a new orbit rail sampler, which detects discontinuities.
Removed "catch-up" mode in integrator - the current implementation can leave large "holes" in the rail since the catch-up work is not added to the rail
Modified rail trimming to reset delta time back to min value, it'll rapidly go back up if it needs to.
Prototyping cubic bezier interpolation between points - that helps a lot! There's still some drift, since a cubic bezier is not a perfect approximation.
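A sketch of the bezier scheme in Python, assuming each rail point stores both position and velocity; deriving the control points from the velocities (the Hermite form) makes the curve reproduce constant-acceleration motion exactly, which is why it beats linear sampling:

```python
def cubic_bezier_segment(t, p0, v0, p1, v1, dt):
    """
    Cubic bezier between two rail points (1D for illustration), with control
    points derived from the stored velocities: c1 = p0 + v0*dt/3, c2 = p1 - v1*dt/3.
    t is the normalised parameter in [0, 1]; dt is the segment duration.
    """
    c1 = p0 + v0 * dt / 3.0
    c2 = p1 - v1 * dt / 3.0
    u = 1.0 - t
    return (u**3 * p0
            + 3.0 * u**2 * t * c1
            + 3.0 * u * t**2 * c2
            + t**3 * p1)
```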
Built out a stateful sampler:
Extrapolation phase - best guess when there's no available rail data
Interpolation phase - when there's rail data, using cubic bezier
Reconciliation phase - there is rail data, but extrapolation only just finished. Continue both extrapolating and interpolating, and slowly blend from one result to the other. Lasts 30-60 frames.
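The reconciliation blend can be sketched like this (1D positions; the 45-frame default is a hypothetical value in the 30-60 range mentioned above):

```python
def reconcile(extrapolated, interpolated, frames_into_blend, blend_frames=45):
    """
    Reconciliation: after an extrapolation stretch, run both samplers and
    cross-fade from the extrapolated result to the rail interpolation over
    blend_frames frames, avoiding a visible snap.
    """
    a = min(1.0, frames_into_blend / blend_frames)
    return extrapolated + (interpolated - extrapolated) * a
```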
Noticed that the spacecraft act differently depending on whether the rail was invalidated or not, even if there's no actual change! Definitely a bug in how invalidation or recalculation is done.
Removed "catch up" mode from RailIntegrator, which ran integration work on the main thread when an entity fell behind. Extrapolation mode in the sampler handles this now.
Added an event the integrator can send when entities are falling behind.
Tested the (very rough) keyboard controller script, movement now seems to be smooth with no jitter or weird drifting.
Need to offset things by the transform position of the line renderer, to account for the camera being attached to a different object than the one the line is relative to.
Optimised nbody orbit picking
Normalizing the ray ahead of time, so distance calculations can assume a length of 1.0.
Replaced many divides in the sphere tests with a single reciprocal (1/x) and multiplies thereafter.
Placed a soft limit on the number of points returned.
Considered using ray/cylinder tests; these are more expensive to evaluate but give a tighter fit. Not implemented - it's fast enough at the moment, with the soft limit preventing edge cases from getting too extreme.
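The three optimisations above can be sketched together (Python for illustration; the single up-front normalise, the squared-radius comparison, and the soft limit are the points being shown, all names are illustrative):

```python
import math

def pick_points(origin, direction, centers, radius, max_hits=64):
    """
    Optimised linear scan: direction is normalised once up front (one divide,
    reused as multiplies), distances are compared squared so no sqrt per point,
    and a soft limit caps how many hits are returned.
    """
    length = math.sqrt(sum(d * d for d in direction))
    inv_len = 1.0 / length                       # the single 1/x
    dirn = [d * inv_len for d in direction]
    r2 = radius * radius                         # compare squared, skip sqrt
    hits = []
    for i, c in enumerate(centers):
        oc = [cc - oo for cc, oo in zip(c, origin)]
        t = sum(a * b for a, b in zip(oc, dirn))
        if t < 0:
            continue                             # behind the ray origin
        d2 = sum(v * v for v in oc) - t * t      # squared perpendicular distance
        if d2 <= r2:
            hits.append(i)
            if len(hits) >= max_hits:            # soft limit on returned points
                break
    return hits
```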