# hckrnws

If anyone is interested in playing around with gravitational dynamics I highly recommend the REBOUND library [1]. It has a simple Python interface but under the hood it has a collection of research grade integrators. It came out around the time I was finishing up my PhD in gravitational dynamics, but if I were still in the field it's what I'd be using.

If you're curious what would happen to the Solar System if you made Jupiter 10 times more massive, you can pip install the library and find out for yourself in about five minutes.
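REBOUND itself does the heavy lifting for you, but the core idea fits in a page of plain Python. A minimal standalone sketch (not REBOUND's API; a hand-rolled kick-drift-kick leapfrog in units where G = M_sun = 1, with initial conditions of my own choosing):

```python
import math

def accelerations(pos, mass):
    # pairwise Newtonian gravity, G = 1
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += mass[j] * dx / r3
            acc[i][1] += mass[j] * dy / r3
    return acc

def leapfrog(pos, vel, mass, dt, steps):
    # kick-drift-kick leapfrog: symplectic, so energy stays bounded
    acc = accelerations(pos, mass)
    for _ in range(steps):
        for i in range(len(pos)):            # half kick
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
        for i in range(len(pos)):            # drift
            pos[i][0] += dt * vel[i][0]
            pos[i][1] += dt * vel[i][1]
        acc = accelerations(pos, mass)
        for i in range(len(pos)):            # half kick
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
    return pos, vel

# Sun at rest at the origin, a "Jupiter" on a circular orbit at r = 1
# (circular speed sqrt(GM/r) = 1). Scale mass[1] up to run the experiment.
mass = [1.0, 0.001]
pos = [[0.0, 0.0], [1.0, 0.0]]
vel = [[0.0, 0.0], [0.0, 1.0]]
pos, vel = leapfrog(pos, vel, mass, dt=0.01, steps=int(2 * math.pi / 0.01))
r = math.hypot(pos[1][0], pos[1][1])
print(round(r, 2))  # after one orbital period (2*pi) the planet is back near r = 1
```

With a real library like REBOUND you'd get research-grade integrators instead of this toy, but the structure of the experiment is the same: add bodies, pick a timestep, integrate.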

It's a hundred times less polished than rebound (that readme looks seriously cool!), but in 2021 I also wanted to toy around with orbital mechanics / gravity and couldn't find a quick and easy simulator so I made this to run in a browser: https://lucgommans.nl/p/badgravity/

Since it's so rough around the edges (especially on mobile; initially I was surprised it works at all), here are the steps for the mentioned example of making Jupiter 10x heavier:

1. Open the scenarios on the left and click play on the inner solar system to load it up
2. Click the plus on the outer planets to add them in (if it looks like nothing happened: zoom out. Space is big and this is to scale)
3. Fold out the "bodies" section and alter the mass for "J"upiter. The change is applied live.
4. Optionally press Restart to restart with the current settings but back at their initial positions and speeds

Making Jupiter 1000× heavier (and fast-forwarding the time in the Simulator controls by 10×) makes it eject Mars from the solar system within one minute, but interestingly Mercury and Venus seem pretty stable around the sun in that configuration.

The help/about page (https://lucgommans.nl/p/badgravity/about.html) contains links to all other orbit projects I could find. Seeing Rebound as well as the OP, I should probably add a "libraries" section! Or do you think that should just go with "Software to download" alongside Stellarium and such?

Looks awesome. I have a question that might be pedantic but I think you could speak to. Coming from an engineering mindset, I like the use of the term 'integrator' instead of 'solver'. In MATLAB, 'solvers' are used to iterate states of ODE models. But the term 'integrator' is more intuitive to me. Can you speak to the use of one term vs another in the ephemeris community?

Solver is a general term: it is just an algorithm which solves a certain problem. You have solvers for linear systems, PDEs, optimization problems, graph problems, etc.

An integrator is an algorithm which allows the numerical approximation of the solution to an ODE, given that the ODE is written in a specific form where it is equivalent to calculating the integral of multiple functions.
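As a concrete (if crude) example of such an integrator, forward Euler just accumulates the right-hand side of dy/dt = f(t, y) step by step; here applied to dy/dt = -y, whose exact solution is e^(-t):

```python
import math

# Forward Euler: advance the state by repeatedly adding h * f(t, y),
# i.e. numerically accumulating the integral of the right-hand side.
def euler(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 10_000)
exact = math.exp(-1.0)
print(abs(approx - exact) < 1e-3)  # → True: close to the exact solution e^(-1)
```

Higher-order integrators (Runge-Kutta, symplectic schemes, etc.) refine exactly this accumulation step.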

You can solve differential equations by integrating or by differentiating. Real-world integrators were easier to build back when DEs were solved with analog computers. Although "easier" is an understatement: Differentiators have a nasty habit of trying to blow up to infinity, which means you can't really build a good, general differentiator either with analog *or* digital electronics.

Integrators are much better behaved pets and they don't shit on the carpet. So everybody uses integrators. Integrators have lots of issues too but those can be sufficiently mitigated for many classes of problems. Differentiators are mostly hopeless, feral beasts.

Are you sure you got these the right way round? It seems logical that something that integrates over time *should* veer off towards infinity eventually, but something that differentiates should be stable.

Integrators are summers, so they can grow without bound over time. But if the input function varies around y=0 -- as many do -- they can remain stable.

Now let's talk about infinities that can happen *instantly*: What's the derivative at the upward edge of a square wave?
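That blow-up is easy to reproduce numerically. A small sketch: add a tiny high-frequency wiggle (standing in for measurement noise) of amplitude 1e-6 to sin(t), then finite-difference it; the "noise" gets amplified by 1/h:

```python
import math

h = 1e-6  # finite-difference step

def noisy_sin(t):
    # smooth signal plus a deterministic 1e-6-amplitude wiggle
    return math.sin(t) + 1e-6 * math.sin(1e9 * t)

true_deriv = math.cos(0.0)                         # exact derivative of sin at 0
approx_deriv = (noisy_sin(h) - noisy_sin(0.0)) / h # forward difference
error = abs(approx_deriv - true_deriv)
print(error > 0.1)  # → True: a 1e-6 wiggle causes an O(1) derivative error
```

Integrating the same noisy signal would instead *average the wiggle away*, which is the asymmetry the comment above is describing.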

This account has been posting simulations of interesting 3-body scenarios for quite a while. It used to be on Twitter but moved to Mastodon. You can check out the archives and play the videos, it’s quite neat: https://botsin.space/@ThreeBodyBot

How is it random if the 3-body problems are chosen to have overlapping ellipses that look beautiful?

The Mastodon videos have random initial conditions, the GitHub example does not. Are you checking the right link?

Rejection sampling? /s

I love this bot, thanks for sharing!

There is this cool paper "Crash test for the restricted three-body problem" [0], that plots where the third body eventually ends up when dropped from any location. Looks very fractal-like [1][2]

[0] https://scholar.archive.org/work/wnwgyliq5fgtba45k535t5lb5e/...

[1] https://www.semanticscholar.org/paper/Crash-test-for-the-res...

[2] https://www.semanticscholar.org/paper/Crash-test-for-the-res...

‘Strange attractors’: https://en.wikipedia.org/wiki/Attractor

“An attractor is called strange if it has a fractal structure.”

Thank you for reminding me of this beautiful field of mathematics.

The best textbook I've ever read: *Nonlinear Dynamics and Chaos* by Steven Strogatz.

Looks like fluid dynamics / turbulence to me

E&M and gravity are both inverse-square forces, but E&M has polarity, whereas gravity (at least as we encounter it) does not.

One of the first programs I ever wrote was a simulator for a planet orbiting a star, with a naive difference-equation approximation to Newton's law. I was a bit disappointed to see the planet reliably spiral into the sun.

The main thing is that something like Euler's method (naive iterative approximation) doesn't guarantee conservation of energy. I believe that this is why planetary dynamics are usually handled with Lagrangian equations rather than the naive approximation approach.

Edit: It would be nice to see what the author's system does for two bodies as a sanity check. The three-body system is indeed chaotic but still conserves energy - would this system do that?
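The energy drift of the naive method is easy to see numerically. A sketch (my own toy setup, units G = M = 1): forward Euler on an initially circular Kepler orbit. With this discretization the orbit gains energy and spirals outward; other naive schemes lose it, which is how you get the spiral-in:

```python
import math

def energy(x, y, vx, vy):
    # total energy per unit mass: kinetic + potential, G = M = 1
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

# circular orbit of radius 1: total energy should stay at -0.5 forever
x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
e0 = energy(x, y, vx, vy)
dt = 0.01
for _ in range(10_000):
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -x / r3, -y / r3
    x, y = x + dt * vx, y + dt * vy       # forward Euler: old velocity...
    vx, vy = vx + dt * ax, vy + dt * ay   # ...and acceleration at the old position
print(energy(x, y, vx, vy) > e0)  # → True: energy has drifted upward
```

Swapping in a symplectic scheme (e.g. leapfrog) keeps the energy error bounded with essentially the same amount of code.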

It wasn't the first program I wrote, or even a program I wrote, but in middle school a friend wrote a 3-body integrator in BASIC (sun, earth, moon). That single 20 line program shaped my entire world view for a long time (decades), implying to me that we could, if we had powerful enough computers, simulate all sorts of things... even entire universes (which was also an idea that I explored with cellular automata).

It's not a particularly helpful worldview and can often be harmful if you're working with complex systems, or systems that require more than O(n log n) per step, or any number of other real-world problems scientists face.

Many years later I was impressed at how well astronomy packages work (IE, "it's time T at local L, what is the angle of the sun in the sky?") and stumbled across this paper by Sussman: https://web.mit.edu/wisdom/www/ss-chaos.pdf which shows some pretty serious work on future prediction of solar system objects.

>simulate all sorts of things... even entire universes

You also assumed that chaos is a measurement problem: that you could simulate the entire universe if you knew the initial conditions precisely enough. There were two nice recent papers [1][2] that showed that in order to predict some orbits you'd need accuracy finer than the Planck length, or else some systems are fundamentally unpredictable.

[1]: https://arxiv.org/abs/2002.04029 [2]: https://arxiv.org/abs/2311.07651

I'd love to see convincing evidence that we could simulate the universe using only standard physical laws. IIUC we don't have a way to do that or reliably say whether it's possible. It's also not that interesting a problem because it's so impractical.

But it would be interesting to see if we could simulate *a* universe and observe if it in any way resembled ours. Even if the simulated universe was much "smaller". (It would of course have to be.)

I think you'd need to be a few rungs up the Kardashev scale to even contemplate this.

That may be ... plus we don't really know what to make of the observations around us, with dark matter and stuff.

> *if we had powerful enough computers, simulate all sorts of things*

In reality, we're having a hard time precisely simulating even two atoms interacting, with all the quantum effects, diffraction, gravity (however minuscule), etc.

Our universe is surprisingly detailed.

64-bit floats aren't even close to enough to precisely simulate the real world. What's the precision of the mass of an electron? What's the precision of its coordinates or motion vectors? Maybe the Planck length for coordinates, maybe not. What about acceleration? Can it be arbitrarily low? An electron's gravitational field from a billion light years away should theoretically affect us (in time).
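On the 64-bit float point specifically, the limits are easy to demonstrate:

```python
import sys

# An IEEE 754 double carries 53 bits of mantissa, roughly 15-16
# significant decimal digits -- nowhere near the dynamic range discussed above.
print(sys.float_info.mant_dig)   # → 53
print(1.0 + 1e-16 == 1.0)        # → True: the tiny increment is lost entirely
print(1.0 + 1e-15 == 1.0)        # → False: this one is still representable
```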

The assumption in your comment is that any of this is real to begin with and logic isn't being short-circuited in our brains to make everything "check out" even if it doesn't.

If you simulate a universe with cube blocks from Minecraft, it doesn't matter as long as your *users* think the simulation is real.

And since you are simulating their consciousness, you can easily short circuit the train of thought that would cause doubt, or that would attempt logic, etc., so they *truly believe* their Minecraft cube world is incomprehensibly detailed down to the atoms and galaxies in the sky.

They'd happily go on the whiteboard, and prove their theories with math like 2+2=5 and everyone would agree because they literally couldn't disagree - they would feel in their hearts and minds that this is perfectly correct. There's nothing to say that's not happening now.

In fact, this is how I see most advanced civilizations performing simulations. The compute savings would be immense if you could just alter user consciousness as opposed to simulating an actual universe.

I always find skepticism like this to be really interesting, since in the end we could always be getting fooled by the Deus Deceptor or something. That being said, let me take a stab at being anti-skeptical for the fun of it.

I work around people who do "Computational Chemistry", which is basically running quantum physics calculations. These tend to be done in order to either understand the properties of materials, or to understand the reasons why reactions happen. The results are more advanced materials and better performing reactions. An early and famous example of such technology is the laser. A more typical modern example would be searching for Zeolite catalysts which have particular properties, or trying to create surface coatings which protect implants from being eaten by the immune system, or on which ice cannot freeze.

Basically, I believe the advanced calculations to be correct because they lead to things which are (eventually) used in daily life.

In nearly all situations, these advanced calculations bear only a limited relationship to the underlying physics occurring in material systems. A lot of simulation work involves twiddling parameters until you get the result you want to see, and then just publishing that one simulation. It's sort of a post-hoc retro-causality problem. Many of the things you describe came about through a combination of immense amounts of lab work (most of which were failures), some theoretical concepts, and a person willing enough to twiddle params until they hit upon something that works, after which they can optimize the parameters.

It is true that simulations produce results which may not reflect the underlying system if the simplifications and fudge factors are incorrect. Thus fiddling with parameters is part of the process.

In the example I gave of searching for zeolite catalysts, the simulations were just used to identify candidates for labs to study. I don't remember the exact numbers, but I think it brought the list of candidates down from hundreds to less than 10. The majority of these candidates were at least somewhat effective. Unless we believe that pretty much all of those hundred candidates would have been effective, then the advanced calculations were doing some work.

The question is, is all that work actually just done because of parameter twiddling? I don't think so. Consider that neural networks are often used lately in order to provide computationally simpler models of various physical phenomena. They can do a somewhat better job if fed with a lot of real data, but they use at least thousands of times more parameters than the simple quantum physics calcs with fudge factors. Thus I think it is safe to say that the structure of the quantum physics calcs does meaningfully model some part of reality. (Unless, as xvector points out, our memories are being continuously overwritten to make reality seem consistent)

It's also good to note that the fudge factors (read: parameters) and quantization are done because it would be too computationally difficult to model the parts of the system modeled by fudge factors for systems with a useful amount of atoms in them, and we just don't know how to compute ODEs for complex systems in continuous time and space. In simple systems, (e.g. 2 photons interacting) analytical solutions for ODEs can be found, no fudge factors are needed for computation, and the computed results match the experimental results to within measurement error.

> Basically, I believe the advanced calculations to be correct because they lead to things which are (eventually) used in daily life.

I think you are missing my point - if you can short circuit logic, you will never be able to know whether your calculations are correct (but you will believe it)

Whether the outputs are used in daily life or not is irrelevant. You don't truly know if that is happening because you do not know what the fuzz factor is in the simulation.

Is the night sky the same as it was yesterday, or is it generated on the spot and your memory edited? The latter is more compute efficient.

Does your coworker look the same, or is the fuzz factor in the sim very high and they have a new face/body generated every day, with your memory edited to match?

Etc. Whether the outputs of the equations you described are used or not is irrelevant, because it would be far more compute-efficient to just not have them mean anything and to fuzz their existence/workability.

Indeed, I can't prove that it is or isn't the case that my thoughts and memories aren't being constantly overwritten to make reality consistent. I don't have a firm belief one way or the other, but I act like reality does intrinsically make sense.

Reason being, if reality is consistent, then acting as though it is achieves my goals. If reality isn't consistent, or is consistent in a way that differs from what I am capable of comprehending, then I am unable to compute any pattern of behavior that would be helpful to achieving my goals.

Thus it only makes sense to me to act like reality is consistent. I think that if I am acting this way, then it makes sense for me to say that I "believe" reality is consistent in a non thought overriding way.

EDIT: Looking at your comment again, I think that you think it is likely that reality should be simple because computing that would be easier. If we are stuck in a simulation by more advanced beings, then it is possible that compute power is a limiting factor, or they may just have computers so powerful that simulating us could be a cinch.

The simulation scenario is easy to imagine. However, just because I can't imagine scenarios besides "it just is this way", "God did it", and "We are in a simulation" doesn't mean such scenarios don't exist.

Sure, if we're just fooled, if it's all an illusion, it definitely requires a lot less computation.

But if it's a proper simulation, base reality must be even more detailed. Like, a lot more.

> But if it's a proper simulation, base reality must be even more detailed. Like, a lot more.

Not necessarily. You could create the feeling or impression of detail on-demand - consider a 2D fractal in software that you can zoom into infinitely. It's not more detailed than our base reality, it's actually quite a simple construct.

One imagines that post-singularity overlords don't have to worry about IEEE 754. Float is likely not the right representation here, but double is enough to represent solar-system-scale distances at centimeter precision.

> *at centimeter precisions*

So yeah, you're 33 decimal orders of magnitude off from the Planck length. And that's assuming that the Planck length is the smallest possible length.

So you'd need at least 117 extra bits to get your representations precise. And that's just for our solar system.

For the observable universe (~93 billion light years across) you'd need 206 bits of precision.
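A back-of-envelope check of those bit counts (constants and span choices are my own assumptions; Planck length ~1.6e-35 m):

```python
import math

planck = 1.616e-35           # Planck length, meters
au = 1.496e11                # astronomical unit, meters
ly = 9.461e15                # light year, meters

solar_system = 100 * au      # a generous solar-system span (assumption)
observable = 93e9 * ly       # ~93 billion light years across

# bits needed to resolve one Planck length across each span
bits_solar = math.ceil(math.log2(solar_system / planck))
bits_obs = math.ceil(math.log2(observable / planck))
print(bits_solar)  # → 160
print(bits_obs)    # → 206
```

The observable-universe figure matches the 206 bits quoted above; the solar-system figure depends on how wide you assume "the solar system" to be.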

I think symplectic integrators are typically used, which are derived from hamiltonian mechanics

It's true that Euler integration is about as crude as you can get, but you don't need to reach to Lagrangians for improvement; something like Verlet integration can already bring dramatic gains with fairly small changes needed.

Yes, I think it's probably the simplest symplectic method, which would be quite the improvement already.

You can convert Euler's method to a symplectic integrator by using v_{n+1} when computing x_{n+1}. That said, although such integrators (usually of higher order than Euler's) are widely used in celestial mechanics, one is not restricted to them. For example, Bulirsch-Stoer is also heavily used even though it isn't symplectic, because it remains accurate (very low energy error) even on long integrations.

Would it make sense to explicitly implement conservation of energy?

I.e. do a simple method but calculate the total energy at the beginning, and at each step adjust the speeds (e.g. proportionally) so that the total energy matches the initial value - you'll still always get *some* difference due to numerical accuracy issues, but that difference won't be growing over time.
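A sketch of what that adjustment could look like (my own toy formulation: rescale all speeds so kinetic plus potential energy matches the initial total; as the replies note, this can mask integration error rather than fix it):

```python
import math

def rescale_velocities(vels, masses, target_E, potential_E):
    # scale all speeds by a common factor s so that KE' + PE == target_E
    ke = sum(0.5 * m * (vx * vx + vy * vy) for m, (vx, vy) in zip(masses, vels))
    needed_ke = target_E - potential_E
    if needed_ke <= 0 or ke == 0:
        return vels                      # can't fix it by scaling speeds
    s = math.sqrt(needed_ke / ke)
    return [(s * vx, s * vy) for vx, vy in vels]

# Toy check: one unit-mass body with KE = 0.5, PE = -1.0, target total E = -0.4,
# so the kinetic energy should come out at 0.6.
vels = rescale_velocities([(1.0, 0.0)], [1.0], target_E=-0.4, potential_E=-1.0)
print(round(vels[0][0] ** 2 / 2, 2))  # → 0.6
```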

The method you describe would be an example of what is called a "thermostat" in molecular dynamics (because the speed of molecules forms what we call temperature). Such adjustments to the speed can definitely paper over issues with your energy conservation, but you still have to be careful: if you rescale the speeds naively you get the "flying ice cube" effect where all internal motions of the system cease and it maintains its original energy simply by zooming away at high speed.

Thermostats ensure that the average _kinetic energy_ remains constant (on average or instantaneously, depending on how they are implemented). Your parent post wants to enforce the constraint that the total energy remains constant. So it's a bit different from a canonical ensemble (NVT) simulation. This is a microcanonical ensemble simulation (NVE). This means you don't know if you should correct the positions (controlling the potential energy) or the velocities (controlling the kinetic energy).

Basically, there will be error in the positions and velocities due to the integrator used and you don't know how to patch it up. You have 1 constraint; the total energy should be constant. There are 2*(3*N-6) degrees of freedom for the positions and velocities (if more than 2 bodies). The extra constraint doesn't help much!

Edit: Also, the only reason thermostats work is because the assumption is that the system is in equilibrium with a heat bath (i.e. bunch of atoms at constant temperature). So there is an entire distribution of velocities that is statistically valid and as long as the velocities of the atoms in the system reflect that, you will on average model the kinetics of the system properly (e.g. things like reaction rates will be right). In gravitational problems there is no heat bath.

If you want to demonstrate why the three-body problem is chaotic, you can set it up to run a couple hundred very similar simulations in parallel. Just nudge each body by a tiny amount to simulate uncertainty in the initial conditions and watch the resulting configurations diverge as tiny differences become large differences. Rather than points, you get lines of probability that stretch out and wrap around each other. It's quite striking.

Edit: semantics
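A minimal version of that ensemble experiment (arbitrary initial conditions of my choosing; two runs differing by one part in 10^9):

```python
import math

def step(pos, vel, dt):
    # one kick-drift-kick leapfrog step, G = m = 1 for all three bodies
    def acc(p):
        a = [[0.0, 0.0] for _ in p]
        for i in range(3):
            for j in range(3):
                if i != j:
                    dx, dy = p[j][0] - p[i][0], p[j][1] - p[i][1]
                    r3 = (dx * dx + dy * dy) ** 1.5
                    a[i][0] += dx / r3
                    a[i][1] += dy / r3
        return a
    a = acc(pos)
    vel = [[vx + 0.5 * dt * ax, vy + 0.5 * dt * ay]
           for (vx, vy), (ax, ay) in zip(vel, a)]
    pos = [[x + dt * vx, y + dt * vy]
           for (x, y), (vx, vy) in zip(pos, vel)]
    a = acc(pos)
    vel = [[vx + 0.5 * dt * ax, vy + 0.5 * dt * ay]
           for (vx, vy), (ax, ay) in zip(vel, a)]
    return pos, vel

def run(nudge):
    # nudge perturbs one coordinate of the third body
    pos = [[-1.0, 0.0], [1.0, 0.0], [0.0 + nudge, 0.5]]
    vel = [[0.0, 0.3], [0.0, -0.3], [0.2, 0.0]]
    for _ in range(5000):
        pos, vel = step(pos, vel, 0.001)
    return pos

a, b = run(0.0), run(1e-9)
sep = max(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b))
print(sep > 1e-9)  # → True: the 1e-9 nudge has grown
```

Run a few hundred of these instead of two and you get the probability clouds described above.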

This is irrelevant to the unsolvability.

That the three body problem is unstable and that no analytic solution exists are completely independent statements.

The upright pendulum is also an unstable ODE, yet it has an analytical solution.

This only demonstrates that the system is chaotic, not that there is no closed form solution.

This may be a bit pedantic, but the n-body problem is not chaotic; it is harder, having riddled basins.

> A riddled basin implies a kind of unpredictability, since exact initial data are required in order to determine whether the state of a system lies in such a basin, and hence to determine the system’s qualitative behavior as time increases without bound. (Note this is different from “chaos,” where very precise initial data are required to determine finite-time behavior.) What is more, any computation that determines the long-term behavior of a system with riddled basins must use the complete exact initial data, which generally cannot be finitely expressed. Hence such computations are intuitively impossible, even if the data are somehow available.

http://philsci-archive.pitt.edu/13175/1/parker2003.pdf

The above post is a good 'example' of sensitivity to initial conditions, and riddled basins do have a positive Lyapunov exponent, which is often the only criterion invoked in popular mathematics related to chaos. But while a positive Lyapunov exponent is required for a system to be chaotic, it is not sufficient to prove a system is chaotic.
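For readers unfamiliar with the term: the Lyapunov exponent measures the average exponential rate at which nearby trajectories separate, and can be estimated numerically. A sketch for the logistic map at r = 4, where the exact value is ln 2 ≈ 0.693:

```python
import math

# Estimate the Lyapunov exponent of x -> r*x*(1-x) by averaging
# log|f'(x)| = log|r*(1 - 2x)| along a long orbit.
r = 4.0
x = 0.3
n = 200_000
total = 0.0
for _ in range(n):
    total += math.log(abs(r * (1.0 - 2.0 * x)))
    x = r * x * (1.0 - x)
lyap = total / n
print(round(lyap, 2))  # close to ln 2 ≈ 0.693, i.e. positive
```

A positive result like this is necessary for chaos, but as the comment explains, not sufficient.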

If you look at the topologically transitive requirement, where you work with non-empty open sets U, V ⊂ X: riddled basins have no open sets, only closed sets.

With riddled basins, no matter how small your ε, it will always contain the boundary set.

If you have 3 exit basins you can run into the Wada property, where 3 or more basins share the same boundary set. It is also dependent on initial conditions but may have a zero or even negative Lyapunov exponent, and it is hard to visualize, non-chaotic, and nondeterministic.

Add in strange non-chaotic attractors, which may be easier or harder than strange chaotic attractors, and the story gets more complicated.

Sensitivity to initial conditions is simply not sufficient to show a system is chaotic in the formal meaning.

But the 3 body problem's issues do directly relate to decidability and thus computability.

This is all very interesting stuff, and I thank you for a bunch of new keywords to google, but I’m not sure why you say it’s not chaotic.

As far as I understand, extreme sensitivity to parameters/ICs is all that is required for a system to be chaotic.

That was once a popular belief, but we have moved past that historical concept.

Here is a paper that is fairly accessible that may help.

https://home.csulb.edu/~scrass/teaching/math456/articles/sen...

It becomes important when you have a need to make useful models, or to know when you probably won't be able to find a practical approximation.

It is similar to the erroneous explanation of entropy as disorder, which is fundamentally false, yet popular.

It has real implications, like frustrating the efforts to make ANNs that are closer to biological neurons:

https://arxiv.org/abs/2303.13921

Or even model realistic population dynamics.

> It has been shown how simple ecosystem models can generate qualitative unpredictability above and beyond simple chaos or alternate basin structures.

https://www.researchgate.net/publication/241757794_Wada_basi...

Chaotic, riddled, and Wada can be viewed as deterministic, practically indeterminate, and strongly indeterminate, respectively.

If you want to hold on to the flawed popular understanding of the butterfly effect that is fine, you just won't be able to solve some problems that are open to approximation and please don't design any encryption algorithms.

I think realizing it is simply a popular didactic half-truth is helpful.

But the chaos is extremely critical, since we can’t ever perfectly measure the initial conditions.

So, even having a closed form solution isn’t helpful when computing real world situations.

The statements are independent. It having a closed form solution and it being unstable don't contradict or confirm one another.

Numerical solutions to ODEs very often diverge *exponentially* from the true result. That the 3-body problem does this makes it characteristic, not special.

>So, even having a closed form solution isn’t helpful when computing real world situations.

Simply not true. It is helpful or not depending on your problem. Often you are interested in short term behavior, which can be studied by numerical methods or, if existing, analytic solutions.

Closed-form solutions in general dynamical systems are possible when systems are *integrable*, which means there is a conserved quantity for every degree of freedom. The solar system is *almost* like that, in that each planet keeps going around with a constant angular momentum, so they go around like a set of clocks that run at different speeds. Over long periods of time angular momentum is transferred between the planets, so you get chaos like this:

https://www.aanda.org/articles/aa/full_html/2022/06/aa43327-...

Orbital mechanics is a tough case for perturbation theory because each planet has three degrees of freedom (around, in and out, up and down) and the periods are the same for all of these motions and don't vary with the orbital eccentricity or inclination. Contrast that with the generic case, where the periods are all different and vary with the amplitude, so with weak perturbations away from a resonance the system behaves mostly like an integrable system, but if the ratio between two periods is close to rational, all hell breaks loose; see

https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold%E2%8...

The harmonic oscillator has a similar problem because the period doesn't change as a function of the amplitude. Either way, these two pedagogically important systems will lead you completely astray in terms of understanding nonlinear dynamics: if you add, say, an εx^3 term to the force in one of two coupled harmonic oscillators, it is meaningless that ε is small; you have to realize that the N=2 case of this integrable system

https://en.wikipedia.org/wiki/Toda_lattice

is the right place to start your perturbation from, which ends up explaining why the symmetry of the harmonic oscillator breaks the way it does. Funny, though: the harmonic oscillator is not weird at all in quantum mechanics and is just fine to do perturbation theory from.

Thanks for this. This is something that always bugged me when I see explanations of the three-body problem. They'll say something like "changing the initial conditions just a tiny bit can dramatically change the outcome!" as an explanation for why having no closed form solution is significant.

But that never made sense to me, since plenty of things with closed form solutions also do this.

It was really a math problem and not a physical one. There's so much more going on (GR, radiation pressure) that even if there were a solution it wouldn't be able to predict Mercury's orbit.

FYI there are stable configurations to 3+ body systems. Not all configurations are chaotic.

For a trivial example, see the Solar System.

Great example.

> This only demonstrates that the system is chaotic, not that there is no closed form solution.

This seems a bit off, it seems like[1] an *implicit* assertion ("*only*(!) demonstrates") that *it is not possible* for a system that lacks a closed form solution *in fact* (beyond our ability to discern) to be demonstrated.

To be clear I'm in no way implying this was your intent (I see it as an interesting "quirk" of our culture)... I'm mainly interested in whether you can see what I'm getting at.

As a thought experiment, stand up two instances: one is our current situation (inability to discern, indeterminate), the other where we have (somehow) proven out (or, *come to believe we have*, reality being Indirect but *experienced as* Direct, thus: "*is* "*proven*", thus: "*is*") that a closed form solution is not possible: would the second instance "be(!) a demonstration that the system has no closed form solution"? (Thinking more....I think maybe the choice of the word "demonstrate" may very well make *a path to seeking* the truth of the matter ~impossible to achieve in these sorts of cases, especially if one takes cultural forces[2] into consideration).

[1] Using "pedantry", which few people understand the technical meaning of, *and tend to flip flop on depending on what is being considered* (precision & accuracy in science/physics is good, precision & accuracy in philosophy/metaphysics is bad - *no explanation or justification needed*: a Cultural Fact).

[2] Which make the 3 body problem in the *known to be deterministic* physical realm seem like child's play.

I'm not sure I understand what you're saying.

>As a thought experiment, stand up two instances: one is our current situation ..., the other where we have (somehow) proven out ... that a closed form solution is not possible

As far as I understand, this has in fact been proved. Quite a long time ago, too, by Poincare I believe.

GP has edited his comment to reflect my feedback, but originally said that his experiment "demonstrates that there is no solution." All I was trying to point out is that the two concepts are not necessarily related.

You could imagine some system x' = f(x), where f(x) is some transcendental function. There is no analytic solution to this system, but it's obviously not chaotic.

Could you imagine a system that is chaotic but *does* have an analytical solution? I'm not sure. Closest I could find to answering this was: https://sprott.physics.wisc.edu/pubs/paper496.pdf

I'm sure he understood this. I only commented to try and minimize the confusion of others.

edit - This article suggests that the logistic map (a system famously used to introduce the concept of chaos) has an analytical solution: https://www.sciencedirect.com/science/article/pii/0378437195...
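For the fully chaotic case r = 4 there is a well-known closed form, x_n = sin²(2ⁿ · arcsin(√x₀)), which is easy to check against direct iteration (this is the textbook formula, not necessarily the one in the linked article):

```python
import math

x0 = 0.2
theta = math.asin(math.sqrt(x0))  # so that x0 = sin^2(theta)

x = x0
for n in range(1, 11):
    x = 4 * x * (1 - x)                     # iterate x -> 4x(1-x) directly
    closed = math.sin(2 ** n * theta) ** 2  # closed-form value for step n
    assert abs(x - closed) < 1e-6, (n, x, closed)
print("closed form matches direct iteration for n <= 10")
```

For large n the two disagree numerically, since the 2ⁿ factor amplifies roundoff in theta: having an analytical solution doesn't rescue you from sensitivity to initial conditions.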

A few years ago a friend and I made something similar to universe sandbox, though only with the gravitational simulation part: https://github.com/fayalalebrun/Astraria

Surprisingly enough, the jar still runs without issue. Something which probably would not be the case for Linux binaries, but maybe for Windows ones.

Not to forget https://www.kerbalspaceprogram.com

KSP doesn't actually do >2-body simulations, which allows the paths to follow closed-form solutions. This is why you get an abrupt orbit change between bodies instead of just a freeform path. There is a mod, "Principia", which adds this functionality, but be warned: it makes the game very different to play!

Principia is extremely impressive, they document all their math here: https://github.com/mockingbirdnest/Principia/tree/master/doc...

One interesting detail is that the source code makes extensive use of non-ASCII identifiers, for mathematical symbols and for the names of mathematicians. One of the two primary contributors is also an active contributor to Unicode.

Thanks, I was wondering how feasible this was / how one would do that in C++ !

Comment was deleted :(

KSP even while simplified is so amazing.

I've never gotten very far but the one thing it did manage to impress extremely thoroughly on me is "space is hard". And it's like 5x easier in KSP than on earth lol.

Also it showed me that that ever recurring thought of "why don't they just..." is usually pretty misguided.

I really respect how they managed to make this fun and so incredibly educational at the same time.

Netflix really going hard on pushing their IP. It’s like guerilla marketing on steroids.

I jest. Tbh, I didn’t know this was an actual problem. Thanks for sharing.

I've used the NAIF SPICE toolkit (https://naif.jpl.nasa.gov/naif/toolkit.html) which includes the Yarkovsky effect and such, important for small bodies.

This is great! Thank you

In college, a long time ago, I wrote something like this for n bodies, but in C++ and OpenGL

More recently I’ve built something similar in Python

For anyone interested in this, I recommend this Wired article that goes from the 2 body problem to n, with simulations and code that run on the browser: https://www.wired.com/2016/06/way-solve-three-body-problem/

Do you want ghost numbers counting down on your retinas? Because this is how you get ghost numbers counting down on your retinas.

It's ok, I'll just make sure never to measure the CMB for variations.

I've always read about how astronomers, especially after Newton, could predict the movement of the planets with incredible precision, yet n-body problems are so infamously complex and hard to solve, even now. Why is that? Is it simply that the mass of everything except for the sun is just so negligible?

N-body problems can't be *analytically* solved.
However, you can still compute integrals into the future (with some acceptable error); you just need to step through all the intermediate states along the way.

In the case of the solar system, yes, it helps that the Sun is much more massive than everything else (and then Jupiter is roughly 3 times more massive than Saturn, the next biggest) - you can go a long way to a "reasonable" solution by starting with the 2-body solution as if only the Sun affected each planet, and then adding in the perturbation caused by Jupiter and Saturn. (In fact, that's how we predicted the existence of Neptune, by noticing that there were extra perturbations on Uranus beyond those, and hence another massive planet must exist, far enough away from the Sun to only significantly affect Uranus).

Comment was deleted :(

Predictions of the solar system state are accurate only over "short" periods. The solar system is chaotic, and predicting its state after a few million years is no more possible than predicting the weather for next year. This does not preclude making very accurate predictions over short periods.

Note that n-body problems are not particularly complex or hard to solve compared to other chaotic systems. In many ways, the existence or nonexistence of closed-form solutions is mostly a distraction: it merely reflects our choice of primitive functions, and there is no set of primitive functions that is stable under addition, multiplication, composition, inversion, and integration. For example, even the simple integral ∫ eˣ/x dx cannot be decomposed into more elementary functions.

But that doesn't matter in practice, because we are already using numerical approximation to compute primitive functions that are not implemented in hardware. Using numerical solvers to compute solutions to ODEs is not so different. A good illustration of that point is that there is an analytic solution to the 3-body problem (in the form of an infinite series in t^{1/3}). But this solution is useless for computing orbits because it has bad convergence properties. In other words, it is better to use a numerical solver than to stick to the analytical solution. A similar phenomenon exists for polynomial equations of degree 3 and 4: the exact formula is numerically unstable, and it's better to use a numerical solver when one wants a numerical solution.
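To make that point concrete, here is a minimal sketch in plain Python (no libraries): ∫ eˣ/x dx has no elementary antiderivative, but its value over a concrete interval is easy to approximate numerically, e.g. with composite Simpson's rule. The function name and interval are just for illustration.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# ∫ e^x / x dx from 1 to 2: no elementary antiderivative,
# but the numerical approximation converges quickly.
value = simpson(lambda x: math.exp(x) / x, 1.0, 2.0)
print(value)  # ≈ 3.0591
```

Even a coarse grid gets several correct digits here, which is exactly why the lack of a closed form is no obstacle in practice.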

I'm not a physicist, but n-body problems result in nonlinear ordinary differential equations, for which solutions are known only in a few special cases, and even those are due to simplifications, e.g. treating planets as point masses and ignoring tidal forces, or ignoring the pull of the planets on the Sun (or on each other).

One such case where a solution is known is the Lagrange points of the Earth-Moon-Sun system (and similarly for other systems): https://en.wikipedia.org/wiki/Lagrange_point But in reality, they exist only as an approximation. They aren't truly stable.

My understanding is that the way to calculate spacecraft, asteroid, etc. trajectories is just through a discrete simulation.

Like, if you don't know how to find the antiderivative of a given function, you can still calculate the integral numerically, since you can evaluate the function at any point.

To add to the points from others: 3+-body systems are generally chaotic, so you cannot predict arbitrarily far into the future, but the solar system is reasonably well-behaved in that manner, so the timescales are long, but we don't know where the planets in the inner solar system will be in their orbit in ~5-10 million years (as in, that's the timescale where the error bars for the position in the orbit span the whole orbit). Of course, if you care about more precise predictions then the timescales are shorter: eclipse predictions more than 1000 years in the future are likely to be quite inaccurate.

For almost any differential equation there is no analytical solution to its initial value problem. That the n-body problem behaves that way is unsurprising and poses no inherent challenge to making predictions.

Computers can easily solve initial value problems for most ordinary differential equations. They integrate them, calculating an approximate solution after every small but finite step.

Getting an approximate solution to the 3-body problem can be achieved in around 20 lines of Python, without having to use any libraries. It is a remarkably simple and effective technique.
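Something along those lines can be sketched as follows, using semi-implicit Euler with unit masses and G = 1 (the initial conditions are arbitrary, just for illustration):

```python
import math

G = 1.0  # natural units
# Each body is [x, y, vx, vy, mass]; values chosen arbitrarily.
bodies = [[-1.0, 0.0, 0.0, -0.5, 1.0],
          [ 1.0, 0.0, 0.0,  0.5, 1.0],
          [ 0.0, 0.5, 0.5,  0.0, 1.0]]
dt = 0.001

def step(bodies, dt):
    # Update all velocities first (positions are still consistent here)...
    for i, b in enumerate(bodies):
        ax = ay = 0.0
        for j, o in enumerate(bodies):
            if i == j:
                continue
            dx, dy = o[0] - b[0], o[1] - b[1]
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * o[4] * dx / r3
            ay += G * o[4] * dy / r3
        b[2] += ax * dt
        b[3] += ay * dt
    # ...then move positions using the new velocities (semi-implicit Euler).
    for b in bodies:
        b[0] += b[2] * dt
        b[1] += b[3] * dt

for _ in range(1000):
    step(bodies, dt)
```

Plot the positions each step and you get the familiar tangled three-body trajectories; the accuracy is crude but the idea is all there.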

You can solve it numerically using finite differences, basically using linear approximations.

If you want to take the next step up in accuracy and cleverness, investigate the work of Mr. Runge and Mr. Kutta.
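For reference, the classical Runge-Kutta (RK4) step for a first-order system y' = f(t, y) can be sketched like this (plain Python, with y as a list of floats; the example equation is just an illustration):

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta (RK4) step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Sanity check: y' = y with y(0) = 1 should give y(1) ≈ e.
y = [1.0]
for _ in range(100):
    y = rk4_step(lambda t, y: y, 0.0, y, 0.01)
print(y[0])  # ≈ 2.71828
```

The error per step scales as h⁵, which is why RK4 is such a large accuracy jump over simple Euler stepping for the same step count.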

Is there some way of determining whether the orbits, given some starting parameters, will _at some point_ become stable? I probably mean periodic. Or rather mostly periodic, with only slight deviations from the orbits. And I don't necessarily mean that we can calculate/predict what they will look like in a stable form.

This reminds me of Langton's Ant, which has very simple rules but at first still seems very chaotic. Then after some number of iterations it just shoots away in an endlessly repeating regular pattern. "Order came from chaos." So it makes me think: maybe there is no way to tell whether the orbits "stabilize" at some point, but maybe they will, and we simply don't know when, or how to tell?

What does the three-body problem tell us about our universe? It surprises me that the universe by its nature ended up with such smart safeguards to disallow predictability via computation.

Hah, I did something similar at https://ari.blumenthal.dev/!/-2/-1/three-body after reading the book last year.

Source at https://github.com/zkhr/blog/blob/main/static/js/three.js

Loved your website. Thanks for sharing.

Completely selfish plug, but it's not very hard to implement, so here's my implementation using p5js: https://editor.p5js.org/alexduf/full/c-iSj_b4p

You can play around with the code. Clicking generates a new solar system. There are a few constants you can adjust at the top of the file.

And since A. K. Dewdney is fresh on my mind: he did a *Computer Recreations* article generally about this (about simulating orbital mechanics), and the clever bit that I remember is that you dial the time slice way down as objects get close, since you care little about precision when the objects are far apart.

Not a "solution" of course, but certainly an optimization if you're just generally doing gravitational simulations.
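As a hypothetical sketch of that trick (the function name and constants are made up for illustration, not Dewdney's actual code), the step size can simply be scaled by the closest pairwise distance:

```python
import math

def adaptive_dt(positions, base_dt=0.01, scale=0.1):
    """Shrink the time step when any two bodies get close.

    positions: list of (x, y) tuples. base_dt and scale are
    illustrative constants you would tune per simulation.
    """
    rmin = min(math.dist(p, q)
               for i, p in enumerate(positions)
               for q in positions[i + 1:])
    # Step proportional to the closest approach, capped at base_dt.
    return min(base_dt, scale * rmin)

print(adaptive_dt([(0, 0), (1, 0), (0, 5)]))     # 0.01 (far apart: full step)
print(adaptive_dt([(0, 0), (0.02, 0), (0, 5)]))  # close pair: much smaller step
```

Real codes (REBOUND's IAS15, for instance) use more principled error estimates, but the dial-down-when-close intuition is the same.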

If you want a good read that (to summarize it quite tersely) takes the idea of 3- or n-body simulations and goes very far indeed with it (i.e. why there is something rather than nothing), I highly recommend checking out Julian Barbour's The Janus Point.

Also see the source code for the popular ThreeBodyBot [0] as seen on mastodon etc [1]

It contains a numerics tutorial [2] that I found very useful for my use case.

[0] https://github.com/kirklong/ThreeBodyBot

[1] https://botsin.space/@ThreeBodyBot/112200106103679713

[2] https://github.com/kirklong/ThreeBodyBot/blob/master/Numeric... (ipynb)

In this simulation the bodies also collide together. Luckily that never happened in the book.

The book is mis-named. Their system was not 3 bodies, but 4 (3 stars, plus a planet). And the system is so chaotic that even that little planet will make a huge difference over time. And even beyond that, the bodies themselves were transformed, having the ability to tear atmosphere away from each other.

Chaotic N-body behavior appears when the bodies have similar mass. In that scenario, every solution is either chaotic or unstable.

The solar system is, strictly speaking, a 20+ body system. That said, its behavior is fairly predictable because the Sun has almost all of the mass, and Jupiter has almost all of what remains; everything else is a small correction term. We can, to a good approximation, calculate the other bodies' orbits around either the Sun or the center of mass of the Sun-Jupiter system.

I was wondering about this too, but I think they're not colliding. I think they're pulling towards each other and getting tightly pulled around each other so they end up slingshotting and flying back the way they came, in a way that looks like a bounce.

You can see this in the very beginning of the simulation, with the blue and green dot.

Can anyone say if this is actually accurate? It seems like an unintuitive motion to me, but I'm often surprised by how these things work.

I believe you're correct; other (higher-resolution) visualizations of periodic orbits show that "wrapping" behavior more clearly.

example from wikipedia: https://en.wikipedia.org/wiki/Three-body_problem#/media/File...

Luckily??! Could have saved everyone a lot of trouble and kept them in the proper number of dimensions.

Arguably this was all Mao Zedong's fault.

Should a collision lead to a simulation of a 4-body problem?

That might just be because of my low-resolution gif ;)

I've been re-reading the trilogy lately and it also lead me to search for some three-body simulations. I noticed that a lot of them are simulating only a 2D space. Any good (animated) 3D simulations? Best I've been able to find so far are https://demonstrations.wolfram.com/ThreeBodyProblemIn3D/ and https://labs.sense-studios.com/threebody/index.html

isn’t that story a 4-body problem?

Not a physicist, but apparently the planet does not really have enough mass to matter: https://www.reddit.com/r/threebodyproblem/comments/15cgd51/i...

I was wondering why it is called a three-body problem (at least as presented in Liu Cixin's novel). There are actually at least 4 bodies (3 suns and a planet), right? I suppose the planet will affect the movement of the 3 suns.

Comment was deleted :(

Probably a misnomer. And the issue isn't really the planet affecting the movement of the 3 suns (its mass can be considered negligible compared to them) but how the suns affect the movement of the planet. Essentially they're looking for a solution to the restricted 4-body problem rather than the 3-body problem.

No, it's just three. The reason it's a thing – as far as I understand – is that solving analytically for one body is trivial: it's stationary. Solving analytically for two bodies is possible but takes some calculus. Three bodies have chaotic behaviour and need to be simulated.

In the novels there are 3 suns, so it is 4 bodies.

The simulation doesn't get easier with 4, though, so "three-body problem" is still a good name. Also, the planet's mass is almost negligible compared to that of the suns, so I assume simulating (plus occasionally correcting) 3 bodies is already a good approximation.

It’s not a good approximation because (spoilers) the challenge is to identify the location of the planet relative to the suns, not simply to locate the suns themselves. I too wondered at the title.

I noticed that the paper they link to was the first in a long time to find new periodic orbits in the three-body problem, which confirms what I've believed for a long time: nonlinear mechanics is badly under-researched.

There’s a saying in mathematics circles which I’ll butcher here: “everything is either a linear system, reducible to linear systems, or unapproachable.”

Think about how bad we are at analytically solving “simple looking” diff-eqs and the above statement starts to sound too true.

Exactly. Finding a few hundred periodic orbits is a lot of hard work but you don’t have the glory of having “solved” something. Because of that kind of thinking there are many unanswered questions which are ignored because they don’t seem to be part of some masterstroke.

If, like me, you are suddenly curious what would happen if you added a small fourth body: https://youtu.be/WrahPSY9pf0

Nicely done! We were playing with this concept as well, in case it's useful to compare notes together: https://app.elodin.systems/sandbox/hn/three-body

Adding a svelte three-body animation by rich harris https://svelte-cubed.vercel.app/examples/trisolaris

Would be neat if there were planet-level visualization in Space Engine, like in the show, where you see the suns whizzing around and the environment freeze/burn.

This isn't planet level, but shows surface: https://universesandbox.com/

Probably try to implement a much better integrator. Some energy-preserving, higher-order integrator would serve a lot better.
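For anyone curious what that means in practice: the usual starting point is the leapfrog (velocity Verlet) scheme, which is symplectic, so its energy error stays bounded instead of steadily drifting. A minimal sketch for one body around a fixed central mass with G·M = 1:

```python
def accel(x, y):
    """Gravitational acceleration toward a unit mass at the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def leapfrog(x, y, vx, vy, dt, steps):
    """Kick-drift-kick leapfrog (velocity Verlet) integration."""
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax          # half-kick
        vy += 0.5 * dt * ay
        x += dt * vx                 # drift
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax          # half-kick
        vy += 0.5 * dt * ay
    return x, y, vx, vy

# Circular orbit at r = 1 with v = 1: total energy should stay near -0.5.
x, y, vx, vy = leapfrog(1.0, 0.0, 0.0, 1.0, 0.01, 10000)
r = (x * x + y * y) ** 0.5
energy = 0.5 * (vx * vx + vy * vy) - 1.0 / r
print(energy)  # stays close to -0.5 even after many orbits
```

With plain Euler the same run would spiral outward as energy leaks in; the bounded energy error is the whole point of symplectic schemes for long orbital integrations.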

Looks like Pokemon Jirachi

This is nothing like what I remember from the book!

[flagged]

Maybe it's because I made an honest effort at getting a PhD in physics, but I absolutely do not understand this perspective.

Like, yes, we have a really hard time talking about just about anything as a finite object with physical extent, but jokes about frictionless spherical cows moving in simple harmonic motion started in secondary school. The gaps and shortcomings should not come as a surprise. But most of us also hold devices in our pockets that leverage actual quantum phenomena to function at all (diodes of any stripe only work because of quantum transitions). So while it's true that there are a variety of unsolved and potentially unsolvable problems in physics, it's a gross misunderstanding to say that it can barely answer simple questions.

I think about the Born-Oppenheimer approximation a lot, as it's so obviously a hack to make the math tractable at all, but it undergirds basically all of solid state physics.

What in this post isn't honest?

"not honest" might not be the right phrasing. What I was trying to say is that when learning this stuff I felt like they hid a lot of information from me which later surprised me. But they hid it because they have no answers for it.

One simple example is what happens when you don't treat these as points but instead as spheres. Also, what happens when the spheres come close? The math starts breaking down; you start seeing infinities. I.e., in reality spheres come close and gravity doesn't go to infinity.

You are complaining that you studied the simple or simplified cases first, before studying near-unsolvable systems?

Besides, very often the simplified case gets you surprisingly far because the difference between idealized situations and reality is often negligible or at least easily describable - see perturbation theory. The simplified cases are well worth studying.

If I understand correctly, or at least if I map it to my own similar complaint: the problem is not that they have you study simple or simplified cases, it's that the ignored complexity is unacknowledged and sometimes even denied. Which makes a lot of sense in primary school, where even mentioning it might cause some kids to ignore everything because "it's not really how it works" or whatever. But by the time you've made it past the basics, sweeping complexity under the rug is harmful. You still want to be studying the simplified scenarios, but it would be much better if you had some sense of the range of things that meaningfully differ from realistic scenarios. Not so you can take them into account in your solutions, but so you have the appropriate level of humility about what your solutions mean and the limits of their applicability.

I guess I didn't do that much physics, because for me it comes up more in other fields. In statistics, for example, it is critically important to understand the limitations of your results. For example, you might assume that error is normally distributed. You don't want to forget about that assumption, because it is very commonly violated, and it can make a large difference in your conclusions. Yet in school, it was almost always handwaved aside with "Law of Large Numbers mumble mumble mumble". Even when the law didn't apply, or the definition of "Large" happened to be "way bigger than your pathetic number of data points".

It's also why there's often such a gulf between academia and industry. Academic results walk a tightrope of assumptions and preconditions, and trying to put them into practice always finds places where those don't hold. Sometimes they even start out holding, but then everybody takes advantage of it until competition drives everyone into optimizing the residuals. If there's a space where things make sense, competition will always drive you to the edge of that space. Or beyond; competitive pressure does not care about keeping your equations simple and pure. Back to the point, you might study a field for years and then land a job in exactly that field, only to discover that everybody is looking at it completely differently because they've exhausted the simplified space and are deep in the land of heuristics, guesswork, and approximation. The market for spherical steaks was saturated years before.

Well, for physics: you see the three-body problem in the first year, the Roche limit in the second. (Specifically, two spheres coming close does NOT result in infinities; point-like objects do. But then you also learn around the same time that atoms aren't point-like objects, and at nanometer-short ranges you have to start dealing with forces other than gravitation too...)

(I have my own beef with the "sweeping under the rug" which happens with (electromagnetic) pseudovectors, but I do realize that requires a LOT of effort to fix.)

Sounds like an individual experience then. It's a bit of a stretch to blame "the physicists" for this. All of my teachers were very open about the shortcomings of our assumptions and solutions, and while it may be true that not every one of "the physicists", and sometimes even none, is able to handle the complexity of realistic scenarios, I see no shame in working your way up until you get stuck. I can't remember a teacher who was too proud to say that something was too complicated.
