• 0 Posts
  • 6 Comments
Joined 3 months ago
Cake day: March 16th, 2025

  • A lot of people go into physics because they want to learn how the world works, but they are then told not only that this is not the topic of discussion, but that they are actively discouraged from asking the question at all. I think, from a purely pragmatic standpoint, there is no problem with this. As long as the math works, it works. As long as the stuff you build with it functions, then you’ve done a good job. But I think there are some people who get disappointed in that. I guess that’s a matter of personal taste. If you are a pure utilitarian, I cannot construct any argument that would change your mind on such a topic.

    I’m not sure I understand your last question. Of course your opinion on physical reality doesn’t make any difference to reality. The point is that these are different claims and thus cannot all be correct. Either pilot wave people are factually correct that there are pilot waves or they are wrong. Either many worlds people are factually correct that there is a multiverse or they are wrong. Either objective collapse people are factually correct that there is an objective collapse or they are wrong (objective collapse theories also make different predictions, so they are not even empirically equivalent).

    If we are not going to be complete postmodernists, then we have to admit that only one description of physical reality is actually correct, or, at the very least, that if they are all incorrect, some are closer to reality than others. You are basically doing the same thing religious people do when they say there should be no problem believing a God exists as long as that belief doesn’t contradict any of the known scientific laws. While I see where they are coming from, and maybe this is just personal taste, at the end of the day I personally do care whether or not my beliefs are actually correct.

    There is also a benefit to having agreement on how to understand a theory, which is that it then becomes more intuitive. You’re not just told to “shut up and calculate” whenever someone asks a question. If you take a class in general relativity, you will be given a very intuitive mental picture of what’s going on, but if you take a class in quantum mechanics, you will not only not be given one, you will be discouraged from even asking what is going on. You just have to work with the maths in a very abstract and utilitarian sense.


  • No, it’s the lack of agreement that is the problem. Interpreting classical mechanics is philosophical as well, but there is generally agreement on how to think about it. You rarely see deep philosophical debates about how to “properly” interpret Newtonian mechanics. Even when we get into Einsteinian mechanics, there are some disagreements on how to interpret it, but nothing too significant. The thing is that something like Newtonian mechanics is largely in line with our basic intuitions, so it is rather easy to get people on board with it, but QM requires you to give up a basic intuition, and which one you choose to give up gives you an entirely different picture of what’s physically going on.

    Philosophy has never been empirical; of course any philosophical interpretation of the meaning of the mathematics gives you the same empirical results. The empirical results only change if you change the mathematics. The difficulty is precisely that it is much harder to get everyone on the same page about QM. There are, again, technically some disagreements in classical mechanics, like whether the curvature of spacetime really constitutes a substance that is warping, or whether it is just a convenient way to describe how systems are disposed to move. Einstein, for example, criticized reifying the equations too much. You also cannot distinguish which interpretation is correct here, as it’s, again, philosophical.

    If we just all decided to agree on a particular way to interpret QM then there wouldn’t be an issue. The problem is that, while you can mostly get everyone on board with classical theories, with QM, you can interpret it in a time-symmetric way, a relational way, a way with a multiverse, etc, and they all give you drastically different pictures of physical reality. If we did just all pick one and agreed to it, then QM would be in the same boat as classical mechanics: some minor disagreements here and there but most people generally agree with the overall picture.



  • pcalau12i@lemmy.world to memes@lemmy.world · Determinism · edited 15 days ago

    Speaking of predicting outcomes implies a forwards arrow of time. As far as we know, the arrow of time is a macroscopic feature of the universe and just doesn’t exist at a fundamental level. You cannot explain it with entropy without appealing to the past hypothesis, which then requires appealing to the Big Bang, which is in and of itself an appeal to general relativity, something which is not part of quantum mechanics.

    Let’s say we happen to live in a universe where causality is genuinely indifferent to the arrow of time. This doesn’t mean such a universe would have retrocausality, because retrocausality is just causality with the arrow facing backwards. If its causal structure were genuinely independent of the arrow of time, then it would follow what the physicist Emily Adlam refers to as global determinism and an “all-at-once” structure of causality.

    Such a causal model would require the universe’s future and past to follow certain global consistency rules, but neither taken separately would allow you to derive the outcomes of systems deterministically. You would only ever be able to describe the deterministic evolution of a system retrospectively, once you know its initial and final state and can subject both to those consistency rules. Given that science is usually driven by predictive theories, this would be useless for making predictions, as in practice we’re usually interested in predicting the future and not in giving retrospective explanations.

    If the initial conditions aren’t sufficient to constrain the future state to a specific value, then any prediction based on an initial state alone leads to ambiguities, forcing us to predict it probabilistically. And since physicists are very practically minded, everyone would focus on the probabilistic forwards evolution in time, and very few people would be that interested in reconstructing the state of the system retrospectively, as it would have no practical predictive benefit.

    I bring this all up because, as the physicists Ken Wharton, Roderick Sutherland, Titus Amza, Raylor Liu, and James Saslow have pointed out, you can quite easily reconstruct values for all the observables in the evolution of a system retrospectively by analyzing its weak values, and those values appear to evolve entirely locally, deterministically, and continuously. Doing so, however, requires conditioning on both the initial and final state of the system simultaneously and evolving both ends towards an intermediate point to arrive at the value of the observable at that intermediate point in time. You can therefore only do this retrospectively.
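    For reference, the quantity in question is the standard weak value of an observable Â conditioned on a pre-selected state |ψ⟩ and a post-selected state |φ⟩; this is just the textbook Aharonov-Albert-Vaidman expression, not notation taken from the papers above:

$$
A_w = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle}
$$

    Its real part is what gets read off as the value of the observable at the intermediate time, and you can see from the expression itself that both boundary states are needed at once.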

    This is already built into the mathematics; you don’t have to add any additional assumptions. It is basically already a feature of quantum mechanics that if you take a known eigenstate at t=-1 and a known eigenstate at t=+1 and evolve them towards each other until they meet at t=0, then at that intersection you can compute the values of the observables at t=0. Even though the laws of quantum mechanics do not apply sufficient constraints to recover the observables when evolving in a single direction in time, either forwards or backwards, doing both simultaneously gives you sufficient constraints to determine a concrete value.
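    Here is a minimal sketch of that procedure for a single qubit (the Hamiltonian and the boundary states are arbitrary choices, just for illustration): evolve the state known at t = -1 forward to t = 0, evolve the state known at t = +1 backward to t = 0, and combine them into a weak value at t = 0.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Illustrative Hamiltonian driving the evolution (arbitrary choice)
H = 0.5 * sx

def U(t):
    """Time-evolution operator exp(-iHt), with hbar = 1."""
    return expm(-1j * H * t)

# Known state at t = -1 (pre-selection) and at t = +1 (post-selection)
psi_pre = np.array([1, 0], dtype=complex)                 # |0> at t = -1
phi_post = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |+> at t = +1

# Evolve both boundary states toward the intermediate time t = 0
psi_0 = U(1.0) @ psi_pre     # forward evolution from t = -1 to t = 0
phi_0 = U(-1.0) @ phi_post   # backward evolution from t = +1 to t = 0

# Weak value of sigma_z at t = 0, conditioned on both boundary states
weak_value = (phi_0.conj() @ sz @ psi_0) / (phi_0.conj() @ psi_0)
print("weak value of sigma_z at t=0:", weak_value.real)
```

    The number at t = 0 is pinned down only because both boundary states are used at once; evolving either one alone leaves it underdetermined.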

    Of course, there is no practical utility to this, but we should not confuse practicality with reality. Yes, being able to retrospectively reconstruct the system’s local and deterministic evolution is not practically useful, as science is more about future prediction, but we shouldn’t conclude from this practical choice that the system therefore has no deterministic dynamics, that it has no intermediate values, or that when it’s in a superposition of states it has no physical state at all or is literally equivalent to its probability distribution (a spread-out wave in phase space). You are right that reconstructing the history of the system doesn’t help us predict outcomes better, but I don’t agree that it doesn’t help us understand reality better.

    Take all the “paradoxes,” for example, like the Einstein-Podolsky-Rosen paradox or, my favorite, the Frauchiger–Renner paradox. These are conceptual problems about an understanding of reality, and ultimately your answer to them doesn’t change what predictions you make with quantum mechanics in any way. Yet I still think there is some benefit, maybe on a more philosophical level, to giving an answer to those paradoxes. If you reconstruct the history of the systems with weak values, for example, then very simple solutions to these conceptual problems fall out, because you can look directly at how the observables change as the system evolves.

    Not taking retrospection seriously as a tool of analysis leads people to believe in all sorts of bizarre things, like multiverses or physically collapsing wave functions, which all disappear if you just allow retrospection to be a legitimate tool of analysis. It might not be as important as understanding the probabilistic structure of the theory that is needed for predictions, but it can still resolve confusions around the theory and what it implies about physical reality.


  • pcalau12i@lemmy.world to memes@lemmy.world · Determinism · edited 15 days ago

    > According to our current model, we would probably observe un-collapsed quantum field waves, which is a concept inaccessible from within the universe, and could very well just be an artifact of the model instead of ground truth.

    It is so strange to me that this is the popular way people think about quantum mechanics. Without reformulating quantum mechanics in any way or changing any of its postulates, the theory already allows you to recover the intermediate values of all the observables in any system through retrospection, and those values evolve locally and deterministically.

    The “spreading out as a wave” isn’t a physical thing, but an epistemic one. The uncertainty principle makes it such that you can’t accurately predict the outcome of certain interactions, and the probability distribution depends upon the phase, which is the relative orientation between your measurement basis and the property you’re trying to measure. The wave-like statistical behavior arises from the phase, and the wave function is just a statistical tool to keep track of the phase.
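    A toy illustration of that point (one qubit, with the measurement basis fixed to X; the numbers are chosen purely for demonstration): the “interference” term in the statistics is nothing but the relative phase showing up in the probabilities.

```python
import numpy as np

def p_plus(phi):
    """Probability of the '+' outcome when measuring the qubit
    (|0> + e^{i*phi}|1>)/sqrt(2) in the X basis."""
    state = np.array([1, np.exp(1j * phi)]) / np.sqrt(2)
    plus = np.array([1, 1]) / np.sqrt(2)
    return abs(plus.conj() @ state) ** 2

for phi in (0.0, np.pi / 2, np.pi):
    print(f"phi = {phi:.2f}  ->  P(+) = {p_plus(phi):.2f}")

# P(+) = (1 + cos(phi)) / 2: the cosine "interference" term in the
# statistics comes entirely from the relative phase.
```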

    The “collapse” is not a physical process but a measurement update. Measurements aren’t fundamental to quantum mechanics. It is just that when you interact with something, you couple it to the environment, and this coupling leads to the effects of the phase spreading out to many particles in the environment. The spreading out of the influence of the phase dilutes its effects and renders it negligible to the statistics, and so the particle then briefly behaves more classically. That is why measurement causes the interference pattern to disappear in the double-slit experiment, not because of some physical “collapsing waves.”
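    A minimal sketch of that mechanism with one system qubit and a single “environment” qubit (the coupling here is just a CNOT, chosen for illustration): once the system is coupled to the environment, the phase no longer shows up in the system’s own statistics.

```python
import numpy as np

phi = np.pi / 3  # some relative phase carried by the system qubit

# System qubit with a relative phase; environment qubit starts in |0>
system = np.array([1, np.exp(1j * phi)], dtype=complex) / np.sqrt(2)
env = np.array([1, 0], dtype=complex)
joint = np.kron(system, env)

# Couple the system to the environment (CNOT, system as control)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
joint = cnot @ joint

# Reduced state of the system alone: trace out the environment
rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
rho_system = np.trace(rho, axis1=1, axis2=3)
print(np.round(rho_system, 3))

# Before the coupling, the system's density matrix had off-diagonal
# terms carrying the phase; after tracing out the environment they
# are zero, so the phase no longer affects the system's statistics.
```

    That vanishing of the off-diagonal phase terms is the sense in which the statistics briefly look classical after the coupling.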

    People just ignore the fact that you can use weak values to reconstruct the values of the observables through any quantum experiment retrospectively, which is already a feature baked into the theory and not something you need to add, and then instead choose to believe that things are somehow spreading out as waves when you’re not looking at them, which leads to a whole host of paradoxes: the Einstein-Podolsky-Rosen paradox, the Wigner’s friend paradox, the Frauchiger-Renner paradox, etc.

    Literally every paradox disappears if we stop pretending that systems are literally waves and instead recognize that the wave-like behavior is just the result of the relationship between the phase and the statistical distribution of the system, and that the waves are ultimately a weakly emergent phenomenon. We only see particle waves made up of particles. No one has ever seen a wave made up of nothing. Waves of light are made up of photons, and the wave-like behavior of the light is a weakly emergent property of the wave-like statistical distributions you get due to the relationship between the statistical uncertainty and the phase. It in no way implies everything is literally made up of waves that are themselves made of nothing.