Gotta use those quantum numbers for peak 🤌 random randomness
  • quantum nature of the randomly generated numbers helped specifically with quantum computer simulations, but based on your reply you clearly just meant that you were using it as a multi-purpose RNG that is free of unwanted correlations between the randomly generated bits

    It is used as the source of entropy for the simulator. Quantum mechanics is random, so to actually get the results you have to sample it. In quantum computing, this typically involves running the same program tens of thousands of times, which are called "shots," and then forming a distribution of the results. The sampling with the simulator uses the QRNG for the source of entropy, so the sampling results are truly random.
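    To illustrate what "sampling shots" means concretely, here is a tiny sketch in Python. The two-qubit program and its final-state probabilities are made up for illustration, and numpy's default generator stands in for whatever entropy source (the QRNG card, or any other hardware RNG) the simulator is actually configured to use:

    ```python
    import numpy as np

    # Hypothetical final-state probabilities for a 2-qubit program:
    # P(00), P(01), P(10), P(11).
    probs = [0.5, 0.0, 0.0, 0.5]

    # numpy's default generator is just a stand-in for the real entropy source.
    rng = np.random.default_rng()
    shots = rng.choice(["00", "01", "10", "11"], size=10_000, p=probs)

    counts = {s: int((shots == s).sum()) for s in ["00", "01", "10", "11"]}
    print(counts)  # roughly {'00': ~5000, '01': 0, '10': 0, '11': ~5000}
    ```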

    Out of curiosity, have you found that the card works as well as advertised? I ask because it seems to me that any imprecision in the design and/or manufacture of the card could introduce systematic errors in the quantum measurements that would result in correlations in the sampled bits, so I am curious if you have been able to verify that is not something to be concerned about.

    I have tried several hardware random number generators and usually there is no bias, either because they were specifically designed not to have one or because they apply some level of post-processing to remove it. If there is a bias, it is possible to remove it yourself. There are two methods that I tend to use, depending upon the source of the bias.

    To be "random" simply means each bit is statistically independent of each other bit, not necessarily that the outcome is uniform, i.e. 50% chance of 0 and 50% chance of 1. It can still be considered truly random with a non-uniform distribution, such as 52% chance of 0 and 48% chance of 1, as long as each successive bit is entirely independent of any previous bit, i.e. there is no statistical analysis you could ever perform on the bits to improve your chances of predicting the next one beyond the initial distribution of 52%/48%.

    In the case where it is genuinely random (statistical independence) yet is non-uniform (which we can call nondeterministic bias), you can transform it into a uniform distribution using what is known as a von Neumann extractor. This takes advantage of a simple probability rule for statistically independent data whereby Pr(A)Pr(B)=Pr(B)Pr(A). Let's say A=0 and B=1, then Pr(0)Pr(1)=Pr(1)Pr(0). That means you can read two bits at a time rather than one, throw out all results that are 00 and 11, keep only the results that are 01 or 10, and then map 01 to 0 and 10 to 1. You are then mathematically guaranteed that the resulting distribution of bits is perfectly uniform, with a 50% chance of 0 and a 50% chance of 1.
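    A minimal sketch of the extractor in Python, using a simulated biased-but-independent source (the 95%/5% skew mirrors the antenna example below; the figures are just illustrative):

    ```python
    import random

    def von_neumann_extract(bits):
        """Read independent-but-biased bits two at a time: discard 00 and 11,
        map 01 -> 0 and 10 -> 1."""
        out = []
        for a, b in zip(bits[0::2], bits[1::2]):
            if a != b:
                out.append(0 if (a, b) == (0, 1) else 1)
        return out

    # Simulated biased-but-independent source: 95% zeros, 5% ones.
    raw = [1 if random.random() < 0.05 else 0 for _ in range(200_000)]
    unbiased = von_neumann_extract(raw)

    print(sum(raw) / len(raw))            # ~0.05
    print(sum(unbiased) / len(unbiased))  # ~0.50
    print(len(unbiased), "bits kept out of", len(raw))  # most data is discarded
    ```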

    I have used this method to develop my own hardware random number generator that pulls random numbers from the air by analyzing tiny fluctuations in electrical noise in the environment using an antenna. The problem is that electromagnetic waves are not always hitting the antenna, so there can often be long strings of zeros; if you set something like this up, you will find your raw numbers are massively skewed towards zero (like a 95% chance of 0 and a 5% chance of 1). However, since each bit is still truly independent of the next, the extractor will still give you a uniform distribution of 50% 0 and 50% 1.

    One thing to keep in mind, though, is that the bigger the skew, the more data you have to throw out. My antenna-based generator ends up throwing out the vast majority of the data due to the huge bias, so it can be very slow. There are other algorithms which throw out less data, but they can be much more mathematically complicated and require far more resources.

    In the cases where it may not be genuinely random because the bias is caused by some imperfection in the design (which we can call deterministic bias), you can still uniformly distribute the bias across all the bits, so that not only would it be much more difficult to detect the bias, but you will still get uniform results. The way to do this is to take your random number and XOR it with some data set that is non-random but uniform, which you can generate from a pseudorandom number generator like C's rand() function.
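    A minimal sketch of that XOR whitening step in Python. Python's random.Random stands in for C's rand() here; as discussed below, a real deployment would want a CSPRNG (e.g. an AES-based generator) instead:

    ```python
    import random

    def xor_whiten(hw_bits, seed=12345):
        """XOR hardware bits with a pseudorandom stream to spread out any bias.
        random.Random stands in for C's rand(); a real deployment would use a
        CSPRNG (e.g. an AES-based generator) instead."""
        prng = random.Random(seed)
        return [b ^ prng.getrandbits(1) for b in hw_bits]

    # Simulated hardware stream biased 52%/48% towards 0.
    hw = [1 if random.random() < 0.48 else 0 for _ in range(100_000)]
    whitened = xor_whiten(hw)

    print(sum(hw) / len(hw))              # ~0.48
    print(sum(whitened) / len(whitened))  # ~0.50: the bias is hidden, not removed
    ```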

    This will not improve the quality of the random numbers. Let's say it is biased 52% to 48% but you use this method to de-bias it so the distribution is 50% to 50%: if someone can predict the next value of the rand() function, that brings their ability to predict the next bit back up to 52%/48%. You can make this more difficult by using a higher-quality pseudorandom number generator, such as something based on AES. NIST even has standards for this kind of post-processing.

    But ultimately this method is only obfuscation, making it more and more difficult to discover the deterministic bias by hiding it away more cleverly, but it does not truly get rid of it. It is impossible to take a random data set with some deterministic bias and truly get rid of that bias purely through deterministic mathematical transformations. You can only hide it away very cleverly. Only if the bias is nondeterministic can you get rid of it with a mathematical transformation.

    It is impossible to reduce the quality of the random numbers this way. If the entropy source is truly random and truly non-biased, then XORing it with the C rand() function, despite it being a low-quality pseudorandom number generator, is mathematically guaranteed to still output something truly random and non-biased. So there is never harm in doing this.

    However, in my experience, if you find your hardware random number generator is biased (most aren't), the bias usually isn't very large. If something is truly random but biased so that there is a 52% chance of 0 and 48% chance of 1, this isn't enough of a bias to actually cause many issues. You could even use it for something like cryptography, and even if someone did figure out the bias, it would not increase their ability to predict keys enough to actually put anything at risk. If you use a cryptographically secure pseudorandom number generator (CSPRNG) in place of something like C's rand(), they will likely not be able to discover the bias in the first place, as these do a very good job of obfuscating the bias to the point that it will likely be undetectable.

  • Gotta use those quantum numbers for peak 🤌 random randomness
  • I'm not sure what you mean by "turning it into a classical random number." The only point of the card is to make sure that the sampling results from the simulator are truly random, down to a quantum level, and have no deterministic patterns in them. Indeed, actually using quantum optics for this purpose is a bit overkill, as there are hardware random number generators which are not quantum-based and produce something good enough for all practical purposes, like Intel Secure Key Technology, which is built into most modern x86 CPUs.

    For that reason, my software does allow you to select other hardware random number generators. For example, you can easily get an entire build (including the GPU) that can run simulations of 14 qubits for only a few hundred dollars if you just use the Intel Secure Key Technology option. It also supports a much cheaper device called TrueRNGv3 which is a USB device. It also has an option to use a pseudorandom number generator if you're not that interested in randomness accuracy, and when using the pseudorandom number generator option it also supports "hidden variables" which really just act as the seed to the pseudorandom number generator.

    For most practical purposes, no, you do not need this card and it's definitely overkill. The main reason I even bought it was that I was adding support for hardware random number generators to my software and wanted to support a quantum one, so I needed to buy it to actually test it and make sure it works. But now I use it regularly as the back-end to my simulator just because I think it is neat.

  • Gotta use those quantum numbers for peak 🤌 random randomness
  • I own a quantum random number generator on a PCIe card that uses optical effects for random number generation. It cost me over $2000. I use it for quantum computer simulations.

  • You'll never see it coming
    By applying both that and the many worlds hypothesis, the idea of quantum immortality comes up, and that's a real mind bender. It's also a way to verifiably prove many worlds accurate (afaik the only way)

    MWI only somewhat makes sense (it still doesn't make much sense) if you assume the "branches" cannot communicate with each other after decoherence occurs. "Quantum immortality" mysticism assumes somehow your cognitive functions can hop between decoherent branches where you are still alive if they cease in a particular branch. It is self-contradictory. There is nothing in the mathematical model that would predict this and there is no mechanism to explain how it could occur.

    Imagine creating a clone which is clearly not the same entity as you because it is standing in a different location and, due to occupying different frames of reference, your paths would diverge after the initial cloning, with the clone forming different memories and such. "Quantum immortality" would be as absurd as saying that if you then suddenly died, your cognitive processes would hop to your clone, you would "take over their body" so to speak.

    Why would that occur? What possible mechanism would cause it? Doesn't make any sense to me. It seems more reasonable to presume that if you die, you just die. Your clone lives on, but you don't. In the grand multiverse maybe there is a clone of you that is still alive, but that universe is not the one you occupy, in this one your story ends.

    It also has a problem similar to reincarnation mysticism. If MWI is correct (it's not), then there would be an infinite number of other decoherent branches containing other "yous." Which "you" would your consciousness hop into when you die, assuming this even does occur (it doesn't)? It makes zero sense.

    To reiterate though, assuming many worlds is accurate, the experiment carries no risk to you. Due to the anthropic principle, you will always find yourself in the reality in which you survive.

    You see the issue right here: you say "the reality in which you survive," except there would be an infinite number of them. There would be no "the" reality, there would be "a" reality, just one of an infinitude of them. Yet, how is the particular one you find yourself in decided?

    MWI is even worse than the clone analogy I gave, because it would be like saying there are an infinite number of clones of you, and when you die your cognitive processes hop from your own brain to one of theirs. Not only is there no mechanism to cause this, but even if we presume it is true, which one of your infinite number of clones would your cognitive processes take control of?

  • If the universe exists for an infinite amount of time, is death still truly oblivion and eternal?
  • I disagree, experience is very subjective. You can not convey what it feels like to exist with quantifiable data. No amount of information is sufficient to impart the sensation of seeing the color red on another observer without them actually experiencing it.

    None of this establishes that it is subjective in the slightest. The reality we experience just is. Of course it is not equivalent to quantifiable data. If I go see the Shanghai Tower in person, versus looking at a picture or a written description of the Shanghai Tower, of course the real thing is categorically different from the picture or the description. How does that demonstrate the real thing is "subjective"?

    The real thing is not subjective, but it is perspective-dependent. The physical sciences allow us to describe all possible perspectives, as both general relativity and relational quantum mechanics are perspective-dependent theories. But there is a categorical distinction between a description of a perspective and the reality of a perspective.

    No matter how detailed a description of fire becomes, the paper it is written on will not suddenly burst into flames, as if it becomes a real fire. The reality of a thing and the description of a thing are always distinctly different. The physical sciences are descriptive: we can describe all possible perspectives, but there is still a categorical distinction between a description of a perspective and the reality of actually occupying that perspective.

    It makes no sense to ask how to quantify the reality we experience, and the same goes for qualifying it. Reality just is what it is. When we assign quantities and qualities to it, we are moving beyond reality and into interpretation of reality. Reality does have the property that it is capable of being quantified and qualified, but the specific quantities and qualities we choose depend quite a bit on contextual factors and only make sense in relation to social institutions, as all object-labels are socially constructed norms.

    This is, again, true for all objects. There is no reason to separate "the experience of seeing color" from any other experiential realities, such as "the experience of seeing a cat" or "the experience of seeing a triangle." Perspectives are defined in terms of physical systems, and so by definition two different physical systems occupy different slices of reality from two different perspectives. The only way to make them share the same perspective would be to make them the same object, but then they would no longer share the same perspective, because the original two objects would no longer even exist, definitionally.

    It is just fallacious to jump from reality being perspective-dependent to it being subject-dependent. You have not actually established some fundamental role for subjects here. Again, the physical sciences allow us to describe reality from all possible perspectives of all physical objects, so there is no physical reason to state that reality only exists from human perspectives. If you want to point out the fact that you only occupy the reality of your own perspective and thus cannot actually verify the reality of other perspectives described by the physical sciences, sure, but this is also true of other people. You cannot occupy, as a matter of definition, the perspective of other human beings, so you would be forced to conclude that the slices of reality corresponding to other human perspectives don't exist, either, i.e. devolving into solipsism.

    Are you a solipsist? I guess I never actually asked.

    I’m saying maybe the consciousness itself briefly exists in a superposition, not the entire mass of the brain.

    If consciousness is a quantifiable object (which it would have to be in order to be in a superposition of states, since that is a mathematical statement) then you should be able to give me a definition of consciousness I can quantify. You have yet to do so.

    If for some weird happenstance two copies of your mind existed at once, then your consciousness would briefly be in a superposition of two locations.

    You have no mechanism for this to actually occur. You are just devolving into complete quantum mysticism, believing if you abuse terminology from quantum theory then suddenly it gives it legitimacy. It does not.

    Stating that if two identical objects exist simultaneously they would be in a superposition of states is making a very specific, quantifiable physical claim, yet you have not even attempted to explain a possible physical mechanism for it.

    You seem to fundamentally disagree that subjective experience even exists, so I’m not sure if you’re still following, but my thinking is that the qualia is in essence literally just the physical system that makes up my brain functioning correctly.

    No, qualia is just a category of objects. Things like "redness" or "loudness," these are socially constructed norms we use to identify aspects of reality in a way that allow us to communicate them to other people. There is nothing special about "qualia" over any other category of objects, such as mathematical objects or physical objects. Experience itself is not a category of objects, it is not "qualia," nor is it "subjective." What we experience is just reality as it exists from our own perspective.

  • If the universe exists for an infinite amount of time, is death still truly oblivion and eternal?
  • Simultaneity does exist in general relativity, I didn’t say it didn’t. I said it doesn’t exist for things separated by vast distances in spacetime, and that’s true. There is no simultaneity for two entities separated by an event horizon.

    An event horizon has to do with black holes, which are not relevant here; I assume you are talking about the cosmological horizon. Nothing in GR prevents you from defining, from a particular frame of reference, events that are simultaneous with events beyond the cosmological horizon. If you do define such events, well, of course you could not perceive anything beyond the cosmological horizon, so you might argue it's "metaphysical" so to speak. But, again, this is also true for something that exists in the future: it is also not observable.

    I don’t know what consciousness, as in qualia

    Qualia is just a category of objects. Redness, loudness, etc. All objects are socially constructed norms used to judge reality to be something. There's nothing special about one set of objects over another, as if objects of qualia require a special explanation that physical objects like trees and cats do not, or mathematical objects like circles and triangles.

    subjective experience

    Experience is not subjective.

    why can’t it be in a superposition of two locations if two conscious instances of my brain exist in the same area?

    Particles have a wavelength associated with them, called the de Broglie wavelength, that depends upon their momentum (mass times velocity), and this represents the statistical spread of the position of the particle. A superposition of states is really just a list of probability amplitudes representing the likelihoods of where the particle may show up. If the statistical spread (determined by the de Broglie wavelength) is too narrow then it would be basically impossible to get the object to be noticeably in a superposition of two different locations, while if the statistical spread is very large then it would be very easy.

    The de Broglie wavelength gets narrower the more massive an object is (for a given velocity). That means for any macroscopic object the statistical spread is just too small to place its position into a superposition of states. Massive objects like a human brain simply cannot be in a superposition of positions with another brain. The closest you could get is a kind of Schrödinger's cat type scenario whereby the brain is entangled with another event that determines its trajectory, but I see no physical mechanism that would establish something like this between these two copies of "you."
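    To get a feel for the scale involved, here is a back-of-the-envelope calculation using the de Broglie relation λ = h/(mv). The masses and the speed are just illustrative values I picked, not anything from the discussion above:

    ```python
    h = 6.626e-34  # Planck's constant in J*s

    def de_broglie_wavelength(mass_kg, speed_m_s):
        """lambda = h / (m * v), the length scale of the quantum spread in position."""
        return h / (mass_kg * speed_m_s)

    # An electron vs a ~1.4 kg brain, both drifting at 1 mm/s (illustrative numbers only).
    print(de_broglie_wavelength(9.11e-31, 1e-3))  # ~0.7 m: easily "smeared out"
    print(de_broglie_wavelength(1.4, 1e-3))       # ~5e-31 m: absurdly narrow
    ```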

  • If the universe exists for an infinite amount of time, is death still truly oblivion and eternal?
  • Simultaneity does exist in general relativity, it's just relative. If your clone doesn't exist because they lie beyond the observable horizon, well, you can't observe things in the future either, so what's the point? My point was that there's not an obvious reason to say a clone existing at the same time as you is indeed a clone but a clone existing at a different time is actually "you." To me, it makes more sense to say in both cases they are clones. But you seem to be saying that they are actually both "you"? Even if they exist at the same time? What about in similar locations as well, such as standing next to each other?

    Also, I do not believe in "subjective experience" nor do I believe in "consciousness." It's not true that "we know so little about consciousness" because there is nothing to know about "consciousness" as it's largely a mystical buzzword. There are plenty of things we don't understand about the human brain, like intelligence, but we are gradually making progress in these fields. "Consciousness" is largely meaningless, though, and so it cannot be studied as there is nothing to even know about it, as it doesn't refer to anything real.

    I have no idea why you are bringing superposition into this despite it having no relevance here.

  • If the universe exists for an infinite amount of time, is death still truly oblivion and eternal?
  • I used to consider myself a dialectical materialist, but I moved away from it because dialectical materialists don’t offer compelling answers to the "hard problem." After obsessively studying this issue in depth, I’ve become convinced that the "hard problem" results from a flawed philosophical view of reality known as metaphysical realism. The phrase "subjective experience" only makes sense under this framework, where reality is presumed to exist independently of what we perceive.

    Metaphysical realism dominates philosophical discourse, creating a false dichotomy between it and idealism. Bogdanov, unlike Lenin, rejected metaphysical realism by arguing that we directly perceive reality, not a "reflection" of it or some illusion created by the brain. Lenin, by accepting metaphysical realism, incorrectly accused Bogdanov of idealism, failing to grasp that Bogdanov wasn’t claiming reality is created by the mind but that perception is material reality from our frame of reference.

    This is why describing perception as "subjective" only makes sense if you assume there’s an unknowable "thing-in-itself" beyond perception. Thomas Nagel's argument, in "What is it like to be a Bat?" assumes that objective reality is independent of perspective, but modern physics—relativity and relational quantum mechanics—shows that properties depend on perspective. There is no perspective-independent reality. Therefore, perceiving reality from a particular perspective does not imply that what we perceive is unreal or a product of the mind or "consciousness," but rather that it is reality as it really is.

    Jocelyn Benoist’s contextual realism replaces the term "subjective" with "contextual." Experience isn’t subject-dependent (implying it only exists in conscious minds) but context-dependent, meaning it only exists under specific real-world conditions. For example, a cat in abstraction isn’t real, but a cat pointed out in a specific context is. Benoist argues that objects only exist meaningfully within contexts in which they are realized.

    Kant argued that appearances imply a "thing-in-itself" beyond them, but Benoist flips this: if we reject the noumenon, it no longer makes sense to talk about appearances. What we perceive isn’t an "appearance" of something deeper—it just is what it is. This distinction between phenomenon and noumenon collapses, and idealism is rejected as incoherent, as it still insists upon treating perception as phenomenological despite rejecting the very basis of that phenomenology.

    Thus, the "hard problem" is not a genuine issue but an artifact of metaphysical realism. Frameworks like contextual realism (Benoist), empiriomonism (Bogdanov), or weak realism (Rovelli) do not encounter this problem because they reject the premise of an unknowable, hidden reality beyond perception. Dialectical materialists, despite claiming to oppose metaphysics, still cling to metaphysical realism by positing an invisible reality beyond experience. Most tend to make a distinction between "reality" and "reflected reality" whereby only the latter is perceptual. This inevitably leads to contradictions because, if one assumes such a gap exists between reality and what we observe as an a priori premise, they cannot bridge the gap later without contradicting themselves.

    When I first read Dialectics of Nature, I heavily interpreted Engels as actually thinking along these lines. Similarly, Evald Ilyenkov’s Dialectical Logic also discussed how Feuerbach showed the mind-body problem (essentially the same as the "hard problem") arises only if you assume a gap between perception and reality. Rather than resolving it with argument, you must abandon the premise of such a gap altogether.

    However, I later realized my interpretation was rare. Most dialectical materialists, including Lenin in Materialism and Empirio-Criticism, cling to metaphysical realism, perpetuating the very dualism that creates the "hard problem" in the first place. I am not the first one to point this out; if you read Carlo Rovelli's Helgoland, he has a chapter specifically on the Lenin and Bogdanov disagreement. Honestly, I think dialectical materialism would be far more consistent if it abandoned this gap at its foundations. I mean, you see weird contradictions in some diamat literature where they talk about how things only exist in their "interconnections between other things" but then also defend the thing-in-itself as a meaningful concept, which to me seems self-contradictory.

  • If the universe exists for an infinite amount of time, is death still truly oblivion and eternal?
  • We tend to define physical objects in a way that gives them spatial and temporal boundaries. That means if I point to a particular physical object, you can usually draw a rough border around it, as well as talk about when it came into existence and when it goes away. The boundaries are never really solid and there's usually some ambiguity to them, but it's hard to even know what is being talked about without these fuzzy boundaries.

    For example, if I point to a cat and say "that's a cat," you generally understand the rough shape of a cat and thus have a rough outline, and this helps you look for where it's located. If there is a water bowl next to the cat, you immediately know the bowl is not the cat and is not what I'm talking about, because it's not within those borders. These borders are, again, a bit fuzzy; if you zoom in to a microscopic scale it becomes less clear where the cat begins and where it ends. But these fuzzy borders are still important, because without them, if I said "look at that cat over there," you could never figure out what I'm talking about: you would have no concept of the border of the cat at all, which is necessary to locate it.

    It is also necessary to our understanding that these boundaries evolve continuously. If a building was demolished, and then a century later someone inspired by it builds another with the same plans, even if it's in the same location, we would typically not think of it as literally the same building, because they did not exist at the same times, i.e. their temporal boundaries do not overlap as there is a discontinuous gap between them. If a cat is located at one position and then later at another, its boundaries have moved, but this movement is continuous; it's not as if the cat teleported from one point to the next.

    But this is precisely why people find the teletransportation paradox difficult to grapple with. What if the cat did teleport from one location to the next such that the original cat is destroyed and its information is used to reconstruct it elsewhere? Is it still the same cat? How we define objects is ultimately arbitrary so you can say either yes or no, but personally I think it's more consistent to say no.

    Consider if the teleporter succeeded in reconstructing the cat, but due to a malfunction, it failed to destroy the original cat. Now you have two. It seems rather obvious to me that, if this were to occur, what you have is a clone and not the original cat. They are obviously different beings with different perspectives of the world as they would each see out of their own eyes separately. If I cloned myself, I would not see out of my clone's eyes, so it is clearly not the same object as myself.

    I would be inclined to thus say the person who exists "countless eons" after you died would at best be considered a clone of yourself and not actually yourself. Your temporal boundaries do not overlap, there is no continuous transition from the "you" of the past and the "you" eons later, so they are not the same objects. They are different people.

    Sure, if we assume the universe can exist eternally (a big assumption, but let's go with it), then if enough time passes, a perfect clone of yourself would be certain to exist. Yet, if we're assuming the universe can exist for that long, why not also assume the universe is spatially infinite as well? We have no reason to suspect that if you kept traveling in one direction long enough, that you would somehow stop discovering new galaxies. As far as we know, they go on forever.

    Hence, if you kept traveling in one direction far enough, you would also eventually find a perfect clone of yourself, which would actually exist at the same time as you right now. If we were to accept that the clone of yourself in the far future is the same object as you, wouldn't you also have to conclude that the clone at a far distance from you is the same object as you? I find this to be rather strange because, again, you do not see out of your clone's eyes; it's clearly a different person. I would thus be inclined to say neither is "you." One does not spatially overlap you (it exists at the same time but in a different location) and the other does not temporally overlap you (it could possibly even exist in the same location, but definitely not at the same time).

    It thus seems more consistent to me to say both are merely clones and thus not the same object. It would be a different person who just so happens to look like you but is not you.

  • Engineers at Northwestern University have made a significant breakthrough by demonstrating quantum teleportation over a fiber optic cable that is already carrying regular Internet traffic.
  • Isn’t the quantum communication (if it were possible) supposed to be actually instantaneous, not just “nearly instantaneous”?

    There is no instantaneous information transfer ("nonlocality") in quantum mechanics. You can prove this with the No-communication Theorem. Quantum theory is a statistical theory, so predictions are made in terms of probabilities, and the No-communication Theorem is a relatively simple proof that no physical interaction with a particle in an entangled pair can alter the probabilities of the other particle it is entangled with.

    (It's actually a bit broader than this, as it shows that no interaction with a particle in an entangled pair can alter the reduced density matrix of the other particle it is entangled with. The density matrix captures not only the probabilities but also the ability of the particle to exhibit interference effects.)
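    This is easy to check numerically. Below is a plain numpy sketch of my own (illustrative, not from any particular textbook): it prepares a Bell pair, applies an arbitrary local rotation plus a non-selective measurement to particle A, and shows that particle B's reduced density matrix is unchanged:

    ```python
    import numpy as np

    # Bell state |Φ+> = (|00> + |11>)/sqrt(2); qubit A is the first factor, B the second.
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

    def reduced_density_matrix_B(state):
        """Trace out qubit A from a (possibly unnormalized) two-qubit pure state."""
        psi = state.reshape(2, 2)  # psi[a, b] is the amplitude of |a>|b>
        return sum(np.outer(psi[a, :], psi[a, :].conj()) for a in range(2))

    # A local operation on qubit A: an arbitrary rotation followed by a
    # non-selective measurement in the computational basis.
    theta = 1.23
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]], dtype=complex)
    rotated = np.kron(U, np.eye(2)) @ bell

    P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))  # projector: A measured as 0
    P1 = np.kron(np.diag([0.0, 1.0]), np.eye(2))  # projector: A measured as 1
    rho_B_after = (reduced_density_matrix_B(P0 @ rotated) +
                   reduced_density_matrix_B(P1 @ rotated))
    # (Branches are left unnormalized, so their weights are the outcome probabilities.)

    print(np.round(reduced_density_matrix_B(bell), 6))  # I/2
    print(np.round(rho_B_after, 6))                     # still I/2: B's statistics unchanged
    ```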

    The speed of light limit is a fundamental property of special relativity, and if quantum theory violated this limit then it would be incompatible with special relativity. Yet, it is compatible with it and the two have been unified under the framework of quantum field theory.

    There are two main confusions as to why people falsely think there is anything nonlocal in quantum theory, stemming from Bell's theorem and the EPR paradox. I tried to briefly summarize these two in this article here. But to even more briefly summarize...

    People falsely think Bell's theorem proves there is "nonlocality" but it only proves there is nonlocality if you were to replace quantum theory with a hidden variable theory. It is important to stress that quantum theory is not a hidden variable theory and so there is nothing nonlocal about it and Bell's theorem just is not applicable.

    The EPR paradox is more of a philosophical argument that equates eigenstates with the ontology of the system; that equation leads to the appearance of nonlocal action, but only because the assumption is a bad one. Relational quantum mechanics, for example, uses a different assumption about the relationship between the mathematics and the ontology of the system and does not run into this.

  • Quantum Teleportation Achieved Over Internet For First Time
  • No, quantum teleportation is more akin to Star Trek teleportation whereby you disassemble the original object, transmit the information, then rebuild it using a different medium.

    (More technically, you apply an operation to a qubit which is non-reversible, so its original state is lost if it was not already known, but you gain enough information from this process to transmit over a classical channel, which the recipient can then use to apply operations to a qubit they have, placing it in the same quantum state as the original qubit.)
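    If it helps, here is a bare-bones state-vector simulation of that protocol in numpy (my own sketch, with an arbitrary made-up input state). It walks through the Bell measurement, the two classical bits, and the recipient's corrections:

    ```python
    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    def kron3(a, b, c):
        return np.kron(a, np.kron(b, c))

    def cnot(n, control, target):
        """CNOT on an n-qubit register; qubit 0 is the leftmost tensor factor."""
        dim = 2 ** n
        U = np.zeros((dim, dim), dtype=complex)
        for i in range(dim):
            bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
            if bits[control]:
                bits[target] ^= 1
            j = sum(bit << (n - 1 - k) for k, bit in enumerate(bits))
            U[j, i] = 1
        return U

    # Qubit 0 holds the state to teleport; qubits 1 (sender) and 2 (recipient)
    # start in |0> and get turned into a shared Bell pair.
    psi = np.array([0.6, 0.8j], dtype=complex)  # arbitrary state, unknown to the recipient
    zero = np.array([1, 0], dtype=complex)
    state = np.kron(psi, np.kron(zero, zero))
    state = cnot(3, 1, 2) @ (kron3(I2, H, I2) @ state)  # Bell pair on qubits 1, 2

    # Sender's (non-reversible) Bell measurement on qubits 0 and 1.
    state = kron3(H, I2, I2) @ (cnot(3, 0, 1) @ state)
    probs = np.abs(state) ** 2
    outcome = np.random.choice(8, p=probs)
    m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1  # the two classical bits to transmit

    # Collapse onto the measured branch (qubit 2 is untouched by this).
    keep = [i for i in range(8) if ((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1]
    collapsed = np.zeros(8, dtype=complex)
    collapsed[keep] = state[keep]
    collapsed /= np.linalg.norm(collapsed)

    # Recipient applies corrections chosen by the two classical bits.
    if m1:
        collapsed = kron3(I2, I2, X) @ collapsed
    if m0:
        collapsed = kron3(I2, I2, Z) @ collapsed

    # Qubit 2 now holds the original state.
    recovered = np.array([collapsed[(m0 << 2) | (m1 << 1) | b] for b in (0, 1)])
    print(np.allclose(recovered, psi))  # True
    ```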

    The middle step here requires the transmission of information over a classical communication channel, and so it can't be used to send signals faster than light.

    (I would go as far as to argue there is nothing nonlocal in quantum mechanics at all and the belief there is anything nonlocal is a misunderstanding. I wrote an article here on it that is more meant for laymen, and another article here that is more technical.)

    There is a communication-related benefit to quantum teleportation, but not for superluminal communication. Let's say you have a qubit you want to transmit, but your quantum communication channel is very noisy. Using quantum teleportation will allow you to bypass it because you can transmit the information classically, and classical communication channels tend to be very robust to noise.

    (The algorithm requires a Bell pair to be shared by the sender and receiver in order to carry it out, and so this might seem like it defeats the purpose of bypassing the quantum communication channel. However, you can establish a high-quality Bell pair over a noisy channel using a process known as entanglement distillation.)

  • When did you first gain consciousness?
  • I haven't yet.

  • I'm literally a thinking lump of fat
  • Depends upon what you mean by "consciousness." A lot of the literature seems to use "consciousness" just to refer to physical reality as it exists from a particular perspective, for some reason. For example, one popular definition is "what it is like to be in a particular perspective." The term "to be" refers to, well, being, which refers to, well, reality. So we are just talking about reality as it actually exists from a particular perspective, as opposed to mere description of reality from that perspective. (The description of a thing is always categorically different from the ontology of the thing.)

    I find it bizarre to call this "consciousness," but words are words. You can define them however you wish. If we define "consciousness" in this sense, as many philosophers do, then it does not make logical sense to speak of your "consciousness" doing anything at all after you die, as your "consciousness" would just be defined as reality as it actually exists from your perspective. Perspectives always implicitly entail a physical object that is at the basis of that perspective, akin to the zero-point of a coordinate system, which in this case that object is you.

    If you cease to exist, then your perspective ceases to even be defined. The concept of "your perspective" would no longer even be meaningful. It would be kind of like if a navigator kept telling you to go "more north" until eventually you reach the north pole, and then they tell you to go "more north" yet again. You'd be confused, because "more north" does not even make sense anymore at the north pole. The term ceases to be meaningfully applicable. If consciousness is defined as being from a particular perspective (as many philosophers in the literature define it), then by logical necessity the term ceases to be meaningful after the object that is the basis of that perspective ceases to exist. It neither exists nor ceases to exist, but no longer is even well-defined.

    But, like I said, I'm not a fan of defining "consciousness" in this way, albeit it is popular to do so in the literature. My criticism of the "what it is like to be" definition is mainly that most people tend to associate "consciousness" with mammalian brains, yet the definition is so broad that there is no logical reason as to why it should not be applicable to even a single fundamental particle.

  • I'm literally a thinking lump of fat
  • This problem presupposes metaphysical realism, so you have to be a metaphysical realist to take the problem seriously. Metaphysical realism is a particular kind of indirect realism whereby you posit that everything we observe is in some sense not real, sometimes likened to a kind of "illusion" created by the mammalian brain (I've also seen people describe it as an "internal simulation"), called "consciousness" or sometimes "subjective experience," with the adjective "subjective" used to make it clear that it is being interpreted as something unique to conscious subjects and not ontologically real.

    If everything we observe is in some sense not reality, then "true" reality must by definition be independent of what we observe. If this is the case, then it opens up a whole bunch of confusing philosophical problems, as it would logically mean the entire universe is invisible/unobservable/nonexperiential, except in the precise configuration of matter in the human brain which somehow "gives rise to" this property of visibility/observability/experience. It seems difficult to explain this without just presupposing this property arbitrarily attaches itself to brains in a particular configuration, i.e. to treat it as strongly emergent, which is effectively just dualism, indeed the founder of the "hard problem of consciousness" is a self-described dualist.

    This philosophical problem does not exist in direct realist schools of philosophy, however, such as Jocelyn Benoist's contextual realism, Carlo Rovelli's weak realism, or Alexander Bogdanov's empiriomonism. It is solely a philosophical problem for metaphysical realists, because they begin by positing that there exists some fundamental gap between what we observe and "true" reality, then later have to figure out how to mend the gap. Direct realist philosophies never posit this gap in the first place and treat reality as precisely equivalent to what we observe it to be, so they simply do not posit the existence of "consciousness," and it would seem odd from a direct realist standpoint to even call experience "subjective."

    The "hard problem" and the "mind-body problem" are the main reasons I consider myself a direct realist. I find that it is a completely insoluble contradiction at the heart of metaphysical realism, I don't think it even can be solved because you cannot posit a fundamental gap and then mend the gap later without contradicting yourself. There has to be no gap from the get-go. I see these "problems" as not things to be "solved," but just a proof-by-contradiction that metaphysical realism is incorrect. All the arguments against direct realism, on the other hand, are very weak and people who espouse them don't seem to give them much thought.

  • Google says its new quantum chip indicates that multiple universes exist
  • There is a strange phenomenon in academia of physicists so distraught over the fact that quantum mechanics is probabilistic that they invent a whole multiverse to get around it.

    Let's say a photon hits a beam splitter and has a 25% chance of being reflected and a 75% chance of passing through. You could make this prediction deterministic if you claim the universe branches off into a grand multiverse where in 25% of the branches the photon is reflected and in 75% of the branches it passes through. The multiverse would branch off in this way with the same structure every single time, guaranteed.

    Believe it or not, while they are a minority opinion, there are quite a few academics who unironically promote this idea just because they like that it restores determinism to the equations. One of them is David Deutsch who, to my knowledge, was the first to publish a paper arguing that he believed quantum computers delegate subtasks to branches of the multiverse.

    It's just not true at all that the quantum chip gives any evidence for the multiverse, because believing in the multiverse does not make any new predictions. No one who proposes this multiverse view (called the Many-Worlds Interpretation) actually believes the other branches of the multiverse would be detectable. It is something purely philosophical in order to restore determinism, and so there is no test you could do to confirm it. If you believe the outcomes of experiments are just random and there is one universe, you would also predict that we can build quantum computers, so the invention of quantum computers in no way proves a multiverse.

  • Hartmut Neven, the founder and lead at Google Quantum AI, says Google's new Willow quantum chip is so fast it may be borrowing computational power from other universes in the multiverse.
  • It does not lend credence to the notion at all; that statement doesn't even make sense. Quantum computing is in line with the predictions of quantum mechanics. It is not new physics, it is engineering, the implementation of physics we already know to build stuff, so it does not even make sense to suggest that engineering something is "discovering" something fundamentally new about nature.

    MWI is just a philosophical worldview from people who dislike that quantum theory is random. Outcomes of experiments are nondeterministic. Bell's theorem proves you cannot simply interpret the nondeterminism as chaos, because any attempt to introduce a deterministic outcome at all would violate other known laws of physics, so you have to just accept it is nondeterministic.

    MWI proponents, who really dislike nondeterminism (for some reason I don't particularly understand), came up with a "clever" workaround. Rather than interpreting probability distributions as just that, probability distributions, you instead interpret them as physical objects in an infinite-dimensional space. Let's say I flip two coins, so the possible outcomes are HH, HT, TH, and TT, and you can assign a probability value to each. Rather than interpreting the probability values as the likelihood of events occurring, you interpret the "faceness" property of the coins as a multi-dimensional property that is physically "stretched" across four dimensions, where the amount it is "stretched" depends upon those values. For example, if the probabilities are 25% HH, 0% HT, 25% TH, and 50% TT, you interpret it as if the coins' "faceness" property is physically stretched out in four physical dimensions of 0.25 HH, 0 HT, 0.25 TH, and 0.5 TT.

    Of course, in real quantum mechanics, it gets even more complicated than this, because probability amplitudes are complex-valued, so you have an additional degree of freedom; this would be an eight-dimensional physical space the "quantum" coins (like electron spin states) would be stretched out in. Additionally, notice how the number of dimensions depends upon the number of possible outcomes, which grows exponentially as 2^N with the number of coins under consideration. MWI proponents thus posit that each description like this is actually just a limited description due to a limited perspective. In reality, the dimensions of this physical space would be 2^N where N = the number of possible states of all particles in the entire universe, so basically infinite. The whole universe is a single giant infinite-dimensional object propagating through this infinite-dimensional space, something they call the "universal wave function."

    If you believe this, then it kind of restores determinism. If there is a 50% probability a photon will reflect off of a beam splitter and a 50% probability it will pass through, what MWI argues is that there is in fact a 100% chance it will pass through and be reflected simultaneously, because it basically is stretched out in proportions of 0.5 going both directions. When the observer goes to observe it, the observer themselves also gets stretched out in those proportions, simultaneously seeing it pass through and be reflected. Since this outcome is guaranteed, it is deterministic.

    But why do we only perceive a single outcome? MWI proponents chalk it up to how our consciousness interprets the world, that it forms models based on a limited perspective, and these perspectives become separated from each other in the universal wave function during a process known as decoherence. This leads to an illusion that only a single perspective can be seen at a time, that even though the human observer is actually stretched out across all possible outcomes, they only believe they can perceive one of them at a time, and which one we settle on is random, I guess kind of like the blue-black/white-gold dress thing, your brain just kind of picks one at random, but the randomness is apparent rather than real.

    This whole story really is not necessary if you are just fine with saying the outcome is random. There is nothing about quantum computers that changes this story. David Deutsch has a bad habit of publishing embarrassingly bad papers in favor of MWI. In one paper he defends MWI with a false dichotomy, pitching MWI as if its only competition is Copenhagen, then straw-manning Copenhagen by equating it to an objective collapse model, a characterization no supporter of that interpretation I am aware of would ever agree to.

    In another paper, where he brings up quantum computing, he basically just argues that MWI must be right because it gives a more intuitive understanding of how quantum computing actually provides an advantage, namely that it delegates subtasks to different branches of the multiverse. It's bizarre to me how anyone could think something being "intuitive" or not (it's debatable whether it even is more intuitive) is evidence in favor of it. At best, it is an argument in favor of utility: if you personally find MWI intuitive (I don't) and it helps you solve problems, then have at ya, but pretending this is somehow evidence that there really is a multiverse makes no sense.

  • quantum computing breakthrough from google.
  • Yes, quantum computers can only break a certain class of asymmetric ciphers, but we already have replacements, called lattice-based cryptography, which not even quantum computers can break. NIST even has source code on their website you can download for programs that implement some of these ciphers. We already have standards for quantum-resistant cryptography. Most companies have not switched over since it's slower, but I know some VPN programs claim to have implemented them.

  • quantum computing breakthrough from google.
  • To put it as simply as possible, in quantum mechanics the outcome of events is random, but unlike classical probability theory, you can express probabilities as complex numbers. For example, it makes sense in quantum mechanics to say an event has a -70.7i% chance of occurring. This is a bit cumbersome to explain, but the purpose of this is that there is a relationship between [the relative orientation between the measurement settings and the physical system being measured] and [the probability of measuring a particular outcome]. Using complex numbers gives you the additional degrees of freedom needed to represent both of these things simultaneously and thus relate them together.

    In classical probability theory, since probabilities are only between 0% and 100%, they can only accumulate, while the fact that probabilities in quantum mechanics can be negative allows them to cancel each other out. The likelihood of one event, rather than adding onto another, can effectively subtract from it if it is negative, giving you a total chance of 0% of it occurring. This is known as destructive interference and is pretty much the hallmark effect of quantum mechanics. Even entanglement is really just interference between statistically correlated systems.

    If you have seen the double-slit experiment, the particle has some probability of going through one slit or the other, and depending on which slit it goes through, it will have some probability of landing somewhere on the screen. You can compute these two possible paths separately and get two separate probability distributions for where it will land on the screen, which would look like two blobs of possible locations. However, since you do not know which slit it will pass through, to compute the final distribution you need to overlap those two probability distributions, effectively adding the two blobs together. What you find is that some parts of the two distributions cancel each other out, leaving a 0% chance that the particle will land there, which is why there are dark bands that show up on the screen, what is referred to as the interference pattern.
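    Numerically, the cancellation rule is just "add the amplitudes, then square," as opposed to "square, then add." A tiny sketch with made-up amplitudes for one point on the screen:

    ```python
    import numpy as np

    # Amplitudes for reaching one particular point on the screen via each slit
    # (made-up values chosen so the two paths are exactly out of phase).
    amp_slit_1 = (1 / np.sqrt(2)) * np.exp(1j * 0.0)
    amp_slit_2 = (1 / np.sqrt(2)) * np.exp(1j * np.pi)

    # Classical-style reasoning: add the probabilities of the two paths.
    classical = abs(amp_slit_1) ** 2 + abs(amp_slit_2) ** 2  # 1.0

    # Quantum rule: add the amplitudes first, then square.
    quantum = abs(amp_slit_1 + amp_slit_2) ** 2  # ~0.0, a dark band

    print(classical, quantum)
    ```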

    Complex-valued probabilities are so strange that some physicists have speculated that maybe there is an issue with the theory. The physicist David Bohm, for example, had the idea of rewriting the complex-valued wave function as two separate real-valued functions. When he did that, he found he could replace the complex-valued probabilities with real-valued probabilities alongside a propagating "pilot wave," kinda like a field.

    However, the physicist John Bell later showed that if you do this, then the only way to reproduce the predictions of quantum mechanics would be to violate the speed of light limit. This "pilot wave" field would not be compatible with other known laws of physics, specifically special relativity. Indeed, he would publish a theorem that proves that any attempt to get rid of these weird canceling probabilities by replacing them with more classical probabilities ends up breaking other known laws of physics.

    That's precisely where "entanglement" comes into the picture. Entanglement is just a fancy word for a statistically correlated system. But the statistics of correlated systems, when you have complex-valued probabilities, can make different predictions than when you have only real-valued probabilities, it can lead to certain cancellations that you would not expect otherwise. What Bell proved is that these cancellations in an entangled system could only be reproduced with a classical probability theory if it violated the speed of light limit. Despite common misconception, Bell did not prove there is anything superluminal in quantum mechanics, only that you cannot replace quantum mechanics with a classical-esque theory without it violating the speed of light limit.

    Despite the fact that there are no speed of light violations in quantum mechanics, these interference effects produce results similar to what you would expect if you could violate the speed of light limit. This ultimately allows you to have more efficient processing of information and information exchange throughout the system.

    A simple example of this is quantum superdense coding. Let's say I want to send a person a two-bit message, but I don't yet know what the message is, and I send him a single qubit now anyways (a qubit, when measured, gives either 0 or 1, like a bit). Then, a year later, I decide what the message should be, so I send him another qubit. Interestingly enough, it is in principle possible to set up a situation whereby the recipient, who now has two qubits, can read the full two-bit message off those two qubits, despite the fact that you transmitted one of them long before you even decided what you wanted the message to be.

    It's important to understand that this is not because qubits can actually carry more than one bit of information. No one has ever observed a qubit that was not either a 0 or 1. It cannot be both simultaneously, nor hold any additional information beyond 0 or 1. It is purely a result of the strange cancellation effects of the probabilities, that the likelihoods of different events occurring cancel out in a way that is very different from your everyday intuition, and you can make clever use of this to cause information to be (locally) exchanged throughout a system more efficiently than should be possible in classical probability theory.
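    For the curious, here is a small numpy sketch of superdense coding (again my own illustration, not anyone's production code): the sender encodes two classical bits by acting only on their own half of a pre-shared Bell pair, and the recipient decodes both bits once they hold both qubits:

    ```python
    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    # Step 1 (long before the message is decided): share the Bell pair
    # (|00> + |11>)/sqrt(2). Qubit 0 stays with the sender, qubit 1 goes to the recipient.
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

    def encode(state, bits):
        """Sender encodes two classical bits by acting only on their own qubit 0."""
        b0, b1 = bits
        op = I2
        if b1:
            op = X @ op
        if b0:
            op = Z @ op
        return np.kron(op, I2) @ state

    def decode(state):
        """Recipient, now holding both qubits, undoes the Bell basis: CNOT then H."""
        state = np.kron(H, I2) @ (CNOT @ state)
        outcome = int(np.argmax(np.abs(state) ** 2))  # deterministic: one outcome has probability 1
        return ((outcome >> 1) & 1, outcome & 1)

    # Step 2 (later): encode the chosen message on qubit 0, then send that one qubit over.
    for message in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        assert decode(encode(bell, message)) == message
    print("all four two-bit messages recovered from two qubits")
    ```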

    There is another fun example known as the CHSH game. The game is simple: each team is composed of two members who, at the start of the round, are each given a card with either a 0 or a 1 on it, chosen at random. The number on the card given to the first team member we can call X and the number on the card given to the second team member we can call Y. The objective of the game is for the two team members to then turn over their cards and each write their own 0 or 1 on the back, which we can call A and B. When the host collects the cards, he checks whether (X AND Y) = (A XOR B), and if the equality holds true, the team scores a point.

    The only kicker is that the team members are not allowed to talk to one another, they have to come up with their strategy beforehand. I would challenge you to write out a table and try to think of a strategy that will always work. You will find that it is impossible to score a point better than 75% of the time if the team members cannot communicate, but if they can, you can score a point 100% of the time. If the team members were given statistically correlated qubits at the beginning of the round and disallowed from communicating, they could actually make use of interference effects to score a point ~85% of the time. They can perform better than should be physically possible in a classical probability theory.
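    Here is a short numpy sketch of the standard quantum strategy for this game, computing the overall win probability directly from the shared Bell state. The measurement angles are the usual textbook choices, not anything tied to a particular device:

    ```python
    import numpy as np

    def basis(angle):
        """Two orthogonal measurement directions (a real rotation by `angle`)."""
        return np.array([[np.cos(angle),  np.sin(angle)],
                         [-np.sin(angle), np.cos(angle)]])

    # Shared Bell state |Φ+> = (|00> + |11>)/sqrt(2).
    bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

    # Textbook-optimal angles: first player uses 0 or pi/4, second uses pi/8 or -pi/8.
    angles_1 = {0: 0.0, 1: np.pi / 4}
    angles_2 = {0: np.pi / 8, 1: -np.pi / 8}

    win_probability = 0.0
    for x in (0, 1):
        for y in (0, 1):
            A = basis(angles_1[x])
            B = basis(angles_2[y])
            for a in (0, 1):
                for b in (0, 1):
                    p = (np.kron(A[a], B[b]) @ bell) ** 2  # joint outcome probability
                    if (a ^ b) == (x & y):                 # the winning condition
                        win_probability += 0.25 * p        # X, Y uniformly random

    print(win_probability)  # ~0.8536 = cos^2(pi/8), beating the 75% classical bound
    ```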

    While you can build a quantum computer using electron spin as you mentioned, it doesn't have to be built that way. There are many different technologies that operate differently. All that you need is something which can exhibit these quantum interference effects, something that can only be accurately predicted using these complex-valued probabilities. Electron spin is what people often first think of because it is simple to comprehend. Electrons can only have two spin values, up or down, which you can map to 0 and 1, and you can directly measure it using a Stern-Gerlach apparatus. This just makes electron spin a simple way to explain how quantum computing works to people, but quantum computers definitely do not all operate on electron spin. Some operate on photon polarization, for example. Some operate on the motion of ytterbium ions trapped in an electromagnetic field.

    It's kind of like how you can implement bits using different voltage levels, where 0 V = 0 and 3.3 V = 1, or how you can implement bits using the direction of magnetic polarization on a spinning platter in a hard drive, whereby polarization in one direction = 0 and polarization in the opposite direction = 1. There are many different ways of physically implementing a bit. Similarly, there are many different ways of implementing a qubit. A qubit implementation also needs, at minimum, two discrete states to assign to 0 and 1, but on top of this it needs to follow the rules of quantum probability theory rather than classical probability theory.

  • pcalau12i @lemmygrad.ml