• 0 Posts
  • 21 Comments
Joined 3 months ago
Cake day: November 30th, 2024

  • So usually this is explained with two scientists, Alice and Bob, on far away planets. They’re each in the possession of a particle that is entangled with the other, and in a superposition of state 1 and state 2.

    This “usual” way of explaining it is just overly complicating it and making it seem more mystical than it actually is. We should not say the particles are “in a superposition” as if this describes the current state of the particle. The superposition notation should be interpreted as merely a list of probability amplitudes predicting the different likelihoods of observing different states of the system in the future.

    It is sort of like if you flip a coin, while it’s in the air, you can say there is a 50% chance it will land heads and a 50% chance it will land tails. This is not a description of the coin in the present as if the coin is in some smeared out state of 50% landed heads and 50% landed tails. It has not landed at all yet!

    Unlike classical physics, quantum physics is fundamentally random, so you can only predict events probabilistically, but one should not conflate the prediction of a future event with the description of the present state of the system. The superposition notation is only writing down the probability amplitudes for what you will observe (state 1 or state 2) in the future event that you go to interact with the particle; it is not a description of the state of the particles in the present.

    When Alice measures the state of her particle, it collapses into one of the states, say state 1. When Bob measures the state of his particle immediately after, before any particle travelling at light speed could get there, it will also be in state 1 (assuming they were entangled in such a way that the state will be the same).

    This mistreatment of the mathematical notation as a description of the present state of the system also leads to confusing language like “it collapses into one of the states,” as if the change in a probability distribution represents a physical change to the system. The mental picture people who say this often have is that the particle literally, physically becomes the probability distribution prior to measurement—the particle “spreads out” like a wave according to the probability amplitudes of the state vector—and when you measure the particle and update the probabilities, they interpret this as the wave physically contracting onto an eigenvalue—it “collapses” like a house of cards.

    But this is, again, overcomplicating things. The particle never spreads out like a wave and it never “collapses” back into a particle. The mathematical notation is just a way of capturing the likelihoods of the particle showing up in one state or the other, and when you measure what state it actually shows up in, then you can update your probabilities accordingly. For example, if you know the coin is 50%/50% heads/tails and you observe it land on tails, you can update the probabilities to 0%/100% heads/tails because you know it landed on tails and not heads. Nothing “collapsed”: you’re just observing the actual outcome of the event you were predicting and updating your statistics accordingly.


  • Any time you do something to the particles on Earth, the ones on the Moon are affected also

    The no-communication theorem already proves that manipulating one particle in an entangled pair has no impact at all on the other. The proof uses the reduced density matrices of the particles, which capture both the probabilities of each particle showing up in a particular state and the coherence terms that capture its ability to exhibit interference effects. No change you can make to one particle in an entangled pair can possibly lead to an alteration of the reduced density matrix of the other particle. A small numerical sketch of this is below.
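    The following is a minimal sketch of my own (not from the comment above), assuming a Bell pair and an arbitrary local rotation on Bob’s side, showing numerically that Alice’s reduced density matrix is unchanged:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), written in the basis |00>, |01>, |10>, |11>
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def reduced_density_matrix_A(state):
    """Trace out Bob's qubit from a two-qubit pure state."""
    rho = np.outer(state, state.conj())   # full 4x4 density matrix
    rho = rho.reshape(2, 2, 2, 2)         # indices: (a, b, a', b')
    return np.einsum('abcb->ac', rho)     # partial trace over Bob

# An arbitrary local operation on Bob's side (any 2x2 unitary will do)
theta = 0.7
U_B = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]], dtype=complex)
psi_after = np.kron(np.eye(2), U_B) @ psi  # acts on Bob's qubit only

print(reduced_density_matrix_A(psi))        # [[0.5, 0], [0, 0.5]]
print(reduced_density_matrix_A(psi_after))  # identical: Alice sees no change
```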


  • pcalau12i@lemmygrad.ml to Science Memes@mander.xyz · Observer · 6 days ago

    I don’t think solving the Schrodinger equation really gives you a good idea of why quantum mechanics is even interesting. You should also study very specific applications of it where it yields counterintuitive outcomes to see why it is interesting, such as in the GHZ experiment.


    There is no “consciousness.” The false belief in “consciousness” is a product of Kantianism, which was itself heavily inspired by Newtonian physics (Kant was heavily inspired by Newton). We have changed some of the categories over the years, but the fundamentals have not changed and have become deeply integrated into the western psyche in how we think about the world, and probably into many other cultures as well.

    Modern day philosophers have just renamed Kant’s phenomena to “consciousness” or “subjective experience” and renamed his “noumena” to “matter.” Despite the renaming, the categories are still treated identically: the “consciousness” is everything we perceive, and the “matter” is something invisible, the true physical thing-in-itself beyond our perception and what “causes” our perception.

    Since all they have done is rename Kant’s categories, they do not actually solve Kant’s mind-body problem, but have just rediscovered it and thus renamed it in the form of the “hard problem of consciousness,” which is ultimately the same exact problem just renamed: that there seems to be a “gap” between this “consciousness” and “matter.”

    Most modern day philosophers seem to split into two categories. The first are the “promissory materialists” who just say there is a real problem here but shrug their shoulders and say one day science will solve it so we don’t have to worry about it, but give no explanation of what a solution could even possibly look like. The second are the mystics who insist this “consciousness” can’t be reconciled with “matter” because it must be some fundamental force of reality. They talk about things like “consciousness fields” or “cosmic consciousness” or whatever.

    However, both are wrong. Newtonian physics is not an accurate representation of reality, as we already know, and so the Kantian mindset inspired by it should also be abandoned. When you abandon the Kantian mindset, there is no longer a need for the “phenomena” and “noumena” division, or, in modern lingo, there is no longer a need for the “consciousness” and “matter” division. There is just reality.

    Imagine you are looking at a candle. The apparent size of the candle you will see will depend upon how far you are away from it: if you are further away it appears smaller. Technically, light doesn’t travel at an infinite speed, and so the further away you are, the further in the past you are seeing the candle. The candle also may appear a bit different under different lighting conditions.

    A Kantian would say there is a true candle, the “candle-in-itself,” or, in modern lingo, the material candle, that “causes” all these different perceptions. The perceptions themselves are then said to be brain-generated, not part of the candle, not even something real at all, but something purely immaterial, part of the phenomena, or, in modern lingo, part of “consciousness.”

    If every possible perception of the candle is part of “consciousness,” then the candle-in-itself, the actual material object, must be independent of perception, i.e. it’s invisible. No observation can reveal it because all observations are part of “consciousness.” This is the Kantian worldview: everything we perceive is part of a sort of illusion created within the mind as opposed to the “true” world that is entirely imperceptible. The mind-body problem, or in modern lingo the “hard problem,” then arises as to how an entirely imperceptible (non-phenomenal/non-conscious) world can give rise to what we perceive in a particular configuration.

    However, the Kantian worldview is a delusion. In Newtonian physics, if I launch a cannonball from point A to point B, simply observing it at point A and point B is enough to fill in the gaps and say where the object was at every point in between A and B, independently of anything else. This Newtonian worldview allows us to conceive of the cannonball as a thing-in-itself, an object with its own inherent properties that can be meaningfully conceived of as existing even in complete isolation, and that always has an independent history of how it ends up where it does.

    As Schrodinger pointed out, this mentality does not apply to modern physics. If you fire a photon from point A to point B and observe it at those two points, you cannot always meaningfully fill in the gaps of what the photon was doing in between those two points without running into contradictions. As Schrodinger concluded, one has to abandon the notion that particles really are independent autonomous entities with their own independent existence that can be meaningfully conceived of in complete isolation. They only exist from moment to moment in the context of whatever they are interacting with and not in themselves.

    If this is true for particles, it must also be true of everything made up of particles: there is no candle-in-itself either. It’s a high-level abstraction that doesn’t really exist. What we call the “candle” is not an independent unobservable entity separate from all our different perceptions of it; it is precisely the totality of all the different ways it is and can be perceived, all the different ways it interacts with other objects from those objects’ perspectives.

    Kant justified the noumena by arguing that it makes no sense to talk about objects “appearing” (the word “phenomena” means “the appearance of”) without there being something that is doing the appearing (the noumena). He is correct about this, but it cuts the other way: rather than justifying the noumena, it shows that if we reject the noumena, we must also reject the phenomena (“consciousness”). It makes no sense to treat the different instances of a candle as some sort of separate “consciousness” realm, or some sort of illusion independent of the real material world as it really is.

    No, what we perceive directly is material reality as it actually is. Reality is what you are immersed in every day, what surrounds you, what you are experiencing in this very moment. It is not some illusion from which there is a “true” invisible reality beyond it. When you look at the candle, you are seeing the candle as it really is from your own perspective. That is the real candle in the real world. The Kantian distinction between noumena-phenomena (or between “matter” and “consciousness”) should be abandoned. It is just not compatible with the modern physical sciences.

    But I know no one will even know what I’m talking about, so writing this is rather pointless. Kantianism is too deeply ingrained into the western psyche, people cannot even comprehend that it is possible to criticize it because it underlies how they think about everything. This nonsense debate about “consciousness” will continue forever, in ten thousand years people will still be arguing over it, because it’s an intrinsic problem that arises out of the dualistic structure in Kantian thinking. If you begin from the get-go with an assumption that there is a division between mind and matter, you cannot close this division without contradicting yourself, which leads to this debate around “consciousness.” But it seems unrealistic at this point to get people to abandon this dualistic way of thinking, so it seems like the “consciousness” debate will proceed forever.


    You have not made any point at all. Your first reply to me entirely ignored the point of my post, which you did not read, and followed it with an attack; I replied pointing out that you ignored the whole point of my post and just attacked me without actually responding to it; and now you respond again with literally nothing of substance, just saying “you’re wrong! touch grass! word salad!”

    You have nothing of substance to say, nothing to contribute to the discussion. You are either a complete troll trying to rile me up, or you just have a weird emotional attachment to this topic and felt an emotional need to respond and attack me prior to actually thinking up a coherent thing to criticize me on. Didn’t your momma ever teach you that “if you have nothing positive or constructive to say, don’t say anything at all”? Learn some manners, boy. Blocked.


  • They are incredibly efficient for short-term production, but very inefficient for long-term production. Destroying the environment is a long-term problem that doesn’t have immediate consequences on the businesses that engage in it. Sustainable production in the long-term requires foresight, which requires a plan. It also requires a more stable production environment, i.e. it cannot be competitive because if you are competing for survival you will only be able to act in your immediate interests to avoid being destroyed in the competition.

    Most economists are under a delusion known as neoclassical economics which is literally a nonphysical theory that treats the basis of the economy as not the material world we actually live in but abstract human ideas which are assumed to operate according to their own internal logic without any material causes or influences. They then derive from these imagined “laws” regarding human ideas (which no one has ever experimentally demonstrated but were just invented in some economists’ armchair one day) that humans left to be completely free to make decisions without any regulations at all will maximize the “utils” of the population, making everyone as happy as possible.

    With the complete failure of this policy leading to the US Great Depression, many economists recognized it was flawed and made some concessions, such as with Keynesianism, but they never abandoned the core idea. In fact, the core idea was just reformulated to be compatible with Keynesianism in what is called the neoclassical synthesis. It still exists as a fundamental belief for nearly every economist that a completely unregulated market economy without any plan at all will automagically produce a society with maximal happiness, and while they will admit some caveats to this these days (such as the need for a central organization to manage currency in Keynesianism), these are treated as the exception and not the rule. Their beliefs are still incompatible with long-term sustainable planning because, in their minds, the success of markets comes from util-maximizing decisions that are fundamental to the human psyche, so any long-term plan must contradict this and lead to a bad economy that fails to maximize utils.

    The rise of Popperism in western academia has also played a role here. A lot of material scientists have been rather skeptical of the social sciences and aren’t really going to take seriously arguments like those of neoclassical economics, which is based largely in mysticism about human free will, so a second argument against long-term planning was put forward by Karl Popper which has become rather popular in western academia. Popper argued that it is impossible to learn from history because it is too complicated, with too many variables, and you cannot control them all. You would need a science that studies how human societies develop in order to justify a long-term development plan into the future, but if it’s impossible to study them to learn how they develop because they are too complicated, then it is impossible to have such a science, and thus impossible to justify any sort of long-term sustainable development plan. It would always be based on guesswork and so would be more likely to do more harm than good. Popper argued that instead of long-term development plans, the state should be purely ideological, what he called an “open society” operating purely on the ideology of liberalism rather than getting involved in economics.

    As long as both neoclassical economics and Popperism are dominant trends in western academia, there will never be long-term sustainable planning, because they are fundamentally incompatible ideas.


    You did not read what I wrote, so it is ironic that you call it “word salad” when you are not even aware of the words I wrote, since you had an emotional response and wrote this reply without actually addressing what I argued. I stated that it is impossible to have a very large institution without strict rules that people follow, that this also requires enforcement of the rules, and that this means a hierarchy, as you will have rule-enforcers.

    Also, you are insisting your personal definition of anarchism is the one true definition that I am somehow stupid for disagreeing with, yet anyone can just scroll through the comments on this thread and see there are other people disagreeing with you while also defending anarchism. A lot of anarchists do not believe anarchism means “no hierarchy.” Like, seriously, do you unironically believe in entirely abolishing all hierarchies? Do you think a medical doctor should have as much authority on how to treat an injured patient as the janitor of the same hospital? Most anarchists aren’t even “no hierarchy”; they are “no unjustified hierarchy.”

    The fact you are entirely opposed to hierarchy makes your position even more silly than what I was criticizing.


  • All libertarian ideologies (including left and right wing anarchism) are anti-social and primitivist.

    It is anti-social because it arises from a hatred of working in large groups. It’s impossible to have any sort of large-scale institution without having rules that people have to follow, and libertarian ideology arises out of people hating to have to follow rules, i.e. to be a respectable member of society, i.e. they hate society and don’t want to be social. They thus desire very small institutions with limited rules and restrictions. Right-wing libertarians envision a society dominated by small private businesses, while left-wing libertarians imagine a society dominated by small worker-cooperatives, communes, or some sort of community councils.

    Of course, everyone of all ideologies opposes submitting to hierarchies they find unjust, but hatred of submitting to hierarchies at all is just anti-social, as any society will have rules, people who write the rules, and people who enforce the rules. That is necessary for any social institution to function. It is part of being an adult and learning to live in a society to learn to obey the rules, such as traffic rules. Sometimes it is annoying or inconvenient, but you do it because you are a respectable member of society and not a rebellious edgelord who makes things harder on everyone else by not obeying basic rules.

    It is primitivist because some institutions simply only work if they are very large. You cannot have something like NASA that builds rocket ships operated by five people. You are going to always need an enormous institution which will have a ton of people, a lot of different levels of command (“hierarchy”), strict rules for everyone to follow, etc. If you tried to “bust up” something like NASA or SpaceX to be small businesses they simply would lose their ability to build rocket ships at all.

    Of course, anarchists don’t mind; they will say, “who cares about rockets? They’re not important.” It reminds me of the old meme that spread around where someone asked anarchists how their tiny communes would be able to organize the massive supply chains of our modern societies, and they responded by saying that the supply chain would be reduced to just people growing beans in their backyard and eating them, like feudal peasants. They won’t even defend the claim that their system could function as well as our modern economy; they just say modern marvels of human engineering don’t even matter, because they are ultimately primitivists at heart.

    I never understood the popularity of libertarian and anarchist beliefs in programming circles. We would never have entered the Information Age if we had an anarchist or libertarian system. No matter how much they might pretend these are the ideal systems, they don’t even believe it themselves. If a libertarian has a serious medical illness, they are either going to seek medical help at a public hospital or a corporate hospital. Nobody is going to seek medical help at a “hospital small business” run out of someone’s garage. We all intuitively and implicitly understand that large swathes of the economy that we all take advantage of simply cannot feasibly be run by small organizations, but libertarians are just in denial.


  • Anarchism thus becomes meaningless as anyone who defends certain hierarchies obviously does so because they believe they are just. Literally everyone on earth is against “unjust hierarchies” at least in their own personal evaluation of said hierarchies. People who support capitalism do so because they believe the exploitative systems it engenders are justifiable and will usually immediately tell you what those justifications are. Sure, you and I might not agree with their argument, but that’s not the point. To say your ideology is to oppose “unjust hierarchies” is to not say anything at all, because even the capitalist, hell, even the fascist would probably agree that they oppose “unjust hierarchies” because in their minds the hierarchies they promote are indeed justified by whatever twisted logic they have in their head.

    Telling me you oppose “unjust hierarchies” thus tells me nothing about what you actually believe. It is as vague as saying “I oppose bad things”: a meaningless statement on its own without clarifying what is meant by “bad” in this case. Similarly, “I oppose unjust hierarchies” is a meaningless statement without clarifying what qualifies as “just” and “unjust,” and once you tell me that, it would make more sense to label you based on your answer to that question. Anarchism thus becomes a meaningless word that tells me nothing about you. For example, you might tell me one unjust hierarchy you want to abolish is prison. It would make more sense for me to call you a prison abolitionist than an anarchist, since that term at least carries meaning, and there are plenty of prison abolitionists who don’t identify as anarchists.


  • pcalau12i@lemmygrad.ml to Open Source@lemmy.ml · Proton’s biased article on Deepseek · edited · 1 month ago

    There is no “fundamentally” here, you are referring to some abstraction that doesn’t exist. The models are modified during the fine-tuning process, and the process trains them to learn to adopt DeepSeek R1’s reasoning technique. You are acting like there is some “essence” underlying the model which is the same between the original Qwen and this model. There isn’t. It is a hybrid and its own thing. There is no such thing as “base capability,” the model is not two separate pieces that can be judged independently. You can only evaluate the model as a whole. Your comment is just incredibly bizarre to respond to because you are referring to non-existent abstractions and not actually speaking of anything concretely real.

    The model is neither Qwen nor DeepSeek R1; it is DeepSeek R1 Qwen Distill, as the name says. It would be like saying it’s false advertising to call a mule a hybrid of a donkey and a horse because the “base capabilities” are those of a donkey, so it has nothing to do with horses and is really just a donkey at the end of the day. The statement is so bizarre I just do not even know how to address it. It is a hybrid, its own distinct third thing that is a hybrid of them both. The model’s capabilities can only be judged as it exists, and its capabilities differ from Qwen and the original DeepSeek R1 as actually scored by various metrics.

    Do you not know what fine-tuning is? It refers to actually adjusting the weights in the model, and it is the weights that define the model. And this fine-tuning is being done alongside DeepSeek R1, meaning it is being adjusted to take on capabilities of R1 within the model. It gains R1 capabilities at the expense of Qwen capabilities as DeepSeek R1 Qwen Distill performs better on reasoning tasks but actually not as well as baseline models on non-reasoning tasks. The weights literally have information both of Qwen and R1 within them at the same time.

    Speaking of its “base capabilities” is a meaningless floating abstraction which cannot be empirically measured and doesn’t refer to anything concretely real. It only has its real concrete capabilities, not some hypothetical imagined capabilities. You accuse them of “marketing” even though it is literally free. All DeepSeek sells is compute to run models, but you can pay any company to run these distill models. They have no financial benefit for misleading people about the distill models.

    You genuinely are not making any coherent sense at all. You are insisting that a hybrid model which is objectively different and objectively scores and performs differently should be given the exact same name, for reasons you cannot seem to actually articulate. It clearly needs a different name, and since it was created by utilizing the DeepSeek R1 model’s distillation process to fine-tune it, it seems to make sense to call it DeepSeek R1 Qwen Distill. Yet for some reason you insist this is lying and misrepresenting it, that it actually has nothing to do with DeepSeek R1 at all, that it should just be called Qwen, and that we should pretend it is literally the same model, despite it not being the same model: its training weights are different (you can do a “diff” on the two model files if you don’t believe me!) and it performs differently on the same metrics.

    There is simply no rational reason to intentionally want to mislabel the model as just being Qwen and having no relevance to DeepSeek R1. You yourself admitted that the weights are trained on R1 data so they necessarily contain some R1 capabilities. If DeepSeek was lying and trying to hide that the distill models are based on Qwen and Llama, they wouldn’t have literally put that in the name to let everyone know, and released a paper explaining exactly how those were produced.

    It is clear to me that you and your other friends here have some sort of alternative agenda that makes you not want to label it correctly. DeepSeek is open about the distill models using Qwen and Llama, but you want them to be closed and not reveal that they also used DeepSeek R1. The current name for it is perfectly fine, and pretending it is just a Qwen model (or a Llama model, for the other distilled versions) is straight-up misinformation; anyone who downloads the models and runs them themselves will clearly see immediately that they perform differently. They are hybrid models correctly called what they are: DeepSeek R1 Qwen Distill and DeepSeek R1 Llama Distill.


  • pcalau12i@lemmygrad.ml to Open Source@lemmy.ml · Proton’s biased article on Deepseek · edited · 1 month ago

    The 1.5B/7B/8B/13B/32B/70B models are all officially DeepSeek R1 models, that is what DeepSeek themselves refer to those models as. It is DeepSeek themselves who produced those models and released them to the public and gave them their names. And their names are correct, it is just factually false to say they are not DeepSeek R1 models. They are.

    The “R1” in the name means “reasoning version one” because it does not just spit out an answer but reasons through it with an internal monologue. For example, here is a simple query I asked DeepSeek R1 13B:

    Me: can all the planets in the solar system fit between the earth and the moon?

    DeepSeek: Yes, all eight planets could theoretically be lined up along the line connecting Earth and the Moon without overlapping. The combined length of their diameters (approximately 379,011 km) is slightly less than the average Earth-Moon distance (about 384,400 km), allowing them to fit if placed consecutively with no required spacing.

    However, on top of its answer, I can expand an option to see the internal monologue it went through before generating the answer, which you can find here because it’s too long to paste.

    What makes these consumer-oriented models different is that, rather than being trained on raw data, they are trained on synthetic data from pre-existing models. That’s what the “Qwen” or “Llama” parts of the name mean. The 7B model is trained on synthetic data produced by Qwen, so it is effectively a compressed version of Qwen. However, neither Qwen nor Llama can “reason”; they do not have an internal monologue.

    This is why it is just incorrect to claim that something like DeepSeek R1 7B Qwen Distill has no relevance to DeepSeek R1 but is just a Qwen model. If it were just a Qwen model, why can it do something that Qwen cannot do but only DeepSeek R1 can? It’s because, again, it is a DeepSeek R1 model: they add the R1 reasoning to it during the distillation process as part of its training. They basically use synthetic data generated from DeepSeek R1 to fine-tune it, readjusting its parameters so it adopts a similar reasoning style. It is objectively a new model because it performs better on reasoning tasks than a normal Qwen model. It cannot be considered solely a Qwen model nor an R1 model because its parameters contain information from both. The toy sketch below illustrates the general idea.
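    The following is a toy sketch of my own (not DeepSeek’s actual pipeline; the tiny bigram models are purely illustrative) showing the general distillation idea: a “student” model is fine-tuned on synthetic data sampled from a “teacher,” so the student’s weights end up carrying the teacher’s behaviour:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 5
torch.manual_seed(0)

class Bigram(nn.Module):
    """Predicts the next symbol from the current one via a logit table."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(VOCAB, VOCAB))
    def forward(self, x):          # x: (batch,) current symbols
        return self.logits[x]      # (batch, VOCAB) next-symbol logits

teacher = Bigram()
with torch.no_grad():              # give the teacher a distinctive behaviour
    teacher.logits.copy_(torch.randn(VOCAB, VOCAB) * 3)

student = Bigram()                 # starts out uniform (the "base model")

# 1. Generate synthetic training pairs (current symbol, teacher's next symbol).
with torch.no_grad():
    cur = torch.randint(VOCAB, (20_000,))
    nxt = torch.distributions.Categorical(logits=teacher(cur)).sample()

# 2. Fine-tune the student on the teacher's outputs (ordinary cross-entropy).
opt = torch.optim.Adam(student.parameters(), lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(student(cur), nxt)
    loss.backward()
    opt.step()

# The student's weights are no longer the original ones: they now encode the
# teacher's next-symbol distribution, analogous to how the distill models'
# weights carry information from both the base model and R1.
print(F.softmax(teacher.logits, dim=-1)[0])
print(F.softmax(student.logits, dim=-1)[0])   # close to the teacher's row
```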


    As I said, they will likely come to the home in the form of cloud computing, which is how advanced AI comes to the home. You can run some AI models at home, but they’re nowhere near as advanced as cloud-based services and so not as useful. I’m not sure why, if we ever have AGI, it would need to be run at home. It doesn’t need to be. It would be nice if it could be run entirely at home, but that’s not a necessity, just a convenience. Maybe your personal AGI robot who does all your chores for you only works when the WiFi is on. That would not prevent people from buying it; I mean, those Amazon Fire TVs are selling like hot cakes and they only work when the WiFi is on. There also already exist some AI products that require a constant internet connection.

    It is kind of similar with quantum computing. There actually do exist consumer-end home quantum computers, such as Triangulum, but it only does 3 qubits, so it’s more of a toy than a genuinely useful computer. For useful tasks, it will all be cloud-based in all likelihood. The NMR technology Triangulum is based on is not known to be scalable, so the only other way quantum computers could make it into the home in a non-cloud-based fashion would be optical quantum computing. There could be a breakthrough there, you can’t rule it out, but I wouldn’t keep my fingers crossed. If quantum computers become useful for regular people in the next few decades, I would bet it will all be through cloud-based services.


  • If quantum computers actually ever make significant progress to the point that they’re useful (big if) it would definitely be able to have positive benefits for the little guy. It is unlikely you will have a quantum chip in your smartphone (although, maybe it could happen if optical quantum chips ever make a significant breakthrough, but that’s even more unlikely), but you will still be able to access them cheaply over the cloud.

    I mean, IBM spends billions on its quantum computers and gives cloud access to anyone who wants to experiment with them completely free. That’s how I first learned quantum computing, running algorithms on IBM’s cloud-based quantum computers. I’m sure that if demand picks up once they stop being experimental and actually become useful, they’ll probably start charging a fee, but the fact that it is free now makes me suspect it will not be very much.

    I think a comparison can be made with LLMs, such as with OpenAI. It also takes billions to train those giant LLMs, and they can only be trained on extremely expensive computers, yet a single query costs less than a penny, and there are still free versions available. Cloud access will likely always be incredibly cheap; it’s a great way to bring super expensive hardware to regular people.

    That’s likely what the future of quantum computing will be for regular people, quantum computing through cloud access. Even if you never run software that can benefit from it, you may get benefits indirectly, such as, if someone uses a quantum computer to help improve medicine and you later need that medicine.


  • quantum nature of the randomly generated numbers helped specifically with quantum computer simulations, but based on your reply you clearly just meant that you were using it as a multi-purpose RNG that is free of unwanted correlations between the randomly generated bits

    It is used as the source of entropy for the simulator. Quantum mechanics is random, so to actually get the results you have to sample it. In quantum computing, this typically involves running the same program tens of thousands of times, which are called “shots,” and then forming a distribution of the results. The sampling with the simulator uses the QRNG for the source of entropy, so the sampling results are truly random.
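    To illustrate the idea (this is my own sketch, not the author’s simulator, and it uses numpy’s default generator as a stand-in for the hardware entropy source): a simulator of this kind computes outcome probabilities from the quantum state, then samples them over many “shots” to build up the measured distribution.

```python
import numpy as np

# Single qubit in an equal superposition: |psi> = (|0> + i|1>)/sqrt(2)
amplitudes = np.array([1, 1j]) / np.sqrt(2)
probs = np.abs(amplitudes) ** 2          # Born rule -> [0.5, 0.5]

rng = np.random.default_rng()            # stand-in for the hardware entropy source
shots = 10_000
outcomes = rng.choice([0, 1], size=shots, p=probs)

counts = np.bincount(outcomes, minlength=2)
print({"0": int(counts[0]), "1": int(counts[1])})   # e.g. {'0': 5023, '1': 4977}
```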

    Out of curiosity, have you found that the card works as well as advertised? I ask because it seems to me that any imprecision in the design and/or manufacture of the card could introduce systematic errors in the quantum measurements that would result in correlations in the sampled bits, so I am curious if you have been able to verify that is not something to be concerned about.

    I have tried several hardware random number generators and usually there is no bias, either because they specifically designed it not to have a bias or because there is some level of post-processing to remove the bias. If there is a bias, it is possible to remove it yourself. There are two methods that I tend to use, depending upon the source of the bias.

    To be “random” simply means each bit is statistically independent of every other bit, not necessarily that the outcome is uniform, i.e. 50% chance of 0 and 50% chance of 1. It can still be considered truly random with a non-uniform distribution, such as a 52% chance of 0 and a 48% chance of 1, as long as each successive bit is entirely independent of any previous bit, i.e. there is no statistical analysis you could ever perform on the bits to improve your chances of predicting the next one beyond the initial distribution of 52%/48%.

    In the case where it is genuinely random (statistically independent) yet non-uniform (which we can call nondeterministic bias), you can transform it into a uniform distribution using what is known as a von Neumann extractor. This takes advantage of a simple probability rule for statistically independent data whereby Pr(A)Pr(B)=Pr(B)Pr(A). Let’s say A=0 and B=1, then Pr(0)Pr(1)=Pr(1)Pr(0). That means you can read two bits at a time rather than one, throw out all results that are 00 and 11, keep only results that are 01 or 10, and then map 01 to 0 and 10 to 1. You would then be mathematically guaranteed that the resulting distribution of bits is perfectly uniform, with a 50% chance of 0 and a 50% chance of 1 (see the sketch below).
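    A minimal Python sketch of the extractor (my own illustration; the biased input here is simulated with a pseudorandom generator just to demonstrate the math):

```python
import random

def von_neumann_extract(bits):
    """Read bits in pairs, discard 00 and 11, map 01 -> 0 and 10 -> 1."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:           # keep only the 01 and 10 pairs
            out.append(a)    # the pair 01 yields 0, the pair 10 yields 1
    return out

# A heavily skewed but independent source (95% zeros), similar to the
# antenna-noise generator described below.
biased = [1 if random.random() < 0.05 else 0 for _ in range(100_000)]
unbiased = von_neumann_extract(biased)
print(sum(biased) / len(biased))       # ~0.05
print(sum(unbiased) / len(unbiased))   # ~0.5, but far fewer bits survive
```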

    I have used this method to develop my own hardware random number generator that can pull random numbers from the air, by analyzing tiny fluctuations in electrical noise in your environment using an antenna. The problem is that electromagnetic waves are not always hitting the antenna, so there can often be long strings of zeros, so if you set something up like this, you will find your random numbers are massively skewed towards zero (like 95% chance of 0 and 5% chance of 1). However, since each bit still is truly independent of the successive bit, using this method will give you a uniform distribution of 50% 0 and 50% 1.

    Although, one thing to keep in mind is the bigger the skew, the more data you have to throw out. With my own hardware random number generator I built myself that pulls the numbers from the air, it ends up throwing out the vast majority of the data due to the huge bias, so it can be very slow. There are other algorithms which throw out less data but they can be much more mathematically complicated and require far more resources.

    In the cases where it may not be genuinely random because the bias is caused by some imperfection in the design (which we can call deterministic bias), you can still uniformly distribute the bias across all the bits so that not only is the bias much more difficult to detect, but you still get uniform results. The way to do this is to take your random number and XOR it with a data set that is non-random but uniform, which you can generate from a pseudorandom number generator like C’s rand() function.

    This will not improve the quality of the random numbers. Say the source is biased 52% to 48% and you use this method to de-bias it so the distribution is 50% to 50%; if someone can predict the next value of the rand() function, that would restore their ability to predict the next bit back to 52% to 48%. You can make this more difficult by using a higher-quality pseudorandom number generator, such as something based on AES, to generate the pseudorandom numbers. NIST even has standards for this kind of post-processing. A rough sketch of the idea is below.
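    The following is a rough sketch of my own of this XOR-whitening idea, using Python’s random module as a stand-in for C’s rand() (a real design would use a CSPRNG, as discussed below):

```python
import random

def whiten(raw_bits, seed=12345):
    """XOR each raw bit with a keystream bit from a (low-quality) PRNG."""
    keystream = random.Random(seed)
    return [b ^ keystream.getrandbits(1) for b in raw_bits]

# A source with a mild deterministic bias toward 0 (52/48), simulated here.
src = random.Random(999)
raw = [1 if src.random() < 0.48 else 0 for _ in range(100_000)]

print(sum(raw) / len(raw))             # ~0.48: the bias is visible
print(sum(whiten(raw)) / len(raw))     # ~0.50: the bias is hidden, not removed
```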

    But ultimately this method is only obfuscation: it makes the deterministic bias more and more difficult to discover by hiding it away more cleverly, but it does not truly get rid of it. It’s impossible to take a random data set with some deterministic bias and truly get rid of that bias purely through deterministic mathematical transformations. You can only hide it away very cleverly. Only if the bias is nondeterministic can you get rid of it with a mathematical transformation.

    It is impossible to reduce the quality of the random numbers this way. If the entropy source is truly random and truly non-biased, then XORing it with the C rand() function, despite it being a low-quality pseudorandom number generator, is mathematically guaranteed to still output something truly random and non-biased. So there is never harm in doing this.

    However, in my experience, if you find your hardware random number generator is biased (most aren’t), the bias usually isn’t very large. If something is truly random but biased so that there is a 52% chance of 0 and a 48% chance of 1, this isn’t enough of a bias to actually cause many issues. You could even use it for something like cryptography, and even if someone did figure out the bias, it would not increase their ability to predict keys enough to actually put anything at risk. If you use a cryptographically secure pseudorandom number generator (CSPRNG) in place of something like C’s rand(), they will likely not be able to discover the bias in the first place, as these do a very good job of obfuscating the bias, to the point that it will likely be undetectable.


    I’m not sure what you mean by “turning it into a classical random number.” The only point of the card is to make sure that the sampling results from the simulator are truly random, down to a quantum level, and have no deterministic patterns in them. Indeed, actually using quantum optics for this purpose is a bit overkill, as there are hardware random number generators which are not quantum-based and produce something good enough for all practical purposes, like Intel Secure Key Technology, which is built into most modern x86 CPUs.

    For that reason, my software does allow you to select other hardware random number generators. For example, you can easily get an entire build (including the GPU) that can run simulations of 14 qubits for only a few hundred dollars if you just use the Intel Secure Key Technology option. It also supports a much cheaper USB device called TrueRNGv3. It also has an option to use a pseudorandom number generator if you’re not that interested in randomness accuracy, and when using the pseudorandom number generator option it also supports “hidden variables,” which really just act as the seed to the pseudorandom number generator. A rough sketch of this kind of pluggable setup is below.
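    The following is a rough sketch of my own of such a pluggable entropy-source interface (the class names and the device path are illustrative assumptions, not the author’s actual code):

```python
import os
import random

class PseudoEntropy:
    """PRNG back-end; the seed plays the role of the 'hidden variables'."""
    def __init__(self, seed):
        self._rng = random.Random(seed)
    def random_bytes(self, n):
        return bytes(self._rng.getrandbits(8) for _ in range(n))

class OSEntropy:
    """Entropy from the operating system's pool (os.urandom)."""
    def random_bytes(self, n):
        return os.urandom(n)

class DeviceEntropy:
    """Raw bytes from a hardware RNG exposed as a character device."""
    def __init__(self, path="/dev/hwrng"):   # example path, device-dependent
        self.path = path
    def random_bytes(self, n):
        with open(self.path, "rb") as f:
            return f.read(n)

# The simulator's sampler would draw from whichever source is selected.
source = PseudoEntropy(seed=42)   # swap in OSEntropy() or DeviceEntropy()
print(source.random_bytes(4).hex())
```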

    For most practical purposes, no, you do not need this card, and it’s definitely overkill. The main reason I even bought it was that I was adding support for hardware random number generators to my software and I wanted to support a quantum one, so I needed to buy it to actually test it and make sure it works. But now I use it regularly as the back-end to my simulator just because I think it is neat.



  • By applying both that and the many worlds hypothesis, the idea of quantum immortality comes up, and thats a real mind bender. Its also a way to verifiably prove many worlds accurate(afaik the only way)

    MWI only somewhat makes sense (it still doesn’t make much sense) if you assume the “branches” cannot communicate with each other after decoherence occurs. “Quantum immortality” mysticism assumes somehow your cognitive functions can hop between decoherent branches where you are still alive if they cease in a particular branch. It is self-contradictory. There is nothing in the mathematical model that would predict this and there is no mechanism to explain how it could occur.

    Imagine creating a clone which is clearly not the same entity as you because it is standing in a different location and, due to occupying different frames of reference, your paths would diverge after the initial cloning, with the clone forming different memories and such. “Quantum immortality” would be as absurd as saying that if you then suddenly died, your cognitive processes would hop to your clone, you would “take over their body” so to speak.

    Why would that occur? What possible mechanism would cause it? Doesn’t make any sense to me. It seems more reasonable to presume that if you die, you just die. Your clone lives on, but you don’t. In the grand multiverse maybe there is a clone of you that is still alive, but that universe is not the one you occupy, in this one your story ends.

    It also has a problem similar to reincarnation mysticism. If MWI is correct (it’s not), then there would be an infinite number of other decoherent branches containing other “yous.” Which “you” would your consciousness hop into when you die, assuming this even does occur (it doesn’t)? It makes zero sense.

    To reiterate though, assuming many worlds is accurate, the expiriment carries no risk to you. Due to the anthropic principle, you will always find yourself in the reality in which you survive.

    You see the issue right here, you say the reality in which you survive, except there would be an infinite number of them. There would be no the reality, there would be a reality, just one of an infinitude of them. Yet, how is the particular one you find yourself in decided?

    MWI is even worse than the clone analogy I gave, because it would be like saying there are an infinite number of clones of you, and when you die your cognitive processes hop from your own brain to one of theirs. Not only is there no mechanism to cause this, but even if we presume it is true, which one of your infinite number of clones would your cognitive processes take control of?


  • Isn’t the quantum communication (if it were possible) supposed to be actually instantaneous, not just “nearly instantaneous”?

    There is no instantaneous information transfer (“nonlocality”) in quantum mechanics. You can prove this with the No-communication Theorem. Quantum theory is a statistical theory, so predictions are made in terms of probabilities, and the No-communication Theorem is a relatively simple proof that no physical interaction with a particle in an entangled pair can alter the probabilities of the other particle it is entangled with.

    (It’s actually a bit broader than this, as it shows that no interaction with a particle in an entangled pair can alter the reduced density matrix of the other particle it is entangled with. The density matrix captures not only the probabilities but also the ability of the particle to exhibit interference effects.)

    The speed of light limit is a fundamental property of special relativity, and if quantum theory violated this limit then it would be incompatible with special relativity. Yet, it is compatible with it and the two have been unified under the framework of quantum field theory.

    There are two main confusions as to why people falsely think there is anything nonlocal in quantum theory, stemming from Bell’s theorem and the EPR paradox. I tried to briefly summarize these two in this article here. But to even more briefly summarize…

    People falsely think Bell’s theorem proves there is “nonlocality” but it only proves there is nonlocality if you were to replace quantum theory with a hidden variable theory. It is important to stress that quantum theory is not a hidden variable theory and so there is nothing nonlocal about it and Bell’s theorem just is not applicable.

    The EPR paradox is more of a philosophical argument that equates eigenstates with the ontology of the system; this equation leads to the appearance of nonlocal action, but only because the assumption is a bad one. Relational quantum mechanics, for example, uses a different assumption about the relationship between the mathematics and the ontology of the system and does not run into this.


  • Depends upon what you mean by “consciousness.” A lot of the literature seems to use “consciousness” just to refer to physical reality as it exists from a particular perspective, for some reason. For example, one popular definition is “what it is like to be in a particular perspective.” The term “to be” refers to, well, being, which refers to, well, reality. So we are just talking about reality as it actually exists from a particular perspective, as opposed to mere description of reality from that perspective. (The description of a thing is always categorically different from the ontology of the thing.)

    I find it bizarre to call this “consciousness,” but words are words. You can define them however you wish. If we define “consciousness” in this sense, as many philosophers do, then it does not make logical sense to speak of your “consciousness” doing anything at all after you die, as your “consciousness” would just be defined as reality as it actually exists from your perspective. Perspectives always implicitly entail a physical object that is at the basis of that perspective, akin to the zero-point of a coordinate system, and in this case that object is you.

    If you cease to exist, then your perspective ceases to even be defined. The concept of “your perspective” would no longer even be meaningful. It would be kind of like if a navigator kept telling you to go “more north” until eventually you reach the north pole, and then they tell you to go “more north” yet again. You’d be confused, because “more north” does not even make sense anymore at the north pole. The term ceases to be meaningfully applicable. If consciousness is defined as being from a particular perspective (as many philosophers in the literature define it), then by logical necessity the term ceases to be meaningful after the object that is the basis of that perspective ceases to exist. It neither exists nor ceases to exist, but no longer is even well-defined.

    But, like I said, I’m not a fan of defining “consciousness” in this way, albeit it is popular to do so in the literature. My criticism of the “what it is like to be” definition is mainly that most people tend to associate “consciousness” with mammalian brains, yet the definition is so broad that there is no logical reason as to why it should not be applicable to even a single fundamental particle.


  • This problem presupposes metaphysical realism, so you have to be a metaphysical realist to take the problem seriously. Metaphysical realism is a particular kind of indirect realism whereby you posit that everything we observe is in some sense not real, sometimes likened to a kind of “illusion” created by the mammalian brain (I’ve also seen people describe it as an “internal simulation”), called “consciousness” or sometimes “subjective experience” with the adjective “subjective” used to make it clear it is being interpreted as something unique to conscious subjects and not ontologically real.

    If everything we observe is in some sense not reality, then “true” reality must by definition be independent of what we observe. If this is the case, then it opens up a whole bunch of confusing philosophical problems, as it would logically mean the entire universe is invisible/unobservable/nonexperiential, except in the precise configuration of matter in the human brain which somehow “gives rise to” this property of visibility/observability/experience. It seems difficult to explain this without just presupposing this property arbitrarily attaches itself to brains in a particular configuration, i.e. to treat it as strongly emergent, which is effectively just dualism, indeed the founder of the “hard problem of consciousness” is a self-described dualist.

    This philosophical problem does not exist in direct realist schools of philosophy, however, such as Jocelyn Benoist’s contextual realism, Carlo Rovelli’s weak realism, or in Alexander Bogdanov’s empiriomonism. It is solely a philosophical problem for metaphysical realists, because they begin by positing that there exists some fundamental gap between what we observe and “true” reality, then later have to figure out how to mend the gap. Direct realist philosophies never posit this gap in the first place and treat reality as precisely equivalent to what we observe it to be, so it simply does not posit the existence of “consciousness” and it would seem odd in a direct realist standpoint to even call experience “subjective.”

    The “hard problem” and the “mind-body problem” are the main reasons I consider myself a direct realist. I find metaphysical realism to contain a completely insoluble contradiction at its heart; I don’t think the problem even can be solved, because you cannot posit a fundamental gap and then mend the gap later without contradicting yourself. There has to be no gap from the get-go. I see these “problems” not as things to be “solved,” but as a proof-by-contradiction that metaphysical realism is incorrect. All the arguments against direct realism, on the other hand, are very weak, and the people who espouse them don’t seem to give them much thought.