#errorCorrection

The Silent Siege: Defending the Radio Spectrum in an Age of Noise

3,286 words, 17 minutes read time.

The electromagnetic spectrum is currently facing an unprecedented siege from commercial expansion, environmental noise pollution, and regulatory encroachment, threatening the viability of independent communication. This conflict involves a diverse cast of actors ranging from multinational telecommunications conglomerates and unsuspecting homeowners to a dedicated community of radio operators who stand as the last line of defense for this invisible public resource. While the general public remains largely unaware of the radio waves passing through them, a fierce battle for control of these frequencies is being waged in corporate boardrooms, legislative chambers, and the backyards of suburban neighborhoods. The stakes are considerably higher than mere hobbyist chatter; at risk is the ability to maintain decentralized, resilient communication infrastructures independent of the fragile commercial grid. As the demand for wireless data explodes and the noise floor rises, the preservation of the spectrum requires a concerted response from informed men willing to understand the physics, the policy, and the practical application of radio technology.

The Commercial Encroachment on Finite Resources

The most immediate and powerful threat to the radio spectrum comes from the insatiable commercial appetite for bandwidth. As modern society transitions into an era defined by the Internet of Things and 5G connectivity, corporate entities are aggressively lobbying for access to every available slice of the radio frequency pie. This creates a direct conflict with existing services, including the bands historically allocated for amateur and emergency use. The spectrum is a finite physical resource; unlike fiber optic networks, where more strands can always be laid, there is only one electromagnetic spectrum. When a frequency band is auctioned off to the highest bidder for billions of dollars, it is often lost to the public domain forever. This commoditization of the airwaves treats a natural phenomenon as a piece of real estate to be fenced off and monetized.

The pressure is particularly intense because the specific frequencies that are most desirable for long-range communication or high-penetration data signals are the very same frequencies that have been cultivated by amateur operators for decades. Telecommunications giants view these bands as underutilized assets waiting to be exploited for profit. The concept of “use it or lose it” has never been more relevant. If a community of capable operators does not actively occupy and defend these frequencies through demonstrated utility and public service, regulators face immense pressure to reallocate them to commercial interests. This reality turns every licensed operator into a stakeholder in a global resource management crisis. The defense against this encroachment is not just about complaining to regulators; it involves demonstrating the unique value of non-commercial spectrum access, particularly its role in disaster recovery when profit-driven networks fail.

The Rising Tide of the Noise Floor

While commercial reallocation attempts to steal the spectrum from above, a more insidious threat is rising from below: Radio Frequency Interference (RFI). This phenomenon is often referred to as the rising “noise floor.” In the past, turning on a radio receiver resulted in a quiet hiss of static, out of which a voice or signal would clearly emerge. Today, that quiet background is increasingly replaced by an angry roar of electronic smog. This pollution is generated by millions of poorly shielded consumer electronic devices. LED light bulbs, variable speed pool pumps, cheap switching power supplies, and solar panel inverters spew stray radio frequency energy into the environment. To a casual observer, these devices are harmless conveniences; to a radio operator, they are jammers that blind receivers and render communication impossible.

This environmental degradation of the electromagnetic spectrum creates a scenario where even if the frequencies are legally protected, they become practically useless. It is akin to owning a plot of land that has been flooded by toxic waste; you may hold the deed, but you cannot build on it. The physics of radio reception rely on the signal-to-noise ratio. As the noise floor rises, stronger and stronger signals are required to break through, effectively shrinking the range of communication systems. A handheld radio that could once talk to a station thirty miles away might now struggle to reach three miles across a noisy city. This threat is largely unregulated at the consumer level, as the enforcement of interference standards has lagged behind the proliferation of cheap electronics imported from manufacturers who cut corners on shielding.
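To see how directly the noise floor eats range, consider a minimal link-budget sketch in Python. It assumes free-space propagation and round illustrative numbers (a 5-watt handheld on 146 MHz); real paths are messier, but the decibel arithmetic is the same:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard textbook formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def snr_db(tx_dbm: float, distance_km: float, freq_mhz: float,
           noise_floor_dbm: float) -> float:
    """Signal-to-noise ratio: received power minus the noise floor."""
    return tx_dbm - fspl_db(distance_km, freq_mhz) - noise_floor_dbm

# A 5 W handheld (37 dBm) on 146 MHz: quiet rural site vs. noisy city
for noise_floor in (-120, -100):        # dBm in the receiver bandwidth
    for distance_km in (4.8, 48.0):     # roughly 3 and 30 miles
        print(f"noise {noise_floor} dBm at {distance_km:4.1f} km: "
              f"SNR {snr_db(37, distance_km, 146, noise_floor):5.1f} dB")
```

Under these assumptions, a 20 dB rise in the noise floor leaves the signal at three miles with exactly the signal-to-noise ratio it used to have at thirty; that is the shrinkage described above.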

Community Response and Technical Stewardship

The response to these threats has catalyzed a sophisticated movement within the radio community focused on stewardship and technical innovation. This is not a passive group; it consists of technically minded individuals who view the spectrum as a public trust. The primary weapon in this response is education and technical adaptation. Operators are developing new digital transmission modes designed specifically to function in high-noise environments. These modes use advanced signal processing and error correction to decode messages that are buried deep beneath the electronic smog, effectively reclaiming territory that was thought to be lost. This demonstrates a resilience and ingenuity that defines the spirit of the radio community. Rather than surrendering to the noise, they engineer their way through it.
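As a toy illustration of the principle, here is a minimal sketch of the classic Hamming(7,4) code in Python: three parity bits protect every four data bits, and any single flipped bit can be repaired. The modes actually used on the air rely on far stronger codes (FT8, for instance, uses LDPC), but the trade is the same: spend extra bits to buy back the ones the noise destroys:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword (layout: p1 p2 d1 p3 d2 d3 d4)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3     # 1-based position of the bad bit
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1            # flip it back
    return [c[2], c[4], c[5], c[6]]

message = [1, 0, 1, 1]
codeword = hamming74_encode(message)
codeword[4] ^= 1                                # noise corrupts a bit in transit
assert hamming74_decode(codeword) == message    # the decoder recovers it anyway
```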

Furthermore, the community response involves active monitoring and “fox hunting”—the practice of locating sources of interference. When a rogue signal or a malfunctioning device disrupts communications, operators use directional antennas and triangulation techniques to physically track down the source. This can lead to diplomatic engagement with a utility company to fix arcing power lines, or to helping a neighbor replace a noisy power supply. It is a form of neighborhood watch, but for the electromagnetic environment. This hands-on approach requires a deep understanding of wave propagation and electronics, skills that are honed through the pursuit of licensure and regular practice. It reinforces the idea that the spectrum is a shared backyard, and it is the responsibility of the residents to keep it clean.
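The geometry behind triangulation is simple enough to sketch. Assuming a flat local map with positions in kilometers and bearings in degrees clockwise from north (the stations and bearings below are invented for illustration), two bearing lines pin the source at their intersection:

```python
import math

def intersect_bearings(p1, brg1_deg, p2, brg2_deg):
    """Intersect two bearing lines on a flat map.

    Points are (east_km, north_km); bearings are degrees clockwise from north.
    """
    v1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    v2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    det = v2[0] * v1[1] - v2[1] * v1[0]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; take a reading from elsewhere")
    t1 = (dy * v2[0] - dx * v2[1]) / det       # distance along the first line
    return (p1[0] + t1 * v1[0], p1[1] + t1 * v1[1])

# Two hunters 4 km apart both take a bearing on the same noise source:
print(intersect_bearings((0, 0), 45.0, (4, 0), 315.0))   # -> (2.0, 2.0)
```

In practice every bearing carries error, so hunters take readings from three or more locations and converge on the small triangle where the lines nearly meet.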

The Regulatory Battlefield and Property Rights

Beyond the technical challenges, a significant battle is being fought on the regulatory front involving Homeowners Associations (HOAs) and private land covenants. These restrictions often prohibit the installation of external antennas, effectively locking millions of potential operators out of the spectrum. The “CC&Rs” (Covenants, Conditions, and Restrictions) that govern many modern housing developments prioritize aesthetic uniformity over functional resilience. This creates a paradox where a resident may legally hold a federal license to operate a radio station for emergency communications but is contractually banned from erecting the antenna necessary to use it. This represents a clash between private contract law and the public interest in maintaining a dispersed, capable civil defense network.

The community response to this has been a mix of legislative lobbying and stealth engineering. Legislation like the Amateur Radio Parity Act has been introduced in various forms to try to force a compromise, arguing that reasonable accommodation for antennas is a matter of national safety. On the ground, operators have become masters of stealth, deploying “invisible” antennas disguised as flagpoles, hidden in attics, or woven into landscaping. This ingenuity allows men to remain active and capable despite the restrictions, maintaining their readiness and their connection to the airwaves. It is a quiet act of rebellion, asserting the right to communicate and the duty to be prepared, regardless of arbitrary rules set by a housing board.

Strategic Implications of Spectrum Dominance

The importance of this subject extends into the realm of national security and strategic independence. In an era of cyber warfare and potential infrastructure attacks, reliance on centralized communication networks—like cell towers and the internet—is a vulnerability. These systems are fragile; they depend on the power grid, fiber backbones, and complex software stacks that can be hacked or jammed. The radio spectrum, accessed through decentralized amateur equipment, offers a fallback layer that is robust because of its simplicity and distribution. There is no central switch to turn off the ionosphere. There is no server farm to bomb that will silence point-to-point radio communication.

Understanding the spectrum allows an individual to step outside the “matrix” of commercial dependency. When the cellular networks are congested during a crisis, or when internet service providers suffer outages, the radio operator remains connected. This capability is not just about personal safety; it is a community asset. The response to spectrum threats is fundamentally about preserving this capability for the greater good. It aligns with a masculine ethos of protection and provision—ensuring that when the primary systems fail, a secondary, hardened system is ready to take over. This requires a workforce of licensed operators who are not just hobbyists, but disciplined communicators who understand the strategic value of the frequencies they guard.

Historical Context of Spectrum Allocation

To fully appreciate the current threats, one must understand the history of how the spectrum was tamed. In the early days of radio, the airwaves were a chaotic frontier, much like the Wild West. There were no lanes, no rules, and constant interference. The catalyst for order was the sinking of the Titanic in 1912. The tragedy highlighted the deadly consequences of unregulated communication, where distress calls could be missed or jammed by irrelevant chatter. This led to the Radio Act of 1912, which established the principle that the spectrum is a public resource to be managed by the government for the public good. It also introduced the licensing structure that exists today, creating a hierarchy of users and prioritizing safety of life.

Over the last century, this allocation has evolved into a complex map of frequency blocks assigned to military, aviation, maritime, commercial, and amateur users. The amateur allocation was not a gift; it was carved out by pioneers who proved that the “useless” shortwave frequencies could actually span the globe. Today’s operators are the inheritors of that legacy. They occupy the bands that their predecessors explored and charted. The threat of losing these bands is a threat to erase that history and the public’s right to access the airwaves directly. The historical perspective reinforces why the community is so defensive of its privileges; they know that once a frequency is surrendered to commercial interests, it is never returned.

The Human Element of the Network

Technology and policy are critical, but the most vital component of spectrum defense is the human operator. A radio is merely a collection of capacitors and transistors until it is powered by a human intent on communicating. The decline in the number of active, knowledgeable operators is perhaps the greatest threat of all. A spectrum that is silent is a spectrum that is vulnerable to reallocation. The community needs fresh blood—men who are willing to learn the code, understand the electronics, and join the network. This is not about nostalgia for old technology; it is about maintaining a vital skill set in the modern world.

The culture of the radio community is one of mentorship and rigor. It welcomes those who are willing to put in the work to understand the medium. When a man decides to study the spectrum, he is not just preparing for a test; he is learning the language of the universe. He learns how the sun’s cycles affect communication, how the terrain shapes a signal, and how to build systems that survive when others fail. This human element is the ultimate check against the threats of noise and encroachment. An educated, active populace is the strongest argument for the continued preservation of the amateur bands.

Technological Adaptation and the Future

Looking forward, the defense of the spectrum will rely heavily on software-defined radio (SDR) and cognitive radio technologies. These advancements allow radios to be smarter, sensing the environment and finding clear frequencies automatically. The community is at the forefront of experimenting with these tools. By pushing the boundaries of what is possible with limited power and bandwidth, amateur operators often innovate solutions that are later adopted by the commercial and military sectors. The fight against spectrum pollution is driving the development of better filters and more robust digital protocols.
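The sensing half of that idea fits in a few lines. Here is a minimal sketch, assuming a block of complex IQ samples captured from an SDR: divide the tuned bandwidth into channels and pick the quietest by average power. Real cognitive radios also average over time and respect band plans, but energy detection is the starting point:

```python
import numpy as np

def quietest_channel(iq: np.ndarray, sample_rate: float, n_chan: int = 64):
    """Energy detection: return (index, offset_hz) of the lowest-power channel."""
    power = np.abs(np.fft.fftshift(np.fft.fft(iq))) ** 2
    per_channel = power.reshape(n_chan, -1).mean(axis=1)   # mean power per slice
    idx = int(np.argmin(per_channel))
    offset_hz = (idx + 0.5) / n_chan * sample_rate - sample_rate / 2
    return idx, offset_hz

# Synthetic capture: low background noise plus one strong interferer
fs, n = 1_024_000, 65_536
t = np.arange(n) / fs
iq = 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))
iq += np.exp(2j * np.pi * 128_000 * t)        # carrier parked at +128 kHz
idx, offset_hz = quietest_channel(iq, fs)
print(f"quietest channel: #{idx} at {offset_hz / 1e3:+.1f} kHz from center")
```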

This technological evolution transforms the operator from a passive user into an active researcher. It makes the pursuit of a license an entry point into a world of high-tech experimentation. The threats facing the spectrum are forcing the community to up its game, resulting in a renaissance of technical learning. Men who engage with this subject find themselves gaining proficiency in computer networking, antenna physics, and signal processing—skills that are highly transferable and economically valuable in the modern marketplace. The defense of the hobby thus becomes a pathway to professional development and technical mastery.

The Role of Organized Advocacy

No individual can fight the telecommunications lobby or the tide of electronic noise alone. The response is coordinated through national and international bodies that represent the interests of the non-commercial user. These organizations act as the shield, employing legal experts and engineers to testify before government commissions and international bodies like the International Telecommunication Union (ITU). They monitor legislative proposals, file comments on rule-making proceedings, and alert the membership when immediate action is required.

Supporting these organizations is a key part of the community response. It involves a recognition that rights must be defended collectively. The effectiveness of this advocacy depends on the size and engagement of the membership. A large, active body of licensed operators commands respect in Washington and Geneva. It signals to regulators that this is a voting bloc and a skilled workforce that cannot be ignored. The political aspect of spectrum defense is dry and often bureaucratic, but it is the trench warfare that keeps the frequencies open for the operator to use.

Natural Threats and Solar Cycles

The spectrum is also subject to threats that are entirely natural and beyond human control. The sun, whose radiation sustains the ionosphere that long-range propagation depends on, goes through eleven-year cycles of activity. During the peak of these cycles, solar flares and coronal mass ejections can cause radio blackouts, wiping out communication across entire hemispheres. While this is not a “threat” in the sense of a malicious actor, it is a challenge that requires a deep understanding of space weather. The operator must know how to read the solar indices and adjust their strategies accordingly.
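For a concrete sense of what “reading the solar indices” means, here is a minimal sketch built on the two numbers operators check most often: the solar flux index and the planetary K-index. The thresholds below are common ham rules of thumb, not hard science:

```python
def band_outlook(solar_flux: float, k_index: int) -> str:
    """Rough HF outlook from two space-weather indices (rule-of-thumb only)."""
    if k_index >= 5:               # NOAA geomagnetic storm levels begin at Kp 5
        return "storm: expect degraded or blacked-out HF paths"
    if solar_flux >= 150:
        return "high flux: upper HF bands (15/12/10 m) likely open"
    if solar_flux >= 90:
        return "moderate flux: mid HF bands (20/17 m) are the safer bet"
    return "low flux: favor the low bands (40/80 m), especially at night"

print(band_outlook(solar_flux=165, k_index=2))   # active sun, quiet field
print(band_outlook(solar_flux=70, k_index=6))    # quiet sun, geomagnetic storm
```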

This connection to the cosmos adds a profound dimension to the spectrum. It reminds the operator that they are dealing with forces of nature. The community response to solar weather involves building networks of automated beacons that monitor propagation in real-time, providing data that is used not just by hams, but by scientific institutions. It turns the operator into a citizen scientist, contributing to our understanding of the sun-earth connection. This resilience in the face of natural variation is part of what makes radio operators so valuable during earthly disasters; they are accustomed to adapting to changing conditions.

The Economic Reality of Spectrum Auctions

It is impossible to discuss spectrum threats without addressing the sheer scale of the money involved. Governments view spectrum auctions as a painless way to raise revenue. Billions of dollars are exchanged for the exclusive rights to transmit on specific frequencies. This creates a David and Goliath dynamic. The amateur community cannot buy the spectrum; they can only argue for its value based on public service and educational merit. This is a difficult argument to make in a capitalist system that prioritizes immediate revenue over long-term resilience.

However, the economic argument is shifting. As infrastructure becomes more vulnerable to cyber-attacks, the “insurance policy” value of a trained volunteer radio corps is being reassessed. The cost of a total communications blackout during a hurricane or terrorist attack is astronomical. The community argues that the spectrum they occupy is a down payment on national safety. By maintaining these frequencies for public use, the government avoids the cost of building and maintaining a redundant emergency network of their own. It is a symbiotic relationship, but one that requires constant reminder and defense against the lure of quick auction cash.

Cybersecurity and the Radio Spectrum

The convergence of radio and computing has introduced cyber threats into the spectrum domain. Modern radios are often computers with antennas, and like any computer, they can be vulnerable. Malicious actors can exploit software vulnerabilities to jam networks, spoof signals, or inject false data. The “spectrum threat” now includes the possibility of hostile state actors using electronic warfare techniques to disrupt civil society.

The community response has been to embrace cybersecurity best practices. This includes verifying signal integrity, using digital signatures, and developing “air-gapped” systems that can operate without connection to the public internet. The modern operator must be part hacker, part engineer. This evolution appeals to men who are interested in information security and systems architecture. It frames the license not just as a permit to talk, but as a credential in the field of information assurance.
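Signing a transmission is one concrete example: it proves who sent a message and that nothing was altered, without encrypting the content, which matters on bands where obscuring the meaning of a transmission is prohibited. Below is a minimal sketch using Ed25519 from the third-party Python cryptography package; the callsign and message are invented:

```python
# pip install cryptography  (third-party package, assumed available)
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # the operator keeps this secret
public_key = private_key.public_key()        # published for anyone to verify with

message = b"N0CALL: shelter open at grid FN31pr, water and power available"
signature = private_key.sign(message)        # 64 bytes, sent alongside the text

for received in (message, message.replace(b"FN31pr", b"FN31xx")):
    try:
        public_key.verify(signature, received)   # raises if anything changed
        print("authentic:", received.decode())
    except InvalidSignature:
        print("REJECTED: message was altered in transit")
```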

The Imperative of Self-Reliance

Ultimately, the drive to understand and defend the spectrum is rooted in the imperative of self-reliance. In a world where systems are increasingly interconnected and interdependent, the failure of one component can lead to cascading collapse. The man who holds a radio license and understands the spectrum possesses a tool of independence. He is not reliant on a monthly subscription or a functioning cell tower to ensure the safety of his family or community.

This self-reliance is the core motivation that drives the community response. It is why they build their own antennas, why they fight the HOAs, and why they study for the exams. It is a refusal to be helpless. The spectrum is the medium through which this independence is exercised. Protecting it is protecting the ability to act when others are paralyzed by a loss of connectivity. It is a masculine pursuit of competence and readiness in an unpredictable world.

Conclusion: The Future of the Frequency

The future of the radio spectrum is far from guaranteed. It stands at a crossroads between total commercialization and a balanced model that preserves public access. The threats of noise, regulation, and encroachment are unrelenting. However, the response from the community has been equally persistent. Through technical innovation, political advocacy, and a commitment to service, the guardians of the airwaves are holding the line.

For the man looking from the outside, this struggle represents an opportunity. It is a chance to join a fraternity of capable individuals who are not content to be passive consumers of technology. By engaging with the subject, understanding the physics, and eventually stepping up to earn the credentials, one becomes part of the solution. The spectrum is a heritage and a responsibility. It requires vigilant defense to ensure that when the world goes silent, there is still a signal in the noise, clear and strong, ready to carry the message.

Call to Action

If this breakdown helped you think a little clearer about the threats out there, don’t just click away. Subscribe for more no-nonsense security insights, drop a comment with your thoughts or questions, or reach out if there’s a topic you want me to tackle next. Stay sharp out there.

D. Bryan King

Disclaimer:

The views and opinions expressed in this post are solely those of the author. The information provided is based on personal research, experience, and understanding of the subject matter at the time of writing. Readers should consult relevant experts or authorities for specific guidance related to their unique situations.

#5GExpansion #AirGappedSystems #Airwaves #AmateurRadio #AntennaParity #antennaTheory #BandwidthScarcity #CivilDefense #CognitiveRadio #CommercialEncroachment #CommunicationBlackout #CoronalMassEjections #cyberSecurity #DecentralizedNetworks #digitalModes #DigitalSignatures #DirectionFinding #DisasterRecovery #electromagneticSpectrum #ElectronicSmog #ElectronicWarfare #ElectronicsHobby #emergencyCommunications #errorCorrection #FCCRegulations #FoxHunting #FrequencyAllocation #frequencyCoordination #FrequencyGuard #FutureOfRadio #GridDown #hamRadio #HFBands #HOARestrictions #IndependentInfrastructure #InformationAssurance #InterferenceHunting #IonosphericSkip #ITUStandards #LicensedOperator #MensHobbies #MicrowaveFrequencies #MonitoringStations #NationalSecurity #NeighborhoodWatch #NetworkResilience #NoiseFloor #OffGridComms #Preparedness #PropertyRights #PublicResource #publicSafety #RadioAct #radioBlackout #RadioEngineering #RadioFrequencyInterference #RadioLicensing #RadioPhysics #radioReceiver #RadioSilence #radioSpectrum #ResilientSystems #RFI #SDRTechnology #SecureComms #SelfReliance #shortwaveRadio #signalProcessing #signalStrength #SignalToNoiseRatio #softwareDefinedRadio #SolarCycles #SpaceWeather #SpectrumAnalyzer #SpectrumAuctions #SpectrumDefense #SpectrumManagement #SpectrumThreats #StealthAntennas #STEMSkills #StrategicIndependence #TacticalRadio #TechnicalMastery #TechnicalStewardship #TelecommunicationsLobby #TitanicRadioHistory #transceiver #VHFUHF #VolunteerCorps #WavePropagation #WirelessPolicy #WirelessTelegraphy

Image: A radio operator adjusting high-tech equipment in a dimly lit room, with a holographic title overlay reading “The Silent Siege.”
Vincent 🌻🇪🇺 en 🌹☘️ (photovince)
2025-11-30

From this whole ‘affair’ I did learn that the Airbus software is normally designed to counter errors caused by cosmic radiation… and it was precisely that function that was buggy in the faulty software update.

A pity that that, of all things, was not mentioned in this article.

I would appreciate a more in-depth article on how that protection actually works - apparently it also depends on software, not just ECC memory…

BGDon 🇨🇦 🇺🇸 👨‍💻 (BrentD@techhub.social)
2025-11-05

Quantum Error Correction is a core linchpin for making Quantum systems stable and operational. Check out this post in the TechAptitude newsletter that outlines current quantum error correction mechanisms. techaptitude.substack.com/p/qu #Quantum #QuantumComputing #QEC #ErrorCorrection

Graphic: Quantum Error Correction Lattice

IBM has run a critical error-correction algorithm on off-the-shelf chips, pushing quantum computing closer to real-world use! jpmellojr.blogspot.com/2025/10
#QuantumComputing #IBM #AMD #ErrorCorrection

eicker.news ᳇ tech news (technews@eicker.news)
2025-10-25

#AMD’s stock rose nearly 8% after a report that IBM can use AMD’s #chips for #quantumcomputing #errorcorrection. This development is a milestone in #IBM’s path towards a large-scale, fault-tolerant #quantumcomputer by 2029. cnbc.com/2025/10/24/amd-stock- #tech #media #news

Scalable, Energy-Efficient Quantum Computing

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): The Good Men Project

Publication Date (yyyy/mm/dd): 2025/05/26

John Levy, Co-founder, CEO, and Chair of SEEQC, talks about quantum computing at SEEQC, a company developing scalable, energy-efficient digital quantum systems. Levy discusses SEEQC’s origins in Hypres and its evolution through IARPA’s C3 program. The company uses superconducting single flux quantum (SFQ) technology to achieve dramatic reductions in power consumption—up to nine orders of magnitude. Levy outlines partnerships with NVIDIA, error correction infrastructure, and SEEQC’s vision for contextual and heterogeneous computing that integrates CPUs, GPUs, and QPUs. With its SEEQC Orange platform and chip-based architecture, SEEQC aims to enable real-time quantum-classical processing and unlock the practical utility of quantum computing at scale.

Scott Douglas Jacobsen: Today, we are here with John Levy. He is the Co-founder, CEO, and Chair of SEEQC (“Seek”), a leading company developing scalable, energy-efficient quantum computers. He has over 35 years of experience at the intersection of technology and finance, previously serving as chairman of Hypres and sitting on the boards of goTenna and BioLite. He is a founding partner of L Capital Partners, where he has led investments in the technology sector and served on the boards of companies such as WiSpry, OnPATH Technologies, and HiGTek.  He earned an A.B. in Psychology and an MBA from Harvard Business School. Two things often come up in the news.

John Levy: By the way, our company’s name is SEEQC—as in, we have an aspiration; we are seeking to do something. It is an acronym.

Jacobsen: Two things about quantum computing often appear in the news—particularly scalability and energy efficiency. People often discuss computation as one barrier to competition, but another is energy. I recall Eric Schmidt, in a recent interview, suggesting that the U.S. may need to partner closely with Canada to access sufficient hydroelectric power for large-scale AI data centers. So these things pop up. Within that context—was that the modus operandi?

Levy: First, thanks for reaching out and doing this. Yes. So, first, SEEQC began as a spinout of Hypres, which had its roots in IBM’s superconducting electronics division. When we were operating as Hypres, around 2014, the Intelligence Advanced Research Projects Activity (IARPA) launched the Cryogenic Computing Complexity (C3) program to explore superconducting computing as a solution to the power and cooling challenges of exascale computing.

They realized that, over time, we would need nuclear power plants to run data centers—which is precisely what is happening. Astonishingly, they foresaw this. Their response was: Let us see if we can develop entirely new kinds of classical computers that are incredibly energy efficient.

That was the idea: exascale computing at orders of magnitude lower power because that is the trajectory we needed to be on.

So we started working. There was a program at IARPA called C3, and we partnered with IBM and Raytheon BBN to build superconducting logic and superconducting memory for energy-efficient classical computers that could scale to exascale. That was the core idea.

We finished that project in early 2017. Following this, we held a strategic planning session at Hypres to further develop the idea.

We realized it was a perfect fit because we were already operating in the superconducting domain (i.e., 4 kelvins and below), and quantum computers—at least those using superconducting qubits—needed to operate at temperatures in the millikelvins.

Our technology could power quantum computers, and we knew how to scale them. We had, and still have, a chip foundry to do it. The core idea is that CMOS chips—designed and manufactured by companies such as TSMC, Intel, AMD, and others—consume excessive power.

Power turns into heat, which must be dissipated. CMOS chips are too slow and prone to noise.

If we could substitute the circuits we were building—based on single flux quantum (SFQ) technology—where we are using Josephson junctions instead of transistors and niobium instead of copper, we could change everything.

We produced these circuits in an entirely different way, and that is how we realized we could build scalable, energy-efficient quantum computers. That is how SEEQC was born.

We spun out. It took us a couple of years, but we officially spun out in 2019. Since then, we have been focused on that.

Now, to make that real, let us consider Google’s November announcement about error correction and what their Willow quantum computer looked like. It was a fantastic piece of engineering—honestly, it could be in an art museum. It is beautiful.

However, here is the reality: every qubit needs five cables. Moreover, to power each qubit using room-temperature electronics, you need 2 to 5 watts per qubit.

Because of what we are doing—by building energy-efficient circuits—our circuits operate at three nanowatts. That is nine orders of magnitude—a billion times more energy efficient—not 10%, not 1%, not a fraction of a percent—a billion times more efficient.

Why? Because our circuits are operating at the same cryogenic temperatures as the qubits. CMOS cannot do this—putting too much power into CMOS would generate heat and destabilize the qubits.

So, this is a technology that we have developed, patented, and brought into practice. Moreover, we are running quantum systems today based on this core technology.

We announced SEEQC Orange, which is now operational. It is digitally controlled using SFQ logic and digitally multiplexed to reduce cabling requirements significantly.

Now, think about what that means for data centers.

It is one thing to demonstrate this on a small circuit. However, imagine you want to build a quantum data center with a million qubits.

We studied the energy budget for building a 100,000-qubit system using conventional methods. The estimate ranged from 10 to 40 megawatts of power.

Using our approach, we estimate just 63 kilowatts of power, a massive reduction.
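The orders-of-magnitude claims are easy to sanity-check. A quick sketch using only the figures quoted in this interview (illustrative round numbers, not official specifications):

```python
import math

# Per-qubit control power, as quoted above
watts_per_qubit_conventional = 2.0    # low end of "2 to 5 watts per qubit"
watts_per_qubit_sfq = 3e-9            # "three nanowatts" per qubit

ratio = watts_per_qubit_conventional / watts_per_qubit_sfq
print(f"per-qubit gain: ~10^{round(math.log10(ratio))}")  # ~10^9, a billion-fold

# System-level estimates for a 100,000-qubit machine, which include
# cryogenics and support infrastructure, not just the control electronics:
conventional_mw, sfq_kw = (10, 40), 63
print(f"system-level reduction: {conventional_mw[0] * 1e6 / (sfq_kw * 1e3):.0f}x "
      f"to {conventional_mw[1] * 1e6 / (sfq_kw * 1e3):.0f}x")
```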

Jacobsen: So you have provided a real-world demonstration of scaling laws that show gains of nine orders of magnitude in energy efficiency. Sam Altman has said there are no foreseeable limits to the scaling laws—well, this seems like one of those parameters.

Levy: But it is critical to realize that this is only one of many technical hurdles—serious, nuanced engineering problems—that must be overcome. Moreover, you cannot just solve one. You have to solve all of them to build a utility-scale quantum computer.

Cabling is a huge issue, too, right? Do you have five cables per qubit? Or even three? Or two? Not if you are scaling to a million qubits. It is not feasible. It would be prohibitively expensive. The mean time between failures would be extremely short. The physical complexity would be overwhelming—encompassing space, thermal load, reliability, and other factors.

So you must solve that problem.

How do you do it?

You package chips in a way that allows them to communicate directly—essentially forming a multi-chip module. Think of it like a sandwich: one chip stacked directly on another.

That gives you direct connectivity, reduced latency, and increased speed.

So that is key for things like error correction. Do you want to do error correction? You have to have low latency. You have to solve that. I am telling you—it is like whack-a-mole. However, it is whack-a-mole at a level you cannot even imagine.

Jacobsen: You are also partnering with a major company, NVIDIA. Jensen Huang is known to be a sober and mature individual. Unlike some others, he does not speak off the cuff too often. So, what does this partnership mean for SEEQC?

Levy: Let me explain how it happened—it is a great story. I love this.

So, about three years ago, we were at one of those many quantum computing events. I met Tim Costa and Sam Stanwyck, who run Heterogeneous Compute and Quantum Computing at NVIDIA, respectively.

I often said, “Hey, let us get together—I will share our latest results.” So, I showed them what we were working on.

By the way, I have not mentioned this yet: our chips are fully digital.

Now remember—quantum computing exists in the analog space. People typically control and read out quantum computers using microwave pulses—that is, analog RF signals. Everything is in the analog domain.

However, we are doing it in the digital domain.

So I told them, “Imagine a GPU and a CPU connected to a QPU—chip-to-chip—with the same latency you have with NVLink between CPU and GPU.”

Ultimately, we want it to operate at such low latency and high speed that we can share memory across the QPU, CPU, and GPU. So, instead of having two separate systems connected by Ethernet or PCIe, we would have chip-to-chip-to-chip communication.

Just as NVIDIA’s CPU-GPU superchip is connected internally via NVLink, imagine a system where the QPU, CPU, and GPU operate together as one unit inside a single computing node.

Now, think about the possibilities we have opened up. We are introducing the concept of trustworthy heterogeneous computing. You can combine a quantum algorithm with a classical algorithm or an AI learning model. We are building the infrastructure to allow that.

Moreover, they said, “Yes.” 

Yes, they were in. That is what we have been building together.

Our recent announcement marked the first time we had demonstrated it. Now, we are focused on two main goals:

  1. Reducing latency—getting it down below a microsecond, ideally into the hundreds of nanoseconds, to make it viable for error correction.
  2. Making it bandwidth-efficient—so that it does not require terabytes of data transfer but rather gigabytes, which is much more feasible.

Since we are working in the digital domain, we can optimize for both low latency and high throughput. That is the winning combination.

Jacobsen: I talked to a friend the other day about all this. One thing that came up was that we have GPUs, CPUs, and QPUs, but you are building an entirely new architecture. What you explained to me feels like contextual computing (Contextual Compute), where the system optimizes computation dynamically depending on what is needed at the time.

Levy: You could call it that.

You have made it once you build the software layer on top of this architecture.

That is what we are doing. We are building tools—new tools, such as a quantum computer.

However, even a QPU—let us be careful how we define it—is more than just a “quantum processor.” CPUs and GPUs are more than just arrays of transistors. They are architectures, ecosystems, and toolchains. We are doing the same thing for quantum computing.

They include high-level functions such as arithmetic units, cache memory, power management, and I/O subsystems, among others. When people talk about QPUs, they often mean just an array of connected qubits. However, that picture does not encompass the full system-level functionality that constitutes a complete processor from a systems engineering perspective. That is what we are doing, and that is an important distinction.

So, when you connect an integrated QPU—or something architecturally complete—and connect it, it becomes contextual computing. I love that idea. Jensen Huang, at NVIDIA, thinks of it as accelerated computing. Moreover, rightly so—think about it: he was moving from CPUs, which are fundamentally serial processors, to parallel GPUs. He would explain this better than I could. However, that is the idea—acceleration. 

Here, though, we are going beyond acceleration. We are changing the model entirely. Just as quantum computing represents a metaphorical leap from classical digital computing, what we are building represents a similar leap. It is not just about speed anymore—it is about solving NP-hard problems, doing so in the quantum domain, and coordinating those results with the classical domain when needed. That is an extraordinary shift. It is the kind of thing dreams were made of in the golden age of science fiction. It is the reason I do this.

Jacobsen: What did Isaac Asimov write about? This would be akin to laying the foundations for a positronic brain. Right? There is a certain resilience in the human brain, even after injury or insult, and we often assume the brain is our working model of general intelligence—though “general” always needs a frame of reference. Still, what we are doing could be viewed as forming the synthetic equivalent of such resilience and versatility. Moreover, thinkers like David Deutsch have frameworks for describing systems like this—universal constructors. Shall we go there? We are not going down the panpsychism path. We will not claim that everything is conscious. Honestly, that is the kind of conceptual rabbit hole best avoided. The one who more or less caused all the panpsychism noise invented a problem, then offered no solution. Right.

Even when it comes to evolved systems like the human brain, which shows tremendous versatility and operates with high efficiency over decades, what we are building now forms the foundational architecture of something analogous. Moreover, that is incredibly exciting.

So, what do you see as the first immediate application—even before you get to those higher-level functions?

Levy: Funny enough, the first application we are considering is entirely internal to the quantum computer—error correction. So imagine how we manage error correction now: trying to do everything using FPGAs or cryo-CMOS. Instead, imagine a different structure where you do a portion of the processing on-chip at the millikelvin level using high-speed, low-power superconducting logic. That would handle the quick, easy stuff. Then, that chip is connected via superconducting ribbon cable to a digital pre-decoder operating at 100 millikelvins or even a single Kelvin. That would do the next layer of processing. If the error cannot be resolved at those two levels, the system hands it off to a GPU or classical processor that can take a global view of the data and run more complex algorithms.

The idea is to build a chip-based infrastructure for quantum error correction—something versatile and adaptable that software developers and quantum scientists can work with. That way, anyone with a new algorithm or software approach to error correction can plug it into this infrastructure. They do not have to reinvent the hardware stack. It gives them a toolkit. Our first instantiation of this heterogeneous computing system will most likely be focused on—error correction. Once we unlock quantum error correction effectively, we also unlock the real capabilities of this new form of contextual—or, as you said—contextualized computing.
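To make the tiered hand-off concrete, here is a minimal sketch of the dispatch logic in Python. The stage boundaries and the weight-based rule are invented placeholders; deciding what each tier can actually resolve is precisely the engineering problem Levy describes:

```python
def route_syndrome(syndrome: list[int]) -> str:
    """Send an error syndrome to the cheapest decoding tier that can handle it."""
    weight = sum(syndrome)               # number of parity checks that fired

    if weight == 0:                      # Tier 1: on-chip SFQ logic, millikelvin
        return "clean cycle: resolved at the millikelvin stage"
    if weight <= 2:                      # Tier 2: digital pre-decoder, ~100 mK to 1 K
        return "light error: corrected by the cryogenic pre-decoder"
    # Tier 3: room-temperature GPU/CPU with a global view of the data
    return "complex error: escalated to the classical decoder"

for s in ([0] * 8, [1, 0, 0, 1, 0, 0, 0, 0], [1, 1, 1, 0, 1, 1, 0, 0]):
    print(route_syndrome(s))
```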

Jacobsen: Where do you see the unknowns in developing this infrastructure, especially after the hardware layer, once we start layering algorithms on top of it?

Levy: It is funny—everyone asks a slightly different version of this question. Moreover, it is a good one because when I say, “Hey, we are building a new architecture,” or at least extending the existing one and bringing together two entirely different computing domains, it naturally begs the question: for what purpose? Where is this going? How is this going to play out?

Moreover, I will give you the same answer I gave Jensen at GCC. Imagine someone takes you down into the basement of the University of Pennsylvania in 1946 and shows you the ENIAC. You look at it and ask, “What is this good for?” Moreover, the answer might be, “Well, it is good for arithmetic. It is a super-calculator. I can crunch enormous volumes of numbers.” That is all very impressive—for that time. However, it is not the same as imagining that, one day, you would have a device in your pocket that could stream every movie and song ever created or help drive a car that picks an optimized route, pays for gas via a chip, and texts your friend that you are picking them up—all in real-time. No one imagined that in 1946.

Similarly, we are developing these tools and infrastructure without necessarily knowing the full extent to which they will be utilized. I mentioned one example earlier—error correction. However, broadly, we are trying to build a computational capability that can be released into the world, allowing others to discover what it wants to become. Louis Kahn, the architect, used to say things like, “What does a brick want to be?”—as if his materials had ambitions of their own. His goal was to understand the brick deeply and let it express itself.

That is what we are doing. We are developing these technologies, engineering them with precision, and putting them in the hands of the Louis Kahns of the world to figure out what they should become. 

Jacobsen: It is like Michelangelo saying that David was always in the stone—he just had to carve him out.

Levy: Right. By the way, there is an excellent book I have been reading called The Rigor of Angels. Have you read it?

Jacobsen: No.

Levy: It is about Kant, Borges, and Heisenberg. Moreover, a recurring theme of infinity and unknowability pervades philosophy, literature, and physics. That is the thinking we need now—cross-disciplinary, multidisciplinary, with an open head and heart. That is how we determine what we want to express through this technology. Moreover, it is going somewhere—undeniably. 

Jacobsen: I am reminded of that famous Michio Kaku story about the U.S. attempting to build a particle collider three times the size of CERN’s in Geneva. They dug a massive hole—and spent a billion dollars doing it. Moreover, when someone asked, “Are you going to find God with this machine?” they said, “No. We are going to find the Higgs boson.” Then Congress promptly spent another billion to fill the hole back in.

That is the tension—people want immediate answers to fundamental work that will pay off in decades.

Levy: However, all I can say is this: we have reached the point where, in some integrated systems and breakout circuits, we have built all the core elements of a quantum computer—on a single chip, digitally, operating at temperatures in the millikelvins and ultra-high efficiency. As I mentioned earlier, by the end of next year, we will have complete core quantum computing functionality on-chip digitally.

As we refine this, we will enhance our connectivity to GPUs and CPUs and continue to expand our infrastructure. Some of that work is already happening at the National Quantum Computing Centre in the UK, where we expect the next generation of our contextual computing systems to emerge.

I like contextual computing. It is a good idea. I might use it.

Jacobsen: Because, conceptually, for me, it is taking the physics of this new infrastructure that you have built—and integrating that with a new stack of algorithms. Whether they are layered, modular, or stacked, the point is that they become bright and aware in a way that allows them to say, “I do not need to use this for that—I will use that for this.” It becomes efficient in a fundamentally new way.

Levy: Yes. Look, the issue, of course, is that we need to scale.

We are currently operating at a relatively small scale because you need to scale up before scaling out. Moreover, that is precisely what we are focused on—scaling up. We are building the foundational elements to scale out once we have all the core functionality integrated into a single chip.

Moreover, that is when this starts to come alive. That is when it becomes real at a systems level. At the very least, we are headed in the right direction.

Moreover, as I said earlier—error correction will likely be the first serious focus, as pick-and-shovel as that might sound. However, it is the groundwork we need to lay for everything else to follow.

Jacobsen: John, thank you so much for your time today. I appreciate your expertise.

Levy: Yes. No—it was great to meet you.

Jacobsen: Great to meet you, too. This was helpful.

Levy: Excellent.


#ContextualComputing #EnergyEfficiency #ErrorCorrection #QuantumScalability #SuperconductingLogic

2025-06-02

Snippet of a publicly posted #BaltimoreCounty permit notice showing the street numbering of suites and circles. #ErrorCorrection
#GIS

Edit: yes, 710. #QGis

Public sign: Property Information
Property Address: 710 Concourse Circle, Suites 100 through 103
Middle River, Maryland

QGis screen of property lines with red circles in almost a circle around 710. Yellow and black layer.
2025-05-27

Quantum Computing in 2025: Real Progress, Real Impact

Quantum computing is moving from theory to practice in 2025, with companies like NTT Docomo, Ford Otosan, and Japan Tobacco seeing real benefits. Hybrid quantum-classical systems (e.g., Amazon + NVIDIA, IonQ) are solving complex problems in chemistry, logistics, and AI.

#QuantumComputing #Tech2025 #HybridSystems #QuantumSecurity #ErrorCorrection #NextGenChips #TECHi

Read Full Article Here :- techi.com/latest-developments-

2025-02-01

Today I discovered a useful terminal app, "The Fuck". 😁

github.com/nvbn/thefuck

fuck --yeah

#terminal #cli #errorcorrection #app #Python #command #shell #thefuck

John Vaccaro (johniac)
2025-01-17
2024-12-23

AI and quantum computing rely heavily on error correction, but recent findings show some "magical" error correction methods may be fundamentally inefficient. This raises questions about how we build resilient systems for the future. Can we overcome these limitations, or must we rethink our approach entirely? #AI #QuantumComputing #ErrorCorrection
quantamagazine.org/magical-err

Keiran Rowell (keiran_rowell)
2024-12-15

youtube.com/watch?v=Xe-83tBcxhs

Computer codes have nothing on the repair ballet that keeps your genetic information consistent across all your cells under constant chemical and energetic stress.

This entry in Drew Berry’s animations shows with crystal clarity how a single nick sets off a controlled series of helix unwinding, bit complement checking, and formation of a … during repair.

Do yourself a favour and give it the few minutes watch time 🧬

Technoholic.me (technoholic)
2024-12-12

3/5 A key breakthrough is Google's chip's error correction capability, minimizing error rates and paving the way for scaling quantum systems. This is a game-changer for tech advancements!

2024-12-09

Quantum Computers Cross Critical Error Threshold | Quanta Magazine

By adding more physical qubits, they improved the resilience of logical qubits, crossing a critical error threshold. This advancement brings us closer to practical quantum computers, capable of performing complex calculations with high accuracy.

quantamagazine.org/quantum-com

#QuantumComputing #ErrorCorrection #GoogleAI #Science #Computing #AI #QuantumPhysics

2024-09-12

A new quantum computer breaks Google's quantum supremacy record by 100-fold

#Quantinuum's new 56-qubit H2-1 quantum computer has surpassed Google's #Sycamore by achieving a 100-fold improvement in error correction performance.

This was achieved by using the Random Circuit Sampling algorithm.

#QuantumSupremacy #ErrorCorrection #QuantumComputing #Computing

techspot.com/news/103802-new-q

Legends and Lotties (Lottie@beige.party)
2024-09-06

🚀✨ Is **prompt engineering** making a comeback? With the launch of **Reflection 70B**, a powerful open-source AI using "Reflection-Tuning," it's looking like **prompts** are back in the spotlight! 🧠💡 This model can self-correct mistakes in real-time, breaking down complex reasoning and reducing those pesky hallucinations that plague other models. 🔍💭

🔥 Imagine crafting prompts that guide AI not only to generate responses but also to **analyze** and **improve** them! The future of AI seems to be all about **precision** and **accountability**, and prompts are the key to unlocking it. 🔑💬

Could this be the new era of AI interaction? 🤔 With GPT-4, Claude, and now **Reflection 70B** leading the charge, it's time to ask: **Is prompt engineering back, baby?** 💥

#AI #Tech #Reflection70B #PromptEngineering #MachineLearning #OpenSource #ErrorCorrection #FutureOfAI #AIRevolution #SelfCorrectingAI
