Papers

Universal Parity Quantum Computing


Designing efficient quantum computers and quantum algorithms is a current grand challenge in science and engineering.

The fundamental rules of quantum mechanics that make quantum computing possible also impose strict restrictions. These restrictions are a substantial obstacle in the current quest to build a universal quantum computer, a machine capable of performing arbitrary quantum operations.

One of these restrictions is the no-cloning theorem: in contrast to classical bits, quantum bits cannot be copied, only exchanged or swapped. As a consequence, quantum computers cannot follow the von Neumann architecture with its separation of memory and processing unit. Because the quantum CPU serves as memory and processing unit at the same time, all quantum bits on the chip need to be connected. In current standard approaches to gate-based quantum computing, these potentially long-range interactions are implemented either as physical interactions, which limits scalability, or by moving quantum information across the chip via SWAP sequences, which incurs a large gate overhead.
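To get a feel for that gate overhead, here is a minimal sketch (a hypothetical helper, not from the paper) assuming a linear nearest-neighbor chain, the standard decomposition of one SWAP into three CNOTs, and a naive strategy that routes a qubit next to its partner and back:

```python
def cnots_for_distant_cnot(distance):
    """Rough CNOT count for a CNOT between qubits `distance` sites apart
    on a linear chain (assumption: route there and back via SWAPs).

    Each SWAP decomposes into 3 CNOTs; moving a qubit past
    `distance - 1` intermediate sites and back costs
    2 * (distance - 1) SWAPs, plus 1 CNOT for the gate itself.
    """
    return 6 * (distance - 1) + 1

# Adjacent qubits need a single CNOT; distant pairs quickly become expensive.
print(cnots_for_distant_cnot(1))   # → 1
print(cnots_for_distant_cnot(10))  # → 55
```

The linear growth with distance (and real devices compile many such gates per circuit) is what makes SWAP-based routing costly at scale.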

A group of physicists on the ParityQC team (Michael Fellner, Anette Messinger, Kilian Ender and Wolfgang Lechner) has developed a novel approach to universal quantum computing based on the ParityQC (LHZ) architecture. In a paper recently published on arXiv, they propose a universal gate set for quantum computing, with all-to-all connectivity and intrinsic robustness to bit-flip errors, based on the parity encoding.

In this new paradigm, each physical qubit represents the parity of multiple logical qubits. The paper outlines a transformation that implements logical operations through single-qubit physical gates and short sequences of physical gates, eliminating the need for long-range interactions and thus for SWAP gates. The redundancy of the encoding also provides intrinsic fault tolerance. The paper further shows that it is possible to choose and switch between different variants of the parity mapping, containing subsets of parity qubits tailored to the algorithmic requirements, which allows for a further reduction of computational resources.
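The classical-bit skeleton of the encoding can be illustrated in a few lines. In the LHZ-style mapping, a physical qubit can be assigned to each pair of logical qubits and stores their parity; the following sketch (hypothetical helper names, classical bits only, no quantum dynamics) shows the encoding and the triangle consistency checks that make the redundancy useful for error detection:

```python
from itertools import combinations

def parity_encode(logical_bits):
    """One physical parity bit for each pair of logical bits (LHZ-style):
    physical (i, j) stores logical_bits[i] XOR logical_bits[j]."""
    return {(i, j): logical_bits[i] ^ logical_bits[j]
            for i, j in combinations(range(len(logical_bits)), 2)}

def cycle_consistent(parity, i, j, k):
    """Parities around any closed triangle must XOR to zero;
    a violation signals a physical bit-flip error."""
    return parity[(i, j)] ^ parity[(j, k)] ^ parity[(i, k)] == 0

physical = parity_encode([1, 0, 1])
print(physical)                          # → {(0, 1): 1, (0, 2): 0, (1, 2): 1}
print(cycle_consistent(physical, 0, 1, 2))  # → True

physical[(0, 1)] ^= 1                    # simulate a single bit-flip error
print(cycle_consistent(physical, 0, 1, 2))  # → False: error detected
```

Note the redundancy: a logical bit-flip on qubit k flips every parity bit involving k, whereas a single physical bit-flip breaks the triangle constraints and can be detected, which is the intuition behind the intrinsic robustness mentioned above.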

Read the paper on arXiv.
