
1. Introduction

A classical computer is an implementation of a (sub-)Turing machine bound by the laws of classical physics. As such, certain computational problems are either extremely difficult to solve or are practically intractable with the available resources. To tackle these problems, a more powerful computational model has been devised: the quantum computer [1]. Quantum computation is governed by the laws of quantum mechanics and can outperform classical computation in terms of complexity. Its power derives from quantum parallelism, which is achieved through quantum superposition [2] and entanglement [3]. Computability itself is unchanged, so uncomputable problems remain unsolvable, but the exponential speed-up means that problems previously considered infeasible (such as factorising large integers) become solvable in polynomial time [4].
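The exponential resource behind quantum parallelism is the state space: describing an n-qubit register classically requires tracking 2^n basis states, all of which a superposition can occupy at once. A minimal sketch of this growth, in plain Python with no quantum libraries (the function name is illustrative, not from any cited work):

```python
from itertools import product

def basis_states(n_qubits: int) -> list[str]:
    """Enumerate the classical basis states an n-qubit register
    can superpose over; their count grows as 2**n_qubits."""
    return ["".join(bits) for bits in product("01", repeat=n_qubits)]

for n in (1, 2, 3):
    states = basis_states(n)
    print(n, len(states), states)
```

A register of just 50 qubits already spans over 10^15 basis states, which is why simulating quantum computation classically quickly becomes infeasible.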

Quantum computation is still in its infancy, and much effort is spent on engineering problems such as fault-tolerant quantum error correction codes and scalable quantum bit (qubit) implementations and architectures. Unlike a classical computer, where a bit is represented by voltage levels or magnetic fields, qubits are implemented with physical objects such as atoms or superconducting devices. The intrinsic properties (e.g. nuclear spin) of these objects provide the necessary basis for quantum information [5]. However, because of these same properties, quantum information is fragile: it interacts with its environment, destroying the information in the process. This is known as decoherence [6].

Computation can only be carried out on information held in a coherent state; hence a variety of qubit implementation designs that maximise coherence and data fidelity have been proposed. Manipulating qubits with logic gates is fault-prone, so the most common operation in a quantum machine is quantum error correction (QEC) [7,8], which must be carried out after every gate operation. This requires large numbers of auxiliary qubits (ancillae). Further fault tolerance is achieved by applying the error correction codes recursively, in levels [9], which increases the overhead exponentially while reducing the fault probability. Under these schemes, a single logical qubit is represented by multiple physical qubits: at two levels of recursion, one logical qubit requires 49 [10] or 81 [11] physical qubits, depending on the error correction code used. The number of physical qubits required for the interesting integer factorisations described in [4] would therefore run into the thousands. This is a problem because most manufacturing investment has gone into silicon-based devices, while technologies such as ion traps [12,13] are not as well developed. Hence, initial proposals for quantum architectures, such as the Quantum Logic Array (QLA) [14], are space-inefficient and somewhat impractical. This review attempts to give an outline of the...
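The overhead figures above follow from the arithmetic of concatenated coding: a code that encodes one logical qubit into n physical qubits, applied recursively for k levels, uses n^k physical qubits per logical qubit. A minimal sketch (assuming the two-level figures cited correspond to a 7-qubit code, as in the Steane [[7,1,3]] code, and a 9-qubit code, as in the Shor code):

```python
def physical_qubits(n_per_level: int, levels: int) -> int:
    """Physical qubits needed to encode one logical qubit under a
    concatenated code using n_per_level qubits at each level."""
    return n_per_level ** levels

# Two levels of recursion, matching the cited figures:
print(physical_qubits(7, 2))  # 7-qubit code -> 49
print(physical_qubits(9, 2))  # 9-qubit code -> 81
```

At three levels the counts already reach 343 and 729 per logical qubit, which illustrates why a factoring machine with thousands of logical qubits implies millions of physical ones.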
