Dynamic Weight Decoding

In some scenarios, it is useful to adjust the weights of the decoding graph when additional information about the prior error distribution becomes available. Examples include erasures, leakage detection, and soft decoding information, all of which can significantly improve decoder accuracy when properly utilized.

Therefore, it is essential that a decoder support dynamic weight updates efficiently, enabling real-time reconfiguration of the decoding graph during execution. This notebook demonstrates how to construct a general quantum LDPC (qLDPC) decoder that accepts dynamic weights, and presents two approaches for modifying these weights efficiently at runtime.

Decoder Construction and Visualization

A decoder can be created from a decoding graph specified with the following parameters:

  • vertex_num: the number of vertices in the decoding graph (corresponding to detectors in the detector graph).
  • weighted_edges: a list of hyperedges, each specified by a vertex list and a weight, of the form ([vertex_1, vertex_2, vertex_3, ...], weight). Each hyperedge describes a physical error source that flips the measurement outcomes of the connected detectors.

The decoder receives a syndrome, represented as a set of defect vertices, and returns a set of edges (predicted physical errors) that reproduces the observed syndrome. Ideally, this correction avoids a logical error with high probability.
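The matching condition above can be sketched in plain Python (independent of `mwpf`, using a hypothetical helper): a correction reproduces the syndrome exactly when the symmetric difference of its edges' vertex sets equals the set of defect vertices, i.e., each defect vertex is flipped an odd number of times and every other vertex an even number of times.

```python
# Sketch: check whether a set of edges reproduces a given syndrome.
# "edges" is a list of vertex lists (the hyperedges), "correction" a list of
# edge indices, and "defects" the set of defect vertices.

def matches_syndrome(edges, correction, defects):
    flipped = set()
    for edge_index in correction:
        # symmetric difference accumulates the flip parity of each vertex
        flipped ^= set(edges[edge_index])
    return flipped == set(defects)

# the same hypergraph constructed with mwpf below
edges = [[0], [0, 1], [1, 2], [2], [0, 1, 2]]

print(matches_syndrome(edges, [2], {1, 2}))     # True: edge 2 flips vertices 1 and 2
print(matches_syndrome(edges, [0, 4], {1, 2}))  # True: vertex 0 is flipped twice, canceling out
print(matches_syndrome(edges, [0], {1, 2}))     # False: flips vertex 0 only
```

Several corrections may match the same syndrome; the decoder's job is to return one of minimum total weight.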

In [1]:
import mwpf
import numpy as np

def weight_of(p: float) -> float:
    return np.log((1-p) / p)

vertex_num = 3
weighted_edges = [
    mwpf.HyperEdge([0], weight_of(0.01)),
    mwpf.HyperEdge([0, 1], weight_of(0.01)),
    mwpf.HyperEdge([1, 2], weight_of(0.01)),
    mwpf.HyperEdge([2], weight_of(0.01)),
    mwpf.HyperEdge([0, 1, 2], weight_of(0.01)),
]
initializer = mwpf.SolverInitializer(vertex_num, weighted_edges)

# describe the syndrome by a set of vertices
syndrome = mwpf.SyndromePattern([1, 2])

solver = mwpf.Solver(initializer)
solver.solve(syndrome)
correction, bound = solver.subgraph_range()
# correction = solver.subgraph()  # in case you're not interested in the bound
if bound.upper == bound.lower:
    print(f"the correction {correction} is proven to be the most-likely error 🔥")
    
list(correction)  # of course you can iterate over the edge indices
the correction Subgraph([2]) is proven to be the most-likely error 🔥
Out[1]:
[2]

For debugging purposes, the decoding graph can also be visualized using an embedded interactive visualizer. This visualizer requires specifying 3D positions for each vertex. It is conventional to lay out qubits in a 2D plane, using the third dimension to represent the detector’s time step, although this choice is flexible.

The visualizer supports step-by-step navigation through the decoding process using the left/right arrow keys. It is implemented entirely on the frontend and embedded directly within the Jupyter notebook for offline access. If the visualizer does not appear automatically, you can force embedding by running the following command:

In [2]:
mwpf.Visualizer.embed()
MWPF visualization library embedded (451kB)
In [3]:
positions = [
    # the convention is (i, j): qubit coordinates, t: time step
    # the positions are automatically centered
    mwpf.VisualizePosition(i=1, j=0, t=0),
    mwpf.VisualizePosition(i=0, j=1, t=0),
    mwpf.VisualizePosition(i=1, j=2, t=0),
]

visualizer = mwpf.Visualizer(positions=positions)

solver.clear()  # must clear the decoder before using it to solve a new syndrome!
solver.solve(syndrome, visualizer=visualizer)
correction, bound = solver.subgraph_range(visualizer=visualizer)
visualizer.show()

In the visualization, you can hover and click on each vertex or edge to get more information.

  • Red vertices represent defect vertices, corresponding to syndrome measurements that deviate from the expected value.
  • White vertices are normal vertices, whose detectors either observed no error or were flipped an even number of times.
  • Blue edges indicate the correction $\mathcal{E} \subseteq E$: a subset of physical error events inferred by the decoder that would produce the observed syndrome.
  • Degree-1 edges (edges connected to a single vertex) are shown as circles around the vertex.
  • Degree-2 edges are visualized as conventional edges between two vertices.
  • Higher-degree edges (hyperedges) are visualized using a Tanner graph representation, where connections between multiple vertices are shown, but the intermediary edge nodes are omitted to prevent confusion with detector vertices.

This visual representation allows for intuitive understanding of the decoder’s output and its correspondence to the underlying error structure.

Custom Dynamic Weights

The most general way to update weights dynamically is by explicitly overwriting the weights of specific edges during decoding. This allows the user to supply not only the syndrome but also a subset of edges whose weights should be modified before decoding.

The example below demonstrates a case where changing edge weights results in a different decoded outcome.
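To see why overriding weights changes the outcome, the minimum-weight search can be sketched by brute force in plain Python (independent of `mwpf`; the helper name is hypothetical). With uniform weights the single edge {1, 2} is cheapest, but after overriding, the combination of the two degree-1-and-3 edges becomes lighter.

```python
from itertools import combinations

# the same hypergraph and syndrome as in the mwpf cells
edges = [[0], [0, 1], [1, 2], [2], [0, 1, 2]]
defects = {1, 2}

def min_weight_correction(weights):
    """Brute-force the minimum-weight edge subset reproducing the syndrome."""
    best_subset, best_total = None, None
    for r in range(len(edges) + 1):
        for subset in combinations(range(len(edges)), r):
            flipped = set()
            for e in subset:
                flipped ^= set(edges[e])  # accumulate flip parity
            if flipped == defects:
                total = sum(weights[e] for e in subset)
                if best_total is None or total < best_total:
                    best_subset, best_total = subset, total
    return best_subset

print(min_weight_correction([4.6] * 5))        # uniform weights -> (2,)
print(min_weight_correction([1, 5, 5, 5, 1]))  # overridden weights -> (0, 4)
```

Exhaustive search is exponential in the number of edges, of course; it serves only to illustrate what the solver computes efficiently.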

In [4]:
visualizer = mwpf.Visualizer(positions=positions)
solver.clear()  # must clear the decoder before using it to solve a new syndrome!

custom_weight_syndrome = mwpf.SyndromePattern(
    [1, 2],
    override_weights=[1, 5, 5, 5, 1]
)

solver.solve(custom_weight_syndrome, visualizer=visualizer)
correction, bound = solver.subgraph_range(visualizer=visualizer)
visualizer.show()

Erasure and Leakage Reweighting

When dealing with large graphs, modifying individual edge weights from Python can be inefficient. A more performant alternative is to predefine update rules for groups of edges and apply those rules using binary trigger flags. This is especially useful in the context of heralded errors, such as erasures or leakage, where external detection provides binary indicators of potential error sources.

For each heralded flag $F_i$, there exists a corresponding subset of edges $E(F_i) \subseteq E$ with an associated conditional error probability $p_{e,i}$ for each $e \in E(F_i)$. Given an original edge error probability $p_e$, the new effective probability after applying the heralded information is: $$p'_e = p_e (1-p_{e,i}) + p_{e,i} (1-p_e)$$

The updated edge weight is then computed as $w'_e = \log((1-p'_e)/p'_e)$.

Note that this method only increases the effective error probability; it cannot reduce it. Nonetheless, it is often much faster than computing arbitrary weight updates in Python, making it a preferred approach for performance-critical applications.
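The reweighting formula is a probabilistic XOR: after heralding, the edge is in error exactly when one of the two independent mechanisms (the prior error or the heralded one) fires but not both. A minimal sketch in plain Python (independent of `mwpf`; the helper name is hypothetical):

```python
import math

def updated_weight(p_e: float, p_i: float) -> float:
    """Weight after combining prior probability p_e with heralded probability p_i."""
    # XOR of two independent error mechanisms:
    p_new = p_e * (1 - p_i) + p_i * (1 - p_e)
    return math.log((1 - p_new) / p_new)

print(updated_weight(0.01, 0.0))  # no herald: weight unchanged, log(99) ~ 4.595
print(updated_weight(0.01, 0.2))  # herald fires: p' = 0.206, weight drops to ~ 1.349
```

Since the XOR of two mechanisms with probabilities below 1/2 is always at least as likely as either alone, the update can only lower the weight, consistent with the increase-only behavior noted above.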

To use this functionality, simply provide a list of pre-configured heralded events and their corresponding edge modifiers. An example is provided below.

In [5]:
heralded_initializer = mwpf.SolverInitializer(
    vertex_num,
    weighted_edges,
    heralds=[
        [(0, 0.1), (4, 0.2)],  # each herald is a list of (edge index, conditional error probability)
        [(0, 0.3), (1, 0.4), (3, 0.5)]
    ],
)

solver = mwpf.Solver(heralded_initializer)
for heralds in [[], [0], [1]]:
    visualizer = mwpf.Visualizer(positions=positions)
    solver.clear()

    # pass in the heralded indices to enable the corresponding set of weight updates
    solver.solve(
        mwpf.SyndromePattern([1, 2], heralds=heralds),
        visualizer=visualizer,
    )
    correction, bound = solver.subgraph_range(visualizer=visualizer)
    visualizer.show()