[docs] API example for structured_state for upcoming release v0.6.0 (#94)

* Updated MPS tutorial

* Added a TTN tutorial

* Added CI checks for examples

---------

Co-authored-by: PabloAndresCQ <[email protected]>
Co-authored-by: Melf <[email protected]>
3 people authored Apr 11, 2024
1 parent 057adf1 commit e0e4b5b
Showing 10 changed files with 509 additions and 2,122 deletions.
52 changes: 52 additions & 0 deletions .github/workflows/check-examples.yml
@@ -0,0 +1,52 @@
name: check examples

on:
  pull_request:
    branches:
      - develop
      - main
  schedule:
    # 04:00 every Saturday morning
    - cron: '0 4 * * 6'

jobs:

  changes:
    runs-on: ubuntu-22.04
    outputs:
      examples: ${{ steps.filter.outputs.examples }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          base: ${{ github.ref }}
          filters: |
            examples:
              - 'examples/**'
              - '.github/**'
  check:
    name: check examples
    needs: changes
    if: github.event_name == 'schedule' || needs.changes.outputs.examples == 'true'
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: '0'
      - run: git fetch --depth=1 origin +refs/tags/*:refs/tags/* +refs/heads/*:refs/remotes/origin/*
      - name: Set up Python 3.11
        uses: actions/setup-python@v5
        with:
          python-version: 3.11
      - name: install python requirements for notebooks
        run: |
          python -m pip install --upgrade pip
          python -m pip install .
          cd examples
          python -m pip install p2j
      - name: test example notebooks
        run: |
          cd examples
          ./check-examples
4 changes: 2 additions & 2 deletions .github/workflows/lint.yml
@@ -25,10 +25,10 @@ jobs:
       - name: Update pip
         run: pip install --upgrade pip
       - name: Install black and pylint
-        run: pip install black~=22.3 pylint~=2.13,!=2.13.6
+        run: pip install black~=22.3 pylint~=3.0
       - name: Check files are formatted with black
         run: |
           black --check .
       - name: Run pylint
         run: |
-          pylint --recursive=y */
+          pylint --recursive=y --ignore=ttn_tutorial.py,mps_tutorial.py */
10 changes: 10 additions & 0 deletions examples/README.md
@@ -0,0 +1,10 @@
# Contents

Available tutorials for users:
* `mps_tutorial.ipynb`: Use of MPS simulation and features.
* `ttn_tutorial.ipynb`: Use of TTN simulation and features.
* `mpi/`: Example of how to use MPS for embarrassingly parallel tasks with `mpi4py`.

Developers:
* `check-examples`: The script that checks that the Jupyter notebooks are generated correctly from the files in `python/`. To generate the `.ipynb` files from these, run the `p2j` command in this script (see the example below).
* `python/`: The `.py` files that generate the `.ipynb` files. As a developer, you are expected to update these files instead of the `.ipynb` files. Remember to generate the latter using the `p2j` command before opening a pull request that changes these examples.
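
For instance, regenerating the MPS tutorial notebook from within the `examples/` directory looks roughly like the following (an illustrative sketch; the flags simply mirror those used in the `check-examples` script):

```sh
p2j -o -t mps_tutorial.ipynb python/mps_tutorial.py
```
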
15 changes: 15 additions & 0 deletions examples/check-examples
@@ -0,0 +1,15 @@
#!/bin/bash

set -e

for name in `cat ci-tested-notebooks.txt`
do
echo "Checking: ${name} ..."
# Check that notebook is generated from script:
p2j -o -t ${name}-gen.ipynb python/${name}.py
cmp ${name}.ipynb ${name}-gen.ipynb
rm ${name}-gen.ipynb
# TODO, add this when GPU is added to CI
# Run script:
# python python/${name}.py
done
2 changes: 2 additions & 0 deletions examples/ci-tested-notebooks.txt
@@ -0,0 +1,2 @@
mps_tutorial
ttn_tutorial
Binary file added examples/images/mps.png
2,121 changes: 1 addition & 2,120 deletions examples/mps_tutorial.ipynb

Large diffs are not rendered by default.

303 changes: 303 additions & 0 deletions examples/python/mps_tutorial.py

Large diffs are not rendered by default.

123 changes: 123 additions & 0 deletions examples/python/ttn_tutorial.py
@@ -0,0 +1,123 @@
import numpy as np
from time import time
import matplotlib.pyplot as plt
import networkx as nx
from pytket import Circuit
from pytket.circuit.display import render_circuit_jupyter

from pytket.extensions.cutensornet.structured_state import (
    CuTensorNetHandle,
    Config,
    SimulationAlgorithm,
    simulate,
)

# # Introduction
# This notebook provides examples of the usage of the TTN functionalities of `pytket_cutensornet`. For more information, see the docs at https://tket.quantinuum.com/extensions/pytket-cutensornet/api/index.html.
# Some good references to learn about Tree Tensor Network state simulation:
# - For an introduction to TTN-based simulation of quantum circuits: https://arxiv.org/abs/2206.01000
# - For an introduction to some of the optimisation concerns that are relevant to TTN: https://arxiv.org/abs/2209.03196
# The implementation in pytket-cutensornet differs from previously published literature. I am still experimenting with the algorithm. I intend to write up a document detailing the approach once I reach a stable version.
# The main advantage of TTN over MPS is that it can be used to efficiently simulate circuits with richer qubit connectivity. This does **not** mean that TTN has an easy time simulating all-to-all connectivity, but it is far more flexible than MPS. TTN's strength lies in simulating circuits where certain subsets of qubits interact densely with each other and relatively few gates act on qubits in different subsets.

# # How to use
# The interface for TTN matches that of MPS. As such, you should be able to run any code that uses `SimulationAlgorithm.MPSxGate` by replacing it with `SimulationAlgorithm.TTNxGate`. Calling `prepare_circuit_mps` is no longer necessary, since `TTNxGate` can apply gates between non-neighbouring qubits.
# **NOTE**: If you are new to pytket-cutensornet, it is highly recommended to start by reading the `mps_tutorial.ipynb` notebook instead. More details about the use of the library are discussed there (for instance, why and when to call `CuTensorNetHandle()`).
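
# As an illustrative sketch (not part of the original tutorial), the one-line swap looks as follows. The small nearest-neighbour circuit here is only for demonstration, chosen so that `MPSxGate` also works without `prepare_circuit_mps`.

demo_circ = Circuit(3).H(0).CX(0, 1).CX(1, 2)

with CuTensorNetHandle() as libhandle:
    demo_mps = simulate(libhandle, demo_circ, SimulationAlgorithm.MPSxGate, Config())
    demo_ttn = simulate(libhandle, demo_circ, SimulationAlgorithm.TTNxGate, Config())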


def random_graph_circuit(n_qubits: int, edge_prob: float, layers: int) -> Circuit:
    """Random circuit with qubit connectivity determined by a random graph."""
    c = Circuit(n_qubits)

    for i in range(layers):
        # Layer of TK1 gates
        for q in range(n_qubits):
            c.TK1(np.random.rand(), np.random.rand(), np.random.rand(), q)

        # Layer of CX gates
        graph = nx.erdos_renyi_graph(n_qubits, edge_prob, directed=True)
        qubit_pairs = list(graph.edges)
        for pair in qubit_pairs:
            c.CX(pair[0], pair[1])

    return c


# For **exact** simulation, you can call `simulate` directly, providing the default `Config()`:

simple_circ = random_graph_circuit(n_qubits=10, edge_prob=0.1, layers=1)

with CuTensorNetHandle() as libhandle:
    my_ttn = simulate(libhandle, simple_circ, SimulationAlgorithm.TTNxGate, Config())

# ## Obtain an amplitude from a TTN
# Let's first see how to get the amplitude of the state `|10100>` from the output of the previous circuit.

state = int("10100", 2)
with CuTensorNetHandle() as libhandle:
    my_ttn.update_libhandle(libhandle)
    amplitude = my_ttn.get_amplitude(state)
print(amplitude)

# Since this is a very small circuit, we can use `pytket`'s state vector simulator capabilities to verify that the state is correct by checking the amplitude of each of the computational states.

state_vector = simple_circ.get_statevector()
n_qubits = len(simple_circ.qubits)

correct_amplitude = [False] * (2**n_qubits)
with CuTensorNetHandle() as libhandle:
    my_ttn.update_libhandle(libhandle)
    for i in range(2**n_qubits):
        correct_amplitude[i] = np.isclose(state_vector[i], my_ttn.get_amplitude(i))

print("Are all amplitudes correct?")
print(all(correct_amplitude))

# ## Sampling from a TTN
# Sampling and measurement from a TTN state is not currently supported. This will be added in an upcoming release.

# # Approximate simulation
# We provide two policies for approximate simulation:
# * Bound the maximum value of the virtual bond dimension `chi`. If a bond dimension would increase past that point, we *truncate* (i.e. discard) the degrees of freedom that contribute the least to the state description. We can keep track of a lower bound of the error that this truncation causes.
# * Provide a value for acceptable two-qubit gate fidelity `truncation_fidelity`. After each two-qubit gate we truncate the dimension of virtual bonds as much as we can while guaranteeing the target gate fidelity. The more fidelity you require, the longer it will take to simulate. **Note**: this is *not* the final fidelity of the output state, but the fidelity per gate.
# Values for `chi` and `truncation_fidelity` can be set via `Config`. To showcase approximate simulation, let's define a circuit where exact TTN contraction would not be feasible.

circuit = random_graph_circuit(n_qubits=30, edge_prob=0.1, layers=1)

# We can simulate it using bounded `chi` as follows:

start = time()
with CuTensorNetHandle() as libhandle:
    config = Config(chi=64, float_precision=np.float32)
    bound_chi_ttn = simulate(libhandle, circuit, SimulationAlgorithm.TTNxGate, config)
end = time()
print("Time taken by approximate contraction with bounded chi:")
print(f"{round(end-start,2)} seconds")
print("\nLower bound of the fidelity:")
print(round(bound_chi_ttn.fidelity, 4))

# Alternatively, we can fix `truncation_fidelity` and let the bond dimension increase as necessary to satisfy it.

start = time()
with CuTensorNetHandle() as libhandle:
    config = Config(truncation_fidelity=0.99, float_precision=np.float32)
    fixed_fidelity_ttn = simulate(
        libhandle, circuit, SimulationAlgorithm.TTNxGate, config
    )
end = time()
print("Time taken by approximate contraction with fixed truncation fidelity:")
print(f"{round(end-start,2)} seconds")
print("\nLower bound of the fidelity:")
print(round(fixed_fidelity_ttn.fidelity, 4))

# # Contraction algorithms

# We currently offer only one TTN-based simulation algorithm.
# * **TTNxGate**: Apply gates one by one to the TTN, canonicalising the TTN and truncating when necessary.

# # Using the logger

# You can request a verbose log to be produced during simulation by assigning the `loglevel` argument when creating a `Config` instance. Currently, two log levels are supported (besides the default, which is silent):
# - `logging.INFO` prints the progress percentage, the memory currently occupied by the TTN and the current fidelity, along with some high-level information about the current stage of the simulation.
# - `logging.DEBUG` provides all of the messages from the level above, plus detailed information on the operation currently being carried out and the values of important variables.
# **Note**: Due to technical issues with the `logging` module and Jupyter notebooks, we need to reload the `logging` module. When working with Python scripts on the command line, just doing `import logging` is enough.
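
# As a minimal sketch (relying only on the `loglevel` argument described above), enabling INFO-level logging could look like this; the `reload` call is the Jupyter workaround mentioned in the note.

import logging
from importlib import reload

reload(logging)  # only needed inside Jupyter notebooks; plain scripts can simply `import logging`

with CuTensorNetHandle() as libhandle:
    verbose_config = Config(loglevel=logging.INFO)
    logged_ttn = simulate(
        libhandle, simple_circ, SimulationAlgorithm.TTNxGate, verbose_config
    )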
1 change: 1 addition & 0 deletions examples/ttn_tutorial.ipynb

Large diffs are not rendered by default.
