move BP from experimental to quimb.tensor.belief_propagation
jcmgray committed Jan 31, 2025
1 parent aaafbed commit d410482
Showing 21 changed files with 540 additions and 363 deletions.
12 changes: 10 additions & 2 deletions docs/changelog.md
@@ -5,6 +5,14 @@ Release notes for `quimb`.
(whats-new-1-10-1)=
## v1.10.1 (unreleased)

**Breaking Changes:**

- move belief propagation to `quimb.tensor.belief_propagation`
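
For downstream code the change is just an import-path swap; a minimal sketch, using `D2BP` as an arbitrary example of the re-exported names (the old path keeps working via a shim but now emits a warning, per the `__init__.py` diff below):

```python
# new canonical location (this commit)
from quimb.tensor.belief_propagation import D2BP, contract_d2bp

# old location -- still importable via the star-import shim shown below,
# but it raises a warning pointing at the new module:
# from quimb.experimental.belief_propagation import D2BP
```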

**Enhancements:**

- [`MatrixProductState.measure`](quimb.tensor.tensor_1d.MatrixProductState.measure): add a `seed` kwarg
@@ -128,7 +136,7 @@ Release notes for `quimb`.
- add [`TensorNetwork.drape_bond_between`](quimb.tensor.tensor_core.TensorNetwork.drape_bond_between) for 'draping' an existing bond between two tensors through a third
- add [`Tensor.new_ind_pair_with_identity`](quimb.tensor.tensor_core.Tensor.new_ind_pair_with_identity)
- TN2D, TN3D and arbitrary geom classical partition function builders ([`TN_classical_partition_function_from_edges`](quimb.tensor.tensor_builder.TN_classical_partition_function_from_edges)) now all support `outputs=` kwarg specifying non-marginalized variables
-- add simple dense 1-norm belief propagation algorithm [`D1BP`](quimb.experimental.belief_propagation.d1bp.D1BP)
+- add simple dense 1-norm belief propagation algorithm [`D1BP`](quimb.tensor.belief_propagation.d1bp.D1BP)
- add [`qtn.enforce_1d_like`](quimb.tensor.tensor_1d_compress.enforce_1d_like) for checking whether a tensor network is 1D-like, including automatically adding strings of identities between non-local bonds, expanding applicability of [`tensor_network_1d_compress`](quimb.tensor.tensor_1d_compress.tensor_network_1d_compress)
- add [`MatrixProductState.canonicalize`](quimb.tensor.tensor_1d.MatrixProductState.canonicalize) as (by default *non-inplace*) version of `canonize`, to follow the pattern of other tensor network methods. `canonize` is now an alias for `canonicalize_` [note trailing underscore].
- add [`MatrixProductState.left_canonicalize`](quimb.tensor.tensor_1d.MatrixProductState.left_canonicalize) as (by default *non-inplace*) version of `left_canonize`, to follow the pattern of other tensor network methods. `left_canonize` is now an alias for `left_canonicalize_` [note trailing underscore].
@@ -339,7 +347,7 @@ Release notes for `quimb`.
[Tensor.idxmax](quimb.tensor.Tensor.idxmax) for finding the index of the
minimum/maximum element.
- 2D and 3D classical partition function TN builders: allow output indices.
-- [`quimb.experimental.belief_propagation`](quimb.experimental.belief_propagation):
+- [`quimb.tensor.belief_propagation`](quimb.tensor.belief_propagation):
add various 1-norm/2-norm dense/lazy BP algorithms.
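
As a rough illustration of these 1-norm entries (a sketch, not part of the commit; it assumes `networkx` is installed and uses the partition-function builder mentioned above):

```python
import networkx as nx
import quimb.tensor as qtn
from quimb.tensor.belief_propagation import contract_hd1bp

# Ising partition function on a random 3-regular graph: each variable index
# is shared by several tensors, i.e. a *hyper* index, so HD1BP applies
G = nx.random_regular_graph(3, 20, seed=7)
tn = qtn.TN_classical_partition_function_from_edges(tuple(G.edges), beta=0.2)

# dense 1-norm BP estimate of the full contraction (the partition function Z)
Z_bp = contract_hd1bp(tn)
```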

**Bug fixes:**
112 changes: 5 additions & 107 deletions quimb/experimental/belief_propagation/__init__.py
@@ -1,110 +1,8 @@
"""Belief propagation (BP) routines. There are three potential categorizations
of BP and each combination of them is potentially valid specific algorithm.
import warnings

1-norm vs 2-norm BP
-------------------
from quimb.tensor.belief_propagation import *

- 1-norm (normal): BP runs directly on the tensor network, messages have size
``d`` where ``d`` is the size of the bond(s) connecting two tensors or
regions.
- 2-norm (quantum): BP runs on the squared tensor network, messages have size
``d^2`` where ``d`` is the size of the bond(s) connecting two tensors or
regions. Each local tensor or region is partially traced (over dangling
indices) with its conjugate to create a single node.
Graph vs Hypergraph BP
----------------------
- Graph (simple): the tensor network lives on a graph, where indices either
appear on two tensors (a bond), or appear on a single tensor (are outputs).
In this case, messages are exchanged directly between tensors.
- Hypergraph: the tensor network lives on a hypergraph, where indices can
appear on any number of tensors. In this case, the update procedure is two
parts, first all 'tensor' messages are computed, these are then used in the
second step to compute all the 'index' messages, which are then fed back into
the 'tensor' message update and so forth. For 2-norm BP one likely needs to
specify which indices are outputs and should be traced over.
The hypergraph case of course includes the graph case, but since the 'index'
message update is simply the identity, it is convenient to have a separate
simpler implementation, where the standard TN bond vs physical index
definitions hold.
Dense vs Vectorized vs Lazy BP
------------------------------
- Dense: each node is a single tensor, or pair of tensors for 2-norm BP. If all
multibonds have been fused, then each message is a vector (1-norm case) or
matrix (2-norm case).
- Vectorized: the same as the above, but all matching tensor update and message
updates are stacked and performed simultaneously. This can be enormously more
efficient for large numbers of small tensors.
- Lazy: each node is potentially a tensor network itself with arbitrary inner
structure and number of bonds connecting to other nodes. The message are
generally tensors and each update is a lazy contraction, which is potentially
much cheaper / requires less memory than forming the 'dense' node for large
tensors.
(There is also the MPS flavor where each node has a 1D structure and the
messages are matrix product states, with updates involving compression.)
Overall that gives 12 possible BP flavors, some implemented here:
- [x] (HD1BP) hyper, dense, 1-norm - this is the standard BP algorithm
- [x] (HD2BP) hyper, dense, 2-norm
- [x] (HV1BP) hyper, vectorized, 1-norm
- [ ] (HV2BP) hyper, vectorized, 2-norm
- [ ] (HL1BP) hyper, lazy, 1-norm
- [ ] (HL2BP) hyper, lazy, 2-norm
- [x] (D1BP) simple, dense, 1-norm - simple BP for simple tensor networks
- [x] (D2BP) simple, dense, 2-norm - this is the standard PEPS BP algorithm
- [ ] (V1BP) simple, vectorized, 1-norm
- [ ] (V2BP) simple, vectorized, 2-norm
- [x] (L1BP) simple, lazy, 1-norm
- [x] (L2BP) simple, lazy, 2-norm
The 2-norm methods can be used to compress bonds or estimate the 2-norm.
The 1-norm methods can be used to estimate the 1-norm, i.e. contracted value.
Both methods can be used to compute index marginals and thus perform sampling.
The vectorized methods can be extremely fast for large numbers of small
tensors, but do currently require all dimensions to match.
The dense and lazy methods can can converge messages *locally*, i.e. only
update messages adjacent to messages which have changed.
"""

from .bp_common import combine_local_contractions, initialize_hyper_messages
from .d1bp import D1BP, contract_d1bp
from .d2bp import D2BP, compress_d2bp, contract_d2bp, sample_d2bp
from .hd1bp import HD1BP, contract_hd1bp, sample_hd1bp
from .hv1bp import HV1BP, contract_hv1bp, sample_hv1bp
from .l1bp import L1BP, contract_l1bp
from .l2bp import L2BP, compress_l2bp, contract_l2bp
from .regions import RegionGraph

__all__ = (
"combine_local_contractions",
"compress_d2bp",
"compress_l2bp",
"contract_d1bp",
"contract_d2bp",
"contract_hd1bp",
"contract_hv1bp",
"contract_l1bp",
"contract_l2bp",
"D1BP",
"D2BP",
"HD1BP",
"HV1BP",
"initialize_hyper_messages",
"L1BP",
"L2BP",
"RegionGraph",
"sample_d2bp",
"sample_hd1bp",
"sample_hv1bp",
warnings.warn(
"Most functionality of 'quimb.experimental.belief_propagation' "
"has been moved to `quimb.tensor.belief_propagation`.",
)
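
For context, a minimal usage sketch of the relocated module (illustrative only, assuming a small random PEPS; on loopy networks BP is approximate, so the BP and exact values agree only roughly):

```python
import quimb.tensor as qtn
from quimb.tensor.belief_propagation import contract_d2bp

# small random PEPS purely for illustration
peps = qtn.PEPS.rand(3, 3, bond_dim=2, seed=42)

# D2BP ('the standard PEPS BP algorithm' above) estimates the 2-norm,
# i.e. the contraction of <psi|psi>, without ever forming it exactly
norm_sq_bp = contract_d2bp(peps)

# a network this small can also be contracted exactly for comparison
norm_sq_exact = peps.make_norm().contract()
```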
194 changes: 0 additions & 194 deletions quimb/experimental/belief_propagation/regions.py

This file was deleted.

