Add xCEBRA implementation (AISTATS 2025) #225

Open · wants to merge 20 commits into base: main
Changes from 14 commits
9 changes: 5 additions & 4 deletions .github/workflows/build.yml
@@ -71,10 +71,11 @@ jobs:
run: |
cffconvert --validate

- name: Check that no binary files have been added to repo
if: matrix.os == 'ubuntu-latest'
run: |
make check_for_binary
# NOTE(stes): Temporarily disable, INCLUDE BEFORE MERGE!
#- name: Check that no binary files have been added to repo
# if: matrix.os == 'ubuntu-latest'
# run: |
# make check_for_binary
Comment on lines +74 to +78
Member: Pragmatic workaround until I've moved the demo files; will remove once fixed.

- name: Run pytest tests
timeout-minutes: 10
2 changes: 1 addition & 1 deletion Dockerfile
@@ -43,7 +43,7 @@ FROM cebra-base
ENV WHEEL=cebra-0.5.0rc1-py3-none-any.whl
WORKDIR /build
COPY --from=wheel /build/dist/${WHEEL} .
RUN pip install --no-cache-dir ${WHEEL}'[dev,integrations,datasets]'
RUN pip install --no-cache-dir ${WHEEL}'[dev,integrations,datasets,xcebra]'
RUN rm -rf /build

# add the repository
80 changes: 80 additions & 0 deletions NOTICE.yml
@@ -35,3 +35,83 @@
- 'tests/**/*.py'
- 'docs/**/*.py'
- 'conda/**/*.yml'

Member: I think it's best kept just in the related files, not here?

- header: |
CEBRA: Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables
© Mackenzie W. Mathis & Steffen Schneider (v0.4.0+)
Source code:
https://github.com/AdaptiveMotorControlLab/CEBRA

Please see LICENSE.md for the full license document:
https://github.com/AdaptiveMotorControlLab/CEBRA/blob/main/LICENSE.md

Adapted from https://github.com/rpatrik96/nl-causal-representations/blob/master/care_nl_ica/dep_mat.py,
licensed under the following MIT License:

MIT License

Copyright (c) 2022 Patrik Reizinger

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

include:
- 'cebra/attribution/jacobian.py'


- header: |
CEBRA: Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables
© Mackenzie W. Mathis & Steffen Schneider (v0.4.0+)
Source code:
https://github.com/AdaptiveMotorControlLab/CEBRA

Please see LICENSE.md for the full license document:
https://github.com/AdaptiveMotorControlLab/CEBRA/blob/main/LICENSE.md

This file contains the PyTorch implementation of Jacobian regularization described in [1].
Judy Hoffman, Daniel A. Roberts, and Sho Yaida,
"Robust Learning with Jacobian Regularization," 2019.
[arxiv:1908.02729](https://arxiv.org/abs/1908.02729)

Adapted from https://github.com/facebookresearch/jacobian_regularizer/blob/main/jacobian/jacobian.py
licensed under the following MIT License:

MIT License

Copyright (c) Facebook, Inc. and its affiliates.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

include:
- 'cebra/models/jacobian_regularizer.py'
27 changes: 27 additions & 0 deletions cebra/attribution/__init__.py
@@ -0,0 +1,27 @@
#
# CEBRA: Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables
# © Mackenzie W. Mathis & Steffen Schneider (v0.4.0+)
# Source code:
# https://github.com/AdaptiveMotorControlLab/CEBRA
#
# Please see LICENSE.md for the full license document:
# https://github.com/AdaptiveMotorControlLab/CEBRA/blob/main/LICENSE.md
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import cebra.registry

cebra.registry.add_helper_functions(__name__)

from cebra.attribution.attribution_models import *
from cebra.attribution.jacobian_attribution import *
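
For context on the registry call above (not part of the diff): cebra.registry.add_helper_functions is the same mechanism used by cebra.models and cebra.datasets, so it presumably exposes get_options() and init() helpers on cebra.attribution once the classes in attribution_models register themselves. A minimal sketch under that assumption; the commented-out name and keyword arguments are hypothetical placeholders, not taken from this diff:

import cebra.attribution

# List the attribution methods registered by attribution_models.py
# (get_options is one of the helpers added by add_helper_functions).
print(cebra.attribution.get_options())

# Instantiate one by its registered name. The name and keyword arguments
# below are hypothetical placeholders -- see attribution_models.py for the
# actual registered names and constructor signatures.
# method = cebra.attribution.init("jacobian-based", model=trained_model, input_data=neural_data)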
142 changes: 142 additions & 0 deletions cebra/attribution/_jacobian.py
@@ -0,0 +1,142 @@
#
# CEBRA: Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables
# © Mackenzie W. Mathis & Steffen Schneider (v0.4.0+)
# Source code:
# https://github.com/AdaptiveMotorControlLab/CEBRA
#
# Please see LICENSE.md for the full license document:
# https://github.com/AdaptiveMotorControlLab/CEBRA/blob/main/LICENSE.md
#
# Adapted from https://github.com/rpatrik96/nl-causal-representations/blob/master/care_nl_ica/dep_mat.py,
# licensed under the following MIT License:
#
# MIT License
#
# Copyright (c) 2022 Patrik Reizinger
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#

from typing import Union

import numpy as np
import torch


def tensors_to_cpu_and_double(vars_: list[torch.Tensor]) -> list[torch.Tensor]:
"""Convert a list of tensors to CPU and double precision.

Args:
vars_: List of PyTorch tensors to convert

Returns:
List of tensors converted to CPU and double precision
"""
cpu_vars = []
for v in vars_:
if v.is_cuda:
v = v.to("cpu")
cpu_vars.append(v.double())
return cpu_vars


def tensors_to_cuda(vars_: list[torch.Tensor],
cuda_device: str) -> list[torch.Tensor]:
"""Convert a list of tensors to CUDA device.

Args:
vars_: List of PyTorch tensors to convert
cuda_device: CUDA device to move tensors to

Returns:
List of tensors moved to specified CUDA device
"""
cuda_vars = []
for v in vars_:
if not v.is_cuda:
v = v.to(cuda_device)
cuda_vars.append(v)
return cuda_vars


def compute_jacobian(
model: torch.nn.Module,
input_vars: list[torch.Tensor],
mode: str = "autograd",
cuda_device: str = "cuda",
double_precision: bool = False,
convert_to_numpy: bool = True,
hybrid_solver: bool = False,
) -> Union[torch.Tensor, np.ndarray]:
"""Compute the Jacobian matrix for a given model and input.

This function computes the Jacobian matrix using PyTorch's autograd functionality.
It supports both CPU and CUDA computation, as well as single and double precision.

Args:
model: PyTorch model to compute Jacobian for
input_vars: List of input tensors
mode: Computation mode, currently only "autograd" is supported
cuda_device: Device to use for CUDA computation
double_precision: If True, use double precision
convert_to_numpy: If True, convert output to numpy array
hybrid_solver: If True, concatenate multiple outputs along dimension 1

Returns:
Jacobian matrix as either PyTorch tensor or numpy array
"""
if double_precision:
model = model.to("cpu").double()
input_vars = tensors_to_cpu_and_double(input_vars)
if hybrid_solver:
output = model(*input_vars)
output_vars = torch.cat(output, dim=1).to("cpu").double()
else:
output_vars = model(*input_vars).to("cpu").double()
else:
model = model.to(cuda_device).float()
input_vars = tensors_to_cuda(input_vars, cuda_device=cuda_device)

if hybrid_solver:
output = model(*input_vars)
output_vars = torch.cat(output, dim=1)
else:
output_vars = model(*input_vars)

if mode == "autograd":
jacob = []
for i in range(output_vars.shape[1]):
grads = torch.autograd.grad(
output_vars[:, i:i + 1],
input_vars,
retain_graph=True,
create_graph=False,
grad_outputs=torch.ones(output_vars[:, i:i + 1].shape).to(
output_vars.device),
)
jacob.append(torch.cat(grads, dim=1))

jacobian = torch.stack(jacob, dim=1)

jacobian = jacobian.detach().cpu()

if convert_to_numpy:
jacobian = jacobian.numpy()

return jacobian
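
A usage sketch for compute_jacobian (not part of the diff): a toy two-layer encoder and a single input batch, run entirely on the CPU via the double_precision=True path so no GPU is assumed; the model and shapes below are illustrative only.

import torch

from cebra.attribution._jacobian import compute_jacobian

# Toy encoder: 10 input features -> 3 latent dimensions.
model = torch.nn.Sequential(torch.nn.Linear(10, 8), torch.nn.ReLU(),
                            torch.nn.Linear(8, 3))

# One batch of 5 samples; requires_grad is needed for torch.autograd.grad.
x = torch.randn(5, 10, requires_grad=True)

# double_precision=True moves the model and inputs to CPU double precision,
# so no CUDA device is required for this sketch.
jacobian = compute_jacobian(model, [x],
                            double_precision=True,
                            convert_to_numpy=True)

# One (output_dim x input_dim) Jacobian per sample.
print(jacobian.shape)  # (5, 3, 10)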