Adds a new workflow to update package index
When we want to update to a new release of MLIR-TRT, we first need to publish
`packages.html` so that `pip install` can resolve the new wheels. Previously, we were
doing this on every nightly run, but it's better to do it whenever a change to
`packages.html` is pushed to `main`. That's what this workflow does.
pranavm-nvidia committed Nov 9, 2024
1 parent a31da20 commit e1d4974
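
For context, a minimal sketch of how the published index would be consumed once this workflow deploys it. The GitHub Pages URL below is an assumption based on the repository name; it is not stated in this commit.

    # Point pip at the published find-links page to resolve the MLIR-TRT wheels.
    # URL assumed to be the repository's GitHub Pages site; adjust if it differs.
    python3 -m pip install mlir-tensorrt-compiler mlir-tensorrt-runtime \
        -f https://nvidia.github.io/TensorRT-Incubator/packages.html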
Showing 2 changed files with 66 additions and 0 deletions.
49 changes: 49 additions & 0 deletions .github/workflows/post-merge-package-index-update.yml
@@ -0,0 +1,49 @@
name: Post-merge package index update

on:
  push:
    branches: [ "main" ]
    paths: ['tripy/docs/packages.html']

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  publish-package-index:
    runs-on: tripy-self-hosted
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    container:
      image: ghcr.io/nvidia/tensorrt-incubator/tripy:latest
      volumes:
        - ${{ github.workspace }}/tripy:/tripy
      options: --gpus all
    steps:
      - uses: actions/checkout@v4

      - name: build-docs
        run: |
          cd /tripy/
          python3 docs/generate_rsts.py
          sphinx-build build/doc_sources build/docs -c docs/ -j 4 -W -n
          cp docs/packages.html build/docs/
      - uses: actions/configure-pages@v5

      - uses: actions/upload-pages-artifact@v3
        with:
          path: "/tripy/build/docs"

      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
17 changes: 17 additions & 0 deletions tripy/docs/packages.html
@@ -85,6 +85,23 @@ <h1>Package Index</h1>
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.34/mlir_tensorrt_runtime-0.1.34+cuda12.trt102-cp312-cp312-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.34+cuda12.trt102-cp312-cp312-linux_x86_64.whl</a><br>
<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.34/mlir_tensorrt_runtime-0.1.34+cuda12.trt102-cp39-cp39-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.34+cuda12.trt102-cp39-cp39-linux_x86_64.whl</a><br>

<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.36/mlir_tensorrt_compiler-0.1.36+cuda12.trt102-cp310-cp310-linux_x86_64.whl">mlir_tensorrt_compiler-0.1.36+cuda12.trt102-cp310-cp310-linux_x86_64.whl</a><br>
<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.36/mlir_tensorrt_compiler-0.1.36+cuda12.trt102-cp311-cp311-linux_x86_64.whl">mlir_tensorrt_compiler-0.1.36+cuda12.trt102-cp311-cp311-linux_x86_64.whl</a><br>
<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.36/mlir_tensorrt_compiler-0.1.36+cuda12.trt102-cp312-cp312-linux_x86_64.whl">mlir_tensorrt_compiler-0.1.36+cuda12.trt102-cp312-cp312-linux_x86_64.whl</a><br>
<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.36/mlir_tensorrt_compiler-0.1.36+cuda12.trt102-cp39-cp39-linux_x86_64.whl">mlir_tensorrt_compiler-0.1.36+cuda12.trt102-cp39-cp39-linux_x86_64.whl</a><br>
<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.36/mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp310-cp310-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp310-cp310-linux_x86_64.whl</a><br>
<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.36/mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp311-cp311-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp311-cp311-linux_x86_64.whl</a><br>
<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.36/mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp312-cp312-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp312-cp312-linux_x86_64.whl</a><br>
<a
href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.36/mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp39-cp39-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp39-cp39-linux_x86_64.whl</a><br>
</body>

</html>
