feat: Noir subrepo. (AztecProtocol#3369)
Finally.

Builds the now-subrepo'd Noir repository. At present this just builds
nargo for x86 and arm, meaning our sandbox install script can now
provide a wrapper `aztec-nargo` that is guaranteed to always be exactly
what we want, regardless of any modifications to macros, compilers,
packages, or whatever. Decouples Aztec from Noir's deployment pipeline,
while still providing the ability to push changes back to Noir.
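
For illustration only, one plausible shape for such an `aztec-nargo` wrapper is sketched below; the image name and tag are placeholders, not something this commit ships.

#!/bin/bash
# Hypothetical wrapper sketch: run nargo from the subrepo-built Noir image,
# mounting the current project so compiled artifacts land back on the host.
# The image name/tag below are placeholders, not taken from this commit.
set -euo pipefail
docker run --rm \
  -v "$PWD":/project \
  -w /project \
  aztecprotocol/noir-nargo:latest \
  nargo "$@"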

Still need to do work to build npm packages and portal them into
yarn-project. Will do in a separate PR.

* Removes old benchmarking stuff and commented-out canary stuff from the
pipeline.
* Adds nargo x86 and arm builds, outputting a multi-arch image that also
runs efficiently on Macs.
* Removes a load of ARCH-specific hack stuff in build-system, in favour
of all cache image URIs now just having their arch appended to their tag.
The arch is determined by the arch of the build system. This required a
small "hack(?)" whereby we fall back from an unfound arch (arm) to
x86, as we build some of our arm images from previously built x86
images, but I anticipate that'll change at some point.
* Remove some project scripts that bleed the build-system abstraction
(e.g. deploy_docker.sh).
* Introduces a `[ci dry-deploy]` commit message command for doing a dry run
of deploys. We no longer have conditional workflow filters for
deployment to enable this, i.e. there are always deploy jobs; they just
no-op out ASAP. If dry-deploy is enabled, the deploy jobs run to
"completion" but don't actually push to Dockerhub, and just
run `npm publish` in dry-run mode.
* Gets rid of `VERSION` files and the sanity check, as it got in the way
of the above, and I don't even remember why it was there.
* `build-system` can now launch arm spot instances. This can be
requested with e.g. `cond_spot_run_build noir 32 arm64`.
* Added a script to help bootstrap build instances in case we need to
create new AMIs again in future (I had to create an arm one).
* Deleted some build-system scripts I couldn't see used anywhere.
* `deploy_dockerhub` script now takes a list of arches and does the
manifest generation itself.
* Introduces a `should_deploy` script which enables the early-out of
deployment jobs (see the sketch after this list). At present it causes
an exit if there is no COMMIT_TAG. In future it'll want to run the
deploy steps if BRANCH is master as well.
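
A minimal sketch of the `should_deploy` early-out idea (assumptions only, not the literal script added in this commit):

#!/bin/bash
# Sketch: deploy jobs call this first; a non-zero status means "nothing to
# deploy", and the job exits successfully instead of running deploy steps.
set -eu
if [ -z "${COMMIT_TAG:-}" ]; then
  echo "No COMMIT_TAG. Nothing to deploy."
  exit 1
fi
# In future: also proceed when BRANCH is master.

A deploy job would then begin with something like `should_deploy || exit 0`, consistent with deploy jobs always existing but no-op'ing out ASAP as described above.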


Follow-up PRs will:
* Build NPM packages and portal them into yarn-project.
* Modify the sandbox install script to pull the noir image and install
the `aztec-nargo` wrapper.
* Update the vscode plugin to allow selection between a global `nargo`
and `aztec-nargo` if both are found.
* Someone will do some magic to use the nargo container in GitHub
Codespaces so users can press `.` and play.
charlielye authored Nov 23, 2023
1 parent 9557a66 commit d94d88b
Showing 42 changed files with 304 additions and 665 deletions.
375 changes: 105 additions & 270 deletions .circleci/config.yml

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion PROJECT
@@ -1 +1 @@
aztec3-packages
aztec
1 change: 0 additions & 1 deletion VERSION

This file was deleted.

1 change: 0 additions & 1 deletion barretenberg/VERSION

This file was deleted.

14 changes: 0 additions & 14 deletions build-system/remote/32core.json

This file was deleted.

14 changes: 0 additions & 14 deletions build-system/remote/64core.json

This file was deleted.

8 changes: 8 additions & 0 deletions build-system/remote/bootstrap_build_instance.sh
@@ -0,0 +1,8 @@
#!/bin/bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common awscli docker-ce
sudo usermod -aG docker ${USER}
mkdir .aws
echo "Add build-instance credentials to ~/.aws/credentials
41 changes: 17 additions & 24 deletions build-system/scripts/build
@@ -8,26 +8,20 @@
# usage: ./build <repository>
# example: ./build aztec-connect-cpp-x86_64-linux-clang
# output image:
# 278380418400.dkr.ecr.us-east-2.amazonaws.com/aztec-connect-cpp-x86_64-linux-clang:cache-deadbeefcafebabe1337c0d3
# 278380418400.dkr.ecr.us-east-2.amazonaws.com/aztec-connect-cpp-x86_64-linux-clang:cache-deadbeefcafebabe1337c0d3-x86_64
#
# In more detail:
# - Init all submodules required to build this project.
# - Log into cache ECR, and ensures repository exists.
# - Checks if current project needs to be rebuilt, if not, retag previous image with current commit hash and early out.
# - Validate any terraform that may exist.
# - Pull down dependent images that we do not control (e.g. alpine etc).
# - For images we do control, pull the image we've built (or retagged) as part of this build.
# - For each "named stage" (usually intermittent builders before creating final image), pull previous to prime the cache, build and push the results.
# - Pull previous project image to use it as a layer cache if it exists.
# - Perform the build of the image itself. With the cache primed we should only have to rebuild the necessary layers.
# - Push the image tagged with the commit hash to the cache.
# - For images built previously in pipeline, pull the image we've built.
# - Perform the build of the image itself.
# - Push the image tagged with the content hash to the cache.

[ -n "${BUILD_SYSTEM_DEBUG:-}" ] && set -x # conditionally trace
set -euo pipefail

REPOSITORY=$1
FORCE_BUILD=${2:-"false"}
ARCH=${3:-""}
DOCKERFILE=$(query_manifest dockerfile $REPOSITORY)
PROJECT_DIR=$(query_manifest projectDir $REPOSITORY)
BUILD_DIR=$(query_manifest buildDir $REPOSITORY)
@@ -36,7 +30,6 @@ echo "Repository: $REPOSITORY"
echo "Working directory: $PWD"
echo "Dockerfile: $DOCKERFILE"
echo "Build directory: $BUILD_DIR"
echo "Arch: $ARCH"

# Fetch images with retries
function fetch_image() {
@@ -67,7 +60,8 @@ echo "Content hash: $CONTENT_HASH"
cd $BUILD_DIR

# If we have previously successful commit, we can early out if nothing relevant has changed since.
if [[ $FORCE_BUILD == 'false' ]] && check_rebuild cache-"$CONTENT_HASH" $REPOSITORY; then
IMAGE_COMMIT_TAG=$(calculate_image_tag $REPOSITORY)
if check_rebuild $IMAGE_COMMIT_TAG $REPOSITORY; then
echo "No rebuild necessary."
exit 0
fi
@@ -94,15 +88,17 @@ fi

# For each dependency, pull in the latest image and give it correct tag.
for PARENT_REPO in $(query_manifest dependencies $REPOSITORY); do
PARENT_CONTENT_HASH=$(calculate_content_hash $PARENT_REPO)
# There must be a parent image to continue.
if [ -z "$PARENT_CONTENT_HASH" ]; then
echo "No parent image found for $PARENT_REPO"
exit 1
PARENT_IMAGE_URI=$(calculate_image_uri $PARENT_REPO)
echo "Pulling dependency $PARENT_IMAGE_URI..."
if ! fetch_image $PARENT_IMAGE_URI; then
# This is a *bit* of a hack maybe. Some of our arm images can be built from x86 dependents.
# e.g. node projects are architecture independent.
# This may not hold true if we start introducing npm modules that are backed by native code.
# But for now, to avoid building some projects twice, we can fallback onto x86 variant.
PARENT_IMAGE_URI=$(calculate_image_uri $PARENT_REPO x86_64)
echo "Falling back onto x86 build. Pulling dependency $PARENT_IMAGE_URI..."
fetch_image $PARENT_IMAGE_URI
fi
PARENT_IMAGE_URI=$ECR_URL/$PARENT_REPO:cache-$PARENT_CONTENT_HASH
echo "Pulling dependency $PARENT_REPO..."
fetch_image $PARENT_IMAGE_URI
# Tag it to look like an official release as that's what we use in Dockerfiles.
TAG=$ECR_DEPLOY_URL/$PARENT_REPO
docker tag $PARENT_IMAGE_URI $TAG
@@ -112,10 +108,7 @@ COMMIT_TAG_VERSION=$(extract_tag_version $REPOSITORY false)
echo "Commit tag version: $COMMIT_TAG_VERSION"

# Build the actual image and give it a commit tag.
IMAGE_COMMIT_URI=$ECR_URL/$REPOSITORY:cache-$CONTENT_HASH
if [[ -n "$ARCH" ]]; then
IMAGE_COMMIT_URI=$IMAGE_COMMIT_URI-$ARCH
fi
IMAGE_COMMIT_URI=$(calculate_image_uri $REPOSITORY)
echo "Building image: $IMAGE_COMMIT_URI"
docker build -t $IMAGE_COMMIT_URI -f $DOCKERFILE --build-arg COMMIT_TAG=$COMMIT_TAG_VERSION --build-arg ARG_CONTENT_HASH=$CONTENT_HASH .
echo "Pushing image: $IMAGE_COMMIT_URI"
9 changes: 9 additions & 0 deletions build-system/scripts/calculate_image_tag
@@ -0,0 +1,9 @@
#!/bin/bash
[ -n "${BUILD_SYSTEM_DEBUG:-}" ] && set -x # conditionally trace
set -eu

REPOSITORY=$1
ARCH=${2:-$(uname -m)}
[ "$ARCH" == "aarch64" ] && ARCH=arm64
CONTENT_HASH=$(calculate_content_hash $REPOSITORY)
echo "cache-$CONTENT_HASH-$ARCH"
4 changes: 3 additions & 1 deletion build-system/scripts/calculate_image_uri
@@ -3,5 +3,7 @@
set -eu

REPOSITORY=$1
ARCH=${2:-$(uname -m)}
[ "$ARCH" == "aarch64" ] && ARCH=arm64
CONTENT_HASH=$(calculate_content_hash $REPOSITORY)
echo "$ECR_URL/$REPOSITORY:cache-$CONTENT_HASH"
echo "$ECR_URL/$REPOSITORY:cache-$CONTENT_HASH-$ARCH"
18 changes: 0 additions & 18 deletions build-system/scripts/check_npm_version

This file was deleted.

12 changes: 8 additions & 4 deletions build-system/scripts/cond_run_compose
@@ -6,11 +6,10 @@ REPOSITORY=$1
COMPOSE_FILE=$2
shift 2

CONTENT_HASH=$(calculate_content_hash $REPOSITORY)
BASE_TAG=cache-$CONTENT_HASH
BASE_TAG=$(calculate_image_tag $REPOSITORY)
SUCCESS_TAG=$BASE_TAG-$JOB_NAME

echo "Content hash: $CONTENT_HASH"
echo "Success tag: $SUCCESS_TAG"

if ! check_rebuild $SUCCESS_TAG $REPOSITORY; then
# Login to pull our ecr images with docker.
@@ -27,7 +26,12 @@ if ! check_rebuild $SUCCESS_TAG $REPOSITORY; then
cd $(query_manifest projectDir $REPOSITORY)

export $@
docker-compose -f $COMPOSE_FILE up --exit-code-from $REPOSITORY --force-recreate
if docker compose > /dev/null 2>&1; then
CMD="docker compose"
else
CMD="docker-compose"
fi
$CMD -f $COMPOSE_FILE up --exit-code-from $REPOSITORY --force-recreate

upload_logs_to_s3 log

5 changes: 2 additions & 3 deletions build-system/scripts/cond_run_container
@@ -10,11 +10,10 @@ set -eu
REPOSITORY=$1
shift

CONTENT_HASH=$(calculate_content_hash $REPOSITORY)
BASE_TAG=cache-$CONTENT_HASH
BASE_TAG=$(calculate_image_tag $REPOSITORY)
SUCCESS_TAG=$BASE_TAG-$JOB_NAME

echo "Content hash: $CONTENT_HASH"
echo "Success tag: $SUCCESS_TAG"

if ! check_rebuild $SUCCESS_TAG $REPOSITORY; then
IMAGE_URI=$(calculate_image_uri $REPOSITORY)
5 changes: 2 additions & 3 deletions build-system/scripts/cond_run_script
@@ -10,11 +10,10 @@ set -eu
REPOSITORY=$1
shift

CONTENT_HASH=$(calculate_content_hash $REPOSITORY)
BASE_TAG=cache-$CONTENT_HASH
BASE_TAG=$(calculate_image_tag $REPOSITORY)
SUCCESS_TAG=$BASE_TAG-$JOB_NAME

echo "Content hash: $CONTENT_HASH"
echo "Success tag: $SUCCESS_TAG"

if ! check_rebuild $SUCCESS_TAG $REPOSITORY; then
init_submodules $REPOSITORY
5 changes: 3 additions & 2 deletions build-system/scripts/cond_spot_run_build
@@ -1,8 +1,9 @@
#!/bin/bash
[ -n "${BUILD_SYSTEM_DEBUG:-}" ] && set -x # conditionally trace
set -eu
set -euo pipefail

REPOSITORY=$1
CPUS=$2
ARCH=${3:-x86_64}

cond_spot_run_script $REPOSITORY $CPUS build $REPOSITORY
cond_spot_run_script $REPOSITORY $CPUS $ARCH build $REPOSITORY $ARCH | add_timestamps
2 changes: 1 addition & 1 deletion build-system/scripts/cond_spot_run_compose
@@ -7,4 +7,4 @@ CPUS=$2
shift 2

export TAG_POSTFIX=$JOB_NAME
cond_spot_run_script $REPOSITORY $CPUS cond_run_compose $REPOSITORY $@ 2>&1 | add_timestamps
cond_spot_run_script $REPOSITORY $CPUS x86_64 cond_run_compose $REPOSITORY $@ 2>&1 | add_timestamps
2 changes: 1 addition & 1 deletion build-system/scripts/cond_spot_run_container
@@ -7,4 +7,4 @@ CPUS=$2
shift 2

export TAG_POSTFIX=$JOB_NAME
cond_spot_run_script $REPOSITORY $CPUS cond_run_container $REPOSITORY $@
cond_spot_run_script $REPOSITORY $CPUS x86_64 cond_run_container $REPOSITORY $@
11 changes: 6 additions & 5 deletions build-system/scripts/cond_spot_run_script
@@ -16,20 +16,21 @@ set -eu

REPOSITORY=$1
CPUS=$2
shift 2
ARCH=$3
shift 3

CONTENT_HASH=$(calculate_content_hash $REPOSITORY)
BASE_TAG=cache-$CONTENT_HASH
# If the CPUS have a specific architecture assigned, we need to use that to build the success tag.
BASE_TAG=$(calculate_image_tag $REPOSITORY $ARCH)
SUCCESS_TAG=$BASE_TAG

if [ -n "${TAG_POSTFIX:-}" ]; then
SUCCESS_TAG=$BASE_TAG-$TAG_POSTFIX
fi

echo "Content hash: $CONTENT_HASH"
echo "Success tag: $SUCCESS_TAG"

if ! check_rebuild $SUCCESS_TAG $REPOSITORY; then
init_submodules $REPOSITORY
spot_run_script $CONTENT_HASH $CPUS $@
spot_run_script $SUCCESS_TAG $CPUS $ARCH $@
retry tag_remote_image $REPOSITORY $BASE_TAG $SUCCESS_TAG
fi
4 changes: 2 additions & 2 deletions build-system/scripts/cond_spot_run_test
@@ -13,5 +13,5 @@ SCRIPT=$(query_manifest relativeProjectDir $REPOSITORY)/$SCRIPT

# Specify a TAG_POSTFIX as the JOB_NAME
mkdir -p /tmp/test-logs
export TAG_POSTFIX=$JOB_NAME
cond_spot_run_script $REPOSITORY $CPUS $SCRIPT $@ | tee "/tmp/test-logs/$JOB_NAME.log"
export TAG_POSTFIX=$JOB_NAME
cond_spot_run_script $REPOSITORY $CPUS x86_64 $SCRIPT $@ | tee "/tmp/test-logs/$JOB_NAME.log"
58 changes: 0 additions & 58 deletions build-system/scripts/create_dockerhub_manifest

This file was deleted.

23 changes: 9 additions & 14 deletions build-system/scripts/create_ecr_manifest
@@ -11,25 +11,20 @@ set -eu
REPOSITORY=$1
ARCH_LIST=$2

# Ensure ECR repository exists.
retry ensure_repo $REPOSITORY $ECR_REGION refresh_lifecycle
ecr_login

IMAGE_URI=$(calculate_image_uri $REPOSITORY)
echo "Image URI: $IMAGE_URI"
CONTENT_HASH=$(calculate_content_hash $REPOSITORY)
MULTIARCH_IMAGE_URI=$ECR_URL/$REPOSITORY:cache-$CONTENT_HASH

echo "Creating manifest list..."
echo "Multi-arch Image URI: $MULTIARCH_IMAGE_URI"

export DOCKER_CLI_EXPERIMENTAL=enabled

OLD_IFS=$IFS
IFS=','
for A in $ARCH_LIST
do
ARCH_IMAGE=$IMAGE_URI-$A
echo "Adding image $ARCH_IMAGE to manifest list."
retry docker manifest create $IMAGE_URI --amend $ARCH_IMAGE
for A in $ARCH_LIST; do
IMAGE_URI=$(calculate_image_uri $REPOSITORY $A)
echo "Adding image $IMAGE_URI to manifest list $MULTIARCH_IMAGE_URI..."
docker manifest create $MULTIARCH_IMAGE_URI --amend $IMAGE_URI
done
IFS=$OLD_IFS
unset OLD_IFS

retry docker manifest push --purge $IMAGE_URI
retry docker manifest push --purge $MULTIARCH_IMAGE_URI
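
For example, a pipeline job that has already pushed the per-arch cache images could combine them with something like:

# Combine the x86_64 and arm64 cache images into one multi-arch manifest,
# tagged with the bare content hash (no arch suffix).
create_ecr_manifest noir x86_64,arm64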
2 changes: 1 addition & 1 deletion build-system/scripts/deploy
@@ -19,7 +19,7 @@ if check_rebuild cache-$CONTENT_HASH-$DEPLOY_TAG-deployed $REPOSITORY; then
exit 0
fi

deploy_terraform $REPOSITORY ./terraform/$VERSION_TAG "$TO_TAINT"
deploy_terraform $REPOSITORY ./terraform/$DEPLOY_ENV "$TO_TAINT"

# Restart services.
for SERVICE in $SERVICES; do