feat(docker): re-organize the autoware docker containers #4072

Merged: 17 commits, Mar 6, 2024
File renamed without changes.
24 changes: 24 additions & 0 deletions .devcontainer/base/devcontainer.json
@@ -0,0 +1,24 @@
{
  "name": "Autoware",
  "build": {
    "dockerfile": "Dockerfile"
  },
  "remoteUser": "autoware",
  "hostRequirements": {
    "gpu": true
  },
Comment on lines +7 to +9

Collaborator: This can be removed for this non-GPU file. It is not critical, though; if I remember correctly, it just issues a warning in the logs and doesn't block anything.

Collaborator: resolving following #4072 (comment)

Contributor: I'm not good at VS Code with Containers, so I don't know if this affects anything related to RViz. If you think we can remove it safely, I can remove it.

Collaborator @ambroise-arm (Mar 6, 2024): It won't change how things get set up. From https://containers.dev/implementors/json_reference/#min-host-reqs: "you will be presented with a warning if the requirements are not met". Removing this won't affect whether rviz works or not.

Contributor: @ambroise-arm I see. I will let @oguzkaganozt know about this; let's remove it as well in the next PR.

"runArgs": [
"--cap-add=SYS_PTRACE",

Check warning on line 11 in .devcontainer/base/devcontainer.json

View workflow job for this annotation

GitHub Actions / spell-check-differential

Unknown word (PTRACE)
"--security-opt",
"seccomp=unconfined",

Check warning on line 13 in .devcontainer/base/devcontainer.json

View workflow job for this annotation

GitHub Actions / spell-check-differential

Unknown word (seccomp)
"--net=host",
"--volume=/etc/localtime:/etc/localtime:ro"

Check warning on line 15 in .devcontainer/base/devcontainer.json

View workflow job for this annotation

GitHub Actions / spell-check-differential

Unknown word (localtime)

Check warning on line 15 in .devcontainer/base/devcontainer.json

View workflow job for this annotation

GitHub Actions / spell-check-differential

Unknown word (localtime)
],
"customizations": {
"vscode": {
"settings.json": {
"terminal.integrated.profiles.linux": { "bash": { "path": "/bin/bash" } }
}
}
}
}
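Since each variant keeps its own devcontainer.json in a subfolder of `.devcontainer/`, the Dev Containers extension should offer both configurations (Autoware and Autoware-cuda) when you reopen the workspace in a container.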
14 changes: 14 additions & 0 deletions .devcontainer/cuda/Dockerfile
@@ -0,0 +1,14 @@
FROM ghcr.io/autowarefoundation/autoware-openadk:latest-devel-cuda
Collaborator: If, apart from this line, the rest of the file is exactly the same as base/Dockerfile, then I think it would be better to keep a single Dockerfile under `.devcontainer/` and instead have an `ARG TAG` or something like that, in order to use `FROM ghcr.io/autowarefoundation/autoware-openadk:$TAG` and then pass it with `build.args` (https://containers.dev/implementors/json_reference/#image-specific) in the respective devcontainer.json.
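A sketch of that suggestion follows; the shared-file layout and the default tag are assumptions, not something this PR contains:

```dockerfile
# Hypothetical shared .devcontainer/Dockerfile
# TAG is supplied per variant via build.args in each devcontainer.json
ARG TAG=latest-devel
FROM ghcr.io/autowarefoundation/autoware-openadk:$TAG
```

```json
// Hypothetical fragment of .devcontainer/cuda/devcontainer.json
"build": {
  "dockerfile": "../Dockerfile",
  "args": { "TAG": "latest-devel-cuda" }
}
```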

Contributor @xmfcx (Mar 6, 2024): @oguzkaganozt will look into this tomorrow in a follow-up PR. Let's merge this as it is today.

Collaborator: ok


ENV SHELL /bin/bash

ARG USERNAME=autoware
ARG USER_UID=1000
ARG USER_GID=$USER_UID

RUN groupadd --gid $USER_GID $USERNAME \
    && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \
    && apt-get update \
    && apt-get install -y sudo \
    && echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
    && chmod 0440 /etc/sudoers.d/$USERNAME
.devcontainer/cuda/devcontainer.json
@@ -1,5 +1,5 @@
{
-  "name": "Autoware",
+  "name": "Autoware-cuda",
  "build": {
    "dockerfile": "Dockerfile"
  },
@@ -8,11 +8,11 @@
    "gpu": true
  },
  "runArgs": [
    "--cap-add=SYS_PTRACE",
    "--security-opt",
    "seccomp=unconfined",
    "--net=host",
    "--volume=/etc/localtime:/etc/localtime:ro",
    "--gpus",
    "all"
  ],
2 changes: 1 addition & 1 deletion ansible/playbooks/openadk.yaml
@@ -5,7 +5,7 @@
- name: Verify OS
  ansible.builtin.fail:
    msg: Only Ubuntu 22.04 is supported for this branch. Please refer to https://autowarefoundation.github.io/autoware-documentation/main/installation/autoware/source-installation/.
-  when: ansible_distribution == 'Ubuntu' and ansible_distribution_version != '22.04'
+  when: ansible_distribution != 'Ubuntu' or ansible_distribution_version != '22.04'

- name: Print args
  ansible.builtin.debug:
36 changes: 26 additions & 10 deletions docker/README.md
@@ -33,7 +33,7 @@ To install without **NVIDIA GPU** support:

## Usage

-### Runtime Setup
+### Runtime

You can use `run.sh` to run the Autoware runtime container with the map data:
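For reference, a typical invocation, assuming `run.sh` accepts a `--map-path` option pointing at local map data, looks like this:

```bash
# Launch the runtime container with local map data mounted (assumed flags)
./docker/run.sh --map-path ~/autoware_map
```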

@@ -59,7 +59,7 @@ Inside the container, you can run the Autoware simulation by following these tutorials:

[Rosbag Replay Simulation](../../tutorials/ad-hoc-simulation/rosbag-replay-simulation.md).

-### Development Setup
+### Development Environment

You can use [VS Code Remote Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) to develop Autoware in the containerized environment with ease. Or you can use `run.sh` manually to run the Autoware development container with the workspace mounted.

@@ -68,18 +68,34 @@
Install Visual Studio Code's [Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension.
Then reopen the workspace in the container by selecting `Remote-Containers: Reopen in Container` from the Command Palette (`F1`).

-By default devcontainer assumes NVIDIA GPU support, you can change this by deleting these lines within `.devcontainer/devcontainer.json`:
-
-```json
-"hostRequirements": {
-  "gpu": true
-},
-```
-
-```json
-"--gpus", "all"
-```
+Autoware and Autoware-cuda are available containers for development.
+
+If you want to use the CUDA-supported dev container, you need to install the NVIDIA Container Toolkit before opening the workspace in the container:
+
+```bash
+# Add NVIDIA container toolkit GPG key
+sudo apt-key adv --fetch-keys https://nvidia.github.io/libnvidia-container/gpgkey
+sudo gpg --no-default-keyring --keyring /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg --import /etc/apt/trusted.gpg
+
+# Add NVIDIA container toolkit repository
+echo "deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://nvidia.github.io/libnvidia-container/stable/deb/$(dpkg --print-architecture) /" | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
+
+# Update the package list
+sudo apt-get update
+
+# Install NVIDIA Container Toolkit
+sudo apt-get install -y nvidia-container-toolkit
+
+# Add NVIDIA runtime support to docker engine
+sudo nvidia-ctk runtime configure --runtime=docker
+
+# Restart docker daemon
+sudo systemctl restart docker
+```
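After restarting Docker, you can confirm that containers see the GPU with NVIDIA's standard smoke test (a verification step, not part of this PR):

```bash
# The NVIDIA runtime injects nvidia-smi into the stock ubuntu image
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```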

Then, you can use the `Remote-Containers: Reopen in Container` command to open the workspace in the container.


#### Using `run.sh` for Development

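As a sketch of what this invocation looks like, assuming `run.sh` exposes a `--devel` option for the development image:

```bash
# Start the development container with the workspace mounted (assumed flag)
./docker/run.sh --devel
```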