For these exercises, you will be using the host node3 as user root.
From host bastion, ssh to node3.
ssh node3
Use sudo to elevate your privileges.
sudo -i
Verify that you are on the right host for these exercises.
workshop-podman-checkhost.sh
You are now ready to proceed with these exercises.
Now have a look at the general container information.
podman info
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: podman-1.4.2-5.module+el8.1.0+4240+893c1ab8.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.1-dev, commit: unknown'
  Distribution:
    distribution: '"rhel"'
    version: "8.1"
  MemFree: 2770325504
  MemTotal: 3863744512
  OCIRuntime:
    package: runc-1.0.0-60.rc8.module+el8.1.0+4081+b29780af.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 2097147904
  SwapTotal: 2097147904
  arch: amd64
  cpus: 2
  hostname: node3.rhel8.example.com
  kernel: 4.18.0-147.el8.x86_64
  os: linux
  rootless: false
  uptime: 10m 40.82s
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - quay.io
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 0
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes
Now have a look at the local container images.
podman images
Your results should have come back empty, and that’s because we have not imported, loaded, or pulled any container images onto our platform.
Time to pull a container image from our local repository.
podman pull ubi8/ubi
Trying to pull registry.access.redhat.com/ubi8/ubi...
Getting image source signatures
Copying blob ee2244abc66f done
Copying blob befb03b11956 done
Copying config 8121a9f530 done
Writing manifest to image destination
Storing signatures
8121a9f5303be173ae054b7032613d5917d953d7353ff0540d692b2eaa089fbe
Have a look at the image list now.
podman images
REPOSITORY                            TAG      IMAGE ID       CREATED       SIZE
registry.access.redhat.com/ubi8/ubi   latest   8121a9f5303b   11 days ago   240 MB
ℹ️ If you are a subscriber to Red Hat Enterprise Linux, you can pull authentic Red Hat certified images directly from Red Hat’s repository. For example: podman pull rhel7.5 --creds 'username:password'
Pull a few more container images.
podman pull ubi8/ubi-minimal
podman pull ubi8/ubi-init
podman images
REPOSITORY                                    TAG      IMAGE ID       CREATED       SIZE
registry.access.redhat.com/ubi8/ubi-init      latest   0f5485af5398   11 days ago   256 MB
registry.access.redhat.com/ubi8/ubi           latest   8121a9f5303b   11 days ago   240 MB
registry.access.redhat.com/ubi8/ubi-minimal   latest   91d23a64fdf2   11 days ago   108 MB
Container images can also be tagged with convenient (ie: custom) names. This can make it more intuitive to understand what they contain, especially after an image has been customized.
podman tag registry.access.redhat.com/ubi8/ubi myfavorite
podman images
REPOSITORY                                    TAG      IMAGE ID       CREATED       SIZE
registry.access.redhat.com/ubi8/ubi-init      latest   0f5485af5398   11 days ago   256 MB
registry.access.redhat.com/ubi8/ubi           latest   8121a9f5303b   11 days ago   240 MB
localhost/myfavorite                          latest   8121a9f5303b   11 days ago   240 MB
registry.access.redhat.com/ubi8/ubi-minimal   latest   91d23a64fdf2   11 days ago   108 MB
Notice how the image IDs for "ubi" and "myfavorite" are identical: a tag is just another name pointing at the same image.
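Tags can also carry an explicit version component after a colon. A quick sketch; the name and tag here are arbitrary examples, not part of the exercises:
podman tag registry.access.redhat.com/ubi8/ubi myfavorite:v1   # same image ID, one more name
podman images                                                  # now shows localhost/myfavorite with tag v1 as well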
ℹ️ The Red Hat Container Catalog (RHCC) provides a convenient service to locate certified container images built and supported by Red Hat. You can also view the "security evaluation" for each image.
List the images, remove the ubi-init image, then list them again to confirm it is gone.
podman images
podman rmi ubi-init
podman images
REPOSITORY                                    TAG      IMAGE ID       CREATED       SIZE
registry.access.redhat.com/ubi8/ubi           latest   8121a9f5303b   11 days ago   240 MB
localhost/myfavorite                          latest   8121a9f5303b   11 days ago   240 MB
registry.access.redhat.com/ubi8/ubi-minimal   latest   91d23a64fdf2   11 days ago   108 MB
Here is a list of the fundamental podman commands and their purpose (a short lifecycle sketch follows the list):
- podman images - list images
- podman ps - list running containers
- podman pull - pull (copy) a container image from a repository (ie: Red Hat and/or Docker Hub)
- podman run - run a container
- podman inspect - view facts about a container
- podman logs - display the logs of a container (can be used with --follow)
- podman rm - remove one or more containers
- podman rmi - remove one or more images
- podman stop - stop one or more containers
- podman kill $(podman ps -q) - kill all running containers
- podman rm $(podman ps -a -q) - delete all stopped containers
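To see how a few of these fit together, here is a minimal lifecycle sketch. The container name 'my-test' and the 'sleep 300' payload are just illustrative choices, not part of the exercises:
podman run -d --name my-test ubi sleep 300   # start a named container in the background
podman ps                                    # confirm it shows as running
podman logs my-test                          # empty; 'sleep' writes no output
podman stop my-test                          # stop it
podman rm my-test                            # and remove it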
Now for the first exercise: run a single command in a container and let it exit.
podman run ubi echo "hello world"
hello world
Well that was really boring! What did we learn from this? For starters, you should have noticed how fast the container launched and then concluded (a timing sketch follows the list below). Compare that with traditional virtualization, where:
- you power up,
- wait for the BIOS,
- wait for GRUB,
- wait for the kernel to boot and initialize resources,
- pivot root,
- launch all the services, and then finally
- run the application.
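If you want to put a rough number on that startup speed, the shell's time builtin works fine here; this is a quick sketch, not part of the graded exercises:
time podman run --rm ubi echo "hello world"   # 'real' is total wall-clock time, typically well under a second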
Let us run a few more commands to see what else we can glean.
podman ps -a
CONTAINER ID   IMAGE                              COMMAND            CREATED          STATUS                      PORTS   NAMES
249de20ebdb0   core.example.com:5000/ubi:latest   echo hello world   18 seconds ago   Exited (0) 17 seconds ago           objective_kepler
Now let us run the exact same command as before to print "hello world".
podman run ubi echo "hello world"
hello world
Check out 'podman info' one more time and you should notice a few changes.
podman info
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: podman-1.4.2-5.module+el8.1.0+4240+893c1ab8.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.1-dev, commit: unknown'
  Distribution:
    distribution: '"rhel"'
    version: "8.1"
  MemFree: 2372833280
  MemTotal: 3863744512
  OCIRuntime:
    package: runc-1.0.0-60.rc8.module+el8.1.0+4081+b29780af.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 2097147904
  SwapTotal: 2097147904
  arch: amd64
  cpus: 2
  hostname: node3.rhel8.example.com
  kernel: 4.18.0-147.el8.x86_64
  os: linux
  rootless: false
  uptime: 21m 46.96s
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - quay.io
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 2
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 2
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes
You should notice that the number of containers (ContainerStore) has incremented to 2, and that the ImageStore count has grown as well.
Run 'podman ps -a' to see the IDs of the exited containers.
podman ps -a
CONTAINER ID   IMAGE                              COMMAND            CREATED          STATUS                      PORTS   NAMES
e3f139ef0942   core.example.com:5000/ubi:latest   echo hello world   35 seconds ago   Exited (0) 34 seconds ago           cocky_golick
249de20ebdb0   core.example.com:5000/ubi:latest   echo hello world   2 minutes ago    Exited (0) 2 minutes ago            objective_kepler
Using the container IDs from the above output, you can now clean up the 'exited' containers.
podman rm <CONTAINER-ID> <CONTAINER-ID>
ℹ️ If you are lazy, you can also clean up the containers with podman rm --all
Now you should be able to run 'podman ps -a' again, and the results should come back empty.
podman ps -a
podman run ubi cat /proc/sys/kernel/hostname
d8736f5cbd35
So what we have learned here is that the hostname in the container’s namespace is NOT the same as the host platform (node3.rhel8.example.com). It is unique and, by default, identical to the container’s ID. You can verify this with 'podman ps -a'.
podman ps -a
CONTAINER ID   IMAGE                                        COMMAND                 CREATED          STATUS                      PORTS   NAMES
d8736f5cbd35   registry.access.redhat.com/ubi8/ubi:latest   cat /proc/sys/ker...    30 seconds ago   Exited (0) 30 seconds ago           dazzling_mendeleev
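As an aside, if you would rather choose the hostname yourself, podman run accepts a --hostname flag; 'mycontainer' below is just an example name:
podman run --rm --hostname mycontainer ubi cat /proc/sys/kernel/hostname   # prints: mycontainer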
Let us have a look at the process table from within the container’s namespace.
podman run ubi ps -ef
Error: container_linux.go:345: starting container process caused "exec: \"ps\": executable file not found in $PATH" : OCI runtime error
What just happened?
For the most part, containers are not meant for interactive (user) sessions. In this instance, the image that we are using (ie: ubi) does not have the traditional commandline utilities a user might expect. Common tools to configure network interfaces like 'ip' simply aren’t there.
So for this exercise, we leverage something called a 'bind mount' to effectively mirror a portion of the host’s filesystem into the container’s namespace. Bind mounts are declared using the '-v' option. In the example below, /usr/bin from the host will be exposed and accessible to the container’s namespace, mounted at '/usr/bin' (ie: /usr/bin:/usr/bin).
ℹ️ Using bind mounts is generally suitable for debugging, but not a good practice as a design decision for enterprise container strategies. After all, creating dependencies between applications and host operating systems is what we are trying to get away from.
podman run -v /usr/bin:/usr/bin -v /usr/lib64:/usr/lib64 ubi /bin/ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 20:33 ?        00:00:00 /bin/ps -ef
Notice that all of the processes belonging to the host itself are absent. The programs running in the container’s namespace are isolated from the rest of the host. From the container’s perspective, the process in the container is the only process running.
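One refinement worth knowing: a bind mount can be made read-only by appending ':ro' to the '-v' argument, which keeps a debugging container from modifying the host paths. A variation on the command above:
podman run -v /usr/bin:/usr/bin:ro -v /usr/lib64:/usr/lib64:ro ubi /bin/ps -ef   # same output, host paths mounted read-only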
Now let us run a command to report the network configuration from within a container’s namespace.
podman run -v /usr/sbin:/usr/sbin -v /usr/lib64:/usr/lib64 ubi /usr/sbin/ip addr show eth0
3: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 8a:ce:7f:ea:c7:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.88.0.8/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::88ce:7fff:feea:c79a/64 scope link tentative
       valid_lft forever preferred_lft forever
A couple more commands to understand the network setup.
Let us begin by examining the '/etc/hosts' file.
ℹ️ Note that we introduce the '--rm' flag to our podman command. This tells podman to automatically clean up after the container exits.
podman run --rm ubi cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.88.0.9   aa2204f3cd29
How does the container resolve hostnames (ie: DNS)?
podman run --rm ubi cat /etc/resolv.conf
search example.com
nameserver 10.0.0.2
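The resolver configuration is also adjustable per container: podman run has a --dns flag that overrides the nameserver written into the container's resolv.conf. A quick sketch; 1.1.1.1 is just an example nameserver:
podman run --rm --dns 1.1.1.1 ubi cat /etc/resolv.conf   # resolv.conf inside the container now points at 1.1.1.1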
Take a look at the routing table for the container’s namespace. Pay attention now: the route command lives in '/usr/sbin', hence the bind mount below.
podman run -v /usr/sbin:/usr/sbin --rm ubi route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.88.0.1       0.0.0.0         UG    0      0        0 eth0
10.88.0.0       0.0.0.0         255.255.0.0     U     0      0        0 eth0
Finally, look at the filesystem(s) in the container’s namespace.
podman run ubi df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         8.0G  1.9G  6.2G  24% /
tmpfs            64M     0   64M   0% /dev
tmpfs           1.9G  8.6M  1.9G   1% /etc/hosts
shm              63M     0   63M   0% /dev/shm
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs           1.9G     0  1.9G   0% /proc/acpi
tmpfs           1.9G     0  1.9G   0% /proc/scsi
tmpfs           1.9G     0  1.9G   0% /sys/firmware
You were introduced to Bind-Mounts in the previous section. Let us examine what the filesystem looks like with an active Bind-Mount.
podman run -v /usr/bin:/usr/bin ubi df -h
Filesystem             Size  Used Avail Use% Mounted on
overlay                8.0G  1.9G  6.2G  24% /
tmpfs                   64M     0   64M   0% /dev
tmpfs                  1.9G  8.6M  1.9G   1% /etc/hosts
/dev/mapper/rhel-root  8.0G  1.9G  6.2G  24% /usr/bin
shm                     63M     0   63M   0% /dev/shm
tmpfs                  1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs                  1.9G     0  1.9G   0% /proc/acpi
tmpfs                  1.9G     0  1.9G   0% /proc/scsi
tmpfs                  1.9G     0  1.9G   0% /sys/firmware
Notice above how there is now a dedicated mount point for /usr/bin. Bind-Mounts can be a very powerful tool (primarily for diagnostics) to temporarily inject tools and files that are not normally part of a container image. Remember, using bind mounts as a design decision for enterprise container strategies is folly.
Let us clean up your environment before proceeding.
podman kill --all
podman rm --all
A configuration file for a podman build has already been supplied for your system. Have a look at the contents of that config.
cat /root/custom_image.OCIFile
FROM ubi8/ubi
RUN yum install -y httpd
RUN yum clean all
RUN echo "The Web Server is Running" > /var/www/html/index.html
EXPOSE 80
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["/usr/sbin/httpd"]
Notice a few things about the configuration:
- our image is based on ubi8/ubi
- the build process will install an additional package, httpd, along with its dependencies
- httpd is configured by default to run on port 80, so that is the port we will expose
- the build will create a file /var/www/html/index.html with the contents "The Web Server is Running"
Now it’s time to build the new container image.
podman build -t custom_image --file custom_image.OCIFile
Once this completes, run:
podman images
REPOSITORY                                    TAG      IMAGE ID       CREATED          SIZE
localhost/custom_image                        latest   8544c2e4a901   10 minutes ago   273 MB
localhost/myfavorite                          latest   8121a9f5303b   12 days ago      240 MB
registry.access.redhat.com/ubi8/ubi           latest   8121a9f5303b   12 days ago      240 MB
registry.access.redhat.com/ubi8/ubi-minimal   latest   91d23a64fdf2   12 days ago      108 MB
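A quick aside on how ENTRYPOINT and CMD interact: CMD supplies default arguments that are appended to ENTRYPOINT, and anything you type after the image name replaces CMD. A small sketch, assuming the build above succeeded:
podman run --rm custom_image -v   # runs '/usr/sbin/httpd -v', printing the httpd version instead of starting the server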
Time to deploy the image. A few things to note here:
- we are going to name the deployment "webserver"
- we are connecting localhost port 8080 to port 80 of the deployed container
- the deployment is using 'detached' mode
podman run -d --name="webserver" -p 8080:80 custom_image
To view some facts about the running container, you use 'podman inspect'.
podman inspect webserver
This reveals quite a bit of information, which you can drill into using additional format arguments. For example, let us locate the IP address for the container.
podman inspect --format '{{ .NetworkSettings.IPAddress }}' webserver
You can see the IP address that was assigned to the container.
We can apply the same kind of filter to any value in the JSON output. Try a few different ones.
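For instance, here are two more filters that should map to fields in the inspect JSON; verify the exact field names against your own 'podman inspect webserver' output, since they can vary between podman versions:
podman inspect --format '{{ .State.Status }}' webserver   # e.g. 'running'
podman inspect --format '{{ .ImageName }}' webserver      # the image the container was created from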
Now confirm that the web server inside the container is answering on the published port.
curl http://localhost:8080/
The Web Server is Running
Let us look at the processes running on the host.
pgrep -laf httpd
8662 httpd -D FOREGROUND
8703 httpd -D FOREGROUND
8704 httpd -D FOREGROUND
8705 httpd -D FOREGROUND
8711 httpd -D FOREGROUND
8717 httpd -D FOREGROUND
And finally let’s look at some networking info.
netstat -utlpn | grep 8080
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 28298/conmon
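The netstat utility comes from the legacy net-tools package; on hosts without it, 'ss' from iproute accepts the same flags used here:
ss -utlpn | grep 8080   # should likewise show conmon listening on 0.0.0.0:8080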
Now let’s introduce a commandline utility 'lsns' to check out the namespaces.
lsns
        NS TYPE   NPROCS   PID USER   COMMAND
4026531835 cgroup    104     1 root   /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531836 pid        99     1 root   /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531837 user      104     1 root   /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531838 uts        99     1 root   /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531839 ipc        99     1 root   /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531840 mnt        93     1 root   /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531860 mnt         1    21 root   kdevtmpfs
4026531992 net        99     1 root   /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026532136 mnt         1   728 root   /usr/lib/systemd/systemd-udevd
4026532314 mnt         2   950 root   /sbin/auditd
4026532315 mnt         1   993 chrony /usr/sbin/chronyd
4026532316 mnt         1  1038 root   /usr/sbin/NetworkManager --no-daemon
4026532388 net         5 30921 root   /usr/sbin/httpd -D FOREGROUND
4026532449 mnt         5 30921 root   /usr/sbin/httpd -D FOREGROUND
4026532450 uts         5 30921 root   /usr/sbin/httpd -D FOREGROUND
4026532451 ipc         5 30921 root   /usr/sbin/httpd -D FOREGROUND
4026532452 pid         5 30921 root   /usr/sbin/httpd -D FOREGROUND
We see that the running httpd processes have their own mnt, uts, ipc, pid, and net namespaces.
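To narrow that view down to a single process, lsns accepts a -p flag. The PID below comes from the sample output above; substitute the httpd PID from your own system:
lsns -p 30921   # list only the namespaces that PID 30921 belongs to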
Since we explored namespaces earlier, we may as well have a look at the control-groups aligned with our process.
systemd-cgls
... SNIP ...
└─machine.slice
  ├─libpod-conmon-c726b2422ba73c0eb904c283a50a66e6e47cb42c3b633075e39f40d268026c6c.scope
  │ └─30909 /usr/libexec/podman/conmon -s -c c726b2422ba73c0eb904c283a50a66e6e47cb42c3b633075e39f40d26802>
  └─libpod-c726b2422ba73c0eb904c283a50a66e6e47cb42c3b633075e39f40d268026c6c.scope
    ├─30921 /usr/sbin/httpd -D FOREGROUND
    ├─30934 /usr/sbin/httpd -D FOREGROUND
    ├─30935 /usr/sbin/httpd -D FOREGROUND
    ├─30936 /usr/sbin/httpd -D FOREGROUND
    └─30937 /usr/sbin/httpd -D FOREGROUND
What we can tell is that our container is bound to a cgroup under "machine.slice". Otherwise, there is nothing remarkable to discern here.
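If the full tree is too noisy, systemd-cgls also accepts a control group path to limit the output; a quick sketch:
systemd-cgls machine.slice   # show only the slice that holds the container scopes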
When you are done exploring, stop and remove the webserver container, and then clear out everything else.
podman stop webserver
podman rm webserver
podman kill --all
podman rm --all
podman rmi --all --force