Docker Engine 1.12 includes swarm mode for natively managing a cluster of Docker Engines called a Swarm. Use the Docker CLI to create a swarm, deploy application services to a swarm, and manage swarm behavior. Complete details at: https://docs.docker.com/engine/swarm/. It’s important to understand the Swarm mode key concepts of Swarm, Node, Service, Task and Load Balancing.
A Swarm is typically a cluster of multiple Docker Engines, but for simplicity we'll run a single-node Swarm.
Initialize Swarm mode using the following command:
docker swarm init
This starts a Swarm manager. By default, manager nodes are also workers, but they can be configured to be manager-only.
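If you later want to add worker nodes to this Swarm, the manager can print the exact join command for you (the token and manager address in the output are specific to your cluster):
docker swarm join-token worker
Running the printed docker swarm join command on another Docker Engine joins it to the Swarm as a worker.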
Find some information about this one-node cluster using the docker info command:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 38
Server Version: 1.12.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 321
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host bridge overlay null
Swarm: active
NodeID: 3sfo3bo6n8vhqzh0r18c2m6f4
Is Manager: true
ClusterID: 9nt6u2xaqsq2j5sb4jzmk4rea
Managers: 1
Nodes: 1
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 192.168.65.2
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.20-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.952 GiB
Name: moby
ID: EZSJ:SFEA:65NS:CWJI:NHP4:TFHE:HF43:X6KR:FD2T:GHUQ:QN6T:N72P
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 35
Goroutines: 121
System Time: 2016-09-30T20:54:22.545443508Z
EventsListeners: 1
No Proxy: *.local, 169.254/16
Username: arungupta
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
This cluster has 1 node, and that is a manager.
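You can also list the nodes in the Swarm, and confirm that this node is a manager, with:
docker node ls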
This section describes how to use docker-compose. Since docker-compose does not work with Swarm mode yet, the effect of these commands is to schedule containers on the Swarm manager that you target. However, docker-compose can schedule containers across a legacy Swarm 1.0 cluster.
If you want to schedule containers across a Swarm mode cluster, go to the Swarm mode section at the end to learn how to use docker service create.
Create a new directory and cd to it:
mkdir compose-swarm
cd compose-swarm
Create a new Compose definition using the configuration file shown below.
version: '2'
services:
  db:
    image: couchbase
    ports:
      - 8091:8091
      - 8092:8092
      - 8093:8093
      - 11210:11210
  web:
    image: arungupta/wildfly-admin
    environment:
      - COUCHBASE_URI=db
    ports:
      - 8080:8080
      - 9990:9990
In this Compose file:
- The couchbase image is used for the Couchbase server.
- The arungupta/wildfly-admin image is used because it binds WildFly's management interface to all network interfaces and also exposes port 9990. This enables the WildFly Maven Plugin to be used to deploy the application.
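Before starting the application, you can optionally validate this Compose file and print the resolved configuration with:
docker-compose config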
This application can be started as:
docker-compose up -d
This shows the output as:
Creating network "composeswarm_default" with the default driver
Pulling web (arungupta/wildfly-admin:latest)...
latest: Pulling from arungupta/wildfly-admin
a3ed95caeb02: Pull complete
fa32110542d8: Pull complete
15db7fa11ba9: Pull complete
a701e6df8ee7: Pull complete
6e1e0efdee86: Pull complete
191bf863124f: Pull complete
16ade257aae0: Pull complete
df6d4a72b040: Pull complete
Digest: sha256:a86d9e3807dd002ef070eea16bb90ae55d966da0a53fdc8ab121dcb505db1a20
Status: Downloaded newer image for arungupta/wildfly-admin:latest
Pulling db (couchbase:latest)...
latest: Pulling from library/couchbase
56eb14001ceb: Already exists
7ff49c327d83: Already exists
6e532f87f96d: Already exists
3ce63537e70c: Already exists
b8145bb24a3f: Already exists
e6e203bac6d0: Already exists
566dfc7d9e85: Already exists
a2c938a8a28b: Already exists
c6f4b64cd81f: Already exists
9471cd6d0816: Already exists
b5dbff584fd2: Already exists
cb803d8435bd: Already exists
Digest: sha256:c28ef137a77914333cd65e5cdf187e38507627d83caa06f4748ca0f596e49bea
Status: Downloaded newer image for couchbase:latest
Creating composeswarm_db_1
Creating composeswarm_web_1
WildFly and Couchbase containers are started on this node. If the Swarm cluster has multiple nodes, the containers will be started on different nodes based upon the default spread strategy.
A new overlay network is created. This allows multiple containers on different hosts to communicate with each other.
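You can list the networks and inspect the one created for this application (named composeswarm_default in the output above):
docker network ls
docker network inspect composeswarm_default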
Connect to the Swarm cluster and verify that WildFly and Couchbase are running using docker-compose ps:
docker-compose ps
Name Command State Ports
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
composeswarm_db_1 /entrypoint.sh couchbase-s ... Up 11207/tcp, 0.0.0.0:11210->11210/tcp,
11211/tcp, 18091/tcp, 18092/tcp,
18093/tcp, 0.0.0.0:8091->8091/tcp,
0.0.0.0:8092->8092/tcp,
0.0.0.0:8093->8093/tcp, 8094/tcp
composeswarm_web_1 /opt/jboss/wildfly/bin/sta ... Up 0.0.0.0:8080->8080/tcp,
0.0.0.0:9990->9990/tcp
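If either container is not in the Up state, its logs can help; for example, to check the Couchbase service (add -f to follow the output):
docker-compose logs db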
Clone https://github.com/arun-gupta/couchbase-javaee.git. This repository contains a simple Java EE application that is deployed on WildFly and provides a REST API over a sample bucket in Couchbase.
The Couchbase server can be configured using the Couchbase REST API. The application contains a Maven profile that configures the Couchbase server, loads the travel-sample bucket, and creates an empty bucket. This can be invoked as:
mvn install -Pcouchbase -Ddocker.host=localhost
. . .
[INFO] --- exec-maven-plugin:1.4.0:exec (Configure memory) @ couchbase-javaee ---
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying ::1...
* Connected to localhost (::1) port 8091 (#0)
> POST /pools/default HTTP/1.1
> Host: localhost:8091
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 36
> Content-Type: application/x-www-form-urlencoded
>
} [36 bytes data]
* upload completely sent off: 36 out of 36 bytes
< HTTP/1.1 200 OK
< Server: Couchbase Server
< Pragma: no-cache
< Date: Fri, 15 Jul 2016 22:56:30 GMT
< Content-Length: 0
< Cache-Control: no-cache
<
100 36 0 0 100 36 0 4309 --:--:-- --:--:-- --:--:-- 4500
* Connection #0 to host localhost left intact
[INFO]
[INFO] --- exec-maven-plugin:1.4.0:exec (Configure services) @ couchbase-javaee ---
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying ::1...
* Connected to localhost (::1) port 8091 (#0)
> POST /node/controller/setupServices HTTP/1.1
> Host: localhost:8091
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 26
> Content-Type: application/x-www-form-urlencoded
>
} [26 bytes data]
* upload completely sent off: 26 out of 26 bytes
< HTTP/1.1 200 OK
< Server: Couchbase Server
< Pragma: no-cache
< Date: Fri, 15 Jul 2016 22:56:30 GMT
< Content-Length: 0
< Cache-Control: no-cache
<
100 26 0 0 100 26 0 3474 --:--:-- --:--:-- --:--:-- 3714
* Connection #0 to host localhost left intact
[INFO]
[INFO] --- exec-maven-plugin:1.4.0:exec (Setup credentials) @ couchbase-javaee ---
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying ::1...
* Connected to localhost (::1) port 8091 (#0)
> POST /settings/web HTTP/1.1
> Host: localhost:8091
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 50
> Content-Type: application/x-www-form-urlencoded
>
} [50 bytes data]
* upload completely sent off: 50 out of 50 bytes
< HTTP/1.1 200 OK
< Server: Couchbase Server
< Pragma: no-cache
< Date: Fri, 15 Jul 2016 22:56:30 GMT
< Content-Type: application/json
< Content-Length: 39
< Cache-Control: no-cache
<
{ [39 bytes data]
100 89 100 39 100 50 3349 4293 --:--:-- --:--:-- --:--:-- 4545
* Connection #0 to host localhost left intact
{"newBaseUri":"http://localhost:8091/"}[INFO]
[INFO] --- exec-maven-plugin:1.4.0:exec (Install travel-sample bucket) @ couchbase-javaee ---
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying ::1...
* Connected to localhost (::1) port 8091 (#0)
* Server auth using Basic with user 'Administrator'
> POST /sampleBuckets/install HTTP/1.1
> Host: localhost:8091
> Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 17
> Content-Type: application/x-www-form-urlencoded
>
} [17 bytes data]
* upload completely sent off: 17 out of 17 bytes
< HTTP/1.1 202 Accepted
< Server: Couchbase Server
< Pragma: no-cache
< Date: Fri, 15 Jul 2016 22:56:30 GMT
< Content-Type: application/json
< Content-Length: 2
< Cache-Control: no-cache
<
{ [2 bytes data]
100 19 100 2 100 17 51 435 --:--:-- --:--:-- --:--:-- 447
* Connection #0 to host localhost left intact
[][INFO]
[INFO] --- exec-maven-plugin:1.4.0:exec (Create a new book bucket) @ couchbase-javaee ---
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying ::1...
* Connected to localhost (::1) port 8091 (#0)
* Server auth using Basic with user 'Administrator'
> POST /pools/default/buckets HTTP/1.1
> Host: localhost:8091
> Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 60
> Content-Type: application/x-www-form-urlencoded
>
} [60 bytes data]
* upload completely sent off: 60 out of 60 bytes
< HTTP/1.1 202 Accepted
< Server: Couchbase Server
< Pragma: no-cache
< Location: /pools/default/buckets/books
< Date: Fri, 15 Jul 2016 22:56:31 GMT
< Content-Length: 0
< Cache-Control: no-cache
<
100 60 0 0 100 60 0 7577 --:--:-- --:--:-- --:--:-- 8571
* Connection #0 to host localhost left intact
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
. . .
Wait a few seconds for the travel-sample bucket to be created and populated, and for its indexes to be built.
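If you want to confirm that the travel-sample bucket is ready before deploying the application, one way is to query the Couchbase REST API directly, using the same Administrator/password credentials configured by the Maven profile:
curl -u Administrator:password http://localhost:8091/pools/default/buckets/travel-sample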
Deploy the application to WildFly by specifying three parameters:
- Host IP address where WildFly is running (localhost in this example)
- Username of a user in WildFly's administrative realm
- Password of the user specified in WildFly's administrative realm
mvn install -Pwildfly -Dwildfly.hostname=localhost -Dwildfly.username=admin -Dwildfly.password=Admin#007
. . .
Jul 15, 2016 2:58:28 PM org.xnio.Xnio <clinit>
INFO: XNIO version 3.3.1.Final
Jul 15, 2016 2:58:28 PM org.xnio.nio.NioXnio <clinit>
INFO: XNIO NIO Implementation Version 3.3.1.Final
Jul 15, 2016 2:58:28 PM org.jboss.remoting3.EndpointImpl <clinit>
INFO: JBoss Remoting version 4.0.9.Final
[INFO] Authenticating against security realm: ManagementRealm
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
. . .
Now that the WildFly and Couchbase servers have been configured, let's access the application. You need to specify the IP address of the host where WildFly is running (localhost in our case).
The endpoint can be accessed in this case as:
curl http://localhost:8080/couchbase-javaee/resources/airline
The output is shown as:
[{"travel-sample":{"id":10,"iata":"Q5","icao":"MLA","name":"40-Mile Air","callsign":"MILE-AIR","type":"airline","country":"United States"}}, {"travel-sample":{"id":10123,"iata":"TQ","icao":"TXW","name":"Texas Wings","callsign":"TXW","type":"airline","country":"United States"}}, {"travel-sample":{"id":10226,"iata":"A1","icao":"A1F","name":"Atifly","callsign":"atifly","type":"airline","country":"United States"}}, {"travel-sample":{"id":10642,"iata":null,"icao":"JRB","name":"Jc royal.britannica","callsign":null,"type":"airline","country":"United Kingdom"}}, {"travel-sample":{"id":10748,"iata":"ZQ","icao":"LOC","name":"Locair","callsign":"LOCAIR","type":"airline","country":"United States"}}, {"travel-sample":{"id":10765,"iata":"K5","icao":"SQH","name":"SeaPort Airlines","callsign":"SASQUATCH","type":"airline","country":"United States"}}, {"travel-sample":{"id":109,"iata":"KO","icao":"AER","name":"Alaska Central Express","callsign":"ACE AIR","type":"airline","country":"United States"}}, {"travel-sample":{"id":112,"iata":"5W","icao":"AEU","name":"Astraeus","callsign":"FLYSTAR","type":"airline","country":"United Kingdom"}}, {"travel-sample":{"id":1191,"iata":"UU","icao":"REU","name":"Air Austral","callsign":"REUNION","type":"airline","country":"France"}}, {"travel-sample":{"id":1203,"iata":"A5","icao":"RLA","name":"Airlinair","callsign":"AIRLINAIR","type":"airline","country":"France"}}]
This shows 10 airlines from the travel-sample bucket.
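The JSON is returned as a single line; if you have jq installed, you can pretty-print it:
curl -s http://localhost:8080/couchbase-javaee/resources/airline | jq .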
Shutdown the application:
docker-compose down
Stopping composeswarm_web_1 ... done
Stopping composeswarm_db_1 ... done
Removing composeswarm_web_1 ... done
Removing composeswarm_db_1 ... done
Removing network composeswarm_default
This stops and removes the containers for each service. It also deletes any networks that were created as part of this application.
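You can confirm the cleanup by running docker-compose ps again (it should list no containers) and by checking that composeswarm_default no longer appears in the output of:
docker network ls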
If you use Swarm mode, you cannot use docker-compose and need to use docker service create instead. Here is how to schedule the example application services:
docker network create --driver overlay javaee
docker service create --name db \
--network javaee \
arungupta/couchbase
docker service create --name javaee \
--network javaee \
--publish 8090:8080 \
--replicas 3 \
--env COUCHBASE_URI=db \
chanezon/wildfly-couchbase-javaee7
This creates an overlay network named javaee that spans the cluster, and then schedules the db and javaee services on that network. The service name db is used as a DNS alias, so the web application can reach Couchbase over the javaee network; the db ports do not need to be exposed outside of the network. The javaee service schedules three replicas on nodes across the cluster, and Swarm's built-in routing mesh exposes port 8090 on each node in the cluster for that service. This means that the three containers running our web application will be load balanced from any node in the cluster on port 8090.
You can verify this on your local installation by listing the services and inspecting the javaee service:
docker service ls
docker service inspect javaee
You can still use docker ps and docker logs on the containers implementing the tasks for the services to watch their logs.
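To see which nodes and containers are running the tasks for a service, list its tasks; for example:
docker service ps javaee
The container names shown there can then be used with docker ps and docker logs on the corresponding node.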
To test that the application started successfully:
curl http://localhost:8090/airlines/resources/airline
If you use Docker for AWS or Docker for Azure beta, the published port will be exposed by the external load balancer of your cluster automatically.
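If you want to change the number of replicas, you can scale the service; for example, to run five replicas of the web application:
docker service scale javaee=5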
When you are done, remove the services with
docker service rm javaee db