Join a swarm:docker swarm join --token SWMTKN-1-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-XXXXXXXXXXXXXXXXXX [master ip]:[port given]
Remove a node from swarm:docker node rm [node]
Leave a swarm:docker swarm leave
Note: on a master node you need to force it (docker swarm leave -f)
Deploy a service to the cluster with a name:docker service create --name [name] [image] [command]
Remove a service:docker service rm [name]
List nodes, status:docker node ls
List processes on a node:docker node ps [self|node]
Output information about a service:docker service inspect [name]
More human-readable format:docker service inspect --pretty [name]
List services:docker service ls
Show logs of a service:docker service logs [options] [name]
Status of a service:docker service ps [name|ID]
Alter replicas of a named service to the integer provided:docker service scale [name]=[integer]
Update a Container
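This heading has no command listed. A minimal sketch using docker service update, which is the standard way to change a running service; the service name "web" and the image tag here are hypothetical:

```shell
# Roll a service over to a new image version (service name "web" is hypothetical).
docker service update --image nginx:1.25 web

# Other properties can be changed the same way, e.g. publishing an extra port:
docker service update --publish-add 8080:80 web
```

Updates are applied as a rolling restart of the service's tasks, so replicas are replaced one at a time by default.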
Create a Long-Running Generic Container
Sometimes you just need a container that lives in a swarm for a long time. If you have the container execute /bin/ash or similar, it will simply exit and never show as "up," since the command you asked it to run has already completed. Instead, use a long-running process such as top.
$ docker service create --name shelltest arm32v6/alpine:latest top
Create a Swarm
Pick a node to be the master. On that node initialize the cluster.
$ docker swarm init
It will print a join token. On every other worker node in the cluster, run the swarm join command (above) with that token and the master's address.
Known issue: nodes show as "down" after a restart, even though inspecting them reports availability "active." Currently the only solution is to completely remove and redeploy the swarm. On the master, remove every node ($ docker node rm [node]) and then leave the swarm ($ docker swarm leave --force). On each worker, run $ docker swarm leave. Then follow the creation steps once more to re-add all nodes and recreate the master.
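The teardown-and-rebuild procedure above, collected into one copy-pasteable sequence; the node names node1 and node2 are hypothetical stand-ins for your own workers:

```shell
# On the master: remove each stale worker, then dissolve the swarm.
docker node rm node1 node2      # hypothetical names; list yours with `docker node ls`
docker swarm leave --force

# On each worker:
docker swarm leave

# Back on the intended master: re-initialize the cluster.
docker swarm init
# Then run the printed `docker swarm join --token ...` command on every worker.
```

Because the join token is regenerated by `docker swarm init`, the old token cannot be reused; workers must join with the newly printed one.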