Docker and commands to keep handy

Run, stop, kill, stats, history, pause, unpause, events, etc.

Nikitha Gullapalli
5 min read · Jun 7, 2023
Docker vs deploying apps without docker

Deployment Before and After Docker

Docker is a lightweight platform for deploying applications. Let's look at how deployment worked before and after Docker.

Before Docker, if I had to deploy an app, my hardware, OS, and software all had to complement each other. Say I have two applications: one needs Windows and one needs macOS. I would have to create a virtual machine for each and then deploy the apps. This means my machine runs the parent OS + a Windows OS + a macOS + the software for all applications.

With Docker, this is simplified. I don't care what the requirements are. Everything the application needs to run is packaged into an image (all the dependencies, jars, etc.). All I need to do is run the image (a running image is called a container). This means my machine runs the parent OS + the software for all applications. That is two fewer OSes compared to the example above, hence lightweight. This makes it faster, with less configuration and lower memory usage.

This also makes applications independent of each other. I can write application 1 in Python and application 2 in Java, yet run both using Docker. Docker does not care what languages we use; it simply runs the image.
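For example, two apps in two different languages run the exact same way (the image tags here are my own picks, not from the article; any official Python and Java images work):

```shell
# Docker does not care about the language inside the image
docker run --rm python:3.12 python -c "print('hello from python')"
docker run --rm eclipse-temurin:17 java -version
```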

To deploy an application with docker:

  • Create a docker image for the microservice
  • The docker image contains everything the microservice needs to run -> application runtime (JDK, NodeJS, or Python), application code, and dependencies
  • Run the image as a container the same way on any infrastructure -> local machine, corporate data center, cloud (AWS or Google Cloud)
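As a sketch of the first step, a minimal Dockerfile for a Java microservice could look like this (the base image and jar path are assumptions for illustration, not from the course):

```dockerfile
# Minimal sketch: package a pre-built Spring Boot jar into an image
FROM eclipse-temurin:17-jre
COPY target/todo-rest-api-h2-1.0.0.RELEASE.jar app.jar
EXPOSE 5000
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Build it with docker build -t <YourRepositoryName>/todo-rest-api-h2:1.0.0.RELEASE . and the resulting image runs anywhere docker does.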

We will also look at how to run distributed tracing in docker to have all microservices' information in one place.

Here are some commonly used docker commands.


Docker commands : Images

//List the images stored locally in the docker daemon
docker images
//download image from registry to local
docker pull <image name:tag>
e.g: docker pull mysql:latest will pull mysql image from registry to local
//Tag an image locally
docker tag <repositoryName:tag> <repositoryName:new tag>
e.g: docker tag <YourRepositoryName>/todo-rest-api-h2:1.0.0.RELEASE <YourRepositoryName>/todo-rest-api-h2:latest
//mysql is an official image. Say I want to search the registry for mysql images
docker search mysql
/*see the history of an image. Can get <imageID> and <repositoryName:tag> 
from [docker images] command */
docker history <imageID>
e.g: docker history f8049a029560

or

docker history <repositoryName:tag>
e.g: docker history mysql:latest


/*Inspect an image in more detail: tags, repo, creation details, etc. */
docker image inspect <imageID>
// Remove an image from local
docker image remove <imageID>
e.g: docker image remove f8049a029560

Docker commands : Container

A running image is called a container.

/* Run an image to create a container. Returns the containerID.
-p: publish port (<host port>:<container port>), -d: detached
*/
docker run -p <host port>:<container port> -d <repositoryName:tag>
e.g: docker run -p 5000:5000 -d <YourRepositoryName>/todo-rest-api-h2:1.1.0.RELEASE


/* Run an image to create a container with a restart policy. Returns the containerID.
-p: port, -d: detached,
--restart=always (default is no) => the container is restarted every time
the docker daemon restarts, even if it had exited
*/
docker run -p <host port>:<container port> -d --restart=always <repositoryName:tag>
e.g: docker run -p 5000:5000 -d --restart=always <YourRepositoryName>/todo-rest-api-h2:1.1.0.RELEASE


/* Run an image to create a container with resource limits. Returns the containerID.
-p: port, -d: detached,
-m: memory limit, --cpu-quota: cpu quota. Max available is 100,000 (100% of one CPU)
*/
//This container can use a maximum of 512 MB of memory
//This container gets 5% of the cpu, that is 5% of 100,000 = 5000
docker run -p <host port>:<container port> -m <memory> --cpu-quota <quota> -d <repositoryName:tag>
e.g: docker run -p 5000:5000 -m 512m --cpu-quota 5000 -d <YourRepositoryName>/todo-rest-api-h2:1.1.0.RELEASE
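One way to confirm the limits actually took effect (this check is my addition; substitute a real containerID):

```shell
# HostConfig.Memory is in bytes: 512m => 536870912
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.CpuQuota}}' <containerID>
# live memory/cpu usage against those limits:
docker stats --no-stream <containerID>
```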


/*The same image can run in two containers in parallel, as shown below.
The containers below are exposed on ports 5000 and 5001. Both run the same
image, but are different containers
*/
e.g: docker run -p 5000:5000 -d <YourRepositoryName>/todo-rest-api-h2:1.1.0.RELEASE
docker run -p 5001:5000 -d <YourRepositoryName>/todo-rest-api-h2:1.1.0.RELEASE
// Pause container
docker container pause <containerID>

//Unpause Container
docker container unpause <containerID>

//Inspect Container
docker container inspect <containerID>
//Lists all running containers
docker container ls

//Lists all containers, running and exited
docker container ls -a

//Remove all stopped containers from local. Enter the command below and confirm with Y
docker container prune
//Get logs for a container; -f tails them
docker container logs -f <containerID>

/*Gracefully stops the container with SIGTERM.
Stop => SIGTERM => graceful shutdown */
docker container stop <containerID>
e.g: docker container stop 1b1

/*Kills the container immediately, without the SIGTERM grace period.
Kill => SIGKILL => immediately terminates the process */
docker container kill <containerID>
e.g: docker container kill 1b1
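The stop/kill difference can be seen with plain shell, no docker needed: a process can trap SIGTERM and clean up, but SIGKILL can never be trapped (this demo is my illustration, not from the course):

```shell
# Child that handles SIGTERM, like an app shutting down gracefully
sh -c 'trap "echo cleaning up; exit 0" TERM; while :; do sleep 1; done' &
pid=$!
sleep 1
kill -TERM "$pid"             # what `docker container stop` sends first
wait "$pid"; term_status=$?   # 0: the trap ran, graceful exit

# Child with no chance to clean up
sh -c 'while :; do sleep 1; done' &
pid=$!
sleep 1
kill -KILL "$pid"             # what `docker container kill` sends
wait "$pid"; kill_status=$?   # 137 = 128 + 9, killed outright
echo "stop-like exit: $term_status, kill-like exit: $kill_status"
```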
/*Returns all events happening in docker. Keeps running continuously.
Use CTRL+C to exit
(use clear to clear the terminal)
*/
docker events

/*Returns the top processes running in a container*/
docker top <containerID>

/*Returns live stats for all running containers*/
docker stats

/*Returns the resources managed by the docker daemon and the size of each*/
docker system df

Distributed tracing

Distributed tracing collects traces and logs from all services in one database. This way we can debug and view the logs of all our services in one place. Zipkin is the service we use for this.

Helps to:

  • debug problems
  • trace requests across microservices. For example, it can tell which microservice is taking the most time and delaying the chain

All services send their tracing information to the distributed tracing server.
// Make sure the docker daemon is running, then run the command below in a terminal
docker run -d -p 9411:9411 openzipkin/zipkin
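Once the container is up, the Zipkin UI is at http://localhost:9411, and the v2 API can confirm the server is reachable (this check is my addition):

```shell
# Returns a JSON list of service names that have reported traces ([] when empty)
curl http://localhost:9411/api/v2/services
```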

To connect your service to Zipkin, simply add the dependencies below to pom.xml.

If you're using Spring Boot 3:

<!-- #### In pom.xml : Add dependencies ##### -->

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-observation</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-zipkin</artifactId>
</dependency>

<!-- Enables tracing of REST API calls made using Feign - V3 ONLY -->
<!-- Dependency needed only if your service is using Feign -->
<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-micrometer</artifactId>
</dependency>
# In application.properties : add the sampling rate (Spring Boot 3)
management.tracing.sampling.probability=1.0
logging.pattern.level=%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]

If you're using Spring Boot 2:


<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-brave</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
# In application.properties : add the sampling rate (Spring Boot 2)
spring.sleuth.sampler.probability=1.0

References:

Udemy course by in28minutes (Zipkin section)
