Linux containers, in short, contain applications in a way that keeps them isolated from the host system they run on. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. They are also designed to make it easier to provide a consistent experience as developers and system administrators move code from development environments into production quickly and reproducibly.
Importantly, much of the technology powering containers is open source. This means it has a wide community of contributors, which fosters rapid development of a broad ecosystem of related projects fitting the needs of all sorts of organizations, big and small.
It’s all about isolation
Isolation is a core concept to so many computing patterns, resource management strategies, and general accounting practices that it is difficult to even begin compiling a list. Someone who learns how Linux containers provide isolation for running programs and how to use Docker to control that isolation can accomplish amazing feats of reuse, resource efficiency, and system simplification.
A more complete approach to containerization
In a sense, a jar file is one kind of containerization, and a Python virtualenv could be another. But each of those can contain only one kind of thing. With Docker, you can contain everything an application needs: Python, Java, Selenium, or Go code, all in one place.
How about VMs?
In a way, containers behave like virtual machines: to the outside world, each can look like its own complete system. But unlike a virtual machine, a container does not replicate an entire operating system; it carries only the individual components it needs in order to operate. This gives a significant performance boost and reduces the size of the application. Containers also start and run much faster because, unlike traditional virtualization, the process is essentially running natively on its host, just with an additional layer of isolation around it.
So in summary, Docker helps:
- Ensure the product is deployed the same way everywhere
- Make it easy for developers to deliver their product
- Make it easy to build automated pipelines
- Contain software in a complete way
docker build [OPTIONS] PATH | URL | -
The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located at the specified PATH or URL. The build process can refer to any of the files in the context; for example, a build can use a COPY instruction to reference a file in the context.
The URL parameter can refer to three kinds of resources: Git repositories, pre-packaged tarball contexts, and plain text files.
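A minimal sketch of typical build invocations; the image name `myapp`, the tag, and the repository URL are illustrative placeholders, not values from this article:

```shell
# Build an image from the Dockerfile in the current directory
# (the trailing "." is the build context sent to the daemon)
docker build -t myapp:1.0 .

# Build using a Git repository URL as the context instead of a local path
docker build -t myapp:1.0 https://github.com/example/myapp.git
```

The `-t` flag names and optionally tags the resulting image so it can be referenced later by `docker run`.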
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command. That is, docker run is equivalent to the API /containers/create followed by /containers/(id)/start. A stopped container can be restarted, with all its previous changes intact, using docker start. Use docker ps -a to view a list of all containers.
The docker run command can be used in combination with docker commit to change the command that a container runs. There is additional detailed information about docker run in the Docker run reference.
For information on connecting a container to a network, see the “Docker network overview”.
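A short sketch of the create/start/stop lifecycle described above; the image and container names are hypothetical:

```shell
# Create a writeable layer over the image and start it in the background,
# publishing container port 80 on host port 8080
docker run -d --name myapp-container -p 8080:80 myapp:1.0

# Stop the container, then restart it with all its previous changes intact
docker stop myapp-container
docker start myapp-container
```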
docker ps [OPTIONS]
Shows the containers that are currently running; the -a flag also includes stopped containers.
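For example:

```shell
# List only the containers that are currently running
docker ps

# List all containers, including stopped ones
docker ps -a
```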
FROM : sets the base image that all subsequent instructions build on
WORKDIR : sets the working directory for the instructions that follow; it is created if it does not already exist
COPY : copies files and directories from the build context into the image (copying a directory copies its contents, not the directory itself)
RUN : executes a command in a new layer at build time
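The four instructions above can be combined into a minimal Dockerfile. This is only a sketch; the Python base image and the file names are illustrative assumptions:

```dockerfile
# Base image for all subsequent instructions
FROM python:3.11-slim

# Working directory inside the image; created if it does not exist
WORKDIR /app

# Copy files from the build context into the image
COPY requirements.txt .
COPY src/ ./src/

# Executed at build time, in a new image layer
RUN pip install -r requirements.txt
```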
Docker in Action, by Jeff Nickoloff