docker version
Displays the version of the Docker client and server (daemon).
docker version

docker info
Displays system-wide information about the Docker installation, such as the number of containers and images.
docker info

docker run
Creates and starts a container from an image. For example, to run the hello-world test image:
docker run hello-world
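Beyond hello-world, docker run is usually combined with a few common flags. A hedged sketch (the image, container name, and port mapping here are illustrative placeholders, not values from this guide):

```shell
# Illustrative only: run an nginx container detached (-d),
# give it a name (--name), and publish container port 80
# on host port 8080 (-p HOST:CONTAINER).
docker run -d --name my_web -p 8080:80 nginx:latest
```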
docker ps
Lists running containers. Add -a to list all containers (running and stopped).
docker ps
docker ps -a
docker stop
Gracefully stops a running container.
docker stop <container_id_or_name>

docker start
Starts a stopped container.
docker start <container_id_or_name>

docker restart
Restarts a running or stopped container.
docker restart <container_id_or_name>

docker rm
Removes a stopped container.
docker rm <container_id_or_name>
docker rmi
Removes an image from local storage.
docker rmi <image_id_or_name>

docker images
Lists the images stored locally.
docker images

docker pull
Downloads an image from a registry such as Docker Hub.
docker pull ubuntu
docker build
Builds an image from a Dockerfile. The -t flag tags the image, and the trailing . sets the build context to the current directory.
docker build -t myimage .
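For context, docker build expects a Dockerfile in the build context. A minimal illustrative sketch (the base image, file names, and command are placeholders):

```dockerfile
# Illustrative Dockerfile; base image and copied files are placeholders.
FROM ubuntu:22.04
COPY app.sh /app/app.sh
CMD ["/bin/bash", "/app/app.sh"]
```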
docker network ls
Lists the networks known to Docker.
docker network ls

docker network create
Creates a user-defined network.
docker network create my-network

docker network rm
Removes a network.
docker network rm my-network
docker-compose up
Creates and starts all services defined in a docker-compose.yml file.
docker-compose up

docker-compose down
Stops and removes the containers, networks, and other resources created by docker-compose up.
docker-compose down
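For reference, docker-compose up reads a docker-compose.yml file in the current directory. A minimal illustrative file (the service name, image, and port mapping are placeholders):

```yaml
# Illustrative docker-compose.yml; service, image, and ports are placeholders.
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
```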
These commands are quite common for daily use in Docker environments and are essential for managing Docker containers and images effectively. Remember to replace placeholders (such as <container_id_or_name> or <image_id_or_name>) with actual values from your Docker environment.
When you have already created and started a Docker container with NVIDIA GPU support, using it through a terminal involves a similar process to accessing any Docker container, as described previously. The difference lies in ensuring that the container was properly set up to use the NVIDIA GPU, which involves having the appropriate NVIDIA Docker configurations.
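As a point of reference for what "properly set up" means, such a container is typically started with the --gpus flag. A hedged sketch (the image tag and container name are placeholders, and this assumes the NVIDIA Container Toolkit is installed on the host):

```shell
# Illustrative only: start a detached container with access to all GPUs.
# Requires the NVIDIA Container Toolkit on the host; the image and
# container name below are placeholders.
docker run -d --gpus all --name my_gpu_container \
  nvidia/cuda:12.2.0-base-ubuntu22.04 sleep infinity
```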
Below are detailed steps on how to access and use your NVIDIA GPU-enabled Docker container from a terminal:
Before diving into accessing the container, it’s useful to first confirm that your container has access to the GPU. You can check this by running a command like nvidia-smi inside the container:
docker exec -it <container_name_or_id> nvidia-smi
This command should output information about the GPU, indicating that the container has access to it. If it does, you can proceed to interact with the container normally.
To access the container, you use the docker exec command to start an interactive shell session:
docker exec -it <container_name_or_id> /bin/bash
Replace <container_name_or_id> with the actual name or ID of your container. You can find this by listing all running containers with docker ps.
Inside the container, you can execute any installed GPU-accelerated programs. For example, if you have TensorFlow installed in a container configured for GPU, you can start a Python session and import TensorFlow to verify it recognizes the GPU:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
This Python code should list the available GPUs if TensorFlow is set up correctly to use the GPU.
To exit the container terminal without stopping the container, you can simply type exit or press Ctrl-D.
Here’s a quick recap of how the flow might look:
List Containers (to find your specific container):
docker ps
Check GPU Access (using nvidia-smi):
docker exec -it my_gpu_container nvidia-smi
Access the Container:
docker exec -it my_gpu_container /bin/bash
Run Python and Check TensorFlow GPU (inside the container):
python
>>> import tensorflow as tf
>>> print(tf.config.list_physical_devices('GPU'))
Exit When Done:
exit
If the nvidia-smi command does not show the GPUs, or if TensorFlow does not recognize the GPU, ensure that the container was started with the --gpus all flag or a similar GPU specification.

By following these steps, you can effectively use and interact with your NVIDIA GPU-accelerated Docker container from the terminal.