The Docker client object

DockerClient

python_on_whales.DockerClient(
    config=None,
    context=None,
    debug=None,
    host=None,
    log_level=None,
    tls=None,
    tlscacert=None,
    tlscert=None,
    tlskey=None,
    tlsverify=None,
    client_config=None,
    compose_files=[],
    compose_profiles=[],
    compose_env_file=None,
    compose_project_name=None,
    compose_project_directory=None,
    compose_compatibility=None,
    client_binary="docker",
    client_call=["docker"],
)

Creates a Docker client

Note that

from python_on_whales import docker
print(docker.run("hello-world"))

is equivalent to

from python_on_whales import DockerClient
docker = DockerClient()
print(docker.run("hello-world"))

Arguments

  • config Optional[Union[str, pathlib.Path]]: Location of client config files (default "~/.docker")
  • context Optional[str]: Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
  • debug Optional[bool]: Enable debug mode
  • host Optional[str]: Daemon socket(s) to connect to
  • log_level Optional[str]: Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
  • tls Optional[bool]: Use TLS; implied by tlsverify
  • tlscacert Optional[Union[str, pathlib.Path]]: Trust certs signed only by this CA (default "~/.docker/ca.pem")
  • compose_files List[Union[str, pathlib.Path]]: The Docker compose yaml files to use
  • compose_profiles List[str]: List of compose profiles to use. Take a look at the documentation for profiles.
  • compose_env_file Optional[Union[str, pathlib.Path]]: .env file containing the environment variables to inject into the compose project. By default, it uses ./.env.
  • compose_project_name Optional[str]: The name of the compose project. It will be prefixed to networks, volumes and containers created by compose.
  • compose_project_directory Optional[Union[str, pathlib.Path]]: Use an alternate working directory. By default, it uses the path of the compose file.
  • compose_compatibility Optional[bool]: Use docker compose in compatibility mode.
  • client_call List[str]: Client binary to use and how to call it. Default is ["docker"]. You can try, for example, ["podman"] or ["nerdctl"]. The client must have the same commands and outputs as Docker to work. Some best-effort support is provided in case of divergences, meaning you can report issues occurring on a binary other than Docker, but we don't guarantee that they will be fixed. This option is a list because you can provide a list of command line arguments to be placed after "docker". For example, host="ssh://my_user@host.com" is equivalent to client_call=["docker", "--host=ssh://my_user@host.com"] (see the sketch after this list). This lets you use some exotic options that are not explicitly supported by Python-on-whales. Let's say you want to use estargz to run a container immediately, without waiting for the "pull" to finish (yes, it's possible!): you can do nerdctl = DockerClient(client_call=["nerdctl", "--snapshotter=stargz"]) and then nerdctl.run("ghcr.io/stargz-containers/python:3.7-org", ["-c", "print('hi')"]). You can also use this system to call Docker with sudo via client_call=["sudo", "docker"] (note that it won't ask for your password, so sudo should be passwordless during the Python program's execution).
  • client_binary str: Deprecated, use client_call. If you used before client_binary="podman", now use client_call=["podman"].
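
As an illustration, here is a minimal construction sketch; the compose file path and the SSH host below are placeholders, not values from this documentation.

from python_on_whales import DockerClient

# Client bound to a compose project
docker = DockerClient(compose_files=["./docker-compose.yml"])
docker.compose.up(detach=True)

# Two equivalent ways to target a remote daemon over SSH
remote = DockerClient(host="ssh://my_user@host.com")
also_remote = DockerClient(client_call=["docker", "--host=ssh://my_user@host.com"])
print(remote.ps())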

login

docker.login(server=None, username=None, password=None)

Log in to a Docker registry.

If no server is specified, the default is defined by the daemon.

Arguments

  • server Optional[str]: The server to log into. For example, with a self-hosted registry it might be something like server="192.168.0.10:5000"
  • username Optional[str]: The username
  • password Optional[str]: The password
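
For example, to log in to a self-hosted registry and push an image to it (the address, credentials and image name below are placeholders):

from python_on_whales import docker

docker.login(server="192.168.0.10:5000", username="my_user", password="my_password")
docker.push("192.168.0.10:5000/my_image:latest")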

login_ecr

docker.login_ecr(
    aws_access_key_id=None, aws_secret_access_key=None, region_name=None, registry=None
)

Log in to the AWS ECR registry. Credentials are taken from the environment variables as defined in the AWS docs.

If you don't have a profile or your environment variables configured, you can also use the function arguments aws_access_key_id, aws_secret_access_key, region_name.

Behind the scenes, those arguments are passed directly to

botocore.session.get_session().create_client(...)

You need botocore to run this function. Use pip install botocore to install it.

The registry parameter can be used to override the registry that is guessed from the authorization token request's response. In other words: if registry is None (the default), it is assumed to be the ECR registry linked to the credentials provided. This is especially useful if the AWS account you use can access several registries and you need to explicitly define the one you want to use.
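
A minimal usage sketch, assuming botocore is installed; the credentials, region and repository below are placeholders:

from python_on_whales import docker

# Explicit credentials; alternatively rely on environment variables or an AWS profile
docker.login_ecr(
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
    region_name="eu-west-1",
)
docker.push("123456789012.dkr.ecr.eu-west-1.amazonaws.com/my_repository:latest")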


logout

docker.logout(server=None)

Log out from a Docker registry.

Arguments

  • server Optional[str]: The server to logout from. For example, with a self-hosted registry it might be something like server="192.168.0.10:5000"
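
For example, to log out from a self-hosted registry (placeholder address):

from python_on_whales import docker

docker.logout(server="192.168.0.10:5000")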

version

docker.version()

Not yet implemented


Sub-commands

Other commands

They're actually aliases

About multithreading and multiprocessing

Behind the scenes, Python on whales calls the Docker command line interface with subprocess. The Python on whales client does not store any intermediate state so it's safe to use with multithreading.

The Docker objects store some intermediate state (the attributes that you would normally get with docker ... inspect), but no logic in the codebase depends on those attributes. They're just there so that users can look at them. So you can share them between processes/threads and pickle containers, images, networks...

The Docker daemon works with its own objects internally and handles concurrent and conflicting requests. For example, if you create two containers with the same name from different threads, only one will succeed. If you pull the same docker image from multiple processes/threads, the Docker daemon will only pull the image and layers once.
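
As a quick illustration of that last point, pulling the same image from several threads is safe; the image name and worker count below are arbitrary:

from concurrent.futures import ThreadPoolExecutor

from python_on_whales import docker

with ThreadPoolExecutor(max_workers=4) as pool:
    # The daemon deduplicates the work: the image and its layers are pulled only once
    images = list(pool.map(docker.pull, ["python:3.11-slim"] * 4))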

Just be careful with scenarios similar to this one:

Thread 1: my_container = docker.run(..., detach=True)
...
# my_container finishes
...
Thread 2: docker.container.prune()
...
Thread 1: docker.logs(my_container)  # will fail because the container was removed by thread 2

In the end, unless you use this type of logic in your code, Python-on-whales is safe to use with multithreading and multiprocessing.

The Docker CLI

Python-on-whales needs the Docker CLI to work (unlike docker-py). Most of the time, users already have the CLI installed on their machines. You can verify that the CLI is there by running docker --help in the command line.

Sometimes, the CLI might not be available on the system. This can happen if you want to control Docker from within a container with -v /var/run/docker.sock:/var/run/docker.sock, or if you want to connect to a remote daemon with the host argument.

In this case, when using python-on-whales, the CLI will be downloaded automatically (it's a single binary), and will be put in

pathlib.Path.home() / ".cache/python-on-whales/docker"

Since it's not in the PATH and was not downloaded with the package manager, it's only seen and used by python-on-whales.
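
If needed, you can check for that downloaded binary from Python; this is just a sketch based on the path above:

import pathlib

downloaded_cli = pathlib.Path.home() / ".cache/python-on-whales/docker"
print(downloaded_cli.exists())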

If you want to trigger the download manually (to avoid downloading the CLI at runtime), you can run from your shell:

python-on-whales download-cli

Handling an unavailable client

Trying to use Python-on-whales when it cannot find or download a Docker client binary will trigger a python_on_whales.ClientNotFoundError. You can wrap an initial docker.ps() call in a try-except block to handle the case where Python-on-whales won't work, as sketched below.
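
A minimal sketch of that pattern, assuming ClientNotFoundError is importable from the top-level package as its name above suggests:

from python_on_whales import ClientNotFoundError, docker

try:
    docker.ps()  # first call: forces the client binary to be found or downloaded
    client_available = True
except ClientNotFoundError:
    client_available = False
    print("No Docker client binary could be found or downloaded.")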