For a demo, I needed an environment with Minikube, using the Podman driver as an alternative container runtime to the Docker driver.
In my previous article, I shared the steps I took to get Podman, in combination with Kubernetes (Minikube), working on my demo environment.
[https://technology.amis.nl/recent/adding-podman-to-my-vm-with-minikube-part-1/]
In this article, you can read more about other Podman commands I tried out as I continued following “Getting Started with Podman”. You can also read about the steps I took to build a container image inside the Minikube cluster (image pushing).
[https://podman.io/getting-started/]
Podman
Podman is an open-source project that is available on most Linux platforms and resides on GitHub. Podman is a daemonless container engine for developing, managing, and running Open Container Initiative (OCI) containers and container images on your Linux system. Podman provides a Docker-compatible command line front end that can simply be aliased to the Docker CLI: alias docker=podman. Podman also provides a socket-activated REST API service to allow remote applications to launch on-demand containers. This REST API also supports the Docker API, allowing users of docker-py and docker-compose to interact with Podman as a service.
Containers under the control of Podman can either be run by root or by a non-privileged user. Podman manages the entire container ecosystem which includes pods, containers, container images, and container volumes using the libpod library. Podman specializes in all of the commands and functions that help you to maintain and modify OCI container images, such as pulling and tagging. It allows you to create, run, and maintain those containers created from those images in a production environment.
The Podman service runs only on Linux platforms, however the podman remote REST API client exists on Mac and Windows platforms and can communicate with the Podman service running on a Linux machine or VM via ssh. See also: Podman Remote clients for macOS and Windows.
[https://podman.io/whatis.html]
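As a small illustration of the Docker-compatible front end and the socket-activated REST API described above, the sketch below aliases docker to podman and queries the rootless API socket. The version segment in the URL is an assumption and may differ per Podman version:

```shell
# Use the Docker-compatible CLI front end by aliasing docker to podman
alias docker=podman
docker --version   # actually runs: podman --version

# Start the rootless Podman API socket (systemd user service)
systemctl --user start podman.socket

# Talk to the REST API over the unix socket; the version segment in the
# URL is an assumption and may differ for your Podman version
curl --unix-socket /run/user/$UID/podman/podman.sock http://d/v3.0.0/libpod/info
```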
Getting Started with Podman, trying out podman commands
I wanted to try out some Podman commands, so I looked at “Getting Started with Podman” and continued where I left off in my previous article.
[https://podman.io/getting-started/#getting-started-with-podman]
In order to be able to continue with the examples mentioned below, I first had to free up port 8080, to avoid an error like:
Error: rootlessport listen tcp 0.0.0.0:8080: bind: address already in use
For example, when using: podman run -dt -p 8080:80/tcp docker.io/library/httpd
I used vagrant ssh to open a Linux Command Prompt, where I used the following command to find the process/service listening on port 8080:
[https://man7.org/linux/man-pages/man8/lsof.8.html]
sudo lsof -i :8080
With the following output:
COMMAND   PID    USER FD  TYPE DEVICE SIZE/OFF NODE NAME
socat   24307 vagrant 5u  IPv4  69085      0t0  TCP *:http-alt (LISTEN)
Remark about the -i option:
-i [i]
Selects the listing of files any of whose Internet address matches the address specified in i. If no address is specified, this option selects the listing of all Internet and x.25 (HP-UX) network files.
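As an aside, the same lookup can be scripted: lsof -t prints only the PID, and ss (from iproute2) is a common alternative when lsof is not installed. A hedged sketch:

```shell
# Show TCP listeners on port 8080 with ss (alternative to lsof)
ss -ltn 'sport = :8080'

# lsof -t prints only the PID, which makes the kill step scriptable
pid=$(sudo lsof -t -i :8080)
[ -n "$pid" ] && sudo kill "$pid"
```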
Remark about socat using port 8080:
Remember from my previous article, I used the following command:
echo "**** Via socat forward local port 8080 to port 8080 on the minikube node ($nodeIP)" socat tcp-listen:8080,fork tcp:$nodeIP:8080 &
That’s why port 8080 was already in use.
Next, I killed that process:
sudo kill 24307
To search a registry or a list of registries for a matching image (httpd in my case), I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-search.1.html]
podman search httpd
With no output.
Remark about httpd:
The Apache HTTP Server, colloquially called Apache, is a Web server application notable for playing a key role in the initial growth of the World Wide Web.
[https://hub.docker.com/_/httpd]
Next, I tried:
podman search httpd --filter=is-official
With no output.
Remark about filter option:
--filter, -f=filter
Filter output based on conditions provided (default [])
Supported filters are:
- stars (int – number of stars the image has)
- is-automated (boolean – true | false) – is the image automated or not
- is-official (boolean – true | false) – is the image official or not
[https://docs.podman.io/en/latest/markdown/podman-search.1.html]
Then, I tried:
[https://github.com/containers/podman/issues/8896]
podman search --limit 3 docker.io/httpd
With the following output:
INDEX      NAME                               DESCRIPTION                                     STARS  OFFICIAL  AUTOMATED
docker.io  docker.io/library/httpd            The Apache HTTP Server Project                  4167   [OK]
docker.io  docker.io/clearlinux/httpd         httpd HyperText Transfer Protocol (HTTP) ser... 2
docker.io  docker.io/centos/httpd-24-centos7  Platform for running Apache httpd 2.4 or bui... 44
Remark about prefixing the registry in the search term:
The user can specify which registry to search by (e.g., registry.fedoraproject.org/fedora). By default, all unqualified-search registries in containers-registries.conf are used.
To get all available images in a registry without a specific search term, the user can just enter the registry name with a trailing “/” (example registry.fedoraproject.org/).
[https://docs.podman.io/en/latest/markdown/podman-search.1.html]
To copy an image from a registry onto the local machine, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-pull.1.html]
podman pull docker.io/library/httpd
With the following output:
Trying to pull docker.io/library/httpd:latest...
Getting image source signatures
Copying blob a9fcd580ef1c done
Copying blob a19138bf3164 done
Copying blob f29089ecfcbf done
Copying blob 5bfb2ce98078 done
Copying blob 31b3f1ad4ce1 done
Copying config f2789344c5 done
Writing manifest to image destination
Storing signatures
f2789344c57324805883b174676365eb807fdb4eccfb9878fbb19054fd0c7b7e
Remark:
Podman searches in different registries. Therefore, it is recommended to use the full image name (docker.io/library/httpd instead of httpd) to ensure that you are using the correct image.
[https://podman.io/getting-started/]
To display locally stored images, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-images.1.html]
podman images
With the following output:
REPOSITORY               TAG     IMAGE ID      CREATED      SIZE
docker.io/library/httpd  latest  f2789344c573  11 days ago  150 MB
To run a process in a new container, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-run.1.html]
podman run -dt -p 8080:80/tcp docker.io/library/httpd
With the following output:
7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32
Remark about the container:
This sample container above will run a very basic httpd server that serves only its index page.
[https://podman.io/getting-started/]
Remark about the options:
--detach, -d
Detached mode: run the container in the background and print the new container ID.
--tty, -t
Allocate a pseudo-TTY. When set to true, Podman will allocate a pseudo-tty and attach to the standard input of the container. This can be used, for example, to run a throwaway interactive shell.
--publish, -p=[[ip:][hostPort]:]containerPort[/protocol]
Publish a container’s port, or range of ports, to the host.
Both hostPort and containerPort can be specified as a range of ports. When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range.
If host IP is set to 0.0.0.0 or not set at all, the port will be bound on all IPs on the host.
By default, Podman will publish TCP ports.
[https://docs.podman.io/en/latest/markdown/podman-run.1.html]
To list the running containers on my system, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-ps.1.html]
podman ps
With the following output:
CONTAINER ID  IMAGE                           COMMAND           CREATED             STATUS                 PORTS                 NAMES
7fdb64be69bb  docker.io/library/httpd:latest  httpd-foreground  About a minute ago  Up About a minute ago  0.0.0.0:8080->80/tcp  festive_moore
For testing the httpd container, I used the following command on the Linux Command Prompt:
curl http://localhost:8080
With the following output:
<html><body><h1>It works!</h1></body></html>
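Since a freshly started container can take a moment to accept connections, such a smoke test can be made a bit more robust with a small retry loop. A sketch, using the port published above:

```shell
# Poll the published port until the server answers (max ~10 s), then
# verify the expected index page content
for i in $(seq 1 10); do
  curl -fsS http://localhost:8080/ >/dev/null 2>&1 && break
  sleep 1
done
curl -fsS http://localhost:8080/ | grep -q "It works!"
```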
To “inspect” a running container for metadata and details about itself, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-inspect.1.html]
podman inspect -l
With the following output:
[ { "Id": "7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32", "Created": "2022-09-25T10:35:26.609089194Z", "Path": "httpd-foreground", "Args": [ "httpd-foreground" ], "State": { "OciVersion": "1.0.2-dev", "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 27122, "ConmonPid": 27119, "ExitCode": 0, "Error": "", "StartedAt": "2022-09-25T10:35:26.766079697Z", "FinishedAt": "0001-01-01T00:00:00Z", "Healthcheck": { "Status": "", "FailingStreak": 0, "Log": null }, "CgroupPath": "/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32.scope" }, "Image": "f2789344c57324805883b174676365eb807fdb4eccfb9878fbb19054fd0c7b7e", "ImageName": "docker.io/library/httpd:latest", "Rootfs": "", "Pod": "", "ResolvConfPath": "/run/user/1000/containers/overlay-containers/7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32/userdata/resolv.conf", "HostnamePath": "/run/user/1000/containers/overlay-containers/7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32/userdata/hostname", "HostsPath": "/run/user/1000/containers/overlay-containers/7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32/userdata/hosts", "StaticDir": "/home/vagrant/.local/share/containers/storage/overlay-containers/7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32/userdata", "OCIConfigPath": "/home/vagrant/.local/share/containers/storage/overlay-containers/7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32/userdata/config.json", "OCIRuntime": "crun", "ConmonPidFile": "/run/user/1000/containers/overlay-containers/7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32/userdata/conmon.pid", "PidFile": "/run/user/1000/containers/overlay-containers/7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32/userdata/pidfile", "Name": "festive_moore", "RestartCount": 0, "Driver": "overlay", 
"MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "EffectiveCaps": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FOWNER", "CAP_FSETID", "CAP_KILL", "CAP_NET_BIND_SERVICE", "CAP_SETFCAP", "CAP_SETGID", "CAP_SETPCAP", "CAP_SETUID", "CAP_SYS_CHROOT" ], "BoundingCaps": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FOWNER", "CAP_FSETID", "CAP_KILL", "CAP_NET_BIND_SERVICE", "CAP_SETFCAP", "CAP_SETGID", "CAP_SETPCAP", "CAP_SETUID", "CAP_SYS_CHROOT" ], "ExecIDs": [], "GraphDriver": { "Name": "overlay", "Data": { "LowerDir": "/home/vagrant/.local/share/containers/storage/overlay/5d5c5a9bfd630339582269a6520a830748b815ac4683aa1a240b5f7c017ecfd2/diff:/home/vagrant/.local/share/containers/storage/overlay/82e2bec5990bdbf9476a84105083a38899944f27a5c975f87451aa42bfa57e00/diff:/home/vagrant/.local/share/containers/storage/overlay/8052b24fec4d37b812aa5e8dc5279011095eb9a85cc0a5215bdceeffe2a6cc24/diff:/home/vagrant/.local/share/containers/storage/overlay/a9b375edd2206e53a40a7e32f8188178212ac85299b38c460916e04eeb4d0954/diff:/home/vagrant/.local/share/containers/storage/overlay/b45078e74ec97c5e600f6d5de8ce6254094fb3cb4dc5e1cc8335fb31664af66e/diff", "MergedDir": "/home/vagrant/.local/share/containers/storage/overlay/5a24700bfc60ab91efce57769d5c2f74911b4a47e663f639ccf58301e8c70e5f/merged", "UpperDir": "/home/vagrant/.local/share/containers/storage/overlay/5a24700bfc60ab91efce57769d5c2f74911b4a47e663f639ccf58301e8c70e5f/diff", "WorkDir": "/home/vagrant/.local/share/containers/storage/overlay/5a24700bfc60ab91efce57769d5c2f74911b4a47e663f639ccf58301e8c70e5f/work" } }, "Mounts": [], "Dependencies": [], "NetworkSettings": { "EndpointID": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "Bridge": "", "SandboxID": "", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "80/tcp": [ { "HostIp": "", "HostPort": "8080" } ] }, "SandboxKey": 
"/run/user/1000/netns/cni-070288d9-802b-80c3-9eba-c397056cfab1" }, "ExitCommand": [ "/usr/bin/podman", "--root", "/home/vagrant/.local/share/containers/storage", "--runroot", "/run/user/1000/containers", "--log-level", "warning", "--cgroup-manager", "systemd", "--tmpdir", "/run/user/1000/libpod/tmp", "--runtime", "crun", "--storage-driver", "overlay", "--events-backend", "journald", "container", "cleanup", "7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32" ], "Namespace": "", "IsInfra": false, "Config": { "Hostname": "7fdb64be69bb", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "PATH=/usr/local/apache2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "TERM=xterm", "container=podman", "HTTPD_PREFIX=/usr/local/apache2", "HTTPD_VERSION=2.4.54", "HTTPD_SHA256=eb397feeefccaf254f8d45de3768d9d68e8e73851c49afd5b7176d1ecf80c340", "HTTPD_PATCHES=", "HOME=/root", "HOSTNAME=7fdb64be69bb" ], "Cmd": [ "httpd-foreground" ], "Image": "docker.io/library/httpd:latest", "Volumes": null, "WorkingDir": "/usr/local/apache2", "Entrypoint": "", "OnBuild": null, "Labels": null, "Annotations": { "io.container.manager": "libpod", "io.kubernetes.cri-o.Created": "2022-09-25T10:35:26.609089194Z", "io.kubernetes.cri-o.TTY": "true", "io.podman.annotations.autoremove": "FALSE", "io.podman.annotations.init": "FALSE", "io.podman.annotations.privileged": "FALSE", "io.podman.annotations.publish-all": "FALSE", "org.opencontainers.image.stopSignal": "28" }, "StopSignal": 28, "CreateCommand": [ "podman", "run", "-dt", "-p", "8080:80/tcp", "docker.io/library/httpd" ], "Umask": "0022", "Timeout": 0, "StopTimeout": 10 }, "HostConfig": { "Binds": [], "CgroupManager": "systemd", "CgroupMode": "private", "ContainerIDFile": "", "LogConfig": { "Type": "journald", "Config": null, "Path": "", "Tag": "", "Size": "0B" }, "NetworkMode": "slirp4netns", "PortBindings": { 
"80/tcp": [ { "HostIp": "", "HostPort": "8080" } ] }, "RestartPolicy": { "Name": "", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": [], "CapDrop": [ "CAP_AUDIT_WRITE", "CAP_MKNOD", "CAP_NET_RAW" ], "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": [], "GroupAdd": [], "IpcMode": "private", "Cgroup": "", "Cgroups": "default", "Links": null, "OomScoreAdj": 0, "PidMode": "private", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [], "Tmpfs": {}, "UTSMode": "private", "UsernsMode": "", "ShmSize": 65536000, "Runtime": "oci", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 0, "CgroupParent": "user.slice", "BlkioWeight": 0, "BlkioWeightDevice": null, "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": 0, "OomKillDisable": false, "PidsLimit": 2048, "Ulimits": [], "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "CgroupConf": null } } ]
So, there is a lot of useful information, like environment variables, network settings, or allocated resources.
[https://podman.io/getting-started/#inspecting-a-running-container]
Remark about the option:
The -l is a convenience argument for the latest container. You can also use the container’s ID or name instead of -l or the long argument --latest.
[https://podman.io/getting-started/#inspecting-a-running-container]
To “inspect” a running container for IP address metadata, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-inspect.1.html]
podman inspect -l | grep IPAddress
With the following output:
"IPAddress": "",
Since the container is running in rootless mode, no IP address is assigned to it.
[https://podman.io/getting-started/#inspecting-a-running-container]
To view the container’s logs, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-logs.1.html]
podman logs -l
With the following output:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.0.2.100. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.0.2.100. Set the 'ServerName' directive globally to suppress this message
[Sun Sep 25 10:35:26.786813 2022] [mpm_event:notice] [pid 1:tid 140657605340480] AH00489: Apache/2.4.54 (Unix) configured -- resuming normal operations
[Sun Sep 25 10:35:26.786972 2022] [core:notice] [pid 1:tid 140657605340480] AH00094: Command line: 'httpd -D FOREGROUND'
10.0.2.100 - - [25/Sep/2022:10:37:41 +0000] "GET / HTTP/1.1" 200 45
To display the running processes of the container, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-top.1.html]
podman top -l
With the following output:
USER      PID  PPID  %CPU   ELAPSED          TTY    TIME  COMMAND
root      1    0     0.000  8m25.005591989s  pts/0  0s    httpd -DFOREGROUND
www-data  3    1     0.000  8m25.005707099s  pts/0  0s    httpd -DFOREGROUND
www-data  4    1     0.000  8m25.005755798s  pts/0  0s    httpd -DFOREGROUND
www-data  6    1     0.000  8m25.005802307s  pts/0  0s    httpd -DFOREGROUND
Remark:
By default, podman top prints data similar to ps -ef
[https://docs.podman.io/en/latest/markdown/podman-top.1.html]
To stop the running container, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-stop.1.html]
podman stop -l
With the following output:
7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32
A quick check with podman ps -a gave as output:
CONTAINER ID  IMAGE                           COMMAND           CREATED         STATUS                     PORTS                 NAMES
7fdb64be69bb  docker.io/library/httpd:latest  httpd-foreground  10 minutes ago  Exited (0) 55 seconds ago  0.0.0.0:8080->80/tcp  festive_moore
Remark about the option:
--all, -a
Show all the containers created by Podman, default is only running containers.
Note: Podman shares containers storage with other tools such as Buildah and CRI-O. In some cases these external containers might also exist in the same storage. Use the --external option to see these external containers. External containers show the ‘storage’ status.
[https://docs.podman.io/en/latest/markdown/podman-ps.1.html]
To remove the stopped container, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-rm.1.html]
podman rm -l
With the following output:
7fdb64be69bb340cccc0c53973387512cfe2290e5375ac1635f039cc01906a32
Again, a quick check with podman ps -a gave as output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Podman and a Dockerfile
So now, it was time to try out Podman with a Dockerfile.
For simplicity, I stayed with an Apache HTTP Server example. I had a look at “httpd, Docker Official Image” hosted on Docker Hub. Next, I followed the instructions from “How to use this image”.
[https://hub.docker.com/_/httpd]
On my Windows laptop (in my shared folder), I created an ApacheHTTPServer directory, where I created a Dockerfile with the following content:
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
With this simple Dockerfile you can run a simple HTML server, where public-html/ is the directory containing all your HTML.
[https://hub.docker.com/_/httpd]
Next, I created a public-html directory with an index.html file with the following content:
<!DOCTYPE html>
<html>
  <head>
    <title>Apache HTTP Server</title>
  </head>
  <body>
    <h1>Hello World. Greetings from AMIS.</h1>
  </body>
</html>
To navigate to and list the content of my shared folder, I used the following commands on the Linux Command Prompt:
cd /mnt/mysharedfolder/applications/ApacheHTTPServer
ls -latr
With the following output:
total 1
-rwxrwxrwx 1 vagrant vagrant 62 Aug 31 17:25 Dockerfile
drwxrwxrwx 1 vagrant vagrant  0 Aug 31 17:43 ..
drwxrwxrwx 1 vagrant vagrant  0 Aug 31 17:44 .
drwxrwxrwx 1 vagrant vagrant  0 Aug 31 17:47 public-html
To build the container image, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-build.1.html]
podman build -t my-apache2 .
With the following output:
STEP 1/2: FROM httpd:2.4
Error: error creating build container: short-name "httpd:2.4" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Remark about the option:
--tag, -t=imageName
Specifies the name which will be assigned to the resulting image if the build process completes successfully. If imageName does not include a registry name, the registry name localhost will be prepended to the image name.
So apparently, I had to change the registries.conf file.
Via my shared folder (via copy and paste) I changed the content of this file to:
[the changes I made are the uncommented lines at the end of the file]
# For more information on this configuration file, see containers-registries.conf(5).
#
# NOTE: RISK OF USING UNQUALIFIED IMAGE NAMES
# We recommend always using fully qualified image names including the registry
# server (full dns name), namespace, image name, and tag
# (e.g., registry.redhat.io/ubi8/ubi:latest). Pulling by digest (i.e.,
# quay.io/repository/name@digest) further eliminates the ambiguity of tags.
# When using short names, there is always an inherent risk that the image being
# pulled could be spoofed. For example, a user wants to pull an image named
# `foobar` from a registry and expects it to come from myregistry.com. If
# myregistry.com is not first in the search list, an attacker could place a
# different `foobar` image at a registry earlier in the search list. The user
# would accidentally pull and run the attacker's image and code rather than the
# intended content. We recommend only adding registries which are completely
# trusted (i.e., registries which don't allow unknown or anonymous users to
# create accounts with arbitrary names). This will prevent an image from being
# spoofed, squatted or otherwise made insecure. If it is necessary to use one
# of these registries, it should be added at the end of the list.
#
# # An array of host[:port] registries to try when pulling an unqualified image, in order.
# unqualified-search-registries = ["example.com"]
#
# [[registry]]
# # The "prefix" field is used to choose the relevant [[registry]] TOML table;
# # (only) the TOML table with the longest match for the input image name
# # (taking into account namespace/repo/tag/digest separators) is used.
# #
# # The prefix can also be of the form: *.example.com for wildcard subdomain
# # matching.
# #
# # If the prefix field is missing, it defaults to be the same as the "location" field.
# prefix = "example.com/foo"
#
# # If true, unencrypted HTTP as well as TLS connections with untrusted
# # certificates are allowed.
# insecure = false
#
# # If true, pulling images with matching names is forbidden.
# blocked = false
#
# # The physical location of the "prefix"-rooted namespace.
# #
# # By default, this is equal to "prefix" (in which case "prefix" can be omitted
# # and the [[registry]] TOML table can only specify "location").
# #
# # Example: Given
# #   prefix = "example.com/foo"
# #   location = "internal-registry-for-example.net/bar"
# # requests for the image example.com/foo/myimage:latest will actually work with the
# # internal-registry-for-example.net/bar/myimage:latest image.
#
# # The location can be empty iff prefix is in a
# # wildcarded format: "*.example.com". In this case, the input reference will
# # be used as-is without any rewrite.
# location = "internal-registry-for-example.net/bar"
#
# # (Possibly-partial) mirrors for the "prefix"-rooted namespace.
# #
# # The mirrors are attempted in the specified order; the first one that can be
# # contacted and contains the image will be used (and if none of the mirrors contains the image,
# # the primary location specified by the "registry.location" field, or using the unmodified
# # user-specified reference, is tried last).
# #
# # Each TOML table in the "mirror" array can contain the following fields, with the same semantics
# # as if specified in the [[registry]] TOML table directly:
# # - location
# # - insecure
# [[registry.mirror]]
# location = "example-mirror-0.local/mirror-for-foo"
# [[registry.mirror]]
# location = "example-mirror-1.local/mirrors/foo"
# insecure = true
#
# # Given the above, a pull of example.com/foo/image:latest will try:
# # 1. example-mirror-0.local/mirror-for-foo/image:latest
# # 2. example-mirror-1.local/mirrors/foo/image:latest
# # 3. internal-registry-for-example.net/bar/image:latest
# # in order, and use the first one that exists.

unqualified-search-registries = ["docker.io"]

[[registry]]
prefix="docker.io/library"
location="docker.io/library"
Remark about registries.conf (/etc/containers/registries.conf):
registries.conf is the configuration file which specifies which container registries should be consulted when completing image names which do not include a registry or domain portion.
[https://docs.podman.io/en/latest/markdown/podman-search.1.html]
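Stripped of all the comments, the effective change boils down to a few lines. This minimal fragment assumes Docker Hub is the only registry you want to search:

```toml
# /etc/containers/registries.conf (minimal version, comments removed)
unqualified-search-registries = ["docker.io"]

[[registry]]
prefix = "docker.io/library"
location = "docker.io/library"
```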
Again, to build the container image (based on docker.io/library/httpd), I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-build.1.html]
podman build -t my-apache2 .
With the following output:
STEP 1/2: FROM httpd:2.4
Resolving "httpd" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/library/httpd:2.4...
Getting image source signatures
Copying blob 5bfb2ce98078 skipped: already exists
Copying blob f29089ecfcbf [--------------------------------------] 0.0b / 0.0b
Copying blob 31b3f1ad4ce1 [--------------------------------------] 0.0b / 0.0b
Copying blob a9fcd580ef1c [--------------------------------------] 0.0b / 0.0b
Copying blob a19138bf3164 [--------------------------------------] 0.0b / 0.0b
Copying config f2789344c5 done
Writing manifest to image destination
Storing signatures
STEP 2/2: COPY ./public-html/ /usr/local/apache2/htdocs/
COMMIT my-apache2
--> 5b4b67eb76f
Successfully tagged localhost/my-apache2:latest
5b4b67eb76fda55399fbe90cafd7e3d9be9ea8896022cdc67e2d89ca72862413
Remark:
If you omit the dot at the end of the command, you get the following error:
Error: no context directory and no Containerfile specified
A quick check with podman images gave as output:
REPOSITORY               TAG     IMAGE ID      CREATED         SIZE
localhost/my-apache2     latest  5b4b67eb76fd  42 seconds ago  150 MB
docker.io/library/httpd  2.4     f2789344c573  11 days ago     150 MB
docker.io/library/httpd  latest  f2789344c573  11 days ago     150 MB
To run a process in a new container, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-run.1.html]
podman run -dit --name my-running-app -p 8080:80 my-apache2
With the following output:
d8a502db026472d24c25b169f38d2bb25955e88db285f363cbefa88ab298d0c0
A quick check with podman ps -a gave as output:
CONTAINER ID  IMAGE                        COMMAND           CREATED         STATUS             PORTS                 NAMES
d8a502db0264  localhost/my-apache2:latest  httpd-foreground  35 seconds ago  Up 35 seconds ago  0.0.0.0:8080->80/tcp  my-running-app
For testing the httpd container, I used the following command on the Linux Command Prompt:
curl http://localhost:8080
With the following output:
<!DOCTYPE html>
<html>
  <head>
    <title>Apache HTTP Server</title>
  </head>
  <body>
    <h1>Hello World. Greetings from AMIS.</h1>
  </body>
</html>
Podman and Kubernetes
At Podman we believe that Kubernetes is the defacto standard for composing Pods and for orchestrating containers, making Kubernetes YAML a defacto standard file format. Hence, Podman allows the creation and execution of Pods from a Kubernetes YAML file (see podman-play-kube). Podman can also generate Kubernetes YAML based on a container or Pod (see podman-generate-kube), which allows for an easy transition from a local development environment to a production Kubernetes cluster.
[https://podman.io/whatis.html#out-of-scope]
I also wanted to try out some Podman commands related to Kubernetes YAML files.
Podman command: podman-generate-kube
podman generate kube will generate Kubernetes YAML (v1 specification) from Podman containers, pods or volumes. Regardless of whether the input is for containers or pods, Podman will always generate the specification as a Pod. The input may be in the form of one or more containers, pods or volumes names or IDs.
[https://docs.podman.io/en/stable/markdown/podman-generate-kube.1.html]
This command is equivalent to: podman-kube-generate
[https://docs.podman.io/en/latest/markdown/podman-kube-generate.1.html]
To generate a Kubernetes YAML based on a container, I used the following command on the Linux Command Prompt:
podman generate kube my-running-app -f /mnt/mysharedfolder/my-running-app.yaml
This created a my-running-app.yaml file, with the following content:
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-3.4.4
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-09-25T12:48:45Z"
  labels:
    app: my-running-app
  name: my-running-app
spec:
  containers:
  - image: localhost/my-apache2:latest
    name: my-running-app
    ports:
    - containerPort: 80
      hostPort: 8080
    securityContext:
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
    stdin: true
    tty: true
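A quick sanity check on the generated file is to confirm that the port mapping from podman run survived the conversion (the path is the one used above):

```shell
# Confirm the generated Pod spec still publishes container port 80 on host port 8080
grep -E "containerPort|hostPort" /mnt/mysharedfolder/my-running-app.yaml
```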
Because I now had a YAML file, I no longer needed the locally running container, so I removed it via the following commands:
podman stop -l
podman rm -l
By the way, the image still exists after this.
Podman command: podman-play-kube
podman play kube will read in a structured file of Kubernetes YAML. It will then recreate the containers, pods or volumes described in the YAML. Containers within a pod are then started and the ID of the new Pod or the name of the new Volume is output. If the yaml file is specified as “-” then podman play kube will read the YAML file from stdin. Using the --down command line option, it is also capable of tearing down the pods created by a previous run of podman play kube. Using the --replace command line option, it will tear down the pods (if any) created by a previous run of podman play kube and recreate the pods with the Kubernetes YAML file. Ideally the input file would be one created by Podman (see podman-generate-kube(1)). This would guarantee a smooth import and expected results.
Currently, the supported Kubernetes kinds are:
- Pod
- Deployment
- PersistentVolumeClaim
- ConfigMap
[https://docs.podman.io/en/stable/markdown/podman-play-kube.1.html]
This command is equivalent to: podman-kube-play
[https://docs.podman.io/en/latest/markdown/podman-kube-play.1.html]
To create the Container and Pod, described in the Kubernetes YAML file, I used the following command on the Linux Command Prompt:
podman play kube /mnt/mysharedfolder/my-running-app.yaml
With the following output:
a container exists with the same name ("my-running-app") as the pod in your YAML file; changing pod name to my-running-app_pod
Pod:
dca450da0b6f29b2c1976e7531df6c1dc462f1b2f84c7b014bd10c8d0965a5ad
Container:
ba279b510e3e165f0fced8cc672a80197f64f0eca333b4a1cebb2289eb3df119
A quick check via podman ps -a gave the following output:
CONTAINER ID  IMAGE                        COMMAND           CREATED             STATUS                 PORTS                 NAMES
beda8e20f2f6  k8s.gcr.io/pause:3.5                           About a minute ago  Up About a minute ago  0.0.0.0:8080->80/tcp  dca450da0b6f-infra
ba279b510e3e  localhost/my-apache2:latest  httpd-foreground  About a minute ago  Up About a minute ago  0.0.0.0:8080->80/tcp  my-running-app_pod-my-running-app
Remark:
In the generated YAML file, the name of the Pod and Container are the same (my-running-app). According to the output above, the name of the Pod is changed by Podman to: my-running-app_pod (and the name of the Container is apparently changed to: my-running-app_pod-my-running-app).
To list the Pods on my system, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-pod-ps.1.html]
podman pod ps
With the following output:
POD ID        NAME                STATUS   CREATED        INFRA ID      # OF CONTAINERS
dca450da0b6f  my-running-app_pod  Running  2 minutes ago  beda8e20f2f6  2
Remark:
You can also use: podman pod list
I wanted to check if this Pod was now known in Kubernetes.
So, to list all Pods in all namespaces, I used the following command on the Linux Command Prompt:
[https://kubernetes.io/docs/reference/kubectl/cheatsheet/#viewing-finding-resources]
kubectl get pods --all-namespaces
With the following output:
NAMESPACE              NAME                                        READY   STATUS    RESTARTS      AGE
kube-system            coredns-565d847f94-rdzfm                    1/1     Running   0             43h
kube-system            etcd-minikube                               1/1     Running   0             43h
kube-system            kindnet-vdgsl                               1/1     Running   0             43h
kube-system            kube-apiserver-minikube                     1/1     Running   0             43h
kube-system            kube-controller-manager-minikube            1/1     Running   0             43h
kube-system            kube-proxy-8scpb                            1/1     Running   0             43h
kube-system            kube-scheduler-minikube                     1/1     Running   0             43h
kube-system            storage-provisioner                         1/1     Running   1 (43h ago)   43h
kubernetes-dashboard   dashboard-metrics-scraper-b74747df5-fbqqs   1/1     Running   0             43h
kubernetes-dashboard   kubernetes-dashboard-54596f475f-6pxdh       1/1     Running   0             43h
And obviously my Pod wasn’t known in Kubernetes, because the Pod was created locally.
Minikube
In order to create the Pod, I used the following command on the Linux Command Prompt:
[https://kubernetes.io/docs/reference/kubectl/cheatsheet/#creating-objects]
kubectl apply -f /mnt/mysharedfolder/my-running-app.yaml
With the following output:
pod/my-running-app created
Next, in the Web Browser on my Windows laptop, I started the Kubernetes Dashboard in my demo environment, in order to check the created Pod.
I could see the my-running-app Pod was not running; instead, its status was "ImagePullBackOff".
I opened the log from the my-running-app Pod, and saw the reason behind the problem:
Then I closed the log and I clicked on the Pod Name (my-running-app) to show the details of the Pod.
Apparently, there was a problem pulling the image (localhost/my-apache2:latest) the container was based on.
Remember, this image was mentioned in the my-running-app.yaml file:
…
spec:
  containers:
  - image: localhost/my-apache2:latest
    name: my-running-app
    ports:
    - containerPort: 80
      hostPort: 8080
…
So, the image that was created via Podman, and was mentioned in the Kubernetes YAML file, was not available in Minikube.
Next, via the Kubernetes Dashboard, I deleted the my-running-app Pod.
Remark:
As you can see in the pop-up, this is equivalent to: kubectl delete -n default pod my-running-app
In order to fix this problem, I had a look at the Minikube documentation about “Pushing images”.
[https://minikube.sigs.k8s.io/docs/handbook/pushing/]
Pushing images
On this page, I navigated to the “Comparison table for different methods” part.
The best method to push your image to minikube depends on the container-runtime you built your cluster with (the default is docker).
[https://minikube.sigs.k8s.io/docs/handbook/pushing/#comparison-table-for-different-methods]
First, I had a look at the method for the Docker runtime (docker-env command).
When using a container or VM driver (all drivers except none), you can reuse the Docker daemon inside minikube cluster. This means you don’t have to build on your host machine and push the image into a docker registry. You can just build inside the same docker daemon as minikube which speeds up local experiments.
[https://minikube.sigs.k8s.io/docs/handbook/pushing/#1-pushing-directly-to-the-in-cluster-docker-daemon-docker-env]
Because I was using Podman as the container-runtime, I then navigated to the method for:
podman-env command
[https://minikube.sigs.k8s.io/docs/handbook/pushing/#3-pushing-directly-to-in-cluster-cri-o-podman-env]
Next, I clicked on the ‘Linux’ tab:
Before following the instructions on this page, I first wanted a list of the images.
Listing images
To display locally stored images, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-images.1.html]
podman images
With the following output:
REPOSITORY               TAG     IMAGE ID      CREATED        SIZE
localhost/my-apache2     latest  5b4b67eb76fd  7 hours ago    150 MB
docker.io/library/httpd  2.4     f2789344c573  12 days ago    150 MB
docker.io/library/httpd  latest  f2789344c573  12 days ago    150 MB
k8s.gcr.io/pause         3.5     ed210e3e4a5b  18 months ago  690 kB
To list images in Minikube, I used the following command on the Linux Command Prompt:
[https://minikube.sigs.k8s.io/docs/commands/image/#minikube-image-ls]
minikube image ls
With the following output:
registry.k8s.io/pause:3.8
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.25.0
registry.k8s.io/kube-proxy:v1.25.0
registry.k8s.io/kube-controller-manager:v1.25.0
registry.k8s.io/kube-apiserver:v1.25.0
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20220726-ed811e41
To list images in Minikube with output in table format, I used the following command on the Linux Command Prompt:
[https://minikube.sigs.k8s.io/docs/commands/image/#minikube-image-ls]
minikube image ls --format table
With the following output:
So, here we don’t see the image localhost/my-apache2:latest.
In order to create the image, I followed the instructions on the page, mentioned above.
To get an idea about what happens in the first step on the page, I used the following command on the Linux Command Prompt:
[https://man7.org/linux/man-pages/man1/echo.1p.html]
echo $(minikube -p minikube podman-env)
With the following output:
export CONTAINER_HOST="ssh://docker@127.0.0.1:33281/run/podman/podman.sock"
export CONTAINER_SSHKEY="/home/vagrant/.minikube/machines/minikube/id_rsa"
export MINIKUBE_ACTIVE_PODMAN="minikube"
# To point your shell to minikube's podman service, run:
# eval $(minikube -p minikube podman-env)
So, above we see the output of the command: minikube -p minikube podman-env
Remark about Command Substitution:
Command substitution allows the output of a command to replace the command itself. Command substitution occurs when a command is enclosed as follows: $(command) or `command`.
Bash performs the expansion by executing command in a subshell environment and replacing the command substitution with the standard output of the command, with any trailing newlines deleted.
[https://www.gnu.org/software/bash/manual/html_node/Command-Substitution.html]
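A quick illustration of command substitution (using uname, unrelated to Podman, just to show the mechanism):

```shell
# Command substitution: $(...) and `...` are both replaced by the
# command's standard output, with trailing newlines removed.
kernel=$(uname -s)
kernel2=`uname -s`
echo "Running on $kernel"
[ "$kernel" = "$kernel2" ] && echo "both forms give the same result"
```

On my Linux demo environment this prints "Running on Linux", followed by the confirmation that both forms are equivalent.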
In order for any ‘podman’ command I run in this current terminal to run against Podman inside the Minikube cluster, I used the following command on the Linux Command Prompt:
[https://man7.org/linux/man-pages/man1/eval.1p.html]
eval $(minikube -p minikube podman-env)
With no output.
Remark about the eval utility:
The eval utility shall construct a command by concatenating arguments together, separating each with a <space> character. The constructed command shall be read and executed by the shell.
[https://man7.org/linux/man-pages/man1/eval.1p.html]
So, essentially the output of the command minikube -p minikube podman-env is read and executed by the shell, meaning that the export commands seen above are executed. And this in turn means that any ‘podman’ command you run in this current terminal will run against Podman inside the Minikube cluster.
[https://man7.org/linux/man-pages/man1/export.1p.html]
Evaluating the podman-env is only valid for the current terminal. By closing the terminal, you will go back to using your own system’s podman daemon.
In container-based drivers such as Docker or Podman, you will need to re-do docker-env respectively podman-env each time you restart your minikube cluster.
[https://minikube.sigs.k8s.io/docs/handbook/pushing/#1-pushing-directly-to-the-in-cluster-docker-daemon-docker-env]
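The effect of eval can be demonstrated without touching minikube at all. Here I use a harmless stand-in for the output of minikube -p minikube podman-env (the variable name DEMO_ACTIVE_PODMAN is made up for this illustration):

```shell
# eval executes its argument (here, a string that looks like the output of
# minikube podman-env) in the *current* shell, so the exported variable
# is still set after eval returns.
env_output='export DEMO_ACTIVE_PODMAN="minikube"'
eval "$env_output"
echo "$DEMO_ACTIVE_PODMAN"
```

This prints "minikube". Had the export been run in a subshell instead (for example via a plain pipeline), the variable would not survive in the current terminal, which is exactly why the minikube documentation tells you to use eval.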
Remark about minikube podman-env:
With this command, you configure the environment to use minikube's Podman service.
-p, --profile string
The name of the minikube VM being used. This can be set to allow having multiple instances of minikube independently. (default "minikube")
[https://minikube.sigs.k8s.io/docs/commands/podman-env/]
Run commands against Podman inside the Minikube cluster
To display images stored inside the Minikube cluster, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-images.1.html]
podman --remote images
With the following output:
REPOSITORY                                     TAG                 IMAGE ID      CREATED        SIZE
docker.io/library/httpd                        2.4                 f2789344c573  12 days ago    150 MB
registry.k8s.io/kube-apiserver                 v1.25.0             4d2edfd10d3e  4 weeks ago    129 MB
registry.k8s.io/kube-controller-manager        v1.25.0             1a54c86c03a6  4 weeks ago    118 MB
registry.k8s.io/kube-scheduler                 v1.25.0             bef2cf311509  4 weeks ago    51.9 MB
registry.k8s.io/kube-proxy                     v1.25.0             58a9a0c6d96f  4 weeks ago    63.3 MB
docker.io/library/registry                     <none>              3a0f7b0a13ef  6 weeks ago    24.7 MB
docker.io/kindest/kindnetd                     v20220726-ed811e41  d921cee84948  2 months ago   63.3 MB
registry.k8s.io/pause                          3.8                 4873874c08ef  3 months ago   718 kB
registry.k8s.io/etcd                           3.5.4-0             a8a176a5d5d6  3 months ago   301 MB
docker.io/kubernetesui/dashboard               <none>              1042d9e0d8fc  3 months ago   250 MB
docker.io/kubernetesui/metrics-scraper         <none>              115053965e86  3 months ago   43.8 MB
registry.k8s.io/coredns/coredns                v1.9.3              5185b96f0bec  4 months ago   48.9 MB
registry.k8s.io/pause                          3.6                 6270bb605e12  13 months ago  690 kB
gcr.io/k8s-minikube/storage-provisioner        v5                  6e38f40d628d  18 months ago  31.5 MB
gcr.io/google_containers/kube-registry-proxy   <none>              60dc18151daf  5 years ago    196 MB
As you can read at the second step on the page, we should now be able to use the podman client on the command line of our host machine, talking to the podman service inside the minikube VM.
[https://minikube.sigs.k8s.io/docs/handbook/pushing/#3-pushing-directly-to-in-cluster-cri-o-podman-env]
So, I tried:
[https://docs.podman.io/en/latest/markdown/podman-remote.1.html]
podman-remote --help
With the following output:
command not found
This wasn’t working, so for the next commands I used the --remote option instead.
Remark about remote option:
--remote, -r
When true, access to the Podman service will be remote. Defaults to false. Settings can be modified in the containers.conf file. If the CONTAINER_HOST environment variable is set, the --remote option defaults to true.
[https://docs.podman.io/en/latest/markdown/podman.1.html]
Remember that in our case, the CONTAINER_HOST environment variable is set!
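That rule can be illustrated in plain shell (the socket URL below is the one that minikube podman-env exported earlier in this article; the if-test merely mimics Podman's internal check and is not Podman code):

```shell
# Podman behaves as if --remote was given whenever CONTAINER_HOST is set.
# Mimic that decision in shell, using the value exported by podman-env:
CONTAINER_HOST="ssh://docker@127.0.0.1:33281/run/podman/podman.sock"
if [ -n "$CONTAINER_HOST" ]; then
  echo "remote mode (CONTAINER_HOST is set)"
else
  echo "local mode"
fi
```

So in this terminal, even a bare podman command would already have gone to the Podman service inside the Minikube cluster; I added --remote explicitly for clarity.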
To build the container image (inside the Minikube cluster), I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-build.1.html]
podman --remote build -t my-apache2 .
With the following output:
STEP 1/2: FROM httpd:2.4
STEP 2/2: COPY ./public-html/ /usr/local/apache2/htdocs/
COMMIT my-apache2
--> 0054b74c6f5
Successfully tagged localhost/my-apache2:latest
0054b74c6f5dbe6b88c6aca1a3a79ac2fc669e2ccb9b0d26a08a0587bcc31ac4
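For reference, the build context can be reconstructed from the build output: the two STEP lines give the Dockerfile, and the page content matches the curl response shown later in this article. A sketch (using /tmp/my-apache2 as a stand-in for my actual working directory):

```shell
# Recreate the build context: a Dockerfile based on httpd:2.4 that copies
# a public-html directory into Apache's htdocs.
mkdir -p /tmp/my-apache2/public-html
cat > /tmp/my-apache2/Dockerfile <<'EOF'
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
EOF
cat > /tmp/my-apache2/public-html/index.html <<'EOF'
<!DOCTYPE html>
<html>
<head><title>Apache HTTP Server</title></head>
<body><h1>Hello World. Greetings from AMIS.</h1></body>
</html>
EOF
```

Running podman --remote build -t my-apache2 . from inside such a directory is what produced the output above.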
Again, to display the images stored inside the Minikube cluster, I used the following command on the Linux Command Prompt:
[https://docs.podman.io/en/latest/markdown/podman-images.1.html]
podman --remote images
With the following output:
REPOSITORY                                     TAG                 IMAGE ID      CREATED        SIZE
localhost/my-apache2                           latest              0054b74c6f5d  4 seconds ago  150 MB
docker.io/library/httpd                        2.4                 f2789344c573  12 days ago    150 MB
registry.k8s.io/kube-apiserver                 v1.25.0             4d2edfd10d3e  4 weeks ago    129 MB
registry.k8s.io/kube-scheduler                 v1.25.0             bef2cf311509  4 weeks ago    51.9 MB
registry.k8s.io/kube-controller-manager        v1.25.0             1a54c86c03a6  4 weeks ago    118 MB
registry.k8s.io/kube-proxy                     v1.25.0             58a9a0c6d96f  4 weeks ago    63.3 MB
docker.io/library/registry                     <none>              3a0f7b0a13ef  6 weeks ago    24.7 MB
docker.io/kindest/kindnetd                     v20220726-ed811e41  d921cee84948  2 months ago   63.3 MB
registry.k8s.io/pause                          3.8                 4873874c08ef  3 months ago   718 kB
registry.k8s.io/etcd                           3.5.4-0             a8a176a5d5d6  3 months ago   301 MB
docker.io/kubernetesui/dashboard               <none>              1042d9e0d8fc  3 months ago   250 MB
docker.io/kubernetesui/metrics-scraper         <none>              115053965e86  3 months ago   43.8 MB
registry.k8s.io/coredns/coredns                v1.9.3              5185b96f0bec  4 months ago   48.9 MB
registry.k8s.io/pause                          3.6                 6270bb605e12  13 months ago  690 kB
gcr.io/k8s-minikube/storage-provisioner        v5                  6e38f40d628d  18 months ago  31.5 MB
gcr.io/google_containers/kube-registry-proxy   <none>              60dc18151daf  5 years ago    196 MB
To list images in Minikube with output in table format, I used the following command on the Linux Command Prompt:
[https://minikube.sigs.k8s.io/docs/commands/image/#minikube-image-ls]
minikube image ls --format table
With the following output:
So, this time the image (localhost/my-apache2:latest) is available in Minikube!
In order to create the Pod, I used the following command on the Linux Command Prompt:
kubectl create -f /mnt/mysharedfolder/my-running-app.yaml
With the following output:
pod/my-running-app created
To list all pods in all namespaces, I used the following command on the Linux Command Prompt:
[https://kubernetes.io/docs/reference/kubectl/cheatsheet/#viewing-finding-resources]
kubectl get pods --all-namespaces
With the following output:
NAMESPACE              NAME                                        READY   STATUS    RESTARTS      AGE
default                my-running-app                              1/1     Running   0             34s
kube-system            coredns-565d847f94-rdzfm                    1/1     Running   1             2d17h
kube-system            etcd-minikube                               1/1     Running   1             2d17h
kube-system            kindnet-vdgsl                               1/1     Running   1             2d17h
kube-system            kube-apiserver-minikube                     1/1     Running   1             2d17h
kube-system            kube-controller-manager-minikube            1/1     Running   1             2d17h
kube-system            kube-proxy-8scpb                            1/1     Running   1             2d17h
kube-system            kube-scheduler-minikube                     1/1     Running   1             2d17h
kube-system            registry-proxy-fd6lm                        1/1     Running   1             16h
kube-system            registry-rtnrt                              1/1     Running   1             16h
kube-system            storage-provisioner                         1/1     Running   3 (19m ago)   2d17h
kubernetes-dashboard   dashboard-metrics-scraper-b74747df5-fbqqs   1/1     Running   1             2d17h
kubernetes-dashboard   kubernetes-dashboard-54596f475f-6pxdh       1/1     Running   2 (19m ago)   2d17h
Here I could see, the my-running-app Pod was indeed running!
Next, in the Web Browser on my Windows laptop, I started the Kubernetes Dashboard in my demo environment, in order to check the created Pod.
Again, also here I could see that it was running without any errors. Great!
In order to be able to check the functionality of the my-running-app Pod (via port forwarding), I used the following commands on the Linux Command Prompt:
[https://kubernetes.io/docs/reference/kubectl/cheatsheet/#viewing-finding-resources]
nodeIP=$(kubectl get node minikube -o yaml | grep address: | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}")
echo "---$nodeIP---"
With the following output:
---192.168.49.2---
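To show what the grep pipeline is actually doing, here it is applied to a small sample resembling the relevant fragment of the kubectl get node minikube -o yaml output (the sample text is made up for this illustration, using the address found above):

```shell
# The first grep keeps the line containing 'address:', the second extracts
# the first IPv4-shaped token from it.
sample='  addresses:
  - address: 192.168.49.2
    type: InternalIP'
nodeIP=$(echo "$sample" | grep address: | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}")
echo "---$nodeIP---"
```

This prints ---192.168.49.2---, the Minikube node's internal IP address.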
For testing the Pod/Container, I used the following command on the Linux Command Prompt:
curl http://$nodeIP:8080
With the following output:
<!DOCTYPE html>
<html>
<head>
<title>Apache HTTP Server</title>
</head>
<body>
<h1>Hello World. Greetings from AMIS.</h1>
</body>
</html>
Next, for port forwarding, I used the following command on the Linux Command Prompt:
socat tcp-listen:8080,fork tcp:$nodeIP:8080 &
With the following output:
[1] 9645
Then, in the Web Browser on my Windows laptop, I entered the URL: http://localhost:8080
And not surprisingly, I got the following result:
So, the my-running-app Pod was functioning as expected.
With exit, I closed the ssh session in the Windows Command Prompt.
Below you find an overview of the demo environment:
So now it’s time to conclude this article.
In this article, you have read about other Podman commands I tried out, as I continued following “Getting Started with Podman”, including some related to Kubernetes YAML files. You have also read about the steps I took to build a container image inside the Minikube cluster (Image pushing).