programming: the action or process of writing computer programs. | rants: speak or shout at length in a wild, [im]passioned way.
2023-10-14
Benchmarking docker-volume vs mount-fs vs tmpfs
version: "3.7"
services:
web:
image: ubuntu
command: "sleep 3600"
volumes:
- ./temp1:/temp1 # mountfs
- temp2:/temp2 # dockvol
- temp3:/temp3 # tmpfs
volumes:
temp2:
temp3:
driver_opts:
type: tmpfs
device: tmpfs
The docker compose file is in a sibling directory of docker's data-root, to make sure everything uses the same SSD. For the first benchmark we clone this repository, then copy the files, create 100 small files, compile, and do 2 sequential writes (small and large). Here are the results (some steps are not pasted below, eg. removing files when running a benchmark twice):
apt install git g++ make time
alias time='/usr/bin/time -f "\nCPU: %Us\tReal: %es\tRAM: %MKB"'
cd /temp3 # tmpfs
git clone https://github.com/nikolausmayer/file-IO-benchmark.git
### copy small files
time cp -R /temp3/file-IO-benchmark /temp2 # dockvol
CPU: 0.00s Real: 1.02s RAM: 2048KB
time cp -R /temp3/file-IO-benchmark /temp1 # mountfs
CPU: 0.00s Real: 1.00s RAM: 2048KB
### create 100 x 10MB files
cd /temp3/file*
time make data # tmpfs
CPU: 0.41s Real: 0.91s RAM: 3072KB
cd /temp2/file*
time make data # dockvol
CPU: 0.44s Real: 1.94s RAM: 2816KB
cd /temp1/file*
time make data # mountfs
CPU: 0.51s Real: 1.83s RAM: 2816KB
### compile
cd /temp3/file*
time make # tmpfs
CPU: 2.93s Real: 3.23s RAM: 236640KB
cd /temp2/file*
time make # dockvol
CPU: 2.94s Real: 3.22s RAM: 236584KB
cd /temp1/file*
time make # mountfs
CPU: 2.89s Real: 3.13s RAM: 236300KB
### sequential small
cd /temp3 # tmpfs
time dd if=/dev/zero of=./test.img count=10 bs=200M
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 0.910784 s, 2.3 GB/s
cd /temp2 # dockvol
time dd if=/dev/zero of=./test.img count=10 bs=200M
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 2.26261 s, 927 MB/s
cd /temp1 # mountfs
time dd if=/dev/zero of=./test.img count=10 bs=200M
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 2.46954 s, 849 MB/s
### sequential large
cd /temp3 # tmpfs
time dd if=/dev/zero of=./test.img count=10 bs=1G
10737418240 bytes (11 GB, 10 GiB) copied, 4.95956 s, 2.2 GB/s
cd /temp2 # dockvol
time dd if=/dev/zero of=./test.img count=10 bs=1G
10737418240 bytes (11 GB, 10 GiB) copied, 81.8511 s, 131 MB/s
10737418240 bytes (11 GB, 10 GiB) copied, 44.2367 s, 243 MB/s
# ^ running twice because I'm not sure why it's so slow
cd /temp1 # mountfs
time dd if=/dev/zero of=./test.img count=10 bs=1G
10737418240 bytes (11 GB, 10 GiB) copied, 12.7516 s, 842 MB/s
The conclusion: docker volume is a bit faster (+10%) for small sequential writes, but significantly slower (-72% to -84%) for large sequential writes compared to bind/mount-fs; for the other cases there seems to be no noticeable difference. I always prefer bind/mount-fs over docker volume for safety reasons: if you accidentally run docker volume rm $(docker volume ls -q) it deletes all your docker volumes (I did this multiple times on my own dev PC), and with bind/mount-fs you can easily backup/rsync/copy/manage the files. For cases where you don't care about losing the files and need high performance (as long as your RAM is enough), just use tmpfs.
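To illustrate the backup point: a named volume needs a helper container to copy files out, while a bind mount is just a host directory. A rough sketch (names follow the compose file above; compose may prefix the volume name with the project name, check with docker volume ls):
# back up the dockvol named volume through a throwaway container
docker run --rm -v temp2:/from -v "$PWD/backup":/to ubuntu \
  tar czf /to/temp2-backup.tgz -C /from .
# the bind mount can be copied/rsynced directly from the host
rsync -aP ./temp1/ ./backup/temp1/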
2023-07-29
Using Vault with Go
So today we're gonna use vault to keep the configuration of an application in-memory. This makes debugging harder (since it's in memory, not on disk), but is a bit more secure (if the server gets hacked, the attacker has to read memory to learn the credentials).
The flow of doing this is something like this:
1. Set up Vault service in separate directory (vault-server/Dockerfile):
FROM hashicorp/vault
RUN apk add --no-cache bash jq
COPY reseller1-policy.hcl /vault/config/reseller1-policy.hcl
COPY terraform-policy.hcl /vault/config/terraform-policy.hcl
COPY init_vault.sh /init_vault.sh
EXPOSE 8200
ENTRYPOINT [ "/init_vault.sh" ]
HEALTHCHECK \
--start-period=5s \
--interval=1s \
--timeout=1s \
--retries=30 \
CMD [ "/bin/sh", "-c", "[ -f /tmp/healthy ]" ]
2. The reseller1 policy ("user" for the app) and the terraform policy (just a name, we don't actually use terraform here; this could be any tool that provisions/deploys the app, eg. any CD pipeline) look something like this:
# terraform-policy.hcl
path "auth/approle/role/dummy_role/secret-id" {
  capabilities = ["update"]
}
path "secret/data/dummy_config_yaml/*" {
  capabilities = ["create","update","read","patch","delete"]
}
path "secret/dummy_config_yaml/*" { # v1
  capabilities = ["create","update","read","patch","delete"]
}
path "secret/metadata/dummy_config_yaml/*" {
  capabilities = ["list"]
}

# reseller1-policy.hcl
path "secret/data/dummy_config_yaml/reseller1/*" {
  capabilities = ["read"]
}
path "secret/dummy_config_yaml/reseller1/*" { # v1
  capabilities = ["read"]
}
3. Then we need to create an init script for docker (init_vault.sh), so it can set up the required permissions when the container starts (insert policies, create the AppRole, set the token for the provisioner), something like this:
#!/bin/sh
set -e
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_FORMAT='json'
sleep 1s
vault login -no-print "${VAULT_DEV_ROOT_TOKEN_ID}"
vault policy write terraform-policy /vault/config/terraform-policy.hcl
vault policy write reseller1-policy /vault/config/reseller1-policy.hcl
vault auth enable approle
# configure AppRole
vault write auth/approle/role/dummy_role \
  token_policies=reseller1-policy \
  token_num_uses=0 \
  secret_id_ttl="32d" \
  token_ttl="32d" \
  token_max_ttl="32d"
# overwrite token for provisioner
vault token create \
  -id="${TERRAFORM_TOKEN}" \
  -policy=terraform-policy \
  -ttl="32d"
# keep container alive
tail -f /dev/null & trap 'kill %1' TERM ; wait
4. Now that all of that has been set up, we can create a docker compose file (docker-compose.yaml) to start everything with proper environment variable injection, something like this:
version: '3.3'
services:
  testvaultserver1:
    build: ./vault-server/
    cap_add:
      - IPC_LOCK
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: root
      APPROLE_ROLE_ID: dummy_app
      TERRAFORM_TOKEN: dummyTerraformToken
    ports:
      - "8200:8200"
# run with: docker compose up
5. Now that the vault server is up, we can run a script (this should be run by the provisioner/CD) that retrieves an AppSecret and writes it to /tmp/secret, and writes our app configuration (config.yaml) to the vault path with key dummy_config_yaml/reseller1/region99, something like this:
TERRAFORM_TOKEN=`cat docker-compose.yaml | grep TERRAFORM_TOKEN | cut -d':' -f2 | xargs echo -n`
VAULT_ADDRESS="127.0.0.1:8200"
# retrieve secret for appsecret so dummy app can load the /tmp/secret
curl \
--request POST \
--header "X-Vault-Token: ${TERRAFORM_TOKEN}" \
"${VAULT_ADDRESS}/v1/auth/approle/role/dummy_role/secret-id" > /tmp/debug
cat /tmp/debug | jq -r '.data.secret_id' > /tmp/secret
# check appsecret exists
cat /tmp/debug
cat /tmp/secret
VAULT_DOCKER=`docker ps| grep vault | cut -d' ' -f 1`
echo 'put secret'
cat config.yaml | docker exec -i $VAULT_DOCKER vault -v kv put -address=http://127.0.0.1:8200 -mount=secret dummy_config_yaml/reseller1/region99 raw=-
echo 'check secret length'
docker exec -i $VAULT_DOCKER vault -v kv get -address=http://127.0.0.1:8200 -mount=secret dummy_config_yaml/reseller1/region99 | wc -l
6. Next, we just need to create an application that reads the AppSecret (/tmp/secret) and retrieves the application config from the vault key path secret/dummy_config_yaml/reseller1/region99, something like this:
secretId := readFile(`/tmp/secret`) // the AppSecret written by the provisioner
config := vault.DefaultConfig()
config.Address = address
client, err := vault.NewClient(config)
appRoleAuth, err := approle.NewAppRoleAuth(
  AppRoleID, // injected at compile time = `dummy_app`
  &approle.SecretID{FromString: secretId})
_, err = client.Auth().Login(context.Background(), appRoleAuth)
const configPath = `secret/data/dummy_config_yaml/reseller1/region99`
secret, err := client.Logical().Read(configPath)
data := secret.Data[`data`]
m, ok := data.(map[string]interface{})
raw, ok := m[`raw`]
rawStr, ok := raw.(string)
The content of rawStr read from vault will be exactly the same as config.yaml.
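readFile in the snippet above is not a library function; a minimal sketch of such a helper (assuming /tmp/secret contains only the secret id) could be:
func readFile(path string) string {
  b, err := os.ReadFile(path) // "os" from stdlib
  if err != nil {
    log.Fatal(err) // "log" from stdlib
  }
  return strings.TrimSpace(string(b)) // drop the trailing newline written by jq
}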
This way, if a hacker already got into the system/OS/docker, they can only learn the secretId; to learn the AppRoleID and the config.yaml content they have to analyze the process memory. Full source code can be found here.
2023-05-17
Dockerfile vs Nixpacks vs ko
Dockerfile is quite simple: first pick the base image for the build phase (only if you want to build inside docker; if you already have CI/CD that builds it outside, you just need to copy the executable binary directly), put the build steps as commands, choose the runtime image for the run stage (popular ones like ubuntu/debian have a bunch of debugging tools, alpine/busybox for a stripped one), copy the binary to that layer, and done.
FROM golang:1.20 as build1
WORKDIR /app1
# if you don't use go mod vendor
#COPY go.mod .
#COPY go.sum .
#RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o app1.exe
FROM busybox:latest
WORKDIR /
COPY --from=build1 /etc/ssl/certs /etc/ssl/certs
COPY --from=build1 /app1/app1.exe .
CMD ./app1.exe
then run the docker build and docker run command:
# build
docker build . -t app0
[+] Building 76.2s (15/15) FINISHED -- first time, without vendor
[+] Building 9.5s (12/12) FINISHED -- changing code, rebuild, with go mod vendor
# run
docker run -it app0
With nixpacks you just need to run this, without having to create a Dockerfile (as long as there's a main.go file):
# install nixpack
curl -sSL https://nixpacks.com/install.sh | bash
# build
nixpacks build . --name app1
[+] Building 315.7s (19/19) FINISHED -- first time build
[+] Building 37.2s (19/19) FINISHED -- changing code, rebuild
# run
docker run -it app1
With ko, you don't need a Dockerfile either:
# install ko
go install github.com/google/ko@latest
# build
time ko build -L -t app2
CPU: 0.84s Real: 5.05s RAM: 151040KB
# run (have to do this since the image name is hashed)
docker run -it `docker image ls | grep app2 | cut -d ' ' -f 1`
How about container image size? The Dockerfile with busybox only uses 14.5MB (with ubuntu 82.4MB, debian 133MB, alpine 15.2MB), with nixpacks it uses 99.2MB, and with ko it only takes 11.5MB, but ko only supports Go (and you cannot debug inside it, eg. for testing connectivity to a 3rd party dependency using a shell inside the container). So is it better to use nixpacks? I don't think so; both build speed and image size for this case are inferior compared to a normal Dockerfile with busybox or ko.
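The sizes can be checked from the local image list (tags follow the build commands above):
docker image ls | grep -E 'app0|app1|app2'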
2022-03-26
Move docker Data Directory to Another Partition
My first NVMe (I hate this brand, S*****P****) sometimes hangs periodically (marking the filesystem read-only), causing 50-100% CPU usage while nothing more can be done. So I had to move some parts of it to another NVMe drive; here's how I moved docker to another partition.
sudo systemctl stop docker
Then you can edit the daemon config:
sudo vim /etc/docker/daemon.json
# where /media/asd/nvme2 is the mount point of your other partition
# add something like this:
{
"dns": ["8.8.8.8","1.1.1.1"],
"data-root": "/media/asd/nvme2/docker"
}
Clone the docker data directory to the new partition (no need to mkdir the docker folder), then move the old one out of the way:
sudo rsync -aP --progress /var/lib/docker/ /media/asd/nvme2/docker
sudo mv /var/lib/docker /var/lib/docker.backup
Then try to start again the docker service:
sudo systemctl start docker
sudo systemctl status docker
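To confirm docker is now using the new location, the data-root should show up in docker info:
docker info | grep 'Docker Root Dir'
# Docker Root Dir: /media/asd/nvme2/docker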
If it all works, you can delete the backup of original data directory.
2022-01-21
Easy minimal Ubuntu VM on any OS
Normally we use LXC/LXD, KVM, QEMU, Docker, Vagrant, VirtualBox, VMWare or any other virtualization and containerization software to spawn a VM-like instance locally. Today we're gonna try multipass, a tool to spawn and orchestrate Ubuntu VMs. To install multipass, it's as easy as running these commands:
snap install multipass
ls -al /var/snap/multipass/common/multipass_socket
snap connect multipass:libvirt # if error: ensure libvirt is installed and running
snap info multipass
To spawn a VM on Ubuntu (for other OSes, see the link above), we can run:
multipass find
Image Aliases Version Description
...
18.04 bionic 20220104 Ubuntu 18.04 LTS
20.04 focal,lts 20220118 Ubuntu 20.04 LTS
21.10 impish 20220118 Ubuntu 21.10
daily:22.04 devel,jammy 20220114 Ubuntu 22.04 LTS
...
minikube latest minikube is local Kubernetes
multipass launch --name groovy-lagomorph lts
# 20.04 --cpus 1 --disk 5G --mem 1G
multipass list
Name State IPv4 Image
groovy-lagomorph Running 10.204.28.99 Ubuntu 20.04 LTS
multipass info --all
Name: groovy-lagomorph
State: Running
IPv4: 10.204.28.99
Release: Ubuntu 20.04.3 LTS
Image hash: e1264d4cca6c (Ubuntu 20.04 LTS)
Load: 0.00 0.00 0.00
Disk usage: 1.3G out of 4.7G
Memory usage: 134.2M out of 976.8M
Mounts: --
To run shell inside newly spawned VM, we can run:
multipass shell groovy-lagomorph
multipass exec groovy-lagomorph -- bash
If you need to simulate ssh, according to this issue you can either:
sudo ssh -i /var/snap/multipass/common/data/multipassd/ssh-keys/id_rsa ubuntu@10.204.28.99
# or add ssh key before launch on cloud-init.yaml
ssh_authorized_keys:
- <your_ssh_key>
# or copy ssh key manually after launch
sudo ssh-copy-id -f -o 'IdentityFile=/var/snap/multipass/common/data/multipassd/ssh-keys/id_rsa' -i ~/.ssh/id_rsa.pub ubuntu@10.204.28.99
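For the cloud-init option above, the file is passed at launch time, something like this (the file name is just an example):
multipass launch --name groovy-lagomorph --cloud-init cloud-init.yaml lts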
To stop/start/delete the VM:
multipass stop groovy-lagomorph
multipass start groovy-lagomorph
multipass delete groovy-lagomorph
multipass purge
2021-12-18
Coolest PaaS/IaaS I've ever used: Jelastic
- Can autoscale out (like AWS ELB/ECS, GCR, ACS, etc) and do auto-clustering (as easy as CloudSQL or AWS RDS/Aurora, but can be automatic)
- Can autoscale up '__') without downtime, it only took 1 second to scale up from 1 core 640MB to 16 core 32GB (seems like they only change the container's resource quota limit) and you can see the change directly without a restart
- Can deploy a VPS on the same cluster/network (for my databases, since I don't use "standard/popular" databases) and it's super cheap (it only takes $3.9 per month to deploy a VPS with 1 static IP, and it can autoscale up), you only pay for what you utilize (CPU and RAM usage), not charged 100% while the server is up unlike other VPS providers
- The UI doesn't suck XD you can WebSSH, normal SSH (as long as it has a real IP), easy SSL setup, super easy to change config; the lacking part about Jelastic is probably a config-file/gitops-based setup (for working with multiple members in the future), at least there's an API and CLI to create and modify environments, not sure if there's auditing available (haven't checked yet).
- Can also deploy automatically from git (checked every N minutes) or a CI pipeline or using the CLI.
- Easy to move (live migration) to different providers or change ownership of a cluster, and if that's not enabled, at least there's no vendor lock-in; you can also manually export and import environments (for example copying a staging setup to production with a similar architecture, just a different deployment branch and scaling strategy).
- It's quite expensive if you utilize 100% (around $339 if you use ToggleBox for the specs above), for comparison:
  - Contabo's cheapest highest-spec VPS (9 core, 60GB RAM, 1.5TB SSD) with unmetered bandwidth only costs $55-ish per month (not apples-to-apples since it's a different spec and performance, also this is what you pay per month regardless of your utilization)
  - a similar-spec GCE n1-custom-16-32768 (16 core, 32GB, 200GB SSD) non-committed costs $525 excluding bandwidth
  - a similar-spec AWS EC2 a1.xlarge (16 core, 32GB RAM, 200GB gp2 SSD) on-demand only costs $317 excluding bandwidth
  - a similar-spec Azure F16s (16 core, 32GB RAM, 256GB SSD) pay-as-you-go costs $634 excluding bandwidth
  - the cheapest OVH in SG (8 core, 64GB RAM, 400GB SSD) only costs $135 with unmetered 200Mbps bandwidth
- Some providers have different "free" tiers, for example ToggleBox gives free 2GB bandwidth per hour (GCR only gives free 1GB per month XD), some other providers give a free static IP, others give free 10GB disk usage per hour, etc.
- The license might be pricey if you install it on your own cluster instead of using an already provided one (eg. DewaCloud or CloudKilat for the Indonesia region, ToggleBox for the US region, etc), but they have a profit sharing model if you are a reseller (have your own VPS and rent it out).
- The billing is hourly (so you will always be billed at minimum 1 cloudlet -- the specs of 1 cloudlet can vary per provider), compared to for example GCR that uses seconds as the minimum billing resolution (vCPU, GB RAM, requests, and bandwidth).
2021-08-04
Dockerfile Template (React, Express, Vue, Nest, Angular, GoFiber, Svelte, Django, Laravel, ASP.NET Core, Kotlin, Deno)
ReactJS
FROM node:15.4 as build1
COPY ./nginx.conf /etc/nginx/nginx.conf
To build it, use docker build -t react1 .
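The template above is only a fragment; a minimal multi-stage sketch for the React case (assuming a standard create-react-app layout, file and image names here are assumptions) would look something like this:
FROM node:15.4 as build1
WORKDIR /app1
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

FROM nginx:stable-alpine
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY --from=build1 /app1/build /usr/share/nginx/html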
ExpressJS
VueJS
FROM node:15.4 as build1
COPY ./nginx.conf /etc/nginx/nginx.conf
NestJS
FROM node:15.4 as build1
COPY package.json .
COPY --from=build1 /app1/dist ./dist
AngularJS
COPY ./nginx.conf /etc/nginx/nginx.conf
Fiber (Golang)
Svelte
COPY ./nginx.conf /etc/nginx/nginx.conf
Django
Laravel
ASP.NET Core
FROM mcr.microsoft.com/dotnet/aspnet
Kotlin
Deno
Deployment
2021-01-26
GOPS: Trace your Golang service with ease
import "github.com/google/gops/agent"
If you don't put those lines, you can still use gops, but limited to getting the list of Go programs running on your computer/server with limited statistics information, using these commands:
$ go get -u -v github.com/google/gops
$ gops   # list all Go processes running on this machine
# if the binary compiled with GOPS agent:
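# (examples of the agent-backed subcommands, a non-exhaustive sketch)
$ gops stats <pid>      # goroutine count, GC pauses, etc.
$ gops memstats <pid>   # dump runtime.MemStats
$ gops stack <pid>      # dump all goroutine stacks
$ gops trace <pid>      # run the execution tracer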
2020-10-16
Cleanup git and docker Disk Usage
Or if you have time you can use aggressive GC, like this:
git gc --aggressive
Or if you do not need any old history, you can clone then replace (shallow clone into a fresh directory and swap it in), like this:
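# a rough sketch; the repository URL and directory names are just examples
git clone --depth 1 https://github.com/your/repo.git repo-fresh
mv repo repo-old && mv repo-fresh repo
# delete repo-old once you're sure nothing is missing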
Next you can check docker's disk usage and reclaim space using these commands:
docker system df
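# then actually reclaim: removes stopped containers, dangling images,
# unused networks and build cache (add --volumes to also remove unused volumes)
docker system prune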
For more disk usage analysis you can use baobab for linux or windirstat on windows.
2019-07-24
Expose LXC/LXD Container Ports to Public
sudo apt install lxc lxd libvirt-bin zfsutils-linux
sudo lxd init
# there would be questions to be answered like these:
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=100GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]: 127.0.0.1
Port to bind LXD to [default=8443]:
Trust password for new clients:
Again:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
# cache an image and run one container, but this will only be shown by lxc-ls
sudo lxc-create -t download -n container1 -- --dist ubuntu --release bionic --arch amd64
sudo lxc-start --name container1 --daemon
sudo lxc-info --name container1
sudo lxc-stop --name container1
sudo lxc-destroy --name container1
# or run one container
lxc launch ubuntu:18.04 container1
# run command inside, enable ssh with password, change the root password
lxc exec container1 bash
echo '
PermitRootLogin yes
PasswordAuthentication yes
' > /etc/ssh/sshd_config
systemctl restart ssh
passwd
Then you'll need to expose (or port forward) from outside to your container:
# get ip from your container
lxc list
+------------+---------+-----------------------+------------+-----------+
| NAME | STATE | IPV4 | TYPE | SNAPSHOTS |
+------------+---------+-----------------------+------------+-----------+
| container1 | RUNNING | 10.123.126.200 (eth0) | PERSISTENT | 0 |
+------------+---------+-----------------------+------------+-----------+
# forward real port 2200 to container's port 22 and vice versa
iptables -A FORWARD -i eth0 -j DROP
iptables -A FORWARD -i lxdbr0 -m state --state NEW,INVALID -j DROP
iptables -A FORWARD -i eth0 -d 10.123.126.200 -p tcp --dport 2200 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 2200 -j DNAT --to 10.123.126.200:22
You can test whether the port forwarding and ssh work using this command from another computer:
ssh -o PreferredAuthentications=keyboard-interactive,password -o PubkeyAuthentication=no root@thePublicIpAddress -p 2200
If you need to expose more ports, for example container's 80 to real's 8080 for example, you can add the rules like this:
iptables -A FORWARD -i eth0 -d 10.123.126.200 -p tcp --dport 8080 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to 10.123.126.200:80
But for this case, I think it's better to use a reverse proxy instead.
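For example, a minimal sketch of an nginx reverse proxy on the host instead of the DNAT rule (the upstream IP follows the lxc list output above):
server {
  listen 8080;
  location / {
    proxy_pass http://10.123.126.200:80;
  }
}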
Here's the performance difference between the baremetal machine and LXC. Baremetal:
CPU model: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Number of cores: 8
CPU frequency: 2199.996 MHz
Total amount of RAM: 30151 MB
Total amount of swap: MB
System uptime: 147 days, 20:48,
I/O speed: 132 MB/s
Bzip 25MB: 8.01s
Download 100MB file: 69.2MB/s
I/O speed(1st run) : 127 MB/s
I/O speed(2nd run) : 107 MB/s
I/O speed(3rd run) : 107 MB/s
Average I/O speed : 113.7 MB/s
LXC (I/O looks faster, probably because writes are not yet committed to disk?):
CPU model: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Number of cores: 8
CPU frequency: 2199.996 MHz
Total amount of RAM: 30151 MB
Total amount of swap: MB
System uptime: 20 min,
I/O speed: 451 MB/s
Bzip 25MB: 9.40s
Download 100MB file: 63.7MB/s
I/O speed(1st run) : 925 MB/s
I/O speed(2nd run) : 1.2 GB/s
I/O speed(3rd run) : 956 MB/s
Average I/O speed : 1036.6 MB/s
2015-02-26
Docker: The Software Container
# install stable version
$ yaourt --needed --noconfirm -S --force docker
# or latest git version
$ yaourt --needed --noconfirm -S --force docker-git
# start and enable the service
$ sudo systemctl enable docker
$ sudo systemctl start docker
# allow your user to access docker, refresh session
$ sudo gpasswd -a `whoami` docker
$ newgrp docker
# show information
$ docker info
Containers: 0
Images: 0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 0
Execution Driver: native-0.2
Kernel Version: 3.18.7-1-ARCH
Operating System: ArchLinux
CPUs: 4
Total Memory: 15.49 GiB
Name: zzz
ID: 5SDJ:LPNU:UAR4:ULRJ:REZF:4V3W:6ES6:KJTW:DETH:765Y:XP4I:IZZZ
WARNING: No swap limit support
$ docker pull l3iggs/archlinux
$ docker pull kampka/archlinux
$ docker pull logankoester/archlinux
Pulling repository logankoester/archlinux
88d601db3077: Download complete
511136ea3c5a: Download complete
9b0516337e5a: Download complete
dce0559daa1b: Download complete
ff4d9d90bf08: Download complete
7207641fe7f8: Download complete
Status: Downloaded newer image for logankoester/archlinux:latest
$ docker run 88d601db3077 ls -al
...
$ docker run -t -i logankoester/archlinux /bin/bash
exit
$ docker run logankoester/archlinux pacman -Rdd --noconfirm dirmngr
Packages (1): dirmngr-1.1.1-2
Total Removed Size: 0.49 MiB
:: Do you want to remove these packages? [Y/n]
removing dirmngr...
$ docker run logankoester/archlinux pacman -Syu --noconfirm
:: Synchronizing package databases...
downloading core.db...
downloading extra.db...
downloading community.db...
:: Starting full system upgrade...
:: Replace dirmngr with core/gnupg? [Y/n]
:: Replace lzo2 with core/lzo? [Y/n]
resolving dependencies...
looking for inter-conflicts...
Packages (77): archlinux-keyring-20150212-1 bash-4.3.033-1 ca-certificates-20140923-9 ca-certificates-cacert-20140824-2 ca-certificates-mozilla-3.17.4-1 ca-certificates-utils-20140923-9 coreutils-8.23-1 cracklib-2.9.1-1 curl-7.40.0-1 db-5.3.28-2 dbus-1.8.16-2 device-mapper-2.02.116-1 dhcpcd-6.7.1-1 dirmngr-1.1.1-2 [removal] e2fsprogs-1.42.12-1 expat-2.1.0-4 file-5.22-1 filesystem-2015.02-1 gcc-libs-4.9.2-3 gettext-0.19.4-1 glib2-2.42.1-1 glibc-2.21-2 gmp-6.0.0-2 gnupg-2.1.2-1 gnutls-3.3.12-1 gpgme-1.5.3-1 grep-2.21-1 hwids-20150129-1 inetutils-1.9.2-2 iproute2-3.18.0-1 kbd-2.0.2-1 kmod-19-1 krb5-1.13.1-1 less-471-1 libarchive-3.1.2-8 libassuan-2.1.3-1 libcap-2.24-2 libdbus-1.8.16-2 libffi-3.2.1-1 libgcrypt-1.6.2-1 libgpg-error-1.18-1 libidn-1.29-1 libksba-1.3.2-1 libldap-2.4.40-2 libsystemd-218-2 libtasn1-4.2-1 libtirpc-0.2.5-1 libunistring-0.9.4-1 libutil-linux-2.25.2-1 linux-api-headers-3.18.5-1 logrotate-3.8.8-2 lz4-127-1 lzo-2.09-1 lzo2-2.08-1 [removal] mpfr-3.1.2.p11-1 ncurses-5.9-7 netctl-1.10-1 nettle-2.7.1-1 npth-1.1-1 openresolv-3.6.1-1 openssl-1.0.2-1 p11-kit-0.22.1-3 pacman-4.2.1-1 pacman-mirrorlist-20150205-1 pcre-8.36-2 perl-5.20.2-1 pinentry-0.9.0-1 procps-ng-3.3.10-1 shadow-4.2.1-2 systemd-218-2 systemd-sysvcompat-218-2 tar-1.28-1 texinfo-5.2-3 tzdata-2015a-1 usbutils-008-1 util-linux-2.25.2-1 xz-5.2.0-1
Total Download Size: 62.40 MiB
Total Installed Size: 264.78 MiB
Net Upgrade Size: 26.52 MiB
:: Proceed with installation? [Y/n]
:: Retrieving packages ...
...
$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6d67ee44e7f5 logankoester/archlinux:latest "pacman -Syu --nocon 11 minutes ago Exited (0) 2 minutes ago stoic_meitner
# docker commit ID your_username/your_repository
$ docker commit 6d67ee44e7f5 kokizzu/archlinux
5ab1562ea89959c54b8da4462abf086c91434524ae741769dab869b8263d7c1b
To check more information about the current image, use docker inspect followed by the image ID:
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
kokizzu/archlinux latest 5ab1562ea899 28 seconds ago 640.6 MB
logankoester/archlinux latest 88d601db3077 24 hours ago 282.9 MB
...
# docker inspect ID
$ docker inspect 5ab1562ea899
After you verify that your image is working, you can share it with others (create a repository first on your dashboard), for example:
# docker push your_username/your_repository
$ docker push kokizzu/archlinux
You can find more information on the cheatsheet and the documentation, and if you're tempted to install sshd read this first.