
2022-04-05

Start/restart Golang or any other binary program automatically on boot/crash

There are several alternatives to make a program start on boot on Linux; the usual ways are:

1. systemd: it can ensure that dependencies are started before your service, and can also limit its CPU/RAM usage. Generate a template using this website or use kardianos/service

2. PM2 (requires NodeJS), or PMG

3. docker-compose (requires Docker, but you can skip the build part and just COPY the binary directly in the Dockerfile, which can be deployed using rsync); just set the restart property in docker-compose and it will restart when the computer boots. The bad part: you cannot limit CPU/RAM unless you use Docker Swarm, but you can use docker directly to set limits and the --restart flag (see the sketch after this list).

4. lxc/lxd or multipass or other VM/lightweight VM (you still need systemd inside it XD, but at least it won't ruin your host); you can rsync directly to the container to redeploy, for example using overseer or tableflip, though you must add a reverse proxy, NAT, or proper routing/IP forwarding if you want it to be accessible from outside

5. supervisord (Python) or ochinchina/supervisord (Golang), tutorial here

6. create one daemon manager with systemd/docker-compose, then spawn the other services using goproc or pioz/god

7. monit, which can monitor a program and ensure it is started/not dead

8. nomad (actually this one is a deployment tool, but it can also manage workloads)

9. kubernetes XD overkill

10. immortal.run, a supervisor; this one actually uses systemd

11. other containerization/VM workload orchestrators/managers that are usually already provided by the hoster/PaaS provider (Amazon ECS/Beanstalk/Fargate, Google AppEngine, Heroku, Jelastic, etc.)
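
For the docker option above, a minimal sketch of running a prebuilt binary image directly with a restart policy and resource limits (the image/container name xxx and the limit values are just placeholders):

# run a prebuilt image containing your binary;
# --restart unless-stopped also restarts it on boot (if the docker daemon itself is enabled)
docker run -d --name xxx \
  --restart unless-stopped \
  --cpus 1 --memory 512m \
  xxx:latest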


This is the systemd unit that I usually use (you need to create a user named "web" and install "unbuffer", which is part of the expect package):

$ cat /usr/lib/systemd/system/xxx.service
[Unit]
Description=xxx
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
Type=simple
Restart=on-failure
User=web
Group=users
WorkingDirectory=/home/web/xxx
ExecStart=/home/web/xxx/run_production.sh
ExecStop=/usr/bin/killall xxx
LimitNOFILE=2097152
LimitNPROC=65536
ProtectSystem=full
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target

$ cat /home/web/xxx/run_production.sh
#!/usr/bin/env bash

# log stdout into a timestamped file while also echoing it to the console
mkdir -p "$(pwd)/logs"
ofile="$(pwd)/logs/access_$(date +%F_%H%M%S).log"
echo "Logging into: $ofile"
unbuffer time ./xxx | tee "$ofile"
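
Assuming the two files above are in place, installing and enabling the service looks something like this (if you also want systemd to limit resources as mentioned in point 1, add e.g. CPUQuota=50% and MemoryMax=1G under [Service]):

sudo useradd -m web                # create the "web" user
sudo apt install expect            # provides unbuffer
sudo systemctl daemon-reload       # pick up the new unit file
sudo systemctl enable --now xxx    # start now and on every boot
journalctl -u xxx -f               # follow the service output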



2022-01-21

Easy minimal Ubuntu VM on any OS

Normally we use LXC/LXD, KVM, QEMU, Docker, Vagrant, VirtualBox, VMware, or other virtualization and containerization software to spawn a VM-like instance locally. Today we're gonna try multipass, a tool to spawn and orchestrate Ubuntu VMs. To install multipass, it's as easy as running these commands:

snap install multipass
ls -al /var/snap/multipass/common/multipass_socket
snap connect multipass:libvirt # if error: ensure libvirt is installed and running
snap info multipass

To spawn a VM on Ubuntu (for other OSes, see the link above), we can run:

multipass find

Image        Aliases      Version   Description
...
18.04        bionic       20220104  Ubuntu 18.04 LTS
20.04        focal,lts    20220118  Ubuntu 20.04 LTS
21.10        impish       20220118  Ubuntu 21.10
daily:22.04  devel,jammy  20220114  Ubuntu 22.04 LTS
...
minikube                  latest    minikube is local Kubernetes

multipass launch --name groovy-lagomorph lts
# or pick an image and resources explicitly:
# multipass launch 20.04 --name groovy-lagomorph --cpus 1 --disk 5G --mem 1G

multipass list
Name                    State             IPv4             Image
groovy-lagomorph        Running           10.204.28.99     Ubuntu 20.04 LTS

multipass info --all
Name:           groovy-lagomorph
State:          Running
IPv4:           10.204.28.99
Release:        Ubuntu 20.04.3 LTS
Image hash:     e1264d4cca6c (Ubuntu 20.04 LTS)
Load:           0.00 0.00 0.00
Disk usage:     1.3G out of 4.7G
Memory usage:   134.2M out of 976.8M
Mounts:         --


To run a shell inside the newly spawned VM, we can run:

multipass shell groovy-lagomorph

multipass exec groovy-lagomorph -- bash
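
You can also copy files into or out of the VM with multipass transfer (the paths here are just examples):

multipass transfer ./myapp groovy-lagomorph:/home/ubuntu/myapp
multipass transfer groovy-lagomorph:/home/ubuntu/out.log ./out.log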

If you need to simulate ssh, according to this issue you can either:

sudo ssh -i /var/snap/multipass/common/data/multipassd/ssh-keys/id_rsa ubuntu@10.204.28.99

# or add ssh key before launch on cloud-init.yaml
ssh_authorized_keys:
  - <your_ssh_key>

# or copy ssh key manually after launch
sudo ssh-copy-id -f -o 'IdentityFile=/var/snap/multipass/common/data/multipassd/ssh-keys/id_rsa' -i ~/.ssh/id_rsa.pub ubuntu@10.204.28.99
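
For the cloud-init route, a minimal sketch for a fresh instance (the instance name and the key value are placeholders, substitute your own public key):

cat > cloud-init.yaml <<'EOF'
ssh_authorized_keys:
  - ssh-ed25519 AAAA... you@yourhost
EOF
multipass launch lts --name with-my-key --cloud-init cloud-init.yaml
ssh ubuntu@<instance_ip>   # get the IP from multipass list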

To stop/start/delete the VM:

multipass stop groovy-lagomorph
multipass start groovy-lagomorph
multipass delete groovy-lagomorph
multipass purge

What technology is used by multipass? It's QEMU, but it may be different on other platforms (it can run on Windows and macOS too).

2019-07-24

Expose LXC/LXD Container Ports to Public

LXC/LXD is lightweight OS-level virtualization on Linux, much like OpenVZ. It was used by early versions of Docker. The benefit of using LXC/LXD is when you need virtualization but also need fast startup and near-baremetal performance (especially compared to full virtualization like KVM or VirtualBox). The difference between Docker and LXC is the level they target: Docker is more for application deployment, whereas LXC is machine-level. LXD adds a REST API on top of LXC. Another main difference between LXC and Docker is that Docker has a copy-on-write file system built in. To start using LXD, just install and run:

sudo apt install lxc lxd libvirt-bin zfsutils-linux
sudo lxd init

# there would be questions to be answered like these:
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, lvm, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing block device? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=100GB]:    
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like LXD to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]: 127.0.0.1
Port to bind LXD to [default=8443]: 
Trust password for new clients: 
Again: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

# cache an image and run one container, but this will only show up in lxc-ls
sudo lxc-create -t download -n container1 -- --dist ubuntu --release bionic --arch amd64
sudo lxc-start --name container1 --daemon
sudo lxc-info --name container1
sudo lxc-stop --name container1
sudo lxc-destroy --name container1

# or run one container
lxc launch ubuntu:18.04 container1


# run commands inside: enable ssh root login with password, then change the root password
lxc exec container1 bash
echo '
PermitRootLogin yes
PasswordAuthentication yes
' >> /etc/ssh/sshd_config  # append instead of overwriting the whole config
systemctl restart ssh
passwd

Then you'll need to expose (port forward) ports from outside to your container:

# get ip from your container
lxc list
+------------+---------+-----------------------+------------+-----------+
|    NAME    |  STATE  |         IPV4          |    TYPE    | SNAPSHOTS |
+------------+---------+-----------------------+------------+-----------+
| container1 | RUNNING | 10.123.126.200 (eth0) | PERSISTENT | 0         |
+------------+---------+-----------------------+------------+-----------+

# forward host port 2200 to the container's port 22
# note: DNAT happens in PREROUTING, before FORWARD, so the FORWARD rule
# must match the translated port (22), and must come before the DROP rules
iptables -t nat -A PREROUTING -p tcp --dport 2200 -j DNAT --to 10.123.126.200:22
iptables -A FORWARD -i eth0 -d 10.123.126.200 -p tcp --dport 22 -j ACCEPT
iptables -A FORWARD -i eth0 -j DROP
iptables -A FORWARD -i lxdbr0 -m state --state NEW,INVALID -j DROP

You can test whether the port forwarding and ssh work using this command from another computer:

ssh -o PreferredAuthentications=keyboard-interactive,password -o PubkeyAuthentication=no root@thePublicIpAddress -p 2200

If you need to expose more ports, for example the container's port 80 as the host's port 8080, you can add rules like this:

iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to 10.123.126.200:80
# -I inserts at the top of FORWARD so it's matched before the DROP rules; match the translated port (80)
iptables -I FORWARD -i eth0 -d 10.123.126.200 -p tcp --dport 80 -j ACCEPT
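
Note that iptables rules added like this are lost on reboot; on Debian/Ubuntu one way to persist them is:

sudo apt install iptables-persistent   # offers to save the current rules during install
sudo netfilter-persistent save         # re-save after any later changes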

But for this case, I think it's better to use a reverse proxy instead.
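
Alternatively, if your LXD version supports it, LXD can forward ports natively with a proxy device instead of iptables; a sketch (the device name ssh2200 is arbitrary):

lxc config device add container1 ssh2200 proxy listen=tcp:0.0.0.0:2200 connect=tcp:127.0.0.1:22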

Here's the performance difference between a baremetal machine and LXC.

Baremetal:

CPU model:  Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz 
Number of cores: 8
CPU frequency:  2199.996 MHz
Total amount of RAM: 30151 MB
Total amount of swap:  MB
System uptime:   147 days, 20:48,    
I/O speed:  132 MB/s
Bzip 25MB: 8.01s
Download 100MB file: 69.2MB/s


I/O speed(1st run)   : 127 MB/s
I/O speed(2nd run)   : 107 MB/s
I/O speed(3rd run)   : 107 MB/s
Average I/O speed    : 113.7 MB/s

LXC (the I/O numbers are probably inflated because writes are not yet committed to disk?):

CPU model:  Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz 
Number of cores: 8
CPU frequency:  2199.996 MHz
Total amount of RAM: 30151 MB
Total amount of swap:  MB
System uptime:   20 min,    
I/O speed:  451 MB/s
Bzip 25MB: 9.40s
Download 100MB file: 63.7MB/s


I/O speed(1st run)   : 925 MB/s
I/O speed(2nd run)   : 1.2 GB/s
I/O speed(3rd run)   : 956 MB/s
Average I/O speed    : 1036.6 MB/s

2016-10-01

LXC Web Panel

As you (probably) already know, LXC (Linux Containers) or OpenVZ, both operating-system-level virtualization, are really faster than hardware virtualization, see the comparison. For those who hate the CLI, you can use a web interface called LXC Web Panel (for LXC 0.7 to 0.9, or the newer fork for 1.0+ here) to manage your containers:

wget https://lxc-webpanel.github.io/tools/install.sh -O - | sudo bash

This software only works on Ubuntu 12.04 or later. Despite its performance, there are of course limitations, such as: the guest can only use the host's OS and architecture. You can find more info on their website or this blog post.




So why LXC instead of Docker or virtualization? Because it's simpler :3 yes, they are different kinds of animal; don't forget to check LXD and other alternatives too.