
How to create your own cloud-init alpine image for Proxmox

Recently I converted one of my old PCs into a Proxmox server. In case you don’t know, Proxmox is essentially a virtual machine management environment. I didn’t really know if I’d find it useful; I just wanted to experiment a bit and maybe learn about Terraform. Since then, I’ve been finding my virtual machine server quite useful. I can run my own small Kubernetes cluster (k3s), a Docker registry, a VPN gateway… all sorts of things. Basically, I can have whatever disposable infrastructure I want to test the technologies I’m interested in. So, as much as I encourage you to try Proxmox out as well, that won’t be the main topic of this post.

When creating a new VM, it’s useful to have a base image pre-populated with the user accounts (and their credentials) you want. I used to do that mostly with VM templates which I just cloned, but there’s a better way to do it - cloud-init.

Disclaimer

The big names in the Linux world (Ubuntu, Debian, Arch, Alpine) all provide cloud-init enabled images which you should definitely use instead of preparing your own. Additionally, these are certified and safe to use in the cloud. But again, it’s not the destination that matters but the journey, so, with that in mind, let’s learn how to prepare a cloud-init enabled Alpine image for Proxmox.

Basic setup

I’ll be using alpine-3.18.4 standard x86_64 as my base image. Create a very basic VM (mine has 2G of RAM and 4G of disk space) - during creation, enable the QEMU guest agent (not a requirement, but nice to have).

Boot up the machine and install Alpine with the setup-alpine script.

During setup:

  • hostname - doesn’t matter, it’ll be changed by cloud-init,
  • don’t add any additional users,
  • enable ssh,
  • disable root ssh login,
  • format the drive as lvmsys - with LVM it’s easier to customise the filesystem later on.

Reboot the system once the setup is complete.

Required packages

Log back in (as root, since this is the only account that exists at the moment).

Enable the “community” repository in /etc/apk/repositories and run apk update.
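The community entry is usually already present but commented out, so a one-line sed is enough to flip it. The sketch below runs against a local copy so you can see the effect; the mirror URL is an example from my setup and may differ on yours.

```shell
# Example of what /etc/apk/repositories may look like (mirror URL varies)
cat > repositories.example <<'EOF'
https://dl-cdn.alpinelinux.org/alpine/v3.18/main
#https://dl-cdn.alpinelinux.org/alpine/v3.18/community
EOF

# Strip the leading '#' from the community line, in place
sed -i '/community/s/^#//' repositories.example

cat repositories.example
```

On the real system this is just `sed -i '/community/s/^#//' /etc/apk/repositories` followed by `apk update`.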

Install the following packages first:

apk add \
    util-linux \
    e2fsprogs-extra \
    qemu-guest-agent \
    sudo

Enable the qemu-guest-agent service.

rc-update add qemu-guest-agent

To clarify: util-linux provides a non-BusyBox version of mount; without it, cloud-init won’t be able to mount the configuration image. e2fsprogs-extra provides resize2fs, which cloud-init requires as well.

Now, install cloud-init and py3-netifaces - the latter is a cloud-init dependency, but it seems there’s a bug in the community-maintained package that omits it.

apk add \
    py3-netifaces \
    cloud-init

Now, configure sudo using visudo and uncomment the rule for the wheel group.
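Once uncommented, the rule in question typically looks like this (it lets wheel members run any command via sudo; the exact spelling may differ slightly between sudo versions):

```
%wheel ALL=(ALL:ALL) ALL
```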

As a last step, configure /etc/cloud/cloud.cfg. Specifically, in datasource_list, remove all sources but NoCloud.
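After that change, the relevant line in /etc/cloud/cloud.cfg should look something like this (a sketch - the rest of the file stays as shipped):

```yaml
datasource_list: [ NoCloud ]
```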

Once you do that, it’s time to disable the root password and run setup-cloud-init. After that there’s no turning back! So, if there’s anything else you want to bake into the image, do it now.

passwd -d root
setup-cloud-init
poweroff

After you power off, don’t start the machine again!

Add cloud-init

cloud-init works by reading a datasource. The simplest one is a configuration disc which Proxmox will generate for us and attach to the machine. Add new hardware to the VM.

/ci/cloud-init-device.png

This will be our cloud-init CD-ROM. The type of the device (IDE, SCSI) and its number don’t matter; cloud-init looks up the drive by its filesystem label, which must be set to CIDATA.

Once you do that, convert the VM to a template.
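For reference, both of these steps can also be done from the Proxmox host’s shell with qm; the VM ID 9000 and the local-lvm storage below are assumptions from my setup.

```shell
# Attach a cloud-init drive; Proxmox regenerates its contents on each boot
qm set 9000 --ide2 local-lvm:cloudinit

# Convert the VM into a template
qm template 9000
```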

/ci/template-conversion.png

Using the template

Having the above template, to provision a new bare-bones machine I just perform a full clone of the template and configure the cloud-init details in Proxmox.
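The same can be scripted with qm on the Proxmox host; the IDs, the VM name, and the key path below are examples, not part of the original setup.

```shell
# Full clone of template 9000 into a new VM 101
qm clone 9000 101 --full --name alpine-clone

# Fill in the cloud-init details: user, SSH public key, DHCP networking
qm set 101 --ciuser twdev --sshkeys ~/.ssh/id_rsa.pub --ipconfig0 ip=dhcp

qm start 101
```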

/ci/cloud-init-setup.png

Proxmox generates an iso9660 image containing the configuration. During first boot, cloud-init service mounts that image, creates the user I specified and performs basic machine setup.

So, once the machine is up and running, I can log in straight away with my SSH public key (sudo works out of the box as well):

$ ssh -l twdev 192.168.0.44
Welcome to Alpine!

The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <https://wiki.alpinelinux.org/>.

You can setup the system with the command: setup-alpine

You may change this message by editing /etc/motd.

alpine-clone:~$ id
uid=1000(twdev) gid=1001(twdev) groups=4(adm),10(wheel),1000(sudo),1001(twdev)
alpine-clone:~$ sudo su
/home/twdev # id
uid=0(root) gid=0(root) groups=0(root),0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
/home/twdev # 

Inspecting CIDATA disc image

In case you’re interested in the contents of the cloud-init configuration, it’s easy to inspect it:

alpine-clone:~$ mkdir /tmp/cidata
alpine-clone:~$ sudo mount -t iso9660 /dev/disk/by-label/cidata /tmp/cidata
mount: /tmp/cidata: WARNING: source write-protected, mounted read-only.
alpine-clone:~$ ls -l /tmp/cidata/
total 2
-rw-r--r--    1 root     root            54 Nov 14 11:38 meta-data
-rw-r--r--    1 root     root           221 Nov 14 11:38 network-config
-rw-r--r--    1 root     root           644 Nov 14 11:38 user-data
-rw-r--r--    1 root     root             0 Nov 14 11:38 vendor-data
alpine-clone:~$ cat /tmp/cidata/user-data 
#cloud-config
hostname: alpine-clone
manage_etc_hosts: true
fqdn: alpine-clone
user: twdev
password: <... REDACTED ...>
ssh_authorized_keys:
  - ssh-rsa < ... REDACTED ...>
chpasswd:
  expire: False
users:
  - default
package_upgrade: true