ZFS: from array to array

I own a few MicroServer Gen8 machines built by HP.
Each of them is a beautiful little machine: the design is compact and modular, with easy access to the basic components. Its hardware configuration is also decent and sufficient for a small office. It is definitely a better choice than a NAS, as it offers more flexibility in terms of software and operating system.
In one of the offices that use a MicroServer, I needed to grow the disk array.
The server itself has a built-in RAID controller, but in reality it is a software array that only works with the operating system built in Redmond. On the other hand, it is much cheaper than a server with a real hardware controller.

Frankly, a hardware controller can cost about as much as the whole MicroServer, so it is really smart to use a ZFS mirror instead.

I prefer Linux, so my choice is Proxmox, a Debian-based virtualisation environment with KVM and LXC (Linux Containers). I defined an array in the so-called “hardware” RAID, but only for booting purposes. Proxmox ignores this array, and I configured a mirror in the ZFS filesystem instead.
What I needed was to replace the disks with bigger ones, which is very easy with ZFS. In brief: put the new disks into the empty slots (four HDD bays are another advantage of the MicroServer), create a new mirror, copy all data from the old mirror to the new one, and make the new mirror bootable.
The original mirror is rpool and consists of two drives: /dev/sda and /dev/sdb. I can list it with:

zpool status -v

The new drives are /dev/sdc and /dev/sdd.
With the gdisk tool I created partitions on the new drives based on the old configuration:
Number  Start (sector)    End (sector)    Size        Code  Name
   1              2048            4061    1007.0 KiB  EF02  BIOS boot partition
   2              4096      5860500366    2.7 TiB     BF01  Solaris /usr & Mac ZFS
   9        5860501504      5860533134    15.4 MiB    BF07  Solaris Reserved 1

As you can see, the first partition is the boot partition and the second partition holds all the data.
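
If you prefer to script this step instead of answering gdisk prompts, sgdisk (from the same gdisk package) can create the same layout non-interactively. A minimal sketch, assuming the sector numbers from the listing above and /dev/sdc as the target:

sgdisk --zap-all /dev/sdc
sgdisk -n 1:2048:4061 -t 1:EF02 -c 1:"BIOS boot partition" /dev/sdc
sgdisk -n 2:4096:5860500366 -t 2:BF01 -c 2:"Solaris /usr & Mac ZFS" /dev/sdc
sgdisk -n 9:5860501504:5860533134 -t 9:BF07 -c 9:"Solaris Reserved 1" /dev/sdc

The same commands can then be repeated for /dev/sdd.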
The new pool is made with a Bash script which I built based on the output of the following command:

zpool history rpool

$ cat ~/create_rpool2.sh
#!/bin/sh
# create new pool based on old_pool history
zpool create -f -o ashift=12 -o cachefile=none rpool2 mirror /dev/sdc2 /dev/sdd2
zfs create rpool2/ROOT
zfs create rpool2/ROOT/pve-1
zfs set atime=off rpool2
zfs set compression=lz4 rpool2
zfs create -V 4194304K -b 4K rpool2/swap
zfs set com.sun:auto-snapshot=false rpool2/swap
zfs set sync=always rpool2/swap
zfs set sync=disabled rpool2
zfs set sync=standard rpool2
Take care with the name of the new pool, which is rpool2 in my configuration.
What I will do next is take a snapshot of rpool. A snapshot is a frozen state of a filesystem. The -r option takes a recursive snapshot of all filesystems defined in the pool.
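
Before sending any data it is worth a quick sanity check that the new pool looks right (read-only commands, nothing gets modified here):

zpool status rpool2

zfs list -r rpool2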

zfs snapshot -r rpool@moving

Let’s check snapshots:

zfs list -t snapshot
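
After a recursive snapshot every dataset in the pool gets a matching @moving snapshot. With a layout like the one recreated above, the listing should contain names roughly like these (illustrative; your dataset list may differ):

zfs list -t snapshot -o name
# rpool@moving
# rpool/ROOT@moving
# rpool/ROOT/pve-1@moving
# rpool/swap@moving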

Now I send the data from the snapshot to the new pool rpool2:

zfs send -R rpool@moving | zfs receive -F rpool2

If you want to see how fast the data is transferred, you can pipe the stream through Pipe Viewer (pv). Install it with apt-get install pv and run the following command:

zfs send -R rpool@moving | pv | zfs receive -F rpool2
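
pv can also show a percentage and an ETA if it knows how much data to expect. A rough estimate can be taken from the pool's used space (a sketch; the stream size will not match the used space exactly, but it is close enough for a progress bar):

# feed pv an approximate stream size in bytes so it can show % and ETA
SIZE=$(zfs list -Hp -o used rpool)
zfs send -R rpool@moving | pv -s "$SIZE" | zfs receive -F rpool2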

Check it again:

zfs list -t snapshot

For the next step I prepared a Bash script to stop as many services as I could before taking another snapshot. This way I tried to avoid bigger changes in the filesystem. Then I will send the new snapshot as an increment, which will be very fast compared to the first transfer of data.
Here is the script:
$ cat ~/stop-services.sh
#!/bin/sh
# stop services before sending snapshot
systemctl stop watchdog-mux
systemctl stop systemd-timesyncd
systemctl stop spiceproxy
systemctl stop rrdcached
systemctl stop rpcbind
systemctl stop pvestatd
systemctl stop pveproxy
systemctl stop pvefw-logger
systemctl stop pvedaemon
systemctl stop pve-ha-lrm
systemctl stop pve-ha-crm
systemctl stop pve-firewall
systemctl stop pve-cluster
systemctl stop postfix
systemctl stop nfs-common
systemctl stop lxcfs
systemctl stop dbus
systemctl stop cron
systemctl stop cgmanager
systemctl stop open-iscsi
systemctl stop atd
systemctl stop ksmtuned
systemctl stop rsyslog
systemctl list-units --type=service --state=running
I ran the script:

./stop-services.sh

And made another snapshot:

zfs snapshot -r rpool@moving2

I sent the new snapshot incrementally:

zfs send -Ri rpool@moving rpool@moving2 | zfs receive -F rpool2
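
If you want to know up front how big the increment is, zfs send can do a dry run that only prints the estimated stream size:

zfs send -nv -Ri rpool@moving rpool@moving2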

I set a new mountpoint for the root filesystem and pointed the pool at the new boot filesystem:
zfs set mountpoint=/ rpool2/ROOT/pve-1

zpool set bootfs=rpool2/ROOT/pve-1 rpool2
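
A quick read-back confirms that both properties took effect:

zpool get bootfs rpool2

zfs get mountpoint rpool2/ROOT/pve-1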

I added a new entry to the file /etc/grub.d/40_custom (the content of this file is taken into consideration when I run the update-grub script):
menuentry 'Proxmox NEW' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-32125e6ecced17a2' {
        load_video
        insmod gzio
        insmod part_gpt
        insmod zfs
        set root='hd1,gpt2'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci0,gpt2 32125e6ecced17a2
        else
          search --no-floppy --fs-uuid --set=root 32125e6ecced17a2
        fi
        echo 'Loading Linux 4.2.2-1-pve ...'
        linux /ROOT/pve-1@/boot/vmlinuz-4.2.2-1-pve root=ZFS=rpool2/ROOT/pve-1 ro boot=zfs $bootfs root=ZFS=rpool2/ROOT/pve-1 boot=zfs quiet
        echo 'Loading initial ramdisk ...'
        initrd /ROOT/pve-1@/boot/initrd.img-4.2.2-1-pve
}
Note: I changed hd0 to hd1 (until I remove the old disks).

Note: I changed the pool name from rpool to the new one: rpool2.

Next, in /etc/default/grub I changed the following line:
GRUB_DEFAULT=0
This number tells Grub to boot from the first entry defined in /boot/grub/grub.cfg.
I changed it to:
GRUB_DEFAULT=6

as my new entry will soon be at position 6 (counting from 0) in the new /boot/grub/grub.cfg.

I wrote the new Grub config to disk:

update-grub
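
To double-check that GRUB_DEFAULT=6 really points at the new entry, the generated top-level menu entries can be listed with their positions (plain grep and nl, numbering from 0 just like Grub does):

grep "^menuentry" /boot/grub/grub.cfg | nl -v 0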

I crossed my fingers and rebooted.
I checked whether rpool2 was mounted as root:
df -h
mount
gdisk -l /dev/sdx
etc.
Everything was OK, so I installed Grub on the first drive of rpool2, in my case /dev/sdc:

grub-install /dev/sdc
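
Since rpool2 is a mirror, it is probably a good idea to install the boot loader on the second new drive too, so the machine can still boot if /dev/sdc ever fails (an extra step on top of what I did above):

grub-install /dev/sdd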

Reboot again.
The system still boots from the old drive but mounts the new drive as root.

What I did next was revert almost all the changes to the Grub configuration, remembering that the new ZFS pool is now rpool2, not rpool.

So, I removed the entry from /etc/grub.d/40_custom.
Next, in /etc/default/grub I changed the following line back:
GRUB_DEFAULT=6
to:

GRUB_DEFAULT=0

And I ran again:

update-grub

I listed the newly created /boot/grub/grub.cfg to check that all rpool entries had changed to rpool2.
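
Instead of reading the whole file, a quick grep for the root filesystem parameter shows which pool every entry points at:

grep -n "root=ZFS=" /boot/grub/grub.cfg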
Last reboot and I’m done.
I removed the old drives from the HDD bays.
P.S.
If the boot messages show:
Job dev-zvol-rpool-swap.device/start timed out.
you need to change /etc/fstab from:
/dev/zvol/rpool/swap none swap sw 0 0
to:

/dev/zvol/rpool2/swap none swap sw 0 0
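
Since it is a one-line change, sed can do it as well (assuming the swap line looks exactly like the one above):

sed -i 's|/dev/zvol/rpool/swap|/dev/zvol/rpool2/swap|' /etc/fstab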

Other problems could be:
cannot import 'rpool': one or more devices is currently unavailable
zfs-import-cache.service: main process exited, code=exited, status=1/FAILURE
Failed to start Import ZFS pools by cache file.

Unit zfs-import-cache.service entered failed state.

Generate a new cache with:

zpool set cachefile=/etc/zfs/zpool.cache rpool2
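
On a ZFS-root system the cache file is usually also copied into the initramfs, so if the error comes back after a reboot it may help to refresh the initramfs as well (standard Debian tooling):

update-initramfs -u -k all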

Reboot and check with:

journalctl -b
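
If the boot log is long, it can be narrowed down to errors only with journalctl's priority filter:

journalctl -b -p err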

You shouldn’t see any problems.