HOWTO: Migrate a UFS Root Filesystem to ZFS
Solaris 10 10/08 (u6) is due to be released within the next month or so (I don't have an exact date), and one of the great features to come with it is ZFS boot. You can already use ZFS boot on Nevada, and OpenSolaris defaults to ZFS, but this will be the first officially supported Solaris 10 release to have ZFS boot.
People have been waiting for this for a long time, and will naturally be eager to migrate their root filesystem from UFS to ZFS. This article details how you can do this using Live Upgrade, which allows you to perform the migration with minimal downtime while still having a safety net in case something goes wrong.
These instructions are aimed at users with systems ALREADY running Solaris 10 10/08 (update 6) or Nevada build 90 (snv_90) or later.
Create the Root zpool
The first thing you need to do is create your root zpool. It MUST exist before you can continue, so create and verify it:
# zpool create rootpool c1t0d0s0
# zpool list
NAME        SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rootpool     10G  73.5K  10.0G     0%  ONLINE  -
#
If the slice you've selected currently has another filesystem on it, eg UFS or VxFS, you'll need to use the -f flag to force the creation of the ZFS pool, as shown in the example below.
You can use any name you like. I've chosen rootpool to make it clear what the pool's function is.
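For example, with the same slice and pool name as above:
# zpool create -f rootpool c1t0d0s0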
Create The Boot Environments (BE)
Now that we've got our zpool in place, we can create the BEs that will be used to migrate the current root filesystem across to the new ZFS filesystem.
Create the ABE as follows:
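# lucreate -c ufsBE -n zfsBE -p rootpool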
This command will create two boot environments where:
ufsBE: the name your current boot environment will be assigned. This can be anything you like and is your safety net; if something goes wrong, you can always boot back to this BE (unless you delete it).
zfsBE: the name of your new boot environment that will be on ZFS.
rootpool: the name of the zpool you created for the boot environment.
This command will take a while to run as it copies your ufsBE to your new zfsBE, and will produce output similar to the following if all goes well:
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsbe>.
Creating initial configuration for primary boot environment <zfsbe>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsbe> PBE Boot Device </dev/dsk/c0t0d0s0>.
Comparing source boot environment <ufsbe> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsbe>.
Source boot environment is <ufsbe>.
Creating boot environment <zfsbe>.
Creating file systems on boot environment <zfsbe>.
Creating <zfs> file system for </> in zone <global> on <mpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsbe>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsbe>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsbe>.
Making boot environment <zfsbe> bootable.
Creating boot_archive for /.alt.tmp.b-7Tc.mnt
updating /.alt.tmp.b-7Tc.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsbe> successful.
Creation of boot environment <zfsbe> successful.
#
The x86 output is not much different; it'll just include information about updating GRUB.
Update: Live Upgrade patches 121430-65 (SPARC) and 121431-66 (x86) introduce the -D option so you can move /var to its own dataset. Thanks to John Ross for reminding me about this.
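For example, something along these lines should put /var in its own dataset (check the lucreate man page that comes with the patch for the exact argument syntax; the /var form here is how I understand it):
# lucreate -c ufsBE -n zfsBE -p rootpool -D /var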
Update: You may get the following error from lucreate:
ERROR: ZFS pool does not support boot environments.
This will be due to the label on the disk.
You need to relabel your root disks and give them an SMI label. You can do this using "format -e": select the disk, go to "label" and select "[0] SMI label". This should be all that's needed, but whilst you're at it, you may as well check that your partition table is still as you want it. If not, make your changes and label the disk again.
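For example (disk selection and prompts trimmed, and they vary slightly between releases):
# format -e
(select your root disk from the list)
format> label
(choose "[0] SMI label" when asked for the label type)
format> verify
(prints the label and partition table so you can check it)
format> quit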
For x86 systems, you need to ensure your disk has an fdisk table.
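If it doesn't, something like this creates a default Solaris fdisk partition covering the whole disk (the device name is just an example, and this will overwrite any existing fdisk table):
# fdisk -B /dev/rdsk/c1t0d0p0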
You should now be able to perform the lucreate.
The most likely reason for your disk having an EFI label is that it's been used by ZFS as a whole disk before. ZFS uses EFI labels for whole-disk usage; however, you need an SMI label for your root disks at the moment (I believe this may change in the future).
Once the lucreate has completed, you can verify your Live Upgrade environments with lustatus:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
#
Activate and Boot from ZFS zpool
We're almost done. All we need to do now is activate our new ZFS boot environment and reboot:
# luactivate zfsBE
# init 6
NOTE: Ensure you reboot using "init 6" or "shutdown -i6". Do NOT use "reboot".
Remember, if you're on SPARC, you'll need to set the appropriate boot device at the OBP. luactivate will remind you.
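For example, from the OBP it's just something like this, where disk1 stands in for whatever alias or device path points at the disk holding rootpool on your system:
ok setenv boot-device disk1
ok boot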
You can verify you're booted from the ZFS BE using lustatus:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
#
At this point you can delete the old ufsBE if all went well. You can also re-use that old disk/slice for anything you want, like adding it to the rootpool to create a mirror. The choice is yours, but now you have your system booted from ZFS and all its wonderfulness is available on the root filesystem too.
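For example, assuming the old UFS root lived on c0t0d0s0 (substitute your own devices), the cleanup and mirroring would look something like this. On SPARC, remember to put a ZFS boot block on the newly attached disk with installboot (installgrub on x86) so both halves of the mirror are bootable:
# ludelete ufsBE
# zpool attach rootpool c1t0d0s0 c0t0d0s0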