We have run OpenStack for a while, and always on CentOS 6. It's old and somewhat restricted virtualization-wise. Currently we're on OpenStack Icehouse. The Juno packages don't exist for CentOS 6, so we need to make the challenging jump to CentOS 7 before upgrading.
This is mainly a problem for the compute nodes. The virtual machines run on local storage on the compute nodes, so we have to migrate the virtual machines to CentOS 7 compute nodes and then reinstall the CentOS 6 compute nodes. Capacity isn't the problem for us: we don't have enough free capacity to move everything at once, but we can do it in batches. The actual migration part, however, is trickier.
Luckily, Red Hat has seen to it that migrations from CentOS 6 to CentOS 7 basically work.
Migrations
There are three main types of migration in OpenStack.
Live migration
nova live-migration <vmid> [target-host]
Live migration is nice. Almost magic. Basically the VM's memory is copied over from the source host to the target host (in many iterations if necessary), and then the VM is started on the target host. For a deep dive, there was a good presentation about it at the OpenStack Summit in Vancouver.
Live migration requires that the compute nodes have a shared storage backend, e.g. NFS or Ceph. If you have that, you're golden, migrate away. You might see some issues, but in general it works. We don't have shared disks, so this won't work for us.
Block migration
nova live-migration --block-migrate <vmid> [target-host]
Block migration is similar to live migration, but in addition to the memory it copies over all the block devices' contents to the target machine. Sounds good? In theory, yes. In another excellent presentation from the Vancouver summit, HP describes the reality of the situation. And I'm quite sure they were running something more recent than CentOS 6 on their compute nodes.
Either way, this is what we wanted to do. It would have minimal customer impact: some seconds to a minute of network downtime per VM.
The main issue we could not get around was that libvirt copies over all data from all block devices. If you have a volume connected to the VM, it reads the contents of the volume from the source machine, and writes it back to the volume on the target machine. This is done while the volume is in use, so if something goes wrong, silent data corruption is a real risk. Some posts said that if you mark the volumes as "shared" block devices, libvirt won't copy them over. Well, the CentOS 6 libvirt version seems to ignore that. So no block migration for instances with volumes.
As a side note, the default CentOS 7 kernel doesn't support block migration. Apparently you can migrate to CentOS 7 from CentOS 6, but not between CentOS 7 machines, unless the support is compiled into the kernel (e.g. the RHEV kernel).
Offline migration
nova migrate <vmid>
This migration has nothing to do with libvirt; OpenStack manages it. The workflow is something like this (a rough manual equivalent is sketched after the list):
- The VM is shut down
- qemu-img convert is run on the root disk (at least if the disks are qcow2)
- The root disk is rsynced over to the new node with the following rsync flags "-Sze.Ls"
- qemu-img convert is run on the ephemeral disk
- The ephemeral disk is rsynced over
- The VM is started on the new host
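For illustration, the manual equivalent would look something like the following sketch. The paths and the exact invocations are assumptions based on what we saw in the logs, not the literal commands nova runs:
# flatten the qcow2 diff into a standalone image (merges in the backing file)
qemu-img convert -f qcow2 -O qcow2 /var/lib/nova/instances/<vmid>/disk /var/lib/nova/instances/<vmid>/disk.converted
# copy it to the target host; -S preserves sparse files, -z compresses
rsync -Sz /var/lib/nova/instances/<vmid>/disk.converted <target-host>:/var/lib/nova/instances/<vmid>/disk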
Now, this will take time, depending on your hardware and the VMs. The qemu-img convert commands seem to be run to merge the qcow2 diff file with its backing file. For VMs that have used most of their disk space, this mainly means reading the whole file from disk and writing it back under another name. Depending on the VM size and the speed of your disks, this might take a while.
Then you probably get to the even slower part. The "z" flag in rsync means compress, which is a compute-intensive task. Since rsync is not multithreaded, this slows down your migration. On a single maxed-out E5-2670v2 core, the transfer speed was ~12 MB/s. I guess the speed improves for empty disks, but for full ones, it's bad (nova bug I filed).
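To put that in perspective: at 12 MB/s, a mostly-full 500 GB disk (an example size) works out to 500 * 1024 / 12 ≈ 42,700 seconds, or close to twelve hours, for the transfer alone.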
Add these up, and with large VMs, you're going to be waiting for a while. This also means the VM will be shut down for an extended amount of time, which requires coordination with the customers.
How did it go?
The issues we saw with migrations were myriad and frequent. We tried block migration for VMs without volumes and offline migration for VMs with volumes.
But before we get to that, there is a nova bug that needs addressing: if the glance image a VM was booted from has been deleted, you can't migrate the VM. We solved this as follows:
- Get all deleted glance images that are still in use by VMs from the database
select image_ref,glance.images.id,glance.images.deleted from instances join glance.images on image_ref=glance.images.id where instances.deleted=0 and glance.images.deleted=1 group by image_ref;
- Get the SHA-1 hash of each image UUID (the base filename is the SHA-1 hash of the UUID)
- Copy over all those images from the hosts where they still exist to the new CentOS 7 hosts (see the sketch after this list)
- Make sure nova doesn't delete unused backing images automatically (the remove_unused_base_images option)
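For the hash-and-copy steps, a sketch, assuming the default image cache location under /var/lib/nova/instances/_base:
# the base file is named after the SHA-1 of the image UUID
echo -n <image-uuid> | sha1sum
scp /var/lib/nova/instances/_base/<sha1-hash> <new-host>:/var/lib/nova/instances/_base/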
Block migration
If you watched the HP talk, it's pretty accurate. Here are the most common outcomes of our migrations.
The migration worked
This actually happened quite often.
The migrations never started
We actually had a case or two where OpenStack said the VM was migrating, but nothing ever started, and the VM stayed in the migrating state.
Fix
nova reset-state --active <vmid>
Try again.
The migrations never started - full disk
2015-07-20 15:52:30.714 28656 TRACE oslo.messaging.rpc.dispatcher MigrationPreCheckError: Migration pre-check error: Unable to migrate <vmid>: Disk of instance is too large(available on destination host:55834574848 < need:246960619520)
Sometimes the migrations tried to start, but OpenStack said the destination host didn't have enough disk space, and the migration failed (VM remained active on the source machine).
Fix
Specify a destination host that has enough space.
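For example (the free_disk_gb field comes from the hypervisor details):
nova hypervisor-show <target-host> | grep free_disk_gb
nova live-migration --block-migrate <vmid> <target-host>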
The migrations never started - Message timeouts
Exception during message handling: Timed out waiting for a reply to message ID 4ca936f37c964dba93256cd45a12b4b7
This was probably caused by high load timing out some RabbitMQ connections.
Fix
Restart services, reset the state of the VM if necessary, and try again.
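On our compute nodes that meant something like:
service openstack-nova-compute restart
nova reset-state --active <vmid>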
The migrations never started - Network filter
ERROR nova.virt.libvirt.driver [-] [instance: <vmid>] Live Migration failure: Network filter not found: Could not find filter 'nova-instance-instance-<vmid>'
I have no idea what caused these. They just happened. The VMs remained in the active state on the source host.
Fix
Try again; that worked sometimes. If it didn't, leave the machine for offline migration.
Virtual machine migration never finished - token expired
Sometimes the VM migration worked for all intents and purposes, but the VM stayed in the migrating state. This was caused by the Keystone token expiring during the migration, so nova couldn't signal that the migration was complete. At least that's my guess based on the log files.
Fix
Reset the state of the VM to active, and change the host of the VM.
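What this amounted to in practice (the update statement is a sketch against the nova database; verify which host the VM actually ended up on first):
nova reset-state --active <vmid>
update instances set host='<new-host>' where uuid='<vmid>' and deleted=0;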
Virtual machine migration never finished - something else?
The same as above, but the migration didn't take long enough for the token to expire, and there were no traces in the logs about this.
Fix
Same as above.
Offline migration
Offline migration should be easier, right? Well...
Virtual machines migrate
Again, this happened fairly often. Just remember to resize-confirm the machine afterwards.
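That is:
nova resize-confirm <vmid>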
Virtual machines are stuck in verify - nothing happened
Sometimes, after migrating a shut-off VM, OpenStack basically finishes the migration, but nothing has been copied over. Libvirt doesn't know about the VM either.
Fix
nova resize-revert <vmid>
Try to figure out what happened? Try again?
Migrations fail because tokens expire
2015-07-28 10:45:33.122 25000 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Unauthorized (HTTP 401)
These show up in the nova-compute logs. Token expiry affects the offline migrations too.
If you migrate an active VM and it goes to the ERROR state because the migration took ages and the token expired, all calls after the migration just fail. However, the migration itself most likely worked.
Fix
Update the VM host to the new host in the database. Reset the VM state to active. Stop the VM. Start the VM.
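In commands, with the same kind of database sketch as above:
update instances set host='<new-host>' where uuid='<vmid>' and deleted=0;
nova reset-state --active <vmid>
nova stop <vmid>
nova start <vmid>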
Conclusion
A large part of the migrations just work. However, there has been a ton of manual work and plenty of broken migrations. Most of the problems can be fixed, and so far, no customer VM has been lost.
We're still in the process of offline-migrating the VMs. It'll take a while to get them all onto fresh hosts. This is not an exercise I'll gladly repeat often.
So long, I'm off to the drawing board to see how we can improve on this.