Fix issue when restoring backup after migration of volume #12549
Conversation
Codecov Report
Additional details and impacted files:
@@ Coverage Diff @@
## 4.20 #12549 +/- ##
============================================
+ Coverage 16.26% 16.37% +0.10%
- Complexity 13428 13622 +194
============================================
Files 5660 5661 +1
Lines 499907 502480 +2573
Branches 60696 61846 +1150
============================================
+ Hits 81316 82267 +951
- Misses 409521 411087 +1566
- Partials 9070 9126 +56
```diff
 List<Backup.VolumeInfo> backedVolumes = backup.getBackedUpVolumes();
 List<VolumeVO> volumes = backedVolumes.stream()
-        .map(volume -> volumeDao.findByUuid(volume.getUuid()))
+        .map(volume -> volumeDao.findByUuid(volume.getPath()))
```
Does the new uuid or path after migration need to be updated in the backed-up volumes metadata, if any backups exist for those volumes? In any case, might the path also change?
The new UUID / path for the backed-up volume doesn't need to be updated: the uuid points to the volume UUID, which is always the same on subsequent backups, and the path points to the backup path, which shouldn't vary even if the volume is migrated. I don't see the path of the backup changing.
```diff
-        .map(volume -> volumeDao.findByUuid(volume.getPath()))
+        .map(backedVolumeInfo -> volumeDao.findByUuid(backedVolumeInfo.getPath()))
```
It's better to rename the lambda parameter to backedVolumeInfo to avoid confusion.
@Pearl1594 Correct, the path of the backup doesn't change. I mean that the volume path might change after migration, while the volume is checked by its backed-up path (which is the pre-migration path). cc @abh1sar
I think we should not have path in the backed-up volumes metadata at all.
- Backup files are named using the volume uuid
- The path in backed-up volumes is not referenced anywhere apart from the UI
volume.getPath() already gives us the path to restore to. I don't see a point in maintaining it in the backup metadata as well and updating it whenever the volume path changes.
There was a PR merged on main that makes the UI reference uuid instead of path (#12156).
So I propose removing path entirely from Backup.VolumeInfo in the main branch. We don't need upgrade handling either; the path in the DB for older backups will simply be ignored.
Now, in the context of this PR, we should get the path from volume.getPath(), not from the backed-up volumes metadata.
Thoughts? @sureshanaparti @Pearl1594
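To make that concrete, here is a minimal sketch of the proposed lookup, assuming CloudStack's Backup.VolumeInfo, VolumeVO and VolumeDao types; resolveRestorePaths is a hypothetical name for illustration, not the method in this PR:
```java
// Hypothetical sketch of the proposal, not the actual patch: resolve restore
// targets from the live volume record, keyed only by the stable volume UUID.
List<String> resolveRestorePaths(Backup backup, VolumeDao volumeDao) {
    List<String> restorePaths = new ArrayList<>();
    for (Backup.VolumeInfo backedUpVolume : backup.getBackedUpVolumes()) {
        // The volume UUID never changes, even across storage migrations.
        VolumeVO volume = volumeDao.findByUuid(backedUpVolume.getUuid());
        if (volume == null) {
            continue; // volume no longer exists; nothing to restore onto
        }
        // volume.getPath() reflects the current location after any migration,
        // so no path needs to be stored (or updated) in the backup metadata.
        restorePaths.add(volume.getPath());
    }
    return restorePaths;
}
```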
With respect to this statement:
> Now, in the context of this PR, we should get the path from volume.getPath(), not from the backed-up volumes metadata.
I think we need to consider backedVolume.getPath(), as a volume could have been migrated and its path changed. When restoring from a backup, we need to reference the path of the backedVolume (the path of the volume prior to the migrate operation). Correct me if I'm wrong.
If volumeUuid is passed, the volume's uuid is used; when it is null, the backupVolume path is used to determine the backup file, as seen in: https://github.com/apache/cloudstack/blob/main/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtRestoreBackupCommandWrapper.java#L224-L225
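Paraphrasing that branching as a sketch (not the literal source; variable names are illustrative), with the type.id.qcow2 naming matching the agent logs below:
```java
// Rough paraphrase of the referenced branching, for illustration only:
// use the volume UUID when it is passed, else the backed-up volume's path.
String diskIdentifier = (volumeUuid != null) ? volumeUuid : backupVolume.getPath();
String backupFileName = String.format("%s.%s.qcow2", volumeType, diskIdentifier);
```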
@blueorangutan package

@Pearl1594 a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ el10 ✔️ debian ✔️ suse15. SL-JID 16637

@blueorangutan test

@Pearl1594 a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests
plugins/backup/nas/src/main/java/org/apache/cloudstack/backup/NASBackupProvider.java (outdated review thread, resolved)
[SF] Trillian test result (tid-15340)
plugins/backup/nas/src/main/java/org/apache/cloudstack/backup/NASBackupProvider.java (review thread resolved)
@blueorangutan package

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ el10 ✔️ debian ✔️ suse15. SL-JID 16686

plugins/backup/nas/src/main/java/org/apache/cloudstack/backup/NASBackupProvider.java (review thread resolved)
@blueorangutan package

@sureshanaparti a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ el10 ✔️ debian ✔️ suse15. SL-JID 16710
LGTM
Tested on KVM with NAS backup provider. The fix correctly uses volume paths from backup metadata instead of current DB paths after volume migration.
Test Execution Summary
| TC | Description | Result |
|---|---|---|
| TC1 | ROOT volume migration (pri1→pri2) | PASSED |
| TC2 | DATA volume migration (pri1→pri2) | PASSED |
| TC3 | No migration (regression check) | PASSED |
| TC4 | Both ROOT and DATA volumes migrated | PASSED |
| TC5 | Restore to destroyed (not expunged) VM | PASSED |
| TC6 | Multiple backups before/after migration | PASSED |
| TC7 | Volume added after backup | N/A (expected validation error) |
Detailed Test Execution
TC1: Restore backup after ROOT volume migration (stopped VM)
Objective
Verify that restoring a VM backup succeeds after the ROOT volume has been migrated to a different primary storage pool. The fix should use the volume path from backup metadata instead of the current DB path.
Test Steps
- Confirmed ROOT volume path before backup: 51c96bbb-6f13-43d3-ac75-29ee89670ca1 on pri1
- Assigned backup offering to VM 78dbc9a3-df86-4550-a912-5174ede758ed
- Created backup (id: ef4039a6-07d6-4cfe-a8dc-8c07948dc2f3)
- Stopped the VM
- Migrated ROOT volume from pri1 to pri2 — path changed to cff6b169-a645-4f54-b20c-734c5e77a3e9
- Restored from backup
- Started VM and confirmed Running state
Expected Result:
Backup restore should succeed using the backed-up volume path from the backup metadata, not the current (post-migration) DB path. VM should start successfully after restore.
Actual Result:
Backup restore succeeded. The RestoreBackupCommand correctly used the backed-up volume path 51c96bbb-6f13-43d3-ac75-29ee89670ca1 from backup metadata. The rsync command on the KVM agent executed successfully. VM started and reached Running state.
Test Evidence:
- Volume before backup (on pri1):
(localcloud) 🐱 > list volumes virtualmachineid=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,name,type,path,storage,storageid
{
"count": 1,
"volume": [
{
"id": "51c96bbb-6f13-43d3-ac75-29ee89670ca1",
"name": "ROOT-5",
"path": "51c96bbb-6f13-43d3-ac75-29ee89670ca1",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri1",
"storageid": "b6b70d7b-c86a-3d46-a75b-32e7af66bc26",
"type": "ROOT"
}
]
}
- Backup created — backup metadata shows original path:
(localcloud) 🐱 > list backups virtualmachineid=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,externalid,status,volumes
{
"backup": [
{
"externalid": "i-2-5-VM/2026.02.06.08.54.27",
"id": "ef4039a6-07d6-4cfe-a8dc-8c07948dc2f3",
"status": "BackedUp",
"volumes": "[{\"uuid\":\"51c96bbb-6f13-43d3-ac75-29ee89670ca1\",\"type\":\"ROOT\",\"size\":8589934592,\"path\":\"51c96bbb-6f13-43d3-ac75-29ee89670ca1\"}]"
}
],
"count": 1
}
- ROOT volume migrated from pri1 to pri2 — path changed:
(localcloud) 🐱 > migrate volume storageid=1fa3e6b5-fabd-34df-b802-6c1daf2ec740 volumeid=51c96bbb-6f13-43d3-ac75-29ee89670ca1
{
"volume": {
"id": "51c96bbb-6f13-43d3-ac75-29ee89670ca1",
"name": "ROOT-5",
"path": "cff6b169-a645-4f54-b20c-734c5e77a3e9",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri2",
"storageid": "1fa3e6b5-fabd-34df-b802-6c1daf2ec740"
}
}
- Volume after migration confirms new path on pri2:
(localcloud) 🐱 > list volumes virtualmachineid=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,name,type,path,storage
{
"count": 1,
"volume": [
{
"id": "51c96bbb-6f13-43d3-ac75-29ee89670ca1",
"name": "ROOT-5",
"path": "cff6b169-a645-4f54-b20c-734c5e77a3e9",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri2",
"type": "ROOT"
}
]
}
- Restore backup succeeded:
(localcloud) 🐱 > restore backup id=ef4039a6-07d6-4cfe-a8dc-8c07948dc2f3
{
"success": true
}
- KVM agent log — RestoreBackupCommand used backed-up path, rsync successful:
2026-02-06 08:59:18,466 DEBUG [cloud.agent.Agent] (AgentRequest-Handler-1:[]) (logid:) Request:Seq 1-417708865438616579: { Cmd , MgmtId: 32988888826665, via: 1, Ver: v1, Flags: 100111, [{"org.apache.cloudstack.backup.RestoreBackupCommand":{"vmName":"i-2-5-VM","backupPath":"i-2-5-VM/2026.02.06.08.54.27","backupRepoType":"nfs","backupRepoAddress":"10.0.32.4:/acs/primary/ref-trl-10861-k-Mol9-rositsa-kyuchukova/backup","volumePaths":["/mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/51c96bbb-6f13-43d3-ac75-29ee89670ca1"],"vmExists":"true","vmState":"Restoring","wait":"0","bypassHostMaintenance":"false"}}] }
2026-02-06 08:59:18,575 DEBUG [utils.script.Script] (AgentRequest-Handler-1:[]) (logid:) Executing command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.psKta3844783569237874084/i-2-5-VM/2026.02.06.08.54.27/root.51c96bbb-6f13-43d3-ac75-29ee89670ca1.qcow2 /mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/51c96bbb-6f13-43d3-ac75-29ee89670ca1 ].
2026-02-06 08:59:45,934 DEBUG [utils.script.Script] (AgentRequest-Handler-1:[]) (logid:) Successfully executed process [92104] for command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.psKta3844783569237874084/i-2-5-VM/2026.02.06.08.54.27/root.51c96bbb-6f13-43d3-ac75-29ee89670ca1.qcow2 /mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/51c96bbb-6f13-43d3-ac75-29ee89670ca1 ].
- VM started successfully after restore:
(localcloud) 🐱 > list virtualmachines id=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,state
{
"count": 1,
"virtualmachine": [
{
"id": "78dbc9a3-df86-4550-a912-5174ede758ed",
"state": "Running"
}
]
}
Test Result: PASSED
TC2: Restore backup after DATA volume migration
Objective
Verify that restoring a VM backup succeeds after a DATA volume has been migrated to a different primary storage pool. The fix should use the volume path from backup metadata for the DATA volume.
Test Steps
- Created and attached DATA volume (DATA-test2) to VM — path 12d32216-8db5-44f4-b367-9cfb4c74e1b6 on pri1
- Started VM — ROOT on pri2, DATA on pri1
- Created backup (id: 2378e946-568e-41a8-b784-5a159ba8dc4e) with both volumes
- Stopped the VM
- Migrated DATA volume from pri1 to pri2 — path changed to 92440ffa-e204-4cc8-8ff5-9572872fd356
- Restored from backup
- Started VM and confirmed Running state
Expected Result:
Backup restore should succeed using the backed-up volume paths from backup metadata for both ROOT and DATA volumes. VM should start successfully after restore.
Actual Result:
Backup restore succeeded. The RestoreBackupCommand correctly used the backed-up volume paths from backup metadata for both volumes. Both rsync commands executed successfully. VM started and reached Running state.
Test Evidence:
- Volumes before backup — ROOT on pri2, DATA on pri1:
(localcloud) 🐱 > list volumes virtualmachineid=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,name,type,path,storage
{
"count": 2,
"volume": [
{
"id": "51c96bbb-6f13-43d3-ac75-29ee89670ca1",
"name": "ROOT-5",
"path": "cff6b169-a645-4f54-b20c-734c5e77a3e9",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri2",
"type": "ROOT"
},
{
"id": "12d32216-8db5-44f4-b367-9cfb4c74e1b6",
"name": "DATA-test2",
"path": "12d32216-8db5-44f4-b367-9cfb4c74e1b6",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri1",
"type": "DATADISK"
}
]
}
- Backup created — backup metadata shows original paths:
(localcloud) 🐱 > list backups virtualmachineid=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,externalid,status,volumes
{
"backup": [
{
"externalid": "i-2-5-VM/2026.02.06.09.03.51",
"id": "2378e946-568e-41a8-b784-5a159ba8dc4e",
"status": "BackedUp",
"volumes": "[{\"uuid\":\"51c96bbb-6f13-43d3-ac75-29ee89670ca1\",\"type\":\"ROOT\",\"size\":8589934592,\"path\":\"cff6b169-a645-4f54-b20c-734c5e77a3e9\"},{\"uuid\":\"12d32216-8db5-44f4-b367-9cfb4c74e1b6\",\"type\":\"DATADISK\",\"size\":5368709120,\"path\":\"12d32216-8db5-44f4-b367-9cfb4c74e1b6\"}]"
}
],
"count": 2
}
- DATA volume migrated from pri1 to pri2 — path changed:
(localcloud) 🐱 > migrate volume storageid=1fa3e6b5-fabd-34df-b802-6c1daf2ec740 volumeid=12d32216-8db5-44f4-b367-9cfb4c74e1b6
{
"volume": {
"id": "12d32216-8db5-44f4-b367-9cfb4c74e1b6",
"name": "DATA-test2",
"path": "92440ffa-e204-4cc8-8ff5-9572872fd356",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri2",
"storageid": "1fa3e6b5-fabd-34df-b802-6c1daf2ec740"
}
}
- Volumes after migration — both on pri2, DATA path changed:
(localcloud) 🐱 > list volumes virtualmachineid=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,name,type,path,storage
{
"count": 2,
"volume": [
{
"id": "51c96bbb-6f13-43d3-ac75-29ee89670ca1",
"name": "ROOT-5",
"path": "cff6b169-a645-4f54-b20c-734c5e77a3e9",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri2",
"type": "ROOT"
},
{
"id": "12d32216-8db5-44f4-b367-9cfb4c74e1b6",
"name": "DATA-test2",
"path": "92440ffa-e204-4cc8-8ff5-9572872fd356",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri2",
"type": "DATADISK"
}
]
}
- Restore backup succeeded:
(localcloud) 🐱 > restore backup id=2378e946-568e-41a8-b784-5a159ba8dc4e
{
"success": true
}
- KVM agent log — RestoreBackupCommand used backed-up paths for both volumes, both rsyncs successful:
2026-02-06 09:05:56,068 DEBUG [cloud.agent.Agent] (AgentRequest-Handler-3:[]) (logid:) Request:Seq 1-417708865438616621: { Cmd , MgmtId: 32988888826665, via: 1, Ver: v1, Flags: 100111, [{"org.apache.cloudstack.backup.RestoreBackupCommand":{"vmName":"i-2-5-VM","backupPath":"i-2-5-VM/2026.02.06.09.03.51","backupRepoType":"nfs","backupRepoAddress":"10.0.32.4:/acs/primary/ref-trl-10861-k-Mol9-rositsa-kyuchukova/backup","volumePaths":["/mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/cff6b169-a645-4f54-b20c-734c5e77a3e9","/mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/12d32216-8db5-44f4-b367-9cfb4c74e1b6"],"vmExists":"true","vmState":"Restoring","wait":"0","bypassHostMaintenance":"false"}}] }
2026-02-06 09:05:56,159 DEBUG [utils.script.Script] (AgentRequest-Handler-3:[]) (logid:) Executing command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.pUkng705972317945793199/i-2-5-VM/2026.02.06.09.03.51/root.cff6b169-a645-4f54-b20c-734c5e77a3e9.qcow2 /mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/cff6b169-a645-4f54-b20c-734c5e77a3e9 ].
2026-02-06 09:06:22,110 DEBUG [utils.script.Script] (AgentRequest-Handler-3:[]) (logid:) Successfully executed process [92964] for command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.pUkng705972317945793199/i-2-5-VM/2026.02.06.09.03.51/root.cff6b169-a645-4f54-b20c-734c5e77a3e9.qcow2 /mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/cff6b169-a645-4f54-b20c-734c5e77a3e9 ].
2026-02-06 09:06:22,111 DEBUG [utils.script.Script] (AgentRequest-Handler-3:[]) (logid:) Executing command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.pUkng705972317945793199/i-2-5-VM/2026.02.06.09.03.51/datadisk.12d32216-8db5-44f4-b367-9cfb4c74e1b6.qcow2 /mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/12d32216-8db5-44f4-b367-9cfb4c74e1b6 ].
2026-02-06 09:06:22,225 DEBUG [utils.script.Script] (AgentRequest-Handler-3:[]) (logid:) Successfully executed process [92997] for command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.pUkng705972317945793199/i-2-5-VM/2026.02.06.09.03.51/datadisk.12d32216-8db5-44f4-b367-9cfb4c74e1b6.qcow2 /mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/12d32216-8db5-44f4-b367-9cfb4c74e1b6 ].
- VM started successfully after restore:
(localcloud) 🐱 > list virtualmachines id=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,state
{
"count": 1,
"virtualmachine": [
{
"id": "78dbc9a3-df86-4550-a912-5174ede758ed",
"state": "Running"
}
]
}
Test Result: PASSED
TC3: Restore backup without migration (regression check)
Objective
Verify that restoring a VM backup still works correctly when no volume migration has occurred, ensuring the fix does not introduce a regression.
Test Steps
- Created backup (id: 17e2c10e-e2d3-437e-9253-ce1a17dd4df2) with current volume paths — both ROOT and DATA on pri2
- Stopped the VM (no migration performed)
- Restored from backup
- Started VM and confirmed Running state
Expected Result:
Backup restore should succeed as before — no regression introduced by the fix. Volume paths from backup metadata match current DB paths.
Actual Result:
Backup restore succeeded. The RestoreBackupCommand used the correct paths matching both backup metadata and current DB state. Both rsync commands executed successfully. VM started and reached Running state.
Test Evidence:
- Backup created — backup metadata shows current paths:
(localcloud) 🐱 > list backups virtualmachineid=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,externalid,status,volumes
{
"backup": [
{
"externalid": "i-2-5-VM/2026.02.06.09.08.35",
"id": "17e2c10e-e2d3-437e-9253-ce1a17dd4df2",
"status": "BackedUp",
"volumes": "[{\"uuid\":\"51c96bbb-6f13-43d3-ac75-29ee89670ca1\",\"type\":\"ROOT\",\"size\":8589934592,\"path\":\"cff6b169-a645-4f54-b20c-734c5e77a3e9\"},{\"uuid\":\"12d32216-8db5-44f4-b367-9cfb4c74e1b6\",\"type\":\"DATADISK\",\"size\":5368709120,\"path\":\"92440ffa-e204-4cc8-8ff5-9572872fd356\"}]"
}
],
"count": 3
}
- Restore backup succeeded:
(localcloud) 🐱 > restore backup id=17e2c10e-e2d3-437e-9253-ce1a17dd4df2
{
"success": true
}
- KVM agent log — RestoreBackupCommand used correct paths (matching backup metadata), both rsyncs successful:
2026-02-06 09:11:18,183 DEBUG [cloud.agent.Agent] (AgentRequest-Handler-1:[]) (logid:) Request:Seq 1-417708865438616643: { Cmd , MgmtId: 32988888826665, via: 1, Ver: v1, Flags: 100111, [{"org.apache.cloudstack.backup.RestoreBackupCommand":{"vmName":"i-2-5-VM","backupPath":"i-2-5-VM/2026.02.06.09.08.35","backupRepoType":"nfs","backupRepoAddress":"10.0.32.4:/acs/primary/ref-trl-10861-k-Mol9-rositsa-kyuchukova/backup","volumePaths":["/mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/cff6b169-a645-4f54-b20c-734c5e77a3e9","/mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/92440ffa-e204-4cc8-8ff5-9572872fd356"],"vmExists":"true","vmState":"Restoring","wait":"0","bypassHostMaintenance":"false"}}] }
2026-02-06 09:11:18,275 DEBUG [utils.script.Script] (AgentRequest-Handler-1:[]) (logid:) Executing command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.EknNi17155312656925976874/i-2-5-VM/2026.02.06.09.08.35/root.cff6b169-a645-4f54-b20c-734c5e77a3e9.qcow2 /mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/cff6b169-a645-4f54-b20c-734c5e77a3e9 ].
2026-02-06 09:11:44,934 DEBUG [utils.script.Script] (AgentRequest-Handler-1:[]) (logid:) Successfully executed process [93485] for command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.EknNi17155312656925976874/i-2-5-VM/2026.02.06.09.08.35/root.cff6b169-a645-4f54-b20c-734c5e77a3e9.qcow2 /mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/cff6b169-a645-4f54-b20c-734c5e77a3e9 ].
2026-02-06 09:11:44,934 DEBUG [utils.script.Script] (AgentRequest-Handler-1:[]) (logid:) Executing command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.EknNi17155312656925976874/i-2-5-VM/2026.02.06.09.08.35/datadisk.92440ffa-e204-4cc8-8ff5-9572872fd356.qcow2 /mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/92440ffa-e204-4cc8-8ff5-9572872fd356 ].
2026-02-06 09:11:45,119 DEBUG [utils.script.Script] (AgentRequest-Handler-1:[]) (logid:) Successfully executed process [93518] for command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.EknNi17155312656925976874/i-2-5-VM/2026.02.06.09.08.35/datadisk.92440ffa-e204-4cc8-8ff5-9572872fd356.qcow2 /mnt/1fa3e6b5-fabd-34df-b802-6c1daf2ec740/92440ffa-e204-4cc8-8ff5-9572872fd356 ].
- VM started successfully after restore:
(localcloud) 🐱 > list virtualmachines id=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,state
{
"count": 1,
"virtualmachine": [
{
"id": "78dbc9a3-df86-4550-a912-5174ede758ed",
"state": "Running"
}
]
}
Test Result: PASSED
TC4: Restore backup after migrating BOTH ROOT and DATA volumes
Objective
Verify that restoring a VM backup succeeds after both ROOT and DATA volumes have been migrated to a different primary storage pool.
Test Steps
- Starting state: backup 2378e946 taken with ROOT path cff6b169-a645-4f54-b20c-734c5e77a3e9 on pri2 and DATA path 12d32216-8db5-44f4-b367-9cfb4c74e1b6 on pri1
- Stopped the VM
- Migrated ROOT volume from pri2 to pri1 — path changed to fd80019a-1393-4f17-bfc6-60ac874d1500
- Migrated DATA volume from pri2 to pri1 — path changed to 9d66ac3d-2a46-43b7-a300-b04f82f072f5
- Confirmed both volumes on pri1 with new paths
- Restored from backup 2378e946-568e-41a8-b784-5a159ba8dc4e
- Started VM and confirmed Running state
Expected Result:
Backup restore should succeed using the backed-up volume paths from backup metadata for both ROOT and DATA volumes, even though both have been migrated to a different pool with different paths.
Actual Result:
Backup restore succeeded. The RestoreBackupCommand correctly used the backed-up volume paths from backup metadata for both volumes. Both rsync commands executed successfully. VM started and reached Running state.
Test Evidence:
- Volumes after both migrations — both on pri1 with new paths:
(localcloud) 🐱 > list volumes virtualmachineid=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,name,type,path,storage
{
"count": 2,
"volume": [
{
"id": "51c96bbb-6f13-43d3-ac75-29ee89670ca1",
"name": "ROOT-5",
"path": "fd80019a-1393-4f17-bfc6-60ac874d1500",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri1",
"type": "ROOT"
},
{
"id": "12d32216-8db5-44f4-b367-9cfb4c74e1b6",
"name": "DATA-test2",
"path": "9d66ac3d-2a46-43b7-a300-b04f82f072f5",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri1",
"type": "DATADISK"
}
]
}
- Restore backup succeeded:
(localcloud) 🐱 > restore backup id=2378e946-568e-41a8-b784-5a159ba8dc4e
{
"success": true
}
- KVM agent log — RestoreBackupCommand used backed-up paths for both volumes on pri1, both rsyncs successful:
2026-02-06 09:15:09,278 DEBUG [cloud.agent.Agent] (AgentRequest-Handler-4:[]) (logid:) Request:Seq 1-417708865438616672: { Cmd , MgmtId: 32988888826665, via: 1, Ver: v1, Flags: 100111, [{"org.apache.cloudstack.backup.RestoreBackupCommand":{"vmName":"i-2-5-VM","backupPath":"i-2-5-VM/2026.02.06.09.03.51","backupRepoType":"nfs","backupRepoAddress":"10.0.32.4:/acs/primary/ref-trl-10861-k-Mol9-rositsa-kyuchukova/backup","volumePaths":["/mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/cff6b169-a645-4f54-b20c-734c5e77a3e9","/mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/12d32216-8db5-44f4-b367-9cfb4c74e1b6"],"vmExists":"true","vmState":"Restoring","wait":"0","bypassHostMaintenance":"false"}}] }
2026-02-06 09:15:09,363 DEBUG [utils.script.Script] (AgentRequest-Handler-4:[]) (logid:) Executing command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.tNSwu13918875422589320668/i-2-5-VM/2026.02.06.09.03.51/root.cff6b169-a645-4f54-b20c-734c5e77a3e9.qcow2 /mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/cff6b169-a645-4f54-b20c-734c5e77a3e9 ].
2026-02-06 09:15:32,802 DEBUG [utils.script.Script] (AgentRequest-Handler-4:[]) (logid:) Successfully executed process [93905] for command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.tNSwu13918875422589320668/i-2-5-VM/2026.02.06.09.03.51/root.cff6b169-a645-4f54-b20c-734c5e77a3e9.qcow2 /mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/cff6b169-a645-4f54-b20c-734c5e77a3e9 ].
2026-02-06 09:15:32,806 DEBUG [utils.script.Script] (AgentRequest-Handler-4:[]) (logid:) Executing command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.tNSwu13918875422589320668/i-2-5-VM/2026.02.06.09.03.51/datadisk.12d32216-8db5-44f4-b367-9cfb4c74e1b6.qcow2 /mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/12d32216-8db5-44f4-b367-9cfb4c74e1b6 ].
2026-02-06 09:15:33,240 DEBUG [utils.script.Script] (AgentRequest-Handler-4:[]) (logid:) Successfully executed process [93923] for command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.tNSwu13918875422589320668/i-2-5-VM/2026.02.06.09.03.51/datadisk.12d32216-8db5-44f4-b367-9cfb4c74e1b6.qcow2 /mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/12d32216-8db5-44f4-b367-9cfb4c74e1b6 ].
- VM started successfully after restore:
(localcloud) 🐱 > list virtualmachines id=78dbc9a3-df86-4550-a912-5174ede758ed filter=id,state
{
"count": 1,
"virtualmachine": [
{
"id": "78dbc9a3-df86-4550-a912-5174ede758ed",
"state": "Running"
}
]
}
Test Result: PASSED
TC5: Restore backup to destroyed (not expunged) VM after migration
Objective
Verify backup restore works on a VM that has been destroyed but not expunged, and that the correct backed-up path is used after volume migration.
Test Steps
- Used existing VM (test-tc6) which had ROOT volume migrated from pri2 to pri1
- Stopped VM, removed backup offering (required before deletion)
- Destroyed VM without expunge flag — VM entered Destroyed state
- Confirmed VM still exists in Destroyed state
- Confirmed backups still exist
- Restored from backup dd44123e (pre-migration backup with path 158ffdba-2a15-4da0-91b2-cfa4697cf3b5)
- VM automatically recovered to Stopped state
- Started VM and confirmed Running state
Expected Result:
Restore should succeed on Destroyed VM, using the backed-up path from backup metadata. VM should be recovered and startable.
Actual Result:
Restore succeeded. The RestoreBackupCommand correctly used the backed-up path from backup metadata. The restore operation automatically recovered the VM from Destroyed to Stopped state. VM started and reached Running state.
Test Evidence:
- VM in Destroyed state before restore:
(localcloud) 🐱 > list virtualmachines id=3a7e798a-60d5-457b-b4e9-9d8a69fe2f2a listall=true filter=id,name,state
{
"count": 1,
"virtualmachine": [
{
"id": "3a7e798a-60d5-457b-b4e9-9d8a69fe2f2a",
"name": "test-tc6",
"state": "Destroyed"
}
]
}
- Restore backup succeeded:
(localcloud) 🐱 > restore backup id=dd44123e-6f89-47fa-afeb-7e10732f0c85
{
"success": true
}
- VM automatically recovered to Stopped state after restore:
(localcloud) 🐱 > list virtualmachines id=3a7e798a-60d5-457b-b4e9-9d8a69fe2f2a listall=true filter=id,name,state
{
"count": 1,
"virtualmachine": [
{
"id": "3a7e798a-60d5-457b-b4e9-9d8a69fe2f2a",
"name": "test-tc6",
"state": "Stopped"
}
]
}
- KVM2 agent log — RestoreBackupCommand used backed-up path, rsync successful:
2026-02-06 09:35:39,432 DEBUG [cloud.agent.Agent] (AgentRequest-Handler-3:[]) (logid:) Request:Seq 2-6278299355531184738: { Cmd , MgmtId: 32988888826665, via: 2, Ver: v1, Flags: 100111, [{"org.apache.cloudstack.backup.RestoreBackupCommand":{"vmName":"i-2-6-VM","backupPath":"i-2-6-VM/2026.02.06.09.22.47","backupRepoType":"nfs","backupRepoAddress":"10.0.32.4:/acs/primary/ref-trl-10861-k-Mol9-rositsa-kyuchukova/backup","volumePaths":["/mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/158ffdba-2a15-4da0-91b2-cfa4697cf3b5"],"vmExists":"true","vmState":"Restoring","wait":"0","bypassHostMaintenance":"false"}}] }
2026-02-06 09:35:39,517 DEBUG [utils.script.Script] (AgentRequest-Handler-3:[]) (logid:) Executing command [/bin/bash -c rsync -az /usr/share/cloudstack-agent/tmp/csbackup.qUCYi6882657134864249557/i-2-6-VM/2026.02.06.09.22.47/root.158ffdba-2a15-4da0-91b2-cfa4697cf3b5.qcow2 /mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/158ffdba-2a15-4da0-91b2-cfa4697cf3b5 ].
2026-02-06 09:35:39,570 DEBUG [utils.script.Script] (AgentRequest-Handler-3:[]) (logid:) Successfully executed process [72401] for command ...
- VM started successfully after restore:
(localcloud) 🐱 > list virtualmachines id=3a7e798a-60d5-457b-b4e9-9d8a69fe2f2a filter=id,state
{
"count": 1,
"virtualmachine": [
{
"id": "3a7e798a-60d5-457b-b4e9-9d8a69fe2f2a",
"state": "Running"
}
]
}
Test Result: PASSED
TC6: Multiple backups before and after migration
Objective
Verify that restoring from different backups (taken before and after volume migration) correctly uses the respective backed-up paths from each backup's metadata.
Test Steps
- Created new VM (test-tc6, 3a7e798a-60d5-457b-b4e9-9d8a69fe2f2a)
- Assigned backup offering
- Created Backup 1 (dd44123e) — ROOT path 158ffdba-2a15-4da0-91b2-cfa4697cf3b5 on pri2
- Stopped VM, migrated ROOT from pri2 to pri1 — path changed to eeeae80c-20fe-4b15-9258-c38302ff30c5
- Started VM, created Backup 2 (91600e16) — ROOT path eeeae80c-20fe-4b15-9258-c38302ff30c5 on pri1
- Stopped VM, restored from Backup 1, started VM — verified success
- Stopped VM, restored from Backup 2, started VM — verified success
Expected Result:
Each restore should use the path from its respective backup metadata. Backup 1 restore should use the pre-migration path. Backup 2 restore should use the post-migration path.
Actual Result:
Both restores succeeded. Each RestoreBackupCommand correctly used the path from its respective backup metadata. VM started successfully after each restore.
Test Evidence:
- Backup 1 created before migration:
{
"externalid": "i-2-6-VM/2026.02.06.09.22.47",
"id": "dd44123e-6f89-47fa-afeb-7e10732f0c85",
"status": "BackedUp",
"volumes": "[{\"uuid\":\"158ffdba-2a15-4da0-91b2-cfa4697cf3b5\",\"type\":\"ROOT\",\"size\":8589934592,\"path\":\"158ffdba-2a15-4da0-91b2-cfa4697cf3b5\"}]"
}
- Backup 2 created after migration:
{
"externalid": "i-2-6-VM/2026.02.06.09.24.36",
"id": "91600e16-745a-45bb-8c90-21fa35d1dafa",
"status": "BackedUp",
"volumes": "[{\"uuid\":\"158ffdba-2a15-4da0-91b2-cfa4697cf3b5\",\"type\":\"ROOT\",\"size\":8589934592,\"path\":\"eeeae80c-20fe-4b15-9258-c38302ff30c5\"}]"
}
- KVM2 agent log — Backup 1 restore used pre-migration path:
2026-02-06 09:27:21,110 DEBUG [cloud.agent.Agent] ... [{"org.apache.cloudstack.backup.RestoreBackupCommand":{"vmName":"i-2-6-VM","backupPath":"i-2-6-VM/2026.02.06.09.22.47","backupRepoType":"nfs","backupRepoAddress":"10.0.32.4:/acs/primary/ref-trl-10861-k-Mol9-rositsa-kyuchukova/backup","volumePaths":["/mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/158ffdba-2a15-4da0-91b2-cfa4697cf3b5"],...}}] }
2026-02-06 09:27:21,501 DEBUG [utils.script.Script] ... Executing command [/bin/bash -c rsync -az .../root.158ffdba-2a15-4da0-91b2-cfa4697cf3b5.qcow2 /mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/158ffdba-2a15-4da0-91b2-cfa4697cf3b5 ].
2026-02-06 09:27:45,526 DEBUG [utils.script.Script] ... Successfully executed process [71638] ...
- KVM2 agent log — Backup 2 restore used post-migration path:
2026-02-06 09:30:58,693 DEBUG [cloud.agent.Agent] ... [{"org.apache.cloudstack.backup.RestoreBackupCommand":{"vmName":"i-2-6-VM","backupPath":"i-2-6-VM/2026.02.06.09.24.36","backupRepoType":"nfs","backupRepoAddress":"10.0.32.4:/acs/primary/ref-trl-10861-k-Mol9-rositsa-kyuchukova/backup","volumePaths":["/mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/eeeae80c-20fe-4b15-9258-c38302ff30c5"],...}}] }
2026-02-06 09:30:58,774 DEBUG [utils.script.Script] ... Executing command [/bin/bash -c rsync -az .../root.eeeae80c-20fe-4b15-9258-c38302ff30c5.qcow2 /mnt/b6b70d7b-c86a-3d46-a75b-32e7af66bc26/eeeae80c-20fe-4b15-9258-c38302ff30c5 ].
2026-02-06 09:31:17,191 DEBUG [utils.script.Script] ... Successfully executed process [72022] ...
- VM started successfully after both restores:
(localcloud) 🐱 > list virtualmachines id=3a7e798a-60d5-457b-b4e9-9d8a69fe2f2a filter=id,state
{
"count": 1,
"virtualmachine": [
{
"id": "3a7e798a-60d5-457b-b4e9-9d8a69fe2f2a",
"state": "Running"
}
]
}
Test Result: PASSED
TC7: Restore backup when VM has additional volume not in backup (Edge Case)
Objective
Test the new continue; fix by attempting to restore a backup when the VM has a volume that was attached AFTER the backup was taken.
Test Steps
- Stopped VM (test-tc6) which had backup with only ROOT volume
- Attached DATA volume (DATA-test, a59763c9) to VM
- VM now has 2 volumes: ROOT + DATA
- Backup dd44123e only contains ROOT volume
- Attempted restore from backup dd44123e
Expected Result:
Either: (a) restore succeeds, restoring only ROOT and skipping DATA volume; or (b) restore fails with disk count mismatch validation error.
Actual Result:
Restore failed with error: "Unable to restore VM with the current backup as the backup has different number of disks as the VM"
Analysis:
This is expected behavior. CloudStack validates that the VM has the same number of disks as the backup BEFORE attempting the restore operation. This validation happens in the restoreBackup() method before getVolumePaths() is called.
The new continue; fix in getVolumePaths() addresses a different scenario: when backedVolumes is provided but a specific volume's UUID cannot be matched in the backup metadata. In normal operation with the disk count validation in place, this scenario shouldn't occur. The fix is a defensive coding measure for edge cases where the volume lists might be misaligned.
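For illustration, a sketch of that defensive skip (findBackedUpVolume and toAbsolutePath are hypothetical helpers; this is not the exact patch):
```java
// Illustrative sketch of the defensive continue in getVolumePaths():
// skip any VM volume that has no match in the backup metadata.
for (VolumeVO volume : vmVolumes) {
    Backup.VolumeInfo backedUpVolume = findBackedUpVolume(backedVolumes, volume.getUuid()); // hypothetical helper
    if (backedUpVolume == null) {
        continue; // e.g. a disk attached after the backup was taken
    }
    volumePaths.add(toAbsolutePath(backedUpVolume.getPath())); // hypothetical helper
}
```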
Test Evidence:
- VM volumes after attaching DATA disk (2 volumes):
(localcloud) 🐱 > list volumes virtualmachineid=3a7e798a-60d5-457b-b4e9-9d8a69fe2f2a filter=id,name,type,path,storage
{
"count": 2,
"volume": [
{
"id": "a59763c9-756a-4ed9-ac85-6593fc5f3d1c",
"name": "DATA-test",
"path": "4c29bd27-6850-42e0-97f3-872c03a4de96",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri2",
"type": "DATADISK"
},
{
"id": "158ffdba-2a15-4da0-91b2-cfa4697cf3b5",
"name": "ROOT-6",
"path": "eeeae80c-20fe-4b15-9258-c38302ff30c5",
"storage": "ref-trl-10861-k-Mol9-rositsa-kyuchukova-kvm-pri1",
"type": "ROOT"
}
]
}
- Backup only contains ROOT volume (1 volume):
{
"externalid": "i-2-6-VM/2026.02.06.09.22.47",
"id": "dd44123e-6f89-47fa-afeb-7e10732f0c85",
"status": "BackedUp",
"volumes": "[{\"uuid\":\"158ffdba-2a15-4da0-91b2-cfa4697cf3b5\",\"type\":\"ROOT\",\"size\":8589934592,\"path\":\"158ffdba-2a15-4da0-91b2-cfa4697cf3b5\"}]"
}
- Restore failed with disk count validation:
(localcloud) 🐱 > restore backup id=dd44123e-6f89-47fa-afeb-7e10732f0c85
{
"jobresult": {
"errorcode": 530,
"errortext": "Unable to restore VM with the current backup as the backup has different number of disks as the VM"
},
"jobstatus": 2
}
Test Result: N/A (Expected Validation Behavior)


Description
This PR fixes: #12517