Discussion:
[ovirt-users] Cloning VM on NFS Leads to Locked Disks
Charles Tassell
2017-05-10 14:57:11 UTC
Hi Everyone,

I'm having some issues with my oVirt 4.1 cluster (fully updated to the latest
release as of yesterday). When I clone a VM, the disks of both the original and
the clone stay in the locked state, and the only way I can resolve it is to go
into the database on the engine and run "update images set imagestatus=1 where
imagestatus=2;"

I'm using NFS4 as a datastore, and the disks seem to copy fine (file sizes
match and everything), but the locking worries me. To clone the VM, I just shut
the source VM down, then right-click on it and select "Clone".

I've attached the full VDSM log from my last attempt, but here is an excerpt of
the lines referencing the two disks
(d73206ed-89ba-48a9-82ff-c107c1af60f0 is the original VM's disk and
670a7b20-fecd-45c6-af5c-3c7b98258224 is the clone.)
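
For reference, I pulled these lines out with roughly the grep below, assuming the default
VDSM log location on the host that ran the copy:

# keep only the entries mentioning either disk's image UUID
grep -E 'd73206ed-89ba-48a9-82ff-c107c1af60f0|670a7b20-fecd-45c6-af5c-3c7b98258224' \
  /var/log/vdsm/vdsm.log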

2017-05-10 11:36:20,120-0300 INFO (jsonrpc/2) [dispatcher] Run and
protect: copyImage(sdUUID=u'20423d5e-188c-4e10-9893-588ceb81b354',
spUUID=u'00000001-0001-0001-0001-000000000311', vmUUID='',
srcImgUUID=u'd73206ed-89ba-48a9-82ff-c107c1af60f0',
srcVolUUID=u'bba1813d-9e91-4e37-bed7-40beedda8051',
dstImgUUID=u'670a7b20-fecd-45c6-af5c-3c7b98258224',
dstVolUUID=u'013d8855-4e49-4984-8266-6a5e9437dff7', description=u'',
dstSdUUID=u'20423d5e-188c-4e10-9893-588ceb81b354', volType=8,
volFormat=5, preallocate=2, postZero=u'false', force=u'false',
discard=False) (logUtils:51)
2017-05-10 11:36:20,152-0300 INFO (jsonrpc/2) [storage.Image] image
d73206ed-89ba-48a9-82ff-c107c1af60f0 in domain
20423d5e-188c-4e10-9893-588ceb81b354 has vollist
[u'bba1813d-9e91-4e37-bed7-40beedda8051'] (image:319)
2017-05-10 11:36:20,169-0300 INFO (jsonrpc/2) [storage.Image]
sdUUID=20423d5e-188c-4e10-9893-588ceb81b354
imgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
chain=[<storage.fileVolume.FileVolume object at 0x278e450>] (image:249)
2017-05-10 11:36:20,292-0300 INFO (tasks/0) [storage.Image]
sdUUID=20423d5e-188c-4e10-9893-588ceb81b354
imgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
chain=[<storage.fileVolume.FileVolume object at 0x302c6d0>] (image:249)
2017-05-10 11:36:20,295-0300 INFO (tasks/0) [storage.Image]
sdUUID=20423d5e-188c-4e10-9893-588ceb81b354 vmUUID=
srcImgUUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
srcVolUUID=bba1813d-9e91-4e37-bed7-40beedda8051
dstImgUUID=670a7b20-fecd-45c6-af5c-3c7b98258224
dstVolUUID=013d8855-4e49-4984-8266-6a5e9437dff7
dstSdUUID=20423d5e-188c-4e10-9893-588ceb81b354 volType=8 volFormat=RAW
preallocate=SPARSE force=False postZero=False discard=False (image:765)
2017-05-10 11:36:20,305-0300 INFO (tasks/0) [storage.Image] copy source
20423d5e-188c-4e10-9893-588ceb81b354:d73206ed-89ba-48a9-82ff-c107c1af60f0:bba1813d-9e91-4e37-bed7-40beedda8051
vol size 41943040 destination
20423d5e-188c-4e10-9893-588ceb81b354:670a7b20-fecd-45c6-af5c-3c7b98258224:013d8855-4e49-4984-8266-6a5e9437dff7
apparentsize 41943040 (image:815)
2017-05-10 11:36:20,306-0300 INFO (tasks/0) [storage.Image] image
670a7b20-fecd-45c6-af5c-3c7b98258224 in domain
20423d5e-188c-4e10-9893-588ceb81b354 has vollist [] (image:319)
2017-05-10 11:36:20,306-0300 INFO (tasks/0) [storage.Image] Create
placeholder
/rhev/data-center/00000001-0001-0001-0001-000000000311/20423d5e-188c-4e10-9893-588ceb81b354/images/670a7b20-fecd-45c6-af5c-3c7b98258224
for image's volumes (image:149)
2017-05-10 11:36:20,392-0300 INFO (tasks/0) [storage.Volume] Request to
create RAW volume
/rhev/data-center/00000001-0001-0001-0001-000000000311/20423d5e-188c-4e10-9893-588ceb81b354/images/670a7b20-fecd-45c6-af5c-3c7b98258224/013d8855-4e49-4984-8266-6a5e9437dff7
with size = 20480 sectors (fileVolume:439)
2017-05-10 11:37:58,453-0300 INFO (tasks/0) [storage.VolumeManifest]
sdUUID=20423d5e-188c-4e10-9893-588ceb81b354
imgUUID=670a7b20-fecd-45c6-af5c-3c7b98258224 volUUID =
013d8855-4e49-4984-8266-6a5e9437dff7 legality = LEGAL (volume:393)
Fred Rolland
2017-05-11 07:30:05 UTC
Hi,

Can you provide the engine log?

Thanks,

Freddy
Post by Charles Tassell
[snip]
Charles Tassell
2017-05-11 13:09:14 UTC
Hi Freddy,

Sure, thanks for looking into this. Here you go:

2017-05-10 11:35:30,249-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler8) [1c84abac] Not removing session
'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==',
session has running commands for user '***@internal-authz'.
2017-05-10 11:35:30,249-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler8) [1c84abac] Not removing session
'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==',
session has running commands for user '***@internal-authz'.
2017-05-10 11:35:37,793-03 INFO
[org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default
task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] Lock Acquired to object
'EngineLock:{exclusiveLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=<VM_DISK_BOOT,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=<VM,
ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2017-05-10 11:35:37,843-03 INFO
[org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default
task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] Running command:
UpdateVmDiskCommand internal: false. Entities affected : ID:
d73206ed-89ba-48a9-82ff-c107c1af60f0 Type: DiskAction group
EDIT_DISK_PROPERTIES with role type USER
2017-05-10 11:35:37,846-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] START,
SetVolumeDescriptionVDSCommand(
SetVolumeDescriptionVDSCommandParameters:{runAsync='true',
storagePoolId='00000001-0001-0001-0001-000000000311',
ignoreFailoverLimit='false',
storageDomainId='20423d5e-188c-4e10-9893-588ceb81b354',
imageGroupId='d73206ed-89ba-48a9-82ff-c107c1af60f0',
imageId='bba1813d-9e91-4e37-bed7-40beedda8051'}), log id: 31e5d891
2017-05-10 11:35:37,846-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] --
executeIrsBrokerCommand: calling 'setVolumeDescription', parameters:
2017-05-10 11:35:37,846-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++
spUUID=00000001-0001-0001-0001-000000000311
2017-05-10 11:35:37,846-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++
sdUUID=20423d5e-188c-4e10-9893-588ceb81b354
2017-05-10 11:35:37,846-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++
imageGroupGUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
2017-05-10 11:35:37,846-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++
volUUID=bba1813d-9e91-4e37-bed7-40beedda8051
2017-05-10 11:35:37,846-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] ++
description={"DiskAlias":"Ubu16_Disk1","DiskDescription":""}
2017-05-10 11:35:37,937-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] FINISH,
SetVolumeDescriptionVDSCommand, log id: 31e5d891
2017-05-10 11:35:37,981-03 INFO
[org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default
task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] Lock freed to object
'EngineLock:{exclusiveLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=<VM_DISK_BOOT,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='[6d51dd95-ba25-4ae9-8bee-303f90e76e62=<VM,
ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2017-05-10 11:35:37,988-03 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-11) [2c63ae40-b59d-4a80-b348-6c9b9fa176b8] EVENT_ID:
USER_UPDATE_VM_DISK(88), Correlation ID:
2c63ae40-b59d-4a80-b348-6c9b9fa176b8, Call Stack: null, Custom Event ID:
-1, Message: VM Ubu16 Xenial_Disk1 disk was updated by ***@internal-authz.
2017-05-10 11:36:19,931-03 INFO
[org.ovirt.engine.core.bll.CloneVmCommand] (default task-15)
[6c38dbf9-6c8f-4390-979f-4031b5b46c98] Lock Acquired to object
'EngineLock:{exclusiveLocks='[chUbu16Clone=<VM_NAME,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='[d73206ed-89ba-48a9-82ff-c107c1af60f0=<DISK,
ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName chUbu16Clone>,
6d51dd95-ba25-4ae9-8bee-303f90e76e62=<VM,
ACTION_TYPE_FAILED_VM_IS_BEING_CLONED>]'}'
2017-05-10 11:36:19,981-03 INFO
[org.ovirt.engine.core.bll.CloneVmCommand] (default task-15)
[6c38dbf9-6c8f-4390-979f-4031b5b46c98] Running command: CloneVmCommand
internal: false. Entities affected : ID:
6d51dd95-ba25-4ae9-8bee-303f90e76e62 Type: VMAction group CREATE_VM with
role type USER
2017-05-10 11:36:20,052-03 INFO
[org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand]
(default task-15) [78d33cee] Running command: CopyImageGroupCommand
internal: true. Entities affected : ID:
20423d5e-188c-4e10-9893-588ceb81b354 Type: Storage
2017-05-10 11:36:20,126-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] START, CopyImageVDSCommand(
CopyImageVDSCommandParameters:{runAsync='true',
storagePoolId='00000001-0001-0001-0001-000000000311',
ignoreFailoverLimit='false',
storageDomainId='20423d5e-188c-4e10-9893-588ceb81b354',
imageGroupId='d73206ed-89ba-48a9-82ff-c107c1af60f0',
imageId='bba1813d-9e91-4e37-bed7-40beedda8051',
dstImageGroupId='670a7b20-fecd-45c6-af5c-3c7b98258224',
vmId='e470d257-6e3a-4989-a0ec-3b10c3ca808c',
dstImageId='013d8855-4e49-4984-8266-6a5e9437dff7', imageDescription='',
dstStorageDomainId='20423d5e-188c-4e10-9893-588ceb81b354',
copyVolumeType='LeafVol', volumeFormat='RAW', preallocate='Sparse',
postZero='false', discard='false', force='false'}), log id: 3477da84
2017-05-10 11:36:20,126-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] -- executeIrsBrokerCommand: calling 'copyImage' with
two new parameters: description and UUID. Parameters:
2017-05-10 11:36:20,126-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] ++ sdUUID=20423d5e-188c-4e10-9893-588ceb81b354
2017-05-10 11:36:20,126-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] ++ spUUID=00000001-0001-0001-0001-000000000311
2017-05-10 11:36:20,126-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] ++ vmGUID=e470d257-6e3a-4989-a0ec-3b10c3ca808c
2017-05-10 11:36:20,126-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] ++ srcImageGUID=d73206ed-89ba-48a9-82ff-c107c1af60f0
2017-05-10 11:36:20,126-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] ++ srcVolUUID=bba1813d-9e91-4e37-bed7-40beedda8051
2017-05-10 11:36:20,126-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] ++ dstImageGUID=670a7b20-fecd-45c6-af5c-3c7b98258224
2017-05-10 11:36:20,126-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] ++ dstVolUUID=013d8855-4e49-4984-8266-6a5e9437dff7
2017-05-10 11:36:20,126-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] ++ descr=
2017-05-10 11:36:20,126-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] ++ dstSdUUID=20423d5e-188c-4e10-9893-588ceb81b354
2017-05-10 11:36:21,014-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default
task-15) [78d33cee] FINISH, CopyImageVDSCommand, return:
de21abbb-5ec9-4f03-886d-c853abcc80ec, log id: 3477da84
2017-05-10 11:36:21,034-03 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-15)
[78d33cee] CommandAsyncTask::Adding CommandMultiAsyncTasks object for
command 'f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f'
2017-05-10 11:36:21,034-03 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-15)
[78d33cee] CommandMultiAsyncTasks::attachTask: Attaching task
'de21abbb-5ec9-4f03-886d-c853abcc80ec' to command
'f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f'.
2017-05-10 11:36:21,068-03 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-15)
[78d33cee] Adding task 'de21abbb-5ec9-4f03-886d-c853abcc80ec' (Parent
Command 'CopyImageGroup', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling
hasn't started yet..
2017-05-10 11:36:21,089-03 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-15)
[78d33cee] BaseAsyncTask::startPollingTask: Starting to poll task
'de21abbb-5ec9-4f03-886d-c853abcc80ec'.
2017-05-10 11:36:21,164-03 INFO
[org.ovirt.engine.core.bll.AddGraphicsDeviceCommand] (default task-15)
[1270fe1f] Running command: AddGraphicsDeviceCommand internal: true.
Entities affected : ID: e470d257-6e3a-4989-a0ec-3b10c3ca808c Type:
VMAction group EDIT_VM_PROPERTIES with role type USER
2017-05-10 11:36:21,241-03 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-15) [1270fe1f] EVENT_ID: USER_ADD_VM(34), Correlation ID:
6c38dbf9-6c8f-4390-979f-4031b5b46c98, Job ID:
3320698b-6507-41e0-bd80-8d3269be0193, Call Stack: null, Custom Event ID:
-1, Message: VM chUbu16Clone was created by ***@internal-authz.
2017-05-10 11:36:22,539-03 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(DefaultQuartzScheduler9) [29c890f1] Polling and updating Async Tasks: 1
tasks, 1 tasks to poll now
2017-05-10 11:36:23,059-03 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler9)
[29c890f1] SPMAsyncTask::PollTask: Polling task
'de21abbb-5ec9-4f03-886d-c853abcc80ec' (Parent Command 'CopyImageGroup',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned
status 'running'.
2017-05-10 11:36:23,059-03 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(DefaultQuartzScheduler9) [29c890f1] Finished polling Tasks, will poll
again in 10 seconds.
2017-05-10 11:36:30,269-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler4) [62c29fef] Not removing session
'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==',
session has running commands for user '***@internal-authz'.
2017-05-10 11:36:30,269-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler4) [62c29fef] Not removing session
'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==',
session has running commands for user '***@internal-authz'.
2017-05-10 11:36:35,685-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand]
(org.ovirt.thread.pool-6-thread-49) [2ee2229a] START,
GetFileStatsVDSCommand( GetFileStatsParameters:{runAsync='true',
storagePoolId='00000001-0001-0001-0001-000000000311',
ignoreFailoverLimit='false'}), log id: 7559896f
2017-05-10 11:36:37,089-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand]
(org.ovirt.thread.pool-6-thread-49) [2ee2229a] FINISH,
GetFileStatsVDSCommand, return:
{ubuntu-16.04.2-server-amd64.iso={status=0, ctime=1492046787.0,
size=869269504}, systemrescuecd-x86-4.9.5.iso={status=0,
ctime=1492048781.0, size=520032256},
ubuntu-14.04.5-server-amd64.iso={status=0, ctime=1492130650.0,
size=649068544}}, log id: 7559896f
2017-05-10 11:36:37,146-03 INFO
[org.ovirt.engine.core.bll.storage.domain.IsoDomainListSynchronizer]
(org.ovirt.thread.pool-6-thread-49) [2ee2229a] Finished automatic
refresh process for 'ISO' file type with success, for storage domain id
'6154f231-195b-46cd-a108-a0e42d360145'.
2017-05-10 11:37:30,289-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler10) [493b7b3a] Not removing session
'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==',
session has running commands for user '***@internal-authz'.
2017-05-10 11:37:30,289-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler10) [493b7b3a] Not removing session
'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==',
session has running commands for user '***@internal-authz'.
2017-05-10 11:38:09,306-03 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler6)
[5a3af47c] SPMAsyncTask::PollTask: Polling task
'de21abbb-5ec9-4f03-886d-c853abcc80ec' (Parent Command 'CopyImageGroup',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned
status 'finished', result 'success'.
2017-05-10 11:38:09,310-03 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler6)
[5a3af47c] BaseAsyncTask::onTaskEndSuccess: Task
'de21abbb-5ec9-4f03-886d-c853abcc80ec' (Parent Command 'CopyImageGroup',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
successfully.
2017-05-10 11:38:09,310-03 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(DefaultQuartzScheduler6) [5a3af47c]
CommandAsyncTask::endActionIfNecessary: All tasks of command
'f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f' has ended -> executing 'endAction'
2017-05-10 11:38:09,310-03 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(DefaultQuartzScheduler6) [5a3af47c] CommandAsyncTask::endAction: Ending
action for '1' tasks (command ID:
'f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f'): calling endAction '.
2017-05-10 11:38:09,311-03 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-34) [5a3af47c]
CommandAsyncTask::endCommandAction [within thread] context: Attempting
to endAction 'CopyImageGroup',
2017-05-10 11:38:09,325-03 INFO
[org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand]
(org.ovirt.thread.pool-6-thread-34) [78d33cee] Command
[id=f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f]: Updating status to
'SUCCEEDED', The command end method logic will be executed by one of its
parent commands.
2017-05-10 11:38:09,325-03 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-34) [78d33cee]
CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
action type 'CopyImageGroup' completed, handling the result.
2017-05-10 11:38:09,325-03 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-34) [78d33cee]
CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
action type 'CopyImageGroup' succeeded, clearing tasks.
2017-05-10 11:38:09,325-03 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-6-thread-34) [78d33cee]
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'de21abbb-5ec9-4f03-886d-c853abcc80ec'
2017-05-10 11:38:09,326-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-34) [78d33cee] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{runAsync='true',
storagePoolId='00000001-0001-0001-0001-000000000311',
ignoreFailoverLimit='false',
taskId='de21abbb-5ec9-4f03-886d-c853abcc80ec'}), log id: 3ae0372a
2017-05-10 11:38:09,326-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-34) [78d33cee] START,
HSMClearTaskVDSCommand(HostName = ovirt730-01,
HSMTaskGuidBaseVDSCommandParameters:{runAsync='true',
hostId='f9519bb1-7b11-4a7b-98d8-72f606288e7c',
taskId='de21abbb-5ec9-4f03-886d-c853abcc80ec'}), log id: 26278f86
2017-05-10 11:38:10,313-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-34) [78d33cee] FINISH,
HSMClearTaskVDSCommand, log id: 26278f86
2017-05-10 11:38:10,313-03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(org.ovirt.thread.pool-6-thread-34) [78d33cee] FINISH,
SPMClearTaskVDSCommand, log id: 3ae0372a
2017-05-10 11:38:10,316-03 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-6-thread-34) [78d33cee]
BaseAsyncTask::removeTaskFromDB: Removed task
'de21abbb-5ec9-4f03-886d-c853abcc80ec' from DataBase
2017-05-10 11:38:10,316-03 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-34) [78d33cee]
CommandAsyncTask::HandleEndActionResult [within thread]: Removing
CommandMultiAsyncTasks object for entity
'f273e5fe-0d77-47c0-a5b1-8dc8f16dde9f'
2017-05-10 11:38:30,304-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler9) [29c890f1] Not removing session
'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==',
session has running commands for user '***@internal-authz'.
2017-05-10 11:38:30,304-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler9) [29c890f1] Not removing session
'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==',
session has running commands for user '***@internal-authz'.
2017-05-10 11:39:26,248-03 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(DefaultQuartzScheduler9) [29c890f1] Setting new tasks map. The map
contains now 0 tasks
2017-05-10 11:39:26,248-03 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(DefaultQuartzScheduler9) [29c890f1] Cleared all tasks of pool
'00000001-0001-0001-0001-000000000311'.
2017-05-10 11:39:30,318-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler9) [29c890f1] Not removing session
'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==',
session has running commands for user '***@internal-authz'.
2017-05-10 11:39:30,318-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler9) [29c890f1] Not removing session
'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==',
session has running commands for user '***@internal-authz'.
2017-05-10 11:40:30,333-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler3) [5bdc6552] Not removing session
'vZyqrcCljPC7hQtcILsk4uDug3QsiinZBOyoGDiQKkYYT2znGyWe4fasrPbjYxdjbfyR3DBnp+UZ9/k20dGsMA==',
session has running commands for user '***@internal-authz'.
2017-05-10 11:40:30,333-03 INFO
[org.ovirt.engine.core.bll.aaa.SessionDataContainer]
(DefaultQuartzScheduler3) [5bdc6552] Not removing session
'nLzKb6P4Th3A9zUFNNZYuZgWaL/DraYRfQjOvjmXisa9c3SiI+Sv8xvLZ+n2+9ptKPHqaiw0oNCOG6kDWOaszA==',
Post by Fred Rolland
[snip]
Charles Tassell
2017-05-11 18:30:24 UTC
Just as an update, I created a new VM and had the same issue: the disk
remains locked. So I then added a new data store (this one iSCSI, not
NFS) and created a new VM on that. Again, the disk remains locked. So
the problem seems to be that any action that tries to modify a disk image
on my cluster locks the disk and keeps it locked permanently.

I tried restarting the vdsm daemon, but that didn't make a difference.
I'm seeing this in my sanlock.log file though, which worries me:


2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202
delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202
delta_length 10 last_success 1738508

Here are the last 20 lines of sanlock.log:
2017-05-07 07:51:41-0300 1738538 [13580]: s3 renewal error -202
delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [13575]: 20423d5e aio timeout RD
0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 ioto 10 to_count 67
2017-05-07 07:51:41-0300 1738538 [13575]: s2 delta_renew read timeout 10
sec offset 0
/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/ids
2017-05-07 07:51:41-0300 1738538 [13575]: s2 renewal error -202
delta_length 10 last_success 1738508
2017-05-07 07:51:41-0300 1738538 [11513]: hosted-e aio timeout RD
0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 ioto 10 to_count 65
2017-05-07 07:51:41-0300 1738538 [11513]: s1 delta_renew read timeout 10
sec offset 0
/var/run/vdsm/storage/5dccd07d-a923-4d4b-9cb1-3b51ebfdca4d/5a9c284f-0faa-4a25-94ce-c9efdae07484/ab2443f1-95ed-475d-886c-c1653257cf04
2017-05-07 07:51:41-0300 1738538 [11513]: s1 renewal error -202
delta_length 10 last_success 1738508
2017-05-07 07:51:47-0300 1738544 [13575]: 20423d5e aio collect RD
0x7fe1440008c0:0x7fe1440008d0:0x7fe160255000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [13580]: 5dccd07d aio collect RD
0x7fe13c0008c0:0x7fe13c0008d0:0x7fe14e5fa000 result 1048576:0 match reap
2017-05-07 07:51:47-0300 1738544 [11513]: hosted-e aio collect RD
0x7fe1480008c0:0x7fe1480008d0:0x7fe14e6fc000 result 1048576:0 match reap
2017-05-07 07:53:57-0300 1738674 [13590]: s2:r15 resource
20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,78395
2017-05-07 07:59:49-0300 1739027 [13575]: s2 delta_renew long write time
10 sec
2017-05-09 08:38:34-0300 1914151 [13590]: s2:r16 resource
20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,78395
2017-05-11 15:07:45-0300 2110302 [13590]: s2:r17 resource
20423d5e-188c-4e10-9893-588ceb81b354:SDM:/rhev/data-center/mnt/192.168.130.217:_media_ovirt/20423d5e-188c-4e10-9893-588ceb81b354/dom_md/leases:1048576
for 7,21,112346
2017-05-11 15:17:24-0300 2110881 [13590]: s4 lockspace
b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:17:45-0300 2110902 [1395]: s4 host 1 1 2110881
44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:17:45-0300 2110902 [1400]: s4:r18 resource
b010093e-1924-46e1-bd57-2cf2b2445087:SDM:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/leases:1048576
for 7,21,112346
2017-05-11 15:17:52-0300 2110909 [1399]: s5 lockspace
b010093e-1924-46e1-bd57-2cf2b2445087:1:/dev/b010093e-1924-46e1-bd57-2cf2b2445087/ids:0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 1 2 2110909
44ae07eb-3371-4750-8728-ab3b049dbae2.ovirt730-0
2017-05-11 15:18:13-0300 2110930 [1395]: s5 host 2 1 2110065
4d31313f-b2dd-4368-bf31-d39835e10afb.ovirt730-0
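
In case it is useful, this is roughly how I have been checking sanlock on the host (stock
sanlock tooling, nothing oVirt-specific; the log path is the default one):

# current lockspaces and resources held by this host
sanlock client status
# count how often lease renewals have been timing out
grep -c 'renewal error' /var/log/sanlock.log
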
Post by Charles Tassell
[snip]
Fred Rolland
2017-05-11 19:08:55 UTC
The locking is on the engine side, and restarting VDSM will not affect it.
Can you send the whole engine log?
Which exact version are you using?
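
In the meantime, a read-only query like the one below on the engine database should show
whether any images are still flagged as locked (this is just a sketch; column names are
from memory, so adjust as needed):

su - postgres -c 'psql -d engine -c "SELECT image_guid, imagestatus FROM images WHERE imagestatus = 2;"'
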
Post by Charles Tassell
[snip]
Charles Tassell
2017-05-11 19:29:46 UTC
Sure. It's pretty big, so I've put it online for download at
http://krissy.islandadmin.ca/public/engine.log.txt
Post by Fred Rolland
[snip]
Fred Rolland
2017-07-02 14:32:19 UTC
It seems you hit [1].
Can you tell me the exact version of the engine?

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1395793
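
Something like this on the engine machine will give the exact build, so we can check it
against the releases that carry the fix:

rpm -q ovirt-engine
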
Post by Charles Tassell
[snip]
Charles Tassell
2017-07-07 01:14:09 UTC
Hi Fred,

Unfortunately, we reinstalled the cluster a while ago, so I don't know
the exact version we were running at the time. It would have been the
latest version in the 4.1 production repos, though, if that helps.
Post by Fred Rolland
[snip]