Discussion:
[ovirt-users] After upgrade only 1/3 hosts is running Node 4.4.6
Jayme
2021-05-27 19:36:27 UTC
Permalink
I updated my three-server HCI cluster from 4.4.5 to 4.4.6. All hosts
updated successfully, rebooted, and are active. I noticed that only one
host out of the three is actually running oVirt Node 4.4.6; the other
two are running 4.4.5. If I check for upgrades in the admin GUI, it shows
no upgrades available.

Why are two hosts still running 4.4.5 after being successfully
upgraded/rebooted, and how can I get them onto 4.4.6 if no upgrades are
being found?
wodel youchi
2021-05-27 21:03:08 UTC
Permalink
Hi,

What does "nodectl info" report on all hosts?
Did you execute "Refresh Capabilities" after the update?

Regards.
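
(For reference: "nodectl info" is run as root on each node, and "nodectl check"
is an optional extra sanity check; "Refresh Capabilities" is typically triggered
per host from the Admin Portal under Compute > Hosts. The two commands below are
just that, nothing more.)

# nodectl info
# nodectl check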

Jayme
2021-05-27 21:18:00 UTC
Permalink
It shows the 4.4.5 image on two hosts and 4.4.6 on one. Yum update shows nothing
available, nor does check for upgrade in the admin GUI.

I believe these two hosts failed on the first install attempt and succeeded on the
second, which may have something to do with it. How can I force them to
update to the 4.4.6 image? Would reinstalling the host do it?
Jayme
2021-05-27 21:21:12 UTC
Permalink
The good host:

bootloader:
  default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
  entries:
    ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64):
      index: 0
      kernel: /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/vmlinuz-4.18.0-301.1.el8.x86_64
      args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1 rd.lvm.lv=onn_orchard1/swap rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard img.bootid=ovirt-node-ng-4.4.6.3-0.20210518.0+1
      root: /dev/onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
      initrd: /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/initramfs-4.18.0-301.1.el8.x86_64.img
      title: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
      blsid: ovirt-node-ng-4.4.6.3-0.20210518.0+1-4.18.0-301.1.el8.x86_64
    ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
      index: 1
      kernel: /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
      args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1 rd.lvm.lv=onn_orchard1/swap rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
      root: /dev/onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
      initrd: /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
      title: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
      blsid: ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
layers:
  ovirt-node-ng-4.4.5.1-0.20210323.0:
    ovirt-node-ng-4.4.5.1-0.20210323.0+1
  ovirt-node-ng-4.4.6.3-0.20210518.0:
    ovirt-node-ng-4.4.6.3-0.20210518.0+1
current_layer: ovirt-node-ng-4.4.6.3-0.20210518.0+1


The other two show:

bootloader:
  default: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
  entries:
    ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
      index: 0
      kernel: /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
      args: crashkernel=auto resume=/dev/mapper/onn_orchard2-swap rd.lvm.lv=onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1 rd.lvm.lv=onn_orchard2/swap rhgb quiet boot=UUID=cd9dd412-2acd-4f3d-9b3e-44030153856f rootflags=discard img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
      root: /dev/onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
      initrd: /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
      title: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
      blsid: ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
layers:
  ovirt-node-ng-4.4.5.1-0.20210323.0:
    ovirt-node-ng-4.4.5.1-0.20210323.0+1
current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
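
(Side note: to compare this across all three nodes quickly, a small loop like the
one below can be run from any box with root SSH access to the nodes. The
orchard1/orchard2/orchard3 hostnames are just placeholders guessed from the VG
names above; substitute the real ones.)

for h in orchard1 orchard2 orchard3; do
    echo "== $h =="
    ssh root@$h 'nodectl info | grep -E "default:|current_layer:"'
done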
wodel youchi
2021-05-28 00:31:48 UTC
Permalink
Hi,

On the "bad hosts" try to find if there is/are any 4.4.6 rpm installed, if
yes, try to remove them, then try the update again.

You can try to install the ovirt-node rpm manually, here is the link
https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
# dnf install ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
PS: remember to use tmux if executing via ssh.

Regards.
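
(A rough sketch of that sequence on one affected host, as root; the package name
and URL are the ones quoted above, and the host should be in maintenance before
it reboots into the new image:)

# tmux
# rpm -qa | grep ovirt-node-ng-image-update
# dnf remove ovirt-node-ng-image-update
# dnf install https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
# reboot

After the reboot, "nodectl info" should report current_layer as the 4.4.6 layer.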
Jayme
2021-05-28 00:57:18 UTC
Permalink
# rpm -qa | grep ovirt-node
ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
python3-ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch

I removed ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch, but yum update
and check for updates in the GUI still show no updates available.

I can attempt reinstalling the package tomorrow, but I'm not confident it
will work since it was already installed.
Jayme
2021-05-28 12:52:07 UTC
Permalink
Removing the ovirt-node-ng-image-update package and re-installing it
manually seems to have done the trick. Thanks for pointing me in the right
direction!
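
(For anyone hitting the same issue: after the reinstall and reboot, the new layer
can be confirmed with the command below; the expected value is taken from the
good host's output earlier in the thread.)

# nodectl info | grep current_layer
current_layer: ovirt-node-ng-4.4.6.3-0.20210518.0+1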