Description of problem:

Opening the VM Portal briefly shows 8 VMs (most likely the first page), then crashes. Restoring the customer database and navigating to the VM Portal is enough to reproduce it. I tried to copy most of the permissions and create a VM pool similar to the customer's in our labs, but could not reproduce it on the same versions.

Browser console output at the time of the crash:

debug http GET[53] -> url: "/ovirt-engine/api/vms/;max=8?search=SORTBY NAME ASC page 3&follow=graphics_consoles", headers: {"Accept":"application/json","Authorization":"*****","Accept-Language":"en_US","Filter":false}  transport.js:73:10
debug Reducing action: {"type":"PERSIST_STATE"}  utils.js:47:14
debug persistStateToLocalStorage() called  storage.js:18:10
debug Reducing action: {"type":"UPDATE_VMPOOLS_COUNT"}  utils.js:47:14
warn No translation for enum item "VmStatus.undefined" found.  index.js:177:10
error TypeError: "n is undefined"
    default  index.js:323
    Redux 8
    qt  react-dom.production.min.js:132
    Dn  react-dom.production.min.js:167
    Rn  react-dom.production.min.js:180
    wr  react-dom.production.min.js:232
    Sr  react-dom.production.min.js:233
    zr  react-dom.production.min.js:249
    Yr  react-dom.production.min.js:248
    jr  react-dom.production.min.js:245
    Or  react-dom.production.min.js:243
    enqueueSetState  react-dom.production.min.js:130
    setState
    React Redux 12
    v  middleware.js:25
    Redux 14
    jQuery 2

That index.js:323 seems to be this:

export default withRouter(
  connect(
    (state, { vm }) => ({
      isEditable: vm.get('canUserEditVm') &&
        state.clusters.find(cluster => cluster.get('canUserUseCluster')) !== undefined,
      config: state.config,
    }),

But this code does not seem to have changed in a long time, and the customer reports it was working before the latest upgrade.

Version-Release number of selected component (if applicable):
ovirt-web-ui-1.6.0-1.el7ev.noarch
rhvm-4.3.6.7-0.1.el7.noarch

How reproducible:
Only with the customer database so far.
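For reference, a minimal sketch of the kind of null guard that would avoid a TypeError at that spot. It assumes the crash comes from vm or a cluster entry being briefly undefined in the store while the running pool VM is merged in; that assumption, the placeholder component name, and the imports are mine, not the confirmed root cause or fix:

import { connect } from 'react-redux'
import { withRouter } from 'react-router-dom'

// Placeholder stand-in for the real component exported at index.js:323.
const VmComponentPlaceholder = () => null

export default withRouter(
  connect(
    (state, { vm }) => ({
      // Guard both the VM prop and each cluster before dereferencing them, so an
      // undefined entry cannot surface as the minified "n is undefined" TypeError.
      isEditable:
        Boolean(vm && vm.get('canUserEditVm')) &&
        state.clusters.find(cluster => cluster && cluster.get('canUserUseCluster')) !== undefined,
      config: state.config,
    })
  )(VmComponentPlaceholder)
)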
Please note that this issue is reproduced on ovirt-web-ui master as well.
Looks to be the same issue as https://github.com/oVirt/ovirt-web-ui/issues/1128.
Hi, I don't have an exact location in the code where this is going wrong, but I can share that in our environment the key to reproducing this is having at least one pool VM in an Up/Running state when you log into the VM Portal: the screen briefly displays all VMs in the pool and then you get the "Sorry, VM Portal is currently having some issues" screen. If no VM in any pool is Up/Running, you can log in and see the VMs in the VM Portal. We haven't done anything to adjust permissions that I'm aware of.

- If you log into the portal with all VMs in a given pool in a Down/Off state, things work fine.
- If VMs that are not in a pool are in an Up/Running state, things still work fine.
- If you start a VM that is in a pool, it goes to a Running state in the portal and things still work fine.
- If you refresh the portal screen after starting that VM, you will get the error behavior.
- If you log out and try to log back in, you see all the VMs for an instant and then get the error behavior.

I hope this helps narrow down the issue. Thanks, Pete
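One way to see what the portal actually receives in this state is to replay the REST call from the console log in the description. This is a sketch, not product code; ENGINE_URL and TOKEN are placeholders, and the response shape (a top-level "vm" array with a status string per VM) is my assumption about the JSON representation:

// Replay the VM listing request the portal logs (first page, JSON, no admin filtering),
// then print each VM's name and status to spot a pool VM arriving in an odd state.
const ENGINE_URL = 'https://engine.example.com'        // placeholder engine address
const TOKEN = '<session token of the affected user>'   // placeholder credentials

fetch(`${ENGINE_URL}/ovirt-engine/api/vms/;max=8?search=SORTBY NAME ASC page 1&follow=graphics_consoles`, {
  headers: {
    Accept: 'application/json',
    Authorization: `Bearer ${TOKEN}`,
    'Accept-Language': 'en_US',
    Filter: 'false',
  },
})
  .then(response => response.json())
  .then(body => {
    (body.vm || []).forEach(vm => console.log(vm.name, vm.status))
  })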
And, as noted in https://github.com/oVirt/ovirt-web-ui/issues/1128, this only occurs if the user has the SuperUser privilege. If you remove the SuperUser privilege, the user can log into the portal and view all the VMs in the pool even when one or more of them is in the Running state.
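That lines up with the "VmStatus.undefined" warning logged right before the crash: a running pool VM seen by a SuperUser apparently reaches the UI with a status the portal has no mapping for. A minimal sketch of that failure mode follows; the function and table names are hypothetical illustrations, not ovirt-web-ui internals or the fix that shipped:

// Hypothetical status-label lookup: without a fallback, an unmapped status yields
// undefined, and whatever consumes that value later can throw.
const vmStatusMessages = {
  'VmStatus.up': 'Running',
  'VmStatus.down': 'Off',
  // ... other mapped statuses
}

function vmStatusLabel (status) {
  const key = `VmStatus.${status}`
  const label = vmStatusMessages[key]
  if (label === undefined) {
    console.warn(`No translation for enum item "${key}" found.`)
    return String(status)  // fall back to the raw value instead of propagating undefined
  }
  return label
}

console.log(vmStatusLabel('up'))        // "Running"
console.log(vmStatusLabel(undefined))   // warns, then returns "undefined" as a visible fallback label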
Too late for a backport and there is still no solution; dropping the zstream request.
Verified in:
ovirt-engine-4.4.0-0.31.master.el8ev.noarch
ovirt-web-ui-1.6.1-0.20200228.git5b3b4e0.el8ev.noarch

VM Portal no longer crashes when a SuperUser takes or starts a pool VM, or displays/reloads the VM list with a running pool VM.
Tested in browsers: Firefox 74.0.1, Chromium 80
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247