+++ This bug is a downstream clone. The original bug is: +++
+++ bug 1510856 +++
======================================================================

Description of problem:
Thanks to BZ#1156194, when a VM is suspended and later restored, the time is synced. The request is to extend the automatic time sync to cover VMs resuming from the Paused state as well.

Version-Release number of selected component (if applicable):
RHEV 4.1.x

How reproducible:
Always

Steps to Reproduce:
1. Turn on a VM with the guest agent enabled.
2. Create a transient storage issue, or let a high-impact IO operation happen (like a snapshot with memory for a huge VM).
3. The VM goes into the Paused state.
4. Solve the issue, or wait for the IO operation to complete.

Actual results:
The time is misaligned.

Expected results:
RHV should force a time sync, as it does when the VM resumes from the Suspended state.

Additional info:
This RFE is critical for time-sensitive workloads running on RHV (transactional apps and so on).

(Originally by Andrea Perotti)
Possible, but I'm not sure whether it needs to be configurable or not. (Originally by michal.skrivanek)
Can you add more details about the actual scenario? Even when a VM pauses due to drive extension it shouldn't really take too much time, no more than a few seconds, which is better handled by NTP inside the guest rather than by abrupt clock changes done externally. (Originally by michal.skrivanek)
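For reference, letting the guest-side time daemon absorb a post-pause offset usually means allowing it to step the clock rather than only slew it. A minimal sketch for chrony is below (assumptions: chrony is the guest's time daemon, and the 1-second threshold is illustrative; tune it for the workload):

```
# /etc/chrony.conf fragment (guest side, illustrative values)
# Step the clock whenever the measured offset exceeds 1.0 second,
# at any point after startup (-1 = no limit on which updates may step).
makestep 1.0 -1
```

This is the "constant aggressive configuration" trade-off discussed in this thread: stepping at any time recovers quickly from a long pause, but time-sensitive apps may prefer never to see backward or large forward jumps.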
The scenario we are talking about is a very time-sensitive, in-memory app, like a JBoss Data Grid installation, running on a VM with a huge amount of RAM. Dealing with a pause of that VM *can* be worked around with NTP, but that requires a constantly aggressive configuration of the tool, while having the same behaviour for paused VMs as for suspended VMs can be more practical for some users. Eventually this could be made configurable, e.g. setting after how many seconds in the Paused state RHV should enforce the clock change. (Originally by Andrea Perotti)
(In reply to Andrea Perotti from comment #4)
> The scenario we are talking about is very time sensitive app, an in-memory
> app, like a jboss datagrid installation, running on VM with huge amount of
> RAM.
>
> Dealing with a pause of that VM *can* be worked out with ntp, but require a
> constant aggressive configuration of the tool, while having the same
> behaviour for paused like for suspended VMs can be more practical for some
> users.

If it is a time-sensitive app, wouldn't it be better to avoid ENOSPC paused states in the first place? Either bigger allocation chunks, a lower watermark so the drive extension starts sooner, a bigger initial size for the thin-provisioned disk, etc.

> Eventually this can be make configurable, like, setting after how many
> seconds of pause state RHV should enforce the clock changes.

Creating a config option to do a time sync after resume from pause is feasible. I would still leave it off by default, though. Implementing a configurable interval is more complicated and will delay this RFE, but if that's required it's doable too. Still, before starting on this I believe we should check whether we are really solving the right thing; making VMs not pause in the first place might make more sense.

(Originally by michal.skrivanek)
(In reply to Michal Skrivanek from comment #5)
> if it is a time sensitive app, wouldn't it be better to avoid ENOSPC paused
> states? Either bigger allocation chunks, lower watermark so it starts
> extending the drive sooner, bigger initial size of the thin provisioned
> disk, etc.

The customer is triggering this event when taking a full snapshot of the VM including memory, but transient storage connectivity issues can also lead to a Pause.

> Creating a config option to do a time sync after resume from pause is
> feasible. I would still leave it off by default though.

I think just having it would be good enough for my customer, and having it now is more important than having it perfectly configurable.

(Originally by Andrea Perotti)
Overall I believe that we should address the reason for the pausing in the first place. If it happens during snapshots, we should probably also check whether we can get rid of that pausing. (Originally by Martin Tessun)
Perhaps the after_vm_cont hook can be used? Hopefully it does not suffer from the same problem as the after_vm_pause hook in bug 1543103. Other than that, the solution could look similar to OpenStack's: https://review.openstack.org/#/c/316116/ (Originally by michal.skrivanek)
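To make the hook idea concrete, here is a minimal sketch of what an after_vm_cont hook could look like. Everything here is an assumption for illustration, not the actual implementation: the script is imagined to live in VDSM's after_vm_cont hook directory, the VM name is imagined to arrive via an environment variable, and the guest is assumed to run the qemu guest agent so the clock can be stepped with `virsh qemu-agent-command`.

```python
#!/usr/bin/python3
# Hypothetical sketch of a VDSM after_vm_cont hook that steps the guest
# clock to the host's current time right after the VM resumes from pause.
# All names and the virsh-based delivery are illustrative assumptions.
import json
import os
import subprocess
import time


def guest_set_time_command(now_ns=None):
    """Build the qemu-guest-agent 'guest-set-time' command payload.

    guest-set-time takes the target time in nanoseconds since the epoch.
    """
    if now_ns is None:
        now_ns = time.time_ns()
    return json.dumps({"execute": "guest-set-time",
                       "arguments": {"time": now_ns}})


def sync_guest_clock(vm_name):
    """Deliver the command to the guest agent via virsh (needs libvirt)."""
    subprocess.run(
        ["virsh", "qemu-agent-command", vm_name, guest_set_time_command()],
        check=True)


if __name__ == "__main__":
    # Hypothetical: VDSM hooks expose VM metadata through the environment.
    vm_name = os.environ.get("vmName")
    if vm_name:
        sync_guest_clock(vm_name)
```

Calling guest-set-time with the host's current time steps the guest clock in one jump after the resume, which avoids the slow NTP catch-up discussed earlier in this thread.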
I'll add another large application to this one too. In this case, storage goes offline, the VM pauses, storage comes back, and the VM resumes, but now its clock is way off, and that sets off a whole bad cascade of events. Nothing can be done about the storage going offline, and pausing the VM is the correct action when that happens. If we can inject the correct time into that VM when it resumes, we can make lots of people happy. - Greg (Originally by Greg Scott)
Verified upstream: ovirt-engine-4.2.6.5-0.0.master.20180831090131.git1d64d4c.el7.noarch vdsm-4.20.39-6.git00d5340.el7.x86_64
QE verification bot: the bug was verified upstream
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:3478