Bug 1303728
| Summary: | Seeing VM crash while writing in same RBD Disk from different VMs | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Tanay Ganguly <tganguly> |
| Component: | RBD | Assignee: | Jason Dillaman <jdillama> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | ceph-qe-bugs <ceph-qe-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 1.3.2 | CC: | ceph-eng-bugs, flucifre, jdillama, kdreyer, kurs |
| Target Milestone: | rc | | |
| Target Release: | 1.3.4 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-06-28 15:46:34 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1223652 | | |
| Bug Blocks: | | | |
Description
Tanay Ganguly
2016-02-01 19:09:46 UTC
I discussed this with Jason and Josh, and the outcome is that we want to land a fix in 0.94.7 upstream and give it plenty of testing before pulling it downstream. Technically, this is not a valid use case of RBD exclusive locking: customers should never have two VMs writing to the same RBD image at the same time, and such a setup would certainly cause other issues as well. Jason and Josh also confirmed that there is no way to hit this with a single client, nor during a VM live migration between hypervisors. Since this is a lower-priority bug, re-targeting to RHCS 1.3.3.

This was resolved in RHCS 2.x.