Bug 1303728 - Seeing VM crash while writing in same RBD Disk from different VMs
Status: CLOSED CURRENTRELEASE
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RBD
Version: 1.3.2
Hardware: x86_64 Linux
Priority: unspecified  Severity: high
Target Milestone: rc
Target Release: 1.3.4
Assigned To: Jason Dillaman
QA Contact: ceph-qe-bugs
Depends On: 1223652
Blocks:
Reported: 2016-02-01 14:09 EST by Tanay Ganguly
Modified: 2017-07-30 11:36 EDT
CC: 5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-06-28 11:46:34 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
Ceph Project Bug Tracker 14595 None None None 2016-02-01 18:12 EST

Description Tanay Ganguly 2016-02-01 14:09:46 EST
Description of problem:
While writing to the same RBD image from two VMs at the same time, the VMs crash.

Version-Release number of selected component (if applicable):
rpm -qa | grep ceph
ceph-common-0.94.5-4.el7cp.x86_64
ceph-osd-0.94.5-4.el7cp.x86_64
ceph-0.94.5-4.el7cp.x86_64
ceph-radosgw-0.94.5-4.el7cp.x86_64
ceph-debuginfo-0.94.5-4.el7cp.x86_64
ceph-selinux-0.94.5-4.el7cp.x86_64

SELinux is set to enforcing.

How reproducible:


Steps to Reproduce:
1. Create an RBD image with --feature 13, create a snapshot, and clone it.
2. Attach the same clone to 2 different VMs.
3. Write to the RBD clone from both VMs at the same time (dd was used for the writes; a sketch follows below).
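
A minimal reproduction sketch, assuming a pool named rbd, hypothetical image/VM names and device XML, and the numeric --image-features bitmask used by this release (13 = layering + exclusive-lock + object-map); the exact option spelling and attach method may differ:

# Create the parent image, snapshot it, protect the snapshot, and clone it
rbd create rbd/parent --size 10240 --image-features 13
rbd snap create rbd/parent@snap1
rbd snap protect rbd/parent@snap1
rbd clone rbd/parent@snap1 rbd/clone1

# Attach the same clone to two running guests (vm1, vm2 and rbd-clone1.xml
# are hypothetical libvirt names)
virsh attach-device vm1 rbd-clone1.xml --live
virsh attach-device vm2 rbd-clone1.xml --live

# Inside each guest, write to the attached disk at the same time
dd if=/dev/zero of=/dev/vdb bs=1M count=1024 oflag=direct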

Actual results:
The VMs crash.

Expected results:
The exclusive lock should handle this use case, and there should not be a crash.

Additional info:
Logs attached.
Comment 2 Ken Dreyer (Red Hat) 2016-02-03 12:19:08 EST
I discussed this with Jason and Josh, and the outcome is that we want to land a fix in 0.94.7 upstream and give it plenty of testing before pulling it downstream.

This is not technically a valid use case of RBD exclusive locking, and customers should never have two VMs using the same RBD image at the same time. If customers had such a setup, this would certainly cause other issues. Jason and Josh also confirmed that there's no way to hit this with a single client, nor during a VM live migration between hypervisors.
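
For reference, a hedged sketch for inspecting the shared clone and, if an image really is meant to be attached to more than one VM at once (e.g. under a clustered filesystem), dropping the single-writer features; the pool/image name is hypothetical, rbd lock list shows advisory locks only, and rbd feature disable is assumed to be available only in Jewel-based releases (RHCS 2.x):

# Show which features are enabled on the clone (look for exclusive-lock, object-map)
rbd info rbd/clone1

# List advisory lock holders (the managed exclusive lock may not appear here)
rbd lock list rbd/clone1

# Jewel-based releases only: object-map depends on exclusive-lock, so disable it first
rbd feature disable rbd/clone1 object-map
rbd feature disable rbd/clone1 exclusive-lock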

Since this is a lower priority bug, re-targeting to RHCS 1.3.3.
Comment 4 Jason Dillaman 2017-06-26 13:02:58 EDT
This was resolved in RHCS 2.x.
