Bug 178384
Summary: Cannot activate logical volumes using physical devices discovered after clvmd start.
Product: [Retired] Red Hat Cluster Suite
Component: lvm2-cluster
Version: 4
Hardware: All
OS: Linux
Status: CLOSED DUPLICATE
Severity: medium
Priority: medium
Reporter: Henry Harris <henry.harris>
Assignee: Alasdair Kergon <agk>
QA Contact: Cluster QE <mspqa-list>
CC: agk, ccaulfie
Target Milestone: ---
Target Release: ---
Doc Type: Bug Fix
Last Closed: 2006-01-27 22:30:42 UTC
Description
Henry Harris
2006-01-19 22:39:06 UTC
Created attachment 123461 [details]
steps taken and results
This should have been fixed in the long-closed bug #138396.

Bug #138396 was believed fixed in RHBA-2005-192, but this is happening in 2.01.14, which is later. The steps are slightly different from those in bug #138396: we are not restarting clvmd (restarting clvmd makes the problem go away). Also, this behavior is exhibited reliably, not intermittently.

Does 'md' mean these are software RAID devices shared between nodes? What is their configuration? If so, can you reproduce without using 'md'? This also needs to be tested with the latest U3 beta packages.

A workaround for this issue that we've tested is to stop clvmd on all the nodes in the cluster, add your new devices, discover the new devices on all the nodes, and then restart clvmd.

What we actually did (see the command sketch after this list):

1. We had a 3-disk/PV (400 GB) GFS filesystem with active I/O running from all the nodes.
2. Stopped clvmd:
   [root@link-02 ~]# service clvmd stop
   Deactivating VG link1: Can't deactivate volume group "link1" with 1 open logical volume(s) [FAILED]
   Deactivating VG link2: [ OK ]
   Stopping clvm: [ OK ]
   (Note that the deactivation fails because of the mounted filesystem with running I/O.)
3. Took 3 other unused disks, repartitioned them, rediscovered them on all nodes, and then restarted clvmd.
4. Created PVs out of those new partitions.
5. Grew the active VG and LV.
6. Grew the GFS filesystem.
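A minimal sketch of the commands corresponding to steps 2-6 above. Only the VG name link1 comes from the report (it is the VG shown as busy in the output); the partition names /dev/sdd1, /dev/sde1, /dev/sdf1, the LV name lvol0, the size increment, and the mount point /mnt/gfs are hypothetical placeholders:

   # On every node: stop clvmd before adding the new storage
   service clvmd stop
   # On every node: after repartitioning the new disks, re-read the
   # partition tables so the new partitions are visible
   partprobe /dev/sdd /dev/sde /dev/sdf
   # On every node: restart clvmd once the new partitions are visible
   service clvmd start
   # On one node: create PVs on the new partitions, grow the VG and LV,
   # then grow the mounted GFS filesystem
   pvcreate /dev/sdd1 /dev/sde1 /dev/sdf1
   vgextend link1 /dev/sdd1 /dev/sde1 /dev/sdf1
   lvextend -L +400G /dev/link1/lvol0
   gfs_grow /mnt/gfs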