Red Hat Bugzilla – Bug 798308
Problems after recreating a volume with more bricks
Last modified: 2012-07-16 09:37:47 EDT
I added two more bricks to a volume, and after remounting we could not find any of the folders that had been on the first 2 bricks.
Self-healing is not working: I can mkdir a folder and the system tells me the folder already exists, but we have a lot of subfolders and cannot mkdir all of them by hand.
--- Additional comment from email@example.com on 2012-02-28 09:56:09 EST ---
If I recall our conversation on IRC correctly, you said that you had created a new volume with the same name as a previous one, but with more bricks. This seems a bit problematic with respect to things like brick UUIDs and xattr values. To diagnose, we'll need some more information, such as:
* client and server logs (especially the embedded volfiles and messages from around each daemon's startup)
* xattr values (anything with "gluster" in the name) from at least the per-tenant directories on each brick
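As a sketch of how to collect the requested xattr values, the snippet below walks a directory and reports every extended attribute whose name contains "gluster", with the value hex-encoded (roughly what `getfattr -d -m . -e hex` would show). The brick path is a placeholder, and the injectable `list_fn`/`get_fn` parameters are only there to make the helper easy to test; they are not part of any GlusterFS API.

```python
import os

def gluster_xattrs(path, list_fn=os.listxattr, get_fn=os.getxattr):
    """Return {xattr name: hex-encoded value} for all extended
    attributes on `path` whose name contains 'gluster'.

    By default this reads real xattrs via os.listxattr/os.getxattr
    (Linux only); alternate functions can be injected for testing.
    """
    found = {}
    for name in list_fn(path):
        if "gluster" in name:
            found[name] = get_fn(path, name).hex()
    return found

if __name__ == "__main__":
    # Hypothetical brick path; substitute your own per-tenant directory.
    for name, value in gluster_xattrs("/bricks/brick1/tenant1").items():
        print(f"{name} = 0x{value}")
```

Note that `trusted.*` xattrs (which is where GlusterFS keeps most of its metadata) are only readable as root.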
Then again, this seems to be a separate problem so perhaps it should be a separate bug. Mind if I clone this one?
--- Additional comment from firstname.lastname@example.org on 2012-02-28 10:09:24 EST ---
Yes, that would be fine.
I had to recreate a volume with the same name; I did not have an option to expand my volume.
client Volfile from hfs_mount:
HekaFS will be merged into core GlusterFS