Bug 1000817
| Field | Value |
|---|---|
| Summary | Allow bcache devices to be used as PVs by default |
| Product | Fedora |
| Component | lvm2 |
| Version | 20 |
| Status | CLOSED ERRATA |
| Severity | low |
| Priority | low |
| Reporter | Rolf Fokkens <rolf> |
| Assignee | Alasdair Kergon <agk> |
| QA Contact | Fedora Extras Quality Assurance <extras-qa> |
| CC | agk, bmarzins, bmr, dwysocha, heinzm, ignatenko, jonathan, lvm-team, msnitzer, prajnoha, prockai, rwheeler, zkabelac |
| Keywords | FutureFeature |
| Hardware | All |
| OS | All |
| Fixed In Version | lvm2-2.02.102-1.fc20 |
| Doc Type | Enhancement |
| Type | Bug |
| Last Closed | 2013-09-26 06:17:34 UTC |
| Bug Blocks | 998543, 1008227, 1008228 |
**Description (Comment 0) — Rolf Fokkens, 2013-08-25 15:48:05 UTC**
A workaround is to have the user add `types = [ "bcache", 16 ]` in /etc/lvm/lvm.conf. I consider it a workaround, since I think the real solution would be to have LVM2 accept bcache in /lib/filters/filter.c.

---

**Comment 1:**
Note that DM has a dm-cache target that is a caching device (no need for bcache).

**Comment 2:**
BCACHE and dm-cache are designed for different workloads, so performance will not be the same for every workload.

**Comment 3:**
Please provide a reference for your choice of 16 rather than 1.

**Comment 4:**
I'm not sure what "the maximum number of partitions" means, but I guess you're right: it should be 1. From http://bcache.evilpiepirate.org/Design/: "When bcache loads, it creates a major device type. Every slow storage device registered with bcache creates a new minor device number that is associated with that storage device." Hope that provides the right information.

**Comment 5:**
Just like on other similar bugzillas, create two devices and provide the /proc/devices output and their major/minor numbers.

**Comment 6:**

```
[root@localhost ~]# cat /proc/devices
Character devices:
...
Block devices:
259 blkext
  7 loop
...
135 sd
252 bcache
253 device-mapper
254 mdp
[root@localhost ~]# cat /proc/partitions
major minor  #blocks  name
   8     0   20971520 sda
   8     1     512000 sda1
   8     2   12356608 sda2
   8     3    8101888 sda3
   8    32   20971520 sdc
   8    33   20970496 sdc1
   8    16    2097152 sdb
   8    17    2096128 sdb1
   8    48    2097152 sdd
   8    49    2096128 sdd1
 253     0   10240000 dm-0
 253     1    2113536 dm-1
 252     0    8101880 bcache0
 253     2    4194304 dm-2
 252     1   20970488 bcache1
[root@localhost ~]# ls -l /dev/bcache*
brw-rw----. 1 root disk 252, 0 Aug 26 10:52 /dev/bcache0
brw-rw----. 1 root disk 252, 1 Aug 26 10:55 /dev/bcache1
```

**Comment 7:**
With F20 planning under way, having this implemented would be really nice. Is there any indication of when this will be implemented?

**Comment 8:**
Could anybody please respond?

**Comment 9:**
(In reply to Rolf Fokkens from comment #8)
> Could anybody please respond?

This isn't a difficult change to lvm2, but it certainly isn't a priority, given that bcache isn't a priority. Have you looked at the lvm2 code in this area? It seems to me you're looking to stand up bcache without the ability to properly support it, which seems short-sighted. I can appreciate that you just want a functional caching layer in Fedora, but if you have to rely completely on other developers to make it work and/or keep it supportable, you'll just have problems down the road, especially if the upstream bcache developer (Kent) isn't actively working with Fedora to make it fully supported. Red Hat is not in a position to support bcache at this time.

**Comment 10:**
Thanks for the response. My effort to make bcache work started when the SSD cache change was accepted: https://fedorahosted.org/fesco/ticket/1145. In that process the relationship with Kent Overstreet was not identified as a blocking issue, so it's now unclear to me how to move forward. Can I be of any help? If I make a patch, would you consider accepting it?

**Comment 11:**
I'll do some testing later, but this should be it:

```
--- LVM2.2.02.98/lib/filters/device-types.h.orig	2013-09-13 10:40:28.270907244 +0200
+++ LVM2.2.02.98/lib/filters/device-types.h	2013-09-13 10:42:51.760734860 +0200
@@ -56,5 +56,6 @@
 	{"blkext", 1},		/* Extended device partitions */
 	{"fio", 16},		/* Fusion */
 	{"mtip32xx", 16},	/* Micron PCIe SSDs */
+	{"bcache", 1},		/* Bcache disk */
 	{"", 0}
 };
```

**Comment 12:**
Created attachment 797540 [details]
LVM patch to accept bcache as a PV
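For an unpatched lvm2, the workaround from the opening comment corresponds to an lvm.conf fragment roughly like the following (a sketch: the `types` setting lives in the `devices` section of lvm.conf, and, per the discussion above, 1 is the appropriate partition count for bcache):

```
# /etc/lvm/lvm.conf -- workaround for lvm2 releases without the patch
devices {
    # Accept bcache devices as PVs. bcache exposes at most one
    # "partition" per backing device, hence the count of 1.
    types = [ "bcache", 1 ]
}
```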
**Comment 13:**
After a small correction the patch applied fine to the F20 LVM:

```
--- LVM2.2.02.99/lib/device/device-types.h.orig	2013-09-13 21:51:31.594728463 +0200
+++ LVM2.2.02.99/lib/device/device-types.h	2013-09-13 21:52:34.640634634 +0200
@@ -60,5 +60,6 @@
 	{"vtms", 16, "Violin Memory"},
 	{"skd", 16, "STEC"},
 	{"scm", 8, "Storage Class Memory (IBM S/390)"},
+	{"bcache", 1, "bcache cached disk"},
 	{"", 0, ""}
 };
```
Testing showed that this patch fixes the issue; no more tweaks in lvm.conf are needed.
**Comment 14:**
This is NOT the same as adding recognition for new types of hardware: bcache creates a sophisticated type of block device that must be tested thoroughly in combination with device-mapper/lvm, and it should not yet be assumed to work correctly in all circumstances. I shall add the recognition as requested here because there is an expectation upstream that it needs to work. This should NOT be taken to imply that the combination of device-mapper over bcache has already been tested thoroughly and shown to work. If any problems are found, they should be reported on the appropriate mailing lists (dm-devel / linux-kernel / lvm-devel / linux-bcache) so they can be investigated and fixed.

**Comment 15:**
Thanks, this is great! I completely agree with you, and will position bcache as experimental at every opportunity.

**Comment 16:**
In release 2.02.102. Should be building packages in the next couple of days. This release also contains a new 'lvm devtypes' command that lists the built-in supported block device types.

**Comment 17:**
lvm2-2.02.102-1.fc20 has been submitted as an update for Fedora 20. https://admin.fedoraproject.org/updates/lvm2-2.02.102-1.fc20

**Comment 18:**
Package lvm2-2.02.102-1.fc20:
* should fix your issue,
* was pushed to the Fedora 20 testing repository,
* should be available at your local mirror within two days.

Update it with:

```
# su -c 'yum update --enablerepo=updates-testing lvm2-2.02.102-1.fc20'
```

as soon as you are able to. Please go to the following URL:

https://admin.fedoraproject.org/updates/FEDORA-2013-17514/lvm2-2.02.102-1.fc20

then log in and leave karma (feedback).

**Comment 19:**
lvm2-2.02.102-1.fc20 has been pushed to the Fedora 20 stable repository. If problems still persist, please make note of it in this bug report.