Description of problem: Some customers use RAID 5 in their hyperconverged setup. Currently we have no performance recommendations on how to set up the logical volumes on RAID 5 disks. Raising this bug as a placeholder for follow-up with the perf team.
Manoj, if a RAID 5 device is used for the RHGS brick backend, what should the data disk count be? The intention behind the question is that pvcreate requires a --dataalignment value; what should that value be?
For N+1 RAID-5, where N is the number of data disks, here's what I'd suggest:

* N=4 would be a good choice
* RAID-5 stripe-unit size of 64KB
* enable write-back caching in the RAID controller (if it has a BBU)
* [stripe_unit_size = 64KB, data_stripe_size = 64*4 = 256KB]
* pvcreate --dataalignment $data_stripe_size <device>
* vgcreate --physicalextentsize $data_stripe_size <vg_name> <device>
* thin pool chunk size of $data_stripe_size
* mkfs.xfs: use the -d su=<>,sw=<> option, with stripe_unit_size and N, respectively

I'm assuming this will go through some testing before it goes out to customers? :)
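The steps above could be scripted roughly as follows. This is a sketch only: the device name /dev/sdX and the VG/pool/LV names (rhgs_vg, rhgs_pool, rhgs_lv) are placeholder assumptions, and the echo prefix makes it a dry run that prints the commands instead of executing them.

```shell
#!/bin/sh
# Dry-run sketch of the RAID-5 brick layout suggested above.
# Remove "run=echo" (i.e., set run to empty) to actually execute.
run=echo

STRIPE_UNIT_KB=64                        # RAID-5 stripe-unit size
N=4                                      # data disk count in N+1 RAID-5
DATA_STRIPE_KB=$((STRIPE_UNIT_KB * N))   # full data stripe: 64*4 = 256KB

# Align the PV data area and the VG extents to the full data stripe.
$run pvcreate --dataalignment ${DATA_STRIPE_KB}K /dev/sdX
$run vgcreate --physicalextentsize ${DATA_STRIPE_KB}K rhgs_vg /dev/sdX

# Thin pool with chunk size equal to the data stripe size.
$run lvcreate --type thin-pool --chunksize ${DATA_STRIPE_KB}K \
     -l 100%FREE -n rhgs_pool rhgs_vg
$run lvcreate --type thin -V 1T --thinpool rhgs_pool -n rhgs_lv rhgs_vg

# XFS stripe geometry: su = stripe unit, sw = number of data disks.
$run mkfs.xfs -d su=${STRIPE_UNIT_KB}k,sw=${N} /dev/rhgs_vg/rhgs_lv
```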
(In reply to Manoj Pillai from comment #5)
> For N+1 RAID-5, where N is the number of data disks, here's what I'd suggest:
>
> * N=4 would be a good choice
> * RAID-5 stripe-unit size of 64KB
> * enable write-back caching in the RAID controller (if it has a BBU)
> * [stripe_unit_size = 64KB, data_stripe_size = 64*4 = 256KB]

That should have been: data_stripe_size = 64*N.
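To illustrate the corrected formula (data_stripe_size = stripe_unit * N rather than a fixed 256KB), here is a small sketch; the N values other than 4 are illustrative assumptions, not recommendations:

```shell
#!/bin/sh
# data_stripe_size scales with the data disk count N, at a fixed
# 64KB stripe unit: N=4 gives 256KB, N=8 would give 512KB.
stripe_unit_kb=64
for n in 4 8; do
  echo "N=$n -> data_stripe_size=$((stripe_unit_kb * n))KB"
done
```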
There has been no action in the last 11 months; should we close this?
RHHI-V supports the use of RAID 5, and no specific performance penalties have been reported.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.