Red Hat Bugzilla – Attachment 156830 Details for
Bug 241907
RHEL 4.5.0 release notes -- LVM failover, new feature
Description: rel notes for ha lvm
Filename:    ha_lvm_rn.txt
MIME Type:   text/plain
Creator:     Corey Marthaler
Created:     2007-06-12 21:15:55 UTC
Size:        2.39 KB
<release_notes>
Logical Volume Manager failover, also known as Highly Available LVM (HA LVM),
is now possible with rgmanager. This provides a way to use LVM in an
active/passive environment.

The most compelling application of HA LVM is the mirroring of two distinct
SAN-connected sites. One site can suffer complete failure (machine and storage)
and the other can continue serving content. [HA LVM is not currently able to
handle a complete loss of SAN connectivity from the serving machine, even if the
standby machine is still SAN connected. Multipathing can be used to mitigate
this event.]

Proper setup is required for correct operation. Setup consists of the following
steps:

1) Create the logical volume and filesystem. Only one logical volume is allowed
per volume group in HA LVM. Example:

prompt> pvcreate /dev/sd[cde]1
prompt> vgcreate my_volume_group /dev/sd[cde]1
prompt> lvcreate -L 10G -n my_logical_volume my_volume_group
prompt> mkfs.ext3 /dev/my_volume_group/my_logical_volume

2) Edit /etc/cluster/cluster.conf to include the newly created logical volume as
a resource in one of your services. (Optionally, you can use the
system-config-cluster or Conga GUIs.) Example resource manager section from
/etc/cluster/cluster.conf:

<rm>
    <failoverdomains>
        <failoverdomain name="FD" ordered="1" restricted="0">
            <failoverdomainnode name="neo-04" priority="1"/>
            <failoverdomainnode name="neo-05" priority="2"/>
        </failoverdomain>
    </failoverdomains>

    <resources>
        <lvm name="lvm" vg_name="my_volume_group" lv_name="my_logical_volume"/>
        <fs name="FS" device="/dev/my_volume_group/my_logical_volume"
            force_fsck="0" force_unmount="1" fsid="64050" fstype="ext3"
            mountpoint="/mnt" options="" self_fence="0"/>
    </resources>

    <service autostart="1" domain="FD" name="serv" recovery="relocate">
        <lvm ref="lvm"/>
        <fs ref="FS"/>
    </service>
</rm>

3) Edit the "volume_list" field in /etc/lvm/lvm.conf. Include the name of your
root volume group and your machine's name as given in /etc/cluster/cluster.conf,
preceded by an "@". Example from /etc/lvm/lvm.conf:

volume_list = [ "VolGroup00", "@neo-01" ]

4) Update the initrd on all of your cluster machines. Example:

prompt> new-kernel-pkg --mkinitrd \
        --initrdfile=/boot/initrd-halvm-`uname -r`.img \
        --install `uname -r` --make-default

5) Reboot all of your machines to ensure the correct initrd is in use.

</release_notes>
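A minimal follow-up sketch for checking the result, assuming the example names used above (my_volume_group, serv, neo-05) and the standard Red Hat Cluster Suite tools clustat and clusvcadm:

# With my_volume_group absent from volume_list, manual activation on a node
# that does not hold the volume group's tag is expected to be refused;
# rgmanager's lvm resource agent handles tagging for the owning node.
prompt> vgchange -ay my_volume_group

# Show which node currently owns the "serv" service.
prompt> clustat

# Relocate the service to the standby node to exercise failover
# (neo-05 is the example node name from the cluster.conf snippet above).
prompt> clusvcadm -r serv -m neo-05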