[linux-lvm] [IMPORTANT]LVM+iSCSI issue..Local Disk disappeared..

himanshu padmanabhi himanshu.padmanabhi at gmail.com
Thu Oct 16 10:27:52 UTC 2008


We configured it as follows:

Case 1:

  pv1 = localdisk1, pv2 = localdisk2
  vg1 = pv1 + pv2
  Created lv1 of size = pv1 size + pv2 size.
  Deleted lv1.
  Removed pv2.
  The VG is still displayed properly by the vgs command.

Case 2:

  pv1 = localdisk1, pv2 = remotedisk2 (iSCSI disk)
  vg1 = pv1 + pv2
  Created lv1 of size = pv1 size + pv2 size.
  Deleted lv1.
  Logged out from the iSCSI target.
  The VG is no longer displayed properly by the vgs command.
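
Roughly, the Case 2 sequence corresponds to commands like the sketch below.
The device paths, the target IQN, and the portal address are placeholders for
illustration, not the exact names we used:

    pvcreate /dev/sda3                  # localdisk1
    pvcreate /dev/sdb                   # remotedisk2 (iSCSI LUN)
    vgcreate vg1 /dev/sda3 /dev/sdb

    lvcreate -l 100%FREE -n lv1 vg1     # lv1 spans both PVs
    lvremove /dev/vg1/lv1

    iscsiadm -m node -T iqn.2008-10.example:target1 -p 192.168.1.10 --logout
    vgs                                 # vg1 is no longer reported correctly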


On Wed, Oct 15, 2008 at 9:36 PM, Peter Larsen <plarsen at ciber.com> wrote:

> On Wed, 2008-10-15 at 18:50 +0530, himanshu padmanabhi wrote:
> >
> > I am using "iscsi-initiator-utils-6.2.0.865-0.2.fc7" as initiator and
> > "iscsitarget-0.4.15-1" as target.
> >
> >
> > Following is the scenario
> >
> >
> > PV  =  local_disk1       remote_disk1  (i.e. the PV is formed using 2
> > disks, 1 local and 1 from the iSCSI target)
>
> Strange construction?
> Why not simply format each target as a separate PV and then join them in
> a single VG? Much easier to manage.
>
> That aside - I wouldn't mix and match that way. You're getting very
> different response times and security issues on each device. I would
> treat them very differently.
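
If I follow the suggestion correctly, the recommended layout would be roughly
the following (device names here are just placeholders):

    pvcreate /dev/sda3               # local disk as its own PV
    pvcreate /dev/sdb                # iSCSI disk as its own PV
    vgcreate vg1 /dev/sda3 /dev/sdb  # join both PVs in one VG

or, keeping local and remote storage apart as suggested above:

    vgcreate vg_local /dev/sda3
    vgcreate vg_iscsi /dev/sdb
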
>
> > VG  =  Vgname
>
> You need to assign the PVs to your VG.
>
> > LV =  lv_localdisk_1       lv_localremotedisk1
> > lv_remotedisk1 (i.e. 1 LV only from the local disk, 1 from the remote
> > iSCSI target, and one from a combination of both)
>
> That makes no sense. You don't use physical volumes when you create
> logical ones. You use groups. You don't assign physical devices like
> that when you create LVs.
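
As I understand it, the usual form is to carve LVs out of the VG and let LVM
place the extents; the size and name below are made up for illustration:

    lvcreate -L 10G -n lv_data vg1     # allocated from vg1's free extents
    lvs -o lv_name,vg_name,devices     # shows which PVs the extents landed on

rather than defining one LV per physical disk.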
>
> > I performed the "logout" operation on "remote_disk1" after deactivating
> > "lv_localremotedisk1" and "lv_remotedisk1" on it using the "lvchange"
> > command.
> >
> >
> > Then the result I should at least get is:
> >
> >
> > PV  =  local_disk1    (remote_disk1 is removed now)
> >
> >
> > VG = Vgname
> >
> >
> > LV = lv_localdisk1     (so the LVs whose PVs include "remote_disk1" are
> > deactivated, whether or not they also use the local disk)
>
> No - things don't work that way. If you damage/remove a PV from a VG -
> everything in that VG gets disabled until the whole VG is operational.
> It doesn't matter if your LV is in the area damaged or not.
>
> > i.e. they were all lost temporarily.
>
> Not temporarily. As long as the VG is bad, nothing is there. You should
> see all available PVs, but the way you set it up, you didn't put a PV on
> each disk, so when the disk group sees one disk missing, the whole disk
> group goes offline: you lose your PV, your VG gets deactivated - which is
> the result you see.
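
Noting down, for my own reference, how that state can be inspected and, if the
missing PV were really gone for good, cleaned up. This is my understanding of
the relevant commands, not something I have run here, and --removemissing
discards whatever was on the lost PV:

    pvs                              # the missing PV shows up as an unknown device
    vgs Vgname                       # the VG is reported with a missing PV
    vgchange -ay --partial Vgname    # try to activate whatever can still be activated
    vgreduce --removemissing Vgname  # last resort: drop the missing PV from the VG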
>
> > When I logged in to the same target and activated the LVs on
> > "remote_disk1", I got my original configuration back, i.e.
>
> Because your PV is now present, the VG can activate etc.
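
So the recovery sequence on re-login would be roughly (target IQN again a
placeholder):

    iscsiadm -m node -T <target-iqn> --login
    pvscan                           # rediscover the returned PV
    vgchange -ay Vgname              # the VG and its LVs become available again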
>
>
> ---
> Regards
>    Peter Larsen
>
> We have met the enemy, and he is us.
>                -- Walt Kelly
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>



-- 
Regards,
Himanshu Padmanabhi

