[linux-lvm] lvcreate cores dump (NTCTA001 - MSV)

norman at embratel.com.br norman at embratel.com.br
Tue Oct 9 03:00:49 UTC 2001




Hi, I'm new to LVM but I'll try to provide
you with as much detail as possible...

I'm using LVM with a 30 GB disk. Initially this was a
partitioned disk (separate partitions for /usr, /tmp,
/var, /boot, / and /home). To use LVM without affecting my
performance I migrated the data from /usr, /home, /tmp and
/var, deleted those partitions, and now I have only the following mounted partitions:

Filesystem    1k-blocks Used    Available Use%  Mounted on
/dev/hdb5     822184    714968  65452     92%   /
/dev/hdb1     23302      1473   20626     7%    /boot

and I have a swap partition and a big partition reserved for LVM as
follows:

Disk /dev/hdb: 255 heads, 63 sectors, 3649 cylinders
Units = cylinders of 16065 * 512 bytes

Device      Boot    Start      End    Blocks   Id  System
/dev/hdb1   *           1        3     24066   83  Linux
/dev/hdb2            3546     3649    835380    5  Extended
/dev/hdb3            3515     3545    249007+  82  Linux swap
/dev/hdb4               4     3514  28202107+  8e  Linux LVM
/dev/hdb5            3546     3649    835348+  83  Linux

Logical partitions out of disk order

(The fdisk output above is translated from Portuguese, but I think
you know the fields by heart anyway :)
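As a quick sanity check (my own arithmetic, assuming fdisk's stated units of 16065 * 512-byte cylinders), the /dev/hdb4 figures are self-consistent:

```shell
# /dev/hdb4 spans cylinders 4..3514, i.e. 3511 cylinders of 16065 sectors.
sectors=$((3511 * 16065))   # 512-byte sectors
blocks=$((sectors / 2))     # 1K blocks; fdisk prints a trailing '+' for the odd half block
echo "$sectors $blocks"     # 56404215 28202107
```

The sector count, 56404215, is exactly the PV size later reported in /proc/lvm/VGs/vg01/PVs/hdb4.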

That's OK. I then ran:

vgscan
pvcreate /dev/hdb4
vgcreate vg01 /dev/hdb4

Here follows the output of some commands and /proc entries:

/sbin/vgdisplay
--- Volume group ---
VG Name               vg01
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                255
Cur LV                0
Open LV               0
MAX LV Size           255.99 GB
Max PV                255
Cur PV                1
Act PV                1
VG Size               26.89 GB
PE Size               4.00 MB
Total PE              6884
Alloc PE / Size       0 / 0
Free  PE / Size       6884 / 26.89 GB
VG UUID               sTirvK-2jz3-diSe-bjH2-9hOI-PWcX-DRZy1Q

/sbin/vgck
vgck -- VGDA of "vg01" in lvmtab is consistent
vgck -- VGDA of "vg01" on physical volume is consistent

 cat  /proc/lvm/global
LVM driver LVM version 1.0.1-rc4(03/10/2001)

Total:  1 VG  1 PV  0 LVs (0 LVs open)
Global: 3177 bytes malloced   IOP version: 10   3:36:47 active

VG:  vg01  [1 PV, 0 LV/0 open]  PE Size: 4096 KB
  Usage [KB/PE]: 28196864 /6884 total  0 /0 used  28196864 /6884 free
  PV:  [AA] hdb4                  28196864 /6884           0 /0       28196864 /6884
    LVs: none
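The extent accounting in the output above is internally consistent (again, my arithmetic, not part of the original output): 6884 extents of 4096 KB each come to exactly the reported VG size.

```shell
# 6884 physical extents of 4096 KB each:
vg_kb=$((6884 * 4096))
echo "$vg_kb"   # 28196864 KB, as shown in /proc/lvm
# 28196864 KB / 1024 / 1024 = ~26.89 GB, matching vgdisplay's "VG Size"
```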

cat  /proc/lvm/VGs/vg01/group
name:         vg01
size:         28196864
access:       3
status:       5
number:       0
LV max:       255
LV current:   0
LV open:      0
PV max:       255
PV current:   1
PV active:    1
PE size:      4096
PE total:     6884
PE allocated: 0
uuid:         sTir-vK2j-z3di-Sebj-H29h-OIPW-cXDR-Zy1Q


 cat  /proc/lvm/VGs/vg01/PVs/hdb4
name:         /dev/hdb4
size:         56404215
status:       1
number:       1
allocatable:  2
LV current:   0
PE size:      4096
PE total:     6884
PE allocated: 0
device:       03:68
uuid:         4usA-WblC-Yxdm-d0hS-chbZ-xuvn-FYG2-3XR1

Well, when I try:

/sbin/lvcreate -L 2000 -n lv_usr vg01
or without spaces after -L and -n
or
/sbin/lvcreate -l 500 -n lv01 vg01
(changing spaces too)
or
/sbin/lvcreate -L2G -nlv_usr vg01
(changing spaces and changing lv name to lv01)
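For clarity (my arithmetic, given the 4 MB PE size reported above): the first two invocations request the same size, while the -L2G form asks for slightly more, so the crash does not depend on the particular size spelling.

```shell
# -L 2000 means 2000 MB; -l 500 means 500 extents.
# With PE size = 4 MB these are identical:
echo $((2000 / 4))   # 500 extents = 2000 MB, same as -l 500
# -L2G is 2048 MB, i.e. a slightly larger request:
echo $((2048 / 4))   # 512 extents
```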

I get a core dump.
I have a 2.4.10 kernel, with the 1.0.6 JFS patch and your latest 1.0.1-rc4
patches applied. (Note that no partition was mounted using JFS; the partition
I had for it was deleted to use with LVM.) So both / and /boot are ext2.

I'll send you the core file, plus the /etc/lvmconf/vg01.conf and
/etc/lvmtab.d/vg01 files (even though I don't know whether they'll help).
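In case it helps, a backtrace could be pulled out of the core along these lines (a sketch only, commands not run here; the core file name may differ on your system):

```shell
# Sketch: allow core dumps, reproduce the crash, inspect the core with gdb.
#   ulimit -c unlimited
#   /sbin/lvcreate -L 2000 -n lv_usr vg01   # segfaults and writes 'core'
#   gdb /sbin/lvcreate core
#   (gdb) bt                                # stack trace at the crash point
```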
-------------- next part --------------
A non-text attachment was scrubbed...
Name: files.tgz
Type: application/octet-stream
Size: 23328 bytes
Desc: not available
URL: <http://listman.redhat.com/archives/linux-lvm/attachments/20011009/50e48dfd/attachment.obj>


More information about the linux-lvm mailing list