[linux-lvm] lvdisplay "open" field computing
Simon ELBAZ
selbaz at linagora.com
Wed Jul 10 12:46:23 UTC 2019
Veeam Backup seems to access the volume (from /var/log/veeam/veeamsvc.log):
[02.07.2019 00:01:18] <140503985145600> lpbcore| Found device: [/dev/dm-3]. Device number: [253:3]; Type: [dm].
[02.07.2019 00:01:18] <140503985145600> lpbcore| Dm name: [vg_obm-var_spool_imap].
[02.07.2019 00:01:18] <140503985145600> lpbcore| Dm UUID: [LVM-1i1v6pEjab2WslaDQRvkf8eLk6QfBW4J0oK7h0tDZHeUxEAyBwcK7xU9pSW7X4Uh].
[02.07.2019 00:01:18] <140503985145600> lpbcore| Slave of: [8:64].
[02.07.2019 00:01:18] <140503985145600> lpbcore| Link: [/dev/dm-3].
[02.07.2019 00:01:18] <140503985145600> lpbcore| Link: [/dev/vg_obm/var_spool_imap].
[02.07.2019 00:01:18] <140503985145600> lpbcore| Link: [/dev/block/253:3].
[02.07.2019 00:01:18] <140503985145600> lpbcore| Link: [/dev/mapper/vg_obm-var_spool_imap].
[02.07.2019 00:01:18] <140503985145600> lpbcore| Link: [/dev/disk/by-uuid/c9f9314e-4ce0-4138-b9b5-c745fdc22258].
[02.07.2019 00:01:18] <140503985145600> lpbcore| Link: [/dev/disk/by-id/dm-uuid-LVM-1i1v6pEjab2WslaDQRvkf8eLk6QfBW4J0oK7h0tDZHeUxEAyBwcK7xU9pSW7X4Uh].
[02.07.2019 00:01:18] <140503985145600> lpbcore| Link: [/dev/disk/by-id/dm-name-vg_obm-var_spool_imap].
[02.07.2019 00:01:18] <140503985145600> lpbcore| Filesystem UUID: [c9f9314e-4ce0-4138-b9b5-c745fdc22258]; Type: [ext4]; Mount points: [/var/spool/imap].
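
For what it is worth, my understanding is that the "open" value printed by
lvdisplay is simply the device-mapper open count for the LV, i.e. the kernel's
count of openers of the dm block device, not something derived from
lsof-visible file descriptors. The same number should be reported by dmsetup;
a minimal check, using the dm name from the log above:

dmsetup info vg_obm-var_spool_imap | grep 'Open count'
dmsetup info -c -o name,open vg_obm-var_spool_imap

A mounted ext4 filesystem accounts for one opener; the second opener does not
have to be a process with a visible file descriptor, since in-kernel users of
the block device are counted as well.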
I wonder if Veeam uses its own namespace.
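
If it is a namespace issue, a rough check on this kernel, using the 253:3 and
dm-3 names from the log above, would be to look for the device in every
process's mount namespace and for any process holding a file descriptor on the
device node:

grep -l ' 253:3 ' /proc/[0-9]*/mountinfo
find /proc/[0-9]*/fd -lname '*dm-3' 2>/dev/null

If the first command only lists processes that share the initial mount
namespace and the second returns nothing, the second opener is more likely an
in-kernel one (for instance the veeamsnap module) than a mount in another
namespace.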
Regards
On 10/07/2019 13:54, Simon ELBAZ wrote:
> Hi Zdenek,
>
> Thanks for your feedback.
>
> The kernel version is:
>
> [root at panoramix ~]# uname -a
> Linux panoramix.ch-perrens.fr 2.6.32-573.12.1.el6.x86_64 #1 SMP Tue
> Dec 15 21:19:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>
> This server is a corosync/pacemaker cluster node that is experiencing
> problems during failover.
>
> The error that occurs during the failover is:
>
> Jul 2 03:08:09 panoramix pgsql(pri_postgres)[18708]: INFO: Changing pri_postgres-status on jolitorax.ch-perrens.fr : HS:alone->HS:async.
> Jul 2 03:08:10 panoramix LVM(pri_ISCSIVG0_vg_obm)[18618]: ERROR: Logical volume vg_obm/var_spool_imap in use. Can't deactivate volume group "vg_obm" with 1 open logical volume(s)
> Jul 2 03:08:10 panoramix LVM(pri_ISCSIVG0_vg_obm)[18618]: ERROR: LVM: vg_obm did not stop correctly
> Jul 2 03:08:10 panoramix LVM(pri_ISCSIVG0_vg_obm)[18618]: WARNING: vg_obm still Active
> Jul 2 03:08:10 panoramix LVM(pri_ISCSIVG0_vg_obm)[18618]: INFO: Retry deactivating volume group vg_obm
>
> This is why I am trying to understand how the "open" field is computed.
>
> The output of the grep on /proc/*/mountinfo is attached to this mail.
>
> The volume is presented as a raw device by the VMware hypervisor and is
> backed up by Veeam:
>
> [root at panoramix ~]# ps -ef | grep vee
> root 2877 1 0 Jul04 ? 00:01:10 /usr/sbin/veeamservice
> --daemonize --pidfile=/var/run/veeamservice.pid
> root 5207 2 0 Jul05 ? 00:00:00 [veeamsnap_log]
> root 21475 21004 0 13:53 pts/2 00:00:00 grep vee
>
> Regards
>
>
> On 10/07/2019 12:51, Zdenek Kabelac wrote:
>> On 10. 07. 19 at 9:13, Simon ELBAZ wrote:
>>> Hi,
>>>
>>> This LV seems to be mounted on a single mountpoint.
>>>
>>> The lsof output is:
>>
>>
>> Hmm
>>
>> Which kernel version is this - aren't we tracking an issue on some
>> ancient kernel?
>>
>> Why have you actually started to hunt for the reason for an open count of 2?
>>
>> Has the machine experienced some trouble?
>>
>> Aren't there some namespaces in use - i.e. is the volume used in a
>> different namespace?
>>
>> grep 253:3 /proc/*/mountinfo - does it show something unusual?
>>
>>
>>
>> Zdenek
>>
--
Simon Elbaz
@Linagora
Mob: +33 (0) 6 38 99 18 34
Tour Franklin 31ème étage
100/101 Quartier Boieldieu
92042 La Défense
FRANCE