<div dir="ltr">Hi Lukasz,<div><br></div><div>Version 6.1.3.7 is the latest available as of RHEL-7.8, and 6.1.3.23 is the latest available as of RHEL-7.9.  Perhaps the CentOS repos haven't been updated to include RHEL-7.9 content just yet.</div><div><br></div><div>Unfortunately the fix for the issue you encountered isn't available in 6.1.3.7 as it was actually fixed in 6.1.3.23.<br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><br>Andy Walsh</div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Nov 5, 2020 at 11:57 AM Łukasz Michalski <<a href="mailto:lm@zork.pl">lm@zork.pl</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
  
    
  
  <div>
    <div>Hmmm looking at
      <a href="http://mirror.centos.org/centos/7/os/x86_64/Packages/" target="_blank">http://mirror.centos.org/centos/7/os/x86_64/Packages/</a> I see
      kmod-kvdo-6.1.3.7-5.el7.x86_64.rp</div>
    <div><br>
    </div>
    <div>Is 6.1.3.23 available somewhere?</div>
    <div><br>
    </div>
    <br>
    <div>On 05/11/2020 17.50, Sweet Tea Dorminy
      wrote:<br>
    </div>
    <blockquote type="cite">
      
      <div dir="ltr">
<div>No, I believe you'd also need to update the kernel to go
          along with the updated kmod-kvdo. </div>
      </div>
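A minimal sketch of doing that in lockstep on CentOS/RHEL 7 (the yum/reboot lines need root, so they are shown commented; the helper function name is made up for illustration):

```shell
# Sketch only: kmod-kvdo is built against a specific kernel, so the two
# are normally updated together:
#   yum update kernel kmod-kvdo vdo && reboot
# After the reboot, this illustrative helper extracts which kvdo module
# version actually loaded, from dmesg-style lines on stdin:
loaded_kvdo_version() {
  sed -n 's/.*kvdo: modprobe: loaded version \([0-9.]*\).*/\1/p'
}
```

e.g. `dmesg | loaded_kvdo_version` should then report the post-upgrade version.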
      <br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">On Thu, Nov 5, 2020 at 10:21
          AM Łukasz Michalski <<a href="mailto:lm@zork.pl" target="_blank">lm@zork.pl</a>> wrote:<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
          <div>
            <div>Hi,</div>
            <div><br>
            </div>
<div>Is it possible to upgrade only vdo and stick with
              CentOS 7.5.1804 for the rest of the packages?</div>
            <div><br>
            </div>
            <div>Regards,</div>
            <div>Łukasz<br>
            </div>
            <div><br>
            </div>
            <div>On 05/11/2020 16.17, Sweet Tea Dorminy wrote:<br>
            </div>
            <blockquote type="cite">
              <div dir="ltr">
                <div>
                  <div>Greetings Łukasz;<br>
                    <br>
I think this may be an instance of BZ <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1821275" target="_blank">1821275</a>,
                    fixed in 6.1.3.23. Is it feasible to restart the
                    machine (unfortunately there's no other way to stop
                    a presumably hung attempt to start VDO), upgrade to
                    at least that version, and try again? <br>
                    <br>
                  </div>
                  Thanks!<br>
                  <br>
                </div>
                Sweet Tea Dorminy<br>
              </div>
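The restart-and-retry sequence above can be sketched as shell. The version number is the one mentioned in this thread (availability depends on your mirror), and the helper function is illustrative, not an official tool:

```shell
# Sketch of the recovery path, assuming 6.1.3.23 or later is available
# from your repos; the admin steps need root and are shown commented:
#   reboot                           # only way to stop the hung dmsetup create
#   yum update kernel kmod-kvdo vdo  # pick up >= 6.1.3.23
#   reboot
#   vdo start --all --confFile /etc/vdoconf.yml
# Illustrative helper: given `ps -eo pid,comm` output on stdin, print the
# PIDs of any dmsetup processes (a long-running one suggests a hung start).
find_dmsetup() {
  awk '$2 == "dmsetup" { print $1 }'
}
```

Usage: `ps -eo pid,comm | find_dmsetup`.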
              <br>
              <br>
              <div class="gmail_quote">
                <div dir="ltr" class="gmail_attr">On Thu, Nov 5, 2020 at
                  9:54 AM Łukasz Michalski <<a href="mailto:lm@zork.pl" target="_blank">lm@zork.pl</a>> wrote:<br>
                </div>
                <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                  <div>
                    <div>Details below.</div>
                    <div><br>
                    </div>
<div>Now I see that I was looking at the wrong block
                      device. My VDO is on /dev/sda, and atop shows no
                      activity for it.</div>
                    <div><br>
                    </div>
                    <div>Thanks,<br>
                      Łukasz<br>
                    </div>
                    <div><br>
                    </div>
                    <div>On 05/11/2020 15.26, Andrew Walsh wrote:<br>
                    </div>
                    <blockquote type="cite">
                      <div dir="ltr">Hi Lukasz,
                        <div><br>
                        </div>
                        <div>Can you please confirm a few details? 
                          These will help us understand what may be
                          going on.  We may end up needing additional
                          information, but this will help us identify a
                          starting point for the investigation.</div>
                        <div><br>
                        </div>
                        <div>**Storage Stack Configuration:**<br>
                          High Level Configuration: [e.g. SSD -> MD
                          RAID 5 -> VDO -> XFS]<br>
                        </div>
                      </div>
                    </blockquote>
<p>Two servers, on each:<br>
                      Hardware RAID6, 54TB -> LVM -> VDO ->
                      GlusterFS (XFS for bricks) -> Samba shares.<br>
                      Currently Samba and Gluster are disabled.<br>
                    </p>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div>Output of `blockdev --report`: <br>
                        </div>
                      </div>
                    </blockquote>
[root@ixmed1 /]# blockdev --report<br>
                    <p><tt>RO    RA   SSZ   BSZ   StartSec            Size   Device</tt><tt><br>
                      </tt><tt>rw   256   512  4096          0  59999990579200   /dev/sda</tt><tt><br>
                      </tt><tt>rw   256   512  4096          0    238999830528   /dev/sdb</tt><tt><br>
                      </tt><tt>rw   256   512  4096       2048      1073741824   /dev/sdb1</tt><tt><br>
                      </tt><tt>rw   256   512  4096    2099200    216446009344   /dev/sdb2</tt><tt><br>
                      </tt><tt>rw   256   512  4096  424845312     21479030784   /dev/sdb3</tt><tt><br>
                      </tt><tt>rw   256   512  4096          0    119810293760   /dev/dm-0</tt><tt><br>
                      </tt><tt>rw   256   512  4096          0     21470642176   /dev/dm-1</tt><tt><br>
                      </tt><tt>rw   256   512  4096          0     32212254720   /dev/dm-2</tt><tt><br>
                      </tt><tt>rw   256   512  4096          0     42949672960   /dev/dm-3</tt><tt><br>
                      </tt><tt>rw   256   512  4096          0     21474836480   /dev/dm-4</tt><tt><br>
                      </tt><tt>rw   256   512  4096          0   21990232555520   /dev/dm-5</tt><tt><br>
                      </tt><tt>rw   256   512  4096          0     21474144256   /dev/drbd999</tt><br>
                      <br>
                    </p>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div>Output of `lsblk -o
                          name,maj:min,kname,type,fstype,state,sched,uuid`:<br>
                        </div>
                      </div>
                    </blockquote>
<tt>[root@ixmed1 /]# lsblk -o name,maj:min,kname,type,fstype,state,sched,uuid</tt><tt><br>
                    </tt><tt>lsblk: dm-6: failed to get device path</tt><tt><br>
                    </tt><tt>lsblk: dm-6: failed to get device path</tt><tt><br>
                    </tt><tt>NAME              MAJ:MIN KNAME   TYPE FSTYPE   STATE SCHED    UUID</tt><tt><br>
                    </tt><tt>sda                 8:0   sda     disk LVM2_mem runni deadline ggCzji-1O8d-BWCa-XwLe-BJ94-fwHa-cOseC0</tt><tt><br>
                    </tt><tt>└─vgStorage-LV_vdo_Rada--ixmed</tt><tt><br>
                    </tt><tt>                  253:5   dm-5    lvm  vdo      runni          b668b2d9-96bf-4840-a43d-6b7ab0a7f235</tt><tt><br>
                    </tt><tt>sdb                 8:16  sdb     disk          runni deadline</tt><tt><br>
                    </tt><tt>├─sdb1              8:17  sdb1    part xfs            deadline f89ef6d8-d9f4-4061-8f48-3ffae8e23b1e</tt><tt><br>
                    </tt><tt>├─sdb2              8:18  sdb2    part LVM2_mem       deadline pHO0UQ-aGWu-Hg6g-siiq-TGPT-kw4B-gD0fgs</tt><tt><br>
                    </tt><tt>│ ├─vgSys-root    253:0   dm-0    lvm  xfs      runni          4f48e2c7-6324-4465-953a-c1a9512ab782</tt><tt><br>
                    </tt><tt>│ ├─vgSys-swap    253:1   dm-1    lvm  swap     runni          97234c91-7804-43b2-944f-0122c90fc962</tt><tt><br>
                    </tt><tt>│ ├─vgSys-cluster 253:2   dm-2    lvm  xfs      runni          97b4c285-4bfe-4d4f-8c3c-ca716157bf52</tt><tt><br>
                    </tt><tt>│ └─vgSys-var     253:3   dm-3    lvm  xfs      runni          6f5c860b-88e0-4d28-bc09-2e365299f86e</tt><tt><br>
                    </tt><tt>└─sdb3              8:19  sdb3    part LVM2_mem       deadline nvBfNi-qm2u-bt5T-dyCL-3FgQ-DSic-z8dUDq</tt><tt><br>
                    </tt><tt>  └─vgSys-pgsql   253:4   dm-4    lvm  xfs      runni          5c3e18cc-9e0f-4c81-906b-3e68f196cafe</tt><tt><br>
                    </tt><tt>    └─drbd999     147:999 drbd999 disk xfs                     5c3e18cc-9e0f-4c81-906b-3e68f196cafe</tt><br>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div><br>
                          **Hardware Information:**<br>
                           - CPU: [e.g. 2x Intel Xeon E5-1650 v2 @
                          3.5GHz]<br>
                           - Memory: [e.g. 128G]<br>
                           - Storage: [e.g. Intel Optane SSD 900P]<br>
                           - Other: [e.g. iSCSI backed storage]<br>
                        </div>
                      </div>
                    </blockquote>
                    <p>Huawei 5288 V5<br>
                      64GB RAM<br>
                      2 X Intel(R) Xeon(R) Silver 4116 CPU @ 2.10GHz<br>
                      RAID: Symbios Logic MegaRAID SAS-3 3008 [Fury]
                      (rev 02) (from lspci, megaraid_sas driver)<br>
                    </p>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div><br>
                          **Distro Information:**<br>
                           - OS: [e.g. RHEL-7.5]<br>
                        </div>
                      </div>
                    </blockquote>
                    CentOS Linux release 7.5.1804 (Core) <br>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div> - Architecture: [e.g. x86_64]<br>
                        </div>
                      </div>
                    </blockquote>
                    x86_64<br>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div> - Kernel: [e.g. kernel-3.10.0-862.el7]<br>
                        </div>
                      </div>
                    </blockquote>
                    3.10.0-862.el7
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div> - VDO Version: [e.g. vdo-6.2.0.168-18.el7,
                          or a commit hash]<br>
                           - KVDO Version: [e.g.
                          kmod-kvdo6.2.0.153-15.el7, or a commit hash]<tt><br>
                          </tt></div>
                      </div>
                    </blockquote>
<tt>[root@ixmed1 /]# yum list |grep vdo</tt><tt><br>
                    </tt><tt>kmod-kvdo.x86_64    6.1.0.168-16.el7_5    @updates</tt><tt><br>
                    </tt><tt>vdo.x86_64          6.1.0.168-18          @updates</tt><br>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div> - LVM Version: [e.g. 2.02.177-4.el7]<br>
                        </div>
                      </div>
                    </blockquote>
2.02.177(2)-RHEL7 (2018-01-22)<br>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div> - Output of `uname -a`: [e.g. Linux
                          localhost.localdomain 3.10.0-862.el7.x86_64 #1
                          SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64
                          x86_64 GNU/Linux]<br clear="all">
                        </div>
                      </div>
                    </blockquote>
                    <p>Linux ixmed1 3.10.0-862.el7.x86_64 #1 SMP Fri Apr
                      20 16:44:24 UTC 2018 x86_64 x86_64 x86_64
                      GNU/Linux<br>
                    </p>
                    <blockquote type="cite"><br>
                      <div class="gmail_quote">
                        <div dir="ltr" class="gmail_attr">On Thu, Nov 5,
                          2020 at 6:49 AM Łukasz Michalski <<a href="mailto:lm@zork.pl" target="_blank">lm@zork.pl</a>>
                          wrote:<br>
                        </div>
                        <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
                          <br>
I have two 20T devices, one on each of two servers,
                          that crashed during a power outage.<br>
                          <br>
                          After server restart I see in logs on the
                          first server:<br>
                          <br>
[root@ixmed1 /]# dmesg |grep vdo<br>
                          [   11.223770] kvdo: modprobe: loaded version 6.1.0.168<br>
                          [   11.904949] kvdo0:dmsetup: starting device 'vdo_test' device instantiation 0 write policy auto<br>
                          [   11.904979] kvdo0:dmsetup: underlying device, REQ_FLUSH: not supported, REQ_FUA: not supported<br>
                          [   11.904985] kvdo0:dmsetup: Using mode sync automatically.<br>
                          [   11.905017] kvdo0:dmsetup: zones: 1 logical, 1 physical, 1 hash; base threads: 5<br>
                          [   11.966414] kvdo0:journalQ: Device was dirty, rebuilding reference counts<br>
                          [   12.452589] kvdo0:logQ0: Finished reading recovery journal<br>
                          [   12.458550] kvdo0:logQ0: Highest-numbered recovery journal block has sequence number 70548140, and the highest-numbered usable block is 70548140<br>
                          [   12.458556] kvdo0:logQ0: Replaying entries into slab journals<br>
                          [   13.538099] kvdo0:logQ0: Replayed 5568767 journal entries into slab journals<br>
                          [   14.174984] kvdo0:logQ0: Recreating missing journal entries<br>
                          [   14.175025] kvdo0:journalQ: Synthesized 0 missing journal entries<br>
                          [   14.177768] kvdo0:journalQ: Saving recovery progress<br>
                          [   14.636416] kvdo0:logQ0: Replaying 2528946 recovery entries into block map<br>
                          <br>
                          [root@ixmed1 /]# uptime<br>
                           12:41:33 up 1 day,  4:07,  2 users,  load
                          average: 1.06, 1.05, 1.16<br>
                          <br>
[root@ixmed1 /]# ps ax |grep vdo<br>
                            1135 ?        Ss     0:00 /usr/bin/python /usr/bin/vdo start --all --confFile /etc/vdoconf.yml<br>
                            1210 ?        R    21114668:39 dmsetup create vdo_Rada-ixmed --uuid VDO-b668b2d9-96bf-4840-a43d-6b7ab0a7f235 --table 0 72301908952 vdo /dev/disk/by-id/dm-name-vgStorage-LV_test 4096 disabled 0 32768 16380 on auto vdo_test ack=1,bio=4,bioRotationInterval=64,cpu=2,hash=1,logical=1,physical=1<br>
                            1213 ?        S      1:51 [kvdo0:dedupeQ]<br>
                            1214 ?        S      1:51 [kvdo0:journalQ]<br>
                            1215 ?        S      1:51 [kvdo0:packerQ]<br>
                            1216 ?        S      1:51 [kvdo0:logQ0]<br>
                            1217 ?        S      1:51 [kvdo0:physQ0]<br>
                            1218 ?        S      1:50 [kvdo0:hashQ0]<br>
                            1219 ?        S      1:52 [kvdo0:bioQ0]<br>
                            1220 ?        S      1:51 [kvdo0:bioQ1]<br>
                            1221 ?        S      1:51 [kvdo0:bioQ2]<br>
                            1222 ?        S      1:51 [kvdo0:bioQ3]<br>
                            1223 ?        S      1:48 [kvdo0:ackQ]<br>
                            1224 ?        S      1:49 [kvdo0:cpuQ0]<br>
                            1225 ?        S      1:49 [kvdo0:cpuQ1]<br>
                          <br>
The only activity I see is small writes, shown in
                          'atop', to the device underlying the VDO
                          volume.<br>
                          <br>
On the first server dmsetup takes 100% CPU (one
                          core); on the second server dmsetup seems to
                          be idle.<br>
                          <br>
                          What should I do in this situation?<br>
                          <br>
                          Regards,<br>
                          Łukasz<br>
                          <br>
                          <br>
                          <br>
_______________________________________________<br>
                          vdo-devel mailing list<br>
                          <a href="mailto:vdo-devel@redhat.com" target="_blank">vdo-devel@redhat.com</a><br>
                          <a href="https://www.redhat.com/mailman/listinfo/vdo-devel" rel="noreferrer" target="_blank">https://www.redhat.com/mailman/listinfo/vdo-devel</a><br>
                        </blockquote>
                      </div>
                    </blockquote>
                    <p><br>
                    </p>
                  </div>
                  _______________________________________________<br>
                  vdo-devel mailing list<br>
                  <a href="mailto:vdo-devel@redhat.com" target="_blank">vdo-devel@redhat.com</a><br>
                  <a href="https://www.redhat.com/mailman/listinfo/vdo-devel" rel="noreferrer" target="_blank">https://www.redhat.com/mailman/listinfo/vdo-devel</a><br>
                </blockquote>
              </div>
            </blockquote>
            <p><br>
            </p>
          </div>
        </blockquote>
      </div>
    </blockquote>
    <p><br>
    </p>
  </div>

</blockquote></div>
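For reference, a quick way to check which kmod-kvdo builds a mirror actually carries (sketch only; the yum line needs a configured yum system and is shown commented, and the helper function name is illustrative):

```shell
# Sketch: listing the kmod-kvdo versions available from the configured repos:
#   yum --showduplicates list available kmod-kvdo
# Illustrative helper: pick the highest version from `yum list`-style
# "name version repo" lines on stdin.
latest_kvdo() {
  awk '$1 ~ /^kmod-kvdo/ { print $2 }' | sort -V | tail -n 1
}
```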